
CHAPTER – 5

STRESS-STRENGTH RELIABILITY ESTIMATION

5.1 Introduction
Every physical component possesses an inherent strength, and appliances survive because of that strength. An appliance receives a certain level of stress and sustains it, but if a higher level of stress is applied, its strength can no longer sustain the load and the appliance breaks down. Suppose Y represents the 'stress' applied to a certain appliance and X represents the 'strength' available to sustain that stress; if X and Y are assumed to be random, then the stress-strength reliability is denoted by R = P(Y < X).
The term stress is defined as the failure-inducing variable: the stress (load) that tends to produce failure of a component, a device or a material. The load may be a mechanical load, environment, temperature, electric current, etc.
The term strength is defined as the ability of a component, a device or a material to accomplish its required function (mission) satisfactorily, without failure, when subjected to external loading and environment. Strength is therefore the failure-resisting variable.
Stress-strength model: The variation in 'stress' and 'strength' gives each variable a statistical distribution, and failure can occur where the natural scatter of the two distributions interferes: when the stress becomes higher than the strength, failure results. In other words, when the probability density functions of both stress and strength are known, the component reliability may be determined analytically. Thus R = P(Y < X) is the reliability parameter, a characteristic of the distributions of X and Y.

Let X and Y be two random variables such that X represents "strength" and Y represents "stress", and let X, Y have the joint pdf f(x,y). Then the reliability of the component is

R = P(Y < X) = ∫_{-∞}^{+∞} ∫_{-∞}^{x} f(x,y) dy dx,    (5.1.1)

where P(Y < X) represents the probability that the strength exceeds the stress and f(x,y) is the joint pdf of X and Y.
If the random variables are statistically independent, then f(x,y) = f(x) g(y), so that

R = ∫_{-∞}^{∞} ∫_{-∞}^{x} f(x) g(y) dy dx.    (5.1.2)

Writing G_Y(x) = ∫_{-∞}^{x} g(y) dy, this becomes

R = ∫_{-∞}^{∞} G_Y(x) f(x) dx,    (5.1.3)

where f(x) and g(y) are the pdfs of X and Y respectively.
The concept of stress-strength in engineering devices has been one
of the deciding factors of failure of the devices. It has been customary to
define safety factors for longer lives of systems in terms of the inherent
strength they have and the external stress being experienced by the systems.
If X_o is the fixed strength and Y_o is the fixed stress that a system is experiencing, then the ratio X_o / Y_o is called the safety factor and the difference X_o − Y_o is called the safety margin. Thus in the deterministic stress-strength situation the system survives only if the safety factor is greater than 1 or, equivalently, the safety margin is positive.
In the traditional approach to the design of a system, the safety factor or margin is made large enough to compensate for uncertainties in the values of the stress and strength. These uncertainties make it natural to view the stress and strength of a system as random variables. In engineering situations the calculations are often deterministic; probabilistic analysis, however, demands the use of random variables for the concepts of stress and strength in the evaluation of survival probabilities of such systems. This analysis is particularly useful in situations in which no fixed bound can be put on the stress. For example, with earthquakes, floods and other natural phenomena, deterministic design may result in failures of systems with unusually small strengths. Similarly, when economics rather than safety is the primary criterion, the comparison of survival performance can be better studied by knowing the increase in failure probability as the stress and strength approach one another. The foregoing lines indicate that it is more reasonable to treat the stress-strength variates as random variables than as purely deterministic quantities.
Let Y, X_1, X_2, ..., X_k be independent random variables, G(y) the continuous cdf of Y and F(x) the common continuous cdf of X_1, X_2, ..., X_k. The reliability in a multicomponent stress-strength model developed by Bhattacharyya and Johnson (1974) is given by

R_{s,k} = P[at least s of (X_1, X_2, ..., X_k) exceed Y]
        = Σ_{i=s}^{k} C(k,i) ∫_{-∞}^{∞} [1 − G(y)]^i [G(y)]^{k−i} dF(y),    (5.1.4)

where X_1, X_2, ..., X_k are iid with common cdf F(x) and the system is subjected to the common random stress Y.
The R.H.S. of equation (5.1.3) is the survival probability of a system of a single component having a random strength X and experiencing a random stress Y. Let a system consist of k components whose strengths are given by independently and identically distributed random variables with cumulative distribution function F(.), each experiencing a random stress governed by a random variable Y with cumulative distribution function G(.). The probability given by (5.1.4) is called the reliability in a multicomponent stress-strength model (Bhattacharyya and Johnson, 1974).
A salient feature of the probability mentioned in (5.1.1) is that the probability density functions of the strength and stress variates have non-empty overlapping ranges. In some cases the failure probability may depend strongly on the lower tail of the strength distribution.
When the stress or strength is not determined by either the sum or the product of many small components, it may be the extremes of these small components that decide the stress or the strength. Extreme ordered variates have proved to be very useful in the analysis of reliability problems of this nature.

Suppose that a number of random stresses (Y_1, Y_2, ..., Y_m), with cumulative distribution function G(.), act on a system whose strengths are given by n random variables (X_1, X_2, ..., X_n) with cumulative distribution function F(.). If V is the maximum of Y_1, Y_2, ..., Y_m and U is the minimum of X_1, X_2, ..., X_n, the survival of the system depends on whether or not U exceeds V. That is, the survival probability of such a system is explained with the help of the distributions of extreme order statistics in random samples of finite sizes.
Thus the above introductory lines indicate that, whether it is through extreme values or through independent variates like Y and X in a single-component or multicomponent system with stress and strength factors, the reliability in any situation ultimately turns out to be a parametric function of the parameters of the probability distributions of the concerned variates. If some or all of the parameters are unknown, evaluation of the survival probabilities of a stress-strength model leads to the estimation of a parametric function.
As mentioned in the introduction, several authors have taken up the
problem of estimating survival probability in stress-strength relationship
assuming various lifetime distributions for the stress-strength random
variates. Some recent works in this direction are Kantam et al. (2000),
Kantam and Srinivasa Rao (2007), Srinivasa Rao et al. (2010b) and the
references therein.
In this chapter, we present estimation of R using maximum likelihood
(ML) estimates and moment (MOM) estimates of the parameters in Section
5.2. Asymptotic distribution and confidence intervals for R are given in
Section 5.3. Simulation studies are carried out in Section 5.4 to investigate
bias and mean squared errors (MSE) of the MLE and MOM of R as well as
the lengths of the confidence interval for R. The conclusions and comments
are provided in Section 5.5.


5.2 Estimation of stress-strength reliability
5.2.1 Estimation of stress-strength reliability using ML and moment estimates
The purpose of this section is to study inference on R = P(Y < X), where X ~ IRD(σ1), Y ~ IRD(σ2) and X, Y are independently distributed. The estimation of reliability is very common in the statistical literature. The problem arises in the context of the reliability of a component of strength X subject to a stress Y; thus R = P(Y < X) is a measure of the reliability of the system, which fails if and only if the applied stress is greater than its strength. We obtain the maximum likelihood estimator (MLE) and the moment estimator (MOM) of R and derive the asymptotic distribution of the MLE, which is used to construct an asymptotic confidence interval. A bootstrap confidence interval for R is also proposed. Assuming that the two scale parameters are unknown, we obtain the MLE and MOM of R.
Suppose X and Y are independently distributed random variables with X ~ IRD(σ1) and Y ~ IRD(σ2). Therefore,

R = P(Y < X) = ∫_0^∞ ∫_0^x (2σ1²/x³) e^{−(σ1/x)²} (2σ2²/y³) e^{−(σ2/y)²} dy dx

             = ∫_0^∞ (2σ1²/x³) e^{−(σ1/x)²} e^{−(σ2/x)²} dx

             = σ1² / (σ1² + σ2²).    (5.2.1)

Note that R = λ² / (1 + λ²), where λ = σ1/σ2.    (5.2.2)

Therefore

∂R/∂λ = 2λ / (1 + λ²)² > 0,

so R is an increasing function of λ. To compute R, we need to estimate the parameters σ1 and σ2; these are estimated below by the ML and MOM methods.
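As a quick sanity check on the closed form (5.2.1), one can compare it against a Monte Carlo estimate obtained by inverse-cdf sampling from the IRD. This is only a sketch; the function names are illustrative and not part of the chapter's derivation:

```python
import math
import random

def rvs_ird(sigma, n, rng):
    # Inverse-cdf sampling: F(x) = exp(-(sigma/x)^2)  =>  x = sigma / sqrt(-ln U)
    return [sigma / math.sqrt(-math.log(rng.random())) for _ in range(n)]

def r_closed_form(s1, s2):
    # R = P(Y < X) = sigma1^2 / (sigma1^2 + sigma2^2), eq. (5.2.1)
    return s1**2 / (s1**2 + s2**2)

rng = random.Random(0)
s1, s2, n = 1.0, 2.0, 200_000
xs, ys = rvs_ird(s1, n, rng), rvs_ird(s2, n, rng)
mc = sum(y < x for x, y in zip(xs, ys)) / n   # Monte Carlo estimate of P(Y < X)
print(r_closed_form(s1, s2))                  # 0.2
print(mc)                                     # close to 0.2
```

With (σ1, σ2) = (1, 2) the closed form gives R = 0.2, and the Monte Carlo estimate agrees to within sampling error.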

5.2.2 Method of Maximum Likelihood Estimation (MLE)
Suppose X_1, X_2, ..., X_n is a random sample from the inverse Rayleigh distribution (IRD) with parameter σ1 and Y_1, Y_2, ..., Y_m is a random sample from the IRD with parameter σ2. The log-likelihood function of the observed samples is

log L(σ1, σ2) = (m + n) log 2 + 2n log σ1 + 2m log σ2 − σ1² Σ_{i=1}^{n} 1/x_i² − σ2² Σ_{j=1}^{m} 1/y_j² − Σ_{i=1}^{n} log x_i³ − Σ_{j=1}^{m} log y_j³.    (5.2.3)

The MLEs of σ1 and σ2, say σ̂1 and σ̂2 respectively, are obtained as

σ̂1 = √( n / Σ_{i=1}^{n} (1/x_i²) ),    (5.2.4)

σ̂2 = √( m / Σ_{j=1}^{m} (1/y_j²) ).    (5.2.5)

The MLE of the stress-strength reliability R becomes

R̂1 = σ̂1² / (σ̂1² + σ̂2²).    (5.2.6)

5.2.3 Method of Moment (MOM) Estimation
If x̄ and ȳ are the sample means of the strength and stress samples respectively, then, since the mean of the IRD is σ√π, the moment estimators of σ1, σ2 are σ̃1 = x̄/√π and σ̃2 = ȳ/√π respectively. Adopting the invariance property for moment estimation, the MOM estimate of R is

R̂2 = σ̃1² / (σ̃1² + σ̃2²).    (5.2.7)
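The estimators (5.2.4), (5.2.5) and the moment estimators can be written as small helpers; this is a sketch under the assumptions above, with illustrative names:

```python
import math

def mle_sigma(sample):
    # sigma_hat = sqrt(n / sum(1/x_i^2)), eqs. (5.2.4)/(5.2.5)
    n = len(sample)
    return math.sqrt(n / sum(1.0 / x**2 for x in sample))

def mom_sigma(sample):
    # sigma_tilde = sample mean / sqrt(pi), since E[X] = sigma * sqrt(pi) for the IRD
    return (sum(sample) / len(sample)) / math.sqrt(math.pi)

def r_hat(s1_hat, s2_hat):
    # plug-in estimate of R, eqs. (5.2.6)/(5.2.7)
    return s1_hat**2 / (s1_hat**2 + s2_hat**2)
```

Feeding the strength sample to one estimator and the stress sample to the other, `r_hat` returns R̂1 (with ML estimates) or R̂2 (with moment estimates).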

5.2.4 Asymptotic distribution and confidence intervals
In this section we first obtain the asymptotic distribution of θ̂ = (σ̂1, σ̂2) and then derive the asymptotic distribution of R̂1. Based on the asymptotic distribution of R̂1, we obtain the asymptotic confidence interval of R.
Let us denote the Fisher information matrix of θ = (σ1, σ2) as I(θ) = (I_ij(θ); i, j = 1, 2). Therefore,

I(θ) = | I_11  I_12 |
       | I_21  I_22 |,    (5.2.8)

where

I_11 = E[−∂² log L / ∂σ1²] = 4n/σ1²,
I_22 = E[−∂² log L / ∂σ2²] = 4m/σ2²,
I_12 = I_21 = E[−∂² log L / ∂σ1∂σ2] = 0.

As n → ∞, m → ∞,

( √n (σ̂1 − σ1), √m (σ̂2 − σ2) ) →^d N( 0, A^{−1}(σ1, σ2) ),

where A(σ1, σ2) = | a_11   0  |
                  |  0   a_22 |,

with a_11 = I_11/n = 4/σ1² and a_22 = I_22/m = 4/σ2².
To obtain the asymptotic confidence interval for R, we proceed as follows (Rao, 1973):

d_1(σ1, σ2) = ∂R/∂σ1 = 2σ1σ2² / (σ1² + σ2²)²,

d_2(σ1, σ2) = ∂R/∂σ2 = −2σ1²σ2 / (σ1² + σ2²)².

This gives

Var(R̂1) = var(σ̂1) d_1²(σ1, σ2) + var(σ̂2) d_2²(σ1, σ2)
         = (σ1²/4n) d_1²(σ1, σ2) + (σ2²/4m) d_2²(σ1, σ2)
         = (σ1²/4n) [2σ1σ2²/(σ1² + σ2²)²]² + (σ2²/4m) [−2σ1²σ2/(σ1² + σ2²)²]²
         = [R(1 − R)]² (1/n + 1/m).    (5.2.9)

Thus we have the following result: as n → ∞, m → ∞,

(R̂1 − R) / [ R(1 − R) √(1/n + 1/m) ] →^d N(0, 1).

Hence the asymptotic 100(1 − α)% confidence interval for R is (L1, U1), where

L1 = R̂1 − z_{1−α/2} R̂1(1 − R̂1) √(1/n + 1/m),    (5.2.10)

U1 = R̂1 + z_{1−α/2} R̂1(1 − R̂1) √(1/n + 1/m),    (5.2.11)

where z_{1−α/2} is the (1 − α/2)th percentile of the standard normal distribution and R̂1 is given by equation (5.2.6).
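Equations (5.2.10)-(5.2.11) translate directly into code; a sketch with z = 1.96 for a 95% interval:

```python
import math

def asymptotic_ci(r1_hat, n, m, z=1.96):
    # half-width: z * R1_hat * (1 - R1_hat) * sqrt(1/n + 1/m), eqs. (5.2.10)-(5.2.11)
    half = z * r1_hat * (1.0 - r1_hat) * math.sqrt(1.0 / n + 1.0 / m)
    return r1_hat - half, r1_hat + half

lo, hi = asymptotic_ci(0.5, 25, 25)   # interval centred at R1_hat = 0.5
```

The interval is symmetric about R̂1, and its width shrinks like 1/√n as the sample sizes grow.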

5.2.5 Exact confidence interval
Let X_1, X_2, ..., X_n and Y_1, Y_2, ..., Y_m be two independent random samples of sizes n and m respectively, drawn from inverse Rayleigh distributions with scale parameters σ1 and σ2. Then Σ_{i=1}^{n} 1/x_i² and Σ_{j=1}^{m} 1/y_j² are independent gamma random variables with parameters (n, σ1²) and (m, σ2²) respectively, and 2σ1² Σ_{i=1}^{n} 1/x_i² and 2σ2² Σ_{j=1}^{m} 1/y_j² are independent chi-square random variables with 2n and 2m degrees of freedom respectively.
Thus, R̂1 in equation (5.2.6) can be rewritten as R̂1 = [1 + σ̂2²/σ̂1²]^{−1}. Using (5.2.4) and (5.2.5) we obtain

R̂1 = [1 + (σ2²/σ1²) F]^{−1},    (5.2.12)

where

F = [ (σ1²/n) Σ_{i=1}^{n} 1/x_i² ] / [ (σ2²/m) Σ_{j=1}^{m} 1/y_j² ]

is an F-distributed random variable with (2n, 2m) degrees of freedom. From equations (5.2.1) and (5.2.12), F can be written as

F = [ (1 − R̂1)/R̂1 ] [ R/(1 − R) ].

Using F as a pivotal quantity, we obtain a 100(1 − α)% confidence interval for R as (L2, U2), where

L2 = [ 1 + (1/R̂1 − 1) F_{1−α/2}(2m, 2n) ]^{−1},    (5.2.13)

U2 = [ 1 + (1/R̂1 − 1) F_{α/2}(2m, 2n) ]^{−1},    (5.2.14)

where F_{α/2}(2m, 2n) and F_{1−α/2}(2m, 2n) are the lower and upper (α/2)th percentiles of the F distribution with (2m, 2n) degrees of freedom.
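Given the two F-quantiles (from tables or a routine such as scipy.stats.f.ppf), (5.2.13)-(5.2.14) reduce to a one-liner. A sketch in which the quantile values are supplied by the caller:

```python
def exact_ci(r1_hat, f_upper, f_lower):
    # f_upper = F_{1-a/2}(2m,2n), f_lower = F_{a/2}(2m,2n); eqs. (5.2.13)-(5.2.14)
    inv = 1.0 / r1_hat - 1.0
    return 1.0 / (1.0 + inv * f_upper), 1.0 / (1.0 + inv * f_lower)
```

Because 1/(1 + c·F) is decreasing in F, the upper F-quantile yields the lower confidence limit and vice versa.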


5.2.6 Bootstrap confidence intervals
In this subsection we propose confidence intervals based on the percentile bootstrap method (henceforth Boot-p), based on the idea of Efron (1982). We briefly illustrate how to estimate a confidence interval for R using this method.
Step 1: Draw random samples X_1, X_2, ..., X_n and Y_1, Y_2, ..., Y_m from the populations of X and Y respectively.
Step 2: Using the random samples X_1, ..., X_n and Y_1, ..., Y_m, generate bootstrap samples x_1*, x_2*, ..., x_n* and y_1*, y_2*, ..., y_m* respectively. Compute the bootstrap estimates of σ1 and σ2, say σ1* and σ2* respectively. Using σ1*, σ2* and equation (5.2.6), compute the bootstrap estimate of R, say R̂*.
Step 3: Repeat Step 2 NBOOT times, where NBOOT = 1000.
Step 4: Let G(x) = P(R̂* ≤ x) be the empirical distribution function of R̂*. Let L3 = R̂*_{Boot-p}(α/2) and U3 = R̂*_{Boot-p}(1 − α/2) be the 100(α/2)th and 100(1 − α/2)th empirical percentiles of the R̂* values respectively.
The small-sample comparisons are studied through simulation in Section 5.4.
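Steps 1-4 above can be sketched as follows, resampling with replacement and plugging the ML estimates into eq. (5.2.6); the helper name is illustrative:

```python
import random

def boot_p_ci(xs, ys, alpha=0.05, nboot=1000, seed=1):
    # Percentile-bootstrap (Boot-p) interval for R, following Steps 1-4
    rng = random.Random(seed)
    stats = []
    for _ in range(nboot):
        bx = [rng.choice(xs) for _ in xs]              # resample strengths with replacement
        by = [rng.choice(ys) for _ in ys]              # resample stresses with replacement
        s1sq = len(bx) / sum(1.0 / x**2 for x in bx)   # sigma1_hat^2, from eq. (5.2.4)
        s2sq = len(by) / sum(1.0 / y**2 for y in by)   # sigma2_hat^2, from eq. (5.2.5)
        stats.append(s1sq / (s1sq + s2sq))             # bootstrap R_hat*, eq. (5.2.6)
    stats.sort()
    lo = stats[int(alpha / 2 * nboot)]                 # 100(alpha/2)th percentile
    hi = stats[int((1 - alpha / 2) * nboot) - 1]       # 100(1-alpha/2)th percentile
    return lo, hi
```

The returned pair (lo, hi) plays the role of (L3, U3).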

5.3 Estimation of reliability in a multicomponent stress-strength model
Assume that F(.) and G(.) are inverse Rayleigh distributions with unknown scale parameters σ1, σ2 and that independent random samples X_1 < X_2 < ... < X_n and Y_1 < Y_2 < ... < Y_m are available from F(.) and G(.) respectively. Using (5.1.4), the reliability in the multicomponent stress-strength model for the inverse Rayleigh distribution is

R_{s,k} = Σ_{i=s}^{k} C(k,i) ∫_0^∞ [1 − e^{−(σ2/y)²}]^i [e^{−(σ2/y)²}]^{k−i} (2σ1²/y³) e^{−(σ1/y)²} dy

        = Σ_{i=s}^{k} C(k,i) ∫_0^1 [1 − t^{1/λ²}]^i [t^{1/λ²}]^{k−i} dt,   where t = e^{−(σ1/y)²} and λ² = σ1²/σ2²,

        = Σ_{i=s}^{k} C(k,i) λ² ∫_0^1 (1 − z)^i z^{k−i+λ²−1} dz,   putting z = t^{1/λ²},

        = Σ_{i=s}^{k} C(k,i) λ² β(k − i + λ², i + 1).

After simplification, since k and i are integers, we get

R_{s,k} = λ² Σ_{i=s}^{k} [k!/(k − i)!] Π_{j=0}^{i} (k + λ² − j)^{−1}.    (5.3.1)

The probability given in (5.3.1) is called the reliability in a multicomponent stress-strength model (Bhattacharyya and Johnson, 1974). Suppose a system with k identical components functions if s (1 ≤ s ≤ k) or more of the components operate simultaneously. In its operating environment, the system is subjected to a stress Y, a random variable with cdf G(.). The strengths of the components, that is, the minimum stresses required to cause failure, are independent and identically distributed random variables with cdf F(.). Then the system reliability, the probability that the system does not fail, is the function R_{s,k} given in (5.3.1).
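The finite sum (5.3.1) is easy to evaluate numerically. A sketch, which for (s, k) = (1, 3) and λ = 1 reproduces the true value 0.750 listed later in Table 5.4.5:

```python
import math

def r_sk(s, k, lam):
    # eq. (5.3.1): R_{s,k} = lam^2 * sum_{i=s}^{k} k!/(k-i)! * prod_{j=0}^{i} 1/(k + lam^2 - j)
    lam2 = lam ** 2
    total = 0.0
    for i in range(s, k + 1):
        prod = 1.0
        for j in range(i + 1):
            prod *= (k + lam2 - j)          # running product of (k + lam^2 - j), j = 0..i
        total += math.factorial(k) / math.factorial(k - i) / prod
    return lam2 * total
```

For example, `r_sk(1, 3, 1.0)` gives 0.75 and `r_sk(2, 4, 1.0)` gives 0.6, matching the λ = 1 column of the tables in Section 5.4.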
If σ1, σ2 are not known, it is necessary to estimate them in order to estimate R_{s,k}. In this section we estimate σ1, σ2 by the ML method and by the method of moments, thus giving rise to two estimates. The estimates are substituted in λ to get estimates of R_{s,k} using equation (5.3.1). The theory of the methods of estimation is explained below.
It is well known that maximum likelihood estimation (MLE) has the invariance property. When the method of estimation is changed from ML to any other traditional method, this invariance principle does not hold for estimating a parametric function. However, such an adoption of the invariance property for other optimal estimators of the parameters to estimate a parametric function has been attempted in different situations by different authors; Travadi and Ratani (1990), Kantam and Srinivasa Rao (2002) and the references therein are a few such instances. In this direction, we propose estimators for the reliability of the multicomponent stress-strength model by considering estimators of the parameters of the stress and strength distributions obtained by standard methods of estimation for the inverse Rayleigh distribution. The MLEs of σ1 and σ2 are denoted by σ̂1 and σ̂2.
The asymptotic variance of the MLE is given by

[ −E(∂² log L / ∂σ_i²) ]^{−1} = σ_i²/4n;  i = 1, 2, when m = n.    (5.3.2)

The MLE of the survival probability of the multicomponent stress-strength model is R̂_{s,k}, obtained by replacing λ with λ̂ = σ̂1/σ̂2 in (5.3.1). The second estimator we propose is R̃_{s,k}, obtained by replacing λ with λ̃ = σ̃1/σ̃2 in (5.3.1), where σ̃1, σ̃2 are the moment estimators. Thus, for a given pair of samples on the stress and strength variates, we get two estimates of R_{s,k} by the above two methods.
The asymptotic variance (AV) of an estimate of R_{s,k}, which is a function of two independent statistics t_1, t_2, is given by (Rao, 1973)

AV(R̂_{s,k}) = AV(t_1) (∂R_{s,k}/∂σ1)² + AV(t_2) (∂R_{s,k}/∂σ2)²,    (5.3.3)

where t_1, t_2 are to be taken in two different ways, namely the exact ML and the method of moments estimators. Unfortunately, since the variance of the moment estimators for the inverse Rayleigh distribution is not available in closed form, the asymptotic variance of R_{s,k} is obtained using the MLE only.
From the asymptotic optimum properties of MLEs (Kendall and Stuart, 1979) and of linear unbiased estimators (David, 1981), we know that the MLEs are asymptotically efficient, attaining the Cramer-Rao lower bound as their asymptotic variance, as given in (5.3.2). Thus, from equation (5.3.3), the asymptotic variance of R̂_{s,k} is obtained when (t_1, t_2) are replaced by the MLEs.
To avoid the difficulty of a general derivation for R̂_{s,k}, we obtain the derivatives of R_{s,k} for (s, k) = (1, 3) and (2, 4) separately; they are given by

∂R_{1,3}/∂σ1 = −6λ² / [σ1 (3 + λ²)²]   and   ∂R_{1,3}/∂σ2 = 6λ² / [σ2 (3 + λ²)²],

∂R_{2,4}/∂σ1 = −24λ²(2λ² + 7) / { σ1 [(3 + λ²)(4 + λ²)]² }   and   ∂R_{2,4}/∂σ2 = 24λ²(2λ² + 7) / { σ2 [(3 + λ²)(4 + λ²)]² },

where λ = σ1/σ2.


Thus

AV(R̂_{1,3}) = [ 9λ⁴ / (3 + λ²)⁴ ] (1/n + 1/m),

AV(R̂_{2,4}) = { 144λ⁴(2λ² + 7)² / [(3 + λ²)(4 + λ²)]⁴ } (1/n + 1/m).

As n → ∞, m → ∞,

(R̂_{s,k} − R_{s,k}) / √AV(R̂_{s,k}) →^d N(0, 1),

and the asymptotic 95% confidence interval for R_{s,k} is given by R̂_{s,k} ∓ 1.96 √AV(R̂_{s,k}).

The asymptotic 95% confidence interval for R_{1,3} is given by

R̂_{1,3} ∓ 1.96 [ 3λ̂² / (3 + λ̂²)² ] √(1/n + 1/m),

and the asymptotic 95% confidence interval for R_{2,4} is given by

R̂_{2,4} ∓ 1.96 { 12λ̂²(2λ̂² + 7) / [(3 + λ̂²)(4 + λ̂²)]² } √(1/n + 1/m).

The small-sample comparisons are studied through simulation in Section 5.4.
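For (s, k) = (1, 3), equation (5.3.1) reduces to R_{1,3} = 3/(3 + λ²), which matches the true values tabulated in Section 5.4, so the interval above can be computed as follows. This is a sketch; in practice λ is replaced by its ML estimate λ̂ = σ̂1/σ̂2:

```python
import math

def ci_r13(lam, n, m, z=1.96):
    # R_{1,3} = 3/(3 + lam^2); half-width z * 3*lam^2/(3 + lam^2)^2 * sqrt(1/n + 1/m)
    lam2 = lam ** 2
    r = 3.0 / (3.0 + lam2)
    half = z * 3.0 * lam2 / (3.0 + lam2) ** 2 * math.sqrt(1.0 / n + 1.0 / m)
    return r - half, r + half
```

For λ = 1 and n = m = 20 this gives an interval centred at 0.75.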

5.4 Simulation study and Data analysis
5.4.1 Simulation study
In this section we present some results based on Monte Carlo simulation to compare the performance of the estimates of R and R_{s,k} using the ML and MOM estimators for small sample sizes. We consider the sample sizes (n, m) = (5, 5), (10, 10), (15, 15), (20, 20) and (25, 25). In both estimations we take (σ1, σ2) = (1, 3), (1, 2.5), (1, 2), (1, 1.5), (1, 1), (1.5, 1), (2, 1), (2.5, 1) and (3, 1). All the results are based on 3000 replications. From each sample we compute the estimates of (σ1, σ2) using ML and the method of moments. Once (σ1, σ2) are estimated, we obtain the estimates of R by substitution in (5.2.6) and (5.2.7) respectively, and the estimates of R_{s,k} by substitution in (5.3.1) for (s, k) = (1, 3) and (2, 4) respectively. We report the average biases of R in Table 5.4.1 and of R_{s,k} in Table 5.4.5, and the mean squared errors (MSEs) of R in Table 5.4.2 and of R_{s,k} in Table 5.4.6, over the 3000 replications. We also compute the 95% confidence intervals for R based on the asymptotic distribution, the exact distribution and the Boot-p method. We report the average confidence lengths in Table 5.4.3 and the coverage probabilities in Table 5.4.4, based on 1000 replications. The average confidence length and coverage probability of the simulated 95% confidence intervals of R_{s,k} are given in Table 5.4.7.
Some points are quite clear from this simulation. Even for small sample sizes, the performance of the estimate of R using the MLEs is quite satisfactory in terms of bias and MSE as compared with the MOM estimates. When σ1 < σ2 the bias is positive, and when σ1 > σ2 the bias is negative; the absolute bias decreases as the sample size increases in both methods, which is a natural trend. It is observed that when m = n and m, n increase, the MSEs decrease as expected, reflecting the consistency of the MLE of R. As expected, the MSE is symmetric with respect to σ1 and σ2: for example, if (n, m) = (10, 10), the MSE for (σ1, σ2) = (3, 1) and the MSE for (σ1, σ2) = (1, 3) are the same in the case of the MLE, whereas the MOM estimates are only approximately symmetric. The length of the confidence interval is also symmetric with respect to (σ1, σ2) and decreases as the sample size increases. For (n, m) = (5, 5), we find that the average length of (L2, U2) < the average length of (L1, U1) < the average length of (L3, U3) for all combinations of (σ1, σ2). For other combinations of (n, m), the average length of (L2, U2) < the average length of (L3, U3) when σ1 ≤ σ2, whereas the average length of (L2, U2) < the average length of (L1, U1) < the average length of (L3, U3) when σ1 > σ2. In particular, for fixed σ2 and σ1 ≥ 2.0, the average lengths of the exact and Boot-p confidence intervals are almost the same. Comparing the average coverage probabilities, it is observed that for most sample sizes the coverage probabilities of the confidence intervals based on the asymptotic results come closest to the nominal level, although they remain slightly below 0.95. The performance of the bootstrap confidence intervals is quite good for small samples. For all combinations of (n, m) and (σ1, σ2), the coverage probability of the exact confidence intervals is moderate and away from the nominal level 0.95. The overall performance with respect to coverage is best for the asymptotic confidence intervals. The simulation results also indicate that the MLE performs better than the MOM in average bias and average MSE for different choices of the parameters. The exact confidence intervals are preferable with respect to average confidence length, whereas the asymptotic confidence intervals are advisable with respect to coverage probability for different choices of the parameters.
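The bias/MSE computation underlying Tables 5.4.1-5.4.2 can be reproduced in outline. A sketch with a reduced number of replications; the function name is illustrative:

```python
import math
import random

def simulate_bias_mse(s1, s2, n, m, reps=2000, seed=0):
    # Monte Carlo bias and MSE of the MLE R_hat1, mirroring the design of Section 5.4
    rng = random.Random(seed)
    true_r = s1**2 / (s1**2 + s2**2)                  # eq. (5.2.1)
    errs = []
    for _ in range(reps):
        xs = [s1 / math.sqrt(-math.log(rng.random())) for _ in range(n)]  # IRD draws
        ys = [s2 / math.sqrt(-math.log(rng.random())) for _ in range(m)]
        s1sq = n / sum(1.0 / x**2 for x in xs)        # sigma1_hat^2, eq. (5.2.4)
        s2sq = m / sum(1.0 / y**2 for y in ys)        # sigma2_hat^2, eq. (5.2.5)
        errs.append(s1sq / (s1sq + s2sq) - true_r)    # error of R_hat1
    bias = sum(errs) / reps
    mse = sum(e * e for e in errs) / reps
    return bias, mse
```

With (σ1, σ2) = (1, 1) and (n, m) = (10, 10), the MSE should land near the 0.0118 reported for the MLE in Table 5.4.2, up to Monte Carlo noise.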
The following points are observed from the simulation study of R_{s,k}. The true value of the reliability in the multicomponent stress-strength model is larger when σ1 ≤ σ2 and smaller otherwise; thus the true value of the reliability increases as λ decreases, and vice versa. Both bias and MSE decrease as the sample size increases for both methods of estimation. With respect to bias, the moment estimator is close to the exact MLE in most of the parameter and sample combinations. Also, the bias is negative when σ1 ≤ σ2 and positive otherwise, for both choices of (s, k). With respect to MSE, the MLE is preferred to the method of moments estimator. The length of the confidence interval also decreases as the sample size increases. The coverage probability is close to the nominal value in all cases for the MLE, and the overall performance of the confidence interval is quite good for the MLE. The simulation results also show that there is no considerable difference in the average bias and average MSE across different choices of the parameters, whereas there is a considerable difference between the MLE and the MOM. The same phenomenon is observed for the average lengths and coverage probabilities of the confidence intervals using the MLE.

5.4.2 Data analysis
We present a data analysis of two data sets reported by Lawless (1982) and Proschan (1963). The first data set is obtained from Lawless (1982) and represents the number of revolutions before failure for each of 23 ball bearings in life tests; the values are as follows:
Data Set I: 17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.80, 51.84, 51.96, 54.12, 55.56, 67.80, 68.44, 68.64, 68.88, 84.12, 93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40.
Gupta and Kundu (2001) fitted the gamma, Weibull and generalized exponential distributions to these data.
The second data set is obtained from Proschan (1963) and represents 15 times between successive failures of the air-conditioning (AC) equipment in a Boeing 720 airplane:
Data Set II: 12, 21, 26, 27, 29, 29, 48, 57, 59, 70, 74, 153, 326, 386, 502.
We fit the inverse Rayleigh distribution to the two data sets separately, using the Kolmogorov-Smirnov (K-S) test for each data set. For Data Sets I and II, the K-S distances are 0.12091 and 0.21378, with corresponding p-values 0.85028 and 0.43879 respectively; the chi-square values are 0.3052 and 2.6383 respectively. Therefore, the inverse Rayleigh model fits both data sets quite well. We plot the empirical and fitted survival functions in Figures 5.4.1 and 5.4.2, which show that the empirical and fitted models are very close for each data set.
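The fit to Data Set I can be checked in a few lines. This sketch computes the ML estimate of σ from eq. (5.2.4) and the one-sample K-S distance against the fitted IRD cdf; the exact value depends on the fitting details, so it need not coincide with the reported 0.12091:

```python
import math

DATA_I = [17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.80, 51.84, 51.96,
          54.12, 55.56, 67.80, 68.44, 68.64, 68.88, 84.12, 93.12, 98.64,
          105.12, 105.84, 127.92, 128.04, 173.40]

def ks_distance(data):
    # Fit the IRD by ML, then compute the one-sample K-S distance against the fitted cdf
    xs = sorted(data)
    n = len(xs)
    sigma = math.sqrt(n / sum(1.0 / x**2 for x in xs))   # MLE, eq. (5.2.4)
    d = 0.0
    for i, x in enumerate(xs):
        fx = math.exp(-(sigma / x) ** 2)                 # IRD cdf at x
        d = max(d, abs((i + 1) / n - fx), abs(fx - i / n))
    return d
```

The same routine applied to Data Set II gives the second reported distance, again up to the fitting details.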

Based on the estimates of σ1 and σ2, the MLE of R becomes 0.7043, with corresponding 95% confidence interval (0.56882, 0.83978); the 95% Boot-p confidence interval is (0.54543, 0.81669). The MLE of R_{s,k} for (s, k) = (1, 3) becomes 0.5573, with corresponding 95% confidence interval (0.39684, 0.71776), and the MLE of R_{s,k} for (s, k) = (2, 4) becomes 0.3492, with corresponding 95% confidence interval (0.16380, 0.53472).
[Figure: empirical and fitted survival curves plotted together.]
Figure 5.4.1: The empirical and fitted survival functions for Data Set I.


[Figure: empirical and fitted survival curves plotted together.]
Figure 5.4.2: The empirical and fitted survival functions for Data Set II.
5.5 Conclusions
We compare two methods of estimating R = P(Y < X) when Y and X both follow inverse Rayleigh distributions with different scale parameters. We provide ML and MOM procedures to estimate the unknown scale parameters and use them to estimate R. We also obtain the asymptotic distribution of the estimated R, which is used to compute asymptotic confidence intervals. The simulation results indicate that the MLE performs better than the MOM in average bias and average MSE for different choices of the parameters. The exact confidence intervals are preferable with respect to average confidence length, whereas the asymptotic confidence intervals are advisable with respect to coverage probability. We also proposed bootstrap confidence intervals, whose performance is quite satisfactory.
To estimate the multicomponent stress-strength reliability, we provided ML and MOM estimators of σ1 and σ2 when both the stress and strength variates follow the same family of distributions. We have also estimated the asymptotic confidence interval for the multicomponent stress-strength reliability. The simulation results indicate that, for estimating the multicomponent stress-strength reliability under the inverse Rayleigh distribution, the ML method of estimation is preferable to the method of moments. The length of the confidence interval decreases as the sample size increases, and the coverage probability is close to the nominal value in all cases for the MLE.


Table 5.4.1: Average bias of the simulated estimates of R.

(σ1,σ2):      (1,3)   (1,2.5)  (1,2)   (1,1.5)  (1,1)    (1.5,1)  (2,1)    (2.5,1)  (3,1)
(n, m)        R=0.10  R=0.14   R=0.20  R=0.31   R=0.50   R=0.69   R=0.80   R=0.86   R=0.90
(5,5)    MOM  0.0384  0.0414   0.0405  0.0286  -0.0066  -0.0403  -0.0500  -0.0492  -0.0449
         MLE  0.0149  0.0170   0.0175  0.0129  -0.0036  -0.0193  -0.0224  -0.0207  -0.0179
(10,10)  MOM  0.0246  0.0271   0.0269  0.0187  -0.0070  -0.0315  -0.0375  -0.0357  -0.0317
         MLE  0.0061  0.0069   0.0071  0.0046  -0.0038  -0.0111  -0.0119  -0.0106  -0.0088
(15,15)  MOM  0.0223  0.0252   0.0263  0.0212   0.0012  -0.0192  -0.0249  -0.0242  -0.0216
         MLE  0.0058  0.0069   0.0078  0.0070   0.0016  -0.0043  -0.0057  -0.0054  -0.0046
(20,20)  MOM  0.0198  0.0225   0.0235  0.0192   0.0018  -0.0159  -0.0208  -0.0202  -0.0180
         MLE  0.0035  0.0041   0.0044  0.0034  -0.0010  -0.0050  -0.0056  -0.0050  -0.0042
(25,25)  MOM  0.0127  0.0141   0.0139  0.0089  -0.0068  -0.0209  -0.0234  -0.0214  -0.0185
         MLE  0.0025  0.0029   0.0031  0.0022  -0.0013  -0.0044  -0.0047  -0.0041  -0.0034

In each cell the first row gives the average bias of R using the MOM and the second row using the MLE.


Table 5.4.2: Average MSE of the simulated estimates of R.

(σ1,σ2):      (1,3)   (1,2.5)  (1,2)   (1,1.5)  (1,1)   (1.5,1)  (2,1)   (2.5,1)  (3,1)
(n, m)        R=0.10  R=0.14   R=0.20  R=0.31   R=0.50  R=0.69   R=0.80  R=0.86   R=0.90
(5,5)    MOM  0.0192  0.0255   0.0338  0.0431   0.0490  0.0443   0.0354  0.0274   0.0211
         MLE  0.0053  0.0082   0.0127  0.0190   0.0236  0.0193   0.0130  0.0084   0.0055
(10,10)  MOM  0.0109  0.0154   0.0218  0.0301   0.0362  0.0320   0.0240  0.0172   0.0123
         MLE  0.0020  0.0033   0.0056  0.0090   0.0118  0.0091   0.0056  0.0033   0.0020
(15,15)  MOM  0.0084  0.0121   0.0176  0.0246   0.0294  0.0247   0.0178  0.0123   0.0086
         MLE  0.0014  0.0023   0.0039  0.0065   0.0085  0.0064   0.0038  0.0022   0.0013
(20,20)  MOM  0.0074  0.0107   0.0155  0.0216   0.0256  0.0211   0.0149  0.0101   0.0069
         MLE  0.0010  0.0016   0.0028  0.0047   0.0063  0.0047   0.0028  0.0016   0.0009
(25,25)  MOM  0.0053  0.0079   0.0118  0.0174   0.0218  0.0184   0.0129  0.0086   0.0058
         MLE  0.0007  0.0012   0.0022  0.0037   0.0050  0.0037   0.0022  0.0012   0.0007

In each cell the first row gives the average MSE of R using the MOM and the second row using the MLE.


Table 5.4.3: Average confidence length of the simulated 95% confidence intervals of R.

(σ1,σ2):           (1,3)   (1,2.5)  (1,2)   (1,1.5)  (1,1)   (1.5,1)  (2,1)   (2.5,1)  (3,1)
(n, m)   Interval  R=0.10  R=0.14   R=0.20  R=0.31   R=0.50  R=0.69   R=0.80  R=0.86   R=0.90
(5,5)       A      0.2395  0.3049   0.3912  0.4933   0.5612  0.4985   0.3977  0.3111   0.2450
            B      0.2228  0.2758   0.3422  0.4161   0.4617  0.4166   0.3430  0.2766   0.2235
            C      0.2705  0.3397   0.4272  0.5241   0.5819  0.5200   0.4214  0.3338   0.2650
(10,10)     A      0.1627  0.2114   0.2781  0.3607   0.4176  0.3650   0.2832  0.2160   0.1666
            B      0.1461  0.1867   0.2403  0.3037   0.3447  0.3040   0.2407  0.1870   0.1464
            C      0.1942  0.2467   0.3140  0.3881   0.4211  0.3517   0.2676  0.2021   0.1549
(15,15)     A      0.1335  0.1741   0.2301  0.2995   0.3457  0.2981   0.2284  0.1726   0.1322
            B      0.1158  0.1496   0.1953  0.2506   0.2871  0.2507   0.1954  0.1497   0.1159
            C      0.1532  0.1968   0.2543  0.3201   0.3525  0.2934   0.2211  0.1655   0.1261
(20,20)     A      0.1138  0.1490   0.1981  0.2598   0.3021  0.2606   0.1990  0.1499   0.1145
            B      0.0988  0.1283   0.1688  0.2184   0.2514  0.2182   0.1685  0.1281   0.0986
            C      0.1564  0.1966   0.2463  0.2968   0.3025  0.2428   0.1777  0.1308   0.0986
(25,25)     A      0.1012  0.1328   0.1770  0.2330   0.2716  0.2339   0.1781  0.1338   0.1020
            B      0.0875  0.1140   0.1506  0.1959   0.2263  0.1957   0.1504  0.1138   0.0873
            C      0.1299  0.1645   0.2081  0.2538   0.2659  0.2105   0.1539  0.1131   0.0851

A: asymptotic confidence interval; B: exact confidence interval; C: bootstrap confidence interval.

176

Table 5.4.4: Average coverage probability of the simulated 95%
confidence intervals of R.
(σ1, σ2)          (1,3)    (1,2.5)  (1,2)    (1,1.5)  (1,1)    (1.5,1)  (2,1)    (2.5,1)  (3,1)
(n, m)  Interval  R=0.10   R=0.14   R=0.20   R=0.31   R=0.50   R=0.69   R=0.80   R=0.86   R=0.90
(5,5)   A         0.9337   0.9357   0.9387   0.9423   0.9330   0.9400   0.9397   0.9367   0.9353
        B         0.8796   0.8784   0.8765   0.8744   0.8731   0.8765   0.8774   0.8797   0.8803
        C         0.9590   0.9590   0.9590   0.9590   0.9590   0.9590   0.9590   0.9590   0.9590
(10,10) A         0.9440   0.9447   0.9463   0.9470   0.9497   0.9533   0.9557   0.9547   0.9540
        B         0.8892   0.8882   0.8878   0.8847   0.8819   0.8825   0.8840   0.8864   0.8872
        C         0.9500   0.9490   0.9490   0.9490   0.9480   0.9460   0.9460   0.9460   0.9460
(15,15) A         0.9430   0.9407   0.9413   0.9390   0.9387   0.9403   0.9423   0.9440   0.9440
        B         0.8886   0.8885   0.8873   0.8863   0.8864   0.8876   0.8883   0.8888   0.8893
        C         0.9340   0.9340   0.9340   0.9340   0.9300   0.9300   0.9300   0.9300   0.9310
(20,20) A         0.9440   0.9447   0.9467   0.9463   0.9440   0.9427   0.9443   0.9463   0.9487
        B         0.8966   0.8962   0.8954   0.8957   0.8961   0.8941   0.8954   0.8949   0.8963
        C         0.8300   0.8310   0.8320   0.8350   0.8360   0.8380   0.8450   0.8430   0.8430
(25,25) A         0.9473   0.9463   0.9473   0.9467   0.9470   0.9473   0.9507   0.9517   0.9523
        B         0.8942   0.8938   0.8933   0.8931   0.8931   0.8930   0.8938   0.8948   0.8951
        C         0.8080   0.8060   0.8040   0.8000   0.8070   0.8090   0.8130   0.8140   0.8130

A: Asymptotic confidence interval, B: Exact confidence interval and C: Bootstrap confidence interval.
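Interval C above is a bootstrap interval. As an illustration of the general idea, here is a plain percentile bootstrap built on the nonparametric plug-in estimate of R (the proportion of (x_i, y_j) pairs with y_j < x_i); the scheme actually simulated in this chapter may differ in detail:

```python
import random

def bootstrap_ci_R(x, y, B=1000, alpha=0.05, seed=7):
    """Percentile-bootstrap confidence interval for R = P(Y < X),
    where x is the strength sample and y is the stress sample.
    A generic sketch, not necessarily the thesis's exact interval C."""
    rng = random.Random(seed)

    def r_hat(xs, ys):
        # Nonparametric plug-in: fraction of pairs with stress below strength.
        return sum(1 for a in xs for b in ys if b < a) / (len(xs) * len(ys))

    reps = []
    for _ in range(B):
        xs = [rng.choice(x) for _ in x]   # resample strengths with replacement
        ys = [rng.choice(y) for _ in y]   # resample stresses with replacement
        reps.append(r_hat(xs, ys))
    reps.sort()
    lo = reps[int((alpha / 2) * B)]
    hi = reps[min(B - 1, int((1 - alpha / 2) * B))]
    return r_hat(x, y), (lo, hi)
```

The interval endpoints are simply the empirical 2.5% and 97.5% quantiles of the bootstrap replicates, which is why its simulated coverage can drift below the nominal 95% for larger samples, as seen in row C of the table.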

Table 5.4.5: Average bias of the simulated estimates of Rs,k.

(s,k) = (1,3)
(σ1, σ2)  (1,3)      (1,2.5)    (1,2)      (1,1.5)    (1,1)      (1.5,1)    (2,1)      (2.5,1)    (3,1)
(n, m)    Rs,k=0.964 Rs,k=0.949 Rs,k=0.923 Rs,k=0.871 Rs,k=0.750 Rs,k=0.571 Rs,k=0.429 Rs,k=0.324 Rs,k=0.250
(5,5)     -0.0063   -0.0082   -0.0108   -0.0136   -0.0120    0.0011    0.0137    0.0209    0.0236
          -0.0239   -0.0296   -0.0367   -0.0437   -0.0402   -0.0127    0.0148    0.0328    0.0423
(10,10)   -0.0036   -0.0048   -0.0065   -0.0087   -0.0091   -0.0026    0.0044    0.0087    0.0105
          -0.0128   -0.0165   -0.0215   -0.0271   -0.0262   -0.0075    0.0119    0.0244    0.0306
(15,15)   -0.0022   -0.0029   -0.0039   -0.0053   -0.0055   -0.0012    0.0033    0.0060    0.0070
          -0.0106   -0.0133   -0.0166   -0.0198   -0.0166    0.0006    0.0166    0.0260    0.0301
(20,20)   -0.0012   -0.0016   -0.0022   -0.0028   -0.0022    0.0016    0.0051    0.0069    0.0074
          -0.0087   -0.0113   -0.0149   -0.0191   -0.0191   -0.0060    0.0079    0.0169    0.0212
(25,25)   -0.0015   -0.0020   -0.0028   -0.0040   -0.0048   -0.0028    0.0002    0.0016    0.0025
          -0.0073   -0.0097   -0.0128   -0.0166   -0.0164   -0.0044    0.0081    0.0158    0.0193

(s,k) = (2,4)
(σ1, σ2)  (1,3)      (1,2.5)    (1,2)      (1,1.5)    (1,1)      (1.5,1)    (2,1)      (2.5,1)    (3,1)
(n, m)    Rs,k=0.938 Rs,k=0.913 Rs,k=0.869 Rs,k=0.784 Rs,k=0.600 Rs,k=0.366 Rs,k=0.214 Rs,k=0.127 Rs,k=0.077
(5,5)     -0.0102   -0.0129   -0.0159   -0.0173   -0.0067    0.0186    0.0321    0.0335    0.0294
          -0.0362   -0.0432   -0.0504   -0.0525   -0.0289    0.0241    0.0560    0.0658    0.0636
(10,10)   -0.0059   -0.0077   -0.0099   -0.0118   -0.0076    0.0066    0.0150    0.0163    0.0143
          -0.0204   -0.0255   -0.0313   -0.0345   -0.0198    0.0182    0.0411    0.0471    0.0443
(15,15)   -0.0036   -0.0046   -0.0060   -0.0072   -0.0044    0.0048    0.0101    0.0107    0.0092
          -0.0163   -0.0197   -0.0232   -0.0237   -0.0087    0.0231    0.0401    0.0428    0.0387
(20,20)   -0.0020   -0.0026   -0.0032   -0.0035   -0.0006    0.0067    0.0102    0.0100    0.0083
          -0.0140   -0.0176   -0.0220   -0.0249   -0.0152    0.0122    0.0289    0.0330    0.0308
(25,25)   -0.0025   -0.0033   -0.0044   -0.0057   -0.0050    0.0003    0.0039    0.0048    0.0043
          -0.0119   -0.0152   -0.0191   -0.0217   -0.0127    0.0122    0.0267    0.0297    0.0270

In each cell the first row represents the average bias of Rs,k using the MLE and the second row represents the average bias of Rs,k using the MOM.
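Here Rs,k is the multicomponent reliability: a system of k identical components survives when at least s of the k strengths exceed the common stress. When stress and strength share the same distribution (the (1,1) column), Rs,k = (k−s+1)/(k+1) regardless of that distribution, which matches the tabulated 0.750 for (s,k) = (1,3) and 0.600 for (s,k) = (2,4). A Monte Carlo sketch of this definition (the Rayleigh sampling is an illustrative assumption; only the equal-scale check below is distribution-free):

```python
import math
import random

def simulate_Rsk(s, k, sigma_strength, sigma_stress, reps=20000, seed=3):
    """Monte Carlo estimate of the multicomponent reliability
    R_{s,k} = P(at least s of k i.i.d. strengths exceed a common stress),
    sketched with Rayleigh-distributed stress and strengths (an assumption)."""
    rng = random.Random(seed)

    def rayleigh(sigma):
        # Inverse-CDF sampling for a Rayleigh(sigma) variate.
        return sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random()))

    hits = 0
    for _ in range(reps):
        y = rayleigh(sigma_stress)                       # common stress
        survivors = sum(1 for _ in range(k)
                        if rayleigh(sigma_strength) > y)  # strengths above stress
        if survivors >= s:
            hits += 1
    return hits / reps
```

With equal scales the simulated values settle near (k−s+1)/(k+1), the distribution-free benchmark visible in the (1,1) column of the table.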

Table 5.4.6: Average MSE of the simulated estimates of Rs,k.

(s,k) = (1,3)
(σ1, σ2)  (1,3)      (1,2.5)    (1,2)      (1,1.5)    (1,1)      (1.5,1)    (2,1)      (2.5,1)    (3,1)
(n, m)    Rs,k=0.964 Rs,k=0.949 Rs,k=0.923 Rs,k=0.871 Rs,k=0.750 Rs,k=0.571 Rs,k=0.429 Rs,k=0.324 Rs,k=0.250
(5,5)      0.0010    0.0017    0.0033    0.0069    0.0148    0.0216    0.0219    0.0190    0.0153
           0.0072    0.0105    0.0158    0.0248    0.0389    0.0478    0.0482    0.0449    0.0400
(10,10)    0.0004    0.0007    0.0014    0.0033    0.0079    0.0122    0.0122    0.0102    0.0079
           0.0024    0.0040    0.0071    0.0131    0.0246    0.0334    0.0341    0.0310    0.0267
(15,15)    0.0002    0.0004    0.0008    0.0020    0.0050    0.0079    0.0078    0.0064    0.0048
           0.0028    0.0042    0.0068    0.0118    0.0208    0.0277    0.0280    0.0251    0.0212
(20,20)    0.0001    0.0003    0.0006    0.0014    0.0037    0.0060    0.0061    0.0050    0.0038
           0.0014    0.0024    0.0044    0.0086    0.0171    0.0240    0.0244    0.0219    0.0185
(25,25)    0.0001    0.0002    0.0004    0.0011    0.0028    0.0046    0.0046    0.0037    0.0028
           0.0011    0.0019    0.0036    0.0074    0.0152    0.0216    0.0217    0.0190    0.0156

(s,k) = (2,4)
(σ1, σ2)  (1,3)      (1,2.5)    (1,2)      (1,1.5)    (1,1)      (1.5,1)    (2,1)      (2.5,1)    (3,1)
(n, m)    Rs,k=0.938 Rs,k=0.913 Rs,k=0.869 Rs,k=0.784 Rs,k=0.600 Rs,k=0.366 Rs,k=0.214 Rs,k=0.127 Rs,k=0.077
(5,5)      0.0026    0.0045    0.0082    0.0153    0.0265    0.0286    0.0218    0.0142    0.0087
           0.0152    0.0212    0.0304    0.0436    0.0580    0.0594    0.0521    0.0422    0.0328
(10,10)    0.0010    0.0019    0.0037    0.0078    0.0151    0.0163    0.0114    0.0067    0.0037
           0.0060    0.0096    0.0158    0.0266    0.0409    0.0434    0.0360    0.0271    0.0194
(15,15)    0.0006    0.0011    0.0022    0.0048    0.0097    0.0106    0.0071    0.0039    0.0020
           0.0063    0.0092    0.0142    0.0225    0.0338    0.0356    0.0289    0.0210    0.0146
(20,20)    0.0004    0.0008    0.0016    0.0035    0.0073    0.0083    0.0056    0.0031    0.0016
           0.0037    0.0060    0.0103    0.0181    0.0294    0.0314    0.0252    0.0183    0.0128
(25,25)    0.0003    0.0006    0.0012    0.0027    0.0056    0.0062    0.0041    0.0022    0.0011
           0.0028    0.0049    0.0087    0.0160    0.0267    0.0280    0.0216    0.0149    0.0098

In each cell the first row represents the average MSE of Rs,k using the MLE and the second row represents the average MSE of Rs,k using the MOM.

Table 5.4.7: Average confidence length and coverage probability of the simulated 95% confidence intervals of Rs,k using the MLE.

(s,k) = (1,3)
(σ1, σ2)   (1,3)    (1,2.5)  (1,2)    (1,1.5)  (1,1)    (1.5,1)  (2,1)    (2.5,1)  (3,1)
(5,5)   A  0.09907  0.13539  0.19354  0.29058  0.44710  0.55454  0.55611  0.51087  0.45178
        B  0.9413   0.9413   0.9413   0.9423   0.9470   0.9483   0.9413   0.9377   0.9413
(10,10) A  0.06476  0.08956  0.13033  0.20109  0.32180  0.40908  0.41050  0.37338  0.32572
        B  0.9410   0.9417   0.9430   0.9440   0.9457   0.9440   0.9493   0.9493   0.9483
(15,15) A  0.05248  0.07278  0.10637  0.16525  0.26696  0.34022  0.33943  0.30609  0.26471
        B  0.9503   0.9503   0.9503   0.9500   0.9483   0.9480   0.9450   0.9447   0.9470
(20,20) A  0.04448  0.06181  0.09065  0.14166  0.23108  0.29669  0.29646  0.26709  0.23056
        B  0.9510   0.9510   0.9500   0.9500   0.9463   0.9457   0.9510   0.9500   0.9500
(25,25) A  0.03934  0.05472  0.08038  0.12597  0.20659  0.26653  0.26668  0.24011  0.20699
        B  0.9490   0.9497   0.9500   0.9493   0.9490   0.9507   0.9520   0.9537   0.9547

(s,k) = (2,4)
(σ1, σ2)   (1,3)    (1,2.5)  (1,2)    (1,1.5)  (1,1)    (1.5,1)  (2,1)    (2.5,1)  (3,1)
(5,5)   A  0.16630  0.22371  0.31136  0.44463  0.61153  0.63405  0.53166  0.41080  0.30794
        B  0.9413   0.9413   0.9413   0.9440   0.9463   0.9457   0.9400   0.9307   0.9287
(10,10) A  0.10985  0.14998  0.21336  0.31467  0.45127  0.47469  0.39039  0.29141  0.20993
        B  0.9417   0.9423   0.9437   0.9460   0.9433   0.9510   0.9510   0.9477   0.9443
(15,15) A  0.08923  0.12225  0.17486  0.26002  0.37643  0.39440  0.31893  0.23308  0.16441
        B  0.9503   0.9503   0.9487   0.9500   0.9483   0.9467   0.9487   0.9473   0.9447
(20,20) A  0.07576  0.10408  0.14953  0.22396  0.32790  0.34567  0.27889  0.20265  0.14204
        B  0.9510   0.9507   0.9500   0.9483   0.9473   0.9520   0.9517   0.9473   0.9450
(25,25) A  0.06706  0.09224  0.13279  0.19963  0.29421  0.31163  0.25118  0.18176  0.12672
        B  0.9497   0.9497   0.9503   0.9483   0.9520   0.9510   0.9563   0.9570   0.9547

A: Average confidence length, B: Coverage probability.
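Coverage probabilities like those tabulated above are themselves estimated by repeating the whole experiment: draw fresh samples, form the interval, and record how often it contains the true value. A sketch for an asymptotic interval for R, using the delta-method standard error that follows under the Rayleigh assumption, namely se(R̂) ≈ R̂(1−R̂)·√(1/n + 1/m) (the thesis's exact variance expression may differ):

```python
import random

def coverage_asymptotic_ci(sigma1, sigma2, n, m, reps=1000, seed=5):
    """Fraction of simulated 95% asymptotic intervals for R = P(Y < X) that
    cover the true value, under Rayleigh stress/strength.  Uses the fact that
    the Rayleigh MLE sum(x_i^2)/(2n) equals sigma^2 times the mean of n
    standard-exponential variates, so raw samples need not be stored."""
    rng = random.Random(seed)
    true_r = sigma1**2 / (sigma1**2 + sigma2**2)
    covered = 0
    for _ in range(reps):
        a = sigma1**2 * sum(rng.expovariate(1.0) for _ in range(n)) / n
        b = sigma2**2 * sum(rng.expovariate(1.0) for _ in range(m)) / m
        r_hat = a / (a + b)
        # Delta-method standard error: each scale estimate has relative
        # variance 1/n (resp. 1/m), giving Var(r_hat) ~ r^2 (1-r)^2 (1/n + 1/m).
        se = r_hat * (1 - r_hat) * (1 / n + 1 / m) ** 0.5
        if abs(r_hat - true_r) <= 1.96 * se:
            covered += 1
    return covered / reps
```

For moderate (n, m) the simulated coverage sits a little below the nominal 0.95, which is the same qualitative behaviour seen in the A rows of Tables 5.4.4 and 5.4.7.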