
# Exact Two Sample Bayesian Prediction Bounds for the Exponential Distribution with Random Sample Size

A.H. Abd Ellah
E-mail: ahmhamed@hotmail.com
Department of Mathematics, Faculty of Science, Sohag University 82524, Egypt

Key Words and Phrases: Two sample, order statistics, random sample size, predictive distribution, exponential model, probability coverage, average width.

Abstract:
This paper is concerned with the problem of Bayesian prediction limits for the s-th order statistic when the size of the future sample is a random variable. The one-parameter exponential distribution with constant failure rate is considered. Using a gamma prior, the paper derives the posterior distribution of the failure rate and hence the predictive distribution of future observations. The available data come from Type II censored sampling. The analysis rests mainly on the assumption that the future sample size m is a random variable having a Poisson or binomial distribution. A simulation study and numerical examples illustrate the procedure.

## 1 Introduction

In this paper, we consider Bayesian prediction of future observations from the exponential distribution under both (1) a fixed sample size (FSS) and (2) a random sample size (RSS). In the next section, we derive the exact form of the Bayesian predictive function in the FSS case; see, for example, Lawless (1971), Likes (1974), Lingappaiah (1973), Abd Ellah (1999, 2003), Abd Ellah and Soliman (2005), Lee (1997), Lindsey (1996), Huber-Carol, Balakrishnan, Nikulin and Mesbah (2002), and Johnson, Kotz and Balakrishnan (1994). In Section 3, we derive the Bayesian predictive function when the sample size has a Poisson distribution, and in Section 4 when it has a binomial distribution. In Section 5, we present the classical approach of previous work. In Section 6, to show the usefulness of the results presented in the paper, we discuss a simulation study and numerical examples. Finally, we present applications to illustrate the methods of inference developed here.

In many biological and quality control problems, the size of the future sample cannot always be taken as fixed. Lingappaiah (1986) assumed that the size of the future sample is a random variable; under this assumption, he derived the Bayesian predictive distribution of the range when the parent distribution is a one-parameter exponential distribution; see also Buhrman (1973) and Raghunandanan and Patil (1972). Upadhyay and Pandey (1989) determined similar prediction intervals for the one-parameter exponential distribution with a fixed future sample size. Problems of constructing prediction limits have also been considered by Hahn and Meeker (1991), Geisser (1993), and Soliman and Abd Ellah (1996, 2006), as well as by several other authors, including Aitchison and Dunsmore (1975), Nelson (1982), Lawless (1982), Cohen and Whitten (1988), Balakrishnan and Cohen (1991), Cohen (1991), Bain and Engelhardt (1991), Meeker and Escobar (1998), Gelman, Carlin, Stern and Rubin (1995), Proschan (1963), Gertsbakh (1989), Nagaraja (1995), Patel (1989), and Balakrishnan and Lin (2002, 2003).
Let x_{1:n} ≤ x_{2:n} ≤ · · · ≤ x_{r:n} be the observed ordered lifetimes of the first r components to fail in a sample of n (r ≤ n) components, and let y_{1:m} ≤ y_{2:m} ≤ · · · ≤ y_{m:m} be a second independent random sample of size m of future observations from the same distribution. We assume that the n and m components have lifetimes following the one-parameter exponential distribution

f(x|λ) = λ e^{−λx},   λ, x > 0.   (1.1)

Suppose that the survival times of some components may be modelled by equation (1.1), that n new components have been put on test at a given point in time, and that the survival times of the first r of these to fail are x_{1:n} ≤ x_{2:n} ≤ · · · ≤ x_{r:n}. The problem under consideration is to make inferences about the survival time of the future y_{s:m} in the future sample of m lifetimes. Assuming that the survival times are independent, the likelihood function for a Type II censored sample is

L(x|λ) = [n!/(n − r)!] λ^r e^{−λ S_r},   (1.2)

where S_r = Σ_{i=1}^{r} x_{i:n} + (n − r) x_{r:n}. We assume a two-parameter gamma prior for λ,

g(λ|a, b) = [1/(Γ(a) b^a)] λ^{a−1} e^{−λ/b},   a, b > 0,   (1.3)

where a and b are known.

Combining the prior distribution in (1.3) with the likelihood function in (1.2) via Bayes' theorem yields the posterior distribution of λ,

P(λ|x) = [A^R / Γ(R)] λ^{R−1} e^{−λA},   (1.4)

where

R = r + a,
A = S_r + 1/b.   (1.5)

The Bayes predictive density function of y given x is

h(y|x) = ∫_0^∞ f(y|λ) P(λ|x) dλ.   (1.6)

From (1.1) and (1.4), the pdf and cdf of y are

h(y|x) = R A^R / (A + y)^{R+1},   (1.7)

H(y|x) = 1 − [A/(A + y)]^R.   (1.8)

If we consider m future observations, the pdf of the s-th ordered future observation is

g(y_{s:m}|x) = s C(m, s) H^{s−1} (1 − H)^{m−s} h,   (1.9)

where C(m, s) = m!/(s!(m − s)!) and H and h are abbreviations for H ≡ H(y_{s:m}|x) and h ≡ h(y_{s:m}|x).
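As a concrete illustration of (1.5), (1.7) and (1.8), the following Python sketch (our illustrative code, not part of the paper; the function names are ours) computes R, A and the predictive pdf and cdf from a Type II censored sample and the gamma hyperparameters:

```python
import math

def posterior_params(x_sorted, n, r, a, b):
    """Return (R, A) of (1.5) from the first r of n ordered lifetimes."""
    s_r = sum(x_sorted[:r]) + (n - r) * x_sorted[r - 1]   # S_r of (1.2)
    return r + a, s_r + 1.0 / b

def h_pred(y, R, A):
    """Predictive pdf (1.7): R A^R / (A + y)^(R+1)."""
    return R * A**R / (A + y) ** (R + 1)

def H_pred(y, R, A):
    """Predictive cdf (1.8): 1 - (A / (A + y))^R."""
    return 1.0 - (A / (A + y)) ** R

# Data used in Example 1 below: first r = 6 of n = 11 order statistics,
# with hyperparameters a = 1.01, b = 2.01.
x = [.107, .107, .569, .929, 1.061, 1.240, 1.546, 1.626, 1.778, 1.951, 2.413]
R, A = posterior_params(x, n=11, r=6, a=1.01, b=2.01)
```

With these data the sketch gives R = 7.01 and A = S_r + 1/b ≈ 10.71.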

## 2 Prediction Limits using fixed sample size

In this section, we derive the predictive function based on a fixed sample size with the prior given by (1.3). Substituting (1.7) and (1.8) for H and h in (1.9), we get

g_1(y_{s:m}|x) = s C(m, s) [1 − (A/(A + y_{s:m}))^R]^{s−1} [A/(A + y_{s:m})]^{R(m−s)} R A^R / (A + y_{s:m})^{R+1},   (2.1)

while the Bayesian predictive cdf is given by

G_1(t) = Pr(y_{s:m} ≤ t|x) = ∫_0^t g_1(y_{s:m}|x) dy_{s:m}
       = 1 − s C(m, s) Σ_{i=0}^{s−1} [(−1)^i C(s−1, i) / (m − s + i + 1)] [A/(A + t)]^{R(m−s+i+1)},   (2.2)

where A and R are given in (1.5). The percentage points of the predictive cdf given in (2.2) can easily be obtained by solving the nonlinear equation

G_1(t) = 1 − τ.   (2.3)

Then the exact 100(1 − τ)% two-sided Bayesian interval for the future y_{s:m} can be constructed as

(t_{τ/2}, t_{1−τ/2}).   (2.4)

If we consider equal tail limits, the probability in (2.4) gives

Pr(y ≤ t_1) = τ/2,
Pr(y ≥ t_2) = τ/2,   (2.5)

where t_1 = t_{τ/2} and t_2 = t_{1−τ/2} are the lower and upper percentage points of G_1(t). More precisely, given a value of τ, we may determine values of t_1 and t_2.
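The nonlinear equation (2.3) has no closed form for general s, but G_1 is monotone increasing in t, so simple bisection suffices. A sketch (our code; the function names and the bracketing endpoint are our choices):

```python
import math

def G1(t, R, A, m, s):
    """Predictive cdf (2.2) of the s-th order statistic in a future sample of m."""
    u = (A / (A + t)) ** R
    total = 0.0
    for i in range(s):
        total += ((-1) ** i * math.comb(s - 1, i) / (m - s + i + 1)
                  * u ** (m - s + i + 1))
    return 1.0 - s * math.comb(m, s) * total

def percentage_point(p, R, A, m, s, hi=1e6, tol=1e-10):
    """Solve G1(t) = p for t by bisection on [0, hi]."""
    lo = 0.0
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        if G1(mid, R, A, m, s) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Feeding p = τ/2 and p = 1 − τ/2 to `percentage_point` gives the two endpoints of (2.4).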

## 2.1 Prediction Limits of the minimum y_{1:m} with fixed sample size

Putting s = 1 in (2.2) gives G_1(t) = 1 − [A/(A + t)]^{Rm}, and from (2.5) the LB and UB are

t_3 = A [(1/(1 − τ/2))^{1/M} − 1],
t_4 = A [(1/(τ/2))^{1/M} − 1],   (2.6)

where M = Rm, and t_3 and t_4 are the lower (LB) and upper (UB) bounds of y_{1:m}.
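Under the reading M = Rm (which is what substituting s = 1 into (2.2) yields), the bounds (2.6) can be checked directly against the s = 1 cdf. The code below (ours, with illustrative parameter values) does so:

```python
def min_bounds(tau, R, A, m):
    """Closed-form bounds (2.6) for y_{1:m}, with M = R*m."""
    M = R * m
    t3 = A * ((1.0 / (1.0 - tau / 2.0)) ** (1.0 / M) - 1.0)
    t4 = A * ((1.0 / (tau / 2.0)) ** (1.0 / M) - 1.0)
    return t3, t4

def G1_min(t, R, A, m):
    """Cdf (2.2) at s = 1: G1(t) = 1 - (A/(A+t))^(R*m)."""
    return 1.0 - (A / (A + t)) ** (R * m)
```

By construction G_1(t_3) = τ/2 and G_1(t_4) = 1 − τ/2, so (t_3, t_4) is the equal-tail 100(1 − τ)% interval.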

## 2.2 Prediction Limits of the maximum y_{m:m} with fixed sample size

Putting s = m in (2.2), we get

G_1(t) = 1 − m Σ_{i=0}^{m−1} [(−1)^i C(m−1, i) / (i + 1)] [A/(A + t)]^{R(i+1)}.   (2.7)

Using (2.5), we can determine the values of the LB and UB.
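The finite sum (2.7) can be sanity-checked against the elementary identity Pr(y_{m:m} ≤ t|x) = H(t)^m, which holds because the maximum falls below t exactly when all m future observations do. A small numerical check (our code, using the s = m form of (2.2)):

```python
import math

def G1_max(t, R, A, m):
    """Cdf (2.7) of the maximum y_{m:m} for a fixed future sample size m."""
    u = (A / (A + t)) ** R
    total = sum((-1) ** i * math.comb(m - 1, i) / (i + 1) * u ** (i + 1)
                for i in range(m))
    return 1.0 - m * total

def H_pow_m(t, R, A, m):
    """Direct form: H(t)^m with H from (1.8)."""
    return (1.0 - (A / (A + t)) ** R) ** m
```

The two forms agree because m C(m−1, i)/(i + 1) = C(m, i + 1), so the sum telescopes to 1 − (1 − u)^m with u = (A/(A + t))^R.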

Example 1
Using Mathematica (see Hastings (2001)), we generate 11 order statistics from the exponential distribution with mean 1 as follows: .107, .107, .569, .929, 1.061, 1.240, 1.546, 1.626, 1.778, 1.951, 2.413. The first 6 of the above order statistics are used together with the percentage points of the Bayesian predictive cdf calculated from (2.2); the predictive confidence interval (2.5) is then constructed for r = 6 and s = 1, 2, 3, 4, 5 with the (known) prior hyperparameters a = 1.01 and b = 2.01 in the table given below:

| r | s | 1% | 2.5% | 5% | 10% | 90% | 95% | 97.5% | 99% |
|---|---|------|------|------|------|------|------|------|------|
| 6 | 1 | 1.2468 | 1.2572 | 1.2749 | 1.3124 | 3.2993 | 4.1343 | 5.0849 | 6.5390 |
|   | 2 | 1.3291 | 1.3871 | 1.4591 | 1.5754 | 4.8862 | 6.1256 | 7.5171 | 9.6254 |
|   | 3 | 1.4837 | 1.5936 | 1.7187 | 1.9077 | 6.4412 | 8.0685 | 9.8854 | 12.6276 |
|   | 4 | 1.6798 | 1.8403 | 2.0168 | 2.2756 | 8.0145 | 10.0296 | 12.2706 | 15.6433 |
|   | 5 | 1.8765 | 1.9403 | 2.178 | 2.756 | 8.4215 | 10.9816 | 12.2706 | 15.6433 |

From the above table, we see that both the upper and lower 1 − τ bounds of the predictive intervals are greater than x_{s:11}. Also, the bounds increase as the significance level increases and as s increases.

## 3 Prediction Limits using random sample size as Poisson(θ)

In this section, we derive the predictive function based on a random sample size with a Poisson distribution. The predictive distribution of y_{s:m} = y when the sample size m is a random variable is

g(y|x) = [1/Pr(m ≥ s)] Σ_{m=s}^{∞} P(m) g(y|m).   (3.1)

Suppose that the sample size m is a Poisson random variable with probability mass function (pmf)

P(m) = e^{−θ} θ^m / m!,   m = 0, 1, 2, · · ·   (3.2)

Using (1.9), (3.2) and (3.1),

g_2(y|x) = [e^{−θ} R A^R / P_s] Σ_{m=s}^{∞} [θ^m / m!] s C(m, s) [1 − (A/(A + y))^R]^{s−1} [A/(A + y)]^{R(m−s)} / (A + y)^{R+1},   (3.3)

where

P_s = 1 − Σ_{m=0}^{s−1} e^{−θ} θ^m / m!.   (3.4)

Hence, to find the prediction limits for y_{s:m} = y, the s-th smallest of a set of m future observations with pdf given in (3.3), we choose t_7 and t_8 such that

Pr(t_7 ≤ y ≤ t_8) = 1 − τ.   (3.5)

If we consider equal tail limits, the probability in (3.5) gives

Pr(y ≤ t_7) = τ/2,
Pr(y ≥ t_8) = τ/2.   (3.6)

From (3.6),

∫_0^{t_7} g_2(y) dy = τ/2,
∫_{t_8}^{∞} g_2(y) dy = τ/2,   (3.7)

while the Bayesian predictive cdf is given by

G_2(t) = Pr(y_{s:m} ≤ t|x) = ∫_0^t g_2(y_{s:m}|x) dy_{s:m}
       = 1 − [e^{−θ}/P_s] Σ_{m=s}^{∞} [θ^m / m!] s C(m, s) Σ_{i=0}^{s−1} [(−1)^i C(s−1, i) / (m − s + i + 1)] [A/(A + t)]^{R(m−s+i+1)},   (3.8)

where A and R are given in (1.5). For the percentage points of the predictive cdf given in (3.8), t_7 = t_{τ/2} and t_8 = t_{1−τ/2} are the lower and upper percentage points of G_2(t).

## 3.1 Prediction Limits of y_{1:m} with m ∼ Poisson(θ)

Putting s = 1 in (3.8) gives

G_2(t) = 1 − [e^{−θ}/P_1] Σ_{m=1}^{∞} [θ^m / m!] [A/(A + t)]^{Rm},   (3.9)

and the LB and UB follow from

G_2(t_9) = τ/2,
G_2(t_{10}) = 1 − τ/2,   (3.10)

where t_9 and t_{10} are the LB and UB of G_2(t).

## 3.2 Prediction Limits of y_{m:m} with m ∼ Poisson(θ)

Putting s = m in (3.8), we get

G_2(t) = 1 − [e^{−θ}/P_1] Σ_{m=1}^{∞} [θ^m / m!] m Σ_{i=0}^{m−1} [(−1)^i C(m−1, i) / (i + 1)] [A/(A + t)]^{R(i+1)},   (3.11)

and

G_2(t_{11}) = τ/2,
G_2(t_{12}) = 1 − τ/2.   (3.12)

Using (3.12), we can determine the values of the LB and UB.

Example 2
In this example, we generate the sample size m from P(m; 2), obtaining m = 11. Then we generate 11 order statistics from the standard exponential pdf using Mathematica as follows: .107, .107, .569, .929, 1.061, 1.240, 1.546, 1.626, 1.778, 1.951, 2.413. The first 6 of the above order statistics are used together with the percentage points of the Bayesian predictive cdf calculated from (3.8); the predictive confidence interval (3.6) is then constructed for r = 6 and s = 1, 2, 3, 4, 5 with the (known) prior hyperparameters a = 1.01 and b = 2.01 in the table given below:

| r | s | 1% | 2.5% | 5% | 10% | 90% | 95% | 97.5% | 99% |
|---|---|------|------|------|------|------|------|------|------|
| 6 | 1 | 1.2468 | 1.2572 | 1.2749 | 1.3124 | 3.2993 | 4.1343 | 5.0849 | 6.5390 |
|   | 2 | 1.3291 | 1.3871 | 1.4591 | 1.5754 | 4.8862 | 6.1256 | 7.5171 | 9.6254 |
|   | 3 | 1.4837 | 1.5936 | 1.7187 | 1.9077 | 6.4412 | 8.0685 | 9.8854 | 12.6276 |
|   | 4 | 1.6798 | 1.8403 | 2.0168 | 2.2756 | 8.0145 | 10.0296 | 12.2706 | 15.6433 |
|   | 5 | 1.8971 | 1.9533 | 2.2187 | 2.5976 | 8.4655 | 10.5926 | 12.7641 | 15.9653 |

From the above table, we see that both the upper and lower 1 − τ bounds of the predictive intervals are greater than x_{s:11}. Also, the bounds increase as the significance level increases and as s increases.

## 4 Prediction Limits using m ∼ binomial(m; M, p)

In this section, we assume that the sample size is distributed according to the binomial pmf b(m; M, p),

P(m) = C(M, m) p^m q^{M−m},   q = 1 − p,   m = 0, 1, 2, · · ·, M.   (4.1)

Using (3.1), (1.9) and (4.1),

g_3(y|x) = [s q^M R A^R / B_s] Σ_{m=s}^{M} (p/q)^m C(M, m) C(m, s) [1 − (A/(A + y))^R]^{s−1} [A/(A + y)]^{R(m−s)} / (A + y)^{R+1},   (4.2)

where

B_s = Σ_{m=s}^{M} C(M, m) p^m q^{M−m}.   (4.3)
Hence, to find the prediction limits for y_{s:m} = y, the s-th smallest of a set of m future observations with pdf given in (4.2), we choose t_{13} and t_{14} such that

Pr(t_{13} ≤ y ≤ t_{14}) = 1 − τ.   (4.4)

If we consider equal tail limits, the probability in (4.4) gives

Pr(y ≤ t_{13}) = τ/2,
Pr(y ≥ t_{14}) = τ/2.   (4.5)

From (4.5),

∫_0^{t_{13}} g_3(y) dy = τ/2,
∫_{t_{14}}^{∞} g_3(y) dy = τ/2,   (4.6)

while the Bayesian predictive cdf is given by

G_3(t) = Pr(y_{s:m} ≤ t|x) = ∫_0^t g_3(y_{s:m}|x) dy_{s:m}
       = 1 − [q^M/B_s] Σ_{m=s}^{M} (p/q)^m s C(M, m) C(m, s) Σ_{i=0}^{s−1} [(−1)^i C(s−1, i) / (m − s + i + 1)] [A/(A + t)]^{R(m−s+i+1)},   (4.7)

where A and R are given in (1.5), and t_{13} and t_{14} are the lower and upper percentage points of G_3(t) obtained from (4.7).
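The binomial case (4.7) involves only a finite sum over m = s, …, M. The sketch below (our code) evaluates G_3 using the weights C(M, m) p^m q^{M−m} directly, which is algebraically the same as the q^M (p/q)^m factorization in (4.7):

```python
import math

def G3(t, R, A, M, p, s):
    """Predictive cdf (4.7) with m ~ binomial(M, p), truncated to m >= s."""
    q = 1.0 - p
    u = (A / (A + t)) ** R
    # B_s of (4.3): Pr(m >= s) under the binomial pmf.
    B_s = sum(math.comb(M, m) * p**m * q ** (M - m) for m in range(s, M + 1))
    acc = 0.0
    for m in range(s, M + 1):
        w = math.comb(M, m) * p**m * q ** (M - m)  # binomial weight
        inner = sum((-1) ** i * math.comb(s - 1, i) / (m - s + i + 1)
                    * u ** (m - s + i + 1) for i in range(s))
        acc += w * s * math.comb(m, s) * inner
    return 1.0 - acc / B_s
```

As with G_1 and G_2, the percentage points t_{13} and t_{14} follow by bisection on G_3.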

## 4.1 Prediction Limits of y_{1:m} with m ∼ binomial(m; M, p)

Putting s = 1 in (4.7) gives

G_3(t) = 1 − [q^M/B_1] Σ_{m=1}^{M} C(M, m) (p/q)^m [A/(A + t)]^{Rm},   (4.8)

and the LB and UB follow from

G_3(t_{15}) = τ/2,
G_3(t_{16}) = 1 − τ/2.   (4.9)

Using (4.8) and (4.9), we can determine the values of the LB and UB.

## 4.2 Prediction Limits of y_{m:m} with m ∼ binomial(m; M, p)

Putting s = m in (4.7), we get

G_3(t) = 1 − [q^M/B_1] Σ_{m=1}^{M} (p/q)^m C(M, m) m Σ_{i=0}^{m−1} [(−1)^i C(m−1, i) / (i + 1)] [A/(A + t)]^{R(i+1)},   (4.10)

and

G_3(t_{17}) = τ/2,
G_3(t_{18}) = 1 − τ/2.   (4.11)

Using (4.10) and (4.11), we can determine t_{17} and t_{18}, the LB and UB of G_3(t).
Example 3
Following the technique used in the above examples, we use the truncated binomial B(m; 15, 0.5) as a generator for the sample size, obtaining m = 12. Next, we generate 12 order statistics from the exponential distribution as: .024, .049, .085, .124, .154, .253, .279, .735, 1.107, 1.750, 2.258, 2.406. Making use of the first 6 of the above order statistics and the percentage points of the Bayesian predictive cdf given in (4.7), the predictive confidence intervals for the future order statistics with r = 6 and s = 1, 2, 3, 4, 5 are estimated according to (4.6) with the (known) prior hyperparameters a = 1.01 and b = 2.01 as given below:

| r | s | 1% | 2.5% | 5% | 10% | 90% | 95% | 97.5% | 99% |
|---|---|------|------|------|------|------|------|------|------|
| 6 | 1 | .262 | .275 | .297 | .345 | 2.982 | 4.148 | 5.504 | 7.617 |
|   | 2 | .379 | .462 | .564 | .729 | 5.605 | 7.502 | 9.656 | 12.952 |
|   | 3 | .625 | .793 | .985 | 1.274 | 8.423 | 11.054 | 14.012 | 18.495 |
|   | 4 | .963 | 1.223 | 1.508 | 1.927 | 11.413 | 14.794 | 18.571 | 24.272 |
|   | 5 | 1.0361 | 1.6431 | 1.8327 | 2.0975 | 11.8413 | 14.9734 | 18.8751 | 24.7615 |

The entries of the above table show that both the upper and lower (1 − τ) bounds of the predictive confidence intervals are greater than x_{s:12} and increase as the significance level increases and as s increases.

## 5 Classical Prediction Limits of the minimum y_{1:m}

We use the well-known fact that if y_{1:m} is the smallest of the future ordered observations, then

m y_{1:m} / x̄ ∼ F(2, 2r),   where x̄ = S_r / r,   (5.12)

and we choose l_1 and l_2 such that

Pr(l_1 < m y_{1:m} / x̄ < l_2) = 1 − τ.   (5.13)

For equal tail limits, l_1 and l_2 are the lower and upper τ/2 cdf points of F(2, 2r). These give the classical prediction limits for y_{1:m},

Pr(C_1 < y_{1:m} < C_2) = 1 − τ.   (5.14)

Thus, knowing l_1 and l_2, we can obtain C_1 and C_2, the lower and upper limits for y_{1:m}.
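Because F(2, 2r) has the closed-form cdf P(W ≤ w) = 1 − (1 + w/r)^{−r}, the quantiles l_1, l_2, and hence C_1 = l_1 x̄/m and C_2 = l_2 x̄/m, require no special functions. A sketch (our code, relying on that standard identity; the parameter values in the test are illustrative):

```python
def f_2_2r_ppf(p, r):
    """Quantile of F(2, 2r), from its closed-form cdf 1 - (1 + w/r)^(-r)."""
    return r * ((1.0 - p) ** (-1.0 / r) - 1.0)

def classical_limits(tau, S_r, r, m):
    """Classical equal-tail limits (C1, C2) for y_{1:m} via (5.13)-(5.14)."""
    xbar = S_r / r
    l1 = f_2_2r_ppf(tau / 2.0, r)
    l2 = f_2_2r_ppf(1.0 - tau / 2.0, r)
    return l1 * xbar / m, l2 * xbar / m
```

Inverting the cdf directly this way avoids any dependence on F-distribution tables or numerical libraries.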

## 6 Simulation study and Numerical Examples

The percentage points (factors) of the Classical and Bayesian distributions when the sample size m is distributed as P(m; θ) and b(m; M, p) were calculated. The routine DZREAL (IMSL) and Mathematica were used to solve the nonlinear equations when m is distributed as: (i) Poisson P(m; θ) with θ = 1, 2, 3; (ii) binomial B(m; M, p) with M = 15 and p = 0.25, 0.50, 0.75. Tables for different choices of r, s and τ are available from the authors upon request. In order to examine the efficiency of our technique when the sample size is a random variable, the probability coverages and average widths of the predictive confidence intervals were calculated based on 10000 simulations in Tables 1, 2, 3 and 4; see, for example, Rubinstein (1981), Van Dorp and Mazzuchi (2004), Witkovsky (2002), Sugita (2002), Stuart and Ord (1994). From Tables 1 and 2, we see that the probability coverages are quite close to the corresponding confidence levels, including 90%, 95%, 97.5% and 99%. From Tables 3 and 4, we see that in each case the average width decreases as both θ and p increase. Also, the average width increases as r increases and s decreases. For the sake of completeness and comparison, the probability coverages and average widths for the case of a fixed sample size are also reported in Tables 1, 2, 3 and 4. In conclusion, we can say that our new technique for the case of a random sample size works satisfactorily, as does the technique for a fixed sample size. By making use of the percentage points, we simulate numerical examples of random sample sizes from the exponential lifetime distribution when the sample size is distributed as P(m; 2) and B(m; 15, 0.5) as follows:
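Before the examples, the coverage computation just described can be sketched as follows (our illustrative reconstruction, not the authors' code: λ is drawn from the gamma prior, m from a Poisson truncated to m ≥ 1, and the closed-form s = 1 bounds (2.6) are checked against the simulated future minimum):

```python
import math
import random

def coverage_y1(tau, n, r, a, b, theta, reps=2000, seed=1):
    """Monte Carlo coverage of the (2.6) bounds for the future minimum y_{1:m},
    with lambda ~ gamma prior (1.3) and m ~ Poisson(theta) truncated to m >= 1."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's product-of-uniforms Poisson sampler.
        L = math.exp(-lam)
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    hits = 0
    for _ in range(reps):
        lam = rng.gammavariate(a, b)                   # prior (1.3), scale b
        x = sorted(rng.expovariate(lam) for _ in range(n))
        S_r = sum(x[:r]) + (n - r) * x[r - 1]          # S_r of (1.2)
        R, A = r + a, S_r + 1.0 / b                    # (1.5)
        m = 0
        while m < 1:
            m = poisson(theta)                         # truncate to m >= 1
        M = R * m
        t3 = A * ((1.0 / (1.0 - tau / 2.0)) ** (1.0 / M) - 1.0)
        t4 = A * ((1.0 / (tau / 2.0)) ** (1.0 / M) - 1.0)
        y1 = min(rng.expovariate(lam) for _ in range(m))
        hits += t3 <= y1 <= t4
    return hits / reps
```

Averaged over draws of λ from the prior, the Bayesian interval is calibrated, so the returned coverage should sit near 1 − τ.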

Example 4
In this example, we follow the same technique as in Example 1 but with m distributed as binomial. The truncated binomial generator B(m; 15, 0.25) from Mathematica gives the sample size 11. Again, we generate 11 order statistics from the exponential distribution as: .017, .173, .183, .223, .382, .391, .499, .581, .687, 2.165, 2.632. Once again, making use of the first 6 of the above order statistics together with the percentage points of the Bayesian predictive cdf obtained by solving the nonlinear equation (4.7), the predictive confidence intervals for the future order statistics with r = 6 and s = 1, 2, 3, 4, 5 are estimated according to (4.6) as given below:

| r | s | 1% | 2.5% | 5% | 10% | 90% | 95% | 97.5% | 99% |
|---|---|------|------|------|------|------|------|------|------|
| 6 | 1 | .3928 | .3960 | .4013 | .4127 | 1.0220 | 1.2798 | 1.5738 | 2.0242 |
|   | 2 | .4185 | .4366 | .4590 | .4952 | 1.5292 | 1.9171 | 2.3527 | 3.0126 |
|   | 3 | .4680 | .5028 | .5424 | .6022 | 2.0340 | 2.5475 | 3.1210 | 3.9861 |
|   | 4 | .5318 | .5833 | .6398 | .7226 | 2.5492 | 3.1889 | 3.9005 | 4.9715 |
|   | 5 | .7328 | .8431 | .9391 | .9126 | 3.142 | 3.5818 | 4.0152 | 5.0135 |

Again, from the above table, we see that both the upper and lower (1 − τ) bounds of the predictive confidence intervals are greater than x_{s:11}. Also, these bounds increase as the significance level increases and as s increases.
Example 5
The truncated Poisson distribution P(m; 2) is used again to generate the sample size, 12. Next, 12 order statistics are generated from the exponential distribution as: .024, .049, .085, .253, .279, .735, 1.107, 1.750, 1.926, 2.258, 2.406, 3.149. The first 6 of the above order statistics are used together with the percentage points of the Bayesian predictive cdf calculated from (3.8). Then the predictive confidence intervals given in (3.7) are calculated for r = 6 and s = 1, 2, 3, 4, 5 in the table given below:

| r | s | 1% | 2.5% | 5% | 10% | 90% | 95% | 97.5% | 99% |
|---|---|------|------|------|------|------|------|------|------|
| 6 | 1 | .767 | .816 | .900 | 1.076 | 10.427 | 14.356 | 18.830 | 25.674 |
|   | 2 | 1.155 | 1.428 | 1.766 | 2.314 | 17.895 | 23.728 | 30.277 | 40.199 |
|   | 3 | 1.882 | 2.399 | 2.988 | 3.878 | 25.213 | 32.872 | 41.422 | 54.327 |
|   | 4 | 2.805 | 3.561 | 4.391 | 5.609 | 32.617 | 42.101 | 52.647 | 68.520 |
|   | 5 | 2.976 | 3.8526 | 4.532 | 5.651 | 32.724 | 42.411 | 52.765 | 68.723 |

The entries of the above table show that both the upper and lower (1 − τ) bounds of the predictive confidence intervals are greater than x_{s:12} and increase as the significance level increases and as s increases.

## 7 Applications

In this section, we apply our predictive technique to some real data sets that follow the exponential distribution, as listed below:

1. Consider the data given in Lawless (1971), in which the test is terminated after 4 failures, at 30, 90, 120 and 170 hours. Using these four failure times and applying our predictive confidence intervals given in Sections 3, 4 and 5, we predict the 95% and 99% predictive confidence intervals for the fifth failure when the sample size m is distributed as P(m; 1) and B(m; 15, 0.25). The predictive confidence intervals for a fixed sample size are also calculated, as given below:

| 1 − τ | Sample size | Classical | Bayesian |
|------|------|------|------|
| 95% | Fixed, m = 10 | (173.10, 435.70) | (173.10, 435.00) |
| | P(m; 1) | (185.60, 1655.90) | (176.30, 749.50) |
| | B(m; 15, 0.25) | (180.90, 1402.90) | (175.30, 678.30) |
| 99% | Fixed | (170.60, 684.90) | (170.60, 684.80) |
| | P(m; 1) | (173.00, 3879.50) | (171.20, 1311.10) |
| | B(m; 15, 0.25) | (172.10, 2674.50) | (171.00, 1184.80) |

From the above table, we see that the lower bounds of the predictive intervals for the same significance level are quite close. Also, the Classical and Bayesian approaches give almost the same results for both fixed and random sample sizes. In addition, the interval width increases as the prediction level increases.

2. The following data, given by Proschan (1963), are times between successive failures of air-conditioning equipment in a Boeing 720 airplane, arranged in increasing order of magnitude. The first 9 of the data are 12, 21, 26, 29, 29, 48, 57, 59 and 70. Using the percentage points of the Classical and Bayesian approaches given in Sections 3, 4 and 5, we predict 95% and 99% confidence intervals for the 10th observation as given below:

| s | 1 − τ | Sample size | Classical | Bayesian |
|---|------|------|------|------|
| 10 | 95% | Fixed, m = 10 | (72.39, 282.77) | (71.87, 244.22) |
| | | P(m; 1) | (72.18, 276.81) | (71.77, 239.74) |
| | | B(m; 15, 0.25) | (72.39, 282.65) | (71.87, 244.22) |
| 10 | 99% | Fixed | (74.96, 350.48) | (73.85, 301.85) |
| | | P(m; 1) | (74.54, 343.46) | (73.66, 296.52) |
| | | B(m; 15, 0.25) | (74.96, 350.43) | (73.85, 301.85) |

From the above table, we see that the predictive confidence intervals are quite close for the same significance level, and that their widths increase as the prediction level increases. We also notice that the Classical intervals are wider than those based on the Bayesian approach.

3. In this application, the first eight observations in a random sample of 12 lifetimes (in hours) from an assumed exponential distribution are given by Lawless (1982) as: 31, 58, 157, 185, 300, 470, 497 and 673. Again, applying the percentage points of the Classical and Bayesian approaches given in Sections 3, 4 and 5, we predict 95% and 99% confidence intervals for the future 11th and 12th observations as given below:

| s | 1 − τ | Sample size | Classical | Bayesian |
|---|------|------|------|------|
| 11 | 95% | Fixed, m = 12 | (648.89, 1761.34) | (685.11, 1811.11) |
| | | P(m; 1) | (694.56, 2780.17) | (690.90, 2434.85) |
| | | B(m; 15, 0.25) | (696.05, 2826.26) | (691.64, 2470.11) |
| 11 | 99% | Fixed | (697.53, 2118.17) | (698.04, 2193.24) |
| | | P(m; 1) | (717.60, 3482.68) | (709.95, 3040.82) |
| | | B(m; 15, 0.25) | (720.95, 3538.06) | (711.43, 3083.35) |
| 12 | 95% | Fixed | (786.37, 3359.28) | (787.28, 3490.92) |
| | | P(m; 1) | (777.45, 3289.77) | (782.16, 3425.24) |
| | | B(m; 15, 0.25) | (786.37, 3357.42) | (787.28, 3490.92) |
| 12 | 99% | Fixed | (845.10, 4143.19) | (846.9, 4333.78) |
| | | P(m; 1) | (832.46, 4390.91) | (839.30, 4255.98) |
| | | B(m; 15, 0.25) | (845.09, 4142.45) | (846.90, 4333.78) |

As we see from the above table, the predictive confidence intervals are close for the same significance level, and their widths increase as the prediction level and s increase. We also see that the Classical intervals are wider than those based on the Bayesian approach.

4. Let us consider the following data, the failure times (in minutes) for a specific type of electrical insulation in an experiment in which the insulation was subjected to a continuously increasing voltage stress [Lawless (1982)]. The first 9 observations of the data are 12.3, 21.8, 24.4, 28.6, 43.2, 46.9, 70.7, 75.3 and 98.1. Once again, using the percentage points of the Classical and Bayesian approaches given in Sections 2 and 3, we predict 95% and 99% confidence intervals for the 10th observation as given below:

| 1 − τ | Sample size | Classical | Bayesian |
|------|------|------|------|
| 95% | Fixed, m = 10 | (101.06, 361.23) | (100.72, 342.25) |
| | P(m; 1) | (100.80, 353.85) | (100.58, 335.97) |
| | B(m; 15, 0.25) | (101.06, 361.07) | (100.72, 342.25) |
| 99% | Fixed | (104.23, 444.96) | (103.50, 423.03) |
| | P(m; 1) | (103.71, 436.28) | (103.23, 415.55) |
| | B(m; 15, 0.25) | (104.23, 444.90) | (103.50, 423.03) |

We see from the above table that the predictive confidence intervals are close for the same significance level and that their widths increase as the prediction level increases. Also, we notice that the Classical intervals are wider than those based on the Bayesian approach.

5. Let us consider the data obtained from an experiment on insulating fluid breakdown [Nelson (1982)]. Of the 12 specimens tested at 45 KV, 3 failed before 1 second, and the times to breakdown (in seconds) of the remaining 9 specimens were: 2, 2, 3, 9, 13, 47, 50, 55 and 71. Using the percentage points of the Classical and Bayesian approaches given in Sections 2 and 3, we predict 95% and 99% confidence intervals for the 10th observation as given below:

| 1 − τ | Sample size | Classical | Bayesian |
|------|------|------|------|
| 95% | Fixed, m = 10 | (72.84, 234.63) | (72.90, 247.70) |
| | P(m; 1) | (72.68, 230.05) | (72.80, 243.16) |
| | B(m; 15, 0.25) | (72.84, 234.53) | (72.90, 247.70) |
| 99% | Fixed | (74.81, 286.70) | (74.91, 306.17) |
| | P(m; 1) | (74.49, 281.31) | (74.71, 300.76) |
| | B(m; 15, 0.25) | (74.81, 286.67) | (74.91, 306.17) |

From the above table, we see that the predictive confidence intervals are close for the same significance level and that their widths increase as the prediction level increases. We also notice that the Classical intervals are wider than those based on the Bayesian approach.

By making use of the percentage points, we simulate further numerical examples of random sample sizes from the exponential lifetime distribution when the sample size is distributed as P(m; 2) and B(m; 15, 0.5) as follows:
Example 6
Using Mathematica, the truncated Poisson P(m; 2) gives the sample size 11. Next, we generate 11 order statistics from the exponential distribution with mean 1 as follows: .107, .107, .569, .929, 1.061, 1.240, 1.546, 1.626, 1.778, 1.951, 2.413. The first 6 of the above order statistics are used together with the percentage points of the Bayesian predictive cdf calculated from (3.8); the predictive confidence interval (3.7) is then constructed for r = 6 and s = 1, 2, 3, 4, 5 in the table given below:

| r | s | 1% | 2.5% | 5% | 10% | 90% | 95% | 97.5% | 99% |
|---|---|------|------|------|------|------|------|------|------|
| 6 | 1 | 1.2468 | 1.2572 | 1.2749 | 1.3124 | 3.2993 | 4.1343 | 5.0849 | 6.5390 |
|   | 2 | 1.3291 | 1.3871 | 1.4591 | 1.5754 | 4.8862 | 6.1256 | 7.5171 | 9.6254 |
|   | 3 | 1.4837 | 1.5936 | 1.7187 | 1.9077 | 6.4412 | 8.0685 | 9.8854 | 12.6276 |
|   | 4 | 1.6798 | 1.8403 | 2.0168 | 2.2756 | 8.0145 | 10.0296 | 12.2706 | 15.6433 |
|   | 5 | 1.7918 | 1.9840 | 2.2163 | 2.6751 | 8.2142 | 10.4293 | 12.4716 | 15.8432 |

From the above table, we see that both the upper and lower 1 − τ bounds of the predictive intervals are greater than x_{s:11}. Also, the bounds increase as the significance level increases and as s increases.
Example 7
In this example, we follow the same technique as in Example 1 but with m distributed as binomial. The truncated binomial generator B(m; 15, 0.25) from the IMSL routines gives the sample size 11. Again, we generate 11 order statistics from the exponential distribution as: .017, .173, .183, .223, .382, .391, .499, .581, .687, 2.165, 2.632. Once again, making use of the first 6 of the above order statistics together with the percentage points of the Bayesian predictive cdf obtained by solving the nonlinear equation (4.7), the predictive confidence intervals for the future order statistics with r = 6 and s = 1, 2, 3, 4, 5 are estimated according to (4.6) as given below:

| r | s | 1% | 2.5% | 5% | 10% | 90% | 95% | 97.5% | 99% |
|---|---|------|------|------|------|------|------|------|------|
| 6 | 1 | .3928 | .3960 | .4013 | .4127 | 1.0220 | 1.2798 | 1.5738 | 2.0242 |
|   | 2 | .4185 | .4366 | .4590 | .4952 | 1.5292 | 1.9171 | 2.3527 | 3.0126 |
|   | 3 | .4680 | .5028 | .5424 | .6022 | 2.0340 | 2.5475 | 3.1210 | 3.9861 |
|   | 4 | .5318 | .5833 | .6398 | .7226 | 2.5492 | 3.1889 | 3.9005 | 4.9715 |
|   | 5 | .5318 | .5833 | .6398 | .7226 | 2.5492 | 3.1889 | 3.9005 | 4.9715 |

Again, from the above table, we see that both the upper and lower (1 − τ) bounds of the predictive confidence intervals are greater than x_{s:11}. Also, these bounds increase as the significance level increases and as s increases.
Example 8
The truncated Poisson distribution P(m; 2) is used again to generate the sample size, 12. Next, 12 order statistics are generated from the exponential distribution as: .024, .049, .085, .253, .279, .735, 1.107, 1.750, 1.926, 2.258, 2.406, 3.149. The first 6 of the above order statistics are used together with the percentage points of the Classical approach calculated from (3.8). Then the predictive confidence intervals given in (3.7) are calculated for r = 6 and s = 1, 2, 3, 4, 5 in the table given below:

| r | s | 1% | 2.5% | 5% | 10% | 90% | 95% | 97.5% | 99% |
|---|---|------|------|------|------|------|------|------|------|
| 6 | 1 | .7167 | .8316 | .9321 | 1.1078 | 10.4527 | 14.3546 | 18.3830 | 25.674 |
|   | 2 | 1.1525 | 1.4238 | 1.7366 | 2.3214 | 17.8195 | 23.7228 | 30.2377 | 40.199 |
|   | 3 | 1.8182 | 2.3919 | 2.9818 | 3.8278 | 25.2313 | 32.8172 | 41.4322 | 54.327 |
|   | 4 | 2.805 | 3.561 | 4.391 | 5.609 | 32.617 | 42.101 | 52.647 | 68.520 |
|   | 5 | 3.8851 | 4.9525 | 5.7931 | 6.9324 | 33.8762 | 43.7326 | 53.8412 | 69.8761 |

The entries of the above table show that both the upper and lower (1 − τ) bounds of the predictive confidence intervals are greater than x_{s:12} and increase as the significance level increases and as s increases.
Example 9
Following the technique used in the above example, we use the truncated binomial B(m; 15, 0.5) as a generator for the sample size, obtaining m = 12. Next, we generate 12 order statistics from the exponential distribution as: .024, .049, .085, .124, .154, .253, .279, .735, 1.107, 1.750, 2.258, 2.406. Making use of the first 6 of the above order statistics and the percentage points of the Classical approach given in (4.7), the predictive confidence intervals for the future order statistics with r = 6 and s = 1, 2, 3, 4, 5 are estimated according to (4.6) as given below:

| r | s | 1% | 2.5% | 5% | 10% | 90% | 95% | 97.5% | 99% |
|---|---|------|------|------|------|------|------|------|------|
| 6 | 1 | .262 | .275 | .297 | .345 | 2.982 | 4.148 | 5.504 | 7.617 |
|   | 2 | .379 | .462 | .564 | .729 | 5.605 | 7.502 | 9.656 | 12.952 |
|   | 3 | .625 | .793 | .985 | 1.274 | 8.423 | 11.054 | 14.012 | 18.495 |
|   | 4 | .963 | 1.223 | 1.508 | 1.927 | 11.413 | 14.794 | 18.571 | 24.272 |
|   | 5 | .974 | 1.6243 | 1.784 | 1.921 | 11.717 | 14.947 | 18.715 | 24.272 |

The entries of the above table show that both the upper and lower (1 − τ) bounds of the predictive confidence intervals are greater than x_{s:12} and increase as the significance level increases and as s increases.
Comments
1. If r increases, the upper and lower prediction limits increase in all cases.

2. If m and θ increase, the upper and lower prediction limits decrease.

3. If m and θ increase, the upper and lower prediction limits using a random sample size are greater than those obtained using a fixed sample size.

4. Non-informative Bayesian prediction limits can be obtained from equations (2.2), (3.8), (4.7) and (5.14) by setting a = 0 and letting b → ∞, so that R = r and A = S_r.

REFERENCES

Abd Ellah, A.H. (1999). A prediction problem concerning samples from the exponential distribution with random sample size, Bull. Fac. Sci., Assiut Univ., 28(1-c), 1-8.

Abd Ellah, A.H. (2003). Bayesian one sample prediction bounds for the Lomax distribution, Indian J. Pure Appl. Math., 43(1), 101-109.

Abd Ellah, A.H. and Sultan, K.S. (2005). Exact Bayesian prediction of exponential lifetime based on fixed and random sample sizes, QTQM Journal, 2(2), 161-174.

Arnold, B.C., Balakrishnan, N. and Nagaraja, H.N. (1992). A First Course in Order Statistics, John Wiley & Sons, New York.

Aitchison, J. and Dunsmore, I.R. (1975). Statistical Prediction Analysis, Cambridge University Press, Cambridge.

Balakrishnan, N. and Cohen, A.C. (1991). Order Statistics and Inference: Estimation Methods, Academic Press, Boston.

Balakrishnan, N. and Lin, C.T. (2002). Exact linear inference and prediction for exponential distributions based on general progressively Type-II censored samples, Journal of Statistical Computation and Simulation, 72(8), 677-686.

Balakrishnan, N. and Lin, C.T. (2003). Exact prediction intervals for exponential distributions based on doubly Type-II censored samples, Journal of Applied Statistics, 30(7), 783-800.

Bain, L.J. and Engelhardt, M. (1992). Introduction to Probability and Mathematical Statistics, PWS-KENT Publishing Company, Boston, Massachusetts.

Buhrman, J.M. (1973). Order statistics when the sample size has a binomial distribution, Statistica Neerlandica, 27, 125-126.

Cohen, A.C. and Whitten, B.J. (1988). Parameter Estimation in Reliability and Life Span Models, New York: Dekker.

David, H.A. and Nagaraja, H.N. (2003). Order Statistics, John Wiley & Sons, New
York.

Dunsmore, I.R. (1974). The Bayesian predictive distribution in life testing model,
Technometrics, 16, 455-460.

Eberly, L.E. and Casella, G. (2003). Estimating Bayesian credible intervals, Journal of Statistical Planning and Inference.

Escobar, L.A. and Meeker, W.Q. (1999). Sattistical prediction based on censored
Life data, Technometrics, 41(2), pp. 113-124.


Hahn, J.G. and Meeker, W.Q. (1991). Statistical Intervals: A Guide for Practitioners, John Wiley & Sons, New York.

Hastings, K. (2001). Introduction to Probability with Mathematica, Chapman & Hall.

Huber-Carol, C., Balakrishnan, N., Nikulin, M.S. and Mesbah, M. (2002). Goodness-of-Fit Tests and Model Validity, Birkhäuser, Boston.

Johnson, N.L., Kotz, S. and Balakrishnan, N. (1994). Continuous Univariate Distributions, Vol. 1, John Wiley & Sons, New York.

Lawless, J.F. (1971). A prediction problem concerning samples from the exponential distribution with application in life testing, Technometrics, 13, 725-730.

Lawless, J.F. (1982). Statistical Models and Methods for Lifetime Data, John Wiley & Sons, New York.


Likes, J. (1974). Prediction of sth ordered observation for the two-parameter exponential distribution, Technometrics, 16, 241-244.

Lindsey, J.K. (1996). Parametric Statistical Inference, Clarendon Press, Oxford.

Lingappaiah, G.S. (1973). Prediction in exponential life testing, Canadian Journal of Statistics, 1, 113-117.

Lingappaiah, G.S. (1986). Bayes prediction in exponential life-testing when sample size is a random variable, IEEE Transactions on Reliability, R-35(1), 106-110.

Gelman, A., Carlin, J.B., Stern, H.S. and Rubin, D.B. (1995). Bayesian Data Analysis, Chapman & Hall, New York.

Gertsbakh, I.B. (1989). Statistical Reliability Theory, Marcel Dekker, New York.

Meeker, W.Q. and Escobar, L.A. (1998). Statistical Methods for Reliability Data, John Wiley & Sons, New York.

Nelson, W. (1982). Applied Life Data Analysis, John Wiley & Sons, New York.

Nagaraja, H.N. (1995). Prediction problems, in The Exponential Distribution (eds. N. Balakrishnan and A.P. Basu), Gordon and Breach, Amsterdam.

Patel, J.K. (1989). Prediction intervals - a review, Communications in Statistics - Theory and Methods, 18, 2393-2465.

Proschan, F. (1963). Theoretical explanation of observed decreasing failure rate, Technometrics, 5, 375-383.

Raghunandanan, K. and Patil, S.A. (1972). On order statistics for random sample size, Statistica Neerlandica, 26, 121-126.

Rubinstein, R.Y. (1981). Simulation and the Monte Carlo Method, John Wiley & Sons, New York.

Soliman, A.A. and Abd Ellah, A.H. (1996). Additional factors for calculating prediction intervals for sth ordered observation from one parameter exponential distribution, Bulletin of the Faculty of Science, Assiut University, 25(2-C), 1-18.

Soliman, A.A., Abd Ellah, A.H. and Sultan, K.S. (2006). Comparison of estimations using record statistics from Weibull model: Bayesian and non-Bayesian approaches, Computational Statistics & Data Analysis, 51, 2065-2077.

Stuart, A. and Ord, K.J. (1994). Kendall's Advanced Theory of Statistics, Edward Arnold, New York.

Sugita, H. (2002). Robust numerical integration and pairwise independent random variables, Journal of Computational and Applied Mathematics, 139, 1-8.

Upadhyay, S.K. and Pandey, M. (1989). Prediction limits for an exponential distribution: a Bayes predictive distribution approach, IEEE Transactions on Reliability, R-38(5), 599-602.

Van Dorp, J.R. and Mazzuchi, T.A. (2004). A general Bayes exponential inference model for accelerated life testing, Journal of Statistical Planning and Inference.

Witkovsky, V. (2002). Exact distribution of positive linear combinations of inverted chi-square random variables with odd degrees of freedom, Statistics & Probability Letters, 56, 45-50.

Table 1: Probability coverage

| r | s | 97.5% | 99% | λ | 97.5% | 99% | p | 97.5% | 99% |
|---|---|--------|--------|---|--------|--------|------|--------|--------|
| 5 | 6 | 0.9005 | 0.9501 | 1 | 0.9910 | 0.9974 | 0.25 | 0.9847 | 0.9951 |
| | | | | 2 | 0.9923 | 0.9979 | 0.50 | 0.9064 | 0.9660 |
| | | | | 3 | 0.9905 | 0.9983 | 0.75 | 0.7944 | 0.8784 |
| | 7 | 0.9017 | 0.9538 | 1 | 0.9910 | 0.9963 | 0.25 | 0.9873 | 0.9952 |
| | | | | 2 | 0.9908 | 0.9963 | 0.50 | 0.9470 | 0.9816 |
| | | | | 3 | 0.9927 | 0.9966 | 0.75 | 0.8375 | 0.9179 |
| | 8 | 0.9014 | 0.9496 | 1 | 0.9854 | 0.9944 | 0.25 | 0.9803 | 0.9925 |
| | | | | 2 | 0.9846 | 0.9944 | 0.50 | 0.9474 | 0.9802 |
| | | | | 3 | 0.9848 | 0.9951 | 0.75 | 0.8530 | 0.9265 |
| | 9 | 0.8987 | 0.9499 | 1 | 0.9617 | 0.9843 | 0.25 | 0.9541 | 0.9821 |
| | | | | 2 | 0.9615 | 0.9845 | 0.50 | 0.9212 | 0.9645 |
| | | | | 3 | 0.9594 | 0.9842 | 0.75 | 0.8272 | 0.9149 |
| | 10 | 0.8988 | 0.9506 | 1 | 0.8726 | 0.9381 | 0.25 | 0.8648 | 0.9330 |
| | | | | 2 | 0.8708 | 0.9334 | 0.50 | 0.8193 | 0.8980 |
| | | | | 3 | 0.8673 | 0.9319 | 0.75 | 0.7124 | 0.8441 |
| 7 | 8 | 0.8982 | 0.9517 | 1 | 0.9729 | 0.9925 | 0.25 | 0.9870 | 0.9972 |
| | | | | 2 | 0.9864 | 0.9960 | 0.50 | 0.9451 | 0.9861 |
| | | | | 3 | 0.9538 | 0.9884 | 0.75 | 0.8167 | 0.9010 |
| | 9 | 0.9010 | 0.9511 | 1 | 0.9605 | 0.9817 | 0.25 | 0.9742 | 0.9903 |
| | | | | 2 | 0.9715 | 0.9881 | 0.50 | 0.9379 | 0.9765 |
| | | | | 3 | 0.9356 | 0.9734 | 0.75 | 0.8296 | 0.9132 |
| | 10 | 0.8986 | 0.9498 | 1 | 0.8562 | 0.9277 | 0.25 | 0.8881 | 0.9416 |
| | | | | 2 | 0.8825 | 0.9406 | 0.50 | 0.8398 | 0.9123 |
| | | | | 3 | 0.8193 | 0.9074 | 0.75 | 0.7069 | 0.8305 |
| 9 | 10 | 0.8984 | 0.9511 | 1 | 0.6914 | 0.8272 | 0.25 | 0.7628 | 0.8689 |
| | | | | 2 | 0.8695 | 0.9314 | 0.50 | 0.7120 | 0.8433 |
| | | | | 3 | 0.6314 | 0.7909 | 0.75 | 0.5144 | 0.6772 |

Table 2: Probability coverage

| r | s | 90% | 95% | λ | 90% | 95% | p | 90% | 95% |
|---|---|--------|--------|---|--------|--------|------|--------|--------|
| 5 | 6 | 0.8998 | 0.9500 | 1 | 0.8992 | 0.9459 | 0.25 | 0.8751 | 0.9359 |
| | | | | 2 | 0.8860 | 0.9408 | 0.50 | 0.8134 | 0.8863 |
| | | | | 3 | 0.8716 | 0.9356 | 0.75 | 0.7014 | 0.7894 |
| | 7 | 0.9026 | 0.9527 | 1 | 0.9392 | 0.9665 | 0.25 | 0.9270 | 0.9641 |
| | | | | 2 | 0.9274 | 0.9690 | 0.50 | 0.8848 | 0.9392 |
| | | | | 3 | 0.9206 | 0.9629 | 0.75 | 0.7686 | 0.8505 |
| | 8 | 0.9007 | 0.9516 | 1 | 0.9395 | 0.9719 | 0.25 | 0.9338 | 0.9683 |
| | | | | 2 | 0.9372 | 0.9717 | 0.50 | 0.9077 | 0.9516 |
| | | | | 3 | 0.9284 | 0.9703 | 0.75 | 0.8052 | 0.8851 |
| | 9 | 0.8992 | 0.9507 | 1 | 0.9300 | 0.9694 | 0.25 | 0.9245 | 0.9661 |
| | | | | 2 | 0.9269 | 0.9655 | 0.50 | 0.9013 | 0.9533 |
| | | | | 3 | 0.9168 | 0.9609 | 0.75 | 0.8014 | 0.8902 |
| | 10 | 0.8989 | 0.9493 | 1 | 0.8651 | 0.9283 | 0.25 | 0.8614 | 0.9236 |
| | | | | 2 | 0.8603 | 0.9261 | 0.50 | 0.8376 | 0.9127 |
| | | | | 3 | 0.8471 | 0.9221 | 0.75 | 0.7433 | 0.8482 |
| 7 | 8 | 0.8995 | 0.9521 | 1 | 0.8886 | 0.9418 | 0.25 | 0.8773 | 0.9343 |
| | | | | 2 | 0.8764 | 0.9370 | 0.50 | 0.8270 | 0.8948 |
| | | | | 3 | 0.8632 | 0.9295 | 0.75 | 0.7106 | 0.7978 |
| | 9 | 0.9023 | 0.9511 | 1 | 0.9022 | 0.9554 | 0.25 | 0.8936 | 0.9476 |
| | | | | 2 | 0.8986 | 0.9508 | 0.50 | 0.8575 | 0.9320 |
| | | | | 3 | 0.8915 | 0.9446 | 0.75 | 0.7564 | 0.8451 |
| | 10 | 0.8956 | 0.9521 | 1 | 0.8537 | 0.9137 | 0.25 | 0.8452 | 0.9088 |
| | | | | 2 | 0.8343 | 0.9083 | 0.50 | 0.8094 | 0.8903 |
| | | | | 3 | 0.8356 | 0.9013 | 0.75 | 0.7084 | 0.8131 |
| 9 | 10 | 0.8979 | 0.9518 | 1 | 0.7878 | 0.8745 | 0.25 | 0.7804 | 0.8690 |
| | | | | 2 | 0.7839 | 0.8657 | 0.50 | 0.7557 | 0.8435 |
| | | | | 3 | 0.7711 | 0.8585 | 0.75 | 0.6511 | 0.7564 |

Table 3: Average width

| r | s | 97.5% | 99% | λ | 97.5% | 99% | p | 97.5% | 99% |
|---|---|--------|--------|---|--------|--------|------|--------|--------|
| 5 | 6 | 0.5828 | 0.8170 | 1 | 2.7377 | 3.8718 | 0.25 | 2.3604 | 3.4057 |
| | | | | 2 | 2.5511 | 3.6452 | 0.50 | 1.3380 | 2.0357 |
| | | | | 3 | 2.3420 | 3.3834 | 0.75 | 0.5158 | 0.7462 |
| | 7 | 1.1692 | 1.5622 | 1 | 3.7619 | 5.0900 | 0.25 | 3.4422 | 4.7067 |
| | | | | 2 | 3.5784 | 4.8710 | 0.50 | 2.4092 | 3.4050 |
| | | | | 3 | 3.3643 | 4.6115 | 0.75 | 1.0466 | 1.4600 |
| | 8 | 1.9284 | 2.5281 | 1 | 4.4212 | 5.8767 | 0.25 | 4.1794 | 5.5803 |
| | | | | 2 | 4.2524 | 5.6732 | 0.50 | 3.2972 | 4.4971 |
| | | | | 3 | 4.0508 | 5.4299 | 0.75 | 1.6979 | 2.3422 |
| | 9 | 3.0726 | 3.9942 | 1 | 4.9140 | 6.4611 | 0.25 | 4.7442 | 6.2576 |
| | | | | 2 | 4.7595 | 6.2798 | 0.50 | 4.0357 | 5.3920 |
| | | | | 3 | 4.5722 | 6.0571 | 0.75 | 2.4821 | 3.3838 |
| | 10 | 5.4438 | 7.0957 | 1 | 5.3064 | 6.9309 | 0.25 | 5.2022 | 6.8025 |
| | | | | 2 | 5.1681 | 6.7673 | 0.50 | 4.6535 | 6.1365 |
| | | | | 3 | 4.9965 | 6.5599 | 0.75 | 3.3403 | 4.4824 |
| 7 | 8 | 0.9037 | 1.2393 | 1 | 4.5901 | 3.5746 | 0.25 | 2.4228 | 3.3764 |
| | | | | 2 | 2.4525 | 3.4141 | 0.50 | 1.7994 | 2.6102 |
| | | | | 3 | 2.3071 | 3.4214 | 0.75 | 0.7657 | 1.1273 |
| | 9 | 1.9934 | 2.6005 | 1 | 3.4978 | 4.6129 | 0.25 | 3.3757 | 4.2722 |
| | | | | 2 | 3.3575 | 4.4505 | 0.50 | 2.8209 | 3.8151 |
| | | | | 3 | 3.2104 | 4.2820 | 0.75 | 1.5621 | 2.1871 |
| | 10 | 4.1852 | 5.3952 | 1 | 4.0714 | 5.2655 | 0.25 | 3.9999 | 5.1861 |
| | | | | 2 | 3.9379 | 5.1114 | 0.50 | 3.5624 | 4.6756 |
| | | | | 3 | 3.7991 | 4.9527 | 0.75 | 2.4205 | 3.2712 |
| 9 | 10 | 2.6072 | 3.5311 | 1 | 2.5135 | 3.4249 | 0.25 | 2.4510 | 3.3533 |
| | | | | 2 | 2.4057 | 3.3006 | 0.50 | 2.1176 | 2.9614 |
| | | | | 3 | 2.3072 | 3.1894 | 0.75 | 1.2815 | 1.8860 |

Table 4: Average width

| r | s | 90% | 95% | λ | 90% | 95% | p | 90% | 95% |
|---|---|--------|--------|---|---------|---------|------|---------|---------|
| 5 | 6 | 0.5867 | 0.8242 | 1 | 2.7591 | 3.9113 | 0.25 | 2.2901 | 3.2705 |
| | | | | 2 | 2.4874 | 3.5410 | 0.50 | 1.2345 | 1.7789 |
| | | | | 3 | 2.2474 | 3.2132 | 0.75 | 0.5045 | 0.7114 |
| | 7 | 1.1785 | 1.5770 | 1 | 4.9471 | 6.7056 | 0.25 | 4.2609 | 5.7990 |
| | | | | 2 | 4.5140 | 6.1361 | 0.50 | 2.5129 | 3.4517 |
| | | | | 3 | 4.1343 | 5.6366 | 0.75 | 1.0049 | 1.3555 |
| | 8 | 1.9443 | 2.5526 | 1 | 7.1201 | 9.4708 | 0.25 | 6.2753 | 8.3688 |
| | | | | 2 | 6.5439 | 8.7236 | 0.50 | 3.9562 | 5.3154 |
| | | | | 3 | 6.0430 | 8.0745 | 0.75 | 1.6233 | 2.1601 |
| | 9 | 3.0986 | 4.0343 | 1 | 9.3354 | 12.2843 | 0.25 | 8.3609 | 11.0195 |
| | | | | 2 | 8.6235 | 11.3675 | 0.50 | 5.5364 | 7.3379 |
| | | | | 3 | 8.0100 | 10.5785 | 0.75 | 2.4161 | 3.1977 |
| | 10 | 5.4903 | 7.1635 | 1 | 11.6039 | 15.1607 | 0.25 | 10.5173 | 13.7527 |
| | | | | 2 | 10.7598 | 14.0779 | 0.50 | 7.2204 | 9.4795 |
| | | | | 3 | 10.0382 | 13.1552 | 0.75 | 3.4033 | 4.4831 |
| 7 | 8 | 0.9161 | 1.2604 | 1 | 2.6210 | 3.6360 | 0.25 | 2.4146 | 3.3659 |
| | | | | 2 | 2.4618 | 3.4287 | 0.50 | 1.6791 | 2.3740 |
| | | | | 3 | 2.2404 | 3.1339 | 0.75 | 0.7295 | 1.0208 |
| | 9 | 2.0253 | 2.6510 | 1 | 4.5105 | 5.9662 | 0.25 | 4.2491 | 5.6344 |
| | | | | 2 | 4.2751 | 5.6708 | 0.50 | 3.1706 | 4.2441 |
| | | | | 3 | 3.9307 | 5.2297 | 0.75 | 1.5000 | 2.0091 |
| | 10 | 4.2552 | 5.5036 | 1 | 6.3174 | 8.1877 | 0.25 | 6.0354 | 7.8318 |
| | | | | 2 | 6.0199 | 7.8189 | 0.50 | 4.7088 | 6.1486 |
| | | | | 3 | 5.5690 | 7.2503 | 0.75 | 2.4524 | 3.2236 |
| 9 | 10 | 2.6963 | 3.6835 | 1 | 2.5820 | 3.5396 | 0.25 | 2.4781 | 3.4046 |
| | | | | 2 | 2.4406 | 3.3584 | 0.50 | 2.0765 | 2.8832 |
| | | | | 3 | 2.2593 | 3.1211 | 0.75 | 1.2042 | 1.6923 |
