
Hacettepe Journal of Mathematics and Statistics

Volume 39 (3) (2010), 449 – 460
APPLICATION OF MML METHODOLOGY
TO AN α–SERIES PROCESS WITH
WEIBULL DISTRIBUTION
Halil Aydogdu∗†, Birdal Senoglu and Mahmut Kara

Received 21:12:2009 : Accepted 12:03:2010
Abstract
In an α-series process, explicit estimators of the parameters α, µ and σ² are obtained by using the methodology of modified maximum likelihood (MML) when the distribution of the first occurrence time of an event is assumed to be Weibull. Monte Carlo simulations are performed to compare the efficiencies of the MML estimators with the corresponding nonparametric (NP) estimators. We also apply the MML methodology to two real life data sets to show the performance of the MML estimators compared to the NP estimators.
Keywords: α-series process, Modified likelihood, Nonparametric, Efficiency, Monte
Carlo simulation.
2000 AMS Classification: 60 G55, 62 F10, 62 G08.
1. Introduction
In the statistical literature, a counting process (CP) {N(t), t ≥ 0} is a common method
of modeling the total number of events that have occurred in the interval (0, t]. If the
data consist of independent and identically distributed (iid) successive interarrival times
(i.e., there is no trend), a renewal process (RP) can be used. However, this is not always
the case because it is more reasonable to assume that the successive operating times will
follow a monotone trend due to the ageing effect and accumulated wear, see Chan et al.
[8]. Using a nonhomogeneous Poisson process with a monotone intensity function is one
approach to modeling these trends, see Cox and Lewis [9] and Ascher and Feingold [2]. A
more direct approach is to apply a monotone counting process model, see, for example,
Lam [13, 14], Lam and Chan [15], and Chan et al. [8]. Braun et al. [6] defined such a
monotone process as below.

∗ Ankara University, Department of Statistics, 06100 Tandogan, Ankara, Turkey. E-mail: (H. Aydogdu) aydogdu@science.ankara.edu.tr (B. Senoglu) senoglu@science.ankara.edu.tr (M. Kara) mkara@science.ankara.edu.tr
† Corresponding Author.
450 H. Aydogdu, B. Senoglu, M. Kara
1.1. Definition. Let X_k be the time between the (k−1)th and kth events of a counting process {N(t), t ≥ 0} for k = 1, 2, . . .. The counting process {N(t), t ≥ 0} is said to be an α-series process with parameter α if there exists a real number α such that k^α X_k, k = 1, 2, . . . are iid random variables with a distribution function F.
The α-series process was first introduced by Braun et al. [6] as a possible alternative
to the geometric process (GP) in situations where the GP is inappropriate. They applied
it to some reliability and scheduling problems. Some theoretical properties of the α-series
process are given in Braun et al. [6, 7]. Clearly, the X_k's form a stochastically increasing sequence when α < 0; they are stochastically decreasing when α > 0. When α = 0, all of the X_k are identically distributed and an α-series process reduces to a RP.
If F is an exponential distribution function and α = 1, then the α-series process {N(t), t ≥ 0} is a linear birth process [7].
Assume that the distribution function F for an α-series process has positive mean µ (F(0) < 1) and finite variance σ². Then

(1.1) E(X_k) = µ / k^α and Var(X_k) = σ² / k^{2α}, k = 1, 2, . . . .

Thus, α, µ and σ² are the most important parameters for an α-series process because these parameters completely determine the mean and variance of X_k.
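To make the definition concrete, an α-series process with Weibull first occurrence times can be simulated by drawing iid Weibull variables Y_k and setting X_k = Y_k / k^α. The sketch below is ours (the paper's own computations were done in MATLAB); the function name is illustrative:

```python
import numpy as np

def simulate_alpha_series(n, alpha, a, b, rng):
    """Simulate the first n interarrival times X_k of an alpha-series
    process in which k^alpha * X_k are iid Weibull(shape=a, scale=b),
    as in Definition 1.1."""
    k = np.arange(1, n + 1)
    y = b * rng.weibull(a, size=n)  # iid Weibull draws Y_k
    return y / k ** alpha           # X_k = Y_k / k^alpha
```

By (1.1), E(X_k) = µ/k^α with µ = Γ(1 + 1/a)b, so the simulated k^α X_k should have a sample mean close to µ.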
In this paper, we study the statistical inference problem for an α-series process with Weibull distribution and obtain explicit estimators of the parameters α, µ and σ² by adopting the method of modified likelihood. We recall that the mean and variance of the Weibull distribution are given by

(1.2) µ = Γ(1 + 1/a) b and σ² = [Γ(1 + 2/a) − Γ²(1 + 1/a)] b²,

respectively. Here, a and b are the shape and scale parameters of the Weibull distribution, respectively.
Since obtaining explicit estimators of the parameters is similar to that given in [25],
we just reproduce the estimators and omit the details, see also [10], [11] and [23]. The
motivation for this paper comes from the fact that the Weibull distribution is not easy to
incorporate into the α-series process model because of difficulties encountered in solving
the likelihood equations numerically. To the best of our knowledge, this is the first study
applying the MML methodology to an α-series process when the distribution of the first
occurrence time is Weibull.
The remainder of this paper is organized as follows: Likelihood equations for estimating the unknown parameters and the corresponding MML estimators are given in Section 2. Section 3 presents the elements of the inverse of the Fisher information matrix. Section 4 presents the simulation results for comparing the efficiencies of the MML estimators with the NP estimators. Two real life examples are given in Section 5. Concluding remarks are given in Section 6.
2. Likelihood equations
As mentioned in Section 1, Y_k = k^α X_k, k = 1, 2, . . . , n are iid Weibull random variables. The likelihood equations ∂ln L/∂α = 0, ∂ln L/∂a = 0 and ∂ln L/∂b = 0 are obtained by taking the first derivatives of the log-likelihood function with respect to the parameters α, a and b. For example, we see that the likelihood equation for the parameter α is

∂ln L/∂α = a Σ_{k=1}^{n} ln k − a Σ_{k=1}^{n} (k^α X_k / b)^a ln k = 0.
The solutions of the likelihood equations give us the ML estimators of the parameters α, a and b. However, the ML estimators cannot be obtained explicitly or numerically because the first derivatives of the likelihood function involve power functions of the parameter α, as well as the shape parameter a.
To overcome these difficulties, we take the logarithm of the Y_k's as shown below:

(2.1) ln Y_k = α ln k + ln X_k, k = 1, 2, . . . , n.
It is known that the ln Y_k's are iid extreme value (EV) random variables with the probability density function given by

(2.2) f(w) = (1/η) exp((w − δ)/η) exp(−exp((w − δ)/η)), w ∈ ℝ; η > 0, δ ∈ ℝ,

where δ = ln b is the location parameter and η = 1/a is the scale parameter.
We first obtain the estimators of the unknown parameters in the extreme value distri-
bution, and then obtain the estimators of the Weibull parameters by using the following
inverse transformations:
(2.3) a = 1/η and b = exp(δ).
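The Weibull-to-EV relation behind (2.2) and (2.3) is easy to check numerically: if Y is Weibull with shape a and scale b, then ln Y is EV with location δ = ln b, scale η = 1/a, and mean δ − γη, where γ is Euler's constant. The quick numerical check below is ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 2.0, 3.0                          # Weibull shape and scale
w = np.log(b * rng.weibull(a, 200_000))  # ln of Weibull draws
gamma_e = 0.5772156649015329             # Euler's constant
# EV(delta, eta) with delta = ln b and eta = 1/a has mean delta - gamma_e*eta
print(w.mean(), np.log(b) - gamma_e / a)  # the two values should be close
```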
The likelihood function for ln X_k, k = 1, 2, . . . , n is found by using (2.1) and (2.2):

(2.4) L(α, δ, η) = (1/η^n) exp{ Σ_{k=1}^{n} [ (ln X_k + α ln k − δ)/η − exp((ln X_k + α ln k − δ)/η) ] }.
For the sake of simplicity, we use the notation c_k instead of ln k in (2.4). Then, it is obvious that maximizing L(α, δ, η) with respect to the unknown parameters α, δ and η is equivalent to estimating the parameters in (2.5),

(2.5) ln x_k = δ − α c_k + ε_k (1 ≤ k ≤ n),

when ε_k ∼ EV(0, η).
Then the likelihood equations are given by

(2.6)
∂ln L/∂α = (1/η) Σ_{k=1}^{n} c_k − (1/η) Σ_{k=1}^{n} g(z_k) c_k = 0,
∂ln L/∂δ = (1/η) Σ_{k=1}^{n} g(z_k) − n/η = 0
and
∂ln L/∂η = (1/η) Σ_{k=1}^{n} g(z_k) z_k − (1/η) Σ_{k=1}^{n} z_k − n/η = 0,

where

z_k = ε_k/η, (k = 1, 2, . . . , n) and g(z) = exp(z).
The ML estimators of the extreme value distribution parameters are the solutions of the equations in (2.6). Because of the awkward function g(z), it is not easy to solve these equations. The ML estimates are, therefore, elusive. Since it is impossible to obtain estimators of the unknown parameters in closed form, we resort to iterative methods. However, iterative methods can be problematic for reasons of
i) multiple roots,
ii) non-convergence of iterations, or
iii) convergence to wrong values;
see Barnett [4] and Vaughan [24]. If the data contain outliers, the iterations for the likelihood equations might never converge, see Puthenpura and Sinha [17].
To overcome these difficulties, we use the method of modified maximum likelihood introduced by Tiku [19, 20], see also Tiku et al. [22]. The method linearizes the intractable terms in (2.6) and calls the resulting equations the “modified” likelihood equations. The solutions of these equations are the following closed form MML estimators (see [10, 11, 23] and [25]):
(2.7) α̂ = η̂ D − E, δ̂ = ln x̄_[·] + α̂ c̄_[·] + (Δ/m) η̂ and η̂ = [B + √(B² + 4nC)] / [2√(n(n − 2))] (bias corrected),

where

D = Σ_{k=1}^{n} (1 − a_k)(c_[k] − c̄_[·]) / Σ_{k=1}^{n} b_k (c_[k] − c̄_[·])²,

E = Σ_{k=1}^{n} b_k (ln x_[k] − ln x̄_[·])(c_[k] − c̄_[·]) / Σ_{k=1}^{n} b_k (c_[k] − c̄_[·])²,

ln x̄_[·] = Σ_{k=1}^{n} b_k ln x_[k] / m, c̄_[·] = Σ_{k=1}^{n} b_k c_[k] / m, Δ = Σ_{k=1}^{n} (a_k − 1), m = Σ_{k=1}^{n} b_k,

B = Σ_{k=1}^{n} (a_k − 1)[(ln x_[k] − ln x̄_[·]) − E(c_[k] − c̄_[·])] and

C = Σ_{k=1}^{n} b_k [(ln x_[k] − ln x̄_[·]) − E(c_[k] − c̄_[·])]².
Then, by (1.2) and (2.3), the MML estimators of µ and σ² are obtained as

(2.8) µ̂ = Γ(1 + η̂) exp(δ̂) and σ̂² = [Γ(1 + 2η̂) − Γ²(1 + η̂)] exp(2δ̂),

respectively.
It should be noted that the divisor n in the denominator of η̂ was replaced by √(n(n − 2)) as a bias correction. See Vaughan and Tiku [25] and Tiku and Akkaya [23] for the asymptotic and small-sample properties of the MML estimators.
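The closed-form estimators in (2.7)–(2.8) translate directly into code. Two ingredients are not reproduced in this excerpt: the exact definition of the coefficients a_k, b_k and the ordering of the observations. The sketch below therefore assumes Tiku's usual linearization of g(z) = exp(z) around t_k = F⁻¹(k/(n+1)), with F the EV(0, 1) distribution function, and orders the data by the residuals of a preliminary least-squares fit; treat both as illustrative assumptions rather than the authors' exact prescription.

```python
import numpy as np
from math import gamma

def mml_ev_fit(x):
    """Closed-form MML estimators (2.7) for ln x_k = delta - alpha*c_k + eps_k,
    eps_k ~ EV(0, eta), with c_k = ln k.  The coefficients a_k, b_k come from
    linearizing g(z) = exp(z) at t_k = F^{-1}(k/(n+1)) -- an assumption, since
    the excerpt does not reproduce their definition."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = np.arange(1, n + 1)
    c = np.log(k)
    w = np.log(x)

    # preliminary least-squares fit of w on c, used only to order the residuals
    A = np.column_stack([np.ones(n), -c])
    delta0, alpha0 = np.linalg.lstsq(A, w, rcond=None)[0]
    order = np.argsort(w - delta0 + alpha0 * c)
    w_o, c_o = w[order], c[order]

    # linearization points t_k: F(z) = 1 - exp(-exp(z)) for the EV(0,1) density
    t = np.log(-np.log(1.0 - k / (n + 1.0)))
    b = np.exp(t)                # b_k
    a = np.exp(t) * (1.0 - t)    # a_k

    m = b.sum()
    Delta = (a - 1.0).sum()
    wbar = (b * w_o).sum() / m
    cbar = (b * c_o).sum() / m
    Scc = (b * (c_o - cbar) ** 2).sum()

    D = ((1.0 - a) * (c_o - cbar)).sum() / Scc
    E = (b * (w_o - wbar) * (c_o - cbar)).sum() / Scc
    resid = (w_o - wbar) - E * (c_o - cbar)
    B = ((a - 1.0) * resid).sum()
    C = (b * resid ** 2).sum()

    eta = (B + np.sqrt(B ** 2 + 4.0 * n * C)) / (2.0 * np.sqrt(n * (n - 2.0)))
    alpha = eta * D - E
    delta = wbar + alpha * cbar + (Delta / m) * eta
    return alpha, delta, eta

def weibull_moments(delta, eta):
    """MML estimates of mu and sigma^2 via (2.8), using a = 1/eta, b = exp(delta)."""
    mu = gamma(1.0 + eta) * np.exp(delta)
    s2 = (gamma(1.0 + 2.0 * eta) - gamma(1.0 + eta) ** 2) * np.exp(2.0 * delta)
    return mu, s2
```

On simulated data from the model, the three estimates should land close to the true (α, δ, η), in line with the asymptotic results cited above.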
3. The Fisher information matrix
The elements of the inverse of the Fisher information matrix (I⁻¹) are given by

V11 = η² / [ Σ_{k=1}^{n} c_k² − (Σ_{k=1}^{n} c_k)²/n ],

V12 = η² Σ_{k=1}^{n} c_k / [ n Σ_{k=1}^{n} c_k² − (Σ_{k=1}^{n} c_k)² ], V13 = 0,

V22 = η² Σ_{k=1}^{n} c_k² / [ n Σ_{k=1}^{n} c_k² − (Σ_{k=1}^{n} c_k)² ] + 6η²(1 − γ)²/(nπ²),

V23 = −6η²(1 − γ)/(nπ²) and V33 = 6η²/(nπ²),
where γ is the Euler constant.
It should be noted that the Fisher information matrix I is symmetric, so V21 = V12, V31 = V13 and V32 = V23. The diagonal elements of I⁻¹ provide the asymptotic variances of the ML estimators, i.e., V(α̂), V(δ̂) and V(η̂). These variances are also known as the minimum variance bounds (MVBs) for estimating α, δ and η. The MML estimators are asymptotically equivalent to the ML estimators when the regularity conditions hold, see Bhattacharyya [5] and Vaughan and Tiku [25]. Therefore, we can conclude that the MML estimators are fully efficient, i.e., they are asymptotically unbiased and their covariance matrix is asymptotically the same as the inverse of I. See Table 1 for the simulated variances of the MML estimators in (2.7) and the corresponding MVB values.
Table 1. The simulated variances of the MML estimators and the corresponding MVB values

           Simulated variances             MVB
   n       α̂       δ̂       η̂        α̂       δ̂       η̂
  30    0.0132  0.0908  0.0054   0.0119  0.0828  0.0051
  50    0.0069  0.0650  0.0031   0.0065  0.0623  0.0030
 100    0.0030  0.0424  0.0015   0.0029  0.0415  0.0015
It is clear from Table 1 that the simulated variances of the MML estimators and the
corresponding MVB values become close as n increases. Therefore, the MML estimators
are highly efficient estimators.
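As a numerical sanity check, the diagonal elements V11, V22 and V33 above can be evaluated directly with c_k = ln k. The sketch below is ours; the choice η = 0.5 (i.e., a = 2) is our inference from the MVB column of Table 1, not stated in the text:

```python
import numpy as np

def mvb_diagonal(n, eta):
    """Diagonal elements V11, V22, V33 of the inverse Fisher information
    given in Section 3, with c_k = ln k."""
    gamma_e = 0.5772156649015329  # Euler's constant
    c = np.log(np.arange(1, n + 1))
    s1, s2 = c.sum(), (c ** 2).sum()
    v11 = eta ** 2 / (s2 - s1 ** 2 / n)
    v22 = (eta ** 2 * s2 / (n * s2 - s1 ** 2)
           + 6.0 * eta ** 2 * (1.0 - gamma_e) ** 2 / (n * np.pi ** 2))
    v33 = 6.0 * eta ** 2 / (n * np.pi ** 2)
    return v11, v22, v33
```

For n = 30 and η = 0.5 this gives approximately (0.0119, 0.083, 0.0051), matching the MVB column of Table 1 to rounding.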
4. Simulation results
An extensive Monte Carlo simulation study was carried out in order to compare the
efficiencies of the MML estimators and the NP estimators given below; see Aydogdu and
Kara [3].
(4.1)
ᾱ = [ Σ_{k=1}^{n} ln k Σ_{k=1}^{n} ln X_k − n Σ_{k=1}^{n} ln X_k ln k ] / [ n Σ_{k=1}^{n} (ln k)² − (Σ_{k=1}^{n} ln k)² ],

σ̄² = exp(2β̄) σ̄²_e and

µ̄ =
  root of 2µ̄² ln µ̄ − 2β̄µ̄² − σ̄² = 0,            if ᾱ < 0,
  Σ_{k=1}^{n} k^{−ᾱ} X_k / Σ_{k=1}^{n} k^{−2ᾱ},   if 0 < ᾱ ≤ 0.7,
  Σ_{k=1}^{n} X_k / Σ_{k=1}^{n} k^{−ᾱ},           if ᾱ > 0.7,

where

β̄ = [ Σ_{k=1}^{n} ln k Σ_{k=1}^{n} ln X_k ln k − Σ_{k=1}^{n} (ln k)² Σ_{k=1}^{n} ln X_k ] / [ (Σ_{k=1}^{n} ln k)² − n Σ_{k=1}^{n} (ln k)² ]

and

σ̄²_e = { [ Σ_{k=1}^{n} (ln X_k)² − (1/n)(Σ_{k=1}^{n} ln X_k)² ] − ᾱ² [ Σ_{k=1}^{n} (ln k)² − (1/n)(Σ_{k=1}^{n} ln k)² ] } / (n − 2).
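The nonparametric estimators in (4.1) are fully explicit except for the root-finding step in the ᾱ < 0 branch. A direct transcription follows; the bisection bracket [10⁻⁶, 10⁶] is our choice, not the paper's:

```python
import numpy as np

def np_estimates(x):
    """Nonparametric estimators (4.1) of alpha, mu, sigma^2
    (Aydogdu and Kara [3])."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = np.arange(1, n + 1)
    lk, lx = np.log(k), np.log(x)

    alpha = (lk.sum() * lx.sum() - n * (lx * lk).sum()) / \
            (n * (lk ** 2).sum() - lk.sum() ** 2)
    beta = (lk.sum() * (lx * lk).sum() - (lk ** 2).sum() * lx.sum()) / \
           (lk.sum() ** 2 - n * (lk ** 2).sum())
    s2e = (((lx ** 2).sum() - lx.sum() ** 2 / n)
           - alpha ** 2 * ((lk ** 2).sum() - lk.sum() ** 2 / n)) / (n - 2)
    sigma2 = np.exp(2.0 * beta) * s2e

    if alpha < 0:
        # root of 2*mu^2*ln(mu) - 2*beta*mu^2 - sigma2 = 0, by bisection
        f = lambda m: 2.0 * m ** 2 * np.log(m) - 2.0 * beta * m ** 2 - sigma2
        lo, hi = 1e-6, 1e6
        for _ in range(200):
            mid = np.sqrt(lo * hi)
            lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
        mu = np.sqrt(lo * hi)
    elif alpha <= 0.7:
        mu = (k ** -alpha * x).sum() / (k ** (-2.0 * alpha)).sum()
    else:
        mu = x.sum() / (k ** -alpha).sum()
    return alpha, mu, sigma2
```

On data simulated from an α-series process with Weibull distribution, the estimates should be close to the true parameter values, with the larger spread reported for the NP estimators in Table 2.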
All the computations were conducted in MATLAB 7. The means and the MSEs were calculated for different sample sizes, α parameters and shape parameters, based on [100,000/n] Monte Carlo simulations. Here, [·] represents the greatest integer value. The scale parameter b was taken to be 1 throughout the study. We consider the sample sizes n = 30, 50, 100, the α parameter values α = −0.5, 0.5, 0.9 and the shape parameters
a = 1.5, 2 and 3. The simulated means, the MSEs and the relative efficiencies (REs)
RE(µ̂/µ̄) = MSE(µ̂)/MSE(µ̄) × 100, RE(σ̂²/σ̄²) = MSE(σ̂²)/MSE(σ̄²) × 100,
RE(α̂/ᾱ) = MSE(α̂)/MSE(ᾱ) × 100
of the NP and MML estimators are given in Table 2.
Table 2. The simulated means, n×MSEs and REs for the MML and NP estimators of the parameters µ, σ² and α

                     µ̂                     σ̂²                    α̂
a = 1.5; α = −0.5; µ = 0.9027; σ² = 0.3757
n   Estimators  Mean    n×MSE  RE    Mean    n×MSE  RE    Mean     n×MSE  RE
30 MML 0.8965 3.43 55 0.4316 3.49 30 -0.5117 0.71 65
NP 0.9468 6.22 0.5160 11.73 -0.4931 1.10
50 MML 0.8993 4.18 56 0.4005 4.94 44 -0.5087 0.62 66
NP 0.9179 7.43 0.4483 11.33 -0.5020 0.94
100 MML 0.9040 5.66 58 0.3808 5.38 43 -0.5048 0.49 61
NP 0.8945 9.72 0.3974 12.51 -0.4964 0.80
a = 2; α = −0.5; µ = 0.8862; σ² = 0.2146
30 MML 0.8737 1.85 55 0.2188 0.60 28 -0.5127 0.38 62
NP 0.9217 3.34 0.2745 2.16 -0.4943 0.61
50 MML 0.8797 2.74 57 0.2234 0.88 38 -0.5091 0.39 66
NP 0.9105 4.77 0.2612 2.34 -0.4984 0.58
100 MML 0.8813 3.40 60 0.2186 0.99 41 -0.5057 0.33 66
NP 0.8957 5.68 0.2471 2.39 -0.5018 0.50
a = 3; α = −0.5; µ = 0.8930; σ² = 0.1053
30 MML 0.8759 0.91 66 0.1037 0.06 25 -0.5077 0.18 72
NP 0.9154 1.37 0.1366 0.25 -0.4991 0.26
50 MML 0.8864 1.17 65 0.1031 0.09 26 -0.5048 0.17 69
NP 0.9084 1.78 0.1352 0.33 -0.5004 0.25
100 MML 0.8879 1.42 62 0.1040 0.09 26 -0.5036 0.13 65
NP 0.9074 2.29 0.1211 0.34 -0.4986 0.20
a = 1.5; α = 0.5; µ = 0.9027; σ² = 0.3757
30 MML 0.8901 3.43 76 0.4201 4.18 31 0.4878 0.74 70
NP 0.9318 4.50 0.5266 13.32 0.5041 1.06
50 MML 0.8971 4.66 72 0.4048 4.06 47 0.4897 0.61 67
NP 0.9216 6.48 0.4492 8.56 0.5031 0.91
100 MML 0.8993 5.81 64 0.3861 4.76 52 0.4934 0.52 65
NP 0.9111 9.06 0.4082 9.11 0.5016 0.80
a = 2; α = 0.5; µ = 0.8862; σ² = 0.2146
30 MML 0.8671 1.93 76 0.2204 0.62 32 0.4838 0.40 70
NP 0.9156 2.54 0.2757 1.93 0.5059 0.57
50 MML 0.8801 2.44 72 0.2166 0.72 38 0.4916 0.36 68
NP 0.8968 3.39 0.2415 1.90 0.4976 0.53
100 MML 0.8826 3.45 67 0.2162 0.84 37 0.4949 0.32 67
NP 0.8941 5.13 0.2363 2.25 0.4986 0.48
Table 2 (continued)

                     µ̂                     σ̂²                    α̂
a = 3; α = 0.5; µ = 0.8930; σ² = 0.1053
n   Estimators  Mean    n×MSE  RE    Mean    n×MSE  RE    Mean     n×MSE  RE
30 MML 0.8754 0.92 77 0.1012 0.07 28 0.4901 0.17 68
NP 0.9058 1.20 0.1368 0.25 0.5016 0.25
50 MML 0.8829 1.08 71 0.1040 0.08 31 0.4948 0.15 65
NP 0.8957 1.52 0.1236 0.26 0.5026 0.23
100 MML 0.8878 1.50 68 0.1049 0.10 31 0.4967 0.14 70
NP 0.8890 2.22 0.1187 0.32 0.5018 0.20
a = 1.5; α = 0.9; µ = 0.9027; σ² = 0.3757
30 MML 0.8908 3.47 73 0.4103 3.67 43 0.8807 0.75 69
NP 0.9497 4.77 0.4890 8.73 0.9020 1.08
50 MML 0.8930 4.40 72 0.4031 4.34 49 0.8896 0.61 64
NP 0.9313 6.10 0.4443 8.92 0.8981 0.95
100 MML 0.8943 5.87 70 0.3924 4.89 49 0.8939 0.57 62
NP 0.9221 8.40 0.4088 9.95 0.9018 0.92
a = 2; α = 0.9; µ = 0.8862; σ² = 0.2146
30 MML 0.8686 2.01 76 0.2239 0.67 36 0.8849 0.41 71
NP 0.9258 2.66 0.2809 1.88 0.9064 0.58
50 MML 0.8764 2.47 70 0.2211 0.73 38 0.8901 0.36 67
NP 0.8987 3.52 0.2658 1.94 0.9017 0.54
100 MML 0.8825 3.39 71 0.2155 0.87 40 0.8929 0.31 66
NP 0.8943 4.80 0.2513 2.19 0.8976 0.47
a = 3; α = 0.9; µ = 0.8930; σ² = 0.1053
30 MML 0.8742 0.93 72 0.1028 0.07 26 0.8912 0.18 67
NP 0.9166 1.30 0.1354 0.27 0.9020 0.27
50 MML 0.8865 1.17 71 0.1027 0.08 24 0.8941 0.16 67
NP 0.9030 1.64 0.1274 0.33 0.8990 0.24
100 MML 0.8878 1.60 68 0.1030 0.10 32 0.8949 0.15 68
NP 0.8986 2.36 0.1176 0.31 0.8989 0.22
The results given in Table 2 show that the MML estimators are more efficient than the
NP estimators in every case.
5. Illustrative examples
In this section, the parameters α, µ and σ² in an α-series process are estimated on two different real data sets by using the MML estimators and the NP estimators. The first example uses data on coal-mining disasters and the second uses the Aircraft 6 data.
5.1. Coal-mining disaster data. This data set has 190 observations showing the interval in days between successive disasters in Great Britain, see Andrews and Herzberg [1]. The data contain one “zero” because there were two accidents on the same day. The zero is replaced by 0.5: since two accidents on the same day are usually not at the same time of day, an approximate time interval of 0.5 days between them is reasonable, see Jarrett [12].
To obtain an idea about the underlying distribution of ε_k in the following model

ln x_k = δ − α c_k + ε_k (1 ≤ k ≤ 190),

we first constructed a Q-Q plot of the 190 residuals, see Figure 1. It should be noted that the EV Q-Q plot of the residuals is constructed by plotting the quantiles of the EV distribution against the ordered residuals ε̄_k = ln x_k − δ̄ + ᾱ c_k.
Figure 1. EV Q-Q plot of the coal-mining disaster data
[Figure: extreme value quantiles plotted against the quantiles of the input sample]
It is clear from Figure 1 that the data points do not deviate much from a straight line; therefore, we conclude that EV is the most appropriate distribution for ε_k. This is also supported by the well-known Z∗ test statistic proposed by Tiku [21] (i.e., Z∗_calculated = 0.9499 and p-value = 0.1916), see also Surucu [18].
The estimates of the parameters α, µ and σ² obtained by using the MML estimators and the NP estimators are given in Table 3.

Table 3. Parameter estimates for the coal-mining disaster data

  Estimator      α̂           µ̂           σ̂²
  MML         −0.393      36.311      1873.900
             (±0.091)   (±10.344)
  NP          −0.433      22.440       433.250
             (±0.108)   (±12.210)

∗ Values given in parentheses are the standard errors (SE) of the estimators.
Based on the simulation results given in Table 2, we conclude that the MML estimates of the parameters are preferable to the NP estimates according to the MSE criterion, since the MSE values of the MML estimators are smaller than those of the NP estimators when α < 0.
It should be noted that the variance of the Weibull distribution is very sensitive to changes in the value of a, especially for a < 1. Since the value of a is less than 1 for the coal-mining disaster data, the MML and NP estimators of the variance of the Weibull distribution are quite different from each other. This happens frequently in practice. Similar statements can also be made for the Aircraft 6 data.
Let S_k = X_1 + X_2 + · · · + X_k, k = 1, 2, . . . , n. Then a fitted value of S_k may be defined by

Ŝ_k = µ̂ Σ_{j=1}^{k} 1/j^{α̂}.

In Figure 2, we plot the coal-mining disaster times and their fitted times against the number of disasters by using the NP and MML estimators, respectively. It can be seen that the MML estimators provide a better fit than the NP estimators.
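The fitted cumulative times Ŝ_k are a one-line computation given the estimates; a small sketch (the function name is ours):

```python
import numpy as np

def fitted_S(mu_hat, alpha_hat, n):
    """Fitted cumulative occurrence times S_k = mu_hat * sum_{j=1}^{k} j^(-alpha_hat),
    for k = 1, ..., n."""
    j = np.arange(1, n + 1)
    return mu_hat * np.cumsum(j ** (-alpha_hat))
```

Plotting these fitted values against the observed S_k, for both the NP and the MML estimates, reproduces comparisons like Figure 2.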
Figure 2. Plot of S_k against their fitted values using the NP and MML estimators for the coal-mining disaster data
[Figure: k versus S_k, with curves for Observed, NP and MML]
5.2. The aircraft 6 data. This data set has 30 observations showing the intervals between successive failures of the air-conditioning equipment in Boeing 720 aircraft, see Cox and Lewis [9]. The data were also studied by Proschan [16].
Following similar steps to those for the coal-mining disaster data in Subsection 5.1, EV is again found to be the most appropriate distribution for the residuals, see Figure 3.
Figure 3. EV Q-Q plot of the aircraft 6 data
[Figure: extreme value quantiles plotted against the quantiles of the input sample]
This is confirmed by the value of the Z∗ test statistic (i.e., Z∗_calculated = 0.9203 and p-value = 0.4300), see also Tiku [21] and Surucu [18]. The estimates of the unknown parameters are given in Table 4.
Table 4. Parameter estimates for the aircraft 6 data set

  Estimator      α̂           µ̂           σ̂²
  MML          0.4708     177.0362      40980
             (±0.249)   (±51.173)
  NP           0.4775     163.1353      15061
             (±0.272)   (±56.234)

∗ Values given in parentheses are the standard errors (SE) of the estimators.
Again, based on the simulation results given in Table 2, the MML estimators are chosen for estimating the parameters α, µ and σ² because the MML estimators outperform the NP estimators in terms of the MSE criterion, as for the coal-mining disaster data.
In Figure 4, we plot the failure times of the air-conditioning equipment and their fitted
times against the number of failures by using the NP and MML estimators, respectively.
It can be seen that MML estimators provide a better fit than the NP estimators.
Figure 4. Plot of S_k against their fitted values using the NP and MML estimators for the aircraft 6 data
[Figure: k versus S_k, with curves for Observed, NP and MML]
6. Conclusions
We have compared the efficiencies of the MML estimators with the NP estimators via a Monte Carlo simulation study. It is shown that the MML estimators have higher efficiencies than the NP estimators.
We also applied the MML methodology to two real life data sets. The parameters α, µ and σ² for an α-series process with Weibull distribution have been estimated, and the MML estimates have been compared with the NP estimates. We see that the MML estimators provide a better fit than the NP estimators.
Acknowledgment. We are grateful to the referee for very helpful comments.
References
[1] Andrews, D. F. and Herzberg, A. M. Data (Springer-Verlag, New York, 1985).
[2] Ascher, H. and Feingold, H. Repairable systems reliability (Marcel Dekker, New York, 1984).
[3] Aydogdu, H. and Kara, M. Nonparametric estimation in α-series processes, Computational
Statistics and Data Analysis (Submitted, 2009).
[4] Barnett, V. D. Evaluation of the maximum likelihood estimator where the likelihood equation
has multiple roots, Biometrika 53, 151–165, 1966.
[5] Bhattacharyya, G. K. The asymptotics of maximum likelihood and related estimators based
on type II censored data, J. Amer. Stat. Assoc. 80, 398–404, 1985.
[6] Braun, W. J., Li, W. and Zhao, Y. Q. Properties of the geometric and related processes, Naval
Research Logistics 52, 607–616, 2005.
[7] Braun, W. J., Li, W. and Zhao, Y. Q. Some theoretical properties of the geometric and α-series processes, Communications in Statistics: Theory and Methods 37, 1483–1496, 2008.
[8] Chan, S. K., Lam, Y. and Leung, Y. P. Statistical inference for geometric processes with
gamma distributions, Computational Statistics and Data Analysis 47, 565–581, 2004.
[9] Cox, D. R. and Lewis, P. A. W. The statistical analysis of series of events (Methuen, London,
1966).
[10] Islam, M. Q., Tiku, M. L. and Yildirim, F. Non-normal regression: Part I: Skew distribu-
tions, Communications in Statistics: Theory and Methods 30, 993–1020, 2001.
[11] Islam, M. Q. and Tiku, M. L. Multiple linear regression model under nonnormality, Com-
munications in Statistics: Theory and Methods 33(10), 2443–2467, 2004.
[12] Jarrett, R. G. A note on the intervals between coal-mining disasters, Biometrika 66, 191–
193, 1979.
[13] Lam, Y. A note on the optimal replacement problem, Adv. Appl. Probab. 20, 479–482, 1988.
[14] Lam, Y. Geometric process and replacement problem, Acta Math. Appl. Sinica 4, 366–377,
1988.
[15] Lam, Y. and Chan, S. K. Statistical inference for geometric processes with lognormal dis-
tribution, Computational Statistics and Data Analysis 27, 99–112, 1998.
[16] Proschan, F. Theoretical explanation of observed decreasing failure rate, Technometrics 5,
375–383, 1963.
[17] Puthenpura, S. and Sinha, N. K. Modified maximum likelihood method for the robust esti-
mation of system parameters from very noisy data, Automatica 22, 231–235, 1986.
[18] Surucu, B. A power comparison and simulation study of goodness-of-fit tests, Computers
and Mathematics with Applications 56, 1617–1625, 2008.
[19] Tiku, M. L. Estimating the mean and standard deviation from censored normal samples,
Biometrika 54, 155–165, 1967.
[20] Tiku, M. L. Estimating the parameters of lognormal distribution from censored samples, J.
Amer. Stat. Assoc. 63, 134–140, 1968.
[21] Tiku, M. L. Goodness-of-fit statistics based on the spacings of complete or censored samples,
Austral. J. Statist. 22, 260–275, 1980.
[22] Tiku, M. L., Tan, W. Y. and Balakrishnan, N. Robust inference (Marcel Dekker, New York,
1986).
[23] Tiku, M. L. and Akkaya, A. D. Robust estimation and hypothesis testing (New Age Interna-
tional (P) Limited, Publishers, New Delhi, 2004).
[24] Vaughan, D. C. On the Tiku-Suresh method of estimation, Communications in Statistics:
Theory and Methods 21, 451–469, 1992.
[25] Vaughan, D. C. and Tiku, M. L. Estimation and hypothesis testing for a nonnormal bivariate
distribution with applications, Mathematical and Computer Modelling 32, 53–67, 2000.

. . see also [10]. . . . α. kα k Thus. all of the Xk are identically distributed and an α-series process reduces to a RP. B. t ≥ 0} for k = 1.450 H. the Xk ’s form a stochastically increasing sequence when α < 0. For example. respectively. a and b are the shape and scale parameters of the Weibull distribution. Two real life examples are given in Section 5. k = 1. 2. 2. . Clearly. . n are iid Weibull random ln ln ln variables. µ and σ 2 by adopting the method of modified likelihood. then the α-series process {N (t). [11] and [23]. 7]. . k = 1. they are stochastically decreasing when α > 0 . [6. Some theoretical properties of the α-series process are given in Braun et al. Aydogdu. are iid random variables with a distribution function F . k = 1.1) E(Xk ) = (1. Since obtaining explicit estimators of the parameters is similar to that given in [25]. . Concluding remarks are given in Section 6. ∂ ∂aL = 0 and ∂ ∂bL = 0 are obtained by taking the first derivatives of the log-likelihood function with respect to the parameters α. When α = 0. this is the first study applying the MML methodology to an α-series process when the distribution of the first occurrence time is Weibull. 2. Kara 1.1.. The counting process {N (t). If F is an exponential distribution function and α = 1. Section 4 presents the simulation results for comparing the efficiencies of the MML estimators with the NP estimators. Section 3 presents the elements of the inverse of the Fisher information matrix. 2. t ≥ 0} is said to be an α-series process with parameter α if there exists a real number α such that kα Xk . The likelihood equations. respectively. Definition. we just reproduce the estimators and omit the details. . The remainder of this paper is organized as follows: Likelihood equations for estimating the unknown parameters and the corresponding MML estimators are given in Section 2. . Senoglu. we see that the likelihood equation for the parameter α is ∂ ln L = ln k − ∂α k=1 k=1 n n kα Xk b α ln k = 0. 
. . t ≥ 0} is a linear birth process [7]. ∂ ∂αL = 0. Then σ2 µ and Var(Xk ) = 2α . 2. To the best of our knowledge. . In this paper. Let Xk be the time between the (k − 1)th and kth event of a counting process {N (t). a and b. . The α-series process was first introduced by Braun et al. The motivation for this paper comes from the fact that the Weibull distribution is not easy to incorporate into the α-series process model because of difficulties encountered in solving the likelihood equations numerically. They applied it to some reliability and scheduling problems. we study the statistical inference problem for α-series process with Weibull distribution and obtain the explicit estimators of the parameters α.2) µ=Γ 1+ 1 a b and σ 2 = Γ 1 + 2 a − Γ2 1 + 1 a b2 . Assume that the distribution function F for an α-series process has positive mean µ(F (0) < 1). Yk = kα Xk . [6] as a possible alternative to the geometric process (GP) in situations where the GP is inappropriate. Likelihood equations As mentioned in Section 1. and finite variance σ 2 . µ and σ 2 are the most important parameters for an α-series process because these parameters completely determine the mean and variance of Xk . M. We recall that the mean and variance of the Weibull distribution are given by (1. Here.

δ ∈ R. η). η > 0. elusive. 2. δ. We first obtain the estimators of the unknown parameters in the extreme value distribution. ii) non-convergence of iterations. Then the likelihood equations are given by ∂ ln L 1 = ∂α η (2.3) a = 1/η and b = exp(δ). Then. .1) ln Yk = α ln k + ln Xk . the ML estimators cannot be obtained explicitly or numerically because the first derivatives of the likelihood function involve power functions of the parameter α. . it is obvious that maximizing L(α. . it is not easy to solve these equations. n) and g(z) = exp(z). a and b. Because of the awkward function g(z). . . k = 1. or iii) convergence to wrong values. Since it is impossible to obtain estimators of the unknown parameters in a closed form. . we take the logarithm of the Yk ’s as shown below: (2.2): (2. The ML estimates are. where δ = ln b is the location parameter and η = 1/a is the scale parameter. w ∈ R. n The likelihood function for ln Xk . δ and η is equivalent to estimating the parameters in (2. we resort to iterative methods. n. η n k=1 g(zk )zk − 1 η k=1 zk − n = 0. 2. k=1 For the sake of simplicity.2) f (w) = 1 exp η w−δ η exp − exp w−δ η . However. therefore.6). as well as the shape parameter a.Application of MML Methodology 451 The solutions of the likelihood equations give us the ML estimators of the parameters α.4) L(α. To overcome these difficulties. η) with respect to the unknown parameters α. .5) ln xk = δ − αck + εk (1 ≤ k ≤ n). (k = 1.1) and (2.4). However that can be problematic for reasons of i) multiple roots. δ. . we use the notations ck instead of ln k in (2. . . . η The ML estimators of the extreme value distribution parameters are the solutions of the equations in (2.5). . (2. 2. and then obtain the estimators of the Weibull parameters by using the following inverse transformations: (2. It is known that the ln Yk ’s are iid extreme value (EV) random variables with the probability density function given by (2. . when εk ∼ EV(0. k = 1. 
η) = 1 exp ηn ln Xk + α ln k − δ − exp η ln Xk + α ln k − δ η . n is found by using (2.6) ∂ ln L 1 = ∂δ η n k=1 n ck − 1 η n g(zk )ck = 0 k=1 k=1 g(zk ) − n =0 η n and ∂ ln L 1 = ∂η η where zk = εk .

The likelihood equations (2.6) have no explicit solutions, and solving them by iteration can be problematic: the iterations may converge to a wrong root or may fail to converge, see Barnett [4] and Vaughan [24]. If the data contain outliers, the iterations for the likelihood equations might never converge, see Puthenpura and Sinha [17]. To overcome these difficulties, we use the method of modified maximum likelihood introduced by Tiku [19, 20], see also Tiku et al. [22]. The method linearizes the intractable terms in (2.6), i.e., g(z_[k]) ≅ a_k + b_k z_[k], and calls the resulting equations the "modified" likelihood equations. The solutions of these equations are the following closed form MML estimators (see [10, 23] and [25]):

(2.7)  α̂ = η̂ D − E,   δ̂ = ln x̄_[·] + α̂ c̄_[·] + (∆/m) η̂   and   η̂ = [B + √(B² + 4nC)] / [2 √(n(n − 2))],

where

m = Σ_{k=1}^{n} b_k,   ∆ = Σ_{k=1}^{n} (a_k − 1),
c̄_[·] = (1/m) Σ_{k=1}^{n} b_k c_[k],   ln x̄_[·] = (1/m) Σ_{k=1}^{n} b_k ln x_[k],
D = Σ_{k=1}^{n} (1 − a_k)(c_[k] − c̄_[·]) / Σ_{k=1}^{n} b_k (c_[k] − c̄_[·])²,
E = Σ_{k=1}^{n} b_k (ln x_[k] − ln x̄_[·])(c_[k] − c̄_[·]) / Σ_{k=1}^{n} b_k (c_[k] − c̄_[·])²,
B = Σ_{k=1}^{n} (a_k − 1) [(ln x_[k] − ln x̄_[·]) − E (c_[k] − c̄_[·])]   and
C = Σ_{k=1}^{n} b_k [(ln x_[k] − ln x̄_[·]) − E (c_[k] − c̄_[·])]².

It should be noted that the divisor n in the denominator of η̂ was replaced by n(n − 2) as a bias correction. See Vaughan and Tiku [25] and Tiku and Akkaya [23] for the asymptotic and small sample properties of the MML estimators. Then, by (2.2) and (2.3), the MML estimators of µ and σ² are obtained as

(2.8)  µ̂ = Γ(1 + η̂) exp(δ̂)   and   σ̂² = [Γ(1 + 2η̂) − Γ²(1 + η̂)] exp(2δ̂),

respectively.

3. The Fisher information matrix

The elements of the inverse of the Fisher information matrix (I⁻¹) are given by

V11 = n η² / [n Σ c_k² − (Σ c_k)²],
V12 = η² Σ c_k / [n Σ c_k² − (Σ c_k)²],   V13 = 0,
V22 = η² Σ c_k² / [n Σ c_k² − (Σ c_k)²] + 6η²(1 − γ)² / (nπ²),
V23 = −6η²(1 − γ) / (nπ²)   and   V33 = 6η² / (nπ²),

where γ is the Euler constant. It should be noted that the Fisher information matrix I is symmetric, so V21 = V12, V31 = V13 and V32 = V23. The diagonal elements in I⁻¹ provide the asymptotic variances of the ML estimators, V(α̂), V(δ̂) and V(η̂). These variances are also known as the minimum variance bounds (MVBs) for estimating α, δ and η.

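The closed form estimators in (2.7) and (2.8) can be sketched in code. The linearization coefficients a_k = e^{t_k}(1 − t_k), b_k = e^{t_k} with t_k = ln(−ln(1 − k/(n+1))), and the ordering of residuals from a provisional least-squares fit, are our reading of standard MML practice and are assumptions here, not code from the paper.

```python
import math
import numpy as np

def mml_alpha_series_weibull(x):
    """Sketch of the closed-form MML estimators (2.7)-(2.8) for the model
    ln X_k = delta - alpha*ln k + eps_k with EV(0, eta) errors."""
    n = len(x)
    k = np.arange(1, n + 1)
    y, c = np.log(np.asarray(x, dtype=float)), np.log(k)

    # Provisional least-squares fit, used only to order the residuals.
    slope, intercept = np.polyfit(c, y, 1)
    order = np.argsort(y - intercept - slope * c)
    ys, cs = y[order], c[order]

    # Linearize g(z) = e^z around the approximate expected EV order
    # statistics t_k = ln(-ln(1 - k/(n+1))): e^z ~ a_k + b_k z.
    t = np.log(-np.log(1.0 - k / (n + 1.0)))
    b = np.exp(t)
    a = b * (1.0 - t)

    m, Delta = b.sum(), (a - 1.0).sum()
    cbar = (b * cs).sum() / m
    ybar = (b * ys).sum() / m
    den = (b * (cs - cbar) ** 2).sum()
    E = (b * (ys - ybar) * (cs - cbar)).sum() / den
    D = ((1.0 - a) * (cs - cbar)).sum() / den
    u = (ys - ybar) - E * (cs - cbar)
    B = ((a - 1.0) * u).sum()
    C = (b * u ** 2).sum()

    eta = (B + math.sqrt(B * B + 4.0 * n * C)) / (2.0 * math.sqrt(n * (n - 2.0)))
    alpha = eta * D - E
    delta = ybar + alpha * cbar + (Delta / m) * eta

    # (2.8): Weibull mean and variance from (delta, eta).
    mu = math.gamma(1.0 + eta) * math.exp(delta)
    sigma2 = (math.gamma(1.0 + 2.0 * eta)
              - math.gamma(1.0 + eta) ** 2) * math.exp(2.0 * delta)
    return alpha, delta, eta, mu, sigma2

# Demo: Weibull(a=2, b=1) first occurrence time, so delta = 0, eta = 0.5,
# mu = Gamma(1.5) ~ 0.8862, with trend parameter alpha = 0.5.
rng = np.random.default_rng(1)
n, alpha_true = 2000, 0.5
k = np.arange(1, n + 1)
w = (-np.log(rng.uniform(size=n))) ** 0.5   # Weibull(2, 1) variates
x = w / k ** alpha_true
alpha_hat, delta_hat, eta_hat, mu_hat, s2_hat = mml_alpha_series_weibull(x)
print(alpha_hat, delta_hat, eta_hat, mu_hat)
```

On a simulated sample of this size the estimates land close to the true values, which is consistent with the asymptotic efficiency discussed below.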
The MML estimators are asymptotically equivalent to the ML estimators when the regularity conditions hold, i.e., they are asymptotically unbiased and their covariance matrix is asymptotically the same as the inverse of I, see Bhattacharyya [5] and Vaughan and Tiku [25]. Therefore, the MML estimators are highly efficient estimators.

4. Simulation results

An extensive Monte Carlo simulation study was carried out in order to compare the efficiencies of the MML estimators with the corresponding NP estimators given below, see Aydogdu and Kara [3]:

(4.1)
α̂ = [Σ ln k Σ ln X_k − n Σ (ln X_k)(ln k)] / [n Σ (ln k)² − (Σ ln k)²],
β̂ = [Σ (ln k)² Σ ln X_k − Σ ln k Σ (ln X_k)(ln k)] / [n Σ (ln k)² − (Σ ln k)²],
σ̂e² = [Σ (ln X_k)² − (1/n)(Σ ln X_k)² − α̂² {Σ (ln k)² − (1/n)(Σ ln k)²}] / (n − 2),

where all sums run over k = 1, . . . , n. The NP estimators of µ and σ² are then obtained from α̂, β̂ and σ̂e², with different expressions in the three cases α̂ < 0, 0 < α̂ ≤ 0.7 and α̂ > 0.7; for instance, µ̂ takes the forms Σ X_k / Σ k^(−α̂) or Σ k^(−α̂) X_k / Σ k^(−2α̂), and σ̂² involves exp(2β̂) σ̂e². See Aydogdu and Kara [3] for the exact expressions.

We consider the sample sizes n = 30, 50 and 100, the α parameter values α = −0.5, 0.5 and 0.9, and the shape parameters a = 1.5, 2 and 3. The scale parameter b was taken to be 1 throughout the study. The means and the MSEs were calculated for each combination, based on [100,000/n] Monte Carlo simulations. Here, [·] represents the greatest integer value. All the computations were conducted in MATLAB 7.

See Table 1 for the simulated variances of the MML estimators in (2.7) and the corresponding MVB values.

Table 1. The simulated variances of the MML estimators and the corresponding MVB values

            Simulated variances              MVB
   n        α̂        δ̂        η̂          α̂        δ̂        η̂
   30     0.0132   0.0908   0.0054      0.0119   0.0828   0.0051
   50     0.0069   0.0650   0.0030      0.0065   0.0623   0.0030
   100    0.0031   0.0424   0.0015      0.0029   0.0415   0.0015

It is clear from Table 1 that the simulated variances of the MML estimators and the corresponding MVB values become close as n increases. Therefore, we can conclude that the MML estimators are fully efficient.

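The slope and intercept part of (4.1) is ordinary least squares on the log-linear model ln X_k = β − α ln k + e_k. A minimal sketch follows; the subsequent recovery of µ̂ and σ̂², which the paper splits into cases according to α̂, is not reproduced here.

```python
import numpy as np

def np_regression_estimates(x):
    """Regression component of the NP estimators (4.1): least squares on
    ln X_k = beta - alpha*ln k + e_k."""
    n = len(x)
    lk = np.log(np.arange(1, n + 1))
    ly = np.log(np.asarray(x, dtype=float))
    Sk, Sy = lk.sum(), ly.sum()
    Skk, Sky = (lk * lk).sum(), (lk * ly).sum()
    alpha = (Sk * Sy - n * Sky) / (n * Skk - Sk * Sk)  # minus the LS slope
    beta = (Sy + alpha * Sk) / n                       # LS intercept
    resid = ly - beta + alpha * lk
    s2e = (resid ** 2).sum() / (n - 2)                 # residual variance
    return alpha, beta, s2e

# Sanity check on noiseless data ln X_k = 1.0 - 0.5*ln k: the estimators
# recover alpha = 0.5 and beta = 1.0 up to rounding error.
k = np.arange(1, 51)
x = np.exp(1.0 - 0.5 * np.log(k))
alpha_hat, beta_hat, s2e_hat = np_regression_estimates(x)
print(alpha_hat, beta_hat, s2e_hat)
```

Because the slope of the fitted line is −α, the numerator of α̂ carries the opposite sign of the usual least-squares slope formula.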
The simulated means, the MSEs and the relative efficiencies (REs)

RE(α) = MSE_MML(α)/MSE_NP(α) × 100,   RE(µ) = MSE_MML(µ)/MSE_NP(µ) × 100,   RE(σ²) = MSE_MML(σ²)/MSE_NP(σ²) × 100

of the NP and MML estimators are given in Table 2.

Table 2. The simulated means, n×MSEs and REs for the MML and NP estimators of the parameters µ, σ² and α
[One block for each combination of the shape parameter a = 1.5, 2, 3 (with µ = 0.9027, 0.8862, 0.8930 and σ² = 0.3757, 0.2146, 0.1053, respectively) and α = −0.5, 0.5, 0.9, for n = 30, 50, 100; the numerical entries are not reproduced here. All REs lie below 100, i.e., in favour of the MML estimators.]

The results given in Table 2 show that the MML estimators are more efficient than the NP estimators in every case.

5. Illustrative examples

In this section, the parameters α, µ and σ² in an α-series process are estimated on two different real data sets by using the MML estimators and the NP estimators. The first example uses data on coal-mining disasters and the second one is about the aircraft 6 data.

5.1. Coal-mining disaster data. This data set has 190 observations showing the interval in days between successive disasters in Great Britain, see Andrews and Herzberg [1]; see also Jarrett [12]. The data contain one "zero" because there were two accidents on the same day. Since two accidents on the same day usually are not at the same time of the day, an approximate time interval between them should be 0.5 days; the zero is therefore replaced by 0.5.

To obtain an idea about the underlying distribution of ε_k in the model

ln x_k = δ − α c_k + ε_k   (1 ≤ k ≤ 190),

we first constructed a Q-Q plot of the 190 residuals, see Figure 1. It should be noted that the EV Q-Q plot of the residuals is constructed by plotting the quantiles of the EV distribution against the ordered residuals ε̂_k = ln x_k − δ̂ + α̂ c_k.

Figure 1. EV Q-Q plot of the coal-mining disaster data
[Ordered residuals plotted against extreme value quantiles; both axes run roughly from −7 to 3.]

It is clear from Figure 1 that the data points do not deviate much from a straight line; therefore we conclude that EV is the most appropriate distribution for ε_k. This is also supported by the well-known Z* test statistic proposed by Tiku [21] (i.e., Z*_calculated = 0.9499 and p-value = 0.250), see also Surucu [18]. The estimates of the parameters α, µ and σ² obtained by using the MML estimators and the NP estimators are given in Table 3.

Table 3. Parameter estimations for the coal-mining disaster data

Estimator      α                  µ                    σ²
MML            −0.393 (±0.091)    36.311 (±10.1916)    1873.900
NP             −0.433 (±0.108)    22.440 (±12.344)     433.250

Values given in parentheses are the standard errors (SE) of the estimators.
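The Q-Q construction described above can be sketched as follows. For illustration the "residuals" are simulated EV variates rather than the fitted residuals ε̂_k, and the plotting positions k/(n+1) are an assumed standard choice.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in residuals: a simulated EV(0, 1) sample of the same size as the
# coal-mining data (in practice use eps_k = ln x_k - delta_hat + alpha_hat*c_k).
n = 190
eps = np.log(-np.log(rng.uniform(size=n)))
ordered = np.sort(eps)

# Theoretical EV quantiles at plotting positions p_k = k/(n+1):
# F(z) = 1 - exp(-e^z)  =>  F^{-1}(p) = ln(-ln(1 - p)).
p = np.arange(1, n + 1) / (n + 1.0)
q = np.log(-np.log(1.0 - p))

# Points close to a straight line indicate an EV fit; the Q-Q correlation
# coefficient summarizes the straightness of the plot.
r = np.corrcoef(q, ordered)[0, 1]
print(r)
```

A correlation near 1 corresponds to the "data points do not deviate much from a straight line" judgement made for Figure 1.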

Let S_k = X_1 + X_2 + · · · + X_k, k = 1, 2, . . . , n. Then a fitted value of S_k may be defined by

Ŝ_k = µ̂ Σ_{j=1}^{k} 1/j^α̂.

In Figure 2, we plot the coal-mining disaster times and their fitted times against the number of disasters by using the NP and MML estimators. It can be seen that the MML estimators provide a better fit than the NP estimators.

Figure 2. Plot of S_k against their fitted values using the NP and MML estimators for the coal-mining disaster data
[Observed, NP-fitted and MML-fitted cumulative times S_k (scale ×10⁴) plotted against k = 1, . . . , 200.]

Based on the simulation results given in Table 2, we conclude that the MML estimates of the parameters are preferable to the NP estimates according to the MSE criterion, since the MSE values of the MML estimators are smaller than the MSE values of the NP estimators when α < 0. Similar statements can also be made for the aircraft 6 data. It should be noted that the variance of the Weibull distribution is very sensitive to changes in the value of a, especially for a < 1. This happens frequently in practice, see Cox and Lewis [9]. Since the value of a is less than 1 for the coal-mining disaster data, the MML and NP estimators of the variance of the Weibull distribution are quite different from each other.

5.2. The aircraft 6 data. This data set has 30 observations showing the intervals between successive failures of the air-conditioning equipment in Boeing 720 aircraft. The data is also studied by Proschan [16] and Cox and Lewis [9]. Following similar steps to those for the coal-mining disaster data in Subsection 5.1, EV is again found to be the most appropriate distribution for the residuals, see Figure 3.
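The fitted values Ŝ_k defined above are a one-line computation; the function name below is ours, for illustration only.

```python
import numpy as np

def fitted_partial_sums(mu_hat, alpha_hat, n):
    """S_k-hat = mu_hat * sum_{j=1}^{k} 1/j^alpha_hat, for k = 1, ..., n."""
    j = np.arange(1, n + 1)
    return mu_hat * np.cumsum(j ** (-alpha_hat))

# With alpha_hat = 0 the alpha-series process reduces to a renewal-type
# process and the fitted curve is the straight line mu_hat * k.
print(fitted_partial_sums(2.0, 0.0, 5))
```

Plotting these fitted values against the observed partial sums S_k is exactly the comparison shown in Figures 2 and 4.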

Figure 3. EV Q-Q plot of the aircraft 6 data
[Ordered residuals plotted against extreme value quantiles; the points lie close to a straight line.]

This is confirmed by the value of the Z* test statistic (i.e., Z*_calculated = 0.9203 and p-value = 0.4300), see also Tiku [21] and Surucu [18]. Again, based on the simulation results given in Table 2, the MML estimators are chosen for estimating the parameters α, µ and σ², because the MML estimators outperform the NP estimators in terms of the MSE criterion as in the coal-mining disasters data. The estimates of the unknown parameters are given in Table 4.

Table 4. Parameter estimates for the aircraft 6 data set

Estimator      α                  µ                      σ²
MML            0.4708 (±0.249)    177.0362 (±51.173)     40980
NP             0.4775 (±0.272)    163.1353 (±56.234)     15061

Values given in parentheses are the standard errors (SE) of the estimators.

In Figure 4, we plot the failure times of the air-conditioning equipment and their fitted times against the number of failures by using the NP and MML estimators, respectively. It can be seen that the MML estimators provide a better fit than the NP estimators.

Figure 4. Plot of S_k against their fitted values using the NP and MML estimators for the aircraft 6 data
[Observed, NP-fitted and MML-fitted cumulative failure times S_k (0 to 2000) plotted against k = 1, . . . , 30.]

6. Conclusions

We have compared the efficiencies of the MML estimators with the NP estimators via a Monte Carlo simulation study. The parameters α, µ and σ² for an α-series process with Weibull distribution have been estimated and compared with the NP estimators. It is shown that the MML estimators have higher efficiencies than the NP estimators. We also applied the MML methodology to two real life data sets and saw that the MML estimators provide a better fit than the NP estimators.

Acknowledgment

We are grateful to the referee for very helpful comments.

References
[1] Andrews, D. F. and Herzberg, A. M. Data (Springer-Verlag, New York, 1985).
[2] Ascher, H. and Feingold, H. Repairable Systems Reliability (Marcel Dekker, New York, 1984).
[3] Aydogdu, H. and Kara, M. Nonparametric estimation in α-series processes, Computational Statistics and Data Analysis (Submitted, 2009).
[4] Barnett, V. D. Evaluation of the maximum likelihood estimator where the likelihood equation has multiple roots, Biometrika 53, 151–165, 1966.
[5] Bhattacharyya, G. K. The asymptotics of maximum likelihood and related estimators based on type II censored data, J. Amer. Stat. Assoc. 80, 398–404, 1985.
[6] Braun, W. J., Li, W. and Zhao, Y. Q. Properties of the geometric and related processes, Naval Research Logistics 52, 607–616, 2005.
[7] Braun, W. J., Li, W. and Zhao, Y. Q. Some theoretical properties of the geometric and α-series processes, Communications in Statistics: Theory and Methods 37, 1483–1496, 2008.

[8] Chan, J. S. K., Lam, Y. and Leung, D. Y. P. Statistical inference for geometric processes with gamma distributions, Computational Statistics and Data Analysis 47, 565–581, 2004.
[9] Cox, D. R. and Lewis, P. A. W. The Statistical Analysis of Series of Events (Methuen, London, 1966).
[10] Islam, M. Q. and Tiku, M. L. Multiple linear regression model under nonnormality, Communications in Statistics: Theory and Methods 33 (10), 2443–2467, 2004.
[11] Islam, M. Q., Tiku, M. L. and Yildirim, F. Non-normal regression. Part I: Skew distributions, Communications in Statistics: Theory and Methods 30, 993–1020, 2001.
[12] Jarrett, R. G. A note on the intervals between coal-mining disasters, Biometrika 66, 191–193, 1979.
[13] Lam, Y. Geometric process and replacement problem, Acta Math. Appl. Sinica 4, 366–377, 1988.
[14] Lam, Y. A note on the optimal replacement problem, Adv. Appl. Probab. 20, 479–482, 1988.
[15] Lam, Y. and Chan, S. K. Statistical inference for geometric processes with lognormal distribution, Computational Statistics and Data Analysis 27, 99–112, 1998.
[16] Proschan, F. Theoretical explanation of observed decreasing failure rate, Technometrics 5, 375–383, 1963.
[17] Puthenpura, S. and Sinha, N. K. Modified maximum likelihood method for the robust estimation of system parameters from very noisy data, Automatica 22, 231–235, 1986.
[18] Surucu, B. A power comparison and simulation study of goodness-of-fit tests, Computers and Mathematics with Applications 56, 1617–1625, 2008.
[19] Tiku, M. L. Estimating the mean and standard deviation from a censored normal sample, Biometrika 54, 155–165, 1967.
[20] Tiku, M. L. Estimating the parameters of log-normal distribution from censored samples, J. Amer. Stat. Assoc. 63, 134–140, 1968.
[21] Tiku, M. L. Goodness of fit statistics based on the spacings of complete or censored samples, Austral. J. Statist. 22, 260–275, 1980.
[22] Tiku, M. L., Tan, W. Y. and Balakrishnan, N. Robust Inference (Marcel Dekker, New York, 1986).
[23] Tiku, M. L. and Akkaya, A. D. Robust Estimation and Hypothesis Testing (New Age International (P) Limited, Publishers, New Delhi, 2004).
[24] Vaughan, D. C. On the Tiku-Suresh method of estimation, Communications in Statistics: Theory and Methods 21, 451–469, 1992.
[25] Vaughan, D. C. and Tiku, M. L. Estimation and hypothesis testing for a nonnormal bivariate distribution with applications, Mathematical and Computer Modelling 32, 53–67, 2000.