
Volume 6, Issue 6, June – 2021 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

On Bayesian Estimation of Function of Unknown
Parameter of Modified Power Series Distribution
Randhir Singh
Department of Statistics, Ewing Christian College, Prayagraj, India.

Abstract:- This paper deals with the Bayesian estimation of a function of the unknown parameter θ of the Modified Power Series distribution. These estimates have forms similar to the classical MVUE given by Gupta (1977), but are better than the MVUE in the sense that they increase the range of estimation. The prior distribution for the unknown parameter θ varies from distribution to distribution, depending upon the range of θ. On the part of loss functions, the Squared Error Loss Function (SELF) and two different forms of the Weighted Squared Error Loss Function (WSELF) have been considered.

Keywords:- Modified Power Series Distribution, Bayes Estimator, Squared Error Loss Function, Weighted Squared Error Loss Function.

I. INTRODUCTION

A discrete random variable X is said to have the Modified Power Series distribution if its probability mass function (p.m.f.) p_θ(x) = P(X = x) is given by,

p_θ(x) = a(x){g(θ)}^x / f(θ), if x ∈ S, θ ∈ A; 0, otherwise. (1)

Where, θ is the unknown parameter of the distribution, A ⊆ ℛ (the set of real numbers), a(x) > 0, S is a subset of the set of non-negative integers, g(θ) > 0, and f(θ) is a function of θ such that f(θ) = ∑_{x∈S} a(x){g(θ)}^x.

As mentioned by Gupta (1974), the p.m.f. given by (1) covers a wide range of discrete distributions. When g(θ) = θ, (1) coincides with the class of discrete distributions given by Roy and Mitra (1957).

Gupta (1977) has obtained the MVUE of ϕ(θ) = θ^r, r ≥ 1. For values of r < 1, no unbiased estimator of ϕ(θ) exists and hence no MVUE of ϕ(θ) exists. This is a serious limitation of this classical estimator. In this paper, the Bayes Estimator of ϕ(θ) = θ^r, r ∈ (−∞, ∞), has been obtained. Here the range of estimation is increased, as we have taken r ∈ (−∞, ∞).

On the part of loss functions, the usual Squared Error Loss Function (SELF) and two different forms of the Weighted Squared Error Loss Function (WSELF) have been taken.

II. NOTATIONS AND RESULTS USED

Let X_1, X_2, X_3, …, X_N be a random sample of size N from the p.m.f. given by (1). Then,

T_N = ∑_{i=1}^{N} X_i (2)

We shall use the following results as given by Abramowitz and Stegun (1964):

Γ(x) = ∫_0^∞ u^{x−1} e^{−u} du (3)

Γ(x)b^{−x} = ∫_0^∞ u^{x−1} e^{−bu} du (4)

Γ(a)Γ(b−a)M(a, b, z)/Γ(b) = ∫_0^1 u^{a−1}(1−u)^{b−a−1} e^{zu} du (5)

Where, M(a, b, z) is the Confluent Hypergeometric Function and has the series representation,

M(a, b, z) = ∑_{n=0}^{∞} (a)_n z^n / ((b)_n n!) (6)

Where, (a)_0 = 1 and

(a)_n = ∏_{i=1}^{n} (a + i − 1) (7)

For the observed value t_N = ∑_{i=1}^{N} x_i of the statistic T_N = ∑_{i=1}^{N} X_i, the likelihood function, denoted by L(θ), is given by,

L(θ) = k{g(θ)}^{t_N} {f(θ)}^{−N} (8)

Where, k is a function of x_1, x_2, x_3, …, x_N and does not contain θ.

Let π(θ) be the prior probability density function of θ; then the posterior probability density function of θ, denoted by π(θ/t_N), is given by,

π(θ/t_N) = L(θ)π(θ) / ∫_A L(θ)π(θ)dθ (9)

Under the Squared Error Loss Function (SELF), L(ϕ(θ), d) = (ϕ(θ) − d)^2, the Bayes Estimate of ϕ(θ), denoted by ϕ̂_B, is given by,

ϕ̂_B = ∫_A ϕ(θ)π(θ/t_N)dθ (10)

Similarly, under the Weighted Squared Error Loss Function (WSELF), L(ϕ(θ), d) = W(θ)(ϕ(θ) − d)^2, where W(θ) is a function of θ, the Bayes Estimate of ϕ(θ), denoted by ϕ̂_W, is given by,

ϕ̂_W = ∫_A W(θ)ϕ(θ)π(θ/t_N)dθ / ∫_A W(θ)π(θ/t_N)dθ (11)
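The estimates in (10)–(13) are ratios of posterior integrals, so for any concrete likelihood and prior they can be checked numerically. The following sketch (not part of the paper; the function names and the flat-prior example are illustrative assumptions) approximates these ratios by a midpoint rule on A = (0, 1):

```python
def bayes_estimate(likelihood, prior, r, weight=lambda t: 1.0, n=20000):
    """Approximate the Bayes estimate of phi(theta) = theta**r, i.e. the
    ratio of integrals in (11), by a composite midpoint rule on (0, 1).
    weight = 1 gives the SELF estimate (10); weight = theta**-2 the MELO (12)."""
    num = den = 0.0
    for i in range(n):
        t = (i + 0.5) / n                    # midpoint of the i-th subinterval
        post = likelihood(t) * prior(t)      # unnormalised posterior, as in (9)
        num += weight(t) * t**r * post
        den += weight(t) * post
    return num / den                         # normalising constants cancel

# Illustration: likelihood theta^5 (1 - theta)^5 with a flat prior,
# so the posterior is Beta(6, 6).
like = lambda t: t**5 * (1 - t)**5
flat = lambda t: 1.0

self_est = bayes_estimate(like, flat, r=1)                          # SELF, (10)
melo_est = bayes_estimate(like, flat, r=1, weight=lambda t: t**-2)  # MELO, (12)
```

For the Beta(6, 6) posterior the SELF estimate of θ is the posterior mean 6/12 = 0.5, and the MELO estimate is B(5, 6)/B(4, 6) = 0.4; the quadrature reproduces both closely.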

IJISRT21JUN668 www.ijisrt.com 861


We have taken two different forms of W(θ), as given below:

(i). W(θ) = θ^{−2}. The Bayes Estimate of ϕ(θ), denoted by ϕ̂_M, is known as the Minimum Expected Loss (MELO) Estimate and is given by,

ϕ̂_M = ∫_A θ^{−2}ϕ(θ)π(θ/t_N)dθ / ∫_A θ^{−2}π(θ/t_N)dθ (12)

This loss function was used by Tummala and Sathe (1978) for estimating the reliability of certain life time distributions, and by Zellner and Park (1979) for estimating functions of parameters in econometric models.

(ii). W(θ) = θ^{−2}e^{−aθ}, a ≥ 0. The Bayes Estimate of ϕ(θ), denoted by ϕ̂_E, is known as the Exponentially Weighted Minimum Expected Loss (EWMELO) Estimate and is given by,

ϕ̂_E = ∫_A θ^{−2}e^{−aθ}ϕ(θ)π(θ/t_N)dθ / ∫_A θ^{−2}e^{−aθ}π(θ/t_N)dθ (13)

This type of loss function was used by the author (1997) for the first time, in his work for the D.Phil.

Now, we shall consider some special cases of the p.m.f. given by (1) and obtain the corresponding Bayes Estimate of ϕ(θ) = θ^r, r ∈ (−∞, ∞), in each case.

III. GENERALIZED NEGATIVE BINOMIAL DISTRIBUTION (GNBD)

If we take a(x) = nΓ(n+βx)/(Γ(x+1)Γ(n+βx−x+1)), g(θ) = θ(1−θ)^{β−1}, f(θ) = (1−θ)^{−n}, S = {0, 1, 2, …}, A = (0, 1), β ≥ 0, θβ ∈ (−1, 1), n being a positive integer, the corresponding discrete random variable X is said to have the Generalized Negative Binomial distribution.

In this case,

L(θ) = kθ^{t_N}(1−θ)^{t_N(β−1)+nN} (14)

Since, in this case, 0 < θ < 1, we have taken two different prior distributions, namely π_1(θ) and π_2(θ), as given below:

π_1(θ) = θ^{p−1}(1−θ)^{q−1}/B(p, q), if p > 0, q > 0, 0 < θ < 1; 0, otherwise. (15)

And,

π_2(θ) = e^{−bθ}θ^{p−1}(1−θ)^{q−1}/(B(p, q)M(p, p+q, −b)), if p > 0, q > 0, 0 < θ < 1, b ≥ 0; 0, otherwise. (16)

Where,

B(p, q) = Γ(p)Γ(q)/Γ(p+q) (17)

The posterior p.d.f. of θ corresponding to the prior π_1(θ), denoted by π_1(θ/t_N), is given by,

π_1(θ/t_N) = θ^{t_N+p−1}(1−θ)^{t_N(β−1)+nN+q−1}/B(t_N+p, (β−1)t_N+nN+q), if p > 0, q > 0, 0 < θ < 1; 0, otherwise. (18)

Similarly, the posterior p.d.f. of θ corresponding to the prior π_2(θ), denoted by π_2(θ/t_N), is given by,

π_2(θ/t_N) = e^{−bθ}θ^{t_N+p−1}(1−θ)^{t_N(β−1)+nN+q−1}/K, if p > 0, q > 0, 0 < θ < 1, b ≥ 0; 0, otherwise. (19)

Where,

K = B(t_N+p, (β−1)t_N+nN+q)M(t_N+p, p+q+βt_N+nN, −b) (20)

Under the SELF and corresponding to the posterior distribution given by (18), the Bayes Estimate of ϕ(θ) = θ^r, denoted by θ̂^t_1B, is given by,

θ̂^t_1B = B(t_N+p+r, (β−1)t_N+nN+q) / B(t_N+p, (β−1)t_N+nN+q) (21)

Similarly, under the WSELF, when W(θ) = θ^{−2} and corresponding to the posterior distribution given by (18), the MELO Estimate of ϕ(θ) = θ^r, denoted by θ̂^t_1M, is given by,

θ̂^t_1M = B(t_N+p+r−2, (β−1)t_N+nN+q) / B(t_N+p−2, (β−1)t_N+nN+q) (22)

Under the WSELF, when W(θ) = θ^{−2}e^{−aθ} and corresponding to the posterior distribution given by (18), the EWMELO Estimate of ϕ(θ) = θ^r, denoted by θ̂^t_1E, is given by,

θ̂^t_1E = B(t_N+p+r−2, (β−1)t_N+nN+q)M_2 / (B(t_N+p−2, (β−1)t_N+nN+q)M_1) (23)

Where,

M_1 = M(t_N+p−2, p+q+βt_N+nN−2, −a) (24)

M_2 = M(t_N+p+r−2, p+q+βt_N+nN+r−2, −a) (25)

On the other hand, under the SELF and corresponding to the posterior distribution given by (19), the Bayes Estimate of ϕ(θ) = θ^r, denoted by θ̂^t_2B, is given by,

θ̂^t_2B = B(t_N+p+r, (β−1)t_N+nN+q)M_4 / (B(t_N+p, (β−1)t_N+nN+q)M_3) (26)

Where,

M_3 = M(t_N+p, p+q+βt_N+nN, −b) (27)

M_4 = M(t_N+p+r, p+q+βt_N+nN+r, −b) (28)

Similarly, under the WSELF, when W(θ) = θ^{−2} and corresponding to the posterior distribution given by (19), the MELO Estimate of ϕ(θ) = θ^r, denoted by θ̂^t_2M, is given by,

θ̂^t_2M = B(t_N+p+r−2, (β−1)t_N+nN+q)M_6 / (B(t_N+p−2, (β−1)t_N+nN+q)M_5) (29)

Where,

M_5 = M(t_N+p−2, p+q+βt_N+nN−2, −b) (30)

M_6 = M(t_N+p+r−2, p+q+βt_N+nN+r−2, −b) (31)
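Since (21)–(25) involve only beta-function ratios and Kummer's function, they can be evaluated stably in log space. The following is a minimal sketch (the function names are my own; only the Python standard library is assumed), with M(a, b, z) summed directly from the series (6)–(7):

```python
import math

def log_beta(a, b):
    """log B(a, b) via (17), using log-gamma for numerical stability."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def kummer_m(a, b, z, terms=200):
    """Confluent hypergeometric M(a, b, z) from the series (6)-(7);
    adequate for the moderate arguments used here."""
    total = term = 1.0
    for n in range(terms):
        term *= (a + n) * z / ((b + n) * (n + 1))  # ratio of consecutive terms
        total += term
    return total

def gnbd_self(tN, p, q, beta, n, N, r):
    """SELF estimate (21) of theta**r under the beta prior (15)."""
    q2 = (beta - 1) * tN + n * N + q
    return math.exp(log_beta(tN + p + r, q2) - log_beta(tN + p, q2))

def gnbd_melo(tN, p, q, beta, n, N, r):
    """MELO estimate (22): first beta argument shifted by -2."""
    q2 = (beta - 1) * tN + n * N + q
    return math.exp(log_beta(tN + p + r - 2, q2) - log_beta(tN + p - 2, q2))
```

With β = 1 the same calls give the NBD special case; the EWMELO estimate (23) is the analogous ratio with the `kummer_m` factors M_1 and M_2 of (24)–(25) attached.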

Finally, under the WSELF, when W(θ) = θ^{−2}e^{−aθ} and corresponding to the posterior distribution given by (19), the EWMELO Estimate of ϕ(θ) = θ^r, denoted by θ̂^t_2E, is given by,

θ̂^t_2E = B(t_N+p+r−2, (β−1)t_N+nN+q)M_8 / (B(t_N+p−2, (β−1)t_N+nN+q)M_7) (32)

Where,

M_7 = M(t_N+p−2, p+q+βt_N+nN−2, −(a+b)) (33)

M_8 = M(t_N+p+r−2, p+q+βt_N+nN+r−2, −(a+b)) (34)

Remark (1): The MVUE of θ^r is 0 if z < r (z = t_N), which is a serious limitation of the MVUE in this case. The Bayes Estimates, on the other hand, are 0 only if r < 0 is such that t_N+p < −r, p+q+βt_N+nN < −r, t_N+p−2 < −r, or p+q+βt_N+nN−2 < −r, depending on the loss function and the posterior distribution.

SPECIAL CASE: Since, for β = 1, the GNBD coincides with the Negative Binomial Distribution (NBD), all the results derived above give the Bayes Estimate of θ^r for the NBD when β = 1. Additionally, when β = 1 and n = 1, we get the Bayes Estimate of θ^r for the Geometric distribution. When β = 0, we get the Bayes Estimate of θ^r for the Binomial distribution.

IV. GENERALIZED LOGARITHMIC SERIES DISTRIBUTION (GLSD)

If we take a(x) = Γ(βx)/(Γ(x+1)Γ(βx−x+1)), g(θ) = θ(1−θ)^{β−1}, f(θ) = −ln(1−θ), S = {1, 2, …}, A = (0, 1), β ≥ 0, θβ ∈ (0, 1), the corresponding discrete random variable X is said to have the Generalized Logarithmic Series distribution.

Since, in this case, 0 < θ < 1, we take π_3(θ) as the p.d.f. of the Negative Log Gamma distribution, given by

π_3(θ) = (k+1)^{N+1}(1−θ)^k{−ln(1−θ)}^N/Γ(N+1), if k > 0, 0 < θ < 1; 0, otherwise. (35)

Where, N, a positive integer, is the same as the size of the random sample.

The posterior p.d.f. of θ, denoted by π_3(θ/t_N), is given by

π_3(θ/t_N) = θ^{t_N}(1−θ)^{t_N(β−1)+k}/B(t_N+1, (β−1)t_N+k+1), if k > 0, 0 < θ < 1; 0, otherwise. (36)

Under the SELF and corresponding to the posterior distribution given by (36), the Bayes Estimate of ϕ(θ) = θ^r, denoted by θ̂_tB, is given by,

θ̂_tB = B(t_N+r+1, (β−1)t_N+k+1) / B(t_N+1, (β−1)t_N+k+1) (37)

Similarly, under the WSELF, when W(θ) = θ^{−2} and corresponding to the posterior distribution given by (36), the MELO Estimate of ϕ(θ) = θ^r, denoted by θ̂_tM, is given by,

θ̂_tM = B(t_N+r−1, (β−1)t_N+k+1) / B(t_N−1, (β−1)t_N+k+1) (38)

Under the WSELF, when W(θ) = θ^{−2}e^{−aθ} and corresponding to the posterior distribution given by (36), the EWMELO Estimate of ϕ(θ) = θ^r, denoted by θ̂_tE, is given by,

θ̂_tE = B(t_N+r−1, (β−1)t_N+k+1)M_10 / (B(t_N−1, (β−1)t_N+k+1)M_9) (39)

Where,

M_9 = M(t_N−1, βt_N+k, −a) (40)

M_10 = M(t_N+r−1, βt_N+r+k, −a) (41)

Remark (2): The MVUE of θ^r is 0 if z < r (z = t_N), which is a serious limitation of the MVUE in this case. The Bayes Estimates, on the other hand, are 0 only if r < 0 is such that t_N+1 < −r, t_N−1 < −r, or βt_N+k < −r, depending on the loss function.

SPECIAL CASE: Since, for β = 1, the GLSD coincides with the Logarithmic Series Distribution (LSD), all the results derived above give the Bayes Estimate of θ^r for the LSD when β = 1.

V. GENERALIZED POISSON DISTRIBUTION (GPD)

If we take a(x) = λ_1(λ_1+λ_2x)^{x−1}/Γ(x+1), g(θ) = θe^{−θλ_2}, f(θ) = e^{θλ_1}, S = {0, 1, 2, …}, A = (0, ∞), λ_2 ∈ (−1, 1), λ_1 > 0, the corresponding discrete random variable X is said to have the Generalized Poisson distribution.

Since, in this case, θ > 0, we take π_4(θ) as the p.d.f. of the Gamma distribution, given by

π_4(θ) = c^α θ^{α−1}e^{−cθ}/Γ(α), if c > 0, α > 0, θ > 0; 0, otherwise. (42)

The posterior p.d.f. of θ, denoted by π_4(θ/t_N), is given by

π_4(θ/t_N) = (λ_2t_N+Nλ_1+c)^{t_N+α} θ^{t_N+α−1} e^{−θ(λ_2t_N+Nλ_1+c)}/Γ(t_N+α), if θ > 0; 0, otherwise. (43)

Under the SELF and corresponding to the posterior distribution given by (43), the Bayes Estimate of ϕ(θ) = θ^r, denoted by θ̂_tB, is given by,

θ̂_tB = Γ(t_N+α+r) / (Γ(t_N+α)(λ_2t_N+Nλ_1+c)^r) (44)
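The GLSD estimates (37)–(39) are again beta-function ratios (with Kummer factors in the EWMELO case), so they can be evaluated exactly as in the GNBD case. A minimal standard-library sketch for (37) and (38) (the function names are my own):

```python
import math

def log_beta(a, b):
    """log B(a, b), computed from log-gamma values as in (17)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def glsd_self(tN, k, beta, r):
    """SELF estimate (37) of theta**r under the negative log gamma prior (35)."""
    q = (beta - 1) * tN + k + 1
    return math.exp(log_beta(tN + r + 1, q) - log_beta(tN + 1, q))

def glsd_melo(tN, k, beta, r):
    """MELO estimate (38): both first arguments shifted by -2."""
    q = (beta - 1) * tN + k + 1
    return math.exp(log_beta(tN + r - 1, q) - log_beta(tN - 1, q))
```

With β = 1 these give the LSD special case directly; for example, t_N = 6, k = 2, r = 1 yields a SELF estimate of B(8, 3)/B(7, 3) = 7/10.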

Similarly, under the WSELF, when W(θ) = θ^{−2} and corresponding to the posterior distribution given by (43), the MELO Estimate of ϕ(θ) = θ^r, denoted by θ̂_tM, is given by,

θ̂_tM = Γ(t_N+α+r−2) / (Γ(t_N+α−2)(λ_2t_N+Nλ_1+c)^r) (45)

Under the WSELF, when W(θ) = θ^{−2}e^{−aθ} and corresponding to the posterior distribution given by (43), the EWMELO Estimate of ϕ(θ) = θ^r, denoted by θ̂_tE, is given by,

θ̂_tE = Γ(t_N+α+r−2) / (Γ(t_N+α−2)(λ_2t_N+Nλ_1+c+a)^r) (46)

Remark (3): The MVUE of θ^r exists as long as z ≥ r (z = t_N) and is 0 if z < r, which is a serious limitation of the MVUE in this case. The Bayes Estimates, on the other hand, are free from such restrictions between t_N and r as far as r ≥ 1. This is another advantage of Bayesian Estimation over the MVUE. However, if r < 0, the Bayes Estimates are 0 if t_N+α < −r in (44), and t_N+α−2 < −r in (45) and (46), respectively, depending on the loss function.

SPECIAL CASE: For λ_1 = 1 and λ_2 = 0, the GPD coincides with the Poisson Distribution. So, putting λ_1 = 1 and λ_2 = 0, equations (44), (45) and (46) respectively give the Bayes Estimates of θ^r for the Poisson Distribution as follows:

θ̂_tB = Γ(t_N+α+r) / (Γ(t_N+α)(N+c)^r) (47)

θ̂_tM = Γ(t_N+α+r−2) / (Γ(t_N+α−2)(N+c)^r) (48)

θ̂_tE = Γ(t_N+α+r−2) / (Γ(t_N+α−2)(N+c+a)^r) (49)
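As a sanity check (my own illustration, not from the paper), the Poisson-case estimates (47)–(49) can be evaluated with log-gamma functions; for r = 1, (47) reduces to the familiar Gamma–Poisson posterior mean (t_N + α)/(N + c):

```python
import math

def poisson_theta_r(tN, alpha, c, N, r, a=0.0, weighted=False):
    """Poisson-case estimates: (47) if weighted=False; (48) if weighted=True
    and a = 0; (49) if weighted=True and a > 0."""
    shift = 2 if weighted else 0                 # the theta**-2 weight shifts args
    rate = N + c + (a if weighted else 0.0)      # extra exponential adds a to rate
    return math.exp(math.lgamma(tN + alpha + r - shift)
                    - math.lgamma(tN + alpha - shift)) / rate**r

# For r = 1, (47) is the mean of the Gamma(tN + alpha, N + c) posterior:
mean_check = poisson_theta_r(tN=4, alpha=2, c=1, N=3, r=1)  # (4 + 2)/(3 + 1) = 1.5
```

With t_N = 4, α = 2, N = 3, c = 1 this gives 6/4 = 1.5, matching the posterior mean, while the MELO version (48) gives Γ(5)/(Γ(4)·4) = 1.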

REFERENCES

[1]. Abramowitz, M. and Stegun, I. (1964): Handbook of Mathematical Functions; U.S. Bureau of Standards, Washington D.C.
[2]. Gupta, R.C. (1974): "Modified Power Series Distribution and Some of its Applications", Sankhyā B 35, pp. 288-298.
[3]. Gupta, R.C. (1977): "Minimum Variance Unbiased Estimation in a Modified Power Series Distribution and Some of its Applications", Commun. Statist.-Theor. Meth., A6(10), pp. 977-991.
[4]. Roy, J. and Mitra, S.K. (1957): "Unbiased Minimum Variance Estimation in a Class of Discrete Distributions", Sankhyā, Vol. 18, pp. 371-378.
[5]. Singh, Randhir (1997): D.Phil. Thesis (Unpublished), Department of Mathematics and Statistics, University of Allahabad, Allahabad, India.
[6]. Tummala, V.M. and Sathe, P.T. (1978): "Minimum Expected Loss Estimators of Reliability and Parameters of Certain Life Time Distributions", IEEE Transactions on Reliability, Vol. R-27, No. 4, pp. 283-285.
[7]. Zellner, A. and Park, S.B. (1979): "Minimum Expected Loss Estimators (MELO) of Functions of Parameters and Structural Coefficients of Econometric Models", JASA 74, pp. 185-193.
