ISSN No:-2456-2165
Abstract:- This paper deals with the Bayesian estimation of a function of the unknown parameter θ of the Modified Power Series distribution. These estimates have forms similar to the classical Minimum Variance Unbiased Estimator (MVUE) given by Gupta (1977), but are better than the MVUE in the sense that they increase the range of estimation. The prior distribution for the unknown parameter θ varies from distribution to distribution, depending upon the range of θ. On the part of loss functions, the Squared Error Loss Function (SELF) and two different forms of the Weighted Squared Error Loss Function (WSELF) have been considered.

Keywords:- Modified Power Series Distribution, Bayes Estimator, Squared Error Loss Function, Weighted Squared Error Loss Function.

I. INTRODUCTION

A discrete random variable X is said to have the Modified Power Series distribution if its probability mass function (p.m.f.) pθ(x) = P(X = x) is given by,

pθ(x) = { a(x){g(θ)}^x / f(θ), if x ∈ S, θ ∈ A
        { 0,                   otherwise.                    (1)

Where, θ is the unknown parameter of the distribution, A ⊆ ℛ (the set of real numbers), a(x) > 0, S is a subset of the set of non-negative integers, g(θ) > 0 and f(θ) = ∑_{x∈S} a(x){g(θ)}^x, so that ∑_{x∈S} pθ(x) = 1.

As mentioned by Gupta (1974), the p.m.f. given by (1) covers a wide range of discrete distributions. When g(θ) = θ, (1) coincides with the class of discrete distributions given by Roy and Mitra (1957).

Gupta (1977) has obtained the MVUE of ϕ(θ) = θ^r, r ≥ 1. For values of r < 1, no unbiased estimator of ϕ(θ) exists and hence no MVUE of ϕ(θ) exists. This is a serious limitation of the classical estimator. In this paper, Bayes Estimators of ϕ(θ) = θ^r, r ∈ (−∞, ∞), have been obtained; the range of estimation is thus increased, as we have taken r ∈ (−∞, ∞).

On the part of loss functions, the usual Squared Error Loss Function (SELF) and two different forms of the Weighted Squared Error Loss Function (WSELF) have been taken.

II. NOTATIONS AND RESULTS USED

Let X1, X2, X3, …, XN be a random sample of size N from the p.m.f. given by (1). Then,

T_N = ∑_{i=1}^{N} X_i                    (2)

We shall use the following results as given by Abramowitz and Stegun (1964):

Γ(x) = ∫_0^∞ u^{x−1} e^{−u} du                    (3)

Γ(x) b^{−x} = ∫_0^∞ u^{x−1} e^{−bu} du                    (4)

Γ(a)Γ(b−a)M(a, b, z) / Γ(b) = ∫_0^1 u^{a−1} (1−u)^{b−a−1} e^{zu} du                    (5)

Where, M(a, b, z) is the Confluent Hypergeometric Function and has the series representation,

M(a, b, z) = ∑_{n=0}^{∞} (a)_n z^n / ((b)_n n!)                    (6)

Where, (a)_0 = 1 and

(a)_n = ∏_{i=1}^{n} (a + i − 1)                    (7)

For the observed value t_N = ∑_{i=1}^{N} x_i of the statistic T_N = ∑_{i=1}^{N} X_i, the likelihood function, denoted by L(θ), is given by,

L(θ) = k {g(θ)}^{t_N} {f(θ)}^{−N}                    (8)

Where, k is a function of x1, x2, x3, …, xN and does not contain θ.

Let π(θ) be the prior probability density function of θ; then the posterior probability density function of θ, denoted by π(θ/t_N), is given by,

π(θ/t_N) = L(θ)π(θ) / ∫_A L(θ)π(θ) dθ                    (9)

Under the Squared Error Loss Function (SELF), L(ϕ(θ), d) = (ϕ(θ) − d)^2, the Bayes Estimate of ϕ(θ), denoted by ϕ̂_B, is given by,

ϕ̂_B = ∫_A ϕ(θ) π(θ/t_N) dθ                    (10)

Similarly, under the Weighted Squared Error Loss Function (WSELF), L(ϕ(θ), d) = W(θ)(ϕ(θ) − d)^2, where W(θ) is a function of θ, the Bayes Estimate of ϕ(θ), denoted by ϕ̂_W, is given by,

ϕ̂_W = ∫_A W(θ) ϕ(θ) π(θ/t_N) dθ / ∫_A W(θ) π(θ/t_N) dθ                    (11)
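The series representation (6)–(7) and the integral representation (5) can be cross-checked numerically. The sketch below (Python; the argument values are arbitrary illustrative choices, not values from the paper) computes M(a, b, z) both ways; the Simpson quadrature assumes a ≥ 1 and b − a ≥ 1 so that the integrand stays bounded on [0, 1]:

```python
import math

def kummer_M(a, b, z, terms=200):
    # series (6)-(7): M(a,b,z) = sum_n (a)_n z^n / ((b)_n n!), with (a)_0 = 1
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * z / ((b + n) * (n + 1))
        total += term
    return total

def kummer_M_integral(a, b, z, steps=20000):
    # rearranged form of (5):
    # M(a,b,z) = Gamma(b)/(Gamma(a)Gamma(b-a)) * int_0^1 u^(a-1)(1-u)^(b-a-1) e^(zu) du
    # evaluated by composite Simpson's rule
    f = lambda u: u**(a - 1) * (1 - u)**(b - a - 1) * math.exp(z * u)
    h = 1.0 / steps
    s = f(0.0) + f(1.0)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(i * h)
    integral = s * h / 3.0
    return math.gamma(b) / (math.gamma(a) * math.gamma(b - a)) * integral

# both routes should agree for hypothetical arguments a=1.5, b=4.0, z=-2.0
print(kummer_M(1.5, 4.0, -2.0), kummer_M_integral(1.5, 4.0, -2.0))
```

With z = −b, these are exactly the M-factors that appear in the closed-form estimates derived later.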
If we take

a(x) = n Γ(n+βx) / {Γ(x+1) Γ(n+βx−x+1)}, g(θ) = θ(1−θ)^{β−1}, f(θ) = (1−θ)^{−n},

S = {0, 1, 2, …}, A = (0, 1), β ≥ 0, θβ ∈ (−1, 1), n being a positive integer, the corresponding discrete random variable X is said to have the Generalized Negative Binomial distribution.

In this case,

L(θ) = k θ^{t_N} (1−θ)^{t_N(β−1)+nN}                    (14)

Since, in this case, 0 < θ < 1, we have taken two different prior distributions, namely, π1(θ) and π2(θ), as given below:

π1(θ) = { θ^{p−1}(1−θ)^{q−1} / B(p, q), if p > 0, q > 0, 0 < θ < 1
        { 0,                             otherwise.                    (15)

And,

π2(θ) = { e^{−bθ} θ^{p−1}(1−θ)^{q−1} / {B(p, q) M(p, p+q, −b)}, if p > 0, q > 0, 0 < θ < 1, b ≥ 0
        { 0,                                                     otherwise.                    (16)

Where,

B(p, q) = Γ(p)Γ(q) / Γ(p+q)                    (17)

The posterior p.d.f. of θ, corresponding to the prior π1(θ), denoted by π1(θ/t_N), is given by,

π1(θ/t_N) = θ^{t_N+p−1} (1−θ)^{(β−1)t_N+nN+q−1} / B(t_N+p, (β−1)t_N+nN+q), 0 < θ < 1                    (18)

Under the WSELF, when W(θ) = θ^{−2} e^{−aθ} and corresponding to the posterior distribution given by (18), the EWMELO Estimate of ϕ(θ) = θ^r, denoted by θ̂_t^{1E}, is given by,

θ̂_t^{1E} = B(t_N+p+r−2, (β−1)t_N+nN+q) M_2 / {B(t_N+p−2, (β−1)t_N+nN+q) M_1}                    (23)

Where,

M_1 = M(t_N+p−2, p+q+βt_N+nN−2, −a)                    (24)

M_2 = M(t_N+p+r−2, p+q+βt_N+nN+r−2, −a)                    (25)

On the other hand, under the SELF and corresponding to the posterior distribution given by (19), the Bayes Estimate of ϕ(θ) = θ^r, denoted by θ̂_t^{2B}, is given by,

θ̂_t^{2B} = B(t_N+p+r, (β−1)t_N+nN+q) M_4 / {B(t_N+p, (β−1)t_N+nN+q) M_3}                    (26)

Where,

M_3 = M(t_N+p, p+q+βt_N+nN, −b)                    (27)

M_4 = M(t_N+p+r, p+q+βt_N+nN+r, −b)                    (28)

Similarly, under the WSELF, when W(θ) = θ^{−2} and corresponding to the posterior distribution given by (19), the MELO Estimate of ϕ(θ) = θ^r, denoted by θ̂_t^{2M}, is given by,

θ̂_t^{2M} = B(t_N+p+r−2, (β−1)t_N+nN+q) M_6 / {B(t_N+p−2, (β−1)t_N+nN+q) M_5}                    (29)

Where,

M_5 = M(t_N+p−2, p+q+βt_N+nN−2, −b)                    (30)

M_6 = M(t_N+p+r−2, p+q+βt_N+nN+r−2, −b)                    (31)
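The closed form (26) can be checked against a direct numerical evaluation of the posterior expectation of θ^r under the likelihood (14) and the prior π2(θ) of (16). A minimal Python sketch; all numerical values (t_N, N, n, β, p, q, b, r) are hypothetical choices made only for the check:

```python
import math

def kummer_M(a, b, z, terms=200):
    # series (6)-(7): M(a,b,z) = sum_n (a)_n z^n / ((b)_n n!)
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * z / ((b + n) * (n + 1))
        total += term
    return total

def B(p, q):
    # Beta function as in (17)
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

# hypothetical values for the check
tN, N, n, beta = 5, 3, 2, 2
p, q, b, r = 2.0, 3.0, 1.5, 0.5

# closed form (26), with M3 and M4 as in (27)-(28)
M3 = kummer_M(tN + p, p + q + beta*tN + n*N, -b)
M4 = kummer_M(tN + p + r, p + q + beta*tN + n*N + r, -b)
closed = (B(tN + p + r, (beta - 1)*tN + n*N + q) * M4) / \
         (B(tN + p, (beta - 1)*tN + n*N + q) * M3)

# direct posterior expectation of theta^r under L(theta) * pi2(theta)
kernel = lambda th: th**(tN + p - 1) * (1 - th)**((beta - 1)*tN + n*N + q - 1) \
                    * math.exp(-b * th)
steps, h = 20000, 1.0 / 20000
sn = sd = 0.0
for i in range(1, steps):          # kernel vanishes at both endpoints here
    w = 4 if i % 2 else 2          # composite Simpson weights (h/3 cancels in the ratio)
    th = i * h
    k = kernel(th)
    sn += w * th**r * k
    sd += w * k
direct = sn / sd
print(closed, direct)
```

The analogous checks for (23) and (29) only change the weight factor (θ^{−2}e^{−aθ} or θ^{−2}) and the corresponding Beta and M arguments.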
Under the WSELF, when W(θ) = θ^{−2} e^{−aθ} and corresponding to the posterior distribution given by (43), the EWMELO Estimate of ϕ(θ) = θ^r, denoted by θ̂_t^E, is given by,

θ̂_t^E = Γ(t_N+α+r−2) / {Γ(t_N+α−2) (N+c+a)^r}                    (49)
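Since the posterior (43) is not reproduced in this excerpt, the following check assumes it is a Gamma-type density, π(θ/t_N) ∝ θ^{t_N+α−1} e^{−(N+c)θ} — an assumption consistent with the form of (49), not a statement from the paper — and all numerical values are hypothetical. Under W(θ) = θ^{−2}e^{−aθ}, the ratio of posterior integrals should then reproduce (49):

```python
import math

# hypothetical values; posterior (43) is ASSUMED to be
# pi(theta/t_N) proportional to theta^(tN+alpha-1) * exp(-(N+c)*theta)
tN, alpha, N, c, a, r = 6, 2.0, 4, 1.0, 0.5, 1.0

# closed form (49)
closed = math.gamma(tN + alpha + r - 2) / \
         (math.gamma(tN + alpha - 2) * (N + c + a)**r)

def integrate(f, lo=0.0, hi=40.0, steps=100000):
    # trapezoidal rule; the integrands decay like exp(-(N+c+a)*theta),
    # so truncating at hi=40 is harmless
    h = (hi - lo) / steps
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, steps):
        s += f(lo + i * h)
    return s * h

# W(theta)*theta^r*posterior and W(theta)*posterior, powers of theta combined
num = integrate(lambda th: th**(tN + alpha + r - 3) * math.exp(-(N + c + a) * th))
den = integrate(lambda th: th**(tN + alpha - 3) * math.exp(-(N + c + a) * th))
print(closed, num / den)
```

Both integrals reduce to Gamma integrals of the form (4), which is exactly how the closed form (49) arises.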
REFERENCES