\[
P\left(Y \le \frac{u-3}{2}\right) = \int_0^{(u-3)/2} f(y)\,dy = \int_0^{(u-3)/2} 3y^2\,dy = \left(\frac{u-3}{2}\right)^3,
\]
so
\[
F_U(u) = \begin{cases} 0 & u < 3 \\ \left(\dfrac{u-3}{2}\right)^3 & 3 \le u \le 5 \\ 1 & u > 5 \end{cases}
\]
and
\[
f_U(u) = \frac{dF_U}{du} = \begin{cases} \dfrac{3}{8}(u-3)^2 & 3 \le u \le 5 \\ 0 & \text{otherwise.} \end{cases}
\]
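The example above is consistent with Y having pdf f(y) = 3y² on (0, 1) and U = 2Y + 3. As a quick sanity check (my own sketch, not part of the derivation), we can simulate Y by inverting its cdf F_Y(y) = y³ and compare the empirical value of P(U ≤ 4) with F_U(4) = (1/2)³ = 1/8:

```python
import random

random.seed(0)
n = 200_000

# Y has pdf 3y^2 on (0, 1), so F_Y(y) = y^3 and Y = V**(1/3) for V ~ U(0, 1);
# the example then takes U = 2Y + 3
us = [2 * random.random() ** (1 / 3) + 3 for _ in range(n)]

# Empirical P(U <= 4) versus F_U(4) = ((4 - 3)/2)**3 = 1/8
p_hat = sum(u <= 4 for u in us) / n
print(p_hat)
```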
The general method works as follows.
The cdf method is useful for dealing with the squares of random variables. Suppose U = X². Then
\[
F_U(u) = P(U \le u) = P(X^2 \le u) = P(-\sqrt{u} \le X \le \sqrt{u}) = \int_{-\sqrt{u}}^{\sqrt{u}} f(x)\,dx = F_X(\sqrt{u}) - F_X(-\sqrt{u}).
\]
Differentiating with respect to u then gives
\[
f_U(u) = \frac{1}{2\sqrt{u}}\left[f_X(\sqrt{u}) + f_X(-\sqrt{u})\right].
\]
For example, suppose X has pdf
\[
f_X(x) = \frac{x+1}{2} \quad -1 \le x \le 1
\]
and 0 otherwise, and U = X². Then
\[
f_U(u) = \frac{1}{2\sqrt{u}}\left[f_X(\sqrt{u}) + f_X(-\sqrt{u})\right]
= \frac{1}{2\sqrt{u}}\left[\frac{\sqrt{u}+1}{2} + \frac{-\sqrt{u}+1}{2}\right]
= \frac{1}{2\sqrt{u}} \quad 0 \le u \le 1.
\]
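A Monte Carlo check of this example (my own sketch, under the pdf f_X(x) = (x + 1)/2): since F_X(x) = (x + 1)²/4, we can sample X by inverting the cdf, square it, and compare the empirical cdf of U with F_U(u) = √u:

```python
import random

random.seed(1)
n = 200_000

# F_X(x) = (x + 1)**2 / 4 on (-1, 1), so the inverse cdf is x = 2*sqrt(v) - 1
xs = [2 * random.random() ** 0.5 - 1 for _ in range(n)]
us = [x * x for x in xs]

# f_U(u) = 1/(2*sqrt(u)) integrates to F_U(u) = sqrt(u), so P(U <= 0.25) = 0.5
p_hat = sum(u <= 0.25 for u in us) / n
print(p_hat)
```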
Example 2.2. As a more important example suppose Z ∼ N(0, 1) so that
\[
f_Z(z) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{z^2}{2}\right) \quad -\infty < z < \infty.
\]
Then if U = Z²,
\[
f_U(u) = \frac{1}{2\sqrt{u}}\left[f_Z(\sqrt{u}) + f_Z(-\sqrt{u})\right]
= \frac{1}{2\sqrt{u}}\left[\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{u}{2}\right) + \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{u}{2}\right)\right]
= \frac{1}{\sqrt{2\pi u}} \exp\left(-\frac{u}{2}\right) \quad u > 0.
\]
This is the pdf of a Ga(1/2, 1/2) random variable, where a Ga(α, β) random variable has pdf
\[
f(x) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1} \exp(-\beta x) \quad x > 0
\]
and
\[
\Gamma(\alpha) = \int_0^\infty y^{\alpha-1} \exp(-y)\,dy
\]
is the Gamma function. (Taking α = β = 1/2 and using Γ(1/2) = √π recovers f_U above.)
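A Monte Carlo check of Example 2.2 (my own sketch): simulate U = Z² and compare the empirical P(U ≤ 1) with P(−1 ≤ Z ≤ 1) = erf(1/√2):

```python
import math
import random

random.seed(2)
n = 200_000
us = [random.gauss(0, 1) ** 2 for _ in range(n)]

# P(Z^2 <= 1) = P(-1 <= Z <= 1), which for a standard normal is erf(1/sqrt(2))
p_hat = sum(u <= 1 for u in us) / n
p_true = math.erf(1 / math.sqrt(2))  # roughly 0.6827
print(p_hat, p_true)
```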
Note that some text books define the Gamma distribution with 1/β (a scale parameter) in place of β (a rate parameter); this is only a change of parametrisation, so it doesn't really matter.
Now we can see that
\[
\Gamma(1) = \int_0^\infty \exp(-y)\,dy = \left[-e^{-y}\right]_0^\infty = 1.
\]
Also, if we integrate Γ(α) by parts (for α > 1) we see that
\[
\Gamma(\alpha) = \int_0^\infty y^{\alpha-1} \exp(-y)\,dy
= \left[y^{\alpha-1}(-e^{-y})\right]_0^\infty + \int_0^\infty (\alpha-1) y^{\alpha-2} \exp(-y)\,dy
= 0 + (\alpha-1)\int_0^\infty y^{\alpha-2} \exp(-y)\,dy
= (\alpha-1)\Gamma(\alpha-1).
\]
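The recursion, and Γ(1) = 1, can be checked directly against the standard library's Gamma function; a quick sketch in Python:

```python
import math

# Gamma(a) = (a - 1) * Gamma(a - 1), so Gamma(3.7) / Gamma(2.7) should equal 2.7
ratio = math.gamma(3.7) / math.gamma(2.7)
print(ratio)

# Gamma(1) = 1, and iterating the recursion gives Gamma(n) = (n - 1)! for integer n
assert abs(math.gamma(1) - 1.0) < 1e-12
assert abs(math.gamma(5) - 24.0) < 1e-9
```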
So we have proved the following theorem.

Theorem 2.1. If the random variable Z ∼ N(0, 1) then Z² ∼ χ²₁.
If X is a continuous random variable with cdf F_X and we set Y = F_X(X), then
\[
P(Y \le y) = P(F_X(X) \le y) = P(X \le F_X^{-1}(y)) = F_X(F_X^{-1}(y)) = y \quad 0 < y < 1,
\]
so Y ∼ U(0, 1). Conversely, if U ∼ U(0, 1) and X = F_X^{-1}(U), then
\[
P(X \le x) = P(F_X^{-1}(U) \le x) = P(U \le F_X(x)) = F_X(x),
\]
so X has cdf F_X.
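The second identity is the basis of inverse-transform sampling: applying F_X⁻¹ to uniforms produces draws from F_X. A small sketch (my example) using the exponential cdf F(x) = 1 − e^{−λx}, whose inverse is F⁻¹(u) = −log(1 − u)/λ:

```python
import math
import random

random.seed(3)
lam = 2.0
n = 200_000

# If U ~ U(0, 1) then X = F^{-1}(U) = -log(1 - U)/lam has cdf 1 - exp(-lam*x),
# i.e. X ~ Exp(lam)
xs = [-math.log(1 - random.random()) / lam for _ in range(n)]

mean_hat = sum(xs) / n  # Exp(lam) has mean 1/lam = 0.5
print(mean_hat)
```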
2.2 Method of direct transformation
\[
f_Y(y) = \frac{\theta}{(e^y)^{\theta+1}} \times e^y = \theta e^{-\theta y} \quad y > 0.
\]
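The calculation above is consistent with X having the Pareto pdf f_X(x) = θ/x^{θ+1} on x > 1 and Y = log X (an assumption on my part, since the set-up is not shown here). Under that assumption, a Monte Carlo check that Y is Exp(θ):

```python
import math
import random

random.seed(4)
theta = 1.5
n = 200_000

# Assumed set-up: X has pdf theta / x**(theta + 1) on x > 1, so
# F_X(x) = 1 - x**(-theta) and X = (1 - V)**(-1/theta) for V ~ U(0, 1)
ys = [math.log((1 - random.random()) ** (-1 / theta)) for _ in range(n)]

# Y = log X should then be Exp(theta): P(Y <= 1) = 1 - exp(-theta)
p_hat = sum(y <= 1 for y in ys) / n
p_true = 1 - math.exp(-theta)
print(p_hat, p_true)
```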
Suppose X1, X2 have joint pdf f_{X1,X2}(x1, x2) with support A = {(x1, x2) : f(x1, x2) > 0}. We are interested in the random variables Y1 = g1(X1, X2) and Y2 = g2(X1, X2). The transformation y1 = g1(x1, x2), y2 = g2(x1, x2) is a 1-1 transformation of A onto B, so there is an inverse transformation x1 = g1⁻¹(y1, y2), x2 = g2⁻¹(y1, y2). Let J be the Jacobian determinant
\[
J = \begin{vmatrix} \dfrac{\partial x_1}{\partial y_1} & \dfrac{\partial x_1}{\partial y_2} \\[4pt] \dfrac{\partial x_2}{\partial y_1} & \dfrac{\partial x_2}{\partial y_2} \end{vmatrix}.
\]
Then
\[
f_{Y_1,Y_2}(y_1, y_2) = |J| \, f_{X_1,X_2}\!\left(g_1^{-1}(y_1, y_2),\, g_2^{-1}(y_1, y_2)\right) \quad (y_1, y_2) \in B.
\]
For example, for independent Exp(1) random variables the joint pdf is
\[
f(x_1, x_2) = \exp(-(x_1 + x_2)) \quad x_1 \ge 0,\ x_2 \ge 0.
\]
Note in this example that as we started with 2 random variables we
have to transform to 2 random variables. If we are only interested in
one of them we can integrate out the other.
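As an illustration of the bivariate method (my own sketch, using the exponential joint pdf above): with Y1 = X1 + X2 and Y2 = X1/(X1 + X2), the inverse transformation is x1 = y1·y2, x2 = y1(1 − y2), with |J| = y1, giving f_{Y1,Y2}(y1, y2) = y1·e^{−y1}, so Y1 ∼ Ga(2, 1) and Y2 ∼ U(0, 1) independently. A quick simulation check:

```python
import random

random.seed(5)
n = 200_000
y1s, y2s = [], []
for _ in range(n):
    x1 = random.expovariate(1.0)  # X1, X2 independent Exp(1)
    x2 = random.expovariate(1.0)
    y1s.append(x1 + x2)           # Y1 = X1 + X2      ~ Ga(2, 1)
    y2s.append(x1 / (x1 + x2))    # Y2 = X1/(X1 + X2) ~ U(0, 1)

mean_y1 = sum(y1s) / n                 # Ga(2, 1) has mean 2
p_y2 = sum(y <= 0.3 for y in y2s) / n  # uniform: P(Y2 <= 0.3) = 0.3
print(mean_y1, p_y2)
```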
Example 2.7. X1 and X2 have joint pdf
First note that M_X(0) = 1. Differentiating M_X(t) with respect to t, assuming X is continuous, we have
\[
M_X'(t) = \frac{d}{dt} \int e^{tx} f(x)\,dx = \int x e^{tx} f(x)\,dx,
\]
so
\[
M_X'(0) = \int x f(x)\,dx = E[X],
\]
where we assume we can take d/dt inside the integral.
Similarly
\[
M_X''(t) = \frac{d^2}{dt^2} \int e^{tx} f(x)\,dx = \int x^2 e^{tx} f(x)\,dx,
\]
so
\[
M_X''(0) = \int x^2 f(x)\,dx = E[X^2].
\]
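These two facts can be checked numerically for any mgf using finite differences. A sketch (my own) with M(t) = λ/(λ − t), the Exp(λ) mgf (the α = 1 case of the Gamma mgf derived below), for which E[X] = 1/λ and E[X²] = 2/λ²:

```python
lam = 3.0

def M(t):
    # mgf of an Exp(lam) random variable, valid for t < lam
    return lam / (lam - t)

h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)          # central difference ~ M'(0)  = E[X]   = 1/lam
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2  # second difference  ~ M''(0) = E[X^2] = 2/lam^2
print(m1, m2)
```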
Then the moment generating function is
\[
M(t) = \sum_{x=0}^n e^{tx} \binom{n}{x} p^x (1-p)^{n-x}
= \sum_{x=0}^n \binom{n}{x} (pe^t)^x (1-p)^{n-x}
= \left(pe^t + (1-p)\right)^n,
\]
using the binomial theorem for the last step.
Thus
\[
M'(t) = n\left(pe^t + (1-p)\right)^{n-1} pe^t,
\]
so M'(0) = np = E[X]. Also
\[
M''(t) = n(n-1)\left(pe^t + (1-p)\right)^{n-2} p^2 e^{2t} + n\left(pe^t + (1-p)\right)^{n-1} pe^t,
\]
so M''(0) = n(n−1)p² + np and hence Var[X] = n(n−1)p² + np − (np)² = np(1−p).
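A numerical sanity check of the closed form and of M′(0) = np (my own sketch, with n = 10 and p = 0.3):

```python
import math

n, p = 10, 0.3
t = 0.7

# Direct sum: E[e^{tX}] = sum_x e^{tx} C(n, x) p^x (1 - p)^(n - x)
direct = sum(
    math.exp(t * x) * math.comb(n, x) * p**x * (1 - p) ** (n - x)
    for x in range(n + 1)
)
closed = (p * math.exp(t) + 1 - p) ** n
print(direct, closed)

# M'(0) = np via a central difference; should be near np = 3
h = 1e-4
Mp0 = ((p * math.exp(h) + 1 - p) ** n - (p * math.exp(-h) + 1 - p) ** n) / (2 * h)
print(Mp0)
```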
Example 2.10. Suppose X has a Gamma distribution, Ga(α, β). We find the mgf of X as follows.
\[
M_X(t) = \int_0^\infty e^{tx} \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1} \exp(-\beta x)\,dx
= \int_0^\infty \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1} \exp(-x(\beta - t))\,dx
= \frac{\beta^\alpha}{(\beta - t)^\alpha} \int_0^\infty \frac{(\beta - t)^\alpha}{\Gamma(\alpha)} x^{\alpha-1} \exp(-x(\beta - t))\,dx.
\]
Now the integral is of the pdf of a Ga(α, β − t) random variable and so is equal to 1. Note that we have to have t < β to make this work; that is fine, since we only need the mgf for t in a neighbourhood of 0.
Thus the mgf for a Ga(α, β) random variable X is
\[
M_X(t) = \left(\frac{\beta}{\beta - t}\right)^\alpha.
\]
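We can check this against a simulated expectation E[e^{tX}] (my own sketch; note that Python's `random.gammavariate` takes a shape and a *scale*, so rate β corresponds to scale 1/β):

```python
import math
import random

random.seed(6)
alpha, beta, t = 2.0, 3.0, 1.0  # need t < beta
n = 200_000

# random.gammavariate takes (shape, scale), so rate beta means scale 1/beta
xs = [random.gammavariate(alpha, 1 / beta) for _ in range(n)]

mgf_hat = sum(math.exp(t * x) for x in xs) / n
mgf_true = (beta / (beta - t)) ** alpha  # (3/2)^2 = 2.25
print(mgf_hat, mgf_true)
```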
and so, as the final bracket does not depend on x, we can take it outside the integral to give
\[
M_X(t) = \int_{-\infty}^\infty \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{1}{2\sigma^2}\left[x - (\mu + \sigma^2 t)\right]^2\right) dx \; \exp\left(\frac{2\mu\sigma^2 t + \sigma^4 t^2}{2\sigma^2}\right).
\]
Now the function inside the integral is the pdf of a N(μ + σ²t, σ²) rv and so is equal to one. Thus
\[
M_X(t) = \exp\left(\mu t + \frac{\sigma^2 t^2}{2}\right).
\]
Now differentiating the mgf we find
\[
M'(t) = \exp\left(\mu t + \frac{\sigma^2 t^2}{2}\right)(\mu + \sigma^2 t),
\]
so M'(0) = μ = E[X], and
\[
M''(t) = \exp\left(\mu t + \frac{\sigma^2 t^2}{2}\right)(\mu + \sigma^2 t)^2 + \exp\left(\mu t + \frac{\sigma^2 t^2}{2}\right)\sigma^2,
\]
so M''(0) = μ² + σ² and hence Var[X] = μ² + σ² − μ² = σ².
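A finite-difference check of M′(0) = μ and M″(0) = μ² + σ² (my own sketch, with μ = 1.5 and σ = 2):

```python
import math

mu, sigma = 1.5, 2.0

def M(t):
    # mgf of N(mu, sigma^2)
    return math.exp(mu * t + sigma**2 * t**2 / 2)

h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)          # ~ M'(0)  = mu            = 1.5
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2  # ~ M''(0) = mu^2 + sigma^2 = 6.25
var = m2 - m1**2                       # ~ sigma^2 = 4
print(m1, m2, var)
```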
The moment generating function is also useful for proving other results, for example results about sums of random variables.

Suppose X1, X2, ..., Xn are independent rvs with mgfs M_{X_i}(t), i = 1, ..., n, and let Y = Σᵢ Xᵢ. Then
\[
M_Y(t) = \prod_{i=1}^n M_{X_i}(t),
\]
since
\[
M_Y(t) = E[e^{tY}] = E\!\left[e^{t\sum_i X_i}\right] = E\!\left[e^{tX_1} e^{tX_2} \cdots e^{tX_n}\right]
= E[e^{tX_1}]\,E[e^{tX_2}] \cdots E[e^{tX_n}] \quad \text{(by independence)}
= M_{X_1}(t) M_{X_2}(t) \cdots M_{X_n}(t).
\]
For example, if X1, ..., Xn are independent Exp(λ) rvs then each has mgf λ/(λ − t). Thus
\[
M_Y(t) = \prod_{i=1}^n \frac{\lambda}{\lambda - t} = \left(\frac{\lambda}{\lambda - t}\right)^n,
\]
which is the mgf of a Ga(n, λ) random variable.
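A simulation check (my own sketch) that a sum of independent Exp(λ) rvs behaves like Ga(n, λ), which has mean n/λ and variance n/λ²; here n = 5 and λ = 2:

```python
import random

random.seed(7)
lam, n_terms = 2.0, 5
n = 100_000

# Y = sum of n_terms independent Exp(lam) rvs, which should be Ga(n_terms, lam)
ys = [sum(random.expovariate(lam) for _ in range(n_terms)) for _ in range(n)]

mean_hat = sum(ys) / n                              # Ga mean = n_terms/lam   = 2.5
var_hat = sum((y - mean_hat) ** 2 for y in ys) / n  # Ga var  = n_terms/lam^2 = 1.25
print(mean_hat, var_hat)
```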
We give the next result about sums of standard normal rvs as a theorem as the result is important.

Theorem 2.4. Suppose Y1, ..., Yn are independent, normally distributed with mean E[Yᵢ] = μᵢ and variance Var[Yᵢ] = σᵢ². Let Zᵢ = (Yᵢ − μᵢ)/σᵢ so that Z1, ..., Zn are independent and each has a N(0, 1) distribution. Then Σᵢ Zᵢ² has a χ²ₙ distribution.
Proof. We have seen before that each Zᵢ² has a χ²₁ distribution, so M_{Zᵢ²}(t) = (1 − 2t)^{−1/2}. Let V = Σᵢ Zᵢ². Then
\[
M_V(t) = \prod_{i=1}^n M_{Z_i^2}(t) = \frac{1}{(1 - 2t)^{n/2}} = \left(\frac{1/2}{1/2 - t}\right)^{n/2},
\]
but this is the mgf of a Ga(n/2, 1/2) random variable, that is a χ²ₙ rv.
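A simulation check of the theorem (my own sketch, with n = 4): the χ²ₙ distribution has mean n and variance 2n.

```python
import random

random.seed(8)
df = 4
n = 100_000

# V = sum of df independent squared N(0, 1) rvs, which should be chi^2_df
vs = [sum(random.gauss(0, 1) ** 2 for _ in range(df)) for _ in range(n)]

mean_hat = sum(vs) / n                              # chi^2_df mean     = df   = 4
var_hat = sum((v - mean_hat) ** 2 for v in vs) / n  # chi^2_df variance = 2*df = 8
print(mean_hat, var_hat)
```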