Chapter 4 Random Variables
A random variable is a measurable⁵ function $X$ from the sample space to the real line, such that $\{\omega : X(\omega) \le x\}$ is an event in the underlying σ-algebra. Thus we confine ourselves to the σ-algebra of events of the type $\{X \le x\}$.
Unless explicitly required, we suppress the argument when referring to a random
variable in the rest of this text.
[Figure: a random variable maps each outcome $\omega$ in the sample space to a point $X(\omega) = x$ on the real line $(-\infty, \infty)$]

⁵ Measurable functions have been defined in Section 2.6.

CF and MGF are introduced after the discussion on moments of random variables in Section 4.2.
$F_X(x) = P[X \le x] \qquad (4.1)$

The CDF satisfies:

$F_X(-\infty) = 0, \quad F_X(\infty) = 1 \qquad (4.2)$

and $F_X(x)$ is a non-decreasing function of $x$.
Thus, the probability of finding the random variable X in the semi-open interval (a, b] is:

$P[a < X \le b] = F_X(b) - F_X(a) \qquad (4.3)$
4.1.2.1 PDF
The probability density function (of continuous random variables) is defined as the
derivative of the CDF:
$f_X(x) = \dfrac{dF_X(x)}{dx} \qquad (4.4)$

so that:

$F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt \qquad (4.5)$
and,

$\int_a^b f_X(x)\,dx = F_X(b) - F_X(a) \qquad (4.6)$

© Baidurya Bhattacharya, IIT Kharagpur. All rights reserved.
The PDF may also be interpreted as giving the small probability of observing the random variable X in the neighbourhood of the point x:

$f_X(x)\,dx = P[x - dx < X \le x] \qquad (4.7)$
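As a quick numerical sanity check of (4.4) and (4.7), the sketch below (assuming, purely for illustration, a standard exponential distribution) differentiates the CDF and compares a small-interval probability with $f_X(x)\,dx$:

```python
import math

# Hypothetical example: standard exponential distribution (rate 1), x >= 0,
# with CDF F_X(x) = 1 - exp(-x) and PDF f_X(x) = exp(-x).
def cdf(x):
    return 1.0 - math.exp(-x)

def pdf(x):
    return math.exp(-x)

# Eq (4.4): PDF as the derivative of the CDF, checked by a central difference.
x, h = 1.3, 1e-6
numeric_pdf = (cdf(x + h) - cdf(x - h)) / (2 * h)

# Eq (4.7): f_X(x) dx approximates P[x - dx < X <= x].
dx = 1e-4
prob_near_x = cdf(x) - cdf(x - dx)
```

The central-difference derivative agrees with the closed-form PDF, and the interval probability matches $f_X(x)\,dx$ up to terms of order $dx^2$.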
4.1.2.2 PMF
The probability mass function (of discrete random variables) is defined as the probability that the random variable assumes a particular value:

$p_X(x_i) = P[X = x_i] \qquad (4.8)$

If the sample space of the discrete rv X is $\{x_1, x_2, \ldots\}$ with $x_i < x_{i+1}$, then the PMF may be given as the height of the step in the CDF curve:

$p_X(x_i) = F_X(x_i) - F_X(x_{i-1}) \qquad (4.9)$
The PDF of a discrete RV can be written with the help of the Dirac delta function, defined as the derivative of the unit step function:

$\delta(x) = dU(x)/dx \qquad (4.10)$

where:

$U(x) = \begin{cases} 0 & \text{if } x < 0 \\ 1 & \text{if } x \ge 0 \end{cases} \qquad (4.11)$

The delta function satisfies the sifting property for any $a < 0 < b$:

$g(0) = \int_a^b \delta(x)\,g(x)\,dx \qquad (4.12)$

so that:

$f_X(x) = \sum_i p_i\,\delta(x - x_i) \qquad (4.13)$
But, in the following, we will still write formulas and expressions separately for discrete
and continuous RVs.
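The step-height relation (4.9) can be illustrated with a small sketch; the three-point PMF below is a hypothetical example:

```python
# Hypothetical discrete RV taking values 1, 2, 3 with the probabilities below.
pmf = {1: 0.2, 2: 0.5, 3: 0.3}
support = sorted(pmf)

def cdf(x):
    """CDF of the discrete RV: F_X(x) = sum of p_X(x_i) over x_i <= x."""
    return sum(p for xi, p in pmf.items() if xi <= x)

# Eq (4.9): the PMF equals the height of the jump in the CDF at each x_i.
eps = 1e-9
jumps = {xi: cdf(xi) - cdf(xi - eps) for xi in support}
```

The recovered jump heights reproduce the PMF, and they sum to one as they must.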
4.2 Expectation
The expectation of any function g ( X ) of the random variable X is defined as:
$E[g(X)] = \begin{cases} \int_{-\infty}^{\infty} g(x)\,f_X(x)\,dx & \text{if } X \text{ continuous} \\ \sum_{\text{all } x_i} g(x_i)\,p_X(x_i) & \text{if } X \text{ discrete} \end{cases} \qquad (4.14)$
Expectation is a linear operation:

$E[aX + b] = a\,E[X] + b \qquad (4.16)$

and if $Y = g_1(X) + g_2(X)$, then:

$E[Y] = E[g_1(X)] + E[g_2(X)] \qquad (4.17)$
The mean of a random variable is its expectation:

$\mu = E(X) = \begin{cases} \int_{-\infty}^{\infty} x\,f_X(x)\,dx & \text{continuous RV} \\ \sum_{\text{all } x_i} x_i\,p_X(x_i) & \text{discrete RV} \end{cases} \qquad (4.18)$
and its variance is the expectation of its squared deviation from the mean:

$\sigma^2 = E[(X - \mu)^2] = \begin{cases} \int_{-\infty}^{\infty} (x - \mu)^2 f_X(x)\,dx & \text{continuous RV} \\ \sum_{\text{all } x_i} (x_i - \mu)^2 p_X(x_i) & \text{discrete RV} \end{cases} \qquad (4.19)$
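A minimal sketch of the mean and variance definitions above, for a hypothetical three-point PMF and for the exponential density $e^{-x}$ (whose mean and variance are both 1), with crude trapezoidal integration standing in for the integrals in the continuous case:

```python
import math

# Discrete case: a hypothetical PMF on {0, 1, 2}.
pmf = {0: 0.25, 1: 0.5, 2: 0.25}
mean_d = sum(x * p for x, p in pmf.items())
var_d = sum((x - mean_d) ** 2 * p for x, p in pmf.items())

# Continuous case: exponential(rate 1), f_X(x) = exp(-x) for x >= 0.
# Trapezoidal rule on [0, 50]; the tail beyond 50 is negligible.
def trapezoid(g, a, b, n):
    h = (b - a) / n
    total = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n))
    return h * total

f = lambda x: math.exp(-x)
mean_c = trapezoid(lambda x: x * f(x), 0.0, 50.0, 100000)
var_c = trapezoid(lambda x: (x - mean_c) ** 2 * f(x), 0.0, 50.0, 100000)
```

Both routes return the known values: mean 1 and variance 0.5 for the discrete example, mean 1 and variance 1 for the exponential.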
4.3 Moments
The $k$th central moment of $X$:

$\mu_{c,k} = E[(X - \mu)^k] \qquad (4.20)$

Variance:

$\sigma^2 = \mu_{c,2} = \begin{cases} \sum_{i=1}^{n} p_i\,(x_i - \mu)^2 & \text{for discrete RV} \\ \int_{-\infty}^{\infty} (x - \mu)^2 f_X(x)\,dx & \text{for continuous RV} \end{cases} \qquad (4.21)$
The $k$th raw moment of $X$: $\mu_{0,k} = E(X^k)$. Hence, the mean of $X$ is simply the first raw moment:

$\mu = \mu_{0,1} = \begin{cases} \sum_{i=1}^{n} p_i\,x_i & \text{for discrete RV} \\ \int_{-\infty}^{\infty} x\,f_X(x)\,dx & \text{for continuous RV} \end{cases}$
General formula for central moments (by simply expanding the binomial series⁶):

$\mu_{c,n} = \sum_{k=0}^{n} \binom{n}{k}\,\mu_{0,k}\,(-\mu)^{n-k} \qquad (4.22)$

⁶ If $n$ is an integer, the binomial series is given by: $(x + y)^n = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k = \sum_{k=0}^{n} \binom{n}{k} x^k y^{n-k}$
In particular,

$\mu_{c,3} = \mu_{0,3} - 3\,\mu_{0,2}\,\mu + 2\mu^3$

$\mu_{0,3} = \mu^3 + 3\mu\sigma^2 + \mu_{c,3}$

, etc.
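The relation $\mu_{0,3} = \mu^3 + 3\mu\sigma^2 + \mu_{c,3}$ can be verified numerically; the PMF below is an arbitrary hypothetical example:

```python
# Hypothetical PMF used to check the raw/central third-moment identity.
pmf = {0: 0.1, 1: 0.3, 2: 0.4, 5: 0.2}
mu = sum(x * p for x, p in pmf.items())                  # first raw moment (mean)
var = sum((x - mu) ** 2 * p for x, p in pmf.items())     # second central moment
mu_c3 = sum((x - mu) ** 3 * p for x, p in pmf.items())   # third central moment
mu_03 = sum(x ** 3 * p for x, p in pmf.items())          # third raw moment

# Identity following Eq (4.22): mu_{0,3} = mu^3 + 3*mu*sigma^2 + mu_{c,3}
rhs = mu ** 3 + 3 * mu * var + mu_c3
```

Since the identity is exact algebra, the two sides agree to floating-point precision for any distribution with finite third moment.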
If $f_X$ is discontinuous at some $x_1$, this equals $\frac{1}{2}[f_X(x_1^-) + f_X(x_1^+)]$. In Eq (4.24) E is the expectation operator discussed in Section 4.2. The requirement is that $f_X$ is absolutely integrable, i.e., $\int_{-\infty}^{\infty} |f_X(x)|\,dx < \infty$, which is no problem since $f_X$ is a pdf to begin with.
For a discrete RV, the characteristic function is:

$M_X(\theta) = \sum_{\text{all } x_j} p_X(x_j)\,e^{i\theta x_j} \qquad (4.26)$

and the PMF can be recovered from the CF as:

$p_X(x) = \dfrac{1}{2\pi} \int_{-\infty}^{\infty} M_X(\theta)\,e^{-i\theta x}\,d\theta$
Proof:

$\dfrac{1}{2\pi} \int_{-\infty}^{\infty} M_X(\theta)\,e^{-i\theta x}\,d\theta = \dfrac{1}{2\pi} \int_{-\infty}^{\infty} \sum_{\text{all } x_j} p_X(x_j)\,e^{i\theta x_j}\,e^{-i\theta x}\,d\theta$

$= \dfrac{1}{2\pi} \sum_{\text{all } x_j} p_X(x_j) \int_{-\infty}^{\infty} e^{i\theta (x_j - x)}\,d\theta \qquad (4.27)$

$= \dfrac{1}{2\pi} \sum_{\text{all } x_j} p_X(x_j)\,2\pi\,\delta(x - x_j)$

$= p_X(x)$
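For an integer-valued RV the inversion integral in (4.27) can be restricted to $[-\pi, \pi]$, since $\int_{-\pi}^{\pi} e^{i\theta(x_j - x)}\,d\theta = 2\pi$ when $x_j = x$ and $0$ for any other integer difference. A sketch with a hypothetical three-point PMF:

```python
import cmath
import math

# Hypothetical integer-valued RV.
pmf = {0: 0.2, 1: 0.5, 3: 0.3}

def cf(theta):
    """Characteristic function M_X(theta) as in Eq (4.26)."""
    return sum(p * cmath.exp(1j * theta * xj) for xj, p in pmf.items())

def invert(x, n=4096):
    """Recover p_X(x) = (1/2pi) * integral over [-pi, pi] of M_X(theta) e^{-i theta x}."""
    h = 2 * math.pi / n
    total = sum(cf(-math.pi + k * h) * cmath.exp(-1j * (-math.pi + k * h) * x)
                for k in range(n))
    return (h * total / (2 * math.pi)).real
```

The Riemann sum over a full period is exact here (the integrand is a trigonometric polynomial), so the inversion recovers each probability and returns zero off the support.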
It exists if the integral is finite for all s in some interval I that contains 0 in its interior
(Resnick, p. 294). If it exists, the MGF uniquely determines the distribution of X. In
comparison, the CF of a RV always exists.
The MGF is infinitely differentiable and the $n$th derivative is:

$\dfrac{d^n}{ds^n} G_X(s) = \int_{-\infty}^{\infty} x^n e^{sx} f_X(x)\,dx \qquad (4.29)$

so that, setting $s = 0$:

$\left.\dfrac{d^n}{ds^n} G_X(s)\right|_{s=0} = E[X^n] \qquad (4.30)$

etc., where the identity $G_X(0) = 1$ has been used.
where Eq 4.20 has been used. This is valid if all moments are finite and the series
converges absolutely near s = 0. Since fX can be determined in terms of GX, Eq 4.21
shows that all moments of X are required for the complete specification of fX under the
conditions stated above.
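As an illustration of (4.30), the sketch below (assuming an exponential RV with rate $\lambda$, whose MGF is $\lambda/(\lambda - s)$ for $s < \lambda$) recovers the first two raw moments by finite differences:

```python
# Hypothetical example: exponential RV with rate lam.
# Its raw moments are E[X^n] = n!/lam^n.
lam = 2.0

def mgf(s):
    # MGF of exponential(lam): G_X(s) = lam / (lam - s), valid for s < lam
    return lam / (lam - s)

# Eq (4.30): moments from derivatives of the MGF at s = 0, via central differences.
h = 1e-4
ex1 = (mgf(h) - mgf(-h)) / (2 * h)               # approximates E[X]   = 1/lam
ex2 = (mgf(h) - 2 * mgf(0.0) + mgf(-h)) / h**2   # approximates E[X^2] = 2/lam^2
```

With lam = 2 both estimates converge to 0.5, matching $1/\lambda$ and $2/\lambda^2$.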
Moments can likewise be obtained from the characteristic function:

$E(X) = \mu_X = \left.\dfrac{1}{i}\dfrac{dM_X}{d\theta}\right|_{\theta=0} \qquad (4.32)$

$E(X) = \left.\dfrac{1}{i}\dfrac{d}{d\theta}\ln M_X(\theta)\right|_{\theta=0} \qquad (4.33)$

$E[X^n] = \left.\dfrac{1}{i^n}\dfrac{d^n M_X}{d\theta^n}\right|_{\theta=0} \qquad (4.34)$

and in particular, the variance can be expressed through the log transformation as:

$\mathrm{var}(X) = \left.\dfrac{1}{i^2}\dfrac{d^2}{d\theta^2}\ln M_X(\theta)\right|_{\theta=0} \qquad (4.35)$
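A numerical illustration of (4.35), assuming a Poisson RV with mean $\lambda$, for which $\ln M_X(\theta) = \lambda(e^{i\theta} - 1)$ and the variance is $\lambda$:

```python
import cmath

# Hypothetical example: Poisson RV with mean lam = 3.
lam = 3.0

def log_cf(theta):
    # ln M_X(theta) for Poisson(lam): lam * (e^{i theta} - 1)
    return lam * (cmath.exp(1j * theta) - 1.0)

# Second central difference of ln M_X at theta = 0, then divide by i^2 = -1 (Eq 4.35).
h = 1e-4
d2 = (log_cf(h) - 2 * log_cf(0.0) + log_cf(-h)) / h**2
variance = (d2 / (1j ** 2)).real
```

The log transformation turns the second derivative into the variance directly, and the finite-difference estimate returns 3.0 as expected.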
For a nonnegative RV, Markov's inequality states:

$P[X \ge \varepsilon\mu] \le \dfrac{1}{\varepsilon}, \quad \varepsilon > 0 \qquad (4.36)$
Chebyshev's inequality states:

$P[\,|X - \mu| \ge \varepsilon\sigma\,] \le \dfrac{1}{\varepsilon^2}, \quad \varepsilon > 0 \qquad (4.37)$
For a Bernoulli RV with $P[X = 1] = p$, $P[X = 0] = q = 1 - p$ (so mean $p$ and variance $pq$):

$P[\,|X - p| \ge \varepsilon\sqrt{pq}\,] \le \dfrac{1}{\varepsilon^2} \quad \text{for any } \varepsilon > 0 \qquad (4.38)$
Case (1): $X > p$

The LHS of (4.38) is:

$P[X \ge p + \varepsilon\sqrt{pq}\,] = \begin{cases} p & \text{if } p + \varepsilon\sqrt{pq} \le 1 & \text{Case 1a} \\ 0 & \text{if } p + \varepsilon\sqrt{pq} > 1 & \text{Case 1b} \end{cases}$
Case (1a)

LHS $= p$, and we need to prove $p \le \dfrac{1}{\varepsilon^2}$.

It is given that $p + \varepsilon\sqrt{pq} \le 1$, so:

$\varepsilon\sqrt{pq} \le 1 - p = q \;\Rightarrow\; \varepsilon^2 \le \dfrac{q}{p} \;\Rightarrow\; p \le \dfrac{q}{\varepsilon^2}$

Since $q \le 1$, we have $p \le \dfrac{1}{\varepsilon^2}$ ... proved
Case (1b)

LHS $= 0$, and we need to prove $0 \le \dfrac{1}{\varepsilon^2}$. Here $p + \varepsilon\sqrt{pq} > 1$; regardless, since $\dfrac{1}{\varepsilon^2}$ is a positive quantity, we have $0 \le \dfrac{1}{\varepsilon^2}$ ... proved
Case (2): $0 \le X < p$

LHS $= P[p - X \ge \varepsilon\sqrt{pq}\,] = P[X \le p - \varepsilon\sqrt{pq}\,]$:

$P[X \le p - \varepsilon\sqrt{pq}\,] = \begin{cases} 0 & \text{if } p < \varepsilon\sqrt{pq} & \text{Case 2a} \\ q & \text{if } p \ge \varepsilon\sqrt{pq} & \text{Case 2b} \end{cases}$
Case (2a)

LHS $= 0$, and we need to prove $0 \le \dfrac{1}{\varepsilon^2}$.

It is given that $p < \varepsilon\sqrt{pq}$, i.e., $\varepsilon^2 > \dfrac{p}{q}$.

Since $\dfrac{1}{\varepsilon^2}$ is a positive quantity, we have $0 \le \dfrac{1}{\varepsilon^2}$ ... proved
Case (2b)

LHS $= q$, and we need to prove $q \le \dfrac{1}{\varepsilon^2}$.

It is given that $p \ge \varepsilon\sqrt{pq}$, so:

$p^2 \ge \varepsilon^2 pq \;\Rightarrow\; \varepsilon^2 \le \dfrac{p}{q} \;\Rightarrow\; q \le \dfrac{p}{\varepsilon^2}$

Since $p \le 1$, we have $q \le \dfrac{1}{\varepsilon^2}$ ... proved
Case (3): $X < 0$ (the cutoff $p - \varepsilon\sqrt{pq}$ falls below 0)

LHS $= P[X \le p - \varepsilon\sqrt{pq}\,] = 0 \le \dfrac{1}{\varepsilon^2}$, since here:

$p < \varepsilon\sqrt{pq} \;\Rightarrow\; \varepsilon^2 > \dfrac{p}{q}$

and $\dfrac{1}{\varepsilon^2} \ge 0$ in any case.

Proved.
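The case analysis above can also be checked exhaustively by brute force over a grid of $p$ and $\varepsilon$ values (a numerical sketch, not part of the proof):

```python
import math

# Brute-force check of P[|X - p| >= eps*sqrt(pq)] <= 1/eps^2 for a Bernoulli RV
# (X = 1 with probability p, X = 0 with probability q = 1 - p).
def lhs(p, eps):
    q = 1.0 - p
    t = eps * math.sqrt(p * q)
    prob = 0.0
    if abs(1.0 - p) >= t:   # the outcome X = 1 deviates enough
        prob += p
    if abs(0.0 - p) >= t:   # the outcome X = 0 deviates enough
        prob += q
    return prob

violations = [(p, eps)
              for p in (i / 100 for i in range(1, 100))
              for eps in (j / 10 for j in range(1, 51))
              if lhs(p, eps) > 1.0 / eps ** 2 + 1e-12]
```

No (p, eps) pair violates the bound; equality is attained, e.g., at p = q = 0.5 with eps = 1.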
Proof

Let $T = a X^{(k-1)/2} + X^{(k+1)/2}$ for a nonnegative RV $X$, and write $\nu_k = \mu_{0,k} = E[X^k]$. Then:

$T^2 = a^2 X^{k-1} + 2a X^{k} + X^{k+1}$

$E[T^2] = a^2 \nu_{k-1} + 2a\,\nu_k + \nu_{k+1} \ge 0$

Dividing by $\nu_{k-1}$ we get:

$a^2 + 2a\,\dfrac{\nu_k}{\nu_{k-1}} + \dfrac{\nu_{k+1}}{\nu_{k-1}} \ge 0$

Now choose $a = -\dfrac{\nu_k}{\nu_{k-1}}$, which yields:

$\dfrac{\nu_{k+1}}{\nu_{k-1}} - \dfrac{\nu_k^2}{\nu_{k-1}^2} \ge 0 \;\Rightarrow\; \nu_{k-1}\,\nu_{k+1} \ge \nu_k^2 \qquad (4.41)$
We see that proposition (4.40) is true for $k = 1, 2, 3$. Let us claim it is true for some $k - 1$, i.e. (with $\nu_k = E[X^k]$):

$\nu_{k-1}^{\frac{1}{k-1}} \le \nu_k^{\frac{1}{k}} \qquad (4.42)$

or, $\nu_{k-1}^{k} \le \nu_k^{k-1}$

Raising (4.41) to the power $k$ and using (4.42):

$\nu_k^{2k} \le \nu_{k-1}^{k}\,\nu_{k+1}^{k} \le \nu_k^{k-1}\,\nu_{k+1}^{k} \qquad (4.43)$

or, $\nu_k^{k+1} \le \nu_{k+1}^{k}$, i.e.,

$\nu_k^{\frac{1}{k}} \le \nu_{k+1}^{\frac{1}{k+1}}$

Thus we prove (4.40) by induction: if (4.40) is true for $k - 1$, then it must be true for $k$. Since we have proved (4.40) is true for $k = 3$, it must be true for 4, 5, ….

Proved.
Since $\nu_n^{\frac{1}{n}} \ge \nu_{n-1}^{\frac{1}{n-1}} \ge \nu_{n-2}^{\frac{1}{n-2}} \ge \cdots \ge \nu_m^{\frac{1}{m}} \ge \cdots \ge \nu_3^{\frac{1}{3}} \ge \nu_2^{\frac{1}{2}} \ge \nu_1$ (with $\nu_k = E[X^k]$), for $n \ge m$ an equivalent way of writing Lyapunov's inequality is:

$\nu_n^{\frac{1}{n}} \ge \nu_m^{\frac{1}{m}}, \quad \text{i.e.,} \quad \nu_n^{m} \ge \nu_m^{n} \qquad (4.44)$
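Lyapunov's inequality can be illustrated numerically; the four-point PMF below is a hypothetical example of a nonnegative RV:

```python
# Check that nu_k^{1/k}, nu_k = E[X^k], is non-decreasing in k (Lyapunov),
# for a hypothetical nonnegative discrete RV.
pmf = {0.5: 0.3, 1.0: 0.2, 2.0: 0.4, 4.0: 0.1}

def raw_moment(k):
    return sum(x ** k * p for x, p in pmf.items())

norms = [raw_moment(k) ** (1.0 / k) for k in range(1, 11)]
nondecreasing = all(a <= b + 1e-12 for a, b in zip(norms, norms[1:]))

# Equivalent form (4.44): for n >= m, nu_n^m >= nu_m^n (here n = 5, m = 2).
check_44 = raw_moment(5) ** 2 >= raw_moment(2) ** 5 - 1e-9
```

Any other choice of PMF with finite moments would do: the monotone sequence of moment roots is exactly what (4.40) and (4.44) assert.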