
STAT 410

Chebyshev's Inequality:

Let $X$ be any random variable with mean $\mu$ and variance $\sigma^2$. For any $\varepsilon > 0$,
$P(|X - \mu| < \varepsilon) \ge 1 - \dfrac{\sigma^2}{\varepsilon^2}$,
or, equivalently,
$P(|X - \mu| \ge \varepsilon) \le \dfrac{\sigma^2}{\varepsilon^2}$.

Setting $\varepsilon = k\sigma$, $k > 1$, we obtain
$P(|X - \mu| < k\sigma) \ge 1 - \dfrac{1}{k^2}$,
or, equivalently,
$P(|X - \mu| \ge k\sigma) \le \dfrac{1}{k^2}$.

That is, for any $k > 1$, the probability that the value of any random variable will be within $k$ standard deviations of its mean is at least $1 - \dfrac{1}{k^2}$.
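A quick Monte Carlo sketch (the Exponential(1) choice, the sample size, and the $k$ values below are arbitrary assumptions for illustration) comparing the empirical tail probability with the Chebyshev bound $1/k^2$:

```python
import numpy as np

# Monte Carlo check of Chebyshev's inequality: P(|X - mu| >= k*sigma) <= 1/k^2.
# Exponential(mean = 1) is an arbitrary choice: mu = 1, sigma = 1.
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)
mu, sigma = 1.0, 1.0

for k in (1.5, 2.0, 3.0):
    empirical = np.mean(np.abs(x - mu) >= k * sigma)
    print(f"k={k}: empirical tail = {empirical:.4f}, Chebyshev bound = {1/k**2:.4f}")
```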

Def

Let $U_1, U_2, \ldots$ be an infinite sequence of random variables, and let $W$ be another random variable. Then the sequence $\{U_n\}$ converges in probability to $W$ if, for all $\varepsilon > 0$,
$\lim_{n\to\infty} P(|U_n - W| \ge \varepsilon) = 0$,
and we write $U_n \stackrel{P}{\to} W$.

Def

An estimator $\hat\theta$ for $\theta$ is said to be consistent if $\hat\theta \stackrel{P}{\to} \theta$, i.e.,
for all $\varepsilon > 0$, $P(|\hat\theta - \theta| \ge \varepsilon) \to 0$ as $n \to \infty$.

The (Weak) Law of Large Numbers:

Let $X_1, X_2, \ldots$ be a sequence of independent random variables, each having the same mean $\mu$ and each having variance less than or equal to $v < \infty$. Let
$M_n = \dfrac{X_1 + \cdots + X_n}{n}$, $n = 1, 2, \ldots$.
Then $M_n \stackrel{P}{\to} \mu$. That is, for all $\varepsilon > 0$, $\lim_{n\to\infty} P(|M_n - \mu| \ge \varepsilon) = 0$.

Let $X_1, X_2, \ldots$ be i.i.d. with mean $\mu$ and standard deviation $\sigma$. Let
$\bar{X}_n = \dfrac{X_1 + \cdots + X_n}{n}$, $n = 1, 2, \ldots$.
Then $\bar{X}_n \stackrel{P}{\to} \mu$. That is, for all $\varepsilon > 0$,
$\lim_{n\to\infty} P(|\bar{X}_n - \mu| \ge \varepsilon) = 0$.

Let $Y_n$ be the number of successes in $n$ independent Bernoulli trials with probability $p$ of success on each trial. Then for all $\varepsilon > 0$,
$P\!\left(\left|\dfrac{Y_n}{n} - p\right| \ge \varepsilon\right) \le \dfrac{p(1-p)}{n\,\varepsilon^2}$,
and $\dfrac{Y_n}{n} \stackrel{P}{\to} p$.
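A small simulation sketch of the sample proportion (the values $p = 0.3$, $\varepsilon = 0.05$, and the sample sizes are arbitrary choices), showing the empirical tail probability sitting below the bound $p(1-p)/(n\varepsilon^2)$:

```python
import numpy as np

# Sample proportion Y_n / n for Bernoulli(p) trials: the empirical probability of
# deviating from p by at least eps shrinks, consistent with the bound p(1-p)/(n eps^2).
rng = np.random.default_rng(1)
p, eps, reps = 0.3, 0.05, 20_000

for n in (50, 200, 1000):
    y = rng.binomial(n, p, size=reps)          # Y_n for each replication
    tail = np.mean(np.abs(y / n - p) >= eps)   # empirical P(|Y_n/n - p| >= eps)
    print(f"n={n}: empirical = {tail:.4f}, bound = {p*(1-p)/(n*eps**2):.4f}")
```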

$X_n \stackrel{P}{\to} X$, $Y_n \stackrel{P}{\to} Y$ $\Rightarrow$ $X_n + Y_n \stackrel{P}{\to} X + Y$

$X_n \stackrel{P}{\to} X$, $a = \text{const}$ $\Rightarrow$ $a X_n \stackrel{P}{\to} a X$

$X_n \stackrel{P}{\to} a$, $g$ is continuous at $a$ $\Rightarrow$ $g(X_n) \stackrel{P}{\to} g(a)$

$X_n \stackrel{P}{\to} X$, $g$ is continuous $\Rightarrow$ $g(X_n) \stackrel{P}{\to} g(X)$

$X_n \stackrel{P}{\to} X$, $Y_n \stackrel{P}{\to} Y$ $\Rightarrow$ $X_n Y_n \stackrel{P}{\to} X Y$

Example 1:

Let $X_n$ have p.d.f. $f_n(x) = n\,e^{-nx}$, for $x > 0$, zero otherwise.

Then $X_n \stackrel{P}{\to} 0$, since
if $\varepsilon > 0$, $P(|X_n - 0| \ge \varepsilon) = P(X_n \ge \varepsilon) = e^{-n\varepsilon} \to 0$ as $n \to \infty$.
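Since $X_n$ here is Exponential with rate $n$, a brief simulation sketch (the $\varepsilon$ and $n$ values are arbitrary) can confirm $P(X_n \ge \varepsilon) = e^{-n\varepsilon}$:

```python
import numpy as np

# Example 1 check: X_n ~ Exponential(rate n), so P(X_n >= eps) = exp(-n*eps) -> 0.
rng = np.random.default_rng(2)
eps, reps = 0.1, 100_000

for n in (5, 20, 80):
    xn = rng.exponential(scale=1.0 / n, size=reps)
    print(f"n={n}: empirical = {np.mean(xn >= eps):.4f}, exact = {np.exp(-n*eps):.4f}")
```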

Example 2:

Let $X_n$ have p.d.f. $f_n(x) = n\,x^{n-1}$, for $0 < x < 1$, zero otherwise.

Then $X_n \stackrel{P}{\to} 1$, since
if $0 < \varepsilon \le 1$, $P(|X_n - 1| \ge \varepsilon) = P(X_n \le 1 - \varepsilon) = (1 - \varepsilon)^n \to 0$ as $n \to \infty$,
and
if $\varepsilon > 1$, $P(|X_n - 1| \ge \varepsilon) = 0$.

Example 3:

Let $X_n$ have p.m.f. $P(X_n = 3) = 1 - \dfrac{1}{n}$, $P(X_n = 7) = \dfrac{1}{n}$.

Then $X_n \stackrel{P}{\to} 3$, since
if $0 < \varepsilon \le 4$, $P(|X_n - 3| \ge \varepsilon) = \dfrac{1}{n} \to 0$ as $n \to \infty$, and
if $\varepsilon > 4$, $P(|X_n - 3| \ge \varepsilon) = 0$.
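For Example 2, $F_n(x) = x^n$ on $(0,1)$, so $X_n$ can be generated as $U^{1/n}$; a short simulation sketch (the $\varepsilon$ and $n$ values are arbitrary) checking $P(|X_n - 1| \ge \varepsilon) = (1-\varepsilon)^n$:

```python
import numpy as np

# Example 2 check: F_n(x) = x^n on (0,1), so X_n can be generated as U**(1/n).
# For 0 < eps <= 1, P(|X_n - 1| >= eps) = (1 - eps)**n -> 0.
rng = np.random.default_rng(3)
eps, reps = 0.05, 100_000

for n in (10, 50, 200):
    xn = rng.uniform(size=reps) ** (1.0 / n)
    print(f"n={n}: empirical = {np.mean(np.abs(xn - 1) >= eps):.4f}, "
          f"exact = {(1 - eps)**n:.4f}")
```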

Example 4:

Suppose $U \sim \text{Uniform}(0, 1)$. Let

$X_n = \begin{cases} 1 & \text{if } U \in \left[0,\ \frac{1}{3} + \frac{1}{n}\right) \\ 2 & \text{if } U \in \left[\frac{1}{3} + \frac{1}{n},\ \frac{2}{3} - \frac{1}{n}\right) \\ 3 & \text{if } U \in \left[\frac{2}{3} - \frac{1}{n},\ 1\right] \end{cases}$  and  $X = \begin{cases} 1 & \text{if } U \in \left[0,\ \frac{1}{3}\right) \\ 2 & \text{if } U \in \left[\frac{1}{3},\ \frac{2}{3}\right) \\ 3 & \text{if } U \in \left[\frac{2}{3},\ 1\right] \end{cases}$

Then $P(|X_n - X| \ge \varepsilon) = \dfrac{2}{n}$ for $0 < \varepsilon \le 1$, and $P(|X_n - X| \ge \varepsilon) = 0$ for $\varepsilon > 1$.

Therefore, $X_n \stackrel{P}{\to} X$.
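Because $X_n$ and $X$ are built from the same $U$, they differ exactly on two intervals of total length $2/n$; a simulation sketch of this coupling (the $n$ values are arbitrary):

```python
import numpy as np

# Example 4 check: X_n and X are defined from the same U ~ Uniform(0,1),
# and they differ only when U falls in two intervals of total length 2/n.
rng = np.random.default_rng(4)
reps = 200_000
u = rng.uniform(size=reps)

def xn(u, n):
    return np.where(u < 1/3 + 1/n, 1, np.where(u < 2/3 - 1/n, 2, 3))

x = np.where(u < 1/3, 1, np.where(u < 2/3, 2, 3))

for n in (10, 50, 200):
    print(f"n={n}: empirical P(X_n != X) = {np.mean(xn(u, n) != x):.4f}, exact = {2/n:.4f}")
```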

1. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from the distribution with probability density function
$f(x; \theta) = \dfrac{1}{\theta}\,x^{\frac{1}{\theta} - 1}$, $0 \le x \le 1$, $0 < \theta < \infty$; zero otherwise.

Recall:
Method of moments estimator of $\theta$ is $\tilde\theta = \dfrac{1 - \bar{X}}{\bar{X}} = \dfrac{1}{\bar{X}} - 1$.
Maximum likelihood estimator of $\theta$ is $\hat\theta = -\dfrac{1}{n}\sum_{i=1}^{n} \ln X_i$.
$E(X) = \dfrac{1}{1+\theta}$, $E(-\ln X) = \theta$.

a) Show that $\hat\theta$ is a consistent estimator of $\theta$.

Let $Y_i = -\ln X_i$, $i = 1, 2, \ldots, n$. Then $\hat\theta = \bar{Y}$.
By WLLN, $\hat\theta = \bar{Y} \stackrel{P}{\to} E(Y) = \theta$.

b) Show that $\tilde\theta$ is a consistent estimator of $\theta$.

By WLLN, $\bar{X} \stackrel{P}{\to} E(X) = \dfrac{1}{1+\theta}$.
$g(x) = \dfrac{1-x}{x}$ is continuous at $\dfrac{1}{1+\theta}$
$\Rightarrow$ $\tilde\theta = g(\bar{X}) \stackrel{P}{\to} g\!\left(\dfrac{1}{1+\theta}\right) = \theta$.
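A consistency check by simulation (hypothetical parameter value and sample sizes): since $F(x;\theta) = x^{1/\theta}$ on $(0,1)$, a draw can be generated as $U^\theta$:

```python
import numpy as np

# Consistency sketch for Problem 1: since F(x; theta) = x**(1/theta) on (0,1),
# X can be generated as U**theta. Both estimators should settle near theta.
rng = np.random.default_rng(5)
theta = 2.0

for n in (20, 200, 2000):
    x = rng.uniform(size=n) ** theta
    theta_mle = -np.mean(np.log(x))          # MLE: -(1/n) * sum(ln X_i)
    theta_mom = 1.0 / np.mean(x) - 1.0       # method of moments: 1/Xbar - 1
    print(f"n={n}: MLE = {theta_mle:.3f}, MOM = {theta_mom:.3f} (true theta = {theta})")
```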

Similarly to Chebyshev's Inequality, $P(|\hat\theta - \theta| \ge \varepsilon) \le \dfrac{\mathrm{MSE}(\hat\theta)}{\varepsilon^2}$.

2. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a uniform distribution on the interval $(0, \theta)$.

a) Show that $\tilde\theta = 2\bar{X}$ is a consistent estimator of $\theta$.

By WLLN, $\bar{X}_n \stackrel{P}{\to} E(X) = \dfrac{\theta}{2}$.
$\Rightarrow$ $\tilde\theta = 2\bar{X} \stackrel{P}{\to} \theta$.

OR

$\mathrm{MSE}(\tilde\theta) = \dfrac{\theta^2}{3n} \to 0$ as $n \to \infty$.
$\Rightarrow$ $P(|\tilde\theta - \theta| \ge \varepsilon) \to 0$ as $n \to \infty$ for any $\varepsilon > 0$.
$\Rightarrow$ $\tilde\theta = 2\bar{X}$ is a consistent estimator for $\theta$.

b) Show that $\hat\theta = \max X_i$ is a consistent estimator of $\theta$.

$f_{\max X_i}(x) = F'_{\max X_i}(x) = \dfrac{n\,x^{n-1}}{\theta^n}$, $0 < x < \theta$.

$P(|\hat\theta - \theta| \ge \varepsilon) = P(\max X_i \le \theta - \varepsilon) = \displaystyle\int_0^{\theta - \varepsilon} \dfrac{n\,x^{n-1}}{\theta^n}\,dx = \left(\dfrac{\theta - \varepsilon}{\theta}\right)^n = \left(1 - \dfrac{\varepsilon}{\theta}\right)^n \to 0$ as $n \to \infty$.
$\Rightarrow$ $\hat\theta = \max X_i$ is a consistent estimator for $\theta$.

OR

$\mathrm{MSE}(\hat\theta) = \dfrac{2\theta^2}{(n+1)(n+2)} \to 0$ as $n \to \infty$.
$\Rightarrow$ $P(|\hat\theta - \theta| \ge \varepsilon) \to 0$ as $n \to \infty$ for any $\varepsilon > 0$.
$\Rightarrow$ $\hat\theta = \max X_i$ is a consistent estimator for $\theta$.
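A simulation sketch (hypothetical $\theta$ and sample sizes) showing both $2\bar{X}$ and $\max X_i$ settling near $\theta$:

```python
import numpy as np

# Consistency sketch for Problem 2: both 2*Xbar and max(X_i) approach theta
# as n grows, for samples from Uniform(0, theta).
rng = np.random.default_rng(6)
theta = 4.0

for n in (20, 200, 2000):
    x = rng.uniform(0.0, theta, size=n)
    print(f"n={n}: 2*Xbar = {2*np.mean(x):.3f}, max = {np.max(x):.3f} (true theta = {theta})")
```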

1. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a Geometric($p$) distribution (the number of independent trials until the first success). That is,
$P(X_1 = k) = (1-p)^{k-1}\,p$, $k = 1, 2, 3, \ldots$.

Show that $\hat{p} = \dfrac{1}{\bar{X}}$ is a consistent estimator of $p$.

By WLLN, $\bar{X} \stackrel{P}{\to} E(X) = \dfrac{1}{p}$.
Since $g(x) = \dfrac{1}{x}$ is continuous at $\dfrac{1}{p}$,
$\hat{p} = g(\bar{X}) \stackrel{P}{\to} g\!\left(\dfrac{1}{p}\right) = p$.
$\Rightarrow$ $\hat{p}$ is a consistent estimator for $p$.
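A simulation sketch (hypothetical $p$ and sample sizes); NumPy's geometric generator counts trials until the first success, matching the parameterization above:

```python
import numpy as np

# Consistency sketch for the Geometric(p) problem: p_hat = 1 / Xbar -> p.
rng = np.random.default_rng(7)
p = 0.25

for n in (20, 200, 2000):
    x = rng.geometric(p, size=n)   # number of trials until the first success
    print(f"n={n}: p_hat = {1/np.mean(x):.3f} (true p = {p})")
```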

3. Let $X_1, X_2, \ldots, X_n$ be a random sample from the distribution with probability density function
$f(x) = 4\,\theta\,x^3\,e^{-\theta x^4}$, $x > 0$, $\theta > 0$.

We already know (Examples for 07/13/2015 (part 2)) that the maximum likelihood estimator of $\theta$ is $\hat\theta = \dfrac{n}{\sum_{i=1}^n X_i^4}$.

Is $\hat\theta$ a consistent estimator of $\theta$?

By WLLN, $\dfrac{1}{n}\sum_{i=1}^n X_i^4 \stackrel{P}{\to} E(X^4) = \dfrac{1}{\theta}$.
Since $g(x) = \dfrac{1}{x}$ is continuous at $\dfrac{1}{\theta}$,
$\hat\theta = g\!\left(\dfrac{1}{n}\sum_{i=1}^n X_i^4\right) \stackrel{P}{\to} g\!\left(\dfrac{1}{\theta}\right) = \theta$.
$\Rightarrow$ $\hat\theta$ is a consistent estimator of $\theta$.

OR

$\mathrm{MSE}(\hat\theta) = \dfrac{(n+2)\,\theta^2}{(n-1)(n-2)} \to 0$ as $n \to \infty$.
$\Rightarrow$ $P(|\hat\theta - \theta| \ge \varepsilon) \to 0$ as $n \to \infty$ for any $\varepsilon > 0$.
$\Rightarrow$ $\hat\theta$ is a consistent estimator of $\theta$.
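A simulation sketch (hypothetical $\theta$ and sample sizes): since $X^4 \sim$ Exponential(mean $1/\theta$), a draw can be generated as $(E/\theta)^{1/4}$ with $E \sim$ Exponential(1):

```python
import numpy as np

# Consistency sketch for Problem 3: if X has density 4*theta*x^3*exp(-theta*x^4),
# then X^4 ~ Exponential(mean 1/theta), so X can be generated as (E/theta)**0.25.
rng = np.random.default_rng(8)
theta = 1.5

for n in (20, 200, 2000):
    x = (rng.exponential(scale=1.0, size=n) / theta) ** 0.25
    theta_hat = n / np.sum(x**4)
    print(f"n={n}: theta_hat = {theta_hat:.3f} (true theta = {theta})")
```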

4. Let $\theta > 0$ and let $X_1, X_2, \ldots, X_n$ be a random sample from the distribution with the probability density function
$f(x; \theta) = 2\,\theta^2\,x^3\,e^{-\theta x^2}$, $x > 0$.

a) Recall that the maximum likelihood estimator of $\theta$ is $\hat\theta = \dfrac{2n}{\sum_{i=1}^n X_i^2}$.

Is $\hat\theta$ a consistent estimator of $\theta$?

$E(X^2) = 2\,\theta^2 \displaystyle\int_0^\infty x^5\,e^{-\theta x^2}\,dx = \theta^2\,\dfrac{\Gamma(3)}{\theta^3} = \dfrac{2!}{\theta} = \dfrac{2}{\theta}$.

OR

$W = X^2$ has a Gamma($\alpha = 2$, $\beta = \frac{1}{\theta}$) distribution.
$E(X^2) = E(W) = \alpha\beta = \dfrac{2}{\theta}$.

By WLLN, $\dfrac{1}{n}\sum_{i=1}^n X_i^2 \stackrel{P}{\to} E(X^2) = \dfrac{2}{\theta}$.

[$X_n \stackrel{P}{\to} a$, $g$ is continuous at $a$ $\Rightarrow$ $g(X_n) \stackrel{P}{\to} g(a)$]

Since $g(x) = \dfrac{2}{x}$ is continuous at $\dfrac{2}{\theta}$,
$\hat\theta = g\!\left(\dfrac{1}{n}\sum_{i=1}^n X_i^2\right) \stackrel{P}{\to} g\!\left(\dfrac{2}{\theta}\right) = \theta$.

b) Construct a consistent estimator for $\theta$ based on $\sum_{i=1}^n X_i^4$.

Hint: Recall that $E(X^k) = \dfrac{\Gamma\!\left(\frac{k}{2} + 2\right)}{\theta^{k/2}}$, $k > -4$.

$E(X^4) = \dfrac{\Gamma\!\left(\frac{4}{2} + 2\right)}{\theta^{4/2}} = \dfrac{\Gamma(4)}{\theta^2} = \dfrac{3!}{\theta^2} = \dfrac{6}{\theta^2}$.

By WLLN, $\dfrac{1}{n}\sum_{i=1}^n X_i^4 \stackrel{P}{\to} E(X^4) = \dfrac{6}{\theta^2}$.

[$X_n \stackrel{P}{\to} a$, $g$ is continuous at $a$ $\Rightarrow$ $g(X_n) \stackrel{P}{\to} g(a)$]

Consider $\tilde\theta = \sqrt{\dfrac{6n}{\sum_{i=1}^n X_i^4}}$.

Since $g(x) = \sqrt{\dfrac{6}{x}}$ is continuous at $\dfrac{6}{\theta^2}$,
$\tilde\theta = g\!\left(\dfrac{1}{n}\sum_{i=1}^n X_i^4\right) \stackrel{P}{\to} g\!\left(\dfrac{6}{\theta^2}\right) = \theta$.
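A simulation sketch (hypothetical $\theta$ and sample sizes): since $X^2 \sim$ Gamma($2$, $1/\theta$), a draw can be generated as the square root of a Gamma variate; both $\hat\theta$ and $\tilde\theta$ should settle near $\theta$:

```python
import numpy as np

# Consistency sketch for Problem 4: X^2 ~ Gamma(shape=2, scale=1/theta), so X can be
# generated as the square root of a Gamma draw. Both estimators approach theta.
rng = np.random.default_rng(9)
theta = 3.0

for n in (20, 200, 2000):
    x = np.sqrt(rng.gamma(shape=2.0, scale=1.0 / theta, size=n))
    theta_hat   = 2 * n / np.sum(x**2)            # MLE based on sum of X_i^2
    theta_tilde = np.sqrt(6 * n / np.sum(x**4))   # estimator based on sum of X_i^4
    print(f"n={n}: theta_hat = {theta_hat:.3f}, theta_tilde = {theta_tilde:.3f} "
          f"(true theta = {theta})")
```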

5.

a) Let $S^2$ be the sample variance of a random sample from a distribution with variance $\sigma^2 > 0$. Since $E(S^2) = \sigma^2$, why isn't $E(S) = \sigma$?
Hint: Use Jensen's inequality to show that $E(S) < \sigma$.

$g(x) = x^2$ is strictly convex. By Jensen's Inequality,
$\sigma^2 = E(S^2) = E[g(S)] > g[E(S)] = [E(S)]^2$.
Therefore, $\sigma > E(S)$.

OR

$E(S^2) = \sigma^2$. $g(x) = \sqrt{x}$, $x > 0$, is strictly concave. By Jensen's Inequality,
$E(S) = E[g(S^2)] < g[E(S^2)] = \sigma$.
Therefore, $E(S) < \sigma$.

b) Suppose that the sample is drawn from a $N(\mu, \sigma^2)$ distribution. Recall that $(n-1)S^2/\sigma^2$ has a $\chi^2(n-1)$ distribution. Use Theorem 3.3.1 to determine an unbiased estimator of $\sigma$. That is, find $b$ such that $E(bS) = \sigma$. ($b$ would depend on $n$.)

Hint: Theorem 3.3.1. Let $X$ have a $\chi^2(r)$ distribution. If $k > -\frac{r}{2}$, then $E(X^k)$ exists and it is given by
$E(X^k) = \dfrac{2^k\,\Gamma\!\left(\frac{r}{2} + k\right)}{\Gamma\!\left(\frac{r}{2}\right)}$.

By Theorem 3.3.1, if $r = n-1$ and $k = \frac{1}{2}$, then
$E\!\left(\sqrt{\dfrac{(n-1)S^2}{\sigma^2}}\right) = \dfrac{\sqrt{2}\,\Gamma\!\left(\frac{n}{2}\right)}{\Gamma\!\left(\frac{n-1}{2}\right)}$.

Therefore,
$b = \sqrt{\dfrac{n-1}{2}}\,\dfrac{\Gamma\!\left(\frac{n-1}{2}\right)}{\Gamma\!\left(\frac{n}{2}\right)}$, and $bS$ is unbiased for $\sigma$,

and, for example,

n = 5: $b = \sqrt{\dfrac{4}{2}}\,\dfrac{\Gamma(2)}{\Gamma\!\left(\frac{5}{2}\right)} = \dfrac{8}{3\sqrt{2\pi}} \approx 1.063846$;

n = 6: $b = \sqrt{\dfrac{5}{2}}\,\dfrac{\Gamma\!\left(\frac{5}{2}\right)}{\Gamma(3)} = \dfrac{3}{8}\sqrt{\dfrac{5\pi}{2}} \approx 1.050936$.
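A simulation sketch (hypothetical $\mu$, $\sigma$, and replication count) checking that $E(bS) \approx \sigma$ for $n = 5$:

```python
import numpy as np
from math import gamma, sqrt

# Check of part (b): for normal samples of size n = 5, E(b*S) should be close to sigma,
# where b = sqrt((n-1)/2) * Gamma((n-1)/2) / Gamma(n/2) ~ 1.063846.
rng = np.random.default_rng(10)
n, mu, sigma, reps = 5, 0.0, 2.0, 200_000
b = sqrt((n - 1) / 2) * gamma((n - 1) / 2) / gamma(n / 2)

samples = rng.normal(mu, sigma, size=(reps, n))
s = np.std(samples, axis=1, ddof=1)            # sample standard deviation S
print(f"b = {b:.6f}, E(S) ~ {s.mean():.4f}, E(b*S) ~ {(b * s).mean():.4f}, sigma = {sigma}")
```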

c) Suppose that the sample is drawn from a $N(\mu, \sigma^2)$ distribution. Which value of $c$ minimizes $\mathrm{MSE}(cS^2) = E[(cS^2 - \sigma^2)^2]$? ($c$ would depend on $n$.)

Hint: Recall that $E(\chi^2(r)) = r$ and $\mathrm{Var}(\chi^2(r)) = 2r$.

$E(S^2) = \dfrac{\sigma^2}{n-1}\,E\!\left(\dfrac{(n-1)S^2}{\sigma^2}\right) = \dfrac{\sigma^2}{n-1}\,(n-1) = \sigma^2$.

$\mathrm{Var}(S^2) = \dfrac{\sigma^4}{(n-1)^2}\,\mathrm{Var}\!\left(\dfrac{(n-1)S^2}{\sigma^2}\right) = \dfrac{\sigma^4}{(n-1)^2}\cdot 2(n-1) = \dfrac{2\sigma^4}{n-1}$.

$\mathrm{MSE}(c\hat\theta) = E[(c\hat\theta - \theta)^2] = c^2 E(\hat\theta^2) - 2c\,\theta\,E(\hat\theta) + \theta^2$ $\Rightarrow$ $c_{\min} = \dfrac{\theta\,E(\hat\theta)}{E(\hat\theta^2)}$.

For $\hat\theta = S^2$,
$c_{\min} = \dfrac{\sigma^2\,E(S^2)}{E[(S^2)^2]} = \dfrac{\sigma^4}{\mathrm{Var}(S^2) + [E(S^2)]^2} = \dfrac{\sigma^4}{\frac{2\sigma^4}{n-1} + \sigma^4} = \dfrac{n-1}{n+1}$.

$\dfrac{n-1}{n+1}\,S^2 = \dfrac{1}{n+1}\sum_{i=1}^n (X_i - \bar{X})^2$ would minimize the Mean Squared Error.

OR

$E(cS^2) = c\,E(S^2) = c\,\sigma^2$.

$\mathrm{Var}(cS^2) = c^2\,\mathrm{Var}(S^2) = \dfrac{2c^2\sigma^4}{n-1}$.

$\mathrm{MSE}(\hat\theta) = E[(\hat\theta - \theta)^2] = (\theta - E(\hat\theta))^2 + \mathrm{Var}(\hat\theta) = (\mathrm{bias}(\hat\theta))^2 + \mathrm{Var}(\hat\theta)$.

$\mathrm{MSE}(cS^2) = (E(cS^2) - \sigma^2)^2 + \mathrm{Var}(cS^2) = (c-1)^2\sigma^4 + \dfrac{2c^2\sigma^4}{n-1} = \left(\dfrac{n+1}{n-1}\,c^2 - 2c + 1\right)\sigma^4$.

$\dfrac{d}{dc}\,\mathrm{MSE}(cS^2) = 2\left(\dfrac{n+1}{n-1}\,c - 1\right)\sigma^4 = 0$ $\Rightarrow$ $c = \dfrac{n-1}{n+1}$.

$\dfrac{n-1}{n+1}\,S^2 = \dfrac{1}{n+1}\sum_{i=1}^n (X_i - \bar{X})^2$ would minimize the Mean Squared Error.

$\left(\dfrac{d^2}{dc^2}\,\mathrm{MSE}(cS^2) = 2\,\dfrac{n+1}{n-1}\,\sigma^4 > 0 \Rightarrow \text{min.}\right)$
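A simulation sketch (hypothetical $n$, $\sigma$, and grid of $c$ values) estimating $\mathrm{MSE}(cS^2)$ and locating its minimizer near $(n-1)/(n+1)$:

```python
import numpy as np

# Check of part (c): estimate MSE(c*S^2) over a grid of c for normal samples and
# verify the minimizer is close to (n-1)/(n+1). Sample size and sigma are arbitrary.
rng = np.random.default_rng(11)
n, sigma, reps = 10, 1.5, 200_000

samples = rng.normal(0.0, sigma, size=(reps, n))
s2 = np.var(samples, axis=1, ddof=1)                      # sample variance S^2
cs = np.linspace(0.5, 1.2, 141)
mse = np.array([np.mean((c * s2 - sigma**2) ** 2) for c in cs])
print(f"empirical argmin c = {cs[np.argmin(mse)]:.3f}, (n-1)/(n+1) = {(n-1)/(n+1):.3f}")
```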
