
Lecture 1

Review of Some Statistical Concepts





Appendix A
in
Basic Econometrics 5e
by
Gujarati and Porter
YAAR UNIVERSITY FACULTY of ECONOMICS and ADMINISTRATIVE SCIENCES DEPARTMENT OF ECONOMICS FALL 2010
DO NOT DISTRIBUTE WITHOUT THE INSTRUCTOR'S PERMISSION


A1. Summation and Product Operators

Summation Operator:

$\sum_{i=1}^{n} x_i = x_1 + x_2 + \cdots + x_n$

Properties:
1. $\sum_{i=1}^{n} k = nk$, where k is a constant
2. $\sum_{i=1}^{n} kx_i = k \sum_{i=1}^{n} x_i$, where k is a constant
3. $\sum_{i=1}^{n} (a + bx_i) = na + b \sum_{i=1}^{n} x_i$, where a and b are constants
4. $\sum_{i=1}^{n} (x_i + y_i) = \sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i$

Double Summation Operator:
( )

= = =
+ + + =
n
i
im i i
m
j
ij
n
i
x x x x
1
2 1
1 1
....

( ) ( ) ( )
nm m m n n
x x x x x x x x x + + + + + + + + + + + + = .... ..... .... ....
2 1 2 22 12 1 21 11
Properties:
1. $\sum_{i=1}^{n} \sum_{j=1}^{m} x_{ij} = \sum_{j=1}^{m} \sum_{i=1}^{n} x_{ij}$, the order of summation is interchangeable
2. $\sum_{i=1}^{n} \sum_{j=1}^{m} x_i y_j = \sum_{i=1}^{n} x_i \sum_{j=1}^{m} y_j$
3. $\sum_{i=1}^{n} \sum_{j=1}^{m} (x_{ij} + y_{ij}) = \sum_{i=1}^{n} \sum_{j=1}^{m} x_{ij} + \sum_{i=1}^{n} \sum_{j=1}^{m} y_{ij}$
4. $\left( \sum_{i=1}^{n} x_i \right)^2 = \sum_{i=1}^{n} x_i^2 + 2 \sum_{i=1}^{n} \sum_{j=i+1}^{n} x_i x_j = \sum_{i=1}^{n} x_i^2 + 2 \sum_{i<j} x_i x_j$

Product Operator:

$\prod_{i=1}^{n} x_i = x_1 \cdot x_2 \cdots x_n$
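
As a quick numerical sanity check of these identities, the Python sketch below (illustrative only; the data values and variable names are my own, not part of the notes) verifies properties 2 and 3 of the summation operator, property 4 of the double sum, and the product operator.

```python
from math import prod

# Arbitrary data for a numerical check of the summation identities above.
x = [2.0, 5.0, 1.0, 4.0]
k, a, b = 3.0, 1.5, 2.0
n = len(x)

# Property 2: sum(k * x_i) = k * sum(x_i)
assert abs(sum(k * xi for xi in x) - k * sum(x)) < 1e-12

# Property 3: sum(a + b * x_i) = n*a + b * sum(x_i)
assert abs(sum(a + b * xi for xi in x) - (n * a + b * sum(x))) < 1e-12

# Double-sum property 4: (sum x_i)^2 = sum x_i^2 + 2 * sum_{i<j} x_i x_j
lhs = sum(x) ** 2
rhs = sum(xi ** 2 for xi in x) + 2 * sum(
    x[i] * x[j] for i in range(n) for j in range(i + 1, n)
)
assert abs(lhs - rhs) < 1e-12

# Product operator: prod(x_i) = x_1 * x_2 * ... * x_n
assert prod(x) == 2.0 * 5.0 * 1.0 * 4.0
print("All summation/product identities verified numerically.")
```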


A2. Sample Space, Sample Points and
Events

Sample Space (population): set of all possible
outcomes of a random experiment
Ex1: sample space of tossing 2 coins: {HH, HT, TH, TT}

Sample Point: each member of the sample space
Ex: HH

Event: a subset of the sample space
Ex2: event A is the occurrence of one head and one tail,
so the subset is {HT, TH}

Mutually exclusive: the occurrence of one precludes the
occurrence of the other
Ex3: A: two heads
B: at least one tail
A and B are mutually exclusive events

Exhaustive: exhaust all possible outcomes of an
experiment
Ex4: A: two heads
B: two tails
C: one head one tail
A, B and C are exhaustive events


A3. Probability and Random Variables

Probability: Let A be an event in a sample space. P(A)
means the proportion of times the event A will occur in
repeated trials of an experiment. The probability of event A
is the relative frequency of A.

Properties:

1. $0 \le P(A) \le 1$, for every A
2. If A, B, C,.... constitute an exhaustive set of events,
P(A+B+C+...)=1
3. If A, B, C,.... are mutually exclusive events,
P(A+B+C+...)=P(A)+P(B)+P(C)+.....

Random Variable: a variable whose value is
determined by the outcome of a chance experiment.
Capital letters denote the variable: X
Lowercase letters denote its values: x
Discrete r.v.: takes on only a finite number of
values
Continuous r.v.: takes on any value in some interval
of values
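
As an illustration of these definitions, the sketch below (assuming fair coins; the event follows Ex2) enumerates the two-coin sample space and estimates P(one head and one tail) as a long-run relative frequency.

```python
import itertools
import random

# Sample space of tossing 2 coins (Ex1): {HH, HT, TH, TT}
sample_space = ["".join(p) for p in itertools.product("HT", repeat=2)]
print(sample_space)  # ['HH', 'HT', 'TH', 'TT']

# Event A (Ex2): one head and one tail -> subset {HT, TH}
A = {s for s in sample_space if s.count("H") == 1}

# Probability as long-run relative frequency (fair coins assumed,
# so every sample point is equally likely)
random.seed(0)
trials = 100_000
hits = sum(random.choice(sample_space) in A for _ in range(trials))
print(hits / trials)  # close to P(A) = 2/4 = 0.5
```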


A4. Probability Density Function (PDF)

PDF of a Discrete r.v.: the probability that X takes on the
value $x_i$ is given by

$f(x) = P(X = x_i)$ for $i = 1, 2, 3, \ldots, n$
$f(x) = 0$ for $x \ne x_i$
Ex5: throw 2 dice, X is the sum of the two numbers on
the dice
x      2     3     4     5     6     7     8     9     10    11    12
f(x)   1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36
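
This table can also be generated by brute-force enumeration of the 36 equally likely outcomes; a minimal sketch (assuming fair dice) is given below.

```python
from fractions import Fraction
from itertools import product

# Ex5: X = sum of two fair dice; enumerate the 36 equally likely outcomes.
f = {}
for d1, d2 in product(range(1, 7), repeat=2):
    x = d1 + d2
    f[x] = f.get(x, Fraction(0)) + Fraction(1, 36)

for x in sorted(f):
    print(x, f[x])       # 2 -> 1/36, 3 -> 1/18 (= 2/36), ..., 7 -> 1/6, ..., 12 -> 1/36
print(sum(f.values()))   # 1, as a PDF must sum to
```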







PDF of a Continuous r.v.: $f(x)$ is said to be the PDF of
X iff:

$f(x) \ge 0$
$\int_{-\infty}^{+\infty} f(x)\,dx = 1$
$\int_{a}^{b} f(x)\,dx = P(a \le x \le b)$

where $f(x)\,dx$ is known as the probability element and
$P(a \le x \le b)$ means the probability that X lies in the
interval a to b.



Discrete Joint PDF: Let X and Y be two discrete
random variables. Then

$f(x, y) = P(X = x \text{ and } Y = y)$
$f(x, y) = 0$ otherwise (i.e. when $X \ne x$ or $Y \ne y$)



Discrete Marginal PDF: In relation to $f(x, y)$, $f(x)$
and $f(y)$ are called individual or marginal probability
distribution functions:

$f(x) = \sum_{y} f(x, y)$   marginal PDF of X
$f(y) = \sum_{x} f(x, y)$   marginal PDF of Y

Discrete Conditional PDF: gives the probability that X
will take on x given that Y takes on y, and vice versa.

$f(x \mid y) = P(X = x \mid Y = y)$   conditional PDF of X
$f(y \mid x) = P(Y = y \mid X = x)$   conditional PDF of Y

These are obtained as:

$f(x \mid y) = \frac{f(x, y)}{f(y)}$   conditional PDF of X
$f(y \mid x) = \frac{f(x, y)}{f(x)}$   conditional PDF of Y

Statistical Independence: X and Y are independent iff
their joint PDF can be expressed as the product of the
marginal PDFs:

$f(x, y) = f(x) f(y)$

Notice that:

$f(x \mid y) = f(x)$, if X and Y are statistically independent.


Ex6: A bag contains 3 balls numbered 1, 2, 3. Two balls
are drawn at random with replacement. Let X denote the
number of the first ball, Y the second ball. Joint PDF of
X and Y is given by:

          X = 1    X = 2    X = 3
Y = 1     1/9      1/9      1/9
Y = 2     1/9      1/9      1/9
Y = 3     1/9      1/9      1/9

a) derive the MPDFs of X and Y
b) derive the CPDFs of X and Y
c) Are X and Y independent?

a) For the MPDF of X:
$f(x=1) = \sum_{y} f(1, y) = \frac{1}{9} + \frac{1}{9} + \frac{1}{9} = \frac{1}{3}$
$f(x=2) = \sum_{y} f(2, y) = \frac{1}{9} + \frac{1}{9} + \frac{1}{9} = \frac{1}{3}$
$f(x=3) = \sum_{y} f(3, y) = \frac{1}{9} + \frac{1}{9} + \frac{1}{9} = \frac{1}{3}$

For the MPDF of Y:
$f(y=1) = \sum_{x} f(x, 1) = \frac{1}{9} + \frac{1}{9} + \frac{1}{9} = \frac{1}{3}$
$f(y=2) = \sum_{x} f(x, 2) = \frac{1}{9} + \frac{1}{9} + \frac{1}{9} = \frac{1}{3}$
$f(y=3) = \sum_{x} f(x, 3) = \frac{1}{9} + \frac{1}{9} + \frac{1}{9} = \frac{1}{3}$


b) For the CPDF of X:

$f(x=1 \mid y=1) = \frac{f(x=1, y=1)}{f(y=1)} = \frac{1/9}{1/3} = \frac{1}{3}$
$f(x=2 \mid y=1) = \frac{f(x=2, y=1)}{f(y=1)} = \frac{1/9}{1/3} = \frac{1}{3}$
$f(x=3 \mid y=1) = \frac{f(x=3, y=1)}{f(y=1)} = \frac{1/9}{1/3} = \frac{1}{3}$

We get the same results for $f(x \mid y=2)$ and $f(x \mid y=3)$,
and likewise for the CPDF of Y.


c) Yes, they are statistically independent because
$f(x, y) = f(x) f(y)$ for all X and Y values:

$f(x=1, y=1) = f(x=1) f(y=1) = \frac{1}{3} \cdot \frac{1}{3} = \frac{1}{9}$
$f(x=1, y=2) = f(x=1) f(y=2) = \frac{1}{3} \cdot \frac{1}{3} = \frac{1}{9}$
and so on...

Moreover $f(x \mid y) = f(x)$ for all X and Y values:

$f(x=1 \mid y=1) = f(x=1) = \frac{1}{3}$
and so on...
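
The whole of Ex6 can be checked mechanically. The sketch below (a minimal illustration, not part of the original notes; variable names are my own) computes the marginal and conditional PDFs from the joint table and tests $f(x, y) = f(x) f(y)$ cell by cell.

```python
from fractions import Fraction

# Ex6: joint PDF of X and Y (two draws with replacement from balls 1, 2, 3)
joint = {(x, y): Fraction(1, 9) for x in (1, 2, 3) for y in (1, 2, 3)}

# Marginal PDFs: f(x) = sum_y f(x, y) and f(y) = sum_x f(x, y)
fx = {x: sum(p for (xi, _), p in joint.items() if xi == x) for x in (1, 2, 3)}
fy = {y: sum(p for (_, yi), p in joint.items() if yi == y) for y in (1, 2, 3)}
print(fx)  # each marginal probability is 1/3

# Conditional PDF: f(x | y = 1) = f(x, 1) / f(y = 1)
f_x_given_y1 = {x: joint[(x, 1)] / fy[1] for x in (1, 2, 3)}
print(f_x_given_y1)  # each value is 1/3 = f(x)

# Independence: f(x, y) = f(x) f(y) for every cell
print(all(joint[(x, y)] == fx[x] * fy[y]
          for x in (1, 2, 3) for y in (1, 2, 3)))  # True -> independent
```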


A5. Characteristics of Probability
Distributions

We describe probability distributions in terms of their
moments:
1st moment: expected value or mean
2nd moment: variance
3rd moment: skewness
4th moment: kurtosis

Expected Value:

$E(X) = \sum_{x} x f(x)$, where $f(x)$ is the PDF of X.

Properties:
1. $E(b) = b$
2. $E(aX + b) = aE(X) + b$, where a and b are constants
3. If X and Y are independent r.v.s, then $E(XY) = E(X)E(Y)$
4. If X is a r.v. with PDF $f(x)$ and $g(X)$ is any function
of X, then $E[g(X)] = \sum_{x} g(x) f(x)$, for discrete X

Ex7: Let the joint PDF of X and Y be:

          X = -2    X = 0    X = 2    X = 3
Y = 3     0.27      0.08     0.16     0
Y = 6     0         0.04     0.10     0.35

a) What is the expected value of X?
b) What is the expected value of Y?


a) $E(X) = \sum_{x} x f(x)$, where $f(x)$ is the MPDF of X. So we
first derive the marginal distribution:

$f(x=-2) = \sum_{y} f(-2, y) = 0.27 + 0 = 0.27$
$f(x=0) = \sum_{y} f(0, y) = 0.08 + 0.04 = 0.12$
$f(x=2) = \sum_{y} f(2, y) = 0.16 + 0.10 = 0.26$
$f(x=3) = \sum_{y} f(3, y) = 0 + 0.35 = 0.35$

Therefore:
$E(X) = \sum_{x} x f(x) = (-2)(0.27) + (0)(0.12) + (2)(0.26) + (3)(0.35) = 1.03$

b) Do it yourselves!
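
Part (b) follows the same recipe with the roles of X and Y swapped. If you want to check your answer, the sketch below (illustrative only; it hard-codes the Ex7 joint table) computes both marginals and both expected values.

```python
# Joint PDF of Ex7: keys are (x, y) pairs, values are probabilities.
xs, ys = [-2, 0, 2, 3], [3, 6]
joint = {
    (-2, 3): 0.27, (0, 3): 0.08, (2, 3): 0.16, (3, 3): 0.00,
    (-2, 6): 0.00, (0, 6): 0.04, (2, 6): 0.10, (3, 6): 0.35,
}

# Marginal PDFs: sum the joint PDF over the other variable
fx = {x: sum(joint[(x, y)] for y in ys) for x in xs}
fy = {y: sum(joint[(x, y)] for x in xs) for y in ys}

# Expected values: E(X) = sum_x x f(x), E(Y) = sum_y y f(y)
EX = sum(x * p for x, p in fx.items())
EY = sum(y * p for y, p in fy.items())
print(round(EX, 2), round(EY, 2))  # 1.03 and 4.47
```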

Variance:
Let X be a r.v. with $E(X) = \mu$. The variance measures the
dispersion of the X values around the mean:

$Var(X) = \sigma_X^2 = E[X - E(X)]^2$
$= E(X - \mu)^2$
$= E(X^2) - \mu^2$
$= E(X^2) - [E(X)]^2$

Properties:
1. $\sigma^2 = E(X^2) - [E(X)]^2$
2. $Var(b) = 0$
3. If a and b are constants, then $Var(aX + b) = a^2 Var(X)$



4. If X and Y are independent r.v.s, then

$Var(X + Y) = Var(X) + Var(Y)$
$Var(X - Y) = Var(X) + Var(Y)$

5. If X and Y are independent r.v.s and a and b are
constants, then $Var(aX + bY) = a^2 Var(X) + b^2 Var(Y)$

6. Let X and Y be two r.v.s, then

$Var(X + Y) = Var(X) + Var(Y) + 2Cov(X, Y) = Var(X) + Var(Y) + 2\rho\sigma_X\sigma_Y$
$Var(X - Y) = Var(X) + Var(Y) - 2Cov(X, Y) = Var(X) + Var(Y) - 2\rho\sigma_X\sigma_Y$

Ex8: Consider the joint PDF in example 7. Find $Var(X)$
and $Var(Y)$.

$E(X) = \sum_{x} x f(x) = (-2)(0.27) + (0)(0.12) + (2)(0.26) + (3)(0.35) = 1.03$
$E(X^2) = \sum_{x} x^2 f(x) = (-2)^2(0.27) + (0)^2(0.12) + (2)^2(0.26) + (3)^2(0.35) = 5.27$

So $Var(X) = 5.27 - (1.03)^2 = 4.21$

Find $Var(Y)$ yourselves!

Covariance:
Let X and Y be two r.v.s with means $\mu_X$ and $\mu_Y$. Then

$Cov(X, Y) = E[(X - \mu_X)(Y - \mu_Y)] = E(XY) - \mu_X\mu_Y$



We compute it as:

$Cov(X, Y) = \sum_{x} \sum_{y} (X - \mu_X)(Y - \mu_Y) f(x, y)$
$Cov(X, Y) = \sum_{x} \sum_{y} XY f(x, y) - \mu_X\mu_Y$

Properties:
1. If X and Y are independent, $Cov(X, Y) = 0$

2. If a, b, c and d are constants, then
$Cov(a + bX, c + dY) = bd \, Cov(X, Y)$

Ex9: Consider the joint PDF in example 7. $Cov(X, Y) = ?$

$Cov(X, Y) = \sum_{x} \sum_{y} XY f(x, y) - \mu_X\mu_Y$

Using $\mu_X = E(X) = 1.03$ and $\mu_Y = E(Y) = (3)(0.51) + (6)(0.49) = 4.47$:

$Cov(X, Y) = (-2)(3)(0.27) + (0)(3)(0.08) + (2)(3)(0.16) + (3)(3)(0)$
$+ (-2)(6)(0) + (0)(6)(0.04) + (2)(6)(0.10) + (3)(6)(0.35) - (1.03)(4.47) = 2.24$

Correlation Coefficient:
The population correlation coefficient $\rho$ is defined as:

$\rho = \frac{Cov(X, Y)}{\sqrt{Var(X) Var(Y)}} = \frac{Cov(X, Y)}{\sigma_X\sigma_Y}$

So $Cov(X, Y) = \rho\,\sigma_X\sigma_Y$.

$\rho_{XY}$ is a measure of the linear association between X and
Y and lies between -1 and +1.



Ex10: Consider the joint PDF in example 7. Find $\rho_{XY}$.

With $\sigma_X = \sqrt{4.21} \approx 2.05$ and $\sigma_Y = \sqrt{Var(Y)} \approx 1.50$:

$\rho_{XY} = \frac{2.24}{(2.05)(1.50)} = 0.73$
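
Continuing with the same joint table, the sketch below (again illustrative, reusing the hard-coded Ex7 probabilities) reproduces Ex8 to Ex10 in one pass: Var(X), Var(Y), Cov(X, Y) and the correlation coefficient.

```python
from math import sqrt

# Joint PDF of Ex7 (X values as columns, Y values as rows)
xs, ys = [-2, 0, 2, 3], [3, 6]
joint = {
    (-2, 3): 0.27, (0, 3): 0.08, (2, 3): 0.16, (3, 3): 0.00,
    (-2, 6): 0.00, (0, 6): 0.04, (2, 6): 0.10, (3, 6): 0.35,
}
fx = {x: sum(joint[(x, y)] for y in ys) for x in xs}
fy = {y: sum(joint[(x, y)] for x in xs) for y in ys}

EX = sum(x * p for x, p in fx.items())                         # 1.03
EY = sum(y * p for y, p in fy.items())                         # 4.47
VarX = sum(x**2 * p for x, p in fx.items()) - EX**2            # 5.27 - 1.03^2 ~ 4.21
VarY = sum(y**2 * p for y, p in fy.items()) - EY**2            # ~ 2.25
CovXY = sum(x * y * p for (x, y), p in joint.items()) - EX * EY  # ~ 2.24
rho = CovXY / (sqrt(VarX) * sqrt(VarY))                        # ~ 0.73
print(round(VarX, 2), round(VarY, 2), round(CovXY, 2), round(rho, 2))
```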



Conditional Expectation and Conditional Variance:
Given discrete X and Y and $f(x, y)$, the conditional
expectation of X given $Y = y$ is:

$E(X \mid Y = y) = \sum_{x} x f(x \mid Y = y)$

The conditional variance of X given $Y = y$ is:

$Var(X \mid Y = y) = E\{[X - E(X \mid Y = y)]^2 \mid Y = y\}$
$= \sum_{x} [X - E(X \mid Y = y)]^2 f(x \mid Y = y)$

Properties:
1. If $h(X)$ is a function of X, then:
$E[h(X) \mid X] = h(X)$, because if X is known so is $h(X)$.

2. If $h(X)$ and $g(X)$ are functions of X, then:
$E[h(X)Y + g(X) \mid X] = h(X) E(Y \mid X) + g(X)$

3. If X and Y are independent, then
$E[Y \mid X] = E(Y)$




4. If X and Y are independent, then
$Var[Y \mid X] = Var(Y)$

5. $Var(Y) = E[Var(Y \mid X)] + Var[E(Y \mid X)]$

6. The law of iterated expectations (LIE):
$E(Y) = E_X[E(Y \mid X)]$

Intuition:
a. Let X and Y be two discrete random variables such
that X takes on the values $c_1, c_2, c_3, \ldots, c_m$ with
probabilities $P(x = c_1), P(x = c_2), P(x = c_3), \ldots, P(x = c_m)$.

b. $E(Y \mid X = x) = \sum_{y} y f(y \mid X = x)$ is a r.v. since its value will
change as the value of X changes.

c. The probability that $E(Y \mid X)$ will take on a particular
value is the same as the probability that X takes on the
associated value.

d. The LIE says:
$E(Y) = P(x = c_1) E(Y \mid x = c_1) + P(x = c_2) E(Y \mid x = c_2) + \cdots + P(x = c_m) E(Y \mid x = c_m)$

e. Thus $E(Y)$ is a weighted average of the $E(Y \mid X = c_i)$,
where the weights are the probabilities that X takes on
the $c_i$.


Ex11: Let (Y, W) represent the population of all
working individuals, where Y is years of education and
W is the hourly wage. $E(W \mid Y = 12)$ is the average hourly
wage for all people with 12 years of education,
$E(W \mid Y = 16)$ for those with 16 years of education, and so
on. We have:

[figure: conditional mean wage E(W | Y) plotted against years of education Y]

Since education is a continuous variable, the relation can
better be described by a linear function:
$E(W \mid Y) = 4 + 0.6Y$

We are also given the average years of education,
$E(Y) = 11.5$.

Then the LIE says:
$E(W) = E[E(W \mid Y)] = E(4 + 0.6Y) = 4 + 0.6E(Y) = 4 + 0.6(11.5) = 10.90$, i.e. $10.90/hr.

Ex12: Consider the joint PDF in example 7. Show that
$E(Y) = E_X[E(Y \mid X)]$.
The answer is found in 4 steps:
1. Find $E(Y)$
2. Find $E(Y \mid X)$
3. Find $E_X[E(Y \mid X)]$
4. Show the equality between (1) and (3)
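
A sketch of those four steps in code (the joint table is again the hard-coded one from Ex7; variable names are my own):

```python
# Verify the law of iterated expectations E(Y) = E_X[E(Y|X)] on the Ex7 table.
xs, ys = [-2, 0, 2, 3], [3, 6]
joint = {
    (-2, 3): 0.27, (0, 3): 0.08, (2, 3): 0.16, (3, 3): 0.00,
    (-2, 6): 0.00, (0, 6): 0.04, (2, 6): 0.10, (3, 6): 0.35,
}
fx = {x: sum(joint[(x, y)] for y in ys) for x in xs}
fy = {y: sum(joint[(x, y)] for x in xs) for y in ys}

# Step 1: E(Y) from the marginal of Y
EY = sum(y * p for y, p in fy.items())

# Step 2: E(Y | X = x) = sum_y y f(y | x) = sum_y y f(x, y) / f(x)
EY_given_x = {x: sum(y * joint[(x, y)] for y in ys) / fx[x] for x in xs}

# Step 3: E_X[E(Y|X)] weights each E(Y | X = x) by f(x)
EY_via_lie = sum(EY_given_x[x] * fx[x] for x in xs)

# Step 4: the two agree
print(round(EY, 4), round(EY_via_lie, 4))  # both 4.47
```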


A6. Some Important Theoretical PDFs

Normal Distribution: $X \sim N(\mu, \sigma^2)$ means X has a mean
of $\mu$, a variance of $\sigma^2$, and follows the PDF below:

$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right]$, for $-\infty < x < +\infty$

Properties:
1. symmetrical around the mean
2. approximately 95% of the area under the PDF lies
between $\mu \pm 2\sigma$
3. once $\mu$ and $\sigma^2$ are known we can find the
probability that X will lie within a certain interval
4. a linear combination of normally distributed
variables is itself normally distributed; for independent
$X_1 \sim N(\mu_1, \sigma_1^2)$ and $X_2 \sim N(\mu_2, \sigma_2^2)$ with $Y = aX_1 + bX_2$:
$Y \sim N(a\mu_1 + b\mu_2, \; a^2\sigma_1^2 + b^2\sigma_2^2)$
5. Central Limit Theorem: Let $X_1, X_2, X_3, \ldots, X_n$ be n
independent random variables with the same PDF, mean $\mu$
and variance $\sigma^2$. Let the sample mean be $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$; then
$\bar{X} \sim N(\mu, \sigma^2/n)$ as $n \to \infty$, and $z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1)$
6. Third moment: $E(X - \mu)^3 = 0$
Fourth moment: $E(X - \mu)^4 = 3\sigma^4$
7. We can test for normality of any PDF using (6)


8. the sample mean and sample variance of a normally
distributed variable are independently distributed



Standard Normal Distribution: When we
standardize X we get z:

$z = \frac{X - \mu}{\sigma} \sim N(0, 1)$

$f(z) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}z^2}$



Ex13: $X \sim N(8, 4)$. What is the probability that X will
have a value between $x_1 = 4$ and $x_2 = 12$?

We have to standardize X to get $z \sim N(0, 1)$:

$z_1 = \frac{x_1 - \mu}{\sigma} = \frac{4 - 8}{2} = -2$
$z_2 = \frac{x_2 - \mu}{\sigma} = \frac{12 - 8}{2} = 2$

$P(4 \le X \le 12) = P(-2 \le z \le 2) = P(-2 \le z \le 0) + P(0 \le z \le 2)$
$= 0.4772 + 0.4772 = 0.9544$
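
The same probability can be obtained without a z table. The sketch below uses only the standard library's error function together with the identity $\Phi(z) = \frac{1}{2}[1 + \operatorname{erf}(z/\sqrt{2})]$, which is a standard fact, not something taken from these notes.

```python
from math import erf, sqrt

def std_normal_cdf(z):
    """CDF of N(0, 1) expressed through the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 8.0, 2.0            # X ~ N(8, 4), so sigma = 2
z1 = (4.0 - mu) / sigma         # -2
z2 = (12.0 - mu) / sigma        # +2
prob = std_normal_cdf(z2) - std_normal_cdf(z1)
print(round(prob, 4))           # 0.9545, matching the table value 0.9544
```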



The Chi-Square ($\chi^2$) Distribution: the sum of the squares
of k independent random variables, each distributed
standard normal:

$z_1, z_2, z_3, \ldots, z_k \sim$ i.i.d. $N(0, 1)$, then $z = \sum_{i=1}^{k} z_i^2 \sim \chi^2_k$




Properties:
1. $\chi^2_k$ is skewed to the right, but less so as k increases
2. the mean of $\chi^2_k$ is k and its variance is 2k
3. if $z_1 \sim \chi^2_{k_1}$ and $z_2 \sim \chi^2_{k_2}$ are independent, then $(z_1 + z_2) \sim \chi^2_{k_1 + k_2}$



Student's t Distribution: Let $z_1 \sim N(0, 1)$ and $z_2 \sim \chi^2_k$,
with $z_1$ and $z_2$ independent. Then

$t = \frac{z_1}{\sqrt{z_2 / k}} \sim t_k$





Properties:
1. symmetrical but flatter than the normal; as $k \to \infty$ the t
distribution approaches the normal distribution
2. $\mu = 0$, $\sigma^2 = \frac{k}{k-2}$ for $k > 2$



F Distribution: If $z_1 \sim \chi^2_{k_1}$ and $z_2 \sim \chi^2_{k_2}$ are independent, then

$F = \frac{z_1 / k_1}{z_2 / k_2} \sim F_{k_1, k_2}$






Properties:
1. skewed to the right; as $k_1, k_2 \to \infty$ the F distribution
approaches the normal distribution
2. $\mu = \frac{k_2}{k_2 - 2}$ for $k_2 > 2$, and $\sigma^2 = \frac{2 k_2^2 (k_1 + k_2 - 2)}{k_1 (k_2 - 2)^2 (k_2 - 4)}$ for $k_2 > 4$
3. $t^2_k = F_{1, k}$
4. if $k_2$ is large, $k_1 F \sim \chi^2_{k_1}$ approximately
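
These constructions can be illustrated by simulation. The sketch below (Monte Carlo, so the results are only approximate; seed and sample sizes are arbitrary choices of mine) builds $\chi^2$ draws from standard normals, checks the mean k and variance 2k, and verifies the $t^2_k = F_{1,k}$ relationship draw by draw.

```python
import random

random.seed(42)

def chi2_draw(k):
    """One chi-square(k) draw: sum of squares of k independent N(0, 1) draws."""
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k))

k, reps = 5, 20000
chi2 = [chi2_draw(k) for _ in range(reps)]
mean = sum(chi2) / reps
var = sum((c - mean) ** 2 for c in chi2) / reps
print(round(mean, 2), round(var, 2))   # close to k = 5 and 2k = 10

# t_k = z / sqrt(chi2_k / k) and F_{1,k} = (chi2_1 / 1) / (chi2_k / k);
# using the same z and denominator shows t^2 reproduces the F_{1,k} construction.
z = random.gauss(0.0, 1.0)
denom = chi2_draw(k) / k
t = z / denom ** 0.5
F = (z ** 2 / 1) / denom
print(round(t ** 2 - F, 12))           # 0.0
```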




A7. Statistical Inference

Estimation: The problem of estimation:
1. we know that X follows a particular PDF
2. we want to know the parameters of that PDF
3. we take a random sample of size n and use the
sample to estimate the parameters

Point Estimation: Let $X \sim f(x; \theta)$, with $\theta$ unknown. We draw a
sample of size n and develop:

$\hat{\theta} = f(x_1, x_2, \ldots, x_n)$

which provides an estimate of the true $\theta$. $\hat{\theta}$ is an
estimator; it is a random variable.

Ex14: the sample mean is a point estimator of the
population mean when estimated as

$\hat{\mu} = \frac{1}{n}(x_1 + x_2 + \cdots + x_n) = \bar{X}$



Interval Estimation: If we obtain $\hat{\theta}_1 = f_1(x_1, x_2, \ldots, x_n)$
and $\hat{\theta}_2 = f_2(x_1, x_2, \ldots, x_n)$ instead of a single estimator,
and say that the true parameter lies in the interval between
$\hat{\theta}_1$ and $\hat{\theta}_2$, this is interval estimation.

Ex15: If $X \sim N(\mu, \sigma^2)$ then $\bar{X} \sim N(\mu, \sigma^2/n)$, so as a
result of the properties of the normal distribution we can say
that $\mu$ will lie in the interval $\bar{X} \pm 2\frac{\sigma}{\sqrt{n}}$ with 95% probability.






In general we have:

$P(\hat{\theta}_1 \le \theta \le \hat{\theta}_2) = 1 - \alpha$, with $0 < \alpha < 1$

This is a confidence interval of size $(1 - \alpha)$. $\alpha$ is also
known as the level of significance.

Methods of Estimation:
1. least squares (LS)
2. maximum likelihood (ML)
3. method of moments (MOM, GMM)

Desirable Properties of Estimators:
A. Small Sample Properties:

Unbiasedness: an estimator is unbiased if its expected
value is equal to the true parameter:

$E(\hat{\theta}) = \theta$, i.e. $E(\hat{\theta}) - \theta = 0$, the bias is zero

Minimum variance: $\hat{\theta}$ is a minimum-variance estimator
of $\theta$ if its variance is smaller than or at most equal to the
variance of any other estimator of $\theta$.


Efficiency: (best unbiased) the minimum-variance unbiased
estimator

Linearity: an estimator is linear if it is a linear function
of the sample observations

BLUE: best linear unbiased estimator

Minimum MSE: the mean square error measures the
dispersion of the estimator around the true parameter:

$MSE(\hat{\theta}) = E(\hat{\theta} - \theta)^2 = Var(\hat{\theta}) + (\text{bias})^2$

If $\hat{\theta}$ is unbiased then $MSE(\hat{\theta}) = Var(\hat{\theta})$ and the minimum MSE
criterion is equivalent to efficiency. The minimum MSE
criterion is used when unbiasedness is sacrificed for a
smaller variance.
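
As a small illustration of that bias-variance trade-off, the simulation sketch below (my own example, under an assumed N(0, 1) population with true variance 1) compares the 1/n and 1/(n-1) variance estimators: the first is biased downward but has smaller variance, and for normal data with small n its MSE comes out lower.

```python
import random

random.seed(1)
true_var, n, reps = 1.0, 10, 20000

def sample_vars(xs):
    """Return the 1/n (biased) and 1/(n-1) (unbiased) variance estimates."""
    m = sum(xs) / len(xs)
    ss = sum((x - m) ** 2 for x in xs)
    return ss / len(xs), ss / (len(xs) - 1)

biased, unbiased = [], []
for _ in range(reps):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    b, u = sample_vars(xs)
    biased.append(b)
    unbiased.append(u)

def mse(estimates):
    return sum((e - true_var) ** 2 for e in estimates) / reps

print(round(sum(biased) / reps, 3), round(sum(unbiased) / reps, 3))  # means: ~0.9 vs ~1.0
print(round(mse(biased), 3), round(mse(unbiased), 3))                # MSEs: ~0.19 vs ~0.22
```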




B. Large Sample Properties:
1. Asymptotic unbiasedness
2. Consistency
3. Asymptotic efficiency
4. Asymptotic normality

Hypothesis Testing: After estimating $\hat{\theta}$ we now want
to know if $\hat{\theta}$ is compatible with a particular value of $\theta$,
for example $\theta^*$.

$H_0: \theta = \theta^*$   null hypothesis
$H_1: \theta \ne \theta^*$   alternative hypothesis

Could our sample come from a distribution with
$f(x; \theta = \theta^*)$?

Two types of hypotheses:
Simple hypotheses → equal sign
Composite hypotheses → inequality sign



Test statistic used to test the validity of $H_0$ → the point
estimator of $\theta$.

Ex16: We are given that:
$X_i \sim N(\mu, \sigma^2)$
$\bar{X} = 67$
$n = 100$
$\sigma^2 = (2.5)^2$

We want to answer: could this sample with $\bar{X} = 67$
come from a population with a mean value of 69?

Confidence Interval Approach:
If $X_i \sim N(\mu, \sigma^2)$ we know that $\bar{X} \sim N(\mu, \sigma^2/n)$ (exactly here,
since the $X_i$ are normal; by the CLT more generally). Let's
construct a $100(1-\alpha)\%$ confidence interval around $\mu$
based on $\bar{X}$ and see whether this interval includes $\mu^* = 69$.
If it does, we DO NOT REJECT $H_0$; if it does not include $\mu^*$,
we REJECT $H_0$.

$\alpha = 0.05$ gives us a 95% confidence interval: in repeated
sampling, 95 out of 100 intervals constructed in this way will
include the true $\mu$, so $(1 - \alpha)$ measures the reliability of such
intervals.



If $\bar{X} \sim N(\mu, \sigma^2/n)$ then $z_i = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1)$. The z table
says $P(-1.96 \le z_i \le 1.96) = 0.95$, so

$P\left(-1.96 \le \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \le 1.96\right) = 0.95$

We rearrange terms to get:

$P\left(\bar{X} - 1.96\frac{\sigma}{\sqrt{n}} \le \mu \le \bar{X} + 1.96\frac{\sigma}{\sqrt{n}}\right) = 0.95$

$P(66.51 \le \mu \le 67.49) = 0.95$

69 is not in this interval, so we reject $H_0$ with 95%
confidence. Notice that 5 out of 100 times we will be
wrong!
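
The sketch below (plain Python, simply reproducing the arithmetic above with the 1.96 critical value from the z table) constructs the 95% interval for Ex16 and checks whether 69 falls inside it.

```python
from math import sqrt

x_bar, sigma, n = 67.0, 2.5, 100
z_crit = 1.96                             # standard normal critical value for 95%
half_width = z_crit * sigma / sqrt(n)
lower, upper = x_bar - half_width, x_bar + half_width
print(round(lower, 2), round(upper, 2))   # 66.51 and 67.49

mu_star = 69.0
print("reject H0" if not (lower <= mu_star <= upper) else "do not reject H0")
```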

In testing there are two types of mistakes we can make:

                                  State of nature
  Decision                   H0 is true        H0 is false
  Reject H0                  Type I error      No error
  Do not reject H0 (DNR)     No error          Type II error


We cannot minimize both errors simultaneously.
Consensus is to keep the probability of a Type I error as
low as possible.
$\alpha$: probability of Type I error (the level of significance)
$\beta$: probability of Type II error; $(1 - \beta)$ is the power of
the test
p-value: the lowest significance level at which a null
hypothesis can be rejected.

Test of Significance Approach:

Assuming the null hypothesis is true, that is $\mu = \mu^*$, we
can calculate the value of $z_i$ and find the probability of
this value occurring using the standard normal table. If
this probability is low we reject the null. If it is high we
do not reject the null.

If $H_0$ is true then $z_i = \frac{\bar{X} - \mu^*}{\sigma/\sqrt{n}} = \frac{67 - 69}{2.5/\sqrt{100}} = -8$. What is the
probability of z taking on -8? Less than 0.001! Table D1
says $P(-3.9 \le z \le 3.9) = 2(0.4990) = 0.9980$, so a value as far
out in the tail as -8 is essentially impossible under $H_0$.
So we reject $H_0$!
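
The same conclusion can be reached via the test-of-significance route in code. This is a minimal sketch using the error-function form of the standard normal CDF (a standard identity, not part of the notes) to get a two-sided p-value.

```python
from math import erf, sqrt

def phi(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

x_bar, mu_star, sigma, n = 67.0, 69.0, 2.5, 100

z = (x_bar - mu_star) / (sigma / sqrt(n))
print(z)                                   # -8.0

# Two-sided p-value: P(|Z| >= |z|) under H0
p_value = 2.0 * (1.0 - phi(abs(z)))
print(p_value)                             # ~1e-15, far below any usual alpha -> reject H0
```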


