
Imperial College London
Business School

FINANCIAL STATISTICS
LEC 1: OLS, the basics

Paolo Zaffaroni

Ordinary Least Squares: basics

We are interested in some aspects of the relationship between two observed variables, say the ith asset return r_it = P_it / P_i,t−1 − 1, where e.g. P_it is the IBM share price at time t, and the market index return R_t (e.g. the Standard & Poor's 500 index return).

Say we would like to take a long position in the ith asset and, at the same time, hedge the associated risk using the index.

In principle we would like to know the joint distribution of (r_it, R_t), but in fact it is sufficient to know only some aspects of it.

Ordinary Least Squares: basics
For instance, for hedging purposes it would be necessary to know the conditional mean

E(r_it | R_t).

In general this is some nonlinear function g(R_t), but it is useful to think of the case in which (r_it, R_t) are jointly Gaussian. Then it turns out that

E(r_it | R_t) = α_i + β_i R_t,     (1)

for constant parameters α_i, β_i.

Model (1) is "essentially" the Capital Asset Pricing Model (CAPM)!
Ordinary Least Squares: CAPM example

In particular, if we define r_it as the "return on asset i minus the risk-free rate" and R_t as the "market return minus the risk-free rate", the CAPM says that:

E(r_it) = β_i E(R_t).

Here R_t and r_it are called the market risk premium and the asset-i risk premium.

A test of the CAPM is to check whether α_i = 0 for every asset i in:

E(r_it | R_t) = α_i + β_i R_t.

Ordinary Least Squares: basics
Even without assuming Gaussianity, think of (1) as a linear approximation to the true, nonlinear, conditional mean.

Define the regression equation:

r_it = α_i + β_i R_t + u_it,     (2)

where u_it represents the disturbance term, or regression error.

u_it represents whatever influences r_it other than R_t.

The idea in (2): u_it should be as non-influential as possible, otherwise other regressors, besides R_t, should appear in (2).

Non-influential means E(u_it) = 0: on average its effect on r_it is nil.
Ordinary Least Squares: empirical example
[Figure: Scatterplot and regression line of IBM returns versus S&P 500 index returns. Axes: S&P 500 returns (x), IBM returns (y), both roughly in the range −0.04 to 0.04.]

Ordinary Least Squares: empirical example

Example: IBM on S&P 500, OLS regression statistics

                 Intercept      Slope
β̂               0.0007         0.9008
Standard error   0.000511779    0.080485511
t-stat           1.3510         11.1917
p-value          0.1779         8.34E-24
R²               0.3356
R̄²              0.3329

We will understand each of these concepts very soon...
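
As a preview, the following Python sketch produces all of these statistics from scratch, using the formulas derived later in this lecture. The data are simulated: the coefficients 0.0007 and 0.9 below are hypothetical stand-ins for the IBM/S&P 500 sample, which is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T = 252                                      # hypothetical sample size
Rm = rng.normal(0.0, 0.01, T)                # simulated market returns
r = 0.0007 + 0.9 * Rm + rng.normal(0.0, 0.008, T)  # simulated asset returns

X = np.column_stack([np.ones(T), Rm])        # T x k regressor matrix (k = 2)
beta_hat, *_ = np.linalg.lstsq(X, r, rcond=None)   # OLS estimates

k = X.shape[1]
e = r - X @ beta_hat                         # OLS residuals
s2 = e @ e / (T - k)                         # estimate of the error variance
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))  # standard errors

t_stat = beta_hat / se                       # t statistics for H0: beta_i = 0
pval = 2 * stats.t.sf(np.abs(t_stat), df=T - k)     # two-sided p-values

TSS = np.sum((r - r.mean()) ** 2)
R2 = 1 - (e @ e) / TSS                       # R-squared
R2_adj = 1 - (T - 1) / (T - k) * (1 - R2)    # adjusted R-squared
print(beta_hat, se, t_stat, pval, R2, R2_adj)
```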

Ordinary Least Squares: general model in matrix notation

Linear regression of the dependent variable Y_i on k regressors 1, X_2i, ..., X_ki is defined as:

Y_i = β_1 + β_2 X_2i + β_3 X_3i + ... + β_k X_ki + u_i,     (3)

with β = (β_1, ..., β_k)' the vector of parameters.

As before, (3) represents E(Y_i | X_2i, ..., X_ki) under Gaussianity, but we are not assuming this.

Note that the first regressor equals 1 for all observations.

Formal assumptions on u_i come later on.

Ordinary Least Squares: general model in matrix notation
Assuming we have a sample of T observations, we define the following T × 1 vectors:

y = (Y_1, Y_2, ..., Y_T)',   u = (u_1, u_2, ..., u_T)',

and the following T × k matrix:

X = (x_1 x_2 · · · x_T)',   with ith row x_i' = (1, X_2i, ..., X_ki).

Ordinary Least Squares: general model in matrix notation
We can now write the linear regression model as:

y = Xβ + u.

This can also be written as

u = u(β) = y − Xβ,

which says that, for a given sample of data (y, X), u depends on β.
We call β the true parameter value, and correspondingly u is
the true disturbance vector.

Ordinary Least Squares: general model in matrix notation
Let's prepare the ground for the OLS estimator.

Let b be another k-dimensional vector. We shall call it an admissible parameter value. We can define another vector u(b):

u(b) = y − Xb ≠ u in general.

Let us now consider another function of b, namely the residual sum of squares (RSS):

RSS(b) = Σ_{t=1}^{T} u_t(b)² = u(b)'u(b).

This is a scalar, non-negative function of the vector b.


Ordinary Least Squares: general model in matrix notation

Hence one can think of choosing the value of b that minimizes RSS(b), that is

β̂ = argmin_{b∈B} RSS(b),

where B is some subset of R^k.


We shall call β̂ the Ordinary Least Squares (OLS)
estimator.
This raises many questions. Does it always exist? What are
its statistical properties?

Ordinary Least Squares: general model in matrix notation
It turns out that the above minimization problem is easy to solve, since

RSS(b) = (y − Xb)'(y − Xb)
       = y'y − b'X'y − y'Xb + b'X'Xb
       = y'y − 2b'X'y + b'X'Xb.

Differentiating with respect to b, and equating the result to zero, yields the first order conditions (FOC)

∂RSS(b)/∂b |_{b=β̂} = −2X'y + 2X'Xβ̂ = 0,

giving the normal equations:

X'Xβ̂ = X'y.
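
In code, the normal equations are one line. A minimal numpy sketch on simulated data; note that solving X'Xβ̂ = X'y directly is numerically preferable to forming (X'X)⁻¹ explicitly.

```python
import numpy as np

def ols(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Solve the normal equations X'X beta = X'y for beta."""
    return np.linalg.solve(X.T @ X, X.T @ y)

# Tiny check on simulated data: the FOC X'e = 0 holds at the solution.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=100)
e = y - X @ ols(X, y)
print(X.T @ e)        # approximately zero, up to floating-point error
```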

Ordinary Least Squares: general model in matrix notation
Alternatively, defining the OLS residuals e (this is NOT u!)

e = y − Xβ̂,

one gets the FOC

X'e = 0.     (4)

Since the first column of X consists of ones, the first element of (4) is

Σ_{t=1}^{T} e_t = 0,   or   e'ι = 0,

that is

ē = Ȳ − β̂_1 − β̂_2 X̄_2 − ... − β̂_k X̄_k = 0

(recall that a bar denotes the sample mean).
Ordinary Least Squares: general model in matrix notation
The other k − 1 entries of X'e = 0 are

x_i'e = 0,   i = 2, ..., k.

This means that the sample covariance between each regressor and the OLS residual e is zero, since for i = 2, ..., k,

(1/T) Σ_{t=1}^{T} (X_it − X̄_i)(e_t − ē) = (1/T) Σ_{t=1}^{T} X_it (e_t − ē) = (1/T) Σ_{t=1}^{T} X_it e_t = 0.

Do you remember the four different ways in which one can write the covariance?
Ordinary Least Squares: general model in matrix notation
Set the estimate of the conditional mean equal to

ŷ = Xβ̂.

The following then holds:

ŷ'e = (Xβ̂)'e = β̂'X'e = 0,

where y = ŷ + e.

This is a decomposition of y into two orthogonal components.


Ordinary Least Squares: general model in matrix notation
The following decomposition of the sum of squares then holds:

y'y = (ŷ + e)'(ŷ + e) = β̂'X'Xβ̂ + e'e,

since there are no cross terms. (Why?)

Since the sample means of the observed y and of the fitted ŷ are the same (why?), we have

y'y − T ȳ² = (ŷ + e)'(ŷ + e) − T ȳ² = (ŷ'ŷ − T ȳ²) + e'e.

But how is y'y − T ȳ² related to the sample variance of the y_t? They are the same, apart from a constant factor!
Ordinary Least Squares: general model in matrix notation
The above decomposition is a variance decomposition, also written as TSS = ESS + RSS: the total sum of squares equals the sum of the explained and residual sums of squares.

This leads to a measure of goodness of fit, namely the coefficient of multiple correlation R²:

R² = ESS/TSS = 1 − RSS/TSS,   where always 0 ≤ R² ≤ 1.

It is unity in case of perfect fit and zero in case of worst fit.

But R² never decreases as more and more regressors are added. Thus the adjusted R² (which can be negative, though):

R̄² = 1 − [(T − 1)/(T − k)] (1 − R²),

which only increases if the new regressors are relevant.
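
A short numpy sketch on simulated data, verifying the decomposition and both formulas for R²:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])
y = X @ np.array([0.5, 1.0, -2.0]) + rng.normal(size=T)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
y_fit = X @ beta_hat                          # fitted values y-hat
e = y - y_fit                                 # residuals

TSS = np.sum((y - y.mean()) ** 2)             # total sum of squares
ESS = np.sum((y_fit - y.mean()) ** 2)         # explained sum of squares
RSS = e @ e                                   # residual sum of squares
assert np.isclose(TSS, ESS + RSS)             # the decomposition holds

k = X.shape[1]
R2 = ESS / TSS                                # equals 1 - RSS / TSS
R2_adj = 1 - (T - 1) / (T - k) * (1 - R2)
print(R2, R2_adj)
```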
Ordinary Least Squares: Inference
To establish the statistical properties of the OLS β̂ we need to 'close the model' (impose assumptions).

These are what we call the 'Gauss-Markov' assumptions:

1. X is non-stochastic and of full column rank. This implies that (X'X)⁻¹ is well defined.

2. The u satisfy:

   E(u) = 0,     (5)
   var(u) = E(uu') = σ² I,     (6)

where I is the (T × T) identity matrix.

We call var(u) the covariance matrix of the u's.

Any rv satisfying (5)-(6) is called white noise.

Ordinary Least Squares: Inference

Let me 'zoom' into E(uu'). This is a large T × T matrix with variances on the diagonal and covariances off the diagonal:

E(uu') =
  [ E(u_1²)     E(u_1 u_2)   ...   E(u_1 u_T) ]
  [ E(u_2 u_1)  E(u_2²)      ...   E(u_2 u_T) ]
  [   ...          ...     E(u_i u_j)   ...   ]
  [ E(u_T u_1)  E(u_T u_2)   ...   E(u_T²)    ]

Recall E(u_i²) = var(u_i) and E(u_i u_j) = cov(u_i, u_j) (why?).

Ordinary Least Squares: Inference
Under Assumption 1 the FOC have the unique solution:

β̂ = (X'X)⁻¹ X'y.

It can be shown that this solution indeed represents an absolute minimum of the OLS problem.

Recall that β̂ is ex-post a vector of numbers but ex-ante a random vector.

As a random variable, we can then derive the mean and variance of the OLS estimator. Since

β̂ = (X'X)⁻¹ X'(Xβ + u) = β + (X'X)⁻¹ X'u,

the OLS estimator is unbiased:

E(β̂) = β + (X'X)⁻¹ X' E(u) = β.
Ordinary Least Squares: Inference

For the variance var(β̂):

E[(β̂ − β)(β̂ − β)'] = (X'X)⁻¹ X' E(uu') X (X'X)⁻¹ = σ² (X'X)⁻¹.

How can we use it? We need an estimate of σ². The OLS estimator of σ² is

s² = e'e / (T − k).

Ordinary Least Squares: Inference
In fact, setting (a symmetric and idempotent matrix)

M = I − X(X'X)⁻¹X'   (note that MX = 0),

then e = My = M(Xβ + u) = Mu, and

E(e'e) = E(u'Mu) = E(tr(u'Mu)) = E(tr(Muu'))
       = σ² tr(M) = σ² [tr(I) − tr((X'X)⁻¹X'X)] = σ² (T − k).
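
A quick numerical check of the stated properties of M, as a numpy sketch with a simulated X:

```python
import numpy as np

rng = np.random.default_rng(3)
T, k = 50, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])

M = np.eye(T) - X @ np.linalg.solve(X.T @ X, X.T)   # I - X (X'X)^{-1} X'

assert np.allclose(M, M.T)               # symmetric
assert np.allclose(M @ M, M)             # idempotent
assert np.allclose(M @ X, 0.0)           # M X = 0
assert np.isclose(np.trace(M), T - k)    # tr(M) = T - k
```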

Gauss-Markov Theorem. This fundamental result says that the Best Linear Unbiased Estimator (BLUE) of c'β, for any constant vector c, is c'β̂.

Ordinary Least Squares: empirical example
[Figure: Scatterplot and fitted line, IBM returns versus 5-year zero-coupon bond returns. Axes: bond returns (x, −0.04 to 0.04), IBM returns (y, ×10⁻³).]

Ordinary Least Squares: empirical example

Example 2: estimation results


IBM on 5-year zero-coupon bond
(simple regression statistics)
b 0.0015 -0.6109
standard error 0.000646539 0.796099169
t-stat 2.3191 -0.7673
pval 0.0212 8.34E-24
R2 0.0024
R̄ 2 -0.0017

Ordinary Least Squares: empirical example

Example 2 (cont) estimation results


IBM on 5-year zero-coupon bond and S&P’s 500
(multiple regression)
b 0.0007 0.9001 -0.0792
standard error 0.0005 0.0809 0.6527
t-stat 1.3296 11.1307 -0.1213
pval 0.1849 1.3675E-19 0.9036
R2 0.3356
R̄ 2 0.3302

Ordinary Least Squares: testing linear hypothesis
We wish to test whether the null hypothesis is supported by the data or not:

H0: Rβ = r,

where R is a q × k constant matrix and r a q × 1 constant vector.

Examples of this are:

H0: β_i = 0.
H0: β_i = β_i0, for some given value β_i0.
H0: β_2 + β_3 = 1.
H0: β_3 = β_4.

What about

H0: β_2² / (β_1² + 1) = 1?
Ordinary Least Squares: testing linear hypothesis
Idea: compute the sample counterpart Rβ̂ − r and see whether it is large or small (in a statistical sense!).

By linearity of the expectation operator, E(Rβ̂) = Rβ and var(Rβ̂) = R var(β̂) R'.

Here we consider finite-sample inference, i.e. for a finite sample size T. If we make the additional assumption

u ~ N(0, σ² I),

then, since a linear combination of normals is also normal,

β̂ ~ N(β, σ² (X'X)⁻¹).

Ordinary Least Squares: testing linear hypothesis
Likewise, under H0,

Rβ̂ ~ N(r, σ² R(X'X)⁻¹R').     (7)

However, (7) could be used for testing only if q = 1, since otherwise how can we say whether the vector Rβ̂ is large or small? For example, is Rβ̂ bigger if the first entry is 10 and the others are all zero, or rather if all entries are equal to 0.1?

We need to standardize a random vector! We know how to do it for scalars, remember?

Consider a normal random vector:

W ~ N(µ, Σ), with mean µ and covariance matrix Σ.

What is the square-root of the matrix Σ? We are not simply dealing with scalars here...
Ordinary Least Squares: testing linear hypothesis

Here Σ is the matrix equivalent of a non-negative number. We call this a positive semi-definite matrix. For such a matrix there always exists a matrix P such that

Σ = P P.

We call this P the square-root matrix of Σ, and we write

P = Σ^{1/2}.

Computer software constructs P for us!

By the way, we also have Σ⁻¹ = P⁻¹ P⁻¹.

Ordinary Least Squares: testing linear hypothesis
Anyway, to standardize one needs to 'subtract the mean' and 'divide by the standard deviation' (remember?):

Z = Σ^{-1/2} (W − µ) ~ N(0, I).

Same form as in the scalar case! It is easy to see that E(Z) = 0 and that

var(Z) = E(ZZ') = Σ^{-1/2} E[(W − µ)(W − µ)'] Σ^{-1/2} = Σ^{-1/2} Σ Σ^{-1/2} = I.

Notice that var(Z) = I means that the components of Z are uncorrelated and thus, being jointly normal, independent (remember?). Therefore we obtain the chi-square:

Z'Z = Z_1² + ... + Z_q² ~ χ²_q,   which is the same as Z'Z = (W − µ)'Σ⁻¹(W − µ).
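
A numpy/scipy sketch of the standardization; scipy.linalg.sqrtm is one way to obtain Σ^{1/2}, and the covariance matrix below is hypothetical:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(4)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])                 # hypothetical covariance matrix

P = np.real(sqrtm(Sigma))                      # square-root matrix: P P = Sigma
assert np.allclose(P @ P, Sigma)

# Draw many W ~ N(mu, Sigma), standardize, and check that Z'Z ~ chi-square(2).
W = rng.multivariate_normal(mu, Sigma, size=100_000)
Z = (W - mu) @ np.linalg.inv(P)                # Z = Sigma^{-1/2} (W - mu); P is symmetric
zz = np.sum(Z ** 2, axis=1)
print(zz.mean(), zz.var())                     # approx 2 and 4, the chi2(2) moments
```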
Ordinary Least Squares: testing linear hypothesis
Using this result for our test allows us to go from a vector to a positive scalar quantity: now we can evaluate whether it is large or small!

By standardizing,

(Rβ̂ − r)' [σ² R(X'X)⁻¹R']⁻¹ (Rβ̂ − r) ~ χ²_q.

Nearly there: we cannot use the test unless we estimate σ².

It can also be shown that under normal u one has e'e/σ² ~ χ²_{T−k}, and that the latter is distributed independently of β̂.

Therefore the ratio is also a positive random scalar quantity: the F test

F ≡ [(Rβ̂ − r)' [R(X'X)⁻¹R']⁻¹ (Rβ̂ − r) / q] / [e'e / (T − k)] ~ F(q, T − k).
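
A sketch of this F test for a general linear hypothesis H0: Rβ = r, on simulated data where H0 is true:

```python
import numpy as np
from scipy.stats import f as f_dist

def f_test(X, y, R, r):
    """F test of H0: R beta = r in the model y = X beta + u."""
    T, k = X.shape
    q = R.shape[0]
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    e = y - X @ beta_hat
    s2 = e @ e / (T - k)
    d = R @ beta_hat - r
    F = d @ np.linalg.solve(R @ XtX_inv @ R.T, d) / (q * s2)
    return F, f_dist.sf(F, q, T - k)           # statistic and p-value

# Example: H0: beta_2 + beta_3 = 1 (q = 1), true in the simulated data below.
rng = np.random.default_rng(5)
X = np.column_stack([np.ones(300), rng.normal(size=(300, 2))])
y = X @ np.array([0.1, 0.4, 0.6]) + rng.normal(size=300)
print(f_test(X, y, R=np.array([[0.0, 1.0, 1.0]]), r=np.array([1.0])))
```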

Ordinary Least Squares: testing linear hypothesis

Special case: when q = 1 and H0: β_i = β_i0, then

F = (β̂_i − β_i0)² / (s² c_ii) ~ F(1, T − k),

where c_ii is the (i, i)th (diagonal) element of (X'X)⁻¹.

However, since now q = 1 we do not need to use the χ² trick, and we obtain directly the t test

t ≡ (β̂_i − β_i0) / (s √c_ii) ~ t(T − k),

distributed like a Student t with T − k degrees of freedom (dof).

Ordinary Least Squares: testing linear hypothesis
At the other extreme we can test whether all the regression coefficients (other than β_1) are zero, and the F test becomes

F = [R² / (k − 1)] / [(1 − R²) / (T − k)] ~ F(k − 1, T − k).

This is somewhat like checking the magnitude of R² from a statistical point of view (standard errors).

If the regression is poor then R² ≈ 0 and also F ≈ 0: the coefficients are therefore approximately zero, since the regressors are useless. (What does the P-value look like now?)

Instead, as R² ↑ 1 then F ↑ ∞ and the test is highly significant! (And what does the P-value look like now?)
Ordinary Least Squares: testing linear hypothesis
Another important example is when the null hypothesis says that only a subset of the coefficients is zero. Rewrite the general model as

y = Xβ + u = X_a β_a + X_b β_b + u,   that is X = (X_a : X_b),

where X_b has k_b < k columns. This is the same as saying that β = (β_a' β_b')', where β_b is of dimension k_b × 1, and let

H0: β_b = 0   (the small model y = X_a β_a + u holds!).

Then

F = [(e*'e* − e'e) / k_b] / [e'e / (T − k)] ~ F(k_b, T − k).

Here e*'e* is the RSS from the restricted regression, that is, regressing y on X_a only. Instead, e'e is the (old) RSS from regressing y on all regressors X_a and X_b (unrestricted regression).
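
The restricted-versus-unrestricted form of the test, as a numpy/scipy sketch on simulated data satisfying H0: β_b = 0:

```python
import numpy as np
from scipy.stats import f as f_dist

def rss(X, y):
    """Residual sum of squares from regressing y on X."""
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta_hat
    return e @ e

rng = np.random.default_rng(6)
T = 300
Xa = np.column_stack([np.ones(T), rng.normal(size=(T, 1))])  # retained regressors
Xb = rng.normal(size=(T, 2))                                 # regressors under test
y = Xa @ np.array([0.2, 1.0]) + rng.normal(size=T)           # H0: beta_b = 0 is true

X = np.hstack([Xa, Xb])
k, kb = X.shape[1], Xb.shape[1]
rss_restricted, rss_unrestricted = rss(Xa, y), rss(X, y)

F = ((rss_restricted - rss_unrestricted) / kb) / (rss_unrestricted / (T - k))
print(F, f_dist.sf(F, kb, T - k))              # expect a large p-value under H0
```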
Ordinary Least Squares: testing linear hypothesis

How do we implement the test?

Old days: compare the test statistic t with the critical values of the corresponding sampling distribution (say, Student t with T − k dof) at the chosen significance level (α, typically 1% or 5%).

Nowadays: the P-value. For example, for the F test,

p-value = Prob(F(k − 1, T − k) > F),

where F is a given number, as shown in the example below.

A large P-value gives confidence in the null hypothesis, and the contrary when it is small.
Ordinary Least Squares: empirical example

Example. F test for the regression of IBM on the S&P 500:

r_IBM,t = β_0 + β_1 r_SP500,t + u_t.

H0: β_1 = 0.

F statistic = 62.386, p-value for F(1, 249) = 1.82E−13.

Result: reject H0!
Ordinary Least Squares: confidence interval
For a single parameter, under H0,

1 − α = Pr( |(β̂_i − β_i0) / (s √c_ii)| < t_{α/2,T−k} )
      = Pr( β̂_i − t_{α/2,T−k} s √c_ii ≤ β_i0 ≤ β̂_i + t_{α/2,T−k} s √c_ii ).

Therefore the (confidence) interval

[ β̂_i − t_{α/2,T−k} s √c_ii ,  β̂_i + t_{α/2,T−k} s √c_ii ]

contains all values of β_i0 that will not be rejected when performing a two-tailed t test at significance level α.

As you know, the confidence interval does not say that the true parameter value is contained with probability 1 − α, right?
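
A sketch computing the two-sided (1 − α) confidence interval for a single coefficient; scipy supplies the Student t quantile, and the data are simulated:

```python
import numpy as np
from scipy.stats import t as t_dist

def conf_interval(X, y, i, alpha=0.05):
    """Two-sided (1 - alpha) confidence interval for beta_i."""
    T, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    e = y - X @ beta_hat
    s = np.sqrt(e @ e / (T - k))                       # estimate of sigma
    half = t_dist.ppf(1 - alpha / 2, T - k) * s * np.sqrt(XtX_inv[i, i])
    return beta_hat[i] - half, beta_hat[i] + half

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([0.5, 2.0]) + rng.normal(size=200)
print(conf_interval(X, y, i=1))   # covers the true value 2.0 about 95% of the time
```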
Ordinary Least Squares: asymptotics
In practice normality of u is stringent and unlikely to hold (e.g., are asset returns Gaussian?).

How do we perform inference then? The way out is to derive the behaviour of β̂ as T → ∞.

Of course T → ∞ is never true in practice, but we use it as an approximation: the larger T, the better!

Also, assuming non-stochastic X is very stringent.

Let us retain white-noise u and assume, for T → ∞:

E(u_t | x_{t−s}) = 0 for any s > 0.

X'X / T →_p Q, where Q is non-random and positive definite.

X'u / √T →_d N(0, σ² Q).
Ordinary Least Squares: asymptotics

Under the above conditions



β̂ →p β and T (β̂ − β) →d N (0, σ 2 Q−1 ).

First result says that β̂ is consistent: as T gets large, its


randomnes vanishes and it equals the true value.
Second result allows to perform inference: as T gets large, we
treat (standartized) β̂ as if was normally distributed. Then we
can apply all previous results (t test, F test, etc).
A (consistent) estimate of Q−1 will be T (X 0 X)−1 .
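
A small Monte Carlo sketch illustrating both results; the errors below are deliberately non-Gaussian (centred exponentials), since white noise is all the asymptotics require:

```python
import numpy as np

rng = np.random.default_rng(8)
beta = np.array([1.0, -0.5])

def beta_hat(T):
    X = np.column_stack([np.ones(T), rng.normal(size=T)])
    u = rng.exponential(size=T) - 1.0        # non-Gaussian white noise: mean 0, var 1
    y = X @ beta + u
    return np.linalg.solve(X.T @ X, X.T @ y)

# Consistency: beta_hat concentrates around (1.0, -0.5) as T grows.
for T in (50, 500, 5000):
    print(T, beta_hat(T))

# Asymptotic normality: sqrt(T) (beta_hat - beta) has a stable spread;
# here Q = I and sigma^2 = 1, so both entries should be close to 1.
draws = np.array([beta_hat(1000) for _ in range(2000)])
print(np.sqrt(1000) * draws.std(axis=0))
```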

Ordinary Least Squares: Delta method

Delta method. This is an important result for obtaining the asymptotic distribution of a generic nonlinear function of β̂.

Let f: R^k → R^j be a twice continuously differentiable function. Then

√T (f(β̂) − f(β)) →_d N(0, σ² F Q⁻¹ F'),

setting

F(x) ≡ ∂f(x)/∂x',   F ≡ F(β).

Ordinary Least Squares: Delta method

This is easily obtained by a Taylor expansion:

f(β̂) ≈ f(β) + F (β̂ − β),

yielding

√T (f(β̂) − f(β)) ≈ F √T (β̂ − β).

To estimate the asymptotic covariance matrix (acm), replace F by F(β̂) and Q⁻¹ by T (X'X)⁻¹.
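
As an illustration (and a hint for analytical question 5 below), a sketch of the delta method for the scalar function f(α, β) = α/β, with hypothetical true values α = 2, β = 4:

```python
import numpy as np

rng = np.random.default_rng(9)
T = 1000
X = np.column_stack([np.ones(T), rng.normal(size=T)])
y = X @ np.array([2.0, 4.0]) + rng.normal(size=T)   # true ratio alpha/beta = 0.5

b = np.linalg.solve(X.T @ X, X.T @ y)               # (alpha_hat, beta_hat)
e = y - X @ b
s2 = e @ e / (T - X.shape[1])
V = s2 * np.linalg.inv(X.T @ X)                     # estimated var(beta_hat)

ratio = b[0] / b[1]                                 # f(beta_hat)
F_hat = np.array([1.0 / b[1], -b[0] / b[1] ** 2])   # gradient of f at beta_hat
se_ratio = np.sqrt(F_hat @ V @ F_hat)               # delta-method standard error
print(ratio, se_ratio)
```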

Ordinary Least Squares: analytical questions
1. Define the linear regression model. Derive the FOC for the OLS estimator using matrix differentiation. What do these imply for the OLS residual e = y − Xβ̂? Then derive the estimator itself. Discuss the assumptions that one requires.

2. For the model

Y_t = α + β X_t + u_t,

derive the covariance matrix of the OLS estimators α̂, β̂. Are the two estimators uncorrelated?

3. For the model

Y_t = α + u_t,

derive the OLS estimator α̂ and derive the R² of the regression.
Ordinary Least Squares: analytical questions

4. Derive the F test when testing

H0: β_1 = β_2

for the regression

Y_t = β_1 + β_2 X_2t + β_3 X_3t + u_t.

Next, derive the F test for the case

H0: β_2 = β_3 = 0.

5. What is the asymptotic distribution of α̂/β̂?

Ordinary Least Squares: Summary

General concepts: the CAPM.

OLS estimator: k-variable linear regression models using matrices.

OLS estimator: definition and statistical properties under the Gauss-Markov assumptions for finite T.

OLS estimator: testing linear hypotheses.

OLS estimator: asymptotic properties, as T ↑ ∞, when normality / non-random regressors do not hold.

OLS estimator: the Delta method for inference on nonlinear transformations.

OLS estimator: analytical questions.
