
The Simple Regression Model

y = β0 + β1x + u

Some Terminology
In the simple linear regression model, where y = β0 + β1x + u, we typically refer to y as the
  

Dependent Variable, or Left-Hand Side Variable, or Explained Variable, or Regressand

Some Terminology, cont.
In the simple linear regression of y on x, we typically refer to x as the 
    

Independent Variable, or Right-Hand Side Variable, or Explanatory Variable, or Regressor, or Covariate, or Control Variable

A Simple Assumption
The average value of u, the error term, in the population is 0. That is, E(u) = 0. This is not a restrictive assumption, since we can always use β0 to normalize E(u) to 0.
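To see why this is not restrictive, here is a short worked step; the symbol α0 for a possibly nonzero error mean is introduced here for illustration and is not part of the slides.

```latex
% Suppose E(u) = \alpha_0 \neq 0. Fold the mean into the intercept:
\begin{align*}
y &= \beta_0 + \beta_1 x + u \\
  &= (\beta_0 + \alpha_0) + \beta_1 x + (u - \alpha_0),
  \qquad \text{where } E(u - \alpha_0) = 0 .
\end{align*}
```

Relabeling β0 + α0 as the intercept gives an equivalent model whose error has mean zero.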

Zero Conditional Mean
We need to make a crucial assumption about how u and x are related. We want it to be the case that knowing something about x does not give us any information about u, so that they are completely unrelated. That is, E(u|x) = E(u) = 0, which implies E(y|x) = β0 + β1x.

[Figure: E(y|x) = β0 + β1x as a linear function of x, where for any x the distribution f(y) of y is centered about E(y|x).]

Ordinary Least Squares
The basic idea of regression is to estimate the population parameters from a sample. Let {(xi, yi): i = 1, …, n} denote a random sample of size n from the population. For each observation in this sample, it will be the case that yi = β0 + β1xi + ui.

[Figure: Population regression line E(y|x) = β0 + β1x, sample data points, and the associated error terms u1, …, u4.]

Deriving OLS Estimates
To derive the OLS estimates we need to realize that our main assumption of E(u|x) = E(u) = 0 also implies that Cov(x, u) = E(xu) = 0. Why? Remember from basic probability that Cov(X, Y) = E(XY) − E(X)E(Y).
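Filling in the intermediate step, which the slides use implicitly, via the law of iterated expectations:

```latex
\begin{align*}
E(u) &= E\left[\,E(u \mid x)\,\right] = E[0] = 0, \\
\operatorname{Cov}(x,u) &= E(xu) - E(x)\,E(u) = E(xu)
  = E\left[\,x\,E(u \mid x)\,\right] = 0 .
\end{align*}
```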

Deriving OLS continued
We can write our 2 restrictions just in terms of x, y, β0 and β1, since u = y − β0 − β1x:
E(y − β0 − β1x) = 0
E[x(y − β0 − β1x)] = 0
These are called moment restrictions.

Deriving OLS using M.O.M.
The method of moments approach to estimation implies imposing the population moment restrictions on the sample moments. What does this mean? Recall that for E(X), the mean of a population distribution, a sample estimator of E(X) is simply the arithmetic mean of the sample.

More Derivation of OLS
We want to choose values of the parameters that will ensure that the sample versions of our moment restrictions are true. The sample versions are as follows:
(1/n) Σ (yi − β̂0 − β̂1xi) = 0
(1/n) Σ xi(yi − β̂0 − β̂1xi) = 0
where the sums run over i = 1, …, n.

More Derivation of OLS
Given the definition of a sample mean, and properties of summation, we can rewrite the first condition as follows:
ȳ = β̂0 + β̂1x̄, or β̂0 = ȳ − β̂1x̄

More Derivation of OLS
Σ xi(yi − (ȳ − β̂1x̄) − β̂1xi) = 0
Σ xi(yi − ȳ) = β̂1 Σ xi(xi − x̄)
Σ (xi − x̄)(yi − ȳ) = β̂1 Σ (xi − x̄)²

So the OLS estimated slope is
β̂1 = Σ (xi − x̄)(yi − ȳ) / Σ (xi − x̄)²
provided that Σ (xi − x̄)² > 0

Summary of OLS slope estimate
The slope estimate is the sample covariance between x and y divided by the sample variance of x. If x and y are positively correlated, the slope will be positive; if x and y are negatively correlated, the slope will be negative. We only need x to vary in our sample.
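As a numerical check, here is a minimal Python/NumPy sketch of the slope and intercept formulas; the data are made up purely for illustration.

```python
import numpy as np

# Hypothetical sample data, purely for illustration
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# OLS slope: sample covariance of x and y over sample variance of x
beta1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
# OLS intercept: the regression line passes through the sample means
beta0_hat = y.mean() - beta1_hat * x.mean()

print(beta0_hat, beta1_hat)
# Cross-check against NumPy's least-squares polynomial fit
print(np.polyfit(x, y, deg=1))  # returns [slope, intercept]
```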

More OLS
Intuitively, OLS is fitting a line through the sample points such that the sum of squared residuals is as small as possible, hence the term least squares. The residual, û, is an estimate of the error term, u, and is the difference between the fitted line (sample regression function) and the sample point.

[Figure: Sample regression line ŷ = β̂0 + β̂1x, sample data points, and the associated estimated error terms (residuals) û1, …, û4.]

Alternate approach to derivation
Given the intuitive idea of fitting a line, we can set up a formal minimization problem. That is, we want to choose our parameters such that we minimize the following:
Σ ûi² = Σ (yi − β̂0 − β̂1xi)²
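The same estimates can be obtained by minimizing the sum of squared residuals numerically. A sketch using scipy.optimize.minimize, assuming SciPy is available; the data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sample data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Sum of squared residuals as a function of (b0, b1)
def ssr(b):
    return np.sum((y - b[0] - b[1] * x) ** 2)

res = minimize(ssr, x0=[0.0, 0.0])  # numerical minimization
print(res.x)  # should match the closed-form OLS estimates
```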

Alternate approach, continued
If one uses calculus to solve the minimization problem for the two parameters, you obtain the following first order conditions, which are the same as we obtained before, multiplied by n:
Σ (yi − β̂0 − β̂1xi) = 0
Σ xi(yi − β̂0 − β̂1xi) = 0

Algebraic Properties of OLS
The sum of the OLS residuals is zero. Thus, the sample average of the OLS residuals is zero as well. The sample covariance between the regressors and the OLS residuals is zero. The OLS regression line always goes through the mean of the sample.

Algebraic Properties (precise)
Σ ûi = 0, and thus (1/n) Σ ûi = 0
Σ xiûi = 0
ȳ = β̂0 + β̂1x̄
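A small NumPy check of these three properties on made-up data; any tiny nonzero values printed are just floating-point error.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical data
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
u_hat = y - b0 - b1 * x                    # OLS residuals

print(u_hat.sum())                   # ~0: residuals sum to zero
print((x * u_hat).sum())             # ~0: zero sample covariance with the regressor
print(y.mean(), b0 + b1 * x.mean())  # the line passes through (x-bar, y-bar)
```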

More terminology
We can think of each observation as being made up of an explained part and an unexplained part, yi = ŷi + ûi. We then define the following:
Σ (yi − ȳ)² is the total sum of squares (SST)
Σ (ŷi − ȳ)² is the explained sum of squares (SSE)
Σ ûi² is the residual sum of squares (SSR)
Then SST = SSE + SSR.

Proof that SST = SSE + SSR
Σ (yi − ȳ)² = Σ [(yi − ŷi) + (ŷi − ȳ)]²
= Σ [ûi + (ŷi − ȳ)]²
= Σ ûi² + 2 Σ ûi(ŷi − ȳ) + Σ (ŷi − ȳ)²
= SSR + 2 Σ ûi(ŷi − ȳ) + SSE
and we know that Σ ûi(ŷi − ȳ) = 0

Goodness-of-Fit
How do we think about how well our sample regression line fits our sample data? We can compute the fraction of the total sum of squares (SST) that is explained by the model; call this the R-squared of the regression:
R² = SSE/SST = 1 − SSR/SST
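A short sketch computing SST, SSE, SSR, and R² for the same kind of made-up sample used above.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical data
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x
u_hat = y - y_hat

sst = np.sum((y - y.mean()) ** 2)      # total sum of squares
sse = np.sum((y_hat - y.mean()) ** 2)  # explained sum of squares
ssr = np.sum(u_hat ** 2)               # residual sum of squares

print(sst, sse + ssr)                  # SST = SSE + SSR
print(sse / sst, 1 - ssr / sst)        # two equivalent ways to get R-squared
```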

Using Stata for OLS regressions
Now that we've derived the formula for calculating the OLS estimates of our parameters, you'll be happy to know you don't have to compute them by hand. Regressions in Stata are very simple: to run the regression of y on x, just type reg y x.

Unbiasedness of OLS
Assume the population model is linear in parameters as y = β0 + β1x + u. Assume we can use a random sample of size n, {(xi, yi): i = 1, 2, …, n}, from the population model; thus we can write the sample model yi = β0 + β1xi + ui. Assume E(u|x) = 0 and thus E(ui|xi) = 0. Assume there is variation in the xi.

Unbiasedness of OLS (cont)
In order to think about unbiasedness, we need to rewrite our estimator in terms of the population parameter. Start with a simple rewrite of the formula as
β̂1 = Σ (xi − x̄)yi / s²x, where s²x = Σ (xi − x̄)²

Unbiasedness of OLS (cont)
Σ (xi − x̄)yi = Σ (xi − x̄)(β0 + β1xi + ui)
= Σ (xi − x̄)β0 + Σ (xi − x̄)β1xi + Σ (xi − x̄)ui
= β0 Σ (xi − x̄) + β1 Σ (xi − x̄)xi + Σ (xi − x̄)ui

Unbiasedness of OLS (cont)
Σ (xi − x̄) = 0 and Σ (xi − x̄)xi = Σ (xi − x̄)² = s²x,
so the numerator can be rewritten as β1 s²x + Σ (xi − x̄)ui, and thus
β̂1 = β1 + Σ (xi − x̄)ui / s²x

Unbiasedness of OLS (cont)
Let di = (xi − x̄), so that β̂1 = β1 + (1/s²x) Σ diui. Then
E(β̂1) = β1 + (1/s²x) Σ di E(ui) = β1

Unbiasedness Summary
The OLS estimates of β1 and β0 are unbiased. The proof of unbiasedness depends on our 4 assumptions; if any assumption fails, then OLS is not necessarily unbiased. Remember that unbiasedness is a description of the estimator; in a given sample we may be "near" or "far" from the true parameter.
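A small Monte Carlo sketch illustrating unbiasedness: with the assumptions holding by construction, the average of the slope estimates over many samples should be close to the true β1. The population values here (β0 = 1, β1 = 2, σ = 1) are chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, sigma = 1.0, 2.0, 1.0     # arbitrary "true" population values
n, reps = 50, 5000

slopes = np.empty(reps)
for r in range(reps):
    x = rng.uniform(0, 10, size=n)      # x varies in the sample
    u = rng.normal(0, sigma, size=n)    # E(u|x) = 0 by construction
    y = beta0 + beta1 * x + u
    slopes[r] = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

print(slopes.mean())   # close to 2: the slope estimator is unbiased
```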

Variance of the OLS Estimators
Now we know that the sampling distribution of our estimate is centered around the true parameter. We want to think about how spread out this distribution is. It is much easier to think about this variance under an additional assumption, so assume Var(u|x) = σ² (homoskedasticity).

Variance of OLS (cont)
Var(u|x) = E(u²|x) − [E(u|x)]². Since E(u|x) = 0, σ² = E(u²|x) = E(u²) = Var(u). Thus σ² is also the unconditional variance, called the error variance; σ, the square root of the error variance, is called the standard deviation of the error. We can say: E(y|x) = β0 + β1x and Var(y|x) = σ².

[Figure: Homoskedastic case: f(y|x) has the same spread at every value of x around the line E(y|x) = β0 + β1x.]

[Figure: Heteroskedastic case: the spread of f(y|x) around E(y|x) = β0 + β1x changes with x.]

Variance of OLS (cont)
Var(β̂1) = Var(β1 + (1/s²x) Σ diui) = (1/s²x)² Var(Σ diui)
= (1/s²x)² Σ di² Var(ui) = (1/s²x)² Σ di² σ²
= σ² (1/s²x)² Σ di² = σ² (1/s²x)² s²x, since Σ di² = s²x
so Var(β̂1) = σ²/s²x

Variance of OLS Summary
The larger the error variance, σ², the larger the variance of the slope estimate. The larger the variability in the xi, the smaller the variance of the slope estimate. As a result, a larger sample size should decrease the variance of the slope estimate. Problem: the error variance is unknown.

Estimating the Error Variance
We don't know what the error variance, σ², is, because we don't observe the errors, ui. What we observe are the residuals, ûi. We can use the residuals to form an estimate of the error variance.

Error Variance Estimate (cont)
ûi = yi − β̂0 − β̂1xi
= (β0 + β1xi + ui) − β̂0 − β̂1xi
= ui − (β̂0 − β0) − (β̂1 − β1)xi
Then, an unbiased estimator of σ² is
σ̂² = (1/(n − 2)) Σ ûi² = SSR/(n − 2)

Error Variance Estimate (cont)
σ̂ = sqrt(σ̂²) = standard error of the regression.
Recall that sd(β̂1) = σ/sx. If we substitute σ̂ for σ, then we have the standard error of β̂1:
se(β̂1) = σ̂ / (Σ (xi − x̄)²)^(1/2)
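Finally, a sketch of the error-variance estimate and the standard error of the slope, again on hypothetical data.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical data
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(x)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
u_hat = y - b0 - b1 * x

sigma2_hat = np.sum(u_hat ** 2) / (n - 2)   # SSR / (n - 2)
se_b1 = np.sqrt(sigma2_hat) / np.sqrt(np.sum((x - x.mean()) ** 2))
print(np.sqrt(sigma2_hat), se_b1)  # standard error of the regression, se of the slope
```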