
Economics 306

Midterm #1, Fall 2013


100 Points
100 Minutes

Instructions: Answer the following questions as clearly as you can. Circle your answers. Partial credit
is available, but we cannot give partial credit unless you show your work. You do not need to write out
every detail, but give a clear indication of how you are making your calculations. For problems
with many parts where you use information from earlier in the problem to answer the question, you
can receive full credit if you do the correct work but use the wrong numbers. The clearer you are in
your explanation the more straightforward this part of the grading will be.

You may use any type of hand-held calculator. All other electronic devices are prohibited.

DO NOT OPEN UNTIL I START THE TEST


Part 1 (Each question in this part is worth 5 points, 70 points total.)

You have collected data on two variables. The x-variable is the number of hours spent studying for an
exam, and the y-variable is the score on the exam. You have 30 observations.

You calculate the following values using Excel in order to calculate the OLS regression by hand.

ΣXi = 180
Ȳ = 30
Σ(Xi − X̄)² = 37000
Σ(Yi − Ȳ)² = 42000
Σ(Xi − X̄)(Yi − Ȳ) = −27000

Also, in case this didn't make it onto your formula sheet (it was on a practice problem), note that it
can be shown that the Sum of Squares Estimated: SSE = β̂1² Σ(Xi − X̄)².
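This identity is easy to convince yourself of numerically. Here is a short Python sketch (my addition, not part of the exam; the data below are invented purely for illustration, and any small dataset or a spreadsheet works equally well):

```python
# Check the identity SSE = b1^2 * sum((X - Xbar)^2) on invented data.
# SSE here is the explained sum of squares: sum((Yhat - Ybar)^2).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 1.5, 3.5, 3.0, 5.0]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
sxx = sum((x - xbar) ** 2 for x in xs)
b1 = sxy / sxx                      # OLS slope
b0 = ybar - b1 * xbar               # OLS intercept
yhat = [b0 + b1 * x for x in xs]    # fitted values
sse = sum((yh - ybar) ** 2 for yh in yhat)
assert abs(sse - b1 ** 2 * sxx) < 1e-9  # the identity holds
```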

1) What is X̄?

X̄ = ΣXi / n = 180 / 30 = 6

2) What is the sample variance of Y?

S_Y² = Σ(Yi − Ȳ)² / (n − 1) = 42000 / 29 = 1448.28

3) What is the sample standard deviation of Y?

S_Y = √(S_Y²) = √1448.28 = 38.06

4) What is the sample covariance of X and Y?

cov(X, Y) = Σ(Xi − X̄)(Yi − Ȳ) / (n − 1) = −27000 / 29 = −931.03

5) What is the value of β̂1?

β̂1 = Σ(Xi − X̄)(Yi − Ȳ) / Σ(Xi − X̄)² = −27000 / 37000 = −.7297
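The first five answers can be reproduced from the summary statistics alone. A short Python sketch (my addition, not part of the exam; the negative sign on the cross-product sum is what the interpretation in question 6 and the intercept in question 7 imply):

```python
# Answers 1-5, recomputed from the summary statistics on the first page.
n = 30
sum_x = 180.0
ybar = 30.0
sxx = 37000.0        # sum of (Xi - Xbar)^2
syy = 42000.0        # sum of (Yi - Ybar)^2
sxy = -27000.0       # sum of (Xi - Xbar)(Yi - Ybar); note the minus sign

xbar = sum_x / n          # answer 1: 6
var_y = syy / (n - 1)     # answer 2: about 1448.28
sd_y = var_y ** 0.5       # answer 3: about 38.06
cov_xy = sxy / (n - 1)    # answer 4: about -931.03
b1 = sxy / sxx            # answer 5: about -.7297
```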
6) Interpret, using words, the meaning of β̂1 in this particular regression.

As the number of hours of study increases by 1, the exam score is predicted to decrease by
.7297 points. I know, this is a strange result to have the exam score go down with studying.
Maybe the class is some funky graduate philosophy course where the more you study the
worse you do. It's not as though I fabricated the example out of thin air. Oh, yeah, I did.

7) What is the value of β̂0?

β̂0 = Ȳ − β̂1 X̄ = 30 − (−.7297)(6) = 34.378

8) What is the value of Σ(Xi − X̄)?

This equals zero, due to the definition of X̄.

9) Suppose one observation (call it observation 0) in the sample is X0 = 20, Y0 = 14. What is the value
of ε̂0?

Ŷ0 = β̂0 + β̂1 X0 = 34.378 − .7297(20) = 19.78

The error is the difference between the actual and predicted values: ε̂0 = 14 − 19.78 = −5.78
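A Python sketch checking the intercept from question 7 and the fitted value and residual for the hypothetical observation (my addition, not part of the exam):

```python
# Answers 7 and 9: intercept, fitted value, and residual for (X0, Y0) = (20, 14).
b1 = -27000.0 / 37000.0       # slope, from answer 5
ybar, xbar = 30.0, 6.0
b0 = ybar - b1 * xbar         # answer 7: about 34.378
x0, y0 = 20.0, 14.0
y0_hat = b0 + b1 * x0         # predicted score: about 19.78
resid0 = y0 - y0_hat          # residual: about -5.78
```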

10) What is the value of S²? That is, the variance of the regression.

SUBSTANTIAL ROUNDING ERRORS ARE GOING TO START TO CREEP IN NOW.
STUDENTS ARE NEVER TO BE PENALIZED FOR DOING THE RIGHT WORK BUT
GETTING NUMBERS A LITTLE DIFFERENT FROM WHAT I HAVE.

The clue to use SSE = β̂1² Σ(Xi − X̄)² is critical now. SSE = (−.7297)² × 37000 = 19702.7

SSR = SST − SSE = 42000 − 19702.7 = 22297.3

Finally, S² = SSR / (n − 2) = 22297.3 / 28 = 796.332
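The sums-of-squares decomposition takes only a few lines to check. A Python sketch (my addition, not part of the exam; the last line is the same ratio that appears as r² in question 14):

```python
# Answer 10: explained (SSE), residual (SSR), and the regression variance S^2.
sxy, sxx, sst = -27000.0, 37000.0, 42000.0
n = 30
b1 = sxy / sxx           # slope, about -.7297
sse = b1 ** 2 * sxx      # explained sum of squares: about 19702.7
ssr = sst - sse          # residual sum of squares: about 22297.3
s2 = ssr / (n - 2)       # regression variance: about 796.33
r2 = sse / sst           # share of variation explained: about .469
```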

11) What is the variance of β̂1?

S²_β̂1 = S² / Σ(Xi − X̄)² = 796.332 / 37000 = .0215

12) What is the test statistic for a null hypothesis that β1 = −.55?

t0 = (β̂1 − β1) / S_β̂1 = (−.7297 − (−.55)) / √.0215 = −.1797 / .1466 = −1.23

13) Will you reject the above null hypothesis at a level of α = .10? Why or why not?

No. The critical value is larger than this test statistic (in absolute value). (1.701, with
n − 2 = 28 degrees of freedom, is the critical value, to be precise.)
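Questions 11 through 13 chain together, so here is the whole hypothesis test as one Python sketch (my addition, not part of the exam; the critical value is read from the t-table at the end of the exam):

```python
# Answers 11-13: variance of the slope, t statistic, and the test decision.
s2 = 22297.297 / 28           # regression variance from question 10
sxx = 37000.0
b1 = -27000.0 / 37000.0       # slope from answer 5
b1_null = -0.55               # null hypothesis value
var_b1 = s2 / sxx             # answer 11: about .0215
se_b1 = var_b1 ** 0.5         # standard error: about .1467
t0 = (b1 - b1_null) / se_b1   # answer 12: about -1.23
crit = 1.701                  # critical value, alpha = .10, 28 df
reject = abs(t0) > crit       # answer 13: False, fail to reject
```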

14) What percentage of the variation of Y is explained by X?

r² = SSE / SST = 19702.7 / 42000 = .469
Part 2 (30 points total)

1) (10 points) Claim: All else equal, as the variance of the independent variable increases, the OLS
estimator β̂1 becomes more precise.

a. True or False (2 points)

True

b. Use math to explain your answer for a. (4 points)

One version of the formula for the variance of the estimator β̂1 is S²_β̂1 = S² / ((n − 1) S_X²). As
the variance of X rises, the denominator rises, making the variance of the estimator
β̂1 smaller; equivalently, the estimator becomes more precise.
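The same point can be made numerically: hold the regression variance fixed and spread the X values out, and the variance of the slope estimator shrinks. A Python sketch on invented numbers (my addition, not part of the exam):

```python
# Wider spread in X -> smaller variance of the slope estimator.
def var_b1(xs, s2):
    """Var(b1hat) = S^2 / sum((X - Xbar)^2)."""
    xbar = sum(xs) / len(xs)
    return s2 / sum((x - xbar) ** 2 for x in xs)

s2 = 100.0                          # error variance, held fixed
narrow = [4.0, 5.0, 6.0, 7.0, 8.0]  # the "pencil tip" support
wide = [0.0, 3.0, 6.0, 9.0, 12.0]   # the "table top" support
assert var_b1(wide, s2) < var_b1(narrow, s2)
```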

c. Use English to explain the intuition for your answer for a. (4 points)

We are seeking to explain the dependent variable Y with X. The wider the range of
values we have in our dataset for X (and the variance is the measure of this variation of
X), the more information we will have about how X and Y are connected, which
improves our precision. Think of the fitted line as a beam of wood resting on a support
whose width is the spread of X. If the support is very narrow, like a pencil tip, the beam
of wood will be very unstable and will tilt one way or another. However, if the support is
broad, like a table top, the beam will be stable and not likely to tip. Same with β̂1.
2) (12 points total (3 points each part)) Compare the following two OLS regressions:

i. Yi = β0 + β1 Xi + εi

ii. (1/6) Yi = β0⁺ + β1⁺ (2Xi) + εi⁺

(The ⁺ sign distinguishes the fact that the values you'd calculate for equation ii may differ from
i.) That is, the values of the Y variable have been multiplied by 1/6 and the values of the X
variable have been multiplied by 2. How do the values of the following terms derived from
equation ii differ from the equivalent terms for equation i?

(You need to give us some indication of how you arrive at your answers. I recommend answering
these in the order given.)

If a person correctly identifies the direction, but was unable to quantify the amount of
change, that is worth 1.5 points each. A decent but failed attempt to mathematically
determine the change is worth 2 points, assuming they are going in the right direction. If
they do the math right but don't say at the end "slope decreases" etc., that should not be
penalized.

a) β̂1⁺

β̂1 = S_XY / S_X² = Σ(Xi − X̄)(Yi − Ȳ) / Σ(Xi − X̄)²

β̂1⁺ = Σ(2Xi − 2X̄)((1/6)Yi − (1/6)Ȳ) / Σ(2Xi − 2X̄)² = (2/6) Σ(Xi − X̄)(Yi − Ȳ) / (4 Σ(Xi − X̄)²) = (1/12) β̂1

Slope decreases by a factor of 12.

b) β̂0⁺

β̂0⁺ = (1/6)Ȳ − β̂1⁺ (2X̄) = (1/6)Ȳ − (1/12)β̂1 (2X̄) = (1/6)(Ȳ − β̂1 X̄) = (1/6) β̂0

Intercept decreases by a factor of 6.

c) SSR⁺

SSR⁺ = Σ((1/6)Yi − (1/6)Ŷi)² = (1/36) Σ(Yi − Ŷi)² = (1/36) SSR

SSR decreases by a factor of 36.

d) R²⁺

Unchanged. The ability of X to explain Y is no different if each is multiplied by a constant
number. Also, SST and SSE decrease by a factor of 36 (which I have not written out here),
so the ratio is no different.
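All four rescaling results can be verified at once by running OLS on any invented dataset, rescaling, and comparing. A Python sketch (my addition, not part of the exam; the data are arbitrary):

```python
# Multiply Y by 1/6 and X by 2: slope /12, intercept /6, SSR /36, R^2 unchanged.
def ols(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar
    ssr = sum((y - b0 - b1 * x) ** 2 for x, y in zip(xs, ys))  # residual SS
    sst = sum((y - ybar) ** 2 for y in ys)
    return b1, b0, ssr, 1.0 - ssr / sst

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 1.0, 4.0, 3.0, 6.0]
b1, b0, ssr, r2 = ols(xs, ys)
b1p, b0p, ssrp, r2p = ols([2 * x for x in xs], [y / 6 for y in ys])
assert abs(b1p - b1 / 12) < 1e-9   # slope falls by a factor of 12
assert abs(b0p - b0 / 6) < 1e-9    # intercept falls by a factor of 6
assert abs(ssrp - ssr / 36) < 1e-9 # SSR falls by a factor of 36
assert abs(r2p - r2) < 1e-9        # R^2 unchanged
```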

3) (8 points) In the OLS regression of Y on X where the assumptions of the CLRM are met, what is
the expected value of β̂0 + β̂1 Xi + ε̂i? (You may use the unbiasedness results from class, and
not rederive them. Show your reasoning and be precise.)

E[β̂0 + β̂1 Xi + ε̂i] = E[β̂0] + E[β̂1 Xi] + E[ε̂i]

Xi is assumed to be nonrandom in the CLRM, and we proved that β̂0 and β̂1 are unbiased in
class. So we can rewrite as:

E[β̂0 + β̂1 Xi + ε̂i] = β0 + β1 Xi + E[ε̂i] = β0 + β1 Xi -- Getting this far is worth 6 points, since it
turns out to be true that E[ε̂i] = 0. [Partial credit of 3 points for any thoughtful attempt at
answering the question.]

However, we have never actually established in class that E[ε̂i] = 0. It may seem obvious,
but we should be able to rigorously justify the obvious, which is what I want here for the
last two points. It isn't hard, but to get there a student needs to do roughly the
following. (A student doesn't have to have every single step, but it needs to be clear they
have this idea.)

E[ε̂i] = E[Yi − Ŷi] = E[(β0 + β1 Xi + εi) − (β̂0 + β̂1 Xi)] = β0 + β1 Xi + E[εi] − E[β̂0] − Xi E[β̂1]

E[ε̂i] = β0 + β1 Xi + E[εi] − β0 − β1 Xi = E[εi] = 0, since the betahats are unbiased and since our
CLRM assumption is that the expected value of the POPULATION errors is zero.

Possible slightly streamlined alternative that I only thought of now. On HW 2 we showed
that Ŷi is an unbiased estimator of E[Yi]. Therefore E[ε̂i] = E[Yi − Ŷi] = E[Yi] − E[Ŷi] = E[Yi] − E[Yi] = 0,
where I have skipped a couple steps.


Cutoff Points for the t-distribution
df \ p 0.6 0.7 0.8 0.9 0.95 0.975 0.99 0.995
1 0.325 0.727 1.367 3.078 6.314 12.706 31.821 63.657
2 0.289 0.617 1.061 1.886 2.92 4.303 6.965 9.925
3 0.277 0.584 0.978 1.638 2.353 3.182 4.541 5.841
4 0.271 0.569 0.941 1.533 2.132 2.776 3.747 4.604
5 0.267 0.559 0.92 1.476 2.015 2.571 3.365 4.032
6 0.265 0.553 0.906 1.44 1.943 2.447 3.143 3.707
7 0.263 0.549 0.896 1.415 1.895 2.365 2.998 3.499
8 0.262 0.546 0.889 1.397 1.86 2.306 2.896 3.355
9 0.261 0.543 0.883 1.383 1.833 2.262 2.821 3.25
10 0.26 0.542 0.879 1.372 1.812 2.228 2.764 3.169
11 0.26 0.54 0.876 1.363 1.796 2.201 2.718 3.106
12 0.259 0.539 0.873 1.356 1.782 2.179 2.681 3.055
13 0.259 0.538 0.87 1.35 1.771 2.16 2.65 3.012
14 0.258 0.537 0.868 1.345 1.761 2.145 2.624 2.977
15 0.258 0.536 0.866 1.341 1.753 2.131 2.602 2.947
16 0.258 0.535 0.865 1.337 1.746 2.12 2.583 2.921
17 0.257 0.534 0.863 1.333 1.74 2.11 2.567 2.898
18 0.257 0.534 0.862 1.33 1.734 2.101 2.552 2.878
19 0.257 0.533 0.861 1.328 1.729 2.093 2.539 2.861
20 0.257 0.533 0.860 1.325 1.725 2.086 2.528 2.845
21 0.257 0.532 0.859 1.323 1.721 2.08 2.518 2.831
22 0.256 0.532 0.858 1.321 1.717 2.074 2.508 2.819
23 0.256 0.532 0.858 1.319 1.714 2.069 2.5 2.807
24 0.256 0.531 0.857 1.318 1.711 2.064 2.492 2.797
25 0.256 0.531 0.856 1.316 1.708 2.06 2.485 2.787
26 0.256 0.531 0.856 1.315 1.706 2.056 2.479 2.779
27 0.256 0.531 0.855 1.314 1.703 2.052 2.473 2.771
28 0.256 0.53 0.855 1.313 1.701 2.048 2.467 2.763
29 0.256 0.53 0.854 1.311 1.699 2.045 2.462 2.756
30 0.256 0.53 0.854 1.31 1.697 2.042 2.457 2.75
40 0.255 0.529 0.851 1.303 1.684 2.021 2.423 2.704
60 0.254 0.527 0.848 1.296 1.671 2 2.39 2.66
120 0.254 0.526 0.845 1.289 1.658 1.98 2.358 2.617
infinity 0.253 0.524 0.842 1.282 1.645 1.96 2.326 2.576
