Multiple Choice Question Bank for BUEC 333


(copyright 2006, Peter Kennedy)

This set of multiple choice questions has been prepared to supply you with a means of
checking your command of the course material. For each week of the course I have tried
to provide questions that cover the full range of material discussed during that week. I
have also tried not to include “duplicate” questions, namely questions that are basically
the same as other questions but, for example, just use different numbers. The questions
range quite widely in character. Some check your knowledge of definitions, some address
implications of concepts, some require finding numbers in tables, some ask you to do
calculations, some are very easy, and some are very difficult. Because of this, not all
questions are of the kind likely to appear on an exam, so how you score on these
questions matters less than how well you understand the logic of the answers.

Spending quality time on these questions should help your efforts to learn this course
material. My advice is to work through each week’s questions after you believe you
have a good command of that week’s material, and then if there are any questions for
which you don’t understand why the answer provided is the best answer, make sure you
find out why.

These questions have not been tested, so there may be problems with them. Some
questions may be vague or defective, and some answers may be incorrect. Please bring to
my attention any questions that are messed up.

Week 1: Statistical Foundations I

The next 10 questions refer to a variable x distributed as follows:

x        1    2    3
Prob(x)  .1   .2   k

1. The value of k is
a) .3 b) .5 c) .7 d) indeterminate

2. The expected value of x is


a) 2.0 b) 2.1 c) 2.6 d) indeterminate

3. The expected value of x squared is


a) 4.0 b) 6.76 c) 7.2 d) indeterminate

4. The variance of x is
a) 0.44 b) 0.66 c) 4.6 d) indeterminate
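
The moment calculations behind questions 1 through 4 can be checked with a few lines of Python; a minimal sketch, where the only assumption used is that the probabilities must sum to one, which pins down k:

    # Distribution from the table above; k is whatever makes the
    # probabilities sum to one.
    x = [1, 2, 3]
    p = [0.1, 0.2, None]
    p[2] = 1 - (p[0] + p[1])                             # k

    mean = sum(xi * pi for xi, pi in zip(x, p))          # E[x]
    mean_sq = sum(xi ** 2 * pi for xi, pi in zip(x, p))  # E[x^2]
    var = mean_sq - mean ** 2                            # Var(x) = E[x^2] - (E[x])^2
    print(p[2], mean, mean_sq, var)

Rerunning this sketch with the x values increased by 5 or multiplied by 5 shows the logic behind questions 5 through 10.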

5. If all the x values were increased by 5 in this table, then the answer to question 2
would be
a) unchanged b) increased by 5 c) multiplied by 5 d) indeterminate

6. If all the x values were increased by 5 in this table, then the answer to question 3
would be
a) unchanged b) increased by 25 c) multiplied by 25 d) none of the above

7. If all the x values were increased by 5 in this table, then the answer to question 4
would be
a) unchanged b) increased by 25 c) multiplied by 25 d) none of the above

8. If all the x values were multiplied by 5 in this table, then the answer to question 2
would be
a) unchanged b) increased by 5 c) multiplied by 5 d) indeterminate

9. If all the x values were multiplied by 5 in this table, then the answer to question 3
would be
a) unchanged b) increased by 25 c) multiplied by 25 d) none of the above

10. If all the x values were multiplied by 5 in this table, then the answer to question 4
would be
a) unchanged b) increased by 25 c) multiplied by 25 d) none of the above

The next 17 questions refer to variables X and Y with the following joint distribution
prob(X,Y)

       Y=4   Y=5   Y=6
X=1    .1    .05   k
X=2    .05   .1    .1
X=3    .1    .1    .4

11. The value of k is


a) 0 b) .1 c) .2 d) indeterminate

12. If I know that Y=4, then the probability that X=3 is


a) .1 b) .25 c) .4 d) .6

13. If I don’t know anything about the value of Y, then the probability that X=3 is
a) .1 b) .2 c) .4 d) .6

14. If I know that Y=5, then the expected value of X is


a) 0.55 b) 2.0 c) 2.2 d) 2.5

15. If I don’t know anything about Y, then the expected value of X is


a) 2.0 b) 2.25 c) 2.45 d) indeterminate

16. If I know that Y=5, then the variance of X is


a) .56 b) .75 c) 4.84 d) 5.4

17. If I don’t know anything about Y, then the variance of X is


a) .55 b) .74 c) 6.0 d) 6.55

18. The covariance between X and Y is


a) 0.0 b) .09 c) .19 d) .29

19. The correlation between X and Y is


a) 0.0 b) .29 c) .47 d) .54
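
Questions 11 through 19 all follow from the joint table by computing marginals, conditionals, and cross-moments; a Python sketch of those computations (k again comes from the probabilities summing to one):

    # Joint distribution from the table above.
    xs, ys = [1, 2, 3], [4, 5, 6]
    p = {(1, 4): .1, (1, 5): .05,
         (2, 4): .05, (2, 5): .1, (2, 6): .1,
         (3, 4): .1, (3, 5): .1, (3, 6): .4}
    p[(1, 6)] = 1 - sum(p.values())                    # k

    px = {x: sum(p[(x, y)] for y in ys) for x in xs}   # marginal of X
    py = {y: sum(p[(x, y)] for x in xs) for y in ys}   # marginal of Y
    Ex = sum(x * px[x] for x in xs)
    Ey = sum(y * py[y] for y in ys)
    varx = sum(x * x * px[x] for x in xs) - Ex ** 2
    vary = sum(y * y * py[y] for y in ys) - Ey ** 2
    cov = sum(x * y * p[(x, y)] for x in xs for y in ys) - Ex * Ey
    corr = cov / (varx * vary) ** 0.5

    # Conditional distribution of X given Y = 5.
    px_y5 = {x: p[(x, 5)] / py[5] for x in xs}
    Ex_y5 = sum(x * px_y5[x] for x in xs)
    varx_y5 = sum(x * x * px_y5[x] for x in xs) - Ex_y5 ** 2
    print(Ex, varx, cov, corr, Ex_y5, varx_y5)

Rerunning this after shifting or scaling the X and Y values shows the logic behind questions 20 through 27.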

20. If all the X values in the table above were increased by 8, then the answer to question
18 would be
a) unchanged b) increased by 8 c) multiplied by 8 d) multiplied by 64

21. If all the X values in the table above were increased by 8, then the answer to question
19 would be
a) unchanged b) increased by 8 c) multiplied by 8 d) multiplied by 64

22. If all the X values and all the Y values in the table above were increased by 8, then
the answer to question 18 would be
a) unchanged b) increased by 8 c) multiplied by 8 d) multiplied by 64

23. If all the X values and all the Y values in the table above were increased by 8, then
the answer to question 19 would be
a) unchanged b) increased by 8 c) multiplied by 8 d) multiplied by 64

24. If all the X values in the table above were multiplied by 8, then the answer to question
18 would be
a) unchanged b) increased by 8 c) multiplied by 8 d) multiplied by 64

25. If all the X values in the table above were multiplied by 8, then the answer to question
19 would be
a) unchanged b) increased by 8 c) multiplied by 8 d) multiplied by 64

26. If all the X values and all the Y values in the table above were multiplied by 8, then
the answer to question 18 would be
a) unchanged b) increased by 8 c) multiplied by 8 d) multiplied by 64

27. If all the X values and all the Y values in the table above were multiplied by 8, then
the answer to question 19 would be
a) unchanged b) increased by 8 c) multiplied by 8 d) multiplied by 64

28. The distribution of X when Y is known is called the ________ distribution of X, and
is written as ________. These blanks are best filled with
a) conditional, p(X) b) conditional, p(X |Y)
c) marginal, p(X) d) marginal, p(X |Y)

29. The distribution of X when Y is not known is called the ________ distribution of X,
and is written as ________. These blanks are best filled with
a) conditional, p(X) b) conditional, p(X |Y)
c) marginal, p(X) d) marginal, p(X |Y)

The next 5 questions refer to the following information. You have estimated the equation
wage = alphahat + betahat*experience to predict a person’s wage using years of
experience as an explanatory variable. Your results are that alphahat is 5.0 with standard
error 0.8, betahat is 1.2 with standard error 0.1, and the estimated covariance between
alphahat and betahat is –0.005. What this means is that 5.0 is a realization of a random
variable with unknown mean and standard error 0.8, and 1.2 is a realization of another
random variable which has unknown mean and standard error 0.1.

30. The estimated variance of your forecast of the wage of a person with no experience is
a) 0.64 b) 0.8 c) 0.81 d) none of these

31. The estimated variance of your forecast of the wage of a person with one year of
experience is
a) 0.01 b) 0.64 c) 0.65 d) none of these

32. The estimated variance of your forecast of the wage of a person with two years of
experience is
a) 0.64 b) 0.65 c) 0.66 d) 0.67

33. The estimate of the increase in wage enjoyed by a person with three additional years
of experience is
a) 3.6 b) 8.6 c) 15 d) none of these

34. The estimated variance of the estimate of the increase in wage enjoyed by a person
with three additional years of experience is
a) 0.01 b) 0.03 c) 0.09 d) none of these
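
Questions 30 through 34 all use the same rule: the forecast at experience x is alphahat + betahat*x, so its variance is Var(alphahat) + x²Var(betahat) + 2x·Cov(alphahat, betahat). A minimal sketch using the reported numbers:

    # Estimated variances from the reported standard errors.
    var_a, var_b, cov_ab = 0.8 ** 2, 0.1 ** 2, -0.005

    def forecast_var(x):
        # Var(alphahat + betahat * x)
        return var_a + x ** 2 * var_b + 2 * x * cov_ab

    for x in (0, 1, 2):                # questions 30, 31, 32
        print(x, forecast_var(x))

    # Three extra years change the forecast by 3 * betahat, a quantity
    # with variance 3^2 * Var(betahat)  (questions 33 and 34).
    print(3 * 1.2, 3 ** 2 * var_b)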

The next 9 questions refer to the following information. The percentage returns from
stocks A, B, and C are random variables with means 0.05, 0.08, and 0.12 respectively,
and variances 0.04, 0.09, and 0.16, respectively. The covariance between A and B returns
is minus 0.01; the return from stock C is independent of the other two. A GIC is available
with a guaranteed return of 0.03.

35. If you buy a thousand dollars each of A and B, your expected percentage return for
this portfolio is
a) 0.05 b) 0.065 c) 0.08 d) none of these

36. If you buy a thousand dollars each of A and B, the variance of your percentage return
for this portfolio is
a) 0.11 b) 0.12 c) 0.13 d) none of these

37. If you buy a thousand dollars of A and two thousand dollars of B, your expected
percentage return for this portfolio is
a) 0.05 b) 0.07 c) 0.08 d) none of these

38. If you buy a thousand dollars of A and two thousand dollars of B, the variance of
your percentage return for this portfolio is
a) 0.04 b) 0.044 c) 0.73 d) none of these

39. If you were to supplement either of the above portfolios with some of stock C your
expected return should go ____ and if you were to supplement with some GIC your
expected return should go ____. The best ways to fill these blanks are
a) up, up b) up, down c) down, up d) down, down

40. If you were to supplement either of the above portfolios with some GIC the variance
of your return should
a) increase
b) decrease
c) remain unchanged
d) can’t tell what will happen

41. If you were to supplement either of the above portfolios with some of stock C, the
variance of your return should
a) increase
b) decrease
c) remain unchanged
d) can’t tell what will happen

42. Suppose you bought a thousand dollars of each of A, B, C and GIC. The expected
return of this portfolio is
a) .0625 b) .07 c) .087 d) none of these

43. Suppose you bought a thousand dollars of each of A, B, C and GIC. The variance of
the return of this portfolio is
a) .017 b) .068 c) .075 d) .27
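
The portfolio questions 35 through 43 reduce to the mean and variance of a weighted sum of returns, E(w′r) = w′μ and Var(w′r) = w′Σw, with the weights being dollar shares. A sketch assuming the numpy library is available:

    import numpy as np

    mu = np.array([0.05, 0.08, 0.12, 0.03])        # A, B, C, GIC
    Sigma = np.array([[0.04, -0.01, 0.00, 0.00],   # C independent of A and B;
                      [-0.01, 0.09, 0.00, 0.00],   # the GIC return is riskless
                      [0.00,  0.00, 0.16, 0.00],
                      [0.00,  0.00, 0.00, 0.00]])

    def portfolio(dollars):
        w = np.array(dollars) / sum(dollars)       # dollar amounts -> weights
        return w @ mu, w @ Sigma @ w

    print(portfolio([1000, 1000, 0, 0]))           # questions 35-36
    print(portfolio([1000, 2000, 0, 0]))           # questions 37-38
    print(portfolio([1000, 1000, 1000, 1000]))     # questions 42-43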

44. Suppose we have a sample of size 100 from a random variable x with mean 3 and
variance 4. The standard deviation of xbar, the average of our sample values, is
a) 0.04 b) 0.2 c) 2 d) 4

45. You have obtained the following data on the wages of randomly-obtained
observationally-identical teenagers: 7, 8, 8, 7, 9, 8, 10, 8, 7, 8, 8. You calculate the
average as 8 and intend to report this figure; you also want to provide a confidence
interval but to do this you have to estimate the standard error of this average. The
estimated standard error you should use is approximately the square root of
a) 0.073 b) 0.08 c) 0.8 d) none of these

46. From a sample of size 300 you have estimated the percentage of workers who have
experienced an injury on the job last year to be six percent. You wish to report this
figure but you also want to provide a confidence interval. To do this you need to
estimate the standard error of this estimate. The estimated standard error you should
use is approximately
a) 0.0002 b) 0.014 c) 0.056 d) none of these
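
Questions 45 and 46 rest on the two workhorse standard-error formulas: s²/n for a sample mean (with s² using the n − 1 divisor) and p(1 − p)/n for a sample proportion. A sketch:

    # Question 45: estimated variance of the sample average is s^2 / n.
    data = [7, 8, 8, 7, 9, 8, 10, 8, 7, 8, 8]
    n = len(data)
    mean = sum(data) / n
    s2 = sum((d - mean) ** 2 for d in data) / (n - 1)
    print(mean, s2 / n)                  # take the square root for the SE

    # Question 46: standard error of a proportion is sqrt(p(1-p)/n).
    p, m = 0.06, 300
    print((p * (1 - p) / m) ** 0.5)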

47. A negative covariance between x and y means that whenever we obtain an x value
that is greater than the mean of x
a) we will obtain a corresponding y value smaller than the mean of y
b) we will obtain a corresponding y value greater than the mean of y
c) we have a greater than fifty percent chance of obtaining a corresponding y value
smaller than the mean of y
d) we have a greater than fifty percent chance of obtaining a corresponding y value
greater than the mean of y

48. The central limit theorem assures us that the sampling distribution of the mean
a) is always normal
b) is always normal for large sample sizes
c) approaches normality as the sample size increases
d) appears normal only when the sample size exceeds 1,000

49. For a variable x the standard error of the sample mean is calculated as 20 when
samples of size 25 are taken and as 10 when samples of size 100 are taken. A
quadrupling of sample size has only halved the standard error. We can conclude that
increasing sample size is
a) always cost effective b) sometimes cost effective c) never cost effective

50. In the preceding question, what must be the value of the standard deviation of x?
a) 1000 b) 500 c) 377.5 d) 100

51. Suppose a random variable x has distribution given by f(x) = 2x, for 0 ≤ x ≤ 1 and
zero elsewhere. The expected value of x is
a) less than 0.5 b) equal to 0.5 c) greater than 0.5 d) indeterminate

52. Suppose a random variable x has distribution given by f(x) = kx, for 0 ≤ x ≤ 2 and
zero elsewhere. The value of k is
a) 0.5 b) 1.0 c) 2.0 d) indeterminate
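
For the continuous distributions in questions 51 and 52, expected values and normalizing constants come from integrals; a numerical check using a homemade midpoint rule (no libraries assumed):

    def integrate(f, a, b, n=10000):
        # Midpoint-rule approximation to the integral of f from a to b.
        h = (b - a) / n
        return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

    # Question 51: E[x] is the integral of x * f(x) with f(x) = 2x on [0, 1].
    print(integrate(lambda t: t * 2 * t, 0, 1))

    # Question 52: k must make the density integrate to one on [0, 2].
    print(1 / integrate(lambda t: t, 0, 2))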

Week 2: Statistical Foundations II

1. Suppose that if the null that beta equals one is true a test statistic you have calculated
is distributed as a t statistic with 17 degrees of freedom. What critical value cuts off
5% of the upper tail of this distribution?
a) 1.65 b) 1.74 c) 1.96 d) 2.11

2. Suppose that in the previous question beta is equal to 1.2. Then the critical value from
the previous question will cut off ______ of the upper tail of the distribution of your
test statistic. The blank is best filled with
a) less than 5% b) 5% c) more than 5%

3. Suppose that if the null that alpha and beta both equal one is true a test statistic you
have calculated is distributed as a chi-square statistic with 2 degrees of freedom.
What critical value cuts off 5% of the upper tail of this distribution?
a) 3.84 b) 5.02 c) 5.99 d) 7.38

4. Suppose that if the null that alpha and beta both equal one is true a test statistic you
have calculated is distributed as an F statistic with 2 and 22 degrees of freedom for
the numerator and denominator respectively. What critical value cuts off 5% of the
upper tail of this distribution?
a) 3.00 b) 3.44 c) 4.30 d) 5.72

5. Suppose that if the null that beta equals one is true a test statistic you have calculated
is distributed as a z (standard normal) statistic. What critical value cuts off 5% of the
upper tail of this distribution?
a) 0.31 b) 0.48 c) 1.65 d) 2.57

6. Suppose that if the null that beta equals one is true a test statistic you have calculated
is distributed as a z (standard normal) statistic. If you choose 1.75 as your critical
value, what is your (one-sided) type I error probability?
a) 4% b) 5% c) 6% d) 7%

7. Suppose that if the null that beta equals one is true a test statistic you have calculated
is distributed as a z (standard normal) statistic. If you choose 1.28 as your critical
value, what is your (two-sided) type I error probability?
a) 5% b) 10% c) 15% d) 20%
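
Critical values and tail areas like those in questions 1 through 7 can be read from tables or computed; a sketch assuming the scipy library is available:

    from scipy import stats

    # Upper 5% critical values are the 95th percentiles of each
    # null distribution.
    print(stats.t.ppf(0.95, df=17))            # question 1
    print(stats.chi2.ppf(0.95, df=2))          # question 3
    print(stats.f.ppf(0.95, dfn=2, dfd=22))    # question 4
    print(stats.norm.ppf(0.95))                # question 5

    # Tail areas beyond a chosen critical value (questions 6 and 7).
    print(stats.norm.sf(1.75))                 # one-sided
    print(2 * stats.norm.sf(1.28))             # two-sided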

8. A type I error is
a) failing to reject the null when it is false
b) rejecting the null when it is true

9. The probability of a type I error is determined by


a) the researcher
b) the sample size
c) the degree of falsity of the null hypothesis
d) both b) and c) above

10. A type II error is


a) failing to reject the null when it is false
b) rejecting the null when it is true

11. The probability of a type II error is determined by


a) the researcher
b) the sample size
c) the degree of falsity of the null hypothesis
d) both b) and c) above

12. Hypothesis testing is based on


a) minimizing the type I error
b) minimizing the type II error
c) minimizing the sum of type I and type II errors
d) none of these

13. A power curve graphs the degree of falseness of the null against
a) the type I error probability
b) the type II error probability
c) one minus the type I error probability
d) one minus the type II error probability

14. When the null is true the power curve measures


a) the type I error probability
b) the type II error probability
c) one minus the type I error probability
d) one minus the type II error probability

15. Other things equal, when the sample size increases the power curve
a) flattens out
b) becomes steeper
c) is unaffected

16. Other things equal, when the type I error probability is increased the power curve
a) shifts up b) shifts down c) is unaffected

17. The power of a test statistic should become larger as the


a) sample size becomes larger
b) type II error becomes larger
c) null becomes closer to being true
d) significance level becomes smaller

18. A manufacturer has had to recall several models due to problems not discovered with
its random final inspection procedures. This is an example of
a) a type I error b) a type II error c) both types of error d) neither type of error

19. As the sample size becomes larger, the type I error probability
a) increases b) decreases c) does not change d) can’t tell

20. Consider the following two statements: a) If you reject a null using a one-tailed test,
then you will also reject it using a two-tailed test at the same significance level; b) For a
given level of significance, the critical value of t gets closer to zero as the sample size
increases.
a) both statements are true b) neither statement is true
c) only the first statement is true d) only the second statement is true

21. Power is the probability of making the right decision when


a) the null is true
b) the null is false
c) the null is either true or false
d) the chosen significance level is 100%

22. The p value is


a) the power b) one minus the power c) the type II error d) none of the above

23. After running a regression, the Eviews software contains


a) the residuals in the resid vector and the constant (the intercept) in the c vector
b) the residuals in the resid vector and the parameter estimates in the c vector
c) the squared residuals in the resid vector and the constant in the c vector
d) the squared residuals in the resid vector and the parameter estimates in the c vector

24. In the Eviews software, in the OLS output the intercept estimate by default is
a) printed last and called “I” for “intercept”
b) printed first and called “I”
c) printed last and called “C” (for “constant”)
d) printed first and called “C”

25. A newspaper reports a poll estimating the proportion u of the adult population in
favor of a proposition as 65%, but qualifies this result by saying that “this result is
accurate within plus or minus 3 percentage points, 19 times out of twenty.” What does
this mean?
a) the probability is 95% that u lies between 62% and 68%
b) the probability is 95% that u is equal to 65%
c) 95% of estimates calculated from samples of this size will lie between 62% and 68%
d) none of the above

26. In the Eviews software, when you run an OLS regression by clicking on buttons, the
parameter estimates are put in a vector called
a) c (for “coefficient vector”) with the first element in this vector the intercept estimate
b) c (for “coefficient vector”) with the last element in this vector the intercept estimate
c) b (for “beta vector”) with the first element in this vector the intercept estimate
d) b (for “beta vector”) with the last element in this vector the intercept estimate

27. A newspaper reports a poll of 400 people estimating the proportion u of the adult
population in favor of a proposition as 60%, but qualifies this result by saying that “this
result is accurate within plus or minus x percentage points, 19 times out of twenty.” The
value of x in this case is about
a) 2 b) 3 c) 4 d) 5

28. In the Eviews software, in the OLS output the far right column reports
a) the coefficient estimate b) the standard error c) the t value d) none of these

29. A politician wants to estimate the proportion of people in favour of a proposal, a
proportion he believes is about 60%. About what sample size is required to estimate the
true proportion to within plus or minus 0.05 at the 95% confidence level?
a) 10 b) 100 c) 200 d) 400
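
The poll questions (25, 27, and 29) all rest on the margin of error for a proportion: “19 times out of twenty” is a 95% interval, with margin roughly 1.96 times the square root of p(1 − p)/n. A sketch:

    # Question 27: margin for p = 0.60, n = 400.
    p, n = 0.60, 400
    print(1.96 * (p * (1 - p) / n) ** 0.5)     # in proportion units

    # Question 29: solve the margin formula for n at a +/- 0.05 margin.
    p, margin = 0.60, 0.05
    print(p * (1 - p) * (1.96 / margin) ** 2)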

30. When you calculate a 95% confidence interval for an unknown parameter beta, the
interpretation of this interval is that
a) the probability that the true value of beta lies in this interval is 95%
b) 95% of repeated calculations of estimates of beta from different samples will lie in
this interval
c) 95% of intervals computed in this way will cover the true value of beta
d) none of the above

31. Suppose from a very large sample you have estimated a parameter beta as 2.80 with
estimated variance 0.25. Your 90% confidence interval for beta is 2.80 plus or minus
approximately
a) 0.41 b) 0.49 c) 0.82 d) 0.98

The next 8 questions refer to the following information. You have an estimate 1.75 of a
slope coefficient which you know is distributed normally with unknown mean beta and
known variance 0.25. You wish to test the null that beta = 1 against the alternative that
beta > 1 at the 10% significance level.

32. The critical value to use here is


a) 1.28 b) 1.65 c) 1.96 d) none of these

33. You should _____ the null. If you had used a 5% significance level you would
______ the null. The blanks are best filled with
a) accept; accept b) accept; reject c) reject; accept d) reject; reject

34. The p value (one-sided) for your test is approximately


a) 5% b) 7% c) 10% d) 23%

35. If the true value of beta is 1.01, the power of your test is approximately
a) 1% b) 5% c) 10% d) nowhere near these values

36. If the true value of beta is 10.01, the power of your test is approximately
a) 1% b) 5% c) 10% d) nowhere near these values

37. If the true value of beta is 1.75, the power of your test is approximately
a) 10% b) 40% c) 60% d) 90%

38. If the true value of beta is 1.65, the power of your test is approximately
a) 10% b) 50% c) 70% d) 90%

39. If the true value of beta is 1.25, the power of your test is approximately
a) 22% b) 40% c) 60% d) 78%
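
Questions 32 through 39 share one setup: the estimator is normal with mean beta and standard deviation 0.5 (the square root of the known variance 0.25), and the 10% one-sided test rejects when the estimate exceeds 1 + 1.28 × 0.5. Power is then a normal tail probability; a sketch assuming scipy is available:

    from scipy import stats

    sd = 0.25 ** 0.5
    cutoff = 1 + stats.norm.ppf(0.90) * sd     # reject when estimate > 1.64

    # p value for the observed estimate 1.75 under the null (question 34).
    print(stats.norm.sf((1.75 - 1) / sd))

    # Power = P(estimate > cutoff) when the true beta is as given
    # (questions 35 through 39).
    for true_beta in (1.01, 1.25, 1.65, 1.75, 10.01):
        print(true_beta, stats.norm.sf((cutoff - true_beta) / sd))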

Week 3: What is Regression Analysis?

1. In the regression specification y = α + βx + ε


a) y is called the dependent variable or the regressand, and x is called the regressor
b) y is called the dependent variable or the regressor, and x is called the regressand
c) y is called the independent variable or the regressand, and x is called the regressor
d) y is called the independent variable or the regressor, and x is called the regressand

2. In the regression specification y = α + βx + ε


a) α is called the intercept, β is called the slope, and ε is called the residual
b) α is called the slope, β is called the intercept, and ε is called the residual
c) α is called the intercept, β is called the slope, and ε is called the error
d) α is called the slope, β is called the intercept, and ε is called the error

3. In the regression specification y = α + βx + ε, which of the following is not a
justification for epsilon?
a) it captures the influence of a million omitted explanatory variables
b) it incorporates measurement error in x
c) it reflects human random behavior
d) it accounts for nonlinearities in the functional form

4. In the regression specification y = α + βx + ε, if the expected value of epsilon is a
fixed number but not zero
a) the regression cannot be run
b) the regression is without a reasonable interpretation
c) this non-zero value is accommodated by the βx term
d) this non-zero value is incorporated into α

5. In the regression specification y = α + βx + ε the conditional expectation of y is


a) the average of the sample y values
b) the average of the sample y values corresponding to a specific x value
c) α + βx d) α + βx + ε

6. In the regression specification y = α + βx + ε the expected value of y conditional on
x=1 is
a) the average of the sample y values corresponding to x=1
b) α + β + ε c) β d) α + β

7. In the regression specification y = α + βx + δz + ε the parameter β is interpreted as
the amount by which y changes when x increases by one and
a) z does not change
b) z changes by one
c) z changes by the amount it usually changes whenever x increases by one
d) none of the above

8. In the regression specification y = α + βx + δz + ε the parameter α is called


a) the slope coefficient
b) the intercept
c) the constant term
d) both b) and c) above

9. The terminology ceteris paribus means


a) all else equal
b) changing everything else by the amount by which they usually change
c) changing everything else by equal amounts
d) none of the above

The next 3 questions refer to the following information. Suppose the regression
specification y = α + βx + δz + ε was estimated as y = 2 + 3x + 4z. We have a new
observation for which x = 5 and z = -2. For this new observation

10. the associated value of y is


a) 7 b) 9 c) 25 d) impossible to determine

11. the expected value of y is


a) 7 b) 9 c) 25 d) impossible to determine

12. our forecasted value of y is


a) 7 b) 9 c) 25 d) impossible to determine

13. Suppose the regression specification y = α + βx + ε was estimated as y = 1 + 2x. We
have a new observation for which x = 3 and y = 11. For this new observation the residual
is
a) zero b) 4 c) –4 d) unknown because the error is unknown

14. For the regression specification y = α + βx + ε the OLS estimates result from
minimizing the sum of
a) (α + βx)²
b) (α + βx + ε)²
c) (y - α + βx)²
d) none of these

15. For the regression specification y = α + βx + ε a computer search to find the OLS
estimates would search over all values of
a) x b) α and β c) α, β, and x d) α, β, x, and y
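
What the search in question 15 would find is the pair of values minimizing the sum of squared errors, which the usual OLS formulas deliver in closed form. A sketch on made-up data, assuming numpy is available (the data here are purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)             # illustrative data only
    x = rng.uniform(0, 10, 50)
    y = 2 + 3 * x + rng.normal(0, 1, 50)

    def sse(a, b):
        return np.sum((y - a - b * x) ** 2)    # sum of squared errors

    # Closed-form minimizers of sse over (a, b):
    b_ols = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    a_ols = y.mean() - b_ols * x.mean()
    print(a_ols, b_ols, sse(a_ols, b_ols))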

16. R-square is the fraction of


a) the dependent variable explained by the independent variables
b) the variation in the dependent variable explained by the independent variables
c) the variation in the dependent variable explained linearly by the independent
variables

17. Obtaining a negative R-square probably means that


a) the computer made a calculation error
b) the true functional form is not linear
c) an intercept was omitted from the specification
d) the explanatory variable ranged too widely

18. Maximizing R-square creates


a) a better fit than minimizing the sum of squared errors
b) an equivalent fit to minimizing the sum of squared errors
c) a worse fit than minimizing the sum of squared errors

19. When there are more explanatory variables the adjustment of R-square to create
adjusted R-square is
a) bigger b) smaller c) unaffected

20. Compared to estimates obtained by minimizing the sum of absolute errors, OLS
estimates are _______ to outliers. The blank is best filled with
a) more sensitive b) equally sensitive c) less sensitive

21. The popularity of OLS is due to the fact that it


a) minimizes the sum of squared errors
b) maximizes R-square
c) creates the best fit to the data
d) none of these

22. R-squared is
a) The minimized sum of squared errors as a fraction of the total sum of squared errors.
b) The sum of squared errors as a fraction of the total variation in the dependent
variable.
c) One minus the answer in a).
d) One minus the answer in b).

23. You have 46 observations on y (average value 15) and on x (average value 8) and
from an OLS regression have estimated the slope of x to be 2.0. Your estimate of the
mean of y conditional on x is
a) 15 b) 16 c) 17 d) none of the above

The following relates to the next two questions. Suppose we have obtained the following
regression results using observations on 87 individuals: yhat = 3 + 5x where the standard
errors of the intercept and slope are 1 and 2, respectively.

24. If an individual increases her x value by 4, what impact do you predict this will have
on her y value? Up by
a) 4 b) 5 c) 20 d) 23

25. What is the variance of this prediction?


a) 4 b) 16 c) 32 d) 64
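
For questions 24 and 25 the intercept drops out: a change of Δx in x changes the prediction by betahat·Δx, whose variance is (Δx)² times the variance of betahat. A sketch:

    # Predicted change and its variance for delta_x = 4.
    betahat, se_beta, dx = 5, 2, 4
    print(betahat * dx)            # predicted change in y
    print(dx ** 2 * se_beta ** 2)  # variance of that prediction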

26. Suppose wage = α + βage + ε and we have 100 observations on wage and age, with
average values 70 and 30, respectively. We have run a regression to estimate the slope of
x as 2.0. Consider now a new individual whose age is 20. For this individual the
predicted wage from this regression is
a) 40 b) 50 c) 60 d) impossible to predict without knowing the intercept estimate

27. After running an OLS regression, the reported R² is


a) never smaller than the “adjusted” R²
b) a number lying between minus one and plus one
c) one minus the sum of squared errors divided by the variation in the independent
variables
d) none of the above

28. You have regressed y on x to obtain yhat = 3 + 4x. If x increases from 7 to 10, what is
your forecast of y?
a) 12 b) 31 c) 40 d) 43

29. Suppose wage = α + βexp + ε and we have 50 observations on wage and exp, with
average values 10 and 8, respectively. We have run a regression to estimate the intercept
as 6.0. Consider now a new individual whose exp is 10. For this individual the predicted
wage from this regression is
a) 6 b) 10 c) 11 d) impossible to predict without knowing the slope estimate
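
Questions 26 and 29 both use the fact that an OLS line passes through the point of sample means, ybar = alphahat + betahat·xbar, which pins down whichever coefficient was not reported. A sketch:

    # Question 26: slope known, intercept recovered from the means.
    ybar, xbar, bhat = 70, 30, 2.0
    ahat = ybar - bhat * xbar
    print(ahat + bhat * 20)        # predicted wage at age 20

    # Question 29: intercept known, slope recovered from the means.
    ybar, xbar, ahat = 10, 8, 6.0
    bhat = (ybar - ahat) / xbar
    print(ahat + bhat * 10)        # predicted wage at exp = 10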

30. If the expected value of the error term is 5, then after running an OLS regression
a) the average of the residuals should be approximately 5
b) the average of the residuals should be exactly zero
c) the average of the residuals should be exactly five
d) nothing can be said about the average of the residuals

31. Suppose we run a regression of y on x and save the residuals as e. If we now regress e
on x the slope estimate should be
a) zero b) one c) minus one d) nothing can be said about this estimate

32. Suppose your data produce the regression result y = 10 + 3x. Consider scaling the
data to express them in a different base year dollar, by multiplying observations by 0.9.
If both y and x are scaled, the new intercept and slope estimates will be
a) 10 and 3 b) 9 and 3 c) 10 and 2.7 d) 9 and 2.7

33. You have used 60 observations to regress y on x, z, p, and q, obtaining slope
estimates 1.5, 2.3, -3.4, and 5.4, respectively. The minimized sum of squared errors is 88
and the R-square is 0.58. The OLS estimate of the variance of the error term is
a) 1.47 b) 1.57 c) 1.60 d) 1.72
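
The arithmetic behind question 33: the usual OLS estimate of the error variance divides the minimized sum of squared errors by n minus the number of estimated coefficients, counting the intercept.

    # Four slopes plus an intercept means k = 5 coefficients.
    sse, n, k = 88, 60, 5
    print(sse / (n - k))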

34. Suppose your data produce the regression result y = 10 + 3x. Consider scaling the
data to express them in a different base year dollar, by multiplying observations by 0.9. If
y is scaled but x is not (because y is measured in dollars and x is measured in physical
units, for example), the new intercept and slope estimates will be
a) 10 and 3 b) 9 and 3 c) 10 and 2.7 d) 9 and 2.7

35. The variance of the error term in a regression is


a) the average of the squared residuals
b) the expected value of the squared error term
c) SSE divided by the sample size
d) none of these

36. The standard error of regression is


a) the square root of the variance of the error term
b) an estimate of the square root of the variance of the error term
c) the square root of the variance of the dependent variable
d) the square root of the variance of the predictions of the dependent variable

37. Asymptotics refers to what happens when


a) the sample size becomes very large
b) the sample size becomes very small
c) the number of explanatory variables becomes very large
d) the number of explanatory variables becomes very small

38. The first step in an econometric study should be to


a) develop the specification
b) collect the data
c) review the literature
d) estimate the unknown parameters

39. Your data produce the regression result y = 8 + 5x. If the x values were scaled by
multiplying them by 0.5 the new intercept and slope estimates will be
a) 4 and 2.5 b) 8 and 2.5 c) 8 and 10 d) 16 and 10
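
The scaling questions (32, 34, and 39) can be confirmed by rescaling data and refitting; a sketch on made-up data, assuming numpy is available (illustrative numbers only):

    import numpy as np

    rng = np.random.default_rng(1)             # illustrative data only
    x = rng.uniform(0, 10, 200)
    y = 10 + 3 * x + rng.normal(0, 1, 200)

    def fit(xv, yv):
        b = np.cov(xv, yv, ddof=1)[0, 1] / np.var(xv, ddof=1)
        return yv.mean() - b * xv.mean(), b

    print(fit(x, y))               # baseline: about (10, 3)
    print(fit(0.9 * x, 0.9 * y))   # both scaled: intercept * 0.9, slope unchanged
    print(fit(x, 0.9 * y))         # y only: both * 0.9
    print(fit(0.5 * x, y))         # x only: intercept unchanged, slope / 0.5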

Week 4: The CLR Model

1. Whenever the dependent variable is a fraction we should use as our functional form
the
a) double log b) semi-log c) logarithmic d) none of these

2. Suppose y = AK^αL^β. Then ceteris paribus


a) α is the change in y per unit change in K
b) α is the percentage change in y per unit change in K
c) α is the percentage change in y per percentage change in K
d) α is none of the above because it is an elasticity

3. Suppose we are estimating the production function y = Ae^(θt)K^αL^β. Then θ is
interpreted as
a) the returns to scale parameter
b) the rate of technical change
c) an elasticity
d) an intercept

4. Suppose you are estimating a Cobb-Douglas production function using first-differenced
data. How would you interpret the intercept from this regression?
a) the percentage increase in output per percentage increase in time
b) the average percentage increase in output each time period
c) the average percentage increase in output each time period above and beyond
output increases due to capital and labour increments
d) there is no substantive interpretation because we are never interested in the
intercept estimate from a regression.

5. Suppose you regress y on x and the square of x.


a) Estimates will be unreliable
b) It doesn’t make sense to use the square of x as a regressor
c) The regression will not run because these two regressors are perfectly correlated
d) There should be no problem with this.

6. The acronym CLR stands for


a) constant linear regression
b) classical linear relationship
c) classical linear regression
d) none of these

7. The first assumption of the CLR model is that


a) the functional form is linear
b) all the relevant explanatory variables are included
c) the expected value of the error term is zero
d) both a) and b) above

8. Consider the two specifications y = α + βx^(-1) + ε and y = Ax^θ + ε.


a) both specifications can be estimated by a linear regression
b) only the first specification can be estimated by a linear regression
c) only the second specification can be estimated by a linear regression
d) neither specification can be estimated by a linear regression

9. Suppose you are using the specification wage = α + βEducation + δMale +
θEducation*Male + ε
In this specification the influence of Education on wage is the same for both males and
females if
a) δ = 0 b) θ = 0 c) δ = θ d) δ + θ = 0

10. The most common functional form for estimating wage equations is
a) Linear
b) Double log
c) semilogarithmic with the dependent variable logged
d) semilogarithmic with the explanatory variables logged

11. As a general rule we should log variables


a) which vary a great deal
b) which don’t change very much
c) for which changes are more meaningful in absolute terms
d) for which changes are more meaningful in percentage terms

12. In the regression specification y = α + βx + δz + ε the parameter α is usually
interpreted as
a) the level of y whenever x and z are zero
b) the increase in y whenever x and z increase by one
c) a meaningless number that enables a linear functional form to provide a good
approximation to an unknown functional form
d) none of the above

13. To estimate a logistic functional form we transform the dependent variable to


a) its logarithm b) the odds ratio c) the log odds ratio d) none of these

14. The logistic functional form


a) forces the dependent variable to lie between zero and one
b) is attractive whenever the dependent variable is a probability
c) never allows the dependent variable to be equal to zero or one
d) all of the above

15. Whenever the dependent variable is a fraction, using a linear functional form is OK if
a) most of the dependent variable values are close to one
b) most of the dependent variable values are close to zero
c) most of the dependent variable values are close to either zero or one
d) none of the dependent variable values are close to either zero or one
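
Questions 13 through 15 turn on the logistic form p = 1/(1 + exp(−(α + βx))), which keeps p strictly between zero and one; its inverse is the log odds ratio. A sketch:

    import math

    def logistic(z):
        # Always strictly between zero and one.
        return 1 / (1 + math.exp(-z))

    def log_odds(p):
        # The transform applied to the dependent variable for estimation.
        return math.log(p / (1 - p))

    p = logistic(2.0)
    print(p, log_odds(p))          # log_odds undoes logistic: returns 2.0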

16. Violation of the CLR assumption that the expected value of the error is zero is a
problem only if this expected value is
a) negative
b) constant
c) correlated with an explanatory variable
d) uncorrelated with all explanatory variables

17. Nonspherical errors refers to


a) heteroskedasticity
b) autocorrelated errors
c) both a) and b)
d) expected value of the error not equal to zero

18. Heteroskedasticity is about


a) errors having different variances across observations
b) explanatory variables having different variances across observations
c) different explanatory variables having different variances
d) none of these

19. Autocorrelated errors is about


a) the error associated with one observation not being independent of the error
associated with another observation
b) an explanatory variable observation not being independent of another observation’s
value of that same explanatory variable
c) an explanatory variable observation not being independent of observations on other
explanatory variables
d) the error is correlated with an explanatory variable

20. Suppose your specification is that y = α + βx + ε where β is positive. If x and ε are
positively correlated then OLS estimation will
a) probably produce an overestimation of β
b) probably produce an underestimation of β
c) be equally likely to overestimate or underestimate β

21. Correlation between the error term and an explanatory variable can arise because
a) of error in measuring the dependent variable
b) of a constant non-zero expected error
c) the equation we are estimating is part of a system of simultaneous equations
d) of multicollinearity

22. Multicollinearity occurs when


a) the dependent variable is highly correlated with all of the explanatory variables
b) an explanatory variable is highly correlated with another explanatory variable
c) the error term is highly correlated with an explanatory variable
d) the error term is highly correlated with the dependent variable

23. In the specification wage = βEducation + δMale + θFemale + ε


a) there is perfect multicollinearity
b) the computer will refuse to run this regression
c) both a) and b) above
d) none of the above

24. In the CNLR model


a) the errors are distributed normally
b) the explanatory variables are distributed normally
c) the dependent variable is distributed normally

25. Suppose you are using the specification wage = α + βEducation + δMale +
θExperience + ε. In your data the variables Education and Experience happen to be
highly correlated because the observations with a lot of education happen not to have
much experience. As a consequence of this negative correlation the OLS estimates
a) are likely to be better because the movement of one explanatory variable offsets the
other, allowing the computer more easily to isolate the impact of each on the
dependent variable
b) are likely to be better because the negative correlation reduces variance making
estimates more reliable
c) are likely to be worse because the computer can’t tell which variable is causing
changes in the dependent variable
d) are likely to be worse because compared to positive correlation the negative
correlation increases variance, making estimates less reliable

Week 5: Sampling Distributions

1. A statistic is said to be a random variable because


a) its value is determined in part by random events
b) its variance is not zero
c) its value depends on random errors
d) all of the above

2. A statistic’s sampling distribution can be pictured by drawing a


a) histogram of the sample data
b) normal distribution matching the mean and variance of the sample data
c) histogram of this statistic calculated from the sample data
d) none of the above

3. An example of a statistic is
a) a parameter estimate but not a t value or a forecast
b) a parameter estimate or a t value, but not a forecast
c) a parameter estimate, a t value, or a forecast
d) a t value but not a parameter estimate or a forecast

4. The value of a statistic calculated from our sample can be viewed as


a) the mean of that statistic’s sampling distribution
b) the median of that statistic’s sampling distribution
c) the mode of that statistic’s sampling distribution
d) none of the above

5. Suppose we know that the CLR model applies to y = βx + ε, and that we estimate
using β* = Σy/Σx = β + Σε/Σx. This appears to be a good estimator because the
second term is
a) zero because Eε = 0
b) small because Σx is large
c) small because Σε is small
d) likely to be small because Σε is likely to be small

6. A drawback of asymptotic algebra is that


a) it is more difficult than regular algebra
b) it only applies to very small sample sizes
c) we have to assume that its results apply to small sample sizes
d) we have to assume that its results apply to large sample sizes

7. A Monte Carlo study is


a) used to learn the properties of sampling distributions
b) undertaken by getting a computer to create data sets consistent with the econometric
specification
c) used to see how a statistic’s value is affected by different random drawings of the
error term
d) all of the above

8. Knowing what a statistic’s sampling distribution looks like is important because


a) we can deduce the true value of an unknown parameter
b) we can eliminate errors when testing hypotheses
c) our sample value of this statistic is a random drawing out of this distribution
d) none of the above

9. We should choose our parameter estimator based on


a) how easy it is to calculate
b) the attractiveness of its sampling distribution
c) whether it calculates a parameter estimate that is close to the true parameter value
d) none of the above

10. We should choose our test statistic based on


a) how easy it is to calculate
b) how closely its sampling distribution matches a distribution described in a statistical
table
c) how seldom it makes mistakes when testing hypotheses
d) how small is the variance of its sampling distribution

11. An unbiased estimator is an estimator whose sampling distribution has


a) mean equal to the true parameter value being estimated
b) mean equal to the actual value of the parameter estimate
c) a zero variance
d) none of the above

12. Suppose we estimate an unknown parameter with the value 6.5, ignoring the data.
This estimator
a) has minimum variance
b) has zero variance
c) is biased
d) all of the above

13. MSE stands for


a) minimum squared error
b) minimum sum of squared errors
c) mean squared error
d) none of the above

14. A minimum variance unbiased estimator


a) is the same as the MSE estimator
b) has the smallest variance of all estimators
c) has a very narrow sampling distribution
d) none of the above

15. In the CLR model the OLS estimator is popular because


a) it minimizes the sum of squared errors
b) it maximizes R-squared
c) it is the best unbiased estimator
d) none of the above

16. Betahat is the minimum MSE estimator if it minimizes


a) the sum of bias and variance
b) the sum of bias squared and variance squared
c) the expected value of the square of the difference between betahat and the mean of
betahat
d) the expected value of the square of the difference between betahat and the true
parameter value

17. A minimum MSE estimator


a) trades off bias and variance
b) is used whenever it is not possible to find an unbiased estimator with a small variance
c) is identical to the minimum variance estimator whenever we are considering only
unbiased estimators
d) all of the above

18. Econometric theorists are trained to


a) find estimators with good sampling distribution properties
b) find test statistics with known sampling distributions when the null hypothesis is true
c) use asymptotic algebra
d) all of the above

19. The OLS estimator is not used for all estimating situations because
a) it is sometimes difficult to calculate
b) it doesn’t always maximize R-squared
c) it doesn’t always have a good-looking sampling distribution
d) sometimes other estimators have better looking sampling distributions

20. The traditional hypothesis testing methodology is based on whether


a) the data support the null hypothesis more than the alternative hypothesis
b) it is more likely that the test statistic value came from its null-is-true sampling
distribution or its null-is-false sampling distribution
c) the test statistic value is in the tail of its null-is-true sampling distribution
d) the test statistic value is in the tail of its null-is-false sampling distribution

21. To create a random variable that is normally distributed with mean 6 and variance 9
we should have the computer draw a value from a standard normal and then we
should
a) add 6 to it and multiply the result by 3
b) add 6 to it and multiply the result by 9
c) multiply it by 3 and add 6 to the result
d) multiply it by 9 and add 6 to the result

22. Suppose we have performed a Monte Carlo study to evaluate the sampling
distribution properties of an estimator betahat in a context in which we have chosen
the true parameter value beta to be 1.0. We have calculated 2000 values of betahat
and found their average to be 1.3, and their sample standard error to be 0.5. The
estimated MSE of betahat is
a) 0.34 b) 0.59 c) 0.8 d) none of these
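
Question 22 uses the decomposition MSE = bias² + variance, estimated from the Monte Carlo output:

    # Bias is the average betahat minus the true parameter value.
    bias, sd = 1.3 - 1.0, 0.5
    print(bias ** 2 + sd ** 2)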

23. Suppose we have performed a Monte Carlo study to evaluate the sampling
distribution properties of a test statistic that is supposed to be distributed as a t
statistic with 17 degrees of freedom if the null hypothesis is true. Forcing the null
hypothesis to be true we have calculated 3000 values of this statistic. Approximately
___ of these values should be greater than 1.333 and when ordered from smallest to
largest the 2850th value should be approximately ____. These blanks are best filled
with
a) 300, 1.74 b) 300, 2.11 c) 600, 1.74 d) 600, 2.11

For the next two questions, suppose you have programmed a computer as follows:
i. Draw 50 x values from a distribution uniform between 10 and 20.
ii. Count the number g of x values greater than 18.
iii. Divide g by 50 to get h1.
iv. Repeat this procedure to get 1000 h values h1 to h1000.
v. Calculate the average hav and the variance hvar of the h values.

24. Hav should be approximately


a) 0.1 b) 0.2 c) 2 d) 20

25. Hvar should be approximately


a) 0.0002 b) 0.003 c) 8 d) 160
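
The procedure in steps i through v can be run directly; h is a sample proportion, so its mean and variance follow from the binomial. A sketch assuming numpy is available:

    import numpy as np

    # P(x > 18) = 0.2 for a uniform(10, 20), so E[h] = 0.2 and
    # Var(h) = 0.2 * 0.8 / 50.
    rng = np.random.default_rng(42)
    h = np.empty(1000)
    for i in range(1000):
        x = rng.uniform(10, 20, 50)            # step i
        g = np.sum(x > 18)                     # step ii
        h[i] = g / 50                          # step iii
    print(h.mean(), h.var(ddof=1))             # steps iv-v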

26. Suppose the CNLR model applies and you have used OLS to estimate a slope as 2.4.
If the true value of this slope is 3.0, then the OLS estimator
a) has bias of 0.6
b) has bias of –0.6
c) is unbiased
d) we cannot say anything about bias here

For the next two questions, suppose you have programmed a computer as follows:
i. Draw randomly 25 values from a standard normal distribution.
ii. Multiply each of these values by 8 and add 5.
iii. Take their average and call it A1.
iv. Repeat this procedure to obtain 400 averages A1 through A400.
v. Compute the average of these 400 A values. Call it Abar.
vi. Compute the standard error of these 400 A values. Call it Asterr.

27. Abar should be approximately


a) 0.2 b) 5 c) 13 d) 125

28. Asterr should be approximately


a) 0.02 b) 0.4 c) 1.6 d) 8
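
The same translation works for this procedure: each draw 8z + 5 is N(5, 64), so the average of 25 of them has mean 5 and standard error 8 divided by the square root of 25. A sketch assuming numpy:

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([(8 * rng.standard_normal(25) + 5).mean()   # steps i-iii
                  for _ in range(400)])                      # step iv
    print(A.mean(), A.std(ddof=1))                           # steps v-vi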

29. Four econometricians have proposed four different estimates for an unknown slope.
The estimators that have produced these estimates have bias 1, 2, 3, and 4,
respectively, and variances 18, 14, 10, and 6, respectively. From what you have learned
in this course, which of these four should be preferred?
a) first b) second c) third d) fourth

30. Suppose the CNLR model applies and you have used OLS to estimate beta as 1.3 and
the variance of this estimate as 0.25. The sampling distribution of the OLS estimator
a) has mean 1.3 and variance 0.25.
b) has a normal distribution shape
c) has a smaller variance than any other estimator
d) has bias equal to the difference between 1.3 and the true value of beta

For the next three questions, suppose you have programmed a computer as follows:
i. Draw 12 x values from a distribution uniform between 5 and 15.
ii. Draw randomly 12 e values from a standard normal distribution.
iii. Create 12 y values as y = 3*x + 2*e.
iv. Calculate bhat1 as the sum of the y values divided by the sum of the x
values.
v. Calculate bstar1 as the sum of the xy values divided by the sum of the x
squared values.
vi. Repeat this procedure from ii above to obtain 4000 bhat values bhat1
through bhat4000 and 4000 bstar values bstar1 through bstar4000.
vii. Compute the averages of these 4000 values. Call them bhatbar and
bstarbar.
viii. Compute the variances of these 4000 values. Call them bhatv and bstarv.

31. In these results


a) neither bhatbar nor bstarbar should be close to three
b) bhatbar and bstarbar should both be very close to three
c) bhatbar should be noticeably closer to three than bstarbar
d) bstarbar should be noticeably closer to three than bhatbar

32. In these results


a) bhatv and bstarv should both be approximately equally close to zero
b) bhatv should be noticeably closer to zero than bstarv
c) bstarv should be noticeably closer to zero than bhatv
d) nothing can be said about the relative magnitudes of bhatv and bstarv

33. In the previous question suppose you had subtracted three from each of the bhat
values to get new numbers called q1 through q4000 and then ordered these numbers
from smallest to largest. The 3600th of these q values should be
a) approximately equal to 1.29
b) approximately equal to 1.36
c) approximately equal to 1.80
d) not very close to any of these values
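
Running the procedure settles questions 31 through 33: both estimators are unbiased for 3, but bstar (OLS through the origin) has the smaller variance, and the 3600th ordered q value is the 90th percentile of bhat − 3. A sketch assuming numpy:

    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.uniform(5, 15, 12)                     # step i (drawn once)
    bhat = np.empty(4000)
    bstar = np.empty(4000)
    for i in range(4000):
        e = rng.standard_normal(12)                # step ii
        y = 3 * x + 2 * e                          # step iii
        bhat[i] = y.sum() / x.sum()                # step iv
        bstar[i] = (x * y).sum() / (x * x).sum()   # step v
    print(bhat.mean(), bstar.mean())               # both near 3
    print(bhat.var(ddof=1), bstar.var(ddof=1))     # bstar's is smaller
    print(np.sort(bhat - 3)[3599])                 # small; nowhere near 1.29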

34. Suppose you have programmed a computer to do the following.


i. Draw 20 x values from a distribution uniform between 2 and 8.
ii. Draw 20 z values from a normal distribution with mean 12 and variance 2.
iii. Draw 20 e values from a standard normal distribution.
iv. Create 20 y values using the formula y = 2 + 3x + 4z + 5e.
v. Regress y on x and z, obtaining the estimate bz of the coefficient of z and the
estimate sebz of its standard error.
vi. Subtract 4 from bz, divide this by sebz and call it w1.
vii. Repeat the process described above from step iii until 5,000 w values have
been created, w1 through w5000.
viii. Order the five thousand w values from smallest to largest.
The 4750th of these values should be approximately
a) 1.65 b) 1.74 c) 1.96 d) 2.11
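
When the null is true, w is a t statistic with 20 − 3 = 17 degrees of freedom, so the 4750th of 5000 ordered values sits near that distribution's 95th percentile. A sketch assuming numpy:

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.uniform(2, 8, 20)                      # step i
    z = rng.normal(12, np.sqrt(2), 20)             # step ii (variance 2)
    X = np.column_stack([np.ones(20), x, z])
    XtX_inv = np.linalg.inv(X.T @ X)               # fixed across repetitions
    w = np.empty(5000)
    for i in range(5000):
        e = rng.standard_normal(20)                # step iii
        y = 2 + 3 * x + 4 * z + 5 * e              # step iv
        coef, ssr = np.linalg.lstsq(X, y, rcond=None)[:2]
        s2 = ssr[0] / (20 - 3)                     # step v
        se_bz = np.sqrt(s2 * XtX_inv[2, 2])
        w[i] = (coef[2] - 4) / se_bz               # step vi
    print(np.sort(w)[4749])                        # step viii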

35. Suppose you have a random sample of 100 observations on a variable x which is
distributed normally with mean 14 and variance 8. The sample average, xbar, is 15, and
the sample variance is 7. Then the mean of the sampling distribution of xbar is
a) 15 and its variance is 7
b) 15 and its variance is 0.07
c) 14 and its variance is 8
d) 14 and its variance is 0.08

Week 6: Dummy Variables

1. The dummy variable trap occurs when


a) a dummy is not defined as zero or one
b) there is more than one type of category using dummies
c) the intercept is omitted
d) none of the above

The next 13 questions are based on the following information. Suppose we specify that y
= α + βx + δ1Male + δ2Female + θ1Left + θ2Center + θ3Right + ε where Left, Center,
and Right refer to the three possible political orientations. A variable Fringe is created as
the sum of Left and Right, and a variable x*Male is created as the product of x and Male.

2. Which of the following creates a dummy variable trap? Regress y on an intercept, x,


a) Male and Left
b) Male, Left, and Center
c) Left, Center, and Right
d) None of these

3. Which of the following creates a dummy variable trap? Regress y on an intercept, x,


a) Male and Fringe
b) Male, Center, and Fringe.
c) Both of the above
d) None of the above

4. The variable Fringe is interpreted as


a) being on the Left or on the Right
b) being on both the Left and the Right
c) being twice the value of being on the Left or being on the Right
d) none of these

5. Using Fringe instead of Left and Right separately in this specification is done to force
the slopes of Left and Right to be
a) the same
b) half the slope of Center
c) twice the slope of Center
d) the same as the slope of Center

6. If we regress y on an intercept, x, Male, Left, and Center, the slope coefficient on
Male is interpreted as the intercept difference between males and females
a) regardless of political orientation
b) assuming a Right political orientation
c) assuming a Left or Center political orientation
d) none of the above

7. If we regress y on an intercept, x, Male, and x*Male the slope coefficient on x*Male
is interpreted as
a) the difference between the male and female intercept
b) the male slope coefficient estimate
c) the difference between the male and female slope coefficient estimates
d) none of these

8. Suppose we regress y on an intercept, x, and Male, and then do another regression,
regressing y on an intercept, x, and Female. The slope estimates on Male and on
Female should be
a) equal to one another
b) equal but opposite in sign
c) bear no necessary relationship to one another
d) none of these

9. Suppose we regress y on an intercept, x, Male, Left and Center and then do another
regression, regressing y on an intercept, x, Center, and Right. The interpretation of
the slope estimate on Center should be
a) the intercept for those from the political center in both regressions
b) the difference between the Center and Right intercepts in the first regression, and the
difference between the Center and Left intercepts in the second regression
c) the difference between the Center and Left intercepts in the first regression, and the
difference between the Center and Right intercepts in the second regression
d) none of these

10. Suppose we regress y on an intercept, x, Male, Left and Center and then do another
regression, regressing y on an intercept, x, Center, and Right. The slope estimate
on Center in the second regression should be
a) the same as the slope estimate on Center in the first regression
b) equal to the difference between the original Center coefficient and the Left coefficient
c) equal to the difference between the original Center coefficient and the Right
coefficient
d) unrelated to the first regression results

11. Suppose we regress y on an intercept, Male, Left, and Center. The base category is
a) a male on the left
b) a female on the left
c) a male on the right
d) a female on the right

12. Suppose we regress y on an intercept, Male, Left, and Center. The intercept is
interpreted as the intercept of a
a) male
b) male on the right
c) female
d) female on the right

13. Researcher A has used the specification:
y = α + βx + γ_ML MaleLeft + γ_MC MaleCenter + γ_MR MaleRight + γ_FL FemaleLeft +
γ_FC FemaleCenter + ε
Here MaleLeft is a dummy representing a male on the left; other variables are defined in
similar fashion.
Researcher B has used the specification:
y = α_B + β_B x + λMale + δLeft + κCenter + θ_ML Male*Left + θ_MC Male*Center + ε
Here Male*Left is a variable calculated as the product of Male and Left; other variables
are defined in similar fashion. These specifications are fundamentally
a) different
b) the same so that the estimate of γ_ML should be equal to the estimate of θ_ML
c) the same so that the estimate of γ_ML should be equal to the sum of the estimates of λ,
δ, and θ_ML
d) the same so that the sum of the estimates of γ_ML, γ_MC, and γ_MR should be equal to
the estimate of λ.

14. In the preceding question, the base categories for specifications A and B are,
respectively,
a) male on the right and female on the right
b) male on the right and female on the left
c) female on the right and female on the right
d) female on the right and male on the right

15. Analysis of variance is designed to


a) estimate the influence of different categories on a dependent variable
b) test whether a particular category has a nonzero influence on a dependent variable
c) test whether the intercepts for all categories in an OLS regression are the same
d) none of these

16. Suppose you have estimated wage = 5 + 3education + 2gender, where gender is one
for male and zero for female. If gender had been one for female and zero for male, this
result would have been
a) Unchanged
b) wage = 5 + 3education - 2gender
c) wage = 7 + 3education + 2gender
d) wage = 7 + 3education - 2gender

17. Suppose we have estimated y = 10 + 1.5x + 4D where y is earnings, x is experience
and D is zero for females and one for males. If we had coded the dummy as minus
one for females and one for males the results (10, 1.5, 4) would have been
a) 14, 1.5, -4 b) 18, 1.5, -4 c) 12, 1.5, 2 d) 12, 1.5, -2

18. Suppose we have estimated y = 10 + 2x + 3D where y is earnings, x is experience and
D is zero for females and one for males. If we had coded the dummy as one for females
and two for males, the results (10, 2, 3) would have been
a) 10, 2, 3 b) 10, 2, 1.5 c) 7, 2, 3 d) 7, 2, 1.5
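
Recoding a dummy is a linear change of variable, so the fitted coefficients adjust deterministically; the logic of questions 16 through 18 can be verified by refitting made-up data under different codings. A sketch assuming numpy (illustrative data only):

    import numpy as np

    rng = np.random.default_rng(5)                 # illustrative data only
    D = rng.integers(0, 2, 200)                    # 1 = male, 0 = female
    x = rng.uniform(0, 10, 200)
    y = 10 + 1.5 * x + 4 * D + rng.normal(0, 1, 200)

    def fit(d):
        X = np.column_stack([np.ones_like(x), x, d])
        return np.linalg.lstsq(X, y, rcond=None)[0]

    print(fit(D))            # about (10, 1.5, 4)
    print(fit(1 - D))        # coding reversed: (14, 1.5, -4)
    print(fit(2 * D - 1))    # -1/+1 coding:    (12, 1.5, 2)
    print(fit(D + 1))        # 1/2 coding:      (6, 1.5, 4)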

The following relates to the next three questions. In a study investigating the effect of a
new computer instructional technology for economics principles, a researcher taught a
control class in the normal way and an experimental class using the new technology. She
regressed student final exam numerical grade (out of 100) on GPA, Male, Age, Tech (a
dummy equaling unity for the experimental class), and interaction variables Tech*GPA,
Tech*Male, and Tech*Age. Age and Tech*GPA had coefficients jointly insignificantly
different from zero, so she dropped them and ended up with
grade = 45 + 9*GPA + 5*Male + 10*Tech - 6*Tech*Male - 0.2*Tech*Age
with all coefficients significant. She concludes that a) age makes no difference in the
control group, but older students do not seem to benefit as much from the computer
technology, and that b) the effect of GPA is the same regardless of what group a student
is in.

19. These empirical results suggest that


a) both conclusions are warranted
b) neither conclusion is warranted
c) only the first conclusion is warranted
d) only the second conclusion is warranted

20. These point estimates suggest that in the control class


a) males and females perform equally
b) females outperform males
c) males outperform females
d) we can only assess relative performance in the new technology group

21. These point estimates measure the impact of the new technology on male and female
scores, respectively, to be
a) 5 and zero b) 4 and 10 c) –1 and 10 d) 9 and 10

22. The MLE is popular because it


a) maximizes Rsquare
b) minimizes the sum of squared errors
c) has desirable sampling distribution properties
d) maximizes both the likelihood and loglikelihood functions

23. To find the MLE we maximize the


a) likelihood
b) log likelihood
c) probability of having obtained our sample
d) all of these

24. In a logit regression, to report the influence of an explanatory variable x on the
probability of observing a one for the dependent variable we report
a) the slope coefficient estimate for x
b) the average of the slope coefficient estimates for x of all the observations in the
sample
c) the slope coefficient estimate for x for the average observation in the sample
d) none of these

25. The logit functional form


a) is linear in the logarithms of the variables
b) has either zero or one on the left-hand side
c) forces the left-hand variable to lie between zero and one
d) none of these

26. The logit model is employed when


a) all the regressors are dummy variables
b) the dependent variable is a dummy variable
c) we need a flexible functional form
d) none of these

27. In the logit model the predicted value of the dependent variable is interpreted as
a) the probability that the dependent variable is one
b) the probability that the dependent variable is zero
c) the fraction of the observations in the sample that are ones
d) the fraction of the observations in the sample that are zeroes.

28. To find the maximum likelihood estimates the computer searches over all possible
values of the
a) dependent variable
b) independent variables
c) coefficients
d) all of the above

29. The MLE is popular because


a) it maximizes R-square and so creates the best fit to the data
b) it is unbiased
c) it is easily calculated with the help of a computer
d) none of these

30. In large samples the MLE is


a) unbiased b) efficient c) normally distributed d) all of these

31. To predict the value of a dependent dummy variable for a new observation we should
predict it as a one if
a) the estimated probability of this observation’s dependent variable being a one is
greater than fifty percent
b) more than half of the observations are ones
c) the expected payoff of doing so is greater than the expected payoff of predicting it as
a zero
d) none of these

32. Which of the following is the best way to measure the prediction success of a logit
specification?
a) the percentage of correct predictions across all the data
b) the average of the percent correct predictions in each category
c) a weighted average of the percent correct predictions in each category, where the
weights are the fractions of the observations in each category
d) the sum across all the observations of the net benefits from each observation’s
prediction

33. A negative coefficient on an explanatory variable x in a logit specification means that
an increase in x will, ceteris paribus,
a) increase the probability that an observation’s dependent variable is a one
b) decrease the probability that an observation’s dependent variable is a one
c) the direction of change of the probability that an observation is a one cannot be
determined unequivocally from the sign of this slope coefficient

34. You have estimated a logit model and found for a new individual that the estimated
probability of her being a one (as opposed to a zero) is 40%. The benefit of correctly
classifying this person is $1,000, regardless of whether she is a one or a zero. The
cost of classifying this person as a one when she is actually a zero is $500. You
should classify this person as a one when the other misclassification cost exceeds
what value?
a) $750 b) $1,000 c) $1250 d) $1500

35. You have estimated a logit model and found for a new individual that the estimated
probability of her being a one (as opposed to a zero) is 40%. The benefit of correctly
classifying this person is $2,000, regardless of whether she is a one or a zero. The
cost of classifying this person as a zero when she is actually a one is $1600. You
should be indifferent to classifying this person as a one or a zero when the other
misclassification cost equals what value?
a) $100 b) $200 c) $300 d) $400
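
The logic behind questions 31, 34, and 35 is an expected-payoff comparison. A
minimal Python sketch of that rule (the argument names are mine, not course
notation):

    # p: estimated probability the observation is a one
    # B: benefit of a correct classification (either way)
    # c10: cost of classifying as a one when she is actually a zero
    # c01: cost of classifying as a zero when she is actually a one
    def payoffs(p, B, c10, c01):
        predict_one = p * B - (1 - p) * c10
        predict_zero = (1 - p) * B - p * c01
        return predict_one, predict_zero

    print(payoffs(0.4, 1000, 500, 1250))  # question 34: equal payoffs at c01 = 1250
    print(payoffs(0.4, 2000, 400, 1600))  # question 35: equal payoffs at c10 = 400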

36. You have estimated a logit model to determine the probability that an individual is
earning more than ten dollars an hour, with observations earning more than ten
dollars an hour coded as ones; your estimated logit index function is
-22 + 2*Ed - 6*Female + 4*Exp
where Ed is years of education, Female is a dummy with value one for females, and Exp
is years of experience. You have been asked to classify a new observation with 10 years
of education and 2 years of experience. You should classify her as
a) a one b) a zero c) too close to call
d) not enough information to make a classification
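
To see why this one is delicately balanced, evaluate the index and push it
through the logistic function (a quick check, assuming the coefficients above):

    import math

    index = -22 + 2 * 10 - 6 * 1 + 4 * 2      # Ed = 10, Female = 1, Exp = 2
    print(index, 1 / (1 + math.exp(-index)))  # index 0, probability exactly 0.5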

37. In the preceding question, suppose you believe that the influence of experience
depends on gender. To incorporate this into your logit estimation procedure you
should
a) add an interaction variable defined as the product of Ed and Female
b) estimate using only the female observations and again using only the male
observations
c) add a new explanatory variable coded as zero for the male observations and whatever
is the value of the experience variable for the female observations
d) none of the above

38. From estimating a logit model you have produced a slope estimate of 0.3 on the
explanatory variable x. This means that a unit increase in x will cause
a) an increase in the probability of being a y=1 observation of 0.3
b) an increase in the probability of being a y=0 observation of 0.3
c) an increase in the ratio of these two probabilities of 0.3
d) none of the above

39. You have obtained the following regression results using data on law students from
the class of 1980 at your university:
Income = 11 + .24GPA - .15Female + .14Married - .02Married*Female
where the variables are self-explanatory. Consider married individuals with equal GPAs.
Your results suggest that compared to female income, male income is higher by
a) 0.01 b) 0.02 c) 0.15 d) 0.17

Suppose you have run the following regression:
y = α + βx + γUrban + θImmigrant + δUrban*Immigrant + ε
where Urban is a dummy indicating that an individual lives in a city rather than in a rural
area, and Immigrant is a dummy indicating that an individual is an immigrant rather
than a native. The following three questions refer to this information.

40. The coefficient γ is interpreted as the ceteris paribus difference in y between


a) an urban person and a rural person
b) an urban native and a rural native
c) an urban immigrant and a rural immigrant
d) none of these

41. The coefficient θ is interpreted as the ceteris paribus difference in y between


a) an immigrant and a native
b) a rural immigrant and a rural native
c) an urban immigrant and an urban native
d) none of these

42. The coefficient δ is interpreted as the ceteris paribus difference in y between an urban
immigrant and
a) a rural native
b) a rural immigrant
c) an urban native
d) none of these

43. You have estimated a logit model to determine the success of an advertising program
in a town, with successes coded as ones; your estimated logit index function is -70 +
2*PerCap + 3*South where PerCap is the per capita income in the town (measured in
thousands of dollars), and South is a dummy with value one for towns in the south
and zero for towns in the north, the only other region. If the advertising program is a
success, you will make $5000; if it is a failure you will lose $3000. You are
considering two towns, one in the south and one in the north, both with per capita
incomes of $35,000. You should undertake the advertising program
a) in both towns
b) in neither town
c) in only the south town
d) in only the north town
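
Here the calculation chains the logit probability into an expected-profit
comparison; a short sketch under the numbers given in the question:

    import math

    for south in (1, 0):
        index = -70 + 2 * 35 + 3 * south      # PerCap = 35 (thousands)
        p = 1 / (1 + math.exp(-index))        # estimated probability of success
        profit = p * 5000 - (1 - p) * 3000    # expected payoff of advertising
        print("south" if south else "north", round(p, 3), round(profit))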

Week 7: Hypothesis Testing

1. The square root of an F statistic is distributed as a t statistic. This statement is


a) true b) true only under special conditions c) false

2. To conduct a t test we need to


a) divide a parameter estimate by its standard error
b) estimate something that is supposed to be zero and see if it is zero
c) estimate something that is supposed to be zero and divide it by its standard error

3. If a null hypothesis is true, when we impose the restrictions of this null the minimized
sum of squared errors
a) becomes smaller b) does not change c) becomes bigger
d) changes in an indeterminate fashion

4. If a null hypothesis is false, when we impose the restrictions of this null the
minimized sum of squared errors
a) becomes smaller b) does not change c) becomes bigger
d) changes in an indeterminate fashion

5. Suppose you have 25 years of quarterly data and specify that demand for your
product is a linear function of price, income, and quarter of the year, where quarter of
the year affects only the intercept. You wish to test the null that ceteris paribus
demand is the same in spring, summer, and fall, against the alternative that demand is
different in all quarters. The degrees of freedom for your F test are
a) 2 and 19 b) 2 and 94 c) 3 and 19 d) 3 and 94

6. In the preceding question, suppose you wish to test the hypothesis that the entire
relationship (i.e., that the two slopes and the intercept) is the same for all quarters,
versus the alternative that the relationship is completely different in all quarters. The
degrees of freedom for your F test are
a) 3 and 94 b) 6 and 88 c) 9 and 82 d) none of these

7. In the preceding question, suppose you are certain that the intercepts are different
across the quarters, and wish to test the hypothesis that both slopes are unchanged
across the quarters, against the alternative that the slopes are different in each quarter.
The degrees of freedom for your F test are
a) 3 and 94 b) 6 and 88 c) 9 and 82 d) none of these
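
The degrees-of-freedom counting in questions 5 through 7 is just restrictions
and parameters; a sketch of the arithmetic (the parameter counts reflect my
reading of each setup):

    n = 100  # 25 years of quarterly data

    def f_df(restrictions, k_unrestricted):
        return restrictions, n - k_unrestricted

    # Q5: intercept, price, income, 3 quarter dummies = 6 parameters;
    #     equal spring/summer/fall intercepts = 2 restrictions
    print(f_df(2, 6))
    # Q6: a completely separate relationship per quarter = 12 parameters;
    #     one common relationship = 9 restrictions
    print(f_df(9, 12))
    # Q7: 4 free intercepts, slopes equal across quarters = 6 restrictions
    print(f_df(6, 12))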

8. As the sample size becomes very large, the t distribution


a) collapses to a spike because its variance becomes very small
b) collapses to a normally-distributed spike
c) approximates more and more closely a normal distribution with mean one
d) approximates more and more closely a standard normal distribution

9. Suppose we are using 35 observations to regress wage on an intercept, education,
experience, gender, and dummies for black and hispanic (the base being white). In
addition we are allowing the slope on education to be different for the three race
categories. When using a t test to test for discrimination against females, the degrees
of freedom is
a) 26 b) 27 c) 28 d) 29

10. After running a regression, to find the covariance between the first and second slope
coefficient estimates we
a) calculate the square root of the product of their variances
b) look at the first off-diagonal element of the correlation matrix
c) look at the first diagonal element of the variance-covariance matrix
d) none of these

11. Suppose you have used Eviews to regress output on capital, labor, and a time trend by
clicking on these variables in the order above, or, equivalently, using the command ls
y cap lab time c. To test for constant returns to scale using the Wald – Coefficient
Restrictions button you need to provide the software with the following information
a) cap+lab =1
b) c(1)+c(2)=1
c) c(2)+c(3) = 1
d) none of these

12. When testing a joint null, an F test is used instead of several separate t tests because
a) the t tests may not agree with each other
b) the F test is easier to calculate
c) the collective results of the t tests could mislead
d) the t tests are impossible to calculate in this case

13. The rationale behind the F test is that if the null hypothesis is true, by imposing the
null hypothesis restrictions on the OLS estimation the per restriction sum of squared
errors
a) falls by a significant amount
b) rises by a significant amount
c) falls by an insignificant amount
d) rises by an insignificant amount

14. Suppose we are regressing wage on an intercept, education, experience, gender, and
dummies for black and hispanic (the base being white). To find the restricted SSE to
calculate an F test to test the null hypothesis that the black and hispanic coefficients
are equal we should regress wage on an intercept, education, experience, gender, and
a new variable constructed as the
a) sum of the black and hispanic dummies
b) difference between the black and hispanic dummies
c) product of the black and hispanic dummies
d) none of these

15. In the preceding question, if the null hypothesis is true then, compared to the
unrestricted SSE, the restricted SSE should be
a) smaller b) the same c) larger d) unpredictable

16. In question 14, if we regress wage on an intercept, education, experience, gender, and
a dummy for white, compared to the restricted SSE in that question, the resulting sum
of squared errors should be
a) smaller b) the same c) larger d) unpredictable

17. Suppose you have specified the demand for beer (measured in liters) as
LnBeer = β0 + β1lnBeerprice + β2lnOthergoodsprice + β3lnIncome + ε
where the notation should be obvious. Economists will tell you that in theory this
relationship should be homogeneous of degree zero, meaning that if income and prices all
increase by the same percent, demand should not change. Testing homogeneity of degree
zero means testing the null that
a) β1 = β2 = β3 = 0 b) β1 + β2 + β3 = 0 c) β1 + β2 + β3 = 1 d) none of these

Suppose you have run a logit regression in which defaulting on a credit card payment is
related to people’s income, gender, education, and age, with the coefficients on income
and age, but not education, allowed to be different for males versus females. The next 4
questions relate to this information.

18. The degrees of freedom for the LR test of the null hypothesis that gender does not
matter is
a) 1 b) 2 c) 3 d) 4

19. To calculate the LR test statistic for this null we need to compute twice the difference
between the
a) restricted and unrestricted maximized likelihoods
b) restricted and unrestricted maximized loglikelihoods
c) unrestricted and restricted maximized likelihoods
d) unrestricted and restricted maximized loglikelihoods

20. Suppose the null that the slopes on income and age are the same for males and
females is true. Then compared to the unrestricted maximized likelihood, the
restricted maximized likelihood should be
a) smaller b) the same c) bigger d) unpredictable

21. The coefficient on income can be interpreted as ceteris paribus the change in the
______ resulting from a unit increase in income.
a) probability of defaulting
b) odds ratio of defaulting versus not defaulting
c) log odds ratio of defaulting versus not defaulting
d) none of these

Week 9: Specification

1. Specification refers to choice of


a) test statistic
b) estimating procedure
c) functional form and explanatory variables
d) none of these

2. Omitting a relevant explanatory variable when running a regression


a) never creates bias
b) sometimes creates bias
c) always creates bias

3. Omitting a relevant explanatory variable when running a regression usually


a) increases the variance of coefficient estimates
b) decreases the variance of coefficient estimates
c) does not affect the variance of coefficient estimates

4. Suppose that y = α + βx + δw + ε but that you have ignored w and regressed y on
only x. If x and w are negatively correlated in your data, the OLS estimate of β will
be biased downward if
a) β is positive
b) β is negative
c) δ is positive
d) δ is negative

5. Suppose that y = α + βx + δw + ε but that you have ignored w and regressed y on
only x. The OLS estimate of β will be unbiased if x and w are
a) collinear
b) orthogonal
c) positively correlated
d) negatively correlated

6. Omitting an explanatory variable from a regression in which you know it belongs
could be a legitimate decision if doing so
a) increases R-square
b) decreases the SSE
c) decreases MSE
d) decreases variance

7. In general, omitting a relevant explanatory variable creates


a) bias and increases variance
b) bias and decreases variance
c) no bias and increases variance
d) no bias and decreases variance

8. Suppose you know for sure that a variable does not belong in a regression as an
explanatory variable. If someone includes this variable in their regression, in general
this will create
a) bias and increase variance
b) bias and decrease variance
c) no bias and increase variance
d) no bias and decrease variance

9. Adding an irrelevant explanatory variable which is orthogonal to the other
explanatory variables causes
a) bias and no change in variance
b) bias and an increase in variance
c) no bias and no change in variance
d) no bias and an increase in variance

10. A good thing about data mining is that it


a) avoids bias
b) decreases MSE
c) increases R-square
d) may uncover an empirical regularity which causes you to improve your specification

11. A bad thing about data mining is that it is likely to


a) create bias
b) capitalize on chance
c) both of the above
d) none of the above

12. The bad effects of data mining can be minimized by


a) keeping variables in your specification that common sense tells you definitely belong
b) setting aside some data to be used to check the specification
c) performing a sensitivity analysis
d) all of the above

13. A sensitivity analysis is conducted by varying the specification to see what happens
to
a) Bias
b) MSE
c) R-square
d) the coefficient estimates

14. The RESET test is used mainly to check for


a) collinearity
b) orthogonality
c) functional form
d) capitalization on chance

15. To perform the RESET test we rerun the regression adding as regressors the squares
and cubes of the
a) dependent variable
b) suspect explanatory variable
c) forecasts of the dependent variable
d) none of these

16. The RESET test is


a) a z test b) a t test c) a chi-square test d) an F test

17. Regressing y on x using a distributed lag model specifies that y is determined by


a) the lagged value of y
b) the lagged value of x
c) several lagged values of x
d) several lagged values of x, with the coefficients on the lagged x’s decreasing as the
lag becomes longer

18. Selecting the lag length in a distributed lag model is usually done by
a) minimizing the MSE
b) maximizing R-square
c) maximizing the t values
d) minimizing an information criterion

19. A major problem with distributed lag models is that


a) R-square is low
b) coefficient estimates are biased
c) variances of coefficient estimates are large
d) the lag length is impossible to determine

20. The rationale behind the Koyck distributed lag is that it


a) eliminates bias
b) increases the fit of the equation
c) exploits an information criterion
d) incorporates more information into estimation

21. In the Koyck distributed lag model, as the lag lengthens the coefficients on the lagged
explanatory variable
a) increase and then decrease b) decrease forever
c) decrease for awhile and then become zero d) none of these

22. Using the lagged value of the dependent variable as an explanatory variable is often
done to
a) avoid bias
b) reduce MSE
c) improve the fit of a specification
d) facilitate estimation of some complicated models

Week 10: Multicollinearity; Applied Econometrics

1. Multicollinearity occurs whenever


a) the dependent variable is highly correlated with the independent variables
b) the independent variables are highly orthogonal
c) there is a close linear relationship among the independent variables
d) there is a close nonlinear relationship among the independent variables

2. High collinearity is not a problem if


a) no bias is created
b) R-square is high
c) the variance of the error term is small
d) none of these

3. The multicollinearity problem is very similar to the problems caused by


a) nonlinearities
b) omitted explanatory variables
c) a small sample size
d) orthogonality

4. Multicollinearity causes
a) low R-squares
b) biased coefficient estimates
c) biased coefficient variance estimates
d) none of these

5. A symptom of multicollinearity is
a) estimates don’t change much when a regressor is omitted
b) t values on important variables are quite big
c) the variance-covariance matrix contains small numbers
d) none of these

6. Suppose your specification is y = βx + γMale + θFemale + δWeekday + λWeekend + ε
a) there is no problem with this specification because the intercept has been omitted
b) there is high collinearity but not perfect collinearity
c) there is perfect collinearity
d) there is orthogonality

7. Suppose you regress y on x and the square of x.


a) Estimates will be biased with large variances
b) It doesn’t make sense to use the square of x as a regressor
c) The regression will not run because these two regressors are perfectly correlated
d) There should be no problem with this.

8. A friend has told you that his multiple regression has a high R2 but all the estimates of
the regression slopes are insignificantly different from zero on the basis of t tests of
significance. This has probably happened because the
a) intercept has been omitted
b) explanatory variables are highly collinear
c) explanatory variables are highly orthogonal
d) dependent variable doesn’t vary by much

9. Dropping a variable can be a solution to a multicollinearity problem because it


a) avoids bias
b) increases t values
c) eliminates the collinearity
d) could decrease mean square error

10. The main way of dealing with a multicollinearity problem is to


a) drop one of the offending regressors
b) increase the sample size
c) incorporate additional information
d) transform the regressors

11. A result of multicollinearity is that


a) coefficient estimates are biased
b) t statistics are too small
c) the variance of the error is overestimated
d) variances of coefficient estimates are large

12. A result of multicollinearity is that


a) OLS is no longer the BLUE
b) Variances of coefficient estimates are overestimated
c) R-square is misleadingly small
d) Estimates are sensitive to small changes in the data

13. Suppose you are estimating y = α + βx + δz + θw + ε for which the CLR assumptions
hold and x, z, and w are not orthogonal to one another. You estimate incorporating
the information that β = δ. To do this you will regress
a) y on an intercept, 2x, and w
b) y on an intercept, (x+z), and w
c) y-x on an intercept, z, and w
d) none of these

14. In the preceding question, suppose that in fact β is not equal to δ. Then in general,
compared to regressing without this extra information, your estimate of θ
a) is unaffected
b) is still unbiased
c) has a smaller variance
d) nothing can be said about what will happen to this estimate

15. Economic theory tells us that when estimating the real demand for exports we should
use the ______ exchange rate and when estimating the real demand for money we
should use the _______ interest rate. The blanks should be filled with
a) real; real
b) real; nominal
c) nominal; real
d) nominal; nominal

16. You have run a regression of the change in inflation on unemployment. Economic
theory tells us that our estimate of the natural rate of unemployment is
a) the intercept estimate
b) the slope estimate
c) minus the intercept estimate divided by the slope estimate
d) minus the slope estimate divided by the intercept estimate
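
A one-line check of the logic here, with hypothetical estimates:

    a, b = 3.0, -0.5   # hypothetical intercept and slope estimates
    print(-a / b)      # the change in inflation is zero at u = -a/b, here 6.0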

17. You have thirty observations from a major golf tournament in which the percentage
of putts made was recorded for distances ranging from one foot to thirty feet, in
increments of one foot. You propose estimating
success as a function of distance. What functional form should you use?
a) linear
b) logistic
c) quadratic
d) exponential

18. Starting with a comprehensive model and testing down to find the best specification
has the advantage that
a) complicated models are inherently better
b) testing down is guaranteed to find the best specification
c) testing should be unbiased
d) pretest bias is eliminated

19. Before estimating your chosen specification you should


a) data mine
b) check for multicollinearity
c) look at the data
d) test for zero coefficients

20. The interocular trauma test is


a) a t test b) an F test c) a chi-square test d) none of the above

21. When the sample size is quite large, a researcher needs to pay special attention to
a) coefficient magnitudes
b) t statistic magnitudes
c) statistical significance
d) type I errors

22. Your only measure of a key economic variable is unsatisfactory but you use it
anyway. This is an example of
a) knowing the context
b) asking the right questions
c) compromising
d) a sensitivity analysis

23. “Asking the right question” means


a) selecting the appropriate null hypothesis
b) looking for a lost item where you lost it instead of where the light is better
c) resisting the temptation to change a problem so that it has a mathematically elegant
solution
d) all of the above

24. A sensitivity analysis involves


a) avoiding type I errors
b) checking for multicollinearity
c) omitting variables with low t values
d) examining the impact of specification changes

25. When testing if a coefficient is zero it is traditional to use a type I error rate of 5%.
When testing if a variable should remain in a specification we should
a) continue to use a type I error rate of 5%
b) use a smaller type I error rate
c) use a larger type I error rate
d) forget about the type I error rate and instead choose a type II error rate

26. An example of knowing the context is knowing that


a) some months have five Sundays
b) only children from poor families are eligible for school lunch programs
c) many auctions require a reserve price to be exceeded before an item is sold
d) all of the above

27. A type III error occurs when


a) you make a type I and a type II error simultaneously
b) type I and type II errors are confused
c) the right answer is provided to the wrong question
d) the wrong functional form has been used

28. The adage that begins with “Graphs force you to notice ….” is completed with
a) outliers
b) incorrect functional forms
c) what you never expected to see
d) the real relationships among data

29. In econometrics, KISS stands for


a) keeping it safely sane b) keep it simple, stupid c) keep it sensibly simple
d) keep inference sophisticatedly simple

30. An advantage of simple models is that they


a) do not place unrealistic demands on the data
b) are less likely to lead to serious mistakes
c) facilitate subjective insights d) all of the above

31. An example of the laugh test is that


a) your coefficient estimates are of unreasonable magnitude
b) your functional form is very unusual
c) your coefficient estimates are all negative
d) some of your t values are negative

32. Hunting statistical significance with a shotgun means


a) avoiding multicollinearity by transforming data
b) throwing every explanatory variable you can think of into your specification
c) using F tests rather than t tests
d) using several different type I error rates

33. “Capitalizing on chance” means that


a) by luck you have found the correct specification
b) you have found a specification that explains the peculiarities of your data set
c) you have found the best way of incorporating capital into the production function
d) you have done the opposite of data mining

34. The adage that begins with “All models are wrong, ….” is completed with
a) especially those with low R-squares
b) but some are useful
c) so it is impossible to find a correct specification
d) but that should not concern us

35. Those claiming that statistical significance is being misused are referring to the
problem that
a) there may be a type I error
b) there may be a type II error
c) the coefficient magnitude may not be of consequence
d) there may be too much multicollinearity

36. Those worried that researchers are “using statistical significance to sanctify a result”
suggest that statistical analysis be supplemented by
a) looking for corroborating evidence
b) looking for disconfirming evidence
c) assessing the magnitude of coefficients
d) all of the above

37. To deal with results tainted by subjective specification decisions undertaken during
the heat of econometric battle it is suggested that researchers
a) eliminate multicollinearity
b) report a sensitivity analysis
c) use F tests instead of t tests
d) use larger type I error rates

38. You have regressed yt on xt and xt-1, obtaining a positive coefficient estimate on xt, as
expected, but a negative coefficient estimate on lagged x. This
a) indicates that something is wrong with the regression
b) implies that the short-run effect of x is smaller than its long-run effect
c) implies that the short-run effect of x is larger than its long-run effect
d) is due to high collinearity

39. Outliers should


a) be deleted from the data
b) be set equal to the sample average
c) prompt an investigation into their legitimacy
d) be neutralized somehow

40. Influential observations


a) can be responsible for a wrong sign
b) is another name for outliers
c) require use of an unusual specification
d) all of the above

41. Suppose you are estimating the returns to education and so regress wage on years of
education and some other explanatory variables. One problem with this is that people
with higher general ability levels, for which you have no measure, tend to opt for
more years of education, creating bias in your estimation. This bias is referred to as
a) multicollinearity bias
b) pretest bias
c) self-selection bias
d) omitted variable bias

42. A wrong sign could result from


a) a theoretical oversight
b) an interpretation error
c) a data problem
d) all of the above

Week 11: Autocorrelated Errors; Heteroskedasticity

1. If errors are nonspherical it means that they are


a) autocorrelated
b) heteroskedastic
c) autocorrelated or heteroskedastic
d) autocorrelated or heteroskedastic, or both

2. The most important consequence of nonspherical errors is that


a) coefficient estimates are biased
b) inference is biased
c) OLS is no longer BLUE
d) none of these

3. Upon discovering via a test that you have nonspherical errors you should
a) use generalized least squares
b) find the appropriate transformation of the variables
c) double-check your specification
d) use an autocorrelation- or heteroskedasticity-consistent variance covariance matrix
estimate

4. GLS can be performed by running OLS on variables transformed so that the error
term in the transformed relationship is
a) homoskedastic
b) spherical
c) serially uncorrelated
d) eliminated

5. Second-order autocorrelated errors means that the current error εt is a linear function
of
a) εt-1 b) εt-1 squared c) εt-2 d) εt-1 and εt-2

6. Suppose you have an autocorrelated error with rho equal to 0.4. You should transform
each variable xt to become
a) .4xt b) .6xt c) xt - .4xt-1 d) .6xt - .4xt-1

7. Pushing the autocorrelation- or heteroskedasticity-consistent variance-covariance
matrix button in econometrics software when running OLS causes
a) the GLS estimation procedure to be used
b) the usual OLS coefficient estimates to be produced, but with corrected estimated
variances of these coefficient estimates
c) new OLS coefficient estimates to be produced, along with corrected estimated
variances of these coefficient estimates
d) the observations automatically to be weighted to remove the bias in the coefficient
estimates

8. A “too-big” t statistic could come about because of


a) a very large sample size
b) multicollinearity
c) upward bias in our variance estimates
d) downward bias in our variance estimates

9. A “too-big” t statistic could come about because of


a) Multicollinearity b) a small sample size c) orthogonality d) none of these

10. The DW test is
a) called the Durbin-Watson test
b) a statistic that should be close to 2.0 when the null is true
c) defective whenever the lagged value of the dependent variable appears as a regressor
d) all of the above

11. The Breusch-Godfrey test is
a) used to test the null of no autocorrelation
b) valid even when the lagged value of the dependent variable appears as a regressor
c) a chi-square test
d) all of the above

12. To use the Breusch-Godfrey statistic to test the null of no autocorrelation against the
alternative of second order autocorrelated errors, we need to regress the OLS
residuals on ________ and use _____ degrees of freedom for our test statistic. The
blanks are best filled with
a) two lags of the OLS residuals; 2
b) the original explanatory variables and one lag of the OLS residuals; 1
c) the original explanatory variables and two lags of the OLS residuals; 2
d) the original explanatory variables, their lags, and one lag of the OLS residuals; 1
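
The auxiliary regression in question 12 can be sketched directly with numpy
(simulated data; I pad the initial lagged residuals with zeros, one common
convention):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 120
    x = rng.standard_normal(n)
    y = 1 + 2 * x + rng.standard_normal(n)

    X = np.column_stack([np.ones(n), x])
    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]   # OLS residuals

    # Regress the residuals on the original regressors plus two of their lags
    lag1 = np.concatenate([[0.0], e[:-1]])
    lag2 = np.concatenate([[0.0, 0.0], e[:-2]])
    Z = np.column_stack([X, lag1, lag2])
    fit = Z @ np.linalg.lstsq(Z, e, rcond=None)[0]
    r2 = 1 - ((e - fit) ** 2).sum() / ((e - e.mean()) ** 2).sum()
    print(n * r2)   # compare to a chi-square critical value with 2 df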

13. With heteroskedasticity we should use weighted least squares where


a) by doing so we maximize R-square
b) we use bigger weights on those observations with error terms that have bigger variances
c) we use bigger weights on those observations with error terms that have smaller
variances
d) the weights are bigger whenever the coefficient estimates are more reliable

14. Suppose you are estimating y = α + βx + δz + ε but that the variance of ε is
proportional to the square of x. Then to find the GLS estimate we should regress
a) y on an intercept, 1/x, and z/x
b) y/x on 1/x and z/x
c) y/x on an intercept, 1/x, and z/x
d) not possible because we don’t know the factor of proportionality
e) none of these
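
This transformation can be checked by simulation. Note that after dividing
through by x the column of ones plays the role of the x/x term, so the
"intercept" of the transformed regression estimates β (a sketch with made-up
parameter values):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 2000
    x = rng.uniform(1, 5, n)
    z = rng.standard_normal(n)
    y = 2 + 3 * x + 1.5 * z + x * rng.standard_normal(n)  # error sd proportional to x

    W = np.column_stack([np.ones(n), 1 / x, z / x])  # intercept, 1/x, z/x
    print(np.linalg.lstsq(W, y / x, rcond=None)[0])  # approx (3, 2, 1.5)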

15. Pushing the heteroskedasticity-consistent variance-covariance matrix button in
econometric software
a) removes the coefficient estimate bias from using OLS
b) does not change the OLS coefficient estimates
c) increases the t values
d) none of these

16. Suppose your dependent variable is aggregate household demand for electricity for
various cities. To correct for heteroskedasticity you should
a) multiply observations by the city size
b) divide observations by the city size
c) multiply observations by the square root of the city size
d) divide observations by the square root of the city size
e) none of these

17. Suppose your dependent variable is crime rates for various cities. To correct for
heteroskedasticity you should
a) multiply observations by the city size
b) divide observations by the city size
c) multiply observations by the square root of the city size
d) divide observations by the square root of the city size
e) none of these

18. When using the eyeball test for heteroskedasticity, under the null we would expect the
relationship between the squared residuals and the explanatory variable to be such
that
a) as the explanatory variable gets bigger the squared residual gets bigger
b) as the explanatory variable gets bigger the squared residual gets smaller
c) when the explanatory variable is quite small or quite large the squared residual will
be large relative to its value otherwise
d) there is no evident relationship

19. Suppose you are estimating the relationship y = α + βx + δz + ε but you suspect that
the 50 male observations have a different error variance than the 40 female
observations. The degrees of freedom for the Goldfeld-Quandt test are
a) 50 and 40 b) 49 and 39 c) 48 and 38 d) 47 and 37

20. In the previous question, suppose you had chosen to use the studentized BP test. The
degrees of freedom would then have been
a) 1 b) 2 c) 3 d) 4

21. In the previous question, to conduct the studentized BP test you would have regressed
the squared residuals on an intercept and
a) x b) z c) x and z d) a dummy for gender

22. Suppose you are estimating demand for electricity using aggregated data on
household income and on electricity demand across 30 cities of differing sizes Ni.
Your specification is that household demand is a linear function of household income
and city price. To estimate using GLS you should regress
a) per capita demand on an intercept, price and per capita income
b) aggregate demand on an intercept, price, and aggregate income
c) per capita demand on the inverse of Ni, price divided by Ni, and per capita income
d) none of these

23. Suppose you are estimating student performance on an economics exam, regressing
exam score on an intercept, GPA, and a dummy MALE. The CLR model assumptions
apply except that you have determined that the error variance for the male
observations is eight but for females it is only two. To estimate using GLS you should
transform by
a) dividing the male observations by 8 and the female observations by 2
b) multiplying the male observations by 2 and the female observations by 8
c) dividing the male observations by 2
d) multiplying the female observations by 8

24. Suppose the CLR model applies except that the errors are nonspherical of known
form so that you can calculate the GLS estimator. Then
a) the R-square calculated using the GLS estimates is smaller than the OLS R-square
b) the R-square calculated using the GLS estimates is equal to the OLS R-square
c) the R-square calculated using the GLS estimates is larger than the OLS R-square
d) nothing can be said about the relative magnitudes of R-square

Consider a case in which there is a nonspherical error of known form so that you can
calculate the GLS estimator. You have conducted a Monte Carlo study to investigate the
difference between OLS and GLS, using the computer to generate 2000 samples with
nonspherical errors, from which you calculate the following.
a) 2000 OLS estimates and their average betaolsbar
b) 2000 estimated variances of these OLS estimates and their average betaolsvarbar
c) the estimated variance of the 2000 OLS estimates, varbetaols.
d) 2000 corresponding GLS estimates and their average betaglsbar
e) 2000 estimated variances of these GLS estimates and their average betaglsvarbar
f) the estimated variance of the 2000 GLS estimates, varbetagls
The following six questions refer to this information.
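
Here is a compact Python version of such a study, under my own illustrative
assumption that the error standard deviation is proportional to x (so the GLS
transformation divides everything by x). It is meant as an aid for reasoning
through the next six questions, not a substitute for doing so.

    import numpy as np

    rng = np.random.default_rng(3)
    n, reps = 50, 2000
    x = rng.uniform(1, 5, n)
    X = np.column_stack([np.ones(n), x])       # OLS regressors
    Xg = np.column_stack([1 / x, np.ones(n)])  # regressors after dividing by x
    ols, gls = [], []
    for _ in range(reps):
        y = 1 + 2 * x + x * rng.standard_normal(n)   # nonspherical errors
        ols.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
        gls.append(np.linalg.lstsq(Xg, y / x, rcond=None)[0][1])
    print(np.mean(ols), np.mean(gls))  # betaolsbar and betaglsbar: both near 2
    print(np.var(ols), np.var(gls))    # varbetaols exceeds varbetagls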

25. You should find that betaolsbar and betaglsbar are


a) approximately equal, and varbetaols and varbetagls are also approximately equal
b) not approximately equal, and varbetaols and varbetagls are also not approximately
equal
c) approximately equal, but varbetaols and varbetagls are not approximately equal
d) not approximately equal, but varbetaols and varbetagls are approximately equal

26. You should expect that


a) betaolsbar and betaglsbar are approximately equal
b) betaolsbar is bigger than betaglsbar
c) betaolsbar is smaller than betaglsbar
d) not possible to determine relative size here

27. You should expect that


a) Varbetaols and Varbetagls are approximately equal
b) Varbetaols is bigger than Varbetagls
c) Varbetaols is smaller than Varbetagls
d) not possible to determine relative size here

28. You should expect that varbetaols and betaolsvarbar are


a) approximately equal and varbetagls and betaglsvarbar are also approximately equal
b) not approximately equal but varbetagls and betaglsvarbar are approximately equal
c) approximately equal but varbetagls and betaglsvarbar are not approximately equal
d) not approximately equal and varbetagls and betaglsvarbar are also not approximately
equal

29. You should expect that


a) varbetaols and betaolsvarbar are approximately equal
b) varbetaols is bigger than betaolsvarbar
c) varbetaols is smaller than betaolsvarbar
d) not possible to determine relative size here

30. You should expect that


a) varbetagls and betaglsvarbar are approximately equal
b) varbetagls is bigger than betaglsvarbar
c) varbetagls is smaller than betaglsvarbar
d) not possible to determine relative size here

31. Suppose the CLR model holds but the presence of nonspherical errors causes the
variance estimates of the OLS estimator to be an underestimate. Because of this,
when testing the significance of a slope coefficient in a large sample using the
critical t value 1.96, the type I error rate
a) is higher than 5%
b) is lower than 5%
c) remains fixed at 5%
d) not possible to tell what happens to the type I error rate

32. Suppose you want to undertake a Monte Carlo study to examine the impact of
heteroskedastic errors of the form V(ε) = 4 + 9x2 where x is one of the explanatory
variables in your specification. After getting the computer to draw errors from a
standard normal, to create the desired heteroskedasticity you need to multiply the ith
error by
a) 3xi b) 2 + 3xi c) 4 + 9xi2 d) none of these
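
The key point is that errors are scaled by the standard deviation, the square
root of the variance; a two-line sketch:

    import numpy as np

    x = np.array([0.0, 1.0, 2.0])
    sd = np.sqrt(4 + 9 * x ** 2)  # (2, 3.61, 6.32), not 2 + 3x = (2, 5, 8)
    # multiplying standard normal draws by sd gives V(eps) = 4 + 9x^2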

33. Suppose the CLR model assumptions apply to y = α + βx + θz + ε except that the
variance of the error is proportional to x squared. To produce the GLS estimator you
should regress y/x on
a) an intercept, 1/x, and z/x
b) an intercept and z/x
c) 1/x and z/x
d) not possible to produce GLS because the factor of proportionality is not known

34. Suppose the CLR model assumptions apply to y = α + βx + θz + ε. You mistakenly
think that the variance of the error is proportional to x squared and so transform the
data appropriately and run OLS. If x and z are positively correlated in the data, then
your estimate of θ is
a) biased upward
b) biased downward
c) unbiased
d) not possible to determine the nature of the bias here

35. Suppose income is the dependent variable in a regression and contains errors of
measurement (i) caused by people rounding their income to the nearest $100, or (ii)
caused by people not knowing their exact income but always guessing within 5% of
the true value. In case (i) there is
a) heteroskedasticity and the same for case (ii)
b) heteroskedasticity but not for case (ii)
c) no heteroskedasticity but heteroskedasticity for case (ii)
d) no heteroskedasticity and the same for case (ii)

36. Suppose you have regressed the score on an economics exam on GPA for 50 individuals,
ordered from smallest to largest GPA. The DW statistic is 1.5; you should conclude
that
a) the errors are autocorrelated
b) there is heteroskedasticity
c) there is multicollinearity
d) there is a functional form misspecification

37. A regression using the specification y = α + βx + θz + ε produced SSE = 14 using
annual data for 1961-1970, and SSE = 45 using data for 1971-1988. The Goldfeld-Quandt
test statistic for a change in error variance beginning in 1971 is
a) 3.2 b) 1.8 c) 1.5 d) none of these
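
The arithmetic for this question, with each SSE divided by its degrees of
freedom (10 observations and 3 parameters in the first period, 18 and 3 in the
second):

    print((45 / (18 - 3)) / (14 / (10 - 3)))  # 3.0 / 2.0 = 1.5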

Week 12: Bayesian Statistics

1. The main difference between Bayesian and classical statisticians is


a) their choice of prior
b) their definitions of probability
c) their views of the type I error rate
d) the formulas for probability used in calculations

2. Suppose a classical statistician estimates via OLS an unknown parameter beta and
because the CLR model assumptions hold declares the resulting estimate’s sampling
distribution to be such that it is unbiased and has minimum variance among all linear
unbiased estimators. For the Bayesian the sampling distribution
a) is also unbiased
b) is biased because of the prior
c) has a smaller variance
d) does not exist

Suppose the CNLR model applies and with a very large sample size the classical
statistician produces an estimate betahat = 6, with variance 4. With the same data, using
an ignorance prior, a Bayesian produces a normal posterior distribution with mean 6 and
variance 4. The next ten questions refer to this information.

3. The sampling distribution of betahat


a) has mean 6
b) has mean beta
c) is graphed with beta on the horizontal axis
d) has the same interpretation as the posterior distribution

4. The posterior distribution of beta


a) has mean 6
b) has mean beta
c) is graphed with betahat on the horizontal axis
d) has the same interpretation as the sampling distribution

5. In this example the Bayesian estimate of beta would be the same as the classical
estimate if the loss function were
a) all-or-nothing b) absolute c) quadratic d) all of the above

6. If the Bayesian had used an informative prior instead of an ignorance prior the
posterior would have had
a) the same mean but a smaller variance
b) the same mean but a larger variance
c) a different mean and a smaller variance
d) a different mean and a larger variance

7. For the Bayesian, the probability that beta is greater than 7 is


a) 40% b) 31% c) 16% d) not a meaningful question

8. For the classical statistician, the probability that beta is greater than 7 is
a) 40% b) 31% c) 16% d) not a meaningful question

9. Suppose we want to test the null hypothesis that beta is equal to 4, against the
alternative that beta is greater than 4. The classical statistician’s p value is
approximately
a) .16 b) .31 c) .32 d) none of these

10. Suppose we want to test the null hypothesis that beta is less than or equal to 4, against
the alternative that beta is greater than 4. The Bayesian statistician’s probability that
the null is true is approximately
a) .16 b) .69 c) .84 d) none of these
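
The tail areas behind questions 7, 9, and 10 are plain normal areas for a
distribution with mean 6 and standard deviation 2; for example, using scipy
(sf is the upper-tail probability):

    from scipy.stats import norm

    print(norm.sf(7, loc=6, scale=2))   # P(beta > 7) is about 0.31
    print(norm.cdf(4, loc=6, scale=2))  # P(beta <= 4) is about 0.16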

11. The Bayesian would interpret the interval from 2.7 to 9.3 as
a) an interval which if calculated in repeated samples would cover the true value of beta
90% of the time
b) a range containing the true value of beta with 90% probability
c) an interval that the Bayesian would bet contains the true value of beta

12. Consider the interval from 2.7 to 9.3. For the Bayesian the probability that the true
value of beta is not in this interval is
a) approximately equal to the probability that beta is less than 3.4
b) a lot greater than the probability that beta is less than 3.4
c) a lot less than the probability that beta is less than 3.4
d) not a meaningful question

13. Bayes theorem says that the posterior is


a) equal to the likelihood
b) proportional to the likelihood
c) equal to the prior times the likelihood
d) proportional to the prior times the likelihood

14. The subjective element in a Bayesian analysis comes about through use of
a) an ignorance prior
b) an informative prior
c) the likelihood
d) the posterior

15. The Bayesian loss function tells us


a) the loss incurred by using a particular point estimate
b) the expected loss incurred by using a particular point estimate
c) the loss associated with a posterior distribution
d) the expected loss associated with a posterior distribution

16. The usual “Bayesian point estimate” is the mean of the posterior distribution. This
assumes
a) a quadratic loss function
b) an absolute loss function
c) an all-or-nothing loss function
d) no particular loss function

17. The Bayesian point estimate is chosen by


a) minimizing the loss
b) minimizing expected loss
c) finding the mean of the posterior distribution
d) all of the above

18. From the Bayesian perspective a sensitivity analysis checks to see by how much the
results change when a different
a) loss function is used
b) prior is used
c) posterior is used
d) data set is used

19. The main output from a Bayesian analysis is


a) the likelihood
b) the prior distribution
c) the posterior distribution
d) a point estimate

20. When hypothesis testing in a Bayesian framework the type I error


a) is fixed
b) is irrelevant
c) is set equal to the type II error
d) none of the above

21. The Bayesian accepts/rejects a null hypothesis based on


a) minimizing the type I error
b) minimizing the type II error
c) maximizing the benefit from this decision
d) maximizing the expected benefit from this decision

22. Suppose you are a Bayesian and your posterior distribution for next month’s
unemployment rate is a normal distribution with mean 8.0 and variance 0.25. If this
month’s unemployment rate is 8.1 percent, what would you say is the probability that
unemployment will increase from this month to next month?
a) 50% b) 42% c) 5% d) 2.3%

23. If a Bayesian has a quadratic loss function, his/her preferred point estimate is
a) the mean of the posterior distribution
b) the median of the posterior distribution
c) the mode of the posterior distribution
d) cannot be determined unless the specific quadratic loss function is known

24. Suppose the net cost to a firm of undertaking a venture is $1800 if beta is less than or
equal to one and its net profit is $Q if beta is greater than one. Your posterior
distribution for beta is normal with mean 2.28 and variance unity. Any value of Q
bigger than what number entices you to undertake this venture?
a) 100 b) 200 c) 300 d) 450

25. A Bayesian has a client with a loss function equal to the absolute value of the
difference between the true value of beta and the point estimate of beta. The posterior
distribution is f(beta) = 2*beta for beta between zero and one, with f(beta) zero
elsewhere. (This distribution has mean two-thirds and variance one-eighteenth.)
Approximately what point estimate should be given to this client?
a) 0.50 b) 0.66 c) 0.71 d) 0.75

26. A Bayesian has a client with a quadratic loss function. The posterior distribution is
beta = 1, 2, and 3 with probabilities 0.1, 0.3 and 0.6, respectively. What point
estimate should be given to this client?
a) 1 b) 2 c) 3 d) none of these

27. A Bayesian has a client with an all-or-nothing loss function. The posterior
distribution is beta = 1, 2, and 3 with probabilities 0.1, 0.3 and 0.6, respectively. What
point estimate should be given to this client?
a) 1 b) 2 c) 3 d) none of these
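
Questions 25 through 27 pair each loss function with a feature of the
posterior: absolute loss with the median, quadratic loss with the mean,
all-or-nothing loss with the mode. A sketch of the three calculations:

    import numpy as np

    # Q25: f(beta) = 2*beta on [0,1] has CDF beta**2, so the posterior
    # median (absolute loss) solves beta**2 = 0.5
    print(np.sqrt(0.5))              # about 0.71

    # Q26 and Q27: a discrete posterior over beta = 1, 2, 3
    beta = np.array([1, 2, 3])
    p = np.array([0.1, 0.3, 0.6])
    print(beta @ p)                  # quadratic loss: the posterior mean, 2.5
    print(beta[np.argmax(p)])        # all-or-nothing loss: the posterior mode, 3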

Answers

Week 1: Statistical Foundations I 1c, 2c, 3c, 4a, 5b, 6d, 7a, 8c, 9c, 10c, 11a, 12c, 13d,
14c, 15c, 16a, 17a, 18d, 19c, 20a, 21a, 22a, 23a, 24c, 25a, 26d, 27a, 28b, 29c, 30a, 31b,
32c, 33a, 34c, 35b, 36d, 37b, 38a, 39b, 40b, 41d, 42b, 43a, 44b, 45a, 46b, 47c, 48c, 49b,
50d, 51c, 52a

Week 2: Statistical Foundations II 1b, 2c, 3c, 4b, 5c, 6a, 7d, 8b, 9a, 10a, 11d, 12d, 13d,
14a, 15b, 16a, 17a, 18b, 19c, 20d, 21b, 22d, 23b, 24c, 25d, 26b, 27d, 28d, 29d, 30c, 31c,
32a, 33c, 34b, 35c, 36d, 37c, 38b, 39a

Week 3: What is Regression Analysis? 1a, 2c, 3b, 4d, 5c, 6d, 7a, 8d, 9a, 10d, 11b, 12b,
13b, 14d, 15b, 16c, 17c, 18b, 19a, 20a, 21d, 22d, 23d, 24c, 25d, 26b, 27a, 28d, 29c, 30b,
31a, 32b, 33c, 34d, 35b, 36b, 37a, 38c, 39c

Week 4: The CLR Model 1d, 2c, 3b, 4c, 5d, 6c, 7d, 8b, 9b, 10c, 11d, 12c, 13c, 14d, 15d,
16c, 17c, 18a, 19a, 20a, 21c, 22b, 23d, 24a, 25c

Week 5: Sampling Distributions 1d, 2d, 3c, 4d, 5d, 6c, 7d, 8c, 9b, 10c, 11a, 12d, 13c,
14d, 15d, 16d, 17d, 18d, 19d, 20c, 21c, 22a, 23a, 24b, 25b, 26c, 27b, 28c, 29b, 30b, 31b,
32c, 33d, 34b, 35d

Week 6: Dummy Variables 1d, 2c, 3b, 4a, 5a, 6a, 7c, 8b, 9b, 10b, 11d, 12d, 13c, 14c,
15c, 16d, 17c, 18c, 19a, 20c, 21b, 22c, 23d, 24d, 25c, 26b, 27a, 28c, 29d, 30d, 31c, 32d,
33b, 34c, 35d, 36d, 37c, 38d, 39d, 40b, 41b, 42d, 43a

Week 7: Hypothesis Testing 1b, 2c, 3c, 4c, 5b, 6d, 7b, 8d, 9b, 10d, 11b, 12c, 13d, 14a,
15c, 16b, 17b, 18c, 19d, 20a, 21c

Week 9: Specification 1c, 2b, 3b, 4c, 5b, 6c, 7b, 8c, 9c, 10d, 11b, 12d, 13d, 14c, 15d,
16d, 17c, 18d, 19c, 20d, 21b, 22d

Week 10: Multicollinearity; Applied Econometrics 1c, 2d, 3c, 4d, 5d, 6c, 7d, 8b, 9d,
10c, 11d, 12d, 13b, 14c, 15b, 16c, 17b, 18c, 19c, 20d, 21a, 22c, 23d, 24d, 25c, 26d, 27c,
28c, 29c, 30d, 31a, 32b, 33b, 34b, 35c, 36d, 37b, 38c, 39c, 40a, 41c, 42d

Week 11: Autocorrelated Errors; Heteroskedasticity 1d, 2b, 3c, 4b, 5d, 6c, 7b, 8d, 9d, 10d, 11d, 12c, 13c,
14c, 15b, 16d, 17c, 18d, 19d, 20a, 21d, 22d, 23c, 24a, 25c, 26a, 27b, 28b, 29d, 30a, 31a,
32d, 33a, 34c, 35c, 36d, 37c

Week 12: Bayesian Statistics 1b, 2d, 3b, 4a, 5d, 6c, 7b, 8d, 9a, 10a, 11b, 12a, 13d, 14b,
15a, 16a, 17b, 18b, 19c, 20d, 21d, 22b, 23a, 24b, 25c, 26d, 27c
