
Solution I

Lecture 2 exercise 2

E  Yi | Di  1 – E  Yi | Di  0 
 E  y1i | Di  1 – E  y0i | Di  0 
 E  y1i  y01 | Di  1  E  y0 i | Di  1  E  y0 i | Di  0

The first term is the average causal effect of treatment on the treated (Di = 1). The second term is the selection bias: the difference between the counterfactual outcome that cannot be observed, E[y0i | Di = 1], and the outcome that can be observed, E[y0i | Di = 0].

Lecture 3 exercise 2

E  yi | Di  1  E  Yi | Di  0  E  y1i | Di  1  E  y 0i | Di  0 
                                      E      ui | Di  1 – E    ui | Di  0
                                        E  ui | Di  1 – E  ui | Di  0

Here y0i and y1i have been replaced using their formulas: y0i = α + β*0 + ui = α + ui and, similarly, y1i = α + β*1 + ui = α + β + ui, where α and β are constants. After the substitution, β is the causal effect and the remaining term, E[ui | Di = 1] - E[ui | Di = 0], is the selection bias.
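
The decomposition can be illustrated numerically. Below is a minimal R simulation sketch (not part of the original exercise); the sample size, parameter values, and the way treatment depends on ui are assumptions chosen only so that the selection bias is non-zero.

# Simulate yi = alpha + beta*Di + ui where treatment is correlated with ui (selection).
set.seed(123)
n     <- 100000
alpha <- 2
beta  <- 3                              # true causal effect
u <- rnorm(n)                           # unobserved determinant of the outcome
D <- as.integer(u + rnorm(n) > 0)       # treatment more likely when ui is high
y <- alpha + beta * D + u

naive_diff     <- mean(y[D == 1]) - mean(y[D == 0])   # E[Yi|Di=1] - E[Yi|Di=0]
selection_bias <- mean(u[D == 1]) - mean(u[D == 0])   # E[ui|Di=1] - E[ui|Di=0]
naive_diff                  # larger than beta
beta + selection_bias       # equals the naive difference, as in the derivation above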

Lecture 3 exercise 3

Define

yxi = α + β*x + ui
y(x-1)i = α + β*(x-1) + ui

Subtracting the second from the first:

β = yxi - y(x-1)i

This is the causal effect for an individual i of one additional year of schooling: substituting the definitions of yxi and y(x-1)i, the α and ui terms cancel and only β remains.

For 11-12:
β = y(12)i - y(11)i

For 15-16:
β = y(16)i - y(15)i

For 17-18:
β = y(18)i - y(17)i

where 11, 12, 15, 16, 17 and 18 are the values of x, i.e. the years of schooling for each group.
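
A tiny numerical illustration of why the α and ui terms cancel, leaving only β (the values of α, β and ui below are arbitrary assumptions):

# y_xi = alpha + beta*x + u_i; differencing adjacent schooling levels leaves beta.
alpha <- 1.5
beta  <- 0.08      # assumed per-year effect of schooling
u_i   <- 0.4       # individual-specific term
y <- function(x) alpha + beta * x + u_i
y(12) - y(11)      # = beta
y(16) - y(15)      # = beta
y(18) - y(17)      # = beta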

Lecture 4 exercise 5

βOLS = cov(xi, yi) / var(xi)

Replacing yi with α + β*xi + ui:

βOLS = cov(xi, α + β*xi + ui) / var(xi)
     = (β*var(xi) + cov(xi, ui)) / var(xi)
     = β + cov(xi, ui) / var(xi)

so OLS recovers β only when cov(xi, ui) = 0.
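
This can be verified with a small R simulation (a sketch with assumed parameter values, not part of the original exercise): when x and u are correlated, the OLS slope equals β + cov(x, u)/var(x) rather than β.

# Simulate y = alpha + beta*x + u with cov(x, u) != 0 and compare the OLS slope
# with beta + cov(x, u)/var(x); the identity holds exactly in sample.
set.seed(1)
n     <- 100000
alpha <- 1
beta  <- 2
x <- rnorm(n)
u <- 0.5 * x + rnorm(n)          # u correlated with x, so OLS is biased for beta
y <- alpha + beta * x + u
coef(lm(y ~ x))["x"]             # OLS slope estimate
beta + cov(x, u) / var(x)        # same value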

Solution II

a) Columns (1)-(4)
Column (1) reports the group means for the Oregon sample, with no treatment effect in the model.
Column (2) reports the group standard errors/SDs for the Oregon sample, with the treatment effect included in the model.
Column (3) reports the group means for the Portland sample, with no treatment effect in the model.
Column (4) reports the group standard errors/SDs for the Portland sample, with the treatment effect included in the model.
b) Statements 1, 2, 3 and 6 express the results above.
c) The corresponding null hypotheses are given below.

This part uses the "Advertising" dataset, which contains information on computer sales in various areas as well as advertising spending on TV, newspaper and radio. Running the R program ("Advertising_Sample_Program") gives the results below.

a) Applying the regression to the specified dataset and variables after loading the data.
I. Sales on TV
Using the code:
regress <- lm(Sales ~ TV, data = mydata)
summary(regress)
Running the above code gives the results below.
II. Sales on TV and newspaper
Using the code:
regress1 <- lm(Sales ~ TV + Newspaper, data = mydata)
summary(regress1)

III. Sales on radio
Using the code:
regress2 <- lm(Sales ~ Radio, data = mydata)
summary(regress2)
IV. Sales on Radio and TV
Using the code:
regress3 <- lm(Sales ~ Radio + TV, data = mydata)
summary(regress3)
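
The loading of mydata is presumably handled inside the "Advertising_Sample_Program" script; for running the commands above interactively, a minimal sketch (the file name Advertising.csv is an assumption) is:

# Assumed file name; adjust the path to wherever the Advertising data are stored.
mydata <- read.csv("Advertising.csv")
head(mydata)    # expect columns such as TV, Radio, Newspaper and Sales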

b) Yes, the coefficient on TV changes between the two regressions. The coefficient on TV in regression (i) is 0.047537 and in regression (ii) it is 0.046901; the difference reflects omitted variable bias from leaving Newspaper out of regression (i). This is often called the problem of excluding a relevant variable, or under-specifying the model, and it generally makes the OLS estimator biased. Deriving the bias caused by omitting a relevant variable is an example of misspecification analysis.
Performing an additional (auxiliary) regression of Newspaper on TV using the code:
regress4 <- lm(Newspaper ~ TV, data = mydata)
summary(regress4)
A further regression of Radio on TV and newspaper can be run in the same way.
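
The omitted-variable-bias claim in (b) can also be checked numerically with the standard OVB identity: the short-regression coefficient on TV equals the long-regression coefficient on TV plus the long-regression coefficient on Newspaper times the slope from regressing Newspaper on TV. A sketch, reusing the objects defined above:

# OVB identity: beta_TV(short) = beta_TV(long) + beta_Newspaper(long) * delta,
# where delta comes from regressing the omitted Newspaper on the included TV.
delta <- coef(lm(Newspaper ~ TV, data = mydata))["TV"]
coef(regress)["TV"]                                           # short regression (i)
coef(regress1)["TV"] + coef(regress1)["Newspaper"] * delta    # identical, by the OVB formula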

c) Yes, the coefficient on Radio changes from regression (iii) to (iv). The coefficient on Radio in regression (iii) is 0.20250 and in regression (iv) it is 0.18799; the difference reflects omitted variable bias from leaving TV out of regression (iii). This is again the problem of excluding a relevant variable, or under-specifying the model, which generally makes the OLS estimator biased; deriving the bias caused by omitting a relevant variable is an example of misspecification analysis.
Performing additional regressions:
regress6 <- lm(Sales ~ Newspaper, data = mydata)
summary(regress6)
regress7 <- lm(Sales ~ Newspaper + TV, data = mydata)
summary(regress7)
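
The same identity explains the change in the Radio coefficient in (c); there the relevant auxiliary regression is TV on Radio. A sketch, reusing regress2 and regress3 from above:

# OVB identity for Radio: beta_Radio(short) = beta_Radio(long) + beta_TV(long) * gamma,
# where gamma comes from regressing the omitted TV on the included Radio.
gamma <- coef(lm(TV ~ Radio, data = mydata))["Radio"]
coef(regress2)["Radio"]                                   # short regression (iii)
coef(regress3)["Radio"] + coef(regress3)["TV"] * gamma    # identical, by the OVB formula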
