
Learning Outcomes

• Based on the material in this Lecture, you will be able to:


• Discuss the importance of the Error Normality Assumption
• Explain how the sampling distribution of the OLS estimator relates to
interval estimation and hypothesis testing
• Perform interval estimation and explain how to interpret an interval
estimate
• Conduct tests of hypothesis about a single population parameter
• Explain how to use the P-Value to determine the outcome of a test of
hypothesis
• Test hypotheses in the CAPM

@Elisabetta Pellini 2
Error Normality Assumption

• In Lecture 1, we examined a set of assumptions about the error term in the population model. We know that when these assumptions hold, the OLS estimator is BLUE

• Now we add an extra assumption:

• Assumption 6: the population error ε is independent of the explanatory variable x and is Normally distributed with zero mean and variance σ²:

ε ~ N(0, σ²)
Error Normality Assumption

• Assumption 6 is stronger than the previous assumptions. Since ε is independent of x, the conditional distribution of ε given x is the same as the unconditional distribution of ε

• Thus, under Assumption 6, we have that E(ε|x) = E(ε) = 0 and Var(ε|x) = Var(ε) = σ²

• This implies that if we make Assumption 6, then we are necessarily making Assumption 4: E(ε|x) = 0 and Assumption 5: Var(ε|x) = σ²
Error Normality Assumption

• For cross-sectional data, the set of Gauss-Markov assumptions plus the Normality assumption is called the Classical Linear Regression Model (CLRM) assumptions

• Under the CLRM assumptions the OLS estimator is a Minimum Variance Unbiased Estimator, which means that OLS has the smallest variance among all unbiased estimators, not only those that are linear in y
Error Normality Assumption

• We can summarise all six assumptions of the classical linear regression model by saying that:

y|x ~ N(β₀ + β₁x, σ²)

Under the CLRM assumptions, conditional on x, y is Normally distributed with mean β₀ + β₁x (so a mean that depends on x) and constant variance.
Sampling Distribution of the OLS Estimators

Implication of the Normality Assumption:

• Normality of the error term translates into Normality of the sampling distributions of the OLS estimators

• It can be proved that under the Classical Linear Regression Model Assumptions (1-6), the OLS estimator β̂ⱼ satisfies:

β̂ⱼ ~ N(βⱼ, Var(β̂ⱼ))

• Normality of the OLS estimators is still approximately true in large samples even without Normality of the errors

• This is a very important implication, because it gives us the possibility to construct confidence intervals and tests of hypotheses on the population parameters
Sampling Distribution of the OLS Estimators

• Given this result, we obtain a standardized Normal random variable from β̂ⱼ by subtracting its mean (βⱼ) and dividing by its standard deviation sd(β̂ⱼ):

(β̂ⱼ − βⱼ) / sd(β̂ⱼ) ~ N(0, 1)

• Since sd(β̂ⱼ) is unknown, we use its estimator SE(β̂ⱼ) and we obtain the following statistic:

(β̂ⱼ − βⱼ) / SE(β̂ⱼ) ~ t(n − k − 1)
Sampling Distribution of the OLS Estimators

(β̂ⱼ − βⱼ) / SE(β̂ⱼ) ~ t(n − k − 1)

• This is called the t statistic. Given that we have replaced sd(β̂ⱼ) with SE(β̂ⱼ), the probability distribution of the t statistic is the Student's t distribution with n − k − 1 degrees of freedom, where n is the number of observations, k is the number of slope parameters (regressors) and 1 stands for the intercept
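To see this result at work, here is a quick simulation sketch (not from the slides; the model and sample size are illustrative): we generate the simple regression model with Normal errors many times and check that the studentized slope, (β̂₁ − β₁)/SE(β̂₁), behaves like a t random variable with n − k − 1 = 28 degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta0, beta1, sigma = 30, 1.0, 2.0, 1.5   # illustrative values
x = rng.uniform(0, 10, n)                    # regressor held fixed across replications
X = np.column_stack([np.ones(n), x])

t_stats = []
for _ in range(5000):
    y = beta0 + beta1 * x + rng.normal(0, sigma, n)
    b = np.linalg.lstsq(X, y, rcond=None)[0]           # OLS intercept and slope
    resid = y - X @ b
    s2 = resid @ resid / (n - 2)                       # unbiased estimate of sigma^2
    se_b1 = np.sqrt(s2 / ((x - x.mean()) ** 2).sum())  # SE of the slope
    t_stats.append((b[1] - beta1) / se_b1)             # studentized slope

t_stats = np.array(t_stats)
# A t(28) random variable has mean 0 and variance 28/26 ≈ 1.077
print(round(t_stats.mean(), 2), round(t_stats.var(), 2))
```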
Interval Estimation

• We now examine an alternative approach to point estimation for estimating population parameters: the interval estimation approach

• Interval estimation proposes a range of values in which the true parameter is likely to fall

• Providing a range of values gives a sense of what the parameter value might be, and of the precision with which we have estimated it

• Such a range of values is called a confidence interval
Interval Estimation

• Using the fact that (β̂ⱼ − βⱼ)/SE(β̂ⱼ) has a Student's t distribution with n − k − 1 degrees of freedom, we can make the following statement:

P( −t(n−k−1, α/2) < (β̂ⱼ − βⱼ)/SE(β̂ⱼ) ≤ t(n−k−1, α/2) ) = 1 − α

where t(n−k−1, α/2) is the critical value from the Student's t distribution such that the probability of finding a value larger than it is α/2
Interval Estimation

• Rearranging this expression, we can define an interval that has a (1 − α) probability of containing the parameter βⱼ:

P( β̂ⱼ − t(n−k−1, α/2) SE(β̂ⱼ) < βⱼ < β̂ⱼ + t(n−k−1, α/2) SE(β̂ⱼ) ) = 1 − α

• Then a 100(1 − α)% confidence interval for βⱼ is:

β̂ⱼ ± t(n−k−1, α/2) SE(β̂ⱼ)
Interval Estimation

• Suppose α = 0.05; then 1 − α = 0.95, and a 95% confidence interval is constructed as:

P( β̂ⱼ − t(n−k−1, 0.025) SE(β̂ⱼ) < βⱼ < β̂ⱼ + t(n−k−1, 0.025) SE(β̂ⱼ) ) = 0.95

And the endpoints of the interval are:

β̂ⱼ ± t(n−k−1, 0.025) SE(β̂ⱼ)
Interval Estimation: A practical Example

• In Lecture 1, we examined the relationship between firm performance (ROE) and CEO compensation. We estimated the parameters of the population model:

salary = β₀ + β₁ROE + ε

using a sample of 15 observations. This is what we found:

ŝalaryᵢ = 767.529 + 15.885 ROEᵢ

SE(β̂₀) = 158.32
SE(β̂₁) = 6.81
Interval Estimation: A practical Example

• Let's construct 95% confidence intervals for the population parameters (from the Student's t table, t(13, 0.025) = 2.16):

β̂₀ ± t(13, 0.025) SE(β̂₀) = 767.529 ± 2.16(158.32) = [425.558, 1109.500]

β̂₁ ± t(13, 0.025) SE(β̂₁) = 15.885 ± 2.16(6.81) = [1.1754, 30.5946]

• We estimate with 95% confidence that if ROE = 0, then the expected salary is between $425,558 and $1,109,500 (note that salary is in thousands of $)

• We estimate with 95% confidence that an increase in ROE of one percentage point will lead to an increase in the expected salary of between $1,175.4 and $30,594.6
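These endpoints can be reproduced with a few lines of code; a minimal sketch (the coefficients, standard errors and table critical value are the ones reported above):

```python
# 95% CI: beta_hat ± t_crit * SE(beta_hat)
# t_crit is taken from the Student's t table: 13 df, upper-tail area 0.025
t_crit = 2.16

def conf_int(beta_hat, se, t=t_crit):
    """Endpoints of a confidence interval, rounded to 3 decimals."""
    return (round(beta_hat - t * se, 3), round(beta_hat + t * se, 3))

print(conf_int(767.529, 158.32))  # intercept: (425.558, 1109.5)
print(conf_int(15.885, 6.81))     # slope: (1.175, 30.595)
```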
Hypothesis Testing in the Linear Regression Model

• Many business and economic decision problems require a judgment as to whether changes in one variable lead to changes in another, for example:

• Does a change in a company's net income determine a change in the company's stock returns?

• Is a company's performance related to its CEO compensation?

• By how much do sales increase if advertising expenses increase?
Hypothesis Testing in the Linear Regression Model

• Hypothesis testing procedures compare a conjecture we have about a population to the information contained in a sample of data

• With the linear regression model we describe relationships between variables:

y = β₀ + β₁x + ε

• In many cases we have an idea about the behaviour of y as x changes

• This idea or conjecture can be represented as a statement about the model parameters
Hypothesis Testing in the Linear Regression Model

• For example, according to corporate finance theory, firm performance should be positively related to CEO salary

• Using an econometric model we can test this theory. Let's consider the simple regression model:

salary = β₀ + β₁ROE + ε

• Assuming that firm performance is positively related to CEO salary corresponds to assuming that the population parameter β₁ is larger than zero

• As researchers, this is the hypothesis we support
Hypothesis Testing in the Linear Regression Model

• To conduct a test of hypothesis we need:

• a null hypothesis H₀ about a population parameter, which is the hypothesis that will be maintained unless there is strong evidence against it. The null hypothesis is the one that the researcher will try to disprove

• an alternative hypothesis H₁, which is the one the researcher supports

• a test statistic (based on sample data) to determine how strong the evidence is against the null hypothesis, so that, if the evidence is strong enough, we can reject the null in favour of the alternative
Testing Hypothesis about a Single Population Parameter: the t Test

• The null and the alternative hypotheses of the test are:

H₀: β₁ = 0   firm performance (ROE) is not related to CEO salary

H₁: β₁ > 0   firm performance (ROE) is positively related to CEO salary

• This is a test where the alternative is one-sided, also called a one-tailed test
Testing Hypothesis about a Single Population Parameter: the t Test

• How can we check if the evidence against the null is strong enough to reject it?

• We can use β̂₁, the estimate of the population parameter that we obtained from the sample of data. If the null hypothesis is true, a sample value of β̂₁ far away from zero (the value under the null hypothesis) provides evidence against the null

• Given that there is some sampling variability in the estimate β̂₁, the distance of β̂₁ from the hypothesized value under the null has to be weighed against the standard error of β̂₁
Testing Hypothesis about a Single Population Parameter: the t Test

• This leads to forming the test statistic:

t = (β̂₁ − β₁)/SE(β̂₁) = (β̂₁ − 0)/SE(β̂₁) = β̂₁/SE(β̂₁)

where the 0 in the numerator is the value of β₁ under the null hypothesis

• The t statistic measures how many estimated standard deviations β̂₁ is away from zero. Values of it far away from zero provide evidence against the null and hence should result in rejection of the null hypothesis
Testing Hypothesis about a Single Population Parameter: the t Test

• But what is a value far away from zero? It is an unusual value

• We have to look at the probability distribution that the t statistic follows if the null is true to find values far away from zero, or unusual values. The t statistic:

t = β̂₁/SE(β̂₁) ~ t(n − k − 1)

follows a Student's t distribution with n − k − 1 degrees of freedom
Testing Hypothesis about a Single Population Parameter: the t Test

• We could define an unusual value as one such that there is only a 5% probability of finding a larger value than it if the null is true

• We call this value the critical value

[Figure: upper tail of the t distribution; the shaded area to the right of the critical value is α = 0.05. The level of significance of a test (here α = 0.05) is the probability of observing an "unusual value" if the null is true.]
Testing Hypothesis about a Single Population Parameter: the t Test

• Thus, if the t statistic, calculated using the sample of data, is larger than the critical value, then we should reject the null hypothesis

• t(n−k−1, 0.05) is called the critical value because it separates the rejection region from the region of non-rejection

[Figure: t distribution with the critical value t(n−k−1, 0.05) separating the "do not reject H₀" region from the "reject H₀" region; the upper-tail area is α = 0.05.]
Testing Hypothesis about a Single Population Parameter: the t Test

Testing against one-sided alternatives (greater than zero)

H₀: β₁ = 0
H₁: β₁ > 0

• If we undertake a test at the 5% significance level, then we need to look up in the Student's t distribution table the value that leaves an area of 0.05 in the upper tail

• We reject the null hypothesis in favour of the alternative hypothesis if:

t = β̂₁/SE(β̂₁) > t(n−k−1, 0.05)

[Figure: rejection region in the upper tail, beyond the critical value t(n−k−1, 0.05).]
• In our example, suppose we select α = 0.05.

• The degrees of freedom are 15 − 1 − 1 = 13.

• The critical value t(13, 0.05) is 1.77
Testing Hypothesis about a Single Population Parameter: the t Test

Testing against one-sided alternatives (greater than zero)

H₀: β₁ = 0
H₁: β₁ > 0

• From the data: t = β̂₁/SE(β̂₁) = 15.885/6.81 ≈ 2.33

• Since t > t(13, 0.05) (2.33 > 1.77), we reject the null hypothesis at the 5% significance level. We say that β̂₁ is statistically significant, or statistically greater than zero, at the 5% significance level

[Figure: t distribution with critical value 1.77; the observed t = 2.33 falls inside the rejection region.]
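The decision rule for this one-sided test can be sketched directly in code (the numbers are those of the salary/ROE example; the critical value 1.77 is from the t table with 13 df):

```python
beta1_hat, se_beta1 = 15.885, 6.81
t_crit = 1.77                    # t table: 13 df, upper-tail area 0.05

t_stat = beta1_hat / se_beta1    # under H0: beta1 = 0, t = beta1_hat / SE
reject = t_stat > t_crit         # one-sided (greater-than) rejection rule

print(round(t_stat, 2), reject)  # 2.33 True -> reject H0 at the 5% level
```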
Testing Hypothesis about a Single Population Parameter: the p-value

• The p-value of the test statistic is the probability of getting the observed value of the test statistic, or a value with even greater evidence against the null, if the null is true

• Thus we can use an alternative decision rule:

• If p-value < α, reject H₀
• If p-value ≥ α, do not reject H₀

[Figure: upper tail of the t distribution; α = 0.05 is the area beyond the critical value 1.77, while the p-value = 0.018 is the area beyond the observed t = 2.33.]

Here: P(t > 2.33) = 0.018, which is smaller than 0.05, so we reject the null hypothesis at the 5% significance level
P-Value

• Guideline:

• If p-value < 0.01: very strong evidence against the null
• If 0.01 ≤ p-value < 0.05: strong evidence against the null
• If 0.05 ≤ p-value < 0.10: some weak evidence against the null
• If p-value ≥ 0.10: little or no evidence against the null
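The guideline maps directly onto a small helper function; a sketch:

```python
def evidence(p_value):
    """Strength of evidence against the null, per the guideline above."""
    if p_value < 0.01:
        return "very strong"
    elif p_value < 0.05:
        return "strong"
    elif p_value < 0.10:
        return "weak"
    return "little or none"

print(evidence(0.018))  # strong (the one-sided salary/ROE p-value)
```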
Testing Hypothesis about a Single Population Parameter: the t Test

Testing against one-sided alternatives (less than zero)

H₀: β₁ = 0
H₁: β₁ < 0

• In this case, we reject the null hypothesis in favour of the alternative hypothesis if the t statistic takes on a value smaller than the critical value:

t = β̂₁/SE(β̂₁) < −t(n−k−1, 0.05)

[Figure: rejection region in the lower tail, below the critical value −t(n−k−1, 0.05).]
Testing Hypothesis about a Single Population Parameter: the t Test

• The most common test is:

H₀: β₁ = 0
H₁: β₁ ≠ 0

• where under the alternative the explanatory variable has an effect, without specifying whether the effect is positive or negative

• This is a test where the alternative is two-sided

• In this type of test, if we reject the null hypothesis H₀: βⱼ = 0 at, say, the 5% significance level, we say that x is statistically significant, or statistically different from zero, at the 5% level
Testing Hypothesis about a Single Population Parameter: the t Test

Testing against two-sided alternatives

H₀: β₁ = 0
H₁: β₁ ≠ 0

Reject the null hypothesis if

t = β̂₁/SE(β̂₁) < −t(n−k−1, α/2)   or   t = β̂₁/SE(β̂₁) > t(n−k−1, α/2)

otherwise do not reject it.

Here the critical values are chosen so as to make the area in each tail equal to 2.5% (if we want to keep the 5% significance level).

[Figure: t distribution with rejection regions in both tails, below −t(n−k−1, 0.025) and above t(n−k−1, 0.025).]
• In our example, suppose we select as significance level for the test α = 0.05, so that α/2 = 0.025

• The degrees of freedom are 15 − 1 − 1 = 13.

• The critical value t(13, 0.025) is 2.16

• From the data: t = β̂₁/SE(β̂₁) = 15.885/6.81 ≈ 2.33

• Since t > t(13, 0.025) (2.33 > 2.16), we reject the null hypothesis and we say that β̂₁ is statistically significant, or statistically different from zero, at the 5% significance level
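The two-sided rule in code (same example; the critical value 2.16 is from the t table with 13 df):

```python
beta1_hat, se_beta1 = 15.885, 6.81
t_crit = 2.16                    # t table: 13 df, area 0.025 in each tail

t_stat = beta1_hat / se_beta1
reject = abs(t_stat) > t_crit    # two-sided rule: |t| > critical value

print(round(t_stat, 2), reject)  # 2.33 True -> reject H0 at the 5% level
```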
Testing Hypothesis about a Single Population Parameter: the p-value

• In a two-tailed test, the p-value corresponds to the sum of the upper- and lower-tail probabilities for the positive and negative values of the t statistic

[Figure: both tails of the t distribution; each tail beyond ±2.16 has area α/2 = 0.025, and each tail beyond the observed ±2.33 has area p-value/2 = 0.018.]

Here: P(t > 2.33) = 0.018 and P(t < −2.33) = 0.018

p-value = 0.018 + 0.018 = 0.036

Since 0.036 < 0.05, we reject the null hypothesis at the 5% significance level
Testing Other Hypotheses about a Single Population Parameter

• Although H₀: βⱼ = 0 is the most common hypothesis, we may sometimes want to test other hypotheses

• We can test whether βⱼ is equal to some given constant:

H₀: βⱼ = a  against  H₁: βⱼ ≠ a, or against H₁: βⱼ > a, or H₁: βⱼ < a

• where a is the hypothesized value of βⱼ. The test statistic becomes:

t = (β̂ⱼ − a)/SE(β̂ⱼ)

• By comparing t against the relevant critical value we determine the outcome of the test

• In this case we would say that β̂ⱼ is statistically different from a (or statistically larger/smaller than a)
Testing Other Hypotheses about a Single Population Parameter

• We can carry out two-tailed tests of hypothesis once a confidence interval is constructed

• If the null hypothesis is H₀: βⱼ = a, then the null is rejected against the alternative H₁: βⱼ ≠ a at the 5% significance level if the value a is not contained in the 95% confidence interval
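This duality is easy to encode; a sketch, using the 95% interval for the slope from the salary/ROE example, [1.175, 30.595]:

```python
def reject_at_5pct(a, ci_low, ci_high):
    """Two-tailed test of H0: beta_j = a via the 95% confidence interval."""
    return not (ci_low < a < ci_high)

ci = (1.175, 30.595)             # 95% CI for beta1 in the salary/ROE example
print(reject_at_5pct(0.0, *ci))  # True: 0 lies outside, reject H0: beta1 = 0
print(reject_at_5pct(10.0, *ci)) # False: 10 lies inside, do not reject
```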
Testing hypothesis in the CAPM

• The Capital Asset Pricing Model (CAPM) is an equilibrium model that states that the expected return on a stock i is equal to the risk-free rate plus a risk premium:

E(r_i) = r_f + β[E(r_m) − r_f]

• r_i and r_f are the return on a given security and the risk-free rate, respectively
• r_m is the return on the market portfolio
• β is the security's "beta" value, which measures the sensitivity of a given security's return to variation in the whole stock market return. Values of beta less than 1 indicate that the security is "defensive" (less risky than the market), since its variation is less than the market's. A beta greater than 1 indicates an "aggressive" stock (riskier than the market)
Testing hypothesis in the CAPM

• The econometric model is obtained by including an intercept and an error term:

r_i − r_f = β₀ + β₁(r_m − r_f) + ε

• β₁ is not observable from the market, but it can be estimated with the method that we have seen

• Typically r_f, the risk-free rate, is the rate on short-term Treasury Bills, while a broad stock market index is used as a proxy for the market (FTSE All-Share, FTSE 100, S&P 500, …)
Testing hypothesis in the CAPM

• Suppose we estimate r_i − r_f = β₀ + β₁(r_m − r_f) + ε for a stock using five years of monthly data, that is T = 60 observations, and find the following result:

êxcessreturn_t = 0.0138 + 1.347 mktexcessreturn_t
                 (0.0118)  (0.2089)

(standard errors in parentheses)

• β̂₁ = 1.347. This value is larger than 1; thus the stock appears to be "aggressive", or riskier than the market. However, we cannot conclude that the estimated beta is statistically larger than 1, and hence that the stock is indeed riskier than the market, unless we conduct a test of hypothesis
Testing hypothesis in the CAPM

• We can use a t test for testing the following hypotheses:

H₀: β₁ = 1 (the stock is as risky as the market)

H₁: β₁ > 1 (the stock is riskier than the market; the alternative hypothesis is the one that the researcher supports)

• The t statistic for this test is:

t = (β̂₁ − 1)/SE(β̂₁) = (1.347 − 1)/0.2089 = 1.661
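The same calculation as a sketch in code (the estimate, standard error and table critical value t(58, 0.05) = 1.672 are as reported in the slides):

```python
beta1_hat, se_beta1 = 1.347, 0.2089
t_crit = 1.672                   # t table: 58 df, upper-tail area 0.05

# H0: beta1 = 1 (as risky as the market) vs H1: beta1 > 1 (riskier)
t_stat = (beta1_hat - 1) / se_beta1
reject = t_stat > t_crit

print(round(t_stat, 3), reject)  # 1.661 False -> do not reject H0
```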
Testing hypothesis in the CAPM

• This is a one-sided upper-tail test, as the alternative hypothesis is "larger than"

• The critical value t(58, 0.05) is 1.672. The test statistic (t = 1.661) is smaller than the 5% critical value from the Student's t distribution (with 58 df). We do not reject the null hypothesis that the stock is as risky as the market

[Figure: upper tail of the t distribution; the observed t = 1.661 falls just short of the critical value 1.672.]
Testing for Abnormal Excess Returns- Jensen’s alpha

• In the model:

r_i − r_f = β₀ + β₁(r_m − r_f) + ε

• β₀ indicates whether a stock (or a portfolio) outperforms the market, in other words whether the stock is able to generate abnormal returns in excess of the market-required return for a stock of a given riskiness

• Testing for the presence and significance of abnormal returns was first done by Jensen (1968)
Testing for Abnormal Excess Returns- Jensen’s alpha

• In his formulation of the model, Jensen calls the intercept α (rather than β₀); it takes the name "Jensen's alpha":

r_i − r_f = α + β₁(r_m − r_f) + ε

• The null hypothesis and the alternative hypothesis are:

H₀: α = 0
H₁: α ≠ 0

• A positive and significant α for a given stock would suggest that the stock is able to earn significant abnormal returns, that is, to "beat the market"
Testing for Abnormal Excess Returns- Jensen’s alpha

• Here α̂ = 0.0138:

êxcessreturn_t = 0.0138 + 1.347 mktexcessreturn_t
                 (0.0118)  (0.2089)

(standard errors in parentheses)

• We test the following hypotheses:

H₀: α = 0
H₁: α ≠ 0

• The test statistic is: t = α̂/SE(α̂) = 0.0138/0.0118 = 1.169

• This is a two-sided test. The critical value for a 5% significance level is t(58, 0.025) = 2.00. Since t < t(58, 0.025), we do not reject the null hypothesis and we say that α̂ is not statistically different from zero at the 5% significance level
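The alpha test as a sketch in code (the estimate, standard error and table critical value t(58, 0.025) ≈ 2.00 are as reported above):

```python
alpha_hat, se_alpha = 0.0138, 0.0118
t_crit = 2.00                    # t table: 58 df, area 0.025 in each tail

# H0: alpha = 0 (no abnormal returns) vs H1: alpha != 0
t_stat = alpha_hat / se_alpha
reject = abs(t_stat) > t_crit    # two-sided rule

print(round(t_stat, 3), reject)  # 1.169 False -> no significant alpha
```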
References

Brooks, C. (2019). Introductory Econometrics for Finance, 4th edition. Cambridge University Press. Chapter 3.

Wooldridge, J.M. (2019). Introductory Econometrics: A Modern Approach, 7th edition. Cengage. Chapter 4.
