
Computer Exercises

Computer exercise 1: Matrix calculations in Excel
Computer exercise 2: Estimation method (OLS and MLE)
    OLS method
    MLE method
Computer exercise 3: The AR and unit root test
    The ACF
    Dickey-Fuller test
    Augmented Dickey-Fuller test
Computer exercise 4: Test of CAPM Model
    Wald test
    Likelihood Ratio test
Computer exercise 5: The event study

‭Computer exercise 1: Matrix calculations in Excel‬

A matrix has dimensions r × c:
- r = rows (vertical)
- c = columns (horizontal)

1. MMULT - multiplication
Requires the number of columns of the first matrix to equal the number of rows of the second matrix.
(r_a × c_a) * (r_b × c_b)
- c_a = r_b
- the result is r_a × c_b

2. MINVERSE - inverse A⁻¹
A = matrix
A⁻¹ = inverse matrix
Meaning that A * A⁻¹ = A⁻¹ * A = I (the identity matrix)

Only square matrices have an inverse, meaning that r = c.

3. TRANSPOSE - transpose A'
Here we swap the rows and columns of a matrix.
r × c --> c × r
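The three Excel functions map directly onto NumPy operations; a minimal sketch with made-up example matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 5.0]])        # 2x2 and square, so it has an inverse
B = np.array([[1.0, 0.0],
              [2.0, 1.0]])

AB = A @ B                        # MMULT: c_a must equal r_b; result is r_a x c_b
A_inv = np.linalg.inv(A)          # MINVERSE: square matrices only
A_T = A.T                         # TRANSPOSE: r x c --> c x r

# A * A^-1 = A^-1 * A = I, the identity matrix
print(np.allclose(A @ A_inv, np.eye(2)))  # True
```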
‭Computer exercise 2: Estimation method (OLS and MLE)‬

‭OLS method‬
Ordinary least squares estimator: The estimator generating the set of parameter values that minimizes the sum of squared residuals.

OLS estimator = (X'X)⁻¹ * X'y

10 steps - can be done at once by the computer (Data → Data Analysis)


‭1.‬ ‭Define names for x and y‬
2. Calculate transpose
X'X

X = 100*2
X' = 2*100
X'X = 2*2
MMULT(TRANSPOSE(X); X)

3. Calculate the inverse for the new matrix
(X'X)⁻¹

X'X = 2*2, so (X'X)⁻¹ = 2*2
MINVERSE(select above)

4. Calculate X'y

X' = 2*100
y = 100*1
X'y = 2*1
MMULT(TRANSPOSE(X); y)

5. Calculate the OLS parameter estimate

(X'X)⁻¹ * X'y

In this case the parameter vector β is 2*1.

MMULT((X'X)⁻¹; X'y)
- both from the steps above

6. Calculate and name e (in table)
Don't forget Ctrl+Shift+Enter for the whole table e.
Y = Xβ + E
- E = error term
E = Y - Xβ

7. Estimate standard errors

s² = e'e / (T - K)
- e = error term
- T = nr of observations
- K = nr of parameters

Variance for error term s²
Y = Xβ + E
- E = error term
E = Y - Xβ
E = 100*1
E' = 1*100
E'E = 1*1

MMULT(TRANSPOSE(E); E) / (100 - 2)

- here T = 100 and K = 2

8. Calculate the variance for each parameter

s² * (X'X)⁻¹

Here we simply take s² * the whole matrix (X'X)⁻¹ from earlier.

9. Calculate the standard error for each parameter

SE = SQRT(variance for parameter)
- Here the variance for each parameter follows the diagonal, starting from e.g. β₁.
- For β₂ this value is 0,0176621.
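Steps 2-9 can be sketched outside Excel as well; a NumPy version on simulated data (the data and "true" parameters here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 100, 2                          # 100 observations, 2 parameters
x = rng.normal(size=T)
X = np.column_stack([np.ones(T), x])   # T x K design matrix (constant + regressor)
y = 1.0 + 2.0 * x + rng.normal(size=T) # simulated data, true beta = (1, 2)

XtX_inv = np.linalg.inv(X.T @ X)       # steps 2-3: (X'X)^-1
beta = XtX_inv @ (X.T @ y)             # steps 4-5: OLS estimate (X'X)^-1 X'y
e = y - X @ beta                       # step 6: residuals e = y - X*beta
s2 = (e @ e) / (T - K)                 # step 7: s^2 = e'e / (T - K)
var_beta = s2 * XtX_inv                # step 8: variance of each parameter
se = np.sqrt(np.diag(var_beta))        # step 9: standard errors (the diagonal)
```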


10. Hypothesis testing
β₁ and β₂ = 0 means that there's no relationship between x and y.

‭3 methods:‬
1. Test statistic/Significance method
Reject H0 if |test statistic| > critical value.

ex. Test statistic is -2, critical value = ±1,96 → we reject since |-2| > 1,96.

test statistic = (β̂ - β*) / SE(β̂)
- β* = the value stated by the hypothesis
- β is just notation; it could be alpha as well.

Significance level 5% = critical value ±1,96

If H0: β₁ = 0
and the test statistic turns out to be 5,1, we reject:
- Reject H0: β₁ = 0 since 5,1 > 1,96

2. Confidence interval
→ reject H0 if β* is not included in the interval.

(95% confidence interval = 1 - 5% significance level → critical value 1,96)
(90% confidence interval = 1 - 10% significance level → critical value 1,64)
(99% confidence interval = 1 - 1% significance level → critical value 2,58)

Lower bound: β̂ - t_c * SE(β̂)
Upper bound: β̂ + t_c * SE(β̂)

- t_c = critical value given the confidence level
- SE(β̂) = standard error of beta hat (from step 9)

3. P-value approach
Reject H0 if the P-value < significance level.

- If it's very small, the null hypothesis is not plausible, which means that we reject it.
- If it's very large, then we fail to reject.

P-value = 2 * (1 - NORMSDIST(ABS(test statistic)))
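All three methods can be checked against each other; a small sketch with illustrative numbers (the β̂, β* and SE below are not from the exercise):

```python
from math import erf, sqrt

def normsdist(z):
    # standard normal CDF, same role as Excel's NORMSDIST
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

beta_hat, beta_star, se = 0.35, 0.0, 0.10   # made-up numbers for illustration

# 1. Test statistic vs critical value (5% level -> +-1,96)
t_stat = (beta_hat - beta_star) / se
reject_by_stat = abs(t_stat) > 1.96

# 2. 95% confidence interval: reject H0 if beta* is not inside it
lower = beta_hat - 1.96 * se
upper = beta_hat + 1.96 * se
reject_by_ci = not (lower <= beta_star <= upper)

# 3. P-value = 2 * (1 - NORMSDIST(ABS(test statistic)))
p_value = 2.0 * (1.0 - normsdist(abs(t_stat)))
reject_by_p = p_value < 0.05
```

All three decisions agree by construction: they are the same test stated three ways.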


11. Regression results
Y = Xβ + E

y_t = β₁ + β₂ * x_t

Then compare β₁ and β₂ each with its standard error (from step 9).

Draw conclusions, ex:
- Both are significantly different from zero.
- There's a positive relationship between x and y.

Do the whole output at once:

1. Data
2. Data Analysis
3. Regression
4. Input Y Range: whole column
5. Input X Range: whole column
6. Output Range: where you want to place the regression
NB: do not tick "constant is zero".

‭MLE method‬
- Maximum Likelihood Estimation
- The idea is to find the parameter values that maximize the probability of observing the data.

A powerful and flexible method to estimate both linear and non-linear regression models.

‭Here we consider a simple regression model:‬

Specify the starting/initial values for the parameters

- We have three unknown parameters: β₁, β₂ and σ².
- We need to specify their initial values. We can guess "whatever", although σ² cannot be negative.
ex. 0, 0, 1 (σ² cannot be 0)

‭Log likelihood value‬

1. Calculate the error term → e = Y - β₁ - β₂*X

2. Calculate the log likelihood value for each time period
(lnL_t) → don't forget to lock the parameter cells!

3. Sum up the total likelihood value

4. Maximize the total log likelihood with SOLVER
by allowing the values of β₁, β₂ and σ² to be varied:
1. Data
2. SOLVER
3. Set objective: the SUM cell
4. By changing variable cells: the parameters
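The Solver step is a numerical maximization; a sketch of the same idea in Python, using scipy's general-purpose optimizer as a stand-in for SOLVER (the data and true parameters are simulated for illustration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T = 100
x = rng.normal(size=T)
y = 1.0 + 2.0 * x + rng.normal(size=T)    # simulated sample, true (B1, B2) = (1, 2)

def neg_loglik(params):
    b1, b2, s2 = params
    if s2 <= 0:                            # sigma^2 cannot be negative (or zero)
        return np.inf
    e = y - b1 - b2 * x                    # step 1: error term
    lnL_t = -0.5 * np.log(2 * np.pi * s2) - e**2 / (2 * s2)   # step 2: lnL per period
    return -np.sum(lnL_t)                  # step 3: the sum (negated, since we minimize)

# step 4: start from initial values (sigma^2 strictly positive) and optimize
res = minimize(neg_loglik, x0=[0.0, 0.0, 1.0], method="Nelder-Mead")
b1_hat, b2_hat, s2_hat = res.x
```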

Estimate the variance covariance matrix of the parameters using an estimate of the information matrix, based on the expected value of the outer product of the first derivatives.

First we calculate the information matrix (there are 2 options) and then calculate the variance covariance matrix from it.

N = nr of observations = T = 100

Information matrix (2 ways to calculate):
Way 1:
- Take the AVERAGE of the outer products of the first derivatives over all N = 100 observations.

Way 2:
- Multiply the sum of the outer products by N⁻¹ = 1/100.

Variance covariance matrix:

N⁻¹ = 1/100 = the same scaling as taking the average
I_OPG⁻¹ = MINVERSE(info matrix)

● Variance covariance matrix = 1/100 * MINVERSE(info matrix)
→ the whole matrix
● SE(β) = SQRT(its variance from the variance covariance matrix)
● t(β) = test statistic of beta = (β̂ - β*) / SE(β̂)
Here β* = 0.
● We reject if |t(β)| > critical value (ex. 5% sign. level → critical value ±1,96)
● P-value = 2 * (1 - NORMSDIST(ABS(t(β))))
    ○ We reject if it's smaller than the significance level.
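Way 1 and the variance covariance step can be sketched as follows; the score expressions are the standard first derivatives of the normal log likelihood, and the closed-form estimates stand in for what the Solver step would produce:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100
x = rng.normal(size=T)
y = 1.0 + 2.0 * x + rng.normal(size=T)     # simulated sample

# stand-in for the Solver output: closed-form MLE of the linear model
X = np.column_stack([np.ones(T), x])
b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - b1 - b2 * x
s2 = np.mean(e**2)                          # MLE of sigma^2 divides by T

# first derivatives of lnL_t w.r.t. (b1, b2, sigma^2), one row per observation
g = np.column_stack([e / s2,
                     e * x / s2,
                     -1 / (2 * s2) + e**2 / (2 * s2**2)])

info = (g.T @ g) / T                        # way 1: AVERAGE of the outer products
varcov = np.linalg.inv(info) / T            # N^-1 * MINVERSE(info matrix)
se = np.sqrt(np.diag(varcov))               # SE = SQRT of the diagonal variances
t_stats = np.array([b1, b2, s2]) / se       # test statistics with beta* = 0
```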
‭Computer exercise 3: The AR and unit root test‬

‭The ACF‬
1. Calculate continuous daily returns → r_t
Add next to Time and Index

r_t = ln(P_t / P_{t-1})

- P = price or index level
- t = specific time
→ P_t = price at time t

Note:
We will not have any return value for the first period, since there is nothing earlier to compare with.

2. Estimate the first 3 values of the sample ACF → τ₁, τ₂ & τ₃

The ACF is the sample autocorrelation function at different lags.

Add in the sheet

Tau: the sample autocorrelation at different lags:
τ₁ = Corr(r_t, r_{t-1})
τ₂ = Corr(r_t, r_{t-2})
τ₃ = Corr(r_t, r_{t-3})

3. Perform a significance test for each of them.
→ Reject if |Z_k| > t_c

- T = nr of observations
In this case we start from 500 observations; since period 1 gives no return value, the effective sample shrinks, so that for τ₁, T = 499, for τ₂, T = 498 and for τ₃, T = 497.
- However, since T is so large, we do not need to account for this when calculating the P-value and the rejection region.

4. P-value
→ Reject if P-value < sign. level

2 * (1 - NORMSDIST(ABS(test stat)))

5. Non-rejection region 95%
→ Reject if the value (tau) isn't in the interval.
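Steps 1-5 can be sketched as follows (the price series is simulated, so the returns are white noise by construction):

```python
import numpy as np

def sample_acf(r, k):
    # sample autocorrelation at lag k: Corr(r_t, r_{t-k})
    r = np.asarray(r, dtype=float)
    rbar = r.mean()
    num = np.sum((r[k:] - rbar) * (r[:-k] - rbar))
    den = np.sum((r - rbar)**2)
    return num / den

rng = np.random.default_rng(2)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=500)))  # fake index level
r = np.log(prices[1:] / prices[:-1])          # step 1: continuous daily returns

tau = [sample_acf(r, k) for k in (1, 2, 3)]   # step 2: tau_1, tau_2, tau_3
T = len(r)                                    # 499: the first period has no return
z = [np.sqrt(T) * t for t in tau]             # step 3: test statistic ~ N(0,1) under H0
bound = 1.96 / np.sqrt(T)                     # step 5: 95% non-rejection region +- bound
```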

6. Plot the sample ACF on a correlogram
a) Insert
b) Charts
c) All charts
d) Combo
e) Lower bound and upper bound = simple line
f) Tau = column bar

Here, the autocorrelation function at lags 1, 2 and 3 is actually 0 in a statistical sense.

White noise process: means that the autocorrelation at any lag is not significantly different from zero.
Dickey-Fuller test
A test method to examine whether a time series is stationary or not, i.e. whether its statistical properties such as mean and variance are constant over time.

It tests the null hypothesis (H₀) that the time series has a unit root, which means that it is non-stationary, i.e. its statistical properties are not constant.

Hence, if the test gives a statistically significant result and rejects the null hypothesis, this is interpreted as the series not having a unit root and therefore being stationary.

Model to see if the time series is stationary or not:

1. Enter the values
2. Choose the form of the Dickey-Fuller test (Model A, B or C)
3. Run the regression (Data Analysis, Regression → y = ΔY_t, X = Y_{t-1} → do not tick "constant is zero"; if we do, we get Model A.)
4. Look up the critical value
5. Compute the Dickey-Fuller test statistic and see whether to reject H₀ or not.

Three forms of the Dickey-Fuller test

Model A: Δy_t = ψy_{t-1} + u_t

Model B: Δy_t = ψy_{t-1} + μ + u_t

Model C: Δy_t = ψy_{t-1} + μ + λt + u_t
- here we add not only the constant term (μ) but also a time trend (λ) to the regression model.

ψ = psi, the coefficient on y_{t-1}
λ = time trend
μ = constant term
Δy = first difference in the index value
y_{t-1} = lagged index

Here we add the following columns in the table next to the index:

1. Δy (first difference in the index)
   Index 2 - Index 1
2. y_{t-1}
   Index 1
3. Δy_{t-1}
   Diff index 1

SUMMARY OUTPUT → regression

1. Data Analysis
2. Regression
   a. y = ΔY_t, X = Y_{t-1}
   Do not tick "constant is zero"; if we do, we get Model A.

Intercept = μ (constant term)
X Variable 1 = ψ, the coefficient on y_{t-1}
- Its t Stat is the Dickey-Fuller test statistic, which is to be compared with t_c (the critical value).

Critical value (from the Dickey-Fuller table):
- N = nr of observations
- Top row = significance level

Compare the DF test statistic with the critical value
→ Reject if DF test stat < t_c
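A sketch of the Dickey-Fuller test (Model B) on a simulated stationary AR(1) series, where we expect to reject the unit-root null; the critical value -2.87 is the approximate 5% value for Model B with large N:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):
    y[t] = 0.5 * y[t-1] + rng.normal()   # stationary AR(1), no unit root

dy = np.diff(y)                          # Delta y_t (first difference)
y_lag = y[:-1]                           # y_{t-1} (lagged index)

# Model B: Delta y_t = psi * y_{t-1} + mu + u_t, estimated by OLS
X = np.column_stack([np.ones(T - 1), y_lag])
XtX_inv = np.linalg.inv(X.T @ X)
coef = XtX_inv @ (X.T @ dy)              # (mu_hat, psi_hat)
e = dy - X @ coef
s2 = (e @ e) / (T - 1 - 2)
se = np.sqrt(np.diag(s2 * XtX_inv))

df_stat = coef[1] / se[1]                # "t Stat" on y_{t-1} = DF test statistic
reject = df_stat < -2.87                 # approx. 5% critical value, Model B, large N
```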
Augmented Dickey-Fuller test
The Dickey-Fuller test (DF test) and the Augmented Dickey-Fuller test (ADF test) are both statistical tests used to examine whether a time series is stationary or not.

The plain DF test only works if the error term (u) in the model is white noise.
If there is autocorrelation in ΔY_t, this will of course affect the error term, and it will then no longer be a white noise error term.

If we can no longer assume that ΔY_t depends only on Y_{t-1}, we have to augment the model with lagged differences:

- That part is regarded as the augmented term.
- Same as α₁ΔY_{t-1} + α₂ΔY_{t-2} + ... + α_pΔY_{t-p}

Augmented Dickey-Fuller test

Model A
Δy_t = ψy_{t-1} + α₁Δy_{t-1} + u_t

Model B (+ μ)
Δy_t = ψy_{t-1} + μ + α₁Δy_{t-1} + u_t

Model C (+ μ + λt)
Δy_t = ψy_{t-1} + μ + λt + α₁Δy_{t-1} + u_t

ψ = psi, the coefficient on y_{t-1}
λ = time trend
μ = constant term
Δy = first difference in the index value
Δy_{t-1} = lagged first difference (the augmented term)

ADF Model B
SUMMARY OUTPUT regression:
- y = ΔY_t, X = Y_{t-1} and ΔY_{t-1} → do not tick "constant is zero"; if we do, we get Model A.
- First add the ΔY_{t-1} column.

Intercept = μ (constant term)
X Variable 1 = ψ, the coefficient on y_{t-1}
- Its t Stat is the Dickey-Fuller test statistic, which is to be compared with t_c (the critical value).

X Variable 2 = α₁, the coefficient on Δy_{t-1} (the augmented term)

Critical value:
- N = nr of observations
- Top row = significance level

Compare the DF test statistic with the critical value
→ Reject if DF test stat < t_c
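A sketch of the ADF Model B regression with one augmented lag, on a simulated stationary AR(2) series; the -2.87 critical value is again the approximate large-N 5% value for Model B:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 500
y = np.empty(T)
y[0] = y[1] = 0.0
for t in range(2, T):                    # stationary AR(2) series (no unit root)
    y[t] = 0.5 * y[t-1] + 0.2 * y[t-2] + rng.normal()

dy = np.diff(y)                          # Delta y_t
y_lag = y[1:-1]                          # y_{t-1}
dy_lag = dy[:-1]                         # Delta y_{t-1} (the augmented term)
dep = dy[1:]                             # Delta y_t, aligned with the lags

# Model B: Delta y_t = psi*y_{t-1} + mu + alpha_1*Delta y_{t-1} + u_t
X = np.column_stack([np.ones(len(dep)), y_lag, dy_lag])
XtX_inv = np.linalg.inv(X.T @ X)
coef = XtX_inv @ (X.T @ dep)             # (mu, psi, alpha_1)
e = dep - X @ coef
s2 = (e @ e) / (len(dep) - 3)
se = np.sqrt(np.diag(s2 * XtX_inv))

adf_stat = coef[1] / se[1]               # t Stat on y_{t-1} = ADF test statistic
reject = adf_stat < -2.87                # approx. 5% critical value, Model B
```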
ADF Model C
Δy_t = ψy_{t-1} + μ + λt + α₁Δy_{t-1} + u_t

SUMMARY OUTPUT regression:
- y = ΔY_t, X = Y_{t-1}, ΔY_{t-1} and time trend t → do not tick "constant is zero"; if we do, we get Model A.

Intercept = μ (constant term)
X Variable 1 = ψ, the coefficient on y_{t-1}
- Its t Stat is the Dickey-Fuller test statistic, which is to be compared with t_c (the critical value).

X Variable 2 = α₁, the coefficient on Δy_{t-1} (the augmented term)
X Variable 3 = λ, the coefficient on the time trend t

Critical value:
- N = nr of observations
- Top row = significance level

Compare the DF test statistic with the critical value
→ Reject if DF test stat < t_c

‭Computer exercise 4: Test of CAPM Model‬


Capital Asset Pricing Model - CAPM
- i = individual stock
- e = excess return
- t = time period

Panel data N×T
Alpha = N×1 matrix

R₁ - R₅ = excess returns on five portfolios (daily)
R_m = excess return on a broad stock index

The expected excess return of any asset i → CAPM
E[Rᵢ] = β_im * E[R_m]
- Beta of the asset * expected excess return of the market portfolio.
- Linear relationship
- We need to design an empirical model to test this theoretical model.

The empirical model for testing the Sharpe-Lintner CAPM:

- We test if alpha_i is jointly zero for all the assets.

- R_t = observed excess returns on N risky assets
- a = intercepts
- Beta = betas of the N assets
- R_m = observed excess return on the market portfolio
N = how many assets = 5 in this case
T = how many time periods = 198

The model is written in matrix form; written out per asset:
- R₁t = a₁ + β₁R_mt + ε₁t
- R₂t = a₂ + β₂R_mt + ε₂t
- R₃t = a₃ + β₃R_mt + ε₃t
...

We test if alpha (the intercepts) is jointly zero for all the assets.

H₀: a = 0 → N×1
H₁: a ≠ 0

In order to test if alpha (the intercept) is zero we estimate the models:
1. Initial values
Enter adjustable cells for a and β

In the lab:
a = 1
b = 1

In the video:
a = 0
b = 1

2. Calculate the errors for the regressions (in this case 5)

- R₁t = a₁ + β₁R_mt + ε₁t
- R₂t = a₂ + β₂R_mt + ε₂t
- R₃t = a₃ + β₃R_mt + ε₃t

We rewrite the formula to solve for the error (ε):
- ε₁t = R₁t - a₁ - β₁R_mt
- ε₂t = R₂t - a₂ - β₂R_mt
- ε₃t = R₃t - a₃ - β₃R_mt
Don't forget to lock alpha and beta (absolute references)!

3. Calculate the log-likelihood

- The log likelihood value is a measure of goodness of fit for any model. The higher the value, the better the model.

We divide the calculations:

a) Matrix → Sigma (Σ)
Since we have 5 portfolios this will be 5×5.

Σ = (1/T) * ε' * ε

= 1/198 * MMULT(TRANSPOSE(whole error matrix); whole error matrix)

b) Calculate the inverse of the matrix → Σ⁻¹
= MINVERSE(matrix)

c) Calculate ε_t' Σ⁻¹ ε_t

= ε_t' * Σ⁻¹ * ε_t
= MMULT(MMULT(error matrix first row; MINVERSE of matrix); TRANSPOSE(error matrix first row))

DON'T FORGET TO LOCK THIS (absolute references)

d) Calculate the log likelihood → lnL_i

lnL_i = -(N/2) * ln(2π) - (1/2) * ln|Σ| - (1/2) * ε_t'Σ⁻¹ε_t

= -N/2*LN(2*PI()) - 1/2*LN(MDETERM(matrix)) - 1/2 * ε_t'Σ⁻¹ε_t

We use MDETERM instead of ABS, since |Σ| here is the determinant, not the absolute value.

e) Sum the total likelihood → SUM(lnL_i)
Sum over all time periods.

f) Solver
Objective: the total likelihood cell
By changing cells: a → β (all alphas and betas)

Don't forget to untick "make...".
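Steps a)-f) can be sketched as follows; the returns are simulated, and the Solver step is replaced by evaluating the total log likelihood at two parameter guesses, just to show that worse parameters give a lower value:

```python
import numpy as np

rng = np.random.default_rng(5)
T, N = 198, 5                            # 198 days, 5 portfolios
Rm = rng.normal(0.0005, 0.01, size=T)    # market excess return (simulated)
alpha = np.zeros(N)                      # true model: alpha jointly zero
beta = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
R = alpha + np.outer(Rm, beta) + rng.normal(0, 0.005, size=(T, N))

def total_loglik(a, b):
    eps = R - a - np.outer(Rm, b)        # step 2: T x N error matrix
    Sigma = (eps.T @ eps) / T            # step a: Sigma = (1/T) eps'eps, N x N
    Sigma_inv = np.linalg.inv(Sigma)     # step b
    quad = np.einsum("ti,ij,tj->t", eps, Sigma_inv, eps)  # step c: e_t' Sigma^-1 e_t
    # step d: lnL per period, using the determinant (MDETERM), not ABS
    lnL = -N/2 * np.log(2*np.pi) - 0.5 * np.log(np.linalg.det(Sigma)) - 0.5 * quad
    return lnL.sum()                     # step e: total log likelihood

lnL_true = total_loglik(alpha, beta)
lnL_bad = total_loglik(alpha + 0.01, beta)   # worse parameters -> lower lnL
```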

‭Wald test‬
g) Wald test statistics

Alpha hat = the maximum likelihood estimate of alpha
- used to test the alphas jointly, given the MLE
H₀: alpha = 0 → for a single alpha, (alpha/SE(alpha))², where alpha/SE(alpha) ~ N(0,1)

Background: if X ~ N(0,1), then X₁² + ... + X₁₀₀² follows a chi-square distribution with 100 degrees of freedom.

Name R_m first:
● μ_m² = (AVERAGE(R_m))²
● σ_m² = VAR(R_m)

Wald test statistic (matrix form):
● J₀ = alpha_hat' * (variance covariance matrix of alpha_hat)⁻¹ * alpha_hat
● = MMULT(MMULT(TRANSPOSE(alphas); MINVERSE(variance covariance matrix)); alphas)

Test hypothesis
● Critical value (T_c) = CHIINV(probability; deg_freedom) = CHIINV(0,05; 5)
    ○ probability = sign. level = 5% in this case
    ○ deg_freedom = N = 5 in this case
● P-value = CHIDIST(J₀; 5)

Reject if the Wald test statistic > T_c
Reject if P-value < sign. level
- If we fail to reject:
Alpha is not significantly different from 0 for all the five assets. In other words, the theoretical model E[Rᵢ] = β_im * E[R_m] holds given this sample data.
- If we reject:
Alpha is significantly different from 0 for the five assets. In other words, the theoretical model E[Rᵢ] = β_im * E[R_m] doesn't hold given this sample data.
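A sketch of the Wald test on simulated data where alpha = 0 holds; the variance covariance matrix of alpha hat is taken as (1/T)(1 + μ_m²/σ_m²)Σ, the standard Sharpe-Lintner CAPM result, stated here as an assumption since the notes lost the formula images:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(6)
T, N = 198, 5
Rm = rng.normal(0.0005, 0.01, size=T)
beta_true = np.linspace(0.8, 1.2, N)
R = np.outer(Rm, beta_true) + rng.normal(0, 0.005, size=(T, N))  # alpha = 0 holds

# estimates asset by asset (identical to OLS here)
X = np.column_stack([np.ones(T), Rm])
coef = np.linalg.lstsq(X, R, rcond=None)[0]   # 2 x N: row 0 = alphas, row 1 = betas
alpha_hat = coef[0]
eps = R - X @ coef
Sigma = (eps.T @ eps) / T

mu2 = Rm.mean()**2                       # mu_m^2 = (AVERAGE(Rm))^2
s2m = Rm.var(ddof=1)                     # sigma_m^2 = VAR(Rm)

# assumed Var(alpha_hat) = (1/T) * (1 + mu_m^2/sigma_m^2) * Sigma
varcov = (1 + mu2 / s2m) * Sigma / T
J0 = alpha_hat @ np.linalg.inv(varcov) @ alpha_hat   # ~ chi-square(N) under H0

crit = chi2.ppf(0.95, df=N)              # CHIINV(0,05; 5)
p_value = chi2.sf(J0, df=N)              # CHIDIST(J0; 5)
reject = J0 > crit
```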

Likelihood Ratio test

(LR test)
This is used to compare two competing models:
1. Restricted (L*): r_t = βR_mt + ε_t, with a = 0
2. Unrestricted (L): r_t = a + βR_mt + ε_t

The LR test is used to test whether the restriction (a = 0) is necessary or not.

1. Add the restriction a = 0
2. Solver
   Only allow β to be changed

3. Add
   lnL* = max log likelihood for the restricted model
   lnL = max log likelihood for the unrestricted model

   lnL* restricted = new value
   lnL unrestricted = old value

4. Calculate LR = 2 * (lnL - lnL*)

Test hypothesis
● Critical value (T_c) = CHIINV(probability; deg_freedom) = CHIINV(0,05; 5)
    ○ probability = sign. level = 5% in this case
    ○ deg_freedom = N = 5 in this case
● P-value = CHIDIST(LR; 5)

Reject if the LR test statistic > T_c
Reject if P-value < sign. level
- If we fail to reject:
= restricted model
r_t = βR_mt + ε_t

We keep the restriction a = 0, meaning we use the restricted model. In other words, we claim that the Sharpe-Lintner CAPM model holds empirically given the sample data: E[Rᵢ] = β_im * E[R_m].

- If we reject:
= unrestricted model
r_t = a + βR_mt + ε_t

We don't need the restriction a = 0, meaning we use the unrestricted model. In other words, we claim that the Sharpe-Lintner CAPM model doesn't hold empirically given the sample data: E[Rᵢ] = β_im * E[R_m].

Summary: both tests in this case give us the same conclusion: we cannot reject the null hypothesis.
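The LR test can be sketched the same way; here the Solver maximizations are replaced by closed-form maxima of the multivariate normal likelihood (a standard result, used here as an assumption):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
T, N = 198, 5
Rm = rng.normal(0.0005, 0.01, size=T)
R = np.outer(Rm, np.linspace(0.8, 1.2, N)) + rng.normal(0, 0.005, size=(T, N))

def max_loglik(restricted):
    # closed-form maximum of the multivariate normal log likelihood
    X = Rm[:, None] if restricted else np.column_stack([np.ones(T), Rm])
    coef = np.linalg.lstsq(X, R, rcond=None)[0]
    eps = R - X @ coef
    Sigma = (eps.T @ eps) / T
    quad = np.einsum("ti,ij,tj->t", eps, np.linalg.inv(Sigma), eps)
    return np.sum(-N/2*np.log(2*np.pi) - 0.5*np.log(np.linalg.det(Sigma)) - 0.5*quad)

lnL_star = max_loglik(restricted=True)    # lnL*: model with a = 0 imposed
lnL = max_loglik(restricted=False)        # lnL: unrestricted model

LR = 2 * (lnL - lnL_star)                 # ~ chi-square(N) under H0: a = 0
crit = chi2.ppf(0.95, df=N)               # CHIINV(0,05; 5)
p_value = chi2.sf(LR, df=N)               # CHIDIST(LR; 5)
```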

‭Computer exercise 5: The event study‬

1. Calculate returns
   Log returns on all prices

2. Within the estimation window, run OLS on the market model
   - alpha = intercept → y = firm return, x = market return
   - beta = slope → y = firm return, x = market return

3. Within the estimation window, find the error term
   We lock the alphas and betas (absolute references).

4. Variance of the error term: VAR(e_i) = 1/(L1-2) * e_i'e_i

5. Within the event window, find the abnormal return (AR), which is the observed/actual return minus the expected normal return:
   a. Observed return, R_it*
      Take from the returns calculated earlier (step 1)
   b. Expected normal return E(R_it*|Ω_it)
      = alpha_i + beta_i * market return
   c. Abnormal return (AR)
      = Observed return - Expected normal return

6. Aggregate the abnormal returns and test the null hypotheses
   a. Mean(AR)
      AVERAGE(abnormal return, all firms) → then drag down
   b. S.E.(Mean(AR))
      SQRT(SUM(variance of error term, all firms)) → same for all
   c. t ratio
      Mean(AR) / S.E.(Mean(AR)) → drag down

7. Test for a specific event window
   We are given an event window and L
   a. Mean(CAR)
      SUM(Mean(AR)) over the window
   b. S.E.(Mean(CAR))
      S.E.(Mean(AR)) * SQRT(L)
   c. t ratio
      Mean(CAR) / S.E.(Mean(CAR))

Reject if:
● |t ratio| > T_c
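Steps 1-7 can be sketched end-to-end; the firms, windows and returns are simulated, and dividing the S.E. of the cross-firm mean by the number of firms (the usual scaling for an average) is an assumption on my part, since the shorthand in step 6b does not show it:

```python
import numpy as np

rng = np.random.default_rng(8)
n_firms, L1, L2 = 10, 100, 21                              # estimation / event windows
Rm_est = rng.normal(0.0005, 0.01, size=L1)                 # market return, est. window
Rm_evt = rng.normal(0.0005, 0.01, size=L2)                 # market return, event window

var_e = []
AR = np.empty((L2, n_firms))
X = np.column_stack([np.ones(L1), Rm_est])
for i in range(n_firms):
    Ri_est = 1.0 * Rm_est + rng.normal(0, 0.01, size=L1)   # firm return (alpha 0, beta 1)
    coef = np.linalg.lstsq(X, Ri_est, rcond=None)[0]       # step 2: OLS market model
    a_hat, b_hat = coef
    e = Ri_est - X @ coef                                  # step 3: error term
    var_e.append((e @ e) / (L1 - 2))                       # step 4: VAR(e_i)
    Ri_evt = 1.0 * Rm_evt + rng.normal(0, 0.01, size=L2)   # observed event-window return
    AR[:, i] = Ri_evt - (a_hat + b_hat * Rm_evt)           # step 5: observed - expected

mean_AR = AR.mean(axis=1)                                  # step 6a
se_mean_AR = np.sqrt(np.sum(var_e)) / n_firms              # step 6b (with 1/N scaling)
t_AR = mean_AR / se_mean_AR                                # step 6c

L = L2                                                     # step 7: event window length
mean_CAR = mean_AR.sum()                                   # 7a
se_mean_CAR = se_mean_AR * np.sqrt(L)                      # 7b
t_CAR = mean_CAR / se_mean_CAR                             # 7c
```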
