
Formulas and Probability Tables accompanying the exam
Quantitative Methods III (EBC2011)

Please do not write on these sheets, and return them after the examination!

QM3, 2017/2018: selected Math formulas

Determinant criterion for max (min) $f(x,y)$ subject to $g(x,y) = c$:
A stationary point $(a,b)$ with corresponding Lagrange multiplier $\lambda_0$ solves the (local) optimization problem if the determinant of the Hessian matrix of the Lagrange function satisfies
$\det H_L(a,b,\lambda_0) > 0$ in case of maximization, and
$\det H_L(a,b,\lambda_0) < 0$ in case of minimization.

Kuhn-Tucker conditions for max (min) $f(x,y)$ subject to $g(x,y) - c \le 0$, where $L(x,y)$ is the corresponding Lagrange function:
$L'_x(x,y) = 0$
$L'_y(x,y) = 0$
$\lambda = 0$ or $g(x,y) = c$
$g(x,y) - c \le 0$ and $\lambda \ge 0$ ($\lambda \le 0$ for min)
Sufficient if moreover $L(x,y,\lambda_0)$ is concave (convex for min).
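A numeric illustration (not part of the original sheet): the sketch below solves a small inequality-constrained problem with scipy's SLSQP solver. The objective $f(x,y) = xy$ and the constraint $x + y \le 10$ are assumed example choices; at the optimum the constraint binds, matching the $\lambda > 0$ branch of the conditions above.

```python
# Hedged sketch: maximize f(x, y) = x*y subject to x + y <= 10 (assumed example).
from scipy.optimize import minimize

f = lambda v: -(v[0] * v[1])  # minimize the negative to maximize x*y
cons = [{"type": "ineq", "fun": lambda v: 10.0 - (v[0] + v[1])}]  # scipy wants g >= 0

res = minimize(f, x0=[1.0, 1.0], method="SLSQP", constraints=cons)
print(res.x)  # approximately (5, 5): the constraint binds, so lambda > 0
```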

Some integrals:
$\int x^a\,dx = \frac{1}{a+1}x^{a+1} + C$  ($a \ne -1$)
$\int \frac{1}{x}\,dx = \ln|x| + C$
$\int e^{ax}\,dx = \frac{1}{a}e^{ax} + C$  ($a \ne 0$)
$\int a^x\,dx = \frac{1}{\ln a}a^x + C$  ($a > 0$ and $a \ne 1$)
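A quick symbolic check of these antiderivatives, assuming sympy is available (illustrative, not from the sheet): differentiating each right-hand side recovers the integrand.

```python
# Differentiate each antiderivative above; each print should show the integrand.
import sympy as sp

x, a = sp.symbols('x a', positive=True)
print(sp.simplify(sp.diff(x**(a + 1) / (a + 1), x)))  # x**a
print(sp.simplify(sp.diff(sp.log(x), x)))             # 1/x
print(sp.simplify(sp.diff(sp.exp(a * x) / a, x)))     # exp(a*x)
print(sp.simplify(sp.diff(a**x / sp.log(a), x)))      # a**x
```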

Integration by parts:
$\int f(x)g'(x)\,dx = f(x)g(x) - \int f'(x)g(x)\,dx$
Integration by substitution:
$\int f(g(x))g'(x)\,dx = \int f(u)\,du$  (substitution of $u = g(x)$)
The general solution of the homogeneous 2nd order linear difference equation $x_t + c\,x_{t-1} + d\,x_{t-2} = 0$. Consider the characteristic equation
$\lambda^2 + c\lambda + d = 0$.
If $D = c^2 - 4d > 0$, then the characteristic equation has two solutions $\lambda_1$ and $\lambda_2$, and the general solution is
$x_t = k_1\lambda_1^t + k_2\lambda_2^t$.
If $D = 0$, then the characteristic equation has one solution $\lambda$, and the general solution is
$x_t = k_1\lambda^t + k_2\,t\lambda^t$.
If $D < 0$, then the characteristic equation has no real solutions, and the general solution contains sine and cosine functions:
$x_t = k_1 r^t \sin(\theta t) + k_2 r^t \cos(\theta t)$, with $r = \sqrt{d}$.
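An illustrative numeric check of the $D > 0$ case (the coefficients $c = -1$, $d = 0.24$ and the constants $k_1$, $k_2$ are assumed for the example): iterating the difference equation reproduces the closed-form solution built from the characteristic roots.

```python
# Assumed example: lambda**2 + c*lambda + d = 0 with c = -1, d = 0.24 (roots 0.6, 0.4).
import numpy as np

c, d = -1.0, 0.24
l1, l2 = np.roots([1.0, c, d])        # characteristic roots
k1, k2 = 2.0, -1.0                    # arbitrary constants fixing x_0 and x_1

x = [k1 + k2, k1 * l1 + k2 * l2]      # x_0 and x_1 from the closed form
for t in range(2, 10):
    x.append(-c * x[-1] - d * x[-2])  # x_t + c*x_{t-1} + d*x_{t-2} = 0

closed = [k1 * l1**t + k2 * l2**t for t in range(10)]
print(np.allclose(x, closed))         # True
```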

The general solution of the homogeneous 2nd order linear differential equation $\ddot{x} + a\dot{x} + bx = 0$, with solutions $\lambda_1$ and $\lambda_2$ of the characteristic equation.
If $\lambda_1 \ne \lambda_2$ ($D > 0$), then the general solution is given by
$x(t) = Ae^{\lambda_1 t} + Be^{\lambda_2 t}$.
If $\lambda_1 = \lambda_2 = \lambda$ ($D = 0$), then the general solution is given by
$x(t) = Ae^{\lambda t} + Bte^{\lambda t}$.
QM3, 2017-2018: selected formulas from the course manual and Wooldridge

Course manual: Some basic concepts from probability theory


Probability distribution or density:
$P(a \le Y \le b) = \int_a^b f(y)\,dy$  (B.1)
$\int_{-\infty}^{\infty} f(y)\,dy = 1$  (B.2)

Sample mean: $\bar{y} = \frac{1}{n}\sum_{i=1}^n y_i$  (C.1)
Sample variance, sample standard deviation: $s_Y^2 = \frac{1}{n-1}\sum_{i=1}^n (y_i - \bar{y})^2$ and $s_Y = \sqrt{s_Y^2}$  (C.2)

Population mean: $E(Y) = \mu_Y = \int_{-\infty}^{\infty} y f(y)\,dy$  (C.3)
Population variance, population standard deviation:
$\mathrm{Var}(Y) = \sigma_Y^2 = E[(Y - \mu_Y)^2] = \int_{-\infty}^{\infty} (y - \mu_Y)^2 f(y)\,dy$ and $\mathrm{sd}(Y) = \sigma_Y = \sqrt{\sigma_Y^2}$  (C.4)
Rules for calculating means and variances:
For any constants $a$ and $b$, $E(aY + b) = aE(Y) + b$  (C.5ii)
For constants $\{a_1, a_2, \ldots, a_n\}$ and random variables $\{Y_1, Y_2, \ldots, Y_n\}$,
$E(a_1Y_1 + a_2Y_2 + \ldots + a_nY_n) = a_1E(Y_1) + a_2E(Y_2) + \ldots + a_nE(Y_n)$  (C.5iii)
For any constants $a$ and $b$, $\mathrm{Var}(aY + b) = a^2\mathrm{Var}(Y)$ and $\mathrm{sd}(aY + b) = |a|\cdot\mathrm{sd}(Y)$  (C.5iv)
Independence: $f(y \mid x) = f(y)$  (D.1)

Conditional population mean: $E(Y \mid X = x) = \mu_{Y|x} = \int_{-\infty}^{\infty} y f(y \mid x)\,dy$  (F.1)
Conditional population variance:
$\mathrm{Var}(Y \mid X = x) = \sigma_{Y|x}^2 = E\big([Y - \mu_{Y|x}]^2 \mid X = x\big) = \int_{-\infty}^{\infty} (y - \mu_{Y|x})^2 f(y \mid x)\,dy$  (F.2)

Sample covariance: $s_{XY} = \frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})$  (G.1)
Sample correlation: $R_{XY} = \dfrac{s_{XY}}{s_X s_Y}$  (G.2)
Population covariance: $\mathrm{Cov}(X, Y) = \sigma_{XY} = E[(X - \mu_X)(Y - \mu_Y)]$  (G.3)
Population correlation: $\mathrm{Corr}(X, Y) = \rho_{XY} = \dfrac{\sigma_{XY}}{\sigma_X \sigma_Y}$  (G.4)
Rules for calculating variances and covariances:
For any constants $a$, $b$, $c$ and $d$, $\mathrm{Cov}(aX + b, cY + d) = ac\,\mathrm{Cov}(X, Y)$  (G.5i)
For any constants $a$ and $b$, $\mathrm{Var}(aX + bY) = a^2\mathrm{Var}(X) + b^2\mathrm{Var}(Y) + 2ab\,\mathrm{Cov}(X, Y)$  (G.5ii)
For pairwise uncorrelated random variables $\{Y_1, Y_2, \ldots, Y_n\}$,
$\mathrm{Var}(Y_1 + Y_2 + \ldots + Y_n) = \mathrm{Var}(Y_1) + \mathrm{Var}(Y_2) + \ldots + \mathrm{Var}(Y_n)$  (G.5v)
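A minimal simulation sketch of rule (G.5ii), with assumed constants $a = 2$ and $b = -3$. Because (G.5ii) also holds exactly for sample moments (with a common degrees-of-freedom convention), the two printed numbers agree up to floating-point error.

```python
# Check Var(aX + bY) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y) on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n, a, b = 100_000, 2.0, -3.0
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)      # built to be correlated with x

lhs = np.var(a * x + b * y)
rhs = (a**2 * np.var(x) + b**2 * np.var(y)
       + 2 * a * b * np.cov(x, y, ddof=0)[0, 1])
print(lhs, rhs)                       # identical up to rounding
```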
independence $\Rightarrow$ constant conditional mean (mean independence) $\Rightarrow$ zero correlation $\Leftrightarrow$ zero covariance;
independence $\Rightarrow$ constant conditional variance (homoskedasticity)  (H.1)

nonzero correlation $\Leftrightarrow$ nonzero covariance $\Rightarrow$ nonconstant conditional mean (mean dependence) $\Rightarrow$ dependence;
nonconstant conditional variance (heteroskedasticity) $\Rightarrow$ dependence  (H.2)
Ch. 3 Multiple Regression Analysis: Estimation (contains Ch. 2 as a special case)
Multiple linear regression model: $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_k x_k + u$  (3.6/31)
OLS regression line: $\hat{y} = \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 x_2 + \ldots + \hat\beta_k x_k$  (3.11/16)
OLS residual: $\hat{u}_i = y_i - \hat{y}_i = y_i - \hat\beta_0 - \hat\beta_1 x_{i1} - \ldots - \hat\beta_k x_{ik}$  (3.20/21)
Residual sum of squares: $SSR = \sum_{i=1}^n \hat{u}_i^2 = \sum_{i=1}^n (y_i - \hat\beta_0 - \hat\beta_1 x_{i1} - \ldots - \hat\beta_k x_{ik})^2$  (3.12)
OLS first order conditions:
$\sum_{i=1}^n (y_i - \hat\beta_0 - \hat\beta_1 x_{i1} - \ldots - \hat\beta_k x_{ik}) = 0$
$\sum_{i=1}^n x_{i1}(y_i - \hat\beta_0 - \hat\beta_1 x_{i1} - \ldots - \hat\beta_k x_{ik}) = 0$
$\vdots$
$\sum_{i=1}^n x_{ik}(y_i - \hat\beta_0 - \hat\beta_1 x_{i1} - \ldots - \hat\beta_k x_{ik}) = 0$  (3.13)
OLS estimates:
$\hat\beta_0 = \bar{y} - \hat\beta_1\bar{x}_1 - \ldots - \hat\beta_k\bar{x}_k$
$\hat\beta_j = \dfrac{\sum_{i=1}^n \hat{r}_{ij}\,y_i}{\sum_{i=1}^n \hat{r}_{ij}^2}$ for $j = 1, \ldots, k$  (3.22)
(for $k = 1$ this reduces to $\hat\beta_1 = \dfrac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2}$  (2.19))
where the $\hat{r}_{ij}$ are the residuals from a regression of $x_j$ against the other independent variables ("partialling out").
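A sketch of the partialling-out formula (3.22) on simulated data (the data-generating process below is an assumption for illustration): the coefficient on $x_1$ from the full multiple regression equals the ratio built from the residuals of $x_1$ regressed on the other regressors.

```python
# Frisch-Waugh / partialling-out check for beta_1 in y = b0 + b1*x1 + b2*x2 + u.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

Z = np.column_stack([np.ones(n), x2])                  # regress x1 on (1, x2)
r1 = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]    # partialled-out residuals

print(beta_hat[1], (r1 @ y) / (r1 @ r1))               # same number, per (3.22)
```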
Total sum of squares: $SST = \sum_{i=1}^n (y_i - \bar{y})^2$  (3.24)
Explained sum of squares: $SSE = \sum_{i=1}^n (\hat{y}_i - \bar{y})^2$  (3.25)
Decomposition of SST: $SST = SSE + SSR$  (3.27)
Coefficient of determination: $R^2 = SSE/SST = 1 - SSR/SST = [\mathrm{Corr}(y, \hat{y})]^2$  (3.28/29)
MLR.1: Linear in parameters: $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_k x_k + u$  (3.6/31)
MLR.2: Random sampling: random sample $\{(x_{i1}, x_{i2}, \ldots, x_{ik}, y_i): i = 1, \ldots, n\}$ from the population model  (3.31)
MLR.3: No perfect collinearity: in the sample, none of the independent variables is constant, and there are no exact linear relationships among them
MLR.4: Zero conditional mean: $E(u \mid x_1, x_2, \ldots, x_k) = E(u) = 0$  (3.8/36)
Conditional mean of y: $E(y \mid x_1, x_2, \ldots, x_k) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_k x_k$  (2.8/55)
Th. 3.1: Unbiasedness of OLS: under MLR.1 – MLR.4, $E(\hat\beta_j) = \beta_j$ for $j = 0, 1, \ldots, k$  (3.37)
Omitted variable bias:
$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$  (3.40)
$\tilde{y} = \tilde\beta_0 + \tilde\beta_1 x_1$  (3.41)
$\tilde{x}_2 = \tilde\delta_0 + \tilde\delta_1 x_1$  (3.44)
$E(\tilde\beta_1) = \beta_1 + \beta_2\tilde\delta_1$  (3.45)
MLR.5: Homoskedasticity: $\mathrm{Var}(u \mid x_1, x_2, \ldots, x_k) = \sigma^2$
Conditional variance of y: $\mathrm{Var}(y \mid x_1, x_2, \ldots, x_k) = \sigma^2$  (2.56)
Th. 3.2: Sampling variance of the OLS slope estimators: under the Gauss-Markov assumptions MLR.1 – MLR.5, conditional on the x-values, with $SST_j = \sum_{i=1}^n (x_{ij} - \bar{x}_j)^2$,
$\mathrm{Var}(\hat\beta_j) = \dfrac{\sigma^2}{SST_j(1 - R_j^2)}$ and $\mathrm{sd}(\hat\beta_j) = \dfrac{\sigma}{\sqrt{SST_j(1 - R_j^2)}}$  (3.51)
Th. 3.3: Unbiased estimation of $\sigma^2$: under the Gauss-Markov assumptions MLR.1 – MLR.5, $E(\hat\sigma^2) = \sigma^2$, with $\hat\sigma^2 = SSR/(n-k-1)$  (3.56)
Standard error of the regression: $\hat\sigma = \sqrt{\hat\sigma^2}$  (2.62)
Standard errors of the slopes: $\mathrm{se}(\hat\beta_j) = \dfrac{\hat\sigma}{\sqrt{SST_j(1 - R_j^2)}} = \dfrac{\hat\sigma}{\sqrt{(n-1)s_{X_j}^2(1 - R_j^2)}}$  (3.58/59)
Linear estimators: $\tilde\beta_j = \sum_{i=1}^n w_{ij}\,y_i$, where each $w_{ij}$ can be a function of the sample values of all the independent variables  (3.60)
Th. 3.4: Gauss-Markov Theorem: under the Gauss-Markov assumptions MLR.1 – MLR.5, the OLS estimators $\hat\beta_j$ are the best linear unbiased estimators of $\beta_j$, $j = 0, \ldots, k$.

Ch. 4 Multiple Regression Analysis: Inference


MLR.6: Normality: the population error $u$ is independent of the explanatory variables $x_1, x_2, \ldots, x_k$ and is normally distributed with zero mean and variance $\sigma^2$: $u \sim \mathrm{Normal}(0, \sigma^2)$
Conditional distribution of y: $y \mid x_1, x_2, \ldots, x_k \sim \mathrm{Normal}(\beta_0 + \beta_1 x_1 + \ldots + \beta_k x_k,\ \sigma^2)$
Th. 4.1: Normal sampling distributions: under the CLM assumptions MLR.1 – MLR.6, conditional on $x_1, x_2, \ldots, x_k$,
$\hat\beta_j \sim \mathrm{Normal}[\beta_j, \mathrm{Var}(\hat\beta_j)]$, so that $\dfrac{\hat\beta_j - \beta_j}{\mathrm{sd}(\hat\beta_j)} \sim \mathrm{Normal}(0, 1)$  (4.1)
Th. 4.2: t distribution for the standardized estimators: under the CLM assumptions MLR.1 – MLR.6,
$\dfrac{\hat\beta_j - \beta_j}{\mathrm{se}(\hat\beta_j)} \sim t_{n-k-1}$  (4.3)
The general t statistic: $t = \dfrac{\text{estimate} - \text{hypothesized value}}{\text{standard error}}$  (4.13)
A 100(1–$\alpha$)% confidence interval: $\hat\beta_j \pm t_{\alpha/2}\cdot\mathrm{se}(\hat\beta_j)$, with $t_{\alpha/2}$ the appropriate critical $t_{n-k-1}$ value  (4.16)
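The critical value in (4.16) can also be looked up numerically instead of in Table G.2 below; this illustrative snippet assumes scipy and $n-k-1 = 25$.

```python
# 95% two-sided confidence interval: critical t value for 25 df.
from scipy import stats

t_crit = stats.t.ppf(1 - 0.05 / 2, df=25)
print(round(t_crit, 3))  # 2.060, the df = 25, 2-tailed .05 entry of Table G.2
```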
The F statistic: $F = \dfrac{(SSR_r - SSR_{ur})/q}{SSR_{ur}/(n-k-1)} \sim F_{q,\,n-k-1}$ under MLR.1 – MLR.6  (4.37)
$q$ = numerator df = $df_r - df_{ur}$, and $n-k-1$ = denominator df = $df_{ur}$  (4.38/39)
R-squared form of the F statistic: $F = \dfrac{(R_{ur}^2 - R_r^2)/q}{(1 - R_{ur}^2)/(n-k-1)} \sim F_{q,\,n-k-1}$  (4.41)
Overall F statistic: $F = \dfrac{SSE/k}{SSR/(n-k-1)} = \dfrac{R^2/k}{(1-R^2)/(n-k-1)} \sim F_{k,\,n-k-1}$  (4.46)
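A small sketch of the R-squared form (4.41), with assumed example numbers: $q = 2$ exclusion restrictions, 60 denominator degrees of freedom, $R_{ur}^2 = 0.40$ and $R_r^2 = 0.35$.

```python
# F test from the R-squared form; p-value from the F distribution's upper tail.
from scipy import stats

q, df_denom = 2, 60
r2_ur, r2_r = 0.40, 0.35

F = ((r2_ur - r2_r) / q) / ((1 - r2_ur) / df_denom)
print(F, stats.f.sf(F, q, df_denom))  # F = 2.5: above the 10% critical value
                                      # (2.39) but below the 5% one (3.15)
```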

Ch. 5 Multiple Regression Analysis: OLS Asymptotics


MLR.4': Zero mean and zero correlation: $E(u) = 0$ and $\mathrm{Cov}(x_j, u) = 0$ for $j = 1, \ldots, k$. Note that MLR.4' is weaker than MLR.4.
Th. 5.1: Consistency of OLS: under MLR.1, MLR.2, MLR.3 and MLR.4', the OLS estimators are consistent: $\mathrm{plim}\,\hat\beta_j = \beta_j$, $j = 0, \ldots, k$  (5.3)
Th. 5.2: Asymptotic normality of OLS: under the Gauss-Markov assumptions MLR.1 – MLR.5, $\dfrac{\hat\beta_j - \beta_j}{\mathrm{se}(\hat\beta_j)} \overset{a}{\sim} \mathrm{Normal}(0, 1)$, $j = 0, \ldots, k$  (5.7)

Th. 5.3: Asymptotic efficiency of OLS: under the Gauss-Markov assumptions MLR.1 – MLR.5, the OLS estimators have the smallest asymptotic variances among a broad class of consistent estimators.
Ch. 6 Multiple Regression Analysis: Further Issues

Adjusted R-squared: $\bar{R}^2 = 1 - \dfrac{SSR/(n-k-1)}{SST/(n-1)} = 1 - \dfrac{\hat\sigma^2}{SST/(n-1)}$  (6.21)
Variance of the prediction error: $\mathrm{Var}(\hat{e}^0) = \mathrm{Var}(\hat{y}^0) + \mathrm{Var}(u^0) = \mathrm{Var}(\hat{y}^0) + \sigma^2$  (6.35)
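A minimal check of (6.21) on assumed numbers ($SSR = 120$, $SST = 200$, $n = 100$, $k = 3$), showing how the adjustment penalizes additional regressors relative to the plain $R^2$.

```python
# Plain and adjusted R-squared from assumed sums of squares.
ssr, sst, n, k = 120.0, 200.0, 100, 3

r2 = 1 - ssr / sst
r2_adj = 1 - (ssr / (n - k - 1)) / (sst / (n - 1))
print(r2, r2_adj)  # 0.40 and about 0.381
```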

Ch. 7 Multiple Regression Analysis with Qualitative Information: Binary (or Dummy) Variables
The (traditional) Chow statistic: $F = \dfrac{[SSR_P - (SSR_1 + SSR_2)]/(k+1)}{(SSR_1 + SSR_2)/[n - 2(k+1)]} \sim F_{k+1,\,n-2(k+1)}$  (7.24)
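A sketch of (7.24) with assumed inputs for the pooled and group-wise residual sums of squares ($n = 200$ and $k = 3$ are also assumptions for the example).

```python
# Chow test for a structural break across two groups.
from scipy import stats

ssr_p, ssr_1, ssr_2 = 150.0, 70.0, 60.0
n, k = 200, 3

num = (ssr_p - (ssr_1 + ssr_2)) / (k + 1)
den = (ssr_1 + ssr_2) / (n - 2 * (k + 1))
F = num / den
print(F, stats.f.sf(F, k + 1, n - 2 * (k + 1)))  # F is about 7.4: a clear rejection here
```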

Ch. 10 Basic Regression Analysis with Time Series Data

FDL model of order q:
$y_t = \alpha_0 + \delta_0 z_t + \delta_1 z_{t-1} + \ldots + \delta_q z_{t-q} + u_t$  (10.6)
$LRP = \delta_0 + \delta_1 + \ldots + \delta_q$  (10.7)
TS.1: Linear in parameters: the stochastic process $\{(x_{t1}, x_{t2}, \ldots, x_{tk}, y_t): t = 1, \ldots, n\}$ follows the linear model $y_t = \beta_0 + \beta_1 x_{t1} + \ldots + \beta_k x_{tk} + u_t$  (10.8)
TS.2: No perfect collinearity: in the sample, none of the independent variables is constant, and there are no exact linear relationships among them
TS.3: Zero conditional mean or strict exogeneity: $E(u_t \mid X) = 0$, $t = 1, \ldots, n$  (10.9)
Contemporaneous exogeneity: $E(u_t \mid x_t) = 0$, $t = 1, \ldots, n$  (10.10)
Th. 10.1: Unbiasedness of OLS: under TS.1 – TS.3, $E(\hat\beta_j) = \beta_j$, $j = 0, 1, \ldots, k$
TS.4: Homoskedasticity: $\mathrm{Var}(u_t \mid X) = \mathrm{Var}(u_t) = \sigma^2$, $t = 1, \ldots, n$
TS.5: No serial correlation: $\mathrm{Corr}(u_t, u_s \mid X) = 0$ for all $t \ne s$
Th. 10.2: Sampling variance of the OLS slope estimators: under the Gauss-Markov assumptions TS.1 – TS.5, $\mathrm{Var}(\hat\beta_j \mid X) = \dfrac{\sigma^2}{SST_j(1 - R_j^2)}$, $j = 1, \ldots, k$  (10.13)
Th. 10.3: Unbiased estimation of $\sigma^2$: under the Gauss-Markov assumptions TS.1 – TS.5, $E(\hat\sigma^2) = \sigma^2$, with $\hat\sigma^2 = SSR/(n-k-1)$
Th. 10.4: Gauss-Markov Theorem: under the Gauss-Markov assumptions TS.1 – TS.5, the OLS estimators $\hat\beta_j$ are the best linear unbiased estimators of $\beta_j$, conditional on $X$.
TS.6: Normality: the errors $u_t$ are independent of $X$ and are independently and identically distributed as $\mathrm{Normal}(0, \sigma^2)$
Th. 10.5: Normal sampling distributions: under the CLM assumptions TS.1 – TS.6, the OLS estimators are normally distributed, conditional on $X$. Further, under the null hypothesis, each t statistic has a t distribution, and each F statistic has an F distribution.
Linear time trend: $y_t = \alpha_0 + \alpha_1 t + e_t$  (10.24)
Exponential time trend: $\log(y_t) = \beta_0 + \beta_1 t + e_t$  (10.26)
Detrending interpretation of a regression with a time trend:
$\hat{y}_t = \hat\beta_0 + \hat\beta_1 x_{t1} + \ldots + \hat\beta_k x_{tk} + \hat\delta_1 t$  (10.36)
$\hat{y}_t = \hat\beta_1 x_{t1} + \ldots + \hat\beta_k x_{tk}$, with $y, x_1, \ldots, x_k$ detrended variables  (10.37)
Deseasonalized interpretation of a regression with seasonal dummies:
$\hat{y}_t = \hat\beta_0 + \hat\beta_1 x_{t1} + \ldots + \hat\beta_k x_{tk} + \hat\delta_1 feb_t + \ldots + \hat\delta_{11} dec_t$  (10.41)
$\hat{y}_t = \hat\beta_1 x_{t1} + \ldots + \hat\beta_k x_{tk}$, with $y, x_1, \ldots, x_k$ deseasonalized variables
TABLE G.2: Critical Values of the t Distribution
NOTE: as the number of degrees of freedom (df) goes to infinity, the t distribution converges to the standard normal.

             Significance level
 1-Tailed:  .10     .05     .025     .01     .005
 2-Tailed:  .20     .10     .05      .02     .01
  df
   1       3.078   6.314   12.706   31.821  63.657
   2       1.886   2.920    4.303    6.965   9.925
   3       1.638   2.353    3.182    4.541   5.841
   4       1.533   2.132    2.776    3.747   4.604
   5       1.476   2.015    2.571    3.365   4.032
   6       1.440   1.943    2.447    3.143   3.707
   7       1.415   1.895    2.365    2.998   3.499
   8       1.397   1.860    2.306    2.896   3.355
   9       1.383   1.833    2.262    2.821   3.250
  10       1.372   1.812    2.228    2.764   3.169
  11       1.363   1.796    2.201    2.718   3.106
  12       1.356   1.782    2.179    2.681   3.055
  13       1.350   1.771    2.160    2.650   3.012
  14       1.345   1.761    2.145    2.624   2.977
  15       1.341   1.753    2.131    2.602   2.947
  16       1.337   1.746    2.120    2.583   2.921
  17       1.333   1.740    2.110    2.567   2.898
  18       1.330   1.734    2.101    2.552   2.878
  19       1.328   1.729    2.093    2.539   2.861
  20       1.325   1.725    2.086    2.528   2.845
  21       1.323   1.721    2.080    2.518   2.831
  22       1.321   1.717    2.074    2.508   2.819
  23       1.319   1.714    2.069    2.500   2.807
  24       1.318   1.711    2.064    2.492   2.797
  25       1.316   1.708    2.060    2.485   2.787
  26       1.315   1.706    2.056    2.479   2.779
  27       1.314   1.703    2.052    2.473   2.771
  28       1.313   1.701    2.048    2.467   2.763
  29       1.311   1.699    2.045    2.462   2.756
  30       1.310   1.697    2.042    2.457   2.750
  40       1.303   1.684    2.021    2.423   2.704
  60       1.296   1.671    2.000    2.390   2.660
  90       1.291   1.662    1.987    2.368   2.632
 120       1.289   1.658    1.980    2.358   2.617
  ∞        1.282   1.645    1.960    2.326   2.576

TABLE G.3a: 10% Critical Values of the F Distribution
Rows: denominator degrees of freedom (df). Columns: numerator degrees of freedom.

 df     1     2     3     4     5     6     7     8     9    10
 10   3.29  2.92  2.73  2.61  2.52  2.46  2.41  2.38  2.35  2.32
 11   3.23  2.86  2.66  2.54  2.45  2.39  2.34  2.30  2.27  2.25
 12   3.18  2.81  2.61  2.48  2.39  2.33  2.28  2.24  2.21  2.19
 13   3.14  2.76  2.56  2.43  2.35  2.28  2.23  2.20  2.16  2.14
 14   3.10  2.73  2.52  2.39  2.31  2.24  2.19  2.15  2.12  2.10
 15   3.07  2.70  2.49  2.36  2.27  2.21  2.16  2.12  2.09  2.06
 16   3.05  2.67  2.46  2.33  2.24  2.18  2.13  2.09  2.06  2.03
 17   3.03  2.64  2.44  2.31  2.22  2.15  2.10  2.06  2.03  2.00
 18   3.01  2.62  2.42  2.29  2.20  2.13  2.08  2.04  2.00  1.98
 19   2.99  2.61  2.40  2.27  2.18  2.11  2.06  2.02  1.98  1.96
 20   2.97  2.59  2.38  2.25  2.16  2.09  2.04  2.00  1.96  1.94
 21   2.96  2.57  2.36  2.23  2.14  2.08  2.02  1.98  1.95  1.92
 22   2.95  2.56  2.35  2.22  2.13  2.06  2.01  1.97  1.93  1.90
 23   2.94  2.55  2.34  2.21  2.11  2.05  1.99  1.95  1.92  1.89
 24   2.93  2.54  2.33  2.19  2.10  2.04  1.98  1.94  1.91  1.88
 25   2.92  2.53  2.32  2.18  2.09  2.02  1.97  1.93  1.89  1.87
 26   2.91  2.52  2.31  2.17  2.08  2.01  1.96  1.92  1.88  1.86
 27   2.90  2.51  2.30  2.17  2.07  2.00  1.95  1.91  1.87  1.85
 28   2.89  2.50  2.29  2.16  2.06  2.00  1.94  1.90  1.87  1.84
 29   2.89  2.50  2.28  2.15  2.06  1.99  1.93  1.89  1.86  1.83
 30   2.88  2.49  2.28  2.14  2.05  1.98  1.93  1.88  1.85  1.82
 40   2.84  2.44  2.23  2.09  2.00  1.93  1.87  1.83  1.79  1.76
 60   2.79  2.39  2.18  2.04  1.95  1.87  1.82  1.77  1.74  1.71
 90   2.76  2.36  2.15  2.01  1.91  1.84  1.78  1.74  1.70  1.67
120   2.75  2.35  2.13  1.99  1.90  1.82  1.77  1.72  1.68  1.65
  ∞   2.71  2.30  2.08  1.94  1.85  1.77  1.72  1.67  1.63  1.60
TABLE G.3b: 5% Critical Values of the F Distribution
Rows: denominator degrees of freedom (df). Columns: numerator degrees of freedom.

 df     1     2     3     4     5     6     7     8     9    10
 10   4.96  4.10  3.71  3.48  3.33  3.22  3.14  3.07  3.02  2.98
 11   4.84  3.98  3.59  3.36  3.20  3.09  3.01  2.95  2.90  2.85
 12   4.75  3.89  3.49  3.26  3.11  3.00  2.91  2.85  2.80  2.75
 13   4.67  3.81  3.41  3.18  3.03  2.92  2.83  2.77  2.71  2.67
 14   4.60  3.74  3.34  3.11  2.96  2.85  2.76  2.70  2.65  2.60
 15   4.54  3.68  3.29  3.06  2.90  2.79  2.71  2.64  2.59  2.54
 16   4.49  3.63  3.24  3.01  2.85  2.74  2.66  2.59  2.54  2.49
 17   4.45  3.59  3.20  2.96  2.81  2.70  2.61  2.55  2.49  2.45
 18   4.41  3.55  3.16  2.93  2.77  2.66  2.58  2.51  2.46  2.41
 19   4.38  3.52  3.13  2.90  2.74  2.63  2.54  2.48  2.42  2.38
 20   4.35  3.49  3.10  2.87  2.71  2.60  2.51  2.45  2.39  2.35
 21   4.32  3.47  3.07  2.84  2.68  2.57  2.49  2.42  2.37  2.32
 22   4.30  3.44  3.05  2.82  2.66  2.55  2.46  2.40  2.34  2.30
 23   4.28  3.42  3.03  2.80  2.64  2.53  2.44  2.37  2.32  2.27
 24   4.26  3.40  3.01  2.78  2.62  2.51  2.42  2.36  2.30  2.25
 25   4.24  3.39  2.99  2.76  2.60  2.49  2.40  2.34  2.28  2.24
 26   4.23  3.37  2.98  2.74  2.59  2.47  2.39  2.32  2.27  2.22
 27   4.21  3.35  2.96  2.73  2.57  2.46  2.37  2.31  2.25  2.20
 28   4.20  3.34  2.95  2.71  2.56  2.45  2.36  2.29  2.24  2.19
 29   4.18  3.33  2.93  2.70  2.55  2.43  2.35  2.28  2.22  2.18
 30   4.17  3.32  2.92  2.69  2.53  2.42  2.33  2.27  2.21  2.16
 40   4.08  3.23  2.84  2.61  2.45  2.34  2.25  2.18  2.12  2.08
 60   4.00  3.15  2.76  2.53  2.37  2.25  2.17  2.10  2.04  1.99
 90   3.95  3.10  2.71  2.47  2.32  2.20  2.11  2.04  1.99  1.94
120   3.92  3.07  2.68  2.45  2.29  2.17  2.09  2.02  1.96  1.91
  ∞   3.84  3.00  2.60  2.37  2.21  2.10  2.01  1.94  1.88  1.83

TABLE G.3c: 1% Critical Values of the F Distribution
Rows: denominator degrees of freedom (df). Columns: numerator degrees of freedom.

 df      1     2     3     4     5     6     7     8     9    10
 10   10.04  7.56  6.55  5.99  5.64  5.39  5.20  5.06  4.94  4.85
 11    9.65  7.21  6.22  5.67  5.32  5.07  4.89  4.74  4.63  4.54
 12    9.33  6.93  5.95  5.41  5.06  4.82  4.64  4.50  4.39  4.30
 13    9.07  6.70  5.74  5.21  4.86  4.62  4.44  4.30  4.19  4.10
 14    8.86  6.51  5.56  5.04  4.69  4.46  4.28  4.14  4.03  3.94
 15    8.68  6.36  5.42  4.89  4.56  4.32  4.14  4.00  3.89  3.80
 16    8.53  6.23  5.29  4.77  4.44  4.20  4.03  3.89  3.78  3.69
 17    8.40  6.11  5.18  4.67  4.34  4.10  3.93  3.79  3.68  3.59
 18    8.29  6.01  5.09  4.58  4.25  4.01  3.84  3.71  3.60  3.51
 19    8.18  5.93  5.01  4.50  4.17  3.94  3.77  3.63  3.52  3.43
 20    8.10  5.85  4.94  4.43  4.10  3.87  3.70  3.56  3.46  3.37
 21    8.02  5.78  4.87  4.37  4.04  3.81  3.64  3.51  3.40  3.31
 22    7.95  5.72  4.82  4.31  3.99  3.76  3.59  3.45  3.35  3.26
 23    7.88  5.66  4.76  4.26  3.94  3.71  3.54  3.41  3.30  3.21
 24    7.82  5.61  4.72  4.22  3.90  3.67  3.50  3.36  3.26  3.17
 25    7.77  5.57  4.68  4.18  3.85  3.63  3.46  3.32  3.22  3.13
 26    7.72  5.53  4.64  4.14  3.82  3.59  3.42  3.29  3.18  3.09
 27    7.68  5.49  4.60  4.11  3.78  3.56  3.39  3.26  3.15  3.06
 28    7.64  5.45  4.57  4.07  3.75  3.53  3.36  3.23  3.12  3.03
 29    7.60  5.42  4.54  4.04  3.73  3.50  3.33  3.20  3.09  3.00
 30    7.56  5.39  4.51  4.02  3.70  3.47  3.30  3.17  3.07  2.98
 40    7.31  5.18  4.31  3.83  3.51  3.29  3.12  2.99  2.89  2.80
 60    7.08  4.98  4.13  3.65  3.34  3.12  2.95  2.82  2.72  2.63
 90    6.93  4.85  4.01  3.54  3.23  3.01  2.84  2.72  2.61  2.52
120    6.85  4.79  3.95  3.48  3.17  2.96  2.79  2.66  2.56  2.47
  ∞    6.63  4.61  3.78  3.32  3.02  2.80  2.64  2.51  2.41  2.32