Copyright © Matthias Vallentin, 2010
vallentin@icir.org
[Figure: PMFs of the discrete distributions above for selected parameters (four panels; the surviving axis labels suggest Uniform {a, …, b}, Binomial, Geometric, and Poisson).]
1.2 Continuous Distributions
For each distribution: CDF F_X(x), PDF f_X(x), mean E[X], variance V[X], and MGF M_X(s).

Uniform(a, b)
  F(x) = 0 for x < a;  (x − a)/(b − a) for a < x < b;  1 for x > b
  f(x) = I(a < x < b)/(b − a)
  E[X] = (a + b)/2,  V[X] = (b − a)²/12,  M(s) = (e^{sb} − e^{sa})/(s(b − a))

Normal(μ, σ²)
  F(x) = Φ(x) = ∫_{−∞}^x φ(t) dt
  f(x) = φ(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²))
  E[X] = μ,  V[X] = σ²,  M(s) = exp(μs + σ²s²/2)

Log-Normal(μ, σ²)
  F(x) = 1/2 + (1/2) erf((ln x − μ)/(√2 σ))
  f(x) = (1/(x√(2πσ²))) exp(−(ln x − μ)²/(2σ²))
  E[X] = e^{μ + σ²/2},  V[X] = (e^{σ²} − 1) e^{2μ + σ²}

Multivariate Normal(μ, Σ)
  f(x) = (2π)^{−k/2} |Σ|^{−1/2} exp(−(1/2)(x − μ)ᵀΣ^{−1}(x − μ))
  E[X] = μ,  V[X] = Σ,  M(s) = exp(μᵀs + (1/2)sᵀΣs)

Chi-square(k)
  F(x) = (1/Γ(k/2)) γ(k/2, x/2)
  f(x) = x^{k/2−1} e^{−x/2}/(2^{k/2} Γ(k/2))
  E[X] = k,  V[X] = 2k,  M(s) = (1 − 2s)^{−k/2},  s < 1/2

Exponential(β)
  F(x) = 1 − e^{−x/β},  f(x) = (1/β) e^{−x/β}
  E[X] = β,  V[X] = β²,  M(s) = 1/(1 − βs),  s < 1/β

Gamma(α, β)¹
  F(x) = γ(α, x/β)/Γ(α),  f(x) = x^{α−1} e^{−x/β}/(Γ(α) β^α)
  E[X] = αβ,  V[X] = αβ²,  M(s) = (1/(1 − βs))^α,  s < 1/β

InverseGamma(α, β)
  F(x) = Γ(α, β/x)/Γ(α),  f(x) = (β^α/Γ(α)) x^{−α−1} e^{−β/x}
  E[X] = β/(α − 1) for α > 1,  V[X] = β²/((α − 1)²(α − 2)) for α > 2
  M(s) = (2(−βs)^{α/2}/Γ(α)) K_α(√(−4βs))

Dirichlet(α)
  f(x) = (Γ(Σ_{i=1}^k α_i)/∏_{i=1}^k Γ(α_i)) ∏_{i=1}^k x_i^{α_i − 1}
  E[X_i] = α_i/Σ_{i=1}^k α_i,  V[X_i] = E[X_i](1 − E[X_i])/(Σ_{i=1}^k α_i + 1)

Beta(α, β)²
  F(x) = I_x(α, β)
  f(x) = (Γ(α + β)/(Γ(α)Γ(β))) x^{α−1}(1 − x)^{β−1}
  E[X] = α/(α + β),  V[X] = αβ/((α + β)²(α + β + 1))
  M(s) = 1 + Σ_{k=1}^∞ (∏_{r=0}^{k−1} (α + r)/(α + β + r)) s^k/k!

Weibull(λ, k)
  F(x) = 1 − e^{−(x/λ)^k},  f(x) = (k/λ)(x/λ)^{k−1} e^{−(x/λ)^k}
  E[X] = λΓ(1 + 1/k),  V[X] = λ²Γ(1 + 2/k) − μ²
  M(s) = Σ_{n=0}^∞ (sⁿλⁿ/n!) Γ(1 + n/k)

Pareto(x_m, α)
  F(x) = 1 − (x_m/x)^α for x ≥ x_m,  f(x) = αx_m^α/x^{α+1} for x ≥ x_m
  E[X] = αx_m/(α − 1) for α > 1,  V[X] = x_m²α/((α − 1)²(α − 2)) for α > 2
  M(s) = α(−x_m s)^α Γ(−α, −x_m s),  s < 0

¹ γ(s, x) in the CDF of the Gamma distribution denotes the lower incomplete gamma function, defined as γ(s, x) = ∫_0^x t^{s−1} e^{−t} dt.
² I_x(a, b) in the CDF of the Beta distribution denotes the regularized incomplete beta function, defined as I_x(a, b) = B(x; a, b)/B(a, b), where B(x; a, b) is the incomplete beta function, B(x; a, b) = ∫_0^x t^{a−1}(1 − t)^{b−1} dt.
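These rows can be spot-checked numerically. A minimal sketch, assuming NumPy and SciPy are available; scipy.stats happens to use the same scale parametrizations as the Gamma and Pareto rows above:

import numpy as np
from scipy import stats

# Gamma(alpha, beta): scipy's gamma takes shape a and scale, matching the
# scale parametrization above (E[X] = alpha*beta, V[X] = alpha*beta^2).
alpha, beta = 3.0, 2.0
g = stats.gamma(a=alpha, scale=beta)
assert np.isclose(g.mean(), alpha * beta)
assert np.isclose(g.var(), alpha * beta**2)

# Pareto(x_m, alpha): scipy's pareto has shape b = alpha and scale = x_m.
xm, a = 1.0, 3.0
p = stats.pareto(b=a, scale=xm)
assert np.isclose(p.mean(), a * xm / (a - 1))                    # alpha > 1
assert np.isclose(p.var(), xm**2 * a / ((a - 1)**2 * (a - 2)))   # alpha > 2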
[Figure: PDFs of the continuous distributions above for various parameter settings. Surviving panel titles: Uniform (continuous), Normal, Log-normal, χ², Weibull, Pareto; the remaining panels show Exponential/Gamma/Beta-style densities with legends such as β = 1, β = 0.4 and α = 2, β = 2.]
2 Probability Theory

Definitions
• Sample space Ω
• Probability space (Ω, A, P), where P is a probability measure satisfying
  1. P[A] ≥ 0 for all A ∈ A
  2. P[Ω] = 1
  3. P[⨆_{i=1}^∞ A_i] = Σ_{i=1}^∞ P[A_i] for disjoint A_i

Law of Total Probability
  P[B] = Σ_{i=1}^n P[B | A_i] P[A_i]   where Ω = ⨆_{i=1}^n A_i

Beta (function):
• Ordinary: B(x, y) = B(y, x) = ∫_0^1 t^{x−1}(1 − t)^{y−1} dt = Γ(x)Γ(y)/Γ(x + y)
• Incomplete: B(x; a, b) = ∫_0^x t^{a−1}(1 − t)^{b−1} dt
• Regularized incomplete:
  I_x(a, b) = B(x; a, b)/B(a, b) = Σ_{j=a}^{a+b−1} ((a + b − 1)!/(j!(a + b − 1 − j)!)) x^j (1 − x)^{a+b−1−j}  for a, b ∈ ℕ
• I_0(a, b) = 0,  I_1(a, b) = 1
• I_x(a, b) = 1 − I_{1−x}(b, a)

3 Random Variables

Random Variable
  X : Ω → ℝ

9.2 Bivariate Normal

Let X ~ N(μ_x, σ_x²) and Y ~ N(μ_y, σ_y²) with correlation ρ. Then

  f(x, y) = (1/(2πσ_x σ_y √(1 − ρ²))) exp(−z/(2(1 − ρ²)))

where

  z = ((x − μ_x)/σ_x)² + ((y − μ_y)/σ_y)² − 2ρ((x − μ_x)/σ_x)((y − μ_y)/σ_y)

For standardized X and Y, the conditionals are

  (Y | X = x) ~ N(ρx, 1 − ρ²)  and  (X | Y = y) ~ N(ρy, 1 − ρ²)

Independence
  X ⊥⊥ Y ⟺ ρ = 0
12.3.1 Multiparameter Delta Method

Let τ = ϕ(θ_1, …, θ_k) be a function and let the gradient of ϕ be

  ∇ϕ = (∂ϕ/∂θ_1, …, ∂ϕ/∂θ_k)ᵀ

Suppose ∇ϕ|_{θ=θ̂} ≠ 0 and τ̂ = ϕ(θ̂). Then

  (τ̂ − τ)/ŝe(τ̂) →ᴰ N(0, 1)

where ŝe(τ̂) = √((∇̂ϕ)ᵀ Ĵ_n (∇̂ϕ)), Ĵ_n = J_n(θ̂), and ∇̂ϕ = ∇ϕ|_{θ=θ̂}.

12.4 Parametric Bootstrap

Sample from f(x; θ̂_n) instead of from F̂_n, where θ̂_n could be the mle or method of moments estimator.

             Retain H₀            Reject H₀
  H₀ true                         Type I error (α)
  H₁ true    Type II error (β)    (power)

p-value
• p-value = sup_{θ∈Θ₀} P_θ[T(X) ≥ T(x)] = inf{α : T(x) ∈ R_α}
• p-value = sup_{θ∈Θ₀} P_θ[T(X*) ≥ T(X)] = inf{α : T(X) ∈ R_α}, where P_θ[T(X*) ≥ T(X)] = 1 − F_θ(T(X)) since T(X*) ~ F_θ

  p-value       evidence
  < 0.01        very strong evidence against H₀
  0.01 − 0.05   strong evidence against H₀
  0.05 − 0.1    weak evidence against H₀
  > 0.1         little or no evidence against H₀

Wald Test
• Two-sided test
• Reject H₀ when |W| > z_{α/2} where W = (θ̂ − θ₀)/ŝe
• P[|W| > z_{α/2}] → α
• p-value = P_{θ₀}[|W| > |w|] ≈ P[|Z| > |w|] = 2Φ(−|w|)
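A minimal sketch of the two-sided Wald test above, using the sample mean as θ̂ and s/√n as ŝe (both are illustrative choices, not prescribed by the text):

import numpy as np
from scipy import stats

def wald_test(x, theta0):
    # Two-sided Wald test for H0: theta = theta0 with the sample mean
    # as estimator and its estimated standard error.
    theta_hat = x.mean()
    se_hat = x.std(ddof=1) / np.sqrt(len(x))
    w = (theta_hat - theta0) / se_hat
    p_value = 2 * stats.norm.cdf(-abs(w))    # 2*Phi(-|w|)
    return w, p_value

rng = np.random.default_rng(0)
w, p = wald_test(rng.normal(loc=0.2, size=100), theta0=0.0)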
Likelihood Ratio Test (LRT)
• T(X) = sup_{θ∈Θ} L_n(θ) / sup_{θ∈Θ₀} L_n(θ) = L_n(θ̂_n)/L_n(θ̂_{n,0})
• λ(X) = 2 log T(X) →ᴰ χ²_{r−q}, where χ²_k = Σ_{i=1}^k Z_i² with Z₁, …, Z_k iid ~ N(0, 1)
• p-value = P_{θ₀}[λ(X) > λ(x)] ≈ P[χ²_{r−q} > λ(x)]

Multinomial LRT
• Let p̂_n = (X₁/n, …, X_k/n) be the mle
• T(X) = L_n(p̂_n)/L_n(p₀) = ∏_{j=1}^k (p̂_j/p₀_j)^{X_j}
• λ(X) = 2 Σ_{j=1}^k X_j log(p̂_j/p₀_j) →ᴰ χ²_{k−1}
• The approximate size α LRT rejects H₀ when λ(X) ≥ χ²_{k−1,α}

Pearson χ² Test
• T = Σ_{j=1}^k (X_j − E[X_j])²/E[X_j], where E[X_j] = np₀_j under H₀
• T →ᴰ χ²_{k−1}
• p-value = P[χ²_{k−1} > T(x)]
• The convergence to χ²_{k−1} is faster than for the LRT, hence preferable for small n

Independence Testing
• I rows, J columns, X multinomial sample of size n = I · J
• mles unconstrained: p̂_ij = X_ij/n
• mles under H₀: p̂⁰_ij = p̂_i· p̂_·j = (X_i·/n)(X_·j/n)
• LRT: λ = 2 Σ_{i=1}^I Σ_{j=1}^J X_ij log(nX_ij/(X_i· X_·j))
• Pearson χ²: T = Σ_{i=1}^I Σ_{j=1}^J (X_ij − E[X_ij])²/E[X_ij]
• LRT and Pearson →ᴰ χ²_ν, where ν = (I − 1)(J − 1)

14 Bayesian Inference

Definitions
• Xⁿ = (X₁, …, X_n)
• xⁿ = (x₁, …, x_n)
• Prior density f(θ)
• Likelihood f(xⁿ | θ): joint density of the data; in particular, Xⁿ iid ⟹ f(xⁿ | θ) = ∏_{i=1}^n f(x_i | θ) = L_n(θ)
• Posterior density f(θ | xⁿ)
• Normalizing constant c_n = f(xⁿ) = ∫ f(x | θ)f(θ) dθ
• Kernel: part of a density that depends on θ
• Posterior mean θ̄_n = ∫ θ f(θ | xⁿ) dθ = ∫ θL_n(θ)f(θ) dθ / ∫ L_n(θ)f(θ) dθ

Bayes' Theorem

  f(θ | xⁿ) = f(xⁿ | θ)f(θ)/f(xⁿ) = f(xⁿ | θ)f(θ)/∫ f(xⁿ | θ)f(θ) dθ ∝ L_n(θ)f(θ)

14.1 Credible Intervals

1 − α Posterior Interval
  P[θ ∈ (a, b) | xⁿ] = ∫_a^b f(θ | xⁿ) dθ = 1 − α

1 − α Equal-tail Credible Interval
  ∫_{−∞}^a f(θ | xⁿ) dθ = ∫_b^∞ f(θ | xⁿ) dθ = α/2

1 − α Highest Posterior Density (HPD) region R_n
  1. P[θ ∈ R_n] = 1 − α
  2. R_n = {θ : f(θ | xⁿ) > k} for some k
R_n unimodal ⟹ R_n is an interval

14.2 Function of Parameters

Let τ = ϕ(θ) and A = {θ : ϕ(θ) ≤ τ}.

Posterior CDF for τ
  H(τ | xⁿ) = P[ϕ(θ) ≤ τ | xⁿ] = ∫_A f(θ | xⁿ) dθ

Posterior Density
  h(τ | xⁿ) = H′(τ | xⁿ)

Bayesian Delta Method
  τ | Xⁿ ≈ N(ϕ(θ̂), ŝe ϕ′(θ̂))
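The multinomial LRT and Pearson statistics above are easy to compare numerically. A minimal sketch, assuming all observed counts are positive so the logarithms are defined:

import numpy as np
from scipy import stats

def multinomial_tests(x, p0):
    # LRT lambda = 2*sum x_j*log(p_hat_j/p0_j) and Pearson T,
    # both approximately chi-square with k-1 degrees of freedom.
    n = x.sum()
    p_hat = x / n
    lam = 2 * np.sum(x * np.log(p_hat / p0))
    t = np.sum((x - n * p0)**2 / (n * p0))
    df = len(x) - 1
    return stats.chi2.sf(lam, df), stats.chi2.sf(t, df)

p_lrt, p_pearson = multinomial_tests(np.array([30, 50, 20]),
                                     np.array([0.25, 0.5, 0.25]))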
14.3 Priors

Choice
• Subjective Bayesianism: the prior should incorporate as much detail as possible of the researcher's a priori knowledge, via prior elicitation.
• Objective Bayesianism: the prior should incorporate as little detail as possible (non-informative prior).
• Robust Bayesianism: consider various priors and determine the sensitivity of our inferences to changes in the prior.

Types
• Flat: f(θ) ∝ constant
• Proper: ∫_{−∞}^∞ f(θ) dθ = 1
• Improper: ∫_{−∞}^∞ f(θ) dθ = ∞
• Jeffreys' prior (transformation-invariant): f(θ) ∝ √(I(θ)),  f(θ) ∝ √(det(I(θ)))
• Conjugate: f(θ) and f(θ | xⁿ) belong to the same parametric family

14.3.1 Conjugate Priors

Discrete likelihood:

  Likelihood             Conjugate prior   Posterior hyperparameters
  Bernoulli(p)           Beta(α, β)        α + Σ_{i=1}^n x_i,  β + n − Σ_{i=1}^n x_i
  Binomial(p)            Beta(α, β)        α + Σ_{i=1}^n x_i,  β + Σ_{i=1}^n N_i − Σ_{i=1}^n x_i
  Negative Binomial(p)   Beta(α, β)        α + rn,  β + Σ_{i=1}^n x_i
  Poisson(λ)             Gamma(α, β)       α + Σ_{i=1}^n x_i,  β + n
  Multinomial(p)         Dirichlet(α)      α + Σ_{i=1}^n x^{(i)}
  Geometric(p)           Beta(α, β)        α + n,  β + Σ_{i=1}^n x_i

Continuous likelihood (subscript c denotes a constant):

  Likelihood         Conjugate prior                          Posterior hyperparameters
  Uniform(0, θ)      Pareto(x_m, k)                           max{x_(n), x_m},  k + n
  Exponential(λ)     Gamma(α, β)                              α + n,  β + Σ_{i=1}^n x_i
  Normal(μ, σ_c²)    Normal(μ₀, σ₀²)                          (μ₀/σ₀² + Σ_{i=1}^n x_i/σ_c²)/(1/σ₀² + n/σ_c²),  (1/σ₀² + n/σ_c²)^{−1}
  Normal(μ_c, σ²)    Scaled Inverse Chi-square(ν, σ₀²)        ν + n,  (νσ₀² + Σ_{i=1}^n (x_i − μ)²)/(ν + n)
  Normal(μ, σ²)      Normal-scaled Inverse Gamma(λ, ν, α, β)  (νλ + nx̄)/(ν + n),  ν + n,  α + n/2,  β + (1/2) Σ_{i=1}^n (x_i − x̄)² + nν(x̄ − λ)²/(2(n + ν))
  MVN(μ, Σ_c)        MVN(μ₀, Σ₀)                              (Σ₀^{−1} + nΣ_c^{−1})^{−1}(Σ₀^{−1}μ₀ + nΣ_c^{−1}x̄),  (Σ₀^{−1} + nΣ_c^{−1})^{−1}
  MVN(μ_c, Σ)        Inverse-Wishart(κ, Ψ)                    n + κ,  Ψ + Σ_{i=1}^n (x_i − μ_c)(x_i − μ_c)ᵀ
  Pareto(x_{mc}, k)  Gamma(α, β)                              α + n,  β + Σ_{i=1}^n log(x_i/x_{mc})
  Pareto(x_m, k_c)   Pareto(x₀, k₀)                           x₀,  k₀ − k_c n  (where k₀ > k_c n)
  Gamma(α_c, β)      Gamma(α₀, β₀)                            α₀ + nα_c,  β₀ + Σ_{i=1}^n x_i

14.4 Bayesian Testing

If H₀ : θ ∈ Θ₀:

  Prior probability      P[H₀] = ∫_{Θ₀} f(θ) dθ
  Posterior probability  P[H₀ | xⁿ] = ∫_{Θ₀} f(θ | xⁿ) dθ

Let H₀, …, H_{K−1} be K hypotheses and suppose θ ~ f(θ | H_k). Then

  P[H_k | xⁿ] = f(xⁿ | H_k) P[H_k] / Σ_{j=0}^{K−1} f(xⁿ | H_j) P[H_j]
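As an illustration of the Bernoulli row above, a minimal Beta-Bernoulli update (the flat Beta(1, 1) prior is an arbitrary choice for the example):

import numpy as np

def beta_bernoulli_update(alpha, beta, x):
    # Posterior is Beta(alpha + sum x_i, beta + n - sum x_i).
    s = x.sum()
    return alpha + s, beta + len(x) - s

rng = np.random.default_rng(1)
x = rng.binomial(1, 0.7, size=50)
a_post, b_post = beta_bernoulli_update(1.0, 1.0, x)
posterior_mean = a_post / (a_post + b_post)   # shrinks x-bar toward the prior mean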
Marginal Likelihood

  f(xⁿ | H_i) = ∫_Θ f(xⁿ | θ, H_i) f(θ | H_i) dθ

Posterior Odds (of H_i relative to H_j)

  P[H_i | xⁿ]/P[H_j | xⁿ] = (f(xⁿ | H_i)/f(xⁿ | H_j)) · (P[H_i]/P[H_j])   (Bayes factor BF_ij times prior odds)

Bayes Factor

  log₁₀ BF₁₀   BF₁₀       evidence
  0 − 0.5      1 − 3.2    Weak
  0.5 − 1      3.2 − 10   Moderate
  1 − 2        10 − 100   Strong
  > 2          > 100      Decisive

  p* = ((p/(1 − p)) BF₁₀) / (1 + (p/(1 − p)) BF₁₀),  where p = P[H₁] and p* = P[H₁ | xⁿ]

Bootstrap variance estimation:
1. Estimate V_F[T_n] with V_{F̂_n}[T_n].
2. Approximate V_{F̂_n}[T_n] using simulation:
  (a) Repeat the following B times to get T*_{n,1}, …, T*_{n,B}, an iid sample from the sampling distribution implied by F̂_n.
  (b) v_boot = V̂_{F̂_n} = (1/B) Σ_{b=1}^B (T*_{n,b} − (1/B) Σ_{r=1}^B T*_{n,r})²

16.1.1 Bootstrap Confidence Intervals

Normal-based Interval
  T_n ± z_{α/2} ŝe_boot
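A minimal sketch of the simulation step and the normal-based interval, for an arbitrary statistic such as the median (B and the example data are illustrative):

import numpy as np
from scipy.stats import norm

def bootstrap_normal_ci(x, stat, B=2000, alpha=0.05, seed=0):
    # Resample from F_hat_n (the empirical distribution) B times,
    # estimate se_boot, and form T_n +/- z_{alpha/2} * se_boot.
    rng = np.random.default_rng(seed)
    t_star = np.array([stat(rng.choice(x, size=len(x), replace=True))
                       for _ in range(B)])
    se_boot = t_star.std()            # sqrt(v_boot)
    z = norm.ppf(1 - alpha / 2)
    t_n = stat(x)
    return t_n - z * se_boot, t_n + z * se_boot

x = np.random.default_rng(2).exponential(scale=2.0, size=200)
lo, hi = bootstrap_normal_ci(x, np.median)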
• Algorithm (rejection sampling from the posterior, using the mle to bound the likelihood)
  1. Draw θ_cand ~ f(θ)
  2. Generate u ~ Unif(0, 1)
  3. Accept θ_cand if u ≤ L_n(θ_cand)/L_n(θ̂_n)

(Frequentist) Risk

  R(θ, θ̂) = ∫ L(θ, θ̂(x)) f(x | θ) dx = E_{X|θ}[L(θ, θ̂(X))]

Bayes Risk

  r(f, θ̂) = ∫∫ L(θ, θ̂(x)) f(x, θ) dx dθ = E_{θ,X}[L(θ, θ̂(X))]
  r(f, θ̂) = E_θ[E_{X|θ}[L(θ, θ̂(X))]] = E_θ[R(θ, θ̂)]
  r(f, θ̂) = E_X[E_{θ|X}[L(θ, θ̂(X))]] = E_X[r(θ̂ | X)]

16.3 Importance Sampling

Sample from an importance function g rather than the target density h. Algorithm to obtain an approximation to E[q(θ) | xⁿ]:
1. Sample from the prior: θ₁, …, θ_B iid ~ f(θ)
2. For each i = 1, …, B, calculate w_i = L_n(θ_i) / Σ_{j=1}^B L_n(θ_j)
3. E[q(θ) | xⁿ] ≈ Σ_{i=1}^B q(θ_i) w_i
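A minimal sketch of this recipe for a N(θ, 1) likelihood with a N(0, 5²) prior (both modeling choices are assumptions made for the example); the weights are normalized on the log scale for numerical stability:

import numpy as np
from scipy import stats

def posterior_mean_is(x, B=5000, seed=0):
    # Self-normalized importance sampling with g = prior:
    # draw theta_i from the prior and weight by the likelihood L_n(theta_i).
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, 5.0, size=B)
    log_lik = np.array([stats.norm.logpdf(x, mu, 1.0).sum() for mu in theta])
    w = np.exp(log_lik - log_lik.max())    # subtract max before exponentiating
    w /= w.sum()
    return np.sum(w * theta)

x = np.random.default_rng(3).normal(1.5, 1.0, size=30)
print(posterior_mean_is(x))               # close to the exact posterior mean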
17 Decision Theory

Definitions
• Unknown quantity affecting our decision: θ ∈ Θ

17.2 Admissibility
• θ̂′ dominates θ̂ if
  ∀θ : R(θ, θ̂′) ≤ R(θ, θ̂) and ∃θ : R(θ, θ̂′) < R(θ, θ̂)
• θ̂ is inadmissible if there is at least one other estimator θ̂′ that dominates it. Otherwise it is called admissible.
17.3 Bayes Rule

Bayes Rule (or Bayes Estimator)
• r(f, θ̂) = inf_{θ̃} r(f, θ̃)
• θ̂(x) = inf r(θ̂ | x) for all x ⟹ r(f, θ̂) = ∫ r(θ̂ | x) f(x) dx

Theorems
• Squared error loss: posterior mean
• Absolute error loss: posterior median
• Zero-one loss: posterior mode

17.4 Minimax Rules

Maximum Risk
  R̄(θ̂) = sup_θ R(θ, θ̂),  R̄(a) = sup_θ R(θ, a)

Minimax Rule
  sup_θ R(θ, θ̃) = inf_{θ̃} R̄(θ̃) = inf_{θ̃} sup_θ R(θ, θ̃)

  θ̂ = Bayes rule ∧ ∃c : R(θ, θ̂) = c ⟹ θ̂ is minimax

Least Favorable Prior
  θ̂_f = Bayes rule ∧ R(θ, θ̂_f) ≤ r(f, θ̂_f) ∀θ

18 Linear Regression

Definitions
• Response variable Y
• Covariate X (aka predictor variable or feature)

18.1 Simple Linear Regression

Model
  Y_i = β₀ + β₁X_i + ε_i,  E[ε_i | X_i] = 0,  V[ε_i | X_i] = σ²

Fitted Line
  r̂(x) = β̂₀ + β̂₁x

Predicted (Fitted) Values
  Ŷ_i = r̂(X_i)

Residuals
  ε̂_i = Y_i − Ŷ_i = Y_i − (β̂₀ + β̂₁X_i)

Residual Sums of Squares (rss)
  rss(β̂₀, β̂₁) = Σ_{i=1}^n ε̂_i²

Least Square Estimates
  β̂ᵀ = (β̂₀, β̂₁)ᵀ minimizes rss:

  β̂₀ = Ȳ_n − β̂₁X̄_n
  β̂₁ = Σ_{i=1}^n (X_i − X̄_n)(Y_i − Ȳ_n) / Σ_{i=1}^n (X_i − X̄_n)² = (Σ_{i=1}^n X_iY_i − nX̄Ȳ)/(Σ_{i=1}^n X_i² − nX̄²)

  E[β̂ | Xⁿ] = (β₀, β₁)ᵀ
  V[β̂ | Xⁿ] = (σ²/(n s_X²)) ( n^{−1} Σ_{i=1}^n X_i²   −X̄_n
                               −X̄_n                     1 )

  ŝe(β̂₀) = (σ̂/(s_X √n)) √(Σ_{i=1}^n X_i²/n)
  ŝe(β̂₁) = σ̂/(s_X √n)

where s_X² = n^{−1} Σ_{i=1}^n (X_i − X̄_n)² and σ̂² = (1/(n − 2)) Σ_{i=1}^n ε̂_i², an unbiased estimate of σ². Further properties:

• Consistency: β̂₀ →ᴾ β₀ and β̂₁ →ᴾ β₁
• Asymptotic normality:
  (β̂₀ − β₀)/ŝe(β̂₀) →ᴰ N(0, 1)  and  (β̂₁ − β₁)/ŝe(β̂₁) →ᴰ N(0, 1)
• Approximate 1 − α confidence intervals for β₀ and β₁ are
  β̂₀ ± z_{α/2} ŝe(β̂₀)  and  β̂₁ ± z_{α/2} ŝe(β̂₁)
• The Wald test for testing H₀ : β₁ = 0 vs. H₁ : β₁ ≠ 0 is: reject H₀ if |W| > z_{α/2} where W = β̂₁/ŝe(β̂₁).

R²
  R² = Σ_{i=1}^n (Ŷ_i − Ȳ)² / Σ_{i=1}^n (Y_i − Ȳ)² = 1 − Σ_{i=1}^n ε̂_i² / Σ_{i=1}^n (Y_i − Ȳ)² = 1 − rss/tss
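The closed-form estimates and standard errors above translate directly into code; a minimal sketch (the simulated data are illustrative):

import numpy as np

def simple_ols(x, y):
    # Least squares with the standard-error formulas above,
    # where s_X^2 = n^{-1} sum (X_i - X-bar)^2.
    n = len(x)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
    b0 = y.mean() - b1 * x.mean()
    resid = y - (b0 + b1 * x)
    sigma2_hat = np.sum(resid**2) / (n - 2)    # unbiased estimate of sigma^2
    s2x = np.mean((x - x.mean())**2)
    se_b1 = np.sqrt(sigma2_hat) / (np.sqrt(s2x) * np.sqrt(n))
    se_b0 = se_b1 * np.sqrt(np.mean(x**2))
    return b0, b1, se_b0, se_b1

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 100)
y = 1.0 + 2.0 * x + rng.normal(size=100)
b0, b1, se0, se1 = simple_ols(x, y)
w = b1 / se1                                   # Wald statistic for H0: beta1 = 0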
Likelihood

  L = ∏_{i=1}^n f(X_i, Y_i) = ∏_{i=1}^n f_X(X_i) × ∏_{i=1}^n f_{Y|X}(Y_i | X_i) = L₁ × L₂

  L₁ = ∏_{i=1}^n f_X(X_i)
  L₂ = ∏_{i=1}^n f_{Y|X}(Y_i | X_i) ∝ σ^{−n} exp{ −(1/(2σ²)) Σ_i (Y_i − (β₀ + β₁X_i))² }

Under the assumption of Normality, the least squares estimator is also the mle, with

  σ̂² = (1/n) Σ_{i=1}^n ε̂_i²

Multiple regression: if the (k × k) matrix XᵀX is invertible,

  β̂ = (XᵀX)^{−1}XᵀY
  V[β̂ | Xⁿ] = σ²(XᵀX)^{−1}
  β̂ ≈ N(β, σ²(XᵀX)^{−1})

Estimated regression function

  r̂(x) = Σ_{j=1}^k β̂_j x_j

Unbiased estimate for σ²

  σ̂² = (1/(n − k)) Σ_{i=1}^n ε̂_i²,  ε̂ = Xβ̂ − Y

The training error is a downward-biased estimate of the prediction risk:

  E[R̂_tr(S)] < R(S)
  bias(R̂_tr(S)) = E[R̂_tr(S) − R(S)] = −2 Σ_{i=1}^n Cov[Ŷ_i, Y_i]
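A minimal sketch of the normal-equations estimator β̂ = (XᵀX)^{−1}XᵀY and the variance estimates above (the design matrix is simulated for illustration):

import numpy as np

# Solving the normal equations directly; in practice np.linalg.lstsq is
# numerically preferable to forming the inverse explicitly.
rng = np.random.default_rng(5)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # intercept column
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # (X^T X)^{-1} X^T Y
resid = X @ beta_hat - y
sigma2_hat = resid @ resid / (n - k)           # unbiased estimate of sigma^2
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X) # estimate of V[beta_hat | X^n]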
Adjusted R²

  R̄²(S) = 1 − ((n − 1)/(n − k)) (rss/tss)

Mallow's C_p statistic

  R̂(S) = R̂_tr(S) + 2kσ̂² = lack of fit + complexity penalty

Akaike Information Criterion (AIC)

  AIC(S) = ℓ_n(β̂_S, σ̂_S²) − k

Bayesian Information Criterion (BIC)

  BIC(S) = ℓ_n(β̂_S, σ̂_S²) − (k/2) log n

Validation and Training

  R̂_V(S) = Σ_{i=1}^m (Ŷ_i*(S) − Y_i*)²,  m = |{validation data}|, often n/4 or n/2

Leave-one-out Cross-validation

  R̂_CV(S) = Σ_{i=1}^n (Y_i − Ŷ_{(i)})² = Σ_{i=1}^n ((Y_i − Ŷ_i(S))/(1 − U_ii(S)))²
  U(S) = X_S(X_SᵀX_S)^{−1}X_Sᵀ  ("hat matrix")

Frequentist Risk (for a density estimate f̂_n)

  R(f, f̂_n) = E[L(f, f̂_n)] = ∫ b²(x) dx + ∫ v(x) dx
  b(x) = E[f̂_n(x)] − f(x)
  v(x) = V[f̂_n(x)]

19.1.1 Histograms

Definitions
• Number of bins m
• Binwidth h = 1/m
• Bin B_j has ν_j observations
• Define p̂_j = ν_j/n and p_j = ∫_{B_j} f(u) du

Histogram Estimator

  f̂_n(x) = Σ_{j=1}^m (p̂_j/h) I(x ∈ B_j)

  E[f̂_n(x)] = p_j/h
  V[f̂_n(x)] = p_j(1 − p_j)/(nh²)
  R(f̂_n, f) ≈ (h²/12) ∫ (f′(u))² du + 1/(nh)
  h* = n^{−1/3} (6/∫ (f′(u))² du)^{1/3}
  R*(f̂_n, f) ≈ C/n^{2/3},  C = (3/4)^{2/3} (∫ (f′(u))² du)^{1/3}

Cross-validation estimate of E[J(h)]

  Ĵ_CV(h) = ∫ f̂_n²(x) dx − (2/n) Σ_{i=1}^n f̂_{(−i)}(X_i) = 2/((n − 1)h) − ((n + 1)/((n − 1)h)) Σ_{j=1}^m p̂_j²
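The closed-form Ĵ_CV(h) above can be used to pick the number of bins; a minimal sketch for data supported on [0, 1] (the Beta-distributed sample is illustrative):

import numpy as np

def histogram_cv_score(x, m):
    # J_CV(h) = 2/((n-1)h) - (n+1)/((n-1)h) * sum p_hat_j^2, with h = 1/m.
    n, h = len(x), 1.0 / m
    counts, _ = np.histogram(x, bins=m, range=(0.0, 1.0))
    p_hat = counts / n
    return 2 / ((n - 1) * h) - (n + 1) / ((n - 1) * h) * np.sum(p_hat**2)

x = np.random.default_rng(6).beta(2, 5, size=500)
best_m = min(range(1, 101), key=lambda m: histogram_cv_score(x, m))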
19.1.2 Kernel Density Estimator (KDE)

Kernel K
• K(x) ≥ 0
• ∫ K(x) dx = 1
• ∫ xK(x) dx = 0
• ∫ x²K(x) dx ≡ σ_K² > 0

KDE

  f̂_n(x) = (1/n) Σ_{i=1}^n (1/h) K((x − X_i)/h)

  R(f, f̂_n) ≈ (1/4)(hσ_K)⁴ ∫ (f″(x))² dx + (1/(nh)) ∫ K²(x) dx
  h* = c₁^{−2/5} c₂^{1/5} c₃^{−1/5} n^{−1/5},  c₁ = σ_K²,  c₂ = ∫ K²(x) dx,  c₃ = ∫ (f″(x))² dx
  R*(f, f̂_n) = c₄/n^{4/5},  c₄ = (5/4)(σ_K²)^{2/5} (∫ K²(x) dx)^{4/5} (∫ (f″)² dx)^{1/5}
  (the kernel-dependent factor (σ_K²)^{2/5}(∫ K²(x) dx)^{4/5} is denoted C(K))

k-nearest Neighbor Estimator

  r̂(x) = (1/k) Σ_{i : x_i ∈ N_k(x)} Y_i,  where N_k(x) = {k values of x₁, …, x_n closest to x}

Nadaraya-Watson Kernel Estimator

  r̂(x) = Σ_{i=1}^n w_i(x) Y_i
  w_i(x) = K((x − x_i)/h) / Σ_{j=1}^n K((x − x_j)/h) ∈ [0, 1]

  R(r̂_n, r) ≈ (h⁴/4) (∫ x²K(x) dx)² ∫ (r″(x) + 2r′(x) f′(x)/f(x))² dx + (σ² ∫ K²(x) dx) ∫ 1/(nh f(x)) dx
  h* ≈ c₁/n^{1/5}
  R*(r̂_n, r) ≈ c₂/n^{4/5}
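A minimal sketch of the Nadaraya-Watson estimator with a Gaussian kernel (kernel choice and bandwidth are illustrative assumptions):

import numpy as np

def nadaraya_watson(x0, x, y, h):
    # Weighted average of the Y_i with weights K((x0 - x_i)/h);
    # the normalization constant of the kernel cancels in the ratio.
    k = np.exp(-0.5 * ((x0 - x) / h)**2)
    return np.sum(k * y) / np.sum(k)

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + 0.3 * rng.normal(size=200)
fit = np.array([nadaraya_watson(x0, x, y, h=0.3) for x0 in x])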
The n-step transition matrices of a Markov chain satisfy

  P_{m+n} = P_m P_n  and  P_n = P × ⋯ × P = Pⁿ

Marginal probability
  μ_n = μ₀ Pⁿ,  where μ₀ is the initial distribution

Waiting times of a Poisson process with rate λ
  S_t ~ Exp(1/λ)

Sample autocorrelation function

  ρ̂(h) = γ̂(h)/γ̂(0)

Sample cross-covariance function

  γ̂_xy(h) = (1/n) Σ_{t=1}^{n−h} (x_{t+h} − x̄)(y_t − ȳ)
  ρ̂_xy(h) = γ̂_xy(h)/√(γ̂_x(0) γ̂_y(0))

Properties
• σ_{ρ̂_x}(h) = 1/√n if x_t is white noise
• σ_{ρ̂_xy}(h) = 1/√n if x_t or y_t is white noise

21.3 Non-Stationary Time Series

Classical decomposition model
  x_t = μ_t + s_t + w_t
• μ_t = trend
• s_t = seasonal component
• w_t = random noise term

Moving-average smoother
• If (1/(2k + 1)) Σ_{j=−k}^k w_{t−j} ≈ 0, a linear trend function μ_t = β₀ + β₁t passes without distortion

Differencing
• μ_t = β₀ + β₁t ⟹ ∇x_t = β₁

21.4 ARIMA models

Autoregressive polynomial
  φ(z) = 1 − φ₁z − ⋯ − φ_p z^p,  z ∈ ℂ ∧ φ_p ≠ 0

Autoregressive operator
  φ(B) = 1 − φ₁B − ⋯ − φ_p B^p

Autoregressive model of order p, AR(p)
  x_t = φ₁x_{t−1} + ⋯ + φ_p x_{t−p} + w_t ⟺ φ(B)x_t = w_t

AR(1)
• x_t = φ^k x_{t−k} + Σ_{j=0}^{k−1} φ^j w_{t−j} = Σ_{j=0}^∞ φ^j w_{t−j}  (as k → ∞ for |φ| < 1; a linear process)
• E[x_t] = Σ_{j=0}^∞ φ^j E[w_{t−j}] = 0
• γ(h) = Cov[x_{t+h}, x_t] = σ_w² φ^h/(1 − φ²)
• ρ(h) = γ(h)/γ(0) = φ^h
• ρ(h) = φρ(h − 1),  h = 1, 2, …
Moving average polynomial
  θ(z) = 1 + θ₁z + ⋯ + θ_q z^q,  z ∈ ℂ ∧ θ_q ≠ 0

Moving average operator
  θ(B) = 1 + θ₁B + ⋯ + θ_q B^q

MA(q) (moving average model of order q)
  x_t = w_t + θ₁w_{t−1} + ⋯ + θ_q w_{t−q} ⟺ x_t = θ(B)w_t

  E[x_t] = Σ_{j=0}^q θ_j E[w_{t−j}] = 0
  γ(h) = Cov[x_{t+h}, x_t] = σ_w² Σ_{j=0}^{q−h} θ_j θ_{j+h} for 0 ≤ h ≤ q;  0 for h > q

MA(1)
  x_t = w_t + θw_{t−1}
  γ(h) = (1 + θ²)σ_w² (h = 0);  θσ_w² (h = 1);  0 (h > 1)
  ρ(h) = θ/(1 + θ²) (h = 1);  0 (h > 1)

ARMA(p, q)
  x_t = φ₁x_{t−1} + ⋯ + φ_p x_{t−p} + w_t + θ₁w_{t−1} + ⋯ + θ_q w_{t−q}
  φ(B)x_t = θ(B)w_t

Partial autocorrelation function (PACF)
• x_i^{h−1}: regression of x_i on {x_{h−1}, x_{h−2}, …, x₁}
• φ_{hh} = corr(x_h − x_h^{h−1}, x₀ − x₀^{h−1}),  h ≥ 2
• E.g., φ₁₁ = corr(x₁, x₀) = ρ(1)

ARIMA(p, d, q)
  ∇^d x_t = (1 − B)^d x_t is ARMA(p, q)
  φ(B)(1 − B)^d x_t = θ(B)w_t

Exponentially Weighted Moving Average (EWMA)
  x_t = x_{t−1} + w_t − λw_{t−1}
  x_t = Σ_{j=1}^∞ (1 − λ)λ^{j−1} x_{t−j} + w_t  when |λ| < 1
  x̃_{n+1} = (1 − λ)x_n + λx̃_n

Seasonal ARIMA
• Denoted by ARIMA(p, d, q) × (P, D, Q)_s
• Φ_P(B^s)φ(B)∇_s^D ∇^d x_t = δ + Θ_Q(B^s)θ(B)w_t

21.4.1 Causality and Invertibility

ARMA(p, q) is causal (future-independent) ⟺ ∃{ψ_j} : Σ_{j=0}^∞ |ψ_j| < ∞ such that

  x_t = Σ_{j=0}^∞ ψ_j w_{t−j} = ψ(B)w_t

ARMA(p, q) is invertible ⟺ ∃{π_j} : Σ_{j=0}^∞ |π_j| < ∞ such that

  π(B)x_t = Σ_{j=0}^∞ π_j x_{t−j} = w_t

Properties
• ARMA(p, q) causal ⟺ roots of φ(z) lie outside the unit circle
  ψ(z) = Σ_{j=0}^∞ ψ_j z^j = θ(z)/φ(z),  |z| ≤ 1
• ARMA(p, q) invertible ⟺ roots of θ(z) lie outside the unit circle
  π(z) = Σ_{j=0}^∞ π_j z^j = φ(z)/θ(z),  |z| ≤ 1

Behavior of the ACF and PACF for causal and invertible ARMA models:

         AR(p)                  MA(q)                  ARMA(p, q)
  ACF    tails off              cuts off after lag q   tails off
  PACF   cuts off after lag p   tails off              tails off

21.5 Spectral Analysis

Periodic process
  x_t = A cos(2πωt + φ) = U₁ cos(2πωt) + U₂ sin(2πωt)
• Frequency index ω (cycles per unit time), period 1/ω
• Amplitude A
• Phase φ
• U₁ = A cos φ and U₂ = A sin φ, often normally distributed rv's

Periodic mixture
  x_t = Σ_{k=1}^q (U_{k1} cos(2πω_k t) + U_{k2} sin(2πω_k t))
• U_{k1}, U_{k2}, for k = 1, …, q, are independent zero-mean rv's with variances σ_k²
• γ(h) = Σ_{k=1}^q σ_k² cos(2πω_k h)
• γ(0) = E[x_t²] = Σ_{k=1}^q σ_k²

Spectral representation of a periodic process
  γ(h) = σ² cos(2πω₀h) = (σ²/2) e^{−2πiω₀h} + (σ²/2) e^{2πiω₀h}

Discrete Fourier Transform (DFT)
  d(ω_j) = n^{−1/2} Σ_{t=1}^n x_t e^{−2πiω_j t}

Fourier/Fundamental frequencies
  ω_j = j/n

Inverse DFT
  x_t = n^{−1/2} Σ_{j=0}^{n−1} d(ω_j) e^{2πiω_j t}
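Up to the t = 1, …, n vs. t = 0, …, n − 1 indexing convention, d(ω_j) is NumPy's FFT with an n^{−1/2} normalization; a minimal sketch verifying that the inverse DFT recovers x_t:

import numpy as np

rng = np.random.default_rng(9)
x = rng.normal(size=128)
n = len(x)
d = np.fft.fft(x) / np.sqrt(n)          # d(omega_j), omega_j = j/n
x_back = np.sqrt(n) * np.fft.ifft(d)    # inverse DFT
assert np.allclose(x_back.real, x)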
Series
• Σ_{k=0}^∞ p^k = 1/(1 − p)  and  Σ_{k=1}^∞ p^k = p/(1 − p),  |p| < 1
• Σ_{k=0}^∞ k p^{k−1} = (d/dp) Σ_{k=0}^∞ p^k = (d/dp)(1/(1 − p)) = 1/(1 − p)²,  |p| < 1
• Σ_{k=0}^∞ C(r + k − 1, k) x^k = (1 − x)^{−r},  r ∈ ℕ⁺
• Σ_{k=0}^∞ C(α, k) p^k = (1 + p)^α,  |p| < 1, α ∈ ℂ

Balls and urns (|B| = n balls, |U| = m urns; D = distinguishable, ¬D = indistinguishable; counting functions f : B → U; S(n, k) denotes the Stirling numbers of the 2nd kind and P_{n,k} the partitions of n into k parts; entries are 0 when the stated condition fails):

                 f arbitrary             f injective          f surjective      f bijective
  B: D, U: D     m^n                     m!/(m − n)! (m ≥ n)  S(n, m) m!        n! (m = n)
  B: ¬D, U: D    C(m + n − 1, n)         C(m, n)              C(n − 1, m − 1)   1 (m = n)
  B: D, U: ¬D    Σ_{k=1}^m S(n, k)       1 (m ≥ n)            S(n, m)           1 (m = n)
  B: ¬D, U: ¬D   Σ_{k=1}^m P_{n,k}       1 (m ≥ n)            P_{n,m}           1 (m = n)
22.2 Combinatorics
Sampling k out of n:

             w/o replacement                                w/ replacement
  ordered    n^(k) = ∏_{i=0}^{k−1} (n − i) = n!/(n − k)!    n^k
  unordered  C(n, k) = n!/(k!(n − k)!)                      C(n − 1 + k, k) = C(n − 1 + k, n − 1)

Stirling numbers, 2nd kind

  S(n, k) = k S(n − 1, k) + S(n − 1, k − 1),  1 ≤ k ≤ n
  S(n, 0) = 1 if n = 0, and 0 otherwise

Partitions

  P_{n+k,k} = Σ_{i=1}^k P_{n,i};  P_{n,k} = 0 for k > n;  P_{n,0} = 0 for n ≥ 1;  P_{0,0} = 1
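Both recurrences are natural to memoize; a minimal sketch (the partition function uses the equivalent two-term recurrence P_{n,k} = P_{n−1,k−1} + P_{n−k,k} rather than the summed form above):

from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # S(n, k) = k*S(n-1, k) + S(n-1, k-1)
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

@lru_cache(maxsize=None)
def partitions(n, k):
    # P_{n,k}: partitions of n into exactly k parts.
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0 or k > n:
        return 0
    return partitions(n - 1, k - 1) + partitions(n - k, k)

assert stirling2(4, 2) == 7
assert partitions(5, 2) == 2    # 4+1 and 3+2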
[Figure: Univariate distribution relationships, courtesy of Leemis and McQueston [2].]

References

[1] P. G. Hoel, S. C. Port, and C. J. Stone. Introduction to Probability Theory. Brooks Cole, 1972.
[2] L. M. Leemis and J. T. McQueston. Univariate Distribution Relationships. The American Statistician, 62(1):45–53, 2008.
[3] R. H. Shumway and D. S. Stoffer. Time Series Analysis and Its Applications With R Examples. Springer, 2006.
[4] A. Steger. Diskrete Strukturen – Band 1: Kombinatorik, Graphentheorie, Algebra. Springer, 2001.
[5] A. Steger. Diskrete Strukturen – Band 2: Wahrscheinlichkeitstheorie und Statistik. Springer, 2002.
[6] L. Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer, 2003.