
Probability and Statistics Cookbook

Copyright © Matthias Vallentin, 2011
vallentin@icir.org

12th December, 2011


This cookbook integrates a variety of topics in probability theory and statistics. It is based on literature [1, 6, 3] and in-class material from courses of the statistics department at the University of California in Berkeley, but is also influenced by other sources [4, 5]. If you find errors or have suggestions for further topics, I would appreciate it if you send me an email. The most recent version of this document is available at http://matthias.vallentin.net/probability-and-statistics-cookbook/. To reproduce, please contact me.

Contents

1 Distribution Overview
   1.1 Discrete Distributions
   1.2 Continuous Distributions
2 Probability Theory
3 Random Variables
   3.1 Transformations
4 Expectation
5 Variance
6 Inequalities
7 Distribution Relationships
8 Probability and Moment Generating Functions
9 Multivariate Distributions
   9.1 Standard Bivariate Normal
   9.2 Bivariate Normal
   9.3 Multivariate Normal
10 Convergence
   10.1 Law of Large Numbers (LLN)
   10.2 Central Limit Theorem (CLT)
11 Statistical Inference
   11.1 Point Estimation
   11.2 Normal-Based Confidence Interval
   11.3 Empirical Distribution
   11.4 Statistical Functionals
12 Parametric Inference
   12.1 Method of Moments
   12.2 Maximum Likelihood
      12.2.1 Delta Method
   12.3 Multiparameter Models
      12.3.1 Multiparameter Delta Method
   12.4 Parametric Bootstrap
13 Hypothesis Testing
14 Bayesian Inference
   14.1 Credible Intervals
   14.2 Function of Parameters
   14.3 Priors
      14.3.1 Conjugate Priors
   14.4 Bayesian Testing
15 Exponential Family
16 Sampling Methods
   16.1 The Bootstrap
      16.1.1 Bootstrap Confidence Intervals
   16.2 Rejection Sampling
   16.3 Importance Sampling
17 Decision Theory
   17.1 Risk
   17.2 Admissibility
   17.3 Bayes Rule
   17.4 Minimax Rules
18 Linear Regression
   18.1 Simple Linear Regression
   18.2 Prediction
   18.3 Multiple Regression
   18.4 Model Selection
19 Non-parametric Function Estimation
   19.1 Density Estimation
      19.1.1 Histograms
      19.1.2 Kernel Density Estimator (KDE)
   19.2 Non-parametric Regression
   19.3 Smoothing Using Orthogonal Functions
20 Stochastic Processes
   20.1 Markov Chains
   20.2 Poisson Processes
21 Time Series
   21.1 Stationary Time Series
   21.2 Estimation of Correlation
   21.3 Non-Stationary Time Series
      21.3.1 Detrending
   21.4 ARIMA Models
      21.4.1 Causality and Invertibility
   21.5 Spectral Analysis
22 Math
   22.1 Gamma Function
   22.2 Beta Function
   22.3 Series
   22.4 Combinatorics
1 Distribution Overview
1.1 Discrete Distributions
For each distribution: notation, CDF F(x), PMF f(x), mean E[X], variance V[X], and MGF M_X(s).

Uniform, Unif{a, ..., b}
  F(x) = 0 for x < a;  (⌊x⌋ − a + 1)/(b − a + 1) for a ≤ x ≤ b;  1 for x > b
  f(x) = I(a ≤ x ≤ b)/(b − a + 1)
  E[X] = (a + b)/2    V[X] = ((b − a + 1)² − 1)/12    M_X(s) = (e^{as} − e^{(b+1)s})/(s(b − a))

Bernoulli, Bern(p)
  f(x) = p^x (1 − p)^{1−x}
  E[X] = p    V[X] = p(1 − p)    M_X(s) = 1 − p + p e^s

Binomial, Bin(n, p)
  F(x) = I_{1−p}(n − x, x + 1)
  f(x) = C(n, x) p^x (1 − p)^{n−x}
  E[X] = np    V[X] = np(1 − p)    M_X(s) = (1 − p + p e^s)^n

Multinomial, Mult(n, p)
  f(x) = (n!/(x_1! ··· x_k!)) p_1^{x_1} ··· p_k^{x_k},  Σ_{i=1}^k x_i = n
  E[X_i] = n p_i    V[X_i] = n p_i (1 − p_i)    M_X(s) = (Σ_{i=1}^k p_i e^{s_i})^n

Hypergeometric, Hyp(N, m, n)
  f(x) = C(m, x) C(N − m, n − x)/C(N, n)
  E[X] = nm/N    V[X] = nm(N − n)(N − m)/(N²(N − 1))

Negative Binomial, NBin(r, p)
  F(x) = I_p(r, x + 1)
  f(x) = C(x + r − 1, r − 1) p^r (1 − p)^x
  E[X] = r(1 − p)/p    V[X] = r(1 − p)/p²    M_X(s) = (p/(1 − (1 − p)e^s))^r

Geometric, Geo(p)
  F(x) = 1 − (1 − p)^x,  x ∈ N⁺
  f(x) = p(1 − p)^{x−1},  x ∈ N⁺
  E[X] = 1/p    V[X] = (1 − p)/p²    M_X(s) = p e^s/(1 − (1 − p)e^s)

Poisson, Po(λ)
  F(x) = e^{−λ} Σ_{i=0}^{⌊x⌋} λ^i/i!
  f(x) = λ^x e^{−λ}/x!
  E[X] = λ    V[X] = λ    M_X(s) = e^{λ(e^s − 1)}

[Figure: PMFs of the discrete Uniform, Binomial (n = 40, p = 0.3; n = 30, p = 0.6; n = 25, p = 0.9), Geometric (p = 0.2, 0.5, 0.8), and Poisson (λ = 1, 4, 10) distributions.]

Note: we use γ(s, x) and Γ(x) to refer to the Gamma functions (see 22.1), and use B(x, y) and I_x(a, b) to refer to the Beta functions (see 22.2).
1.2 Continuous Distributions
For each distribution: notation, CDF F(x), PDF f(x), mean E[X], variance V[X], and MGF M_X(s).

Uniform, Unif(a, b)
  F(x) = 0 for x < a;  (x − a)/(b − a) for a < x < b;  1 for x > b
  f(x) = I(a < x < b)/(b − a)
  E[X] = (a + b)/2    V[X] = (b − a)²/12    M_X(s) = (e^{sb} − e^{sa})/(s(b − a))

Normal, N(μ, σ²)
  F(x) = Φ(x) = ∫_{−∞}^x φ(t) dt
  f(x) = φ(x) = (1/(σ√(2π))) exp{−(x − μ)²/(2σ²)}
  E[X] = μ    V[X] = σ²    M_X(s) = exp{μs + σ²s²/2}

Log-Normal, ln N(μ, σ²)
  F(x) = 1/2 + (1/2) erf[(ln x − μ)/(√2 σ)]
  f(x) = (1/(x σ√(2π))) exp{−(ln x − μ)²/(2σ²)}
  E[X] = e^{μ + σ²/2}    V[X] = (e^{σ²} − 1) e^{2μ + σ²}

Multivariate Normal, MVN(μ, Σ)
  f(x) = (2π)^{−k/2} |Σ|^{−1/2} exp{−(1/2)(x − μ)^T Σ^{−1} (x − μ)}
  E[X] = μ    V[X] = Σ    M_X(s) = exp{μ^T s + (1/2) s^T Σ s}

Student's t, Student(ν)
  f(x) = (Γ((ν + 1)/2)/(√(νπ) Γ(ν/2))) (1 + x²/ν)^{−(ν+1)/2}
  E[X] = 0 (ν > 1)    V[X] = ν/(ν − 2) (ν > 2)

Chi-square, χ²_k
  F(x) = γ(k/2, x/2)/Γ(k/2)
  f(x) = x^{k/2−1} e^{−x/2}/(2^{k/2} Γ(k/2))
  E[X] = k    V[X] = 2k    M_X(s) = (1 − 2s)^{−k/2}  (s < 1/2)

F, F(d_1, d_2)
  F(x) = I_{d_1 x/(d_1 x + d_2)}(d_1/2, d_2/2)
  f(x) = √((d_1 x)^{d_1} d_2^{d_2}/(d_1 x + d_2)^{d_1+d_2}) / (x B(d_1/2, d_2/2))
  E[X] = d_2/(d_2 − 2)    V[X] = 2d_2²(d_1 + d_2 − 2)/(d_1(d_2 − 2)²(d_2 − 4))

Exponential, Exp(β)
  F(x) = 1 − e^{−x/β}
  f(x) = (1/β) e^{−x/β}
  E[X] = β    V[X] = β²    M_X(s) = 1/(1 − βs)  (s < 1/β)

Gamma, Gamma(α, β)
  F(x) = γ(α, x/β)/Γ(α)
  f(x) = x^{α−1} e^{−x/β}/(Γ(α) β^α)
  E[X] = αβ    V[X] = αβ²    M_X(s) = (1/(1 − βs))^α  (s < 1/β)

Inverse Gamma, InvGamma(α, β)
  F(x) = Γ(α, β/x)/Γ(α)
  f(x) = (β^α/Γ(α)) x^{−α−1} e^{−β/x}
  E[X] = β/(α − 1) (α > 1)    V[X] = β²/((α − 1)²(α − 2)) (α > 2)
  M_X(s) = (2(−βs)^{α/2}/Γ(α)) K_α(√(−4βs))

Dirichlet, Dir(α)
  f(x) = (Γ(Σ_{i=1}^k α_i)/Π_{i=1}^k Γ(α_i)) Π_{i=1}^k x_i^{α_i − 1}
  E[X_i] = α_i/Σ_{j=1}^k α_j    V[X_i] = E[X_i](1 − E[X_i])/(Σ_{i=1}^k α_i + 1)

Beta, Beta(α, β)
  F(x) = I_x(α, β)
  f(x) = (Γ(α + β)/(Γ(α)Γ(β))) x^{α−1} (1 − x)^{β−1}
  E[X] = α/(α + β)    V[X] = αβ/((α + β)²(α + β + 1))
  M_X(s) = 1 + Σ_{k=1}^∞ (Π_{r=0}^{k−1} (α + r)/(α + β + r)) s^k/k!

Weibull, Weibull(λ, k)
  F(x) = 1 − e^{−(x/λ)^k}
  f(x) = (k/λ)(x/λ)^{k−1} e^{−(x/λ)^k}
  E[X] = λ Γ(1 + 1/k)    V[X] = λ² Γ(1 + 2/k) − μ²
  M_X(s) = Σ_{n=0}^∞ (s^n λ^n/n!) Γ(1 + n/k)

Pareto, Pareto(x_m, α)
  F(x) = 1 − (x_m/x)^α  (x ≥ x_m)
  f(x) = α x_m^α/x^{α+1}  (x ≥ x_m)
  E[X] = α x_m/(α − 1) (α > 1)    V[X] = x_m² α/((α − 1)²(α − 2)) (α > 2)
  M_X(s) = α(−x_m s)^α Γ(−α, −x_m s)  (s < 0)
[Figures: PDFs of the continuous Uniform, Normal, Log-Normal, Student's t, χ², F, Exponential, Gamma, Inverse Gamma, Beta, Weibull, and Pareto distributions for several parameter choices.]
2 Probability Theory

Definitions
- Sample space Ω
- Outcome (point or element) ω ∈ Ω
- Event A ⊆ Ω
- σ-algebra A:
  1. ∅ ∈ A
  2. A_1, A_2, ... ∈ A ⟹ ∪_{i=1}^∞ A_i ∈ A
  3. A ∈ A ⟹ Aᶜ ∈ A
- Probability distribution P:
  1. P[A] ≥ 0 for every A
  2. P[Ω] = 1
  3. P[⊔_{i=1}^∞ A_i] = Σ_{i=1}^∞ P[A_i]  (disjoint A_i)
- Probability space (Ω, A, P)

Properties
- P[∅] = 0
- B = Ω ∩ B = (A ∪ Aᶜ) ∩ B = (A ∩ B) ∪ (Aᶜ ∩ B)
- P[Aᶜ] = 1 − P[A]
- P[B] = P[A ∩ B] + P[Aᶜ ∩ B]
- P[Ω] = 1,  P[∅] = 0
- (∪_n A_n)ᶜ = ∩_n A_nᶜ and (∩_n A_n)ᶜ = ∪_n A_nᶜ  (DeMorgan)
- P[∪_n A_n] = 1 − P[∩_n A_nᶜ]
- P[A ∪ B] = P[A] + P[B] − P[A ∩ B]  ⟹  P[A ∪ B] ≤ P[A] + P[B]
- P[A ∪ B] = P[A ∩ Bᶜ] + P[Aᶜ ∩ B] + P[A ∩ B]
- P[A ∩ Bᶜ] = P[A] − P[A ∩ B]

Continuity of Probabilities
- A_1 ⊂ A_2 ⊂ ... ⟹ lim_n P[A_n] = P[A] where A = ∪_{i=1}^∞ A_i
- A_1 ⊃ A_2 ⊃ ... ⟹ lim_n P[A_n] = P[A] where A = ∩_{i=1}^∞ A_i

Independence
A ⊥ B ⟺ P[A ∩ B] = P[A] P[B]

Conditional Probability
P[A | B] = P[A ∩ B]/P[B]   (P[B] > 0)

Law of Total Probability
P[B] = Σ_{i=1}^n P[B | A_i] P[A_i]   where Ω = ⊔_{i=1}^n A_i

Bayes' Theorem
P[A_i | B] = P[B | A_i] P[A_i] / Σ_{j=1}^n P[B | A_j] P[A_j]   where Ω = ⊔_{i=1}^n A_i

Inclusion-Exclusion Principle
|∪_{i=1}^n A_i| = Σ_{r=1}^n (−1)^{r−1} Σ_{i_1 < ··· < i_r} |A_{i_1} ∩ ··· ∩ A_{i_r}|
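As a quick numerical check of the law of total probability and Bayes' theorem above, here is a minimal Python sketch; the diagnostic-test numbers (prevalence and error rates) are made up purely for illustration.

import numpy as np  # not strictly needed here; kept for consistency with later sketches

p_d = 0.01        # prior P[D]
p_pos_d = 0.95    # P[+ | D]
p_pos_nd = 0.05   # P[+ | not D]

# Law of total probability: P[+] = P[+|D]P[D] + P[+|not D]P[not D]
p_pos = p_pos_d * p_d + p_pos_nd * (1 - p_d)

# Bayes' theorem: P[D | +] = P[+|D]P[D] / P[+]
p_d_pos = p_pos_d * p_d / p_pos
print(p_pos, p_d_pos)   # 0.059, ~0.161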
3 Random Variables

Random Variable (RV): X : Ω → R

Probability Mass Function (PMF): f_X(x) = P[X = x] = P[{ω ∈ Ω : X(ω) = x}]

Probability Density Function (PDF): P[a ≤ X ≤ b] = ∫_a^b f(x) dx

Cumulative Distribution Function (CDF): F_X : R → [0, 1],  F_X(x) = P[X ≤ x]
1. Nondecreasing: x_1 < x_2 ⟹ F(x_1) ≤ F(x_2)
2. Normalized: lim_{x→−∞} F(x) = 0 and lim_{x→∞} F(x) = 1
3. Right-continuous: lim_{y↓x} F(y) = F(x)

Conditional density: f_{Y|X}(y | x) = f(x, y)/f_X(x) and P[a ≤ Y ≤ b | X = x] = ∫_a^b f_{Y|X}(y | x) dy  (a ≤ b)

Independence
1. P[X ≤ x, Y ≤ y] = P[X ≤ x] P[Y ≤ y]
2. f_{X,Y}(x, y) = f_X(x) f_Y(y)

3.1 Transformations

Transformation function: Z = φ(X)
Discrete: f_Z(z) = P[φ(X) = z] = P[{x : φ(x) = z}] = P[X ∈ φ^{−1}(z)] = Σ_{x ∈ φ^{−1}(z)} f(x)
Continuous: F_Z(z) = P[φ(X) ≤ z] = ∫_{A_z} f(x) dx with A_z = {x : φ(x) ≤ z}
Special case if φ strictly monotone:
f_Z(z) = f_X(φ^{−1}(z)) |d φ^{−1}(z)/dz| = f_X(x) |dx/dz| = f_X(x)/|J|

The Rule of the Lazy Statistician
E[Z] = ∫ φ(x) dF_X(x)
E[I_A(X)] = ∫ I_A(x) dF_X(x) = ∫_A dF_X(x) = P[X ∈ A]

Convolution
- Z := X + Y:  f_Z(z) = ∫_{−∞}^{∞} f_{X,Y}(x, z − x) dx  (= ∫_0^z f_{X,Y}(x, z − x) dx if X, Y ≥ 0)
- Z := |X − Y|:  f_Z(z) = 2 ∫_0^{∞} f_{X,Y}(x, z + x) dx
- Z := X/Y:  f_Z(z) = ∫_{−∞}^{∞} |x| f_{X,Y}(x, xz) dx

4 Expectation

Definition and properties
E[X] = μ_X = ∫ x dF_X(x) = Σ_x x f_X(x) (X discrete)  or  ∫ x f_X(x) dx (X continuous)
- P[X = c] = 1 ⟹ E[c] = c
- E[cX] = c E[X]
- E[X + Y] = E[X] + E[Y]
- E[XY] = ∫∫ xy f_{X,Y}(x, y) dx dy
- E[φ(Y)] ≠ φ(E[Y]) in general (cf. Jensen inequality)
- P[X ≥ Y] = 1 ⟹ E[X] ≥ E[Y];  P[X = Y] = 1 ⟹ E[X] = E[Y]
- E[X] = Σ_{x=1}^∞ P[X ≥ x]  (X taking values in the nonnegative integers)
Sample mean: X̄_n = (1/n) Σ_{i=1}^n X_i
Conditional expectation
- E[Y | X = x] = ∫ y f(y | x) dy
- E[X] = E[E[X | Y]]
- E[φ(X, Y) | X = x] = ∫ φ(x, y) f_{Y|X}(y | x) dy
- E[φ(Y, Z) | X = x] = ∫∫ φ(y, z) f_{(Y,Z)|X}(y, z | x) dy dz
- E[Y + Z | X] = E[Y | X] + E[Z | X]
- E[φ(X)Y | X] = φ(X) E[Y | X]
- E[Y | X] = c ⟹ Cov[X, Y] = 0

5 Variance

Definition and properties
V[X] = σ_X² = E[(X − E[X])²] = E[X²] − E[X]²
V[Σ_{i=1}^n X_i] = Σ_{i=1}^n V[X_i] + 2 Σ_{i≠j} Cov[X_i, X_j]
V[Σ_{i=1}^n X_i] = Σ_{i=1}^n V[X_i]  if the X_i are independent
Standard deviation: sd[X] = √V[X] = σ_X
Covariance
Cov[X, Y] = E[(X − E[X])(Y − E[Y])] = E[XY] − E[X] E[Y]
- Cov[X, a] = 0
- Cov[X, X] = V[X]
- Cov[X, Y] = Cov[Y, X]
- Cov[aX, bY] = ab Cov[X, Y]
- Cov[X + a, Y + b] = Cov[X, Y]
- Cov[Σ_{i=1}^n X_i, Σ_{j=1}^m Y_j] = Σ_{i=1}^n Σ_{j=1}^m Cov[X_i, Y_j]
Correlation
ρ[X, Y] = Cov[X, Y]/√(V[X] V[Y])
Independence
X ⊥ Y ⟹ ρ[X, Y] = 0 ⟺ Cov[X, Y] = 0 ⟺ E[XY] = E[X] E[Y]
Sample variance
S² = (1/(n − 1)) Σ_{i=1}^n (X_i − X̄_n)²
Conditional variance
V[Y | X] = E[(Y − E[Y | X])² | X] = E[Y² | X] − E[Y | X]²
V[Y] = E[V[Y | X]] + V[E[Y | X]]

6 Inequalities

- Cauchy-Schwarz: E[XY]² ≤ E[X²] E[Y²]
- Markov: P[φ(X) ≥ t] ≤ E[φ(X)]/t
- Chebyshev: P[|X − E[X]| ≥ t] ≤ V[X]/t²
- Chernoff: P[X ≥ (1 + δ)μ] ≤ (e^δ/(1 + δ)^{1+δ})^μ,  δ > −1
- Jensen: E[φ(X)] ≥ φ(E[X]) for convex φ

7 Distribution Relationships

Binomial
- X_i ∼ Bern(p) ⟹ Σ_{i=1}^n X_i ∼ Bin(n, p)
- X ∼ Bin(n, p), Y ∼ Bin(m, p) independent ⟹ X + Y ∼ Bin(n + m, p)
- lim_{n→∞} Bin(n, p) = Po(np)  (n large, p small)
- lim_{n→∞} Bin(n, p) = N(np, np(1 − p))  (n large, p far from 0 and 1)
Negative Binomial
- X ∼ NBin(1, p) = Geo(p)
- X ∼ NBin(r, p) = Σ_{i=1}^r Geo(p)
- X_i ∼ NBin(r_i, p) ⟹ Σ_i X_i ∼ NBin(Σ_i r_i, p)
- X ∼ NBin(r, p), Y ∼ Bin(s + r, p) ⟹ P[X ≤ s] = P[Y ≥ r]
Poisson
- X_i ∼ Po(λ_i), X_i ⊥ X_j ⟹ Σ_{i=1}^n X_i ∼ Po(Σ_{i=1}^n λ_i)
- X_i ∼ Po(λ_i), X_i ⊥ X_j ⟹ X_i | Σ_{j=1}^n X_j ∼ Bin(Σ_{j=1}^n x_j, λ_i/Σ_{j=1}^n λ_j)
Exponential
- X_i ∼ Exp(β), X_i ⊥ X_j ⟹ Σ_{i=1}^n X_i ∼ Gamma(n, β)
- Memoryless property: P[X > x + y | X > y] = P[X > x]
Normal
- X ∼ N(μ, σ²) ⟹ (X − μ)/σ ∼ N(0, 1)
- X ∼ N(μ, σ²), Z = aX + b ⟹ Z ∼ N(aμ + b, a²σ²)
- X ∼ N(μ_1, σ_1²), Y ∼ N(μ_2, σ_2²) independent ⟹ X + Y ∼ N(μ_1 + μ_2, σ_1² + σ_2²)
- X_i ∼ N(μ_i, σ_i²) independent ⟹ Σ_i X_i ∼ N(Σ_i μ_i, Σ_i σ_i²)
- P[a < X ≤ b] = Φ((b − μ)/σ) − Φ((a − μ)/σ)
- Φ(−x) = 1 − Φ(x),  φ′(x) = −xφ(x),  φ″(x) = (x² − 1)φ(x)
- Upper quantile of N(0, 1): z_α = Φ^{−1}(1 − α)
Gamma
- X ∼ Gamma(α, β) ⟺ X/β ∼ Gamma(α, 1)
- Gamma(α, β) corresponds to Σ_{i=1}^α Exp(β) for integer α
- X_i ∼ Gamma(α_i, β), X_i ⊥ X_j ⟹ Σ_i X_i ∼ Gamma(Σ_i α_i, β)
- Γ(α) = ∫_0^∞ x^{α−1} e^{−x} dx
Beta
- f(x) = (1/B(α, β)) x^{α−1}(1 − x)^{β−1} = (Γ(α + β)/(Γ(α)Γ(β))) x^{α−1}(1 − x)^{β−1}
- E[X^k] = B(α + k, β)/B(α, β) = ((α + k − 1)/(α + β + k − 1)) E[X^{k−1}]
- Beta(1, 1) ∼ Unif(0, 1)
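A minimal numpy simulation sketch to sanity-check two of the relationships in section 7; all parameter values are arbitrary illustrations.

import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 30, 0.4, 100_000

# Sum of n Bernoulli(p) draws behaves like Bin(n, p): mean np, variance np(1-p).
bern_sums = rng.binomial(1, p, size=(reps, n)).sum(axis=1)
print(bern_sums.mean(), n * p)            # ~12.0 vs 12.0
print(bern_sums.var(), n * p * (1 - p))   # ~7.2 vs 7.2

# Sum of independent Poissons is Poisson with the summed rate.
lam = np.array([1.0, 2.5, 4.0])
pois_sums = rng.poisson(lam, size=(reps, 3)).sum(axis=1)
print(pois_sums.mean(), lam.sum())        # ~7.5 vs 7.5
print(pois_sums.var(), lam.sum())         # ~7.5 vs 7.5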
8 Probability and Moment Generating Functions

- G_X(t) = E[t^X],  |t| < 1
- M_X(t) = G_X(e^t) = E[e^{Xt}] = E[Σ_{i=0}^∞ (Xt)^i/i!] = Σ_{i=0}^∞ t^i E[X^i]/i!
- P[X = 0] = G_X(0)
- P[X = 1] = G_X′(0)
- P[X = i] = G_X^{(i)}(0)/i!
- E[X] = G_X′(1⁻)
- E[X^k] = M_X^{(k)}(0)
- E[X!/(X − k)!] = G_X^{(k)}(1⁻)
- V[X] = G_X″(1⁻) + G_X′(1⁻) − (G_X′(1⁻))²
- G_X(t) = G_Y(t) ⟹ X and Y have the same distribution

9 Multivariate Distributions

9.1 Standard Bivariate Normal
Let X, Z ∼ N(0, 1) with X ⊥ Z, and set Y = ρX + √(1 − ρ²) Z.
Joint density: f(x, y) = (1/(2π√(1 − ρ²))) exp{−(x² + y² − 2ρxy)/(2(1 − ρ²))}
Conditionals: (Y | X = x) ∼ N(ρx, 1 − ρ²) and (X | Y = y) ∼ N(ρy, 1 − ρ²)
Independence: X ⊥ Y ⟺ ρ = 0

9.2 Bivariate Normal
Let X ∼ N(μ_x, σ_x²) and Y ∼ N(μ_y, σ_y²).
f(x, y) = (1/(2πσ_x σ_y √(1 − ρ²))) exp{−z/(2(1 − ρ²))}
z = ((x − μ_x)/σ_x)² + ((y − μ_y)/σ_y)² − 2ρ((x − μ_x)/σ_x)((y − μ_y)/σ_y)
Conditional mean and variance:
E[X | Y] = E[X] + ρ(σ_X/σ_Y)(Y − E[Y])
V[X | Y] = σ_X √(1 − ρ²)

9.3 Multivariate Normal
Covariance matrix Σ (precision matrix Σ^{−1}):
Σ = [ V[X_1] ··· Cov[X_1, X_k] ; ··· ; Cov[X_k, X_1] ··· V[X_k] ]
If X ∼ N(μ, Σ):
f_X(x) = (2π)^{−n/2} |Σ|^{−1/2} exp{−(1/2)(x − μ)^T Σ^{−1} (x − μ)}
Properties:
- Z ∼ N(0, 1), X = μ + Σ^{1/2} Z ⟹ X ∼ N(μ, Σ)
- X ∼ N(μ, Σ) ⟹ Σ^{−1/2}(X − μ) ∼ N(0, 1)
- X ∼ N(μ, Σ) ⟹ AX ∼ N(Aμ, AΣA^T)
- X ∼ N(μ, Σ), a a vector of the same length ⟹ a^T X ∼ N(a^T μ, a^T Σ a)
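A minimal numpy sketch of the property X = μ + Σ^{1/2} Z from 9.3, using a Cholesky factor as the matrix square root; the μ and Σ values are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

L = np.linalg.cholesky(Sigma)          # Sigma = L @ L.T
Z = rng.standard_normal((100_000, 2))  # iid N(0, 1) draws
X = mu + Z @ L.T                       # each row ~ N(mu, Sigma)

print(X.mean(axis=0))   # ~ mu
print(np.cov(X.T))      # ~ Sigma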
10 Convergence

Let {X_1, X_2, ...} be a sequence of rvs and let X be another rv. Let F_n denote the cdf of X_n and let F denote the cdf of X.

Types of convergence
1. In distribution (weakly, in law), X_n →(D) X:  lim_n F_n(t) = F(t) at all t where F is continuous
2. In probability, X_n →(P) X:  (∀ε > 0) lim_n P[|X_n − X| > ε] = 0
3. Almost surely (strongly), X_n →(as) X:  P[lim_n X_n = X] = P[{ω ∈ Ω : lim_n X_n(ω) = X(ω)}] = 1
4. In quadratic mean (L²), X_n →(qm) X:  lim_n E[(X_n − X)²] = 0

Relationships
- X_n →(qm) X ⟹ X_n →(P) X ⟹ X_n →(D) X
- X_n →(as) X ⟹ X_n →(P) X
- X_n →(D) X and (∃c ∈ R) P[X = c] = 1 ⟹ X_n →(P) X
- X_n →(P) X and Y_n →(P) Y ⟹ X_n + Y_n →(P) X + Y
- X_n →(qm) X and Y_n →(qm) Y ⟹ X_n + Y_n →(qm) X + Y
- X_n →(P) X and Y_n →(P) Y ⟹ X_n Y_n →(P) XY
- X_n →(P) X ⟹ φ(X_n) →(P) φ(X)
- X_n →(D) X ⟹ φ(X_n) →(D) φ(X)
- X_n →(qm) b ⟺ lim_n E[X_n] = b and lim_n V[X_n] = 0
- X_1, ..., X_n iid, E[X] = μ, V[X] < ∞ ⟹ X̄_n →(qm) μ

Slutsky's Theorem
- X_n →(D) X and Y_n →(P) c ⟹ X_n + Y_n →(D) X + c
- X_n →(D) X and Y_n →(P) c ⟹ X_n Y_n →(D) cX
- In general, X_n →(D) X and Y_n →(D) Y does not imply X_n + Y_n →(D) X + Y

10.1 Law of Large Numbers (LLN)
Let {X_1, ..., X_n} be a sequence of iid rvs with E[X_1] = μ and V[X_1] < ∞.
- Weak (WLLN): X̄_n →(P) μ as n → ∞
- Strong (SLLN): X̄_n →(as) μ as n → ∞

10.2 Central Limit Theorem (CLT)
Let {X_1, ..., X_n} be a sequence of iid rvs with E[X_1] = μ and V[X_1] = σ².
Z_n := (X̄_n − μ)/√(V[X̄_n]) = √n (X̄_n − μ)/σ →(D) Z where Z ∼ N(0, 1)
lim_n P[Z_n ≤ z] = Φ(z) for all z ∈ R

CLT notations
- Z_n ≈ N(0, 1)
- X̄_n ≈ N(μ, σ²/n)
- X̄_n − μ ≈ N(0, σ²/n)
- √n (X̄_n − μ) ≈ N(0, σ²)
- √n (X̄_n − μ)/σ ≈ N(0, 1)

Continuity correction
P[X̄_n ≤ x] ≈ Φ((x + 1/2 − μ)/(σ/√n))
P[X̄_n ≥ x] ≈ 1 − Φ((x − 1/2 − μ)/(σ/√n))

Delta method
Y_n ≈ N(μ, σ²/n) ⟹ φ(Y_n) ≈ N(φ(μ), (φ′(μ))² σ²/n)
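A minimal numpy sketch of the CLT in 10.2: standardized means of iid Exponential draws (an arbitrary choice of F) are approximately N(0, 1).

import numpy as np

rng = np.random.default_rng(2)
n, reps, beta = 50, 20_000, 2.0          # Exp(beta): mean beta, variance beta**2

x = rng.exponential(beta, size=(reps, n))
z = np.sqrt(n) * (x.mean(axis=1) - beta) / beta   # Z_n = sqrt(n)(Xbar - mu)/sigma

print(z.mean(), z.std())                 # ~0, ~1
print(np.mean(z <= 1.96))                # ~0.975, close to Phi(1.96)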
11 Statistical Inference

Let X_1, ..., X_n iid ∼ F if not otherwise noted.

11.1 Point Estimation
- Point estimator θ̂_n of θ is a rv: θ̂_n = g(X_1, ..., X_n)
- bias(θ̂_n) = E[θ̂_n] − θ
- Consistency: θ̂_n →(P) θ
- Sampling distribution: F(θ̂_n)
- Standard error: se(θ̂_n) = √V[θ̂_n]
- Mean squared error: mse = E[(θ̂_n − θ)²] = bias(θ̂_n)² + V[θ̂_n]
- lim_n bias(θ̂_n) = 0 and lim_n se(θ̂_n) = 0 ⟹ θ̂_n is consistent
- Asymptotic normality: (θ̂_n − θ)/se →(D) N(0, 1)
- Slutsky's Theorem often lets us replace se(θ̂_n) by some (weakly) consistent estimator σ̂_n.

11.2 Normal-Based Confidence Interval
Suppose θ̂_n ≈ N(θ, sê²). Let z_{α/2} = Φ^{−1}(1 − α/2), i.e., P[Z > z_{α/2}] = α/2 and P[−z_{α/2} < Z < z_{α/2}] = 1 − α where Z ∼ N(0, 1). Then
C_n = θ̂_n ± z_{α/2} sê

11.3 Empirical Distribution
Empirical Distribution Function (ECDF):
F̂_n(x) = (1/n) Σ_{i=1}^n I(X_i ≤ x),   I(X_i ≤ x) = 1 if X_i ≤ x, 0 if X_i > x
Properties (for any fixed x):
- E[F̂_n(x)] = F(x)
- V[F̂_n(x)] = F(x)(1 − F(x))/n
- mse = F(x)(1 − F(x))/n → 0
- F̂_n(x) →(P) F(x)
Dvoretzky-Kiefer-Wolfowitz (DKW) inequality (X_1, ..., X_n ∼ F):
P[sup_x |F(x) − F̂_n(x)| > ε] ≤ 2e^{−2nε²}
Nonparametric 1 − α confidence band for F:
L(x) = max{F̂_n(x) − ε_n, 0},  U(x) = min{F̂_n(x) + ε_n, 1},  ε_n = √((1/(2n)) log(2/α))
P[L(x) ≤ F(x) ≤ U(x) for all x] ≥ 1 − α

11.4 Statistical Functionals
- Statistical functional: T(F)
- Plug-in estimator of θ = T(F): θ̂_n = T(F̂_n)
- Linear functional: T(F) = ∫ φ(x) dF_X(x)
- Plug-in estimator for a linear functional: T(F̂_n) = ∫ φ(x) dF̂_n(x) = (1/n) Σ_{i=1}^n φ(X_i)
- Often: T(F̂_n) ≈ N(T(F), sê²), so an approximate interval is T(F̂_n) ± z_{α/2} sê
- pth quantile: F^{−1}(p) = inf{x : F(x) ≥ p}
- μ̂ = X̄_n
- σ̂² = (1/(n − 1)) Σ_{i=1}^n (X_i − X̄_n)²
- κ̂ = (1/n) Σ_{i=1}^n (X_i − μ̂)³ / σ̂³   (skewness)
- ρ̂ = Σ_{i=1}^n (X_i − X̄_n)(Y_i − Ȳ_n) / (√(Σ_{i=1}^n (X_i − X̄_n)²) √(Σ_{i=1}^n (Y_i − Ȳ_n)²))
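A minimal numpy sketch of the ECDF and the DKW confidence band from 11.3; the sample is an arbitrary Normal draw.

import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.normal(size=200))
n, alpha = len(x), 0.05

ecdf = np.arange(1, n + 1) / n                 # F_hat_n at the sorted sample points
eps = np.sqrt(np.log(2 / alpha) / (2 * n))     # DKW band half-width
lower = np.clip(ecdf - eps, 0, 1)
upper = np.clip(ecdf + eps, 0, 1)

# 95% band: P[L(x) <= F(x) <= U(x) for all x] >= 0.95
print(eps)
print(lower[:3], upper[:3])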
12 Parametric Inference

Let F = {f(x; θ) : θ ∈ Θ} be a parametric model with parameter space Θ ⊆ R^k and parameter θ = (θ_1, ..., θ_k).

12.1 Method of Moments
- jth moment: α_j(θ) = E[X^j] = ∫ x^j dF_X(x)
- jth sample moment: α̂_j = (1/n) Σ_{i=1}^n X_i^j
- Method of moments estimator (MoM): θ̂_n solves α_1(θ̂_n) = α̂_1, ..., α_k(θ̂_n) = α̂_k

Properties of the MoM estimator
- θ̂_n exists with probability tending to 1
- Consistency: θ̂_n →(P) θ
- Asymptotic normality: √n (θ̂_n − θ) →(D) N(0, Σ), where Σ = g E[Y Y^T] g^T, Y = (X, X², ..., X^k)^T, g = (g_1, ..., g_k) and g_j = ∂α_j^{−1}(θ)/∂θ

12.2 Maximum Likelihood
Likelihood: L_n : Θ → [0, ∞),  L_n(θ) = Π_{i=1}^n f(X_i; θ)
Log-likelihood: ℓ_n(θ) = log L_n(θ) = Σ_{i=1}^n log f(X_i; θ)
Maximum likelihood estimator (mle): L_n(θ̂_n) = sup_θ L_n(θ)
Score function: s(X; θ) = ∂ log f(X; θ)/∂θ
Fisher information: I(θ) = V_θ[s(X; θ)],  I_n(θ) = n I(θ)
Fisher information (exponential family): I(θ) = −E_θ[∂² log f(X; θ)/∂θ²]
Observed Fisher information: I_n^{obs}(θ) = −Σ_{i=1}^n ∂² log f(X_i; θ)/∂θ²

Properties of the mle
- Consistency: θ̂_n →(P) θ
- Equivariance: θ̂_n is the mle ⟹ φ(θ̂_n) is the mle of φ(θ)
- Asymptotic normality:
  1. se ≈ √(1/I_n(θ)) and (θ̂_n − θ)/se →(D) N(0, 1)
  2. sê ≈ √(1/I_n(θ̂_n)) and (θ̂_n − θ)/sê →(D) N(0, 1)
- Asymptotic optimality (or efficiency), i.e., smallest variance for large samples. If θ̃_n is any other estimator, the asymptotic relative efficiency is are(θ̃_n, θ̂_n) = V[θ̂_n]/V[θ̃_n] ≤ 1
- Approximately the Bayes estimator

12.2.1 Delta Method
If τ = φ(θ) where φ is differentiable and φ′(θ) ≠ 0:
(τ̂_n − τ)/sê(τ̂) →(D) N(0, 1)
where τ̂ = φ(θ̂) is the mle of τ and sê(τ̂) = |φ′(θ̂)| sê(θ̂_n)
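A minimal Python sketch of the mle and its Fisher-information standard error from 12.2 for a Bernoulli(p) sample, where the mle is the sample mean and I(p) = 1/(p(1 − p)); the true p and sample size are arbitrary.

import numpy as np

rng = np.random.default_rng(4)
p_true, n = 0.3, 500
x = rng.binomial(1, p_true, size=n)

p_hat = x.mean()                           # mle for Bernoulli(p)
fisher_n = n / (p_hat * (1 - p_hat))       # I_n(p_hat) = n / (p(1-p))
se_hat = np.sqrt(1 / fisher_n)

# Approximate 95% confidence interval from asymptotic normality of the mle.
print(p_hat, (p_hat - 1.96 * se_hat, p_hat + 1.96 * se_hat))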
12.3 Multiparameter Models
Let θ = (θ_1, ..., θ_k) and let θ̂ = (θ̂_1, ..., θ̂_k) be the mle.
H_jj = ∂²ℓ_n/∂θ_j²,  H_jk = ∂²ℓ_n/∂θ_j ∂θ_k
Fisher information matrix: I_n(θ) = −[ E[H_11] ··· E[H_1k] ; ··· ; E[H_k1] ··· E[H_kk] ]
Under appropriate regularity conditions:
(θ̂ − θ) ≈ N(0, J_n) with J_n(θ) = I_n^{−1}(θ). Further, if θ̂_j is the jth component of θ̂, then
(θ̂_j − θ_j)/sê_j →(D) N(0, 1)
where sê_j² = J_n(j, j) and Cov[θ̂_j, θ̂_k] = J_n(j, k).

12.3.1 Multiparameter Delta Method
Let τ = φ(θ_1, ..., θ_k) and let the gradient of φ be ∇φ = (∂φ/∂θ_1, ..., ∂φ/∂θ_k)^T.
Suppose ∇φ evaluated at θ̂ is not 0 and let τ̂ = φ(θ̂). Then
(τ̂ − τ)/sê(τ̂) →(D) N(0, 1)
where sê(τ̂) = √((∇̂φ)^T Ĵ_n (∇̂φ)), Ĵ_n = J_n(θ̂) and ∇̂φ is ∇φ evaluated at θ = θ̂.

12.4 Parametric Bootstrap
Sample from f(x; θ̂_n) instead of from F̂_n, where θ̂_n could be the mle or the method of moments estimator.

13 Hypothesis Testing

H_0 : θ ∈ Θ_0  versus  H_1 : θ ∈ Θ_1

Definitions
- Null hypothesis H_0
- Alternative hypothesis H_1
- Simple hypothesis: θ = θ_0
- Composite hypothesis: θ > θ_0 or θ < θ_0
- Two-sided test: H_0 : θ = θ_0 versus H_1 : θ ≠ θ_0
- One-sided test: H_0 : θ ≤ θ_0 versus H_1 : θ > θ_0
- Critical value c
- Test statistic T
- Rejection region R = {x : T(x) > c}
- Power function β(θ) = P_θ[X ∈ R]
- Power of a test: 1 − P[Type II error] = 1 − β = inf_{θ∈Θ_1} β(θ)
- Test size: α = P[Type I error] = sup_{θ∈Θ_0} β(θ)

            Retain H_0            Reject H_0
H_0 true    correct               Type I error (α)
H_1 true    Type II error (β)     correct (power)

p-value
- p-value = sup_{θ∈Θ_0} P_θ[T(X) ≥ T(x)] = inf{α : T(x) ∈ R_α}
- p-value = sup_{θ∈Θ_0} P_θ[T(X*) ≥ T(X)] = 1 − F(T(X)), since T(X*) ∼ F for data X* generated under H_0

p-value evidence scale:
- < 0.01: very strong evidence against H_0
- 0.01-0.05: strong evidence against H_0
- 0.05-0.1: weak evidence against H_0
- > 0.1: little or no evidence against H_0

Wald test
- Two-sided test
- Reject H_0 when |W| > z_{α/2} where W = (θ̂ − θ_0)/sê
- P[|W| > z_{α/2}] → α
- p-value = P_{θ_0}[|W| > |w|] ≈ P[|Z| > |w|] = 2Φ(−|w|)
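A minimal Python sketch of the Wald test above for a proportion (H_0 : p = 0.5), using the asymptotic normality of the mle; the data vector is an arbitrary illustration.

import numpy as np
from math import erf, sqrt

def std_normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

x = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1])
n, p0 = len(x), 0.5

p_hat = x.mean()
se_hat = np.sqrt(p_hat * (1 - p_hat) / n)   # estimated standard error of p_hat
w = (p_hat - p0) / se_hat                   # Wald statistic

p_value = 2 * std_normal_cdf(-abs(w))       # two-sided p-value
print(p_hat, w, p_value)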
Likelihood ratio test (LRT)
T(X) = sup_{θ∈Θ} L_n(θ) / sup_{θ∈Θ_0} L_n(θ) = L_n(θ̂_n)/L_n(θ̂_{n,0})
λ(X) = 2 log T(X) →(D) χ²_{r−q}, where χ²_k is the distribution of Σ_{i=1}^k Z_i² with Z_1, ..., Z_k iid ∼ N(0, 1)
p-value = P_{θ_0}[λ(X) > λ(x)] ≈ P[χ²_{r−q} > λ(x)]

Multinomial LRT
- mle: p̂_n = (X_1/n, ..., X_k/n)
- T(X) = L_n(p̂_n)/L_n(p_0) = Π_{j=1}^k (p̂_j/p_{0j})^{X_j}
- λ(X) = 2 Σ_{j=1}^k X_j log(p̂_j/p_{0j}) →(D) χ²_{k−1}
- The approximate size-α LRT rejects H_0 when λ(X) ≥ χ²_{k−1,α}

Pearson chi-square test
- T = Σ_{j=1}^k (X_j − E[X_j])²/E[X_j] where E[X_j] = np_{0j} under H_0
- T →(D) χ²_{k−1},  p-value = P[χ²_{k−1} > T(x)]
- Converges to χ²_{k−1} faster than the LRT, hence preferable for small n

Independence testing
- I rows, J columns, X a multinomial sample of size n = I·J
- mles unconstrained: p̂_ij = X_ij/n
- mles under H_0: p̂_0ij = p̂_i· p̂_·j = (X_i·/n)(X_·j/n)
- LRT: λ = 2 Σ_{i=1}^I Σ_{j=1}^J X_ij log(n X_ij/(X_i· X_·j))
- Pearson chi-square: T = Σ_{i=1}^I Σ_{j=1}^J (X_ij − E[X_ij])²/E[X_ij]
- Both the LRT and Pearson statistic →(D) χ²_ν, where ν = (I − 1)(J − 1)

14 Bayesian Inference

Bayes' Theorem
f(θ | x) = f(x | θ) f(θ)/f(x) = f(x | θ) f(θ)/∫ f(x | θ) f(θ) dθ ∝ L_n(θ) f(θ)

Definitions
- X^n = (X_1, ..., X_n),  x^n = (x_1, ..., x_n)
- Prior density f(θ)
- Likelihood f(x^n | θ): joint density of the data. In particular, X^n iid ⟹ f(x^n | θ) = Π_{i=1}^n f(x_i | θ) = L_n(θ)
- Posterior density f(θ | x^n)
- Normalizing constant c_n = f(x^n) = ∫ f(x | θ) f(θ) dθ
- Kernel: part of a density that depends on θ
- Posterior mean θ̄_n = ∫ θ f(θ | x^n) dθ = ∫ θ L_n(θ) f(θ) dθ / ∫ L_n(θ) f(θ) dθ

14.1 Credible Intervals
Posterior interval: P[θ ∈ (a, b) | x^n] = ∫_a^b f(θ | x^n) dθ = 1 − α
Equal-tail credible interval: ∫_{−∞}^a f(θ | x^n) dθ = ∫_b^∞ f(θ | x^n) dθ = α/2
Highest posterior density (HPD) region R_n:
1. P[θ ∈ R_n] = 1 − α
2. R_n = {θ : f(θ | x^n) > k} for some k
R_n is unimodal ⟹ R_n is an interval

14.2 Function of Parameters
Let τ = φ(θ) and A = {θ : φ(θ) ≤ τ}.
Posterior CDF for τ: H(τ | x^n) = P[φ(θ) ≤ τ | x^n] = ∫_A f(θ | x^n) dθ
Posterior density: h(τ | x^n) = H′(τ | x^n)
Bayesian delta method: τ | X^n ≈ N(φ(θ̂), sê |φ′(θ̂)|)
14.3 Priors

Choice
- Subjective Bayesianism
- Objective Bayesianism
- Robust Bayesianism

Types
- Flat: f(θ) ∝ constant
- Proper: ∫ f(θ) dθ = 1
- Improper: ∫ f(θ) dθ = ∞
- Jeffreys' prior (transformation-invariant): f(θ) ∝ √I(θ),  f(θ) ∝ √det(I(θ))
- Conjugate: f(θ) and f(θ | x^n) belong to the same parametric family

14.3.1 Conjugate Priors

Discrete likelihood (likelihood, conjugate prior, posterior hyperparameters):
- Bern(p), Beta(α, β):  α + Σ_i x_i,  β + n − Σ_i x_i
- Bin(p), Beta(α, β):  α + Σ_i x_i,  β + Σ_i N_i − Σ_i x_i
- NBin(p), Beta(α, β):  α + rn,  β + Σ_i x_i
- Po(λ), Gamma(α, β):  α + Σ_i x_i,  β + n
- Multinomial(p), Dir(α):  α + Σ_i x^{(i)}
- Geo(p), Beta(α, β):  α + n,  β + Σ_i x_i

Continuous likelihood (subscript c denotes a constant):
- Unif(0, θ), Pareto(x_m, k):  max{x_(n), x_m},  k + n
- Exp(λ), Gamma(α, β):  α + n,  β + Σ_i x_i
- N(μ, σ_c²), N(μ_0, σ_0²):  (μ_0/σ_0² + Σ_i x_i/σ_c²)/(1/σ_0² + n/σ_c²),  (1/σ_0² + n/σ_c²)^{−1}
- N(μ_c, σ²), Scaled Inverse Chi-square(ν, σ_0²):  ν + n,  (νσ_0² + Σ_i (x_i − μ)²)/(ν + n)
- N(μ, σ²), Normal-scaled Inverse Gamma(λ, ν, α, β):  (νλ + n x̄)/(ν + n),  ν + n,  α + n/2,  β + (1/2) Σ_i (x_i − x̄)² + n ν (x̄ − λ)²/(2(n + ν))
- MVN(μ, Σ_c), MVN(μ_0, Σ_0):  (Σ_0^{−1} + n Σ_c^{−1})^{−1}(Σ_0^{−1} μ_0 + n Σ_c^{−1} x̄),  (Σ_0^{−1} + n Σ_c^{−1})^{−1}
- MVN(μ_c, Σ), Inverse-Wishart(ν, Ψ):  n + ν,  Ψ + Σ_i (x_i − μ_c)(x_i − μ_c)^T
- Pareto(x_mc, k), Gamma(α, β):  α + n,  β + Σ_i log(x_i/x_mc)
- Pareto(x_m, k_c), Pareto(x_0, k_0):  x_0,  k_0 − kn  (where k_0 > kn)
- Gamma(α_c, β), Gamma(α_0, β_0):  α_0 + n α_c,  β_0 + Σ_i x_i

14.4 Bayesian Testing
If H_0 : θ ∈ Θ_0:
- Prior probability P[H_0] = ∫_{Θ_0} f(θ) dθ
- Posterior probability P[H_0 | x^n] = ∫_{Θ_0} f(θ | x^n) dθ
Let H_0, ..., H_{K−1} be K hypotheses and suppose θ ∼ f(θ | H_k):
P[H_k | x^n] = f(x^n | H_k) P[H_k] / Σ_{k=1}^K f(x^n | H_k) P[H_k]
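A minimal numpy sketch of the Beta-Bernoulli line of the conjugate-prior table above, combined with the equal-tail credible interval from 14.1; the data (20 trials, 14 successes) and the Beta(1, 1) prior are arbitrary illustrations, and the interval is obtained by Monte Carlo quantiles rather than an exact Beta quantile function.

import numpy as np

rng = np.random.default_rng(5)

n, successes = 20, 14
a0, b0 = 1.0, 1.0

# Conjugate update: posterior is Beta(a0 + sum(x), b0 + n - sum(x)).
a_post, b_post = a0 + successes, b0 + n - successes

# Equal-tail 95% credible interval via draws from the posterior.
draws = rng.beta(a_post, b_post, size=200_000)
lo, hi = np.quantile(draws, [0.025, 0.975])
print(a_post, b_post, (lo, hi))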
Marginal likelihood
f(x^n | H_i) = ∫ f(x^n | θ, H_i) f(θ | H_i) dθ

Posterior odds (of H_i relative to H_j)
P[H_i | x^n]/P[H_j | x^n] = (f(x^n | H_i)/f(x^n | H_j)) × (P[H_i]/P[H_j]) = Bayes factor BF_ij × prior odds

Bayes factor evidence scale:
- log10 BF_10 in 0-0.5 (BF_10 in 1-1.5): weak
- log10 BF_10 in 0.5-1 (BF_10 in 1.5-10): moderate
- log10 BF_10 in 1-2 (BF_10 in 10-100): strong
- log10 BF_10 > 2 (BF_10 > 100): decisive

p* = ((p/(1 − p)) BF_10)/(1 + (p/(1 − p)) BF_10), where p = P[H_1] and p* = P[H_1 | x^n]

15 Exponential Family

Scalar parameter:
f_X(x | θ) = h(x) exp{η(θ)T(x) − A(θ)} = h(x) g(θ) exp{η(θ)T(x)}
Vector parameter:
f_X(x | θ) = h(x) exp{Σ_{i=1}^s η_i(θ)T_i(x) − A(θ)} = h(x) exp{η(θ)·T(x) − A(θ)} = h(x) g(θ) exp{η(θ)·T(x)}
Natural form:
f_X(x | η) = h(x) exp{η·T(x) − A(η)} = h(x) g(η) exp{η·T(x)} = h(x) g(η) exp{η^T T(x)}

16 Sampling Methods

16.1 The Bootstrap
Let T_n = g(X_1, ..., X_n) be a statistic.
1. Estimate V_F[T_n] with V_{F̂_n}[T_n].
2. Approximate V_{F̂_n}[T_n] using simulation:
   (a) Repeat the following B times to get T*_{n,1}, ..., T*_{n,B}, an iid sample from the sampling distribution implied by F̂_n:
       i. Sample uniformly X*_1, ..., X*_n ∼ F̂_n.
       ii. Compute T*_n = g(X*_1, ..., X*_n).
   (b) Then v_boot = V̂_{F̂_n} = (1/B) Σ_{b=1}^B (T*_{n,b} − (1/B) Σ_{r=1}^B T*_{n,r})²

16.1.1 Bootstrap Confidence Intervals
Normal-based interval: T_n ± z_{α/2} sê_boot
Pivotal interval:
1. Location parameter θ = T(F)
2. Pivot R_n = θ̂_n − θ
3. Let H(r) = P[R_n ≤ r] be the cdf of R_n
4. Let R*_{n,b} = θ̂*_{n,b} − θ̂_n. Approximate H using the bootstrap: Ĥ(r) = (1/B) Σ_{b=1}^B I(R*_{n,b} ≤ r)
5. θ*_β = β sample quantile of (θ̂*_{n,1}, ..., θ̂*_{n,B})
6. r*_β = β sample quantile of (R*_{n,1}, ..., R*_{n,B}), i.e., r*_β = θ*_β − θ̂_n
7. Approximate 1 − α confidence interval C_n = (â, b̂) where
   â = θ̂_n − Ĥ^{−1}(1 − α/2) = θ̂_n − r*_{1−α/2} = 2θ̂_n − θ*_{1−α/2}
   b̂ = θ̂_n − Ĥ^{−1}(α/2) = θ̂_n − r*_{α/2} = 2θ̂_n − θ*_{α/2}
Percentile interval: C_n = (θ*_{α/2}, θ*_{1−α/2})
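A minimal numpy sketch of the bootstrap recipe in 16.1 and two of the intervals in 16.1.1, estimating the standard error of the sample median; the lognormal sample and B are arbitrary illustrations.

import numpy as np

rng = np.random.default_rng(6)
x = rng.lognormal(size=100)           # arbitrary sample
B, alpha = 2000, 0.05

t_n = np.median(x)                    # statistic T_n = g(X_1, ..., X_n)
t_boot = np.array([
    np.median(rng.choice(x, size=len(x), replace=True))   # resample X* ~ F_hat_n
    for _ in range(B)
])

se_boot = t_boot.std()
normal_ci = (t_n - 1.96 * se_boot, t_n + 1.96 * se_boot)
percentile_ci = tuple(np.quantile(t_boot, [alpha / 2, 1 - alpha / 2]))
print(t_n, se_boot, normal_ci, percentile_ci)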
16.2 Rejection Sampling
Setup:
- We can easily sample from g(θ).
- We want to sample from h(θ), but it is difficult.
- We know h(θ) up to a proportionality constant: h(θ) = k(θ)/∫ k(θ) dθ
- Envelope condition: we can find M > 0 such that k(θ) ≤ M g(θ) for all θ.
Algorithm:
1. Draw θ_cand ∼ g(θ)
2. Generate u ∼ Unif(0, 1)
3. Accept θ_cand if u ≤ k(θ_cand)/(M g(θ_cand))
4. Repeat until B values of θ_cand have been accepted
Example:
- We can easily sample from the prior, g(θ) = f(θ).
- The target is the posterior: h(θ) ∝ k(θ) = f(x^n | θ) f(θ)
- Envelope condition: f(x^n | θ) ≤ f(x^n | θ̂_n) = L_n(θ̂_n) ≡ M
- Algorithm: draw θ_cand ∼ f(θ), generate u ∼ Unif(0, 1), accept θ_cand if u ≤ L_n(θ_cand)/L_n(θ̂_n)

16.3 Importance Sampling
Sample from an importance function g rather than the target density h.
Algorithm to obtain an approximation to E[q(θ) | x^n]:
1. Sample from the prior θ_1, ..., θ_B iid ∼ f(θ)
2. w_i = L_n(θ_i)/Σ_{i=1}^B L_n(θ_i),  i = 1, ..., B
3. E[q(θ) | x^n] ≈ Σ_{i=1}^B q(θ_i) w_i
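A minimal numpy sketch of the rejection-sampling algorithm in 16.2; the unnormalized target k(θ) is an arbitrary Beta(3, 5) kernel and the proposal g is Unif(0, 1).

import numpy as np

rng = np.random.default_rng(7)

def k(theta):
    # Unnormalized target: Beta(3, 5) kernel, theta^(3-1) * (1-theta)^(5-1).
    return theta**2 * (1 - theta)**4

# Proposal g = Unif(0, 1) has density 1; choose M with k(theta) <= M * g(theta).
M = k(1 / 3)            # the kernel's maximum is at theta = (3-1)/(3+5-2) = 1/3

samples = []
while len(samples) < 10_000:
    cand = rng.uniform()                 # 1. draw from g
    u = rng.uniform()                    # 2. u ~ Unif(0, 1)
    if u <= k(cand) / (M * 1.0):         # 3. accept with probability k/(M g)
        samples.append(cand)

samples = np.array(samples)
print(samples.mean(), 3 / (3 + 5))       # ~0.375, the Beta(3, 5) mean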
17 Decision Theory

Definitions
- Unknown quantity affecting our decision: θ ∈ Θ
- Decision rule: synonymous with an estimator θ̂
- Action a ∈ A: possible value of the decision rule. In the estimation context, the action is just an estimate of θ, θ̂(x).
- Loss function L: consequences of taking action a when the true state is θ, or the discrepancy between θ and θ̂; L : Θ × A → [−k, ∞).

Loss functions
- Squared error loss: L(θ, a) = (θ − a)²
- Linear loss: L(θ, a) = K_1(θ − a) if a − θ < 0;  K_2(a − θ) if a − θ ≥ 0
- Absolute error loss: L(θ, a) = |θ − a|  (linear loss with K_1 = K_2 = 1)
- L_p loss: L(θ, a) = |θ − a|^p
- Zero-one loss: L(θ, a) = 0 if a = θ;  1 if a ≠ θ

17.1 Risk
Posterior risk: r(θ̂ | x) = ∫ L(θ, θ̂(x)) f(θ | x) dθ = E_{θ|X}[L(θ, θ̂(x))]
(Frequentist) risk: R(θ, θ̂) = ∫ L(θ, θ̂(x)) f(x | θ) dx = E_{X|θ}[L(θ, θ̂(X))]
Bayes risk: r(f, θ̂) = ∫∫ L(θ, θ̂(x)) f(x, θ) dx dθ = E_{θ,X}[L(θ, θ̂(X))]
r(f, θ̂) = E_θ[E_{X|θ}[L(θ, θ̂(X))]] = E_θ[R(θ, θ̂)]
r(f, θ̂) = E_X[E_{θ|X}[L(θ, θ̂(X))]] = E_X[r(θ̂ | X)]

17.2 Admissibility
- θ̂′ dominates θ̂ if  ∀θ : R(θ, θ̂′) ≤ R(θ, θ̂)  and  ∃θ : R(θ, θ̂′) < R(θ, θ̂)
- θ̂ is inadmissible if there is at least one other estimator θ̂′ that dominates it. Otherwise it is called admissible.

17.3 Bayes Rule
Bayes rule (or Bayes estimator):
- r(f, θ̂) = inf_{θ̃} r(f, θ̃)
- θ̂(x) = inf r(θ̂ | x) for all x ⟹ r(f, θ̂) = ∫ r(θ̂ | x) f(x) dx
Theorems:
- Squared error loss: posterior mean
- Absolute error loss: posterior median
- Zero-one loss: posterior mode

17.4 Minimax Rules
Maximum risk: R̄(θ̂) = sup_θ R(θ, θ̂),  R̄(a) = sup_θ R(θ, a)
Minimax rule: sup_θ R(θ, θ̂) = inf_{θ̃} R̄(θ̃) = inf_{θ̃} sup_θ R(θ, θ̃)
θ̂ = Bayes rule and ∃c : R(θ, θ̂) = c ⟹ θ̂ is minimax
Least favorable prior: θ̂^f = Bayes rule and R(θ, θ̂^f) ≤ r(f, θ̂^f) for all θ ⟹ θ̂^f is minimax

18 Linear Regression

Definitions
- Response variable Y
- Covariate X (aka predictor variable or feature)

18.1 Simple Linear Regression
Model: Y_i = β_0 + β_1 X_i + ε_i,  E[ε_i | X_i] = 0,  V[ε_i | X_i] = σ²
Fitted line: r̂(x) = β̂_0 + β̂_1 x
Predicted (fitted) values: Ŷ_i = r̂(X_i)
Residuals: ε̂_i = Y_i − Ŷ_i = Y_i − (β̂_0 + β̂_1 X_i)
Residual sums of squares (rss): rss(β̂_0, β̂_1) = Σ_{i=1}^n ε̂_i²
Least square estimates, β̂ = (β̂_0, β̂_1)^T minimizing rss:
β̂_0 = Ȳ_n − β̂_1 X̄_n
β̂_1 = Σ_{i=1}^n (X_i − X̄_n)(Y_i − Ȳ_n)/Σ_{i=1}^n (X_i − X̄_n)² = (Σ_i X_i Y_i − n X̄ Ȳ)/(Σ_i X_i² − n X̄²)
E[β̂ | X^n] = (β_0, β_1)^T
V[β̂ | X^n] = (σ²/(n s_X²)) [ (1/n) Σ_i X_i²   −X̄_n ;  −X̄_n   1 ]
sê(β̂_0) = (σ̂/(s_X √n)) √((1/n) Σ_i X_i²),  sê(β̂_1) = σ̂/(s_X √n)
where s_X² = (1/n) Σ_{i=1}^n (X_i − X̄_n)² and σ̂² = (1/(n − 2)) Σ_{i=1}^n ε̂_i² (unbiased estimate).
Further properties:
- Consistency: β̂_0 →(P) β_0 and β̂_1 →(P) β_1
- Asymptotic normality: (β̂_0 − β_0)/sê(β̂_0) →(D) N(0, 1) and (β̂_1 − β_1)/sê(β̂_1) →(D) N(0, 1)
- Approximate 1 − α confidence intervals: β̂_0 ± z_{α/2} sê(β̂_0) and β̂_1 ± z_{α/2} sê(β̂_1)
- Wald test for H_0 : β_1 = 0 vs. H_1 : β_1 ≠ 0: reject H_0 if |W| > z_{α/2} where W = β̂_1/sê(β̂_1)
R²:
R² = Σ_i (Ŷ_i − Ȳ)²/Σ_i (Y_i − Ȳ)² = 1 − Σ_i ε̂_i²/Σ_i (Y_i − Ȳ)² = 1 − rss/tss
Likelihood:
L = Π_{i=1}^n f(X_i, Y_i) = Π_{i=1}^n f_X(X_i) × Π_{i=1}^n f_{Y|X}(Y_i | X_i) = L_1 × L_2
L_1 = Π_{i=1}^n f_X(X_i)
L_2 = Π_{i=1}^n f_{Y|X}(Y_i | X_i) ∝ σ^{−n} exp{−(1/(2σ²)) Σ_i (Y_i − (β_0 + β_1 X_i))²}
Under the assumption of Normality, the least squares estimator is also the mle, with
σ̂²_mle = (1/n) Σ_{i=1}^n ε̂_i²
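A minimal numpy sketch of the least-squares formulas in 18.1 on synthetic data; the true coefficients (β_0 = 1, β_1 = 2) and noise level are arbitrary.

import numpy as np

rng = np.random.default_rng(8)
n = 200
x = rng.uniform(0, 10, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=1.5, size=n)    # Y = b0 + b1 X + eps

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()

resid = y - (b0 + b1 * x)
sigma2_hat = resid @ resid / (n - 2)                 # unbiased estimate of sigma^2
s_x2 = np.mean((x - x.mean())**2)

se_b1 = np.sqrt(sigma2_hat) / (np.sqrt(s_x2) * np.sqrt(n))
se_b0 = se_b1 * np.sqrt(np.mean(x**2))

print(b0, b1, se_b0, se_b1)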
18.2 Prediction
Observe the covariate value X = x_* and predict the outcome Y_*.
Ŷ_* = β̂_0 + β̂_1 x_*
V[Ŷ_*] = V[β̂_0] + x_*² V[β̂_1] + 2x_* Cov[β̂_0, β̂_1]
Prediction interval:
ξ̂_n² = σ̂² (Σ_i (X_i − x_*)²/(n Σ_i (X_i − X̄)²) + 1)
Ŷ_* ± z_{α/2} ξ̂_n

18.3 Multiple Regression
Y = Xβ + ε, where X is the n × k design matrix with rows x_i^T, β = (β_1, ..., β_k)^T and ε = (ε_1, ..., ε_n)^T.
Likelihood: L(β, σ²) = (2πσ²)^{−n/2} exp{−rss/(2σ²)}
rss = (y − Xβ)^T(y − Xβ) = ||Y − Xβ||² = Σ_{i=1}^n (Y_i − x_i^T β)²
If the (k × k) matrix X^T X is invertible:
β̂ = (X^T X)^{−1} X^T Y
V[β̂ | X^n] = σ² (X^T X)^{−1}
β̂ ≈ N(β, σ² (X^T X)^{−1})
Estimated regression function: r̂(x) = Σ_{j=1}^k β̂_j x_j
Unbiased estimate for σ²: σ̂² = (1/(n − k)) Σ_{i=1}^n ε̂_i²,  ε̂ = Xβ̂ − Y
mle: σ̂²_mle = ((n − k)/n) σ̂²
1 − α confidence interval: β̂_j ± z_{α/2} sê(β̂_j)

18.4 Model Selection
Consider predicting a new observation Y_* for covariates X_* and let S ⊆ J denote a subset of the covariates in the model, where |S| = k and |J| = n.
Issues:
- Underfitting: too few covariates yields high bias
- Overfitting: too many covariates yields high variance
Procedure:
1. Assign a score to each model
2. Search through all models to find the one with the best score
Hypothesis testing: H_0 : β_j = 0 vs. H_1 : β_j ≠ 0 for all j ∈ J
Mean squared prediction error (mspe): mspe = E[(Ŷ(S) − Y_*)²]
Prediction risk: R(S) = Σ_{i=1}^n mspe_i = Σ_{i=1}^n E[(Ŷ_i(S) − Y*_i)²]
Training error: R̂_tr(S) = Σ_{i=1}^n (Ŷ_i(S) − Y_i)²
R²:
R²(S) = 1 − rss(S)/tss = 1 − R̂_tr(S)/tss = Σ_i (Ŷ_i(S) − Ȳ)²/Σ_i (Y_i − Ȳ)²
The training error is a downward-biased estimate of the prediction risk:
E[R̂_tr(S)] < R(S)
bias(R̂_tr(S)) = E[R̂_tr(S)] − R(S) = −2 Σ_{i=1}^n Cov[Ŷ_i, Y_i]
Adjusted R²:
R̄²(S) = 1 − ((n − 1)/(n − k)) rss/tss
Mallows' Cp statistic:
R̂(S) = R̂_tr(S) + 2kσ̂² = lack of fit + complexity penalty
Akaike Information Criterion (AIC):
AIC(S) = ℓ_n(β̂_S, σ̂²_S) − k
Bayesian Information Criterion (BIC):
BIC(S) = ℓ_n(β̂_S, σ̂²_S) − (k/2) log n
Validation and training:
R̂_V(S) = Σ_{i=1}^m (Ŷ*_i(S) − Y*_i)²,  m = |{validation data}|, often n/4 or n/2
Leave-one-out cross-validation:
R̂_CV(S) = Σ_{i=1}^n (Y_i − Ŷ_(i))² = Σ_{i=1}^n ((Y_i − Ŷ_i(S))/(1 − U_ii(S)))²
U(S) = X_S (X_S^T X_S)^{−1} X_S^T  (hat matrix)

19 Non-parametric Function Estimation

19.1 Density Estimation
Estimate f(x), where f satisfies P[X ∈ A] = ∫_A f(x) dx.
Integrated square error (ise):
L(f, f̂_n) = ∫ (f(x) − f̂_n(x))² dx = J(h) + ∫ f²(x) dx
Frequentist risk:
R(f, f̂_n) = E[L(f, f̂_n)] = ∫ b²(x) dx + ∫ v(x) dx
b(x) = E[f̂_n(x)] − f(x),  v(x) = V[f̂_n(x)]

19.1.1 Histograms
Definitions:
- Number of bins m
- Binwidth h = 1/m
- Bin B_j with ν_j observations
- Define p̂_j = ν_j/n and p_j = ∫_{B_j} f(u) du
Histogram estimator:
f̂_n(x) = Σ_{j=1}^m (p̂_j/h) I(x ∈ B_j)
E[f̂_n(x)] = p_j/h
V[f̂_n(x)] = p_j(1 − p_j)/(nh²)
R(f̂_n, f) ≈ (h²/12) ∫ (f′(u))² du + 1/(nh)
h* = (1/n^{1/3}) (6/∫ (f′(u))² du)^{1/3}
R*(f̂_n, f) ≈ C/n^{2/3},  C = (3/4)^{2/3} (∫ (f′(u))² du)^{1/3}
Cross-validation estimate of E[J(h)]:
Ĵ_CV(h) = ∫ f̂_n²(x) dx − (2/n) Σ_{i=1}^n f̂_(i)(X_i) = 2/((n − 1)h) − ((n + 1)/((n − 1)h)) Σ_{j=1}^m p̂_j²
19.1.2 Kernel Density Estimator (KDE)
Kernel K:
- K(x) ≥ 0
- ∫ K(x) dx = 1
- ∫ x K(x) dx = 0
- ∫ x² K(x) dx ≡ σ_K² > 0
KDE:
f̂_n(x) = (1/n) Σ_{i=1}^n (1/h) K((x − X_i)/h)
R(f, f̂_n) ≈ (1/4)(h σ_K)^4 ∫ (f″(x))² dx + (1/(nh)) ∫ K²(x) dx
h* = c_1^{−2/5} c_2^{1/5} c_3^{−1/5} n^{−1/5},  c_1 = σ_K²,  c_2 = ∫ K²(x) dx,  c_3 = ∫ (f″(x))² dx
R*(f, f̂_n) = c_4/n^{4/5},  c_4 = (5/4)(σ_K²)^{2/5} (∫ K²(x) dx)^{4/5} (∫ (f″(x))² dx)^{1/5}
Epanechnikov kernel:
K(x) = (3/(4√5))(1 − x²/5) for |x| < √5;  0 otherwise
Cross-validation estimate of E[J(h)]:
Ĵ_CV(h) = ∫ f̂_n²(x) dx − (2/n) Σ_{i=1}^n f̂_(i)(X_i) ≈ (1/(hn²)) Σ_i Σ_j K*((X_i − X_j)/h) + (2/(nh)) K(0)
K*(x) = K^{(2)}(x) − 2K(x),  K^{(2)}(x) = ∫ K(x − y) K(y) dy

19.2 Non-parametric Regression
Estimate r(x) = E[Y | X = x]. Consider pairs of points (x_1, Y_1), ..., (x_n, Y_n) related by
Y_i = r(x_i) + ε_i,  E[ε_i] = 0,  V[ε_i] = σ²
k-nearest neighbor estimator:
r̂(x) = (1/k) Σ_{i : x_i ∈ N_k(x)} Y_i,  where N_k(x) = {k values of x_1, ..., x_n closest to x}
Nadaraya-Watson kernel estimator:
r̂(x) = Σ_{i=1}^n w_i(x) Y_i
w_i(x) = K((x − x_i)/h)/Σ_{j=1}^n K((x − x_j)/h) ∈ [0, 1]
R(r̂_n, r) ≈ (h^4/4)(∫ x² K(x) dx)² ∫ (r″(x) + 2r′(x) f′(x)/f(x))² dx + (σ² ∫ K²(x) dx)/(nh) ∫ dx/f(x)
h* ≈ c_1 n^{−1/5},  R*(r̂_n, r) ≈ c_2 n^{−4/5}
Cross-validation estimate of E[J(h)]:
Ĵ_CV(h) = Σ_{i=1}^n (Y_i − r̂_(i)(x_i))² = Σ_{i=1}^n ((Y_i − r̂(x_i))/(1 − K(0)/Σ_{j=1}^n K((x_i − x_j)/h)))²

19.3 Smoothing Using Orthogonal Functions
Approximation: r(x) = Σ_{j=1}^∞ β_j φ_j(x) ≈ Σ_{j=1}^J β_j φ_j(x)
Multivariate regression: Y = Φβ + η, where η_i = ε_i and Φ is the design matrix with entries Φ_ij = φ_j(x_i)
Least squares estimator:
β̂ = (Φ^T Φ)^{−1} Φ^T Y ≈ (1/n) Φ^T Y  (for equally spaced observations only)
Cross-validation estimate of the risk:
R̂_CV(J) = Σ_{i=1}^n (Y_i − Σ_{j=1}^J φ_j(x_i) β̂_{j,(i)})²
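A minimal numpy sketch of the Nadaraya-Watson estimator from 19.2 with a Gaussian kernel (an arbitrary kernel and bandwidth choice) on synthetic data.

import numpy as np

rng = np.random.default_rng(9)
x = np.sort(rng.uniform(0, 2 * np.pi, size=300))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)    # Y_i = r(x_i) + eps_i

def nadaraya_watson(x0, x, y, h):
    # Gaussian kernel weights w_i(x0) = K((x0 - x_i)/h) / sum_j K((x0 - x_j)/h)
    k = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(k * y) / np.sum(k)

grid = np.linspace(0, 2 * np.pi, 5)
r_hat = np.array([nadaraya_watson(g, x, y, h=0.3) for g in grid])
print(np.round(r_hat, 2), np.round(np.sin(grid), 2))   # estimate vs. true r(x)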
20 Stochastic Processes

Stochastic process:
{X_t : t ∈ T},  T = {0, 1, ...} (discrete) or [0, ∞) (continuous)
- Notations: X_t, X(t)
- State space X
- Index set T

20.1 Markov Chains
Markov chain: P[X_n = x | X_0, ..., X_{n−1}] = P[X_n = x | X_{n−1}] for all n ∈ T, x ∈ X
Transition probabilities:
p_ij ≡ P[X_{n+1} = j | X_n = i]
p_ij(n) ≡ P[X_{m+n} = j | X_m = i]  (n-step)
Transition matrix P (n-step: P_n):
- (i, j) element is p_ij
- p_ij ≥ 0 and Σ_j p_ij = 1
Chapman-Kolmogorov:
p_ij(m + n) = Σ_k p_ik(m) p_kj(n)
P_{m+n} = P_m P_n,  P_n = P · P ··· P = P^n
Marginal probability:
μ_n = (μ_n(1), ..., μ_n(N)) where μ_n(i) = P[X_n = i]
μ_0 = initial distribution,  μ_n = μ_0 P^n

20.2 Poisson Processes
Poisson process:
- {X_t : t ∈ [0, ∞)} = number of events up to and including time t
- X_0 = 0
- Independent increments: for all t_0 < ··· < t_n:  X_{t_1} − X_{t_0} ⊥ ··· ⊥ X_{t_n} − X_{t_{n−1}}
- Intensity function λ(t):
  P[X_{t+h} − X_t = 1] = λ(t)h + o(h)
  P[X_{t+h} − X_t = 2] = o(h)
- X_{s+t} − X_s ∼ Po(m(s + t) − m(s)) where m(t) = ∫_0^t λ(s) ds
Homogeneous Poisson process: λ(t) ≡ λ > 0 ⟹ X_t ∼ Po(λt)
Waiting times: W_t := time at which X_t occurs;  W_t ∼ Gamma(t, 1/λ)
Interarrival times: S_t = W_{t+1} − W_t;  S_t ∼ Exp(1/λ)
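A minimal numpy sketch of the homogeneous Poisson process in 20.2: arrival times are built from Exp(1/λ) interarrivals and the event count up to time t is checked against Po(λt); λ and t are arbitrary choices.

import numpy as np

rng = np.random.default_rng(12)
lam, t_end, reps = 3.0, 5.0, 20_000

counts = np.empty(reps, dtype=int)
for r in range(reps):
    # Interarrival times S_t ~ Exp(1/lam); numpy's scale parameter is the mean 1/lam.
    arrivals = np.cumsum(rng.exponential(1 / lam, size=60))
    counts[r] = np.searchsorted(arrivals, t_end)   # X_t = number of events up to t

print(counts.mean(), lam * t_end)    # ~15, the Po(lambda * t) mean
print(counts.var(), lam * t_end)     # ~15, the Po(lambda * t) variance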
21 Time Series

Mean function: μ_{xt} = E[x_t] = ∫ x f_t(x) dx
Autocovariance function:
γ_x(s, t) = E[(x_s − μ_s)(x_t − μ_t)] = E[x_s x_t] − μ_s μ_t
γ_x(t, t) = E[(x_t − μ_t)²] = V[x_t]
Autocorrelation function (ACF):
ρ(s, t) = Cov[x_s, x_t]/√(V[x_s] V[x_t]) = γ(s, t)/√(γ(s, s) γ(t, t))
Cross-covariance function (CCV): γ_xy(s, t) = E[(x_s − μ_xs)(y_t − μ_yt)]
Cross-correlation function (CCF): ρ_xy(s, t) = γ_xy(s, t)/√(γ_x(s, s) γ_y(t, t))
Backshift operator: B^k(x_t) = x_{t−k}
Difference operator: ∇^d = (1 − B)^d
White noise:
- w_t ∼ wn(0, σ_w²)
- Gaussian: w_t iid ∼ N(0, σ_w²)
- E[w_t] = 0 and V[w_t] = σ_w² for all t;  γ_w(s, t) = 0 for s ≠ t
Random walk:
- Drift δ:  x_t = δt + Σ_{j=1}^t w_j
- E[x_t] = δt
Symmetric moving average:
m_t = Σ_{j=−k}^k a_j x_{t−j}  where a_j = a_{−j} ≥ 0 and Σ_{j=−k}^k a_j = 1

21.1 Stationary Time Series
Strictly stationary:
P[x_{t_1} ≤ c_1, ..., x_{t_k} ≤ c_k] = P[x_{t_1+h} ≤ c_1, ..., x_{t_k+h} ≤ c_k]  for all k ∈ N, t_k, c_k, h ∈ Z
Weakly stationary:
- E[x_t²] < ∞ for all t ∈ Z
- E[x_t] = m for all t ∈ Z
- γ_x(s, t) = γ_x(s + r, t + r) for all r, s, t ∈ Z
Autocovariance function (stationary case):
- γ(h) = E[(x_{t+h} − μ)(x_t − μ)] for h ∈ Z
- γ(0) = E[(x_t − μ)²]
- γ(0) ≥ 0,  γ(0) ≥ |γ(h)|,  γ(h) = γ(−h)
Autocorrelation function (ACF):
ρ(h) = Cov[x_{t+h}, x_t]/√(V[x_{t+h}] V[x_t]) = γ(t + h, t)/√(γ(t + h, t + h) γ(t, t)) = γ(h)/γ(0)
Jointly stationary time series:
γ_xy(h) = E[(x_{t+h} − μ_x)(y_t − μ_y)],  ρ_xy(h) = γ_xy(h)/√(γ_x(0) γ_y(0))
Linear process:
x_t = μ + Σ_{j=−∞}^{∞} ψ_j w_{t−j}  where Σ_j |ψ_j| < ∞
γ(h) = σ_w² Σ_{j=−∞}^{∞} ψ_{j+h} ψ_j
21.2 Estimation of Correlation
Sample mean: x̄ = (1/n) Σ_{t=1}^n x_t
Variance of the sample mean: V[x̄] = (1/n) Σ_{h=−n}^{n} (1 − |h|/n) γ_x(h)
Sample autocovariance function: γ̂(h) = (1/n) Σ_{t=1}^{n−h} (x_{t+h} − x̄)(x_t − x̄)
Sample autocorrelation function: ρ̂(h) = γ̂(h)/γ̂(0)
Sample cross-covariance function: γ̂_xy(h) = (1/n) Σ_{t=1}^{n−h} (x_{t+h} − x̄)(y_t − ȳ)
Sample cross-correlation function: ρ̂_xy(h) = γ̂_xy(h)/√(γ̂_x(0) γ̂_y(0))
Properties:
- σ_{ρ̂_x(h)} = 1/√n if x_t is white noise
- σ_{ρ̂_xy(h)} = 1/√n if x_t or y_t is white noise

21.3 Non-Stationary Time Series
Classical decomposition model: x_t = μ_t + s_t + w_t
- μ_t = trend
- s_t = seasonal component
- w_t = random noise term

21.3.1 Detrending
Least squares:
1. Choose a trend model, e.g., μ_t = β_0 + β_1 t + β_2 t²
2. Minimize rss to obtain the trend estimate μ̂_t = β̂_0 + β̂_1 t + β̂_2 t²
3. Residuals correspond to the noise w_t
Moving average:
The low-pass filter v_t is a symmetric moving average m_t with a_j = 1/(2k + 1):
v_t = (1/(2k + 1)) Σ_{i=−k}^{k} x_{t−i}
If (1/(2k + 1)) Σ_{i=−k}^{k} w_{t−i} ≈ 0, a linear trend function μ_t = β_0 + β_1 t passes without distortion.
Differencing:
μ_t = β_0 + β_1 t ⟹ ∇μ_t = β_1

21.4 ARIMA Models
Autoregressive polynomial:
φ(z) = 1 − φ_1 z − ··· − φ_p z^p,  z ∈ C,  φ_p ≠ 0
Autoregressive operator:
φ(B) = 1 − φ_1 B − ··· − φ_p B^p
Autoregressive model of order p, AR(p):
x_t = φ_1 x_{t−1} + ··· + φ_p x_{t−p} + w_t  ⟺  φ(B) x_t = w_t
AR(1):
x_t = φ^k x_{t−k} + Σ_{j=0}^{k−1} φ^j w_{t−j}  →  (k → ∞, |φ| < 1)  x_t = Σ_{j=0}^{∞} φ^j w_{t−j}  (linear process)
E[x_t] = Σ_{j=0}^{∞} φ^j E[w_{t−j}] = 0
γ(h) = Cov[x_{t+h}, x_t] = σ_w² φ^h/(1 − φ²)
ρ(h) = γ(h)/γ(0) = φ^h,  ρ(h) = φ ρ(h − 1) for h = 1, 2, ...
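A minimal numpy sketch that simulates an AR(1) series (φ = 0.7 chosen arbitrarily), computes the sample ACF ρ̂(h) with the formulas from 21.2, and compares it with the theoretical ρ(h) = φ^h from the AR(1) results above.

import numpy as np

rng = np.random.default_rng(10)
n, phi = 5000, 0.7

# Simulate AR(1): x_t = phi * x_{t-1} + w_t with Gaussian white noise.
w = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + w[t]

def sample_acf(x, h):
    xbar = x.mean()
    gamma0 = np.sum((x - xbar) ** 2) / len(x)                       # gamma_hat(0)
    gamma_h = np.sum((x[h:] - xbar) * (x[:-h] - xbar)) / len(x)     # gamma_hat(h)
    return gamma_h / gamma0

for h in (1, 2, 3):
    print(h, round(sample_acf(x, h), 3), round(phi ** h, 3))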
Moving average polynomial:
θ(z) = 1 + θ_1 z + ··· + θ_q z^q,  z ∈ C,  θ_q ≠ 0
Moving average operator:
θ(B) = 1 + θ_1 B + ··· + θ_q B^q
MA(q) (moving average model of order q):
x_t = w_t + θ_1 w_{t−1} + ··· + θ_q w_{t−q}  ⟺  x_t = θ(B) w_t
E[x_t] = Σ_{j=0}^q θ_j E[w_{t−j}] = 0
γ(h) = Cov[x_{t+h}, x_t] = σ_w² Σ_{j=0}^{q−h} θ_j θ_{j+h} for 0 ≤ h ≤ q;  0 for h > q
MA(1):
x_t = w_t + θ w_{t−1}
γ(h) = (1 + θ²) σ_w² for h = 0;  θ σ_w² for h = 1;  0 for h > 1
ρ(h) = θ/(1 + θ²) for h = 1;  0 for h > 1
ARMA(p, q):
x_t = φ_1 x_{t−1} + ··· + φ_p x_{t−p} + w_t + θ_1 w_{t−1} + ··· + θ_q w_{t−q}  ⟺  φ(B) x_t = θ(B) w_t
Partial autocorrelation function (PACF):
x_i^{h−1}: regression of x_i on {x_{h−1}, x_{h−2}, ..., x_1}
φ_hh = corr(x_h − x_h^{h−1}, x_0 − x_0^{h−1}) for h ≥ 2;  e.g., φ_11 = corr(x_1, x_0) = ρ(1)
ARIMA(p, d, q):
∇^d x_t = (1 − B)^d x_t is ARMA(p, q);  φ(B)(1 − B)^d x_t = θ(B) w_t
Exponentially weighted moving average (EWMA):
x_t = x_{t−1} + w_t − λ w_{t−1}
x_t = Σ_{j=1}^{∞} (1 − λ) λ^{j−1} x_{t−j} + w_t when |λ| < 1
x̃_{n+1} = (1 − λ) x_n + λ x̃_n
Seasonal ARIMA, denoted ARIMA(p, d, q) × (P, D, Q)_s:
Φ_P(B^s) φ(B) ∇_s^D ∇^d x_t = δ + Θ_Q(B^s) θ(B) w_t

21.4.1 Causality and Invertibility
ARMA(p, q) is causal (future-independent) ⟺ there exist {ψ_j} with Σ_{j=0}^{∞} |ψ_j| < ∞ such that
x_t = Σ_{j=0}^{∞} ψ_j w_{t−j} = ψ(B) w_t
ARMA(p, q) is invertible ⟺ there exist {π_j} with Σ_{j=0}^{∞} |π_j| < ∞ such that
π(B) x_t = Σ_{j=0}^{∞} π_j x_{t−j} = w_t
Properties:
- ARMA(p, q) causal ⟺ roots of φ(z) lie outside the unit circle;  ψ(z) = Σ_{j=0}^{∞} ψ_j z^j = θ(z)/φ(z),  |z| ≤ 1
- ARMA(p, q) invertible ⟺ roots of θ(z) lie outside the unit circle;  π(z) = Σ_{j=0}^{∞} π_j z^j = φ(z)/θ(z),  |z| ≤ 1
Behavior of the ACF and PACF for causal and invertible ARMA models:
        AR(p)                   MA(q)                   ARMA(p, q)
ACF     tails off               cuts off after lag q    tails off
PACF    cuts off after lag p    tails off               tails off

21.5 Spectral Analysis
Periodic process:
x_t = A cos(2πωt + φ) = U_1 cos(2πωt) + U_2 sin(2πωt)
- Frequency index ω (cycles per unit time), period 1/ω
- Amplitude A
- Phase φ
- U_1 = A cos φ and U_2 = −A sin φ, often normally distributed rvs
Periodic mixture:
x_t = Σ_{k=1}^q (U_{k1} cos(2πω_k t) + U_{k2} sin(2πω_k t))
U_{k1}, U_{k2}, for k = 1, ..., q, are independent zero-mean rvs with variances σ_k²
γ(h) = Σ_{k=1}^q σ_k² cos(2πω_k h),  γ(0) = E[x_t²] = Σ_{k=1}^q σ_k²
Spectral representation of a periodic process:
γ(h) = σ² cos(2πω_0 h) = (σ²/2) e^{−2πiω_0 h} + (σ²/2) e^{2πiω_0 h} = ∫_{−1/2}^{1/2} e^{2πiωh} dF(ω)
Spectral distribution function:
F(ω) = 0 for ω < −ω_0;  σ²/2 for −ω_0 ≤ ω < ω_0;  σ² for ω ≥ ω_0
F(−∞) = F(−1/2) = 0,  F(∞) = F(1/2) = γ(0)
Spectral density:
f(ω) = Σ_{h=−∞}^{∞} γ(h) e^{−2πiωh},  −1/2 ≤ ω ≤ 1/2
- Needs Σ_h |γ(h)| < ∞ ⟹ γ(h) = ∫_{−1/2}^{1/2} e^{2πiωh} f(ω) dω, h = 0, ±1, ...
- f(ω) ≥ 0,  f(ω) = f(−ω),  f(ω) = f(1 − ω)
- γ(0) = V[x_t] = ∫_{−1/2}^{1/2} f(ω) dω
- White noise: f_w(ω) = σ_w²
- ARMA(p, q), φ(B) x_t = θ(B) w_t:
  f_x(ω) = σ_w² |θ(e^{−2πiω})|²/|φ(e^{−2πiω})|²,  where φ(z) = 1 − Σ_{k=1}^p φ_k z^k and θ(z) = 1 + Σ_{k=1}^q θ_k z^k
Discrete Fourier Transform (DFT):
d(ω_j) = n^{−1/2} Σ_{t=1}^n x_t e^{−2πiω_j t}
Fourier/fundamental frequencies: ω_j = j/n
Inverse DFT:
x_t = n^{−1/2} Σ_{j=0}^{n−1} d(ω_j) e^{2πiω_j t}
Periodogram: I(j/n) = |d(j/n)|²
Scaled periodogram:
P(j/n) = (4/n) I(j/n) = ((2/n) Σ_{t=1}^n x_t cos(2πt j/n))² + ((2/n) Σ_{t=1}^n x_t sin(2πt j/n))²
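A minimal numpy sketch of the DFT/periodogram definitions in 21.5, applied to a noisy sinusoid; the signal frequency (10/n) and amplitude are arbitrary illustrations.

import numpy as np

rng = np.random.default_rng(11)
n = 256
t = np.arange(n)
x = 2.0 * np.cos(2 * np.pi * 10 * t / n) + rng.normal(size=n)

# DFT d(omega_j) = n^(-1/2) * sum_t x_t exp(-2*pi*i*j*t/n), omega_j = j/n.
d = np.fft.fft(x) / np.sqrt(n)
I = np.abs(d) ** 2                    # periodogram I(j/n) = |d(j/n)|^2

print(np.argmax(I[1: n // 2]) + 1)    # dominant Fourier frequency index, ~10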
22 Math

22.1 Gamma Function
- Ordinary: Γ(s) = ∫_0^∞ t^{s−1} e^{−t} dt
- Upper incomplete: Γ(s, x) = ∫_x^∞ t^{s−1} e^{−t} dt
- Lower incomplete: γ(s, x) = ∫_0^x t^{s−1} e^{−t} dt
- Γ(α + 1) = α Γ(α)  (α > 0)
- Γ(n) = (n − 1)!  (n ∈ N)
- Γ(1/2) = √π

22.2 Beta Function
- Ordinary: B(x, y) = B(y, x) = ∫_0^1 t^{x−1} (1 − t)^{y−1} dt = Γ(x)Γ(y)/Γ(x + y)
- Incomplete: B(x; a, b) = ∫_0^x t^{a−1} (1 − t)^{b−1} dt
- Regularized incomplete:
  I_x(a, b) = B(x; a, b)/B(a, b) = Σ_{j=a}^{a+b−1} ((a + b − 1)!/(j!(a + b − 1 − j)!)) x^j (1 − x)^{a+b−1−j}  (a, b ∈ N)
- I_0(a, b) = 0,  I_1(a, b) = 1
- I_x(a, b) = 1 − I_{1−x}(b, a)

22.3 Series
Finite:
- Σ_{k=1}^n k = n(n + 1)/2
- Σ_{k=1}^n (2k − 1) = n²
- Σ_{k=1}^n k² = n(n + 1)(2n + 1)/6
- Σ_{k=1}^n k³ = (n(n + 1)/2)²
- Σ_{k=0}^n c^k = (c^{n+1} − 1)/(c − 1),  c ≠ 1
Binomial:
- Σ_{k=0}^n C(n, k) = 2^n
- Σ_{k=0}^n C(r + k, k) = C(r + n + 1, n)
- Σ_{k=0}^n C(k, m) = C(n + 1, m + 1)
- Vandermonde's identity: Σ_{k=0}^r C(m, k) C(n, r − k) = C(m + n, r)
- Binomial theorem: Σ_{k=0}^n C(n, k) a^{n−k} b^k = (a + b)^n
Infinite:
- Σ_{k=0}^∞ p^k = 1/(1 − p),  Σ_{k=1}^∞ p^k = p/(1 − p),  |p| < 1
- Σ_{k=0}^∞ k p^{k−1} = d/dp (Σ_{k=0}^∞ p^k) = 1/(1 − p)²,  |p| < 1
- Σ_{k=0}^∞ C(r + k − 1, k) x^k = (1 − x)^{−r},  r ∈ N⁺
- Σ_{k=0}^∞ C(α, k) p^k = (1 + p)^α,  |p| < 1, α ∈ C

22.4 Combinatorics
Sampling (k out of n):
- ordered, without replacement: n^(k) = n!/(n − k)! = Π_{i=0}^{k−1} (n − i)
- ordered, with replacement: n^k
- unordered, without replacement: C(n, k) = n^(k)/k! = n!/(k!(n − k)!)
- unordered, with replacement: C(n − 1 + r, r) = C(n − 1 + r, n − 1)
Stirling numbers, 2nd kind:
{n k} = k {n−1 k} + {n−1 k−1} for 1 ≤ k ≤ n;  {n 0} = 1 if n = 0, 0 else
Partitions:
P_{n+k,k} = Σ_{i=1}^k P_{n,i};  P_{n,k} = 0 for k > n;  P_{n,0} = 0 for n ≥ 1;  P_{0,0} = 1
Balls and Urns (f : B → U with |B| = n, |U| = m; D = distinguishable, ¬D = indistinguishable):
- B: D, U: D      f arbitrary: m^n;   f injective: m^(n);   f surjective: m! {n m};   f bijective: n! if m = n, else 0
- B: ¬D, U: D     f arbitrary: C(n + m − 1, n);   f injective: C(m, n);   f surjective: C(n − 1, m − 1);   f bijective: 1 if m = n, else 0
- B: D, U: ¬D     f arbitrary: Σ_{k=1}^m {n k};   f injective: 1 if n ≤ m, else 0;   f surjective: {n m};   f bijective: 1 if m = n, else 0
- B: ¬D, U: ¬D    f arbitrary: Σ_{k=1}^m P_{n,k};   f injective: 1 if n ≤ m, else 0;   f surjective: P_{n,m};   f bijective: 1 if m = n, else 0

References

[1] P. G. Hoel, S. C. Port, and C. J. Stone. Introduction to Probability Theory. Brooks Cole, 1972.
[2] L. M. Leemis and J. T. McQueston. Univariate Distribution Relationships. The American Statistician, 62(1):45-53, 2008.
[3] R. H. Shumway and D. S. Stoffer. Time Series Analysis and Its Applications With R Examples. Springer, 2006.
[4] A. Steger. Diskrete Strukturen - Band 1: Kombinatorik, Graphentheorie, Algebra. Springer, 2001.
[5] A. Steger. Diskrete Strukturen - Band 2: Wahrscheinlichkeitstheorie und Statistik. Springer, 2002.
[6] L. Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer, 2003.
Univariate distribution relationships, courtesy Leemis and McQueston [2].