Wong Liang Ze
Outline

1 Motivation
References:
Massa & Lauritzen (2010). Combining statistical models.
Massa & Riccomagno (2017). Algebraic representations of Gaussian Markov combinations.
Motivation

Cov(A, B) = 0.8
Cov(B, C) = 0.9

What about Cov(A, C) = x?

\[
\Sigma(x) = \begin{pmatrix} 1 & 0.8 & x \\ 0.8 & 1 & 0.9 \\ x & 0.9 & 1 \end{pmatrix}
\]

Intuitively,
x ≠ 0 (A and C should not be independent)
x ≠ 1 (but they should not be perfectly correlated)
x should be somewhere between 0 and 1
Can we do better?
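One way to make "somewhere between 0 and 1" precise is to ask which x make Σ(x) a valid (positive definite) covariance matrix. A minimal numerical sketch (assuming NumPy; the grid scan is illustrative, not from the slides):

```python
import numpy as np

def sigma(x):
    """Candidate joint covariance with Cov(A,B) = 0.8, Cov(B,C) = 0.9, Cov(A,C) = x."""
    return np.array([[1.0, 0.8, x],
                     [0.8, 1.0, 0.9],
                     [x,   0.9, 1.0]])

def is_valid(x):
    """Sigma(x) is a valid covariance matrix iff all its eigenvalues are positive."""
    return bool(np.all(np.linalg.eigvalsh(sigma(x)) > 0))

# Scan a grid of x values and keep the ones giving a positive definite Sigma(x).
xs = np.linspace(-1.0, 1.0, 2001)
feasible = [x for x in xs if is_valid(x)]
print(round(min(feasible), 3), round(max(feasible), 3))  # 0.459 0.981
```

So the feasible range is in fact strictly inside (0, 1): roughly 0.46 < x < 0.98.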
Markov combinations of multivariate normal distributions

Let

\[
X = \begin{pmatrix} A \\ B \end{pmatrix} \sim N\!\left(0,\; \Phi = \begin{pmatrix} \Phi_{AA} & \Phi_{AB} \\ \Phi_{BA} & \Phi_{BB} \end{pmatrix}\right),
\qquad
Y = \begin{pmatrix} B \\ C \end{pmatrix} \sim N\!\left(0,\; \Psi = \begin{pmatrix} \Psi_{BB} & \Psi_{BC} \\ \Psi_{CB} & \Psi_{CC} \end{pmatrix}\right).
\]

So Σ⁻¹ is (blocks ordered A, B, C):

\[
\Sigma^{-1} =
\begin{pmatrix} \Phi^{-1} & 0 \\ 0 & 0 \end{pmatrix}
+ \begin{pmatrix} 0 & 0 \\ 0 & \Psi^{-1} \end{pmatrix}
- \begin{pmatrix} 0 & 0 & 0 \\ 0 & (\Phi_{BB})^{-1} & 0 \\ 0 & 0 & 0 \end{pmatrix}
\]
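This decomposition is easy to check numerically. A sketch (assuming NumPy; the specific numbers for Φ and Ψ are an assumption, taken from the motivating example):

```python
import numpy as np

# Example covariances (an assumption, reusing the motivating example):
Phi = np.array([[1.0, 0.8],
                [0.8, 1.0]])      # covariance of X = (A, B)
Psi = np.array([[1.0, 0.9],
                [0.9, 1.0]])      # covariance of Y = (B, C)

# Build Sigma^{-1} over (A, B, C) by padding each term with zeros.
K = np.zeros((3, 3))
K[:2, :2] += np.linalg.inv(Phi)   # Phi^{-1} on the (A, B) block
K[1:, 1:] += np.linalg.inv(Psi)   # Psi^{-1} on the (B, C) block
K[1, 1]   -= 1.0 / Phi[1, 1]      # minus (Phi_BB)^{-1} on the B block

Sigma = np.linalg.inv(K)
# The (A, B) block of Sigma equals Phi, the (B, C) block equals Psi,
# and Cov(A, C) comes out as 0.8 * 0.9 = 0.72.
print(np.round(Sigma, 3))
```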
The Markov combination X ∗ Y has density

\[
p_{X*Y}(A, B, C) = \frac{p_X(A, B)\, p_Y(C, B)}{p(B)}
\]
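This factorization can be verified pointwise: the density of the combined Gaussian equals p_X · p_Y / p(B) at any point. A sketch (assuming NumPy; the covariances are from the motivating example, with Cov(A, C) = 0.72 as the Markov combination gives):

```python
import numpy as np

def gauss_pdf(z, S):
    """Density of N(0, S) at the point z."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    n = len(z)
    quad = z @ np.linalg.solve(S, z)
    return float(np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** n * np.linalg.det(S)))

Phi = np.array([[1.0, 0.8], [0.8, 1.0]])      # covariance of X = (A, B)
Psi = np.array([[1.0, 0.9], [0.9, 1.0]])      # covariance of Y = (B, C)
Sigma = np.array([[1.0, 0.8, 0.72],
                  [0.8, 1.0, 0.9],
                  [0.72, 0.9, 1.0]])          # Markov combination of X and Y

a, b, c = 0.3, -0.5, 1.1                      # an arbitrary test point
lhs = gauss_pdf([a, b, c], Sigma)
rhs = gauss_pdf([a, b], Phi) * gauss_pdf([b, c], Psi) / gauss_pdf([b], np.eye(1))
print(np.isclose(lhs, rhs))  # True
```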
Markov combinations

Properties of X ∗ Y:
Preserves marginals: X ∗ Y has the same (A, B)-marginal as X and the same (B, C)-marginal as Y
Conditional independence: A ⊥⊥ C | B
Maximizes entropy: H_{X∗Y}(A ∪ C) ≥ H_Z(A ∪ C) for any Z with the same marginals
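The entropy property can be illustrated numerically: among all valid completions Σ(x) of the motivating example (each one a Z with the same (A, B)- and (B, C)-marginals), the Gaussian differential entropy is maximized exactly at the Markov combination's value x = 0.8 × 0.9 = 0.72. A sketch (assuming NumPy; the grid bounds are chosen to keep Σ(x) positive definite):

```python
import numpy as np

def entropy(S):
    """Differential entropy (in nats) of N(0, S)."""
    n = S.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(S))

def sigma(x):
    """Joint covariance with Cov(A,B) = 0.8, Cov(B,C) = 0.9, Cov(A,C) = x."""
    return np.array([[1.0, 0.8, x],
                     [0.8, 1.0, 0.9],
                     [x,   0.9, 1.0]])

# Scan x over the positive definite range and find the entropy-maximizing value.
xs = np.linspace(0.46, 0.98, 1041)
best = max(xs, key=lambda x: entropy(sigma(x)))
print(round(float(best), 2))  # 0.72
```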