May 1, 2009
Introduction
The law of large numbers for an i.i.d. sequence gives convergence
of the sample means to a constant, i.e., a deterministic quantity.
Definition:
An urn initially has r red and b black balls. Draw a ball at random, note
its color, replace it, and add another ball of the same color.
Let X_i = 1 if the i-th draw yields a red ball and X_i = 0 otherwise.
Then,

P(1,1,0,1) = \frac{r}{b+r} \cdot \frac{r+1}{b+r+1} \cdot \frac{b}{b+r+2} \cdot \frac{r+2}{b+r+3}
           = \frac{r}{b+r} \cdot \frac{b}{b+r+1} \cdot \frac{r+1}{b+r+2} \cdot \frac{r+2}{b+r+3} = P(1,0,1,1),

and similarly for the other cases.
The sequence X_1, \ldots, X_n, \ldots is exchangeable: its joint distribution is invariant under any finite permutation of the indices.
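The equality of these two sequence probabilities can be checked exactly with rational arithmetic. A minimal sketch (the helper name `polya_seq_prob` is introduced here for illustration):

```python
from fractions import Fraction

def polya_seq_prob(outcomes, r, b):
    """Exact probability of a specific draw sequence in the urn scheme
    (1 = red, 0 = black): after each draw the ball is replaced together
    with one extra ball of the same color."""
    prob = Fraction(1)
    reds, blacks = r, b
    for x in outcomes:
        total = reds + blacks
        if x == 1:
            prob *= Fraction(reds, total)
            reds += 1
        else:
            prob *= Fraction(blacks, total)
            blacks += 1
    return prob

# Two sequences with the same number of red draws are equally likely.
p1 = polya_seq_prob([1, 1, 0, 1], r=3, b=2)
p2 = polya_seq_prob([1, 0, 1, 1], r=3, b=2)
print(p1, p2, p1 == p2)  # → 1/14 1/14 True
```

The probability depends only on how many red draws occur, not on where they occur, which is exactly the exchangeability of the X_i.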
Conditional Expectation
Let X be a r.v. on a probability space (\Omega, \mathcal{S}, P). Given a sub-\sigma-algebra \mathcal{Y} \subset \mathcal{S}, a r.v. Y is called the conditional expectation of X given \mathcal{Y}, written E[X \mid \mathcal{Y}], if Y is \mathcal{Y}-measurable, that is,

\{\omega \in \Omega : Y(\omega) \le a\} \in \mathcal{Y} \quad \forall a \in \mathbb{R},

and

\int_A Y \, dP = \int_A X \, dP \quad \forall A \in \mathcal{Y}.
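On a finite sample space the definition becomes concrete: for a \sigma-algebra generated by a partition, E[X \mid \mathcal{Y}] is the cell-by-cell average of X. A minimal sketch (the fair-die example and the helper `cond_exp` are illustrations introduced here):

```python
from fractions import Fraction

# Finite sample space: one roll of a fair die. The sigma-algebra Y is
# generated by the partition {odd, even}; E[X | Y] is constant on each
# cell and equals the cell average, which makes the defining identity
# integral_A Y dP = integral_A X dP hold for every A in Y.
omega = [1, 2, 3, 4, 5, 6]
P = {w: Fraction(1, 6) for w in omega}
cells = [[1, 3, 5], [2, 4, 6]]          # the partition generating Y

def cond_exp(X, cells, P):
    """E[X | sigma(cells)] as a function on omega."""
    out = {}
    for cell in cells:
        mass = sum(P[w] for w in cell)
        avg = sum(Fraction(X(w)) * P[w] for w in cell) / mass
        for w in cell:
            out[w] = avg
    return out

Y = cond_exp(lambda w: w, cells, P)     # E[X|Y] = 3 on odds, 4 on evens
# Check the defining property on a generating set A = {1, 3, 5}:
A = cells[0]
lhs = sum(Y[w] * P[w] for w in A)       # integral_A Y dP
rhs = sum(Fraction(w) * P[w] for w in A)  # integral_A X dP
print(Y[1], Y[2], lhs == rhs)           # → 3 4 True
```

Note that E[X \mid \mathcal{Y}] is itself a random variable; only its averages over sets of \mathcal{Y} are pinned down.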
Theorem:
If \{X_i\} is exchangeable and E|X_1| < +\infty, then

\frac{1}{N} \sum_{i=1}^{N} X_i \to E[X_1 \mid \mathcal{Y}] \quad \text{a.s.},

a random variable, for some \sigma-algebra \mathcal{Y}.
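The point of the theorem — the limit is a random variable, not a constant — can be illustrated with the urn above: the long-run fraction of red draws differs from run to run (for this urn the limit is in fact Beta(r, b)-distributed). A simulation sketch (function and parameter names are introduced here):

```python
import random

def polya_mean(r, b, n, rng):
    """Run one urn for n draws; return the sample mean of the X_i."""
    reds, total = r, r + b
    hits = 0
    for _ in range(n):
        if rng.random() < reds / total:  # draw is red
            hits += 1
            reds += 1                    # add a red ball
        total += 1                       # one ball added either way
    return hits / n

rng = random.Random(0)
means = [polya_mean(2, 3, 2000, rng) for _ in range(1000)]
# For i.i.d. draws the sample means would all pile up at one constant;
# here they stay spread out, because the a.s. limit is itself random.
mu = sum(means) / len(means)
var = sum((m - mu) ** 2 for m in means) / len(means)
print(round(mu, 3), round(var, 3))
```

The empirical variance of the long-run means stays bounded away from zero (for Beta(2, 3) it is 0.04), in contrast to the i.i.d. LLN.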
LLN for exchangeable sequences
Proof: Let the infinite sequence X = (X_1, X_2, \ldots) of random variables be exchangeable, and let \mathcal{Y}_n be the \sigma-algebra generated by all the n-symmetric functions of X, so that \mathcal{Y}_n \supseteq \mathcal{Y}_{n+1}.
If f is a measurable function for which E|f(X_1)| < +\infty, and if Y = g(X) is a bounded n-symmetric r.v., then for 1 \le j \le n, exchangeability gives

E\{f(X_j)\, g(X)\} = E\{f(X_1)\, g(X_j, X_2, \ldots, X_{j-1}, X_1, X_{j+1}, \ldots)\} = E\{f(X_1)\, g(X)\},

so that

E\Big\{\frac{1}{n} \sum_{j=1}^{n} f(X_j)\, Y\Big\} = E\{f(X_1)\, Y\}.
Then take Y to be the indicator of A, 1_A, so that

\int_A \frac{1}{n} \sum_{j=1}^{n} f(X_j)\, dP = \int_A f(X_1)\, dP \quad (A \in \mathcal{Y}_n).

Then, by the definition of conditional expectation,

\frac{1}{n} \sum_{j=1}^{n} f(X_j) = E\{f(X_1) \mid \mathcal{Y}_n\}.
The claim is that the X_i are conditionally i.i.d.: P(X_i \in A_i,\ 1 \le i \le n \mid \mathcal{Y}) = \prod_i P(X_i \in A_i \mid \mathcal{Y}). To see this, let

f(y) = \begin{cases} 1 & y \le x \\ 0 & y > x \end{cases},
\qquad \text{so that} \qquad
\frac{1}{n} \sum_{j=1}^{n} f(X_j) = \frac{1}{n} \#\{j \le n : X_j \le x\},

and, with \mathcal{Y}_\infty = \bigcap_n \mathcal{Y}_n, reverse martingale convergence gives

\lim_{n \to \infty} \frac{1}{n} \sum_{j=1}^{n} f(X_j) = E\{f(X_1) \mid \mathcal{Y}_\infty\} = P(X_1 \le x \mid \mathcal{Y}_\infty) = F(x).
Since P(X_1 \le x_1) = E\{I_{(-\infty, x_1]}(X_1)\},

P(X_1 \le x_1 \wedge \cdots \wedge X_k \le x_k \mid \mathcal{Y}_\infty) = E\{I_{(-\infty, x_1]}(X_1) \cdots I_{(-\infty, x_k]}(X_k) \mid \mathcal{Y}_\infty\} = F(x_1) \cdots F(x_k),

and \forall x_1, \ldots, x_k the map \omega \mapsto F(\omega, x_1) \cdots F(\omega, x_k) is \mathcal{Y}_\infty-measurable.
By the tower property,

E\{E\{I_{(-\infty, x_1]}(X_1) \cdots I_{(-\infty, x_k]}(X_k) \mid \mathcal{Y}_\infty\} \mid \mathcal{F}\} = E\{F(x_1) \cdots F(x_k) \mid \mathcal{F}\},

where \mathcal{F} \subset \mathcal{Y}_\infty and F(x_i) is \mathcal{F}-measurable \forall i = 1, 2, \ldots, k. Then

E\{I_{(-\infty, x_1]}(X_1) \cdots I_{(-\infty, x_k]}(X_k) \mid \mathcal{F}\} = F(x_1) \cdots F(x_k),

and thus

P(X_1 \le x_1 \wedge \cdots \wedge X_k \le x_k \mid \mathcal{F}) = F(x_1) \cdots F(x_k).
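For the urn of the introduction, the mixing measure in this de Finetti representation is known to be Beta(r, b), and that can be checked exactly: P(X_1 = 1, \ldots, X_k = 1) computed from the urn dynamics must equal the k-th moment of a Beta(r, b) variable. A sketch (helper names introduced here):

```python
import math
from fractions import Fraction

def prob_first_k_red(r, b, k):
    """P(X_1 = 1, ..., X_k = 1) directly from the urn dynamics:
    each red draw adds one red ball before the next draw."""
    p = Fraction(1)
    for i in range(k):
        p *= Fraction(r + i, r + b + i)
    return p

def beta_moment(r, b, k):
    """k-th moment of theta ~ Beta(r, b) via the Gamma-function formula
    E[theta^k] = Gamma(r+k) Gamma(r+b) / (Gamma(r) Gamma(r+b+k))."""
    return (math.gamma(r + k) * math.gamma(r + b)
            / (math.gamma(r) * math.gamma(r + b + k)))

# Urn probabilities match Beta moments for every k, as the mixture
# representation P(X_1 = ... = X_k = 1) = E[theta^k] requires.
for k in range(1, 6):
    assert math.isclose(float(prob_first_k_red(3, 2, k)), beta_moment(3, 2, k))
```

Here the random distribution function F of the theorem is the Bernoulli(\theta) CDF with \theta \sim Beta(r, b).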
Application to the mortgage mess
Let the r.v. X_i be the payoff from mortgage i. A bank wants to spread the risk by making a large number of such mortgages, creating a mortgage pool with a (supposedly) deterministic payoff.
However, if the X_i are not i.i.d. but only exchangeable, there is some nontrivial \sigma-algebra \mathcal{Y} that underlies them all, i.e., the X_i are conditionally i.i.d. given \mathcal{Y}. The pooled payoff then seems to be (asymptotically) deterministic but is actually a \mathcal{Y}-measurable random variable, so its value depends on which set in \mathcal{Y} the outcome \omega falls in.
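A toy simulation of this point (all numbers are hypothetical, chosen only for illustration): mortgage payoffs are conditionally i.i.d. given a hidden economy state playing the role of \mathcal{Y}, so diversification removes the within-state noise but not the dependence on which event in \mathcal{Y} occurred.

```python
import random

def pool_average(n, rng):
    """Average payoff of n mortgages that are exchangeable but not i.i.d.:
    conditionally i.i.d. given a hidden economy state (the role of Y)."""
    good = rng.random() < 0.7            # unobserved economy state
    p_default = 0.02 if good else 0.25   # hypothetical default rates
    payoffs = [0.0 if rng.random() < p_default else 1.0 for _ in range(n)]
    return sum(payoffs) / n, good

rng = random.Random(1)
results = [pool_average(10_000, rng) for _ in range(200)]
good_avgs = [a for a, g in results if g]
bad_avgs = [a for a, g in results if not g]
# Each pool's average concentrates, but at a state-dependent value:
# about 1 - 0.02 = 0.98 in a good economy, 1 - 0.25 = 0.75 in a bad one.
print(round(sum(good_avgs) / len(good_avgs), 3),
      round(sum(bad_avgs) / len(bad_avgs), 3))
```

No amount of pooling makes the payoff constant: the limit is \mathcal{Y}-measurable, and its two values correspond to the two events in \mathcal{Y}.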