Stat 4110
E (g (X )) = E [g (X ) − h(X )] + E (h(X ))
Now, we can use Monte Carlo methods to find E [g (X ) − h(X )].
Because g and h are close, the random variable g (X ) − h(X ) has a
small variance, so the Monte Carlo error should be smaller than the
Monte Carlo error for g on its own.
We refer to h(X ) as a control variate for g (X ).
1. Generate U_1, U_2, ..., U_n ~ U[0, 1]
2. Find s = (1/n) Σ_{j=1}^n [ log(2U_j^2 + 1) − U_j ]
3. Compute θ̂_n^CV = s + E(h(U)) = s + 1/2
n <- 1000
U <- runif(n)

G <- log(2 * U^2 + 1)
BasicMCEstimator <- mean(G)
CI <- c(BasicMCEstimator - 1.96 * sd(G) / sqrt(n),
        BasicMCEstimator + 1.96 * sd(G) / sqrt(n))

G1 <- log(2 * U^2 + 1) - U
CVEstimator <- mean(G1) + 0.5
CI1 <- c(CVEstimator - 1.96 * sd(G1) / sqrt(n),
         CVEstimator + 1.96 * sd(G1) / sqrt(n))
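As a quick numerical check (a sketch, not part of the original slides; the seed is an arbitrary choice for reproducibility), the variance reduction can be seen directly by comparing the standard deviations of the plain integrand and the control-variate integrand:

```r
set.seed(1)  # assumed seed, for reproducibility only
n <- 1000
U <- runif(n)

G  <- log(2 * U^2 + 1)      # plain Monte Carlo integrand g(U)
G1 <- log(2 * U^2 + 1) - U  # control-variate integrand g(U) - h(U)

# Because g(U) and h(U) = U are highly correlated, sd(G1) is much
# smaller than sd(G), so the control-variate CI is much narrower.
sd(G)
sd(G1)
```

The CI widths scale with these standard deviations, so the ratio sd(G1)/sd(G) is exactly the factor by which the confidence interval shrinks.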
where:

θ̂_n^h = (1/n) Σ_{j=1}^n h(X_j)
MSE(θ̂_n^{c0,h}) = (1/n) [ Var(g(X)) − cov²(g(X), h(X)) / Var(h(X)) ]

MSE(θ̂_n^{c0,h}) / MSE(θ̂_n)
  = (1/n) [ Var(g(X)) − cov²(g(X), h(X)) / Var(h(X)) ] / [ (1/n) Var(g(X)) ]
  = 1 − ρ²(g(X), h(X))
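This identity can be verified numerically for the running example (a sketch, not from the slides; c0 = cov(g, h)/Var(h) is the optimal coefficient, and the seed is an arbitrary choice). With sample variances, covariances, and correlations all using the same n − 1 denominator, the equality holds exactly for the sample moments:

```r
set.seed(1)  # assumed seed, for reproducibility only
n <- 1e4
U <- runif(n)
g <- log(2 * U^2 + 1)  # g(U) from the example
h <- U                 # control variate h(U) = U

c0 <- cov(g, h) / var(h)           # optimal control-variate coefficient
ratio <- var(g - c0 * h) / var(g)  # sample MSE ratio with optimal c0
rho2  <- cor(g, h)^2               # squared sample correlation

# ratio and 1 - rho2 agree, matching MSE(CV)/MSE(plain) = 1 - rho^2
ratio
1 - rho2
```

Since ρ² is close to 1 here (g(U) is nearly linear in U on [0, 1]), the ratio is small, which is exactly why this control variate works so well.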