Published by Fanny Sylvia C. on Dec 09, 2008
Introduction to Monte Carlo Procedures: the Non-parametric and Parametric Bootstrap

1. Review of the Non-parametric Bootstrap
Given a data set, say $\{x_1, x_2, \ldots, x_n\}$, and a statistic of interest, say $\theta$, the basic algorithm for the non-parametric bootstrap consists of the following:

1. Resample the data with equal probability and with replacement. That is, each resampling is performed on the entire $n$ data points, so that each observation has probability $1/n$ of being sampled at every resampling. For example, for an original sample of size 5, one bootstrap sample might be $x^* = \{x_4, x_1, x_3, x_2, x_2\}$.

2. Calculate the statistic of interest, $\theta^* = g(x^*)$; call the $b$th estimate $\theta^*_b$, and store the value in a vector.

3. Repeat (1) and (2) a large number of times.

The resulting vector of bootstrap statistics then provides an estimate of the distribution of the statistic, by way of:

1. Bootstrap estimate of the expected value:
\[
\hat{\theta} = \frac{1}{B} \sum_{b} \theta^*_b \qquad (1.1)
\]

2. Bootstrap quantiles: Let $\theta^*_{[q]}$ represent the $q$th quantile of the bootstrap statistic. That is, take the vector of statistics produced by the bootstrap procedure and rank them from smallest to largest. The ranks of the vector then correspond to the bootstrap estimate of the quantiles of the distribution. For example, if the number of bootstrap iterations was 1000, then the 25th element of the ranked vector of bootstrap statistics is the bootstrap estimate of the 0.025th quantile of the distribution of the statistic.

3. The bootstrap estimate of the standard error:
\[
\widehat{se}(\theta) = \sqrt{\frac{1}{B-1} \sum_{b=1}^{B} \left( \theta^*_b - \bar{\theta}^* \right)^2} \qquad (1.2)
\]

4. A $(1-\alpha) \cdot 100\%$ confidence interval is $\left( \theta^*_{[B\frac{\alpha}{2}]}, \; \theta^*_{[B(1-\frac{\alpha}{2})]} \right)$.

Hypothesis testing and parameter estimation can then be carried out using the bootstrap estimates. The non-parametric bootstrap will be the best approach to inference when everything we know about the distribution comes from the sample. In the case where we know something about the distribution before we look at the sample, parametric approaches will give us better results.
1.1. Inference with the Non-parametric Bootstrap
Inference with the bootstrap is a direct extension of traditional inference.
Example 1. The table below shows the results of a small experiment in which 7 mice were randomly chosen from 16 to receive a new medical treatment, while the remaining 9 were assigned to the non-treatment group. Investigators wanted to test whether the treatment prolonged life after surgery. The table shows the survival times in days.

Group      | Data                         | n | mean  | SE
-----------|------------------------------|---|-------|------
Treatment  | 94,197,16,38,99,141,23       | 7 | 86.86 | 25.24
Control    | 52,104,146,10,51,30,40,27,46 | 9 | 56.22 | 14.14
Difference |                              |   | 30.63 | 28.9

Say we wish to test for treatment differences, and know that the median is a better measure of the center of the distribution than the mean. How do we make inferences using the bootstrap?
[Figure 1 here: a histogram of 1000 bootstrapped differences of treatment medians; x-axis: median difference, y-axis: frequency.]

Figure 1: Bootstrapped differences in the median lifetimes, in days, of mice receiving two different post-surgery treatments.

The bootstrap 95% confidence interval was $(-29, 101)$. What is our conclusion?
2. The Parametric Bootstrap
Sometimes we know the distribution of the sample, but we cannot derive the distribution of the statistic of interest. Sometimes we can use asymptotic approximations, but if our sample is small these may be grossly inaccurate. Furthermore, there are cases where the non-parametric bootstrap will fail. Can you think of one?
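When the form of the distribution is known, the parametric bootstrap resamples from a fitted model rather than from the data itself. A minimal sketch, assuming the data are exponential (the toy data, sample size, and choice of the median as the statistic are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data, assumed to come from an exponential distribution.
x = rng.exponential(scale=10.0, size=20)
scale_hat = x.mean()   # maximum-likelihood estimate of the scale

# Parametric bootstrap: simulate new samples from the *fitted* model,
# here to approximate the sampling distribution of the median.
B = 1000
medians = np.array([np.median(rng.exponential(scale=scale_hat, size=x.size))
                    for _ in range(B)])
se_median = medians.std(ddof=1)   # bootstrap standard error, as in eq. (1.2)
```

The only change from the non-parametric algorithm is step 1: bootstrap samples are drawn from the fitted parametric model instead of the empirical distribution; steps 2 and 3 and the summaries (1.1)-(1.2) are unchanged.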
