**Non-parametric and Parametric Bootstrap**

1. Review of the Non-parametric Bootstrap

Given a data set, say {x_1, x_2, . . . , x_n}, and a statistic of interest, say θ, the basic algorithm for the non-parametric bootstrap consists of the following:

1. Resample the data with equal probability and with replacement. That is, each resampling is performed on the entire set of n data points, so that each observation has probability 1/n of being sampled at every draw. For example, for an original sample of size 5, one bootstrap sample might be x* = {x_4, x_1, x_3, x_2, x_2}.

2. Calculate the statistic of interest, θ* = g(x*); call the b-th estimate θ*_b, and store the value in a vector.

3. Repeat (1) and (2) a large number of times.

The resulting vector of bootstrap statistics then provides an estimate of the distribution of the statistic, by way of:

1. Bootstrap estimate of the expected value:

$$\hat{\theta} = \frac{1}{B}\sum_{b} \theta^*_b \qquad (1.1)$$

2. Bootstrap quantiles: Let θ*_[q] represent the q-th quantile of the bootstrap statistic. That is, take the vector of statistics produced by the bootstrap procedure and rank them from smallest to largest. The ranks of the vector then correspond to the bootstrap estimate of the quantiles of the distribution. For example, if the number of bootstrap iterations was 1000, then the 25th element of the ranked vector of bootstrap statistics is the bootstrap estimate of the 0.025 quantile of the distribution of the statistic.

3. The bootstrap estimate of the standard error:

$$\widehat{\mathrm{se}}(\theta) = \sqrt{\frac{1}{B-1}\sum_{b=1}^{B}\left(\theta^*_b - \bar{\theta}\right)^2} \qquad (1.2)$$


4. A 100(1 − α)% confidence interval is

$$\left(\theta^*_{[B\frac{\alpha}{2}]},\ \theta^*_{[B(1-\frac{\alpha}{2})]}\right)$$
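The algorithm and the estimates above can be sketched in a few lines of code. The notes' own examples later use MATLAB; as an illustration only, here is a Python translation (the helper name `nonparametric_bootstrap` and the example call are ours, not from the notes):

```python
import random
import statistics

def nonparametric_bootstrap(data, stat, B=1000, alpha=0.05, seed=0):
    """Bootstrap mean, standard error, and percentile CI for `stat`."""
    rng = random.Random(seed)
    n = len(data)
    # Steps 1-3: resample with replacement B times, recomputing the statistic
    thetas = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)]) for _ in range(B)
    )
    theta_hat = sum(thetas) / B          # equation (1.1): bootstrap mean
    se_hat = statistics.stdev(thetas)    # equation (1.2): bootstrap standard error
    lo = thetas[int(B * alpha / 2)]            # e.g. 25th of 1000 ranked values
    hi = thetas[int(B * (1 - alpha / 2)) - 1]  # e.g. 975th of 1000 ranked values
    return theta_hat, se_hat, (lo, hi)

# Bootstrapping the median of the treatment survival times from Example 1
x = [94, 197, 16, 38, 99, 141, 23]
median_hat, se_hat, ci = nonparametric_bootstrap(x, statistics.median)
```

Note that sorting the vector of bootstrap statistics once gives both the quantile estimates and the percentile confidence interval directly.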

Hypothesis testing and parameter estimation can then be carried out using the bootstrap estimates. The non-parametric bootstrap will be the best approach to inference when everything we know about the distribution comes from the sample. In the case where we know something about the distribution before we look at the sample, parametric approaches will give us better results.

1.1. Inference with the Non-parametric Bootstrap

Inference with the bootstrap is a direct extension of traditional inference.

Example 1 The table below shows the results of a small experiment in which 7 mice were randomly chosen from 16 to receive a new medical treatment, while the remaining 9 were assigned to the non-treatment group. Investigators wanted to test whether the treatment prolonged life after surgery. The table shows the survival times in days.

| Group      | Data                                 | n | mean  | SD    |
|------------|--------------------------------------|---|-------|-------|
| Treatment  | 94, 197, 16, 38, 99, 141, 23         | 7 | 86.86 | 25.24 |
| Control    | 52, 104, 146, 10, 51, 30, 40, 27, 46 | 9 | 56.22 | 14.14 |
| Difference |                                      |   | 30.63 | 28.93 |

Say we wish to test for treatment differences, and know that the median is a better measure of the center of the distribution than the mean.

How do we make inferences using the bootstrap?


[Histogram omitted: "1000 bootstrapped Differences of Treatment Medians"; x-axis "median difference", y-axis "frequency".]

Figure 1: Bootstrapped differences in the median lifetimes, in days, of mice receiving two different post-surgery treatments.

The bootstrap 95% confidence interval was (−29, 101). What is our conclusion?

2. The Parametric Bootstrap

Sometimes we know the distribution of the sample, but we cannot derive the distribution of the statistic of interest. Sometimes we can use asymptotic approximations, but if our sample is small these may be grossly inaccurate. Furthermore, there are cases where the non-parametric bootstrap will fail. Can you think of one?


In the case where we know the distribution of the sample, but not of the sample statistic, the parametric bootstrap often provides a powerful approach.

The basic algorithm for the parametric bootstrap is as follows:

1. Simulate a random sample of the same size as your original sample of interest, using sample estimates for the parameters.

2. Calculate the statistic of interest, θ*, from the simulated sample. Save the value in a vector.

3. Repeat (1) and (2) a large number of times.

The resulting vector then provides an estimate of the distribution of the statistic, just as for the non-parametric case.

Example 2 Recall that the distribution of p̂ is approximately normal with mean p and variance p(1 − p)/n. Suppose we wish to conduct inference on a population proportion using the exact distribution of the underlying sample from which we calculate p̂. We know that the underlying distribution of each of our sample observations is Bernoulli with unknown parameter p. How would we conduct the parametric bootstrap?

1. First calculate the sample estimate of p, which is p̂ = (Σ_i X_i)/n.

2. Then simulate a random sample of n Bernoulli(p̂) random variables and calculate p̂ from the simulated sample.

3. Repeat (2) a large number of times.

4. Plot a histogram, compute quantiles and confidence intervals, etc.
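These four steps might be sketched as follows (a Python illustration; the function name and toy data are ours, and the notes' own code later uses MATLAB):

```python
import random

def parametric_bootstrap_phat(x, B=1000, seed=0):
    """Parametric bootstrap of p-hat for 0/1 Bernoulli data x."""
    rng = random.Random(seed)
    n = len(x)
    p_hat = sum(x) / n                  # step 1: sample estimate of p
    phats = []
    for _ in range(B):                  # steps 2-3: simulate and recompute
        sample = [1 if rng.random() < p_hat else 0 for _ in range(n)]
        phats.append(sum(sample) / n)
    return p_hat, sorted(phats)

# Step 4: quantiles / intervals come from the sorted vector
x = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0] * 5   # toy sample, n = 50, p-hat = 0.6
p_hat, phats = parametric_bootstrap_phat(x)
ci = (phats[25], phats[974])             # 95% bootstrap percentile interval
```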

Below we consider one example where the non-parametric bootstrap fails and the parametric bootstrap proves to be quite useful.

2.1. Distribution of the Sample Maximum

Let X_1, X_2, · · · , X_n be independent and identically distributed random variables whose probability density function (pdf) is given by f and whose cumulative distribution function (cdf) is given by F.


That is, Pr{X_i ≤ x} = F(x) = ∫_{−∞}^{x} f(t) dt for all x. Let Y_[n] = max{X_1, · · · , X_n}; in words, Y_[n] is the largest value in the sample, or the sample maximum. What are the pdf and cdf of Y_[n]?

$$
\begin{aligned}
G_n(y) &= \Pr\{Y_{[n]} \le y\} \\
       &= \Pr\{X_1 \le y, X_2 \le y, \ldots, X_n \le y\} \\
       &= \Pr\{X_1 \le y\}\Pr\{X_2 \le y\}\cdots\Pr\{X_n \le y\} \\
       &= [F(y)]^n \qquad (2.1)
\end{aligned}
$$

(The product step uses the independence of the X_i.)

The pdf, g_n, can be found by differentiating:

$$g_n(y) = n[F(y)]^{n-1} f(y) \qquad (2.2)$$
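Equation (2.1) can be checked by simulation in a case where F is simple: for X_i ~ Uniform(0,1), F(y) = y, so Pr{Y_[n] ≤ y} = y^n. A small Python sketch (ours, not part of the original notes):

```python
import random

def empirical_max_cdf(n, y, trials=20000, seed=0):
    """Empirical Pr{ max of n Uniform(0,1) draws <= y }."""
    rng = random.Random(seed)
    hits = sum(
        max(rng.random() for _ in range(n)) <= y for _ in range(trials)
    )
    return hits / trials

n, y = 5, 0.9
empirical = empirical_max_cdf(n, y)
theoretical = y ** n   # G_n(y) = [F(y)]^n, with F(y) = y for Uniform(0,1)
```

With 20000 trials the empirical probability lands very close to 0.9^5 = 0.59049.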

However, for many random variables these distribution functions are frightfully complicated. The normal distribution, for example, has no closed-form expression for the distribution of the sample maximum. We want a better way to use the information in the sample for our inference.

Why will the non-parametric bootstrap not work for the sample max?
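One way to see the difficulty (a Python simulation sketch, ours rather than the notes'): a non-parametric bootstrap resample can never contain a value larger than the observed maximum, and the resample's maximum ties the sample maximum with probability 1 − (1 − 1/n)^n ≈ 0.632, so the bootstrap distribution of the maximum is piled up on a single point:

```python
import random

def frac_resample_max_equals_sample_max(data, B=2000, seed=0):
    """Fraction of bootstrap resamples whose max equals the sample max."""
    rng = random.Random(seed)
    n, m = len(data), max(data)
    hits = 0
    for _ in range(B):
        # A resample's maximum can never exceed m, only tie it or fall short
        hits += max(data[rng.randrange(n)] for _ in range(n)) == m
    return hits / B

# A continuous sample of size n = 163, as in the bass example below
gen = random.Random(1)
data = [gen.uniform(0, 30) for _ in range(163)]
frac = frac_resample_max_equals_sample_max(data)
# Theory: Pr{tie} = 1 - (1 - 1/n)^n, about 0.632 for large n
```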

Example 3 The following data are a random sample of Large-mouth Bass from a reservoir on the Savannah River Site, a former nuclear processing facility. The reservoir was used as a cooling pond for nuclear effluent through the 1980s, receiving high levels of radioactive materials that now reside in the sediments in the pond. It is of interest to know the probability that, if 163 Bass are taken from the reservoir each year, the maximum tissue concentration of radiocesium will exceed 30 picocuries per gram.

| n   | min  | max   | mean  | SD   |
|-----|------|-------|-------|------|
| 163 | 4.33 | 34.06 | 13.17 | 4.58 |


[Histogram omitted: "Radiocesium Tissue Concentration in Bass from PAR Pond"; x-axis "picocuries per gram", y-axis "frequency".]

Figure 2: An approximately normal data set of ¹³⁷Cs body burdens.

How do we conduct inference using the parametric bootstrap?


A parametric bootstrap was performed using the normal distribution for the underlying distribution of the data. A histogram of the bootstrapped maximums is shown below.

[Histogram omitted: "Bootstrapped Maximum Radiocesium Tissue Concentrations in Bass from PAR Pond"; x-axis "picocuries per gram", y-axis "frequency".]

Figure 3: Bootstrapped maximum body burdens.

There were 17 observations in the bootstrapped maximums that were above 30 picocuries per gram. What is the bootstrap estimate of the probability that the maximum body burden in a sample of size 163 will exceed 30 picocuries per gram?


2.2. Code for Non-parametric Bootstrap Two-Sample Inference

```matlab
% Two-sample non-parametric bootstrap of the difference in medians
treatment = [94,197,16,38,99,141,23];
control   = [52,104,146,10,51,30,40,27,46];

B = 1000;
mediantreat   = zeros(B,1);
mediancontrol = zeros(B,1);
medianDiff    = zeros(B,1);
boottreat   = zeros(length(treatment),1);
bootcontrol = zeros(length(control),1);

for b = 1:B
    % Resample each group with replacement
    for j = 1:length(treatment)
        pick = unidrnd(length(treatment));
        boottreat(j) = treatment(pick);
    end
    for k = 1:length(control)
        pick = unidrnd(length(control));
        bootcontrol(k) = control(pick);
    end
    mediantreat(b)   = median(boottreat);
    mediancontrol(b) = median(bootcontrol);
    medianDiff(b)    = mediantreat(b) - mediancontrol(b);
end

hist(medianDiff);
title('1000 bootstrapped Differences of Treatment Medians')
xlabel('median difference')
ylabel('frequency')

% 95% percentile confidence interval from the ranked differences
sortmedian = sort(medianDiff);
BSCI = [sortmedian(25), sortmedian(975)];
```

2.3. Code for Parametric Bootstrap of the Sample Maximum

```matlab
% Parametric bootstrap of the sample maximum, assuming normal data
hist(bass);
title('Radiocesium Tissue Concentrations in Bass from PAR Pond');
xlabel('picocuries per gram');
ylabel('frequency');

mu    = mean(bass);
sigma = sqrt(var(bass));

B = 1000;
maxbass = zeros(B,1);
for b = 1:B
    % Simulate a sample of the same size from N(mu, sigma^2)
    basspboot = randn(length(bass),1)*sigma + mu;
    maxbass(b) = max(basspboot);
end

hist(maxbass);
title('Bootstrapped Maximum Radiocesium Tissue Concentrations in Bass from PAR Pond');
xlabel('picocuries per gram');
ylabel('frequency');

% Proportion of bootstrapped maxima at or above 30 pCi/g
Count30 = zeros(B,1);
for j = 1:B
    if maxbass(j) >= 30
        Count30(j) = 1;
    end
end
p30 = sum(Count30)/B;
```
