
Master’s Written Examination

Option: Statistics Spring 2020

Full points may be obtained for correct answers to eight questions. Each numbered question
(which may have several parts) is worth the same number of points. All answers will be
graded, but the score for the examination will be the sum of the scores of your best eight
solutions.

Use separate answer sheets for each question. DO NOT PUT YOUR NAME ON
YOUR ANSWER SHEETS. When you have finished, insert all your answer sheets into
the envelope provided, then seal it.

To earn full credit, you must show all the steps leading to your answer.

Problem 1—Stat 401. Let X be a Gaussian random variable with mean 0 and variance
Y , where Y is a random variable such that

P{Y = 1} = p, and P{Y = 2} = 1 − p.

Here 0 < p < 1 is a given number.

1. What is the conditional p.d.f. of X given Y = 1?

2. What is the conditional p.d.f. of X given Y = 2?

3. Find the probability P{X > 0}.

Solution to Problem 1.

1. Given Y = 1, X is a Gaussian random variable with mean µ = 0 and variance σ² = 1,
   hence the conditional p.d.f. of X is

       f_{X|Y}(x | y = 1) = (1/√(2π)) e^{−x²/2}.

2. Similarly, the conditional p.d.f. of X given Y = 2 is


       f_{X|Y}(x | y = 2) = (1/√(4π)) e^{−x²/4}.

3. We have

       P{X > 0} = P{X > 0 | Y = 1} P{Y = 1} + P{X > 0 | Y = 2} P{Y = 2}
                = (1/2) p + (1/2)(1 − p)
                = 1/2.

Problem 2—Stat 401. Let X, Y, Z have joint probability density function f (x, y, z) =
2(x + y + z)/3, 0 < x, y, z < 1; zero elsewhere.

1. Find the marginal probability density functions of X, Y and Z.

2. Are X, Y, Z independent?

3. Determine the conditional distribution of X, given Y = y and Z = z; and compute
   E(X | Y = y, Z = z).

Solution to Problem 2.

1. We have

       f_X(x) = ∫₀¹ ∫₀¹ f(x, y, z) dy dz = (2/3)(x + 1),   0 < x < 1.
   Similarly, we have

       f_Y(y) = (2/3)(y + 1),   0 < y < 1,

   and

       f_Z(z) = (2/3)(z + 1),   0 < z < 1.
2. Since f_X(x) f_Y(y) f_Z(z) ≠ f(x, y, z), the variables X, Y, Z are not independent.

3. The joint probability density function of (Y, Z) is

       f_{Y,Z}(y, z) = ∫₀¹ f(x, y, z) dx = (2/3)(y + z + 1/2),   0 < y, z < 1.

   Hence, the conditional probability density of X given Y = y and Z = z is

       f_{X|Y,Z}(x | y, z) = f(x, y, z) / f_{Y,Z}(y, z) = 2(x + y + z)/(2y + 2z + 1),   0 < x < 1.

   The conditional expectation is therefore

       E(X | Y = y, Z = z) = ∫₀¹ x · f_{X|Y,Z}(x | y, z) dx = (3y + 3z + 2)/(6y + 6z + 3),   0 < y, z < 1.
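   A numerical sketch (illustrative, not part of the exam) verifying the closed form at
   an arbitrary point, here y = 0.2 and z = 0.7:

       y <- 0.2; z <- 0.7
       cond_pdf <- function(x) 2 * (x + y + z) / (2 * y + 2 * z + 1)
       integrate(function(x) x * cond_pdf(x), 0, 1)$value   # numerical integral
       (3 * y + 3 * z + 2) / (6 * y + 6 * z + 3)            # closed form above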

Problem 3—Stat 401. For independent and identically distributed random variables
X_1, X_2, . . . , X_n with P[X_i > 0] = 1 and Var(log(X_i)) = σ², show that for every ε > 0
(Hint: Chebyshev inequality)

(a)
       P[ exp{n[E(log(X_1)) − ε]} < ∏_{i=1}^n X_i < exp{n[E(log(X_1)) + ε]} ] ≥ 1 − σ²/(nε²)

(b)
       P[ ∏_{i=1}^n X_i < (E(X_1))^n e^{nε} ] ≥ 1 − σ²/(nε²)

(You may use the inequality log(E(X_1)) ≥ E(log(X_1)).)

Solution to Problem 3.

(a) By the condition P[X_i > 0] = 1, we have

        P[ e^{n[E(log(X_1)) − ε]} < ∏_{i=1}^n X_i < e^{n[E(log(X_1)) + ε]} ]
          = P[ n[E(log(X_1)) − ε] < Σ_{i=1}^n log(X_i) < n[E(log(X_1)) + ε] ]
          = P[ −nε < Σ_{i=1}^n log(X_i) − nE(log(X_1)) < nε ]                        (1)
          = P[ | Σ_{i=1}^n log(X_i) − nE(log(X_1)) | < nε ]
          = 1 − P[ | Σ_{i=1}^n log(X_i) − nE(log(X_1)) | ≥ nε ].

    The log(X_i), i = 1, . . . , n, are i.i.d., so E(Σ_{i=1}^n log(X_i)) = nE(log(X_1)) and
    Var(Σ_{i=1}^n log(X_i)) = nσ². By the Chebyshev inequality, we have

        P[ | Σ_{i=1}^n log(X_i) − nE(log(X_1)) | ≥ nε ] ≤ nσ²/(nε)² = σ²/(nε²).      (2)

    Thus the conclusion follows.

(b) −log(x) is a convex function on (0, ∞). By Jensen's inequality, we have

        −log(E(X_1)) ≤ E(−log(X_1)),

    which is equivalent to

        log(E(X_1)) ≥ E(log(X_1)).

    On the other hand,

        P[ e^{n[E(log(X_1)) − ε]} < ∏_{i=1}^n X_i < e^{n[E(log(X_1)) + ε]} ]
          ≤ P[ ∏_{i=1}^n X_i < e^{n[E(log(X_1)) + ε]} ]
          ≤ P[ ∏_{i=1}^n X_i < e^{n[log(E(X_1)) + ε]} ]                              (3)
          = P[ ∏_{i=1}^n X_i < (E(X_1))^n e^{nε} ].

    By (a), the conclusion follows.
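A simulation sketch of the bound in (a) (illustrative assumptions: lognormal X_i, so
log(X_i) ~ N(µ, σ²), with specific n and ε; none of these values come from the exam):

    set.seed(2)
    n <- 50; eps <- 0.3; sigma <- 1; mu <- 0      # E(log X_1) = mu here
    inside <- replicate(1e4, {
      s <- sum(rnorm(n, mu, sigma))               # sum of the log(X_i)
      abs(s - n * mu) < n * eps                   # the event in (1)
    })
    mean(inside)                                  # empirical probability
    1 - sigma^2 / (n * eps^2)                     # Chebyshev lower bound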

4
Problem 4—Stat 401. In a lengthy manuscript, it is discovered that 86.5% of the
pages contain at least one typing error. Assume that the number of errors per page is
Poisson distributed.
(a) Let X be the number of errors per page and X follows a Poisson(λ) distribution. What
is λ?
(b) Let Y be the total number of typos on n pages. Suppose that the numbers of typos on
different pages are independent and identically distributed as Poisson(λ). Show that Y
follows a Poisson distribution with parameter nλ.
(c) Suppose that the number of typos on n pages follows a Poisson distribution with
parameter nλ, where λ is computed in part (a). How many pages should be checked
so that at least one typo is found with probability no less than 0.99?
Solution to Problem 4. (a) Because X follows a Poisson distribution, we have

    P(X ≥ 1) = 1 − P(X < 1) = 1 − P(X = 0) = 1 − (λ⁰/0!) exp(−λ) = 1 − exp(−λ) = 0.865.

Then it follows that λ = −log(0.135) ≈ 2.00.
(b) Note that the moment generating function of X is

    M_X(t) = E(e^{tX}) = Σ_{k=0}^∞ e^{tk} exp(−λ) λ^k/k! = exp(−λ) Σ_{k=0}^∞ (e^t λ)^k/k!
           = exp(−λ) exp(e^t λ) = exp{(e^t − 1)λ}.

Then the moment generating function of Y is

    M_Y(t) = {M_X(t)}^n = exp{(e^t − 1)nλ}.

Hence, Y follows a Poisson distribution with parameter nλ.
(c) We would like to find n so that

    P(Y ≥ 1) ≥ 0.99.

This is equivalent to 1 − P(Y < 1) ≥ 0.99. Then it follows that

    P(Y < 1) = P(Y = 0) = ((nλ)⁰/0!) exp(−nλ) = exp(−nλ) ≤ 0.01,

which is equivalent to

    n ≥ −log(0.01)/λ = −log(0.01)/2 ≈ 2.30.

Thus, at least 3 pages need to be checked so that at least one typo is found with probability
no less than 0.99.
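The two computations above can be reproduced directly in R (a sketch using the values
in the solution):

    lambda <- -log(1 - 0.865)               # part (a): -log(0.135), about 2.00
    n_min  <- ceiling(-log(0.01) / lambda)  # part (c): smallest integer n
    c(lambda = lambda, n_min = n_min)       # about 2.00 and 3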

5
Problem 5—Stat 411. Let X1 , . . . , Xn and Y1 , . . . , Yn be independent random samples
from two normal distributions N (µ1 , σ 2 ) and N (µ2 , σ 2 ), respectively, where σ 2 is the common
but unknown variance.
(a) Find the likelihood ratio ∆ for testing H0 : µ1 = µ2 = 0 against all alternatives.

(b) Rewrite ∆ so that it is a function of a statistic Z which has a well-known distribution
    under the null hypothesis (Hint: Σ_{i=1}^n X_i² = Σ_{i=1}^n (X_i − X̄ + X̄)²).

(c) Give the distribution of Z under the null hypothesis.


Solution to Problem 5. The likelihood function is
    (2πσ²)^{−n} exp( −(1/(2σ²)) ( Σ_{i=1}^n (X_i − µ_1)² + Σ_{j=1}^n (Y_j − µ_2)² ) ).

(a) Under the null hypothesis, the MLE of σ² is

        σ̂² = ( Σ_{i=1}^n X_i² + Σ_{j=1}^n Y_j² ) / (2n)

    and

        L(ω̂) = (2π)^{−n} ( (Σ_{i=1}^n X_i² + Σ_{j=1}^n Y_j²) / (2n) )^{−n} exp(−n).

    Under Ω, we have µ̂_1 = X̄, µ̂_2 = Ȳ,

        σ̂² = ( Σ_{i=1}^n (X_i − X̄)² + Σ_{j=1}^n (Y_j − Ȳ)² ) / (2n),

    and

        L(Ω̂) = (2π)^{−n} ( (Σ_{i=1}^n (X_i − X̄)² + Σ_{j=1}^n (Y_j − Ȳ)²) / (2n) )^{−n} exp(−n).

    Thus, we have

        ∆ = L(ω̂)/L(Ω̂) = ( (Σ_{i=1}^n X_i² + Σ_{j=1}^n Y_j²) / (Σ_{i=1}^n (X_i − X̄)² + Σ_{j=1}^n (Y_j − Ȳ)²) )^{−n}.

(b) Notice that

        Σ_{i=1}^n X_i² + Σ_{j=1}^n Y_j² = Σ_{i=1}^n (X_i − X̄)² + Σ_{j=1}^n (Y_j − Ȳ)² + nX̄² + nȲ².

    Thus

        ∆ = (1 + Z)^{−n},

    where

        Z = (nX̄² + nȲ²) / ( Σ_{i=1}^n (X_i − X̄)² + Σ_{j=1}^n (Y_j − Ȳ)² ).

(c) Under the null hypothesis, nX̄²/σ² ∼ χ²(1), nȲ²/σ² ∼ χ²(1), Σ_{i=1}^n (X_i − X̄)²/σ² ∼ χ²(n − 1),
and Σ_{j=1}^n (Y_j − Ȳ)²/σ² ∼ χ²(n − 1). Notice that these four quantities are mutually independent.
Thus (nX̄² + nȲ²)/σ² ∼ χ²(2) and (Σ_{i=1}^n (X_i − X̄)² + Σ_{j=1}^n (Y_j − Ȳ)²)/σ² ∼ χ²(2n − 2).
Consequently, after dividing each chi-square by its degrees of freedom, (n − 1)Z follows an F
distribution with first df 2 and second df 2n − 2; since ∆ is a monotone function of Z, the test
can be based on this F statistic.
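A simulation sketch (n and the null values µ_1 = µ_2 = 0, σ = 1 are illustrative) checking
that (n − 1)Z matches an F(2, 2n − 2) distribution under the null:

    set.seed(3)
    n <- 20
    z <- replicate(1e4, {
      x <- rnorm(n); y <- rnorm(n)
      (n * mean(x)^2 + n * mean(y)^2) /
        (sum((x - mean(x))^2) + sum((y - mean(y))^2))
    })
    qqplot(qf(ppoints(1e4), 2, 2 * n - 2), (n - 1) * z,
           xlab = "F(2, 2n-2) quantiles", ylab = "(n-1) Z")  # roughly a straight line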

Problem 6—Stat 411. Let X_1, ..., X_n be iid Poisson random variables with parameter
θ > 0, and q(θ) = 1 − e^{−θ} = P_θ(X_1 > 0).
(a) Show that S_n = Σ_{i=1}^n X_i is sufficient and complete for θ.

(b) Identify the distribution of Sn .

(c) Derive the minimum variance unbiased estimator for q(θ).

Solution to Problem 6. (a) Sufficiency can be established by noting that the Poisson
distribution is an exponential family.
To check that the sufficient statistic U = S_n = Σ_{i=1}^n X_i is complete, suppose

    E_θ[g(U)] = Σ_{u=0}^∞ g(u) ((nθ)^u/u!) e^{−nθ} = 0   ∀ θ > 0.

Hence

    Σ_{u=0}^∞ g(u) (n^u/u!) θ^u = 0   ∀ θ > 0,

which implies g(u) n^u/u! = 0 ∀ u = 0, 1, ... and therefore g(u) = 0 ∀ u = 0, 1, ...
(b) Sn has a Poisson distribution with parameter nθ by using the MGF.
(c) First propose an unbiased estimator T = I{X_1 > 0}.
Next apply Rao–Blackwellization:

    E(T | U = u) = 1 − P_θ(X_1 = 0 | S_n = u)
                 = 1 − P_θ(X_1 = 0) P_θ(X_2 + · · · + X_n = u) / P_θ(S_n = u)
                 = 1 − ((n − 1)/n)^u.

Therefore, T* = 1 − ((n − 1)/n)^{S_n} is the UMVUE.

Problem 7—Stat 411. Let X1 , ..., Xn be iid uniform(0, θ) random variables, θ > 0.

7
(a) Find a sufficient and complete statistic for θ and verify it.

(b) Find the MLE of θ.

Solution to Problem 7. (a) The joint pdf of X_1, . . . , X_n is

    f(x_1, . . . , x_n) = ∏_{i=1}^n θ^{−1} 1_{(0,θ)}(x_i) = θ^{−n} 1_{(0,θ)}(max_i{x_i}) · 1_{(0,∞)}(min_i{x_i}).

According to the factorization theorem, Y_n = max_i{X_i} is a sufficient statistic for θ.


It can be verified that Y_n has the pdf

    f_{Y_n}(y; θ) = (n/θ^n) y^{n−1},   0 < y < θ.

For any function u(Y_n) of Y_n such that E u(Y_n) = 0 for all θ ∈ (0, ∞), we have
∫₀^θ u(y) · nθ^{−n} y^{n−1} dy = 0 for all θ > 0. That is,

    ∫₀^θ u(y) y^{n−1} dy = 0

for all θ > 0. Differentiating with respect to θ, we get u(θ)θ^{n−1} = 0 for all θ > 0. That
is, u(y) = 0 for all y > 0, which is the support of Y_n. By the definition of completeness,
Y_n is also complete for θ ∈ (0, ∞).
(b) The likelihood function of θ can be written as

    L(θ) = ∏_{i=1}^n θ^{−1} 1_{[0,θ]}(X_i) = θ^{−n} 1_{[max_i{X_i}, ∞)}(θ),

which attains its maximum at max_i{X_i}. That is, the MLE of θ is θ̂ = max_i{X_i}.
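A quick simulation sketch (θ and n illustrative) comparing the empirical distribution of
Y_n = max{X_i} with the pdf derived above:

    set.seed(5)
    theta <- 2; n <- 5
    yn <- replicate(1e5, max(runif(n, 0, theta)))
    hist(yn, freq = FALSE, breaks = 50, main = "pdf of Y_n")
    curve(n * x^(n - 1) / theta^n, 0, theta, add = TRUE, col = "red")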

Problem 8—Stat 411. Let X_1, . . . , X_n be a random sample from a Bernoulli distribution
b(1, θ) with parameter 0 ≤ θ ≤ 1.

(a) Find the Fisher information I(θ) of the distribution.

(b) Suppose that the MLE of θ is θ̂ = X̄ = (1/n) Σ_{i=1}^n X_i. Determine if X̄ is an efficient
    estimator of θ.

(c) Find the asymptotic distribution of √n(θ̂ − θ).

(d) Suppose we know that 1/3 ≤ θ ≤ 1; find the MLE of θ.

8
Solution to Problem 8. (a) The probability mass function of b(1, θ) is

    f(x; θ) = θ^x (1 − θ)^{1−x}.

Then

    log f(x; θ) = x log θ + (1 − x) log(1 − θ),

    ∂ log f(x; θ)/∂θ = x/θ − (1 − x)/(1 − θ),

    ∂² log f(x; θ)/∂θ² = −x/θ² − (1 − x)/(1 − θ)².

The Fisher information of the distribution is

    I(θ) = −E( −X/θ² − (1 − X)/(1 − θ)² ) = θ/θ² + (1 − θ)/(1 − θ)² = 1/θ + 1/(1 − θ) = 1/(θ(1 − θ)).
(b) First of all, the MLE θ̂ = X̄ is unbiased since E(X̄) = n^{−1} Σ_{i=1}^n θ = θ. Secondly, the
variance of the MLE is Var(X̄) = n^{−2} Σ_{i=1}^n θ(1 − θ) = θ(1 − θ)/n, which attains the
Rao–Cramér lower bound 1/[nI(θ)] = θ(1 − θ)/n. Therefore, the MLE is efficient.
(c) The asymptotic distribution of √n(θ̂ − θ) is N(0, 1/I(θ)) = N(0, θ(1 − θ)).
(d) The log-likelihood function of θ and its first derivative are

    l(θ) = log ∏_{i=1}^n θ^{X_i}(1 − θ)^{1−X_i} = (Σ_{i=1}^n X_i) log θ + (n − Σ_{i=1}^n X_i) log(1 − θ),

    l′(θ) = (Σ_{i=1}^n X_i)(1/θ) − (n − Σ_{i=1}^n X_i)(1/(1 − θ)) = n(X̄ − θ)/(θ(1 − θ)),

respectively. That is, l(θ) increases before θ = X̄ and decreases after that. Given 1/3 ≤ θ ≤ 1,
l(θ) attains its maximum at X̄ if X̄ ≥ 1/3, or at 1/3 if X̄ < 1/3. That is, the MLE in this
case is max{X̄, 1/3}.
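A simulation sketch (θ and n are illustrative) of the asymptotic variance in (c) and the
constrained MLE in (d):

    set.seed(6)
    theta <- 0.4; n <- 200
    xbar <- replicate(1e4, mean(rbinom(n, 1, theta)))
    c(emp_var = var(sqrt(n) * (xbar - theta)),
      theory  = theta * (1 - theta))         # should be close
    mle_constrained <- pmax(xbar, 1/3)       # MLE under 1/3 <= theta <= 1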

Problem 9—Stat 481. Consider the regression model that relates gas mileage and weight
of automobiles. Thirty-eight cars were selected, and their weights x (in units of 1,000 pounds)
and fuel efficiencies MPG (miles per gallon) were measured.

(a) The residual plot Figure 1 was obtained after fitting the simple linear regression model
(Model I) MPG = β0 + β1 x + . Discuss whether Model I is appropriate based on
Figure 1. What would you suggest based on Figure 1?

Figure 1: Residual Plot of Model I (residuals plotted against fitted values)

(b) Define GPM = 100/MPG, that is, gallons per 100 miles. The residual plot Figure 2
was obtained after fitting the model (Model II) GPM = β0 + β1 x + . Discuss whether
Model II is better than Model I based on Figure 2.

Figure 2: Residual Plot of Model II (residuals plotted against fitted values)

(c) Using R to fit Model II, we get the following output.


Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.00623 0.30266 -0.021 0.984
x 1.51484 0.10271 14.748 <2e-16 ***
---
Signif. codes: 0 ’***’ 0.001 ’**’ 0.01 ’*’ 0.05 ’.’ 0.1 ’ ’ 1

Residual standard error: 0.4416 on 36 degrees of freedom


Multiple R-squared: 0.858, Adjusted R-squared: 0.854

Based on the output, estimate the GPM of cars with a weight of 3,500 pounds. Con-
struct a 95% confidence interval for β0 . What is your conclusion based on your confi-
dence interval? (For your reference, some relevant critical values of t-distributions are
t(0.025; df=36) = 2.03, t(0.05; df=36) = 1.69.)

(d) Would you accept Model II as your final model? What else do you want to do?

Solution to Problem 9. (a) Model I is not appropriate based on Figure 1 because there
is a clear nonlinear pattern in the residual plot. One may consider adding a quadratic term
x² to the model or using some transformation of MPG instead.
(b) Figure 2 is much better than Figure 1 in the sense that there is little pattern in the
residual plot. We conclude that Model II is better than Model I.
(c) Based on the output, the fitted regression equation is

    GPM = −0.00623 + 1.515x.

At x = 3.5 (3,500 pounds), the estimate of GPM is −0.00623 + 1.515 × 3.5 = 5.30 (gallons
per 100 miles).
Based on the output, β̂_0 = −0.00623 and se(β̂_0) = 0.303. Using the critical value t(0.025;
df=36) = 2.03, a 95% confidence interval for β_0 is −0.00623 ± 2.03 × 0.303, or (−0.621, 0.609),
which includes 0. The conclusion based on the confidence interval is that β_0 is not
significantly different from 0 at the 0.05 level.
(d) Based on the output in (c), the intercept (β_0) can be omitted. One may remove the
intercept term and fit a new model GPM = β_1 x + ε instead. This can be done by
lm(GPM ~ -1 + x) in R, as in the sketch below.
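A sketch of the refit suggested in (d). The original 38-car data set is not included in the
exam, so synthetic data mimicking the fitted Model II stands in for it here:

    set.seed(7)
    x   <- runif(38, 2, 5)                        # weight in 1,000 lbs (synthetic)
    GPM <- 1.515 * x + rnorm(38, sd = 0.44)       # mimics the Model II fit
    fit0 <- lm(GPM ~ -1 + x)                      # no-intercept model from (d)
    summary(fit0)
    predict(fit0, newdata = data.frame(x = 3.5))  # estimated GPM at 3,500 lbs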

Problem 10—Stat 481. A study conducted by Baty et al. (2006) aims to measure
the influence of beverages on blood gene expression. In the study, they measured the gene
expression of participants who each consumed 4 different beverages (500 mL each: grape juice,
red wine, alcohol and water). Consider a linear regression with the gene expression as the
response and beverage type as predictor. The following output is given by R, where BeverFac
is a categorical variable with level 1 (Alcohol), level 2 (Grape Juice), level 3 (Red Wine) and
level 4 (Water), and resp is the gene expression response.

> BeverFac<-as.factor(Beverages)
> resp<-averageRFC2
> lm2<-lm(resp~BeverFac)
> summary(lm2)

Call:
lm(formula = resp ~ BeverFac)

Residuals:
Min 1Q Median 3Q Max
-0.198736 -0.062475 0.000081 0.062119 0.301559

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.28993 0.02028 211.525 <2e-16 ***
BeverFac2 -0.04088 0.02840 -1.439 0.153
BeverFac3 -0.03108 0.02790 -1.114 0.268
BeverFac4 0.02851 0.02767 1.030 0.305
---
Signif. codes: 0 ’***’ 0.001 ’**’ 0.01 ’*’ 0.05 ’.’ 0.1 ’ ’ 1

Residual standard error: 0.1014 on 104 degrees of freedom


Multiple R-squared: 0.07181, Adjusted R-squared: 0.04504
F-statistic: 2.682 on 3 and 104 DF, p-value: 0.05063

> vcov(lm2)
(Intercept) BeverFac2 BeverFac3 BeverFac4
(Intercept) 0.0004113169 -0.0004113169 -0.0004113169 -0.0004113169
BeverFac2 -0.0004113169 0.0008068140 0.0004113169 0.0004113169
BeverFac3 -0.0004113169 0.0004113169 0.0007785642 0.0004113169
BeverFac4 -0.0004113169 0.0004113169 0.0004113169 0.0007659005

(a) Let Y be the gene expression and X_i (i = 0, 1, 2, 3) be the dummy variables, respectively
    representing Alcohol, Grape Juice, Red Wine and Water. Write down the linear
    regression model corresponding to the above R code. Provide the least squares estimate
    of the difference between the effects of alcohol and water on the gene expression. Is this
    difference significantly different from 0? Justify your answer using the parameters
    defined in your linear regression model.

(b) Give the least squares estimate of the difference between the effects of Red Wine and
    Water on the gene expression, and the standard error of this estimate.

(c) Please complete the following ANOVA table using the R output given above.

Analysis of Variance Table

Response: resp
Degree of freedom Sum of Squares Mean Squares F value Pr(>F)
BeverFac
Residuals

Solution to Problem 10. (a) The linear regression model corresponding to the fitted
model is

    Y = β_0 + β_1 X_1 + β_2 X_2 + β_3 X_3 + ε,

where ε has mean zero and constant variance σ². Because the baseline in the linear model is
Alcohol, the difference between the effects of alcohol and water is represented by β_3, the
coefficient of the dummy variable corresponding to water. The least squares estimate of β_3
is 0.02851. To determine whether the difference is significantly different from 0, we test the
null hypothesis H_0 : β_3 = 0 against the alternative H_1 : β_3 ≠ 0. The corresponding test
statistic value is 1.030 and the p-value is 0.305, so the difference is not significant.
(b) The difference between the effects of Red Wine and Water on the gene expression is
β_2 − β_3. The least squares estimate is −0.03108 − 0.02851 = −0.05959, and its variance is
the quadratic form

    (1, −1) ( 0.0007785642  0.0004113169 ) (1, −1)ᵀ
            ( 0.0004113169  0.0007659005 )
      = 0.0007785642 + 0.0007659005 − 2(0.0004113169) = 0.0007218309.

Thus, the standard error is √0.0007218309 = 0.02686691.
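A sketch of the same contrast computation in R, assuming the fitted object lm2 from the
output above is still available in the session:

    a   <- c(0, 0, 1, -1)                        # picks out beta2 - beta3
    est <- sum(a * coef(lm2))                    # -0.05959
    se  <- sqrt(drop(t(a) %*% vcov(lm2) %*% a))  # 0.02686691
    c(estimate = est, std_error = se)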


(c) Analysis of Variance Table (Response: resp)

               Df   Sum Sq   Mean Sq   F value   Pr(>F)
    BeverFac    3  0.08274  0.027579     2.682  0.05063
    Residuals 104  1.06942  0.010283

Problem 11—Stat 481. In the computer science department of a large university, many
students change their major after the first year. A detailed study of the 256 students enrolled
as first-year computer science majors in one year was undertaken to help understand this
phenomenon. Students were classified on the basis of their status at the beginning of their
second year, and several variables measured at the time of their entrance to the university
were obtained. Here are summary statistics for the SAT mathematics scores:

    Second-year major                n    x̄    s
    Computer Science               103   619   86
    Engineering & other sciences    31   629   67
    Other                          122   575   83

Assume a fixed, completely randomized design is used.

1. Write down the effect model for the CRD, the assumptions, and any model constraints.

2. Given SSTR = 139,372, calculate SSE and SSTO. Construct the ANOVA table.

3. Given that F0.01 (2, 253) = 4.7, test if there are differences in the SAT scores among
the three groups of students at α = 0.01. What is your conclusion?

4. Find an estimate for the mean difference between the SAT scores of computer science
and engineering students. Then derive its sampling distribution and find its 95%
confidence interval.

Solution to Problem 11.

1.  The effect model is

        Y_ij = µ + τ_i + ε_ij,   i = 1, 2, 3;  j = 1, 2, . . . , n_i.

    In this case, n_1 = 103, n_2 = 31, and n_3 = 122.
    Assumption: i.i.d. errors ε_ij ∼ N(0, σ²).
    Constraint: Σ_i n_i τ_i = 0. In this case,

    • Y is the response variable and represents the SAT score.
    • The mean µ_i is the average score for each of the three majors (factor levels).

2.  We know that

        s²_i = (1/(n_i − 1)) Σ_{j=1}^{n_i} (Y_ij − Ȳ_i)²   ⇒   SSE = Σ_{i=1}^k s²_i (n_i − 1).

    Then

        SSE = 86²(103 − 1) + 67²(31 − 1) + 83²(122 − 1) = 1,722,631,
        SSTO = SSTR + SSE = 139,372 + 1,722,631 = 1,862,003.

    And the ANOVA table:

        Source       SS        df    MS        F
        Treatment    139372      2   69686     10.23
        Error        1722631   253   6808.82
        Total        1862003   255

3. Hypotheses:

H0 : τ1 = τ2 = τ3 = 0 ⇔ H0 : µ1 = µ2 = µ3
versus
H1 : at least one τ_i ≠ 0 ⇔ H1 : at least one µ_i is not the same

Test Statistic: F = 10.23


Critical Region: Reject H0 if F > F0.01 (2, 253) = 4.7.
Decision: Since 10.23 > 4.7, Reject H0 .
Conclusion: There are differences in the SAT scores among the three groups of students.

4.  We know that Ȳ_CS ∼ N(µ_CS, σ²/n_CS) and Ȳ_E ∼ N(µ_E, σ²/n_E). Then Ȳ_CS − Ȳ_E
    follows a normal distribution with mean µ_CS − µ_E and variance σ²(1/n_CS + 1/n_E).
    In addition,

        (Ȳ_CS − Ȳ_E − (µ_CS − µ_E)) / √(σ²(1/n_CS + 1/n_E)) ∼ N(0, 1)   and   SSE/σ² ∼ χ²(N − k).

    Based on Student's theorem, Ȳ_CS − Ȳ_E is independent of SSE = Σ_{i=1}^k (n_i − 1)s²_i.
    Hence, we have the sampling distribution

        t = (Ȳ_CS − Ȳ_E − (µ_CS − µ_E)) / √(MSE (1/n_CS + 1/n_E)) ∼ t(N − k),

    where n_CS = 103, n_E = 31, k = 3, and N = 256.

    In this case, α = 0.05 so α/2 = 0.025. Then t_{α/2}(N − k) = t_{0.025}(253) ≈ 1.97. The
    confidence interval for µ_CS − µ_E is

        619 − 629 ± 1.97 √(6808.82 (1/103 + 1/31))  ⇒  −10 ± 33.3  ⇒  (−43.3, 23.3).

Conclusion: Since the interval contains 0, there is no significant difference between the mean SAT scores of the two groups.
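A numerical sketch (not required by the exam) reproducing parts 2–4 in R from the
summary statistics alone:

    n <- c(103, 31, 122); xbar <- c(619, 629, 575); s <- c(86, 67, 83)
    SSTR <- 139372                          # given in the problem
    SSE  <- sum(s^2 * (n - 1))              # 1,722,631
    SSTO <- SSTR + SSE                      # 1,862,003
    MSE  <- SSE / (sum(n) - 3)              # 6808.82
    Fval <- (SSTR / 2) / MSE                # 10.23
    half <- qt(0.975, sum(n) - 3) * sqrt(MSE * (1 / n[1] + 1 / n[2]))
    (CI  <- xbar[1] - xbar[2] + c(-1, 1) * half)   # about (-43.3, 23.3)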

Problem 12—Stat 481. A plant manager wants to investigate the productivity of three
groups of workers: those with little, those with average, and those with considerable expe-
rience. Because productivity depends to some degree on the day-to-day variability of the
available raw materials, which affects all groups in a similar fashion, the manager suspects
that the comparison should be blocked with respect to day. The results from five production
days are as follows:
                          Day (Block)
    Experience Level    1    2    3    4    5   Row Mean
    A                  53   58   49   52   60       54.4
    B                  55   57   53   57   64       57.2
    C                  60   62   55   64   69       62.0

You obtain the SAS ANOVA table (output not reproduced here):
1. What design method was used? Write down the model and necessary assumptions and
   constraints.

2. Based on the SAS output, are there treatment effects at α = 0.05? If so, between
   which groups are there differences (use 95% Tukey simultaneous confidence intervals,
   where q_{0.05}(3, 8) = 4.04)?

3. Are there block effects at α = 0.05? What conclusion can you draw about block
   effects?

4. If you ran this design as a completely randomized design, what would be the new
   ANOVA table? Is this design as sensitive as the RCBD at α = 0.05? Given:
   F_{0.05}(2, 12) = 3.89, F_{0.05}(4, 10) = 3.47.

Solution to Problem 12.

1. A randomized complete block design (RCBD) was used. The statistical model is

       Y_ij = µ + τ_i + β_j + ε_ij,

   where the i.i.d. errors ε_ij ∼ N(0, σ²) and Σ_i τ_i = 0, Σ_j β_j = 0.

2. Hypotheses:

       H0 : τ_A = τ_B = τ_C = 0   versus   H1 : not all τ_i = 0

   • Test statistic: F = 24.352
   • P-value: 0.0004 ≈ 0
   • Decision: Reject H0.
   • Conclusion: There is a difference among treatment means.

   Follow-up test: Tukey simultaneous confidence intervals.

       τ_A − τ_B :  54.4 − 57.2 ± 4.04 √(3.033/5)  ⇒  (−5.95, 0.35)
       τ_A − τ_C :  54.4 − 62.0 ± 4.04 √(3.033/5)  ⇒  (−10.75, −4.45)
       τ_B − τ_C :  57.2 − 62.0 ± 4.04 √(3.033/5)  ⇒  (−7.95, −1.65)

   There are significant differences between experience levels A and C, and between B
   and C. There is no significant difference between experience levels A and B.

3. Hypotheses:

       H0 : β_1 = β_2 = β_3 = β_4 = β_5 = 0   versus   H1 : not all β_j = 0

   • Test statistic: F = 19.10
   • P-value: 0.0004 < 0.05
   • Decision: Reject H0.
   • Conclusion: There is a difference among block means. This means that blocking
     was effective.
4. The new ANOVA table for the completely randomized design would be:

       Source      SS        df   MS       F
       Treatment   147.733    2   73.867   3.46
       Error       256.000   12   21.333
       Total       403.733   14

   In this case, the p-value for the F statistic would be P(F > 3.46) = 0.065. Based on
   the critical value F_{0.05}(2, 12) = 3.89, we would have concluded that there was no
   difference among treatment means. The CRD is therefore not as sensitive as the RCBD.
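A sketch (not required by the exam) reconstructing both analyses in R from the data
table above:

    dat <- data.frame(
      y     = c(53, 58, 49, 52, 60,  55, 57, 53, 57, 64,  60, 62, 55, 64, 69),
      level = factor(rep(c("A", "B", "C"), each = 5)),
      day   = factor(rep(1:5, times = 3))
    )
    fit_rcbd <- aov(y ~ level + day, data = dat)
    summary(fit_rcbd)            # treatment F ~ 24.35, block F ~ 19.10
    TukeyHSD(fit_rcbd, "level")  # pairwise intervals matching part 2
    fit_crd <- aov(y ~ level, data = dat)
    summary(fit_crd)             # F = 3.46 on (2, 12) df, p ~ 0.065 (part 4)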
