Chapter 2

The Two Sample Problem
2.1 Observational Versus Randomized Studies
The table below is taken from The Statistical Sleuth by Ramsey and Schafer. We will discuss it in class.

                           Allocation of Units to Groups
Selection of Units   By Randomization                     Not by Randomization
At Random            A random sample is selected from     Random samples are selected
                     one population; units are then       from existing distinct
                     randomly assigned to different       populations.
                     treatment groups.
Not at Random        A group of study units is found;     Collections of available units
                     units are then randomly assigned     from distinct groups are
                     to treatment groups.                 examined.

Causal inferences can be drawn for the studies in the "By Randomization" column; inferences to the populations can be drawn for the studies in the "At Random" row.

We will begin by considering examples from each cell in the above table. First, we will consider units that are subjects (distinct individuals). Notice that I am deliberately not defining the response or, if applicable, the treatments.

• Upper left: The population is all high school freshmen in Wisconsin. A random sample is selected from this population. Once the sample is obtained, the students are divided into treatment groups by randomization.

• Upper right: All high schools in Wisconsin are classified as public or private. The high school freshmen at these two types of schools form the two populations. Independent random samples of freshmen are selected from each population.

• Lower left: All freshmen at Sun Prairie high school are selected for study. The students are divided into treatment groups by randomization.

• Lower right: Freshmen at Sun Prairie high school are compared to freshmen at Edgewood high school.

Next, I will consider units that are trials.

• Upper left: A golfer wants to compare two drivers. The trials, individual shots, are assigned to driver by randomization; we get random samples by assuming a spinner model for each driver.

• Upper right: A golfer has one driver and he wants to compare his ability playing at sea level versus playing at an altitude of 5,000 feet. We get random samples by assuming a spinner model at each site.

• Lower left: Same as upper left, but we no longer assume spinner models.

• Lower right: Same as upper right, but we no longer assume spinner models.

Before we get into inference procedures, formulas for tests and estimation, I want to introduce some issues of scientific importance. On each unit (subject or trial) we plan to obtain a response. Typically, the response exhibits some variation as we move from unit to unit. (If there is no variation, we will not need to do Statistics.) We invent the notion of factors as the source of the variation. For example, in its last five basketball games, the Milwaukee Bucks scored 102, 93, 102, 104 and 91 points. Possible factors include strength of opponent, location of game, and length of time since the previous game. Note that these are natural factors for a basketball fan to suggest. If you know nothing about basketball, then you will be ill-equipped to speculate on the identity of factors. As we will see below, if a scientist is bad at suggesting factors, then he/she will likely not learn much from collecting and analyzing data.

From the list of possible factors, the researcher chooses one for special status; it is called the study factor. (In regression and ANOVA, the researcher may choose to have several study factors.) For example, for the Bucks I might choose location of game as my study factor. Next, I must specify the levels of the study factor. If there are two levels, then we have the “two sample problem,” which is the title of this chapter. As a result, I might choose my levels to be “home” and “away.” Note that this is not the only possible choice. I could use the four time zones as the levels, or the 28 arenas (if I am correct in my belief that the Clippers and Lakers share an arena; 29 if I am incorrect). If the researcher selects more than two levels and the levels are on an interval or ratio scale, then methods of regression might be used.

After specifying the study factor(s), all other factors are collectively referred to as background factors. I want to mention two ways to handle background factors. (This is not an exhaustive list.) First, you can block on a background factor. In the basketball example, I could block on the opponent. To keep this simple, I will focus on eight games with four opponents, as shown below.

Opponent   Dallas  Denver  Toronto  Washington    Mean    SD
Home          102     116      104          99  105.25  7.46
Away           97     113      102         100  103.00  6.98
H − A           5       3        2          −1    2.25  2.50

Looking ahead a bit, if the home and away scores were independent random samples, then the following analysis could be appropriate.

twos c1 c2;
pool.

TWOSAMPLE T FOR C1 VS C2
     N    MEAN    SD
C1   4  105.25  7.46
C2   4  103.00  6.98

95 PCT CI FOR MU C1 - MU C2: ( -10.2, 14.7)

TTEST MU C1 = MU C2 (VS NE): T= 0.44  P=0.67  DF= 6

POOLED STDEV = 7.22

But it is more appropriate to analyze the differences as a one sample problem, as with the following analysis.

ttest c3

TEST OF MU = 0 VS MU N.E. 0

      N  MEAN    SD     T  P VALUE
C3    4  2.25  2.50  1.80     0.17

tint c3

      N  MEAN    SD  95 % C.I.
C3    4  2.25  2.50  (-1.73, 6.23)
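The Minitab output above can be reproduced outside Minitab. Here is a rough Python sketch of the same two analyses (not part of the original notes; it assumes numpy and scipy are available).

import numpy as np
from scipy import stats

home = np.array([102, 116, 104, 99])
away = np.array([97, 113, 102, 100])

# Two independent samples with a pooled variance estimate (parallels "twos c1 c2; pool.").
t_ind, p_ind = stats.ttest_ind(home, away, equal_var=True)

# One-sample t on the game-by-game differences (parallels "ttest c3").
diff = home - away
t_pair, p_pair = stats.ttest_1samp(diff, 0.0)

print(f"independent samples: t = {t_ind:.2f}, P = {p_ind:.2f}")   # about 0.44 and 0.67
print(f"differences:         t = {t_pair:.2f}, P = {p_pair:.2f}")  # about 1.80 and 0.17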

Notice that for the differences, the P-value is much smaller and the confidence interval is much narrower.

Blocking is not always effective. For example, the Bucks' data above were selected deliberately for illustration only. There are 14 teams that the Bucks have played both home and away thus far. The analysis of all the data is very different from the analysis above.

twos c1 c2;
pool.

TWOSAMPLE T FOR C1 VS C2
     N    MEAN     SD
C1  14   94.60  10.20
C2  14  101.71   7.16

95 PCT CI FOR MU C1 - MU C2: ( -13.9, -0.2)

TTEST MU C1 = MU C2 (VS NE): T= -2.12  P=0.043  DF= 26

POOLED STDEV = 8.81

The analysis with blocking is below.

ttest c3

TEST OF MU = 0 VS MU N.E. 0

      N   MEAN     SD      T  P VALUE
C3   14  -7.07  13.04  -2.03    0.063

tint c3

      N   MEAN     SD  95 % C.I.
C3   14  -7.07  13.04  (-14.60, 0.46)

Note from the above output that, in contrast to the four-game illustration, the independent samples analysis, compared to the block (paired data) analysis, gives a smaller P-value and a narrower confidence interval.

It is my opinion that novice statisticians frequently are overly optimistic about the value of blocking. Unless you are pretty certain that the factor has a big impact on the response, it is usually better not to block.

A second way to deal with a background factor is by controlling for it. This means that you keep the value of the factor constant throughout the study, or at least for the data you analyze. In a study of her cat's consumption of two flavors of treats, Dawn (a former student) controlled the cat's intake of other food, and tried to control his activity level. In addition, she presented either treat at the same time each day, in an attempt to control for a time of day effect as well as the cat's general level of hunger.

After blocking and controlling, if either or both of these are used, there are still lots of background factors. If units are assigned to the study factor by randomization, there is some reason to believe that the effects of these background factors will be “balanced” between the levels of the study factor (this notion can be made more precise). But if units are merely associated with the level of a study factor, it is very possible that the background factors will severely bias the study. Some examples follow.

1. Yesterday I heard a talk on the effects of “coaching” for the SAT. The subjects are students. The response is the change in verbal score from PSAT to SAT. The study factor is coaching, with levels “yes” and “no.” What are some possible background factors?

2. The subjects are people. The response is whether the person develops a particular disease of interest. The study factor is smoking, with levels “yes” and “no.” What are some possible background factors?

3. The subjects are men. The response is whether the man develops a particular disease of interest. The study factor is whether the man has had a vasectomy, with levels “yes” and “no.” What are some possible background factors?



The following hypothetical example illustrates the possible effect of a background factor. A company with 200 employees decides it must reduce its work force by one-half. The following table reveals the relationship between gender and outcome.

                      Outcome
Gender    Released  Not released  Total    p̂
Female          60            40    100  0.60
Male            40            60    100  0.40
Total          100           100    200

Now suppose that the value of a background factor, job type, is available for each person. One could take the data above and stratify it according to job type, as I have done below.

Job A
                      Outcome
Gender    Released  Not released  Total    p̂
Female          56            24     80  0.70
Male            16             4     20  0.80
Total           72            28    100

Job B
                      Outcome
Gender    Released  Not released  Total    p̂
Female           4            16     20  0.20
Male            24            56     80  0.30
Total           28            72    100

Note that in the original table, the female release rate is 0.20 larger than the male release rate, but in each component table (i.e., for each job) the female release rate is 0.10 smaller than the male release rate! This consistent (across component tables) reversal of the direction of the relationship is called Simpson's Paradox. We can gain insight into the “why” behind Simpson's Paradox by examining the following two tables.

               Job
Gender       A     B  Total
Female      80    20    100
Male        20    80    100
Total      100   100    200

               Outcome
Job    Released  Not released  Total
A            72            28    100
B            28            72    100
Total       100           100    200
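A quick numerical check of the reversal (a sketch, not part of the original notes; plain Python, no libraries needed):

# (gender, job) -> (released, not released), the counts from the tables above
released = {
    ("Female", "A"): (56, 24),
    ("Male",   "A"): (16, 4),
    ("Female", "B"): (4, 16),
    ("Male",   "B"): (24, 56),
}

# Aggregated rates: Female 0.60, Male 0.40
for gender in ("Female", "Male"):
    rel = sum(released[(gender, job)][0] for job in ("A", "B"))
    tot = sum(sum(released[(gender, job)]) for job in ("A", "B"))
    print(gender, "overall release rate:", rel / tot)

# Stratified rates: Job A Female 0.70 vs Male 0.80; Job B Female 0.20 vs Male 0.30
for job in ("A", "B"):
    for gender in ("Female", "Male"):
        rel, notrel = released[(gender, job)]
        print("Job", job, gender, "release rate:", rel / (rel + notrel))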

The background factor (job) is statistically related to the study factor (gender) and the response (outcome). If the background factor fails to be statistically related to either the study factor or the response, then Simpson's Paradox will not occur. (This issue will be addressed in a future homework assignment.) If a background factor is strongly (statistically) related to the response, then you probably want to block on it. If a background factor is strongly (statistically) related to the study factor, then it will be difficult to separate statistically the effect of the study factor from the effect of the background factor.

There is a sampling issue for observational studies that I want to address. Years ago I saw a variation of the following example in a really bad introductory Statistics book. Each person in a population of college students can be assigned a value on each of two dichotomous variables. The first is GPA: high (A) or low (Ac); the second is whether the person smokes tobacco (B) or not (Bc). We can imagine a table of population counts (I will follow the notation in Wardrop, Chapter 8).

           B       Bc      Total
A        N_AB    N_ABc     N_A
Ac       N_AcB   N_AcBc    N_Ac
Total    N_B     N_Bc      N

There are several ways to view this table. You can view it as a single population with two dichotomous variables per person (as I have done above). In this case, inference would focus on estimating probabilities and conditional probabilities. Secondly, you could view smoking status as the response and GPA as the study factor, with levels high and low. This means that we have two distinct populations, high and low GPA. Inference would focus on the proportion of smokers in each GPA population. Thirdly, we can reverse the roles of smoking and GPA. This gives two distinct populations, smokers and nonsmokers. Inference would focus on the proportion of high GPA students in each smoking group. (The bad book thought that this last perspective was the only one possible and compounded its error by suggesting a causal link: smoking leads to bad grades! One could just as easily argue that anxiety over low grades leads to smoking, or that a background factor, time spent partying, is such that a large amount of time spent partying is linked to smoking and low grades.)



A critical point that is often overlooked is the importance of how a sample is selected. Let us imagine three possible sampling schemes. We can take a random sample from the overall population of college students; we can take independent random samples from the populations of smokers and nonsmokers; or we can take independent random samples from the populations of high and low GPA. Suppose that the population counts are given by the following table.

            Smoker?
GPA       Yes     No   Total
High      600   4400    5000
Low      1400   3600    5000
Total    2000   8000   10000

The table of population proportions is below.

            Smoker?
GPA       Yes     No   Total
High     0.06   0.44    0.50
Low      0.14   0.36    0.50
Total    0.20   0.80    1.00

The table of conditional probabilities of smoking status given GPA is below.

            Smoker?
GPA       Yes     No   Total
High     0.12   0.88    1.00
Low      0.28   0.72    1.00

Note that 0.12 and 0.28 are p1 and p2 for the perspective of smoking being the response.

The table of conditional probabilities of GPA given smoking status is below.

            Smoker?
GPA       Yes     No
High     0.30   0.55
Low      0.70   0.45
Total    1.00   1.00

Note that 0.30 and 0.55 are p1 and p2 for the perspective of GPA being the response.

I will consider three ways to sample. First, suppose we select a random sample (with replacement) of size 1000 from the overall population. I did this on my computer and obtained the data below.

            Smoker?
GPA       Yes     No   Total
High       60    437     497
Low       137    366     503
Total     197    803    1000

Next, I used these data to estimate the three tables above: the population proportions and the two tables of conditional probabilities. The results are below.

            Smoker?
GPA        Yes      No   Total
High     0.060   0.437   0.497
Low      0.137   0.366   0.503
Total    0.197   0.803   1.000

            Smoker?
GPA        Yes      No   Total
High     0.121   0.879   1.000
Low      0.272   0.728   1.000

            Smoker?
GPA        Yes      No
High     0.305   0.544
Low      0.695   0.456
Total    1.000   1.000

By inspection, all estimates are quite close to the population proportions. As the comparisons based on the last two tables suggest, if we have a random sample from the overall population, it is valid to pretend we have either: (a) independent random samples from the high and low GPA populations, or (b) independent random samples from the smoking and nonsmoking populations.

Second, suppose that I select independent random samples (with replacement) of size 500 each from the smoking and nonsmoking populations. I did this on my computer and obtained the results shown below.

            Smoker?
GPA       Yes     No   Total
High      157    288     445
Low       343    212     555
Total     500    500    1000

The estimate of high GPA for smokers is 157/500 = 0.314, and for nonsmokers is 288/500 = 0.576. These numbers are reasonably close to the population proportions, 0.30 and 0.55, respectively. But now suppose that we pretend we have independent random samples from the GPA populations; what happens? Our estimate of smoking given high GPA is 157/445 = 0.353, which is considerably larger than 0.12, the population proportion. And our estimate of smoking given low GPA is 343/555 = 0.618, which is considerably larger than 0.28, the population proportion. As a result, we conclude that it is improper to pretend we have independent random samples from the GPA populations.

The reason for the strong bias shown above is simple. By taking equal sample sizes from each smoking population, we are grossly oversampling the smokers in the overall population, and also in the two GPA populations. Suppose, however, that I had selected samples of size 200 from the smokers and 800 from the nonsmokers. (These sample sizes match the proportions of smokers and nonsmokers in the overall population.) I did this and obtained the data below.

            Smoker?
GPA       Yes     No   Total
High       62    434     496
Low       138    366     504
Total     200    800    1000

If one divides each of the values in the table by 1000, one obtains a very good estimate of the table of population proportions. This is a general rule: if we sample from subpopulations in proportion to their occurrence in the overall population, then it is OK to pretend we have a random sample from the overall population.
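The sampling schemes above can also be mimicked by simulation. Below is a rough Python sketch (not part of the original notes; numpy assumed) that draws a random sample from the overall population and an equal-allocation sample from the two smoking populations, and shows that only the first gives a trustworthy estimate of the conditional proportion of smokers among high-GPA students.

import numpy as np

rng = np.random.default_rng(0)
counts = {("High", "Yes"): 600, ("High", "No"): 4400,
          ("Low", "Yes"): 1400, ("Low", "No"): 3600}
cells = list(counts)
probs = np.array([counts[c] for c in cells], dtype=float)
probs /= probs.sum()

def sample_overall(n):
    # Random sample (with replacement) from the overall population of 10,000.
    idx = rng.choice(len(cells), size=n, p=probs)
    return [cells[i] for i in idx]

def p_smoker_given_high(sample):
    high = [c for c in sample if c[0] == "High"]
    return sum(1 for c in high if c[1] == "Yes") / len(high)

print(p_smoker_given_high(sample_overall(1000)))   # close to the population value 0.12

def sample_by_smoking(n_yes, n_no):
    # Independent samples (with replacement) from the smoking and nonsmoking populations.
    sample = []
    for status, n in (("Yes", n_yes), ("No", n_no)):
        sub = [c for c in cells if c[1] == status]
        w = np.array([counts[c] for c in sub], dtype=float)
        w /= w.sum()
        idx = rng.choice(len(sub), size=n, p=w)
        sample += [sub[i] for i in idx]
    return sample

# Equal allocation oversamples smokers, so the same conditional proportion is badly biased.
print(p_smoker_given_high(sample_by_smoking(500, 500)))   # roughly 0.35, not 0.12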

Before returning to the two sample problem, I want to digress into a common error on sampling. The point is that it is very important to be careful about units. Suppose that a small college has a freshman class of 500 students. Each student enrolls in five courses, as detailed below.

1. Social Studies 101, which is taught in one section of 500 students.

2. Science 102, which is taught in five sections of 100 students each.

3. Math 103, which is taught in 10 sections of 50 students each.

4. English 104, which is taught in 25 sections of 20 students each.

5. Humanities 105, which is taught in 50 sections of 10 students each.

The college calculates that there are 2500 students in the 91 sections offered, for a mean of 27.5 students per section. A rival college reports that for every student, the mean class size is 136 (each student's five classes have sizes 500, 100, 50, 20 and 10, which average to 136). Both computations are correct. What do you think?

2.2 Dichotomous response, independent samples

Later in this chapter we will consider dependent samples, which arise from pairing. Data from studies of this section can be presented in the following manner.

               Variable 2
Variable 1      B     Bc   Total
A               a      b     n1
Ac              c      d     n2
Total          m1     m2      n

This table is meant to be very general. It can be used for sampling from one population with two dichotomous responses per unit (remember the GPA and smoking example earlier). It can be used for independent random samples from two populations. Finally, it can be used with a study with randomization. Since at some point in the analysis I usually view the one population, two responses problem as a problem on conditional probabilities, I will modify the above table to the following form, which I find easier to understand.

                Response
Study factor     S      F   Total
Level 1          a      b     n1
Level 2          c      d     n2
Total           m1     m2      n

In order to analyze such data, statisticians typically begin by arguing that the marginal totals can (or should) be viewed as fixed numbers. This can be a bit of a stretch, so some discussion is merited.

For the one population, two responses model, only the value n is fixed in advance by the researcher; all other entries in the table are the observed values of random variables. The statistician then argues that one should perform the analysis after conditioning on the other marginal totals. Here is an abridged version of the argument statisticians give. Suppose that we have the following marginal totals.

               Variable 2
Variable 1      B     Bc   Total
A               a      b     60
Ac              c      d     40
Total          70     30    100

Only the total number of units, n = 100, is fixed by the sampling plan. But what do we learn from the other totals? Well, we get evidence that B is much more common than Bc, and evidence that A is somewhat more common than Ac. But we don't learn anything about a relationship between A and B, which is, after all, the primary purpose of the investigation. Note that with the above margins, we could have a = 60, which would provide evidence of a very strong positive association between A and B; or we could have a = 30, which would provide evidence of a very strong negative association between A and B; or we could have a = 42, which would provide no evidence of an association between A and B. In short, knowledge of the marginal totals does not provide the researcher with evidence of the strength or direction of the association between A and B; hence, it probably won't hurt to condition on the margins. Plus there is the added bonus that conditioning on the margins makes the math much easier.

In the table below, define p̂1 = a/n1, q̂1 = b/n1, p̂2 = c/n2, and q̂2 = d/n2.

                Response
Study factor     S      F   Total
Level 1          a      b     n1
Level 2          c      d     n2
Total           m1     m2      n

The confidence interval for p1 − p2 is

    p̂1 − p̂2 ± z sqrt( p̂1 q̂1/n1 + p̂2 q̂2/n2 ).                         (2.1)

This is an approximate interval, based on using a normal curve approximation. Minitab will not evaluate this formula for us.

For hypothesis testing, there are several possible approaches. The null hypothesis is p1 = p2; there are three possible alternatives, obtained by replacing '=' in the null hypothesis by >, <, or ≠. An exact P-value can be obtained by using the hypergeometric distribution. This distribution is not in Minitab. (See me if you want a macro for Version 9.) Approximate probabilities can be obtained by a normal or chi-squared approximation. The normal approximation can be written two ways. First, as z = x/σ, where x = p̂1 − p̂2 and

    σ = sqrt( m1 m2 / (n1 n2 (n − 1)) ).

This expression can be rewritten as

    z = sqrt(n − 1) (ad − bc) / sqrt(n1 n2 m1 m2).

Some people modify z slightly and use

    z' = sqrt(n) (ad − bc) / sqrt(n1 n2 m1 m2).

Finally, others use χ² = (z')². If z or z' is used, P-values are obtained from the standard normal curve. Literally, χ² can be used only for the alternative ≠, and the P-value is obtained by using the chi-squared curve with one degree of freedom. Minitab presents the analysis only for χ².

Exercise 2 on page 252 of Wardrop presents the following data.

                Response
Study factor     S      F   Total
Level 1         46     66    112
Level 2         30     99    129
Total           76    165    241

Below is a Minitab analysis of these data.

read c1 c2
46 66
30 99

end
chis c1 c2

Expected counts are printed below observed counts

          C1      C2   Total
 1        46      66     112
        35.3    76.7
 2        30      99     129
        40.7    88.3
Total     76     165     241

ChiSq = 3.230 + 1.488 +
        2.804 + 1.292 = 8.813
df = 1

cdf 8.813;
chis 1.
   8.8130    0.9970
subt 0.9970 1 k1
ANSWER = 0.0030
let k2=sqrt(8.813)
cdf k2 k3
subt k3 1 k4
ANSWER = 0.0015
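For readers working outside Minitab, the same chi-squared statistic and approximate P-values can be checked with a short Python sketch (not part of the original notes; scipy assumed).

import math
from scipy import stats

a, b, c, d = 46, 66, 30, 99
n1, n2 = a + b, c + d
m1, m2 = a + c, b + d
n = n1 + n2

# Chi-squared test without a continuity correction (matches the "chis" output above).
chi2, p_chi2, df, expected = stats.chi2_contingency([[a, b], [c, d]], correction=False)
print(chi2, p_chi2)                                   # about 8.813 and 0.0030

# The normal-approximation statistic z' and its one-sided P-value.
z_prime = math.sqrt(n) * (a * d - b * c) / math.sqrt(n1 * n2 * m1 * m2)
print(z_prime, z_prime ** 2)                          # about 2.97; its square is the chi-squared value
print(1 - stats.norm.cdf(z_prime))                    # about 0.0015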

2.3 Numerical response, independent samples
I am “jumping over” multicategory responses, ordered or not, and proceeding to numerical responses. We will return to multicategory responses soon. The most commonly used procedures compare the populations by comparing their means. For reference, see Sections 16.1 and 16.2 of Wardrop. In fact, the presentation below is a compression of the ideas in Section 16.2.

We assume that we have independent random samples from two populations. The first population has (unknown) mean µ1 and standard deviation σ1. The second population has (unknown) mean µ2 and standard deviation σ2. The data from the first population are denoted y1,1, y1,2, y1,3, ..., y1,n1, and are summarized by their mean ȳ1· and standard deviation s1. Similarly, data from the second population are denoted y2,1, y2,2, y2,3, ..., y2,n2, and are summarized by their mean ȳ2· and standard deviation s2. (Most authors suppress the comma in the subscript, and I might forget and do that too on occasion. I like the commas for later work, when we need to know whether, for example, y111 is the 11th observation from the first sample or the first observation from the 11th sample.)

Attention focuses on the difference µ1 − µ2, which is estimated by ȳ1· − ȳ2·. For inference, we will need the sampling distribution of this estimate. Some basic results in mathematical statistics indicate that the standardized form of the estimate is

    W = [(Ȳ1· − Ȳ2·) − (µ1 − µ2)] / sqrt( σ1²/n1 + σ2²/n2 ).

The mathematical problem arises in trying to deal with the unknown σ's in the denominator. The first approach is to assume that they are equal and to estimate them by sp, where

    sp² = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2).

Next, we substitute this estimate into W to yield W1,

    W1 = [(Ȳ1· − Ȳ2·) − (µ1 − µ2)] / [sp sqrt(1/n1 + 1/n2)].

If one assumes that the two populations are normal pdfs, then probabilities for W1 can be obtained from the t distribution with (n1 + n2 − 2) degrees of freedom.
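As a check on the algebra, here is a rough Python sketch (not part of the original notes; numpy and scipy assumed) that computes sp and W1 directly from the formulas above. For the Wardrop data analyzed next, it should reproduce T = 3.75 with 18 degrees of freedom.

import numpy as np
from scipy import stats

def pooled_t(y1, y2):
    # Pooled two-sample t statistic W1, its degrees of freedom, and a two-sided P-value.
    y1, y2 = np.asarray(y1, float), np.asarray(y2, float)
    n1, n2 = len(y1), len(y2)
    sp2 = ((n1 - 1) * y1.var(ddof=1) + (n2 - 1) * y2.var(ddof=1)) / (n1 + n2 - 2)
    w1 = (y1.mean() - y2.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    p = 2 * stats.t.sf(abs(w1), df)
    return w1, df, p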

The following example is taken from exercise 1 on page 401 of Wardrop.

set c1
321 323 329 330 331 332 337 337 343 347
end
set c2
301 315 316 317 321 321 323 3(327)
end
desc c1 c2

       N    MEAN  MEDIAN  STDEV
C1    10   333.0  331.50   8.18
C2    10   319.5  321.00   7.93

twos c1 c2;
pool.

TWOSAMPLE T FOR C1 VS C2
     N    MEAN  STDEV
C1  10  333.00   8.18
C2  10  319.50   7.93

95 PCT CI FOR MU C1 - MU C2: ( 5.9, 21.1)

TTEST MU C1 = MU C2 (VS NE): T= 3.75  P=0.0015  DF= 18

POOLED STDEV = 8.06

The second approach is to make no assumption about the two standard deviations; simply estimate each population standard deviation by its corresponding sample standard deviation. Making this change to W, we get W2,

    W2 = [(Ȳ1· − Ȳ2·) − (µ1 − µ2)] / sqrt( s1²/n1 + s2²/n2 ).

Assuming normal pdfs, in this case, does not solve the problem. With normal pdfs, the sampling distribution of W2 can be approximated by, but does not equal, a t distribution. There are different opinions about the degrees of freedom in the approximating distribution. Minitab uses a horrendously messy formula for the degrees of freedom, but since we don't need to evaluate it, the fact that it is horrendous is no problem. (See formulas 16.6 and 16.7 on page 592 of Wardrop if you want to see it!) The above data will be reanalyzed under this second situation.

twos c1 c2

TWOSAMPLE T FOR C1 VS C2
     N    MEAN  STDEV
C1  10  333.00   8.18
C2  10  319.50   7.93

95 PCT CI FOR MU C1 - MU C2: ( 5.9, 21.1)

TTEST MU C1 = MU C2 (VS NE): T= 3.75  P=0.0016  DF= 17

The only difference in the two analyses is that the latter has 17 degrees of freedom, while the former has 18. The values of T are identical because, for a balanced study, W1 = W2. It is instructive to consider some artificial data.

twos c11 c12

TWOSAMPLE T FOR C11 VS C12
      N   MEAN  STDEV
C11  10  50.00   9.40
C12  10  45.00   9.40

95 PCT CI FOR MU C11 - MU C12: ( -3.8, 13.8)

TTEST MU C11 = MU C12 (VS NE): T= 1.19  P=0.25  DF= 18

Contrast this output with the following four.

twos c11 c12

TWOSAMPLE T FOR C11 VS C12
      N   MEAN  STDEV
C11  10  50.00   9.40
C12  20  45.00   9.40

95 PCT CI FOR MU C11 - MU C12: ( -2.7, 12.7)

TTEST MU C11 = MU C12 (VS NE): T= 1.37  P=0.19  DF= 18

twos c11 c12

TWOSAMPLE T FOR C11 VS C12
      N   MEAN  STDEV
C11  10  50.00   1.00
C12  10  45.00    100

95 PCT CI FOR MU C11 - MU C12: ( -66.55, 77)

TTEST MU C11 = MU C12 (VS NE): T= 0.16  P=0.88  DF= 9

twos c11 c12

TWOSAMPLE T FOR C11 VS C12
      N   MEAN  STDEV
C11  10  50.00   1.00
C12  20  45.00    100

95 PCT CI FOR MU C11 - MU C12: ( -41.83, 52)

TTEST MU C11 = MU C12 (VS NE): T= 0.22  P=0.83  DF= 19

twos c11 c12

TWOSAMPLE T FOR C11 VS C12
      N   MEAN  STDEV
C11  10  50.00    100
C12  20  45.00   1.00

95 PCT CI FOR MU C11 - MU C12: ( -67, 76.55)

TTEST MU C11 = MU C12 (VS NE): T= 0.16  P=0.88  DF= 9

I want to remark on a dumb, but increasingly popular, approach. The suggestion is to use the t distribution with r − 1 degrees of freedom, where r is the minimum of n1 and n2. The main virtue of this method is that we avoid having to calculate the d.f. with the horrendous formula; but if it is done by computer, what is the problem?

Finally, if n1 and n2 are both large and you must analyze the data by hand, you might as well use the standard normal curve for reference instead of bothering with calculating the degrees of freedom. By the “minimum” approach in the previous paragraph, if each sample size is 30 (or more), then we know that we have at least 29 (or more) d.f. As a result, we might be willing to use the standard normal curve instead of the t curve.

I want to explore the issue of robustness for the above procedures. I performed a simulation study with 1000 runs. For each run I selected independent random samples with n1 = n2 = 10 from exponential(1) pdfs. For each pair of random samples I calculated two 95% confidence intervals for µ1 − µ2: one with pooling and one without. The results were virtually identical and very close to what one would expect for normal pdfs. In particular, when pooling, 42 intervals were incorrect (4.2%) and the mean width of the intervals was 1.800. When not pooling, 38 intervals were incorrect (3.8%) and the mean width of the intervals was 1.838.

I now want to address a strange property of the above procedures. Recall the earlier data from page 401 of Wardrop. Let us now suppose that the largest observation from the first population, 347, is replaced by 357. This increases the mean of the first sample by one and clearly has no effect on the second sample. Thus, we have evidence that µ1 is even larger (compared to the evidence in the original data) and no evidence about µ2. Thus, it seems “logical” that our estimate of µ1 − µ2 should “increase,” and certainly not decrease. But look at the analysis below.

twos c1 c2;
pool.

TWOSAMPLE T FOR C1 VS C2
     N    MEAN  STDEV
C1  10   334.0   10.4
C2  10  319.50   7.93

95 PCT CI FOR MU C1 - MU C2: ( 5.8, 23.2)

TTEST MU C1 = MU C2 (VS NE): T= 3.51  P=0.0025  DF= 18

POOLED STDEV = 9.25

Earlier, the lower bound for the confidence interval was 5.9; now it has decreased to 5.8! This is very strange!
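Here is a rough Python sketch (not part of the original notes; scipy assumed) that reproduces the pooled and unpooled analyses and verifies this behavior: replacing 347 by 357 increases s1, and hence sp, so the margin of error grows faster than the estimated difference and the lower bound drops.

import numpy as np
from scipy import stats

c1 = np.array([321, 323, 329, 330, 331, 332, 337, 337, 343, 347], float)
c2 = np.array([301, 315, 316, 317, 321, 321, 323, 327, 327, 327], float)

print(stats.ttest_ind(c1, c2, equal_var=True))    # pooled: t about 3.75
print(stats.ttest_ind(c1, c2, equal_var=False))   # unpooled (Welch): same t here

def pooled_ci(y1, y2, conf=0.95):
    # Pooled two-sample confidence interval for mu1 - mu2.
    n1, n2 = len(y1), len(y2)
    sp2 = ((n1 - 1) * y1.var(ddof=1) + (n2 - 1) * y2.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    tcrit = stats.t.ppf(0.5 + conf / 2, n1 + n2 - 2)
    diff = y1.mean() - y2.mean()
    return diff - tcrit * se, diff + tcrit * se

print(pooled_ci(c1, c2))        # roughly (5.9, 21.1)
c1_new = c1.copy()
c1_new[-1] = 357.0              # replace 347 by 357
print(pooled_ci(c1_new, c2))    # lower bound drops to roughly 5.8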

The same phenomenon occurs without pooling, as shown below. In this case, the lower bound decreases from 5.9 to 5.7.

twos c1 c2

TWOSAMPLE T FOR C1 VS C2
     N    MEAN  STDEV
C1  10   334.0   10.4
C2  10  319.50   7.93

95 PCT CI FOR MU C1 - MU C2: ( 5.7, 23.3)

TTEST MU C1 = MU C2 (VS NE): T= 3.51  P=0.0029  DF= 16

The Mann-Whitney-Wilcoxon procedure is an alternative to the above procedures. It assumes that the pdfs differ by a shift; see the picture in class. Mann-Whitney (Wilcoxon is usually suppressed to avoid confusion with the one-sample procedure) is a generalization of the normal case with equal standard deviations. (Discuss.) The idea behind Mann-Whitney will be illustrated with a small set of artificial data.

Sample 1:  8   9  12  15
Sample 2:  4   7   9  13

The data are combined into one set and sorted, and ranks are assigned to the overall data, as below.

Data:   4  7  8    9    9  12  13  15
Ranks:  1  2  3  4.5  4.5   6   7   8

Note that tied values are given mean ranks. The test statistic is the sum of the ranks of the data in the first sample; for these data it is

    W = 3 + 4.5 + 6 + 8 = 21.5.

I put the above data into c3 and c4 and ran the following Minitab command.

mann c3 c4

Mann-Whitney Confidence Interval and Test

C3    N =  4    Median =  10.5
C4    N =  4    Median =   8.0
Point estimate for ETA1-ETA2 is 2.5
97.0 pct c.i. for ETA1-ETA2 is (-5.000,11.000)
W = 21.5
Test of ETA1 = ETA2 vs. ETA1 n.e. ETA2 is significant at 0.3865
The test is significant at 0.3836 (adjusted for ties)
Cannot reject at alpha = 0.05

I also ran this command for the earlier data in c1 and c2.

mann c1 c2

Mann-Whitney Confidence Interval and Test

C1    N = 10    Median =  331.5
C2    N = 10    Median =  321.0
Point estimate for ETA1-ETA2 is 13.0
95.5 pct c.i. for ETA1-ETA2 is (5.00,21.00)
W = 146.5
Test of ETA1 = ETA2 vs. ETA1 n.e. ETA2 is significant at 0.0019
The test is significant at 0.0019 (adjusted for ties)

I replaced the largest observation in the first sample, 347, by 999. The analysis is below.

mann c1 c2

Mann-Whitney Confidence Interval and Test

C1    N = 10    Median =  331.5
C2    N = 10    Median =  321.0
Point estimate for ETA1-ETA2 is 13.0
95.5 pct c.i. for ETA1-ETA2 is (5.0,22.0)
W = 146.5
Test of ETA1 = ETA2 vs. ETA1 n.e. ETA2 is significant at 0.0019
The test is significant at 0.0019 (adjusted for ties)
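A rough Python sketch of the same procedure (not part of the original notes; scipy assumed). Note that scipy reports the Mann-Whitney U statistic rather than the rank sum W printed by Minitab; for the first sample, W = U + n1(n1 + 1)/2.

import numpy as np
from scipy import stats

c1 = np.array([321, 323, 329, 330, 331, 332, 337, 337, 343, 347], float)
c2 = np.array([301, 315, 316, 317, 321, 321, 323, 327, 327, 327], float)

res = stats.mannwhitneyu(c1, c2, alternative="two-sided")
print(res.pvalue)                       # close to 0.0019
print(res.statistic + 10 * 11 / 2)      # rank sum for c1, close to 146.5

# Replacing 347 by 999 barely changes the ranks, so the result is nearly unchanged.
c1_out = c1.copy()
c1_out[-1] = 999.0
print(stats.mannwhitneyu(c1_out, c2, alternative="two-sided").pvalue)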

2.4 Paired data
Paired data arise in two ways:

• Subdividing (or reusing) units, or

• Matching similar units.

Below are some examples of subdividing units.

1. The classic “before and after” studies, in which a response is obtained before and after some event (diet, exercise, training, etc.). Note that these studies are observational; i.e., there is no randomization.

2. We want to compare two brands of tires to see how they wear on the front wheels of front-wheel-drive cars. Each car is given one tire of each brand for its front. For each car the location (left or right) is assigned at random to the brand.

Regarding the second example, if we have, say, 20 cars for study and we randomize, we might end up with, say, Brand A being on 12 left wheels and 8 right. If we decide to force these two numbers to be identical (10 each in this example), we get what is called a cross-over design, which I don't plan to cover in these notes.

Below are some examples of matching similar units.

1. Sixty students are available for a comparison of two teaching materials. Students are paired based on some criterion (IQ, background in area, GPA, etc.). In each of the 30 pairs, students are assigned to material by randomization.

2. This example is invalid, as demonstrated later in these notes, but is popular and is advocated in some introductory texts. Two classes have 30 students each. Class 1 will use teaching material A, and class 2 will use teaching material B. Students are paired across classes (i.e., each student in Class 1 is paired with a student in Class 2). This is invalid. Matching similar units is valid only if there is randomization.

2.4.1 Dichotomous response

Read Section 8.5 of Wardrop.

2.4.2 Numerical response

The standard approach is to calculate differences and then use a one sample procedure. Page 405 of Wardrop presents data on 25-yard backstroke and breaststroke times. Below are the first ten pairs; see Wardrop for the complete listing of the data.

Pair:     1     2     3     4     5     6     7     8     9    10
Bk.    40.0  39.5  39.5  41.0  39.0  38.0  38.5  38.5  39.0  39.5
Br.    37.0  37.5  37.5  37.0  38.0  38.0  38.5  40.5  39.0  39.0
Diff.   3.0   2.0   2.0   4.0   1.0   0.0   0.0  −2.0   0.0   0.5
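As an aside (not part of the original notes), the paired analysis of just these ten pairs can be sketched in Python (scipy assumed); the notes go on to analyze all 25 differences in Minitab.

import numpy as np
from scipy import stats

back   = np.array([40.0, 39.5, 39.5, 41.0, 39.0, 38.0, 38.5, 38.5, 39.0, 39.5])
breast = np.array([37.0, 37.5, 37.5, 37.0, 38.0, 38.0, 38.5, 40.5, 39.0, 39.0])
diff = back - breast

print(diff)                           # 3.0, 2.0, 2.0, 4.0, 1.0, 0.0, 0.0, -2.0, 0.0, 0.5
print(stats.ttest_1samp(diff, 0.0))   # one-sample t on the differences
print(stats.ttest_rel(back, breast))  # equivalent paired t-test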

The sorted 25 differences are printed and analyzed below.

prin c1

C1
  -3.5   0.5   1.5   2.0   2.5
  -2.0   0.5   1.5   2.0   2.5
   0.0   1.0   1.5   2.0   3.0
   0.0   1.0   1.5   2.0   4.0
   0.0   1.0   2.0   2.5   7.0

tint c1

      N  MEAN  STDEV  95.0 % C.I.
C1   25  1.44  1.933  (0.64, 2.24)

ttest c1

TEST OF MU = 0 VS MU N.E. 0

      N  MEAN  STDEV     T  P VALUE
C1   25  1.44  1.933  3.73   0.0011

sint c1

SIGN CONFIDENCE INTERVAL FOR MEDIAN

      N  MEDIAN  ACHIEVED CONF  CONF INT    POSITION
C1   25   1.500          0.892  (1.0, 2.0)         9
                         0.950  (1.0, 2.0)       NLI
                         0.957  (1.0, 2.0)         8

stest c1

SIGN TEST OF MED = 0 VS N.E. 0

      N  BELOW  EQUAL  ABOVE  P-VALUE
C1   25      2      3     20   0.0001

wint c1

      N  EST MED  CONF  CONF INT
C1   25     1.50  95.0  (1.0, 2.0)

wtest c1

TEST OF MED = 0 VS MED N.E. 0

      N  N FOR WILC  TEST STAT  P-VALUE  EST MED
C1   25          22      220.5    0.002      1.5

I now want to suggest and investigate an inappropriate way to analyze data. I will do this via computer simulation. Suppose that I have independent random samples of size 10 each from two standard normal pdfs. For example, I generated such data on Minitab and got the results below.

Sample 1
 0.30  −0.22  −0.74   0.05   0.79
−1.70   1.15   0.02  −1.85  −0.02

Sample 2
−0.45   0.40  −1.09   0.24  −1.97
−0.83   0.41  −0.91   1.63  −0.39

Now, let's sort each sample.

Sample 1, Sorted
−1.85  −1.70  −0.74  −0.22  −0.02
 0.02   0.05   0.30   0.79   1.15

Sample 2, Sorted
−1.97  −1.09  −0.91  −0.83  −0.45
−0.39   0.24   0.40   0.41   1.63

Now, let's pair the sorted data, matching the smallest values in each set, the second smallest values, and so on. Then, after pairing, subtract the values in the second sample from the corresponding values in the first sample. The 10 differences are below.

Differences of Sorted Data
 0.12  −0.61   0.17   0.61   0.43
 0.41  −0.19  −0.10   0.38  −0.48

I constructed two 95% confidence intervals for the difference of the means. Using the two independent samples, pooled estimate of variance (an appropriate analysis), I obtained [−0.85, 1.00]. Using the differences of the sorted data, I obtained [−0.22, 0.37]. We will see that this latter analysis is incorrect. At this point, however, the latter analysis looks superior: both intervals are correct (they contain 0) and the second interval is much more precise.

I repeated the above steps 1,000 times. For each pair of samples I constructed the 95% confidence interval (pooled) for the difference of the means. Of these intervals, 945 (94.5%) were correct. This is as expected. But for each pair of samples I also sorted the data and formed pairs of the sorted data. Then I calculated differences. Using the one sample t procedure, 517 (51.7%) of the intervals were correct! This horrible performance demonstrates that the pairing is not valid!
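To make the simulation concrete, here is a rough Python sketch (not part of the original notes; numpy and scipy assumed) of the same 1,000-run experiment. The exact counts will differ from run to run, but the pooled interval should cover the true difference about 95% of the time, while the sorted-and-paired interval covers it only about half the time.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
runs, n = 1000, 10
ok_pooled = ok_sorted = 0

for _ in range(runs):
    y1, y2 = rng.standard_normal(n), rng.standard_normal(n)

    # Pooled two-sample interval for mu1 - mu2 (the true value is 0).
    sp2 = ((n - 1) * y1.var(ddof=1) + (n - 1) * y2.var(ddof=1)) / (2 * n - 2)
    se = np.sqrt(sp2 * 2 / n)
    t = stats.t.ppf(0.975, 2 * n - 2)
    d = y1.mean() - y2.mean()
    ok_pooled += (d - t * se <= 0 <= d + t * se)

    # Invalid analysis: sort each sample, pair the sorted values, use a one-sample t.
    diff = np.sort(y1) - np.sort(y2)
    se_d = diff.std(ddof=1) / np.sqrt(n)
    t_d = stats.t.ppf(0.975, n - 1)
    ok_sorted += (diff.mean() - t_d * se_d <= 0 <= diff.mean() + t_d * se_d)

print(ok_pooled / runs, ok_sorted / runs)   # about 0.95 and about 0.5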
