Standard Error of the Difference Between Means: Reliability of Differences Between Correlated Means (N>30)

Reliability of Differences Between Correlated Means (N>30)

Reasons for Getting Correlated Means
1. The participant serves as his or her own control (repeated measures).
2. Participants are matched on a task-relevant variable.
3. Participants are identical twins or litter mates.

Reliability of Differences Between Correlated Means (N>30)

Standard Error of the Difference Between Correlated Means (N>30):

$S_{D_{\bar X}} = \sqrt{S_{\bar X_1}^2 + S_{\bar X_2}^2 - 2\,r\,S_{\bar X_1} S_{\bar X_2}}$

$z = \dfrac{D_{\bar X}}{S_{D_{\bar X}}} = \dfrac{\bar X_1 - \bar X_2}{\sqrt{S_{\bar X_1}^2 + S_{\bar X_2}^2 - 2\,r\,S_{\bar X_1} S_{\bar X_2}}}$

If we use this formula, it is necessary to compute the correlation between the two sets of paired scores... (Since there is an alternative, nobody uses this formula!)
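To make the arithmetic concrete, here is a minimal Python sketch of this correlation-based formula. The paired scores are made up for illustration (they are not from the slides), and with only 8 pairs the snippet only illustrates the computation, not the N>30 requirement.

```python
import math

# Hypothetical paired scores (not from the slides), e.g. the same
# participants measured under two conditions.
x1 = [12, 15, 11, 14, 13, 16, 12, 15]
x2 = [10, 14, 12, 13, 11, 15, 10, 13]
n = len(x1)

mean1 = sum(x1) / n
mean2 = sum(x2) / n

# Sample variances and standard deviations (n - 1 in the denominator).
var1 = sum((x - mean1) ** 2 for x in x1) / (n - 1)
var2 = sum((x - mean2) ** 2 for x in x2) / (n - 1)
s1, s2 = math.sqrt(var1), math.sqrt(var2)

# Pearson correlation between the two sets of paired scores.
cov = sum((a - mean1) * (b - mean2) for a, b in zip(x1, x2)) / (n - 1)
r = cov / (s1 * s2)

# Standard error of each mean, then the standard error of the difference
# between correlated means: sqrt(SE1^2 + SE2^2 - 2 r SE1 SE2).
se1, se2 = s1 / math.sqrt(n), s2 / math.sqrt(n)
se_diff = math.sqrt(se1 ** 2 + se2 ** 2 - 2 * r * se1 * se2)

z = (mean1 - mean2) / se_diff
print(f"r = {r:.3f}, SE_diff = {se_diff:.3f}, z = {z:.3f}")
```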

Reliability of Differences Between Correlated Means (N>30)

The preceding formula is appropriate and works, but it is rarely if ever used:
- It requires computation of the correlation between the paired scores.
- There is too much room for error.
- There is an easier way! (Of course, the easier way is the one people use.)

We take advantage of the fact that the mean of the differences is the same as the difference between the means:

$\bar X_1 - \bar X_2 = \bar D = \dfrac{\sum D}{N}$

$S_{\bar D} = \sqrt{\dfrac{N\sum D^2 - \left(\sum D\right)^2}{N^2\,(N-1)}}$

$z = \dfrac{\bar D}{S_{\bar D}}$
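A matching sketch of the direct-difference method, using the same made-up paired scores as the previous snippet, so you can verify that it produces exactly the same z without ever computing r.

```python
import math

# Same hypothetical paired scores as above (not from the slides).
x1 = [12, 15, 11, 14, 13, 16, 12, 15]
x2 = [10, 14, 12, 13, 11, 15, 10, 13]

# Work directly with the difference scores D = X1 - X2.
d = [a - b for a, b in zip(x1, x2)]
n = len(d)

mean_d = sum(d) / n                       # same as mean(x1) - mean(x2)
sum_d = sum(d)
sum_d2 = sum(v * v for v in d)

# SE of the mean difference: sqrt((N*sum(D^2) - (sum D)^2) / (N^2 (N - 1))).
se_mean_d = math.sqrt((n * sum_d2 - sum_d ** 2) / (n ** 2 * (n - 1)))

z = mean_d / se_mean_d
print(f"mean D = {mean_d:.3f}, SE = {se_mean_d:.3f}, z = {z:.3f}")
# This z matches the correlation-formula z from the previous sketch,
# without ever computing r.
```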

Reliability of Differences Between Correlated Means (N>30)

Example Illustrating Computation: a study to evaluate two golf balls (Regular/New).

Pro:         1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 | Sum | Mean
Regular:     3  5  4  6  5  4  2  5  3  4  5  2  3  5  5  4  2  3  4  5  3  2  5  6  4 |  99 | 3.96
New:         3  5  2  6  4  5  3  2  3  2  2  2  4  4  6  3  2  3  2  2  3  3  2  2  3 |  78 | 3.12
Difference:  0  0  2  0  1 -1 -1  3  0  2  3  0 -1  1 -1  1  0  0  2  3  0 -1  3  4  1 |  21 | 0.84

$S_{\bar D} = \sqrt{\dfrac{N\sum D^2 - \left(\sum D\right)^2}{N^2\,(N-1)}} = \sqrt{\dfrac{25(73) - 441}{625(24)}} = 0.304$

$z = \dfrac{\bar D}{S_{\bar D}} = \dfrac{0.84}{0.304} = 2.76$*   (*Significant beyond the .05 level)

Reliability of Differences Between Correlated Means (N>30)

When we look up z = 2.76 in the normal curve table, we find:
- P = .0029 (1-tailed)
- P = .0058 (2-tailed)
  - To get the 2-tailed value we double the 1-tailed value (the additive theorem); this works because the z table is a 1-tailed table.

Value of z required for the .01 level:
- 1-tailed: ± 2.32
- 2-tailed: ± 2.57

Spend some time learning how to use the normal curve table.
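Here is a short sketch that reproduces the golf-ball computation from the table above; scipy is assumed only for the normal-curve lookup. Depending on where you round, z comes out as 2.76 or 2.77.

```python
import math
from scipy.stats import norm  # used only for the normal-curve lookup

regular = [3, 5, 4, 6, 5, 4, 2, 5, 3, 4, 5, 2, 3, 5, 5, 4, 2, 3, 4, 5, 3, 2, 5, 6, 4]
new     = [3, 5, 2, 6, 4, 5, 3, 2, 3, 2, 2, 2, 4, 4, 6, 3, 2, 3, 2, 2, 3, 3, 2, 2, 3]

d = [r - n_ for r, n_ in zip(regular, new)]
N = len(d)                                  # 25 pros
sum_d, sum_d2 = sum(d), sum(v * v for v in d)

mean_d = sum_d / N                          # 21 / 25 = 0.84
se = math.sqrt((N * sum_d2 - sum_d ** 2) / (N ** 2 * (N - 1)))   # ~0.304
z = mean_d / se                             # about 2.76 (2.77 if nothing is rounded)

p_one_tailed = norm.sf(z)                   # close to the slide's .0029
p_two_tailed = 2 * p_one_tailed             # double the 1-tailed value
print(f"z = {z:.3f}, one-tailed p = {p_one_tailed:.4f}, two-tailed p = {p_two_tailed:.4f}")
```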

Reliability of Differences Between Correlated Means (N<30)

We use the t test when we have small samples.
- Differences between means will be normally distributed about the actual mean difference of zero (0) if we draw samples randomly from the same population.
- As the sample gets smaller, the distribution of differences (the sampling distribution) becomes less and less normal.
  - This departure starts occurring before N = 30, but until we get that small it doesn't make much difference.
  - For samples of less than 30, the departure from normality is so great that a special set of curves has been developed to describe these distributions.
  - The tables for these curves were developed by William Gosset, who published under the name "Student".
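A small sketch (scipy assumed) showing how the required two-tailed .05 critical value drifts away from the normal-curve value of 1.96 as the sample shrinks, which is why Gosset's special tables are needed.

```python
from scipy.stats import norm, t

# Two-tailed .05 critical values: the normal curve vs. Student's t
# for progressively smaller samples (df = N - 1 for correlated means).
print("normal:", round(norm.ppf(0.975), 3))          # ~1.96
for df in (100, 30, 15, 5):
    print(f"t, df={df}:", round(t.ppf(0.975, df), 3))
# The t values climb above 1.96 as df shrinks, so the normal curve
# table is no longer a good description of the sampling distribution.
```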

Reliability of Differences Between Uncorrelated Means (N<30)

Standard Error of the Difference Between Uncorrelated Means (N<30):

$t = \dfrac{D_{\bar X}}{S_{D_{\bar X}}} = \dfrac{\bar X_1 - \bar X_2}{\sqrt{\left(\dfrac{\sum x_1^2 + \sum x_2^2}{n_1 + n_2 - 2}\right)\left(\dfrac{n_1 + n_2}{n_1 n_2}\right)}}$

This is the same principle as the z test.
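A minimal sketch of this pooled-variance formula as a reusable function; the function and variable names are mine, not from the slides.

```python
import math

def pooled_t(group1, group2):
    """t for uncorrelated means using the pooled-variance standard error."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    ss1 = sum((x - m1) ** 2 for x in group1)     # sum of squared deviations, group 1
    ss2 = sum((x - m2) ** 2 for x in group2)     # sum of squared deviations, group 2
    se = math.sqrt(((ss1 + ss2) / (n1 + n2 - 2)) * ((n1 + n2) / (n1 * n2)))
    return (m1 - m2) / se, n1 + n2 - 2           # t and its degrees of freedom
```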

Reliability of Differences Between Uncorrelated Means (N<30)

Computational Example: We do a study to determine whether those who have had a course in calculus will do better in psychological statistics. Students are grouped according to whether or not they have had calculus.

$t = \dfrac{\bar X_1 - \bar X_2}{\sqrt{\left(\dfrac{\sum x_1^2 + \sum x_2^2}{n_1 + n_2 - 2}\right)\left(\dfrac{n_1 + n_2}{n_1 n_2}\right)}}$

Now all we have to do is plug in the numbers!

No Calc (X1): 103  98 106  71 108 120 150  79  93 101 113  95 139  94 106  92 100    (N = 17, Mean = 104)
Calc (X2):    137 151 131 133 115 110 139 124  94 123  90 135 104                    (N = 13, Mean = 122)

x1 (deviations):  -1  -6   2  -33   4  16   46  -25  -11  -3   9  -9   35  -10   2  -12  -4
x1 squared:        1  36   4 1089  16 256 2116  625  121   9  81  81 1225  100   4  144  16    (Sum = 5924)
x2 (deviations):  15  29   9   11  -7 -12   17    2  -28   1 -32  13  -18
x2 squared:      225 841  81  121  49 144  289    4  784   1 1024 169 324                      (Sum = 4056)

$t = \dfrac{122 - 104}{\sqrt{\left(\dfrac{5924 + 4056}{17 + 13 - 2}\right)\left(\dfrac{17 + 13}{(17)(13)}\right)}} = 2.56$

For df = n1 + n2 - 2 = 28:  t(.05) = 2.048,  t(.01) = 2.763

Thus our t is significant at the .05 level of significance.
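The same computation redone from the raw scores, with scipy assumed only for the critical values. The unrounded t comes out near 2.59 rather than the slide's reported 2.56, but the conclusion is unchanged: significant at the .05 level, not at the .01 level.

```python
import math
from scipy.stats import t as t_dist  # used only for the critical values

no_calc = [103, 98, 106, 71, 108, 120, 150, 79, 93, 101, 113, 95, 139, 94, 106, 92, 100]
calc    = [137, 151, 131, 133, 115, 110, 139, 124, 94, 123, 90, 135, 104]

n1, n2 = len(no_calc), len(calc)            # 17 and 13
m1, m2 = sum(no_calc) / n1, sum(calc) / n2  # 104 and 122
ss1 = sum((x - m1) ** 2 for x in no_calc)   # 5924
ss2 = sum((x - m2) ** 2 for x in calc)      # 4056

se = math.sqrt(((ss1 + ss2) / (n1 + n2 - 2)) * ((n1 + n2) / (n1 * n2)))
t = (m2 - m1) / se                          # near 2.59 (the slide reports 2.56)
df = n1 + n2 - 2                            # 28

crit_05 = t_dist.ppf(0.975, df)             # ~2.048
crit_01 = t_dist.ppf(0.995, df)             # ~2.763
print(f"t = {t:.2f}, df = {df}; .05 critical = {crit_05:.3f}, .01 critical = {crit_01:.3f}")
```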

05.01/2 to get the one (1) tailed value. To determine the equivalent 1 tailed value we divide . We divide it by 2 and look up .005)  One (1) tailed table *z tables ± Divide desired level by 2 and look up value for that value. (. the more leptokurtic the sampling distribution becomes  t tables are usually 2 tailed tables (often they list both 1 and 2 tailed value)  Two (2) tailed table ± Look up a given value.025 .  We want .Reliability of Differences Between Uncorrelated Means (N<30)  Some Miscellaneous Points  The smaller the sample.
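A tiny sketch of the divide-by-2 and double-it bookkeeping using the normal curve (scipy assumed); the same idea applies when reading 1- versus 2-tailed t tables.

```python
from scipy.stats import norm

alpha_two_tailed = 0.05
# With a one-tailed (z) table, look up alpha/2 in one tail:
z_crit = norm.ppf(1 - alpha_two_tailed / 2)      # look up .025 -> ~1.96

# Going the other way, a one-tailed p is doubled to get the two-tailed p.
p_one = norm.sf(1.96)                            # ~0.025
p_two = 2 * p_one                                # ~0.05
print(round(z_crit, 3), round(p_one, 4), round(p_two, 4))
```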

Reliability of Differences Between Correlated Means (N<30)

Standard Error of the Difference Between Correlated Means:

$t = \dfrac{\bar D}{S_{\bar D}}$, where $\bar D = \dfrac{\sum D}{N}$ and $S_{\bar D} = \sqrt{\dfrac{N\sum D^2 - \left(\sum D\right)^2}{N^2\,(N-1)}}$

df = N - 1

This is the same formula as we use for z for correlated means. N = number of matched participants (pairs).
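A small sketch of the correlated-means t with made-up matched scores (not from the slides); scipy's paired t test is used only as a cross-check of the hand formula.

```python
import math
from scipy.stats import ttest_rel  # scipy's paired t test, used as a cross-check

# Hypothetical matched scores for 8 pairs (not from the slides).
pre  = [24, 19, 27, 22, 25, 21, 23, 26]
post = [21, 20, 24, 22, 23, 19, 22, 23]

d = [a - b for a, b in zip(pre, post)]
N = len(d)
sum_d, sum_d2 = sum(d), sum(v * v for v in d)

mean_d = sum_d / N
se = math.sqrt((N * sum_d2 - sum_d ** 2) / (N ** 2 * (N - 1)))
t = mean_d / se
print(f"t = {t:.3f}, df = {N - 1}")

print(ttest_rel(pre, post))   # should report the same t statistic
```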

Reliability of Differences Between Correlated Means (N<30)

Concept of Degrees of Freedom
- Refers to the freedom of scores to vary.
- Until this point, each score had some freedom to vary or take on different values.
- This is NOT true of the last score in a set whose sum is fixed.
  - Example: scores with Sum = 16 and Mean = 4 (N = 4). Three of the scores are free to vary, but the fourth is FIXED: it must be whatever value brings the total to 16 (here, 4).
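A two-line illustration of the constraint; the three free values are chosen arbitrarily for the demonstration (they are not the slide's numbers).

```python
# Three scores are free to vary, but with the sum fixed at 16
# (mean of 4 over N = 4 scores) the last one is determined.
required_sum = 16
free_scores = [8, 4, 0]          # any three values we like (hypothetical)
last_score = required_sum - sum(free_scores)
print(last_score)                # 4 -- it has no freedom to vary
```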

Reliability of Differences Between Uncorrelated Means (N<30)

Comparison of df for Correlated and Uncorrelated Means
- Correlated (matched) design, 4 pairs of scores:   df = N - 1 = 3
- Uncorrelated design, two groups of 4 scores:      df = n1 + n2 - 2 = 6
- With fewer df, it takes a larger t to be significant.
- You can see that if we don't reduce variance by matching, we are better off using uncorrelated groups.
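A quick check of that claim (scipy assumed): the two-tailed .05 critical value for df = 3 versus df = 6.

```python
from scipy.stats import t

# Matched design with 4 pairs: df = N - 1 = 3.
# Unmatched design with two groups of 4: df = n1 + n2 - 2 = 6.
for df in (3, 6):
    print(f"df = {df}: two-tailed .05 critical t = {t.ppf(0.975, df):.3f}")
# df = 3 needs t of about 3.182, while df = 6 needs only about 2.447,
# so with fewer df it takes a larger t to reach significance.
```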

Comparison of Formulas for the z-test and t-test

$t = \dfrac{\bar X_1 - \bar X_2}{\sqrt{\left(\dfrac{\sum x_1^2 + \sum x_2^2}{n_1 + n_2 - 2}\right)\left(\dfrac{n_1 + n_2}{n_1 n_2}\right)}}$

$z = \dfrac{\bar X_1 - \bar X_2}{\sqrt{S_{\bar X_1}^2 + S_{\bar X_2}^2}}$

We see that t and z are exactly the same except that t pools the variance before computing the standard error, whereas in the case of z the variance is pooled after the standard error is computed.
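A short sketch contrasting the two standard errors on the same made-up groups (not from the slides): t pools the sums of squares first, while z combines the two separately computed standard errors.

```python
import math

# Hypothetical independent groups (not from the slides).
g1 = [11, 14, 13, 15, 12, 16, 13, 14]
g2 = [10, 12, 11, 13, 12, 11, 10, 12]
n1, n2 = len(g1), len(g2)
m1, m2 = sum(g1) / n1, sum(g2) / n2
ss1 = sum((x - m1) ** 2 for x in g1)
ss2 = sum((x - m2) ** 2 for x in g2)

# t: pool the sums of squares first, then build the standard error.
se_t = math.sqrt(((ss1 + ss2) / (n1 + n2 - 2)) * ((n1 + n2) / (n1 * n2)))

# z: build each mean's standard error separately, then combine them.
se_z = math.sqrt(ss1 / (n1 - 1) / n1 + ss2 / (n2 - 1) / n2)

print(round((m1 - m2) / se_t, 3), round((m1 - m2) / se_z, 3))
```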

Hypothesis Testing (t and z)

Assumptions for t and z tests (you really should learn these!):
1. Populations are normally distributed.
2. The population variances are equal.
3. All other assumptions for parametric tests:
   a) Interval level of measurement
   b) Random sampling from the population

Summary of Steps Used to Test the Null Hypothesis
- State the Null Hypothesis and the associated Research Hypothesis.
- Choose a statistical test with its associated statistical model.
  - We base our choice on level of measurement, assumptions, etc.
  - Decide whether to do a parametric test.

Summary of Steps Used to Test the Null Hypothesis (Continued)
- State the Null Hypothesis and the associated Research Hypothesis.
- Choose a statistical test with its associated statistical model.
- Specify a level of confidence (α).

Summary of Steps Used to Test the Null Hypothesis (Continued)
- State the Null Hypothesis and the associated Research Hypothesis.
- Choose a statistical test with its associated statistical model.
- Specify a level of confidence (α) and set the sample size.
  - Alpha (Type I error): we set this.
  - Beta (Type II error): a result of sample size, alpha, and the size of the standard error.
  - Power of the test: a function of alpha, beta, variance, and N.

Summary of Steps Used to Test the Null Hypothesis (Continued)
- State the Null Hypothesis and the associated Research Hypothesis.
- Choose a statistical test with its associated statistical model.
- Specify a level of confidence (α) and set the sample size.
- Find or assume the sampling distribution.
- Define the region of rejection for H0.
- Compute the value of the test.
- Make the decision.
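As a wrap-up, here is a sketch of the whole sequence applied to made-up data (not from the slides), using scipy's independent-groups t test for the sampling distribution, test value, and p-value.

```python
from scipy.stats import ttest_ind

# 1. H0: the two population means are equal; H1: they differ.
group_a = [12, 15, 11, 14, 13, 16, 12, 15]   # hypothetical scores
group_b = [10, 12, 11, 13, 12, 11, 10, 12]

# 2. Choose the test: independent-groups t (interval data, random samples).
# 3. Level of confidence: alpha = .05, two-tailed; sample sizes already set.
alpha = 0.05

# 4-6. The t distribution with n1 + n2 - 2 df is the sampling distribution;
#      scipy computes the test value and the two-tailed p directly.
result = ttest_ind(group_a, group_b)

# 7. Make the decision.
decision = "reject H0" if result.pvalue < alpha else "fail to reject H0"
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f} -> {decision}")
```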

THE END!
