
Foundations of Research 1
Statistics: Testing Hypotheses; the critical ratio

Click "slide show" to start this presentation as a show.
Remember: focus & think about each point; do not just passively click.

Cranach, Tree of Knowledge [of Good and Evil] (1472)

© Dr. David J. McKirnan, 2014
The University of Illinois Chicago
McKirnanUIC@gmail.com
Do not use or reproduce without permission
Foundations of Research 2
Statistics module series

1. Introduction to statistics & number scales
2. The Z score and the normal distribution
3. The logic of research; Plato's Allegory of the Cave
4. Testing hypotheses: The critical ratio  ← You are here
5. Calculating a t score
6. Testing t: The Central Limit Theorem
7. Correlations: Measures of association

[Bar chart: rates of substance use (alcohol, marijuana, other drugs, drugs + sex) by group: African-Am., n = 430; Latino, n = 130; White, n = 183.]


Foundations of Research 3
Evaluating data

Here we will see how to use Z scores to evaluate data, and will introduce the concept of the critical ratio.

• Using Z scores to evaluate data
• Testing hypotheses: the critical ratio
Foundations of Research 4
Using Z scores

Modules 2 & 3 introduced key statistical concepts:

• Individual scores (X) on a variable,
• The Mean (M) of a set of scores,
• The Standard Deviation (S), reflecting the variance of scores around that Mean or average,
• The Z score; a measure of how far a score is above or below the Mean, divided by the Standard Deviation:

   Z = (X – M) / S

• Z is a basic form of Critical Ratio.
• Now we will talk about using the critical ratio in statistical decision making.
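To make the formula concrete, here is a minimal Python sketch (not part of the original slides); the function name and example values are illustrative only.

def z_score(x, m, s):
    # Z = (X - M) / S: how far a score sits from the mean, in standard-deviation units.
    return (x - m) / s

# Illustrative values (the same ones used on the next slides):
print(z_score(5.2, 4, 1.15))   # ~1.04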
Foundations of Research 5
Using Z to evaluate data

Z is at the core of how we use statistics to evaluate data.
Z indicates how far a score is from the M relative to the other scores in the sample.

Z combines…
• A score
• The M of all scores in the sample
• The variance in scores above and below M.
Foundations of Research 6
Using Z to evaluate data

Z is at the core of how we use statistics to evaluate data.
Z indicates how far a score is from the M relative to the other scores in the sample.

So…
• If X (an observed score) = 5.2
• And M (the Mean score) = 4, then X – M = 1.2
• If S (the Standard Deviation of all scores in the sample) = 1.15…
Foundations of Research 7
Using Z to evaluate data

Z is at the core of how we use statistics to evaluate data.
Z indicates how far a score is from the M relative to the other scores in the sample.

So…
• If X = 5.2 and M = 4, then X – M = 1.2
• If S = 1.15, the Z for our score is about 1 (+):

   Z = (X – M) / S = (5.2 – 4) / 1.15 = 1.2 / 1.15 ≈ 1.04
Foundations of Research 8
Using Z to evaluate data

Z is at the core of how we use statistics to evaluate data.
Z indicates how far a score is from the M relative to the other scores in the sample.

So…
• If X = 5.2, M = 4, and S = 1.15, then X – M = 1.2 and Z ≈ 1 (+).
• This tells us that our score is higher than ~84% of the other scores in the distribution.
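That ~84% figure comes from the standard normal distribution; as a quick check (a sketch assuming scipy is available; not part of the slides):

from scipy.stats import norm

# Proportion of a normal distribution falling below Z = 1:
print(norm.cdf(1.0))   # ~0.841, i.e. ~84% of scores fall below Z = 1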
Foundations of Research 9
Using Z to evaluate data

Z is at the core of how we use statistics to evaluate data.
Z indicates how far a score is from the M relative to the other scores in the sample.

• This tells us that our score is higher than ~84% of the other scores in the distribution.
• Unlike simple measurement with a ratio scale, where a value – e.g. < 32° – has an absolute meaning…
• …inferential statistics evaluates a score relative to a distribution of scores.


Foundations of Research 10
Z scores: areas under the normal curve, 2

• 50% of the scores in a distribution are above the M [Z = 0]; 50% of scores are below the M.
• Above the M: 34.13% of the distribution + 13.59% + 2.25%… etc.

[Normal curve: 34.13% of cases fall between the M and ±1 Z, 13.59% between ±1 and ±2, 2.25% beyond ±2. X-axis: Z scores (standard deviation units), -3 to +3.]
Foundations of Research 11
Z scores: areas under the normal curve, 2

• 84% of scores are below Z = 1 (one standard deviation above the Mean):
   34.13% + 34.13% + 13.59% + 2.25%… ≈ 84%.

[Normal curve with the area below Z = +1 shaded. X-axis: Z scores (standard deviation units), -3 to +3.]
Foundations of Research 12
Z scores: areas under the normal curve, 2

• 84% of scores are above Z = -1 (one standard deviation below the Mean).

[Normal curve with the area above Z = -1 shaded. X-axis: Z scores (standard deviation units), -3 to +3.]
Foundations of Research 13
Z scores: areas under the normal curve, 2

• 98% of scores are less than Z = 2 (two standard deviations above the mean):
   13.59% + 34.13% + 34.13% + 13.59% + 2.25%… ≈ 98%.

[Normal curve with the area below Z = +2 shaded. X-axis: Z scores (standard deviation units), -3 to +3.]
Foundations of Research 14
Z scores: areas under the normal curve, 2

• 98% of scores are above Z = -2 (two standard deviations below the mean).

[Normal curve with the area above Z = -2 shaded. X-axis: Z scores (standard deviation units), -3 to +3.]
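The percentages on the last few slides can be verified directly from the standard normal distribution; this sketch (scipy assumed, not part of the slides) reproduces them:

from scipy.stats import norm

print(norm.cdf(1) - norm.cdf(0))   # ~0.3413: M to +1 SD
print(norm.cdf(2) - norm.cdf(1))   # ~0.1359: +1 to +2 SD
print(norm.cdf(1))                 # ~0.84: scores below Z = +1
print(norm.cdf(2))                 # ~0.98: scores below Z = +2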
Foundations of Research 15
Evaluating Individual Scores

How good is a score of '6' in the group described in…
• Table 1?
• Table 2?

[Histograms of the two distributions, scale values 0–8.]

Evaluate in terms of:
A. The distance of the score from the M.
B. The variance in the rest of the sample.
C. Your criterion for a "significantly good" score.
Foundations of Research 16
Using the normal distribution, 2

A. The distance of the score from the M:
   The participant is 2 units above the mean in both tables.

B. The variance in the rest of the sample:
   Since Table 1 has more variance, a given score is not as good relative to the rest of the scores.

Table 1, high variance:
   X – M = 6 – 4 = 2
   Standard Deviation (S) = 2.4
   Z = (X – M) / S = 2 / 2.4 ≈ 0.83
   About 70% of participants are below this Z score.

Table 2, low(er) variance:
   X – M = 6 – 4 = 2
   Standard Deviation (S) = 1.15
   Z = (X – M) / S = 2 / 1.15 ≈ 1.74
   About 90% of participants are below this Z score.
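A quick check of those two ratios in code (a sketch using only the values given above):

def z_score(x, m, s):
    return (x - m) / s

print(z_score(6, 4, 2.4))    # ~0.83 (Table 1, high variance)
print(z_score(6, 4, 1.15))   # ~1.74 (Table 2, lower variance)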
Foundations of Research 17
Comparing Scores: deviation x variance

High variance (S = 2.4):
   '6' is not that high compared to the rest of the distribution.

Less variance (S = 1.15):
   Here '6' is the highest score in the distribution.

[Histograms of the two distributions, scale values 0–8.]
Foundations of Research 18
Normal distribution; high variance

Table 1, high variance:
   Z = (X – M) / S = 2 / 2.4 ≈ 0.83
   About 70% of participants are below this Z score.

[Normal curve with the area below Z ≈ 0.83 shaded: "about 70% of cases". X-axis: Z scores (standard deviation units), -3 to +3.]
Foundations of Research 19
Normal distribution; low variance

Table 2, low(er) variance:
   Z = (X – M) / S = 2 / 1.15 ≈ 1.74
   About 90% of participants are below this Z score.

[Normal curve with the area below Z = 1.74 shaded: "about 90% of cases". X-axis: Z scores (standard deviation units), -3 to +3.]
Foundations of Research 20
Evaluating scores using Z

C. Criterion for a "significantly good" score.

   X = 6, M = 4, S = 2.4  → Z ≈ 0.83
   X = 6, M = 4, S = 1.15 → Z ≈ 1.74

If a "good" score is one that is better than 90% of the sample…
• …with high variance, '6' is not so good,
• with less variance, '6' is > 90% of the rest of the sample.

[Normal curve marking the areas below the two Z scores: about 70% of cases vs. about 90% of cases.]
Foundations of Research 21
Summary: evaluating individual scores

How "good" is a score of '6' in two groups?

A. The distance of the score from the M:
   In both groups X = 6 & M = 4, so X – M = 2.
B. The variance in the rest of the sample:
   One group has low variance and one has higher variance.
C. Criterion for a "significantly good" score:
   What % of the sample must the score be higher than?
Foundations of Research 22
Using Z to standardize scores

• Z scores (or standard deviation units) standardize scores by putting them on a common scale.
• We want to combine two measures of social distance between racial groups.
• For one, we measure how far someone stands from a member of a different group, ranging from 0 to 36 inches.
• For the other, we give an attitude scale ranging from 1 ("Not distant at all") to 9 ("Very distant").
• We cannot simply combine these measures; one has much higher raw scale values (ranging to 36) than the other (up to 9)…
• …meaning they have very different Means and Standard Deviations.
• To combine them we must put them on the same scale; we can change the raw values to Z scores, which will each have M = 0 and S = 1.
• Any scores can be translated into Z scores for comparison, as the sketch below illustrates.
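A minimal sketch of that standardization step (numpy assumed; the arrays and variable names are hypothetical, not from the slides):

import numpy as np

# Hypothetical raw scores on the two social-distance measures:
distance_inches = np.array([4.0, 10.0, 18.0, 24.0, 36.0])   # 0-36 inch measure
attitude_scale  = np.array([2.0, 3.0, 5.0, 7.0, 9.0])       # 1-9 attitude measure

def to_z(scores):
    # Standardize: subtract the mean, divide by the standard deviation -> M = 0, S = 1.
    return (scores - scores.mean()) / scores.std()

# Once both are Z-scored they share a scale and can be combined (here, averaged):
combined_distance = (to_z(distance_inches) + to_z(attitude_scale)) / 2
print(combined_distance)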
Foundations of Research 23
Using Z to standardize scores, cont.

• Which is "faster": a 2:03:00 marathon, or a 4-minute mile?

• We cannot directly compare these scores because they are on different scales.
• One is measured in hours & minutes, one in 10ths of a second.
• We can use Z scores to change each scale to a common metric, where M = 0 and S = 1.
• Z scores can be compared, since they are standardized by being relative to the larger population of scores.
Foundations of Research 24
Comparing Zs

Here is a (made up) distribution of world-class marathon times. Using the Z formula, we can turn the raw scores into Z scores, which have M = 0 and S = 1.

[Distribution of marathon times (raw scores) from 2:50 to 2:10, with the corresponding Z scores (standard deviation units) from -4 to +4.]
Foundations of Research 25
Comparing Zs

We can do the same thing with our mile times.

[Distribution of mile times from 4:30 to 3:45, with the corresponding Z scores (standard deviation units) from -3 to +3.]
Foundations of Research 26

By putting them on the same scale – Z scores (Standard Deviations) – we can directly compare Marathon v. Mile times.

• A 4-minute mile is not extreme: lots of people run faster. It translates to Z = 1.
• A 2:03 marathon is very fast: only a few people have run it that fast. Here, Z = 4.

[Mile-time and marathon-time distributions side by side, each with its own Z-score (standard deviation unit) axis.]
Foundations of Research 27

Standardizing Marathon & Mile times allows us to directly compare them…
We simply compare their Z scores to find that a 2:03 marathon (Z = 4) is a good deal "faster" than a 4-minute mile (Z = 1).

[Mile-time and marathon-time distributions with the two scores marked at Z = 1 and Z = 4.]
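The same comparison in code (a sketch; the means and standard deviations below are invented for illustration, since the slide's distributions are themselves made up):

def z_score(x, m, s):
    return (x - m) / s

# Hypothetical means and SDs, in minutes. Faster times fall *below* the mean,
# so we flip the sign to express "how fast" as a positive Z, as the slides do.
mile_z     = -z_score(4.0,   m=4.17,  s=0.17)    # 4:00 mile        -> Z of about +1
marathon_z = -z_score(123.0, m=150.0, s=6.75)    # 2:03:00 marathon -> Z of about +4

print(mile_z, marathon_z)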
Foundations of Research 28
The critical ratio

• Using Z scores to evaluate data
• Testing hypotheses: the critical ratio

Click for nebular vs. catastrophic hypotheses about the origin of the solar system (David Darling, Encyclopedia of Science).

[Illustration of the nebular hypothesis]


Foundations of Research 29
Using statistics to test hypotheses

Core concept:
• No scientific finding is "absolutely" true.
• Any effect is probabilistic:
   We use empirical data to infer how the world works.
   We evaluate inferences by how likely the effect would be to occur by chance.
• We use the normal distribution to help us determine how likely an experimental outcome would be by chance alone.
Foundations of Research 30
Probabilities & Statistical Hypothesis Testing

Null Hypothesis: All scores differ from the M by chance alone.

Scientific observations are "innocent until proven guilty":
• If we compare two groups, or test how far a score is from the mean, the odds of their differing by chance alone are always greater than 0.
• We cannot just take any result and call it meaningful, since any result may be due to chance, not the Independent Variable.
• So, we assume any result is due to chance unless it is strong enough to be unlikely to occur randomly.
Foundations of Research 31
Probabilities & Statistical Hypothesis Testing

Null Hypothesis: All scores differ from the M by chance alone.
Alternate (experimental) hypothesis: This score differs from M by more than we would expect by chance…

Using the Normal Distribution:
• More extreme scores have a lower probability of occurring by chance alone.
• Z tells us the % of cases above or below the observed score.
• A high Z score may be "extreme" enough for us to reject the null hypothesis.
Foundations of Research 32
"Statistical significance"

Statistical Significance
• We assume a score with less than a 5% probability of occurring by chance (i.e., one higher or lower than 95% of the other scores… p < .05) is not due to chance alone.
• Z beyond ±1.98 occurs less than 5% of the time (p < .05).
• If Z > 1.98 we consider the score to be "significantly" different from the mean.
• To test if an effect is "statistically significant":
   Compute a Z score for the effect.
   Compare it to the critical value for p < .05: ±1.98.
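A sketch of that decision rule in code (scipy assumed; note the slides use ±1.98 as the cutoff, while the exact two-tailed critical value for a Z test is ±1.96):

from scipy.stats import norm

def is_significant(z, critical=1.98):
    # Two-tailed decision rule: reject the null hypothesis if |Z| exceeds the critical value.
    return abs(z) > critical

z = 2.3                                 # an illustrative result
p_two_tailed = 2 * norm.sf(abs(z))      # chance of a result at least this extreme
print(is_significant(z), p_two_tailed)  # True, p of about .02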
Foundations of Research 33
Statistical significance & areas under the normal curve

In a hypothetical distribution:
• 2.4% of cases are higher than Z = +1.98.
• 2.4% of cases are lower than Z = -1.98.
• Thus, Z > +1.98 or < -1.98 will occur < 5% of the time by chance alone.

With Z > +1.98 or < -1.98 we reject the null hypothesis & assume the results are not by chance alone.

[Normal curve: 95% of cases between Z = -1.98 and Z = +1.98, with 2.4% of cases in each tail. X-axis: Z scores (standard deviation units).]
Foundations of Research 34
Evaluating Research Questions

Data: One participant's score
Statistical question: Does this score differ from the M for the group by more than chance?

Data: The mean for a group
Statistical question: Does this M differ from the M for the general population by more than chance?

Data: Means for 2 or more groups
Statistical question: Is the difference between these Means more than we would expect by chance -- more than the M difference between any 2 randomly selected groups?

Data: Scores on two measured variables
Statistical question: Is the correlation ('r') between these variables more than we would expect by chance -- more than between any two randomly selected variables?
Foundations of Research 35
Critical ratio

• To estimate the influence of chance we weight our results by the overall amount of variance in the data.
• In "noisy" data (a lot of error variance) we need a very strong result to conclude that it was unlikely to have occurred by chance alone.
• In very "clean" data (low variance) even a weak result may be statistically significant.
• This is the Critical Ratio:

   Critical ratio = The strength of the results (experimental effect)
                    / Amount of error variance ("noise" in the data)
Foundations of Research 36
Critical ratio

Z is a basic critical ratio:

   Z = Distance of the score from the mean (strength of the experimental result)
       / Standard Deviation (error variance or "noise" in the data)

• In our example the two samples had equally strong scores (X – M = 2).
• …but they differed in the amount of variance in the distribution of scores.
• Weighting the effect – X – M – in each sample by its variance [S] yielded different Z scores: .83 v. 1.74.
• This led us to different judgments of how likely each result would be to have occurred by chance.
Foundations of Research 37
Applying the critical ratio to an experiment

   Critical Ratio = Treatment Difference / Random Variance (Chance)

• In an experiment the Treatment Difference is the variance between the experimental and control groups.
• Random variance reflects chance differences among participants within each group.
• We evaluate that result by comparing it to a distribution of possible effects.
• We estimate the distribution of possible effects based on the degrees of freedom ("df").

(We will get to these last 2 points in the next modules.)
Foundations of Research 38
Examples of Critical Ratios

• Z score = (Individual score – M for group) / Standard Deviation (S) for group
          = (X – M) / S

• t-test = Difference between group Ms / Standard Error of the Mean
         = (Mgroup1 – Mgroup2) / sqrt(Variance_grp1 / n_grp1 + Variance_grp2 / n_grp2)
         (see the sketch after this list)

• F ratio = Between-group differences (differences among 3 or more group Ms)
          / Within-group differences (random variance among participants within groups)

• r (correlation) = Association between variables (joint Z scores summed across participants: Σ(Z_variable1 × Z_variable2))
                  / Random variance between participants within variables
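For the t-test row, a minimal sketch of the computation as written above (the data are hypothetical; in practice a library routine such as scipy.stats.ttest_ind does the same kind of work):

import math

def t_ratio(group1, group2):
    # t = (M1 - M2) / sqrt(var1/n1 + var2/n2): the critical-ratio form shown above.
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    return (m1 - m2) / math.sqrt(var1 / n1 + var2 / n2)

control      = [4, 5, 6, 5, 4, 6]   # hypothetical scores
experimental = [6, 7, 8, 7, 6, 8]
print(t_ratio(experimental, control))   # ~3.9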
Foundations of Research 39
Quiz 2

Where would Z or t have to fall for you to consider your results "statistically significant"? (Choose a color.)

[Normal curve with colored regions corresponding to the options below.]

A.
B.
C.
D.
F.
Foundations of Research 40
Quiz 2

Where would Z or t have to fall for you to consider your results "statistically significant"? (Choose a color.)

• Both of these are correct.
• A Z or t score greater than +1.98 or less than -1.98 is considered significant.
• This means that the result would occur < 5% of the time by chance alone (p < .05).

A.
B.
C.
D.
F.
Foundations of Research 41
Quiz 2

Where would Z or t have to fall for you to consider your results "statistically significant"? (Choose a color.)

• This value would also be statistically significant…
• …it exceeds the .05 criterion we usually use, so it is a more conservative standard.

A.
B.
C.
D.
F.
Foundations of Research 42
The t-Test

Let's apply the critical ratio to an experiment.

t-test: are the Ms of two groups statistically different from each other?

• In any experiment the Ms will differ at least a little.
• Does the difference we observe reflect "reality"? … i.e., is it really due to the independent variable?
• Statistically: is the difference between Ms more than we would expect by chance alone?

[Bar graph: Control Group M vs. Experimental Group M.]
Foundations of Research 43
M differences and the Critical Ratio

The critical ratio applied to the t test:

   Critical Ratio = Difference between the Ms for the two groups (experimental effect)
                    / Variability within groups (error variance)

• The numerator is the variance between groups: Mgroup1 vs. Mgroup2.
• The denominator is what we would expect by chance, given the within-group variance of the control and experimental groups.

[Figure: control-group and experimental-group distributions, showing the between-group M difference and the within-group variance of each group.]
Foundations of Research 44
M differences and the Critical Ratio

[Figure: control-group and experimental-group distributions; 'a' marks the difference between Mgroup1 and Mgroup2, 'b' marks the within-group variability of each group.]


Foundations of Research 45
The Critical Ratio in action

All three graphs have an equal difference between groups; they differ in variance within groups.
The critical ratio helps us determine which one(s) represent a statistically significant difference.

[Three graphs with the same M difference: low variance, medium variance, high variance.]
Foundations of Research 46
Clickers!

Which one(s) represent a statistically significant difference?

A = All of them
B = Low variance only
C = Medium variance
D = High variance
E = None of them
Foundations of Research 47
Critical ratio and variances, 1

Critical ratio:
• Gets larger as the variance(s) decrease, given the same M difference…
Foundations of Research 48
Critical ratio and variances, 2

Critical ratio:
• …also gets larger as the M difference increases, even with the same variance(s). The sketch below illustrates both patterns.
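A small numerical illustration of those two statements (all numbers are hypothetical):

import math

def critical_ratio(m_diff, var1, var2, n1, n2):
    # Experimental effect over error: (M1 - M2) / sqrt(var1/n1 + var2/n2)
    return m_diff / math.sqrt(var1 / n1 + var2 / n2)

# Same M difference, smaller variance -> larger critical ratio:
print(critical_ratio(2, 4.0, 4.0, 20, 20))   # ~3.2
print(critical_ratio(2, 1.0, 1.0, 20, 20))   # ~6.3

# Same variance, larger M difference -> larger critical ratio:
print(critical_ratio(1, 2.0, 2.0, 20, 20))   # ~2.2
print(critical_ratio(3, 2.0, 2.0, 20, 20))   # ~6.7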
Foundations of Research 49
What do we estimate: experimental effect

The numerator of the critical ratio – the experimental effect – is the difference between group Ms.
The M difference (between control & experimental groups) is the same in both data sets.

[Two graphs with identical control vs. experimental M differences.]
Foundations of Research 50
What do we estimate: error term

The denominator – the error variance – is the variability within groups.
The variances differ a lot in the two examples.

[Two graphs: low variability vs. high variability within groups.]


Foundations of Research 51
Assigning numbers to the critical ratio: numerator

   Experimental effect / Error variance = Difference between group Ms / Variability within groups

   Numerator: (Mgroup1 – Mgroup2) – 0
   (the 0 is the difference we would expect under the null hypothesis)

[Two graphs: low variability vs. high variability within groups.]


Foundations of Research 52
Assigning numbers to the critical ratio: denominator

   Experimental effect / Error variance = Difference between Ms / Variability within groups

   Denominator (standard error) = sqrt(Variance_grp1 / n_grp1 + Variance_grp2 / n_grp2)

   So: t = (Mgroup1 – Mgroup2) / sqrt(Variance_grp1 / n_grp1 + Variance_grp2 / n_grp2)

[Two graphs: low variability vs. high variability within groups.]


Foundations of Research 53
Critical ratio

• The experimental effect is "adjusted" by the variance.
• This yields a score: Z, t, r, etc.
   Positive: grp1 > grp2
   …or negative: grp1 < grp2.

• Any critical ratio [CR] is likely to differ from 0 by chance alone.
   Even in "junk" data two groups may differ.
   We cannot simply test whether Z or t is "absolutely" different from 0.

• We evaluate whether the CR is greater than what we would expect by chance alone.
Foundations of Research 54
When is a critical ratio "statistically significant"?

• A large CR is likely not due to chance alone – it probably reflects a "real" experimental effect.
   The difference between groups is very large relative to the error (within-group) variance.

• A very small CR is almost certainly just error.
   Any difference between groups is not distinguishable from error or simple chance: group differences may not be due to the experimental condition (Independent Variable).

• A mid-size CR? How large must it be for us to assume it did not occur just by chance?

• We answer this by comparing it to a (hypothetical) distribution of possible CRs.
Foundations of Research 55
Distributions of Critical Ratios

➔ Imagine you perform the same experiment 100 times.
• You randomly select a group of people.
• You randomly assign ½ to the experimental group, ½ to the control group.
• You run the experiment, get the data, and analyze it using the critical ratio:

   t = (Mgroup1 – Mgroup2) / sqrt(Variance_grp1 / n_grp1 + Variance_grp2 / n_grp2)
Foundations of Research 56
Distributions of Critical Ratios

➔ Imagine you perform the same experiment 100 times.
• Then… you do the same experiment again, with another random sample of people…
• And get a critical ratio (t score) for those results:

   t = (Mgroup1 – Mgroup2) / sqrt(Variance_grp1 / n_grp1 + Variance_grp2 / n_grp2)
Foundations of Research 57
Distributions of Critical Ratios

➔ Imagine you perform the same experiment 100 times.
• And you get yet another sample…
• And get a critical ratio (t score) for those results:

   t = (Mgroup1 – Mgroup2) / sqrt(Variance_grp1 / n_grp1 + Variance_grp2 / n_grp2)
Foundations of Research 58
Distributions of Critical Ratios

➔ Each time you (hypothetically) run the experiment you generate a critical ratio (CR).
• For a simple 2-group experiment the CR is a t ratio.
• It could just as easily be a Z score, an F ratio, an r…
➔ These Critical Ratios form a distribution. This is called a Sampling Distribution.

[Histogram of the accumulated CRs, centered on 0, along an axis of critical ratios (Z, t…) from -3 to +3.]
Foundations of Research 59
Distributions of Critical Ratios

➔ Imagine you perform the same experiment 100 times.
➔ Each experiment generates a critical ratio [Z score, t ratio…].
➔ These Critical Ratios form a distribution. This is called a Sampling Distribution.

• Most Critical Ratios will cluster around '0' (M = 0).
• Progressively fewer are greater or less than 0.
• With more observations the sampling distribution becomes "normal".
• More extreme scores are unlikely to occur by chance alone.

Null hypothesis: there is no real effect, so any CR above or below 0 is by chance alone.

[Histogram of CRs forming a normal shape around 0, along an axis of critical ratios (Z score, t, …) from -3 to +3.]
Foundations of Research 60
Distributions of Critical Ratios

This is called a Sampling Distribution.
• Most Critical Ratios will cluster around '0' (M = 0).
• More extreme scores are unlikely to occur by chance alone.
• If a critical ratio is larger than we would expect by chance alone, we reject the Null Hypothesis and accept that there is a real effect.

[Histogram of CRs forming a normal shape around 0, along an axis of critical ratios (Z score, t, …) from -3 to +3.]
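The sampling-distribution idea can be simulated directly; here is a sketch (numpy assumed, all numbers illustrative) that "runs the experiment" many times when the null hypothesis is true and collects the resulting t ratios:

import numpy as np

rng = np.random.default_rng(0)

def t_ratio(g1, g2):
    return (g1.mean() - g2.mean()) / np.sqrt(g1.var(ddof=1) / len(g1) + g2.var(ddof=1) / len(g2))

# Null hypothesis is true here: both "groups" come from the same population.
ts = [t_ratio(rng.normal(0, 1, 30), rng.normal(0, 1, 30)) for _ in range(1000)]

print(np.mean(ts))                  # clusters around 0
print(np.mean(np.abs(ts) > 1.98))   # only about 5% of CRs exceed +/-1.98 by chance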
Foundations of Research 61
Distributions

This is the distribution of raw scores for an exam:
   M = 34.5
   S (Standard Deviation) = 8.5

(Statistics Introduction 2.)
Foundations of Research 62
Distributions of Critical Ratios

What are the odds that these scores occurred by chance alone?

Here we have taken the raw scores and converted them to Z scores.
Z scores are standardized:
   Mean, median & mode = .00
   Standard Deviation (S) = 1.0
Z scores are a form of Critical Ratio.

(Statistics Introduction 2.)
Foundations of Research 63
Distributions of Critical Ratios

How about these scores?

Here are the same scores, shown as Z scores.
Z scores are a form of Critical Ratio.
They are standardized: Mean, median, mode = .00; Standard Deviation (S) = 1.0.
Foundations of Research 64
Distributions of Critical Ratios

After we conduct our experiment and get a result (a critical ratio or t score), our question is:
• What are the odds that these results are due to chance alone?

[Sampling distribution of CRs, running from larger negative CRs through 0 to larger positive CRs.]
Foundations of Research 65
Distributions & inference

We infer statistical significance by locating a score along the normal distribution.
• A score can be:
   An individual score ('X'),
   A group M,
   A Critical Ratio such as a Z or t score.
• More extreme scores are less likely to occur by chance alone.

[Normal sampling distribution: scores become progressively less likely the farther they fall from the M of the sampling distribution.]


Foundations of Research 66
Statistical significance & areas under the normal curve, 1

• A Z or t score that exceeds ±1.98 would occur by chance alone less than 5% of the time.
• The probability of a critical ratio beyond ±1.98 is low enough [p < .05] that it likely indicates a "real" experimental effect.
• We then reject the Null Hypothesis.

[Normal curve: 95% of cases between t = -1.98 and t = +1.98, with < 2.4% of cases in each tail. X-axis: Z or t scores (standard deviation units).]
Foundations of Research 67
Statistical significance & areas under the normal curve, 2

• A Z or t score that exceeds ±1.0 would occur by chance alone about 32% of the time.
• The probability of Z = 1 occurring by chance is too high for us to conclude that the results are "real" (i.e., "statistically significant").
• We then accept the Null Hypothesis and assume that any effect is by chance alone.

[Normal curve: about 68% of cases between Z = -1.0 and Z = +1.0; each tail beyond ±1.0 occurs about 16% of the time by chance. X-axis: Z scores (standard deviation units).]
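A quick check of those percentages (a sketch, scipy assumed):

from scipy.stats import norm

print(2 * norm.sf(1.0))                # ~0.32: |Z| > 1 happens about 32% of the time by chance
print(norm.cdf(1.0) - norm.cdf(-1.0))  # ~0.68: about 68% of cases fall between Z = -1 and +1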
Foundations of Research 68
Summary

• Statistical decisions follow the critical ratio.

• Z is the prototype critical ratio:

   Z = Distance of the score (X) from the mean (M)
       / Variance among all the scores in the sample [the standard deviation (S)]
     = (X – M) / S

• t is also a basic critical ratio, used for comparing groups:

   t = Difference between group Means
       / Variance within the two groups [the standard error of the M (SE)]
     = (M1 – M2) / sqrt(Variance_grp1 / n_grp1 + Variance_grp2 / n_grp2)


Foundations of Research 69
The critical ratio

The next modules will:
• discuss the logic of scientific (statistical) reasoning,
• show you how to derive a t value.
