Statistical Evaluation of Data
Chapter 15
Descriptive / inferential
• Descriptive statistics are methods that help
researchers organize, summarize, and simplify
the results obtained from research studies.
Statistic / parameter
• A summary value that describes a sample is
called a statistic (e.g., M = 25, s = 2).
• A summary value that describes a population
is called a parameter.
Frequency Distributions
One method of simplifying and organizing a set
of scores is to group them into a single display
that shows the entire set.
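As a concrete sketch, a frequency distribution can be tabulated in a few lines of Python (the scores below are hypothetical, chosen only for illustration):

```python
from collections import Counter

# Hypothetical set of quiz scores
scores = [3, 5, 4, 5, 2, 5, 4, 3, 5, 4]

# Tally how often each score occurs
frequencies = Counter(scores)

# Display the distribution from highest score to lowest
for score in sorted(frequencies, reverse=True):
    print(f"X = {score}  f = {frequencies[score]}")
```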
Histogram & Polygon
Bar Graphs
Central tendency
The goal of central tendency is to identify the
value that is most typical or most representative
of the entire group.
Central tendency
• The mean is the arithmetic average.
• The median measures central tendency by
identifying the score that divides the
distribution in half.
• The mode is the most frequently occurring
score in the distribution.
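All three measures can be computed with Python's standard library; a minimal sketch with made-up scores:

```python
import statistics

# Hypothetical sample of scores
scores = [2, 3, 3, 4, 5, 5, 5, 7]

mean = statistics.mean(scores)      # arithmetic average
median = statistics.median(scores)  # score that divides the distribution in half
mode = statistics.mode(scores)      # most frequently occurring score

print(mean, median, mode)
```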
Variability
Variability is a measure of the spread of scores in
a distribution.
Example: Total (ΣX) = 60, Mean (M) = 6, SS = 70
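The usual measures of variability (SS, variance, standard deviation) follow directly from their definitions. In the sketch below, the scores are hypothetical but were chosen so the total is 60 (M = 6) and SS = 70, matching the slide's example values:

```python
# Hypothetical sample of n = 10 scores: total = 60, so M = 6
scores = [2, 3, 3, 5, 6, 6, 7, 9, 9, 10]

n = len(scores)
mean = sum(scores) / n

# SS: sum of squared deviations from the mean
ss = sum((x - mean) ** 2 for x in scores)

variance = ss / (n - 1)    # sample variance
std_dev = variance ** 0.5  # standard deviation

print(f"M = {mean}, SS = {ss}, s^2 = {variance:.2f}")
```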
Non-numerical Data
Proportion or percentage in each category.
For example,
• 43% prefer the Democratic candidate,
• 28% prefer the Republican candidate,
• 29% are undecided.
Correlations
A correlation is a statistical value that measures
and describes the direction and degree of
relationship between two variables.
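Pearson's r can be computed directly from its definitional formula (the sum of products of deviations divided by the square root of SSx times SSy); a sketch with hypothetical paired scores:

```python
# Hypothetical paired scores for variables X and Y
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Sum of products of deviations, and each variable's SS
sp = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
ss_x = sum((xi - mean_x) ** 2 for xi in x)
ss_y = sum((yi - mean_y) ** 2 for yi in y)

# Pearson correlation: sign gives direction, magnitude gives degree
r = sp / (ss_x * ss_y) ** 0.5
print(f"r = {r:.3f}")
```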
Types of correlation
Regression
• Whenever a linear relationship exists, it is
possible to compute the equation for the
straight line that provides the best fit for the
data points.
• The process of finding the linear equation is
called regression, and the resulting equation is
called the regression equation.
Where is the regression line?
[Scatterplot of STRENGTH (70-120) against WEIGHT (140-220)]
Which one is the regression line?
[Scatterplot of STRENGTH (70-120) against WEIGHT (140-220), with several candidate lines drawn through the points]
Regression equation
All linear equations have the same general
structure and can be expressed as
• Y = bX + a (for example, Y = 2X + 1)
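The least-squares slope and intercept can be computed directly from the data. In the sketch below, the hypothetical points fall exactly on the slide's example line Y = 2X + 1:

```python
# Hypothetical paired scores, lying exactly on Y = 2X + 1
x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Least-squares slope and intercept for the best-fit line Y = bX + a
b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
    sum((xi - mean_x) ** 2 for xi in x)
a = mean_y - b * mean_x

print(f"Y = {b}X + {a}")
```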
standardized form
• Often the regression equation is reported in
standardized form, which means that the
original X and Y scores were standardized, or
transformed into z- scores, before the
equation was computed.
ẑY = β zX
Multiple Regression
INFERENTIAL STATISTICS
• Inferential statistics are methods that use
sample data to draw general conclusions
(inferences) about populations.
Sampling Error
[Illustration: two random samples drawn from the same population, with no treatment applied]
Is the difference due to sampling error?
[Illustration: random samples assigned to violent vs. nonviolent TV conditions]
Is the difference due to sampling error?
• Sampling error is the naturally occurring
difference between a sample statistic and the
corresponding population parameter.
Hypothesis testing
• A hypothesis test is a statistical procedure that
uses sample data to evaluate the credibility of
a hypothesis about a population.
5 elements of a hypothesis test
1. The Null Hypothesis
The null hypothesis is a statement about the
population, or populations, being examined, and
always says that there is no effect, no change, or
no relationship.
5 elements of a hypothesis test
3. The Standard Error
Standard error is a measure of the average, or
standard, distance between a sample statistic and
the corresponding population parameter.
The "standard error of the mean," sM, refers to the standard deviation of the distribution of sample means taken from a population.
t = (M1 - M2) / sM
(the mean difference divided by its standard error)
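As an illustration, the independent-samples t statistic can be computed by hand from two hypothetical samples, using the standard pooled-variance estimate of the standard error:

```python
# Hypothetical samples from two treatment conditions
group1 = [4, 5, 6, 7, 8]
group2 = [2, 3, 4, 5, 6]

def mean(xs):
    return sum(xs) / len(xs)

def ss(xs):
    # Sum of squared deviations from the sample mean
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs)

n1, n2 = len(group1), len(group2)
m1, m2 = mean(group1), mean(group2)

# Pooled variance, then the estimated standard error of the mean difference
pooled_var = (ss(group1) + ss(group2)) / ((n1 - 1) + (n2 - 1))
s_m = (pooled_var / n1 + pooled_var / n2) ** 0.5

t = (m1 - m2) / s_m
print(f"t = {t:.3f}, df = {(n1 - 1) + (n2 - 1)}")
```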
5 elements of a hypothesis test
5. The Alpha Level (Level of Significance)
The alpha level, or level of significance, for a
hypothesis test is the maximum probability that
the research result was obtained simply by chance.
Errors in Hypothesis Testing
If a researcher is misled by the results from the
sample, it is likely that the researcher will reach
an incorrect conclusion.
Two kinds of errors can be made in hypothesis
testing.
Type I Errors
• A Type I error occurs when a researcher finds evidence
for a significant result when, in fact, there is no effect
( no relationship) in the population.
• The error occurs because the researcher has, by chance, selected an extreme sample that appears to show
the existence of an effect when there is none.
Type II error
• A Type II error occurs when sample data do
not show evidence of a significant effect
when, in fact, a real effect does exist in the
population.
• This often occurs when the effect is so small that it does not show up in the sample.
Factors that Influence the Outcome of
a Hypothesis Test
1. The sample size.
The difference found with a large sample is more
likely to be significant than the same result
found with a small sample.
2. The Size of the Variance
When the variance is small, the data show a
clear mean difference between the two
treatments.
Effect Size
• Knowing that a difference is significant is not
enough. We also need to know the size of the
effect.
Measuring Effect Size with Cohen’s d
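The slide's formula did not survive conversion; one standard form of Cohen's d divides the mean difference by the pooled standard deviation. A sketch with hypothetical groups:

```python
# Hypothetical treatment and control scores
group1 = [4, 5, 6, 7, 8]
group2 = [2, 3, 4, 5, 6]

def mean(xs):
    return sum(xs) / len(xs)

def ss(xs):
    # Sum of squared deviations from the sample mean
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs)

n1, n2 = len(group1), len(group2)

# Pooled standard deviation across the two samples
pooled_sd = ((ss(group1) + ss(group2)) / (n1 + n2 - 2)) ** 0.5

# Cohen's d: mean difference in standard-deviation units
d = (mean(group1) - mean(group2)) / pooled_sd
print(f"Cohen's d = {d:.2f}")
```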
Measuring Effect Size as a Percentage
of Variance (r2)
Effect size can also be measured as r2, the
percentage of variance in the scores that is
accounted for by the treatment effect:
r2 = t2 / (t2 + df), where df = (n1 - 1) + (n2 - 1)
Examples of hypothesis test
reports
• Two-Group Between-Subjects Test
• df = (n1 - 1) + (n2 - 1)
ANOVA reports
• Comparing More Than Two Levels of a Single
Factor (ANOVA)
• df(between) = k - 1
• df(within) = (n1 - 1) + (n2 - 1) + (n3 - 1) + … = N - k
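These df formulas can be checked against a hand computation of a one-way ANOVA; a minimal sketch with hypothetical data for k = 3 treatment conditions:

```python
# Hypothetical scores for k = 3 treatment conditions
groups = [[1, 2, 3], [2, 3, 4], [5, 6, 7]]

k = len(groups)
n_total = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / n_total

# Between-treatments SS: group means around the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-treatments SS: scores around their own group mean
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

df_between = k - 1
df_within = n_total - k  # = (n1 - 1) + (n2 - 1) + (n3 - 1)

# F-ratio: between-treatments variance over within-treatments variance
F = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {F:.2f}")
```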
Post Hoc Tests
Are necessary because the original ANOVA
simply establishes that mean differences exist,
but does not identify exactly which means are
significantly different and which are not.
Factorial Tests Report
The simplest case, a two-factor design, requires
a two-factor analysis of variance, or two-way
ANOVA. The two-factor ANOVA consists of three
separate hypothesis tests (page 550).
[Results table: three F-ratios reported, each significant at p < .01]
Comparing Proportions
chi-square
Reporting χ2
df = (C1 - 1)(C2 - 1)
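The chi-square statistic for a frequency table can be computed from observed and expected frequencies, where each expected frequency is row total times column total divided by the grand total. A sketch with a hypothetical 2x2 table:

```python
# Hypothetical 2x2 table of observed frequencies (fo)
observed = [[30, 20],
            [10, 40]]

rows = len(observed)
cols = len(observed[0])
row_totals = [sum(r) for r in observed]
col_totals = [sum(observed[i][j] for i in range(rows)) for j in range(cols)]
grand_total = sum(row_totals)

# chi-square = sum over all cells of (fo - fe)^2 / fe
chi2 = 0.0
for i in range(rows):
    for j in range(cols):
        fe = row_totals[i] * col_totals[j] / grand_total
        chi2 += (observed[i][j] - fe) ** 2 / fe

df = (rows - 1) * (cols - 1)
print(f"chi-square({df}) = {chi2:.2f}")
```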
Reliability & Validity
• Reliability refers to the relationship between
two sets of measurements.
Split-half
Evaluate the internal consistency of the test by
computing a measure of split-half reliability.
However, the two split-half scores obtained for each participant are based on
only half of the test items, so the correlation underestimates the reliability of
the full-length test. This can be corrected with the Spearman-Brown formula.
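A common fix is the Spearman-Brown formula, which projects the half-length correlation up to the reliability of the full-length test; a sketch with a hypothetical half-test correlation:

```python
# Hypothetical correlation between the two half-test scores
r_half = 0.6

# Spearman-Brown formula: estimated reliability of the full-length test
r_full = (2 * r_half) / (1 + r_half)

print(f"split-half r = {r_half}, corrected reliability = {r_full:.2f}")
```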
Kuder-Richardson
• The Kuder-Richardson Formula 20
estimates the average of all the possible
split-half correlations that can be obtained from
all of the possible ways to split a test in half.
Cronbach’s Alpha
• One limitation of the K-R 20 is that it can be
used only for tests in which each item has just
two scoring alternatives (e.g., true/false or
yes/no). Cronbach’s alpha generalizes K-R 20
to items with any scoring scale.
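Cronbach's alpha is straightforward to compute from item variances and the variance of the total score; a sketch with hypothetical item scores (4 participants, k = 3 items):

```python
# Hypothetical item scores: rows = participants, columns = k = 3 items
data = [[2, 3, 3],
        [4, 4, 5],
        [3, 3, 4],
        [5, 4, 5]]

k = len(data[0])

def sample_var(xs):
    # Sample variance: SS / (n - 1)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Variance of each item, and of each participant's total test score
item_vars = [sample_var([row[j] for row in data]) for j in range(k)]
total_var = sample_var([sum(row) for row in data])

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```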
Inter-rater reliability
• Inter-rater reliability is the degree of
agreement between two observers who have
independently observed and recorded
behaviors at the same time.
Cohen’s kappa
• The problem with a simple measure of
percent agreement is that the value obtained
can be inflated by chance agreement.
• Cohen’s kappa corrects for this chance factor.
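Cohen's kappa adjusts the observed proportion of agreement (po) by the agreement expected by chance (pe): kappa = (po - pe) / (1 - pe). A sketch with hypothetical ratings from two observers:

```python
# Hypothetical ratings: each pair is (rater 1, rater 2) for one observation
ratings = [("A", "A"), ("A", "A"), ("B", "B"), ("A", "B"),
           ("B", "A"), ("B", "B"), ("A", "A"), ("B", "B")]

n = len(ratings)
categories = {"A", "B"}

# Observed proportion of agreement
p_o = sum(1 for r1, r2 in ratings if r1 == r2) / n

# Chance agreement: product of each rater's marginal proportions, summed over categories
p_e = sum(
    (sum(1 for r1, _ in ratings if r1 == c) / n) *
    (sum(1 for _, r2 in ratings if r2 == c) / n)
    for c in categories
)

kappa = (p_o - p_e) / (1 - p_e)
print(f"percent agreement = {p_o:.0%}, kappa = {kappa:.2f}")
```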
Group Discussion
• Identify the two basic concerns with using a
correlation to measure split-half reliability and
explain how these concerns are addressed by
Spearman-Brown, K-R 20, and Cronbach’s
alpha.
• Identify the basic concern with using the
percentage of agreement as a measure of
inter-rater reliability and explain how this
concern is addressed by Cohen’s kappa.