
Chapter 14

Correlation and Regression


PowerPoint Lecture Slides
Essentials of Statistics for the
Behavioral Sciences
Eighth Edition
by Frederick J. Gravetter and Larry B. Wallnau
Chapter 14 Learning Outcomes

1 • Understand Pearson r as measure of variables’ relationship

2 • Compute Pearson r using definitional or computational formula

3 • Use and interpret Pearson r; understand assumptions & limitations

4 • Test hypothesis about population correlation (ρ) with sample r

5 • Understand the concept of a partial correlation


Chapter 14 Learning Outcomes
(continued)
6 • Explain/compute Spearman correlation coefficient (ranks)

7 • Explain/compute point-biserial correlation coefficient (one dichotomous variable)

8 • Explain/compute phi-coefficient for two dichotomous variables

9 • Explain/compute linear regression equation to predict Y values

10 • Evaluate significance of regression equation


Tools You Will Need
• Sum of squares (SS) (Chapter 4)
– Computational formula
– Definitional formula
• z-Scores (Chapter 5)
• Hypothesis testing (Chapter 8)
• Analysis of Variance (Chapter 12)
– MS values and F-ratios
14.1 Introduction to
Correlation
• Measures and describes the relationship
between two variables
• Characteristics of relationships
– Direction (negative or positive; indicated by the
sign, + or – of the correlation coefficient)
– Form (linear is most common)
– Strength or consistency (varies from 0 to 1)
• These three characteristics are independent of one another
Figure 14.1 Scatterplot for
Correlational Data
Figure 14.2 Positive and
Negative Relationships
Figure 14.3 Different Linear
Relationship Values
14.2 The Pearson Correlation
• Measures the degree and the direction of the
linear relationship between two variables
• Perfect linear relationship
– Every change in X has a corresponding change in Y
– Correlation will be –1.00 or +1.00

r = (covariability of X and Y) / (variability of X and Y separately)
Sum of Products (SP)
• Similar to SS (sum of squared deviations)
• Measures the amount of covariability
between two variables
• SP definitional formula:

SP = Σ(X − M_X)(Y − M_Y)
SP – Computational formula
• Definitional formula emphasizes SP as the sum of the products of two deviation scores
• Computational formula results in easier
calculations
• SP computational formula:

SP = ΣXY − (ΣX)(ΣY)/n
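As a quick check, here is a minimal Python sketch (with hypothetical scores) showing that the definitional and computational formulas give the same SP:

import numpy as np

# Hypothetical paired scores
x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2, 3, 5, 4, 6], dtype=float)
n = len(x)

sp_def = np.sum((x - x.mean()) * (y - y.mean()))        # definitional formula
sp_comp = np.sum(x * y) - (np.sum(x) * np.sum(y)) / n   # computational formula
print(sp_def, sp_comp)                                  # both print 9.0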
Pearson Correlation
Calculation
• Ratio comparing the covariability of X and Y
(numerator) with the variability of X and Y
separately (denominator)

r = SP / √(SS_X · SS_Y)
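A minimal Python sketch of this ratio, using the same hypothetical scores as above:

import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2, 3, 5, 4, 6], dtype=float)

sp = np.sum((x - x.mean()) * (y - y.mean()))
ss_x = np.sum((x - x.mean()) ** 2)
ss_y = np.sum((y - y.mean()) ** 2)

r = sp / np.sqrt(ss_x * ss_y)
print(r)   # 0.9, matching np.corrcoef(x, y)[0, 1]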
Figure 14.4
Example 14.3 Scatterplot
Pearson Correlation and
z-Scores
• Pearson correlation formula can be expressed
as a relationship of z-scores.

Sample: r = Σ(z_X z_Y) / (n − 1)

Population: ρ = Σ(z_X z_Y) / N
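A sketch (hypothetical data) confirming the sample z-score form gives the same r; note that the sample formula uses the sample standard deviation (ddof=1):

import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2, 3, 5, 4, 6], dtype=float)
n = len(x)

zx = (x - x.mean()) / x.std(ddof=1)   # sample z-scores (n - 1 in the sd)
zy = (y - y.mean()) / y.std(ddof=1)

r = np.sum(zx * zy) / (n - 1)
print(r)   # 0.9 again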
Learning Check
• A scatterplot shows a set of data points that fit
very loosely around a line that slopes down to
the right. Which of the following values would
be closest to the correlation for these data?
A • 0.75
B • 0.35
C • -0.75
D • -0.35
Learning Check - Answer
• A scatterplot shows a set of data points that fit
very loosely around a line that slopes down to
the right. Which of the following values would
be closest to the correlation for these data?
A • 0.75
B • 0.35
C • -0.75
D • -0.35 (correct: the downward slope makes r negative; the loose fit makes it weak)
Learning Check
• Decide if each of the following statements
is True or False

T/F • A set of n = 10 pairs of X and Y scores has ΣX = ΣY = ΣXY = 20. For this set of scores, SP = –20
T/F • If the Y variable decreases when the X variable decreases, their correlation is negative
Learning Check - Answers

True • SP = ΣXY − (ΣX)(ΣY)/n = 20 − (20)(20)/10 = 20 − 40 = −20
False • When Y decreases as X decreases, the two variables change in the same direction, which is a positive correlation
14.3 Using and Interpreting
the Pearson Correlation
• Correlations used for:
– Prediction
– Validity
– Reliability
– Theory verification
Interpreting Correlations
• Correlation describes a relationship but does
not demonstrate causation
• Establishing causation requires an experiment
in which one variable is manipulated and
others carefully controlled
• Example 14.4 (and Figure 14.5) demonstrates
the fallacy of attributing causation after
observing a correlation
Figure 14.5 Correlation:
Churches and Serious Crimes
Correlations and Restricted
Range of Scores
• Correlation coefficient value (size) will be
affected by the range of scores in the data
• Severely restricted range may provide a very
different correlation than would a broader
range of scores
• To be safe, never generalize a correlation
beyond the sample range of data
Figure 14.6 Restricted Score
Range Influences Correlation
Correlations and Outliers

• An outlier is an extremely deviant individual in the sample
• Characterized by a much larger (or smaller)
score than all the others in the sample
• In a scatter plot, the point is clearly different
from all the other points
• Outliers produce a disproportionately large
impact on the correlation coefficient
Figure 14.7 Outlier Influences
Size of Correlation
Correlations and the Strength
of the Relationship
• A correlation coefficient measures the degree
of relationship on a scale from 0 to 1.00
• It is easy to mistakenly interpret this decimal
number as a percent or proportion
• Correlation is not a proportion
• Squared correlation may be interpreted as the
proportion of shared variability
• Squared correlation is called the coefficient of
determination
Coefficient of Determination
• Coefficient of determination measures the
proportion of variability in one variable that
can be determined from the relationship with
the other variable (shared variability)

Coefficient of Determination = r²
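For example (hypothetical value): if r = 0.80, then r² = 0.64, so 64% of the variability in one variable is predictable from its relationship with the other; the remaining 36% is unexplained.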


Figure 14.8 Three Amounts of
Linear Relationship Example
14.4 Hypothesis Tests with
the Pearson Correlation
• Pearson correlation is usually computed for
sample data, but used to test hypotheses
about the relationship in the population
• Population correlation shown by Greek letter
rho (ρ)
• Non-directional: H0: ρ = 0 and H1: ρ ≠ 0
Directional: H0: ρ ≤ 0 and H1: ρ > 0 or
Directional: H0: ρ ≥ 0 and H1: ρ < 0
Figure 14.9 Correlation in
Sample vs. Population
Correlation Hypothesis Test
• Sample correlation r used to test population ρ
• Degrees of freedom (df) = n – 2
• Hypothesis test can be computed using
either t or F; only t shown in this chapter
• Use t table to find critical value with df = n - 2
t = r / √((1 − r²) / (n − 2))
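A minimal sketch of this test in Python; the sample r and n are hypothetical, and scipy's t distribution supplies the p-value:

import numpy as np
from scipy import stats

r, n = 0.40, 30                     # hypothetical sample correlation and sample size
df = n - 2

t = r / np.sqrt((1 - r**2) / df)    # t statistic for H0: rho = 0
p = 2 * stats.t.sf(abs(t), df)      # two-tailed p-value
print(t, p)                         # t ≈ 2.31, p ≈ .028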
In the Literature
• Report
– Whether it is statistically significant
• Concise test results
– Value of correlation
– Sample size
– p-value or level
– Type of test (one- or two-tailed)
• E.g., r = -0.76, n = 48, p < .01, two tails
Partial Correlation
• A partial correlation measures the relationship
between two variables while mathematically
controlling the influence of a third variable by
holding it constant

r_xy·z = (r_xy − r_xz r_yz) / √((1 − r_xz²)(1 − r_yz²))
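A sketch with hypothetical pairwise correlations plugged into the formula:

import numpy as np

r_xy, r_xz, r_yz = 0.60, 0.50, 0.40   # hypothetical pairwise correlations

r_xy_z = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
print(r_xy_z)   # ≈ 0.50: the X–Y relationship with Z held constant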
Figure 14.10 Controlling the
Impact of a Third Variable
14.5 Alternatives to the
Pearson Correlation
• Pearson correlation has been developed
– For data having linear relationships
– With data from interval or ratio measurement
scales
• Other correlations have been developed
– For data having non-linear relationships
– With data from nominal or ordinal measurement
scales
Spearman Correlation
• Spearman (rs) correlation formula is used with
data from an ordinal scale (ranks)
– Used when both variables are measured on an
ordinal scale
– Also may be used if the measurement scale is interval or ratio when the relationship is consistently directional (monotonic) but not necessarily linear
Figure 14.11 Consistent
Nonlinear Positive Relationship
Figure 14.12 Scatterplot
Showing Scores and Ranks
Ranking Tied Scores
• Tied scores need ranks for the Spearman correlation
• Method for assigning rank
– List scores in order from smallest to largest
– Assign a rank to each position in the list
– When two (or more) scores are tied, compute the
mean of their ranked position, and assign this
mean value as the final rank for each score.
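scipy implements exactly this mean-rank rule; a minimal sketch with hypothetical scores:

from scipy.stats import rankdata

scores = [3, 5, 5, 8, 9]     # already listed smallest to largest
print(rankdata(scores))      # [1.  2.5 2.5 4.  5. ] — the tied 5s share the mean rank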
Special Formula for the
Spearman Correlation
• The ranks for the scores are simply integers
• Calculations can be simplified
– Use D as the difference between the X rank and
the Y rank for each individual to compute the rs
statistic

r_s = 1 − 6ΣD² / (n(n² − 1))
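A sketch of the special formula with hypothetical scores and no ties (with ties, the regular Pearson formula applied to the ranks should be used instead):

import numpy as np
from scipy.stats import rankdata

x = np.array([2, 7, 5, 9, 1], dtype=float)
y = np.array([4, 8, 6, 10, 3], dtype=float)
n = len(x)

d = rankdata(x) - rankdata(y)                    # D: rank difference per individual
rs = 1 - (6 * np.sum(d**2)) / (n * (n**2 - 1))
print(rs)   # 1.0 here — the two sets of ranks agree perfectly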
Point-Biserial Correlation
• Measures relationship between two variables
– One variable has only two values
(called a dichotomous or binomial variable)
• Effect size for the independent-measures t test in Chapter 10 can be measured by r²
– Point-biserial r² has the same value as the r² computed from the t statistic
– The t statistic tests the significance of the mean difference
– The r statistic measures the size of the correlation
Point-Biserial Correlation
• Applicable in the same situation as the
independent-measures t test in Chapter 10
– Code one group 0 and the other 1 (or any two
digits) as the Y score
– t-statistic evaluates the significance of mean
difference
– Point-Biserial r measures correlation magnitude
– r2 quantifies effect size
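A sketch of this equivalence with hypothetical groups; r² computed from the t statistic equals the squared point-biserial correlation:

import numpy as np
from scipy import stats

group0 = np.array([3, 5, 4, 6, 2], dtype=float)    # hypothetical scores, group coded 0
group1 = np.array([7, 9, 8, 6, 10], dtype=float)   # hypothetical scores, group coded 1

t, p = stats.ttest_ind(group0, group1)
df = len(group0) + len(group1) - 2

r_squared_from_t = t**2 / (t**2 + df)              # effect size from the t statistic

codes = np.array([0] * 5 + [1] * 5)                # dichotomous variable as 0/1
scores = np.concatenate([group0, group1])
r_pb, _ = stats.pearsonr(codes, scores)

print(r_squared_from_t, r_pb**2)                   # identical values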
Phi Coefficient
• Both variables (X and Y) are dichotomous
– Both variables are re-coded to values 0 and 1 (or
any two digits)
– The regular Pearson formula is used to calculate r
– r2 (coefficient of determination) measures effect
size (proportion of variability in one score
predicted by the other)
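A sketch (hypothetical 0/1 codings) showing that phi is simply Pearson r applied to dichotomous codes:

import numpy as np
from scipy import stats

x = np.array([0, 0, 0, 1, 1, 1, 0, 1])   # hypothetical dichotomous variable
y = np.array([0, 1, 0, 1, 1, 0, 0, 1])   # second hypothetical dichotomous variable

phi, _ = stats.pearsonr(x, y)
print(phi, phi**2)                        # phi and its effect size r²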
Learning Check
• Participants were classified as “morning people”
or “evening people” then measured on a 50-point
conscientiousness scale. Which correlation
should be used to measure the relationship?
A • Pearson correlation
B • Spearman correlation
C • Point-biserial correlation
D • Phi-coefficient
Learning Check - Answer
• Participants were classified as “morning people”
or “evening people” then measured on a 50-point
conscientiousness scale. Which correlation
should be used to measure the relationship?
A • Pearson correlation
B • Spearman correlation
C • Point-biserial correlation (correct: one dichotomous variable, one interval-scale variable)
D • Phi-coefficient
Learning Check
• Decide if each of the following statements
is True or False

T/F • The Spearman correlation is used with dichotomous data
T/F • In a non-directional significance test of a correlation, the null hypothesis states that the population correlation is zero
Learning Check - Answers

False • The Spearman correlation uses ordinal (ranked) data
True • The null hypothesis assumes no relationship; ρ = 0 indicates no relationship in the population
14.6 Introduction to Linear
Equations and Regression
• The Pearson correlation measures a linear
relationship between two variables
• Figure 14.13 makes the relationship obvious
• The line through the data
– Makes the relationship easier to see
– Shows the central tendency of the relationship
– Can be used for prediction
• Regression analysis precisely defines the line
Figure 14.13 Regression line
Linear Equations
• General equation for a line
– Equation: Y = bX + a
– X and Y are variables
– a and b are fixed constants (b is the slope; a is the Y-intercept)
Figure 14.14
Linear Equation Graph
Regression
• Regression is a method of finding an equation
describing the best-fitting line for a set of data
• How do we define the “best-fitting” straight line when there are many possible straight lines?
• The answer: the line that minimizes the prediction errors between the actual data points and the line
Regression
• Ŷ is the value of Y predicted by the regression
equation (regression line) for each value of X
• (Y- Ŷ) is the distance each data point is from
the regression line: the error of prediction
• The regression procedure produces a line that
minimizes total squared error of prediction
• This method is called the least-squared-error
solution
Figure 14.15 Y-Ŷ Distance: Actual
Data Point Minus Predicted Point
Regression Equations
• Regression line equation: Ŷ = bX + a

• The slope of the line, b, can be calculated:

b = SP / SS_X   or   b = r(s_Y / s_X)

• The line passes through (M_X, M_Y), therefore:

a = M_Y − bM_X
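A minimal least-squares sketch using the hypothetical scores from earlier:

import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2, 3, 5, 4, 6], dtype=float)

sp = np.sum((x - x.mean()) * (y - y.mean()))
ss_x = np.sum((x - x.mean()) ** 2)

b = sp / ss_x                   # slope: 9 / 10 = 0.9
a = y.mean() - b * x.mean()     # intercept: 4 - 0.9(3) = 1.3
y_hat = b * x + a               # predicted Y for each X
print(b, a)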
Figure 14.16 Data Points and
Regression Line: Example 14.13
Standard Error of Estimate
• Regression equation makes a prediction
• Precision of the estimate is measured by the
standard error of estimate (SEoE)

SEoE = √(SS_residual / df) = √(Σ(Y − Ŷ)² / (n − 2))
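Continuing the same hypothetical sketch, the standard error of estimate computed from the residuals:

import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2, 3, 5, 4, 6], dtype=float)
n = len(x)

b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()
y_hat = b * x + a

ss_residual = np.sum((y - y_hat) ** 2)        # error left after prediction
seoe = np.sqrt(ss_residual / (n - 2))         # df = n - 2
print(seoe)                                   # ≈ 0.796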
Figure 14.17 Regression Lines:
Perfectly Fit vs. Example 14.13
Relationship Between Correlation
and Standard Error of Estimate
• As r goes from 0 to ±1.00, SEoE decreases to 0
• Predicted variability in Y scores: SS_regression = r²SS_Y
• Unpredicted variability in Y scores: SS_residual = (1 − r²)SS_Y
• Standard error of estimate based on r:

SEoE = √((1 − r²)SS_Y / (n − 2))
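A quick numeric check of this partition, again with the hypothetical scores (r = 0.9, SS_Y = 10):

import numpy as np

r, ss_y, n = 0.9, 10.0, 5            # values from the sketch above

ss_regression = r**2 * ss_y          # predicted variability: 8.1
ss_residual = (1 - r**2) * ss_y      # unpredicted variability: 1.9
seoe = np.sqrt(ss_residual / (n - 2))
print(ss_regression, ss_residual, seoe)   # seoe matches the residual-based value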
Testing Regression Significance
• Analysis of Regression
– Similar to Analysis of Variance
– Uses an F-ratio of two Mean Square values
– Each MS is a SS divided by its df
• H0: the slope of the regression line (b or beta)
is zero
Mean Squares and F-ratio
MS_regression = SS_regression / df_regression

MS_residual = SS_residual / df_residual

F = MS_regression / MS_residual
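A sketch of the full analysis of regression for the same hypothetical data; df_regression = 1, df_residual = n − 2, and scipy's F distribution supplies the p-value:

import numpy as np
from scipy import stats

r, ss_y, n = 0.9, 10.0, 5

ss_regression = r**2 * ss_y
ss_residual = (1 - r**2) * ss_y

ms_regression = ss_regression / 1          # df_regression = 1
ms_residual = ss_residual / (n - 2)        # df_residual = n - 2

F = ms_regression / ms_residual
p = stats.f.sf(F, 1, n - 2)                # right-tail p-value
print(F, p)                                # F ≈ 12.79, p < .05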
Figure 14.18 Partitioning SS
and df in Regression Analysis
Learning Check
• A linear regression has b = 3 and a = 4.
What is the “predicted Y” (Ŷ) for X = 7?

A • 14
B • 25
C • 31
D • Cannot be determined
Learning Check - Answer
• A linear regression has b = 3 and a = 4.
What is the predicted Y for X = 7?

A • 14
B • 25 (correct: Ŷ = bX + a = 3(7) + 4 = 25)
C • 31
D • Cannot be determined
Learning Check
• Decide if each of the following statements
is True or False

T/F • It is possible for the regression equation to place none of the actual data points on the regression line
T/F • If r = 0.58, the linear regression equation predicts about one third of the variance in the Y scores
Learning Check - Answers

True • The line estimates where the points should be, but there are almost always prediction errors
True • When r = .58, r² = .336 (≈ 1/3)


Figure 14.19
SPSS Output for Example 14.13
Figure 14.20 SPSS Output for
Examples 14.13—14.15
Figure 14.21 Scatter Plot for
Data of Demonstration 14.1
Equations? Concepts? Any Questions?
