
Introduction to Linear Regression and Correlation Analysis

Scatter Diagrams

A scatter plot (also referred to as a scatter diagram) is a graph used to represent the relationship between two variables.
Dependent and Independent
Variables

A dependent variable is the variable to be predicted or explained in a regression model. This variable is assumed to be functionally related to the independent variable.
Dependent and Independent
Variables

An independent variable is the variable related to the dependent variable in a regression equation. The independent variable is used in a regression model to estimate the value of the dependent variable.
Best Fit

The best-fit line represents our model. It is the line that "best fits" our data points: it gives the best estimate of the y value for every given input of x.
What is Simple Linear Regression?
• Simple linear regression is a method used to fit the best straight line through a set of data points.
• After the graph is properly scaled, the data points should "look" like they would fit a straight line, not a parabola or any other shape.
• The line is used as a model to predict a variable y from another variable x. A regression line involves two variables: the dependent and the independent variable.
• Finding the "best-fit" line is the goal of simple linear regression.
Two Variable Relationships

Scatter plot panels (one per slide): (a) linear, (b) linear, (c) curvilinear, (d) curvilinear, (e) no relationship.
Correlation

The correlation coefficient is a quantitative measure of the strength of the linear relationship between two variables. The correlation ranges from −1.0 to +1.0. A correlation of ±1.0 indicates a perfect linear relationship, whereas a correlation of 0 indicates no linear relationship.
Correlation
SAMPLE CORRELATION COEFFICIENT

r = Σ(x − x̄)(y − ȳ) / √[Σ(x − x̄)² · Σ(y − ȳ)²]

where:
r = Sample correlation coefficient
n = Sample size
x = Value of the independent variable
y = Value of the dependent variable
Correlation
SAMPLE CORRELATION COEFFICIENT

or the algebraic equivalent:

r = [nΣxy − Σx·Σy] / √{[nΣx² − (Σx)²][nΣy² − (Σy)²]}
Correlation
Sales (y)   Years (x)      xy        y²       x²
  487           3          1,461    237,169      9
  445           5          2,225    198,025     25
  272           2            544     73,984      4
  641           8          5,128    410,881     64
  187           2            374     34,969      4
  440           6          2,640    193,600     36
  346           7          2,422    119,716     49
  238           1            238     56,644      1
  312           4          1,248     97,344     16
  269           2            538     72,361      4
  655           9          5,895    429,025     81
  563           6          3,378    316,969     36

Σy = 4,855   Σx = 55   Σxy = 26,091   Σy² = 2,240,687   Σx² = 329

Correlation

r = [nΣxy − Σx·Σy] / √{[nΣx² − (Σx)²][nΣy² − (Σy)²]}

r = [12(26,091) − 55(4,855)] / √{[12(329) − (55)²][12(2,240,687) − (4,855)²]} = 0.8325
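The worked computation above is easy to verify in code. A minimal sketch in plain Python (no libraries; the data are the sales/years pairs from the table):

```python
import math

# Sales (y) and years of experience (x) from the table above
sales = [487, 445, 272, 641, 187, 440, 346, 238, 312, 269, 655, 563]
years = [3, 5, 2, 8, 2, 6, 7, 1, 4, 2, 9, 6]
n = len(sales)

sum_x = sum(years)                                  # 55
sum_y = sum(sales)                                  # 4,855
sum_xy = sum(x * y for x, y in zip(years, sales))   # 26,091
sum_x2 = sum(x * x for x in years)                  # 329
sum_y2 = sum(y * y for y in sales)                  # 2,240,687

# Algebraic form of the sample correlation coefficient
r = (n * sum_xy - sum_x * sum_y) / math.sqrt(
    (n * sum_x2 - sum_x**2) * (n * sum_y2 - sum_y**2)
)
print(round(r, 4))  # 0.8325
```

The intermediate sums match the column totals in the table, and r agrees with the Excel Multiple R of 0.832534056.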
Correlation

                       Sales         Years with Midwest
Sales                  1
Years with Midwest     0.832534056   1

Correlation between Years and Sales

Excel Correlation Output
Correlation
TEST STATISTIC FOR CORRELATION

t = r / √[(1 − r²)/(n − 2)],  df = n − 2

where:
t = Number of standard deviations r is from 0
r = Simple correlation coefficient
n = Sample size
Correlation Significance Test

H0: ρ = 0.0 (no correlation)
HA: ρ ≠ 0.0
α = 0.05

Rejection region: α/2 = 0.025 in each tail
Critical values: −t.025 = −2.228 and t.025 = 2.228 (df = n − 2 = 10)

t = r / √[(1 − r²)/(n − 2)] = 0.8325 / √[(1 − 0.6931)/10] = 4.752

Since t = 4.752 > 2.228, reject H0: there is a significant linear relationship.
Correlation

Spurious correlation occurs when two otherwise unrelated variables appear correlated, typically by coincidence or through the influence of a third variable.
Simple Linear Regression
Analysis

Simple linear regression analysis analyzes the linear relationship that exists between a dependent variable and a single independent variable.
Simple Linear Regression
One Variable
• Problem: A waiter wants to predict his next tip, but he forgot to record the bill amounts for previous tips.
• Here is a graph of his tips. The tip amount is the only variable; let's call it the y variable.
• Meal # is not a variable. It is simply used to identify a tip.

Can we come up with a model for this problem with only one variable? With no x variable, the best prediction is a constant: the mean of the tips.

ŷ = 10
• Now, let's talk about goodness of fit. This will tell us how well our data points fit the line.
• We need to calculate the residuals (errors) for each point.

Residuals about the line ŷ = 10: −5, +7, +1, −2, +4, −5
Residual

A residual is the difference between the actual value of the dependent variable and the value predicted by the regression model:

e = y − ŷ
• The best fit line is the one that minimizes the sum of the squares of the residuals (errors).
• The error is the difference between the actual data point and the point on the line.
• SSE (Sum of Squared Errors) = (−5)² + 7² + 1² + (−2)² + 4² + (−5)² = 120

• SST (Total Sum of Squares) = SSR (Sum of Squares Regression) + SSE is the sum-of-squares identity.
• Since there is no regression line (as we only have one variable), we cannot make the SSE any smaller than 120, because SSR = 0.
Two Variables
Simple Linear Regression
Analysis
SIMPLE LINEAR REGRESSION MODEL
(POPULATION MODEL)

y = β0 + β1x + ε

where:
y = Value of the dependent variable
x = Value of the independent variable
β0 = Population's y-intercept
β1 = Slope of the population regression line
ε = Error term, or residual
Simple Linear Regression
Analysis
The simple linear regression model has four assumptions:
• Individual values of the error terms, εi, are statistically independent of one another.
• The distribution of all possible values of ε is normal.
• The distributions of possible εi values have equal variances for all values of x.
• The means of the dependent variable, for all specified values of the independent variable x, can be connected by a straight line called the population regression model.
Simple Linear Regression
Analysis

REGRESSION COEFFICIENTS
In the simple regression model, there
are two coefficients: the intercept and
the slope.
Simple Linear Regression
Analysis

The interpretation of the regression slope coefficient is that it gives the average change in the dependent variable for a unit increase in the independent variable. The slope coefficient may be positive or negative, depending on the relationship between the two variables.
Simple Linear Regression
Analysis
The least squares criterion is used
for determining a regression line
that minimizes the sum of squared
residuals.
Another Example…
Experience and Sales
Simple Linear Regression
Analysis
ŷ = 150 + 60x

At x = 4 years with the company, the line predicts sales of ŷ = 150 + 60(4) = 390 (in thousands). The actual value is 312, so:

Residual = 312 − 390 = −78
Simple Linear Regression
Analysis
ESTIMATED REGRESSION MODEL
(SAMPLE MODEL)

ŷ = b0 + b1x

where:
ŷ = Estimated, or predicted, y value
b0 = Unbiased estimate of the regression intercept
b1 = Unbiased estimate of the regression slope
x = Value of the independent variable
Simple Linear Regression
Analysis
LEAST SQUARES EQUATIONS

b1 = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)²

algebraic equivalent:

b1 = [Σxy − (Σx·Σy)/n] / [Σx² − (Σx)²/n]

and

b0 = ȳ − b1x̄
Simple Linear Regression
Analysis

SUM OF SQUARED ERRORS

SSE = Σy² − b0Σy − b1Σxy
Simple Linear Regression Analysis

Sales (y)   Years (x)      xy        y²       x²
  487           3          1,461    237,169      9
  445           5          2,225    198,025     25
  272           2            544     73,984      4
  641           8          5,128    410,881     64
  187           2            374     34,969      4
  440           6          2,640    193,600     36
  346           7          2,422    119,716     49
  238           1            238     56,644      1
  312           4          1,248     97,344     16
  269           2            538     72,361      4
  655           9          5,895    429,025     81
  563           6          3,378    316,969     36

Σy = 4,855   Σx = 55   Σxy = 26,091   Σy² = 2,240,687   Σx² = 329

Simple Linear Regression
Analysis
b1 = [Σxy − (Σx·Σy)/n] / [Σx² − (Σx)²/n] = [26,091 − 55(4,855)/12] / [329 − (55)²/12] = 49.9101

b0 = ȳ − b1x̄ = 404.5833 − 49.9101(4.5833) = 175.8288

The least squares regression line is:

ŷ = 175.8288 + 49.9101(x)
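The same estimates can be reproduced from the raw data; a minimal sketch of the least squares equations in plain Python:

```python
# Sales (y) and years of experience (x) from the data table
sales = [487, 445, 272, 641, 187, 440, 346, 238, 312, 269, 655, 563]
years = [3, 5, 2, 8, 2, 6, 7, 1, 4, 2, 9, 6]
n = len(sales)

x_bar = sum(years) / n     # 4.5833...
y_bar = sum(sales) / n     # 404.5833...

# Least squares slope and intercept
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(years, sales)) \
     / sum((x - x_bar)**2 for x in years)
b0 = y_bar - b1 * x_bar

print(round(b1, 4), round(b0, 4))  # 49.9101 175.8288
```

The results match the Excel coefficients (49.91007584 and 175.8288191) to the printed precision.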
Simple Linear Regression
Analysis

SUMMARY OUTPUT

Regression Statistics
Multiple R           0.832534056
R Square             0.693112955
Adjusted R Square    0.662424251
Standard Error       92.10553441
Observations         12

ANOVA
             df    SS            MS            F             Significance F
Regression    1    191,600.622   191,600.622   22.58527906   0.000777416
Residual     10     84,834.295     8,483.429
Total        11    276,434.917

                      Coefficients   Standard Error   t Stat        P-value       Lower 95%     Upper 95%
Intercept             175.8288191    54.98988674      3.197475563   0.00953244    53.30369475   298.3539434
Years with Midwest     49.91007584   10.50208428      4.752397191   0.000777416   26.50996978   73.3101819

Excel Midwest Distribution Results

Least Squares Regression
Properties
• The sum of the residuals from the least squares regression line is 0.
• The sum of the squared residuals is a minimum.
• The simple regression line always passes through the mean of the y variable and the mean of the x variable.
• The least squares coefficients are unbiased estimates of β0 and β1.
Simple Linear Regression
Analysis

SUM OF RESIDUALS

Σ(y − ŷ) = 0

SUM OF SQUARED RESIDUALS

Σ(y − ŷ)²
Simple Linear Regression
Analysis

TOTAL SUM OF SQUARES

TSS = Σ(y − ȳ)²

where:
TSS = Total sum of squares
n = Sample size
y = Values of the dependent variable
ȳ = Average value of the dependent variable
Simple Linear Regression
Analysis

SUM OF SQUARES ERROR (RESIDUALS)

SSE = Σ(y − ŷ)²

where:
SSE = Sum of squares error
n = Sample size
y = Values of the dependent variable
ŷ = Estimated value for the average of y for the given x value
Simple Linear Regression
Analysis

SUM OF SQUARES REGRESSION

SSR = Σ(ŷ − ȳ)²

where:
SSR = Sum of squares regression
ȳ = Average value of the dependent variable
y = Values of the dependent variable
ŷ = Estimated value for the average of y for the given x value
Simple Linear Regression
Analysis

SUMS OF SQUARES

TSS = SSR + SSE

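The decomposition can be verified numerically for the Midwest data; a sketch that refits the line and checks TSS = SSR + SSE:

```python
# Sales (y) and years (x) from the data table
sales = [487, 445, 272, 641, 187, 440, 346, 238, 312, 269, 655, 563]
years = [3, 5, 2, 8, 2, 6, 7, 1, 4, 2, 9, 6]
n = len(sales)

x_bar, y_bar = sum(years) / n, sum(sales) / n
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(years, sales)) \
     / sum((x - x_bar)**2 for x in years)
b0 = y_bar - b1 * x_bar

preds = [b0 + b1 * x for x in years]
tss = sum((y - y_bar)**2 for y in sales)               # ≈ 276,434.92
ssr = sum((yh - y_bar)**2 for yh in preds)             # ≈ 191,600.62
sse = sum((y - yh)**2 for y, yh in zip(sales, preds))  # ≈  84,834.29

# The identity holds (up to floating-point rounding)
assert abs(tss - (ssr + sse)) < 1e-6
print(round(tss, 2), round(ssr, 2), round(sse, 2))
```

These are the same SS values reported in the Excel ANOVA table.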

Simple Linear Regression
Analysis

The coefficient of determination is the portion of the total variation in the dependent variable that is explained by its relationship with the independent variable. The coefficient of determination is also called R-squared and is denoted as R².
Simple Linear Regression
Analysis

COEFFICIENT OF DETERMINATION (R²)

R² = SSR / TSS
Simple Linear Regression
Analysis

COEFFICIENT OF DETERMINATION (R²)

R² = SSR / TSS = 191,600.62 / 276,434.92 = 0.6931

69.31% of the variation in the sales data for this sample can be explained by the linear relationship between sales and years of experience.
Simple Linear Regression
Analysis

COEFFICIENT OF DETERMINATION
SINGLE INDEPENDENT VARIABLE CASE

R² = r²

where:
R² = Coefficient of determination
r = Simple correlation coefficient
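This identity is easy to confirm against the Excel output shown earlier, where Multiple R is r and R Square is R²:

```python
r = 0.832534056       # Multiple R from the Excel summary output
print(round(r**2, 4))  # 0.6931, the R Square value
```

Squaring the correlation coefficient reproduces the coefficient of determination, as the single-variable identity requires.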
Simple Linear Regression
Analysis
STANDARD DEVIATION OF THE
REGRESSION SLOPE COEFFICIENT
(POPULATION)

σb1 = σε / √Σ(x − x̄)²

where:
σb1 = Standard deviation of the regression slope (called the standard error of the slope)
σε = Population standard error of the estimate
Simple Linear Regression
Analysis
ESTIMATOR FOR THE STANDARD ERROR
OF THE ESTIMATE

sε = √[SSE / (n − k − 1)]

where:
SSE = Sum of squares error
n = Sample size
k = Number of independent variables in the model
Simple Linear Regression
Analysis
ESTIMATOR FOR THE STANDARD
DEVIATION OF THE REGRESSION SLOPE

sb1 = sε / √Σ(x − x̄)² = sε / √[Σx² − (Σx)²/n]

where:
sb1 = Estimate of the standard error of the least squares slope
sε = √[SSE/(n − 2)] = Sample standard error of the estimate
Simple Linear Regression
Analysis
TEST STATISTIC FOR TEST OF
SIGNIFICANCE OF THE REGRESSION SLOPE

t = (b1 − β1) / sb1,  df = n − 2

where:
b1 = Sample regression slope coefficient
β1 = Hypothesized slope
sb1 = Estimator of the standard error of the slope
Significance Test of
Regression Slope
H0: β1 = 0.0
HA: β1 ≠ 0.0
α = 0.05

Rejection region: α/2 = 0.025 in each tail
Critical values: −t.025 = −2.228 and t.025 = 2.228 (df = n − 2 = 10)

t = (b1 − β1) / sb1 = (49.91 − 0) / 10.50 = 4.753

Since t = 4.753 > 2.228, reject H0: conclude that the true slope is not zero.
Simple Linear Regression
Analysis

MEAN SQUARE REGRESSION

MSR = SSR / k

where:
SSR = Sum of squares regression
k = Number of independent variables in the model
Simple Linear Regression
Analysis

MEAN SQUARE ERROR

MSE = SSE / (n − k − 1)

where:
SSE = Sum of squares error
n = Sample size
k = Number of independent variables in the model
Significance Test

H0: β1 = 0.0
HA: β1 ≠ 0.0
α = 0.05

F-ratio: F = MSR / MSE = 191,600.6 / 8,483.43 = 22.59

Rejection region: α = 0.05, F.05 = 4.96 (df = 1, 10)

Since F = 22.59 > 4.96, reject H0: conclude that the regression model explains a significant amount of the variation in the dependent variable.
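The F-ratio follows directly from the ANOVA table values; note that in simple regression F equals the square of the slope's t statistic:

```python
# MSR and MSE from the Excel ANOVA table (k = 1, n - k - 1 = 10)
msr = 191600.622 / 1
mse = 84834.29469 / 10

f = msr / mse
print(round(f, 2))                # 22.59

# In simple regression, F = t^2 for the slope test
print(round(4.752397191**2, 2))   # 22.59
```

So the t test of the slope and the F test of the overall model reach the same conclusion here.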
Simple Regression Steps

• Develop a scatter plot of y and x. You are looking for a linear relationship between the two variables.
• Calculate the least squares regression line for the sample data.
• Calculate the correlation coefficient and the simple coefficient of determination, R².
• Conduct one of the significance tests.
Simple Linear Regression
Analysis
CONFIDENCE INTERVAL ESTIMATE FOR
THE REGRESSION SLOPE

b1 ± tα/2 sb1,  df = n − 2

or equivalently:

b1 ± tα/2 sε / √Σ(x − x̄)²

where:
sb1 = Standard error of the regression slope coefficient
sε = Standard error of the estimate
Simple Linear Regression
Analysis
CONFIDENCE INTERVAL FOR THE AVERAGE y, GIVEN xp

ŷ ± tα/2 sε √[1/n + (xp − x̄)² / Σ(x − x̄)²]

where:
ŷ = Point estimate of the dependent variable
t = Critical value with n − 2 d.f.
sε = Standard error of the estimate
n = Sample size
xp = Specific value of the independent variable
x̄ = Mean of the independent variable observations
Simple Linear Regression
Analysis

PREDICTION INTERVAL FOR y, GIVEN xp

ŷ ± tα/2 sε √[1 + 1/n + (xp − x̄)² / Σ(x − x̄)²]
Residual Analysis

Before using a regression model for description or prediction, you should check whether the assumptions concerning the normal distribution and constant variance of the error terms have been satisfied. One way to do this is through the use of residual plots.
Simple Linear Regression Model
- Assumptions
Assumptions of CLRM
CLRM – ASSUMPTION 1
The regression model is linear in the parameters
CLRM – ASSUMPTION 2
X values are fixed in repeated sampling
Values taken by the regressor, X, are considered fixed in repeated samples. More technically, X is assumed to be non-stochastic (so that Xi and ui are also uncorrelated)
CLRM – ASSUMPTION 3
• The error term εi has ZERO MEAN VALUE given the value of X. Thus, the conditional mean value of εi is zero.
• That is, E(εi | Xi) = 0
CLRM – ASSUMPTION 4
Homoscedasticity, or equal variance of εi: given the value of X, the variance of εi is the same for all observations.
Thus, Var(εi | Xi) = E[εi − E(εi | Xi)]² = σ²
CLRM – ASSUMPTION 5
• No autocorrelation between the error terms.
• Given any two X values, Xi and Xj (i ≠ j), the correlation between any two εi and εj is zero.
• Cov(εi, εj | Xi, Xj) = E{[εi − E(εi)] | Xi}{[εj − E(εj)] | Xj} = E(εi | Xi)(εj | Xj) = 0
CLRM – ASSUMPTION 6
Zero covariance between εi and Xi or
E(εiXi) = 0
CLRM – ASSUMPTION 7
The number of observations ‘n’ must be
greater than the number of parameters
to be estimated
That is, N > P
CLRM – ASSUMPTION 8
Variability in X values: the X values in a given sample must not all be the same.
Thus, Var(X) must be a finite positive number.
CLRM – ASSUMPTION 9
The regression model should be
correctly specified
Therefore, there is NO SPECIFICATION
BIAS or ERROR in the model used for
empirical analysis
CLRM – ASSUMPTION 10
There is NO PERFECT MULTI-COLLINEARITY:
there are no perfect linear relationships among the explanatory variables.
A violation would be, for example, X2 = A + B·X1
