
Regression Analysis

Regression Analysis: Unified Concepts, Practical Applications, and Computer Implementation

Bruce L. Bowerman, Richard T. O'Connell, and Emily S. Murphree

Regression Analysis: Unified Concepts, Practical Applications, and Computer Implementation
Copyright © Business Expert Press, LLC, 2015.
All rights reserved. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted in any form or by any
means (electronic, mechanical, photocopy, recording, or any other),
except for brief quotations, not to exceed 400 words, without the prior
permission of the publisher.
First published in 2015 by
Business Expert Press, LLC
222 East 46th Street, New York, NY 10017
ISBN-13: 978-1-60649-950-4 (paperback)
ISBN-13: 978-1-60649-951-1 (e-book)
Business Expert Press Quantitative Approaches to Decision Making
Collection ISSN: 2163-9515 (print)
Collection ISSN: 2163-9582 (electronic)
Cover and interior design by Exeter Premedia Services Private Ltd.,
Chennai, India
First edition: 2015
10 9 8 7 6 5 4 3 2 1
Printed in the United States of America.

Regression Analysis: Unified Concepts, Practical Applications, and Computer
Implementation is a concise and innovative book that gives a complete
presentation of applied regression analysis in approximately one-half the
space of competing books. With only the modest prerequisite of a basic
(non-calculus) statistics course, this text is appropriate for the widest
possible audience.

logistic regression, model building, model diagnostics, multiple regression, regression model, simple linear regression, statistical inference, time series regression

Chapter 1    An Introduction to Regression Analysis..........................1
Chapter 2    Simple and Multiple Regression: An Integrated Approach.....5
Chapter 3    More Advanced Regression Models...............................97
Chapter 4    Model Building and Model Diagnostics......................159
Appendix A   Statistical Tables..........................................253


Regression Analysis: Unified Concepts, Practical Applications, and Computer
Implementation is a concise and innovative book that gives a complete
presentation of applied regression analysis in approximately one-half the
space of competing books. With only the modest prerequisite of a basic
(non-calculus) statistics course, this text is appropriate for the widest possible audience, including college juniors, seniors, and first-year graduate
students in business, the social sciences, the sciences, and statistics, as
well as professionals in business and industry. The reason that this text
is appropriate for such a wide audience is that it takes a unique and
integrative approach to teaching regression analysis. Most books, after a
short chapter introducing regression, cover simple linear regression and
multiple regression in roughly four chapters by beginning with a chapter
reviewing basic statistical concepts and then having chapters on simple
linear regression, matrix algebra, and multiple regression. In contrast, this
book, after a short chapter introducing regression, covers simple linear
regression and multiple regression in a single cohesive chapter, Chapter 2, by efficiently integrating the discussion of the two techniques. In addition, the same Chapter 2 teaches both the necessary basic statistical concepts (for example, hypothesis testing) and the necessary matrix algebra
concepts as they are needed in teaching regression. We believe that this
approach avoids the needless repetition of traditional approaches and
does the best job of getting a wide variety of readers (who might be students with different backgrounds in the same class) to the same level of understanding.

Chapter 3 continues the integrative approach of the book by discussing more advanced regression models, including models using squared
and interaction terms, models using dummy variables, and logistic regression models. The book concludes with Chapter 4, which organizes the
techniques of model building, model diagnosis, and model improvement
into a cohesive six-step procedure. Whereas many competing texts spread
such modeling techniques over a fairly large number of chapters that can


seem unrelated to the novice, the six-step procedure organizes both standard and more advanced modeling techniques into a unified presentation. In addition, each chapter features motivating examples (many real
world, all realistic) and concludes with a section showing how to use SAS
followed by a set of exercises. Excel, MINITAB, and SAS outputs are
used throughout the text, and the book's website contains more exercises
for each chapter. The book's website also houses Appendices B, C, and
D. Appendix B gives careful derivations of most of the applied results in
the text. These derivations are referenced in the main text as the applied
results are discussed. Appendix C includes an applied discussion extending the basic treatment of logistic regression given in the main text. This
extended discussion covers binomial logistic regression, generalized (multiple category) logistic regression, and Poisson regression. Appendix D
extends the basic treatment of modeling time series data given in the main
text. The Box-Jenkins methodology and its use in regression analysis are discussed.

Author Bruce Bowerman would like to thank Professor David Nickerson of the University of Central Florida for motivating the writing of this book. All three authors would like to thank editor Scott Isenberg, production manager Destiny Hadley, and permissions editor Marcy Schneidewind, as well as the fine people at Exeter, for their hard work. Most of all, we are indebted to our families for their love and encouragement over the years.
Bruce L. Bowerman
Richard T. O'Connell
Emily S. Murphree


An Introduction to Regression Analysis
1.1 Observational Data and Experimental Data
In many statistical studies a variable of interest, called the response variable
(or dependent variable), is identified. Data are then collected that tell us
about how one or more factors might influence the variable of interest.
If we cannot control the factor(s) being studied, we say that the data are
observational. For example, suppose that a natural gas company serving
a city collects data to study the relationship between the city's weekly
natural gas consumption (the response variable) and two factors: the
average hourly atmospheric temperature and the average hourly wind
velocity in the city during the week. Because the natural gas company
cannot control the atmospheric temperatures or wind velocities in the
city, the data collected are observational.
If we can control the factors being studied, we say that the data
are experimental. For example, suppose that an oil company wishes
to study how three different gasoline types (A, B, and C) affect the
mileage obtained by a popular midsized automobile model. Here the
response variable is gasoline mileage, and the company will study a
single factor: gasoline type. Since the oil company can control which
gasoline type is used in the midsized automobile, the data that the oil
company will collect are experimental.

1.2 Regression Analysis and Its Objectives

Regression analysis is a statistical technique that can be used to analyze
both observational and experimental data, and it tells us how the factors
under consideration might affect the response (dependent) variable. In
regression analysis the factors that might affect the dependent variable are


most often referred to as independent, or predictor, variables. We denote

the dependent variable in regression analysis by the symbol y, and we
denote the independent variables that might affect the dependent variable
by the symbols x1 , x2 , . . . , xk . The objective of regression analysis is to
build a regression model or prediction equation, an equation relating y to
x1 , x2 , . . . , xk . We use the model to describe, predict, and control y on the
basis of the independent variables. When we predict y for a particular set
of values of x1 , x2 , . . . , xk , we will wish to place a bound on the error of
prediction. The goal is to build a regression model that produces an error
bound that will be small enough to meet our needs.
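As a sketch of these ideas, the following Python snippet computes the least squares point estimates for a simple linear regression of y on a single x, then a point prediction with a rough 95% prediction interval. The data values are hypothetical placeholders, not taken from this book, and the t multiplier 2.447 (for n - 2 = 6 degrees of freedom) is hard-coded for simplicity:

```python
import numpy as np

# Hypothetical data: weekly natural gas consumption y (MMcf) versus
# average hourly temperature x (deg F) -- illustrative values only.
x = np.array([28.0, 28.0, 32.5, 39.0, 45.9, 57.8, 58.1, 62.5])
y = np.array([12.4, 11.7, 12.4, 10.8, 9.4, 9.5, 8.0, 7.5])
n = len(x)

# Least squares point estimates b0, b1 for the model y = b0 + b1*x + error
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

# Standard error s based on the residual sum of squares
s = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2))

# Point prediction at a new temperature x0, with a 95% prediction
# interval (t multiplier 2.447 for 6 degrees of freedom)
x0 = 40.0
y_hat = b0 + b1 * x0
half_width = 2.447 * s * np.sqrt(
    1 + 1 / n + (x0 - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)
)
print(y_hat, y_hat - half_width, y_hat + half_width)
```

The half-width of the interval is the "bound on the error of prediction" described above; whether it is small enough depends on the practical needs of the analyst.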
A regression model can employ quantitative independent variables, or
qualitative independent variables, or both. A quantitative independent variable assumes numerical values corresponding to points on the real line.
A qualitative independent variable is nonnumerical. The levels of such a
variable are defined by describing them. As an example, suppose that we
wish to build a regression model relating the dependent variable
y = demand for a consumer product
to the independent variables
x1 = the price of the product,
x2 = the average industry price of competitors' similar products,
x3 = advertising expenditures made to promote the product, and
x4 = the type of advertising campaign (television, radio, print media,
etc.) used to promote the product.
Here x1, x2, and x3 are quantitative independent variables. In contrast,
x4 is a qualitative independent variable, since we would define the levels
of x4 by describing the different advertising campaigns. After constructing an appropriate regression model relating y to x1, x2, x3, and x4, we
would use the model
1. to describe the relationships between y and x1, x2, x3, and x4. For
instance, we might wish to describe the effect that increasing advertising expenditure has on the demand for the product. We might
also wish to determine whether this effect depends upon the price
of the product;


2. to predict future demands for the product on the basis of future

values of x1, x2, x3, and x4;
3. to control future demands for the product by controlling the price of
the product, advertising expenditures, and the types of advertising
campaigns used.
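A minimal Python sketch of such a model can make the quantitative/qualitative distinction concrete. All data values below are hypothetical, and the qualitative campaign variable is encoded with 0/1 dummy variables (with "print" as the baseline level), a technique the book develops in later chapters:

```python
import numpy as np

# Hypothetical demand data: price x1, competitors' average price x2,
# advertising expenditure x3, and campaign type x4.
price = np.array([3.85, 3.75, 3.70, 3.60, 3.80, 3.85, 3.90, 3.70])
comp_price = np.array([3.80, 4.00, 4.30, 3.70, 3.85, 3.80, 3.75, 3.90])
advert = np.array([5.5, 6.8, 7.2, 5.5, 7.0, 6.5, 6.3, 7.1])
campaign = np.array(["TV", "TV", "radio", "print",
                     "print", "radio", "TV", "radio"])
demand = np.array([7.38, 8.51, 9.52, 7.50, 9.33, 8.28, 8.75, 9.24])

# Encode the qualitative variable x4 with dummy (0/1) variables,
# treating "print" as the baseline level.
d_tv = (campaign == "TV").astype(float)
d_radio = (campaign == "radio").astype(float)

# Design matrix with an intercept column; least squares fit via lstsq.
X = np.column_stack([np.ones(len(demand)), price, comp_price,
                     advert, d_tv, d_radio])
b, *_ = np.linalg.lstsq(X, demand, rcond=None)
print(b)  # last two entries: TV and radio effects relative to print
```

The quantitative variables enter the design matrix directly as numerical columns, while each non-baseline level of the qualitative variable gets its own 0/1 column.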
Note that we cannot control the price of competitors' products, nor
can we control competitors' advertising expenditures or other factors that
affect demand. Therefore we cannot perfectly control or predict future
demand.
We develop a regression model by using observed values of the dependent and independent variables. If these values are observed over time,
the data are called time series data. On the other hand, if these values are
observed at one point in time, the data are called cross-sectional data. For
example, suppose we observe values of the demand for a product, the
price of the product, and the advertising expenditures made to promote
the product. If we observe these values in one sales region over 30 consecutive months, the data are time series data. If we observe these values in
thirty different sales regions for a particular month of the year, the data
are cross-sectional data.
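To make the distinction concrete, the following small Python sketch builds both kinds of data sets; the demand numbers are randomly generated placeholders, not real observations:

```python
import numpy as np

rng = np.random.default_rng(7)

# Time series data: demand for ONE sales region observed over
# 30 consecutive months (hypothetical values).
months = ([f"2014-{m:02d}" for m in range(1, 13)]
          + [f"2015-{m:02d}" for m in range(1, 13)]
          + [f"2016-{m:02d}" for m in range(1, 7)])
ts_demand = rng.normal(8.5, 0.6, size=30)

# Cross-sectional data: demand in 30 DIFFERENT sales regions for a
# single month (hypothetical values).
regions = [f"region_{i}" for i in range(1, 31)]
cs_demand = rng.normal(8.5, 0.6, size=30)

# Same number of observations either way; what differs is whether the
# index runs over time (months) or over units (regions).
print(len(months), len(regions))  # 30 30
```

In both cases there are 30 observations of the same variables; only the sampling scheme, over time versus across units at one point in time, differs.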

Adjusted coefficient of determination,
Autocorrelated errors, 208-216
Autoregressive model, 211
Backward elimination, 172-174,
Biasing constant, 231
Bonferroni procedure, 134
Box-Jenkins methodology, 216

First-order autocorrelation, 208

F table, 254-255
Gauss-Markov theorem, 76
General logistic regression model, 136
Handling unequal variances, 188-194
Hildreth-Lu procedure, 216

Dependent (response) variable, 7

fractional power transformations of,
Distance value, 82
Dummy variables, 110-123, 137
Durbin-Watson d statistic, 258-259
Durbin-Watson statistic, 209, 210
Durbin-Watson test, 208-216

Independence assumption, diagnosing

and remedying violations of,
autocorrelation, 202-208
Durbin-Watson test, 208-216
modeling autocorrelated errors,
seasonal patterns, 202-208
trend, 202-208
Independent (predictor) variable, 7,
Indicator variables, 111
Individual t tests, 66-69
population correlation coefficient,
simple linear regression model, 77-78
using p-value, 72-76
using rejection point, 69-72
Individual value
point prediction of, 18
prediction interval for, 86
Interaction model, 120
Interaction terms, 97-110, 174-177
Interaction variables, 101
Intercept, 23
Inverse prediction in simple linear
regression, 89-91

Error term, 11
Experimental region, 18, 24
Explained deviation, 53

Lack of fit test, 197-202

Least squares line, 13, 14
Least squares plane, 36

Causal variable, 206

Chi-square p-values, 212
Cochrane-Orcutt procedure, 215
Coefficients of determination, 52-60
Conditional least squares method,
Confidence intervals, 81-89, 90
Constant variance assumption, 44,
Correct functional form, assumption
of, 184-187
Correlation, 57-60
Correlation matrix, 159
Cross-sectional data, 2
C-statistic, 169
Curvature, rate of, 98

264 Index

Least squares point estimates, 12-20

using matrix algebra, 27-43
Least squares prediction equation, 17
Leverage values, 217-220
Linear combination of regression
parameters, 130-133
Linear regression model, 26-27, 98
assumptions for, 44-45
Line of means, 10
Logistic regression, 135-142
Matrix algebra, least squares point
estimates using, 27-43
Maximum likelihood estimation, 136
Maximum likelihood method, 216
Mean square error, 48-52, 232
Mean value, confidence interval for,
Measures of variation, 52-56
Model assumptions, 44-47
Model building, 159-251
with squared and interaction terms,
Model comparison statistics, 165-177
Model diagnostics, 159-251
Multicollinearity, 159-165
Multiple linear regression model,
Negative autocorrelation, 207
No-interaction model, 121
Nonlinear regression, 197202
Nonparametric regression, 236
Normality assumption, 45, 187-188
Outlying and influential observations
Cook's D, Dfbetas, and Dffits,
dealing with outliers, 221
leverage values, 217-220
studentized residuals and
studentized deleted residuals,
Overall F-test, 60-66
Overall regression relationship, 63
Parabola, equation of, 97
Parallel slopes, 127
Parallel slopes model, 121

Partial coefficient of correlation, 129

Partial coefficient of determination,
Partial F-test, 123-131
Partial leverage residual plots,
Plane of means, 25
Point estimation, 39
Point prediction, 39
Population correlation coefficient,
Positive autocorrelation, 207
Prediction error, 14
Prediction intervals, 81-89, 90
p-value, 64-66, 72-76
Quadratic regression model, 97
Qualitative independent variable, 110
Quartic root transformation, 194
Regression analysis
cross-sectional data, 2
experimental data, 1
objectives of, 1-3
observational data, 1
qualitative independent variable, 2
quantitative independent variable, 2
Regression assumptions, 18, 45
Regression assumptions, diagnosing
and remedying violations of
constant variance assumption,
correct functional form, assumption
of, 184-187
dependent variable, fractional
power transformations of,
handling unequal variances,
nonlinear regression, 197-202
normality assumption, 187-188
residual analysis, 178-181
weighted least squares, 188-194
Regression model, geometric
interpretation of, 24-26
Regression parameters, 12, 23, 75,
statistical inference for linear
combination of, 130-133


Regression through the origin, 92

Regression trees, 236-239
Rejection point, 69-72
Residual analysis, 178-181
Residual error, 14
Residual plots, 178, 206
Response/dependent variable, 1
Ridge regression, 229-236
Robust regression procedures, 235
Robust regression technique, 229-236
Sampling, 47-48
Scatter diagram/scatter plot, 5, 7
Seasonal dummy variables, 205
Seasonal variations, 202
Shift parameter, 98
Simple coefficient of determination,
Simple linear regression, 9, 11-12
inverse prediction in, 89-91
Simple linear regression model, 5-12
Simultaneous confidence intervals,
Squared and interaction terms,
97-110, 174-177
Square root transformation, 194

Standard error, 48-52

Standardized regression model,
Standard normal distribution areas, 257
Stepwise regression, 172-174
t-distribution, 69
Time series data, 2
Time series variables, 206
Total deviation, 53
Total mean squared error, 169, 170
t statistics, 162
t-table, 256
Unbiased least squares point
estimates, 47-48
Unconditional least squares method,
Unequal slopes model, 120
Unexplained deviation, 53
Validating model, 226-227
Variance inflation factors, 161, 162
Wald Chi-Square, 142
Weighted least squares, 188-194