
Chapter 1:

• understand how a linear model is defined via a systematic component and a random
error;

• be aware of the assumptions made for the random error and the implications of these
assumptions for the distribution of the response vector;

• be able to describe a model both in vector/matrix notation and in terms of its model
equation for the jth unit of observation;

• be able to interpret the estimated coefficients of a model (see the sketch below).
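A minimal sketch of this setup in standard notation (the symbols below follow common convention and are not taken verbatim from the notes):

\[
\mathbf{y} = X\boldsymbol{\beta} + \boldsymbol{\varepsilon}, \qquad \boldsymbol{\varepsilon} \sim N_n(\mathbf{0},\, \sigma^2 I_n),
\]

so that \(\mathbf{y} \sim N_n(X\boldsymbol{\beta},\, \sigma^2 I_n)\): the systematic component is \(X\boldsymbol{\beta}\) and the random error is \(\boldsymbol{\varepsilon}\). For the jth unit of observation,

\[
y_j = \beta_0 + \beta_1 x_{j1} + \dots + \beta_{p-1} x_{j,p-1} + \varepsilon_j, \qquad \varepsilon_j \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2).
\]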

Chapter 2:

• know how to incorporate categorical predictors into a linear model, both theoretically
and in R;
• be able to interpret the estimated coefficients of such a model (see the R sketch below).
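A minimal R sketch, assuming a data frame dat with a quantitative response y and a categorical predictor group (all three names are hypothetical):

# Encode the predictor as a factor so lm() builds dummy (treatment-contrast) columns
dat$group <- factor(dat$group)
fit <- lm(y ~ group, data = dat)
summary(fit)

# Interpretation: (Intercept) estimates the mean response in the baseline level
# of group; each remaining coefficient estimates the difference in mean response
# between its level and the baseline level.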
Chapter 3:

• understand how the least squares estimate for the parameter vector of a linear model
is defined;

• be able to derive a closed form expression for the least squares estimate;

• be able to compute the least squares estimate for the parameter vector of a linear
model;
• understand how a model re-parameterisation can be described mathematically and be
aware of the effect of a re-parameterisation.

Example of a reparameterisation:
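A standard example (not taken from the notes) is centring a covariate:

\[
y_j = \beta_0 + \beta_1 x_j + \varepsilon_j
\quad\Longleftrightarrow\quad
y_j = \alpha_0 + \alpha_1 (x_j - \bar{x}) + \varepsilon_j,
\qquad \alpha_0 = \beta_0 + \beta_1 \bar{x}, \quad \alpha_1 = \beta_1.
\]

In general, a re-parameterisation replaces the design matrix \(X\) by \(X^* = XA\) for an invertible matrix \(A\), so the parameter vector becomes \(\boldsymbol{\beta}^* = A^{-1}\boldsymbol{\beta}\); the fitted values \(X\hat{\boldsymbol{\beta}} = X^*\hat{\boldsymbol{\beta}}^*\) are unchanged.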


Chapter 4:

• be aware that linear models are linear in the parameters but not necessarily in the
explanatory variables;
• be able to state the model equation and design matrix for a polynomial regression
model (sketched after this list);

• be aware of linear models using transformed explanatory variables;


• know how to extend a parallel lines regression model with one categorical and one
quantitative predictor to a model with unrelated regression lines by introducing an
interaction between the factor and the covariate.
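For the polynomial regression bullet above, a sketch for degree 2 in a single covariate:

\[
y_j = \beta_0 + \beta_1 x_j + \beta_2 x_j^2 + \varepsilon_j,
\qquad
X = \begin{pmatrix}
1 & x_1 & x_1^2 \\
\vdots & \vdots & \vdots \\
1 & x_n & x_n^2
\end{pmatrix}.
\]

The model is nonlinear in \(x\) but linear in \((\beta_0, \beta_1, \beta_2)\), which is all that "linear model" requires.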

Example of an interaction model in R:
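A minimal R sketch, assuming a data frame dat with response y, covariate x and factor group (hypothetical names):

# Parallel lines: common slope for x, separate intercepts for the levels of group
fit_parallel <- lm(y ~ x + group, data = dat)

# Unrelated lines: x * group expands to x + group + x:group, so each level of
# group gets its own intercept and its own slope
fit_separate <- lm(y ~ x * group, data = dat)

# Each x:group coefficient is the difference in slope relative to the baseline level
anova(fit_parallel, fit_separate)  # F-test: are the extra slopes needed?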

Chapter 5:
• be aware of the linear model assumptions;

• be able to interpret residual and scale-location plots;

How to check whether the assumptions hold:

A plot of the residuals against the fitted values or the explanatory variables should show no distinct patterns: the residuals scatter randomly around a horizontal line with constant variation.

A scale-location plot can be used to identify violations of the constant variance assumption.
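In R these diagnostics come directly from a fitted lm object (fit is a hypothetical name):

plot(fit, which = 1)  # residuals vs fitted values: look for random scatter
                      # around the zero line with no pattern
plot(fit, which = 3)  # scale-location plot: sqrt(|standardised residuals|)
                      # vs fitted values; an upward trend suggests
                      # non-constant variance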

• know the role of transformations in linear modelling, in particular the log-transformation;

• understand how to apply transformations to address model violations;

Transformations may resolve violations of the model assumptions:

Transformations of the response and/or the explanatory variables may be used to meet the assumption of linearity.

A transformation of the response is often used to address certain forms of heteroscedasticity.

Transformations may also reduce the influence of unusual observations.
• be able to interpret models with log-transformed variables (see the R sketch below).
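A minimal R sketch of models with log-transformed variables (names hypothetical; y and x assumed positive):

fit_loglog <- lm(log(y) ~ log(x), data = dat)
# The slope of log(x) is an elasticity: a 1% increase in x is associated with
# an approximate (slope)% change in y.

fit_loglin <- lm(log(y) ~ x, data = dat)
# A one-unit increase in x multiplies the (geometric mean of the) response by
# exp(slope), i.e. roughly a 100*slope % change when the slope is small.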

Summary of Chapter 6:

• know the definition of the least squares estimator β̂;

• be able to derive the expectation and variance of β̂;

• know the definition of the hat matrix H;


• be able to derive the properties of H and (I_n − H);

• be able to derive the variances and covariances of the residuals and the fitted values (the key results are collected below).
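The key results behind these bullets, assuming \(X\) is \(n \times p\) with full column rank:

\[
\hat{\boldsymbol{\beta}} = (X^\top X)^{-1} X^\top \mathbf{y}, \qquad
E(\hat{\boldsymbol{\beta}}) = \boldsymbol{\beta}, \qquad
\operatorname{Var}(\hat{\boldsymbol{\beta}}) = \sigma^2 (X^\top X)^{-1},
\]
\[
H = X (X^\top X)^{-1} X^\top, \qquad
H = H^\top = H^2, \qquad
(I_n - H) = (I_n - H)^\top = (I_n - H)^2,
\]
\[
\hat{\mathbf{y}} = H \mathbf{y}, \quad \operatorname{Var}(\hat{\mathbf{y}}) = \sigma^2 H, \qquad
\hat{\boldsymbol{\varepsilon}} = (I_n - H)\, \mathbf{y}, \quad \operatorname{Var}(\hat{\boldsymbol{\varepsilon}}) = \sigma^2 (I_n - H).
\]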
Summary of Chapter 7:





• know what is meant by a high leverage data point, a regression outlier and an influential data point;

• be able to define leverage, standardised and studentised residuals, Cook's distance and DFFITS;

• be familiar with various plots illustrating the above measures and associated rules of thumb (see the R sketch below);

• know how to handle influential data points.
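All of these measures are available in base R for a fitted lm object (fit is hypothetical; the cut-offs quoted are the common textbook rules of thumb):

n <- nrow(model.frame(fit))   # number of observations
p <- length(coef(fit))        # number of parameters

lev  <- hatvalues(fit)        # leverages h_jj; flag points with h_jj > 2*p/n
rstd <- rstandard(fit)        # standardised residuals; |value| > 2 suggests an outlier
rstu <- rstudent(fit)         # studentised (leave-one-out) residuals
cd   <- cooks.distance(fit)   # Cook's distance; flag values > 4/n (or close to 1)
dff  <- dffits(fit)           # DFFITS; flag |value| > 2*sqrt(p/n)

plot(fit, which = 4)          # Cook's distance by observation
plot(fit, which = 5)          # standardised residuals vs leverage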

Summary of Chapter 8:
Summary of Chapter 9:

Proof of the least squares estimate:
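A standard derivation, assuming \(X\) has full column rank. Minimise the residual sum of squares

\[
S(\boldsymbol{\beta}) = (\mathbf{y} - X\boldsymbol{\beta})^\top (\mathbf{y} - X\boldsymbol{\beta})
= \mathbf{y}^\top \mathbf{y} - 2\, \boldsymbol{\beta}^\top X^\top \mathbf{y} + \boldsymbol{\beta}^\top X^\top X \boldsymbol{\beta}.
\]

Setting the gradient to zero yields the normal equations:

\[
\frac{\partial S}{\partial \boldsymbol{\beta}} = -2\, X^\top \mathbf{y} + 2\, X^\top X \boldsymbol{\beta} = \mathbf{0}
\quad\Longrightarrow\quad
X^\top X\, \boldsymbol{\beta} = X^\top \mathbf{y}.
\]

Since \(X\) has full column rank, \(X^\top X\) is invertible, so

\[
\hat{\boldsymbol{\beta}} = (X^\top X)^{-1} X^\top \mathbf{y},
\]

and the Hessian \(2\, X^\top X\) is positive definite, confirming that this stationary point is the minimum.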

