Selection and Control of Forecasting Methods
Topic 10
Course: Supply Chain: Logistics

Selecting a Forecasting Method
• The choice of a forecasting method should be based on the following considerations:
– Forecasting horizon (the validity of extrapolating past data)
– Availability and quality of data
– Lead times (time pressures)
– Cost of forecasting (understanding the value of forecasting accuracy)
– Forecasting flexibility (the amenability of the model to revision; quite often there is a trade-off between filtering out noise and the model's ability to respond to abrupt and/or drastic changes)

Data Pattern
• A time series is likely to contain some or all of the following
components:
– Trend
– Seasonal
– Cyclical
– Irregular

Data Pattern
• Trend in a time series is the long-term change in the level of the data, i.e. observations grow or decline over an extended period of time.
– Positive trend
• When the series moves upward over an extended period of time
– Negative trend
• When the series moves downward over an extended period of time
– Stationary
• When there is neither a positive nor a negative trend.
Data Pattern
• A seasonal pattern in a time series is a regular variation in the level of the data that repeats itself at the same time every year.
– Examples:
• Retail sales for many products tend to peak in November and December.
• Housing starts are stronger in spring and summer than in fall and winter.
Data Pattern
• A cyclical pattern in a time series appears as wavelike upward and downward movements of the data around the long-term trend.
• Cyclical fluctuations are of longer duration and are less regular than seasonal fluctuations.
• The causes of cyclical fluctuations are usually less apparent than those of seasonal variations.
Data Pattern
• The irregular pattern in a time series consists of the fluctuations that are not part of the other three components.
• These are the most difficult to capture in a forecasting model.
Example: GDP, in 1996 Dollars
Example: Quarterly data on private housing starts
Example: U.S. billings of the Leo Burnett advertising agency
Data Patterns and Model Selection
• The pattern that exists in the data is an important consideration in determining which forecasting techniques are appropriate.
• To forecast stationary data, use the available history to estimate its mean value; this estimate is the forecast for future periods.
• The estimate can be updated as new information becomes available.
• Updating techniques are useful when the initial estimate is unreliable or the stability of the average is in question.

Data Patterns and Model Selection
• Forecasting techniques used for stationary time series data are:
– Naive methods
– Simple averaging methods
– Moving averages
– Simple exponential smoothing (sketched below)
– Autoregressive moving average (ARMA)
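As a rough illustration of the simplest of these methods, here is a minimal simple exponential smoothing sketch in Python; the demand values, the smoothing constant alpha = 0.3, and the function name are made-up choices for the example, not part of the notes.

```python
import numpy as np

def simple_exponential_smoothing(y, alpha=0.3):
    """One-step-ahead forecasts for a (roughly) stationary series.

    alpha (0 < alpha <= 1) controls how quickly older observations are
    discounted; 0.3 is an arbitrary illustrative value.
    """
    forecasts = [y[0]]  # initialise the first forecast with the first observation
    for t in range(1, len(y)):
        # new forecast = alpha * latest actual + (1 - alpha) * previous forecast
        forecasts.append(alpha * y[t - 1] + (1 - alpha) * forecasts[-1])
    return np.array(forecasts)

demand = np.array([102, 98, 101, 97, 103, 99, 100, 104])
print(simple_exponential_smoothing(demand))
```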

Data Patterns and Model Selection
• Methods used for time series data with trend are:
– Moving averages
– Holt's linear exponential smoothing (sketched below)
– Simple regression
– Growth curve
– Exponential models
– Time series decomposition
– Autoregressive integrated moving average (ARIMA)
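To illustrate Holt's linear exponential smoothing from the list above, here is a minimal Python sketch; the sales figures, the smoothing constants alpha = 0.2 and beta = 0.1, and the crude initialisation are arbitrary illustrative choices.

```python
import numpy as np

def holt_linear(y, alpha=0.2, beta=0.1, horizon=4):
    """Holt's two-parameter (level + trend) exponential smoothing sketch."""
    level, trend = y[0], y[1] - y[0]  # crude initialisation from the first two points
    for t in range(1, len(y)):
        prev_level = level
        # update level and trend with the standard Holt recursions
        level = alpha * y[t] + (1 - alpha) * (prev_level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    # h-step-ahead forecasts extrapolate the final level and trend
    return np.array([level + h * trend for h in range(1, horizon + 1)])

sales = np.array([120, 125, 131, 138, 142, 150, 157, 165])
print(holt_linear(sales))
```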

Data Patterns and Model Selection
• For time series data with a seasonal component, the goal is to estimate seasonal indexes from historical data.
• These indexes are used to incorporate seasonality in the forecasts or to remove its effect from the observed values.
• Forecasting methods to be considered for this type of data are:
– Winters' exponential smoothing (sketched below)
– Time series multiple regression
– Autoregressive integrated moving average (ARIMA)
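For the seasonal case, the following is a hedged sketch of Winters'-style exponential smoothing, assuming the statsmodels library is available; the quarterly figures and the additive trend/seasonal settings are illustrative assumptions, not values from the notes.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Made-up quarterly series with an upward trend and a third-quarter peak.
quarterly = np.array([58, 72, 95, 80,
                      63, 79, 104, 88,
                      70, 86, 112, 96], dtype=float)

# Additive trend and additive seasonality, seasonal period = 4 quarters.
fit = ExponentialSmoothing(quarterly, trend="add",
                           seasonal="add", seasonal_periods=4).fit()

print(fit.forecast(4))  # forecasts for the next four quarters
```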


Data Patterns and Model Selection
• Cyclical time series data show wavelike fluctuations around the trend that tend to repeat.
• They are difficult to model because their patterns are not stable.
• Because of the irregular behavior of cycles, analyzing this type of data requires finding coincident or leading economic indicators.
Data Patterns and Model Selection
• Forecasting methods to be considered for this type of data are:
– Classical decomposition methods
– Econometric models
– Multiple regression
– Autoregressive integrated moving average (ARIMA)

Example: GDP, in 1996 Dollars
• For GDP, which has a trend and a cycle but no seasonality, the
following might be appropriate:
– Holt’s exponential smoothing
– Linear regression trend
– Causal regression
– Time series decomposition

Example: Quarterly Data on Private Housing Starts
• Private housing starts have a trend, seasonality, and a cycle. The likely forecasting models are:
– Winters' exponential smoothing
– Linear regression trend with seasonal adjustment
– Causal regression
– Time series decomposition
Example: U.S. Billings of the Leo Burnett Advertising Agency
• For U.S. billings of the Leo Burnett advertising agency, there is a non-linear trend with no seasonality and no cycle; therefore the models appropriate for this data set are:
– Non-linear regression trend
– Causal regression

Autocorrelation
• The correlation coefficient is a summary statistic that measures the extent of the linear relationship between two variables. As such, it can be used to identify explanatory relationships.
• Autocorrelation is a comparable measure that serves the same purpose for a single variable measured over time.
Autocorrelation
• In evaluating time series data, it is useful to look at the correlation between successive observations over time.
• This measure of correlation is called autocorrelation and may be calculated as follows:

r_k = \frac{\sum_{t=k+1}^{n} (y_t - \bar{y})(y_{t-k} - \bar{y})}{\sum_{t=1}^{n} (y_t - \bar{y})^2}

where
– r_k = autocorrelation coefficient for a k-period lag
– \bar{y} = mean of the time series
– y_t = value of the time series at period t
– y_{t-k} = value of the time series k periods before period t
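The formula translates directly into code. Below is a minimal Python sketch of the r_k calculation; the example series is made up purely to show the arithmetic.

```python
import numpy as np

def autocorrelation(y, k):
    """Lag-k autocorrelation coefficient r_k as defined above (k >= 1)."""
    y = np.asarray(y, dtype=float)
    y_bar = y.mean()
    # numerator pairs y_t with y_{t-k}; denominator is the total sum of squares
    numerator = np.sum((y[k:] - y_bar) * (y[:-k] - y_bar))
    denominator = np.sum((y - y_bar) ** 2)
    return numerator / denominator

series = [12, 15, 14, 18, 17, 21, 20, 24]  # made-up data
print([round(autocorrelation(series, k), 3) for k in (1, 2, 3)])
```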
Autocorrelation
• Autocorrelation coefficients for different time lags can be used to answer the following questions about a time series.
– Are the data random?
• In this case the autocorrelations between y_t and y_{t-k} for any lag are close to zero. The successive values of a time series are not related to each other.

Correlograms: An Alternative Method of Data Exploration
– Is there a trend?
• If the series has a trend, y_t and y_{t-k} are highly correlated.
• The autocorrelation coefficients are significantly different from zero for the first few lags and then gradually drop toward zero.
• The autocorrelation coefficient for lag 1 is often very large (close to 1).
• A series that contains a trend is said to be non-stationary.

Correlograms: An Alternative Method of Data Exploration
– Is there a seasonal pattern?
• If a series has a seasonal pattern, there will be a
significant autocorrelation coefficient at the seasonal
time lag or multiples of the seasonal lag.
• The seasonal lag is 4 for quarterly data and 12 for
monthly data.

Correlograms: An Alternative Method of Data Exploration
– Is it stationary?
• A stationary time series is one whose basic statistical
properties, such as the mean and variance, remain
constant over time.
• Autocorrelation coefficients for a stationary series
decline to zero fairly rapidly, generally after the second
or third time lag.

Correlograms: An Alternative Method of Data Exploration
• To determine whether the autocorrelation at lag k is significantly different from zero, the following hypotheses and rule of thumb may be used.
• H_0: \rho_k = 0,  H_a: \rho_k \neq 0
• For any k, reject H_0 if

|r_k| > \frac{2}{\sqrt{n}}

• where n is the number of observations.
• This rule of thumb is for \alpha = 5%.
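A small Python sketch of this rule of thumb, applied to the EXRJ autocorrelations reported later in these notes (n = 24 observations); the helper name significant_lags is an illustrative choice.

```python
import numpy as np

def significant_lags(acf_values, n):
    """Return the lags whose |r_k| exceeds the 2/sqrt(n) rule of thumb."""
    threshold = 2 / np.sqrt(n)
    return [k + 1 for k, r in enumerate(acf_values) if abs(r) > threshold]

# First six EXRJ autocorrelations (from the example later in the notes), n = 24.
acf = [0.8157, 0.5383, 0.2733, 0.0340, -0.1214, -0.1924]
print(significant_lags(acf, n=24))  # only lags 1 and 2 exceed 2/sqrt(24) = 0.408
```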
Correlograms: An Alternative Method of Data Exploration
• The hypothesis test developed to determine whether a particular autocorrelation coefficient is significantly different from zero is:
• Hypotheses: H_0: \rho_k = 0,  H_a: \rho_k \neq 0
• Test statistic:

t = \frac{r_k - 0}{1 / \sqrt{n - k}}
Correlograms: An Alternative Method of Data Exploration
• Reject H_0 if

t > t_{n-k;\,\alpha/2} \quad \text{or} \quad t < -t_{n-k;\,\alpha/2}
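A short Python sketch of this test, assuming SciPy is available for the t critical value; it reproduces the lag-12 EXRJ check worked out later in the notes (r_12 = -0.2595, n = 24, k = 12).

```python
from scipy import stats

def lag_k_t_test(r_k, n, k, alpha=0.05):
    """t test for H0: rho_k = 0 using the statistic defined above."""
    t_stat = (r_k - 0) / (1 / (n - k) ** 0.5)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - k)  # t_{n-k; alpha/2}
    return t_stat, t_crit, abs(t_stat) > t_crit    # True means reject H0

print(lag_k_t_test(r_k=-0.2595, n=24, k=12))  # roughly (-0.899, 2.179, False)
```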
Correlograms: An Alternative Method of Data Exploration
• The plot of the autocorrelation function (ACF) versus time lag is called a correlogram.
• The horizontal scale is the time lag.
• The vertical axis is the autocorrelation coefficient.
• Patterns in a correlogram are used to analyze key features of the data.
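A minimal correlogram sketch in Python, assuming statsmodels and matplotlib are available; the series values are made up purely to produce a plot.

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

# Made-up trending series, 24 observations.
series = np.array([12, 15, 14, 18, 17, 21, 20, 24, 23, 27, 26, 30,
                   29, 33, 32, 36, 35, 39, 38, 42, 41, 45, 44, 48])

# Bars are the autocorrelation coefficients r_k; the shaded band marks the
# approximate significance limits.
plot_acf(series, lags=12)
plt.show()
```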

Example: Mobile Home Shipments
• Correlogram for the mobile home shipments series. Note that this is quarterly data.
[Figure: ACF for lags 1–12 with upper and lower significance limits]
Example: Japanese Exchange Rate
• As the world's economy becomes increasingly interdependent, various exchange rates between currencies have become important in making business decisions. For many U.S. businesses, the Japanese exchange rate (in yen per U.S. dollar) is an important decision variable. A time series plot of the Japanese-yen per U.S.-dollar exchange rate is shown below. On the basis of this plot, would you say the data are stationary? Is there any seasonal component in this time series?

Example: Japanese Exchange Rate
[Figure: time series plot of the Japanese exchange rate, EXRJ (yen per U.S. dollar), by month]
Example: Japanese Exchange Rate
• Here is the autocorrelation structure for EXRJ.
• With a sample size of n = 24, the critical value is

\frac{2}{\sqrt{n}} = \frac{2}{\sqrt{24}} = 0.408

• This is the approximate 95% critical value for rejecting the null hypothesis of zero autocorrelation at lag k.

Lag  ACF
1    .8157
2    .5383
3    .2733
4    .0340
5    -.1214
6    -.1924
7    -.2157
8    -.1978
9    -.1215
10   -.1217
11   -.1823
12   -.2593
Example: Japanese Exchange Rate
• The correlogram for EXRJ is given below.
[Figure: ACF for lags 1–12 with upper and lower significance limits]
Example: Japanese Exchange Rate
• Since the autocorrelation coefficients fall below the critical value after just two periods, we can conclude that there is no trend in the data.
Example: Japanese Exchange Rate
• To check for seasonality at \alpha = .05, the hypotheses are:
– H_0: \rho_{12} = 0,  H_a: \rho_{12} \neq 0
• Test statistic:

t = \frac{r_k - 0}{1 / \sqrt{n - k}} = \frac{-.2595}{1 / \sqrt{24 - 12}} = -0.899

• Reject H_0 if

t > t_{n-k;\,\alpha/2} or t < -t_{n-k;\,\alpha/2}, where t_{n-k;\,\alpha/2} = t_{12;\,0.025} = 2.179
Example: Japanese Exchange Rate
• Since t = -0.899 > -t_{12;\,0.025} = -2.179, we do not reject H_0; therefore seasonality does not appear to be an attribute of the data.
ACF of Forecast Error
• The autocorrelation function of the forecast errors is very
useful in determining if there is any remaining pattern in the
errors (residuals) after a forecasting model has been applied.
• This is not a measure of accuracy, but rather can be used to
indicate if the forecasting method could be improved.
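A minimal sketch of this residual diagnostic, assuming statsmodels and matplotlib are available; the actuals and one-step-ahead forecasts below are made-up numbers standing in for a fitted model's output.

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

# Hypothetical actual demand and the forecasts some model produced for it.
actual   = np.array([102, 98, 101, 97, 103, 99, 100, 104, 101, 98, 102, 100])
forecast = np.array([100, 101, 99, 100, 98, 102, 99, 100, 103, 101, 99, 101])

residuals = actual - forecast
# Spikes outside the significance band suggest a pattern the model missed.
plot_acf(residuals, lags=6)
plt.show()
```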
Applying a Quantitative Forecasting Method
• Determine the method: time series or causal model.
• Collect data: independent variables and observed demand.
• Fit an analytical model to the data, F(t+1) = f(X1, X2, …):
– Determine the functional form
– Estimate parameters
– Validate
• Use the model to forecast future demand.
• Monitor the error: e(t+1) = D(t+1) - F(t+1)
• If the model remains valid, continue forecasting; if not, update the model parameters and repeat.
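One hedged way to code the monitor/update decision in this loop is sketched below; the window, tolerance, and data are all made-up choices, not part of the notes.

```python
import numpy as np

def model_still_valid(demand, forecasts, window=6, tolerance=1.5):
    """Crude validity check on the absolute forecast errors |D(t) - F(t)|.

    The model is flagged for re-estimation when the mean absolute error of
    the most recent `window` periods exceeds `tolerance` times the mean
    absolute error of the earlier periods. Both parameters are arbitrary.
    """
    abs_errors = np.abs(np.asarray(demand) - np.asarray(forecasts))
    recent, earlier = abs_errors[-window:], abs_errors[:-window]
    return not (earlier.size and recent.mean() > tolerance * earlier.mean())

demand    = [100, 103, 99, 102, 101, 98, 104, 100, 115, 118, 121, 125]
forecasts = [101, 100, 102, 100, 101, 100, 101, 101, 101, 102, 103, 104]
print(model_still_valid(demand, forecasts))  # False: update the model parameters
```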
