Planning for future events is an integral aspect of operating any business. Planning
allows actions to be taken that will meet lead time requirements and create a
competitive operation. The process of planning, however, assumes that forecasts of the
future are readily available. Manufacturers must anticipate future demand for
products or services and plan to provide the capacity and resources necessary to meet that
demand. Forecasting is the first step in planning and one of the most important tasks,
as many other organizational decisions are based on a forecast of the future. The
quality of these decisions can only be as good as the quality of the forecast upon which
they are based.
In business, forecasts are made in virtually every function and at every organizational
level. For example, a bank manager might need to predict cash flows for the next
quarter or a marketing manager might need to predict consumer response.
Forecasting in operations and manufacturing is different from forecasting in other
functional areas because the needs and uses of that forecast are different. In
operations and manufacturing, forecasts are used to make numerous decisions that
include machine and labor scheduling, production and capacity planning, inventory
control, and many others. It is the forecast that drives production schedules and
inventory requirements. Forecasting in manufacturing uses approaches that primarily
extrapolate history into the future. This is different from marketing approaches that
are oriented toward analysis of the environment and future needs of customers.
Though each forecasting situation is unique, there are certain general principles that
are common to almost all forecasting problems. Also, there is a large variety of
forecasting methodologies available to practitioners to choose from, ranging in degree
of complexity, cost, and accuracy. Understanding the basic principles of forecasting
and the forecasting options available is the first step toward generating good forecasts.
https://link.springer.com/referenceworkentry/10.1007%2F1-4020-0612-8_362 1/16
14/6/2021 FORECASTING GUIDELINES AND METHODS | SpringerLink
We begin with a brief historical overview and a discussion of these basic principles.
Then we discuss techniques of interest to manufacturing managers and develop
collective wisdom for applying these forecasts.
Forecasting models can be classified into two groups: quantitative and qualitative
models. Quantitative forecasting models are approaches based on mathematical or
statistical modeling. Qualitative methods, on the other hand, are subjective in nature.
They are based on judgment, intuition, experience and personal knowledge of the
practitioner. Quantitative models are more appropriate for manufacturing
management.
Much research has been done to compare the forecast accuracy of quantitative versus
qualitative models. Quantitative models, based on mathematics, are objective,
consistent, and can consider much information at a time. Qualitative models, based on
judgment, are subject to numerous biases that include limited processing ability, lack
of consistency, and selective perception. The biases found in human judgment cannot
be disputed and can indeed contribute to poor forecast accuracy. Though quantitative
and qualitative forecasts have usually been viewed in the literature as mutually
exclusive, recent studies have shown that combining their strengths may lead to
improved forecast accuracy.
The sophistication of many quantitative forecasting models has made them superior in
fitting historical data. However, this sophistication has not necessarily provided
superiority in their ability to forecast the future: studies have demonstrated that
simpler models often outperform the more sophisticated ones (Makridakis et al., 1982).
In the search for ways to improve forecast accuracy, researchers have found that
combining forecasts from different models in the form of an average forecast can lead
to significant improvements in forecast accuracy (Clemen, 1989). For example,
averaging the forecasts from a few different quantitative models to get a final forecast
should improve accuracy. Further, combining is more effective when the forecasts
combined are not highly correlated. As qualitative and quantitative forecasts usually
do not have a high correlation, their combination has been shown to be quite competitive
(Blattberg and Hoch, 1990). Combining qualitative and quantitative forecasts shows particular
promise for practitioners who inevitably adjust quantitative forecasts with their own
judgment (Sanders and Manrodt, 1994).
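As a minimal sketch of this principle, a combined forecast can be computed as a simple average of the component forecasts. The demand values and component forecasts below are hypothetical, for illustration only:

```python
def combine_forecasts(*forecasts):
    """Average several forecasts for the same period into one combined forecast."""
    return sum(forecasts) / len(forecasts)

# Hypothetical next-period forecasts from a quantitative model
# and from managerial judgment.
quantitative = 120.0
judgmental = 100.0

combined = combine_forecasts(quantitative, judgmental)
print(combined)  # 110.0
```

Combining works best when the component forecasts are not highly correlated, as with the quantitative and judgmental pair sketched here.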
Another principle of forecasting is that forecasts are more accurate for groups of
items rather than for individual items themselves. Due to pooling of variances, the
behavior of group data can have very stable characteristics even when individual items
in the group exhibit high degrees of randomness. Consequently, it is easier to obtain a
high degree of accuracy when forecasting groups of items rather than individual items
themselves.
The last principle of forecasting is that forecasts are more accurate for shorter than
longer time horizons. The shorter the time horizon of the forecast, the lower the
uncertainty of the future. There is a certain amount of inertia inherent in the data, and
dramatic pattern changes typically do not occur over the short run. As the time
horizon increases, however, there is a much greater likelihood that a change in
established patterns and relationships may occur. Therefore, forecasters cannot expect
to have the same degree of forecast accuracy for long range forecasts as they do for
shorter ranges.
There are a number of factors that influence the selection of a forecasting model. The
first determining factor to consider is the type and amount of available data. Certain
types of data are required for using quantitative forecasting models, and in the absence
of such data, qualitatively generated forecasts may be the only option. Also, different
quantitative models require different amounts of data. The amount of data available
may preclude the use of certain quantitative models, reducing the pool of possible
techniques.
Another factor to consider is the length of the forecast horizon. Forecasting methods
vary in their appropriateness for different time horizons, and short-term versus long-
term forecasting methods differ greatly. It is important to select the correct forecasting
model for the forecast horizon being used. For example, a manufacturing manager
should use a vastly different model when forecasting product sales for the next 3
months as opposed to forecasting capacity requirements for the next two years.
Another consideration in forecasting is to select a model that is appropriate for the
pattern present in the data. Four basic types of data patterns can be distinguished:
Trend – A trend is present when there is a steady increase or
decrease in the demand over time.
Quantitative models can be divided into two categories: time series models and causal
models. Time series models are based on the assumption that data representing past
demand can be used to obtain a forecast of the future. Causal models, by contrast,
assume that demand is related to some underlying factor or factors in the environment.
Time series models, on the other hand, are typically much easier to use, provide more
expedient forecasts, and require smaller amounts of data. Further, time series models
are available on numerous forecasting software packages that do not require much
user expertise and involvement. Some of the most common of these models are
described below.
The Mean
One of the simplest forecasting models available is the mean, or the simple average.
Given a data set covering N time periods, X1, X2, ..., XN, the forecast for the next time
period t + 1 is given as:

Ft+1 = (X1 + X2 + ... + XN)/N
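A minimal Python sketch of the simple-average forecast; the demand series below is hypothetical:

```python
def mean_forecast(history):
    """Forecast for period t + 1 as the simple average of all past observations."""
    return sum(history) / len(history)

demand = [102, 98, 101, 99, 100]  # hypothetical stationary demand series
print(mean_forecast(demand))  # 100.0
```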
Though this model is only useful for stationary data patterns, it has the advantage of
being simple and easy to use. As the mean is based on a larger and larger historical
data set, forecasts become more stable.

When using the mean to forecast, the influence of past data can be controlled by
limiting the number of observations included in the average. This process is described
by the term moving average because as each new observation becomes available, the
oldest observation is dropped and a new average is computed. The number of
observations used to compute the average is kept constant and always includes the
most recent observations. This model is particularly good at removing randomness
from the data. Like the simple mean, this model is only good for forecasting stationary
data and is not suitable for data with trend or seasonality.
Given N data points and a decision to use T observations for each average, the simple
moving average is computed as follows:

Forecast for period T + 1: FT+1 = (X1 + X2 + ... + XT)/T
Forecast for period T + 2: FT+2 = (X2 + X3 + ... + XT+1)/T
The decision on the number of periods to include in the moving average is important
and there are several conflicting effects that must be considered. In general, the
greater the number of observations in the moving average, the greater the smoothing
of the random elements. However, if there is a change in data pattern, such as a trend,
the larger the number of observations in the moving average, the greater the
propensity of the forecast to lag this pattern. For an example of the moving
average forecasting model, see Forecasting Examples.
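The moving-average computation can be sketched in Python as follows; the demand series and the choice of T are hypothetical:

```python
def moving_average_forecast(history, T):
    """Next-period forecast as the average of the T most recent observations."""
    if len(history) < T:
        raise ValueError("need at least T observations")
    # Only the T newest values enter the average; older ones are dropped.
    return sum(history[-T:]) / T

demand = [100, 104, 98, 102, 110, 108]  # hypothetical demand series
print(moving_average_forecast(demand, T=3))  # average of 102, 110, 108
```

A larger T gives more smoothing of randomness but a slower response to pattern changes, as discussed above.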
Single Exponential Smoothing

Single exponential smoothing (SES) generates the next period's forecast as:

Ft+1 = αXt + (1 − α)Ft (1)
where Ft+1 and Ft are next period's and this period's forecasts, respectively. Xt is this
period's actual observation and α is a smoothing constant that can theoretically vary
between 0 and 1. It can be seen that next period's forecast is actually a weighted
average of this period's forecast and this period's actual.
Substituting Xt = Ft + et, equation (1) can be rewritten as:

Ft+1 = Ft + αet (2)

where et is the forecast error for period t. This provides another interpretation of SES.
It can be seen that the forecast provided through SES is simply the old forecast plus an
adjustment for the error that occurred in the last forecast. When α is close to 1, the
new forecast includes a large adjustment for the error. The opposite is true when α is
close to 0; the new forecast will include very little adjustment. These equations show
that SES has a built-in self adjusting mechanism. The past forecast error is used to
correct the next forecast in a direction opposite to that of the error.
SES is only appropriate for stationary data and is not appropriate for data containing
trend as the forecasts will always lag the trended data. However, there are many
versions of the basic SES model that have been adjusted to accommodate trend and/or
seasonality. These are discussed later.
How much weight is placed on each of the components depends on the value of α. The
proper selection of α is a critical component in generating good forecasts with
exponential smoothing. High values of α will generate forecasts that react quickly to
changes in the data, but do not offer much smoothing of the random components. On
the other hand, low α values will not allow the model to respond rapidly to changes in
data pattern, but will smooth out much of the randomness. A good rule to consider is
that small values of α are best if the demand is stable in order to minimize the effects
of randomness. If the demand pattern is volatile and changing, a large value of α
should be selected in order to keep up with the changes.
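A minimal sketch of SES in Python. Initializing the first forecast to the first observation is a common convention assumed here, not something the model mandates; the series and α are hypothetical:

```python
def ses_forecasts(history, alpha, initial=None):
    """Single exponential smoothing: Ft+1 = alpha*Xt + (1 - alpha)*Ft.
    Returns the one-step-ahead forecast after processing the whole series."""
    # Assumed initialization: first forecast equals the first observation.
    forecast = history[0] if initial is None else initial
    for x in history:
        forecast = alpha * x + (1 - alpha) * forecast  # equation (1)
    return forecast

demand = [100, 110, 105]  # hypothetical demand series
print(ses_forecasts(demand, alpha=0.5))  # 105.0
```

A higher α makes the forecast track the latest observations more closely; a lower α smooths more of the randomness.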
Holt's Model

Holt's model extends single exponential smoothing to data containing a trend, using
two smoothing equations and a forecast equation:

Overall smoothing
St = αXt + (1 − α)(St−1 + bt−1) (3)
Trend smoothing
bt = γ(St − St−1) + (1 − γ)bt−1 (4)
Forecast
Ft+m = St + mbt (5)
In equation (3), St−1 + bt−1 is the forecast for period t adjusted for trend, used to
generate next period's St. It is this trend adjustment, bt−1, that helps correct the value
of St and eliminate lag. The trend itself is updated over time through equation (4), where the
trend is expressed as the difference between the last two smoothed values. The form of
equation (4) is the basic single smoothing equation applied to trend. Similar to α, the
coefficient γ is used to smooth out the randomness in the trend and can vary between
0 and 1. Finally, equation (5) is used to generate the final forecast. The trend, bt, is
multiplied by m, the number of periods ahead, and added to the base value St to
forecast several periods into the future. To forecast the next period, set m = 1.
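Equations (3)–(5) can be sketched as follows. The initialization of the level and trend below is a common convention assumed for illustration, not specified in the text:

```python
def holt_forecast(history, alpha, gamma, m=1):
    """Holt's model: smooth the level and trend, then forecast m periods ahead."""
    # Assumed initialization: level = first value, trend = first difference.
    level = history[0]
    trend = history[1] - history[0]
    for x in history[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (prev_level + trend)      # equation (3)
        trend = gamma * (level - prev_level) + (1 - gamma) * trend  # equation (4)
    return level + m * trend                                        # equation (5)

# A perfectly linear hypothetical series: the forecast continues the trend.
print(holt_forecast([10, 12, 14, 16], alpha=0.5, gamma=0.5))  # 18.0
```

Because the trend is carried forward by mbt, the forecast does not lag trended data the way SES does.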
Winters' model is just one of several models appropriate for use with data exhibiting
either a seasonal pattern or a pattern containing a combination of trend and
seasonality. It is available on most software packages and is often considered the
model of choice for these data patterns. Winters' model is based on three smoothing
equations: one for the level (St), one for trend (bt), and one for seasonality
(It). The equations of this model are similar to Holt's model, with an added equation to
deal with seasonality. The model is described as follows:
Overall smoothing
St = α(Xt/It−L) + (1 − α)(St−1 + bt−1) (6)
Trend smoothing
bt = γ(St − St−1) + (1 − γ)bt−1 (7)
Seasonal smoothing
It = β(Xt/St) + (1 − β)It−L (8)
Forecast
Ft+m = (St + mbt)It−L+m (9)
Equations (6) to (9) are similar to Holt's with a few exceptions. Here, L is the length of
seasonality, such as the number of months or quarters in a year, and It is the
corresponding seasonal adjustment factor. As in Holt's model, the trend component is
given by bt. Equation (8) is the seasonal smoothing equation which is comparable to a
seasonal index that is found as a ratio of the current values of the series Xt, divided by
the current single smoothed value for the series, St. When Xt is larger than St, the ratio
is greater than 1. The opposite is true when Xt is smaller than St, when the ratio will be
less than 1. It is important to understand that St is a smoothed value of the series that
does not include seasonality. The data values Xt, on the other hand, do contain
seasonality which is why they are deseasonalized in equation (6). Like parameters α
and γ, β can theoretically vary between 0 and 1.
As with other smoothing models, one of the problems in using Winters' method is
determining the values of parameters α, β and γ. The approach for determining these
values is the same as the selection of parameters for other smoothing procedures. Trial
and error on historical data is one approach that can be used. Most software packages
have the option of automatically selecting the parameters for the given data set to
minimize some criterion described later.
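A sketch of equations (6)–(9) in Python. The simple initialization of the level, trend, and seasonal indices from the first two seasons is an assumption for illustration, as the text does not specify one; the series and parameters are hypothetical:

```python
def winters_forecast(history, L, alpha, beta, gamma, m=1):
    """Winters' multiplicative seasonal smoothing (sketch).
    history must cover at least two full seasons of length L."""
    # Assumed initialization: level = mean of season 1, trend from the change
    # between season averages, seasonal indices = ratios to the season-1 mean.
    season1 = history[:L]
    season2 = history[L:2 * L]
    level = sum(season1) / L
    trend = (sum(season2) / L - level) / L
    seasonals = [x / level for x in season1]
    for t in range(L, len(history)):
        x = history[t]
        prev_level = level
        level = alpha * (x / seasonals[t % L]) + (1 - alpha) * (prev_level + trend)  # eq. (6)
        trend = gamma * (level - prev_level) + (1 - gamma) * trend                   # eq. (7)
        seasonals[t % L] = beta * (x / level) + (1 - beta) * seasonals[t % L]        # eq. (8)
    t = len(history)
    return (level + m * trend) * seasonals[(t + m - 1) % L]                          # eq. (9)

# A perfectly seasonal hypothetical series with season length 2.
print(winters_forecast([10, 20, 10, 20], L=2, alpha=0.2, beta=0.1, gamma=0.1))
```

Note how equation (6) deseasonalizes Xt by dividing by the matching seasonal index before smoothing the level, and equation (9) reseasonalizes the forecast.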
Selecting an appropriate measure of forecast error is not straightforward (Armstrong and
Collopy, 1992). Complicating the issue is the fact that different error measures often
provide conflicting results. The challenge for managers is to know which forecast error
measure to rely on in order to expediently assess forecast performance.
Measures of forecast accuracy are most commonly divided into standard and relative
error measures, which are fundamentally different from each other. In this section the
most common measures in these categories are shown. The next section provides some
specific suggestions for their use.
If Xt is the actual value for time period t and Ft is the forecast for the period t, the
forecast error for that period can be computed as the difference between the actual and
the forecast: et = Xt − Ft. When evaluating forecasting performance for multiple
observations, say n, there will be n error terms. The following are standard forecast
error measures:
Mean Error: ME = (Σ et)/n
Mean Absolute Deviation: MAD = (Σ |et|)/n
Mean Squared Error: MSE = (Σ et²)/n

The percentage error for period t is:

PEt = [(Xt − Ft)/Xt](100)

Relative measures based on PEt include the mean percentage error, MPE = (Σ PEt)/n,
and the mean absolute percentage error, MAPE = (Σ |PEt|)/n.
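A minimal sketch computing several of these measures; the actuals and forecasts below are hypothetical. Note how errors of opposite signs cancel in ME but not in MAD:

```python
def error_measures(actuals, forecasts):
    """Compute common forecast error measures from paired actuals and forecasts."""
    errors = [x - f for x, f in zip(actuals, forecasts)]  # et = Xt - Ft
    n = len(errors)
    me = sum(errors) / n                                  # Mean Error (bias)
    mad = sum(abs(e) for e in errors) / n                 # Mean Absolute Deviation
    mse = sum(e * e for e in errors) / n                  # Mean Squared Error
    pes = [100 * (x - f) / x for x, f in zip(actuals, forecasts)]
    mape = sum(abs(p) for p in pes) / n                   # Mean Absolute Pct. Error
    return {"ME": me, "MAD": mad, "MSE": mse, "MAPE": mape}

# Hypothetical data: ME is 0.0 even though every individual error is +/-10.
print(error_measures([100, 200, 100, 200], [110, 190, 90, 210]))
```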
Standard error measures, such as mean error (ME) or mean squared error (MSE),
typically provide the error in the same units as the data. Because of this the true
magnitude of the error can be difficult for managers to comprehend. For example, a
forecast error of 50 units has a completely different level of gravity if the actual
forecast for that period was 1000 as opposed to 100 units. Also, a forecast error of 50
dollars is vastly different from a forecast error of 50 cartons. In addition, having the
error in actual units of measurement makes it difficult to compare accuracies across
time series. For example, in inventory control some series might be measured in
dollars, others in pallets or boxes.
Relative error measures, which are unit-free, do not have these problems and
provide an easier understanding of the quality of the
forecast. As managers easily understand percentages, comparisons across different
time series or different time intervals are easy and meaningful. For example, it is easy
to understand the differences in the quality of forecasts between 12% and 48% error.
One of the most popular of the relative error measures is MAPE.
Error measures that use absolute values, such as the mean absolute deviation (MAD),
provide valuable information. Because absolute values are used, these measures do not
have the problem of errors of opposite signs canceling themselves out. For example, a
low mean error (ME) may mislead the manager into thinking that forecasts are doing
well when in fact very high forecasts and very low forecasts may be canceling each other
out. This problem is avoided with absolute error measures which provide the average
total magnitude of the error.
One shortcoming of some absolute error measures is that they assume a symmetrical
loss function. That is, the organizational cost of over-forecasting is assumed to equal
the cost of under-forecasting when these values are summed. The manager is then
provided with the total magnitude of error but does not know the true bias or direction
of that error. When using an error measure like MAD it is useful to also compute a
measure of forecast bias. This may be the mean error (ME) or mean percentage error
(MPE). In contrast to MAD, ME and MPE provide the direction of the error, that is,
the systematic tendency to over- or underforecast. It is very common for managers and
sales personnel to have a biased forecast in line with the organizational incentive
system, such as performance against a sales quota. Information on forecast bias is
useful as the forecast can then be adjusted to compensate for the bias. The two pieces
of information, the total forecast error and forecast bias, can work to complement each
other and provide a more complete picture for the manager.
A frequent problem that can distort forecasts is outliers, or unusually high or low
data points. Outliers can arise due to a mistake in recording the data,
promotional events, or just random occurrences. Outliers can make it particularly
difficult to compare performance across series. Some error measures, such as the
mean square error (MSE), are especially susceptible due to the squaring of the error
term which can make overall error appear unusually high. This can distort true
forecast accuracy.
There are certain requirements manufacturing managers need to consider for any
forecasting system used in an operations environment. These are described below:
– The forecasting models used should generate forecasts for expected demand in a
timely fashion. This means that forecasts should be generated to allow
ample time for decisions to be made. Typically, short-range forecasts could be
made in actual product units, such as by SKU. Medium- to longer-range
forecasts should be generated in an aggregate unit of measure, such as dollars.
– The forecasting system should provide past forecast errors, allowing for regular
monitoring of forecast performance. Summary error statistics should be
regularly updated, outliers recorded, and exception reports generated for large
errors.
In practice, the easiest and most common way to combine qualitative and quantitative
forecasts is through a qualitative adjustment of quantitatively generated forecasts
(Sanders and Manrodt, 1994). However, this practice can harm the forecast more often
than it can improve it. Managers need to exercise extreme caution when overriding the
system.
As discussed earlier, quantitative models are objective and can process much
information at a time. However, quantitative models are based on the assumption that
past patterns will continue into the future. As such, they do not work well during
periods of change. Qualitative techniques are subject to the biases of human judgment, but
they may perform better during periods of change. The reason is that practitioners can
sometimes bring information to the forecasting process not available to the
quantitative model. This means that primary reliance should be placed on quantitative
forecasts in order to maximize efficiency. However, managerial judgment may be used
to adjust the quantitative forecasts when management becomes aware of events and
information that is not available to the quantitative model.
References
Armstrong, J.S. (1995). Long-range Forecasting: From Crystal Ball to Computer. New
York: John Wiley and Sons.

Armstrong, J.S. and F. Collopy (1992). "Error measures for generalizations about
forecasting methods: Empirical comparisons with discussion." International Journal
of Forecasting, 8, 69–80.

Blattberg, R.C. and S.J. Hoch (1990). "Database models and managerial intuition: 50%
model and 50% manager." Management Science, 36, 889–899.

Sanders, N.R. and K.B. Manrodt (1994). "Forecasting practices in US corporations:
Survey results." Interfaces, 24(2), 91–100.

Sanders, N.R. and L.P. Ritzman (1992). "The need for contextual and technical
knowledge in judgmental forecasting." Journal of Behavioral Decision Making, 5, 39–52.

Webby, R. and M. O'Connor (1996). "Judgmental and statistical time series
forecasting: A review of the literature." International Journal of Forecasting, 12, 91–118.

Yurkiewicz, J. (1993). "Forecasting software: Clearing up a cloudy picture." OR/MS
Today, 2, 64–75.
Copyright information
© Kluwer Academic Publishers 2000
How to cite
Cite this entry as:
Sanders N.R. (2000) FORECASTING GUIDELINES AND METHODS. In: Swamidass P.M. (eds)
Encyclopedia of Production and Manufacturing Management. Springer, Boston, MA.
https://doi.org/10.1007/1-4020-0612-8_362