
Encyclopedia of Production and Manufacturing Management, 2000 Edition

FORECASTING GUIDELINES AND METHODS

Nada R. Sanders
Wright State University, Dayton, USA

Reference work entry


DOI: https://doi.org/10.1007/1-4020-0612-8_362


Planning for future events is an integral aspect of operating any business. Planning
allows actions to be taken that will meet lead time requirements and create a
competitive operation. The process of planning, however, assumes that forecasts of the
future are readily available. Manufacturers must anticipate future demand for
products or services and plan to provide capacity and resources necessary to meet that
demand. Forecasting is the first step in planning. It is one of the most important tasks,
as many other organizational decisions are based on a forecast of the future. The
quality of these decisions can only be as good as the quality of the forecast upon which
they are based.

In business, forecasts are made in virtually every function and at every organizational
level. For example, a bank manager might need to predict cash flows for the next
quarter or a marketing manager might need to predict consumer response.
Forecasting in operations and manufacturing is different from forecasting in other
functional areas because the needs and uses of that forecast are different. In
operations and manufacturing, forecasts are used to make numerous decisions that
include machine and labor scheduling, production and capacity planning, inventory
control, and many others. It is the forecast that drives production schedules and
inventory requirements. Forecasting in manufacturing uses approaches that primarily
extrapolate history into the future. This is different from marketing approaches that
are oriented toward analysis of the environment and future needs of customers.

Though each forecasting situation is unique, there are certain general principles that
are common to almost all forecasting problems. Also, there is a large variety of
forecasting methodologies available to practitioners to choose from, ranging in degree
of complexity, cost, and accuracy. Understanding the basic principles of forecasting
and the forecasting options available is the first step toward generating good forecasts.


We begin with a brief historical overview and a discussion of these basic principles.
Then we discuss techniques of interest to manufacturing managers and develop
collective wisdom for applying these forecasts.

Forecasting models can be classified into two groups: quantitative and qualitative
models. Quantitative forecasting models are approaches based on mathematical or
statistical modeling. Qualitative methods, on the other hand, are subjective in nature.
They are based on judgment, intuition, experience and personal knowledge of the
practitioner. Quantitative models are more appropriate for manufacturing
management.

Quantitative Versus Qualitative Models

Much research has been done to compare the forecast accuracy of quantitative versus
qualitative models. Quantitative models, based on mathematics, are objective,
consistent, and can consider much information at a time. Qualitative models, based on
judgment, are subject to numerous biases that include limited processing ability, lack
of consistency, and selective perception. The biases found in human judgment cannot
be disputed and can indeed contribute to poor forecast accuracy. Though quantitative
and qualitative forecasts have usually been viewed in the literature as mutually
exclusive, recent studies have shown that combining their strengths may lead to
improved forecast accuracy.

Combining Quantitative and Qualitative Approaches to Forecasting

The sophistication of many quantitative forecasting models has made them superior in
fitting historical data. However, this sophistication has not necessarily provided
superiority in their ability to forecast the future. This shortcoming has been
demonstrated in studies showing that simpler models often outperform the more
sophisticated ones (Makridakis et al., 1982).

In the search for ways to improve forecast accuracy, researchers have found that
combining forecasts from different models in the form of an average forecast can lead
to significant improvements in forecast accuracy (Clemen, 1989). For example,
averaging the forecasts from a few different quantitative models to get a final forecast
should improve accuracy. Further, combining is more effective when the forecasts
combined are not highly correlated. As qualitative and quantitative forecasts usually
do not have a high correlation, their combination has been shown to be quite competitive
(Blattberg, 1990). Combining qualitative and quantitative forecasts shows particular
promise for practitioners who inevitably adjust quantitative forecasts with their own
judgment (Sanders and Manrodt, 1994).
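As a minimal illustration, consider averaging the outputs of two quantitative models with a judgmental estimate. The following Python sketch uses purely hypothetical forecast values:

# Sketch: combining forecasts by simple averaging (hypothetical values).
forecast_moving_average = 120.0    # e.g., from a moving average model
forecast_smoothing = 132.0         # e.g., from an exponential smoothing model
forecast_judgmental = 140.0        # a manager's estimate based on market knowledge

combined = (forecast_moving_average + forecast_smoothing + forecast_judgmental) / 3
print(f"Combined forecast: {combined:.1f}")    # 130.7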

Basic Principles of Forecasting


Forecasting future events involves uncertainty, and perfect prediction is almost
impossible. Forecasters know that they have to live with a certain amount of error. Our
goal in forecasting is to have good forecast performance on the average over time, and
minimize the chance of very large errors.

Another principle of forecasting is that forecasts are more accurate for groups of
items rather than for individual items themselves. Due to pooling of variances, the
behavior of group data can have very stable characteristics even when individual items
in the group exhibit high degrees of randomness. Consequently, it is easier to obtain a
high degree of accuracy when forecasting groups of items rather than individual items
themselves.

The last principle of forecasting is that forecasts are more accurate for shorter than
longer time horizons. The shorter the time horizon of the forecast, the lower the
uncertainty of the future. There is a certain amount of inertia inherent in the data, and
dramatic pattern changes typically do not occur over the short run. As the time
horizon increases, however, there is a much greater likelihood that a change in
established patterns and relationships may occur. Therefore, forecasters cannot expect
to have the same degree of forecast accuracy for long range forecasts as they do for
shorter ranges.

Selecting a Forecasting Model

There are a number of factors that influence the selection of a forecasting model. The
first determining factor to consider is the type and amount of available data. Certain
types of data are required for using quantitative forecasting models, and in their
absence qualitatively generated forecasts may be the only option. Also, different
quantitative models require different amounts of data. The amount of data available
may preclude the use of certain quantitative models, reducing the pool of possible
techniques.

Another important factor to consider in model selection is the degree of accuracy


required. Some situations only require crude forecasts, whereas others require great
accuracy. Increasing accuracy, however, usually raises the costs of data acquisition,
computer time, and managerial involvement. A simpler but less accurate model may
be preferable to a complex but highly accurate one. As a general rule, it is best to use
as simple a model as possible for the conditions present and data available.

A third factor to consider is the length of the forecast horizon. Forecasting methods
vary in their appropriateness for different time horizons, and short-term versus long-
term forecasting methods differ greatly. It is important to select the correct forecasting
model for the forecast horizon being used. For example, a manufacturing manager
should use a vastly different model when forecasting product sales for the next 3
months as opposed to forecasting capacity requirements for the next two years.

Finally, an important criterion in selecting an appropriate method is to consider the


types of patterns present in the data. Forecasting models differ in their ability to
handle different types of data patterns. One of the most important factors in


forecasting is to select a model that is appropriate for the pattern present in the data.
Four basic data patterns, along with randomness, can be distinguished:

Level or Stationary – A level pattern exists when data values fluctuate
around a constant mean. In forecasting, such data are said to be “stationary” in the mean.
An example of this would be a product whose sales do not increase or decrease
over time. This type of pattern is frequently encountered when forecasting
demand for products in the mature stage of their life cycle.

Trend – We can say that a trend is present when there is a steady increase or
decrease in the demand over time.

Seasonality – Any pattern that regularly repeats itself and is of constant


duration is considered a seasonal pattern. This pattern frequently exists when
the data are influenced by the events associated with a given month or quarter
of the year, as well as day of the week. An example of this would be the peaking
of sales during Christmas.

Cycles – When data are influenced by longer-term economic changes, such as


those associated with the business cycle, a cyclical pattern is present. The major
distinction between a seasonal and cyclical pattern is that a cyclical pattern
varies in length and magnitude. Consequently, cyclical factors are typically
more difficult to forecast than other patterns.

Randomness – Randomness refers to changes in the data caused by chance events
that cannot be forecast. Randomness can frequently obscure the true patterns
in the data, making it difficult to forecast. In forecasting, we try to predict the
other data patterns while trying to 'smooth out' as much of the randomness as
possible. As stated in the second principle of forecasting, group data have less
randomness present due to ‘pooling of variances.’ Any one or more of these
patterns can be present in the data.
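To make these patterns concrete, the following Python sketch generates a hypothetical monthly demand series that combines a level, a trend, multiplicative seasonality, and random noise; all numbers are illustrative only:

import random

random.seed(42)                                      # reproducible noise
level = 100.0                                        # stationary base demand
trend = 2.0                                          # steady increase per month
seasonal_index = [0.9, 0.9, 1.0, 1.1, 1.2, 1.3,      # 12 hypothetical monthly indices
                  1.3, 1.2, 1.1, 1.0, 0.9, 1.1]

demand = []
for t in range(24):                                  # two years of monthly data
    noise = random.gauss(0, 5)                       # random component
    value = (level + trend * t) * seasonal_index[t % 12] + noise
    demand.append(round(value, 1))

print(demand)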

Forecasting Models Used in Operations

Manufacturing managers most frequently need short- to medium-term demand


forecasts to make decisions in production control, inventory control, and work force
scheduling. Given the frequency with which many of these forecasts are made,
automating as much of the process as possible is desirable. Efficiency is gained by relying on
forecasting models that are easy to use, easy to understand, and require little data
storage. Qualitative forecasts of practitioners can be combined with quantitative
forecasts during specific time periods, as will be discussed later in this chapter. For the
most part, reliance should be placed on quantitative models to gain efficiency and to
keep costs down.

Quantitative models can be divided into two categories: time series models and causal
models. Time series models are based on the assumption that data representing past
demand can be used to obtain a forecast of the future. Causal models, by contrast,
assume that demand is related to some underlying factor or factors in the


environment. Generating a forecast with causal models typically requires the


forecaster to build a model that captures the relationship between the variable being
forecast and the other factor(s) in the environment.

Time series models, on the other hand, are typically much easier to use, provide more
expedient forecasts, and require smaller amounts of data. Further, time series models
are available on numerous forecasting software packages that do not require much
user expertise and involvement. Some of the most common of these models are
described below.

The Mean

One of the simplest forecasting models available is the mean, or the simple average.
Given a data set covering N time periods, X1, X2, ..., XN, the forecast for the next time
period, N + 1, is given as:

FN+1 = (X1 + X2 + ... + XN)/N
Though this model is only useful for stationary data patterns, it has the advantage of
being simple and easy to use. As the mean is based on a larger and larger historical
data set, forecasts become more stable.
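A minimal Python sketch of the simple-average forecast, using hypothetical demand data:

# The forecast for the next period is the mean of all past observations.
history = [102, 98, 105, 99, 101, 97, 103]    # X1 ... XN (hypothetical)

forecast_next = sum(history) / len(history)
print(f"Forecast for period {len(history) + 1}: {forecast_next:.1f}")    # 100.7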

Simple Moving Average

When using the mean to forecast, the influence of past data can be controlled by
limiting the number of observations included in the average. This process is described
by the term moving average because as each new observation becomes available, the
oldest observation is dropped and a new average is computed. The number of
observations used to compute the average is kept constant and always includes the
most recent observations. This model is particularly good at removing randomness
from the data. Like the simple mean, this model is only good for forecasting stationary
data and is not suitable for data with trend or seasonality.

Given N data points and a decision to use T observations for each average, the simple
moving average is computed as follows:

Time      Forecast

T + 1     FT+1 = (X1 + X2 + ... + XT)/T

T + 2     FT+2 = (X2 + X3 + ... + XT+1)/T

The decision on the number of periods to include in the moving average is important
and there are several conflicting effects that must be considered. In general, the
greater the number of observations in the moving average, the greater the smoothing


of the random elements. However, if there is a change in the data pattern, such as a trend,
the larger the number of observations in the moving average, the greater the
propensity for the forecast to lag this pattern. For an example of the moving
average forecasting model, see Forecasting Examples.
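A minimal Python sketch of the simple moving average with T = 3, using hypothetical data:

def moving_average_forecast(history, T=3):
    # Average of the T most recent observations; the oldest observation is
    # dropped as each new one arrives.
    if len(history) < T:
        raise ValueError("need at least T observations")
    return sum(history[-T:]) / T

demand = [120, 118, 123, 121, 125]            # hypothetical demand history
print(moving_average_forecast(demand, T=3))   # (123 + 121 + 125) / 3 = 123.0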

Single Exponential Smoothing

Single exponential smoothing belongs to a general class of models called exponential


smoothing models. These models are based on the premise that the importance of past
data diminishes as the past becomes more distant. As such, these models place
exponentially decreasing weights on progressively older observations. Exponential
smoothing models are the most used of all forecasting techniques and are an integral
part of virtually all computerized forecasting programs. They are widely used for
forecasting in practice, particularly in production and inventory control environments.
There are many reasons for their widespread use. First, these models have been shown
to produce accurate forecasts under many conditions. Second, model formulation is
relatively easy and the user can understand how the model works. Finally, little
computation is required to use the model and computer storage requirements are
quite small.

Single exponential smoothing, SES, is the simplest of the exponential smoothing


models. Forecasts using SES are generated as follows:

Ft+1 = αXt + (1 − α)Ft        (1)
where Ft+1 and Ft are next period's and this period's forecasts, respectively. Xt is this
period's actual observation and α is a smoothing constant that can theoretically vary
between 0 and 1. It can be seen that next period's forecast is actually a weighted
average of this period's forecast and this period's actual.

An alternative way of writing equation (1) is as follows:

Ft+1 = Ft + αet        (2)
where et is the forecast error for period t. This provides another interpretation of SES.
It can be seen that the forecast provided through SES is simply the old forecast plus an
adjustment for the error that occurred in the last forecast. When α is close to 1, the
new forecast includes a large adjustment for the error. The opposite is true when α is
close to 0; the new forecast will include very little adjustment. These equations show
that SES has a built-in self adjusting mechanism. The past forecast error is used to
correct the next forecast in a direction opposite to that of the error.

SES is only appropriate for stationary data and is not appropriate for data containing
trend as the forecasts will always lag the trended data. However, there are many
versions of the basic SES model that have been adjusted to accommodate trend and/or
seasonality. These are discussed later.

How much weight is placed on each of the components depends on the value of α. The
proper selection of α is a critical component in generating good forecasts with
exponential smoothing. High values of α will generate forecasts that react quickly to
changes in the data, but do not offer much smoothing of the random components. On
the other hand, low α values will not allow the model to respond rapidly to changes in


data pattern, but will smooth out much of the randomness. A good rule to consider is
that small values of α are best if the demand is stable in order to minimize the effects
of randomness. If the demand pattern is volatile and changing, a large value of α
should be selected in order to keep up with the changes.

Another option to consider when selecting α is to use what is known as a tracking


signal for α, a technique sometimes called adaptive exponential smoothing. Here, the
tracking signal is defined as the exponentially smoothed actual error divided by the
exponentially smoothed absolute error. As α is set equal to the tracking signal, its
value automatically changes from period to period as changes occur in the data
pattern. This allows α to adapt to changes in the data pattern and helps automate the
process. For an example of the SES model, see Forecasting Examples.
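The following Python sketch follows equation (2) and, under the definition of the tracking signal given above, adds a simple adaptive variant; the data, the smoothing constants, and the choice to seed the forecast with the first observation are illustrative assumptions:

def single_exponential_smoothing(history, alpha=0.3):
    # F(t+1) = F(t) + alpha * (X(t) - F(t)); the first observation seeds the forecast.
    forecast = history[0]
    for actual in history:
        forecast = forecast + alpha * (actual - forecast)
    return forecast    # forecast for the next, as yet unobserved, period

def adaptive_exponential_smoothing(history, beta=0.3):
    # Alpha is reset each period to the tracking signal: smoothed error
    # divided by smoothed absolute error (both smoothed with constant beta).
    forecast = history[0]
    smoothed_error = 0.0
    smoothed_abs_error = 1e-9    # tiny start value to avoid division by zero
    for actual in history:
        error = actual - forecast
        smoothed_error = beta * error + (1 - beta) * smoothed_error
        smoothed_abs_error = beta * abs(error) + (1 - beta) * smoothed_abs_error
        alpha = abs(smoothed_error) / smoothed_abs_error    # tracking signal, between 0 and 1
        forecast = forecast + alpha * error
    return forecast

demand = [50, 52, 51, 55, 60, 62, 65, 66]    # hypothetical demand history
print(round(single_exponential_smoothing(demand, alpha=0.3), 1))
print(round(adaptive_exponential_smoothing(demand, beta=0.3), 1))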

Holt's Two-Parameter Model

Holt's two-parameter model, also known as linear exponential smoothing, is one of


many models applicable for forecasting data with a trend pattern (Makridakis,
Wheelwright, and McGee, 1983). As mentioned earlier, stationary models will generate
forecasts that will lag the data if there is a trend present. All trend models are simply
level models that have an additional mechanism that allows them to track the trend
and adjust the level of the forecast to compensate for that trend. Holt's model does this
through the development of a separate trend equation that is added to the basic
smoothing equation in order to generate the final forecast. The series of equations is as
follows:

Overall smoothing

St = αXt + (1 − α)(St−1 + bt−1)        (3)

Trend smoothing

bt = γ(St − St−1) + (1 − γ)bt−1        (4)

Forecast

Ft+m = St + bt m        (5)

In equation (3), (St−1 + bt−1) is the forecast for period t adjusted for the trend bt−1, and it is
combined with the current observation Xt to generate next period's St. This is the mechanism that helps adjust the value of St for trend and
eliminate lag. The trend itself is updated over time through equation (4), where the
trend is expressed as the difference between the last two smoothed values. The form of
equation (4) is the basic single smoothing equation applied to trend. Similar to α, the
coefficient γ is used to smooth out the randomness in the trend and can vary between
0 and 1. Finally, equation (5) is used to generate the final forecast. The trend, bt, is
multiplied by m, the number of periods ahead and added to the base value St to
forecast several periods into the future. To forecast the next period, set m = 1.
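A Python sketch of equations (3) to (5); the initialization of the level and trend from the first two observations, the data, and the parameter values are illustrative assumptions:

def holt_forecast(history, alpha=0.3, gamma=0.2, m=1):
    # Holt's two-parameter model: level S(t) and trend b(t).
    level = history[0]
    trend = history[1] - history[0]    # simple initial trend estimate
    for actual in history[1:]:
        prev_level = level
        level = alpha * actual + (1 - alpha) * (prev_level + trend)    # eq. (3)
        trend = gamma * (level - prev_level) + (1 - gamma) * trend     # eq. (4)
    return level + m * trend                                           # eq. (5)

demand = [10, 12, 13, 15, 17, 18, 20, 23]      # hypothetical trended demand
print(round(holt_forecast(demand, m=1), 1))    # one period ahead
print(round(holt_forecast(demand, m=4), 1))    # four periods ahead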

Winters' Three-Parameter Trend and Seasonality Model


Winters' model is just one of several models appropriate for use with data exhibiting
either a seasonal pattern or a pattern containing a combination of trend and
seasonality. It is available on most software packages and is often considered the
model of choice for these data patterns. Winters' model is based on three smoothing
equations: one for forecasting the level (St), one for trend (bt), and one for seasonality
(It). The equations of this model are similar to Holt's model, with an added equation to
deal with seasonality. The model is described as follows:

Overall smoothing

St = α(Xt / It−L) + (1 − α)(St−1 + bt−1)        (6)

Trend smoothing

bt = γ(St − St−1) + (1 − γ)bt−1        (7)

Seasonal smoothing

It = β(Xt / St) + (1 − β)It−L        (8)

Forecast

Ft+m = (St + bt m) It−L+m        (9)

Equations (6) to (9) are similar to Holt's with a few exceptions. Here, L is the length of
seasonality, such as the number of months or quarters in a year, and It is the
corresponding seasonal adjustment factor. As in Holt's model, the trend component is
given by bt. Equation (8) is the seasonal smoothing equation which is comparable to a
seasonal index that is found as a ratio of the current values of the series Xt, divided by
the current single smoothed value for the series, St. When Xt is larger than St, the ratio
is greater than 1. The opposite is true when Xt is smaller than St, when the ratio will be
less than 1. It is important to understand that St is a smoothed value of the series that
does not include seasonality. The data values Xt, on the other hand, do contain
seasonality which is why they are deseasonalized in equation (6). Like parameters α
and γ, β can theoretically vary between 0 and 1.

As with other smoothing models, one of the problems in using Winters' method is
determining the values of parameters α, β and γ. The approach for determining these
values is the same as the selection of parameters for other smoothing procedures. Trial
and error on historical data is one approach that can be used. Most software packages
have the option of automatically selecting the parameters for the given data set to
minimize some criterion described later.
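A Python sketch of equations (6) to (9) with multiplicative seasonality; the initialization of the level, trend, and seasonal indices is deliberately simplified, and the quarterly data and parameter values are hypothetical:

def winters_forecast(history, L, alpha=0.3, beta=0.3, gamma=0.2, m=1):
    # Winters' three-parameter model: level S(t), trend b(t), seasonal indices I(t),
    # season length L. beta smooths the seasonal indices, gamma the trend.
    first_season_avg = sum(history[:L]) / L
    level = first_season_avg
    trend = (sum(history[L:2 * L]) - sum(history[:L])) / (L * L)   # avg. per-period trend
    seasonals = [history[i] / first_season_avg for i in range(L)]  # initial indices

    for t in range(L, len(history)):
        actual = history[t]
        prev_level = level
        idx = t % L
        level = alpha * (actual / seasonals[idx]) + (1 - alpha) * (prev_level + trend)  # eq. (6)
        trend = gamma * (level - prev_level) + (1 - gamma) * trend                      # eq. (7)
        seasonals[idx] = beta * (actual / level) + (1 - beta) * seasonals[idx]          # eq. (8)

    return (level + m * trend) * seasonals[(len(history) + m - 1) % L]                  # eq. (9)

# Hypothetical quarterly demand with a clear fourth-quarter peak.
demand = [80, 90, 100, 140, 85, 96, 108, 150, 92, 104, 118, 162]
print(round(winters_forecast(demand, L=4, m=1), 1))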

Measuring Forecast Error

Regardless of which forecasting model is used, a critical aspect of forecasting is


evaluating forecast performance. Unfortunately, there is little consensus among
forecasters as to the best and most reliable forecast error measures (Armstrong and


Collopy, 1992). Complicating the issue is the fact that different error measures often
provide conflicting results. The challenge for managers is to know which forecast error
measure to rely on in order to expediently assess forecast performance.

Common Forecast Error Measures

Measures of forecast accuracy are most commonly divided into standard and relative
error measures, which are fundamentally different from each other. In this section the
most common measures in these categories are shown. The next section provides some
specific suggestions for their use.

If Xt is the actual value for time period t and Ft is the forecast for the period t, the
forecast error for that period can be computed as the difference between the actual and
the forecast: et = Xt − Ft. When evaluating forecasting performance for multiple
observations, say n, there will be n error terms. The following are standard forecast
error measures:

Mean Error: ME = (Σ et)/n

Mean Absolute Deviation: MAD = (Σ |et|)/n

Mean Squared Error: MSE = (Σ et²)/n

Root Mean Squared Error: RMSE = √(MSE)

Some of the most common relative forecast error measures are:

Mean Percentage Error: PEt = [(Xt − Ft)/Xt](100), and MPE = (Σ PEt)/n

Mean Absolute Percentage Error: MAPE = (Σ |PEt|)/n

where each sum is taken over the n periods, t = 1, ..., n.
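A Python sketch computing the standard and relative error measures listed above for a hypothetical set of actuals and forecasts:

import math

def error_measures(actuals, forecasts):
    # Standard measures (ME, MAD, MSE, RMSE) and relative measures (MPE, MAPE).
    errors = [x - f for x, f in zip(actuals, forecasts)]
    n = len(errors)
    me = sum(errors) / n
    mad = sum(abs(e) for e in errors) / n
    mse = sum(e ** 2 for e in errors) / n
    rmse = math.sqrt(mse)
    mpe = sum(100 * (x - f) / x for x, f in zip(actuals, forecasts)) / n
    mape = sum(100 * abs(x - f) / x for x, f in zip(actuals, forecasts)) / n
    return {"ME": me, "MAD": mad, "MSE": mse, "RMSE": rmse, "MPE": mpe, "MAPE": mape}

actuals = [100, 110, 95, 105]      # hypothetical actual demand
forecasts = [98, 115, 90, 104]     # hypothetical forecasts
for name, value in error_measures(actuals, forecasts).items():
    print(f"{name}: {value:.2f}")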

Standard Versus Relative Forecast Error Measures

Standard error measures, such as mean error (ME) or mean squared error (MSE),
typically provide the error in the same units as the data. Because of this the true
magnitude of the error can be difficult for managers to comprehend. For example, a


forecast error of 50 units has a completely different level of gravity if the actual
forecast for that period was 1000 as opposed to 100 units. Also, a forecast error of 50
dollars is vastly different from a forecast error of 50 cartons. In addition, having the
error in actual units of measurement makes it difficult to compare accuracy across
time series. For example, in inventory control some series might be measured in
dollars, others in pallets or boxes.

Relative error measures, which are unit-free, do not have these problems. They are
significantly better to use, and provide an easier understanding of the quality of the
forecast. As managers easily understand percentages, comparisons across different
time series or different time intervals are easy and meaningful. For example, it is easy
to understand the differences in the quality of forecasts between 12% and 48% error.
One of the most popular of the relative error measures is MAPE.

Error Measures Based on Absolute Values

Error measures that use absolute values, such as the mean absolute deviation (MAD),
provide valuable information. Because absolute values are used, these measures do not
have the problem of errors of opposite signs canceling themselves out. For example, a
low mean error (ME) may mislead the manager into thinking that forecasts are doing
well when in fact very high forecasts and very low forecasts may be canceling each other
out. This problem is avoided with absolute error measures which provide the average
total magnitude of the error.

One shortcoming of some absolute error measures is that they assume a symmetrical
loss function. That is, the organizational cost of over-forecasting is assumed to be equal to
the cost of underforecasting when these values are summed. The manager is then
provided with the total magnitude of error but does not know the true bias or direction
of that error. When using an error measure like MAD it is useful to also compute a
measure of forecast bias. This may be mean error (ME) or mean percentage error
(MPE). ME or MPE, by contrast to MAD, provide the direction of the error which is
the systematic tendency to over- or underforecast. It is very common for managers and
sales personnel to have a biased forecast in line with the organizational incentive
system, such as performance against a sales quota. Information on forecast bias is
useful as the forecast can then be adjusted to compensate for the bias. The two pieces
of information, the total forecast error and forecast bias, can work to complement each
other and provide a more complete picture for the manager.

How to Handle Outliers

A frequent problem that can distort forecasts is outliers, or unusually high or low
data points. Outliers can arise from mistakes in recording the data,
promotional events, or just random occurrences. Outliers can make it particularly
difficult to compare performance across series. Some error measures, such as the
mean square error (MSE), are especially susceptible due to the squaring of the error
term which can make overall error appear unusually high. This can distort true
forecast accuracy.


One method which is immune to outliers is to compare forecast performance against a


comparison model which can serve as a baseline. Then a computation can be made of
the percentage of forecasts for which a given method is more accurate than the
baseline. This is called the Percent Better. A good model to use as a baseline is the
Naive Model, where one period's actual is assumed to be the next period's forecast.
Finally, extreme values can be replaced by more typical values, as keeping outliers in
the data can distort future forecasts (Armstrong and Collopy, 1992).
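A Python sketch of the Percent Better calculation against the Naive Model baseline; the series, which includes one outlier, is hypothetical:

def percent_better(actuals, candidate_forecasts):
    # Share of periods in which the candidate forecast is closer to the actual
    # than the naive baseline (last period's actual as the next period's forecast).
    better = 0
    count = 0
    for t in range(1, len(actuals)):
        naive_error = abs(actuals[t] - actuals[t - 1])
        candidate_error = abs(actuals[t] - candidate_forecasts[t])
        if candidate_error < naive_error:
            better += 1
        count += 1
    return 100 * better / count

actuals = [100, 104, 98, 250, 103, 101]       # 250 is an outlier
candidate = [99, 102, 100, 105, 102, 104]     # hypothetical candidate forecasts
print(f"Percent Better: {percent_better(actuals, candidate):.0f}%")    # 80%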

The Effect of Zero Values and Missing Observations

Another problem that is often encountered in practice, especially in inventory control,


is the existence of zero or near zero values. This is often an issue for items that
experience ‘lumpy’ demand where sales do not occur every day. Relative error
measures, because they are based on a ratio, cannot handle a zero in the denominator
as the error would become infinite. In such a case, the results from using either the
percentage error (PE), mean percentage error (MPE) or the mean absolute percentage
error (MAPE) would become infinite and evaluation would be impossible. There are a
couple of ways to handle this problem. One is to replace the zero or missing value with a
more typical value that can be obtained through a moving average. Another option,
suggested by practitioner Bernie Smith (Smith, 1984) is simply to define the error as
100% for all data points equal to zero.
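A Python sketch of MAPE with the zero-value rule described above (percentage error defined as 100% whenever the actual is zero); the 'lumpy' demand series is hypothetical:

def mape_with_zero_rule(actuals, forecasts):
    # MAPE where zero-demand periods contribute a fixed 100% error rather than
    # producing a division by zero.
    percentage_errors = []
    for x, f in zip(actuals, forecasts):
        if x == 0:
            percentage_errors.append(100.0)
        else:
            percentage_errors.append(100 * abs(x - f) / x)
    return sum(percentage_errors) / len(percentage_errors)

actuals = [5, 0, 3, 0, 4]       # hypothetical lumpy demand
forecasts = [4, 2, 3, 1, 5]     # hypothetical forecasts
print(f"MAPE: {mape_with_zero_rule(actuals, forecasts):.1f}%")    # 49.0%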

Developing an Operations Forecasting System

There are certain requirements manufacturing managers need to consider for any
forecasting system used in an operations environment. These are described below:

the forecasting models used should generate forecasts for expected demand in a
timely fashion. This means that forecasts should be generated to allow for
ample time for decisions to be made. Typically, short range forecasts could be
made in actual product units, such as by SKUs. Medium to longer range
forecasts should be generated by an aggregate unit measure, such as dollars.

in addition to a point estimate of demand, the system should also generate a


confidence interval, providing an estimate of forecast error. This information is
particularly important for computing inventory and safety stock levels.

the forecasting system should provide past forecast errors, allowing for regular
monitoring of forecast performance. Summary error statistics should be
regularly updated, outliers recorded, and exception reports generated for large
errors.

the forecasting system should provide convenient presentation in both tabular


and graphical format for easy viewing.


the database should allow data presentation by different dimensions, such as by


product, product group, geographic region, customer, distribution channel, and
by time period.

the forecasting system should allow for forecasts to be updated frequently,


allowing decisions to be revised in the light of new information.

the system should allow human judgment to override quantitatively generated


forecasts and keep track of managerial adjustments. Managers occasionally
become aware of events that can significantly impact the forecast. Keeping track
of these adjustments allows for more accurate monitoring of forecast accuracy
and discourages unjustified adjustments.

There are numerous forecasting software packages available to manufacturing


managers and the most sophisticated is not necessarily the best. It should be kept in
mind that simplicity and ease of use are key elements for regular usage. A review of
available software is provided by Yurkiewicz (1993).

Getting the Best From Quantitative and Qualitative Forecasts

The newest thinking on forecasting acknowledges that a combination of both


quantitative and qualitative forecasts may be best under certain conditions (Goodwin
and Wright, 1993; Webby and O'Connor, 1996).

In practice, the easiest and most common way to combine qualitative and quantitative
forecasts is through a qualitative adjustment of quantitatively generated forecasts
(Sanders and Manrodt, 1994). However, this practice can harm the forecast more often
than it can improve it. Managers need to exercise extreme caution when overriding the
system.

As discussed earlier, quantitative models are objective and can process much
information at a time. However, quantitative models are based on the assumption that
past patterns will continue into the future. As such, they do not work well during
periods of change. Qualitative techniques are subject to the biases of human judgment, but
they may perform better during periods of change. The reason is that practitioners can
sometimes bring information to the forecasting process not available to the
quantitative model. This means that primary reliance should be placed on quantitative
forecasts in order to maximize efficiency. However, managerial judgment may be used
to adjust the quantitative forecasts when management becomes aware of events and
information that is not available to the quantitative model.

To adjust quantitative forecasts successfully, the managerial adjustment ought to come


from experienced, knowledgeable practitioners. The managerial adjustments
should be based on specific information that the practitioner becomes aware of. For
example, it is appropriate to adjust the quantitative forecast in anticipation of
increased demand from an advertising campaign that is under way.


The cost of managerial involvement necessary to make qualitative adjustments to


quantitative forecasts is often quite high. Therefore, the benefits in forecast accuracy
need to be cost justified, as managerial time may be better spent elsewhere.
Managerial involvement should be restricted to a small number of important forecasts.
The majority of forecasts should be strictly automated, and the accuracy of all forecasts
needs to be evaluated on a regular basis.

Finally, it must be kept in mind that managerial adjustment is subject to biases of


human judgment. All adjustments should be documented. Keeping accurate records
can help managers see which adjustments are most effective and which are not.

See  Adaptive exponential smoothing (https://doi.org/10.1007/1-4020-0612-8_23);


 Forecast accuracy (https://doi.org/10.1007/1-4020-0612-8_357);  Forecasting
examples (https://doi.org/10.1007/1-4020-0612-8_361);  Forecasting in
manufacturing management (https://doi.org/10.1007/1-4020-0612-8_363);
 Forecasting models – causal (https://doi.org/10.1007/1-4020-0612-8_364);
 Forecasting models – qualitative (https://doi.org/10.1007/1-4020-0612-8_364);
 Forecasting models – quantitative (https://doi.org/10.1007/1-4020-0612-8_365),
 Holt's forecasting model (https://doi.org/10.1007/1-4020-0612-8_409);  Mean
absolute deviation (MAD) (https://doi.org/10.1007/1-4020-0612-8_579);  Mean
percentage error (MPE) (https://doi.org/10.1007/1-4020-0612-8_583);  Forecast
errors (https://doi.org/10.1007/1-4020-0612-8_358);  Winter's forecasting model
(https://doi.org/10.1007/1-4020-0612-8_1050).

References
Armstrong, J.S. (1995). Long-range Forecasting: From Crystal Ball to Computer. New York: John Wiley and Sons.

Armstrong, J.S. and F. Collopy (1992). “Error measures for generalizations about forecasting methods: Empirical comparisons with discussion.” International Journal of Forecasting, 8, 69–80.

Blattberg, R.C. and S.J. Hoch (1990). “Database models and managerial intuition: 50% model and 50% manager.” Management Science, 36, 889–899.

Clemen, R.T. (1989). “Combining forecasts: A review and annotated bibliography.” International Journal of Forecasting, 559–584.

Edmundson, R., Lawrence, M., and M. O'Connor (1988). “The use of non-time series information in sales forecasting: A case study.” Journal of Forecasting, 7, 201–211.

Goodwin, P. and G. Wright (1993). “Improving judgmental time series forecasting: A review of the guidance provided by research.” International Journal of Forecasting, 9, 147–161.

Hogarth, R. (1987). Judgment and Choice. New York: John Wiley & Sons.

Makridakis, S., S. Wheelwright, and V. McGee (1983). Forecasting: Methods and Applications (2nd ed.). New York: John Wiley & Sons.

Makridakis, S., A. Andersen, R. Carbone, R. Fildes, M. Hibon, R. Lewandowski, J. Newton, E. Parzen, and R. Winkler (1982). “The accuracy of extrapolation (time series) methods: Results of a forecasting competition.” Journal of Forecasting, 1, 111–153.

Mentzer, J. and K. Kahn (1995). “Forecasting techniques familiarity, satisfaction, usage and application.” Journal of Forecasting, 14, 465–476.

Sanders, N.R. and K.B. Manrodt (1994). “Forecasting practices in US corporations: Survey results.” Interfaces, 24(2), 91–100.

Sanders, N.R. and L.P. Ritzman (1992). “The need for contextual and technical knowledge in judgmental forecasting.” Journal of Behavioral Decision Making, 5, 39–52.

Webby, R. and M. O'Connor (1996). “Judgmental and statistical time series forecasting: A review of the literature.” International Journal of Forecasting, 12, 91–118.

Yurkiewicz, J. (1993). “Forecasting software: Clearing up a cloudy picture.” OR/MS Today, 2, 64–75.

Copyright information
© Kluwer Academic Publishers 2000
