
MODEL DESIGN AND PLANNING
Chapter 4: Defining Sensitivity and Flexibility Requirements
Introduction:
This chapter discusses what is perhaps the single most important area to consider when planning and designing models: ensuring that one clearly defines (early in the process) the nature of the sensitivity analysis that will be used in decision support, and using this as the fundamental driver of model design.

A generalisation of the “sensitivity-analysis thinking” (SAT) concept focuses on the “flexibility requirements” of the model. This covers functionality beyond standard sensitivities, such as the ability to update a forecasting model with realized figures as they occur, or the facility to introduce new data or data sets without having to perform undue structural modifications.
We will use the term SAT to refer to this general concept of “flexibility and sensitivity thinking”.
Key Issues for Consideration:
Some form of sensitivity-related technique is relevant at all stages of the modelling process. At the model design stage, the focus is of a conceptual (qualitative) nature and seeks to define precisely the sensitivity and flexibility requirements.

As the model is being built, sensitivity analysis can be used:
• to test it for the absence of logical errors
• to ensure that more complex formulae are implemented correctly
• to ensure that the relationships between variables are captured

Once the model is built, sensitivity analysis can be used in the traditional sense, i.e. to better understand the range of possible variation around a point forecast.
Sensitivity Concepts in the Backward Thought and Forward Calculation Processes
The use of SAT is key to ensuring that both the backward and the forward processes are implemented appropriately.
Note: The backward process by itself is not sufficient to fully determine the nature of an appropriate model.

There are typically many ways of breaking down an item into subcomponents.
The use of SAT will help
• to clarify which approach is appropriate, especially relating to the choice of variables that
are used for inputs and intermediate calculations, and the level of detail that makes sense
(since one can run sensitivity analysis only on a model input).
• to ensure that the forward calculations correctly reflect dependencies between the items
(general dependencies or specific common drivers of variability), since sensitivity analysis
will be truly valid only if such dependencies are captured.

Example:

The aim is to calculate the labour cost associated with a project to renovate a house. In the first instance,
a backward thought process is applied to consider possible ways of breaking down the total cost into
components.
Figure 4.1 represents the initial method used, based on a hypothesis that the items shown are the underlying drivers of the total. Figure 4.2 shows an example of a modified model, in which the backward path has been extended to include an hourly labour rate, and the forward calculation path is based on using new underlying base figures (derived so that the new totals for each are the same as the original values).
In addition, one may desire to be able to vary the figures using a percentage variation (as an alternative, or in addition, to varying absolute figures). Figure 4.3 shows an example of how this may be implemented. In a more general case, there may be several underlying factors (or different categories of labour), with some individual items driven by one of these, and other items by another. Figure 4.4 shows an example of this.
In general, when items fall into categories, it may be preferable to build a model which is not structurally constrained by the categories; in other words, one in which the items can be entered in any order (rather than having to be entered by category). This is simple to do by using functions such as INDEX, MATCH and SUMIFS. Figure 4.5 shows an example (a rough sketch of the idea follows below).
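As a rough illustration of the idea (not part of the original example file, and with hypothetical item names, categories and rates), the following Python sketch aggregates labour items by category regardless of the order in which they are entered, analogous to what SUMIFS achieves in a spreadsheet that is not structured by category:

```python
# Hypothetical illustration: labour items can be listed in any order; the
# total hours per category are found by aggregation, much as SUMIFS would
# do in a spreadsheet that is not structured by category.
items = [
    {"task": "Plastering", "category": "Skilled",   "hours": 40},
    {"task": "Painting",   "category": "Unskilled", "hours": 60},
    {"task": "Electrics",  "category": "Skilled",   "hours": 25},
    {"task": "Clearing",   "category": "Unskilled", "hours": 30},
]
hourly_rate = {"Skilled": 35.0, "Unskilled": 18.0}  # assumed hourly labour rates

category_hours = {}
for item in items:  # the order of entry does not matter
    category_hours[item["category"]] = category_hours.get(item["category"], 0) + item["hours"]

total_cost = sum(hours * hourly_rate[cat] for cat, hours in category_hours.items())
print(category_hours, total_cost)
```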
Another important case is that of models with a time axis where an important question is
whether the assumptions used for the forecast (e.g. for the growth rate in revenues) should
be individual to each time period, or common to several time periods:

• A separate assumption in each period can be cumbersome and inhibit sensitivity analysis

• A single assumption that applies to all future periods may be too crude (and unrealistic),
resulting in an excessively high sensitivity of the output to the input value.

A compromise approach, in which there are several growth rates, each applied to several
periods, is often the most appropriate. This can also be considered as a “parameter
reduction”, i.e. the number of inputs is reduced to a more manageable level, whilst aiming
to retain sufficient accuracy.
Figure 4.6 shows an example, in which there is a single revenue growth assumption applying to years 1–3, a single assumption applying to years 4–5, and a single assumption applying to years 6–10.
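As a minimal sketch of such a parameter reduction (the base revenue and growth figures are hypothetical, not those of Figure 4.6), three growth-rate inputs drive a ten-year forecast instead of ten separate period assumptions:

```python
# Hypothetical sketch of "parameter reduction": three growth-rate inputs
# drive a ten-year revenue forecast instead of ten separate assumptions.
base_revenue = 100.0
growth_by_year = (
    [0.10] * 3    # single assumption applied to years 1-3
    + [0.07] * 2  # single assumption applied to years 4-5
    + [0.04] * 5  # single assumption applied to years 6-10
)

revenues = []
level = base_revenue
for g in growth_by_year:
    level *= 1 + g
    revenues.append(round(level, 1))
print(revenues)
```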
Time Granularity
Where models have a time component (such as each column representing a time period), it
is important to consider the granularity of the time axis (such as whether a column is to
represent a day, a month, a quarter or a year, and so on).

It is generally better to build the model so that the granularity of the time axis is at least as
detailed as that required for the purposes of development of the formulae and results
analysis.

For example:
• If one may wish to delay some cash flows by a month, then a monthly model should be considered.
• If the refinancing conditions for a bank or project loan are to be verified quarterly, then a model which forecasts whether such conditions will be met should generally be built to be at least quarterly.
The benefits of increasing granularity
• Models with a very granular time axis can be used to give the relevant figures for longer periods (by summation); see the short sketch below.
• Conversely, it is harder to validly allocate aggregate figures (e.g. an annual figure) into their components (such as monthly figures), since the effect of growth or other factors would lead to non-equal values in the component periods.
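As a minimal sketch of the summation point (the monthly figures are hypothetical), aggregating a granular model upwards is straightforward, whereas splitting an annual figure into months would require assumptions about its profile within the year:

```python
# Hypothetical monthly model: annual figures follow by simple summation.
monthly_cash_flow = [8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13]  # one year
annual_cash_flow = sum(monthly_cash_flow)

# Going the other way (annual -> monthly) would require an assumed profile
# (e.g. equal twelfths), which growth within the year would make invalid.
print(annual_cash_flow)
```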

The disadvantages of increasing granularity


• Models with a very detailed time axis become large and cumbersome to maintain, whilst not
necessarily offering sufficient additional accuracy.
• It may be hard to calibrate the model by finding or estimating input data that is itself required
to be very granular.
• One may be required to forecast the time allocation of items that may be difficult to assess at
that level of detail.
Level of Detail on Input Variables
There is an optimal level of detail for input variables. The appropriate level of detail will closely relate to the nature of the sensitivities to be conducted, as well as to the data requirements and sources.

A model that is more detailed may not have better predictive power than a less detailed one:
• When there are more variables, the number of possible dependencies between them
becomes large, whilst the formulae required to capture such dependencies will become
complex or may simply be overlooked. The sensitivity analysis of the results would be
inaccurate, and the predicted ranges of variation would be incorrect (either too wide or
too narrow).
• It may be hard to calibrate the input values, simply because data (or the ability to judge
or make estimates) is not available at that level of detail.
Thus, the appropriate level of detail is closely related both to the nature of the sensitivities to be
conducted, and to the nature and availability of data that will be used to populate it.

The appropriate level of granularity may be one that uses the detailed information explicitly, as shown in Figure 4.4 (see also Figure 4.7, which is contained in the example file referred to earlier).
Sensitising Absolute Values or Variations from Base Cases
At the model design stage, it is useful to consider explicitly whether sensitivity analysis will be
performed on an absolute or on a variation (change) basis.
• In the first approach, the value of a model’s output is shown as an input takes each of a pre-defined set of values.
• In the second approach, the output is shown for a set of input values corresponding to a variation from the base case.

The advantage of the variation approach


• The position of the base case within the sensitivity table is fixed (even if its underlying
value has been updated), so that sensitivity tables can be formatted to highlight the base
case. For example, Figure 4.8 shows two sensitivity tables in the context of the earlier
labour-cost model.

• In the latter approach (which uses a percentage variation), the base case position is fixed (at 0% variation), even if other assumption values are updated (such as the base unit labour cost).

When using the variation approach (whether absolute or percentages), the variation is an
additional model input, which must be used together with the original absolute input figure
within the calculations.
The percentage-variation approach has particular appeal, as it may correspond closely to
how many decision-makers think.
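A minimal sketch of this mechanism (with hypothetical names and figures): the percentage variation is an input in its own right, and the value actually used in the calculations combines it with the base-case figure.

```python
# Hypothetical sketch: the percentage variation is an input in its own right,
# applied to the base-case figure to give the value used in the calculations.
base_unit_labour_cost = 25.0   # base-case assumption
hours_required = 160

def total_labour_cost(pct_variation: float) -> float:
    unit_cost_used = base_unit_labour_cost * (1 + pct_variation)
    return unit_cost_used * hours_required

# A one-way sensitivity table: the base case always sits at the 0% row,
# even if the underlying base-case value is later updated.
for v in (-0.10, -0.05, 0.0, 0.05, 0.10):
    print(f"{v:+.0%} -> {total_labour_cost(v):,.0f}")
```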

Main disadvantages of the percentage-variation approach
• The potential for error and/or confusion in cases where the base case values are percentage figures.
• Cases where base case values may themselves be zero (so that any percentage variation results in the same value).
Scenarios Versus Sensitivities
At the model design stage, it is also worth reflecting on whether the sensitivity analysis will
be conducted using scenarios or not.

Uses of Scenarios
• Scenarios are used most typically where it is desired to vary three or more input values simultaneously.
• They can also be used to reflect possible dependencies between two or more inputs. This can be useful where the relationship between the variables is not well understood and cannot be represented with simple formulae.

For example, it may be difficult to express the volume of a product that might be sold
for every possible value of the price, but market research could be used to establish this
at several price points, with each volume-price combination forming a possible
scenario.

When scenarios are used, tables of data to define the scenarios will be required, and their placement will affect the model design, layout and construction. If the need for scenarios is not considered early in the process, then the model may later require significant rework, or indeed be structured in an inappropriate way.
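As a rough sketch of how such a scenario table might drive the model (the scenario names, prices and volumes are hypothetical), each scenario bundles a price-volume combination, capturing a dependency that is not expressed as a formula:

```python
# Hypothetical scenario table: each scenario pairs a price with the volume
# that market research suggests would be sold at that price.
scenarios = {
    "Low price":  {"price": 8.0,  "volume": 12000},
    "Base case":  {"price": 10.0, "volume": 10000},
    "High price": {"price": 12.0, "volume": 7500},
}

selected = "Base case"          # a single switch selects the live scenario
inputs = scenarios[selected]
revenue = inputs["price"] * inputs["volume"]
print(selected, revenue)
```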
Uncertain Versus Decision Variables
Frequently, no consideration is given as to whether a change in the value of an input variable corresponds to something over which one has control or not.
For example, the price at which to launch a new product is something that one can control
(or choose), whereas the price that one pays for oil is (generally) not.

Thus, there are two generic types of input variables:
• Those which represent items that one can control (i.e. ones for which there is a choice), with a resulting issue being how to choose them in an optimal way.
• Those which represent items that one cannot control, and which are therefore uncertain or associated with risk.
It is important to reflect on which category each input belongs to:
• Explicitly considering the distinction between the roles of the variables helps to ensure that one develops additional insight into the situation, and into the levers that may be used to affect a decision.
• It will often affect the best way to lay out the model, so that items of a similar type (optimisation versus uncertain variables) are grouped together where possible, or are perhaps formatted differently.

• The logic within the model may need to be adapted, potentially quite significantly.
For example, if it is desired to find the price at which to sell a product in order to
maximise revenues, one will need to capture (in the logic of the model) the
mechanism by which volume decreases as price is increased.
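A minimal sketch of this (with an assumed, purely illustrative linear demand relationship): the price is a decision variable, and the model must contain the mechanism by which volume falls as price rises before the revenue-maximising price can be found.

```python
# Hypothetical linear demand: volume sold falls as the price is increased.
def volume(price: float) -> float:
    return max(0.0, 20000 - 1200 * price)  # purely illustrative coefficients

def revenue(price: float) -> float:
    return price * volume(price)

# Simple search over candidate prices for the revenue-maximising choice.
candidate_prices = [p / 10 for p in range(50, 151)]  # 5.00 to 15.00 in 0.10 steps
best_price = max(candidate_prices, key=revenue)
print(best_price, revenue(best_price))
```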
Increasing Model Validity Using Formulae

A model is valid only within an implicit context.


Examples of assumed contexts include:
• The geographic location of a construction project.
• The timing of the start of production relative to that of the construction phase of a
project.
• The timing of one set of cash flows relative to another.
• The composition of items which determine the cost structure.
• The range of profitability of a business.
• The interest rate earned on cash balances.
• The model is to be applied only to decisions relating to the planet Earth.
Even where an effort is made to document a model, frequently no distinction is made between items which are within the model (“model assumptions”) and those which are about the model (“contextual assumptions”).

Model assumptions are typically numerical values (sometimes text fields also act as inputs), which the model’s calculations should update correctly if these values are altered (e.g. to conduct a sensitivity analysis).

Contextual assumptions are those which limit the validity of a model, and so cannot be validly changed within the existing model.

The creation of flexibility in a model often involves increasing its validity by adapting it so that contextual (fixed or implicit) assumptions are replaced by genuine numerical ones. In such cases, the (fixed) context of the original model is generally simply a special case (possible scenario) within the new model.
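As a hedged illustration (with hypothetical names and figures), a timing that was previously implicit in the layout of the formulae, i.e. a contextual assumption, can be replaced by a numerical input, so that the original model becomes just one case of the more general one:

```python
# Hypothetical sketch: a delay that was previously implicit in the layout of
# the formulae (production starts 24 months after construction begins) is
# replaced by a numerical input; the original model is just the case delay=24.
months = 60
production_delay_months = 24        # now an explicit, sensitisable input
monthly_production_cash_flow = 50.0

cash_flows = [
    monthly_production_cash_flow if m >= production_delay_months else 0.0
    for m in range(months)
]
print(sum(cash_flows))
```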
In fact, the creation of models with insufficient flexibility in their formulae arises most often due to inadequate consideration, a lack of knowledge, or a lack of capability in several areas, including:
• How structural (contextual) limitations may be replaced by appropriate numerical
assumptions.
• The sensitivities that decision-makers would like to see.
• How variations in multiple items may interact.
• How to implement the formulae or create model flexibility using Excel or VBA.
THANK YOU!
Reporters:
Cabrera, Krisnill
Cepeda, Helen Rose
Dionela, Rio Aijilette
Espino, Donna Jean
Gonzales, Jenny Angel Mae
