J.E. Beasley
MN3032
2014
Undergraduate study in
Economics, Management,
Finance and the Social Sciences
This subject guide is for a 300 course offered as part of the University of London
International Programmes in Economics, Management, Finance and the Social Sciences.
This is equivalent to Level 6 within the Framework for Higher Education Qualifications in
England, Wales and Northern Ireland (FHEQ).
For more information about the University of London International Programmes
undergraduate study in Economics, Management, Finance and the Social Sciences, see:
www.londoninternational.ac.uk
This guide was prepared for the University of London International Programmes by:
J.E. Beasley, Professor, Brunel University, London.
Acknowledgements to Ms R. Beasley for her diagrams.
This is one of a series of subject guides published by the University. We regret that due to
pressure of work the author is unable to enter into any correspondence relating to, or arising
from, the guide. If you have any comments on this subject guide, favourable or unfavourable,
please use the form at the back of this guide.
Contents
Introduction
  Terminology
  Aims
  Learning outcomes
  Syllabus
  Reading advice
  Online study resources
  Recommended study time
  How to use the resources for this course
  Examination advice
  Using software for the course
  Excel spreadsheets for the course
  Concluding remarks
  List of abbreviations used in this subject guide
Chapter 1: Methodology
  Essential reading
  Aims of the chapter
  Learning outcomes
  Introduction
  Evolution
  Two Mines company
  Discussion
  Philosophy
  Certainty versus uncertainty
  Phases of an OR project
  Methodological issues
  Benefits
  Links to other chapters
  Case studies
  A reminder of your learning outcomes
  Sample examination questions
Chapter 2: Problem structuring and problem structuring methods
  Essential reading
  Aims of the chapter
  Learning outcomes
  Introduction
  Problem structuring methods
  Strategic options development and analysis (SODA) and JOURNEY (JOintly Understanding, Reflecting, and NEgotiating strategY) Making
  Soft systems methodology (SSM)
  Strategic choice (SC)
  Choosing and applying PSMs
  Education
  Links to other chapters
  Case studies
MN3032 Management science methods
Introduction
Terminology
Management science is also known as operational research (OR). Some of
you may have met the terms management science (MS), operational/operations
research (OR) and OR/MS before; often some, or all, of these terms are used
interchangeably. In this subject guide we use the term OR throughout.
Analytics (or data analytics) is another term that deals with the same
issues as we consider in OR – we have data; we have decisions to be made;
how can we analyse the data to help us make an appropriate decision?
Aims
The aims and objectives of this course are to:
• enable you to see that many managerial decision-making situations
can be addressed using standard techniques and problem structuring
methods
• provide a comprehensive and concise introduction to the key
techniques and problem structuring methods used within management
science that are directly relevant to the managerial context
• enable you to see both the benefits, and limitations, of the techniques
and problem structuring methods presented.
Learning outcomes
On completion of the course, you should be able to:
• discuss the main techniques and problem structuring methods used
within management science
• critically appraise the strengths and limitations of these techniques and
problem structuring methods
• carry out simple exercises yourself using such techniques and problem
structuring methods (or explain how they should be done)
• commission more advanced exercises.
Syllabus
The topics dealt with in this course (in chapter order) are:
Problem structuring and problem structuring methods: problem
structuring methods such as JOURNEY (JOintly Understanding, Reflecting,
and NEgotiating strategY) making, Soft Systems Methodology and Strategic
Choice.
Network analysis: planning and control of projects via the critical path;
float (slack) times, cost/time trade-off, uncertain activity completion times
and resource considerations.
Decision making under uncertainty: approaches to decision
problems where chance (probability) plays a key role; pay-off tables;
decision trees; utilities and expected value of perfect information.
Inventory control: problems that arise in the management of inventory
(stock); Economic Order Quantity, Economic Batch Quantity, quantity
discounts, probabilistic demand, Materials Requirements Planning, Just-in-
Time, Optimised Production Technology and supply chain issues.
Markov processes: approaches used in modelling situations that evolve
in a stochastic (probabilistic) fashion though time; systems involving both
non-absorbing and absorbing states.
Mathematical programming formulation: the representation of
decision problems using linear models with a single objective which is to be
optimised; the formulation of both linear programs and integer programs.
Linear programming solutions: the solution of linear programs; the
numeric solution of two variable linear programs, sensitivity analysis and
robustness.
Data envelopment analysis: assessing the relative efficiency of
decision-making units in organisations; input/output definitions, basic
efficiency calculations, reference sets, target setting and value judgements.
Multicriteria decision making: approaches to decision problems that
involve multiple objectives; analytic hierarchy process which considers the
problem of making a choice, in the presence of complete information, from
a finite set of discrete alternatives; goal programming which considers, via
linear programming, multicriteria decision problems where the constraints
are ‘soft’.
Queueing theory and simulation: the representation and analysis
of complex stochastic systems where queueing is a common occurrence;
M/M/1 queue; discrete event simulation.
Reading advice
Essential reading
There are two texts associated with the readings given at various points
throughout this subject guide. These are:
Anderson, D.R., D.J. Sweeney, T.A. Williams and M. Wisniewski An introduction
to management science: quantitative approaches to decision making
(Andover: Cengage Learning EMEA, 2014) second edition
[ISBN 9781408088401].
Rosenhead J. and J. Mingers (eds) Rational analysis for a problematic world
revisited: problem structuring methods for complexity, uncertainty and
conflict. (Chichester: John Wiley, 2001) second edition
[ISBN 9780471495239].
Both of these books are recommended for purchase/reference and will be
listed as ‘Anderson’ and ‘Rosenhead’ respectively throughout this guide.
Anderson is available to purchase as separate chapters from:
http://edu.cengage.co.uk/catalogue/product.aspx?isbn=1408088401
Detailed reading references in this subject guide refer to the editions of the
set textbooks listed above. New editions of one or more of these textbooks
may have been published by the time you study this course. You can use
a more recent edition of any of the books; use the detailed chapter and
section headings and the index to identify relevant readings. Also check
the VLE regularly for updated guidance on readings.
Case studies
In each chapter we have listed a number of case studies. You will see
that these are often quite short. The idea here is that we expose you to a
number of different practical situations where the tools and techniques
which you studied have been applied. We would encourage you to read
these case studies to see the range of areas to which the topics presented
in this subject guide have been utilised.
The VLE
The VLE, which complements this subject guide, has been designed to
enhance your learning experience, providing additional support and a sense
of community.
Throughout the text you will find exercises exploring the topics raised and
their applicability in real life, or asking for a calculation to check that
you can correctly apply a technique. These are labelled ‘Activity’. We
would strongly encourage you to try these activities for yourself.
Examination advice
Important: the information and advice given here are based on the
examination structure used at the time this guide was written. Please
note that subject guides may be used for several years. Because of this
we strongly advise you to always check both the current Regulations
for relevant information about the examination, and the VLE where you
should be advised of any forthcoming changes. You should also carefully
check the rubric/instructions on the paper you actually sit and follow
those instructions.
The examination is three hours long. You will have to answer four
questions (all carrying equal marks) from a choice of eight questions.
It should be noted here that you will be expected to do all the calculations
in the examination by hand and that no computers or software (such as
Excel) will be available to assist you. However, you are permitted to take
an appropriate (basic, non-programmable) calculator into the examination
for this subject. This calculator must comply in all respects with
the specification given in the Regulations.
Remember, it is important to check the VLE for:
• up-to-date information on examination and assessment arrangements
for this course
• where available, past examination papers and Examiners’ commentaries
for the course which give advice on how each question might best be
answered.
Concluding remarks
Even a brief glance at the textbooks associated with this subject guide
reveals that there is much more to OR than we have explicitly considered
here. Inevitably in producing a subject guide of this type some topics have
to be excluded, either because they are less important, or because they are
better taught and appreciated using a more interactive approach involving
PC software and face-to-face tuition.
Nevertheless we believe that there is sufficient material in this subject
guide for you to have gained a clear idea of what OR is about and its value
in improving decision making.
We wish you well in your examination and in applying OR to improve the
quality of decision making in your future career!
Chapter 1: Methodology
Essential reading
Anderson, Chapter 1, section start–1.4.
Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• explain the philosophy underlying the reasons for mathematical
modelling of problems
• describe the phases of an OR project
• explain the philosophy underlying OR
• explain how OR is carried out (i.e. the client/consultant role)
• discuss consultancy, cost versus decision quality, optimisation and
implementation in the context of OR work
• discuss the benefits of an OR approach to decision problems.
Introduction
We hope in this chapter to illustrate to you that decision-making situations
can be transformed from a (perhaps imprecise) verbal description to
a precise mathematical description. This transformation, although
involving the use of mathematics, does not usually demand a high level
of mathematical skill. The chapter begins with a brief introduction to
the evolution of OR. We then actually do some OR by considering a very
simple decision problem. We highlight some general lessons and concepts
from this specific example. We then discuss some methodological issues
that arise in OR work.
I would like to emphasise here that OR is (in my view)
a subject/discipline that has much to offer in making a
difference in the real world. OR can help you to make better
decisions and it is clear that there are many, many people and
companies out there who need to make better decisions.
Evolution
OR is a relatively new discipline. Whereas in 1930 it would have been
possible to study mathematics, physics or engineering (for example) at
university, it would not have been possible to study OR; indeed, the term
OR did not exist then. It started in the UK as an organised form of research
just before the outbreak of the Second World War in 1939. Scientists were
attempting to make operational use of radar data (radar only just having
been developed) for the air defence of the UK. The term ‘operational
research’ (RESEARCH into (military) OPERATIONS) was coined as a
suitable description for this new branch of applied science.
During the Second World War, OR developed both in the UK and in the
USA and was used in many different situations to help determine effective
operational methods (e.g. how large convoys carrying food and other
supplies across the Atlantic should be organised to minimise the number of
ships lost). By the end of the war in 1945 OR was well established in the
armed services in both the UK and the USA.
Although scientists had (plainly) been involved in the hardware side of
warfare (designing better planes, bombs, tanks, etc), scientific analysis
of the operational use of military resources had never taken place in a
systematic fashion before the Second World War. Military personnel were
simply not trained to undertake such analysis.
These early OR workers came from many different disciplines; one UK
group consisted of a physicist, two physiologists, two mathematical
physicists and a surveyor. What such people brought to their work
were ‘scientifically trained’ minds, used to querying assumptions, logic,
exploring hypotheses, devising experiments, collecting data, analysing
numbers, etc. Many too were of high intellectual calibre (at least four UK
wartime OR personnel were later to win Nobel prizes when they returned
to their peacetime disciplines).
Features of this early OR work were:
• the scientific basis of the work and of the people involved in doing it
• work was carried out by a team of individuals, that team often being
made up of individuals from different scientific disciplines
• work was organised into projects (specific pieces of work with explicit
terms of reference to be completed in a set time)
• the relationship between the OR worker (or team) and the decision
maker, where the OR worker/team carried out the project but the
decision maker implemented any solution and bore responsibility for
its success or failure
• the use of data collection to develop an understanding of the problem
under investigation
• the need for OR workers to work with all ranks (both junior and
senior) within the organisation.
Many of these features are still present in current OR work. One feature
that has (inevitably) decayed over time, however, is that as the subject
knowledge base of OR has expanded, present-day OR teams typically
do not include individuals from different scientific disciplines. Instead a
team might well contain individuals who have received some specialised
university level education (at undergraduate or Masters level) in OR.
In 1945, following the end of the war, OR took a different course in the
UK to that in the USA. In the UK many of the OR workers returned to
their original peacetime academic disciplines. As such, OR did not spread
particularly well, except for a few isolated industries (iron/steel and coal).
In the USA, OR spread to the universities so that systematic training in OR
for future workers began. Nowadays of course OR can be found worldwide.
It is perhaps worth stating here that activities that would, in a modern
light, be viewed as OR had occurred before the 1930s. For example, the
Economic Order Quantity formula (dealt with in Chapter 5), which helps
decide how much stock a company should order from a supplier, is believed
to date from the early 1900s. However, it was only from the 1930s onwards
that OR really established itself as a recognised professional activity and
as a coherent scientific discipline.
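The Economic Order Quantity formula mentioned here is covered properly in Chapter 5. Purely as a preview, a minimal Python sketch (the variable names and the illustrative numbers are my own, not from the guide) of the classic square-root formula Q* = √(2DK/h), where D is annual demand, K the fixed cost per order and h the holding cost per unit per year:

```python
from math import sqrt

def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Economic Order Quantity: the order size minimising the sum of
    ordering and holding costs, Q* = sqrt(2 * D * K / h)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

# Illustrative numbers (not from the guide): demand of 1,000 units/year,
# £50 fixed cost per order, £2 per unit per year to hold stock.
q = eoq(1000, 50, 2)
print(round(q, 1))  # → 223.6 units per order
```

Ordering roughly this quantity each time balances ordering cost against holding cost; Chapter 5 develops the assumptions behind the formula.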
Activity
Explore the internet to see if universities in your own country offer courses in operations
research or management science.
Two Mines company
The Two Mines company owns two mines, X and Y, which produce ore that is
graded as high, medium or low grade. The company has a contract to supply
a smelting plant with 12 tonnes of high-grade, 8 tonnes of medium-grade
and 24 tonnes of low-grade ore per day. Mine X costs 180 (£’000) per day
to operate and produces 6 tonnes of high-grade, 3 tonnes of medium-grade
and 4 tonnes of low-grade ore per day. Mine Y costs 160 (£’000) per day to
operate and produces 1 tonne of high-grade, 1 tonne of medium-grade and 6
tonnes of low-grade ore per day. Each mine can be operated for at most 5
days a week. How many days per week should each mine be operated so as to
fulfil the smelting plant contract at minimum cost?
Note:
• This is clearly a very simple (even simplistic) example but, as with
many things, we have to start at a simple level in order to progress to a
more complicated level.
• This is a decision problem (we have to decide something); many
of the techniques/topics you will meet in this subject guide address
decision problems.
Activity
Consider this problem by yourself for 10 minutes. What answer do you come up with
for the number of days per week each mine should be operated? What is the associated
cost? Write your answer here for later reference.
Guessing
To explore the Two Mines problem further we might simply guess (i.e. use
our (managerial) judgement) how many days per week to work and see
how any guesses we make work.
Work one day a week on X, one day a week on Y
This does not seem like a good guess as it results in only 7 tonnes a day of
high-grade, insufficient to meet the contract requirement for 12 tonnes of
high-grade a day. We say that such a solution is infeasible.
Work 4 days a week on X, 3 days a week on Y
This seems like a better guess as it results in sufficient ore to meet the
contract. We say that such a solution is feasible. However, at a cost of
1,200 (£’000) it is quite expensive.
Rather than continue guessing we can approach the problem in a
structured logical fashion as below. Ideally we would like a solution
that supplies what is necessary under the contract at minimum cost.
Logically such a minimum cost solution to this decision problem must
exist. However, even if we keep guessing we can never be sure whether we
have found this minimum cost solution or not. Fortunately our structured
approach will enable us to find the minimum cost solution.
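The guessing process just described can be mechanised. A small Python sketch (the helper name is my own, not from the guide) that checks whether a guessed pair of days per week meets the contract, and what it costs, using the Two Mines data:

```python
def check_guess(x: float, y: float):
    """Check a guessed pair of days per week (x for mine X, y for mine Y)
    against the smelting plant contract, using the Two Mines data."""
    high = 6 * x + 1 * y    # tonnes per day of high-grade ore produced
    medium = 3 * x + 1 * y  # medium-grade
    low = 4 * x + 6 * y     # low-grade
    feasible = (high >= 12 and medium >= 8 and low >= 24
                and 0 <= x <= 5 and 0 <= y <= 5)
    cost = 180 * x + 160 * y  # operating cost in £'000 per week
    return feasible, cost

print(check_guess(1, 1))  # → (False, 340): only 7 tonnes of high-grade
print(check_guess(4, 3))  # → (True, 1200): feasible, but expensive
```

Such a checker can tell us whether any particular guess is feasible and how much it costs, but it cannot, by itself, tell us that no cheaper feasible solution exists; that is what the structured approach provides.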
Variables
These represent the ‘decisions that have to be made’ or the ‘unknowns’.
Let:
x = number of days per week mine X is operated
y = number of days per week mine Y is operated
Note here that x ≥ 0 and y ≥ 0.
Constraints
It is best to first put each constraint into words and then express it in a
mathematical form.
• Ore production constraints – balance the amount produced with the
quantity required under the smelting plant contract.
High 6x + 1y ≥ 12
Medium 3x + 1y ≥ 8
Low 4x + 6y ≥ 24
• Note we have an inequality here rather than an equality. This implies
that we may produce more of some grade of ore than we need. In fact,
we have the general rule: given a choice between an equality
and an inequality, choose the inequality.
• For example – if we choose an equality for the ore production
constraints we have the three equations 6x + y = 12, 3x + y = 8 and
4x + 6y = 24 and there are no values of x and y which satisfy all three
equations (the problem is therefore said to be ‘over-constrained’). For
example, the values of x and y which satisfy 6x + y = 12 and 3x + y = 8
are x = 4/3 and y = 4, but these values do not satisfy 4x + 6y = 24.
• The reason for this general rule is that choosing an inequality rather
than an equality gives us more flexibility in optimising (maximising or
minimising) the objective (deciding values for the decision variables
that optimise the objective).
• Days per week constraint – we cannot work more than a certain
maximum number of days a week; for example, for a 5-day week we
have:
x ≤ 5
y ≤ 5
• Constraints of this type are often called implicit constraints because
they are implicit in the definition of the variables.
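The ‘over-constrained’ point made above is easy to verify numerically. A short Python sketch solving the first two equalities by elimination and showing that the result violates the third:

```python
# Solve 6x + y = 12 and 3x + y = 8: subtracting the second from the
# first gives 3x = 4, so x = 4/3, and then y = 12 - 6x = 4.
x = 4 / 3
y = 12 - 6 * x
assert abs(3 * x + y - 8) < 1e-9  # the second equality holds too

# The third equality 4x + 6y = 24 fails at this point:
print(4 * x + 6 * y)  # → 29.333..., not 24
```

Hence insisting on equalities leaves no point satisfying all three constraints at once, which is why the inequality form is preferred.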
Objective
Again in words our objective is (presumably) to minimise cost which is
given by:
180x + 160y
Hence we have the complete mathematical representation of the problem as:
minimise 180x + 160y
subject to
6x + y ≥ 12
3x + y ≥ 8
4x + 6y ≥ 24
x ≤ 5
y ≤ 5
x, y ≥ 0
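Later in the guide two-variable linear programs are solved properly. Purely as an illustration, a brute-force Python sketch (my own code, not part of the guide) that enumerates the intersection points of pairs of constraint boundary lines and keeps the cheapest feasible one; for a two-variable LP an optimal solution, when one exists, occurs at such a vertex:

```python
from itertools import combinations

# Each constraint written as a*x + b*y >= c:
# 6x+y>=12, 3x+y>=8, 4x+6y>=24, x<=5 as -x>=-5, y<=5 as -y>=-5, x>=0, y>=0
constraints = [(6, 1, 12), (3, 1, 8), (4, 6, 24),
               (-1, 0, -5), (0, -1, -5), (1, 0, 0), (0, 1, 0)]

def feasible(x, y, tol=1e-9):
    return all(a * x + b * y >= c - tol for a, b, c in constraints)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel boundary lines: no intersection vertex
    # Cramer's rule for the 2x2 system of boundary equalities
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        cost = 180 * x + 160 * y
        if best is None or cost < best[0]:
            best = (cost, x, y)

cost, x, y = best
print(f"x = {x:.4f}, y = {y:.4f}, cost = {cost:.2f} (£'000)")
# → x = 1.7143, y = 2.8571, cost = 765.71 (£'000)
```

A graphical or simplex-based method, covered later, reaches the same minimum cost solution far more efficiently than this exhaustive check.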
Activity
Suppose now that there is a third mine Z, costing 120 (£’000) per day and producing 0.5
tonnes of high-grade, one tonne of medium-grade and nine tonnes of low-grade ore per
day. What would the formulation of the problem now be?
Discussion
There are a number of points to note here:
• A key issue behind formulation is that it makes you think. Even if
you never do anything with the mathematics this process of trying to
think clearly and logically about a problem can be very valuable.
• A common problem with formulation is to overlook some constraints
or variables and the entire formulation process should be regarded
as an iterative one (iterating back and forth between variables/
constraints/objective until we are satisfied).
• The mathematical problem given above has the form:
• all variables continuous (i.e. can take fractional values)
• a single objective (maximise or minimise)
• the objective and constraints are linear, i.e. any term is either a
constant or a constant multiplied by an unknown (e.g. 24, 4x, 6y
are linear terms but xy is a non-linear term).
• Any formulation which satisfies these three conditions is called a
linear program (LP). As we shall see later LPs are important.
• We have (implicitly) assumed that it is permissible to work in fractions
of days – problems where this is not permissible and variables
must take integer values will be dealt with under integer
programming (IP).
• Often (strictly) the decision variables should be integer but for reasons
of simplicity we let them be fractional. This is especially relevant in
problems where the values of the decision variables are large because
any fractional part can then usually be ignored (note that often the
data (numbers) that we use in formulating the LP will be inaccurate
anyway).
• The way the complete mathematical representation of the problem is
set out above is the standard way (with the objective first, then the
constraints and finally the reminder that all variables are ≥0).
Considering the Two Mines example given above:
• This was a decision problem.
• We have taken a real-world situation and constructed an equivalent
mathematical representation – such a representation is often called a
mathematical model of the real-world situation (and the process by
which the model is obtained is called formulating the model).
• Just to confuse things the mathematical model of the problem is
sometimes called the formulation of the problem.
• Having obtained our mathematical model we (hopefully) have some
quantitative method which will enable us to numerically solve the
model (i.e. obtain a numeric solution) – such a quantitative method
is often called an algorithm for solving the model. Essentially an
algorithm (for a particular model) is a set of instructions which, when
followed in a step-by-step fashion, will produce a numeric solution
to that model. Many algorithms for OR problems are available in
computer packages.
• Our model has an objective, that is something which we are trying to
optimise.
• Having obtained the numeric solution of our model we have to
translate that solution back into the real-world situation.
Activity
Think of a number of real-world business systems of which you are aware. Do you see
scope for OR to make a difference to those systems or not?
Philosophy
In general terms we can regard OR as being the application of scientific
methods/thinking to decision making. Underlying OR is the philosophy
that:
• decisions have to be made
• using a quantitative (explicit, articulated) approach will lead (on
average) to better decisions than using non-quantitative (implicit,
unarticulated) approaches (such as those used by human decision
makers).
Indeed it can be argued that although OR is imperfect it offers the best
available approach to making a particular decision in many instances
(which is not to say that using OR will produce the right decision).
Often the human approach to decision making can be characterised
(conceptually) as the ‘ask Fred’ approach: simply give Fred (‘the expert’)
the problem and relevant data, shut him in a room for a while and wait for
an answer to appear.
The difficulties with this approach are:
• speed (cost) involved in arriving at a solution
• quality of solution – does Fred produce a good quality solution in any
particular case
• consistency of solution – does Fred always produce solutions of the
same quality (this is especially important when comparing different
options).
You can form your own judgement as to whether OR is better than this
approach or not.
Activity
Form your own judgement as to whether OR is better than the ‘ask Fred’ approach or not.
Can you think of problems you have solved using the ‘ask Fred’ approach?
Activity
What do you think is the minimum cost solution to the Two Mines problem? Record it
here for reference.
Phases of an OR project
Drawing on our experience with the Two Mines problem we can identify
the phases that a (real-world) OR project might go through. We are not
suggesting here that all OR projects go through the phases shown below,
rather that the phases shown are a sufficiently good description of the
phases that many projects go through to merit consideration.
This issue of the data environment can affect the model that you build. If
you believe that certain data can never (realistically) be obtained there is
perhaps little point in building a model that uses such data.
Activity
Have you ever met the data barrier in your own work? Think of an environment which you
consider to be data-poor and an environment which you consider to be data-rich.
Activity
Think of any decision you may have made based on analysing numbers. Did you conduct
sensitivity analysis to see if your decision would be different if the numbers changed or
not? If not, why not?
Phase 5: Implementation
This phase is implementation: that is, making a difference (hopefully
for the better!) in the real world.
It may involve the implementation of the results of the study or the
implementation of the algorithm for solving the model as an operational
tool (usually in a computer package). In the first instance detailed
instructions as to what has to be done (including time schedules) to
implement the results must be issued. In the second instance operating
manuals and training schemes will have to be produced for the effective
use of the algorithm as an operational tool.
Activity
Think of a business situation where you were instrumental in making a difference in the
real world. Was this difference really for the better or not and why?
Note here that although we have presented the five phases above in a
sequential fashion, in practice we might well switch between phases as
and when the need dictates. For example we might find at Phase 4 when
we examine numeric solution values that we have made an error in our
formulation of a mathematical model at Phase 2 and so we need to loop
back to that phase.
It is believed that many OR projects that successfully pass through the first
four phases given above fail at Phase 5, the implementation stage (i.e. the
work that has been done does not have a lasting effect). As a result one
topic that has received attention in terms of bringing an OR project to a
successful conclusion (in terms of implementation) is the issue of client
involvement. This means keeping the client (the sponsor/originator
of the project) informed and consulted during the course of the project
so that they come to identify with the project and want it to succeed.
Achieving this is really a matter of experience. However, we believe that,
as with many things in life, some useful insights can be gained from the
written word (as opposed to real-life experience) and for this reason we
discuss this issue of implementation further below.
Methodological issues
There are a number of methodological issues that arise in OR work that
we need to consider here. These relate to:
• consultancy
• cost versus decision quality
• optimisation
• implementation.
We discuss each in turn below.
Consultancy
It often happens that there is a client who has some problem on which
they need help and they decide to call in an ‘expert’ to provide that help.
The ‘expert’ is called a consultant and the process in which they engage
(tackling the client’s problem) is called consultancy. Clearly a client might
well engage an OR worker as a consultant (for example, because the OR
worker has skills that the client lacks). Clients can be drawn from a wide
range of organisations (for example, private companies, public companies,
and governmental departments).
There is no consultant without a client, and the consultant needs to be
clear who the client is, the nature of the problem, and what kind of help is
needed. Often the answer to these questions will be covered in some form
of contract between the client and consultant. However, this may change
over the course of the project and therefore should be kept under review.
Close contact between the client and the consultant is a key determinant
of the success of the project and will be discussed further below.
Problems have a number of characteristics:
• things are not as they should be, or understanding is incomplete
• the problem owner wants to do something about it
Chapter 1: Methodology
In situations where the client is not the person commissioning the work or
where the problem crosses departmental boundaries, the consultant must
ensure that they can give advice on the options available so that the client
can make changes. Otherwise, the consultant’s work is likely to be in vain.
Activity
Can you think of a situation from your experience where a consultant’s work has been in
vain? Why did the work not succeed?
The consultant therefore helps the client decide what to do. Defining the
problem should involve finding and agreeing some activity that will be
useful in helping the client decide what to do. Note that problems are
subjective, and therefore so are solutions. It is the client’s problem and the
desired solution should be the client’s, not the consultant’s. The consultant
must respond to the client’s concerns and value systems, or risk the whole
enterprise. These concerns and values can be debated and negotiated; however,
ultimately the client decides.
Clearly the consultant needs to acquire an understanding of the context in
which the problem is set. There is usually some obvious technical context
that needs to be understood (for example, the client’s organisation is
providing services of a particular type, using these particular resources,
to particular customers, and it is, for example, a non-profit making
organisation). There is also the social context (who is involved/affected and
how these things are articulated, how the actors interact with one another,
etc.); and a cultural one (what rules and beliefs are core to the client/
organisation, what is the power pattern, how do things get done, etc.).
An organisation or individual may employ consultants because:
• they lack the skills within the organisation to find a resolution to the
problem: the consultant is the expert
• they lack the time/resources to find a resolution to the problem: the
consultant as a hired body or temporary employee
• they need an ‘independent’ person to help resolve the problem:
either to act as an arbitrator between two or more groups, or
to provide external justification for a decision, or to audit
recommendations of an internal project
• they need to be seen to be doing something and employing a
consultant/firm of consultants will provide a positive image
Activity
Has your company (or a former company) ever employed consultants? If so, why did they
do so?
Activity
Would you be the best person to carry out some consultancy work for your work/college?
Why or why not?
Activity
Consider any decision problem that you have been involved in. Do you think that enough
time and effort was spent in order to reach a good quality decision or not and why?
Optimisation
The purpose of carrying out a project is usually to provide advice to the
client on what to do, based on the construction and experimentation with
a model. Traditional OR models have used optimisation to determine
what is the best action the client should take (i.e. mathematical models
where the optimum value of the controllable variables can be determined).
Such a model was seen above for the Two Mines problem.
In general, optimisation assumes that:
• The model accurately represents the system, and therefore the optimal
solution for the model is also the optimal solution for the system.
This may not be the case, since models rarely represent the system
under investigation with complete accuracy.
• There is one objective or, where there is more than one objective, they
can be translated into a common unit, usually monetary values: for
instance, giving time a monetary value and so being able to optimise
over cost and time.
• There is consensus over what the objective of the system is.
• The problem will not change over time (at least in the short-term) and
therefore one optimal solution can be found (i.e. the solution given is
the best advice available now for the near future).
• All data can be quantified (i.e. assigned numerical values). In some
cases it is not possible to quantify some factors, and therefore only
qualitative information can be provided.
Activity
Suppose a number of possible road schemes are being considered and the decision on
which scheme to choose is affected by many factors such as: the cost of building, the
maintenance cost, the number of cars likely to use any new roads, the estimated number
of road deaths, and the environmental impact (houses demolished, pollution) of the
scheme. Which of these factors can be numerically assessed, and which can be put in
monetary terms?
Activity
Suppose there were two types of electricity generators – one cost £100,000 and was
sufficient to supply 20,000 homes, the other cost £150,000 and was sufficient to supply
35,000 homes – if there are currently 40,000 homes to be supplied, how many of each
type of generator should be built?
If the generators are each expected to last 10 years, and the number of homes is
estimated to rise in five years to 55,000 and then remain stable, how many of each type
should be built?
What would be a robust strategy if the number of homes in the next 10 years is likely to
be between 50,000 and 60,000?
Many of the models presented in this subject guide use optimisation. You
should be aware of the limitations of optimisation and also conscious of
the need to carry out sensitivity analysis. We will look at robustness and
sensitivity analysis with regard to one of the optimisation techniques,
linear programming, in Chapter 8 of this subject guide.
Activity
Consider the various problems associated with optimisation and identify the possible
strategies for dealing with them.
Implementation
To a large extent the success of an OR project is not determined by
whether the project produces an elegant model, or by the size of the
benefits of the recommended course of action, but by whether the project
affects the decisions made by the client, including whether any action
recommended is undertaken by the client.
In order for an organisation to implement a proposed course of
action, it must be possible to implement the solution (technologically
and culturally), and the person(s) with the power to implement the
recommended course of action must be committed to it.
In order for an OR project to produce a solution which it is possible
to implement (a feasible solution), it is necessary to ensure, when
formulating the problem, that all the relevant technological and cultural
constraints are known and, where appropriate, included in the model.
Continuous contact with the organisation, and specifically the client in the
organisation (the person/people who have the problem), including regular
discussions on the progress of the project and the model being produced,
should ensure that a feasible solution is produced.
Gaining the commitment of the person(s) with the power to implement
the solution requires the consultant to persuade them that the changes
recommended are worth making. Approaches which are likely to gain such
commitment include:
Activity
Reflect on how you might persuade your boss or a former boss to support a course of
action you are proposing. What strategies might you adopt?
Benefits
Throughout your career you will inevitably encounter people who have
little understanding of OR and who, moreover, feel that problems in
business/management can be solved by innate personal ability and
experience (which, coincidentally, they believe they themselves possess).
As such they see no need for a ‘complicated’ approach such as OR.
On a personal note at the time of writing I am over 60 and have been
involved in OR for all of my working life. My personal view as to the
benefits of OR in solving problems in business/management is:
• OR is particularly well-suited for routine tactical decision-making
where data are typically well-defined and decisions for the same
problem must be made repeatedly over time (for example, how much
stock to order from a supplier).
• An advantage of explicit decision-making is that it is possible to
examine assumptions explicitly.
• We might reasonably expect an ‘analytical’ (structured, logical)
approach to decision-making to be better (on average) than simply
relying on a person’s innate decision-making ability.
• OR techniques combine the ability and experience of many people.
• Sensitivity analysis can be performed in a systematic fashion.
Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Chapter 2: Problem structuring and problem structuring methods
Essential reading
Rosenhead, Chapters 2, 4 and 6.
Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should ensure that you can describe and explain:
General
• the common features of problem structuring methods.
Journey Making
• how cognitive maps can structure individual and group views of a
problem, including the identification of goals and options, and how to
produce and structure a cognitive map
• the Journey Making process.
Introduction
The first stage in the OR process, formulation of the problem, involves
identifying what is the problem facing the organisation. Rosenhead defines
a well-structured problem as one with:
• unambiguous objectives
• firm constraints
• established cause–effect relationships.
The problem formulation stage attempts to identify and clarify these
factors.
Activity
Think of a problem of which you are aware. Write down a clear statement of what the
problem is. Did you find producing this clear statement difficult? Why or why not?
Activity
Consider the different perceptions of what constitutes a good lesson held by a teacher,
by students and by a school management body.
Overview
As an overview, problem structuring methods:
• help structure (complex) problems
• are mainly used with a small group of decision makers (people) in an
organisation
• do not try to get an objective definition of the problem
• emphasise the importance and validity of each individual’s subjective
perception of the problem.
To achieve this such methods typically use a consultant (external person)
whose role is:
• to see that the group contains individuals with knowledge of the
situation and/or individuals who will affect the success of any action
proposed
• to act as a facilitator/organiser of the process
• to orchestrate discussion
• to be seen to be open, independent and fair.
The consultant does not need to possess any special knowledge about
the problem (i.e. he or she does not need to be an expert in the problem
area). However, consultants are often experts in the particular problem
structuring method being applied.
Such methods try to capture the group’s perception of the problem:
• verbally (in words)
• in pictures/diagrams.
Words are used as they are believed to be the natural currency of
problem definition/discussion/solution (compare hard OR which uses
mathematics). The use of pictures/diagrams helps to structure the group’s
perception of the problem and enables discussion/debate to be less
personal.
Such methods help the members of the group:
• to gain an understanding of the problem they face
Definitions
Problem structuring methods involve the use of a number of words with
specific meanings:
• Client(s) – person(s), the group, who face the decision problem and
for whom the consultant is working.
• Consultant – person from outside the group who acts as a facilitator.
• Facilitator – an independent person who aids the group by extracting
information from them about the problem and organising it.
Facilitators also act as a type of chairperson.
• Consensus – gaining the acceptance of all members of a group to a
particular view/decision.
• Workshop – group of people working/discussing an issue or issues in a
structured way.
• Pure model – model of a system which pursues a pure purpose from a
specific point of view.
• Purposeful activity system – a system, possibly hypothetical, in an
organisation which has a specific purpose.
In this chapter we consider three problem structuring methods:
• Strategic Options Development and Analysis (SODA) and JOURNEY
(JOintly Understanding, Reflecting, and NEgotiating strategY) Making
• Soft Systems Methodology (SSM)
• Strategic Choice (SC).
To try and illustrate these methods we will apply each of them to the
following example problem:
Crime is a real problem in this country. We are spending
more and more on locking up increasing numbers of people
in prisons, yet crime seems to go on rising. Many of those in
prison are there for reasons connected with medical problems
(e.g. drug addiction, mental illness), yet when they come out
of prison these problems are unresolved and so they go straight
back to crime. Perhaps the answer is longer prison sentences.
In Journey Making cognitive maps are first produced for each individual by
interviewing them in a relatively unstructured ‘free-flowing’ way to try to
elicit their thought processes about the problem under discussion and what
they think is important about the problem. Such maps often contain 40 to
100 concepts and may also help each individual to refine their thinking.
In Figure 2.2 we show a small map based on our crime example given
above. You can see that the goals (at the top of the map) are ‘less crime’
and ‘reform criminals’ and we have a number of options available, e.g.
‘spend more money on prisons’.
[Figure 2.2: a small cognitive map with the goals ‘less crime’ and ‘reform criminals’ at the top]
Activity
Map your own views about the treatment of crime and prisoners in your society.
Another map for an individual talking about the same problem might be:
[Figure 2.3: another individual’s cognitive map, containing the concepts ‘more prosecutions’, ‘longer sentences’ and ‘more police’]
Activity
Map the views of a friend about the treatment of crime and prisoners in your society.
Once individual maps have been produced they need to be merged into a
single map, initially often containing several hundred concepts. In doing
this:
• similar concepts are merged into one
• concepts from key members of the group should be retained
• a balance of concepts from all members of the group should be present
• the consultant may add/delete concepts and links between concepts.
For example we might merge our two individual maps above to get:
[Figure 2.4: the merged map combining the two individual maps, including the concept ‘more police’]
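The merging rules listed above can be sketched in a few lines of code. This is a minimal illustration rather than anything from the guide: each map is stored as a dictionary from a concept to the set of concepts it links to, the concept names echo the crime example, and the ‘synonyms’ table (standing in for ‘similar concepts are merged into one’) is invented.

```python
# Two individual cognitive maps: concept -> set of concepts it links towards.
map_a = {"spend more money on prisons": {"reform criminals"},
         "reform criminals": {"less crime"}}
map_b = {"more police": {"more prosecutions"},
         "more prosecutions": {"longer prison sentences"}}

# Hypothetical table of similar concepts to be merged into one.
synonyms = {"longer prison sentences": "longer sentences"}

def merge_maps(*maps, synonyms=None):
    """Merge individual maps into one, unifying similar concepts."""
    synonyms = synonyms or {}
    def canon(concept):
        return synonyms.get(concept, concept)  # canonical name for a concept
    merged = {}
    for m in maps:
        for concept, links in m.items():
            merged.setdefault(canon(concept), set()).update(
                canon(link) for link in links)
    return merged

merged = merge_maps(map_a, map_b, synonyms=synonyms)
```

In practice, of course, deciding which concepts are ‘similar’ is a judgement made by the consultant, not a lookup table; the code only captures the mechanical union of the maps.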
In order to make this map manageable in problems larger than our simple
example considered here:
• The concepts in it are aggregated into clusters (say 15 to 30 concepts
in each cluster), so that we have a map within each cluster and each
cluster is appropriately labelled.
• The final merged map is an overview map at the cluster level showing
the labelled clusters and the links between clusters.
Activity
Merge together the two maps you have produced in the preceding two activities.
This merged overview map, together with the individual cluster maps, serves as a
focus for discussion at a workshop involving:
• analysis of its content and structure
• identification of any ‘emerging themes’ and ‘core concepts’
• discussion of key goals, inter-related problems, key options and
assumptions.
As for all problem structuring methods the aim of Journey Making is to
achieve understanding/agreement within the group.
Activity
Think of a problem of your own and with a friend apply Journey Making to the problem.
Soft Systems Methodology (SSM)
SSM assumes:
• Different individuals and groups make different evaluations of events
and this leads to them taking different actions.
• Concepts and ideas from systems engineering are useful.
• It is necessary when describing any human activity system to take
account of the particular image of the world underlying the description
of the system and it is necessary to be explicit about the assumptions
underlying this image.
Overview
SSM operates by defining systems of purposeful activity (the root
definition), building models of a number of relevant systems, and
comparing these models with the real-world action going on, in order to
structure a debate focused on the differences. That debate should lead
the group of people involved in the process to see their way to possible
changes, and should motivate them to carry out those changes.
Stages
There are seven stages in the SSM process, but they are not necessarily
followed in a linear fashion. Diagrammatically these stages are:
[Diagram: the seven SSM stages, beginning with ‘Enter situation considered problematic’ in the real world]
Activity
Think of a number of possible transformations for a football match, for a bus service and
for a shop. Use the different actors in the system and the different physical objects to help
you (for example, how does the shop transform customers, how does it transform the
goods on sale, etc.?).
Activity
Using one transformation from each of the three activities above (the football match, the
bus service, and the shop) develop a root definition and CATWOE.
[Figure 2.6: conceptual model relating to ‘reform criminal’]
Uncertainty areas
SC identifies three types of uncertainty:
• Uncertainty about the working Environment (UE), reduced by a
technical response (e.g. collecting data, surveys, numeric analysis).
• Uncertainty about guiding Values (UV), reduced by a political
response (e.g. clarifying objectives, consulting interest groups, asking
higher authorities for their opinions).
• Uncertainty about Related decision fields (UR), also known as
Uncertainty about choices on Related agendas, reduced by an
exploration of structural relationships (e.g. adopting a broader
perspective, negotiating/collaborating with other decision makers,
looking at the links between a decision that might be made by
ourselves and decisions that might be made by others).
Throughout the strategic choice process:
• areas of uncertainty are listed as they arise, and
• are classified by UE/UV/UR.
In the choosing mode (the last of the modes, considered below), these uncertainty
areas are addressed in the context of proposed decisions.
Shaping mode
In the shaping mode decision areas are identified as questions. These
are simply areas where alternative courses of action are possible (i.e. a
choice is possible). These decision areas are then presented on a decision
graph, where:
• each area is a node on the graph
• a link (edge) between two nodes (areas) exists if there is thought to
be a significant possibility of different outcomes if the two areas are
considered separately, rather than together.
Figure 2.7 shows one possible decision graph for our crime example.
[Figure 2.7: decision graph for the crime example]
Once the decision graph has been drawn, areas of problem focus –
consisting of three or four decision areas – need to be identified. The areas
chosen are generally those which are important, urgent and/or connected.
For our crime example above we will have one problem focus based on the
areas:
• build more prisons?
• impose longer sentences?
• increase rewards for informing?
With regard to uncertainty we will have just one factor in our uncertainty
list, namely:
• Can we find sites to build more prisons? Classified UE.
Designing mode
In the designing mode we take each problem focus in turn and:
• List a small number (say two to five) of mutually exclusive possible
courses of action (options) in each of the decision areas.
• List incompatible options in different decision areas (note all options
in the same decision area are incompatible as they are mutually
exclusive); this can be done graphically if so desired using an option
graph.
• List (enumerate) all the possible feasible decision schemes where
a feasible decision scheme consists of one option from each of the
decision areas and none of the options chosen are incompatible.
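The enumeration step in the last bullet point above can be sketched in code. The options come from the crime example that follows; the incompatibility rule is hypothetical, since the guide’s own incompatibilities (which yield three feasible schemes) are not reproduced in this extract.

```python
from itertools import product

# Options in each decision area of the problem focus (from the crime example).
prisons = ["no", "yes - five more", "yes - 10 more"]
sentences = ["no", "yes"]
rewards = ["no", "yes"]

# Hypothetical incompatibility: (prison option, sentence option) pairs that
# cannot appear in the same scheme. Purely for illustration.
incompatible = {("no", "yes")}

def feasible_schemes():
    """Enumerate all schemes, dropping any containing an incompatible pair."""
    return [(p, s, r)
            for p, s, r in product(prisons, sentences, rewards)
            if (p, s) not in incompatible]
```

With three, two and two options there are 3 × 2 × 2 = 12 schemes in total; the illustrative rule above removes two of them, leaving ten feasible schemes.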
For our crime example with the problem focus based on the areas:
• build more prisons?
• impose longer sentences?
• increase rewards for informing?
We have the options in each of these decision areas of:
• build more prisons?
no
yes – five more
yes – 10 more
• impose longer sentences?
no
yes
Build more prisons?   Impose longer sentences?   Increase rewards for informing?
no                    no                         no
no                    yes                        no
no                    no                         yes
no                    yes                        yes
yes – five more       no                         no
yes – five more       yes                        no
yes – five more       no                         yes
yes – five more       yes                        yes
yes – 10 more         no                         no
yes – 10 more         yes                        no
yes – 10 more         no                         yes
yes – 10 more         yes                        yes
Table 2.1
Checking each of these schemes we find that for this example there are just
three possible feasible decision schemes (labelled A, B and C below) which are:
With regard to uncertainty, our uncertainty list, after the addition of two
more factors, becomes:
• Can we find sites to build more prisons? Classified UE.
• Will 10 prisons be too many? Classified UE.
• Will the government/judiciary support longer sentences? Classified UV.
Comparing mode
In the comparing mode we compare each of the feasible decision schemes.
This is done by:
• identifying comparison areas
• within each area, assigning each feasible decision scheme a value.
The values chosen can be monetary sums or values chosen from some
scale (e.g. rank on a scale from 1 to 10).
Based on this assignment of values particular schemes may be selected
for closer analysis, either individually or as members of a shortlist. A
common approach is to compare, in a pairwise fashion, all members of the
shortlist. In this pairwise comparison the uncertainty areas are explicitly
considered to identify those uncertainty areas relating to the schemes
being compared.
For our crime example we could compare our feasible decision schemes
(three in this case) with respect to the comparison areas of:
• capital cost (in £’million terms)
• running cost (in £’million terms)
• acceptability to government (from 1 (almost unacceptable) to 5
(neutral) to 10 (very acceptable))
• acceptability to the public (from 1 (almost unacceptable) to 5
(neutral) to 10 (very acceptable)).
We present these numbers below:
Table 2.3
Selecting schemes A (no more prisons, no longer sentences and no
increased rewards for informing) and B (five more prisons, longer
sentences and increased rewards for informing) for pairwise comparison
we have:
Scheme A Scheme B
Capital cost 0 200
Running cost 0 40
Government acceptability 3 5
Public acceptability 5 3
Table 2.4
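The pairwise comparison just shown can be sketched in code. The values for Schemes A and B are taken from Table 2.4; the function and dictionary names are my own.

```python
from itertools import combinations

# Comparison-area values for two shortlisted schemes (Table 2.4).
schemes = {
    "A": {"capital cost": 0, "running cost": 0,
          "government acceptability": 3, "public acceptability": 5},
    "B": {"capital cost": 200, "running cost": 40,
          "government acceptability": 5, "public acceptability": 3},
}

def pairwise(schemes):
    """Yield each unordered pair of schemes with the per-area differences."""
    for x, y in combinations(sorted(schemes), 2):
        diffs = {area: schemes[y][area] - schemes[x][area]
                 for area in schemes[x]}
        yield (x, y), diffs
```

Because `combinations` generates every unordered pair, a shortlist of four schemes A to D would produce six comparisons (AB, AC, AD, BC, BD, CD), which answers the activity below.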
Activity
Given four schemes, labelled A to D, what pairs of comparisons would be made?
Choosing mode
In the choosing mode a commitment package (i.e. what we are
proposing to do) is decided upon (or more than one package for
submission to higher authorities). A commitment package is guided by the
preferred feasible decision scheme and consists of:
• decisions taken now
• explorations to reduce levels of uncertainty (together with estimates
of resources needed and timescales)
• decisions deferred until later
• any contingency plans.
With regard to our crime example if we assume scheme B (five more
prisons, longer sentences and increased rewards for informing) is the
preferred feasible decision scheme then the relevant uncertainty areas are:
• Can we find sites to build more prisons? Classified UE.
• Will the government/judiciary support longer sentences? Classified UV.
The commitment package might be:
• decisions taken now – none
• explorations
study to identify provisional sites for five prisons (costing
£1,000,000 and taking three months)
consult government/judiciary about support for longer sentences
(negligible cost, likely to take up to six months)
• decisions deferred – final decision on scheme B until explorations to
reduce levels of uncertainty completed
• contingency plans – none.
Note that the actual decision scheme that we choose may be altered by the
results of the explorations (for example if our explorations reveal there are
no sites available for five prisons).
Activity
Think of a problem of your own and apply Strategic Choice to the problem.
Education
You will find that the majority of the topics in this subject guide deal
with quantitative/analytic topics. This chapter is the only main chapter
that is predominantly qualitative in nature. Hence it is natural to ask
whether you, as a student, perhaps quantitatively skilled, should just focus
on quantitative topics and miss this chapter out completely in terms of
engaging with this subject guide. Clearly that could be done. However,
you need to be clear that the purpose of this subject guide is not only to
prepare you for the examination; it is also to educate you. Obviously, as far
as the University of London is concerned, that education is assessed by a
single examination, but you should be aware that education is for life. Even
though you may not focus on this chapter in terms of preparing for the
examination that does not mean it is of no value. After you graduate how
many years of working life do you think you will face? Anticipating being
a millionaire and living a life of leisure by the age of 35? Dream on! Like
most of us, myself included, you will work hard all your life – ‘life is hard
and then you die’ to quote the phrase. During all those years of working
life maybe you will use some of the quantitative topics which you engaged
with for the examination. Equally, knowing that problem structuring
methods exist, and having some knowledge of what they deal with may,
over those working years, be valuable at some point. Time will tell.
Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
SSM
www.learnaboutor.co.uk/strategicProblems/c_s_1frs.htm
Journey Making
www.learnaboutor.co.uk/strategicProblems/c_j_1frs.htm
A reminder of your learning outcomes
General
• the common features of problem structuring methods.
Journey Making
• how cognitive maps can structure individual and group views of a
problem, including the identification of goals and options, and how to
produce and structure a cognitive map
• the Journey Making process.
Chapter 3: Network analysis
Essential reading
Anderson, Chapter 9.
Spreadsheet
network.xls
• Sheet A: Calculation for project completion time
• Sheet B: Calculation for project completion time with delay activity
added
• Sheet C: Resource information
• Sheet D: Gantt chart calculated from Sheet C
• Sheet E: Resource usage chart calculated from Sheet C
This spreadsheet can be downloaded from the VLE.
Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• draw a network diagram
• calculate the project completion time
• calculate the earliest start time, latest start time and float time for each
activity
• identify the critical activities/critical path(s)
• explain the effects of uncertain activity times and cost/time trade-offs
• explain resource smoothing
• explain the benefits of network analysis.
Introduction
Network analysis is the general name given to certain specific techniques
which can be used for the planning, management and control of projects.
One definition of a project, from the Project Management Institute, is: a
temporary endeavour undertaken to create a ‘unique’ product
or service.
Historical background
Two different techniques for network analysis were developed
independently in the late 1950s. These were:
• PERT (for Program Evaluation and Review Technique)
• CPM (for Critical Path Method).
PERT was developed to aid the US Navy in the planning and control of
its Polaris missile project. This was a project to build a strategic weapons
system, namely the first submarine-launched intercontinental ballistic
missile, at the time of the Cold War between the USA and the Soviet Union. Hence
there was a strategic emphasis on completing the Polaris project as
quickly as possible; cost was not an issue. However, no one had ever
built a submarine-launched intercontinental ballistic missile before, so
dealing with uncertainty was a key issue. PERT has the ability to cope with
uncertain activity completion times (e.g. for a particular activity the most
likely completion time is four weeks but it could be any time between
three weeks and eight weeks).
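One standard way of handling such three-point estimates is the classic PERT formula, stated here as background rather than taken from this guide: under a beta-distribution assumption, the expected activity time is (a + 4m + b)/6 and the standard deviation is (b − a)/6, where a is the optimistic time, m the most likely and b the pessimistic.

```python
# Classic PERT three-point estimate (standard formulae; a = optimistic,
# m = most likely, b = pessimistic).
def pert_expected(a, m, b):
    return (a + 4 * m + b) / 6

def pert_std_dev(a, b):
    return (b - a) / 6

# The example in the text: most likely 4 weeks, anywhere between 3 and 8 weeks.
print(pert_expected(3, 4, 8))  # 4.5 (weeks)
```

So an activity with a most likely time of four weeks but a range of three to eight weeks has an expected time of 4.5 weeks: slightly more than the most likely value, because the distribution is skewed towards the pessimistic end.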
CPM was developed as a result of a joint effort by the DuPont Company
and Remington Rand Univac. As these were commercial companies, cost
was an issue (unlike the Polaris project considered above). In CPM the
emphasis is on the trade-off between the cost of the project and its overall
completion time (e.g. for certain activities it may be possible to decrease
their completion times by spending more money; how does this affect the
overall completion time of the project?).
Modern commercial software packages tend to blur the distinction between
PERT and CPM and include options for uncertain activity completion times
and project completion time/project cost trade-off analysis. Note here that
many such packages exist for doing network analysis.
There is no clear terminology in the literature and you will see this area
referred to by the phrases: network analysis, PERT, CPM, PERT/CPM,
critical path analysis and project planning.
Example
We will illustrate network analysis with reference to the following example: suppose that
we are going to carry out a minor redesign of a product and its associated packaging.
We intend to test market this redesigned product and then revise it in the light of the test
market results, finally presenting the results to the Board of the company.
The key question is:
How long will it take to complete this project?
Table 3.1
Activity
Think of a small project, either at work or at home (for example, painting a room). List on
a piece of paper the activities associated with this project and their associated completion
times.
Aside from this list of activities we must also prepare a list of precedence
relationships indicating activities which, because of the logic of the
situation, must be finished before other activities can start (e.g. in the
above list Activity 1 must be finished before Activity 3 can start).
It is important to note that, for clarity, we try to keep this list to a
minimum by specifying only immediate relationships: that is, relationships
involving activities that ‘occur near to each other in time’.
For example, it is plain that Activity 1 must be finished before Activity 9
can start but these two activities can hardly be said to have an immediate
relationship (since many other activities after Activity 1 need to be finished
before we can start Activity 9).
Activities 8 and 9 would be examples of activities that have an immediate
relationship (Activity 8 must be finished before Activity 9 can start).
Note here that specifying non‑immediate relationships merely complicates
the calculations that need to be done – it does not affect the final
result. Note too that, in the real world, the consequences of missing out
precedence relationships are much more serious than the consequences of
including unnecessary (non-immediate) relationships.
Again, after much thought (and aided by the fact that we listed the
activities in a logical/chronological order), we come up with the following
list of immediate precedence relationships.
Activity number(s)                            Activity number
1           must be finished before           3 can start
2           must be finished before           4 can start
3           must be finished before           5 can start
4           must be finished before           6 can start
5, 6        must be finished before           7 can start
7           must be finished before           8 can start
8           must be finished before           9 can start
8           must be finished before           10 can start
9, 10       must be finished before           11 can start
Table 3.2
The key to constructing this table is, for each activity in turn, to ask the
question:
‘What activities must be finished before this activity can start?’
Note here that:
• Activities 1 and 2 do not appear in the right hand column of the above
table. This is because there are no activities which must finish before
they can start (i.e. both Activities 1 and 2 can start immediately).
• Two activities (5 and 6) must be finished before Activity 7 can start.
• It is plain from this table that non-immediate precedence relationships
(e.g. ‘Activity 1 must be finished before Activity 9 can start’) need not
be included in the list since they can be deduced from the relationships
already in the list.
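In software these two lists are usually the only input required. A minimal sketch of one possible encoding in Python (hypothetical; the completion times are those used in the calculations later in this chapter, and the activity descriptions from Table 3.1 are omitted):

```python
# Completion times (in weeks) for the eleven activities of the example.
durations = {1: 6, 2: 2, 3: 3, 4: 2, 5: 4, 6: 1,
             7: 1, 8: 6, 9: 3, 10: 1, 11: 1}

# predecessors[j] lists the activities that must be finished before
# Activity j can start (the immediate precedence relationships only).
predecessors = {1: [], 2: [], 3: [1], 4: [2], 5: [3], 6: [4],
                7: [5, 6], 8: [7], 9: [8], 10: [8], 11: [9, 10]}
```

Note how Activities 1 and 2 have empty predecessor lists, matching the observation above that both can start immediately.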
Activity
For the project you thought of previously construct on a piece of paper a list of
precedence relationships.
Once we have completed our list of activities and our list of precedence
relationships we combine them into a diagram (called a network – which is
where the name network analysis comes from).
Network construction
Activity/Reading
For this section read Anderson, Chapter 9, section 9.1.
In the network shown below, each node (circle) represents an activity and is
labelled with the activity number and the associated completion time (shown
in brackets after the activity number).
[Network diagram: activity nodes with completion times in brackets, e.g. 1(6), 3(3), 5(4), 7(1), 8(6), 9(3), 10(1), 11(1), joined by precedence arrows]
Figure 3.1
This network is an activity on node (AON) network.
In constructing the network we:
• draw a node for each activity
• add an arrow from (activity) node i to (activity) node j if Activity i must
be finished before Activity j can start (Activity i precedes Activity j).
Note here that all arcs have arrows attached to them (indicating the direction
the project is flowing in).
One tip that I find useful in drawing such diagrams is to structure the
positioning of the nodes (activities) so that the activities at the start of the
project are at the left, the activities at the end of the project at the right, and
the project ‘flows’ from left to right in a natural fashion.
Note here one key point, the above network diagram assumes
that activities not linked by precedence relationships can take
place simultaneously (e.g. at the start of the project we could be
doing Activity 1 at the same time as we are doing Activity 2).
Essentially the above diagram is not needed for a computer – a computer can
cope very well (indeed better) with just the list of activities and their precedence
relationships we had before. The above diagram is intended for people.
Consider what might happen in a large project – perhaps many thousands or
tens of thousands of activities and their associated precedence relationships.
Do you think it would be possible to list those out without making any errors?
Obviously not – so how can we spot errors? Looking at long lists in an attempt
to spot errors is just hopeless. With a little practice it becomes easy to look at
diagrams such as that shown above and interpret them and spot any errors in
the specification of the activities and their associated precedence relationships.
Activity
Without looking at the network we have drawn above, draw for yourself the network
associated with the example given above. Does what you have drawn correspond to what
is shown above or not?
Draw the network for the project you thought of previously (from your list of activities and
precedence relationships).
Let Ei represent the earliest start time for Activity i such that
all its preceding activities have been finished. We calculate the
values of the Ei (i = 1, 2, ..., 11, together with E12, the earliest time by
which the entire project can be finished) by going forward, from left to
right, in the network diagram. To ease the notation let Ti be the activity
completion time associated with Activity i (e.g. T5 = 4). Then the Ei are given by:
E1 = 0 (assuming we start at time zero)
E2 = 0 (assuming we start at time zero)
E3 = E1 + T1 = 0 + 6 = 6
E4 = E2 + T2 = 0 + 2 = 2
E5 = E3 + T3 = 6 + 3 = 9
E6 = E4 + T4 = 2 + 2 = 4
E7 = max[E5 + T5, E6 + T6] = max[9 + 4, 4 + 1] = 13
E8 = E7 + T7 = 13 + 1 = 14
E9 = E8 + T8 = 14 + 6 = 20
E10 = E8 + T8 = 14 + 6 = 20
E11 = max[E9 + T9, E10 + T10] = max[20 + 3, 20 + 1] = 23
E12 = E11 + T11 = 23 + 1 = 24
Hence 24 (weeks) is the minimum time needed to complete all the activities
and hence is the minimum overall project completion time.
Note here that the formal definition of the earliest start times is given by:
Ej = max[Ei + Ti | i one of the activities linked to j by an arc from i to j]
Conceptually we can think of this earliest start time calculation as finding
the length of the longest path in the network (consider walking from the
left-hand side of the network, to the right-hand side, through the nodes,
where the completion time at each node indicates how long we must wait
at the node before we can move on). However, because of the risk
of error, we should always carry out the above calculation
explicitly, rather than relying on the eye/brain to inspect
the network to spot the longest path in the network. This
inspection approach is infeasible anyway for large networks.
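The forward-pass calculation above can be sketched in Python (a hypothetical encoding; durations and precedence relationships are those of the worked example, and `E` plays the role of the Ei):

```python
durations = {1: 6, 2: 2, 3: 3, 4: 2, 5: 4, 6: 1,
             7: 1, 8: 6, 9: 3, 10: 1, 11: 1}
predecessors = {1: [], 2: [], 3: [1], 4: [2], 5: [3], 6: [4],
                7: [5, 6], 8: [7], 9: [8], 10: [8], 11: [9, 10]}

# Forward pass: E_j = max over predecessors i of (E_i + T_i).
# The activities are numbered in logical order (every predecessor of j has
# a smaller number than j), so we can simply work through them in order.
E = {}
for j in sorted(durations):
    E[j] = max((E[i] + durations[i] for i in predecessors[j]), default=0)

# Minimum overall project completion time: the latest finish over all activities.
completion = max(E[j] + durations[j] for j in durations)
print(E[7], completion)  # 13 24
```

This is exactly the longest-path calculation described above, carried out explicitly rather than by eye.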
As well as the minimum overall project completion time calculated above we
can extract additional useful information from the network diagram by the
calculation of latest start times. We deal with this below.
Let Li represent the latest start time for Activity i such that the overall
project completion time of 24 weeks is not increased. We calculate the
values of the Li by going backward, from right to left, in the network
diagram:
L12 = E12 = 24
L11 = L12 - T11 = 24 - 1 = 23
L10 = L11 - T10 = 23 - 1 = 22
L9 = L11 - T9 = 23 - 3 = 20
L8 = min[L9, L10] - T8 = min[20, 22] - 6 = 14
L7 = L8 - T7 = 14 - 1 = 13
L6 = L7 - T6 = 13 - 1 = 12
L5 = L7 - T5 = 13 - 4 = 9
L4 = L6 - T4 = 12 - 2 = 10
L3 = L5 - T3 = 9 - 3 = 6
L2 = L4 - T2 = 10 - 2 = 8
L1 = L3 - T1 = 6 - 6 = 0
Float
As we know the earliest start time Ei, and latest start time Li, for each
Activity i, it is clear that the amount of slack or float time Fi available
is given by Fi = Li - Ei which is the amount by which we can increase the
time taken to complete Activity i without changing (increasing) the overall
project completion time. Hence we can form the table below:
Activity Li Ei Float Fi
1 0 0 0
2 8 0 8
3 6 6 0
4 10 2 8
5 9 9 0
6 12 4 8
7 13 13 0
8 14 14 0
9 20 20 0
10 22 20 2
11 23 23 0
Table 3.3
Any activity with a float of zero is critical. Note here that, as a check, all
float values should be ≥ 0.
The float figures derived here are also known as total float. In the above
example a 'chain' of successive activities (in this case 2, 4 and 6) shares
the same float; this is common with total float.
The float value is defined, for each activity, as the amount of time that
each activity can be delayed without altering (increasing) the overall
project completion time. If delays occur in two or more activities then we
must recalculate the project completion time. Many textbooks also refer to
float by the term ‘slack’.
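Putting the forward pass, backward pass and float calculation together, a sketch in Python (worked-example data; the floats printed should match Table 3.3):

```python
durations = {1: 6, 2: 2, 3: 3, 4: 2, 5: 4, 6: 1,
             7: 1, 8: 6, 9: 3, 10: 1, 11: 1}
predecessors = {1: [], 2: [], 3: [1], 4: [2], 5: [3], 6: [4],
                7: [5, 6], 8: [7], 9: [8], 10: [8], 11: [9, 10]}

# Forward pass: earliest start times.
E = {}
for j in sorted(durations):
    E[j] = max((E[i] + durations[i] for i in predecessors[j]), default=0)
completion = max(E[j] + durations[j] for j in durations)

# Backward pass: latest start times, working from right to left.
successors = {i: [j for j in durations if i in predecessors[j]]
              for i in durations}
L = {}
for i in sorted(durations, reverse=True):
    L[i] = min((L[j] for j in successors[i]), default=completion) - durations[i]

# Float (slack): F_i = L_i - E_i; zero float marks a critical activity.
F = {i: L[i] - E[i] for i in durations}
print(F[2], F[10], [i for i in durations if F[i] == 0])
# 8 2 [1, 3, 5, 7, 8, 9, 11]
```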
Critical path
Activities with a slack of zero are called critical activities since they must
all be completed on time to avoid increasing the overall project completion
time. Hence, for this network, activities 1, 3, 5, 7, 8, 9 and 11 are the
critical activities.
Activity
If any of the critical activities are delayed, will this affect the overall project completion
time or not and why?
Note here that 1–3–5–7–8–9–11 constitutes a path from the initial node
(node 1) to the final node (node 11) in our network diagram. This is no
accident because, for any network, there will always be a path of critical
activities from the initial node to the final node. Such a path is called the
critical path. Note too here that the sum of the completion times for the
activities on the critical path is equal to the project completion time.
Activity
Try for yourself the example given above and see if you agree with the float values
presented above.
Activity
For the project you thought of previously, calculate the minimum overall project
completion time. What are the critical activities?
Activity
Can there be more than one critical path? Hint: consider the same example as given
above but with the completion time for Activity 10 increased to three weeks.
Checks
If you analyse a project network then there are a number of numeric
checks that can be applied to check the accuracy of your calculations.
These are checks in the sense that if you fail any of these checks then you
must have gone wrong somewhere. Conversely, passing all the checks
does not absolutely guarantee that you are correct, although it does (for
example, in an examination situation) enable you to have some confidence
in your calculations. These checks are:
• All activities have floats ≥ 0 – it is good practice to explicitly give a
table of floats (for non-critical activities) so that you are sure that you
have calculated floats for all of them.
• There exists at least one path of critical activities from the start of the
project to the end of the project.
• All activities on a critical path have float zero.
• The sum of the completion times for the activities on a critical path is
equal to the project completion time.
• If an activity has float zero then it must be on at least one critical path.
In addition, be clear that:
• If activity completion times increase (and the precedence relationships
remain unchanged) the project completion time cannot decrease.
• If you have a change in the precedence relationships and/or two or
more activity completion times change, you need to recalculate.
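As an illustration, these checks can be applied mechanically. A hypothetical check routine in Python, using the E, L and float values of the worked example (Table 3.3):

```python
# Durations, earliest start times and latest start times from the example.
T = {1: 6, 2: 2, 3: 3, 4: 2, 5: 4, 6: 1, 7: 1, 8: 6, 9: 3, 10: 1, 11: 1}
E = {1: 0, 2: 0, 3: 6, 4: 2, 5: 9, 6: 4, 7: 13, 8: 14, 9: 20, 10: 20, 11: 23}
L = {1: 0, 2: 8, 3: 6, 4: 10, 5: 9, 6: 12, 7: 13, 8: 14, 9: 20, 10: 22, 11: 23}
completion = 24

F = {i: L[i] - E[i] for i in T}
# Check: all floats must be >= 0 (a negative float means an error somewhere).
assert all(f >= 0 for f in F.values())

# Check: the zero-float activities form the critical path.
critical = [i for i in T if F[i] == 0]
assert critical == [1, 3, 5, 7, 8, 9, 11]

# Check: the critical activities here form a single path, so their
# completion times must sum to the project completion time.
assert sum(T[i] for i in critical) == completion
print("all checks passed")
```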
Excel solution
Now examine Sheet A in the spreadsheet associated with this chapter, as
shown below. You will see there that the data for the example considered
above have already been entered and Excel shows the project completion
time as 24 (in cell C14), just as we calculated above.
Spreadsheet 3.1
You can see above that the cells in the sheet that you can change relate
to the completion times for the activities. The underlying precedence
relationships have been incorporated into the Excel logic and cannot be
changed. Note that the sheet indicates (column H) whether a particular
activity is critical or not. Float times are also given in column G.
Of course the advantage of a spreadsheet is that it can easily recalculate
the situation if we change anything. For example suppose the completion
time for Activity 1 increases to eight weeks – it is easy to confirm from the
spreadsheet that the project completion time increases to 26 weeks (as we
would suspect as Activity 1 is critical and delaying it will also delay the
completion of the entire project).
Note here that we have (implicitly) assumed in calculating this figure of 24
weeks that we have sufficient resources to enable activities to be carried
out simultaneously if required (e.g. Activities 1 and 2 can be carried out
simultaneously).
Activity
Is it possible to complete the project in 23 weeks or not and why?
Activity
If the completion time for Activity 2 increases to five weeks, will this affect the overall
project completion time or not? If it does affect the completion time what will the new
completion time be?
Delay activities
A situation that is often encountered is that of a delay activity. By this we
mean that a specified time must elapse between the end of one activity
and the start of another. Delays can also be viewed as waiting – you
have to wait a certain time between the end of one activity and the start
of another. Incorporating such delays into a network diagram is an easy
task. Each delay adds an additional activity to the diagram. For example,
consider the network diagram shown in Figure 3.1. Suppose now that
we have the following situation:
• There must be a delay of 16 weeks (or more) between the end of
Activity 3 and the start of Activity 9.
Note here the use of the phrase ‘or more’. Strictly we cannot guarantee
that there is a delay of exactly 16 weeks between the end of Activity
3 and the start of Activity 9. The precise delay that occurs depends
upon the other activities in the project. However, we can impose
the condition of the delay being a certain time period or longer, i.e.
mathematically the delay is ≥ a specified value. For this reason we
often drop the ‘or more’ when talking of delays and implicitly assume
that we mean a delay of at least the period given. So here we might
equally say:
• There must be a delay of 16 weeks between the end of Activity 3 and
the start of Activity 9.
Now with this delay activity added we can carry out the same calculation
for earliest and latest times as we carried out above.
Activity
Compute the earliest and latest times, as well as the project completion time and the
float times, for the network as in Figure 3.1 but with the delay activity added.
The Excel solution when this delay activity is added can be seen in Sheet B
of the spreadsheet:
Spreadsheet 3.2
Here we can see that the project completion time is now 29 weeks with
the critical path being composed of Activities 1, 3, 9, 11 and the delay
activity. Note here how the delay activity can itself be critical.
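One way to verify this figure of 29 weeks is to add the delay as an ordinary activity, as described above, and redo the forward pass. A sketch in Python (the delay is given the hypothetical activity number 12; all other data as in the worked example):

```python
durations = {1: 6, 2: 2, 3: 3, 4: 2, 5: 4, 6: 1, 7: 1,
             8: 6, 9: 3, 10: 1, 11: 1, 12: 16}  # 12 is the delay activity
predecessors = {1: [], 2: [], 3: [1], 4: [2], 5: [3], 6: [4], 7: [5, 6],
                8: [7], 9: [8, 12], 10: [8], 11: [9, 10], 12: [3]}

def E(j, memo={}):
    # Recursive forward pass: the delay activity breaks the neat numbering
    # (Activity 9 now depends on 12), so we recurse rather than relying on
    # processing the activities in numeric order.
    if j not in memo:
        memo[j] = max((E(i) + durations[i] for i in predecessors[j]), default=0)
    return memo[j]

completion = max(E(j) + durations[j] for j in durations)
print(completion)  # 29
```

Here the earliest start of Activity 9 is forced up from 20 to 25 by the delay, which is why the delay activity itself becomes critical.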
In this figure the highest point on the probability curve (with probability
of approximately 0.011) corresponds to the most likely time t2 = 5. Notice
how the distribution is not symmetric (for example, compare the left-hand
side of the distribution between times 2 and 3 with the right-hand side
between times 10 and 11; if the distribution were symmetric these would be
mirror images of each other, but clearly here they are not).
In this extension to the basic method we assume that, for each activity, the
completion time can be reduced (within limits) by spending more money
on the activity. Essentially, each activity now has more than one possible
completion time (depending upon how much money we are willing to
spend on it).
This use of cost information is the essence of the CPM technique.
A common assumption is to say that for each activity the completion time
can lie in a range with a linear relationship holding between cost and
activity completion time within this range (as illustrated below).
Figure 3.4
Reducing an activity completion time is known as ‘crashing’ the activity
and, for a given project completion time, the problem is to identify which
activities to crash (and by how much) so as to minimise the total cost of
achieving the desired (given) project completion time. This can be done
using linear programming. For the purposes of this subject you will not be
expected to know how to do this, merely to be aware that this is how cost
crashing is done.
Resource restrictions
Typically, in real-world network analysis, each activity has associated with
it some resources (such as men, machinery, materials, etc). We mentioned
before that, in calculating the minimum overall project completion time,
we took no account of any resource restrictions. To illustrate how network
analysis can be extended to deal with resource restrictions consider the
activity on node network we had before in the case of certainty with
respect to activity completion times, for which the network diagram is
reproduced below.
[The network diagram of Figure 3.1, reproduced]
Figure 3.6
To remind you of the interpretation of the Gantt chart above we have
(somewhat unconventionally) shown time on the vertical axis and each
activity along the horizontal axis. The solid column joins the earliest start
and earliest finish times for each activity.
A key point to grasp here is that in order for the project to be completed
on time (here a completion time of 24 weeks) all critical activities must
start at their earliest start times and finish at their earliest finish times (i.e.
we have no flexibility as to when those activities occur).
Recall that for this project the critical activities are 1, 3, 5, 7, 8, 9 and 11
and the non-critical activities are 2, 4, 6 and 10.
We consider just one resource restricted problem, resource smoothing
(also known as resource levelling).
Resource smoothing
Suppose now that we have just one resource (people) associated with each
activity and that the number of people required is:
• two for Activity 1
• one person for all the other activities (Activities 2 to 11 inclusive)
Spreadsheet 3.3
Column C in that spreadsheet gives the resource usage for each activity,
column D is the suggested start time (if we wish to impose our own
suggested start time for each activity). There are other cells in the
spreadsheet beyond column I that contain values but these are concerned
with internal calculations in order to calculate a resource profile.
Suppose now that we decide:
• we wish to meet the minimum overall project completion time of 24
weeks (hence implying that the times at which critical activities occur
are fixed)
• we wish to start all non-critical activities at their earliest possible start
times
then in the light of these decisions what does the plot (profile) of resource
usage (number of people used) against time look like?
Using our spreadsheet from Sheet E the plot of resource usage against
time is:
Figure 3.7
The peak of the resource profile is associated with the start of the project,
when Activity 1 requires two people and the other activities, which are
being performed simultaneously with Activity 1, require one person.
A key question is:
What resource usage profile would you most like to have seen here?
Clearly the ideal is a constant profile of resource against time (i.e. a
constant usage of resource over time). This is because variations from a
constant (straight line) profile most likely cost us extra money – either
in terms of hiring extra resource to cover peaks in the resource profile,
or in terms of unutilised resources when we have troughs in the resource
profile.
Spreadsheet 3.4
Figure 3.8
So is our current usage of resource ideal? Plainly not, but what flexibility
do we have? If we still wish to complete in 24 weeks we can do nothing
with regard to the critical activities.
However, we have some choice for the non-critical activities. Recall that
such activities have an associated float (slack) time. We could artificially
delay starting some of these activities. If we do so the resource profile will
change, maybe for the better. Indeed this is what we did implicitly above.
There, in delaying the start of Activity 2 until Time 2, we still completed
the project on time, but had a different resource profile. In fact for this
particular example it is easy to see that delaying starting Activity 2 until
Time 6 leads to a better resource profile, and is still feasible in terms of the
24 weeks completion time.
Using our spreadsheet to delay starting Activity 2 until Time 6, when
Activity 1 will have finished we have:
Spreadsheet 3.5
Figure 3.9
Here we clearly have a resource profile more in line with our ideal of a
constant resource usage.
It is important to note, however, that artificially delaying the start of
non-critical activities in order to improve a resource profile is not free.
Simply put, time lost by delaying the start of an activity cannot later be
regained (if things do not turn out as planned), given the desired fixed
completion time for the overall project.
This has illustrated the resource smoothing or resource levelling
problem, which can be stated as:
• Given a fixed overall project completion time (which we know is
feasible with respect to the resource constraints) ‘smooth’ the usage of
resources over the timescale of the project (so that, for example, we do
not get large changes between one week and the next in the number
of people we need).
This smoothing process makes creative use of float to artificially delay
activities in order to smooth resource usage.
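A sketch of the profile calculation in Python (resource needs as given above: two people for Activity 1, one for each other activity; the critical activities are fixed at their earliest start times, Activity 2 is delayed until Time 6 as in the text, and, as an assumption for illustration, Activities 4 and 6 are chained immediately after it):

```python
durations = {1: 6, 2: 2, 3: 3, 4: 2, 5: 4, 6: 1,
             7: 1, 8: 6, 9: 3, 10: 1, 11: 1}
people = {i: 1 for i in durations}
people[1] = 2  # Activity 1 needs two people

# Chosen start times: critical activities at their earliest starts;
# Activity 2 delayed to Time 6, then 4 and 6 chained after it (assumed).
start = {1: 0, 2: 6, 3: 6, 4: 8, 5: 9, 6: 10,
         7: 13, 8: 14, 9: 20, 10: 20, 11: 23}

# Number of people in use during each week 0..23.
profile = [sum(people[i] for i in durations
               if start[i] <= t < start[i] + durations[i])
           for t in range(24)]
print(max(profile), min(profile))  # 2 1
```

With this schedule the usage never exceeds two people and never drops to zero, i.e. a profile much closer to the constant ideal than starting every non-critical activity at its earliest start.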
There are two disadvantages to smoothing:
• If we have multiple resources then we may well find that smoothing
one resource makes another resource less smooth, hence we have to
make trade-offs between resource profiles.
• Time lost cannot be regained, once we have delayed an activity to
smooth a resource profile then, if things go wrong later (e.g. some
activities take longer than we planned) we cannot regain the time we
have lost.
Structure
Forming the list of activities, precedence relationships and activity
completion times structures thought about the project and clearly indicates
the separate activities that we are going to have to undertake, their
relationship to one another and how long each activity will take. Hence
network analysis is useful at the planning stage of the project.
Management
Once the project has started then the basic idea is that we focus
management attention on the critical activities (since if these are delayed
the entire project is likely to be delayed). It is relatively easy to update
the network, at regular intervals, with details of any activities that have
been finished, revised activity completion times, new activities added to
the network, changes in precedence relationships, etc and recalculate the
overall project completion time. This gives us an important management
tool for managing (controlling) the project.
Plainly it is also possible to ask (and answer) ‘what if’ questions relatively
easily (e.g. what if a particular activity takes twice as long as expected –
how will this affect the overall project completion time?).
It is also possible to identify activities that, at the start of the project, were
non-critical but which, as the project progresses, approach the status of
being critical. This enables the project manager to ‘head off’ any crisis that
might be caused by suddenly finding that a previously neglected activity
has gone critical.
Activity
Think about projects you have been involved with, either at work, at home, or in your
community. Would these projects have benefited from applying the techniques presented
in this chapter or not? Next time you have a project to do will you attempt to apply the
techniques presented in this chapter or not? If not, why not?
Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title Anderson (page number)
Nokia networks 371
Hospital revenue bond at Seasongood & Mayer 381
Kimberly-Clark Europe 391
Chapter 4: Decision making under uncertainty
Essential reading
Anderson, Chapter 13, excluding section 13.6.
Spreadsheet
• dectree.xls
• Sheet A: Solution of the pay-off table example
• Sheet B: Solution of the decision tree example
• Sheet C: Solution of the decision tree example, simulation.
This spreadsheet can be downloaded from the VLE.
Learning outcomes
By the end of this chapter, and having completed the Essential reading,
you should be able to:
• construct a pay-off table for a problem and analyse it numerically
using the standard decision criteria: optimistic, conservative
(pessimistic), regret, equally likely and expected monetary value
• draw a decision tree for a problem
• calculate expected monetary values
• process the tree to arrive at a suggested course of action
• calculate the upside and downside of any decision
• perform sensitivity analysis
• calculate the expected values associated with perfect information
• understand the use of simulation in decision trees
• understand the use of utilities in place of monetary values.
Introduction
People make personal choices all the time. For example should I accept
a particular job offer or not? Should I marry this person or not? In the
business world choices must also be made all the time. For example,
should our company apply for a particular contract or not? Even if we do
apply, what price should we bid for the contract?
Example
Consider the example of a company that can invest in a new product produced by another
company. Depending upon how much they want to invest they are entitled to a specified
share of the profits made by the product over the next year. If they invest £80m they are
entitled to 50 per cent of the profits, but if they invest £35m only 25 per cent of the profits.
Of course they could choose not to invest at all. There are three scenarios for the demand
for the product, high, medium or low and in these cases the total profit from the product
would be £300m, £200m and £50m respectively. What should the company do?
Activity
Reflect on this problem for five minutes. Would you invest £80m, £35m or choose not to
invest? Why or why not? Record your decision (and the reasons for it) here.
Spreadsheet 4.1
Spreadsheet 4.2
Here the regret value of 5 for cell E15 is associated with choice B and the
medium demand scenario. The best choice we could have made if we had
known in advance medium demand was going to occur would have been
choice A with a pay-off of 20. As in cell E15 we have made choice B, we
only obtain a pay-off of 15 so our regret is 20 – 15 = 5. For each of the
demand scenarios columns (D13 to D16, E13 to E15 and F13 to F16) the
regret values shown are calculated by taking the maximum pay-off for that
demand scenario and subtracting from it the particular pay-off values for
the choice/scenario being considered. Note here that small regret values
are better than large regret values.
Having calculated the regret values we calculate the maximum regret for
each choice, so here 55 for A, 30 for B and 70 for C and make the choice
that minimises this maximum regret – here choice B with a value of 30.
The reasoning here is that if we make choice B then we will not ‘miss out’
too much by having made a wrong choice irrespective of the demand that
occurs. If we make choice B then if high demand occurs we are ‘missing
out’ on 30, if medium demand occurs we are ‘missing out’ on 5 and if low
demand occurs we are ‘missing out’ on 22.5. Note here that the actual
outcome given this choice of B will be either 40 or 15 or -22.5 depending
upon the demand that occurs.
All of the above have not taken any probability information into account.
There are two standard decision criteria approaches to introducing
probability into the situation and we consider these below.
Equally likely – maximise average pay-off
Here we assume that each demand scenario is equally likely (so they all
have equal probability). We calculate the average pay-off for each choice
and take the maximum of these values. Here the average pay-offs for A,
B and C are 11.67, 10.83 and 0 respectively and the maximum of these is
11.67 associated with choice A. This decision criterion is also known as the
Laplace criterion.
Expected monetary value – maximise the probability weighted
pay-off
Here we assume we have information as to the probability of each demand
scenario. We calculate the expected monetary value (EMV) which is a
probability weighted average for each choice and take the maximum
of these values. Examining Sheet A opposite, we have the probabilities
associated with the demand scenarios as 0.2 for high, 0.3 for medium and
0.5 for low (note that these probabilities must sum to one as these three
scenarios are the only possibilities for the demand that might occur).
Spreadsheet 4.3
The pay-offs for choice A for high, medium and low are 70, 20 and –55
respectively, so the EMV for choice A is 70(0.2) + 20(0.3) – 55(0.5)
= –7.5, as in cell H8 above. As can be seen, the EMV is the probability
weighted average of the numeric monetary outcomes.
The EMV for choice B is 1.25 and for choice C is 0 (cells H9 and H10
above) and so the maximum EMV is 1.25 associated with choice B.
It can be seen above that for each of the decision criteria the spreadsheet
shows the decision that is ‘best’ and its associated value.
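The criteria above can be reproduced with a short calculation. A sketch in Python (pay-offs built from the investment data of the example; probabilities 0.2/0.3/0.5 as in Sheet A):

```python
# Total product profit (in £m) under each demand scenario.
profits = {'high': 300, 'medium': 200, 'low': 50}

# Each choice: (amount invested, share of the profits received).
choices = {'A': (80, 0.50), 'B': (35, 0.25), 'C': (0, 0.0)}

# Pay-off = share of profit minus amount invested.
payoff = {c: {s: share * profits[s] - invest for s in profits}
          for c, (invest, share) in choices.items()}

# Regret: best pay-off achievable per scenario minus the actual pay-off;
# minimax regret picks the choice with the smallest maximum regret.
best = {s: max(payoff[c][s] for c in choices) for s in profits}
max_regret = {c: max(best[s] - payoff[c][s] for s in profits) for c in choices}

# Equally likely (Laplace): maximise the average pay-off.
avg = {c: sum(payoff[c].values()) / 3 for c in choices}

# EMV: maximise the probability-weighted pay-off.
p = {'high': 0.2, 'medium': 0.3, 'low': 0.5}
emv = {c: sum(p[s] * payoff[c][s] for s in profits) for c in choices}

print(min(max_regret, key=max_regret.get),  # B (minimax regret 30)
      max(avg, key=avg.get),                # A (average 11.67)
      max(emv, key=emv.get))                # B (EMV 1.25)
```

As in the text, different criteria point to different choices, which is precisely why a systematic comparison of the criteria is useful.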
Discussion
We considered above a number of different standard decision criteria
for our particular example and it can be seen that, depending upon the
criteria used, A, B or C could be chosen. In some senses we have achieved
nothing as we knew when we first considered the problem that we could
choose A, B or C. However, we have articulated a number of decision
criteria, each with their own logic, that enable us to systematically take the
decision problem and reach a logical decision in a numeric way. The fact
that there is no ‘one best way’ to reach a decision in problems such as the
one considered above is just a fact of life. Instead the usual approach is to
consider the different decisions arrived at using all the criteria and then to
somehow select from them a unique final decision.
There is no ‘best way’ to choose a unique final decision and various ideas
have been proposed, such as:
• voting – choose the decision that is most popular over a number of
criteria
• personal preference – choose the decision criterion that best suits
your personal preferences (e.g. a risk-taker might use the optimistic
criterion; someone who is risk-averse might use the conservative
criterion) and base the decision on that criterion.
Activity
Recall the activity above where you considered whether you would invest £80m, £35m
or nothing. Do you think you should revise that decision in the light of the calculations
carried out above or not? Why or why not?
Example
A company faces a decision with regard to a product (codenamed M997) developed by
one of its research laboratories. It has to decide whether to proceed to test market M997
or whether to drop it completely. It is estimated that test marketing will cost £100K. Past
experience indicates that only 30 per cent of products are successful in a test market.
If M997 is successful at the test market stage then the company faces a further decision
relating to the size of plant to set up to produce M997. A small plant will cost £150K to
build and produce 2,000 units a year whereas a large plant will cost £250K to build but
produce 4,000 units a year.
The marketing department has estimated that there is a 40 per cent chance that the
competition will respond with a similar product and that the price per unit sold (in £) will
be as follows (assuming all production sold):
Large plant Small plant
Competition respond 20 35
Competition do not respond 50 65
Assuming that the life of the market for M997 is estimated to be seven years and that the
yearly plant running costs are £50K (both sizes of plant – to make the numbers easier!)
should the company go ahead and test market M997?
Activity
Reflect on this problem for five minutes. Would you advise test marketing M997 or not?
Why or why not? Record your decision (and the reasons for it) here.
Activity
Draw the decision tree by yourself using the information about M997 given before. Does
what you have drawn correspond to the decision tree presented above or not?
Activity
Consider the decision tree presented above. Does it cause you to revise your previous
decision about whether or not to test market M997? If so, why?
Although the decision tree diagram does help us to see more clearly the
nature of the problem it has not, so far, helped us to decide whether to drop
M997 or whether to test market it (the decision we are trying to make!). To
do this we have two steps as illustrated below.
In these steps we will need to use information (numbers)
relating to future sales, prices, costs, etc. Although we may not
be able to give accurate figures for these we need to factor such
figures into our calculations if we are to proceed. Investigating
how our decision to test market or not might change as these
figures change (i.e. sensitivity analysis) can be done once we
have carried out the basic calculations using our assumed
figures.
Step 1
In this step, for each path through the decision tree from the initial
node to a terminal node, we work out the profit (in £) involved in that
path. Essentially in this step we work from the left-hand side of the
diagram to the right-hand side.
• path to terminal node 2 – we drop M997
• Total revenue = 0
Total cost = 0
Total profit = 0
Note that we ignore here (and below) any money already spent
on developing M997 (that being a sunk cost, namely a cost that
cannot be altered no matter what our future decisions are, so
logically it has no part to play in deciding future decisions).
• path to terminal node 4 – we test market M997 (cost £100K) but then
find it is not successful so we drop it
• Total revenue = 0
Total cost = 100
Total profit = –100 (all figures in £K)
• path to terminal node 7 – we test market M997 (cost £100K), find it
is successful, build a small plant (cost £150K) and find we are without
competition (revenue for seven years at 2,000 units a year at £65 per
unit = £910K)
• Total revenue = 910
Total cost = 250 + 7 × 50 (running cost)
Total profit = 310
• path to terminal node 8 – we test market M997 (cost £100K), find
it is successful, build a small plant (cost £150K) and find we have
competition (revenue for seven years at 2,000 units a year at £35 per
unit = £490K)
• Total revenue = 490
Total cost = 250 + 7 × 50
Total profit = –110
• path to terminal node 10 – we test market M997 (cost £100K), find it
is successful, build a large plant (cost £250K) and find we are without
competition (revenue for seven years at 4,000 units a year at £50 per
unit = £1,400K)
• Total revenue = 1,400
Total cost = 350 + 7 × 50
Total profit = 700
• path to terminal node 11 – we test market M997 (cost £100K), find
it is successful, build a large plant (cost £250K) and find we have
competition (revenue for seven years at 4,000 units a year at £20 per
unit = £560K)
• Total revenue = 560
Total cost = 350 + 7 × 50
Total profit = –140
• path to terminal node 12 – we test market M997 (cost £100K), find it
is successful, but decide not to build a plant
• Total revenue = 0
Total cost = 100
Total profit = –100
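The path-profit arithmetic above can be collected into one short calculation (a minimal sketch in Python; the node numbers and figures are those given above, with revenue converted from £ per unit to £K):

```python
# Profit (in £K) for each path: revenue over the market life minus
# test-marketing, plant-building and plant-running costs (sunk
# development costs are excluded, as discussed above).
YEARS, TEST, RUN = 7, 100, 50  # market life, test cost (£K), yearly running cost (£K)

def path_profit(units_per_year, price, build_cost):
    revenue = YEARS * units_per_year * price / 1000  # price is in £ per unit
    cost = TEST + build_cost + YEARS * RUN
    return revenue - cost

profits = {
    4:  -TEST,                       # test market fails, drop M997
    7:  path_profit(2000, 65, 150),  # small plant, no competition
    8:  path_profit(2000, 35, 150),  # small plant, competition
    10: path_profit(4000, 50, 250),  # large plant, no competition
    11: path_profit(4000, 20, 250),  # large plant, competition
    12: -TEST,                       # success in test market but no plant built
}
print(profits)  # {4: -100, 7: 310.0, 8: -110.0, 10: 700.0, 11: -140.0, 12: -100}
```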
Activity
Repeat the calculations for profits associated with terminal nodes given above, but
without looking at the subject guide. Do you get the correct answers or not?
So far we have not made use of the probabilities in the problem – this
we do in the second step where we work from the right-hand side of the
diagram back to the left-hand side.
Step 2
Consider chance node 6 with branches to terminal nodes 7 and 8
emanating from it. The branch to terminal node 7 occurs with probability
0.6 and total profit 310K whilst the branch to terminal node 8 occurs with
probability 0.4 and total profit –110K.
Hence the expected monetary value (EMV) of this chance node is
given by:
0.6 × (310) + 0.4 × (–110) = 142 (£K)
Essentially this figure represents the expected (or average) profit from this
chance node (60 per cent of the time we get £310K and 40 per cent of the
time we get –£110K so on average we get (0.6 × (310) + 0.4 × (–110))
= 142 (£K)).
The EMV for any chance node is defined as ‘the sum, over all branches,
of the probability of the branch multiplied by the monetary (£)
value of the branch’. Exactly as when we considered pay-off
tables above, it is a probability-weighted average of the numeric monetary
outcomes.
Hence the EMV for chance node 9 with branches to terminal nodes 10 and
11 emanating from it is given by
0.6 × (700) [node 10] + 0.4 × (−140) [node 11] = 364 (£K)
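Written out directly, the EMV rule is a probability-weighted sum over a node's branches; a quick sketch checking the two chance nodes above:

```python
def emv(branches):
    """EMV of a chance node: sum over branches of probability x value (£K)."""
    return sum(p * value for p, value in branches)

node6 = emv([(0.6, 310), (0.4, -110)])  # small plant: no competition / competition
node9 = emv([(0.6, 700), (0.4, -140)])  # large plant: no competition / competition
print(node6, node9)
```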
We can now picture the decision node relating to the size of plant to
build as below where the chance nodes have been replaced by their
corresponding EMVs.
Figure 4.2 (the plant-size decision node with the chance nodes replaced by
their EMVs: small plant EMV = 142K; large plant EMV = 364K; no plant
(Alt 5) EMV = –100K)
Hence at the plant decision node we have the three alternatives:
• Alternative 3: build small plant EMV = 142K
• Alternative 4: build large plant EMV = 364K
• Alternative 5: build no plant EMV = –100K
It is clear that, in £ terms, alternative number 4 is the most attractive and
so we can discard the other two alternatives, giving the revised decision
tree shown below.
[Figure: the revised decision tree – at decision node 1 we either drop M997
or test market it (Alt 2, cost £100K); the test is not successful with
probability 0.7 (EMV = −100) or successful with probability 0.3, leading
to chance node 3 and the large-plant branch.]
Activity
Repeat the calculations for the expected monetary values given above, but without
looking at the subject guide. Do you get the correct answers or not?
Summary
As a result of the above process we have decided:
• We should test market M997 and this decision has an expected
monetary value (EMV) of £39.2K.
• If M997 is successful in test market then we anticipate, at this stage,
building a large plant (recall the alternative we chose at the decision
node relating to the size of plant to build). However, it is plain that in
real life we will review this once test marketing has been completed.
Note here that the EMV of our decision (39.2 in this case)
does not reflect what will actually happen – it is merely an
average, or expected, value were we to face the same tree many
times – but in fact we face the tree once only. If we follow the
path suggested above of test marketing M997 then the actual
monetary outcome will be one of [–100, 310, –110, 700, –140,
–100] corresponding to terminal nodes 4, 7, 8, 10, 11 and 12
depending upon future decisions and chance events.
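The whole two-step process amounts to a backward induction: profits at the terminal nodes, EMVs at chance nodes, the best alternative at decision nodes. A minimal sketch in Python (values in £K as above):

```python
def chance(*branches):
    # EMV of a chance node: probability-weighted sum of branch values.
    return sum(p * v for p, v in branches)

def decision(*alternatives):
    # A decision node keeps the alternative (label, value) with the highest value.
    return max(alternatives, key=lambda alt: alt[1])

small_plant = chance((0.6, 310), (0.4, -110))   # chance node 6
large_plant = chance((0.6, 700), (0.4, -140))   # chance node 9
plant = decision(("small plant", small_plant),
                 ("large plant", large_plant),
                 ("no plant", -100))            # decision node 5
success = chance((0.7, -100), (0.3, plant[1]))  # test-market outcome
root = decision(("drop M997", 0), ("test market", success))
print(root)
```

Rolling back gives the same result as above: test market M997, with an EMV of £39.2K, building a large plant if the test succeeds.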
Activity
Look back to where the M997 problem was introduced. Does the above decision to test
market M997 – arrived at above using a decision tree – correspond to your decision as
recorded there? Do you think using decision trees is a good way of structuring decision
making or not?
Excel solution
Look at Sheet B in the spreadsheet associated with this chapter. You will
see:
Spreadsheet 4.4
In column I are the values for terminal nodes and in column J are the
probabilities associated with those nodes (if appropriate). The structure of
the underlying decision tree is already coded into the Excel logic and all
other cells are fixed/calculated from columns I and J. Column D gives the
decision – at decision node 1 we choose to test market while at decision
node 5 we choose a large plant. These are the same decisions/EMVs as we
calculated above.
The advantage of using Excel comes when we consider sensitivity analysis.
Sensitivity analysis
Consider the decision tree given above. It is plain that the decision to
test market is influenced by the profit of 700 (£K) we will achieve if test
marketing is successful, we choose to build a large plant and we have
no competition. Hence we may vary this figure of 700 (and/or vary the
probability that this outcome occurs) to see if it changes the test market
decision.
For example, suppose the probability that we have no competition with
a large plant is no longer 0.6 but is instead 0.45. This implies that the
probability that we have competition is 1 – 0.45 = 0.55.
Amending the decision tree calculation using Excel we get:
Spreadsheet 4.5
Hence we can see that the initial decision (to test market) is still the
optimal decision, although (as we would have expected) the EMV
associated with this decision, and the EMV associated with a large plant,
has fallen.
We can also conduct sensitivity analysis on a more systematic
algebraic basis (i.e. assign a symbol p to a given probability and work out
algebraic expressions for EMVs). To see this suppose that the probability
of no competition with a large plant is no longer 0.6 but is p instead. This
implies that the probability that we have competition is 1 − p.
Assume that we leave the probabilities of competition/no competition for
a small plant unaltered. It is clear that as p decreases we
will at some stage prefer a small plant over a large plant (e.g. if p = 0 then
a small plant with EMV of 142 would be preferable to a large plant with
EMV of –140). We can therefore ask the logical question: ‘How small
does p have to be before we prefer a small plant?’
To answer this question note that if the EMVs of a large and small plant
are equal then we will be indifferent as to which we choose. This will
occur when
p(700) + (1 – p)(–140) = 142
(i.e. when 840p – 140 = 142, so p = 282/840 = 0.3357).
Hence if p drops below 0.3357 we would prefer a small plant to a large
plant. This type of systematic sensitivity analysis can sometimes be
preferable to simply trying different numbers and redoing the calculation
to see what the effect is.
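The indifference calculation can be checked in a line or two of Python (a sketch using the figures above):

```python
# Solve p*700 + (1 - p)*(-140) = 142, i.e. 840p - 140 = 142, for the
# probability p of no competition at which large and small plant tie.
p = (142 + 140) / (700 + 140)
assert abs(p * 700 + (1 - p) * (-140) - 142) < 1e-9  # the two EMVs are equal here
print(round(p, 4))
```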
If we are absolutely sure there will be no competition, then using Sheet B
but altering the probabilities for no competition to one and for
competition to zero we get:
Spreadsheet 4.6
showing that the new EMV is 140 (and we would test market M997).
If we are absolutely sure there will be competition and using Sheet B again
but altering the probabilities for competition to 1 and for no competition
to zero we get:
Spreadsheet 4.7
showing that the new EMV is zero (and we would drop M997).
We can hence form the following table:
                    Original probability    EMV if probability is one
No competition      0.6                     140
Competition         0.4                     0
Table 4.2
Simulation
For the decision tree considered above we had a known value, 0.7, for the
probability of the product not being successful in test market. Suppose,
however, that this value was not known, rather we had a probability
distribution. For illustration, suppose that the probability of the product
not being successful in test market could be any value between 0.7 and
0.9. What can we do in this case?
The simple answer is that we can conduct a simulation. By this we mean
that we generate in a probabilistic fashion a set of values between 0.7 and
0.9 and see what the decision would be in each case.
Spreadsheet 4.8, taken from Sheet C in the spreadsheet associated with
this chapter, illustrates this point.
Does examining values taken from the range 0.7 to 0.9, and seeing what
the decision would be in each case, make logical sense?
Clearly we could examine more than 10 probability cases. But hopefully
the point is clear: we can gain insight by generating, in the fashion
illustrated here, a probability value from a known distribution.
On a technical issue if you examine Sheet C you will see that the probability
values are generated in Excel using L2 + (M2 – L2)*RAND(). This
expression generates a value uniformly distributed in [L2,M2]. However
the RAND() function in Excel, which generates a random number between
zero and one, will be recomputed each time you make a change to the Excel
sheet, resulting in the values you see changing. For example, open Sheet
C and type anything you like into a blank cell and press the return/enter
key. You will see columns N and O change. This is because Excel, when you
press the return/enter key, recalculates, and here it recalculates a value of
RAND() 10 times, meaning that (potentially) different decisions and EMVs
result. For this recalculation reason the values you see when you open Sheet
C may well be different from those you see in Spreadsheet 4.8.
Clearly if we want to gain better insight we need to examine more than 10
probability values, but since this involves complicating Sheet C we are not
going to pursue this issue further here.
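A version of the Sheet C simulation with many more than 10 draws can be sketched in Python (the plant-stage EMV of 364 and the −100 failure outcome are taken from the calculations above; the seed is an arbitrary choice made only so that repeated runs give the same draws):

```python
import random

random.seed(1)  # fixed seed for reproducibility

PLANT_EMV, FAIL_VALUE = 364, -100
decisions = []
for _ in range(10_000):
    # Like L2 + (M2 - L2)*RAND(): uniform probability of test-market failure.
    p_fail = random.uniform(0.7, 0.9)
    emv = (1 - p_fail) * PLANT_EMV + p_fail * FAIL_VALUE
    decisions.append(("test market" if emv > 0 else "drop M997", emv))

emvs = [e for _, e in decisions]
print({d for d, _ in decisions}, round(min(emvs), 1), round(max(emvs), 1))
```

Both decisions occur across the range, which is the insight the simulation offers. The Normal-distribution variant mentioned below corresponds to replacing NORMINV(RAND(), 310, 50) with random.normalvariate(310, 50).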
Although we have considered here simulating a probability value the same
approach can be taken to simulating other values in the decision tree. For
example, suppose the value associated with terminal node 7 was no longer
exactly 310 but a value taken from a Normal distribution with mean 310 and
standard deviation 50. Could we generate simulated values for this outcome
and examine the range of decisions and their associated EMVs so produced?
The answer is that we can, and the Excel needed to produce such a normally
distributed random number is simply NORMINV(RAND(),310,50).
On a terminology issue here this generation of values from known probability
distributions is called simulation, also called Monte-Carlo simulation or static
simulation. Dynamic simulation, or discrete-event simulation, is dealt with
in Chapter 11. You should be aware that when some people use the word
‘simulation’ they can mean either of these two definitions, and you have to
determine which is meant from the context under discussion.
Activity
Change the probability range in Sheet C. Firstly change it to be 0.8 to 0.9; then change it
to be 0.7 to 0.8. What do you observe about the decisions and their EMVs?
How does what you observe change if the value associated with terminal node 7 is no
longer exactly 310 but a value taken from a Normal distribution with mean 310 and
standard deviation 50?
Extensions
The basic decision tree technique presented above can be applied to any
problem for which a decision tree can be drawn. There are a number of
extensions to the technique and we briefly consider four such extensions
below. These relate to:
• discounting
• chance nodes
• decision nodes
• utility.
We will consider each in turn.
Discounting
In the example given above we were concerned with money received over
seven years. It is plain that £1 received in seven years’ time is worth less than
£1 received now and a technique called discounting, or discounted cash
flow, (involving finding the net present value of any sum of money)
can be used to overcome this difficulty. Applying discounting merely alters
the numbers which are fed into the decision tree so that we are dealing with
monetary values on an equivalent (present-day) basis. It does not affect the
processing of the tree (which remains exactly as indicated above).
Chance nodes
In the example given above we calculated a value for each chance node.
Although we have used EMV as the value of a chance node this choice
is, in many respects, somewhat arbitrary, and other ways of calculating
a value for a chance node have been suggested. To put it another way,
although it is a law of the universe (in this particular corner of the space–
time continuum) that E = mc² it is not a law of the universe that the value
of a chance node must be equal to the EMV value!
Reflect that EMV is an average value – but at a chance node we never
see the average – something happens once only (e.g. at chance node 6
we either see competition or not). Hence perhaps the average value is
misleading and we need to look at a chance node a different way.
If we were averse to losing money and wished to take a conservative
attitude to decision making we might calculate the value of a chance
node as the worst possible outcome that might occur at that node. Such a
strategy is often called a pessimistic strategy (e.g. such a strategy would
assign chance node 6 a value of –110 compared with an EMV of 142).
An alternative strategy would be the optimistic strategy of calculating
the value of a chance node as the best possible outcome that might occur
at that node (e.g. such a strategy would assign chance node 6 a value of
310 compared with an EMV of 142).
Yet another strategy would be to take the value of a chance node as equal
to the most likely (highest probability) outcome that might occur at that
node (e.g. such a strategy would assign chance node 6 a value of 310
compared with an EMV of 142).
Alternatively, we might take the value of a chance node to be some
weighted combination of the EMV and the values given by the optimistic
and pessimistic strategies. The literature contains a number of variations
on this theme of changing the value of a chance node.
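The alternative chance-node valuations above are easy to compare side by side (a sketch for chance node 6, whose branches are 310 with probability 0.6 and −110 with probability 0.4):

```python
branches = [(0.6, 310), (0.4, -110)]  # (probability, value in £K)

emv         = sum(p * v for p, v in branches)        # probability-weighted average
pessimistic = min(v for _, v in branches)            # worst possible outcome
optimistic  = max(v for _, v in branches)            # best possible outcome
most_likely = max(branches, key=lambda b: b[0])[1]   # highest-probability outcome
print(emv, pessimistic, optimistic, most_likely)
```

For this node the four strategies give 142, −110, 310 and 310 respectively, illustrating how much the chosen valuation rule can matter.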
Decision nodes
At each decision node we choose one of the alternative decisions at that
node based upon an implicit rule (‘choose the alternative with the highest
EMV’). Other rules are equally plausible (e.g. ‘choose the alternative with
the highest ROI’ (ROI = return on investment = profit divided by total
investment)).
For example consider the small and large plants above. We saw that a
small plant led to an EMV (actually an expected net profit) of 142. That
involved a total investment of 100 for test marketing plus 150 to build, so
a ROI of 142/(100 + 150) = 56.8 per cent.
A large plant led to an EMV (actually an expected net profit) of 364. That
involved a total investment of 100 for test marketing plus 250 to build, so
a ROI of 364/(100 + 250) = 104 per cent.
Although, in this case, ROI would lead to the same decision relating to the
size of plant to build it could have led to a different decision. For example
had the EMV at chance node 9 been 175 then on an EMV basis we would
still have chosen at decision node 5 to build a large plant. But on a ROI
basis [175/(100 + 250) = 50 per cent] we would choose to build a small
plant.
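The EMV-versus-ROI comparison above can be sketched as follows (figures in £K from the example; investment = £100K test marketing plus the build cost):

```python
options = {"small plant": (142, 100 + 150),   # (EMV, total investment) in £K
           "large plant": (364, 100 + 250)}

roi = {name: emv / invest for name, (emv, invest) in options.items()}
best_by_emv = max(options, key=lambda n: options[n][0])
best_by_roi = max(roi, key=roi.get)
print(best_by_emv, best_by_roi, {n: round(r, 3) for n, r in roi.items()})
```

With the hypothetical EMV of 175 for the large plant, its ROI would drop to 175/350 = 50 per cent, below the small plant's 56.8 per cent, and the two rules would disagree.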
Utility
Using monetary values in the decision tree implies, for example, that a
loss of 200K is only twice as bad as a loss of 100K. If the company does
not have 200K to lose, but does have 100K, then it is plain that they will
regard losing 200K as much worse than losing 100K. Moreover, note that
often decisions are made by people within the company. The company
makes the profit/loss, not the people concerned with the decision.
Hence the idea of utility is to replace the monetary values at each
terminal node in the decision tree by utilities (points) which reflect the
view of the decision maker (or company) to that sum of money (e.g. a loss
of 100K might get a utility value of –5 but a loss of 200K a utility value
of –500). In simple terms you can think of utilities as replacing pounds
by points. Although we have spoken here of points when assigning utility
values, it is common for utilities to be scaled such that they lie between
zero and one, as below.
To illustrate utility further suppose that the equation for
the utility to a decision maker of a monetary amount x, where x can vary
between a and b, is Utility = (x − a)^R/(b − a)^R, where R is a parameter.
What does the plot of utility against x look like? Well for R = 1, where we
consider a = 0, b = 100, it takes the form shown in Figure 4.4.
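Reading R as an exponent, i.e. Utility = ((x − a)/(b − a))^R, the curve is easy to tabulate (a minimal sketch; the choices of R below are illustrative):

```python
# Utility scaled to lie between zero and one over [a, b].
def utility(x, a=0.0, b=100.0, R=1.0):
    return ((x - a) / (b - a)) ** R

# R = 1 is the straight line of Figure 4.4; R < 1 gives a concave curve,
# R > 1 a convex one.
for R in (0.5, 1.0, 2.0):
    print(R, [round(utility(x, R=R), 3) for x in (0, 25, 50, 75, 100)])
```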
Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title                                                      Anderson (page number)
Decision analysis at Eastman Kodak                         540
Controlling particulate emissions at Ohio Edison Company   550
New drug decision analysis at Bayer Pharmaceuticals        563
Medical screening test at Duke University Medical Center   569
Investing in a transmission system at Oglethorpe Power     577
Chapter 5: Inventory control
Essential reading
Anderson, Chapter 10, sections 10.1–10.3, 10.5 and 10.6.
Spreadsheet
inventory.xls
• Sheet A: EOQ calculation and costing of an assigned order quantity
• Sheet B: Quantity discount calculation and costing of an assigned
order quantity
• Sheet C: Probabilistic demand order quantity.
This spreadsheet can be downloaded from the VLE.
Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• discuss the various factors that need to be included when determining
Economic Order Quantity and Economic Batch Quantity
• determine Economic Order Quantity when there are no quantity
discounts
• determine Economic Order Quantity when there are quantity discounts
• determine Economic Batch Quantity
• determine the optimum order quantity when the demand is
probabilistic (rectangular (uniform) distribution or Normal
distribution)
• explain Materials Requirements Planning (MRP) by means of a simple
example
• discuss Just-in-Time (JIT), Optimised Production Technology (OPT)
and Supply Chain Management (SCM).
Introduction
Inventory (also known as stock) is something that we come across in our
everyday life whether at work or outside the work environment. There are
many problems associated with it. Whether an international company with
operations spreading across a number of countries, a national company
Activity
How many different items of stock can you list that you encounter, in your work/college
life (e.g. the stock of paper clips in your desk), in your personal life (e.g. the stock of
money in your pocket)?
Activity
Choose one business item and one personal item from the list of items you produced
above. How are stocks of these items controlled and managed? Classify the items in your
list into A/B/C.
to the immediate loss of production and hence the profit associated with
it, but also due to the long-term effect on customer goodwill and on future
market share. If the demand for a product is reasonably steady, the stock
can be minimal. There just needs to be enough to cater for any minor
deviations and delays in the supply chain. If the demand itself fluctuates,
larger inventories have to be held to counteract these fluctuations.
Activity
What other reasons can you think of for holding stock?
Whatever the reason for holding stock, it is evident that a company will
incur a cost associated with holding it. It is, therefore, in the interest of
the company to reduce the inventory and to control it more efficiently: to
aim for the ideal of zero inventory mentioned above. Developed after the
Second World War in Japan, the ‘Just-In-Time’ (JIT) philosophy aims to
achieve such an ideal situation. It does, however, require certain conditions
to be met. JIT will be discussed in more detail later in this chapter.
Inventory control is a compromise management action. Consequently,
there could be times when a part is out of stock for a short period, even
though every effort has been made to avoid this. Such situations are not
always costly, although, occasionally, production lines have had to be
stopped for very short periods due to lack of parts.
Basics
The basic function of stock (inventory) is to insulate the production
process from changes in the environment as shown below. For simplicity
here we will consider a classic manufacturing environment.
Figure 5.1 (stock insulating the manufacturing process from changes in its
environment)
Note here that although we refer to manufacturing, other industries also
have stock (e.g. the stock of money in a bank available to be distributed to
customers, the stock of policemen in an area, etc.).
The question then arises: how much stock should we have? It is this
simple question that inventory control theory attempts to answer.
There are two extreme answers to this question:
a lot
• this ensures that we never run out
• it is an easy way of managing stock
• it is expensive in stock costs, cheap in management costs.
none/very little
• this is known (effectively) as Just-in-Time (JIT)
• it is a difficult way of managing stock
• it is cheap in stock costs, expensive in management costs.
We shall consider the problem of ordering raw material stock but the same
basic theory can be applied to the problem of deciding the:
• finished goods stock
• size of a batch in a batch production process.
The costs that we need to consider in deciding the amount of stock to
hold can be divided into stock holding costs and stock ordering
(and receiving) costs, as below. Note that, conventionally,
management costs are ignored.
Basic model
Activity/Reading
For this section read Anderson, Chapter 10, sections 10.1 and 10.2.
Figure 5.2
Consider drawing a horizontal line at Q/2 in the above diagram. If you
were to draw this line then it is clear that the times when stock exceeds
Q/2 are exactly balanced by the times when stock falls below Q/2. In other
words, we could equivalently regard the above diagram as representing a
constant stock level of Q/2 over time.
Hence we have that:
• Annual holding cost = ch(Q/2)
where Q/2 is the average (constant) inventory level.
• Annual order cost = co(R/Q)
where (R/Q) is the number of orders per year (R used per year and
ordering Q each order must mean that the number of orders made is R/Q).
So total annual cost = ch(Q/2) + co(R/Q).
Figure 5.3
We can calculate exactly which value of Q corresponds to the minimum total
cost by differentiating total cost with respect to Q and equating to zero.
d(total cost)/dQ = ch/2 − coR/Q² = 0 for minimisation
which gives Q² = 2coR/ch.
Hence the best value of Q (the amount to order = amount stocked) is
given by
Q =√(2Rco/ch)
and this is known as the Economic Order Quantity (EOQ).
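The formula translates directly into code (a minimal sketch; the figures R = 200 units a year, co = £35 and ch = £30 are those of the worked example that follows):

```python
from math import sqrt

def eoq(R, co, ch):
    # Economic Order Quantity: Q = sqrt(2*R*co/ch).
    return sqrt(2 * R * co / ch)

def total_annual_cost(Q, R, co, ch):
    # Holding cost ch*Q/2 plus ordering cost co*R/Q.
    return ch * Q / 2 + co * R / Q

Q = eoq(200, 35, 30)
print(round(Q, 3), round(total_annual_cost(22, 200, 35, 30), 2))
```

Rounding the EOQ of 21.602 up to a whole order of 22 gives the total annual cost of roughly £648.2 used in the example below.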
Comments
This formula for the EOQ is believed to have been first derived in the early
1900s and so EOQ dates from the beginnings of mass production/assembly
line production.
To get the total annual cost associated with the EOQ we have from before
that total annual cost = ch(Q/2) + co(R/Q) so putting Q = √(2Rco/ch) into
this we get that the total annual cost is given by:
ch(√(2Rco/ch)/2) + co(R/√(2Rco/ch)) = √(Rcoch/2) + √(Rcoch/2) = √(2Rcoch).
Hence total annual cost is √(2Rcoch) which means that when ordering the
optimal (EOQ) quantity we have total cost proportional to the square root
of any of the factors (R, co and ch) involved. For example, if we were to
reduce co by a factor of 4 we would reduce total cost by a factor of 2 (note
the EOQ would change as well). This, in fact, is the basis of Just-in-Time
(JIT), to reduce (continuously) co and ch so as to drive down total cost.
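The square-root scaling is easy to verify numerically (a sketch reusing the example's figures; cutting co by a factor of 4 halves the optimal total cost):

```python
from math import sqrt

def optimal_cost(R, co, ch):
    # Total annual cost when ordering the EOQ: sqrt(2*R*co*ch).
    return sqrt(2 * R * co * ch)

base = optimal_cost(200, 35, 30)
quartered_co = optimal_cost(200, 35 / 4, 30)
print(round(base, 2), round(quartered_co, 2), round(base / quartered_co, 2))
```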
To return to the issue of management costs being ignored for a moment,
the basic justification for this is that if we consider the total cost curve
shown above, then – assuming we are not operating a policy with a very
low Q (JIT) or a very high Q – we could argue that the management costs
are effectively fixed for a fairly wide range of Q values. If this is so then
such costs would not influence the decision as to what order quantity Q
to choose.
Figure 5.4
With this EOQ we can calculate our total annual cost from the equation:
Total annual cost = ch(Q/2) + co(R/Q)
Hence for this example we have that:
Total annual cost =
(30 × 22/2) + (35 × 200/22) = 330 + 318.2 = £648.20.
Note: if we had used the exact Q value given by the EOQ formula (i.e.
Q = 21.602) the two terms relating to annual holding cost and annual
order cost would have been exactly equal to each other: holding cost =
order cost at the EOQ point (or, referring to the diagram above, the EOQ
quantity is at the point where the Holding Cost curve and the Order Cost
curve intersect);
thus (chQ/2) = (coR/Q) so that Q = √(2Rco/ch).
In other words, as in fact might seem natural from the shape
of the Holding Cost and Order Cost curves, the optimal order
quantity coincides with the order quantity that exactly
balances Holding Cost and Ordering Cost.
Note however that this result only applies to certain simple situations. It
is not true (in general) that the best order quantity corresponds to the
quantity where holding cost and ordering cost are in balance.
Example
Suppose, for administrative convenience, we ordered 20 and not 22 at each order – what
would be our cost penalty for deviating from the EOQ value?
With a Q of 20 we look at the total annual cost
= (chQ/2) + (coR/Q)
= (30 × 20)/2 + (35 × 200/20) = 300 + 350 = £650.
Hence the cost penalty for deviating from the EOQ value is £650 – £648.2 = £1.80.
Note that this is, relatively, a very small penalty for deviating from the
EOQ value. This is usually the case in inventory problems (i.e. the total
annual cost curve is flat near the EOQ so there is only a small cost penalty
associated with slight deviations from the EOQ value (see the diagram
above)).
This is an important point. Essentially we should view the EOQ as a
ballpark figure. That is, it gives us a rough idea as to how many we
should be ordering each time. After all our cost figures (such as the cost of
an order) are likely to be inaccurate. Also it is highly unlikely that we will
use items at a constant rate (as the EOQ formula assumes). However, that
said, the EOQ model provides a systematic and quantitative way of getting
an idea as to how much we should order each time. If we deviate far from
this ballpark figure then we will most likely be paying a large cost penalty.
Extensions
In order to illustrate extensions to the basic EOQ calculation we will
consider the following example.
Example
A company uses 12,000 components a year at a cost of 5 pence each. Order costs have
been estimated to be £5 per order and inventory holding cost is estimated at 20 per cent
of the cost of a component per year.
Note here that this is the sort of cheap item that is a typical non-JIT item.
Spreadsheet 5.1
Here we have simply reproduced in Excel the calculation we carried out
above with the addition that we have included the costs associated with
ordering, holding and purchasing, as well as the total cost. Sheet A also
allows us to cost any assigned order quantity we wish – above you can see
that with an order quantity of 1,000 the total cost (per year) of ordering
and holding is £65 to which must be added the purchase cost of £600
(12,000 units a year at 0.05 each).
If orders must be made for 1, 2, 3, 4, 6 or 12 monthly batches,
what order size would you recommend and when would you
order?
Here we do not have an unrestricted choice of order quantity (as the EOQ
formula assumes) but a restricted choice as explained below.
This is an important point – the EOQ calculation gives us a quantity to
order, but often people are better at ordering on a time basis (e.g. once
every month).
In other words we need to move from a quantity basis to a
time basis.
For example the EOQ quantity of 3,464 has an order interval of
(3,464/12,000) = 0.289 years, i.e. we order once every 52(0.289) =
15 weeks. Would you prefer to order once every 15 weeks or every four
months? Recall here what we saw before, that small deviations from the
EOQ quantity lead to only small cost changes.
Hence if orders must be made for 1, 2, 3, 4, 6 or 12 monthly batches, the
best order size to use can be determined as follows.
Obviously when we order a batch we need only order sufficient to cover
the number of components we are going to use until the next batch
is ordered – if we order less than this we will run out of components
and if we order more than this we will incur inventory holding costs
unnecessarily. Hence for each possible batch size we automatically know
the order quantity (e.g. for the 1-monthly batch the order quantity is the
number of components used per month = R/12 = 12,000/12 = 1,000).
As we know the order quantity we can work out the total annual cost of
each of the different options and choose the cheapest option.
The total annual cost (with an order quantity of Q) is given by
(chQ/2) + (coR/Q) and we have the table below:
Batch        Order quantity Q    Holding cost (chQ/2)    Order cost (coR/Q)    Total annual cost
1-monthly        1,000               £5                      £60                   £65
2-monthly        2,000               £10                     £30                   £40
3-monthly        3,000               £15                     £20                   £35
4-monthly        4,000               £20                     £15                   £35
6-monthly        6,000               £30                     £10                   £40
12-monthly       12,000              £60                     £5                    £65
Table 5.1
The least cost option therefore is to choose either the 3-monthly or the
4-monthly batch.
In fact we need not have examined all the options. As we knew that the
EOQ was 3,464 (associated with the minimum total annual cost) we have
that the least cost option must be one of the two options that have order
quantities nearest to 3,464 (one order quantity above 3,464, the other
below 3,464) (i.e. either the 3-monthly (Q = 3,000) or the 4-monthly (Q
= 4,000) option). This can be seen from the shape of the total annual cost
curve shown below. The total annual cost for these two options could then
be calculated to find out which was the cheapest option.
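The batch comparison above can be reproduced with a short script. The holding cost ch = £0.01 per unit per year and the ordering cost co = £5 per order are assumptions on my part – they are consistent with the EOQ of 3,464 quoted above, but are not stated explicitly in this extract:

```python
# Sketch of the restricted-batch-size comparison. The values of ch and co
# below are assumed (they reproduce the quoted EOQ of 3,464).
import math

R = 12_000    # annual demand (units per year)
ch = 0.01     # assumed holding cost, pounds per unit per year
co = 5.0      # assumed ordering cost, pounds per order

def total_annual_cost(Q):
    """Holding cost chQ/2 plus ordering cost coR/Q."""
    return ch * Q / 2 + co * R / Q

eoq = math.sqrt(2 * co * R / ch)
print(f"EOQ = {eoq:.0f}")          # 3464, as in the text

# Restricted choice: order a 1, 2, 3, 4, 6 or 12 monthly batch.
for months in (1, 2, 3, 4, 6, 12):
    Q = R * months // 12           # order quantity covering `months` of use
    print(f"{months:2d}-monthly batch: Q = {Q:5d}, "
          f"annual cost = {total_annual_cost(Q):.2f}")
```

With these assumed values the 3-monthly and 4-monthly batches tie for least cost, matching the conclusion drawn above.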
Figure 5.5
Activity
Use Sheet A and confirm for yourself the cost figures for varying batch sizes given above.
Quantity discounts
Activity/Reading
For this section read Anderson, Chapter 10, section 10.5.
Figure 5.6
The order quantity which provides the lowest overall cost will
be the lowest point on the Combined Cost Curve shown in the
diagram above. We can precisely calculate this point as it corresponds
to:
• either an EOQ for one of the discount curves considered separately
(note that in some cases the EOQ for a particular discount curve may
not lie within the range covered by that discount and hence will be
infeasible)
• or one of the breakpoints between the individual discount curves on
the total annual cost curve for the combined discount structure.
We merely have to work out the total annual cost for each of these types of
points and choose the cheapest.
First the EOQs:
Table 5.3
Note here that we now include material (purchase) cost in total
annual cost.
The effect of the discount is to reduce the purchase cost, and hence ch, the
inventory holding cost per unit per year – all other terms in the EOQ formula (R and
co) remain the same. Of the EOQs only one, the first, lies within the range
covered by the discount rate.
For the breakpoints we have:
Order quantity   Unit cost (£)   ch (£)   Holding + ordering cost (£)   Material cost (£)   Total cost (£)
5,000            0.0475          0.0095   35.75                         570                 605.75
10,000           0.045           0.009    51                            540                 591
20,000           0.0425          0.0085   88                            510                 598
Table 5.4
From these figures we can see that the order quantity associated with minimum
total annual cost is 10,000, with a total annual cost of 591.
Note too here that this situation illustrates the point we made before
when we considered the simple EOQ model, namely that it is not true (in
general) that the best order quantity corresponds to the quantity where
holding cost and ordering cost are in balance. This is because the holding
cost associated with Q = 10,000 is ch(Q/2) = 0.009(10000/2) = 45, while
the ordering cost is co(R/Q) = 5(12000/10000) = 6.
Excel solution
Look at Sheet B in the spreadsheet associated with this chapter. You will
see:
Spreadsheet 5.2
For each of the discount ranges the EOQ is calculated and the spreadsheet
notes (cells F7 to F10) whether that EOQ is feasible or not. Any order
quantity can be costed using cell E14; the example above is for an order
quantity (breakpoint) of 10,000 units. The discount to be applied is
automatically shown in cell E16.
Activity
Cost the breakpoints using Sheet B and confirm that they agree with the values presented
above.
Note here that the use of discount analysis is not restricted to buyers; it
can also be used by a supplier to investigate the likely effects upon the
orders he receives of changes in the discount structure. For example, if the
supplier lowers the order size at which a particular discount is received
then how might this affect the orders he receives – will they become
bigger/smaller, less frequent/more frequent?
Example
A workshop has the facility to manufacture a part at a rate of 3,000 units per day. The
standard cost of the part is £14.30. The setup cost of the machine is £1,200. Assuming
an interest rate of 15 per cent per year, calculate the Economic Batch Quantity if the part
is used for further assembly work at a rate of 500 units per day.
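A sketch of the Economic Batch Quantity calculation for this example is given below. Note that the number of working days per year (250 here) is an assumption – the example does not state it, and the answer scales with that choice:

```python
# EBQ sketch: Q = sqrt(2 co D / (ch (1 - d/p))). The 250 working days
# per year is assumed; the example does not state a figure.
import math

p = 3000                  # production rate (units per day)
d = 500                   # usage rate (units per day)
working_days = 250        # assumed working days per year
D = d * working_days      # annual demand under that assumption
co = 1200.0               # machine setup cost per batch
ch = 0.15 * 14.30         # holding cost: 15% interest on the 14.30 standard cost

ebq = math.sqrt(2 * co * D / (ch * (1 - d / p)))
print(f"EBQ = {ebq:.0f} units")   # roughly 12,955 under these assumptions
```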
Activity
Repeat the calculation for the example given above, but without looking at the subject
guide. Do you get the correct answer or not?
Probabilistic demand
Activity/Reading
For this section read Anderson, Chapter 10, section 10.6.
Example
Consider a news vendor who stands on the street and sells an evening paper, the Evening
News. He sells the paper to his customers for 35 (pence) a copy. He pays his supplier 20
(pence) a copy, but any unsold copies can be returned to the supplier and he gets 10
(pence) back. This is known as a salvage value.
Assume that his demand for copies on any day is either:
1. a rectangular (uniform) distribution with the extremes of the distribution being 60
and 80 copies per day
2. a Normal distribution with mean 100 and standard deviation 7.
How many copies should he stock?
Before we can compute the amount he should order, we need to work out
Cover and Cunder.
The cost of overestimating demand is 20 – 10 = 10, the cost he pays
minus the salvage value he gets back for an unsold copy, so Cover = 10.
To calculate Cunder, his shortage cost per unit, we ask: how much does he
lose if a customer wants a copy and he does not have a copy available?
As a first analysis he loses his profit (= revenue – cost = 35 – 20 = 15), so
we can estimate his shortage cost (opportunity cost) as 15 (this ignores
any loss of goodwill and any loss of future custom that might result from a
shortage). Hence we have that Cunder = 15.
Hence CP = Cunder/(Cunder + Cover) = 15/(15 + 10) = 0.6.
For a rectangular (uniform) distribution with the extremes of the
distribution being 60 and 80 copies per day the order quantity is
60 + 0.6(80 – 60) = 72 copies per day.
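Both order quantities can be checked with a short script. The uniform case reproduces the 72 copies computed above, while the standard library's NormalDist supplies the Normal-distribution value more precisely than statistical tables:

```python
# News vendor order quantities for the two demand distributions above.
from statistics import NormalDist

c_under = 35 - 20                   # profit lost per copy short
c_over = 20 - 10                    # cost less salvage per unsold copy
cp = c_under / (c_under + c_over)   # critical probability = 0.6

# 1. Uniform demand on [60, 80]:
q_uniform = 60 + cp * (80 - 60)
print(q_uniform)                    # 72.0

# 2. Normal demand, mean 100, standard deviation 7:
q_normal = NormalDist(mu=100, sigma=7).inv_cdf(cp)
print(round(q_normal, 2))           # about 101.77, i.e. stock 102 copies
```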
Excel solution
Look at Sheet C in the spreadsheet associated with this chapter. You will
see:
Spreadsheet 5.3
which reproduces the same calculation as we carried out above. The slight
difference between the value we calculated for the Normal distribution
and the value shown in cell D9 is due to the fact that Excel is more
accurate than we have been when looking up the appropriate value from
the Normal distribution.
Note that there is an important conceptual difference between this news
vendor’s problem and the EOQ/discount problems considered above. In
those EOQ/discount problems we had a decision problem (how much to
order) even though the situation was one of certainty – we knew precisely
the rate at which we used items. In the news vendor problem if we knew
for certain how many customers would want a paper each day then the
decision problem becomes trivial (order exactly that many). In other
words:
• for the EOQ problem we had a decision problem even though there
was no uncertainty
• for the news vendor problem it was only the uncertainty that created
the decision problem.
Comment
There are many extensions to the simple EOQ models we have considered
– for example:
• Reorder lead time – allow a lead time between placing an order and
receiving it – this introduces the problem of when to reorder (typically
at some stock level called the reorder level).
• Stockouts – we can allow stockouts (often called shortages) (i.e.
no stock currently available to meet orders).
• Often an order is not received all at once, for example if the order
comes from another part of the same factory then items may be
received as they are produced.
• Buffer (safety) stock – some stock kept back to be used only when
necessary to prevent stockouts.
Example
The production manager at SIM Manufacturing wishes to develop a materials’
requirements plan for producing chairs over an eight-week period. She estimates that
the lead time between releasing an order to the shop floor and producing a finished
chair is two weeks. The company currently has 260 chairs in stock and no safety stock
(safety stock is stock held in reserve to meet customer demand if necessary). The forecast
customer demand is for 150 chairs in week one, 70 in week three, 175 in week five, 90 in
week seven and 60 in week eight.
would each ensure that we have sufficient chairs available to meet forecast
demand in week five.
If we order these chairs earlier than week three we will be carrying extra
inventory (stock) for a number of periods and, as we know, carrying stock
costs money. It would seem appropriate therefore to order 135 chairs in
week three. This will give:
Week 1 2 3 4 5 6 7 8
Demand 150 0 70 0 175 0 90 60
On-hand at end of week 110 110 40 40 0
Order 0 0 135 ? ? ? ? ?
Table 5.7
Continuing on in the same manner we get:
Week 1 2 3 4 5 6 7 8
Demand 150 0 70 0 175 0 90 60
On-hand at end of week 110 110 40 40 0 0 −90
Order 0 0 135 ? ? ? ? ?
Table 5.8
requiring an order of 90 chairs in week five and giving:
Week 1 2 3 4 5 6 7 8
Demand 150 0 70 0 175 0 90 60
On-hand at end of week 110 110 40 40 0 0 0 0
Order 0 0 135 0 90 ? ? ?
Table 5.9
Continuing again we get:
Week 1 2 3 4 5 6 7 8
Demand 150 0 70 0 175 0 90 60
On-hand at end of week 110 110 40 40 0 0 0 -60
Order 0 0 135 0 90 ? ? ?
Table 5.10
requiring an order of 60 chairs in week six and giving:
Week 1 2 3 4 5 6 7 8
Demand 150 0 70 0 175 0 90 60
On-hand at end of week 110 110 40 40 0 0 0 0
Order 0 0 135 0 90 60 ? ?
Table 5.11
Note that we have no data given here on which to base order decisions in
weeks seven and eight. As we are at the end of the planning period these
are usually taken as zero.
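The lot-for-lot logic used above can be sketched as follows: whenever projected stock would go negative, release an order for exactly the shortfall, two weeks (the lead time) earlier:

```python
# Lot-for-lot MRP sketch for the chair example. Python lists are
# 0-indexed, so orders[2] is the order released in week three.
def plan_orders(demand, on_hand, lead_time):
    """Return order releases per week so on-hand stock never goes negative."""
    orders = [0] * len(demand)
    for week, d in enumerate(demand):
        on_hand -= d
        if on_hand < 0 and week - lead_time >= 0:
            orders[week - lead_time] = -on_hand   # order exactly the shortfall
            on_hand = 0
    return orders

demand = [150, 0, 70, 0, 175, 0, 90, 60]
print(plan_orders(demand, on_hand=260, lead_time=2))
# [0, 0, 135, 0, 90, 60, 0, 0] -- 135 in week three, 90 in week five, 60 in week six
```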
Decisions
Let us be clear what we have done here with respect to our two decisions
of:
• timing – when to order
• quantity – how much to order.
For the chair production problem considered before suppose now that the production
manager, as well as planning the production of the chair, must also plan the production
of the components that make up the chair: a seat, a back and four legs. The
lead time for seats and backs is two weeks and the lead time for legs is one week. The
company currently has an inventory of 60 seats, 40 backs and 80 legs. Scheduled receipts
are 50 seats in week one and 10 backs in week one.
Week 1 2 3 4 5 6 7 8
Demand 150 0 70 0 175 0 90 60
On-hand at end of week 110 110 40 40 0 0 0 0
Order 0 0 135 ? ? ? ? ?
Table 5.12
Now to have 135 chairs made we need to have to hand (i.e. currently
available) 135 seats, 135 backs and 4(135) = 540 legs. The current
inventory of these items (plus scheduled receipts) is insufficient, so orders
must be placed for these items. Just as we did for the chair itself above,
these orders must be phased in time so as to ensure that we never
stock out.
Now to do all this manually for chairs, seats, backs and legs would just be
too time-consuming and error-prone. It would be far better to do this via a
computer package.
Activity
Continue the above example and produce the order schedule for seats, backs and legs. Do
you now appreciate why a computer package to do this for you would be a good idea?
Can you see any disadvantages with using a computer package?
Figure 5.7
This BOM means that to produce one chair we need:
• one seat
• one back
• four legs.
The Bill of Materials can be thought of as a diagrammatic recipe. Just
as in cooking we need a list of ingredients and their quantities to know
how to cook something, the BOM tells us what we need to make a chair.
The BOM is best thought of as being divided into levels, with the final item
(the chair) being at the top level and the items needed to make up a chair
being at the second level.
Other examples may have more levels (e.g. if items at the second level
are themselves made up from further items). Plainly BOMs are structural
information that change relatively infrequently. It is also plain that any
mistakes in specifying BOMs could have disastrous consequences on the
shop floor, for example consider what would happen if we fail to note that
a particular part is needed in the production of some item.
The tactical information required in MRP relates to:
• out-going inventory (sales) and planned production (master
production schedule)
• on-hand inventory and in-coming inventory (purchases).
Below we give a diagrammatic overview of the situation.
Out-going inventory
Finished goods inventory
Sales
Figure 5.8
Given all this information then (conceptually at least) we should be able
to calculate what we should do, in terms of when to place orders with
external suppliers (or internal suppliers) and the size of those orders, so
that we never run out of stock of any item (i.e. we always achieve the
planned production and meet the sales orders).
This process of calculating the orders needed is called an MRP
EXPLOSION and produces the materials requirements (hence the name –
Materials Requirements Planning).
Just-in-time (JIT)
Just-in-time (JIT) is easy to grasp conceptually: everything happens
just-in-time. For example, consider your journey to work or school
today. You could have left your house just-in-time to catch a bus to the
train station, just-in-time to catch the train, just-in-time to arrive.
Conceptually there is no problem about this. However, achieving it in
practice is likely to be difficult!
So too, in a manufacturing operation component parts could conceptually
arrive just-in-time to be picked up by a worker and used. So we would
at a stroke eliminate any inventory of parts; they would simply arrive
just-in-time! Similarly we could produce finished goods just-in-time
to be handed to a customer who wants them. So, at a conceptual extreme,
JIT has no need for inventory or stock, either of raw materials or work in
progress or finished goods.
Obviously any sensible person will appreciate that achieving the
conceptual extreme outlined above might well be difficult, or impossible,
or extremely expensive, in real life. However, that extreme does illustrate
that, perhaps, we could move an existing system towards a system with
more of a JIT element than it currently contains.
For example, consider a manufacturing process – we might not be able
to have a JIT process in terms of handing finished goods to customers, so
we would still need some inventory of finished goods. Perhaps it might be
possible however to arrange raw material deliveries so that, for example,
materials needed for one day’s production arrive at the start of the day
and are consumed during the day – effectively reducing/eliminating the
raw material inventory.
Adopting a JIT system is also sometimes referred to as adopting a lean
production system.
Just-in-time, as the name suggests, is the philosophy of having just the
right amount of material available at precisely the right time and is based
firmly on the principle that there is no need to have any inventory. In fact
inventory is considered to be an evil. The essence of the JIT approach is an
attempt to control three principal aspects:
• idle inventories constitute a direct waste of resources, money,
materials and indirectly of the energy used in the conversion and
refining of these inventories
• storage of idle inventories is a waste of space
• defective parts are a waste of resources and energy.
Perhaps the principal aspect of JIT is that one does not achieve an
improvement in the three aspects mentioned above and then stop. The
philosophy is of a cycle, a never-ending circle of inventory cuts, quality,
product and performance improvements, leading to more cuts in inventory
and more improvements. The reasoning is that a higher emphasis on
quality gives productivity rewards in the shape of savings in reworking,
in scrap, in inspection costs and in customer warranty claims. Constant
Figure 5.9
If we reduce the inventory level then the rocks become exposed, as below.
Figure 5.10
Now the company can see the rocks (problems) and hopefully solve them
before it runs aground!
The requirements for a successful JIT system include:
• uniform final assembly schedule
• short set-up time
• low machine failure and low incidence of defects
• flexible equipment and workforce
• reliable suppliers.
The JIT approach is not universally applicable: it is particularly unsuitable
in a job shop environment where the products and schedules may
fluctuate. This type of environment is, however, appropriate for MRP.
Activity
Think of any manufacturing operations that you are aware of. Would such operations be
more suitable for JIT or MRP and why?
Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title Anderson (page number)
Ford-Otasan 406
Lowering inventory cost at Dutch companies 436
Dell computers 440
Multistage inventory planning at Deere & Company 442
Chapter 6: Markov processes
Essential reading
Anderson, Chapter 18.
Spreadsheet
markov.xls
• Sheet A: Calculations for a two state example
• Sheet B: Market share over time – two states
• Sheet C: Calculations for a three state example
• Sheet D: Market share over time – three states
• Sheet E: Calculations for an absorbing state example
• Sheet F: Market share over time – absorbing states
• Sheet G: Solution via Solver for estimation of transition matrix
This spreadsheet can be downloaded from the VLE.
Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• draw a state-transition diagram
• calculate the state of the system at any time period
• calculate the long-run system state (both for systems involving no
absorbing states and for systems involving absorbing states).
Introduction
Markov process models are typically applicable where we have a number
of different states that something can be in over time. For example, we
can think of the state as being the petrol (gas) station you buy your petrol
from and there may be a number of different such stations (or states)
from which you bought petrol last year. For systems of this kind we may
be interested in prediction questions such as ‘what is the probability that
a particular petrol station will be visited in the next month?’. Applying
a Markov process model enables us to answer such questions easily. You
will see a number of applications of Markov process models mentioned
throughout this chapter.
Solution procedure
Activity/Reading
For this section read Anderson, Chapter 18, section 18.1.
Observe that, each year, a customer can either buy K’s cereal or the
Competition’s. Hence we can construct a diagram as below where the
two circles represent the two states a customer can be in; and the arcs
represent the probability that a customer makes a transition each year
between states.
Figure 6.1
Note the circular arcs indicating a ‘transition’ from one state to the same
state. The diagram is known as the state-transition diagram (and note
that all the arcs in that diagram are directed arcs).
Following on from this diagram we can construct the transition matrix
(usually denoted by the symbol P) which tells us the probability of making
a transition from one state to another state. Letting:
State 1 = customer buying K’s cereal
and
State 2 = customer buying Competition’s cereal,
we have the transition matrix P for this problem given by:
To State
1 2
From State 1 | 0.88 0.12 |
2 | 0.15 0.85 |
Note that the sum of the elements in each row of the transition matrix is
one.
Now we know that currently K has some 25 per cent of the market. Hence
we have the row matrix representing the initial state of the system given
by:
State
1 2
[0.25, 0.75]
We usually denote this row matrix by s1 indicating the state of the system
in the first period (years in this particular example). Now Markov theory
tells us that, in period (year) t, the state of the system is given by the row
matrix st where:
st = st–1(P) = st–2(P)2 = ... = s1(P)t–1
We have to be careful here as we are doing matrix multiplication and the
order of calculation is important (i.e. st–1(P) ≠ (P)st–1 in general). To find st
we could attempt to raise P to the power t–1 directly but, in practice, it is
far easier and more informative to calculate the state of the system in each
successive year 1, 2, 3,..., t.
We already know the state of the system in year one (s1) so the state of the
system in year two (s2) is given by:
s2 = s1P = [0.25,0.75] × | 0.88 0.12 |
| 0.15 0.85 |
= [(0.25)(0.88) + (0.75)(0.15), (0.25)(0.12) + (0.75)(0.85)]
= [0.3325, 0.6675]
Note that this result makes intuitive sense (e.g. of the 25 per cent currently
buying K’s cereal, 88 per cent continue to do so whereas, of the 75 per
cent buying the competitor’s cereal, 15 per cent change to buy K’s cereal –
giving a (fractional) total of (0.25)(0.88) + (0.75)(0.15) = 0.3325 buying
K’s cereal).
Hence in year two, 33.25 per cent of the people are in State 1: that is
buying K’s cereal. Note here that, as a numerical check, the elements of st
should always total one.
In year three, the state of the system is given by:
s3 = s2P = [0.3325, 0.6675] × | 0.88 0.12 |
| 0.15 0.85 |
= [0.392725, 0.607275]
Hence in year three, 39.2725 per cent of the people are buying K’s cereal.
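The matrix calculations above can be reproduced with a few lines of code:

```python
# Reproducing st = st-1 * P for the two-state cereal example.
P = [[0.88, 0.12],
     [0.15, 0.85]]

def step(s, P):
    """One transition: the row vector s multiplied by the matrix P."""
    return [sum(s[i] * P[i][j] for i in range(len(s)))
            for j in range(len(P[0]))]

s1 = [0.25, 0.75]
s2 = step(s1, P)
s3 = step(s2, P)
print([round(x, 6) for x in s2])   # [0.3325, 0.6675]
print([round(x, 6) for x in s3])   # [0.392725, 0.607275]
```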
Activity
How does the answer calculated here compare with the answer you estimated before
(when the problem was introduced) for K’s share of the market in two years’ time?
Activity
Take the initial state and the transition matrix and repeat the calculations given above,
but without looking at the subject guide. Do you get the correct answer or not?
Now examine Sheet A in the spreadsheet associated with this chapter. You
will see there the data for this example and the market shares for K and
the Competition are calculated for you in that sheet for 20 periods – with
the change in market shares being shown graphically in Sheet B. These
sheets are reproduced below.
Spreadsheet 6.1
Figure 6.2
Insight
One of the advantages of applying a Markov process approach to a
problem is that we can gain some insight into the change in market share
over time. Of course, it would be foolish to pretend that we can accurately
predict market share in the future. However, insight is valuable. Here, for
example, from Sheet B we can see that on current trends by time period
six K’s share of the market will roughly equal that of the Competition’s and
from then on K’s market share will exceed the Competition’s market share.
As a further example suppose that, through a marketing/advertising
campaign, K could increase the loyalty of their customers and specifically
increase the transition probability from K to K by 0.01, i.e. to 0.89.
Now if K were to do this suppose we believe that the Competition will
respond with their own campaign and this will enable them to maintain
loyalty among their own customers, with the Competition to Competition
transition probability increasing from 0.85 to 0.86. In this case, will K have
a larger or smaller (or equal) market share after two years?
Using Sheet A and changing the appropriate transition probabilities we have:
Spreadsheet 6.2
indicating that this situation will not improve K’s market share in two
years – as can be seen above, this is now 38.56 per cent whereas before it
was higher – 39.27 per cent. Knowing this without the effort and expense
of a marketing campaign is obviously extremely valuable.
Long-run
Recall here that the question we originally posed above asked for K’s share
of the market in the long-term. This implies that we need to calculate st as
t becomes very large (as it approaches infinity).
The idea of the long-run is based on the assumption that, eventually, the
system reaches ‘equilibrium’ (often referred to as the ‘steady-state’) in
the sense that st = st‑1. This is not to say that transitions between states do
not take place; they do, but they ‘balance out’ so that the number in each
state remains the same.
There are two basic approaches to calculating the steady-state:
a. computational: find the steady-state by calculating st for t = 1,
2, 3,... and stop when st‑1 and st are approximately the same. This is
obviously very easy for a computer and is the approach often used by
computer packages. Indeed if you examine Sheet A you will see that
by time period 20 the share for K appears to have stabilised at around
55.5 per cent and the share for the Competition appears to have
stabilised at around 44.5 per cent. The same effect can be seen clearly
in the graph shown in Sheet B.
b. algebraic: to avoid the lengthy arithmetic calculations needed to
calculate st for t = 1, 2, 3,... we have an algebraic short-cut that can
be used. Recall that, in the steady-state, st = st‑1 (= [x1,x2] say for the
example considered above). Then as st = st‑1P we have:
[x1,x2] = [x1,x2] × | 0.88 0.12 |
| 0.15 0.85 |
(and note also that x1 + x2 = 1). Hence we have three equations which
we can solve. Note that although we have just two variables to solve,
we need to include the equation x1 + x2 = 1. If it is omitted then it
is impossible to find a unique solution (these equations are linearly
dependent).
Adopting the algebraic approach here we have the three equations:
x1 = 0.88x1 + 0.15x2
x2 = 0.12x1 + 0.85x2
x1 + x2 = 1
or
0.12x1 - 0.15x2 = 0
0.12x1 - 0.15x2 = 0
x1 + x2 = 1.
Echoing the point made above note here how the first two equations
are identical after algebraic manipulation. Hence clearly the equation
x1 + x2 = 1 is essential. Without it we could not obtain a unique
solution for x1 and x2. Solving we get:
x1 = 0.5556
x2 = 0.4444.
Hence, in the long-run, K’s market share will be 55.56 per cent.
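The algebraic short-cut can also be written in code. In the steady-state the flow from State 1 to State 2 (0.12x1) must equal the flow from State 2 to State 1 (0.15x2), and together with x1 + x2 = 1 this gives the result exactly; fractions avoid any rounding:

```python
# Exact two-state steady state via the balance equation 0.12 x1 = 0.15 x2.
from fractions import Fraction

p12 = Fraction(12, 100)   # transition probability K -> Competition
p21 = Fraction(15, 100)   # transition probability Competition -> K

x1 = p21 / (p12 + p21)    # from 0.12 x1 = 0.15 x2 and x1 + x2 = 1
x2 = 1 - x1
print(float(x1), float(x2))   # 0.5555... 0.4444...
```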
Activity
How does the answer calculated here compare with the answer you estimated before
(when the problem was introduced) for K’s share of the market in the long-run?
Activity
Take the transition matrix and repeat the calculations for the long-run given above, but
without looking at the subject guide. Do you get the correct answer or not?
Check
A useful numerical check (particularly for larger problems) is
backsubstitution, namely to substitute back. By this we mean put the final
calculated values back into the original equations to check that they are
consistent with those equations. Here the logic is clear: the values of x1 =
0.5556 and x2 = 0.4444 that we have derived by algebraic manipulation
above are, we believe, the solution to the three original equations:
x1 = 0.88x1 + 0.15x2
x2 = 0.12x1 + 0.85x2
x1 + x2 = 1.
Therefore if we substitute the values of x1= 0.5556 and x2 = 0.4444 back
into all three of these equations we should (to within rounding errors) find
that the left-hand sides and the right-hand sides of the above equations are
equal. This is a very useful check (either for two states as considered here,
or for three states as will be considered below) since if we pass this check
the solution derived must (mathematically) be the correct solution to the
original equations.
Three states
Just as we have dealt with two states here so we can expand the example
and deal with three states. Suppose K has two main competitors A and B
and that the transition matrix is as shown in Sheet C below.
Spreadsheet 6.3
You can see that Sheet C has the steady state automatically calculated. For
ease of computation in Excel this is done using a matrix method that you
do not need to know.
However you do need to be able to reproduce the values shown there
by solving in an exactly analogous fashion as you did for the two state
example above, i.e. by letting the long-term be [x1, x2, x3] and taking the
matrix equation:
[x1,x2,x3] = [x1,x2,x3] × | 0.88 0.05 0.07 |
| 0.15 0.75 0.10 |
| 0.15 0.20 0.65 |
with x1 + x2 + x3 = 1 and solving.
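You can check your algebraic solution against the computational approach (approach (a) above): iterate st = st−1P until successive states agree, as a computer package would. For a matrix like this the starting distribution does not matter:

```python
# Computational steady state for the three-state example.
P = [[0.88, 0.05, 0.07],
     [0.15, 0.75, 0.10],
     [0.15, 0.20, 0.65]]

s = [1 / 3, 1 / 3, 1 / 3]          # any starting distribution works here
while True:
    s_next = [sum(s[i] * P[i][j] for i in range(3)) for j in range(3)]
    if max(abs(a - b) for a, b in zip(s, s_next)) < 1e-12:
        break
    s = s_next

print([round(x, 4) for x in s])    # [0.5556, 0.2593, 0.1852]
```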
Activity
Solve for the long-run for this example.
Check your solution by substituting the values found back into the (four) equations you
started with.
Again we can gain some insight into the change in market share over time
– as in Sheet D below.
Figure 6.3
Here we see that the market share for K rapidly exceeds the market shares
for both A and B.
To consolidate your knowledge of Markov processes consider the following
example.
Activity/Reading
For this section read Anderson, Chapter 18, section 18.2.
Example
Suppose that the chip industry is controlled by four companies: Crispy, Crunchy, Mushy
and Scrunchy. If customers buy either Crispy or Crunchy they never buy another brand. If
they buy Mushy the probabilities that they will buy Crispy, Crunchy, Mushy and Scrunchy
next month are 0.45, 0.4, 0.05 and 0.1 respectively. If they buy Scrunchy the probabilities
that they will buy Crispy, Crunchy, Mushy and Scrunchy next month are 0.1, 0.2, 0.3 and
0.4 respectively.
a. Represent this situation on a state-transition diagram.
b. If the buyers are initially distributed as 20 per cent, 30 per cent, 30 per cent and
20 per cent for Crispy, Crunchy, Mushy and Scrunchy respectively what will be the
situation after two months?
c. What will be the long-run system state?
Activity
Try drawing the state-transition diagram for this problem yourself.
Letting:
State 1 = Crispy
State 2 = Crunchy
State 3 = Mushy
State 4 = Scrunchy
we have:
P= |1 0 0 0|
|0 1 0 0|
| 0.45 0.4 0.05 0.1 |
| 0.1 0.2 0.3 0.4 |
and s1 = [0.2, 0.3, 0.3, 0.2].
Note here that the states corresponding to Crispy and Crunchy are
absorbing states (states which, once reached, cannot be left). States
which are non-absorbing are often called transient states.
The state-transition diagram is shown below:
Figure 6.4
Activity
Try computing the state of the system in the second month s2 = s1P yourself. You should
get s2=[0.355, 0.46, 0.075, 0.11].
Activity
Try computing the state of the system in the third month s3 = s2P yourself. You should get
s3 = [0.39975, 0.512, 0.03675, 0.0515].
Letting Q be the transition matrix among the transient states (Mushy and Scrunchy):
Q = | 0.05 0.1 |
    | 0.3  0.4 |
so that
I – Q = | 0.95 –0.1 |
        | –0.3  0.6 |
and the inverse is
(I – Q)–1 = | 1.1111 0.1852 |
            | 0.5556 1.7593 |
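The long-run behaviour of this absorbing-state example can also be checked by repeated multiplication: as expected, all of the market eventually ends up with the two absorbing brands:

```python
# Long-run state of the absorbing-state example by repeated multiplication.
P = [[1.0,  0.0, 0.0,  0.0],
     [0.0,  1.0, 0.0,  0.0],
     [0.45, 0.4, 0.05, 0.1],
     [0.1,  0.2, 0.3,  0.4]]

s = [0.2, 0.3, 0.3, 0.2]
for _ in range(200):               # ample iterations for convergence here
    s = [sum(s[i] * P[i][j] for i in range(4)) for j in range(4)]

print([round(x, 4) for x in s])    # [0.4407, 0.5593, 0.0, 0.0]
```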
Activity
Take the initial state and the transition matrix and repeat the calculations for the long-run
given above, but without looking at the subject guide. Do you get the correct answer or not?
This example is also given in Sheet E below. Note here that Sheet E
includes details of the various matrices encountered in the procedure so
that if you solve another example you will have a numeric check available
in that sheet on any calculations that you do by hand.
Spreadsheet 6.4
The changes in market share over time can be seen in Sheet F below.
Figure 6.5
With more detailed data that state could be disaggregated into a number
of different states – maybe one for each competitor’s brand of cereal. If
we have n states then we need n2 transition probabilities. Estimating these
probabilities is easy if we have access to a database which tells us from
individual consumer data whether people switched or not, and if so to
what.
Also we could have different models for different segments of the market
– for example, brand switching may be different in rural areas from brand
switching in urban areas. Families with young children would obviously
constitute another important brand switching segment of the cereal
market.
Note here that if we wish to investigate brand switching in a numeric
way then transition probabilities are key. Unless we can get such
numbers nothing numeric is possible.
Consider now how, in the absence of readily available information on
brand switching as gathered by a supermarket (e.g. because we cannot
afford the price the supermarkets are asking for such information), we
might get information as to transition probabilities. One way, indeed this is
how it was done before loyalty cards existed, is to survey customers individually.
Someone physically stands outside the supermarket and asks shoppers
about their current purchases and their previous purchases. Although this
can be done, it is plainly expensive – particularly if we need to achieve a
reasonable geographic coverage that is regularly updated as time passes.
Both of the above ways of estimating transition matrices – buying
electronic information and manual surveys – cost money. There is,
however, one approach to estimating transition matrices that avoids
any such costs although, as will become apparent below, it does involve
some intellectual effort. This approach involves estimating the transition
probabilities (i.e. the entire transition matrix) from the observed market
shares. We illustrate how this can be done in an example below.
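The idea behind this approach – choose the transition matrix that best reproduces the observed market shares – can be sketched outside Excel as a small constrained least-squares fit, with an optimiser playing the role of Solver. The two-brand share data below are invented for illustration, and scipy is assumed to be available (it is not part of the course software):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical observed market shares for two brands over three periods.
observed = np.array([[0.50, 0.50],
                     [0.55, 0.45],
                     [0.58, 0.42]])

def sse(p_flat):
    """Sum of squared errors between predicted and observed shares."""
    P = p_flat.reshape(2, 2)
    shares = observed[0]
    err = 0.0
    for target in observed[1:]:
        shares = shares @ P            # predicted shares at the next period
        err += np.sum((shares - target) ** 2)
    return err

# Each row of the transition matrix must sum to 1; entries lie in [0, 1].
cons = [{"type": "eq", "fun": lambda p, i=i: p.reshape(2, 2)[i].sum() - 1.0}
        for i in range(2)]
res = minimize(sse, x0=np.full(4, 0.5), method="SLSQP",
               bounds=[(0, 1)] * 4, constraints=cons)
P_fit = res.x.reshape(2, 2)  # the 'best fit' transition matrix
```

As with the Solver model, the fitted matrix is the one that best explains the observed shares – it may or may not match what a customer survey would reveal.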
Consider Sheet G in the spreadsheet associated with this chapter. We have
a Solver model associated with this sheet and to use Solver in Excel select
Tools and then Solver. In the version of Excel I am using (different versions
of Excel have slightly different Solver formats) you will get the Solver
model in Sheet G as below:
Spreadsheet 6.5
Spreadsheet 6.6
This indicates that the transition matrix shown in C5 to E7 is the ‘best’
transition matrix we can find that explains the observed market shares.
Be clear here – what we have done above is to find, in a logical, consistent and
systematic fashion, a transition matrix that ‘best fits’ the observed market
shares – that transition matrix may (or may not) correspond to the
transition probabilities which we would find were we to survey customers
or gather electronic information in the real world.
However, the transition matrix we have derived above may give us further
insight into the situation – we can see for example that in this ‘best fit’
customers of company K appear exceptionally loyal (80 per cent remaining
with K at each transition) and this is an insight that we may not have
gained had we just looked at the observed market shares.
Comment
Any problem for which a state-transition diagram can be drawn can
be analysed using the approach given above. The advantages and
disadvantages of using Markov theory include:
• Markov theory is simple to apply and understand
• sensitivity calculations (i.e. ‘what-if’ questions) are easily carried out
• Markov theory gives us an insight into changes in the system over time
Applications
• Population modelling studies (where we have objects which ‘age’) are
an interesting application of Markov processes. One example of this
would be modelling the car market as a Markov process to forecast the
‘need’ for new cars as old cars naturally die off.
• Another example would be to model the clinical progress of patients in
hospital as a Markov process and see how their progress is affected by
different treatment regimes.
Activity
Can you think of a number of different applications of Markov processes? (Hint: any
problem for which you can draw a state-transition diagram can be analysed using Markov
processes.)
Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title                                            Anderson (page number)
Benefit of health care services                  Chapter 18, p.2
Managing credit card credit limits in Bank One   Chapter 18, p.17
Chapter 7: Mathematical programming – formulation
Essential reading
Anderson, Chapter 2, sections start–2.1; Chapter 4 (formulations only);
Chapter 15, sections start–15.1, 15.3 and 15.4 (formulations only).
Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• use a structured approach to formulating a decision problem as a
mathematical program
• formulate a decision problem either as a linear program, or as an
integer program, or as a mixed-integer program, as appropriate.
Introduction
You will recall that in Chapter 1 we gave a formulation (a precise
mathematical statement) of the Two Mines problem as a linear program.
In the real world, there are a number of application areas that have
problems which can also be formulated as linear programs. You will see
some of these areas mentioned in this chapter.
Linear programming is one of the most widely used OR techniques. This is
because it is a generic technique, not just applicable in one specific
problem area, but applicable across a range of problem areas.
We hope that you will come to realise that, although the formulation of
linear programs can be demanding, the benefits to be gained are high –
both in terms of a clear understanding/exposition of the problem and in
terms of what can be achieved numerically (i.e. the problem can be solved
numerically to give the ‘best possible’ answer). For these reasons we would
urge you to persevere with this chapter, even if you initially find some of
the mathematics daunting.
This chapter concentrates upon the formulation of mathematical programs
– specifically linear programs and integer programs. Integer programs are
very like linear programs except that some of the variables are restricted to
have integer (discrete) values.
Overview
You will recall from the Two Mines example that the conditions for a
mathematical model to be a linear program (LP) were:
Blending problem
Consider the example of a manufacturer of animal feed who is producing
feed mix for dairy cattle. In our simple example the feed mix contains two
active ingredients and a filler to provide bulk. One kilogram (kg) of feed
mix must contain a minimum quantity of each of four nutrients as below:
Nutrient            A    B    C    D
Minimum (grams)    90   50   20    2
The ingredients have the following nutrient values and cost:
                          A     B    C    D   Cost/kg
Ingredient 1 (gram/kg)   100    80   40   10    40p
Ingredient 2 (gram/kg)   200   150   20    –    60p
What should be the amounts of active ingredients and filler in one kg of
feed mix?
Variables
In order to solve this problem it is best to think in terms of one kg of feed
mix. That kg is made up of three parts – ingredient 1, ingredient 2 and
filler so let:
x1 = amount (kg) of ingredient 1 in one kg of feed mix
x2 = amount (kg) of ingredient 2 in one kg of feed mix
x3 = amount (kg) of filler in one kg of feed mix
where x1 ≥ 0, x2 ≥ 0 and x3 ≥ 0.
Constraints
a. Balancing constraint (an implicit constraint due to the definition of
the variables)
x1 + x2 + x3 = 1
which says that one kg of feed mix must be made up (precisely) from
the two ingredients and filler.
b. Nutrient constraints
100x1 + 200x2 ≥ 90 (nutrient A)
80x1 + 150x2 ≥ 50 (nutrient B)
40x1 + 20x2 ≥ 20 (nutrient C)
10x1 ≥ 2 (nutrient D).
Here for nutrient A, for example, 100x1 + 200x2 is the number of grams
of nutrient A we have in one kg of feed mix when we blend together x1
kilograms of ingredient one and x2 kilograms of ingredient two.
Note the use of an inequality rather than an equality in these
constraints, following the rule we put forward in the Two Mines
example that, given a choice between an equality and an
inequality, choose the inequality. Here this implies that the
nutrient levels we want are lower limits on the amount of nutrient in
one kg of feed mix.
Objective
The objective is to minimise cost, that is:
minimise 40x1 + 60x2
which gives us our complete LP model for the blending problem.
Obvious extensions/uses for this LP model include:
• increasing the number of nutrients considered
• increasing the number of possible ingredients considered – more
ingredients can never increase the overall cost (other things being
unchanged), and may lead to a decrease in overall cost
• placing both upper and lower limits on nutrients
• dealing with cost changes
• dealing with supply difficulties
• filler cost.
Blending problems of this type were, in fact, some of the earliest
applications of LP (for human nutrition during rationing) and are still
widely used in the production of animal feedstuffs.
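Although this chapter concentrates on formulation, it may help to see that a model like this can be passed directly to an LP solver. A minimal sketch using scipy (not part of the course software, shown purely as an illustration) is:

```python
from scipy.optimize import linprog

# Blending LP: x1, x2 = kg of ingredients 1 and 2, x3 = kg of filler, per kg of mix.
c = [40, 60, 0]                # cost (pence) per kg of each component

# linprog wants A_ub @ x <= b_ub, so each >= nutrient constraint is negated.
A_ub = [[-100, -200, 0],       # nutrient A: 100x1 + 200x2 >= 90
        [-80, -150, 0],        # nutrient B:  80x1 + 150x2 >= 50
        [-40, -20, 0],         # nutrient C:  40x1 +  20x2 >= 20
        [-10, 0, 0]]           # nutrient D:  10x1         >=  2
b_ub = [-90, -50, -20, -2]

A_eq = [[1, 1, 1]]             # the three components make exactly 1 kg
b_eq = [1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
# res.x ≈ (0.367, 0.267, 0.367) kg, minimum cost ≈ 30.7p per kg of feed mix
```

Note that the balancing (equality) constraint goes in separately from the nutrient inequalities, exactly mirroring the structure of the formulation above.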
Activity
Suppose now there is a fifth nutrient E, where ingredient 1 contains five gram/kg of E
and ingredient 2 contains 15 gram/kg of E. The amount of E in the final feed mix must lie
between one and three grams. What would the formulation of the problem now be?
Activity
Consider these two problems by yourself for 10 minutes. What answers do you come up
with? What are the associated profits? Write your answers here for later reference.
Variables
Let:
xi be the number of units of variant i (i = 1, 2, 3, 4) made per year
Tass be the number of minutes used in assembly per year
Tpol be the number of minutes used in polishing per year
Tpac be the number of minutes used in packing per year
where xi ≥ 0 i = 1,2,3,4 and Tass, Tpol, Tpac ≥ 0.
Constraints
a. Operation time definition
Tass = 2x1 + 4x2 + 3x3 + 7x4 (assembly)
Tpol = 3x1 + 2x2 + 3x3 + 4x4 (polish)
Tpac = 2x1 + 3x2 + 2x3 + 5x4 (pack)
b. Operation time limits: the operation time limits depend upon the
situation being considered. In the first situation, where the maximum
time that can be spent on each operation is specified, we simply have:
Tass ≤ 100,000 (assembly)
Tpol ≤ 50,000 (polish)
Tpac ≤ 60,000 (pack)
In the second situation, where the only limitation is on the total time spent
on all operations, we simply have:
Tass + Tpol + Tpac ≤ 210,000 (total time)
Objective
The objective presumably is to maximise profit, hence we have:
maximise 1.5x1 + 2.5x2 + 3.0x3 + 4.5x4
which gives us the complete formulation of the problem.
Activity
Consider this problem by yourself for a while. How easy do you think it would be to arrive
at a good (minimum cost) production and storage schedule?
Variables
The decisions that need to be made relate to the amount to produce in
normal/overtime working each period. Hence let:
xt = number of units produced by normal working in period t
(t = 1, 2, 3, 4), where xt ≥ 0
yt = number of units produced by overtime working in period t
(t = 1, 2, 3, 4) where yt ≥ 0
In fact, for this problem, we also need to decide how much stock we carry
over from one period to the next so let:
It = number of units in stock at the end of period t (t = 0, 1, 2, 3, 4)
Constraints
• production limits
xt ≤ 100 t = 1, 2, 3, 4
y1 ≤ 60
y2 ≤ 65
y3 ≤ 70
y4 ≤ 60
• limit on space for stock carried over
It ≤ 70 t = 1, 2, 3, 4
• we have an inventory continuity equation of the form
closing stock = opening stock + production – demand
then assuming
opening stock in period t = closing stock in period t-1 and
that production in period t is available to meet demand in period t
we have that
I1 = I0 + (x1 + y1) – 130
I2 = I1 + (x2 + y2) – 80
I3 = I2 + (x3 + y3) – 125
I4 = I3 + (x4 + y4) – 195
where I0 = 15.
Note here that inventory continuity equations of the type shown above are
common in production planning problems involving more than one time
period. Essentially the inventory variables (It) and the inventory continuity
equations link together the time periods being considered and represent a
physical accounting for stock.
• demand must always be met (i.e. no ‘stock-outs’). This is equivalent to
saying that the opening stock in period t plus the production in period
t must be greater than (or equal to) the demand in period t, i.e. we
have the constraints:
I0 + (x1 + y1) ≥ 130
I1 + (x2 + y2) ≥ 80
I2 + (x3 + y3) ≥ 125
I3 + (x4 + y4) ≥ 195.
However, these constraints can be viewed in another way. Considering the
inventory continuity equations above these constraints which ensure that
demand is always met can be rewritten as:
I1 ≥ 0
I2 ≥ 0
I3 ≥ 0
I4 ≥ 0.
Objective
To minimise cost – which consists of the cost of ordinary working plus the
cost of overtime working plus the cost of carrying stock over (1.5K per
unit). Hence the objective is:
minimise
(6x1 + 4x2 + 8x3 + 9x4) + (8y1 + 6y2 + 10y3 + 11y4) + (1.5I0 + 1.5I1 +
1.5I2 + 1.5I3 + 1.5I4)
Note here that we have assumed that if we get an answer involving
fractional variable values this is acceptable (as the number of units
required each period is reasonably large, this should not cause too many
problems).
Note:
• As discussed above, assuming It ≥ 0 t = 1, 2, 3, 4 means ‘no stock-outs’
(i.e. we need a production plan in which sufficient is produced to
ensure that demand is always satisfied).
• Allowing It (t = 1, 2, 3, 4) to be unrestricted (positive or negative)
means that we may end up with a production plan in which demand is
unsatisfied in period t (It < 0). This unsatisfied demand will be carried
forward to the next period (when it will be satisfied if production is
sufficient, carried forward again otherwise).
• If It is allowed to be negative then we need to amend the objective to
ensure that we correctly account for stock-holding costs (and possibly
to account for stock-out costs).
• If we get a physical loss of stock over time (e.g. due to damage,
pilferage, etc) then this can be easily accounted for. For example if we
lose (on average) 2 per cent of stock each period then multiply the
right-hand side of the inventory continuity equation by 0.98. If this is
done then we often include a term in the objective function to account
financially for the loss of stock.
• If production is not immediately available to meet customer demand
then the appropriate time delay can be easily incorporated into the
inventory continuity equation. For example a two-period time delay for
the problem dealt with above means replacing (xt + yt) in the inventory
continuity equation for It by (xt-2 + yt-2).
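The complete production/inventory LP above – production variables, overtime variables, inventory continuity equations, limits and objective – can be assembled and solved programmatically. The sketch below uses scipy (not part of the course software) purely as an illustration; the constant holding-cost term 1.5I0 is added back after solving:

```python
from scipy.optimize import linprog

# Variable order: x1..x4 (normal), y1..y4 (overtime), I1..I4 (closing stock).
# I0 = 15 is a constant; its holding cost 1.5 * 15 is added at the end.
demand = [130, 80, 125, 195]
c = [6, 4, 8, 9] + [8, 6, 10, 11] + [1.5] * 4

# Inventory continuity: It - I(t-1) - xt - yt = -demand_t  (I0 moved to the RHS).
A_eq, b_eq = [], []
for t in range(4):
    row = [0.0] * 12
    row[t] = -1.0          # xt
    row[4 + t] = -1.0      # yt
    row[8 + t] = 1.0       # It
    if t > 0:
        row[8 + t - 1] = -1.0  # I(t-1)
    A_eq.append(row)
    b_eq.append(-demand[t] + (15 if t == 0 else 0))

# Production limits, overtime limits and the 70-unit stock-space limit;
# the lower bound of 0 on each It enforces 'no stock-outs'.
bounds = [(0, 100)] * 4 + [(0, 60), (0, 65), (0, 70), (0, 60)] + [(0, 70)] * 4

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
total_cost = res.fun + 1.5 * 15  # add the constant holding cost on I0
```

Note how the inventory continuity equations are the only place the time periods are linked together – exactly the 'physical accounting for stock' described above.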
In practice we would probably deal with the situation described above on
a ‘rolling horizon’ basis in that we would get an initial production plan
based on current data and then, after one time period (say), we would
update our LP and resolve to get a revised production plan. In other words,
even though we plan for a specific time horizon, here four months, we
would only ever implement the plan for the first month, so that we are
always adjusting our four-month plan to take account of future conditions
as our view of the future changes.
Integer programming
Activity/Reading
For this section read Anderson, Chapter 15, sections start–15.1, 15.3 and 15.4
(formulations only).
IPs occur frequently because many decisions are essentially discrete (such
as yes/no, go/no-go) in that one (or more) options must be chosen from a
finite set of alternatives.
Note here that problems in which some variables can take only integer
values and some variables can take fractional values are called mixed-
integer programs (MIPs).
As with formulating LPs, the key to formulating IPs is practice. Although
there are a number of standard ‘tricks’ available to cope with situations
that often arise in formulating IPs it is probably true to say that
formulating IPs is a much harder task than formulating LPs.
We consider an example integer program below.
Variables
Here we are trying to decide whether to undertake a project or not (a ‘go/
no-go’ decision). One ‘trick’ in formulating IPs is to introduce variables
which take the integer values 0 or 1 and represent binary decisions
(e.g. do a project or not do a project) with typically:
• the positive decision (do something) being represented by the value 1
• the negative decision (do nothing) being represented by the value 0.
Constraints
The constraints relating to the availability of capital funds each year are:
0.5x1 + 1.0x2 + 1.5x3 + 0.1x4 ≤ 3.1 (Year 1)
0.3x1 + 0.8x2 + 1.5x3 + 0.4x4 ≤ 2.5 (Year 2)
0.2x1 + 0.2x2 + 0.3x3 + 0.1x4 ≤ 0.4 (Year 3).
Objective
To maximise the total return – hence we have:
maximise 0.2x1 + 0.3x2 + 0.5x3 + 0.1x4.
This gives us the complete IP, which we write as:
maximise 0.2x1 + 0.3x2 + 0.5x3 + 0.1x4
subject to 0.5x1 + 1.0x2 + 1.5x3 + 0.1x4 ≤ 3.1 (Year 1)
0.3x1 + 0.8x2 + 1.5x3 + 0.4x4 ≤ 2.5 (Year 2)
0.2x1 + 0.2x2 + 0.3x3 + 0.1x4 ≤ 0.4 (Year 3)
xi = 0 or 1 (i = 1, 2, 3, 4).
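With only four zero-one variables this IP is small enough to be checked by enumerating all 2^4 = 16 go/no-go combinations – a useful sanity check, although real IPs are far too large for this. A sketch in Python:

```python
from itertools import product

returns = [0.2, 0.3, 0.5, 0.1]            # return from each project
capital = [[0.5, 1.0, 1.5, 0.1],          # capital required in year 1
           [0.3, 0.8, 1.5, 0.4],          # year 2
           [0.2, 0.2, 0.3, 0.1]]          # year 3
budget = [3.1, 2.5, 0.4]                  # capital available each year

best, best_x = -1.0, None
for x in product([0, 1], repeat=4):       # all 16 go/no-go combinations
    # feasible if every year's spend is within budget (small float tolerance)
    feasible = all(sum(c * xi for c, xi in zip(row, x)) <= b + 1e-9
                   for row, b in zip(capital, budget))
    total = sum(r * xi for r, xi in zip(returns, x))
    if feasible and total > best:
        best, best_x = total, x

# best_x = (0, 0, 1, 1): undertake projects 3 and 4, for a total return of 0.6
```

Note how the year 3 constraint is the one that really bites – it rules out the higher-return combinations involving project 3 together with projects 1 or 2.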
Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title                                                           Anderson (page number)
The Kellogg Company 35
Optimising production planning at Jan de Wit Company, Brazil 56
Using linear programming for traffic control 71
Assigning products to worldwide facilities at Eastman Kodak 87
Evaluating options for the provision of school meals in Chile 96
The Nutricia dairy and drinks group, Hungary 114
Tea production and distribution in India 119
A marketing planning model at Marathon Oil Company 140
Scheduling the orange harvest in Brazil 152
Pilot staffing and training at Continental Airlines 157
A marketing resource allocation model at Reckitt and Coleman 170
Optimal lease structuring at GE Capital 175
Revenue management at National Car Rental 179
Crew scheduling at Air New Zealand Chapter 18 (online), p.3
Aluminium can production at Valley Metal Container Chapter 18, p.4
BMW’s global production network Chapter 18, p.30
Customer order allocation model at Ketron Chapter 18, p.31
Optimising rental vehicles
www.cmis.csiro.au/or/Clients/thl.htm
Rail crew rostering
www.cmis.csiro.au/or/rostering/railtex.htm
Chapter 8: Linear programming – solutions
Essential reading
Anderson, Chapter 2, Chapter 3.
Spreadsheet
lp.xls
• Sheet A: Solution of problem via Solver where time for assembly,
polishing and packing is individually constrained
• Sheet B: Solution of problem via Solver where time for assembly,
polishing and packing is constrained in total.
This spreadsheet can be downloaded from the VLE.
Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• solve Linear Programming problems (LPs) involving two variables
graphically via use of an iso-cost/iso-profit line
• interpret solution output for LPs and use information contained in
such output for the purposes of sensitivity analysis
• explain opportunity cost (reduced cost) and calculate it for any
variable in a LP that has been solved graphically
• explain shadow price and calculate it for any constraint in a LP that
has been solved graphically
• appreciate the areas where large LP problems arise.
Introduction
In this chapter we move from the mathematics we have been considering
in the previous chapter to some numbers, specifically to obtaining numeric
solutions for two of the LPs we formulated previously.
As previously stated, although the formulation of LPs can be
demanding, the benefits to be gained are high – both in terms of a clear
understanding/exposition of the problem and in terms of what can be
achieved numerically.
Activity/Reading
For this section read Anderson, Chapter 2.
To get some insight into solving LPs, consider the Two Mines problem
again – the LP formulation of the problem was:
minimise 180x + 160y
subject to 6x + 1y ≥ 12
3x + 1y ≥ 8
4x + 6y ≥ 24
x ≤ 5
y ≤ 5
x, y ≥ 0
Since there are only two variables in this LP problem we have the graphical
representation of the LP given below with the feasible region (region of
feasible solutions to the constraints associated with the LP) outlined.
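Quite separately from the graphical method (which is the method you are expected to use for two-variable LPs), the Two Mines LP can be checked numerically with an LP solver. A sketch using scipy, shown purely as a check on hand calculations:

```python
from scipy.optimize import linprog

# Two Mines LP; linprog wants A_ub @ x <= b_ub, so >= constraints are negated.
c = [180, 160]
A_ub = [[-6, -1],    # 6x + y >= 12
        [-3, -1],    # 3x + y >= 8
        [-4, -6]]    # 4x + 6y >= 24
b_ub = [-12, -8, -24]

# The x <= 5 and y <= 5 constraints become simple variable bounds.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 5), (0, 5)])
# res.x = (12/7, 20/7) with minimum cost 5360/7 ≈ 765.71
```

The optimal cost of approximately 765.71 is the figure quoted in the Activity below.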
Figure 8.1
To draw the diagram above we turn all inequality constraints into
equalities and draw the corresponding lines on the graph (e.g. the
constraint 6x + y ≥ 12 becomes the line 6x + y = 12 on the graph). Once
a line has been drawn then it is a simple matter to work out which side
of the line corresponds to all feasible solutions to the original inequality.
Activity
Look back to Chapter 1 where the Two Mines problem was introduced. At that point you
were asked to find your own answer to the problem and to record it. Compare the answer
you found then with the answer given above.
How much more expensive was your answer (i.e. what is the value of 100 (your answer –
765.71)/765.71)? What do you conclude about the advantages of linear programming?
Activity
Suppose that the costs per day for mine X and mine Y were to change to 200 and 250
respectively. What would be the new optimal solution?
It is clear that the above graphical approach to solving LPs can be used
for LPs with two variables but (alas) most LPs have more than two
variables. This brings us to the simplex algorithm for solving LPs which is
considered below. However, first a word of warning:
Warning
You may be aware that an alternative approach to solving a two-variable
linear program is via the corner point method. For the purposes of this subject
guide, and any examination question relating to this course, use of this corner
point method is not acceptable. We shall always expect you to solve a two-
variable linear program via use of an iso-cost/iso-profit line, not via use of the
corner point method.
Simplex
Note that in the example considered above the optimal solution to the
LP occurred at a vertex (corner) of the feasible region. In fact it is true
that for any LP (not just the one considered above) the optimal solution
occurs at a vertex of the feasible region. This fact is the key to the simplex
algorithm for solving LPs.
Essentially the simplex algorithm starts at one vertex of the feasible region
and moves (at each iteration) to another (adjacent) vertex, improving (or
leaving unchanged) the objective function as it does so, until it reaches the
vertex corresponding to the optimal LP solution.
The simplex algorithm for solving LPs was developed by George Dantzig
in the late 1940s and since then a number of different versions of the
algorithm have been developed. One of these later versions, called the
revised simplex algorithm (sometimes known as the ‘product form of
the inverse’ simplex algorithm) forms the basis of most modern computer
packages for solving LPs.
Although the basic simplex algorithm is relatively easy to understand and
use, it is widely available in the form of computer packages, and therefore
we have not set out its details here. Instead we shall focus on the output
from a simplex-based LP package.
Recall the production planning problem concerned with four variants of
the same product which we formulated before as an LP. To remind you of it
we repeat below the problem and our formulation of it.
Constraints
a. Operation time definition
Tass = 2x1 + 4x2 + 3x3 + 7x4 (assembly)
Tpol = 3x1 + 2x2 + 3x3 + 4x4 (polish)
Tpac = 2x1 + 3x2 + 2x3 + 5x4 (pack)
b. Operation time limits: the operation time limits depend upon the
situation being considered. In the first situation, where the maximum
time that can be spent on each operation is specified, we simply have:
Tass ≤ 100,000 (assembly)
Tpol ≤ 50,000 (polish)
Tpac ≤ 60,000 (pack).
In the second situation, where the only limitation is on the total time spent
on all operations, we simply have:
Tass + Tpol + Tpac ≤ 210,000 (total time)
Objective
The objective presumably is to maximise profit; hence, we have:
maximise 1.5x1 + 2.5x2 + 3.0x3 + 4.5x4
which gives us the complete formulation of the problem.
Excel solution
Activity/Reading
For this section read Anderson, Chapter 3, (computer solutions only).
Spreadsheet 8.1
Here the values in cells B2 to B5 are how much of each variant we choose
to make – here set to zero. Cells C6 to E6 give the total assembly/polishing
and packing time used and cell F6 the total profit associated with the
amount we choose to produce.
To use Solver in Excel select Tools and then Solver. In the version of
Excel I am using (different versions of Excel have slightly different Solver
formats) you will get the Solver model as below:
Spreadsheet 8.2
Here our target cell is F6 (ignore the use of $ signs here – that is a
technical Excel issue if you want to go into it in greater detail) which we
wish to maximise. We can change cells B2 to B5 – i.e. the amount of each
variant we produce subject to the constraint that C6 to E6 – the total
amount of assembly/polishing/packing used cannot exceed the limits
given in C7 to E7.
In order to tell Solver we are dealing with a linear program, click on
Options in the Solver box and you will see:
Spreadsheet 8.3
where both the ‘Assume Linear Model’ and ‘Assume Non-Negative’ boxes
are ticked – indicating we are dealing with a linear model with non-
negative variables.
Solving via Solver the solution is:
Spreadsheet 8.4
We can see that the optimal solution to the LP has value 58,000 (£) and
that Tass = 82,000, Tpol = 50,000, Tpac = 60,000, X1 = 0, X2 = 16,000, X3 =
6,000 and X4 = 0.
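The same solution can be reproduced outside Excel. As an illustrative sketch (assuming scipy is available – it is not part of the course software), note that maximising profit is the same as minimising its negative:

```python
from scipy.optimize import linprog

# Four-variant production LP; maximise profit = minimise the negated profits.
profit = [1.5, 2.5, 3.0, 4.5]
A_ub = [[2, 4, 3, 7],    # assembly minutes per unit of each variant
        [3, 2, 3, 4],    # polishing minutes
        [2, 3, 2, 5]]    # packing minutes
b_ub = [100_000, 50_000, 60_000]

res = linprog([-p for p in profit], A_ub=A_ub, b_ub=b_ub)
# optimal profit = -res.fun = 58,000 with x2 = 16,000 and x3 = 6,000,
# matching the Solver solution above
```

The operation time variables Tass, Tpol and Tpac have been substituted out here – their defining equations simply become the rows of A_ub.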
Activity
How can you explain (in words) the fact that it appears that the best thing to do is not to
produce any of the variant with the lowest profit per unit?
Activity
How can you explain (in words) the fact that it appears that the best thing to do is not to
produce any of the variant with the highest profit per unit?
Activity
Suppose you employ extra workers giving you extra time available. Would you assign
these workers to assembly, polishing or packing and if so, why?
Second situation
For the second situation given in the question, where the only limitation
is on the total time spent on all operations examine Sheet B in the
spreadsheet associated with this chapter.
Invoking Solver in that sheet you will see:
Spreadsheet 8.5
where cell C7 is the total amount of processing time used and the only
constraint in Solver relates to that cell not exceeding the limit of 210,000
shown in cell C8. Note here that if you check Options in Solver here you
will see that both the ‘Assume Linear Model’ and ‘Assume Non-Negative’
boxes are ticked.
Solving we get:
Spreadsheet 8.6
We can see that the optimal solution to the LP has value 78,750 (£) and
that Tass = 78,750, Tpol = 78,750, Tpac = 52,500, X1 = 0, X2 = 0, X3 = 26,250
and X4 = 0. This implies that we only produce variant 3.
Note here how much higher the associated profit is than before (£78,750
compared with £58,000, an increase of 36 per cent!). This indicates
that, however the allocation of 100,000, 50,000 and 60,000 minutes for
assembly, polishing and packing respectively was arrived at, it was a bad
decision!
Activity
Look back to Chapter 7 where this production planning problem was introduced. At that
point you were asked to find your own answer to the problem and to record it. Compare
the answer you found then with the answer given above.
How much less profitable was your answer (i.e. what is the value of 100(78750 – your
answer)/78750)? What do you conclude about the advantages of LP?
Problem sensitivity
Activity/Reading
For this section read Anderson, Chapter 3, sections 3.1–3.3.
Problem sensitivity refers to how the solution changes as the data change.
Two issues are important here:
• robustness
• planning.
We deal with each of these in turn.
Robustness
In reality, data are never completely accurate and so we would like some
confidence that any proposed course of action is relatively insensitive
(robust) with respect to data inaccuracies. For example, for the production
planning problem dealt with before, how sensitive is the optimal solution
with respect to slight variations in any particular data item?
For example, consider the packing time consumed by variant 3. It is
currently set to exactly 2 minutes, i.e. 2.0000000. But suppose it is really
2.1, what is the effect of this on what we are proposing to do?
What is important here is what you might call ‘the shape of the
strategy’ rather than the specific numeric values. Look at the solution of
value 58,000 we had before. The shape of the strategy there was ‘none
of variant 1 or 4, lots of variant 2 and a reasonable amount
of variant 3’. The aim is that, when we resolve with the figure of 2 for
packing time consumed by variant 3 replaced by 2.1, this general shape
remains the same. We should be concerned if we get a very different shape
(e.g. produce variants 1 and 4 only).
If the general shape of the strategy remains essentially the same under
(small) data changes we say that the strategy is robust.
If we take Sheet A again, change the figure of 2 for packing time
consumed by variant 3 to 2.1 and resolve, we get the following:
Spreadsheet 8.7
This indicates that for these data changes the strategy is robust.
Planning
With regard to planning we may be interested in seeing how the solution
changes as the data change (e.g. over time). For example, for the
production planning problem dealt with before (where the solution was
of value 58,000 involving production of variants 2 and 3) how would
increasing the profit per unit on variant 4 (e.g. by 10 per cent to 4.95 by
raising the price) impact upon the optimal solution?
Again taking Sheet A, making the appropriate change and resolving, we
get:
Spreadsheet 8.8
indicating that if we were able to increase the profit per unit on variant 4
by 10 per cent to 4.95, it would be profitable to make that variant in the
quantities shown above.
There is one thing to note here – namely that we have a fractional solution
X3 = 1,428.571 and X4 = 11,428.57. Recall that we have an LP – for which a
defining characteristic is that the variables are allowed to take fractional
values. Up to now for this production planning problem we had not
seen any fractional values when we solved numerically – here we do. Of
course in reality, given that the numbers are large, there is no practical
significance to these fractions and we can equally well regard the solution
as being a conventional integer (non-fractional) solution such as X3 = 1429
and X4 = 11429.
Approach
The approach taken both for robustness and planning issues is identical,
and is often referred to as sensitivity analysis.
It turns out that, as a by-product of solving a linear program, we
Spreadsheet 8.9
but where now we have highlighted (clicked on) two of the reports
available – Answer and Sensitivity. Click OK. You will find that two new
sheets have been added to the spreadsheet – an Answer Report and a
Sensitivity Report.
As these reports are indicative of the information that is commonly
available when we solve a LP via a computer we shall deal with each of
them in turn.
Answer report
The answer report can be seen below:
Tight (binding) constraints are those satisfied
with equality at the LP optimal. Constraints which are not tight are called
loose or not binding.
Sensitivity report
The sensitivity report can be seen below:
Activity
Perform the same analysis for x1, x3 and x4 as was done for x2 above.
Variable                               x1                  x4
Reduced Cost (ignore sign)             1.5                 0.2
New value (= or ≥)                     x1 = A or x1 ≥ A    x4 = B or x4 ≥ B
Estimated objective function change    1.5A                0.2B
Table 8.2
The objective function will always get worse (go down if we have a
maximisation problem, go up if we have a minimisation problem) by
at least this estimate. The larger A or B are, the more inaccurate this
estimate is of the exact change that would occur if we were to resolve the
LP with the corresponding constraint for the new value of x1 or x4 added.
If the change is small, however, then it is commonly observed that the
objective function change is exactly the same as the estimate.
Activity
If exactly 100 of variant one were to be produced what would be your estimate of the
new objective function value?
Note here that the value in the Reduced Cost column for a variable is often
called the ‘opportunity cost’ for the variable.
An alternative (and equally valid) interpretation of the
reduced cost is that it is an estimate of the minimum amount
by which the objective function coefficient for a variable
needs to change before that variable will become non-zero.
Hence for variable x1 the objective function needs to change by 1.5
(increase since we are maximising) before that variable becomes non-zero.
In other words, referring back to our original situation, the profit per unit
on variant 1 would need to increase by 1.5 before it would be profitable to
produce any of variant 1. Similarly the profit per unit on variant 4 would
need to increase by 0.2 before it would be profitable to produce any of
variant 4.
If a variable takes the value zero at the LP optimum then you could adopt this approach to calculate
the reduced cost (opportunity cost) for that variable.
Note here that if a variable takes a value different from
zero in the LP optimum solution then its opportunity cost is
defined to be zero.
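As a concrete illustration of reading reduced costs from solver output, here is a minimal sketch using Python's scipy on a small made-up LP (the data are hypothetical, not the variant-production problem above):

```python
from scipy.optimize import linprog

# Hypothetical two-variable product-mix LP (illustrative data only):
#   maximise 3*x1 + 2*x2  subject to  x1 + x2 <= 4,  x1 + 3*x2 <= 6
# linprog minimises, so we negate the objective coefficients.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]], b_ub=[4, 6],
              bounds=[(0, None), (0, None)], method="highs")

print(res.x)                # optimal plan: x1 = 4, x2 = 0
# x2 is zero at the optimum; its reduced cost is the marginal of its
# lower bound: the profit per unit of x2 must rise by at least this
# amount before producing any x2 becomes worthwhile.
print(res.lower.marginals)  # x2's reduced cost is 1 (x1's is 0)
```

Here the profit on x2 would need to rise from 2 to above 3 before x2 entered the solution, exactly the interpretation of the reduced cost given above.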
Activity
• If you had an extra 100 hours, to which operation would you assign it?
• If you had to take 50 hours away from polishing or packing, which one would you choose?
• What would the new objective function value be in these two cases?
The value in the column headed Shadow Price for a constraint is often
called the ‘marginal value’ or ‘dual value’ for that constraint.
This may be relevant if (for example) you have a two variable linear program
which you have solved graphically. In this two variable linear program you
could adopt this approach to calculate the shadow price for a constraint. Note
that, for the purposes of this manual approach, we do not expect you to find
the limits within which a shadow price is valid.
Note that, as would seem logical, if the constraint is loose in the
current LP optimum then its shadow price is defined to be zero
(if the constraint is loose a small change in the right-hand side
cannot alter the optimal solution).
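The zero shadow price of a loose constraint can be seen directly in solver output. A minimal sketch using Python's scipy on a small made-up LP (hypothetical data, not the problem above):

```python
from scipy.optimize import linprog

# Hypothetical LP:  maximise 3*x1 + 2*x2
#   subject to  x1 +   x2 <= 4   (binding at the optimum (4, 0))
#               x1 + 3*x2 <= 6   (loose: 4 < 6)
res = linprog(c=[-3, -2], A_ub=[[1, 1], [1, 3]], b_ub=[4, 6],
              bounds=[(0, None)] * 2, method="highs")

# For a maximisation solved as a minimisation, the shadow prices are the
# negated marginals of the inequality constraints: one extra unit of the
# first resource is worth 3, while the loose constraint has shadow price 0.
shadow_prices = -res.ineqlin.marginals
print(shadow_prices)  # [3, 0]
```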
Activity
For the Two Mines problem which you solved graphically calculate the shadow prices for
the high, medium and low grade ore constraints.
Comments
Much of the information available as a by-product of the solution of the LP
problem can be useful to management in estimating the effect of changes
(e.g. changes in costs, production capacities, etc.) without going to the
trouble/expense of resolving the LP.
Note that, as mentioned above, the analysis given above relating to how the
LP solution changes is only valid for a single data change. If two (or more)
data changes are made the situation becomes more complex and it becomes
advisable to resolve the LP.
Here a 0 in a column indicates that the flight leg is not part of the crew
schedule, a 1 that the flight leg is part of the crew schedule. Usually a crew
schedule ends up with the crew returning to their home base, e.g. A−D and
D−A in crew schedule 1 above. A crew schedule such as 2 above (A−B and
B−C) typically includes as part of its associated cost the cost of returning
the crew (as passengers) to their base. Such carrying of crew as passengers
(on their own airline or on another airline) is called deadheading.
LP is used as part of the solution process for this crew scheduling problem
for two main reasons:
• A manual approach to crew scheduling problems of this size is simply
impractical: you may get a schedule, but its cost is likely to be far from
minimal.
• A systematic approach to minimising cost can result in huge cost
savings (e.g. even a small percentage saving can add up to tens of
millions of dollars).
Summary
To summarise, there are people in the real world with large LP problems
to solve. What appears to be happening currently is that advances in
solution technology (hardware, software and algorithms) are making users
aware that large problems can be tackled. This in turn is generating
demand for further improvements in solution technology.
Case studies
The case studies associated with this chapter are the same as those
associated with Chapter 7. We would encourage you to read them.
Chapter 9: Data envelopment analysis
Essential reading
Anderson, Chapter 4, Section 4.6.
Spreadsheet
dea.xls
• Sheet A: Calculation of efficiency using Solver.
• Sheet B: Calculation of efficiency using Solver with a value judgement
constraint added.
This spreadsheet can be downloaded from the VLE.
Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• compare decision-making units (DMUs) via ratios
• draw the efficient frontier and find the efficiencies of all DMUs for any
example involving two ratios
• state the reference set for an inefficient DMU
• formulate the mathematical problem of finding the efficiency of any
DMU
• discuss the use of value judgements
• discuss starting a data envelopment analysis (DEA) study.
Introduction
Data envelopment analysis (DEA) – occasionally called frontier analysis
– was first put forward by Charnes, Cooper and Rhodes in 1978. It is a
performance measurement technique which, as we shall see, can be used
for evaluating the relative efficiency of decision making units
(DMUs) in organisations.
Examples of units to which DEA has been applied are: banks, police
stations, hospitals, tax offices, prisons, military bases (army, navy, air force), schools and university departments. One advantage of DEA is that
it can be applied to non-profit making organisations.
Since the technique was first proposed, much theoretical and empirical
work has been done. Many studies have been published dealing with
applying DEA in real-world situations. We will initially illustrate DEA by
means of a small example.
Much of what you will see below is a graphical (pictorial) approach to
DEA. This is very useful if you are attempting to explain DEA to those
less technically qualified (for example, many in the management world).
However, there is an alternative mathematical approach to DEA that can
be adopted. This is illustrated later below.
Example
Consider a number of bank branches. For each branch we have a single output measure
(number of personal transactions completed) and a single input measure (number of staff).
The data we have are as follows:
Ratios
A commonly used method is ratios. Typically, we take some output
measure and divide it by some input measure. Note the terminology here:
we view branches as taking inputs and converting them (with varying
degrees of efficiency, as we shall see below) into outputs.
For our bank branch example we have a single input measure, the number
of staff, and a single output measure, the number of personal transactions.
Hence we have:
Table 9.4
For example, for the Dorking branch in one year there were 44,000
transactions relating to personal accounts, 20,000 transactions relating to
business accounts and 16 staff were employed.
How can we compare these branches and measure their performance
using these data?
As before, a commonly used method is ratios, just as in the single-output,
single-input case considered earlier. Typically, we take one of the output
measures and divide it by one of the input measures.
For our bank branch example, the input measure is the number of staff
(as before) and the two output measures are the number of personal
transactions and the number of business transactions. Hence we have the
two ratios:
Activity
For each of the nine bank branches shown above, write a single sentence that sums up its
performance but yet is more than just a repetition of the values of the two ratios shown.
Graphical analysis
One way around the problem of interpreting different ratios, at least for
problems involving just two outputs and a single input, is a simple graphical
analysis. Suppose we plot the two ratios for each of our original four branches
as shown below.
Figure 9.1
The positions on the graph represented by Croydon and Redhill demonstrate a
level of performance which is superior to the other two branches. A horizontal
line can be drawn from the vertical axis (y-axis) to Croydon, from Croydon
to Redhill, and a vertical line from Redhill to the horizontal axis (x-axis).
This line is called the efficient frontier (sometimes also referred to as the
efficiency frontier).
The efficient frontier, derived from the examples of best practice contained in
the data we have considered, represents a standard of performance that the
branches not on the efficient frontier could try to achieve.
You can see therefore how the name data envelopment analysis arises –
the efficient frontier envelops (encloses) all the data we have.
However, a number is often easier to interpret than a graph. We say that
any branches on the efficient frontier are 100 per cent efficient (have an
efficiency of 100 per cent). Hence, for our example, Croydon and Redhill have
efficiencies of 100 per cent.
This is not to say that the performance of Croydon and/or Redhill could not
be improved. It may, or may not, be possible to do that. However we can say
that, on the evidence (data) available, we have no idea of the extent to
which their performance can be improved.
It is important to note here that:
• DEA only gives you relative efficiencies – efficiencies relative to the
data considered. It does not, and cannot, give you absolute efficiencies.
• We have used no new information here, merely taken data on inputs and
outputs and presented them in a particular way.
Note too that the statement that a branch has an efficiency of 100 per cent is
a strong statement; namely that we have no other branch that can be said to
be better than it.
Figure 9.2
It might seem reasonable to suggest therefore that the best possible
performance that Reigate could be expected to achieve is given by the
point labelled ‘Best’ in the diagram above. This is the point where the line
from the origin through Reigate meets the efficient frontier.
In other words, ‘Best’ represents a branch that, were it to exist, would have
the same business mix as Reigate and would have an efficiency of 100 per
cent.
Then in DEA we numerically measure the (relative) efficiency of Reigate
by the ratio:
100 × (length of line from origin to Reigate) / (length of line from origin
through Reigate to the efficient frontier)
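For readers who prefer arithmetic to a ruler, this ratio can be computed directly. A minimal sketch in Python (assuming, as in the figure, that the ray from the origin through Reigate meets the frontier on its Croydon–Redhill section):

```python
import numpy as np

# Ratios (transactions per staff member) for three of the branches,
# computed from the data in the text: (personal/staff, business/staff).
croydon = np.array([125 / 18, 50 / 18])
redhill = np.array([80 / 17, 55 / 17])
reigate = np.array([23 / 11, 12 / 11])

# 'Best' lies where the ray s*reigate meets the Croydon-Redhill segment:
#   s * reigate = croydon + t * (redhill - croydon),  0 <= t <= 1.
# Rearranged as a 2x2 linear system in the unknowns (s, t).
A = np.column_stack([reigate, croydon - redhill])
s, t = np.linalg.solve(A, croydon)

# s is how far 'Best' lies along the ray, so efficiency = 1/s.
print(f"t = {t:.2f} (between 0 and 1, so the ray does hit this segment)")
print(f"Reigate efficiency = {100 / s:.0f}%")  # about 36 per cent
```

The result, roughly 36 per cent, agrees with the "approximately one-third" figure quoted for Reigate later in the chapter.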
Activity
Take a piece of graph paper, plot on it the four bank branches and measure the
efficiencies of Reigate and Dorking.
Recall the list of ratios with extra branches added given before.
Figure 9.3
There are two points to note here:
• the above diagram is a lot easier to understand, make sense of, and
interpret, than the list of ratios
• as before, we have not used any new data here, merely looked at the
existing data in a particular way.
This issue of looking at data in a different way is an important practical
issue. Many managers (without any technical expertise) are happy with
ratios. Showing them that their ratios can be viewed differently and used
to obtain new information is often an eye-opener to them.
On a technical issue, note that the scale used for the x-axis and the y-axis
in plotting positions for each branch is irrelevant. Had we used a different
scale above we would have had a different picture, but the efficiencies of
each branch would be exactly the same.
With regard to finding the lengths of the lines that you need to calculate
efficiencies, all we expect here is that you plot the branches on graph
paper to a reasonable accuracy and measure using a ruler. If you are more
mathematically minded then it is possible to use Pythagoras's theorem to
find the line lengths.
Activity
Take a piece of graph paper, plot on it the nine bank branches (our original four plus A to
E) and measure the efficiencies of branches A to E.
This does not automatically mean that Reigate is only approximately one-third as efficient as the best branches. Rather the efficiencies here would
usually be taken as indicative of the fact that other branches are adopting
practices and procedures which, if Reigate were to adopt them, would
enable it to improve its performance.
This naturally invokes issues of highlighting and disseminating examples
of best practice.
Activity
Consider the diagram above with branches A to E included. What would be the reference
sets for branches A to E?
Exercise
Suppose now that we have an extra branch F included in the analysis with personal
transactions per staff member = 1 and business transactions per staff member = 6. What
changes as a result of this extra branch being included in the analysis?
Figure 9.4
Note that the efficient frontier now excludes Redhill. We do not draw the
efficient frontier from Croydon to Redhill and from Redhill to F for two
reasons:
• mathematically the efficient frontier must be convex
• although we have not seen any branches on the line from Croydon to
F it is assumed in DEA that we could construct virtual branches,
which would be a linear combination of Croydon and F, and which
would lie on the straight line from Croydon to F.
In the above it is clear why Croydon and F have a relative efficiency of 100
per cent (i.e. are efficient); both are the top performers with respect to one
of the two ratios we are considering. The example below, where we have added an extra branch G, shows that a branch need not be a top performer in either ratio in order to be efficient.
Figure 9.5
Note here that in the above diagram the ‘feasible space’ in which a single
branch might lie and still achieve 100 per cent efficiency without being a
top performer in either ratio is quite large. While G above is one such
branch, had G been positioned at any point inside the triangle formed
between the horizontal line through Croydon, the vertical line through F and
the line joining Croydon to F, then it would have had 100 per cent efficiency.
Activity
Take a piece of graph paper, plot on it the 10 bank branches (our original four plus A to F)
and measure the efficiencies of all branches not on the efficient frontier.
Recap
Let us recap what we have done here – we have shown how a simple
graphical analysis of data on inputs and outputs can be used to calculate
efficiencies.
Once such an analysis has been carried out then we can begin to tackle,
with a clearer degree of insight than we had before, issues such as:
• identification of best practice
• identification of poor practice
• target setting
• resource allocation
• monitoring efficiency changes over time.
Excel
Look at Sheet A (as below) in the spreadsheet associated with this chapter.
Spreadsheet 9.1
Here you can see the data for the four branches and a ‘yes’ in Column F for
the Dorking branch. Currently the efficiencies for the branches make no
sense as the weights for personal/business/staff have been arbitrarily set
to 1, 2 and 3 respectively.
To maximise the efficiency for Dorking we will use Solver in Excel. To use
Solver in Excel select Tools and then Solver.
In the version of Excel I am using (different versions of Excel have slightly
different Solver formats) you will get the Solver model as below:
Spreadsheet 9.2
Here cells B7 to D7 can be changed (ignore the use of $ signs here, which
is a technical Excel issue) and when we change these cells we are trying to
maximise cell H6 – this is set using the ‘yes’ in column F and the working
column H to be the efficiency of the branch we wish to maximise. The
constraints are that cells B7 to D7 must be non-negative (greater than or
equal to zero) and also that the efficiencies (G2 to G5) must be less than
or equal to one.
Clicking Solve in this Solver window gives:
Spreadsheet 9.3
Spreadsheet 9.4
showing that the maximum efficiency of Dorking is 0.43 (43 per cent),
and one set of values for the weights that enables Dorking to achieve this
maximum efficiency is given in cells B7 to D7.
Obviously being able to use a spreadsheet enables us to explore options
– suppose that next year Dorking operates with one less member of staff
than last year, and increases business transactions by 10 per cent (all other
figures remaining unchanged). What would its efficiency change to? The
answer can be seen below:
Spreadsheet 9.5
showing an increase in efficiency to 49 per cent as compared to the
previously calculated value of 43 per cent.
Activity
Use the Excel spreadsheet to calculate the efficiencies for each branch in turn.
Activity
Extend the Excel spreadsheet to incorporate a branch which employs nine staff and
processes 10 (’000) personal transactions and 6 (’000) business transactions. What is the
efficiency of this new branch?
Linear program
The optimisation problem we considered above, both in mathematics and
in Solver, is a non-linear problem. In fact, it can be converted into a linear
programming problem. To do this we:
• algebraically substitute for all efficiency variables, to give an
optimisation problem expressed purely in terms of weights
• introduce an additional constraint setting the denominator of the
objective function equal to one.
Doing this with the above optimisation problem for Dorking we get:
maximise (44Wper + 20Wbus)/(16Wstaff)
subject to (16Wstaff) = 1
0 ≤ (125Wper + 50Wbus)/(18Wstaff) ≤ 1
0 ≤ (44Wper + 20Wbus)/(16Wstaff) ≤ 1
0 ≤ (80Wper + 55Wbus)/(17Wstaff) ≤ 1
0 ≤ (23Wper + 12Wbus)/(11Wstaff) ≤ 1
Wper, Wbus, Wstaff ≥ 0
This is easily made into a linear program. For each of the non-linear
constraints we multiply throughout by the denominator (which is ≥0). So,
for example, the non-linear constraint 0 ≤ (125Wper + 50Wbus)/(18Wstaff)
≤ 1 when multiplied throughout by the denominator (18Wstaff) becomes 0
≤ (125Wper + 50Wbus) ≤ (18Wstaff), which is clearly a linear constraint. The
objective above becomes linear as the denominator has been explicitly set
equal to one. Hence we get the LP:
maximise (44Wper + 20Wbus)
subject to
(16Wstaff) = 1
0 ≤ (125Wper + 50Wbus) ≤ (18Wstaff)
0 ≤ (44Wper + 20Wbus) ≤ (16Wstaff)
0 ≤ (80Wper + 55Wbus) ≤ (17Wstaff)
0 ≤ (23Wper + 12Wbus) ≤ (11Wstaff)
Wper, Wbus, Wstaff ≥ 0
Once this LP has been solved to generate optimal values for the weights
then the efficiency of the branch we are optimising for, Dorking in this
case, can be easily calculated using EDorking = (44Wper + 20Wbus)/(16Wstaff).
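Since this LP is fully specified by the branch data, it can be checked with any LP solver. Below is a minimal sketch using Python's scipy (an assumption on my part – the guide itself uses Excel Solver), with the variables ordered [Wper, Wbus, Wstaff]:

```python
from scipy.optimize import linprog

# Variables: [Wper, Wbus, Wstaff]; linprog minimises, so negate the objective.
c = [-44, -20, 0]                 # maximise 44*Wper + 20*Wbus (Dorking)
A_eq = [[0, 0, 16]]               # normalisation: 16*Wstaff = 1
b_eq = [1]
A_ub = [[125, 50, -18],           # Croydon: weighted output <= weighted input
        [ 44, 20, -16],           # Dorking
        [ 80, 55, -17],           # Redhill
        [ 23, 12, -11]]           # Reigate
b_ub = [0, 0, 0, 0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")
efficiency = -res.fun
print(f"Maximum efficiency of Dorking: {efficiency:.2f}")  # 0.43, as in the text
```

The optimal weights themselves can be read from `res.x`.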
Value judgements
One thing that can happen in DEA is that inspection of the weights that
are obtained leads to further insight and thought. For example, in our
initial Solver solution above we had a weight Wper associated with personal
transactions of 0.19025 and a weight Wbus associated with business
transactions of 0.93085 – implicitly implying that business transactions
have an importance equal to 0.93085/0.19025 = 4.9 personal transactions.
Now it may be that, after considering this ratio of 4.9, bank
management consider that, as a matter of judgement, business
transactions are much more time consuming/valuable than personal
transactions, and as such they would like the weights Wper and Wbus to
satisfy the constraint Wbus/Wper ≥ 6, implying that one business transaction
is worth at least six personal transactions. This constraint is a value
judgement added to better reflect the reality of the situation.
We can add this constraint to our Solver model, as below in Sheet B,
where the ratio Wbus/Wper is in cell B9:
Spreadsheet 9.6
Solving we get:
Spreadsheet 9.7
which shows that the efficiency of Dorking has been unaffected by the
addition of this value judgement. More technically Solver has found
another set of weights which retain the previous maximum efficiency of 43
per cent for Dorking but which mean that the ratio constraint Wbus/Wper ≥
6 is now also satisfied.
There is one technical issue here, which illustrates why it is common to
solve DEA problems using linear programming instead of just resorting to
Solver. If you try putting randomly chosen values for the weights and then
using Solver to maximise efficiency it is not too hard to come across the
situation shown below:
Spreadsheet 9.8
signifying that Solver failed to find a solution. Although it is beyond the
scope of this chapter, non-linear programs (such as those considered
by Solver) are notoriously difficult to solve numerically – which is why
treating a DEA problem as a linear program is much preferred. Linear
programs are very easy to solve numerically.
Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title                                                   Anderson (page number)
Pupil transportation in North Carolina                  170
Benchmarking team and individual performance in R&D laboratories
    www.banxia.com/frontier/case-studies/benchmarking/
Data envelopment analysis in retail banking
    www.banxia.com/frontier/case-studies/retail-banking/
Chapter 10: Multicriteria decision making
Essential reading
Anderson, Chapter 14, excluding section 14.3.
Spreadsheet
multi.xls
• Sheet A: Solution of weighted goal program via Solver
• Sheet B: Solution of priority level goal program via Solver, first
priority level
• Sheet C: Solution of priority level goal program via Solver, second
priority level
• Sheet D: Approximate calculation of AHP weights and consistency
• Sheet E: Exact calculation of AHP weights via Solver
• Sheet F: Approximate calculation of AHP weights for job offers.
This spreadsheet can be downloaded from the VLE.
Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• formulate a goal program
• when given pairwise comparison matrices apply the analytic hierarchy
process to:
calculate approximate weights
calculate and interpret the consistency ratio
decide the best alternative to choose
• discuss the criticisms that have been made of the analytic hierarchy
process (AHP)
• discuss other approaches to multicriteria decision making.
Introduction
Multicriteria decision making refers to situations where we have more
than one objective (or goal) and these objectives conflict. Nevertheless,
we must somehow reach a decision taking them all into account. This
contrasts with, for example, decision trees or linear programming, where
we have a single objective – either optimise expected monetary value for
decision trees or optimise a single linear objective in the case of linear
programming. This chapter considers goal programming and the analytic
hierarchy process, two techniques used for multicriteria decision making.
Activity
Consider this problem yourself for 10 minutes (keep cost to a minimum and keep the excess
of high-grade ore to a minimum). What answer do you come up with for the number of days
per week each mine should be operated? What are the associated costs and the number of
tonnes of excess high-grade ore? Write your answer here for later reference.
Hence for our Two Mines problem where we wish to reconcile our
conflicting goals we have the equations:
180x + 160y = 780 + C+ − C−
6x + 1y = 12.5 + H+ − H−
6x + 1y ≥ 12
3x + 1y ≥ 8
4x + 6y ≥ 24
x ≤ 5
y ≤ 5
x, y, C+, C−, H+, H− ≥ 0
Given our variables, these equations must be satisfied.
Activity
Suppose now that there is a third mine Z, costing 120 (£’000) per day and producing 0.5
tonnes of high-grade, one tonne of medium-grade and nine tonnes of low-grade ore per
day. How would the above equations change?
Weighted approach
For the weighted approach, we need to assign weights to our four
deviation variables C+, C−, H+, H−. These weights can only come from
managerial consideration of the situation – if necessary, by starting with a
set of weights and then revising them in the light of the solution obtained
after solving the problem with those weights.
There is an important issue associated with weights here – namely that
the equations that we are considering deal with different units – cost
(£’000) and tonnes of high-grade ore. What needs to be made clear to
management in setting these weights is that they need to think in terms of
the weight associated with a one per cent deviation from the current
goal for each variable.
Suppose, to proceed, that management have decided that the weights to
be applied are:
Variable   Current goal   Weight for one per cent deviation from this goal   One per cent of goal
C+         780            50                                                 7.8
C-         780            -20                                                7.8
H+         0.5            4                                                  0.005
H-         0.5            2                                                  0.005
Table 10.2
Note the negative weight for C-: this indicates that (as we shall be
minimising, see below) we prefer to have a downward deviation from our
cost goal of 780 – a natural desire.
Then our objective becomes:
minimise 50(C+/7.8) - 20(C-/7.8) + 4(H+/0.005) + 2(H-/0.005)
where in this objective (C+/7.8), for example, is the total percentage
upward deviation from the cost goal which is weighted using 50 (see
above).
Be clear here – in weighted goal programming the objective to be adopted
is always to minimise a weighted sum of deviation variables.
Hence we have the linear program:
minimise 50(C+/7.8) − 20(C-/7.8) + 4(H+/0.005) + 2(H-/0.005)
subject to 180x + 160y = 780 + C+ − C-
6x + 1y = 12.5 + H+ − H-
6x + 1y ≥ 12
3x + 1y ≥ 8
4x + 6y ≥ 24
x ≤ 5
y ≤ 5
x, y, C+, C-, H+, H- ≥ 0
which, when we solve it, will give us values for x and y, the number of
days to work each mine, that minimises the (total) weighted deviation
from our goals.
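This weighted goal program can be solved outside Excel too. A minimal sketch using Python's scipy (an assumption on my part – the guide itself uses Solver), with the variables ordered [x, y, C+, C−, H+, H−]:

```python
from scipy.optimize import linprog

# Variables: [x, y, Cplus, Cminus, Hplus, Hminus]
# Objective: minimise 50(C+/7.8) - 20(C-/7.8) + 4(H+/0.005) + 2(H-/0.005)
c = [0, 0, 50 / 7.8, -20 / 7.8, 4 / 0.005, 2 / 0.005]
# Goal equations: 180x + 160y - C+ + C- = 780 and 6x + y - H+ + H- = 12.5
A_eq = [[180, 160, -1, 1, 0, 0],
        [  6,   1,  0, 0, -1, 1]]
b_eq = [780, 12.5]
# Ore requirements, written as <= rows by negating the >= constraints
A_ub = [[-6, -1, 0, 0, 0, 0],
        [-3, -1, 0, 0, 0, 0],
        [-4, -6, 0, 0, 0, 0]]
b_ub = [-12, -8, -24]
bounds = [(0, 5), (0, 5)] + [(0, None)] * 4   # x, y <= 5; deviations >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
x, y = res.x[:2]
print(f"x = {x}, y = {y}")  # x = 1.5, y = 3.5, as in the spreadsheet solution
```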
Excel solution
Take the spreadsheet associated with this chapter and look at Sheet A. You
should see the problem we considered above set out as:
Spreadsheet 10.1
Here we have expressly given the weights (for one per cent deviation) both
for upward and downward deviations as in cells B10 to C11.
Cells B14 to C15 contain the deviation variables we are trying to decide and
cells F14 and F15 the variables (x and y respectively) associated with how
many days per week to work each mine. Rows 7 and 8 contain the left and
right hand sides of the two equality constraints in the LP involving the goals
and the deviation variables. Cell B7, for example, is the left-hand side of the
constraint 180x + 160y = 780 + C+ – C- and cell B8 the right-hand side of
that constraint. Row 6 shows the numeric goal values for the problem.
To use Solver in Excel select Tools and then Solver. In the version of Excel I
am using (different versions of Excel have slightly different Solver formats)
you will get the Solver model as below:
Spreadsheet 10.2
Here the objective function (total weighted deviation) is in cell B17 (ignore
the use of $ signs here), which we wish to minimise by changing our
variables. If you click on Options you will see:
Spreadsheet 10.3
where both the ‘Assume Linear Model’ and ‘Assume Non-Negative’ boxes are
ticked – indicating we are dealing with a linear model with non-negative
variables.
Spreadsheet 10.4
where the deviation variables show no deviation from our high-grade excess
ore goal, i.e. the solution shown above of x = 1.5 and y = 3.5 has excess ore
equal to the goal of 0.5 (as can be seen in cell C4, we produce 12.5, require
12 (cell C5), so excess is 0.5). In this solution we exceed our cost goal (the
upward deviation variable with value 50 in cell B14), so the total cost is 50
above our goal of 780 (cell B6), so 780 + 50 = 830, as in cell B4.
The advantage here, as with all of the techniques given in this subject guide,
is that once we have gone to the effort of generating a computerised solution
approach (encoded the problem in Excel using Solver here) we can use it to
investigate sensitivity, or to adjust the solution to one that we prefer.
Here, for example, it may be that management, after considering the above
solution, decide that they wish to increase the weight associated with
exceeding the cost goal (e.g. to increase the weight attached to the upward
deviation variable C+ from 50 to 100). Be clear about the (potential) effect of
this – as we already exceed our cost goal (the upward deviation of 50 in cell
B14) this will tend to reduce that deviation – but the trade-off (of course)
is that in so doing we may get more excess high-grade ore (i.e. increase
the current deviation from our excess high-grade ore goal, currently zero,
as in cells C14 and C15).
Making this change and resolving using Solver we get:
Spreadsheet 10.5
showing that with this set of weights we meet our cost goal, but this is
achieved by exceeding our high-grade ore goal (the upward deviation
variable in cell C14).
Irrespective of what solution management prefer, it is clear that goal
programming provides a flexible tool to investigate the effect of differing
weightings as they attempt to find a solution that satisfies their conflicting goals.
Activity
Suppose now that there is a third mine Z, costing 120 (£’000) per day and producing 0.5
tonnes of high-grade, 1 tonne of medium-grade and 9 tonnes of low-grade ore per day.
What would be the weighted goal programming formulation?
Priority approach
As mentioned above, in the priority approach we need to decide priority levels
for the goals (Priority level 1 for the most important goal, then Priority level 2
for the second most important goal, etc.) and first satisfy Priority 1 goals, then
Priority 2 goals, then…, so that a sequence of related problems are solved.
Here, for the purposes of illustrating the approach, we shall assume that
management consider that their priority levels are:
• Priority level 1 – to meet the cost goal
• Priority level 2 – to meet the excess high-grade ore goal.
Hence the first problem that we solve (a linear program) is:
minimise C+ + C–
subject to 180x + 160y = 780 + C+ − C-
6x + 1y = 12.5 + H+ − H-
6x + 1y ≥ 12
3x + 1y ≥ 8
4x + 6y ≥ 24
x ≤ 5
y ≤ 5
x, y, C+, C-, H+, H– ≥ 0
At Priority level 1 we wish (if possible) to meet the cost goal. This is
equivalent to saying that the deviation from that goal (upward or downward)
should be zero. This is represented in a linear manner by having as the
objective the sum of the two deviation variables associated with cost.
Note here that we have no weights in the problem, unlike weighted goal
programming considered above.
Excel solution
Take the spreadsheet associated with this chapter and look at Sheet B. You
should see the problem we considered above set out as below where we also
show the Solver model for Sheet B.
Spreadsheet 10.6
The objective for this Solver model is cell B13, which you will find is equal
to the sum of B10 and B11 (i.e. the upward deviation variable C+ plus the
downward deviation variable C–).
Spreadsheet 10.7
which has the same form as we saw when we considered the weighted
problem (albeit you will see that the rows in the spreadsheet concerned
with weights have disappeared, since in the priority approach which we
are considering here we have no weights).
If you click Options in the Solver model you will also find that the ‘Assume
Linear Model’ and ‘Assume Non-Negative’ boxes are ticked – indicating
that we are dealing with a linear model with non-negative variables.
Solving the linear program represented in the above Solver model
we get:
Spreadsheet 10.8
This indicates that we can achieve our cost goal (a zero objective value
corresponding to a zero upward and downward deviation variable for the
cost goal, as in cells B10 and B11), but at the price of an upward deviation
from our excess ore goal of 0.5, as in cell C10.
We can now move to our second priority level. This was to meet the excess
high-grade ore goal. We know from the solution just considered above we
can meet our cost goal, but at the expense of exceeding our high-grade
excess ore goal by 0.5. But might it be possible to meet that cost goal and
get closer to our excess high-grade ore goal?
Spreadsheet 10.9
where the constraint relating to variables C+ and C- (cells B10 and B11)
has been added and if you examine the objective cell B13 you will find
that it is now equal to C10 + C11 (i.e. H+ + H-). Solving we get:
Spreadsheet 10.10
The solution seen here is the best we can do (in terms of our second
priority level, excess high-grade ore) subject to the constraint that we
continue to achieve the best we can at our first priority level (meet the
cost goal).
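The two-stage procedure just described can be sketched in code. The numbers below are purely illustrative (they are not the mine data from this chapter): a hypothetical priority-one goal x1 + x2 = 10 and a hypothetical priority-two goal 2x1 + x2 = 8, each with upward and downward deviation variables.

```python
# Sketch of pre-emptive (lexicographic) goal programming, mirroring the
# two-stage Solver procedure described above.  The data are ILLUSTRATIVE,
# not the mine example from the chapter.
from scipy.optimize import linprog

# Variable order: [x1, x2, d1m, d1p, d2m, d2p]
# Goal 1 (priority 1):  x1 + x2 + d1m - d1p = 10
# Goal 2 (priority 2): 2x1 + x2 + d2m - d2p = 8
A_eq = [[1, 1, 1, -1, 0, 0],
        [2, 1, 0, 0, 1, -1]]
b_eq = [10, 8]

# Stage 1: minimise the priority-one deviations d1m + d1p.
c1 = [0, 0, 1, 1, 0, 0]
stage1 = linprog(c1, A_eq=A_eq, b_eq=b_eq)
z1 = stage1.fun  # best achievable priority-one deviation

# Stage 2: minimise the priority-two deviations, holding the priority-one
# deviations at their optimal value via an extra constraint (just as the
# extra constraint was added to the Solver model above).
c2 = [0, 0, 0, 0, 1, 1]
A_eq2 = A_eq + [[0, 0, 1, 1, 0, 0]]
b_eq2 = b_eq + [z1]
stage2 = linprog(c2, A_eq=A_eq2, b_eq=b_eq2)
print(z1, stage2.fun)
```

Here the priority-one goal can be met exactly, but holding it there forces a deviation of 2 from the priority-two goal, exactly the shape of trade-off seen in the mine example.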
Activity
Suppose now that there is a third mine Z, costing 120 (£’000) per day and producing 0.5
tonnes of high-grade, 1 tonne of medium-grade and 9 tonnes of low-grade ore per day.
What would be the pre-emptive goal programming formulation?
Activity
Recall the solution you produced for the goal programming problem above. How does
that compare with the goal programming solutions produced using Excel above?
Activity/Reading
For this section read Anderson, Chapter 14, sections 14.4–14.6.
The analytic hierarchy process (AHP), developed by Saaty, is a systematic
approach to deciding between a finite set of alternatives.
We shall illustrate AHP by means of an example.
Suppose that a student is considering two different job offers. They have
three factors (objectives) that they consider important in helping them to
choose between offers:
objective 1: starting salary – the higher the better
objective 2: promotion prospects – the quicker promotion occurs the better
objective 3: interest in the job – the more interest the better.
The first step in AHP is for the student to construct a matrix [Sij]
comprising a pairwise comparison of each of these objectives. This
pairwise comparison takes place with regard to a standard nine-point scale
as shown below:
Scale values 2, 4, 6 and 8 lie midway between the definitions for their
nearest values given above. For example, a scale value of 4 for Sij indicates
that objective i is midway between moderately more important and strongly
more important than objective j.
For the example we are considering, suppose that the student considers that
their pairwise comparison matrix is:
Objective (j)
1 2 3
Objective (i) 1 1 5 3
2 – 1 –
3 – 4 1
Here we have that:
• as S12 = 5, Objective 1 is strongly more important than Objective 2
• as S13 = 3, Objective 1 is moderately more important than Objective 3
• as S32 = 4, Objective 3 is midway between moderately more important
and strongly more important than Objective 2.
There are two points to note here:
• the diagonal elements (Sii, i = 1, 2, 3) are all 1 (no other choice is
possible as objective i must by definition be as important as objective j
when i = j)
• for each distinct pair of objectives i and j we have only entered one
value; for example, for Objectives 2 and 3 we have entered a value for
S32 but not for S23.
Activity
For the three objectives we have considered above, what values would you personally
assign in the pairwise comparison matrix? Write that matrix here for reference.
Activity
Do you think the values assigned in the pairwise comparison matrix we have used above
are consistent or not? Why? Do you think the values assigned in your own pairwise
comparison matrix (as in the previous activity above) are consistent or not? Why? Record
here your conclusions as to whether these judgements are consistent or not for future
reference.
The next step in the AHP process is to set the missing elements in the matrix
equal to the reciprocal of the corresponding element that has been entered,
as below:
Objective (j)
1 2 3
Objective (i) 1 1 5 3
2 1/5 1 1/4
3 1/3 4 1
Spreadsheet 10.11
The next step in AHP is to see if the pairwise comparison matrix is
reasonably consistent. To do this we carry out the procedure below.
First we carry out a matrix multiplication of the pairwise comparison
matrix with the column vector of weights, i.e. we do:
|1 5 3 | | w1 |
| 1/5 1 1/4 | | w2 |
| 1/3 4 1 | | w3 |
= |1 5 3 | | 0.6194 | = | 1.9540 |
| 1/5 1 1/4 | | 0.0964 | | 0.2913 |
| 1/3 4 1 | | 0.2842 | | 0.8763 |
Spreadsheet 10.12
Note that slight differences arise (e.g. in the first element of the column
vector) between the calculation we carried out above and the Excel values.
These are due to rounding errors on our part since Excel is more accurate
and works to many more decimal places in carrying out calculations than
we have used above.
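The column-normalisation calculation that produced the approximate weights, and the consistency check, can be reproduced in a few lines (a sketch; the random index value 0.58 used for the consistency ratio is Saaty's standard figure for a 3×3 matrix):

```python
import numpy as np

# Completed pairwise comparison matrix from the text.
S = np.array([[1,   5, 3   ],
              [1/5, 1, 1/4 ],
              [1/3, 4, 1   ]])

# Approximate AHP weights: normalise each column to sum to one,
# then average across each row.
w = (S / S.sum(axis=0)).mean(axis=1)   # ~ [0.6194, 0.0964, 0.2842]

# Consistency check: estimate the maximum eigenvalue from S @ w,
# then form the consistency index (CI) and consistency ratio (CR).
lam_est = np.mean((S @ w) / w)         # ~ 3.087
n = S.shape[0]
CI = (lam_est - n) / (n - 1)
RI = 0.58                              # Saaty's random index for n = 3
CR = CI / RI                           # below 0.1 counts as consistent
print(w, lam_est, CR)
```

Note that S @ w reproduces the column vector (1.9540, 0.2913, 0.8763) calculated above.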
Activity
For your personal pairwise comparison matrix, as decided in an activity above, use the
Excel spreadsheet to calculate your consistency ratio. Does this ratio accord with your
own judgement as to whether your pairwise comparison matrix was consistent or not as
considered in the activity above?
maximise λ
subject to | 1    5  3   | | w1 |     | w1 |
           | 1/5  1  1/4 | | w2 | = λ | w2 |
           | 1/3  4  1   | | w3 |     | w3 |
w1 + w2 + w3 = 1
w1, w2, w3 ≥ 0
Spreadsheet 10.13
Here cells C13 to E13 contain the weights and cell C14 the λ value which,
as you can see from the Solver model, is to be maximised. The current
values in these cells are there just to get the solution process started. Cells
C16 to C18 contain expressions for the left-hand side of the first three
constraints of the problem when rearranged to ensure all variables are
on the left-hand side (all three constraints have a zero right-hand side).
Cell C19 and its associated constraint in the Solver model ensure that the
weights add up to precisely one.
Solving we get:
Spreadsheet 10.14
so that the exact AHP solution is:
w1 = 0.6267, w2 = 0.0936 and w3 = 0.2797 with λmax = 3.086
Hence we can see that our approximate values as calculated above (in
Excel) of
w1 = 0.6194, w2 = 0.0964 and w3 = 0.2842 with λmax = 3.087
were reasonable.
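The same exact solution can be obtained directly as the principal eigenvalue and eigenvector of the pairwise comparison matrix, without using Solver (a sketch using numpy):

```python
import numpy as np

S = np.array([[1,   5, 3   ],
              [1/5, 1, 1/4 ],
              [1/3, 4, 1   ]])

# The exact AHP weights are the eigenvector associated with the largest
# eigenvalue of S, normalised so the weights sum to one.
eigvals, eigvecs = np.linalg.eig(S)
k = np.argmax(eigvals.real)          # index of the principal eigenvalue
lam_max = eigvals[k].real            # ~ 3.086
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                      # ~ [0.6267, 0.0936, 0.2797]
print(lam_max, w)
```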
Activity
For your personal pairwise comparison matrix use the Excel spreadsheet to compute exact
values. Compare these with the approximate values calculated via Excel.
Spreadsheet 10.15
As we had only two job offers we only entered a single judgement into
the pairwise comparison matrix above and hence there is no need to
calculate the consistency index since it will be zero (logically if we express
just a single judgement it is impossible to be inconsistent). Had we been
considering more than two job offers, though, we would have needed to
consider the consistency of our judgements in the same manner as
presented above for the pairwise comparison matrix relating to the
objectives.
As an aside here you will see that the columns in the normalised matrix
shown above in Sheet F of the spreadsheet associated with this chapter
are identical (each having 0.75 in Row 1, 0.25 in Row 2). This is not a
coincidence but is in fact something you will always observe when you
deal with a two-row, two-column matrix in AHP. As both columns in that
matrix are identical it is not surprising that the row average column is also
identical to these two columns.
We now take our other two objectives, do a pairwise comparison of our
two job offers, and then calculate the AHP weights.
Suppose that the pairwise comparison matrix for Objective 2, promotion
prospects – the quicker promotion occurs the better, is:
Offer 1 Offer 2
Offer 1 1 –
Offer 2 7 1
indicating that Offer 2 is very strongly more important than Offer 1. With
respect to this objective the weights are 0.125 for Offer 1 and 0.875 for
Offer 2.
Suppose that the pairwise comparison matrix for Objective 3, interest in
the job – the more interest the better, is:
Offer 1 Offer 2
Offer 1 1 –
Offer 2 4 1
We can now bring all the weights we have decided together in the table
below:
Table 10.3
where we have chosen to work with our approximate AHP weights for
illustration.
Here the total score for each job is computed in the natural way (sum
over the objectives the objective weight multiplied by the weight for that
objective for the job being considered). For example the score of 0.53 for
job offer 1 is computed as 0.6194(0.75) + 0.0964(0.125) + 0.2842(0.2).
The alternative with the highest score is preferred – so in this case Job
Offer 1 is (just) preferred to Job Offer 2.
One point to note here is that the objective weights (0.6194, 0.0964
and 0.2842) we have used to decide which job offer we prefer were
computed quite early in the overall procedure, before we checked whether
the judgements expressed with regard to the three objectives were
reasonably consistent. This means that, in the worst case, even if you
cannot carry that consistency-checking procedure through fully, or make a
mistake in carrying it through, it is still possible to get a correct
answer here (provided, obviously, you have computed the objective weights
correctly).
If we had chosen to work with our exact AHP weights we would have had:
Table 10.4
so that we would still have preferred Job Offer 1.
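The total-score calculation in Tables 10.3 and 10.4 can be reproduced as follows (a sketch using the approximate objective weights; with only two offers, each offer's per-objective weights are complements, so offer 2's weights are 0.25, 0.875 and 0.8):

```python
import numpy as np

# Objective weights (approximate AHP values from the text).
obj_w = np.array([0.6194, 0.0964, 0.2842])

# Per-objective weights for each job offer: rows = offers, columns =
# objectives (salary, promotion prospects, interest in the job).
offer_w = np.array([[0.75, 0.125, 0.2],    # job offer 1
                    [0.25, 0.875, 0.8]])   # job offer 2

# Total score: sum over objectives of (objective weight) x (offer weight).
scores = offer_w @ obj_w
print(scores)   # offer 1 ~ 0.53, offer 2 ~ 0.47
```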
Criticisms of AHP
There have been a number of criticisms made of AHP and whether it is
a suitable technique for multicriteria decision making. These criticisms
centre around:
• The use of a nine-point scale:
it is purely arbitrary and has no theoretical justification
it seems inevitably to introduce numeric inconsistencies (e.g. if
A is very strongly more important than B and B in turn is very
strongly more important than C then SAB = 7 and SBC = 7, and perfect
consistency would require SAC = SAB × SBC = 49 – yet the scale
limits SAC to having, at most, the value 9).
• the fact that rank reversal can occur. AHP implicitly enables you to
rank alternatives (e.g. in our example above we ranked Job Offer 1
before Job Offer 2 because Job 1 had a higher total score than Job 2).
Suppose though that a third job offer is considered. It can be that this
third offer, even if a very poor job offer that is assigned a very low total
score, alters the relative ranking of the first two job offers. This seems
problematic and is seen by critics of AHP as inconsistent with a logical
rational approach to multicriteria decision making.
Other approaches
There are other approaches to making decisions when we have multiple
criteria that we have to consider beyond the two approaches dealt with in
this chapter (goal programming and AHP). MultiCriteria Decision Analysis
(or MCDA for short) has been widely studied. At the time of writing,
Wikipedia, for example, lists approximately 30 tools/techniques that have
been proposed to deal with problems of this type. Clearly it is impossible
to review/explain all of these approaches in this subject guide. However,
it is important that you should be clear that other approaches exist. One
approach that merits further consideration is multi-attribute utility/value
analysis (MAUA/MAVA).
In MAUA/MAVA the idea is that each alternative has a number of
dimensions. So, referring back to the AHP example considered above, we
had two alternatives (job offer 1 and job offer 2) and three dimensions
(objectives; objective 1: starting salary – the higher the better; objective 2:
promotion prospects – the quicker promotion occurs the better; objective
3: interest in the job – the more interest the better). The dimensions
represent the criteria in which we are interested. In MAUA/MAVA, each
dimension is mapped to numeric values (typically scaled to lie between
zero and 100) using a utility (value) function.
If you need to know what a utility (value) function is then review Chapter
4. Basically a utility/value function is an evaluation of the worth of
something to an individual decision maker. Different individuals assign
different utility values and they change according to circumstances. For
example, you might rate the offer of a gift of £10 as not very significant.
However, should you have just lost your wallet and face a walk of five
miles to get home late at night when it is pouring with rain the offer of
£10, which would enable you to use a taxi/public transport, might be
assigned a higher utility value than it would in normal circumstances.
In MAUA/MAVA weights are (subjectively) assigned to each dimension and
so, given:
• an evaluation of each alternative with respect to each dimension
• a mapping of each dimension to numeric values (typically scaled to lie
between zero and 100) using a utility/value function
• weights for each dimension
then we can bring all these factors together and arrive at a score for each
alternative, and hence rank them.
To illustrate MAUA/MAVA suppose that we expand the AHP example we
considered above to three job offers. The evaluation of each alternative
with respect to each dimension is shown in Table 10.5. In this table a high
number is good, so, for example, with respect to the first dimension of
starting salary job offer 1 is the worst offer, job offer 3 the best offer and
job offer 2 lies in the middle.
Table 10.5
                          Dimension
             Objective 1:     Objective 2:         Objective 3:
             starting salary  promotion prospects  interest in the job
Alternative
Job offer 1  1                2                    40
Job offer 2  3                5                    25
Job offer 3  7                11                   10

After mapping each dimension to utility values (scaled to lie between
zero and 100) using a utility/value function, the evaluations become:

                          Dimension
             Objective 1:     Objective 2:         Objective 3:
             starting salary  promotion prospects  interest in the job
Alternative
Job offer 1  0                0                    100
Job offer 2  35               60                   70
Job offer 3  100              100                  0
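Given dimension weights, the MAUA/MAVA scores follow by the same weighted-sum logic as before. The weights below are purely illustrative – they are not given in the subject guide – but the sketch shows how the pieces combine:

```python
import numpy as np

# Utility values (0-100) for each alternative on each dimension, from the
# second table above: rows = job offers 1-3, columns = objectives 1-3.
U = np.array([[0,   0,   100],
              [35,  60,  70 ],
              [100, 100, 0  ]])

# HYPOTHETICAL dimension weights (illustrative only, not from the guide);
# they sum to one.
w = np.array([0.5, 0.2, 0.3])

scores = U @ w                       # weighted score for each job offer
ranking = np.argsort(-scores) + 1    # offers ranked best first
print(scores, ranking)
```

With these particular weights the ranking would put job offer 3 first; different (equally subjective) weights can of course produce a different ranking.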
Activity
Conduct an internet search using the following terms to discover more about these
techniques:
•• multicriteria decision analysis
•• goal programming
•• analytic hierarchy process
•• multi-attribute utility/value analysis.
Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title Anderson (page number)
Vehicle fleet management in Quebec 608
Scoring model at Ford motor company 613
Multicriteria decision making at NASA 625
Chapter 11: Queueing theory and simulation
Essential reading
Anderson, Chapter 11, sections start–11.3; Chapter 12, sections start,
12.3–12.4.
Spreadsheet
queue.xls
• Sheet A: statistics for an M/M/1 queue
• Sheet B: statistics for an M/M/2 queue
• Sheet C: simulation – small example
• Sheet D: simulation – large example.
This spreadsheet can be downloaded from the VLE.
Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• list and discuss the characteristics of queueing systems
• calculate various steady-state statistics for a single server queue with
Poisson arrivals and negative exponential service times (an M/M/1
queueing system)
• explain the basics of discrete-event simulation
• perform a small discrete-event simulation and produce statistics
from that simulation (relating to queueing time; time in the system;
minimum and maximum queue length; average queue length).
Introduction
Think back to the last time you were in a queue. It was probably not that
long ago. People stand in queues all the time, for example in a shop or at
a bank. However, people are not the only things that queue. Cars waiting
at traffic lights are also queueing. Ships waiting for a berth in a port are
also queueing. Packs of breakfast cereal in a shop are also queueing; that
is, they are waiting for something to happen, namely for a shopper to pick
them off the shelf and buy them. Machines in a factory are also queueing;
that is, they are waiting to break down.
Queueing theory
Activity/Reading
For this section read Anderson, Chapter 11, sections start–11.3.
In essence all queueing systems can be broken down into individual
sub-systems consisting of entities queueing for some activity (as shown below).
Queue
Activity
Figure 11.1
Typically we can talk of this individual sub-system as dealing with
customers queueing for service. To analyse this sub-system we need
information relating to:
• arrival process
• service mechanism
• queue characteristics.
We deal with each of these below.
Arrival process
This deals with:
• how customers arrive, for example, singly or in groups (batch or bulk
arrivals)
• how the arrivals are distributed in time, e.g. what is the probability
distribution of time between successive arrivals (the inter-arrival
time distribution)
• whether there is a finite population of customers or (effectively) an
infinite number.
The simplest arrival process is one where we have completely regular
arrivals (i.e. the same constant time interval between successive arrivals).
A Poisson stream of arrivals corresponds to arrivals at random: successive
customers arrive after inter-arrival intervals that are independent and
exponentially distributed. The Poisson stream is important as it is a
convenient mathematical model of many real life queueing systems and is
described by a single parameter – the average arrival rate. Other important
arrival processes are scheduled arrivals; batch arrivals; and time dependent
arrival rates (i.e. the arrival rate varies according to the time of day).
Service mechanism
This deals with:
• a description of the resources needed for service to begin
• how long the service will take (the service time distribution)
• the number of servers available
• whether the servers are in series (each server has a separate queue) or
in parallel (one queue for all servers)
• whether pre-emption is allowed (a server can stop processing a
customer to deal with another ‘emergency’ customer).
The assumption that the service times for customers are independent and
do not depend upon the arrival process is common. Another common
assumption about service times is that they are exponentially distributed.
Queue characteristics
This deals with:
• how we choose, from the set of customers waiting for service, the
one to be served next (e.g. FIFO (first-in first-out) – also known as
FCFS (first-come first-served); LIFO (last-in first-out); randomly). This
is often called the queue discipline.
• whether we have:
balking (customers deciding not to join the queue if it is too long)
reneging (customers leaving the queue if they have waited too
long for service)
jockeying (customers switching between queues if they think they
will get served faster by so doing)
Queueing notation
It is common to use the following symbols:
λ to be the mean (or average) number of arrivals per time period (i.e.
the mean arrival rate)
µ to be the mean (or average) number of customers served per time
period (i.e. the mean service rate).
Kendall (a British mathematician) formulated a standard notation system
to classify queueing systems as P1/P2/P3/P4/P5, where these five
parameters are:
P1 (first parameter) – the probability distribution for the arrival process
P2 (second parameter) – the probability distribution for the service
process
P3 (third parameter) – the number of channels (servers)
P4 (fourth parameter) – the maximum number of customers allowed
in the queueing system (either being served or waiting for service)
P5 (fifth parameter) – the size of the population from which
customers are drawn (the total number of potential customers).
If the last two parameters are omitted then they are assumed to be
infinite (i.e. there is no limit on the number of customers).
Common options for P1 and P2 are:
M for a Poisson arrival distribution (exponential inter-arrival
distribution) or an exponential service time distribution
D for a deterministic or constant value
G for a general distribution (but with a known mean and variance).
For example the M/M/1 queueing system, the simplest queueing system,
has a Poisson arrival distribution, an exponential service time distribution
and a single channel (one server) with no limitations on the maximum
number of customers (either in the system or in total).
Note here that in using this notation it is always assumed that there is
just a single queue (waiting line) and customers move from this single
queue to the servers.
You can see a number of formulae in Anderson. For the purposes of this
chapter we shall expect you to know the following:
• the average number of units in the queue (i.e. the mean/average
queue length, also known as the mean/average queue size) = λ²/[µ(µ − λ)]
• the probability of having to wait for service = λ/µ
• the average time in the system = 1/(µ − λ)
• the probability that there are n units (n ≥ 0) in the system =
(λ/µ)ⁿ(1 − λ/µ).
Here the word ‘system’ refers to both queueing and to being served.
Hence for our particular example with λ = 0.5 and µ = 4 these are:
• the average number of units in the queue = 0.01786
• the probability of having to wait for service = 0.125
• the average time in the system = 0.2857
• the probability that there are n units (n ≥ 0) in the system is 0.875 for
n = 0 and 0.109 for n = 1.
Here, for example, a customer will on average spend 0.2857 minutes in
the system (queueing and being served).
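The formulas above can be collected into a small function (a sketch mirroring the calculations on Sheet A; lam and mu are the arrival and service rates in customers per minute):

```python
def mm1_stats(lam, mu):
    """Steady-state statistics for an M/M/1 queue with arrival rate lam
    and service rate mu (a steady state requires lam < mu)."""
    assert lam < mu, "queue is unstable unless lam < mu"
    rho = lam / mu                     # traffic intensity
    return {
        "avg_queue_length": lam**2 / (mu * (mu - lam)),
        "prob_wait": rho,              # probability of having to wait
        "avg_time_in_system": 1 / (mu - lam),
        "prob_n_units": lambda n: rho**n * (1 - rho),
    }

stats = mm1_stats(0.5, 4)
print(stats["avg_queue_length"])       # ~ 0.01786
print(stats["avg_time_in_system"])     # ~ 0.2857
print(stats["prob_n_units"](0))        # 0.875
```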
Sheet A in the spreadsheet accompanying this chapter calculates these
values and is shown below:
Spreadsheet 11.1
One factor that is of note is traffic intensity = (arrival rate)/(departure
rate) [= λ/µ for one server] where arrival rate = number of arrivals per
unit time and departure rate = number of departures per unit time. Traffic
intensity is a measure of the congestion of the system. If it is near to zero
there is very little queueing and in general as the traffic intensity increases
(to near 1 or even greater than 1) the amount of queueing increases.
For the system we have considered above the arrival rate is 0.5 and the
departure rate is 4 so the traffic intensity is 0.5/4 = 0.125.
For the first situation one server working twice as fast corresponds to a
service rate µ = 8 customers per minute, so we have:
Spreadsheet 11.2
For two servers working at the original rate the situation is an M/M/2
queueing system. Here we shall not expect you to learn any formulae;
rather, we have encoded the standard formulae given in Anderson into
Sheet B of the spreadsheet associated with this chapter, as below:
Spreadsheet 11.3
Note too that this calculation assumes that these two servers are fed from
a single queue (rather than each having their own individual queue).
Compare the two outputs above – which option do you prefer?
Of the figures in the outputs above some are identical. Extracting key
figures which are different we have:
It can be seen that with one server working twice as fast, customers spend
less time in the system on average, but have a higher probability of having
to wait for service.
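This comparison can be checked with the standard M/M/c (Erlang C) formulas – a sketch, not the Anderson spreadsheet itself, but it reproduces the qualitative conclusion:

```python
from math import factorial

def mmc_stats(lam, mu, c):
    """Steady-state statistics for an M/M/c queue (one shared queue,
    c identical servers); a steady state requires lam < c*mu."""
    a = lam / mu                        # offered load
    rho = lam / (c * mu)                # traffic intensity per server
    assert rho < 1, "unstable system"
    p0 = 1 / (sum(a**n / factorial(n) for n in range(c))
              + a**c / (factorial(c) * (1 - rho)))
    prob_wait = p0 * a**c / (factorial(c) * (1 - rho))   # Erlang C
    lq = prob_wait * rho / (1 - rho)    # average queue length
    w = lq / lam + 1 / mu               # average time in system
    return {"prob_wait": prob_wait, "avg_queue_length": lq,
            "avg_time_in_system": w}

fast = mmc_stats(0.5, 8, 1)   # one server working twice as fast
two = mmc_stats(0.5, 4, 2)    # two servers at the original rate
print(fast["avg_time_in_system"], two["avg_time_in_system"])
print(fast["prob_wait"], two["prob_wait"])
```

Running this confirms the text: the single fast server gives a shorter average time in the system, while the two slower servers give a lower probability of having to wait.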
Activity
Which do you prefer – one server working twice as fast or two servers working at the
original rate – and why?
Simulation
Activity/Reading
For this section read Anderson, Chapter 12, sections start, 12.3–12.4.
An example simulation
To illustrate discrete-event simulation let us take the very simple system
below, with just a single queue and a single server.
Queue
Activity
Figure 11.2
Suppose that customers arrive with inter-arrival times that are uniformly
distributed between one and three minutes, i.e. all inter-arrival times
between one and three minutes are equally likely. Suppose too that service
times are uniformly distributed between 0.5 and two minutes (i.e. any
service time between 0.5 and two minutes is equally likely). We will
illustrate how
this system can be analysed using simulation.
Conceptually we have two separate, and independent, statistical
distributions, namely:
• arrival
• service.
Hence we can think of constructing two long lists of numbers – the first list
being inter-arrival times sampled from the uniform distribution between
one and three minutes, the second list being service times sampled from
the uniform distribution between 0.5 and two minutes. By sampled we
mean that we (or a computer) look at the specified distribution and
randomly choose a number (inter-arrival time or service time) from this
specified distribution. For example in Excel using = 1 + (3-1)*RAND()
would randomly generate interarrival times and = 0.5 + (2-0.5)*RAND()
would randomly generate service times.
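The same sampling can be done outside Excel (a sketch in Python; random.uniform plays the role of the RAND() expressions above, and the seed is fixed only to make the run reproducible):

```python
import random

random.seed(42)   # fix the seed so the run is reproducible

# Sample inter-arrival times uniformly from [1, 3] minutes and service
# times uniformly from [0.5, 2] minutes, as in the Excel formulas above,
# rounded to one decimal place to ease the processing by hand.
inter_arrivals = [round(random.uniform(1, 3), 1) for _ in range(4)]
services = [round(random.uniform(0.5, 2), 1) for _ in range(4)]
print(inter_arrivals, services)
```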
Suppose our two lists are:
Inter-arrival times (mins) Service times (mins)
1.9 1.7
1.3 1.8
1.1 1.5
1.0 0.9
Etc. Etc.
where to ease the processing we have chosen to work to one decimal
place.
Suppose now we consider our system at time zero (T = 0), with no
customers in the system. Take the lists above and ask yourself the
question: What will happen next?
The answer is that after 1.9 minutes have elapsed a customer will appear.
The queue is empty and the server is idle so this customer can proceed
directly to being served. What will happen next?
The answer is that after a further 1.3 minutes have elapsed (i.e. at T =
1.9 + 1.3 = 3.2) the next customer will appear. This customer will join the
queue (since the server is busy). What will happen next?
The answer is that at time T = 1.9 + 1.7 = 3.6 the customer currently being
served will finish and leave the system. At that time we have a customer in
the queue and so they can start their service (which will take 1.8 minutes
and hence end at T = 3.6 + 1.8 = 5.4). What will happen next?
The answer is that 1.1 minutes after the previous customer arrival (i.e. at
T = 3.2 + 1.1 = 4.3) the next customer will appear. This customer will join
the queue (since the server is busy). What will happen next?
The answer is that after a further 1.0 minutes have elapsed (i.e. at T = 4.3
+ 1.0 = 5.3) the next customer will appear. This customer will join the
queue (since there is already someone in the queue), so now the queue
contains two customers waiting for service. What will happen next?
The answer is that at T = 5.4 the customer currently being served will
finish and leave the system. At that time we have two customers in the
queue and assuming a FIFO queue discipline the first customer in the
queue can start their service (which will take 1.5 minutes and hence end
at T = 5.4 + 1.5 = 6.9). What will happen next?
The answer is that... etc and we could continue in this fashion if we so
wished (and had the time and energy)! Plainly the above process is best
done by a computer.
To summarise what we have done we can construct the list below:
Time T What happened
1.9 Customer appears, starts service scheduled to end at T = 3.6
3.2 Customer appears, joins queue
3.6 Service ends
Customer at head of queue starts service, scheduled to end at T = 5.4
4.3 Customer appears, joins queue
5.3 Customer appears, joins queue
5.4 Service ends
Customer at head of queue starts service, scheduled to end at T = 6.9
Etc. Etc.
You can hopefully see from the above how we are simulating
(artificially reproducing) the operation of our queueing system.
Simulation, as illustrated above, is more accurately called discrete-event
simulation since we are looking at discrete events through time
(customers appearing, service ending). Here we were only concerned with
the discrete points T = 1.9, 3.2, 3.6, 4.3, 5.3, 5.4, etc.
Once we have done a simulation such as that shown above then we can
easily calculate statistics about the system – for example, the average time
a customer spends queueing and being served (the average time in the
system). Here two customers have gone through the entire system – the
first appeared at time 1.9 and left the system at time 3.6 and so spent 1.7
minutes in the system. The second customer appeared at time 3.2 and left
the system at time 5.4 and so spent 2.2 minutes in the system. Hence the
average time in the system is (1.7 + 2.2)/2 = 1.95 minutes.
We can also calculate statistics on queue waiting – for example what is the
average queue waiting time? Here the first customer does not queue at all
and the second customer queues from 3.2 to 3.6. Within our time frame
from zero to 5.4 (when we finished the simulation above) these
are the only two customers to completely go through the system and
hence, based just on these two customers, the average queueing time is
[0 + (3.6 – 3.2)]/2 = 0.2 minutes.
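The hand simulation above can be reproduced with a short script (a sketch of the single-server, FIFO case, using the four sampled customers):

```python
# Inter-arrival and service times from the two lists above.
inter_arrivals = [1.9, 1.3, 1.1, 1.0]
services = [1.7, 1.8, 1.5, 0.9]

# Cumulative arrival times: 1.9, 3.2, 4.3, 5.3.
arrivals = []
t = 0.0
for gap in inter_arrivals:
    t += gap
    arrivals.append(t)

# Single server, FIFO: each customer starts service at the later of their
# arrival time and the previous customer's service-end time.
starts, ends = [], []
prev_end = 0.0
for a, s in zip(arrivals, services):
    start = max(a, prev_end)
    starts.append(start)
    ends.append(start + s)
    prev_end = ends[-1]

time_in_system = [e - a for a, e in zip(arrivals, ends)]
queue_wait = [s - a for a, s in zip(arrivals, starts)]

# Statistics over the first two customers, matching the text.
print(sum(time_in_system[:2]) / 2)   # ~1.95 minutes
print(sum(queue_wait[:2]) / 2)       # ~0.2 minutes
```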
Activity
For the following values perform your own simulation and produce statistics relating to
average time in the system and average queueing time.
Now look at Sheet C in the spreadsheet associated with this chapter. You
will see:
Spreadsheet 11.4
Cells A4 to A7 contain the same inter-arrival times, and cells B4 to B7
contain the same service times, as considered in the example given
previously above. Here cells B1 and C1 specify the time period over which
statistics with regard to queueing time and time in the system should be
calculated. These can be seen in columns I and J. Columns G and H give
the same statistics but for all the customers that are seen.
Note here, however, how the above calculations (both for average time
in the system and average queueing time) took into account the system
when we first started – when it was completely empty. This is probably
biasing (rendering inaccurate) the statistics we are calculating and so it
is common in simulation to allow some time to elapse (so the system ‘fills
up’) before starting to collect information for use in calculating summary
statistics. This is the purpose of cells B1 and C1.
In order to illustrate this look at Sheet D in the spreadsheet associated
with this chapter. You will see:
Spreadsheet 11.5
Here we have 20 customers with statistics collected over the time period 5
to 30.
Activity
For the inter-arrival times and service times shown in Sheet C above, what would be the
average queueing time (averaged over all four customers) if we had two servers?
Discussion
In simulation, statistical (and probability) theory plays a part both in
relation to the input data and in relation to the results that the simulation
produces. For example, in a simulation of the flow of people through
Activity
Consider any system with which you are familiar (e.g. a bank, a shop, a train/metro
station) and list a number of factors which you think might help to increase system
output (e.g. as measured by the number of customers that the system can deal with per
hour). Which of these factors (or combination of factors) would be the best choice to
increase output? Note here that any change might reduce congestion at one point only
to increase it at another point so we have to bear this in mind when investigating any
proposed changes.
Activity
Think of a situation you know of which involves queueing. If you were to build a
simulation model of this situation, what questions might you have to which you
would like answers? What changes to the current situation would you be interested in
exploring?
Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title Anderson (page number)
ATM waiting times at Citibank 452
Ensuring phone access to emergency services 459
Improving productivity at the New Haven fire 481
department
Call centre design 491
Meeting demand levels at Pfizer 504
Petroleum distribution in the Gulf of Mexico 509
Preboard screening at Vancouver International 520
Airport
Mount Isa mines
www.cmis.csiro.au/or/Clients/mim.htm
Prison management
www.cmis.csiro.au/OR/clients/mrrc.htm
Roadside services
www.cmis.csiro.au/or/Clients/racv.htm
Appendix 1: Sample examination paper
To company
Xpc Ytab Zbest
Xpc 0.61 0.04 0.35
From company Ytab 0.26 0.07 0.67
Zbest 0.51 0.17 0.32
Each 1% increase in the long-run market share for any company
is estimated to be worth £50,000. On this basis what would Zbest
gain by engaging the advertising firm? If Zbest do engage the
advertising firm what will be the effect on the long-run market
share for Xpc and Ytab? (12 marks)
Appendix 2: Sample Examiners’ commentary
Important note
Question 2
Reading for this question
SSM is dealt with on pp.35–40 of the subject guide.
Approaching the question
For part (a):
• CATWOE – Customers, Actors, Transformation (or Transformation
process), Worldview (or Weltanschauung), Owner, Environmental
constraints
• Elements of CATWOE more fully explained
• Root definition: a clear indication that it is a statement of the ideal
• Link between CATWOE and root definition (DEDUCE the answer for
CATWOE from the root definition).
For part (b):
Assumptions:
• different individuals and groups make different evaluations of events
and this leads to them taking different actions
• concepts and ideas from systems engineering are useful
• it is necessary when describing any human activity system to take
account of the particular image of the world underlying the description
of the system and it is necessary to be explicit about the assumptions
underlying this image
• it is possible to learn about a system by comparing pure models of
that system with perceptions of what is happening in the real-world
problem situation.
Discussion of the SSM stages:
For part (c)
For the problem:
• a clear statement (in words) of the problem considered, not just a
single phrase
• appropriate application of SSM to the problem:
   root definition (statement of the ideal)
   CATWOE for their root definition
   explicit and CLEAR check of root definition by CATWOE
   application of the stages.
Question 3
Reading for this question
AHP relates to pp.182–90 of the subject guide, MRP to pp.102–06 and
SODA to pp.32–35.
Question 4
Reading for this question
This question relates to Chapter 6 of the subject guide.
Approaching the question
Applying the standard Markov approach for the long-run prediction, we
need to find [x1,x2,x3] where
[x1,x2,x3] = [x1,x2,x3](transition matrix) and x1 + x2 + x3 = 1. Expanding
we get:
x1 = 0.57x1 + 0.30x2 + 0.72x3
x2 = 0.04x1 + 0.64x2 + 0.20x3
x3 = 0.39x1 + 0.06x2 + 0.08x3
x1 + x2 + x3 = 1
and solving these equations simultaneously:
x1 = 0.5534
x2 = 0.1990
x3 = 0.2476
For the second part, where the transition matrix changes, we have the
equations:
x1 = 0.61x1 + 0.26x2 + 0.51x3
x2 = 0.04x1 + 0.07x2 + 0.17x3
x3 = 0.35x1 + 0.67x2 + 0.32x3
x1 + x2 + x3 = 1
and solving these equations simultaneously:
x1 = 0.5415
x2 = 0.0906
x3 = 0.3679
Hence Zbest's long-run market share rises from 0.2476 to 0.3679, a gain
of approximately 12.03 percentage points, worth approximately
12.03 × £50,000 ≈ £601,500. If Zbest engage the advertising firm the
long-run share for Xpc falls only slightly (from 0.5534 to 0.5415), while
the long-run share for Ytab falls substantially (from 0.1990 to 0.0906).
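Both steady-state systems can be solved (or checked) numerically. The sketch below is one way of doing so in plain Python, using power iteration (repeatedly applying the transition matrix to a starting distribution until it stops changing); the matrices are those given in the question, while the method choice and variable names are illustrative.

```python
# Steady-state market shares for the two transition matrices in Question 4,
# found by power iteration. Rows are "from" companies Xpc, Ytab, Zbest.

def steady_state(P, tol=1e-12):
    """Return x with x = xP and sum(x) = 1, for a regular Markov chain."""
    n = len(P)
    x = [1.0 / n] * n                  # any starting distribution works
    while True:
        nxt = [sum(x[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(nxt, x)) < tol:
            return nxt
        x = nxt

P_before = [[0.57, 0.04, 0.39],        # original transition matrix
            [0.30, 0.64, 0.06],
            [0.72, 0.20, 0.08]]
P_after = [[0.61, 0.04, 0.35],         # with the advertising firm engaged
           [0.26, 0.07, 0.67],
           [0.51, 0.17, 0.32]]

before = steady_state(P_before)        # approx [0.5534, 0.1990, 0.2476]
after = steady_state(P_after)
gain = (after[2] - before[2]) * 100 * 50_000   # Zbest gain in £
```

Each 1% of long-run share being worth £50,000, the last line prices Zbest's gain directly.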
Question 5
Reading for this question
This question relates to Chapter 10 of the subject guide.
Approaching the question
This is a goal programming question. Let x(A,B,C) (≥0) be the number of
units of product X purchased from A,B,C respectively; y(A,B,C) (≥0) be the
number of units of product Y purchased from A,B,C respectively.
Then the constraints are:
• x(A) + x(B) + x(C) ≥ 2,000
• y(A) + y(B) + y(C) ≥ 2,400
for the demand
• x(A) ≤ 1,200
• x(B) ≤ 1,380
• x(C) ≤ 3,000
• y(A) ≤ 1,345
• y(B) ≤ 1,500
• y(C) ≤ 2,000
for the availability
For the supplier B goal we have the constraint:
• x(B) = 1,000 + a+ – b-
where a+,b- ≥0 are the upward and downward deviation from this goal.
For the supplier C goal we have the constraint:
• 3.9x(C) + 5.2y(C) = 8,000 + c+ – d-
where c+,d- ≥0 are the upward and downward deviation from this goal.
For the total expenditure goal we have the constraint:
• 4.1x(A) + 4.2x(B) + 3.9x(C) + 4.5y(A) + 4.8y(B) + 5.2y(C) =
20,000 + e+ – f-
where e+,f- ≥0 are the upward and downward deviation from this goal.
The objective here is to minimise a weighted sum of deviation variables.
Take 1% of the goal values:
• Supplier B 1,000/100 = 10
• Supplier C 8,000/100 = 80
• Total expenditure 20,000/100 = 200
minimise
w1(a+/10) + w2(b-/10) + w3(c+/80) + w4(d-/80) + w5(e+/200) + w6(f-/200)
where the ‘w’s are the weights (numeric values) for the deviations.
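To make the deviation variables concrete, they can be evaluated for any candidate purchasing plan, as in the sketch below. The prices and goal values are those given in the question; the plan itself is illustrative only (it satisfies the demand and availability constraints but is not claimed to be optimal).

```python
# Evaluating the three goal deviations for one candidate purchasing plan.

PRICE = {('x', 'A'): 4.1, ('x', 'B'): 4.2, ('x', 'C'): 3.9,
         ('y', 'A'): 4.5, ('y', 'B'): 4.8, ('y', 'C'): 5.2}

def deviations(plan):
    """Return (upward, downward) deviation pairs for the supplier B,
    supplier C and total expenditure goals, for a plan keyed by
    (product, supplier)."""
    def split(actual, goal):
        return (max(actual - goal, 0.0), max(goal - actual, 0.0))
    b_units = plan[('x', 'B')]                                   # goal: 1,000 units
    c_spend = sum(PRICE[k] * plan[k] for k in plan if k[1] == 'C')   # goal: 8,000
    total = sum(PRICE[k] * plan[k] for k in plan)                # goal: 20,000
    return split(b_units, 1000), split(c_spend, 8000), split(total, 20000)

# One feasible, purely illustrative plan meeting the demand constraints:
plan = {('x', 'A'): 1200, ('x', 'B'): 800, ('x', 'C'): 0,
        ('y', 'A'): 1345, ('y', 'B'): 1055, ('y', 'C'): 0}
(a_up, b_down), (c_up, d_down), (e_up, f_down) = deviations(plan)
```

The three pairs correspond to (a+, b-), (c+, d-) and (e+, f-); dividing them by 10, 80 and 200 and applying the chosen weights scores the plan under the weighted objective.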
Sequential goal program:
Priority level 1: minimise the upward deviation from the supplier B
goal
minimise a+
subject to the eight inequality constraints and three goal-related
equality constraints seen above.
Priority level 2: minimise the downward deviation from the supplier C
goal
minimise d-
subject to the eight inequality constraints and three goal-related
equality constraints seen above and a+ = A*
where A* is the value that a+ takes in the solution at priority level 1.
The subjective weights are the ‘w’s.
Setting initial values for these can only be a matter of managerial
judgement (see p.163 of the subject guide).
Revision of these weight values may occur as we see the numeric
solution from the goal program and seek to shape that to a
solution with which we are more comfortable.
Question 6
Reading for this question
This question relates to Chapter 9 of the subject guide.
Approaching the question
Appropriate ratios are:
• voice calls (‘000) per employee
• emails (‘000) per employee
giving:
Branch   Voice calls (‘000) per employee   Emails (‘000) per employee
A        0.2113                            0.0622
B        0.1214                            0.0340
C        0.1638                            0.0533
D        0.1290                            0.0829
E        0.1687                            0.0973
F        0.4017                            0.1500
G        0.1563                            0.2150
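A natural first step with these ratios is a simple dominance check: a branch is dominated if some other branch matches or beats it on both ratios. The sketch below is illustrative and is not the full efficiency analysis of Chapter 9; the ratio values are those in the table above.

```python
# Dominance check on the per-employee output ratios for branches A-G.

ratios = {   # branch: (voice calls '000 per employee, emails '000 per employee)
    'A': (0.2113, 0.0622), 'B': (0.1214, 0.0340), 'C': (0.1638, 0.0533),
    'D': (0.1290, 0.0829), 'E': (0.1687, 0.0973), 'F': (0.4017, 0.1500),
    'G': (0.1563, 0.2150),
}

def undominated(ratios):
    """Branches that no other branch matches or beats on both ratios."""
    return sorted(
        b for b, (v, e) in ratios.items()
        if not any(v2 >= v and e2 >= e
                   for b2, (v2, e2) in ratios.items() if b2 != b)
    )
```

Here only F (best on voice calls) and G (best on emails) survive the check; every other branch is dominated by F.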
Question 7
Reading for this question
This question relates to Chapter 5 of the subject guide.
Approaching the question
The equation to use is CP = Cunder/(Cunder + Cover), as shown on p.100 of the
subject guide.
Here Cunder = the cost of underestimating demand by one unit = the lost
profit = 8.50 – 6.60 = 1.90
Cover = the cost of overestimating demand by one unit = 6.60 – 1.10 =
5.50.
Hence CP = 1.90/(1.90 + 5.50) = 0.257.
When we have the Normal distribution with mean 250 and variance 40,
the quantity to order (x) is given by (x – 250)/√40 = the value from the
N(0,1) distribution with 0.257 of the probability to the left of it. Here the
value from the tables is –0.65 (approximately).
So (x – 250)/√40 = –0.65
x = 245.89, say 246 copies
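For readers who prefer to check this without Normal tables, the calculation can be reproduced with the standard library's `statistics.NormalDist` (Python 3.8+); the figures are those used above.

```python
# Newsvendor (critical probability) calculation for the 246-copy order.
from math import sqrt
from statistics import NormalDist

c_under = 8.50 - 6.60            # lost profit per unit of underestimated demand
c_over = 6.60 - 1.10             # loss per unit of overestimated demand
cp = c_under / (c_under + c_over)            # critical probability, approx 0.257

demand = NormalDist(mu=250, sigma=sqrt(40))  # mean 250, variance 40
order = demand.inv_cdf(cp)                   # x with P(demand <= x) = cp
```

This gives x ≈ 245.9, i.e. 246 copies, matching the table-based answer.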
We need an EOQ calculation. The EOQ formula is EOQ = √(2Rco/ch),
where:
R = 420 × 12 = 5,040 and co = 50
ch = 0.12(110) = 13.2 per year
so EOQ = √(2 × 5,040 × 50/13.2)
= 195.4 say 195
This falls in the range for which we do pay the basic price of £110, as we
assumed in the calculation.
Using EOQ = 195 then cost per year is RP + (chQ/2) + (coR/Q)
= 5,040 × 110 + (13.2 × 195/2) + (50 × 5,040/195)
= 556,979
With the quantity discount the new purchase cost is 95 and the new
holding cost is 0.12(95) = 11.4.
The new EOQ is √(2 × 5,040 × 50/11.4) = 210.3, say 210.
This is feasible with respect to the minimum order quantity of 200, so
there is no need to cost ordering exactly 200.
The cost per year would be RP + (chQ/2) + (coR/Q)
= 5,040 × 95 + (11.4 × 210/2) + (50 × 5,040/210)
= 481,197
Hence ordering with the discount is cheaper and that should be the policy
adopted.
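The two costings above can be reproduced with a short calculation; the sketch below follows the commentary's figures, rounding each EOQ to a whole number of units before costing.

```python
# EOQ comparison with and without the quantity discount.
from math import sqrt

R, co = 420 * 12, 50                     # annual demand, cost per order

def holding(price):
    return 0.12 * price                  # holding cost per unit per year

def annual_cost(price, q):
    return R * price + holding(price) * q / 2 + co * R / q

q_basic = round(sqrt(2 * R * co / holding(110)))   # 195
q_disc = round(sqrt(2 * R * co / holding(95)))     # 210
cost_basic = annual_cost(110, q_basic)             # approx 556,979
cost_disc = annual_cost(95, q_disc)                # approx 481,197
```

Since cost_disc < cost_basic, ordering with the discount is the cheaper policy, as concluded above.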
We need an EBQ calculation. EBQ = √(2 × Ac × cs/(ch(1 – r)))
Where:
Ac = 500,000 per year
Ap = 85,000 × 12 = 1,020,000 per year (as the question uses a monthly
value of 85,000)
cs = 30,000
ch = 0.005(7.5 + 0.35 + 1.75) = 0.048 per year
r = Ac/Ap =500,000/1,020,000 = 0.4902
Hence EBQ = 1,107,236
So we should make 1,107,236 items on the machine at a time. This will be
enough to supply demand for 1,107,236/Ac = 1,107,236/500,000 = 2.21
years, so we effectively agree with my colleague.
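The EBQ figure can be checked with the short script below; any difference in the last few digits against the figure above is only rounding of intermediate values.

```python
# EBQ check using annual figures, as in the commentary.
from math import sqrt

Ac = 500_000                       # annual demand
Ap = 85_000 * 12                   # annual production capability
cs = 30_000                        # setup cost
ch = 0.005 * (7.5 + 0.35 + 1.75)   # holding cost per unit per year = 0.048
r = Ac / Ap                        # demand rate / production rate

ebq = sqrt(2 * Ac * cs / (ch * (1 - r)))   # approx 1,107,232
cycle_years = ebq / Ac                     # approx 2.21 years between set-ups
```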
Question 8
Reading for this question
This question relates to Chapter 3 of the subject guide.
Approaching the question
The network diagram is shown below:
Details are:
Activity   Earliest Start   Latest Start   Float
A          3                10             7
B          0                2              2
C          8                8              0
D          0                0              0
E          6                13             7
F          10               10             0
X1         4                4              0
X2         3                5              2
• Completion time 14 days
• Critical path is D(4)→X1(4)→C(2)→F(4)
Activity Latest start time (days)
A 10
B 2
C 8
D 0
E 13
F 10
As the critical path involves X1 this needs to be changed to reduce the
project completion time (at a cost of £400).
When we do this the new project completion time is 12 days (details as
below).
Activity   Earliest Start   Latest Start   Float
A          3                8              5
B          0                0              0
C          6                6              0
D          0                0              0
E          6                11             5
F          8                8              0
X1         4                4              0
X2         3                3              0
Here all activities except A and E are critical. As the previous critical path
D→X1→C→F is still a critical path of duration 12 days there is no point in
spending any further money on X2 since reducing the completion time for
that cannot reduce the project completion time.
Hence the minimum possible project completion time is 12 days and this
can be achieved at a cost of £400.
We have:
Activity Latest start time (days)
A 8
B 0
C 6
D 0
E 11
F 8
The critical paths are:
D(4)→X1(2)→C(2)→F(4)
and
B(3)→X2(3)→C(2)→F(4)
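The start times and floats above can be reproduced by a short forward/backward pass. Since the network diagram itself is not reproduced in this guide, the precedence links and the durations of A and E below are inferred from the start-time tables and should be treated as assumptions.

```python
# Critical path calculation for Question 8 (precedences inferred, see above).

DUR = {'A': 4, 'B': 3, 'C': 2, 'D': 4, 'E': 1, 'F': 4, 'X1': 4, 'X2': 3}
PRED = {'A': ['B'], 'B': [], 'C': ['X1', 'X2'], 'D': [], 'E': ['X2'],
        'F': ['C'], 'X1': ['D'], 'X2': ['B']}

def cpm(dur, pred):
    """Earliest/latest start times via forward and backward passes."""
    order, done = [], set()
    while len(order) < len(dur):           # simple topological ordering
        for a in dur:
            if a not in done and all(p in done for p in pred[a]):
                order.append(a)
                done.add(a)
    es = {}
    for a in order:                        # forward pass
        es[a] = max((es[p] + dur[p] for p in pred[a]), default=0)
    finish = max(es[a] + dur[a] for a in dur)
    succ = {a: [b for b in dur if a in pred[b]] for a in dur}
    ls = {}
    for a in reversed(order):              # backward pass
        ls[a] = min((ls[b] for b in succ[a]), default=finish) - dur[a]
    return es, ls, finish

es, ls, finish = cpm(DUR, PRED)                       # finish == 14
critical = sorted(a for a in DUR if es[a] == ls[a])   # D, X1, C, F
es2, ls2, finish2 = cpm(dict(DUR, X1=2), PRED)        # X1 reduced: finish == 12
```

The last line re-runs the pass with X1 reduced to 2 days, reproducing the 12-day completion time found after spending the £400.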