
Management science methods

J.E. Beasley
MN3032
2014

Undergraduate study in
Economics, Management,
Finance and the Social Sciences

This subject guide is for a 300 course offered as part of the University of London
International Programmes in Economics, Management, Finance and the Social Sciences.
This is equivalent to Level 6 within the Framework for Higher Education Qualifications in
England, Wales and Northern Ireland (FHEQ).
For more information about the University of London International Programmes
undergraduate study in Economics, Management, Finance and the Social Sciences, see:
www.londoninternational.ac.uk
This guide was prepared for the University of London International Programmes by:
J.E. Beasley, Professor, Brunel University, London.
Acknowledgements to Ms R. Beasley for her diagrams.
This is one of a series of subject guides published by the University. We regret that due to
pressure of work the author is unable to enter into any correspondence relating to, or arising
from, the guide. If you have any comments on this subject guide, favourable or unfavourable,
please use the form at the back of this guide.

University of London International Programmes


Publications Office
Stewart House
32 Russell Square
London WC1B 5DN
United Kingdom
www.londoninternational.ac.uk

Published by: University of London


© University of London 2014
Reprinted with minor revisions 2015
The University of London asserts copyright over all material in this subject guide except where
otherwise indicated. All rights reserved. No part of this work may be reproduced in any form,
or by any means, without permission in writing from the publisher.
We make every effort to respect copyright. If you think we have inadvertently used your
copyright material, please let us know.
Contents

Introduction............................................................................................................. 1
Terminology................................................................................................................... 1
Aims.............................................................................................................................. 2
Learning outcomes......................................................................................................... 3
Syllabus.......................................................................................................................... 3
Reading advice............................................................................................................... 4
Online study resources.................................................................................................... 4
Recommended study time............................................................................................... 5
How to use the resources for this course......................................................................... 6
Examination advice........................................................................................................ 7
Using software for the course......................................................................................... 7
Excel spreadsheets for the course.................................................................................... 8
Concluding remarks........................................................................................................ 8
List of abbreviations used in this subject guide................................................................ 9
Chapter 1: Methodology....................................................................................... 11
Essential reading.......................................................................................................... 11
Aims of the chapter...................................................................................................... 11
Learning outcomes....................................................................................................... 11
Introduction................................................................................................................. 11
Evolution...................................................................................................................... 11
Two Mines company..................................................................................................... 13
Discussion.................................................................................................................... 16
Philosophy................................................................................................................... 17
Certainty versus uncertainty.......................................................................................... 17
Phases of an OR project................................................................................................ 18
Methodological issues.................................................................................................. 20
Benefits........................................................................................................................ 25
Links to other chapters................................................................................................. 26
Case studies................................................................................................................. 26
A reminder of your learning outcomes........................................................................... 27
Sample examination questions...................................................................................... 27
Chapter 2: Problem structuring and problem structuring methods...................... 29
Essential reading.......................................................................................................... 29
Aims of the chapter...................................................................................................... 29
Learning outcomes....................................................................................................... 29
Introduction................................................................................................................. 29
Problem structuring methods........................................................................................ 30
Strategic options development and analysis (SODA) and
JOURNEY (JOintly Understanding, Reflecting, and NEgotiating strategY) Making .......... 32
Soft systems methodology (SSM).................................................................................. 35
Strategic choice (SC)..................................................................................................... 40
Choosing and applying PSMs........................................................................................ 45
Education..................................................................................................................... 45
Links to other chapters................................................................................................. 45
Case studies................................................................................................................. 46

A reminder of your learning outcomes........................................................... 46
Sample examination questions...................................................................... 46
Chapter 3: Network analysis................................................................................. 47
Essential reading.......................................................................................................... 47
Spreadsheet................................................................................................................. 47
Aims of the chapter...................................................................................................... 47
Learning outcomes....................................................................................................... 47
Introduction................................................................................................................. 47
Historical background................................................................................................... 48
Network construction................................................................................................... 51
Delay activities............................................................................................................. 57
Network analysis – extensions...................................................................................... 58
Network analysis – benefits.......................................................................................... 65
Network analysis – state of the art............................................................................... 65
Links to other chapters................................................................................................. 65
Case studies................................................................................................................. 66
A reminder of your learning outcomes........................................................................... 66
Sample examination questions...................................................................................... 66
Chapter 4: Decision making under uncertainty..................................................... 67
Essential reading.......................................................................................................... 67
Spreadsheet................................................................................................................. 67
Aims of the chapter...................................................................................................... 67
Learning outcomes....................................................................................................... 67
Introduction................................................................................................................. 67
Pay-off table example................................................................................................... 68
Decision tree example................................................................................................... 72
Sensitivity analysis........................................................................................................ 78
Expected value of perfect information........................................................................... 79
Simulation.................................................................................................................... 80
Extensions.................................................................................................................... 81
Links to other chapters................................................................................................. 84
Case studies................................................................................................................. 85
A reminder of your learning outcomes........................................................................... 85
Sample examination questions...................................................................................... 85
Chapter 5: Inventory control................................................................................. 87
Essential reading.......................................................................................................... 87
Spreadsheet................................................................................................................. 87
Aims of the chapter...................................................................................................... 87
Learning outcomes....................................................................................................... 87
Introduction................................................................................................................. 87
Reasons for holding stock............................................................................................. 88
Basics.......................................................................................................................... 89
Basic model.................................................................................................................. 91
Extensions.................................................................................................................... 94
Quantity discounts........................................................................................................ 96
In-house production or batch production ...................................................................... 98
Probabilistic demand.................................................................................................. 100
Materials requirements planning (MRP)...................................................................... 102
Just-in-time (JIT)......................................................................................................... 106
Optimised production technology (OPT)...................................................................... 108

Supply chain management (SCM)................................................................ 108
Links to other chapters............................................................................................... 109
Case studies............................................................................................................... 109
A reminder of your learning outcomes......................................................................... 109
Sample examination questions.................................................................................... 109
Table for the standard Normal distribution.................................................................. 110
Chapter 6: Markov processes.............................................................................. 111
Essential reading........................................................................................................ 111
Spreadsheet............................................................................................................... 111
Aims of the chapter.................................................................................................... 111
Learning outcomes..................................................................................................... 111
Introduction............................................................................................................... 111
Solution procedure..................................................................................................... 112
Long-run.................................................................................................................... 116
Three states................................................................................................................ 117
Estimating the transition matrix.................................................................................. 121
Comment................................................................................................................... 123
Applications............................................................................................................... 124
Links to other chapters............................................................................................... 124
Case studies............................................................................................................... 124
A reminder of your learning outcomes......................................................................... 124
Sample examination questions.................................................................................... 124
Chapter 7: Mathematical programming – formulation....................................... 125
Essential reading........................................................................................................ 125
Aims of the chapter.................................................................................................... 125
Learning outcomes..................................................................................................... 125
Introduction............................................................................................................... 125
Overview.................................................................................................................... 125
Blending problem....................................................................................................... 126
Production planning problem...................................................................................... 127
Factory planning problem........................................................................................... 129
Integer programming.................................................................................................. 131
Links to other chapters............................................................................................... 134
Case studies............................................................................................................... 134
A reminder of your learning outcomes......................................................................... 135
Sample examination questions.................................................................................... 135
Chapter 8: Linear programming – solutions........................................................ 137
Essential reading........................................................................................................ 137
Spreadsheet............................................................................................................... 137
Aims of the chapter.................................................................................................... 137
Learning outcomes..................................................................................................... 137
Introduction............................................................................................................... 137
Graphical solution for two variable LPs........................................................................ 138
Simplex...................................................................................................................... 140
Production planning problem...................................................................................... 140
Excel solution............................................................................................................. 141
Problem sensitivity...................................................................................................... 144
Mathematical programming – further considerations................................................... 150
Links to other chapters............................................................................................... 153
Case studies............................................................................................................... 153

A reminder of your learning outcomes......................................................... 153
Sample examination questions.................................................................... 153
Chapter 9: Data envelopment analysis............................................................... 155
Essential reading........................................................................................................ 155
Spreadsheet............................................................................................................... 155
Aims of the chapter.................................................................................................... 155
Learning outcomes..................................................................................................... 155
Introduction............................................................................................................... 155
Ratios........................................................................................................................ 156
Extending the example............................................................................................... 157
Graphical analysis....................................................................................................... 159
Quantifying efficiency scores for inefficient DMUs........................................................ 160
Achieving the efficient frontier.................................................................................... 162
Use of the efficiencies................................................................................................. 162
Recap......................................................................................................................... 164
Input measures and output measures.......................................................................... 164
Extending to more inputs/outputs............................................................................... 165
Excel.......................................................................................................................... 166
Linear program........................................................................................................... 167
Value judgements....................................................................................................... 168
Starting a DEA study................................................................................................... 169
Links to other chapters............................................................................................... 170
Case studies............................................................................................................... 170
A reminder of your learning outcomes......................................................................... 170
Sample examination questions.................................................................................... 171
Chapter 10: Multicriteria decision making.......................................................... 173
Essential reading........................................................................................................ 173
Spreadsheet............................................................................................................... 173
Aims of the chapter.................................................................................................... 173
Learning outcomes..................................................................................................... 173
Introduction............................................................................................................... 174
Goal programming (GP).............................................................................................. 174
Goal programming formulation................................................................................... 175
Weighted approach.................................................................................................... 177
Priority approach........................................................................................................ 181
Analytic hierarchy process (AHP)................................................................................. 184
Criticisms of AHP........................................................................................................ 191
Other approaches....................................................................................................... 192
Links to other chapters............................................................................................... 194
Case studies............................................................................................................... 194
A reminder of your learning outcomes......................................................................... 194
Sample examination questions.................................................................................... 194
Chapter 11: Queueing theory and simulation..................................................... 195
Essential reading........................................................................................................ 195
Spreadsheet............................................................................................................... 195
Aims of the chapter.................................................................................................... 195
Learning outcomes..................................................................................................... 195
Introduction............................................................................................................... 195
Queueing theory......................................................................................................... 196
Queueing notation .................................................................................................... 199

Simple M/M/1 example............................................................... 199
Simulation.................................................................. 202
Links to other chapters............................................................................................... 208
Case studies............................................................................................................... 208
A reminder of your learning outcomes......................................................................... 209
Sample examination questions.................................................................................... 209
Appendix 1: Sample examination paper............................................................. 211
Appendix 2: Sample Examiners’ commentary..................................................... 219
Important note........................................................................................................... 219
Comments on specific questions................................................................................. 219

Introduction

Welcome to the subject guide for MN3032 Management science methods. We hope that you enjoy the subject and benefit from it, not
just in terms of a good performance in your examination, but also in terms
of knowledge and analytical skills that you can use at some point in your
future career.
Management, in the modern world, is becoming increasingly complex.
Problems are becoming more difficult to solve and the timescales available
to solve them are becoming shorter. For many problems, data (numbers)
are available and these data need to be properly considered and analysed
to help in solving such problems.
While, for the gifted few, problem-solving may be an easy and natural
process, we believe that the majority of managers benefit from a
structured approach to problem-solving.
Management science can be defined as the application of scientific and
systematic procedures, techniques and tools to operational, strategic
and policy problems in order to help develop and evaluate solutions to
problems encountered within management. Management science includes
all rational approaches to management decision-making that are based on
an application of scientific and systematic methodologies.
Both management science as a discipline, and management scientists
as individuals with specialised training, aid management by providing a
structure to decision-making situations whose complexity, and/or level of
uncertainty, makes intuition an unsafe guide. The distinctive feature of the
management science approach is the construction of an explicit, simplified
model of relevant aspects of the situation under study. Such models are
often based on mathematical or statistical formulations, but may at other
times have a more qualitative character.
There are a number of areas where specific structured techniques to aid in
decision-making have been developed. Such techniques have proved their
worth historically and have made a quantitative difference to the way in
which organisations work and to the quality of their decision-making. You
will be introduced to some of these techniques in this subject guide.
Above all, what you need to gain from this subject guide is the knowledge
that there are techniques available to help you solve management
problems in a structured and logical way.

Terminology
Management science can also be described by the alternative name
of operational research (OR). Some of you may have met the terms
management science (MS), operational/operations research (OR), and
OR/MS before. Often some, or all, of these terms are used interchangeably.
In this subject guide we use the term OR throughout.
Analytics (or data analytics) is another term that deals with the same
issues as we consider in OR – we have data; we have decisions to be made;
how can we analyse the data to help us make an appropriate decision?


Analytics is often linked to ‘big data’. Modern organisations are accumulating, perhaps minute by minute, increasing amounts of data
in their interactions with their customers. The idea is that if (somehow)
these data could be analysed, the organisation would be able to make
better decisions, for example with regard to its product/service offerings to
customers.
Here the volume of data (so ‘big data’) precludes any analysis by a single
individual or team of people. To take just one example, if you were to be
personally presented with information on the interactions that Facebook
has with its billions of users in just a single day do you think that you
could sensibly analyse it? Of course not: the sheer volume of such
information would be overwhelming. Hence structured approaches to
analysing data are needed.
The Institute for Operations Research and the Management Sciences
(INFORMS, www.informs.org), a USA-based academic/professional society
that deals with analytics, identifies the following approaches:
• descriptive analytics
preparing and analysing historical data
identifying patterns from samples for reporting of trends
• predictive analytics
predicting future probabilities and trends
finding relationships in data that may not be readily apparent with
descriptive analysis
• prescriptive analytics
evaluating and determining new ways to operate
targeting business objectives
balancing all constraints.
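To make the distinction between the three categories concrete, they can be sketched in a few lines of Python on an invented daily-sales series. This is purely an illustrative sketch, not material from the course: the data, the costs and the simple trend model are all made up for the example.

```python
# Invented daily-sales data for illustration only.
sales = [12, 15, 11, 14, 16, 18, 17, 19]  # hypothetical units sold per day
n = len(sales)

# Descriptive analytics: summarise the historical data.
mean_sales = sum(sales) / n

# Predictive analytics: fit a simple least-squares trend line and
# project sales for the next (so far unobserved) day.
xs = range(n)
x_mean = sum(xs) / n
slope = (sum((x - x_mean) * (y - mean_sales) for x, y in zip(xs, sales))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = mean_sales - slope * x_mean
forecast = intercept + slope * n  # predicted sales for day n

# Prescriptive analytics: recommend an order quantity, balancing an
# (invented) holding cost against a higher cost of lost sales.
holding_cost, shortage_cost = 1.0, 4.0

def expected_cost(q):
    return (holding_cost * max(q - forecast, 0)
            + shortage_cost * max(forecast - q, 0))

best_order = min(range(30), key=expected_cost)

print(round(mean_sales, 2), round(forecast, 2), best_order)
# prints: 15.25 19.64 20
```

The progression mirrors the list above: the mean merely reports on the past, the trend line predicts the future, and the cost comparison prescribes an action.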
A number of the topics you encounter in this subject guide fall under the
analytics banner. One thing to note, however, is that this subject guide
exists in the educational world, not the real world. In the educational
world, by its very nature, any examples considered need to be ‘small data’
examples.
However, although the quantitative/analytic techniques and tools you see
below will be illustrated on small data, many of them are capable of scaling
to deal with big data.

Aims
The aims and objectives of this course are to:
• enable you to see that many managerial decision-making situations
can be addressed using standard techniques and problem structuring
methods
• provide a comprehensive and concise introduction to the key
techniques and problem structuring methods used within management
science that are directly relevant to the managerial context
• enable you to see both the benefits, and limitations, of the techniques
and problem structuring methods presented.


Learning outcomes
On completion of the course, you should be able to:
• discuss the main techniques and problem structuring methods used
within management science
• critically appraise the strengths and limitations of these techniques and
problem structuring methods
• carry out simple exercises using such techniques and problem
structuring methods yourself (or explain how they should be done)
• commission more advanced exercises.

Syllabus
The topics dealt with in this course (in chapter order) are:
Problem structuring and problem structuring methods: problem
structuring methods such as JOURNEY (JOintly Understanding, Reflecting,
and NEgotiating strategY) making, Soft Systems Methodology and Strategic
Choice.
Network analysis: planning and control of projects via the critical path;
float (slack) times, cost/time trade-off, uncertain activity completion times
and resource considerations.
Decision making under uncertainty: approaches to decision
problems where chance (probability) plays a key role; pay-off tables;
decision trees; utilities and expected value of perfect information.
Inventory control: problems that arise in the management of inventory
(stock); Economic Order Quantity, Economic Batch Quantity, quantity
discounts, probabilistic demand, Materials Requirements Planning, Just-in-
Time, Optimised Production Technology and supply chain issues.
Markov processes: approaches used in modelling situations that evolve
in a stochastic (probabilistic) fashion though time; systems involving both
non-absorbing and absorbing states.
Mathematical programming formulation: the representation of
decision problems using linear models with a single objective which is to be
optimised; the formulation of both linear programs and integer programs.
Linear programming solutions: the solution of linear programs; the
numeric solution of two variable linear programs, sensitivity analysis and
robustness.
Data envelopment analysis: assessing the relative efficiency of
decision-making units in organisations; input/output definitions, basic
efficiency calculations, reference sets, target setting and value judgements.
Multicriteria decision making: approaches to decision problems that
involve multiple objectives; analytic hierarchy process which considers the
problem of making a choice, in the presence of complete information, from
a finite set of discrete alternatives; goal programming which considers, via
linear programming, multicriteria decision problems where the constraints
are ‘soft’.
Queueing theory and simulation: the representation and analysis
of complex stochastic systems where queueing is a common occurrence;
M/M/1 queue; discrete event simulation.


Reading advice
Essential reading
There are two texts associated with the readings given at various points
throughout this subject guide. These are:
Anderson, D.R., D.J. Sweeney, T.A. Williams and M. Wisniewski An introduction
to management science: quantitative approaches to decision making
(Andover: Cengage Learning EMEA, 2014) second edition
[ISBN 9781408088401].
Rosenhead J. and J. Mingers (eds) Rational analysis for a problematic world
revisited: problem structuring methods for complexity, uncertainty and
conflict. (Chichester: John Wiley, 2001) second edition
[ISBN 9780471495239].
Both of these books are recommended for purchase/reference and will be
listed as ‘Anderson’ and ‘Rosenhead’ respectively throughout this guide.
Anderson is available to purchase as separate chapters from:
http://edu.cengage.co.uk/catalogue/product.aspx?isbn=1408088401
Detailed reading references in this subject guide refer to the editions of the
set textbooks listed above. New editions of one or more of these textbooks
may have been published by the time you study this course. You can use
a more recent edition of any of the books; use the detailed chapter and
section headings and the index to identify relevant readings. Also check
the VLE regularly for updated guidance on readings.

Case studies
In each chapter we have listed a number of case studies. You will see
that these are often quite short. The idea here is to expose you to a
number of different practical situations where the tools and techniques
which you have studied have been applied. We would encourage you to read
these case studies to see the range of areas in which the topics presented
in this subject guide have been utilised.

Online study resources


In addition to the subject guide and the Essential reading, it is crucial that
you take advantage of the study resources that are available online for this
course, including the virtual learning environment (VLE) and the Online
Library.
You can access the VLE, the Online Library and your University of London
email account via the Student Portal at:
http://my.londoninternational.ac.uk
You should have received your login details for the Student Portal with
your official offer, which was emailed to the address that you gave on
your application form. You have probably already logged in to the Student
Portal in order to register. As soon as you registered, you will automatically
have been granted access to the VLE, Online Library and your fully
functional University of London email account.
If you have forgotten these login details, please click on the ‘Forgotten
your password’ link on the login page.

The VLE
The VLE, which complements this subject guide, has been designed to
enhance your learning experience, providing additional support and a
sense of community. It forms an important part of your study experience
with the University of London and you should access it regularly.
The VLE provides a range of resources for EMFSS courses:
• Self-testing activities: Doing these allows you to test your own
understanding of subject material.
• Electronic study materials: The printed materials that you receive from
the University of London are available to download, including updated
reading lists and references.
• Past examination papers and Examiners’ commentaries: These provide
advice on how each examination question might best be answered.
• A student discussion forum: This is an open space for you to discuss
interests and experiences, seek support from your peers, work
collaboratively to solve problems and discuss subject material.
• Videos: There are recorded academic introductions to the subject,
interviews and debates and, for some courses, audio-visual tutorials
and conclusions.
• Recorded lectures: For some courses, where appropriate, the sessions
from previous years’ Study Weekends have been recorded and made
available.
• Study skills: Expert advice on preparing for examinations and
developing your digital literacy skills.
• Feedback forms.
Some of these resources are available for certain courses only, but we
are expanding our provision all the time and you should check the VLE
regularly for updates.

Making use of the Online Library


The Online Library contains a huge array of journal articles and other
resources to help you read widely and extensively.
To access the majority of resources via the Online Library you will either
need to use your University of London Student Portal login details, or you
will be required to register and use an Athens login:
http://tinyurl.com/ollathens
The easiest way to locate relevant content and journal articles in the
Online Library is to use the Summon search engine.
If you are having trouble finding an article listed in a reading list, try
removing any punctuation from the title, such as single quotation marks,
question marks and colons.
For further advice, please see the online help pages:
www.external.shl.lon.ac.uk/summon/about.php

Recommended study time


With regard to the time that you will need to spend studying this
subject, it is obviously difficult to be precise since this will naturally vary
from student to student. Were the material in this subject guide to be
presented as a university course with you sitting in lectures, then as a
rough indication each chapter might require four hours of lectures and
two hours of tutorial time in order to cover the basic concepts. This
is equivalent to a total of six contact hours in the classroom. We would
expect International Programmes students to spend two to three times as
long working independently in order to learn the material and attempting
sample examination questions. This implies a total of 18 to 24 hours
of work per chapter. Any time spent in final revision for examination
assessment would not be included in this total.

How to use the resources for this course


Using the guide
This subject guide is designed to:
• introduce you to the material you are expected to learn
• direct you towards appropriate further reading.
It is not designed to be a self-contained learning text. In other words,
you will need to consult the essential readings to learn the material
presented in this subject guide. In addition, those readings contain many
more worked examples, as well as self-test problems, than are presented
here. Study of those is expected in order for you to master the material
presented in this subject guide.
This subject guide is intended to be used in such a way that you progress
through it from Chapter 1 to Chapter 11 in order. Some of you may wish
to study the subjects within this guide in a different order. This is perfectly
acceptable but please note the following:
• Chapter 1 must be studied first
• Chapter 7 must be studied before Chapter 8
• Chapters 7 and 8 must be studied before Chapter 9
• Chapters 7 and 8 must be studied before Chapter 10.
In terms of subject guide content, the chapters can be grouped as follows:
• introduction (Chapter 1): this chapter sets the scene for the subject
guide
• problem structuring (Chapter 2): this chapter introduces methods
that can be used when the problem to be faced is essentially
qualitative in nature and when a more quantitative (analytic)
approach would be inappropriate
• mathematical programming (Chapters 7, 8 and 10, as it relates to
goal programming): these chapters cover topics related to formulating
a decision problem in mathematics and its subsequent numeric solution
(typically in this subject guide via Excel). Here we only consider
deterministic topics, so chance (probability) plays no role
• decision analysis: here a split can be made between deterministic
topics (Chapters 3, 5 and 9) and stochastic topics (Chapters 4, 6 and
11), where chance (probability) plays a key role.

Using the readings


To help you with the readings, we have stated:
• all the reading associated with that chapter at the start of the
chapter
• the relevant reading for a particular topic at various points within
the chapter.

Working through the activities


Throughout the guide you will find a number of questions designed to
provoke thought/action, typically either encouraging reflection on the

topics raised and their applicability in real life, or asking for a calculation
to check that you can correctly apply a technique. These are labelled
‘Activity’ throughout the text. We would strongly encourage you
to try these activities for yourself.

Examination advice
Important: the information and advice given here are based on the
examination structure used at the time this guide was written. Please
note that subject guides may be used for several years. Because of this
we strongly advise you to always check both the current Regulations
for relevant information about the examination, and the VLE where you
should be advised of any forthcoming changes. You should also carefully
check the rubric/instructions on the paper you actually sit and follow
those instructions.
The examination is three hours long. You will have to answer four
questions (all carrying equal marks) from a choice of eight questions.
It should be noted here that you will be expected to do all the calculations
in the examination by hand and that no computers or software (such as
Excel) will be available to assist you. However, you are permitted to take
an appropriate (basic, non-programmable) calculator into the examination
for this subject. This calculator must comply in all respects with
the specification given in the Regulations.
Remember, it is important to check the VLE for:
• up-to-date information on examination and assessment arrangements
for this course
• where available, past examination papers and Examiners’ commentaries
for the course which give advice on how each question might best be
answered.

Sample examination paper


The Sample examination paper (Appendix 1) should only be attempted
once you have finished the subject guide and the recommended reading.
An accompanying Examiners’ commentary is also supplied.

Using software for the course


It is common nowadays for management science methods to be taught in
conjunction with a simple (and inexpensive) computer package, typically
for the personal computer (PC). This enables you to solve problems
numerically yourself and to explore the effect of changing numbers,
assumptions, etc. There are a number of such software packages, designed
for the student learning environment, which are readily available.
In this subject guide we shall make use of examples solved using Excel, the
Microsoft spreadsheet package. We chose this approach as most students
have access to a PC and to Excel and so making examples available
in Excel would make the subject more accessible than if we restricted
ourselves to a specialised PC package.
Having access to Excel is not mandatory. You will be able to
answer examination questions even if your only contact with Excel has
been through the pages of this subject guide. However, we do believe that
if you have used the Excel spreadsheets associated with this subject guide,
it is much more likely that you will:

• enjoy the subject
• better appreciate its applicability to real-world decision-making.

Excel spreadsheets for the course


The Excel spreadsheets associated with this subject guide can be
accessed and downloaded from the VLE. See the section earlier on in the
Introduction for more information on this.
These Excel spreadsheets are updated on an intermittent basis. If you have
difficulty in accessing/downloading these spreadsheets, please email
uolip.vle@lse.ac.uk. Please do not contact the author of this subject
guide directly.
Throughout this subject guide we have had to assume that the reader has
some familiarity with Excel. One feature of Excel that readers may not
have met, however, is the use of Solver – an Excel tool for solving
linear and non-linear programs. Solver is what is referred to as an Excel
‘add-in’ – it adds to the functionality of Excel.
The installation and use of Solver in Excel depends upon the particular
version of Excel that you use. Rather than attempt to give a myriad of
different instructions here, each dependent on a particular Excel version,
we shall simply say that Solver is a component that comes with Excel
(itself part of Microsoft Office), but not everybody chooses to install Solver
when they install Excel/Office. If you do not have Solver installed then
type ‘Solver’ into the Excel help function and you will find some guidance
on how to install it.
You may wish to check now whether the PC you use has Solver installed as
part of Excel or not. Solver is used in this subject guide in Chapters 6, 8, 9
and 10.
After you have downloaded the Excel spreadsheets associated with this
subject guide from the VLE you will see that certain items within them are
coloured in red using a bold italic font. These are the items that you are
free to change for any examples of your own. All other items (coloured in
black) cannot be changed.
Some of you may go to the lengths of examining the underlying Excel
coding (i.e. the formulae and cell references used) in these spreadsheets.
There is no need to do this. You are not expected to be able to reproduce
these spreadsheets yourself, merely to be able to use those provided to
you as part of this subject guide intelligently and to interpret the numeric
solution values they give you.

Concluding remarks
Even a brief glance at the textbooks associated with this subject guide
reveals that there is much more to OR than we have explicitly considered
here. Inevitably in producing a subject guide of this type some topics have
to be excluded, either because they are less important, or because they are
better taught and appreciated using a more interactive approach involving
PC software and face-to-face tuition.
Nevertheless we believe that there is sufficient material in this subject
guide for you to have gained a clear idea of what OR is about and its value
in improving decision making.
We wish you well in your examination and in applying OR to improve the
quality of decision making in your future career!


List of abbreviations used in this subject guide


AHP analytic hierarchy process
AOA activity on arc
AON activity on node
BOM bill of materials
CATWOE customer, actors, transformation process, worldview,
owner, environmental constraints
CI consistency index
CPM critical path method
DEA data envelopment analysis
DMU decision making unit
EBQ economic batch quantity
EMV expected monetary value
EOQ economic order quantity
FCFS first-come first-served
FIFO first-in, first-out
GP goal program or goal programming
IP integer program or integer programming
JIT just-in-time
JOURNEY JOintly Understanding, Reflecting, and NEgotiating
strategY
LIFO last-in, first-out
LP linear program or linear programming
MAUA multi-attribute utility analysis
MAVA multi-attribute value analysis
MCDA multicriteria decision analysis
MIP mixed-integer program or mixed-integer programming
MRP materials requirements planning
MS management science
OPT optimised production technology
OR operational/operations research
PERT program evaluation and review technique
PSM problem structuring methods
RD root definition
SC strategic choice
SCM supply chain management
SODA strategic options development and analysis
SSM soft systems methodology
UE uncertainty about the working environment
UR uncertainty about related decision fields
UV uncertainty about guiding values


Chapter 1: Methodology

Essential reading
Anderson, Chapter 1, section start–1.4.

Aims of the chapter


The aims of this chapter are to:
• give a brief introduction to the evolution of OR
• illustrate OR by considering a very simple decision problem
• illustrate translating an imprecise verbal description of a problem into
a precise mathematical description
• discuss some methodological issues that arise in OR work.

Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• explain the philosophy underlying the reasons for mathematical
modelling of problems
• describe the phases of an OR project
• explain the philosophy underlying OR
• explain how OR is carried out (i.e. the client/consultant role)
• discuss consultancy, cost versus decision quality, optimisation and
implementation in the context of OR work
• discuss the benefits of an OR approach to decision problems.

Introduction
We hope in this chapter to illustrate to you that decision-making situations
can be transformed from a (perhaps imprecise) verbal description to
a precise mathematical description. This transformation, although
involving the use of mathematics, does not usually demand a high level
of mathematical skill. The chapter begins with a brief introduction to
the evolution of OR. We then actually do some OR by considering a very
simple decision problem. We highlight some general lessons and concepts
from this specific example. We then discuss some methodological issues
that arise in OR work.
I would like to emphasise here that OR is (in my view)
a subject/discipline that has much to offer in making a
difference in the real world. OR can help you to make better
decisions and it is clear that there are many, many people and
companies out there who need to make better decisions.

Evolution
OR is a relatively new discipline. Whereas in 1930 it would have been
possible to study mathematics, physics or engineering (for example) at
university, it would not have been possible to study OR; indeed, the term
OR did not exist then. It started in the UK as an organised form of research

just before the outbreak of the Second World War in 1939. Scientists were
attempting to make operational use of radar data (radar only just having
been developed) for the air defence of the UK. The term ‘operational
research’ (RESEARCH into (military) OPERATIONS) was coined as a
suitable description for this new branch of applied science.
During the Second World War, OR developed both in the UK and in the
USA and was used in many different situations to help determine effective
operational methods (e.g. how large convoys carrying food and other
supplies across the Atlantic should be organised to minimise the number of
ships lost). By the end of the war in 1945 OR was well established in the
armed services in both the UK and the USA.
Although scientists had (plainly) been involved in the hardware side of
warfare (designing better planes, bombs, tanks, etc), scientific analysis
of the operational use of military resources had never taken place in a
systematic fashion before the Second World War. Military personnel were
simply not trained to undertake such analysis.
These early OR workers came from many different disciplines; one UK
group consisted of a physicist, two physiologists, two mathematical
physicists and a surveyor. What such people brought to their work
were ‘scientifically trained’ minds, used to querying assumptions, logic,
exploring hypotheses, devising experiments, collecting data, analysing
numbers, etc. Many too were of high intellectual calibre (at least four UK
wartime OR personnel were later to win Nobel prizes when they returned
to their peacetime disciplines).
Features of this early OR work were:
• the scientific basis of the work and of the people involved in doing it
• work was carried out by a team of individuals, that team often being
made up of individuals from different scientific disciplines
• work was organised into projects (specific pieces of work with explicit
terms of reference to be completed in a set time)
• the relationship between the OR worker (or team) and the decision
maker, where the OR worker/team carried out the project but the
decision maker implemented any solution and bore responsibility for
its success or failure
• the use of data collection to develop an understanding of the problem
under investigation
• the need for OR workers to work with all ranks (both junior and
senior) within the organisation.
Many of these features are still present in current OR work. One feature
that has (inevitably) decayed over time, however, is that as the subject
knowledge base of OR has expanded, present-day OR teams typically
do not include individuals from different scientific disciplines. Instead a
team might well contain individuals who have received some specialised
university level education (at undergraduate or Masters level) in OR.
In 1945, following the end of the war, OR took a different course in the
UK to that in the USA. In the UK many of the OR workers returned to
their original peacetime academic disciplines. As such, OR did not spread
particularly well, except for a few isolated industries (iron/steel and coal).
In the USA, OR spread to the universities so that systematic training in OR
for future workers began. Nowadays of course OR can be found worldwide.


It is perhaps worth stating here that activities that would, in a modern light,
be viewed as OR had occurred before the 1930s. For example the Economic
Order Quantity formula (dealt with in Chapter 5) which helps decide how
much stock a company should order from a supplier is believed to date
from the early 1900s. However, it was only from the 1930s onwards that
OR really established itself as a recognised professional activity and as a
coherent scientific discipline.
Activity
Explore the internet to see if universities in your own country offer courses in operations
research or management science.

A very recent evolution in terms of OR has been the appearance of the
term analytics (or data analytics), which deals with the same issues as we
consider in OR. Other phrases associated with analytics are ‘data science’
and ‘data scientist’. Often analytics is associated with ‘big data’, where the
volume of data to be considered is large.
Activity
Explore the internet to gain an understanding of the following terms:
•• analytics
•• big data
•• data science
•• data scientist.

In order to get a clearer idea of what OR is we shall actually do some by
considering the specific problem below and then highlight some general
lessons and concepts from this specific example. Note that you should be
clear here that we are not suggesting that all of OR can be equated to what
you are about to read below. Rather we are using a simple example to
introduce you to the type of approach that may be followed when tackling
a decision problem using OR.
Be clear here – what you are about to read below is a very
different approach to decision making to anything you will
have encountered before. You may well find it intellectually
demanding and (initially at least) hard to follow. We make no
apologies for this.

Two Mines company


The Two Mines company owns two different mines that produce an ore
which, after being crushed, is graded into three classes: high, medium and
low-grade. The company has contracted to provide a smelting plant with
12 tonnes of high-grade, 8 tonnes of medium-grade and 24 tonnes of
low-grade ore per week. The two mines have different operating
characteristics as detailed below.

Mine   Cost per day (£'000)   Production (tonnes/day)
                              High   Medium   Low
X      180                    6      3        4
Y      160                    1      1        6
Table 1.1: Two Mines company.
How many days per week should each mine be operated to fulfil the
smelting plant contract?


Note:
• This is clearly a very simple (even simplistic) example but, as with
many things, we have to start at a simple level in order to progress to a
more complicated level.
• This is a decision problem (we have to decide something); many
of the techniques/topics you will meet in this subject guide address
decision problems.

Activity
Consider this problem by yourself for 10 minutes. What answer do you come up with
for the number of days per week each mine should be operated? What is the associated
cost? Write your answer here for later reference.

Guessing
To explore the Two Mines problem further we might simply guess (i.e. use
our (managerial) judgement) how many days per week to work each mine
and see how well any guesses we make perform.
Work one day a week on X, one day a week on Y
This does not seem like a good guess as it results in only 7 tonnes a week
of high-grade ore, insufficient to meet the contract requirement for
12 tonnes of high-grade a week. We say that such a solution is infeasible.
Work 4 days a week on X, 3 days a week on Y
This seems like a better guess as it results in sufficient ore to meet the
contract. We say that such a solution is feasible. However, it is quite
expensive.
Rather than continue guessing we can approach the problem in a
structured logical fashion as below. Ideally we would like a solution
that supplies what is necessary under the contract at minimum cost.
Logically such a minimum cost solution to this decision problem must
exist. However, even if we keep guessing we can never be sure whether we
have found this minimum cost solution or not. Fortunately our structured
approach will enable us to find the minimum cost solution.
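As an aside, checking a guess like this is purely mechanical. The sketch below (our own illustration in Python; no programming is required for this course, and the function name `check_guess` is simply our invention) tests whether a guessed pair of working days meets the contract, and reports the weekly cost:

```python
def check_guess(x, y):
    """Return (feasible?, weekly cost in £'000) for working x days on
    mine X and y days on mine Y."""
    high = 6*x + 1*y      # tonnes of high-grade ore produced per week
    medium = 3*x + 1*y    # tonnes of medium-grade
    low = 4*x + 6*y       # tonnes of low-grade
    feasible = (high >= 12 and medium >= 8 and low >= 24
                and 0 <= x <= 5 and 0 <= y <= 5)
    return feasible, 180*x + 160*y

print(check_guess(1, 1))  # (False, 340): only 7 tonnes of high-grade
print(check_guess(4, 3))  # (True, 1200): feasible, but costly
```

This automates the checking of guesses, but of course it still cannot tell us whether any feasible guess is the minimum cost solution.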

Two Mines solution


What we have is a verbal description of the Two Mines problem. What
we need to do is to translate that verbal description into an equivalent
mathematical description. In dealing with problems of this kind we often
do best to consider them in the order:
• variables
• constraints
• objective.
We do this below. Please note here that this process is often called
formulating the problem (or more strictly formulating a mathematical
representation of the problem).

Variables
These represent the ‘decisions that have to be made’ or the ‘unknowns’.
Let:
x = number of days per week mine X is operated
y = number of days per week mine Y is operated
Note here that x ≥ 0 and y ≥ 0.

Constraints
It is best to first put each constraint into words and then express it in a
mathematical form.
• Ore production constraints – balance the amount produced with the
quantity required under the smelting plant contract.

High 6x + 1y ≥ 12
Medium 3x + 1y ≥ 8
Low 4x + 6y ≥ 24
• Note we have an inequality here rather than an equality. This implies
that we may produce more of some grade of ore than we need. In fact,
we have the general rule: given a choice between an equality
and an inequality, choose the inequality.
• For example – if we choose an equality for the ore production
constraints we have the three equations 6x + y = 12, 3x + y = 8 and
4x + 6y = 24 and there are no values of x and y which satisfy all three
equations (the problem is therefore said to be ‘over-constrained’). For
example, the values of x and y which satisfy 6x + y = 12 and 3x + y = 8
are x = 4/3 and y = 4, but these values do not satisfy 4x + 6y = 24.
• The reason for this general rule is that choosing an inequality rather
than an equality gives us more flexibility in optimising (maximising or
minimising) the objective (deciding values for the decision variables
that optimise the objective).
• Days per week constraint – we cannot work more than a certain
maximum number of days a week; for example, for a 5-day week we
have:
x ≤ 5
y ≤ 5
• Constraints of this type are often called implicit constraints because
they are implicit in the definition of the variables.

Objective
Again in words our objective is (presumably) to minimise cost which is
given by:
180x + 160y
Hence we have the complete mathematical representation of the problem
as:

minimise 180x + 160y


subject to 6x + 1y ≥ 12
3x + 1y ≥ 8
4x + 6y ≥ 24
x ≤ 5
y ≤ 5
x, y ≥ 0
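This linear program can be solved numerically. As a sketch (our own illustration using Python's scipy library; in this subject guide linear programs are instead solved via Excel, as discussed in the Introduction), the model can be passed directly to a standard LP solver:

```python
from scipy.optimize import linprog

c = [180, 160]            # minimise 180x + 160y (cost in £'000 per week)

# linprog takes constraints in the form A_ub @ [x, y] <= b_ub, so each
# >= constraint is multiplied through by -1
A_ub = [[-6, -1],         # high-grade:   6x + 1y >= 12
        [-3, -1],         # medium-grade: 3x + 1y >= 8
        [-4, -6],         # low-grade:    4x + 6y >= 24
        [1, 0],           # days per week: x <= 5
        [0, 1]]           #                y <= 5
b_ub = [-12, -8, -24, 5, 5]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)     # x = 12/7, y = 20/7 days; cost about 765.71
```

Note that the minimum cost plan works fractional days (x ≈ 1.71, y ≈ 2.86); whether that is acceptable is taken up in the Discussion below.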

Activity
Suppose now that there is a third mine Z, costing 120 (£’000) per day and producing 0.5
tonnes of high-grade, one tonne of medium-grade and nine tonnes of low-grade ore per
day. What would the formulation of the problem now be?


Discussion
There are a number of points to note here:
• A key issue behind formulation is that it makes you think. Even if
you never do anything with the mathematics this process of trying to
think clearly and logically about a problem can be very valuable.
• A common problem with formulation is to overlook some constraints
or variables and the entire formulation process should be regarded
as an iterative one (iterating back and forth between variables/
constraints/objective until we are satisfied).
• The mathematical problem given above has the form:
all variables continuous (i.e. can take fractional values)
a single objective (maximise or minimise)
the objective and constraints are linear i.e. any term is either a
constant or a constant multiplied by an unknown (e.g. 24, 4x, 6y
are linear terms but xy is a non-linear term).
• Any formulation which satisfies these three conditions is called a
linear program (LP). As we shall see later LPs are important.
• We have (implicitly) assumed that it is permissible to work in fractions
of days – problems where this is not permissible and variables
must take integer values will be dealt with under integer
programming (IP).
• Often (strictly) the decision variables should be integer but for reasons
of simplicity we let them be fractional. This is especially relevant in
problems where the values of the decision variables are large because
any fractional part can then usually be ignored (note that often the
data (numbers) that we use in formulating the LP will be inaccurate
anyway).
• The way the complete mathematical representation of the problem is
set out above is the standard way (with the objective first, then the
constraints and finally the reminder that all variables are ≥0).
Considering the Two Mines example given above:
• This was a decision problem.
• We have taken a real-world situation and constructed an equivalent
mathematical representation – such a representation is often called a
mathematical model of the real-world situation (and the process by
which the model is obtained is called formulating the model).
• Just to confuse things the mathematical model of the problem is
sometimes called the formulation of the problem.
• Having obtained our mathematical model we (hopefully) have some
quantitative method which will enable us to numerically solve the
model (i.e. obtain a numeric solution) – such a quantitative method
is often called an algorithm for solving the model. Essentially an
algorithm (for a particular model) is a set of instructions which, when
followed in a step-by-step fashion, will produce a numeric solution
to that model. Many algorithms for OR problems are available in
computer packages.
• Our model has an objective, that is something which we are trying to
optimise.
• Having obtained the numeric solution of our model we have to
translate that solution back into the real-world situation.
Chapter 1: Methodology

Hence we have a definition of OR as:


OR is the representation of real-world systems by
mathematical models together with the use of quantitative
methods (algorithms) for solving such models, with a view to
optimising them.

Activity
Think of a number of real-world business systems of which you are aware. Do you see
scope for OR to make a difference to those systems or not?

Philosophy
In general terms we can regard OR as being the application of scientific
methods/thinking to decision making. Underlying OR is the philosophy
that:
• decisions have to be made
• using a quantitative (explicit, articulated) approach will lead (on
average) to better decisions than using non-quantitative (implicit,
unarticulated) approaches (such as those used by human decision
makers).
Indeed it can be argued that although OR is imperfect it offers the best
available approach to making a particular decision in many instances
(which is not to say that using OR will produce the right decision).
Often the human approach to decision making can be characterised
(conceptually) as the ‘ask Fred’ approach: simply give Fred (‘the expert’)
the problem and relevant data, shut him in a room for a while and wait for
an answer to appear.
The difficulties with this approach are:
• speed (cost) involved in arriving at a solution
• quality of solution – does Fred produce a good quality solution in any
particular case
• consistency of solution – does Fred always produce solutions of the
same quality (this is especially important when comparing different
options).
You can form your own judgement as to whether OR is better than this
approach or not.

Activity
Form your own judgement as to whether OR is better than the ‘ask Fred’ approach or not.
Can you think of problems you have solved using the ‘ask Fred’ approach?

Activity
What do you think is the minimum cost solution to the Two Mines problem? Record it
here for reference.

Certainty versus uncertainty


In management decision making it is helpful to consider whether we are
deciding under conditions of certainty or uncertainty. The Two Mines
problem considered above was one where we had certainty, we knew all
data values precisely. Uncertainty can arise for two basic reasons:
• we are not sure of the exact numeric value for a data item


• probability (chance) plays a natural role in the decision problem.


As to the first of these, this might occur if we were unsure as to the precise
cost per day of operating mines X and Y in the Two Mines problem and
only had imprecise information as to these costs. As to the second of these,
this might occur if we had a decision problem relating to the price to bid
on a contract and we are unsure, for a given price, of the probability that
our bid would be accepted.
It is sometimes believed in management that decision problems only arise
due to uncertainty. This is not true. As the Two Mines problem indicates
it may be difficult to make a decision even when we are absolutely certain
about everything. For example, do you know what the minimum cost
solution to that problem is? If you think you do, would you be prepared to
bet your life and that of your entire extended family on being correct?
Uncertainty, it is true, does tend to complicate a problem, but you will see
many examples throughout this subject guide where we are certain about
the problem being considered and all data items, but still have a difficult
decision problem to solve.

Phases of an OR project
Drawing on our experience with the Two Mines problem we can identify
the phases that a (real-world) OR project might go through. We are not
suggesting here that all OR projects go through the phases shown below,
rather that the phases shown are a sufficiently good description of the
phases that many projects go through to merit consideration.

Phase 1: Problem identification


In this phase we attempt to clarify the problem that we have to consider.
Factors that need consideration here are:
• Diagnosis of the problem from its symptoms if not obvious (i.e. what is
the problem?).
• Delineation of the sub-problem to be studied. Often we have to ignore
parts of the entire problem as it is simply too large/complex/time-
consuming to be tackled. Rather we choose some distinct part of the
overall problem and tackle that. Be aware here that often benefit
can be gained just by making improved decisions for some part of an
overall problem.
• Establishment of objectives, limitations and requirements for the
problem (or part of the problem) we have chosen to tackle.

Phase 2: Formulation as a mathematical model


It may be that a problem can be modelled in differing ways, and the choice
as to the appropriate model may be crucial to the success of the OR project.
In addition to algorithmic considerations for solving the model (i.e. can we
solve our model numerically?) we must also consider the availability and
accuracy of the real-world data that are required as input to the model.
Note that the ‘data barrier’ (‘we do not have the data!!!’) can appear
here, particularly if people are trying to block the project. Often data can
be collected or estimated, particularly if the potential benefits from the
project are large enough.
You will also find, if you do much OR in the real world, that some
environments are naturally data-poor, that is the data are of poor quality
or non-existent, and some environments are naturally data-rich.


This issue of the data environment can affect the model that you build. If
you believe that certain data can never (realistically) be obtained there is
perhaps little point in building a model that uses such data.

Activity
Have you ever met the data barrier in your own work? Think of an environment which you
consider to be data-poor and an environment which you consider to be data-rich.

Phase 3: Model validation (or algorithm validation)


Model validation involves running the algorithm for the model on the
computer in order to ensure:
• the input data are free from errors
• the computer program is bug-free (or at least there are no outstanding
bugs)
• the computer program correctly represents the model we are
attempting to validate
• the results from the algorithm seem reasonable (or if they are
surprising we can at least understand why they are surprising).
Sometimes we feed the algorithm historical input data (if available
and relevant) and compare the output with the historical result.

Phase 4: Solution of the model


Standard computer packages, or specially developed algorithms, can be
used to solve the model (as mentioned above). In practice, a ‘solution’
often involves considering different numeric scenarios under varying
assumptions to establish sensitivity. For example, what if we vary the
input data (which will be inaccurate anyway), then how will this affect
the values of the decision variables? Questions of this type are commonly
known as ‘what if’ questions nowadays.
Note here that the factors which allow such questions to be asked and
answered are:
• the speed of processing (turn-around time) available by using PCs
• the interactive/user-friendly nature of many PC software packages.
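As a sketch of a 'what if' question, again assuming the SciPy library is available, we might re-solve the Two Mines model while varying the daily cost of mine X over some hypothetical values and watching how the decision variables respond:

```python
# Sketch: a 'what if' question on the Two Mines LP -- how does the optimal
# plan change as the (hypothetical) daily cost of mine X is varied?
from scipy.optimize import linprog

A_ub = [[-6, -1], [-3, -1], [-4, -6]]   # ">=" constraints, multiplied by -1
b_ub = [-12, -8, -24]
bounds = [(0, 5), (0, 5)]

for cost_x in (150, 180, 210, 600):     # hypothetical costs for mine X
    res = linprog([cost_x, 160], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    x, y = res.x
    print(f"cost X = {cost_x}: work X {x:.2f} days, Y {y:.2f} days, "
          f"total cost {res.fun:.1f}")
```

At a sufficiently high cost for mine X the optimal plan shifts to work mine Y more heavily, which is exactly the kind of sensitivity information a client needs.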

Activity
Think of any decision you may have made based on analysing numbers. Did you conduct
sensitivity analysis to see if your decision would be different if the numbers changed or
not? If not, why not?

Phase 5: Implementation
This phase is implementation: that is, making a difference (hopefully
for the better!) in the real world.
It may involve the implementation of the results of the study or the
implementation of the algorithm for solving the model as an operational
tool (usually in a computer package). In the first instance detailed
instructions as to what has to be done (including time schedules) to
implement the results must be issued. In the second instance operating
manuals and training schemes will have to be produced for the effective
use of the algorithm as an operational tool.


Activity
Think of a business situation where you were instrumental in making a difference in the
real world. Was this difference really for the better or not and why?

Note here that although we have presented the five phases above in a
sequential fashion, in practice we might well switch between phases as
and when the need dictates. For example we might find at Phase 4 when
we examine numeric solution values that we have made an error in our
formulation of a mathematical model at Phase 2 and so we need to loop
back to that phase.
It is believed that many OR projects that successfully pass through the first
four phases given above fail at Phase 5, the implementation stage (i.e. the
work that has been done does not have a lasting effect). As a result one
topic that has received attention in terms of bringing an OR project to a
successful conclusion (in terms of implementation) is the issue of client
involvement. This means keeping the client (the sponsor/originator
of the project) informed and consulted during the course of the project
so that they come to identify with the project and want it to succeed.
Achieving this is really a matter of experience. However, we believe that,
as with many things in life, some useful insights can be gained from the
written word (as opposed to real-life experience) and for this reason we
discuss this issue of implementation further below.

Methodological issues
There are a number of methodological issues that arise in OR work that
we need to consider here. These relate to:
• consultancy
• cost versus decision quality
• optimisation
• implementation.
We discuss each in turn below.

Consultancy
It often happens that there is a client who has some problem on which
they need help and they decide to call in an ‘expert’ to provide that help.
The ‘expert’ is called a consultant and the process in which they engage
(tackling the client’s problem) is called consultancy. Clearly a client might
well engage an OR worker as a consultant (for example, because the OR
worker has skills that the client lacks). Clients can be drawn from a wide
range of organisations (for example, private companies, public companies,
and governmental departments).
There is no consultant without a client, and the consultant needs to be
clear about who the client is, what the nature of the problem is, and
what kind of help is needed. Often the answer to these questions will be
covered in some form
of contract between the client and consultant. However, this may change
over the course of the project and therefore should be kept under review.
Close contact between the client and the consultant is a key determinant
of the success of the project and will be discussed further below.
Problems have a number of characteristics:
• things are not as they should be, or understanding is incomplete
• the problem owner wants to do something about it


• the problem owner either:
does not know what to do
knows what needs to be done but lacks the time/skills to do it themselves.
The client should be the ‘problem owner’ (that is, the person or group
who is responsible and who can make changes). However, it may be that
a consultant is called in by another person, and in these circumstances the
consultant will need to address the ‘problem owner’ through the client.
In addition, the consultant should be aware what changes the client can
feasibly make. Some problems cross departmental boundaries, and the
client may only be able to make changes in their department.
Activity
Think of a problem of which you are aware. Who is the problem owner? Who is the
client?

In situations where the client is not the person commissioning the work or
where the problem crosses departmental boundaries, the consultant must
ensure that they can give advice on the options available so that the client
can make changes. Otherwise, the consultant’s work is likely to be in vain.
Activity
Can you think of a situation from your experience where a consultant’s work has been in
vain? Why did the work not succeed?

The consultant therefore helps the client decide what to do. Defining the
problem should involve finding and agreeing some activity that will be
useful in helping the client decide what to do. Note that problems are
subjective, and therefore so are solutions. It is the client’s problem and the
desired solution should be the client’s, not the consultant’s. The consultant
must respond to the client’s concerns and value systems, or risk the whole
enterprise – these can be debated/negotiated. However, ultimately the
client decides.
Clearly the consultant needs to acquire an understanding of the context in
which the problem is set. There is usually some obvious technical context
that needs to be understood (for example, the client’s organisation is
providing services of a particular type, using these particular resources,
to particular customers, and it is, for example, a non-profit making
organisation). There is also the social context (who is involved/affected and
how these things are articulated, how the actors interact with one another,
etc.); and a cultural one (what rules and beliefs are core to the client/
organisation, what is the power pattern, how do things get done, etc.).
An organisation or individual may employ consultants because:
• they lack the skills within the organisation to find a resolution to the
problem: the consultant is the expert
• they lack the time/resources to find a resolution to the problem: the
consultant as a hired body or temporary employee
• they need an ‘independent’ person to help resolve the problem:
either to act as an arbitrator between two or more groups, or
to provide external justification for a decision, or to audit
recommendations of an internal project
• they need to be seen to be doing something and employing a
consultant/firm of consultants will provide a positive image


• they need a ‘scapegoat’, someone else to blame if things go wrong.


Obviously in any particular case there may be a mix of reasons for
employing a consultant rather than any single reason.
Organisations may have their own internal group of consultants or they
may employ an outside consultancy firm. The internal consultant has
many advantages (e.g. shared organisational objectives, familiarity with
context, probable ease of working relationships) but may not be seen as
independent and may not have special skills. External consultants may be
employed as perceptibly independent and for their special technical skills
or relevant previous experience (e.g. with similar organisations in other
countries), but their value system may differ from the client’s (e.g. their
goal may be to maximise their own profit).

Activity
Has your company (or a former company) ever employed consultants? If so, why did they
do so?

Activity
Would you be the best person to carry out some consultancy work for your work/college?
Why or why not?

Cost versus decision quality


Consider a simple decision problem, should you ask someone to marry
you or not? You could reach a solution to this decision problem very easily
(very quickly and cheaply) by tossing a coin – heads I ask, tails I do not.
Alternatively, you might consider it worthwhile spending time and money
finding out whether you are compatible before deciding whether or not
to ask that person to marry you. The point is perhaps clear – if we wish
to reach good quality decisions then we have to take our time and incur
costs.
OR projects use resources – the consultant’s, the client’s, other staff’s – and
there is always a tension between minimising the cost and time taken to
reach a decision and making the ‘perfect’ decision. This tension comes in
the development of the model, in the collection of the data used in the
model, and in the accuracy of the results.
Models are representations of the system under investigation, but are
simplified representations. The more time, and cost, put into building a
model, the more accurately it should represent the system. The model
is also likely to be more complex, including more variables or more
relationships between variables, which is likely to make it less accessible
to the client and likely to take longer to solve on the computer. Therefore
there is a trade-off between an accurate model and the cost and time
involved in building the model. The same is true for the collection of data
used in the model. It is always possible to collect more data, and spend
more time validating the data, and the more time and effort spent in this
area the more accurate the data are likely to be.
Note here that although the issue of the time taken to solve a model on
a computer is less important today than it was in the past (with modern
PCs), there are still many applications where solutions to decision
problems have to be produced very quickly. An example of this is the
airline industry where decisions as to whether or not to sell a potential
passenger a ticket, or how to reconfigure the schedule to cope with
disruptions to the pre-planned schedule, must be made very quickly.


Activity
Consider any decision problem that you have been involved in. Do you think that enough
time and effort was spent in order to reach a good quality decision or not and why?

Optimisation
The purpose of carrying out a project is usually to provide advice to the
client on what to do, based on the construction and experimentation with
a model. Traditional OR models have used optimisation to determine
what is the best action the client should take (i.e. mathematical models
where the optimum value of the controllable variables can be determined).
Such a model was seen above for the Two Mines problem.
In general, optimisation assumes that:
• The model accurately represents the system, and therefore the optimal
solution for the model is also the optimal solution for the system.
This may not be the case since models rarely represent accurately the
system under investigation.
• There is one objective or, where there is more than one objective, they
can be translated into a common unit, usually monetary values: for
instance, giving time a monetary value and so being able to optimise
over cost and time.
• There is consensus over what the objective of the system is.
• The problem will not change over time (at least in the short-term) and
therefore one optimal solution can be found (i.e. the solution given is
the best advice available now for the near future).
• All data can be quantified (i.e. assigned numerical values). In some
cases it is not possible to quantify some factors, and therefore only
qualitative information can be provided.
Activity
Suppose a number of possible road schemes are being considered and the decision on
which scheme to choose is affected by many factors such as: the cost of building, the
maintenance cost, the number of cars likely to use any new roads, the estimated number
of road deaths, and the environmental impact (houses demolished, pollution) of the
scheme. Which of these factors can be numerically assessed, and which can be put in
monetary terms?

As an alternative to giving the client the optimal solution, the consultant
could attempt to scan available options (i.e. attempt to look at all the
feasible options in the solution space and evaluate their performance
based on the various different objectives of the client). Some of the options
could be removed from consideration because other options are better on
all objectives (i.e. some options are dominated by other options), but
those remaining could be presented to the client for them to determine the
trade-off between the objectives, and non-quantifiable factors can then be
taken into account.
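A minimal sketch of this dominance filtering, using made-up option names and scores on two objectives (both to be minimised), might look as follows:

```python
# Sketch: removing dominated options. Each (hypothetical) option is scored
# on two objectives, both to be minimised; an option is dominated if some
# other option is at least as good on every objective and strictly better
# on at least one.
options = {                 # name: (cost, completion time) -- made-up data
    "A": (100, 12),
    "B": (120, 9),
    "C": (130, 11),         # dominated by B (B is cheaper and faster)
    "D": (90, 15),
}

def dominates(p, q):
    """True if an option scored p dominates one scored q (minimisation)."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

undominated = [name for name, score in options.items()
               if not any(dominates(other, score)
                          for o, other in options.items() if o != name)]
print(undominated)          # the options worth presenting to the client
```

Only the undominated options need be shown to the client, who can then weigh the trade-offs between them.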
The effects of inaccurate data and minor changes in the systems’
environment can be overcome by carrying out sensitivity analysis
using the model. This involves changing assumptions and input data in
the model to determine how sensitive the solution is to minor, and major,
changes in data.
The accuracy and usefulness of a solution depend not only on the model's
ability to represent the system and on the accuracy of the data, but also
on the robustness of the solution obtained. A solution is said to be
robust if the option identified as being 'best' remains the best option
(or if not best at least an extremely good option) even when the situation
changes (e.g. environmental changes, such as cost and demand data).
More informative and robust results are likely to be obtained by putting
extra time into experimentation with the model (e.g. changing input
data, changing assumptions about the future environment). It is always
important to carry out some sensitivity analysis on the results from a
model to check the robustness of the solution, and also to identify the cost
of choosing an option which, while it may not be the ‘best’, is more robust.
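A minimal sketch of such a robustness check, using made-up cost data for two hypothetical options evaluated under three scenarios, might be:

```python
# Sketch: a simple robustness check. Two hypothetical options are costed
# (cost to be minimised) under three demand scenarios; the 'best' option
# in the expected scenario is not the one that performs well everywhere.
scenarios = ["low", "expected", "high"]          # made-up scenarios
costs = {                                        # made-up cost data (£'000)
    "option 1": {"low": 90, "expected": 100, "high": 180},
    "option 2": {"low": 105, "expected": 110, "high": 120},
}

for name, by_scenario in costs.items():
    worst = max(by_scenario[s] for s in scenarios)
    print(f"{name}: expected cost {by_scenario['expected']}, "
          f"worst case {worst}")

# Here option 1 is cheapest in the expected scenario, but option 2 is more
# robust: its worst-case cost (120) is far below option 1's (180).
```

The gap between the two expected costs (10 in this made-up data) is the cost of choosing the robust option rather than the nominal 'best'.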

Activity
Suppose there were two types of electricity generators – one cost £100,000 and was
sufficient to supply 20,000 homes, the other cost £150,000 and was sufficient to supply
35,000 homes – if there are currently 40,000 homes to be supplied, how many of each
type of generator should be built?
If the generators are each expected to last 10 years, and the number of homes is
estimated to rise in five years to 55,000 and then remain stable, how many of each type
should be built?
What would be a robust strategy if the number of homes in the next 10 years is likely to
be between 50,000 and 60,000?

Many of the models presented in this subject guide use optimisation. You
should be aware of the limitations of optimisation and also conscious of
the need to carry out sensitivity analysis. We will look at robustness and
sensitivity analysis with regard to one of the optimisation techniques,
linear programming, in Chapter 8 of this subject guide.

Activity
Consider the various problems associated with optimisation and identify the possible
strategies for dealing with them.

Implementation
To a large extent the success of an OR project is not determined by
whether the project produces an elegant model, or by the size of the
benefits of the recommended course of action, but by whether the project
affects the decisions made by the client, including whether any action
recommended is undertaken by the client.
In order for an organisation to implement a proposed course of
action, it must be possible to implement the solution (technologically
and culturally), and the person(s) with the power to implement the
recommended course of action must be committed to it.
In order for an OR project to produce a solution which it is possible
to implement (a feasible solution), it is necessary to ensure, when
formulating the problem, that all the relevant technological and cultural
constraints are known and, where appropriate, included in the model.
Continuous contact with the organisation, and specifically the client in the
organisation (the person/people who have the problem), including regular
discussions on the progress of the project and the model being produced,
should ensure that a feasible solution is produced.
Gaining the commitment of the person(s) with the power to implement
the solution requires the consultant to persuade them that the changes
recommended are worth making. Approaches which are likely to gain such
commitment include:

• Regular and positive communication with the client, to get
information as well as to ensure the problem has not changed, and to
make sure that the client knows how the problem is being approached.
Problems can change over time; they may go away of their own
accord; the objectives of the organisation may change; the constraints
may change. A technically brilliant solution to a problem the client no
longer faces, or to the wrong problem, will have no value to the client
and will not be implemented, or may be impossible to implement.
• Continually involving the client during the project. A good
practical rule is that the consultant should never surprise the client by
what they do – the result may surprise the client (i.e. it may not fit
with their previously conceived ideas of what the solution would be),
but the work that is being done should not.
• Giving the solution, and justification for the solution, in plain
language, using terms the client will understand. This may
require the use, at least in persuading the client of the validity of the
solution, of simplified, transparent models and possibly examples
showing the benefits of the solution. A common criticism of OR work
over the years has been that some of the models used are ‘opaque’
(they cannot be understood by the client) which can lead to non-
implementation because the client does not believe in the model and
therefore cannot confidently implement results, especially where the
results are counter-intuitive or very radical.
• Ensuring that the action recommended in the solution is within the
power of the client (not recommending action which the client is
unable to implement).

Activity
Reflect on how you might persuade your boss or a former boss to support a course of
action you are proposing. What strategies might you adopt?

Benefits
Throughout your career you will inevitably encounter people who have
little understanding of OR and, moreover, feel that problems in business/
management can be solved by (innate) personal ability and experience
(such as coincidentally they themselves possess). As such they see no need
for a ‘complicated’ approach such as OR.
On a personal note, at the time of writing I am over 60 and have been
involved in OR for all of my working life. My personal view as to the
benefits of OR in solving problems in business/management is:
• OR is particularly well-suited for routine tactical decision-making
where data are typically well-defined and decisions for the same
problem must be made repeatedly over time (for example, how much
stock to order from a supplier).
• An advantage of explicit decision-making is that it is possible to
examine assumptions explicitly.
• We might reasonably expect an ‘analytical’ (structured, logical)
approach to decision-making to be better (on average) than simply
relying on a person’s innate decision-making ability.
• OR techniques combine the ability and experience of many people.
• Sensitivity analysis can be performed in a systematic fashion.


• OR enables problems too large for a person to tackle effectively to be
dealt with.
• Constructing an OR model highlights what is/is not important in a
problem.
• If you have an explicit OR model then it has the advantage that it
is transparent – conceptually you can write it down on a piece of
paper and everyone (specifically everyone who has a suitable amount
of training) can examine it, discuss it, criticise it and amend it as
appropriate.
• A training in OR teaches a person to think about problems in a logical
fashion.
• Using standard OR techniques prevents a person having to ‘reinvent
the wheel’ each time.
• OR techniques enable computers to be used with (usually) standard
packages, consequently bringing all the benefits of computerised analysis
(speed, rapid (elapsed) solution time, graphical output, etc.).
• OR techniques are an aid (complement) to ability and experience, not
a substitute for them.
• Many OR techniques are simple to understand and apply.
• There have been many successful OR projects.
• Many successful companies use OR techniques.
• Ability and experience are vital. However, OR is necessary to use these
effectively in tackling large problems.
• OR techniques free executive time for more creative tasks.

Links to other chapters


The topics considered in this chapter link to all the other chapters in this
subject guide. This chapter ‘sets the scene’ for later chapters in that it
discusses the discipline of OR and a number of important general issues
that arise within it.

Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.

Title – Anderson (page number)

Revenue management at American Airlines – 3
Workforce scheduling for British Telecommunications PLC – 8
Quantitative analysis at Merrill Lynch – 13
Models in Federal Express – 15
A spreadsheet tool for Catholic Relief Services – 18


A reminder of your learning outcomes


Having completed this chapter, and the Essential reading and activities,
you should be able to:
• explain the philosophy underlying the reasons for mathematical
modelling of problems
• describe the phases of an OR project
• explain the philosophy underlying OR
• explain how OR is carried out (i.e. the client/consultant role)
• discuss consultancy, cost versus decision quality, optimisation and
implementation in the context of OR work
• discuss the benefits of an OR approach to decision problems.

Sample examination questions


For sample examination questions relating to the material presented in this
chapter please visit the VLE.



Chapter 2: Problem structuring and problem structuring methods

Essential reading
Rosenhead, Chapters 2, 4 and 6.

Aims of the chapter


The aims of this chapter are:
• to contrast hard OR and soft OR (problem structuring methods)
• to illustrate three soft OR approaches:
Journey Making
Soft Systems Methodology
Strategic Choice.

Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should ensure that you can describe and explain:

General
• the common features of problem structuring methods.

Journey Making
• how cognitive maps can structure individual and group views of a
problem, including the identification of goals and options, and how to
produce and structure a cognitive map
• the Journey Making process.

Soft Systems Methodology (SSM)


• how root definitions and CATWOE are produced and used to describe
pure systems
• the seven stage SSM process.

Strategic Choice (SC)


• how uncertainties can be classified and reduced
• the four modes of working and the SC process.

Introduction
The first stage in the OR process, formulation of the problem, involves
identifying the problem facing the organisation. Rosenhead defines
a well-structured problem as one with:
• unambiguous objectives
• firm constraints
• established cause–effect relationships.
The problem formulation stage attempts to identify and clarify these
factors.

Formulation of a problem may be difficult because:


• The problem is inter-related, either being one of a number of problems
which are facing different parts of an organisation, or the problem itself is
made up of a number of inter-related problems.
• There is disagreement within the organisation over the objectives, the
constraints and/or cause–effect relationships.
• There is a large amount of uncertainty over the constraints and/or the
cause–effect relationships.
Problems that are difficult to formulate are often strategic in nature, and it
is for this type of problem that the techniques presented in this chapter are
most relevant.
The formulation of such problems is often the most important and difficult
part of the OR process, and may in itself yield answers for the problem owner.

Activity
Think of a problem of which you are aware. Write down a clear statement of what the
problem is. Did you find getting this clear statement difficult or not and why?

Activity
Consider the different perceptions of what constitutes a good lesson held by a teacher,
students and a school management body.

In recent years a number of approaches to problems have appeared which
have come to be labelled collectively as ‘soft OR’. Many of these approaches
have their origins in the UK.
By contrast the classical OR techniques such as linear programming are
labelled collectively as ‘hard OR’.
Hard OR is used here in the sense that traditional/classical OR techniques are:
• tangible
• easy to explain
• easy to use.
Soft OR, by contrast, is:
• somewhat intangible
• not easy to explain
• not easy to use.
Note here that a collective generic name for these particular soft OR
approaches is problem structuring methods.
This chapter introduces some of the concepts behind problem structuring,
and a number of problem structuring methods.

Problem structuring methods


Before going on to consider each of the problem structuring methods (PSMs
for short) in detail it will be helpful if we first outline what they have in
common.
Probably the main characteristic of such methods is that, to a greater or
lesser extent, their primary focus is on:
the people involved with the problem


and their secondary focus is on:
the problem.
This is in sharp contrast to traditional hard OR approaches which are
geared to understanding the problem and developing the best answer to it.
Or, to phrase it another way, in hard OR the primary focus is on:
the problem.
and the secondary focus (if at all) is on:
the people involved with the problem.
You can clearly see here how in problem structuring methods the focus is
the reverse to that of hard OR.
Although it is plain that both issues need to be considered the focus
adopted can lead to some distinct differences.
For example, if my primary focus is on the people involved with the
problem, I could regard myself as a success if I can get them to better
understand the issues that they face and to agree on a course of action.
The fact that this course of action may be an absolutely disastrous one
from the point of view of the problem is somewhat irrelevant.

Overview
As an overview, problem structuring methods:
• help structure (complex) problems
• are mainly used with a small group of decision makers (people) in an
organisation
• do not try to get an objective definition of the problem
• emphasise the importance and validity of each individual’s subjective
perception of the problem.
To achieve this such methods typically use a consultant (external person)
whose role is:
• to see that the group contains individuals with knowledge of the
situation and/or individuals who will affect the success of any action
proposed
• to act as a facilitator/organiser of the process
• to orchestrate discussion
• to be seen to be open, independent and fair.
The consultant does not need to possess any special knowledge about
the problem (i.e. he or she does not need to be an expert in the problem
area). However, consultants are often experts in the particular problem
structuring method being applied.
Such methods try to capture the group’s perception of the problem:
• verbally (in words)
• in pictures/diagrams.
Words are used as they are believed to be the natural currency of
problem definition/discussion/solution (compare hard OR which uses
mathematics). The use of pictures/diagrams helps to structure the group’s
perception of the problem and enables discussion/debate to be less
personal.
Such methods help the members of the group:
• to gain an understanding of the problem they face


• to gain an understanding of the views of the problem adopted by other
members of the group
• to negotiate about the action to take
• to agree on a consensus course of action to which they are committed.

Definitions
Problem structuring methods involve the use of a number of words with
specific meanings:
• Client(s) – person(s), the group, who face the decision problem and
for whom the consultant is working.
• Consultant – person from outside the group who acts as a facilitator.
• Facilitator – an independent person who aids the group by extracting
information from them about the problem and organising it.
Facilitators also act as a type of chairperson.
• Consensus – gaining the acceptance of all members of a group to a
particular view/decision.
• Workshop – group of people working/discussing an issue or issues in a
structured way.
• Pure model – model of a system which pursues a pure purpose from a
specific point of view.
• Purposeful activity system – a system, possibly hypothetical, in an
organisation which has a specific purpose.
In this chapter we consider three problem structuring methods:
• Strategic Options Development and Analysis (SODA) and JOURNEY
(JOintly Understanding, Reflecting, and NEgotiating strategY) Making
• Soft Systems Methodology (SSM)
• Strategic Choice (SC).
To try and illustrate these methods we will apply each of them to the
following example problem:
Crime is a real problem in this country. We are spending
more and more on locking up increasing numbers of people
in prisons, yet crime seems to go on rising. Many of those in
prison are there for reasons connected with medical problems
(e.g. drug addiction, mental illness), yet when they come out
of prison these problems are unresolved and so they go straight
back to crime. Perhaps the answer is longer prison sentences.

Strategic options development and analysis (SODA) and JOURNEY (JOintly Understanding, Reflecting, and NEgotiating strategY) Making
Activity/Reading
For this section read Rosenhead, Chapter 2.

Strategic Options Development and Analysis (SODA) and JOURNEY
(JOintly Understanding, Reflecting, and NEgotiating strategY) Making are
essentially the same at the level at which we consider them in this chapter.
The difference between them relates to the emphasis/meaning implied
from the words used to describe the technique:


• SODA – Strategic Options Development and Analysis – implies that the
emphasis is on developing and analysing options – possible decisions
that can be taken – and hence the emphasis is on reaching a final decision
• JOURNEY (JOintly Understanding, Reflecting, and NEgotiating
strategY) Making – implies that the emphasis is on understanding/
reflecting/negotiating – i.e. the process is as important as, or more
important than, the final decision reached.
Since the description ‘Journey Making’ is intended to encompass and
replace ‘SODA’ we shall henceforth use the phrase ‘Journey Making’.
In Journey Making we elicit information from members of the group using
individual interviews. Such information is represented on cognitive
maps. These show:
• concepts that are relevant
• linkages between concepts.
By concept here we typically mean a short phrase capturing some idea.
This idea should be an action-oriented idea that is intended to suggest
an option for changing the situation. Often the negative (reverse) of the
concept is also introduced.
For example, in talking about our crime example above, a concept used
may be ‘more prisoners’ with the negative concept being ‘less prisoners’.
To ease concept representation we use three dots to separate positive
and negative concepts so that we can capture both concepts in ‘more
prisoners...less prisoners’. This is read as ‘more prisoners rather than
less prisoners’. As a shorthand we sometimes omit the negative concept
and write ‘more prisoners...’ so that the second concept is implied as the
negative of ‘more prisoners’.
Concepts are linked by arrows, with the direction of the arrow being
such that concepts representing options lead to concepts representing
outcomes. A negative sign associated with an arrow indicates that if the
first phrase of the first concept applies, then the second phrase of the
second concept also applies.
For example:
[Figure 2.1: a cognitive map linking the concepts ‘more prisoners...less prisoners’, ‘longer prison sentences...’, ‘reform criminals...’ and ‘less crime...’. Arrows join the concepts; the link from ‘longer prison sentences...’ to ‘reform criminals...’ carries a negative sign.]
Figure 2.1
Figure 2.1 represents the view that more prisoners links to less crime and
longer prison sentences do not reform criminals.
Concepts in maps are generally either:
• goals, at the head (the top) of the map (these are things that are self-
evidently regarded as ‘good things’)
• options, at the tail (the bottom) of the map.
Strategic options, sometimes called strategic directions, are options which
have no other options above them in the map.
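To make the structure concrete, a cognitive map can be held as a simple directed graph. The sketch below is illustrative only and is not part of the Journey Making method itself: the concepts and links are hypothetical, loosely based on the crime example, and ‘strategic option’ is read, as above, as an option with no other option above it (all of its arrows lead directly to goals).

```python
# A hedged sketch: a cognitive map held as a directed graph, with arrows
# running from options towards goals. The concepts and links below are
# hypothetical, loosely based on the crime example in this chapter.
cognitive_map = {
    "more police...": ["more prosecutions..."],
    "more prosecutions...": ["punish criminals..."],
    "longer sentences...": ["punish criminals...", "protect society..."],
}

# Collect every concept appearing in the map (as a source or a target).
concepts = set(cognitive_map)
for targets in cognitive_map.values():
    concepts.update(targets)

# Goals sit at the head of the map: in this sketch, they are the
# concepts with no outgoing arrows.
goals = {c for c in concepts if c not in cognitive_map}

# Options are the remaining concepts; strategic options are options with
# no other option above them, i.e. all their arrows lead directly to goals.
options = concepts - goals
strategic = {o for o in options if all(t in goals for t in cognitive_map[o])}

print(sorted(goals))      # the goals at the head of the map
print(sorted(strategic))  # the strategic options
```

Here the goals come out as ‘punish criminals...’ and ‘protect society...’, and the strategic options as ‘more prosecutions...’ and ‘longer sentences...’.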


In Journey Making cognitive maps are first produced for each individual by
interviewing them in a relatively unstructured ‘free-flowing’ way to try to
elicit their thought processes about the problem under discussion and what
they think is important about the problem. Such maps often contain 40 to
100 concepts and may also help each individual to refine their thinking.
In Figure 2.2 we show a small map based on our crime example given
above. You can see that the goals (at the top of the map) are ‘less crime’
and ‘reform criminals’ and we have a number of options available, e.g.
‘spend more money on prisons’.
[Figure 2.2: a cognitive map with the goals ‘less crime...’ and ‘reform criminals...’ at the top, and below them the concepts ‘longer prison sentences...’, ‘more medical care in prisons...’, ‘spend more money on prisons...’ and ‘more prisoners...less prisoners’, linked by arrows.]
Figure 2.2
In the above the strategic options are ‘more medical care in prisons’ and
‘longer prison sentences’.
Note here that the map may show contradictory information. Just as
people’s ideas and thinking are often contradictory so the map may be.

Activity
Map your own views about the treatment of crime and prisoners in your society.

Another map for an individual talking about the same problem might be:

[Figure 2.3: a cognitive map with the goals ‘punish criminals...’ and ‘protect society...’ at the top, and the options ‘more prosecutions...’, ‘longer sentences...’ and ‘more police...’ below, linked by arrows.]
Figure 2.3

Activity
Map the views of a friend about the treatment of crime and prisoners in your society.

Once individual maps have been produced they need to be merged into a
single map, initially often containing several hundred concepts. In doing
this:
• similar concepts are merged into one
• concepts from key members of the group should be retained
• a balance of concepts from all members of the group should be present
• the consultant may add/delete concepts and links between concepts.
For example we might merge our two individual maps above to get:


[Figure 2.4: the merged cognitive map, containing the goals ‘less crime...’, ‘protect society...’ and ‘reform criminals...’ at the top, together with the concepts ‘longer prison sentences...’, ‘more medical care in prisons...’, ‘spend more money on prisons...’, ‘more prisoners...less prisoners’ and ‘more police...’, linked by arrows.]
Figure 2.4
In order to make this map manageable in problems larger than our simple
example considered here:
• The concepts in it are aggregated into clusters (say 15 to 30 concepts
in each cluster), so that we have a map within each cluster and each
cluster is appropriately labelled.
• The final merged map is an overview map at the cluster level showing
the labelled clusters and the links between clusters.

Activity
Merge together the two maps you have produced in the preceding two activities.

This merged overview map (and the individual cluster maps) serve as a
focus for discussion at a workshop involving:
• analysis of its content and structure
• identification of any ‘emerging themes’ and ‘core concepts’
• discussion of key goals, inter-related problems, key options and
assumptions.
As for all problem structuring methods the aim of Journey Making is to
achieve understanding/agreement within the group.

Activity
Think of a problem of your own and with a friend apply Journey Making to the problem.

Soft systems methodology (SSM)


Activity/Reading
For this section read Rosenhead, Chapter 4.

SSM assumes:
• Different individuals and groups make different evaluations of events
and this leads to them taking different actions.
• Concepts and ideas from systems engineering are useful.
• It is necessary when describing any human activity system to take
account of the particular image of the world underlying the description
of the system and it is necessary to be explicit about the assumptions
underlying this image.


• It is possible to learn about a system by comparing pure models of


that system with perceptions of what is happening in the real-world
problem situation.

Overview
SSM operates by defining systems of purposeful activity (the root
definition), building models of a number of relevant systems, and
comparing these models to the real world action going on, in order to
structure a debate focusing on the differences. That debate should lead
the group of people involved in the process to see their way to possible
changes and to motivate these people to carry out these changes.

Stages
There are seven stages in the SSM process, but they are not necessarily
followed in a linear fashion. Diagrammatically these stages are:
[Figure 2.5: the seven stages of SSM shown as a cycle, divided between the ‘real world’ and ‘systems thinking about the real world’:
1. Enter situation considered problematic
2. Express the problem situation
3. Formulate root definitions of relevant systems of purposeful activity
4. Build conceptual models of the systems named in the root definitions
5. Compare models with real-world situation
6. Define possible changes which are both desirable and feasible
7. Take action to improve the problem situation
Stages 3 and 4 belong to systems thinking about the real world; the remaining stages belong to the real world.]
Figure 2.5

Stages 1 and 2: Finding out


These stages involve entering the problem situation and identifying within it:
• people – essentially all those with an interest in the system or who are
likely to be affected by changes to it
• culture – social roles, norms of behaviour, values
• politics – commodities of power and how they are obtained, used,
preserved and transmitted.

Stage 3: Developing root definitions


SSM requires one or more root definitions to be stated. These are
sentences which describe the ideal system (or subsystems within the
overall system).
To ensure that appropriate elements of the system are captured in a
root definition it should be possible to deduce from the root definition
answers to the following questions:
C Who are the customers/victims/beneficiaries of the system?
A Who are the actors/participants in the system?
T What is transformed by this system; what inputs are transformed into
what outputs?


W Weltanschauung is a German word for which the usual translation


is worldview. It is helpful to consider it as the stock of images in our
head, put there by our origins, upbringing and experience of the
world. We use these images to help us to make sense of the world and
they normally go unquestioned. So what is the worldview underlying
the system?
O Who is the owner of the system; who has the power to stop the system?
E What are the environmental constraints that cannot be altered and
which need to be considered?
These questions are collectively referred to using the CATWOE mnemonic:
C Customer
A Actors
T Transformation process
W Weltanschauung or worldview
O Owner
E Environmental constraints.
Hence you should be clear here that the root definition and CATWOE are
linked together.
So using our crime example a possible root definition could be:
The prison system is a system for ensuring convicted criminals
(prisoners) serve their sentences in humane conditions, receive
appropriate medical care, are given opportunities to learn training and
skills, and are released back into society at the end of their sentence with
appropriate support so that they can be reformed from their life of crime.
Checking this root definition against CATWOE to ensure it is appropriate
we have:
C Customer – society
A Actors – prisoners and prison staff
T Transformation process – (here it can be regarded as) transforming
the need for convicted criminals to be locked away from society to that
need being met
W Weltanschauung (worldview) – the desire to reform criminals and to
prevent future crime
O Owner – government
E Environmental constraints – criminals exist.
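The CATWOE check above lends itself to a simple completeness test. The sketch below (a representation of our own, not part of SSM itself) records the worked analysis as a dictionary and confirms that all six elements have been answered.

```python
# A small sketch: recording the CATWOE analysis for the prison-system
# root definition above, and checking all six elements have been supplied.
CATWOE_ELEMENTS = ["Customer", "Actors", "Transformation process",
                   "Weltanschauung", "Owner", "Environmental constraints"]

analysis = {
    "Customer": "society",
    "Actors": "prisoners and prison staff",
    "Transformation process": ("need for convicted criminals to be locked "
                               "away from society -> that need being met"),
    "Weltanschauung": "desire to reform criminals and prevent future crime",
    "Owner": "government",
    "Environmental constraints": "criminals exist",
}

# Any element left unanswered shows up in the missing list.
missing = [e for e in CATWOE_ELEMENTS if not analysis.get(e)]
print("complete" if not missing else f"missing: {missing}")
```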
Note that it may be possible to derive alternative answers for one or
more of these CATWOE elements from a root definition. For example, the
transformation above could be from unreformed criminals to reformed
criminals.
The root definition and CATWOE have at their centre the transformation
process (the T in CATWOE): what does the system defined by this root
definition do? This can be seen diagrammatically as:
Input → Transformation → Output.

Activity
Think of a number of possible transformations for a football match, for a bus service and
for a shop. Use the different actors in the system and the different physical objects to help
you (for example, how does the shop transform customers, how does it transform the
goods on sale, etc?).

Activity
Using one transformation from each of the three activities above (the football match, the
bus service, and the shop) develop a root definition and CATWOE.

Stage 4: Building conceptual models


In SSM a model is a diagram of activities with links connecting them. It:
• is developed from the root definition
• uses verbs or action statements describing the activities that are
needed by the root definition
• links these activities according to logical dependencies: an arrow
directed from activity x to activity y, namely x → y, says that activity y
is dependent on activity x having been carried out
• should contain between five and nine activities.
The model should contain a monitoring and control subsystem which
monitors:
• the effectiveness of the system (is this the right thing to do?)
• the efficacy of the system (does it work?)
• the efficiency of the system (does it use the minimum resources
necessary?).
In building the model, the measures used for these controls need to be
determined.
The process of developing root definitions and models can be followed to
expand subsections of the overall model (that is, they can be developed
for an individual activity/activities in the main model built). Models
encompassing the initial root definition can also be built.
With regard to our root definition given above some activities could be:
• appreciate what constitutes humane conditions
• identify appropriate medical care
• identify training needs
• identify skills to be learnt
• identify appropriate support at the end of the sentence
• reform the criminal.
We could link these as shown in the conceptual model (Figure 2.6, below).
With regard to the monitoring and control subsystem (not shown in the
model for simplicity) this should address:
• the effectiveness of the system (is this the right thing to do?) – this
could be the number of statements on the need to reform criminals
made by the government
• the efficacy of the system (does it work?) – this could be the
proportion of criminals reformed
• the efficiency of the system (does it use the minimum resources
necessary?) – this could be the number of criminals dealt with per
pound spent.


[Figure 2.6: the conceptual model, linking the activities ‘Appreciate what constitutes humane conditions’, ‘Identify appropriate medical care’, ‘Identify training needs’, ‘Identify skills to be learnt’ and ‘Identify appropriate support at end of sentence’, leading to ‘Reform criminal’.]
Figure 2.6

Stage 5: Comparing models with the real world


This stage involves comparing the models that have been developed
with the real world. A systematic way to do this is by ordered questions,
namely for each and every activity and link in the model, ask the following
questions:
• Does this happen in the real situation?
• How?
• By what criteria is it judged?
• Is it a subject of concern in the current situation?
This stage is designed to provide structure and substance to an organised
debate about improving the current situation.
With reference to our model above, for example, we might identify that
we are not making sufficient effort to carry out the activity ‘Identify
appropriate medical care.’
Of the soft OR approaches considered in this chapter SSM is (in my view)
the most powerful/applicable. Thinking clearly and logically about what
constitutes an ideal system and then comparing it to the real world can
plainly yield insights and ideas that might enable us to make the real
world a little more like our ideal world.

Stage 6: Identifying changes


This stage involves identifying changes that could be made to the real
world system, changes that appear worth trying, to those participating in
the SSM process. These changes need to be systematically desirable
and culturally feasible.
With reference to our model above, for example, we might identify putting
more resources into medical assessment/care of prisoners as a change to
be made.

Stage 7: Taking action


This stage involves putting into practice the most appropriate changes
identified in the previous stage.
Activity
Think of a problem of your own and apply Soft Systems Methodology to the problem.


Strategic choice (SC)


Activity/Reading
For this section read Rosenhead, Chapter 6.

SC identifies four modes of decision making activity:


• shaping – considering the structure of the decision problems
• designing – considering the possible courses of action
• comparing – comparing the possible courses of action
• choosing – choosing courses of action
with the facilitator identifying when to switch between modes, as
appropriate.
A key theme underlying SC is the identification of uncertainty areas.

Uncertainty areas
SC identifies three types of uncertainty:
• Uncertainty about the working Environment (UE), reduced by a
technical response (e.g. collecting data, surveys, numeric analysis).
• Uncertainty about guiding Values (UV), reduced by a political
response (e.g. clarifying objectives, consulting interest groups, asking
higher authorities for their opinions).
• Uncertainty about Related decision fields (UR), also known as
Uncertainty about choices on Related agendas, reduced by an
exploration of structural relationships (e.g. adopting a broader
perspective, negotiating/collaborating with other decision makers,
looking at the links between a decision that might be made by
ourselves and decisions that might be made by others).
Throughout the strategic choice process:
• areas of uncertainty are listed as they arise, and
• are classified by UE/UV/UR.
In the last mode listed above, the choosing mode, these uncertainty areas
are addressed in the context of proposed decisions.

Shaping mode
In the shaping mode decision areas are identified as questions. These
are simply areas where alternative courses of action are possible (i.e. a
choice is possible). These decision areas are then presented on a decision
graph, where:
• each area is a node on the graph
• a link (edge) between two nodes (areas) exists if there is thought to
be a significant possibility of different outcomes if the two areas are
considered separately, rather than together.
Figure 2.7 shows one possible decision graph for our crime example.


[Figure 2.7: a decision graph with four nodes – ‘Build more prisons?’, ‘Improve prison medical care?’, ‘Increase rewards for informing?’ and ‘Impose longer sentences?’ – with links between those decision areas judged to be connected.]
Figure 2.7
Once the decision graph has been drawn, areas of problem focus –
consisting of three or four decision areas – need to be identified. The areas
chosen are generally those which are important, urgent and/or connected.
For our crime example above we will have one problem focus based on the
areas:
• build more prisons?
• impose longer sentences?
• increase rewards for informing?
With regard to uncertainty we will have just one factor in our uncertainty
list, namely:
• Can we find sites to build more prisons? Classified UE.

Designing mode
In the designing mode we take each problem focus in turn and:
• List a small number (say two to five) of mutually exclusive possible
courses of action (options) in each of the decision areas.
• List incompatible options in different decision areas (note all options
in the same decision area are incompatible as they are mutually
exclusive); this can be done graphically if so desired using an option
graph.
• List (enumerate) all the possible feasible decision schemes where
a feasible decision scheme consists of one option from each of the
decision areas and none of the options chosen are incompatible.
For our crime example with the problem focus based on the areas:
• build more prisons?
• impose longer sentences?
• increase rewards for informing?
We have the options in each of these decision areas of:
• build more prisons?
no
yes – five more
yes – 10 more
• impose longer sentences?
no
yes


• increase rewards for informing?


no
yes
Incompatibilities between options in this example are shown below, where a
link in that option graph indicates two options (in different problem areas)
that are, in our judgement, incompatible. Note here that this contrasts with
the meaning of the lines in the decision graph above – there lines indicated
areas that were connected and needed to be considered together – here lines
indicate incompatibilities.
[Figure 2.8: an option graph showing the options in each decision area – ‘Build more prisons?’ (no / yes – five more / yes – 10 more), ‘Impose longer sentences?’ (no / yes) and ‘Increase rewards for informing?’ (no / yes) – with links joining pairs of options judged incompatible.]
Figure 2.8
With three options in one decision area and two in the other two there are
3(2)(2) = 12 possible decision schemes, although some of these will not be
feasible as they involve incompatible options.
These 12 schemes are listed below:

Build more prisons?   Impose longer sentences?   Increase rewards for informing?
no                    no                         no
no                    yes                        no
no                    no                         yes
no                    yes                        yes
yes – five more       no                         no
yes – five more       yes                        no
yes – five more       no                         yes
yes – five more       yes                        yes
yes – 10 more         no                         no
yes – 10 more         yes                        no
yes – 10 more         no                         yes
yes – 10 more         yes                        yes
Table 2.1
Checking each of these schemes we find that for this example there are just
three possible feasible decision schemes (labelled A, B and C below) which are:

Scheme   Build more prisons?   Impose longer sentences?   Increase rewards for informing?
A        no                    no                         no
B        yes – five more       yes                        yes
C        yes – 10 more         yes                        yes
Table 2.2
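The enumeration and filtering just described is mechanical enough to sketch in code. Note that the incompatible option pairs below are assumptions on our part (they stand in for the judgements recorded in the option graph), chosen so that the feasible schemes come out as A, B and C in Table 2.2.

```python
from itertools import product

# Decision areas and their mutually exclusive options (designing mode).
areas = {
    "build": ["no", "yes - five more", "yes - 10 more"],
    "sentences": ["no", "yes"],
    "rewards": ["no", "yes"],
}

# Hypothetical incompatible option pairs, standing in for the option graph.
incompatible = {
    frozenset([("build", "no"), ("sentences", "yes")]),
    frozenset([("build", "no"), ("rewards", "yes")]),
    frozenset([("build", "yes - five more"), ("sentences", "no")]),
    frozenset([("build", "yes - five more"), ("rewards", "no")]),
    frozenset([("build", "yes - 10 more"), ("sentences", "no")]),
    frozenset([("build", "yes - 10 more"), ("rewards", "no")]),
}

def feasible(scheme):
    """A scheme is feasible if no pair of its options is incompatible."""
    pairs = [frozenset([a, b]) for i, a in enumerate(scheme) for b in scheme[i + 1:]]
    return not any(p in incompatible for p in pairs)

# Enumerate one option from each decision area: 3 x 2 x 2 = 12 schemes.
names = list(areas)
all_schemes = [tuple(zip(names, combo)) for combo in product(*areas.values())]
feasible_schemes = [s for s in all_schemes if feasible(s)]

print(len(all_schemes))       # 12 possible decision schemes
print(len(feasible_schemes))  # 3 feasible schemes
```

Running the sketch reproduces the counts in the text: 12 possible schemes, of which 3 survive the incompatibility check.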


With regard to uncertainty, our uncertainty list, after the addition of two
more factors, becomes:
• Can we find sites to build more prisons? Classified UE.
• Will 10 prisons be too many? Classified UE.
• Will the government/judiciary support longer sentences? Classified UV.

Comparing mode
In the comparing mode we compare each of the feasible decision schemes.
This is done by:
• identifying comparison areas
• within each area, assigning each feasible decision scheme a value.
The values chosen can be monetary sums or values chosen from some
scale (e.g. rank on a scale from 1 to 10).
Based on this assignment of values particular schemes may be selected
for closer analysis, either individually or as members of a shortlist. A
common approach is to compare, in a pairwise fashion, all members of the
shortlist. In this pairwise comparison the uncertainty areas are explicitly
considered to identify those uncertainty areas relating to the schemes
being compared.
For our crime example we could compare our feasible decision schemes
(three in this case) with respect to the comparison areas of:
• capital cost (in £’million terms)
• running cost (in £’million terms)
• acceptability to government (from 1 (almost unacceptable) to 5
(neutral) to 10 (very acceptable))
• acceptability to the public (from 1 (almost unacceptable) to 5
(neutral) to 10 (very acceptable)).
We present these numbers below:

Scheme   Capital cost (£m)   Running cost (£m)   Government acceptability   Public acceptability
A        0                   0                   3                          5
B        200                 40                  5                          3
C        400                 75                  9                          1

Table 2.3
Selecting schemes A (no more prisons, no longer sentences and no
increased rewards for informing) and B (five more prisons, longer
sentences and increased rewards for informing) for pairwise comparison
we have:
                           Scheme A   Scheme B
Capital cost (£m)          0          200
Running cost (£m)          0          40
Government acceptability   3          5
Public acceptability       5          3

Table 2.4
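The pairwise comparison can be sketched as follows, using the comparison-area values from Table 2.3; setting each pair of shortlisted schemes side by side mirrors Table 2.4.

```python
from itertools import combinations

# Comparison-area values for each feasible decision scheme (Table 2.3).
scores = {
    "A": {"capital cost (£m)": 0, "running cost (£m)": 0,
          "government acceptability": 3, "public acceptability": 5},
    "B": {"capital cost (£m)": 200, "running cost (£m)": 40,
          "government acceptability": 5, "public acceptability": 3},
    "C": {"capital cost (£m)": 400, "running cost (£m)": 75,
          "government acceptability": 9, "public acceptability": 1},
}

# Every pairwise comparison of the shortlisted schemes: (A,B), (A,C), (B,C).
for s1, s2 in combinations(scores, 2):
    print(f"Scheme {s1} versus scheme {s2}")
    for area in scores[s1]:
        print(f"  {area}: {scores[s1][area]} vs {scores[s2][area]}")
```

With three shortlisted schemes there are three pairwise comparisons; with four schemes, A to D, there would be six.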


Recalling the uncertainty list:


• Can we find sites to build more prisons? Classified UE.
• Will 10 prisons be too many? Classified UE.
• Will the government/judiciary support longer sentences? Classified UV.
The only relevant uncertainties for this pairwise comparison of schemes
A and B relate to scheme B (five more prisons, longer sentences and
increased rewards for informing) and are:
• Can we find sites to build more prisons? Classified UE.
• Will the government/judiciary support longer sentences? Classified UV.
These uncertainty areas are hence ones that will need attention if we are
to make a choice between these two schemes. Recall here that all feasible
decision schemes, three in this example, are mutually exclusive by the way
they were constructed and so, at the end of the process, a choice of just
one of them can be made.

Activity
Given four schemes, labelled A to D, what pairs of comparisons would be made?

Choosing mode
In the choosing mode a commitment package (i.e. what we are
proposing to do) is decided upon (or more than one package for
submission to higher authorities). A commitment package is guided by the
preferred feasible decision scheme and consists of:
• decisions taken now
• explorations to reduce levels of uncertainty (together with estimates
of resources needed and timescales)
• decisions deferred until later
• any contingency plans.
With regard to our crime example if we assume scheme B (five more
prisons, longer sentences and increased rewards for informing) is the
preferred feasible decision scheme then the relevant uncertainty areas are:
• Can we find sites to build more prisons? Classified UE.
• Will the government/judiciary support longer sentences? Classified UV.
The commitment package might be:
• decisions taken now – none
• explorations
study to identify provisional sites for five prisons (costing
£1,000,000 and taking three months)
consult government/judiciary about support for longer sentences
(negligible cost, likely to take up to six months)
• decisions deferred – final decision on scheme B until explorations to
reduce levels of uncertainty completed
• contingency plans – none.
Note that the actual decision scheme that we choose may be altered by the
results of the explorations (for example if our explorations reveal there are
no sites available for five prisons).


Activity
Think of a problem of your own and apply Strategic Choice to the problem.

Choosing and applying PSMs


Suppose that you are faced with a situation that you think can be tackled
using a PSM. Which PSM should you choose? Probably the simplest
answer is that you should choose the PSM with which you are most
familiar. One aspect of PSMs is that ‘success’, however defined, depends
significantly on the skill/knowledge/experience of the facilitator who
guides the process.
Obviously applying any technique, whether a PSM or one of the more
quantitative techniques considered later in this subject guide, is always a
matter of skill/knowledge/experience. The question is how much time and
effort is required to gain that skill/knowledge/experience. My view would
be that the time and effort required is significantly longer for a PSM than
for many of the more quantitative techniques considered in later chapters
(probably at least an order of magnitude longer).
Suppose though that you are equally experienced (or inexperienced) in
the three PSMs considered in this chapter – which one should you choose?
My advice would be to choose SSM. In my view the process of thinking
clearly about the ‘ideal world’ that is inherent in SSM and then comparing
that ideal world against the real world is a powerful one.

Education
You will find that the majority of the topics in this subject guide deal
with quantitative/analytic topics. This chapter is the only main chapter
that is predominantly qualitative in nature. Hence it is natural to ask
whether you, as a student, perhaps quantitatively skilled, should just focus
on quantitative topics and miss this chapter out completely in terms of
engaging with this subject guide. Clearly that could be done. However,
you need to be clear that the purpose of this subject guide is not only to
prepare you for the examination, it is also to educate you. Obviously that
education is assessed by a single examination in terms of the University
of London, but you should be aware that education is for life. Even
though you may not focus on this chapter in terms of preparing for the
examination that does not mean it is of no value. After you graduate how
many years of working life do you think you will face? Anticipating being
a millionaire and living a life of leisure by the age of 35? Dream on! Like
most of us, myself included, you will work hard all your life – ‘life is hard
and then you die’ to quote the phrase. During all those years of working
life maybe you will use some of the quantitative topics which you engaged
with for the examination. Equally, knowing that problem structuring
methods exist, and having some knowledge of what they deal with may,
over those working years, be valuable at some point. Time will tell.

Links to other chapters


The topics considered in this chapter do not directly link to other
chapters in this subject guide. This chapter introduced methods that can be
used when the problem to be faced is essentially qualitative in nature, and
a more quantitative (analytic) approach would be inappropriate. Many of
the other chapters in this subject guide deal with quantitative topics.

45
MN3032 Management science methods

Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
SSM
www.learnaboutor.co.uk/strategicProblems/c_s_1frs.htm
Journey Making
www.learnaboutor.co.uk/strategicProblems/c_j_1frs.htm

A reminder of your learning outcomes


Having completed this chapter, and the Essential reading and activities,
you should ensure that you can describe and explain:

General
• the common features of problem structuring methods.

Journey Making
• how cognitive maps can structure individual and group views of a
problem, including the identification of goals and options, and how to
produce and structure a cognitive map
• the Journey Making process.

Soft Systems Methodology (SSM)


• how root definitions and CATWOE are produced and used to describe
pure systems
• the seven stage SSM process.

Strategic Choice (SC)


• how uncertainties can be classified and reduced
• the four modes of working and the SC process.

Sample examination questions


For Sample examination questions relating to the material presented in
this chapter please see the VLE.


Chapter 3: Network analysis

Essential reading
Anderson, Chapter 9.

Spreadsheet
network.xls
• Sheet A: Calculation for project completion time
• Sheet B: Calculation for project completion time with delay activity
added
• Sheet C: Resource information
• Sheet D: Gantt chart calculated from Sheet C
• Sheet E: Resource usage chart calculated from Sheet C
This spreadsheet can be downloaded from the VLE.

Aims of the chapter


The aims of this chapter are to:
• introduce some simple approaches that can be useful in project
management
• give some historical background to these approaches
• illustrate how these approaches can be applied to a simple project
• discuss a number of issues that arise in more complex projects.

Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• draw a network diagram
• calculate the project completion time
• calculate the earliest start time, latest start time and float time for each
activity
• identify the critical activities/critical path(s)
• explain the effects of uncertain activity times and cost/time trade-offs
• explain resource smoothing
• explain the benefits of network analysis.

Introduction
Network analysis is the general name given to certain specific techniques
which can be used for the planning, management and control of projects.
One definition of a project, from the Project Management Institute, is: a
temporary endeavour undertaken to create a ‘unique’ product
or service.


This definition highlights some essential features of a project:


• it is temporary – it has a beginning and an end
• it is ‘unique’ in some way.
With regard to the use of the word unique I personally prefer to use the
idea of ‘non-repetitive’ or ‘non-routine’ (for example, building the very first
Boeing Jumbo jet was a project, but building them now is a repetitive/
routine manufacturing process).
We can think of many projects in real life (e.g. building a large multi-
storey office, developing a new drug, etc.).
Typically all projects can be broken down into:
• Separate activities (tasks/jobs) – where each activity has an
associated duration or completion time (i.e. the time from the start
of the activity to its finish).
• Precedence relationships – which govern the order in which we
may perform the activities (for example, in a project concerned with
building a house the activity ‘erect all four walls’ must be finished
before the activity ‘put roof on’ can start).
The problem is to bring all these activities together in a coherent fashion
to complete the project.
Network analysis is a vital technique in project management. It
enables us to take a systematic quantitative structured approach
to the problem of managing a project through to successful completion.
Moreover, as will become clear below, it has a graphical representation
which means it can be understood and used by those with a less technical
background. Projects typically involve many inter-related activities and
one of the strengths of the techniques presented in this chapter is that they
enable a project manager to adopt a systematic approach to both planning
the project and to managing the project through to successful completion.
The techniques you will meet in this chapter are not a magic recipe for
project management; that is to say, using them does not automatically
guarantee a successful project! However it is probably true to say that
attempting a project without using them increases the
probability of failure (i.e. you are more likely to fail if you neglect to use
them). We hope that by the time you finish studying this chapter you also
will be convinced of the value of these techniques.

Historical background
Two different techniques for network analysis were developed
independently in the late 1950s. These were:
• PERT (for Program Evaluation and Review Technique)
• CPM (for Critical Path Method).
PERT was developed to aid the US Navy in the planning and control of
its Polaris missile project. This was a project to build a strategic weapons
system, namely the first submarine-launched intercontinental ballistic
missile, at the time of the Cold War between the USA and the Soviet Union. Hence
there was a strategic emphasis on completing the Polaris project as
quickly as possible; cost was not an issue. However, no one had ever
built a submarine-launched intercontinental ballistic missile before, so
dealing with uncertainty was a key issue. PERT has the ability to cope with
uncertain activity completion times (e.g. for a particular activity the most


likely completion time is four weeks but it could be any time between
three weeks and eight weeks).
CPM was developed as a result of a joint effort by the DuPont Company
and Remington Rand Univac. As these were commercial companies, cost
was an issue (unlike the Polaris project considered above). In CPM the
emphasis is on the trade-off between the cost of the project and its overall
completion time (e.g. for certain activities it may be possible to decrease
their completion times by spending more money. How does this affect the
overall completion time of the project?)
Modern commercial software packages tend to blur the distinction between
PERT and CPM and include options for uncertain activity completion times
and project completion time/project cost trade-off analysis. Note here that
many such packages exist for doing network analysis.
There is no clear terminology in the literature and you will see this area
referred to by the phrases: network analysis, PERT, CPM, PERT/CPM,
critical path analysis and project planning.

Example
We will illustrate network analysis with reference to the following example: suppose that
we are going to carry out a minor redesign of a product and its associated packaging.
We intend to test market this redesigned product and then revise it in the light of the test
market results, finally presenting the results to the Board of the company.
The key question is:
How long will it take to complete this project?

After much thought we have identified the following list of separate
activities together with their associated completion times (assumed to be
known with certainty).

Activity number   Activity                                              Completion time (weeks)
1 Redesign product 6
2 Redesign packaging 2
3 Order and receive components for redesigned product 3
4 Order and receive material for redesigned packaging 2
5 Assemble products 4
6 Make up packaging 1
7 Package redesigned product 1
8 Test market redesigned product 6
9 Revise redesigned product 3
10 Revise redesigned packaging 1
11 Present results to the Board 1

Table 3.1

Activity
Think of a small project, either at work or at home (for example, painting a room). List on
a piece of paper the activities associated with this project and their associated completion
times.


Aside from this list of activities we must also prepare a list of precedence
relationships indicating activities which, because of the logic of the
situation, must be finished before other activities can start (e.g. in the
above list Activity 1 must be finished before Activity 3 can start).
It is important to note that, for clarity, we try to keep this list to a
minimum by specifying only immediate relationships: that is, relationships
involving activities that ‘occur near to each other in time’.
For example, it is plain that Activity 1 must be finished before Activity 9
can start but these two activities can hardly be said to have an immediate
relationship (since many other activities after Activity 1 need to be finished
before we can start Activity 9).
Activities 8 and 9 would be examples of activities that have an immediate
relationship (Activity 8 must be finished before Activity 9 can start).
Note here that specifying non‑immediate relationships merely complicates
the calculations that need to be done – it does not affect the final
result. Note too that, in the real world, the consequences of missing out
precedence relationships are much more serious than the consequences of
including unnecessary (non-immediate) relationships.
Again, after much thought (and aided by the fact that we listed the
activities in a logical/chronological order), we come up with the following
list of immediate precedence relationships.
Activity number                          Activity number
1        must be finished before         3 can start
2        must be finished before         4 can start
3        must be finished before         5 can start
4        must be finished before         6 can start
5, 6     must be finished before         7 can start
7        must be finished before         8 can start
8        must be finished before         9 can start
8        must be finished before         10 can start
9, 10    must be finished before         11 can start

Table 3.2
The key to constructing this table is, for each activity in turn, to ask the
question:
‘What activities must be finished before this activity can start?’
Note here that:
• Activities 1 and 2 do not appear in the right hand column of the above
table. This is because there are no activities which must finish before
they can start (i.e. both Activities 1 and 2 can start immediately).
• Two activities (5 and 6) must be finished before Activity 7 can start.
• It is plain from this table that non-immediate precedence relationships
(e.g. ‘Activity 1 must be finished before Activity 9 can start’) need not
be included in the list since they can be deduced from the relationships
already in the list.
Activity
For the project you thought of previously construct on a piece of paper a list of
precedence relationships.


Once we have completed our list of activities and our list of precedence
relationships we combine them into a diagram (called a network – which is
where the name network analysis comes from).
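Although this subject guide works with spreadsheets, it may help to see that the two lists above amount to two simple data structures. The following Python sketch is our own illustration (the names `durations` and `precedence` are ours, not part of the Essential reading):

```python
# Activity completion times in weeks (Table 3.1).
durations = {1: 6, 2: 2, 3: 3, 4: 2, 5: 4, 6: 1,
             7: 1, 8: 6, 9: 3, 10: 1, 11: 1}
# Immediate precedence relationships (Table 3.2): (i, j) means
# Activity i must be finished before Activity j can start.
precedence = [(1, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 7),
              (7, 8), (8, 9), (8, 10), (9, 11), (10, 11)]

# Activities that never appear as a successor have no predecessors,
# so they can start immediately.
starts = [j for j in durations if not any(j == b for (_, b) in precedence)]
print(starts)    # [1, 2]
```

This confirms the observation made above that Activities 1 and 2 can both start immediately.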

Network construction
Activity/Reading
For this section read Anderson, Chapter 9, section 9.1.

In the network shown below, each node (circle) represents an activity and is
labelled with the activity number and the associated completion time (shown
in brackets after the activity number).
[Activity-on-node network diagram. Each node shows 'activity number
(completion time)': 1(6), 2(2), 3(3), 4(2), 5(4), 6(1), 7(1), 8(6), 9(3),
10(1), 11(1), with arrows corresponding to the precedence relationships
in Table 3.2.]
Figure 3.1
This network is an activity on node (AON) network.
In constructing the network we:
• draw a node for each activity
• add an arrow from (activity) node i to (activity) node j if Activity i must
be finished before Activity j can start (Activity i precedes Activity j).
Note here that all arcs have arrows attached to them (indicating the direction
the project is flowing in).
One tip that I find useful in drawing such diagrams is to structure the
positioning of the nodes (activities) so that the activities at the start of the
project are at the left, the activities at the end of the project at the right, and
the project ‘flows’ from left to right in a natural fashion.
Note here one key point: the above network diagram assumes
that activities not linked by precedence relationships can take
place simultaneously (e.g. at the start of the project we could be
doing Activity 1 at the same time as we are doing Activity 2).
Essentially the above diagram is not needed for a computer – a computer can
cope very well (indeed better) with just the list of activities and their precedence
relationships we had before. The above diagram is intended for people.
Consider what might happen in a large project – perhaps many thousands or
tens of thousands of activities and their associated precedence relationships.
Do you think it would be possible to list those out without making any errors?
Obviously not – so how can we spot errors? Looking at long lists in an attempt
to spot errors is just hopeless. With a little practice it becomes easy to look at
diagrams such as that shown above and interpret them and spot any errors in
the specification of the activities and their associated precedence relationships.


Activity
Without looking at the network we have drawn above, draw for yourself the network
associated with the example given above. Does what you have drawn correspond to what
is shown above or not?
Draw the network for the project you thought of previously (from your list of activities and
precedence relationships).

Once having drawn the network it is a relatively easy matter to analyse it
to find the critical path.
Below we repeat the network diagram for the problem we were
considering before. However, note that we have now added a dummy
activity (12) with a completion time of zero to represent the end of the
project. This just makes the calculations we have to do easier to follow.
[The same activity-on-node network as Figure 3.1, with a dummy end
activity 12(0) added after Activity 11.]
Figure 3.2

Drawing the network


Students frequently find difficulty in drawing a correct network. Be clear
here: there are two different ways to draw a network, activity on node
(AON), which we have used above, and activity on arc (AOA), which we
have not presented in this chapter.
Our strong advice here is: activity on node is easier than activity on arc.
The structured way to draw a network diagram is:
1. Take each of the precedence relationships in turn and include them in
the diagram.
2. If there are any time lags involved in the network (e.g. such as might
be encountered in an examination question) then for each time lag:
   add a dummy activity
   connect the dummy activity to the activities involved in the time lag.
3. Check that all the activities are included in the network.
4. Include a dummy start node if the network needs one: all activities
with no incoming arc are connected to the dummy start node.
5. Include a dummy end node if the network needs one: all activities
with no outgoing arc are connected to the dummy end node.

Earliest start time calculation


In order to analyse this network we first calculate, for each node (activity)
in the network, the earliest start time for that activity such that all
preceding activities have been finished. We do this below.

Let Ei represent the earliest start time for activity i such that
all its preceding activities have been finished. We calculate the
values of the Ei (i = 1, 2,..., 12) by going forward, from left to right, in the
network diagram. To ease the notation let Ti be the activity completion time
associated with activity i (e.g. T5 = 4). Then the Ei are given by:
E1 = 0 (assuming we start at time zero)
E2 = 0 (assuming we start at time zero)
E3 = E1 + T1 = 0 + 6 = 6
E4 = E2 + T2 = 0 + 2 = 2
E5 = E3 + T3 = 6 + 3 = 9
E6 = E4 + T4 = 2 + 2 = 4
E7 = max[E5 + T5, E6 + T6] = max[9 + 4, 4 + 1] = 13
E8 = E7 + T7 = 13 + 1 = 14
E9 = E8 + T8 = 14 + 6 = 20
E10 = E8 + T8 = 14 + 6 = 20
E11 = max[E9 + T9, E10 + T10] = max[20 + 3, 20 + 1] = 23
E12 = E11 + T11 = 23 + 1 = 24

Hence 24 (weeks) is the minimum time needed to complete all the activities
and hence is the minimum overall project completion time.
Note here that the formal definition of the earliest start times is given by:
Ej = max[Ei + Ti | i one of the activities linked to j by an arc from i to j]
Conceptually we can think of this earliest start time calculation as finding
the length of the longest path in the network (consider walking from the
left-hand side of the network, to the right-hand side, through the nodes,
where the completion time at each node indicates how long we must wait
at the node before we can move on). However, because of the risk
of error, we should always carry out the above calculation
explicitly, rather than relying on the eye/brain to inspect
the network to spot the longest path in the network. This
inspection approach is infeasible anyway for large networks.
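For large networks the forward pass is carried out by machine. As a sketch of how that might look (this Python illustration is our own, not part of the Essential reading; it relies on the fact that the activity numbering in our example happens to be a valid processing order):

```python
# Forward pass: E_j = max over predecessors i of (E_i + T_i).
T = {1: 6, 2: 2, 3: 3, 4: 2, 5: 4, 6: 1, 7: 1, 8: 6,
     9: 3, 10: 1, 11: 1, 12: 0}          # 12 is the dummy end activity
arcs = [(1, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 7),
        (7, 8), (8, 9), (8, 10), (9, 11), (10, 11), (11, 12)]

preds = {j: [i for (i, k) in arcs if k == j] for j in T}
E = {}
for j in sorted(T):                       # every arc goes from a lower to a higher number
    E[j] = max((E[i] + T[i] for i in preds[j]), default=0)

print(E[12])                              # minimum project completion time: 24
```

Running this reproduces the hand calculation above, with activities having no predecessors (1 and 2) starting at time zero via the `default=0`.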
As well as the minimum overall project completion time calculated above we
can extract additional useful information from the network diagram by the
calculation of latest start times. We deal with this below.

Latest start time calculation


Let Li represent the latest time we can start activity i and still
complete the project in the minimum overall completion time.
We calculate the values of the Li (i = 1, 2,..., 12) by going backward, from
right to left, in the network diagram. Hence:
L12 = 24
L11 = L12 - T11 = 24 - 1 = 23
L10 = L11 - T10 = 23 - 1 = 22
L9 = L11 - T9 = 23 - 3 = 20
L8 = min[L9 - T8, L10 - T8] = min[20 - 6, 22 - 6] = 14
L7 = L8 - T7 = 14 - 1 = 13
L6 = L7 - T6 = 13 - 1 = 12
L5 = L7 - T5 = 13 - 4 = 9


L4 = L6 - T4 = 12 - 2 = 10
L3 = L5 - T3 = 9 - 3 = 6
L2 = L4 - T2 = 10 - 2 = 8
L1 = L3 - T1 = 6 - 6 = 0

The formal definition of the latest start times is given by:


Li = min[Lj - Ti | j one of the activities linked to i by an arc from i to j]
Note that as a check that we have done both the earliest start times and
latest start times calculations correctly:
• all latest start times must be ≥ 0
• at least one activity must have a latest start time of zero.
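The backward pass mirrors the forward one, running right to left and starting from the project completion time of 24 at the dummy end activity. As an illustrative Python sketch (our own, not part of the Essential reading):

```python
# Backward pass: L_i = min over successors j of (L_j - T_i).
T = {1: 6, 2: 2, 3: 3, 4: 2, 5: 4, 6: 1, 7: 1, 8: 6,
     9: 3, 10: 1, 11: 1, 12: 0}
arcs = [(1, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 7),
        (7, 8), (8, 9), (8, 10), (9, 11), (10, 11), (11, 12)]

succs = {i: [j for (k, j) in arcs if k == i] for i in T}
L = {12: 24}                      # 24 weeks from the earliest start calculation
for i in sorted(T, reverse=True):
    if i != 12:
        L[i] = min(L[j] - T[i] for j in succs[i])

print(L[1], L[2], L[10])          # 0 8 22, matching the figures above
```

Note that the two checks above hold for this output: every latest start time is ≥ 0, and Activity 1 has a latest start time of zero.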
In fact using the latest start times Li and the concept of float we can
identify which activities are critical in the above network in the sense that
if a critical activity takes longer than its estimated completion
time the overall project completion time will increase. We deal
with this below.

Float
As we know the earliest start time Ei, and latest start time Li, for each
Activity i, it is clear that the amount of slack or float time Fi available
is given by Fi = Li - Ei which is the amount by which we can increase the
time taken to complete Activity i without changing (increasing) the overall
project completion time. Hence we can form the table below:
Activity Li Ei Float Fi
1 0 0 0
2 8 0 8
3 6 6 0
4 10 2 8
5 9 9 0
6 12 4 8
7 13 13 0
8 14 14 0
9 20 20 0
10 22 20 2
11 23 23 0

Table 3.3
Any activity with a float of zero is critical. Note here that, as a check, all
float values should be ≥ 0.
The float figures derived are also known as total float as in the above
example a ‘chain’ of successive activities (in this case 2, 4 and 6) share the
same float and this is common with total float.
The float value is defined, for each activity, as the amount of time that
each activity can be delayed without altering (increasing) the overall
project completion time. If delays occur in two or more activities then we
must recalculate the project completion time. Many textbooks also refer to
float by the term ‘slack’.


Critical path
Activities with a slack of zero are called critical activities since they must
all be completed on time to avoid increasing the overall project completion
time. Hence, for this network, activities 1, 3, 5, 7, 8, 9 and 11 are the
critical activities.
Activity
If any of the critical activities are delayed, will this affect the overall project completion
time or not and why?

Note here that 1–3–5–7–8–9–11 constitutes a path from the initial node
(node 1) to the final node (node 11) in our network diagram. This is no
accident because, for any network, there will always be a path of critical
activities from the initial node to the final node. Such a path is called the
critical path. Note too here that the sum of the completion times for the
activities on the critical path is equal to the project completion time.
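Putting the forward pass, backward pass and floats together in a single illustrative Python sketch (again our own illustration, not part of the Essential reading), the critical activities drop straight out:

```python
# Floats F_i = L_i - E_i; activities with zero float are critical.
T = {1: 6, 2: 2, 3: 3, 4: 2, 5: 4, 6: 1, 7: 1, 8: 6,
     9: 3, 10: 1, 11: 1, 12: 0}
arcs = [(1, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 7),
        (7, 8), (8, 9), (8, 10), (9, 11), (10, 11), (11, 12)]
preds = {j: [i for (i, k) in arcs if k == j] for j in T}
succs = {i: [j for (k, j) in arcs if k == i] for i in T}

E = {}                                           # forward pass
for j in sorted(T):
    E[j] = max((E[i] + T[i] for i in preds[j]), default=0)
L = {12: E[12]}                                  # backward pass
for i in sorted(T, reverse=True):
    if i != 12:
        L[i] = min(L[j] - T[i] for j in succs[i])

F = {i: L[i] - E[i] for i in T if i != 12}       # floats, as in Table 3.3
critical = [i for i in sorted(F) if F[i] == 0]
print(critical)                                  # [1, 3, 5, 7, 8, 9, 11]
```

This also lets you verify the check noted above: the completion times of the critical activities sum to the project completion time of 24 weeks.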

Activity
Try for yourself the example given above and see if you agree with the float values
presented above.

Activity
For the project you thought of previously, calculate the minimum overall project
completion time. What are the critical activities?

Activity
Can there be more than one critical path? Hint: consider the same example as given
above but with the completion time for Activity 10 increased to three weeks.

Checks
If you analyse a project network then there are a number of numeric
checks that can be applied to check the accuracy of your calculations.
These are checks in the sense that if you fail any of these checks then you
must have gone wrong somewhere. Conversely, passing all the checks
does not absolutely guarantee that you are correct, although it does (for
example, in an examination situation) enable you to have some confidence
in your calculations. These checks are:
• All activities have floats ≥ 0 - it is good practice to explicitly give a
table of floats (for non-critical activities) so that you are sure that you
have calculated floats for all of them.
• There exists at least one path of critical activities from the start of the
project to the end of the project.
• All activities on a critical path have float zero.
• The sum of the completion times for the activities on a critical path is
equal to the project completion time.
• If an activity has float zero then it must be on at least one critical path.
In addition, be clear that:
• If activity completion times increase (and the precedence relationships
remain unchanged) the project completion time cannot decrease.
• If you have a change in the precedence relationships and/or two or
more activity completion times change, you need to recalculate.


We should emphasise this last point here since (particularly in an


examination situation) students fail to fully grasp it. So, if you have
a change in the precedence relationships and/or two or more activity
completion times change then:
if
you simply need the new project completion time you can do a partial
(forward) calculation of earliest start times
but
if you need the new critical path(s) and/or float times you need a full
recalculation (so recalculate both earliest start and latest start times).

Excel solution
Now examine Sheet A in the spreadsheet associated with this chapter, as
shown below. You will see there that the data for the example considered
above have already been entered and Excel shows the project completion
time as 24 (in cell C14), just as we calculated above.

Spreadsheet 3.1
You can see above that the cells in the sheet that you can change relate
to the completion times for the activities. The underlying precedence
relationships have been incorporated into the Excel logic and cannot be
changed. Note that the sheet indicates (column H) whether a particular
activity is critical or not. Float times are also given in column G.
Of course the advantage of a spreadsheet is that it can easily recalculate
the situation if we change anything. For example suppose the completion
time for Activity 1 increases to eight weeks – it is easy to confirm from the
spreadsheet that the project completion time increases to 26 weeks (as we
would suspect as Activity 1 is critical and delaying it will also delay the
completion of the entire project).
Note here that we have (implicitly) assumed in calculating this figure of 24
weeks that we have sufficient resources to enable activities to be carried
out simultaneously if required (e.g. Activities 1 and 2 can be carried out
simultaneously).

Activity
Is it possible to complete the project in 23 weeks or not and why?

Activity
If the completion time for Activity 2 increases to five weeks, will this affect the overall
project completion time or not? If it does affect the completion time what will the new
completion time be?


Sheet A also lists, for each activity:


• Earliest start: this is the earliest possible time that an activity can
begin. All immediate predecessors must be finished before an activity
can start.
• Earliest finish: this is the earliest possible time that an activity can
be finished (= earliest start time + activity completion time).
• Latest start: this is the latest time that an activity can begin and not
delay the completion time of the overall project. If the earliest start
and latest start times are the same then the activity is critical.
• Latest finish: this is the latest time that an activity can be finished
and not delay the completion time of the overall project (= latest start
time + activity completion time). As with start times, the activity is
critical if the earliest finish and latest finish times are the same.
Note also:
a. There may be more than one critical path – in fact it often makes more
sense to talk about critical activities rather than the critical path.
b. The larger the slack the less critical the activity (e.g. what would
happen to the overall project completion time if the completion time
for Activity 6 increased by five)?
c. Be aware that, both in the textbooks and in the literature, different
ways of performing network analysis are presented – in particular:
   different definitions of slack
   different network diagrams (exchanging the role of nodes and
   arcs) – as previously stated there are two types of network
   diagram, activity on node (AON) which we have used above and
   activity on arc (AOA) which we have not presented in this chapter
   different notation conventions.

Delay activities
A situation that is often encountered is that of a delay activity. By this we
mean that a specified time must elapse between the end of one activity
and the start of another. Delays can also be viewed as waiting – you
have to wait a certain time between the end of one activity and the start
of another. Incorporating such delays into a network diagram is an easy
task. Each delay adds an additional activity to the diagram. For example,
consider the network diagram shown in Figure 3.1. Suppose now that
we have the following situation:
• There must be a delay of 16 weeks (or more) between the end of
Activity 3 and the start of Activity 9.
Note here the use of the phrase ‘or more’. Strictly we cannot guarantee
that there is a delay of exactly 16 weeks between the end of Activity
3 and the start of Activity 9. The precise delay that occurs depends
upon the other activities in the project. However, we can impose
the condition of the delay being a certain time period or longer, i.e.
mathematically the delay is ≥ a specified value. For this reason we
often drop the ‘or more’ when talking of delays and implicitly assume
that we mean a delay of at least the period given. So here we might
equally say:
• There must be a delay of 16 weeks between the end of Activity 3 and
the start of Activity 9.


To incorporate this delay we simply add an activity, running from
Activity 3 in Figure 3.1 to Activity 9 in Figure 3.1, with a duration
(completion time) of 16 weeks.
Activity
Add this delay activity to Figure 3.1.

Now with this delay activity added we can carry out the same calculation
for earliest and latest times as we carried out above.
Activity
Compute the earliest and latest times, as well as the project completion time and the
float times, for the network as in Figure 3.1 but with the delay activity added.

The Excel solution when this delay activity is added can be seen in Sheet B
of the spreadsheet:

Spreadsheet 3.2
Here we can see that the project completion time is now 29 weeks with
the critical path being composed of Activities 1, 3, 9, 11 and the delay
activity. Note here how the delay activity can itself be critical.
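As a check on Sheet B, the delay can be added to an illustrative forward-pass sketch in Python (our own illustration, not part of the Essential reading). Note that the loop can no longer simply process activities in number order, since the delay activity (numbered 13 here) precedes Activity 9:

```python
# Forward pass with a 16-week delay activity (13) between Activities 3 and 9.
T = {1: 6, 2: 2, 3: 3, 4: 2, 5: 4, 6: 1, 7: 1, 8: 6,
     9: 3, 10: 1, 11: 1, 12: 0, 13: 16}
arcs = [(1, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 7), (7, 8),
        (8, 9), (8, 10), (9, 11), (10, 11), (11, 12),
        (3, 13), (13, 9)]                 # the delay activity and its two arcs

preds = {j: [i for (i, k) in arcs if k == j] for j in T}
E, todo = {}, set(T)
while todo:                               # process any activity whose predecessors are done
    j = next(j for j in todo if all(i in E for i in preds[j]))
    E[j] = max((E[i] + T[i] for i in preds[j]), default=0)
    todo.remove(j)

print(E[12])                              # project completion time: 29 weeks
```

This reproduces the 29 weeks shown in Sheet B: the delay pushes the earliest start of Activity 9 out to week 25 (= 6 + 3 + 16), so the path through the delay activity becomes critical.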

Network analysis – extensions


There are several important extensions to the basic network analysis
technique and these relate to:
• uncertain activity completion times
• project time/project cost trade-off
• resource restrictions.
We shall deal with these in turn.

Uncertain activity completion times


Activity/Reading
For this section read Anderson, Chapter 9, section 9.2.
In this extension to the basic method we give, for each activity, not a single
completion time but three times:
• optimistic time (t1): the completion time if all goes well
• most likely time (t2): the completion time we would expect under
normal conditions
• pessimistic time (t3): the completion time if things go badly.


This use of three time estimates is the PERT technique.


These three times are combined into a single figure – the expected activity
completion time, given by (t1 + 4t2 + t3)/6 – and this figure is used as the
activity completion time when carrying out the calculations presented
before to find the project completion time and the critical activities.
Note here that this weighting of optimistic:most likely:pessimistic of
1/6:4/6:1/6 is essentially fixed and cannot be altered (as the underlying
theory depends on these weights).
In addition, through the use of the beta and normal probability
distributions, we can get an idea of how the project completion time might
vary (remember we no longer know the individual activity completion
times with certainty).
Essentially we can find answers to questions like:
What is the probability that:
• the project will take longer than...?
• the project will be finished by...?
• a particular activity will take longer than...?
• a particular activity will be finished by...?
For the purposes of this subject, you will not be expected to know how to
answer such questions numerically, merely to be aware that such questions
can be asked and answered.
To provide more insight into this extension to the basic method suppose
we have an activity for which the optimistic time is two weeks, the most
likely time is five weeks and the pessimistic time is 11 weeks. So we have
t1 = 2, t2 = 5 and t3 = 11. Then the expected activity completion time is given by (t1 + 4t2 + t3)/6 = 5.5 weeks. So we will expect that this activity will take 5.5 weeks (note that fractional values are perfectly permissible; many things in life take a fraction of a week).
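The expected-time formula can be checked in a couple of lines of Python, using the three time estimates from the worked example:

```python
def expected_time(t1, t2, t3):
    # Fixed 1/6 : 4/6 : 1/6 weighting of optimistic : most likely : pessimistic.
    return (t1 + 4 * t2 + t3) / 6

print(expected_time(2, 5, 11))  # 5.5 weeks, as in the text
```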
The beta completion time distribution for this activity, which is the
probability distribution governing the completion time for this activity
given these three time estimates of t1=2, t2=5, t3 = 11, is (approximately)
as shown in Figure 3.3.

Figure 3.3: Variation in completion time, beta distribution.


In this figure the highest point on the probability curve (with probability of approximately 0.011) corresponds to the most likely time t2 = 5. Notice how the distribution is not symmetric (for example, compare the left-hand side of the distribution between times 2 and 3 and the right-hand side of the distribution between times 10 and 11). If the distribution were symmetric these would be mirror images of each other; clearly here they are not.

Project time/project cost trade-off


Activity/Reading
For this section read Anderson, Chapter 9, section 9.3.

In this extension to the basic method we assume that, for each activity, the
completion time can be reduced (within limits) by spending more money
on the activity. Essentially, each activity now has more than one possible
completion time (depending upon how much money we are willing to
spend on it).
This use of cost information is the CPM technique.
A common assumption is to say that for each activity the completion time
can lie in a range with a linear relationship holding between cost and
activity completion time within this range (as illustrated below).

Figure 3.4
Reducing an activity completion time is known as ‘crashing’ the activity
and, for a given project completion time, the problem is to identify which
activities to crash (and by how much) so as to minimise the total cost of
achieving the desired (given) project completion time. This can be done
using linear programming. For the purposes of this subject you will not be
expected to know how to do this, merely to be aware that this is how cost
crashing is done.
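Purely for illustration (the subject does not require this), the crashing trade-off can be sketched for a made-up two-activity serial project. The text notes that such problems are solved with linear programming; to keep the sketch self-contained we simply enumerate whole-week crashing amounts instead. Activities P and Q, their crash limits and their crashing costs are all hypothetical.

```python
from itertools import product

# Made-up data: two activities in series, normal times 5 and 5 weeks.
normal = {"P": 5, "Q": 5}
max_crash = {"P": 2, "Q": 1}        # maximum weeks each can be crashed
cost_per_week = {"P": 10, "Q": 20}  # crashing cost (£K per week)
target = 7                          # desired project completion time

best = None
for xp, xq in product(range(max_crash["P"] + 1), range(max_crash["Q"] + 1)):
    # Feasible only if the crashed durations meet the target completion time.
    if (normal["P"] - xp) + (normal["Q"] - xq) <= target:
        cost = xp * cost_per_week["P"] + xq * cost_per_week["Q"]
        if best is None or cost < best[0]:
            best = (cost, xp, xq)

print(best)  # (40, 2, 1): crash P by 2 weeks and Q by 1, total cost £40K
```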

Resource restrictions
Typically, in real-world network analysis, each activity has associated with
it some resources (such as people, machinery, materials, etc.). We mentioned
before that, in calculating the minimum overall project completion time,
we took no account of any resource restrictions. To illustrate how network
analysis can be extended to deal with resource restrictions consider the
activity on node network we had before in the case of certainty with
respect to activity completion times, for which the network diagram is
reproduced below.


Figure 3.5: The activity-on-node network reproduced, with each node labelled as activity(duration in weeks): 1(6), 2(2), 3(3), 4(2), 5(4), 6(1), 7(1), 8(6), 9(3), 10(1), 11(1).
The Gantt chart below, taken from Sheet D (which reflects the information shown in Sheet C), illustrates when each activity can take place. To meet the minimum project completion time of 24 weeks for the above project, critical activities must take place at fixed points in time. For non-critical activities, however, we have flexibility (within limits) as to precisely when these activities take place.

Figure 3.6
To remind you of the interpretation of the Gantt chart above we have
(somewhat unconventionally) shown time on the vertical axis and each
activity along the horizontal axis. The solid column joins the earliest start
and earliest finish times for each activity.
A key point to grasp here is that in order for the project to be completed
on time (here a completion time of 24 weeks) all critical activities must
start at their earliest start times and finish at their earliest finish times (i.e.
we have no flexibility as to when those activities occur).
Recall that for this project the critical activities are 1, 3, 5, 7, 8, 9 and 11
and the non-critical activities are 2, 4, 6 and 10.
We consider just one resource restricted problem, resource smoothing
(also known as resource levelling).

Resource smoothing
Suppose now that we have just one resource (people) associated with each
activity and that the number of people required is:
• two people for Activity 1
• one person for each of the other activities (Activities 2 to 11 inclusive).


Sheet C of the spreadsheet is shown (in part) below.

Spreadsheet 3.3
Column C in that spreadsheet gives the resource usage for each activity,
column D is the suggested start time (if we wish to impose our own
suggested start time for each activity). There are other cells in the
spreadsheet beyond column I that contain values but these are concerned
with internal calculations in order to calculate a resource profile.
Suppose now that we decide:
• we wish to meet the minimum overall project completion time of 24
weeks (hence implying that the times at which critical activities occur
are fixed)
• we wish to start all non-critical activities at their earliest possible start
times
then in the light of these decisions what does the plot (profile) of resource
usage (number of people used) against time look like?
Using Sheet E of our spreadsheet, the plot of resource usage against time is:

Figure 3.7
The peak of the resource profile is associated with the start of the project,
when Activity 1 requires two people and the other activities, which are
being performed simultaneously with Activity 1, require one person.
A key question is:
What resource usage profile would you have most liked to
have seen here?
Clearly the ideal is a constant profile of resource against time (i.e. a constant usage of resource over time). This is because variations from a constant (straight line) profile most likely cost us extra money – either in terms of hiring extra resource to cover peaks in the resource profile, or in terms of unutilised resources when we have troughs in the resource profile. Although it may not be possible to achieve an ideal resource profile, we should keep this ideal in mind.
Now another key question is:
Is the resource profile that we see fixed or can it be altered?
The simple answer is that the resource profile is not fixed. It can be
altered. Simply put, changing the start time for an activity changes the
resource profile. This is illustrated below where we have changed the
start time for Activity 2 to Time 2. It can be seen that the project is still
completed in 24 weeks, but the resource profile is different.

Spreadsheet 3.4

Figure 3.8
So is our current usage of resource ideal? Plainly not, but what flexibility
do we have? If we still wish to complete in 24 weeks we can do nothing
with regard to the critical activities.
However, we have some choice for the non-critical activities. Recall that
such activities have an associated float (slack) time. We could artificially
delay starting some of these activities. If we do so the resource profile will
change, maybe for the better. Indeed this is what we did implicitly above.
There, in delaying the start of Activity 2 until Time 2, we still completed
the project on time, but had a different resource profile. In fact for this
particular example it is easy to see that delaying starting Activity 2 until
Time 6 leads to a better resource profile, and is still feasible in terms of the
24 weeks completion time.
Using our spreadsheet to delay starting Activity 2 until Time 6 (when Activity 1 will have finished), we have:


Spreadsheet 3.5

Figure 3.9
Here we clearly have a resource profile more in line with our ideal of a
constant resource usage.
It is important to note however that artificially delaying the start of non-
critical activities in order to improve a resource profile is not free. Rather
it costs us. Simply put, time lost by delaying the start of an activity cannot
later be regained (if things do not turn out as planned) given the desired
fixed completion time for the overall project.
This has illustrated the resource smoothing or resource levelling
problem, which can be stated as:
• Given a fixed overall project completion time (which we know is
feasible with respect to the resource constraints) ‘smooth’ the usage of
resources over the timescale of the project (so that, for example, we do
not get large changes between one week and the next in the number
of people we need).
This smoothing process makes creative use of float to artificially delay
activities in order to smooth resource usage.
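The effect of delaying a non-critical activity on the resource profile can be sketched as follows. The three activities, their durations, start times and resource usages below are made up for illustration; they are not the activities of Figure 3.5.

```python
activities = {
    # name: (start time, duration, people required) - hypothetical data
    "A": (0, 6, 2),
    "B": (0, 2, 1),
    "C": (6, 3, 1),
}

def profile(activities, horizon):
    """Count the people in use in each time period."""
    usage = [0] * horizon
    for start, duration, people in activities.values():
        for t in range(start, start + duration):
            usage[t] += people
    return usage

print(profile(activities, 9))  # [3, 3, 2, 2, 2, 2, 1, 1, 1]: peak of 3

# Delaying non-critical activity B until A finishes smooths the peak.
activities["B"] = (6, 2, 1)
print(profile(activities, 9))  # [2, 2, 2, 2, 2, 2, 2, 2, 1]: peak of 2
```

As in the text, the second profile is closer to the ideal of constant resource usage, at the cost of using up Activity B's float.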
There are two disadvantages to smoothing:
• If we have multiple resources then we may well find that smoothing
one resource makes another resource less smooth, hence we have to
make trade-offs between resource profiles.
• Time lost cannot be regained, once we have delayed an activity to
smooth a resource profile then, if things go wrong later (e.g. some
activities take longer than we planned) we cannot regain the time we
have lost.


Network analysis – benefits


Network analysis is nowadays very widely used so it might be profitable to
consider the benefits that using network analysis can bring to a project.

Structure
Forming the list of activities, precedence relationships and activity
completion times structures thought about the project and clearly indicates
the separate activities that we are going to have to undertake, their
relationship to one another and how long each activity will take. Hence
network analysis is useful at the planning stage of the project.

Management
Once the project has started then the basic idea is that we focus
management attention on the critical activities (since if these are delayed
the entire project is likely to be delayed). It is relatively easy to update
the network, at regular intervals, with details of any activities that have
been finished, revised activity completion times, new activities added to
the network, changes in precedence relationships, etc and recalculate the
overall project completion time. This gives us an important management
tool for managing (controlling) the project.
Plainly it is also possible to ask (and answer) ‘what if’ questions relatively
easily (e.g. what if a particular activity takes twice as long as expected –
how will this affect the overall project completion time?).
It is also possible to identify activities that, at the start of the project, were
non-critical but which, as the project progresses, approach the status of
being critical. This enables the project manager to ‘head off’ any crisis that
might be caused by suddenly finding that a previously neglected activity
has gone critical.

Activity
Think about projects you have been involved with, either at work, at home, or in your
community. Would these projects have benefited from applying the techniques presented
in this chapter or not? Next time you have a project to do will you attempt to apply the
techniques presented in this chapter or not? If not, why not?

Network analysis – state of the art


Computer packages are widely available for network analysis (including
packages on PCs). Typically such packages will:
• draw network diagrams
• calculate critical activities, and the overall project completion time
• cope with uncertain activity times
• perform project time/project cost trade-off
• deal with multiple projects
• provide facilities for updating the network as the project progresses.
Problems involving thousands of activities can be easily handled on a PC.

Links to other chapters


The topics considered in this chapter do not directly link to other
chapters in this subject guide. At a more general level the link between this
chapter and other chapters in this subject guide is the use of a quantitative
(analytic) approach to a problem.

Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title – Anderson (page number)
• Nokia networks – 371
• Hospital revenue bond at Seasongood & Mayer – 381
• Kimberly-Clark Europe – 391

A reminder of your learning outcomes


Having completed this chapter, and the Essential reading and activities,
you should be able to:
• draw a network diagram
• calculate the project completion time
• calculate the earliest start time, latest start time and float time for each
activity
• identify the critical activities/critical path(s)
• explain the effects of uncertain activity times and cost/time trade-offs
• explain resource smoothing
• explain the benefits of network analysis.

Sample examination questions


For Sample examination questions relating to the material presented in
this chapter please visit the VLE.


Chapter 4: Decision making under uncertainty

Essential reading
Anderson, Chapter 13, excluding section 13.6.

Spreadsheet
• dectree.xls
• Sheet A: Solution of the pay-off table example
• Sheet B: Solution of the decision tree example
• Sheet C: Solution of the decision tree example, simulation.
This spreadsheet can be downloaded from the VLE.

Aims of the chapter


The aims of this chapter are to:
• discuss a number of techniques that can be applied when we have to
make a decision, but aspects associated with the decision to be made
are uncertain (unknown)
• illustrate how these techniques can be applied to simple problems
• discuss the use of simulation in decision trees
• discuss some issues that might arise in more complicated problems.

Learning outcomes
By the end of this chapter, and having completed the Essential reading,
you should be able to:
• construct a pay-off table for a problem and analyse it numerically
using the standard decision criteria: optimistic, conservative
(pessimistic), regret, equally likely and expected monetary value
• draw a decision tree for a problem
• calculate expected monetary values
• process the tree to arrive at a suggested course of action
• calculate the upside and downside of any decision
• perform sensitivity analysis
• calculate the expected values associated with perfect information
• understand the use of simulation in decision trees
• understand the use of utilities in place of monetary values.

Introduction
People make personal choices all the time. For example should I accept
a particular job offer or not? Should I marry this person or not? In the
business world choices must also be made all the time. For example,
should our company apply for a particular contract or not? Even if we do
apply, what price should we bid for the contract?

In such problems chance (or probability) plays an important role.


Considering the contract example given above, different bid prices may
have different probabilities of being accepted (a low price may have a high
probability of being accepted, a higher price less chance of being accepted).
Obviously such probability information needs to be incorporated into
any procedure for reaching a decision. Of course, it may be that we have
no real idea of what the probabilities are – in which case can we make a
reasonable decision or are we restricted simply to guessing what to do?
Here the key factor that must be incorporated is uncertainty – we are
not sure. Pay-off tables and decision trees are two specific techniques for
structuring and analysing decision problems which involve uncertainty.
In addition, as we shall see below, we can cope with the making of
decisions in situations involving uncertainty irrespective of whether we
have chance/probability information or not. We will illustrate both pay-off
tables and decision trees by use of examples.
Note here that it is possible that you may have encountered pay-off tables/
decision trees before in studying another subject associated with the
External System. If so, then this chapter will predominantly cover material
you have already studied.

Pay-off table example


Activity/Reading
For this section read Anderson, Chapter 13, sections 13.1–13.3.

Example
Consider the example of a company that can invest in a new product produced by another
company. Depending upon how much they want to invest they are entitled to a specified
share of the profits made by the product over the next year. If they invest £80m they are
entitled to 50 per cent of the profits, but if they invest £35m only 25 per cent of the profits.
Of course they could choose not to invest at all. There are three scenarios for the demand
for the product, high, medium or low and in these cases the total profit from the product
would be £300m, £200m and £50m respectively. What should the company do?

Activity
Reflect on this problem for five minutes. Would you invest £80m, £35m or choose not to
invest? Why or why not? Record your decision (and the reasons for it) here.

Pay-off table solution


The standard approach to problems of this type is to construct a pay-off
table, also known as a decision table, indicating for each of the possible
decisions the pay-off that the company receives. Look at Sheet A in the
spreadsheet associated with this chapter. You will see:

Spreadsheet 4.1


Cells A2 to A4 show the three choices, labelled A, B and C, of investing £80m, £35m, or nothing respectively. Cells F2 to F4 show the profit for
each of the demand scenarios and cells D8 to F10 show the pay-off (profit
obtained by the company) given each of their possible decisions (choices)
and each of the possible demand scenarios. For example the profit of 15
(£m) shown in cell E9 corresponds with the company making choice B,
investing 35 and being entitled to 25 per cent of the profit that occurs
when the outcome is the medium demand scenario with a total profit of
200 (so that the pay-off/profit the company gains is 25 per cent of 200 less
35 = 15). There are other cells in this spreadsheet (not currently shown
above) but these do not concern us for now.
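The pay-off calculation just described (pay-off = share of profit × total profit − investment) can be sketched in a few lines of Python, using the figures from the example; the spreadsheet's cell layout is not reproduced.

```python
# Each choice: (investment in £m, share of profit entitled to).
choices = {"A": (80, 0.50), "B": (35, 0.25), "C": (0, 0.0)}
# Total product profit (£m) under each demand scenario.
scenario_profit = {"high": 300, "medium": 200, "low": 50}

payoff = {
    c: {s: share * p - invest for s, p in scenario_profit.items()}
    for c, (invest, share) in choices.items()
}
print(payoff["B"]["medium"])  # 15.0, as in cell E9
print(payoff["A"])            # {'high': 70.0, 'medium': 20.0, 'low': -55.0}
```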
The key points here are:
• the company can only choose one of the three possible choices
• the demand will be one of the three possible scenarios.
It is the uncertainty in demand, which scenario will occur, that creates the
decision problem here. If we knew which demand scenario was going to
occur (what demand was going to be) over the next year then the decision
as to which choice to make would be trivial (if high choose A, if medium
choose A and if low choose C – why?)
Assume for the moment that we have no idea of the probabilities
associated with each of the demand scenarios. There are a number of
standard decision criteria that can be used as considered below.
Note here first though that ALL of these criteria, in terms of the numeric
procedure adopted, assume that the pay-off table has the format that:
• the rows are our ‘choices’; things we control
• the columns are ‘outcomes’; things we do not control.
Therefore you should always ensure that the pay-off table is in an appropriate
format before attempting to apply any of the criteria considered below.
Optimistic – maximax – maximise the maximum pay-off for
each choice
Find the maximum pay-off for each possible decision and then choose the decision whose maximum pay-off is largest. Here the maximum pay-offs associated with A, B and C are 70, 40 and 0 respectively, so the maximum
of these is 70 corresponding to choice A. This is an optimistic criterion
since we make the choice that will lead to the largest possible pay-off
provided we have the most favourable demand situation. Note here that
the actual outcome given this choice of A will be either 70 (as we hope) or
20 or -55 depending upon the demand that occurs.
Conservative (pessimistic) – maximin – maximise the
minimum pay-off for each choice
Find the minimum pay-off for each possible decision and then choose the decision whose minimum pay-off is largest. Here the minimum pay-offs associated with A, B and C are –55, –22.5 and 0 respectively, so the
maximum of these is 0 corresponding to choice C. This is a conservative/
pessimistic criterion since we make the choice that will lead to the largest
possible pay-off even if we have the most unfavourable demand situation.
Regret – minimax – minimise the maximum regret for each
choice
For each choice/demand scenario calculate the difference between the
pay-off for the best choice we could have made if we had known in
advance the demand that was going to occur and the pay-off for that


choice/demand scenario. These values are known as the regret values.
They represent the opportunity loss we have incurred by not making the
best possible choice for the demand that did occur. These regret values can
also be seen in Sheet A as below.

Spreadsheet 4.2
Here the regret value of 5 for cell E15 is associated with choice B and the
medium demand scenario. The best choice we could have made if we had
known in advance medium demand was going to occur would have been
choice A with a pay-off of 20. As in cell E15 we have made choice B, we
only obtain a pay-off of 15 so our regret is 20 – 15 = 5. For each of the
demand scenarios columns (D13 to D16, E13 to E16 and F13 to F16) the
regret values shown are calculated by taking the maximum pay-off for that
demand scenario and subtracting from it the particular pay-off values for
the choice/scenario being considered. Note here that small regret values
are better than large regret values.
Having calculated the regret values we calculate the maximum regret for
each choice, so here 55 for A, 30 for B and 70 for C and make the choice
that minimises this maximum regret – here choice B with a value of 30.
The reasoning here is that if we make choice B then we will not ‘miss out’
too much by having made a wrong choice irrespective of the demand that
occurs. If we make choice B then if high demand occurs we are ‘missing
out’ on 30, if medium demand occurs we are ‘missing out’ on 5 and if low
demand occurs we are ‘missing out’ on 22.5. Note here that the actual
outcome given this choice of B will be either 40 or 15 or -22.5 depending
upon the demand that occurs.
All of the above have not taken any probability information into account.
There are two standard decision criteria approaches to introducing
probability into the situation and we consider these below.
Equally likely – maximise average pay-off
Here we assume that each demand scenario is equally likely (so they all
have equal probability). We calculate the average pay-off for each choice
and take the maximum of these values. Here the average pay-offs for A,
B and C are 11.67, 10.83 and 0 respectively and the maximum of these is
11.67 associated with choice A. This decision criterion is also known as the
Laplace criterion.
Expected monetary value – maximise the probability weighted
pay-off
Here we assume we have information as to the probability of each demand
scenario. We calculate the expected monetary value (EMV) which is a
probability weighted average for each choice and take the maximum
of these values. Examining Sheet A, we have the probabilities associated with the demand scenarios as 0.2 for high, 0.3 for medium and 0.5 for low (note that these probabilities must sum to one as these three scenarios are the only possibilities for the demand that might occur).

Spreadsheet 4.3
The pay-offs for choice A for high, medium and low are 70, 20 and –55
respectively so the EMV for choice A is hence 70(0.2) + 20(0.3) + –55(0.5)
= –7.5, as in cell H8 above. As can be seen here EMV is the probability
weighted average of the numeric monetary outcomes.
The EMV for choice B is 1.25 and for choice C is 0 (cells H9 and H10
above) and so the maximum EMV is 1.25 associated with choice B.
It can be seen above that for each of the decision criteria the spreadsheet
shows the decision that is ‘best’ and its associated value.

Discussion
We considered above a number of different standard decision criteria
for our particular example and it can be seen that, depending upon the
criteria used, A, B or C could be chosen. In some senses we have achieved
nothing as we knew when we first considered the problem that we could
choose A, B or C. However, we have articulated a number of decision
criteria, each with their own logic, that enable us to systematically take the
decision problem and reach a logical decision in a numeric way. The fact
that there is no ‘one best way’ to reach a decision in problems such as the
one considered above is just a fact of life. Instead the usual approach is to
consider the different decisions arrived at using all the criteria and then to
somehow select from them a unique final decision.
There is no ‘best way’ to choose a unique final decision and various ideas
have been proposed, such as:
• voting – choose the decision that is most popular over a number of
criteria
• personal preference – choose the decision criterion that best suits
your personal preferences (e.g. a risk-taker might use the optimistic
criterion; someone who is risk-averse might use the conservative
criterion) and choose the decision based on that criterion.

Activity
Recall the activity above where you considered whether you would invest £80m, £35m
or nothing. Do you think you should revise that decision in the light of the calculations
carried out above or not? Why or why not?


Decision tree example


Activity/Reading
For this section read Anderson, Chapter 13, sections 13.1, 13.3–13.5.

Example
A company faces a decision with regard to a product (codenamed M997) developed by
one of its research laboratories. It has to decide whether to proceed to test market M997
or whether to drop it completely. It is estimated that test marketing will cost £100K. Past
experience indicates that only 30 per cent of products are successful in a test market.
If M997 is successful at the test market stage then the company faces a further decision
relating to the size of plant to set up to produce M997. A small plant will cost £150K to
build and produce 2,000 units a year whereas a large plant will cost £250K to build but
produce 4,000 units a year.
The marketing department has estimated that there is a 40 per cent chance that the
competition will respond with a similar product and that the price per unit sold (in £) will
be as follows (assuming all production sold):
                              Large plant    Small plant
Competition respond                20             35
Competition do not respond         50             65
Assuming that the life of the market for M997 is estimated to be seven years and that the
yearly plant running costs are £50K (both sizes of plant – to make the numbers easier!)
should the company go ahead and test market M997?

Activity
Reflect on this problem for five minutes. Would you advise test marketing M997 or not?
Why or why not? Record your decision (and the reasons for it) here.

Decision tree solution


Although the above example is somewhat simplified it clearly represents
the type of decision that often has to be made about new products.
In particular note how we cannot separate the test market
decision from any decisions about the future profitability (if
any) of the product if test marketing is successful.
Consider Figure 4.1 below where we have drawn the decision tree for
the problem.
Figure 4.1: Decision tree for the M997 problem. The structure of the tree (decision, chance and terminal nodes) is:
• Node 1 (test market decision): Alternative 1, drop M997 (£0), leads to terminal node 2; Alternative 2, test market M997 (£100K), leads to chance node 3.
• Node 3 (chance): not successful in test market (probability 0.7) leads to terminal node 4; successful in test market (probability 0.3) leads to decision node 5.
• Node 5 (plant size decision): Alternative 3, small plant (£150K), leads to chance node 6; Alternative 4, large plant (£250K), leads to chance node 9; Alternative 5, no plant (£0), leads to terminal node 12.
• Node 6 (chance): no competition (probability 0.6) leads to terminal node 7; competition (probability 0.4) leads to terminal node 8.
• Node 9 (chance): no competition (probability 0.6) leads to terminal node 10; competition (probability 0.4) leads to terminal node 11.

In Figure 4.1 we have three types of node represented:


• Decision nodes; these represent points at which the company has to
make a choice of one alternative from a number of possible alternatives
e.g. at the first decision node the company has to choose one of the two
alternatives ‘drop M997’ or ‘test market M997’.
• Chance nodes; these represent points at which chance, or probability,
plays a dominant role and reflect alternatives over which the company
has (effectively) no control.
• Terminal nodes; these represent the ends of paths from left to right
through the decision tree.
It is worth saying here that the difficult part of the decision tree
technique is drawing up a diagram such as Figure 4.1 from
the written description of the problem. Once that has
been done the solution procedure is quite straightforward. Note
here that most, but not all, decision trees start with a decision
node. One tip that may help you to draw decision trees is to ask
yourself the question ‘What happens next?’ at each point in the tree
as you draw it.
Note the inclusion of the ‘no plant’ alternative at the plant size decision node.
This is necessary because it simply may not be profitable to build
any plant (large or small) even if the product is successful in the test market.
It is common in decision tree problems to find that at decision nodes we
need a ‘do nothing’ alternative which is an implicit decision that can be
taken.
It is important for the decision tree to be drawn so that there is a unique
path in the tree from the initial node to each of the terminal nodes.
To ease the discussion of the decision tree we have numbered the nodes
(decision/chance/terminal) 1, 2, 3,..., 12. At each decision node we have
also numbered the alternatives, at node 1 we have alternatives 1 and 2 and
at node 5 alternatives 3, 4 and 5.

Activity
Draw the decision tree by yourself using the information about M997 given before. Does
what you have drawn correspond to the decision tree presented above or not?

Activity
Consider the decision tree presented above. Does it cause you to revise your previous
decision about whether or not to test market M997? If so, why?

Although the decision tree diagram does help us to see more clearly the
nature of the problem it has not, so far, helped us to decide whether to drop
M997 or whether to test market it (the decision we are trying to make!). To
do this we have two steps as illustrated below.
In these steps we will need to use information (numbers)
relating to future sales, prices, costs, etc. Although we may not
be able to give accurate figures for these we need to factor such
figures into our calculations if we are to proceed. Investigating
how our decision to test market or not might change as these
figures change (i.e. sensitivity analysis) can be done once we
have carried out the basic calculations using our assumed
figures.


Step 1
In this step we, for each path through the decision tree from the initial
node to a terminal node of a branch, work out the profit (in £) involved
in that path. Essentially in this step we work from the left-hand side of the
diagram to the right-hand side.
• path to terminal node 2 – we drop M997
• Total revenue = 0
Total cost = 0
Total profit = 0
Note that we ignore here (and below) any money already spent
on developing M997 (that being a sunk cost, namely a cost that
cannot be altered no matter what our future decisions are, so
logically it has no part to play in deciding future decisions).
• path to terminal node 4 – we test market M997 (cost £100K) but then
find it is not successful so we drop it
• Total revenue = 0
Total cost = 100
Total profit = –100 (all figures in £K)
• path to terminal node 7 – we test market M997 (cost £100K), find it
is successful, build a small plant (cost £150K) and find we are without
competition (revenue for seven years at 2,000 units a year at £65 per
unit = £910K)
• Total revenue = 910
Total cost = 250 + 7 × 50 (running cost)
Total profit = 310
• path to terminal node 8 – we test market M997 (cost £100K), find
it is successful, build a small plant (cost £150K) and find we have
competition (revenue for seven years at 2,000 units a year at £35 per
unit = £490K)
• Total revenue = 490
Total cost = 250 + 7 × 50
Total profit = –110
• path to terminal node 10 – we test market M997 (cost £100K), find it
is successful, build a large plant (cost £250K) and find we are without
competition (revenue for seven years at 4,000 units a year at £50 per
unit = £1,400K)
• Total revenue = 1,400
Total cost = 350 + 7 × 50
Total profit = 700
• path to terminal node 11 – we test market M997 (cost £100K), find
it is successful, build a large plant (cost £250K) and find we have
competition (revenue for seven years at 4,000 units a year at £20 per
unit = £560K)
• Total revenue = 560
Total cost = 350 + 7 × 50
Total profit = –140
• path to terminal node 12 – we test market M997 (cost £100K), find it
is successful, but decide not to build a plant
• Total revenue = 0
Total cost = 100
Total profit = –100
74
Chapter 4: Decision making under uncertainty

As mentioned previously, we include this last option because, even if the
product is successful in test market, we may not be able to make sufficient
revenue from it to cover any plant construction and running costs.
Hence we can form the table below indicating, for each branch, the total
profit involved in that branch from the initial node to the terminal node.
Terminal node Total profit (£K)
2 0
4 –100
7 310
8 –110
10 700
11 –140
12 –100
Table 4.1
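The Step 1 arithmetic can be sketched in a few lines of Python (all figures in £K; the variable and function names are illustrative, not part of the guide's spreadsheet):

```python
# Profit for each root-to-terminal path: revenue minus all costs on the path (£K).
TEST_COST, SMALL_PLANT, LARGE_PLANT, RUNNING = 100, 150, 250, 50
YEARS = 7

def path_profit(revenue, costs):
    """Total profit of one path through the decision tree."""
    return revenue - sum(costs)

profits = {
    2:  path_profit(0, []),                  # drop M997
    4:  path_profit(0, [TEST_COST]),         # test market, not successful
    7:  path_profit(YEARS * 2000 * 65 / 1000, [TEST_COST, SMALL_PLANT, YEARS * RUNNING]),
    8:  path_profit(YEARS * 2000 * 35 / 1000, [TEST_COST, SMALL_PLANT, YEARS * RUNNING]),
    10: path_profit(YEARS * 4000 * 50 / 1000, [TEST_COST, LARGE_PLANT, YEARS * RUNNING]),
    11: path_profit(YEARS * 4000 * 20 / 1000, [TEST_COST, LARGE_PLANT, YEARS * RUNNING]),
    12: path_profit(0, [TEST_COST]),         # successful, but build no plant
}
```

This reproduces the figures in Table 4.1 (e.g. 310 for terminal node 7 and 700 for terminal node 10).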

Activity
Repeat the calculations for profits associated with terminal nodes given above, but
without looking at the subject guide. Do you get the correct answers or not?

So far we have not made use of the probabilities in the problem – this
we do in the second step where we work from the right-hand side of the
diagram back to the left-hand side.

Step 2
Consider chance node 6 with branches to terminal nodes 7 and 8
emanating from it. The branch to terminal node 7 occurs with probability
0.6 and total profit 310K whilst the branch to terminal node 8 occurs with
probability 0.4 and total profit –110K.
Hence the expected monetary value (EMV) of this chance node is
given by:
0.6 × (310) + 0.4 × (–110) = 142 (£K)
Essentially this figure represents the expected (or average) profit from this
chance node (60 per cent of the time we get £310K and 40 per cent of the
time we get –£110K so on average we get (0.6 × (310) + 0.4 × (–110))
= 142 (£K)).
The EMV for any chance node is defined by ‘sum over all branches,
the probability of the branch multiplied by the monetary (£)
value of the branch’. Exactly as before when we considered pay-off
tables above it is a probability weighted average of the numeric monetary
outcomes.
Hence the EMV for chance node 9 with branches to terminal nodes 10 and
11 emanating from it is given by
0.6 × (700) + 0.4 × (−140) = 364 (£K)
where 700 relates to terminal node 10 and −140 to terminal node 11.
We can now picture the decision node relating to the size of plant to
build as below where the chance nodes have been replaced by their
corresponding EMVs.


Figure 4.2: The plant size decision (node 5) with the chance nodes replaced by their corresponding EMVs: Alternative 3, small plant (£150K), EMV = 142K; Alternative 4, large plant (£250K), EMV = 364K; Alternative 5, no plant, EMV = –100K.
Hence at the plant decision node we have the three alternatives:
• Alternative 3: build small plant EMV = 142K
• Alternative 4: build large plant EMV = 364K
• Alternative 5: build no plant EMV = –100K
It is clear that, in £ terms, alternative number 4 is the most attractive and
so we can discard the other two alternatives, giving the revised decision
tree shown below.

Figure 4.3: The revised decision tree. At decision node 1 we have Alternative 1, drop M997 (£0), EMV = 0, and Alternative 2, test market M997 (£100K), leading to chance node 3 with branches ‘not successful in test market’ (probability 0.7, EMV = −100) and ‘successful in test market’ (probability 0.3, EMV = 364, large plant).
We can now repeat the process we carried out above.
The EMV for chance node 3 representing whether M997 is a success in test
market or not is given by:

0.3 × (364) + 0.7 × (–100) = 39.2 (£K)
where 364 relates to the plant decision node and –100 to terminal node 4.
Hence at the decision node representing whether to test market M997 or
not we have two alternatives:
• Alternative 1: drop M997 EMV = 0
• Alternative 2: test market M997 EMV = 39.2K.
It is clear that, in £ terms, alternative number 2 is preferable and so we
should decide to test market M997.
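The two-step rollback just performed can be expressed as a small recursive routine. This is a minimal Python sketch, with the tree encoded as nested tuples (an assumption of this illustration, not the layout of the Excel sheet):

```python
# Backward induction: a chance node takes the probability-weighted average (EMV)
# of its branches; a decision node takes the best of its alternatives.
def emv(node):
    kind = node[0]
    if kind == 'terminal':
        return node[1]                                     # monetary value (£K)
    if kind == 'chance':
        return sum(p * emv(child) for p, child in node[1])
    return max(emv(child) for child in node[1])            # decision node

tree = ('decision', [
    ('terminal', 0),                                   # Alt 1: drop M997
    ('chance', [                                       # Alt 2: test market (node 3)
        (0.7, ('terminal', -100)),                     # not successful
        (0.3, ('decision', [                           # plant size (node 5)
            ('chance', [(0.6, ('terminal', 310)),      # small plant (node 6)
                        (0.4, ('terminal', -110))]),
            ('chance', [(0.6, ('terminal', 700)),      # large plant (node 9)
                        (0.4, ('terminal', -140))]),
            ('terminal', -100)])),                     # no plant
    ]),
])
```

Running emv on the whole tree reproduces the EMV of 39.2 (£K) for the decision to test market M997.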

Activity
Repeat the calculations for the expected monetary values given above, but without
looking at the subject guide. Do you get the correct answers or not?


Summary
As a result of the above process we have decided:
• We should test market M997 and this decision has an expected
monetary value (EMV) of £39.2K.
• If M997 is successful in test market then we anticipate, at this stage,
building a large plant (recall the alternative we chose at the decision
node relating to the size of plant to build). However, it is plain that in
real life we will review this once test marketing has been completed.
Note here that the EMV of our decision (39.2 in this case)
does not reflect what will actually happen – it is merely an
average or expected value if we were to face the same tree many
times – but in fact we face the tree once only. If we follow the
path suggested above of test marketing M997 then the actual
monetary outcome will be one of [–100, 310, –110, 700, –140,
–100] corresponding to terminal nodes 4, 7, 8, 10, 11 and 12
depending upon future decisions and chance events.

Activity
Look back to where the M997 problem was introduced. Does the above decision to test
market M997 – arrived at above using a decision tree – correspond to your decision as
recorded there? Do you think using decision trees is a good way of structuring decision
making or not?

Upside and downside


Conceptually one can think of the set of terminal nodes that can be
reached as a result of our test market decision as the set of possible
futures. The best possible future corresponding to the decision to
test market M997 is called the upside and the worst possible future
corresponding to the decision to test market M997 is called the
downside.
We need to be slightly careful here since, as noted above, the actual
monetary outcomes will depend both upon future decisions and chance
events:
• If we are committed to a large plant (assuming we are
successful in test market) then the set of possible futures is given by
[–100, 700, –140] corresponding to terminal nodes 4, 10 and 11 and
hence the upside is 700 (£K) and the downside is –140 (£K).
• If we are not committed to a large plant (assuming we are
successful in test market) then the set of possible futures is given by
[–100, 310, –110, 700, –140, –100] corresponding to terminal nodes
4, 7, 8, 10, 11 and 12 and hence the upside is 700 (£K) and the
downside is –140 (£K).
Whereas in this particular example the upside and downside are the same
irrespective of whether we are committed to a large plant or not, note
how the possible futures are different depending upon whether we are
committed to a large plant or not.
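As a small illustration (in Python, with illustrative variable names), the upside and downside are simply the maximum and minimum of the set of possible futures:

```python
# Possible monetary futures (£K) once we decide to test market M997,
# taken from the terminal-node profits computed earlier.
futures_committed = [-100, 700, -140]                     # nodes 4, 10, 11
futures_uncommitted = [-100, 310, -110, 700, -140, -100]  # nodes 4, 7, 8, 10, 11, 12

upside = max(futures_uncommitted)      # best possible future
downside = min(futures_uncommitted)    # worst possible future
```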


Excel solution
Look at Sheet B in the spreadsheet associated with this chapter. You will
see:

Spreadsheet 4.4
In column I are the values for terminal nodes and in column J are the
probabilities associated with those nodes (if appropriate). The structure of
the underlying decision tree is already coded into the Excel logic and all
other cells are fixed/calculated from columns I and J. Column D gives the
decision – at decision node 1 we choose to test market while at decision
node 5 we choose a large plant. These are the same decisions/EMVs as we
calculated above.
The advantage of using Excel comes when we consider sensitivity analysis.

Sensitivity analysis
Consider the decision tree given above. It is plain that the decision to
test market is influenced by the profit of 700 (£K) we will achieve if test
marketing is successful, we choose to build a large plant and we have
no competition. Hence we may vary this figure of 700 (and/or vary the
probability that this outcome occurs) to see if it changes the test market
decision.
For example, suppose the probability that we have no competition with
a large plant is no longer 0.6 but is instead 0.45. This implies that the
probability that we have competition is 1 – 0.45 = 0.55.
Amending the decision tree calculation using Excel we get:

Spreadsheet 4.5
Hence we can see that the initial decision (to test market) is still the
optimal decision, although (as we would have expected) the EMV
associated with this decision, and the EMV associated with a large plant,
has fallen.
We can also conduct sensitivity analysis using a more systematic
algebraic basis (i.e. assign a symbol p to a given probability and work out
algebraic expressions for EMVs). To see this suppose that the probability
of no competition with a large plant is no longer 0.6 but is p instead. This
implies that the probability that we have competition is 1 – p.
Assume that the probabilities of competition/no competition for a small
plant are unaltered. It is clear that as p decreases we
will at some stage prefer a small plant over a large plant (e.g. if p = 0 then
a small plant with EMV of 142 would be preferable to a large plant with
EMV of –140). We can therefore ask the logical question: ‘How small
does p have to be before we prefer a small plant?’


To answer this question note that if the EMVs of a large and small plant
are equal then we will be indifferent as to which we choose. This will
occur when
p(700) + (1 – p)(–140) = 142
(i.e. when 840p – 140 = 142, so p = 282/840 = 0.3357).
Hence if p drops below 0.3357 we would prefer a small plant to a large
plant. This type of systematic sensitivity analysis can sometimes be
preferable to simply trying different numbers and redoing the calculation
to see what the effect is.
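The break-even calculation can be checked in a line or two of Python (the names are illustrative):

```python
# Solve p*700 + (1 - p)*(-140) = 142 for the break-even probability p
# at which large and small plants have equal EMV (all figures £K).
small_plant_emv = 142
p_breakeven = (small_plant_emv + 140) / (700 + 140)   # 282/840

# For p below p_breakeven the small plant is preferred.
```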

Expected value of perfect information


Associated with decision trees is the idea of perfect information – if we
could eliminate uncertainty (have perfect information) then this would be
of value to us. Consider our example above and let us focus on the issue of
competition.
Suppose that we had perfect information with respect to competition.
Then there are two possibilities:
• we are absolutely sure there will be no competition
• we are absolutely sure there will be competition.
We can evaluate what actions we would take (and the resultant EMV) in
each of these two situations using our spreadsheet as below.
If we are absolutely sure there will be no competition then using Sheet B
and altering the probability for no competition to one and the probability
for competition to zero we get:

Spreadsheet 4.6
showing that the new EMV is 140 (and we would test market M997).
If we are absolutely sure there will be competition and using Sheet B again
but altering the probabilities for competition to 1 and for no competition
to zero we get:

Spreadsheet 4.7
showing that the new EMV is zero (and we would drop M997).
We can hence form the following table:
Outcome Original probability EMV if probability is one
No competition 0.6 140
Competition 0.4 0
Table 4.2


Hence the expected value WITH perfect information is defined for
this example to be 0.6(140) + 0.4(0) = 84.
Note that we have stressed the word with in ‘expected value with perfect
information’ above. This EMV value of 84 (£K) is the best we could hope
for if we knew the situation (had perfect information) with regard to
whether competition would exist or not.
Previously we had an EMV of 39.2 – hence the expected value OF
perfect information, defined as the additional benefit from perfect
information is given by 84 – 39.2 = 44.8 (£K).
The relative magnitude of this value gives us insight into whether it is
worth trying to improve our current estimates as to the probability of
competition or not.
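In Python the calculation is simply (figures taken from the table above; variable names are illustrative):

```python
# EMV WITH perfect information about competition: weight the EMV of each
# certainty case by the original probability of that case (all figures £K).
emv_without_pi = 39.2
emv_with_pi = 0.6 * 140 + 0.4 * 0      # no competition certain / competition certain
evpi = emv_with_pi - emv_without_pi    # expected value OF perfect information
```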

Simulation
For the decision tree considered above we had a known value, 0.7, for the
probability of the product not being successful in test market. Suppose,
however, that this value was not known, rather we had a probability
distribution. For illustration, suppose that the probability of the product
not being successful in test market could be any value between 0.7 and
0.9. What can we do in this case?
The simple answer is that we can conduct a simulation. By this we mean
that we generate in a probabilistic fashion a set of values between 0.7 and
0.9 and see what the decision would be in each case.
Spreadsheet 4.8, taken from Sheet C in the spreadsheet associated with
this chapter, illustrates this point.

Spreadsheet 4.8: Decision tree simulation.


In cells L2 and M2 you can see the range of values for the probability we
are interested in, here 0.7 to 0.9. In cell J3 you can see the first simulated
value, 0.81 (displayed to two decimal places), with the associated decision
being seen in cell D2, and its EMV in cell E2. With this simulated value the
decision would be to drop M997 with an associated EMV of 0. In cell J12
you can see the second simulated value, 0.74, with the associated decision
being seen in cell D11 and its EMV in cell E11. With this simulated value the
decision would be to test market M997 with an associated EMV of 22.15.
You can also see in columns A to J just two of the 10 different probability
values considered. If you examine Sheet C in detail you will see that there
are actually 10 different simulated values for the probability of the product
not being successful in test market, with the decisions and the associated
EMVs for these 10 simulated values being summarised in columns N and O.
Obviously it would be easy to make a decision if, in all 10 cases, we had
the same decision in column N. But we do not. Here we have four cases
where we would choose to drop M997 and six cases where we would test
market M997. However, we are gaining insight which (perhaps) we did
not have before. Consider: if the probability under consideration does
indeed lie between 0.7 and 0.9, how can we decide what to do? Does it
not make logical sense to examine values taken from the range 0.7 to 0.9
and see what the decision would be in each case?
Clearly we could examine more than 10 probability cases. But hopefully
the point is clear, we can gain insight by generating in the fashion
illustrated here a probability value from a known distribution.
On a technical issue if you examine Sheet C you will see that the probability
values are generated in Excel using L2 + (M2 – L2)*RAND(). This
expression generates a value uniformly distributed in [L2,M2]. However
the RAND() function in Excel, which generates a random number between
zero and one, will be recomputed each time you make a change to the Excel
sheet, resulting in the values you see changing. For example, open Sheet
C and type anything you like into a blank cell and press the return/enter
key. You will see columns N and O change. This is because Excel, when you
press the return/enter key, recalculates, and here it recalculates a value of
RAND() 10 times, meaning that (potentially) different decisions and EMVs
result. For this recalculation reason the values you see when you open Sheet
C may well be different from those you see in Spreadsheet 4.8.
Clearly if we want to gain better insight we need to examine more than 10
probability values, but since this involves complicating Sheet C we are not
going to pursue this issue further here.
Although we have considered here simulating a probability value the same
approach can be taken to simulating other values in the decision tree. For
example, suppose the value associated with terminal node 7 was no longer
exactly 310 but a value taken from a Normal distribution with mean 310 and
standard deviation 50. Could we generate simulated values for this outcome
and examine the range of decisions and their associated EMVs so produced?
The answer is that we can, and the Excel needed to produce such a normally
distributed random number is simply NORMINV(RAND(),310,50).
On a terminology issue here this generation of values from known probability
distributions is called simulation, also called Monte-Carlo simulation or static
simulation. Dynamic simulation, or discrete-event simulation, is dealt with
in Chapter 11. You should be aware that when some people use the word
‘simulation’ they can mean either of these two definitions, and you have to
determine which is meant from the context under discussion.

Activity
Change the probability range in Sheet C. Firstly change it to be 0.8 to 0.9; then change it
to be 0.7 to 0.8. What do you observe about the decisions and their EMVs?
How does what you observe change if the value associated with terminal node 7 is no
longer exactly 310 but a value taken from a Normal distribution with mean 310 and
standard deviation 50?

Extensions
The basic decision tree technique presented above can be applied to any
problem for which a decision tree can be drawn. There are a number of
extensions to the technique and we briefly consider four such extensions
below. These relate to:
• discounting
• chance nodes
• decision nodes
• utility.
We will consider each in turn.

Discounting
In the example given above we were concerned with money received over
seven years. It is plain that £1 received in seven years’ time is worth less than
£1 received now and a technique called discounting, or discounted cash
flow, (involving finding the net present value of any sum of money)
can be used to overcome this difficulty. Applying discounting merely alters
the numbers which are fed into the decision tree so that we are dealing with
monetary values on an equivalent (present-day) basis. It does not affect the
processing of the tree (which remains exactly as indicated above).
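As a sketch of how discounting adjusts the numbers fed into the tree, the following values £200K of revenue per year (4,000 units at £50) over seven years at an assumed discount rate of 10 per cent per year; the rate is purely illustrative, as the guide does not specify one:

```python
# Net present value of 200 (£K) received at the end of each of 7 years,
# discounted at an assumed rate of 10% per year.
rate = 0.10
annual_revenue = 200  # £K per year (4,000 units at £50)
npv = sum(annual_revenue / (1 + rate) ** year for year in range(1, 8))
# npv would then be fed into the tree in place of the undiscounted 7 * 200 = 1400.
```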

Chance nodes
In the example given above we calculated a value for each chance node.
Although we have used EMV as the value of a chance node this choice
is, in many respects, somewhat arbitrary, and other ways of calculating
a value for a chance node have been suggested. To put it another way,
although it is a law of the universe (in this particular corner of the space–time
continuum) that E = mc², it is not a law of the universe that the value
of a chance node must be equal to the EMV value!
Reflect that EMV is an average value – but at a chance node we never
see the average – something happens once only (e.g. at chance node 6
we either see competition or not). Hence perhaps the average value is
misleading and we need to look at a chance node a different way.
If we were averse to losing money and wished to take a conservative
attitude to decision making we might calculate the value of a chance
node as the worst possible outcome that might occur at that node. Such a
strategy is often called a pessimistic strategy (e.g. such a strategy would
assign chance node 6 a value of –110 compared with an EMV of 142).
An alternative strategy would be the optimistic strategy of calculating
the value of a chance node as the best possible outcome that might occur
at that node (e.g. such a strategy would assign chance node 6 a value of
310 compared with an EMV of 142).
Yet another strategy would be to take the value of a chance node as equal
to the most likely (highest probability) outcome that might occur at that
node (e.g. such a strategy would assign chance node 6 a value of 310
compared with an EMV of 142).
Alternatively, we might take the value of a chance node to be some
weighted combination of the EMV and the values given by the optimistic
and pessimistic strategies. The literature contains a number of variations
on this theme of changing the value of a chance node.
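These alternative valuations of chance node 6 (branches 310 with probability 0.6 and –110 with probability 0.4) can be computed directly; a Python sketch with illustrative names:

```python
# Branches of chance node 6 as (probability, monetary value) pairs (£K).
branches = [(0.6, 310), (0.4, -110)]

emv_value = sum(p * v for p, v in branches)          # probability-weighted average
pessimistic = min(v for _, v in branches)            # worst possible outcome
optimistic = max(v for _, v in branches)             # best possible outcome
most_likely = max(branches, key=lambda b: b[0])[1]   # highest-probability outcome
```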

Decision nodes
At each decision node we choose one of the alternative decisions at that
node based upon an implicit rule (‘choose the alternative with the highest
EMV’). Other rules are equally plausible (e.g. ‘choose the alternative with
the highest ROI’ (ROI = return on investment = profit divided by total
investment)).
For example consider the small and large plants above. We saw that a
small plant led to an EMV (actually an expected net profit) of 142. That
involved a total investment of 100 for test marketing plus 150 to build, so
a ROI of 142/(100 + 150) = 56.8 per cent.
A large plant led to an EMV (actually an expected net profit) of 364. That
involved a total investment of 100 for test marketing plus 250 to build, so
a ROI of 364/(100 + 250) = 104 per cent.


Although, in this case, ROI would lead to the same decision relating to the
size of plant to build it could have led to a different decision. For example
had the EMV at chance node 9 been 175 then on an EMV basis we would
still have chosen at decision node 5 to build a large plant. But on a ROI
basis [175/(100 + 250) = 50 per cent] we would choose to build a small
plant.
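A short Python sketch of the two decision rules side by side (variable names are illustrative):

```python
# EMV rule vs ROI rule at the plant size decision node (all figures £K).
investment_small = 100 + 150    # test marketing plus small plant build cost
investment_large = 100 + 250    # test marketing plus large plant build cost

roi_small = 142 / investment_small        # 56.8 per cent
roi_large = 364 / investment_large        # 104 per cent

# With the hypothetical large-plant EMV of 175 the two rules disagree:
roi_large_alt = 175 / investment_large    # 50 per cent, below roi_small
```

Here EMV would still pick the large plant (175 > 142) while ROI would pick the small one.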

Utility
Using monetary values in the decision tree implies, for example, that a
loss of 200K is only twice as bad as a loss of 100K. If the company does
not have 200K to lose, but does have 100K, then it is plain that they will
regard losing 200K as much worse than losing 100K. Moreover, note that
often decisions are made by people within the company. The company
makes the profit/loss, not the people concerned with the decision.
Hence the idea of utility is to replace the monetary values at each
terminal node in the decision tree by utilities (points) which reflect the
view of the decision maker (or company) to that sum of money (e.g. a loss
of 100K might get a utility value of –5 but a loss of 200K a utility value
of –500). In simple terms you can think of utilities as replacing pounds
by points. Although we have spoken here of points when assigning utility
values, it is common for utilities to be scaled such that they lie between
zero and one, as below.
To illustrate utility further suppose that the equation for the utility to a
decision maker of a monetary amount x, where x can vary between a and b,
is Utility = ((x – a)/(b – a))^R, where R is a parameter.
What does the plot of utility against x look like? Well for R = 1, where we
consider a = 0, b = 100 it takes the form shown in Figure 4.4.

Figure 4.4: Utility, R = 1.


Here we have a straight-line relationship: utility, the worth to the decision-
maker of a monetary amount x, increases linearly with x. This is an
example of a utility function for a risk-neutral decision maker. Risk-neutral
because doubling the amount x doubles the utility. Here utility has been
scaled by the function adopted to lie between zero and one.
When R = 2 the graph is as in Figure 4.5 and this is an example of a utility
function for a risk-seeking decision maker. Risk-seeking because doubling
the amount x more than doubles the utility. For example from Figure 4.5 we
can see that a monetary amount x = 30 has a utility of approximately 0.10.
But double that, a monetary amount x = 60, has a utility of approximately
0.35, much more than double the utility associated with x = 30.


Figure 4.5: Utility, R = 2.


When R = 0.5 the graph is as in Figure 4.6 and this is an example of a
utility function for a risk-averse decision maker. Risk-averse because
doubling the amount x does not double the utility. For example from
Figure 4.6 we can see that a monetary amount x = 30 has a utility of
approximately 0.55. But double that, a monetary amount x = 60, has a
utility of approximately 0.75, much less than double the utility associated
with x = 30.

Figure 4.6: Utility, R = 0.5.
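The three curves can be reproduced from the utility formula; the Python below assumes the function is Utility = ((x – a)/(b – a))^R with a = 0 and b = 100:

```python
def utility(x, r, a=0, b=100):
    """Scaled utility of monetary amount x, lying between 0 and 1."""
    return ((x - a) / (b - a)) ** r

# r = 1:   risk neutral (doubling x doubles utility)
# r = 2:   risk seeking (doubling x more than doubles utility)
# r = 0.5: risk averse  (doubling x less than doubles utility)
```

For R = 2, utility(30, 2) = 0.09 and utility(60, 2) = 0.36, matching the approximate readings of 0.10 and 0.35 from Figure 4.5.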


Trying to capture the decision-maker’s translation of pounds to points (i.e.
their underlying utility function) is an imprecise process, but it does enable
the decision-maker’s valid view of the problem in terms of the importance
of pounds to them to be incorporated. Once a utility function (mapping
of monetary values to utility values) has been found then we proceed
as before in the decision tree calculation, simply replacing all monetary
amounts by utility values.

Links to other chapters


The topics considered in this chapter directly link to Chapter 10 of this
subject guide where we consider decision making when we have multiple
criteria present. More generally the topics considered here link to other
topics where probability (stochastic elements) play a role in decision
making, so this chapter links to Chapters 6 and 11.


Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title Anderson (page number)
Decision analysis at Eastman Kodak 540
Controlling particulate emissions at Ohio Edison Company 550
New drug decision analysis at Bayer Pharmaceuticals 563
Medical screening test at Duke University Medical Center 569
Investing in a transmission system at Oglethorpe Power 577

A reminder of your learning outcomes


Having completed this chapter, and the Essential reading and activities,
you should be able to:
• construct a pay-off table for a problem and analyse it numerically
using the standard decision criteria: optimistic, conservative
(pessimistic), regret, equally likely and expected monetary value
• draw a decision tree for a problem
• calculate expected monetary values
• process the tree to arrive at a suggested course of action
• calculate the upside and downside of any decision
• perform sensitivity analysis
• calculate the expected values associated with perfect information
• understand the use of simulation in decision trees
• understand the use of utilities in place of monetary values.

Sample examination questions


For Sample examination questions relating to the material presented in
this chapter please visit the VLE.


Chapter 5: Inventory control

Essential reading
Anderson, Chapter 10, sections 10.1–10.3, 10.5 and 10.6.

Spreadsheet
inventory.xls
• Sheet A: EOQ calculation and costing of an assigned order quantity
• Sheet B: Quantity discount calculation and costing of an assigned
order quantity
• Sheet C: Probabilistic demand order quantity.
This spreadsheet can be downloaded from the VLE.

Aims of the chapter


The aims of this chapter are to:
• discuss why inventory (stock) is important and why inventory occurs
• discuss a number of techniques that can be applied when we have to
make decisions relating to inventory
• illustrate how these techniques can be applied to simple problems
• discuss a number of other topics associated with inventory to provide a
greater appreciation as to the context in which inventory decisions are
made in the modern world.

Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• discuss the various factors that need to be included when determining
Economic Order Quantity and Economic Batch Quantity
• determine Economic Order Quantity when there are no quantity
discounts
• determine Economic Order Quantity when there are quantity discounts
• determine Economic Batch Quantity
• determine the optimum order quantity when the demand is
probabilistic (rectangular (uniform) distribution or Normal
distribution)
• explain Materials Requirements Planning (MRP) by means of a simple
example
• discuss Just-in-Time (JIT), Optimised Production Technology (OPT)
and Supply Chain Management (SCM).

Introduction
Inventory (also known as stock) is something that we come across in our
everyday life whether at work or outside the work environment. There are
many problems associated with it. Whether an international company with
operations spreading across a number of countries, a national company
with several factories or plants, branches and locations, or a corner shop
in a local environment, all have to face these problems and tackle them
as best as they can. Indeed, many other organisations of various kinds are
not immune from these problems either (e.g. banks, insurance companies,
governments).

Activity
How many different items of stock can you list that you encounter, in your work/college
life (e.g. the stock of paper clips in your desk), in your personal life (e.g. the stock of
money in your pocket)?

Due to the diverse nature of the products involved, it might be assumed
that the management of inventory in one particular case differs
considerably from that in another. This is not so – it should be borne
in mind that most of the principles are the same whatever the size of
organisation; in most cases, the differences are the result of the different
emphasis placed on individual aspects.
One such factor might be the importance of a particular part to a
company. Imagine a company with over 10,000 different items available
in its stores. If every one of these items were given equal importance,
there would be a need to employ many more storekeepers and clerks,
because each item would need a certain (and equal) amount of attention
and monitoring from the staff employed. This is not a practical way of
controlling inventory so some sort of priority has to be assigned; then the
amount of care and attention would depend upon the assigned priority.
It must not be forgotten, though, that all of the items are required for the
company’s operations; the lack of even one part may affect manufacture
and assembly operations. In that respect all items have some importance.
The most common method used in industry is the ABC classification where
items are divided into three inventory categories:
• Category A items are the most important inventory items and in total
are responsible for 80 per cent of the inventory cost.
• Category B items are only moderately important inventory items and
in total are responsible for 15 per cent of the inventory cost.
• Category C items are relatively unimportant inventory items and in
total are responsible for 5 per cent of the inventory cost.
Obviously the idea here is that management attention should be primarily
focused upon category A items.
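A minimal Python sketch of an ABC split; the item names and annual costs are invented for illustration, and the 80 per cent/95 per cent cumulative break points follow the category percentages above:

```python
# Rank items by annual inventory cost, then assign A/B/C by cumulative share.
items = {'gears': 50000, 'motors': 30000, 'bearings': 15000,
         'labels': 2500, 'bolts': 2000, 'washers': 500}

total = sum(items.values())
categories, running = {}, 0
for name, cost in sorted(items.items(), key=lambda kv: kv[1], reverse=True):
    running += cost
    share = running / total   # cumulative share of total inventory cost
    categories[name] = 'A' if share <= 0.80 else ('B' if share <= 0.95 else 'C')
```

With these illustrative figures the two costliest items account for 80 per cent of the cost and fall into category A.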

Activity
Choose one business item and one personal item from the list of items you produced
above. How are stocks of these items controlled and managed? Classify the items in your
list into A/B/C.

Reasons for holding stock


In an ideal world, where the demand for a product or a part is known
precisely, well in advance of when it is needed, where the suppliers keep
the promised delivery dates and meet the quality standards, there would
be no need to have any inventory (i.e. zero inventory). In reality, that
ideal stage has yet to be reached. There are consequently many situations
where stock has to be held and for a variety of reasons. One reason is to
prevent the production line stopping due to a lack of parts. Stoppage of a
production line has always been considered extremely costly, not only due
to the immediate loss of production and hence the profit associated with
it, but also due to the long-term effect on customer goodwill and on future
market share. If the demand for a product is reasonably steady, the stock
can be minimal. There just needs to be enough to cater for any minor
deviations and delays in the supply chain. If the demand itself fluctuates,
larger inventories have to be held to counteract these fluctuations.
Activity
What other reasons can you think of for holding stock?

Whatever the reason for holding stock, it is evident that a company will
incur a cost associated with holding it. It is, therefore, in the interest of
the company to reduce the inventory and to control it more efficiently: to
aim for the ideal of zero inventory mentioned above. Developed after the
Second World War in Japan, the ‘Just-In-Time’ (JIT) philosophy aims to
achieve such an ideal situation. It does, however, require certain conditions
to be met. JIT will be discussed in more detail later in this chapter.
Inventory control is a compromise management action. Consequently,
there could be times when a part is out of stock for a short period, even
though every effort has been made to avoid this. Such situations are not
always costly, although, occasionally, production lines have had to be
stopped for very short periods due to lack of parts.

Basics
The basic function of stock (inventory) is to insulate the production
process from changes in the environment as shown below. For simplicity
here we will consider a classic manufacturing environment.

Figure 5.1: Stock insulates manufacturing from its environment. Raw
material arrivals (restocking, a cost) feed raw materials stock; this
passes through work in progress stock within manufacturing to finished
goods stock; sales of finished goods (destocking) generate revenue.
Note here that although we refer to manufacturing, other industries also
have stock (e.g. the stock of money in a bank available to be distributed to
customers, the stock of policemen in an area, etc.).
The question then arises: how much stock should we have? It is this
simple question that inventory control theory attempts to answer.
There are two extreme answers to this question:
a lot
• this ensures that we never run out
• it is an easy way of managing stock
• it is expensive in stock costs, cheap in management costs.

MN3032 Management science methods

none/very little
• this is known (effectively) as Just-in-Time (JIT)
• it is a difficult way of managing stock
• it is cheap in stock costs, expensive in management costs.
We shall consider the problem of ordering raw material stock but the same
basic theory can be applied to the problem of deciding the:
• finished goods stock
• size of a batch in a batch production process.
The costs that we need to consider so that we can decide the amount of
stock needed can be divided into stock holding costs and stock ordering
(and receiving) costs as below. Note that, conventionally, management
costs are ignored in this analysis.

Holding costs – associated with keeping stock over time


• storage costs
• rent/depreciation
• labour
• overheads (e.g. heating, lighting, security)
• money tied up (loss of interest, opportunity cost)
• obsolescence costs (if left with stock at end of product life)
• stock deterioration (lose money if product deteriorates whilst held)
• theft/insurance.

Ordering costs – associated with ordering and receiving an order


• clerical/labour costs of processing orders
• inspection and return of poor quality products
• transport costs
• handling costs.
Note here that a stockout occurs when we have insufficient stock to
supply customers. Usually stockouts occur in the order lead time, the time
between placing an order and the arrival of that order.
Given a stockout the order may be lost completely or the customer may
choose to backorder (i.e. to be prepared to wait until we have sufficient
stock to supply their order).
Note here that although conceptually we can see that these
cost elements are relevant, it can often be difficult to arrive
at an appropriate numeric figure (e.g. if the stock is stored in
a building used for many other purposes, how then shall we
decide an appropriate allocation of heating/lighting/security
costs?).
To see how we can decide the stock level to adopt consider the very simple
model below.


Basic model
Activity/Reading
For this section read Anderson, Chapter 10, sections 10.1 and 10.2.

In this basic model we have the situation where:


• our company orders from an outside supplier
• that outside supplier delivers to us precisely the quantity we ask for
• we pass that stock onto our customers (either external customers, or
an internal customer within the same company (e.g. if ordering raw
materials for use in the production process)).
Assume:
• stock used up at a constant rate (R units per year)
• fixed set-up cost co for each order – often called the order cost
• no lead time between placing an order and arrival of the order
• variable stock holding cost ch per unit per year, where the equation
that defines ch is ch = PI + W; P being the price (cost) per unit; I being
the fractional interest rate per year; W being any warehousing/storage
cost.
Then we need to decide Q, the amount to order each time, often called the
batch (or lot) size.
With these assumptions the graph of stock level over time takes the form
shown below.

Figure 5.2
Consider drawing a horizontal line at Q/2 in the above diagram. If you
were to draw this line then it is clear that the times when stock exceeds
Q/2 are exactly balanced by the times when stock falls below Q/2. In other
words, we could equivalently regard the above diagram as representing a
constant stock level of Q/2 over time.
Hence we have that:
• Annual holding cost = ch(Q/2)
where Q/2 is the average (constant) inventory level.
• Annual order cost = co(R/Q)
where (R/Q) is the number of orders per year (R used per year and
ordering Q each order must mean that the number of orders made is R/Q).
So total annual cost = ch(Q/2) + co(R/Q).


Total annual cost is the function that we want to minimise by choosing
an appropriate value of Q.
Note here that, obviously, there is a purchase cost associated with the R
units per year, namely PR, where, as above, P is the price (cost) per unit.
However, this is just a constant as R is fixed so we can ignore it here.
The diagram below illustrates how these two components (annual holding
cost and annual order cost) change as Q, the quantity ordered, changes.
As Q increases, holding cost increases but order cost decreases. Hence the
total annual cost curve is as shown below – somewhere on that curve lies a
value of Q that corresponds to the minimum total cost.

Figure 5.3
We can calculate exactly which value of Q corresponds to the minimum total
cost by differentiating total cost with respect to Q and equating to zero.
d(total cost)/dQ = ch/2 − coR/Q² = 0 for minimisation
which gives Q² = 2coR/ch.
Hence the best value of Q (the amount to order = amount stocked) is
given by
Q =√(2Rco/ch)
and this is known as the Economic Order Quantity (EOQ).
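Translated into code, the EOQ formula is a one-liner. A minimal sketch (the function name and the sample figures are illustrative only):

```python
import math

def eoq(R, co, ch):
    """Economic Order Quantity for annual demand R, order cost co
    and holding cost ch per unit per year."""
    return math.sqrt(2 * R * co / ch)

# e.g. R = 200 units/year, co = £35 per order, ch = £30 per unit per year
print(eoq(200, 35, 30))   # roughly 21.6, so order about 22 at a time
```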

Comments
This formula for the EOQ is believed to have been first derived in the early
1900s and so EOQ dates from the beginnings of mass production/assembly
line production.
To get the total annual cost associated with the EOQ we have from before
that total annual cost = ch(Q/2) + co(R/Q) so putting Q = √(2Rco/ch) into
this we get that the total annual cost is given by:
ch(√(2Rco/ch)/2) + co(R/√(2Rco/ch)) = √(Rcoch/2) + √(Rcoch/2) = √(2Rcoch).

Hence total annual cost is √(2Rcoch) which means that when ordering the
optimal (EOQ) quantity we have total cost proportional to the square root
of any of the factors (R, co and ch) involved. For example, if we were to
reduce co by a factor of 4 we would reduce total cost by a factor of 2 (note
the EOQ would change as well). This, in fact, is the basis of Just-in-Time
(JIT), to reduce (continuously) co and ch so as to drive down total cost.
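This square-root behaviour is easy to check numerically. A small sketch (illustrative figures only):

```python
import math

def min_total_cost(R, co, ch):
    # total annual cost when ordering the EOQ quantity: sqrt(2*R*co*ch)
    return math.sqrt(2 * R * co * ch)

base = min_total_cost(200, 35, 30)
reduced = min_total_cost(200, 35 / 4, 30)   # order cost cut by a factor of 4
assert abs(reduced - base / 2) < 1e-9       # total cost falls by a factor of 2
```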
To return to the issue of management costs being ignored for a moment,
the basic justification for this is that if we consider the total cost curve
shown above, then – assuming we are not operating a policy with a very
low Q (JIT) or a very high Q – we could argue that the management costs
are effectively fixed for a fairly wide range of Q values. If this is so then
such costs would not influence the decision as to what order quantity Q

to adopt. Moreover, if we wanted to adopt a more quantitative approach
we would need some function that captures the relationship between the
management costs we incur and our order quantity Q – estimating this
function would certainly be a non-trivial task.
Example
A retailer expects to sell about 200 units of a product per year. The storage space taken
up
in his premises by one unit of this product is costed at £20 per year. If the cost associated
with ordering is £35 per order, what is the economic order quantity given that interest
rates are expected to remain close to 10 per cent per year and the total cost of one unit
is £100?

We use the EOQ formula, EOQ = √(2Rco/ch).


Here R=200, co= 35 and the holding cost ch is given by:
ch = £20 (direct storage cost per unit per year) + £100 × 0.10 (this term
is the money interest lost if one unit sits in stock for one year)
i.e. ch = £30 per unit per year.
Hence EOQ = √(2Rco/ch) = √(2 × 200 × 35/30) = 21.602.
But as we must order a whole number of units we have that EOQ = 22.
We can illustrate this calculation by reference to the diagram below which
shows order cost, holding cost and total cost for this example.

Figure 5.4
With this EOQ we can calculate our total annual cost from the equation:
Total annual cost = ch(Q/2) + co(R/Q)
Hence for this example we have that:
Total annual cost =
(30 × 22/2) + (35 × 200/22) = 330 + 318.2 = £648.20.
Note: If we had used the exact Q value given by the EOQ formula (i.e.
Q = 21.602) we would have had that the two terms relating to annual
holding cost and annual order cost would have been exactly equal to each
other: so holding cost = order cost at EOQ point (or, referring to the
diagram above, the EOQ quantity is at the point associated with the
Holding Cost curve and the Order Cost curve intersecting);
thus (chQ/2) = (coR/Q) so that Q = √(2Rco/ch).
In other words, as in fact might seem natural from the shape
of the Holding Cost and Order Cost curves, the optimal order
quantity coincides with the order quantity that exactly
balances Holding Cost and Ordering Cost.


Note however that this result only applies to certain simple situations. It
is not true (in general) that the best order quantity corresponds to the
quantity where holding cost and ordering cost are in balance.

Example
Suppose, for administrative convenience, we ordered 20 and not 22 at each order – what
would be our cost penalty for deviating from the EOQ value?
With a Q of 20 we look at the total annual cost
= (chQ/2) + (coR/Q)
= (30 × 20)/2 + (35 × 200/20) = 300 + 350 = £650.
Hence the cost penalty for deviating from the EOQ value is £650 – £648.2 = £1.80.
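The flatness of the total cost curve near the EOQ can be confirmed with a short calculation (a sketch using this example's figures):

```python
R, co, ch = 200, 35.0, 30.0   # annual demand, order cost, holding cost

def total_annual_cost(Q):
    return ch * Q / 2 + co * R / Q

penalty = total_annual_cost(20) - total_annual_cost(22)
print(round(penalty, 2))      # about 1.82, i.e. roughly £1.80 per year
```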

Note that this is, relatively, a very small penalty for deviating from the
EOQ value. This is usually the case in inventory problems (i.e. the total
annual cost curve is flat near the EOQ so there is only a small cost penalty
associated with slight deviations from the EOQ value (see the diagram
above)).
This is an important point. Essentially we should view the EOQ as a
ballpark figure. That is, it gives us a rough idea as to how many we
should be ordering each time. After all our cost figures (such as the cost of
an order) are likely to be inaccurate. Also it is highly unlikely that we will
use items at a constant rate (as the EOQ formula assumes). However, that
said, the EOQ model provides a systematic and quantitative way of getting
an idea as to how much we should order each time. If we deviate far from
this ballpark figure then we will most likely be paying a large cost penalty.

Extensions
In order to illustrate extensions to the basic EOQ calculation we will
consider the following example.

Example
A company uses 12,000 components a year at a cost of 5 pence each. Order costs have
been estimated to be £5 per order and inventory holding cost is estimated at 20 per cent
of the cost of a component per year.
Note here that this is the sort of cheap item that is a typical non-JIT item.

What is the EOQ?


Here R = 12,000, co = 5 and as the inventory holding cost is 20 per cent
per year the annual holding cost per unit ch = cost per unit × 20 per cent
= £0.05 × 0.2 per unit per year = 0.01.
Hence EOQ = √(2Rco/ch) = √(2 × 12000 × 5/0.01) = 3464.
Look at Sheet A in the Excel spreadsheet associated with this chapter. You
will see:

Spreadsheet 5.1
Here we have simply reproduced in Excel the calculation we carried out
above with the addition that we have included the costs associated with
ordering, holding and purchasing, as well as the total cost. Sheet A also

allows us to cost any assigned order quantity we wish – above you can see
that with an order quantity of 1,000 the total cost (per year) of ordering
and holding is £65 to which must be added the purchase cost of £600
(12,000 units a year at 0.05 each).
If orders must be made for 1, 2, 3, 4, 6 or 12 monthly batches,
what order size would you recommend and when would you
order?
Here we do not have an unrestricted choice of order quantity (as the EOQ
formula assumes) but a restricted choice as explained below.
This is an important point – the EOQ calculation gives us a quantity to
order, but often people are better at ordering on a time basis (e.g. once
every month).
In other words we need to move from a quantity basis to a
time basis.
For example the EOQ quantity of 3,464 has an order interval of
(3,464/12,000) = 0.289 years, i.e. we order once every 52(0.289) =
15 weeks. Would you prefer to order once every 15 weeks or every four
months? Recall here what we saw before, that small deviations from the
EOQ quantity lead to only small cost changes.
Hence if orders must be made for 1, 2, 3, 4, 6 or 12 monthly batches, the
best order size to use can be determined as follows.
Obviously when we order a batch we need only order sufficient to cover
the number of components we are going to use until the next batch
is ordered – if we order less than this we will run out of components
and if we order more than this we will incur inventory holding costs
unnecessarily. Hence for each possible batch size we automatically know
the order quantity (e.g. for the 1-monthly batch the order quantity is the
number of components used per month = R/12 = 12,000/12 = 1,000).
As we know the order quantity we can work out the total annual cost of
each of the different options and choose the cheapest option.
The total annual cost (with an order quantity of Q) is given by
(chQ/2) + (coR/Q) and we have the table below:

Batch size option Order quantity Q Total annual cost (£)
Monthly 1,000 65
2-monthly 2,000 40
3-monthly 3,000 35
4-monthly 4,000 35
6-monthly 6,000 40
12-monthly 12,000 65

Table 5.1
The least cost option therefore is to choose either the 3-monthly or the
4-monthly batch.
In fact we need not have examined all the options. As we knew that the
EOQ was 3,464 (associated with the minimum total annual cost) we have
that the least cost option must be one of the two options that have order
quantities nearest to 3,464 (one order quantity above 3,464, the other
below 3,464) (i.e. either the 3-monthly (Q = 3,000) or the 4-monthly (Q
= 4,000) option). This can be seen from the shape of the total annual cost
curve shown below. The total annual cost for these two options could then
be calculated to find out which was the cheapest option.
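The costing of the batch-size options can be reproduced in a few lines of code (a sketch using this example's figures):

```python
R, co, ch = 12000, 5.0, 0.01   # annual demand, order cost, holding cost

costs = {}
for months in (1, 2, 3, 4, 6, 12):
    Q = R * months // 12                  # order just enough to last the interval
    costs[months] = ch * Q / 2 + co * R / Q

best = min(costs, key=costs.get)
print(costs)   # {1: 65.0, 2: 40.0, 3: 35.0, 4: 35.0, 6: 40.0, 12: 65.0}
```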


Figure 5.5

Activity
Use Sheet A and confirm for yourself the cost figures for varying batch sizes given above.

Quantity discounts
Activity/Reading
For this section read Anderson, Chapter 10, section 10.5.

If the supplier offers the following quantity discount
structure, what effect will this have on the order quantity?
Order quantity Cost (per unit)
0 → 5,000 £0.05
5,000 → 10,000 £0.05 less 5%
10,000 → 20,000 £0.05 less 10%
20,000 and above £0.05 less 15%
Table 5.2
For example, were we to order 6,000 units we would only pay 0.95(0.05)
for each and every one of the 6,000 units (i.e. the discount would be given
on the entire order).
Here we need to remember to add to the total annual cost equation
(ch(Q/2) + co(R/Q)) a term relating to R multiplied by the unit cost, as the
cost of a unit is now no longer fixed but variable (unit cost = a function
f(Q) of the order quantity Q). Hence our total annual cost equation is:
ch(Q/2) + co(R/Q) + R[f(Q)].
It is instructive to consider what changes occur in this equation as we
change the order quantity Q. Obviously R and co remain unchanged,
equally obviously Q and f(Q) change. So what of ch? Well, it can remain
constant or it can change. You need to look back to how you calculated ch.
If it included money tied up then, as the unit cost f(Q) alters with Q, so
too does the money tied up.
The effect of these quantity discounts (breaks in the cost structure) is
to create a discontinuous total annual cost curve as shown below with
the total annual cost curve for the combined discount structure being
composed of parts of the total annual cost curves for each of the discount
costs.


Figure 5.6
The order quantity which provides the lowest overall cost will
be the lowest point on the Combined Cost Curve shown in the
diagram above. We can precisely calculate this point as it corresponds
to:
• either an EOQ for one of the discount curves considered separately
(note that in some cases the EOQ for a particular discount curve may
not lie within the range covered by that discount and hence will be
infeasible)
• or one of the breakpoints between the individual discount curves on
the total annual cost curve for the combined discount structure.
We merely have to work out the total annual cost for each of these types of
points and choose the cheapest.
First the EOQs:

Discount Cost ch EOQ Inventory cost Material cost Total cost
0 0.05 0.01 3,464 34.64 600 634.64
5% 0.0475 0.0095 3,554 Infeasible
10% 0.045 0.009 3,651 Infeasible
15% 0.0425 0.0085 3,757 Infeasible

Table 5.3
Note here that we now include material (purchase) cost in total
annual cost.
The effect of the discount is to reduce the cost, and hence ch the inventory
holding cost per unit per year – all other terms in the EOQ formula (R and
co) remain the same. Of the EOQs only one, the first, lies within the range
covered by the discount rate.
For the breakpoints we have:
Order quantity Cost ch Inventory cost Material cost Total cost
5,000 0.0475 0.0095 35.75 570 605.75
10,000 0.045 0.009 51 540 591
20,000 0.0425 0.0085 88 510 598

Table 5.4
From these figures we can see that the economic order quantity associated
with minimum total annual cost is 10,000 with a total annual cost of 591.


Note too here that this situation illustrates the point we made before
when we considered the simple EOQ model, namely that it is not true (in
general) that the best order quantity corresponds to the quantity where
holding cost and ordering cost are in balance. This is because the holding
cost associated with Q = 10,000 is ch(Q/2) = 0.009(10000/2) = 45, while
the ordering cost is co(R/Q) = 5(12000/10000) = 6.
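The whole discount analysis (candidate EOQs plus breakpoints) can be automated. A sketch under this example's assumptions: holding cost is 20 per cent of the unit price per year, and the discount applies to the entire order:

```python
import math

R, co = 12000, 5.0
# (lower bound, upper bound, unit price) for each discount band, as in Table 5.2
bands = [(0, 5000, 0.05),
         (5000, 10000, 0.05 * 0.95),
         (10000, 20000, 0.05 * 0.90),
         (20000, math.inf, 0.05 * 0.85)]

def total_cost(Q, price):
    ch = 0.2 * price                       # holding cost per unit per year
    return ch * Q / 2 + co * R / Q + R * price

candidates = []
for low, high, price in bands:
    q = math.sqrt(2 * R * co / (0.2 * price))   # EOQ at this band's price
    if low <= q < high:                         # feasible only within its band
        candidates.append((q, price))
    if low > 0:
        candidates.append((low, price))         # breakpoint quantity

best_q, best_price = min(candidates, key=lambda c: total_cost(*c))
print(best_q, round(total_cost(best_q, best_price), 2))   # 10000 591.0
```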

Excel solution
Look at Sheet B in the spreadsheet associated with this chapter. You will
see:

Spreadsheet 5.2
For each of the discount ranges the EOQ is calculated and the spreadsheet
notes (cells F7 to F10) whether that EOQ is feasible or not. Any order
quantity can be costed using cell E14, the example above is for an order
quantity (breakpoint) of 10,000 units. The discount to be applied is
automatically shown in cell E16.

Activity
Cost the breakpoints using Sheet B and confirm that they agree with the values presented
above.

Note here that the use of discount analysis is not restricted to buyers, it
can also be used by a supplier to investigate the likely effects upon the
orders he receives of changes in the discount structure. For example, if the
supplier lowers the order size at which a particular discount is received
then how might this affect the orders he receives – will they become
bigger/smaller, less frequent/more frequent?

In-house production or batch production


Activity/Reading
For this section read Anderson, Chapter 10, section 10.3.

In batch production we have an in-house workshop that is producing
items for later consumption in the production process. In a sense it can
be treated as a source of supply, just like an outside supplier. The matter
of inventory can therefore be approached in much the same way as dealt
with above:
a. cost of ordering: it is generally denoted by the symbol cs (cost of
setup) instead of co, but has the same significance or meaning


b. cost of the part
c. additional information
production capacity
availability: if the part is produced in a batch quantity (Q), is it
necessary to wait for the whole batch to be completed before one
item from it can be used, or can the part be used as soon as the
first unit is completed? We shall only consider the situation where
items from the batch can be used as soon as they are made.
Batch production becomes necessary when the demand for a part is
limited or when the rate of production is very high when compared with
the rate of demand or usage.
The following symbols are used:
P= cost of part (in £). In some companies, this cost could be further
broken down into material, labour and overhead costs but, for our
purposes, the total will do.
cs = cost of setting up the machine (in £)
ch = cost of inventory holding (in £ per unit per unit time)
Q= the batch quantity produced
Ac = rate of demand (or usage) in units per unit time
Ap = rate of production (in units per unit time)
Tc = time taken to consume (use up) the produced batch quantity
Tp = time taken to produce the batch
Note: Unit time could be day, week, month or a year as appropriate.
The batch quantity Q = TpAp = TcAc and a ratio r can be defined as
r = Ac/Ap = Tp/Tc
Total cost of producing one unit = P + cs/Q + 0.5chQ(1 - r)/Ac.
For minimum cost per unit, the above cost is differentiated with respect to
Q, equated to zero and solved for Q. The value of Q that results is:
Q = √(2Accs/(ch(1 - r))).
This order quantity is the Economic Batch Quantity (EBQ).

Example
A workshop has the facility to manufacture a part at a rate of 3,000 units per day. The
standard cost of the part is £14.30. The setup cost of the machine is £1,200. Assuming
an interest rate of 15 per cent per year, calculate the Economic Batch Quantity if the part
is used for further assembly work at a rate of 500 units per day.

Demand rate per day (Ac) = 500 units.


Production rate per day (Ap) = 3,000 units.
Standard cost per unit (P) = £14.30.
Interest rate (I) is 15 per cent per year = 0.15.
Machine setup cost (cs) = £1,200.
So r = Ac/Ap = 500/3000 = 1/6.
Assuming 365 days: the interest rate per day = 0.15/365
Therefore: ch = P × interest rate = 14.30 × 0.15/365
Note: Do not forget that the time unit here is a day and all factors have to
be in that timescale.

The formula to use is: Q = √(2Accs/(ch(1 – r)))
so Q = √((2 × 500 × 1200)/((14.30 × 0.15/365) × (1 – 1/6))) = 15,654.
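As a check, the EBQ arithmetic can be reproduced in a few lines (figures from the example; note that every rate is per day):

```python
import math

Ac, Ap = 500, 3000          # usage and production rates (units per day)
P, cs = 14.30, 1200.0       # unit cost (£) and machine setup cost (£)
ch = P * 0.15 / 365         # holding cost per unit per day (15% interest/year)
r = Ac / Ap

ebq = math.sqrt(2 * Ac * cs / (ch * (1 - r)))
print(round(ebq))           # 15654
```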

Activity
Repeat the calculation for the example given above, but without looking at the subject
guide. Do you get the correct answer or not?

Probabilistic demand
Activity/Reading
For this section read Anderson, Chapter 10, section 10.6.

If we have probabilistic demand we need to define:


Cover = cost per unit of overestimating the demand (i.e. this cost represents
the loss associated with ordering one additional unit and finding that it
cannot be sold)
Cunder = cost per unit of underestimating the demand (i.e. this cost
represents the opportunity loss of not ordering one additional unit and
finding that it could have been sold).
More detail with respect to probabilistic demand models is described
in Anderson. If CP is the cumulative probability of the demand being
less than or equal to Q*, where Q* is the optimum order quantity which
minimises the total cost (or maximises the total revenue or profit), then CP
= Cunder/(Cunder + Cover).

Example
Consider a news vendor who stands on the street and sells an evening paper, the Evening
News. He sells the paper to his customers for 35 (pence) a copy. He pays his supplier 20
(pence) a copy, but any unsold copies can be returned to the supplier and he gets 10
(pence) back. This is known as a salvage value.
Assume that his demand for copies on any day is either:
1. a rectangular (uniform) distribution with the extremes of the distribution being 60
and 80 copies per day
2. a Normal distribution with mean 100 and standard deviation 7.
How many copies should he stock?

Before we can compute the amount he should order, we need to work out
Cover and Cunder.
The cost of overestimating demand is 20 – 10 = 10, the cost he pays
minus the salvage value he gets back for an unsold copy, so Cover = 10.
To calculate Cunder we need his shortage cost per unit: how much does he
lose if a customer wants a copy and he does not have a copy available?
As a first analysis he loses his profit (= revenue – cost = 35 – 20 = 15) so
we can estimate his shortage cost (opportunity cost) as 15 (this ignores
any loss of goodwill and any loss of future custom that might result from a
shortage). Hence we have that Cunder= 15.
Hence CP = Cunder/(Cunder + Cover) = 15/(15 + 10) = 0.6.
For a rectangular (uniform) distribution with the extremes of the
distribution being 60 and 80 copies per day the order quantity is
60 + 0.6(80 – 60) = 72 copies per day.


For a Normal distribution with mean 100 and standard deviation 7, we
must make use of the Normal distribution table shown at the end of this
chapter. As CP = 0.6 we have that the value taken from that Normal
distribution table is approximately 0.25, so that the quantity Q to order is
given by the equation
(Q – 100)/7 = 0.25
i.e. Q = 101.75, say 102 copies.
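Both demand cases can be computed with the standard library's `statistics.NormalDist` (a sketch; the variable names are just illustrative):

```python
from statistics import NormalDist

price, cost, salvage = 35, 20, 10
c_over = cost - salvage                 # 10p lost per unsold copy
c_under = price - cost                  # 15p profit forgone per lost sale
cp = c_under / (c_under + c_over)       # critical fractile = 0.6

q_uniform = 60 + cp * (80 - 60)         # uniform demand on [60, 80]
q_normal = NormalDist(100, 7).inv_cdf(cp)
print(q_uniform, round(q_normal, 2))    # 72.0 and about 101.77, so stock 102
```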

Excel solution
Look at Sheet C in the spreadsheet associated with this chapter. You will
see:

Spreadsheet 5.3
which reproduces the same calculation as we carried out above. The slight
difference between the value we calculated for the Normal distribution
and the value shown in cell D9 is due to the fact that Excel is more
accurate than we have been when looking up the appropriate value from
the Normal distribution.
Note that there is an important conceptual difference between this news
vendor’s problem and the EOQ/discount problems considered above. In
those EOQ/discount problems we had a decision problem (how much to
order) even though the situation was one of certainty – we knew precisely
the rate at which we used items. In the news vendor problem if we knew
for certain how many customers would want a paper each day then the
decision problem becomes trivial (order exactly that many). In other
words:
• for the EOQ problem we had a decision problem even though there
was no uncertainty
• for the news vendor problem it was only the uncertainty that created
the decision problem.

Comment
There are many extensions to the simple EOQ models we have considered
– for example:
• Reorder lead time – allow a lead time between placing an order and
receiving it – this introduces the problem of when to reorder (typically
at some stock level called the reorder level).
• Stockouts – we can allow stockouts (often called shortages) (i.e.
no stock currently available to meet orders).
• Often an order is not received all at once, for example if the order
comes from another part of the same factory then items may be
received as they are produced.
• Buffer (safety) stock – some stock kept back to be used only when
necessary to prevent stockouts.


Materials requirements planning (MRP)


Materials requirements planning, referred to by the initials MRP, is
a technique which assists a company in the detailed planning of its
production. We shall introduce MRP by means of an example.

Example
The production manager at SIM Manufacturing wishes to develop a materials
requirements plan for producing chairs over an eight-week period. She estimates that
the lead time between releasing an order to the shop floor and producing a finished
chair is two weeks. The company currently has 260 chairs in stock and no safety stock
(safety stock is stock held in reserve to meet customer demand if necessary). The forecast
customer demand is for 150 chairs in week one, 70 in week three, 175 in week five, 90 in
week seven and 60 in week eight.

It helps to understand what is going on if we write out, over time, the
demand for chairs as below.
Week 1 2 3 4 5 6 7 8
Demand 150 0 70 0 175 0 90 60
On-hand at end of week 110
Order ? ? ? ? ? ? ? ?
Table 5.5
Here we have shown the demand in each of the eight weeks. Initially we
have 260 chairs available so if these are used to meet the demand of 150
in week one we have 260 − 150 = 110 left on-hand (i.e. in stock) at the
end of the week. Plainly we will need to order some more chairs in order
to meet all of the forecast future demand over the eight-week planning
period.
Conceptually therefore we face two related decisions about ordering:
• timing – when to order
• quantity – how much to order.
You can think of asking yourself the question, in each and every period,
should I order in this period, and if so how much?
For the moment suppose we order nothing in week 1, nothing in week 2,
etc. The situation by the time we reach the end of week 5 will be as below:
Week 1 2 3 4 5 6 7 8
Demand 150 0 70 0 175 0 90 60
On-hand at end of week 110 110 40 40 −135
Order ? ? ? ? ? ? ? ?
Table 5.6
If we are to avoid a stockout in week five, we plainly need to order at least
135 chairs.
Now we know that the lead time between ordering a chair and receiving
it is two weeks. Therefore to avoid a stockout in week five we must have
ordered 135 chairs either in week three, or in any week before week three.
In other words ordering:
• 135 chairs in week one, or
• 135 chairs in week two, or
• 135 chairs in week three,

would each ensure that we have sufficient chairs available to meet forecast
demand in week five.
If we order these chairs earlier than week three we will be carrying extra
inventory (stock) for a number of periods and, as we know, carrying stock
costs money. It would seem appropriate therefore to order 135 chairs in
week three. This will give:
Week 1 2 3 4 5 6 7 8
Demand 150 0 70 0 175 0 90 60
On-hand at end of week 110 110 40 40 0
Order 0 0 135 ? ? ? ? ?
Table 5.7
Continuing on in the same manner we get:

Week 1 2 3 4 5 6 7 8
Demand 150 0 70 0 175 0 90 60
On-hand at end of week 110 110 40 40 0 0 −90
Order 0 0 135 ? ? ? ? ?
Table 5.8
requiring an order of 90 chairs in week five and giving:

Week 1 2 3 4 5 6 7 8
Demand 150 0 70 0 175 0 90 60
On-hand at end of week 110 110 40 40 0 0 0 0
Order 0 0 135 0 90 ? ? ?
Table 5.9
Continuing again we get:

Week 1 2 3 4 5 6 7 8
Demand 150 0 70 0 175 0 90 60
On-hand at end of week 110 110 40 40 0 0 0 -60
Order 0 0 135 0 90 ? ? ?
Table 5.10
requiring an order of 60 chairs in week six and giving:

Week 1 2 3 4 5 6 7 8
Demand 150 0 70 0 175 0 90 60
On-hand at end of week 110 110 40 40 0 0 0 0
Order 0 0 135 0 90 60 ? ?
Table 5.11
Note that we have no data given here on which to base order decisions in
weeks seven and eight. As we are at the end of the planning period these
are usually taken as zero.
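The week-by-week reasoning above can be captured in a short routine (a sketch; the function name is illustrative and it assumes an order can be released no earlier than week one):

```python
def plan_orders(demand, on_hand, lead_time):
    """Order as late and as little as possible while never stocking out."""
    orders = [0] * len(demand)
    for week, d in enumerate(demand):
        shortfall = d - on_hand
        if shortfall > 0:
            release = max(week - lead_time, 0)   # order lead_time weeks ahead
            orders[release] += shortfall         # so the receipt arrives in 'week'
            on_hand += shortfall
        on_hand -= d
    return orders

demand = [150, 0, 70, 0, 175, 0, 90, 60]
print(plan_orders(demand, on_hand=260, lead_time=2))
# [0, 0, 135, 0, 90, 60, 0, 0]
```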

Decisions
Let us be clear what we have done here with respect to our two decisions
of:
• timing – when to order
• quantity – how much to order.

With respect to the timing decision we always ordered as late as
possible, but never planned a stockout. This is a driving principle in MRP:
never order before you need to, and never plan to stockout.
With respect to the quantity decision we always ordered as little as
possible (i.e. just enough to avoid a stockout). This is known as the lot
for lot rule.
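These two rules are simple enough to express in code. The following Python sketch is ours rather than part of the guide; it assumes 260 chairs are available to meet week one demand (consistent with Table 5.6) and reproduces the lot for lot schedule:

```python
def lot_for_lot(demand, opening, lead_time):
    """Order as late as possible, and just enough to avoid a stockout."""
    orders = [0] * len(demand)
    on_hand = opening
    for week, d in enumerate(demand):
        on_hand -= d
        if on_hand < 0:
            # The order must be placed lead_time weeks before the shortage;
            # max(...) guards against a shortage too early to be covered.
            orders[max(week - lead_time, 0)] += -on_hand
            on_hand = 0
    return orders

demand = [150, 0, 70, 0, 175, 0, 90, 60]
print(lot_for_lot(demand, 260, 2))  # [0, 0, 135, 0, 90, 60, 0, 0]
```

The result – 135 chairs ordered in week three, 90 in week five and 60 in week six – matches Tables 5.7 to 5.11.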

Extending the example


While for the example considered above, involving just a single item, we
easily worked out the orders manually, it is obvious that as the number of
items increases a manual calculation becomes too complicated and we need
a computer package to help us. We illustrate this below.

For the chair production problem considered before, suppose now that the production
manager, as well as planning the production of the chair, must also plan the production
of the components that make up the chair. These are: a seat, a back and four legs. The
lead time for seats and backs is two weeks and the lead time for legs is one week. The
company currently has an inventory of 60 seats, 40 backs and 80 legs. Scheduled receipts
are 50 seats in week one and 10 backs in week one.

Now in planning the production of chairs we need also to plan the
production of seats, backs and legs. For example, we show below the
situation as derived above where in week three we issued an order for 135
chairs.

Week                      1    2    3    4    5    6    7    8
Demand                  150    0   70    0  175    0   90   60
On-hand at end of week  110  110   40   40    0    0    0    0
Order                     0    0  135    ?    ?    ?    ?    ?
Table 5.12
Now to have 135 chairs made we need to have to hand (i.e. currently
available) 135 seats, 135 backs and 4(135) = 540 legs. The current
inventory of these items (plus scheduled receipts) is insufficient, so orders
must be placed for these items. Just as we did for the chair itself above
these orders must be phased in time so as to ensure that we never
stockout.
Now to do all this manually for chairs, seats, backs and legs would just be
too time-consuming and error-prone. It would be far better to do this via a
computer package.

Activity
Continue the above example and produce the order schedule for seats, backs and legs. Do
you now appreciate why a computer package to do this for you would be a good idea?
Can you see any disadvantages with using a computer package?
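Having attempted the activity by hand, a small script such as the following Python sketch can check your schedule. The function and variable names are ours, and we assume components are consumed in the week the chair order is placed, as in the text:

```python
def explode(bom, lead_time, on_hand, receipts, parent_orders, horizon=8):
    """Net each component's requirements against stock and scheduled
    receipts, then time-phase its orders using the lot for lot rule."""
    plans = {}
    for part, per_unit in bom.items():
        available = on_hand[part]
        orders = {}
        for week in range(1, horizon + 1):
            available += receipts[part].get(week, 0)
            available -= per_unit * parent_orders.get(week, 0)
            if available < 0:
                # order so the parts arrive just in time for production
                orders[week - lead_time[part]] = -available
                available = 0
        plans[part] = orders
    return plans

bom = {"seat": 1, "back": 1, "leg": 4}
lead_time = {"seat": 2, "back": 2, "leg": 1}
on_hand = {"seat": 60, "back": 40, "leg": 80}
receipts = {"seat": {1: 50}, "back": {1: 10}, "leg": {}}
chair_orders = {3: 135, 5: 90, 6: 60}  # the schedule derived for the chair
print(explode(bom, lead_time, on_hand, receipts, chair_orders))
```

Running this gives, for example, orders of 25 seats in week one, 90 in week three and 60 in week four – precisely the sort of arithmetic a computer package automates.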

In MRP, two types of information are required:


• structural
• tactical.
Structural information is information about the items (parts/components)
that the company uses and how different items are related to one another.
It includes information for each item such as lead time and lot (or batch)
size rule. The key point about this information is that it changes relatively
infrequently.

Tactical information is information about the current state of the
company – for example, sales orders (real and forecast) pending, the
master production schedule, on-hand inventory levels and purchase
orders. Obviously the key point about this information is that it changes
frequently.
In order to show the make-up (in terms of the parts needed for
production) we have a Bill of Materials (BOM) for the end-product
(namely the chair). Below we show the BOM for the chair.

Figure 5.7
This BOM means that to produce one chair we need:
• one seat
• one back
• four legs.
The Bill of Materials can be thought of as a diagrammatic recipe. Just
as in cooking we need a list of ingredients and their quantities to know
how to cook something, the BOM tells us what we need to make a chair.
The BOM is best thought of as being divided into levels, with the final item
(the chair) being at the top level and the items needed to make up a chair
being at the second level.
Other examples may have more levels (e.g. if items at the second level
are themselves made up from further items). Plainly BOMs are structural
information that changes relatively infrequently. It is also plain that any
mistakes in specifying BOMs could have disastrous consequences on the
shop floor – for example, consider what would happen if we failed to note
that a particular part is needed in the production of some item.
The tactical information required in MRP relates to:
• out-going inventory (sales) and planned production (master
production schedule)
• on-hand inventory and in-coming inventory (purchases).
Below we give a diagrammatic overview of the situation.

Purchases (in-coming inventory) add to on-hand inventory. Production,
governed by the BOM and the master production schedule, draws on this
inventory and feeds finished goods inventory, from which sales (out-going
inventory) are met.
Note: all of the above is implicitly time phased.
Figure 5.8
Given all this information then (conceptually at least) we should be able
to calculate what we should do, in terms of when to place orders with
external suppliers (or internal suppliers) and the size of those orders, so
that we never run out of stock of any item (i.e. we always achieve the
planned production and meet the sales orders).
This process of calculating the orders needed is called an MRP
EXPLOSION and produces the materials requirements (hence the name –
Materials Requirements Planning).

Just-in-time (JIT)
Just-in-time (JIT) is easy to grasp conceptually: everything happens
just-in-time. For example, consider your journey to work or school
today. You could have left your house just-in-time to catch a bus to the
train station, just-in-time to catch the train, just-in-time to arrive.
Conceptually there is no problem with this. However, achieving it in
practice is likely to be difficult!
So too, in a manufacturing operation, component parts could conceptually
arrive just-in-time to be picked up by a worker and used. We would thus,
at a stroke, eliminate any inventory of parts; they would simply arrive
just-in-time! Similarly, we could produce finished goods just-in-time
to be handed to a customer who wants them. So, at a conceptual extreme,
JIT has no need for inventory or stock, whether of raw materials, work in
progress or finished goods.
Obviously any sensible person will appreciate that achieving the
conceptual extreme outlined above might well be difficult, or impossible,
or extremely expensive, in real life. However, that extreme does illustrate
that, perhaps, we could move an existing system towards a system with
more of a JIT element than it currently contains.
For example, consider a manufacturing process – we might not be able
to have a JIT process in terms of handing finished goods to customers, so
we would still need some inventory of finished goods. Perhaps it might be
possible however to arrange raw material deliveries so that, for example,
materials needed for one day’s production arrive at the start of the day
and are consumed during the day – effectively reducing/eliminating the
raw material inventory.
Adopting a JIT system is also sometimes referred to as adopting a lean
production system.
Just-in-time, as the name suggests, is the philosophy of having just the
right amount of material available at precisely the right time and is based
firmly on the principle that there is no need to have any inventory. In fact
inventory is considered to be an evil. The essence of the JIT approach is an
attempt to eliminate three principal forms of waste:
• idle inventories constitute a direct waste of resources, money and
materials, and indirectly of the energy used in the conversion and
refining of these inventories
• storage of idle inventories is a waste of space
• defective parts are a waste of resources and energy.
Perhaps the principal aspect of JIT is that one does not achieve an
improvement in the three aspects mentioned above and then stop. The
philosophy is of a cycle, a never-ending circle of inventory cuts, quality,
product and performance improvements, leading to more cuts in inventory
and more improvements. The reasoning is that a higher emphasis on
quality gives productivity rewards in the shape of savings in reworking,
in scrap, in inspection costs and in customer warranty claims. Constant

quality and performance improvements lead to even better and cheaper
products; and better products, it is believed, lead to larger market share.
These issues of eliminating inventory and continual improvement can be
illustrated by the classic JIT diagram as shown below. There the company
(the boat) floats on a sea of inventory; lurking beneath the sea are the
rocks, the problems that are hidden by the sea of inventory.

Figure 5.9
If we reduce the inventory level then the rocks become exposed, as below.

Figure 5.10
Now the company can see the rocks (problems) and hopefully solve them
before it runs aground!
The requirements for a successful JIT system include:
• uniform final assembly schedule
• short set-up time
• low machine failure and low incidence of defects
• flexible equipment and workforce
• reliable suppliers.
The JIT approach is not universally applicable: it is particularly unsuitable
in a job shop environment where the products and schedules may
fluctuate. This type of environment is, however, appropriate for MRP.

Activity
Think of any manufacturing operations that you are aware of. Would such operations be
more suitable for JIT or MRP and why?

Optimised production technology (OPT)


Optimised production technology (OPT) is based upon the idea that
production is constrained – for example by a machine in the production
line that works more slowly than the machines on either side of it. The idea
behind OPT is that these constraints (or ‘bottlenecks’) are identified and,
where possible, eliminated. Moreover, the flow through the manufacturing
process should be such as to accommodate such bottlenecks as do
currently exist. For example, if we have a faster machine feeding a slower
machine then inevitably we will have work-in-progress stock building
up before the slower machine. Until that machine can be speeded up, the
faster machine needs to be deliberately slowed to balance the process.

Supply chain management (SCM)


One phrase that is encountered these days is ‘supply chain management’.
The easiest way to envisage this is to think of a common product that you
have at home, for example a loaf of bread. How did this get to you? Most
likely you bought it in a shop close to where you live or work. It may have
arrived in that shop on a truck that probably carried a number of different
products, as well as your loaf of bread. That truck may have come from a
distribution centre where the loaf was stored until a shop needed it and
it was delivered to that distribution centre from the factory that made
it. The factory itself took delivery of the raw materials needed to make a
loaf – possibly those raw materials had been delivered to the factory from
another country.
It is clear that a long chain of activities is involved in getting a loaf on the
shelf in the shop so that you can buy it – this is the supply chain. Supply
chain management (SCM) addresses issues associated with coordination
and management of these various links so that cost (transport, storage and
production) is minimised while ensuring that customers are satisfied.
In SCM, two types of decision can be identified:
• strategic
• tactical.
Strategic decisions are decisions that, once made, cannot be altered in the
short-term. Tactical decisions, by contrast, are decisions that can be altered
in the short-term.
Typically in SCM the strategic decisions are:
• where to locate facilities, such as factories and distribution centres
• the overall distribution pattern – which factories produce which
product, which factories supply which distribution centres and which
distribution centres supply which customers.
Typically in SCM the tactical decisions are:
• the levels of inventory (stock) to hold
• the precise pattern of local delivery from distribution centres to
customers.
For example, at a particular distribution centre how much stock of a
particular product should we hold this week and how should we direct our
delivery vehicles around the many customers requiring delivery from our
distribution centre.
Also included in SCM is the idea that the various functions within the
organisation (such as marketing, production and distribution) need to
work together to meet organisational goals. The marketing function, for
example, might aim for high sales through product availability – implying
high levels of stock or frequent deliveries to re-stock customers. Such a
goal might not be compatible with the distribution function’s goal to keep
down the amount of stock in the supply pipeline and the cost of delivery.

Links to other chapters


The topics considered in this chapter do not directly link to other
chapters in this subject guide. At a more general level the link between this
chapter and other chapters in this subject guide is the use of a quantitative
(analytic) approach to a problem.

Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title                                              Anderson (page number)
Ford-Otasan                                        406
Lowering inventory cost at Dutch companies         436
Dell computers                                     440
Multistage inventory planning at Deere & Company   442

A reminder of your learning outcomes


Having completed this chapter, and the Essential reading and activities,
you should be able to:
• discuss the various factors that need to be included when determining
Economic Order Quantity and Economic Batch Quantity
• determine Economic Order Quantity when there are no quantity
discounts
• determine Economic Order Quantity when there are quantity discounts
• determine Economic Batch Quantity
• determine the optimum order quantity when the demand is
probabilistic (rectangular (uniform) distribution or Normal
distribution)
• explain Materials Requirements Planning (MRP) by means of a simple
example
• discuss Just-in-Time (JIT), Optimised Production Technology (OPT)
and Supply Chain Management (SCM).

Sample examination questions


For Sample examination questions relating to the material presented in
this chapter please visit the VLE.

Table for the standard Normal distribution


The value tabulated below shows the area, between -∞ and +z, under the
curve associated with the standard Normal distribution with mean zero and
variance one.
For example, the area under this curve between -∞ and +0.32 is 0.626.
z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.500 0.504 0.508 0.512 0.516 0.520 0.524 0.528 0.532 0.536
0.1 0.540 0.544 0.548 0.552 0.556 0.560 0.564 0.567 0.571 0.575
0.2 0.579 0.583 0.587 0.591 0.595 0.599 0.603 0.606 0.610 0.614
0.3 0.618 0.622 0.626 0.629 0.633 0.637 0.641 0.644 0.648 0.652
0.4 0.655 0.659 0.663 0.666 0.670 0.674 0.677 0.681 0.684 0.688
0.5 0.691 0.695 0.698 0.702 0.705 0.709 0.712 0.716 0.719 0.722
0.6 0.726 0.729 0.732 0.736 0.739 0.742 0.745 0.749 0.752 0.755
0.7 0.758 0.761 0.764 0.767 0.770 0.773 0.776 0.779 0.782 0.785
0.8 0.788 0.791 0.794 0.797 0.800 0.802 0.805 0.808 0.811 0.813
0.9 0.816 0.819 0.821 0.824 0.826 0.829 0.831 0.834 0.836 0.839
1.0 0.841 0.844 0.846 0.848 0.851 0.853 0.855 0.858 0.860 0.862
1.1 0.864 0.867 0.869 0.871 0.873 0.875 0.877 0.879 0.881 0.883
1.2 0.885 0.887 0.889 0.891 0.893 0.894 0.896 0.898 0.900 0.901
1.3 0.903 0.905 0.907 0.908 0.910 0.911 0.913 0.915 0.916 0.918
1.4 0.919 0.921 0.922 0.924 0.925 0.926 0.928 0.929 0.931 0.932
1.5 0.933 0.934 0.936 0.937 0.938 0.939 0.941 0.942 0.943 0.944
1.6 0.945 0.946 0.947 0.948 0.949 0.951 0.952 0.953 0.954 0.954
1.7 0.955 0.956 0.957 0.958 0.959 0.960 0.961 0.962 0.962 0.963
1.8 0.964 0.965 0.966 0.966 0.967 0.968 0.969 0.969 0.970 0.971
1.9 0.971 0.972 0.973 0.973 0.974 0.974 0.975 0.976 0.976 0.977
2.0 0.977 0.978 0.978 0.979 0.979 0.980 0.980 0.981 0.981 0.982
2.1 0.982 0.983 0.983 0.983 0.984 0.984 0.985 0.985 0.985 0.986
2.2 0.986 0.986 0.987 0.987 0.987 0.988 0.988 0.988 0.989 0.989
2.3 0.989 0.990 0.990 0.990 0.990 0.991 0.991 0.991 0.991 0.992
2.4 0.992 0.992 0.992 0.992 0.993 0.993 0.993 0.993 0.993 0.994
2.5 0.994 0.994 0.994 0.994 0.994 0.995 0.995 0.995 0.995 0.995
2.6 0.995 0.995 0.996 0.996 0.996 0.996 0.996 0.996 0.996 0.996
2.7 0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997
2.8 0.997 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998
2.9 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.999 0.999 0.999
3.0 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999
3.1 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999
3.2 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999
3.3 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
3.4 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
3.5 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
Table 5.13
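If no table is to hand, the tabulated values can also be computed directly from the error function available in most programming languages. A Python sketch (ours, not part of the guide):

```python
from math import erf, sqrt

def phi(z):
    """Area under the standard Normal curve between -infinity and z."""
    return 0.5 * (1 + erf(z / sqrt(2)))

print(round(phi(0.32), 3))  # 0.626, matching the tabulated value
```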

Chapter 6: Markov processes

Essential reading
Anderson, Chapter 18.

Spreadsheet
markov.xls
• Sheet A: Calculations for a two state example
• Sheet B: Market share over time – two states
• Sheet C: Calculations for a three state example
• Sheet D: Market share over time – three states
• Sheet E: Calculations for an absorbing state example
• Sheet F: Market share over time – absorbing states
• Sheet G: Solution via Solver for estimation of transition matrix
This spreadsheet can be downloaded from the VLE.

Aims of the chapter


The aims of this chapter are to:
• discuss brand switching, which is an archetypal application for the
specific numeric technique – Markov processes – presented in this
chapter
• give numeric examples of the specific applications of the technique in
two different situations (this difference relating to the nature of the
states underlying the Markov process).

Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• draw a state-transition diagram
• calculate the state of the system at any time period
• calculate the long-run system state (both for systems involving no
absorbing states and for systems involving absorbing states).

Introduction
Markov process models are typically applicable where we have a number
of different states that something can be in over time. For example, we
can think of the state as being the petrol (gas) station you buy your petrol
from and there may be a number of different such stations (or states)
from which you bought petrol last year. For systems of this kind we may
be interested in prediction questions such as ‘what is the probability that
a particular petrol station will be visited in the next month?’. Applying
a Markov process model enables us to answer such questions easily. You
will see a number of applications of Markov process models mentioned
throughout this chapter.

Many products/services in your everyday life involve ‘brand switching’ –
switching between various product brands, or switching between different
suppliers of a particular service. For example, consider what you had for
breakfast today; do you always have the same kind of breakfast cereal or
do you sometimes switch between brands? Those readers who have small
children will know that brand switching in terms of children’s cereals is
often influenced by what children have seen on television.
Reflect on what all the television/other media advertisements for
consumer items are meant to do. If the total market demand is effectively
stable – often called by economists a saturated market: ‘all the people who
are going to buy are already buying as much as they are going to’ – then
what such advertisements are meant to achieve is brand switching.
Although you may not have considered television
advertisements in this light before, reflect that television advertisements
for, say, washing powder, are not meant to make people wash their clothes
more often, rather they are designed to persuade (influence) people to
switch to the brand being advertised.
Markov processes give us an insight into how market shares develop over
time in the light of brand switching.
You will see a number of other applications of Markov processes
mentioned below. We illustrate Markov processes by use of an example.
Example
Consider the following problem: company K, the manufacturer of a breakfast cereal,
currently has some 25 per cent of the market. Data from the previous year indicate that
88 per cent of K’s customers remained loyal that year, but 12 per cent switched to the
Competition. In addition, 85 per cent of the Competition’s customers remained loyal to
the Competition but 15 per cent of the Competition’s customers switched to K.

Assuming these trends continue, determine K’s share of the market:


a. in two years’ time
b. in the long-run.
This problem is an example of a brand switching problem that often arises
in the sale of consumer goods.
Activity
Spend some time with a piece of paper and a calculator. What would you estimate the
answers to a. and b. above to be? Record your answers here for reference.

In order to solve this problem we make use of Markov chains or
Markov processes. The procedure is given below.

Solution procedure
Activity/Reading
For this section read Anderson, Chapter 18, section 18.1.

Observe that, each year, a customer can either buy K’s cereal or the
Competition’s. Hence we can construct a diagram as below where the
two circles represent the two states a customer can be in; and the arcs
represent the probability that a customer makes a transition each year
between states.

Figure 6.1
Note the circular arcs indicating a ‘transition’ from one state to the same
state. The diagram is known as the state-transition diagram (and note
that all the arcs in that diagram are directed arcs).
Following on from this diagram we can construct the transition matrix
(usually denoted by the symbol P) which tells us the probability of making
a transition from one state to another state. Letting:
State 1 = customer buying K’s cereal
and
State 2 = customer buying Competition’s cereal,
we have the transition matrix P for this problem given by:
               To State
                  1     2
From State 1 | 0.88  0.12 |
           2 | 0.15  0.85 |
Note that the sum of the elements in each row of the transition matrix is
one.
Now we know that currently K has some 25 per cent of the market. Hence
we have the row matrix representing the initial state of the system given
by:
State
1 2
[0.25, 0.75]
We usually denote this row matrix by s1 indicating the state of the system
in the first period (years in this particular example). Now Markov theory
tells us that, in period (year) t, the state of the system is given by the row
matrix st where:
st = st–1P = st–2P^2 = ... = s1P^(t–1)
We have to be careful here as we are doing matrix multiplication and the
order of calculation is important (i.e. st–1P ≠ Pst–1 in general). To find st
we could attempt to raise P to the power t–1 directly but, in practice, it is
far easier and more informative to calculate the state of the system in each
successive year 1, 2, 3,..., t.
We already know the state of the system in year one (s1) so the state of the
system in year two (s2) is given by:
s2 = s1P = [0.25,0.75] × | 0.88 0.12 |
| 0.15 0.85 |
= [(0.25)(0.88) + (0.75)(0.15), (0.25)(0.12) + (0.75)(0.85)]
= [0.3325, 0.6675]
Note that this result makes intuitive sense (e.g. of the 25 per cent currently
buying K’s cereal, 88 per cent continue to do so whereas, of the 75 per
cent buying the competitor’s cereal, 15 per cent change to buy K’s cereal –
giving a (fractional) total of (0.25)(0.88) + (0.75)(0.15) = 0.3325 buying
K’s cereal).

Hence in year two, 33.25 per cent of the people are in State 1: that is
buying K’s cereal. Note here that, as a numerical check, the elements of st
should always total one.
In year three, the state of the system is given by:
s3 = s2P = [0.3325, 0.6675] × | 0.88 0.12 |
| 0.15 0.85 |
= [0.392725, 0.607275]
Hence in year three, 39.2725 per cent of the people are buying K’s cereal.
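These year-by-year calculations are easy to reproduce in code. The following Python sketch (ours, using plain lists rather than a spreadsheet) performs the row-vector by matrix multiplication st = st–1P:

```python
def step(state, P):
    """One transition of the Markov process: returns st = st-1 P."""
    n = len(state)
    return [sum(state[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.88, 0.12],
     [0.15, 0.85]]  # rows are 'from', columns are 'to'
s1 = [0.25, 0.75]   # the initial state: K has 25 per cent of the market
s2 = step(s1, P)
s3 = step(s2, P)
print([round(v, 6) for v in s2])  # [0.3325, 0.6675]
print([round(v, 6) for v in s3])  # [0.392725, 0.607275]
```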

Activity
How does the answer calculated here compare with the answer you estimated before
(when the problem was introduced) for K’s share of the market in two years’ time?

Activity
Take the initial state and the transition matrix and repeat the calculations given above,
but without looking at the subject guide. Do you get the correct answer or not?

Now examine Sheet A in the spreadsheet associated with this chapter. You
will see there the data for this example and the market shares for K and
the Competition are calculated for you in that sheet for 20 periods – with
the change in market shares being shown graphically in Sheet B. These
sheets are reproduced below.

Spreadsheet 6.1

Figure 6.2

Insight
One of the advantages of applying a Markov process approach to a
problem is that we can gain some insight into the change in market share
over time. Of course, it would be foolish to pretend that we can accurately
predict market share in the future. However, insight is valuable. Here, for
example, from Sheet B we can see that on current trends by time period
six K’s share of the market will roughly equal that of the Competition’s and
from then on K’s market share will exceed the Competition’s market share.
As a further example suppose that, through a marketing/advertising
campaign, K could increase the loyalty of their customers and specifically
increase the transition probability from K to K by 0.01, i.e. to 0.89.
Now if K were to do this suppose we believe that the Competition will
respond with their own campaign and this will enable them to maintain
loyalty among their own customers, with the Competition to Competition
transition probability increasing from 0.85 to 0.86. In this case, will K have
a larger or smaller (or equal) market share after two years?
Using Sheet A and changing the appropriate transition probabilities we have:

Spreadsheet 6.2
indicating that this situation will not improve K’s market share in two
years – as can be seen above, this is now 38.56 per cent whereas before it
was higher – 39.27 per cent. Knowing this without the effort and expense
of a marketing campaign is obviously extremely valuable.

Form of the transition matrix


Although we did not stress it above, the procedure we applied to calculate
the state of the system as time passed, and indeed the procedure we will
apply below to calculate the long-run state of the system, assumes that the
transition matrix has the form that:
• the rows represent ‘from’
• the columns represent ‘to’
(i.e. the transition matrix gives the probability of a transition from each
of the (row) states to each of the (column) states). A simple way to check
this is to see if the probabilities in each row of the transition matrix add up
to one. If they do then the transition matrix is most probably in a correct
form. If these probabilities do not total one then attention needs to be paid
to the transition matrix to get it into a correct form before any numeric
calculations are done (typically this will involve transposing the rows and
columns of the matrix).
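This row-sum check is easily automated. The following Python sketch (ours) returns False for a matrix that has, for example, been transposed:

```python
def rows_sum_to_one(P, tol=1e-9):
    """Rows are 'from' states, so each row of a transition matrix must sum to one."""
    return all(abs(sum(row) - 1) < tol for row in P)

print(rows_sum_to_one([[0.88, 0.12], [0.15, 0.85]]))  # True
# A transposed matrix typically fails the check:
print(rows_sum_to_one([[0.88, 0.15], [0.12, 0.85]]))  # False
```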

Long-run
Recall here that the question we originally posed above asked for K’s share
of the market in the long-term. This implies that we need to calculate st as
t becomes very large (as it approaches infinity).
The idea of the long-run is based on the assumption that, eventually, the
system reaches ‘equilibrium’ (often referred to as the ‘steady-state’) in
the sense that st = st‑1. This is not to say that transitions between states do
not take place; they do, but they ‘balance out’ so that the number in each
state remains the same.
There are two basic approaches to calculating the steady-state:
a. computational: find the steady-state by calculating st for t = 1,
2, 3,... and stop when st‑1 and st are approximately the same. This is
obviously very easy for a computer and is the approach often used by
computer packages. Indeed if you examine Sheet A you will see that
by time period 20 the share for K appears to have stabilised at around
55.5 per cent and the share for the Competition appears to have
stabilised at around 44.5 per cent. The same effect can be seen clearly
in the graph shown in Sheet B.
b. algebraic: to avoid the lengthy arithmetic calculations needed to
calculate st for t = 1, 2, 3,... we have an algebraic short-cut that can
be used. Recall that, in the steady-state, st = st‑1 (= [x1,x2] say for the
example considered above). Then as st = st‑1P we have:
[x1,x2] = [x1,x2] × | 0.88 0.12 |
| 0.15 0.85 |
(and note also that x1 + x2 = 1). Hence we have three equations which
we can solve. Note that although we have just two variables to solve for,
we need to include the equation x1 + x2 = 1. If it is omitted then it
is impossible to find a unique solution (the first two equations are linearly
dependent).
Adopting the algebraic approach here we have the three equations:
x1 = 0.88x1 + 0.15x2
x2 = 0.12x1 + 0.85x2
x1 + x2 = 1
or
0.12x1 - 0.15x2 = 0
0.12x1 - 0.15x2 = 0
x1 + x2 = 1.
Echoing the point made above note here how the first two equations
are identical after algebraic manipulation. Hence clearly the equation
x1 + x2 = 1 is essential. Without it we could not obtain a unique
solution for x1 and x2. Solving we get:
x1 = 0.5556
x2 = 0.4444.
Hence, in the long-run, K’s market share will be 55.56 per cent.
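The computational approach is easily automated too. The following Python sketch (ours) iterates st = st–1P until successive states agree, confirming the algebraic result above:

```python
def steady_state(P, tol=1e-10):
    """Iterate s <- sP from any starting distribution until it stops changing."""
    n = len(P)
    s = [1.0 / n] * n
    while True:
        s_next = [sum(s[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(s, s_next)) < tol:
            return s_next
        s = s_next

P = [[0.88, 0.12],
     [0.15, 0.85]]
x = steady_state(P)
print([round(v, 4) for v in x])  # [0.5556, 0.4444]
```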

Activity
How does the answer calculated here compare with the answer you estimated before
(when the problem was introduced) for K’s share of the market in the long-run?

Activity
Take the transition matrix and repeat the calculations for the long-run given above, but
without looking at the subject guide. Do you get the correct answer or not?

Check
A useful numerical check (particularly for larger problems) is
backsubstitution: substituting the final calculated values back into the
original equations to check that they are consistent with those equations.
Here the logic is clear: the values of x1 = 0.5556 and x2 = 0.4444 that we
have derived by algebraic manipulation above are, we believe, the solution
to the three original equations:
x1 = 0.88x1 + 0.15x2
x2 = 0.12x1 + 0.85x2
x1 + x2 = 1.
Therefore if we substitute the values of x1= 0.5556 and x2 = 0.4444 back
into all three of these equations we should (to within rounding errors) find
that the left-hand sides and the right-hand sides of the above equations are
equal. This is a very useful check (either for two states as considered here,
or for three states as will be considered below) since if we pass this check
the solution derived must (mathematically) be the correct solution to the
original equations.
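The backsubstitution check itself takes only a few lines. The following Python sketch (ours) computes the residual of each equation at the derived solution:

```python
x1, x2 = 0.5556, 0.4444  # the solution derived above (to four decimal places)
residuals = [
    x1 - (0.88 * x1 + 0.15 * x2),  # first equation
    x2 - (0.12 * x1 + 0.85 * x2),  # second equation
    (x1 + x2) - 1,                 # normalisation
]
print(all(abs(r) < 1e-3 for r in residuals))  # True
```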

Three states
Just as we have dealt with two states here so we can expand the example
and deal with three states. Suppose K has two main competitors A and B
and that the transition matrix is as shown in Sheet C below.

Spreadsheet 6.3
You can see that Sheet C has the steady state automatically calculated. For
ease of computation in Excel this is done using a matrix method that you
do not need to know.
However you do need to be able to reproduce the values shown there
by solving in an exactly analogous fashion as you did for the two state
example above, i.e. by letting the long-term be [x1, x2, x3] and taking the
matrix equation:
[x1,x2,x3] = [x1,x2,x3] × | 0.88 0.05 0.07 |
| 0.15 0.75 0.10 |
| 0.15 0.20 0.65 |
with x1 + x2 + x3 = 1 and solving.

Activity
Solve for the long-run for this example.
Check your solution by substituting the values found back into the (four) equations you
started with.
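Once you have attempted the activity, you can check your algebra numerically. The following Python sketch (ours) simply iterates the three-state system to (approximate) equilibrium:

```python
P = [[0.88, 0.05, 0.07],
     [0.15, 0.75, 0.10],
     [0.15, 0.20, 0.65]]
s = [1 / 3, 1 / 3, 1 / 3]  # any starting distribution will do
for _ in range(200):  # 200 iterations is ample for convergence here
    s = [sum(s[i] * P[i][j] for i in range(3)) for j in range(3)]
print([round(v, 4) for v in s])  # [0.5556, 0.2593, 0.1852]
```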

Again we can gain some insight into the change in market share over time
– as in Sheet D below.

Figure 6.3
Here we see that the market share for K rapidly exceeds the market shares
for both A and B.
To consolidate your knowledge of Markov processes consider the following
example.

Activity/Reading
For this section read Anderson, Chapter 18, section 18.2.

Example
Suppose that the chip industry is controlled by four companies: Crispy, Crunchy, Mushy
and Scrunchy. If customers buy either Crispy or Crunchy they never buy another brand. If
they buy Mushy the probabilities that they will buy Crispy, Crunchy, Mushy and Scrunchy
next month are 0.45, 0.4, 0.05 and 0.1 respectively. If they buy Scrunchy the probabilities
that they will buy Crispy, Crunchy, Mushy and Scrunchy next month are 0.1, 0.2, 0.3 and
0.4 respectively.
a. Represent this situation on a state-transition diagram.
b. If the buyers are initially distributed as 20 per cent, 30 per cent, 30 per cent and
20 per cent for Crispy, Crunchy, Mushy and Scrunchy respectively what will be the
situation after two months?
c. What will be the long-run system state?

Activity
Try drawing the state-transition diagram for this problem yourself.


Letting:
State 1 = Crispy
State 2 = Crunchy
State 3 = Mushy
State 4 = Scrunchy
we have:
P = | 1     0     0     0   |
    | 0     1     0     0   |
    | 0.45  0.4   0.05  0.1 |
    | 0.1   0.2   0.3   0.4 |
and s1 = [0.2, 0.3, 0.3, 0.2].
Note here that the states corresponding to Crispy and Crunchy are
absorbing states (states which, once reached, cannot be left). States
which are non-absorbing are often called transient states.
The state-transition diagram is shown below:

Figure 6.4

Activity
Try computing the state of the system in the second month s2 = s1P yourself. You should
get s2=[0.355, 0.46, 0.075, 0.11].

Activity
Try computing the state of the system in the third month s3 = s2P yourself. You should get
s3 = [0.39975, 0.512, 0.03675, 0.0515].
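Both activities are a single matrix multiplication st = st−1P, which the following plain Python sketch reproduces:

```python
# Transition matrix for the four-state example
# (states 1-4 = Crispy, Crunchy, Mushy, Scrunchy)
P = [[1.0,  0.0, 0.0,  0.0],
     [0.0,  1.0, 0.0,  0.0],
     [0.45, 0.4, 0.05, 0.1],
     [0.1,  0.2, 0.3,  0.4]]

def step(s, P):
    """One Markov step: s_{t+1} = s_t P."""
    return [sum(s[i] * P[i][j] for i in range(4)) for j in range(4)]

s1 = [0.2, 0.3, 0.3, 0.2]
s2 = step(s1, P)
s3 = step(s2, P)
print([round(v, 5) for v in s2])   # [0.355, 0.46, 0.075, 0.11]
print([round(v, 5) for v in s3])   # [0.39975, 0.512, 0.03675, 0.0515]
```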

In order to calculate the long-run system state we adopt the approach in
Anderson, so that (using the notation presented there):
R = | 0.45  0.4 |      Q = | 0.05  0.1 |
    | 0.1   0.2 |          | 0.3   0.4 |
The fundamental matrix N = (I – Q)⁻¹
  = | 1 – 0.05    –0.1  |⁻¹
    | –0.3      1 – 0.4 |
  = | 0.95  –0.1 |⁻¹
    | –0.3   0.6 |
  = | 1.1111  0.1852 |
    | 0.5556  1.7593 |


so NR = | 1.1111  0.1852 | × | 0.45  0.4 |
        | 0.5556  1.7593 |   | 0.1   0.2 |
      = | 0.5185  0.4815 |
        | 0.4259  0.5741 |
Note here, as a check, the elements in each row of this final matrix should
add up to one.
Now, since s1 = [0.2, 0.3, 0.3, 0.2], with the first two states being
absorbing states and the last two states (with values 0.3 and 0.2) being
transient states, we have that the long-run system state is [x1, x2, 0, 0]
(since eventually the absorbing states absorb everything) where:
x1 = 0.2 + [0.3, 0.2] × | 0.5185 | = 0.4407
| 0.4259 |
x2 = 0.3 + [0.3, 0.2] × | 0.4815 | = 0.5593
| 0.5741 |
Again, as a numerical check, the final values for x1 and x2 should add up to
one.
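The whole calculation can be reproduced in a few lines of plain Python, using the standard determinant formula for inverting a 2×2 matrix; the sketch below mirrors the hand working above, including both numerical checks:

```python
# Anderson's partition of the four-state transition matrix:
# R = transitions from the transient states (Mushy, Scrunchy) to the
#     absorbing states (Crispy, Crunchy); Q = transitions among the
#     transient states themselves.
R = [[0.45, 0.4],
     [0.1,  0.2]]
Q = [[0.05, 0.1],
     [0.3,  0.4]]

def inv2(M):
    """Inverse of a 2x2 matrix via the determinant formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Fundamental matrix N = (I - Q)^-1, then NR
N = inv2([[1 - Q[0][0], -Q[0][1]], [-Q[1][0], 1 - Q[1][1]]])
NR = matmul(N, R)

# Initial state: 0.2, 0.3 already in the absorbing states; 0.3, 0.2 transient
x1 = 0.2 + 0.3 * NR[0][0] + 0.2 * NR[1][0]
x2 = 0.3 + 0.3 * NR[0][1] + 0.2 * NR[1][1]
print(round(x1, 4), round(x2, 4))   # 0.4407 0.5593

assert all(abs(sum(row) - 1) < 1e-9 for row in NR)   # each row of NR sums to one
assert abs(x1 + x2 - 1) < 1e-9                       # x1 + x2 = 1
```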

Activity
Take the initial state and the transition matrix and repeat the calculations for the long-run
given above, but without looking at the subject guide. Do you get the correct answer or not?

This example is also given in Sheet E below. Note here that Sheet E
includes details of the various matrices encountered in the procedure so
that if you solve another example you will have a numeric check available
in that sheet on any calculations that you do by hand.

Spreadsheet 6.4
The changes in market share over time can be seen in Sheet F below.


Figure 6.5

Form of the transition matrix


Although we did not stress it above the procedure we applied to calculate
the long-run state of the system assumes that the transition matrix has
the first set of states (rows) corresponding to the absorbing states, with
the last set of states (rows) corresponding to the non-absorbing states.
In the above four state example the first two states (rows, states 1 and
2) were the absorbing states and the last two states (rows, states 3 and
4) were the non-absorbing states. But suppose we had an example in
which states 1 and 3 were the absorbing states, and states 2 and 4 were
the non-absorbing states? This would not have a transition matrix in the
appropriate form. If you encounter such an example then you need to
relabel the states (and correspondingly adjust the transition matrix) such
that the transition matrix does have the appropriate form.

Estimating the transition matrix


Nowadays many retailers (such as supermarkets) in the UK have their own
‘loyalty cards’ which are swiped through the checkout at the same time
as a customer makes their purchases. These provide a mass of detailed
information from which these retailers (or others) can deduce brand
switching transition matrices:
• Do you think those matrices might be of interest (value) to other
companies or not?
• For example, suppose you were a manufacturer/marketer of
breakfast cereal. How much would you pay a leading supermarket for
continuous, detailed electronic information on cereal brand switching?
How much extra would you pay for exclusive rights with that
supermarket (so your competitors cannot have access to those
data)?
How useful would a continuous flow of such information be to you
to judge the effect of promotional/marketing campaigns?
Now consider how many different products/types of products a
supermarket sells. The data they are gathering on their databases through
the use of loyalty cards can be extremely valuable to them.
Note too that the availability of such data enable more detailed brand
switching models to be constructed. For example, in the cereal problem
we dealt with above, the competition was represented by just one state.


With more detailed data that state could be disaggregated into a number
of different states – maybe one for each competitor’s brand of cereal. If
we have n states then we need n² transition probabilities. Estimating these
probabilities is easy if we have access to a database which tells us from
individual consumer data whether people switched or not, and if so to
what.
Also we could have different models for different segments of the market
– for example, brand switching may be different in rural areas from brand
switching in urban areas. Families with young children would obviously
constitute another important brand switching segment of the cereal
market.
Note here that if we wish to investigate brand switching in a numeric
way then transition probabilities are key. Unless we can get such
numbers nothing numeric is possible.
Consider now how, in the absence of readily available information on
brand switching as gathered by a supermarket (e.g. because we cannot
afford the price the supermarkets are asking for such information), we
might get information as to transition probabilities. One way – indeed this is
how it was done before loyalty cards – is to survey customers individually.
Someone physically stands outside the supermarket and asks shoppers
about their current purchases and their previous purchases. Although this
can be done, it is plainly expensive – particularly if we need to achieve a
reasonable geographic coverage that is regularly updated as time passes.
Both of the above ways of estimating transition matrices – buying
electronic information and manual surveys – cost money. There is,
however, one approach to estimating transition matrices that avoids
any such costs although, as will become apparent below, it does involve
some intellectual effort. This approach involves estimating the transition
probabilities (i.e. the entire transition matrix) from the observed market
shares. We illustrate how this can be done in an example below.
Consider Sheet G in the spreadsheet associated with this chapter. We have
a Solver model associated with this sheet and to use Solver in Excel select
Tools and then Solver. In the version of Excel I am using (different versions
of Excel have slightly different Solver formats) you will get the Solver
model in Sheet G as below:

Spreadsheet 6.5

This sheet is similar to Sheet C where we have brand K and two
competitor brands A and B. The initial (observed) market share for each
brand is given in cells C9 to E9 but we have also entered the observed
market shares for K over 10 time periods as in cells K2 to K11. The
question we need to consider here is what, if cells K2 to K11 give the
observed market shares, would be a good transition matrix to have in cells
C5 to E7?
The Solver model enables us to find a transition matrix that generates the
‘best fit’ between the observed market shares in cells K2 to K11 and the
calculated market shares in cells H2 to H11. A standard approach in such
estimation problems is to minimise squared differences and so the squared
difference between the observed and calculated market shares are given in
cells L2 to L11 and the sum of these differences in cell L12.
In the Solver model shown in Sheet G we are minimising the value in cell
L12 (ignore the use of $ signs here – that is a technical Excel issue that
need not concern us). We can change the values in cells C5 to D7 (cells E5
to E7 in the transition matrix being already defined as one minus the row
sum of the other cells in that row). The constraints in the problem are that the
cells C5 to E7 should all be ≥ 0.1. Here 0.1 is an arbitrarily chosen value
to reflect the fact that brand switching is believed to be occurring between
all brands at some minimum level (here 0.1). The values currently given in
cells C5 to D7 are just random values to start the solution process.
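If you do not have Excel Solver to hand, the same 'best fit' idea can be sketched with SciPy's minimize (this assumes SciPy/NumPy are available). The observed market shares below are illustrative values invented for this sketch – they are not the Sheet G figures – but the structure mirrors the Solver model: the first two columns of the matrix are the free variables, the third column is one minus the row sum, and every entry is constrained to be at least 0.1:

```python
# A sketch of the Sheet G 'best fit' idea using SciPy's minimize in place
# of Excel Solver. The observed shares are invented for illustration.
import numpy as np
from scipy.optimize import minimize

s0 = np.array([0.45, 0.30, 0.25])   # initial shares for K, A, B
observed_K = [0.47, 0.49, 0.50, 0.51, 0.52, 0.53, 0.53, 0.54, 0.54, 0.55]

def build_matrix(v):
    """First two columns from the six free variables; third = 1 - row sum."""
    P = np.empty((3, 3))
    P[:, 0], P[:, 1] = v[0::2], v[1::2]
    P[:, 2] = 1.0 - P[:, 0] - P[:, 1]
    return P

def sse(v):
    """Sum of squared differences between observed and calculated K shares."""
    P, s, total = build_matrix(v), s0.copy(), 0.0
    for obs in observed_K:
        s = s @ P                   # s_t = s_{t-1} P
        total += (s[0] - obs) ** 2
    return total

v0 = np.full(6, 0.3)                # arbitrary feasible starting values
bounds = [(0.1, 0.8)] * 6           # every free cell at least 0.1
# The implied third column must also be at least 0.1, i.e. the two free
# cells in each row must sum to at most 0.9
cons = [{"type": "ineq", "fun": lambda v, i=i: 0.9 - v[2 * i] - v[2 * i + 1]}
        for i in range(3)]

res = minimize(sse, v0, method="SLSQP", bounds=bounds, constraints=cons)
print(build_matrix(res.x).round(3))
print("sum of squared differences:", round(res.fun, 6))
```

As with Solver, this is a local search from the chosen starting values: the matrix it returns 'best fits' the observed shares under the stated constraints, and need not equal the brand-switching probabilities we would measure by surveying customers.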
Solving we get the following result:

Spreadsheet 6.6
This indicates that the transition matrix shown in C5 to E7 is the ‘best’
transition matrix we can find that explains the observed market shares.
Be clear here – what we have done above is to find, in a logically consistent
and systematic fashion, a transition matrix that ‘best fits’ the observed market
shares – that transition matrix may (or may not) correspond to the
transition probabilities which we would find were we to survey customers
or gather electronic information in the real world.
However, the transition matrix we have derived above may give us further
insight into the situation – we can see for example that in this ‘best fit’
customers of company K appear exceptionally loyal (80 per cent remaining
with K at each transition) and this is an insight that we may not have
gained had we just looked at the observed market shares.

Comment
Any problem for which a state-transition diagram can be drawn can
be analysed using the approach given above. The advantages and
disadvantages of using Markov theory include:
• Markov theory is simple to apply and understand
• sensitivity calculations (i.e. ‘what-if’ questions) are easily carried out
• Markov theory gives us an insight into changes in the system over time
• P may be dependent upon the current state of the system. If P is
dependent upon both time and the current state of the system (i.e. P is a
function of both t and st), then the basic Markov equation becomes
st = st−1 P(t−1, st−1)
• Markov theory is only a simplified model of a complex decision
making process.

Applications
• Population modelling studies (where we have objects which ‘age’) are
an interesting application of Markov processes. One example of this
would be modelling the car market as a Markov process to forecast the
‘need’ for new cars as old cars naturally die off.
• Another example would be to model the clinical progress of patients in
hospital as a Markov process and see how their progress is affected by
different treatment regimes.

Activity
Can you think of a number of different applications of Markov processes? (Hint: any
problem for which you can draw a state-transition diagram can be analysed using Markov
processes.)

Links to other chapters


The topics considered in this chapter do not directly link to other
chapters in this subject guide. More generally the topics considered here
link to other topics where probability (stochastic elements) play a role in
decision making, so this chapter links to Chapters 4 and 11.

Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title Anderson (page number)
Benefit of health care services Chapter 18, 2
Managing credit card credit limits in Bank One Chapter 18, 17

A reminder of your learning outcomes


Having completed this chapter, and the Essential reading and activities,
you should be able to:
• draw a state-transition diagram
• calculate the state of the system at any time period
• calculate the long-run system state (both for systems involving no
absorbing states and for systems involving absorbing states).

Sample examination questions


For Sample examination questions relating to the material presented in
this chapter please visit the VLE.


Chapter 7: Mathematical programming – formulation

Essential reading
Anderson, Chapter 2, sections start–2.1; Chapter 4 (formulations only);
Chapter 15, sections start–15.1, 15.3 and 15.4 (formulations only).

Aims of the chapter


The aims of this chapter are to:
• present a clear approach for moving from a verbal description of a
decision problem to a mathematical description
• illustrate this approach for a number of example decision problems.

Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• use a structured approach to formulating a decision problem as a
mathematical program
• formulate a decision problem either as a linear program, or as an
integer program, or as a mixed-integer program, as appropriate.

Introduction
You will recall that in Chapter 1 we gave a formulation (a precise
mathematical statement) of the Two Mines problem as a linear program.
In the real world, there are a number of application areas that have
problems which can also be formulated as linear programs. You will see
some of these areas mentioned in this chapter.
Linear programming is one of the most used OR techniques. This is
because it is a generic technique, not just applicable in one specific
problem area, but applicable across a range of problem areas.
We hope that you will come to realise that, although the formulation of
linear programs can be demanding, the benefits to be gained are high –
both in terms of a clear understanding/exposition of the problem and in
terms of what can be achieved numerically (i.e. the problem can be solved
numerically to give the ‘best possible’ answer). For these reasons we would
urge you to persevere with this chapter, even if you initially find some of
the mathematics daunting.
This chapter concentrates upon the formulation of mathematical programs
– specifically linear programs and integer programs. Integer programs are
very like linear programs except that some of the variables are restricted to
have integer (discrete) values.

Overview
You will recall from the Two Mines example that the conditions for a
mathematical model to be a linear program (LP) were:


1. all variables continuous (i.e. can take fractional values)
2. a single objective (minimise or maximise)
3. the objective and constraints are linear (i.e. any term is either a
constant or a constant multiplied by an unknown).
LPs are important; this is because:
1. many practical problems can be formulated as LPs
2. there exists an algorithm (called the simplex algorithm) which enables
us to solve LPs numerically relatively easily.
We will return in Chapter 8 to the simplex algorithm for solving LPs but for
the moment we will concentrate upon formulating LPs.
Some of the major application areas to which LP can be applied are:
• blending
• oil refinery management
• production/factory planning
• financial planning.
We consider below some specific examples of the types of problem that
can be formulated as LPs. Note here that the key to formulating LPs is
practice. You can get some practice for yourself by trying some of the
problems in Anderson. A useful hint is that common objectives for LPs are
minimise cost/maximise profit.

Blending problem
Consider the example of a manufacturer of animal feed who is producing
feed mix for dairy cattle. In our simple example the feed mix contains two
active ingredients and a filler to provide bulk. One kilogram (kg) of feed
mix must contain a minimum quantity of each of four nutrients as below:

Nutrient                A    B    C    D
Minimum amount (gram)   90   50   20   2
The ingredients have the following nutrient values and cost:

                         A     B    C    D   Cost/kg
Ingredient 1 (gram/kg)  100    80   40   10    40p
Ingredient 2 (gram/kg)  200   150   20    –    60p
What should be the amounts of active ingredients and filler in one kg of
feed mix?

Blending problem formulation

Variables
In order to solve this problem it is best to think in terms of one kg of feed
mix. That kg is made up of three parts – ingredient 1, ingredient 2 and
filler so let:
x1 = amount (kg) of ingredient 1 in one kg of feed mix
x2 = amount (kg) of ingredient 2 in one kg of feed mix
x3 = amount (kg) of filler in one kg of feed mix
where x1 ≥ 0, x2 ≥ 0 and x3 ≥ 0.


Constraints
a. Balancing constraint (an implicit constraint due to the definition of
the variables)
x1 + x2 + x3 = 1
which says that one kg of feed mix must be made up (precisely) from
the two ingredients and filler.
b. Nutrient constraints
100x1 + 200x2 ≥ 90 (nutrient A)
80x1 + 150x2 ≥ 50 (nutrient B)
40x1 + 20x2 ≥ 20 (nutrient C)
10x1 ≥ 2 (nutrient D).
Here for nutrient A, for example, 100x1 + 200x2 is the number of grams
of nutrient A we have in one kg of feed mix when we blend together x1
kilograms of ingredient one and x2 kilograms of ingredient two.
Note the use of an inequality rather than an equality in these
constraints, following the rule we put forward in the Two Mines
example that, given a choice between an equality and an
inequality choose the inequality. Here this implies that the
nutrient levels we want are lower limits on the amount of nutrient in
one kg of feed mix.

Objective
The objective is to minimise cost, that is:
minimise 40x1 + 60x2
which gives us our complete LP model for the blending problem.
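For interest, this LP is small enough to solve numerically in a few lines; the sketch below uses SciPy's linprog (assuming SciPy is available – the guide itself solves such models in Excel). linprog works with ≤ constraints, so each ≥ nutrient constraint is negated:

```python
from scipy.optimize import linprog

# Cost (pence/kg): 40 for ingredient 1, 60 for ingredient 2, filler free
c = [40, 60, 0]
# Nutrient constraints are 'at least', so multiply through by -1
A_ub = [[-100, -200, 0],    # nutrient A: 100x1 + 200x2 >= 90
        [-80,  -150, 0],    # nutrient B:  80x1 + 150x2 >= 50
        [-40,  -20,  0],    # nutrient C:  40x1 +  20x2 >= 20
        [-10,   0,   0]]    # nutrient D:  10x1         >=  2
b_ub = [-90, -50, -20, -2]
A_eq = [[1, 1, 1]]          # balancing: x1 + x2 + x3 = 1
b_eq = [1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3)
print([round(x, 4) for x in res.x], round(res.fun, 2))
```

Solving gives x1 ≈ 0.3667, x2 ≈ 0.2667 and filler x3 ≈ 0.3667, at a cost of about 30.67p per kg; the constraints for nutrients A and C are the binding ones.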
Obvious extensions/uses for this LP model include:
• increasing the number of nutrients considered
• increasing the number of possible ingredients considered – more
ingredients can never increase the overall cost (other things being
unchanged), and may lead to a decrease in overall cost
• placing both upper and lower limits on nutrients
• dealing with cost changes
• dealing with supply difficulties
• filler cost.
Blending problems of this type were, in fact, some of the earliest
applications of LP (for human nutrition during rationing) and are still
widely used in the production of animal feedstuffs.

Activity
Suppose now there is a fifth nutrient E, where ingredient 1 contains five gram/kg of E
and ingredient 2 contains 15 gram/kg of E. The amount of E in the final feed mix must lie
between one and three grams. What would the formulation of the problem now be?

Production planning problem


A company manufactures four variants of the same product and, in the
final part of the manufacturing process, there are assembly, polishing and
packing operations. For each variant the time required for these operations
is shown below (in minutes) as is the profit per unit sold.


             Assembly   Polish   Pack   Profit (£)
Variant 1        2         3       2       1.50
Variant 2        4         2       3       2.50
Variant 3        3         3       2       3.00
Variant 4        7         4       5       4.50
Table 7.1
• Given the current state of the labour force the company estimates
that, each year, they have 100,000 minutes of assembly time, 50,000
minutes of polishing time and 60,000 minutes of packing time
available. How many of each variant should the company make per
year and what is the associated profit?
• Suppose now that the company is free to decide how much time
to devote to each of the three operations (assembly, polishing and
packing) within the total allowable time of 210,000 (= 100,000 +
50,000 + 60,000) minutes. How many of each variant should the
company make per year and what is the associated profit?

Activity
Consider these two problems by yourself for 10 minutes. What answers do you come up
with? What are the associated profits? Write your answers here for later reference.

Production planning formulation

Variables
Let:
xi be the number of units of variant i (i = 1, 2, 3, 4) made per year
Tass be the number of minutes used in assembly per year
Tpol be the number of minutes used in polishing per year
Tpac be the number of minutes used in packing per year
where xi ≥ 0 i = 1,2,3,4 and Tass, Tpol, Tpac ≥ 0.

Constraints
a. Operation time definition
Tass = 2x1 + 4x2 + 3x3 + 7x4 (assembly)
Tpol = 3x1 + 2x2 + 3x3 + 4x4 (polish)
Tpac = 2x1 + 3x2 + 2x3 + 5x4 (pack)
b. Operation time limits: the operation time limits depend upon the
situation being considered. In the first situation, where the maximum
time that can be spent on each operation is specified, we simply have:
Tass ≤ 100,000 (assembly)
Tpol ≤ 50,000 (polish)
Tpac ≤ 60,000 (pack)
In the second situation, where the only limitation is on the total time spent
on all operations, we simply have:
Tass + Tpol + Tpac ≤ 210,000 (total time)


Objective
The objective presumably is to maximise profit, hence we have:
maximise 1.5x1 + 2.5x2 + 3.0x3 + 4.5x4
which gives us the complete formulation of the problem.
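Both situations can be solved numerically with SciPy's linprog (assuming SciPy is available); linprog minimises, so the profit coefficients are negated, and for compactness the operation-time variables Tass, Tpol and Tpac are substituted out using their definitions:

```python
from scipy.optimize import linprog

profit   = [1.5, 2.5, 3.0, 4.5]
assembly = [2, 4, 3, 7]
polish   = [3, 2, 3, 4]
pack     = [2, 3, 2, 5]

# Situation 1: separate limits on each operation (variables default to >= 0)
res1 = linprog([-p for p in profit],
               A_ub=[assembly, polish, pack],
               b_ub=[100000, 50000, 60000])
print([round(x) for x in res1.x], -res1.fun)

# Situation 2: a single limit on total time; substituting the operation time
# definitions gives 7x1 + 9x2 + 8x3 + 16x4 <= 210,000
total = [a + p + q for a, p, q in zip(assembly, polish, pack)]
res2 = linprog([-p for p in profit], A_ub=[total], b_ub=[210000])
print([round(x) for x in res2.x], -res2.fun)
```

The first situation gives 16,000 units of variant 2 and 6,000 units of variant 3, for a profit of £58,000 (polishing and packing time are both fully used); the second gives 26,250 units of variant 3 alone, for £78,750 – being free to split the total time between operations raises the achievable profit considerably.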

Factory planning problem


Under normal working conditions a factory produces up to 100 units of
a certain product in each of four consecutive time periods at costs which
vary from period to period as shown in the table below.
Additional units can be produced by overtime working. The maximum
quantity and costs are shown in the table below, together with the forecast
demands for the product in each of the four time periods.
Time period   Demand (units)   Normal production   Overtime production   Overtime production
                               cost (£K/unit)      capacity (units)      cost (£K/unit)
     1             130                6                    60                     8
     2              80                4                    65                     6
     3             125                8                    70                    10
     4             195                9                    60                    11
Table 7.2
It is possible to hold up to 70 units of product in store from one period to
the next at a cost of £1.5K per unit per period. (This figure of £1.5K per
unit per period is known as a stock-holding cost and represents the fact
that we are incurring costs associated with the storage of stock.)
We need to determine the production and storage schedule which will
meet the stated demands over the four time periods at minimum cost
given that at the start of period 1 we have 15 units in stock.

Activity
Consider this problem by yourself for a while. How easy do you think it would be to arrive
at a good (minimum cost) production and storage schedule?

Factory planning formulation

Variables
The decisions that need to be made relate to the amount to produce in
normal/overtime working each period. Hence let:
xt = number of units produced by normal working in period t
(t = 1, 2, 3, 4), where xt ≥ 0
yt = number of units produced by overtime working in period t
(t = 1, 2, 3, 4) where yt ≥ 0
In fact, for this problem, we also need to decide how much stock we carry
over from one period to the next so let:
It = number of units in stock at the end of period t (t = 0, 1, 2, 3, 4)

Constraints
• production limits
xt ≤ 100 t = 1, 2, 3, 4
y1 ≤ 60
y2 ≤ 65
y3 ≤ 70
y4 ≤ 60
• limit on space for stock carried over
It ≤ 70 t = 1, 2, 3, 4
• we have an inventory continuity equation of the form
closing stock = opening stock + production – demand
then assuming
opening stock in period t = closing stock in period t-1 and
that production in period t is available to meet demand in period t
we have that
I1 = I0 + (x1 + y1) – 130
I2 = I1 + (x2 + y2) – 80
I3 = I2 + (x3 + y3) – 125
I4 = I3 + (x4 + y4) – 195
where I0 = 15.
Note here that inventory continuity equations of the type shown above are
common in production planning problems involving more than one time
period. Essentially the inventory variables (It) and the inventory continuity
equations link together the time periods being considered and represent a
physical accounting for stock.
• demand must always be met (i.e. no ‘stock-outs’). This is equivalent to
saying that the opening stock in period t plus the production in period
t must be greater than (or equal to) the demand in period t, i.e. we
have the constraints:
I0 + (x1 + y1) ≥ 130
I1 + (x2 + y2) ≥ 80
I2 + (x3 + y3) ≥ 125
I3 + (x4 + y4) ≥ 195.
However, these constraints can be viewed in another way. Considering the
inventory continuity equations above these constraints which ensure that
demand is always met can be rewritten as:
I1 ≥ 0
I2 ≥ 0
I3 ≥ 0
I4 ≥ 0.

Objective
To minimise cost – which consists of the cost of ordinary working plus the
cost of overtime working plus the cost of carrying stock over (1.5K per
unit). Hence the objective is:
minimise
(6x1 + 4x2 + 8x3 + 9x4) + (8y1 + 6y2 + 10y3 + 11y4) + (1.5I0 + 1.5I1 +
1.5I2 + 1.5I3 + 1.5I4)
Note here that we have assumed that if we get an answer involving
fractional variable values this is acceptable (as the number of units

required each period is reasonably large, this should not cause too many
problems).
Note:
• As discussed above, assuming It ≥ 0 (t = 1, 2, 3, 4) means ‘no stock-
outs’ (i.e. we need a production plan in which sufficient is produced to
ensure that demand is always satisfied).
• Allowing It (t = 1, 2, 3, 4) to be unrestricted (positive or negative)
means that we may end up with a production plan in which demand is
unsatisfied in period t (It < 0). This unsatisfied demand will be carried
forward to the next period (when it will be satisfied if production is
sufficient, carried forward again otherwise).
• If It is allowed to be negative then we need to amend the objective to
ensure that we correctly account for stock-holding costs (and possibly
to account for stock-out costs).
• If we get a physical loss of stock over time (e.g. due to damage,
pilferage, etc) then this can be easily accounted for. For example if we
lose (on average) 2 per cent of stock each period then multiply the
right-hand side of the inventory continuity equation by 0.98. If this is
done then we often include a term in the objective function to account
financially for the loss of stock.
• If production is not immediately available to meet customer demand
then the appropriate time delay can be easily incorporated into the
inventory continuity equation. For example a 2 period time delay for
the problem dealt with above means replace (xt + yt) in the inventory
continuity equation for It by (xt-2 + yt‑2).
In practice we would probably deal with the situation described above on
a ‘rolling horizon’ basis in that we would get an initial production plan
based on current data and then, after one time period (say), we would
update our LP and re-solve to get a revised production plan. In other words,
even though we plan for a specific time horizon, here four months, we
would only ever implement the plan for the first month, so that we are
always adjusting our four month plan to take account of future conditions
as our view of the future changes.
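The complete model can be handed to SciPy's linprog (assuming SciPy is available). The sketch below orders the variables as x1..x4, y1..y4, I1..I4, builds the four inventory continuity equations in a loop, and encodes the production, overtime and storage limits as variable bounds (with lower bounds of zero enforcing 'no stock-outs'):

```python
from scipy.optimize import linprog

demand = [130, 80, 125, 195]
I0 = 15
# Costs: normal production, overtime production, stock-holding (the constant
# 1.5*I0 term in the objective is added back at the end)
c = [6, 4, 8, 9] + [8, 6, 10, 11] + [1.5] * 4

# Inventory continuity: I_t = I_{t-1} + (x_t + y_t) - demand_t, with I_0 = 15
A_eq, b_eq = [], []
for t in range(4):
    row = [0.0] * 12
    row[t] = -1.0              # -x_t
    row[4 + t] = -1.0          # -y_t
    row[8 + t] = 1.0           # +I_t
    if t > 0:
        row[8 + t - 1] = -1.0  # -I_{t-1}
    A_eq.append(row)
    b_eq.append(-demand[t] + (I0 if t == 0 else 0))

bounds = ([(0, 100)] * 4                          # normal production limits
          + [(0, 60), (0, 65), (0, 70), (0, 60)]  # overtime limits
          + [(0, 70)] * 4)                        # stock: 0 <= I_t <= 70

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x.round(1))
print("minimum total cost (including 1.5*I0):", round(res.fun + 1.5 * I0, 1))
```

One optimal plan is full normal production every period (100 units), overtime of 15, 50, 0 and 50 units in periods 1–4, and stocks I1..I4 of 0, 70, 45 and 0, for a total cost of £3,865K; note that the storage limit I2 ≤ 70 is binding.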

Integer programming
Activity/Reading
For this section read Anderson, Chapter 15, sections start–15.1, 15.3 and 15.4
(formulations only).

When formulating LPs we often find that, strictly, certain variables
should have been regarded as taking integer values but, for the sake
of convenience, we let them take fractional values reasoning that the
variables were likely to be so large that any fractional part can be
neglected. While this is acceptable in some situations, in many cases it
is not, and in such cases we must find a numeric solution in which the
variables take integer values.
Problems in which this is the case are called integer programs
(IPs) and the subject of solving such programs is called integer
programming (also referred to by the initials IP).


IPs occur frequently because many decisions are essentially discrete (such
as yes/no, go/no-go) in that one (or more) options must be chosen from a
finite set of alternatives.
Note here that problems in which some variables can take only integer
values and some variables can take fractional values are called mixed-
integer programs (MIPs).
As with formulating LPs, the key to formulating IPs is practice. Although
there are a number of standard ‘tricks’ available to cope with situations
that often arise in formulating IPs it is probably true to say that
formulating IPs is a much harder task than formulating LPs.
We consider an example integer program below.

Capital budgeting problem


There are four possible projects, which each run for three years and have
the following characteristics.
                          Capital requirements (£m) per year
Project   Return (£m)    Year 1    Year 2    Year 3
   1          0.2          0.5       0.3       0.2
   2          0.3          1.0       0.8       0.2
   3          0.5          1.5       1.5       0.3
   4          0.1          0.1       0.4       0.1
Available capital (£m)     3.1       2.5       0.4
Table 7.3
For example project 1 gives a return of 0.2 but requires capital of 0.5 in Year
1, 0.3 in Year 2 and 0.2 in Year 3. The total capital available in each year to
be committed to projects is 3.1 in Year 1, 2.5 in Year 2 and 0.4 in Year 3.
We have a decision problem here: Which projects would you choose in
order to maximise the total return?
Plainly as all the projects give a positive return we would prefer to do all
four – but the capital available makes this impossible (e.g. consider how
much capital you would need in year 3 if you were to undertake all four
projects).

Capital budgeting problem formulation


We follow the same approach as we used for formulating LPs – namely:
• variables
• constraints
• objective.
We do this below; note that the only significant change in formulating IPs as
opposed to formulating LPs is in the definition of the variables.

Variables
Here we are trying to decide whether to undertake a project or not (a ‘go/
no-go’ decision). One ‘trick’ in formulating IPs is to introduce variables
which take the integer values 0 or 1 and represent binary decisions
(e.g. do a project or not do a project) with typically:
• the positive decision (do something) being represented by the value 1
• the negative decision (do nothing) being represented by the value 0.


Such variables are often called zero-one or binary variables.


To define the variables we use the verbal description of:
xj = 1 if we decide to do project j (j = 1,...,4)
= 0 otherwise, i.e. not do project j (j = 1,...,4)
Note here that, by definition, the xj are integer variables which must take
one of two possible values (zero or one).

Constraints
The constraints relating to the availability of capital funds each year are:
0.5x1 + 1.0x2 + 1.5x3 + 0.1x4 ≤ 3.1 (Year 1)
0.3x1 + 0.8x2 + 1.5x3 + 0.4x4 ≤ 2.5 (Year 2)
0.2x1 + 0.2x2 + 0.3x3 + 0.1x4 ≤ 0.4 (Year 3).

Objective
To maximise the total return – hence we have:
maximise 0.2x1 + 0.3x2 + 0.5x3 + 0.1x4.
This gives us the complete IP which we write as:

maximise 0.2x1 + 0.3x2 + 0.5x3 + 0.1x4
subject to: 0.5x1 + 1.0x2 + 1.5x3 + 0.1x4 ≤ 3.1
0.3x1 + 0.8x2 + 1.5x3 + 0.4x4 ≤ 2.5
0.2x1 + 0.2x2 + 0.3x3 + 0.1x4 ≤ 0.4
xj = 0 or 1 j = 1,...,4.
Note:
• In writing down the complete IP we include the information that xj =
0 or 1 (j = 1,...,4) as a reminder that the variables are integers.
• You can see the usefulness of defining the variables to take zero-one
values: in the objective the term 0.2x1 is zero if x1 = 0 (as we want,
since there is no return from project 1 if we do not do it) and 0.2 if
x1 = 1 (again as we want, since we get a return of 0.2 if we do project
1). Hence the zero-one nature of the decision variable means that the
single term 0.2x1 captures what happens both when we do the project and
when we do not.
• You will note that the objective and constraints are linear (i.e. any
term in the constraints/objective is either a constant or a constant
multiplied by an unknown). In this chapter we deal only with linear
integer programs (IPs with a linear objective and linear constraints). It
is plain though that there do exist non-linear integer programs – these
are, however, outside the scope of this chapter.
• Whereas previously, when formulating LPs, we assumed that any
fractional parts in the values of integer variables could safely be
ignored, it is clear that we cannot do so in this problem. For example,
what would be the physical meaning of a numeric solution with x1 = 0.4975?
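Because each xj can only take the value 0 or 1, this small IP has just 2^4 = 16 possible decisions, so its optimal solution can be sanity-checked by brute-force enumeration. The sketch below does this in Python; this is purely illustrative (it is not the method of this subject guide, and any language would do), and a small tolerance is added to the budget comparisons to guard against floating-point rounding:

```python
from itertools import product

# Capital required (in money units) by each project in years 1-3, the
# return on each project, and the available capital in each year.
cost = {1: [0.5, 0.3, 0.2], 2: [1.0, 0.8, 0.2],
        3: [1.5, 1.5, 0.3], 4: [0.1, 0.4, 0.1]}
ret = {1: 0.2, 2: 0.3, 3: 0.5, 4: 0.1}
budget = [3.1, 2.5, 0.4]

best_value, best_x = -1.0, None
for x in product([0, 1], repeat=4):      # all 16 zero-one decisions
    spend_ok = all(
        sum(cost[j + 1][y] * x[j] for j in range(4)) <= budget[y] + 1e-9
        for y in range(3))               # small tolerance for rounding
    value = sum(ret[j + 1] * x[j] for j in range(4))
    if spend_ok and value > best_value:
        best_value, best_x = value, x

print(best_x, best_value)                # best selection and its return
```

Enumeration like this is only practical for a handful of zero-one variables; with n variables there are 2^n combinations, which is precisely why IP solution algorithms are needed for realistic problems.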
Extensions to this basic problem include:
• projects of different lengths
• projects with different start/end dates
• adding capital inflows from completed projects
• projects with staged returns


• carrying capital forward from year to year


• mutually exclusive projects (can have one or the other but not both)
• projects with a time window for the start time.
Activity
For a fifth potential project (using any data of your own choice) what changes are there in
the formulation presented above?

Links to other chapters


The topics considered in this chapter link to Chapters 8 and 10 of this
subject guide. They fall under the general heading of mathematical
programming, where we adopt a mathematical formulation approach to
a decision problem in order to optimise (maximise or minimise) some
objective. These chapters also (where appropriate) present the numeric
solution of the formulation adopted via use of Excel.

Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title – Anderson (page number)
The Kellogg Company – 35
Optimising production planning at Jan de Wit Company, Brazil – 56
Using linear programming for traffic control – 71
Assigning products to worldwide facilities at Eastman Kodak – 87
Evaluating options for the provision of school meals in Chile – 96
The Nutricia dairy and drinks group, Hungary – 114
Tea production and distribution in India – 119
A marketing planning model at Marathon Oil Company – 140
Scheduling the orange harvest in Brazil – 152
Pilot staffing and training at Continental Airlines – 157
A marketing resource allocation model at Reckitt and Coleman – 170
Optimal lease structuring at GE Capital – 175
Revenue management at National Car Rental – 179
Crew scheduling at Air New Zealand – Chapter 18 (online), p.3
Aluminium can production at Valley Metal Container – Chapter 18, p.4
BMW’s global production network – Chapter 18, p.30
Customer order allocation model at Ketron – Chapter 18, p.31
Optimising rental vehicles – www.cmis.csiro.au/or/Clients/thl.htm
Rail crew rostering – www.cmis.csiro.au/or/rostering/railtex.htm


A reminder of your learning outcomes


Having completed this chapter, and the Essential reading and activities,
you should be able to:
• use a structured approach to formulating a decision problem as a
mathematical program
• formulate a decision problem either as a linear program, or as an
integer program, or as a mixed-integer program, as appropriate.

Sample examination questions


For Sample examination questions relating to the material presented in
this chapter please visit the VLE.


Chapter 8: Linear programming – solutions

Essential reading
Anderson, Chapter 2, Chapter 3.

Spreadsheet
lp.xls
• Sheet A: Solution of problem via Solver where time for assembly,
polishing and packing is individually constrained
• Sheet B: Solution of problem via Solver where time for assembly,
polishing and packing is constrained in total.
This spreadsheet can be downloaded from the VLE.

Aims of the chapter


The aims of this chapter are to:
• show how a linear program involving just two variables can be solved
graphically via use of an iso-cost/iso-profit line
• illustrate how linear programs involving more than two variables can
be solved on a computer using Excel and Solver
• demonstrate how sensitivity analysis (looking at the change in solution)
can be approached in linear programming, both using the graphical
solution approach and using the computerised solution approach.

Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• solve Linear Programming problems (LPs) involving two variables
graphically via use of an iso-cost/iso-profit line
• interpret solution output for LPs and use information contained in
such output for the purposes of sensitivity analysis
• explain opportunity cost (reduced cost) and calculate it for any
variable in a LP that has been solved graphically
• explain shadow price and calculate it for any constraint in a LP that
has been solved graphically
• appreciate the areas where large LP problems arise.

Introduction
In this chapter we move from the mathematics we have been considering
in the previous chapter to some numbers, specifically to obtaining numeric
solutions for two of the LPs we formulated previously.
As previously stated, although the formulation of LPs can be
demanding, the benefits to be gained are high – both in terms of a clear
understanding/exposition of the problem and in terms of what can be


achieved numerically (i.e. the problem can be solved numerically to


give the ‘best possible’ answer). So far we have concentrated on the
first of these elements, formulating the problem so as to gain a clear
understanding/exposition of it.
In this chapter we concentrate on the second of these elements, namely
numeric solutions. We discuss the numeric solution of LPs, illustrating how
any LP involving just two variables can be solved graphically and how
to interpret computer solution outputs for LPs involving more than two
variables.
We also give a number of problem areas relating to ‘state of the art’ LP.
We hope that by so doing you will see that LP is an important generic
technique with many diverse applications.

Graphical solution for two variable LPs

Activity/Reading
For this section read Anderson, Chapter 2.

To get some insight into solving LPs, consider the Two Mines problem
again – the LP formulation of the problem was:
minimise 180x + 160y
subject to 6x + 1y ≥ 12
3x + 1y ≥ 8
4x + 6y ≥ 24
x≤5
y≤5
x,y ≥ 0
Since there are only two variables in this LP problem we have the graphical
representation of the LP given below with the feasible region (region of
feasible solutions to the constraints associated with the LP) outlined.

Figure 8.1
To draw the diagram above we turn all inequality constraints into
equalities and draw the corresponding lines on the graph (e.g. the
constraint 6x + y ≥ 12 becomes the line 6x + y = 12 on the graph). Once
a line has been drawn then it is a simple matter to work out which side
of the line corresponds to all feasible solutions to the original
inequality constraint (e.g. all feasible solutions to 6x + y ≥ 12 lie to
the right of the line 6x + y = 12).
We determine the optimal solution to the LP by plotting (180x + 160y) =
K (K constant) for varying K values (iso-profit lines). One such line (180x
+ 160y = 180) is shown dotted on the diagram. The smallest value of K
(remember we are considering a minimisation problem) such that 180x
+ 160y = K goes through a point in the feasible region, is the value of the
optimal solution to the LP (and the corresponding point gives the optimal
values of the variables).
We can see therefore that the optimal solution to the LP occurs at the
vertex of the feasible region formed by the intersection of 3x + y = 8 and
4x + 6y = 24. Note here that it is inaccurate to attempt to read the
values of x and y off the graph and instead we solve the simultaneous
equations:
3x + y = 8
4x + 6y = 24
to get x = 12/7 = 1.71 and y = 20/7 = 2.86 and hence the value of the
objective function is given by 180x + 160y = 180(12/7) + 160(20/7) =
765.71.
Hence the optimal solution has cost 765.71.
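The simultaneous-equation step above is easy to mis-copy, so it is worth checking the arithmetic. A minimal sketch in Python (not part of this course, and purely a check of the algebra; exact fractions are used to avoid rounding):

```python
from fractions import Fraction

# Solve the two binding constraints 3x + y = 8 and 4x + 6y = 24 by
# substitution: y = 8 - 3x gives 4x + 6(8 - 3x) = 24, i.e. 14x = 24.
x = Fraction(24, 14)           # = 12/7
y = 8 - 3 * x                  # = 20/7
cost = 180 * x + 160 * y       # objective value at this vertex

print(float(x), float(y), float(cost))  # 1.714..., 2.857..., 765.714...
```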
With regard to choosing the value of K for the iso-profit line one approach
that can be followed is to take a point on one of the axes that is outside
the feasible region (as we want the iso-profit line outside the feasible
region). In the above example we might chose the point x=1, y=0 as that is
on one of the axes and outside the feasible region. At this point the value
of the objective is 180x + 160y = 180(1) + 160(0) = 180. So to get an iso-
profit line passing through our chosen point of x = 1, y = 0 we use K = 180;
giving the iso-profit line of 180x + 160y =180.

Activity
Look back to Chapter 1 where the Two Mines problem was introduced. At that point you
were asked to find your own answer to the problem and to record it. Compare the answer
you found then with the answer given above.
How much more expensive was your answer (i.e. what is the value of 100 (your answer –
765.71)/765.71)? What do you conclude about the advantages of linear programming?

Activity
Suppose that the costs per day for mine X and mine Y were to change to 200 and 250
respectively. What would be the new optimal solution?

It is clear that the above graphical approach to solving LPs can be used
for LPs with two variables but (alas) most LPs have more than two
variables. This brings us to the simplex algorithm for solving LPs which is
considered below. However, first a word of warning:

Warning
You may be aware that an alternative approach to solving a two-variable
linear program is via the corner point method. For the purposes of this subject
guide, and any examination question relating to this course, use of this corner
point method is not acceptable. We shall always expect you to solve a two-
variable linear program via use of an iso-cost/iso-profit line, not via use of the
corner point method.


Simplex
Note that in the example considered above the optimal solution to the
LP occurred at a vertex (corner) of the feasible region. In fact it is true
that for any LP (not just the one considered above) the optimal solution
occurs at a vertex of the feasible region. This fact is the key to the simplex
algorithm for solving LPs.
Essentially the simplex algorithm starts at one vertex of the feasible region
and moves (at each iteration) to another (adjacent) vertex, improving (or
leaving unchanged) the objective function as it does so, until it reaches the
vertex corresponding to the optimal LP solution.
The simplex algorithm for solving LPs was developed by George Dantzig
in the late 1940s and since then a number of different versions of the
algorithm have been developed. One of these later versions, called the
revised simplex algorithm (sometimes known as the ‘product form of
the inverse’ simplex algorithm) forms the basis of most modern computer
packages for solving LPs.
Although the basic simplex algorithm is relatively easy to understand
and use, it is widely available in the form of computer packages; we
have therefore not set out its details here. Instead we shall focus on
the output from a simplex-based LP package.
Recall the production planning problem concerned with four variants of
the same product which we formulated before as an LP. To remind you of it
we repeat below the problem and our formulation of it.

Production planning problem


A company manufactures four variants of the same product and, in the
final part of the manufacturing process, there are assembly, polishing and
packing operations. For each variant the time required for these operations
is shown below (in minutes) as is the profit per unit sold.

            Assembly   Polish   Pack   Profit (£)
Variant 1      2         3       2       1.50
Variant 2      4         2       3       2.50
Variant 3      3         3       2       3.00
Variant 4      7         4       5       4.50
Table 8.1
• Given the current state of the labour force, the company estimates
that each year they have 100,000 minutes of assembly time, 50,000
minutes of polishing time and 60,000 minutes of packing time
available. How many of each variant should the company make per
year and what is the associated profit?
• Suppose now that the company is free to decide how much time
to devote to each of the three operations (assembly, polishing and
packing) within the total allowable time of 210,000 (= 100,000 +
50,000 + 60,000) minutes. How many of each variant should the
company make per year and what is the associated profit?


Production planning formulation


Variables
Let:
xi be the number of units of variant i (i = 1, 2, 3, 4) made per year
Tass be the number of minutes used in assembly per year
Tpol be the number of minutes used in polishing per year
Tpac be the number of minutes used in packing per year
where xi ≥ 0 i = 1, 2, 3, 4 and Tass, Tpol, Tpac ≥ 0.

Constraints
a. Operation time definition
Tass = 2x1 + 4x2 + 3x3 + 7x4 (assembly)
Tpol = 3x1 + 2x2 + 3x3 + 4x4 (polish)
Tpac = 2x1 + 3x2 + 2x3 + 5x4 (pack)
b. Operation time limits: the operation time limits depend upon the
situation being considered. In the first situation, where the maximum
time that can be spent on each operation is specified, we simply have:
Tass ≤ 100,000 (assembly)
Tpol ≤ 50,000 (polish)
Tpac ≤ 60,000 (pack).
In the second situation, where the only limitation is on the total time spent
on all operations, we simply have:
Tass + Tpol + Tpac ≤ 210,000 (total time).

Objective
The objective presumably is to maximise profit; hence, we have:
maximise 1.5x1 + 2.5x2 + 3.0x3 + 4.5x4
which gives us the complete formulation of the problem.

Excel solution
Activity/Reading
For this section read Anderson, Chapter 3, (computer solutions only).

Below we solve this LP using Solver in Excel. Take the spreadsheet


associated with this chapter and look at Sheet A. You should see the
problem we considered above set out as:

Spreadsheet 8.1
Here the values in cells B2 to B5 are how much of each variant we choose
to make – here set to zero. Cells C6 to E6 give the total assembly/polishing
and packing time used and cell F6 the total profit associated with the
amount we choose to produce.

To use Solver in Excel select Tools and then Solver. In the version of
Excel I am using (different versions of Excel have slightly different Solver
formats) you will get the Solver model as below:

Spreadsheet 8.2
Here our target cell is F6, which we wish to maximise (ignore the use of
$ signs here – they are a technical Excel detail). We can change cells
B2 to B5 – i.e. the amount of each variant we produce – subject to the
constraint that C6 to E6 (the total amount of assembly/polishing/packing
time used) cannot exceed the limits given in C7 to E7.
In order to tell Solver we are dealing with a linear program, click on
Options in the Solver box and you will see:

Spreadsheet 8.3
where both the ‘Assume Linear Model’ and ‘Assume Non-Negative’ boxes
are ticked – indicating we are dealing with a linear model with non-
negative variables.
Solving via Solver the solution is:

Spreadsheet 8.4
We can see that the optimal solution to the LP has value 58,000 (£) and
that Tass = 82,000, Tpol = 50,000, Tpac = 60,000, X1 = 0, X2 = 16,000, X3 =
6,000 and X4 = 0.

This implies that we only produce variants 2 and 3 (a somewhat
surprising result in that we are producing none of variant 4 which had
the highest profit per unit produced).
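Whatever package produced it, a reported solution can always be checked by hand, or with a few lines of any language. The sketch below (illustrative Python, not part of the guide) confirms the Solver figures quoted above:

```python
# Check the Solver solution: X2 = 16,000, X3 = 6,000 (variants 1 and 4
# not produced), against the operation times and profit in the text.
x = [0, 16000, 6000, 0]              # units of variants 1..4
assembly = [2, 4, 3, 7]              # minutes per unit
polish = [3, 2, 3, 4]
pack = [2, 3, 2, 5]
profit = [1.5, 2.5, 3.0, 4.5]        # profit (pounds) per unit

t_ass = sum(a * u for a, u in zip(assembly, x))
t_pol = sum(p * u for p, u in zip(polish, x))
t_pac = sum(p * u for p, u in zip(pack, x))
total = sum(p * u for p, u in zip(profit, x))

# Within the 100,000 / 50,000 / 60,000 minute limits, profit 58,000:
print(t_ass, t_pol, t_pac, total)    # 82000 50000 60000 58000.0
```

Note that a check like this only verifies feasibility and the objective value; it does not prove optimality, which is what the simplex algorithm provides.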

Activity
How can you explain (in words) the fact that it appears that the best thing to do is not to
produce any of the variant with the lowest profit per unit?

Activity
How can you explain (in words) the fact that it appears that the best thing to do is not to
produce any of the variant with the highest profit per unit?

Activity
Suppose you employ extra workers giving you extra time available. Would you assign
these workers to assembly, polishing or packing and if so, why?

Second situation
For the second situation given in the question, where the only limitation
is on the total time spent on all operations examine Sheet B in the
spreadsheet associated with this chapter.
Invoking Solver in that sheet you will see:

Spreadsheet 8.5
where cell C7 is the total amount of processing time used and the only
constraint in Solver relates to that cell not exceeding the limit of 210,000
shown in cell C8. Note here that if you check Options in Solver here you
will see that both the ‘Assume Linear Model’ and ‘Assume Non-Negative’
boxes are ticked.


Solving we get:

Spreadsheet 8.6
We can see that the optimal solution to the LP has value 78,750 (£) and
that Tass = 78,750, Tpol = 78,750, Tpac = 52,500, X1 = 0, X2 = 0, X3 = 26,250
and X4 = 0. This implies that we only produce variant 3.
Note here how much higher the associated profit is than before (£78,750
compared with £58,000, an increase of 36 per cent!). This indicates
that, however the allocation of 100,000, 50,000 and 60,000 minutes for
assembly, polishing and packing respectively was arrived at, it was a bad
decision!
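Both figures quoted above are easy to verify. A short check (again illustrative Python):

```python
# Second situation: making 26,250 of variant 3 uses its 3 + 3 + 2
# minutes per unit, exactly exhausting the pooled 210,000 minutes.
x3 = 26250
total_time = (3 + 3 + 2) * x3            # 210,000 minutes
profit = 3.0 * x3                        # 78,750 pounds

increase = 100 * (profit - 58000) / 58000
print(total_time, profit, round(increase))   # 210000 78750.0 36
```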

Activity
Look back to Chapter 7 where this production planning problem was introduced. At that
point you were asked to find your own answer to the problem and to record it. Compare
the answer you found then with the answer given above.
How much less profitable was your answer (i.e. what is the value of 100(78750 – your
answer)/78750)? What do you conclude about the advantages of LP?

Problem sensitivity
Activity/Reading
For this section read Anderson, Chapter 3, sections 3.1–3.3.

Problem sensitivity refers to how the solution changes as the data change.
Two issues are important here:
• robustness
• planning.
We deal with each of these in turn.

Robustness
In reality, data are never completely accurate and so we would like some
confidence that any proposed course of action is relatively insensitive
(robust) with respect to data inaccuracies. For example, for the production
planning problem dealt with before, how sensitive is the optimal solution
with respect to slight variations in any particular data item?
For example, consider the packing time consumed by variant 3. It is
currently set to exactly 2 minutes, i.e. 2.0000000. But suppose it is really
2.1, what is the effect of this on what we are proposing to do?


What is important here is what you might call ‘the shape of the
strategy’ rather than the specific numeric values. Look at the solution of
value 58,000 we had before. The shape of the strategy there was ‘none
of variant 1 or 4, lots of variant 2 and a reasonable amount
of variant 3’. The aim is that, when we resolve with the figure of 2 for
packing time consumed by variant 3 replaced by 2.1, this general shape
remains the same. We should be concerned if we get a very different shape
(e.g. produce variants 1 and 4 only).
If the general shape of the strategy remains essentially the same under
(small) data changes we say that the strategy is robust.
If we take Sheet A again, change the figure of 2 for packing time
consumed by variant 3 to 2.1 and resolve, we get the following:

Spreadsheet 8.7
This indicates that for these data changes the strategy is robust.

Planning
With regard to planning we may be interested in seeing how the solution
changes as the data change (e.g. over time). For example, for the
production planning problem dealt with before (where the solution was
of value 58,000 involving production of variants 2 and 3) how would
increasing the profit per unit on variant 4 (e.g. by 10 per cent to 4.95 by
raising the price) impact upon the optimal solution?
Again taking Sheet A, making the appropriate change and resolving, we
get:

Spreadsheet 8.8
indicating that if we were able to increase the profit per unit on variant 4
by 10 per cent to 4.95, it would be profitable to make that variant in the
quantities shown above.
There is one thing to note here – namely that we have a fractional solution
X3 = 1,428.571 and X4 = 11,428.57. Recall that we have a LP – for which a
defining characteristic is that the variables are allowed to take fractional
values. Up to now for this production planning problem we had not
seen any fractional values when we solved numerically – here we do. Of
course in reality, given that the numbers are large, there is no practical
significance to these fractions and we can equally well regard the solution
as being a conventional integer (non-fractional) solution such as X3 = 1429
and X4 = 11429.

Approach
The approach taken both for robustness and planning issues is identical,
and is often referred to as sensitivity analysis.
It turns out that, as a by-product of solving a linear program, we
automatically get sensitivity information. This information relates to:


• changing the objective function coefficient for a variable
• forcing a variable which is currently zero to be non-zero
• changing the right-hand side of a constraint.
We deal with each of these in turn, and note here that the analysis
presented below only applies for a single change. If two or more things
change simultaneously then we need to resolve the LP.
To investigate sensitivity take Sheet A as in our original situation and solve
as below:

Spreadsheet 8.9
but where now we have highlighted (clicked on) two of the reports
available – Answer and Sensitivity. Click OK. You will find that two new
sheets have been added to the spreadsheet – an Answer Report and a
Sensitivity Report.
As these reports are indicative of the information that is commonly
available when we solve a LP via a computer we shall deal with each of
them in turn.

Answer report
The answer report can be seen below:

Answer report 8.1


This is the most self-explanatory report. Note that we had three constraints
for total assembly, total polishing and total packing time in our LP. The
assembly time constraint is declared to be ‘Not Binding’ while the other
two constraints are declared to be ‘Binding’. Constraints with a ‘Slack’
value of zero are said to be tight or binding in that they are satisfied
with equality at the LP optimum. Constraints which are not tight are
called loose or not binding.

Sensitivity report
The sensitivity report can be seen below:

Sensitivity report 8.1


This sensitivity report provides us with information relating to:
• changing the objective function coefficient for a variable
• forcing a variable which is currently zero to be non-zero
• changing the right-hand side of a constraint.
We deal with each of these in turn below. We would stress here that
this information is defined with respect to the current LP
optimum. In other words this information cannot be found until we have
solved the underlying LP.

Changing the objective function coefficient for a variable –


Solver solution
To illustrate this, suppose we vary the coefficient of x2 in the objective
function. How will the LP optimal solution change?
Currently X1 = 0, X2 = 16,000, X3 = 6,000 and X4 = 0. The current solution
value for X2 of 16,000 is in cell B3 and the current objective function
coefficient for x2 is 2.5 (here we use capital X to denote the current
solution value and lower-case x to denote the variable). The Allowable
Increase/Decrease columns tell us that, provided the coefficient of x2 in
the objective function lies between 2.5 + 2 = 4.5 and 2.5 − 0.142857143
= 2.3571 (to four decimal places), the values of all of the variables in the
optimal LP solution will remain unchanged. Note though that the actual
optimal solution value will change as the objective function coefficient of
x2 is changing.
In terms of the original problem we are effectively saying that the decision
to produce 16,000 of variant 2 and 6,000 of variant 3 remains optimal even
if the profit per unit on variant 2 is not actually 2.5 (but lies in the range
2.3571 to 4.50). Similar conclusions can be drawn about x1, x3 and x4.

Activity
Perform the same analysis for x1, x3 and x4 as was done for x2 above.

Changing the objective function coefficient for a variable –


manual solution
Suppose though that we did not have access to a linear programming
software package (such as Solver). How then could we calculate the values
that we have simply taken from the Solver output above? Well, the good
news is that we do not expect you to be able to do this.


Forcing a variable which is currently zero to be non-zero –


Solver solution
For the variables, the Reduced Cost column gives us, for each variable
which is currently zero, an estimate of how much the objective function
will change if we make (force) that variable to be non-zero.
Hence we have the table:

Variable                              x1                 x4
Reduced Cost (ignore sign)            1.5                0.2
New value (= or ≥)                    x1 = A or x1 ≥ A   x4 = B or x4 ≥ B
Estimated objective function change   1.5A               0.2B

Table 8.2
The objective function will always get worse (go down if we have a
maximisation problem, go up if we have a minimisation problem) by
at least this estimate. The larger A or B are, the more inaccurate this
estimate is of the exact change that would occur if we were to resolve the
LP with the corresponding constraint for the new value of x1 or x4 added.
If the change is small, however, then it is commonly observed that the
objective function change is exactly the same as the estimate.

Activity
If exactly 100 of variant one were to be produced what would be your estimate of the
new objective function value?

Note here that the value in the Reduced Cost column for a variable is often
called the ‘opportunity cost’ for the variable.
An alternative (and equally valid) interpretation of the
reduced cost is that it is an estimate of the minimum amount
by which the objective function coefficient for a variable
needs to change before that variable will become non-zero.
Hence for variable x1 the objective function needs to change by 1.5
(increase since we are maximising) before that variable becomes non-zero.
In other words, referring back to our original situation, the profit per unit
on variant 1 would need to increase by 1.5 before it would be profitable to
produce any of variant 1. Similarly the profit per unit on variant 4 would
need to increase by 0.2 before it would be profitable to produce any of
variant 4.

Forcing a variable which is currently zero to be non-zero –


manual solution
Suppose though that we did not have access to a linear programming
software package (such as Solver). How then could we calculate the
values that we have simply taken from the Solver output above? Here
the approach adopted is simple. We add to the original linear program a
constraint that says:
variable for which we want the opportunity cost = 1 (or ≥ 1)
and resolve the linear program. The change in solution value will give us
the opportunity cost for that variable.
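To illustrate this re-solve approach, take variant 1 in the production planning problem. If we force x1 = 1 and resolve, the polish and pack constraints happen to remain the binding ones for this particular problem (an assumption we are making here, not a general rule), so the new optimum can be found from the same pair of simultaneous equations. A sketch in illustrative Python:

```python
# Opportunity (reduced) cost of x1, found by re-solving the LP with x1
# fixed. Assuming the polish and pack constraints remain binding:
#   2*x2 + 3*x3 = 50,000 - 3*x1   (polish)
#   3*x2 + 2*x3 = 60,000 - 2*x1   (pack)
def best_profit(x1):
    b1, b2 = 50000 - 3 * x1, 60000 - 2 * x1
    det = 2 * 2 - 3 * 3                  # determinant of the 2x2 system
    x2 = (2 * b1 - 3 * b2) / det         # Cramer's rule
    x3 = (2 * b2 - 3 * b1) / det
    return 1.5 * x1 + 2.5 * x2 + 3.0 * x3

opportunity_cost = best_profit(0) - best_profit(1)
print(opportunity_cost)                  # 1.5, as in the Reduced Cost column
```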
In terms of this subject guide this may be relevant if (for example) you
have a two variable linear program which you have solved graphically. If
in this two variable linear program one of the variables takes the value
zero at the LP optimum then you could adopt this approach to calculate
the reduced cost (opportunity cost) for that variable.
Note here that if a variable takes a value different from
zero in the LP optimum solution then its opportunity cost is
defined to be zero.

Changing the right-hand side of a constraint – Solver solution


For each constraint the column headed Shadow Price tells us exactly how
much the objective function will change if we change the right-hand side
of the corresponding constraint within the limits given in the Allowable
Increase/Decrease columns.
Hence we can form the table:
Constraint                          Assembly   Polish    Pack
Shadow Price (ignore sign)            0          0.8       0.3
Change in right-hand side             a          b         c
Objective function change             0          0.8b      0.3c
Lower limit for right-hand side     82,000     40,000    33,333.3
Current value for right-hand side   100,000    50,000    60,000
Upper limit for right-hand side       −        90,000    75,000
Table 8.3
For example for the polish constraint, provided the right-hand side of
that constraint remains between 50,000 + 40,000 = 90,000 and 50,000 −
10,000 = 40,000, the objective function change will be exactly
0.80 × [change in right-hand side from 50,000].
The direction of the change in the objective function (up or down)
depends upon the direction of the change in the right-hand side of the
constraint and the nature of the objective (maximise or minimise).
To decide whether the objective function will go up or down use:
• constraint more (less) restrictive after change in right-hand side
implies objective function worse (better)
• if objective is maximise (minimise) then worse means down (up),
better means up (down).

Activity
•• If you had an extra 100 minutes, to which operation would you assign them?
•• If you had to take 50 minutes away from polishing or packing, which one would you
choose?
•• What would the new objective function value be in these two cases?

The value in the column headed Shadow Price for a constraint is often
called the ‘marginal value’ or ‘dual value’ for that constraint.

Changing the right-hand side of a constraint – manual solution


Suppose though that we did not have access to a linear programming
software package (such as Solver). How then could we calculate the values
that we have taken from the Solver output above? We simply change the
right-hand side of the constraint for which we want the shadow price by
one unit (either increase by one unit, or decrease by one unit) and resolve
the linear program. The change in solution value will give us the shadow
price for that constraint.
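As an illustration, consider the polish constraint in the production planning problem. At the optimum only variants 2 and 3 are made and the polish and pack constraints are binding, so re-solving after a one-unit change in the polish right-hand side only requires solving two simultaneous equations (the assumption that the same constraints stay binding holds here because one unit is well within the allowable range). A sketch in illustrative Python:

```python
# Shadow price of the polish constraint, found by changing its
# right-hand side by one unit and re-solving. With polish and pack
# binding and only variants 2 and 3 produced:
#   2*x2 + 3*x3 = polish_limit
#   3*x2 + 2*x3 = 60,000  (pack)
def best_profit(polish_limit, pack_limit=60000):
    det = 2 * 2 - 3 * 3                              # = -5
    x2 = (2 * polish_limit - 3 * pack_limit) / det   # Cramer's rule
    x3 = (2 * pack_limit - 3 * polish_limit) / det
    return 2.5 * x2 + 3.0 * x3

shadow_price = best_profit(50001) - best_profit(50000)
print(shadow_price)     # approximately 0.8, matching the Solver report
```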


This may be relevant if (for example) you have a two variable linear program
which you have solved graphically. In this two variable linear program you
could adopt this approach to calculate the shadow price for a constraint. Note
that, for the purposes of this manual approach, we do not expect you to find
the limits within which a shadow price is valid.
Note that, as would seem logical, if the constraint is loose in the
current LP optimum then its shadow price is defined to be zero
(if the constraint is loose a small change in the right-hand side
cannot alter the optimal solution).

Activity
For the Two Mines problem which you solved graphically calculate the shadow prices for
the high, medium and low grade ore constraints.

Comments
Much of the information available as a by-product of the solution of the LP
problem can be useful to management in estimating the effect of changes
(e.g. changes in costs, production capacities, etc.) without going to the
trouble/expense of resolving the LP.
Note that, as mentioned above, the analysis given above relating to how the
LP solution changes is only valid for a single data change. If two (or more)
data changes are made the situation becomes more complex and it becomes
advisable to resolve the LP.

Mathematical programming – further considerations


Most large linear or integer programs are not entered directly into the
computer in the same fashion as in our Excel example. Instead an algebraic
modelling language (such as AMPL or GAMS) is used. This automates
the input of the model by (essentially) the user entering the underlying
mathematics into the computer in a fashion similar to that which they would
write on paper.
Until the 1980s all packages for solving LPs relied upon variants of the
simplex algorithm. However in 1984 Karmarkar published a new algorithm
for LP, called an interior point method, which is completely different from the
simplex algorithm.
Karmarkar’s work has sparked an immense amount of LP research, both
into interior point methods and into improving the simplex algorithm. Most
computer packages for mathematical programming now include both a
simplex algorithm and an interior point algorithm for solving linear programs.
It is worth emphasising here that even if a particular decision problem
involves formulating a LP with many tens of thousands of variables/
constraints, it is still well within the solution range of modern LP software
packages.
Problem areas where large LPs arise are:

Military officer personnel planning


The problem is to plan US Army officer promotions (to Lieutenant, Captain,
Major, Lieutenant Colonel and Colonel), taking into account the people
entering and leaving the Army and training requirements by skill categories
to meet the overall Army force structure requirements (a LP with 21,000
constraints and 43,000 variables).

Chapter 8: Linear programming – solutions

Military patient evacuation


The US Air Force Military Airlift Command (MAC) has a patient evacuation
problem that can be modelled as a LP. They use this model to determine
the flow of patients moved by air from an area of conflict to bases and
hospitals in the continental United States. The objective is to minimise the
time that patients are in the air transport system. The constraints are:
• all patients that need transporting must be transported
• limits on the size and composition of hospitals, staging areas and air
fleet must be observed.
MAC have generated a series of problems based on a number of time
periods (days). A 50-day problem consists of a LP with 79,000 constraints
and 267,000 variables.

Military logistics planning


The US Department of Defense Joint Chiefs of Staff have a logistics
planning problem that models the feasibility of supporting military
operations during a crisis.
The problem is to determine if different materials (called movement
requirements) can be transported overseas within strict time windows.
The LP includes capacities at embarkation and disembarkation ports,
capacities of the various aircraft and ships that carry the movement
requirements and penalties for missing delivery dates.
One problem (using simulated data) that has been solved had 15 time
periods, 12 ports of embarkation, seven ports of debarkation and nine
different types of vehicle for 20,000 movement requirements. This resulted
in an LP with 20,500 constraints and 520,000 variables.

Foreign exchange arbitrage


Foreign exchange transactions where there is switching between
currencies (e.g. from Singapore dollars to Japanese yen to US dollars and
back to Singapore dollars) can be modelled as LPs. This enables the flow
of currency through time for a particular company to be managed
effectively so as to meet foreign currency obligations (e.g. contract
payments that must be made). In addition LP enables opportunities for
arbitrage (making a profit through a sequence of related transactions) to
be quickly identified and exploited.
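The LP formulation itself is beyond our scope here, but the notion of arbitrage can be illustrated with a brute-force sketch: a cycle of conversions is profitable exactly when the product of its rates exceeds 1. All the exchange rates below are made-up numbers, purely for illustration.

```python
import itertools

# Hypothetical exchange rates (illustrative numbers only):
# rate[(a, b)] = units of currency b received per unit of currency a.
rate = {("SGD", "JPY"): 85.0,   ("JPY", "SGD"): 0.0115,
        ("JPY", "USD"): 0.0091, ("USD", "JPY"): 109.0,
        ("USD", "SGD"): 1.35,   ("SGD", "USD"): 0.73}

# A cycle of conversions is an arbitrage opportunity if the product of
# its rates exceeds 1: you finish with more of the starting currency.
profitable = []
for cycle in itertools.permutations(["SGD", "JPY", "USD"]):
    product = 1.0
    for a, b in zip(cycle, cycle[1:] + (cycle[0],)):
        product *= rate[(a, b)]
    if product > 1.0:
        profitable.append((" -> ".join(cycle + (cycle[0],)),
                           round(product, 4)))

print(profitable)
```

Real systems must, of course, also respect transaction costs, limits and timing, which is where the LP formulation earns its keep.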

Airline crew scheduling


Here solving an LP is only the first stage in deciding crew schedules for
commercial aircraft. The problem that has to be solved is actually an
integer programming problem.
Crew scheduling models are key to airline competitive cost advantage
these days, crew costs often being the second largest flying cost after
fuel. We shall examine this problem in greater detail.
Within a fixed airline schedule (the schedule changing twice a year
typically) each flight in the schedule can be broken down into a series
of flight legs. A flight leg comprises a takeoff from a specific airport at
a specific time to the subsequent landing at another airport at a specific
time. For example a flight in the schedule from Chicago O’Hare to London
Heathrow might have two flight legs, from Chicago to JFK New York and
from JFK to Heathrow. A key point is that these flight legs may be flown
by different crews.

MN3032 Management science methods

Typically in a crew scheduling exercise aircraft types have been pre-assigned (not all crews can fly all types) so for a given aircraft type and a
given time period (the schedule repeating over (say) a two-week period)
the problem becomes one of ensuring that all flight legs for a particular
aircraft type have crews assigned. Note here that by crew we mean not
only the pilots/flight crew but also the cabin service staff; typically these
work together as a team and are kept together over a schedule.
As you probably know there are many restrictions on the hours that
crews (pilots and others) can work. These restrictions can be both legal
restrictions and union agreement restrictions. A potential crew schedule
is a series of flight legs that satisfies these restrictions (i.e. a crew could
successfully and legally work the flight legs in the schedule). All such
potential crew schedules can have a cost assigned to them.
Airline companies construct databases with many tens of millions of
potential crew schedules. Note here that we stress the word potential.
We have a decision problem here: out of these tens of millions of potential
crew schedules which shall we choose (so as to minimise cost obviously)
and ensure that all flight legs have a crew assigned to them?
Typically a matrix type view of the problem is adopted, where the rows
of the matrix are the flight legs and the columns the potential crew
schedules, as below.
Crew schedules    1    2    3    etc →
Leg A–B           0    1    1
    B−C           0    1    1
    C−A           0    0    1
    B−D           0    0    0
    A−D           1    0    0
    D−A           1    0    0
etc.

Here a 0 in a column indicates that the flight leg is not part of the crew
schedule, a 1 that the flight leg is part of the crew schedule. Usually a crew
schedule ends up with the crew returning to their home base, e.g. A−D and
D−A in crew schedule 1 above. A crew schedule such as 2 above (A−B and
B−C) typically includes as part of its associated cost the cost of returning
the crew (as passengers) to their base. Such carrying of crew as passengers
(on their own airline or on another airline) is called deadheading.
LP is used as part of the solution process for this crew scheduling problem
for two main reasons:
• A manual approach to crew scheduling problems of this size is
hopeless: you may get a schedule, but its cost is likely to be far from
minimal.
• A systematic approach to minimising cost can result in huge cost
savings (e.g. even a small percentage saving can add up to tens of
millions of dollars).
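On a toy scale the selection problem can be solved by brute force. Everything below — the legs, the four potential schedules and their costs — is invented for illustration; real instances, with tens of millions of columns, need the integer programming machinery described above.

```python
import itertools

# Hypothetical flight legs and potential crew schedules. Each schedule
# lists the legs it covers and a cost (including any deadheading).
legs = {"A-B", "B-C", "C-A", "A-D", "D-A"}
schedules = {1: ({"A-D", "D-A"}, 10),
             2: ({"A-B", "B-C"}, 12),
             3: ({"A-B", "B-C", "C-A"}, 15),
             4: ({"C-A"}, 6)}

best_cost, best_choice = None, None
# Examine every subset of schedules; keep the cheapest one that ensures
# every flight leg has a crew assigned.
for r in range(1, len(schedules) + 1):
    for combo in itertools.combinations(schedules, r):
        covered = set().union(*(schedules[s][0] for s in combo))
        if covered >= legs:
            cost = sum(schedules[s][1] for s in combo)
            if best_cost is None or cost < best_cost:
                best_cost, best_choice = cost, combo

print(best_choice, best_cost)
```

Each schedule here corresponds to one 0/1 column of the matrix view in the text, and each leg to one row that must be covered.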

Summary
To summarise, there are people in the real world with large LP problems
to solve. What appears to be happening currently is that improvements
in solution technology (advances in hardware, software and algorithms)
are making users aware that large problems can be tackled. This in turn
is generating demand for further improvements in solution technology.

Links to other chapters


The topics considered in this chapter link to Chapters 7 and 10 of this
subject guide. They fall under the general heading of mathematical
programming, where we adopt a mathematical formulation approach to
a decision problem in order to optimise (maximise or minimise) some
objective. These chapters also (where appropriate) present the numeric
solution of the formulation adopted via use of Excel.

Case studies
The case studies associated with this chapter are the same as those
associated with Chapter 7. We would encourage you to read them.

A reminder of your learning outcomes


Having completed this chapter, and the Essential reading and activities,
you should be able to:
• solve LPs involving two variables graphically via use of an iso-cost/
iso-profit line
• interpret solution output for LPs and use information contained in
such output for the purposes of sensitivity analysis
• explain opportunity cost (reduced cost) and calculate it for any
variable in a LP that has been solved graphically
• explain shadow price and calculate it for any constraint in a LP that
has been solved graphically
• appreciate the areas where large LP problems arise.

Sample examination questions


For Sample examination questions relating to the material presented in
this chapter please visit the VLE.


Chapter 9: Data envelopment analysis

Essential reading
Anderson, Chapter 4, Section 4.6.

Spreadsheet
dea.xls
• Sheet A: Calculation of efficiency using Solver.
• Sheet B: Calculation of efficiency using Solver with a value judgement
constraint added.
This spreadsheet can be downloaded from the VLE.

Aims of the chapter


The aims of the chapter are to:
• discuss how efficiency can be defined by consideration of input and
output measures
• show how efficiency can be calculated, in a simple case, by means of a
graphical approach
• discuss how, in more complex cases, efficiency can be calculated by
solving an appropriate linear program
• discuss some specific issues that might arise in practical situations
when attempting to determine efficiencies.

Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• compare decision-making units (DMUs) via ratios
• draw the efficient frontier and find the efficiencies of all DMUs for any
example involving two ratios
• state the reference set for an inefficient DMU
• formulate the mathematical problem of finding the efficiency of any
DMU
• discuss the use of value judgements
• discuss starting a data envelopment analysis (DEA) study.

Introduction
Data envelopment analysis (DEA) – occasionally called frontier analysis
– was first put forward by Charnes, Cooper and Rhodes in 1978. It is a
performance measurement technique which, as we shall see, can be used
for evaluating the relative efficiency of decision making units
(DMUs) in organisations.
Examples of units to which DEA has been applied are: banks, police
stations, hospitals, tax offices, prisons, military bases (army, navy, air-
force), schools and university departments. One advantage of DEA is that
it can be applied to non-profit making organisations.

Since the technique was first proposed, much theoretical and empirical
work has been done. Many studies have been published dealing with
applying DEA in real-world situations. We will initially illustrate DEA by
means of a small example.
Much of what you will see below is a graphical (pictorial) approach to
DEA. This is very useful if you are attempting to explain DEA to those
less technically qualified (for example, many in the management world).
However, there is an alternative mathematical approach to DEA that can
be adopted. This is illustrated later below.

Example
Consider a number of bank branches. For each branch we have a single output measure
(number of personal transactions completed) and a single input measure (number of staff).
The data we have are as follows:

Branch Personal transactions (’000s) Number of staff


Croydon 125 18
Dorking 44 16
Redhill 80 17
Reigate 23 11
Table 9.1
For example, for the Dorking branch in one year, there were 44,000 transactions relating
to personal accounts and 16 staff were employed.
How then can we compare these branches and measure their performance using these
data?

Ratios
A commonly used method is ratios. Typically, we take some output
measure and divide it by some input measure. Note the terminology here,
we view branches as taking inputs and converting them (with varying
degrees of efficiency, as we shall see below) into outputs.
For our bank branch example we have a single input measure, the number
of staff, and a single output measure, the number of personal transactions.
Hence we have:

Branch Personal transactions per staff member (’000s)


Croydon 6.94
Dorking 2.75
Redhill 4.71
Reigate 2.09
Table 9.2
Here we can see that Croydon has the highest ratio of personal
transactions per staff member, whereas Reigate has the lowest ratio of
personal transactions per staff member.
As Croydon has the highest ratio of 6.94 we can compare all other
branches to it and calculate their relative efficiency with respect to
Croydon. To do this we divide the ratio for any branch by 6.94 (the value
for Croydon) and multiply by 100 to convert to a percentage. This gives:


Branch Relative efficiency


Croydon 100(6.94/6.94) = 100 per cent
Dorking 100(2.75/6.94) = 40 per cent
Redhill 100(4.71/6.94) = 68 per cent
Reigate 100(2.09/6.94) = 30 per cent
Table 9.3
The other branches do not compare well with Croydon, so are presumably
performing less well. That is, they are relatively less efficient at using
their given input resource (staff members) to produce output (number of
personal transactions).
We could, if we wish, use this comparison with Croydon to set targets for
the other branches.
For example, we could set a target for Reigate of continuing to process the
same level of output but with one less member of staff. This is an example
of an input target as it deals with an input measure.
An example of an output target would be for Reigate to increase the
number of personal transactions by 10 per cent (e.g. by obtaining new
accounts).
In practice, we might well set a branch a mix of input and output targets
which we want it to achieve.
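The ratio and relative-efficiency calculations of Tables 9.2 and 9.3 take only a few lines to reproduce; here is a sketch in Python (a spreadsheet would do equally well).

```python
staff = {"Croydon": 18, "Dorking": 16, "Redhill": 17, "Reigate": 11}
personal = {"Croydon": 125, "Dorking": 44,
            "Redhill": 80, "Reigate": 23}          # '000s

# Output measure divided by input measure (Table 9.2)
ratio = {b: personal[b] / staff[b] for b in staff}
best = max(ratio.values())                          # Croydon's 6.94

# Efficiency relative to the best branch, as a percentage (Table 9.3)
relative = {b: round(100 * ratio[b] / best) for b in staff}
print(relative)
```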

Extending the example


Typically, we have more than one input and one output. For the bank
branch example, suppose now that we have two output measures (number
of personal transactions completed and number of business transactions
completed) and the same single input measure (number of staff) as before.
The data we have are as follows:

Branch    Personal transactions (’000s)    Business transactions (’000s)    Number of staff
Croydon 125 50 18
Dorking 44 20 16
Redhill 80 55 17
Reigate 23 12 11

Table 9.4
For example, for the Dorking branch in one year there were 44,000
transactions relating to personal accounts, 20,000 transactions relating to
business accounts and 16 staff were employed.
How can we compare these branches and measure their performance
using these data?
As before, a commonly used method is ratios, just as in the case
considered before of a single output and a single input. Typically, we take
one of the output measures and divide it by one of the input measures.
For our bank branch example, the input measure is the number of staff
(as before) and the two output measures are the number of personal
transactions and the number of business transactions. Hence we have the
two ratios:


Branch    Personal transactions per staff member (’000s)    Business transactions per staff member (’000s)
Croydon 6.94 2.78
Dorking 2.75 1.25
Redhill 4.71 3.24
Reigate 2.09 1.09
Table 9.5
Here we can see that Croydon has the highest ratio of personal
transactions per staff member, whereas Redhill has the highest ratio of
business transactions per staff member.
Dorking and Reigate do not compare so well with Croydon and Redhill,
so are presumably performing less well. That is, they are relatively less
efficient at using their given input resource (staff members) to produce
outputs (personal and business transactions).
One problem with comparison via ratios is that different ratios can give a
different picture and it is difficult to combine the entire set of ratios into a
single numeric judgement.
For example, consider Dorking and Reigate – Dorking is (2.75/2.09)
= 1.32 times as efficient as Reigate at personal transactions but only
(1.25/1.09) = 1.15 times as efficient at business transactions. How would
you combine these figures into a single statistic?
Activity
Consider the four bank branches given above – which branch would you prefer to be the
manager at and why?

This problem of different ratios giving different pictures would be
especially true if we were to increase the number of branches (and/or
increase the number of input/output measures). For example, given five
extra branches (A to E) with ratios as below, what can be said?

Branch    Personal transactions per staff member (’000s)    Business transactions per staff member (’000s)
Croydon 6.94 2.78
Dorking 2.75 1.25
Redhill 4.71 3.24
Reigate 2.09 1.09
A 1.23 2.92
B 4.43 2.23
C 3.32 2.81
D 3.70 2.68
E 3.34 2.96
Table 9.6
For example – what can you deduce about Branch C from these ratios?

Activity
For each of the nine bank branches shown above, write a single sentence that sums up its
performance but yet is more than just a repetition of the values of the two ratios shown.


Graphical analysis
One way around the problem of interpreting different ratios, at least for
problems involving just two outputs and a single input, is a simple graphical
analysis. Suppose we plot the two ratios for each of our original four branches
as shown below.

Figure 9.1
The positions on the graph represented by Croydon and Redhill demonstrate a
level of performance which is superior to the other two branches. A horizontal
line can be drawn from the vertical axis (y-axis) to Croydon, from Croydon
to Redhill, and a vertical line from Redhill to the horizontal axis (x-axis).
This line is called the efficient frontier (sometimes also referred to as the
efficiency frontier).
The efficient frontier, derived from the examples of best practice contained in
the data we have considered, represents a standard of performance that the
branches not on the efficient frontier could try to achieve.
You can see therefore how the name data envelopment analysis arises –
the efficient frontier envelops (encloses) all the data we have.
However, a number is often easier to interpret than a graph. We say that
any branches on the efficient frontier are 100 per cent efficient (have an
efficiency of 100 per cent). Hence, for our example, Croydon and Redhill have
efficiencies of 100 per cent.
This is not to say that the performance of Croydon and/or Redhill could not
be improved. It may, or may not, be possible to do that. However we can say
that, on the evidence (data) available, we have no idea of the extent to
which their performance can be improved.
It is important to note here that:
• DEA only gives you relative efficiencies – efficiencies relative to the
data considered. It does not, and cannot, give you absolute efficiencies.
• We have used no new information here, merely taken data on inputs and
outputs and presented them in a particular way.
Note too that the statement that a branch has an efficiency of 100 per cent is
a strong statement; namely that we have no other branch that can be said to
be better than it.


Quantifying efficiency scores for inefficient DMUs


Consider now Dorking and Reigate in the figure above. We can see that
with respect to both of the ratios Croydon (for example) dominates both
Dorking and Reigate.
Clearly both Dorking and Reigate are less than 100 per cent efficient. But
how much? Can we assign an appropriate numerical value?
Consider Reigate:
• number of staff 11
• personal transactions (’000s) 23
• personal transactions per staff member (23/11) = 2.09
• business transactions (’000s) 12
• business transactions per staff member (12/11) = 1.09
For Reigate the ratio personal transactions:business transactions =
(23/12) = 1.92 (i.e. there are 1.92 personal transactions for every
business transaction).
By definition this figure of 1.92 is also the ratio of:
personal transactions per staff member:business transactions per staff member.
In other words (2.09/1.09) is also equal to 1.92.
Consider the diagram below. You can see Reigate plotted on it. It can
be shown that any branch with a ratio (personal transactions per staff
member:business transactions per staff member) equal to 1.92 lies on
the straight line from the origin through Reigate. You can see that line
below. If you are geometrically minded then the slope (gradient) of this
line is 1.92 (i.e. there are 1.92 personal transactions for every business
transaction).

Figure 9.2
It might seem reasonable to suggest therefore that the best possible
performance that Reigate could be expected to achieve is given by the
point labelled ‘Best’ in the diagram above. This is the point where the line
from the origin through Reigate meets the efficient frontier.
In other words, ‘Best’ represents a branch that, were it to exist, would have
the same business mix as Reigate and would have an efficiency of 100 per
cent.
Then in DEA we numerically measure the (relative) efficiency of Reigate
by the ratio:
100 (length of line from origin to Reigate/length of line from origin through
Reigate to efficient frontier)


For Reigate this is an efficiency of 36 per cent.


The logic here is to compare the current performance of Reigate (the
length of the line from the origin to Reigate) to the best possible
performance that Reigate could reasonably be expected to achieve (the
length of the line from the origin through Reigate to the efficient frontier).
Performing a similar calculation for Dorking we get an efficiency of 43 per
cent.
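If you prefer calculation to ruler measurement, the line-length ratio can be computed directly. The sketch below assumes (as in the figures) business transactions per staff member on the horizontal axis and personal transactions per staff member on the vertical axis; the `efficiency` helper is ours, invented for illustration.

```python
# Ratios from Table 9.5: (business per staff, personal per staff)
points = {"Croydon": (2.78, 6.94), "Dorking": (1.25, 2.75),
          "Redhill": (3.24, 4.71), "Reigate": (1.09, 2.09)}

cx, cy = points["Croydon"]
rx, ry = points["Redhill"]

def efficiency(x, y):
    """Relative efficiency: the scale factor taking (x, y) along the
    ray from the origin out to the efficient frontier. The ratio of
    line lengths equals the ratio of x coordinates, so no square
    roots are needed."""
    slope = y / x
    if cy / slope <= cx:                 # ray meets the horizontal part
        return 100 * x / (cy / slope)
    seg = (ry - cy) / (rx - cx)          # Croydon-Redhill segment
    x_best = (cy - seg * cx) / (slope - seg)
    if x_best <= rx:                     # ray meets that segment
        return 100 * x / x_best
    return 100 * x / rx                  # ray meets the vertical part

for branch in ("Dorking", "Reigate"):
    print(branch, round(efficiency(*points[branch])), "per cent")
```

Note that the ray from the origin through Reigate crosses the part of the frontier defined by Croydon and Redhill, the two efficient branches with which it is most directly comparable.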

Activity
Take a piece of graph paper, plot on it the four bank branches and measure the
efficiencies of Reigate and Dorking.

Recall the list of ratios with extra branches added given before.

Branch    Personal transactions per staff member (’000s)    Business transactions per staff member (’000s)
Croydon 6.94 2.78
Dorking 2.75 1.25
Redhill 4.71 3.24
Reigate 2.09 1.09
A 1.23 2.92
B 4.43 2.23
C 3.32 2.81
D 3.70 2.68
E 3.34 2.96
Table 9.7
Figure 9.3 shows the same diagram as before but with these five extra
branches (A to E) added as in the above list of ratios. We could easily find
their efficiencies from this diagram.

Figure 9.3
There are two points to note here:
• the above diagram is a lot easier to understand, make sense of, and
interpret, than the list of ratios
• as before, we have not used any new data here, merely looked at the
existing data in a particular way.
This issue of looking at data in a different way is an important practical
issue. Many managers (without any technical expertise) are happy with
ratios. Showing them that their ratios can be viewed differently and used
to obtain new information is often an eye-opener to them.


On a technical issue, note that the scale used for the x-axis and the y-axis
in plotting positions for each branch is irrelevant. Had we used a different
scale above we would have had a different picture, but the efficiencies of
each branch would be exactly the same.
With regard to finding the length of lines that you need to calculate
efficiencies then all we expect here is that you plot the branches on graph
paper to a reasonable accuracy and measure using a ruler. If you are more
mathematically minded then it is possible to use Pythagoras’s theorem to
find line lengths.

Activity
Take a piece of graph paper, plot on it the nine bank branches (our original four plus A to
E) and measure the efficiencies of branches A to E.

Achieving the efficient frontier


The point labelled ‘Best’ on the efficient frontier represents the best
possible performance that Reigate can reasonably be expected to achieve.
Although we have talked above of Reigate varying the number of staff to
achieve ‘Best’, in fact there are a number of ways by which Reigate can
move towards that point. It can:
• reduce its input (number of staff) while keeping its output constant
(an input target)
• increase both its outputs, retaining the current personal:business
(business mix) ratio of 1.92 while keeping its input (number of staff)
constant (an output target)
• do a combination of the above.
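The second option can be made concrete with a little arithmetic: holding staff constant, dividing each output by the current efficiency moves Reigate along the ray to 'Best' while preserving the 1.92 business mix. A sketch, using the 36 per cent efficiency figure calculated earlier:

```python
efficiency = 0.36            # Reigate's relative efficiency, from above
personal, business = 23, 12  # current outputs ('000s)

# Scaling both outputs by 1/efficiency keeps the business mix ratio
# and moves the branch out to 'Best' on the efficient frontier.
personal_target = personal / efficiency
business_target = business / efficiency
print(round(personal_target), round(business_target))   # in '000s
```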
In fact, the same diagram as we used to calculate the efficiency of Reigate
can be used to set targets in a graphical manner. Suppose we say to
Reigate that, in the next time period, their target is to achieve an efficiency
of 40 per cent (i.e. effectively a 10 per cent increase in current efficiency).
We know where, on the line from the origin to Best, a branch with the
same business mix as Reigate but an efficiency of 40 per cent, lies. We can
say to Reigate that their goal is to move from their current position to that
new position, and the combination of input/output changes necessary to
achieve that is left up to them.

Use of the efficiencies


It is important to be clear about the appropriate use of the (relative)
efficiencies we have calculated. Here we have:

Croydon 100 per cent
Dorking 43 per cent
Redhill 100 per cent
Reigate 36 per cent

This does not automatically mean that Reigate is only approximately one-
third as efficient as the best branches. Rather the efficiencies here would
usually be taken as indicative of the fact that other branches are adopting
practices and procedures which, if Reigate were to adopt them, would
enable it to improve its performance.
This naturally invokes issues of highlighting and disseminating examples
of best practice.


Equally there are issues relating to the identification of poor practice.


In DEA the concept of the reference set can be used to identify best
performing branches with which to compare a poorly performing branch.
Consider Reigate above. The Best point associated with Reigate lies on the
efficient frontier. A branch at this point would be the best possible branch
to compare Reigate with as it would have the same business mix. No such
branch exists, however, so we go to the two efficient branches either side
of this Best point. These branches, Croydon and Redhill, are the reference
set for Reigate.
The reference set for any branch with less than 100 per cent efficiency
consists of those branches with 100 per cent efficiency with which it is
most directly comparable. Broadly put this means that the branches in the
reference set have a ‘similar’ mix of inputs and output.
Activity
What other reasons can you think of for the (apparently) low relative efficiency score for
Reigate?

Activity
Consider the diagram above with branches A to E included. What would be the reference
sets for branches A to E?

Exercise
Suppose now that we have an extra branch F included in the analysis with personal
transactions per staff member = 1 and business transactions per staff member = 6. What
changes as a result of this extra branch being included in the analysis?

The effect of this can be seen below:

Figure 9.4
Note that the efficient frontier now excludes Redhill. We do not draw that
efficient frontier from Croydon to Redhill and from Redhill to F for two
reasons:
• mathematically the efficient frontier must be convex
• although we have not seen any branches on the line from Croydon to
F it is assumed in DEA that we could construct virtual branches,
which would be a linear combination of Croydon and F, and which
would lie on the straight line from Croydon to F.
In the above it is clear why Croydon and F have a relative efficiency of 100
per cent (i.e. are efficient); both are the top performers with respect to one
of the two ratios we are considering. The example below, where we have


added a branch G, illustrates that a branch can be efficient even if it is not a top performer. In the diagram below, G is efficient since under DEA it is
judged to have ‘strength with respect to both ratios’, even though it is not
the top performer in either.

Figure 9.5
Note here that in the above diagram the ‘feasible space’ in which a single
branch might lie and still achieve 100 per cent efficiency without being a
top performer in either ratio is quite large. While G above is one such
branch, had G been positioned at any point inside the triangle formed
between the horizontal line through Croydon, the vertical line through F and
the line joining Croydon to F, then it would have had 100 per cent efficiency.

Activity
Take a piece of graph paper, plot on it the 10 bank branches (our original four plus A to F)
and measure the efficiencies of all branches not on the efficient frontier.

Recap
Let us recap what we have done here – we have shown how a simple
graphical analysis of data on inputs and outputs can be used to calculate
efficiencies.
Once such an analysis has been carried out then we can begin to tackle,
with a clearer degree of insight than we had before, issues such as:
• identification of best practice
• identification of poor practice
• target setting
• resource allocation
• monitoring efficiency changes over time.

Input measures and output measures


Key to DEA is being clear whether something is an input measure or an
output measure. If you are confused (say in an assessment/examination
task or even in real life) as to whether something is an input or an output –
a useful rule to use is:
Is this something the organisation inputs? Controls? If yes,
then it is an input (and by implication if no then an output).
Note that as far as DEA is concerned (at the level at which we deal with it)
something is either an input or an output.
Note also that a key distinction between inputs and outputs is:

• if more of something is ‘good’ from an efficiency viewpoint, then it
is an output
• if less of something is ‘good’ from an efficiency viewpoint, then it
is an input.
These relate to the fact that in dealing with ratios in DEA (where we must
have ratios of the form Output/Input) we assume that a larger ratio is
better in terms of efficiency. Be clear here, in real life we can talk of ratios
and understand what they mean, even ratios made up from an input
measure divided by an output measure. But DEA is more restrictive, in
DEA the only ratios that have meaning are output divided by input.

Extending to more inputs/outputs


In our simple example, we had just one input and two outputs. This is
ideal for a simple graphical analysis. If we have more inputs or outputs
then drawing simple pictures is not possible. However, it is still possible
to carry out exactly the same analysis as above but using mathematics
rather than pictures.
In evaluating any number of DMUs with any number of inputs and
outputs, DEA:
• requires the inputs and outputs values for each DMU to be specified
• defines efficiency for each DMU as a weighted sum of outputs [total
output] divided by a weighted sum of inputs [total input]
• requires all efficiencies to be restricted to lie between zero and one
(i.e. between 0 per cent and 100 per cent).
In calculating the numerical value for the efficiency of a particular DMU
weights are chosen so as to maximise its efficiency, thereby presenting the
DMU in the best possible light.
For those of you who are comfortable with mathematics, the mathematics
for the simple four branch example given above is presented below.
To calculate the efficiency of Dorking (for example):
maximise EDorking
subject to
ECroydon = (125Wper + 50Wbus)/(18Wstaff)
EDorking = (44Wper + 20Wbus)/(16Wstaff)
ERedhill = (80Wper + 55Wbus)/(17Wstaff)
EReigate = (23Wper + 12Wbus)/(11Wstaff)
0 ≤ ECroydon ≤ 1
0 ≤ EDorking ≤ 1
0 ≤ ERedhill ≤ 1
0 ≤ EReigate ≤ 1
Wper, Wbus, Wstaff ≥ 0
where:
ECroydon is the efficiency of Croydon (expressed as a fraction)
EDorking is the efficiency of Dorking (expressed as a fraction)
ERedhill is the efficiency of Redhill (expressed as a fraction)
EReigate is the efficiency of Reigate (expressed as a fraction)
Wper is the weight attached to personal transactions


Wbus is the weight attached to business transactions
Wstaff is the weight attached to staff.
You can see here how we associate non-negative weights with each input
and output measure in the equations given above.
To calculate the efficiency of the other branches just change what you
maximise (e.g. maximise ERedhill to calculate the efficiency of Redhill).
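Since there is only one input, nothing is lost by fixing Wstaff = 1: multiplying all weights by a common constant leaves every efficiency ratio unchanged, so the problem above reduces to two variables, Wper and Wbus. The sketch below then finds the optimum by checking corner points of the feasible weight region (the `efficiency` helper is ours, invented for illustration; it is not how DEA software works).

```python
import itertools

# (personal, business, staff) for each branch, from Table 9.4
branches = {"Croydon": (125, 50, 18), "Dorking": (44, 20, 16),
            "Redhill": (80, 55, 17), "Reigate": (23, 12, 11)}

def efficiency(target):
    """Maximum efficiency of `target`, fixing Wstaff = 1. Each
    branch's restriction E <= 1 becomes p*Wper + b*Wbus <= staff,
    a half-plane, so the optimum lies at a corner of the feasible
    weight region."""
    p0, b0, s0 = branches[target]
    cons = [(p, b, s) for p, b, s in branches.values()]
    cons += [(-1, 0, 0), (0, -1, 0)]          # Wper, Wbus >= 0
    best = 0.0
    for (a1, a2, r1), (c1, c2, r2) in itertools.combinations(cons, 2):
        det = a1 * c2 - c1 * a2
        if abs(det) < 1e-12:
            continue                          # no corner point here
        wper = (r1 * c2 - r2 * a2) / det
        wbus = (a1 * r2 - c1 * r1) / det
        if all(a * wper + c * wbus <= r + 1e-9 for a, c, r in cons):
            best = max(best, (p0 * wper + b0 * wbus) / s0)
    return best

for name in branches:
    print(name, round(100 * efficiency(name)), "per cent")
```

The resulting 43 per cent for Dorking and 36 per cent for Reigate agree with the graphical figures quoted earlier.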

Excel
Look at Sheet A (as below) in the spreadsheet associated with this chapter.

Spreadsheet 9.1
Here you can see the data for the four branches and a ‘yes’ in Column F for
the Dorking branch. Currently the efficiencies for the branches make no
sense as the weights for personal/business/staff have been arbitrarily set
to 1, 2 and 3 respectively.
To maximise the efficiency for Dorking we will use Solver in Excel. To use
Solver in Excel select Tools and then Solver.
In the version of Excel I am using (different versions of Excel have slightly
different Solver formats) you will get the Solver model as below:

Spreadsheet 9.2
Here cells B7 to D7 can be changed (ignore the use of $ signs here, which
is a technical Excel issue) and when we change these cells we are trying to
maximise cell H6 – this is set, using the ‘yes’ in column F and the working
column H, to be the efficiency of the branch we wish to maximise. The
constraints are that cells B7 to D7 must be non-negative (greater than or
equal to zero) and also that the efficiencies (G2 to G5) must be less than
or equal to one.
Clicking Solve in this Solver window gives:

Spreadsheet 9.3
which if we click OK (to keep Solver solution) gives:

Spreadsheet 9.4
showing that the maximum efficiency of Dorking is 0.43 (43 per cent);
one set of values for the weights that enables Dorking to achieve this
maximum efficiency is given in cells B7 to D7.
Obviously being able to use a spreadsheet enables us to explore options
– suppose that next year Dorking operates with one less member of staff
than last year, and increases business transactions by 10 per cent (all other
figures remaining unchanged). What would its efficiency change to? The
answer can be seen below:

Spreadsheet 9.5
showing an increase in efficiency to 49 per cent as compared to the
previously calculated value of 43 per cent.

Activity
Use the Excel spreadsheet to calculate the efficiencies for each branch in turn.

Activity
Extend the Excel spreadsheet to incorporate a branch which employs nine staff and
processes 10 (’000) personal transactions and 6 (’000) business transactions. What is the
efficiency of this new branch?

Linear program
The optimisation problem we considered above, both in mathematics and
in Solver, is a non-linear problem. In fact, it can be converted into a linear
programming problem. To do this we:
• algebraically substitute for all efficiency variables, to give an
optimisation problem expressed purely in terms of weights
• introduce an additional constraint setting the denominator of the
objective function equal to one.
Doing this with the above optimisation problem for Dorking we get:
maximise (44Wper + 20Wbus)/(16Wstaff)
subject to (16Wstaff) = 1
0 ≤ (125Wper + 50Wbus)/(18Wstaff) ≤ 1
0 ≤ (44Wper + 20Wbus)/(16Wstaff) ≤ 1
0 ≤ (80Wper + 55Wbus)/(17Wstaff) ≤ 1
0 ≤ (23Wper + 12Wbus)/(11Wstaff) ≤ 1
Wper, Wbus, Wstaff ≥ 0


This is easily made into a linear program. For each of the non-linear
constraints we multiply throughout by the denominator (which is ≥0). So,
for example, the non-linear constraint 0 ≤ (125Wper + 50Wbus)/(18Wstaff)
≤ 1 when multiplied throughout by the denominator (18Wstaff) becomes 0
≤ (125Wper + 50Wbus) ≤ (18Wstaff), which is clearly a linear constraint. The
objective above becomes linear as the denominator has been explicitly set
equal to one. Hence we get the LP:
maximise (44Wper + 20Wbus)
subject to
(16Wstaff) = 1
0 ≤ (125Wper + 50Wbus) ≤ (18Wstaff)
0 ≤ (44Wper + 20Wbus) ≤ (16Wstaff)
0 ≤ (80Wper + 55Wbus) ≤ (17Wstaff)
0 ≤ (23Wper + 12Wbus) ≤ (11Wstaff)
Wper, Wbus, Wstaff ≥ 0
Once this LP has been solved to generate optimal values for the weights
then the efficiency of the branch we are optimising for, Dorking in this
case, can be easily calculated using EDorking = (44Wper + 20Wbus)/(16Wstaff).
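As an illustration of how this LP might be solved outside Excel, here is a sketch using Python's scipy.optimize.linprog (an assumption on tooling – any LP solver would serve equally well). Variables are ordered [Wper, Wbus, Wstaff] and each efficiency constraint is written in the form (weighted outputs) − (weighted inputs) ≤ 0.

```python
from scipy.optimize import linprog

# DEA linear program for Dorking (variables: Wper, Wbus, Wstaff).
# linprog minimises, so negate the objective 44*Wper + 20*Wbus.
c = [-44, -20, 0]

# Efficiency constraints rewritten as (outputs) - (inputs) <= 0.
A_ub = [
    [125, 50, -18],   # Croydon
    [44,  20, -16],   # Dorking
    [80,  55, -17],   # Redhill
    [23,  12, -11],   # Reigate
]
b_ub = [0, 0, 0, 0]

# Normalisation: 16*Wstaff = 1 (Dorking's weighted input equals one).
A_eq = [[0, 0, 16]]
b_eq = [1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")
efficiency_dorking = -res.fun  # denominator is one, so this is E_Dorking
print(round(efficiency_dorking, 2))  # 0.43, as in the Solver solution
```

Because the normalisation constraint fixes the denominator at one, the optimal objective value is itself Dorking's efficiency.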

Value judgements
One thing that can happen in DEA is that inspection of the weights that
are obtained leads to further insight and thought. For example, in our
initial Solver solution above we had a weight Wper associated with personal
transactions of 0.19025 and a weight Wbus associated with business
transactions of 0.93085 – implicitly implying that business transactions
have an importance equal to 0.93085/0.19025 = 4.9 personal transactions.
Now it may be that after considering this ratio of 4.9 that bank
management consider that, as a matter of judgement, business
transactions are much more time consuming/valuable than personal
transactions and as such they would like the weights Wper and Wbus to
satisfy the constraint Wbus/Wper ≥ 6, implying that one business transaction
is worth at least six personal transactions. This constraint is a value
judgement to better reflect the reality of the situation.
We can add this constraint to our Solver model, as below in Sheet B,
where the ratio Wbus/Wper is in cell B9:

Spreadsheet 9.6

Solving we get:

Spreadsheet 9.7
which shows that, at the precision displayed, the efficiency of Dorking is
essentially unaffected by the addition of this value judgement. More
technically Solver has found a set of weights which keep Dorking's
efficiency at (to two decimal places) the previous maximum of 43 per cent
while also satisfying the ratio constraint Wbus/Wper ≥ 6.
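In LP terms the value judgement Wbus/Wper ≥ 6 is simply the extra linear row 6Wper − Wbus ≤ 0. The sketch below (again using scipy.optimize.linprog, an illustrative choice) adds it to Dorking's LP; the optimum comes out at about 0.425 which, at the two decimal places displayed in the spreadsheet, still appears as 0.43.

```python
from scipy.optimize import linprog

# Dorking's DEA LP (variables: Wper, Wbus, Wstaff) with the value
# judgement Wbus >= 6*Wper added as the linear row 6*Wper - Wbus <= 0.
c = [-44, -20, 0]
A_ub = [
    [125, 50, -18],   # Croydon efficiency <= 1
    [44,  20, -16],   # Dorking efficiency <= 1
    [80,  55, -17],   # Redhill efficiency <= 1
    [23,  12, -11],   # Reigate efficiency <= 1
    [6,   -1,   0],   # value judgement: Wbus/Wper >= 6
]
b_ub = [0] * 5
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=[[0, 0, 16]], b_eq=[1],
              bounds=[(0, None)] * 3, method="highs")
print(f"{-res.fun:.3f}")  # about 0.425; displays as 0.43 at two decimals
```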
There is one technical issue here, which illustrates why it is common to
solve DEA problems using linear programming instead of just resorting to
Solver. If you try putting randomly chosen values for the weights and then
using Solver to maximise efficiency it is not too hard to come across the
situation shown below:

Spreadsheet 9.8
signifying that Solver failed to find a solution. Although it is beyond the
scope of this chapter, non-linear programs (such as those considered
by Solver) are notoriously difficult to solve numerically – which is why
treating a DEA problem as a linear program is much preferred. Linear
programs are very easy to solve numerically.

Starting a DEA study


The first step in a DEA study requires the inputs and outputs for each DMU
to be specified. This involves two key conceptual questions, the answers to
which may not be at all obvious.
• What are the DMUs that we wish to evaluate?
DMUs are compared one to another. Hence they must be engaged in
a similar set of operations. For example, it would be silly to compare
a bank branch to a supermarket as they do radically different things.
Even between bank branches there may well be such differences in
the customer base/business operations between branches located in
cities and those located in rural areas that it would be inappropriate to
compare them, one to another.


• What are the inputs and outputs?


By this we mean what conceptually are they, in words. We do not,
at this stage, require numeric values. Obtaining such values for each
input/output measure, for each DMU, comes later.
Once we have answered these conceptual questions then we need
(obviously) to collect data on the DMUs and their input and output
measures. Here the issue of timescale appears – over what timescale
should we collect data? Clearly the timescale adopted relates to
the period over which we are trying to judge efficiency. To take one
example, if we were trying to judge efficiencies over a yearly timescale
it would obviously be inappropriate to use data relating to just one day.
However, we need not necessarily require one year of data; for instance,
perhaps data relating to a representative six-month period would be
sufficient to judge yearly efficiency.

Links to other chapters


The topics considered in this chapter do not, for the most part, directly
link to other chapters in this subject guide. At a more general level the
link between this chapter and other chapters in this subject guide is the
use of a quantitative (analytic) approach to a problem.
One aspect of the material considered in this chapter, namely the
mathematical formulation of a DEA problem as a linear program, links to
Chapter 7 of this subject guide.

Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
• Pupil transportation in North Carolina – Anderson (page 170)
• Benchmarking team and individual performance in R&D laboratories –
www.banxia.com/frontier/case-studies/benchmarking/
• Data envelopment analysis in retail banking –
www.banxia.com/frontier/case-studies/retail-banking/

A reminder of your learning outcomes


Having completed this chapter, and the Essential reading and activities, you
should be able to:
• compare decision-making units (DMUs) via ratios
• draw the efficient frontier and find the efficiencies of all DMUs for any
example involving two ratios
• state the reference set for an inefficient DMU
• formulate the mathematical problem of finding the efficiency of any
DMU
• discuss the use of value judgements
• discuss starting a data envelopment analysis (DEA) study.


Sample examination questions


For Sample examination questions relating to the material presented in
this chapter please visit the VLE.


Chapter 10: Multicriteria decision making

Essential reading
Anderson, Chapter 14, excluding section 14.3.

Spreadsheet
multi.xls
• Sheet A: Solution of weighted goal program via Solver
• Sheet B: Solution of priority level goal program via Solver, first
priority level
• Sheet C: Solution of priority level goal program via Solver, second
priority level
• Sheet D: Approximate calculation of AHP weights and consistency
• Sheet E: Exact calculation of AHP weights via Solver
• Sheet F: Approximate calculation of AHP weights for job offers.
This spreadsheet can be downloaded from the VLE.

Aims of the chapter


The aims of this chapter are to:
• discuss two approaches (goal programming and the analytic hierarchy
process) that can be applied when dealing with decision situations
involving, in objective/goal terms, more than one factor
• illustrate how, for a simple example, goal programming can be
applied, where the approach requires the numeric solution of linear
programming problems
• illustrate how, for a simple example, the analytic hierarchy process can
be applied, where the approach can be applied with/without recourse
to a computer.

Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• formulate a goal program
• when given pairwise comparison matrices apply the analytic hierarchy
process to:
calculate approximate weights
calculate and interpret the consistency ratio
decide the best alternative to choose
• discuss the criticisms that have been made of the analytic hierarchy
process (AHP)
• discuss other approaches to multicriteria decision making.


Introduction
Multicriteria decision making refers to situations where we have more
than one objective (or goal) and these objectives conflict. Nevertheless,
we must somehow reach a decision taking them all into account. This
contrasts with, for example, decision trees or linear programming, where
we have a single objective – either optimise expected monetary value for
decision trees or optimise a single linear objective in the case of linear
programming. This chapter considers goal programming and the analytic
hierarchy process, two techniques used for multicriteria decision making.

Goal programming (GP)


Activity/Reading
For this section read Anderson, Chapter 14, sections start–14.2.

To illustrate goal programming (GP) we will return to the Two Mines
problem that we considered previously in Chapter 1. To remind you, the
problem was:
The Two Mines company owns two different mines that produce an ore
which, after being crushed, is graded into three classes: high, medium and
low-grade. The company has contracted to provide a smelting plant with
12 tonnes of high-grade, 8 tonnes of medium-grade and 24 tonnes of low-
grade ore per week. The two mines have different operating characteristics
as detailed below.
Mine   Cost per day (£’000)   Production (tonnes/day)
                              High   Medium   Low
X      180                    6      3        4
Y      160                    1      1        6
Table 10.1
How many days per week should each mine be operated to fulfil the
smelting plant contract?
To solve this problem we introduced the variables
x = number of days per week mine X is operated
y = number of days per week mine Y is operated
with x ≥ 0 and y ≥ 0 and formulated the problem as a linear program:

minimise 180x + 160y


subject to 6x + 1y ≥ 12
3x + 1y ≥ 8
4x + 6y ≥ 24
x≤5
y≤5
x, y ≥ 0
where we assumed we could work no more than five days per week on
each mine.
We solved this linear program in Chapter 8 and showed that the solution was:
x = 12/7 = 1.71
y = 20/7 = 2.86


with the value of the objective function being given by


180x + 160y = 180(12/7) + 160(20/7) = 765.71.
Now although this is the minimum cost solution, it does raise a difficulty.
If we adopt this solution we will be producing precisely as much medium
and low-grade ore per week as we need. This is good. However we will be
producing 6(12/7) + 1(20/7) = 13.14 tonnes of high-grade per week. As the
contract is only for 12 tonnes we will somehow have to deal with this excess.
Hence we can see that the Two Mines company might well feel that
whereas before we had a single objective problem:
• minimise cost
now we have a problem that has two (conflicting) objectives:
• keep cost to a minimum
• keep the excess of high-grade ore to a minimum.
Goal programming is one approach to dealing with problems of this kind.

Activity
Consider this problem yourself for 10 minutes (keep cost to a minimum and keep the excess
of high-grade ore to a minimum). What answer do you come up with for the number of days
per week each mine should be operated? What are the associated costs and the number of
tonnes of excess high-grade ore? Write your answer here for later reference.

Goal programming formulation


In the above problem we have two factors to consider:
• cost
• excess of high-grade ore.
For each of these factors we first need to decide a numeric goal,
sometimes referred to as a target value, or target.
With regard to cost our previously calculated solution had an associated
cost of 765.71 (per week). The company may consider that, in the
interests of keeping excess ore low, they would be prepared to increase
the cost that they incur, but they would like to keep cost at or around 780.
This figure of 780 becomes our numeric cost goal.
Turning to excess high-grade ore our previously calculated solution had
an associated high-grade ore production of 13.14 tonnes, so 13.14 − 12 =
1.14 tonnes over what was needed (i.e. 1.14 tonnes excess). The company
may consider that, in the interests of keeping costs low, they would like to
keep excess ore at or around 0.5 tonnes. This figure of 0.5 becomes
our numeric excess ore goal.
These numeric goals can only come from managerial consideration of the
situation.
Once these numeric goals have been decided then the procedure to
follow is quite straightforward. We need to introduce extra variables, two
variables for each factor; these variables deal with the deviation from the
numeric goal for each factor.
Let C+ (≥ 0) represent the amount by which we deviate upward from
our cost goal of 780 and C− (≥ 0) represent the amount by which we
deviate downward from our cost goal of 780. Then we have an equation
(constraint) linking these new variables to our old variables:
180x + 160y = 780 + C+ − C−


This constraint, which must be an equality, says that whatever we decide
in terms of production on mines X and Y (x and y respectively) the cost of
that (180x + 160y) must equal the goal (780) adjusted by the deviation
variables C+ and C−, the plus in front of C+ indicating an upward
movement (deviation) from the goal and the minus in front of C−
indicating a downward movement (deviation) from the goal.
There are several comments to make here:
• It is standard notational practice in GP to use a plus superscript to
indicate upward deviation from the goal and a minus superscript
to indicate downward deviation from the goal (you will see that in
defining C+ and C− we have adopted this standard).
• In GP the standard equality constraint (equation) that applies for each
goal is:
mathematical expression of the goal = numeric goal value +
upward deviation variable – downward deviation variable
colloquially this can be phrased as:
something = goal value + upward deviation variable – downward
deviation variable
where the precise form of the mathematical expression of the goal
(obviously) depends on the situation being considered.
• One subtle point here is that the way we have written our equation
including C+ − C− opens the possibility that when we come to numerically
solve the problem we get an answer like C+ = 100 and C− = 120, so both an
upward and a downward deviation with the overall deviation being
C+ − C− = 100 − 120 = −20; a downward deviation of 20. The simple
answer is that, as our equation 180x + 160y = 780 + C+ – C− is currently
written, there is nothing to prevent this happening. The reason why
effects like this tend not to happen in practice lies in the objective that we
shall adopt later: since we minimise a (weighted) sum of the deviation
variables, increasing C+ and C− together only worsens the objective, so an
optimal solution will not inflate both simultaneously.
We can now deal with the constraint relating to production of excess
high-grade ore. Let H+ (≥0) represent the amount by which we deviate
upward from our excess high-grade ore goal of 0.5 and H− (≥0) represent
the amount by which we deviate downward from our goal of 0.5. Thus we
have an equation linking these new variables to our old variables:
6x + 1y − 12 = 0.5 + H+ − H−
Here 6x + 1y is the amount of high-grade ore produced, and we need 12
tonnes, so 6x + 1y − 12 is the excess high grade. This must equal the goal
(0.5) adjusted by the deviation variables H+ and H−, the plus in front of
H+ in this equation indicating an upward movement (deviation) from the
goal and the minus in front of H− in this equation indicating a downward
movement (deviation) from the goal.
Here, for ease of discussion and subsequent solution in Excel, we
rearrange the above equation to be:
6x + 1y = 12.5 + H+ − H−
Note that the above equation leaves open the possibility that the company
might decide not to supply all the high-grade ore it needs to supply (i.e.
we have a downward deviation from the goal level in excess of 0.5; e.g.
H+ = 0, H− = 2, which would result in only 10.5 tonnes of high-grade ore
being produced). If the company wishes to exclude this possibility, and
here we will assume it does, then we simply continue to include the
constraint 6x + 1y ≥ 12 in the problem.


Hence for our Two Mines problem where we wish to reconcile our
conflicting goals we have the equations:
180x + 160y = 780 + C+ − C−
6x + 1y = 12.5 + H+ − H−
6x + 1y ≥ 12
3x + 1y ≥ 8
4x + 6y ≥ 24
x≤5
y≤5
x, y, C+, C−, H+, H− ≥ 0
Given our variables, these equations must be satisfied.
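As a quick numerical check (in Python, purely for illustration) we can work out the deviation variables implied by the original minimum-cost solution x = 12/7, y = 20/7:

```python
# Original minimum-cost solution from the Two Mines linear program.
x, y = 12 / 7, 20 / 7

cost = 180 * x + 160 * y          # about 765.71 (£'000)
high_grade = 6 * x + 1 * y        # about 13.14 tonnes

# Deviations from the goal equations:
#   180x + 160y = 780  + C_plus - C_minus
#   6x   + 1y   = 12.5 + H_plus - H_minus
c_minus = 780 - cost              # cost is below goal: downward deviation
h_plus = high_grade - 12.5        # ore is above goal: upward deviation

print(round(c_minus, 2), round(h_plus, 2))  # 14.29 0.64
```

So relative to the goals the minimum-cost solution is 14.29 under the cost goal and 0.64 over the excess-ore goal – exactly the tension that goal programming is designed to manage.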
Activity
Suppose now that there is a third mine Z, costing 120 (£’000) per day and producing 0.5
tonnes of high-grade, one tonne of medium-grade and nine tonnes of low-grade ore per
day. How would the above equations change?

To proceed there are a number of different approaches:


• To assign weights to each of the variables associated with upward/
downward deviation and then to solve a single problem whose
objective is to minimise a weighted deviation sum. This approach is
sometimes known as weighted goal programming.
• To decide priority levels for the goals (priority level 1 for the most
important goal, then priority level 2 for the second most important
goal, etc) and first satisfy priority one goals, then priority two goals,
then…, so a sequence of related problems are solved. This approach
is sometimes known as sequential goal programming or pre-
emptive goal programming as priorities cannot be traded off
against each other (unlike the weighted goal programming approach).
We shall illustrate both approaches.

Weighted approach
For the weighted approach, we need to assign weights to our four
deviation variables C+,C−,H+,H−. These weights can only come from
managerial consideration of the situation and, if necessary, starting with a
set of weights and then revising them in the light of the solution obtained
after solving the problem with those weights.
There is an important issue associated with weights here – namely that
the equations that we are considering deal with different units – cost
(£’000) and tonnes of high-grade ore. What needs to be made clear to
management in setting these weights is that they need to think in terms of
the weight associated with one percentage deviation from the current
goal for each variable.
Suppose, to proceed, that management have decided that the weights to
be applied are:


Variable   Current goal   Weight for one per cent          One per cent of goal
                          deviation from this goal
C+         780            50                               7.8
C−         780            −20                              7.8
H+         0.5            4                                0.005
H−         0.5            2                                0.005
Table 10.2
Note the negative weight for C−; this indicates that (as we shall be
minimising, see below) we prefer to have a downward deviation from our
cost goal of 780 – a natural desire.
Then our objective becomes:
minimise 50(C+/7.8) - 20(C-/7.8) + 4(H+/0.005) + 2(H-/0.005)
where in this objective (C+/7.8), for example, is the total percentage
upward deviation from the cost goal which is weighted using 50 (see
above).
Be clear here – in weighted goal programming the objective to be adopted
is always to minimise a weighted sum of deviation variables.
Hence we have the linear program:
minimise 50(C+/7.8) − 20(C-/7.8) + 4(H+/0.005) + 2(H-/0.005)
subject to 180x + 160y = 780 + C+ − C-
6x + 1y = 12.5 + H+ − H-
6x + 1y ≥ 12
3x + 1y ≥ 8
4x + 6y ≥ 24
x≤5
y≤5
x, y, C+, C-, H+, H- ≥ 0

which, when we solve it, will give us values for x and y, the number of
days to work each mine, that minimises the (total) weighted deviation
from our goals.

Excel solution
Take the spreadsheet associated with this chapter and look at Sheet A. You
should see the problem we considered above set out as:

Spreadsheet 10.1


Here we have expressly given the weights (for one per cent deviation) both
for upward and downward deviations as in cells B10 to C11.
Cells B14 to C15 contain the deviation variables we are trying to decide and
cells F14 and F15 the variables (x and y respectively) associated with how
many days per week to work each mine. Rows 7 and 8 contain the left and
right hand sides of the two equality constraints in the LP involving the goals
and the deviation variables. Cell B7, for example, is the left-hand side of the
constraint 180x + 160y = 780 + C+ – C- and cell B8 the right-hand side of
that constraint. Row 6 shows the numeric goal values for the problem.
To use Solver in Excel select Tools and then Solver. In the version of Excel I
am using (different versions of Excel have slightly different Solver formats)
you will get the Solver model as below:

Spreadsheet 10.2
Here the objective function (total weighted deviation) is in cell B17 (ignore
the use of $ signs here), which we wish to minimise by changing our
variables. If you click on Options you will see:

Spreadsheet 10.3
where both the ‘Assume Linear Model’ and ‘Assume Non-Negative’ boxes are
ticked – indicating we are dealing with a linear model with non-negative
variables.


Solving using Solver we get:

Spreadsheet 10.4
where the deviation variables show no deviation from our high-grade excess
ore goal, i.e. the solution shown above of x = 1.5 and y = 3.5 has excess ore
equal to the goal of 0.5 (as can be seen in cell C4, we produce 12.5, require
12 (cell C5), so excess is 0.5). In this solution we exceed our cost goal (the
upward deviation variable with value 50 in cell B14), so the total cost is 50
above our goal of 780 (cell B6), so 780 + 50 = 830, as in cell B4.
The advantage here, as with all of the techniques given in this subject guide,
is that once we have gone to the effort of generating a computerised solution
approach (encoded the problem in Excel using Solver here) we can use it to
investigate sensitivity, or to adjust the solution to one that we prefer.
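Readers without Excel can reproduce this with any LP solver; the sketch below uses Python's scipy.optimize.linprog (an illustrative choice, not part of the guide's Solver workflow). Variables are ordered [x, y, C+, C−, H+, H−] and the ≥ constraints are negated into ≤ form.

```python
from scipy.optimize import linprog

# Variables: x, y, Cp (C+), Cm (C-), Hp (H+), Hm (H-).
# Objective: 50*(Cp/7.8) - 20*(Cm/7.8) + 4*(Hp/0.005) + 2*(Hm/0.005).
c = [0, 0, 50 / 7.8, -20 / 7.8, 4 / 0.005, 2 / 0.005]

# Goal equations: 180x + 160y - Cp + Cm = 780 and 6x + y - Hp + Hm = 12.5.
A_eq = [[180, 160, -1, 1, 0, 0],
        [6,   1,   0, 0, -1, 1]]
b_eq = [780, 12.5]

# Original >= constraints, negated into <= form.
A_ub = [[-6, -1, 0, 0, 0, 0],    # 6x + y  >= 12
        [-3, -1, 0, 0, 0, 0],    # 3x + y  >= 8
        [-4, -6, 0, 0, 0, 0]]    # 4x + 6y >= 24
b_ub = [-12, -8, -24]

bounds = [(0, 5), (0, 5)] + [(0, None)] * 4  # x,y <= 5; deviations >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
x, y = res.x[0], res.x[1]
print(round(x, 2), round(y, 2))  # 1.5 3.5, matching the Solver solution
```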
Here, for example, it may be that management, after considering the above
solution, decide that they wish to increase the weight associated with
exceeding the cost goal (e.g. to increase the weight attached to the upward
deviation variable C+ from 50 to 100). Be clear about the (potential)
effect of this – as we already exceed our cost goal (the upward deviation
of 50 in cell B14) this will tend to reduce that deviation – but the trade-off
(of course) is that in so doing we may get more excess high-grade ore (i.e.
increase the current deviation from our excess high-grade ore goal,
currently zero as shown in cells C14 and C15).
Making this change and resolving using Solver we get:

Spreadsheet 10.5
showing that with this set of weights we meet our cost goal, but this is
achieved by exceeding our high-grade ore goal (the upward deviation
variable in cell C14).
Irrespective of what solution management prefer, it is clear that GP provides
a flexible tool to investigate the effect of differing weightings as they attempt
to find a solution that satisfies their conflicting goals.

Activity
Suppose now that there is a third mine Z, costing 120 (£’000) per day and producing 0.5
tonnes of high-grade, 1 tonne of medium-grade and 9 tonnes of low-grade ore per day.
What would be the weighted goal programming formulation?

Priority approach
As mentioned above, in the priority approach we need to decide priority levels
for the goals (Priority level 1 for the most important goal, then Priority level 2
for the second most important goal, etc.) and first satisfy Priority 1 goals, then
Priority 2 goals, then…, so that a sequence of related problems are solved.
Here, for the purposes of illustrating the approach, we shall assume that
management consider that their priority levels are:
• Priority level 1 – to meet the cost goal
• Priority level 2 – to meet the excess high-grade ore goal.
Hence the first problem that we solve (a linear program) is:
minimise C+ + C–
subject to 180x + 160y = 780 + C+ − C-
6x + 1y = 12.5 + H+ − H-
6x + 1y ≥ 12
3x + 1y ≥ 8
4x + 6y ≥ 24
x≤5
y≤5
x, y, C+, C-, H+, H– ≥ 0

At Priority level 1 we wish (if possible) to meet the cost goal. This is
equivalent to saying that the deviation from that goal (upward or
downward) should be zero, which is represented in a linear manner by
having as the objective the sum of the two deviation variables associated
with cost. Note here that we have no weights in the problem, unlike in the
weighted goal programming approach considered above.

Excel solution
Take the spreadsheet associated with this chapter and look at Sheet B. You
should see the problem we considered above set out as below where we also
show the Solver model for Sheet B.

Spreadsheet 10.6
The objective for this Solver model is cell B13, which you will find is equal
to the sum of B10 and B11 (i.e. the upward deviation variable C+ plus the
downward deviation variable C–).


The Solver model is:

Spreadsheet 10.7
which has the same form as we saw when we considered the weighted
problem (albeit you will see that the rows in the spreadsheet concerned
with weights have disappeared, since in the priority approach which we
are considering here we have no weights).
If you click Options in the Solver model you will also find that the ‘Assume
Linear Model’ and ‘Assume Non-Negative’ boxes are ticked – indicating
that we are dealing with a linear model with non-negative variables.
Solving the linear program represented in the above Solver model
we get:

Spreadsheet 10.8
This indicates that we can achieve our cost goal (a zero objective value
corresponding to a zero upward and downward deviation variable for the
cost goal, as in cells B10 and B11), but at the price of an upward deviation
from our excess ore goal of 0.5, as in cell C10.
We can now move to our second priority level. This was to meet the excess
high-grade ore goal. We know from the solution just considered above we
can meet our cost goal, but at the expense of exceeding our high-grade
excess ore goal by 0.5. But might it be possible to meet that cost goal and
get closer to our excess high-grade ore goal?


To find out if this is possible we have the new linear program:


minimise H+ + H-
subject to C+ = 0
C- = 0
180x + 160y = 780 + C+ − C−
6x + 1y = 12.5 + H+ − H-
6x + 1y ≥ 12
3x + 1y ≥ 8
4x + 6y ≥ 24
x≤5
y≤5
x, y, C+, C-, H+, H- ≥ 0
where we are trying to minimise deviation (upward/downward) from
our ore goal but where the deviation variables C+ and C- for cost are
constrained to have precisely the same values as they achieved in the
solution seen above at Priority level 1.
Take the spreadsheet associated with this chapter and look at Sheet C and
its associated Solver model. You will see:

Spreadsheet 10.9
where the constraint relating to variables C+ and C- (cells B10 and B11)
has been added and if you examine the objective cell B13 you will find
that it is now equal to C10 + C11 (i.e. H+ + H-). Solving we get:


Spreadsheet 10.10
The solution seen here is the best we can do (in terms of our second
priority level, excess high-grade ore) subject to the constraint that we
continue to achieve the best we can at our first priority level (meet the
cost goal).
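The two-stage sequence above can be sketched with scipy.optimize.linprog (again an illustrative choice): solve the Priority level 1 problem, then fix the cost deviations at their level 1 values (zero here, imposed via zero bounds, equivalent to the constraints C+ = 0 and C− = 0) and solve the Priority level 2 problem.

```python
from scipy.optimize import linprog

# Variables: x, y, Cp, Cm, Hp, Hm (same ordering throughout).
A_eq = [[180, 160, -1, 1, 0, 0],   # 180x + 160y = 780  + Cp - Cm
        [6,   1,   0, 0, -1, 1]]   # 6x   + y    = 12.5 + Hp - Hm
b_eq = [780, 12.5]
A_ub = [[-6, -1, 0, 0, 0, 0],      # 6x + y  >= 12
        [-3, -1, 0, 0, 0, 0],      # 3x + y  >= 8
        [-4, -6, 0, 0, 0, 0]]      # 4x + 6y >= 24
b_ub = [-12, -8, -24]
bounds = [(0, 5), (0, 5)] + [(0, None)] * 4

# Priority level 1: minimise Cp + Cm.
lvl1 = linprog([0, 0, 1, 1, 0, 0], A_ub=A_ub, b_ub=b_ub,
               A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")

# Cost goal is met exactly, so fix Cp = Cm = 0 at Priority level 2.
bounds2 = [(0, 5), (0, 5), (0, 0), (0, 0), (0, None), (0, None)]

# Priority level 2: minimise Hp + Hm subject to the level 1 result.
lvl2 = linprog([0, 0, 0, 0, 1, 1], A_ub=A_ub, b_ub=b_ub,
               A_eq=A_eq, b_eq=b_eq, bounds=bounds2, method="highs")
print(round(lvl1.fun, 4), round(lvl2.fun, 4))  # 0.0 0.5
```

The level 2 optimum of 0.5 confirms the conclusion above: while meeting the cost goal exactly, the upward deviation of 0.5 from the excess-ore goal cannot be improved upon.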

Activity
Suppose now that there is a third mine Z, costing 120 (£’000) per day and producing 0.5
tonnes of high-grade, 1 tonne of medium-grade and 9 tonnes of low-grade ore per day.
What would be the pre-emptive goal programming formulation?

Activity
Recall the solution you produced for the goal programming problem above. How does
that compare with the goal programming solutions produced using Excel above?

Analytic hierarchy process (AHP)

Activity/Reading
For this section read Anderson, Chapter 14, sections 14.4–14.6.

The analytic hierarchy process (AHP) was developed by Saaty and
provides a systematic approach to choosing between a finite set of
alternatives. We shall illustrate AHP by means of an example.
Suppose that a student is considering two different job offers. They have
three factors (objectives) that they consider important in helping them to
choose between offers:
objective 1: starting salary – the higher the better
objective 2: promotion prospects – the quicker promotion occurs the better
objective 3: interest in the job – the more interest the better.
The first step in AHP is for the student to construct a matrix [Sij]
comprising a pairwise comparison of each of these objectives. This
pairwise comparison takes place with regard to a standard nine-point scale
as shown below:

Scale value Sij relating i to j   Meaning
1                                 i is as important as j
3                                 i is moderately more important than j
5                                 i is strongly more important than j
7                                 i is very strongly more important than j
9                                 i is extremely more important than j


Scale values 2, 4, 6 and 8 lie midway between the definitions for their
nearest values given above. For example, a scale value of 4 for Sij indicates
that objective i is midway between moderately more important and strongly
more important than objective j.
For the example we are considering, suppose that the student considers that
their pairwise comparison matrix is:

Objective (j)
1 2 3
Objective (i) 1 1 5 3
2 – 1 –
3 – 4 1
Here we have that:
• as S12 = 5 Objective 1 is strongly more important than Objective 2
• as S13 = 3 Objective 1 is moderately more important than Objective 3
• as S32 = 4 Objective 3 is midway between moderately more important
and strongly more important than Objective 2.
There are two points to note here:
• the diagonal elements (Sii, i = 1, 2, 3) are all 1 (no other choice is
possible, as any objective must by definition be exactly as important
as itself)
• for each distinct pair of objectives i and j we have only entered one
value; for example, for Objectives 2 and 3 we have entered a value for
S32 but not for S23.
Activity
For the three objectives we have considered above, what values would you personally
assign in the pairwise comparison matrix? Write that matrix here for reference.

One aspect of the values assigned in this pairwise comparison matrix
that we need to consider is that of consistency. That is, do the values
expressed represent a consistent view of the objectives? Here, for example,
as Objective 1 is strongly more important than Objective 2 and Objective 1
is only moderately more important than Objective 3, it might seem
reasonable to suppose that Objective 3 is only moderately more important
than Objective 2; but in fact the value assigned indicates that Objective 3
is midway between moderately more important and strongly more important
than Objective 2.
Obviously, as in real life, it is impossible to be absolutely consistent about
everything; the issue is the degree of inconsistency that is shown. This
issue of consistency will be addressed numerically below.

Activity
Do you think the values assigned in the pairwise comparison matrix we have used above
are consistent or not? Why? Do you think the values assigned in your own pairwise
comparison matrix (as in the previous activity above) are consistent or not? Why? Record
here your conclusions as to whether these judgements are consistent or not for future
reference.

The next step in the AHP process is to set the missing elements in the matrix
equal to the reciprocal of the corresponding element that has been entered,
as below:


                 Objective (j)
                  1    2    3
Objective (i)  1  1    5    3
               2  1/5  1    1/4
               3  1/3  4    1

In AHP we need to decide the weight attached to each objective that is
associated with our pairwise objective comparison matrix. This can be
done as follows – let wi (≥ 0) represent the weight attached to objective i,
then:
• normalise the pairwise comparison matrix by dividing each element by
its column sum
• set wi equal to the average of the elements in row i in this normalised
matrix.
If you do this with the above matrix you will find that w1 = 0.6194, w2 =
0.0964 and w3 = 0.2842. These weights should (to within rounding errors)
total one and here they add up to precisely one. The calculation can be
seen in Sheet D of the spreadsheet associated with this chapter as below:

Spreadsheet 10.11
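The normalise-and-average calculation is simple enough to reproduce outside a spreadsheet. The following Python fragment is an illustrative sketch only (the subject guide itself works in Excel) and reproduces the approximate weights for the pairwise comparison matrix used in this example:

```python
# Approximate AHP weights: divide each element of the pairwise
# comparison matrix by its column sum, then average along each row.
# Illustrative sketch only - the subject guide performs this in Excel.

matrix = [
    [1.0, 5.0, 3.0],    # objective 1 compared with objectives 1, 2, 3
    [1/5, 1.0, 1/4],    # objective 2
    [1/3, 4.0, 1.0],    # objective 3
]

def ahp_weights(m):
    n = len(m)
    col_sums = [sum(row[j] for row in m) for j in range(n)]
    # normalise the matrix column by column
    norm = [[m[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    # weight for objective i = average of row i of the normalised matrix
    return [sum(norm[i]) / n for i in range(n)]

print(ahp_weights(matrix))  # approximately [0.6194, 0.0964, 0.2842]
```

The same function applied to a two-alternative matrix such as [[1, 3], [1/3, 1]] gives weights [0.75, 0.25]; by construction the weights always sum to one.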
The next step in AHP is to see if the pairwise comparison matrix is
reasonably consistent. To do this we carry out the procedure below.
First we carry out a matrix multiplication of the pairwise comparison
matrix with the column vector of weights, i.e. we do:

| 1    5    3   |   | w1 |   | 1    5    3   |   | 0.6194 |   | 1.9540 |
| 1/5  1    1/4 | × | w2 | = | 1/5  1    1/4 | × | 0.0964 | = | 0.2913 |
| 1/3  4    1   |   | w3 |   | 1/3  4    1   |   | 0.2842 |   | 0.8763 |

Then we compute the sum of [element in this column vector/corresponding
element in the weight vector] and set λmax equal to this sum divided by n,
the number of objectives we have – in this case 3.
Hence we have λmax = [1.9540/0.6194 + 0.2913/0.0964 +
0.8763/0.2842]/3 = 3.0866.
We now compute the consistency index (CI) which is given by
(λmax − n)/(n − 1) = (3.0866 − 3)/(3 − 1) = 0.0433.
As a numeric check you might note that λmax and CI cannot be negative.
We now compute the consistency ratio, which is the consistency index CI
divided by RI, the random index, defined as the average consistency index
of randomly generated pairwise comparison matrices. The values of RI are
standard values, where for n = 3 RI = 0.58 and for n = 4 RI = 0.90.


Hence the consistency ratio for our problem is 0.0433/0.58 = 0.0747.
Standard practice in AHP is to say that if the consistency ratio is less than
0.1 then the judgements shown in the pairwise comparison matrix can
be taken to be reasonably consistent. Judgements which were perfectly
consistent would have a consistency index (and ratio) of zero. If the
consistency ratio is more than 0.1 then this is an indication that the
judgements given in the pairwise comparison matrix need revision.
Hence for our particular example the judgements given in the pairwise
comparison matrix can be taken to be reasonably consistent.
The above consistency ratio calculation can be seen in Sheet D of the
spreadsheet associated with this chapter as below:

Spreadsheet 10.12
Note that slight differences arise (e.g. in the first element of the column
vector) between the calculation we carried out above and the Excel values.
These are due to rounding errors on our part since Excel is more accurate
and works to many more decimal places in carrying out calculations than
we have used above.
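The λmax, CI and CR arithmetic above can likewise be sketched in a few lines of Python. This is an illustrative sketch only, mirroring the spreadsheet; RI = 0.58 is the standard tabulated value for n = 3 quoted above:

```python
# Consistency check for an AHP pairwise comparison matrix.
# Illustrative sketch; RI = 0.58 is the standard tabulated value for n = 3.

matrix = [
    [1.0, 5.0, 3.0],
    [1/5, 1.0, 1/4],
    [1/3, 4.0, 1.0],
]
weights = [0.6194, 0.0964, 0.2842]  # approximate weights found earlier

def consistency(m, w, ri):
    n = len(m)
    # matrix multiplication of the comparison matrix with the weight vector
    mv = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam_max = sum(mv[i] / w[i] for i in range(n)) / n
    ci = (lam_max - n) / (n - 1)   # consistency index
    cr = ci / ri                   # consistency ratio
    return lam_max, ci, cr

lam_max, ci, cr = consistency(matrix, weights, ri=0.58)
print(lam_max, ci, cr)  # about 3.0866, 0.0433, 0.0747
# cr < 0.1, so the judgements can be taken to be reasonably consistent
```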

Activity
For your personal pairwise comparison matrix, as decided in an activity above, use the
Excel spreadsheet to calculate your consistency ratio. Does this ratio accord with your
own judgement as to whether your pairwise comparison matrix was consistent or not as
considered in the activity above?

As noted in Anderson, the above procedure actually only approximates the
weights and λmax that AHP theoretically assigns. The exact weights and
λmax value come from the solution to a mathematical program. Introduce λ
as an unknown numeric value and let the pairwise comparison matrix be
denoted by P; then the exact weights for the problem we are considering
are the solution of:


minimise λ
subject to
| 1    5    3   |   | w1 |       | w1 |
| 1/5  1    1/4 | × | w2 | = λ × | w2 |
| 1/3  4    1   |   | w3 |       | w3 |
w1 + w2 + w3 = 1
w1, w2, w3 ≥ 0

where we have a matrix equation above, relating to the pairwise
comparison matrix and the weight vector, together with the constraint that
the weights sum to one and are non-negative.
Hence we have that the exact calculation of the weights assigned in AHP
for our example is given by:
minimise λ
subject to 1w1 + 5w2 + 3w3 = λw1
(1/5)w1 + 1w2 + (1/4)w3 = λw2
(1/3)w1 + 4w2 + 1w3 = λw3
w1 + w2 + w3 = 1
w1, w2, w3 ≥ 0
If you look at Sheet E in the spreadsheet associated with this chapter, you
will see:

Spreadsheet 10.13
Here cells C13 to E13 contain the weights and cell C14 the λ value which,
as you can see from the Solver model, is to be minimised. The current
values in these cells are there just to get the solution process started. Cells
C16 to C18 contain expressions for the left-hand side of the first three
constraints of the problem when rearranged to ensure all variables are
on the left-hand side (all three constraints have a zero right-hand side).
Cell C19 and its associated constraint in the Solver model ensures that the
weights add up to precisely one.


Solving we get:

Spreadsheet 10.14
so that the exact AHP solution is:
w1 = 0.6267, w2 = 0.0936 and w3 = 0.2797 with λmax = 3.086
Hence we can see that our approximate values as calculated above (in
Excel) of
w1 = 0.6194, w2 = 0.0964 and w3 = 0.2842 with λmax = 3.087
were reasonable.
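As an aside, the exact weights are the principal eigenvector of the pairwise comparison matrix (a standard result in AHP), so they can also be approximated without Solver by repeatedly multiplying the matrix by the weight vector and renormalising – the power method. The following Python sketch is illustrative only and is not part of the guide's Excel material:

```python
# Exact AHP weights via the power method: repeatedly multiply the
# pairwise comparison matrix by the weight vector and renormalise.
# Illustrative sketch only; this converges to the principal eigenvector.

matrix = [
    [1.0, 5.0, 3.0],
    [1/5, 1.0, 1/4],
    [1/3, 4.0, 1.0],
]

def power_method(m, iterations=100):
    n = len(m)
    w = [1.0 / n] * n                 # start from equal weights
    for _ in range(iterations):
        mv = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(mv)
        w = [v / total for v in mv]   # renormalise so the weights sum to one
    # at convergence each component of (matrix x w) equals lambda_max x w
    lam_max = sum(m[0][j] * w[j] for j in range(n)) / w[0]
    return w, lam_max

w, lam_max = power_method(matrix)
print(w, lam_max)  # about [0.6267, 0.0936, 0.2797] and 3.086
```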

Activity
For your personal pairwise comparison matrix use the Excel spreadsheet to compute exact
values. Compare these with the approximate values calculated via Excel.

Recall that in the example we were considering we were actually trying
to decide between two job offers and we had three factors (objectives) to
take into account. So far in the AHP procedure above we have just worked
with the objectives. In order to decide between the job offers we now have
to produce a pairwise comparison matrix for these job offers with regard
to our three objectives.
Taking the first objective, starting salary – the higher the better, suppose
that the pairwise comparison matrix is:
Offer 1 Offer 2
Offer 1 1 3
Offer 2 – 1

where the 3 above indicates that Offer 1 is moderately more important
than Offer 2. Note that we use precisely the same nine-point scale here as
we used above for comparing objectives.
We now essentially repeat the procedure we carried out above, but for this
new pairwise comparison matrix.
Let wi (≥0) represent the weight attached to offer i then:
• fill in the missing elements in the pairwise comparison matrix
• normalise the pairwise comparison matrix by dividing each element by
its column sum


• set wi equal to the average of the elements in row i in this normalised
matrix.
If you do this with the above matrix you will find that w1 = 0.75 and
w2 = 0.25. These weights indicate how well each offer ‘scores’ with respect
to this objective. The calculation can be seen in Sheet F of the spreadsheet
associated with this chapter as below:

Spreadsheet 10.15
As we had only two job offers we only entered a single judgement into
the pairwise comparison matrix above and hence there is no need to
calculate the consistency index since it will be zero (logically, if we express
just a single judgement it is impossible to be inconsistent). Had we been
considering more than two job offers, though, we would have needed to
consider the consistency of our judgements in the same manner as
presented above for the pairwise comparison matrix relating to the
objectives.
As an aside here you will see that the columns in the normalised matrix
shown above in Sheet F of the spreadsheet associated with this chapter
are identical (each having 0.75 in Row 1, 0.25 in Row 2). This is not a
coincidence but is in fact something you will always observe when you
deal with a two-row, two-column matrix in AHP. As both columns in that
matrix are identical it is not surprising that the row average column is also
identical to these two columns.
We now take our other two objectives, do a pairwise comparison of our
two job offers, and then calculate the AHP weights.
Suppose that the pairwise comparison matrix for Objective 2, promotion
prospects – the quicker promotion occurs the better, is:
Offer 1 Offer 2
Offer 1 1 –
Offer 2 7 1

indicating that Offer 2 is very strongly more important than Offer 1. With
respect to this objective the weights are 0.125 for Offer 1 and 0.875 for
Offer 2.
Suppose that the pairwise comparison matrix for Objective 3, interest in
the job – the more interest the better, is:
Offer 1 Offer 2
Offer 1 1 –
Offer 2 4 1

indicating that Offer 2 is midway between moderately more important and
strongly more important than Offer 1. With respect to this objective we
would have that the weights are 0.2 for Offer 1 and 0.8 for Offer 2.


We can now bring all the weights we have decided together in the table
below:

Objective     Objective weight   Job Offer 1 weight   Job Offer 2 weight
                                 for each objective   for each objective
1             0.6194             0.75                 0.25
2             0.0964             0.125                0.875
3             0.2842             0.2                  0.8
Total score                      0.53                 0.47

Table 10.3
where we have chosen to work with our approximate AHP weights for
illustration.
Here the total score for each job is computed in the natural way (sum
over the objectives the objective weight multiplied by the weight for that
objective for the job being considered). For example the score of 0.53 for
job offer 1 is computed as 0.6194(0.75) + 0.0964(0.125) + 0.2842(0.2).
The alternative with the highest score is preferred – so in this case Job
Offer 1 is (just) preferred to Job Offer 2.
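This total-score arithmetic can be sketched as follows – an illustrative Python fragment using the approximate weights from the text, not part of the guide's Excel material:

```python
# Overall AHP score for an alternative = sum over objectives of
# (objective weight) x (alternative's weight for that objective).
# Illustrative sketch using the approximate weights from the text.

objective_weights = [0.6194, 0.0964, 0.2842]
alternative_weights = {
    "Job Offer 1": [0.75, 0.125, 0.2],
    "Job Offer 2": [0.25, 0.875, 0.8],
}

scores = {
    name: sum(ow * aw for ow, aw in zip(objective_weights, ws))
    for name, ws in alternative_weights.items()
}
best = max(scores, key=scores.get)
print(scores)  # Job Offer 1 scores about 0.53, Job Offer 2 about 0.47
print(best)    # Job Offer 1
```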
One point to note here is that the objective weights (0.6194, 0.0964
and 0.2842) we have used to decide which job offer we prefer were
computed quite early in the overall procedure, before we decided whether
or not the judgements expressed with regard to the three objectives were
reasonably consistent. This means that, in the worst case, even if you
cannot carry through that consistency procedure fully, or make a mistake
in carrying it through, it is still possible to get a correct answer here
(provided, obviously, you have computed the objective weights correctly).
If we had chosen to work with our exact AHP weights we would have had:

Objective     Objective weight   Job Offer 1 weight   Job Offer 2 weight
                                 for each objective   for each objective
1             0.6267             0.75                 0.25
2             0.0936             0.125                0.875
3             0.2797             0.2                  0.8
Total score                      0.54                 0.46

Table 10.4
so that we would still have preferred Job Offer 1.

Criticisms of AHP
There have been a number of criticisms made of AHP and whether it is
a suitable technique for multicriteria decision making. These criticisms
centre around:
• The use of a nine-point scale:
it is purely arbitrary and has no theoretical justification
it seems to inevitably introduce numeric inconsistencies (e.g. if
A is very strongly more important than B and B in turn is very
strongly more important than C then SAB = 7 and SBC = 7, and
perfect consistency would require SAC = SAB × SBC = 49 – yet the
scale limits SAC to having, at most, the value 9).


• the fact that rank reversal can occur. AHP implicitly enables you to
rank alternatives (e.g. in our example above we ranked Job Offer 1
before Job Offer 2 because Job 1 had a higher total score than Job 2).
Suppose though that a third job offer is considered. It can be that this
third offer, even if a very poor job offer that is assigned a very low total
score, alters the relative ranking of the first two job offers. This seems
problematic and is seen by critics of AHP as inconsistent with a logical
rational approach to multicriteria decision making.

Other approaches
There are other approaches to making decisions when we have multiple
criteria that we have to consider beyond the two approaches dealt with in
this chapter (goal programming and AHP). MultiCriteria Decision Analysis
(or MCDA for short) has been widely studied. At the time of writing,
Wikipedia, for example, lists approximately 30 tools/techniques that have
been proposed to deal with problems of this type. Clearly it is impossible
to review/explain all of these approaches in this subject guide. However,
it is important that you should be clear that other approaches exist. One
approach that merits further consideration is multi-attribute utility/value
analysis (MAUA/MAVA).
In MAUA/MAVA the idea is that each alternative has a number of
dimensions. So, referring back to the AHP example considered above, we
had two alternatives (job offer 1 and job offer 2) and three dimensions
(objectives; objective 1: starting salary – the higher the better; objective 2:
promotion prospects – the quicker promotion occurs the better; objective
3: interest in the job – the more interest the better). The dimensions
represent the criteria in which we are interested. In MAUA/MAVA, each
dimension is mapped to numeric values (typically scaled to lie between
zero and 100) using a utility (value) function.
If you need to know what a utility (value) function is then review Chapter
4. Basically a utility/value function is an evaluation of the worth of
something to an individual decision maker. Different individuals assign
different utility values and they change according to circumstances. For
example, you might rate the offer of a gift of £10 as not very significant.
However, should you have just lost your wallet and face a walk of five
miles to get home late at night when it is pouring with rain the offer of
£10, which would enable you to use a taxi/public transport, might be
assigned a higher utility value than it would in normal circumstances.
In MAUA/MAVA weights are (subjectively) assigned to each dimension and
so, given:
• an evaluation of each alternative with respect to each dimension
• a mapping of each dimension to numeric values (typically scaled to lie
between zero and 100) using a utility/value function
• weights for each dimension
then we can bring all these factors together and arrive at a score for each
alternative, and hence rank them.
To illustrate MAUA/MAVA suppose that we expand the AHP example we
considered above to three job offers. The evaluation of each alternative
with respect to each dimension is shown in Table 10.5. In this table a high


number is good, so, for example, with respect to the first dimension of
starting salary job offer 1 is the worst offer, job offer 3 the best offer and
job offer 2 lies in the middle.

                           Dimension
              Objective 1:      Objective 2:          Objective 3:
              starting salary   promotion prospects   interest in the job
Alternative
Job offer 1   1                 2                     40
Job offer 2   3                 5                     25
Job offer 3   7                 11                    10

Table 10.5: Evaluation of job offers with respect to each dimension.


Note that in this table the values adopted within each dimension need to
be consistent, but between columns (dimensions) different scales can be
used. So here, for example, we have deliberately used a different scale for
the third dimension. Note also that, for ease of illustration, we have not
attempted to make these evaluations consistent with the data given above
in the AHP example where we had two job offers and three objectives.
We now need to map each dimension to numeric values, scaled to between
zero and 100. Here the worst alternative in each dimension is mapped to
zero, the best to 100, and alternatives lying between the best and worst
mapped to a value between zero and 100 depending on the
decision-maker's utility/preference. A mapping for the values here is
shown in Table 10.6.

                           Dimension
              Objective 1:      Objective 2:          Objective 3:
              starting salary   promotion prospects   interest in the job
Alternative
Job offer 1   0                 0                     100
Job offer 2   35                60                    70
Job offer 3   100               100                   0

Table 10.6: Mapping of job offers with respect to utility values.


Now suppose that we have (somehow) found from the decision-maker
the relative weights that they place on each dimension of 0.1:0.2:0.7, so
the first dimension has weight 0.1, the second 0.2 and the third 0.7 (the
weights summing to one). Hence the score for each job offer is:
• job offer 1: 0.1(0) + 0.2(0) + 0.7(100) = 70
• job offer 2: 0.1(35) + 0.2(60) + 0.7(70) = 64.5
• job offer 3: 0.1(100) + 0.2(100) + 0.7(0) = 30.
Here we can see that job offer 1 is to be preferred, having the highest
score, ranked just ahead of job offer 2 and far superior to job offer 3.
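The MAUA/MAVA scoring just described is a straightforward weighted sum. As an illustrative Python sketch of the example above (not part of the guide's material):

```python
# MAUA/MAVA score = weighted sum of each alternative's (0-100) utility
# values across the dimensions. Illustrative sketch of the example above.

dimension_weights = [0.1, 0.2, 0.7]   # salary, promotion, interest
utility = {
    "Job offer 1": [0, 0, 100],
    "Job offer 2": [35, 60, 70],
    "Job offer 3": [100, 100, 0],
}

scores = {
    name: sum(w * u for w, u in zip(dimension_weights, us))
    for name, us in utility.items()
}
ranking = sorted(scores, key=scores.get, reverse=True)
print(scores)   # offer 1 scores 70, offer 2 scores 64.5, offer 3 scores 30
print(ranking)  # ['Job offer 1', 'Job offer 2', 'Job offer 3']
```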

Activity
Conduct an internet search using the following terms to discover more about these
techniques:
•• multicriteria decision analysis
•• goal programming
•• analytic hierarchy process
•• multi-attribute utility/value analysis.


Links to other chapters


The topics considered in this chapter link to two separate sets of chapters.
The material related to goal programming links to Chapters 7 and 8 of
this subject guide falling under the general heading of mathematical
programming, where we adopt a mathematical formulation approach to
a decision problem in order to optimise (maximise or minimise) some
objective. These chapters also (where appropriate) present the numeric
solution of the formulation adopted via use of Excel. The material related
to multicriteria decision making links to Chapter 4 of this subject guide.

Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title Anderson (page number)
Vehicle fleet management in Quebec 608
Scoring model at Ford motor company 613
Multicriteria decision making at NASA 625

A reminder of your learning outcomes


Having completed this chapter, and the Essential reading and activities,
you should be able to:
• formulate a goal program
• when given pairwise comparison matrices apply the analytic hierarchy
process to:
calculate approximate weights
calculate and interpret the consistency ratio
decide the best alternative to choose
• discuss the criticisms that have been made of the analytic hierarchy
process (AHP)
• discuss other approaches to multicriteria decision making.

Sample examination questions


For Sample examination questions relating to the material presented in
this chapter please visit the VLE.


Chapter 11: Queueing theory and simulation

Essential reading
Anderson, Chapter 11, sections start–11.3; Chapter 12, sections start,
12.3–12.4.

Spreadsheet
queue.xls
• Sheet A: statistics for a M/M/1 queue
• Sheet B: statistics for a M/M/2 queue
• Sheet C: simulation – small example
• Sheet D: simulation – large example.
This spreadsheet can be downloaded from the VLE.

Aims of the chapter


The aims of this chapter are to:
• give a brief introduction to queueing and its importance
• illustrate the two basic approaches to queueing situations:
an analytical (formula based) approach based on known
mathematical equations
a computer based approach (discrete-event simulation).

Learning outcomes
By the end of this chapter, and having completed the Essential reading and
activities, you should be able to:
• list and discuss the characteristics of queueing systems
• calculate various steady-state statistics for a single server queue with
Poisson arrivals and negative exponential service times (a M/M/1
queueing system)
• explain the basics of discrete-event simulation
• perform a small discrete-event simulation and produce statistics
from that simulation (relating to queueing time; time in the system;
minimum and maximum queue length; average queue length).

Introduction
Think back to the last time you were in a queue. It was probably not that
long ago. People stand in queues all the time, for example in a shop or at
a bank. However people are not the only things that queue. Cars waiting
at traffic lights are also queueing. Ships waiting for a berth in a port are
also queueing. Packs of breakfast cereal in a shop are also queueing; that
is, they are waiting for something to happen, namely for a shopper to pick
them off the shelf and buy them. Machines in a factory are also queueing;
that is, they are waiting to break down.


In this chapter we discuss both queueing theory (sometimes called
waiting line theory) and simulation. Both of these deal with situations
which involve queues. The key difference between them is that queueing
theory approaches these problems through the use of analytic formulae,
while simulation approaches them through the use of computer models to
build a representation of a system.
Queueing theory and simulation are among the most used OR techniques.
This is because queues are such a common phenomenon. It is also
because simulation (which invariably involves a computer) is a powerful
technique that (via computer animation) can be made accessible to non-
mathematical managers.
The first queueing theory problem was considered by Erlang in 1908 who
looked at how large a telephone exchange needed to be in order to keep to
a reasonable value the number of telephone calls not connected because
the exchange was busy (lost calls). Within 10 years he had developed a
(complex) formula to solve the problem.

Queueing theory
Activity/Reading
For this section read Anderson, Chapter 11, sections start–11.3.

Queueing theory deals with problems which involve queueing (or
waiting). Typical examples might be:
• banks/supermarkets – waiting for service
• computers – waiting for a response
• failure situations – waiting for a failure to occur e.g. in a piece of
machinery
• public transport – waiting for a train or a bus.
As we know queues are a common everyday experience. Queues form
because resources are limited. In fact it makes economic sense to have
queues. For example how many supermarket tills would you need to avoid
queueing? How many buses or trains would be needed if queues were to
be avoided/eliminated?
In designing queueing systems we need to aim for a balance between
service to customers (short queues implying many servers) and economic
considerations (not too many servers).
Activity
List 10 situations not mentioned so far which involve queues. How much time do you
estimate that you spend each week in queues?

In essence all queueing systems can be broken down into individual sub-
systems consisting of entities queueing for some activity (as shown below).
Queue → Activity

Figure 11.1
Typically we can talk of this individual sub-system as dealing with
customers queueing for service. To analyse this sub-system we need
information relating to:

• arrival process
• service mechanism
• queue characteristics.
We deal with each of these below.

Arrival process
This deals with:
• how customers arrive, for example, singly or in groups (batch or bulk
arrivals)
• how the arrivals are distributed in time, e.g. what is the probability
distribution of time between successive arrivals (the inter-arrival
time distribution)
• whether there is a finite population of customers or (effectively) an
infinite number.
The simplest arrival process is one where we have completely regular
arrivals (i.e. the same constant time interval between successive arrivals).
A Poisson stream of arrivals corresponds to arrivals at random. In a Poisson
stream, successive customers arrive after intervals which independently
are exponentially distributed. The Poisson stream is important as it is a
convenient mathematical model of many real life queueing systems and is
described by a single parameter – the average arrival rate. Other important
arrival processes are scheduled arrivals; batch arrivals; and time dependent
arrival rates (i.e. the arrival rate varies according to the time of day).

Service mechanism
This deals with:
• a description of the resources needed for service to begin
• how long the service will take (the service time distribution)
• the number of servers available
• whether the servers are in series (each server has a separate queue) or
in parallel (one queue for all servers)
• whether pre-emption is allowed (a server can stop processing a
customer to deal with another ‘emergency’ customer).
The assumption that the service times for customers are independent and
do not depend upon the arrival process is common. Another common
assumption about service times is that they are exponentially distributed.

Queue characteristics
This deals with:
• how, from the set of customers waiting for service, do we choose the
one to be served next (e.g. FIFO (first-in first-out) – also known as
FCFS (first-come first-served); LIFO (last-in first-out); randomly). This
is often called the queue discipline.
• whether we have:
balking (customers deciding not to join the queue if it is too long)
reneging (customers leaving the queue if they have waited too
long for service)
jockeying (customers switching between queues if they think they
will get served faster by so doing)


• a queue of finite capacity or (effectively) of infinite capacity.


Changing the queue discipline (the rule by which we select the next
customer to be served) can often reduce congestion. Often the queue
discipline ‘choose the customer with the lowest service time’ results in the
smallest value for the time (on average) a customer spends queueing.
Note here that integral to queueing situations is the idea of uncertainty
in, for example, inter-arrival times and service times. This means that
probability and statistics are needed to analyse queueing situations.
In terms of the analysis of queueing situations, the types of questions in
which we are interested are typically concerned with measures of system
performance and might include:
• How long does a customer expect to wait in the queue before they are
served?
• How long will they have to wait before the service is complete?
• What is the probability of a customer having to wait longer than a
given time interval before they are served?
• What is the average length of the queue?
• What is the probability that the queue will exceed a certain length?
• What is the expected utilisation of the server and the expected time
period during which they will be fully occupied (remember servers
cost us money so we need to keep them busy)? In fact if we can assign
costs to factors such as customer waiting time and server idle time
then we can investigate how to design a system at minimum total cost.
The above are questions that need to be answered so that management
can evaluate alternatives in an attempt to control/improve the situation.
Some of the problems that are often investigated in practice are:
• Is it worthwhile investing effort in reducing the service time?
• How many servers should be employed?
• Should priorities for certain types of customers be introduced?
• Is the waiting area for customers adequate?
In order to get answers to the above questions there are two basic
approaches:
• analytic methods or queueing theory (formula based)
• simulation (computer based).
The reason for there being two approaches (instead of just one) is that
analytic methods are only available for relatively simple queueing systems.
Complex queueing systems are almost always analysed using simulation
(more technically known as discrete-event simulation).
Essentially, the simple queueing systems that can be tackled via queueing
theory:
• consist of just a single queue; linked systems where customers pass
from one queue to another cannot be tackled via queueing theory
• have distributions for the arrival and service processes that are well
defined (e.g. standard statistical distributions such as Poisson or
Normal); systems where these distributions are derived from observed
data, or are time dependent, are difficult to analyse via queueing
theory.


Queueing notation
It is common to use the following symbols:
λ to be the mean (or average) number of arrivals per time period (i.e.
the mean arrival rate)
µ to be the mean (or average) number of customers served per time
period (i.e. the mean service rate).
Kendall (a British mathematician) formulated a standard notation system
to classify queueing systems as P1/P2/P3/P4/P5, where these five
parameters are:
P1 (first parameter) – the probability distribution for the arrival process
P2 (second parameter) – the probability distribution for the service
process
P3 (third parameter) – the number of channels (servers)
P4 (fourth parameter) – the maximum number of customers allowed
in the queueing system (either being served or waiting for service)
P5 (fifth parameter) – the size of the population from which
customers are drawn (the calling population).
If the last two parameters are omitted then the implication is that they are
infinity (i.e. there is no limit on the number of customers).
Common options for P1 and P2 are:
M for a Poisson arrival distribution (exponential inter-arrival
distribution) or an exponential service time distribution
D for a deterministic or constant value
G for a general distribution (but with a known mean and variance).
For example the M/M/1 queueing system, the simplest queueing system,
has a Poisson arrival distribution, an exponential service time distribution
and a single channel (one server) with no limitations on the maximum
number of customers (either in the system or in total).
Note here that in using this notation it is always assumed that there is
just a single queue (waiting line) and customers move from this single
queue to the servers.

Simple M/M/1 example


Suppose we have a single server in a shop and customers arrive in the
shop with a Poisson arrival distribution at a mean rate of λ = 0.5 customers
per minute (i.e. on average one customer appears every 1/λ = 1/0.5 = two
minutes). This implies that the inter-arrival times have an exponential
distribution with an average inter-arrival time of two minutes. The server
has an exponential service time distribution with a mean service rate of
four customers per minute (i.e. the service rate is µ = 4 customers per
minute). As we have a Poisson arrival rate/exponential service time/single
server we have a M/M/1 queue in terms of the standard notation.
For this very simple M/M/1 queueing system there are exact formulae that
give various statistics relating to the system under the assumption that
the system has reached a steady state – that is, that the system has
been running long enough so as to settle down into some kind
of equilibrium position.
Naturally real-life systems hardly ever reach a steady state. Simply put, life
is not like that. However, despite this, queueing formulae can give us some
insight into how a system might behave.
MN3032 Management science methods

You can see a number of formulae in Anderson. For the purposes of this
chapter we shall expect you to know the following:
• the average number of units in the queue (i.e. the mean/average queue length, also known as the mean/average queue size) = λ²/[µ(µ - λ)]
• the probability of having to wait for service = λ/µ
• the average time in the system = 1/(µ - λ)
• the probability that there are n units (n ≥ 0) in the system = (λ/µ)ⁿ(1 - λ/µ).
Here the word ‘system’ refers to both queueing and to being served.
Hence for our particular example with λ = 0.5 and µ = 4 these are:
• the average number of units in the queue = 0.01786
• the probability of having to wait for service = 0.125
• the average time in the system = 0.2857
• the probability that there are n units (n ≥ 0) in the system is 0.875 for
n = 0 and 0.109 for n = 1.
Here, for example, a customer will on average spend 0.2857 minutes in
the system (queueing and being served).
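As a check on the formulae, the example's statistics can be computed directly. The sketch below (in Python; not part of the spreadsheets accompanying this chapter) evaluates the four expressions:

```python
# M/M/1 steady-state statistics for the example: lambda = 0.5, mu = 4
lam, mu = 0.5, 4.0

rho = lam / mu                       # traffic intensity (arrival rate / departure rate)
Lq = lam**2 / (mu * (mu - lam))      # average number of units in the queue
p_wait = lam / mu                    # probability of having to wait for service
W = 1.0 / (mu - lam)                 # average time in the system

def p_n(n):
    """Probability that there are n units in the system."""
    return (lam / mu) ** n * (1 - lam / mu)

print(round(Lq, 5), p_wait, round(W, 4))   # 0.01786 0.125 0.2857
print(p_n(0), round(p_n(1), 3))            # 0.875 0.109
```

The printed figures agree with the values listed above.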
Sheet A in the spreadsheet accompanying this chapter calculates these
values and is shown below:

Spreadsheet 11.1
One factor that is of note is traffic intensity = (arrival rate)/(departure
rate) [= λ/µ for one server] where arrival rate = number of arrivals per
unit time and departure rate = number of departures per unit time. Traffic
intensity is a measure of the congestion of the system. If it is near to zero
there is very little queueing and in general as the traffic intensity increases
(to near 1 or even greater than 1) the amount of queueing increases.
For the system we have considered above the arrival rate is 0.5 and the
departure rate is 4 so the traffic intensity is 0.5/4 = 0.125.

Faster servers or more servers?


Consider the situation we had above – which would you prefer:
• one server working twice as fast
• two servers each working at the original rate?
The simple answer is that we can analyse this. For simplicity we shall use
Excel Sheet A to do this here.

Chapter 11: Queueing theory and simulation

For the first situation one server working twice as fast corresponds to a
service rate µ = 8 customers per minute, so we have:

Spreadsheet 11.2
For two servers working at the original rate the situation is a M/M/2
queueing system. Here we shall not expect you to learn any formula,
rather we have encoded the standard formulae given in Anderson into
Sheet B of the spreadsheet associated with this chapter, as below:

Spreadsheet 11.3
Note too that this calculation assumes that these two servers are fed from
a single queue (rather than each having their own individual queue).
Compare the two outputs above – which option do you prefer?
Of the figures in the outputs above some are identical. Extracting key
figures which are different we have:

                                            One server       Two servers,
                                            twice as fast    original rate
Average time in the system                  0.13333          0.25098
Probability of having to wait for service   0.06250          0.00735

It can be seen that with one server working twice as fast, customers spend
less time in the system on average, but have a higher probability of having
to wait for service.
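Sheet B encodes the formulae from Anderson; the same two rows can be reproduced with the standard M/M/c (Erlang-C) expressions. The sketch below assumes those textbook formulae (the function name is illustrative, not part of the spreadsheet):

```python
from math import factorial

def mmc_stats(lam, mu, c):
    """Steady-state P(wait) and average time in the system for an M/M/c queue."""
    a = lam / mu                              # offered load
    rho = a / c                               # utilisation per server
    # probability the system is empty
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    p_wait = (a**c / (factorial(c) * (1 - rho))) * p0   # Erlang-C formula
    Wq = p_wait / (c * mu - lam)              # average wait in the queue
    return p_wait, Wq + 1.0 / mu              # (P(wait), time in the system)

print(mmc_stats(0.5, 8.0, 1))   # one server, twice as fast
print(mmc_stats(0.5, 4.0, 2))   # two servers at the original rate
```

For c = 1 this collapses to the M/M/1 results used earlier.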


Activity
Which do you prefer – one server working twice as fast or two servers working at the
original rate – and why?

Simulation
Activity/Reading
For this section read Anderson, Chapter 12: the start of the chapter and sections 12.3–12.4.

Whilst queueing theory can be used to analyse simple queueing systems, more complex queueing systems are typically analysed using simulation (more accurately called discrete-event simulation). Precisely what discrete-event simulation is will become clear below.
In operations research we typically deal with discrete-event simulation.
Note here however that the word simulation has a wider meaning. For
example, you may have heard of aircraft simulators which reproduce
the behaviour of an aircraft in flight, but in reality one never leaves the
ground. These are used for training purposes. You may have taken part in
a business game simulation in which you were in charge of an imaginary
company and the results for your company (sales made, profit, etc.) were
generated by a computer in some manner. These are often called business
simulations.
Simulation began to be applied to management situations in the late
1950s to look at problems relating to queueing and stock control. Monte-
Carlo simulation was used to model the activities of facilities such
as warehouses and oil depots. Queueing problems (e.g. supermarket
checkouts) were also modelled using Monte-Carlo simulation. The phrase
‘Monte Carlo’ derives from the well-known gambling city of Monte Carlo
in Monaco. Just as in roulette we get random numbers produced by a
roulette wheel when it is spun, so in Monte Carlo simulation we make use
of random numbers generated by a computer.
The advantages of using simulation, as opposed to queueing theory, are:
• it can more easily deal with time-dependent behaviour
• the mathematics of queueing theory is hard and only valid for certain
statistical distributions – whereas the mathematics of simulation is
easy and can cope with any statistical distribution
• in some situations it is virtually impossible to build the equations that
queueing theory demands (e.g. for features like queue switching,
queue dependent work rates)
• simulation is much easier for managers to grasp and understand than
queueing theory.

An example simulation
To illustrate discrete-event simulation let us take the very simple system
below, with just a single queue and a single server.
Figure 11.2: a single queue feeding a single server


Suppose that customers arrive with inter-arrival times that are uniformly
distributed between one and three minutes, i.e. all arrival times between
one and three minutes are equally likely. Suppose too that service times
are uniformly distributed between 0.5 and two minutes (i.e. any service
time between 0.5 and two minutes is equally likely). We will illustrate how
this system can be analysed using simulation.
Conceptually we have two separate, and independent, statistical
distributions, namely:
• arrival
• service.
Hence we can think of constructing two long lists of numbers – the first list
being inter-arrival times sampled from the uniform distribution between
one and three minutes, the second list being service times sampled from
the uniform distribution between 0.5 and two minutes. By sampled we
mean that we (or a computer) look at the specified distribution and
randomly choose a number (inter-arrival time or service time) from this
specified distribution. For example in Excel using = 1 + (3-1)*RAND()
would randomly generate inter-arrival times and = 0.5 + (2-0.5)*RAND()
would randomly generate service times.
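Equivalently, in a general-purpose language the two lists can be sampled with a uniform random number generator. A sketch in Python (the function names are illustrative):

```python
import random

def sample_interarrival():
    """Inter-arrival time, uniform on [1, 3] minutes (like =1+(3-1)*RAND())."""
    return random.uniform(1.0, 3.0)

def sample_service():
    """Service time, uniform on [0.5, 2] minutes (like =0.5+(2-0.5)*RAND())."""
    return random.uniform(0.5, 2.0)

# two lists of sampled times, rounded to one decimal place
# to ease the processing, as in the text
interarrivals = [round(sample_interarrival(), 1) for _ in range(4)]
services = [round(sample_service(), 1) for _ in range(4)]
```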
Suppose our two lists are:
Inter-arrival times (mins) Service times (mins)
1.9 1.7
1.3 1.8
1.1 1.5
1.0 0.9
Etc. Etc.
where to ease the processing we have chosen to work to one decimal
place.
Suppose now we consider our system at time zero (T = 0), with no
customers in the system. Take the lists above and ask yourself the
question: What will happen next?
The answer is that after 1.9 minutes have elapsed a customer will appear.
The queue is empty and the server is idle so this customer can proceed
directly to being served. What will happen next?
The answer is that after a further 1.3 minutes have elapsed (i.e. at T =
1.9 + 1.3 = 3.2) the next customer will appear. This customer will join the
queue (since the server is busy). What will happen next?
The answer is that at time T = 1.9 + 1.7 = 3.6 the customer currently being
served will finish and leave the system. At that time we have a customer in
the queue and so they can start their service (which will take 1.8 minutes
and hence end at T = 3.6 + 1.8 = 5.4). What will happen next?
The answer is that 1.1 minutes after the previous customer arrival (i.e. at
T = 3.2 + 1.1 = 4.3) the next customer will appear. This customer will join
the queue (since the server is busy). What will happen next?
The answer is that after a further 1.0 minutes have elapsed (i.e. at T = 4.3
+ 1.0 = 5.3) the next customer will appear. This customer will join the
queue (since there is already someone in the queue), so now the queue
contains two customers waiting for service. What will happen next?


The answer is that at T = 5.4 the customer currently being served will
finish and leave the system. At that time we have two customers in the
queue and assuming a FIFO queue discipline the first customer in the
queue can start their service (which will take 1.5 minutes and hence end
at T = 5.4 + 1.5 = 6.9). What will happen next?
The answer is that... etc and we could continue in this fashion if we so
wished (and had the time and energy)! Plainly the above process is best
done by a computer.
To summarise what we have done we can construct the list below:
Time T What happened
1.9 Customer appears, starts service scheduled to end at T = 3.6
3.2 Customer appears, joins queue
3.6 Service ends
Customer at head of queue starts service, scheduled to end at T = 5.4
4.3 Customer appears, joins queue
5.3 Customer appears, joins queue
5.4 Service ends
Customer at head of queue starts service, scheduled to end at T = 6.9
Etc. Etc.

You can hopefully see from the above how we are simulating
(artificially reproducing) the operation of our queueing system.
Simulation, as illustrated above, is more accurately called discrete-
event simulation since we are looking at discrete events through time
(customers appearing, service ending). Here we were only concerned with
the discrete points T = 1.9, 3.2, 3.6, 4.3, 5.3, 5.4, etc.
Once we have done a simulation such as that shown above then we can
easily calculate statistics about the system – for example, the average time
a customer spends queueing and being served (the average time in the
system). Here two customers have gone through the entire system – the
first appeared at time 1.9 and left the system at time 3.6 and so spent 1.7
minutes in the system. The second customer appeared at time 3.2 and left
the system at time 5.4 and so spent 2.2 minutes in the system. Hence the
average time in the system is (1.7 + 2.2)/2 = 1.95 minutes.
We can also calculate statistics on queue waiting – for example what is the
average queue waiting time? Here the first customer does not queue at all
and the second customer queues from 3.2 to 3.6. Within our time frame
from zero to 5.4 (when we finished the simulation above) these
are the only two customers to completely go through the system and
hence, based just on these two customers, the average queueing time is
[0 + (3.6 – 3.2)]/2 = 0.2 minutes.
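The event-by-event reasoning above can be automated in a few lines. The sketch below (in Python; not part of the spreadsheets accompanying this chapter) replays the same sampled times and reproduces the statistics just calculated; variable names are illustrative:

```python
# Replaying the worked example: single server, FIFO queue, fixed samples.
inter_arrival = [1.9, 1.3, 1.1, 1.0]   # sampled inter-arrival times (mins)
service = [1.7, 1.8, 1.5, 0.9]         # sampled service times (mins)

arrive, start, end = [], [], []
t = 0.0          # simulation clock
free_at = 0.0    # time at which the server next becomes free
for ia, s in zip(inter_arrival, service):
    t += ia                     # this customer appears
    begin = max(t, free_at)     # queue if the server is busy
    arrive.append(t)
    start.append(begin)
    end.append(begin + s)
    free_at = begin + s

in_system = [e - a for a, e in zip(arrive, end)]   # time in the system
queueing = [b - a for a, b in zip(arrive, start)]  # queue waiting time

print([round(a, 1) for a in arrive])       # [1.9, 3.2, 4.3, 5.3]
print([round(e, 1) for e in end])          # [3.6, 5.4, 6.9, 7.8]
# statistics over the first two completed customers, as in the text
print(round(sum(in_system[:2]) / 2, 2))    # 1.95
print(round(sum(queueing[:2]) / 2, 1))     # 0.2
```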

Activity
For the following values perform your own simulation and produce statistics relating to
average time in the system and average queueing time.


Inter-arrival times   Service times
1.5                   1.6
1.2                   1.3
0.9                   1.6
1.0                   0.7

Now look at Sheet C in the spreadsheet associated with this chapter. You
will see:

Spreadsheet 11.4
Cells A4 to A7 contain the same inter-arrival times, and cells B4 to B7
contain the same service times, as considered in the example given
previously above. Here cells B1 and C1 specify the time period over which
statistics with regard to queueing time and time in the system should be
calculated. These can be seen in columns I and J. Columns G and H give
the same statistics but for all the customers that are seen.
Note here, however, how the above calculations (both for average time
in the system and average queueing time) took into account the system
when we first started – when it was completely empty. This is probably
biasing (rendering inaccurate) the statistics we are calculating and so it
is common in simulation to allow some time to elapse (so the system ‘fills
up’) before starting to collect information for use in calculating summary
statistics. This is the purpose of cells B1 and C1.
In order to illustrate this look at Sheet D in the spreadsheet associated
with this chapter. You will see:

Spreadsheet 11.5
Here we have 20 customers with statistics collected over the time period 5
to 30.


Activity
For the inter-arrival times and service times shown in Sheet C above, what would be the
average queueing time (averaged over all four customers) if we had two servers?

Average queue length


While the concept of average queue length (also known as average
queue size) seems an intuitive concept, its calculation can at first sight be
difficult. Here let us consider the four customer simulation from time zero
to time 7.8 as shown in Sheet C above. Within this time frame, when do
we see customers queueing?
• Customer 1 starts his service as soon as he arrives (at time 1.9) and in
such cases it is conventional to say the customer ‘never queued’.
• Customer 2 queues between when they arrive (at time 3.2) and when
they start their service (time 3.6).
• Customer 3 queues between 4.3 and 5.4.
• Customer 4 queues between 5.3 and 6.9.
Putting this information together we can say that:
1. Between time zero and time 3.2 the queue is empty (it has no one in it).
2. Between 3.2 and 3.6 there is one customer in the queue (customer 2).
3. Between 3.6 and 4.3 the queue is empty.
4. Between 4.3 and 5.3 there is one customer in the queue (customer 3).
5. Between 5.3 and 5.4 there are two customers in the queue (customers
3 and 4).
6. Between 5.4 and 6.9 there is one customer in the queue (customer 4).
7. Between 6.9 and 7.8 (the end of our time frame) the queue is empty.
So we can immediately see here that the maximum queue length is 2, and
the minimum queue length is zero.
To calculate the average queue length we use a time-weighted average (as the queue length varies over time). This is defined as the sum, over the distinct periods, of the number of customers in the queue multiplied by the length of the period, all divided by the total length of the time frame. Here
we have seven distinct periods in our time frame from zero to 7.8 and so
the average queue length will be:
[0(3.2) + 1(3.6 - 3.2) + 0(4.3 - 3.6) + 1(5.3 - 4.3) + 2(5.4 - 5.3) +
1(6.9 - 5.4) + 0(7.8 - 6.9)]/7.8 = 0.397.
So, on average, there will be 0.397 customers in the queue.
Above we have deliberately set out the average queue length calculation
in a way so as to give insight into the underlying concept. In fact, we can
calculate it directly in a much quicker fashion. Observe that each customer
queueing makes an equal contribution to the average queue length
calculation. Hence a quick way to calculate the average queue length is
to add up the queueing times of each customer, and divide by the total
length of the time frame. Here this is, from Sheet C above, [0 + 0.4 + 1.1 +
1.6]/7.8 = 3.1/7.8 = 0.397, as before.
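Both calculations can be checked directly with the figures above (a sketch; the variable names are illustrative):

```python
# Average queue length over the time frame 0 to 7.8 minutes,
# by both of the calculations described above.
T = 7.8

# method 1: time-weighted average over the seven distinct periods,
# given as (queue length, period length) pairs
periods = [(0, 3.2), (1, 0.4), (0, 0.7), (1, 1.0), (2, 0.1), (1, 1.5), (0, 0.9)]
time_weighted = sum(n * length for n, length in periods) / T

# method 2: sum of each customer's queueing time, divided by the time frame
queueing_times = [0.0, 0.4, 1.1, 1.6]
quick = sum(queueing_times) / T

print(round(time_weighted, 3), round(quick, 3))   # 0.397 0.397
```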

Discussion
In simulation, statistical (and probability) theory plays a part both in
relation to the input data and in relation to the results that the simulation
produces. For example, in a simulation of the flow of people through


supermarket checkouts input data like the amount of shopping people


have collected, is represented by a statistical (probability) distribution,
and results relating to factors such as customer waiting times, queue
lengths, etc. are also represented by probability distributions. In our simple
example above we also made use of a statistical distribution – the uniform
distribution.
There are a number of problems relating to simulation models:
• Typically the simulation model has to be run on the computer for
a considerable time in order for the results to be statistically
significant – hence simulations can be expensive (take a long time)
in terms of computer time.
• Results from simulation models tend to be highly correlated
meaning that estimates derived from such models can be misleading.
Correlation is a statistical term meaning that two (or more) variables
are related to each other in some way. Often variance reduction
techniques can be used to improve the accuracy of estimates derived
from simulation.
• In the event that we are modelling an existing system, it can be
difficult to validate the model (computer program) to ensure that it is
a correct representation of the existing system.
• If the simulation model is very complex then it is difficult to isolate
and understand what is going on in the model and deduce cause and
effect relationships.
Another disadvantage of simulation is that it is difficult to find optimal
solutions, unlike linear programming, for example, where we have an
algorithm which will automatically find an optimal solution. The only way
to attempt to optimise using simulation is to:
• make a change
• run the simulation computer program to see if an improvement has
been achieved or not
• repeat the process.
Large amounts of computer time can be consumed by this process.
Once we have a simulation model we can use it to:
• Understand the current system, typically to explain why it is behaving
as it is. For example if we are experiencing long delays in production
in a factory then why is that – what factors are contributing to these
delays?
• Explore extensions (changes) to the current system, typically to try
and improve it. For example, if we are trying to increase the output
from a factory we could:
add more machines
speed up existing machines
reduce machine idle time by better maintenance.
• Which of these options would be best? Simulation can give us insights
into these questions.
• Design a new system from scratch, typically to try and design a system
that satisfies certain (often statistical) requirements at minimum cost.
For example, in the design of an airport passenger terminal what
resource levels (customs, seats, baggage facilities, etc) do we need and
how should they be sited in relation to one another?


Activity
Consider any system with which you are familiar (e.g. a bank, a shop, a train/metro
station) and list a number of factors which you think might help to increase system
output (e.g. as measured by the number of customers that the system can deal with per
hour). Which of these factors (or combination of factors) would be the best choice to
increase output? Note here that any change might reduce congestion at one point only
to increase it at another point so we have to bear this in mind when investigating any
proposed changes.

Special purpose computer languages have been developed to help in writing simulation programs (e.g. SIMSCRIPT), together with pictorially based languages (using entity-cycle diagrams) and interactive program generators. A comparatively recent development in simulation is packages which run with animation – essentially one sees on screen a representation of the system being simulated, with objects moving around in the course of the simulation.

Activity
Think of a situation you know of which involves queueing. If you were to build a
simulation model of this situation, what questions might you have to which you
would like answers? What changes to the current situation would you be interested in
exploring?

Links to other chapters


The topics considered in this chapter do not directly link to other
chapters in this subject guide. More generally the topics considered here
link to other topics where probability (stochastic elements) play a role in
decision making, so this chapter links to Chapters 4 and 6.

Case studies
The case studies associated with this chapter are given below. We would
encourage you to read them.
Title                                                     Anderson (page number)
ATM waiting times at Citibank                             452
Ensuring phone access to emergency services               459
Improving productivity at the New Haven fire department   481
Call centre design                                        491
Meeting demand levels at Pfizer                           504
Petroleum distribution in the Gulf of Mexico              509
Preboard screening at Vancouver International Airport     520
Mount Isa mines                                           www.cmis.csiro.au/or/Clients/mim.htm
Prison management                                         www.cmis.csiro.au/OR/clients/mrrc.htm
Roadside services                                         www.cmis.csiro.au/or/Clients/racv.htm


A reminder of your learning outcomes


Having completed this chapter, and the Essential reading and activities,
you should be able to:
• list and discuss the characteristics of queueing systems
• calculate various steady-state statistics for a single server queue with
Poisson arrivals and negative exponential service times (a M/M/1
queueing system)
• explain the basics of discrete-event simulation
• perform a small discrete-event simulation and produce statistics
from that simulation (relating to queueing time; time in the system;
minimum and maximum queue length; average queue length).

Sample examination questions


For Sample examination questions relating to the material presented in
this chapter, please visit the VLE.

Appendix 1: Sample examination paper

Important note: This Sample examination paper reflects the


examination and assessment arrangements for this course in the
academic year 2014–15. The format and structure of the examination
may have changed since the publication of this subject guide. You can
find the most recent examination papers on the VLE where all changes
to the format of the examination are posted.

Time allowed: three hours.


Candidates should answer FOUR of the following EIGHT questions. All
questions carry equal marks.
A calculator may be used when answering questions on this paper and
it must comply in all respects with the specification given with your
Admission Notice. The make and type of machine must be clearly stated
on the front cover of the answer book.


1. Describe what you understand by the following in relation to


methodological issues that arise in Operational Research/Management
Science:
a. The benefits of an OR approach to decision problems (12 marks)
b. Cost versus decision quality (13 marks)
2. a. Explain in detail what the terms ‘root definition’ and ‘CATWOE’
mean. What connection (if any) is there between them?
(8 marks)
b. Explain what Soft Systems Methodology (SSM) involves.
(9 marks)
c. Apply SSM to one problem with which you are familiar.
(8 marks)
3. Briefly explain each of the following topics from the viewpoint of
Operational Research/Management Science:
a. AHP (7 marks)
b. MRP (9 marks)
c. SODA (9 marks)
4. A company Xpc selling tablet PCs to consumers is aware that its profits
are related to how often customers change their PC. When a customer
buys a new tablet PC the issue for company Xpc is whether they stay
with the company or move to one of its two main competitors, Ytab
and Zbest.
Customers buy a new tablet PC every nine months and past estimates
of the probability of a customer making a transition between
companies when they come to buy a new tablet PC are:
                        To company
                        Xpc     Ytab    Zbest
From company    Xpc     0.57    0.04    0.39
                Ytab    0.30    0.64    0.06
                Zbest   0.72    0.20    0.08
For example the probability that a customer switches from company
Xpc to company Zbest is 0.39.
a. What is the long‑run prediction for the market shares for each of
the three companies? (13 marks)
b. An advertising firm has approached Zbest and suggested that if
they engage their services they can conduct a marketing campaign
that will permanently change the transition matrix seen above to
the one seen below:

                        To company
                        Xpc     Ytab    Zbest
From company    Xpc     0.61    0.04    0.35
                Ytab    0.26    0.07    0.67
                Zbest   0.51    0.17    0.32
Each 1% increase in the long-run market share for any company
is estimated to be worth £50,000. On this basis what would Zbest
gain by engaging the advertising firm? If Zbest do engage the
advertising firm what will be the effect on the long-run market
share for Xpc and Ytab? (12 marks)


5. A company sells two products (X and Y) and has available three


suppliers (A, B and C) from which it buys these products for later
resale. The suppliers charge a different price per unit supplied, and
have differing availability, as in the table below:
             Price per unit (£)         Availability (units)
             A      B      C            A        B        C
Product X    4.1    4.2    3.9          1,200    1,380    3,000
Product Y    4.5    4.8    5.2          1,345    1,500    2,000
Product X, for example, can be purchased from Supplier A for £4.1 per
unit; from Supplier B for £4.2 per unit and from Supplier C for £3.9
per unit. These suppliers can supply up to 1,200, 1,380 and 3,000
units respectively.
The company has forecast likely demand and believes that they need
at least 2,000 units of product X and 2,400 units of product Y.
The company has a number of goals. They would like:
• Supplier B to get an order for 1,000 units of product X
• their expenditure with Supplier C to equal £8,000
• their total expenditure to equal £20,000.
a. Formulate this problem as a weighted goal program with linear
constraints. Note here that you are not expected to simplify any
linear equations that you give. (17 marks)
b. If the company has two priorities:
• priority level 1: minimise the upward deviation from the
Supplier B goal
• priority level 2: minimise the downward deviation from the
Supplier C goal
then formulate this problem as a sequential goal program with
linear constraints. Note here that you are not expected to simplify
any linear equations that you give. (5 marks)
c. Discuss how any issues related to subjective weights might be
resolved. (3 marks)
6. An international organisation that provides global outsourcing
services has a number of different call centres (A to G) that provide
telephone (voice) and internet (email) support to customers of other
organisations. Voice calls/emails originating from a customer are
allocated to their geographically nearest call centre. The data shown
below as to the operation of these call centres over a one-week period
have been collected.
Call centre   Voice calls dealt with ('000)   Emails dealt with ('000)   Number of employees
A             48.6                            14.3                       230
B             42.5                            11.9                       350
C             39.3                            12.8                       240
D             27.1                            17.4                       210
E             25.3                            14.6                       150
F             48.2                            18.0                       120
G             12.5                            17.2                        80
During the week for which data were collected call centre E, for
example, dealt with 25,300 voice calls, 14,600 emails and had 150
employees.


a. Apply data envelopment analysis to compare the relative


performance of these call centres. Which call centres are efficient?
For the inefficient call centres find their efficiencies and reference
sets. (15 marks)
b. It is proposed to close the call centre that your analysis has shown
is the worst performer. Would you support this course of action?
Give your reasons. (4 marks)
c. A colleague has raised the issue that employees in call centres A to
F are paid $50 per week, while those in call centre G are paid £150
per week. The current exchange rate is 1 £ = 1.60 $. With this
new information analyse the relative performance of these calls
centres. (6 marks)
7. a. A news vendor buys a monthly magazine from a publisher for a
unit price of £6.60 and sells it to members of the public for £8.50.
Any magazines left over at the end of the month are returned to
the publisher and for each magazine returned the news vendor
receives £1.10. Past experience indicates that the demand for
this magazine from members of the public is expected to have
a Normal distribution with mean 250 copies and variance 40.
By making use of the table associated with the standard Normal
distribution (with mean zero and variance one) presented at
the end of this examination paper what would be your advice
as to how many copies the news vendor should order from the
publisher? (4 marks)
b. A company purchases a particular component from an external
supplier. The demand for the component is estimated to be 420
units per month. The cost of placing an order with the supplier is
£50. The component can be purchased from the supplier at a price
per unit of £110 if so desired. However the supplier also offers a
quantity discount such that if the company orders 200 or more the
price per unit is only £95. The cost of holding this component (per
year) is estimated to be 12% of the price. What advice can you
offer the company? Clearly explain any procedure you follow in
arriving at your advice. (14 marks)
c. A machine costs £30,000 to be set up. The machine has the ability
to produce 85,000 finished items of a particular product per
month. Part finished items are passed to this machine so they can
become finished items ready for sale. Part finished items have been
through previous parts of the production process and have already
cost the company £7.50 each. Finishing one item on the machine is
estimated to cost £0.35 in labour and £1.75 in materials. Demand
for the finished items produced by this machine is estimated to be
500,000 items per year. The current interest rate is 0.5% per year.
A colleague has argued that each time you operate the machine
you should produce enough to satisfy demand for two years.
Would you agree? Give your reasons. (7 marks)

214
Appendix 1: Sample examination paper

8. A company is planning a small project and the following table gives


the various activities in that project, as well as their associated
completion times.
Activity Completion time (days)
A 3
B 3
C 2
D 4
E 1
F 4
Here, for example, activity B takes 3 days to be completed.
The immediate precedence relationships are:
Activity Activity
B must be finished before A can start
A must be finished before E can start
D must be finished before F can start
C must be finished before F can start
In addition, two other conditions (X1 and X2) must apply:
Condition X1: 4 days must elapse between the end of activity D
and the start of activity C
Condition X2: 3 days must elapse between the end of activity B
and the start of activity C
a. Draw the network diagram and calculate the overall project
completion time. State the critical path(s). (9 marks)
b. Copy the following table into your answer book and fill in the
latest start times.

Activity Latest start time (days)


A
B
C
D
E
F
(4 marks)
c. Suppose now that:
• Condition X1 can change to: 2 days must elapse between the
end of activity D and the start of activity C; changing this
condition will cost £400
• Condition X2 can change to: 2 days must elapse between
the end of activity B and the start of activity C; changing this
condition will cost £100
How much should the company expect to spend to achieve the
minimum possible project completion time?
For your minimum possible project completion time copy the
following table into your answer book and fill in the latest start
times.


Activity Latest start time (days)


A
B
C
D
E
F
State the critical path(s) associated with this minimum possible
project completion time. (12 marks)


Table for the standard Normal distribution


The value tabulated below shows the area, between -∞ and +z, under the
curve associated with the standard Normal distribution with mean zero
and variance one.
For example the area under this curve between -∞ and +0.32 is 0.626
z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.500 0.504 0.508 0.512 0.516 0.520 0.524 0.528 0.532 0.536
0.1 0.540 0.544 0.548 0.552 0.556 0.560 0.564 0.567 0.571 0.575
0.2 0.579 0.583 0.587 0.591 0.595 0.599 0.603 0.606 0.610 0.614
0.3 0.618 0.622 0.626 0.629 0.633 0.637 0.641 0.644 0.648 0.652
0.4 0.655 0.659 0.663 0.666 0.670 0.674 0.677 0.681 0.684 0.688
0.5 0.691 0.695 0.698 0.702 0.705 0.709 0.712 0.716 0.719 0.722
0.6 0.726 0.729 0.732 0.736 0.739 0.742 0.745 0.749 0.752 0.755
0.7 0.758 0.761 0.764 0.767 0.770 0.773 0.776 0.779 0.782 0.785
0.8 0.788 0.791 0.794 0.797 0.800 0.802 0.805 0.808 0.811 0.813
0.9 0.816 0.819 0.821 0.824 0.826 0.829 0.831 0.834 0.836 0.839
1.0 0.841 0.844 0.846 0.848 0.851 0.853 0.855 0.858 0.860 0.862
1.1 0.864 0.867 0.869 0.871 0.873 0.875 0.877 0.879 0.881 0.883
1.2 0.885 0.887 0.889 0.891 0.893 0.894 0.896 0.898 0.900 0.901
1.3 0.903 0.905 0.907 0.908 0.910 0.911 0.913 0.915 0.916 0.918
1.4 0.919 0.921 0.922 0.924 0.925 0.926 0.928 0.929 0.931 0.932
1.5 0.933 0.934 0.936 0.937 0.938 0.939 0.941 0.942 0.943 0.944
1.6 0.945 0.946 0.947 0.948 0.949 0.951 0.952 0.953 0.954 0.954
1.7 0.955 0.956 0.957 0.958 0.959 0.960 0.961 0.962 0.962 0.963
1.8 0.964 0.965 0.966 0.966 0.967 0.968 0.969 0.969 0.970 0.971
1.9 0.971 0.972 0.973 0.973 0.974 0.974 0.975 0.976 0.976 0.977
2.0 0.977 0.978 0.978 0.979 0.979 0.980 0.980 0.981 0.981 0.982
2.1 0.982 0.983 0.983 0.983 0.984 0.984 0.985 0.985 0.985 0.986
2.2 0.986 0.986 0.987 0.987 0.987 0.988 0.988 0.988 0.989 0.989
2.3 0.989 0.990 0.990 0.990 0.990 0.991 0.991 0.991 0.991 0.992
2.4 0.992 0.992 0.992 0.992 0.993 0.993 0.993 0.993 0.993 0.994
2.5 0.994 0.994 0.994 0.994 0.994 0.995 0.995 0.995 0.995 0.995
2.6 0.995 0.995 0.996 0.996 0.996 0.996 0.996 0.996 0.996 0.996
2.7 0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997
2.8 0.997 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998
2.9 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.999 0.999 0.999
3.0 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999
3.1 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999
3.2 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999
3.3 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
3.4 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
3.5 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000


Appendix 2: Sample Examiners’ commentary

Important note

This commentary reflects the examination and assessment arrangements
for this course in the academic year 2014–15. The format and structure
of the examination may change in future years, and any such changes
will be publicised on the virtual learning environment (VLE).
Information about the subject guide and the Essential
reading references
Unless otherwise stated, all cross-references will be to the latest
version of the subject guide (2014). You should always attempt to use
the most recent edition of any Essential reading textbook, even if the
commentary and/or online reading list and/or subject guide refers to an
earlier edition. If different editions of Essential reading are listed, please
check the VLE for reading supplements – if none are available, please
use the contents list and index of the new edition to find the relevant
section.

Comments on specific questions


Question 1
Reading for this question
This question relates to Chapter 1 of the subject guide.
Approaching the question
For the benefits of an OR approach, this relates to pp.25–26 of the
subject guide; the Examiners would be looking for mention of the points
there plus a discussion around them:
• OR is particularly well-suited for routine tactical decision making
where data are typically well defined and decisions for the same
problem (e.g. how much stock to order from a supplier) must be made
repeatedly over time
• we might reasonably expect an ‘analytical’ (structured, logical)
approach to decision making to be better (on average) than simply
relying on a person’s innate decision-making ability
• sensitivity analysis can be performed in a systematic fashion
• constructing an OR model structures thought about what is/is not
important in a problem.
For cost versus decision quality, this relates to pp.22–23 of the subject
guide; the Examiners would be looking for mention of the points there
plus a discussion around them:
• if we wish to reach good quality decisions then we have to take our
time and incur costs

• OR projects use resources – the consultant’s, the client’s, other staff –
and there is always a tension between minimising the cost and time
taken to reach a decision and making the ‘perfect’ decision
• models are representations of the system under investigation, but are
simplified representations. The more time, and cost, put into building
a model, the more accurately it should represent the system
• issue of the time available to reach a decision.

Question 2
Reading for this question
SSM is dealt with on pp.35–40 of the subject guide.
Approaching the question
For part (a):
• CATWOE – Customers, Actors, Transformation (or Transformation
process), Worldview (or Weltanschauung), Owner, Environmental
constraints
• Elements of CATWOE more fully explained
• Root definition clear indication that it is a statement of the ideal
• Link between CATWOE and root definition (deduce from the root
definition the answer to CATWOE).
For part (b):
Assumptions:
• different individuals and groups make different evaluations of events
and this leads to them taking different actions
• concepts and ideas from systems engineering are useful
• it is necessary when describing any human activity system to take
account of the particular image of the world underlying the description
of the system and it is necessary to be explicit about the assumptions
underlying this image
• it is possible to learn about a system by comparing pure models of
that system with perceptions of what is happening in the real-world
problem situation.
Discussion of the SSM stages:
For part (c)
For the problem:
• a clear statement (in words) of the problem considered, not just a
single phrase
• appropriate application of SSM to the problem
root definition (statement of the ideal)
CATWOE for their root definition
explicit and CLEAR check of root definition by CATWOE
application of the stages.

Question 3
Reading for this question
AHP relates to pp.182–90 of the subject guide, MRP to pp.102–06 and
SODA to pp.32–35.


Approaching the question


For part (a)
• analytic hierarchy process
• systematic approach to:
deciding between a finite set of alternatives
deciding whether objectives are consistent
• use of scaling of pairwise judgements
• scale runs from 1 to 9.
For part (b)
• materials requirements planning
• bill of materials (BOM) mentioned
• BOM clearly explained (e.g. example)
• master production schedule (MPS)
• timing and quantity with respect to inventory orders
• structural and tactical information.
For part (c)
• strategic options and development analysis
• also known as JOURNEY Making
• Journey = Jointly Understanding, Reflecting, and NEgotiating strategY
• uses cognitive maps
• cognitive map explained
• merging of individual maps.

Question 4
Reading for this question
This question relates to Chapter 6 of the subject guide.
Approaching the question
Applying the standard Markov approach for the long-run prediction, we
need to find the steady-state vector [x1,x2,x3] satisfying
[x1,x2,x3] = [x1,x2,x3](transition matrix) and x1 + x2 + x3 = 1. Expanding
we get:
x1 = 0.57x1 + 0.30x2 + 0.72x3
x2 = 0.04x1 + 0.64x2 + 0.20x3
x3 = 0.39x1 + 0.06x2 + 0.08x3
x1 + x2 + x3 = 1
and solving these equations simultaneously:
x1 = 0.5534
x2 = 0.1990
x3 = 0.2476
For the second part where the transition matrix changes we have the
equations:
x1 = 0.61x1 + 0.26x2 + 0.51x3
x2 = 0.04x1 + 0.07x2 + 0.17x3

x3 = 0.35x1 + 0.67x2 + 0.32x3
x1 + x2 + x3 = 1
and solving these equations simultaneously:
New market shares:
x1 = 0.5415
x2 = 0.0905
x3 = 0.3679
Company 3 gains market share (new share of 0.3679 minus old share
of 0.2476) = 0.1203, so 12.03% which at £50K per percentage point is
worth £601.5K.
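The two steady-state systems above can be checked numerically. The sketch below rearranges each set of equations into the form Ax = 0 (with the normalisation x1 + x2 + x3 = 1 replacing the redundant third equation) and solves them with a small Gaussian-elimination routine; the gain for company 3 is computed from shares rounded to four decimal places, as in the working above.

```python
# Checking the steady-state calculations above with a small linear solver.
def solve3(A, b):
    """Solve a 3x3 system Ax = b by Gaussian elimination with partial pivoting."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Original matrix: x1 = 0.57x1 + 0.30x2 + 0.72x3 becomes
# 0.43x1 - 0.30x2 - 0.72x3 = 0, and similarly for x2; the third row
# enforces x1 + x2 + x3 = 1.
steady = solve3([[0.43, -0.30, -0.72],
                 [-0.04, 0.36, -0.20],
                 [1.0, 1.0, 1.0]], [0.0, 0.0, 1.0])
new = solve3([[0.39, -0.26, -0.51],
              [-0.04, 0.93, -0.17],
              [1.0, 1.0, 1.0]], [0.0, 0.0, 1.0])
print([round(v, 4) for v in steady])   # approx [0.5534, 0.199, 0.2476]

# Company 3's gain, using shares rounded to 4 d.p. as in the working above
gain_pts = 100 * (round(new[2], 4) - round(steady[2], 4))
print(round(gain_pts * 50, 1))         # 601.5 (in £K)
```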

Question 5
Reading for this question
This question relates to Chapter 10 of the subject guide.
Approaching the question
This is a goal programming question. Let x(A,B,C) (≥0) be the number of
units of product X purchased from A,B,C respectively; y(A,B,C) (≥0) be the
number of units of product Y purchased from A,B,C respectively.
Then the constraints are:
• x(A) + x(B) + x(C) ≥ 2,000
• y(A) + y(B) + y(C) ≥ 2,400
for the demand
• x(A) ≤ 1,200
• x(B) ≤ 1,380
• x(C) ≤ 3,000
• y(A) ≤ 1,345
• y(B) ≤ 1,500
• y(C) ≤ 2,000
for the availability
For the supplier B goal we have the constraint:
• x(B) = 1,000 + a+ – b-
where a+,b- ≥0 are the upward and downward deviation from this goal.
For the supplier C goal we have the constraint:
• 3.9x(C) + 5.2y(C) = 8,000 + c+ – d-
where c+,d- ≥0 are the upward and downward deviation from this goal.
For the total expenditure goal we have the constraint:
• 4.1x(A) + 4.2x(B) + 3.9x(C) + 4.5y(A) + 4.8y(B) + 5.2y(C) =
20,000 + e+ – f-
where e+,f- ≥0 are the upward and downward deviation from this goal.
The objective here is to minimise a weighted sum of deviation variables.
Take 1% of the goal values:
• Supplier B 1,000/100 = 10
• Supplier C 8,000/100 = 80
• Total expenditure 20,000/100 = 200


minimise
w1(a+/10) + w2(b-/10) + w3(c+/80) + w4(d-/80) + w5(e+/200) + w6(f-/200)
where the ‘w’s are the weights (numeric values) for the deviations.
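The weighted objective can be evaluated for any candidate purchase plan. A minimal sketch (the plan below is hypothetical, chosen only to illustrate the bookkeeping, and all weights are taken as w = 1):

```python
# Illustrative only: split each goal's excess/shortfall into its upward and
# downward deviation variables and evaluate the weighted objective.
def deviations(actual, goal):
    """Return (upward, downward) deviation of actual from goal."""
    return (max(actual - goal, 0.0), max(goal - actual, 0.0))

# hypothetical plan: units of X and Y bought from suppliers A, B, C
x = {"A": 1200, "B": 800, "C": 0}
y = {"A": 1345, "B": 1055, "C": 0}

spend_C = 3.9 * x["C"] + 5.2 * y["C"]
total = (4.1 * x["A"] + 4.2 * x["B"] + 3.9 * x["C"]
         + 4.5 * y["A"] + 4.8 * y["B"] + 5.2 * y["C"])

a_up, b_down = deviations(x["B"], 1000)    # supplier B goal
c_up, d_down = deviations(spend_C, 8000)   # supplier C goal
e_up, f_down = deviations(total, 20000)    # total expenditure goal

# weighted objective with all w = 1, deviations scaled by 1% of each goal
obj = a_up/10 + b_down/10 + c_up/80 + d_down/80 + e_up/200 + f_down/200
print(round(obj, 2))
```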
Sequential goal program:
Priority level 1: minimise the upward deviation from the supplier B
goal
minimise a+
subject to the eight inequality constraints and three goal-related
equality constraints seen above.
Priority level 2: minimise the downward deviation from the supplier C
goal
minimise d-
subject to the eight inequality constraints and three goal-related
equality constraints seen above and a+ = A*
where A* is the value that a+ has in the solution at priority level 1.
The subjective weights are the ‘w’s.
Setting initial values for these can only be a matter of managerial
judgement (see p.163 of the subject guide).
Revision of these weight values may occur as we see the numeric
solution from the goal program and seek to shape that to a
solution with which we are more comfortable.

Question 6
Reading for this question
This question relates to Chapter 9 of the subject guide.
Approaching the question
Appropriate ratios are:
• voice calls (‘000) per employee
• emails (‘000) per employee
giving:
Branch Voice calls (‘000) per employee Emails (‘000) per employee
A 0.2113 0.0622
B 0.1214 0.0340
C 0.1638 0.0533
D 0.1290 0.0829
E 0.1687 0.0973
F 0.4017 0.1500
G 0.1563 0.2150


DEA diagram is:

F and G are the efficient branches (efficiency 100%).


Efficiencies (as fractions) for the other branches are:

Branch  Efficiency  Acceptable range (lower – upper)
A       0.53        0.51 – 0.55
B       0.30        0.28 – 0.32
C       0.41        0.39 – 0.43
D       0.46        0.44 – 0.48
E       0.55        0.53 – 0.57
The acceptable range above is the range within which we count a
candidate’s answer as correct.
Reference set is:
• F for A, B, C
• F and G for D and E.
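The efficiencies tabulated above can be reproduced numerically. The sketch below implements the radial (constant returns) efficiency measure behind the DEA diagram: each branch's efficiency is how far along the ray from the origin it sits, relative to where that ray meets the efficient boundary formed by F, G and free disposal of outputs. The end-of-frontier handling (vertical face below F, horizontal face left of G) is an assumption consistent with the figures quoted.

```python
ratios = {  # (voice calls per employee, emails per employee), from the table above
    "A": (0.2113, 0.0622), "B": (0.1214, 0.0340), "C": (0.1638, 0.0533),
    "D": (0.1290, 0.0829), "E": (0.1687, 0.0973),
    "F": (0.4017, 0.1500), "G": (0.1563, 0.2150),
}
F, G = ratios["F"], ratios["G"]

def efficiency(p):
    """Radial efficiency of point p against the frontier through F and G."""
    px, py = p
    m = py / px                  # slope of the ray from the origin through p
    if m <= F[1] / F[0]:         # ray crosses the vertical face below F
        t = F[0] / px
    elif m >= G[1] / G[0]:       # ray crosses the horizontal face left of G
        t = G[1] / py
    else:                        # ray crosses the segment joining F and G
        s = (px * F[1] - py * F[0]) / (py * (G[0] - F[0]) - px * (G[1] - F[1]))
        t = (F[0] + s * (G[0] - F[0])) / px
    return 1.0 / t               # F and G themselves score 1 (100%)

for name in "ABCDE":
    print(name, round(efficiency(ratios[name]), 2))
```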
The worst performer is branch B. We would not support closing it on this
analysis; deficiencies in the analysis are:
• only one week timescale
• no information on salaries and other costs
• the routing policy (geographically nearest) impacts on the number of
voice calls and emails received and so impacts on the efficiency of a
call centre.
With this new information the input measure changes, from number of
employees to a financial value (for example $). Multiplying the number of
employees for A to F by 50 to convert to a dollar value is simply a scaling
of the single input measure and can be ignored provided we apply an
appropriate multiplier to the input measure for call centre G. Employees
in call centre G are paid £150 per week = 150(1.6) = $240. So as a
multiplier of $50 to equate to the other call centres this is 240/50 = 4.8.
Hence the equivalent number of $50 employees at G is 4.8(80) = 384.


With this change the new DEA diagram is:

Branch  Efficiency  Acceptable range (lower – upper)
A       0.53        0.51 – 0.55
B       0.30        0.28 – 0.32
C       0.41        0.39 – 0.43
D       0.55        0.53 – 0.57
E       0.65        0.63 – 0.67
G       0.30        0.28 – 0.32
Reference set is F for A, B, C, D, E and G.

Question 7
Reading for this question
This question relates to Chapter 5 of the subject guide.
Approaching the question
The equation to use is CP = C_under/(C_under + C_over), as shown on p.100 of
the subject guide.
Here C_under = the cost of underestimating demand by one unit = the lost
profit = 8.50 – 6.60 = 1.90
C_over = the cost of overestimating demand by one unit = 6.60 – 1.10 = 5.50.
Hence CP = 1.90/(1.90 + 5.50) = 0.257.
When we have the Normal distribution with mean 250 and variance 40
the quantity to order (x) is given by (x – 250)/√40 = the value from the
N(0,1) with 0.257 of probability to the left of it. Here the value from the
tables is –0.65 (approximately).
So (x – 250)/√40 = –0.65
x = 245.89, say 246 copies.
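The critical-fractile arithmetic above can be checked with a few lines (the z-value is read from the Normal table, as in the working):

```python
import math

# Newsvendor critical fractile and order quantity, using the costs above
c_under = 8.50 - 6.60            # lost profit per unit of unmet demand
c_over = 6.60 - 1.10             # loss per unsold copy
cp = c_under / (c_under + c_over)
print(round(cp, 3))              # 0.257

z = -0.65                        # from the N(0,1) table: 0.257 of probability to the left
x = 250 + z * math.sqrt(40)      # demand is Normal(mean 250, variance 40)
print(round(x))                  # 246
```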
We need an EOQ calculation. The EOQ formula is EOQ = √(2Rc_o/c_h),
where:
R = 420 × 12 = 5,040 and c_o = 50
c_h = 0.12(110) = 13.2 per year
so EOQ = √(2 × 5,040 × 50/13.2) = 195.4, say 195.


This falls in the range for which we do pay the basic price of £110, as we
assumed in the calculation.
Using EOQ = 195, the cost per year is RP + (c_hQ/2) + (c_oR/Q)
= 5,040 × 110 + (13.2 × 195/2) + (50 × 5,040/195)
= 556,979
With the quantity discount the new purchase cost is 95 and the new
holding cost is 0.12(95) = 11.4.
The new EOQ is √(2 × 5,040 × 50/11.4) = 210.3, say 210.
This is feasible with respect to the minimum order quantity of 200, so
there is no need to cost ordering exactly 200.
The cost per year would be RP + (c_hQ/2) + (c_oR/Q)
= 5,040 × 95 + (11.4 × 210/2) + (50 × 5,040/210)
= 481,197
Hence ordering with the discount is cheaper and that should be the policy
adopted.
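The cost comparison above can be reproduced directly (the discounted EOQ of 210 satisfies the minimum order quantity of 200, so both plans are costed at their EOQ):

```python
import math

# Annual cost RP + ch*Q/2 + co*R/Q, evaluated at the (rounded) EOQ
def eoq_cost(R, P, co, ch):
    Q = round(math.sqrt(2 * R * co / ch))
    return Q, R * P + ch * Q / 2 + co * R / Q

R, co = 420 * 12, 50                       # annual demand, order cost
q1, c1 = eoq_cost(R, 110, co, 0.12 * 110)  # basic price
q2, c2 = eoq_cost(R, 95, co, 0.12 * 95)    # discounted price
print(q1, round(c1))  # 195 556979
print(q2, round(c2))  # 210 481197
```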
We need an EBQ calculation. EBQ = √(2A_c c_s/(c_h(1 – r))),
where:
A_c = 500,000 per year
A_p = 85,000 × 12 = 1,020,000 per year (as the question uses a monthly
value of 85,000)
c_s = 30,000
c_h = 0.005(7.5 + 0.35 + 1.75) = 0.048 per year
r = A_c/A_p = 500,000/1,020,000 = 0.4902
Hence EBQ = 1,107,236.
So we should make 1,107,236 items on the machine at a time. This will be
enough to supply demand for 1,107,236/A_c = 1,107,236/500,000 = 2.21
years, so we effectively agree with my colleague.
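The EBQ arithmetic can be checked as follows; note that r is rounded to four decimal places first, matching the working above, so the final figure agrees with the 1,107,236 quoted:

```python
import math

Ac = 500_000                       # demand per year
Ap = 85_000 * 12                   # production rate per year
cs = 30_000                        # set-up cost
ch = 0.005 * (7.5 + 0.35 + 1.75)   # holding cost per item per year = 0.048
r = round(Ac / Ap, 4)              # 0.4902, rounded as in the working above
ebq = math.sqrt(2 * Ac * cs / (ch * (1 - r)))
print(round(ebq))                  # 1107236
print(round(ebq / Ac, 2))          # 2.21 years of demand per batch
```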

Question 8
Reading for this question
This question relates to Chapter 3 of the subject guide.
Approaching the question
The network diagram is shown below:


Details are:
Activity Earliest Start Latest Start Float
A 3 10 7
B 0 2 2
C 8 8 0
D 0 0 0
E 6 13 7
F 10 10 0
X1 4 4 0
X2 3 5 2
• Completion time 14 days
• Critical path is D(4)→X1(4)→C(2)→F(4)
Activity Latest start time (days)
A 10
B 2
C 8
D 0
E 13
F 10
As the critical path involves X1, this needs to be changed to reduce the
project completion time (at a cost of £400).
When we do this the new project completion time is 12 days (details as
below).
Activity Earliest Start Latest Start Float
A 3 8 5
B 0 0 0
C 6 6 0
D 0 0 0
E 6 11 5
F 8 8 0
X1 4 4 0
X2 3 3 0
Here all activities except A and E are critical. As the previous critical path
D→X1→C→F is still a critical path of duration 12 days, there is no point in
spending any further money on X2, since reducing the completion time for
that cannot reduce the project completion time.
Hence the minimum possible project completion time is 12 days and this
can be achieved at a cost of £400.
We have:
Activity Latest start time (days)
A 8
B 0
C 6
D 0
E 11
F 8
The critical paths are:
D(4)→X1(2)→C(2)→F(4)
and
B(3)→X2(3)→C(2)→F(4)
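The forward and backward passes behind these tables can be reproduced with a short critical-path calculation. The durations (A = 3, B = 3, C = 2, D = 4, E = 1, F = 4) and the precedence relations below are inferred from the start-time tables in this answer, and the elapsed-time conditions X1 and X2 are modelled as dummy activities of 4 and 3 days:

```python
# CPM sketch: durations and precedences reconstructed from the tables above
dur = {"A": 3, "B": 3, "C": 2, "D": 4, "E": 1, "F": 4, "X1": 4, "X2": 3}
preds = {"A": ["B"], "B": [], "C": ["X1", "X2"], "D": [], "E": ["A"],
         "F": ["C"], "X1": ["D"], "X2": ["B"]}

def cpm(dur, preds):
    """Earliest starts, latest starts and completion time via two passes."""
    order, remaining = [], set(dur)
    while remaining:                  # simple topological sort
        for a in sorted(remaining):
            if all(p in order for p in preds[a]):
                order.append(a)
                remaining.discard(a)
                break
    es = {}
    for a in order:                   # forward pass
        es[a] = max((es[p] + dur[p] for p in preds[a]), default=0)
    finish = max(es[a] + dur[a] for a in order)
    succ = {a: [b for b in order if a in preds[b]] for a in order}
    ls = {}
    for a in reversed(order):         # backward pass
        ls[a] = min((ls[b] for b in succ[a]), default=finish) - dur[a]
    return es, ls, finish

es, ls, finish = cpm(dur, preds)
print(finish, [ls[a] for a in "ABCDEF"])   # 14 days; latest starts as tabulated

dur["X1"] = 2                              # change condition X1 (cost £400)
es2, ls2, finish2 = cpm(dur, preds)
print(finish2)                             # 12 days
```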
