
MODULE TITLE:

Supply Chain Modeling for Decision-making


MODULE NO: LSCM 611-3
ECTS: 3.5

ADDIS ABABA UNIVERSITY


COLLEGE OF BUSINESS AND ECONOMICS
SCHOOL OF COMMERCE

MA- LOGISTICS AND SUPPLY CHAIN MANAGEMENT PROGRAM

JANUARY 2016
ADDIS ABABA
By: Dilnesahu Samuel
Matewos Ensermu (PhD)

Table of Contents

MODULE INTRODUCTION
MODULE OBJECTIVE
CHAPTER ONE: MANAGING AND DESIGNING SUPPLY CHAIN
CHAPTER TWO: INTRODUCTION TO MODELING
CHAPTER THREE: MULTI-CRITERIA DECISION MAKING
CHAPTER FOUR: SUPPLY CHAIN MODELING AND SIMULATION
CHAPTER FIVE: DECISION THEORY
CHAPTER SIX: GAME THEORY
CHAPTER SEVEN: QUEUE THEORY
REFERENCES

MODULE INTRODUCTION
There are two decision-making approaches from which supply chain/logistics managers can choose: qualitative and quantitative. The qualitative approach focuses on the analysis of ideas and relies on the decision maker's experience and judgment. The quantitative approach is a scientific one that relies on quantified evidence to help logistics managers make optimal decisions among alternatives. In this module, you will be exposed to the different decision-making processes and decision models available to supply chain/logistics managers for solving problems related to supply chain management effectively. More to the point, you will learn about the fundamental concepts of decision making and their applications in supply chain management. The module describes and presents key concepts to ensure conceptual clarity and provides examples and activities to help you solve typical decision-model problems. It also shows how the concepts and models are relevant to supply chain management and how to apply them in that context.

MODULE OBJECTIVE
Dear learner, at the end of this course, you will be able to:

 Describe the basic components of supply chain and logistics networks and the associated optimization issues
 Explain the key elements of supply chain modeling
 Apply such models as breakeven analysis, goal programming, and Monte Carlo simulation in a supply chain context
 Apply decision models in a supply chain context
 Appreciate the importance of cooperative games in supply chain management
 Model and solve queue problems in a supply chain context

In order to best achieve these learning outcomes, you are advised to do all activities and end-of-chapter questions and to consult the reference materials. In addition, each of the seven chapters closes with a summary to help you synthesize its main topics. The chapters are:
1. Managing and designing supply chain
2. Introduction to modeling
3. Multi-criteria decision making
4. Supply chain modeling and simulation
5. Decision theory
6. Game theory
7. Queue theory

CHAPTER ONE: MANAGING AND DESIGNING SUPPLY CHAIN
INTRODUCTION
Dear learner, in this chapter you will be introduced to the key elements of the supply chain system and the nature of supply chain decisions. In addition, you will be presented with key issues related to global optimization, uncertainty and risk, and supply chain performance.

OBJECTIVE
At the end of the chapter, you will be able to:
 Describe the logistics and supply chain network
 Explain the nature of supply chain decisions
 Appreciate the value of optimization and decision models in supply chain networks
 Identify various supply chain performance measurement methods
To this end, the chapter discusses the following topics:

1. The Logistics Network


2. The Development Chain
3. Global Optimization
4. Managing Uncertainty and Risk
5. Supply Chain Decisions
6. Supply Chain Performance Measures
7. Supply Chain Modeling Issues

1.1. THE LOGISTICS NETWORK

Fierce competition in today’s global markets, the introduction of products with shorter
life cycles, and the heightened expectations of customers have forced business
enterprises to invest in, and focus attention on, their supply chains. This, together with
continuing advances in communications and transportation technologies (e.g., mobile
communication, Internet, and overnight delivery), has motivated the continuous
evolution of the supply chain and of the techniques to manage it effectively. In a typical
supply chain, raw materials are procured and items are produced at one or more
factories, shipped to warehouses for intermediate storage, and then shipped to retailers
or customers. Consequently, to reduce cost and improve service levels, effective supply
chain strategies must take into account the interactions at the various levels in the supply
chain. The supply chain, which is also referred to as the logistics network, consists of
suppliers, manufacturing centers, warehouses, distribution centers, and retail outlets, as
well as raw materials, work-in-process inventory, and finished products that flow
between the facilities.

Supply chain management is a set of approaches utilized to efficiently integrate


suppliers, manufacturers, warehouses, and stores, so that merchandise is produced and
distributed in the right quantities, to the right locations, and at the right time, in order to minimize system-wide costs while satisfying service level requirements. This definition
leads to several observations. First, supply chain management takes into consideration
every facility that has an impact on cost and plays a role in making the product conform
to customer requirements: from supplier and manufacturing facilities through
warehouses and distribution centers to retailers and stores. Indeed, in some supply chain
analysis, it is necessary to account for the suppliers’ suppliers and the customers’
customers because they have an impact on supply chain performance.

Second, the objective of supply chain management is to be efficient and cost-effective


across the entire system; total system-wide costs, from transportation and distribution to inventories of raw materials, work in process, and finished goods, are to be minimized. Thus, the emphasis is not on simply minimizing transportation cost or reducing inventories but, rather, on
taking a systems approach to supply chain management. Finally, because supply chain
management revolves around efficient integration of suppliers, manufacturers,
warehouses, and stores, it encompasses the firm’s activities at many levels, from the
strategic level through the tactical to the operational level. What about logistics
management, or value chain management, or demand chain management?

Various companies, consultants, and academics have developed a variety of terms and
concepts to stress what they believe are the salient issues in supply chain management.
Although many of these concepts are useful and insightful, for the purposes of this
module, we will use supply chain management as the generic name for the set of
concepts, approaches, strategies, and ideas that we are discussing. What makes supply
chain management difficult? Although we will discuss a variety of reasons throughout
this text, they can all be related to some or all of the following observations: Supply
chain strategies cannot be determined in isolation. They are directly affected by another
chain that most organizations have, the development chain that includes the set of
activities associated with new product introduction. At the same time, supply chain
strategies also should be aligned with the specific goals of the organization, such as
maximizing market share or increasing profit. It is challenging to design and operate a
supply chain so that total system-wide costs are minimized, and system-wide service
levels are maintained. Indeed, it is frequently difficult to operate a single facility so that
costs are minimized and service level is maintained.

The process of finding the best system-wide strategy is known as global optimization.
Uncertainty and risk are inherent in every supply chain; customer demand can never be
forecast exactly, travel times will never be certain, and machines and vehicles will break
down. Similarly, recent industry trends, including outsourcing, offshoring, and lean
manufacturing that focus on reducing supply chain costs, significantly increase the level
of risk in the supply chain. Thus, supply chains need to be designed and managed to
eliminate as much uncertainty and risk as possible as well as deal effectively with the
uncertainty and risk that remain. Such optimization is pursued through supply chain decision models.

1.2. THE DEVELOPMENT CHAIN

The development chain is the set of activities and processes associated with new product
introduction. It includes the product design phase, the associated capabilities and
knowledge that need to be developed internally, sourcing decisions, and production
plans. Specifically, the development chain includes decisions such as product
architecture; what to make internally and what to buy from outside suppliers, that is,
make/buy decisions; supplier selection; early supplier involvement; and strategic
partnerships. The development and supply chains intersect at the production point. It is
clear that the characteristics of and decisions made in the development chain will have
an impact on the supply chain. Similarly, it is intuitively clear that the characteristics of
the supply chain must have an impact on product design strategy and hence on the
development chain.

Example 1-1: Hewlett Packard was one of the first firms to recognize the intersection of
the development and supply chains. A case in point is the inkjet printer introduction,
where decisions about product architecture were made by taking into account not only
labor and material cost, but also total supply chain cost throughout the product life cycle.
More recently, HP has focused on making decisions such as what design activities to
outsource and the corresponding organizational structures needed to manage the outsourced design process by considering the characteristics of both the development and
the supply chains. Unfortunately, in most organizations, different managers are
responsible for the different activities that are part of these chains. Typically, the VP of
engineering is responsible for the development chain, the VP of manufacturing for the
production portion of the chains, and the VP of supply chain or logistics for the
fulfillment of customer demand. Unless carefully addressed, the typical impact of this
organizational structure is a misalignment of product design and supply chain strategies.
To make matters worse, in many organizations, additional chains intersect with both the
development and the supply chains. These may include the reverse logistics chain, that
is, the chain associated with returns of products or components, as well as the spare-
parts chain.

1.3. GLOBAL OPTIMIZATION

What makes finding the best system wide, or globally optimal, integrated solution so
difficult? A variety of factors make this a challenging problem. The supply chain is a
complex network of facilities dispersed over a large geography, and, in many cases, all
over the globe.

Example 1-2 National Semiconductor, whose list of competitors includes Motorola Inc.
and the Intel Corporation, is one of the world’s largest manufacturers of analog devices
and subsystems that are used in fax machines, cellular phones, computers, and cars.
Currently, the company has three wafer fabrication facilities, two in the United States and one in Great Britain, and has test and assembly sites in Malaysia, China, and Singapore.
After assembly, finished products are shipped to hundreds of manufacturing facilities all
over the world, including those of Apple, Canon, Delphi, Ford, IBM, Hewlett-Packard,
and Siemens. Since the semiconductor industry is highly competitive, short lead time
specification and the ability to deliver within the committed due date are critical
capabilities. In 1994, 95 percent of National Semiconductor’s customers received their
orders within 45 days from the time the order was placed, while the remaining 5 percent
received their orders within 90 days. These tight lead times required the company to
involve 12 different airline carriers using about 20,000 different routes. The difficulty,
of course, was that no customer knew in advance if they were going to be part of the 5
percent of customers who received their order in 90 days or the 95 percent who received
their order within 45 days.

Different facilities in the supply chain frequently have different, conflicting objectives. For instance, suppliers typically want manufacturers to
commit themselves to purchasing large quantities in stable volumes with flexible
delivery dates. Unfortunately, although most manufacturers would like to implement
long production runs, they need to be flexible to their customers’ needs and changing
demands. Thus, the suppliers’ goals are in direct conflict with the manufacturers’ desire
for flexibility. Indeed, since production decisions are typically made without precise
information about customers’ demand, the ability of manufacturers to match supply and
demand depends largely on their ability to change supply volume as information about
demand arrives. To make matters worse, the objective of reducing inventory levels typically
implies an increase in transportation costs. The supply chain is a dynamic system that
evolves over time. Indeed, not only do customer demand and supplier capabilities
change over time, but supply chain relationships also evolve over time. For example, as
customers’ power increases, there is increased pressure placed on manufacturers and
suppliers to produce an enormous variety of high-quality products and, ultimately, to
produce customized products.

System variations over time are also an important consideration. Even when demand is
known precisely (e.g., because of contractual agreements), the planning process needs to
account for demand and cost parameters varying over time due to the impact of seasonal
fluctuations, trends, advertising and promotions, competitors’ pricing strategies, and so
forth. These time-varying demand and cost parameters make it difficult to determine the
most effective supply chain strategy, the one that minimizes systemwide costs and
conforms to customer requirements. Of course, global optimization only implies that it is
not only important to optimize across supply chain facilities, but also across processes
associated with the development and supply chains. That is, it is important to identify
processes and strategies that optimize, or, alternatively, synchronize, both chains
simultaneously.

1.4. MANAGING UNCERTAINTY AND RISK

Global optimization is made even more difficult because supply chains need to be
designed for, and operated in, uncertain environments, thus creating sometimes
enormous risks to the organization. A variety of factors contribute to this:

1. Matching supply and demand is a major challenge, as the following reports illustrate:

a. Boeing Aircraft announced a write-down of $2.6 billion in October 1997 due to “raw material shortages, internal and supplier parts shortages and productivity inefficiencies.”

b. “Second quarter sales at U.S. Surgical Corporation declined 25 percent, resulting in a loss of $22 million. The sales and earnings shortfall is attributed to larger than anticipated inventories on the shelves of hospitals.”

c. “EMC Corp. said it missed its revenue guidance of $2.66 billion for the second quarter of 2006 by around $100 million, and said the discrepancy was due to higher than expected orders for the new DMX-3 systems over the DMX-2, which resulted in an inventory snafu.”

d. “There are so many different ways inventory can enter our system. It’s a constant challenge to keep it under control.” [Johnnie Dobbs, Wal-Mart Supply Chain and Logistics Executive]

e. “Intel, the world’s largest chip maker, reported a 38 percent decline in quarterly profit Wednesday in the face of stiff competition from Advanced Micro Devices and a general slowdown in the personal computer market that caused inventories to swell.”

Obviously, this difficulty stems from the fact that months before demand is realized, manufacturers have to commit themselves to specific production levels. These advance commitments imply huge financial and supply risks.

Inventory and back-order levels fluctuate considerably across the supply chain, even
when customer demand for specific products does not vary greatly. Indeed, we will
argue that the first principle of forecasting is that “forecasts are always wrong.” Thus, it
is impossible to predict the precise demand for a specific item, even with the most
advanced forecasting techniques.

Demand is not the only source of uncertainty. Delivery lead times, manufacturing yields,
transportation times, and component availability also can have significant supply chain
impact.

Recent trends such as lean manufacturing, outsourcing, and offshoring that focus on cost
reduction increase risks significantly. For example, consider an automotive
manufacturer whose parts suppliers are in Canada and Mexico. With little uncertainty in
transportation and a stable supply schedule, parts can be delivered to assembly plants
“just-in-time” based on fixed production schedules. However, in the event of an
unforeseen disaster, such as the September 11 terrorist attacks, port strikes, or weather-
related calamities, adherence to this type of strategy could result in a shutdown of the
production lines due to lack of parts. Similarly, outsourcing and offshoring imply that
the supply chains are more geographically diverse and, as a result, natural and man-
made disasters can have a tremendous impact.

Activity 1.1.

Consider the supply chain for a domestic automobile.

a. What are the components of the supply chain for the automobile?

b. What are the different firms involved in the supply chain?

c. What are the objectives of these firms?

d. Provide examples of conflicting objectives in this supply chain.

e. What are the risks that rare or unexpected events pose to this supply chain?

Dear learner, sections 1.1 to 1.4 present the key points needed to do this activity.

1.5. SUPPLY CHAIN DECISIONS

1.5.1. Introduction

The supply chains of large corporations involve hundreds of facilities (retailers, distributors, plants, and suppliers) that are globally distributed and handle thousands of parts and products.

As one example, one auto manufacturer has 12,000 suppliers and 70 plants, operates in 200 countries, and has annual sales of 8.6 million vehicles. As a second example, the US Defense Logistics Agency, the world's largest warehousing operation, stocks over 100,000 products. The goals of corporate supply chains are to provide customers with
the products they want in a timely way and as efficiently and profitably as possible.
Fueled in part by the information revolution and the rise of e-commerce, the
development of models of supply chains and their optimization has emerged as an
important way of coping with this complexity. Indeed, this is one of the most active
application areas of operations research and management science today. This reflects the
realization that the success of a company generally depends on the efficiency with
which it can design, manufacture and distribute its products in an increasingly competitive
global economy. There are many decisions to be made in supply chains. These include:

1. what products to make and what their designs should be;
2. how much, when, where, and from whom to buy product;
3. how much, when, and where to produce product;
4. how much and when to ship from one facility to another;
5. how much, when, and where to store product;
6. how much, when, and where to charge for products; and
7. how much, when, and where to provide facility capacity.

These decisions are typically made in an environment that involves uncertainty about product demands, costs, prices, lead times, quality, etc. The decisions are generally made in a multi-party environment that includes competition and often includes collaborative alliances.

Alliances lead to questions of how to share the benefits of collaboration and what information the various parties ought to share. In many supply chains, the tradition has been not to share information. On the other hand, firms in more than a few supply chains realize that there are important benefits from sharing information too, e.g., the potential for making supply chains more responsive and efficient.

1.5.2. Supply Chain Design and Analysis: Models and Methods


Generally, multi-stage models for supply chain design and analysis can be divided into four
categories, by modelling approach. In the cases included here, the modelling approach is
driven by the nature of the inputs and the objective of the study. The four categories are: (1)
deterministic analytical models, in which the variables are known and specified; (2) stochastic analytical models, where at least one of the variables is unknown and is assumed to follow a particular probability distribution; (3) economic models; and (4) simulation models.

1. Deterministic Analytical Models

Williams (1981) presents seven heuristic algorithms for scheduling production and
distribution operations in an assembly supply chain network (i.e., each station has at most one
immediate successor, but any number of immediate predecessors). The objective of each
heuristic is to determine a minimum-cost production and/or product distribution schedule that
satisfies final product demand. The total cost is a sum of average inventory holding and fixed
(ordering, delivery, or set-up) costs.
Williams (1983) develops a dynamic programming algorithm for simultaneously determining
the production and distribution batch sizes at each node within a supply chain network. As in
Williams (1981), it is assumed that the production process is an assembly process. The
objective of the heuristic is to minimize the average cost per period over an infinite horizon,
where the average cost is a function of processing costs and inventory holding costs for each
node in the network.
Ishii et al. (1988) develop a deterministic model for determining the base stock levels and lead times associated with the lowest-cost solution for an integrated supply chain on a finite horizon. The stock levels and lead times are determined in such a way as to prevent stockouts and to minimize the amount of obsolete ("dead") inventory at each stock point. Their model
utilizes a pull-type ordering system which is driven by, in this case, linear (and known)
demand processes.

Cohen and Lee (1989) present a deterministic, mixed integer, non-linear mathematical
programming model, based on economic order quantity (EOQ) techniques, to develop what
the authors refer to as a global resource deployment policy. More specifically, the objective
function used in their model maximizes the total after-tax profit for the manufacturing
facilities and distribution centers (total revenue less total before-tax costs less taxes due). This
objective function is subject to a number of constraints, including managerial constraints, resource and production constraints, and logical consistency constraints (feasibility, availability, demand limits, and variable non-negativity). The outputs resulting from their
model include:
 Assignments for finished products and subassemblies to manufacturing plants,
vendors to distribution centers, distribution centers to market regions.
 Amounts of components, subassemblies, and final products to be shipped among the
vendors, manufacturing facilities, and distribution centers.
 Amounts of components, subassemblies, and final products to be manufactured at the
manufacturing facilities.
Moreover, this model develops material requirements and assignments for all products, while
maximizing after-tax profits.
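Since the formulation builds on economic order quantity (EOQ) techniques, it helps to recall the classic EOQ trade-off between fixed ordering cost and inventory holding cost. A minimal sketch in Python; the demand and cost figures are illustrative only and are not taken from Cohen and Lee (1989):

```python
import math

def eoq(annual_demand, ordering_cost, holding_cost_per_unit):
    """Classic economic order quantity: Q* = sqrt(2DK/h)."""
    return math.sqrt(2 * annual_demand * ordering_cost / holding_cost_per_unit)

# Illustrative parameters (assumed, not from the paper)
D, K, h = 12000, 150.0, 2.5          # units/year, $/order, $/unit/year
q = eoq(D, K, h)                     # -> 1200 units per order
annual_cost = D / q * K + q / 2 * h  # ordering cost + holding cost at Q*
print(f"Q* = {q:.0f} units, minimum annual cost = ${annual_cost:,.2f}")
```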

Cohen and Moon (1990) extend Cohen and Lee (1989) by developing a constrained
optimization model, called PILOT, to investigate the effects of various parameters on supply
chain cost, and consider the additional problem of determining which manufacturing facilities
and distribution centers should be open. More specifically, the authors consider a supply
chain consisting of raw material suppliers, manufacturing facilities, distribution centers, and
retailers. This system produces final products and intermediate products, using various types
of raw materials.
• Which of the available manufacturing facilities and distribution centers should be
open?
• Raw material and intermediate order quantities for vendors and manufacturing
facilities.
• Production quantities by product by manufacturing facility.
• Product-specific shipping quantities from manufacturing facility to distribution center
to customer.
The objective function of the PILOT model is a cost function, consisting of fixed and variable
production and transportation costs, subject to supply, capacity, assignment, demand, and raw
material requirement constraints. Based on the results of their example supply chain system,
the authors conclude that there are a number of factors that may dominate supply chain costs
under a variety of situations, and that transportation costs play a significant role in the overall
costs of supply chain operations.
Newhart, et. al. (1993) design an optimal supply chain using a two-phase approach. The first
phase is a combination mathematical program and heuristic model, with the objective of
minimizing the number of distinct product types held in inventory throughout the supply
chain. This is accomplished by consolidating substitutable product types into single SKUs.
The second phase is a spreadsheet-based inventory model, which determines the minimum
amount of safety stock required to absorb demand and lead-time fluctuations. The authors consider four facility location alternatives for the placement of the various facilities within the supply chain. The next step is to calculate the amount of inventory investment under each alternative, given a set of demand requirements, and then select the minimum-cost alternative.
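The safety stock calculation in the second phase can be illustrated with the standard textbook approximation that pools demand and lead-time variability; this is a sketch of the general idea, not a reproduction of the authors' spreadsheet model, and all parameter values are invented:

```python
import math
from statistics import NormalDist

def safety_stock(service_level, mu_d, sigma_d, mu_l, sigma_l):
    """Safety stock absorbing demand and lead-time fluctuations:
    SS = z * sqrt(L * sigma_d^2 + d^2 * sigma_L^2)."""
    z = NormalDist().inv_cdf(service_level)  # safety factor for the target service level
    return z * math.sqrt(mu_l * sigma_d**2 + mu_d**2 * sigma_l**2)

# Hypothetical data: daily demand 100 +/- 20 units, lead time 5 +/- 1 days
print(round(safety_stock(0.95, 100, 20, 5, 1)))  # -> about 180 units
```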

Arntzen et al. (1995) develop a mixed integer programming model, called GSCM (Global Supply Chain Model), that can accommodate multiple products, facilities, stages (echelons), time periods, and transportation modes. More specifically, the GSCM minimizes a composite function of: (1) activity days and (2) the total (fixed and variable) costs of production, inventory, material handling, overhead, and transportation. The model requires, as input, bills of
materials, demand volumes, costs and taxes, and activity day requirements and provides, as
output: (1) the number and location of distribution centers, (2) the customer-distribution
center assignment, (3) the number of echelons (amount of vertical integration), and (4) the
product-plant assignment.

Voudouris (1996) develops a mathematical model designed to improve efficiency and


responsiveness in a supply chain. The model maximizes system flexibility, as measured by
the time-based sum of instantaneous differences between the capacities and utilizations of
two types of resources: inventory resources and activity resources. Inventory resources are
resources directly associated with the amount of inventory held; activity resources, then, are
resources that are required to maintain material flow. The model requires, as input, product-
based resource consumption data and bill-of-material information, and generates, as output:
(1) a production, shipping, and delivery schedule for each product and (2) target inventory
levels for each product.
Camm et al. (1997) develop an integer programming model, based on an uncapacitated facility location formulation, for the Procter and Gamble Company. The purpose of the model is
to: (1) determine the location of distribution centers (DCs) and (2) assign those selected DCs
to customer zones. The objective function of the model minimizes the total cost of the DC
location selection and the DC-customer assignment, subject to constraints governing DC-
customer assignments and the maximum number of DCs allowed.
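To make the structure of such a model concrete, here is a minimal uncapacitated facility-location sketch using the open-source PuLP library. The sites, zones, and costs are invented for illustration and are far smaller than the Procter and Gamble application:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

sites = ["DC1", "DC2", "DC3"]                 # candidate distribution centers
zones = ["Z1", "Z2", "Z3", "Z4"]              # customer zones
fixed = {"DC1": 500, "DC2": 400, "DC3": 450}  # fixed cost of opening each DC
ship = {  # assignment (shipping) cost from DC to zone, all hypothetical
    ("DC1", "Z1"): 10, ("DC1", "Z2"): 30, ("DC1", "Z3"): 25, ("DC1", "Z4"): 40,
    ("DC2", "Z1"): 35, ("DC2", "Z2"): 12, ("DC2", "Z3"): 20, ("DC2", "Z4"): 28,
    ("DC3", "Z1"): 25, ("DC3", "Z2"): 22, ("DC3", "Z3"): 11, ("DC3", "Z4"): 14,
}

m = LpProblem("facility_location", LpMinimize)
y = {i: LpVariable(f"open_{i}", cat=LpBinary) for i in sites}
x = {(i, j): LpVariable(f"assign_{i}_{j}", cat=LpBinary) for i in sites for j in zones}

# Objective: fixed opening costs plus DC-to-zone assignment costs
m += lpSum(fixed[i] * y[i] for i in sites) + lpSum(ship[i, j] * x[i, j] for i in sites for j in zones)
for j in zones:
    m += lpSum(x[i, j] for i in sites) == 1   # every zone served by exactly one DC
for i in sites:
    for j in zones:
        m += x[i, j] <= y[i]                  # only open DCs may serve zones

m.solve(PULP_CBC_CMD(msg=False))
print("open DCs:", [i for i in sites if y[i].value() == 1])
```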

What is mathematical modelling?


Models describe our beliefs about how the world functions. In mathematical modelling, we
translate those beliefs into the language of mathematics. This has many advantages:
1. Mathematics is a very precise language. This helps us to formulate ideas and
identify underlying assumptions.
2. Mathematics is a concise language, with well-defined rules for manipulations.
3. All the results that mathematicians have proved over hundreds of years are at
our disposal.
4. Computers can be used to perform numerical calculations.
The majority of interacting systems in the real world are far too complicated to model in their
entirety. Hence the first level of compromise is to identify the most important parts of the
system. These will be included in the model; the rest will be excluded. The second level of
compromise concerns the amount of mathematical manipulation which is worthwhile.
Although mathematics has the potential to prove general results, these results depend
critically on the form of equations used. Small changes in the structure of equations may
require enormous changes in the mathematical methods. Using computers to handle the
model equations may never lead to elegant results, but it is much more robust against
alterations.
What objectives can modelling achieve?
Mathematical modelling can be used for a number of different reasons. How well any
particular objective is achieved depends on both the state of knowledge about a system and
how well the modelling is done. Examples of the range of objectives are:
1. developing scientific understanding, through quantitative expression of current knowledge of a system (as well as displaying what we know, this may also show up what we do not know);
2. testing the effect of changes in a system; and
3. aiding decision making, including
(i) tactical decisions by managers; and
(ii) strategic decisions by planners.

2. Stochastic Analytical Models


Cohen and Lee (1988) develop a model for establishing a material requirements policy for all
materials for every stage in the supply chain production system. In this work, the authors use
four different cost-based sub-models (there is one stochastic sub-model for each production
stage considered). Each of these sub-models is listed and described below:

1. Material Control: Establishes material ordering quantities, reorder intervals, and
estimated response times for all supply chain facilities, given lead times, fill rates,
bills of material, cost data, and production requirements.
2. Production Control: Determines production lot sizes and lead times for each product,
given material response times.
3. Finished Goods Stockpile (Warehouse): Determines the economic order size and
quantity for each product, using cost data, fill rate objectives, production lead times,
and demand data.
4. Distribution: Establishes inventory ordering policies for each distribution facility,
based on transportation time requirements, demand data, cost data, network data, and
fill rate objectives.
Each of these sub-models is based on a minimum-cost objective. In the final computational
step, the authors determine approximate optimal ordering policies using a mathematical
program, which minimizes the total sum of the costs for each of the four sub-models.
Svoronos and Zipkin (1991) consider multi-echelon, distribution-type supply chain systems
(i.e., each facility has at most one direct predecessor, but any number of direct successors). In
this research, the authors assume a base stock, one-for-one (S-1, S) replenishment policy for
each facility, and that demands for each facility follow an independent Poisson process. The
authors obtain steady-state approximations for the average inventory level and average
number of outstanding backorders at each location for any choice of base stock level. Finally,
using these approximations, the authors propose the construction of an optimization model
that determines the minimum-cost base stock level.
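For intuition, the single-location building block of such an evaluation is straightforward: with Poisson lead-time demand and base-stock level S, expected backorders and expected on-hand inventory follow from the Poisson tail. The sketch below shows only this textbook one-location calculation, not the authors' multi-echelon approximations:

```python
from scipy.stats import poisson

def base_stock_metrics(S, lam):
    """For lead-time demand D ~ Poisson(lam) and base stock S:
    E[backorders] = E[(D - S)+]  and  E[on-hand] = S - lam + E[backorders]."""
    upper = S + int(lam + 10 * lam**0.5) + 1  # truncate the negligible far tail
    eb = sum((d - S) * poisson.pmf(d, lam) for d in range(S + 1, upper))
    return eb, S - lam + eb

for S in (4, 6, 8):  # compare candidate base-stock levels (illustrative demand rate)
    eb, oh = base_stock_metrics(S, lam=5.0)
    print(f"S={S}: E[backorders]={eb:.3f}, E[on-hand]={oh:.3f}")
```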
Lee and Billington (1993) develop a heuristic stochastic model for managing material flows
on a site-by-site basis. Specifically, the authors model a pull-type, periodic, order-up-to inventory system, and determine the review period (by product type) and the order-up-to
quantity (by product type) as model outputs. The authors develop a model which will either:
(1) determine the material ordering policy by calculating the required stock levels to achieve
a given target service level for each product at each facility or (2) determine the service level
for each product at each facility, given a material ordering policy.

Lee et al. (1993) develop a stochastic, periodic-review, order-up-to inventory model to derive a procedure for process localization in the supply chain. That is, the authors propose an approach to operational and delivery processes that considers differences in target market structures (e.g., differences in language, environment, or governments). Thus, the objective of this research is to design product and production processes suitable for different market segments so as to achieve the lowest cost and highest customer service levels overall.

Pyke and Cohen (1993) develop a mathematical programming model for an integrated supply chain, using stochastic sub-models to calculate the values of the random variables included in the mathematical program. The authors consider a three-level supply chain,
consisting of one product, one manufacturing facility, one warehousing facility, and one
retailer. The model minimizes total cost, subject to a service level constraint, and holds the
set-up times, processing times, and replenishment lead times constant. The model yields the
approximate economic (minimum cost) reorder interval, replenishment batch sizes, and the
order-up-to product levels (for the retailer) for a particular production network.

Pyke and Cohen (1994) follow the Pyke and Cohen (1993) research by including a more
complicated production network. In Pyke and Cohen (1994), the authors again consider an
integrated supply chain with one manufacturing facility, one warehouse, and one retailer, but
now consider multiple product types. The new model yields similar outputs; however, it
determines the key decision variables for each product type. More specifically, this model
yields the approximate economic (minimum cost) reorder interval (for each product type),
replenishment batch sizes (for each product type), and the order-up-to product levels (for the
retailer, for each product type) for a particular supply chain network.

Tzafestas and Kapsiotis (1994) utilize a deterministic mathematical programming approach to


optimize a supply chain, and then use simulation techniques to analyze a numerical example
of their optimization model. In this work, the authors perform the optimization under three
different scenarios.

1. Manufacturing Facility Optimization: Under this scenario, the objective is to


minimize the total cost incurred by the manufacturing facility only; the costs experienced by other facilities are ignored.
2. Global Supply Chain Optimization: This scenario assumes a cooperative
relationship among all stages of the supply chain, and therefore minimizes the
total operational costs of the chain as a whole.

3. Decentralized Optimization: This scenario optimizes each of the supply chain
components individually, and thus minimizes the cost experienced by each level.
Towill and Del Vecchio (1994) consider the application of filter theory and simulation to the
study of supply chains. In their research, the authors compare filter characteristics of supply
chains to analyze various supply chain responses to randomness in the demand pattern. These
responses are then compared using simulation, in order to specify the minimum safety stock
requirements that achieve a particular desired service level.

Lee and Feitzinger (1995) develop an analytical model to analyze product configuration for
postponement (i.e., determining the optimal production step for product differentiation),
assuming stochastic product demands. The authors assume a manufacturing process with I
production steps that may be performed at a factory or at one of the M distribution centers
(DCs). The problem is to determine a step P such that steps 1 through P will be performed at
the factory and steps (P+1) to I will be performed at the DCs. The authors solve this problem
by calculating an expected cost for the various product configurations, as a sum of inventory,
freight, customs, setup, and processing costs. The optimal value of P is the one that
minimizes the sum of these costs.
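In this formulation the decision reduces to a one-dimensional search over candidate differentiation points. A minimal sketch with a purely hypothetical cost function (the actual model derives the cost of each P from inventory, freight, customs, setup, and processing terms):

```python
def best_postponement_step(I, total_cost):
    """Steps 1..P run at the factory, steps P+1..I at the DCs;
    return the P in {0, ..., I} with the lowest expected total cost."""
    return min(range(I + 1), key=total_cost)

def total_cost(P, I=6):
    # Hypothetical curves: differentiating early (large P) inflates
    # product-specific inventory, while postponing steps to the DCs
    # adds per-period processing cost there.
    inventory = 40 * P**1.5
    dc_processing = 90 * (I - P)
    return inventory + dc_processing

print(best_postponement_step(6, total_cost))  # -> 2 for these made-up numbers
```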

Altiok and Ranjan (1995) consider a generalized production/inventory system with: M (M > 1) stages (j = 1, ..., M), one type of final product, random processing times (FIFO, for all stages) and set-up times, and intermediate buffers. The system experiences demand for finished products according to a compound Poisson process; the inventory levels (intermediate buffers and finished goods) are controlled according to a continuous-review (R, r) inventory policy, and backorders are allowed.

The authors develop an iterative procedure wherein each of the two-node sub-systems is analyzed individually; the procedure terminates once the estimated average throughputs of the sub-systems are all approximately equal. Once the termination condition is met, the procedure allows for calculation of approximate values for the two performance measures: (1) the inventory levels in each buffer j, and (2) the backorder probability. The authors conclude that their approximation is acceptable as long as P(backorder) does not exceed 0.30, in which case the system is failing to effectively accommodate demand volumes.

Finally, Lee et al. (1997) develop stochastic mathematical models describing the “bullwhip effect,” which is defined as the phenomenon in which the variance of buyer demand becomes increasingly amplified and distorted at each echelon upward throughout the supply chain. That is, the actual variance and magnitude of the orders at each echelon are increasingly higher than the variance and magnitude of the sales, and this phenomenon propagates upstream within the chain. In this research, the authors develop stochastic analytical models describing the four causes of the bullwhip effect (demand signal processing, the rationing game, order batching, and price variations), and show how these causes contribute to the effect.
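The demand-signal-processing cause can be demonstrated in a few lines of simulation: a retailer that forecasts with a moving average and follows an order-up-to policy places orders whose variance exceeds the variance of end-customer demand. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
T, lead_time, window = 2000, 2, 5
demand = rng.normal(100, 10, T)     # end-customer demand seen by the retailer
orders = np.zeros(T)
prev_target = demand[:window].mean() * (lead_time + 1)

for t in range(window, T):
    forecast = demand[t - window:t].mean()   # moving-average demand forecast
    target = forecast * (lead_time + 1)      # order-up-to level
    orders[t] = max(0.0, demand[t] + target - prev_target)
    prev_target = target

print(f"Var(demand) = {demand.var():.1f}")           # about 100
print(f"Var(orders) = {orders[window:].var():.1f}")  # noticeably amplified
```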

3. Economic Models
Christy and Grout (1994) develop an economic, game-theoretic framework for modeling the buyer-supplier relationship in a supply chain. The basis of this work is a 2 x 2 supply chain "relationship matrix," which may be used to identify conditions under which each type of relationship is desired. These conditions range from high to low process specificity, and from
high to low product specificity. Thus, the relative risks assumed by the buyer and the supplier
are captured within the matrix. For example, if the process specificity is low, then the buyer
assumes the risk; if the product specificity is low, then the supplier assumes the risk. For each
of the four quadrants (and therefore, each of the four risk categories), the authors go on to
assign appropriate techniques for modeling the buyer-supplier relationship. For the two-
echelon case, the interested reader is referred to Cachon and Zipkin (1997).

4. Simulation Models
Towill (1991) and Towill et al. (1992) use simulation techniques to evaluate the effects of
various supply chain strategies on demand amplification. The strategies investigated are as
follows.
1. Eliminating the distribution echelon of the supply chain, by including the
distribution function in the manufacturing echelon.
2. Integrating the flow of information throughout the chain.
3. Implementing a Just-In-Time (JIT) inventory policy to reduce time delays.
4. Improving the movement of intermediate products and materials by modifying the
order quantity procedures.
5. Modifying the parameters of the existing order quantity procedures.
The objective of the simulation model is to determine which strategies are the most effective
in smoothing the variations in the demand pattern. The just-in-time strategy (strategy (3)
above) and the echelon removal strategy (strategy (1) above) were observed to be the most
effective in smoothing demand variations.
Wikner et al. (1991) examine five supply chain improvement strategies, and then implement
these strategies on a three-stage reference supply chain model. The five strategies are:
1. Fine-tuning the existing decision rules.
2. Reducing time delays at and within each stage of the supply chain.
3. Eliminating the distribution stage from the supply chain.
4. Improving the decision rules at each stage of the supply chain.
5. Integrating the flow of information, and separating demands into real orders,
which are true market demands, and cover orders, which are orders that bolster
safety stocks.
Their reference model includes a single factory (with an on-site warehouse), distribution
facilities, and retailers. Thus, it is assumed that every facility within the chain contains
(houses) some inventory. The implementation of each of the five different strategies is carried
out using simulation, the results of which are then used to determine the effects of the various
strategies on minimizing demand fluctuations. The authors conclude that the most effective
improvement strategy is strategy (5), improving the flow of information at all levels
throughout the chain, and separating orders.

1.6. SUPPLY CHAIN PERFORMANCE MEASURES

An important component in supply chain design and analysis is the establishment of


appropriate performance measures. A performance measure, or a set of performance
measures, is used to determine the efficiency and/or effectiveness of an existing system, or to
compare competing alternative systems. Performance measures are also used to design
proposed systems, by determining the values of the decision variables that yield the most
desirable level(s) of performance. Available literature identifies a number of performance
measures as important in the evaluation of supply chain effectiveness and efficiency. These
measures, described in this section, may be categorized as either qualitative or quantitative.
1.6.1 Qualitative Performance Measures
Qualitative performance measures are those measures for which there is no single direct
numerical measurement, although some aspects of them may be quantified. These objectives
have been identified as important, but are not used in the models reviewed here:

• Customer Satisfaction: The degree to which customers are satisfied with the product and/or service received; may apply to internal or external customers. Customer satisfaction comprises three elements:
1. Pre-Transaction Satisfaction: satisfaction associated with service elements occurring
prior to product purchase.
2. Transaction Satisfaction: satisfaction associated with service elements directly
involved in the physical distribution of products.
3. Post-Transaction Satisfaction: satisfaction associated with support provided for
products while in use.
• Flexibility: The degree to which the supply chain can respond to random fluctuations
in the demand pattern.
• Information and Material Flow Integration [31]: The extent to which all functions
within the supply chain communicate information and transport materials.
• Effective Risk Management [22]: All of the relationships within the supply chain
contain inherent risk. Effective risk management describes the degree to which the
effects of these risks are minimized.
• Supplier Performance: With what consistency suppliers deliver raw materials to
production facilities on time and in good condition.
1.6.2 Quantitative Performance Measures
Quantitative performance measures are those measures that may be directly described
numerically. Quantitative supply chain performance measures may be categorized by:
(1) objectives that are based directly on cost or profit and (2) objectives that are based on some measure of customer responsiveness.
1.6.2.1 Measures Based on Cost
• Cost Minimization: The most widely used objective. Cost is typically minimized for
an entire supply chain (total cost), or is minimized for particular business units or
stages.
• Sales Maximization [19]: Maximize the amount of sales dollars or units sold.
• Profit Maximization: Maximize revenues less costs.
• Inventory Investment Minimization [24]: Minimize the amount of inventory costs (including product costs and holding costs).
• Return on Investment Maximization [8]: Maximize the ratio of net profit to capital that
was employed to produce that profit.

1.6.2.2 Measures Based on Customer Responsiveness
• Fill Rate Maximization: Maximize the fraction of customer orders filled on time.
• Product Lateness Minimization: Minimize the amount of time between the promised
product delivery date and the actual product delivery date.
• Customer Response Time Minimization: Minimize the amount of time required from the time an order is placed until the time the order is received by the customer; usually refers to external customers only.
• Lead Time Minimization: Minimize the amount of time required from the time a
product has begun its manufacture until the time it is completely processed.
• Function Duplication Minimization [31]: Minimize the number of business functions
that are provided by more than one business entity.
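Several of these responsiveness measures can be computed directly from order records. A minimal sketch with hypothetical field names and data:

```python
from datetime import date

# Hypothetical order records: promised versus actual delivery dates
orders = [
    {"promised": date(2016, 1, 10), "delivered": date(2016, 1, 10)},
    {"promised": date(2016, 1, 12), "delivered": date(2016, 1, 15)},
    {"promised": date(2016, 1, 20), "delivered": date(2016, 1, 19)},
]

on_time = sum(o["delivered"] <= o["promised"] for o in orders)
fill_rate = on_time / len(orders)   # fill rate: fraction filled on time (maximize)
lateness = [max(0, (o["delivered"] - o["promised"]).days) for o in orders]
avg_lateness = sum(lateness) / len(orders)  # product lateness (minimize)

print(f"fill rate = {fill_rate:.0%}, average lateness = {avg_lateness:.1f} days")
```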
Decision Variables in Supply Chain Modeling
In supply chain modeling, the performance measures are expressed as functions of one or
more decision variables. These decision variables are then chosen in such a way as to
optimize one or more performance measures. The decision variables used in the reviewed
models are described below.
• Production/Distribution Scheduling: Scheduling manufacturing and/or distribution operations.
• Inventory Levels: Determining the amount and location of every raw material,
subassembly, and final assembly storage.
• Number of Stages (Echelons): Determining the number of stages (or echelons) that will comprise the supply chain. This involves either increasing or decreasing the chain's level of vertical integration by combining (or eliminating) stages or separating (or adding) stages, respectively.
• Distribution Center (DC) - Customer Assignment: Determining which DC(s) will
serve which customer(s).
• Plant - Product Assignment: Determining which plant(s) will manufacture which
product(s).
• Buyer - Supplier Relationships: Determining and developing critical aspects of the
buyer-supplier relationship.
• Product Differentiation Step Specification: Determining the step within the process of
product manufacturing at which the product should be differentiated (or specialized).
• Number of Product Types Held in Inventory: Determining the number of different product types that will be held in finished goods inventory.
1.7. SUPPLY CHAIN MODELING ISSUES

In supply chain modeling, there are a number of issues that are receiving increasing attention,
as evidenced by their prevalent consideration in the work reviewed here.
These issues are: (1) product postponement, (2) global versus single-nation supply chain
modeling, and (3) demand distortion and variance amplification.
1.7.1. Product Postponement
Product postponement is the practice of delaying one or more operations to a later point in the
supply chain, thus delaying the point of product differentiation. There are numerous potential
benefits to be realized from postponement, one of the most compelling of which is the
reduction in the value and amount of held inventory, resulting in lower holding costs. There
are two primary considerations in developing a postponement strategy for a particular end-
item: (1) determining how many steps to postpone and (2) determining which steps to
postpone. Current research addressing postponement strategy includes Lee and Feitzinger
(1995) and Johnson and Davis (1995).
1.7.2 Global vs. Single-Nation Supply Chain Modeling
Global Supply Chains (GSCs) are supply chains that operate (i.e., contain facilities) in
multiple nations. When modeling GSCs, there are additional considerations affecting SC
performance that are not present in supply chains operating in a single nation. Export
regulations, duty rates, and exchange rates are a few of the additional necessary
considerations when modeling GSCs. Kouvelis and Gutierrez (1997), Arntzen et al. (1995),
and Cohen and Lee (1989) address modeling issues associated with GSCs.
1.7.3 Demand Distortion and Variance Amplification
Demand distortion is the phenomenon in which orders to the supplier have a larger variance than sales to the buyer; variance amplification occurs when the distortion of the demand propagates upstream in amplified form. These phenomena (also known collectively as the bullwhip effect or whiplash effect) are common in supply chain systems and were observed as early as Forrester (1961). The consequences of the bullwhip effect on the supply chain
may be severe, the most serious of which is excess inventory costs. As a result, a number of
strategies have been developed to counteract the effects of demand distortion and variance
amplification.
Activity 1.2.
Discuss the different decision making models practiced by Ethiopian manufacturing
companies you are familiar with.

___________________________________________________________________________
___________________________________________________________________________
___________________________________________________________________________
___________________________________________________________________________
___________________________________________________________________________
___________________________________________________________________________
_________________________________
Your selected model should come from among the deterministic, stochastic, economic, or simulation models discussed in this chapter.
CHAPTER SUMMARY
Dear learner, in this chapter we discussed the nature of the logistics network and supply chain models, along with the idea of global optimization and the common difficulties facing supply chain management: uncertainty and risk. Later in the chapter, we also discussed supply chain performance measures and supply chain modeling issues at length.

END CHAPTER QUESTIONS


1. Discuss in detail the nature of the logistics network and the supply chain.
2. Explain the two categories of supply chain performance measures.
3. Describe the concept of global optimization.
4. Briefly explain key supply chain modeling issues.

CHAPTER TWO: INTRODUCTION TO MODELING
INTRODUCTION
Dear learner, in this chapter we will discuss the field of management science and the process of constructing and solving models.
OBJECTIVE
After completing this chapter, students will be able to:
 Describe the quantitative analysis approach.
 Understand the application of quantitative analysis in a real situation.
 Describe the use of modeling in quantitative analysis.
Here is the outline of the chapter:
1. Introduction to management science
2. The management science approach to problem solving
3. Model building

2.1. INTRODUCTION TO MANAGEMENT SCIENCE

Management science is the application of a scientific approach to solving management


problems in order to help managers make better decisions. As implied by this definition,
management science encompasses a number of mathematically oriented techniques that have
either been developed within the field of management science or been adapted from other
disciplines, such as the natural sciences, mathematics, statistics, and engineering. This text
provides an introduction to the techniques that make up management science and
demonstrates their applications to management problems.

Management science is a recognized and established discipline in business. The applications


of management science techniques are widespread, and they have been frequently credited
with increasing the efficiency and productivity of business firms. In various surveys of
businesses, many indicate that they use management science techniques, and most rate the
results to be very good. Management science (also referred to as operations research,
quantitative methods, quantitative analysis, and decision sciences) is part of the fundamental
curriculum of most programs in business.

Management science is a scientific approach to solving management problems, and management science techniques can be applied to solve problems in different types of organizations, including services, government, the military, business and industry, and health care.

Management science can be used in a variety of organizations to solve many different types
of problems.

In addition, in this module all of the modeling techniques and solution methods are
mathematically based. Finally, as the various management science techniques are presented,
keep in mind that management science is more than just a collection of techniques.
Management science also involves the philosophy of approaching a problem in a logical
manner (i.e., a scientific approach). The logical, consistent, and systematic approach to
problem solving can be as useful (and valuable) as the knowledge of the mechanics of the
mathematical techniques themselves. This understanding is especially important for those
readers who do not always see the immediate benefit of studying mathematically oriented
disciplines such as management science.

Management science encompasses a logical approach to problem solving.

2.2. THE MANAGEMENT SCIENCE APPROACH TO PROBLEM SOLVING

As indicated in the previous section, management science encompasses a logical, systematic


approach to problem solving, which closely parallels what is known as the scientific method
for attacking problems. This approach follows a generally recognized and ordered series of
steps: (1) observation, (2) definition of the problem, (3) model construction, (4) model
solution, and (5) implementation of solution results. We will analyze each of these steps
individually.

a) Observation

The first step in the management science process is the identification of a problem that exists
in the system (organization). The system must be continuously and closely observed so that
problems can be identified as soon as they occur or are anticipated. Problems are not always
the result of a crisis that must be reacted to but, instead, frequently involve an anticipatory or
planning situation. The person who normally identifies a problem is the manager, because managers work in places where problems might occur. However, problems can often be
identified by a management scientist, a person skilled in the techniques of management
science and trained to identify problems, who has been hired specifically to solve problems
using management science techniques.

b) Definition of the Problem

Once it has been determined that a problem exists, the problem must be clearly and concisely
defined. Improperly defining a problem can easily result in no solution or an inappropriate
solution. Therefore, the limits of the problem and the degree to which it pervades other units
of the organization must be included in the problem definition. Because the existence of a
problem implies that the objectives of the firm are not being met in some way, the goals (or
objectives) of the organization must also be clearly defined. A stated objective helps to focus
attention on what the problem actually is.

c) Model Construction

A management science model is an abstract representation of an existing problem situation. It


can be in the form of a graph or chart, but most frequently a management science model
consists of a set of mathematical relationships. These mathematical relationships are made up
of numbers and symbols.

A model is an abstract mathematical representation of a problem situation.

As an example, consider a business firm that sells a product. The product costs $5 to produce
and sells for $20. A model that computes the total profit that will accrue from the items sold
is:

Z = $20x - 5x

A variable is a symbol used to represent an item that can take on any value.

In this equation x represents the number of units of the product that are sold, and Z represents
the total profit that results from the sale of the product. The symbols x and Z are variables.
The term variable is used because no set numeric value has been specified for these items.
The number of units sold, x, and the profit, Z, can be any amount (within limits); they can
vary. These two variables can be further distinguished. Z is a dependent variable because its

value is dependent on the number of units sold; x is an independent variable because the
number of units sold is not dependent on anything else (in this equation).

Parameters are known, constant values that are often coefficients of variables in equations.

The numbers $20 and $5 in the equation are referred to as parameters. Parameters are
constant values that are generally coefficients of the variables (symbols) in an equation.
Parameters usually remain constant during the process of solving a specific problem. The
parameter values are derived from data (i.e., pieces of information) from the problem
environment. Sometimes the data are readily available and quite accurate. For example,
presumably the selling price of $20 and product cost of $5 could be obtained from the firm's
accounting department and would be very accurate. However, sometimes data are not as
readily available to the manager or firm, and the parameters must be either estimated or based
on a combination of the available data and estimates. In such cases, the model is only as
accurate as the data used in constructing the model.

Data are pieces of information from the problem environment.

The equation as a whole is known as a functional relationship (also called function and
relationship). The term is derived from the fact that profit, Z, is a function of the number of
units sold, x, and the equation relates profit to units sold.

A model is a functional relationship that includes variables, parameters, and equations.

Because only one functional relationship exists in this example, it is also the model. In this
case the relationship is a model of the determination of profit for the firm. However, this
model does not really replicate a problem. Therefore, we will expand our example to create a
problem situation.

Let us assume that the product is made from steel and that the business firm has 100 pounds
of steel available. If it takes 4 pounds of steel to make each unit of the product, we can
develop an additional mathematical relationship to represent steel usage:

4x = 100 lb. of steel

This equation indicates that for every unit produced, 4 of the available 100 pounds of steel
will be used. Now our model consists of two relationships:

Max Z=20x-5x

4x=100

We say that the profit equation in this new model is an objective function, and the resource
equation is a constraint. In other words, the objective of the firm is to achieve as much
profit, Z, as possible, but the firm is constrained from achieving an infinite profit by the
limited amount of steel available. To signify this distinction between the two relationships in
this model, we will add the following notations:

Max Z=20x-5x

Subject to: 4x=100

This model now represents the manager's problem of determining the number of units to
produce. You will recall that we defined the number of units to be produced as x. Thus, when
we determine the value of x, it represents a potential (or recommended) decision for the
manager. Therefore, x is also known as a decision variable. The next step in the management
science process is to solve the model to determine the value of the decision variable.

d) Model Solution

A management science technique usually applies to a specific model type.

Once models have been constructed in management science, they are solved using the
management science techniques presented in this text. A management science solution
technique usually applies to a specific type of model. Thus, the model type and solution
method are both part of the management science technique. We are able to say that a model is
solved because the model represents a problem. When we refer to model solution, we also
mean problem solution.

Max Z=20x-5x

Subject to: 4x=100

x = 100/4

x = 25 units

Substituting the value of 25 for x into the profit function results in the total profit:

Max Z=20x-5x

Z = 20(25) - 5(25) = $375
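Dear learner, a model this small can also be checked with a few lines of code. The following is a minimal Python sketch (our own illustration; the variable names are ours, not part of the original example) that solves the steel model and verifies the profit:

# Steel example: Max Z = 20x - 5x, subject to 4x = 100
price, cost = 20, 5                    # parameters: selling price and unit cost
steel_available = 100                  # pounds of steel on hand
steel_per_unit = 4                     # pounds of steel used per unit

x = steel_available / steel_per_unit   # the constraint 4x = 100 gives x
Z = (price - cost) * x                 # objective function value

print("Units to produce:", x)          # 25.0
print("Total profit: $", Z)            # 375.0

Substituting the Activity 2.1 values below (price 30, cost 20, 150 pounds of steel, 5 pounds per unit) into the same sketch gives x = 30 units and Z = 300.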

e) Implementation

The final step in the management science process for problem solving is implementation.
Implementation is the actual use of the model once it has been developed or the solution to
the problem the model was developed to solve. This is a critical but often overlooked step in
the process. It is not always a given that once a model is developed or a solution found, it is
automatically used. Frequently the person responsible for putting the model or solution to use
is not the same person who developed the model and, thus, the user may not fully understand
how the model works or exactly what it is supposed to do. Individuals are also sometimes
hesitant to change the normal way they do things or to try new things. In this situation the
model and solution may get pushed to the side or ignored altogether if they are not carefully
explained and their benefit fully demonstrated. If the management science model and solution
are not implemented, then the effort and resources used in their development have been
wasted.

Activity 2.1: Describe the management science approach to problem solving and show the model to maximize the profit using a modified version of the above example on steel with the following new values: price per unit is 30, cost per unit is 20, total steel available is 150 pounds, and consumption per unit is 5 pounds.

___________________________________________________________________________
___________________________________________________________________________
___________________________________________________________________________

Your model needs to be set up as a maximization. Hence: Max Z = 10x, subject to: 5x ≤ 150

2.3. MODEL BUILDING

2.3.1. Break-Even Analysis

In the previous section we gave a brief, general description of how management science
models are formulated and solved, using a simple algebraic example. In this section we will
continue to explore the process of building and solving management science models, using
break-even analysis, also called profit analysis. Break-even analysis is a good topic to
expand our discussion of model building and solution because it is straightforward, relatively
familiar to most people, and not overly complex. In addition, it provides a convenient means
to demonstrate the different ways management science models can be solved mathematically
(by hand), graphically, and with a computer.

The purpose of break-even analysis is to determine the number of units of a product (i.e., the
volume) to sell or produce that will equate total revenue with total cost. The point where total
revenue equals total cost is called the break-even point, and at this point profit is zero. The
break-even point gives a manager a point of reference in determining how many units will be
needed to ensure a profit.

Components of Break-Even Analysis

The three components of break-even analysis are volume, cost, and profit. Volume is the
level of sales or production by a company. It can be expressed as the number of units (i.e.,
quantity) produced and sold, as the dollar volume of sales, or as a percentage of total capacity
available.

Two types of costs are typically incurred in the production of a product: fixed costs and
variable costs. Fixed costs are generally independent of the volume of units produced and
sold. That is, fixed costs remain constant, regardless of how many units of product are
produced within a given range. Fixed costs can include such items as rent on plant and
equipment, taxes, staff and management salaries, insurance, advertising, depreciation, heat
and light, plant maintenance, and so on. Taken together, these items result in total fixed costs.

Fixed costs are independent of volume and remain constant.

Variable costs are determined on a per-unit basis. Thus, total variable costs depend on the
number of units produced. Variable costs include such items as raw materials and resources,
direct labor, packaging, material handling, and freight.

Variable costs depend on the number of items produced.

Total variable costs are a function of the volume and the variable cost per unit. This
relationship can be expressed mathematically as

total variable cost = vcv

where cv = variable cost per unit and v = volume (number of units) sold.

The total cost of an operation is computed by summing total fixed cost and total variable cost,
as follows:

total cost = total fixed cost + total variable cost

or

TC = cf + vcv

where cf = fixed cost.

Total cost (TC) equals the fixed cost (cf) plus the variable cost per unit (cv) multiplied by
volume (v).

As an example, consider Western Clothing Company, which produces denim jeans. The company incurs the following monthly costs to produce denim jeans: fixed costs of cf = $10,000 and a variable cost of cv = $8 per pair.

If we arbitrarily let the monthly sales volume, v, equal 400 pairs of denim jeans, the total cost is

TC = cf + vcv = $10,000 + (400)(8) = $13,200

The third component in our break-even model is profit. Profit is the difference between total
revenue and total cost. Total revenue is the volume multiplied by the price per unit,

total revenue = vp

where p = price per unit.

Profit is the difference between total revenue (volume multiplied by price) and total cost.

For the clothing company example, if denim jeans sell for $23 per pair and we sell 400 pairs per month, then the total monthly revenue is:

Total revenue = vp = (400)(23) = $9,200

Now that we have developed relationships for total revenue and total cost, profit (Z) can be computed as follows:

Z = total revenue - total cost = vp - cf - vcv

Computing the Break-Even Point

For our clothing company example, we have determined total revenue and total cost to be
$9,200 and $13,200, respectively. With these values, there is no profit but, instead, a loss of
$4,000:

total profit = total revenue - total cost = $9,200 - $13,200 = -$4,000

We can verify this result by using our total profit formula,

Z = vp - cf - vcv

and the values v = 400, p = $23, cf = $10,000, and cv = $8:

Z = (400)(23) - 10,000 - (400)(8) = $9,200 - $13,200 = -$4,000
Obviously, the clothing company does not want to operate with a monthly loss of $4,000
because doing so might eventually result in bankruptcy. If we assume that price is static
because of market conditions and that fixed costs and the variable cost per unit are not subject to change, then the only part of our model that can be varied is volume. Using the modeling
terms developed earlier in this chapter, price, fixed costs, and variable costs are parameters,
whereas the volume, v, is a decision variable. In break-even analysis we want to compute the
value of v that will result in zero profit.

The break-even point is the volume (v) that equates total revenue with total cost where profit
is zero.

At the break-even point, where total revenue equals total cost, the profit, Z, equals zero.
Thus, if we let profit, Z, equal zero in our total profit equation and solve for v, we can
determine the break-even volume:

Break-even point (BEP): v = cf/(p - cv) = 10,000/(23 - 8) = 666.7 units

In other words, if the company produces and sells 666.7 pairs of jeans, the profit (and loss)
will be zero and the company will break even. This gives the company a point of reference
from which to determine how many pairs of jeans it needs to produce and sell in order to gain
a profit (subject to any capacity limitations).
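The break-even computation is easy to script. Here is a short Python sketch (our own illustration) of the formula v = cf/(p - cv), reproducing the Western Clothing figure and the answer to Activity 2.2 below:

def break_even_volume(fixed_cost, price, variable_cost):
    """Volume at which total revenue equals total cost (zero profit)."""
    return fixed_cost / (price - variable_cost)

# Western Clothing Company: cf = $10,000, p = $23, cv = $8
print(round(break_even_volume(10_000, 23, 8), 1))   # 666.7 pairs of jeans

# Activity 2.2: cf = Birr 300,000, p = 20, cv = 5
print(break_even_volume(300_000, 20, 5))            # 20000.0 units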

Activity 2.2: Dear learner compute the breakeven point if fixed cost, unit price, and unit
variable cost are Birr 300,000, 20, and 5, respectively.

___________________________________________________________________________
___________________________________________________________________________

The answer is 20,000

2.3.2. Economic Order Quantity Model Analysis

Economic Order Quantity model is a model that helps to determine optimal inventory order
quantity. The model assumes that when inventory reaches a specific level, referred to as the
reorder point, a fixed amount is ordered. The most widely used and traditional means for
determining how much to order in a continuous system is the economic order quantity (EOQ)
model, also referred to as the economic lot size model.

The simplest form of the economic order quantity model on which all other model versions
are based is called the basic EOQ model. EOQ is essentially a single formula for determining
the optimal order size that minimizes the sum of carrying costs and ordering costs. The
model formula is derived under a set of simplifying and restrictive assumptions, as follows:

Multiperiod model – The Economic Order Quantity: Assumptions

• Demand is known and deterministic: D units/year


• We have a known ordering cost, S, and immediate replenishment
• Annual holding cost of average inventory is H per unit

• Purchasing cost C per unit

How to find the optimal quantity to order

First, it is essential to understand how the system works through the instantaneous receipt of items, as indicated in the following diagram. Here let's bear in mind that the number of periods (triangles) is determined by D/Q and the average inventory is (Q + 0)/2:

Figure 2.1: EOQ model

Next, it is essential to understand the total cost formula.

Total Cost = Purchasing Cost + Ordering Cost + Inventory Cost

• Purchasing Cost = (total units) x (cost per unit)


• Ordering Cost = (number of orders) x (cost per order)
• Inventory Cost = (average inventory) x (holding cost)

The details for each term of the total cost formula are given below:

• Purchasing Cost = C × D, where D is annual demand
• Ordering Cost = S × (D/Q), since the number of orders per year is D/Q
• Inventory (holding) Cost = H × (Q/2), since the average inventory is Q/2

Thus, the complete formula for total cost is:

TC = CD + S(D/Q) + H(Q/2)

In order to optimize, we have to optimize with respect to the decision variable Q. Setting the first derivative of the total cost formula with respect to Q equal to zero and solving yields the following EOQ formula, which we can apply to the example that follows:

Economic Order Quantity (EOQ) or Q* is:

Q* = √(2DS/H)

Dear learner, it is important to know what insight we get from EOQ. Basically, EOQ shows the trade-off between holding cost and ordering cost, as indicated in the following picture:

Figure 2.2: EOQ cost trade-off
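To make this trade-off concrete, the following Python sketch (our own illustration, with hypothetical parameter values) computes Q* and the two cost components; notice that at Q* the annual ordering cost equals the annual holding cost:

from math import sqrt

def eoq(D, S, H):
    """Economic order quantity: Q* = sqrt(2DS/H)."""
    return sqrt(2 * D * S / H)

# Hypothetical values: D = 1,200 units/year, S = $50 per order, H = $2/unit/year
D, S, H = 1200, 50, 2
Q = eoq(D, S, H)
print("Q* =", round(Q, 1))                            # about 244.9 units
print("Annual ordering cost =", round(S * D / Q, 2))
print("Annual holding cost  =", round(H * Q / 2, 2))  # equal at Q*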

If delivery is not instantaneous, but there is a lead time L:

1. When should we order?


2. How much should we order?

The answer to the first question is straightforward if demand is known exactly: place an order when inventory equals the demand during the lead time period. This demand level is termed the reorder point (ROP). The following diagram clarifies this.

Figure 2.3: Finding Reorder point

Activity 2.3: Dear learner, what if the lead time to receive cars is 10 days in the previous example? When should you place an order?

__________________________________________________________________

Here is the answer:

Dear learner, you now know how to find the reorder level when demand is known. However, demand is rarely predictable in reality. Let's now see how to find the reorder point (when to order) when demand is not predictable. This situation is represented by the following diagram, in which demand for the lead time period is not known.

Figure 2.4: Inventory model for unpredictable demand

We could face either excess inventory or stockout depending on whether actual demand is
less than or greater than the expected demand as indicated in the following pictures:

Figure 2.5 a: Inventory model for lesser actual demand

Figure 2.5 b: Inventory model for greater actual demand

Now let’s see what happens when reorder point (ROP) equals expected demand. We expect a
50% service level where inventory is left 50% of the time and stock out is experienced 50%
of the time. Such uncertain demand is represented by the normal curve in the following
picture.

Figure 2.6: Inventory model for ROP matching expected demand

To reduce stock out, we need to add safety stock in the model as indicated below:

Figure 2.7: Inventory model with safety stock for uncertain demand

Dear learner, let us bear in mind that the probability of stockout signals the service level we want to provide. Literally, service level is the same as the probability of NOT stocking out, as given below:

Figure 2.8: Service level versus probability of stockout

Thus,

Safety stock = safety factor Z × standard deviation of lead-time (LT) demand

NB:

 we read Z from the normal distribution table


 We need to be cautious about how to get the standard deviation of LT demand for multiple periods. Let's note that, assuming independent demand:

Variance for multiple periods = Sum of variances, but

Standard deviation for multiple periods ≠ Sum of standard deviations

Standard deviation for multiple periods = Square root of sum of variances
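Putting these rules together, here is a minimal Python sketch (our own illustration, with hypothetical demand figures, using only the standard library): the safety factor Z is read from the normal distribution for a chosen service level, the lead-time standard deviation is the square root of the sum of the per-period variances, and ROP is expected lead-time demand plus safety stock:

from math import sqrt
from statistics import NormalDist

service_level = 0.95                      # probability of NOT stocking out
z = NormalDist().inv_cdf(service_level)   # safety factor Z from the normal distribution

# Hypothetical example: daily demand mean 20 units, std dev 4 units, lead time 10 days
daily_mean, daily_sd, lead_time = 20, 4, 10

expected_lt_demand = daily_mean * lead_time
sigma_lt = sqrt(lead_time * daily_sd**2)  # variances add across periods; std devs do not

safety_stock = z * sigma_lt
rop = expected_lt_demand + safety_stock

print("Z =", round(z, 2))                 # about 1.64 for a 95% service level
print("Safety stock =", round(safety_stock, 1))
print("Reorder point =", round(rop, 1))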

Turning our attention to average inventory with safety stock: average inventory now includes the safety stock. Hence,

Average inventory = (Order quantity/2) + Safety stock

The following diagram shows the relationship between order quantity and average inventory.

Figure 2.9: Average inventory and order quantity with safety stock

To sum up, here are the formulae to find order quantity (EOQ), ROP, and average inventory:

Dear learner, before we wind up the chapter discussion, let's compute ROP for uncertain demand based on the previous example.

CHAPTER SUMMARY
Dear learner, in this unit we have discussed the management science approach to problem solving, with its five steps of observation, problem definition, model construction, model solution, and implementation. We then practiced model building through break-even analysis, which finds the volume at which total revenue equals total cost, and through the economic order quantity (EOQ) model, which balances ordering and holding costs to determine the optimal order size. Later in the chapter, we also discussed the reorder point and the use of safety stock when lead-time demand is uncertain.

END CHAPTER PROBLEM SOLVING QUESTIONS:

Solve the following problems (Show all the necessary steps):

1. The Willow Furniture Company produces tables. The fixed monthly cost of
production is $8,000, and the variable cost per table is $65. The tables sell for $180
apiece.

a. For a monthly volume of 300 tables, determine the total cost, total revenue, and
profit.
b. Determine the monthly break-even volume for the Willow Furniture Company.

2. The Retread Tire Company recaps tires. The fixed annual cost of the recapping
operation is $60,000. The variable cost of recapping a tire is $9. The company charges
$25 to recap a tire.

a. For an annual volume of 12,000 tires, determine the total cost, total revenue,
and profit.
b. Determine the annual break-even volume for the Retread Tire Company
operation.
3. Andy Mendoza makes handcrafted dolls, which he sells at craft fairs. He is
considering mass-producing the dolls to sell in stores. He estimates that the initial
investment for plant and equipment will be $25,000, whereas labor, material,
packaging, and shipping will be about $10 per doll. If the dolls are sold for $30 each,
what sales volume is necessary for Andy to break even?
4. If the maximum operating capacity of the Rolling Creek Textile Mill described in
Problem 2 is 25,000 yards of denim per month, determine the break-even volume as a
percentage of capacity.
5. Molly Dymond and Kathleen Taylor are considering the possibility of teaching
swimming to kids during the summer. A local swim club opens its pool at noon each
day, so it is available to rent during the morning. The cost of renting the pool during
the 10-week period for which Molly and Kathleen would need it is $1,700. The pool
would also charge Molly and Kathleen an admission, towel service, and life guarding
fee of $7 per pupil, and Molly and Kathleen estimate an additional $5 cost per student
to hire several assistants. Molly and Kathleen plan to charge $75 per student for the
10-week swimming class.

a. How many pupils do Molly and Kathleen need to enroll in their class to break
even?
b. If Molly and Kathleen want to make a profit of $5,000 for the summer, how many
pupils do they need to enroll?
c. Molly and Kathleen estimate that they might not be able to enroll more than 60
pupils. If they enroll this many pupils, how much would they need to charge per
pupil in order to realize their profit goal of $5,000?

6. The Star Youth Soccer Club helps to support its 20 boys' and girls' teams financially,
primarily through the payment of coaches. The club puts on a tournament each fall to
help pay its expenses. The cost of putting on the tournament is $8,000, mainly for
development, printing, and mailing of the tournament brochures. The tournament
entry fee is $400 per team. For every team that enters, it costs the club about $75 to

pay referees for the three-game minimum each team is guaranteed. If the club needs
to clear $60,000 from the tournament, how many teams should it invite?

Case Problem

The Clean Clothes Corner Laundry

When Molly Lai purchased the Clean Clothes Corner Laundry, she thought that because it
was in a good location near several high-income neighborhoods, she would automatically
generate good business if she improved the laundry's physical appearance. Thus, she initially
invested a lot of her cash reserves in remodeling the exterior and interior of the laundry.
However, she just about broke even in the year following her acquisition of the laundry,
which she didn't feel was a sufficient return, given how hard she had worked. Molly didn't
realize that the dry-cleaning business is very competitive and that success is based more on
price and quality service, including quickness of service, than on the laundry's appearance.

In order to improve her service, Molly is considering purchasing new dry-cleaning


equipment, including a pressing machine that could substantially increase the speed at which
she can dry-clean clothes and improve their appearance. The new machinery costs $16,200
installed and can clean 40 clothes items per hour (or 320 items per day). Molly estimates her
variable costs to be $0.25 per item dry-cleaned, which will not change if she purchases the
new equipment. Her current fixed costs are $1,700 per month. She charges customers $1.10
per clothing item.

1. What is Molly's current monthly volume?


2. If Molly purchases the new equipment, how many additional items will she have to
dry-clean each month to break even?
3. Molly estimates that with the new equipment she can increase her volume to 4,300
items per month. What monthly profit would she realize with that level of business
during the next 3 years? After 3 years?
4. Molly believes that if she doesn't buy the new equipment but lowers her price to $0.99
per item, she will increase her business volume. If she lowers her price, what will her
new break-even volume be? If her price reduction results in a monthly volume of
3,800 items, what will her monthly profit be?

5. Molly estimates that if she purchases the new equipment and lowers her price to $0.99
per item, her volume will increase to about 4,700 units per month. Based on the local
market, that is the largest volume she can realistically expect. What should Molly do?

CHAPTER THREE: MULTI-CRITERIA DECISION MAKING
INTRODUCTION

This chapter discusses the concept of multi-criteria decision making. Multi-criteria decision
making is applicable in a situation where multiple objectives and criteria are applied. This
topic, thus, covers tools to solve intricate decision problems.

OBJECTIVE

The objectives of the chapter are to let you be able to:

 Identify and analyze problems involving multiple decision criteria


 Apply and solve problems using analytical hierarchy process model, goal
programming model and scoring model.

In this chapter, we will discuss the following topics:

1. Introduction
2. Goal programming
3. Analytical hierarchy process
4. Scoring model

3.1. OVERVIEW

Multi-criteria Decision Making is the study of problems with several criteria, multiple
criteria, instead of a single objective when making a decision.

Three techniques are worth discussing here: goal programming, the analytical hierarchy process, and scoring models.

Goal programming is a variation of linear programming considering more than one objective
(goals) in the objective function.

The analytical hierarchy process develops a score for each decision alternative based on
comparisons of each under different criteria reflecting the decision maker’s preferences.

In all the linear programming models, a single objective was either maximized or minimized.
However, a company or an organization often has more than one objective, which may relate
to something other than profit or cost. In fact, a company may have several criteria, that is,
multiple criteria that it will consider in making a decision instead of just a single objective.
For example, in addition to maximizing profit, a company in danger of a labor strike might
want to avoid employee layoffs, or a company about to be fined for pollution infractions
might want to minimize the emission of pollutants. A company deciding between several
potential research and development projects might want to consider the probability of success
of each of the projects, the cost and time required for each, and potential profitability in
making a selection.

In this chapter we discuss three techniques that can be used to solve problems when they have
multiple objectives: goal programming, the analytical hierarchy process, and scoring models.
Goal programming is a variation of linear programming in that it considers more than one
objective (called goals) in the objective function. Goal programming models are set up in the
same general format as linear programming models, with an objective function and linear
constraints. The model solutions are very much like the solutions to linear programming
models. The format for the analytical hierarchy process and scoring models, however, is quite
different from that of linear programming. These methods are based on a comparison of
decision alternatives for different criteria that reflects the decision maker's preferences. The
result is a mathematical "score" for each alternative that helps the decision maker rank the
alternatives in terms of preferability.

3.2. GOAL PROGRAMMING

As mentioned, a goal programming model is very similar to a linear programming model,


with an objective function, decision variables, and constraints. Like linear programming, goal
programming models with two decision variables can be solved graphically and by using QM
for Windows and Excel. The module begins its presentation of goal programming as it did linear programming, by demonstrating through an example how to formulate a model. This
will illustrate the main differences between goal and linear programming.

Model Formulation

We will use the Beaver Creek Pottery Company example to illustrate the way a goal
programming model is formulated and the differences between a linear programming model
and a goal programming model.

The objective function, Z, represents the total profit to be made from bowls and mugs, given
that $40 is the profit per bowl and $50 is the profit per mug. The first constraint is for
available labor. It shows that a bowl requires 1 hour of labor, a mug requires 2 hours, and 40
hours of labor are available daily. The second constraint is for clay, and it shows that each
bowl requires 4 pounds of clay, each mug requires 3 pounds, and the daily limit of clay is 120
pounds.

Maximize Z = $40x1 + 50x2

Subject to:

1x1 + 2x2 ≤ 40 hours of labor

4x1 + 3x2 ≤ 120 pounds of clay

x1, x2 ≥ 0

Where: x1 = number of bowls produced

x2 = number of mugs produced

Adding objectives (goals) in order of importance, the company:

1. Does not want to use fewer than 40 hours of labor per day.
2. Would like to achieve a satisfactory profit level of $1,600 per day.
3. Prefers not to keep more than 120 pounds of clay on hand each day.
4. Would like to minimize the amount of overtime.

Goal Constraint Requirements

All goal constraints are equalities that include deviational variables d- and d+.

A positive deviational variable (d+) is the amount by which a goal level is exceeded.

A negative deviation variable (d-) is the amount by which a goal level is underachieved.

At least one or both deviational variables in a goal constraint must equal zero.

The objective function in a goal programming model seeks to minimize the deviation
from goals in the order of the goal priorities.

Goal Constraints and Objective Function

Labor goal constraint (priority 1: do not use fewer than 40 hours of labor; priority 4: minimize overtime):

Minimize P1d1-, P4d1+

Add the profit goal constraint (priority 2: achieve a profit of $1,600):

Minimize P1d1-, P2d2-, P4d1+

Add the material goal constraint (priority 3: avoid keeping more than 120 pounds of clay on hand):

Minimize P1d1-, P2d2-, P3d3+, P4d1+

Complete Goal Programming Model:

Minimize P1d1-, P2d2-, P3d3+, P4d1+

Subject to:

x1 + 2x2 + d1- - d1+ = 40

40x1 + 50x2 + d2- - d2+ = 1,600

4x1 + 3x2 + d3- - d3+ = 120

x1, x2, d1-, d1+, d2-, d2+, d3-, d3+ ≥ 0
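Although this module solves the model graphically and with QM for Windows and Excel, the preemptive priorities can also be handled as a sequence of ordinary linear programs: minimize the priority-1 deviation, lock in the value achieved, then minimize the priority-2 deviation, and so on. The following Python sketch (our own illustration; it assumes the scipy library is available) applies that idea to the model above:

from scipy.optimize import linprog

# Variable order: x1, x2, d1-, d1+, d2-, d2+, d3-, d3+
A_eq = [
    [1, 2, 1, -1, 0, 0, 0, 0],    # labor:  x1 + 2x2 + d1- - d1+ = 40
    [40, 50, 0, 0, 1, -1, 0, 0],  # profit: 40x1 + 50x2 + d2- - d2+ = 1,600
    [4, 3, 0, 0, 0, 0, 1, -1],    # clay:   4x1 + 3x2 + d3- - d3+ = 120
]
b_eq = [40, 1600, 120]

# Deviations to minimize, in priority order: d1- (P1), d2- (P2), d3+ (P3), d1+ (P4)
priority_vars = [2, 4, 7, 3]

A, b = [row[:] for row in A_eq], list(b_eq)
for j in priority_vars:
    c = [0.0] * 8
    c[j] = 1.0                    # objective: minimize this deviation only
    res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 8)
    lock = [0.0] * 8
    lock[j] = 1.0                 # lock in the best value before the next priority
    A.append(lock)
    b.append(res.fun)

print("bowls x1 =", round(res.x[0], 2), "mugs x2 =", round(res.x[1], 2))

For this model the sequence ends at x1 = 15 bowls and x2 = 20 mugs: the first three goals are fully met, and 15 hours of overtime (d1+) remain as the only deviation, which is the best or most satisfactory solution possible.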

Alternative Forms of Goal Constraints

Changing the fourth-priority goal to limit overtime to 10 hours instead of minimizing overtime:

d1+ + d4- - d4+ = 10

Minimize P1d1-, P2d2-, P3d3+, P4d4+

Addition of fifth-priority goals: achieve daily production goals of 30 bowls and 20 mugs (it is more important to achieve the goal for mugs):

x1 + d5- = 30 bowls

x2 + d6- = 20 mugs

Minimize P1d1-, P2d2-, P3d3+, P4d4+, 4P5d5-, 5P5d6-

Complete Model with New Goals:

Minimize P1d1-, P2d2-, P3d3+, P4d4+, 4P5d5-, 5P5d6-

Subject to:

x1 + 2x2 + d1- - d1+ = 40

40x1 + 50x2 + d2- - d2+ = 1,600

4x1 + 3x2 + d3- - d3+ = 120

d1+ + d4- - d4+ = 10

x1 + d5- = 30

x2 + d6- = 20

x1, x2, d1-, d1+, d2-, d2+, d3-, d3+, d4-, d4+, d5-, d6- ≥ 0

Goal programming graphical interpretation

In the graphical method, the model equations representing the constraints are drawn on the XY coordinate plane, as indicated in Figure 3.1. Then, successive priority goals are evaluated graphically through shading until the solution is obtained, as indicated in Figures 3.2 to 3.5.

Figure 3.1: Goal programming graphical depiction of model constraints

Figure 3.2: Goal programming graphical depiction of First priority goal through
shading

Figure 3.3: Goal programming graphical depiction of second priority goal through
shading

Figure 3.4: Goal programming graphical depiction of third priority goal through
shading

Figure 3.5: Goal programming graphical depiction of fourth-priority goal

Graphical solution

Goal programming solutions do not always achieve all goals, and they are not optimal in the linear programming sense; they achieve the best or most satisfactory solution possible. The solution for the graphical problem is given below along with the original model.

Goal Programming using computer

The computer solution using QM for Windows is given below for the Beaver Creek Pottery Company.

Minimize P1d1-, P2d2-, P3d3+, P4d1+

subject to:

x1 + 2x2 + d1- - d1+ = 40

40x1 + 50x2 + d2- - d2+ = 1,600

4x1 + 3x2 + d3- - d3+ = 120

x1, x2, d1-, d1+, d2-, d2+, d3-, d3+ ≥ 0

The screenshots showing the model entry, results, and graphical output are given below:

The Microsoft Excel model formulation and solution are presented in the following Excel screenshots.

Goal Programming

Solution for Altered Problem Using Excel

Minimize P1d1-, P2d2-, P3d3+, P4d4+, 4P5d5-, 5P5d6-

Subject to:

x1 + 2x2 + d1- - d1+ = 40

40x1 + 50x2 + d2- - d2+ = 1,600

4x1 + 3x2 + d3- - d3+ = 120

d1+ + d4- - d4+ = 10

x1 + d5- = 30

x2 + d6- = 20

x1, x2, d1-, d1+, d2-, d2+, d3-, d3+, d4-, d4+, d5-, d6- ≥ 0

The following screenshots indicate the process by which Excel solves the problem through Solver.

3.3. ANALYTICAL HIERARCHY PROCESS: OVERVIEW

AHP is a method for ranking several decision alternatives and selecting the best one when the
decision maker has multiple objectives, or criteria, on which to base the decision.

The decision maker makes a decision based on how the alternatives compare according to
several criteria.

The decision maker will select the alternative that best meets his or her decision criteria.

AHP is a process for developing a numerical score to rank each decision alternative based on
how well the alternative meets the decision maker’s criteria.

Dear learner, let's now see the following problem statement as an example of the analytical hierarchy process: a company (Southcorp, in this example) wants to select the best site for a new facility from three alternative sites, A, B, and C, evaluated on four criteria. We are interested in:

Top of the hierarchy: the objective (select the best site).

Second level: how the four criteria contribute to the objective.

Third level: how each of the three alternatives contributes to each of the four criteria.

Mathematically determine preferences for each site for each criterion.

Mathematically determine preferences for criteria (rank order of importance).

Combine these two sets of preferences to mathematically derive a score for each site.

Select the site with the highest score.

In a pair-wise comparison, two alternatives are compared according to a criterion and one is
preferred.

A preference scale assigns numerical values to different levels of performance as indicated


below:

Table 3.1: Analytical hierarchy process’s pair-wise comparison preference scale

A pair-wise comparison matrix needs to be developed, as indicated below. A pair-wise comparison matrix summarizes the pair-wise comparisons for a criterion.

Table 3.2: Analytical hierarchy process’s pair-wise comparison matrix

Table 3.3: Analytical hierarchy process developing preference within criteria

In synthesization, decision alternatives are prioritized within each criterion and then normalized. Bear in mind that synthesization involves the following: each column of the pair-wise comparison matrix is divided by its column sum, and the rows of the resulting normalized matrix are averaged to give a preference vector. The normalized matrix with row averages is given below:

Table 3.4: Analytical hierarchy process’s normalized matrix with row averages

The row averages represent the values for the market criterion in the criteria preference matrix.

Table 3.5: Analytical hierarchy process’s criteria preference matrix

The analytical hierarchy process helps to rank criteria using a pair-wise comparison matrix and a normalized matrix for the criteria with row averages. These matrices, along with the preference vector, are given below:

Table 3.6: Analytical hierarchy process’s ranking criteria

Finally, an overall ranking needs to be developed, as given in the following table.

Table 3.7: Analytical hierarchy process’s developing an overall ranking

Dear learner, to encapsulate, the entire set of mathematical steps of the analytical hierarchy process is summarized below:
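One way to encapsulate these steps computationally: the following Python sketch (our own illustration, with made-up pair-wise comparison matrices) normalizes each column by its column sum, averages across rows to get a preference vector, and then combines the criterion preference vectors with the criteria weights to score each alternative:

import numpy as np

def preference_vector(pairwise):
    """Normalize each column by its sum, then average across each row."""
    A = np.asarray(pairwise, dtype=float)
    return (A / A.sum(axis=0)).mean(axis=1)

# Hypothetical pair-wise comparison matrices: 3 alternatives under 2 criteria
crit1 = [[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]
crit2 = [[1, 1/4, 1/2], [4, 1, 3], [2, 1/3, 1]]

# Hypothetical pair-wise comparison matrix for the two criteria themselves
criteria = [[1, 2], [1/2, 1]]

P = np.column_stack([preference_vector(crit1), preference_vector(crit2)])
w = preference_vector(criteria)            # criteria weights

scores = P @ w                             # overall score for each alternative
for i, s in enumerate(scores, start=1):
    print(f"Alternative {i}: {s:.4f}")

The preference_vector function carries out exactly the column-normalize-then-row-average step described above.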

Analytical Hierarchy Process (AHP) Consistency

AHP is based primarily on the pair-wise comparisons a decision maker uses to establish
preferences between decision alternatives for different criteria. The normal procedure in AHP
for developing these pairwise comparisons is for an interviewer to elicit verbal preferences
from the decision maker, using the preference scale. However, when a decision maker has to
make a lot of comparisons (i.e., three or more), he or she can lose track of previous responses.
Because AHP is based on these responses, it is important that they be in some sense valid and
especially that the responses be consistent. That is, a preference indicated for one set of
pairwise comparisons needs to be consistent with another set of comparisons.

Using our site selection example, suppose for the income level criterion, Southcorp indicates
that A is "very strongly preferred" to B and that A is "moderately preferred" to C. That's fine,
but then suppose Southcorp says that C is "equally preferred" to B for the same criterion.
That comparison is not entirely consistent with the previous two pairwise comparisons. To
say that A is strongly preferred over B and moderately preferred over C and then turn around
and say C is equally preferred to B does not reflect a consistent preference rating for the three
sets of pairwise comparisons. A more logical comparison would be that C is preferred to B to
some degree. This kind of inconsistency can creep into AHP when the decision maker is
asked to make verbal responses for a lot of pairwise comparisons. In general, such inconsistencies are usually not a serious problem; some degree of slight inconsistency is expected. However, a
consistency index can be computed that measures the degree of inconsistency in the pairwise
comparisons.

A consistency index measures the degree of inconsistency in pair-wise comparisons. To


demonstrate how to compute the consistency index (CI), we will check the consistency of the
pairwise comparisons for the four site selection criteria. This matrix, shown as follows, is
multiplied by the preference vector for the criteria:

The product of the multiplication of this matrix and vector is computed as follows:

0.8328/0.1993 = 4.1786

2.8524/0.6535 = 4.3648

0.3474/0.0860 = 4.0401

0.2473/0.0612 = 4.0422

Sum = 16.6257

If the decision maker, Southcorp, were perfectly consistent, then each of these ratios would be exactly 4, the number of items being compared (in this case, four criteria). Next, we average these values by summing them and dividing by 4:

16.6257/4 = 4.1564

• The consistency index, CI, is computed using the following formula:

CI = (4.1564 - n)/(n - 1)

where

• n = the number of items being compared

• 4.1564 = the average we computed previously

CI = (4.1564 - 4)/(4 - 1) = 0.0521

• If CI = 0, then Southcorp would be a perfectly consistent decision maker. Because


Southcorp is not perfectly consistent, the next question is the degree of inconsistency
that is acceptable. An acceptable level of consistency is determined by comparing the
CI to a random index, RI, which is the consistency index of a randomly generated
pairwise comparison matrix.
• The RI has the values depending on the number of items, n, being compared. In our
example, n = 4 because we are comparing four criteria.

RI values for n items being compared

n 2 3 4 5 6 7 8 9 10

RI 0 0.58 0.90 1.12 1.24 1.32 1.41 1.45 1.5

• The degree of consistency for the pairwise comparisons in the decision criteria matrix
is determined by computing the ratio of CI to RI:

CI/RI=0.0521/0.90=0.0580

In general, the degree of consistency is satisfactory if CI/RI < 0.10, and in this case, it is. If CI/RI > 0.10, there are probably serious inconsistencies in the pairwise comparisons, and the AHP results may not be meaningful.
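The consistency check can be scripted in the same way. This Python sketch (our own illustration, with a made-up matrix) computes CI and the CI/RI ratio for any pairwise comparison matrix, following the steps above:

import numpy as np

RI = {2: 0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.5}

def consistency_ratio(pairwise):
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    w = (A / A.sum(axis=0)).mean(axis=1)   # preference vector (row averages)
    ratios = (A @ w) / w                   # each ratio is close to n if consistent
    ci = (ratios.mean() - n) / (n - 1)     # consistency index
    return ci / RI[n]                      # degree of consistency

# Hypothetical 4 x 4 criteria comparison matrix; CI/RI < 0.10 is satisfactory
M = [[1, 5, 5, 3], [1/5, 1, 2, 1/4], [1/5, 1/2, 1, 1/4], [1/3, 4, 4, 1]]
print("CI/RI =", round(consistency_ratio(M), 4))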

Activity 3.1. Dear learner, solve the following goal programming problems:

Problem 1

Problem statement

Requirement

Problem 2.

The Bay City Parks and Recreation Department has received a federal grant of $600,000
to expand its public recreation facilities. City council representatives have demanded four
different types of facilities: gymnasiums, athletic fields, tennis courts, and swimming
pools. In fact, the demand by various communities in the city has been for 7 gyms, 10
athletic fields, 8 tennis courts, and 12 swimming pools. Each facility costs a certain
amount, requires a certain number of acres, and is expected to be used a certain amount,
as follows:

Facility Cost Required Acres Expected Usage (people/week)

Gymnasium $80,000 4 1,500

Athletic field 24,000 8 3,000


Tennis court 15,000 3 500

Swimming pool 40,000 5 1,000

The Parks and Recreation Department has located 50 acres of land for construction (although
more land could be located, if necessary). The department has established the following
goals, listed in order of their priority:

1. The department wants to spend the total grant because any amount not spent must be
returned to the government.
2. The department wants the facilities to be used by a total of at least 20,000 people each
week.
3. The department wants to avoid having to secure more than the 50 acres of land
already located.
4. The department would like to meet the demands of the city council for new facilities.
However, this goal should be weighted according to the number of people expected to
use each facility.

a. Formulate a goal programming model to determine how many of each type of facility
should be constructed to best achieve the city's goals.

Problem 3

The Growall Fertilizer Company produces three types of fertilizer: Supergro, Dynaplant,
and Soilsaver. The company has the capacity to produce a maximum of 2,000 tons of
fertilizer a week. It costs $800 to produce a ton of Supergro, $1,500 for Dynaplant, and
$500 for Soilsaver. The production process requires 10 hours of labor for a ton of
Supergro, 12 hours for a ton of Dynaplant, and 18 hours for a ton of Soilsaver. The
company has 800 hours of normal production labor available each week. Each week the
company can expect a demand for 800 tons of Supergro, 900 tons of Dynaplant, and
1,100 tons of Soilsaver. The company has established the following goals, in order of
their priority:

1. The company does not want to spend over $20,000 per week on production, if
possible.
2. The company would like to limit overtime to 100 hours per week.
3. The company wants to meet demand for all three fertilizers; however, it is twice as
important to meet the demand for Supergro as it is to meet the demand for Dynaplant,
and it is twice as important to meet the demand for Dynaplant as it is to meet the
demand for Soilsaver.
4. It is desirable to avoid producing under capacity, if possible.
5. Because of union agreements, the company wants to avoid underutilization of labor.

a. Formulate a goal programming model to determine the number of tons of each brand
of fertilizer to produce to satisfy the goals.

Solve this model.

Problem 4

The East Midvale Textile Company produces denim and brushed-cotton cloth. The average
production rate for both types of cloth is 1,000 yards per hour, and the normal weekly
production capacity (running two shifts) is 80 hours. The marketing department estimates that
the maximum weekly demand is for 60,000 yards of denim and 35,000 yards of brushed
cotton. The profit is $3.00 per yard for denim and $2.00 per yard for brushed cotton. The
company has established the following four goals, listed in order of importance:

1. Eliminate underutilization of production capacity to maintain stable employment


levels.
2. Limit overtime to 10 hours.
3. Meet demand for denim and brushed cotton weighted according to profit for each.
4. Minimize overtime as much as possible.

Formulate a goal programming model to determine the number of yards (in 1,000-yard lots)
to produce to satisfy the goals.

Problem 5

A rural clinic hires its staff from nearby cities and towns on a part-time basis. The clinic
attempts to have a general practitioner (GP), a nurse, and an internist on duty during at least a
portion of each week. The clinic has a weekly budget of $1,200. A GP charges the clinic $40
per hour, a nurse charges $20 per hour, and an internist charges $150 per hour. The clinic has
established the following goals, in order of priority:

1. A nurse should be available at least 30 hours per week.


2. The weekly budget of $1,200 should not be exceeded.
3. A GP or an internist should be available at least 20 hours per week.
4. An internist should be available at least 6 hours per week.

a. Formulate a goal programming model to determine the number of hours to hire each
staff member to satisfy the various goals.
b. Solve the model

Problem 6

Mac's Warehouse is a large discount store that operates 7 days per week. The store needs the
following number of full-time employees working each day of the week:

Day Number of Employees Day Number of Employees

Sunday 47 Thursday 34

Monday 22 Friday 43

Tuesday 28 Saturday 53

Wednesday 35

Each employee must work 5 consecutive days each week and then have 2 days off. For
example, any employee who works Sunday through Thursday has Friday and Saturday off.
The store currently has a total of 60 employees available to work. Mac's has developed the
following set of prioritized goals for employee scheduling:

1. The store would like to avoid hiring any additional employees.


2. The most important days for the store to be fully staffed are Saturday and Sunday.
3. The next most important day to be fully staffed is Friday.

4. The store would like to be fully staffed the remaining 4 days in the week.

a. Formulate a goal programming model to determine the number of employees who


should begin their 5-day workweek each day of the week to achieve the store's
objectives.
b. Solve this model

3.4. SCORING MODEL: OVERVIEW

For selecting among several alternatives according to various criteria, a scoring model is a
method similar to AHP, but it is mathematically simpler. There are several versions of
scoring models. In the scoring model that we will use, the decision criteria are weighted in
terms of their relative importance, and each decision alternative is graded in terms of how
well it satisfies the criteria, according to the following formula:

Si = Σj gij wj

where,

wj = a weight between 0 and 1.00 assigned to criterion j, indicating its relative importance,
where 1.00 is extremely important and 0 is not important at all. The sum of the total
weights equals 1.00.

gij = a grade between 0 and 100 indicating how well the decision alternative i satisfies
criterion j, where 100 indicates extremely high satisfaction and 0 indicates virtually no
satisfaction.

Si = the total "score" for decision alternative i, where the higher the score, the better.
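Because the scoring formula is just a weighted sum, it is simple to compute. The following Python sketch (our own illustration; the weights and grades are hypothetical, not those of the example that follows) scores three alternatives on three criteria and reports the best:

# Hypothetical criteria weights wj (must sum to 1.00)
weights = [0.5, 0.3, 0.2]

# Hypothetical grades gij (0-100), one row of criterion grades per alternative
grades = {
    "Mall 1": [75, 60, 90],
    "Mall 2": [85, 80, 70],
    "Mall 3": [65, 90, 80],
}

# Si = sum over j of gij * wj
scores = {alt: sum(w * g for w, g in zip(weights, row))
          for alt, row in grades.items()}

for alt, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{alt}: S = {s:.1f}")
print("Recommended:", max(scores, key=scores.get))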

To demonstrate the scoring model, we will use an example. Sweats and Sweaters is a chain of
stores specializing in cotton apparel. The company wants to open a new store in one of four
malls around the Atlanta metropolitan area. The company has indicated five criteria that are
important in its decision about where to locate: proximity of schools and colleges, area
median income, mall vehicle traffic flow and parking, quality and size (in terms of number of
stores in the mall), and proximity of other malls or shopping areas. The company has
weighted each of these criteria in terms of its relative importance in the decision-making

process, and it has analyzed each potential mall location and graded them according to each
criterion as shown in the following table:

Table 3.8: Scoring model example

The Excel solution of this model is given below:

Activity 3.2: Dear learner solve the following problem:

Federated Health Care has contracted to be Tech's primary health care provider for faculty
and staff. There are three major hospitals in the area (within 35 miles), County, Memorial, and General, that have full-service emergency rooms. Federated wants to designate one of the hospitals as the primary care emergency room for its members. The company's criteria for
selection are quality of medical care, as determined by a patient survey; distance to the
emergency room by the majority of its members; speed of medical attention at the emergency

room; and cost. Following are the pairwise comparisons of the emergency rooms for each of
the four criteria and the pairwise comparisons for the criteria:

Medical Care

Hospital County Memorial General

County 1 1/6 1/3

Memorial 6 1 3

General 3 1/3 1

Distance

Hospital County Memorial General

County 1 7 4

Memorial 1/7 1 2

General 1/4 1/2 1

Speed of Attention

Hospital County Memorial General

County 1 1/2 3

Memorial 2 1 4

General 1/3 1/4 1

Cost

Hospital County Memorial General

County 1 6 4

Memorial 1/6 1 1/2

General 1/4 2 1

Criterion Medical Care Distance Speed of Attention Cost

Medical care 1 8 6 3

Distance 1/8 1 1/2 1/6

Speed of attention 1/6 2 1 1/4

Cost 1/3 6 4 1

Using AHP, determine which hospital emergency room Federated Health Care should
designate as its primary care provider.

CHAPTER SUMMARY

In this chapter, we discussed goal programming in detail. We learned that goal programming
model is very similar to a linear programming model, with an objective function, decision
variables, and constraints. Like linear programming, goal programming models with two
decision variables can be solved graphically and by using QM for Windows and Excel.
However, we use Analytical Hierarchy Process (AHP) for several alternatives. AHP is a
method for ranking several decision alternatives and selecting the best one when the decision
maker has multiple objectives, or criteria, on which to base the decision. Further, we
discussed a mathematical model similar to but simpler than AHP. Such model is called
scoring model.

END CHAPTER QUESTIONS

Problem 1

Dampier Associates is a holding company that specializes in buying out small to medium-
sized textile companies. The company is currently considering purchasing three
companies in the Carolinas: Alton Textiles, Bonham Mills, and Core Textiles. The main

criteria the company uses to determine which companies it will purchase are current
profitability and growth potential. Dampier moderately prefers growth potential over
profitability in making a purchase decision. Dampier's pair wise comparisons for the three
target textile companies it is considering are as follows:

Profitability

Company A B C

A 1 1/3 7

B 3 1 9

C 1/7 1/9 1


Growth Potential

Company A B C

A 1 1/2 1/5

B 2 1 1/3

C 5 3 1

a. Develop an overall ranking of the three companies for Dampier Associates by


using AHP.

Problem 2

Carol Latta is visiting hotels in Los Angeles to decide where to hold a convention for a
national organization of college business school teachers she represents. There are three
hotels from which to choose: the Cheraton, the Milton, and the Harriott. The criteria she
is to use to make her selection are ambiance, location (based on safety and walking
distance to attractions and restaurants), and cost to the organization. Following are the
pairwise comparisons she has developed that indicate her preference for each hotel for
each criterion and her pairwise comparisons for the criteria:

Ambiance

Hotel Cheraton Milton Harriott

Cheraton 1 1/2 1/5

Milton 2 1 1/3

Harriott 5 3 1

Location

Hotel Cheraton Milton Harriott

Cheraton 1 5 3

Milton 1/5 1 1/4

Harriott 1/3 4 1

Cost

Hotel Cheraton Milton Harriott

Cheraton 1 2 5

Milton 1/2 1 2

Harriott 1/5 1/2 1

Criterion Ambiance Location Cost

Ambiance 1 2 4

Location 1/2 1 3

Cost 1/4 1/3 1

Develop an overall ranking of the three hotels, using AHP, to help Carol Latta decide where
to hold the meeting.

Problem 3

Arsenal Electronics is to construct a new $1.2 billion semiconductor plant and has selected
four small towns in the Midwest as potential sites. The important decision criteria and grades
for each town are as follows:

Town

Decision Criterion Weight Abbeton Bayside Cane Creek Dunnville

Work ethic 0.18 80 90 70 75

Quality of life 0.16 75 85 95 90

Labor laws/unionization 0.12 90 90 60 70

Infrastructure 0.10 60 50 60 70

Education 0.08 75 90 85 95

Labor skill and education 0.07 75 65 70 75

Cost of living 0.06 70 75 85 75

Taxes 0.05 65 70 55 60

Incentive package 0.05 90 95 70 75

Government regulations 0.03 40 50 65 55

Environmental regulations 0.03 65 60 70 80

Transportation 0.03 90 80 95 80

Space for expansion 0.02 90 95 90 90

Urban proximity 0.02 60 90 70 80

Develop a scoring model to determine in which town the plant should be built.

CHAPTER FOUR: SUPPLY CHAIN MODELING AND SIMULATION

INTRODUCTION
Dear learner, in this chapter you will learn basic ideas about simulation and how to conduct Monte Carlo simulation. The chapter will address the concepts of systems and simulation, along with the way simulation works in supply chain modeling.
OBJECTIVE
By the end of this chapter you should be able to:
● describe the features of a simulation approach
● construct a simulation flowchart
● complete a manual simulation
● interpret information generated from a computer simulation
The following topics will be discussed in this chapter:
1. Introduction to systems and simulations
2. Monte Carlo simulations
3. Supply chain simulation modeling

4.1. INTRODUCTION TO SYSTEMS AND SIMULATIONS

There are many situations in business decision making where it may be unrealistic to expect an optimum solution. One group of models – simulation models – is concerned with this type of situation. These models differ from most other quantitative models in that they are primarily descriptive rather than prescriptive: they are concerned with describing, in modelling terms, the business problem we are investigating rather than finding the solution to such a problem. Simulation models typically generate additional information about the problem – frequently information that no other model can produce – and this information must then be assessed and used by the decision maker as appropriate. Simulation is a very flexible modelling approach and has been developed and applied across a variety of business problems.

4.1.1. Introduction to Systems and models


A system is a group of objects joined together in some regular interaction or interdependence toward the accomplishment of some purpose. In a typical production system, for example,

machines, components, and workers operate jointly to produce vehicles as you see it in the
following picture.

Figure 4.1: Production system

System Environment
A system operates in its environment and is affected by changes that occur outside its boundaries. Such changes are said to occur in the system environment, which in turn is separated from the system by the system boundary.

A system also has a set of components. The activities of a system can be described as exogenous or endogenous: exogenous activities occur outside the system, while endogenous activities occur within it. The elements in the following figures exemplify typical system components.

Figure 4.2a: System components

Figure 4.2b: Examples of system components

Types of Systems
There are two types of systems, based on how the set of state variables changes over time: discrete and continuous. The following figure describes the two systems.

Figure 4.3: Types of systems

Dear reader, bear in mind that it is often possible to use discrete event simulations to
approximate the behaviour of a continuous system. This greatly simplifies the analysis.

How to study a system?


Literally, a system can be studied through experimenting with the actual system or experimenting with a model of it. In the latter case, a physical model or a mathematical model can be employed; for mathematical models, an analytical solution or simulation can be used. The entire hierarchy of ways to study a system is depicted in the following picture:

Figure 4.4: Way to study a system

Dear learner, let’s now turn our attention to models and how they are used in systems and
simulations. Let’s first see why we use models.

Why are Models Used?


• It is not possible to experiment with the actual system, e.g.: the experiment is
destructive
• The system might not exist, i.e. the system is in the design stage
Example: Bank
- Reducing the number of tellers to study the effect on the length of waiting lines may
annoy the customers such that they will move their accounts to a competitor
Meaning and facts about models

The following are important facts about model:
- A model is a representation of a system for the purpose of studying that system
- It is only necessary to consider those aspects of the system that affect the problem
under investigation
- The model is a simplified representation of the system
- The model should be sufficiently detailed to permit valid conclusions to be drawn
about the actual system
- Different models of the same system may be required as the purpose of the
investigation changes. Basically, there are two types of models: mathematical and
physical.
• A Mathematical Model utilizes symbolic notations and equations to represent a
system
- Example: current and voltage equations are mathematical models of an electric
circuit
• A Physical Model is a larger or smaller version of an object
- Example: enlargement of an atom or a scaled version of the solar system
Classification of simulation models
Simulation models are classified into three broad categories based on diverse considerations.
The following figure shows such classifications.
Figure 4.5: Classifications of simulation models

1. Static and Dynamic models


Models are classified as static and dynamic based on whether the model captures
reality at a point in time or changes in reality over a period. The following table
summarizes the nature of these models.

Table 4.1: Static and dynamic models

2. Deterministic and stochastic models


Models are classified as deterministic or stochastic based on whether they contain random variables. The following table summarizes the nature of these models.

Table 4.2: Deterministic and stochastic models

3. Discrete and continuous models


Models are classified as discrete or continuous based on the values their variables can take. The former is represented by discrete or disconnected values (such as 1, 2, ...), while the latter can take on any value, including decimals. The following table summarizes the fact that neither type can always be used in simulation; the choice depends on the characteristics of the system and the objectives of the study.

Table 4.3: Discrete and continuous models

4.1.2. Introduction to Simulation


Simulation is the imitation of a real-world process or system over time (Banks et al.). It is
used for the analysis and study of complex systems. Simulation requires the development of a
simulation model and then conducting computer-based experiments with the model to
describe, explain, and predict the behaviour of the real system. Simulation has been applied in
a wide range of areas, as the brief history below illustrates.

The history of simulation goes back 5,000 years, to Chinese war games, called weich’i. Then,
in 1780, the Prussians used the games to help train their army. Since then, all major military
powers have used war games to test out military strategies under simulated environments.
From military or operational gaming, a new concept, Monte Carlo simulation, was developed
as a quantitative technique by the great mathematician John von Neumann during World War
II. Working with neutrons at the Los Alamos Scientific Laboratory, von Neumann used
simulation to solve physics problems that were too complex or expensive to analyze by hand
or by physical model. The random nature of the neutrons suggested the use of a roulette
wheel in dealing with probabilities. Because of the gaming nature, von Neumann called it the
Monte Carlo model of studying laws of chance. With the advent and common use of business
computers in the 1950s, simulation grew as a management tool. Specialized computer
languages were developed in the 1960s (GPSS and SIMSCRIPT) to handle large-scale
problems more effectively. In the 1980s, prewritten simulation programs to handle situations
ranging from queuing to inventory were developed. They had such names as Xcell, SLAM,
SIMAN, Witness, and MAP/1. Previous topics usually dealt with mathematical models and

formulas that could be applied to certain types of problems. The solution approaches to these
problems were, for the most part, analytical. However, not all real-world problems can be
solved by applying a specific type of technique and then performing the calculations.
Some problem situations are too complex to be represented by the concise techniques
presented. In such cases, simulation is an alternative form of analysis.
• Analogue simulation is a form of simulation that is familiar to most people. In
analogue simulation, an original physical system is replaced by an analogous
physical system that is easier to manipulate. Much of the experimentation in
staffed spaceflight was conducted using physical simulation that re-created the
conditions of space. For example, conditions of weightlessness were simulated using
rooms filled with water. Other examples include wind tunnels that simulate the
conditions of flight and treadmills that simulate automobile tire wear in a laboratory
instead of on the road
• In computer mathematical simulation, a system is replicated with a mathematical
model that is analyzed by using the computer.
• Simulation is the operation of a model, or simulator, which is a representation of
the system or organism. The model is amenable to manipulations that would be
impossible, too expensive, or impractical to perform on the entity it portrays.

Simulation is the process of designing a model of a real system and conducting
experiments with this model for the purpose of understanding the behaviour of the
operation of the system.
“X simulates Y” is true if and only if:
• X and Y are formal systems
• Y is taken to be the real system
• X is taken to be an approximation to the real system
• The rules of validity in X are not error-free (otherwise X would become the real system)
When is Simulation Appropriate?
 Simulation enables the study of, and interaction with, the internal actions of a real
system
 The effects of changes in state variables on the model’s behaviour can be observed

 The knowledge gained from the simulation model can be used to improve the design
of the real system under investigation
 Changing inputs and observing outputs can produce valuable insights about the
importance of variables and how they interact
 Simulations can be used to experiment with different designs and policies before
implementation so as to prepare for what might happen
 Simulations can be used to verify analytic solutions
When is Simulation Not Appropriate?
 The problem can be solved by common sense
 The problem can be solved analytically
 It is less expensive to perform direct experiments
 Costs of modeling and simulation exceed savings
 Resources or time are not available
 Lack of necessary data
 System is very complex or cannot be defined
Advantages of Simulation
 Effects of variations in the system parameters can be observed without disturbing the
real system
 New system designs can be tested without committing resources for their acquisition
 Hypotheses on how or why certain phenomena occur can be tested for feasibility
 Time can be expanded or compressed to allow for speed up or slowdown of the
phenomenon under investigation
 Insights can be obtained about the interactions of variables and their importance
 Bottleneck analysis can be performed in order to discover where work processes are
being delayed excessively
1. It is relatively straightforward and flexible. It can be used to compare many different
scenarios side-by-side.
2. Recent advances in software make some simulation models very easy to develop.
3. It can be used to analyze large and complex real-world situations that cannot be solved by
conventional quantitative analysis models. For example, it may not be possible to build and
solve a mathematical model of a city government system that incorporates important
economic, social, environmental, and political factors. Simulation has been used successfully
to model urban systems, hospitals, educational systems, national and state economies, and
even world food systems.

4. Simulation allows what-if? types of questions. Managers like to know in advance what
options are attractive. With a computer, a manager can try out several policy decisions within
a matter of minutes.
5. Simulations do not interfere with the real-world system. It may be too disruptive, for
example, to experiment with new policies or ideas in a hospital, school, or manufacturing
plant. With simulation, experiments are done with the model, not on the system itself.
6. Simulation allows us to study the interactive effect of individual components or variables
to determine which ones are important.
7. “Time compression” is possible with simulation. The effect of ordering, advertising or
other policies over many months or years can be obtained by computer simulation in a short
time.
8. Simulation allows for the inclusion of real-world complications that most quantitative
analysis models cannot permit. For example, some queuing models require exponential or
Poisson distributions; some inventory and network models require normality.
Disadvantages of Simulation
 Model building requires special training
 Simulation results are often difficult to interpret. Most simulation outputs are
random variables - based on random inputs – so it can be hard to distinguish
whether an observation is the result of system inter-relationship or randomness
 Simulation modeling and analysis can be time consuming and expensive
1. Good simulation models for complex situations can be very expensive. It is often a
long, complicated process to develop a model. A corporate planning model, for
example, may take months or even years to develop.
2. Simulation does not generate optimal solutions to problems as do other quantitative
analysis techniques such as economic order quantity, linear programming, or PERT. It
is a trial-and-error approach that can produce different solutions in repeated runs.
3. Managers must generate all of the conditions and constraints for solutions that they
want to examine. The simulation model does not produce answers by itself.
4. Each simulation model is unique. Its solutions and inferences are not usually
transferable to other problems.
Offsetting the Disadvantages of Simulation
 Utilize simulation packages that only need input for their operation, e.g.: Arena
 Many simulation packages have output analysis capabilities, e.g. Arena

 Simulation has become faster due to advances in hardware
Steps in simulation study
A simulation study proceeds through several stages. One model divides the steps into five
phases, and another describes a 12-step process, as indicated in the following diagrams:

Figure 4.6: A five-phase simulation study

Figure 4.7: A twelve-step simulation study

Activity 4.1: Dear learner, please identify the commonalities and differences between the
above two approaches to simulation study.
___________________________________________________________________________
___________________________________________________________________________
___________________________________________________________________________
___________________________________________________________________________
They are basically similar, except that the 12 steps are aggregated into the 5 phases of the
first approach.

4.2. MONTE CARLO SIMULATION

When a system contains elements that exhibit chance in their behavior, the Monte Carlo
method of simulation can be applied. The basic idea in Monte Carlo simulation is to
generate values for the variables making up the model being studied. There are a lot of
variables in real-world systems that are probabilistic in nature and that we might want to
simulate. A few examples of these variables follow:

1. Inventory demand on a daily or weekly basis


2. Lead time for inventory orders to arrive
3. Times between machine breakdowns
4. Times between arrivals at a service facility
5. Service times
6. Times to complete project activities
7. Number of employees absent from work each day

Some of these variables, such as the daily demand and the number of employees absent, are
discrete and must be integer valued. For example, the daily demand can be 0, 1, 2, 3, and so
forth. But daily demand cannot be 4.7362 or any other non-integer value. Other variables,
such as those related to time, are continuous and are not required to be integers because time
can be any value. When selecting a method to generate values for the random variable, this
characteristic of the random variable should be considered. Examples of both will be given in
the following sections.
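To make the distinction concrete before those sections, the short Python sketch below shows one common way to generate each kind of random variable: a table lookup on the cumulative distribution for a discrete variable, and the inverse-transform method for a continuous one. The demand distribution and the mean service time here are illustrative placeholders, not data from this module.

```python
import math
import random

# Discrete random variable: demand, sampled by the cumulative-probability
# (table lookup) method. The distribution below is a placeholder.
DEMAND_VALUES = [0, 1, 2, 3]
DEMAND_PROBS = [0.2, 0.4, 0.3, 0.1]

def sample_demand(rng):
    r = rng.random()                      # uniform random number in [0, 1)
    cumulative = 0.0
    for value, p in zip(DEMAND_VALUES, DEMAND_PROBS):
        cumulative += p
        if r < cumulative:                # r falls in this value's interval
            return value
    return DEMAND_VALUES[-1]              # guard against rounding at 1.0

# Continuous random variable: a service time that can take any value,
# generated by the inverse-transform method for the exponential distribution.
def sample_service_time(rng, mean=5.0):
    r = rng.random()
    return -mean * math.log(1.0 - r)      # inverse CDF of the exponential

rng = random.Random(42)
print(sample_demand(rng), round(sample_service_time(rng), 2))
```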

The basis of Monte Carlo simulation is experimentation on the chance (or probabilistic)
elements through random sampling. The technique breaks down into five simple steps:

Five Steps of Monte Carlo Simulation


1. Establishing probability distributions for important input variables
2. Building a cumulative probability distribution for each variable in step 1
3. Establishing an interval of random numbers for each variable
4. Generating random numbers
5. Simulating a series of trials
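As a minimal sketch of steps 1 to 3, the Python snippet below builds the cumulative distribution and the two-digit random-number intervals for the laptop demand distribution used in the ComputerWorld example later in this section (P(0) = .20, P(1) = .40, P(2) = .20, P(3) = .10, P(4) = .10). The resulting intervals reproduce the partitioning of the Monte Carlo wheel described below.

```python
# Steps 1-3: probability distribution -> cumulative distribution ->
# random-number intervals (two-digit numbers 00-99).
demand_probs = {0: 0.20, 1: 0.40, 2: 0.20, 3: 0.10, 4: 0.10}

cumulative = 0.0
for demand, p in demand_probs.items():
    low = int(round(cumulative * 100))        # first number of the interval
    cumulative += p
    high = int(round(cumulative * 100)) - 1   # last number of the interval
    print(f"demand {demand}: P = {p:.2f}, cumulative = {cumulative:.2f}, "
          f"random numbers {low:02d}-{high:02d}")
# Output: demand 0 -> 00-19, 1 -> 20-59, 2 -> 60-79, 3 -> 80-89, 4 -> 90-99
```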

In running a simulation, it is important to be aware of the following:

• The principle behind the Monte Carlo simulation technique is to represent the given
system under analysis by a system described by some known probability distribution,
and then to draw random samples from that probability distribution by means of random
numbers.
• To truly reflect the system being simulated, the artificially created random numbers
must have the following characteristics:
• The random numbers must be uniformly distributed. This means that each random
number in the interval of random numbers (i.e., 0 to 1 or 0 to 100) has an equal
chance of being selected. If this condition is not met, then the simulation results will
be biased by the random numbers that have a more likely chance of being selected.
• The numerical technique for generating random numbers should be efficient. This
means that the random numbers should not degenerate into constant values or recycle
too frequently.
• The sequence of random numbers should not reflect any pattern. For example, the
sequence of numbers 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5,
6, 7, 8, 9, 0, and so on, although uniform, is not random.
• One characteristic of some systems that makes them difficult to solve analytically is
that they consist of random variables represented by probability distributions. Thus,
a large proportion of the applications of simulations are for probabilistic models.
• The term Monte Carlo has become synonymous with probabilistic simulation in
recent years. The Monte Carlo technique can be more narrowly defined as a technique
for selecting numbers randomly from a probability distribution (i.e., “sampling”) for
use in a trial (computer) run of a simulation. The Monte Carlo technique is not a type
of simulation model but rather a mathematical process used within a simulation.
• Monte Carlo is a technique for selecting numbers randomly from a probability
distribution.
• The name Monte Carlo is appropriate because the basic principle behind the process
is the same as in the operation of a gambling casino, for example, in Monaco. In
Monaco such devices as roulette wheels, dice, and playing cards are used. These
devices produce numbered results at random from well-defined populations. For
example, a 7 resulting from thrown dice is a random value from a population of 11
possible numbers (i.e., 2 through 12). This same process is employed, in principle, in
the Monte Carlo process used in simulation models.
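The module does not prescribe a particular numerical technique for producing random numbers, but a classic example of the kind of efficient generator described above is the linear congruential generator (LCG). The sketch below uses the widely published "Numerical Recipes" constants; it illustrates the idea only, and is not the generator behind the random number table shown later in this section.

```python
# A minimal linear congruential generator (LCG): each new state is
# (a * state + c) mod m. With well-chosen constants the stream is uniform,
# cheap to compute, and has a very long cycle before it repeats.
class LCG:
    def __init__(self, seed=12345):
        self.state = seed
        self.a, self.c, self.m = 1664525, 1013904223, 2**32  # Numerical Recipes constants

    def uniform(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m         # uniform value in [0, 1)

    def two_digit(self):
        return int(self.uniform() * 100)   # 0-99, like the random number table

rng = LCG(seed=2024)
print([rng.two_digit() for _ in range(10)])
```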

To add another example, the manager of Computer-World, a store that sells computers and
related equipment, is attempting to determine how many laptop PCs the store should
order each week. A primary consideration in this decision is the average number of laptop
computers that the store will sell each week and the average weekly revenue generated
from the sale of laptop PCs. A laptop sells for $4,300. The number of laptops demanded
each week is a random variable (which we will define as x) that ranges from 0 to 4. From
past sales records, the manager has determined the frequency of demand for laptop PCs for
the past 100 weeks. From this frequency distribution, a probability distribution of demand
can be developed, as shown in the table given below.

Table 4.4: Probability distribution of demand for laptop PCs

• The purpose of the Monte Carlo process is to generate the random variable, demand,
by sampling from the probability distribution, P(x). The demand per week can be
randomly generated according to the probability distribution by spinning a wheel that
is partitioned into segments corresponding to the probabilities.

• In the Monte Carlo process, values for a random variable are generated by sampling
from a probability distribution.
• Because the surface area on the roulette wheel is partitioned according to the
probability of each weekly demand value, the wheel replicates the probability
distribution for demand if the values of demand occur in a random manner
• By spinning the wheel, the manager artificially reconstructs the purchase of PCs
during a week. In this reconstruction, a long period of real time (i.e., a number of
weeks) is represented by a short period of simulated time (i.e., several spins of the
wheel).
• There are 100 numbers from 0 to 99 on the outer rim of the wheel, and they have been
partitioned according to the probability of each demand value. For example, 20
numbers from 0 to 19 (i.e., 20 % of the total 100 numbers) correspond to a demand of
no (0) PCs. Now we can determine the value of demand by seeing which number the
wheel stops at, as well as by looking at the segment of the wheel. Such a wheel is given
in the following diagram.
Figure 4.8: Monte Carlo wheel using random numbers

Note that when the wheel is spun, the actual demand for PCs is determined by the number at the rim
of the wheel. Here is what we get from the above example:

• When the manager spins this new wheel, the actual demand for PCs will be
determined by a number. For example, if the number 71 comes up on a spin, the
demand is 2 PCs per week; the number 30 indicates a demand of 1.
• Because the manager does not know which number will come up prior to the spin
and there is an equal chance of any of the 100 numbers occurring, the numbers occur
at random; that is, they are random numbers.
• Instead of spinning the wheel to get a random number, a random number table can be
used.

Random number table

39 65 76 45 45 19 90 69 64 61 20 26 36 31 62 58 24 97 14 97 95 06 70 99 00

73 71 23 70 90 65 97 60 12 11 31 56 34 19 19 47 83 75 51 33 30 62 38 20 46

72 18 47 33 84 51 67 47 97 19 98 40 07 17 66 23 05 09 51 80 59 78 11 52 49

75 12 25 69 17 17 95 21 78 58 24 33 45 77 48 69 81 84 09 29 93 22 70 45 80

37 17 79 88 74 63 52 06 34 30 01 31 60 10 27 35 07 79 71 53 28 99 52 01 41

02 48 08 16 94 85 53 83 29 95 56 27 09 24 43 21 78 55 09 82 72 61 88 73 61

87 89 15 70 07 37 79 49 12 38 48 13 93 55 96 41 92 45 71 51 09 18 25 58 94

98 18 71 70 15 89 09 39 59 24 00 06 41 41 20 14 36 59 25 47 54 45 17 24 89

10 83 58 07 04 76 62 16 48 68 58 76 17 14 86 59 53 11 52 21 66 04 18 72 87

47 08 56 37 31 71 82 13 50 41 27 55 10 24 92 28 04 67 53 44 95 23 00 84 47

93 90 31 03 07 34 18 04 52 35 74 13 39 35 22 68 95 23 92 35 36 63 70 35 33

21 05 11 47 99 11 20 99 45 18 76 51 94 84 86 13 79 93 37 55 98 16 04 41 67

95 89 94 06 97 27 37 83 28 71 79 57 95 13 91 09 61 87 25 21 56 20 11 32 44

97 18 31 55 73 10 65 81 92 59 77 31 61 95 46 20 44 90 32 64 26 99 76 75 63

69 08 88 86 13 59 71 74 17 32 48 38 75 93 29 73 37 32 04 05 60 82 29 20 25

41 26 10 25 03 87 63 93 95 17 81 83 83 04 49 77 45 85 50 51 79 88 01 97 30

91 47 14 63 62 08 61 74 51 69 92 79 43 89 79 29 18 94 51 23 14 85 11 47 23

80 94 54 18 47 08 52 85 08 40 48 40 35 94 22 72 65 71 08 86 50 03 42 99 36

67 06 77 63 99 89 85 84 46 06 64 71 06 21 66 89 37 20 70 01 61 65 70 22 12

59 72 24 13 75 42 29 72 23 19 06 94 76 10 08 81 30 15 39 14 81 33 17 16 33

63 62 06 34 41 79 53 36 02 95 94 61 09 43 62 20 21 14 68 86 84 95 48 46 45

78 47 23 53 90 79 93 96 38 63 34 85 52 05 09 85 43 01 72 73 14 93 87 81 40

87 68 62 15 43 97 48 72 66 48 53 16 71 13 81 59 97 50 99 52 24 62 20 42 31

47 60 92 10 77 26 97 05 73 51 88 46 38 03 58 72 68 49 29 31 75 70 16 08 24

56 88 87 59 41 06 87 37 78 48 65 88 69 58 39 88 02 84 27 83 85 81 56 39 38

22 17 68 65 84 87 02 22 57 51 68 69 80 95 44 11 29 01 95 80 49 34 35 36 47

19 36 27 59 46 39 77 32 77 09 79 57 92 36 59 89 74 39 82 15 08 58 94 34 74

16 77 23 02 77 28 06 24 25 93 22 45 44 84 11 87 80 61 65 31 09 71 91 74 25

78 43 76 71 61 97 67 63 99 61 30 45 67 93 82 59 73 19 85 23 53 33 65 97 21

03 28 28 26 08 69 30 16 09 05 53 58 47 70 93 66 56 45 65 79 45 56 20 19 47

04 31 17 21 56 33 73 99 19 87 26 72 39 27 67 53 77 57 68 93 60 61 97 22 61

61 06 98 03 91 87 14 77 43 96 43 00 65 98 50 45 60 33 01 07 98 99 46 50 47

23 68 35 26 00 99 53 93 61 28 52 70 05 48 34 56 65 05 61 86 90 92 10 70 80

15 39 25 70 99 93 86 52 77 65 15 33 59 05 28 22 87 26 07 47 86 96 98 29 06

58 71 96 30 24 18 46 23 34 27 85 13 99 24 44 49 18 09 79 49 74 16 32 23 02

93 22 53 64 39 07 10 63 76 35 87 03 04 79 88 08 13 13 85 51 55 34 57 72 69

78 76 58 54 74 92 38 70 96 92 52 06 79 79 45 82 63 18 27 44 69 66 92 19 09

61 81 31 96 82 00 57 25 60 59 46 72 60 18 77 55 66 12 62 11 08 99 55 64 57

42 88 07 10 05 24 98 65 63 21 47 21 61 88 32 27 80 30 21 60 10 92 35 36 12

77 94 30 05 39 28 10 99 00 27 12 73 73 99 12 49 99 57 94 82 96 88 57 17 91

These random numbers have been generated by computer so that they are all equally likely to
occur, just as if we had spun a wheel:

Table 4.5: Generating demand from random numbers

• By repeating this process of selecting random numbers from the random number table
(starting anywhere in the table and moving in any direction, but not repeating the same
sequence) and then determining weekly demand from the random number, we can
simulate demand for a period of time. For example, the following table shows
demand for a period of 15 consecutive weeks.
Table 4.6: Randomly generated demand for 15 weeks

From the table, the manager can compute the estimated average weekly demand and revenue:
Estimated average demand = 31/15 = 2.07 laptops per week
Estimated average revenue = $133,300/15 = $8,886.67
The manager can then use this information to help determine the number of PCs to order each
week.
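The same 15-week experiment can be reproduced in code. The sketch below draws its own pseudo-random numbers rather than reading the module's random number table, so its estimates will differ somewhat from the hand-simulated 2.07 laptops and $8,886.67 — itself a reminder that short runs have not yet reached steady state.

```python
import random

PRICE = 4300                                   # selling price per laptop
DEMAND_PROBS = [(0, 0.20), (1, 0.40), (2, 0.20), (3, 0.10), (4, 0.10)]

def weekly_demand(rng):
    # Table-lookup sampling from the cumulative demand distribution.
    r = rng.random()
    cumulative = 0.0
    for demand, p in DEMAND_PROBS:
        cumulative += p
        if r < cumulative:
            return demand
    return DEMAND_PROBS[-1][0]

rng = random.Random(42)                        # fixed seed for repeatability
demands = [weekly_demand(rng) for _ in range(15)]
print("average weekly demand :", sum(demands) / 15)
print("average weekly revenue:", sum(d * PRICE for d in demands) / 15)
```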

Although this example is convenient for illustrating how simulation works, the average
demand could have more appropriately been calculated using the formula for expected value.
The expected value or average for weekly demand can be computed analytically from the
probability distribution,
E(x) = Σ P(xi)·xi, summed over i = 1 to n
where xi = demand value i
P(xi) = probability of demand value i
n = the number of different demand values
Therefore,
E(x)=(.20)(0) + (.40)(1) + (.20)(2) + (.10)(3) + (.10)(4)
=1.5 PCs per week
Simulation results will not equal analytical results unless enough trials of the simulation have
been conducted to reach steady state. As simulation models get complex, they become
impossible to perform manually. Thus, the use of computer simulation is justified. Please
note the following on computer simulations:


How many trials should you run?
• This question can be answered by looking at the statistics reported with the results of
a Monte Carlo run by most modeling software. Decide what the critical attribute is; then
make sure the distribution of the mean of this attribute is narrow enough that
decisions can be made unambiguously.
• In a Monte Carlo simulation, the computer solves the model over and over again,
evaluating a large number of possible scenarios, each solution being a trial of the simulation.
• As the number of trials becomes large, the simulated result approaches the true value.
• Therefore, it is better to use a higher number of trials.
• Once a simulation has been repeated enough times that it reaches an average result
that remains constant, this result is analogous to the steady-state result, a concept we
will meet again in the chapter on queue theory. For this example, 1.5 PCs is the
long-run average or steady-state result, but we have seen that the simulation might
have to be repeated more than 15 times (i.e., weeks) before this result is reached.
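A short experiment makes the steady-state idea visible. The sketch below (seed and trial counts are illustrative) tracks the running average of simulated weekly demand and shows it settling toward the analytical value of 1.5 laptops per week as the number of trials grows.

```python
import random

# Running average of simulated weekly demand versus the number of trials.
rng = random.Random(7)
values, weights = [0, 1, 2, 3, 4], [0.20, 0.40, 0.20, 0.10, 0.10]

total = 0
for week in range(1, 10_001):
    total += rng.choices(values, weights)[0]   # one simulated week of demand
    if week in (15, 100, 1_000, 10_000):
        print(f"after {week:>6} weeks: running average = {total / week:.3f}")
# The 15-week average may be far from 1.5; the 10,000-week average is close.
```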
Stochastic Dominance
• One can carry out a Monte Carlo simulation to test a particular strategy, that is, to see the
effects of changing decision variables over their possible values. This amounts to
comparing probability distributions for attributes.
• In certain circumstances, comparisons can be made without evaluating the attitude
toward risk of the firm or decision maker.
• If, under choice A, the probability of achieving x or better for a specific attribute is
greater than under choice B, for any x, then A stochastically dominates B.
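One way to check first-order stochastic dominance from simulation output is to compare the empirical probabilities P(outcome ≥ x) of the two choices across a grid of thresholds. In the sketch below, the two profit distributions are invented placeholders standing in for the Monte Carlo output of two candidate strategies.

```python
import random

rng = random.Random(1)
# Placeholder Monte Carlo output for two strategies (not module data).
profits_a = [rng.gauss(120, 15) for _ in range(10_000)]
profits_b = [rng.gauss(110, 15) for _ in range(10_000)]

def p_at_least(samples, x):
    # Empirical probability of achieving x or better.
    return sum(s >= x for s in samples) / len(samples)

a_dominates_b = all(
    p_at_least(profits_a, x) >= p_at_least(profits_b, x)
    for x in range(60, 181, 5)                 # grid of thresholds to test
)
print("A stochastically dominates B on the tested grid:", a_dominates_b)
```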
The Value of Information
• There are numerous occasions in model building where we ask ourselves: do we have
enough information, or good enough information, or should we collect more? It is not unusual
to be as concerned about the choice of whether to make the decision now or
study the matter further as about the choice of which alternative to take.
• The value of information can be determined separately for each random variable that
exists in the model.
• The model is used to determine the dollar outcome (outcome distribution) with better
information versus with current information.
• As a benchmark, it is useful to calculate what it would be worth to have information
that is perfect.
• The value of perfect information serves as a benchmark, an upper bound on the
amount we would be willing to pay for information on that variable. Often the cost of
improving the estimate of a variable is higher than this maximum.
• There are a variety of sources of imperfect information: E.g. purchase forecasts of a
variable at different prices.
Value of perfect information: the difference between how well we could do with the
information versus how well we could do without it.

Given: uncertain demand for printing directories

• Fixed cost FC = $50
• Variable cost VC = $2
• Selling price SP = $4
• Historical demand is normally distributed with a mean of 100 and a standard
deviation of 20.
• A clairvoyant sells perfect information at $50.
Expected value of perfect information (EVPI) = Expected profit with perfect
information − Expected profit with current information

Now suppose we want to evaluate how well we can do without any additional information
about demand. The best production level to use is 100, based on the historical information.
To evaluate expected profit at a production level of 100, we run a Monte Carlo simulation for
profit (800 iterations).
• Then, if we have perfect information, the mean profit = $147.8, and with current
information, the mean profit = $115.8.
• Then we have EVPI = EPPI − EPCI = 147.8 − 115.8 = $32.
• Since this is less than the $50 price of perfect information, we would turn down the
offer.
The expected value of imperfect information (EVII) = Expected profit with imperfect
information − Expected profit with current information
E.g., suppose we could purchase a forecast of demand for the directory for $10 from a
marketing research firm.
• The forecast error distribution is normal with mean 0 and standard deviation 10. The
results of the simulation run reveal EVII = 132.1 − 115.8 = $16.3.
• Since this value is higher than the $10 cost, we would buy the imperfect information.
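The directory example can be checked with a small Monte Carlo sketch. Two assumptions the module leaves implicit are made explicit here: unsold copies are worthless and unmet demand is simply lost; negative demand draws are truncated to zero. Because of sampling variation and these assumptions, the estimates will differ slightly from the module's $147.8 and $115.8, but the EVPI should land near $32.

```python
import random

FC, VC, SP = 50, 2, 4                 # fixed cost, variable cost, selling price
rng = random.Random(0)
demands = [max(0.0, rng.gauss(100, 20)) for _ in range(800)]   # 800 iterations

# Perfect information: print exactly the demanded quantity each period.
perfect = sum((SP - VC) * d - FC for d in demands) / len(demands)

# Current information: always print the historical mean of 100.
Q = 100
current = sum(SP * min(d, Q) - VC * Q - FC for d in demands) / len(demands)

print(f"EPPI = {perfect:.1f}, expected profit (current) = {current:.1f}")
print(f"EVPI = {perfect - current:.1f} -> decline any offer priced above this")
```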
Activity 4.2: Dear learner, please take time to solve the following problems

1. The Hoylake Rescue Squad receives an emergency call every 1, 2, 3, 4, 5, or 6 hours,


according to the following probability distribution. The squad is on duty 24 hours per
day, 7 days per week:

Time Between Emergency Calls (hr.) Probability


1 .05
2 .10
3 .30
4 .30
5 .20
6 .05
1.00

a. Simulate the emergency calls for 3 days (note that this will require a "running," or
cumulative, hourly clock), using the random number table.
b. Compute the average time between calls and compare this value with the expected
value of the time between calls from the probability distribution. Why are the results
different?
c. How many calls were made during the 3-day period? Can you logically assume that
this is an average number of calls per 3-day period? If not, how could you simulate to
determine such an average?

2. The Dynaco Manufacturing Company produces a product in a process consisting of


operations of five machines. The probability distribution of the number of machines
that will break down in a week follows:

Machine Breakdowns per Week Probability

0 .10

1 .10

2 .20

3 .25

4 .30

5 .05

1.00

a. Simulate the machine breakdowns per week for 20 weeks.


b. Compute the average number of machines that will break down per week

4.3. SUPPLY CHAIN SIMULATION MODELING

Introduction
Modern manufacturing enterprises must collaborate with their business partners through their
business process operations such as design, manufacture, distribution, and after-sales service.
Robust and flexible system mechanisms are required to realize such inter-enterprise
collaboration environments.
Supply chain management is one of the hottest topics in production and operational
management areas. A supply chain system is a chain of processes from the initial raw
materials to the ultimate consumption of the finished product spanning across multiple
supplier-customer links (Dugal, Healy, and Tankenton 1994). The primary goal of a supply
chain is to provide manufactured products to end customers.
Supply chain planning is, in a sense, restructuring a business system for supply chain
members to collaborate with each other by exchanging information.
A supply chain is a network of autonomous and semiautonomous business units collectively
responsible for procurement, manufacturing, distribution activities associated with one or
more families of products.
Individual processes in the chain can be affected by technology, marketing, and transportation.
Such an enterprise environment can also influence the system performance of the supply chain.
Supply chain managers, in both planning phases and operational phases, face various kinds of
problems, such as capacity planning, production planning, inventory planning, and others
(Umeda and Jain 2004).

In today’s highly competitive marketplace, companies are faced with the need to meet or
exceed increasing customer expectations while cutting costs to stay competitive in a fierce
global market. In order to exceed customer expectations, companies must meet changes in
customer demand in the least amount of time while providing a reliable product.
Successful companies find their competitive advantage when they are able to make informed
decisions that optimize this balance. In order to make these informed decisions, decision
makers must have a holistic view of all the elements that affect the planning, design,
production and delivery of their product. They must be able to understand, estimate, and
project their business supply chain performance.
A supply chain is a network of facilities that perform the functions of sourcing of materials,
transformation of these materials into intermediate and finished products, distribution of these
finished products to customers and the return of defective or excess products.
Supply chain environments have the following characteristics:
 Uncertain and High Variability
 Dynamic
 Distributed
Uncertain and High Variability Environment
Like any real world environment, supply chain environments are governed by uncertainty.
However, uncertainty is extremely critical in a supply chain environment due to the
integrated nature of supply chains. Since supply chains are composed of different elements
(i.e. suppliers, supplier’s supplier, customer, etc) integrated and interrelated, each element’s
uncertainty interacts with one another greatly affecting supply chain activities. In order to
deal with this issue, managers must identify and understand the causes of uncertainty and
determine how it affects other activities up and down the supply chain. Then they can
formulate ways to reduce or eliminate it (Schunk & Plott, 2000). An example of this is the
Bullwhip effect. “The bullwhip effect is the phenomena of increasing demand variation as the
demand information is passed upstream through the supply chain. This amplification has
direct impacts on costs due to the increased safety stock requirements” (Chatfield, 2001).
“The bullwhip effect will propagate to the entire supply chain areas producing backlogs, poor
forecasts, unbalanced capacities, poor customer service, uncertain production plans, and high
backlog costs” (Chang & Makatsoris, 2000).
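The bullwhip effect described above is easy to reproduce in a toy simulation: each stage forecasts demand with a moving average and follows an order-up-to (base-stock) policy, which amplifies order variability as demand information moves upstream. All parameters below (demand distribution, lead time, forecast window) are invented for illustration.

```python
import random
import statistics

rng = random.Random(3)
consumer = [max(0.0, rng.gauss(100, 10)) for _ in range(400)]   # end-customer demand

def stage_orders(demand, lead_time=2, window=4):
    # Moving-average forecast plus an order-up-to (base-stock) policy.
    orders, history, prev_base = [], list(demand[:window]), None
    for d in demand[window:]:
        history.append(d)
        forecast = sum(history[-window:]) / window
        base_stock = (lead_time + 1) * forecast     # target inventory position
        if prev_base is not None:
            # Replenish what was demanded plus the change in the target level.
            orders.append(max(0.0, d + base_stock - prev_base))
        prev_base = base_stock
    return orders

retailer = stage_orders(consumer)
wholesaler = stage_orders(retailer)
for name, series in (("consumer demand", consumer), ("retailer orders", retailer),
                     ("wholesaler orders", wholesaler)):
    print(f"{name:>17}: std dev = {statistics.stdev(series):.1f}")
```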
Dynamic Environment
According to Fayez (2005), “the dynamism in supply chains is encountered at different
levels, which are the supply chain level, the enterprise level, or enterprises’ elements level.
The dynamic behavior at the supply chain level is encountered when enterprises that
constitute the supply chain change over time, e.g. enterprises leave the chain or new
enterprises join the chain. Dynamism is encountered at the enterprise level when the elements
in the enterprise are changing over time, e.g. new functional units such as a factory or a new
information resource or enterprise application system may be added. The dynamism at the
enterprise element level is encountered when the specification or the definition of the element
changes over time, e.g. a change in the workflow, a change in the schema of an information
resource, or a change in the semantics” (Fayez, 2005). Dynamic environments are dictated by
change. Therefore, decision makers must count on a methodology that would allow for timely
and efficient updating to reflect changes in the environment.
Distributed Environment
Since supply chains are physically distributed, the information that makes up the supply
chains is also distributed.
The information in any supply chain is originated and owned by different entities, i.e. supply
chain partners. Consequently, pieces of information are distributed along the Supply Chain in
different systems and, therefore, in different formats. This has a great implication when
decision makers attempt to make decisions regarding the supply chain as a unit. Often data is
available but the knowledge required for decision-making is hard to come by since a great
effort has to precede any analysis in order to obtain the data and format the available data into
a common body of knowledge that is universal to all elements of the supply chain. This issue
is further complicated when supply chain partners are hesitant to provide this data. According
to Gupta, Whitman, and Agarwal (2001), “Supply chain decisions are improved with access
to global information.
However, supply chain partners are frequently hesitant to provide full access to all the
information within an enterprise. A mechanism to make decisions based on global
information without complete access to that information is required for improved supply
chain decision making” (Gupta et al., 2001).

Simulating the Supply Chain


Simulation modeling provides the flexibility to model processes and events to the desired
level of complexity, in a risk free, dynamic and stochastic environment. It provides the
essential level of realism and utility required to model supply chain environments accurately.
The Simulation methodology provides a means by which decision makers can obtain accurate
results, given the model is valid, that take into account the uncertainty, dynamism and
distributed nature of supply chain environments. With decision support tools based on
mathematical models, spreadsheets or process map methodologies, decision makers are
making decisions based on too many assumptions that very rarely hold true. Further, decision
making should very rarely be based solely on the information provided by an average. For
example, by not taking the Bullwhip effect into consideration in an analysis, analysts are
planning for significant inefficiencies such as production backlogs and unbalanced capacities.
This also leads to loss in revenues due to lost sales, poor customer service, high backlog
costs, high inventory costs, etc. The bottom line is that not taking into account variability
costs money. Averages cost money. They decrease companies’ economic value added by
reducing sales, increasing the cost of goods sold and total expenses, and increasing inventory. In
addition, simulation models provide flexibility to allow for the dynamism and distributed
nature of supply chain environments. Simulations allow for easy variation of parameters
within the model. The modified models can then be immediately run obtaining results
sometimes in a matter of seconds. This is not always possible with mathematical modeling
and process maps which sometimes require new models to be developed if the parameters
change significantly. This increases the investment in time, money and resources that
companies have to make when having to re-do models when parameters change.
Simulation and Supply Chain
Since the advent of supply chain and the realization of the advantages of using simulation in
supply chain environments, there have been many efforts aiming to apply these benefits
within their supply chains for specific supply chain problems (i.e. inventory planning, supply
chain design, etc.)

Banks, Buckley, Jain, Lendermann and Manivannan (2002) held a panel session where they
discussed the opportunities for simulation modeling in supply chain. Their paper
presents opportunities and challenges in the area. The topics of discussion were: the use of
simulation in process control, decision support, and proactive planning; simulation use
through the supply chain life cycle; the characteristics of firms for which simulation is
feasible for SCM; and opportunities for simulation in SCM. Many authors (Bansal, 2002;
Byrne & Heavey, 2004; Chang & Makatsoris, 2000; Chwif, Barretto, & Saliby, 2002;
Siprelle, Parsons, & Clark, 2003) discuss the promise, issues and requirements associated
with using simulation in a supply chain domain. Similarly, many efforts have been conducted
to develop simulation models and simulation-modeling tools to address different needs within
supply chain domains. Biswas and Narahari (2004) developed DESSCOM, an object oriented
supply chain simulation modeling methodology. Ding, Benyoucef, Xie, Hans and
Schumacher (2004) developed “ONE” a simulation and optimization tool to support decision
during assessment, design and improvement of supply chain networks.

Narayanan and Srinivasan (2003), developed a decision support system consisting of a user
interface and an object oriented simulation model. Ingalls and Kasales (1999) describe
CSCAT, an internal supply chain simulation analysis tool. CSCAT is based on Rockwell
Software’s ARENA. Jain and Workman (2001), describes their efforts developing a generic
simulation tool to model supply chains. Liu, Wang, Chai and Liu (2004) discuss the
development of Easy-SC, a Java-based simulation tool. Umeda and Lee (2004) describe a
design specification for a generic, supply-chain-simulation system. The generic simulation is
based on schedule-driven and stock-driven control methods to support the supply chain
management. Wartha, Peev, Borshchev and Filippov (2002) developed Decision Support
Tool - Supply Chain (DST-SC). DST-SC is a domain- oriented tool, which is an extension of
the UML-RT Hybrid Simulation kernel of AnyLogic by XJ Technologies.
In their paper Williams and Gunal (2003) present an overview and tutorial of SimFlex.
SimFlex is a supply chain simulation software package that uses Excel and MS Access for
data management. Another supply chain simulation modeling tool is Supply Chain Guru.
Supply Chain Guru uses the ProModel discrete event simulation language as its simulation engine
(ProModel Corporation, 2002). Simulation modeling is a versatile and powerful tool that has
grown in popularity due to its ability to deal with complicated models of correspondingly
complicated systems (Kelton, Sadowski, & Sadowski, 2002; Wartha et al., 2002).
Nevertheless, simulation models can be time consuming to build, requiring substantial
development time, effort and experience. According to Mackulak, Lawrence & Colvin
(1998), simulation development time takes about 45% of the total simulation project effort.
Furthermore, simulation-modeling efforts often have to be modified to accommodate the
development of what if scenarios and constantly changing requirements. These modifications
also take time to model. An alternative to creating a unique simulation model is to reuse an
existing generic model that can be reconfigured for individual projects. Mackulak et al (1998)
define a generic model as a model that is applicable over some large set of systems, yet
sufficiently accurate to distinguish between critical performances criteria. The model
becomes specific when the data for a particular system is loaded. “Their primary advantages
are that they eliminate major portions of the upfront model design process, they are bug free,
they have been code optimized for fast run times, and they can be consistently applied
throughout the corporation” (Mackulak & Lawrence, 1998). In their research, Mackulak et al
(1998) state that there exists a need for generic/reusable models that are properly structured to
provide sufficient accuracy and computer assistance. In order to respond to this need and to
evaluate the advantages of generic simulation models in terms of design turnaround time,
they created a model of an automated material handling system. In their study, they
demonstrate that a generic model can be constructed to meet the needs of reuse for a situation
with a reasonably small set of unique components and that when properly constructed a
special purpose reusable model can be more accurate and efficient than new models
individually constructed for each application scenario. Simulation reusability resulted in an
order of magnitude improvement in design project turnaround time with model building and
analysis time being reduced from over six weeks to less than one week.

GEM-FLO is a generic modeling environment developed by Productivity Apex, Inc and


designed to aid in the rapid development of simulation models that can predict the operational
characteristics of future space transportation systems during the entire project lifecycle.
GEM-FLO was developed using Visual Basic and Rockwell Software ARENA simulation
language. GEM-FLO accepts any reusable launch vehicle design characteristics and
operational inputs (such as processing times, event probabilities, required resources, and
transportation times) and automatically generates a simulation model of the system. Once the
simulation model is executed, it will provide multiple measures of performance including
operations turnaround time, expected flight rate, and resource utilizations, thus enabling users
to assess multiple future vehicle designs using the same generic tool (Steele et al., 2002).

Nasereddin, Mullens & Cope (2002), developed a generic simulation model for the modular
housing manufacturing industry. The model involves the use of Excel spreadsheets/Visual
Basic capabilities for data input and post processing report generation. Following user
specification of system specific details, such as processes and process cycle times, ProModel
code is automatically generated using Visual Basic. Nasereddin et al (2002), found that with
the use of generic simulation, a significant reduction in model design and model maintenance
times can be achieved. Moreover, models can be rapidly modified to reflect different possible
scenarios changes. In addition, an improvement in knowledge transfer was also achieved,
since modelers can now decrease the time required to get proficient in modeling using the
generic simulation.

Brown & Powers (2000) generated a generic maintenance simulation model design to support
a model of Air Force Wing operations and the maintenance functions associated with them.
The model was also designed to be generic enough to be used in military applications as well
as the commercial world. The simulation tool used was Arena by Rockwell Software and
Excel/VBA for model input/output data. In addition, a Visual Basic Input Form also feeds
into the model providing additional values (specified by the user) that control the timing of
simulation events and the length of the simulation run. As some of the lessons learned, they
found that the generic nature of the model required large quantities of input leading to a
substantial amount of time consumed in setting up the model and manipulating the data.

Generic simulation models can be complicated to design and set up in order to obtain a truly
generic simulation model. Furthermore, they may require great amounts of user inputs and
knowledge on the specific simulation platform.

Automatic discrete event model generation facilitates the development of a valid simulation
model strictly from operational information, without the need for the user
to build the model. The need for user inputs can be minimized through the combined use
of technologies such as ontologies, artificial intelligence, and computing.

Automatic generation of simulation models involves the development of the structure and
parameters of a simulation model automatically. In 1994, Morgan (1994) developed an
automatic DES model using Visual Basic and QUEST. In his study, Morgan (1994) uses
Microsoft Visual Basic as the model generation engine and the integrated graphical user
interface. Through this interface users maintained process, products, and production data in
external data files (Microsoft Excel spreadsheet). After following an iterative process, the
system reads the data files and a library of QUEST models. A QUEST simulation model is
then generated of a reconfigurable production facility that meets production requirements. In
order to develop this automated model, they required an open system to allow for external
(non-interactive) manipulation of the model.

This requirement was met by QUEST, a commercial off the shelf discrete event simulation
engine. A genetic algorithm was used to discover the heuristic rules required to generate a
schedule that maximized profit based on revenue on products sold and a variety of costs.

Son, Jones, & Wysk (2000), expressed the difficulty of building, running, and analyzing
simulation models due to the dramatically different simulation analysis tools capabilities and
characteristics. To address the model building issue, researchers at the National Institute of
Standards and Technology (NIST) proposed the development of neutral libraries of
simulation components and model templates.

The library of simulation objects became a basic building block to model systems of interest.
Then a translator generated a simulation model for a specific commercial package from the
neutral descriptions of the components. In this paper, the authors present the use of the
neutral libraries to generate a model in ProModel. The library of objects consists of header
information, experiment information, shop floor information, product process information,
production information and output information. The information objects were developed
using EXPRESS. These objects are then used to generate a collection of database tables in
MS Access. The model builder or translator, implemented in Visual Basic, then builds the
platform specific model (in this case ProModel).

Arief and Speirs (2000; 2004; Wartha et al., 2002) identified simulation components that are
applicable to many simulation scenarios along with the actions that can be performed by
them. Based on these components, they developed a simulation framework called Simulation
modeling Language (SimML) to bring the transformation from the design to a simulation
program. A UML tool that supports this framework was constructed in Java using the
JCF/Swing package. The simulation programs are generated in JAVA using JavaSim. XML
is used for storing the design and the simulation data. XML was used because of its ease of
manipulation and its ability to store information in a structured format by defining a
Document Type Definition (DTD).
In their research Bruschi, Santana, Santana and Aiza (2004), present a tool developed to
automatically generate distributed simulation environments. They named their tool, ASDA,
an automatic distributed simulation environment. In their research they state that “the
automatic word can be understood in three different ways: the environment automatically
generates a distributed simulation program code; the environment can automatically choose
one distributed simulation approach; and the environment can automatically convert a
sequential simulation program into a distributed simulation program using the MRIP
(Multiple Replication in Parallel) approach” (Bruschi et al., 2004). In their research they
developed a user interface, a code generator, a replication and a software interface module.
The user interface module was developed in Java. The Replication module implements
communication and analysis functions.

The Software Interface Module defines an interface between the developed simulation
program and the replication module. In his PhD dissertation, Dean C. Chatfield (2001),
addressed the difficulty of creating simulation models of supply chain systems due to the
need for the modeler to describe the logic of the component processes within the simulation
language in order to represent the various parts of the supply-chain (such as warehousing,
manufacturing, and transportation). “This is required because the processes and actions that
occur in a supply chain are not standard, built-in events of the simulation languages offered
by the major vendors. As a result, the user must create the supply-chain event procedures.
Unfortunately, this work is specific to the specific supply-chain being modeled. If the
modeler wishes to develop a simulation model for a different supply-chain, most of the work
will have to be performed again” (Chatfield, 2001). As part of his research, Chatfield (2001)
developed the Supply Chain Modeling Language (SCML) to address the information sharing
difficulties affecting supply-chain researchers and practitioners. SCML is a platform-
independent, methodology- independent, XML-based markup language that provides
a generic framework for storing supply-chain structural and managerial information. In
addition, a Visual Supply Chain Editor (VSCE) was developed as a dedicated SCML editor.
This allows users to create SCML-formatted supply-chain descriptions without directly
editing any SCML markup. Additionally, a Simulator for Integrated Supply Chain Operations
(SISCO) was developed as part of his research to address supply chain modeling difficulties.

SISCO is a GUI based, Object Oriented, Java-based tool combining visual model
construction, integrated SCML compatibility for easy information sharing, and future Internet
capabilities. Chatfield’s research addresses the three characteristics of a supply chain system
(Stochastic, Dynamism and Distributed). As part of his research, Chatfield uses SISCO to
analyze the bullwhip effect and demonstrates the benefits of his methodology (a visual
supply-chain simulation tool coupled with an information sharing standard).

The literature is rich with research and development efforts that use modeling to aid decision
makers in supply chain systems. These efforts address certain aspects of supply chain
environments (stochastic, distributed and dynamic system) independently or a combination of
these. However, no effort currently exists that addresses all of these aspects comprehensively.
The Supply Chain Simulation Ontology
The ontology will define the supply chain in a thorough and explicit way that will allow for
the development of simulation models by capturing the processes, process characteristics
(times, units, etc), resources, information/information flow, materials/materials flow,
objects/objects flow, interdependencies, networks, multi-tier processes, functional units,
and all their complex interactions. Specifically, the ontology will be used to define the
structure of the simulation model. The knowledge within the ontology will be used to define
the simulation processes logic, decision logic, routing, resource allocation, entity definitions
and interactions such as: process with process, process with resource, entity with process,
entity with entity. The core of the ontology was built around the SCOR model as the supply
chain industry standard operations reference model with over 200 mature best practices and
performance metrics. The supply chain structure in SCOR was used to develop different
supply chain models and views using the suite of IDEF models. The different views and
models were integrated in ontology. In order to incorporate simulation specific construct
(SCOR model does not incorporate simulation specific knowledge), the ontology was
modified to include a Resource class, Processing Duration class, Simulation Setup Class and
Entity Class. These classes were used to incorporate the knowledge that will be used to define
resource capacities, processing durations, run lengths, entity definitions (orders, signals
and/or objects), etc. The ontology was implemented using XML and XML schemas, where
the schema holds the logic and the relationships that usually exist in the supply chain. The schema
was designed to be flexible and extensible in such a way that it can be customized and altered
to define the specifics of a particular supply chain.

The Earth to Orbit Supply Chain: A Case Observation


The case study summarizes the application of the approach in NASA Supply Chain projects.
The objective of the project is to develop an end-to-end Space Exploration Supply Chain
modeling, simulation, and strategic analysis capability focusing on Earth-to-Orbit (ETO)
operations. The Space Exploration Supply Chain is defined as “The integration of NASA
centers, facilities, third party enterprises, orbital entities, space locations, and space carriers
that network/ partner together to plan, execute, and enable an Exploration mission that will
deliver an Exploration product (crew, supplies, data, information, knowledge, physical
samples) and to provide the after-delivery support, services, and returns that may be requested
by the customer.”
The project will deliver a unique strategic analysis capability that will enable system
operations analysts and decision makers to understand, estimate, and make informed
decisions about the Supply Chain for Exploration and Space Transportation Systems early in
the decision making. The Space Exploration Supply Chain is one of the largest chains known
to man-kind. This complex Supply Chain brings together a space transportation system for
which a usable payload is a small percent of extreme value. It starts on Earth, passes through
different locations in space, reaches deep space, ends on a planet or lunar surface, collects
samples and runs experiments, delivers back to Earth data and information through the deep
space network, and later delivers physical samples back to Earth.

CHAPTER SUMMARY

In this chapter, we discussed the concept, history, and application areas of simulation along
with the steps in simulation process. Further, the chapter discussed the types of simulation
models in three broad categories. The bulk of the chapter was devoted to how to conduct the
Monte Carlo simulation involving the way to get random variables. Finally, discussion was
made on applications of simulations in supply chain management.

END CHAPTER QUESTIONS

1. Sound Warehouse (in Georgetown), which sells CD players (with speakers), orders
from Fuji Electronics in Japan. Because of shipping and handling costs, each order
must be for five CD players. Because of the time it takes to receive an order, the
warehouse outlet places an order every time the present stock drops to five CD
players. It costs $100 to place an order. It costs the warehouse $400 in lost sales when
a customer asks for a CD player and the warehouse is out of stock. It costs $40 to
keep each CD player stored in the warehouse. If a customer cannot purchase a CD
player when it is requested, the customer will not wait until one comes in but will go
to a competitor. The following probability distribution for demand for CD players has
been determined:

Demand per Month Probability


0 .04
1 .08
2 .28

3 .40
4 .16
5 .02
6 .02
1.00

The time required to receive an order once it is placed has the following probability
distribution:

Time to Receive an Order (mo.) Probability


1 .60
2 .30
3 .10
1.00
The warehouse has five CD players in stock. Orders are always received at the
beginning of the month. Simulate Sound Warehouse's ordering and sales policy for 20
months, using the first column of random numbers in the random number table.
Compute the average monthly cost.

2. The time between arrivals of oil tankers at a loading dock at Prudhoe Bay is given by
the following probability distribution:

Time Between Ship Arrivals (days) Probability


1 .05
2 .10
3 .20
4 .30
5 .20
6 .10
7 .05
1.00

The time required to fill a tanker with oil and prepare it for sea is given by the following
probability distribution:

Time to Fill and Prepare (days) Probability
3 .10
4 .20
5 .40
6 .30
1.00

a. Simulate the movement of tankers to and from the single loading dock for the first 20
arrivals. Compute the average time between arrivals, average waiting time to load,
and average number of tankers waiting to be loaded.
b. Discuss any hesitation you might have about using your results for decision making.

3. State University is playing Tech in their annual football game on Saturday. A


sportswriter has scouted each team all season and accumulated the following data:
State runs four basic plays: a sweep, a pass, a draw, and an off tackle; Tech uses three
basic defenses: a wide tackle, an Oklahoma, and a blitz. The number of yards State
will gain for each play against each defense is shown in the following table:

Tech Defense
State Play Wide Tackle Oklahoma Blitz
Sweep 3 5 12
Pass 12 4 10
Draw 2 1 20
Off tackle 7 3 3

The probability that State will run each of its four plays is shown in the following
table:

Play Probability
Sweep .10
Pass .20
Draw .20
Off tackle .50

The probability that Tech will use each of its defenses follows:

Defense Probability
Wide tackle .30
Oklahoma .50
Blitz .20

The sportswriter estimates that State will run 40 plays during the game. The sportswriter
believes that if State gains 300 or more yards, it will win; however, if Tech holds State to
fewer than 300 yards, it will win. Use simulation to determine which team the sportswriter
will predict to win the game.
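
Dear learner, as a hint for this problem, each of the 40 plays needs two random draws: one
for State's play and one for Tech's defense. The following Python sketch is a minimal
illustration under that assumption; the seed and the number of repeated games are arbitrary
choices, not part of the original problem.

import random

random.seed(7)

plays = ["sweep", "pass", "draw", "off tackle"]
play_probs = [0.10, 0.20, 0.20, 0.50]
defenses = ["wide tackle", "oklahoma", "blitz"]
def_probs = [0.30, 0.50, 0.20]

# yards gained by each State play against each Tech defense (from the table above)
yards = {
    "sweep":      {"wide tackle": 3,  "oklahoma": 5, "blitz": 12},
    "pass":       {"wide tackle": 12, "oklahoma": 4, "blitz": 10},
    "draw":       {"wide tackle": 2,  "oklahoma": 1, "blitz": 20},
    "off tackle": {"wide tackle": 7,  "oklahoma": 3, "blitz": 3},
}

trials, state_wins = 1000, 0
for _ in range(trials):
    total = 0
    for _ in range(40):                   # the sportswriter expects 40 plays
        play = random.choices(plays, play_probs)[0]
        defense = random.choices(defenses, def_probs)[0]
        total += yards[play][defense]
    if total >= 300:                      # 300 or more yards means State wins
        state_wins += 1

print(f"State wins in {100 * state_wins / trials:.0f}% of simulated games")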

CHAPTER FIVE: DECISION THEORY

INTRODUCTION
This chapter discusses the concept of decision making and various theories categorized as
descriptive, normative, and prescriptive. Further, decision-making tools under certainty,
uncertainty, and risk are discussed.
OBJECTIVE
At the end of this unit, you will be able to:
 Describe the concept of decision theory and its basic categories

 Explain rational decision making models

 Solve decision problems under three conditions using decision theory

 Analyze decision problems using decision tree

 Identify areas in supply chain that could be modeled through decision theory

To such end, we will be dealing with decision theory and address the following essential
topics:
1. Overview of Decision Theory

2. Decision making under certainty, uncertainty, and risk

3. Decision Tree analysis

4. Application of Decision Theory in Supply Chain Management

5.1. NATURE OF DECISION MAKING

Decision-making may be defined as “intentional and reflective choice in response to
perceived needs. This distinguishes man from lower forms of life.” (Kliendorfer et al.,
1993:p.2)
Relation to Planning: Managerial decision making is the process of making a conscious
choice between two or more rational alternatives in order to select the one that will produce
the most desirable consequences (benefits) relative to unwanted consequences (costs). If there
is only one alternative, there is nothing to decide. The overall planning/decision-making
process involves setting objectives, establishing premises (assumptions), developing and

evaluating alternatives, and selecting from among them the best alternative. If planning is
truly “deciding in advance what to do, how to do it, when to do it, and who is to do it” (as
proposed by Amos and Sarchet), then decision making is an essential part of planning.
Decision making is also required in designing and staffing an organization, developing
methods of motivating subordinates, and identifying corrective actions in the control process.
However, it is conventionally studied as part of the planning function, and it is discussed
here.
Occasions for Decision: Chester Barnard wrote his classic book The Functions of the
Executive from his experience as president of the New Jersey Bell Telephone Company and
of the Rockefeller Foundation, and in it he pursued the nature of managerial decision making
at some length. He concluded that the occasions for decision originate in three distinct fields:
(a) from authoritative communications from superiors; (b) from cases referred for decision by
subordinates; and (c) from cases originating in the initiative of the executive concerned.
Barnard points out that occasions for decision stemming from the “requirements of
superior authority cannot be avoided,” although portions of such work may be delegated
further to subordinates. Of the cases referred upward by subordinates, Barnard explains
that “the test of executive action is to make these decisions when they are important, or
when they cannot be delegated reasonably, and to decline the others.”
from the initiative of the executive are the most important test of the executive.” These are
occasions where no one has asked for a decision, and the executive usually cannot be
criticized for not making one. The effective executive takes the initiative to think through the
problems and opportunities facing the organization, conceives programs to make the
necessary changes, and implements them. Only in this way does the executive fulfill the
obligation to make a difference because he or she is in that chair rather than someone else.

Routine and Non-routine Decisions: Pringle et al. classify decisions on a continuum ranging
from routine to non-routine, depending on the extent to which they are structured. They
describe routine decisions as focusing on well-structured situations that recur frequently,
involve standard decision procedures, and entail a minimum of uncertainty. Common
examples include payroll processing, reordering standard inventory items, paying suppliers,
and so on. The decision maker can usually rely on policies, rules, past precedents,
standardized methods of processing, or computational techniques. Probably 90 percent of
management decisions are largely routine. Indeed, routine decisions usually can be delegated
to lower levels to be made within established policy limits, and increasingly they can be
programmed for computer “decision” if they can be structured simply enough. Non-routine
decisions, on the other hand, “deal with unstructured situations of a novel, nonrecurring
nature,” often involving incomplete knowledge, high uncertainty, and the use of subjective
judgment or even intuition, where “no alternative can be proved to be the best possible
solution to the particular problem.” Such decisions become more and more common the
higher one goes in management and the longer the future period influenced by the decision is.
Unfortunately, almost the entire educational process of the engineer is based on the solution
of highly structured problems for which there is a single “textbook solution.” Engineers often
find themselves unable to rise in management unless they can develop the “tolerance for
ambiguity” that is needed to tackle unstructured problems.

Objective versus Bounded Rationality: Simon (1955) defines a decision as being


“‘objectively’ rational if in fact it is the correct behavior for maximizing given values in a
given situation.” Such rational decisions are made by: “(a) viewing the behavior alternatives
prior to decision in panoramic (exhaustive) fashion, (b) considering the whole complex of
consequences that would follow on each choice, and (c) with the system of values as criterion
singling out one from the whole set of alternatives.” Rational decision making, therefore,
consists of optimizing, or maximizing, the outcome by choosing the single best alternative
from among all possible ones. However, Simon observes that actual behavior falls short,
in at least three ways, of such objective rationality:

1. Rationality requires a complete knowledge and anticipation of the


consequences that will follow on each choice. In fact, knowledge of
consequences is always fragmentary.
2. Since these consequences lie in the future, imagination must supply the lack of
experienced feeling in attaching value to them. But values can be only
imperfectly anticipated.
3. Rationality requires a choice among all possible alternative behaviors. In
actual behavior, only a few of these possible alternatives ever come to mind.

Managers, under pressure to reach a decision, have neither the time nor other resources to
consider all alternatives or all the facts about any alternative. A manager “must operate under
conditions of bounded rationality, taking into account only those few factors of which he or
she is aware, understands, and regards as relevant.” Administrators must “satisfice” by
accepting a course of action that is satisfactory or “good enough,” and get on with the job
rather than searching forever for the “one best way.” Managers of engineers and scientists, in
particular, must learn to insist that their subordinates go on to other problems when they
reach a solution that “satisfices”, rather than pursuing their research or design beyond the
point at which incremental benefits no longer match the costs to achieve them.

Level of Certainty: Decisions may also be classified as being made under conditions of
certainty, risk, or uncertainty, depending on the degree with which the future environment
determining the outcome of these decisions is known. These three categories are compared
later in this unit.

Management Science: Quantitative techniques have been used in business for many years in
applications such as return on investment, inventory turnover, and statistical sampling theory.
However, today’s emphasis on the quantitative solution of complex problems in operations
and management, known initially as operations research and more commonly today as
management science, began at the Bawdsey Research Station in England at the beginning of
World War II.

In August 1940, a research group was organized under the direction of P. M. S. Blackett of
the University of Manchester to study the use of a new radar-controlled antiaircraft system.
The research group came to be known as “Blackett’s circus.” The name does not seem
unlikely in the light of their diverse backgrounds. The group was composed of three
physiologists, two mathematical physicists, one astrophysicist, one Army officer, one
surveyor, one general physicist, and two mathematicians. The formation of this group seems
to be commonly accepted as the beginning of operations research. Some of the problems this
group (and several that grew from it) studied were the optimum depth at which antisubmarine
bombs should be exploded for greatest effectiveness (20–25 feet) and the relative merits of
large versus small convoys (large convoys led to fewer total ship losses). Soon after the
United States entered the war, similar activities were initiated by the U.S. Navy and the Army
Air Force. With the immediacy of the military threat, these studies involved research on the
operations of existing systems. After the war these techniques were applied to longer-range
military problems and to problems of industrial organizations. With the development of more
and more powerful electronic computers, it became possible to model large systems as a part
of the design process, and the terms systems engineering and management science came into
use. Management science has been defined as having the following “primary distinguishing
characteristics”:

1. A systems view of the problem—a viewpoint is taken that includes all of the
significant interrelated variables contained in the problem.
2. The team approach—personnel with heterogeneous backgrounds and training
work together on specific problems.
3. An emphasis on the use of formal mathematical models and statistical and
quantitative techniques.

Models and Their Analysis: A model is an abstraction or simplification of reality, designed


to include only the essential features that determine the behavior of a real system. For
example, a three-dimensional physical model of a chemical processing plant might include
scale models of major equipment and large-diameter pipes, but it would not normally include
small piping or electrical wiring. The conceptual model of the planning/decision-making
process certainly does not illustrate all the steps and feedback loops present in a real
situation; it is only indicative of the major ones.

Most of the models of management science are mathematical models. These can be as simple
as the common equation representing the financial operations of a company:

Profit = Total revenue − Total costs
On the other hand, they may involve a very complex set of equations. As an example, the
Urban Dynamics model was created by Jay Forrester to simulate the growth and decay of
cities. This model consisted of 154 equations representing relationships between the factors
that he believed were essential: three economic classes of workers (managerial/professional,
skilled, and “underemployed”), three corresponding classes of housing, and three types of
industry (new, mature, and declining), taxation, and land use. The values of these factors
evolved through 250 simulated years to model the changing characteristics of a city. Even

these 154 relationships still proved too simplistic to provide any reliable guide to urban
development policies.

Management science uses a five-step process that begins in the real world, moves into the
model world to solve the problem, and then returns to the real world for implementation. The
following explanation is, in itself, a conceptual model of a more complex process:

Real world:
1. Formulate the problem (defining objectives, variables, and constraints)

Simulated (model) world:
2. Construct a mathematical model (a simplified yet realistic representation of the system)
3. Test the model's ability to predict the present from the past, and revise until you are satisfied
4. Derive a solution from the model

Real world:
5. Apply the model's solution to the real system, document its effectiveness, and revise
further as required

Dear learner: Note that steps 2 to 4 take place in the simulated (model) world, while steps
1 and 5 take place in the real world.

The scientific method or scientific process is fundamental to scientific investigation and to


the acquisition of new knowledge based upon physical evidence by the scientific community.
Scientists use observations and reasoning to propose tentative explanations for natural
phenomena, termed hypotheses. Engineering problem solving is more applied and is
different to some extent from the scientific method.

Scientific Method Engineering Problem Solving Approach


• Define the problem • Define the problem
• Collect data • Collect and analyze the data
• Develop hypotheses • Search for solutions
• Test hypotheses • Evaluate alternatives
• Analyze results • Select solution and evaluate the impact
• Draw conclusion
Activity 5.1:
Dear learner, take time to differentiate between the following pairs: bounded
rationality and objective rationality, satisficing and maximizing objectives, scientific
method and engineering problem solving approach.
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
Key ideas for this activity are included in topic 5.1

5.2. DECISION THEORIES

Lewis in Bacharach and Hurley defined decision theory as “...a systematic exposition
of the consequences of certain well-chosen platitudes about belief, desire, preference
and choice. It is the very core of common-sense theory of persons, dissected out and
elegantly systematized.” (Bacharach & Hurley, 1991: pp.147) Bacharach and Hurley
further explained the purpose of decision theory as the ways in which decisions are
related to the decision-makers' aims and beliefs.

Historically, the earliest decision theory is traced back to the pioneering work of Daniel
Bernoulli (1738) in describing the St. Petersburg paradox through the use of numerical
measures representing the value of alternative choices. For him, money was not an
adequate measure of value; rather, a value function describes the utility of money to the
individual decision maker. He claimed a non-linear utility function with a decreasing
slope. The utilitarian philosophers such as Bentham and Mill also talked of utility as
the sum that should be maximized in decisions involving ethical questions. The major
pitfall of this theory lies in the assumption that a number was available representing
the worth of any bundle of goods (cardinal utility), instead of merely an ordering
showing which of any two bundles of goods was preferred (ordinal utility). In spite of
the fact that the latter has almost entirely replaced the former, the concept of cardinal
utility has regained attention with the works of von Neumann and Oskar Morgenstern
(1944). To the contrary, Luce and Tukey initiated a separate theory of conjoint
measurement to further strengthen the power of cardinal utility.
Decision theories can be broadly classified as descriptive, normative, or prescriptive.
(Kliendorfer et al, 1993:pp.2)
Descriptive theories: These are theories and associated experiments, evidence, and field
studies concerned with how actual decision makers actually perform decision-making
activities. Descriptive theories involve decision analysis via decision tree analysis; a
detailed discussion of decision trees comes later in this unit.
Normative theories: These are theories, like expected utility theory, based on abstract
models and axioms that serve as theoretical benchmarks for how decision makers
should ideally perform these activities. According to Kleindorfer et al. (1993),
normative models may be predictive or valuation/choice models. The primary model of
the former type is the lens model, while the latter type includes the expected utility
theory and preference theory.

The Lens Model is the central framework of social judgment theory, initially
developed by Hammond (1955) and Hoffman (1960), and formalized by Hursch
and Hammond (1964). Its intellectual roots trace back to pioneering work of Egon
Brunswik (1943 and 1955), a psychologist interested in visual perception.
Brunswik’s “vicarious functioning,” that is, identification with partial cues in
multiple functioning, indicates that “our recognition of objects is based on
imperfect cues that may be overlapping.” (Ibid, 1993:pp.71)

The Expected Utility Theory is the most popular method, engendering much research.
Much research on valuation and choice has focused on decision making under risk. The
impetus for most of this research has stemmed from failures of expected utility theory,
a formal model of choice developed by von Neumann and Morgenstern (1947) for
dealing with problems of risk. The model consists of a set of formal axioms
characterizing rational behaviour and a strategy for choosing among alternatives. The
axioms of the von Neumann and Morgenstern (1947) theory have been modified by
“Savage (1954), Raiffa (1968), Tversky (1971), or Fishburn (1970, 1982) and Pawes (1988).”
(Kliendorfer,Kunreuther and Schoemaker, 1993:pp.130)

Preference Theory rests on measurement of the decision maker's attitude towards risk.
Hammond (1967) expresses a person's attitude towards risk in the form of a utility or
preference curve, and then makes direct use of the curve to incorporate this attitude in
many important types of business decisions involving uncertainty. For Hammond, the
terms “utility theory” and “preference theory” can be used synonymously. However, he
used the latter term to signify that his research is not dominated by economics, the
subject that uses the former term more frequently. To incorporate the decision maker's
attitude towards risk in decision analysis, one must
find for each event fork a certain (sure) amount that is equivalent in decision
maker’s mind to running the risk represented by the event fork. The certain
amount then replaces the event fork in a decision diagram.

Prescriptive Theories: These are theories and associated experimental evidence and
field studies concerned with helping decision makers improve their performance in
problem finding and problem solving, given the complexities and constraints of real
life. Common prescriptive approaches include the intuitive approach, linear rules and
bootstrapping, decision support systems, mathematical programming, and the decision
analysis approach. The intuitive approach is decision making based on “…intuitions
rather than systematic analysis” (Kliendorfer etal, 1993:pp.182); when one knows that
taking a decision is the right step but fails to prove it, an intuitive approach is used.
Frequently this approach is recommended for non-programmed or non-routine
decisions, where there is not a large amount of experience on which to base one's
judgment. Linear rules and bootstrapping, by contrast, involve the use of both
quantitative and qualitative information: the quantitative information is used as a
reference point for further consideration (the linear rule), and subjective judgment
(bootstrapping) is then attached to the linear rule to make the final decision. A decision
support system involves the use of a system that organizes and provides relevant
information useful for decision-making, while mathematical programming is usually
essential for decisions involving cost, capacity, or transport issues, commonly known
as the “transportation problem”; using this approach, one finds the optimal production
and shipping patterns among the available options. The decision analysis approach, on
the other hand, is the determination of the possible alternatives of a particular decision
case together with their outcomes and the likelihood of the occurrence of the outcomes;
it involves the use of decision trees. (pp.181-195)

Activity 5.2: Dear learner, please take few minutes to discuss the three broad
categories of decision theory and the elements under them
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
The three categories include descriptive, normative, and prescriptive models.

5.3. DECISION MAKING UNDER CERTAINTY, UNCERTAINTY, AND RISK

Frank Knight, as early as 1921, distinguished between three classes of problems:
decision making under certainty, uncertainty, and risk.

The condition of certainty means that each alternative could yield only one set of
outcomes: there is only one state of nature. Under the condition of uncertainty, no
likelihood is assigned to the states of nature; the problem contains different states of
nature without the assignment of any probability to a particular state. Under the
condition of risk, by contrast, there is some probability (which may be ambiguous)
associated with each state of nature. Most authors believe that most, if not all,
real-world problems involve decision making under risk. (Guzzo, 1982:pp.5)

Decision making can be discussed conveniently in three categories: decision making
under certainty, under risk, and under uncertainty. The payoff table, or decision
matrix, shown in Table 5.1 will help in this discussion. Our decision will be made
among some number m of alternatives, identified as A1, A2, ..., Am; there may be more
than one future “state of nature” N (the model allows for n different futures N1, N2, ...,
Nn). These future states of nature may not be equally likely, but each state Nj will have
some (known or unknown) probability of occurrence pj. Since the future must take on
one of the n values Nj, the sum of the n values of pj must be 1.0.

The outcome (or payoff, or benefit gained) will depend on both the alternative chosen
and the future state of nature that occurs. For example, if you choose alternative Ai and
state of nature Nj takes place (as it will with probability pj), the payoff will be outcome
Oij. A full payoff table will contain m times n possible outcomes.

Table 5.1: Pay-off Table

                  State of Nature / Probability
Alternative       N1 / p1    N2 / p2    ...    Nn / pn
A1                O11        O12        ...    O1n
A2                O21        O22        ...    O2n
...
Am                Om1        Om2        ...    Omn

Let’s now consider what this model implies and the analytical tools we might choose
to use under each of our three classes of decision making.

Decision Making under Certainty: Decision making under certainty implies that we
are certain of the future state of nature (or we assume that we are). (In the model, this
means that the probability of one future state Nj is 1.0, and all other futures have zero
probability.) The solution, naturally, is to choose the alternative Ai that gives us the most
favorable outcome. Although this may seem like a trivial exercise, there are many problems that

are so complex that sophisticated mathematical techniques are needed to find the best
solution.
Linear programming is one common technique for decision making under certainty.
In this method, a desired benefit (such as profit) can be
expressed as a mathematical function (the value model or objective function) of
several variables. The solution is the set of values for the independent variables
(decision variables) that serves to maximize the benefit (or, in many problems, to
minimize the cost), subject to certain limits (constraints).

For example, consider a factory producing two products, product X and product Y.
The problem is this: If you can realize $10 profit per unit of product X and $14 per
unit of product Y, what is the production level of x units of product X and y units of
product Y that maximizes the profit P? That is, you seek to:

Maximize P = 10x + 14y

As illustrated by the isoprofit lines of Figure 5.1, you can get a profit of

• $350 by selling 35 units of X or 25 units of Y
• $700 by selling 70 units of X or 50 units of Y
• $620 by selling 62 units of X or 44.3 units of Y; or (as in the first two cases as well)
any combination of X and Y on the iso-profit line connecting these two points, as
indicated in Figure 5.1.

Your production, and therefore your profit, is subject to resource limitations, or


constraints.
Assume in this example that you employ five workers—three machinists and two
assemblers—and that each works only 40 hours a week. Products X and/or Y can be
produced by these workers subject to the following constraints:
• Product X requires three hours of machining and one hour of assembly per
unit.

• Product Y requires two hours of machining and two hours of assembly per
unit.
These constraints are expressed mathematically as follows:

Machining: 3x + 2y ≤ 120 (three machinists × 40 hours)
Assembly: x + 2y ≤ 80 (two assemblers × 40 hours)
x ≥ 0, y ≥ 0
Since there are only two products, these limitations can be shown on a two-
dimensional graph as indicated below. Since all relationships are linear, the solution to
our problem will fall at one of the corners. To find the solution, begin at some feasible
solution (satisfying the given constraints) such as (x = 0, y = 0), and proceed in the
direction of “steepest ascent” of the profit function (in this case, by increasing
production of Y at $14 profit per unit) until some constraint is reached. Since assembly
hours are limited to 80, no more than 80/2, or 40, units of Y can be made, earning
40($14), or $560, profit. Then proceed along the steepest allowable ascent from there
(along the assembly constraint line) until another constraint (machining hours) is
reached. At that point (x = 20, y = 30), profit is $620. Since there is no remaining edge
along which profit increases, this is the optimum solution. The graphical solution for
the problem is given below.

Figure 5.1: Isoprofit lines

Figure 5.2: Linear programming maximum profit point

Computer Solution. About 50 years ago, George Dantzig of Stanford University
developed the Simplex method, which expresses the foregoing technique in a
mathematical algorithm that permits computer solution of linear programming
problems with many variables (dimensions), not just the two (assembly and
machining) in this example. Now linear programs in a few thousand variables and
constraints are viewed as “small.” Problems having tens or hundreds of thousands of
continuous variables are regularly solved; tractable integer programs are necessarily
smaller, but are still commonly in the hundreds or thousands of variables and
constraints. Today there are many linear programming software packages available.
Another classic linear programming application is the oil refinery problem, where
profit is maximized over a set of available crude oils, process equipment limitations,
products with different unit profits, and other constraints. Other applications include

assignment of employees with differing aptitudes to the jobs that need to be done to
maximize the overall use of skills; selecting the quantities of items to be shipped from
a number of warehouses to a variety of customers while minimizing transportation
cost; and many more. In each case there is one best answer, and the challenge is to
express the problem properly so that it fits a known method of solution.
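
To see how such a problem is handed to a solver in practice, the following minimal
Python sketch restates our two-product example for the linprog routine, assuming the
scipy package is available. Because linprog minimizes by convention, the profit
coefficients are negated.

from scipy.optimize import linprog

# maximize P = 10x + 14y  ->  minimize -10x - 14y
c = [-10, -14]
# machining: 3x + 2y <= 120; assembly: x + 2y <= 80
A_ub = [[3, 2], [1, 2]]
b_ub = [120, 80]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x, y = res.x
print(f"x = {x:.0f} units, y = {y:.0f} units, profit = ${-res.fun:.0f}")
# expected result: x = 20, y = 30, profit = $620, matching the graphical solution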

Decision Making under Risk: In decision making under risk, one assumes that there
exist a number of possible future states of nature, as we saw previously in the payoff
table. Each has a known (or assumed) probability of occurring, and there may not be
one future state that results in the best outcome for all alternatives. Examples of future
states and their probabilities are as follows:

• Alternative weather (rain or shine) will affect the profitability of alternative
construction schedules; here, the probabilities of rain and of good weather can
be estimated from historical data.
• Alternative economic futures (boom or bust) determine the relative
profitability of conservative versus high-risk investment strategy; here, the
assumed probabilities of different economic futures might be based on the
judgment of a panel of economists.

Expected Value. Given the future states of nature and their probabilities, the solution
in decision making under risk is the alternative Ai that provides the highest expected
value Ei, which is defined as the sum of the products of each outcome times the
probability that the associated state of nature occurs:

Ei = Σj Oij pj, summed over j = 1, ..., n
Simple Example. For example, consider the simple payoff information of Table 5.2,
with only two alternative decisions and two possible states of nature. Alternative A1
has a constant cost of $200, and alternative A2 has a cost of $100,000 if the second
state of nature takes place (and none otherwise). At first glance, alternative A1 looks
like the clear winner, but consider the situation when the probability of the first state
of nature is 0.999 and the probability of the second state is only 0.001. The expected
value of choosing alternative A2 is only:

E2 = 0.999(0) + 0.001(−$100,000) = −$100

Note that this expected outcome of −$100 is not possible: the outcome if alternative
A2 is chosen will be a loss of either $0 or $100,000, not $100. However, if you have
many decisions of this type over time and you choose alternatives that maximize
expected value each time, you should achieve the best overall result. Since we should
prefer an expected cost of $100 to an expected cost of $200, we should choose A2,
other things being equal.

Table 5.2: Example of Decision Making under Risk

But first, let us use these figures in a specific application. Assume that you own a
$100,000 house and are offered fire insurance on it for $200 a year. This is twice the
“expected value” of your fire loss (as it has to be to pay insurance company overhead
and agent costs). However, if you are like most people, you will probably buy the
insurance because, quite reasonably, your attitude toward risk is such that you are not
willing to accept loss of your house! The insurance company has a different
perspective, since they have many houses to insure and can profit from maximizing
expected value in the long run, as long as they do not insure too many properties in the
path of the same hurricane or earthquake.

Consider that you own rights to a plot of land under which there may or may not be
oil. You are considering three alternatives: doing nothing (“don’t drill”), drilling at

your own expense of $500,000, and “farming out” the opportunity to someone who
will drill the well and give you part of the profit if the well is successful. You see three
possible states of nature: a dry hole, a mildly interesting small well, and a very
profitable gusher. You estimate the probabilities of the three states of nature and the
nine outcomes as shown in table 5.3.

The first thing you can do is eliminate alternative A1, since alternative A3 is at least as
attractive as A1 for all states of nature and is more attractive for at least one state of
nature. A3 is therefore said to dominate A1.

Table 5.3: Well Drilling Example—Decision Making Under Risk

Next, you can calculate the expected values for the surviving alternatives A2 and A3:

E2 = 0.6(−500,000) + 0.3(300,000) + 0.1(9,300,000) = $720,000
E3 = 0.6(0) + 0.3(125,000) + 0.1(1,250,000) = $162,500

and you choose alternative A2 if (and only if) you are willing and able to risk losing
$500,000.
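
Dear learner, these expected values are easy to verify with a few lines of code. The
following Python sketch simply recomputes Ei for each alternative from the Table 5.3
probabilities and payoffs and reports the alternative with the highest expected value.

# probabilities of dry hole, small well, and gusher (from Table 5.3)
p = [0.6, 0.3, 0.1]

payoffs = {
    "A1 don't drill": [0, 0, 0],
    "A2 drill alone": [-500_000, 300_000, 9_300_000],
    "A3 farm out":    [0, 125_000, 1_250_000],
}

# Ei = sum over states of (probability x outcome)
expected = {alt: sum(prob * out for prob, out in zip(p, outs))
            for alt, outs in payoffs.items()}
for alt, ev in expected.items():
    print(f"{alt}: E = ${ev:,.0f}")
print("highest expected value:", max(expected, key=expected.get))   # A2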

Decision Trees: They provide another technique for finding expected value. Decision
tree analysis takes the assumption that rational players will select the alternative that
has the maximum payoff in the case of profit, or the minimum figure in the case of
costs. Although maximization of benefit is its essence, there are varieties of expression
of this benefit that constitute the decision criteria; these include asset position, profit,
cost, etc. Once such a criterion is established, the decision tree is sketched with the
corresponding payoffs and probabilities. Then the decision maker's best strategy is
determined by working backwards.

The analysis begins with a single decision node (normally represented by a square or
rectangle), from which a number of decision alternatives radiate. Each alternative ends
in a chance node, normally represented by a circle. From each chance node radiate
several possible futures, each with a probability of occurring and an outcome value.
The expected value for each alternative is the sum of the products of outcomes and
related probabilities, just as calculated previously. Figure 5.3 illustrates the use of a
decision tree in our simple insurance example.

The conclusion reached is identical mathematically to that obtained from the payoff
tables above. Decision trees provide a very visible solution procedure, especially when
a sequence of decisions, chance nodes, new decisions, and new chance nodes exists. For
example, if you are deciding whether to expand production capacity in December
2006, a decision a year later, in December 2007, as to what to do then will depend
both on the first decision and on the sales enjoyed as an outcome during 2007. The
possible December 2007 decisions lead to (a larger number of) chance nodes for 2008.
The technique used starts with the later year, 2008 (the farthest branches). Examining
the outcomes of all the possible 2008 chance nodes, you find the optimum second
decision and its expected value, in 2007, for each 2007 chance node, that is, for each
possible combination of first decision and 2007 outcome.

Figure 5.3: Insurance example of a decision tree analysis

Then you use those values as part of the calculation of expected values for each first-
level decision alternative in December 2006.
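
The backward-working procedure just described can be expressed as a small recursive
program. The following Python fragment is a minimal sketch, not a general decision-tree
package; it rolls back the insurance example of Figure 5.3, taking expected values at
chance nodes and the best alternative at decision nodes.

def rollback(node):
    # return the expected value of a decision-tree node by working backwards
    kind, content = node
    if kind == "leaf":
        return content                    # content holds the payoff
    if kind == "chance":                  # expected value over the branches
        return sum(p * rollback(child) for p, child in content)
    return max(rollback(child) for _, child in content)   # best alternative

# insure: a sure -$200; don't insure: 0.999 chance of $0, 0.001 chance of -$100,000
tree = ("decision", [
    ("insure",       ("leaf", -200)),
    ("don't insure", ("chance", [(0.999, ("leaf", 0)),
                                 (0.001, ("leaf", -100_000))])),
])
print(rollback(tree))   # -100.0: not insuring has the higher expected value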

Risk as Variance: Another common meaning of risk is variability of outcome,


measured by the variance or (more often) its square root, the standard deviation.
Consider two investment projects, X and Y, having the discrete probability distribution
of expected cash flows in each of the next several years as shown in the following
Table 5.4.
Table 5.4: Data for risk and variance example

Expected cash flows are calculated in the same way as expected value:

E(CF) = Σj pj CFj
Although both projects have the same mean (expected) cash flows, the expected
values of the variances (squares of the deviations from the mean) differ as follows (see
also Figure 5.4):

σ² = Σj pj (CFj − E(CF))²

The standard deviations are the square roots of these values:

σ = √σ²
Figure 5.4: Projects with the same expected value and different variances

Since project Y has the greater variability (whether measured in variance or in standard
deviation), it must be considered to offer greater risk than does project X.
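
The mean, variance, and standard deviation of a discrete cash-flow distribution can be
computed directly from the definitions above. Since the figures of Table 5.4 are not
reproduced here, the Python sketch below uses hypothetical data, chosen so that, as in
the example, both projects share the same mean but differ in spread.

import math

def stats(dist):
    # dist is a list of (probability, cash flow) pairs
    mean = sum(p * cf for p, cf in dist)
    var = sum(p * (cf - mean) ** 2 for p, cf in dist)
    return mean, var, math.sqrt(var)

# hypothetical distributions (NOT the actual Table 5.4 data)
project_x = [(0.2, 3000), (0.6, 4000), (0.2, 5000)]
project_y = [(0.2, 2000), (0.6, 4000), (0.2, 6000)]

for name, dist in [("X", project_x), ("Y", project_y)]:
    mean, var, sd = stats(dist)
    print(f"project {name}: mean = {mean:.0f}, variance = {var:.0f}, std dev = {sd:.0f}")
# both means are 4000, but Y's standard deviation is twice X's, so Y is riskier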

Decision Making under Uncertainty: At times a decision maker cannot assess the
probability of occurrence for the various states of nature. Uncertainty occurs when
there exist several (i.e., more than one) future states of nature but the probabilities of
each of these states occurring are not known. In such situations the decision maker can

choose among several possible approaches for making the decision. A different kind
of logic is used here, based on attitudes toward risk.

Different approaches to decision making under uncertainty include the following:


• The optimistic decision maker may choose the alternative that offers the
highest possible outcome (the “maximax” solution);
• The pessimist decision maker may choose the alternative whose worst outcome
is “least bad” (the “maximin” solution);
• The third decision maker may choose a position somewhere between optimism
and pessimism (“Hurwicz” approach);
• Another decision maker may simply assume that all states of nature are
equally likely (the so-called “principle of insufficient reason”), set all
probabilities equal to 1.0/n, and maximize expected value based on that assumption;
• The fifth decision maker may choose the alternative that has the smallest
maximum regret (the “minimax regret” solution). Regret here is understood as
proportional to the difference between what we actually get and the better
position that we could have received if a different course of action had been
chosen. Regret is sometimes also called “opportunity loss.” The minimax
regret rule captures the behaviour of individuals who spend their post-decision
time regretting their choices.

For example, using the well-drilling problem of Table 5.3 above, consider what
happens if the probabilities for the three future states of nature cannot be estimated. In
Table 5.5, the “Maximum” column lists the best possible outcome for each alternative,
and the optimist will seek to “maximax” by choosing A2, which has the best outcome
($9,300,000) in that column.

The pessimist will look at the “Minimum” column, which lists the worst possible
outcome for each alternative, and he or she will pick the maximum of the minimums
(maximin) by choosing the alternative with the best (algebraic) worst case, here the $0
shared by A1 and A3. (In this example, both maxima came from the gusher state and
both minima from the dry-hole state, but this sort of coincidence does not usually
occur.)

Table 5.5: Example in Decision Making Under Uncertainty

A decision maker who is neither a total optimist nor a total pessimist may be asked to
express a “coefficient of optimism” α as a fractional value between 0 and 1 and then to
weight each alternative's best and worst outcomes accordingly:

Hi = α(best outcome for Ai) + (1 − α)(worst outcome for Ai)

The outcome using this “Hurwicz” approach and a coefficient of optimism of 0.2 is
shown in the third column of Table 5.5, and A2 is again the winner.

If decision makers believe that the future states are “equally likely,” they will seek the
higher expected value and choose on that basis:

Ei = (1/n) Σj Oij

On this criterion, too, A2 has the highest expected value.

(If, on the other hand, they believe that some futures are more likely than others, they
should be invited to express their best estimates as values and solve the problem as a
decision under risk!)

The final approach to decision making under uncertainty involves creating a second
matrix, not of outcomes, but of regret. Regret is quantified to show how much better
the outcome might have been if you had known what the future was going to be. If
there is a “small well” under your land and you did not drill for it, you would regret
the $300,000 you might have earned. On the other hand, if you farmed out the drilling,
your regret would be only $175,000 ($300,000 less the $125,000 profit sharing you
received). Table 5.6 provides this regret matrix and lists in the right-hand column the
maximum regret possible for each alternative. The decision maker who wishes to
minimize the maximum regret (minimax regret) will therefore choose A2, which has
the smallest maximum regret. Different decision makers will have different approaches
to decision making under uncertainty. None of the approaches can be described as the
“best” approach, for there is no one best approach.

Obtaining a solution is not always the end of the decision making process. The
decision maker might still look for other arrangements to achieve even better results.
Different people have different ways of looking at a problem.

Table 5.6: Well drilling example of decision making under uncertainty for regret
analysis
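
Dear learner, all five criteria can be computed together from the Table 5.3 payoffs, as the
following Python sketch illustrates. Note that with these numbers, maximax, Hurwicz
(α = 0.2), equally likely, and minimax regret all point to A2, while maximin favors the $0
worst case shared by A1 and A3.

payoffs = {
    "A1 don't drill": [0, 0, 0],
    "A2 drill alone": [-500_000, 300_000, 9_300_000],
    "A3 farm out":    [0, 125_000, 1_250_000],
}
alpha = 0.2                                   # coefficient of optimism for the Hurwicz rule

# best payoff obtainable in each state of nature, used for the regret matrix
best_per_state = [max(col) for col in zip(*payoffs.values())]

for alt, outs in payoffs.items():
    maximax = max(outs)
    maximin = min(outs)
    hurwicz = alpha * max(outs) + (1 - alpha) * min(outs)
    laplace = sum(outs) / len(outs)           # "equally likely" expected value
    max_regret = max(best - o for best, o in zip(best_per_state, outs))
    print(f"{alt}: maximax={maximax:,} maximin={maximin:,} "
          f"Hurwicz={hurwicz:,.0f} equally likely={laplace:,.0f} "
          f"max regret={max_regret:,}")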

5.4. APPLICATION OF DECISION THEORY IN SUPPLY CHAIN MANAGEMENT

Supply chain managers in a growing number of companies are using optimization


models to support fact-based decision-making. Such models assist them in making
better decisions about sourcing, manufacturing, transportation, warehousing, customer
service, and inventory management across the geographically dispersed facilities and
markets of their companies’ supply chains. Imbedded in easy-to-use systems,
optimization models have helped many companies identify plans with significantly
reduced supply chain costs.

Practitioners who develop these models harbor a secret that is rarely revealed to their
clients. Classical inventory models, which compute safety stocks, replenishment
quantities, and re-order points, are incompatible with optimization models, which
compute holistic plans for minimizing total supply chain cost. Specifically, the
techniques of probability theory underlying the construction of classical inventory
models do not blend well with the techniques of linear and mixed integer
programming underlying the construction of supply chain optimization models.

Thus, many—if not most—supply chain models developed to date are limited by one
of two complementary deficiencies. Some emphasize inventory decisions with
insufficient detail about other supply chain decisions, such as those relating to facility
location, manufacturing processes, transportation, and warehousing. Conversely, other
supply chain models emphasize these other decisions with insufficient detail about
inventory management.

A guiding principle underlying integration of inventory decisions with other supply


chain decisions is hierarchical planning. Inventory decision-making as part of total
supply chain management is divided into three areas of planning:

(1) Strategic planning involving long-term inventory deployment plans;

(2) Tactical planning involving aggregate inventory plans; and

(3) Operational planning involving detailed inventory management plans. Moreover,


decision-making at the three levels of planning should be linked to ensure short,
medium and long-term profitability of the firm.

In another front, a recent specific model called the supply chain operations reference
model (SCOR) pertains to decision making in supply chain management. It is a
management tool used to address, improve, and communicate supply chain
management decisions within a company and with suppliers and customers of a
company. The model describes the business processes required to satisfy a customer’s

demands. It also helps to explain the processes along the entire supply chain and
provides a basis for how to improve those processes.

The SCOR model was developed by the supply chain council with the assistance of 70
of the world’s leading manufacturing companies. It has been described as the “most
promising model for supply chain strategic decision making.” The model integrates
business concepts of process re-engineering, benchmarking, and measurement into its
framework. This framework focuses on five areas of the supply chain: plan, source,
make, deliver, and return. These areas repeat again and again along the supply chain.
The supply chain council says this process spans from “the supplier’s supplier to the
customer’s customer.”

Plan

Demand and supply planning and management are included in this first step. Elements
include balancing resources with requirements and determining communication along
the entire chain. The plan also includes determining business rules to improve and
measure supply chain efficiency. These business rules span inventory, transportation,
assets, and regulatory compliance, among others. The plan also aligns the supply chain
plan with the financial plan of the company.

Source

This step describes sourcing infrastructure and material acquisition. It describes how
to manage inventory, the supplier network, supplier agreements, and supplier
performance. It discusses how to handle supplier payments and when to receive,
verify, and transfer product.

Make

Manufacturing and production are the emphasis of this step. Is the manufacturing
process make-to-order, make-to-stock, or engineer-to-order? The make step includes
production activities, packaging, staging product, and releasing. It also includes
managing the production network, equipment and facilities, and transportation.

Deliver

Delivery includes order management, warehousing, and transportation. It also includes


receiving orders from customers and invoicing them once product has been received.
This step involves management of finished inventories, assets, transportation, product
life cycles, and importing and exporting requirements.

Return

Companies must be prepared to handle the return of containers, packaging, or


defective product. The return involves the management of business rules, return
inventory, assets, transportation, and regulatory requirements.

Benefits of Using the SCOR Model

The SCOR process can go into many levels of process detail to help a company
analyze its supply chain. It gives companies an idea of how advanced their supply
chains are. The process helps companies understand how the five steps repeat over and over again
between suppliers, the company, and customers. Each step is a link in the supply chain
that is critical in getting a product successfully along each level. The SCOR model has
proven to benefit companies that use it to identify supply chain problems. The model
enables full leverage of capital investment, creation of a supply chain road map,
alignment of business functions, and an average of two to six times return on
investment.

CHAPTER SUMMARY
Dear learner, in this unit we discussed the concept of decision making and the three
basic decision theory categories: descriptive, normative, and prescriptive. Further, we
explored the options and tools available for decisions under conditions of certainty,
uncertainty, and risk.

The other emphasis in this unit has been on business problems in which the
consequences of each alternative event combination can be accurately expressed in
dollars of profit, revenue, or cost.

A payoff table shows the available alternatives, the possible events and their
probabilities, and the associated monetary consequences. A reasonable (but by no
means unique) criterion of choice is the maximization of expected profit or revenue, or
the minimization of expected cost. When alternatives and events succeed one another
in stages, a decision tree is often useful in determining the best strategy. The
procedure is to work backwards from last to first choices, calculating the expected
payoff at each event node, and selecting the best alternative at each decision node.

END CHAPTER QUESTIONS


1. Discuss in detail the three categories of decision theory
2. Explain how decision tree analysis is made
3. Differentiate among the three decision conditions and the techniques to be
employed in each case
4. Give some examples of each of the three “occasions for decision” cited by
Chester Barnard.
5. Explain in your own words why Barnard thought the third category was most
important.
(a) Explain the difference between “optimizing” and “satisficing” in making
decisions, and
(b) Distinguish between routine and non-routine decisions.
6. Use a concrete example showing the five-step process by which management
science uses a simulation model to solve real-world problems.
7. You must decide whether to buy new machinery to produce product X or to
modify existing machinery. You believe the probability of a prosperous
economy next year is 0.6 and of a recession is 0.4. Prepare a decision tree, and
use it to recommend the best course of action. The applicable payoff table of
profits (+) and losses (-) is:

8. If you have no idea of the economic probabilities in the preceding question,
what would be your decision based on uncertainty using (a) maximax, (b)
maximin, (c) equally likely, and (d) minimax regret assumptions?

CHAPTER SIX: GAME THEORY

INTRODUCTION
Dear learner, this chapter presents different types of games and discusses how two or
more decision makers arrive at a rational choice. Here the issue is to determine how
rational players behave considering the likely choice of other players.

OBJECTIVE
At the end of this unit, you will be able to:
 Describe the concept of game theory and its common application areas

 Identify components of game theory and selected type of games

 Solve basic type of games rationally

 Identify dominant strategy and Nash Equilibrium

 Identify areas in supply chain that could be modeled through Game Theory

To such end, we will be dealing with game theory and address the following essential
topics:
1. Overview of Game Theory

2. Components of Game Theory

3. Dominant Strategy and Nash Equilibrium

4. Application of Game Theory in Supply Chain Management

6.1. OVERVIEW OF GAME THEORY

Game theory is a normative model for group decision. It is the study of mathematical
models of conflict and cooperation between intelligent, rational decision-makers, and
it is mainly applied in economics, political science, psychology, logic, and computer
science, and even in poker. Originally, it addressed zero-sum games, in which one
person's gains result in losses for the other participants. Today, game theory applies to
a wide range of behavioral relations, and is now an umbrella term for the science of
logical decision making in humans, animals, and computers.

A game is any decision problem where the outcome depends on the actions of more
than one agent, as well as perhaps on other facts about the world. Game Theory is the
study of what rational agents do in such situations. You might think that the way to
figure that out would be to come up with a theory of how rational agents solve
decision problems, that is, figure out Decision Theory, and then apply it to the special
case where the decision problem involves uncertainty about the behavior of other
rational agents.

6.2. COMPONENTS OF GAME THEORY

There are three key components of game theory: Players, Choices, and Pay-offs.
Players
For game theory application, we need at least two players. In a pay-off matrix, one
player takes choices row wise (the R player in this module) and the other column wise
(the C player in this module).
Choices
There should be at least two choices for each player. Player R takes choices arrayed
vertically row wise and Player C takes choices arrayed horizontally column wise.
Pay-offs and outcomes
Pay-offs indicate the outcomes of the various combinations of choices taken by all
players, expressed in utility or monetary value. The pay-offs for players R and C are
written in brackets set off by a comma, the first representing R and the next C, as
follows: (R's payoff, C's payoff)

Let's however note that there is a difference between outcomes and payoffs. An
agent’s payoff may be quite high, even if their outcome looks terrible, if the result of
the game involves something they highly value. For instance, if one player values the
wealth of the other player, an outcome that involves the other player ending up with
lots of money will be one where the payoffs to both players are high. In that respect
the theory does not assume selfishness on the part of agents. It does assume that agents
should try to get what they value, but that doesn’t seem too big a constraint at first,
assuming that agents are allowed to value anything. But in practice things are a little
more complicated. The model game theorists are using here is similar to the model
that many ethicists, from G. E. Moore onward, have used to argue that any (plausible)
ethical theory has a consequentialist form. To take one example, let’s assume that we
are virtue ethicists, and we think ethical considerations are ‘trumps’, and we are
playing a game that goes from time t0 to time t1. Then we might say the payoff to any
agent at t1 is simply how virtuously they acted from t0 to t1. Since agents are
supposed to be as virtuous as possible, this will give us, allegedly, the right evaluation
of the agent’s actions from t0 to t1.

To make the game theory components clear, let's see the following diagram.

Figure 6.1: Game theory matrix

                                    Player 1
                          Strategy 1              Strategy 2
Player 2   Strategy 1     (Player 2, Player 1)    (Player 2, Player 1)
           Strategy 2     (Player 2, Player 1)    (Player 2, Player 1)

Each cell lists the row player's (Player 2's) payoff first and the column player's
(Player 1's) payoff second.
Activity 6.1
Dear learner, identify the player, choices, and the payoffs for each player from the
following payoff matrix:
            C
         a        b
R   A    1, 1     5, 0
    B    0, 5     3, 3
_____________________________________________________________________
_____________________________________________________________________
____________________________________________________________________
Players are indicated by the letters R and C, R's choices by the capital letters A and B,
C's choices by the small letters a and b, and the pay-offs by the number pairs.

6.3. TYPE OF GAMES

There are different kinds of games ranging from simple to complex. Let’s now start
with a very simple example of a game. Later we will discuss the Prisoners' Dilemma
game and the solution to it.

Each player in the game is to choose a letter, A or B. After they make the choice, the
player will be paired with a randomly chosen individual who has been faced with the
very same choice. They will then get rewarded according to the following table.

_ If they both choose A, they will both get £1


_ If they both choose B, they will both get £3
_ If one chooses A, and the other B, the one who chose A will get £5, and the one who
chose B will get £0.

We can represent this information in a small table, as follows. (Where possible, we’ll
use uppercase letters for the choices on the rows, and lowercase letters for choices on
the columns.)
            Choose a    Choose b
Choose A    £1, £1      £5, £0
Choose B    £0, £5      £3, £3

We represent one player, imaginatively called Row, or R, on the rows, and the other
player, imaginatively called Column, or C on the columns. A cell of the table
represents the outcomes, if R chose to be on that row, and C chose to be in that
column. There are two monetary sums listed in each cell. We will put the row player’s
outcome first, and the column player’s outcome second. You should verify that the
table represents the same things as the text.

Now let’s note a few distinctive features of this game.


 Whatever C does, R gets more money by choosing A. If C chooses a, then R
gets £1 by choosing A, and £0 by choosing B; i.e., R gets more by choosing A.
And if C chooses b, then R gets £5 by choosing A, and £3 by choosing B; i.e.,
again, R gets more by choosing A.
 Since the game is symmetric, that’s true for C as well. Whatever R does, C
gets more money by choosing a.
 But the players collectively get the most money if they both choose B. So
doing what maximizes the players' individual monetary rewards does not
maximize, indeed it minimizes, their collective monetary rewards.

Here we have been careful so far to distinguish two things: the monetary rewards each
player gets, and what is best for each player. More elegantly, we need to distinguish
the outcomes of a game from the payoffs of a game. The outcomes of the game are
things we can easily physically describe: this player gets that much money, that player
goes to jail, this other player becomes a criminal, etc. The payoffs of a game describe
how well off each player is with such an outcome. Without knowing much about the
background of the players, we don't know much about the payoffs. Let's make this
explicit by looking at four ways in which the agents may value the outcomes of the
game. The first way is that agents simply prefer that their monetary payoff is as high
as possible. If that's the way the players value things, then the game looks as follows.

Game 1 Choose a Choose b
Choose A 1, 1 5, 0
Choose B 0, 5 3, 3

Whenever we just put numbers in a table, we assume that they stand for utility. And
we assume that players are constantly trying to maximize utility. We will come back
to this assumption presently. But first, let’s note that it doesn’t require that players
only care about their own well-being. We could change the game, while keeping the
outcome the same, if we imagine that R and C are parts of a commune, where all
money is shared, and they both know this, so both players utility is given by how
much money is added to the commune in a give outcome. That will give us the
following game.

Game 2 Choose a Choose b


Choose A 2, 2 5, 5
Choose B 5, 5 6, 6

Activity 6.2: Dear learner, compare game 1 and game 2 and evaluate whether what is
good for each player is good for the collective.
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
The answer is included in the following discussion.
In Game 1, the players face the following awkward circumstance. Each individual will
be made better off by playing A rather than B, no matter what happens, but were they
both to have played B, they would both be better off than if they had both played A.
That’s not true in Game 2; here what is good for each player is good for the collective.
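
Dear learner, the idea of a strategy that is better no matter what the other player does (a
dominant strategy, taken up again later in this chapter) can be made concrete in a few
lines of code. The following Python sketch is a minimal illustration using the Game 1
payoffs; the function name and layout are ours, chosen for clarity.

# Game 1 payoffs, keyed by (R's choice, C's choice)
R_payoff = {("A", "a"): 1, ("A", "b"): 5, ("B", "a"): 0, ("B", "b"): 3}
C_payoff = {("A", "a"): 1, ("A", "b"): 0, ("B", "a"): 5, ("B", "b"): 3}

def dominant_strategy(payoff, own, other, own_first=True):
    # returns an own strategy strictly better than every other own strategy
    # against every strategy of the opponent, if one exists
    key = (lambda s, o: (s, o)) if own_first else (lambda s, o: (o, s))
    for s in own:
        if all(all(payoff[key(s, o)] > payoff[key(t, o)] for o in other)
               for t in own if t != s):
            return s
    return None

print(dominant_strategy(R_payoff, ("A", "B"), ("a", "b")))                   # A
print(dominant_strategy(C_payoff, ("a", "b"), ("A", "B"), own_first=False))  # a
# playing both dominant strategies yields (1, 1), although (B, b) gives (3, 3)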

You might note that we have started numbering the games, and that we didn’t number
the initial description of the outcomes. There’s a reason for this. Technically, we’ll say

that a game is specified by setting out what moves, or as we’ll sometimes call them,
strategies are available for each player, and what the payoffs are for each player, given
the moves that they make. (And, perhaps, the state of the world; for now we’re just
looking at games where only moves matter for payoffs. And we’re only looking for
now at games where each player makes a simultaneous choice of strategy. We’ll
return to how general an account this is in a little while.) Specifying the outcome of a
game in physical terms doesn’t give us a unique game. We need to know more to get a
genuine game specification.

There are yet more ways we could imagine the outcomes being mapped into a
particular payoff matrix; that is, a game. Imagine that the players have the following
values.

First, they care a lot about how much money goes to the two of them together. So the
first determinant of their payoff is the sum of the money paid to each player. Second,
they care a lot about agreement. If the two players play different strategies, that is
equivalent to a cost of £5 to them. So here is the payoff table for players with those
values, after deducting the cost (£5) of disagreement (playing different strategies)
from the Game 2 matrix.

Game 3 Choose a Choose b


Choose A 2, 2 0, 0
Choose B 0, 0 6, 6

Something new happens in Game 3 which we haven't seen before. What is best for
each player to do depends on what the other player does. In Game 1, each player was best off
playing A, no matter what the other player did. In Game 2, each player was best off
playing B, no matter what the other player did. But in Game 3, the best move for each
player is to play what the other player does. If R plays A, then C gets either 2 (if C
plays a) or 0 (if C plays b). If R plays B, then C gets either 6 (if C plays b) or 0 (if C
plays a). There’s no single best strategy for her, until she knows what R does.

We can mix and match these. Let’s look at what the game is like if R has the egotistic
preferences from Game 1, and C has the obsession with agreement of the players in
Game 3.

Game 4 Choose a Choose b


Choose A 1, 2 5, 0
Choose B 0, 0 3, 6

You should confirm this, but what we have attempted to do here is have the first
number in each cell, i.e., R’s payoff, copy the matching cell in Game 1, and the second
number in each cell, i.e., C’s payoff, copy the matching cell in Game 3. We will come
back in a little while to what to say about Game 4, because it is more complicated than
the other games we’ve seen to date. First, let’s make three philosophical asides on
what we’ve seen so far.

Prisoners’ Dilemma
Game 1 is often called a Prisoners’ Dilemma. There is perhaps some terminological
confusion on this point, with some people using the term “Prisoners’ Dilemma” to
pick out any game whose outcomes are like those in the games we’ve seen so far, and
some using it only to pick out games whose payoffs are like those in Game 1.
Following what Simon Blackburn says in “Practical Tortoise Raising”, some scholars
think it’s not helpful to use the term in the first way. So we’ll only use it for games
whose payoffs are like those in Game 1. And what we mean by payoffs like those in
Game 1 is the following pair of features.

 Each player is better off choosing A than B, no matter what the other player does.
 The players would both be better off if they both chose B rather than both chose A.

You might want to add a third condition, namely that the payoffs are symmetric. But
just what that could mean is a little tricky. It’s easy to compare outcomes of different

players; it’s much harder to compare payoffs. So we’ll just leave it with these two
conditions.

It is often very bad to have people in a Prisoners’ Dilemma situation; everyone would
be better off if they were out of it. Or so it might seem at first. Actually, what’s really
true is that the two players would be better off if they were out of the Prisoners’
Dilemma situation. Third parties might stand to gain quite a lot from it. (If we are
paying out the money at the end of the game, we prefer that the players are in Game 1
to Game 2.). There are several ways we could try and escape a Prisoners’ Dilemma.
We’ll mention four here, the first two of which we might naturally associate with
Adam Smith. The first way out is through compassion. If each of the players cares
exactly as much about the welfare of the other player as they do about themselves,
then we’ll be in something like Game 2, not Game 1. Note though that there’s a limit
to how successful this method will be. There are variants of the Prisoners’ Dilemma
with arbitrarily many players, not just two. In these games, each player is better off if
they choose A rather than B, no matter what the others do, but all players are better off
if all players choose B rather than A. It stretches the limit of compassion to think we
can in practice value each of these players’ welfare equally to our own. Moreover,
even in the two-player game, we need an exact match of interests to avoid the
possibility of a Prisoners' Dilemma.

Let’s say that R and C care about each other’s welfare a large amount. In any game
they play for money, each players’ payoff is given by the number of pounds that
player wins, plus 90% of the number of pounds the other player wins. Now let’s
assume they play a game; with the following outcome structure.

Choose a Choose b
Choose A £9.50, £9.50 £20, £0
Choose B £0, £20 £10, £10

So we’ll have the following payoff matrix because of positive expectation to the other.

Game 5 Choose a Choose b
Choose A 18.05, 18.05 20, 18
Choose B 18, 20 19, 19

And that’s still a Prisoners’ Dilemma, even though the agents are very compassionate.
So compassion can’t do all the work. But, probably none of the other ‘solutions’ can
work unless compassion does some of the work. (That’s partially why Adam Smith
wrote the Theory of Moral Sentiments before going on to economic work; some moral
sentiments are necessary for economic approaches to work.)
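
To see concretely how compassion reshapes a game, here is a minimal Python sketch
(ours, for illustration; the function name compassion_payoffs and the weight
parameter are not from the game theory literature) that turns the monetary
outcomes above into the Game 5 payoff matrix:

# Sketch: transform monetary outcomes into payoffs when each player
# also values a fraction of the other player's winnings.

def compassion_payoffs(money, weight=0.9):
    """money[(row, col)] = (R's pounds, C's pounds); returns payoff matrix."""
    payoffs = {}
    for cell, (r_money, c_money) in money.items():
        payoffs[cell] = (r_money + weight * c_money,
                         c_money + weight * r_money)
    return payoffs

# Monetary outcomes from the table above.
outcomes = {
    ("A", "a"): (9.50, 9.50), ("A", "b"): (20.0, 0.0),
    ("B", "a"): (0.0, 20.0),  ("B", "b"): (10.0, 10.0),
}

print(compassion_payoffs(outcomes))
# {('A', 'a'): (18.05, 18.05), ('A', 'b'): (20.0, 18.0),
#  ('B', 'a'): (18.0, 20.0),  ('B', 'b'): (19.0, 19.0)}  -- Game 5

Running it reproduces Game 5, confirming that even at a 90% weight the dilemma
survives.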

Our second way out is through contract. Let’s say each party contracts with the other
to choose B, and agrees to pay £ 2.50 to the other if they break the contract. Assuming
that this contract will be enforced (and that the parties know this), here is what the
outcome table now looks like.
Choose a Choose b
Choose A £1, £1 £2.50, £2.50
Choose B £2.50, £2.50 £3, £3

Note that the above table is obtained by adjusting Game 1 for the £2.50 penalty for defection:

Adjusted Game 1 Choose a Choose b


Choose A 1, 1 5-2.5, 0+2.5
Choose B 0+2.5, 5-2.5 3, 3

Now if we assume that the players just value money, those outcomes generate the
following game.
Game 6 Choose a Choose b
Choose A 1, 1 2.5, 2.5
Choose B 2.5, 2.5 3, 3

Interestingly, the game looks just like the original Prisoners’ Dilemma as played
between members of a commune. Basically, the existence of side contracts is enough
to turn capitalists into communists.

A very closely related approach, one which is typically more efficient in games
involving larger numbers of players, is to modify the outcomes, and hence the payoffs,
with taxes. A striking modern example of this involves congestion charges in large
cities. There are many circumstances where each person would prefer to drive
somewhere than not, but if everyone drives, we’re all worse off than if everyone took
mass transit (or simply stayed home). The natural solution to this problem is simply to
put a price on driving into the congested area. If the price is set at the right level, those
who pay the charge are better off than if the charge was not there, since the amount
they lose through the charge is gained back through the time they save. In principle,
we could always prevent Prisoners' Dilemma situations from arising through judicious
use of taxes and charges. But it's hard to get the numbers right, and even harder to do
the enforcement. As a result, sometimes states will try to solve Prisoners’ Dilemma
situations with regulation. We see this in Beijing, for example, when they try to deal
with congestion not by charging people money to enter the city, but by simply banning
(certain classes of) people from driving into the city on given days. At a more abstract
level, you might think of ethical prohibitions on ‘free-riding’ as being ways of morally
regulating away certain options. If choosing B is simply ruled out, either by law or
morality, there’s clearly no Prisoners’ Dilemma!

Having said that, the most important kind of regulation around here concerns making
sure Prisoners’ Dilemma situations survive, and are not contracted away. Let the two
players be two firms in a duopoly; i.e., they are the only firms to provide a certain
product. It is common for there to be only two firms in industries that require massive
capital costs to start-up, e.g., telecommunications or transport. In small towns, it is
common to have only two firms in more or less every sphere of economic life. In such
cases there will usually be a big distance between the prices consumers are prepared to
pay for the product, and the lowest price that the firm could provide the product and

still turn a profit. Call these prices High and Low. If the firms only care about
maximizing profit, then it looks like setting prices to High is like choosing B in Game
1, and setting prices to Low is like choosing A in that game. The two firms would be
better off if each of them had High prices. But if one had High prices, the other would
do better by undercutting them, and capturing (almost) all the market. And if both had
Low prices, neither would be better off raising prices, because (almost) everyone
would desert their company. So the firms face a Prisoners’ Dilemma.

As Adam Smith observed, the usual way businesses deal with this is by agreeing to
raise prices. More precisely, he says, "People of the same trade seldom meet together,
even for merriment and diversion, but the conversation ends in a conspiracy against
the public, or in some contrivance to raise prices." And that's not too surprising.
There's a state where they are both better off than the state where they compete. If
by changing some of the payoffs they can make that state more likely to occur, then
they will. And that's something that we should regulate away, if we want the benefits
of market competition to accrue to consumers.

The final way to deal with a Prisoners' Dilemma is through iteration. But that's a big,
complicated issue.
Activity 6.3
Dear learner identify the various types of games discussed in this topic
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
The main types discussed include the Prisoners' Dilemma and its variants modified by
compassion and by contracts.

6.4. DOMINANT STRATEGY AND NASH EQUILIBRIUM

Given the assumption that players are rational, we infer that players should never play a dominated strategy.
A dominated strategy is, roughly, a strategy such that some other strategy can do

better, no matter how other things are. In other words, if a player knows that strategy
s1 will do better than s2, then it is irrational for her to do s2.

It’s important to be careful about what we mean by a dominated strategy. Here is a


more careful definition.

Strong Domination: A strategy s1 strongly dominates strategy s2 for player i if and
only if for any combination of moves by other players, and states of the external
world, playing s1 provides a greater payoff than playing s2, assuming other players
make those moves, and the world is that way.

Strongly Dominated: A strategy is strongly dominated if and only if some other
strategy, available to the same player, strongly dominates it.
There is a potential scope ambiguity in the description of a strongly dominated
strategy that it is important to be clear about. The claim is not that a strategy is
strongly dominated if no matter what else happens, some strategy or other does better
than it. It is that a strategy is dominated if some particular strategy does better in every
circumstance. We can see the difference between these two ideas in the following
game.
l r
U 3, 0 0, 0
M 2, 0 2, 0
D 0, 0 3, 0
Consider this game from R's perspective; as always, R chooses the rows. R's
options are Up, Middle and Down. C chooses the columns; her choices are Left or
Right. (We hope the ambiguity between r for Right and R for Row is not too
confusing. It should be very hard in any given context to get them mixed up, and
hopefully the convention we've adopted about cases will help.)

Notice that Middle is never the best outcome for R. If C chooses Left, R does best
choosing Up. If C chooses Right, R does best choosing Down. But that does not mean

Middle is dominated. Middle would only be dominated if one particular choice was
better than it in both circumstances. And that’s not true. Middle does better than Up in
one circumstance (when C chooses Right) and does better than Down in another
circumstance (when C chooses Left).
Indeed, there are situations where Middle might be uniquely rational. We need to say a
bit more about expected utility theory to say this precisely, but consider what happens
when R suspects C is just going to flip a coin, and choose Left if it comes up Heads,
and Right if it comes up Tails. (Since literally nothing is at stake for C in the game,
this might be a reasonable hypothesis about what C will do.) Then it maximizes R’s
expected return to choose Middle.
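
A quick sketch makes this precise. Assuming C flips a fair coin, the following
Python lines (illustrative only) compute R's expected payoff for each move from
the table above:

# Sketch: R's expected payoff for Up/Middle/Down when C flips a fair coin
# (probability 0.5 each for Left and Right), using the table above.
payoff_R = {"U": {"l": 3, "r": 0},
            "M": {"l": 2, "r": 2},
            "D": {"l": 0, "r": 3}}
p = {"l": 0.5, "r": 0.5}

for move, row in payoff_R.items():
    eu = sum(p[c] * u for c, u in row.items())
    print(move, eu)   # U 1.5, M 2.0, D 1.5 -> Middle maximizes expected return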

So far we’ve talked about the notion of strong dominance. We also need a notion of
weak dominance.

Weak Domination: Roughly, strategy s1 weakly dominates strategy s2 if s1 can do
better than s2, and can't do worse. More formally, a strategy s1 weakly dominates
strategy s2 for player i if and only if for some combination of moves by other players,
and states of the external world, playing s1 provides a greater payoff than playing s2,
and for all combinations of moves by other players, and states of the external world,
playing s1 provides at least as high a payoff as playing s2, assuming other players
make those moves, and the world is that way.

Weakly Dominated: A strategy is weakly dominated if and only if some other
strategy, available to the same player, weakly dominates it. It does seem plausible
that agents should prefer any strategy over an alternative that it weakly dominates.
This leads to distinctive results in games like the following.
a b
A 1, 1 0, 0
B 0, 0 0, 0

In this game, choosing A does not strongly dominate choosing B for either player. The
game is symmetric, so from now on we’ll just analyze it from R’s perspective. The
reason choosing A does not strongly dominate is that if C chooses b, then choosing A
leads to no advantage. R gets 0 either way.

But choosing A does weakly dominate choosing B. A does better than B in one
circumstance, namely when C chooses a, and never does worse. So a player who shuns
weakly dominated options will always choose A rather than B in this game.
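
The two definitions translate directly into code. Here is a minimal Python sketch
(the function names are ours) that checks strong and weak dominance for the row
player, applied to the game just discussed:

# Sketch: checking strong and weak dominance for the row player, given a
# payoff table payoff[row_strategy][col_strategy] = row player's payoff.

def strongly_dominates(payoff, s1, s2):
    # s1 must do strictly better than s2 against every column choice.
    return all(payoff[s1][c] > payoff[s2][c] for c in payoff[s1])

def weakly_dominates(payoff, s1, s2):
    # s1 never does worse than s2, and does strictly better at least once.
    at_least = all(payoff[s1][c] >= payoff[s2][c] for c in payoff[s1])
    strictly = any(payoff[s1][c] > payoff[s2][c] for c in payoff[s1])
    return at_least and strictly

# Row payoffs from the game above.
payoff = {"A": {"a": 1, "b": 0}, "B": {"a": 0, "b": 0}}
print(strongly_dominates(payoff, "A", "B"))  # False: ties when C plays b
print(weakly_dominates(payoff, "A", "B"))    # True: better vs a, never worse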

Iterated Dominance: A rational player, we've argued, won't choose dominated
strategies. Now let's assume, as is often the case, that we're playing a game where
each player knows that the other player is rational. In that case, the players will not
only decline to play dominated strategies, they will decline to play strategies that only
produce the best outcomes if the other player adopts a dominated strategy. This can be
used to generate a prediction about what people will, or at least should, do in various
games. We can see this going back to a variant of Prisoners’ Dilemma from earlier on.

Choose a Choose b
Choose A 1, 2 5, 0
Choose B 0, 0 3, 6

If we look at things from C’s perspective, neither strategy is dominated. She wants to
choose whatever R chooses. But if we look at things from R’s perspective, things are a
little different. Here there is a strongly dominating strategy, namely choosing A. So C
should really think to herself that there’s no way R, who is rational, is going to choose
B. Given that, the table really looks like this.

Game 4′ Choose a Choose b
Choose A 1, 2 5, 0

We have put the prime there to indicate it is officially a different game. But really all
we have done is delete a dominated strategy that the other player has. Now it is clear
what C should do. In this ‘reduced’ game, the one with the dominated strategy deleted,
there is a dominant strategy for C. It is choosing a. So C should choose a. The
reasoning here might have been a little convoluted, but the underlying idea is easy
enough to express. R is better off choosing A, so she will. C wants to choose whatever
R chooses. So C will choose a as well.

Mixed strategy
As game theorists sometimes put it: how should we interpret talk of mixed strategies?
It turns out the options here are very similar to the candidate 'interpretations' of
probability. And we said that if we expand the space of available strategies to include
mixed strategies, then such games do have a Nash Equilibrium.

Consider the game of matching pennies. Each player reveals a penny; Row wins if the
coins are both heads up or both tails up, and Column wins if one coin is heads up and
the other tails up. Here each player's choice and expected result depend on the
probability with which the opponent shows heads or tails, as given in the following game:
Heads Tails
Heads 1, -1 -1, 1
Tails -1, 1 1, -1
Again, the only equilibrium solution is for each player to play 1/2, 1/2. And here the
chance interpretation of this strategy is that each player plays by simply flipping their
coin, and letting it land where it may.
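
The indifference behind this equilibrium can be verified in a few Python lines
(a sketch; the function name is ours):

# Sketch: in matching pennies, if Column plays Heads with probability q,
# Row's expected payoffs for Heads and Tails are equal only at q = 1/2.
def row_expected(move, q):
    payoff = {"H": {"H": 1, "T": -1}, "T": {"H": -1, "T": 1}}
    return q * payoff[move]["H"] + (1 - q) * payoff[move]["T"]

for q in (0.25, 0.5, 0.75):
    print(q, row_expected("H", q), row_expected("T", q))
# Only at q = 0.5 do both moves give Row the same expected payoff (0),
# which is what sustains the 1/2, 1/2 mixed equilibrium.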

Nash Equilibrium
A Nash Equilibrium is a set of strategies where no single player can do better by
unilaterally changing their strategy. If no player can benefit by changing their
strategy, given knowledge of the opponents' strategies, then we have a Nash Equilibrium.

We find a Nash equilibrium from the normal form of the game by trying to pick out a
row and a column such that the payoff at their intersection is the highest possible for
player 1 down the column, and the highest possible for player 2 across the row (there
may be more than one such pair). We follow this procedure to identify a Nash
Equilibrium (a small sketch in code follows the list):

1. Analyze your own strategy in light of all the opponent's options and find whether
there is a dominant strategy
2. Do the same for the opponent and see if the opponent also has a dominant
strategy
3. Find the intersection of the two dominant strategies to find the normative
outcome called the Nash equilibrium
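
As promised, here is a small Python sketch of the procedure (a brute-force
best-response check rather than the dominant-strategy shortcut, so it also works
for games without dominant strategies; the function name is ours):

# Sketch of the procedure above: mark each player's best responses and
# report cells that are best responses for both players (pure Nash equilibria).

def nash_equilibria(game):
    """game[(r, c)] = (row payoff, col payoff); returns equilibrium cells."""
    rows = sorted({r for r, _ in game})
    cols = sorted({c for _, c in game})
    equilibria = []
    for r in rows:
        for c in cols:
            best_row = all(game[(r, c)][0] >= game[(r2, c)][0] for r2 in rows)
            best_col = all(game[(r, c)][1] >= game[(r, c2)][1] for c2 in cols)
            if best_row and best_col:
                equilibria.append((r, c))
    return equilibria

# Game 1 from this chapter.
game1 = {("A", "a"): (1, 1), ("A", "b"): (5, 0),
         ("B", "a"): (0, 5), ("B", "b"): (3, 3)}
print(nash_equilibria(game1))   # [('A', 'a')] -- both players defect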
Dear learner, let's now find the Nash Equilibrium for a classic prisoners' dilemma
case. Two prisoners are imprisoned for a minor crime for which evidence is
established, and the police suspect them of a major crime. Thus, the police provide
an incentive, in jail terms, for each to confess against (defect on) the other. The
defector goes free if the other player cooperates (does not confess); in that case the
latter player will be jailed for the maximum term of 3 years, as indicated in Table 6.1.
If both cooperate by refusing to confess, they will both be jailed for 1 year for their
minor offenses. If both players defect (confess against one another), they will both be
jailed for two years. The prisoners take their decisions separately and simultaneously,
not knowing the action of the other. Let's now see how we find the dominant
strategies and the Nash Equilibrium.

First find the dominant strategy for each player. A dominant strategy occurs when one
strategy is better than another strategy for one player, no matter how that player's
opponents may play. We need to ask a "what if" question about the move of the other
player and our best response. Tables 6.1 and 6.2 indicate the dominant strategies for
the prisoners.

Table 6.1: Dominant strategy for prisoner 1
(each cell shows: prisoner 1's jail term, prisoner 2's jail term)

                                     Prisoner 1
                              Cooperate      Defect
Prisoner 2    Cooperate       1 yr, 1 yr     0 yr, 3 yr
              Defect          3 yr, 0 yr     2 yr, 2 yr

Prisoner 1's dominant strategy is Defect: 0 yr beats 1 yr if prisoner 2
cooperates, and 2 yr beats 3 yr if prisoner 2 defects.

Table 6.2: Dominant strategy for prisoner 2
(each cell shows: prisoner 1's jail term, prisoner 2's jail term)

                                     Prisoner 1
                              Cooperate      Defect
Prisoner 2    Cooperate       1 yr, 1 yr     0 yr, 3 yr
              Defect          3 yr, 0 yr     2 yr, 2 yr

Prisoner 2's dominant strategy is likewise Defect: 0 yr beats 1 yr if prisoner 1
cooperates, and 2 yr beats 3 yr if prisoner 1 defects.

Once the dominant strategies are obtained, the Nash equilibrium is found at the
intersection of the dominant strategies.

Table 6.3: Nash equilibrium
(each cell shows: prisoner 1's jail term, prisoner 2's jail term)

                                     Prisoner 1
                              Cooperate      Defect
Prisoner 2    Cooperate       1 yr, 1 yr     0 yr, 3 yr
              Defect          3 yr, 0 yr     2 yr, 2 yr   <- Nash Equilibrium (Defect, Defect)

Activity 6.4: Describe the concept of dominant strategy and Nash equilibrium
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
Nash equilibrium is obtained at the intersection of two dominant strategies
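
Dear learner, the same check can be run on the jail-term tables above; since jail
years are costs, each prisoner minimizes rather than maximizes. A minimal Python
sketch (the labels C and D are ours):

# Sketch: the jail-term tables above, with years as costs to minimize.
# Cells are (prisoner 1's years, prisoner 2's years); a Nash equilibrium
# is a cell where neither prisoner can cut their own sentence unilaterally.

years = {("C", "C"): (1, 1), ("D", "C"): (0, 3),
         ("C", "D"): (3, 0), ("D", "D"): (2, 2)}
moves = ["C", "D"]   # C = cooperate (stay silent), D = defect (confess)

for m1 in moves:
    for m2 in moves:
        best1 = all(years[(m1, m2)][0] <= years[(a, m2)][0] for a in moves)
        best2 = all(years[(m1, m2)][1] <= years[(m1, a)][1] for a in moves)
        if best1 and best2:
            print("Nash equilibrium:", (m1, m2))   # ('D', 'D') -- two years each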

6.5. APPLICATION OF GAME THEORY IN SUPPLY CHAIN


MANAGEMENT

Research indicates that both cooperative and non-cooperative games are applicable in
Supply Chain Management, the former having gained popularity in recent years.
The non-cooperative theory of games is strategy oriented – i.e., it studies what one
may expect the players to do and the nitty-gritty details of how they get there.
Cooperative game theory, on the other hand, takes a different tack. It directly looks at
the set of possible outcomes, studies what the players can achieve, what coalitions will
form, how the coalitions that do form divide the outcome, and whether the outcomes
are stable and robust. Thus, one may say that the non-cooperative theory is a micro
approach in that it focuses on precise descriptions of what happens.

The field of supply chain management has seen, in recent years, a wide variety of
research papers that employ non-cooperative game theory to model interaction
between players.

Cooperative game theory offers several recipes for this process. One such important
application is bargaining between the players. In this study, we focus extensively on
bargaining games and their implications to supply chains. Yet another important
theme is that of stability. When players decide on allocations from the set of feasible
outcomes, independent of the process (for instance bargaining), some or all players
can pursue options such as joining together as a coalition and agreeing on a joint
course of action. Two questions immediately arise:

(i) how will players in an alliance divide the gains accrued by their joint venture? and

(ii) what are the stable coalitions/outcomes that emerge in a particular setting?

Nagarajan and Sošić (2006) surveyed some applications of cooperative
game theory to supply chain management. Their work placed special emphasis on two
important aspects of cooperative games: profit allocation and stability. The paper first
describes the construction of the set of feasible outcomes in commonly seen supply
chain models, and then uses cooperative bargaining models to find allocations of the
profit pie between supply chain partners. In doing so, several models were explored
and the following parties were observed: suppliers selling to competing retailers, and
assemblers negotiating with component manufacturers selling complementary
components. A good part of the study discusses the issue of coalition formation among
supply chain partners and ideas such as farsightedness among supply chain players.

Schiele and Vos (2014) conducted a study introducing game theory, connecting it
to supply chain management, and analysing the added value the combination
generates. Based on the knowledge of game theory, the study explored game theory's
expected influence on the four major decision points of supply chain management:
make-or-buy decisions, sourcing strategies, supplier selection, and contracting. Building
on these findings, a matrix is developed that outlines the key competences of game
theory with regard to the four decision points.

CHAPTER SUMMARY
Dear learner, in this unit we discussed the concept of game theory and its key
components. Game theory is a normative mathematical model applicable to group
decisions. Game theory has three elements: players, strategies, and pay-offs.
We also saw various types of basic games, such as the prisoners' dilemma and the
zero-sum game. Further, we discussed how to find a dominant strategy and the Nash
Equilibrium. To find the Nash Equilibrium of a classic example, the prisoners'
dilemma, rational players assess their dominant strategies. We learned that knowledge
of dominant strategies is key to identifying the Nash equilibrium outcome.

Finally, we explored the application of game theory in supply chain management.

END CHAPTER QUESTIONS


1. Discuss the meaning of game theory
2. Describe strong dominance, weak dominance, and dominated strategy (choice)
3. Discuss Nash Equilibrium
4. What is prisoner's dilemma?
5. Identify areas of game theory application in supply chain
6. Find the dominant strategies for player R and player C and the Nash
Equilibrium for the following game
Choose a Choose b
Choose A 1, 1 5, 0
Choose B 0, 5 3, 3

CHAPTER SEVEN: QUEUE THEORY
INTRODUCTION
This chapter addresses a common problem facing banks, telephone operators,
hospitals, and even supply chain actors: the waiting line. The chapter discusses how to
mathematically model waiting line problems and measure waiting line (queue)
performance. Further discussion explores the application of waiting lines in supply
chain management.
OBJECTIVE
At the end of this unit, you will be able to:
 Describe the concept of queue and its common application areas

 Identify queue notations and models

 Solve basic queue problems

 Identify areas in supply chain that could be modeled through queue model

To such end, the chapter will be dealing with queue theory and will address the
following essential topics:
1. Overview of Queue theory

2. Queue Notations

3. Types of Queue Models

4. Application of queue models

5. Application of queue model in supply chain

7.1. OVERVIEW OF QUEUE THEORY

Queue models measure the "performance of service systems considering the variability in
service time and customer arrival time", as Gross and Harris (1997, 170) put it. A well-
known name behind queue theory is the Danish mathematician Erlang. The first
queuing theory problem was considered by Erlang in 1908, who looked at how large a
telephone exchange needed to be in order to keep to a reasonable value the number of
telephone calls not connected because the exchange was busy (lost calls). Within ten
years he had developed a (complex) formula to solve the problem. His proof that
arrivals in call centers follow a Poisson distribution marked the birth of the concept.
Later Kendall introduced the notation A/B/c to signify the behavior of arrivals (A),
the behavior of service (B), and the number of servers (c).

Queuing theory deals with problems which involve queuing (or waiting). Typical
examples might be:

 banks/supermarkets - waiting for service


 computers - waiting for a response
 failure situations - waiting for a failure to occur e.g. in a piece of machinery
 public transport - waiting for a train or a bus

As we know queues are a common every-day experience. Queues form because


resources are limited. In fact it makes economic sense to have queues. For example,
how many supermarket tills would you need to avoid queuing? How many buses or
trains would be needed if queues were to be avoided or eliminated?

In designing queueing systems we need to aim for a balance between service to


customers (short queues implying many servers) and economic considerations (not too
many servers).

In essence all queuing systems can be broken down into individual sub-systems
consisting of entities queuing for some activity: entities arrive, wait in a queue,
receive service, and leave.
Typically, as Beasley put it, we can talk of this individual sub-system as dealing
with customers queuing for service. To analyse this sub-system we need information
relating to arrival process, service process, queue discipline, and queue structure:

 Arrival process:

Arrivals may originate from one or several sources referred to as the calling
population. The calling population can be limited or 'unlimited'. An example of
a limited calling population may be that of a fixed number of machines that fail
randomly. The arrival process consists of describing how customers arrive to
the system. If Ai is the inter-arrival time between the arrivals of the (i-1)th and
ith customers, we shall denote the mean (or expected) inter-arrival time by E(A)
and call λ = 1/E(A) the arrival frequency (arrival rate). Thus, to understand the
characteristics of arrival we need to know:

o how customers arrive e.g. singly or in groups (batch or bulk arrivals)


o how the arrivals are distributed in time (e.g. what is the probability
distribution of time between successive arrivals (the inter arrival time
distribution))
o whether there is a finite population of customers or (effectively) an
infinite number

The simplest arrival process is one where we have completely regular


arrivals (i.e. the same constant time interval between successive
arrivals). A Poisson stream of arrivals corresponds to arrivals at
random. In a Poisson stream successive customers arrive after intervals
which independently are exponentially distributed. The Poisson stream
is important as it is a convenient mathematical model of many real life
queuing systems and is described by a single parameter - the average
arrival rate. Other important arrival processes are scheduled arrivals;
batch arrivals; and time dependent arrival rates (i.e. the arrival rate
varies according to the time of day).

 Service mechanism:

The service mechanism of a queuing system is specified by the number of


servers (denoted by s), each server having its own queue or a common queue
and the probability distribution of customers' service times. Let Si be the service
time of the ith customer; we shall denote the mean service time of a customer
by E(S) and µ = 1/E(S) the service rate of a server. Here, it is essential to
know:

o a description of the resources needed for service to begin


o how long the service will take (the service time distribution)
o the number of servers available
o whether the servers are in series (each server has a separate queue) or in
parallel (one queue for all servers)
o whether preemption is allowed (a server can stop processing a customer
to deal with another "emergency" customer)

Assuming that the service times for customers are independent and do
not depend upon the arrival process is common. Another common
assumption about service times is that they are exponentially
distributed.

 Queue characteristics (Discipline of queue system):

Discipline of a queuing system means the rule that a server uses to choose the
next customer from the queue (if any) when the server completes the service of
the current customer. Commonly used queue disciplines are:

o FIFO - Customers are served on a first-in first-out basis.

o LIFO - Customers are served in a last-in first-out manner.

o Priority - Customers are served in order of their importance on the basis


of their service requirements.

Here, we need to answer the following questions:

o how, from the set of customers waiting for service, do we choose the
one to be served next (e.g. FIFO (first-in first-out), also known as
FCFS (first-come first-served); LIFO (last-in first-out); randomly; or
based on priority)
o do we have:
 balking (customers deciding not to join the queue if it is too
long)
 reneging (customers leave the queue if they have waited too
long for service)
 jockeying (customers switch between queues if they think they
will get served faster by so doing)
 a queue of finite capacity or (effectively) of infinite capacity

Changing the queue discipline (the rule by which we select the next
customer to be served) can often reduce congestion. Often the queue
discipline "choose the customer with the lowest service time" results in
the smallest value for the time (on average) a customer spends queuing.

 Queue structure:

Queue structure involves the pattern and manner of service provision, as
indicated in the following figures. In the first case, customers form
multiple waiting lines, one in front of each server; in the second,
customers form only one line in front of multiple servers. The models we
use differ for the two structures: the first represents multiple M/M/1
queues and the second an M/M/C queue. The following figures show the difference.

Figure 7.1: Multiple queue structure (each server has its own waiting line)

Customers → [queue] → Server 1 → served customer
Customers → [queue] → Server 2 → served customer
Customers → [queue] → Server 3 → served customer

Figure 7.2: Single queue structure (one waiting line feeds all servers)

                           → Server 1 → served customer
Customers → [single queue] → Server 2 → served customer
                           → Server 3 → served customer

Note here that integral to queuing situations is the idea of uncertainty in, for example,
inter-arrival times and service times. This means that probability and statistics are
needed to analyze queuing situations.

In terms of the analysis of queuing situations the types of questions in which we are
interested are typically concerned with measures of system performance and might
include:

 How long does a customer expect to wait in the queue before they are served,
and how long will they have to wait before the service is complete?
 What is the probability of a customer having to wait longer than a given time
interval before they are served?

 What is the average length of the queue?
 What is the probability that the queue will exceed a certain length?
 What is the expected utilization of the server and the expected time period
during which he will be fully occupied? (Remember, servers cost us money, so
we need to keep them busy.) In fact, if we can assign costs to factors such as
customer waiting time and server idle time, then we can investigate how to
design a system at minimum total cost.

These are questions that need to be answered so that management can evaluate
alternatives in an attempt to control/improve the situation. Some of the problems that
are often investigated in practice are:

 Is it worthwhile to invest effort in reducing the service time?


 How many servers should be employed?
 Should priorities for certain types of customers be introduced?
 Is the waiting area for customers adequate?

In order to get answers to the above questions there are two basic approaches,
according to Beasley:

i. analytic methods or queuing theory (formula based); and


ii. Simulation (computer based).

The reason for there being two approaches (instead of just one) is that analytic
methods are only available for relatively simple queuing systems. Complex queuing
systems are almost always analyzed using simulation (more technically known as
discrete-event simulation).

The simple queuing systems that can be tackled via queuing theory essentially:

 consist of just a single queue; linked systems where customers pass from one
queue to another cannot be tackled via queuing theory

 have distributions for the arrival and service processes that are well defined
(e.g. standard statistical distributions such as Poisson or Normal); systems
where these distributions are derived from observed data, or are time
dependent, are difficult to analyze via queuing theory

 Measures of Performance for Queuing Systems

The queue performance measure formulas depend on the queue model, which in turn
relies on the answers to the questions in the above four areas: arrival pattern, service
pattern, queue discipline, and queue structure.

Activity7.1

Dear learner, take a few minutes to identify the four areas that characterize a queue
system________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________

The categories include arrival process, service process, queue characteristics, and
queue structure

7.2. QUEUING NOTATIONS

Dear learner, let's now discuss in detail the most important notations we use in
queuing.

In queuing, we commonly use the following symbols:

 λ (lambda) to be the mean (or average) number of arrivals per time period, i.e.
the mean arrival rate
 µ (mu) to be the mean (or average) number of customers served per time period, i.e.
the mean service rate

The relationship between λ and µ is represented by the following diagram:

Figure 7.3: The relationship between λ and µ

Note that there is a standard notation system to classify queuing systems as
A/B/C/D/E, where:

 A represents the probability distribution for the arrival process


 B represents the probability distribution for the service process
 C represents the number of channels (servers)
 D represents the maximum number of customers allowed in the queueing
system (either being served or waiting for service), i.e. the queue capacity;
in Kendall's notation it is also represented by K.
 E represents the maximum number of customers in total (the population size)
 F represents the queue discipline (ordering rule), such as FCFS

Common options for A and B are:

 M for a Poisson arrival distribution (exponential inter-arrival distribution) or an
exponential service time distribution
 D for a deterministic or constant value
 E for Erlang distribution
 G for a general distribution (but with a known mean and variance)

C specifies the number of servers; for a single server we write 1. If D and E are
not specified, they are assumed to be infinite. If F is not specified, it is FCFS.

Thus, for example, the [M/M/1]:{infinity/infinity/FCFS}
system is one where arrivals follow a Poisson distribution and service times are
exponential, with a single server, infinite queue length, an infinite calling
population, and FCFS queue discipline. This is the simplest queue system that can be
studied mathematically. This queue system is also simply referred to as the M/M/1
queue.

Activity7.2

Please take time to write the model of this queue system. The queuing system has a
Poisson arrival distribution, an exponential service time distribution and two channels
(two servers) with infinite queue capacity (length) and population. The ordering rule is
First Come First Served (FCFS). Identify the model.

_____________________________________________________________________
____________________________________________________________________

It is represented by M/M/2 model

Dear learner, we will now discuss the basic types of queue models and the formula.

7.3. TYPE OF QUEUE MODELS

Overview

Two common classes - M/M/1 and M/M/C - cover most real world cases, as
arrival and service patterns often follow the models' assumptions. The first M implies
a Poisson arrival pattern, the second M an exponential service pattern, 1 a single
channel, and C multiple channels. As indicated in table 7.1, M/M/1 is represented by
model 1 and is applicable to systems with one server (single channel), and M/M/C is
represented by model 3 and is applicable to multiple servers. Model 2 and model 4
are less commonly used because the former assumes constant (deterministic) service
times and the latter assumes a small, finite calling population.

Today, variants of the queue model applicable to specific situations have flourished,
with alterations to the properties characterizing queue systems: service layout,
service phase, population size, arrival pattern, and service pattern. There are four
variants of the queue model that Chase and Jacobs (2011) summarized, as indicated in
table 7.1.

Table 7.1: Properties of some various models of waiting line

Model  Service layout  Service phase  Source of population  Arrival pattern  Queue discipline  Service pattern
1      Single channel  Single         Infinite              Poisson (M)      FCFS              Exponential (M)
2      Single channel  Single         Infinite              Poisson (M)      FCFS              Constant (D)
3      Multi-channel   Single         Infinite              Poisson (M)      FCFS              Exponential (M)
4      Single channel  Single         Finite (N)            Poisson (M)      FCFS              Exponential (M)

Source: Chase and Jacobs (2009), Operations and supply chain management

Dear learner, note that the basic queue formulas reflect the general Little's rule widely
applicable in Operations Management. The rule implies that the product of the rate of
job arrival and the average job time equals the total number of jobs in the system. Thus,
the product of the customer arrival rate (λ) and the waiting time in the system (Ws) or
in the queue (Wq) gives the number of customers in the system (Ls) or the queue
length (Lq):

Ls = λ·Ws and Lq = λ·Wq

Note that the above formula is derived from Little's Law, which states that the amount
of work in a system is the product of the rate (r) and the duration of work (t).

Let's now discuss the two common types of queue systems and some special queue
models in detail:

1. Single Channel Queuing Theory: the [M/M/1]:{infinity/infinity/FCFS} Queue System

As discussed previously, an M/M/1 model is characterized by:
Arrival process: Poisson distribution
Service mechanism: exponential service times
Queue discipline: first come first served

It is important to see arrival time distribution as well as steady state and memory less
assumptions to further understand this system:

Arrival Time Distribution. This simple model assumes that the number of arrivals
occurring within a given interval of time t follows a Poisson distribution with
parameter λt. This parameter λt is the average number of arrivals in time t, which
is also the variance of the distribution. If n denotes the number of arrivals within a
time interval t, then the probability function p(n) is given by:

p(n) = ((λt)^n / n!)·e^(-λt), n = 0, 1, 2, ...

The arrival process is called Poisson input. Using this Poisson property, the
probability of no (zero) arrival in the interval [0,t] is:

P(zero arrivals in [0,t]) = e^(-λt) = p(0)

Also, P(zero arrivals in [0,t]) = P(next arrival occurs after t) = P(time between two
successive arrivals exceeds t).
From this it can be shown that the probability density function of the inter-arrival
times is given by:

f(t) = λe^(-λt) for t >= 0

This is called the negative exponential distribution with parameter λ, or simply the
exponential distribution. The mean inter-arrival time and standard deviation of this
distribution are both 1/λ, where λ is the arrival rate.

Note: At first glance this distribution seems unrealistic. But it turns out that this is an
extremely robust distribution and approximates closely a large number of arrival and
breakdown patterns in practice.

Property of Stationarity (steady state) and Lack of Memory. A Poisson input implies
that arrivals are independent of one another and of the state of the system. The
probability of an arrival in any interval of time h does not depend on the starting point
of the interval or on the specific history of arrivals preceding it, but only on the
length h. This property is described as memorylessness (lack of memory). Thus
queuing systems with Poisson input can be considered Markovian processes (the
reason for using M in the notation). Markovian refers to the idea that patterns of
arrivals are independent and are captured through averages. In this case both the
inter-arrival times and the service times are assumed to follow the negative exponential
distribution with parameters λ and µ.

Let's now see a simple example on the relation between expected inter-arrival time
and arrival rate and that of expected service time and service rate.

Example: Suppose we have a single server in a shop and customers arrive in the shop
with a Poisson arrival distribution at a mean rate of λ = 0.5 customers per
minute, i.e. on average one customer appears every 1/λ = 1/0.5 = 2 minutes. This
implies that the inter-arrival times have an exponential distribution with an average
inter-arrival time of 2 minutes. The server has an exponential service time distribution
with a mean service rate of 4 customers per minute, i.e. the service rate µ = 4 customers
per minute. The expected service time is 1/4 = 0.25 minutes. As we have Poisson
arrivals, exponential service times, and a single server, we have an M/M/1 queue in
terms of the standard notation.

Let's now get in depth into M/M/1. To this end, It is essential to discuss the steady
state assumption and how essential queue formula developed for M/M/1 model.

Steady state
As M/M/1 is a Markovian process, we are interested only in the long-run behaviour of
the system; that is, the steady state or statistical equilibrium state. It is obvious that if
the arrival rate is higher than the service rate the system will be blocked. Hence, we
consider only the analysis of systems where the arrival rate is less than the service
rate. At any moment in time, the state of the queuing system can be completely
described by the number of units in the system. Thus the state of the process can
assume the values 0, 1, 2, ... (0 means no one is in the queue and the server is idle).
Unlike a discrete-time Markov chain, here a change of state can occur at any time.
However, the process will approach a steady state which is independent of the starting
position or state.

Queuing system is developed under the assumption that the system has reached a
steady state. A steady state is that the system has been running long enough so as to
settle down into some kind of equilibrium position. Naturally real-life systems hardly
ever reach a steady state. Simply put life is not like that. However despite this, simple
queuing formulae can give us some insight into how a system might behave very
quickly.

Steady state is described as a birth and death process. Moving from state 0 (where
there is nothing to serve in the system) to state one (one customer in the system) is a
birth, and the reverse move from state one to state zero is a death. The birth process is
represented by the arrival rate (λ) and the death process is represented by the service
rate (µ), which implies a return to a previous state. For a steady state to hold, the
expected forward state change must equal the expected reverse state change, where the
probability of being in each state is taken into account.

The following diagram represents a steady state where the number of customers in the
system changes from 0 to 1 and back to 0 from 1, from 1 to 2 and back to 1 from 2,
and so on:

     --λ-->     --λ-->     --λ-->
  0          1          2          3
     <--µ--     <--µ--     <--µ--

The forward arrows represent the change in the number of customers from 0 to 1, then
to 2, and to 3 as customers arrive at rate λ; the backward arrows represent the moves
from 3 to 2, then to 1, and to 0 as the arriving customers leave the system after getting
service, at service rate µ.

Before discussing the steady state concept mathematically, let's bear in mind that:
ρ = λ/µ
P(0) = 1 - ρ, or P(0) = 1 - λ/µ
where ρ is the utilization rate: how much the system is being used on average, and
P(0) is the probability that there are 0 customers, i.e. the system is not utilized.

The steady state concept is expressed mathematically as follows:
Change from state 0 to state 1:
P(0)·λ0 = P(1)·µ1  =>  P(1) = (λ0/µ1)·P(0)
Where: P(0) is the probability that there are 0 customers (the system is not
utilized)
λ0 = the arrival rate in state 0
P(1) = the probability that there is 1 customer in the system
µ1 = the service rate in state 1
Change from state 1 to state 2:
P(1)·λ1 = P(2)·µ2
P(2) = (λ1/µ2)·P(1) ; replacing P(1) to link P(2) back to P(0):
P(2) = (λ1/µ2)·(λ0/µ1)·P(0) ; representing λ1 and λ0 with the average λ, and µ2 and
µ1 with the average µ, we get:
P(2) = (λ/µ)²·P(0)
P(3) = (λ/µ)³·P(0)
The general formula then is:
P(i) = (λ/µ)^i·P(0)
Example: If =4/second and =16/second, compute the probability that the system has
no customer, one customer, two customers, and 10 customers respectively.
Solution: use these two formulae
=/
P(ί)= (/)ί P(0)
=/ =>4/16=1/4
P(0)= 1-(/) =>1-(1/4)=3/4

177
P(1)= (/)1 P(0) => (1/4)1*3/4 = 3/16
P(2)= (/)2 P(0) => (1/4)2*3/4 = 3/64
P(10)= (/)10 P(0) => (1/4)3*3/4=3/256
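
The example can be checked with a few lines of Python (a sketch, using the
formulae above):

# Sketch reproducing the example above: lambda_ = 4/sec, mu = 16/sec.
lambda_, mu = 4.0, 16.0
rho = lambda_ / mu                 # utilization = 1/4
p0 = 1 - rho                       # P(0) = 3/4

def p(i):
    """P(i) = rho**i * P(0) for an M/M/1 queue in steady state."""
    return rho ** i * p0

print(rho, p0, p(1), p(2), p(3))
# 0.25, 0.75, 0.1875 (= 3/16), 0.046875 (= 3/64), 0.01171875 (= 3/256)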
Dear learner, let’s now turn to other essential formulae of M/M/1 model.

Finding Expected value of x or E(x) and derived formulae

Expected value is the probability-weighted average, computed by summing the values
multiplied by their corresponding probabilities. To clarify, let's begin with a die.

Activity 7.3: Dear learner, let's calculate the expected value of throwing a die:
_____________________________________________________________________
_____________________________________________________________________
The following discussion contains the answer.

To find the expected value of a die, we need to:

List the values: 1,2,3,4,5,6


Find probabilities of occurrence for values: 1/6 each

Thus,

E(die)=(1/6*1)+ (1/6*2)+ (1/6*3)+ (1/6*4)+ (1/6*5)+ (1/6*6)=3.5

Applying the same idea, the expected number of customers in the M/M/1 system is:

E(x) = Σ (i = 0 to ∞) i·P(i) = Σ i·ρ^i·P(0) ; remember that P(i) = ρ^i·P(0)

From the mathematical series identity, Σ (i = 0 to ∞) i·α^i = α/(1 - α)²

To adjust part of the equation to this identity's form, pull out P(0). Thus,

E(x) = P(0)·ρ/(1 - ρ)² = (1 - ρ)·ρ/(1 - ρ)² = ρ/(1 - ρ) ; remember that P(0) = 1 - (λ/µ) = 1 - ρ

Example: Taking the figures from the previous example for the M/M/1 model, with
λ = 4/second and µ = 16/second, compute:
i. the expected number of customers in front of us, E(x)
ii. the waiting time in the system, W(s)

Solution
i. E(x) = (4/16)/(1 - 4/16) = (1/4)/(3/4) = 1/3

The result implies that on average, we expect 1/3 of a customer in front of us when we
join the queue at any point in time. Linking this result to the previous example's
answers, we can interpret the probability that we expect:
Zero customers in front of us: P(0) = 3/4 = 75%
One customer in front of us: P(1) = 3/16 ≈ 19%
Two customers in front of us: P(2) = 3/64 ≈ 5%
Three customers in front of us: P(3) = 3/256 ≈ 1%
Overall we expect 1/3 of a customer in front of us.

ii. W(s) = (expected customers in front of us + ourselves) × average service time
W(s) = (E(x) + 1) × (1/µ) ; note that E(x) + 1 is the number in the system including us
W(s) = (1/3 + 1) × (1/16 second) = (4/3) × (1/16) sec
W(s) = 1/12 seconds
Activity 7.4
Dear learner, please compute the following for an M/M/1 queue model with
λ = 2/second and µ = 3/second, and interpret the results.
ρ = λ/µ
P(0) =
P(1) =
P(20) =

Hint: use the formulae given in the previous discussion

Performance measures of the M/M/1 model

A broader list of formulae to measure the nature and performance of an M/M/1 model
is given below:
Let,
Wq be the delay in queue (waiting time) of a customer: Wq = Lq/λ
E(S) be the service time of a customer, i.e. 1/µ
Ws be the waiting time in the system of a customer: Ws = Wq + 1/µ
Lq be the number of customers in the queue at time t: Lq = λ·Wq
Ls be the number of customers in the system at time t: Ls = Lq + number of
customers being served, i.e. Ls = Lq + λ/µ, and Ls = λ·Ws

In short, the following formulae are applicable for the M/M/1 model:

Utilization factor or traffic intensity: ρ = λ/µ
Lq = λ²/(µ(µ - λ)) = ρ²/(1 - ρ)
Ls = λ/(µ - λ)
Wq = λ/(µ(µ - λ))
Ws = 1/(µ - λ)
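
Dear learner, these formulae can be collected into a small Python helper (a sketch;
the function name mm1 is ours), which also reproduces the earlier example:

# Sketch: the M/M/1 formulae above collected into one helper.
def mm1(lambda_, mu):
    assert lambda_ < mu, "steady state requires lambda < mu"
    rho = lambda_ / mu
    Lq = lambda_ ** 2 / (mu * (mu - lambda_))
    Ls = lambda_ / (mu - lambda_)
    Wq = Lq / lambda_
    Ws = Wq + 1 / mu
    return {"rho": rho, "Lq": Lq, "Ls": Ls, "Wq": Wq, "Ws": Ws}

print(mm1(4, 16))   # rho = 0.25, Ls = 1/3, Ws = 1/12 s, matching the example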

Activity 7.5
Dear learner, if λ and µ are 2 per minute and 3 per minute, respectively, for an M/M/1
model, compute the following (note that steady state requires λ < µ):
ρ
Lq
Wq
Ws
Ls
Hint: use the formulae given in the previous discussion

2. M/M/C model
The M/M/C model is a multi-server system where customer arrivals follow a Poisson
process and service times are exponential. When a customer enters an empty system,
s/he gets service at once. If the system is non-empty, the incoming customer joins the
queue. The M/M/C model is a Poisson birth-death process: a birth occurs when a
customer arrives and a death occurs when a customer departs. Both are modeled as
memoryless Markov processes; in M/M/C, M refers to this memoryless/Markov
feature of the arrivals and service. Let's note the standard M/M/C formulae, with
r = λ/µ and ρ = λ/(Cµ):

P(0) = 1 / [ Σ (n = 0 to C-1) r^n/n! + r^C/(C!(1 - ρ)) ]
Lq = P(0)·r^C·ρ/(C!(1 - ρ)²)
Wq = Lq/λ ; Ws = Wq + 1/µ ; Ls = λ·Ws

Another important formula for M/M/C is Pw, the probability of waiting in line:
Pw = Lq(Sµ/λ - 1)
where S = the number of identical servers.
Activity 7.6
Dear learner, if λ and µ are 3 per minute and 2 per minute, respectively, for an M/M/2
model, compute the following:
ρ
Lq
Wq
Ws
Ls
Hint: use the formulae given in the previous discussion
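
For the multi-server case, the standard M/M/C formulae above can likewise be
sketched in Python (the function name mmc is ours); you can use it to check your
answers to Activity 7.6:

# Sketch: standard M/M/c measures (Erlang C), usable for Activity 7.6
# with lambda_ = 3, mu = 2, c = 2 servers.
from math import factorial

def mmc(lambda_, mu, c):
    r = lambda_ / mu
    rho = r / c
    assert rho < 1, "steady state requires lambda < c*mu"
    p0 = 1 / (sum(r ** n / factorial(n) for n in range(c))
              + r ** c / (factorial(c) * (1 - rho)))
    Lq = p0 * r ** c * rho / (factorial(c) * (1 - rho) ** 2)
    Wq = Lq / lambda_
    Ws = Wq + 1 / mu
    Ls = lambda_ * Ws
    return {"rho": rho, "Lq": Lq, "Wq": Wq, "Ws": Ws, "Ls": Ls}

print(mmc(3, 2, 2))   # rho = 0.75, Lq ~ 1.93, Ls ~ 3.43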

3. Special models
There are a number of special types of models, listed below:

i. Erlang model
Another powerful model is the Erlang C model (Cachon and Terwiesch, 2013). This
model allows defining arrival and service rates over different time spans.
ii. Finite Markovian models
In this model, the space given for the queue is limited and hence defined by N. It can
be applied to both M/M/1 and M/M/C systems.
ii(a) M/M/1/N
The M/M/1/N queue is a single server queue with a buffer of size N. This queue has
applications in telecommunications, as well as in biology when a population has a
capacity limit.
Steady State Distribution
We again use the parameters from the M/M/1 queue, with:
λi = λ for 0 <= i < N
λi = 0 for i >= N ; this implies that the system does not accept more than N customers
μi = μ for 1 <= i <= N

The state probabilities in equilibrium are given by (for ρ = λ/µ ≠ 1):

P(n) = (1 - ρ)·ρ^n/(1 - ρ^(N+1)), n = 0, 1, ..., N
Measures of Effectiveness for M/M/1/N (with ρ = λ/µ ≠ 1 and effective arrival rate
λ′ = λ(1 - P(N)))

Measure                                          Expression
Average number of customers in the system (Ls)   ρ/(1 - ρ) - (N + 1)ρ^(N+1)/(1 - ρ^(N+1))
Average number of customers in queue (Lq)        Ls - (1 - P(0))
Expected waiting time in system (W)              Ls/λ′
Expected waiting time in queue (Wq)              Lq/λ′
Utilization                                      ρ
Blocking probability (PB)                        P(N) = (1 - ρ)ρ^N/(1 - ρ^(N+1))
Throughput                                       λ′ = λ(1 - P(N))
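
A short Python sketch of the M/M/1/N measures above (the function name mm1n is
ours):

# Sketch: M/M/1/N steady-state probabilities and measures (rho != 1),
# following the table above; the effective arrival rate is lambda*(1 - PN).
def mm1n(lambda_, mu, N):
    rho = lambda_ / mu
    pn = [(1 - rho) * rho ** n / (1 - rho ** (N + 1)) for n in range(N + 1)]
    Ls = sum(n * p for n, p in enumerate(pn))
    PB = pn[N]                     # blocking probability
    lam_eff = lambda_ * (1 - PB)   # throughput
    Lq = Ls - (1 - pn[0])
    W = Ls / lam_eff
    Wq = Lq / lam_eff
    return {"Ls": Ls, "Lq": Lq, "W": W, "Wq": Wq, "PB": PB, "throughput": lam_eff}

print(mm1n(4, 16, 5))   # a small buffer barely blocks when rho = 0.25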

ii(b) Finite M/M/C model: M/M/c/N

The M/M/c/N queue is a multi server queue with a buffer of size N.
Steady state distribution
μi = iμ for i <= c
μi = cμ for c <= i <= N
λi = λ for all i
For this model the steady state probabilities are given by:

P(n) = P(0)·r^n/n!               for 0 <= n <= c
P(n) = P(0)·r^n/(c!·c^(n-c))     for c <= n <= N

where r = λ/µ, ρ = r/c, and P(0) is obtained from the normalization condition
Σ P(n) = 1.

Measures of Effectiveness for M/M/C/N (with r = λ/µ, ρ = r/c, and effective arrival
rate λ′ = λ(1 - P(N)))

Measure                                          Expression
Average number of customers in the queue (Lq)    [P(0)·r^c·ρ/(c!(1 - ρ)²)]·[1 - ρ^(N-c+1) - (N - c + 1)(1 - ρ)ρ^(N-c)], ρ ≠ 1
Average number of customers in the system (Ls)   Lq + r(1 - P(N))
Expected waiting time in system (W)              Ls/λ′
Expected waiting time in queue (Wq)              Lq/λ′
Utilization                                      ρ
Blocking probability (PB)                        P(N)
Throughput                                       λ′ = λ(1 - P(N))

iii. No queue system M/M/c/c

This is a special case of the truncated queue M/M/c/N for which N = c, i.e. where no
queue is allowed to form. This is also known as the Erlang loss system. It plays an
important role in telecommunication.
Steady State Distribution
For this model the steady state probabilities are given by:

P(n) = (r^n/n!) / [ Σ (k = 0 to c) r^k/k! ], n = 0, 1, ..., c

In the case of the M/M/c/c model we define the following performance measures:
Measure                     Expression
Blocking probability (PB)   P(c) = (r^c/c!) / [ Σ (k = 0 to c) r^k/k! ]
Throughput                  λ(1 - PB)

where r = λ/µ.
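
In practice the Erlang loss blocking probability is usually computed with a
numerically stable recursion rather than the factorial formula directly. A minimal
Python sketch (assuming the standard recursion B(0) = 1,
B(k) = r·B(k-1)/(k + r·B(k-1))):

# Sketch: Erlang loss (M/M/c/c) blocking probability via the standard
# Erlang B recursion.
def erlang_b(r, c):
    """r = lambda/mu (offered load), c = number of servers."""
    b = 1.0
    for k in range(1, c + 1):
        b = r * b / (k + r * b)
    return b

r = 3 / 2                      # e.g. lambda = 3, mu = 2
PB = erlang_b(r, 2)            # blocking with c = 2 servers
print(PB, 3 * (1 - PB))        # blocking probability and throughput lambda*(1-PB)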

iv. Queue length formula (Lq) for M/M/1 and special models
For the M/M/1 queue, we can prove that:

Lq = ρ²/(1 - ρ)

For the M/G/1 queue, we can prove that:

Lq = ρ²(1 + Cs²)/(2(1 - ρ))

The above is called the Pollaczek-Khinchine formula (named after its inventors and
discovered in the 1930s).

For the G/G/1 queue, we do not have an exact result. The following approximation
(derived by Marchal in 1976) is popular; one common statement of it is:

Lq ≈ [ρ²(1 + Cs²)(Ca² + ρ²Cs²)] / [2(1 - ρ)(1 + ρ²Cs²)]

Notice that if the mean rate of arrival is λ, and σa² denotes the variance of the
inter-arrival time, then the squared coefficient of variation of arrivals is:

Ca² = λ²σa²

Similarly, if µ denotes the service rate and σs² denotes the variance of the service
time, then:

Cs² = µ²σs²

Dear learner, so far we have seen one-server and two-server models and their variants.
Let's now see which would bring better results.

Faster servers or more servers?

Consider the situation we had above - which would you prefer?

 one server working twice as fast (M/M/1); or

 two servers each working at the original rate (M/M/2)?

The simple answer is that we can analyze this using a queuing package. For the first
situation, one server working twice as fast corresponds to a service rate of µ = 8
customers per minute. The output for this situation is shown below. Compare the two
outputs - which option do you prefer? ____

Computer results for the two options are given below:

                                            One server twice as fast   Two servers, original rate
Average time in the system                  0.1333                     0.2510
(waiting and being served)
Average time in the queue                   0.0083                     0.0010
Probability of having to wait for service   6.25%                      0.7353%

It can be seen that with one server working twice as fast customers spend less time in
the system on average, but have to wait longer for service and also have a higher
probability of having to wait for service.
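
The comparison can be reproduced with a self-contained Python sketch (illustrative;
it assumes λ = 0.5 customers per minute from the earlier shop example):

# Sketch reproducing the comparison above: lambda_ = 0.5 customers/minute.
# One fast server is M/M/1 with mu = 8; two original-rate servers form
# M/M/2 with mu = 4 each.
lam = 0.5

# M/M/1, mu = 8
mu1 = 8.0
Wq1 = lam / (mu1 * (mu1 - lam))
Ws1 = Wq1 + 1 / mu1

# M/M/2, mu = 4 per server
mu2, c = 4.0, 2
r, rho = lam / mu2, lam / (c * mu2)
p0 = 1 / (1 + r + r ** 2 / (2 * (1 - rho)))
Lq2 = p0 * r ** 2 * rho / (2 * (1 - rho) ** 2)
Wq2 = Lq2 / lam
Ws2 = Wq2 + 1 / mu2

print(round(Ws1, 4), round(Wq1, 4))   # 0.1333 0.0083 -- faster single server
print(round(Ws2, 4), round(Wq2, 4))   # 0.2510 0.0010 -- two slower servers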

Let's now focus on the application of queue models in supply chain management.

7.4. APPLICATION OF QUEUE MODELS IN SUPPLY CHAIN
MANAGEMENT

Supply chains are formed out of complex interactions amongst several enterprises
whose aim is to produce and transport goods so that customer-desired products are
sold at various retail outlets. The supply chain process (SCP) encompasses the full
range of intra-company and inter-company activities, beginning with raw material
procurement by independent suppliers, through manufacturing and distribution, and
concluding with successful delivery of the product to the retailer or at times to the
customer.

One can succinctly define supply chain management (SCM) as the coordination or
integration of the activities/processes involved in procuring, producing, delivering and
maintaining products/services for customers located in geographically different
places. Traditionally, marketing, distribution, planning, manufacturing and purchasing
activities are performed independently, each with its own functional objectives. SCM
is a process-oriented approach to coordinating all the functional units involved in the
order-to-delivery process.

The SCP transits several organizations, and each time a transition is made, logistics is
involved. Also, since each of the organizations is under independent control, there are
interfaces between organizations, and material and information flows depend on how
these interfaces are managed. We define interfaces as the procedures and vehicles for
transporting information and materials across functions or organizations, such as
negotiations, approvals (so-called paperwork), decision making, and finally inspection
of components/assemblies. For example, the interface between a supplier and a
manufacturer involves procurement decisions, price and delivery negotiations at the
strategic level, and the actual order processing and delivery at the operational level.

The coordination of the Supply Chain Network (SCN) plays a big role in the overall
functioning of the SCP. In most cases, there is an integrator for the network, who
could be an original equipment manufacturer (OEM). Long-term issues in SCP involve
the location of production and inventory facilities, the choice of alliance partners such
as suppliers and distributors, and the logistics chain. The long-term decisions also
include choosing make-to-order or make-to-stock policies, the degree of vertical
integration, capacity decisions of various plants, the amount of flexibility in each of
the subsystems, etc. The operational issues in SCP include cycle time and average
inventory computations, dynamic scheduling, inventory replenishments, and the like.
Recently, there have been efforts to define and determine non-traditional performance
measures for SCP such as lead time, quality, reliable delivery, customer service, rapid
product introduction, and flexible capacity. The speed of the SCP determines the
delivery time in make-to-order environments. Supply chain costs and time depend on
all the constituents of the supply chain. The variability of the lead times and defect
rates sums up to make up the total chain variability. Cycle time monitoring in supply
chain networks helps in reducing inventories, establishing good supplier relationships,
reducing setup times, etc. In fact, the traditional way of functional performance
measurement presents only a partial picture of the process.

Queuing models are helpful to analyze and evaluate supply chain network
performance. Queuing-network models are usually used for performance evaluation of
multistage discrete manufacturing systems, whereas optimizing inventory control in a
network system is commonly associated with multi-echelon inventory models.
Generally, in a supply chain, most of the parameters are not deterministic; for this
reason, some researchers have used queuing theory to construct mathematical models.
In this area, some have prepared models for stochastic demand and some for
stochastic lead time. Parlar (1996) presented an inventory model which was combined
with queuing theory to consider stochastic demand and lead time parameters. Hosseini
et al. (2013) considered stochastic lead time and developed a multi-objective
pricing-inventory model for a retailer, where their main objective was to maximize the
retailer's profit and service level. Seyedhoseini et al. (2014) considered Poisson
demand for customers in a cross-docking problem and prepared a stochastic model.
For better modeling of stochastic environments, some researchers used queuing
theory; for example, Ha (1997) considered Poisson demand and exponential
production times for a single-item make-to-stock production system. He proposed an
M/M/1/S queuing system for modeling the system. Ettl et al. (2000) developed a
network-of-inventory-queues model to analyze complex supply chains. Each stocking
location is modeled as an M^X/G/∞ inventory queue operating under a base-stock
control policy. By considering the possible delay caused by stock-outs and modifying
the lead time accordingly, they derived analytical expressions for performance
measures and developed a constrained nonlinear optimization model. Arda and Hennet
(2006) analyzed inventory control of a multi-supplier strategy in a two-level supply
chain. They considered random arrivals for customers and random delivery times for
suppliers, and represented their system as a queuing network.

Isotupa (2006) considered a lost-sales (s, Q) inventory system with two customer
groups and illustrated the model by Markov processes. Babai et al. (2010) considered
stochastic demand and lead time and analyzed a single-echelon, single-item inventory
system by means of queuing theory. These studies show the effectiveness of queuing
theory in inventory problems. Toktaş-Palut and Ülengin (2011) coordinated the
inventory policies in a two-stage decentralized supply chain, where each supplier is
considered as an M/M/1 queue and the manufacturer is assumed to be a GI/M/1 queue.
Alimardani et al. (2013) applied a continuous-review (S−1, S) policy for inventory
control and proposed a bi-product, three-echelon supply chain which is modeled as an
M/M/1 queue for each type of product offered through the developed network. In
addition, to show the performance of the proposed bi-product supply chain, they also
considered a network including two M/M/1 queues for each type of product.

Dear learner, let's now see two examples of how supply chain and inventory processes
are modeled, drawn from the works of Srinivasa & Viswanadham.

First, Srinivasa & Viswanadham modeled a single-product, multiple-competitor chain.
Assume there is a steady demand for products of a type at the three warehouses W1,
W2, and W3. There are three different manufacturers, M1, M2, and M3, for this
product, who supply to the same three markets. There are two common suppliers, S1
and S2, who in turn receive materials from early suppliers as represented by D. The
aggregation of early suppliers can be represented by an arrival rate λ, and the
proportion of arrival distributions from the suppliers (S) to manufacturers (M) and
then to warehouses (W) is modeled by the following generalized queuing network
(GQN) model:

[Figure: First-phase GQN model]

[Figure: Second-phase GQN model]

Srinivasa & Viswanadham also developed a multistage inventory queue model and a
job-queue decomposition approach that evaluates the performance of serial
manufacturing and supply systems with inventory control at every stage. In their work
they presented an efficient procedure to minimize the overall inventory in the system
while meeting the required service level. The authors used a relatively simple
technique that is claimed to deliver accurate performance estimates and managerial
insights into related design and control issues. As the core of their work, the authors
used X/Y/Z/m/R to denote a base-stock inventory queue, in which X signifies the
external demand process, Y the service process, Z the material supply process, m the
number of parallel servers, and R the base-stock level at the output store. Similar to a
standard queue, the fundamental process here is the job queue N.
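As a rough illustration of how such a base-stock inventory queue supports design decisions, the Python sketch below assumes the job queue N behaves as an M/M/1 queue, so that P(N ≥ R) = ρ^R and the fill rate is 1 − ρ^R; it then finds the smallest base-stock level R meeting a target service level. This is a simplification for intuition, not the authors' full decomposition procedure.

```python
def min_base_stock(rho: float, target_fill: float) -> int:
    """Smallest base-stock level R with fill rate 1 - rho**R >= target_fill,
    assuming the job queue N is M/M/1 (rho < 1) so that P(N >= R) = rho**R."""
    r = 0
    while 1 - rho**r < target_fill:
        r += 1
    return r

# Illustrative values (assumed): 80% utilization, 95% target service level
rho, target = 0.8, 0.95
r_star = min_base_stock(rho, target)
print(f"Smallest R meeting a {target:.0%} fill rate: {r_star}")   # 14
print(f"Fill rate at R* = {1 - rho**r_star:.4f}, "
      f"at R* - 1 = {1 - rho**(r_star - 1):.4f}")  # 0.9560 vs 0.9450
```

The sketch captures the trade-off the authors exploit: raising R improves service but adds inventory at the output store, so the design problem is to find the smallest R (per stage) that still meets the required service level.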

CHAPTER SUMMARY
Dear learner, in this unit we discussed the concept of queue theory along with the
standard notations and models. More to the point, we saw the M/M/1 and M/M/2
models and their variants. Along the way, we discussed key concepts such as steady
state, lack of memory, and the Markov process. Finally, we discussed the applications
of queue models in supply chain management.
END CHAPTER QUESTIONS
1. Discuss the lack-of-memory (memorylessness) and Markovian process concepts
2. Discuss the steady-state concept
3. Describe the M/D/C/K/200/LCFS and M/D/2/10/200/LCFS models
4. If λ and µ are given as 2 per minute and 4 per minute, respectively, for
M/M/1 and M/M/2 models, compute ρ, Lq, Wq, Ws, and Ls for each case.

5. Discuss the areas of application of queue models in supply chain management

REFERENCES
Chase, R., N. Aquilano, and R. Jacobs (2001). Operations Management for
Competitive Advantage. 9th ed. Boston: McGraw-Hill.

Michael Bacharach and Susan Hurley. Foundations of Decision Theory: Issues and
Advances. UK: Blackwell Publishers, 1991.

Von Neumann, J. and Morgenstern, O., Theory of Games and Economic Behavior (2nd
ed.). Princeton, NJ: Princeton Univ. Press, 1947.

John S. Hammond III. "Better Decisions with Preference Theory". Harvard Business
Review, 45, 1967, pp. 124-147.

Paul R. Kleindorfer, Howard C. Kunreuther and Paul J.H. Schoemaker. Decision
Sciences. UK: Cambridge University Press, 1993.

Simon, Herbert A., "A Behavioral Model of Rational Choice". Quarterly Journal of
Economics, Vol. 69, 1955, pp. 99-118.

Hammond, Kenneth R., Hursch, C.J., and Todd, F.J., "Analyzing the Components of
Clinical Inference". Psychological Review, 71, 1964.

Tversky, Amos. "Intransitivity of Preferences". Psychological Review, 76, 1969,
pp. 31-48.

Raiffa, H., Decision Analysis. Reading: Addison-Wesley, 1967.

Fishburn, Peter C., Utility Theory for Decision Making. New York: John Wiley, 1970.

Richard A. Guzzo. Improving Group Decision Making in Organizations: Approaches
from Theory and Research. London: Academic Press, Inc., 1982.

Huan, Samuel, Sheoran, Sunil, & Wang, Ge. A research and analysis of supply chain
operations reference (SCOR) model. Supply Chain Management: An International
Journal, 9(1), 2004.

Arda Y, Hennet J (2006) Inventory control in a multi-supplier system. Int J Prod Econ
104:249–259.

AriaNezhad MG, Makuie A, Khayatmoghadam S (2013) Developing and solving two-
echelon inventory system for perishable items in a supply chain: case study (Mashhad
Behrouz Company). J Ind Eng Int 9:39.

Ha AY (1997) Stock rationing policy for a make-to-stock production system with two
priority classes and backordering. Nav Res Logist 457–472.

Hennet J, Arda Y (2008) Supply chain coordination: a game-theory approach. Eng
Appl Artif Intell 21(3):399–405.

Isotupa KPS (2006) An (s, Q) Markovian inventory system with lost sales and two
demand classes. Math Comput Model 687–694.

Seyedhoseini SM, Rashid R, Teimoury E (2014) Developing a cross-docking network
design model under uncertain environment. J Ind Eng Int.

Alimardani M, Jolai F, Rafiei H (2013) Bi-product inventory planning in a three-
echelon supply chain with backordering, Poisson demand, and limited warehouse
space. J Ind Eng Int 9:22.

Babai MZ, Jemai Z, Dallery Y (2010) Analysis of order-up-to-level inventory systems
with compound Poisson demand. Eur J Oper Res 552–558.

Toktaş-Palut P, Ülengin F (2011) Coordination in a two-stage capacitated supply
chain with multiple suppliers. Eur J Oper Res 212(1):43–53.

Hosseini Z, Yaghin RG, Esmaeili M (2013) A multiple objective approach for joint
ordering and pricing planning problem with stochastic lead times. J Ind Eng Int 9:29.

Parlar M (1996) Continuous-review inventory problem with random supply
interruptions. Eur J Oper Res 366–385.

Unpublished and Internet sources

Srinivasa R. & Viswanadham N., Performance Analysis of Supply Chains Using
Queuing Models, National University of Singapore and Enterprise Component
Technology Incorporation of India.

SCOR Model, Supply Chain Council, October 7, 2004.

Supply Chain Operations Reference Model. Supply Chain Council. October 7, 2004.

Bauhof, Ned. SCOR Model: Supply Chain Operations Reference Model. Beverage
Industry. August.

http://ocw.mit.edu/courses/economics/economic-applications-of-game-theory

www.supply-chain.org

http://www.sourcetrix.com/docs/Whitepaper-SC_decision_making.pdf

http://mthink.com/article/modeling-approaches-for-supply-chain-decisions/