
Unit 1 Notes: Introduction to the Course!

I look forward to working with all of you in the coming units. Please feel free to post any questions in
the discussion forum or send me an email. My commitment is to respond to your questions or
concerns within 24 hours.

Before we get into this unit's materials, I would like to bring to your attention the work schedule for
this course. In every unit, with the exception of Unit 11, we need to complete the Individual Problem or case study and participate in a class discussion. There is also a Final Exam. Please refer to the unit outline for a schedule of these assignments. The text we use is an excellent one; therefore, please be diligent in completing the unit readings.

Data is used in many different business functions, including

 Finance and Accounting – Data serves as the basic element from which a balance sheet is created and from which costs and profits are determined for a company or a business unit;
 Marketing – Data is used to determine advertising impact: how, when, and
where coupons and sales promotions are used by customers. It is also used
in market research to determine customer satisfaction and where new
product interests might lie;
 Human Resources – Data is used to determine employee turnover,
attendance, success of orientation programs, and the effectiveness of
training programs; and
 Strategic Planning – Data is used to determine such key decisions as
potential new markets to penetrate, and where to build manufacturing and
warehouse facilities.

Developing a general understanding of the management science/operations research approach to decision making is important. We need to understand that managerial problem situations have both quantitative and qualitative considerations, both of which are important in the decision-making process. In this course, we will learn about decision-making models in terms of what they are and why they are useful. We will place emphasis on mathematical models and learn the step-by-step procedures used to make key decisions, including basic models of cost, revenue, and profit, and how to compute the break-even point. We will also learn to use computer software packages such as Microsoft Excel as tools in the quantitative approach to decision making.

In this unit, we will focus our discussion on decisions. What are decisions and how do we know
when we have made one? Is a mental commitment or intention enough to be defined as a decision, or
do we actually have to commit resources before we can say a decision has been made? When can
we say that we have made a good decision? Is it based on the outcome of the decision, or is there
more to it? Can we make a good decision and still have a bad or negative outcome? These are
fundamental concepts that are not always given much thought, but that can have significant
influence on the decision-making environment.
The Individual Problems will concentrate on total cost, revenue, and profit in a business environment and the mathematical ways to determine them. We should use Excel to perform most, if not all, of the calculations and to construct any required graphs.
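Although the assignment work is done in Excel, a short sketch in a general-purpose language can be a useful cross-check. The following Python sketch shows the basic cost, revenue, and profit model and the break-even calculation; the fixed cost, unit cost, and selling price figures are invented for illustration only.

```python
# Basic cost-volume-profit model and break-even point.
# The numbers below are illustrative only, not taken from the course materials.

fixed_cost = 50_000.0      # costs incurred regardless of volume
variable_cost = 12.0       # cost per unit produced
selling_price = 20.0       # revenue per unit sold

def total_cost(volume):
    return fixed_cost + variable_cost * volume

def total_revenue(volume):
    return selling_price * volume

def profit(volume):
    return total_revenue(volume) - total_cost(volume)

# Break-even point: the volume at which revenue equals total cost,
# i.e. fixed_cost / (selling_price - variable_cost).
break_even_volume = fixed_cost / (selling_price - variable_cost)

print(f"Break-even volume: {break_even_volume:.0f} units")
print(f"Profit at 10,000 units: {profit(10_000):,.2f}")
```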

Unit 2 Notes: Time Series Analysis and Forecasting

The long-run success of an organization is directly related to how well management is able to plan
for future operations, and a key component of operations planning is reliable forecasting. This unit
will help us understand how time series analysis can be used to explain patterns or behavior in
historical time series data, and to use these patterns to forecast probable values for the time series
in the future. We will study quantitative forecasting techniques such as moving averages and
exponential smoothing and learn how regression can be used to build forecast models. Forecast accuracy will be investigated by introducing various measures of error, including Mean Absolute Deviation (MAD), Mean Square Error (MSE), and Mean Absolute Percentage Error (MAPE).

This unit’s readings start by introducing key time series patterns that are frequently encountered in
real-life data, including Horizontal, Trend, Seasonal, and Cyclical patterns. It is important to select a forecasting method that is consistent with the observed pattern; otherwise, the resulting forecast values
will not be reliable and will result in misleading information being used in the decision-making
process.

Methods for measuring forecast accuracy will be introduced, and the key differences between these
methods will be discussed. The first measure of error we will investigate is Mean Absolute Deviation
(MAD), defined as the absolute difference between the actual value and the forecast, averaged over
a range of forecasted values. It is a robust measure of forecast accuracy and is relatively insensitive to outliers. Another measure that avoids the cancellation of positive and negative errors is Mean Square Error (MSE), defined as the squared difference between the actual value and the forecast, averaged over the forecasted values. Lastly, the Mean Absolute Percentage Error (MAPE) is introduced, which is the average of the absolute errors divided by the actual observation values. Since the deviations are divided by actual observations, this measure is dimensionless (it does not have any units of measure) and relative (measured as a ratio of two values), whereas the previous two depend on the units of the observations and are absolute in that sense.
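To make the three error measures concrete, here is a minimal Python sketch of how MAD, MSE, and MAPE are computed; the actual and forecast values are invented, and in the assignments you would normally carry out these calculations in Excel.

```python
# Forecast error measures: MAD, MSE, and MAPE.
# The actual and forecast values below are invented for illustration.

actuals   = [110, 115, 125, 120, 130, 128]
forecasts = [108, 118, 122, 124, 127, 131]

errors = [a - f for a, f in zip(actuals, forecasts)]
n = len(errors)

mad  = sum(abs(e) for e in errors) / n                              # Mean Absolute Deviation
mse  = sum(e ** 2 for e in errors) / n                              # Mean Square Error
mape = sum(abs(e) / a for e, a in zip(errors, actuals)) / n * 100   # Mean Absolute Percentage Error

print(f"MAD  = {mad:.2f} (same units as the data)")
print(f"MSE  = {mse:.2f} (squared units)")
print(f"MAPE = {mape:.2f}% (dimensionless)")
```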

Two common and important forecasting methodologies will then be introduced: moving average
and exponential smoothing models. We will see that these methods are in fact related, and that the
exponential smoothing model can be considered a special case of the weighted moving average model in which the weights exponentially diminish the importance of historical values the further back into the past they lie.
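The sketch below illustrates both methods, assuming a three-period moving average and a smoothing constant of 0.3; the demand series and both parameter choices are invented for illustration.

```python
# Moving average and exponential smoothing forecasts.
# The demand series, window size, and smoothing constant are illustrative choices.

demand = [120, 132, 128, 140, 136, 148, 150]

def moving_average_forecast(series, k=3):
    """Forecast the next period as the average of the last k observations."""
    return sum(series[-k:]) / k

def exponential_smoothing_forecast(series, alpha=0.3):
    """Forecast the next period by exponentially weighting past observations."""
    forecast = series[0]                 # initialize with the first observation
    for actual in series[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

print(f"3-period moving average forecast: {moving_average_forecast(demand):.1f}")
print(f"Exponential smoothing forecast:   {exponential_smoothing_forecast(demand):.1f}")
```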

It is important to understand that a key assumption in time series forecasting is that all relevant
information for determining future values is embedded within past values. That is, the historical time
series data is the only information used to generate the forecast model, and any information outside
of that is not included. One pitfall of simple time series forecast models is that any new ideas or
innovations introduced into the marketplace will not impact the forecasting model until the new
information is embedded in observed time series data.
The mathematical concepts of time series modeling are relatively straightforward to implement and calculate; however, using these models in practice can often be tedious. Tools such as
Microsoft Excel provide functionality and toolkits relevant to forecasting and are often used to help
with calculations and analysis. It is to the benefit of all business professionals that they become
familiar with the functionality in Microsoft Excel, or equivalent tools, with respect to data analysis
and forecasting. I would encourage you to make the most of these tools for this and future
assignments. Devoting the time necessary to mastering some of these toolkits will only benefit you
as you move forward in your career.

While the Individual Problems this unit will focus on mathematical and technical skills required for
time series forecasting, the Discussion will focus on the reason for forecasting. We will investigate
how organizations use forecasting in the planning and decision-making processes. We will also
discuss important factors to consider when developing or using a forecast, including what
potential problems or areas for failure may arise.

Unit 3 Notes: Introduction to Linear Programming

Linear programming is a problem-solving technique used to make decisions when some aspect of
the business operations has limits or constraints. In this Unit, we will look at the types of problems
that have been solved using this method and learn how to develop linear programming models for
simple problems. We will learn how to solve two variable linear programming models using the
graphical solution method.

It is necessary to understand the importance of extreme points in obtaining the optimal solution, the
use and interpretation of slack and surplus variables, the ability to interpret the computer solution of
a linear programming problem, and to understand how alternative optimal solutions, infeasibility, and
unboundedness can occur in linear programming problems. Modeling deals with the concepts of optimization, objective function, optimal solution, constraint, constraint function, feasible solution, binding constraint, and slack, all of which are involved in business decision-making projects. They are explained with reference to actual scenarios to provide a better understanding of the decision-making process. For accurate decision-making, we must clearly identify the process of selecting values of decision variables that minimize or maximize some quantity of interest (optimization), the quantity we seek to minimize or maximize (the objective function), the limitations or requirements that the decision variables must satisfy (the constraints), and a solution that satisfies all constraints of the problem (a feasible solution).

The Individual Problems this unit will focus on solving two variable linear programming problems
using the graphical method. All the problems on this assignment should be solved using this
method, and not using more sophisticated tools such as Excel Solver (we will introduce this tool in a
later unit). It is important to develop an understanding of concepts such as constraints, feasible
regions and extreme points, and illustrating these concepts graphically is a great way to develop
such intuition.
It will not be sufficient to simply write down the answers or solutions to this unit’s problems. For a
graphical solution to be complete you must provide, at a minimum, the LP model in mathematical
format, a plot of the constraints, and intermediate work or explanation of how you determined the
location of the optimal solution.
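Your graphical solutions for this unit must be worked by hand, but the following Python sketch may help illustrate the extreme-point idea: it enumerates the intersections of the constraint boundaries of a small invented model (maximize 5x + 4y subject to two resource constraints), keeps the feasible ones, and picks the best. The model and its coefficients are assumptions made up for this example.

```python
# Illustration of the extreme-point principle for a two-variable LP.
# Example model (invented): maximize 5x + 4y
# subject to  6x + 4y <= 24   (resource 1)
#             1x + 2y <= 6    (resource 2)
#             x >= 0, y >= 0
from itertools import combinations

import numpy as np

# Each constraint boundary is written as a*x + b*y = c (including the axes x = 0, y = 0).
boundaries = [
    (6, 4, 24),   # resource 1
    (1, 2, 6),    # resource 2
    (1, 0, 0),    # x = 0
    (0, 1, 0),    # y = 0
]

def feasible(x, y, tol=1e-9):
    return (6 * x + 4 * y <= 24 + tol and x + 2 * y <= 6 + tol
            and x >= -tol and y >= -tol)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(boundaries, 2):
    A = np.array([[a1, b1], [a2, b2]], dtype=float)
    if abs(np.linalg.det(A)) < 1e-12:
        continue                          # parallel boundaries: no intersection
    x, y = np.linalg.solve(A, [c1, c2])   # candidate extreme point
    if feasible(x, y):
        value = 5 * x + 4 * y             # objective function
        if best is None or value > best[0]:
            best = (value, x, y)

print(f"Optimal extreme point: x = {best[1]:.2f}, y = {best[2]:.2f}, value = {best[0]:.2f}")
```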

Unit 4 Notes: Sensitivity Analysis and Solution Interpretation

Last unit we started our study of linear programming by solving simple profit maximization and cost
minimization problems. We assumed that factors like sale prices, unit costs, consumer demand, and
operational limits were known and not subject to uncertainty. Unfortunately, the world does not work
this way. Often when setting up our models we make a “best guess” as to the value some
parameters will assume. For instance, we may use a time series model to forecast consumer
demand in the next quarter or forecast prices of raw material to determine “expected” variable costs.
Faced with the problem of uncertainty, techniques for understanding the impact of changes to key
variables and constraints were developed. This unit we will introduce one of these tools: sensitivity
analysis.

We will use the graphical solution method to understand the impact, if any, on the optimal solution
and the optimal value (i.e., the value of the objective function at the optimal solution) when
coefficients of the objective function change or when the right-hand side of a constraint changes.

Changes to coefficients in the objective function, which might represent sales prices or unit costs for
raw materials, might be large enough that the location of the optimal solution will change. We will
learn to interpret the range for an objective function coefficient over which the calculated optimal
solution is valid.

We will see that changing the right-hand side of a constraint, say the total number of labour hours or
the amount of raw material available, may change the optimal solution and the optimal value without
changing the binding constraints. We will refer to the change in the optimal objective value per unit increase in the right-hand side as the shadow price. If the change to the right-hand side of the constraint is large enough, the constraints which are
binding might change. Understanding the range for the right-hand side is therefore very important.

We will learn to interpret these ranges and values so that recommendations can be made, or actions
taken, based on the results of the sensitivity analysis.
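One simple way to sanity-check a shadow price outside the graphical method is to re-solve the model with the right-hand side increased by one unit and compare the optimal values. The sketch below assumes SciPy is available and reuses the invented two-variable model from the Unit 3 notes; it is an illustration, not a substitute for the sensitivity report.

```python
# Estimating a shadow price by perturbing a constraint's right-hand side.
# Model (invented, same as the Unit 3 sketch): maximize 5x + 4y
# subject to 6x + 4y <= 24, x + 2y <= 6, x, y >= 0.
from scipy.optimize import linprog

c = [-5, -4]                      # linprog minimizes, so negate the objective
A_ub = [[6, 4], [1, 2]]

def optimal_value(rhs):
    res = linprog(c, A_ub=A_ub, b_ub=rhs, bounds=[(0, None), (0, None)], method="highs")
    return -res.fun               # undo the sign flip

base = optimal_value([24, 6])
plus_one = optimal_value([25, 6])  # one extra unit of the first resource

# The change in the optimal value per unit increase in the right-hand side
# approximates the shadow price of the first constraint (valid within its range).
print(f"Optimal value: {base:.2f}")
print(f"Shadow price of resource 1: {plus_one - base:.2f}")
```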

It will be important to design your spreadsheet models in such a way that effort is minimized when
performing sensitivity analysis using the graphical solution method. There are a number of
important guidelines to follow for modeling optimization problems on spreadsheets, including

 putting the objective function coefficients, constraint coefficients, and right-hand-side values
in a logical format in the spreadsheet,
 defining a separate set of cells (either rows or columns) for the values of the decision
variables, and
 defining separate cells for the objective function and each constraint function (the left-hand
side of a constraint).

While working on this chapter, keep in mind that a resource cost is relevant if the amount paid for it depends on the amount of the resource used by the decision variables. You should be using relevant costs in your decision analysis; these are the costs reflected in the objective function
coefficients. Alternatively, a resource cost is sunk if it must be paid regardless of the amount of the
resource actually used by the decision variables. Sunk costs are not reflected in the objective
function coefficients.

The Individual Problems in this unit will give you experience in using the results of sensitivity
analysis to make relevant business decisions. By using the graphical solution method, you should
also start developing an intuitive sense for how changes to objective function coefficients and
constraints can impact the optimal solution, optimal value, and binding constraints.

Unit 5 Notes: Linear Programming and Building Models in Excel

This unit we will take time to reflect on what we have learned to date by applying our new skills and problem-solving tools to solve realistic problems in Excel.

Up to now we have been using the graphical method to solve two-variable linear programming
models. The graphical approach has given us the ability to understand how the optimal solution and
the optimal value will be impacted by changes to the objective function and to the right-hand side of
constraints. Now that we have developed an intuition regarding these changes, we are ready to
move on to more sophisticated tools for solving these problems.

The design of a spreadsheet model is critical. Reading Appendix A of the textbook will help you in
designing and maintaining spreadsheet models so that they are efficient, self-documenting
(important if you will eventually be giving the spreadsheet model to someone else to use), and easy
to maintain. A properly designed spreadsheet model will also help mitigate possible model implementation errors.

In participatory learning activities we will use an Excel worksheet to solve a linear programming
problem. We will enter the problem data in the top part of the worksheet and develop the linear
programming model in the bottom part of the worksheet. We will also see how Excel Solver can be used to provide sensitivity analysis information.
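For those who find it helpful to see the same layout principle outside a spreadsheet, here is a hedged Python analogue: the problem data live in one place at the "top", and the model is assembled from that data "below", so changing a price or a resource limit never requires touching the model logic. The numbers and the use of SciPy are assumptions for illustration only.

```python
# A Python analogue of the recommended worksheet layout: problem data are kept in
# one place at the top, and the model is built from that data below, so changing a
# price or a resource limit never requires editing the model itself.
# All numbers are invented for illustration; in the course this is done in Excel.
from scipy.optimize import linprog

# --- "top of the worksheet": problem data -----------------------------------
profit_per_unit = {"standard": 5, "deluxe": 4}
resource_usage = {"labour": {"standard": 6, "deluxe": 4},
                  "material": {"standard": 1, "deluxe": 2}}
resource_limits = {"labour": 24, "material": 6}

# --- "bottom of the worksheet": model built from the data --------------------
products = list(profit_per_unit)
c = [-profit_per_unit[p] for p in products]                     # negate to maximize
A_ub = [[resource_usage[r][p] for p in products] for r in resource_limits]
b_ub = [resource_limits[r] for r in resource_limits]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(products), method="highs")
for p, qty in zip(products, res.x):
    print(f"{p}: produce {qty:.2f} units")
print(f"Maximum profit: {-res.fun:.2f}")
```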

Unit 6 Notes: Linear Programming Applications

Linear programming is a very common tool for supporting the decision-making process found
across all industries, often being used to solve more than one kind of problem within an
organization. This is the reason why we are devoting so much of the course to understanding this
approach. In this unit we will explore a variety of examples across a diverse range of industries.
Hopefully, through the readings, assigned questions for this unit, and Discussion, you will develop an
appreciation for the diversity of problems to which this method has been applied, and you will gain
experience and confidence for using it in practice.

The readings from the textbook provide a discussion of the wide-ranging areas where linear program
modeling has been successful, including media/advertising channel selection, marketing, portfolio
analysis, production planning, operations, and capital budgeting. Our Individual Problem set will
highlight specific examples for portfolio optimization, production planning, scheduling overtime,
economic dispatch of electricity, and capital budgeting. Excel Solver will be used to solve these
problems.

When faced with linear programming problems, we are often tempted to jump right into designing
the spreadsheet model. These models can often get complicated unless we start our design with a
good understanding of the problem and a plan in mind. It is ALWAYS recommended to start by developing the mathematical formulation of the linear program first.

A recommended approach to formulating the linear program is to

 understand what the question is asking you to do – determine whether you are being asked to maximize profit, minimize cost, or optimize some other function;
 determine what the decision variables are – these are variables or elements management
has control over impacting the function we are trying to maximize or minimize;
 develop the objective function – using the answers from steps 1) and 2) will provide hints to
what the objective function should look like;
 determine what aspects of the problem are constrained;
 develop constraint equations by determining the inequalities suggested by step 4) and how
the decision variables from step 2) can be used to calculate the values limited by the
constraints; and
 formulate the linear model in math form using the objective function from step 3) and the
constraints from step 5).

Once we have the LP model, translating the math model into a spreadsheet model becomes a design and implementation problem rather than a business-domain problem.
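To make the six formulation steps concrete, the sketch below walks a small, invented media-selection example through them and then solves it; the audience figures, costs, and limits are all made-up assumptions, and in the assignment you would implement the equivalent model in Excel Solver.

```python
# Walking a small media-selection example (all numbers invented) through the
# six formulation steps before touching any spreadsheet or solver.
from scipy.optimize import linprog

# Step 1: the question asks us to MAXIMIZE total audience reached.
# Step 2: decision variables -- number of TV ads (x1) and radio ads (x2) to buy.
# Step 3: objective function -- audience reached: 90,000*x1 + 30,000*x2.
# Step 4: constrained aspects -- the advertising budget and limits on TV ads.
# Step 5: constraint equations -- 10,000*x1 + 3,000*x2 <= 120,000 (budget),
#         x1 <= 8 (media availability), x1 >= 2 (minimum brand presence).
# Step 6: the full model, translated below into solver form.

c = [-90_000, -30_000]                        # negate because linprog minimizes
A_ub = [[10_000, 3_000],                      # budget constraint
        [1, 0],                               # at most 8 TV ads
        [-1, 0]]                              # at least 2 TV ads, written as -x1 <= -2
b_ub = [120_000, 8, -2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(f"TV ads: {res.x[0]:.1f}, radio ads: {res.x[1]:.1f}, audience: {-res.fun:,.0f}")
```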

Unit 7 Notes: Integer Linear Programming

In this chapter, we will study integer linear programming. The key difference between integer linear programming and linear programming is that, for the former, at least one decision variable is required to take on integer values only. This sounds like a small difference, but as we will see in the assignment
problems, the integer requirement can lead to significant changes in the location of the optimal
solution. If all variables are required to be integers, we have an all-integer linear program. If some,
but not all, are required to be integers, we have a mixed-integer linear program.

There is a special case of integer linear programs where some decision variables are required to
take on values of 0 or 1. We refer to these as 0-1 or binary variables. These variables will provide us
with modeling flexibility and allow us to address problems where we are looking to make choices
between resources. For example, we may be trying to decide which of our three warehouses to ship
merchandise from. We would define a variable for each warehouse and set that variable to 1 if we
decide to ship from there. The variable would otherwise be zero. We will also see how 0-1 integer
linear variables can be used to handle special situations such as multiple choices, k out of n
alternatives, and conditional constraints.
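For intuition about binary variables, the following sketch simply brute-forces a tiny version of the warehouse example described above: it tries every 0-1 assignment, discards those that violate the constraints, and keeps the cheapest. The costs, capacities, demand, and the "at most two warehouses" rule are invented for illustration; a real model would hand the same 0-1 variables to Excel Solver rather than enumerate them.

```python
# Brute-force illustration of 0-1 (binary) decision variables.
# Which warehouses should ship merchandise so that demand is covered at the
# lowest fixed cost, using at most two of the three?  All numbers are invented.
from itertools import product

warehouses = {"A": {"cost": 400, "capacity": 70},
              "B": {"cost": 250, "capacity": 40},
              "C": {"cost": 300, "capacity": 50}}
demand = 80
max_open = 2            # a "k out of n alternatives" style constraint

best = None
for choice in product([0, 1], repeat=len(warehouses)):          # every 0-1 assignment
    selected = [w for w, use in zip(warehouses, choice) if use]
    if len(selected) > max_open:
        continue                                                # violates k-out-of-n
    if sum(warehouses[w]["capacity"] for w in selected) < demand:
        continue                                                # cannot cover demand
    cost = sum(warehouses[w]["cost"] for w in selected)
    if best is None or cost < best[0]:
        best = (cost, selected)

print(f"Open warehouses {best[1]} at fixed cost {best[0]}")
```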

For this unit’s Individual Problem, we will solve simple integer linear programs using two methods:
the graphical solution method and Excel Solver. The graphical solution method will be used for two
variable problems and will help illustrate how the requirement for a decision variable to be an integer
impacts the location of the optimal solution.

For problems with more than two variables we will be required to use a computational tool such as
Excel Solver. To enforce integer restrictions on variables using Solver, additional constraints must be added that require the variable to be “int.”

Similarly, for 0-1 or binary constraints, we will add a constraint that requires the variable to be “bin.”
Explicit instructions for how to do this can be found in an appendix to Chapter 11 in the textbook.

Using the concepts we have studied to date, and our growing understanding of what it means to make good decisions, we will analyze our past decision processes and think about how we might approach these decisions in a more systematic way.

Unit 8 Notes: Decision Analysis

This unit, we will learn how to describe a problem situation in terms of decisions to be made, chance
events, and consequences. We will then analyze a simple decision analysis problem from both a
pay-off table and decision tree point of view. We will also develop a risk profile and interpret its
meaning and use sensitivity analysis to study how changes in problem inputs affect or alter the
recommended decision. This learning is needed in order to be able to determine the potential value
of additional information, and how new information and revised probability values can be used in the
decision analysis approach to problem solving.

This chapter opens with a discussion about decision analysis. For a business manager, decision analysis is useful for developing an optimal strategy when faced with several decision alternatives and uncertainty about future events. A good decision includes undertaking a risk analysis that provides
probability information about favorable as well as unfavorable consequences which may occur.

This unit also helps us to understand the following terms as they relate to decision analysis:
Decision alternatives, decision strategy, chance events, risk profile, states of nature, sensitivity
analysis, influence diagram, prior probabilities, payoff table, posterior probabilities, decision tree, expected value of sample information (EVSI), optimistic approach, efficiency of sample information, conservative approach, Bayesian revision, minimax regret approach, opportunity loss or regret,
expected value approach, and expected value of perfect information (EVPI).
By the end of this unit, we will be able to explain the components of a decision tree and how optimal
decisions are computed. Decision trees consist of both nodes (points in time at which events take
place) and branches (decisions or outcomes). Optimal decisions are found by taking expected
values at event nodes and choosing the best decision at decision nodes based on the expected
values. We will then learn what a risk profile is and how it can be used in conjunction with the
solution of a decision tree. A risk profile is the payoff distribution showing the possible payoffs that
can occur and their probabilities for a particular decision strategy.

We will then summarize the decision strategies that model different risk behaviors for making
decisions involving uncertainty for

 an average payoff strategy – treat all outcomes as equally likely and select the one with the
best average,
 an aggressive strategy – choose the decision that results in the best possible outcome,
 a conservative strategy – choose the decision that represents the “best of the worst”
possible outcome, and
 an opportunity loss strategy – choose the decision that minimizes the maximum opportunity
loss.
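A compact sketch of these four criteria applied to an invented payoff table (rows are decision alternatives, columns are states of nature, entries are profits) is shown below.

```python
# Four decision criteria applied to a small payoff table (profits, invented numbers).
# Rows = decision alternatives, columns = states of nature.
payoffs = {"small plant":  [ 40,  45,  50],
           "medium plant": [ 20,  70,  80],
           "large plant":  [-30,  60, 120]}

# Average payoff: treat every state of nature as equally likely.
average = {d: sum(p) / len(p) for d, p in payoffs.items()}

# Aggressive (optimistic / maximax): best possible outcome for each decision.
aggressive = {d: max(p) for d, p in payoffs.items()}

# Conservative (maximin): best of the worst possible outcomes.
conservative = {d: min(p) for d, p in payoffs.items()}

# Opportunity loss (minimax regret): minimize the maximum regret, where regret is
# the shortfall versus the best payoff attainable in each state of nature.
n_states = len(next(iter(payoffs.values())))
best_in_state = [max(p[s] for p in payoffs.values()) for s in range(n_states)]
max_regret = {d: max(best_in_state[s] - p[s] for s in range(n_states))
              for d, p in payoffs.items()}

print("Average payoff choice: ", max(average, key=average.get))
print("Aggressive choice:     ", max(aggressive, key=aggressive.get))
print("Conservative choice:   ", max(conservative, key=conservative.get))
print("Minimax regret choice: ", min(max_regret, key=max_regret.get))
```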

This unit also provides examples of when expected value decision-making is appropriate and when it
is not. Expected value decision-making is appropriate for repeated decisions, not one-time
decisions. The concepts of expected value of sample information and expected value of perfect
information can be explained as: The Expected Value of Sample Information (EVSI) is the EMV
(Expected Monetary Value) with Sample Information (assumed at no cost) minus the EMV without
sample information; it represents the most you should be willing to pay for the sample information.
The Expected Value of Perfect Information (EVPI) is the EMV with perfect information (assumed at
no cost) minus the EMV without any information. Again, it represents the most you should be willing
to pay for perfect information. EVPI represents the maximum improvement in the expected return
that can be achieved if the decision-maker is able to acquire perfect information about the future
event that will take place.
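Continuing with the same invented payoff table, the following sketch shows the expected value approach and the EVPI calculation under assumed prior probabilities.

```python
# Expected value approach and Expected Value of Perfect Information (EVPI).
# Same invented payoff table as above, with assumed prior probabilities for the
# three states of nature.
payoffs = {"small plant":  [ 40,  45,  50],
           "medium plant": [ 20,  70,  80],
           "large plant":  [-30,  60, 120]}
priors = [0.3, 0.5, 0.2]

# Expected monetary value (EMV) of each decision alternative.
emv = {d: sum(prob * pay for prob, pay in zip(priors, p)) for d, p in payoffs.items()}
best_decision = max(emv, key=emv.get)
emv_without_info = emv[best_decision]

# Expected value WITH perfect information: for each state, we would pick the best
# payoff in that state, weighted by how likely that state is.
ev_with_perfect_info = sum(prob * max(p[s] for p in payoffs.values())
                           for s, prob in enumerate(priors))

evpi = ev_with_perfect_info - emv_without_info
print(f"Best decision by EMV: {best_decision} (EMV = {emv_without_info:.1f})")
print(f"EVPI = {evpi:.1f}  (the most you should pay for perfect information)")
```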

For this unit’s Individual Problem, we will be asked to investigate decisions using optimistic,
conservative, and minimax decision strategies. We will also be introduced to problems where we will
use sensitivity analysis to understand how changes to problem inputs may impact or alter the
recommended decision and calculate the value of new information introduced to decision problems.

Unit 9 Notes: Utility and Game Theory

In this unit we will focus on situations where the decision alternative with the best expected
monetary value may not be the preferred alternative. A decision maker may also wish to consider
intangible factors such as risk, image, or other nonmonetary criteria in order to evaluate the decision
alternatives. When monetary value does not necessarily lead to the most preferred decision,
expressing the value (or worth) of a consequence in terms of its utility will permit the use of
expected utility to identify the most desirable decision alternative. The discussion of utility and its
application in decision analysis is presented in the first part of this unit.

Utility is a measure of the total worth or relative desirability of a particular outcome; it reflects the
decision maker's attitude toward a collection of factors such as profit, loss, and risk. Researchers
have found that as long as the monetary value of payoffs stays within a range that the decision
maker considers reasonable, selecting the decision alternative with the best expected monetary value usually leads to the selection of the most preferred alternative. However, when the payoffs are extreme,
decision makers are often unsatisfied with the decision that simply provides the best expected
monetary value.

Then, we introduce the topic of game theory. Game theory is the study of developing optimal
strategies where two or more decision makers, usually called players, compete as adversaries.
Game theory can be viewed as a relative of decision analysis. A key difference, however, is that each
player selects a decision strategy not by considering the possible outcomes of a chance event, but
by considering the possible strategies selected by one or more competing players.

In decision analysis, a single decision maker seeks to select an optimal decision alternative after
considering the possible outcomes of one or more chance events. In game theory, two or more
decision makers are called players, and they compete as adversaries against each other. Each player
selects a strategy independently without knowing in advance the strategy of the other player or
players. The combination of the competing strategies provides the value of the game to the players.
Game theory applications have been developed for situations in which the competing players are
teams, companies, political candidates, armies, and contract bidders.

In this section we will also learn about two-person, zero-sum games. Two-person means that two competing players take part in the game. Zero-sum means that the gain (or loss) for one player is equal to the corresponding loss (or gain) for the other player (Anderson, D. R., Sweeney, D. J., et al., 2015, pp. 165-194).
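As a small illustration, the sketch below checks an invented payoff matrix (payoffs to Player A) for a pure-strategy saddle point using the maximin and minimax criteria; the numbers are made up for the example.

```python
# Pure-strategy check for a two-person, zero-sum game (payoffs to Player A, invented).
# Player A picks a row to maximize the payoff; Player B picks a column to minimize it.
payoff_to_A = [[ 4,  1,  3],
               [ 2,  0, -1],
               [ 5,  2,  4]]

# Player A's maximin: the best of the row minimums (A's guaranteed floor).
row_mins = [min(row) for row in payoff_to_A]
maximin = max(row_mins)

# Player B's minimax: the smallest of the column maximums (B's guaranteed ceiling).
col_maxs = [max(row[j] for row in payoff_to_A) for j in range(len(payoff_to_A[0]))]
minimax = min(col_maxs)

print(f"Player A maximin = {maximin}, Player B minimax = {minimax}")
if maximin == minimax:
    print(f"Saddle point: the game has a pure-strategy solution with value {maximin}.")
else:
    print("No saddle point: the players should use mixed strategies.")
```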

Unit 10 Notes: Applying Decision Models

Congratulations on making it to the final regular unit of the course. We have covered a significant
amount of material over the last nine units. This unit, we will not introduce any new models. Instead
we will concentrate on consolidating the skills we have developed and the tools we have
accumulated, tying together any loose ends and learning how the models can work together.

Let’s take a few minutes and review where we have come from. The basics of cost-volume-price analysis were introduced in Unit 1. We explored the concept of break-even analysis and learned how to use the model to evaluate the profitability of a company or business initiative. In this unit we will expand on that idea and introduce the break-even graph. The concept is explored in more detail in the Cost-Volume-Price Analysis handout.
Next, we studied time series forecasting models. We learned the basics of forecasting through
moving average and exponential smoothing models. We also learned how to use regression to fit
linear trend models. The advantage of these time series models is the ability to forecast reasonable
values for time-varying parameters into the future. These forecasts will often be the first step in
providing input values for other decision models. One disadvantage is that forecast models are only
able to capture market innovations after they appear in observable data. If impacts of such changes
are required for modeling purposes, other approaches such as scenario analysis will have to be
used.

We spent a significant amount of time learning about linear programming models. These models are
helpful when trying to optimize a performance variable, such as profit or cost, in conditions where resources are constrained. One major disadvantage is that the models cannot inherently handle
uncertainty in input variables.

Integer linear programming models were introduced next. At first glance these models appear to be
a special case of the more general linear programming models. However, constraining a decision variable to integer values can have a significant impact on the location of the optimal solution. We learned
how to approximate the solution to these problems using LP-Relaxation techniques. In many cases,
these approximations are “good enough” and spending more time in solving the integer problems
exactly is not justified.

Simulation models were introduced with the idea that such models can naturally account for the
uncertainty inherent in key drivers like market prices, demand, and production capacity. One
disadvantage is that the results from these models are aggregate values and are harder to interpret.

Finally, we introduced decision analysis methods such as payoff tables and decision trees. One key
advantage of these models is that they can be applied when probabilities of events are unknown. We
can take a risk tolerance approach and apply optimistic, conservative, and minimax measures.

The Final Case Study this unit will require you to apply a number of quantitative decision models to a
realistic example. You will act as a consultant for a company that produces high-quality sunflower oil and make recommendations on the purchasing strategy for raw materials. A time series
forecasting model is used to generate values used as inputs to a linear program model. After
generating a minimum cost purchasing strategy, you will apply classical cost-volume-price analysis
to build an opinion on the profitability of the company in the next production cycle.
