
Mathematical Programming

A Publication of the Mathematical Optimization Society


ISSN: 0025-5610 (Print) 1436-4646 (Online)

Description
Mathematical Programming publishes original articles dealing with every aspect of mathematical
optimization; that is, everything of direct or indirect use concerning the problem of optimizing a
function of many variables, often subject to a set of constraints. This involves theoretical and
computational issues as well as application studies. Included, along with the standard topics of linear,
nonlinear, integer, conic, stochastic and combinatorial optimization, are techniques for formulating
and applying mathematical programming models, convex, nonsmooth and variational analysis, the
theory of polyhedra, variational inequalities, and control and game theory viewed from the
perspective of mathematical programming. The editorial boards are particularly interested in novel
applications of mathematical programming and interfaces with engineering, economics, and computer
science. Articles primarily concerned with computational issues such as implementation and testing
should in general be submitted to Mathematical Programming Computation.
Mathematical Programming consists of two series. Series A publishes original research articles,
expositions and surveys, and reports on computational experimentation and new or innovative
practical applications as well as short communications dealing with the above. Issues of Series B each
focus on a single subject of current interest to the mathematical programming community. Each issue
of Series B has one or more guest editors, who need not be members of the editorial board. An issue
may be a collection of original articles, a single research monograph or a selection of papers from a
conference.

What is Mathematical Programming?

1. Introduction

First a definition: Mathematical Programming (MP) is the use of mathematical models, particularly optimizing
models, to assist in taking decisions.

The term 'Programming' antedates computers and means 'preparing a schedule of activities'. It is still used, for
instance, in oil refineries, where the refinery programmers prepare detailed schedules of how the various process
units will be operated and the products blended. Mathematical Programming is, therefore, the use of mathematics
to assist in these activities.

Mathematical Programming is one of a number of OR techniques. Its particular characteristic is that the best
solution to a model is found automatically by optimization software. An MP model answers the question "What's
best?" rather than "What happened?" (statistics), "What if?" (simulation), "What will happen?" (forecasting) or
"What would an expert do and why?" (expert systems).

Being so ambitious does have its disadvantages. Mathematical Programming is more restrictive in what it can
represent than other techniques. Nor should it be imagined that it really does find the best solution to the real-
world problem. It finds the best solution to the problem as modelled. If the model has been built well, this solution
should translate back into the real world as a good solution to the real-world problem. If it does not, analysis of
why it is no good leads to greater understanding of the real-world problem.

The key characteristics of MP are shown in the diagram below.

2. What can Mathematical Programming Do?

The essential characteristics of a problem for Mathematical Programming to be applied are:

• many potentially acceptable solutions;

• some means of assessing the quality of alternative solutions;

• some interconnectedness between the variable elements of the system.


These are reflected in the essential components of an MP model:

• the values of the decision variables, which describe the solutions;

• the objective function, which measures the quality of solutions;

• the relationships between decision variables, or constraints.

The definitions of all these components will change repeatedly during the building of the model. Although the
process of MP involves finding optimum solutions, nobody is suggesting that the solution is optimum to the real-
world problem.

If the model is reasonably faithful, then its optimum solution should be a good solution to the real-world problem.
Whether it is or not, the process of building the model and analysing the solutions is a very powerful tool in
analysing the real-world problem.

Mathematical Programming is very suitable for problems involving blending, continuous flow processing,
production and distribution, and strategic planning. It answers questions such as:

• how much?

• when?

• where?

One special case of Mathematical Programming which has been enormously successful is Linear Programming (LP).
In an LP model all the relationships are linear, hence the name. LP has been so successful for two reasons:

• there are robust 'black box' solvers which find the best solution to LP problems automatically;

• many real-world phenomena can be approximated reasonably well by linear relationships.

Problems involving planning, blending, production and distribution are all capable of being solved using Linear
Programming. The principles of MP also apply to problems involving logistics and scheduling but the processes of
tackling such problems are more varied and a mixture of techniques is likely to be used, including MP, heuristics
and special-purpose algorithms.

3. Building an MP Model

Much of the art of building an MP model revolves around deciding which aspects of a real-world problem should be
included and which should not. In practice this is an iterative process. To start with, keep things simple. If in doubt,
leave out. If the optimum solution from the resulting model is clearly wrong in the real world, add extra detail and
try again.

When starting to formulate a model, it may be helpful to think in terms of the typical decision variables of an MP
model. These are:

• buying (or importing from outside the system being modelled);

• making;

• moving from place to place;

• storing (or moving through time);

• selling (or exporting to outside the system being modelled).

Typical constraints are:

• availability (of materials, resources);

• capacity (of processes);

• quality (upper and lower limits);

• demand;

• material balance (within the system being modelled).

With these in mind, identify what data are available and construct the model to fit them. If there are some data
which appear to be crucial but which are not available, consider how decisions are being made now. If judgements
are being made based on estimates, try to obtain the estimates and use them. If estimates are not available, push
ahead regardless and await the reaction to the results from your model. The supposed crucial factor may not
matter very much, in which case you are better off without it. If the results of your model are nonsense, the
explanation of why they are so should provide some guidance as to what to do.

Optimization exercises a model in a far more rigorous way than other techniques, such as simulation. The optimum
solution is, by definition, extreme. When building an MP model one is reminded of the proverb:

Man proposes: God disposes

One builds the model and then turns it over to the optimization algorithm to find the best solution. If there is a
fault in the model which can be exploited, the optimization algorithm will find it. The solution will be, at best,
impracticable, at worst, nonsense. But through such mistakes one acquires greater understanding of the problem
and moves towards solutions which are truly useful.

The process of building an optimization model is therefore necessarily iterative. A first draft of the model will be
sketched out and test data sought. Most probably some of the data will prove to be unobtainable. The model has
to be altered before it can be run. The solutions are nonsense. Some constraint has been omitted. It is added. The
solution is now plausible given the test data and the highly simplified representation.

More detail is added to the model. Further data are sought. So the process goes on. As the model gets better, so
the client's scepticism gives way to enthusiasm. The model starts to propose new ways of doing things. It is really
beginning to add value.

Ultimately the time comes when the model moves from being an experimental tool to a decision support aid. This
is when the user interface becomes critical. Fortunately, MP software has moved forward a lot in the past few years
and MP models can now be embedded straightforwardly in larger systems, taking their data from databases and
spreadsheets and returning their results there.

The use of a computer program to choose the best alternative from a set of available options.
Mathematical programming uses probability and mathematical models to predict future events. It
is used in investing and in determining the most efficient way to allocate scarce resources. Also
called optimization.

Read more: http://www.businessdictionary.com/definition/mathematical-programming.html

Break-Even Analysis and Forecasting

This site is a part of the JavaScript E-labs learning objects for decision making. Other JavaScript in this series are
categorized under different areas of applications in the MENU section on this page.

Professor Hossein Arsham   

The following JavaScript calculates the break-even point for a firm based on the information you provide. A firm's
break-even point occurs at the point where total revenue equals total costs.

Break-even analysis depends on the following variables:

1. Selling Price per Unit: The amount of money charged to the customer for each unit of a product or service.

2. Total Fixed Costs: The sum of all costs required to produce the first unit of a product. This amount does not
vary as production increases or decreases, until new capital expenditures are needed.

3. Variable Unit Cost: Costs that vary directly with the production of one additional unit.

Total Variable Cost: The product of expected unit sales and variable unit cost.

4. Forecasted Net Profit: Total revenue minus total cost. Enter zero (0) if you wish to find the number of
units that must be sold in order to produce a profit of zero (but recover all associated costs).

These variables are interdependent in the break-even analysis; if any of them changes, the results may change.

Total Cost: The sum of the fixed cost and total variable cost for any given level of production, i.e., fixed cost plus
total variable cost.
Total Revenue: The product of forecasted unit sales and unit price, i.e., forecasted unit sales times unit price.

Break-Even Point: Number of units that must be sold in order to produce a profit of zero (but will recover all
associated costs). In other words, the break-even point is the point at which your product stops costing you money
to produce and sell, and starts to generate a profit for your company.

One may use the JavaScript to solve some other associated managerial decision problems, such as:

 setting price level and its sensitivity

 targeting the "best" values for the variable and fixed cost combinations

 determining the financial attractiveness of different strategic options for your company

The graphic method of analysis (below) helps you in understanding the concept of the break-even point. However,
the break-even point is found faster and more accurately with the following formula:

Q = FC / (UP - VC)

where:

Q = Break-even Point, i.e., Units of production (Q),

FC = Fixed Costs,

VC = Variable Costs per Unit

UP = Unit Price

Therefore,

Break-Even Point Q = Fixed Cost / (Unit Price - Variable Unit Cost)
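The formula above can be sketched in a few lines of Python; the cost figures in the example call are purely illustrative:

```python
def break_even_units(fixed_cost, unit_price, unit_variable_cost):
    """Break-even quantity: Q = FC / (UP - VC)."""
    margin = unit_price - unit_variable_cost   # contribution per unit
    if margin <= 0:
        raise ValueError("unit price must exceed variable unit cost")
    return fixed_cost / margin

# Illustrative figures only: FC = 5,000; UP = 10.00; VC = 6.00
print(break_even_units(5000, 10.00, 6.00))  # 1250.0
```

Varying the inputs to this function is exactly the sensitivity analysis the text suggests: rerun it with different prices or costs and watch how Q moves.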

You may like using the JavaScript for performing some sensitivity analysis on the above parameters to investigate
their impacts on your decision-making.

What Is a Break-Even Analysis?


Break-even analysis entails the calculation and examination of the margin of safety for an entity based on
the revenues collected and associated costs. By analyzing different price levels relating to various levels of
demand, a business uses break-even analysis to determine what level of sales is necessary to cover the
company's total fixed costs. A demand-side analysis would give a seller significant insight into its
selling capabilities.

KEY TAKEAWAYS

 Break-even analysis tells you the level an investment must reach to recover your initial outlay.
 It is considered a margin of safety measure.
 Break-even analysis is used broadly, from stock and options trading to corporate budgeting for
various projects.

The Basics Of Break-Even Analysis


Break-even analysis is useful in the determination of the level of production or a targeted desired sales mix.
The study is for management’s use only, as the metric and calculations are not necessary for external
sources such as investors, regulators or financial institutions.

Break-even analysis looks at the level of fixed costs relative to the profit earned by each additional unit
produced and sold. In general, a company with lower fixed costs will have a lower break-even point of
sale. For example, a company with $0 of fixed costs will automatically have broken even upon the sale of
the first product assuming variable costs do not exceed sales revenue. However, the accumulation of
variable costs will limit the leverage of the company as these expenses come from each item sold.

Break-even analysis is also used by investors to determine at what price they will break even on a trade or
investment. The calculation is useful when trading in or creating a strategy to buy options or a fixed-
income security product.

How it works (Example):
The basic idea behind doing a break-even analysis is to calculate the point at
which revenues begin to exceed costs. To do this, one must first separate a company's
costs into those that are variable and those that are fixed. Fixed costs are costs that do
not change with the quantity of output and they are not zero when production is zero.
Examples of fixed cost include rent, insurance premiums or loan payments. Variable
costs are costs that change with the quantity of output. They are zero when
production is zero. Examples of common variable costs include labor directly involved in a
company's manufacturing process and raw materials.

For example, at XYZ Restaurant, which sells only pepperoni pizza, the variable expenses
per pizza might be:

 Flour: $0.50
 Yeast: $0.05
 Water: $0.01
 Cheese: $3.00
 Pepperoni: $2.00
 Total: $5.56

Its fixed expenses per month might be:

 Labor: $1,500
 Rent: $3,000
 Insurance: $200
 Advertising: $500
 Utilities: $450
 Total: $5,650

Based on the total variable expenses per pizza, we now know that XYZ Restaurant must
price its pizzas at $5.56 or higher just to cover those costs. But if the price of a pizza is
$10, then the contribution margin, or the revenue minus the variable cost for XYZ
Restaurant, is ($10 - $5.56 = $4.44).

But how many pizzas does XYZ Restaurant need to sell at $10 each to cover all those
fixed monthly expenses? Well, if $4.44 is left over from each pizza after accounting for
variable costs, then XYZ Restaurant must sell ($5,650 / $4.44 = 1,272.5), i.e., at least
1,273 whole pizzas per month in order to cover its monthly fixed costs.
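The pizza arithmetic above can be checked in a few lines of Python; since a fraction of a pizza cannot be sold, the break-even count is rounded up:

```python
import math

# Figures from the XYZ Restaurant example above
variable_per_pizza = {"flour": 0.50, "yeast": 0.05, "water": 0.01,
                      "cheese": 3.00, "pepperoni": 2.00}
fixed_per_month = {"labor": 1500, "rent": 3000, "insurance": 200,
                   "advertising": 500, "utilities": 450}

price = 10.00
vc = sum(variable_per_pizza.values())        # total variable cost, $5.56
margin = price - vc                          # contribution margin, $4.44
fixed_total = sum(fixed_per_month.values())  # fixed costs, $5,650/month
break_even = fixed_total / margin            # ~1,272.5 pizzas
print(round(break_even, 1), math.ceil(break_even))  # 1272.5 1273
```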

It is important to note that some fixed costs increase "stepwise," meaning that after a
certain level of revenue is reached, the fixed cost changes. For example, if XYZ
Restaurant began selling 5,000 pizzas per month rather than 2,000, it might need to hire
a second manager, thus increasing labor costs.

Understanding Cost-Benefit Analysis


Before building a new plant or taking on a new project, prudent managers conduct a cost-
benefit analysis to evaluate all the potential costs and revenues that a company might
generate from the project. The outcome of the analysis will determine whether the project
is financially feasible or if the company should pursue another project.

In many models, a cost-benefit analysis will also factor the opportunity cost into the
decision-making process. Opportunity costs are alternative benefits that could have been
realized when choosing one alternative over another. In other words, the opportunity cost
is the forgone or missed opportunity as a result of a choice or decision. Factoring in
opportunity costs allows project managers to weigh the benefits from alternative courses
of action and not merely the current path or choice being considered in the cost-benefit
analysis.

By considering all options and the potential missed opportunities, the cost-benefit analysis
is more thorough and allows for better decision-making.

KEY TAKEAWAYS

 A cost-benefit analysis (CBA) is the process used to measure the benefits of a
decision or action minus the costs associated with taking that action.
 A CBA involves measurable financial metrics such as revenue earned or costs
saved as a result of the decision to pursue a project.
 A CBA can also include intangible benefits and costs or effects from a decision
such as employee morale and customer satisfaction.
The Cost-Benefit Analysis Process
A cost-benefit analysis (CBA) should begin with compiling a comprehensive list of all the
costs and benefits associated with the project or decision.

The costs involved in a CBA might include the following:

 Direct costs would be direct labor involved in manufacturing, inventory, raw
materials, manufacturing expenses.
 Indirect costs might include electricity, overhead costs from management, rent,
utilities.
 Intangible costs such as customer impact of pursuing a new business strategy,
project, or construction of a manufacturing plant, delivery delays of product,
employee impact.
 Opportunity costs such as alternative investments, or buying a plant versus building
one.
 Cost of potential risks such as regulatory risks, competition, and environmental
impacts.

Benefits might include the following:

 Revenue and sales increases from increased production or new product.
 Intangible benefits, such as improved employee safety and morale, as well as
customer satisfaction due to enhanced product offerings or faster delivery.
 Competitive advantage or market share gained as a result of the decision.

An analyst or project manager should apply a monetary measurement to all of the items
on the cost-benefit list, taking special care not to underestimate costs or overestimate
benefits. A conservative approach with a conscious effort to avoid any subjective
tendencies when calculating estimates is best suited when assigning a value to both
costs and benefits for a cost-benefit analysis.

Finally, the results of the aggregate costs and benefits should be compared quantitatively
to determine if the benefits outweigh the costs. If so, then the rational decision is to go
forward with the project. If not, the business should review the project to see if it can make
adjustments to either increase benefits or decrease costs to make the project viable.
Otherwise, the company should likely avoid the project.

 
With cost-benefit analysis, there are a number of forecasts built into the process, and if
any of the forecasts are inaccurate, the results may be called into question.
A Simple Cost Benefit Analysis Example
Let’s assume that the board chairman of a construction company asks his team to
compare two potential real estate development projects. He also reminds them that the
company’s financial health is getting poor, so he has to select only one of them.

The team lists the potential incomes and costs of each project below.

Assumptions

Note: In order to simplify the cost benefit analysis example, we will not make a net
present value calculation for each cost and income.

Project 1

– 500 housing units will be constructed.
– 400 of them will be sold and 100 will be rented for 20 years.
– The rental price of each unit is 4,000 USD per year.
– The 100 rented units will be sold for 70,000 USD each after 20 years.
– The construction cost of each unit is 100,000 USD.
– The sale price of each unit is 120,000 USD.
– The project needs a luxury sales office with a price of 2,000,000 USD.
– The sales personnel cost is 300,000 USD per year.
– The project duration is 3 years.
– The project financing cost is 3,000,000 USD per year.

Project 2

– 400 housing units will be constructed.
– 350 of them will be sold and 50 will be rented for 15 years.
– The 50 rented units will be sold for 80,000 USD each after 15 years.
– The rental price of each unit is 4,500 USD per year.
– The construction cost of each unit is 90,000 USD.
– The sale price of each unit is 135,000 USD.
– The project needs a luxury sales office with a price of 3,000,000 USD.
– The sales personnel cost is 250,000 USD per year.
– The project duration is 2 years.
– The project financing cost is 2,500,000 USD per year.
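The comparison can be sketched in Python directly from the listed parameters. As hedged assumptions, the sketch takes sales personnel and financing costs to accrue only over the stated project duration, and, per the note above, applies no net present value discounting:

```python
def project_profit(units_built, units_sold, units_rented, rent_years,
                   sale_price, rent_per_year, resale_price, build_cost,
                   office_cost, staff_per_year, years, finance_per_year):
    """Total benefits minus total costs, with no discounting."""
    benefits = (units_sold * sale_price                        # unit sales
                + units_rented * rent_per_year * rent_years    # rental income
                + units_rented * resale_price)                 # resale of rented units
    costs = (units_built * build_cost       # construction
             + office_cost                  # sales office
             + staff_per_year * years       # sales personnel
             + finance_per_year * years)    # financing
    return benefits - costs

p1 = project_profit(500, 400, 100, 20, 120_000, 4_000, 70_000,
                    100_000, 2_000_000, 300_000, 3, 3_000_000)
p2 = project_profit(400, 350, 50, 15, 135_000, 4_500, 80_000,
                    90_000, 3_000_000, 250_000, 2, 2_500_000)
print(p1, p2)  # 1100000 10125000
```

Under these assumptions, Project 2's profit (10,125,000 USD) far exceeds Project 1's (1,100,000 USD), which matches the conclusion drawn below.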
Comparing the Project Parameters

Now we will calculate the amount of money to be spent and the amount of money
to be earned from each project.

All the project parameters are summarized in the table below:

[Table: Cost Benefit Analysis - Project Parameters]

Cost Calculations

Project cost calculations are summarized in the table below:

[Table: Cost Benefit Analysis - Project Costs]

Benefit Calculations

Project benefit calculations are summarized in the table below:

[Table: Cost Benefit Analysis - Benefit Calculations]

Costs and Benefits Comparison

[Table: Cost Benefit Analysis - Costs and Benefits Comparison]

In this cost benefit analysis example, many parameters affect the board’s decision.
Financing cost per year, units for sale, units for rent, and the total number of units
to be constructed are some of the factors that make the decision difficult.

The above table summarizes the benefits, costs, and profits of each project.
Although the income of Project 1 is higher than that of Project 2, the costs of
Project 2 are lower than the costs of Project 1.

It is obvious that Project 2 is more profitable than Project 1. If the board chairman
selects Project 2, the company will earn more profit by spending less money.

This simple example shows that cost benefit analysis is a useful tool when
comparing multiple projects.

Summary

This simple example shows how to make a cost benefit analysis for two projects. It
is important to bear in mind intangible benefits, such as customer satisfaction,
environmental impact, employee satisfaction, or health and safety and historical
importance, while making a cost benefit (or benefit cost) analysis, because benefits
do not consist only of revenues obtained from business actions but also of intangible
factors. In order to make a correct cost benefit analysis, the current worth of future
earnings must be calculated with the help of financial techniques such as net
present value.

Sometimes it may be difficult to compare options that have very close values; at
this stage, intangible factors affect the final decision. Generally, these kinds of
analyses are done by high-level stakeholders, top management, and board members.
After the selection of the project, they start the process of developing the project
charter.

In this article, we review a simple cost benefit analysis example. We hope that it is
useful to understand and use the Cost Benefit Analysis for future decisions.

See Also

Linear Programming (LP)


Definition - What does Linear Programming (LP) mean?
Linear programming is a mathematical method that is used to determine the best possible
outcome or solution from a given set of parameters or list of requirements, which are represented
in the form of linear relationships. It is most often used in computer modeling or simulation in
order to find the best solution in allocating finite resources such as money, energy, manpower,
machine resources, time, space and many other variables. In most cases, the "best outcome"
needed from linear programming is maximum profit or lowest cost.

Because of its nature, linear programming is also called linear optimization.


Techopedia explains Linear Programming (LP)


Linear programming is used as a mathematical method for determining and planning for the best
outcomes. It was developed by the Soviet mathematician Leonid Kantorovich in 1939 and was used
during World War II to plan expenditures and returns in a way that reduced costs for the military
and possibly increased them for the enemy.

Linear programming is part of an important area of mathematics called "optimization techniques,"
as it is literally used to find the most optimized solution to a given problem. A very basic example
of linear optimization usage is in logistics, or the "method of moving things around efficiently." For
example, suppose there are 1,000 boxes of the same size of 1 cubic meter each; 3 trucks that are
able to carry 100, 70 and 40 boxes respectively; several possible routes; and 48 hours to deliver all
the boxes. Linear programming provides the mathematical equations to determine the optimal truck
loading and route to be taken in order to meet the requirement of getting all boxes from point A to
B with the least back-and-forth and, of course, at the lowest cost and in the fastest time possible.

The basic components of linear programming are as follows:

 Decision variables - These are the quantities to be determined.

 Objective function - This represents how each decision variable would affect the cost, or,
simply, the value that needs to be optimized.
 Constraints - These represent how each decision variable would use limited amounts of
resources.

 Data - These quantify the relationships between the objective function and the constraints.
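Real LP models are handed to black-box solvers (simplex or interior-point methods). Purely as an illustration of the components listed above, the hypothetical two-variable problem below is "solved" by brute-force enumeration of integer points, which is not how LP solvers work but makes the pieces visible:

```python
# Hypothetical problem: maximize 3x + 2y
# subject to x + y <= 4 and x + 3y <= 6, with x, y >= 0.
best = None
for x in range(0, 5):          # decision variable x
    for y in range(0, 5):      # decision variable y
        if x + y <= 4 and x + 3 * y <= 6:   # constraints
            value = 3 * x + 2 * y            # objective function
            if best is None or value > best[0]:
                best = (value, x, y)
print(best)  # (12, 4, 0)
```

The coefficients (3, 2, the right-hand sides 4 and 6) are the data component: they quantify the relationships between the objective function and the constraints.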

Capital Budgeting
REVIEWED BY WILL KENTON

 Updated Jun 26, 2019

What Is Capital Budgeting?


Capital budgeting is the process a business undertakes to evaluate potential major
projects or investments. Construction of a new plant or a big investment in an outside
venture are examples of projects that would require capital budgeting before they are
approved or rejected.

As part of capital budgeting, a company might assess a prospective project's lifetime cash
inflows and outflows to determine whether the potential returns that would be generated
meet a sufficient target benchmark. The process is also known as investment appraisal.


Understanding Capital Budgeting


Ideally, businesses would pursue any and all projects and opportunities that enhance
shareholder value. However, because the amount of capital any business has available
for new projects is limited, management uses capital budgeting techniques to determine
which projects will yield the best return over an applicable period.

Some methods of capital budgeting companies use to determine which projects to pursue
include throughput analysis, net present value (NPV), internal rate of return, discounted
cash flow, and payback period.

KEY TAKEAWAYS

 Capital budgeting is used by companies to evaluate major projects and
investments, such as new plants or equipment.
 The process involves analyzing a project’s cash inflows and outflows to determine
whether the expected return meets a set benchmark.  
 The major methods of capital budgeting include throughput, discounted cash flow,
and payback analyses.
Example #2

Calculate the payback period (PB) and the discounted payback period (DPB) for a project
which costs $270,000 and is expected to generate $75,000 per year for the next five years.
The company's required rate of return is 11 percent. Should the company go ahead and
invest in the project?

Solution:

Adding the cash flows of each year gives the running balance shown in the table below.

The balance turns positive between years 3 and 4, so:

 PB = Year before full recovery + (Unrecovered balance / Cash flow in the recovery year)

 PB = 3 + 45,000 / 75,000

 PB = 3.6 years

Or
 PB= Initial Investment/Annual Cash Flows

 PB= 270,000/75,000

 PB= 3.6 Years.

Discounting at the 11% required rate of return gives the present values of the cash flows
shown in the table below.

 DPB = Year before full recovery + (Unrecovered discounted balance / Discounted cash flow in the recovery year)

 DPB = 4 + 37,316.57 / 44,508.85

 DPB = 4.84 years

So from both capital budgeting methods it is clear that the company should go ahead and
invest in the project, since by either method the company will recover the initial
investment within 5 years.
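The payback and discounted payback calculations above can be sketched as:

```python
def payback_period(investment, cash_flows):
    """Years until cumulative cash flows recover the investment.

    Returns a fractional year count, or None if never recovered."""
    balance = -investment
    for year, cf in enumerate(cash_flows, start=1):
        if balance + cf >= 0:
            return (year - 1) + (-balance) / cf   # fraction of the final year
        balance += cf
    return None

flows = [75_000] * 5                                  # five years of $75,000
pb = payback_period(270_000, flows)                   # 3.6 years
discounted = [cf / 1.11 ** t for t, cf in enumerate(flows, start=1)]
dpb = payback_period(270_000, discounted)             # ~4.84 years
print(round(pb, 2), round(dpb, 2))  # 3.6 4.84
```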

Inventory management

Posted by: Margaret Rouse


WhatIs.com
  

Contributor(s): Jim O'Donnell and Brenda Cole





Inventory management is the supervision of non-capitalized assets (inventory) and stock items.

A component of supply chain management, inventory management supervises the flow of goods
from manufacturers to warehouses and from these facilities to point of sale. A key function of
inventory management is to keep a detailed record of each new or returned product as it enters
or leaves a warehouse or point of sale.

The inventory management process

Inventory management is a complex process, particularly for larger organizations, but the
basics are essentially the same regardless of the organization's size or type. In inventory
management, goods are delivered into the receiving area of a warehouse in the form of raw
materials or components and are put into stock areas or shelves.

In smaller companies, which have less physical space than larger organizations, the goods
may go directly to the stock area instead of a receiving location, and if the business is a
wholesale distributor, the goods may be finished products rather than raw materials or
components. The goods are then pulled from the stock areas and
moved to production facilities where they are made into finished goods. The finished
goods may be returned to stock areas where they are held prior to shipment, or they
may be shipped directly to customers.

Inventory management uses a variety of data to keep track of the goods as they move
through the process, including lot numbers, serial numbers, cost of goods, quantity of
goods and the dates when they move through the process.

Inventory Management

What it is:
Inventory management is the process of ensuring that a company always has the
products it needs on hand and that it keeps costs as low as possible.

How it works (Example):


Inventories are company assets that are intended for use in the production of goods or
services made for sale, are currently in the production process, or are finished products
held for sale in the ordinary course of business. Inventory also includes goods or services
that are on consignment (subject to return by a retailer) or in transit.

There are three types of inventory: raw materials, work-in-progress, and finished goods.
Given the significant costs and benefits associated with inventory, companies spend
considerable amounts of time calculating what the optimal level of inventory should be at
any given time. Because maximizing profits means minimizing inventory expenses,
several inventory-control models, such as the ABC inventory classification method, the
economic order quantity (EOQ) model, and just-in-time management are intended to
answer the question of how much to order or produce.
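One of the models named above, the economic order quantity (EOQ), answers "how much to order" with the classic formula Q* = sqrt(2DS/H), where D is annual demand, S is the cost per order, and H is the annual holding cost per unit. A minimal sketch, with purely illustrative figures:

```python
from math import sqrt

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    """Economic order quantity: Q* = sqrt(2 * D * S / H)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

# Illustrative: 10,000 units/year demand, $50 per order, $2/unit/year holding
print(round(eoq(10_000, 50, 2)))  # 707
```

Ordering about 707 units at a time balances ordering costs (which fall with larger orders) against holding costs (which rise with larger orders).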

Inventory management also means maintaining effective internal controls over inventory,


including safeguarding the inventory from damage or theft, using purchase orders to
track inventory movement, maintaining an inventory ledger, and frequently comparing
physical inventory counts with recorded amounts.

Common inventory accounting methods include "first in, first out" (FIFO), "last in, first out"
(LIFO), and lower of cost or market (LCM). Some industries, such as the retail industry,
tailor these methods to fit their specific circumstances. Public companies must disclose
their inventory accounting methods in the notes accompanying their financial statements.
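The FIFO/LIFO difference can be made concrete by computing cost of goods sold from the same purchase history under each method; this sketch uses hypothetical lot quantities and prices:

```python
def cogs(purchases, units_sold, method="FIFO"):
    """Cost of goods sold from a list of (quantity, unit_cost) purchase lots,
    consumed oldest-first (FIFO) or newest-first (LIFO)."""
    lots = list(purchases) if method == "FIFO" else list(reversed(purchases))
    cost, remaining = 0.0, units_sold
    for qty, unit_cost in lots:
        take = min(qty, remaining)       # draw from this lot until it runs out
        cost += take * unit_cost
        remaining -= take
        if remaining == 0:
            break
    return cost

purchases = [(100, 10.0), (100, 12.0)]   # oldest lot first; prices rose over time
print(cogs(purchases, 150, "FIFO"))      # 100*10 + 50*12 = 1600.0
print(cogs(purchases, 150, "LIFO"))      # 100*12 + 50*10 = 1700.0
```

With rising prices, LIFO reports higher cost of goods sold (and thus lower taxable income) than FIFO, which is one reason the choice of method must be disclosed.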

Inventory management makes its biggest mark on the inventory line item of the balance
sheet. That line item doesn't just reflect the cost of the inventory; it also reflects costs
directly or indirectly incurred in readying an item for sale, including not only the purchase
price of that item but the freight, receiving, unpacking, inspecting, storage, maintenance,
insurance, taxes, and other costs associated with it.

Introduction to Decision Trees :

A decision tree is a decision support tool that uses a tree-like graph or
model of decisions and their possible consequences, including chance
event outcomes, resource costs, and utility. It is one way to display an
algorithm that only contains conditional control statements.

A decision tree is a flowchart-like structure in which each internal node
represents a “test” on an attribute (e.g. whether a coin flip comes up
heads or tails), each branch represents the outcome of the test, and each
leaf node represents a class label (decision taken after computing all
attributes). The paths from root to leaf represent classification rules.
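Since every internal node is just a test, a small decision tree is literally a set of nested conditional statements. A toy sketch (the attributes, threshold, and class labels are invented for illustration):

```python
def classify(outlook, humidity):
    """Toy decision tree: each `if` is an internal-node test,
    each return value is a leaf-node class label."""
    if outlook == "sunny":        # root node: test the outlook attribute
        if humidity > 70:         # internal node: test the humidity attribute
            return "stay in"
        return "play"
    return "play"                 # overcast or rainy branch

print(classify("sunny", 80))   # "stay in"
print(classify("rainy", 50))   # "play"
```

Each root-to-leaf path (e.g. sunny and humidity > 70, therefore "stay in") is one classification rule.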

Tree-based learning algorithms are among the best and most widely used
supervised learning methods. Tree-based methods yield predictive models
with high accuracy, stability and ease of interpretation. Unlike linear
models, they map non-linear relationships quite well. They can be adapted
to either kind of problem at hand (classification or regression); decision
tree algorithms of this kind are referred to as CART (Classification and
Regression Trees).
“The possible solutions to a given problem emerge as the
leaves of a tree, each node representing a point of
deliberation and decision.”

- Niklaus Wirth (b. 1934), programming language designer

Methods like decision trees, random forest, gradient boosting are being
popularly used in all kinds of data science problems.
Common terms used with Decision trees:

A simulation is an approximate imitation of the operation of a process or system;[1] the act of simulating first
requires that a model be developed. This model is a well-defined description of the simulated subject, and
represents its key characteristics, such as its behaviour, functions and abstract or physical properties. The
model represents the system itself, whereas the simulation represents its operation over time.
Simulation is used in many contexts, such as simulation of technology for performance optimization, safety
engineering, testing, training, education, and video games. Often, computer experiments are used to study
simulation models. Simulation is also used with scientific modelling of natural systems or human systems to
gain insight into their functioning,[2] as in economics. Simulation can be used to show the eventual real effects of
alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged,
because it may not be accessible, or it may be dangerous or unacceptable to engage, or it is being designed
but not yet built, or it may simply not exist.[3]
Key issues in simulation include the acquisition of valid source information about the relevant selection of key
characteristics and behaviours, the use of simplifying approximations and assumptions within the simulation,
and fidelity and validity of the simulation outcomes. Procedures and protocols for model verification and
validation are an ongoing field of academic study, refinement, research and development in simulations
technology or practice, particularly in the field of computer simulation.

Simulation is used to model efficiently a wide variety of systems that are
important to managers. A simulation is basically an imitation, a model that
imitates a real-world process or system. In business and management,
decision makers are often concerned with the operating characteristics of a
system. One way to measure or assess the operating characteristics of a
system is to observe that system in actual operation. However, in many types
of situations the cost of direct observation can be very high. Furthermore,
changing some of the relationships or parameters within a system on an
experimental basis may mean waiting a considerable amount of time to
collect results on all the combinations that are of concern to the decision
maker.
In business and management, a simulation is a mathematical imitation of a
real-world system. The use of computers to conduct simulations is not
essential from a theoretical standpoint. However, most simulations are
sufficiently complex from a practical standpoint to require the use of
computers in running them. A simulation can also be considered to be an
experimental process. In a set of experimental runs, the decision maker
actively varies some of the parameters or relationships in the system. If the
mathematical model behind the simulation is valid, the results of the
simulation runs will imitate the results of the real system if it were to operate
over some period of time.
In order to better understand the fundamental issues of simulation, an
example is useful. Suppose a regional medical center seeks to provide air
ambulance service to trauma and burn victims over a wide geographic area.
Issues such as how many helicopters would be best and where to place them
would be in question. Other issues such as scheduling of flight crews and the
speed and payload of various types of helicopters could also be important.
These represent decision variables that are to a large degree under the
control of the medical center. There are uncontrollable variables in this
situation as well. Examples are the weather and the prevailing accident and
injury rates throughout the medical center's service region.
Given the random effects of accident frequencies and locations, the analysts
for the medical center would want to decide how many helicopters to acquire
and where to place them. Adding helicopters and flight crews until the budget
is spent is not necessarily the best course of action. Perhaps two strategically
placed helicopters would serve the region as efficiently as four helicopters of
some other type scattered haphazardly about. Analysts would be interested in
such things as operating costs, response times, and expected numbers of
patients who would be served. All of these operating characteristics would be
impacted by injury rates, weather, and any other uncontrollable factors as
well as by the variables they are able to control.
The medical center could run their air ambulance system on a trial-and-error
basis for many years before they had any reasonable idea what combinations
of resources would work well. Not only might they fail to find the best or near-
best combination of controllable variables, but also they might very possibly
incur an excessive loss of life as a result of poor resource allocation. For
these reasons, this decision-making situation would be an excellent candidate
for a simulation approach. Analysts could simulate having any number of
helicopters available. To the extent that their model is valid, they could
identify the optimal number to have to maximize service, and where they
could best be stationed in order to serve the population of seriously injured
people who would be distributed about the service region. The fact that
accidents can be predicted only statistically means that there would be a
strong random component to the service system and that simulation would
therefore be an attractive analytical tool in measuring the system's operating
characteristics.
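A drastically simplified Monte Carlo sketch of such a study: simulate random incident locations on a square service region and compare the average distance to the nearest base under two candidate placements. The coordinates, region size, and uniform-incident assumption are all invented for illustration:

```python
import random

def avg_response(bases, trials=10_000, seed=42):
    """Mean distance from a random incident (uniform on a 100x100 region)
    to the nearest helicopter base."""
    rng = random.Random(seed)  # fixed seed makes the experiment repeatable
    total = 0.0
    for _ in range(trials):
        x, y = rng.uniform(0, 100), rng.uniform(0, 100)
        # distance to the closest base for this simulated incident
        total += min(((x - bx) ** 2 + (y - by) ** 2) ** 0.5 for bx, by in bases)
    return total / trials

central = [(50, 50)]             # one base at the centre of the region
spread = [(25, 50), (75, 50)]    # two bases splitting the region
print(avg_response(spread) < avg_response(central))  # True: spread bases respond faster
```

Replacing the uniform incident distribution with historical accident data, and straight-line distance with flight time and crew availability, would move this sketch toward the medical center's real problem.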

What Is Queuing Theory?


Queuing theory is the mathematical study of the congestion and delays of waiting in line.
Queuing theory (or "queueing theory") examines every component of waiting in line to be
served, including the arrival process, service process, number of servers, number of
system places, and the number of customers—which might be people, data packets, cars,
etc.

As a branch of operations research, queuing theory can help users make informed
business decisions on how to build efficient and cost-effective workflow systems. Real-life
applications of queuing theory span many domains, such as how to provide
faster customer service, improve traffic flow, ship orders efficiently from a warehouse, and
design telecommunications systems, from data networks to call centers.

How Queuing Theory Works


Queues happen when resources are limited. In fact, queues make economic sense; no
queues would equate to costly overcapacity. Queuing theory helps in the design of
balanced systems that serve customers quickly and efficiently but do not cost too much to
be sustainable. All queuing systems are broken down into the entities queuing for an
activity.

At its most elementary level, queuing theory involves the analysis of arrivals at a facility,
such as a bank or fast food restaurant, then the service requirements of that facility, e.g.,
tellers or attendants.
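The simplest quantitative model of such a facility is the single-server M/M/1 queue, with Poisson arrivals at rate λ and exponential service at rate μ. Its steady-state formulas are standard; the arrival and service rates below are illustrative:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 results (requires arrival_rate < service_rate)."""
    rho = arrival_rate / service_rate          # server utilization
    L = rho / (1 - rho)                        # average number in the system
    W = 1 / (service_rate - arrival_rate)      # average time in the system
    Wq = rho / (service_rate - arrival_rate)   # average wait before service starts
    return {"utilization": rho, "L": L, "W": W, "Wq": Wq}

# Illustrative: 8 customers/hour arrive, a single teller serves 10/hour.
m = mm1_metrics(8, 10)
print(m["utilization"], round(m["L"], 3), m["W"])  # 0.8 4.0 0.5
```

Even at 80% utilization, an average of four customers are in the system and each spends half an hour there, which is why balanced capacity, not maximum utilization, is the design goal.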

The origin of queuing theory can be traced back to the early 1900s, found in a study of
the Copenhagen telephone exchange by Agner Krarup Erlang, a Danish engineer,
statistician, and mathematician. His work led to the Erlang theory of efficient networks and
the field of telephone network analysis.

Example of Queuing Theory


For example, a 2003 paper by Stanford School of Business professor Lawrence
Wein et al. used queuing theory to analyze the potential effects of a bioterrorism
attack on U.S. soil and proposed a system to reduce wait times for medications that
would decrease the number of deaths caused by such an attack. There are free
queuing theory calculators available, where a user can choose a specific queuing
model.

What Is Game Theory?

Game theory is a theoretical framework for conceiving social situations among competing
players. In some respects, game theory is the science of strategy, or at least the optimal
decision-making of independent and competing actors in a strategic setting. The key
pioneers of game theory were mathematicians John von Neumann and John Nash, as
well as economist Oskar Morgenstern.

It is assumed that players within the game are rational and will strive to maximize their
payoffs in the game.

The Basics of Game Theory

The focus of game theory is the game, which serves as a model of an interactive situation
among rational players. The key to game theory is that one player's payoff is contingent
on the strategy implemented by the other player. The game identifies the players'
identities, preferences, and available strategies and how these strategies affect the
outcome. Depending on the model, various other requirements or assumptions may be
necessary.

Game theory has a wide range of applications, including psychology, evolutionary
biology, war, politics, economics, and business. Despite its many advances, game theory
is still a young and developing science.
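The payoff-contingency idea can be made concrete with a 2x2 game. The sketch below uses the textbook prisoner's-dilemma payoffs and searches the payoff matrix for pure-strategy Nash equilibria, i.e. cells where neither player gains by deviating unilaterally:

```python
# payoffs[(row_strategy, col_strategy)] = (row player's payoff, column player's payoff)
payoffs = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}
strategies = ["cooperate", "defect"]

def nash_equilibria(payoffs, strategies):
    """Pure-strategy Nash equilibria: no player can improve by deviating alone."""
    eqs = []
    for r in strategies:
        for c in strategies:
            # row player cannot do better against c; column player cannot do better against r
            row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in strategies)
            col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in strategies)
            if row_ok and col_ok:
                eqs.append((r, c))
    return eqs

print(nash_equilibria(payoffs, strategies))  # [('defect', 'defect')]
```

Mutual cooperation gives both players a better payoff than mutual defection, yet only mutual defection survives the equilibrium check: each player's best choice depends on what the other does.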

Political science:

Making a decision in times of a domestic or international political conflict is all about
weighing the odds. The weightiest question politicians face is: how will this
affect my country in the future? Understanding the reactions, actions and decisions other
politicians will make in response to one's decision is what makes this game theory. You do
not know if the other politician is bluffing or serious. It's a judgement call. The best
example of game theory in politics is the Cuban Missile Crisis.

These are just a few examples of how such an intriguing theoretical concept
can be applied to world conflicts that in essence affect our daily lives. Just think, look
around you and you will discover so much.

Information theory studies the quantification, storage, and communication of information. It was originally
proposed by Claude Shannon in 1948 to find fundamental limits on signal processing and communication
operations such as data compression, in a landmark paper entitled "A Mathematical Theory of
Communication". Applications of fundamental topics of information theory include lossless data
compression (e.g. ZIP files), lossy data compression (e.g. MP3s and JPEGs), and channel coding (e.g.
for DSL). Its impact has been crucial to the success of the Voyager missions to deep space, the invention of
the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and
of human perception, the understanding of black holes, and numerous other fields.
A key measure in information theory is "entropy". Entropy quantifies the amount of uncertainty involved in the
value of a random variable or the outcome of a random process. For example, identifying the outcome of a
fair coin flip (with two equally likely outcomes) provides less information (lower entropy) than specifying the
outcome from a roll of a die (with six equally likely outcomes). Some other important measures in information
theory are mutual information, channel capacity, error exponents, and relative entropy.
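The coin-versus-die comparison can be checked directly from Shannon's formula H = -Σ p·log2(p):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over nonzero probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

coin = entropy([0.5, 0.5])   # fair coin: exactly 1 bit
die = entropy([1/6] * 6)     # fair die: log2(6) bits
print(coin, round(die, 3))   # 1.0 2.585
```

A fair die's outcome carries about 2.585 bits of information versus the coin's 1 bit, matching the intuition that an outcome drawn from more equally likely possibilities is more uncertain.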
The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, information
engineering, and electrical engineering. The theory has also found applications in other areas,
including statistical inference, natural language processing, cryptography, neurobiology,[1] human vision,[2] the
evolution[3] and function[4] of molecular codes (bioinformatics), model selection in statistics,[5] thermal
physics,[6] quantum computing, linguistics, plagiarism detection,[7] pattern recognition, and anomaly
detection.[8] Important sub-fields of information theory include source coding, channel coding, algorithmic
complexity theory, algorithmic information theory, information-theoretic security, Grey system theory and
measures of information.

Preference theory is a multidisciplinary (mainly sociological) theory developed by Catherine Hakim.[1][2] It seeks
both to explain and predict women's choices regarding investment in productive or reproductive work.[3]
Description
The theory sets out five socio-economic conditions which jointly create a new scenario:[4]

1. The contraceptive revolution gives women reliable control over their own fertility for the first time in
history.
2. The equal opportunities revolution gives women genuine access to all positions and occupations for the
first time in history.
3. The expansion of white-collar occupations, which are more attractive to women.
4. The creation of jobs for secondary earners, such as part-time jobs, working at home, teleworking, and
annual hours contracts.
5. The increasing importance of attitudes and values in affluent modern societies, which gives everyone
the freedom to choose their lifestyle.
Preference theory posits that in the rare countries that have fully achieved the new scenario for women (she
cites only Britain and the Netherlands), women have genuine choices as to how they resolve the conflict
between paid jobs and a major investment in family life. These choices fall into three main groups: women who
prioritise their careers and espouse achievement values (a work-centred lifestyle) and often remain childless by
choice (about 20%); women who prioritise family life and sharing values (a home-centred lifestyle) and often
have many children and little paid work (about 20%); and the majority of women who seek to combine paid jobs
and family work in some way without giving absolute priority to either activity or the accompanying values (the
adaptive lifestyle).[3]
