


3.1 Introduction to Operations Research

3.2 Estimating

3.3 Contingency

3.4 Milestones

3.5 Gantt Chart

3.6 Programme Evaluation and Review Technique (PERT)

3.7 Critical Path Method (CPM)

3.8 Linear Programming

3.9 Transportation Model, Assignment Models, Queuing Models: Single Channel and Multi-Channel Queuing Models

3.10 Simulation: Deterministic Simulation Models and Probabilistic Simulation Models


3.11 Dynamic Programming

Unit 3
Planning Tools and Techniques

Destiny is not a matter of chance; it is a matter of choice. It is not a thing to be
waited for; it is a thing to be achieved. - William Jennings Bryan

3.1 Introduction to Operations Research

Operations Research (OR), a term coined by McClosky and Trefthen in 1940, is a
technique that evolved during World War II to make effective use of limited military
resources and yet achieve the best possible results in military operations. In essence,
OR is a technique that helps achieve the best (optimum) results under a given set of
limited resources. Over the years, OR has been widely adopted in the manufacturing
sector for the optimization of resources, that is, to use minimum resources to achieve
maximum output, profit or revenue.
Operations Research is a science which deals with problem formulation, solution and,
finally, appropriate decision making. The subject is comparatively new; it took shape
after World War II, when the failure rate of missions was very high. Scientists and
technocrats formed teams to study the problems arising out of difficult situations and,
at a later stage, to find solutions to these problems. It is research designed to
determine the most efficient way to do something new.
OR is the use of mathematical models, statistics and algorithms to aid decision
making. It is most often used to analyze complex real-life problems, typically with the
goal of improving or optimizing performance. Decision making is the main activity of
an engineer or manager. Some decisions can be taken by common sense, sound
judgment and experience without using mathematics; in other cases this is not
possible and the use of OR techniques becomes inevitable.
With the growth of technology, the world has seen remarkable changes in the size
and complexity of organizations. An integral part of this has been the division of
labour and the segmentation of management responsibilities in these organizations.
The results have been remarkable, but increasing specialization has created new
problems in meeting organizational challenges. The allocation of limited resources to
various activities has gained significant importance in the competitive market. Such
problems need immediate attention, which is made possible by the application of OR
techniques. The tools of operations research are not drawn from any one discipline;
Mathematics, Statistics, Economics, Engineering, Psychology, etc. have all
contributed to this newer discipline of knowledge. In recent years the application of
OR techniques has achieved significance in all walks of life, be it industry or office
work, for making strategic decisions more scientifically. Today, OR has become a
professional discipline that deals with the application of scientific methods to
decision-making, and especially to the allocation of scarce resources.
Features of operations research
The significant features of operations research include the following:
(i) Decision-making. Every industrial organization faces multifaceted
problems and must identify the best possible solutions to them. OR aims to help
executives obtain optimal solutions with the use of OR techniques. It also helps the
decision maker improve his creative and judicious capabilities, and analyze and
understand the problem situation, leading to better control, better co-ordination, better
systems and finally better decisions.
(ii) Scientific Approach. OR applies scientific methods, techniques and tools
to the analysis and solution of complex problems. In this approach there is no place
for guesswork or the personal bias of the decision maker.
(iii) Inter-disciplinary Team Approach. Industrial problems are basically of a
complex nature and therefore require a team effort to handle them. This team
comprises scientists, mathematicians and technocrats, who jointly use the OR tools
to obtain an optimal solution to the problem. The team tries to analyze the
cause-and-effect relationship between the various parameters of the problem and
evaluates the outcomes of the various alternative strategies.
(iv) System Approach. The main aim of the system approach is to trace, for
each proposal, all significant direct and indirect effects on all sub-systems of a system
and to evaluate each action in terms of its effects on the system as a whole. The
interrelationship and interaction of each sub-system can be handled with the help of
mathematical/analytical models of OR to obtain an acceptable solution.
(v) Use of Computers. The models of OR need a great deal of computation
and, therefore, the use of computers becomes necessary. With the use of computers it
is possible to handle complex problems requiring a large amount of calculation.
The objective of operations research models is to locate the best or optimal solution
under the specified conditions. For this purpose, a measure of effectiveness must be
defined, based on the goals of the organization. This measure can then be used to
compare the alternative courses of action considered during the analysis.
Importance of operations research
The scope of OR is not confined to any specific agency such as the defence services;
today it is widely used in all industrial organizations. It can be used to find the best
solution to any problem, be it simple or complex. It is useful in every field of human
activity where optimization of resources is required. Thus, it attempts to resolve
conflicts of interest among the components of an organization in a way that is best for
the organization as a whole. The main fields where OR is extensively used are given
below; this list, however, is illustrative rather than exhaustive:
(i) National Planning and Budgeting - OR is used for the preparation of Five
Year Plans, annual budgets, forecasting of income and expenditure, scheduling of
major projects of national importance, and estimation of GNP, GDP, population,
employment and agricultural yields, etc.
(ii) Defence Services - The formulation of OR basically started in the US
armed forces, so it has wide application in areas such as: development of new
technology, optimization of cost and time, tender evaluation, siting and layout of
defence projects, threat assessment, battle strategy, effective maintenance and
replacement of equipment, inventory control, and transportation and supply depots.

(iii) Industrial Establishments and Private Sector Units - OR can be effectively
used in plant location and setting, finance planning, product and process planning,
facility planning and construction, production planning and control, purchasing,
maintenance management and personnel management, to name a few.
(iv) R & D and Engineering - Research and development being the heart of
technological growth, OR has wide scope here and can be applied in technology
forecasting and evaluation, technology and project management, preparation of
tenders and negotiation, value engineering, work/method study and so on.
(v) Business Management and Competition - OR can help in taking business
decisions under risk and uncertainty, capital investment and returns, business strategy
formation, optimum advertisement outlay, optimum sales force and their distribution,
market survey and analysis and market research techniques etc.
(vi) Agriculture and Irrigation - In the area of agriculture and irrigation also
OR can be useful for project management, construction of major dams at minimum
cost, optimum allocation of supply and collection points for fertilizer/seeds and
agriculture outputs and optimum mix of fertilizers for better yield.
(vii) Education and Training - OR can be used for obtaining the optimum
number of schools and their locations, the optimum student-teacher ratio, the
optimum financial outlay and other relevant information in training graduates to
meet national requirements.
(viii) Transportation - Transportation models of OR can be applied to real life
problems to forecast public transport requirements, optimum routing, forecasting of
income and expenses, project management for railways, railway network distribution,
etc. In the same way it can be useful in the field of communication.
(ix) Home Management and Budgeting - OR can be effectively used for
control of expenses to maximize savings, time management, and work study methods
for all related work, as well as investment of surplus budget, appropriate insurance of
life and property, and estimation of depreciation and optimum insurance premiums.

3.2 Estimation in operations research

In general, the solution to an operations research problem involves the following stages.

1. Formulating the problem

2. Constructing a mathematical model based on the formulation
3. Solving the problem based on the model.
4. Checking whether the solution is optimal and feasible
5. If not, iterate until an optimal and feasible solution is reached.

We have two new terms, "optimal" and "feasible". Optimal means the best possible
solution under the given conditions, and feasible means a practical solution. An
optimal and feasible solution, therefore, is one that is both practical to implement and
the best one under the given conditions. All types of problems in operations research
can be categorized as either MINIMIZING or MAXIMIZING type. We focus on
MINIMIZING costs, time and distances, while we are interested in MAXIMIZING
revenue, profits and returns. So you must be very careful in identifying the type of
problem while deciding upon the choice of algorithm to solve it.
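The MAXIMIZING case can be made concrete with a small sketch. The product mix, profit coefficients and resource limits below are illustrative assumptions, and the brute-force search over feasible plans stands in for the proper algorithms (such as the simplex method of linear programming) covered later in this unit.

```python
# Maximize profit 3x + 5y, where x and y are units of two products,
# subject to x <= 4, y <= 6 and 3x + 2y <= 18 (shared machine hours).
# All figures are assumed for illustration only.
best = None
for x in range(0, 5):                    # candidate units of product 1
    for y in range(0, 7):                # candidate units of product 2
        if 3 * x + 2 * y <= 18:          # feasible: satisfies the shared limit
            profit = 3 * x + 5 * y
            if best is None or profit > best[0]:
                best = (profit, x, y)    # best (optimal) feasible plan so far

print(best)  # (36, 2, 6): make 2 of product 1 and 6 of product 2
```

Every plan the search accepts is feasible; the one it keeps is also optimal, which is exactly the pairing of the two terms described above.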

Project Management Estimating Tools and Techniques

Estimating the effort, time, and resources needed to complete project activities is one
of the most challenging tasks that project managers must face, because of the inherent
uncertainty associated with many activities. Projects are unique; that is one of the
differences between projects and processes, and this uniqueness often creates
uncertainty. The activity may be unique to the project, it may be accomplished by a
resource that is not a practiced expert, or its interaction with other project activities
may be unique in this project. All of these can create problems when estimating
effort, time or resources.

Uncertainty in one aspect of an estimate leads to uncertainty in the other aspects. If
the effort needed to complete the scope is uncertain - for instance the number of hours
of work needed to complete an analysis - the time and resources needed will be
uncertain. If the timing of when an activity starts or ends is uncertain, the resource
availability and amount of effort required may change. If the resource assigned to an
activity is uncertain, the number of hours required to complete the activity and the
timing of the availability of the resource will be uncertain.
However, the good news is that not all project activities are uncertain. In many cases,
the activity is one that is well defined and the organization routinely accomplishes it.
When possible a project team is formed so that an expert is doing the work and the
availability of the expert is predictable. In those cases an accurate estimate can be
quickly generated.
We will discuss three types of activities and what type of estimating approach should
be used with each of them. Those are the Stable Activities, the Dependent Activities,
and the Uncertain Activities. Of course there is a fourth category which is the
unknown activity. These can't be estimated but must be accounted for in the project
reserves. A Traditional or Discovery project often will have a small reserve (or
possibly none at all) - at least for that portion of the project that is approved. Whereas
an Adaptive or Extreme project may need a large reserve. Also, as complexity
increases typically the level of reserve increases since there is a greater possibility of
unrecognized activities.
The techniques covered will include Analogous, Parametric Modeling, 3 Point
Estimate, Expert Judgment, Published Data Estimates, Vendor Bid Analysis, Reserve
Analysis, Bottom up Analysis, and Simulation. Finally, we will consider how to
estimate a project when the key boundary condition is the End Date or the Total Cost
of the project and the effort must be tailored to fit that constraint.

Stable Activities

Stable Activities are those that are well understood and predictable. For activities in
this category, the estimating is usually straightforward. One will typically use

analogous, expert judgment, a parametric model, or published estimating data for
these types of activities. Based upon the information available to the project team
members, use the appropriate technique and set the estimate.

Dependent Activities

Dependent Activities are those activities where the time or effort is highly dependent
upon some project attribute or characteristic that is not yet known or knowable at the
time the original estimate is furnished. For instance, the amount of time needed to
complete testing will depend upon whether the test is successful on the first try or
whether a retest is required. For these types of activities, an assumption is made that
will drive the estimated effort, time and resources. This assumption is a risk and
should be tracked on the Risk Register. If the assumption is incorrect, the time or
money required to do the activity may be very different from the estimate. If a
conservative estimate is used, this is a positive risk. If an aggressive estimate is used,
this is a negative risk.

Uncertain Activities

Uncertain Activities are the most difficult to estimate. There is often very little data to
support a precise estimate. In addition, there are many factors that could affect the
estimate, so one can't just make one assumption and track it in the risk register. An
example of an Uncertain Activity is a requirements definition task on a Complex
project. There are numerous stakeholders who have different opinions of what is
needed. Getting all of them to agree on the requirements will be an iterative process
with the number of iterations being completely unpredictable. Yet if this task is not
done well, there are likely to be major problems later in the project getting the
stakeholders to agree that the project deliverables have been met. Uncertain Activities
typically are listed in the Risk Register since the timing and cost are impossible to
estimate accurately.

Analogous Estimating

Analogous Estimating is one of the most common forms of estimating project

activities. This technique uses the experience from previous projects and extrapolates
that onto the current project. This technique is appropriate for those cases where the
type of work is similar and the resources doing the work are the same between
projects. Its advantage is that it is quick and, when the conditions are appropriate, it is
usually fairly accurate. The disadvantage is that the organization must have similar
projects for comparison.

Parametric Model Estimating

Parametric Model Estimating is a very accurate and easy estimating technique. A

formula is developed for estimating the time or resources needed to perform a project
activity. The formula is usually based upon a great deal of historical experience. A
PMO will often develop the parametric model based upon having done lessons
learned on many projects. A classic example from construction projects is the
parametric model for estimating resources and time based upon the number of square
feet of new construction. The advantage of parametric model estimating is that it is
quick and accurate. The disadvantage is that models don't exist for an activity until
there is a large experience base for that activity.
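In sketch form, a parametric model of this kind reduces to scaling per-unit historical rates by the size of the new job; the function and rates below are illustrative assumptions, not published construction figures.

```python
def parametric_estimate(square_feet, hours_per_sqft=0.08, cost_per_sqft=150):
    """Scale assumed historical per-unit rates by the size of the new job."""
    return {
        "labour_hours": square_feet * hours_per_sqft,
        "cost": square_feet * cost_per_sqft,
    }

estimate = parametric_estimate(10_000)   # a 10,000 sq ft job
# estimate["labour_hours"] -> 800.0, estimate["cost"] -> 1_500_000
```

A PMO would calibrate the two rate parameters from lessons learned on many completed projects before trusting the output.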

3 Point Estimating

The 3 Point Estimating technique is used to understand the level of uncertainty

embedded within an estimate. In this technique three estimates are generated for the
project activity using three different sets of assumptions. The first estimate is a best
case or optimistic estimate. The second estimate is a worst case or pessimistic
estimate. The third estimate is between the other two and is the most likely estimate.
The way those estimates are developed is by using one of the other techniques such as

Analogous or Parametric Model. However, because of the high degree of uncertainty
due to the risk assumptions, the three estimates are used to create a boundary on
expectations for the activity. A variation on this technique, the PERT analysis, uses a
weighted average of these estimates to create a PERT estimate. When using this
approach, the most likely estimate is normally what is put in the project plan but the
optimistic and pessimistic estimates are used during the reserve analysis. Also, an
activity that has a great deal of difference between the optimistic and pessimistic
estimates is an uncertain activity and should be tracked in the Risk Register. The
advantage of this technique is that it provides boundaries on expectations. The
disadvantages are that it takes more work - since three estimates must be created not
one - and the most likely is still very much a guess - the actual could be significantly
better or worse.
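The PERT weighted average mentioned above can be sketched as follows. The activity figures are illustrative; the formula weights the most likely estimate four times as heavily as each extreme, and (P - O)/6 is the usual approximation for the spread.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """PERT (beta approximation): weighted average of the three estimates."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6   # spread between the extremes
    return mean, std_dev

mean, sd = pert_estimate(4, 6, 14)   # days: best 4, most likely 6, worst 14
# mean = (4 + 24 + 14) / 6 = 7.0 days
```

A large standard deviation relative to the mean is exactly the signal, noted above, that the activity is uncertain and belongs in the Risk Register.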

Expert Judgment Estimating

Expert Judgment estimating is easy to do - provided you have an expert on the

project. This technique looks to the expert to create an estimate based upon their
understanding of the project requirements. Many, if not most, project estimates are
created in this fashion. The advantage of this is that it is quick and if the expert is
knowledgeable, it is often the most accurate estimate for uncertain activities. The
disadvantages are that you may not have an expert available and even if you do, the
expert often can provide no solid rationale for their estimate beyond, "That's what I
think it will take to do this."

Published Data Estimating

Published Data Estimating is an excellent technique for those activities for which
there is published data. In this technique, the activity is compared to the activities for
which data exists and the actual cost or durations of the closest comparable activity is
selected from the data and used as the estimate. The advantage of this technique is
that it is very accurate when the project conditions match the conditions under which
the published data was generated. The disadvantages are that data does not exist for
many activities and that the published data that does exist is based upon the
characteristics of the organizations that compiled and published the data - which may
not correspond with your organization's characteristics.

Vendor Bid Analysis

The Vendor Bid Analysis is a technique used when working with suppliers on
uncertain activities. The analysis considers the assumptions the vendor worked with
and does a sensitivity assessment on those assumptions. In addition, for effort that the
buying organization does not have experience with, they can contract with a
consulting firm that has experience to do a "Should Cost" analysis. This "Should
Cost" estimate is compared to the suppliers quote to identify any shortcomings. The
advantage of this technique is that it exposes supplier risk that can be accounted for in
the reserve analysis and it increases the confidence in the supplier's approach. The
disadvantages are that this can take a fair amount of time and if a consultant is used to
create a "Should Cost" it adds to the cost of the project.

Reserve Analysis

The Reserve Analysis is a fundamental technique for estimating. This technique
considers the level of uncertainty and risk in the project and establishes a reserve pool
of time, resources, or possibly performance that can be drawn upon to offset the
unestimated issues that arise.

Bottom up Analysis

A Bottom Up analysis is a technique to improve the accuracy of the overall project

estimate. This technique requires the project team to decompose the work into very
small work packages. Generally, the smaller the project activity, the easier it is to
estimate because the work scope is very small. All of these estimates of small
activities are added up into subgroups and finally into the project total. The advantage
of this technique is that the estimate is usually more accurate since the work is better
understood. The disadvantage of this technique is that it is very time consuming, and
it may be impossible to decompose activities that cannot be easily defined.
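The rollup itself is simple arithmetic once the decomposition is done; the work packages and hour figures below are illustrative assumptions.

```python
# Decomposed work packages with estimated hours, grouped by subsystem
# (names and figures are assumed for illustration).
wbs = {
    "design": {"wireframes": 16, "design review": 4},
    "build":  {"coding": 80, "unit tests": 24},
    "deploy": {"packaging": 8, "handover": 6},
}

# Roll the small estimates up into subgroup totals, then a project total.
subtotals = {group: sum(tasks.values()) for group, tasks in wbs.items()}
project_total = sum(subtotals.values())
# subtotals: design 20, build 104, deploy 14; project_total: 138 hours
```

The accuracy gain comes from the small, well-understood packages, not from the summation; the summation only aggregates it.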

Project Simulation

A Project Simulation is a way of combining the uncertainty in 3 Point Estimates and

Reserve Analysis to understand the likely project outcomes. This requires that the
project be entered into a project management software application, being careful to
identify all relationships between activities. For those activities that have uncertainty,
the degree of uncertainty must be modeled and entered into the project management
software also. The method for doing this will vary based upon the software being
used. A simulation add-on is then used to run the software through a Monte Carlo
routine. The result will be a distribution of project time lines and costs. Based upon
the organization's risk sensitivity, the overall project time line and budget can be set.
The advantage of this approach is that it provides a global perspective on overall time
line and cost uncertainty. The disadvantages are that it can take months to do this on a
large project and the resulting estimates are only as good as the assumptions that are
allowed by the software.
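A stripped-down version of the Monte Carlo routine can be sketched with the standard library alone. The three activities, given as optimistic/most likely/pessimistic durations in days, are illustrative assumptions, and they are assumed to run in series, which sidesteps the network relationships a real project management tool would model.

```python
import random

# 3-point estimates (optimistic, most likely, pessimistic) in days.
activities = [(4, 6, 14), (10, 12, 20), (2, 3, 5)]

random.seed(1)  # fixed seed for a reproducible run
trials = sorted(
    # random.triangular(low, high, mode) draws one duration per activity.
    sum(random.triangular(o, p, m) for o, m, p in activities)
    for _ in range(10_000)
)
p50, p90 = trials[5_000], trials[9_000]   # median and 90th-percentile outcomes
```

The sorted trials form the distribution of project time lines described above; the organization's risk sensitivity decides which percentile becomes the committed schedule.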

Estimating Based Upon Project End Date

In some cases, the project end date is set even before the scope and deliverables are
defined. In those cases, a high-level time line is created starting from the end date and
going backward to the present time. Given the amount of time allocated for the major
activities, the project team considers the needed deliverables and available resources
during the time period. Essentially, the schedule side of the triangle is fixed and the
scope and resource sides are varied so as to create a viable project. Often this will
require an iterative estimating approach. Once the high level plan is established,
estimates for the activities are developed and then iterations are done varying
resources and scope until a viable estimate can be created. The Risk Register will be
dominated by schedule risk items. Sometimes, an estimate cannot be created. In those
cases, the project should not even be initiated, since it is doomed.

Estimating Based Upon Project Total Cost

In some cases, the project total cost is set even before the scope and schedule are
defined. In those cases, a high-level allocation of the budget is created between the
likely project deliverables. Each major activity is then estimated and if the estimate is
greater than the allocated cost, the timing of resources or scope and deliverables are
varied until the project is able to meet the budget goals. This is often an iterative
process that may take many iterations to complete.

3.3 Contingency

When estimating the cost for a project, product or other item or investment, there is
always uncertainty as to the precise content of all items in the estimate, how work
will be performed, what work conditions will be like when the project is executed and
so on. These uncertainties are risks to the project. Some refer to these risks as
"known-unknowns" because the estimator is aware of them, and based on past

experience, can even estimate their probable costs. The estimated cost of the known-
unknowns is referred to by cost estimators as cost contingency. Contingency "refers
to costs that will probably occur based on past experience, but with some uncertainty
regarding the amount. The term is not used as a catchall to cover ignorance. It is poor
engineering and poor philosophy to make second-rate estimates and then try to satisfy
them by using a large contingency account. The contingency allowance is designed to
cover items of cost which are not known exactly at the time of the estimate but which
will occur on a statistical basis."

The cost contingency which is included in a cost estimate, bid, or budget may be
classified according to its general purpose, that is, what it is intended to provide for. For a class
1 construction cost estimate, usually needed for a bid estimate, the contingency may
be classified as an estimating and contracting contingency. This is intended to provide
compensation for "estimating accuracy based on quantities assumed or measured,
unanticipated market conditions, scheduling delays and acceleration issues, lack of
bidding competition, subcontractor defaults, and interfacing omissions between
various work categories." Additional classifications of contingency may be included
at various stages of a project's life, including design contingency, or design definition
contingency, or design growth contingency, and change order contingency.

AACE International, the Association for the Advancement of Cost Engineering, has
defined contingency as "An amount added to an estimate to allow for items,
conditions, or events for which the state, occurrence, or effect is uncertain and that
experience shows will likely result, in aggregate, in additional costs. Typically
estimated using statistical analysis or judgment based on past asset or project
experience. Contingency usually excludes:

1) Major scope changes such as changes in end product specification,

capacities, building sizes, and location of the asset or project
2) Extraordinary events such as major strikes and natural disasters
3) Management reserves

4) Escalation and currency effects
Some of the items, conditions, or events for which the state, occurrence, and/or effect
is uncertain include, but are not limited to, planning and estimating errors and
omissions, minor price fluctuations, design developments and changes within the
scope, and variations in market and environmental conditions. Contingency is
generally included in most estimates, and is expected to be expended". A key phrase
above is that it is "expected to be expended". In other words, it is an item in an
estimate like any other, and should be estimated and included in every estimate and
every budget. Because management often thinks contingency money is "fat" that is
not needed if a project team does its job well, it is a controversial topic.
In general, there are four classes of methods used to estimate contingency. These
include the following:

1) Expert judgment
2) Predetermined guidelines (with varying degrees of judgment and
empiricism used)
3) Simulation analysis (primarily risk analysis judgment incorporated in a
simulation such as Monte-Carlo)
4) Parametric Modeling (empirically-based algorithm, usually derived through
regression analysis, with varying degrees of judgment used).

While all are valid methods, the method chosen should be consistent with the first
principles of risk management in that the method must start with risk identification,
and only then are the probable cost of those risks quantified. In best practice, the
quantification will be probabilistic in nature (Monte-Carlo is a common method used
for quantification).

Typically, the method results in a distribution of possible cost outcomes for the
project, product, or other investment. From this distribution, a cost value can be
selected that has the desired probability of a cost underrun or overrun. Usually a
value is selected with an equal chance of overrunning or underrunning. The

difference between the cost estimate without contingency and the selected cost from
the distribution is contingency. Contingency is included in budgets as a control
account. As risks occur on a project, and money is needed to pay for them, the
contingency can be transferred to the appropriate accounts that need it. The transfer
and its reason are recorded. In risk management, risks are continually reassessed during
the course of a project, as are the needs for cost contingency.
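That selection step can be sketched as follows. The base estimate, the use of a normal risk distribution, and its parameters are illustrative assumptions standing in for a real risk-quantification model such as a full Monte-Carlo analysis.

```python
import random

base_estimate = 100_000          # cost estimate without contingency (assumed)
random.seed(7)                   # fixed seed for a reproducible run

# Simulated total-cost outcomes once identified risks are priced in
# (normal distribution with assumed mean 8,000 and sigma 5,000).
outcomes = sorted(
    base_estimate + random.gauss(8_000, 5_000) for _ in range(10_000)
)

p50 = outcomes[len(outcomes) // 2]    # equal chance of overrun and underrun
contingency = p50 - base_estimate     # held in the budget as a control account
```

Choosing the 50th percentile mirrors the "equal chance of over or under running" convention described above; a more risk-averse organization would select a higher percentile and carry a larger contingency.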

3.4 Milestones

A Milestone is a reference point that marks a major event in a project and is used to
monitor the project's progress. The milestones for a project should present a clear
sequence of events that will incrementally build up to the completion of the approved
project. As you complete each milestone, you can update the status from the
Milestones tab of your project.

In addition to signaling the completion of a key deliverable, a milestone may also

signify an important decision or the derivation of a critical piece of information,
which outlines or affects the future of a project. In this sense, a milestone not only
signifies distance traveled (key stages in a project) but also indicates direction of
travel, since key decisions made at milestones may alter the route through the project.

For example, in military acquisition or procurement, the United States created

specific terms for the point at which approval is made regarding starting or continuing
an acquisition to the next phase. Milestones established by DOD Instruction 5000.2
are 'Milestone A' for Technology Development, 'Milestone B' for System
Development and Demonstration, and 'Milestone C' for Production and Deployment.
Program schedules would also have milestones (lower case) reflecting major events
in the system development life cycle (such as System Requirements Review), key
items (such as documents needed for a Request for Proposal), items of external
approval, and project-specific points of accomplishment.

3.5 Gantt chart

The basic purpose of a Gantt chart is to break a large project into a series of smaller
tasks in an organized way. The chart shows when each task should begin and how
long it should take. The left-most column lists each of the tasks in chronological order
according to their start time. The remaining columns show the timeline (often shown
in weeks, but use whatever units are convenient for your project). For each row, a
task is listed and a line is drawn through the timeline for the weeks during which that
task will be addressed.
Following is a simple example of what a Gantt chart looks like. In this chart, a rough
outline is given of the tasks to be accomplished up to the first design review in October.

This is not given to show you how you should organize your own team's time so much
as to provide a sample Gantt chart for illustrative purposes. Notice how some tasks
take longer than others, so some weeks have more than one associated task. For
example, on the time period beginning September 18th, the team will be continuing
the information gathering started a week prior and begin shopping around for a
product for the reverse engineering exercise on October 2.

Steps in preparation of Gantt chart

• Information Gathering: user interviews, expert interview, personal
experience, patent search, market research, reverse engineering

• Problem Definition: problem statement, characteristics of problem,
characteristics of solution
• Divergence: brainstorming, morphological analysis, functional decomposition
• Transition: Pugh chart, QFD, sketch models, user feedback
• Convergence: analysis, detailed configuration, optimization, user feedback
• Prototyping: build rough prototype, test rough prototype, plan the build,
collect materials, start machining, physical testing, user testing
• Documentation: proposal, progress report, final report, presentations
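A chart of this kind can even be sketched as plain text: each row lists a task and marks the weeks during which it is addressed. The task names and week spans below are illustrative assumptions, not the schedule from the example above.

```python
# (task name, start week, end week) -- spans are illustrative assumptions.
tasks = [
    ("Information Gathering", 1, 3),
    ("Problem Definition",    3, 4),
    ("Divergence",            4, 6),
    ("Convergence",           6, 8),
]

weeks = max(end for _, _, end in tasks)
rows = []
for name, start, end in tasks:
    # '#' marks the weeks in which the task is worked on, '.' the rest.
    bar = "".join("#" if start <= w <= end else "."
                  for w in range(1, weeks + 1))
    rows.append(f"{name:<22} {bar}")

print("\n".join(rows))
```

Note how the bars overlap in weeks 3, 4 and 6: as in the sample chart, some weeks have more than one associated task.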

3.6 Programme Evaluation and Review Technique (PERT)
Network Analysis

Routing is the first step in production planning. In small projects, routing is very
simple. Sequence of operations is almost decided and the operations can be
performed one after the other in a given sequence. But in a large project, this is rather
a difficult problem. There may be more than one route to complete a job. The
function of production manager is to find out the path which takes the least time in
completing the project.
In a big project, many activities are performed simultaneously. There are many
activities which can be started only at the completion of other activities. In such
cases a thorough study is required to collect complete details about the project and
then to find out a new, better and quicker way to get the work done.
In such cases, the first step is to draw some suitable diagram showing various
activities and their positions in the project. It should also explain the time to be taken
in completing the route from one operation to the other. It also defines the way in
which the delay in any activity can affect the entire project in terms of both money
and time. Such a diagram is called network diagram. In the words of James L.
Riggs, ‘A network is a picture of a project, a map of requirements tracing the work
from a departure point to the final completion objective. It can be a collection of all
the minute details involved or only a gross outline of general functions.’

Important Characteristics in a Network Analysis

The following are some important points to remember in a network analysis:

(i) The project is to be finished within the specified time, otherwise
there is a penalty.
(ii) Various activities are to be completed in an order; however, a
number of activities can be performed simultaneously, while there are
many other activities which can be started only when some other
activities are completed.
(iii) The cost of any activity is proportional to its time of completion.
(iv) There can be hurdles in the process and the resources to be allocated
may be limited.

A network graph consists of a number of points or nodes, each of which is
connected to one or more of the other nodes by routes or edges. It is a set
of operations and activities describing the time orientation of a composite
project.

Concept of Slack and Floats in Network Analysis

Slack signifies the freedom for rescheduling or to start the job. It can be calculated
as the difference between the EFT and LFT for any job. A job for which the slack time
is zero is known as a critical job. The critical path can be located by all those activities
or events for which the slack time is zero or the float time is the least. The
abbreviations EFT and LFT used above have the following meanings:
EFT (Earliest Finish Time). This is the sum of the earliest start time plus the
duration of the activity.
LFT (Latest Finish Time). It is calculated backwards from the LFT of the head event.
For its calculation the total project time is required. The total project time is the
shortest possible time required to complete the project.
Floats. Floats in the network analysis represent the difference between the maximum
time available to finish the activity and the time required to complete it. There are so
many activities where the maximum time available to finish the activity is more than
the total time required to complete it. This difference is known as floats.

Floats may be total, free, and independent:

A) Total Float. Total float is the maximum amount by which duration time of an
activity can be increased without increasing the total duration time of the project.
Total float can be calculated as follows:
(i) First, compute the difference between the Earliest Start Time (EST)
of the tail event and the Latest Finish Time (LFT) of the head event
for the activity.
(ii) Then, subtract the duration time of the activity from the value
obtained in (i) above to get the total float for the activity.
The total float can be helpful in drawing the following conclusions:
(a) If total float value is negative, it denotes that the resources for
completing the activity are not adequate and the activity, therefore,
cannot finish in time. So, extra resources or say critical path needs
crashing in order to reduce the negative float.
(b) If the total float value is zero, it means the resources are just sufficient to
complete the activity without any delay.
(c) If the total float value is positive, it points out that the total resources
are in excess of the amount required, and the surplus resources may be reallocated
elsewhere without delaying the project.

(B) Free Float.
It is that fraction of the total float of an activity which can be used for rescheduling
the activity without affecting the succeeding activity. If both tail and head events are
given their earliest times, i.e., EST and EFT, the free float can be calculated by
deducting the head slack from the total float, i.e.,
Free Float = Total Float – Slack time of the head event.

(C) Independent Float.

It is the time by which an activity can be rescheduled without affecting the other
activities – preceding or succeeding. It may be calculated as follows:
Independent Float = Free Float – Slack time of the tail event.
The basic difference between slack and float time is that a slack is used with
reference to events whereas float is used with reference to activity.
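The forward and backward passes behind these event times and floats can be sketched as follows. The four-event activity-on-arrow network and its durations are hypothetical, and the events are numbered so that every tail event precedes its head event:

```python
# Hypothetical activity-on-arrow network: (tail event, head event) -> duration.
activities = {(1, 2): 3, (1, 3): 4, (2, 4): 2, (3, 4): 5}
events = sorted({e for pair in activities for e in pair})

# Forward pass: earliest event times.
E = {events[0]: 0}
for ev in events[1:]:
    E[ev] = max(E[t] + d for (t, h), d in activities.items() if h == ev)

# Backward pass: latest event times, starting from the total project time.
L = {events[-1]: E[events[-1]]}
for ev in reversed(events[:-1]):
    L[ev] = min(L[h] - d for (t, h), d in activities.items() if t == ev)

# Total, free and independent float for every activity, as defined above.
floats = {}
for (t, h), d in activities.items():
    total = L[h] - E[t] - d
    free = E[h] - E[t] - d
    independent = max(E[h] - L[t] - d, 0)
    floats[(t, h)] = (total, free, independent)
```

In this network the activities (1, 3) and (3, 4) come out with zero total float, which by the definition above makes them the critical activities.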

Use of Float Information in Decision Making

The float information can be used in decision-making in the following ways:

(i) Total float can affect both the preceding and the subsequent activities.
(ii) Free float can be used without affecting the subsequent activities.
(iii) Independent float can be used in allocating the resources elsewhere
and in increasing the time of some non-critical activities.
(iv) Negative float signifies a reduction in the target time in order to
finish the work in time.


One of the main features of PERT and related techniques is their use of a network or
precedence diagram to depict major project activities and their sequential
relationships. There are two slightly different conventions for constructing these

network diagrams. Under one convention, the arrows designate activities; under the
other convention, the nodes designate activities. These conventions are referred to as
activity-on-arrow (AOA) and activity-on-node (AON). Activities consume
resources and/or time. The nodes in the AOA approach represent the activities'
starting and finishing points, which are called events. Events are points in time.
Unlike activities, they consume neither resources nor time. The nodes in an AON
diagram represent activities.
In the AOA diagram, the arrows represent activities and they show the sequence in
which certain activities must be performed (e.g., Interview precedes Hire and Train);
in the AON diagram, the arrows show only the sequence in which certain activities
must be performed while the nodes represent the activities. Activities in AOA
networks can be referred to in either of two ways. One is by their endpoints (e.g.,
activity 2-4) and the other is by a letter assigned to an arrow (e.g., activity c). Both
methods are illustrated in this chapter. Activities in AON networks are referred to by
a letter (or number) assigned to a node. Although these two approaches are slightly
different, they both show sequential relationships – something Gantt charts don’t.
Note that the AON diagram has a starting node, S, which is actually not an activity
but is added in order to have a single starting node.

Despite these differences, the two conventions are remarkably similar, so you should
not encounter much difficulty in understanding either one. In fact, there are
convincing arguments for having some familiarity with both approaches. Perhaps the
most compelling is that both approaches are widely used. However, any particular
organization would typically use only one approach, and employees would have to
work with that approach. Moreover, a contractor doing work for the organization
may be using the other approach, so employees of the organization who deal with the
contractor on project matters would benefit from knowledge of the other approach.

A path is a sequence of activities that leads from the starting node to the ending node.
For example, in the AOA diagram, the sequence 1-2-4-5-6 is a path. In the AON
diagram, S-1-2-6-7 is a path. Note that in both diagrams there are three paths. One

reason for the importance of paths is that they reveal sequential relationships. The
importance of sequential relationships cannot be overstated: if one activity in a
sequence is delayed (i.e., late) or done incorrectly, the start of all following activities
on that path will be delayed.

Another important aspect of paths is the length of a path: How long will a particular
sequence of activities take to complete? The length (of time) for any path can be
determined by summing the expected times of the activities on that path. The path
with the longest time is of particular interest because it governs project completion
time. In other words, expected project duration equals the expected time of the
longest path. Moreover, if there are any delays along the longest path, there will be
corresponding delays in project completion time. Attempts to shorten project
completion must focus on the longest sequence of activities. Because of its influence
on project completion time, the longest path is referred to as the critical path, and its
activities are referred to as critical activities.

Paths that are shorter than the critical path can experience some delays and still not
affect the overall project completion time as long as the ultimate path time does not
exceed the length of the critical path. The allowable slippage for any path is called
slack, and it reflects the difference between the length of a given path and the length
of the critical path. The critical path, then, has zero slack time.
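The path lengths and slack described above can be computed by brute-force enumeration. The five-activity activity-on-node network below, with its start node S and hypothetical durations, is only an illustration:

```python
# Hypothetical AON network: durations per activity, successors per node.
durations = {"a": 4, "b": 2, "c": 5, "d": 3, "e": 6}
successors = {"S": ["a", "b"], "a": ["c"], "b": ["d"],
              "c": ["e"], "d": ["e"], "e": []}

def all_paths(node, path=()):
    """Enumerate every path from `node` to a node with no successors."""
    path = path + (node,)
    if not successors[node]:
        return [path]
    return [p for nxt in successors[node] for p in all_paths(nxt, path)]

def path_length(path):
    # The start node S is not an activity, so it contributes no time.
    return sum(durations.get(n, 0) for n in path)

paths = all_paths("S")
critical = max(paths, key=path_length)        # longest path = critical path
slack = {p: path_length(critical) - path_length(p) for p in paths}
```

The longest path governs project completion, and the slack of every other path is the amount it can slip without delaying the project; the critical path itself has zero slack.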

3.7 Critical Path Method (CPM)

The critical path analysis is an important tool in production planning and scheduling.
Gantt charts are also one of the tools of scheduling but they have one disadvantage
for which they are found to be unsuitable. The problem with the Gantt chart is that
the sequence of operations of a project, or the earliest possible date for the
completion of the project as a whole, cannot be ascertained. This problem is
overcome by the method of Critical Path Analysis.

CPM is used for scheduling special projects where the relationship between the
different parts of the project is more complicated than that of a simple chain of tasks
to be completed one after the other. This method (CPM) can be used at one extreme
for the very simple job and at the other extreme for the most complicated tasks.

A CPM is a route between two or more operations which minimizes (or maximizes)
some measures of performance. This can also be defined as the sequence of activities
which will require the greatest normal time to accomplish. It means that the sequence
of activities which requires the longest duration is singled out. It is called a critical path
because any delay in performing the activities on this path may cause delay in the
whole project. So, such critical activities should be taken up first.

According to John L. Burbidge, “One of the purposes of critical path analysis is to
find the sequence of activities with the largest sum of duration times, and thus find
the minimum time necessary to complete the project. The critical series of activities
is known as the ‘Critical Path’.”

Under CPM, the project is analyzed into different operations or activities and their
relationships are determined and shown on the network diagram. So, first of all a
network diagram is drawn. After this, the required time or some other measure of
performance is posted above and to the left of each operation circle. These times are
then combined to develop a schedule which minimizes or maximizes the measure of
performance for each operation. Thus CPM marks the critical activities in a project
and concentrates on them. It is based on the assumption that the expected time is
actually the time taken to complete the activity.

Main objectives of CPM.

The main objectives of CPM are:
(i) To find the difficulties and obstacles in the course of production process.
(ii) To assign time for each operation.
(iii) To ascertain the starting and finishing times of the work.

(iv) To find the critical path and the minimum duration time for the project as
a whole.
Situations where CPM can be effectively used

CPM techniques can be used effectively in the following situations:

(a) In production planning.
(b) Location of and deliveries from a warehouse.
(c) Road systems and traffic schedules.
(d) Communication network.

Advantages of CPM

The application of CPM leads to the following advantages:

(i) It provides an analytical approach to the achievement of project
objectives which are defined clearly.
(ii) It identifies the most critical elements and pays more attention to them.
(iii) It assists in avoiding waste of time, energy and money on unimportant
activities.
(iv) It provides a standard method for communicating project plans,
schedules and costs.
Thus CPM technique is a very useful analysis in production planning of a very
large project.

PERT (Programme Evaluation and Review Technique)

Many modern techniques have been developed recently for the planning
and control of large projects in various industries, especially in the defense, chemical
and construction industries. Perhaps PERT is the best known of such techniques.
PERT is a time-event network analysis technique designed to watch how the parts of
a program fit together during the passage of time and events. This technique was

developed by the special project office of the U.S. Navy in 1958. It involves the
application of network theory to scheduling problems. In PERT we assume that the
expected time of any operation can never be determined exactly.
Major Features of PERT or Procedure or Requirement for PERT

The following are the main features of PERT:

(i) All individual tasks should be shown in a network. Events are
shown by circles. Each circle represents an event – a subsidiary
plan whose completion can be measured at a given time.
(ii) Each arrow represents an activity – the time-consuming elements
of a programme, the effort that must be made between events.
(iii) Activity time is the elapsed time required to accomplish an event.

In the original PERT, three-time values are used as follows:

(a) t1 (Optimistic time) : It is the best estimate of time if
everything goes exceptionally well.
(b) t2 (Most likely time) : It is the time which the project engineer
believes is necessary to do the job, or
the time which is most often required
if the activity is repeated a number of
times.
(c) t3 (Pessimistic time) : It is the time estimate for the activity
under adverse conditions. It is the
longest time and is rather more
difficult to ascertain.
Experience has shown that the best estimate of time out of the several estimates
made by the project engineer is:

t = (t1 + 4t2 + t3) / 6

The variance of t is given by:

V(t) = [(t3 - t1) / 6]^2

Here it is assumed that the time estimates follow the Beta distribution.
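A small helper makes these two formulas concrete; the three estimates used in the example call below are hypothetical:

```python
# PERT time estimate: expected time and variance from the three estimates,
# under the Beta-distribution assumption stated above.
def pert_estimate(t1, t2, t3):
    """t1 = optimistic, t2 = most likely, t3 = pessimistic time."""
    expected = (t1 + 4 * t2 + t3) / 6
    variance = ((t3 - t1) / 6) ** 2
    return expected, variance

# E.g. optimistic 2 days, most likely 5, pessimistic 14.
te, var = pert_estimate(2, 5, 14)
```

With these figures the expected time is 6 days with a variance of 4, showing how the pessimistic estimate pulls the expected time above the most likely value.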
The next step is to compute the critical path and the slack time.
A critical path or critical sequence of activities is one which takes the longest time to
accomplish the work and has the least slack time.

Advantages of PERT

PERT is a very important tool of managerial planning and control at the top level
concerned with the overall responsibility of a project. PERT has the following merits:
(i) PERT forces managers and subordinate managers to make a plan for
production because time event analysis is quite impossible without
planning and seeing how the pieces fit together.
(ii) PERT encourages management control by exception. It concentrates
attention on critical elements that may need correction.
(iii) It enables forward-working control as a delay will affect the
succeeding events and possibly the whole project. The production
manager can somehow make up the time by shortening that of some
other event.
(iv) The network system with its sub-systems creates a pressure for action
at the right spot and level and at the right time.
(v) PERT can be effectively used for rescheduling the activities.

Limitations in using PERT

The uses of PERT technique are subject to the following limitations:

(i) It is a time-consuming and expensive technique.
(ii) It is based on Beta Distribution and the assumption of Beta Distribution
may not always be true.
(iii) PERT is not suitable when program is nebulous and a reasonable estimate
of time schedule is not possible.
(iv) It is not useful for routine planning of recurring events such as mass
production because once a repetitive sequence is clearly worked out,
elaborate and continuing control is not required.
(v) The expected time and the corresponding variance are only estimates.

Difference between PERT and CPM

Although these techniques (PERT and CPM) use the same principles and are based
on network analysis, they differ from each other in the following respects:
(i) PERT is appropriate where time estimates are uncertain in the duration of
activities as measured by optimistic time, most likely time, and pessimistic
time, whereas CPM (Critical Path Method) is good when time estimates
are known with certainty. CPM assumes that the duration of every activity
is constant, and it can therefore determine whether each activity is critical or not.
(ii) PERT is concerned with events which are the beginning or ending points
of operation while CPM is concerned with activities.
(iii) PERT is suitable for non-repetitive projects while CPM is designed for
repetitive projects.
(iv) PERT can be analyzed statistically whereas CPM cannot.
(v) PERT is not concerned with the relationship between time and cost,
whereas CPM establishes a relationship between time and cost and cost is
proportionate to time.

3.8 Linear programming

Linear programming is the process of taking various linear inequalities relating to

some situation, and finding the "best" value obtainable under those conditions. A
typical example would be taking the limitations of materials and labor, and then
determining the "best" production levels for maximal profits under those conditions.
In "real life", linear programming is part of a very important area of mathematics
called "optimization techniques". This field of study (or at least the applied results of
it) is used every day in the organization and allocation of resources. These "real life"
systems can have dozens or hundreds of variables, or more.

The general process for solving linear-programming exercises is to graph the

inequalities (called the "constraints") to form a walled-off area on the x,y-plane
(called the "feasibility region"). Then you figure out the coordinates of the corners of
this feasibility region (that is, you find the intersection points of the various pairs of
lines), and test these corner points in the formula (called the "optimization equation")
for which you're trying to find the highest or lowest value.

Formulation of LPP

Elixir paints produces both interior and exterior paints from two raw materials M1
and M2. Table 1.1 below provides the data (the figures are consolidated from those
used later in this section):

Table 1.1
                           Exterior paint   Interior paint   Maximum daily
                           (tons per ton)   (tons per ton)   availability (tons)
Raw material M1                  6                4                 24
Raw material M2                  1                2                  6
Profit (Rs '000 per ton)         5                4

The market survey restricts the maximum daily demand of interior paints to 2 tons.
Additionally, the daily demand for interior paints cannot exceed that of exterior
paints by more than 1 ton. Formulate the LPP.

The general procedure for formulation of an LPP is as follows
1. To identify and name the decision variables.

That is to find out the variables of significance and what is to be ultimately

determined. In this example the quantity (In tons) of exterior (EP) and interior paints
(IP) to be produced under the given constraints to maximize the total profit is to be
found. Therefore the EP and IP are the decision variables. The decision variables EP
and IP can be assigned name such as EP = x1 and IP =x2.

2. To frame the objective equation.

Our objective is to maximize the profits by producing the right quantity of EP and IP.
For every ton of EP produced a profit of Rs 5000/- is made, and for every ton of IP a
profit of Rs 4000/-. This is indicated as 5 and 4 in Table 1.1 as profit in thousands of
rupees. We can use the values of 5 and 4 in our objective equation and later multiply
by a factor 1000 in the final answer. Therefore the objective equation is framed as
Max (Z) = 5 x1 + 4 x2, where x1 and x2 represent the quantities (in tons) of EP and
IP to be produced.

3. To identify the constraints and frame the constraint equations

In the problem statement there are constraints relating to the raw materials used and
there are constraints relating to the demand for the exterior and interior paints. Let us
first examine the raw material constraints. There are two types of raw materials used
namely M1 and M2. The maximum availability of M1 every day is given as 24 tons
and the problem (refer table 1.1) states that 6 tons of M1 is required for producing 1
ton of exterior paint. Now the quantity of exterior paint to be produced is denoted as
x1, so if 6 tons of M1 is required for producing 1 ton of exterior paint, to produce x1
tons of exterior paint (6 * x1) tons of M1 is required. Similarly the problem states that
4 tons of M1 is required for producing 1 ton of interior paint. Now the quantity of
interior paint to be produced is denoted as x2, so if 4 tons of M1 is required for
producing 1 ton of interior paint, to produce x2 tons of interior paint (4 * x2) tons of

M1 is required. But the total quantity of M1 used for producing x1 quantity of
exterior paint and x2 quantity of interior paint cannot exceed 24 tons (since that is the
maximum availability). Therefore the constraint equation can be written as
6 x1 + 4 x2 <= 24.

In the same way the constraint equation for the raw material M2 can be framed. At
this point I would suggest that you must try to frame the constraint equation for raw
material M2 on your own and then look into the equation given in the text. To
encourage you to frame this equation on your own I am not exposing the equation
now but am showing this equation in the consolidated solution to the problem.

Well now that you have become confident by framing the second constraint equation
correctly (I am sure you have), let us now look to frame the demand constraints for
the problem. The problem states that the daily demand for interior paints is restricted
to 2 tons. In other words a maximum of 2 tons of interior paint can be sold per day. If
not more than 2 tons of interior paints can be sold in a day, it is advisable to limit the
production of interior paints also to a maximum of 2 tons per day (I am sure you
agree with me).

Since the quantity of interior paints produced is denoted by x2, the constraint is now
written as x2 <= 2

Now let us look into the other demand constraint. The problem states that the daily
demand for interior paints cannot exceed that of exterior paints by more than 1 ton.
This constraint has to be understood and interpreted carefully. Read the statement
carefully and understand that the daily demand for interior paints can be greater than
the demand for exterior paints but that difference cannot be more than 1 ton. Again
we can conclude that based on demand it is advisable that if we produce interior
paints more than exterior paints, that difference in tons of production cannot exceed 1
ton. By now you are familiar that the quantities of exterior paint and interior paint

produced are denoted by x1 and x2 respectively. Therefore let us frame the constraint
equation as the difference in the quantities of paints produced.
x2 - x1 <=1

In addition to the constraints derived from the statements mentioned in the problem,
there is one more standard constraint known as the non-negativity constraint. The
rationale behind this constraint is that the quantities of exterior and interior paints
produced can never be less than zero. That is it is not possible to produce negative
quantity of any commodity. Therefore x1 and x2 must take values greater than or
equal to zero. This constraint is now written as x1, x2 >= 0. Thus we have
formulated (that is written in the form of equations) the given statement problem.

The consolidated formulation is given below

The objective equation is Max (Z) = 5 x1 + 4 x2
Subject to the constraints: 6 x1 + 4 x2 <= 24
x1 + 2 x2 <= 6
x2 <= 2
x2 - x1 <= 1
x1, x2 >= 0 (Non-Negativity constraint)
We have successfully formulated the given statement problem.
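As a sketch of the corner-point method described earlier in this section applied to this formulation, the code below intersects every pair of constraint boundary lines, discards the infeasible intersection points, and evaluates Z at each feasible corner (requires Python 3.8+ for the `:=` operator):

```python
# Corner-point method for the paint LPP: maximize Z = 5*x1 + 4*x2.
from itertools import combinations

# Each constraint written as a*x1 + b*x2 <= c, including non-negativity.
constraints = [
    (6, 4, 24),    # raw material M1
    (1, 2, 6),     # raw material M2
    (0, 1, 2),     # demand limit on interior paint
    (-1, 1, 1),    # interior may exceed exterior by at most 1 ton
    (-1, 0, 0),    # x1 >= 0
    (0, -1, 0),    # x2 >= 0
]

def intersect(c1, c2):
    """Intersection of the two boundary lines, or None if parallel."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in constraints)

corners = [p for c1, c2 in combinations(constraints, 2)
           if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(corners, key=lambda p: 5 * p[0] + 4 * p[1])
```

For this problem the optimum corner is x1 = 3 tons of exterior paint and x2 = 1.5 tons of interior paint, giving Z = 21, i.e. a maximum daily profit of Rs 21,000.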

3.9 Transportation model

The transportation model is a valuable tool in analyzing and modifying existing
transportation systems or in the implementation of new ones. In addition, the model
is effective in determining resource allocation in existing business structures.

The model requires a few key pieces of information, which include the following:
• Origin of the supply
• Destination of the supply
• Unit cost to ship

The transportation model can also be used as a comparative tool providing business
decision makers with the information they need to properly balance cost and supply.
The use of this model for capacity planning is similar to the models used by engineers
in the planning of waterways and highways.

This model will help decide what the optimal shipping plan is by determining a
minimum cost for shipping from numerous sources to numerous destinations. This
will help for comparison when identifying alternatives in terms of their impact on the
final cost for a system. The main applications of the transportation model mentioned
in the chapter are location decisions, production planning, capacity planning and
transshipment. The major assumptions of the transportation model are
the following:

1. Items are homogeneous

2. Shipping cost per unit is the same no matter how many units are shipped
3. Only one route is used from place of shipment to the destination

The transportation problem involves determining a minimum-cost plan for shipping

from multiple sources to multiple destinations. A transportation model is used to
determine how to distribute supplies to various destinations while minimizing total
shipping cost. In this case, a shipping plan is produced and is not changed unless
factors such as supply, demand, or unit shipping costs change. The variables in this
model have a linear relationship and therefore, can be put into a transportation table.
The table will have a list of origins and each one's capacity or supply quantity per period.
It will also show a list of destinations and their respective demands per period. Also,
it will show the unit cost of shipping goods from each origin to each destination.
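One simple way to turn such a table into a starting shipping plan is the least-cost heuristic: repeatedly fill the cheapest remaining cell. The two-origin, three-destination table below is hypothetical, and this heuristic yields a feasible starting solution, not necessarily the optimal one:

```python
# Least-cost starting solution for a balanced transportation table.
def least_cost_plan(supply, demand, cost):
    """Allocate greedily, cheapest cell first; returns {(origin, dest): qty}."""
    supply, demand = supply[:], demand[:]     # work on copies
    plan = {}
    cells = sorted((cost[i][j], i, j)
                   for i in range(len(supply))
                   for j in range(len(demand)))
    for c, i, j in cells:
        qty = min(supply[i], demand[j])
        if qty > 0:
            plan[(i, j)] = qty
            supply[i] -= qty
            demand[j] -= qty
    return plan

supply = [100, 150]            # units available at each origin
demand = [80, 120, 50]         # units required at each destination
cost = [[4, 6, 8],             # unit shipping cost, origin -> destination
        [5, 3, 7]]

plan = least_cost_plan(supply, demand, cost)
total = sum(cost[i][j] * q for (i, j), q in plan.items())
```

All 250 units get allocated, and the resulting plan can then be improved toward the true minimum-cost solution with a method such as stepping-stone or MODI.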

Transportation costs play an important role in location decision. The transportation
problem involves finding the lowest-cost plan for distributing stocks of goods or
supplies from multiple origins to multiple destinations that demand the goods. The
transportation model can be used to compare location alternatives in terms of their
impact on the total distribution costs for a system. It is subject to demand satisfaction
at the markets and to supply constraints. It also determines how to allocate the
supplies available from the various factories to the warehouses that stock or demand
those goods, in such a way that total shipping cost is minimized.

Assignment model
The assignment problem is one of the fundamental combinatorial optimization
problems in the branch of optimization or operations research in mathematics. It
consists of finding a maximum weight matching in a weighted bipartite graph. In its
most general form, the problem is as follows:

There are a number of agents and a number of tasks. Any agent can be assigned to
perform any task, incurring some cost that may vary depending on the agent-task
assignment. It is required to perform all tasks by assigning exactly one agent to each
task and exactly one task to each agent in such a way that the total cost of the
assignment is minimized. If the numbers of agents and tasks are equal and the total
cost of the assignment for all tasks is equal to the sum of the costs for each agent (or
the sum of the costs for each task, which is the same thing in this case), then the
problem is called the linear assignment problem. Commonly, when speaking of the
assignment problem without any additional qualification, then the linear assignment
problem is meant.

The Hungarian algorithm is one of many algorithms that have been devised that solve
the linear assignment problem within time bounded by a polynomial expression of the
number of agents. The assignment problem is a special case of the transportation
problem, which is a special case of the minimum cost flow problem, which in turn is
a special case of a linear program. While it is possible to solve any of these problems

using the simplex algorithm, each specialization has more efficient algorithms
designed to take advantage of its special structure. If the cost function involves
quadratic inequalities it is called the quadratic assignment problem.

Suppose that a taxi firm has three taxis (the agents) available, and three customers
(the tasks) wishing to be picked up as soon as possible. The firm prides itself on
speedy pickups, so for each taxi the "cost" of picking up a particular customer will
depend on the time taken for the taxi to reach the pickup point. The solution to the
assignment problem will be whichever combination of taxis and customers results in
the least total cost. However, the assignment problem can be made rather more
flexible than it first appears. In the above example, suppose that there are four taxis
available, but still only three customers. Then a fourth dummy task can be invented,
perhaps called "sitting still doing nothing", with a cost of 0 for the taxi assigned to it.
The assignment problem can then be solved in the usual way and still give the best
solution to the problem. Similar tricks can be played in order to allow more tasks than
agents, tasks to which multiple agents must be assigned (for instance, a group of more
customers than will fit in one taxi), or maximizing profit rather than minimizing cost.
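The taxi example can be sketched by brute force over all possible assignments; the pickup-time costs below are hypothetical, and a zero-cost dummy customer (the last column) absorbs the spare taxi. For larger problems the Hungarian algorithm mentioned above is far more efficient than this enumeration:

```python
# Brute-force assignment: four taxis, three customers plus a dummy.
from itertools import permutations

# cost[i][j] = pickup time for taxi i to reach customer j; column 3 is the
# dummy "sitting still doing nothing" task with cost 0.
cost = [
    [14, 5, 8, 0],
    [2, 12, 6, 0],
    [7, 8, 3, 0],
    [2, 4, 6, 0],
]

def best_assignment(cost):
    """Try every taxi-to-customer permutation, keep the cheapest."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda perm: sum(cost[i][perm[i]] for i in range(n)))
    return best, sum(cost[i][best[i]] for i in range(n))

assignment, total = best_assignment(cost)
```

Here the cheapest plan sends taxi 1 to customer 0, taxi 3 to customer 1, taxi 2 to customer 2, and leaves taxi 0 idle, for a total pickup time of 9.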

Queuing Models
Delays and queuing problems are common features not only in our daily-life
situations such as at a bank or postal office, at a ticketing office, in public
transportation or in a traffic jam but also in more technical environments, such as in
manufacturing, computer networking and telecommunications. They play an essential
role for business process re-engineering purposes in administrative tasks. “Queuing
models provide the analyst with a powerful tool for designing and evaluating the
performance of queuing systems.”

Whenever customers arrive at a service facility, some of them have to wait before
they receive the desired service. It means that the customer has to wait for his/her
turn, may be in a line. Customers arrive at a service facility (sales checkout zone in
ICA) with several queues, each with one server (sales checkout counter). The

customers choose a queue of a server according to some mechanism (e.g., shortest
queue or shortest workload). Sometimes insufficiencies in service also occur, such as
an undue wait caused by a new employee. Delays in service jobs beyond their due
time may result in losing future business opportunities. Queuing
theory is the study of waiting in all these various situations. It uses queuing models to
represent the various types of queuing systems that arise in practice. The models
enable finding an appropriate balance between the cost of service and the amount of waiting.

Single Channel Queuing Models

Single queuing nodes are usually described using Kendall's notation in the form A/S/C, where A describes the time between arrivals to the queue, S the size of jobs, and C the number of servers at the node. Many theorems in queuing theory can be proved by reducing queues to mathematical systems known as Markov chains, first described by Andrey Markov in his 1906 paper. Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on what would now be called queuing theory in 1909. He modeled the number of telephone calls arriving at an exchange by a Poisson process and solved the M/D/1 queue in 1917 and the M/D/k queuing model in 1920. In Kendall's notation, M stands for Markov or memoryless and means arrivals occur according to a Poisson process; D stands for deterministic and means jobs arriving at the queue require a fixed amount of service; and k describes the number of servers at the queuing node (k = 1, 2, ...).

If there are more jobs at the node than there are servers then jobs will queue and wait
for service. The M/M/1 queue is a simple model where a single server serves jobs that
arrive according to a Poisson process and have exponentially distributed service
requirements. In an M/G/1 queue the G stands for general and indicates an arbitrary
probability distribution. The M/G/1 model was solved by Felix Pollaczek in 1930, a
solution later recast in probabilistic terms by Aleksandr Khinchin and now known as
the Pollaczek–Khinchine formula. After World War II queueing theory became an
area of research interest to mathematicians.
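The standard steady-state measures of the M/M/1 queue follow directly from the arrival rate λ and service rate μ. A minimal sketch (the rates in the example are arbitrary illustration values):

```python
def mm1_metrics(lam, mu):
    """Steady-state measures of an M/M/1 queue (requires lam < mu)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu                 # server utilization
    L = rho / (1 - rho)            # mean number of jobs in the system
    Lq = rho * rho / (1 - rho)     # mean number waiting in the queue
    W = 1 / (mu - lam)             # mean time in the system
    Wq = rho / (mu - lam)          # mean waiting time before service
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

# Example: 4 arrivals per hour, service rate 6 per hour.
m = mm1_metrics(4, 6)  # rho = 2/3, L = 2 jobs, W = 0.5 hour
```

Note that the results satisfy Little's law, L = λW, which ties the four measures together.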

Work on queuing theory used in modern packet-switching networks was performed in the early 1960s by Leonard Kleinrock. It was in this period that John Little gave a proof of the formula that now bears his name: Little's law. In 1961 John Kingman gave a formula for the mean waiting time in a G/G/1 queue: Kingman's formula. The matrix geometric method and matrix analytic methods have allowed queues with phase-type distributed inter-arrival and service time distributions to be considered. Problems such as performance metrics for the M/G/k queue remain open.
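Kingman's formula approximates the mean waiting time in a G/G/1 queue from the utilization and the variability of arrivals and service. A small sketch (the example rates are illustrative):

```python
def kingman_wq(lam, mu, ca2, cs2):
    """Kingman's approximation for the mean waiting time in a G/G/1 queue.

    lam: arrival rate; mu: service rate;
    ca2, cs2: squared coefficients of variation of the inter-arrival
    and service time distributions.
    """
    rho = lam / mu  # utilization, must be < 1 for a stable queue
    # Wq ~ (rho / (1 - rho)) * ((ca2 + cs2) / 2) * (mean service time)
    return (rho / (1 - rho)) * ((ca2 + cs2) / 2) * (1 / mu)

# With Poisson arrivals and exponential service (ca2 = cs2 = 1),
# the approximation reduces to the exact M/M/1 result rho / (mu - lam).
wq = kingman_wq(4, 6, 1.0, 1.0)  # 1/3 hour
```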

Multi Channel Queuing Models

There are two types of multi-channel problems. The first type occurs when the system has several service centers, the queue at each of them is isolated, and an element cannot pass from one queue to another. The second type of waiting-queue problem is said to be of multiple exponential channels, or of several service channels in parallel, when the element in the queue can be served equally well by more than one station.

Queuing problems of the first type should be considered as several problems of the single-channel type. Fig. 1 illustrates such a case; the formation of each queue is independent of the others. Once an element has selected a particular queue, it becomes part of a single-channel system.

The probability of the arrivals in queue A is independent of the probability of the arrivals in queue B (and C) due to the different characteristics of the routes. In the multiple-exponential-channels type, on the other hand, each of the stations can deliver the same type of service and is equipped with the same type of facilities. The element that selects one station makes this decision without any external pressure; because of this, the queue is single. The single queue (line) usually breaks into smaller queues in front of each station. Fig. 2 schematically shows the case of a single line (with mean arrival rate λ) that randomly scatters itself toward four stations (S = 4), each of which has an equal mean service rate μ.
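For the parallel-channel case (an M/M/c queue), the probability that an arriving element must wait is given by the Erlang C formula. A minimal sketch using the Fig. 2 configuration of four stations; the arrival and service rates are illustrative:

```python
from math import factorial

def erlang_c(lam, mu, c):
    """Probability that an arrival must wait in an M/M/c queue.

    lam: mean arrival rate of the single line;
    mu: mean service rate of each station; c: number of stations.
    """
    a = lam / mu          # offered load in Erlangs
    rho = a / c           # per-station utilization, must be < 1
    if rho >= 1:
        raise ValueError("unstable system: utilization must be below 1")
    below = sum(a**k / factorial(k) for k in range(c))
    waiting = (a**c / factorial(c)) / (1 - rho)
    return waiting / (below + waiting)

# Four stations (S = 4), arrival rate 3 per unit time, service rate 1 each.
p_wait = erlang_c(3, 1, 4)  # roughly a 51% chance of having to wait
```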

3.10 Simulation modeling

Simulation is basically a data-generating technique, used when it is risky, cumbersome, or time-consuming to conduct a real study or experiment to learn more about a situation or problem. The available analytical methods cannot be used in all situations: because of a large number of variables, a large number of interrelationships among the variables, or the complexity of those relationships, it may not be possible to develop an analytical model representing the real situation. Sometimes, even when building a model is possible, its solution may not be.

Under such situations simulation is used. It should be noted that simulation does not solve the problem by itself; it only generates the information or data needed for the decision problem or decision-making.
Simulation modeling is the process of creating and analyzing a digital prototype of a
physical model to predict its performance in the real world. Simulation modeling is

used to help designers and engineers understand whether, under what conditions, and
in which ways a part could fail and what loads it can withstand. Simulation modeling
can also help predict fluid flow and heat transfer patterns.

Simulation modeling allows designers and engineers to avoid repeated building of

multiple physical prototypes to analyze designs for new or existing parts. Before
creating the physical prototype, users can virtually investigate many digital
prototypes. Using the technique, they can:

• Optimize geometry for weight and strength

• Select materials that meet weight, strength, and budget requirements
• Simulate part failure and identify the loading conditions that cause it
• Assess extreme environmental conditions or loads not easily tested on
physical prototypes, such as earthquake shock load
• Verify hand calculations
• Validate the likely safety and survival of a physical prototype before testing

The steps involved in developing a simulation model, designing a simulation

experiment, and performing simulation analysis are:

Step 1. Identify the problem.

Step 2. Formulate the problem.
Step 3. Collect and process real system data.
Step 4. Formulate and develop a model.
Step 5. Validate the model.
Step 6. Document model for future use.
Step 7. Select appropriate experimental design.
Step 8. Establish experimental conditions for runs.
Step 9. Perform simulation runs.
Step 10. Interpret and present results.
Step 11. Recommend further course of action.
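Steps 4 and 9 can be illustrated with a minimal hand-rolled model: a hypothetical single-server queue simulated via the Lindley recursion, with exponentially distributed inter-arrival and service times (all parameter values here are assumptions for the sketch):

```python
import random

def simulate_queue_waits(lam, mu, n, seed=1):
    """Estimate the mean waiting time of n customers in a single-server
    queue using the Lindley recursion W[k+1] = max(0, W[k] + S - A)."""
    rng = random.Random(seed)   # fixed seed: a reproducible run (Step 8)
    wait = 0.0
    total = 0.0
    for _ in range(n):          # perform the simulation run (Step 9)
        total += wait
        service = rng.expovariate(mu)   # this customer's service time
        gap = rng.expovariate(lam)      # time until the next arrival
        wait = max(0.0, wait + service - gap)
    return total / n

avg_wait = simulate_queue_waits(lam=4, mu=6, n=50_000)
```

Step 5 (validation) can be done here by comparing the estimate against the known analytical answer for this model, Wq = ρ/(μ − λ) = 1/3; the simulated average should fall close to it.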

Although this is a logical ordering of steps in a simulation study, much iteration at various sub-stages may be required before the objectives of the study are achieved. Not all the steps may be possible and/or required; on the other hand, additional steps may have to be performed. The next three sections describe these steps in detail.

Deterministic simulation model

In mathematical modeling, deterministic simulations contain no random variables and no degree of randomness; they consist mostly of equations, for example difference equations. These simulations have known inputs and result in a unique set of outputs. In contrast, stochastic (probabilistic) simulations include random variables. Simulation is a process of imitating or generating reality, or things that we cannot, or for some reason do not, accomplish in the real world. It is an indispensable problem-solving methodology for the solution of many real-world problems. Deterministic simulation models are usually designed to capture some underlying mechanism or natural process.
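A deterministic simulation built from a difference equation can be sketched as follows. The model here is the classic logistic growth equation; the parameter values are illustrative assumptions, not data:

```python
def simulate_population(p0, r, K, steps):
    """Deterministic simulation of logistic growth via the difference
    equation p[t+1] = p[t] + r * p[t] * (1 - p[t] / K)."""
    history = [p0]
    for _ in range(steps):
        p = history[-1]
        history.append(p + r * p * (1 - p / K))
    return history

# Known inputs yield a unique trajectory: no randomness is involved,
# so repeating the run always reproduces the same outputs.
traj = simulate_population(p0=10.0, r=0.3, K=100.0, steps=50)
```

Here the trajectory converges toward the carrying capacity K, and two runs with identical inputs are identical, which is exactly what distinguishes a deterministic model from a stochastic one.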

They are different from statistical models (for example linear regression) whose aim
is to empirically estimate the relationships between variables. The deterministic
model is viewed as a useful approximation of reality that is easier to build and
interpret than a stochastic model. However, such models can be extremely
complicated with large numbers of inputs and outputs, and therefore are often
noninvertible; a fixed single set of outputs can be generated by multiple sets of inputs.
Thus taking reliable account of parameter and model uncertainty is crucial, perhaps
even more so than for standard statistical models, yet this is an area that has received
little attention from statisticians.

Probabilistic simulation

Probabilistic simulation is the process of representing uncertainties explicitly by specifying inputs as probability distributions. If the inputs describing a system are uncertain, the prediction of future performance is necessarily uncertain. That is, the result of any analysis based on inputs represented by probability distributions is itself a probability distribution.

Hence, whereas the result of a deterministic simulation of an uncertain system is a qualitative statement ("if we build the dam, the salmon population could go extinct"), the result of a probabilistic simulation of such a system is a quantified probability ("if we build the dam, there is a 20% chance that the salmon population will go extinct"). Such a result (in this case, quantifying the risk of extinction) is typically much more useful to decision-makers who use the simulation results.

In order to compute the probability distribution of predicted performance, it is

necessary to propagate (translate) the input uncertainties into uncertainties in the
results. A variety of methods exist for propagating uncertainty. One common
technique for propagating the uncertainty in the various aspects of a system to the
predicted performance (and the one used by GoldSim) is Monte Carlo simulation.

In Monte Carlo simulation, the entire system is simulated a large number (e.g., 1000)
of times. Each simulation is equally likely, and is referred to as a realization of the
system. For each realization, all of the uncertain parameters are sampled (i.e., a single
random value is selected from the specified distribution describing each parameter).
The system is then simulated through time (given the particular set of input
parameters) such that the performance of the system can be computed. This results in
a large number of separate and independent results, each representing a possible
“future” for the system (i.e., one possible path the system may follow through time).
The results of the independent system realizations are assembled into probability
distributions of possible outcomes.
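The realization loop can be sketched with a toy model. Everything here is a hypothetical stand-in for a real analysis: the growth-rate distribution, the horizon, and the extinction threshold are all invented for illustration, and the sketch is unrelated to GoldSim's actual implementation:

```python
import random

def monte_carlo_extinction(n_realizations=10_000, seed=42):
    """Toy Monte Carlo simulation: each realization samples the uncertain
    inputs once, simulates the system through time, and records the
    outcome; the fraction of bad outcomes estimates the risk."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(n_realizations):
        # Sample the uncertain parameter from its specified distribution.
        growth = rng.gauss(0.02, 0.05)     # assumed annual growth rate
        population = 1000.0
        for _ in range(20):                # simulate 20 years through time
            population *= 1 + growth
        if population < 500:               # assumed extinction threshold
            extinct += 1
    return extinct / n_realizations

p_extinct = monte_carlo_extinction()
```

Each pass through the outer loop is one equally likely realization; assembling the outcomes into a frequency turns the uncertain inputs into a quantified probability of the kind described above.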

3.11 Dynamic programming

In mathematics, computer science, economics, and bioinformatics, dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems. It is applicable to problems exhibiting the properties of overlapping subproblems and optimal substructure (described below). When applicable, the method takes far less time than naive methods that do not take advantage of the subproblem overlap (such as depth-first search).

The idea behind dynamic programming is quite simple. In general, to solve a given problem we need to solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. Often, when using a more naive method, many of the subproblems are generated and solved many times. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations: once the solution to a given subproblem has been computed, it is stored, or "memoized"; the next time the same solution is needed, it is simply looked up. This approach is especially useful when the number of repeating subproblems grows exponentially as a function of the size of the input.

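The classic illustration of overlapping subproblems is the Fibonacci sequence: the naive recursion recomputes the same values exponentially often, while storing each answer once makes the computation linear. A minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # memoize: each fib(n) is computed only once
def fib(n):
    """Without the cache, fib(n) makes ~2^n calls because fib(n-1) and
    fib(n-2) share almost all of their subproblems; with it, each
    subproblem is solved once and then simply looked up."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # fast with memoization; infeasible without it
```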
Dynamic programming algorithms are used for optimization (for example, finding the
shortest path between two points, or the fastest way to multiply many matrices). A
dynamic programming algorithm will examine all possible ways to solve the problem
and will pick the best solution. Therefore, we can roughly think of dynamic
programming as an intelligent, brute-force method that enables us to go through all
possible solutions to pick the best one. If the scope of the problem is such that going
through all possible solutions is possible and fast enough, dynamic programming
guarantees finding the optimal solution. The alternatives are many, such as using a
greedy algorithm, which picks the best possible choice "at any possible branch in the
road". While a greedy algorithm does not guarantee the optimal solution, it is faster.

Fortunately, some greedy algorithms (such as those for minimum spanning trees) are proven to lead to the optimal solution.

For example, let's say that you have to get from point A to point B as fast as possible, in a given city, during rush hour. A dynamic programming algorithm will look into the entire traffic report, examining all possible combinations of roads you might take, and only then tell you which way is the fastest. Of course, you might have to wait a while until the algorithm finishes, and only then can you start driving. The path you take will be the fastest one (assuming that nothing changed in the external environment).

On the other hand, a greedy algorithm will start you driving immediately and will
pick the road that looks the fastest at every intersection. As you can imagine, this
strategy might not lead to the fastest arrival time, since you might take some "easy"
streets and then find yourself hopelessly stuck in a traffic jam.
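The rush-hour example can be sketched as a dynamic program over a road network. The intersections, roads, and travel times below are hypothetical; the network is assumed to have no cycles so the recursion terminates:

```python
def fastest_route(travel_time, start, goal):
    """Dynamic programming over a road network given as a DAG:
    the best time from any intersection is the minimum, over its
    outgoing roads, of road time plus best time from the next
    intersection. Each subproblem is solved once and cached."""
    cache = {}

    def best(node):
        if node == goal:
            return 0
        if node not in cache:
            cache[node] = min(
                t + best(nxt) for nxt, t in travel_time[node].items()
            )
        return cache[node]

    return best(start)

# Hypothetical rush-hour travel times (minutes) between intersections.
roads = {
    "A": {"B": 5, "C": 2},
    "B": {"D": 4},
    "C": {"B": 1, "D": 7},
    "D": {},
}
t = fastest_route(roads, "A", "D")  # A -> C -> B -> D takes 7 minutes
```

A greedy driver would leave A via the cheapest road (to C, 2 minutes) and then go straight to D for a total of 9 minutes; the dynamic program considers all combinations and finds the 7-minute route instead.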

Sometimes, applying memoization to a naive basic recursive solution already results in an optimal dynamic programming solution; however, many problems require more sophisticated dynamic programming algorithms. Some of these may be recursive as well, but parameterized differently from the naive solution. Others can be more complicated and cannot be implemented as a recursive function with memoization.

Check Your Progress:

1) State the importance of milestones in project management
2) Explain the significance of a Gantt chart.
3) What is linear programming?
4) Why are PERT and CPM important for project management?