
1. Introduction

1.1 Introduction
Management Science (MS) can be defined as "a problem-solving process used by an interdisciplinary team to develop mathematical models that represent simple-to-complex functional relationships and provide management with a basis for decision-making and a means of uncovering new problems for quantitative analysis". Management science, however, encompasses more than the development of models for particular problems. It makes a substantial contribution in a much broader area: the use of the output from management science models for decision-making at the lower, middle, and top management levels. Management science is the application of the scientific method to the study of the operations of large, complex organizations or activities. Two disciplines intimately connected with management science are industrial engineering and operations research.
There are four major characteristics of management science:
(1) Examine Functional Relationships from a Systems Overview: The activity of any one function of an organization has some effect on the activity of every other function. It is therefore important to identify all significant interactions and determine their impact on the company as a whole. Initially, the functional relationships in a management science project are deliberately expanded so that all significantly interacting parts and their related components are contained in the statement of the problem. A systems overview examines the whole area under the manager's control. This approach provides a basis for initiating inquiries into problems that appear to be affecting performance at all levels.
(2) Use the Interdisciplinary Approach: Management science makes good use of a simple principle: it looks at a problem from different angles and with different approaches. For example, a mathematician might look at an inventory problem and formulate some type of mathematical relationship between the manufacturing departments and customer demand. A chemical engineer might look at the same problem and formulate it in terms of flow theory. A cost accountant might conceive the inventory problem in terms of component costs (direct material cost, direct labor cost, overheads, etc.) and how such costs can be controlled and reduced. Management science therefore emphasizes the interdisciplinary approach, because each aspect of a problem can be best understood and solved by experts in different fields such as accounting, biology, economics, engineering, mathematics, physics, psychology, sociology, and statistics.
(3) Uncover New Problems for Study: The third characteristic of management science, which is often overlooked, is that the solution of an MS problem brings new problems to light. The interrelated problems uncovered by the management science approach do not all have to be solved at the same time; however, each must be solved with consideration for the other problems if maximum benefits are to be obtained.
(4) Use a Modeling-Process Approach to Problem Solving: Management science takes a systematic approach to problem solving, typically through a modeling process built on mathematical models.
Linear Programming (LP) is one of the most frequently applied OR techniques in real-world problems. Traditional LP requires the decision maker to have deterministic and exact information available, but this assumption is often unrealistic, for several reasons: (a) many real-life problems and models contain linguistic and/or vague variables and constraints; (b) collecting precise data is often challenging because the environment of the system is unstable, or collecting precise data results in high information costs; (c) decision makers may not be able to express goals or constraints precisely because their utility functions are not precisely defined.
One of the most important discoveries in the early development of linear programming was the concept of duality and its many important ramifications. This discovery revealed that every linear programming problem has associated with it another linear programming problem called the dual. The connections between the dual problem and the original problem prove to be extremely useful in a variety of ways, and many important insights in linear programming are based on duality. The usefulness of duality theory lies not only in its algorithmic benefits (for example, the dual simplex algorithm) and mathematical results such as the weak and strong duality theorems, but also in its explanatory power through economic interpretation.

1.2 Objectives of the study:

1. To gain a general idea about Management Science and its use in decision making.
2. To develop a clear understanding of duality in LP and the primal-dual relationship.
3. To develop a clear understanding of the dual simplex method and sensitivity analysis.

1.3 Methodology: The data and information for preparing this term paper have been collected from secondary sources. For this purpose I consulted books, websites, and relevant reports, documents, and manuals.

1.4 Limitations of the study:

Although this study was able to reach its aim, there were some unavoidable limitations. They are given below:
• Lack of available and/or reliable data.
• Lack of prior research studies on the topic.
• Short research time period.

Figure 1: Limitations of Study

2. Literature Review
Singh (1972) made a feasibility study of crop insurance in U.P. using cross-sectional information from Tarai farms for the year 1970-71, with the assistance of linear programming. He considered the crop variability in U.P. during 1951-70 and analyzed the possibility of a crop insurance program. He also evaluated two alternatives, namely crop insurance and diversification, which would reduce income variance or minimize the probability of loss to achieve a more stable farm income. It was inferred that fluctuating crop production is a chronic problem in U.P. and that diversification stabilizes farm incomes at a higher level than the crop insurance programme. Van de Paane and Stangeland (1974) used linear programming to study the optimum concentration of cattle feed supplements, which were prepared by feed mills and sold to cattle feeders. Two problems were examined: in the first, the ratio in which supplement and main feed are utilized was taken as given, while in the second this ratio was chosen optimally. The relationships between these two problems as well as their dual problems were analyzed. It was concluded that feed mill profit margins for their supplements, based on the quantity of the supplement and the input costs, generally lead to supplements which are too concentrated.
Kanti Swarup (1968) gave a paper on duality with nonlinear constraints but did not give a proof of converse duality. Sharma and Kanti Swarup (1972) proved this converse duality result by making use of Dorn's (1960) technique. Kaska (1969) additionally gave some results on duality involving primal variables in the dual. Kyland (1972) gave an approach to duality based on the work of Wolfe (1961). Various researchers assumed strict differentiability in their papers on duality. According to Bector (1968), problems which fall under the class of convex programs can be solved by standard methods. There are different techniques available for solving convex programs: Rosen (1960, 1961) gave a strategy for the solution of nonlinear programming called Rosen's gradient projection method; Zoutendijk (1959) gave a method of feasible directions; Cheney and Goldstein (1959) gave a method called Newton's method of convex programming; and Kelley (1960) introduced the cutting plane method for solving convex programs. Jagannathan (1973) used a parametric approach to duality in N.L.F.P.P.s, Bector (1973) used a fractional Lagrangian approach, and Schaible (1983, 1974, 1976A, 1976B) used a variable transformation technique in duality problems. Aggarwal and Saxena (1975) established duality results for a standard error fractional program. The work in all these papers is based on the duality theory of Chandra and Gulati (1976). However, Mond (1978) further extended the duality theory of non-differentiable fractional programming by including the case of nonlinear constraints. Bector, Chandra and Husain (1992) presented generalized continuous fractional programming duality using a parametric approach: duality is developed for a continuous minimax fractional programming problem that involves several ratios in the objective, and the duality results can be regarded as dynamic generalizations of those for finite-dimensional nonlinear programming problems recently explored. G.J. Zalmai (1996) studied continuous-time multiobjective fractional programming. Both parametric and semiparametric necessary and sufficient proper efficiency conditions are established for a class of continuous-time multiobjective fractional programming problems. Based on the forms and contents of these proper efficiency results, two parametric and four semiparametric duality models are constructed, and in each case weak and strong duality theorems are proved. These proper efficiency and duality results contain, as special cases, similar results for continuous-time programming problems with multiple non-fractional, single fractional, and conventional objective functions. These results improve and generalize a number of existing results in the area of continuous-time programming and, moreover, provide continuous-time analogues of various kindred results previously obtained for certain classes of finite-dimensional nonlinear programming problems.
De and Yadav (2011) provide a mathematical model for the multi-criteria transportation problem under a fuzzy environment, considering an exponential membership function instead of a linear membership function. However, in contrast with the vast literature on modelling and
solution procedures for a linear program in a fuzzy environment (Lai and Hwang, 1993; Lai,
1995; Zimmermann, 1978, 1991), the studies in duality are rather scarce. The most basic results
on duality in FLP are due to Rodder and Zimmermann (1980) and Hamacher et al. (1978). In
Rodder and Zimmermann (1980), a generalization of maxmin and minmax problems in a fuzzy
environment is presented and thereby a pair of fuzzy dual linear programming problems is
constructed. An economic interpretation of this duality in terms of market and industry is also
discussed in that paper. In Bector and Chandra (2002), a pair of primal-dual linear programming problems is introduced under a fuzzy environment and appropriate results are proved to establish the duality relationship between them. In Liu et al. (1995), a constructive approach has been
proposed to duality for fuzzy multiple criteria and multiple constraints level linear programming
problems. Samuel and Venkatachalapathy (2012) proposed a new algorithm for solving a special type of transportation problem by assuming that the decision maker is uncertain only about the precise values of the fuzzy transportation costs, while there is no uncertainty about the supply and demand of the product. A new dual-based approach has been proposed for application to real-life transportation
problems. Zhong and Yong (2002) gave a parametric approach for duality in fuzzy multi-criteria and multi-constraint level linear programming problems. In Gupta and Mehlawat (2009), a study of a pair of primal-dual linear programming problems is presented and duality results are derived using an aspiration-level approach with an exponential membership function, while a discussion of the primal-dual linear programming problem with fuzzy coefficients is presented in Wu (2003, 2004). Ebrahimnejad and Nasseri (2012b) generalized the crisp dual simplex method for obtaining the optimal solution. Their method begins with a dual basic solution and proceeds by pivoting through a series of dual basic solutions until the associated complementary primal basic solution is feasible. Ebrahimnejad and Nasseri (2012a), however, gave a version of the conventional primal-dual method for linear programming problems in which any dual feasible solution, whether basic or not, is adequate to initiate the method.

3. Findings & Analysis
3.1 Duality in Linear Programming, the Dual Form of the Problem, and the Primal-Dual Relationship
Duality is a unifying theory that develops the relationships between a given linear program and another related linear program stated in terms of variables with a shadow-price interpretation. This unified theory is important:

 Because it allows a full understanding of the shadow-price interpretation of the optimal simplex multipliers, which can prove very useful in understanding the implications of a particular linear-programming model.
 Because it is often possible to solve the related linear program with the shadow prices as the variables in place of, or in conjunction with, the original linear program, thereby taking advantage of some computational efficiencies.
For example, consider a small company in Melbourne that has recently become engaged in the production of office furniture. The company manufactures tables, desks and chairs. The production of a table requires 8 kg of wood and 5 kg of metal and is sold for $80; a desk uses 6 kg of wood and 4 kg of metal and is sold for $60; and a chair requires 4 kg of both wood and metal and is sold for $50. We would like to determine the revenue-maximizing strategy for this company, given that its resources are limited to 100 kg of wood and 60 kg of metal.
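This can be written as the linear program: Maximize 80T + 60D + 50C subject to 8T + 6D + 4C ≤ 100 (wood), 5T + 4D + 4C ≤ 60 (metal), T, D, C ≥ 0. The following short Python sketch (an illustration added here, not part of the original study, and assuming SciPy with the HiGHS solver is available) solves it; linprog minimizes, so the objective is negated.

# Illustrative sketch only: the furniture LP described above, solved with SciPy.
from scipy.optimize import linprog

c = [-80, -60, -50]          # negated revenues for tables, desks, chairs (linprog minimizes)
A_ub = [[8, 6, 4],           # wood used per table, desk, chair (<= 100 kg)
        [5, 4, 4]]           # metal used per table, desk, chair (<= 60 kg)
b_ub = [100, 60]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x, -res.fun)       # production plan and maximum revenue (about 12 tables, $960)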
Two ideas are fundamental to duality theory. One is the fact that the dual of a dual linear program is the original primal linear program. The other is that every feasible solution for a linear program gives a bound on the optimal value of the objective function of its dual. The weak duality theorem states this bound precisely: the value of the objective function for any feasible solution to the primal maximization problem is bounded from above by the value of the objective function for any feasible solution to its dual. Conversely, the value of the objective function for any feasible solution to the dual is bounded from below by the value of the objective function of the primal. Pictorially, we might represent the situation as follows:

Figure 2: Situation in primal & dual

The strong duality theorem states that if the primal has an optimal solution, x*, then the dual also has an optimal solution, y*, and cᵀx* = bᵀy*.

A linear program can also be unbounded or infeasible. Duality theory tells us that if the primal is
unbounded then the dual is infeasible by the weak duality theorem. Likewise, if the dual is
unbounded, then the primal must be infeasible. However, it is possible for both the dual and the
primal to be infeasible.
Duality in linear programming states that every linear programming problem has another linear programming problem related to it, which can be derived from it. The original linear programming problem is called the "primal", while the derived linear program is called the "dual." Before forming the dual, the original linear programming problem should be written in standard form. Standard form means that all the variables in the problem are non-negative and that the "≥" sign is used in the minimization case and the "≤" sign in the maximization case.
The concept of Duality can be well understood through a problem given below:
Maximize P = 50X1+30X2
Subject to: 4X1 + 3X2 ≤ 100
3X1 + 5X2 ≤ 150
X1, X2 ≥ 0
The duality can be applied to the above original linear programming problem as:
Minimize C = 100Y1 + 150Y2
Subject to: 4Y1 + 3Y2 ≥ 50
3Y1 +5Y2 ≥ 30
Y1, Y2 ≥ 0
The following observations were made while forming the dual linear programming problem:
1. The primal or original linear programming problem is of the maximization type while the
dual problem is of minimization type.
2. The constraint values 100 and 150 of the primal problem have become the coefficients of the dual variables Y1 and Y2 in the objective function of the dual problem, while the coefficients of the variables in the objective function of the primal problem have become the constraint values in the dual problem.
3. The first column in the constraint inequalities of the primal problem has become the first row in the dual problem, and similarly the second column of the constraints has become the second row in the dual problem.
4. The directions of the inequalities have also changed, i.e. in the dual problem the sign is the reverse of the primal problem: in the primal problem the inequality sign was "≤", but in the dual problem it becomes "≥".
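As a small illustrative sketch (not from the original text), these transformation rules can be expressed in a few lines of Python: transpose the constraint matrix, swap the objective and right-hand-side vectors, and reverse the optimization sense and the inequality directions.

# Illustrative sketch: build the dual of a "maximize, <= constraints" problem.
def dual_of_max_problem(c, A, b):
    """Primal: maximize c.x subject to A x <= b, x >= 0.
    Dual:   minimize b.y subject to A^T y >= c, y >= 0."""
    A_T = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]  # transpose A
    return b, A_T, c   # dual objective coefficients, dual constraint rows, dual right-hand sides

# The example above: maximize 50X1 + 30X2 s.t. 4X1 + 3X2 <= 100, 3X1 + 5X2 <= 150
obj, rows, rhs = dual_of_max_problem([50, 30], [[4, 3], [3, 5]], [100, 150])
print("Minimize", obj)                           # [100, 150]  ->  C = 100Y1 + 150Y2
print("Subject to (>=):", list(zip(rows, rhs)))  # ([4, 3], 50) and ([3, 5], 30)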

The dual model of a linear programming problem is an alternative model from which we can recover the data of the original problem, commonly known as the primal model. It is therefore sufficient to solve one of them (primal or dual) to obtain the optimal solution and the optimal value of the equivalent problem (primal or dual as applicable). The number of variables in the dual problem is equal to the number of constraints in the original (primal) problem. The number of constraints in the dual problem is equal to the number of variables in the original problem. The coefficients of the objective function in the dual problem come from the right-hand side of the original problem. If the original problem is a max model, the dual is a min model; if the original problem is a min model, the dual is a max model. The coefficients of the first constraint of the dual problem are the coefficients of the first variable in the constraints of the original problem, and similarly for the other constraints. The right-hand sides of the dual constraints come from the objective function coefficients in the original problem.
Primal-dual relationships can be summarized in the following table:

Figure 3: Primal & Dual Relationships

The dual of the dual problem is again the primal problem. Either of the two problems has an
optimal solution if and only if the other does. If one problem is feasible but unbounded, then the
other is infeasible. If one is infeasible, then the other is either infeasible or feasible/unbounded.
In the Weak Duality Theorem, the objective function value of the primal (dual) to be maximized
evaluated at any primal (dual) feasible solution cannot exceed the dual (primal) objective
function value evaluated at a dual (primal) feasible solution. By the strong duality theorem, when there is an optimal solution, the optimal objective value of the primal is the same as the optimal objective value of the dual, that is, cᵀx* = bᵀy*.

3.2 Dual Simplex Method
The dual simplex method maintains a non-negative row 0 (dual feasibility) and eventually obtains a tableau in which each right-hand side is non-negative (primal feasibility). The steps of the dual simplex method for a max problem are as follows.
Step 1: Is the right-hand side of each constraint non-negative? If so, an optimal solution has been found; if not, go to step 2.
Step 2: Choose the most negative basic variable as the variable to leave the basis. The row it is in will be the pivot row. To select the variable that enters the basis, compute the following ratio for each variable xj that has a negative coefficient in the pivot row:

(coefficient of xj in row 0) / (coefficient of xj in pivot row)

Choose the variable with the smallest ratio (in absolute value) as the entering variable. Now use elementary row operations (EROs) to make the entering variable a basic variable in the pivot row.
Step 3: If there is any constraint in which the right-hand side is negative and each variable has a
non-negative coefficient, then the LP has no feasible solution. If no constraint infeasibility is
found, return to step 1.
The dual simplex method is often used to find the new optimal solution to an LP after a
constraint is added.
The simplex method and the dual simplex method are not the same. The simplex method starts with a non-optimal but feasible solution, whereas the dual simplex method starts with an optimal (dual feasible) but infeasible solution. The simplex method maintains feasibility during successive iterations, whereas the dual simplex method maintains optimality.
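The steps above can be turned into a short computational sketch. The following Python/NumPy code is only an illustration under the tableau layout described above (row 0 holds the already non-negative objective coefficients, the last column is the right-hand side); it is not the author's implementation. It is run on the minimization problem of Example (1.0) below, rewritten as a maximization with ≤ constraints.

# Illustrative dual simplex sketch (assumed layout: row 0 = objective coefficients,
# remaining rows = constraints with slack columns, last column = right-hand side).
import numpy as np

def dual_simplex(tableau, basis, tol=1e-9):
    """Pivot until every right-hand side is non-negative (primal feasibility)."""
    T = tableau.astype(float)
    while True:
        rhs = T[1:, -1]
        if (rhs >= -tol).all():                  # Step 1: all RHS non-negative -> done
            return T, basis
        r = int(np.argmin(rhs)) + 1              # Step 2: most negative basic variable leaves
        row = T[r, :-1]
        cols = np.where(row < -tol)[0]           # entering candidates: negative pivot-row coefficients
        if cols.size == 0:                       # Step 3: no candidate -> no feasible solution
            raise ValueError("LP has no feasible solution")
        ratios = np.abs(T[0, cols] / row[cols])  # ratio test (smallest in absolute value)
        c = cols[np.argmin(ratios)]
        T[r] /= T[r, c]                          # elementary row operations (pivot)
        for i in range(T.shape[0]):
            if i != r:
                T[i] -= T[i, c] * T[r]
        basis[r - 1] = c

# Example (1.0) below, rewritten as a max problem:  max -3X1 - 2X2
# s.t. -2X1 - X2 <= -6 and -X1 - X2 <= -4, with slacks in columns 2 and 3.
T = np.array([[ 3,  2, 0, 0,  0],
              [-2, -1, 1, 0, -6],
              [-1, -1, 0, 1, -4]])
final, basis = dual_simplex(T, basis=[2, 3])
print(final[1:, -1], -final[0, -1])              # basic-variable values [2. 2.] and minimum cost 10.0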
Example (1.0): Form the dual problem. Suppose
Minimize C = 3X1 + 2X2
Subject to: 2X1 + X2 ≥ 6
X1 + X2 ≥ 4
X1, X2 ≥ 0
Step 1. Form the matrix A
2 1 6
A= 1 1 4
3 2 1

Step 2. Find the transpose of A, AT.
2 1 3
AT = 1 1 2
6 4 1
Step 3. State the dual problem.
Maximize P = 6Y1 + 4Y2
Subject to: 2Y1 + Y2 ≤ 3
Y1 + Y2 ≤ 2
Y1, Y2 ≥ 0
Having written the dual problem in standard form, we now solve it with the simplex method. Let Y3 and Y4 be the slack variables for the respective functional constraints. We obtain the initial tableau.
Tableau 1
Basic Y1 Y2 Y3 Y4 RHS
P -6 -4 0 0 0
Y3 2 1 1 0 3
Y4 1 1 0 1 2
We see from the tableau that the pivot column is the Y1 column. The quotients are 3/2 = 1.5 and 2/1 = 2. Hence the Y3 row is the pivot row. Thus Y1 is the entering variable, which replaces Y3, the leaving variable. The pivot element at the intersection of the pivot row and pivot column is 2. Performing the Gauss reduction to update the tableau, we obtain Tableau 2 given below.
Tableau 2
Basic Y1 Y2 Y3 Y4 RHS
P 0 -1 3 0 9
Y1 1 1/2 1/2 0 3/2
Y4 0 1/2 -1/2 1 1/2
We deduce that the current solution is not optimal, since Y2 still has a negative coefficient in the P row. Updating once more, we obtain Tableau 3 given below.

Tableau 3
Basic Y1 Y2 Y3 Y4 RHS
P 0 0 2 2 10
Y1 1 0 1 -1 1
Y2 0 1 -1 2 1
The current solution is optimal since all the coefficients in the first row (P) are non-negative.
We now extract the solution of the primal problem from the final simplex tableau of the dual problem. The optimal objective value is P = C = 10. Since the final tableau above is for the dual problem, recall that in transposing the primal problem the objective coefficients of the original variables became the right-hand sides of the dual constraints. This means that each original variable corresponds to a slack variable of the dual: the optimal values of the original variables are read from the P row under the slack columns Y3 and Y4, giving X1 = 2 and X2 = 2. The dual solution itself is Y1 = 1, Y2 = 1. Note that if we substitute the basic variables of the dual problem into the dual objective function we have: P = 6Y1 + 4Y2 = (6)(1) + (4)(1) = 10.
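As an illustrative cross-check (assuming a recent SciPy with the HiGHS solver, which also reports dual values through the marginals attribute), the primal of Example (1.0) can be solved directly and its dual values compared with Y1 = Y2 = 1 found above.

# Illustrative cross-check of Example (1.0) with SciPy (not part of the original text).
from scipy.optimize import linprog

# Primal: minimize 3X1 + 2X2  s.t.  2X1 + X2 >= 6,  X1 + X2 >= 4,  X1, X2 >= 0.
# linprog expects <= constraints, so the ">=" rows are multiplied by -1.
res = linprog(c=[3, 2], A_ub=[[-2, -1], [-1, -1]], b_ub=[-6, -4], method="highs")
print(res.x, res.fun)              # expected [2. 2.] and 10.0, matching the tableau above
print(res.ineqlin.marginals)       # dual values; their absolute values are (1, 1) = (Y1, Y2)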

3.3 Dual Graphical Method


The graphical method of solving a linear programming problem is used when there are only two
decision variables. If the problem has three or more variables, the graphical method is not
suitable. There are some important definitions and concepts that are used in the methods of
solving linear programming problems.
1. Solution: A set of values of decision variables satisfying all the constraints of a linear
programming problem is called a solution to that problem.
2. Feasible solution: Any solution which also satisfies the non-negativity restrictions of
the problem is called a feasible solution.
3. Optimal feasible solution: Any feasible solution which maximizes or minimizes the
objective function is called an optimal feasible solution.
4. Feasible region: The common region determined by all the constraints and non-negativity restrictions of an LPP is called the feasible region.

5. Corner point: A corner point of a feasible region is a point in the feasible region that is
the intersection of two boundary lines.
In example (1.0), the decision variables X1 and X2 of the primal problem correspond to the slack variables of the dual problem. The primal can also be solved graphically. The objective function is Minimize C = 3X1 + 2X2, the constraints are 2X1 + X2 ≥ 6 and X1 + X2 ≥ 4, and the non-negativity constraints are X1, X2 ≥ 0.
The boundary of the feasible region consists of the lines obtained by changing the inequalities into equalities. The lines are:
2X1 + X2 = 6……………….. (1)
X1 + X2 = 4………………… (2)
In equation (1), Let X1 = 0 then X2 = 6
Let X2 = 0 then X1 = 3
In equation (2), Let X1 = 0 then X2 = 4
Let X2 = 0 then X1 = 4

Figure 4: Graphical Solution of the Model

So, the corner points (or extreme points) of the feasible region and their corresponding objective function values are:
Extreme points    Cost (C = 3X1 + 2X2)
(0, 6)            12
(2, 2)            10
(4, 0)            12
We therefore deduce that the optimal solution is X1 = 2, X2 = 2, corresponding to a minimum cost C = 10. This agrees with the optimal value P = 10 obtained from the dual in the previous section.
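A tiny numeric check (an illustration only, assuming NumPy is available): the intersection of the two constraint lines reproduces the optimal point (2, 2) and the objective value 10 obtained from the dual tableau.

# Illustrative check of the corner-point calculation.
import numpy as np

x = np.linalg.solve([[2, 1], [1, 1]], [6, 4])   # intersection of 2X1 + X2 = 6 and X1 + X2 = 4
print(x, 3 * x[0] + 2 * x[1])                   # [2. 2.] and objective value 10.0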

3.4 Sensitivity Analysis


Sensitivity analysis is a method used to determine how different values of an independent variable affect a specific dependent variable under a given set of assumptions. It is used within specific boundaries that depend on one or more input variables, such as the effect that changes in interest rates will have on a bond's price. Sensitivity analysis is a way to predict the outcome of a decision if a situation turns out to be different from the key predictions. For example, many people use it to determine what the monthly payments for a loan will be under different interest rates or loan periods, or to determine break-even points based on different assumptions. Spreadsheet software, such as Excel, is a common tool for performing sensitivity analysis. Sensitivity analysis can be used for any activity or system. It works on a simple principle: change the model and observe the behavior. There are different methods to carry out sensitivity analysis:

 Modeling and simulation techniques

 Scenario management tools through Microsoft Excel

There are mainly two approaches to analyzing sensitivity:

 Local Sensitivity Analysis

 Global Sensitivity Analysis

Local sensitivity analysis is derivative based (numerical or analytical). The term local indicates that the derivatives are taken at a single point. This strategy is well suited to simple cost functions, but it is not feasible for complex models, since models with discontinuities do not always have derivatives. Mathematically, the sensitivity of the cost function with respect to certain parameters is equal to the partial derivative of the cost function with respect to those parameters. Local sensitivity analysis is a one-at-a-time (OAT) technique that analyzes the impact of one parameter on the cost function at a time, keeping the other parameters fixed.
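As a hedged illustration of the OAT idea (the cost model and numbers below are invented purely for demonstration), each partial derivative can be approximated by perturbing one parameter at a time with a finite difference.

# Illustrative one-at-a-time (OAT) local sensitivity sketch; the cost model is hypothetical.
def cost(params):
    wood, metal = params
    return 80 * wood + 16 * metal               # made-up linear cost model for demonstration

def local_sensitivity(f, params, h=1e-6):
    base = f(params)
    sens = []
    for i in range(len(params)):
        perturbed = list(params)
        perturbed[i] += h                       # perturb one parameter, keep the others fixed
        sens.append((f(perturbed) - base) / h)  # forward-difference estimate of the derivative
    return sens

print(local_sensitivity(cost, [100, 60]))       # approx [80, 16]: per-unit impact of each parameter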

Global sensitivity analysis is the second approach to sensitivity analysis, often implemented
using Monte Carlo techniques. This approach uses a global set of samples to explore the design
space.
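A correspondingly small Monte Carlo sketch (again purely illustrative, with assumed parameter ranges and the same hypothetical model as above) samples the whole parameter space and asks how much of the output variation each input explains.

# Illustrative global (Monte Carlo) sensitivity sketch with assumed input ranges.
import numpy as np

rng = np.random.default_rng(0)
wood = rng.uniform(80, 120, size=10_000)        # assumed range for the wood parameter
metal = rng.uniform(40, 80, size=10_000)        # assumed range for the metal parameter
output = 80 * wood + 16 * metal                 # same hypothetical model as in the OAT sketch

for name, sample in (("wood", wood), ("metal", metal)):
    r = np.corrcoef(sample, output)[0, 1]
    print(name, round(r ** 2, 3))               # squared correlation: crude share of output variance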
One of the key applications of sensitivity analysis is in the use of models by managers and decision makers. The full content of a decision model can be exploited only through the repeated application of sensitivity analysis, which helps decision analysts understand the uncertainties, the pros and cons, and the limitations and scope of a decision model.
Most, if not all, decisions are made under uncertainty, and the optimal solution is computed for parameter values that are only approximations. One approach is to replace all the uncertain parameters with expected values and then carry out sensitivity analysis. It is valuable for a decision maker to have some indication of how sensitive the chosen course of action is to changes in one or more inputs.

Uses of Sensitivity Analysis

 The key application of sensitivity analysis is to indicate the sensitivity of a simulation to uncertainties in the input values of the model.
 It helps in decision making.
 Sensitivity analysis is a method for predicting the outcome of a decision if a situation turns out to be different from the key predictions.
 It helps in assessing the riskiness of a strategy.
 It helps in identifying how dependent the output is on a particular input value; analyzing this dependency in turn helps in assessing the associated risk.
 It helps in taking informed and appropriate decisions.

For example, consider the problem:
Maximize P = 6Y1 + 4Y2
Subject to: 2Y1 + Y2 ≤ 4
Y1, Y2 ≥ 0
In this example the decision variables are Y1 and Y2, the objective function is Maximize P = 6Y1 + 4Y2, the only functional constraint is 2Y1 + Y2 ≤ 4, and the non-negativity constraints are Y1, Y2 ≥ 0.

The boundary of the feasible region consists of the line obtained by changing the inequality into an equality:
2Y1 + Y2 = 4……………….. (1)
In equation (1), let Y1 = 0, then Y2 = 4;
let Y2 = 0, then Y1 = 2.
Evaluating the objective at the corner points (0, 0), (2, 0) and (0, 4) gives P = 0, 12 and 16, so the optimal solution is Y1 = 0, Y2 = 4, corresponding to a profit P = 16.

If we change the constraint to 2Y1 + Y2 ≤ 6, the line becomes:
2Y1 + Y2 = 6……………….. (1)
In equation (1), let Y1 = 0, then Y2 = 6;
let Y2 = 0, then Y1 = 3.
Now the optimal solution is Y1 = 0, Y2 = 6, corresponding to a profit P = 24. So we see that when we change the value of an input (the right-hand side of the constraint), the output (profit) changes: here the profit rises by 4 for every extra unit of the resource, which is exactly the shadow price (dual value) of the constraint. Studying how the output responds to such changes is what sensitivity analysis does.
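The same experiment can be scripted (an illustration only, assuming SciPy's linprog with the HiGHS solver): re-solve the problem for the two right-hand-side values and observe how the optimal profit responds.

# Illustrative sensitivity check: vary the right-hand side of the single constraint.
from scipy.optimize import linprog

for rhs in (4, 6):
    # maximize 6Y1 + 4Y2 subject to 2Y1 + Y2 <= rhs  (linprog minimizes, so negate the objective)
    res = linprog(c=[-6, -4], A_ub=[[2, 1]], b_ub=[rhs], method="highs")
    print(rhs, res.x, -res.fun)    # expected: 4 -> [0. 4.] 16.0 and 6 -> [0. 6.] 24.0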

4. Conclusion
Above, I have discussed duality theory in linear programming. The theory of duality is a very elegant and important concept within the field of Management Science. This theory was first developed in relation to linear programming, but it has many applications, and perhaps an even more natural and intuitive interpretation, in several related areas such as nonlinear programming and networks. The notion of duality within linear programming asserts that every linear program has associated with it a related linear program called its dual. The original problem in relation to its dual is termed the primal. The relationship between the primal and its dual, on both a mathematical and an economic level, is truly the essence of duality theory. Every linear
programming problem has associated with it a dual linear programming problem. There are a
number of very useful relationships between the primal problem and its dual problem that
enhance the ability to analyze the primal problem. For example, the economic interpretation of
the dual problem gives shadow prices that measure the marginal value of the resources in the
primal problem and provides an interpretation of the simplex method. Because the simplex
method can be applied directly to either problem in order to solve both of them simultaneously,
considerable computational effort sometimes can be saved by dealing directly with the dual
problem. Duality theory, including the dual simplex method for working with super optimal
basic solutions, also plays a major role in sensitivity analysis. The values used for the parameters
of a linear programming model generally are just estimates. Therefore, sensitivity analysis needs
to be performed to investigate what happens if these estimates are wrong. The general objectives
are to identify the sensitive parameters that affect the optimal solution, to try to estimate these
sensitive parameters more closely, and then to select a solution that remains good over the range
of likely values of the sensitive parameters. This analysis is a very important part of most linear
programming studies.

