(EE3006/3006A)
Unit 7A. Sensitivity Analysis
1. Why do sensitivity analysis
2. The matrix form of linear programs
3. Some basic sensitivity analysis problems
Unit 7B. Nonlinear Programming
1. Introduction
2. One-variable unconstrained optimization
Newton’s Method
3. Multivariable unconstrained optimization
Gradient Search Method
7A.1 Why do sensitivity analysis?
The scope of linear programming (LP) does not end at finding the optimal
solution. Continued analysis of the optimal solution can provide additional
insight into the model or problem;
Re-examination of optimal solutions is especially useful, as we live in a
dynamic world where the costs of scarce resources change constantly.
Sensitivity analysis (SA)
Sensitivity analysis examines how sensitive the optimal
solution is to a change in the coefficients of the LP model;
This process is also known as post-optimality analysis
because it starts after an optimal solution has been found;
It is a very useful approach for in-depth analysis and
prospective predictions.
7A.2 The matrix form of linear programs
A linear programming (LP) problem in general form is:
Maximize: z = CᵀX,
subject to: AX ≤ B,
with X ≥ 0,
where X is the column vector of unknowns,
Cᵀ is the row vector of the corresponding costs (the cost vector),
A is the coefficient matrix of the constraints (the matrix of technological coefficients),
and B is the column vector of the right-hand sides of the constraints (the requirement vector).
Its augmented form can be written as:
Maximize: z = CᵀX,
subject to: AX + IXs = B,
with X, Xs ≥ 0,
where Xs is the column vector of slack variables.
Example A1:
An LP problem is given as:
Maximize: Z = 20x1 + 10x2 (sales revenue)
subject to: x1 + 2x2 ≤ 40 (material requirement)
3x1 + 2x2 ≤ 60 (work-hour requirement)
and x1 ≥ 0, x2 ≥ 0.
It can be written in the standard augmented form as:
Maximize: Z = 20x1 + 10x2
subject to: x1 + 2x2 + x3 = 40
3x1 + 2x2 + x4 = 60
and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0.
The solution of this problem in (modified) tabular form is:
Basic variable | z | x1 (20) | x2 (10) | x3 (0) | x4 (0) | r.h.s.
z              | 1 | -20     | -10     | 0      | 0      | 0
x3 (0)         | 0 | 1       | 2       | 1      | 0      | 40
x4 (0)         | 0 | 3       | 2       | 0      | 1      | 60

After one simplex iteration (x1 enters the basis, x4 leaves):

Basic variable | z | x1 (20) | x2 (10) | x3 (0) | x4 (0) | r.h.s.
z              | 1 | 0       | 10/3    | 0      | 20/3   | 400
x3 (0)         | 0 | 0       | 4/3     | 1      | -1/3   | 20
x1 (20)        | 0 | 1       | 2/3     | 0      | 1/3    | 20
Since the first row of the above tableau has no negative elements, the optimal
solution is found to be: Z = 400, when x1 = 20, x2 = 0.
Check: Z = 20×20 + 10×0 = 400.
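The same optimum can also be checked numerically. Below is a minimal sketch (assuming SciPy is available; the variable names are mine) that feeds the matrix form of Example A1 to scipy.optimize.linprog, negating the objective because linprog minimizes:

```python
import numpy as np
from scipy.optimize import linprog

# Example A1 in matrix form: maximize C^T X subject to A X <= B, X >= 0
C = np.array([20.0, 10.0])           # prices of the two products
A = np.array([[1.0, 2.0],            # material requirement
              [3.0, 2.0]])           # work-hour requirement
B = np.array([40.0, 60.0])

# linprog minimizes, so the objective is negated to obtain a maximization
res = linprog(c=-C, A_ub=A, b_ub=B, bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)               # expected: x = [20, 0], Z = 400
```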
=> Understanding the different parts of the tabular form in terms of vectors and matrices:

Basic variable | z | x1 (20) | x2 (10) | x3 (0) | x4 (0) | r.h.s.
z              | 1 | -20     | -10     | 0      | 0      | 0
x3 (0)         | 0 | 1       | 2       | 1      | 0      | 40
x4 (0)         | 0 | 3       | 2       | 0      | 1      | 60

In this original tableau, the z-row contains [1, -Cᵀ, 0, 0] and the rows of the basic variables contain [0, A, I, B].

After algebraic operations:

Basic variable | z | x1 (20) | x2 (10) | x3 (0) | x4 (0) | r.h.s.
z              | 1 | 0       | 10/3    | 0      | 20/3   | 400
x3 (0)         | 0 | 0       | 4/3     | 1      | -1/3   | 20
x1 (20)        | 0 | 1       | 2/3     | 0      | 1/3    | 20

In the final tableau, the z-row contains [1, CbᵀS⁻¹A - Cᵀ, CbᵀS⁻¹, CbᵀS⁻¹B] and the rows of the basic variables contain [0, S⁻¹A, S⁻¹, S⁻¹B].
Insight into the LP tabular formulation

(Original table)
Basic variable | z | Original variables x1 … xn | Slack variables xn+1 … xn+m | r.h.s.
z              | 1 | -Cᵀ                        | 0                           | 0
XB (Cb)        | 0 | A                          | I                           | B

After algebraic operations:

(New table)
Basic variable | z | Original variables x1 … xn | Slack variables xn+1 … xn+m | r.h.s.
z              | 1 | CbᵀS⁻¹A - Cᵀ               | CbᵀS⁻¹                      | CbᵀS⁻¹B
XB (Cb)        | 0 | S⁻¹A                       | S⁻¹                         | S⁻¹B

S⁻¹ is the m×m matrix produced by the arithmetic (row) operations; it appears under the slack-variable columns.
Cbᵀ is the row vector of the costs of the basic variables.
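For Example A1 these blocks can be reproduced directly, since S⁻¹ is simply the inverse of the basis columns of [A | I]. A small NumPy sketch (names and layout are my own) that recovers the z-row and body of the optimal tableau:

```python
import numpy as np

C    = np.array([20.0, 10.0])                # costs of the original variables
A    = np.array([[1.0, 2.0], [3.0, 2.0]])    # technological coefficients
B    = np.array([40.0, 60.0])                # requirement vector (r.h.s.)
Aug  = np.hstack([A, np.eye(2)])             # [A | I], columns x1, x2, x3, x4
cost = np.array([20.0, 10.0, 0.0, 0.0])      # costs of x1, x2, x3, x4

basis = [2, 0]                               # optimal basis: x3 (index 2) and x1 (index 0)
S_inv = np.linalg.inv(Aug[:, basis])         # S^-1 = inverse of the basis columns
Cb    = cost[basis]                          # costs of the basic variables

print(Cb @ S_inv @ A - C)                    # z-row under x1, x2:  [0, 10/3]
print(Cb @ S_inv)                            # z-row under slacks:  [0, 20/3]
print(Cb @ S_inv @ B)                        # optimal objective Z: 400
print(S_inv @ A, S_inv @ B)                  # tableau body [[0, 4/3], [1, 2/3]] and r.h.s. [20, 20]
```

Running it reproduces exactly the entries of the optimal tableau shown above.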
7A.3 Some basic sensitivity analysis problems
=> to find the effects of some potential changes on the optimal solution
of an LP problem:
i) Change in the cost coefficients of the objective function.
ii) Changes in the availability of resources or limits on demands
(requirements vector or RHS of constraints).
iii) Changes in the technological coefficients of variables in
constraints.
iv) Addition of a new variable (of product or activity).
v) Addition of a new constraint.
Case I: modification of the cost vector Cᵀ
i) the cost coefficient of a nonbasic variable is modified,
e.g., the cost coefficient of x2 is modified from 10 to 15:
(Recall the transformed tableau: the z-row is [1, CbᵀS⁻¹A - Cᵀ, CbᵀS⁻¹, CbᵀS⁻¹B] and the basic-variable rows are [0, S⁻¹A, S⁻¹, S⁻¹B].)
If x1 and x2 are the numbers of two products to be manufactured in a company and
20 and 10 are their prices, the sensitivity analysis problem is to analyze the
influence of the potential price rise of product II from 10 to 15.
i) the cost coefficient of a nonbasic variable is modified,
e.g., the cost coefficient of x2 is modified from 10 to 15:

Basic variable | z | x1 (20) | x2 (10) | x3 (0) | x4 (0) | r.h.s.
z              | 1 | 0       | 10/3    | 0      | 20/3   | 400
x3 (0)         | 0 | 0       | 4/3     | 1      | -1/3   | 20
x1 (20)        | 0 | 1       | 2/3     | 0      | 1/3    | 20

Only the reduced cost of x2 in the z-row is affected: 10/3 - (15 - 10) = -5/3.

Basic variable | z | x1 (20) | x2 (15) | x3 (0) | x4 (0) | r.h.s.
z              | 1 | 0       | -5/3    | 0      | 20/3   | 400
x3 (0)         | 0 | 0       | 4/3     | 1      | -1/3   | 20
x1 (20)        | 0 | 1       | 2/3     | 0      | 1/3    | 20
Since the z-row now contains a negative entry (-5/3 under x2), the current basis is no longer optimal. Pivoting with x2 entering the basis and x3 leaving gives:

Basic variable | z | x1 (20) | x2 (15) | x3 (0) | x4 (0) | r.h.s.
z              | 1 | 0       | 0       | 5/4    | 25/4   | 425
x2 (15)        | 0 | 0       | 1       | 3/4    | -1/4   | 15
x1 (20)        | 0 | 1       | 0       | -1/2   | 1/2    | 10
The optimal solution becomes Z= 425, when x1=10, x2 = 15.
Check: Z = 20×10 + 15×15 = 425.
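This case can be reproduced in a few lines: only the reduced cost of the nonbasic variable x2 changes, and since it becomes negative the problem must be re-optimized. A minimal sketch (variable names are mine; SciPy is used only for the confirming re-solve):

```python
import numpy as np
from scipy.optimize import linprog

# z-row entries under the slack columns of the optimal tableau (Cb^T S^-1)
CbT_Sinv = np.array([0.0, 20.0 / 3.0])
A2, c2_new = np.array([1.0, 2.0]), 15.0      # column of x2 and its new cost
print(CbT_Sinv @ A2 - c2_new)                # -5/3 < 0: the current basis is no longer optimal

# Re-solve with the modified cost vector to confirm the new optimum
res = linprog(c=[-20.0, -15.0], A_ub=[[1, 2], [3, 2]], b_ub=[40, 60],
              bounds=[(0, None)] * 2, method="highs")
print(res.x, -res.fun)                       # expected: x = [10, 15], Z = 425
```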
ii) the cost coefficient of a basic variable is modified,
e.g., the cost coefficient of x1 is modified from 20 to 10:
If x1 and x2 are the numbers of two products to be manufactured in a company and
20 and 10 are their prices, the sensitivity analysis problem is to analyze the influence
of the potential price drop of product I from 20 to 10.
ii) the cost coefficient of a basic variable is modified,
e.g., the cost coefficient of x1 is modified from 20 to 10:

Basic variable | z | x1 (20) | x2 (10) | x3 (0) | x4 (0) | r.h.s.
z              | 1 | 0       | 10/3    | 0      | 20/3   | 400
x3 (0)         | 0 | 0       | 4/3     | 1      | -1/3   | 20
x1 (20)        | 0 | 1       | 2/3     | 0      | 1/3    | 20

Because x1 is basic, its new cost changes Cb to (0, 10) and the z-row must be recomputed:
x2 column: 0×(4/3) + 10×(2/3) - 10 = -10/3;
x4 column: 0×(-1/3) + 10×(1/3) = 10/3;
r.h.s.: 0×20 + 10×20 = 200.

Basic variable | z | x1 (10) | x2 (10) | x3 (0) | x4 (0) | r.h.s.
z              | 1 | 0       | -10/3   | 0      | 10/3   | 200
x3 (0)         | 0 | 0       | 4/3     | 1      | -1/3   | 20
x1 (10)        | 0 | 1       | 2/3     | 0      | 1/3    | 20
Again the z-row contains a negative entry (-10/3 under x2), so pivot with x2 entering the basis and x3 leaving:

Basic variable | z | x1 (10) | x2 (10) | x3 (0) | x4 (0) | r.h.s.
z              | 1 | 0       | 0       | 5/2    | 5/2    | 250
x2 (10)        | 0 | 0       | 1       | 3/4    | -1/4   | 15
x1 (10)        | 0 | 1       | 0       | -1/2   | 1/2    | 10
The optimal solution becomes Z = 250, when x1=10, x2 = 15.
Check: Z = 10×10 + 10×15 = 250.
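Because x1 is basic, changing its cost changes Cb, so the whole z-row has to be recomputed before deciding whether to re-optimize. A minimal NumPy/SciPy sketch of that recomputation (names are mine):

```python
import numpy as np
from scipy.optimize import linprog

A     = np.array([[1.0, 2.0], [3.0, 2.0]])
B     = np.array([40.0, 60.0])
C     = np.array([10.0, 10.0])               # x1's cost dropped from 20 to 10
S_inv = np.array([[1.0, -1.0 / 3.0],         # unchanged basis columns (x3, x1)
                  [0.0,  1.0 / 3.0]])
Cb    = np.array([0.0, 10.0])                # new costs of the basic variables (x3, x1)

print(Cb @ S_inv @ A - C)                    # [0, -10/3]: x2 should enter the basis
print(Cb @ S_inv, Cb @ S_inv @ B)            # [0, 10/3] and Z = 200 before re-optimizing

res = linprog(c=-C, A_ub=A, b_ub=B, bounds=[(0, None)] * 2, method="highs")
print(res.x, -res.fun)                       # expected: x = [10, 15], Z = 250
```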
Case II: modification of the technological coefficients A
e.g., the technological coefficients of x2
are changed from (2, 2)ᵀ to (2, 1)ᵀ:
The problem is to estimate the potential impact if the production line for product II
is improved so that its work-hour requirement drops from 2 hours to 1 hour per product.
e.g., the technological coefficients of x2 are changed from (2, 2)ᵀ to (2, 1)ᵀ:

Basic variable | z | x1 (20) | x2 (10) | x3 (0) | x4 (0) | r.h.s.
z              | 1 | 0       | 10/3    | 0      | 20/3   | 400
x3 (0)         | 0 | 0       | 4/3     | 1      | -1/3   | 20
x1 (20)        | 0 | 1       | 2/3     | 0      | 1/3    | 20

(The x3 and x4 columns of this tableau contain S⁻¹.)
Then, the new technological coefficients of x2 in the optimal tableau are

S⁻¹A2 = [ 1  -1/3 ] [ 2 ]  =  [ 5/3 ]
        [ 0   1/3 ] [ 1 ]     [ 1/3 ]
The corresponding cost coefficient (reduced cost) is

CbᵀS⁻¹A2 - c2 = [ 0  20 ] [ 5/3 ]  -  10  =  20/3 - 10  =  -10/3
                          [ 1/3 ]
Then, the new simplex tableau is:

Basic variable | z | x1 (20) | x2 (10) | x3 (0) | x4 (0) | r.h.s.
z              | 1 | 0       | -10/3   | 0      | 20/3   | 400
x3 (0)         | 0 | 0       | 5/3     | 1      | -1/3   | 20
x1 (20)        | 0 | 1       | 1/3     | 0      | 1/3    | 20

Pivoting with x2 entering the basis and x3 leaving restores optimality:

Basic variable | z | x1 (20) | x2 (10) | x3 (0) | x4 (0) | r.h.s.
z              | 1 | 0       | 0       | 2      | 6      | 440
x2 (10)        | 0 | 0       | 1       | 3/5    | -1/5   | 12
x1 (20)        | 0 | 1       | 0       | -1/5   | 2/5    | 16
The optimal solution becomes Z = 440, when x1=16, x2 = 12.
Check: Z = 20×16 + 10×12 = 440.
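Since x2 is nonbasic, only its column needs to be priced out with the existing S⁻¹. A short sketch (names are mine; SciPy is used only to confirm the new optimum):

```python
import numpy as np
from scipy.optimize import linprog

S_inv = np.array([[1.0, -1.0 / 3.0],
                  [0.0,  1.0 / 3.0]])
Cb = np.array([0.0, 20.0])                   # costs of the basic variables (x3, x1)
A2_new, c2 = np.array([2.0, 1.0]), 10.0      # new column of x2 and its (unchanged) cost

print(S_inv @ A2_new)                        # [5/3, 1/3]: new tableau column of x2
print(Cb @ S_inv @ A2_new - c2)              # -10/3 < 0: x2 should enter the basis

res = linprog(c=[-20.0, -10.0], A_ub=[[1, 2], [3, 1]], b_ub=[40, 60],
              bounds=[(0, None)] * 2, method="highs")
print(res.x, -res.fun)                       # expected: x = [16, 12], Z = 440
```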
Case III: addition of a new variable
e.g., a new variable xk needs to be added to the original problem:
Maximize: Z = 20x1 + 10x2 + 30xk
subject to: x1 + 2x2 + 3xk ≤ 40
3x1 + 2x2 + xk ≤ 60
with all variables nonnegative.
The problem is to estimate the potential impact if the company introduces a new
product with a higher price.
Then, the new technological coefficients of xk in the optimal tableau are

S⁻¹Ak = [ 1  -1/3 ] [ 3 ]  =  [ 8/3 ]
        [ 0   1/3 ] [ 1 ]     [ 1/3 ]

The corresponding cost coefficient (reduced cost) is

CbᵀS⁻¹Ak - ck = [ 0  20 ] [ 8/3 ]  -  30  =  20/3 - 30  =  -70/3
                          [ 1/3 ]
Since the reduced cost of xk is negative (-70/3), xk enters the basis; after pivoting (xk enters, x3 leaves), the tableau becomes optimal:

Basic variable | z | x1 (20) | x2 (10) | xk (30) | x3 (0) | x4 (0) | r.h.s.
z              | 1 | 0       | 15      | 0       | 35/4   | 15/4   | 575
xk (30)        | 0 | 0       | 1/2     | 1       | 3/8    | -1/8   | 15/2
x1 (20)        | 0 | 1       | 1/2     | 0       | -1/8   | 3/8    | 35/2
The optimal solution becomes Z = 575, when x1=17.5, x2 = 0, xk=7.5.
Check: Z = 20×17.5 + 10×0 + 30×7.5 = 575.
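Adding a new activity is handled the same way: price out the new column with the existing S⁻¹ and check the sign of its reduced cost. A sketch (names are mine):

```python
import numpy as np
from scipy.optimize import linprog

S_inv = np.array([[1.0, -1.0 / 3.0],
                  [0.0,  1.0 / 3.0]])
Cb = np.array([0.0, 20.0])                   # costs of the basic variables (x3, x1)
Ak, ck = np.array([3.0, 1.0]), 30.0          # column and cost of the new variable xk

print(S_inv @ Ak)                            # [8/3, 1/3]: tableau column of xk
print(Cb @ S_inv @ Ak - ck)                  # -70/3 < 0: xk is worth bringing into the basis

res = linprog(c=[-20.0, -10.0, -30.0],
              A_ub=[[1, 2, 3], [3, 2, 1]], b_ub=[40, 60],
              bounds=[(0, None)] * 3, method="highs")
print(res.x, -res.fun)                       # expected: x1 = 17.5, x2 = 0, xk = 7.5, Z = 575
```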
Unit 7B. Nonlinear Programming
1. Introduction
2. One-variable unconstrained optimization
Newton’s method
3. Multivariable unconstrained optimization
Gradient search method
7B.1 Introduction
In general, a nonlinear programming problem is to find x = (x1, x2, …, xn) so as to
Maximize: f(x),
subject to:
gi(x) ≤ bi, for i = 1, 2, …, m,
and x ≥ 0,
where f(x) and the gi(x) are given functions of the n decision variables.
Nonlinear programming is a very large subject and includes many types
of algorithms.
We study some basic approaches for solving certain important types of
nonlinear programming problems.
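As a concrete illustration of this general form, such a problem can be handed to a general-purpose NLP solver. The small objective and constraint below are my own example (not from the lecture); the sketch assumes SciPy's SLSQP method:

```python
from scipy.optimize import minimize

# Illustrative problem (my own example): maximize f(x) = x1*x2
# subject to g(x) = x1^2 + x2^2 <= 8 and x >= 0.
f = lambda x: -(x[0] * x[1])                 # negate f so that minimizing -f maximizes f
g = {"type": "ineq",                         # "ineq" means the given function must be >= 0
     "fun": lambda x: 8 - x[0] ** 2 - x[1] ** 2}

res = minimize(f, x0=[1.0, 1.0], method="SLSQP",
               bounds=[(0, None), (0, None)], constraints=[g])
print(res.x, -res.fun)                       # expected: x close to [2, 2], f close to 4
```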
Unconstrained optimization
Unconstrained optimization problems have no constraints; the objective is simply to
Maximize: f(x)
over all values of x = (x1, x2, …, xn).
• When f(x) is a differentiable function, the necessary condition for a particular
solution x = x* to be optimal is
∂f/∂xi = 0 at x = x*, for i = 1, 2, …, n.
For a nonlinear function f(x), these derivatives are usually nonlinear as well, so the
equations generally cannot be solved analytically; one therefore needs to use an
algorithmic search method to find x*.
7B.2 One-variable Unconstrained Optimization
• The simplest case of unconstrained optimization involves just one variable x
(n = 1), where the differentiable function f(x) to be maximized is concave.
• The necessary and sufficient condition for a particular solution x = x* to be
optimal is
df(x)/dx = 0 at x = x*.
Newton’s Method
=> assume that f(x) is concave; the solution is a maximum, which can be
determined by setting the first derivative to zero.
The first derivative f’(x) at xi+1 can be expressed using a Taylor series expansion as
f’(xi+1) ≈ f’(xi) + f”(xi)(xi+1 - xi),
where xi is the current trial point and xi+1 is the next one.
Setting the first derivative on the left side to zero yields
0= f’(xi) + f”(xi)(xi+1 - xi),
which leads algebraically to the iteration formula
xi+1 = xi - f’(xi)/f”(xi).
Steps of Newton’s Method
[Figure: the tangent of f(x) at xk, showing x*, xk+1 and xk on the x-axis.]
* Example B1:
Maximize the function
f(x) = 12x - 3x⁴ - 2x⁶
by using Newton's method. Choose x0 = 1 as the initial trial solution, and stop
when |xi+1 - xi| < ε = 0.0001.
Iteration i | xi      | f(xi)  | f’(xi)  | f”(xi)  | xi+1    | |xi+1 - xi|
0           | 1       | 7      | -12     | -96     | 0.875   | 0.125
1           | 0.875   | 7.8439 | -2.1940 | -62.733 | 0.84003 | 0.03497
2           | 0.84003 | 7.8838 | -0.1325 | -55.279 | 0.83763 | 0.0024
3           | 0.83763 | 7.8839 | -0.0006 | -54.790 | 0.83762 | 0.00001
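The table can be reproduced with a short script; a minimal sketch of the iteration xi+1 = xi - f’(xi)/f”(xi) for Example B1 (the function and helper names are mine):

```python
def newton_max(df, d2f, x, eps=1e-4, max_iter=50):
    """Newton's method for a one-variable maximum: iterate x <- x - f'(x)/f''(x)."""
    for _ in range(max_iter):
        x_new = x - df(x) / d2f(x)
        if abs(x_new - x) <= eps:            # stopping rule on the step size
            return x_new
        x = x_new
    return x

# Example B1: f(x) = 12x - 3x^4 - 2x^6
df  = lambda x: 12 - 12 * x**3 - 12 * x**5   # f'(x)
d2f = lambda x: -36 * x**2 - 60 * x**4       # f''(x)

print(newton_max(df, d2f, x=1.0))            # ~0.83762, matching the last row of the table
```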
7B.3 Multivariable Unconstrained Optimization
It is the problem of maximizing a concave function f(x) with multiple
variables x=(x1, x2, …, xn) when there are no constraints on the
feasible values.
The necessary and sufficient condition for optimality is that the
respective partial derivatives all equal zero:
∂f/∂xi = 0 at x = x*, for i = 1, 2, …, n,
which usually cannot be solved analytically for a nonlinear
function f(x). Therefore, a numerical search method needs to be used.
Definition of gradient
The gradient ∇f(x) is the vector of partial derivatives (∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn).
Example:
f = 15x1 + 2(x2)³ + 3x1(x3)²,
∇f = (15 + 3(x3)², 6(x2)², 6x1x3).
The mathematical significance of the gradient is that it gives the direction
that maximizes the rate at which f(x) increases for an (infinitesimal)
change in x.
Gradient Search Method
=> Use the gradient to identify the direction of movement from the current trial
solution that maximizes the rate at which f(x) increases.
Rather than taking tiny steps and continually re-evaluating the gradient, a better
approach is to keep moving in this fixed direction from the current trial solution
until f(x) stops increasing; that stopping point becomes the next trial solution.
Steps of Gradient Search Method
[The step-by-step procedure was given as a diagram on the original slide.]
Stopping rule: evaluate the gradient magnitude
‖∇f‖ = sqrt( (∂f/∂x1)² + (∂f/∂x2)² + … ) at x = x^(k+1),
and stop when it is no larger than a small tolerance ε.
Example B2:
With ε0 =2, the stopping rule then says to perform an iteration.
So, x’ = (0, 1/2) + (1/2)·(1, 0) = (1/2, 1/2).
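A minimal sketch of the whole gradient search procedure. The objective of Example B2 is not reproduced on this slide, so the code assumes the textbook function f(x) = 2x1x2 + 2x2 - x1² - 2x2² with starting point (0, 0) (Hillier & Lieberman, Ch. 12), which is consistent with the iterates (0, 1/2) and (1/2, 1/2) shown above; the exact one-dimensional search is replaced by a bounded numerical line search:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gradient_search(f, grad, x0, eps=1e-4, max_iter=100):
    """Move along grad f(x) until f stops increasing; repeat until the gradient is small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:                       # stopping rule on ||grad f||
            break
        # numerical one-dimensional search for the step size t >= 0 along the gradient
        # (the upper bound 10 is an arbitrary cap for this small example)
        t = minimize_scalar(lambda t: -f(x + t * g), bounds=(0.0, 10.0), method="bounded").x
        x = x + t * g
    return x

# Assumed Example B2 objective (concave): f(x) = 2*x1*x2 + 2*x2 - x1^2 - 2*x2^2
f    = lambda x: 2 * x[0] * x[1] + 2 * x[1] - x[0] ** 2 - 2 * x[1] ** 2
grad = lambda x: np.array([2 * x[1] - 2 * x[0], 2 * x[0] + 2 - 4 * x[1]])

print(gradient_search(f, grad, x0=[0.0, 0.0]))             # approaches the maximizer (1, 1)
```

With the assumed objective, the iterates zig-zag from (0, 0) through (0, 1/2) and (1/2, 1/2) toward the maximizer (1, 1).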
[Table: summary of the gradient search procedure.]
[Figure: illustration of the whole gradient search procedure.]
Classwork 7:
Q1. Consider the following linear programming problem:
Maximize: z = 3x1+ 2x2,
Subject to: 4x1 + 3x2 ≤ 128,
x1 + 3x2 ≤ 60,
and x1 ≥ 0, x2 ≥ 0.
1) Solve the problem by using the simplex method in tabular form.
2) If the technological coefficients of x2 are changed from (3, 3)ᵀ to (2, 2)ᵀ, find
the new optimal solution through sensitivity analysis.
(round your answer to 2 decimal places)
Reference:
1. R. Bronson and G. Naadimuthu, Schaum's Outline of Theory and Problems of Operations Research, 2nd ed., McGraw-Hill, 1997. Chapter 4: Linear Programming: Duality and Sensitivity Analysis, pp. 57–61.
2. Frederick S. Hillier and Gerald J. Lieberman, Introduction to Operations Research, 8th ed., McGraw-Hill, 2000. Chapter 12: Nonlinear Programming, pp. 566–571.