Constrained Optimization
Cheng-Liang Chen
PSE
LABORATORY
Department of Chemical Engineering
National Taiwan University
Chen CL 1
Power Generation via Fuel Oil
Process Description
A two-boiler, turbine-generator combination to produce a power
output of 50 MW
Range of operation: [18, 30] MW and [14, 25] MW, respectively
Max supply of Blast Furnace Gas (BFG): 10 fuel units/hr
Fuel used (f ton/h) and power generated (x MW):

    f = a_0 + a_1 x + a_2 x^2

    Generator  Fuel Type   a_0      a_1       a_2
    1          Fuel Oil    1.4609   0.15186   0.001450
               BFG         1.5742   0.16310   0.001358
    2          Fuel Oil    0.8008   0.20310   0.000916
               BFG         0.7266   0.22560   0.000778

Objective: min amount of fuel oil purchased
Power Generation via Fuel Oil
Formulation
Amount of fuel type (j) used by generator (i):

    f_ij = a_ij0 + a_ij1 x_ij + a_ij2 x_ij^2,   i = 1, 2;  j = 1, 2

Power generated by generator i, and the power demand:

    p_i = x_i1 + x_i2,   i = 1, 2
    p_1 + p_2 ≥ 50

Amount of fuel (j) used:

    z_j ≥ Σ_{i=1}^{2} f_ij = Σ_{i=1}^{2} ( a_ij0 + a_ij1 x_ij + a_ij2 x_ij^2 ),   j = 1, 2
NLP Formulation

    min   z_1
    s.t.  p_i = x_i1 + x_i2,   i = 1, 2
          p_1 + p_2 ≥ 50
          z_j ≥ Σ_{i=1}^{2} ( a_ij0 + a_ij1 x_ij + a_ij2 x_ij^2 ),   j = 1, 2
          x_ij ≥ 0,   0 ≤ z_2 ≤ 10,
          18 ≤ p_1 ≤ 30,   14 ≤ p_2 ≤ 25
Power Generation via Fuel Oil
GAMS Code
$TITLE Power Generation via Fuel Oil
$OFFUPPER
$OFFSYMXREF OFFSYMLIST
OPTION SOLPRINT = OFF;
* Define index sets
SETS G Power Generators /gen1*gen2/
F Fuels /oil, gas/
K Constants in Fuel Consumption Equations /0*2/;
* Define and Input the Problem Data
TABLE A(G,F,K) Coefficients in the fuel consumption equations
0 1 2
gen1.oil 1.4609 .15186 .00145
gen1.gas 1.5742 .16310 .001358
gen2.oil 0.8008 .20310 .000916
gen2.gas 0.7266 .22560 .000778;
PARAMETER PMAX(G) Maximum power outputs of generators /
GEN1 30.0, GEN2 25.0/;
PARAMETER PMIN(G) Minimum power outputs of generators /
GEN1 18.0, GEN2 14.0/;
SCALAR GASSUP Maximum supply of BFG in units per h /10.0/
PREQ Total power output required in MW /50.0/;
* Define optimization variables
VARIABLES P(G) Total power output of generators in MW
X(G, F) Power outputs of generators from specific fuels
Z(F) Total Amounts of fuel purchased
OILPUR Total amount of fuel oil purchased;
POSITIVE VARIABLES P, X, Z;
* Define Objective Function and Constraints
EQUATIONS TPOWER Required power must be generated
PWR(G) Power generated by individual generators
OILUSE Amount of oil purchased to be minimized
FUELUSE(F) Fuel usage must not exceed purchase;
TPOWER.. SUM(G, P(G)) =G= PREQ;
PWR(G).. P(G) =E= SUM(F, X(G,F));
FUELUSE(F).. Z(F) =G= SUM((K,G), a(G,F,K)*X(G,F)**(ORD(K)-1));
OILUSE.. OILPUR =E= Z("OIL");
* Impose Bounds and Initialize Optimization Variables
* Upper and lower bounds on P from the operating ranges
P.UP(G) = PMAX(G);
P.LO(G) = PMIN(G);
* Upper bound on BFG consumption from GASSUP
Z.UP("gas") = GASSUP;
* Specify initial values for power outputs
P.L(G) = .5*(PMAX(G)+PMIN(G));
* Define model and solve
MODEL FUELOIL /all/;
SOLVE FUELOIL USING NLP MINIMIZING OILPUR;
DISPLAY X.L, P.L, Z.L, OILPUR.L;
Power Generation via Fuel Oil
Result
Power outputs for generators 1 and 2 are 30 MW and 20 MW
All BFG (10 units/h) is used, since it is essentially free, supplying 36.325 MW of the total power
Purchase 4.681 ton/h of fuel oil to supply the remaining 13.675 MW
Generator 1 operates at the maximum of its operating range
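As a cross-check of the GAMS solution reported above, the same NLP can be sketched with SciPy's SLSQP solver (the solver choice and the variable ordering are assumptions; any NLP code would do):

```python
from scipy.optimize import minimize

# Variable order v = [x11, x12, x21, x22]: power (MW) generator i produces
# from fuel j (j = 1: fuel oil, j = 2: BFG). Coefficients from the table above.
A = {(1, "oil"): (1.4609, 0.15186, 0.001450),
     (1, "gas"): (1.5742, 0.16310, 0.001358),
     (2, "oil"): (0.8008, 0.20310, 0.000916),
     (2, "gas"): (0.7266, 0.22560, 0.000778)}

def fuel(gen, f, x):                       # f_ij = a0 + a1*x + a2*x^2
    a0, a1, a2 = A[(gen, f)]
    return a0 + a1 * x + a2 * x * x

z_oil = lambda v: fuel(1, "oil", v[0]) + fuel(2, "oil", v[2])   # objective z1
z_gas = lambda v: fuel(1, "gas", v[1]) + fuel(2, "gas", v[3])   # BFG use z2

cons = [{"type": "ineq", "fun": lambda v: sum(v) - 50.0},        # power demand
        {"type": "ineq", "fun": lambda v: 10.0 - z_gas(v)},      # z2 <= 10
        {"type": "ineq", "fun": lambda v: v[0] + v[1] - 18.0},   # 18 <= p1
        {"type": "ineq", "fun": lambda v: 30.0 - v[0] - v[1]},   # p1 <= 30
        {"type": "ineq", "fun": lambda v: v[2] + v[3] - 14.0},   # 14 <= p2
        {"type": "ineq", "fun": lambda v: 25.0 - v[2] - v[3]}]   # p2 <= 25

res = minimize(z_oil, x0=[15.0, 15.0, 10.0, 10.0], method="SLSQP",
               bounds=[(0.0, None)] * 4, constraints=cons)
print(round(res.fun, 3), round(res.x[0] + res.x[1], 2), round(res.x[2] + res.x[3], 2))
```

Since every term of z_1 and z_2 is a convex quadratic, the problem is convex and a local solver suffices.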
Alkylation Process Optimization
Process Flowsheet
Alkylation Process Optimization
Notation and Variables
Alkylation Process Optimization
Total Profit

    f(x) = C_1 x_4 x_7 - C_2 x_1 - C_3 x_2 - C_4 x_3 - C_5 x_5        (a)

    C_1 = alkylate product value ($0.063 per octane-barrel)
    C_2 = olefin feed cost ($5.04/barrel)
    C_3 = isobutane recycle cost ($0.035/barrel)
    C_4 = acid addition cost ($10.00 per thousand pounds)
    C_5 = isobutane makeup cost ($3.36/barrel)
Alkylation Process Optimization
Some Regression Models
For reactor temperatures between 80-90 °F and reactor acid strength by weight
percent between 85 and 93:

    x_4 = x_1 (1.12 + 0.13167 x_8 - 0.00667 x_8^2)                     (b)

Alkylate yield x_4 equals the olefin feed x_1 plus the isobutane makeup x_5 less
shrinkage. The volumetric shrinkage can be expressed as 0.22 volume per volume
of alkylate yield:

    x_5 = 1.22 x_4 - x_1                                               (c)

The acid strength by weight percent x_6 could be derived from an equation that
expressed the acid addition rate x_3 as a function of the alkylate yield x_4, the
acid dilution factor x_9, and the acid strength by weight percent x_6 (the added
acid was assumed to have an acid strength of 98%):

    x_6 = 98000 x_3 / (x_4 x_9 + 1000 x_3)                             (d)
The motor octane number x_7 was a function of the external isobutane-to-olefin
ratio x_8 and the acid strength by weight percent x_6 (for the same reactor
temperatures and acid strengths as for the alkylate yield x_4):

    x_7 = 86.35 + 1.098 x_8 - 0.038 x_8^2 + 0.325 (x_6 - 89)           (e)

The external isobutane-to-olefin ratio x_8 was equal to the sum of the isobutane
recycle x_2 and the isobutane makeup x_5 divided by the olefin feed x_1:

    x_8 = (x_2 + x_5) / x_1                                            (f)

The acid dilution factor x_9 could be expressed as a linear function of the F-4
performance number x_10:

    x_9 = 35.82 - 0.222 x_10                                           (g)

The last dependent variable is the F-4 performance number x_10, which was
expressed as a function of the motor octane number x_7:

    x_10 = -133 + 3 x_7                                                (h)
Alkylation Process Optimization
Constrained NLP Problem

    max  f(x) = C_1 x_4 x_7 - C_2 x_1 - C_3 x_2 - C_4 x_3 - C_5 x_5    (a)
    s.t. Eqs. (b)-(h)
    (10 variables, 7 constraints)
Alkylation Process Optimization
GAMS Code
$ TITLE ALKYLATION PROBLEM FROM GINO USERS MANUAL
$ OFFSYMXREF
$ OFFSYMLIST
OPTION LIMROW=0;
OPTION LIMCOL=0;
POSITIVE VARIABLES X1,X2,X3,X4,X5,X6,X7,X8,X9,X10;
VARIABLE OBJ;
EQUATIONS E1,E2,E3,E4,E5,E6,E7,E8;
E1..X4=E=X1*(1.12+.13167*X8-.00667*X8**2);
E2..X7=E=86.35+1.098*X8-.038*X8**2+0.325*(X6-89.);
E3..X9=E=35.82-.222*X10;
E4..X10=E=3*X7-133;
E5..X8*X1=E=X2+X5;
E6..X5=E=1.22*X4-X1;
E7..X6*(X4*X9+1000*X3)=E=98000*X3;
E8.. OBJ =E= 0.063*X4*X7-5.04*X1-0.035*X2-10*X3-3.36*X5;
X1.LO = 0.;
X1.UP = 2000.;
X2.LO = 0.;
X2.UP = 16000.;
X3.LO = 0.;
X3.UP = 120.;
X4.LO = 0.;
X4.UP = 5000.;
X5.LO = 0.;
X5.UP = 2000.;
X6.LO = 85.;
X6.UP = 93.;
X7.LO = 90;
X7.UP = 95;
X8.LO = 3.;
X8.UP = 12.;
X9.LO = 1.2;
X9.UP = 4.;
X10.LO = 145.;
X10.UP = 162.;
X1.L =1745;
X2.L =12000;
X3.L =110;
X4.L =3048;
X5.L =1974;
X6.L =89.2;
X7.L =92.8;
X8.L =8;
X9.L =3.6;
X10.L =145;
MODEL ALKY/ALL/;
SOLVE ALKY USING NLP MAXIMIZING OBJ;
Alkylation Process Optimization
Relaxation of Regression Variables

    max  f(x) = C_1 x_4 x_7 - C_2 x_1 - C_3 x_2 - C_4 x_3 - C_5 x_5     (a)
    s.t. Rx_4 = x_1 (1.12 + 0.13167 x_8 - 0.00667 x_8^2)                (b)
         x_5 = 1.22 x_4 - x_1                                           (c)
         x_6 = 98000 x_3 / (x_4 x_9 + 1000 x_3)                         (d)
         Rx_7 = 86.35 + 1.098 x_8 - 0.038 x_8^2 + 0.325 (x_6 - 89)      (e)
         x_8 x_1 = x_2 + x_5                                            (f)
         Rx_9 = 35.82 - 0.222 x_10                                      (g)
         Rx_10 = -133 + 3 x_7                                           (h)
         0.99 x_4 ≤ Rx_4 ≤ 1.01 x_4
         0.99 x_7 ≤ Rx_7 ≤ 1.01 x_7
         0.99 x_9 ≤ Rx_9 ≤ 1.01 x_9
         0.99 x_10 ≤ Rx_10 ≤ 1.01 x_10
Alkylation Process Optimization
Solution

    f(x^0) = 872.3 (initial guess)   ⇒   f(x*) = 1768.75
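As a quick numerical check, the profit function (a) can be evaluated at the initial point used in the GAMS code (the coefficients C_1-C_5 and the X.L starting levels are taken from the listings above):

```python
# Evaluate f(x) = C1*x4*x7 - C2*x1 - C3*x2 - C4*x3 - C5*x5 at the GAMS
# starting point (x1=1745, x2=12000, x3=110, x4=3048, x5=1974, x7=92.8).
C1, C2, C3, C4, C5 = 0.063, 5.04, 0.035, 10.0, 3.36
x1, x2, x3, x4, x5, x7 = 1745.0, 12000.0, 110.0, 3048.0, 1974.0, 92.8

profit = C1 * x4 * x7 - C2 * x1 - C3 * x2 - C4 * x3 - C5 * x5
print(round(profit, 1))
```

This reproduces the f(x^0) ≈ 872.3 value quoted above.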
Constraint Status at a Design Point

    Minimize:   f(x)
    Subject to: g_j(x) ≤ 0,   j = 1, ..., m
                h_i(x) = 0,   i = 1, ..., l

    Active constraint:     g_j(x^(k)) = 0,   h_i(x^(k)) = 0
    Inactive constraint:   g_j(x^(k)) < 0
    Violated constraint:   g_j(x^(k)) > 0,   h_i(x^(k)) ≠ 0
    ε-active constraint:   g_j(x^(k)) < 0,   g_j(x^(k)) + ε > 0
Conceptual Steps of
Constrained Optimization Algorithms
Initialized from a Feasible Point

Conceptual Steps of
Constrained Optimization Algorithms
Initialized from an Infeasible Point
Sequential Unconstrained Minimization
Techniques (SUMT)

    Minimize:   f(x)
    Subject to: g_j(x) ≤ 0,   j = 1, ..., m
                h_k(x) = 0,   k = 1, ..., l

    Minimize:   Φ(x, r_p) = f(x) + r_p P(x)

    P(x): penalty if any constraint is violated
    r_p:  weighting factor
The Exterior Penalty Function Methods

    Minimize:   f(x)
    Subject to: g_j(x) ≤ 0,   j = 1, ..., m
                h_k(x) = 0,   k = 1, ..., l

    min  Φ(x, r_p) = f(x) + r_p P(x)

    P(x) = Σ_{j=1}^m [ max(0, g_j(x)) ]² + Σ_{k=1}^l [ h_k(x) ]²
The Exterior Penalty Function Method:
Example

    Minimize:   f = (x + 2)²/20
    Subject to: g_1 = (1 - x)/2 ≤ 0
                g_2 = (x - 2)/2 ≤ 0
    Min:  Φ(x, r_p) = (x + 2)²/20
                      + r_p { [ max( 0, (1 - x)/2 ) ]² + [ max( 0, (x - 2)/2 ) ]² }
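A minimal numerical sketch of this exterior penalty iteration (SciPy's scalar minimizer is an assumed convenience): as r_p grows, the unconstrained minimizer of Φ approaches the constrained optimum x* = 1 from outside the feasible interval [1, 2].

```python
from scipy.optimize import minimize_scalar

def phi(x, rp):  # exterior penalty pseudo-objective for the example above
    f = (x + 2.0) ** 2 / 20.0
    g1 = (1.0 - x) / 2.0          # g1 <= 0  (i.e. x >= 1)
    g2 = (x - 2.0) / 2.0          # g2 <= 0  (i.e. x <= 2)
    return f + rp * (max(0.0, g1) ** 2 + max(0.0, g2) ** 2)

xs = []
for rp in [1, 10, 100, 1000]:     # r_p: small -> large
    res = minimize_scalar(lambda x: phi(x, rp), bounds=(-5, 5), method="bounded")
    xs.append(res.x)
print(xs)
```

For r_p = 1 the minimizer sits at x = 0.5, well inside the infeasible region; by r_p = 1000 it has been pushed to within about 0.001 of the boundary x = 1.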
The Exterior Penalty Function Method:
Example

    Minimize:   f = x_1 + x_2
    Subject to: g_1 = -2 + x_1 - 2x_2 ≤ 0
                g_2 = 8 - 6x_1 + x_1² - x_2 ≤ 0
Pseudo-objective function: r_p = 0.05

Pseudo-objective function: r_p = 0.1

Pseudo-objective function: r_p = 1.0
Algorithm for the Exterior Penalty Function
Method

    r_p: small value → large value
Advantages and Disadvantages of Exterior
Penalty Function Methods
It is applicable to general constrained problems, i.e., equalities as
well as inequalities can be treated
The starting design point can be arbitrary
The method iterates through the infeasible region, where the
problem functions may be undefined
If the iterative process terminates prematurely, the final design may
not be feasible and hence not usable
The Interior Penalty Function Methods
(Barrier Function Methods)

    Minimize:   f(x)
    Subject to: g_j(x) ≤ 0,   j = 1, ..., m
                h_k(x) = 0,   k = 1, ..., l

    min  Φ(x, r'_p, r_p) = f(x) - r'_p Σ_{j=1}^m 1/g_j(x) + r_p Σ_{k=1}^l [ h_k(x) ]²
The Interior Penalty Function Method:
Example

    Minimize:   f = (x + 2)²/20
    Subject to: g_1 = (1 - x)/2 ≤ 0
                g_2 = (x - 2)/2 ≤ 0
    Min:  Φ(x, r_p) = (x + 2)²/20 - r_p [ 2/(1 - x) + 2/(x - 2) ]
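The corresponding barrier iteration can be sketched the same way (SciPy assumed): start strictly inside (1, 2) and let r_p shrink toward zero; the minimizer approaches x* = 1 from inside the feasible region.

```python
from scipy.optimize import minimize_scalar

def phi(x, rp):  # barrier pseudo-objective: f(x) - rp*(1/g1 + 1/g2)
    f = (x + 2.0) ** 2 / 20.0
    return f - rp * (2.0 / (1.0 - x) + 2.0 / (x - 2.0))

xs = []
for rp in [1.0, 0.1, 0.01, 1e-4]:  # r_p: large -> small
    res = minimize_scalar(lambda x: phi(x, rp),
                          bounds=(1.0 + 1e-9, 2.0 - 1e-9), method="bounded")
    xs.append(res.x)
print(xs)
```

Note the contrast with the exterior method: every iterate here is strictly feasible, which is why a feasible starting point is required.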
Algorithm for the Interior Penalty Function
Method

    r_p: large value → small value → 0
Advantages and Disadvantages of Interior
Penalty Function Methods
The starting design point must be feasible
The method always iterates through the feasible region
Augmented Lagrange Multiplier Method:
Equality-Constrained Problems

    Min:  f(x)
    s.t.  h_k(x) = 0,   k = 1, ..., l

    L(x, λ) = f(x) + Σ_k λ_k h_k(x)

    A(x, λ, r_p) = f(x) + Σ_k { λ_k h_k(x) + r_p [ h_k(x) ]² }

Necessary condition:

    ∂A/∂x_i = ∂f/∂x_i + Σ_k ( λ_k + 2r_p h_k ) ∂h_k/∂x_i = 0

At x*:

    ∂L/∂x_i = ∂f/∂x_i + Σ_k λ_k* ∂h_k/∂x_i = 0

    ⇒   λ_k ← λ_k + 2r_p h_k(x)
    (update λ_k, with a finite upper bound for r_p)
Augmented Lagrange Multiplier Method:
Example

    Min:  f(x) = x_1² + x_2²
    s.t.  h(x) = x_1 + x_2 - 1 = 0
    A(x, λ, r_p) = x_1² + x_2² + λ (x_1 + x_2 - 1) + r_p (x_1 + x_2 - 1)²

    ∇_x A(x, λ, r_p) = [ 2x_1 + λ + 2r_p (x_1 + x_2 - 1) ]   [ 0 ]
                       [ 2x_2 + λ + 2r_p (x_1 + x_2 - 1) ] = [ 0 ]

    ⇒   x_1 = x_2 = (-λ + 2r_p) / (2 + 4r_p)

Solve the problem for r_p = 1 and λ = 0, -2, respectively.
True optimum:  λ* = -1,   x_1* = x_2* = 0.5
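The multiplier iteration is easy to trace numerically here, because ∇_x A = 0 has the closed form x_1 = x_2 = (2r_p − λ)/(2 + 4r_p); a plain-Python sketch of the ALM loop:

```python
# ALM iteration for: min x1^2 + x2^2  s.t.  h = x1 + x2 - 1 = 0.
# By symmetry, minimizing A(x, lam, rp) gives x1 = x2 = (2*rp - lam)/(2 + 4*rp).
rp, lam = 1.0, 0.0
for _ in range(30):
    x = (2.0 * rp - lam) / (2.0 + 4.0 * rp)   # minimizer of A for current lam
    h = 2.0 * x - 1.0                          # constraint violation
    lam = lam + 2.0 * rp * h                   # multiplier update
print(x, lam)
```

With r_p = 1 the error contracts by a factor of 3 per update, so the iterates converge to the true optimum x_1 = x_2 = 0.5, λ* = −1 without ever driving r_p to infinity.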
Augmented Lagrange Multiplier Method:
Inequality-Constrained Problems

    Min:  f(x)
    s.t.  g_j(x) ≤ 0,   j = 1, ..., m

Add slack variables:  g_j(x) + Z_j² = 0

    A(x, λ, Z, r_p) = f(x) + Σ_j { λ_j [ g_j(x) + Z_j² ] + r_p [ g_j(x) + Z_j² ]² }

    A(x, λ, r_p) = f(x) + Σ_j { λ_j Ψ_j + r_p Ψ_j² }

    Ψ_j = max { g_j(x), -λ_j / (2r_p) }

    λ_j ← λ_j + 2r_p Ψ_j   (update rule)

    Case 1: g active   ⇒  Z² = 0,
            λ* = λ + 2r_p g > 0,    Ψ = g > -λ/(2r_p)
    Case 2: g inactive ⇒  Z² ≠ 0,
            λ* = λ + 2r_p (g + Z²) = 0,    Ψ = g + Z² = -λ/(2r_p) > g
Augmented Lagrange Multiplier Method:
General Problems

    Min:  f(x)
    s.t.  g_j(x) ≤ 0,   j = 1, ..., m
          h_k(x) = 0,   k = 1, ..., l

    A(x, λ, r_p) = f(x) + Σ_j ( λ_j Ψ_j + r_p Ψ_j² )
                        + Σ_k { λ_{k+m} h_k(x) + r_p [ h_k(x) ]² }

    Ψ_j = max { g_j(x), -λ_j / (2r_p) }

    λ_j ← λ_j + 2r_p Ψ_j(x),          j = 1, ..., m
    λ_{k+m} ← λ_{k+m} + 2r_p h_k(x),   k = 1, ..., l
Advantages of the ALM Method
Relatively insensitive to the value of r_p
Precise satisfaction of g(x) = 0 and h(x) = 0 is possible
Acceleration is achieved by updating λ
The starting point may be either feasible or infeasible
At the optimum, λ_j ≠ 0 automatically identifies the active constraint
set
Generalized Reduced Gradient Method
(GRG)

    Min:  f(x),   x ∈ Rⁿ
    s.t.  g_j(x) ≤ 0,   j = 1, ..., J
          h_k(x) = 0,   k = 1, ..., K
          x_i^l ≤ x_i ≤ x_i^u

Add slack variables x_{n+j} to the inequalities:

    Min:  f(x),   x ∈ Rⁿ
    s.t.  g_j(x) + x_{n+j} = 0,   j = 1, ..., J
          h_k(x) = 0,             k = 1, ..., K
          x_i^l ≤ x_i ≤ x_i^u
          x_{n+j} ≥ 0,            j = 1, ..., J
    Min:  f(x),   x ∈ R^{n+J}
    s.t.  h_k(x) = 0,   k = 1, ..., K, ..., K+J
          x_i^l ≤ x_i ≤ x_i^u,   i = 1, ..., n, ..., n+J

Note:
each h_k(x) = 0 removes one degree of freedom:
one variable can be represented by the others;
the K+J dependent variables can be represented by the
other (n+J) - (K+J) = (n-K) independent variables
Divide x (an (n+J)-vector) into independent/dependent (design/state, Y/Z)
variables so as to satisfy h_k(x) = 0:

    x = [ Y ]            Y = [ y_1, ..., y_{n-K} ]ᵀ,    Z = [ z_1, ..., z_{K+J} ]ᵀ
        [ Z ]

Variations of the objective and constraint functions:

    df(x) = Σ_{i=1}^{n-K} (∂f/∂y_i) dy_i + Σ_{i=1}^{K+J} (∂f/∂z_i) dz_i
          = ∇_Yᵀf dY + ∇_Zᵀf dZ

    dh_k(x) = Σ_{i=1}^{n-K} (∂h_k/∂y_i) dy_i + Σ_{i=1}^{K+J} (∂h_k/∂z_i) dz_i
            = ∇_Yᵀh_k dY + ∇_Zᵀh_k dZ

    dh = [ dh_1  ···  dh_K  ···  dh_{K+J} ]ᵀ = C dY + D dZ
    C = [ ∂h_1/∂y_1       ···   ∂h_1/∂y_{n-K}     ]   [ ∇_Yᵀh_1     ]
        [     ...                    ...          ] = [     ...     ]
        [ ∂h_{K+J}/∂y_1   ···   ∂h_{K+J}/∂y_{n-K} ]   [ ∇_Yᵀh_{K+J} ]

    D = [ ∂h_1/∂z_1       ···   ∂h_1/∂z_{K+J}     ]   [ ∇_Zᵀh_1     ]
        [     ...                    ...          ] = [     ...     ]
        [ ∂h_{K+J}/∂z_1   ···   ∂h_{K+J}/∂z_{K+J} ]   [ ∇_Zᵀh_{K+J} ]
A move x → x + dx should maintain h = 0, i.e. dh = 0:

    dh = C dY + D dZ = 0
    D dZ = -C dY,    or    dZ = -D⁻¹C dY

    df(x) = ∇_Yᵀf dY + ∇_Zᵀf dZ
          = [ ∇_Yᵀf - ∇_Zᵀf D⁻¹C ]_{1×(n-K)} dY
          = G_Rᵀ dY    (Generalized Reduced Gradient)

    df/dY (x) = G_R = ∇_Y f - ( D⁻¹C )ᵀ ∇_Z f    ((n-K) × 1)
Use G_R to generate a search direction S and a step size α.

    dZ = -D⁻¹C dY is based on a linear approximation,
    so some h_k(x + dx) ≠ 0

    Keeping Y fixed, correct dZ to maintain h + dh = 0
    (repeat until dZ is small):

    h(x) + dh(x) = 0
    h(x) + C dY + D dZ = 0
    dZ = -D⁻¹ [ h(x) + C dY ]
GRG: Algorithm
Specify design (Y) and state (Z) variables:
    select the state variables (Z) to avoid singularity of D
    any component of x that is at its lower/upper bound
    initially is set to be a design variable
    slack variables should be set as state variables
Compute G_R (analytically or numerically)
Test for convergence: stop if ‖G_R‖ < ε
Determine the search direction S:
    steepest descent: S = -G_R
    Fletcher-Reeves conjugate gradients,
    DFP, BFGS, ...
Find the optimum step size α along S:

    dY = α S = α [ s_1, ..., s_{n-K} ]ᵀ

    dZ = -D⁻¹C dY = -D⁻¹C (α S) = α [ -D⁻¹C S ] = α T = α [ t_1, ..., t_{K+J} ]ᵀ
Distance to the side constraints, considering the design variables:

    α_i = (y_i^u - y_{i,old}) / s_i   if s_i > 0
          (y_i^l - y_{i,old}) / s_i   if s_i < 0

    α_1 = min_i { α_i }

Distance to the side constraints, considering the state variables:

    α_j = (z_j^u - z_{j,old}) / t_j   if t_j > 0
          (z_j^l - z_{j,old}) / t_j   if t_j < 0

    α_2 = min_j { α_j }

    0 < α ≤ min { α_1, α_2 }

    x_new = [ Y_old + dY ]   [ Y_old + α S ]
            [ Z_old + dZ ] = [ Z_old + α T ]

If x_new is infeasible, keep Y_new and modify Z_new
GRG: Example

    min:  f(x) = (x_1 - x_2)² + (x_2 - x_3)⁴
    s.t.  h(x) = x_1 (1 + x_2²) + x_3⁴ - 3 = 0
          -3 ≤ x_1, x_2, x_3 ≤ 3

Choose arbitrarily the design/state variables as

    Y = [ y_1 ]   [ x_1 ]         Z = [ z_1 ] = [ x_3 ]
        [ y_2 ] = [ x_2 ]

    x¹ = [ -2.6, 2, 2 ]ᵀ,    f(x¹) = 21.16
Compute the reduced gradient:

    ∇_Y f = [ ∂f/∂x_1 ]   [ 2(x_1 - x_2)                   ]   [ -9.2 ]
            [ ∂f/∂x_2 ] = [ -2(x_1 - x_2) + 4(x_2 - x_3)³ ] = [  9.2 ]

    ∇_Z f = [ ∂f/∂x_3 ] = [ -4(x_2 - x_3)³ ] = [ 0 ]

    C = [ ∂h/∂x_1   ∂h/∂x_2 ] = [ 1 + x_2²   2x_1x_2 ] = [ 5   -10.4 ]

    D = [ ∂h/∂x_3 ] = [ 4x_3³ ] = [ 32 ]

    D⁻¹C = (1/32) C = [ 0.15625   -0.325 ]

    G_R = ∇_Y f - ( D⁻¹C )ᵀ ∇_Z f = [ -9.2 ]      (∇_Z f = 0 here)
                                    [  9.2 ]

    ‖G_R‖ = 9.2√2 ≠ 0   ⇒   continue

Use the steepest descent method:  S = -G_R = [ 9.2, -9.2 ]ᵀ

Find α along S:

    α_1 = min { (3 - (-2.6))/9.2 , (-3 - 2)/(-9.2) } = min { 0.6087, 0.5435 } = 0.5435
    T = -D⁻¹C S = -[ 0.15625   -0.325 ] [ 9.2, -9.2 ]ᵀ = -4.4275

    α_2 = (-3 - 2)/(-4.4275) = 1.1293   ⇒   α ≤ min { α_1, α_2 } = 0.5435

    x = [ Y + αS ]   [ -2.6 ]     [  9.2    ]   [ -2.6 + 9.2α  ]
        [ Z + αT ] = [   2  ] + α [ -9.2    ] = [  2 - 9.2α    ]
                     [   2  ]     [ -4.4275 ]   [  2 - 4.4275α ]

    f(α) = [ (-2.6 + 9.2α) - (2 - 9.2α) ]² + [ (2 - 9.2α) - (2 - 4.4275α) ]⁴
         = 518.7806 α⁴ + 338.56 α² - 169.28 α + 21.16

    df/dα = 2075.1225 α³ + 677.12 α - 169.28 = 0   ⇒   α = 0.22

    x_new = [ -2.6 + 9.2(0.22)  ]   [ -2.6 + 2.024 ]   [ -0.576  ]
            [  2 - 9.2(0.22)    ] = [  2 - 2.024   ] = [ -0.024  ]
            [  2 - 4.4275(0.22) ]   [  2 - 0.97405 ]   [ 1.02595 ]
    h(x_new) = -2.4684 ≠ 0   ⇒   x_new infeasible

    D_new = [ ∂h/∂x_3 ]_new = [ 4x_3³ ]_new = [ 4(1.02595)³ ] = [ 4.32 ]

    C_new = [ ∂h/∂x_1   ∂h/∂x_2 ]_new = [ 1 + x_2²   2x_1x_2 ]_new = [ 1   0.028 ]

    dZ = -D⁻¹ [ h(x) + C dY ]_new
       = -(1/4.32) { -2.4684 + [ 1   0.028 ] [ 2.024, -2.024 ]ᵀ }
       = [ 0.116 ]

    Z_new = Z_old + dZ = [ 1.02595 ] + [ 0.116 ] = [ 1.142 ]

    h(x_new) = -1.876 ≠ 0   ⇒   x_new still infeasible; repeat the correction
    ...
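The reduced-gradient computation above can be reproduced numerically (a sketch; NumPy is an assumed convenience):

```python
import numpy as np

# f(x) = (x1 - x2)^2 + (x2 - x3)^4,  h(x) = x1*(1 + x2^2) + x3^4 - 3 = 0
# Design variables Y = (x1, x2), state variable Z = (x3), as chosen above.
def f(x):
    return (x[0] - x[1])**2 + (x[1] - x[2])**4

def h(x):
    return x[0] * (1.0 + x[1]**2) + x[2]**4 - 3.0

x = np.array([-2.6, 2.0, 2.0])
grad_f = np.array([2*(x[0] - x[1]),
                   -2*(x[0] - x[1]) + 4*(x[1] - x[2])**3,
                   -4*(x[1] - x[2])**3])
C = np.array([1.0 + x[1]**2, 2.0*x[0]*x[1]])   # [dh/dx1, dh/dx2]
D = 4.0 * x[2]**3                              # dh/dx3
GR = grad_f[:2] - (C / D) * grad_f[2]          # reduced gradient G_R
print(f(x), h(x), GR)
```

The starting point is feasible (h = 0), and G_R = (−9.2, 9.2) matches the hand computation, since ∇_Z f vanishes there.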
Linearization of the Constrained Problem

    Min:  f(x)
    s.t.  h_j(x) = 0,   j = 1, ..., p
          g_j(x) ≤ 0,   j = 1, ..., m

Linearize about x^(k):

    Min:  f(x^(k) + Δx) ≈ f(x^(k)) + ∇ᵀf(x^(k)) Δx            ( ≡ f̄ )
    s.t.  h_j(x^(k) + Δx) ≈ h_j(x^(k)) + ∇ᵀh_j(x^(k)) Δx = 0   ( ≡ h̄_j )
          g_j(x^(k) + Δx) ≈ g_j(x^(k)) + ∇ᵀg_j(x^(k)) Δx ≤ 0   ( ≡ ḡ_j )
Notation:

    f_k ≡ f(x^(k)),    e_j ≡ -h_j(x^(k)),    b_j ≡ -g_j(x^(k))
    c_i ≡ ∂f(x^(k))/∂x_i,    d_i ≡ Δx_i^(k)
    n_ij ≡ ∂h_j(x^(k))/∂x_i,    a_ij ≡ ∂g_j(x^(k))/∂x_i

    Min:  f̄ = Σ_{i=1}^n c_i d_i = cᵀd
    s.t.  h̄_j: Σ_{i=1}^n n_ij d_i = e_j    ( [n_1 ··· n_p]ᵀ d = Nᵀd = e )
          ḡ_j: Σ_{i=1}^n a_ij d_i ≤ b_j    ( [a_1 ··· a_m]ᵀ d = Aᵀd ≤ b )
Definition of the Linearized Subproblem:
Example

    Min:  f(x) = x_1² + x_2² - 3x_1x_2
    s.t.  g_1(x) = (1/6) x_1² + (1/6) x_2² - 1 ≤ 0
          g_2(x) = -x_1 ≤ 0
          g_3(x) = -x_2 ≤ 0,    x^(0) = (1, 1)

    ∇f(x) = [ 2x_1 - 3x_2 ]     ∇g_1(x) = [ x_1/3 ]
            [ 2x_2 - 3x_1 ]                [ x_2/3 ]

    ∇g_2(x) = [ -1 ]     ∇g_3(x) = [  0 ]
              [  0 ]                [ -1 ]
    f(x^(0)) = -1,    b_1 = -g_1(x^(0)) = 2/3,
    b_2 = -g_2(x^(0)) = 1,    b_3 = -g_3(x^(0)) = 1

    ∇f(x^(0)) = [ -1 ]     ∇g_1(x^(0)) = [ 1/3 ]
                [ -1 ]                    [ 1/3 ]

    A = [ 1/3   -1    0 ]     b = [ 2/3 ]
        [ 1/3    0   -1 ]         [  1  ]
                                  [  1  ]
    Min:  f̄ = [ -1  -1 ] [ d_1 ]
                         [ d_2 ]

    s.t.  [ 1/3   1/3 ] [ d_1 ]     [ 2/3 ]
          [ -1     0  ] [ d_2 ]  ≤  [  1  ]
          [  0    -1  ]             [  1  ]

Or

    Min:  f̄ = -d_1 - d_2
    s.t.  ḡ_1 = (1/3) d_1 + (1/3) d_2 ≤ 2/3
          ḡ_2 = -d_1 ≤ 1
          ḡ_3 = -d_2 ≤ 1
Or, in terms of x:

    Min:  f̄(x) = f(x^(0)) + ∇f · (x - x^(0))
               = -1 + [ -1  -1 ] [ x_1 - 1 ]
                                 [ x_2 - 1 ]
               = -x_1 - x_2 + 1

    s.t.  ḡ_1(x) = g_1(x^(0)) + ∇g_1 · (x - x^(0))
                 = -2/3 + [ 1/3  1/3 ] [ x_1 - 1 ]
                                       [ x_2 - 1 ]
                 = (1/3)(x_1 + x_2 - 4) ≤ 0
          ḡ_2(x) = -x_1 ≤ 0
          ḡ_3(x) = -x_2 ≤ 0
Optimum solutions: line D-E

    d_1 + d_2 = 2   ⇔   (x_1 - 1) + (x_2 - 1) = 2   ⇔   x_1 + x_2 - 4 = 0
Sequential Linear Programming Algorithm
Basic Idea

    Min:  f(x)
    s.t.  h_j(x) = 0,   j = 1, ..., p
          g_j(x) ≤ 0,   j = 1, ..., m

    Min:  f(x^(k) + Δx) ≈ f(x^(k)) + ∇ᵀf(x^(k)) Δx            ( ≡ f̄ )
    s.t.  h_j(x^(k) + Δx) ≈ h_j(x^(k)) + ∇ᵀh_j(x^(k)) Δx = 0   ( ≡ h̄_j )
          g_j(x^(k) + Δx) ≈ g_j(x^(k)) + ∇ᵀg_j(x^(k)) Δx ≤ 0   ( ≡ ḡ_j )
    f_k ≡ f(x^(k)),    e_j ≡ -h_j(x^(k)),    b_j ≡ -g_j(x^(k))
    c_i ≡ ∂f(x^(k))/∂x_i,    d_i ≡ Δx_i^(k)
    n_ij ≡ ∂h_j(x^(k))/∂x_i,    a_ij ≡ ∂g_j(x^(k))/∂x_i

    Min:  f̄ = Σ_{i=1}^n c_i d_i = cᵀd
    s.t.  h̄_j: Σ_{i=1}^n n_ij d_i = e_j    ( Nᵀd = e )
          ḡ_j: Σ_{i=1}^n a_ij d_i ≤ b_j    ( Aᵀd ≤ b )
          -Δ_il^(k) ≤ d_i ≤ Δ_iu^(k),   i = 1, ..., n   (move limits)
Move limits: without them the linearized problem may not have a bounded
solution, or the changes in design may become too large
Selection of proper move limits is of critical importance, because it
can mean success or failure of the SLP algorithm
All b_i ≥ 0 is required by the Simplex method
d_i: no sign restriction; substitute
    d_i = d_i⁺ - d_i⁻,    d_i⁺ ≥ 0,  d_i⁻ ≥ 0
Stopping criteria:
    ḡ_j ≤ ε_1, j = 1, ..., m;    |h̄_j| ≤ ε_1, j = 1, ..., p
    ‖d‖ ≤ ε_2
An SLP Algorithm
Step 1: choose x^(0); set k = 0; choose ε_1, ε_2
Step 2: calculate f_k; b_j, j = 1, ..., m; e_j, j = 1, ..., p
Step 3: evaluate c_i = ∂f/∂x_i, n_ij = ∂h_j/∂x_i, a_ij = ∂g_j/∂x_i
Step 4: select proper move limits Δ_il^(k), Δ_iu^(k)
        (some fraction of the current design)
Step 5: define the LP subproblem
Step 6: use the Simplex method to solve for d^(k)
Step 7: check for convergence; stop?
Step 8: update the design x^(k+1) = x^(k) + d^(k);
        set k = k + 1 and go to Step 2
An SLP Example

    Min:  f(x) = x_1² + x_2² - 3x_1x_2
    s.t.  g_1(x) = (1/6) x_1² + (1/6) x_2² - 1 ≤ 0
          g_2(x) = -x_1 ≤ 0
          g_3(x) = -x_2 ≤ 0,    x^(0) = (3, 3)

    ∇f(x) = [ 2x_1 - 3x_2 ]     ∇g_1(x) = [ x_1/3 ]
            [ 2x_2 - 3x_1 ]                [ x_2/3 ]

    ∇g_2(x) = [ -1 ]     ∇g_3(x) = [  0 ]
              [  0 ]                [ -1 ]
    f(x^(0)) = -9,    b_1 = -g_1(x^(0)) = -2 < 0   (violated)
    b_2 = -g_2(x^(0)) = 3 (inactive),    b_3 = -g_3(x^(0)) = 3 (inactive)

    ∇f(x^(0)) = [ -3 ] = c      ∇g_1(x^(0)) = [ 1 ]
                [ -3 ]                         [ 1 ]

    A = [ 1   -1    0 ]     b = [ -2 ]
        [ 1    0   -1 ]         [  3 ]
                                [  3 ]
    Min:  f̄ = [ -3  -3 ] [ d_1 ]
                         [ d_2 ]

    s.t.  [  1    1 ] [ d_1 ]     [ -2 ]
          [ -1    0 ] [ d_2 ]  ≤  [  3 ]
          [  0   -1 ]             [  3 ]

Or

    Min:  f̄ = -3d_1 - 3d_2
    s.t.  ḡ_1 = d_1 + d_2 ≤ -2
          ḡ_2 = -d_1 ≤ 3
          ḡ_3 = -d_2 ≤ 3
Feasible region of the LP: triangle ABC;
the cost function is parallel to BC (optimum solutions: line BC, d_1 + d_2 = -2)
100% move limits, -3 ≤ d_1, d_2 ≤ 3 (region ADEF):
    d_1 = -1, d_2 = -1   ⇒   x_1 = 2, x_2 = 2   (x_1 - 3 = d_1 = -1)
20% move limits, -0.6 ≤ d_1, d_2 ≤ 0.6 (region A_1 D_2 E_1 F_1):
    no feasible solution
The linearized constraints are satisfied, but the original nonlinear
constraint g_1 is still violated
SLP: Example

    Min:  f(x) = x_1² + x_2² - 3x_1x_2
    s.t.  g_1(x) = (1/6) x_1² + (1/6) x_2² - 1 ≤ 0
          g_2(x) = -x_1 ≤ 0
          g_3(x) = -x_2 ≤ 0,    x^(0) = (1, 1)
    ε_1 = ε_2 = 0.001,   15% design change

LP subproblem:

    Min:  f̄ = -d_1 - d_2
    s.t.  ḡ_1 = (1/3) d_1 + (1/3) d_2 ≤ 2/3
          ḡ_2 = -d_1 ≤ 1
          ḡ_3 = -d_2 ≤ 1
          -0.15 ≤ d_1, d_2 ≤ 0.15

Solution region: DEFG; optimum at F (d_1 = d_2 = 0.15)
Substitute d_1 = d_1⁺ - d_1⁻ and d_2 = d_2⁺ - d_2⁻:

    Min:  f̄ = -d_1⁺ + d_1⁻ - d_2⁺ + d_2⁻
    s.t.  ḡ_1 = (1/3)( d_1⁺ - d_1⁻ + d_2⁺ - d_2⁻ ) ≤ 2/3
          ḡ_2 = -d_1⁺ + d_1⁻ ≤ 1
          ḡ_3 = -d_2⁺ + d_2⁻ ≤ 1
          d_1⁺ - d_1⁻ ≤ 0.15,    d_1⁻ - d_1⁺ ≤ 0.15
          d_2⁺ - d_2⁻ ≤ 0.15,    d_2⁻ - d_2⁺ ≤ 0.15
          d_1⁺, d_1⁻ ≥ 0,    d_2⁺, d_2⁻ ≥ 0

Solution:

    d_1⁺ = 0.15, d_1⁻ = 0, d_2⁺ = 0.15, d_2⁻ = 0
    d_1 = d_1⁺ - d_1⁻ = 0.15,    d_2 = d_2⁺ - d_2⁻ = 0.15

    x^(1) = x^(0) + d^(0) = (1.15, 1.15)
    f(x^(1)) = -1.3225
    g_1(x^(1)) = -0.5592   (inactive)
    ‖d‖ = 0.212 > ε_2   ⇒   next iteration
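The LP subproblem at x^(0) = (1, 1) with 15% move limits can also be solved directly (a sketch; SciPy's linprog is an assumed convenience, and it handles free-sign variables without the d⁺/d⁻ split):

```python
from scipy.optimize import linprog

# min  -d1 - d2
# s.t. (1/3) d1 + (1/3) d2 <= 2/3,  -d1 <= 1,  -d2 <= 1,
#      move limits -0.15 <= d1, d2 <= 0.15
res = linprog(c=[-1.0, -1.0],
              A_ub=[[1/3, 1/3], [-1.0, 0.0], [0.0, -1.0]],
              b_ub=[2/3, 1.0, 1.0],
              bounds=[(-0.15, 0.15)] * 2)
print(res.x)
```

Both move-limit bounds are active at the optimum d = (0.15, 0.15), in agreement with point F above.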
Observations on the SLP Algorithm
The rate of convergence and performance depend to a large extent
on the selection of move limits
The method cannot be used as a black-box approach for engineering
design problems
(selection of move limits is a trial-and-error process)
The method is not guaranteed convergent, since no descent function is defined
The method can cycle between two points if the optimum solution
is not a vertex of the constraint set
Lack of robustness
Quadratic Programming Problem
Quadratic Step Size Constraint

Constrained nonlinear programming:
    linear programming subproblem: lack of robustness
    quadratic programming subproblem:
        comes with a descent function and a step-size determination strategy

Quadratic step-size constraint, replacing the move limits

    -Δ_il^(k) ≤ d_i ≤ Δ_iu^(k),   i = 1, ..., n   (move limits)

by

    (1/2) ‖d‖² = (1/2) dᵀd = (1/2) Σᵢ (d_i)² ≤ δ
    Min:  f̄ = Σ_{i=1}^n c_i d_i = cᵀd
    s.t.  h̄_j: Σ_{i=1}^n n_ij d_i = e_j    ( Nᵀd = e )
          ḡ_j: Σ_{i=1}^n a_ij d_i ≤ b_j    ( Aᵀd ≤ b )
          (1/2) Σᵢ (d_i)² ≤ δ   (move limit)
Quadratic Programming (QP) Subproblem

    Min:  f̄ = Σ_{i=1}^n c_i d_i = cᵀd
    s.t.  h̄_j: Σ_{i=1}^n n_ij d_i = e_j    ( Nᵀd = e )
          ḡ_j: Σ_{i=1}^n a_ij d_i ≤ b_j    ( Aᵀd ≤ b )
          (1/2) Σᵢ (d_i)² ≤ δ   ( 0.5 dᵀd ≤ δ )

(checking the KTC, the multiplier of the quadratic constraint can be taken = 1)

    Min:  f̄ = cᵀd + 0.5 dᵀd
    s.t.  Nᵀd = e
          Aᵀd ≤ b

a convex programming problem   ⇒   unique solution
Quadratic Programming (QP) Subproblem:
Example

    Min:  f(x) = 2x_1³ + 15x_2² - 8x_1x_2 - 4x_1
    s.t.  h(x) = x_1² + x_1x_2 + 1 = 0
          g(x) = x_1 - (1/4) x_2² - 1 ≤ 0,    x^(0) = (1, 1)

    Optima:  f(x*) = 74 (point A),  78 (point B)
    c = ∇f = ( 6x_1² - 8x_2 - 4,  30x_2 - 8x_1 ) |_{x^(0)} = (-6, 22)
    ∇h = ( 2x_1 + x_2,  x_1 ) |_{x^(0)} = (3, 1)
    ∇g = ( 1,  -x_2/2 ) |_{x^(0)} = (1, -0.5)

    f(1, 1) = 5,    h(1, 1) = 3 ≠ 0,    g(1, 1) = -0.25 < 0
LP subproblem:

    Min:  f̄ = -6d_1 + 22d_2
    s.t.  h̄ = 3d_1 + d_2 = -3
          ḡ = d_1 - 0.5d_2 ≤ 0.25

    50% move limits, -0.5 ≤ d_1, d_2 ≤ 0.5: infeasible (region HIJK)
    100% move limits, -1 ≤ d_1, d_2 ≤ 1:
        L:  d_1 = -2/3,  d_2 = -1,  f̄ = -18

QP subproblem:

    Min:  f̄ = -6d_1 + 22d_2 + 0.5 (d_1² + d_2²)
    s.t.  h̄ = 3d_1 + d_2 = -3
          ḡ = d_1 - 0.5d_2 ≤ 0.25

    G:  d_1 = -0.5,  d_2 = -1.5,  f̄ = -28.75
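The QP subproblem above is a small convex program and can be checked numerically (a sketch; SciPy's SLSQP is an assumed convenience):

```python
from scipy.optimize import minimize

# QP subproblem at x0 = (1, 1):
#   min  -6*d1 + 22*d2 + 0.5*(d1^2 + d2^2)
#   s.t. 3*d1 + d2 = -3  (linearized h),  d1 - 0.5*d2 <= 0.25  (linearized g)
fbar = lambda d: -6*d[0] + 22*d[1] + 0.5*(d[0]**2 + d[1]**2)
cons = [
    {"type": "eq",   "fun": lambda d: 3*d[0] + d[1] + 3},         # h-bar
    {"type": "ineq", "fun": lambda d: 0.25 - (d[0] - 0.5*d[1])},  # g-bar
]
res = minimize(fbar, x0=[0.0, -3.0], method="SLSQP", constraints=cons)
print(res.x, res.fun)
```

Both constraints are active at the solution d = (−0.5, −1.5), which is why the QP has the unique minimizer G marked above.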
Solution of QP Problems

    Min:  q(x) = cᵀx + (1/2) xᵀH x    (H = I for the QP subproblem)
    s.t.  Aᵀx ≤ b   (m×1)
          Nᵀx = e   (p×1)
          x ≥ 0     (n×1)

Add slacks s ≥ 0:  Aᵀx + s = b

    L = cᵀx + (1/2) xᵀH x + uᵀ(Aᵀx + s - b) - ξᵀx + vᵀ(Nᵀx - e)

KTC: let v = y - z with y, z ≥ 0;
    ξ_i x_i = 0,  i = 1, ..., n;    u_j s_j = 0;    s_j, u_j, ξ_i ≥ 0
    [ H     A        -I_{n×n}   0_{n×m}   N    -N ] [ x ]   [ -c ]
    [ Aᵀ    0_{m×m}   0_{m×n}   I_{m×m}   0     0 ] [ u ] = [  b ]
    [ Nᵀ    0_{p×m}   0_{p×n}   0_{p×m}   0     0 ] [ s ]   [  e ]
                                                    [ ξ ]
                                                    [ y ]
                                                    [ z ]

i.e.  B X = D,  with  B: (n+m+p) × 2(n+m+p),  X: 2(n+m+p) × 1,  D: (n+m+p) × 1

    Complementarity: x_i ξ_i = 0 and u_j s_j = 0   ⇒   X_i X_{n+m+i} = 0,  i = 1, ..., n+m
    X_i ≥ 0,   i = 1, ..., 2(n+m+p)
Two-Phase Simplex Method to Solve the QP
Problem

    B X + Y = D,    Y: artificial variables

    w = Σ_{i=1}^{n+m+p} Y_i = Σᵢ D_i - Σⱼ ( Σᵢ B_ij ) X_j ≡ w_0 + Σⱼ C_j X_j

Note: X_i and X_{n+m+i} are never simultaneously basic
Procedures:
Step 1: find B
Step 2: define D; rearrange so that D_i ≥ 0
Step 3: calculate w_0 and the C_j's
Step 4: complete Phase I
Step 5: recover optimum values for the original QP variables x, the
Lagrange multipliers u, v, ξ, and the slack variables s
Solution of a QP Problem: Example

    Min:  f(x) = (x_1 - 3)² + (x_2 - 3)²
              = x_1² - 6x_1 + x_2² - 6x_2 (+18)
    s.t.  g(x) = x_1 + x_2 ≤ 4
          h(x) = x_1 - 3x_2 = 1
          x_1, x_2 ≥ 0

    q(x) = [ -6  -6 ] [ x_1 ]  + 0.5 [ x_1  x_2 ] [ 2  0 ] [ x_1 ]
                      [ x_2 ]                     [ 0  2 ] [ x_2 ]

    c = [ -6 ]    H = [ 2  0 ]    A = [ 1 ]    N = [  1 ]    b = [ 4 ],   e = [ 1 ]
        [ -6 ]        [ 0  2 ]        [ 1 ]        [ -3 ]
    B = [ 2   0   1  -1   0   0   1  -1 ]        D = [ 6 ]
        [ 0   2   1   0  -1   0  -3   3 ]            [ 6 ]
        [ 1   1   0   0   0   1   0   0 ]            [ 4 ]
        [ 1  -3   0   0   0   0   0   0 ]            [ 1 ]

    X = [ x_1  x_2  u_1  ξ_1  ξ_2  s_1  y_1  z_1 ]ᵀ

    Complementarity:  X_1 X_4 = 0,   X_2 X_5 = 0,   X_3 X_6 = 0

Phase I solution:

    X_1 = 13/4,  X_2 = 3/4,  X_3 = 3/4,  X_8 = 5/4,
    X_4 = X_5 = X_6 = X_7 = 0

    x_1 = 13/4,  x_2 = 3/4,  u_1 = 3/4,  ξ_1 = ξ_2 = 0,
    s_1 = 0,  y_1 = 0,  z_1 = 5/4,   v_1 = y_1 - z_1 = -5/4

    f(13/4, 3/4) = 41/8
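Rather than hand-pivoting through Phase I, the optimum can be confirmed with a direct solve of the original QP (a sketch; SciPy's SLSQP is an assumed convenience):

```python
from scipy.optimize import minimize

# min (x1-3)^2 + (x2-3)^2  s.t.  x1 + x2 <= 4,  x1 - 3*x2 = 1,  x >= 0
f = lambda x: (x[0] - 3)**2 + (x[1] - 3)**2
cons = [
    {"type": "ineq", "fun": lambda x: 4 - x[0] - x[1]},
    {"type": "eq",   "fun": lambda x: x[0] - 3*x[1] - 1},
]
res = minimize(f, x0=[1.0, 0.0], method="SLSQP",
               bounds=[(0, None), (0, None)], constraints=cons)
print(res.x, res.fun)
```

This recovers x = (13/4, 3/4) with f = 41/8, matching the Phase I result; the QP is strictly convex, so the solution is unique.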
(Two-phase Simplex tableaus omitted: starting from the artificial basis
{Y_1, Y_2, Y_3, Y_4} with w_0 = 17, successive pivots bring X_1, X_2, X_3,
and X_8 into the basis; Phase I ends with w = 0 and the solution
X_1 = 13/4, X_2 = 3/4, X_3 = 3/4, X_8 = 5/4 given above.)
Sequential Quadratic Programming (SQP)
Constrained Steepest Descent Method

    Min:  f̄ = cᵀd + (1/2) dᵀd
    ⇒  d = -c   (the steepest descent direction if there is no constraint)

The constrained QP subproblem is solved sequentially:

    Min:  f̄ = cᵀd + 0.5 dᵀd
    s.t.  Nᵀd = e
          Aᵀd ≤ b

    d: the direction -c, modified to satisfy the constraints

Q: Descent function?  Step size?
CSD: Pshenichny's Descent Function

    Φ(x) = f(x) + R V(x)
    Φ(x^(k)) = f(x^(k)) + R V(x^(k)),    i.e.  Φ_k = f_k + R V_k

    V_k = max { 0; |h_1|, ..., |h_p|; g_1, ..., g_m } ≥ 0
    (maximum constraint violation)

    R ≥ r_k = Σ_{i=1}^p |v_i^(k)| + Σ_{i=1}^m u_i^(k)
Note: the Lagrange function is a lower bound on the descent function

    Min: f(x)
    s.t. h_i(x) = 0, i = 1, ..., p;    g_i(x) ≤ 0, i = 1, ..., m

    L(x) = f(x) + Σᵢ v_i h_i + Σᵢ u_i g_i
         ≤ f(x) + Σᵢ |v_i| |h_i| + Σᵢ u_i max{0, g_i}
         ≤ f(x) + [ Σᵢ |v_i| + Σᵢ u_i ] max { 0; |h_1|, ..., |h_p|; g_1, ..., g_m }
         = f(x) + r V ≤ f(x) + R V = Φ(x),    with  R = max { R_0, r }
CSD: Descent Function Example

    Min:  f(x) = x_1² + 320x_1x_2
    s.t.  g_1(x) = x_1 / (60x_2) - 1 ≤ 0
          g_2(x) = 1 - x_1 (x_1 - x_2)/3600 ≤ 0
          x_1, x_2 ≥ 0,    x^(0) = (40, 0.5),   R = 10000

    f(x^(0)) = 40² + 320(40)(0.5) = 8000
    g_1(x^(0)) = 40/(60 · 0.5) - 1 = 0.333 > 0   (violation)
    g_2(x^(0)) = 1 - 40(40 - 0.5)/3600 = 0.5611 > 0   (violation)
    g_3 = -40 < 0,   g_4 = -0.5 < 0   (inactive)

    V_0 = max { 0; 0.333, 0.5611, -40, -0.5 } = 0.5611
    Φ_0 = f_0 + R V_0 = 8000 + (10000)(0.5611) = 13611
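These values are easy to reproduce (a plain-Python sketch of Φ = f + R·V):

```python
# Pshenichny descent function Phi = f + R*V at x0 = (40, 0.5), R = 10000
x1, x2, R = 40.0, 0.5, 10000.0

f  = x1**2 + 320.0 * x1 * x2
g1 = x1 / (60.0 * x2) - 1.0
g2 = 1.0 - x1 * (x1 - x2) / 3600.0
g3, g4 = -x1, -x2

V   = max(0.0, g1, g2, g3, g4)   # maximum constraint violation
phi = f + R * V
print(f, round(V, 4), phi)
```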
CSD: Step Size Determination
Use an inexact step size
(for efficiency; an exact line search is not necessary)
Trial step sizes t_j:

    j = 0:   t_0 = (1/2)⁰ = 1
    j = 1:   t_1 = (1/2)¹ = 1/2
    j = 2:   t_2 = (1/2)² = 1/4
    ...
Determine the step size as α_k = t_j, with j the smallest integer that
satisfies the descent condition

    x^(k+1,j) = x^(k) + t_j d^(k)
    Φ_{k+1,j} = f_{k+1,j} + R V_{k+1,j} ≤ Φ_k - t_j β_k
    β_k = γ ‖d^(k)‖²,    0 < γ < 1
Effect of the parameter γ on step size determination:
a larger γ tends to reduce the step size needed to satisfy the descent
condition
CSD: Step Size Example

    Min:  f(x) = x_1² + 320x_1x_2
    s.t.  g_1(x) = x_1 / (60x_2) - 1 ≤ 0
          g_2(x) = 1 - x_1 (x_1 - x_2)/3600 ≤ 0
          g_3(x) = -x_1 ≤ 0
          g_4(x) = -x_2 ≤ 0,    x^(0) = (40, 0.5)

    d^(0) = (25.6, 0.45)
    u = (4880, 19400, 0, 0)   (from the QP subproblem),    γ = 0.5
    R = Σ u_i = 4880 + 19400 + 0 + 0 = 24280
    β = 0.5 (25.6² + 0.45²) = 328
    f(x^(0)) = 40² + 320(40)(0.5) = 8000
    g_1(x^(0)) = 40/(60 · 0.5) - 1 = 0.333 > 0   (violation)
    g_2(x^(0)) = 1 - 40(40 - 0.5)/3600 = 0.5611 > 0   (violation)
    g_3(x^(0)) = -40 < 0   (inactive)
    g_4(x^(0)) = -0.5 < 0   (inactive)

    V_0 = max { 0; 0.333, 0.5611, -40, -0.5 } = 0.5611
    Φ_0 = f_0 + R V_0 = 8000 + (24280)(0.5611) = 21624
Initial trial, t_0 = 1 (j = 0):

    x_1^(1,0) = x_1^(0) + t_0 d_1^(0) = 40 + (1)(25.6) = 65.6
    x_2^(1,0) = x_2^(0) + t_0 d_2^(0) = 0.5 + (1)(0.45) = 0.95

    f_{1,0} = f(x^(1,0)) = 65.6² + 320(65.6)(0.95) = 24246
    g_1(x^(1,0)) = 65.6/(60 · 0.95) - 1 = 0.151 > 0   (violation)
    g_2(x^(1,0)) = 1 - 65.6(65.6 - 0.95)/3600 = -0.1781 < 0   (inactive)
    g_3(x^(1,0)) = -65.6 < 0,   g_4(x^(1,0)) = -0.95 < 0   (inactive)

    V_{1,0} = max { 0; 0.151, -0.1781, -65.6, -0.95 } = 0.151
    Φ_{1,0} = f_{1,0} + R V_{1,0} = 24246 + (24280)(0.151) = 27912
    LHS = 27912 + 328 = 28240 > 21624 = RHS   ⇒   reject
Second trial, t_1 = 0.5 (j = 1):

    x_1^(1,1) = x_1^(0) + t_1 d_1^(0) = 40 + (0.5)(25.6) = 52.8
    x_2^(1,1) = x_2^(0) + t_1 d_2^(0) = 0.5 + (0.5)(0.45) = 0.725

    f_{1,1} = f(x^(1,1)) = 52.8² + 320(52.8)(0.725) = 15037
    g_1(x^(1,1)) = 52.8/(60 · 0.725) - 1 = 0.2138 > 0   (violation)
    g_2(x^(1,1)) = 1 - 52.8(52.8 - 0.725)/3600 = 0.2362 > 0   (violation)
    g_3(x^(1,1)) = -52.8 < 0,   g_4(x^(1,1)) = -0.725 < 0   (inactive)

    V_{1,1} = max { 0; 0.2138, 0.2362, -52.8, -0.725 } = 0.2362
    Φ_{1,1} = f_{1,1} + R V_{1,1} = 15037 + (24280)(0.2362) = 20772
    LHS = 20772 + (0.5)(328) = 20936 < 21624 = RHS   ⇒   accept α_0 = 0.5
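The two trials above are one pass of a backtracking loop on Φ; a plain-Python sketch:

```python
# Backtracking step-size selection for the CSD example: accept the first
# t_j = (1/2)^j satisfying  Phi(x0 + t_j*d) <= Phi(x0) - t_j*beta.
x0, d = (40.0, 0.5), (25.6, 0.45)
R, gamma = 24280.0, 0.5

def phi(x):
    f = x[0]**2 + 320.0 * x[0] * x[1]
    g = [x[0] / (60.0 * x[1]) - 1.0,
         1.0 - x[0] * (x[0] - x[1]) / 3600.0,
         -x[0], -x[1]]
    return f + R * max(0.0, *g)

beta = gamma * (d[0]**2 + d[1]**2)            # beta = gamma*||d||^2 (about 328)
phi0 = phi(x0)                                # about 21624
t = 1.0
while phi((x0[0] + t*d[0], x0[1] + t*d[1])) > phi0 - t * beta:
    t *= 0.5                                  # reject the trial, halve the step
print(t)
```

The loop rejects t = 1 and accepts t = 0.5, exactly as in the hand calculation.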
SQP: CSD Algorithm
Step 1: set k = 0; choose x^(0), R_0 (= 1), 0 < γ (= 0.2) < 1; ε_1, ε_2
Step 2: evaluate f(x^(k)), h_j(x^(k)), g_j(x^(k)); V_k;
        ∇f(x^(k)), ∇h_j(x^(k)), ∇g_j(x^(k))
Step 3: define and solve the QP subproblem  ⇒  d^(k), v^(k), u^(k)
Step 4: stop if ‖d^(k)‖ ≤ ε_2 and V_k ≤ ε_1
Step 5: calculate r_k (the sum of the Lagrange multipliers);
        set R = max { R_k, r_k }
Step 6: set x^(k+1) = x^(k) + α_k d^(k)
        (α_k minimizes, inexactly, Φ(x) = f(x) + R V(x) along d^(k))
Step 7: save R_{k+1} = R, set k = k + 1, and go to Step 2

The CSD algorithm, along with the foregoing step size determination
procedure, is convergent provided second derivatives of all the
functions are piecewise continuous (Lipschitz condition) and the
set of design points x^(k) is bounded as follows:

    Φ(x^(k)) ≤ Φ(x^(0))
SQP: CSD Example

    Min:  f(x) = x_1² + x_2² - 3x_1x_2
    s.t.  g_1(x) = (1/6) x_1² + (1/6) x_2² - 1 ≤ 0
          g_2(x) = -x_1 ≤ 0
          g_3(x) = -x_2 ≤ 0,    x^(0) = (1, 1)
    R_0 = 10,  γ = 0.5,  ε_1 = ε_2 = 0.001

    x* = (√3, √3),    f(x*) = -3
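Before stepping through the CSD iterations by hand, the stated optimum can be confirmed with a direct NLP solve (a sketch; SciPy's SLSQP is an assumed convenience):

```python
from scipy.optimize import minimize

# min x1^2 + x2^2 - 3*x1*x2  s.t. (x1^2 + x2^2)/6 <= 1,  x1 >= 0,  x2 >= 0
f = lambda x: x[0]**2 + x[1]**2 - 3.0*x[0]*x[1]
cons = [{"type": "ineq", "fun": lambda x: 1.0 - (x[0]**2 + x[1]**2)/6.0}]
res = minimize(f, x0=[1.0, 1.0], method="SLSQP",
               bounds=[(0, None)] * 2, constraints=cons)
print(res.x, res.fun)
```

From x^(0) = (1, 1) the solver moves along the symmetric direction to the boundary of g_1 and returns x* = (√3, √3) with f(x*) = −3, the point the iterations below converge toward.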
Iteration 1 (k = 0)
Step 1: k = 0; x^(0) = (1, 1), R_0 = 10, γ = 0.5; ε_1 = ε_2 = 0.001
Step 2:
    f(x^(0)) = -1,    g_1(x^(0)) = -2/3,
    g_2(x^(0)) = -1,  g_3(x^(0)) = -1   (all inactive)
    ∇f(x^(0)) = (-1, -1),    ∇g_1(x^(0)) = (1/3, 1/3)
    ∇g_2(x^(0)) = (-1, 0),   ∇g_3(x^(0)) = (0, -1)
    V_0 = 0
(figure: the linearized constraints and the linearized constraint set)
Step 3: define and solve the QP subproblem  ⇒  d^(k), v^(k), u^(k)

    Min:  f̄ = (-d_1 - d_2) + 0.5 (d_1² + d_2²)
    s.t.  ḡ_1 = (1/3) d_1 + (1/3) d_2 ≤ 2/3
          -d_1 ≤ 1,    -d_2 ≤ 1

    L = (-d_1 - d_2) + 0.5 (d_1² + d_2²) + u_1 [ (1/3)(d_1 + d_2 - 2) + s_1² ]
        + u_2 (-d_1 - 1 + s_2²) + u_3 (-d_2 - 1 + s_3²)

    ∂L/∂d_1 = -1 + d_1 + (1/3) u_1 - u_2 = 0
    ∂L/∂d_2 = -1 + d_2 + (1/3) u_1 - u_3 = 0
    (1/3)(d_1 + d_2 - 2) + s_1² = 0
    -d_1 - 1 + s_2² = 0
    -d_2 - 1 + s_3² = 0
    u_i s_i = 0,    u_i ≥ 0,    i = 1, 2, 3

    ⇒   d^(0) = (1, 1)  (point D),    u^(0) = (0, 0, 0),    f̄ = -1
Step 4: ‖d^(0)‖ = √2 > ε_2   ⇒   continue
Step 5: r_0 = Σ u_i = 0,    R = max { R_0, r_0 } = max { 10, 0 } = 10
Step 6:
    Φ_0 = f_0 + R V_0 = -1 + (10)(0) = -1
    β_0 = γ ‖d^(0)‖² = 0.5 (1 + 1) = 1

    x^(1,0) = x^(0) + t_0 d^(0) = (2, 2)   (t_0 = 1)
    f_{1,0} = f(2, 2) = -4
    V_{1,0} = V(2, 2) = max { 0; 1/3, -2, -2 } = 1/3
    Φ_{1,0} = f_{1,0} + R V_{1,0} = -4 + (10)(1/3) = -2/3
    Φ_0 - t_0 β_0 = -1 - 1 = -2 < Φ_{1,0}   ⇒   reject;  t_1 = 0.5

    x^(1,1) = x^(0) + t_1 d^(0) = (1.5, 1.5)   (t_1 = 0.5)
    f_{1,1} = f(1.5, 1.5) = -2.25
    V_{1,1} = V(1.5, 1.5) = max { 0; -1/4, -1.5, -1.5 } = 0
    Φ_{1,1} = f_{1,1} + R V_{1,1} = -2.25 + (10)(0) = -2.25
    Φ_0 - t_1 β_0 = -1 - 0.5 = -1.5 > Φ_{1,1}   ⇒   accept
    α_0 = t_1 = 0.5,    x^(1) = (1.5, 1.5)
Step 7: R_1 = R_0 = 10, k = 1; go to Step 2

Iteration 2 (k = 1)
Step 2:
    f(x^(1)) = -2.25,    g_1(x^(1)) = -0.25
    g_2(x^(1)) = -1.5,   g_3(x^(1)) = -1.5   (inactive)
    ∇f(x^(1)) = (-1.5, -1.5),    ∇g_1(x^(1)) = (0.5, 0.5)
    ∇g_2(x^(1)) = (-1, 0),   ∇g_3(x^(1)) = (0, -1)
    V_1 = 0
Step 3: define and solve the QP subproblem  ⇒  d^(k), v^(k), u^(k)

    Min:  f̄ = (-1.5d_1 - 1.5d_2) + 0.5 (d_1² + d_2²)
    s.t.  ḡ_1 = 0.5d_1 + 0.5d_2 ≤ 0.25
          -d_1 ≤ 1.5,    -d_2 ≤ 1.5

    ⇒   d^(1) = (0.25, 0.25)  (point D),    u^(1) = (2.5, 0, 0),    f̄ = -0.6875

Step 4: ‖d^(1)‖ = 0.3535 > ε_2   ⇒   continue
Step 5: r_1 = Σ u_i = 2.5,    R = max { R_1, r_1 } = max { 10, 2.5 } = 10
Step 6:
    Φ_1 = f_1 + R V_1 = -2.25 + (10)(0) = -2.25
    β_1 = γ ‖d^(1)‖² = 0.5 (0.125) = 0.0625

    x^(2,0) = x^(1) + t_0 d^(1) = (1.75, 1.75)   (t_0 = 1)
    f_{2,0} = f(1.75, 1.75) = -3.0625
    V_{2,0} = V(1.75, 1.75) = max { 0; 0.0208, -1.75, -1.75 } = 0.0208
    Φ_{2,0} = f_{2,0} + R V_{2,0} = -3.0625 + (10)(0.0208) = -2.8541
    Φ_1 - t_0 β_1 = -2.25 - (1)(0.0625) = -2.3125 > Φ_{2,0}   ⇒   accept
    α_1 = t_0 = 1.0,    x^(2) = (1.75, 1.75)

Step 7: R_2 = R_1 = 10, k = 2; go to Step 2
SQP: Effect of γ in the CSD Method
Same problem as the last example, case 1: γ = 0.5 → 0.01
Steps 1–5: same as before
Step 6, Iteration 1:
Φ_0 = f_0 + R·V_0 = −1 + (10)(0) = −1
β_0 = γ‖d^(0)‖² = 0.01(1 + 1) = 0.02
Trial t_0 = 1:   x^(1,0) = x^(0) + t_0 d^(0) = (2, 2)
f_{1,0} = f(2, 2) = −4,   V_{1,0} = max{0; 1/3, −2, −2} = 1/3
Φ_{1,0} = −4 + (10)(1/3) = −2/3
Φ_0 − t_0 β_0 = −1 − (1)(0.02) = −1.02 < Φ_{1,0}   ⇒   condition violated, try t_1 = 0.5
Trial t_1 = 0.5:   x^(1,1) = x^(0) + t_1 d^(0) = (1.5, 1.5)
f_{1,1} = −2.25,   V_{1,1} = max{0; −1/4, −1.5, −1.5} = 0
Φ_{1,1} = −2.25 + (10)(0) = −2.25
Φ_0 − t_1 β_0 = −1 − (0.5)(0.02) = −1.01 ≥ Φ_{1,1}   ⇒   accept α_0 = t_1 = 0.5,   x^(1) = (1.5, 1.5)
Step 6, Iteration 2:
Φ_1 = f_1 + R·V_1 = −2.25 + (10)(0) = −2.25
β_1 = γ‖d^(1)‖² = 0.01(0.125) = 0.00125
Trial t_0 = 1:   x^(2,0) = x^(1) + t_0 d^(1) = (1.75, 1.75)
f_{2,0} = −3.0625,   V_{2,0} = 0.0208
Φ_{2,0} = −3.0625 + (10)(0.0208) = −2.8541
Φ_1 − t_0 β_1 = −2.25 − (1)(0.00125) = −2.25125 ≥ Φ_{2,0}   ⇒   accept α_1 = t_0 = 1.0,   x^(2) = (1.75, 1.75)

A smaller value of γ has no effect on the first two iterations.
Same problem, case 2: γ = 0.5 → 0.9
Iteration 2: step size t_1 = 0.5
x^(2) = (1.625, 1.625),   f_2 = −2.641,   g_1 = −0.1198,   V_1 = 0
A larger γ results in a smaller step size, and the new design point remains strictly feasible.
SQP: Effect of R in the CSD Method
Same problem as the last example, case 1: R_0 = 10 → 1
Iteration 1
Steps 1–5: same as before
Step 6:
Φ_0 = f_0 + R·V_0 = −1 + (1)(0) = −1
β_0 = γ‖d^(0)‖² = 0.5(1 + 1) = 1
Trial t_0 = 1:   x^(1,0) = x^(0) + t_0 d^(0) = (2, 2)
f_{1,0} = f(2, 2) = −4,   V_{1,0} = max{0; 1/3, −2, −2} = 1/3
Φ_{1,0} = −4 + (1)(1/3) = −11/3
Φ_0 − t_0 β_0 = −1 − (1)(1) = −2 ≥ Φ_{1,0}   ⇒   accept α_0 = t_0 = 1,   x^(1) = (2, 2)
Iteration 2
Step 2:
f(x^(1)) = −4,   g_1(x^(1)) = 1/3 (violated),   g_2(x^(1)) = −2,   g_3(x^(1)) = −2 (inactive)
∇f(x^(1)) = (−2, −2),   ∇g_1(x^(1)) = (2/3, 2/3),   ∇g_2(x^(1)) = (−1, 0),   ∇g_3(x^(1)) = (0, −1)
V_1 = 1/3
Step 3: define and solve the QP subproblem for d^(k), u^(k), linearized at x^(1) = (2, 2):

Min   f̄ = (−2d_1 − 2d_2) + 0.5(d_1² + d_2²)
s.t.  ḡ_1 = (2/3)d_1 + (2/3)d_2 ≤ −1/3,   −d_1 ≤ 2,   −d_2 ≤ 2

Solution (ḡ_1 active): d^(1) = (−0.25, −0.25),   u^(1) = (27/8, 0, 0),   f̄ = 1.0625
Step 4: ‖d^(1)‖ = 0.3535 > ε_2, continue
Step 5: r_1 = Σ u_i = 27/8,   R = max{R_1, r_1} = max{1, 27/8} = 27/8
Step 6:
Φ_1 = f_1 + R·V_1 = −4 + (27/8)(1/3) = −2.875
β_1 = γ‖d^(1)‖² = 0.5(0.3535)² = 0.0625
Trial t_0 = 1:   x^(2,0) = x^(1) + t_0 d^(1) = (1.75, 1.75)
f_{2,0} = f(1.75, 1.75) = −3.0625
V_{2,0} = max{0; 0.0208, −1.75, −1.75} = 0.0208
Φ_{2,0} = −3.0625 + (27/8)(0.0208) = −2.9923
Φ_1 − t_0 β_1 = −2.875 − (1)(0.0625) = −2.9375 ≥ Φ_{2,0}   ⇒   accept α_1 = t_0 = 1.0,   x^(2) = (1.75, 1.75)
Step 7: R_2 = 27/8, k = 2; go to Step 2

A smaller value of R gives a larger step size in the first iteration, but the result at the end of iteration 2 is the same as before.
Observations on the CSD Algorithm
The CSD algorithm is a first-order method for constrained optimization.
CSD can treat equality as well as inequality constraints.
Golden-section search may be used to find the step size by minimizing the descent function instead of merely satisfying the descent condition (not suggested, as it is inefficient).
The rate of convergence of CSD can be improved by including higher-order information about the problem functions in the QP subproblem.
As presented, the step size is not allowed to be greater than one; in practice, the step size can be larger than one.
There are numerical uncertainties in the selection of the parameters γ and R_0.
The starting point can affect the performance of the algorithm.
SQP: Constrained Quasi-Newton Methods

Idea: introduce curvature information for the Lagrange function into the quadratic cost function → a new QP subproblem.

Min: f(x)   s.t.  h_i(x) = 0,  i = 1, …, p

L(x, v) = f(x) + Σ_{i=1}^{p} v_i h_i(x) = f(x) + vᵀh(x)

KKT conditions:
∇L(x, v) = ∇f(x) + Σ_{i=1}^{p} v_i ∇h_i(x) = ∇f(x) + Nv = 0
h(x) = 0

Note: N is the n × p matrix of constraint gradients,

N = ⎡ ∂h_1/∂x_1  ⋯  ∂h_p/∂x_1 ⎤
    ⎢     ⋮       ⋱      ⋮    ⎥
    ⎣ ∂h_1/∂x_n  ⋯  ∂h_p/∂x_n ⎦   (n × p)
Write the KKT conditions as a nonlinear system:

⎡ ∇L(x, v) ⎤   ⎡ 0 ⎤                ⎡ x ⎤
⎣   h(x)   ⎦ = ⎣ 0 ⎦ ,   with  y =  ⎣ v ⎦ ,  i.e.  F(y) = 0

Newton's method: ∇Fᵀ(y^(k)) Δy^(k) = −F(y^(k))

⎡ ∇²L  N ⎤^(k) ⎡ Δx ⎤^(k)     ⎡ ∇L ⎤^(k)
⎣ Nᵀ   0 ⎦     ⎣ Δv ⎦     = − ⎣ h  ⎦

The first row gives
∇²L Δx^(k) + N Δv^(k) = −∇L
∇²L Δx^(k) + N (v^(k+1) − v^(k)) = −∇f(x^(k)) − Nv^(k)
∇²L Δx^(k) + N v^(k+1) = −∇f(x^(k))

so the system can be rewritten as

⎡ ∇²L  N ⎤^(k) ⎡   Δx    ⎤^(k)     ⎡ ∇f ⎤^(k)
⎣ Nᵀ   0 ⎦     ⎣ v^(k+1) ⎦     = − ⎣ h  ⎦

Solve these equations to obtain Δx^(k) and v^(k+1).
Note: this is the same as minimizing the following constrained quadratic function:

Min:  ∇fᵀΔx + 0.5 Δxᵀ ∇²L Δx
s.t.  h + NᵀΔx = 0

L̄ = ∇fᵀΔx + 0.5 Δxᵀ ∇²L Δx + vᵀ(h + NᵀΔx)

KKT conditions:
∇L̄ = ∇f + ∇²L Δx + Nv = 0
h + NᵀΔx = 0

⎡ ∇²L  N ⎤ ⎡ Δx ⎤     ⎡ ∇f ⎤
⎣ Nᵀ   0 ⎦ ⎣ v  ⎦ = − ⎣ h  ⎦
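The block KKT system above is one dense linear solve per Newton step. A minimal sketch, assuming ∇²L (or an approximation H of it) is available explicitly; the names and the test problem are illustrative, not from the slides:

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        for i in range(n):
            if i != col:
                fac = M[i][col] / M[col][col]
                for j in range(col, n + 1):
                    M[i][j] -= fac * M[col][j]
    return [M[i][n] / M[i][i] for i in range(n)]

def newton_kkt_step(H, N, grad_f, h):
    """Assemble and solve [[H, N], [N^T, 0]] [dx; v] = [-grad_f; -h],
    the equality-constrained Newton/SQP step. H approximates the Hessian
    of the Lagrangian; N is the n x p constraint-gradient matrix."""
    n, p = len(H), len(N[0])
    K = [[0.0] * (n + p) for _ in range(n + p)]
    for i in range(n):
        for j in range(n):
            K[i][j] = H[i][j]
        for j in range(p):
            K[i][n + j] = N[i][j]   # upper-right block N
            K[n + j][i] = N[i][j]   # lower-left block N^T
    rhs = [-gi for gi in grad_f] + [-hi for hi in h]
    sol = solve_linear(K, rhs)
    return sol[:n], sol[n:]
```

For a quadratic f (constant ∇²L) with linear constraints, a single such step lands exactly on the constrained optimum, which is why the QP interpretation above holds.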
To obtain a convergent algorithm, the solution Δx should be treated as a search direction d, with the step size determined by minimizing an appropriate descent function. This gives the new QP subproblem:

Min:  f̄ = cᵀd + (1/2) dᵀHd
s.t.  h̄: Nᵀd = e
      ḡ: Aᵀd ≤ b
Quasi-Newton Hessian Approximation

H^(k+1) = H^(k) + D^(k) − E^(k)

s^(k) = α_k d^(k)   (change in design)
z^(k) = H^(k) s^(k)
y^(k) = ∇L(x^(k+1), u^(k), v^(k)) − ∇L(x^(k), u^(k), v^(k))
ξ_1 = s^(k)·y^(k),   ξ_2 = s^(k)·z^(k)
θ = 1 if ξ_1 ≥ 0.2ξ_2;   otherwise θ = 0.8ξ_2/(ξ_2 − ξ_1)
w^(k) = θy^(k) + (1 − θ)z^(k),   ξ_3 = s^(k)·w^(k)
D^(k) = (1/ξ_3) w^(k) w^(k)ᵀ,   E^(k) = (1/ξ_2) z^(k) z^(k)ᵀ
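The update formulas above translate directly into code. A sketch (θ is the damping factor defined above, which blends y toward z = Hs when s·y is too small or negative):

```python
def update_hessian(H, s, y):
    """One damped quasi-Newton update H <- H + D - E, following the
    formulas above: z = H s, w = theta*y + (1-theta)*z,
    D = w w^T / (s.w), E = z z^T / (s.z)."""
    n = len(s)
    z = [sum(H[i][j] * s[j] for j in range(n)) for i in range(n)]
    xi1 = sum(si * yi for si, yi in zip(s, y))
    xi2 = sum(si * zi for si, zi in zip(s, z))
    theta = 1.0 if xi1 >= 0.2 * xi2 else 0.8 * xi2 / (xi2 - xi1)
    w = [theta * yi + (1.0 - theta) * zi for yi, zi in zip(y, z)]
    xi3 = sum(si * wi for si, wi in zip(s, w))
    return [[H[i][j] + w[i] * w[j] / xi3 - z[i] * z[j] / xi2
             for j in range(n)] for i in range(n)]
```

With the data from the PLBA example below (H = I, s = (0.5, 0.5), y = (−0.5, −0.5)) this reproduces the matrix computed there by hand.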
SQP: Modified CSD Algorithm
PLBA (Pshenichny–Lim–Belegundu–Arora) method
Step 1: same as CSD; set H^(0) = I
Step 2: calculate functions, gradients, and the maximum constraint violation; update the Hessian approximation if k > 0
Step 3: solve the QP subproblem for d^(k), u^(k), v^(k)
Steps 4–7: same as CSD
SQP: Modified CSD Algorithm, Example

Min:  f(x) = x_1² + x_2² − 3x_1x_2
s.t.  g_1(x) = (1/6)x_1² + (1/6)x_2² − 1 ≤ 0
      g_2(x) = −x_1 ≤ 0
      g_3(x) = −x_2 ≤ 0

x^(0) = (1, 1),   R_0 = 10,   γ = 0.5,   ε_1 = ε_2 = 0.001
Optimum: x* = (√3, √3),   f* = −3
Iteration 1: same as before
d^(0) = (1, 1),   α_0 = 0.5,   x^(1) = (1.5, 1.5)
u^(0) = (0, 0, 0),   R_1 = 10,   H^(0) = I

Iteration 2, Step 2:
at x^(1) = (1.5, 1.5):   f = −2.25,   g_1 = −0.25,   g_2 = g_3 = −1.5
∇f = (−1.5, −1.5),   ∇g_1 = (0.5, 0.5),   ∇g_2 = (−1, 0),   ∇g_3 = (0, −1)

s^(0) = α_0 d^(0) = (0.5, 0.5)
z^(0) = H^(0) s^(0) = (0.5, 0.5)
y^(0) = ∇f(x^(1)) − ∇f(x^(0)) = (−0.5, −0.5)
ξ_1 = s^(0)·y^(0) = −0.5,   ξ_2 = s^(0)·z^(0) = 0.5
ξ_1 < 0.2ξ_2   ⇒   θ = 0.8(0.5)/(0.5 + 0.5) = 0.4
w^(0) = 0.4(−0.5, −0.5) + (1 − 0.4)(0.5, 0.5) = (0.1, 0.1)
ξ_3 = s^(0)·w^(0) = 0.1

D^(0) = ⎡ 0.1  0.1 ⎤      E^(0) = ⎡ 0.5  0.5 ⎤
        ⎣ 0.1  0.1 ⎦              ⎣ 0.5  0.5 ⎦

H^(1) = ⎡ 1  0 ⎤ + ⎡ 0.1  0.1 ⎤ − ⎡ 0.5  0.5 ⎤ = ⎡  0.6  −0.4 ⎤
        ⎣ 0  1 ⎦   ⎣ 0.1  0.1 ⎦   ⎣ 0.5  0.5 ⎦   ⎣ −0.4   0.6 ⎦
Step 3: QP subproblem

Min:  f̄ = −1.5d_1 − 1.5d_2 + 0.5(0.6d_1² − 0.8d_1d_2 + 0.6d_2²)
s.t.  ḡ_1 = 0.5d_1 + 0.5d_2 ≤ 0.25
      ḡ_2 = −d_1 ≤ 1.5
      ḡ_3 = −d_2 ≤ 1.5

Solution: d^(1) = (0.25, 0.25),   u^(1) = (2.9, 0, 0)

Here the direction is the same as in the previous CSD method; in general, inclusion of the approximate Hessian gives different directions and better convergence.
Newton–Raphson Method to Solve Multiple Nonlinear Equations

F(x) = 0   (n equations in n unknowns)
x^(k+1) = x^(k) + Δx^(k),   k = 0, 1, 2, …
Stop if ‖F(x*)‖ = [Σ_{i=1}^{n} F_i(x*)²]^{1/2} ≤ ε

Example (n = 2): F_1(x_1, x_2) = 0,  F_2(x_1, x_2) = 0. Linearizing about x^(k):

F_1(x_1^(k), x_2^(k)) + (∂F_1/∂x_1) Δx_1^(k) + (∂F_1/∂x_2) Δx_2^(k) = 0
F_2(x_1^(k), x_2^(k)) + (∂F_2/∂x_1) Δx_1^(k) + (∂F_2/∂x_2) Δx_2^(k) = 0

⎡ F_1 ⎤^(k)   ⎡ ∂F_1/∂x_1  ∂F_1/∂x_2 ⎤ ⎡ Δx_1 ⎤^(k)   ⎡ 0 ⎤
⎣ F_2 ⎦     + ⎣ ∂F_2/∂x_1  ∂F_2/∂x_2 ⎦ ⎣ Δx_2 ⎦     = ⎣ 0 ⎦
  = F^(k)            = ∇Fᵀ = J            = Δx^(k)

Δx^(k) = −J⁻¹ F^(k), or solve these linear equations directly.
Newton–Raphson Method to Solve Multiple Nonlinear Equations: Example

F_1(x) = 1.0 − (4.0×10⁶)/(x_1² x_2) = 0
F_2(x) = 250 − (4.0×10⁶)/(x_1 x_2²) = 0

J = ⎡ ∂F_1/∂x_1  ∂F_1/∂x_2 ⎤ = 4.0×10⁶ ⎡ 2/(x_1³x_2)    1/(x_1²x_2²) ⎤
    ⎣ ∂F_2/∂x_1  ∂F_2/∂x_2 ⎦           ⎣ 1/(x_1²x_2²)  2/(x_1x_2³)   ⎦

Step 1: x^(0) = (500, 1.0),   ε = 0.1,   k = 0
Step 2: F_1 = −15,   F_2 = −7750,   ‖F‖ ≈ 7750 > ε

Step 3: J^(0) = ⎡ 8/125    16   ⎤
                ⎣  16    16000  ⎦

Step 4: solve  ⎡ 8/125   16   ⎤ ⎡ Δx_1^(k) ⎤   ⎡  15  ⎤
               ⎣  16   16000  ⎦ ⎣ Δx_2^(k) ⎦ = ⎣ 7750 ⎦

Δx^(0) = (151, 0.33)   ⇒   x^(1) = (651, 1.33)
k     x_1^(k)    x_2^(k)    ‖F^(k)‖
0 500.0 1.0000 7750.00
1 651.0 1.3300 3206.00
2 822.0 1.7700 1291.00
3 976.0 2.3500 489.20
4 1047.0 3.0490 161.30
5 1025.0 3.6770 38.50
6 1003.0 3.9630 3.98
7 1000.3 3.9995 0.05
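The iteration history above can be reproduced with a short script (a sketch; the 2×2 linear system is solved by Cramer's rule rather than by forming J⁻¹):

```python
def newton_raphson(F, J, x0, eps=0.1, max_iter=50):
    """Newton-Raphson for a 2x2 system F(x) = 0: each step solves
    J(x) * dx = -F(x) and sets x <- x + dx."""
    x = list(x0)
    for _ in range(max_iter):
        F1, F2 = F(x)
        if (F1 * F1 + F2 * F2) ** 0.5 <= eps:   # stop when ||F|| <= eps
            return x
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        # Cramer's rule for [a b; c d] [dx1; dx2] = [-F1; -F2]
        dx1 = (-F1 * d + b * F2) / det
        dx2 = (-a * F2 + c * F1) / det
        x = [x[0] + dx1, x[1] + dx2]
    raise RuntimeError("Newton-Raphson did not converge")

def F(x):
    x1, x2 = x
    return (1.0 - 4.0e6 / (x1**2 * x2), 250.0 - 4.0e6 / (x1 * x2**2))

def J(x):
    x1, x2 = x
    return [[8.0e6 / (x1**3 * x2), 4.0e6 / (x1**2 * x2**2)],
            [4.0e6 / (x1**2 * x2**2), 8.0e6 / (x1 * x2**3)]]
```

Starting from x^(0) = (500, 1.0) with ε = 0.1, the iterates track the table above, converging near the exact root (1000, 4).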