Alberto Bemporad
imt.lu/ab
• Hybrid MPC
• Stochastic MPC
• Learning-based MPC
• Numerical examples:
– Linear Systems:
http://cse.lab.imtlucca.it/~bemporad/intro_control_course.html
– Numerical Optimization:
http://cse.lab.imtlucca.it/~bemporad/optimization_course.html
– Machine Learning:
http://cse.lab.imtlucca.it/~bemporad/ml.html
[diagram: model-based optimizer ⇄ process, with measurements fed back]
• Main idea: use a (simplified) dynamical model of the process to predict its likely future evolution and choose the "best" (or at least a good) control action
(Propoi, 1963)
• MPC used in the process industries since the 80’s
(Qin, Badgwell, 2003) (Bauer, Craig, 2008)
• Industrial survey of MPC applications conducted in mid-1999.
Table 6: Summary of linear MPC applications by areas (estimates based on vendor survey; estimates do not include applications by companies who have licensed vendor technology).
Table 2: The percentage of survey respondents indicating whether a control technology had demonstrated ("Current Impact") or was likely to demonstrate over the next five years ("Future Impact") high impact in practice. (Samad et al., 2020)
vis-à-vis other control technologies, especially those that can be considered the "crown jewels"
of control theory - robust control, adaptive control, and nonlinear control."
• steady-state optimization (static I/O models)
• reference trajectory generator (tactical planner): reference signal r(t)
• dynamic optimizer (MPC):
– optimize transient response
– ensure reference tracking
– handle constraints
– coordinate multiple inputs
[figure: suspension deflection, tire deflection]
• MPC combines path planning, path tracking, and obstacle avoidance
• Stochastic prediction models are used to take uncertainty and the driver's behavior into account
• Real-time MPC is able to take into account coupled dynamics and constraints,
optimizing performance also during transients
[diagram: MPC computes the engine torque request (Engine Control) and the CVT ratio request (CVT Control) from the desired axle torque; CVT US06 Double Hill driving cycle]
(Bemporad, Bernardini, Livshiz, Pattipati, 2018)
http://www.odys.it/odys-and-gm-bring-online-mpc-to-production
(Bemporad, Rocchi, 2011) (Pascucci, Bennani, Bemporad, 2016) (Krenn et. al., 2012)
http://www.pw.utc.com/Press/Story/20100527-0100/2010
[figure: power dispatch problem — coal 1, coal 2, natural gas, photovoltaic, wind farm, hydro-storage, demand]
Drinking water network of Barcelona: 63 tanks, 114 controlled flows, 17 mixing nodes (2012-2015)
[plots: payoff vs. stock price at expiration]

p(T) = max{w(T) − K, 0}

p(T) = max{ 0, C + min_{i∈{1,...,Nfix}} (x(t_i) − x(t_{i−1})) / x(t_{i−1}) }
closed-loop simulation
MPC
design
physical modeling +
parameter estimation
real-time
system code
identification
/* z = A*x, with A stored column-major as an m-by-n array */
for (i = 0; i < m; i++) {
    z[i] = A[i] * x[0];
    for (j = 1; j < n; j++) {
        z[i] += A[i + m*j] * x[j];
    }
}
experiments
physical process
"Model Predictive Control" - © A. Bemporad. All rights reserved. 24/132
MPC toolboxes
• MPC Toolbox (The Mathworks, Inc.): (Bemporad, Ricker, Morari, 1998-today)
• ODYS Embedded MPC Toolset: (ODYS, 2013-today) odys.it/embedded-mpc
min_x f(x)
s.t. g(x) ≤ 0

x ∈ Rn, f : Rn → R, g : Rn → Rm

x = [x1; ...; xn], f(x) = f(x1, x2, ..., xn), g(x) = [g1(x1, x2, ..., xn); ...; gm(x1, x2, ..., xn)]
http://www.neos-server.org
http://www.coin-or.org/
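As a concrete illustration (not from the slides), a problem of this form can be solved numerically; the quadratic objective and unit-disk constraint below are made-up examples, and SciPy is assumed to be available:

```python
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize

# made-up f: R^2 -> R and g: R^2 -> R with the convention g(x) <= 0
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0   # unit disk

con = NonlinearConstraint(g, -np.inf, 0.0)  # encodes g(x) <= 0
res = minimize(f, x0=np.zeros(2), constraints=[con])

# the unconstrained minimizer (1, 2) is infeasible, so the
# constrained solution lies on the boundary of the disk
```

Since the problem is convex, the solver returns the projection of (1, 2) onto the unit disk, i.e. (1, 2)/√5.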
[figure: convex set S — the segment between any x1, x2 ∈ S lies in S; nonconvex set — some segment between x1 and x2 leaves the set]
• Every local solution is also a global one (we will see this later)
• Halfspace (H-)representation: P = {x ∈ Rn : Ax ≤ b}
[figure: polyhedron with vertices v1, v3 and facets A1x = b1, A3x = b3]
• Vertex (V-)representation:
P = {x ∈ Rn : x = Σ_{i=1}^{q} αi vi + Σ_{j=1}^{p} βj rj, αi, βj ≥ 0, Σ_{i=1}^{q} αi = 1}, vi, rj ∈ Rn
vi = vertex, rj = extreme ray; when q = 0 the polyhedron is a cone
Convex hull = transformation from V- to H-representation
Vertex enumeration = transformation from H- to V-representation
• LP in standard form: min c′x
s.t. Ax = b
x ≥ 0, x ∈ Rn
• Conversion to standard form:
1. introduce slack variables
Σ_{j=1}^{n} aij xj ≤ bi ⇒ Σ_{j=1}^{n} aij xj + si = bi, si ≥ 0
Σ_{j=1}^{n} aij xj ≥ bi ⇒ Σ_{j=1}^{n} −aij xj ≤ −bi
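A small numerical sketch of the slack-variable conversion (the LP data are invented for illustration; SciPy's linprog is assumed): the inequality form and the slack-augmented standard form give the same optimum.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([4.0, 3.0])

# inequality form: min c'x s.t. Ax <= b, x >= 0 (linprog's default bounds)
res_ineq = linprog(c, A_ub=A, b_ub=b)

# standard form: one slack s_i per row, so that A x + s = b, s >= 0
c_std = np.concatenate([c, np.zeros(2)])
A_std = np.hstack([A, np.eye(2)])
res_std = linprog(c_std, A_eq=A_std, b_eq=b)
```

Both formulations report the same optimal value (here −8, attained at x = (0, 4)).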
½ x′Qx = ½ x′((Q + Q′)/2) x + ¼ x′Qx − ¼ x′Q′x = ½ x′((Q + Q′)/2) x
(since x′Qx = x′Q′x, the last two terms cancel)
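A quick numerical check of this identity (random data, NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 4))   # generic, nonsymmetric Q
x = rng.standard_normal(4)

Qs = (Q + Q.T) / 2                # symmetric part of Q
# x'Qx depends on Q only through its symmetric part
same = np.isclose(x @ Q @ x, x @ Qs @ x)
```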
• Many good solvers are available (CPLEX, Gurobi, GLPK, FICO Xpress, CBC, ...)
For comparisons see http://plato.la.asu.edu/bench.html
• GNU MathProg is a subset of AMPL associated with the free package GLPK
(GNU Linear Programming Kit)
• Relation between input and states: xk = A^k x0 + Σ_{j=0}^{k−1} A^j B u_{k−1−j}
• Performance index
J(z, x0) = x′N P xN + Σ_{k=0}^{N−1} (x′k Q xk + u′k R uk)
with R = R′ ≻ 0, Q = Q′ ⪰ 0, P = P′ ⪰ 0, z = [u0; u1; ...; u_{N−1}]
• Goal: find the sequence z∗ that minimizes J(z, x0), i.e., that steers the state x to the origin optimally

J(z, x0) = (S̄z + T̄x0)′ Q̄ (S̄z + T̄x0) + z′R̄z + x′0 Q x0
= ½ z′ H z + x′0 F′ z + ½ x′0 Y x0, with H = 2(R̄ + S̄′Q̄S̄), F′ = 2T̄′Q̄S̄, Y = 2(Q + T̄′Q̄T̄)
Linear MPC - Unconstrained case
J(z, x0) = ½ z′Hz + x′0 F′z + ½ x′0 Y x0, z = [u0; u1; ...; u_{N−1}] (condensed form of MPC)

∇z J(z, x0) = Hz + F x0 = 0

and hence z∗ = [u∗0; u∗1; ...; u∗_{N−1}] = −H⁻¹F x0 ("batch" solution)
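The batch solution can be reproduced numerically; the double-integrator model, weights, and horizon below are illustrative choices (not from this slide):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.0]); R = np.array([[0.1]]); P = np.eye(2)
N = 5
n, m = B.shape

# Sbar, Tbar such that [x1; ...; xN] = Sbar z + Tbar x0
Sbar = np.zeros((N * n, N * m))
Tbar = np.zeros((N * n, n))
for k in range(1, N + 1):
    Tbar[(k - 1) * n:k * n, :] = np.linalg.matrix_power(A, k)
    for j in range(k):
        Sbar[(k - 1) * n:k * n, j * m:(j + 1) * m] = (
            np.linalg.matrix_power(A, k - 1 - j) @ B)

Qbar = np.kron(np.eye(N), Q); Qbar[-n:, -n:] = P   # diag(Q, ..., Q, P)
Rbar = np.kron(np.eye(N), R)
H = 2 * (Rbar + Sbar.T @ Qbar @ Sbar)
F = 2 * (Sbar.T @ Qbar @ Tbar)       # so that F' = 2 Tbar' Qbar Sbar

x0 = np.array([1.0, 0.0])
zstar = -np.linalg.solve(H, F @ x0)  # "batch" solution z* = -H^{-1} F x0
grad = H @ zstar + F @ x0            # stationarity check: should be ~0
```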
ymin ≤ yk ≤ ymax, k = 1, . . . , N
• Output constraints: yk = CA^k x0 + Σ_{i=0}^{k−1} CA^i B u_{k−1−i} ≤ ymax, k = 1, . . . , N

[ CB        0    . . .  0  ]       [ ymax ]   [ CA   ]
[ CAB       CB   . . .  0  ] z  ≤  [ ymax ] − [ CA²  ] x0
[ . . .                    ]       [ ...  ]   [ ...  ]
[ CA^{N−1}B  . . . CAB  CB ]       [ ymax ]   [ CA^N ]
• Get the solution z∗ = [u∗0; u∗1; ...; u∗_{N−1}] of the QP
min_z ½ z′Hz + x′(t)F′z
s.t. Gz ≤ W + S x(t)
(the current state x(t) enters the QP: this is the feedback)
• Apply only u(t) = u∗0, discarding the remaining optimal inputs u∗1, . . . , u∗_{N−1}
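The receding-horizon loop can be sketched as follows. The double-integrator data, weights, and horizon are illustrative, and the box-constrained QP is solved here with a generic bounded solver from SciPy rather than a dedicated QP routine:

```python
import numpy as np
from scipy.optimize import minimize

# illustrative model and weights (not from the slides)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.array([[0.1]]); P = np.eye(2); N = 5
n, m = B.shape

# condensed matrices H, F of the QP cost 1/2 z'Hz + x'F'z
Sbar = np.zeros((N * n, N * m)); Tbar = np.zeros((N * n, n))
for k in range(1, N + 1):
    Tbar[(k - 1) * n:k * n, :] = np.linalg.matrix_power(A, k)
    for j in range(k):
        Sbar[(k - 1) * n:k * n, j * m:(j + 1) * m] = (
            np.linalg.matrix_power(A, k - 1 - j) @ B)
Qbar = np.kron(np.eye(N), Q); Qbar[-n:, -n:] = P
Rbar = np.kron(np.eye(N), R)
H = 2 * (Rbar + Sbar.T @ Qbar @ Sbar)
F = 2 * (Sbar.T @ Qbar @ Tbar)

x = np.array([3.0, 0.0])
applied = []
for _ in range(50):
    cost = lambda z: 0.5 * z @ H @ z + (F @ x) @ z
    res = minimize(cost, np.zeros(N * m), bounds=[(-1.0, 1.0)] * (N * m))
    u = res.x[:m]                # apply only the first optimal move u0*
    applied.append(u[0])
    x = A @ x + B @ u            # plant update, then re-solve at next step
```

The closed loop respects −1 ≤ u ≤ 1 at every step and regulates the state toward the origin.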
• Sample time: Ts = 1 s
• State-space realization:
x(t + 1) = [1 1; 0 1] x(t) + [0; 1] u(t)
y(t) = [1 0] x(t)
• Constraints: −1 ≤ u(t) ≤ 1
• Performance index: min Σ_{k=0}^{1} yk² + (1/10) uk² + x′2 x2   (N = 2)
• QP matrices:
cost: ½ z′Hz + x′(t)F′z + ½ x′(t)Y x(t)
H = [4.2 2; 2 2.2], F = [2 6; 0 2], Y = [4 6; 6 12]
constraints: Gz ≤ W + S x(t)
G = [1 0; −1 0; 0 1; 0 −1], W = [1; 1; 1; 1], S = 0 (4×2)
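These numbers can be checked directly with a short NumPy sketch that condenses the N = 2 problem exactly as in the batch construction (here Y collects only the x0-dependent terms from k ≥ 1, since the y0² term is a constant):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = C.T @ C                  # penalizes y_k^2
R = np.array([[0.1]])        # penalizes u_k^2 / 10
P = np.eye(2)                # terminal weight x_2' x_2

# [x1; x2] = Sbar z + Tbar x0, with z = [u0; u1]
Sbar = np.block([[B, np.zeros((2, 1))], [A @ B, B]])
Tbar = np.vstack([A, A @ A])
Qbar = np.block([[Q, np.zeros((2, 2))], [np.zeros((2, 2)), P]])
Rbar = np.kron(np.eye(2), R)

H = 2 * (Rbar + Sbar.T @ Qbar @ Sbar)
F = 2 * (Sbar.T @ Qbar @ Tbar)
Y = 2 * (Tbar.T @ Qbar @ Tbar)
```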
• Add the state constraint x2,k ≥ −1, k = 1
• New QP matrices:
H = [4.2 2; 2 2.2], F = [2 6; 0 2], Y = [4 6; 6 12]
G = [−1 0; 1 0; −1 0; 0 1; 0 −1], W = [1; 1; 1; 1; 1], S = [0 1; 0 0; 0 0; 0 0; 0 0]
• As u(t) = u(t − 1) + ∆u(t) we need to extend the system with a new state
xu (t) = u(t − 1)
x(t + 1) = Ax(t) + Bu(t − 1) + B∆u(t)
xu(t + 1) = xu(t) + ∆u(t)

[x(t+1); xu(t+1)] = [A B; 0 I] [x(t); xu(t)] + [B; I] ∆u(t)
y(t) = [C 0] [x(t); xu(t)]
• Again a linear system with states x(t), xu (t) and input ∆u(t)
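A small sketch of this augmentation (illustrative A, B, C), checking that the augmented model reproduces the original dynamics:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n, m = B.shape

# augmented state [x(t); xu(t)] with xu(t) = u(t-1), input Delta u(t)
Aa = np.block([[A, B], [np.zeros((m, n)), np.eye(m)]])
Ba = np.vstack([B, np.eye(m)])
Ca = np.hstack([C, np.zeros((1, m))])

# one step: u(t) = u(t-1) + Delta u(t) must give the same next state
x = np.array([1.0, 0.0]); u_prev = np.array([0.5]); du = np.array([0.2])
xa_next = Aa @ np.concatenate([x, u_prev]) + Ba @ du
x_next = A @ x + B @ (u_prev + du)   # direct simulation for comparison
```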
• Add the extra penalty ∥W u (uk − uref (t))∥22 to track input references
• Constraints may depend on r(t), such as emin ≤ yk − r(t) ≤ emax
• System: y(t) = 1/(s² + 0.4s + 1) u(t) (or equivalently d²y/dt² + 0.4 dy/dt + y = u)
[plot: step response]
[diagram: step reference → linear constrained controller → u(t) → process → y(t)]
• Note that MPC naturally provides feedforward action on v(t) and r(t)
[diagram: MPC #1 and MPC #2 control process #1 and process #2 with references r1(t), r2(t); MPC #1 communicates its planned input sequence u1(t), u1(t+1), …, u1(t+N-1)]
• Commands generated by MPC #1 are measured dist. in MPC#2, and vice versa
x̄(t) = x̂(t + τ) = A^τ x(t) + Σ_{j=0}^{τ−1} A^j B u(t − 1 − j)   ← past inputs!
Albert Einstein
(1879–1955)
[figure: receding-horizon principle — at each time t the predicted optimal trajectory x*(t+1|t), x*(t+2|t), … over the horizon [t, t+N] is recomputed; only the first move is applied, so x*(t+1|t) = x(t+1)]
Richard Bellman (1920–1984)
"An optimal policy has the property that, regardless of the decisions taken to enter a particular state, the remaining decisions made for leaving that state must constitute an optimal policy."
(Bellman, 1957)
Convergence and stability
min_z x′N P xN + Σ_{k=0}^{N−1} (x′k Q xk + u′k R uk)   (QP problem)
s.t. umin ≤ uk ≤ umax, k = 0, . . . , N − 1
ymin ≤ Cxk ≤ ymax, k = 1, . . . , N
Q = Q′ ⪰ 0, R = R′ ≻ 0, P = P′ ⪰ 0
• Stability is a complex function of the model (A, B, C) and the MPC parameters
N, Q, R, P, umin , umax , ymin , ymax
• Stability constraints and weights on the terminal state xN can be imposed over
the prediction horizon to ensure stability of MPC
• Hence
0 ≤ x′ (t)Qx(t) + u′ (t)Ru(t) ≤ V ∗ (x(t)) − V ∗ (x(t + 1)) → 0 for t → ∞
• The shifted optimal sequence z̄_{t+1} = [u_1^t . . . u_{N−1}^t 0]′ (or z̄_{t+1} = [u_1^t u_2^t . . .]′ in case N = ∞) is a feasible sequence at time t + 1 and has cost
• As u(t) → 0, also LCAk x(t) → 0, and since L is nonsingular CAk x(t) → 0 too,
for all k = 0, . . . , n − 1
xr = Axr + Bur
(equilibrium state/input)
r = Cxr
• Formulate the MPC problem (assume umin < ur < umax , ymin < r < ymax )
min Σ_{k=0}^{N−1} (yk − r)′ Qy (yk − r) + (uk − ur)′ R (uk − ur)
k=0
s.t. xk+1 = Axk + Buk
umin ≤ uk ≤ umax , ymin ≤ Cxk ≤ ymax
xN = xr
• Drawback: the approach only works well in nominal conditions, as the input
reference ur depends on A, B, C (inverse static model)
2. End-point constraint xN = 0
(Kwon, Pearson, 1977) (Keerthi, Gilbert, 1988) (Bemporad, Chisci, Mosca, 1994)
All the proofs in (1,2,3) use the value function V ∗ (x(t)) = minz J(z, x(t)) as a
Lyapunov function.
P = A′ P A − A′ P B(B ′ P B + R)−1 B ′ P A + Q
Jacopo Francesco Riccati
(1676–1754)
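The Riccati equation above can be solved by simple fixed-point iteration, as in this sketch with illustrative A, B, Q, R (production code would typically use a dedicated DARE solver instead):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.array([[0.1]])

# fixed-point iteration of P = A'PA - A'PB(B'PB+R)^{-1}B'PA + Q
P = np.eye(2)
for _ in range(500):
    P = (A.T @ P @ A
         - A.T @ P @ B @ np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)
         + Q)

# residual of the Riccati equation at the computed P (should be ~0)
residual = (A.T @ P @ A
            - A.T @ P @ B @ np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)
            + Q - P)
```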
that is the maximum output admissible set for the closed-loop system
(A + BK, B, C) and constraints umin ≤ Kx ≤ umax , ymin ≤ Cx ≤ ymax
• This makes MPC ≡ constrained LQR where the MPC problem is feasible
• Recursive feasibility is ensured by the terminal constraint for all x(0) such that the MPC problem is feasible @t = 0
Linearized model:
ẋ = [ −0.0151 −60.5651 0 −32.174 ;
      −0.0001 −1.3411 0.9929 0 ;
      0.00018 43.2541 −0.86939 0 ;
      0 0 1 0 ] x
  + [ −2.516 −13.136 ;
      −0.1689 −0.2514 ;
      −17.251 −1.5766 ;
      0 0 ] u
y = [0 1 0 0; 0 0 0 1] x   (Kapasouris, Athans, Stein, 1988)
• Inputs: elevator and flaperon (= flap + aileron) angles
• Outputs: attack and pitch angles
• Sampling time: Ts = 0.05 sec (+ ZOH)
• Input constraints: umax = −umin = 25◦
[plots: closed-loop responses y1(t), y2(t) and inputs u(t) over 0–4 s for different output weights, e.g. W y = [1 0; 0 1] and W y = [10 0; 0 1]]
min ρϵ²
s.t. y1,min − ϵV1,min ≤ y1 ≤ y1,max + ϵV1,max
[plots: closed-loop response y1(t), y2(t) and input u(t) over 0–4 s with softened output constraints, W y = [1 0; 0 1]]
[diagram: linear controller with input saturation ua(t) driving the process — can go unstable!]
• MPC takes saturation into account and handles it automatically (and optimally)
[diagram: MPC controller with the same saturation driving the process]
• weights: the larger the ratio W y /W ∆u the more aggressive the controller
• input horizon: the larger Nu , the more “optimal” but more complex the controller
• prediction horizon: the smaller N , the more aggressive the controller
• constraints horizon: the smaller Nc , the simpler the controller
• limits: controller less aggressive if ∆umin , ∆umax are small
• penalty ρϵ : pick the smallest ρϵ that keeps soft constraints reasonably satisfied
u̇ ≜ du/dt ≈ (uk − uk−1)/Ts, so rate limits map to ∆umin = Ts u̇min ≤ ∆uk ≤ Ts u̇max = ∆umax
• Hence, when changing the sampling time from T1 to T2, one can keep the same MPC cost function by leaving W y, W u unchanged and simply rescaling
W2∆u = W u̇/T2 = (T1/T2) · (W u̇/T1) = (T1/T2) W1∆u,   ρϵ,2 = (T1/T2) ρϵ,1
• Example:
• Ideally all variables should be in [−1, 1]. For example, one can replace y with
y/ymax
H = S̄′ Q̄ S̄ + R̄, where S̄ = [ B 0 . . . 0 ; AB B . . . 0 ; . . . ; A^{N−1}B A^{N−2}B . . . B ], Q̄ = diag(Q, . . . , Q, P), R̄ = diag(R, . . . , R)
[diagram: observer produces the state estimate from measurements]
• Full state x(t) of process may not be available, only outputs y(t)
• Even if x(t) is available, noise should be filtered out
• Prediction and process models may be quite different
• The state x(t) may not have any physical meaning
(e.g., in case of model reduction or subspace identification)
[diagram: white noise nm(t) → measurement noise model → m(t), added to zm(t) to form the measured outputs ym(t)]
[x(t+1); xd(t+1); xm(t+1)] = [A  Bd C̄  0; 0  Ā  0; 0  0  Ã] [x(t); xd(t); xm(t)] + [Bu; 0; 0] u(t) + [Bv; 0; 0] v(t) + [Bd D̄; B̄; 0] nd(t) + [0; 0; B̃] nm(t) + [Bu; 0; 0] nu(t)

ym(t) = [Cm  Ddm C̄  C̃] [x(t); xd(t); xm(t)] + Dvm v(t) + D̄m nd(t) + D̃m nm(t)
• Time update: treat u(t − 1) as an extra state
• Not an issue for unmeasured outputs
[figure: intersample timing — yk−1, uk−1, uk over (k−1)Ts, kTs, (k+1)Ts]
Integral action in MPC
Output integrators
[diagram: plant with manipulated variables u(t) and measured disturbances v(t); output integrators driven by white noise ni(t) act on the controlled variables; a measurement noise model driven by white noise nm(t) adds m(t) to form the measured outputs ym(t); unmeasured outputs yu(t)]
• Use the extended model to design a state observer (e.g., Kalman filter) that estimates both the state x̂(t) and the disturbance d̂(t) from y(t)
Theorem
Let the number nd of disturbances be equal to the number ny of measured outputs.
Then all pairs (Bd , Dd ) satisfying
rank [ I − A  −Bd ; C  Dd ] = nx + ny
guarantee zero offset in steady-state (y = r), provided that constraints are not
active in steady-state and the closed-loop system is asymptotically stable.
• Add an input integrator: Bd = B, Dd = 0, rank [ I − A  −Bd ; C  Dd ] = 5 = nx + ny
x(t + 1) = Ax(t) + B(u(t) + d(t))
d(t + 1) = d(t)
y(t) = Cx(t) + 0 · d(t)
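The rank condition of the theorem can be checked numerically for this input-disturbance model; the A, B, C below are illustrative (nx = 2, ny = 1, so the rank must equal 3):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
nx, ny = 2, 1

Bd = B                        # input disturbance: x+ = Ax + B(u + d)
Dd = np.zeros((ny, 1))        # d does not feed through to y
M = np.block([[np.eye(nx) - A, -Bd], [C, Dd]])
rank = np.linalg.matrix_rank(M)   # = nx + ny -> zero offset in steady state
```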
• Idea: add the integral of the tracking error as an additional state (original idea
developed for integral action in state-feedback control)
[diagram: controller with internal integrator in closed loop with the process; estimated plant + controller states x(t)]
u(t) = B(z) / ((z − 1) A(z)) · y(t),   B(1) ≠ 0
• One may think that the ∆u-formulation of MPC provides integral action ...
... is it true ?
• Example: we want to regulate to zero the output y(t) of the scalar system
• Since x(t) = y(t) and u(t) = u(t − 1) + ∆u(t) we get the linear controller
u(t) = − (ρβα / (1 + ρβ²)) · z / (z − 1/(1 + ρβ²)) · y(t)   → no pole in z = 1
• Reason: MPC gives a feedback gain on both x(t) and u(t − 1), not just on x(t)
ans = 0.9000
ans = -6.4185e-16
xr(t + 1) = Ar xr(t)
r(t) = Cr xr(t)
• Design a state observer (e.g., via pole-placement) to estimate x(t), xr(t) from
[y(t); r(t)] = [C 0; 0 Cr] [x(t); xr(t)]
• Design an MPC controller with output ek = yk − rk = [C −Cr] [xk; xrk]
input
2
-1
A= 1 0 0
-3
h 0.50i 0.125 0
B= 0
0
C = [0.1349 0.0159 −0.1933]
observer poles placed in {0.3, 0.4, 0.5, 0.6, 0.7}
Ar = [1.7552 −1; 1 0], Cr = [0.2448 0.2448]
sample time Ts = 0.5 s
input saturation −2 ≤ u(t) ≤ 2
– …
• Prediction models can be generated by the Identification Toolbox or
automatically linearized from Simulink
>> mpclib
ẋ = [ 0  1  0  0 ;
      −kθ/JL  −βL/JL  kθ/(ρJL)  0 ;
      0  0  0  1 ;
      kθ/(ρJM)  0  −kθ/(ρ²JM)  −(βM + kT²/R)/JM ] x + [ 0; 0; 0; kT/(RJM) ] V

x = [θL; θ̇L; θM; θ̇M]

y = [θL; T],  with θL = [1 0 0 0] x and T = [kθ 0 −kθ/ρ 0] x
|V | ≤ 220 V
• Finite shear strength τadm = 50 N/mm² requires that the torsional torque T satisfies the constraint |T| ≤ 78.5398 Nm
• Sampling time of model/controller: Ts = 0.1s
>> MV = struct('Min',-220,'Max',220);
>> OV = struct('Min',{-Inf,-78.5398},'Max',{Inf,78.5398});
>> Ts = 0.1;
min_{∆U} Σ_{k=0}^{p−1} ∥W y (yk+1 − r(t))∥² + ∥W ∆u ∆uk∥² + ρϵ ϵ²
[diagram: DC motor with voltage V, resistance R, inertia JM, friction bM, driving a load inertia JL with friction bL through a gear ratio ρ and a flexible shaft transmitting torque T]
closed-loop simulation in Simulink
[plots: closed-loop simulation results over 0–20 s]
[Simulink diagram: vehicle body model — the pitch moment My passes through 1/Iyy (body inertia) and two integrators to give Thetadot and Theta; the vertical force passes through 1/Mb (body mass) and two integrators to give Zdot and Z; gravity −9.81; front and rear pitch moments; MPC block with inputs mo, mv, md and reference, plus unit-conversion gains 100, 1000, 1/1000]
[plots: open-loop vs. closed-loop responses over 0–10 s, including vertical velocity ż (cm/s)]
[diagram: MPC computes u(t) from the state estimate x̂(t) and the reference r(t)]
i.e., the MPC law coincides with Kd when the constraints are inactive
• Recall that the QP matrices are H = 2(R̄ + S̄′Q̄S̄), F = 2S̄′Q̄T̄, where
Q̄ = diag(Q, Q, . . . , Q, P), R̄ = diag(R, R, . . . , R)
S̄ = [ B 0 . . . 0 ; AB B . . . 0 ; . . . ; A^{N−1}B A^{N−2}B . . . B ], T̄ = [ A; A²; . . . ; A^N ]
[plots: y, u1, y2 over 0–20 s — matched MPC vs. mismatched model: instability!]
• Linear prediction model:
xk+1 = Axk + Buk
yk = Cxk
x ∈ Rn, u ∈ Rm, y ∈ Rp
• Constraints to enforce:
umin ≤ u(t) ≤ umax
ymin ≤ y(t) ≤ ymax
ymin ≤ yk ≤ ymax , k = 1, . . . , N
ϵxk ≥ ∥Qxk∥∞  ⇔  ϵxk ≥ Qi xk and ϵxk ≥ −Qi xk, i = 1, . . . , n
ϵuk ≥ ∥Ruk∥∞  ⇔  ϵuk ≥ Ri uk and ϵuk ≥ −Ri uk, i = 1, . . . , m
(for k = 0, . . . , N − 1)
ϵxN ≥ ∥P xN∥∞  ⇔  ϵxN ≥ P i xN and ϵxN ≥ −P i xN, i = 1, . . . , n
• Example: min_x |x|  ⇔  min_{x,ϵ} ϵ s.t. ϵ ≥ x, ϵ ≥ −x
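The same epigraph trick in code, on a made-up one-dimensional instance (min |x − 3|, shifted so the optimum is nontrivial; SciPy's linprog assumed):

```python
import numpy as np
from scipy.optimize import linprog

# min |x - 3| rewritten as min eps s.t. eps >= x-3, eps >= -(x-3)
# decision variables: [x, eps]
c = np.array([0.0, 1.0])
A_ub = np.array([[1.0, -1.0],    #  x - eps <= 3
                 [-1.0, -1.0]])  # -x - eps <= -3
b_ub = np.array([3.0, -3.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (0, None)])
```

The LP returns x = 3 with ϵ = 0, i.e. the minimizer of |x − 3|.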
• Optimization vector: z ≜ [ϵu0 . . . ϵu_{N−1} ϵx1 . . . ϵxN u′0 . . . u′_{N−1}]′ ∈ R^{N(nu+2)}
• Optimization problem:
V(x0) = min_z [1 . . . 1 0 . . . 0] z   (linear objective)
s.t. gk(xk, uk) ≤ 0, k = 0, . . . , N − 1
gN(xN) ≤ 0
min_{ϵ,x} ϵ
s.t. ϵ ≥ a′1 x + b1, ϵ ≥ a′2 x + b2, ϵ ≥ a′3 x + b3, ϵ ≥ a′4 x + b4
[figure: epigraph of the max of four affine functions in the (x, ϵ) plane]
• Small-signal response, however, is usually less smooth with LP than with QP,
because in LP an optimal point is always on a vertex
[figure: same feasible set — the LP optimizer always lies on a vertex, while the QP optimizer may lie anywhere]
Gz ≤ W + Sx   →   Ḡz + Ēϵ ≤ W̄ + S̄x
ϵ = additional slack variables introduced to represent convex PWA costs