
Survey Paper:

Optimization in Model Predictive Control


M. A. Abbas
FEAS, UOIT, Canada.

Abstract— This paper presents a comprehensive survey of different optimization methods used in Model Predictive Control (MPC) systems. We discuss optimization methods for both linear and non-linear systems, describe the advantages and disadvantages of each method, and identify the specific conditions in which each method should be used. The state of the art in this field is described together with current and future research trends.

I. INTRODUCTION

Control theory is a branch of engineering and mathematics which deals with the behaviour of dynamical systems [1]. There are many control strategies in use today, such as intelligent control, adaptive control, stochastic control, optimal control, etc. Optimal control is a control technique in which we minimize a certain cost index to achieve the desired performance. The two types of optimal control techniques are

• Linear Quadratic Gaussian (LQG)
• Model Predictive Control (MPC)

In this survey we consider optimization problems in the Model Predictive Control technique only, because it is the most widely used in industry, as opposed to LQG, which was termed a failure. The reasons cited for this failure are [2], [3]:

• constraints
• process nonlinearities
• model uncertainty (robustness)
• unique performance criteria
• cultural reasons (people, education, etc.)

A. Brief History

The history of optimal control can be traced back to the 1960s, when two ground-breaking papers by Kalman appeared [4], [5]. These papers were in fact the first to introduce the algorithm for computing the state feedback gain of the optimal controller for a linear system with a quadratic performance criterion. Kalman introduced the notions of controllability and observability, and their exploitation in the regulator problem, which is considered the principal contribution of these papers. They had a significant effect on researchers working in the field of optimal control. It was this development of the Linear Quadratic Gaussian (LQG) controller that later led to the development of Model Predictive Control theory. MPC is basically a form of LQG controller with an added finite prediction horizon and constraint handling:

Unconstrained, infinite-horizon linear MPC = simple LQG.

Until the 1970s MPC was only used for plants with slow dynamics. It was widely applied in the petro-chemical and related industries, where satisfaction of constraints is particularly important because efficiency demands operating points on or close to the boundary of the set of admissible states and controls. One of the primary advantages of this technique is its explicit capability to handle constraints. However, the fact that the optimization procedure has to be repeated every time step is the reason that the application of MPC was limited to systems with slow dynamics in the process industry until recently. The boom in MPC started in the 1990s, when faster computers became available together with the rapid development of optimization algorithms. These days MPC is applied to various types of plants with fast dynamics, such as airplanes, satellites, robotics, automotives, etc.

B. Purpose of this survey

With our discussion above, many theoretical issues arise in MPC from the application of the control law. One of the major issues in model predictive control is finding the appropriate optimization algorithm to be employed in order to reduce future errors. In our survey, we focus on these varieties of optimization methods. This is, however, a general survey covering common optimization techniques used in MPC. Due to space limitations, we do not go into the details of each algorithm; rather, we touch on many of them with little detail and focus on general trends and the methodology employed.

C. Organization of paper

This paper is organized in 6 sections. Section I provides an introduction to the problem at hand, whereas Section II formulates the problem mathematically. In Sections III and IV we survey various optimization methods for linear and nonlinear MPC respectively. Section V details some practical implementations of MPC in industrial processes. In Section VI we conclude our discussion and predict some future research trends.

II. PROBLEM FORMULATION

Model predictive control (MPC) algorithms utilize an explicit process model to predict the future response of a plant. At each control interval an MPC algorithm attempts to optimize future plant behaviour by computing a sequence of future manipulated variable adjustments. The first input in the optimal sequence is then sent into the plant, and the entire calculation is repeated at subsequent control intervals. Thus the optimizer solves an optimization problem during each sampling interval of the controller. Figure 1 shows the operation of MPC.

Fig. 1. MPC operation concept
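To make the receding-horizon operation in Figure 1 concrete, the following minimal Python sketch shows one possible structure of the loop just described. The toy plant model, cost function, and the grid-search solve_finite_horizon placeholder are assumptions made for illustration, not part of the original paper; in practice the placeholder would be replaced by one of the QP/NLP solvers surveyed in Sections III and IV.

```python
import numpy as np

def solve_finite_horizon(x, horizon, plant_model, cost):
    """Placeholder optimizer: grid-search a constant input over the horizon.
    In practice this is replaced by the QP/NLP solvers surveyed in Sections III-IV."""
    best_u, best_cost = 0.0, np.inf
    for u in np.linspace(-1.0, 1.0, 41):      # admissible input levels (assumed bounds)
        xk, J = x, 0.0
        for _ in range(horizon):
            xk = plant_model(xk, u)           # predict with the explicit model
            J += cost(xk, u)                  # accumulate stage cost
        if J < best_cost:
            best_u, best_cost = u, J
    return np.full(horizon, best_u)           # input sequence over the horizon

# Receding-horizon loop: only the first input of each computed sequence is applied.
plant_model = lambda x, u: 0.9 * x + 0.5 * u  # toy first-order plant (assumption)
cost = lambda x, u: x ** 2 + 0.1 * u ** 2
x = 1.0
for t in range(20):
    u_seq = solve_finite_horizon(x, horizon=10, plant_model=plant_model, cost=cost)
    x = plant_model(x, u_seq[0])              # apply the first move, measure, repeat
```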
Mathematically, if we have Y = process output and u = controller input,

Y_desired(t) = a_1 u(t − h) + a_2 u(t − 2h) + ... + b_1 y(t − h) + b_2 y(t − 2h) + ...   (1)

Y_predicted(t) = a'_1 u(t − h) + a'_2 u(t − 2h) + ... + b'_1 y(t − h) + b'_2 y(t − 2h) + ...   (2)

where h is the time step. Equation (1) describes the desired input/output behaviour of our system to achieve its goals within its operational limits. Meanwhile, equation (2) empirically predicts the system's future behaviour based on the system's past inputs u(t − h) and outputs y(t − h). The error between the predicted and desired output can be calculated as

e(t) = Y_predicted(t) − Y_desired(t)   (3)

U_optimized(t) = c e(t)   (4)

Equation (4) scales the error by c and predicts the calculated optimized input U_optimized(t) based on previous calculations. Equations (1), (2), (3) and (4) describe system behaviour for one future time step. We now iterate the four equations for times t + h, t + 2h, ... to predict the system's future I/O profile, similar to that shown in Figure 1. By this iteration we also get the associated errors e(t), e(t + h), e(t + 2h), ... for each time step. To optimize the system, i.e., minimize the deviation of the predicted output of the system from the desired behaviour, we need to define the system's future residual function

Residual = e²(t + ih),  i = 0, 1, ..., n   (5)

Our objective is to use appropriate optimization algorithms and find future inputs that minimize this residual, i.e., find min(Residual). With the value of c well defined, our optimized input U_optimized is now well defined. The whole optimization procedure is depicted in Figure 2.

Fig. 2. MPC Optimization Flow

To probe further, we divide our discussion into two sections:

• Linear Model Predictive Control
• Non-Linear Model Predictive Control

Both linear and nonlinear systems have specific problem statements and utilize different optimization methods, so it is important to describe each of them separately. Below is a brief discussion of each of these types of MPC and the optimization methods used for each.

III. LINEAR MPC

A. Problem Formulation

Consider a linear time-invariant, discrete-time system described by the following set of equations:

x_{t+1} = A x_t + B u_t   (6)
y_t = C x_t   (7)

subject to the following constraints

y_min ≤ y_t ≤ y_max   (8)
u_min ≤ u_t ≤ u_max   (9)

where x_t ∈ R^n, u_t ∈ R^m and y_t ∈ R^p are the state, input and output vectors respectively, and the subscripts min and max denote lower and upper bounds respectively.
Generally, our objective in linear MPC is the minimization of a cost function of the form

J = x' Q x + u' R u   (10)

The majority of industrial MPC applications use linear empirical models; therefore most MPC products and optimization algorithms are based on this model type.

B. Optimization Methods

The main challenge in MPC is to find the fastest way of optimizing, as the time available for solving the on-line optimization is very limited; we need a real-time optimal solution. Sometimes we find a trade-off by looking for a suboptimal solution which is less complex. Constraints are linear, and exact solution methods are well studied for linear MPC. The optimal solution relies on a linear dynamic model of the process, respects all input and output constraints, and minimizes a performance figure. This is usually expressed as a quadratic or a linear criterion, so that the resulting optimization problem can be cast as a quadratic program (QP) or linear program (LP), respectively, for which a rich variety of efficient active-set and interior-point solvers are available [10].

1) Linear Programming (LP): A few authors have investigated MPC optimization based on linear programming [6], [7], [8]. If we have an objective function of the form

min J = min ( ||P x_N||_∞ + Σ_{k=0}^{N−1} [ ||Q x_k||_∞ + ||R u_k||_∞ ] )   (11)

subject to the constraints

G z ≤ W + S x(t)

then the MPC law can be defined by the solution of a linear program [9]. Schechter [9] proved that this is true for any sum of convex piecewise affine costs.

2) Algebraic Method: Consider the following objective function:

min J = x'_{t+Ny|t} P x_{t+Ny|t} + Σ_{k=0}^{Ny−1} [ x'_{t+k|t} Q x_{t+k|t} + u'_{t+k} R u_{t+k} ]   (12)

subject to the constraints

y_min ≤ y_{t+k|t} ≤ y_max,  k = 1, 2, ..., Nc
u_min ≤ u_{t+k} ≤ u_max,  k = 0, 1, ..., Nc

and the system dynamics

x_{t+k+1|t} = A x_{t+k|t} + B u_{t+k},  k ≥ 0,
y_{t+k|t} = C x_{t+k|t},  k ≥ 0,
u_{t+k} = K x_{t+k|t},  Nu ≤ k ≤ Ny,

where the matrices Q = Q' ≥ 0, R = R' ≥ 0 and P ≥ 0, and Nu, Ny, Nc are the input horizon, output horizon and constraint horizon respectively, such that Ny ≥ Nu and Nc ≤ Ny − 1. We solve this problem (12) repeatedly at each time t for the current measurement x_t and the predicted state variables x_{t+1|t}, ..., x_{t+k|t} at time steps t + 1, ..., t + k, obtaining the corresponding optimal control actions U* = {u*_t, ..., u*_{t+k−1}}. The first predicted input is applied to the system as the first control action, i.e., u_t = u*_t. This procedure is repeated at time t + 1 based on the new state x_{t+1}.

The tuning cost function matrix P and the state feedback gain K are generally used to guarantee closed-loop stability of system (12). The algebraic solution of this system depends upon finding the values of the P and Q matrices. P is found by the solution of the discrete Lyapunov equation

P = A' P A + Q.

Assuming the problem is unconstrained with infinite horizon, i.e., Nc = Nu = Ny = ∞, we can find the state feedback gain K by solving the algebraic Riccati equation:

K = −(R + B' P B)^{−1} B' P A,
P = (A + B K)' P (A + B K) + K' R K + Q.

Solution of the Lyapunov and algebraic Riccati equations is the most popular method to find the values of the K and P matrices [13], [14], thus solving the problem algebraically.

3) Quadratic Programming (QP): Rawlings and Morari [10], [11], [12] proved that linear MPC can be posed as a Quadratic Programming (QP) problem. If we incorporate the relation

x_{t+k|t} = A^k x_t + Σ_{j=0}^{k−1} A^j B u_{t+k−1−j}

into the system represented by the set of equations (12), then it gives us the following quadratic programming (QP) optimization problem [18]:

J*(x_t) = min_U { (1/2) U' H U + x'_t F U + (1/2) x'_t Y x_t }   (13)

subject to the constraints

G U ≤ W + E x_t

where U ≜ [u'_t, ..., u'_{t+Nu−1}]' ∈ R^s, with s ≜ m Nu, is the vector of optimization variables, H = H' > 0, and the matrices H, F, Y, G, W, E are obtained from the state constraint matrix S and the input matrix R. MPC is applied by solving the QP problem (13) repeatedly at each time t ≥ 0 for the current state value x_t. Despite the fact that efficient QP solvers are available, computing the input u_t online may require significant computational effort [15].
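The following numpy sketch illustrates the condensing step behind the QP (13): it stacks the prediction x_{t+k|t} = A^k x_t + Σ A^j B u_{t+k−1−j} into matrices, forms H and F, and then solves the unconstrained problem in closed form. The numerical data are illustrative assumptions; a real implementation would hand H, F and the constraints G U ≤ W + E x_t to a QP solver instead of simply inverting H.

```python
import numpy as np

def condense(A, B, Q, R, P, N):
    """Stack x_{t+k|t} = A^k x_t + sum_{j<k} A^{k-1-j} B u_{t+j} into Sx, Su and
    form H, F so that J = 0.5 U' H U + x_t' F U (+ const), matching (13)."""
    n, m = B.shape
    Sx = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    Su = np.zeros((N * n, N * m))
    for k in range(1, N + 1):
        for j in range(k):
            Su[(k - 1) * n:k * n, j * m:(j + 1) * m] = np.linalg.matrix_power(A, k - 1 - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Qbar[-n:, -n:] = P                      # terminal weight on x_{t+N|t}
    Rbar = np.kron(np.eye(N), R)
    H = 2.0 * (Su.T @ Qbar @ Su + Rbar)
    F = 2.0 * (Sx.T @ Qbar @ Su)
    return H, F

# Illustrative data (not taken from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R, P, N = np.eye(2), np.array([[0.1]]), np.eye(2), 10

H, F = condense(A, B, Q, R, P, N)
x_t = np.array([1.0, 0.0])
U = -np.linalg.solve(H, F.T @ x_t)          # unconstrained minimizer of (13)
u_first = U[:B.shape[1]]                    # receding horizon: apply only the first input
```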
4) Multi-Parametric Quadratic Programming (mp-QP): In MPC our goal is to reduce the online optimization time, because the system operates in real time. These days, substantial research is being carried out to find more efficient optimization algorithms. Bemporad et al. [13], [16], [17] solved problem (12) by multiparametric quadratic programming (mp-QP), which avoids repetitive online optimization. They transformed the QP problem (13) into a multiparametric optimization problem by using the following linear transformation:

z ≜ U + H^{−1} F' x_t

where z ∈ R^s is the vector of optimization variables. Then the QP problem (13) can be written as the following mp-QP problem:

V_z(x_t) = min_z (1/2) z' H z   (14)

subject to the constraints

G z ≤ W + S x_t

where x_t is the vector of parameters and

S = E + G H^{−1} F'.

The advantage obtained from such a formulation is that x_t only appears on the right-hand side of the constraints and not in the objective function, as opposed to Equation (13), where the state vector x_t appears on the right-hand sides of both the constraints and the objective function. Thus, in Equation (14), z can be obtained as an affine function of x over the complete feasible space of x [15]. Sakizlis et al. [15] proved that the set of feasible parameters X_f ⊆ X is convex, the optimal solution z(x) : X_f → R^s is continuous and piecewise affine, and the optimization objective function V_z(x) : X_f → R is continuous, convex, and piecewise quadratic.

According to our survey, the optimization methods can be tabulated as in Table I.

TABLE I
ADVANTAGES / DISADVANTAGES OF OPTIMIZATION METHODS USED IN LINEAR MPC

              Algebraic   LP       QP      Constrained QP
Difficulty    Small       Medium   Large   Larger
Optimization  No          Yes      Better  Better
Constraints   No          Yes      No      Yes

Normally, limits on storage space or computation time restrict the applicability of model predictive controllers (MPC) in many real problems. Morari et al. [19] introduced a new approach combining the two paradigms of explicit and online MPC to overcome their individual limitations. The developed algorithm computes a piecewise affine approximation of the optimal solution that warm-starts an active-set linear programming procedure. A pre-processing method was introduced that provides hard real-time, stability and performance guarantees for the controller. In doing so, the researchers were able to trade off some optimization performance in order to obtain faster processing times. The advantage of this method is that it is easy to implement and makes online evaluation faster. The disadvantage is that, with an increasing number of system states, the number of controller regions may become large, making the algorithm difficult to implement.

IV. NONLINEAR MPC

Model predictive control (MPC), also referred to as moving horizon control or receding horizon control, has become an attractive feedback strategy, especially for linear processes [21]. Many systems are, however, inherently nonlinear. This, together with higher product quality specifications, increasing productivity demands, tighter environmental regulations and demanding economic considerations in the process industry, requires operating systems closer to the boundary of the admissible operating region. In these cases, linear models are often not good enough to describe the process dynamics, and nonlinear models have to be used. Fortunately, considerable progress has been achieved in the last decade that allows reducing both computational delays and approximation errors. This progress is made possible by the development of dedicated real-time optimization algorithms for NMPC and moving horizon estimation (MHE) that nowadays allow applying NMPC to plants with tens of thousands of states or to mechatronic applications. By now linear MPC is widely used in industrial applications (Qin and Badgwell; García et al.; Morari and Lee; Froisy) [22]. The basic structure of nonlinear MPC is depicted in Figure 3.

Fig. 3. Basic NMPC control loop [21]

The basic NMPC scheme works as follows:
1) Obtain measurements/estimates of the states of the system.
2) Compute an optimal input signal by minimizing a given cost function over a certain prediction horizon in the future, using a model of the system.
3) Implement the first part of the optimal input signal until new measurements/estimates of the state are available.
4) Continue with 1).

A. Problem Formulation

Consider a nonlinear discrete-time dynamic system described by the following set of equations [23]:

x_k = f_{ts}(x_{k−1}, u_{k−1})
y_{k|k−1} = g(x_k)

subject to the following constraints

U = { u ∈ R^m | u_min ≤ u ≤ u_max }
X = { x ∈ R^n | x_min ≤ x ≤ x_max }

where x_k is a vector of states, u_k is the vector of manipulated inputs, and y_k is a vector of outputs; t_s is the sample time, and the k|k−1 subscript notation is used to indicate the prediction at step k based on measurements at step k − 1. Here u_min, u_max and x_min, x_max are given constant vectors.
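As a rough illustration of step 2) of the NMPC scheme above, the sketch below solves one finite-horizon subproblem by direct single shooting with scipy.optimize.minimize, respecting only the input bounds of the set U. The model f, output map g, horizon, reference and weights are assumptions made for the example and stand in for the plant-specific quantities of the formulation above.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed nonlinear model x_k = f(x_{k-1}, u_{k-1}) and output map y = g(x).
f = lambda x, u: np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * (u - np.sin(x[0]))])
g = lambda x: x[0]
N, u_min, u_max = 15, -2.0, 2.0        # horizon and input bounds (the set U)
y_ref = 1.0                            # assumed output reference

def cost(u_seq, x_init):
    """Stage cost summed along the trajectory predicted by single shooting."""
    x, J = x_init, 0.0
    for u in u_seq:
        x = f(x, u)
        J += (g(x) - y_ref) ** 2 + 0.01 * u ** 2
    return J

x_init = np.zeros(2)
res = minimize(cost, x0=np.zeros(N), args=(x_init,),
               bounds=[(u_min, u_max)] * N, method="L-BFGS-B")
u_apply = res.x[0]                     # receding horizon: apply only the first input
```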
The error in the model is calculated by the equation

d_k = y_k − y_{k|k−1}

so the objective is to minimize this error as much as possible to get the optimal output, which is given by the equations

x_{k+1} = f_{ts}(x_k, u_k)
y_{k+1|k} = g(x_{k+1}) + d_k

Based on this dynamic system form, the following simplified optimal control problem in discrete time is given by the set of equations

minimize_{x,z,u}  Σ_{i=0}^{N−1} L_i(x_i, z_i, u_i) + E(x_N)   (15)

subject to

x_0 − x̄_0 = 0   (16)
x_{i+1} − f_i(x_i, z_i, u_i) = 0,  i = 0, ..., N − 1,   (17)
g_i(x_i, z_i, u_i) = 0,  i = 0, ..., N − 1,   (18)
h_i(x_i, z_i, u_i) ≤ 0,  i = 0, ..., N − 1,   (19)
r(x_N) ≤ 0.   (20)

B. Optimization Methods for Nonlinear MPC

There are many methods to solve these problems; in this survey we cover two methods for solving NMPC problems:

• Newton-type optimization
• Numerical method

1) Newton-type optimization: Newton's method for the solution of a nonlinear equation R(W) = 0 starts with an initial guess W^0 and generates a series of iterates W^k, each of which solves a linearization of the system at the previous iterate, i.e., for a given W^k the next iterate W^{k+1} shall satisfy

R(W^k) + ∇R(W^k)' (W^{k+1} − W^k) = 0.

Newton's method locally has a quadratic convergence rate, which is as fast as would make any numerical analyst happy [22]. If the Jacobian ∇R(W^k)' is not computed or inverted exactly, this leads to slower convergence rates but cheaper iterations, and gives rise to the larger class of "Newton-type methods". A good overview of the field is given in [24].

The NMPC problem as stated above is a specially structured form of a generic nonlinear program of the form

minimize_X F(X)  s.t.  G(X) = 0,  H(X) ≤ 0,

whose optimal solution X* must satisfy the famous Karush-Kuhn-Tucker (KKT) conditions:

∇_X L(X*, λ*, µ*) = 0   (21)
G(X*) = 0   (22)
0 ≥ H(X*) ⊥ µ* ≥ 0   (23)

Here we have used the definition of the Lagrange function

L(X, λ, µ) = F(X) + G(X)' λ + H(X)' µ

and the symbol ⊥ between the two vector-valued inequalities in (23) indicates that the complementarity condition should also hold.

All Newton-type optimization methods linearize the problem functions, and for this they use Sequential Quadratic Programming (SQP) type methods.

a) Sequential Quadratic Programming: The first step in solving the KKT system is to linearize all nonlinear functions appearing in (21)-(23), which yields the quadratic programming (QP) subproblem

minimize_X F^k_QP(X)  s.t.  G(X^k) + ∇G(X^k)'(X − X^k) = 0,  H(X^k) + ∇H(X^k)'(X − X^k) ≤ 0

with objective function

F^k_QP(X) = ∇F(X^k)' X + (1/2)(X − X^k)' ∇²_X L(X^k, λ^k, µ^k)(X − X^k)   (24)

∇²_X L(X^k, λ^k, µ^k) is called the Hessian matrix; if it is positive semidefinite, this QP is convex, so that a global solution can be found reliably. This general approach to addressing the nonlinear optimization problem is called Sequential Quadratic Programming (SQP).

b) Powell's Classical SQP Method: One of the most successfully used SQP variants is due to Powell [25]. It uses exact constraint Jacobians, but replaces the Hessian matrix ∇²_X L(X^k, λ^k, µ^k) by an approximation A_k. Each new Hessian approximation A_{k+1} is obtained from the previous approximation A_k by an update formula that uses the difference of the Lagrange gradients,

γ = ∇_X L(X^{k+1}, λ^{k+1}, µ^{k+1}) − ∇_X L(X^k, λ^{k+1}, µ^{k+1})   (25)

The aim of these Quasi-Newton or variable-metric methods is to collect second-order information in A_{k+1} by satisfying the secant equation

A_{k+1} σ = γ,

where σ = X^{k+1} − X^k. The most widely used update formula is the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update [26]:

A_{k+1} = A_k + (γ γ') / (γ' σ) − (A_k σ σ' A_k) / (σ' A_k σ).

Quasi-Newton methods converge superlinearly under mild conditions and have had a tremendous impact in the field of nonlinear optimization. Successful implementations are the packages NPSOL and SNOPT for general NLPs [27], and MUSCOD-II [28] for optimal control.
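A small numpy helper for the BFGS update above is shown below; the curvature safeguard that skips the update when γ'σ ≤ 0 is an implementation detail added here, not something prescribed by the text, and the toy usage data are assumptions.

```python
import numpy as np

def bfgs_update(A_k, sigma, gamma, eps=1e-12):
    """BFGS update A_{k+1} = A_k + (g g')/(g' s) - (A_k s s' A_k)/(s' A_k s),
    with s = sigma = X_{k+1} - X_k and g = gamma from (25)."""
    gs = float(gamma @ sigma)
    if gs <= eps:                      # safeguard: keep A_k if curvature is not positive
        return A_k
    As = A_k @ sigma
    return A_k + np.outer(gamma, gamma) / gs - np.outer(As, As) / float(sigma @ As)

# Toy check: for a quadratic with Hessian Q, gamma = Q @ sigma, and A_k approaches Q.
Q = np.diag([2.0, 5.0])
A = np.eye(2)
for _ in range(20):
    step = 0.1 * np.random.randn(2)    # plays the role of sigma
    A = bfgs_update(A, step, Q @ step)
print(A)
```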
c) Constrained Gauss-Newton Method: Another particularly successful SQP variant, the Constrained (or Generalized) Gauss-Newton method, is also based on approximations of the Hessian. It is applicable when the objective function is a sum of squares:

F(X) = (1/2) ||R(X)||²₂

For this case the Hessian approximation is defined as

A_k = ∇R(X^k) ∇R(X^k)'

and the corresponding QP objective is defined as

F^k_QP(X) = (1/2) ||R(X^k) + ∇R(X^k)'(X − X^k)||²₂

The constrained Gauss-Newton method has only linear convergence, but often with a surprisingly fast contraction rate. Newton-type SQP methods that use approximations of the Hessian as well as of the constraint Jacobians were analysed in [31], [29]. Such a method uses approximations A_k, B_k, C_k of the Hessian and constraint Jacobian matrices already defined in the SQP, together with a so-called "modified gradient"

a_k = ∇_X L(X^k, λ^k, µ^k) − B_k λ^k − C_k µ^k   (26)

The QP objective is now defined as

F^k_adjQP(X) = a'_k X + (1/2)(X − X^k)' A_k (X − X^k)

and this is the QP which is solved in each iteration:

minimize_X F^k_adjQP(X)  s.t.  G(X^k) + B'_k(X − X^k) = 0,  H(X^k) + C'_k(X − X^k) ≤ 0

It can be shown that using the modified gradient a_k allows the iteration to converge locally to solutions of the original nonlinear NLP even in the presence of inexact constraint Jacobians [29], [30], [31].

2) Numerical Method: When Newton-type optimization is applied to the optimal control problem (15)-(20) using the sequential approach, where all state variables x, z are eliminated and the optimization routine only sees the control variables u, the specific optimal control problem structure plays a minor role. Thus, often an off-the-shelf code for nonlinear optimization can be used. This makes practical implementation very easy and is a major reason why the sequential approach is used by many practitioners [32].

a) The Linearized Optimal Control Problem: Let us regard the linearization of the optimal control problem (15)-(20) within an SQP method, which is a structured QP. It turns out that, due to the dynamic system structure, the Hessian of the Lagrangian function has the same separable structure as the Hessian of the original objective function (15), so that the quadratic QP objective is still representable as a sum of linear-quadratic stage costs, which was first observed by Bock and Plitt [33]. Thus, the QP subproblem has the following form:

minimize_{x,z,u}  Σ_{i=0}^{N−1} L_{QP,i}(x_i, z_i, u_i) + E_QP(x_N)   (27)

subject to

x_0 − x̄_0 = 0   (28)
x_{i+1} − f_i − F_i^x x_i − F_i^z z_i − F_i^u u_i = 0,  i = 0, ..., N − 1,   (29)
g_i + G_i^x x_i + G_i^z z_i + G_i^u u_i = 0,  i = 0, ..., N − 1,   (30)
h_i + H_i^x x_i + H_i^z z_i + H_i^u u_i ≤ 0,  i = 0, ..., N − 1,   (31)
r + R x_N ≤ 0.

This partially reduced QP can be post-processed either by a condensing or a band-structure-exploiting strategy [32].

C. Advantages and Disadvantages of NMPC

In general one would like to use an infinite prediction and control horizon, to minimize the performance objective determined by the cost. However, solving a nonlinear optimal control problem over an infinite horizon is often computationally not feasible, so typically a finite prediction horizon is used. In this case the actual closed-loop input and state trajectories differ from the predicted open-loop trajectories, even if no model-plant mismatch and no disturbances are present. This can be explained by considering somebody hiking in the mountains without a map. The goal of the hiker is to take the shortest route to his goal. Since he is not able to see infinitely far (or up to his goal), the only thing he can do is to plan a certain route based on the current information (skyline/horizon) and then follow this route. After some time the hiker re-evaluates his route based on the fact that he might be able to see further. The new route obtained might be significantly different from the previous route, and he will change his route even though he has not yet reached the end of the previously considered route [21]. Basically, the same approach is employed in a finite-horizon NMPC strategy. At a recalculation instant the future is only predicted over the prediction horizon. At the next recalculation instant the prediction horizon moves further, thus allowing more information to be obtained and the plan to be revised.

V. MPC IN INDUSTRY

For complex constrained multivariable control problems, model predictive control (MPC) has become the accepted standard in the process industries [20]. So we found it worthwhile to extend our survey to the different practical applications and technologies being used in industry.

A. Applications

According to Badgwell [40] there are 4500+ successful industrial applications of linear MPC and 50+ applications of nonlinear MPC. In Table II some applications of MPC are tabulated, with emphasis on controller operating frequency, which is the most critical constraint in the development of industrial MPC.
TABLE II
MPC APPLICATIONS

Application                               Sampling Rate (Hz)   Company
Integrated room automation [34]           0.002                Siemens
Adaptive cruise control [35]              2                    Chrysler
Mechanical systems with backlash [36]     25                   –
Car automatic steering [37]               30                   Ford
Automotive hybrid traction control [38]   50                   Ford
Electronic throttle control [39]          200                  Ford
DC-DC voltage inverters [41]              10 × 10³             –
Induction motor torque control [42]       40 × 10³             ABB
DC-DC converters / power balance [43]     50 × 10³             STM

B. Commercial Technologies and Softwares

There are many versions of MPC software depending upon the developer group. These different versions are similar in principle but differ in implementation procedure, model type, objective function and optimization method used. Below we list some of the most popular MPC technologies used in industry and the optimization procedure used in each of them:

• IDCOM (Identification and Command) [44], 1987
  – Company: Setpoint, Inc., USA
  – Methodology: Model algorithmic control
  – Model Type: Impulse response, linear in inputs or internal variables
  – Optimization method: QP method
  – Other Features: Direct interface to Honeywell MPC systems, input and output constraints included in the formulation, 1st generation technology.

• DMC (Dynamic Matrix Control) [45], 1985
  – Company: Shell Co.
  – Methodology: Dynamic matrix control
  – Model Type: Step response
  – Optimization method: LP method
  – Other Features: Direct interface to Honeywell MPC systems, 1st generation technology.

• OPC (Optimum Predictive Control), 1987
  – Company: Treiber Controls, Inc.
  – Model Type: Step response
  – Optimization method: LP method
  – Other Features: Controller design and simulation can be performed on personal computers.

• PCT (Predictive Control Technology), 1994
  – Company: Profimatics, Inc.
  – Model Type: Combines aspects of IDCOM and DMC
  – Optimization method: Solves optimization for one control move only
  – Other Features: 3rd generation technology.

• HMPC (Horizon Multivariable Predictive Control) and RMPCT (Robust Model Predictive Control Technology), 1991
  – Company: Honeywell.
  – Other Features: Different from any other scheme; no data available due to proprietary reasons.

• DMC-plus and RMPCT, 2000
  – Company: Honeywell + Profimatics + Treiber Controls
  – Model Type: Linear and nonlinear
  – Optimization method: Multiple optimization methods depending on control objective
  – Other Features: 4th generation, cutting-edge technology in use today.

VI. CONCLUSION AND FUTURE RESEARCH TRENDS

In this paper, we presented a survey of different optimization methods used in linear and non-linear MPC. MPC technology has progressed steadily since its conception. With the availability of faster computing power, it has now become possible to implement better optimization methods (requiring immense computing power) for control systems. There is plenty of room to develop new algorithms and prove new facts in the MPC domain. Also, there is a need to extend current MPC implementations to new areas, because existing implementation domains are uneven. Many researchers are working to reduce computation time and find better solutions to the optimization problem. MPC is finding new applications on small-scale, fast loops as well as large-scale, networked systems. Specifically, researchers are working in the following fields:

• Reducing complexity of online optimization [47]
• Reducing complexity of the explicit solution (reducing the number of regions) [48]
• Combination of online and explicit off-line optimization
• MPC optimization for non-linear plant models
• Robustness of MPC
• MPC for stochastic systems [49]
• Adaptive MPC
• MPC for switched / hybrid systems
• MPC for hierarchical / decentralized structures

In the future we hope to see amazing developments to fill the vacuum in the field of optimal control systems.

REFERENCES

[1] Pierre-Alain Muller, Olivier Barais, "Control-theory and models at runtime," Lancaster University, Computing Department, [Online] Available: http://www.comp.lancs.ac.uk/~bencomo/MRT07/papers/MRT07 Muller Barais.pdf.
[2] García, C. E., Prett, D. M., Morari, M., "Model predictive control: Theory and practice - a survey," Automatica, 25(3), pp. 335-348, 1989.
[3] Richalet, J., Rault, A., Testud, J. L., Papon, J., "Algorithmic control of industrial processes," Proc. 4th IFAC Symposium on Identification and System Parameter Estimation, pp. 1119-1167, 1976.
[4] R.E. Kalman, "Contributions to the Theory of Optimal Control," 1960.
[5] R.E. Kalman, "A New Approach to Linear Filtering and Prediction Problems," 1960.
[6] T. S. Chang, D. E. Seborg, "A linear programming approach for multivariable feedback control with inequality constraints," Int. Journal of Control, 37(3), pp. 583-597, 1983.
[7] P.J. Campo, M. Morari, "Model predictive optimal averaging level control," AIChE Journal, 35(4), pp. 579-591, 1989.
[8] C.V. Rao, J.B. Rawlings, "Linear programming and model predictive control," J. Process Control, 10, pp. 283-289, 2000.
[9] M. Schechter, "Polyhedral functions and multiparametric linear programming," Journal of Optimization Theory and Applications, 53(2), pp. 269-280, May 1987.
[10] Alberto Bemporad, Francesco Borrelli, Manfred Morari, "Model Predictive Control Based on Linear Programming - The Explicit Solution," Tech. Report AUT01-06, 2001.
[11] D. Q. Mayne, J. B. Rawlings, C. V. Rao, P. O. M. Scokaert, "Constrained model predictive control: Stability and optimality," Automatica, 36, pp. 789-814, 2000.
[12] M. Morari, J.H. Lee, "Model predictive control: Past, present and future," Computers & Chemical Engineering, vol. 23, no. 4, pp. 667-682, 1999.
[13] Bemporad, A., Morari, M., Dua, V., Pistikopoulos, E. N., "The explicit linear quadratic regulator for constrained systems," Automatica, 38, pp. 3-20, 2002.
[14] M. Scokaert, J. B. Rawlings, "Constrained linear quadratic regulation," IEEE Transactions on Automatic Control, 43, pp. 1163-1169, 1998.
[15] Vassilis Sakizlis, Konstantinos I. Kouramas, Efstratios N. Pistikopoulos, "Linear Model Predictive Control via Multiparametric Programming," in Process Systems Engineering: Volume 2: Multi-Parametric Model-Based Control, Chapter 1, Wiley, March 2007.
[16] Bemporad, A., Morari, M., Dua, V., Pistikopoulos, E. N., "The Explicit Linear Quadratic Regulator for Constrained Systems," Tech. Rep. AUT99-16, Automatic Control Lab, ETH Zürich, Switzerland, 1999.
[17] E.N. Pistikopoulos, V. Dua, N.A. Bozinis, A. Bemporad, M. Morari, "On-line optimization via off-line parametric optimization tools," Computers and Chemical Engineering, vol. 26, no. 2, pp. 175-185, 2002.
[18] M. Sznaier, M. Damborg, "Suboptimal control of linear systems with state and control inequality constraints," Proceedings of the 26th IEEE Conference on Decision and Control, pp. 761-762, 1987.
[19] M.N. Zeilinger, C.N. Jones, M. Morari, "Real-time suboptimal Model Predictive Control using a combination of Explicit MPC and Online Computation," IEEE Conference on Decision and Control, IFA 3110, 2008.
[20] S.J. Qin, T.A. Badgwell, "An overview of industrial model predictive control technology," in Chemical Process Control, AIChE Symposium Series - American Institute of Chemical Engineers, volume 93, no. 316, pp. 232-256, 1997.
[21] Findeisen, Frank Allgower, "An Introduction to Nonlinear Model Predictive Control," Institute for Systems Theory in Engineering, University of Stuttgart, 70550 Stuttgart, Germany.
[22] Findeisen, Frank Allgower, "Nonlinear Model Predictive Control: A Sampled-Data Feedback Perspective," Institute for Systems Theory in Engineering, University of Stuttgart, 70550 Stuttgart, Germany.
[23] B. Wayne Bequette, "Non-Linear Model Predictive Control: A Personal Retrospective," Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, NY, U.S.A. 12180-3590.
[24] Deuflhard, Newton Methods for Nonlinear Problems, Springer, New York, 2004.
[25] Powell, M.J.D., "A fast algorithm for nonlinearly constrained optimization calculations," in: Watson, G.A. (ed.) Numerical Analysis, Dundee 1977, LNM, vol. 630, Springer, Berlin, 1978.
[26] Nocedal, J., Wright, S.J., Numerical Optimization, Springer, Heidelberg, 1999.
[27] Gill, P.E., Murray, W., Saunders, M.A., "SNOPT: An SQP algorithm for large-scale constrained optimization," Technical report, Numerical Analysis Report 97-2, Department of Mathematics, University of California, San Diego, La Jolla, CA, 1997.
[28] Leineweber, D.B., Bauer, I., Schafer, A.A.S., Bock, H.G., Schloder, J.P., "An efficient multiple shooting based reduced SQP strategy for large-scale dynamic process optimization (Parts I and II)," Computers and Chemical Engineering, 27, pp. 157-174, 2003.
[29] Bock, H.G., Diehl, M., Kostina, E.A., Schloder, J.P., "Constrained optimal feedback control of systems governed by large differential algebraic equations," Real-Time and Online PDE-Constrained Optimization, pp. 3-22, SIAM, Philadelphia, 2007.
[30] Diehl, M., Walther, A., Bock, H.G., Kostina, E., "An adjoint-based SQP algorithm with quasi-Newton Jacobian updates for inequality constrained optimization," Technical Report Preprint MATH-WR-02-2005, TU Dresden, 2005.
[31] Wirsching, L., "An SQP algorithm with inexact derivatives for a direct multiple shooting method for optimal control problems," Master's thesis, University of Heidelberg, 2006.
[32] Moritz Diehl, Hans Joachim Ferreau, Niels Haverbeke, "Efficient Numerical Methods for Nonlinear MPC and Moving Horizon Estimation," Nonlinear Model Predictive Control, Springer, LNCIS 384, pp. 391-417.
[33] Bock, H.G., Plitt, K.J., "A multiple shooting algorithm for direct solution of optimal control problems," Proceedings 9th IFAC World Congress, Budapest, pp. 243-247, Pergamon Press, Oxford, 1984.
[34] Frauke Oldewurtel, Dimitrios Gyalistras, Markus Gwerder, Colin N. Jones, Alessandra Parisio, Vanessa Stauch, Beat Lehmann, Manfred Morari, "Increasing Energy Efficiency in Building Climate Control using Weather Forecasts and Model Predictive Control," Automatic Control Laboratory, ETH Zurich, Zurich, Switzerland, 2008.
[35] Rainer Möbus, Mato Baotic, Manfred Morari, "Multi-object Adaptive Cruise Control," DaimlerChrysler Research and Technology, Assisting Systems (RIC/AA), 70546 Stuttgart, Germany, 2003.
[36] P. Rostalski, T. Besselmann, M. Barić, F. van Belzen, M. Morari, "A hybrid approach to modelling, control and state estimation of mechanical systems with backlash," International Journal of Control, vol. 80, no. 11, pp. 1729-1740, 2007.
[37] Th. Besselmann, M. Morari, "Hybrid Parameter-Varying MPC for Autonomous Vehicle Steering," European Journal of Control, vol. 14, no. 5, pp. 418-431, 2008.
[38] F. Borrelli, A. Bemporad, M. Fodor, D. Hrovat, "A Hybrid Approach to Traction Control," International Workshop on Hybrid Systems: Computation and Control, Roma, Italy, 2001.
[39] M. Vasak, M. Baotic, M. Morari, I. Petrovic, N. Peric, "Constrained optimal control of an electronic throttle," International Journal of Control, vol. 79, no. 5, pp. 465-478, 2006.
[40] S. Joe Qin, Thomas A. Badgwell, "A survey of industrial model predictive control technology," Control Engineering Practice, pp. 733-764, 2003.
[41] S. Mariéthoz, M. Herceg, M. Kvasnica, "Model Predictive Control of buck DC-DC converter with nonlinear inductor," IEEE COMPEL, Workshop on Control and Modeling for Power Electronics, Zurich, Switzerland, 2008.
[42] G. Papafotiou, T. Geyer, M. Morari, "A hybrid model predictive control approach to the direct torque control problem of induction motors," International Journal of Robust & Nonlinear Control, vol. 17, pp. 1572-1589, 2007.
[43] S. Mariéthoz, A.G. Beccuti, M. Morari, "Model Predictive Control of multiphase interleaved DC-DC converters with sensorless current limitation and power balance," IEEE PESC, Power Electronics Specialists Conf., Rhodes, Greece, pp. 1069-1074, 2008.
[44] Richalet, J., Rault, A., Testud, J. L., Papon, J., "Model predictive heuristic control: Applications to industrial processes," Automatica, 14, pp. 413-428, 1978.
[45] Cutler, C. R., Ramaker, B. L., "Dynamic matrix control - a computer control algorithm," in Proceedings of the Joint Automatic Control Conference, 1980.
[46] S. Joe Qin, Thomas A. Badgwell, "A survey of industrial model predictive control technology," Control Engineering Practice, pp. 733-764, 2003.
[47] Y. Wang, S. Boyd, "Fast model predictive control using online optimization," IEEE Transactions on Control Systems Technology, 18(2), pp. 267-278, March 2010.
[48] C.N. Jones, M. Baric, M. Morari, "Multiparametric Linear Programming with Applications to Control," European Journal of Control, vol. 13, no. 2-3, pp. 152-170, 2007.
[49] Y. Wang, S. Boyd, "Performance bounds for linear stochastic control," Systems and Control Letters, 58(3), pp. 178-182, 2009.
