
Jay H. Lee

School of Chemical and Biomolecular Engineering

Center for Process Systems Engineering

Georgia Inst. of Technology

Prepared for

Pan American Advanced Studies Institute Program on

Process Systems Engineering

Schedule

• Lecture 1: Introduction to MPC
• Lecture 2: Details of MPC Algorithm and Theory
• Lecture 3: Linear Model Identification

Lecture 1

Introduction to MPC

- Motivation

- History and status of industrial use of MPC

- Overview of commercial packages

Key Elements of MPC

• Formulation of the control problem as a (deterministic) optimization problem:

\[ \min_{u_i} \; \sum_{i=0} \phi_i(x_i, u_i) \qquad \text{s.t.}\quad g_i(x_i, u_i) \ge 0, \quad x_{i+1} = F(x_i, u_i) \]

  (the optimal feedback law \(u_0 = \mu(x_0)\) is characterized by the HJB equation)

• On-line optimization implementation (with feedback update):
  – Solve the optimization problem numerically
  – Implement solution \(u_0\) as the current move
  – Repeat!

Popularity of Quadratic Objective in Control

\[ \min \; \sum_{i=0}^{p} x_i^T Q x_i + \sum_{i=0}^{m-1} u_i^T R u_i \qquad \text{s.t.}\quad x_{k+1} = A x_k + B u_k, \quad y_k = C x_k \]

• Fairly general

– State regulation

– Output regulation

– Setpoint tracking

• Unconstrained linear least squares problem has an analytical

solution. (Kalman’s LQR)

• Solution is smooth with respect to the parameters

• Presence of inequality constraints → no analytical solution

Classical Process Control

\[ p = k_c \left( e(t) + \frac{1}{\tau_I} \int_0^t e(t')\,dt' + \tau_D \frac{de}{dt} \right) \]

Building blocks (ad hoc strategies, heuristics):
• PID controllers
• Min / max selectors
• If/then logic, sequence logic
• Other elements

Intended functions:
• Regulation
• Constraint handling
• Local optimization

Shortcomings:
• Switches inside the control algorithm; no clearly stated objective and constraints
• Inconsistent performance
• Complex control structure
• Not robust to changes and failures
• Focus on the performance of a local unit

Example 1: Blending System

• Control rA and rB

• Control q if possible

• Flowrates of additives are limited

(Figure: classical control solution for the blending system)

Model-Based Optimal Control

Plant model: \( \dot{x} = f(x,u), \quad y = g(x,u) \)

(Block diagram: set-point r(t) → Controller → input u(t) → Plant → output y(t))

\[ \min_{u_0,\ldots,u_{p-1}} \left\{ \sum_{i=0}^{p-1} \phi(x_i, u_i) + \phi_p(x_p) \right\} \]
(stage-wise cost plus terminal cost)

\[ g_i(x_i, u_i) \ge 0 \quad \text{Path constraints} \]
\[ g_p(x_p) \ge 0 \quad \text{Terminal constraints} \]
\[ \dot{x} = f(x, u) \quad \text{Model constraints} \]

Model-Based Optimal Control

Plant model: \( \dot{x} = f(x,u), \quad y = g(x,u) \)

(Block diagram: set-point r(t) → Controller → input u(t) → Plant → output y(t), with measurements fed back to the controller)

Open-Loop Optimal Control Problem

\[ \min_{u_0,\ldots,u_{p-1}} \left\{ \sum_{i=0}^{p-1} \phi(x_i, u_i) + \phi_p(x_p) \right\} \]
(stage-wise cost plus terminal cost)

\[ g_i(x_i, u_i) \ge 0 \quad \text{Path constraints} \]
\[ g_p(x_p) \ge 0 \quad \text{Terminal constraints} \]
\[ \dot{x} = f(x, u) \quad \text{Model constraints} \]

• Must be coupled with on-line state / model parameter update
• Requires on-line solution for each updated problem
• Analytical solution possible only in a few cases (LQ control)
• Computational limitation for numerical solution, especially back in the '50s and '60s

Model Predictive Control (Receding Horizon Control)

– Solve the open-loop optimal control problem on-line with x0 = x(k)
– Apply the optimal input move u(k) = u0
– Obtain new measurements, update the state, and solve the OLOCP at time k+1 with x0 = x(k+1)
– Continue this at each sample time

Implicitly defines the feedback law u(k) = h(x(k))
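Not part of the original slides: a minimal Python sketch of the receding-horizon loop just described. The routine names (`solve_olocp`, `plant_step`, `measure_state`) are placeholders for whatever optimizer, plant interface, and state estimator are actually used.

```python
import numpy as np

def receding_horizon_control(x0, n_steps, solve_olocp, plant_step, measure_state):
    """Generic receding-horizon (MPC) loop.

    solve_olocp(x)   -- solves the open-loop optimal control problem from state x
                        and returns the optimal input sequence [u0, ..., u_{p-1}]
    plant_step(u)    -- applies input u to the plant for one sample interval
    measure_state()  -- returns the updated state estimate x(k+1)
    """
    x = x0
    applied_inputs = []
    for k in range(n_steps):
        u_sequence = solve_olocp(x)   # open-loop optimal control problem with x0 = x(k)
        u0 = u_sequence[0]            # implement only the first move
        plant_step(u0)
        applied_inputs.append(u0)
        x = measure_state()           # new measurement / state update at k+1
    return np.array(applied_inputs)
```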

Analogy to Chess Playing

(Diagram: alternating moves. I, the controller, make my move; the opponent, the plant, responds with its move, producing a new state; I then plan my next move from that new state.)

Operational Hierarchy Before and After MPC

(Diagram comparing the conventional structure and the MPC structure for Units 1 and 2.)

Conventional structure (per unit): local optimization → high/low select logic, lead/lag, SUM blocks → PID loops → distributed control system (FC, TC, PC, LC).

MPC structure: global steady-state plant-wide optimization (every day) → local steady-state optimization (every hour) → dynamic constraint control / model predictive supervisory control, MPC (every minute) → dynamic control, PID (every second) → field loops (FC, TC, PC, LC).

Example: Blending System

• Control rA and rB

• Control q if possible

• Flowrates of additives are limited

(Figure contrasts the classical solution with the MPC formulation below.)

MPC: solve at each time k (p = size of the prediction window)

\[ \min_{u_1(j),\,u_2(j),\,u_3(j),\; j=k,\ldots,k+p-1} \; \sum_{i=1}^{p} \bigl( r_A(k+i|k) - r_A^* \bigr)^2 + \bigl( r_B(k+i|k) - r_B^* \bigr)^2 + \gamma \bigl( q(k+i|k) - q^* \bigr)^2, \qquad \gamma \ll 1 \]

Optimization and Control

An Exemplary Application (1)

An Exemplary Application (2)

Industrial Use of MPC

• Initiated at Shell Oil and other refineries during the late 1970s.

• The most applied advanced control technique in the process

industries.

• More than 4600 worldwide installations, plus an unknown number of “in-house” installations (result of a survey in 1999).

• Majority of applications (67%) are in refining and petrochemicals.

Chemical and pulp and paper are the next areas.

• Many vendors specializing in the technology

– Early Players: DMCC, Setpoint, Profimatics

– Today’s Players: Aspen Technology, Honeywell, Invensys, ABB

• Models used are predominantly empirical models developed

through plant testing.

• Technology is used not only for multivariable control but for most

economic operation within constraint boundaries.

MPC Industry Consolidation

(Consolidation chart: MPC vendors at CPC-V, late 1995, and after consolidation, late 2000.)

Late 1995: Adersa, CCI, DMCC, DOT Products, Honeywell, Litwin (Profimatics), Neuralware, Pavilion, Predictive Controls, Setpoint, SimSci, Treiber Control, PCL, MDC Technology, FRSI.

Late 2000, after consolidation: Aspentech (Setpoint, DMCC, Treiber, Neuralware), Adersa, ABB, DOT, Honeywell (Profimatics), GE, Pavilion, Invensys (Predictive Controls, SimSci, Foxboro), Emerson (MDC, FRSI).

Linear MPC Vendors and Packages

• Aspentech

– DMCplus

– DMCplus-Model

• Honeywell

– Robust MPC Technology (RMPCT)

• Adersa

– Predictive Functional Control (PFC)

– Hierarchical Constraint Control (HIECON)

– GLIDE (Identification package)

• MDC Technology (Emerson)

– SMOC (licensed from Shell)

– Delta V Predict

• Predictive Control Limited (Invensys)

– Connoisseur

• ABB

– 3dMPC

Result of a Survey in 1999 (Qin and Badgwell)

Nonlinear MPC Vendors and Packages

• Adersa

– Predictive Functional Control (PFC)

• Aspen Technology
  – Aspen Target: \( x_{k+1} = A x_k + B^u u_k + B^v v_k, \quad y_k = g(x_k) = C x_k + NN(x_k) \)

• Continental Controls

– Multivariable Control (MVC): Linear Dynamics + Static Nonlinearity

• DOT Products

– NOVA Nonlinear Controller (NLC): First Principles Model

• Pavilion Technologies

– Process Perfecter: Linear Dynamics + Static Nonlinearity

Results of a Survey in 1999 for Nonlinear MPC

Controller Design and Tuning Procedure

2. Conduct plant test: vary MVs and DVs and record the response of the CVs
3. Derive a dynamic model from the plant test data
4. Configure the MPC controller and enter initial tuning parameters
5. Test the controller off-line using closed-loop simulation
6. Download the configured controller to the destination machine and test the model predictions in open-loop mode
7. Commission the controller and refine the tuning as needed

Role of MPC in the Operational Hierarchy

• Plant-Wide Optimization: determine the plant-wide optimal operating condition for the day
• Local Optimization: make fine adjustments for the local units
• Multivariable Control (MPC): take each local unit to the optimal condition fast but smoothly, without violating constraints
• Distributed Control System (PID): FC, TC, PC, LC loops

Local Optimization

• A separate steady-state optimization determines steady-state targets for the inputs and outputs; RMPCT recently introduced a dynamic optimizer
• The steady-state optimizer honors input and output constraints and determines optimal input and output targets for both the thin and fat plant cases
• The RMPCT and PFC controllers allow for both linear and quadratic terms in the SS optimization
• In some packages, CVs are ranked in priority when determining optimal input and output targets, so that the SS control performance of a given CV is never sacrificed to improve a lower-priority CV; MVs are also ranked in priority order to determine how extra degrees of freedom are used

Dynamic Optimization

Most packages can be described (approximately) as minimizing a performance index with up to three terms: an output penalty, an input penalty, and an input rate penalty:

\[ J = \sum_{j=1}^{P} \bigl\| e^y_{k+j} \bigr\|^2_{Q_j} + \sum_{j=0}^{M-1} \bigl\| \Delta u_{k+j} \bigr\|^2_{S_j} + \sum_{j=0}^{M-1} \bigl\| e^u_{k+j} \bigr\|^2_{R_j} \]

subject to the model and to constraints on the inputs and outputs:

\[ u^M = \bigl( u_0^T, u_1^T, \ldots, u_{M-1}^T \bigr)^T \]
\[ x_{k+1} = f(x_k, u_k), \qquad y_{k+1} = g(x_{k+1}) + b_{k+1} \]
\[ \underline{u} \le u_k \le \overline{u}, \qquad \underline{\Delta u} \le \Delta u_k \le \overline{\Delta u}, \qquad \underline{y} \le y_k \le \overline{y} \]

Dynamic Optimization

• Some packages use a sequence of separate dynamic optimizations to resolve conflicting control objectives; CV errors are minimized first, followed by MV errors
• Another approach is to compute a reference output trajectory \(y^r\) and input sequence \(u^M\) which minimize the following objective:

\[ \min_{y^r_{k+j},\; u^M} \; J = \sum_{j=1}^{P} \bigl\| y^r_{k+j} - y_{k+j} \bigr\|^2_{Q} + \bigl\| u_{M-1} - u_{ss} \bigr\|^2_{S} \]

Output Trajectories

(Figure: four ways of specifying the desired output trajectory, each shown past vs. future)
• Aspen Tech’s DMC
• Adersa’s IDCOM
• Soft constraint, “zone control”
• Honeywell’s RMPCT (quadratic penalty)

Output Horizon
• Finite horizon
• Coincidence points

Input Parameterization
• Control horizon

Process Model Types

(Table summarizing process model types)
• Fundamental models (equations)
• Transfer function models (from data)
• Convolution models, finite impulse or step response (from data; linear, stable)
• Nonlinear empirical models, polynomial or neural net (from data)

Identification Technology

• Most products use PRBS-like or multiple-step test signals; GLIDE uses non-PRBS signals
• Most products use FIR, ARX or step response models
  – GLIDE uses transfer functions G(s)
  – RMPCT uses Box-Jenkins models
  – SMOC uses state-space models
• Most products use least-squares type parameter estimation:
  – prediction error or output error methods
  – RMPCT uses the prediction error method
  – GLIDE uses a global method to estimate uncertainty
• Connoisseur has adaptive capability using RLS
• A few products (DMCplus, SMOC) have subspace identification methods available for MIMO identification
• Most products provide an uncertainty estimate, but most do not make use of the uncertainty bound in control design

Summary

• MPC is a mature technology!

– Many commercial vendors, with packages differing in model form, objective function form, etc.

– Sound theory and experience

• Challenges are

– Simplifying the model development process

• plant testing & system identification

• nonlinear model development

– State Estimation

• Lack of sensors for key variables

– Reducing computational complexity

• approximate solutions, preferably with some guaranteed properties

– Better management of “uncertainty”

• creating models with uncertainty information (e.g., stochastic model)

• on-line estimation of parameters / states

• “robust” solution of optimization

FCCU Debutanizer

~20% under capacity

Debutanizer Diagram
(Figure: debutanizer column. Overhead at 190 lb pressure and 160 °F with fan, pressure controller (PC), PCT analyzer, reflux, and flow to the deethanizer; feed from the stripper through a pre-heater with temperature control (TC); tray 20 temperature and flooding indication; 400 °F bottoms, gasoline to blending.)

Process Limitation

Operation Problems:

• Overloading

-- over design capacity.

• Flooding

-- usually jet flooding, causing very poor separation.

-- especially in summer.

Consequences:

• High RVP, giving away Octane Number

• High OVHD C5, causing problems at Alky.

Control Objectives

Constrained Control:

• Keep the tower from flooding

• Keep RVP lower than its target.

Regulatory Control:

• Regulate OVHD PCT or C5 at spec.

• Rejecting disturbance not through slurry, if possible.

Real-Time Optimization

Optimization Objectives:

• Minimizing energy consumed

• Minimizing overhead cooling required

• Maximizing separation efficiency.

MPC Configuration

Controlled variables: OVHD PCT, OVHD pressure, RVP, differential pressure, feed temperature, tray 20 temperature, OVHD C5 %

Manipulated variables (through the regulatory PIDs): fan / R-PID fan output, internal reflux / R-PID reflux, pre-heater by-pass, reboiler by-pass

Disturbances: feed, stripper bottom temperature

MPC’s MV Moves

(Plot, 0–3000 minutes: MV trajectories — pressure (lb), feed-heater by-pass, reboiler by-pass, and reflux.)

Reflux vs. Feed

(Plot, 0–3000 minutes: feed and reflux flows. Reflux drops relative to feed, an indication that separation efficiency increased.)

Reflux-to-Feed Ratio

(Plot, 0–3000 minutes: reflux-to-feed ratio, roughly 0.42–0.54, decreasing; the dip corresponds to the flood-handling period.)

Handling Flood

(Plot, 0–3000 minutes: differential pressure across the tower, roughly 1.2–3.2; the excursion toward the top of the range marks potential flooding.)

Product Spec's

(Plot, 0–3000 minutes: product specifications.)

MPC Operation Overview

(Diagram: column temperature profile vs. tray number before and after MPC is turned on. Overhead at 190 lb pressure and 160 °F with fan, PCT and reflux; feed tray fed through the pre-heater from the stripper; tray 20 temperature marks the flooding region; 400 °F bottoms, gasoline to blending; RVP indicated at the bottom.)

MPC Control Results (1)

(Plot: C4- total, %, vs. date, April 1–17, 1991; the level drops after the RMPCT MPC was turned on.)

MPC Control Results (2)

(Plot: debutanizer bottom RVP daily average, roughly 5–7.5, vs. date, April 1–17, 1991; the RMPCT MPC was turned on partway through.)

MPC Control Results (3)

(Plot: debutanizer top C5+ total (I-C5, N-C5 & C5=+), %, roughly 2–20, vs. date, April 1–17, 1991; the level drops after the RMPCT MPC was turned on.)

Other Benefits

Note: the variance of the tray-20 temperature is increased, and it should be!

• Flooding is eliminated.

Acknowledgment

• Dr. Joseph Lu, Honeywell, Phoenix, AZ

• Center for Process Systems Engineering

Industrial Members

Lecture 2

- The prediction equation

- Use of state estimation

- Optimization

- Infinite-horizon MPC and stability

- Use of nonlinear models

Important Ingredients of an MPC Algorithm

• Prediction equation
  – Predicted future outputs = function of current “state” (stored in memory) + feedforward measurement + feedback measurement correction + future input adjustments

• Objective and Constraints

• Optimization Algorithm

• Receding Horizon Implementation

General Setup

(Block diagram of the prediction machinery)
• Dynamic model state: a compact representation of the past input record
• Previous state (in memory) + input move just implemented → state update
• Prediction model for future outputs, driven by the state and the future input moves (to be determined)
• Measurements provide the feedback / feedforward correction to the prediction
• The corrected predictions are passed to the optimization

Options

• Model Types

– Finite Impulse Response Model or Step Response Model

– State-Space Model

– Linear or Nonlinear

• Measurement Correction

– To the prediction (based on open-loop state calculation)

– To the state (through state estimation)

• Objective Function

– Linear or Quadratic

– Constrained or Unconstrained

Prediction Model for Different

Model Types

• Step Response Model

• State-Space Model

Sample-Data (Computer) Control

Model relates input samples to output samples:
\( \{v_0, v_1, v_2, \ldots\} \;\rightarrow\; \{y_0, y_1, y_2, \ldots\} \)
v can be an MV (u) or a measured DV (d).

Finite Impulse Response Model (1)

Assumptions:

• H0 = 0: no immediate effect

• The response settles back in n steps, i.e., \(H_{n+1} = H_{n+2} = \cdots = 0\): “finite impulse response” (reasonable for stable processes)

Finite Impulse Response Model (2)

\[ y(k) = H_1 v(k-1) + \cdots + H_n v(k-n) \]

Finite Impulse Response Model (3)

• State: \( x(k) = \bigl[ v^T(k-1), \ldots, v^T(k-n) \bigr]^T \), the n past input samples (includes both MVs and measured DVs)

• State update: easy!
\[ x(k+1) = M_0\, x(k) + T\, v(k) \]
\(M_0\) shifts the stored record (the oldest sample \(v(k-n)\) is dropped) and \(T\) inserts the newest sample \(v(k)\) at the top:
\[ x(k+1) = \bigl[ v^T(k), v^T(k-1), \ldots, v^T(k-n+1) \bigr]^T \]

Finite Impulse Response Model (4)

• Prediction

Model prediction of y: \( y(k) = H_1 v(k-1) + \cdots + H_n v(k-n) = C x(k) \)
Model prediction error (model error + unmeasured disturbance): \( e(k) = y_m(k) - y(k) \), where \(y_m(k)\) is the measured output.

Predicted future output samples = (effect of past input samples stored in memory) + (effect of future input moves, to be decided) + (feedforward terms) + (feedback error correction):

\[ \begin{bmatrix} y_{k+1|k} \\ \vdots \\ y_{k+p|k} \end{bmatrix} = \Psi_1^u \begin{bmatrix} u(k-1) \\ \vdots \\ u(k-n) \end{bmatrix} + \Psi_2^u \begin{bmatrix} u(k) \\ \vdots \\ u(k+p-1) \end{bmatrix} + \Psi_1^d \begin{bmatrix} d(k-1) \\ \vdots \\ d(k-n) \end{bmatrix} + \Psi_2^d \begin{bmatrix} d(k) \\ \vdots \\ d(k+p-1) \end{bmatrix} + \begin{bmatrix} e(k) \\ \vdots \\ e(k) \end{bmatrix} \]

The dynamic matrices \(\Psi\) are made of impulse response coefficients. The feedforward terms use the past disturbance samples stored in memory and the new measurement (assume \(d(k) = d(k+1) = \cdots = d(k+p-1)\)); the last term is the feedback error correction.
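Not part of the original slides: a minimal Python sketch of the SISO FIR state update and p-step-ahead prediction just described; the helper names are illustrative only, and the error e(k) is held constant over the horizon as assumed above.

```python
import numpy as np

def fir_state_update(x, v_new):
    """Shift the stored input record and insert the newest sample.
    x = [v(k-1), ..., v(k-n)] (most recent first); returns the state at k+1."""
    return np.concatenate(([v_new], x[:-1]))

def fir_predict(x, H, future_v, e):
    """p-step-ahead output prediction for a SISO FIR model.
    H        -- impulse response coefficients H1..Hn
    x        -- current state [v(k-1), ..., v(k-n)]
    future_v -- assumed future inputs [v(k), ..., v(k+p-1)]
    e        -- current model prediction error, held constant over the horizon
    """
    p = len(future_v)
    y_pred = np.zeros(p)
    past = np.asarray(x, float).copy()
    for i in range(p):
        past = fir_state_update(past, future_v[i])   # now holds v(k+i), ..., v(k+i-n+1)
        y_pred[i] = H @ past + e                     # y(k+i+1|k)
    return y_pred
```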

Step Response Model (1)

Assumptions:

• S0 = 0: no immediate effect

• The response settles in n steps, i.e., \(S_n = S_{n+1} = \cdots = S_\infty\): the same as the finite impulse response assumption
• Relationship with the impulse response coefficients: \( S_i = \sum_{j=1}^{i} H_j \)

Step Response Model (2)

• State: \( x(k) = \bigl[ y_0^T(k), \ldots, y_{n-1}^T(k) \bigr]^T \), the n future outputs assuming the input remains constant at the most recent value:
\[ y_i(k) = y(k+i) \;\text{ with }\; \Delta u(k) = \Delta u(k+1) = \cdots = 0 \]
Note that \( y_{n-1}(k) = y_n(k) = \cdots = y_\infty(k) \), and \( y(k) = y_0(k) \).

Pictorial Representation of the State Update

(Figure: x(k+1) is obtained by shifting x(k) through M1 and adding the step response to the newest input change.)

Step Response Model (3)

• State Update

\[ x(k+1) = M_1\, x(k) + S\, \Delta v(k) \]
\(M_1\) shifts the stored vector (the first element is dropped and the last element \(y_{n-1}\) is repeated, since the response has settled) and \(S = [S_1, S_2, \ldots, S_n]^T\) adds the step response to the newest input change:
\[ \begin{bmatrix} y_0(k+1) \\ y_1(k+1) \\ \vdots \\ y_{n-2}(k+1) \\ y_{n-1}(k+1) \end{bmatrix} = M_1 \begin{bmatrix} y_0(k) \\ y_1(k) \\ \vdots \\ y_{n-2}(k) \\ y_{n-1}(k) \end{bmatrix} + \begin{bmatrix} S_1 \\ S_2 \\ \vdots \\ S_{n-1} \\ S_n \end{bmatrix} \Delta v(k) \]

Step Response Model (4)

• Prediction (pictorial): future outputs from the state x(k) plus the effect of the future input moves (to be determined)

Step Response Model (4)

• Prediction

Model prediction of y(k): \( y(k) = y_0(k) = [\,I, 0, \ldots, 0\,]\, x(k) \)
Model prediction error: \( e(k) = y_m(k) - y(k) \)

Predicted future output samples = (the “state” stored in memory) + (effect of future input moves, to be decided) + (feedforward term) + (feedback error correction):

\[ \begin{bmatrix} y_{k+1|k} \\ \vdots \\ y_{k+p|k} \end{bmatrix} = \begin{bmatrix} y_1(k) \\ \vdots \\ y_p(k) \end{bmatrix} + \Omega^u \begin{bmatrix} \Delta u(k) \\ \vdots \\ \Delta u(k+p-1) \end{bmatrix} + \Omega^d \begin{bmatrix} \Delta d(k) \\ \vdots \\ \Delta d(k+p-1) \end{bmatrix} + \begin{bmatrix} e(k) \\ \vdots \\ e(k) \end{bmatrix} \]

The dynamic matrices \(\Omega\) are made of step response coefficients. The feedforward term uses the new measurement (assume \(\Delta d(k+1) = \cdots = \Delta d(k+p-1) = 0\)); the last term is the feedback error correction.

State-Space Model (1)

\[ z(k+1) = A z(k) + B^u u(k) + B^d d(k), \qquad y(k) = C z(k) \]

The discrete-time state-space model can be obtained from several sources:
• Fundamental model \( \dot{z} = f(z,u,d),\; y = g(z) \) → linearization → continuous state-space model \( \dot{z} = \tilde{A} z + \tilde{B}^u u + \tilde{B}^d d,\; y = \tilde{C} z \) → discretization
• I/O model \( y(z) = G^u(z)u(z) + G^d(z)d(z) \) or \( y(s) = G(s)u(s) + G^d(s)d(s) \) → state-space realization
• Test data \( \{y(i), u(i),\; i = 1,\ldots,N\} \) → state-space identification (or I/O model identification followed by realization)

State-Space Model (2)

Subtracting the model at k from the model at k+1:

\[ z(k+1) = A z(k) + B^u u(k) + B^d d(k), \qquad y(k+1) = C z(k+1) \]
\[ z(k) = A z(k-1) + B^u u(k-1) + B^d d(k-1), \qquad y(k) = C z(k) \]

\[ \Delta z(k+1) = A \Delta z(k) + B^u \Delta u(k) + B^d \Delta d(k) \]
\[ \Delta y(k+1) = C \Delta z(k+1) \;\Rightarrow\; y(k+1) = y(k) + C\bigl( A \Delta z(k) + B^u \Delta u(k) + B^d \Delta d(k) \bigr) \]

Stacking \(\Delta z\) and \(y\) gives the augmented (“velocity form”) model:

\[ \begin{bmatrix} \Delta z(k+1) \\ y(k+1) \end{bmatrix} = \begin{bmatrix} A & 0 \\ CA & I \end{bmatrix} \begin{bmatrix} \Delta z(k) \\ y(k) \end{bmatrix} + \begin{bmatrix} B^u \\ CB^u \end{bmatrix} \Delta u(k) + \begin{bmatrix} B^d \\ CB^d \end{bmatrix} \Delta d(k) \]
\[ y(k) = \begin{bmatrix} 0 & I \end{bmatrix} \begin{bmatrix} \Delta z(k) \\ y(k) \end{bmatrix} \]

State update: \( x(k+1) = \Phi x(k) + \Gamma^u \Delta u(k) + \Gamma^d \Delta d(k), \qquad y(k) = \Xi x(k) \)
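Not part of the original slides: a short Python sketch that builds the augmented velocity-form matrices Φ, Γᵘ, Γᵈ, Ξ exactly as derived above, assuming the discrete-time A, Bᵘ, Bᵈ, C are already available.

```python
import numpy as np

def velocity_form(A, Bu, Bd, C):
    """Build the augmented velocity-form matrices for the state x = [dz; y]:
    x(k+1) = Phi x(k) + Gu du(k) + Gd dd(k),   y(k) = Xi x(k)."""
    nz, ny = A.shape[0], C.shape[0]
    Phi = np.block([[A,     np.zeros((nz, ny))],
                    [C @ A, np.eye(ny)]])
    Gu = np.vstack([Bu, C @ Bu])
    Gd = np.vstack([Bd, C @ Bd])
    Xi = np.hstack([np.zeros((ny, nz)), np.eye(ny)])
    return Phi, Gu, Gd, Xi
```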

State-Space Model (3)

• Prediction

Model prediction of y(k): \( y(k) = \Xi x(k) \)
Model prediction error: \( e(k) = y_m(k) - y(k) \)

Predicted future output samples = (effect of the “state” stored in memory) + (effect of future input moves, to be decided) + (feedforward term) + (feedback error correction):

\[ \begin{bmatrix} y_{k+1|k} \\ \vdots \\ y_{k+p|k} \end{bmatrix} = \begin{bmatrix} \Xi\Phi \\ \vdots \\ \Xi\Phi^p \end{bmatrix} x(k) + \Omega^u \begin{bmatrix} \Delta u(k) \\ \vdots \\ \Delta u(k+p-1) \end{bmatrix} + \Omega^d \begin{bmatrix} \Delta d(k) \\ \vdots \\ \Delta d(k+p-1) \end{bmatrix} + \begin{bmatrix} e(k) \\ \vdots \\ e(k) \end{bmatrix} \]

The dynamic matrices \(\Omega\) are made of step response coefficients of the state-space model. The feedforward term uses the new measurement (assume \(\Delta d(k+1) = \cdots = \Delta d(k+p-1) = 0\)); the last term is the feedback error correction.

Summary

All three model types yield a prediction equation in the form of

\[ \underbrace{\begin{bmatrix} y_{k+1|k} \\ \vdots \\ y_{k+p|k} \end{bmatrix}}_{Y(k)} = \underbrace{L^x x(k) + L^d \Delta d(k) + L^e e(k)}_{\text{known} \;\equiv\; b(k)} + L^u \underbrace{\begin{bmatrix} \Delta u(k) \\ \vdots \\ \Delta u(k+p-1) \end{bmatrix}}_{\Delta U(k)} \]

• Assumptions

– Measured DV (d) remains constant at the current value of d(k)

– Model prediction error (e) remains constant at the current value

of e(k)

Ramp Type Extrapolation

• For Integrating Processes, Slow Dynamics

\[ e(k+i) = e(k) + i\,\bigl( e(k) - e(k-1) \bigr) \]
\[ \Delta d(k) = \Delta d(k+1) = \cdots = \Delta d(k+p-1) \]
(the slope e(k) − e(k−1) is extrapolated over the horizon)

Use of State Estimation

Measurement Correction of State

(Block diagram, as in the general setup but with a state estimator)
• State estimate: a compact representation of the past input and measurement record
• Previous state estimate (in “memory”) + input move just implemented + measurement correction → state update
• Prediction model for future outputs, driven by the state estimate and the future input moves (to be determined)
• Measurements enter through the feedback / feedforward correction of the state estimate
• The predictions are passed to the optimization

State Update Equation

\[ x(k+1) = \Phi x(k) + \Gamma^u \Delta u(k) + \Gamma^d \Delta d(k) + K\bigl( y_m(k) - \Xi x(k) \bigr) \]

• K is the update gain matrix, which can be found in various ways:
  – Pole placement: not so effective for systems with many states (most chemical processes)
  – Kalman filtering: requires a stochastic model of the form
    \[ x(k+1) = \Phi x(k) + \Gamma^u \Delta u(k) + \Gamma^d \Delta d(k) + w(k), \qquad y(k) = \Xi x(k) + \nu(k) \]
    where w and ν are white noises of known covariances representing the effect of unmeasured disturbances and noise; such a model can be obtained using, e.g., subspace ID
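Not part of the original slides: a minimal Python sketch of the measurement-corrected state update above. The gain K is assumed to be given (found, e.g., by pole placement or Kalman filtering as just discussed).

```python
import numpy as np

def observer_update(x, du, dd, ym, Phi, Gu, Gd, Xi, K):
    """One step of the measurement-corrected state update
    x(k+1) = Phi x(k) + Gu du(k) + Gd dd(k) + K (ym(k) - Xi x(k))."""
    innovation = ym - Xi @ x          # measured output minus model prediction
    return Phi @ x + Gu @ du + Gd @ dd + K @ innovation
```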

Prediction Equation

\[ \begin{bmatrix} y_{k+1|k} \\ \vdots \\ y_{k+p|k} \end{bmatrix} = \begin{bmatrix} \Xi\Phi \\ \vdots \\ \Xi\Phi^p \end{bmatrix} x(k) + \Omega^u \begin{bmatrix} \Delta u(k) \\ \vdots \\ \Delta u(k+p-1) \end{bmatrix} + \Omega^d \begin{bmatrix} \Delta d(k) \\ \vdots \\ \Delta d(k+p-1) \end{bmatrix} + \begin{bmatrix} e(k) \\ \vdots \\ e(k) \end{bmatrix} \]

Additional measurement correction is NOT needed here: the correction already enters through the state estimate.

What Are the Potential Advantages?

• Applicable to open-loop unstable dynamics (integrating processes, run-away processes)
• Cross-channel measurement update
  – More effective update of output channels with delays or measurement problems, based on other channels
  (Figure: unmeasured and measured inputs drive the process; y1 gets an early, robust update through the modeled correlation, while y2 suffers from delays, measurement difficulties, or slow sampling)
• Optimal extrapolation of output error and filtering of noise (based on the given stochastic system model)

Optimization

Objective Function

• Minimization Function: Quadratic cost (as in DMC)

\[ V(k) = \sum_{i=1}^{p} \bigl( y_{k+i|k} - y^* \bigr)^T \Lambda^y \bigl( y_{k+i|k} - y^* \bigr) + \sum_{i=0}^{m-1} \Delta u^T(k+i)\, \Lambda^u\, \Delta u(k+i) \]

– Penalize the tracking error as well as the magnitudes of the input adjustments
• Substituting the prediction equation (using \(L^u_m\), the first m block-columns of \(L^u\)) gives a quadratic function of the decision variables:

\[ V(k) = \Delta U_m^T(k)\, H\, \Delta U_m(k) + g^T(k)\, \Delta U_m(k) + c(k) \]
(c(k) is a constant that does not affect the minimizer)

Constraints

Express the input and output constraints using the prediction equation and rearrange to
\[ C\, \Delta U_m(k) \ge h(k) \]

Optimization Problem

• Quadratic program:
\[ \min_{\Delta U_m(k)} \; \Delta U_m^T(k)\, H\, \Delta U_m(k) + g^T(k)\, \Delta U_m(k) \qquad \text{s.t.}\quad C\, \Delta U_m(k) \ge h(k) \]
• Unconstrained solution:
\[ \Delta U_m(k) = -\tfrac{1}{2} H^{-1} g(k) \]
• Constrained solution: must be solved numerically
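Not part of the original slides: a small Python sketch assembling H and g from the prediction equation and computing the unconstrained solution above. All symbols follow the slide notation; the weighting matrices are assumed block-diagonal and already stacked.

```python
import numpy as np

def dmc_qp_terms(Lu_m, b, y_sp, Lambda_y, Lambda_u):
    """Assemble H and g for V = (Y - Y*)' Ly (Y - Y*) + dU' Lu dU,
    with Y = b + Lu_m dU (prediction equation restricted to m input moves)."""
    H = Lu_m.T @ Lambda_y @ Lu_m + Lambda_u
    g = 2.0 * Lu_m.T @ Lambda_y @ (b - y_sp)
    return H, g

def unconstrained_moves(H, g):
    """Unconstrained minimizer of dU' H dU + g' dU:  dU = -0.5 * H^{-1} g."""
    return -0.5 * np.linalg.solve(H, g)
```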

Quadratic Program

• Quadratic objective function with linear constraints

• Convex and therefore fundamentally tractable

• Solution methods

– Active set method: Determination of the active set of constraints

on the basis of the KKT condition

– Interior point method: Use of barrier function to “trap” the solution

inside the feasible region, Newton iteration

• Solvers

– Off-the-shelf software, e.g., QPSOL

– Customization is desirable for large-scale problems

Two-Level Optimization

Steady-state optimization:
\[ \min_{u_s(k)} \; L\bigl( y_{\infty|k},\, u_s(k) \bigr) \qquad \text{s.t.}\quad C_s \begin{bmatrix} y_{\infty|k} \\ u_s(k) \end{bmatrix} \ge c_s(k) \]
\[ u_s(k) = u(k-1) + \Delta u(k) + \cdots + \Delta u(k+m-1), \qquad y_{\infty|k} = b_s(k) + L_s\, \Delta u_s(k) \]

The steady-state prediction equation \(y_{\infty|k} = b_s(k) + L_s \Delta u_s(k)\) uses the state, the feedforward measurement, and the feedback error. The resulting optimal setting values (setpoints) \(y^*_{\infty|k},\, u^*_s(k)\) are passed to the dynamic optimization (quadratic program); sometimes input deviations from \(u^*_s\) are also included in the quadratic objective function.

Use of Infinite Prediction Horizon

and Stability

Use of ∞ Prediction Horizon – Why?

• Stability guarantee

– The optimal cost function can be shown to be the

control Lyapunov function

• Fewer parameters to tune

• More consistent, intuitive effect of weight

parameters

• Close connection with the classical

optimal control methods, e.g., LQG control

Step Response Model Case

\[ V(k) = \sum_{i=1}^{\infty} \bigl( y_{k+i|k} - y^* \bigr)^T \Lambda^y \bigl( y_{k+i|k} - y^* \bigr) + \sum_{i=0}^{m-1} \Delta u^T(k+i)\, \Lambda^u\, \Delta u(k+i) \]

With the step response model the output settles n steps after the last input move, so the infinite sum reduces to a finite one:

\[ V(k) = \sum_{i=1}^{m+n-1} \bigl( y_{k+i|k} - y^* \bigr)^T \Lambda^y \bigl( y_{k+i|k} - y^* \bigr) + \sum_{i=0}^{m-1} \Delta u^T(k+i)\, \Lambda^u\, \Delta u(k+i) \]

provided a settling constraint (\(y_{k+m+n-1|k} = y^*\)) is imposed for the cost to be bounded.

Additional Comments

• Can be generalized to state-space models

– More complicated procedure to turn the ∞-horizon problem into a

finite horizon problem

– Requires solving Lyapunov equation to get the terminal cost matrix

– Also, must make sure that output constraints will be satisfied

beyond the finite horizon → construction of output admissible set

• Use of a sufficiently large horizon (p ≈ m + the settling time) should have a similar effect

• Can we always satisfy the settling constraint?

– y=y* may not be feasible due to input constraints or insufficient m

→ use two-level approach

Two-Level Optimization

Steady-State Optimization (Linear Program or Quadratic Program) → optimal targets \( y^*_{\infty|k},\, u^*_s(k) \)

Settling constraint for the dynamic layer:
\[ \Delta u(k) + \cdots + \Delta u(k+m-1) = \Delta u_s^* \;\;\Rightarrow\;\; y_{k+m+n-1|k} = y^*_{\infty|k} \]

Use of Nonlinear Model

Difficulty (1)

The continuous model \( \dot{x} = f(x,u,d),\; y = g(x) \) must be discretized (e.g., via orthogonal collocation or numerical integration):
\[ x(k+1) = F\bigl(x(k), u(k), d(k)\bigr), \qquad y(k) = g\bigl(x(k)\bigr) \]

\[ y_{k+1|k} = g \circ F\bigl(x(k), u(k), d(k)\bigr) + e(k) \]
\[ y_{k+2|k} = g \circ F\bigl(F(x(k), u(k), d(k)),\, u(k+1),\, d(k)\bigr) + e(k) \]
\[ \vdots \]
\[ y_{k+p|k} = g \circ F^{\,p}\bigl(x(k), u(k), \ldots, u(k+p-1), d(k)\bigr) + e(k) \]

The prediction equation is nonlinear w.r.t. u(k), …, u(k+p−1).

Difficulty (2)

State Estimation

\[ \dot{x} = f(x,u,d) + w, \qquad y = g(x) + \nu \]

Extended Kalman filtering:
\[ x(k+1) = \int_k^{k+1} f(x,u,d)\, dt + K(k)\bigl( y_m(k) - g(x(k)) \bigr) \]

• Based on linearization at each time step: not optimal, may not be stable
• Best practical solution at the current time
• Promising alternative: Moving Horizon Estimation (requires solving an NLP)
• Difficult to come up with an appropriate stochastic system model (no ID technique)
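Not part of the original slides: a minimal discrete-time EKF predict/correct step, sketched under the model assumptions above. The functions `F`, `g`, `jac_F`, `jac_g` and the covariances `Q`, `R` are user-supplied placeholders (a discretized model and its Jacobians).

```python
import numpy as np

def ekf_step(x, P, u, y_m, F, g, jac_F, jac_g, Q, R):
    """One correct/predict step of the extended Kalman filter for
    x(k+1) = F(x,u) + w,  y(k) = g(x) + v,  with cov(w)=Q, cov(v)=R."""
    # correction with the current measurement (linearization about x)
    C = jac_g(x)
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    x_corr = x + K @ (y_m - g(x))
    P_corr = (np.eye(len(x)) - K @ C) @ P
    # prediction to the next sample
    A = jac_F(x_corr, u)
    x_next = F(x_corr, u)
    P_next = A @ P_corr @ A.T + Q
    return x_next, P_next
```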

Practical Algorithm

The EKF provides the current state estimate x(k); the dynamic matrix \(\Omega^u(k)\) is built from the model linearized at the current state and input values:

\[ \begin{bmatrix} y_{k+1|k} \\ y_{k+2|k} \\ \vdots \\ y_{k+p|k} \end{bmatrix} = \underbrace{\begin{bmatrix} \int_k^{k+1} f(x,u,d)\,dt \\ \int_k^{k+2} f(x,u,d)\,dt \\ \vdots \\ \int_k^{k+p} f(x,u,d)\,dt \end{bmatrix}}_{\text{model integration with constant input } u=u(k-1),\; d=d(k)} + \underbrace{\Omega^u(k) \begin{bmatrix} \Delta u(k) \\ \Delta u(k+1) \\ \vdots \\ \Delta u(k+p-1) \end{bmatrix}}_{\text{linearized effect of future input adjustments}} \]

The result is a linear prediction equation, so the on-line problem remains a linear or quadratic program.

Additional Comments / Summary

• Refinements of the successive linearization approach are possible:
  – Use the previously calculated input trajectory (instead of the constant input) in the integration and linearization step
  – Iterate between integration/linearization and control input calculation
• Solving the full nonlinear program on-line remains computationally prohibitive in most applications

Lecture 3

- Model structure

- Parameter/model estimation

- Error analysis

- Plant testing

- Data pretreatment

- Model validation

System Identification

Building a dynamic system model using data obtained from the plant

Why Important?

• MPC performance rests on the quality of the model obtained through system identification
• Poor model → poor prediction → poor performance

• Up to 80% of time is spent on this step

• Direct interaction with the plant

– Cost factor, safety issues, credibility issue

• Issues and decisions are sufficiently complicated that systematic procedures must be used

Steps and Decisions Involved

Plant Test
• Test signal (shape, size of perturbation)
• Closed-loop or open-loop?
• One-input-at-a-time or simultaneous?
• How long?
• Etc.
↓ Raw data

Pretreatment
• Outlier removal
• Pre-filtering
↓ Conditioned data

ID Algorithm
• Model structure
• (Parameter) estimation algorithm
↓ Model

Validation
• Source of validation data
• Criterion (end objective: control)
→ If not validated, repeat; otherwise done.

Model Structure

Model Structure (1)

(Block diagram: the inputs drive the plant dynamics; the disturbance and measurement noise pass through a noise model; their sum gives the measured outputs.)

• I/O model (e(k) is a white noise sequence):
\[ y(k) = \underbrace{G(q)\,u(k)}_{\text{effect of inputs}} + \underbrace{H(q)\,e(k)}_{\text{effect of disturbances, noise}} \]
H(q) models the auto- and cross-correlations of the residual (not a physical cause-effect relationship).

SISO I/O Model Structure (1)

• FIR (past inputs only)
\[ y(k) = b_1 u(k-1) + \cdots + b_m u(k-m) + e(k) \]
\[ G(q) = b_1 q^{-1} + \cdots + b_m q^{-m}, \qquad H(q) = 1 \]

• ARX (past inputs and outputs: “equation error”)
\[ G(q) = \frac{b_1 q^{-1} + \cdots + b_m q^{-m}}{1 - a_1 q^{-1} - \cdots - a_n q^{-n}}, \qquad H(q) = \frac{1}{1 - a_1 q^{-1} - \cdots - a_n q^{-n}} \]

• ARMAX (moving average of the noise term)
\[ y(k) = a_1 y(k-1) + \cdots + a_n y(k-n) + b_1 u(k-1) + \cdots + b_m u(k-m) + e(k) + c_1 e(k-1) + \cdots + c_n e(k-n) \]
\[ G(q) = \frac{b_1 q^{-1} + \cdots + b_m q^{-m}}{1 - a_1 q^{-1} - \cdots - a_n q^{-n}}, \qquad H(q) = \frac{1 + c_1 q^{-1} + \cdots + c_n q^{-n}}{1 - a_1 q^{-1} - \cdots - a_n q^{-n}} \]

SISO I/O Model Structure (2)

• Output Error (OE)
\[ \tilde{y}(k) = a_1 \tilde{y}(k-1) + \cdots + a_n \tilde{y}(k-n) + b_1 u(k-1) + \cdots + b_m u(k-m); \qquad y(k) = \tilde{y}(k) + e(k) \]
\[ G(q) = \frac{b_1 q^{-1} + \cdots + b_m q^{-m}}{1 - a_1 q^{-1} - \cdots - a_n q^{-n}}, \qquad H(q) = 1 \]

• Box-Jenkins (more general than ARMAX)
\[ G(q) = \frac{b_1 q^{-1} + \cdots + b_m q^{-m}}{1 - a_1 q^{-1} - \cdots - a_n q^{-n}}, \qquad H(q) = \frac{1 + c_1 q^{-1} + \cdots + c_n q^{-n}}{1 - d_1 q^{-1} - \cdots - d_n q^{-n}} \]

MIMO I/O Model Structure

• Inputs and outputs are vectors. Coefficients are matrices.

• For example, ARX model becomes

\[ y(k) = A_1 y(k-1) + \cdots + A_n y(k-n) + B_1 u(k-1) + \cdots + B_m u(k-m) + e(k) \]
\[ G(q) = \bigl( I - A_1 q^{-1} - \cdots - A_n q^{-n} \bigr)^{-1} \bigl( B_1 q^{-1} + \cdots + B_m q^{-m} \bigr) \]
\[ H(q) = \bigl( I - A_1 q^{-1} - \cdots - A_n q^{-n} \bigr)^{-1} \]

– \(A_i\) is an \(n_y \times n_y\) coefficient matrix and \(B_i\) is an \(n_y \times n_u\) matrix
– Different sets of coefficient matrices can give exactly the same G(q) and H(q) through pole/zero cancellations → problems in parameter estimation → requires special parameterization to avoid the problem

State Space Model

• Deterministic (output error structure):
\[ x(k+1) = A x(k) + B u(k), \qquad y(k) = C x(k) + e(k) \]

• Innovation form (ARMAX-like structure):
\[ x(k+1) = A x(k) + B u(k) + K e(k), \qquad y(k) = C x(k) + e(k) \]

The output can be viewed as the sum of a deterministic part \(y_d(k) = C x(k)\) (effect of the deterministic input) and a stochastic part \(y_s(k) = C x_s(k) + e(k)\) (auto- and cross-correlation of the residual).

• Identifiability can be an issue here too
  – A state coordinate transformation does not change the I/O relationship

Parameter (Model)

Estimation

Overview

(Flow: data + model structure selection → parameter (model) estimation → model for validation)

Estimation approaches:
• Prediction error method
• Statistical methods (MLE, Bayesian)
• IV (instrumental variable) method
• Subspace ID method
• ETFE / frequency-domain methods

Prediction Error Method

• Developed by Ljung and coworkers

• Flexible

– Can be applied to any model structure

– Can be used in recursive form

• Well developed theories and software tools

– Book by Ljung, System ID Toolbox for MATLAB

• Computational complexity depends on the model structure
  – ARX, FIR → linear least squares
  – ARMAX, OE, BJ → nonlinear optimization
• Not easy to use for identifying multivariable models

Prediction Error Method

• Put the model in the predictor form

\[ \hat{y}_{k|k-1} = G(q,\theta)\,u(k) + \bigl( I - H^{-1}(q,\theta) \bigr)\bigl( y(k) - G(q,\theta)\,u(k) \bigr) \]
(the second term contains at least one delay, so the predictor uses only past data)

• Choose the parameter values to minimize the sum of the prediction errors for the given data:
\[ \min_\theta \; \frac{1}{N} \sum_{k=1}^{N} \bigl\| e(k) \bigr\|_2^2, \qquad e(k) = H^{-1}(q,\theta)\bigl( y(k) - G(q,\theta)\,u(k) \bigr) \]
  – ARX, FIR → linear least squares
  – ARMAX, OE, BJ → nonlinear least squares
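Not part of the original slides: a small Python sketch of the ARX special case, where the prediction error method reduces to ordinary linear least squares (function and argument names are illustrative only).

```python
import numpy as np

def arx_least_squares(y, u, na, nb):
    """Fit a SISO ARX model
        y(k) = a1 y(k-1) + ... + a_na y(k-na) + b1 u(k-1) + ... + b_nb u(k-nb) + e(k)
    by linear least squares; returns (a coefficients, b coefficients)."""
    y, u = np.asarray(y, float), np.asarray(u, float)
    n0 = max(na, nb)
    Phi = np.array([[y[k - i] for i in range(1, na + 1)] +
                    [u[k - i] for i in range(1, nb + 1)]
                    for k in range(n0, len(y))])     # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[n0:], rcond=None)
    return theta[:na], theta[na:]
```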

Subspace Method

• More recent development

• Dates back to the classical realization theories, but rediscovered and extended by several people

• Identifies a state-space model

• Some theories and software tools

• Computationally simple

– Non-iterative, linear algebra

• Good for identifying multivariable models

– No special parameterization is needed

• Not optimal in any sense

• May need a lot of data for good results

• May be combined with PEM

– Use SS method to obtain an initial guess for PEM

Main Idea of the SS-ID Method (1)

Process model with noise:
\[ x(k+1) = A x(k) + B u(k) + w(k), \qquad y(k) = C x(k) + \nu(k) \]

Innovation form (steady-state Kalman filter), equivalent to the above in the I/O sense within a similarity transformation:
\[ x(k+1) = A x(k) + B u(k) + K e(k), \qquad y(k) = C x(k) + e(k) \]

We are free to choose the state coordinates.

Main Idea of the SS-ID Method (2)

Let \(\bar{n}\) be an upper bound on the state dimension n. The future output predictions are the extended observability matrix times the state, and the state in turn is a linear combination of the past outputs and past inputs (the state is the “minimum storage” summarizing the past):

\[ \begin{bmatrix} y_{k|k-1} \\ \vdots \\ y_{k+\bar{n}-1|k-1} \end{bmatrix} = \underbrace{\begin{bmatrix} C \\ CA \\ \vdots \\ CA^{\bar{n}-1} \end{bmatrix}}_{\Gamma_o} \begin{bmatrix} x_1(k) \\ \vdots \\ x_n(k) \end{bmatrix} + \bigl(\text{term in the future inputs } u(k),\ldots,u(k+\bar{n}-2)\bigr) \]
\[ \begin{bmatrix} x_1(k) \\ \vdots \\ x_n(k) \end{bmatrix} = L_1 \begin{bmatrix} y(k-1) \\ \vdots \\ y(k-\bar{n}) \end{bmatrix} + L_2 \begin{bmatrix} u(k-1) \\ \vdots \\ u(k-\bar{n}) \end{bmatrix} \]

The actual future outputs equal these predictions plus the prediction errors \(e_{k|k-1}, \ldots, e_{k+\bar{n}-1|k-1}\).

Main Idea of the SS-ID Method (3)

Stacking the data over time:
\[ Y_0^+ = \underbrace{\Gamma_o \,[\,L_1\;\; L_2\,]}_{M} \begin{bmatrix} Y^- \\ U^- \end{bmatrix} + L_3 U_0^+ + E_0^+ \]
– Consistent estimation, since \(E_0^+\) is independent of the regressors
– M is obtained by an oblique projection of the data matrices

• Perform an SVD on M to find n as well as \(\Gamma_o\):
\[ M = \begin{bmatrix} Q_1 & Q_2 \end{bmatrix} \begin{bmatrix} \Sigma_n & 0 \\ 0 & \approx 0 \end{bmatrix} \begin{bmatrix} P_1^T \\ P_2^T \end{bmatrix}, \qquad \Gamma_o = Q_1 \Sigma_n^{1/2} \]

The column space of \(Q_1\) is the state space; some variations exist among different algorithms in how the state basis is picked.

Main Idea of the SS-ID Method(4)

• Obtain the data for x(k)

\[ x(k) = \Gamma_o^{\dagger}\, M \begin{bmatrix} Y^-(k) \\ U^-(k) \end{bmatrix} \qquad \bigl(\Gamma_o^{\dagger}: \text{pseudo-inverse; equivalently } x(k) = [\,L_1\;\; L_2\,]\,[Y^-(k);\,U^-(k)]\bigr) \]

• Obtain the data for x(k+1) in a similar manner
• Obtain A, B and C through linear regression (consistent estimation, since the residual is independent of the regressor):
\[ \begin{bmatrix} x(k+1) \\ y(k) \end{bmatrix} = \begin{bmatrix} A & B \\ C & 0 \end{bmatrix} \begin{bmatrix} x(k) \\ u(k) \end{bmatrix} + \underbrace{\begin{bmatrix} K e(k) \\ e(k) \end{bmatrix}}_{\text{residual}} \]
• Obtain K and Cov(e) by using the residual data
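Not part of the original slides: a Python sketch of this final regression step, assuming the state-sequence estimates have already been obtained from the projection/SVD steps above (all names are illustrative).

```python
import numpy as np

def subspace_regression(X, X_next, U, Y):
    """Recover A, B, C by least squares and K, Cov(e) from the residuals.
    Columns of X, X_next, U, Y are time samples: x(k), x(k+1), u(k), y(k)."""
    n = X.shape[0]
    LHS = np.vstack([X_next, Y])              # [x(k+1); y(k)]
    RHS = np.vstack([X, U])                   # [x(k);  u(k)]
    Theta = LHS @ np.linalg.pinv(RHS)         # least-squares estimate of [[A, B], [C, 0]]
    A, B = Theta[:n, :n], Theta[:n, n:]
    C = Theta[n:, :n]
    resid = LHS - Theta @ RHS
    Ke, e = resid[:n, :], resid[n:, :]
    cov_e = (e @ e.T) / e.shape[1]
    K = (Ke @ e.T) @ np.linalg.pinv(e @ e.T)  # from Ke(k) ≈ K e(k)
    return A, B, C, K, cov_e
```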

Properties

• N4SID (Van Overschee and De Moor)
  – Kalman filter interpretation
  – Proof of asymptotic unbiasedness of A, B, and C
  – Efficient algorithm using QR factorization
• CVA (Larimore)
  – Founded on statistical arguments
  – Same idea, but the criterion for choosing the state basis (Q1) differs a bit from N4SID: it is based on the “correlation” between past I/O data and future output data, rather than minimization of the prediction error for the given data
• MOESP (Verhaegen)
  – Alternative route to the system matrices: \(\Gamma_o \rightarrow C, A \rightarrow B\)

Error Analysis

Error Types

• Bias: error due to structural mismatch
  – Bias = the error remaining as the number of data points → ∞
  – Independent of the number of data points collected
  – Bias distribution (e.g., in the frequency domain) depends on the input spectrum, pre-filtering of the data, etc.
  – Frequency-domain bias distribution under PEM: characterized by Ljung
• Variance: error due to limited availability of data
  – Vanishes as the number of data points → ∞
  – Depends on the number of parameters, the number of data points, S/N ratio, etc., but not on pre-filtering
  – Asymptotic distribution (as n, N → ∞):
\[ \operatorname{cov}\bigl(\operatorname{vec}(\hat{G}_N)\bigr) \approx \frac{n}{N} \underbrace{(\Phi_u)^{-T} \otimes \Phi_d}_{\text{noise-to-signal ratio}} \]
• Main tradeoff
  – Richer structure (more parameters) → bias ↓, variance ↑

Plant Test for

Data Generation

Test Signals

• Very Important

– Signal-to-noise ratio → Distribution and size of the variance

– Bias distribution

• Popular Types

– Multiple steps: power mostly in the low-frequency region → good estimation of steady-state gains (even with step disturbances) but generally poor estimation of high-frequency dynamics
– PRBS: flat spectrum → good estimation of the entire frequency response, given that the error also has a flat spectrum (often not true)
– Combine steps with PRBS?
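Not part of the original slides: a small Python sketch of a random binary test signal, a common stand-in for a true shift-register PRBS; the `hold` parameter (an assumption here) sets how long each level is held and thus how much power sits at low frequencies.

```python
import numpy as np

def binary_test_signal(n_samples, amplitude=1.0, hold=5, seed=0):
    """Random binary test signal switching between +/- amplitude, held constant
    for `hold` samples; larger `hold` shifts the input power toward low frequencies."""
    rng = np.random.default_rng(seed)
    n_switch = -(-n_samples // hold)                      # ceiling division
    levels = amplitude * (2 * rng.integers(0, 2, n_switch) - 1)
    return np.repeat(levels, hold)[:n_samples]
```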

Multi-Input Testing (MIT) vs.

Single Input Testing (SIT)

• MIT can significantly reduce the overall testing time
• Control-relevant data generation requires MIT
• MIT can be necessary for identification of highly interactive systems (e.g., systems with large RGA)
• SIT is often preferred in practice because of its more predictable effect on the on-going operation

Open-Loop vs. Closed-Loop

Open-Loop Testing
(Diagram: the perturbation signal r is applied directly as the plant input u; the disturbance d and the plant G0 produce the output y.)

Closed-Loop Testing
(Diagram: the dither signal can be added at the setpoint (location 1) or at the controller output (location 2); the loop C–G0 remains closed, with the disturbance d acting on the output y.)

Pros and Cons of Closed-Loop Testing

• Pros

– Safer, less damaging effect on the on-going operation
– Generates data that are more relevant to closed-loop control
• Cons
– Correlation between input perturbations and disturbances / noise through the feedback
– Many algorithms can fail or give problems: they give “bias” unless the assumed noise structure is perfect

Important Points from Analysis

• External perturbations (“dither”) are necessary.
  – Perturbations due to error feedback hardly contribute to variance reduction (since they are correlated with the errors):
\[ \operatorname{cov}\bigl(\operatorname{vec}(\hat{G}_N)\bigr) \approx \frac{n}{N} \, (\Phi_u^r)^{-T} \otimes \Phi_d \qquad \Phi_u^r: \text{input spectrum due to the dithering}, \;\; \Phi_d: \text{output error spectrum} \]
  – The level of the external perturbation signals also determines the size of the bias due to the feedback-induced correlation:
\[ E\{G_0 - \hat{G}_N\} = (H_0 - H_\mu)\,\underbrace{\Phi_{eu}\Phi_u^{-1}}_{\substack{\text{relative contribution of noise feedback}\\ \text{to the input spectrum}}} \]

Different Approaches to Model Identification

with Closed-Loop Data

• Direct approach
\[ D^N = \{ y(i), u(i),\; i=1,\ldots,N \} \;\rightarrow\; \hat{G}_N \; (\text{and } \hat{H}_N) \]

• Indirect approach (identify the closed-loop transfer function from r to y, then back out the plant)
\[ D^N = \{ y(i), r(i),\; i=1,\ldots,N \} \;\rightarrow\; \hat{T}_N^{yr} \;\rightarrow\; \hat{G}_N = \hat{T}_N^{yr}\bigl( I - \hat{T}_N^{yr} C \bigr)^{-1} \]

• Joint input-output approach
\[ D^N = \{ y(i), u(i), r(i),\; i=1,\ldots,N \} \;\rightarrow\; (\hat{T}_N^{yr}, \hat{T}_N^{ur}) \;\rightarrow\; \hat{G}_N = \hat{T}_N^{yr}\bigl( \hat{T}_N^{ur} \bigr)^{-1} \]

• Two-stage / projection approach: use \(\{u(i), r(i)\}\) to reconstruct the part of the input explained by r, \(\tilde{D}_1^N = \{\hat{u}(i)\}\), and identify \(\hat{G}_N\) from this reconstructed input together with \(\tilde{D}_2^N = \{y(i)\}\)

Data Pretreatment

Main Issues (1)

• Remove outliers

• Remove portions of data corresponding to unusual disturbances or operating conditions
• Filter the data
  – Affects the bias distribution (emphasizes or de-emphasizes different frequency regions)
  – Does NOT improve the S/N ratio (a common misconception)

Main Issues (2)

• Differencing the data (∆y(k) = y(k) − y(k−1), ∆u(k) = u(k) − u(k−1))
  – Removes trends (e.g., the effect of step disturbances, setpoint changes) that can destroy the effectiveness of many ID methods (e.g., subspace ID)
  – Often used in practice
  – Also removes the input power in the low-frequency region (PRBS → zero input power at ω = 0)
  – Amplifies the high-frequency parts of the data (e.g., noise), so low-pass filtering may be necessary

Model Validation

Overview

• The final check in model building
• Various methods
  – Size of the prediction error
  – “Whiteness” of the prediction error
  – Cross-correlation tests (e.g., between the prediction error and the inputs)
• Good prediction with test data but poor prediction with validation data
  – A sign of “overfit”
  – Reduce the order or use more compact structures like ARMAX (instead of ARX)

Concluding Remarks on Linear ID

• An integral part of model-based controller design
• Involves many decisions that affect
  – Plant operation during testing
  – Eventual performance of the controller
• Good theories and systematic tools are available
• System ID can also be used for constructing monitoring models
  – Subspace identification
  – Trend model, not a causal model → active testing is not needed

Deterministic Multi-Stage Optimization

\[ \min_{u_0,\ldots,u_{p-1}} \left\{ \sum_{j=0}^{p-1} \phi(x_j, u_j) + \phi_p(x_p) \right\} \]
(stage-wise cost plus terminal cost)
\[ g_j(x_j, u_j) \ge 0 \quad \text{Path constraints} \]
\[ g_p(x_p) \ge 0 \quad \text{Terminal constraints} \]
\[ x_{j+1} = f(x_j, u_j) \quad \text{Model constraints} \]

• The same formulation covers control as well as planning / scheduling problems
• Continuous and integer state / decision variables are possible
• In control, the p = ∞ case is typically solved
• Uncertainty is not explicitly addressed

Solution Approaches

• Analytical approach:
  – Derivation of a closed-form optimal policy requires solution of the HJB equation (hard!)

• Numerical approach: 80s-now

– Math programming (LP, QP, NLP, MILP, etc.):

• Fixed parameter case solution

• Computational limitation for large-size problem (e.g., when p= ∞).

– Parametric programming:

• General parameter dependent solution (e.g., a lookup table)

• Significantly higher computational burden

– Practical solution:

• Resolve the problem on-line whenever parameters are updated or

constraints are violated (e.g., in Model Predictive Control or

Reactive Scheduling).

Stochastic Multi-Stage Decision Problem

with Recourse

\[ \min_{u_j = \mu(x_j)} \; E\left\{ \sum_{j=0}^{\infty} \alpha^j \phi(x_j, u_j) \right\}, \qquad 0 < \alpha < 1 \;\; \text{(discount factor)} \]
\[ x_{j+1} = f_h(x_j, u_j, \omega_j) \quad \text{Markov system model} \]
\[ \Pr\bigl[ g(x_j, u_j) \ge 0 \bigr] \ge \zeta \quad \text{Chance constraint} \]

• This formulation covers control, scheduling, and other real-time decision problems in an uncertain dynamic environment
• No satisfactory solution approach is currently available

Limitation of Stochastic Programming

Approach

Simple case of 2 scenarios (↑ or ↓) per stage

(Figure: scenario tree branching from x_k through x_{k+1}, x_{k+2}, …, with a separate recourse decision u at every node.)

• Total number of decision variables = (1 + 2 + 4 + … + 2^{p−1}) n_u
• Number of branches to evaluate for each candidate decision = 2^p

Stochastic Programming Approach

General case (S number of scenarios per stage)

• Total number of decision variables = (1 + S + S² + … + S^{p−1}) n_u

• Number of branches to evaluate for each

decision candidate = Sp

• Not feasible for large S (large number of

scenarios) and/or large p (large number of

stages)

– practically limited to two stage problems with a

small number of scenarios.

• Current practical approach: Evaluate

most likely branch(es) only. BUT highly

limited!

Dynamic Programming (DP)

• The concept of cost-to-go under (optimal) control, as a function of the state x:
\[ J^*(x_k) \;=\; E\left\{ \sum_{j=k+1}^{\infty} \alpha^{\,j-k-1} \phi\bigl( x_j, \mu^*(x_j) \bigr) \right\} \]

• Optimality (Bellman) equations:
\[ \mu^*(x) = \arg\min_{u} \; E\left\{ \phi(x,u) + \alpha J^*\bigl( f_h(x,u) \bigr) \right\} \]
\[ J^*(x) = \min_{u \in U} \; E\left\{ \phi(x,u) + \alpha J^*\bigl( f_h(x,u) \bigr) \right\} \]

Value Iteration Approach to Solving DP

\[ J^*(x_k) = \min_{u(k)} \; E\left[ \phi(x_k, u_k) + \alpha J^*\bigl( f_h(x_k, u_k) \bigr) \right] \]

Value iteration over a sampled / discretized state space (i = 0, 1, 2, …):
\[ \hat{J}_{i+1}(\xi) = \min_{u} \; E\left[ \phi(\xi, u) + \alpha \hat{J}_i(\xi') \right], \qquad \xi' = f_h(\xi, u, \omega) \]
then set i ← i+1 and repeat until convergence.

(Note: the curse of dimensionality, in both the state and action spaces, limits this approach.)
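Not part of the original slides: a minimal Python sketch of value iteration on a finite sampled/discretized state-action space, using tabulated stage costs and transition probabilities (all array names are assumptions for illustration).

```python
import numpy as np

def value_iteration(stage_cost, transition_prob, alpha, n_iter=500, tol=1e-8):
    """Value iteration:  J_{i+1}(s) = min_a sum_{s'} P(s'|s,a) [phi(s,a) + alpha J_i(s')].
    stage_cost[s, a]          -- phi(x_s, u_a)
    transition_prob[s, a, s'] -- Pr(next state s' | state s, action a)"""
    n_s, n_a = stage_cost.shape
    J = np.zeros(n_s)
    for _ in range(n_iter):
        Q = stage_cost + alpha * transition_prob @ J   # expected cost-to-go per (s, a)
        J_new = Q.min(axis=1)
        if np.max(np.abs(J_new - J)) < tol:
            J = J_new
            break
        J = J_new
    Q = stage_cost + alpha * transition_prob @ J
    return J, Q.argmin(axis=1)                         # cost-to-go table and greedy policy
```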

Approximate Dynamic Programming (ADP)

• Exact DP suffers from the curse of dimensionality! Not suitable for high-dimensional systems
• Key idea of ADP
  – Find an approximate cost-to-go function
  – Sample a small “relevant” fraction of the states and initialize a cost-to-go value table
  – Iterate over only the sampled points in the state space
  – Use interpolation to evaluate the cost-to-go values for non-sampled points

Approximate Dynamic Programming (ADP)

Approximate value iteration (off-line), over the sampled state points, with sample averages for the expectation:
\[ \tilde{J}_{i+1}(x) = \min_{u \in U} \; E\left[ \phi(x,u) + \alpha \tilde{J}_i\bigl( f(x,u) \bigr) \right], \qquad i \leftarrow i+1 \;\; \text{until converged to } \tilde{J}^* \]
The initial sampled states and cost-to-go values can come from closed-loop simulations with suboptimal policies (MPC, PI, etc.).

On-line decision:
\[ u = \mu(x) = \arg\min_{u} \; E\left[ \phi(x,u) + \alpha \tilde{J}^*\bigl( f(x,u) \bigr) \right] \]

Approximation of Cost-to-Go

• Generate state and input trajectories \(x_k, u_k\) (e.g., from simulations under a suboptimal policy)
• Initial cost-to-go along each trajectory: \( J(x_k) = \sum_i \alpha^i \phi_i \)
• Fit the approximation \( \tilde{J}(x) \) from the pairs \( x_k \rightarrow J(x_k) \)
