
# SYSTEM IDENTIFICATION

## The System Identification Problem

The system identification problem is to estimate a model of a system based on input-output data.

### Basic Configuration

[Figure: basic configuration. Continuous case: input u(t) (observed) and disturbance v(t) (not observed) act on the System, which produces the output y(t) (observed). Discrete (sampled) case: sequences {u(k)}, {v(k)} and {y(k)}.]

We observe an input sequence (a sampled signal)

{u(k)} = {u(0), u(1), ..., u(k), ..., u(N)}

and an output sequence

{y(k)} = {y(0), y(1), ..., y(k), ..., y(N)}

If we assume the system is linear we can write:

Y(z) = G(z) U(z) + V(z)

The disturbance v(k) is often considered as generated by filtered white noise:

[Figure: white noise ε(z) passes through a filter H(z) to give the disturbance V(z), which is added to the process output G(z)U(z) to form Y(z).]

giving the description:

Y(z) = G(z) U(z) + H(z) ε(z)
## Parametric Models

### ARX model (autoregressive with exogenous variables)

[Figure: noise ε(z) filtered by 1/A(z^-1) gives V(z), i.e. H(z) = 1/A(z^-1); the input U(z) passes through z^-n B(z^-1)/A(z^-1), i.e. G(z) = z^-n B(z^-1)/A(z^-1), and the two are summed to give Y(z).]

where

A(z^-1) = 1 + a_1 z^-1 + ... + a_na z^-na
B(z^-1) = b_1 z^-1 + b_2 z^-2 + ... + b_nb z^-nb
giving the difference equation:

y(k) = -a_1 y(k-1) - ... - a_na y(k-na)
       + b_1 u(k-n-1) + b_2 u(k-n-2) + ... + b_nb u(k-n-nb) + ε(k)

where z^-n represents an extra delay of n sampling instants.

Identification problem:
- determine n, na, nb (structure)
- estimate a_1, a_2, ..., a_na and b_1, b_2, ..., b_nb (parameters)
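As a sketch, the ARX difference equation can be simulated directly; the coefficient values, delay and step input below are illustrative assumptions, not course data:

```python
# Direct simulation of the ARX difference equation (illustrative values).
def simulate_arx(a, b, n, u, eps):
    """y(k) = -a1*y(k-1) - ... - a_na*y(k-na)
              + b1*u(k-n-1) + ... + b_nb*u(k-n-nb) + eps(k)"""
    y = []
    for k in range(len(u)):
        yk = eps[k]
        for i, ai in enumerate(a, start=1):   # autoregressive part
            if k - i >= 0:
                yk -= ai * y[k - i]
        for j, bj in enumerate(b, start=1):   # delayed input part
            if k - n - j >= 0:
                yk += bj * u[k - n - j]
        y.append(yk)
    return y

# unit step input, no noise, na = nb = 2, extra delay n = 1
u = [1.0] * 10
eps = [0.0] * 10
y = simulate_arx([-1.5, 0.7], [1.0, 0.5], 1, u, eps)
# the first response sample appears at k = n + 1 = 2
```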
### ARMAX model (autoregressive moving average with exogenous variables)

[Figure: noise ε(z) filtered by C(z^-1)/A(z^-1) gives V(z), i.e. H(z) = C(z^-1)/A(z^-1); the input U(z) passes through z^-n B(z^-1)/A(z^-1) as before, and the two are summed to give Y(z).]

where

A(z^-1) = 1 + a_1 z^-1 + ... + a_na z^-na
B(z^-1) = b_1 z^-1 + b_2 z^-2 + ... + b_nb z^-nb
C(z^-1) = 1 + c_1 z^-1 + ... + c_nc z^-nc
giving the difference equation:

y(k) = -a_1 y(k-1) - ... - a_na y(k-na)
       + b_1 u(k-n-1) + b_2 u(k-n-2) + ... + b_nb u(k-n-nb)
       + ε(k) + c_1 ε(k-1) + ... + c_nc ε(k-nc)

Identification problem:
- determine n, na, nb, nc (structure)
- estimate a_1, ..., a_na, b_1, ..., b_nb and c_1, ..., c_nc (parameters)
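The moving-average polynomial C(z^-1) is what distinguishes ARMAX from ARX: it colours the noise. A minimal Python sketch (orders and coefficients na = nb = nc = 1, a_1 = -0.5, b_1 = 1, c_1 = 0.3, n = 0 are illustrative assumptions) makes this visible with u = 0 and a single noise impulse:

```python
# ARMAX simulation; with u = 0 and an impulse in eps, the response shows how
# C(z^-1) colours the disturbance.  All numbers are illustrative assumptions.
def simulate_armax(a, b, c, n, u, eps):
    """y(k) = -sum_i a_i y(k-i) + sum_j b_j u(k-n-j)
              + eps(k) + sum_m c_m eps(k-m)"""
    y = []
    for k in range(len(u)):
        yk = eps[k]
        for i, ai in enumerate(a, start=1):
            if k - i >= 0:
                yk -= ai * y[k - i]
        for j, bj in enumerate(b, start=1):
            if k - n - j >= 0:
                yk += bj * u[k - n - j]
        for m, cm in enumerate(c, start=1):
            if k - m >= 0:
                yk += cm * eps[k - m]
        y.append(yk)
    return y

u = [0.0] * 5
eps = [1.0, 0.0, 0.0, 0.0, 0.0]       # single noise impulse
y = simulate_armax([-0.5], [1.0], [0.3], 0, u, eps)
# y = [1.0, 0.8, 0.4, 0.2, 0.1]
```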
## General Prediction Error Approach

[Figure: the input u(t) drives both the Process, giving y(t), and a Predictor with parameters θ; the prediction error e(t,θ) (the difference between y(t) and the prediction) feeds an algorithm that adjusts θ by minimising some function of e(t,θ).]

The predictor is based on a parametric model, and the algorithm is often based on a least squares method:

min_θ Σ_{k=0}^{N} e(k)^2
## Consistency

A desirable property of an estimate is that it converges to the true parameter value as the number of observations N increases towards infinity. This property is called consistency.

Consistency is exhibited by ARMAX model identification methods but not, in general, by ARX approaches (the parameter estimates exhibit bias).

## Example of a MATLAB Identification Toolbox Session

[Figure: input and output data of the dryer model - OUTPUT #1 and INPUT #1 plotted over 0 to 25 s.]
MATLAB statements and results (ARX model: na = 2, nb = 2, delay 3):

    th = arx(z2,[2 2 3]);   % z2 contains the input-output data
    th = sett(th,0.08);     % set the correct sampling interval
    present(th)

Results:

    Loss fcn: 0.001685    Akaike's FPE: 0.001731    Sampling interval: 0.08
    The polynomial coefficients are
    B = 0        0        0        0.0666   0.0445
    A = 1.0000  -1.2737   0.3935
[Figure: ARX simulated (solid) and measured (dashed) outputs over t = 64 to 72 s - error = 6.56.]

ARX model:

G(z) = z^-3 (0.0666 + 0.0445 z^-1) / (1 - 1.2737 z^-1 + 0.3935 z^-2)
MATLAB Demo


# PERFORMANCE ASSESSMENT & UPDATING MECHANISM

(after K. J. Astrom)

[Figure: a REGULATOR with slowly varying parameters acts, via a summing junction on the reference, on a PROCESS subject to fast-varying disturbances; the outputs are fast varying.]
Adaptive control is a special type of nonlinear control in which the states of the process can be separated into two categories:

(i) slowly varying states (viewed as parameters)
(ii) fast varying states (compensated by standard feedback)

In adaptive control it is assumed that there is feedback from the system performance which compensates for the slowly varying process parameters.
An adaptive controller will contain:

- a characterization of desired closed-loop performance (reference model or design specifications)
- a control law with adjustable parameters
- a design procedure
- parameter updating based on measurements
- an implementation of the control law (discrete or continuous)
## Overview of Some Adaptive Control Schemes

### Gain Scheduling

[Figure: a gain schedule maps the measured operating conditions to regulator parameters; the regulator acts on the command signal to produce the control signal u, which drives the process to give the output y.]

The regulator parameters are adjusted to suit different operating conditions. Gain scheduling is an open-loop compensation.
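A gain schedule is essentially a lookup from operating condition to regulator parameters. A minimal Python sketch (the table values and the linear interpolation are illustrative assumptions):

```python
# Open-loop gain scheduling: the regulator gain is read from a table indexed
# by the measured operating condition.  Table values are assumptions.
SCHEDULE = [(0.0, 2.0), (50.0, 1.2), (100.0, 0.6)]  # (operating point, gain)

def scheduled_gain(op):
    """Linearly interpolate the gain table, clamping at both ends."""
    if op <= SCHEDULE[0][0]:
        return SCHEDULE[0][1]
    for (x0, g0), (x1, g1) in zip(SCHEDULE, SCHEDULE[1:]):
        if op <= x1:
            return g0 + (g1 - g0) * (op - x0) / (x1 - x0)
    return SCHEDULE[-1][1]

# control signal: u = scheduled_gain(operating_point) * error
```

Note there is no feedback from performance to the schedule itself, which is why the scheme is called open-loop compensation.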
### Auto-tuning

[Figure: a PID controller K(1 + 1/(T_i s) + T_d s) in a feedback loop around the process; the auto-tuner supplies the parameters K, T_i and T_d.]

PID controllers are traditionally tuned using simple experiments and empirical rules. Automatic methods can be applied to tune these controllers:

(i) an experimental phase using test signals; then
(ii) use of standard rules to compute the PID parameters.
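One classic set of such "standard rules" (not necessarily the one used in this course) is the Ziegler-Nichols ultimate-cycle method, where the ultimate gain Ku and period Tu come from the experimental phase; the example numbers are assumptions:

```python
# Step (ii) as a lookup rule: classic Ziegler-Nichols PID settings from the
# ultimate gain Ku and ultimate period Tu found in the experiment.
def zn_pid(Ku, Tu):
    return {"K": 0.6 * Ku, "Ti": 0.5 * Tu, "Td": 0.125 * Tu}

params = zn_pid(Ku=4.0, Tu=2.0)  # -> K = 2.4, Ti = 1.0, Td = 0.25
```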

### Model Reference Adaptive Systems (MRAS)

[Figure: a reference model produces the ideal output ym from the command uc; an adjustment mechanism updates the regulator parameters; the regulator acts on uc and the actual output y to produce the control u driving the process.]

The parameters of the regulator are adjusted such that the error e = y - ym becomes small. The key problem is to determine an appropriate adjustment mechanism and a suitable control law. A common choice is the MIT rule:

dθ/dt = -γ e ∂e/∂θ

where γ determines the adaptation rate. This rule changes the parameters in the direction of the negative gradient of e^2.
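The MIT rule can be illustrated on the classic adaptation-of-a-feedforward-gain example. The plant, gains and signals below are assumptions for illustration (a simpler feedforward law u = θ·uc rather than the course's scheme), with the sensitivity signal ∂e/∂θ taken proportional to the model output ym:

```python
import math

# MIT-rule adaptation of a feedforward gain (classic textbook example; all
# numbers are illustrative assumptions).
# Plant:  y  = k  * G(s) * u,  G(s) = 1/(s+1), unknown gain k = 2
# Model:  ym = k0 * G(s) * uc, desired gain k0 = 1
# Law:    u = theta * uc;  MIT rule: dtheta/dt = -gamma * e * ym, e = y - ym
dt, gamma, k, k0 = 0.001, 1.0, 2.0, 1.0
x = xm = theta = 0.0                 # plant state, model state, adapted gain
for n in range(int(300 / dt)):       # forward-Euler simulation
    uc = math.sin(n * dt)            # persistently exciting command
    y, ym = k * x, k0 * xm
    e = y - ym
    theta += dt * (-gamma * e * ym)  # MIT rule update
    x += dt * (-x + theta * uc)      # plant filter:  dx/dt  = -x  + theta*uc
    xm += dt * (-xm + uc)            # model filter:  dxm/dt = -xm + uc
# theta converges towards k0/k = 0.5, at which point y tracks ym
```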
Combining the MIT rule with the control law u = θ(uc - y) and computing the sensitivity derivative ∂e/∂θ produces the scheme:

[Figure: the model output ym is multiplied by the error e, filtered and integrated (gain γ/s) to give θ; θ multiplies (uc - y) to form the control u applied to the process.]

Note: steady-state will be achieved when the input to the integrator becomes zero, that is, when y = ym.
### Self-Tuning Regulators (STR)

[Figure: an estimation block recursively updates the process parameters; a design block computes the regulator parameters from them; the regulator acts on uc and y to produce the control u driving the process, whose actual output is y.]
The process parameters are updated and the regulator parameters are obtained from the solution of a design problem. The adaptive regulator consists of two loops:

(i) an inner loop consisting of the process and a linear feedback regulator
(ii) an outer loop composed of a (recursive) parameter estimator and a design calculation (to obtain good estimates it is usually necessary to introduce perturbation signals)

Two problems:

(i) the underlying design problem
(ii) the real-time parameter estimation problem
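The recursive estimator in the outer loop is typically recursive least squares (RLS). A minimal Python sketch for a first-order model (the true values a = -0.9, b = 0.4 and the square-wave perturbation signal are assumptions):

```python
# Recursive least squares - the kind of real-time estimator used in the outer
# loop of an STR - for the first-order model y(k) = -a*y(k-1) + b*u(k-1).
def rls_step(theta, P, phi, y_new, lam=1.0):
    Pp = [P[0][0] * phi[0] + P[0][1] * phi[1],
          P[1][0] * phi[0] + P[1][1] * phi[1]]            # P * phi
    denom = lam + phi[0] * Pp[0] + phi[1] * Pp[1]
    K = [Pp[0] / denom, Pp[1] / denom]                    # update gain
    err = y_new - (theta[0] * phi[0] + theta[1] * phi[1]) # prediction error
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    P = [[(P[i][j] - K[i] * Pp[j]) / lam for j in range(2)] for i in range(2)]
    return theta, P

theta, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]
y_prev = u_prev = 0.0
for k in range(200):
    u = 1.0 if (k // 10) % 2 == 0 else -1.0   # perturbation signal
    y = 0.9 * y_prev + 0.4 * u_prev           # "true" process, noise-free
    theta, P = rls_step(theta, P, [-y_prev, u_prev], y)
    y_prev, u_prev = y, u
# theta converges to [a, b] = [-0.9, 0.4]
```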
## Example - SIMULINK Simulation of MRAS

[Figure: SIMULINK block diagram - reference model 2/(s+2), process 0.5/(s+1), filters 1/(s+2), and two integrator/gain/multiplier branches (gains g1, g2) implementing the MIT-rule parameter updates; a Mux collects the reference, output and command signals.]
[Figure: input, reference and actual outputs of the MRAS simulation over 0 to 150 s.]
MATLAB Demo

# INTRODUCTION TO THE KALMAN FILTER

## State Estimation Problem

[Figure: the SYSTEM has input u(t), states x(t) and output y(t), and is driven by process noise w(t) and measurement noise v(t).]

dx/dt = Ax + Bu + Gw
y = Cx + Du + v

Vectors w(t) and v(t) are noise terms, representing unmeasured system disturbances and measurement errors respectively. They are assumed to be independent, white, Gaussian, and to have zero mean. In mathematical terms:
E{v(t) w'(τ)} = 0   for all t and τ
E{w(t) w'(τ)} = Q   (assumed constant)
E{v(t) v'(τ)} = R   (assumed constant)

where Q and R are symmetric, non-negative definite covariance matrices and E is the expectation operator. Only u(t) and y(t) are accessible.

The state estimation problem is to estimate the states x(t) from a knowledge of u(t) and y(t) (assuming we know A, B, G, C, D, Q and R).
## Construction of the Kalman-Bucy Filter

[Figure: the filter is a copy of the system model (B, integrator, C, with state feedback A) corrected through the time-varying gain L(t), which is driven by the output error y - ŷ; the filter state is the estimate x̂(t).]

Filter equation:

dx̂/dt = Ax̂ + Bu + L(t)(y - Cx̂ - Du)
L(t) is a time-dependent matrix gain. The estimation problem is now to find L(t) such that the error between the real states x(t) and the estimated states x̂(t) is minimized. This can be formulated as:

min_{L(t)} E{ [x(t) - x̂(t)]' [x(t) - x̂(t)] }

[Photo: R. E. Kalman]
## Duality Between the Optimum State Estimation Problem and the Optimum Regulator Problem

It can be shown that the optimum state estimation problem:

min_{L(t)} E{ [x(t) - x̂(t)]' [x(t) - x̂(t)] }

subject to:

dx/dt = Ax + Bu + Gw
y = Cx + Du + v
dx̂/dt = Ax̂ + Bu + L(t)(y - Cx̂ - Du)
E{ww'} = Q,  E{vv'} = R

is the dual of the optimum regulator problem:

min_{L(t)} (1/2) ∫_0^T ( x' G Q G' x + u' R u ) dt

subject to:

dx/dt = A'x + C'u
u = -L'(t) x
Thus L(t) can be obtained by solving the matrix Riccati equation:

dS/dt = A S + S A' - S C' R^-1 C S + G Q G'

L(t) = S(t) C' R^-1

which converges to a constant matrix gain:

lim_{t→∞} L(t) = L
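The steady-state gain can be checked numerically by integrating the Riccati equation to convergence; a plain-Python 2x2 sketch using the course example below (A = [0 1; -1 0], G = [0; 1], C = [1 0], Q = 1, R = 3):

```python
# Steady-state Kalman gain by integrating the matrix Riccati equation
#   dS/dt = A S + S A' - S C' R^-1 C S + G Q G'
# to its fixed point, then L = S C' R^-1.
def mm(X, Y):  # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A   = [[0.0, 1.0], [-1.0, 0.0]]
At  = [[0.0, -1.0], [1.0, 0.0]]       # A'
GQG = [[0.0, 0.0], [0.0, 1.0]]        # G Q G'  with G = [0;1], Q = 1
CRC = [[1.0 / 3.0, 0.0], [0.0, 0.0]]  # C' R^-1 C  with C = [1 0], R = 3
S = [[0.0, 0.0], [0.0, 0.0]]
dt = 0.001
for _ in range(int(60 / dt)):         # forward Euler until steady state
    AS, SAt, SCS = mm(A, S), mm(S, At), mm(mm(S, CRC), S)
    S = [[S[i][j] + dt * (AS[i][j] + SAt[i][j] - SCS[i][j] + GQG[i][j])
          for j in range(2)] for i in range(2)]
L = [S[0][0] / 3.0, S[1][0] / 3.0]    # L = S C' R^-1 (first column of S / R)
# L ≈ [0.5562, 0.1547]
```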
## Linear Quadratic Estimator Design Using MATLAB

LQE  Linear quadratic estimator design. For the continuous-time system:

    dx/dt = Ax + Bu + Gw   {State equation}
    z = Cx + Du + v        {Measurements}

with process noise and measurement noise covariances:

    E{w} = E{v} = 0,  E{ww'} = Q,  E{vv'} = R,  E{wv'} = 0

L = LQE(A,G,C,Q,R) returns the gain matrix L such that the stationary Kalman filter:

    dx̂/dt = Ax̂ + Bu + L(z - Cx̂ - Du)

produces an LQG optimal estimate of x.
Example:

dx1/dt = x2
dx2/dt = -x1 + w(t),   E{w^2(t)} = 1
y = x1 + v(t),         E{v^2(t)} = 3

    A=[0 1;-1 0];
    G=[0;1];
    C=[1 0];
    Q=1;
    R=3;
    L=lqe(A,G,C,Q,R)

produces:

    L =
        0.5562
        0.1547
giving the filter equations:

dx̂1/dt = x̂2 + l1 (y - ŷ)
dx̂2/dt = -x̂1 + l2 (y - ŷ)
ŷ = x̂1

where l1 = 0.5562 and l2 = 0.1547.
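A simple deterministic check of these filter equations: with the noise set to zero and an (assumed) wrong initial estimate, the estimation error decays under the stable A - LC dynamics. A forward-Euler Python sketch:

```python
import math

# Noise-free check of the example filter: true oscillator starts at (1, 0),
# the filter at (0, 0); the estimation error x - xhat decays to zero.
l1, l2 = 0.5562, 0.1547
dt = 0.001
x1, x2 = 1.0, 0.0        # true states
xh1, xh2 = 0.0, 0.0      # estimates
for _ in range(int(30 / dt)):
    y = x1                                    # noise-free measurement
    nx1, nx2 = x1 + dt * x2, x2 - dt * x1     # dx1/dt = x2, dx2/dt = -x1
    nh1 = xh1 + dt * (xh2 + l1 * (y - xh1))   # filter equations
    nh2 = xh2 + dt * (-xh1 + l2 * (y - xh1))
    x1, x2, xh1, xh2 = nx1, nx2, nh1, nh2
err = math.hypot(x1 - xh1, x2 - xh2)          # ≈ 0 after 30 s
```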
[Figure: block diagrams of the SYSTEM (two cascaded integrators with feedback -1, input u(t) = 0, disturbance w(t) entering before the integrators and measurement noise v(t) added to the output y) and of the FILTER (the same structure, corrected through the gains l1 and l2 driven by y - ŷ).]
x2 ( t ) x1 ( t )
vt
WS1

wt
1.7
WS2
v(t) sqrt(R)

PLANT
1 + +
- 1/s 1/s +
sqrt(Q) - x2 x1 y
w(t) meas(y)
+ +
- - e1t
y-Cx e1 WS3

- 1/s + Mux
+ + 1/s
_ x2hat __ x1hat Mux1 x1/x1hat
+ e2t
-
0.556 e2 WS4
l1 Mux
0.155
Mux x2/x2hat
l2
KALMAN FILTER
316
[Figure: comparison of actual (solid) and estimated (dashed) state x1 over t = 265 to 280 s.]
[Figure: comparison of actual (solid) and estimated (dashed) state x2 over t = 265 to 280 s.]
[Figure: measurement signal y(t) over t = 265 to 280 s.]
MATLAB Demo
