
CHAPTER 3

INTRODUCTION
A model is an image of a dynamic system. A dynamic system is shown in Fig. 3.1. The dynamic system is driven by input variables u(t) and disturbances v(t). The user can control u(t) to obtain the desired output y(t). For a dynamic system, the control action at time t will influence the output at time instants s > t, t > 0.

[Block diagram: input u(t) and disturbance v(t) enter the System block; output y(t) leaves it]
Fig. 3.1: A Dynamic System

3.1 NEED FOR MODELING DYNAMIC SYSTEMS


The following example of a dynamic system illustrates the need for a mathematical model.
Consider a stirred tank as shown in Fig. 3.2, where two flows are mixed. The input variables are the input flows F1(t) and F2(t). The output variables are the outflow F(t) and the output concentration c(t). The disturbing variables are the input concentrations C1(t) and C2(t). The flows F1 and F2 are controlled with control valves.
[Stirred tank: inflow F1 with concentration C1 and inflow F2 with concentration C2 enter a tank of volume V, level h and concentration c; the outflow is F]
Fig. 3.2: A Stirred Tank

Suppose we want to design a controller which acts on the flows F1(t) and F2(t) using the measurements of F(t) and c(t). The purpose of the controller is to ensure that F(t) and c(t) remain constant even if the concentrations C1(t) and C2(t) vary considerably.
For such a controller design we need some form of mathematical model which describes how the input, output and disturbing variables are related.
Many industrial processes, for example the production of iron, sugar, glass and paper, must be controlled in order to run safely and efficiently. To design controllers, some type of model of the process is needed.
The models can be of the following types:
• Distributed and lumped model
• Static and dynamic model
• Discrete and continuous model
• Deterministic and stochastic model
• Explicit and implicit model
• Linear and nonlinear model
The main uses of a model are:
1. To improve understanding of the process.
2. To optimize process design and hence operating conditions.
3. To design a control strategy for the process.
4. To train operating personnel.
5. To design model based controllers (example: MPC, Model Reference Adaptive Controller), which help in achieving uniformity, disturbance rejection and setpoint tracking, all of which translate into better process economics.
6. To plan and schedule production.

3.2 MATHEMATICAL MODELING AND SYSTEM IDENTIFICATION


In order to analyse the behaviour of a process and to design its controller, we need a mathematical representation of the physical and chemical phenomena taking place in it. Such a mathematical representation constitutes the model of a process. The activities leading to the construction of the mathematical model will be referred to as mathematical modeling.
There are two ways of constructing mathematical models:
1. Mathematical Modeling
2. System Identification

1. Mathematical Modeling
This is an analytical approach. Basic laws from physics (such as Newton's laws and balance equations) are used to describe the dynamic behaviour of a process.

2. System Identification
System identification is the field of modeling a dynamic system or process from experimental data. So it is an experimental approach. Some experiments are performed on the process to collect input and output data. Then a model is fitted to the experimental data by assigning suitable values to its parameters.
For example, the model to be fitted is

    Y(s)/U(s) = K e^(-td s) / (τs + 1)

Where,
    K  = Process steady state gain
    τ  = Time constant (in seconds)
    td = Delay time (in seconds)
From experimental data, the process parameters K, τ and td are estimated using system identification techniques.
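As an illustration, the three parameters of this first-order-plus-delay model can be fitted to recorded step-response data by nonlinear least squares. The sketch below is hypothetical: it generates synthetic "experimental" data from an assumed process (K = 2, τ = 5 s, td = 1 s) and recovers the parameters with SciPy's curve_fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def fodt_step(t, K, tau, td):
    """Unit-step response of K*e^(-td*s)/(tau*s + 1): zero before the
    delay td, then a first-order rise toward K."""
    y = K * (1.0 - np.exp(-(t - td) / tau))
    return np.where(t >= td, y, 0.0)

# Synthetic "experimental" data from an assumed process (K=2, tau=5, td=1).
t = np.linspace(0, 30, 301)
rng = np.random.default_rng(0)
y_meas = fodt_step(t, 2.0, 5.0, 1.0) + 0.02 * rng.standard_normal(t.size)

# Fit the three parameters to the recorded step response.
(K_hat, tau_hat, td_hat), _ = curve_fit(fodt_step, t, y_meas, p0=[1.0, 1.0, 0.5])
print(round(K_hat, 2), round(tau_hat, 2), round(td_hat, 2))
```

The fitted values should land close to the assumed K, τ and td; with real plant data one would replace the synthetic y_meas with the logged response.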

3.3 SYSTEM IDENTIFICATION STEPS


Since system identification is an experimental approach, the steps listed in Fig. 3.3 are used to estimate the model of a process.

Start
  |
Construct an experimental setup (using a priori knowledge of the model)
  |
Conduct an experiment and collect data
  |
Determine / choose model structure
  |
Choose method to estimate parameters
  |
Model validation
  |
Model accepted? -- Yes --> Stop
  |
  No --> select a new data set / model structure / estimation method and repeat

Fig. 3.3: Schematic Flowchart of System Identification
Step 1: Construct an experimental setup for the given process or plant.
Step 2: Conduct an experiment by exciting the process (using an input signal such as a step, sinusoidal or random signal), observing its input and output over a time interval, and recording the data for preprocessing.
Step 3: Determine / choose an appropriate model structure (typically a linear differential equation of a certain order).
Step 4: Choose a suitable statistically based method to estimate the unknown parameters of the model (such as the coefficients in the differential equation).
Note: In practice, the estimation of structure and parameters is often done iteratively.
Step 5: The model obtained is validated to test whether it is an adequate model of the process. If the model is correct, the identification process is stopped. Otherwise the identification process is repeated by selecting new experimental data, a new model structure (a more complex structure if required), new estimation methods etc.
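The iterative loop of Steps 3-5 can be sketched in a few lines. The example below is hypothetical: a made-up ARX(2,2) process plays the role of the plant, linear difference-equation models of increasing order are fitted by least squares on one part of the data, and each candidate is validated on fresh data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 2: the "experiment" -- a hypothetical ARX(2,2) process excited by white noise.
N = 400
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(2, N):
    y[t] = 1.5*y[t-1] - 0.7*y[t-2] + 1.0*u[t-1] + 0.5*u[t-2] + 0.05*rng.standard_normal()

def regressors(y, u, n):
    """Rows phi(t) = [-y(t-1),...,-y(t-n), u(t-1),...,u(t-n)] for t = n..len(y)-1."""
    return np.array([np.concatenate([-y[t-n:t][::-1], u[t-n:t][::-1]])
                     for t in range(n, len(y))])

errs = {}
for n in (1, 2, 3):                              # Steps 3-4: try model structures
    Phi = regressors(y[:300], u[:300], n)
    theta, *_ = np.linalg.lstsq(Phi, y[n:300], rcond=None)
    Phi_v = regressors(y[300:], u[300:], n)      # Step 5: validate on fresh data
    errs[n] = np.mean((y[300 + n:] - Phi_v @ theta)**2)
    print(n, round(errs[n], 4))
```

The validation error drops sharply at the true order and then levels off, which is the usual acceptance criterion in the flowchart above.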

TWO MARKS QUESTIONS AND ANSWERS


1. What is a model?
A model serves as a good mathematical substitute for the process. A model usually consists of a set of differential or difference and algebraic equations. Simulations using models are very economical, safe and powerful substitutes for experiments on the real process. For example, it is safe to study the response of a process under faulty conditions.
2. What are the advantages of a model?
• To improve understanding of the process.
• To optimize process design and hence operating conditions.
• To design a controller for the process.
• To train operating personnel.
• To plan and schedule production.
• To obtain inferential estimates of a physical quantity.
3. What is mathematical modeling?
Models obtained from fundamental laws (such as Newton's laws, material and energy balance equations, ...) are known as mathematical models (also known as first principles models), which are used to describe the dynamic behaviour of a process. It is an analytical approach.
4. What is an empirical model?
It is an alternative approach to a mathematical model. Some experiments are performed on the process to collect input and output data. Then a model is fitted to the experimental data by assigning suitable values to its parameters. So it is an experimental approach.
5. What are the different approaches to modeling?
a) Mathematical model
b) Empirical model
6. What is system identification?
System identification is the field of modeling a dynamic process from experimental data. The model obtained using system identification techniques is known as an empirical model. Unmeasured dynamics and uncertainties are difficult to handle using a mathematical model. They have to be handled using empirical models.
CHAPTER 4

NON PARAMETRIC METHODS OF SYSTEM IDENTIFICATION
4.1 INTRODUCTION
This chapter describes four different non parametric methods for system identification. Non-parametric identification methods are characterised by the property that the resulting models are curves or functions which are not necessarily parameterized by a finite dimensional parameter vector. In these methods, the estimated parameters have some physical insight into the process.
The four different non parametric methods for system identification are

1. Transient Analysis
In transient analysis, the input is applied as a step or an impulse and the recorded output helps to identify the model. Fig. 4.1 shows the transient response of a first order system with delay, G(s) = K e^(-td s) / (τs + 1), to a unit step input. From Fig. 4.1, we can easily estimate model parameters such as the steady state gain K, time constant τ and delay time td.

[Step response curve: zero until the delay td, then a first-order rise to the final value K]
Fig. 4.1: Response of a first order system with delay G(s) = K e^(-td s) / (τs + 1) to a unit step
2. Frequency Analysis
The input is sinusoidal. For a linear system the output will also be sinusoidal at steady state. The change in amplitude and phase will give the frequency response (example: Bode plot). From the frequency response the model will be estimated.

[Block diagram: sinusoidal input a sin(ωt) enters the Process; sinusoidal output b sin(ωt + φ) leaves it]
Fig. 4.2: Process with Sinusoidal Input and Output

3. Correlation Analysis
The weighting function h(K) is used in this method to model the process.

    y(t) = Σ_{K=0}^{∞} h(K) u(t − K) + v(t)          ... (4.1)

Where,
    u(t) = Input
    y(t) = Output
    h(K) = Weighting sequence
    v(t) = Disturbance
The input is white noise. A normalized cross-covariance function between output and input will provide an estimate of the weighting function h(K).

4. Spectral Analysis
Equation (4.1) is used as the model in spectral analysis. The model can be estimated for arbitrary inputs by dividing the cross-spectrum (between output and input) by the input spectrum.

4.2 TRANSIENT ANALYSIS


In this method, the model is estimated from the step response or impulse response of the process. In this section, how a model can be estimated from the step response using the transient analysis technique is explained.

Identifying a First-Order-Dead-Time (FODT) model
Let us consider a FODT system which is described by the transfer function model shown below.

    Y(s)/U(s) = G(s) = K e^(-τs) / (Ts + 1)          ... (4.2)

Where,
    Y(s) = Laplace transform of the output signal y(t)
    U(s) = Laplace transform of the input signal u(t)
    G(s) = Transfer function of the FODT system
    T    = Time constant
    τ    = Dead time
    K    = Steady state gain
Apply a unit step as the input to the FODT system as shown in Figure 4.3 and obtain the step response shown in Figure 4.4.

[Block diagram: unit step input u(t) enters the FODT system K e^(-τs)/(Ts + 1); output y(t)]
Fig. 4.3: Unit step input to FODT system

Figure 4.4 demonstrates a graphical method for determining the FODT parameters K, T and τ from the step response.
The gain K is given by the final value of the response. By drawing the steepest tangent, T and τ can be obtained as shown in Figure 4.4. The tangent crosses the t-axis at t = τ and reaches the final value K at t = τ + T.

[Step response with the steepest tangent drawn: the tangent cuts the t-axis at τ and reaches the level K a further interval T later]
Fig. 4.4: Step response of FODT system to a unit step input
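The tangent construction is easy to carry out numerically as well. The sketch below is illustrative, using a hypothetical noise-free FODT process with K = 2, T = 4 s and dead time τ = 1.5 s: the steepest slope locates the tangent, its t-axis crossing gives τ, and the point where it reaches K gives τ + T.

```python
import numpy as np

# Noise-free unit-step response of a hypothetical FODT process
# G(s) = 2*e^(-1.5 s)/(4 s + 1), i.e. K=2, T=4, tau=1.5.
t = np.linspace(0, 40, 4001)
y = np.where(t >= 1.5, 2.0*(1.0 - np.exp(-(t - 1.5)/4.0)), 0.0)

K_hat = y[-1]                        # gain: final value of the response

# Steepest tangent: point of maximum slope of the response.
dy = np.gradient(y, t)
i = np.argmax(dy)
slope = dy[i]

# The tangent y = y[i] + slope*(t - t[i]) crosses the t-axis at t = tau ...
tau_hat = t[i] - y[i]/slope
# ... and reaches the final value K at t = tau + T.
T_hat = t[i] + (K_hat - y[i])/slope - tau_hat

print(round(K_hat, 2), round(T_hat, 2), round(tau_hat, 2))
```

With measured (noisy) data the derivative would first be smoothed, but the geometry of the method is the same.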
Identifying a Second Order Model
Let us consider a second order system which is described by the transfer function model shown below.

    Y(s)/U(s) = G(s) = K ω0² / (s² + 2δω0 s + ω0²)          ... (4.3)

Where,
    Y(s) = Laplace transform of the output signal y(t)
    U(s) = Laplace transform of the input signal u(t)
    G(s) = Transfer function of the second order system
    K    = Steady state gain
    ω0   = Undamped natural frequency
    δ    = Damping ratio
Apply a unit step as the input to the second order system as shown in Figure 4.5 and obtain the step response shown in Figure 4.6.

[Block diagram: unit step input u(t) enters G(s); output y(t)]
Fig. 4.5: Unit step input to second order system

[Underdamped step response: first maximum at t1, first minimum at t2, second maximum at t3, settling to the final value K]
Fig. 4.6: Step response of a second order system to a unit step input
The gain K is given by the final value (after convergence) as shown in Figure 4.6. The maxima and minima of the step response occur at the times

    tn = nπ / (ω0 √(1 − δ²)),   n = 1, 2, ...          ... (4.4)

and the value at the first maximum satisfies

    y(t1) = K (1 + M)          ... (4.5)

where the overshoot M is given by

    M = exp(−πδ / √(1 − δ²))          ... (4.6)

Note: The first maximum occurs at n = 1, the first minimum occurs at n = 2, the second maximum occurs at n = 3, and so on.
From the step response shown in Figure 4.6, the first peak time t1 and the first peak overshoot M can be determined using (4.5). Then δ follows from (4.6):

    δ = −log M / √(π² + log² M)          ... (4.7)

Then ω0 can be determined from (4.4) at t1:

    ω0 = π / (t1 √(1 − δ²))   (since n = 1)          ... (4.8)

From (4.7) and (4.8) the parameters δ and ω0 can be determined.
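Equations (4.4)-(4.8) can be applied directly to sampled step-response data. The sketch below uses a hypothetical underdamped process (K = 1.5, ω0 = 2 rad/s, δ = 0.3) simulated with SciPy, then recovers δ and ω0 from the first peak alone.

```python
import numpy as np
from scipy.signal import TransferFunction, step

# Hypothetical underdamped second-order process: K=1.5, w0=2, delta=0.3.
K, w0, delta = 1.5, 2.0, 0.3
sys = TransferFunction([K*w0**2], [1.0, 2*delta*w0, w0**2])
t, y = step(sys, T=np.linspace(0, 20, 2001))

K_hat = y[-1]                        # steady-state gain: final value
i1 = np.argmax(y)                    # first (largest) maximum
t1 = t[i1]
M = (y[i1] - K_hat)/K_hat            # overshoot, from eq. (4.5)

lnM = np.log(M)
delta_hat = -lnM/np.sqrt(np.pi**2 + lnM**2)        # eq. (4.7)
w0_hat = np.pi/(t1*np.sqrt(1 - delta_hat**2))      # eq. (4.8), n = 1
print(round(K_hat, 2), round(delta_hat, 2), round(w0_hat, 2))
```

The recovered damping ratio and natural frequency match the assumed values; with noisy data one would average over several peaks using (4.4) with n = 2, 3, ...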

4.3 FREQUENCY ANALYSIS

For frequency analysis, it is convenient to use the following transfer function model of a system:

    Y(s) = G(s) U(s)          ... (4.9)

Where,
    Y(s) = Laplace transform of the output signal y(t)
    U(s) = Laplace transform of the input signal u(t)
    G(s) = Transfer function of the system

Apply the following sinusoidal input u(t) to the system described in (4.9), as shown in Figure 4.7:

    u(t) = a sin(ωt)          ... (4.10)

Where,
    a = Amplitude of the sinusoidal input u(t)
    ω = Frequency of the sinusoidal input u(t) in rad/sec

[Block diagram: sinusoidal input u(t) enters G(s); sinusoidal output y(t)]
Fig. 4.7: Sinusoidal input to G(s)

If the system G(s) is asymptotically stable, then the output y(t) is also a sinusoidal signal:

    y(t) = b sin(ωt + φ)          ... (4.11)

Where,
    b = Amplitude of the sinusoidal output y(t)
    φ = Phase difference between the input u(t) and the output y(t) (shown in Figure 4.8)

[Waveforms: input u(t) and output y(t) sinusoids of the same frequency, differing in amplitude and phase]
Fig. 4.8: Input and output waveforms of G(s)

From (4.10) and (4.11), we can write

    b = a |G(iω)|          ... (4.12a)
    φ = arg G(iω)          ... (4.12b)

This can be proved as follows. Assume the system is initially at rest. Then the system G(s) can be represented using a weighting function h(τ) as

    y(t) = ∫₀^t h(τ) u(t − τ) dτ          ... (4.13)

where h(τ) is the function whose Laplace transform equals G(s):

    G(s) = ∫₀^∞ h(τ) e^(−sτ) dτ          ... (4.14)

Since

    sin ωt = (e^(iωt) − e^(−iωt)) / 2i          ... (4.15)

equations (4.10), (4.13), (4.14) and (4.15) give (for large t)

    y(t) = ∫₀^t h(τ) u(t − τ) dτ
         = ∫₀^t h(τ) a sin(ω(t − τ)) dτ
         = ∫₀^t h(τ) a (e^(iω(t−τ)) − e^(−iω(t−τ))) / 2i dτ
         = (a/2i) [e^(iωt) ∫₀^t h(τ) e^(−iωτ) dτ − e^(−iωt) ∫₀^t h(τ) e^(iωτ) dτ]
         → (a/2i) [e^(iωt) G(iω) − e^(−iωt) G(−iω)]

Since we can represent

    G(iω) = r e^(iθ)

where r = |G(iω)| is the magnitude of G(iω) and θ = arg G(iω) is its argument, and since |G(iω)| = |G(−iω)| with G(−iω) = r e^(−iθ),

    y(t) = (a r / 2i) (e^(i(ωt+θ)) − e^(−i(ωt+θ)))
         = a |G(iω)| sin(ωt + arg G(iω))
         = b sin(ωt + φ)          ... (4.16)

From the above equation, (4.11) and (4.12) are proved.
By measuring the amplitudes a and b as well as the phase difference φ, one can draw a Bode plot (or Nyquist or equivalent plot) for different ω values. From the Bode plot, one can easily estimate the transfer function model G(s) of the system.
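One point of such a Bode plot can be sketched numerically. The example below is hypothetical: it assumes the plant G(s) = 1/(s + 1) probed at ω = 2 rad/s, forms the steady-state output from the known G(iω) (in a real experiment this would simply be the measured y(t)), and then extracts b and φ by correlating the output with sin(ωt) and cos(ωt) over whole periods.

```python
import numpy as np

# Hypothetical first-order system G(s) = 1/(s + 1), probed at w = 2 rad/s.
w, a = 2.0, 1.0
t = np.linspace(0, 50, 50001)
u = a*np.sin(w*t)

# Steady-state output b*sin(w*t + phi), computed here from the known G(iw);
# in an experiment this would be the measured y(t).
G = 1.0/(1j*w + 1.0)
y = a*np.abs(G)*np.sin(w*t + np.angle(G))

# Correlate y with sin(wt) and cos(wt) over whole periods to extract b and phi.
n_per = int(round(2*np.pi/w/(t[1] - t[0])))   # samples per period
M = 15*n_per                                  # use 15 whole periods
I = 2*np.mean(y[:M]*np.sin(w*t[:M]))          # in-phase component  b*cos(phi)
Q = 2*np.mean(y[:M]*np.cos(w*t[:M]))          # quadrature component b*sin(phi)
b_hat, phi_hat = np.hypot(I, Q), np.arctan2(Q, I)

print(round(b_hat/a, 3), round(np.degrees(phi_hat), 1))
```

Repeating this for a grid of ω values gives the measured Bode plot from which G(s) is estimated.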

4.4 CORRELATION ANALYSIS

The form of model used in correlation analysis is

    y(t) = Σ_{K=0}^{∞} h(K) u(t − K) + v(t)          ... (4.17)

Where,
    y(t) = Output signal
    u(t) = Input signal
    h(K) = Weighting sequence
    v(t) = Disturbance term

[Block diagram: input u(t) enters the block Σ_{K=0}^{∞} h(K) u(t − K); the disturbance v(t) is added to its output to form y(t)]
Fig. 4.9: System used in correlation analysis

Assume that the input u(t) is a stationary stochastic process (white noise) which is independent of the disturbance v(t). Then the following relation holds for the cross covariance function:

    r_yu(τ) = Σ_{K=0}^{∞} h(K) r_u(τ − K)          ... (4.18)

Where,
    r_yu(τ) = Cross covariance function between output y(t) and input u(t)
            = E y(t + τ) u(t)
    r_u(τ)  = Covariance function of the input u(t)
            = E u(t + τ) u(t)
Note: E denotes the expected value (mean value).
Conduct an experiment and collect input u(t) and output y(t) data. The covariance functions in (4.18) can be estimated from the input and output data as

    r̂_yu(τ) = (1/N) Σ_{t=1−min(τ,0)}^{N−max(τ,0)} y(t + τ) u(t),   τ = 0, ±1, ±2, ...          ... (4.19)

    r̂_u(τ)  = (1/N) Σ_{t=1}^{N−τ} u(t + τ) u(t),   τ = 0, 1, 2, ...

Where,
    N        = Number of experimental data
    r̂_yu(τ) = Estimated cross covariance function between y(t) and u(t) from the input and output data
    r̂_u(τ)  = Estimated covariance function of u(t) from the input data

Then an estimate {ĥ(K)} of the weighting function {h(K)} can be determined by solving the following equation:

    r̂_yu(τ) = Σ_{K=0}^{∞} ĥ(K) r̂_u(τ − K)          ... (4.20)

The above equation represents a linear system of infinite dimension:

    [ r̂_u(0)  ...  r̂_u(∞) ] [ ĥ(0) ]   [ r̂_yu(0) ]
    [ r̂_u(1)  ...          ] [  ...  ] = [   ...    ]          ... (4.21)
    [ r̂_u(∞)  ...  r̂_u(0) ] [ ĥ(∞) ]   [ r̂_yu(∞) ]

Solving the above infinite dimensional equation is very difficult. The problem can be simplified by using white noise (mean value = 0, variance σ² = 1) as the input.

For white noise,

    r_u(τ) = 0 for τ = 1, 2, ...
    r_u(τ) = a constant (non-zero) value for τ = 0
    ∴ r_u(0) = constant value

Since the input is white noise, equation (4.20) reduces to

    ĥ(K) = r̂_yu(K) / r̂_u(0),   K = 0, 1, ...          ... (4.22)

From the above equation, the weighting function {h(K)}, i.e., h(0), h(1), ..., can be easily estimated.
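A minimal numerical sketch of (4.19) and (4.22), using a hypothetical process whose true weighting sequence is (1.0, 0.6, 0.3, 0.1) and a white-noise input:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20000

# White-noise input (mean 0, variance 1), as the simplification (4.22) requires.
u = rng.standard_normal(N)

# Hypothetical process with weighting sequence h = (1.0, 0.6, 0.3, 0.1),
# plus a disturbance v(t).
h_true = np.array([1.0, 0.6, 0.3, 0.1])
y = np.convolve(u, h_true)[:N] + 0.1*rng.standard_normal(N)

def cross_cov(y, u, tau):
    """Estimate r_yu(tau) = E y(t+tau) u(t), as in eq. (4.19)."""
    return np.mean(y[tau:]*u[:N - tau])

ru0 = np.mean(u*u)                  # r_u(0); the other lags vanish for white noise
h_hat = np.array([cross_cov(y, u, k)/ru0 for k in range(4)])   # eq. (4.22)
print(np.round(h_hat, 2))
```

The estimated sequence approaches the true one as N grows, since the covariance estimates in (4.19) are consistent.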

4.5 SPECTRAL ANALYSIS

The form of model used in spectral analysis is

    y(t) = Σ_{K=0}^{∞} h(K) u(t − K) + v(t)          ... (4.23)

Where,
    y(t) = Output signal
    u(t) = Input signal
    h(K) = Weighting sequence
    v(t) = Disturbance term
The system described in (4.23) is shown in Figure 4.9.
Taking discrete Fourier transforms for the system described in (4.23), the following relation for the spectral densities can be derived from the cross covariance function described in (4.18):

    φ_yu(ω) = H(e^(iω)) φ_u(ω)          ... (4.24)

Where,
    φ_yu(ω)   = Cross spectral density between the input u(t) and the output y(t)
    φ_u(ω)    = Spectral density of the input u(t)
    H(e^(iω)) = Transfer function
              = Σ_{K=0}^{∞} h(K) e^(−iKω)          ... (4.25)

Now the transfer function H(e^(iω)) can be estimated from (4.24) as

    Ĥ(e^(iω)) = φ̂_yu(ω) / φ̂_u(ω)          ... (4.26)

Conduct an experiment and collect N input u(t) and output y(t) data. Then the cross spectral density φ_yu(ω) and the spectral density φ_u(ω) can be estimated from the experimental data as

    φ̂_yu(ω) = (1/2π) Σ_{τ=−N}^{N} r̂_yu(τ) e^(−iτω)          ... (4.27)

    φ̂_u(ω)  = (1/2π) Σ_{τ=−N}^{N} r̂_u(τ) e^(−iτω)          ... (4.28)

Using (4.19), equation (4.27) can be written as

    φ̂_yu(ω) = (1/2πN) Σ_{τ=−N}^{N} Σ_{t=1−min(τ,0)}^{N−max(τ,0)} y(t + τ) u(t) e^(−iτω)          ... (4.29)

Next make the substitution s = t + τ. Figure 4.10 illustrates how to derive the limits of the new summation index.

Fig. 4.10: Change of summation variables (summation is over the shaded area)

Since e^(−iτω) = e^(−isω) e^(itω), we get

    φ̂_yu(ω) = (1/2πN) Y_N(ω) U̅_N(ω)          ... (4.30)

Where,
    Y_N(ω)  = Σ_{s=1}^{N} y(s) e^(−isω) = Discrete Fourier transform of y(t)
    U_N(ω)  = Σ_{t=1}^{N} u(t) e^(−itω) = Discrete Fourier transform of u(t)
    U̅_N(ω) = Complex conjugate of U_N(ω)

For ω = 0, 2π/N, 4π/N, ..., Y_N(ω) and U_N(ω) can be computed efficiently using Fast Fourier Transform (FFT) algorithms. Then using (4.30), we can estimate the cross spectral density φ̂_yu(ω).

Similarly,

    φ̂_u(ω) = (1/2πN) Σ_{s=1}^{N} Σ_{t=1}^{N} u(s) u(t) e^(−isω) e^(itω)
            = (1/2πN) |U_N(ω)|²          ... (4.31)

By computing U_N(ω) using an FFT algorithm, we can estimate the spectral density φ̂_u(ω).

Then we can estimate the transfer function

    Ĥ(e^(iω)) = φ̂_yu(ω) / φ̂_u(ω)          ... (4.32)

This transfer function is called the empirical transfer function estimate (ETFE).
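Equations (4.30)-(4.32) translate almost directly into FFT calls. The sketch below is illustrative: it assumes a hypothetical process y(t) = u(t) + 0.5 u(t−1), with the shift taken circularly so that the DFT relation Y_N = H·U_N holds exactly and the ETFE can be checked against the known answer.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4096
u = rng.standard_normal(N)

# Hypothetical process y(t) = u(t) + 0.5*u(t-1); the shift is circular
# so that the DFT relation is exact for this noise-free illustration.
y = u + 0.5*np.roll(u, 1)

Y_N = np.fft.fft(y)          # discrete Fourier transforms, as in (4.30)
U_N = np.fft.fft(u)

# (4.30)-(4.32): the 1/(2*pi*N) factors cancel in the ratio.
H_etfe = (Y_N*np.conj(U_N))/(U_N*np.conj(U_N))

w_grid = 2*np.pi*np.arange(N)/N
H_true = 1.0 + 0.5*np.exp(-1j*w_grid)        # true H(e^{iw}) from (4.25)
print(float(np.max(np.abs(H_etfe - H_true))))
```

With a disturbance present, the raw ETFE is noisy at each frequency and is usually smoothed (or averaged over data segments) before use.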


TWO MARKS QUESTIONS AND ANSWERS
1. What is a non parametric method for system identification?
Non parametric identification methods are characterised by the property that the resulting models are curves or functions which are not necessarily parametrised by a finite dimensional parameter vector. The four different non parametric methods for system identification are
1. Transient Analysis
2. Frequency Analysis
3. Correlation Analysis
4. Spectral Analysis
2. What is the transient analysis method for system identification?
In this method the input is taken as a step or impulse and the recorded output (example: transient response of a FODT system to a unit step input) helps to identify a model.

[Transient response of a FODT system to a unit step input: zero until the delay td, then a first-order rise to the final value K]

The above transient response (recorded data) helps to find out the FODT system parameters K, τ and td.
3. What is the frequency analysis method for system identification?
In this method the input to the system (to be identified) is sinusoidal. For a linear system the output will also be sinusoidal at steady state. The change in amplitude and phase will give the frequency response (example: Bode plot). From the frequency response, the model will be estimated.
4. What is the correlation analysis method for system identification?
The model used in correlation analysis is

    y(t) = Σ_{K=0}^{∞} h(K) u(t − K) + v(t)

Where,
    y(t) = Output signal
    u(t) = Input signal
    h(K) = Weighting sequence
    v(t) = Disturbance term
The input u(t) is a white noise signal; a normalised cross-covariance function between output and input will provide an estimate of the weighting function h(K).
5. What is the spectral analysis method for system identification?
The model used in spectral analysis is

    y(t) = Σ_{K=0}^{∞} h(K) u(t − K) + v(t)

Where,
    y(t) = Output signal
    u(t) = Input signal
    h(K) = Weighting sequence
    v(t) = Disturbance term
The model can be estimated for arbitrary inputs by dividing the cross-spectrum (between output and input) by the input spectrum.
CHAPTER 5

PARAMETRIC METHODS OF SYSTEM IDENTIFICATION

5.1 INTRODUCTION
A parametric method can be characterised as a mapping from the recorded data to the estimated parameter vector. The estimated parameters do not have any physical insight into the process. The various parametric methods of system identification are
1. Least squares (LS) estimate
2. Prediction error method (PEM)
3. Instrumental variable (IV) method

5.2 LEAST SQUARES ESTIMATION

The method of least squares is about estimating parameters by minimizing the squared error between observed data and their expected values.
Linear regression is the simplest type of parametric model. This model structure can be written as

    y(t) = φᵀ(t) θ          ... (5.1)

Where,
    y(t) = Measurable quantity
    φ(t) = n-vector of known quantities
         = [−y(t − 1), −y(t − 2), ..., −y(t − na), u(t − 1), ..., u(t − nb)]ᵀ
    θ    = n-vector of unknown parameters
The following two examples show how a model can be represented in linear regression model form.

Example 5.1
Consider the following first order linear discrete model:

    y(t) + a y(t − 1) = b u(t − 1)          ... (5.2)

The model represented in (5.2) can be written in linear regression model form as follows:

    y(t) = −a y(t − 1) + b u(t − 1)
         = [−y(t − 1)  u(t − 1)] [a  b]ᵀ
         = φᵀ(t) θ          ... (5.3)

Where,
    φ(t) = [−y(t − 1)  u(t − 1)]ᵀ
    θ    = [a  b]ᵀ
The elements of φ(t) are often called regression variables or regressors, while y(t) is called the regressed variable. θ is called the parameter vector. The variable t takes integer values.

Example 5.2
Consider a truncated weighting function model:

    y(t) = h0 u(t) + h1 u(t − 1) + ... + h_{M−1} u(t − M + 1)

The input signals u(t), ..., u(t − M + 1) are recorded during the experiment. Hence the regression vector
    φ(t) = (u(t)  u(t − 1)  ...  u(t − M + 1))ᵀ is an M-vector of known quantities, and the parameter vector
    θ = (h0  h1  ...  h_{M−1})ᵀ is an M-vector of unknown parameters to be estimated.

The problem is to find an estimate θ̂ of the parameter vector θ in (5.1) from the experimental measurements Y(1), φ(1), Y(2), φ(2), ..., Y(N), φ(N). Here N represents the number of experimental data and n represents the number of known quantities in φ(t), which equals the number of unknown parameters in θ.
Given the experimental measurements, a system of linear equations is obtained as

    Y(1) = φᵀ(1) θ
    Y(2) = φᵀ(2) θ
    ...
    Y(N) = φᵀ(N) θ

This can be written in matrix notation as

    Y = Φθ          ... (5.4)

Where,
    Y = [Y(1) ... Y(N)]ᵀ, an (N × 1) vector          ... (5.5a)
    Φ = the matrix with rows φᵀ(1), ..., φᵀ(N), an (N × n) matrix          ... (5.5b)

One way to find θ from (5.4) would of course be to choose the number of measurements N equal to n. Then Φ becomes a square matrix. If this matrix is non singular, the linear system of equations (5.4) could easily be solved for θ. In practice, however, noise, disturbances and model misfit are good reasons for using a number of experimental data N greater than n, with the possibility of getting an improved estimate. When N > n, an exact solution of the linear system of equations (5.4) does in general not exist.

To find an estimate θ̂, introduce the equation errors

    ε(t) = Y(t) − φᵀ(t) θ          ... (5.6)
           (observed value − expected value)

and stack these in a vector ε defined as

    ε = [ε(1) ... ε(N)]ᵀ

In the statistical literature the equation errors are often called residuals.
The least squares estimate of θ is defined as the vector θ̂ that minimizes the loss function

    V(θ) = (1/2) Σ_{t=1}^{N} ε²(t)          ... (5.7)

Note:
The other forms of the loss function are

    V(θ) = (1/2) εᵀε          ... (5.8a)
    V(θ) = (1/2) ‖ε‖²          ... (5.8b)

where ‖·‖ denotes the Euclidean vector norm.

The estimate θ̂ is obtained from the experimental measurements Y(1), φ(1), Y(2), φ(2), ..., Y(N), φ(N) by minimizing the loss function V(θ) in (5.7) and (5.6). The solution to this optimization problem is

    θ̂ = (ΦᵀΦ)⁻¹ ΦᵀY          ... (5.9)

For this solution, the minimum value of V(θ) is

    V(θ̂) = (1/2) [YᵀY − YᵀΦ(ΦᵀΦ)⁻¹ΦᵀY]          ... (5.10)

Note:
1. The matrix ΦᵀΦ is positive definite.
2. The form (5.9) of the least squares estimate can be rewritten in the equivalent form

    θ̂ = [Σ_{t=1}^{N} φ(t)φᵀ(t)]⁻¹ [Σ_{t=1}^{N} φ(t)Y(t)]          ... (5.11)

In many cases φ(t) is known as a function of t. Then (5.11) might be easier to implement than (5.9), since the matrix Φ of large dimension is not needed in (5.11). Also, the form (5.11) is the starting point in deriving several recursive estimates.
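The estimate (5.9) can be sketched numerically using the model of Example 5.1. The data below are synthetic, generated from assumed parameters a = −0.8 and b = 0.5.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200

# Data from the first-order model of Example 5.1:
# y(t) + a*y(t-1) = b*u(t-1) + noise, with assumed a = -0.8, b = 0.5.
a, b = -0.8, 0.5
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a*y[t-1] + b*u[t-1] + 0.05*rng.standard_normal()

# Regression form (5.3): phi(t) = [-y(t-1), u(t-1)]^T, theta = [a, b]^T.
Phi = np.column_stack([-y[:-1], u[:-1]])     # rows phi^T(t), t = 1..N-1
Y = y[1:]

# LS estimate (5.9): theta = (Phi^T Phi)^{-1} Phi^T Y.
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
print(np.round(theta_hat, 2))
```

Solving the normal equations with `solve` rather than forming the inverse explicitly is the numerically preferable way to evaluate (5.9).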

5.3 RECURSIVE IDENTIFICATION METHOD

In recursive (also called on-line) identification methods, the parameter estimates are computed recursively in time. This means that if there is an estimate θ̂(t − 1) based on data up to time (t − 1), then θ̂(t) is computed by some 'simple modification' of θ̂(t − 1).
The counterparts to on-line methods are the so-called off-line or batch methods, in which all the recorded data are used simultaneously to find the parameter estimates.
Recursive identification methods have the following general features:
• They are a central part of adaptive systems (used, for example, for control or signal processing) where the action is based on the most recent model.
• Their requirement on primary memory is quite small compared to off-line identification methods, which require a large memory to store the entire data set.
• They can be easily modified into real-time algorithms, aimed at tracking time-varying parameters.
• They can be the first step in a fault detection algorithm, which is used to find out whether the system has changed significantly.
[Block diagram: the process, subject to disturbances, is controlled by a regulator; a recursive identifier supplies estimated parameters to the regulator]
Fig. 5.1: A general scheme for adaptive control

Most adaptive systems, for example the adaptive control system shown in Figure 5.1, are based (explicitly or implicitly) on recursive identification.
A current estimated model of the process is then available at all times. This time varying model is used to determine the parameters of the (also time-varying) regulator (also called the controller).
In this way the regulator will depend on the previous behaviour of the process (through the information flow: process → model → regulator).
If an appropriate principle is used to design the regulator, then the regulator should adapt to the changing characteristics of the process.
The various recursive identification methods are
a) Recursive least squares method
b) Real time identification method
c) Recursive instrumental variable method
d) Recursive prediction error method
5.4 RECURSIVE LEAST SQUARES ESTIMATION
The linear time-invariant system can be represented as

    A(q⁻¹) y(t) = B(q⁻¹) u(t) + ε(t)          ... (5.12)

Where,
    A(q⁻¹) = 1 + a1 q⁻¹ + ... + a_na q^(−na)
    B(q⁻¹) = b1 q⁻¹ + ... + b_nb q^(−nb)
    ε(t)   = Equation error
This model can be expressed in regression model form as

    y(t) = φᵀ(t) θ + ε(t)          ... (5.13)

Where,
    φ(t) = [−y(t − 1), ..., −y(t − na), u(t − 1), ..., u(t − nb)]ᵀ
    θ    = [a1 a2 ... a_na  b1 b2 ... b_nb]ᵀ

Then the least squares parameter estimate is given by

    θ̂(t) = [Σ_{s=1}^{t} φ(s)φᵀ(s)]⁻¹ [Σ_{s=1}^{t} φ(s)y(s)]          ... (5.14)

The argument t has been used to stress the dependence of θ̂ on time. The estimate (5.14) can be computed in a recursive fashion.
Introduce the notation

    P(t) = [Σ_{s=1}^{t} φ(s)φᵀ(s)]⁻¹          ... (5.15)

Then

    P⁻¹(t) = Σ_{s=1}^{t} φ(s)φᵀ(s)
           = Σ_{s=1}^{t−1} φ(s)φᵀ(s) + φ(t)φᵀ(t)
    P⁻¹(t) = P⁻¹(t − 1) + φ(t)φᵀ(t)
    ∴ P⁻¹(t − 1) = P⁻¹(t) − φ(t)φᵀ(t)          ... (5.16)
Then using (5.15), (5.14) can be written as

    θ̂(t) = P(t) [Σ_{s=1}^{t} φ(s)y(s)]          ... (5.17)

Note:
If we replace t by (t − 1) in (5.17), we get

    θ̂(t − 1) = P(t − 1) [Σ_{s=1}^{t−1} φ(s)y(s)]
    P⁻¹(t − 1) θ̂(t − 1) = Σ_{s=1}^{t−1} φ(s)y(s)          ... (5.18)

Equation (5.17) can be rewritten as

    θ̂(t) = P(t) [Σ_{s=1}^{t−1} φ(s)y(s) + φ(t)y(t)]          ... (5.19)

By substituting (5.18) in (5.19), and then substituting (5.16), we get

    θ̂(t) = P(t) [{P⁻¹(t) − φ(t)φᵀ(t)} θ̂(t − 1) + φ(t)y(t)]
          = P(t) [P⁻¹(t) θ̂(t − 1) − φ(t)φᵀ(t) θ̂(t − 1) + φ(t)y(t)]
          = θ̂(t − 1) − P(t)φ(t)φᵀ(t) θ̂(t − 1) + P(t)φ(t)y(t)
          = θ̂(t − 1) + P(t)φ(t) [y(t) − φᵀ(t) θ̂(t − 1)]          ... (5.20)

Thus (5.20) can be written as

    θ̂(t) = θ̂(t − 1) + K(t) ε(t)          ... (5.21a)
    K(t) = P(t) φ(t)          ... (5.21b)
    ε(t) = y(t) − φᵀ(t) θ̂(t − 1)          ... (5.21c)

Here the term ε(t) should be interpreted as a prediction error. It is the difference between the measured output y(t) and the one-step-ahead prediction ŷ(t|t − 1; θ̂(t − 1)) = φᵀ(t) θ̂(t − 1) of y(t) made at time t − 1 based on the model corresponding to the estimate θ̂(t − 1). If ε(t) is small, the estimate θ̂(t − 1) is 'good' and should not be modified very much. The vector K(t) in (5.21b) should be interpreted as a weighting or gain factor showing how much the value of ε(t) will modify the different elements of the parameter vector.
To complete the algorithm, (5.16) must be used to compute P(t), which is needed in (5.21b). However, the use of (5.16) needs a matrix inversion at each time step. This would be a time consuming procedure. Using the matrix inversion lemma, however, (5.16) can be rewritten in updating equation form as

    P(t) = P(t − 1) − P(t − 1)φ(t)φᵀ(t)P(t − 1) / [1 + φᵀ(t)P(t − 1)φ(t)]          ... (5.22)

Note that in (5.22) there is now a scalar division (scalar inversion) instead of a matrix inversion. From (5.21b) and (5.22),

    K(t) = P(t − 1)φ(t) / [1 + φᵀ(t)P(t − 1)φ(t)]          ... (5.23)

The recursive least squares (RLS) algorithm consists of

    θ̂(t) = θ̂(t − 1) + K(t) ε(t)
    ε(t)  = y(t) − φᵀ(t) θ̂(t − 1)
    P(t)  = P(t − 1) − P(t − 1)φ(t)φᵀ(t)P(t − 1) / [1 + φᵀ(t)P(t − 1)φ(t)]
    K(t)  = P(t − 1)φ(t) / [1 + φᵀ(t)P(t − 1)φ(t)]
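The four RLS recursions above can be implemented in a short loop. The sketch below is illustrative, reusing the hypothetical first-order model y(t) = −a y(t−1) + b u(t−1) + e(t) with assumed a = −0.8, b = 0.5, and a large initial P(0) to express low confidence in the initial estimate.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 500

# Hypothetical first-order model: y(t) = -a*y(t-1) + b*u(t-1) + e(t).
a, b = -0.8, 0.5
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a*y[t-1] + b*u[t-1] + 0.05*rng.standard_normal()

# RLS recursions (5.21)-(5.23).
theta = np.zeros(2)
P = 1000.0*np.eye(2)                 # large P(0): low confidence in theta(0)
for t in range(1, N):
    phi = np.array([-y[t-1], u[t-1]])
    eps = y[t] - phi @ theta                   # prediction error (5.21c)
    denom = 1.0 + phi @ P @ phi                # scalar in (5.22)/(5.23)
    K = P @ phi / denom                        # gain (5.23)
    theta = theta + K*eps                      # parameter update (5.21a)
    P = P - np.outer(P @ phi, phi @ P)/denom   # covariance update (5.22)

print(np.round(theta, 2))
```

Note that only φ(t), θ̂(t − 1) and P(t − 1) are needed at each step; no past data are stored, which is exactly the memory advantage of recursive over batch least squares.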
TWO MARKS QUESTIONS AND ANSWERS
1. Define the parametric method of system identification.
A parametric method can be characterised as a mapping from the recorded data to the estimated parameter vector. The estimated parameters do not have any physical insight into the process. The various parametric methods of system identification are
1. Least squares (LS) estimate
2. Prediction error method (PEM)
3. Instrumental variable (IV) method
2. Define the least squares estimation method.
The LS estimate is to find an estimate θ̂ of the parameter vector θ in a regression model y(t) = φᵀ(t) θ from experimental measurements by minimizing the loss function

    V(θ) = (1/2) Σ_{t=1}^{N} (Y(t) − φᵀ(t)θ)²

The LS estimate of the parameter vector θ is

    θ̂ = (ΦᵀΦ)⁻¹ ΦᵀY
3. What are the advantages of the LS method?
• It is a primary tool for process modeling because of its effectiveness and completeness.
• Since the LS method uses data very effectively, good results can be obtained with relatively small data sets.
• It is easy to construct statistical techniques for prediction, calibration and optimization.
• It is simple to use.
4. What are the limitations of the LS method?
The LS method gives consistent parameter estimates only under the following restrictive conditions:
• E φ(t) φᵀ(t) is non singular.
• E φ(t) v(t) = 0.
• The input is persistently exciting of order nb.
• v(t) should be white noise.
5. Differentiate the parametric and non parametric methods of system identification.

    Parametric Method                              Non-parametric Method
    1. Characterised as a mapping from the         1. Characterised by the property that the
       recorded data to the estimated                 resulting models are curves or functions
       parameter vector.                              which are not necessarily parametrized.
    2. Parameters do not have any physical         2. Parameters have physical insight into
       insight into the process.                      the process. Example: τ = time constant.
    3. The different methods are                   3. The different methods are
       • Least squares estimate                       • Transient analysis
       • Prediction error method                      • Frequency analysis
       • Instrumental variable method                 • Correlation analysis
                                                      • Spectral analysis

6. Define the recursive identification method.
In recursive (also called on-line) identification methods, the parameter estimates are computed recursively in time. This means that if there is an estimate θ̂(t − 1) based on data up to time (t − 1), then θ̂(t) is computed by some 'simple modification' of θ̂(t − 1). This method is a central part of adaptive systems.
The various recursive identification methods are
1. Recursive least squares method
2. Real time identification method
3. Recursive instrumental variable method
4. Recursive prediction error method
7. What are the advantages of recursive identification methods?
• They are a central part of adaptive systems.
• They require less primary memory.
• They can be easily modified into real-time algorithms, aimed at tracking time-varying parameters.
• They are used in fault detection algorithms.
8. Write down the recursive least squares (RLS) algorithm.
The RLS algorithm consists of

    θ̂(t) = θ̂(t − 1) + K(t) ε(t)
    ε(t)  = y(t) − φᵀ(t) θ̂(t − 1)
    P(t)  = P(t − 1) − P(t − 1)φ(t)φᵀ(t)P(t − 1) / [1 + φᵀ(t)P(t − 1)φ(t)]
    K(t)  = P(t − 1)φ(t) / [1 + φᵀ(t)P(t − 1)φ(t)]
