

System Identification

Satish Nagarajaiah, Prof., CEVE & MEMS, Rice

July 7, 2009


Outline I

1. Introduction
   Definition
   Objective
2. Classification
   Non-parametric Models
   Parametric Models
3. Least Squares Estimation
4. Recursive Least Squares Estimation
   Derivation
   Statistical Analysis of the RLS Estimator
5. Weighted Least Squares
6. Discrete-Time Kalman Filter
   Features


Outline II

   Derivations
7. State Space Identification
   Weighting Sequence Model
   State-space Observer Model
   Linear Difference Model
   ARX Model
   Pulse Response Model
   Pseudo-Inverse
   Physical Interpretation of SVD
   Approximation Problem
   Basic Equations
   Condition Number
   Eigen Realization Algorithm


Definition

System identification is the process of developing or improving the mathematical representation of a physical system using experimental data. There are three types of identification techniques: modal parameter identification and structural-model parameter identification (primarily used in structural engineering), and control-model identification (primarily used in mechanical and aerospace systems). The primary objective of system identification is to determine the system matrices A, B, C, D from measured/analyzed data, often contaminated by noise. The modal parameters are then computed from the system matrices.


Objective

The main aim of system identification is to determine a mathematical model of a physical/dynamic system from observed data. Six key steps are involved in system identification: (1) develop an approximate analytical model of the structure; (2) establish the levels of structural dynamic response likely to occur, using the analytical model and the characteristics of anticipated excitation sources; (3) determine the instrumentation required to sense the motion with prescribed accuracy and spatial resolution; (4) perform experiments and record data; (5) apply system identification techniques to identify the dynamic characteristics, such as system matrices, modal parameters, and excitation and input/output noise characteristics; and (6) refine/update the analytical model based on the identified results.


Parametric and Non-parametric Models

Parametric Models: Choose the model structure and estimate the model parameters for the best fit.

Non-parametric Models: The model structure is not specified a priori but is instead determined from data. Non-parametric techniques rely on the cross-correlation function (CCF) $R_{yu}$, the auto-correlation function (ACF) $R_{uu}$, and the spectral density functions $S_{yu}$ and $S_{uu}$ (the Fourier transforms of the CCF and ACF) to estimate the transfer function/frequency response function of the model.


Non-parametric Models

Frequency Response Function (FRF):
$$Y(j\omega) = H(j\omega)\,U(j\omega)$$

FRF (non-parametric estimate):
$$H(j\omega) = \frac{S_{yu}(j\omega)}{S_{uu}(j\omega)}$$

Impulse Response Function (IRF):
$$y(t) = \int_0^t h(t-\tau)\,u(\tau)\, d\tau$$

[Note: the IRF and FRF form a Fourier transform pair.]
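A minimal numerical sketch of the non-parametric FRF estimate above, assuming an invented first-order test system and a plain segment-averaged periodogram (the system, segment length, and averaging scheme are illustrative choices, not from the lecture):

```python
import numpy as np

# Simulate y(k) = 0.5*y(k-1) + u(k-1), whose true FRF is
# H(z) = z^{-1} / (1 - 0.5 z^{-1}); DC gain H(1) = 2.
rng = np.random.default_rng(0)
nseg, seg = 200, 64
u = rng.standard_normal(nseg * seg)
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = 0.5 * y[k - 1] + u[k - 1]

# Average the cross-spectrum S_yu = Y U* and auto-spectrum S_uu = |U|^2
# over segments, then take the ratio H = S_yu / S_uu.
S_yu = np.zeros(seg, dtype=complex)
S_uu = np.zeros(seg)
for i in range(nseg):
    U = np.fft.fft(u[i * seg:(i + 1) * seg])
    Y = np.fft.fft(y[i * seg:(i + 1) * seg])
    S_yu += Y * np.conj(U)
    S_uu += np.abs(U) ** 2
H_est = S_yu / S_uu
print(H_est[0].real)  # close to the DC gain 2
```

In practice one would use windowed, overlapped averaging (Welch's method) for the spectral densities; the raw segment average above is just the idea in its simplest form.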


Parametric Models

TF Models (SISO):
$$Y(s) = \frac{b_m s^m + b_{m-1} s^{m-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0}\, U(s)$$

In this model structure, we choose $m$ and $n$ and estimate the parameters $b_0, \cdots, b_m, a_0, \cdots, a_{n-1}$.

Time-domain Models (SISO):
$$\frac{d^n y}{dt^n} + a_{n-1}\frac{d^{n-1} y}{dt^{n-1}} + \cdots + a_1 \frac{dy}{dt} + a_0\, y(t) = b_m \frac{d^m u}{dt^m} + b_{m-1}\frac{d^{m-1} u}{dt^{m-1}} + \cdots + b_1 \frac{du}{dt} + b_0\, u(t)$$


Parametric Models

Discrete Time-domain Models (SISO):
$$y(k) + a_1 y(k-1) + \cdots + a_n y(k-n) = b_1 u(k-1) + \cdots + b_m u(k-m)$$

State Space Models (MIMO):
$$\dot{x}_{n\times 1} = A_{n\times n}\, x_{n\times 1} + B_{n\times m}\, u_{m\times 1}$$
$$y_{r\times 1} = C_{r\times n}\, x_{n\times 1} + D_{r\times m}\, u_{m\times 1}$$

The dimensions $n$, $r$, $m$ are given and the model parameters $A, B, C, D$ are to be estimated.


Parametric Models

Transfer Function Matrix Models (MIMO):
$$Y(s) = \begin{bmatrix} H_{11}(s) & \cdots & H_{1m}(s) \\ \vdots & \ddots & \vdots \\ H_{r1}(s) & \cdots & H_{rm}(s) \end{bmatrix} U(s)$$

which can be written as:
$$Y(s) = H(s)U(s) = \left[ C(sI - A)^{-1}B + D \right] U(s)$$


Parametric Models

System identification methods can be grouped into frequency-domain identification methods and time-domain identification methods. We will focus mainly on discrete time-domain model identification and state-space identification:

1. Discrete Time-domain Models (SISO)
2. State Space Models (MIMO)


Least Squares Estimation

Consider a second-order discrete model of the form,
$$y(k) + a_1 y(k-1) + a_2 y(k-2) = a_3 u(k) + a_4 u(k-1)$$

The objective is to estimate the parameter vector $p^T = [a_1\ a_2\ a_3\ a_4]$ using the vector of input and output measurements. Making the substitution,
$$h^T = [-y(k-1)\ \ -y(k-2)\ \ u(k)\ \ u(k-1)]$$
we can write
$$y(k) = h^T p$$


Least Squares Estimation

Let us say we have $k$ sets of measurements. Then, we can write the above equation in matrix form as,
$$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_k \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1n} \\ h_{21} & & & \vdots \\ \vdots & & \ddots & \vdots \\ h_{k1} & & \cdots & h_{kn} \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_n \end{bmatrix}$$
$$y_i = h_i^T p, \quad i = 1, 2, \cdots, k \qquad (1)$$

In matrix form, we can write,
$$y = H^T p$$


Least Squares Estimation

In least-squares estimation, we minimize the following performance index:
$$J = \left( y - H^T \hat{p} \right)^T \left( y - H^T \hat{p} \right) = y^T y - y^T H^T \hat{p} - \hat{p}^T H y + \hat{p}^T H H^T \hat{p} \qquad (2)$$

Minimizing the performance index in eq. 2 with respect to $\hat{p}$,
$$\frac{\partial J}{\partial \hat{p}} = \frac{\partial}{\partial \hat{p}}\left( y^T y - y^T H^T \hat{p} - \hat{p}^T H y + \hat{p}^T H H^T \hat{p} \right) = -Hy - Hy + 2HH^T \hat{p} = 0$$

which results in the expression for the parameter estimate:
$$\hat{p} = \left( HH^T \right)^{-1} H y \qquad (3)$$
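A small sketch of eq. 3 on simulated data, using the second-order model introduced earlier; the true parameter values and the noise-free setting are assumptions made for illustration:

```python
import numpy as np

# Simulate y(k) + a1*y(k-1) + a2*y(k-2) = a3*u(k) + a4*u(k-1)
# with (made-up) true parameters p = [a1, a2, a3, a4].
rng = np.random.default_rng(1)
a1, a2, a3, a4 = -0.6, 0.2, 1.0, 0.5
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(2, 200):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + a3 * u[k] + a4 * u[k - 1]

# Rows of H^T are h^T = [-y(k-1), -y(k-2), u(k), u(k-1)] for k = 2..199.
HT = np.column_stack([-y[1:199], -y[0:198], u[2:200], u[1:199]])
H = HT.T
yk = y[2:200]
p_hat = np.linalg.inv(H @ H.T) @ H @ yk  # eq. (3): (H H^T)^{-1} H y
print(p_hat)  # recovers [a1, a2, a3, a4] (no noise here)
```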


Derivation

Limitations of Least Squares Estimation

The parameter update law in eq. 3 operates in batch mode: for every $(k+1)$th measurement, the matrix inverse $\left(HH^T\right)^{-1}$ needs to be re-calculated. This is a cumbersome operation and is best avoided.

In a recursive estimator, there is no need to store all the previous data to compute the present estimate. Let us use the following simplified notation:
$$P_k = \left( HH^T \right)^{-1} \quad \text{and} \quad B_k = Hy$$


Derivation

Hence, the parameter update law in eq. 3 can be written as:
$$\hat{p}_k = P_k B_k$$

In the recursive estimator, the matrices $P_k$, $B_k$ are updated as follows:
$$B_{k+1} = B_k + h_{k+1}\, y_{k+1} \qquad (4)$$

In order to update $P_k$, the following update law is used:
$$P_{k+1} = P_k - \frac{P_k\, h_{k+1} h_{k+1}^T\, P_k}{1 + h_{k+1}^T P_k h_{k+1}} \qquad (5)$$


Derivation

Note that the update for the matrix $P_{k+1}$ does not involve matrix inversion.

The updates for $P_k$, $B_k$ can then be used to update the parameter vector as follows:
$$\hat{p}_{k+1} = P_{k+1} B_{k+1}, \qquad \hat{p}_k = P_k B_k \qquad (6)$$

Combining these equations,
$$\hat{p}_{k+1} - \hat{p}_k = P_{k+1} B_{k+1} - P_k B_k$$

Substituting eqs. 4 and 5 in the above equation, we get:
$$\hat{p}_{k+1} = \hat{p}_k + P_k h_{k+1} \left( 1 + h_{k+1}^T P_k h_{k+1} \right)^{-1} \left( y_{k+1} - h_{k+1}^T \hat{p}_k \right) \qquad (7)$$
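The recursive update in eqs. 4-7 can be sketched as follows, with the same assumed model and parameters as the batch example; the large initial $P$ is a common initialization convention, not something prescribed by the derivation:

```python
import numpy as np

rng = np.random.default_rng(2)
a1, a2, a3, a4 = -0.6, 0.2, 1.0, 0.5  # made-up true parameters
u = rng.standard_normal(300)
y = np.zeros(300)
for k in range(2, 300):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + a3 * u[k] + a4 * u[k - 1]

p_hat = np.zeros(4)
P = 1e6 * np.eye(4)  # large initial "covariance": a weak prior on p
for k in range(2, 300):
    h = np.array([-y[k - 1], -y[k - 2], u[k], u[k - 1]])
    denom = 1.0 + h @ P @ h                     # scalar 1 + h^T P h
    P = P - np.outer(P @ h, h @ P) / denom      # eq. (5): no matrix inversion
    p_hat = p_hat + P @ h * (y[k] - h @ p_hat)  # eq. (7), since P_{k+1} h = P_k h / denom
print(p_hat)  # converges to [a1, a2, a3, a4]
```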


Statistical Analysis

Consider the scalar form of the equation again:
$$y_i = h_i^T p, \quad i = 1, 2, \cdots, k$$

In the presence of measurement noise, it becomes,
$$y_i = h_i^T p + n_i, \quad i = 1, 2, \cdots, k$$

with the following assumptions:

1. The average value of the noise is zero, that is, $E(n_i) = 0$, where $E$ is the expectation operator.
2. Noise samples are uncorrelated, that is, $E(n_i n_j) = E(n_i)E(n_j) = 0$ for $i \neq j$.
3. $E(n_i^2) = r$, the covariance of the noise.


Statistical Analysis

Recalling eq. 6:
$$\hat{p}_k = P_k B_k \qquad (8)$$

This can be expanded as:
$$\hat{p}_k = \left( \sum_{i=1}^{k} h_i h_i^T \right)^{-1} \sum_{i=1}^{k} h_i y_i \qquad (9)$$

Taking $E(\cdot)$ on both sides, we get,
$$E(\hat{p}_k) = E(p) = p \qquad (10)$$

This makes the estimator an unbiased estimator; that is, the expected value of the estimate is equal to that of the quantity being estimated.


Statistical Analysis

Now, let us look at the covariance of the error,
$$\text{Cov} = E\left[ \left( \hat{p}_k - p \right) \left( \hat{p}_k - p \right)^T \right] \qquad (11)$$

which upon simplification gives
$$\text{Cov} = P_k\, r \qquad (12)$$

It can be shown that $P_k$ decreases as $k$ increases. Hence, as more measurements become available, the error reduces and the estimate converges to the true value of $p$. This is known as a consistent estimator.


Extension of RLS Method

The scalar formulation can be extended to a MIMO (multi-input multi-output) system.

A weighting matrix is introduced to emphasize the relative importance of one parameter over another.

Consider eq. 1. Extending this to the MIMO case and including measurement noise,
$$y_i = H_i^T p + n_i, \quad i = 1, 2, \cdots, k$$
where $y_i$ is $l \times 1$, $H_i$ is $n \times l$, $p$ is $n \times 1$ and $n_i$ is $l \times 1$.


Weighted Least Squares

The performance index, $J$, is defined by,
$$J = \sum_{i=1}^{k} \left( y_i - H_i^T \hat{p} \right)^T \left( y_i - H_i^T \hat{p} \right)$$

Minimizing $J$ with respect to $\hat{p}$, we get,
$$\hat{p} = \left( \sum_{i=1}^{k} H_i H_i^T \right)^{-1} \sum_{i=1}^{k} H_i y_i$$

The above equation is a batch estimator. The recursive LS estimator can be obtained by proceeding the same way as for the scalar case. Defining,
$$P_k = \left( \sum_{i=1}^{k} H_i H_i^T \right)^{-1}, \qquad B_k = \sum_{i=1}^{k} H_i y_i \qquad (13)$$


Weighted Least Squares

The parameter update rule is given by,
$$\hat{p}_{k+1} = \hat{p}_k + P_k H_{k+1} \left( y_{k+1} - H_{k+1}^T \hat{p}_k \right)$$

Now, if we introduce a weighting matrix, $W$, into the performance index, we get
$$J = \sum_{i=1}^{k} \left( y_i - H_i^T \hat{p} \right)^T W \left( y_i - H_i^T \hat{p} \right) \qquad (14)$$

The minimization of eq. 14 leads to
$$\hat{p} = \left( \sum_{i=1}^{k} H_i W H_i^T \right)^{-1} \sum_{i=1}^{k} H_i W y_i$$


Weighted Least Squares

Once again, defining
$$P_k = \left( \sum_{i=1}^{k} H_i W H_i^T \right)^{-1}, \qquad B_k = \sum_{i=1}^{k} H_i W y_i \qquad (15)$$

the recursive relationships are given by,
$$\hat{p}_{k+1} = \hat{p}_k + P_{k+1} H_{k+1} W \left( y_{k+1} - H_{k+1}^T \hat{p}_k \right) \qquad (16)$$
and
$$P_{k+1} = P_k - P_k H_{k+1} \left( W^{-1} + H_{k+1}^T P_k H_{k+1} \right)^{-1} H_{k+1}^T P_k \qquad (17)$$

Assuming that the noise samples are uncorrelated, i.e.,
$$E\left( n_i n_j^T \right) = \begin{cases} 0 & i \neq j \\ R & i = j \end{cases}$$
it can be shown that choosing $W = R^{-1}$ produces the minimum-covariance estimator. In other words, the estimation error is minimized.
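A quick numerical illustration of the $W = R^{-1}$ choice, with invented dimensions and noise levels (two output channels, one much noisier than the other):

```python
import numpy as np

rng = np.random.default_rng(3)
p_true = np.array([1.0, -2.0])   # n = 2 parameters (made up)
R = np.diag([0.01, 4.0])         # l = 2 output channels, very unequal noise
W = np.linalg.inv(R)             # weighting W = R^{-1}

A_acc = np.zeros((2, 2))         # accumulates sum_i H_i W H_i^T
b_acc = np.zeros(2)              # accumulates sum_i H_i W y_i
for _ in range(500):
    H_i = rng.standard_normal((2, 2))  # n x l regressor matrix
    n_i = rng.multivariate_normal(np.zeros(2), R)
    y_i = H_i.T @ p_true + n_i
    A_acc += H_i @ W @ H_i.T
    b_acc += H_i @ W @ y_i
p_hat = np.linalg.solve(A_acc, b_acc)
print(p_hat)  # close to p_true; the low-noise channel dominates the fit
```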


Discrete-Time Kalman Filter

The Kalman filter is the most widely used state estimation tool for control and identification.

LS, RLS and WLS deal with the estimation of system parameters; the Kalman filter deals with the estimation of the states of a dynamical system.


Discrete-Time Kalman Filter

Consider the linear discrete-time system given by,
$$x_k = A x_{k-1} + G w_{k-1}$$
$$y_k = H^T x_k + n_k$$

Note: the parameter vector $p$ is replaced by $x$, consistent with the terminology we have adopted for representing states.

$w_k$ is an $n \times 1$ process-noise vector with $E(w_k) = 0$ and covariance $Q$
$x_k$ is the $n \times 1$ state vector
$A$ is the state matrix, assumed to be known
$n_k$ is an $l \times 1$ vector of output noise with $E(n_k) = 0$ and covariance $R$
$y_k$ is the $l \times 1$ vector of measurements
$G$ is $n \times n$, $H$ is $n \times l$, and both are assumed to be known

The objective is to estimate the states $x_k$ based on $k$ observations of $y$. A recursive filter is used for this purpose; this recursive filter is called the Kalman filter.


Fundamental difference between WLS for the dynamic and non-dynamic cases

In the non-dynamic case, at time $t_{k-1}$ an estimate $\hat{x}_{k-1}$ is produced and its covariance estimate is updated. These quantities do not change between $t_{k-1}$ and $t_k$ because $x_{k-1} = x_k$.

In the dynamic case, $x_{k-1} \neq x_k$, since the state evolves between the time steps $k-1$ and $k$. That means a prediction is needed of what happens to the state estimates and the covariance estimates between measurements.

Recall the WLS estimator in eqs. 16 and 17:
$$\hat{x}_k = \hat{x}_{k-1} + P_k H_k W \left( y_k - H_k^T \hat{x}_{k-1} \right)$$
$$P_k = P_{k-1} - P_{k-1} H_k \left( W^{-1} + H_k^T P_{k-1} H_k \right)^{-1} H_k^T P_{k-1}$$

In this estimator we cannot simply replace $\hat{x}_{k-1}$ with $\hat{x}_{k-1|k-1}$, as the state is changing between $t_{k-1}$ and $t_k$. The same applies to $P_{k-1}$.


Discrete-Time Kalman Filter

Consider the state estimate equation. If we know the state estimate based on $k-1$ measurements, $\hat{x}_{k-1|k-1}$, and the state matrix $A$, then we can predict the quantity $\hat{x}_{k|k-1}$ using the relationship,
$$\hat{x}_{k|k-1} = A\, \hat{x}_{k-1|k-1}$$

We can write the state estimate equation as,
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + P_{k|k} H R^{-1} \left( y_k - H^T \hat{x}_{k|k-1} \right) \qquad (18)$$

The above equation assumes that the weighting matrix $W = R^{-1}$.

Similarly, it can be shown that the covariance estimate is,
$$P_{k|k} = P_{k|k-1} - P_{k|k-1} H \left( R + H^T P_{k|k-1} H \right)^{-1} H^T P_{k|k-1}$$


Discrete-Time Kalman Filter

Note that the matrix $H$ is constant. The quantity $P_{k|k-1}$ can be calculated as,
$$P_{k|k-1} = E\left[ \left( x_k - \hat{x}_{k|k-1} \right)\left( x_k - \hat{x}_{k|k-1} \right)^T \right] = A P_{k-1|k-1} A^T + GQG^T$$

In summary, the discrete-time Kalman filter consists of the following steps:

1. Prediction: $P_{k|k-1} = A P_{k-1|k-1} A^T + GQG^T$
2. Prediction: $\hat{x}_{k|k-1} = A\, \hat{x}_{k-1|k-1}$
3. Covariance estimate: $P_{k|k} = P_{k|k-1} - P_{k|k-1} H \left( R + H^T P_{k|k-1} H \right)^{-1} H^T P_{k|k-1}$
4. State estimate: $\hat{x}_{k|k} = \hat{x}_{k|k-1} + P_{k|k} H R^{-1} \left( y_k - H^T \hat{x}_{k|k-1} \right)$
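The four steps above can be sketched numerically for an assumed 2-state system (the particular A, G, H, Q, R values below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
th = 0.3  # rotation angle of the assumed lightly damped dynamics
A = 0.99 * np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
G = np.eye(2)
H = np.array([[1.0], [0.0]])   # single measurement: y_k = H^T x_k + n_k
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])

x = np.array([1.0, 0.0])       # true state (unknown to the filter)
x_hat = np.zeros(2)
P = np.eye(2)
err = []
for _ in range(200):
    x = A @ x + G @ rng.multivariate_normal(np.zeros(2), Q)
    y = H.T @ x + rng.multivariate_normal(np.zeros(1), R)
    P_pred = A @ P @ A.T + G @ Q @ G.T                              # step 1
    x_pred = A @ x_hat                                              # step 2
    S = R + H.T @ P_pred @ H
    P = P_pred - P_pred @ H @ np.linalg.inv(S) @ H.T @ P_pred       # step 3
    x_hat = x_pred + P @ H @ np.linalg.inv(R) @ (y - H.T @ x_pred)  # step 4
    err.append(np.linalg.norm(x - x_hat))
print(np.mean(err[-50:]))  # small steady-state estimation error
```

The gain $P_{k|k} H R^{-1}$ in step 4 is algebraically equal to the more familiar form $P_{k|k-1} H (R + H^T P_{k|k-1} H)^{-1}$.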


Discrete-Time Kalman Filter

If an input is present, such that the equations are of the form,
$$x_k = A x_{k-1} + B u_{k-1} + G w_{k-1}$$
$$y_k = H^T x_k + n_k$$

then the state estimate becomes,
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + B u_{k-1} + P_{k|k} H R^{-1} \left( y_k - H^T \hat{x}_{k|k-1} \right)$$

Note: do not confuse the input matrix $B$ with $B_k$!


State Space Identification

The objective is to identify the state matrices $A$, $B$ and $C$. The general state-space description of a dynamical system is given by:
$$\dot{x}(t) = A_c x(t) + B_c u(t)$$
$$y(t) = C x(t) + D u(t) \qquad (19)$$
for a system of order $n$ with $r$ inputs and $q$ outputs.

The discrete representation of the same system is given by:
$$x(k+1) = A x(k) + B u(k)$$
$$y(k) = C x(k) + D u(k) \qquad (20)$$

Note the distinction in the state matrices between the continuous and discrete versions!


Weighting Sequence Model

Representing the output as a weighted sequence of inputs, start from the initial condition $x(0) = 0$:
$$y(0) = C x(0) + D u(0)$$
$$x(1) = A x(0) + B u(0); \quad y(1) = C x(1) + D u(1)$$
$$x(2) = A x(1) + B u(1); \quad y(2) = C x(2) + D u(2)$$

If $x(0)$ is zero, or $k$ is sufficiently large so that $A^k \approx 0$ (a stable system with damping), then
$$y(k) = CB\, u(k-1) + \cdots + CA^{k-1}B\, u(0) + D u(k) \qquad (21)$$
$$y(k) = \sum_{i=1}^{k} CA^{i-1}B\, u(k-i) + D u(k) \qquad (22)$$


Weighting Sequence Model

Eq. 22 is known as the weighting-sequence model; it does not involve any state measurements and depends only on inputs.

The output $y(k)$ is a weighted sum of the input values $u(0), u(1), \cdots, u(k)$.

The weights $CB, CAB, CA^2B, \cdots$ are called Markov parameters.

Markov parameters are invariant to state transformations.

Since the Markov parameters are the pulse responses of the system, they must be unique for a given system.

Note that the input-output description in eq. 22 is valid only under zero initial conditions (system at rest). It is not applicable if transient effects are present in the system.

In this model, there is no need to consider the exact nature of the state equations.
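A quick check of the pulse-response claim above, with an assumed 2-state system: the Markov parameters $D, CB, CAB, \cdots$ are exactly the output produced by the pulse input $u = 1, 0, 0, \cdots$ from rest.

```python
import numpy as np

# Made-up stable 2-state system.
A = np.array([[0.8, 0.2], [0.0, 0.5]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, -1.0]])
D = np.array([[0.2]])

# Markov parameters: D, CB, CAB, CA^2 B, ...
markov = [D[0, 0]]
Ai = np.eye(2)
for _ in range(9):
    markov.append((C @ Ai @ B)[0, 0])
    Ai = Ai @ A

# Pulse response by direct simulation of x(k+1) = A x + B u, y = C x + D u.
x = np.zeros((2, 1))
pulse = []
for k in range(10):
    uk = 1.0 if k == 0 else 0.0
    pulse.append((C @ x + D * uk)[0, 0])
    x = A @ x + B * uk
print(np.allclose(markov, pulse))  # True: Markov parameters = pulse response
```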


State-space Observer Model

If the system is asymptotically stable, the Markov parameters decay and only a finite number of terms in the weighting-sequence model is needed. However, for lightly damped systems, the number of terms can be too large. Under these conditions, the state-space observer model is advantageous.

Consider the state-space model:
$$x(k+1) = A x(k) + B u(k)$$
$$y(k) = C x(k) + D u(k)$$


State-space Observer Model

Add and subtract the term $Gy(k)$ in the state equation:
$$x(k+1) = A x(k) + B u(k) + G y(k) - G y(k)$$
$$y(k) = C x(k) + D u(k)$$
$$x(k+1) = \bar{A} x(k) + \bar{B} v(k)$$
$$y(k) = C x(k) + D u(k)$$
where
$$\bar{A} = A + GC, \qquad \bar{B} = \left[ B + GD \quad -G \right], \qquad v(k) = \begin{bmatrix} u(k) \\ y(k) \end{bmatrix}$$


State-space Observer Model

Continuing from the previous step, the objective is to find $G$ so that $A + GC$ is asymptotically stable.

The weighting-sequence model in terms of the observer Markov parameters (from eq. 22) is:
$$y(k) = \sum_{i=1}^{k} C \bar{A}^{i-1} \bar{B}\, v(k-i) + D u(k) \qquad (23)$$
where the $C \bar{A}^{k-1} \bar{B}$ are known as the observer Markov parameters. If $G$ is chosen appropriately, then $\bar{A}^p = 0$ for finite $p$.


Linear Difference Model

Eq. 23 can be written as (proceeding the same way as for the weighting-sequence description):
$$y(k) = \sum_{i=1}^{k} C \bar{A}^{i-1} (B + GD)\, u(k-i) - \sum_{i=1}^{k} C \bar{A}^{i-1} G\, y(k-i) + D u(k)$$

which can be written as:
$$y(k) + \sum_{i=1}^{k} \bar{Y}_i^{(2)} y(k-i) = \sum_{i=1}^{k} \bar{Y}_i^{(1)} u(k-i) + D u(k) \qquad (24)$$
where
$$\bar{Y}_i^{(1)} = C \bar{A}^{i-1} (B + GD) \quad \text{and} \quad \bar{Y}_i^{(2)} = C \bar{A}^{i-1} G$$

Eq. 24 is commonly referred to as an ARX (AutoRegressive with eXogenous input) model.


Linear Difference Model

The models discussed so far (weighting sequence, ARX, etc.) are related through the system matrices $A$, $B$, $C$ and $D$. If these matrices are known, all the models describing the input-output (IO) relationships can be derived. The system Markov parameters and the observer Markov parameters play an important role in system identification using IO descriptions.

Starting from zero initial conditions, $x(0) = 0$, we get:
$$x(l-1) = \sum_{i=1}^{l-1} A^{i-1} B\, u(l-1-i)$$
$$y(l-1) = \sum_{i=1}^{l-1} C A^{i-1} B\, u(l-1-i) + D u(l-1)$$


Linear Difference Model

which can be written as,
$$\begin{bmatrix} y(0) & y(1) & \cdots & y(l-1) \end{bmatrix} = \begin{bmatrix} D & CB & \cdots & CA^{l-2}B \end{bmatrix} \times \begin{bmatrix} u(0) & u(1) & u(2) & \cdots & u(l-1) \\ & u(0) & u(1) & \cdots & u(l-2) \\ & & u(0) & \cdots & u(l-3) \\ & & & \ddots & \vdots \\ & & & & u(0) \end{bmatrix}$$

In compact form,
$$Y_{q\times l} = P_{q\times rl}\, V_{rl\times l} \qquad (25)$$

Hence,
$$P = Y V^{+} \qquad (26)$$
where $V^{+}$ is called the pseudo-inverse of the matrix $V$. The matrix $V$ becomes square in the case of a single-input system. ARX models can be expressed in this form.


Linear Difference Model: ARX Model

Consider the ARX model given in eq. 24. This can be written in a slightly modified form as:
$$y(k) + \alpha_1 y(k-1) + \cdots + \alpha_p y(k-p) = \beta_0 u(k) + \beta_1 u(k-1) + \cdots + \beta_p u(k-p) \qquad (27)$$
where $p$ indicates the model order. This can be rearranged as:
$$y(k) = -\alpha_1 y(k-1) - \cdots - \alpha_p y(k-p) + \beta_0 u(k) + \beta_1 u(k-1) + \cdots + \beta_p u(k-p) \qquad (28)$$
which means that the output at any step $k$, $y(k)$, can be expressed in terms of the $p$ previous output and input measurements, i.e., $y(k-1), \cdots, y(k-p)$ and $u(k), u(k-1), \cdots, u(k-p)$.


Linear Difference Model: ARX Model

Let us define a vector $v(k)$ as,
$$v(k) = \begin{bmatrix} y(k) \\ u(k) \end{bmatrix}, \quad k = 1, 2, \cdots, l$$
where $l$ is the length of the data. Eq. 28 can be written as,
$$[y_0 \quad y] = \theta\, [V_0 \quad V] \qquad (29)$$
where,
$$y_0 = [y(1)\ \ y(2)\ \cdots\ y(p)]$$
$$y = [y(p+1)\ \ y(p+2)\ \cdots\ y(l)]$$
$$\theta = [\beta_0 \quad (-\alpha_1\ \beta_1) \quad \cdots \quad (-\alpha_{p-1}\ \beta_{p-1}) \quad (-\alpha_p\ \beta_p)]$$


Linear Difference Model: ARX Model

$$V_0 = \begin{bmatrix} u(1) & u(2) & \cdots & u(p) \\ v(0) & v(1) & \cdots & v(p-1) \\ \vdots & \vdots & \ddots & \vdots \\ v(2-p) & v(3-p) & \cdots & v(1) \\ v(1-p) & v(2-p) & \cdots & v(0) \end{bmatrix} \qquad V = \begin{bmatrix} u(p+1) & u(p+2) & \cdots & u(l) \\ v(p) & v(p+1) & \cdots & v(l-1) \\ \vdots & \vdots & \ddots & \vdots \\ v(2) & v(3) & \cdots & v(l-p+1) \\ v(1) & v(2) & \cdots & v(l-p) \end{bmatrix}$$

The parameters can then be solved for as:
$$\theta = [y_0 \quad y]\, [V_0 \quad V]^{+} \qquad (30)$$

If the system does not start from rest, the quantities $y_0$ and $V_0$ are usually unknown, in which case the parameters are calculated as:
$$\theta = y V^{+} \qquad (31)$$

Introduction

Classiﬁcation

Least Squares Estimation

Recursive Least Squares Estimation

Weighted Least Squares

Discrete-Time Kalman Filter

State Space Identiﬁcation

Weighting Sequence Model

State-space Observer Model

Linear Diﬀerence Model

Physical Interpretation of SVD

Eigen Realization Algorithm

Linear Difference Model: Pulse Response Model

Given

$$y(k) = D u(k) + CB\, u(k-1) + CAB\, u(k-2) + \cdots + CA^{p-1}B\, u(k-p) \qquad (32)$$

find $D, CB, CAB, \cdots, CA^{p-1}B$ using

$$[y(k)\;\; y(k+1)\;\; \ldots\;\; y(k+l-1)] = [D\;\; CB\;\; \ldots\;\; CA^{p-1}B] \begin{bmatrix} u(k) & u(k+1) & \cdots & u(k+l-1) \\ u(k-1) & u(k) & \cdots & u(k+l-2) \\ u(k-2) & u(k-1) & \cdots & u(k+l-3) \\ \vdots & \vdots & \ddots & \vdots \\ u(k-p) & u(k-p+1) & \cdots & u(k+l-1-p) \end{bmatrix}$$

In compact form,

$$Y_{q\times l} = P_{q\times r(p+1)}\, V_{r(p+1)\times l} \qquad (33)$$

where q is the number of outputs, r the number of inputs, and l the length of the data. Hence,

$$P = Y V^{+} \qquad (34)$$

In MATLAB, you can compute the pseudo-inverse through the command pinv(V).
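A small numerical sketch of eqs. 33–34, in Python/NumPy as a stand-in for MATLAB's pinv. The two-state system here is hypothetical and chosen nilpotent so that its pulse response is finite and eq. 32 holds exactly with a short p:

```python
import numpy as np

# Hypothetical 2-state SISO system with a finite pulse response
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent: A @ A = 0
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.5]])
# True Markov parameters: D = 0.5, CB = 0, CAB = 1, CA^2B = 0

rng = np.random.default_rng(1)
p, l = 3, 40
u = rng.standard_normal(p + l)            # inputs u(0) ... u(p+l-1)
x = np.zeros((2, 1))
y = np.zeros(p + l)
for k in range(p + l):
    y[k] = (C @ x + D * u[k]).item()
    x = A @ x + B * u[k]

# Rows of V are the input shifted back 0 ... p samples (eq. 33 with r = 1)
V = np.array([u[p - i : p - i + l] for i in range(p + 1)])   # (p+1) x l
Y = y[p : p + l][None, :]                                    # q x l, q = 1
P = Y @ np.linalg.pinv(V)                                    # eq. 34: P = Y V^+
# P recovers [D, CB, CAB, CA^2B] = [0.5, 0.0, 1.0, 0.0]
```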


Linear Difference Model: Pseudo-Inverse

Say $A_{m\times n} X_{n\times 1} = b_{m\times 1} \Rightarrow X = A^{+}b$: m equations in n unknowns.

It has a unique (consistent) solution if Rank[A, b] = Rank(A) = n.

It has an infinite number of solutions (fewer linearly independent equations than unknowns) if Rank[A, b] = Rank(A) < n.

It has no solution (inconsistent) if Rank[A, b] > Rank(A).

Note that [A, b] is an augmented matrix. Rank is the number of linearly independent columns or rows.

Due to the presence of noise, system identification mostly produces a set of inconsistent equations. These can be dealt with using what is known as the Singular Value Decomposition (SVD).
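These cases can be checked numerically. A minimal sketch with a hypothetical 3 × 2 system, where numpy.linalg.matrix_rank computes the ranks and pinv returns the least-squares solution in the inconsistent case:

```python
import numpy as np

# Hypothetical 3x2 system: three equations, two unknowns
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b_consistent   = np.array([1.0, 2.0, 3.0])   # lies in the column space of A
b_inconsistent = np.array([1.0, 2.0, 4.0])   # Rank[A, b] > Rank(A): no exact solution

def ranks(A, b):
    # (rank of the augmented matrix [A, b], rank of A)
    return (np.linalg.matrix_rank(np.column_stack([A, b])),
            np.linalg.matrix_rank(A))

r1 = ranks(A, b_consistent)                  # (2, 2): unique solution
r2 = ranks(A, b_inconsistent)                # (3, 2): inconsistent

X    = np.linalg.pinv(A) @ b_consistent      # exact solution [1, 2]
X_ls = np.linalg.pinv(A) @ b_inconsistent    # least-squares solution
```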


Physical Interpretation of SVD: Approximation Problem

Let $A \in \mathbb{R}^{n\times m}$, where $n \le m$ and rank(A) = n. Then, find a matrix $X \in \mathbb{R}^{n\times m}$ with rank(X) = k < n such that $\|A - X\|_2$ is minimized (i.e., the largest singular value of $A - X$ is minimized). For any X with rank(X) = k, $\|A - X\|_2 \ge \sigma_{k+1}(A)$.

SVD addresses the question of rank and handles non-square matrices automatically.

1. If the system has a unique solution, SVD provides this unique solution.

2. For infinite solutions, SVD provides the solution with minimum norm.

3. When there is no solution, SVD provides a solution which minimizes the error.

Items 2 and 3 are called Least Squares Solutions.
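A sketch of the approximation problem on a hypothetical random matrix: truncating the SVD to rank k attains the bound, so the spectral norm of the residual equals $\sigma_{k+1}$:

```python
import numpy as np

# Hypothetical 3x4 matrix; the rank-k truncated SVD is the best rank-k
# approximation in the spectral norm, with error sigma_{k+1}
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))
U, s, Vt = np.linalg.svd(A)

k = 1
X = U[:, :k] * s[:k] @ Vt[:k, :]     # best rank-1 approximation of A
err = np.linalg.norm(A - X, 2)       # spectral norm of the residual
# err equals s[1], the second singular value of A
```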


Physical Interpretation of SVD: Basic Equations

If A is m × n, then there exist two orthonormal matrices U (m × m) and V (n × n) such that

$$A_{m\times n} = U_{m\times m}\,\Sigma_{m\times n}\,V^{T}_{n\times n} \qquad (35)$$

where Σ is a matrix with the same dimensions as A, but diagonal. The scalar values $\sigma_i$ are the singular values of A, with

$$\sigma_1 \ge \sigma_2 \ge \sigma_3 \ge \cdots \ge \sigma_k > 0 \quad \text{and} \quad \sigma_{k+1} = \sigma_{k+2} = \cdots = 0$$

Example: Let $\sigma = [1,\; 0.3,\; 0.1,\; 0.0001,\; 10^{-12},\; 0,\; 0]$. Then, strong rank = 3, weak rank = 4, very weak rank = 5.
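The strong/weak/very weak ranks in the example are simply counts of singular values above different tolerances. A sketch with illustrative tolerance choices (the thresholds 1e-2, 1e-6, 1e-14 are assumptions for this example, not prescribed values):

```python
import numpy as np

# Singular values from the slide's example
s = np.array([1, 0.3, 0.1, 0.0001, 1e-12, 0, 0])

def rank_at_tol(s, tol):
    # Count singular values strictly above the tolerance
    return int(np.sum(s > tol))

strong    = rank_at_tol(s, 1e-2)    # 3: only clearly nonzero values
weak      = rank_at_tol(s, 1e-6)    # 4: also counts 0.0001
very_weak = rank_at_tol(s, 1e-14)   # 5: also counts 1e-12
```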


The nonzero singular values are unique, but U and V are not. U and V are square matrices. The columns of U are called the left singular vectors and the columns of V are called the right singular vectors of A.

Since U and V are orthonormal matrices, they obey the relationships

$$U^{T}U = I_{m\times m} = U^{-1}U, \qquad V^{T}V = I_{n\times n} = V^{-1}V \qquad (36)$$

From eq. 35, if $A = U\Sigma V^{T}$ then $\Sigma = U^{T}AV$, where

$$\Sigma_{m\times n} = \begin{bmatrix} \Sigma_{k\times k} & 0_{k\times (n-k)} \\ 0_{(m-k)\times k} & 0_{(m-k)\times (n-k)} \end{bmatrix}$$


SVD is closely related to the eigen-solution of the symmetric positive semi-definite matrices $AA^{T}$ and $A^{T}A$:

$$A = U\Sigma V^{T} \Rightarrow A^{T} = V\Sigma^{T}U^{T}$$

Hence, the non-zero singular values of A are the positive square roots of the non-zero eigenvalues of $A^{T}A$ or $AA^{T}$. The columns of U are the eigenvectors corresponding to the eigenvalues of $AA^{T}$, and the columns of V are the eigenvectors corresponding to the eigenvalues of $A^{T}A$. If A consists of complex elements, then the transpose is replaced by the complex-conjugate transpose.

Definitions of condition number and rank are closely related to the singular values.
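This relationship is easy to verify numerically on a hypothetical random matrix:

```python
import numpy as np

# Check that the singular values of A are the positive square roots of the
# eigenvalues of A^T A
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))

s = np.linalg.svd(A, compute_uv=False)    # singular values, descending
eigs = np.linalg.eigvalsh(A.T @ A)[::-1]  # eigenvalues of A^T A, descending
# np.sqrt(eigs) matches s
```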


Physical Interpretation of SVD: Condition Number

Rank: The rank of a matrix is equal to the number of non-zero singular values. This is the most reliable method of rank determination. Typically, a rank tolerance equal to the square of machine precision is chosen, and the singular values above it are counted to determine the rank.

In order to calculate the pseudo-inverse of matrix A, denoted by $A^{+}$, using SVD:

$$A^{+} = V_1 \Sigma_1^{-1} U_1^{T} = V_1\, \mathrm{diag}[\sigma_1^{-1}, \sigma_2^{-1}, \cdots, \sigma_k^{-1}]\, U_1^{T} \qquad (37)$$

where

$$A = U\Sigma V^{T} = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1^{T} \\ V_2^{T} \end{bmatrix}$$

and

$$A = U_1 \Sigma_1 V_1^{T}$$
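Eq. 37 can be sketched directly: build $A^{+}$ from the truncated SVD of a hypothetical rank-deficient matrix and compare against NumPy's built-in pinv:

```python
import numpy as np

# Hypothetical 3x3 matrix of rank 2 (row 3 = row 1 + row 2)
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

U, s, Vt = np.linalg.svd(A)
k = int(np.sum(s > 1e-10 * s[0]))          # numerical rank (here 2)
U1 = U[:, :k]                              # first k left singular vectors
V1 = Vt[:k, :].T                           # first k right singular vectors
S1_inv = np.diag(1.0 / s[:k])              # diag[1/sigma_1, ..., 1/sigma_k]
A_pinv = V1 @ S1_inv @ U1.T                # eq. 37: A^+ = V1 Sigma_1^{-1} U1^T
```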


Eigen Realization Algorithm

Given the pulse-response histories (system Markov parameters), ERA is used to extract the state-space model of the system. Define the Markov parameters as follows:

$$Y_0 = D = \beta_0$$

$$Y_1 = CB = \beta_0^{(1)}$$

$$Y_2 = CAB = \beta_0^{(2)}$$

$$Y_3 = CA^{2}B = \beta_0^{(3)}$$

$$\vdots$$

$$Y_k = CA^{k-1}B = \beta_0^{(k)} \qquad (38)$$


Start with a generalized $\alpha m \times \beta r$ Hankel matrix (m outputs, r inputs, and α, β integers):

$$H(k-1) = \begin{bmatrix} Y_k & Y_{k+1} & \cdots & Y_{k+\beta-1} \\ Y_{k+1} & Y_{k+2} & \cdots & Y_{k+\beta} \\ \vdots & \vdots & \ddots & \vdots \\ Y_{k+\alpha-1} & Y_{k+\alpha} & \cdots & Y_{k+\alpha+\beta-2} \end{bmatrix}$$

For the case when k = 1,

$$H(0) = \begin{bmatrix} Y_1 & Y_2 & \cdots & Y_{\beta} \\ Y_2 & Y_3 & \cdots & Y_{1+\beta} \\ \vdots & \vdots & \ddots & \vdots \\ Y_{\alpha} & Y_{1+\alpha} & \cdots & Y_{\alpha+\beta-1} \end{bmatrix}$$


If α ≥ n and β ≥ n, the matrix H(k−1) is of rank n. Substituting the Markov parameters from eq. 38 into H(k−1), we can factorize the Hankel matrix as

$$H(k-1) = P_{\alpha}\, A^{k-1}\, Q_{\beta} \qquad (39)$$

ERA starts with the SVD of the Hankel matrix,

$$H(0) = R\Sigma S^{T} \qquad (40)$$

where the columns of R and S are orthonormal and

$$\Sigma = \begin{bmatrix} \Sigma_n & 0 \\ 0 & 0 \end{bmatrix}$$

in which the 0's are zero matrices of appropriate dimensions, and $\Sigma_n = \mathrm{diag}[\sigma_1, \sigma_2, \cdots, \sigma_i, \sigma_{i+1}, \cdots, \sigma_n]$ with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_i \ge \cdots \ge \sigma_n \ge 0$.


Let $R_n$ and $S_n$ be the matrices formed by the first n columns of R and S, respectively. Then

$$H(0) = R_n \Sigma_n S_n^{T} = [R_n \Sigma_n^{1/2}][\Sigma_n^{1/2} S_n^{T}] \qquad (41)$$

and the following relationships hold: $R_n^{T}R_n = S_n^{T}S_n = I_n$.

Now, examining eq. 39 for k = 1,

$$H(0) = P_{\alpha} Q_{\beta} \qquad (42)$$

Equating eq. 42 and eq. 41, we get

$$P_{\alpha} = R_n \Sigma_n^{1/2}, \qquad Q_{\beta} = \Sigma_n^{1/2} S_n^{T} \qquad (43)$$

That means B = the first r columns of $Q_{\beta}$, C = the first m rows of $P_{\alpha}$, and $D = Y_0$.


In order to determine the state matrix A, we start with

$$H(1) = \begin{bmatrix} Y_2 & Y_3 & \cdots & Y_{\beta+1} \\ Y_3 & Y_4 & \cdots & Y_{2+\beta} \\ \vdots & \vdots & \ddots & \vdots \\ Y_{\alpha+1} & Y_{2+\alpha} & \cdots & Y_{\alpha+\beta} \end{bmatrix}$$

From eq. 38, we can then see that H(1) can be factorized as

$$H(1) = P_{\alpha} A Q_{\beta} = R_n \Sigma_n^{1/2}\, A\, \Sigma_n^{1/2} S_n^{T} \qquad (44)$$

from which

$$A = \Sigma_n^{-1/2} R_n^{T}\, H(1)\, S_n \Sigma_n^{-1/2} \qquad (45)$$
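Eqs. 38–45 can be exercised end to end on a hypothetical two-state SISO system: generate exact Markov parameters, form H(0) and H(1), and recover a realization. The identified (A, B, C) live in different coordinates than the true matrices, but they reproduce the Markov parameters:

```python
import numpy as np

# Hypothetical 2-state SISO system (controllable and observable)
A_true = np.array([[0.9, 0.2], [0.0, 0.5]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, 0.0]])
D_true = np.array([[0.3]])

def markov(k):
    # Y_k = C A^(k-1) B for k >= 1 (eq. 38)
    return (C_true @ np.linalg.matrix_power(A_true, k - 1) @ B_true).item()

alpha = beta = 4                                      # both >= n = 2
H0 = np.array([[markov(1 + i + j) for j in range(beta)] for i in range(alpha)])
H1 = np.array([[markov(2 + i + j) for j in range(beta)] for i in range(alpha)])

R, s, St = np.linalg.svd(H0)                          # eq. 40: H(0) = R Sigma S^T
n = int(np.sum(s > 1e-8 * s[0]))                      # model order from singular values
Rn, Sn = R[:, :n], St[:n, :].T
Sig_half = np.diag(np.sqrt(s[:n]))
Sig_half_inv = np.diag(1.0 / np.sqrt(s[:n]))

A_id = Sig_half_inv @ Rn.T @ H1 @ Sn @ Sig_half_inv   # eq. 45
P_alpha = Rn @ Sig_half                               # eq. 43
Q_beta = Sig_half @ Sn.T
B_id = Q_beta[:, :1]                                  # first r = 1 columns of Q_beta
C_id = P_alpha[:1, :]                                 # first m = 1 rows of P_alpha
D_id = D_true                                         # D = Y_0
```

With noisy pulse-response data the singular values no longer drop to zero, and the model order is chosen by truncating the small ones.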


Acknowledgements

Dr. Sriram Narasimhan, Assistant Professor, Univ. of Waterloo, and Dharma Teja Reddy Pasala, Graduate Student, Rice Univ., assisted in putting together this presentation. The materials presented in this short course are a condensed version of lecture notes of a course taught at Rice University and at the Univ. of Waterloo.

References

1. Jer-Nan Juang, Applied System Identification, Prentice Hall.
2. Jer-Nan Juang and M. Q. Phan, Identification and Control of Mechanical Systems, Cambridge University Press.
3. DeRusso et al., State Variables for Engineers, Wiley Interscience.
