
Adaptive Control lectures


• Least Squares and Regression Models
• Estimating Parameters in Dynamical Systems
• Examples

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 1/25

Preliminary Comments

Real-time parameter estimation is one of the methods of system identification.

The key elements of system identification are:

• Selection of model structure (linear, nonlinear, linear in parameters, ...)
• Design of experiments (input signal, sampling period, ...)
• Parameter estimation (off-line, on-line, ...)
• Validation mechanism (depends on application)

All these steps should be present in Real-Time Parameter Estimation Algorithms!

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 2/25

Parameter Estimation: Problem Formulation

Suppose that a system (a model of an experiment) is

$$
y(i) = \phi_1(i)\,\theta_1^0 + \phi_2(i)\,\theta_2^0 + \cdots + \phi_n(i)\,\theta_n^0
= \underbrace{\big[\phi_1(i),\,\phi_2(i),\,\ldots,\,\phi_n(i)\big]}_{\phi(i)^T \;\longleftarrow\; 1\times n}\;
\underbrace{\begin{bmatrix}\theta_1^0\\ \vdots\\ \theta_n^0\end{bmatrix}}_{\theta^0 \;\longleftarrow\; n\times 1}
= \phi(i)^T\theta^0
$$

Given the data

$$
\big\{[y(i), \phi(i)]\big\}_{i=1}^{N} = \big\{[y(1), \phi(1)],\;\ldots,\;[y(N), \phi(N)]\big\}
$$

the task is to find (estimate) the n unknown values

$$
\theta^0 = \big[\theta_1^0,\;\theta_2^0,\;\ldots,\;\theta_n^0\big]^T
$$

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 3/25

What kind of solution is expected?

An algorithm solving the estimation problem should satisfy the following.

• Estimates should be computed in a numerically efficient way (an analytical formula is preferable!)
• Computed estimates (the mean value) should be close to the true parameter values (be unbiased), provided that the model is correct.
• Estimates should have small variances, preferably decreasing as N → ∞ (more data → better estimate).
• The algorithm, if possible, should have an intuitively clear motivation.

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 4/25

Least Squares Algorithm:

Consider the function (loss function) to be minimized:

$$
V_N(\theta) = \frac{1}{2}\Big[\big(\underbrace{y(1) - \phi(1)^T\theta}_{\varepsilon(1)}\big)^2 + \cdots + \big(\underbrace{y(N) - \phi(N)^T\theta}_{\varepsilon(N)}\big)^2\Big]
= \frac{1}{2}\big[\varepsilon(1)^2 + \varepsilon(2)^2 + \cdots + \varepsilon(N)^2\big]
= \frac{1}{2}\,E^T E
$$

Here

$$
E = \big[\varepsilon(1),\,\varepsilon(2),\,\ldots,\,\varepsilon(N)\big]^T = Y - \Phi\theta
$$

and

$$
Y = \big[y(1),\,y(2),\,\ldots,\,y(N)\big]^T, \qquad
\Phi = \begin{bmatrix}\phi(1)^T\\ \phi(2)^T\\ \vdots\\ \phi(N)^T\end{bmatrix} \;\longleftarrow\; N\times n
$$

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 5/25

Theorem (Least Squares Formula):

The function V_N(θ) is minimal at θ = θ̂, which satisfies the equation

$$
\Phi^T\Phi\,\hat\theta = \Phi^T Y
$$

If det(ΦᵀΦ) ≠ 0, then

• the minimum of the function V_N(θ) is unique,
• the minimum value is

$$
\frac{1}{2}Y^T Y - \frac{1}{2}Y^T\Phi\big(\Phi^T\Phi\big)^{-1}\Phi^T Y
$$

and is attained at

$$
\hat\theta = \big(\Phi^T\Phi\big)^{-1}\Phi^T Y
$$

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 6/25
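As a numerical sanity check of the theorem, the sketch below (Python with NumPy; the regressors and "true" parameters are invented for illustration) builds Φ and Y from a linear-in-parameters model and verifies that solving the normal equations ΦᵀΦθ̂ = ΦᵀY reproduces both the true parameters and NumPy's own least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example: n = 2 unknown parameters, N = 50 data points.
theta0 = np.array([2.0, -0.5])        # "true" parameters (illustrative)
Phi = rng.normal(size=(50, 2))        # regressor matrix; rows are phi(i)^T
Y = Phi @ theta0                      # noise-free measurements y(i)

# Solve the normal equations  Phi^T Phi theta = Phi^T Y.
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)

# Cross-check against NumPy's built-in least-squares solver.
theta_lstsq, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
```

With noise-free data the estimate matches θ⁰ exactly (up to rounding), which is the deterministic special case of the theorem.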

Proof:

The loss function can be written as

$$
V_N(\theta) = \frac{1}{2}(Y - \Phi\theta)^T(Y - \Phi\theta)
$$

It reaches its minimal value at θ̂ if

$$
\nabla_\theta V_N(\theta) = \Big[\frac{\partial V_N(\theta)}{\partial\theta_1},\;\ldots,\;\frac{\partial V_N(\theta)}{\partial\theta_n}\Big] = 0
\quad\text{with } \theta = \hat\theta
$$

This equation is

$$
\nabla_\theta V_N(\theta) = \frac{1}{2}\cdot 2\cdot \nabla_\theta(-\Phi\theta)^T\,(Y - \Phi\theta) = -\Phi^T(Y - \Phi\theta) = 0
$$

(using the identities ∇_θ(θᵀa) = ∇_θ(aᵀθ) = aᵀ and ∇_θ(θᵀAθ) = θᵀ(A + Aᵀ)). Hence

$$
\Phi^T\Phi\,\hat\theta = \Phi^T Y
$$

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 7/25

Proof (cont'd):

If the matrix ΦᵀΦ is invertible, then we can find the solution θ̂ of ΦᵀΦθ̂ = ΦᵀY:

$$
\hat\theta = \big(\Phi^T\Phi\big)^{-1}\Phi^T Y
$$

Furthermore,

$$
V_N(\theta) = \frac{1}{2}(Y - \Phi\theta)^T(Y - \Phi\theta)
= \frac{1}{2}Y^T Y - Y^T\Phi\theta + \frac{1}{2}\theta^T\Phi^T\Phi\,\theta
$$

and since

$$
Y^T\Phi\,\theta = Y^T\Phi\big(\Phi^T\Phi\big)^{-1}\big(\Phi^T\Phi\big)\theta = \hat\theta^T\big(\Phi^T\Phi\big)\theta,
$$

by completing the square,

$$
V_N(\theta) =
\underbrace{\frac{1}{2}Y^T Y - \frac{1}{2}\hat\theta^T\big(\Phi^T\Phi\big)\hat\theta}_{\text{independent of }\theta}
+ \underbrace{\frac{1}{2}(\theta - \hat\theta)^T\big(\Phi^T\Phi\big)(\theta - \hat\theta)}_{\ge 0,\;\; >0 \text{ for } \theta \neq \hat\theta}
$$

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 8/25

Comments:

The formula can be re-written in terms of y(i), φ(i) as

$$
\hat\theta = \big(\Phi^T\Phi\big)^{-1}\Phi^T Y
= P(N)\Big[\sum_{i=1}^{N}\phi(i)\,y(i)\Big],
\qquad
P(N) = \big(\Phi^T\Phi\big)^{-1} = \Big[\sum_{i=1}^{N}\phi(i)\phi(i)^T\Big]^{-1}
$$

The condition det(ΦᵀΦ) ≠ 0 is called an excitation condition.

To assign different weights to different time instants, consider

$$
V_N(\theta) = \frac{1}{2}\Big[w_1\big(y(1)-\phi(1)^T\theta\big)^2 + \cdots + w_N\big(y(N)-\phi(N)^T\theta\big)^2\Big]
= \frac{1}{2}\big[w_1\varepsilon(1)^2 + w_2\varepsilon(2)^2 + \cdots + w_N\varepsilon(N)^2\big]
= \frac{1}{2}\,E^T W E
$$

where W = diag(w₁, ..., w_N). Then

$$
\Rightarrow\quad \hat\theta = \big(\Phi^T W\Phi\big)^{-1}\Phi^T W Y
$$

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 9/25

Exponential forgetting

The common way to assign the weights is Least Squares with Exponential Forgetting:

$$
V_N(\theta) = \frac{1}{2}\sum_{i=1}^{N}\lambda^{N-i}\big(y(i) - \phi(i)^T\theta\big)^2
$$

with 0 < λ < 1. What is the solution formula?

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 10/25
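The exponential-forgetting criterion is the weighted LS of the previous slide with wᵢ = λ^{N−i}, so its batch solution is θ̂ = (ΦᵀWΦ)⁻¹ΦᵀWY with W = diag(λ^{N−1}, ..., λ, 1). A minimal sketch (Python/NumPy, invented data; the helper name `forgetting_ls` is ours, not the lecture's):

```python
import numpy as np

def forgetting_ls(Phi, Y, lam):
    """Weighted LS with exponential forgetting: w_i = lam**(N - i).

    Implements theta = (Phi^T W Phi)^{-1} Phi^T W Y with W = diag(w_1..w_N),
    so recent residuals count more than old ones when 0 < lam < 1.
    """
    N = len(Y)
    w = lam ** (N - 1 - np.arange(N))     # w_i = lam^(N-i), i = 1..N
    W = np.diag(w)
    return np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ Y)

# Illustrative data; with lam = 1 every weight is 1 and ordinary LS returns.
rng = np.random.default_rng(1)
Phi_w = rng.normal(size=(40, 2))
Y_w = Phi_w @ np.array([1.0, 3.0])
```

On noise-free data any 0 < λ ≤ 1 recovers the true parameters; the weighting matters only once the data are noisy or the parameters drift.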

Stochastic Notation

Uncertainty is often convenient to represent stochastically.

• e is random if it is not known in advance and can take values from the set {e₁, ..., eᵢ, ...} with probabilities pᵢ.
• The mean value or expected value is $E\,e = \sum_i p_i e_i$; E is the expectation operator. For a realization $\{e_i\}_{i=1}^{N}$: $E\,e \approx \frac{1}{N}\sum_{i=1}^{N} e_i$.
• The variance or dispersion characterizes possible deviations from the mean value: $\sigma^2 = \mathrm{var}(e) = E\,(e - E\,e)^2$.
• Two random variables e and f are independent (more precisely, uncorrelated) if $E(ef) = (E\,e)\cdot(E\,f)$, i.e. cov(e, f) = 0, where $\mathrm{cov}(e,f) = E\big[(e - E\,e)(f - E\,f)\big]$ is the covariance.

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 11/25
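The approximation E e ≈ (1/N)Σeᵢ from the second bullet can be checked on a small made-up discrete distribution (the values and probabilities below are chosen arbitrarily for illustration):

```python
import numpy as np

# A made-up three-point distribution to illustrate the notation:
# e takes values -1, 0, 2 with the probabilities p_i below.
values = np.array([-1.0, 0.0, 2.0])
probs = np.array([0.25, 0.5, 0.25])

Ee = np.sum(probs * values)                   # E e = sum_i p_i e_i
var_e = np.sum(probs * (values - Ee) ** 2)    # var(e) = E (e - E e)^2

# A long realization: the sample mean/variance approximate E e and var(e).
rng = np.random.default_rng(7)
e = rng.choice(values, p=probs, size=100_000)
```

Here `e.mean()` and `e.var()` land close to the exact Ee = 0.25 and var(e), with the approximation error shrinking as the realization gets longer.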

Statistical Properties of the LS Estimate:

Assume that the model of the system is stochastic, i.e.

$$
y(i) = \phi(i)^T\theta^0 + e(i)
\qquad\Big[\;\Leftrightarrow\; Y(N) = \Phi(N)\,\theta^0 + E(N)\;\Big].
$$

Here θ⁰ is the vector of true parameters, 0 ≤ i ≤ N ≤ +∞,

• {e(i)} = {e(0), e(1), ...} is a sequence of independent, identically distributed random variables with E e(i) = 0,
• {φ(i)} is either a sequence of random variables independent of e, or deterministic.

Pre-multiplying the model by (Φ(N)ᵀΦ(N))⁻¹Φ(N)ᵀ gives

$$
\hat\theta = \big(\Phi(N)^T\Phi(N)\big)^{-1}\Phi(N)^T\,Y(N)
= \big(\Phi(N)^T\Phi(N)\big)^{-1}\Phi(N)^T\,\big(\Phi(N)\theta^0 + E(N)\big)
$$

$$
\hat\theta = \theta^0 + \big(\Phi(N)^T\Phi(N)\big)^{-1}\Phi(N)^T\,E(N)
$$

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 12/25

Theorem: Suppose the data are generated by

$$
y(i) = \phi(i)^T\theta^0 + e(i)
\qquad\Big[\;\Leftrightarrow\; Y(N) = \Phi(N)\,\theta^0 + E(N)\;\Big]
$$

where {e(i)} = {e(0), e(1), ...} is a sequence of independent random variables with zero mean and variance σ².

Consider the least-squares estimate

$$
\hat\theta = \big(\Phi(N)^T\Phi(N)\big)^{-1}\Phi(N)^T\,Y(N).
$$

If det Φ(N)ᵀΦ(N) ≠ 0, then

• E θ̂ = θ⁰, i.e. the estimate is unbiased;
• the covariance of the estimate is

$$
\mathrm{cov}\,\hat\theta = E\big(\hat\theta - \theta^0\big)\big(\hat\theta - \theta^0\big)^T
= \sigma^2\big(\Phi(N)^T\Phi(N)\big)^{-1}
$$

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 13/25
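Both claims of the theorem are easy to probe by Monte Carlo: fix the (deterministic) regressors, repeat the experiment with fresh noise, and compare the empirical mean and covariance of θ̂ with θ⁰ and σ²(ΦᵀΦ)⁻¹. A sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Deterministic regressors; illustrative true parameters and noise level.
theta0 = np.array([1.5, -2.0])
sigma = 0.3
Phi = rng.normal(size=(30, 2))
P = np.linalg.inv(Phi.T @ Phi)

# Many independent noise realizations of the same experiment.
trials = np.array([
    np.linalg.solve(Phi.T @ Phi,
                    Phi.T @ (Phi @ theta0 + sigma * rng.normal(size=30)))
    for _ in range(10_000)
])

mean_est = trials.mean(axis=0)   # should approach theta0 (unbiasedness)
cov_est = np.cov(trials.T)       # should approach sigma^2 (Phi^T Phi)^{-1}
```

With 10,000 realizations the empirical mean and covariance agree with the theorem to within sampling error.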

Regression Models for FIR

FIR (Finite Impulse Response) or MA (Moving Average) model:

$$
y(t) = b_1 u(t-1) + b_2 u(t-2) + \cdots + b_n u(t-n)
$$

$$
y(t) = \underbrace{\big[u(t-1),\,u(t-2),\,\ldots,\,u(t-n)\big]}_{=\,\phi(t-1)^T}\;
\underbrace{\begin{bmatrix}b_1\\ b_2\\ \vdots\\ b_n\end{bmatrix}}_{=\,\theta}
$$

Note the change of notation for the typical case when the RHS does not have the term b₀u(t): φ(i) → φ(t−1), to indicate dependence of the data on input signals up to the (t−1)-th. However, we keep Y(N) = Φ(N)θ.

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 14/25
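Building Φ(N) from an input sequence is a small bookkeeping exercise; the sketch below (the helper name and conventions are ours, not the lecture's) stacks the rows φ(t−1)ᵀ with zero initial conditions and checks that LS recovers invented coefficients from noise-free data:

```python
import numpy as np

def fir_regressors(u, n):
    """Regression matrix for y(t) = b_1 u(t-1) + ... + b_n u(t-n).

    u[k] holds u(k+1) for k = 0..N-1; row t (t = 1..N) is
    phi(t-1)^T = [u(t-1), ..., u(t-n)], with u(t) = 0 for t <= 0.
    Illustrative helper, not part of the lecture itself.
    """
    N = len(u)
    up = np.concatenate([np.zeros(n), u])     # index j holds u(j - n + 1)
    return np.array([[up[t - k + n - 1] for k in range(1, n + 1)]
                     for t in range(1, N + 1)])

# Noise-free check: LS on the constructed Phi recovers the coefficients.
rng = np.random.default_rng(3)
u_sig = rng.normal(size=100)
b_true = np.array([0.5, -0.2, 0.1])
Phi_fir = fir_regressors(u_sig, 3)
y_fir = Phi_fir @ b_true
b_hat = np.linalg.lstsq(Phi_fir, y_fir, rcond=None)[0]
```

The first row contains only zeros and u(0) = 0 entries where the input has not started yet, matching the convention Y(N) = Φ(N)θ.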

Regression Models for ARMA

IIR (Infinite Impulse Response) or ARMA (Autoregressive Moving Average) model:

$$
\underbrace{\big(q^n + a_1 q^{n-1} + \cdots + a_n\big)}_{A(q)}\;y(t)
= \underbrace{\big(b_1 q^{m-1} + b_2 q^{m-2} + \cdots + b_m\big)}_{B(q)}\;u(t)
$$

q is the forward-shift operator:

$$
q\,u(t) = u(t+1), \qquad q^{-1}u(t) = u(t-1)
$$

$$
y(t) = \underbrace{\big[-y(t-1),\,\ldots,\,-y(t-n),\;u(t-n+m-1),\,\ldots,\,u(t-n)\big]}_{=\,\phi(t-1)^T}\;
\underbrace{\begin{bmatrix}a_1\\ \vdots\\ a_n\\ b_1\\ \vdots\\ b_m\end{bmatrix}}_{=\,\theta}
$$

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 15/25
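The ARMA regressor mixes lagged outputs (with flipped sign) and lagged inputs. A sketch of the row construction under our own conventions (helper name and indexing are assumptions, not from the lecture; m ≤ n is assumed so the transfer function is proper), verified on invented second-order coefficients:

```python
import numpy as np

def arma_regressors(y, u, n, m):
    """Rows phi(t-1)^T = [-y(t-1),...,-y(t-n), u(t-n+m-1),...,u(t-n)].

    y[t] and u[t] hold y(t), u(t) for t = 0..N; one row per t = n..N so
    all lags exist. Assumes m <= n. Illustrative helper, not from the
    lecture.
    """
    N = len(y) - 1
    return np.array(
        [[-y[t - k] for k in range(1, n + 1)]
         + [u[t - n + m - k] for k in range(1, m + 1)]
         for t in range(n, N + 1)])

# Noise-free sanity check with invented second-order coefficients.
rng = np.random.default_rng(6)
u_in = rng.normal(size=201)
y_out = np.zeros(201)
a1, a2, c1, c2 = 0.3, -0.1, 1.0, 0.5        # theta = [a1, a2, b1, b2]
for t in range(2, 201):
    y_out[t] = -a1*y_out[t-1] - a2*y_out[t-2] + c1*u_in[t-1] + c2*u_in[t-2]
Phi_arma = arma_regressors(y_out, u_in, 2, 2)
theta_arma = np.linalg.lstsq(Phi_arma, y_out[2:], rcond=None)[0]
```

For n = m = 2 the input part of each row reduces to [u(t−1), u(t−2)], and LS recovers [a₁, a₂, b₁, b₂] exactly from the noise-free simulation.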

Problem 2.2

Consider the FIR model

$$
y(t) = b_0 u(t) + b_1 u(t-1) + e(t), \qquad t = 1, 2, 3, \ldots, N
$$

where {e(t)} is a sequence of independent normal N(0, σ) random variables.

• Determine the LS estimates for b₀, b₁ when u(t) is a step. Analyze the covariance of the estimate as N → ∞.
• Make the same investigation when u(t) is white noise with unit variance.

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 16/25

Solution (Problem 2.2):

For the model

$$
y(t) = b_0 u(t) + b_1 u(t-1) + e(t)
$$

the regression form is readily seen:

$$
y(t) = \big[u(t),\;u(t-1)\big]\begin{bmatrix}b_0\\ b_1\end{bmatrix} + e(t)
= \phi(t)^T\theta + e(t)
$$

Then the LS estimate is

$$
\hat\theta = \big(\Phi(N)^T\Phi(N)\big)^{-1}\Phi(N)^T\,Y(N)
= \left(
\begin{bmatrix}u(1) & u(0)\\ u(2) & u(1)\\ \vdots & \vdots\\ u(N) & u(N-1)\end{bmatrix}^T
\begin{bmatrix}u(1) & u(0)\\ u(2) & u(1)\\ \vdots & \vdots\\ u(N) & u(N-1)\end{bmatrix}
\right)^{-1}
\begin{bmatrix}u(1) & u(0)\\ u(2) & u(1)\\ \vdots & \vdots\\ u(N) & u(N-1)\end{bmatrix}^T
\begin{bmatrix}y(1)\\ y(2)\\ \vdots\\ y(N)\end{bmatrix}
$$

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 17/25

Solution (cont'd):

For an arbitrary input signal u(t), the formula becomes

$$
\hat\theta =
\begin{bmatrix}
\sum_{t=1}^{N}u(t)^2 & \sum_{t=1}^{N}u(t)u(t-1)\\[4pt]
\sum_{t=1}^{N}u(t)u(t-1) & \sum_{t=1}^{N}u(t-1)^2
\end{bmatrix}^{-1}
\begin{bmatrix}
\sum_{t=1}^{N}u(t)\,y(t)\\[4pt]
\sum_{t=1}^{N}u(t-1)\,y(t)
\end{bmatrix}
$$

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 18/25

Solution (cont'd):

Suppose that u(t) is a unit step applied at t = 1:

$$
u(t) = \begin{cases}1, & t = 1, 2, \ldots\\ 0, & t = 0, -1, -2, -3, \ldots\end{cases}
$$

Substituting this signal into the general formula, we obtain

$$
\hat\theta =
\begin{bmatrix}N & N-1\\ N-1 & N-1\end{bmatrix}^{-1}
\begin{bmatrix}\sum_{t=1}^{N}y(t)\\[2pt] \sum_{t=2}^{N}y(t)\end{bmatrix}
= \frac{1}{N(N-1) - (N-1)^2}
\begin{bmatrix}N-1 & -(N-1)\\ -(N-1) & N\end{bmatrix}
\begin{bmatrix}\sum_{t=1}^{N}y(t)\\[2pt] \sum_{t=2}^{N}y(t)\end{bmatrix}
$$

$$
= \begin{bmatrix}1 & -1\\[2pt] -1 & \dfrac{N}{N-1}\end{bmatrix}
\begin{bmatrix}\sum_{t=1}^{N}y(t)\\[2pt] \sum_{t=2}^{N}y(t)\end{bmatrix}
= \begin{bmatrix}y(1)\\[4pt] \dfrac{1}{N-1}\sum_{t=2}^{N}y(t) - y(1)\end{bmatrix}
$$

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 19/25

Solution (cont'd):

How to compute the covariance of θ̂? Consider the estimation error:

$$
\hat\theta - \theta^0 =
\begin{bmatrix}y(1)\\[4pt] \dfrac{1}{N-1}\sum_{t=2}^{N}y(t) - y(1)\end{bmatrix}
- \begin{bmatrix}b_0\\ b_1\end{bmatrix}
= \begin{bmatrix}
\big(b_0\cdot 1 + b_1\cdot 0 + e(1)\big) - b_0\\[4pt]
\dfrac{1}{N-1}\sum_{t=2}^{N}\big(b_0\cdot 1 + b_1\cdot 1 + e(t)\big) - \big(b_0 + e(1)\big) - b_1
\end{bmatrix}
= \begin{bmatrix}e(1)\\[4pt] \dfrac{1}{N-1}\sum_{t=2}^{N}e(t) - e(1)\end{bmatrix}
$$

The covariance E(θ̂ − θ⁰)(θ̂ − θ⁰)ᵀ of the estimate is then

$$
\mathrm{cov}(\hat\theta) =
\begin{bmatrix}
E\,e(1)^2 & E\Big[\dfrac{\sum_{t=2}^{N}e(t)}{N-1} - e(1)\Big]e(1)\\[8pt]
E\Big[\dfrac{\sum_{t=2}^{N}e(t)}{N-1} - e(1)\Big]e(1) & E\Big[\dfrac{\sum_{t=2}^{N}e(t)}{N-1} - e(1)\Big]^2
\end{bmatrix}
= \begin{bmatrix}\sigma^2 & -\sigma^2\\[2pt] -\sigma^2 & \sigma^2\dfrac{N}{N-1}\end{bmatrix}
= \sigma^2\begin{bmatrix}1 & -1\\[2pt] -1 & \dfrac{N}{N-1}\end{bmatrix}
$$

(using independence of the e(t): the lower-right entry is $\frac{(N-1)\sigma^2}{(N-1)^2} + \sigma^2 = \sigma^2\frac{N}{N-1}$).

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 20/25

Solution (cont'd):

To conclude: with the unit-step input signal the LS estimate is

$$
\hat\theta = \begin{bmatrix}\hat b_0\\ \hat b_1\end{bmatrix}
= \begin{bmatrix}y(1)\\[4pt] \dfrac{1}{N-1}\sum_{t=2}^{N}y(t) - y(1)\end{bmatrix}
$$

The covariance of this estimate is

$$
E(\hat\theta - \theta^0)(\hat\theta - \theta^0)^T
= \sigma^2\begin{bmatrix}1 & -1\\[2pt] -1 & \dfrac{N}{N-1}\end{bmatrix}
$$

As the number of measured data points increases, N → ∞,

$$
E(\hat\theta - \theta^0)(\hat\theta - \theta^0)^T \to \sigma^2\begin{bmatrix}1 & -1\\ -1 & 1\end{bmatrix}
$$

and the estimate does NOT improve!

© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 21/25
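The non-improving covariance is easy to see in simulation: with a step input, b̂₀ = y(1) rests on a single noisy sample no matter how long the record is. A sketch (Python/NumPy; the constants b₀, b₁, σ are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
b0, b1, sigma = 1.0, 0.5, 0.2    # invented model constants

def step_ls(N):
    """One realization of the LS estimate for a unit-step input."""
    e = sigma * rng.normal(size=N)
    y = np.empty(N)
    y[0] = b0 + e[0]             # t = 1: u(1) = 1, u(0) = 0
    y[1:] = b0 + b1 + e[1:]      # t >= 2: u(t) = u(t-1) = 1
    return np.array([y[0], y[1:].sum() / (N - 1) - y[0]])

# Sample variances of b0_hat for a short and a long record.
var_small = np.array([step_ls(10) for _ in range(3000)])[:, 0].var()
var_large = np.array([step_ls(2000) for _ in range(3000)])[:, 0].var()
```

Both sample variances stay near σ² = 0.04: collecting two hundred times more data does not reduce the uncertainty in b̂₀, exactly as the limiting covariance predicts.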

Solution (Problem 2.2):

Consider now the model

\[ y(t) = b_{0}\,u(t) + b_{1}\,u(t-1) + e(t) \]

when the input signal is white noise with unit variance,

\[ \mathrm{E}\,u(t)^{2} = 1, \qquad \mathrm{E}\,u(t)u(s) = 0 \ \text{if } t \neq s, \]

and when u and e are independent,

\[ \mathrm{E}\,u(t)e(s) = 0 \quad \forall\, t,s \]

Such assumptions imply that

\[ \mathrm{E}\,y(t)u(t) = b_{0}, \qquad \mathrm{E}\,y(t)u(t-1) = b_{1} \]

c Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 22/25
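These cross-correlations are easy to verify numerically. A minimal sketch (assuming NumPy; the parameter values are illustrative, and the input is taken to be available from t = 0):

```python
import numpy as np

rng = np.random.default_rng(1)
b0, b1, sigma, N = 2.0, 1.5, 0.5, 200_000  # illustrative values

u = rng.standard_normal(N + 1)      # white noise: E u(t)^2 = 1, E u(t)u(s) = 0, t != s
e = sigma * rng.standard_normal(N)  # measurement noise, independent of u
y = b0 * u[1:] + b1 * u[:-1] + e    # y(t) = b0 u(t) + b1 u(t-1) + e(t)

# Sample averages approximate the expectations on the slide
print(np.mean(y * u[1:]))    # close to E y(t)u(t)   = b0
print(np.mean(y * u[:-1]))   # close to E y(t)u(t-1) = b1
```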

Solution (Problem 2.2):

The LS estimate now depends on the particular realization of the stochastic input u(t); we therefore consider its mean value

\[
\mathrm{E}\,\hat{\theta} = \mathrm{E}\left\{
\begin{bmatrix}
\sum_{t=1}^{N} u(t)^{2} & \sum_{t=2}^{N} u(t)u(t-1) \\[4pt]
\sum_{t=2}^{N} u(t)u(t-1) & \sum_{t=2}^{N} u(t-1)^{2}
\end{bmatrix}^{-1}
\begin{bmatrix}
\sum_{t=1}^{N} u(t)y(t) \\[4pt]
\sum_{t=2}^{N} u(t-1)y(t)
\end{bmatrix}
\right\}
\]

Writing each sum as \(N\bigl(\frac{1}{N}\sum_{t=1}^{N}\cdots\bigr)\) or \((N-1)\bigl(\frac{1}{N-1}\sum_{t=2}^{N}\cdots\bigr)\), the normalized sums approach their expectations for large N, so

\[
\mathrm{E}\,\hat{\theta} \approx
\begin{bmatrix}
N\,\mathrm{E}\,u(t)^{2} & (N-1)\,\mathrm{E}\,u(t)u(t-1) \\
(N-1)\,\mathrm{E}\,u(t)u(t-1) & (N-1)\,\mathrm{E}\,u(t-1)^{2}
\end{bmatrix}^{-1}
\begin{bmatrix}
N\,\mathrm{E}\,u(t)y(t) \\
(N-1)\,\mathrm{E}\,u(t-1)y(t)
\end{bmatrix}
=
\begin{bmatrix}
N & 0 \\
0 & N-1
\end{bmatrix}^{-1}
\begin{bmatrix}
N b_{0} \\
(N-1) b_{1}
\end{bmatrix}
=
\begin{bmatrix}
b_{0} \\
b_{1}
\end{bmatrix}
\]

i.e., the estimate is unbiased (in this approximation).

c Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 23/25
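This unbiasedness can be checked by averaging the LS estimate over many input and noise realizations. A minimal NumPy sketch (parameter values are illustrative; for simplicity the regressor u(t-1) is used from t = 1, assuming u(0) is available):

```python
import numpy as np

rng = np.random.default_rng(2)
b0, b1, sigma, N, runs = 2.0, 1.5, 0.5, 100, 5000  # illustrative values

est = np.empty((runs, 2))
for k in range(runs):
    u = rng.standard_normal(N + 1)          # fresh white-noise input each run
    e = sigma * rng.standard_normal(N)
    y = b0 * u[1:] + b1 * u[:-1] + e        # y(t) = b0 u(t) + b1 u(t-1) + e(t)
    Phi = np.column_stack([u[1:], u[:-1]])  # regressor rows [u(t), u(t-1)]
    est[k] = np.linalg.lstsq(Phi, y, rcond=None)[0]

# Averaging over realizations: E theta_hat should be close to (b0, b1)
print(est.mean(axis=0))
```

With a white-noise input, both sample means settle near the true b0 and b1, in line with the slide's conclusion.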

Solution (Problem 2.2):

To compute the covariance, use the formula from the Theorem:

\[
\operatorname{cov}(\hat{\theta}-\theta^{0})
= \sigma^{2}\,\mathrm{E}\,\bigl(\Phi(N)^{T}\Phi(N)\bigr)^{-1}
\approx \sigma^{2}
\begin{bmatrix}
N & 0 \\
0 & N-1
\end{bmatrix}^{-1}
= \sigma^{2}
\begin{bmatrix}
\frac{1}{N} & 0 \\[2pt]
0 & \frac{1}{N-1}
\end{bmatrix}
\]

It converges to zero as \(N \to \infty\)!

c Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 24/25
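The decay of the covariance can also be seen by Monte Carlo. A sketch assuming NumPy (parameter values are illustrative; the input is again taken to be available from t = 0): for several data lengths N, the sample covariance of the LS estimate is computed and its diagonal compared with sigma^2/N and sigma^2/(N-1).

```python
import numpy as np

rng = np.random.default_rng(3)
b0, b1, sigma, runs = 2.0, 1.5, 0.5, 4000  # illustrative values

def cov_of_ls(N):
    """Monte Carlo covariance of the LS estimate for data length N."""
    est = np.empty((runs, 2))
    for k in range(runs):
        u = rng.standard_normal(N + 1)          # white-noise input
        e = sigma * rng.standard_normal(N)
        y = b0 * u[1:] + b1 * u[:-1] + e        # y(t) = b0 u(t) + b1 u(t-1) + e(t)
        Phi = np.column_stack([u[1:], u[:-1]])  # regressor rows [u(t), u(t-1)]
        est[k] = np.linalg.lstsq(Phi, y, rcond=None)[0]
    return np.cov(est.T)

# Diagonal entries behave like sigma^2/N and sigma^2/(N-1): they shrink to zero
for N in (25, 100, 400):
    C = cov_of_ls(N)
    print(N, C[0, 0], C[1, 1])
```

Unlike the step-input case, here every entry of the covariance shrinks as N grows, so a persistently exciting (white-noise) input makes the estimate improve with more data.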

Next Lecture / Assignments:

• Next lecture (April 13, 10:00-12:00, in A206Tekn): Recursive Least Squares, modifications, continuous-time systems.

• Solve Problems 2.5 and 2.6.

c Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2 – p. 25/25
