© Leonid Freidovich. March 31, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 2
Preliminary Comments

Real-time parameter estimation is one of the methods of system identification.

The key elements of system identification are:
Parameter Estimation: Problem Formulation

Suppose that a system (a model of an experiment) is

$y(i) = \varphi_1(i)\,\theta_1^0 + \varphi_2(i)\,\theta_2^0 + \dots + \varphi_n(i)\,\theta_n^0 = \underbrace{\big[\varphi_1(i), \varphi_2(i), \dots, \varphi_n(i)\big]}_{\varphi(i)^T,\; 1 \times n} \underbrace{\begin{bmatrix} \theta_1^0 \\ \vdots \\ \theta_n^0 \end{bmatrix}}_{\theta^0,\; n \times 1} = \varphi(i)^T \theta^0$

Given the data

$\big\{ [y(i), \varphi(i)] \big\}_{i=1}^{N} = \big\{ [y(1), \varphi(1)], \dots, [y(N), \varphi(N)] \big\}$

the task is to find (estimate) the $n$ unknown values

$\theta^0 = \big[\theta_1^0, \theta_2^0, \dots, \theta_n^0\big]^T$
What kind of solution is expected?

An algorithm solving the estimation problem should minimize the least-squares loss function

$V_N(\theta) = \frac{1}{2}\Big[ \big(y(1) - \varphi(1)^T\theta\big)^2 + \dots + \big(y(N) - \varphi(N)^T\theta\big)^2 \Big] = \frac{1}{2}\Big[ \varepsilon(1)^2 + \varepsilon(2)^2 + \dots + \varepsilon(N)^2 \Big] = \frac{1}{2} E^T E$

Here

$E = \big[\varepsilon(1), \varepsilon(2), \dots, \varepsilon(N)\big]^T = Y - \Phi\theta$

and

$Y = \big[y(1), y(2), \dots, y(N)\big]^T, \qquad \Phi = \begin{bmatrix} \varphi(1)^T \\ \varphi(2)^T \\ \vdots \\ \varphi(N)^T \end{bmatrix} \in \mathbb{R}^{N \times n}$
Theorem (Least-Squares Formula):

The function $V_N(\theta)$ is minimal at $\theta = \hat\theta$, which satisfies the equation

$\Phi^T \Phi\, \hat\theta = \Phi^T Y$

If $\det\left(\Phi^T \Phi\right) \neq 0$, then the solution is unique and the minimum of $V_N$ is attained at

$\hat\theta = \left(\Phi^T \Phi\right)^{-1} \Phi^T Y$
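The theorem can be checked numerically. A minimal sketch (the regressors, dimensions, and parameter values below are made up for illustration; NumPy is assumed available): generate noise-free data from a known parameter vector and solve the normal equations $\Phi^T\Phi\,\hat\theta = \Phi^T Y$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 50
theta_true = np.array([1.0, -2.0, 0.5])    # made-up "true" parameters
Phi = rng.standard_normal((N, n))          # regressor matrix Phi, N x n
Y = Phi @ theta_true                       # noise-free measurements

# Normal equations: Phi^T Phi theta_hat = Phi^T Y
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
```

With noise-free data and a full-rank $\Phi$, the recovered estimate matches the true parameters to machine precision.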
Proof:

The loss function can be written as

$V_N(\theta) = \frac{1}{2}\,(Y - \Phi\theta)^T (Y - \Phi\theta)$

It reaches its minimal value at $\hat\theta$ if

$\nabla V_N(\theta) = \left[ \frac{\partial V_N(\theta)}{\partial \theta_1}, \dots, \frac{\partial V_N(\theta)}{\partial \theta_n} \right] = 0 \quad \text{with } \theta = \hat\theta$

This equation is

$\nabla V_N(\theta) = \frac{1}{2}\cdot 2\,\big({-\Phi}\big)^T (Y - \Phi\theta) = -\Phi^T (Y - \Phi\theta) = 0$

(recall that $\frac{\partial}{\partial x}(x^T a) = \frac{\partial}{\partial x}(a^T x) = a^T$ and $\frac{\partial}{\partial x}(x^T A x) = x^T (A + A^T)$). Hence

$\Phi^T \Phi\, \hat\theta = \Phi^T Y$
Proof (cont'd):

If the matrix $\Phi^T \Phi$ is invertible, then we can find the solution of $\Phi^T \Phi\,\theta = \Phi^T Y$:

$\hat\theta = \left(\Phi^T \Phi\right)^{-1} \Phi^T Y$

To verify that this is a minimum, expand

$V_N(\theta) = \frac{1}{2}\,(Y - \Phi\theta)^T (Y - \Phi\theta) = \frac{1}{2} Y^T Y - Y^T \Phi\theta + \frac{1}{2}\theta^T \Phi^T \Phi\,\theta$

and since

$Y^T \Phi = Y^T \Phi \left(\Phi^T \Phi\right)^{-1} \left(\Phi^T \Phi\right) = \hat\theta^T \left(\Phi^T \Phi\right),$

$V_N(\theta) = \underbrace{\frac{1}{2} Y^T Y - \frac{1}{2}\hat\theta^T \left(\Phi^T \Phi\right) \hat\theta}_{\text{independent of } \theta} + \underbrace{\frac{1}{2}(\theta - \hat\theta)^T \left(\Phi^T \Phi\right) (\theta - \hat\theta)}_{\ge 0,\; > 0 \text{ for } \theta \neq \hat\theta}$
Comments:

The formula can be rewritten in terms of $y(i)$, $\varphi(i)$ as

$\hat\theta = \left(\Phi^T \Phi\right)^{-1} \Phi^T Y = P(N) \left[\sum_{i=1}^{N} \varphi(i)\, y(i)\right], \qquad \text{where } P(N) = \left(\Phi^T \Phi\right)^{-1} = \left[\sum_{i=1}^{N} \varphi(i)\varphi(i)^T\right]^{-1}$

The condition $\det\left(\Phi^T \Phi\right) \neq 0$ is called an excitation condition.

To assign different weights to different time instants, consider

$V_N(\theta) = \frac{1}{2}\Big[ w_1 \big(y(1) - \varphi(1)^T\theta\big)^2 + \dots + w_N \big(y(N) - \varphi(N)^T\theta\big)^2 \Big] = \frac{1}{2}\Big[ w_1 \varepsilon(1)^2 + w_2 \varepsilon(2)^2 + \dots + w_N \varepsilon(N)^2 \Big] = \frac{1}{2} E^T W E$

with minimizer

$\hat\theta = \left(\Phi^T W \Phi\right)^{-1} \Phi^T W Y$
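The matrix form and the sum form of the estimate are the same object written two ways, which is easy to confirm numerically (synthetic data, for illustration only; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 20, 2
Phi = rng.standard_normal((N, n))          # synthetic regressors
Y = rng.standard_normal(N)                 # synthetic measurements

# Matrix form: theta = (Phi^T Phi)^{-1} Phi^T Y
theta_mat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)

# Sum form: theta = P(N) * sum_i phi(i) y(i),
# with P(N) = [sum_i phi(i) phi(i)^T]^{-1}
S = sum(np.outer(Phi[i], Phi[i]) for i in range(N))
b = sum(Phi[i] * Y[i] for i in range(N))
theta_sum = np.linalg.solve(S, b)
```

Both computations solve the same normal equations, so the results agree up to rounding.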
Exponential forgetting

The common way to assign the weights is Least Squares with Exponential Forgetting:

$V_N(\theta) = \frac{1}{2} \sum_{i=1}^{N} \lambda^{N-i} \left(y(i) - \varphi(i)^T \theta\right)^2$

with $0 < \lambda < 1$. What is the solution formula?
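Since this loss is the weighted loss with $w_i = \lambda^{N-i}$, the weighted formula $\hat\theta = (\Phi^T W \Phi)^{-1}\Phi^T W Y$ applies with $W = \mathrm{diag}(\lambda^{N-1}, \dots, \lambda, 1)$. A sketch with made-up parameters and noise-free data (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
N, lam = 30, 0.95                          # assumed forgetting factor
theta_true = np.array([0.7, 1.3])          # made-up parameters
Phi = rng.standard_normal((N, 2))
Y = Phi @ theta_true                       # noise-free data for the check

w = lam ** (N - np.arange(1, N + 1))       # weights lambda^(N-i), i = 1..N
W = np.diag(w)
theta_ef = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ Y)
```

With exact data the weighting cancels out and the true parameters are recovered; with noisy data the weighting emphasizes recent samples.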
Stochastic Notation

Uncertainty is often convenient to represent stochastically.

For a discrete random variable $e$ taking values $e_i$ with probabilities $p_i$, $\mathrm{E}\,e = \sum_i p_i e_i$; $\mathrm{E}$ is the expectation operator.

For a realization $\{e_i\}_{i=1}^{N}$: $\mathrm{E}\,e \approx \frac{1}{N}\sum_{i=1}^{N} e_i$.

The variance is $\sigma^2 = \mathrm{var}(e) = \mathrm{E}\,(e - \mathrm{E}\,e)^2$.
Statistical Properties of the LS Estimate:

Assume that the model of the system is stochastic, i.e.

$y(i) = \varphi(i)^T \theta^0 + e(i) \qquad \big[\, Y(N) = \Phi(N)\theta^0 + E(N) \,\big].$

Here $\theta^0$ is the vector of true parameters, $0 \le i \le N \le +\infty$, $\{e(i)\} = \{e(0), e(1), \dots\}$ is a sequence of independent identically distributed random variables with $\mathrm{E}\,e(i) = 0$, and $\{\varphi(i)\}$ is either a sequence of random variables independent of $e$ or deterministic.

Pre-multiplying the model by $\left(\Phi(N)^T \Phi(N)\right)^{-1} \Phi(N)^T$ gives

$\left(\Phi(N)^T \Phi(N)\right)^{-1} \Phi(N)^T Y(N) = \left(\Phi(N)^T \Phi(N)\right)^{-1} \Phi(N)^T \big[\Phi(N)\theta^0 + E(N)\big]$

i.e.

$\hat\theta = \theta^0 + \left(\Phi(N)^T \Phi(N)\right)^{-1} \Phi(N)^T E(N)$
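The error decomposition $\hat\theta - \theta^0 = (\Phi^T\Phi)^{-1}\Phi^T E$ is an exact algebraic identity, which a short numerical check confirms (all numbers below are made up for illustration; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
N, n = 100, 2
theta0 = np.array([1.5, -0.5])             # made-up "true" parameters
Phi = rng.standard_normal((N, n))
E = 0.1 * rng.standard_normal(N)           # one noise realization E(N)
Y = Phi @ theta0 + E                       # Y(N) = Phi(N) theta0 + E(N)

theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
# Predicted estimation error from the slide: (Phi^T Phi)^{-1} Phi^T E(N)
err = np.linalg.solve(Phi.T @ Phi, Phi.T @ E)
```

The computed estimate deviates from $\theta^0$ by exactly the predicted term.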
Theorem: Suppose the data are generated by

$y(i) = \varphi(i)^T \theta^0 + e(i) \qquad \big[\, Y(N) = \Phi(N)\theta^0 + E(N) \,\big]$

where $\{e(i)\} = \{e(0), e(1), \dots\}$ is a sequence of independent random variables with zero mean and variance $\sigma^2$.

Consider the least-squares estimate

$\hat\theta = \left(\Phi(N)^T \Phi(N)\right)^{-1} \Phi(N)^T Y(N).$

If $\det \Phi(N)^T \Phi(N) \neq 0$, then

$\mathrm{E}\,\hat\theta = \theta^0$, i.e. the estimate is unbiased;

$\mathrm{E}\,(\hat\theta - \theta^0)(\hat\theta - \theta^0)^T = \sigma^2 \left(\Phi(N)^T \Phi(N)\right)^{-1}$
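Both claims of the theorem can be illustrated by a Monte Carlo experiment with fixed (deterministic) regressors and repeated noise realizations (the parameter values, noise level, and trial count are assumptions for illustration; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
N, n, sigma = 50, 2, 0.2
theta0 = np.array([1.0, 2.0])              # made-up true parameters
Phi = rng.standard_normal((N, n))          # fixed (deterministic) regressors
P = np.linalg.inv(Phi.T @ Phi)

trials = 5000
est = np.empty((trials, n))
for k in range(trials):
    Y = Phi @ theta0 + sigma * rng.standard_normal(N)
    est[k] = P @ (Phi.T @ Y)               # LS estimate for this realization

emp_mean = est.mean(axis=0)                # should approach theta0 (unbiased)
emp_cov = np.cov(est.T)                    # should approach sigma^2 P
theory_cov = sigma**2 * P
```

The empirical mean converges to $\theta^0$ and the empirical covariance to $\sigma^2(\Phi^T\Phi)^{-1}$, as the theorem states.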
Regression Models for FIR

FIR (Finite Impulse Response) or MA (Moving Average) model:

$y(t) = b_1 u(t-1) + b_2 u(t-2) + \dots + b_n u(t-n)$

$y(t) = \underbrace{\big[u(t-1), u(t-2), \dots, u(t-n)\big]}_{=\varphi(t-1)^T} \underbrace{\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}}_{=\theta}$

Note the change of notation for the typical case when the right-hand side does not have the term $b_0 u(t)$: we write $\varphi(t-1)$ instead of $\varphi(i)$ to indicate that the data depend on input signals up to time $t-1$.

However, we keep: $Y(N) = \Phi(N)\theta$
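Building the regressor $\varphi(t-1)$ from input samples is a simple indexing exercise; a small sketch with made-up taps and input values (NumPy assumed):

```python
import numpy as np

def fir_phi(u, n, t):
    # phi(t-1) = [u(t-1), ..., u(t-n)] for the FIR model (no b0*u(t) term)
    return np.array([u[t - k] for k in range(1, n + 1)])

b_true = np.array([0.5, -0.3])                    # hypothetical taps b1, b2
u = np.array([0.0, 1.0, 2.0, -1.0, 0.5, 3.0])     # u(0), ..., u(5), made up
ts = range(2, len(u))                             # t where u(t-1), u(t-2) exist
Phi = np.array([fir_phi(u, 2, t) for t in ts])    # stacked phi(t-1)^T rows
Y = Phi @ b_true                                  # noise-free outputs

b_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)   # LS recovers the taps
```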
Regression Models for ARMA

IIR (Infinite Impulse Response) or ARMA (AutoRegressive Moving Average) model:

$\underbrace{\left(q^n + a_1 q^{n-1} + \dots + a_n\right)}_{A(q)}\, y(t) = \underbrace{\left(b_1 q^{m-1} + b_2 q^{m-2} + \dots + b_m\right)}_{B(q)}\, u(t)$

$q$ is the forward shift operator:

$q\, u(t) = u(t+1), \qquad q^{-1} u(t) = u(t-1)$

The regression form is

$y(t) = \underbrace{\big[{-y(t-1)}, \dots, {-y(t-n)},\; u(t-n+m-1), \dots, u(t-n)\big]}_{=\varphi(t-1)^T} \underbrace{\begin{bmatrix} a_1 \\ \vdots \\ a_n \\ b_1 \\ \vdots \\ b_m \end{bmatrix}}_{=\theta}$
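A first-order special case ($n = m = 1$, so $y(t) = -a_1 y(t-1) + b_1 u(t-1)$) shows the construction; the parameter values and input are made up for illustration (NumPy assumed):

```python
import numpy as np

a1, b1 = -0.8, 2.0                         # hypothetical true parameters
rng = np.random.default_rng(5)
N = 30
u = rng.standard_normal(N)                 # made-up input sequence
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a1 * y[t - 1] + b1 * u[t - 1]  # simulate the ARMA model

# Regressor rows phi(t-1)^T = [-y(t-1), u(t-1)], theta = [a1, b1]
Phi = np.column_stack([-y[:-1], u[:-1]])
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ y[1:])
```

With noise-free data the LS estimate returns $[a_1, b_1]$ exactly; note the sign convention puts $-y(t-1)$ in the regressor so that $a_1$ enters $\theta$ with a plus sign.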
Problem 2.2

Consider the FIR model

$y(t) = b_0 u(t) + b_1 u(t-1) + e(t), \qquad t = 1, 2, 3, \dots, N$

where $\{e(t)\}$ is a sequence of independent normal $N(0, \sigma)$ random variables.
Solution (Problem 2.2):

For the model

$y(t) = b_0 u(t) + b_1 u(t-1) + e(t)$

the regression form is readily seen:

$y(t) = \big[u(t),\; u(t-1)\big] \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} + e(t) = \varphi(t)^T \theta + e(t)$

Then the LS estimate is

$\hat\theta = \left(\Phi(N)^T \Phi(N)\right)^{-1} \Phi(N)^T Y(N), \qquad \text{where } \Phi(N) = \begin{bmatrix} u(1) & u(0) \\ u(2) & u(1) \\ \vdots & \vdots \\ u(N) & u(N-1) \end{bmatrix}, \quad Y(N) = \begin{bmatrix} y(1) \\ y(2) \\ \vdots \\ y(N) \end{bmatrix}$
Solution (cont'd):

For an arbitrary input signal $u(t)$, the formula becomes

$\hat\theta = \begin{bmatrix} \sum_{t=1}^{N} u(t)^2 & \sum_{t=1}^{N} u(t)u(t-1) \\ \sum_{t=1}^{N} u(t)u(t-1) & \sum_{t=1}^{N} u(t-1)^2 \end{bmatrix}^{-1} \begin{bmatrix} \sum_{t=1}^{N} u(t)y(t) \\ \sum_{t=1}^{N} u(t-1)y(t) \end{bmatrix}$
Solution (cont'd):

Suppose that $u(t)$ is a unit step applied at $t = 1$:

$u(t) = \begin{cases} 1, & t = 1, 2, \dots \\ 0, & t = 0, -1, -2, -3, \dots \end{cases}$

Substituting this signal into the general formula, we obtain

$\hat\theta = \begin{bmatrix} N & N-1 \\ N-1 & N-1 \end{bmatrix}^{-1} \begin{bmatrix} \sum_{t=1}^{N} y(t) \\ \sum_{t=2}^{N} y(t) \end{bmatrix}$

Inverting the $2 \times 2$ matrix,

$\hat\theta = \frac{1}{N(N-1) - (N-1)^2} \begin{bmatrix} N-1 & -(N-1) \\ -(N-1) & N \end{bmatrix} \begin{bmatrix} \sum_{t=1}^{N} y(t) \\ \sum_{t=2}^{N} y(t) \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ -1 & \frac{N}{N-1} \end{bmatrix} \begin{bmatrix} \sum_{t=1}^{N} y(t) \\ \sum_{t=2}^{N} y(t) \end{bmatrix} = \begin{bmatrix} y(1) \\ \frac{1}{N-1} \sum_{t=2}^{N} y(t) - y(1) \end{bmatrix}$
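The closed-form estimate for the unit-step input can be checked against the general LS formula; a small sketch with made-up values of $b_0$, $b_1$, $\sigma$ (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(6)
N, b0, b1, sigma = 20, 1.5, -0.7, 0.1      # made-up model and noise values
u = np.ones(N + 1)
u[0] = 0.0                                 # unit step: u(0)=0, u(t)=1, t >= 1
e = sigma * rng.standard_normal(N)
y = b0 * u[1:] + b1 * u[:-1] + e           # y(t), t = 1..N

Phi = np.column_stack([u[1:], u[:-1]])     # rows [u(t), u(t-1)]
theta_ls = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)

# Closed form: b0_hat = y(1), b1_hat = (1/(N-1)) sum_{t=2}^N y(t) - y(1)
theta_closed = np.array([y[0], y[1:].sum() / (N - 1) - y[0]])
```

The two computations solve the same normal equations, so they agree for any noise realization.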
Solution (cont'd):

How to compute the covariance of $\hat\theta$? Consider

$\hat\theta - \theta^0 = \begin{bmatrix} y(1) \\ \frac{1}{N-1}\sum_{t=2}^{N} y(t) - y(1) \end{bmatrix} - \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} = \begin{bmatrix} \big(b_0 \cdot 1 + b_1 \cdot 0 + e(1)\big) - b_0 \\ \frac{1}{N-1}\sum_{t=2}^{N} \big(b_0 \cdot 1 + b_1 \cdot 1 + e(t)\big) - y(1) - b_1 \end{bmatrix} = \begin{bmatrix} e(1) \\ \frac{1}{N-1}\sum_{t=2}^{N} e(t) - e(1) \end{bmatrix}$
Solution (cont'd):

The estimation error is

$\hat\theta - \theta^0 = \begin{bmatrix} e(1) \\ \frac{1}{N-1}\sum_{t=2}^{N} e(t) - e(1) \end{bmatrix}$

The covariance $\mathrm{E}\,(\hat\theta - \theta^0)(\hat\theta - \theta^0)^T$ of the estimate is then

$\mathrm{cov}(\hat\theta) = \begin{bmatrix} \mathrm{E}\,e(1)^2 & \mathrm{E}\left(\frac{\sum_{t=2}^{N} e(t)}{N-1} - e(1)\right) e(1) \\ \mathrm{E}\left(\frac{\sum_{t=2}^{N} e(t)}{N-1} - e(1)\right) e(1) & \mathrm{E}\left(\frac{\sum_{t=2}^{N} e(t)}{N-1} - e(1)\right)^2 \end{bmatrix} = \sigma^2 \begin{bmatrix} 1 & -1 \\ -1 & \frac{N-1}{(N-1)^2} + 1 \end{bmatrix} = \sigma^2 \begin{bmatrix} 1 & -1 \\ -1 & \frac{N}{N-1} \end{bmatrix}$
Solution (cont'd):

To conclude: with the unit-step input signal the LS estimate is

$\hat\theta = \begin{bmatrix} \hat b_0 \\ \hat b_1 \end{bmatrix} = \begin{bmatrix} y(1) \\ \frac{1}{N-1}\sum_{t=2}^{N} y(t) - y(1) \end{bmatrix}$

The covariance of this estimate is

$\mathrm{E}\,(\hat\theta - \theta^0)(\hat\theta - \theta^0)^T = \sigma^2 \begin{bmatrix} 1 & -1 \\ -1 & \frac{N}{N-1} \end{bmatrix}$

As the number of measured data grows, $N \to \infty$,

$\mathrm{E}\,(\hat\theta - \theta^0)(\hat\theta - \theta^0)^T \to \sigma^2 \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$

and the estimate does NOT improve!
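The closed-form covariance for the unit-step input can be verified directly against $\sigma^2(\Phi^T\Phi)^{-1}$; a short sketch (NumPy assumed, $\sigma^2 = 1$ for simplicity):

```python
import numpy as np

def step_cov_formula(N, sigma2=1.0):
    # Closed form from the derivation: sigma^2 [[1, -1], [-1, N/(N-1)]]
    return sigma2 * np.array([[1.0, -1.0], [-1.0, N / (N - 1)]])

def step_cov_direct(N, sigma2=1.0):
    # sigma^2 (Phi^T Phi)^{-1} built directly from the unit-step input
    u = np.ones(N + 1)
    u[0] = 0.0                              # u(0) = 0, u(t) = 1 for t >= 1
    Phi = np.column_stack([u[1:], u[:-1]])  # rows [u(t), u(t-1)], t = 1..N
    return sigma2 * np.linalg.inv(Phi.T @ Phi)
```

The variance of $\hat b_0$ stays at $\sigma^2$ no matter how large $N$ is: a step input only excites the system once, so more data does not help.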
Solution (Problem 2.2):

Consider now the model

$y(t) = b_0 u(t) + b_1 u(t-1) + e(t)$

when the input signal is white noise with unit variance,

$\mathrm{E}\,u(t)^2 = 1, \qquad \mathrm{E}\,u(t)u(s) = 0 \text{ if } t \neq s,$

and when $u$ and $e$ are independent:

$\mathrm{E}\,u(t)e(s) = 0 \quad \forall\, t, s$

These assumptions imply that

$\mathrm{E}\,y(t)u(t) = b_0, \qquad \mathrm{E}\,y(t)u(t-1) = b_1$
Solution (Problem 2.2):

The LS estimate becomes dependent on the realization of the stochastic variable $u(t)$; we need to consider its mean value

$\mathrm{E}\,\hat\theta = \mathrm{E}\left\{ \begin{bmatrix} \sum_{t=1}^{N} u(t)^2 & \sum_{t=2}^{N} u(t)u(t-1) \\ \sum_{t=2}^{N} u(t)u(t-1) & \sum_{t=2}^{N} u(t-1)^2 \end{bmatrix}^{-1} \begin{bmatrix} \sum_{t=1}^{N} u(t)y(t) \\ \sum_{t=2}^{N} u(t-1)y(t) \end{bmatrix} \right\}$

For large $N$ each normalized sum approaches its mean, e.g. $\sum_{t=1}^{N} u(t)^2 = N \left(\frac{1}{N}\sum_{t=1}^{N} u(t)^2\right) \approx N\,\mathrm{E}\,u(t)^2$, so

$\mathrm{E}\,\hat\theta \approx \begin{bmatrix} N\,\mathrm{E}\,u(t)^2 & (N-1)\,\mathrm{E}\,u(t)u(t-1) \\ (N-1)\,\mathrm{E}\,u(t)u(t-1) & (N-1)\,\mathrm{E}\,u(t-1)^2 \end{bmatrix}^{-1} \begin{bmatrix} N\,\mathrm{E}\,u(t)y(t) \\ (N-1)\,\mathrm{E}\,u(t-1)y(t) \end{bmatrix} = \begin{bmatrix} N & 0 \\ 0 & N-1 \end{bmatrix}^{-1} \begin{bmatrix} N b_0 \\ (N-1) b_1 \end{bmatrix} = \begin{bmatrix} b_0 \\ b_1 \end{bmatrix}$
Solution (Problem 2.2):

To compute the covariance, use the formula in the Theorem:

$\mathrm{cov}(\hat\theta - \theta^0) = \sigma^2\, \mathrm{E}\left(\Phi(N)^T \Phi(N)\right)^{-1} \approx \sigma^2 \begin{bmatrix} N & 0 \\ 0 & N-1 \end{bmatrix}^{-1} = \sigma^2 \begin{bmatrix} \frac{1}{N} & 0 \\ 0 & \frac{1}{N-1} \end{bmatrix}$

It converges to zero as $N \to \infty$!
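In contrast to the unit-step case, a persistently exciting white-noise input makes the covariance shrink with $N$; a sketch computing $\sigma^2(\Phi^T\Phi)^{-1}$ for one white-noise realization at two data lengths (values are illustrative; NumPy assumed):

```python
import numpy as np

def wn_cov(N, sigma2=1.0, seed=0):
    # sigma^2 (Phi^T Phi)^{-1} for one white-noise input realization
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(N + 1)          # hypothetical white-noise input
    Phi = np.column_stack([u[1:], u[:-1]])  # rows [u(t), u(t-1)]
    return sigma2 * np.linalg.inv(Phi.T @ Phi)

cov_small = wn_cov(50)
cov_large = wn_cov(5000)
```

The diagonal entries behave like $1/N$ and $1/(N-1)$, so the estimate keeps improving as more data arrive.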
Next Lecture / Assignments: