
# Lecture 5: Real-Time Parameter Estimation

• Least Squares and Recursive Computations
• Estimating Parameters in Dynamical Systems
• Persistent Excitation and Linear Filtering
• Examples

© Leonid Freidovich. April 15, 2010. Elements of Iterative Learning and Adaptive Control: Lecture 5.


Theorem (Persistent Excitation for FIR model):
Assume that the data are generated by the following FIR model:

$$ y(t) = g_1 u(t-1) + g_2 u(t-2) + \cdots + g_n u(t-n) + e(t) $$

with $E\{e(t)\} = 0$ and mutually independent $e(t)$. Then, one can use the Recursive Least Squares algorithm with

$$ \varphi(t,n)^T = \big[\, u(t-1),\ u(t-2),\ \ldots,\ u(t-n) \,\big] $$

to estimate

$$ \theta^0(n) = \big[\, g_1,\ g_2,\ \ldots,\ g_n \,\big]^T. $$

If the signal u(t) is persistently exciting of at least order n, the estimate is unbiased. A possible condition to check is

$$ \lim_{N\to\infty} \frac{1}{N} \begin{bmatrix} \varphi(n,n)^T \\ \varphi(n+1,n)^T \\ \vdots \\ \varphi(N,n)^T \end{bmatrix}^T \begin{bmatrix} \varphi(n,n)^T \\ \varphi(n+1,n)^T \\ \vdots \\ \varphi(N,n)^T \end{bmatrix} > 0. $$
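The theorem can be probed with a direct numerical experiment: run RLS on data from a simulated FIR model driven by white noise, which is persistently exciting of any order. A minimal Python sketch (not from the slides; the coefficients in `g` and the noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical FIR coefficients theta0(n) = [g1, g2, g3]
g = np.array([0.7, -0.3, 0.2])
n = len(g)

N = 20000
u = rng.standard_normal(N)          # white-noise input: PE of any order
e = 0.1 * rng.standard_normal(N)    # zero-mean, mutually independent e(t)

# Recursive Least Squares: theta <- theta + K * (y - phi^T theta)
theta = np.zeros(n)
P = 1000.0 * np.eye(n)
for t in range(n, N):
    phi = u[t - n:t][::-1]          # [u(t-1), u(t-2), ..., u(t-n)]
    y = g @ phi + e[t]
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = P - np.outer(K, phi @ P)

print(theta)  # close to g
```

With this excitation the estimate is consistent, so `theta` approaches `g` as N grows.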

Persistence of Excitation (Def 1):
Definition 1: Given a signal u(t), i.e. a sequence of numbers u(0), u(1), u(2), ..., it is called persistently exciting of order n if the limits

$$ c(k) = \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} u(i)\, u(i-k), \qquad k = 0, 1, \ldots, n-1 $$

exist and if

$$ C_n = \begin{bmatrix} c(0) & c(1) & \cdots & c(n-1) \\ c(1) & c(0) & \cdots & c(n-2) \\ \vdots & \vdots & \ddots & \vdots \\ c(n-1) & c(n-2) & \cdots & c(0) \end{bmatrix} > 0. $$
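For finite data, the limits in Definition 1 can be replaced by sample averages and C_n checked for positive definiteness. A small sketch (the helper names `covariances` and `pe_matrix` are mine, not from the lecture):

```python
import numpy as np

def covariances(u, n):
    """Sample estimates of c(k) = (1/N) sum_i u(i) u(i-k), k = 0..n-1."""
    N = len(u)
    return np.array([np.dot(u[k:], u[:N - k]) / N for k in range(n)])

def pe_matrix(u, n):
    """Symmetric Toeplitz matrix C_n built from the sample covariances."""
    c = covariances(u, n)
    return np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])

# White noise: c(0) ~ 1 and c(k) ~ 0 for k > 0, so C_n > 0 for every n
u = np.random.default_rng(1).standard_normal(50000)
C4 = pe_matrix(u, 4)
print(np.linalg.eigvalsh(C4))  # all eigenvalues near 1
```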

Persistence of Excitation (Def 2):
Definition 2: Given a signal u(t), i.e. a sequence of numbers u(0), u(1), u(2), ..., it is called persistently exciting of order n if for all t there exists an integer m such that

$$ \rho_1 I_n > \sum_{k=t}^{t+m} \varphi(k,n)\, \varphi(k,n)^T > \rho_2 I_n $$

where $\rho_1, \rho_2 > 0$ and

$$ \varphi(t,n)^T = \big[\, u(t-1),\ u(t-2),\ \ldots,\ u(t-n) \,\big]. $$

It can be shown that the conditions of Def. 2 imply the ones of Def. 1.

Polynomial Conditions for Persistent Excitation:
A signal with the property that the limits

$$ c(k) = \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} u(i)\, u(i-k), \qquad k = 0, 1, \ldots, n-1 $$

exist is PE of order n if and only if

$$ L = \lim_{N\to\infty} \frac{1}{N} \sum_{k=1}^{N} \big[ A(q)\, u(k) \big]^2 > 0 $$

for all non-zero polynomials A(q) of degree n − 1 or less.

Proof: with $A(q) = a_0 q^{n-1} + \cdots + a_{n-1}$,

$$ L = [a_0, \ldots, a_{n-1}]\, C_n\, [a_0, \ldots, a_{n-1}]^T. $$

Corollary: if u(t) satisfies a polynomial equation of degree n for t ≥ t0 with some t0, then it can be at most PE of order n.
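The identity $L = [a_0, \ldots, a_{n-1}]\, C_n\, [a_0, \ldots, a_{n-1}]^T$ behind the proof is easy to check numerically. A sketch with n = 3, an arbitrarily chosen polynomial A(q), and white noise (all values are for illustration only):

```python
import numpy as np

u = np.random.default_rng(2).standard_normal(200000)
N = len(u)
a = np.array([1.0, -0.5, 0.25])   # A(q) = q^2 - 0.5 q + 0.25, degree n - 1 = 2
n = 3

# Direct side: average of [A(q) u(k)]^2, with A(q)u(k) = a0 u(k+2) + a1 u(k+1) + a2 u(k)
Au = a[0] * u[2:] + a[1] * u[1:-1] + a[2] * u[:-2]
L_direct = np.mean(Au ** 2)

# Quadratic-form side: a^T C_n a with the Toeplitz covariance matrix C_n
c = np.array([np.dot(u[k:], u[:N - k]) / N for k in range(n)])
Cn = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])
L_quad = a @ Cn @ a

print(L_direct, L_quad)  # both near a0^2 + a1^2 + a2^2 = 1.3125 for unit white noise
```

The two quantities differ only by edge effects of order 1/N.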

Examples of computing order of PE:
Example 1: The STEP signal, u(t) = 0 for t < 0 and u(t) = 1 for t ≥ 0, satisfies (q − 1) u(t) = 0 for t ≥ 0, and

$$ c(0) = \lim_{t\to\infty} \frac{1}{t} \sum_{k=1}^{t} u^2(k) = 1. $$

Hence, the step is PE of order 1.

Example 2: Any m-PERIODIC signal, u(t + m) = u(t) for all t, satisfies (q^m − 1) u = 0; so it can be at most PE of order m.

Example 3: order of PE for a sinusoid
Example 3: consider the signal u(t) = sin(ω0 t) with 0 < ω0 < π.
It is not hard to show [using sin(a + b) = sin(a) cos(b) + sin(b) cos(a)] that

$$ \big( q^2 - 2 q \cos(\omega_0) + 1 \big)\, u(t) = 0. $$

Hence, it can be at most PE of order 2. Let us compute c(k):

$$ c(k) = \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} \sin(\omega_0 i) \sin(\omega_0 i - \omega_0 k) $$
$$ = \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} \Big[ \sin^2(\omega_0 i) \cos(\omega_0 k) - \sin(\omega_0 i) \cos(\omega_0 i) \sin(\omega_0 k) \Big] $$
$$ = \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} \Big[ \tfrac{1}{2} \cos(\omega_0 k) - \tfrac{1}{2} \cos(2 \omega_0 i) \cos(\omega_0 k) - \tfrac{1}{2} \sin(2 \omega_0 i) \sin(\omega_0 k) \Big] $$
$$ = \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} \Big[ \tfrac{1}{2} \cos(\omega_0 k) - \tfrac{1}{2} \cos(2 \omega_0 i - \omega_0 k) \Big] = \tfrac{1}{2} \cos(\omega_0 k), $$

since the oscillating term averages to zero. Hence

$$ C_2 = \begin{bmatrix} c(0) & c(1) \\ c(1) & c(0) \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 & \cos(\omega_0) \\ \cos(\omega_0) & 1 \end{bmatrix} > 0, $$

so the sinusoid is PE of order exactly 2.
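Both claims, the annihilating polynomial and c(k) = ½ cos(ω0 k), can be confirmed numerically; a quick sketch (ω0 = 1 is an arbitrary choice in (0, π)):

```python
import numpy as np

w0 = 1.0                                  # any 0 < w0 < pi
t = np.arange(100000)
u = np.sin(w0 * t)

# (q^2 - 2 q cos(w0) + 1) u(t) = 0: the residual is numerical round-off only
resid = u[2:] - 2.0 * np.cos(w0) * u[1:-1] + u[:-2]
print(np.max(np.abs(resid)))

# Sample covariances approach c(k) = 0.5 cos(w0 k)
N = len(u)
ck = np.array([np.dot(u[k:], u[:N - k]) / N for k in range(3)])
print(ck, 0.5 * np.cos(w0 * np.arange(3)))
```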

Frequency domain characterization of PE signals
Definition: A signal u(t) is called stationary if the limit

$$ R_u(k) = \lim_{t\to\infty} \frac{1}{t} \sum_{i=m}^{t+m-1} u(i)\, u(i-k) $$

exists uniformly in m.

Note: $R_u(k) = R_u(-k)$, and $R_u(k) = c(k)$ is called the covariance of u(t).

Definition: The Fourier transform of the covariance, $\Phi_u(\omega)$, is called the frequency spectrum of u(t):

$$ R_u(k) = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{j \omega k}\, \Phi_u(\omega)\, d\omega, \qquad j = \sqrt{-1}, $$
$$ \Phi_u(\omega) = \sum_{k=-\infty}^{+\infty} R_u(k)\, e^{-j \omega k} = \left| \frac{1}{2\pi} \int_{-\infty}^{+\infty} u(t)\, e^{-j \omega t}\, dt \right|^2. $$

Example: for the signal u(t) = sin(ω0 t) with 0 < ω0 < π we have

$$ R_u(k) = \tfrac{1}{2} \cos(\omega_0 k), \qquad \Phi_u(\omega) = \frac{\pi}{2} \big[ \delta(\omega - \omega_0) + \delta(\omega + \omega_0) \big]. $$

Lemma: A stationary signal is PE of order n if its frequency spectrum is non-zero in at least n points.

Example: white noise is PE of any order, while the signal

$$ u(t) = \sum_{i=1}^{m} A_i \sin(\omega_i t + \varphi_i), \qquad 0 < \omega_i < \pi, \quad \omega_i \ne \omega_j, $$

is PE of order 2m.
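The lemma can be illustrated numerically: a sum of two sinusoids has four non-zero spectral points, so C4 is positive definite while C5 becomes singular in the limit. A sketch with arbitrarily chosen amplitudes and frequencies:

```python
import numpy as np

def pe_matrix(u, n):
    """Toeplitz matrix C_n from sample covariances c(k)."""
    N = len(u)
    c = np.array([np.dot(u[k:], u[:N - k]) / N for k in range(n)])
    return np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])

t = np.arange(200000)
u = np.sin(1.0 * t) + 0.5 * np.sin(2.0 * t)   # m = 2 distinct frequencies in (0, pi)

C4 = pe_matrix(u, 4)
C5 = pe_matrix(u, 5)
print(np.linalg.eigvalsh(C4).min())   # bounded away from zero: PE of order 2m = 4
print(np.linalg.eigvalsh(C5).min())   # essentially zero: not PE of order 5
```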

Theorem (Persistent Excitation for ARMA model):
Assume that data are generated by the following ARMA model:

$$ \underbrace{\big( q^n + a_1 q^{n-1} + \cdots + a_n \big)}_{A(q)}\, y(t) = \underbrace{\big( b_1 q^{m-1} + b_2 q^{m-2} + \cdots + b_m \big)}_{B(q)}\, u(t), \qquad n > m. $$

Then, one can use the RLS algorithm with

$$ \varphi(t-1)^T = \big[ -y(t-1),\ \ldots,\ -y(t-n),\ u(t-n+m-1),\ \ldots,\ u(t-n) \big] $$

to estimate $\theta^0 = [a_1, \ldots, a_n, b_1, \ldots, b_m]^T$.

The estimate is unbiased if the signal u(t) is such that

$$ \lim_{t\to\infty} \frac{1}{t}\, \Phi(t-1)^T \Phi(t-1) = \lim_{t\to\infty} \frac{1}{t} \begin{bmatrix} \varphi(n)^T \\ \varphi(n+1)^T \\ \vdots \\ \varphi(t-1)^T \end{bmatrix}^T \begin{bmatrix} \varphi(n)^T \\ \varphi(n+1)^T \\ \vdots \\ \varphi(t-1)^T \end{bmatrix} > 0. $$

.     −y(t − n)   φ(t−1) =   u(t−n+m−1)      . .Persistent Excitation for  A(q)y(t) = B(q)u(t)  −y(t − 1)   . 2010. 11/15 . .  u(t − n) c Leonid Freidovich. April 15. Elements of Iterative Learning and Adaptive Control: Lecture 5 – p. .

Persistent Excitation for   A(q)y(t) = B(q)u(t)   −y(t − 1)     .. . . 11/15 .   u(t − n) q −n u(t) c Leonid Freidovich. 2010. April 15...       −n  −y(t − n)   −q y(t)      φ(t−1) =  =   −n+m−1 u(t)   u(t−n+m−1)   q      .. .  . −q −1 y(t) Elements of Iterative Learning and Adaptive Control: Lecture 5 – p. .  .

.      −q −n−m A(q)y(t)     q −n−1 A(q)u(t)      ..       m −n  −y(t − n)    q −q y(t) = = φ(t−1) =   u(t−n+m−1)   q −n+m−1 u(t)  A(q)         ..   q −n−m A(q)u(t) Elements of Iterative Learning and Adaptive Control: Lecture 5 – p. 11/15 . −q −1 y(t)  −q −1−m A(q)y(t)    .. .  .Persistent Excitation for   A(q)y(t) = B(q)u(t)   −y(t − 1)     ..  . . . April 15. 2010.. ...   u(t − n) q −n u(t) c Leonid Freidovich.

2010.  . 11/15 ..  q −n−m A(q)u(t) Elements of Iterative Learning and Adaptive Control: Lecture 5 – p. April 15..       m −n  −y(t − n)    q −q y(t) = = φ(t−1) =   u(t−n+m−1)   q −n+m−1 u(t)  A(q)         ...  −q −1−m A(q)y(t)    ... . . .Persistent Excitation for   A(q)y(t) = B(q)u(t)   −y(t − 1) −q −1 y(t)     . ..  .     −n−m  −q A(q)y(t)      q −n−1 A(q)u(t)      ..   u(t − n) q −n u(t) c Leonid Freidovich.

.     −n−m  −q B(q)u(t)      q −n−1 A(q)u(t)      . 2010.  .. . ..  −q −1−m B(q)u(t)    .Persistent Excitation for   A(q)y(t) = B(q)u(t)   −y(t − 1) −q −1 y(t)     .  ...   u(t − n) q −n u(t) c Leonid Freidovich..  q −n−m A(q)u(t) Elements of Iterative Learning and Adaptive Control: Lecture 5 – p. . ..       m −n  −y(t − n)    q −q y(t) = = φ(t−1) =   u(t−n+m−1)   q −n+m−1 u(t)  A(q)         .. April 15. 11/15 .

.Persistent Excitation for   A(q)y(t) = B(q)u(t)   −y(t − 1)     .   u(t − n) q −n u(t) c Leonid Freidovich.....       m −n  −y(t − n)    q −q y(t) = = φ(t−1) =   u(t−n+m−1)   q −n+m−1 u(t)  A(q)         . . 2010.. . April 15..  .      −q −n−m B(q)u(t)     q −n−1 A(q)u(t)      .  . . 11/15 .. −q −1 y(t)  −q −1−m B(q)u(t)    ..   q −n−m A(q)u(t) Elements of Iterative Learning and Adaptive Control: Lecture 5 – p.

.  . Elements of Iterative Learning and Adaptive Control: Lecture 5 – p. 11/15 ... . ..... 2010. April 15. .  .  .          m −n−m −n  −y(t − n)     q  −q B(q)u(t)  −q v(t)      φ(t−1) =  = = S    q −n−1 v(t)  −n−1 A(q)u(t)   u(t−n+m−1)  A(q)  q          . .Persistent Excitation for  A(q)y(t) = B(q)u(t)      −y(t − 1)       .  ..    u(t − n) q −n−m A(q)u(t) q −n−m v(t) where v(t) = qm u(t) −q −1−m B(q)u(t) and S −q −1 v(t) is not singular in the A(q) case when A(q) and B(q) are relatively prime. c Leonid Freidovich.

 . 11/15 . c Leonid Freidovich..Persistent Excitation for  A(q)y(t) = B(q)u(t)      −y(t − 1) −v(t − 1)       .    u(t − n) q −n−m A(q)u(t) v(t−n−m) {z } | where v(t) = qm −q −1−m B(q)u(t) ψ(t−1) u(t) and S is not singular in the A(q) case when A(q) and B(q) are relatively prime.  . ..          m −n−m  −y(t − n)     q  −q B(q)u(t)  −v(t − n)      φ(t−1) =  = = S    v(t−n−1)  −n−1 A(q)u(t)   u(t−n+m−1)  A(q)  q          .  . ..  . Elements of Iterative Learning and Adaptive Control: Lecture 5 – p. . April 15... .. 2010...

.          m −n−m  −y(t − n)     q  −q B(q)u(t)  −v(t − n)      φ(t−1) =  = = S    v(t−n−1)  −n−1 A(q)u(t)   u(t−n+m−1)  A(q)  q          ..  .. . .  . 2010.  .    u(t − n) q −n−m A(q)u(t) v(t−n−m) {z } | where v(t) = qm −q −1−m B(q)u(t) ψ(t−1) u(t) and S is not singular in the A(q) case when A(q) and B(q) are relatively prime.. . April 15.. Elements of Iterative Learning and Adaptive Control: Lecture 5 – p.Persistent Excitation for  A(q)y(t) = B(q)u(t)      −y(t − 1) −v(t − 1)       .  .. . 11/15 ..   h i h i 1 1 T lim Φ(t−1) Φ(t−1) = S lim Ψ(t − 1)T Ψ(t − 1) S T > 0 t→∞ t t→∞ t c Leonid Freidovich..
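The requirement that S be nonsingular is the classical coprimeness condition on A(q) and B(q), and it can be probed with the standard Sylvester (resultant) matrix. A sketch with made-up coefficients; note this is the textbook resultant test, not literally the matrix S of the derivation above:

```python
import numpy as np

def sylvester(a, b):
    """Sylvester matrix of polynomials a (degree n) and b (degree m), highest power first."""
    n, m = len(a) - 1, len(b) - 1
    S = np.zeros((n + m, n + m))
    for i in range(m):
        S[i, i:i + n + 1] = a
    for i in range(n):
        S[m + i, i:i + m + 1] = b
    return S

# Coprime pair: the determinant (the resultant) is nonzero
A = np.array([1.0, -1.5, 0.7])    # A(q) = q^2 - 1.5 q + 0.7
B = np.array([0.5, 0.3])          # B(q) = 0.5 q + 0.3
d1 = np.linalg.det(sylvester(A, B))
print(d1)                          # nonzero

# Common factor (q - 0.5): the determinant vanishes
A2 = np.convolve([1.0, -0.5], [1.0, -0.2])
B2 = np.convolve([1.0, -0.5], [0.5])
d2 = np.linalg.det(sylvester(A2, B2))
print(d2)                          # ~ 0
```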

The conditions for parameter convergence, formulated in the case of the FIR model for the signal u(t), are similar in the case of the ARMA model, but should concern the signal $v(t) = \frac{q^m}{A(q)}\, u(t)$. Since n > m and A(q) is stable, $q^m/A(q)$ is stable and minimum phase. Hence

$$ \Phi_v(\omega) = \left| \frac{e^{j m \omega}}{A(e^{j\omega})} \right|^2 \Phi_u(\omega). $$

Theorem (Persistent Excitation for ARMA model): Suppose that
1. u(t) is a stationary signal such that Φu(ω) is non-zero in at least (n + m) points;
2. A(q) is stable (i.e. all roots are inside the unit circle);
3. A(q) and B(q) are relatively prime.
Then, the RLS algorithm converges to the correct value.

Example:
Consider the system

$$ y(t) + a_1 y(t-1) + a_2 y(t-2) = b_1 u(t-1) + b_2 u(t-2). $$

Choose a nice (as simple as possible) input signal which is rich enough to ensure parameter convergence.

Solution: $A(q) = q^2 + a_1 q + a_2$, $B(q) = b_1 q + b_2$. Since n = 2 and m = 2, to have exactly 4 non-zero points in Φu(ω) we can take

$$ u(t) = A_1 \sin(\omega_1 t) + A_2 \sin(\omega_2 t), \qquad 0 < \omega_{1,2} < \pi, \quad \omega_1 \ne \omega_2. $$
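A simulation sketch of this example (the parameter values, amplitudes, and frequencies below are arbitrary choices for illustration): simulate the noise-free system with a two-sinusoid input and run RLS on the regressor [−y(t−1), −y(t−2), u(t−1), u(t−2)].

```python
import numpy as np

# Hypothetical true parameters; A(q) = q^2 - 1.2 q + 0.5 is stable
a1, a2, b1, b2 = -1.2, 0.5, 1.0, 0.4
theta_true = np.array([a1, a2, b1, b2])

# Two sinusoids -> 4 nonzero spectral points, enough for n + m = 4 parameters
N = 5000
t = np.arange(N)
u = np.sin(0.7 * t) + np.sin(1.9 * t)

# Simulate y(t) = -a1 y(t-1) - a2 y(t-2) + b1 u(t-1) + b2 u(t-2)  (noise-free)
y = np.zeros(N)
for k in range(2, N):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + b1 * u[k - 1] + b2 * u[k - 2]

# RLS on the regressor [-y(t-1), -y(t-2), u(t-1), u(t-2)]
theta = np.zeros(4)
P = 100.0 * np.eye(4)
for k in range(2, N):
    phi = np.array([-y[k - 1], -y[k - 2], u[k - 1], u[k - 2]])
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y[k] - phi @ theta)
    P = P - np.outer(K, phi @ P)

print(theta)  # converges to theta_true
```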

Example 2.12 (book):
Consider the system

$$ y(t) + a\, y(t-1) = b\, u(t-1) + e(t) $$

with a = −0.8, b = 0.5, and e(t) zero-mean white noise with standard deviation σ = 0.5.
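A possible simulation sketch for this example (the white-noise input choice and the initialization P(0) = 100 I2, θ̂(0) = 0 are assumptions here, matching the homework setup):

```python
import numpy as np

rng = np.random.default_rng(4)

# Example 2.12: y(t) + a y(t-1) = b u(t-1) + e(t)
a, b = -0.8, 0.5
sigma = 0.5

N = 20000
u = rng.standard_normal(N)             # assumed input: white noise (PE of any order)
e = sigma * rng.standard_normal(N)

y = np.zeros(N)
for t in range(1, N):
    y[t] = -a * y[t - 1] + b * u[t - 1] + e[t]

# RLS with regressor [-y(t-1), u(t-1)], P(0) = 100 I, theta(0) = 0
theta = np.zeros(2)
P = 100.0 * np.eye(2)
for t in range(1, N):
    phi = np.array([-y[t - 1], u[t - 1]])
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y[t] - phi @ theta)
    P = P - np.outer(K, phi @ P)

print(theta)  # approaches [a, b] = [-0.8, 0.5]
```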

Next Lecture / Assignments:
Next meeting (April 19, 13:00–15:00, in A205Tekn): Recitations.
Homework problem: repeat the simulation for Example 2.12 shown above (see pages 71–72), taking $P(0) = 100\, I_2$ and $\hat\theta(0) = 0$.