System Model
02-Apr-2009
Introduction
◮ So far, we have only considered estimation problems with fixed
parameters, e.g.
◮ Yk ∼ N(θ, σ²), i.i.d.
◮ Yk = a cos(ωk + φ) + Wk for θ = [a, φ, ω]⊤.
◮ Yk ∼ θe^{−θyk}, i.i.d.
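As a reminder of how these fixed-parameter problems are solved, here is a minimal numerical sketch (not from the slides; the true values θ = 2 for the Gaussian case and θ = 0.5 for the exponential case are made up for illustration) of the ML estimates for two of the examples above:

```python
import random

# Hypothetical illustration: ML estimates for two fixed-parameter examples.
random.seed(0)

# Gaussian case: Yk ~ N(theta, sigma^2), ML estimate is the sample mean.
theta, sigma = 2.0, 1.0
y_gauss = [random.gauss(theta, sigma) for _ in range(10_000)]
theta_hat_gauss = sum(y_gauss) / len(y_gauss)

# Exponential case: Yk ~ theta * exp(-theta * y), ML estimate is 1 / sample mean.
rate = 0.5
y_exp = [random.expovariate(rate) for _ in range(10_000)]
theta_hat_exp = len(y_exp) / sum(y_exp)

print(theta_hat_gauss, theta_hat_exp)  # close to 2.0 and 0.5
```

Both estimators operate on a batch of samples whose underlying parameter never changes, which is exactly the assumption the rest of this lecture relaxes.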
◮ Many problems require us to estimate dynamic or time-varying
parameters, e.g.
◮ Radar (position and velocity of target changing over time)
◮ Communications (amplitude and phase of signal changing over time)
◮ Stock market (price of shares next week changing over time)
◮ Step 1: We need to develop a general model for
◮ How time-varying parameters evolve over time and
◮ How observations are generated from the parameter state.
◮ Step 2: We need to develop good techniques for estimating dynamic
parameters. We can expect these techniques to be based on our good
static parameter estimators, i.e. MVU and/or ML, and also use some
knowledge of the time-varying model.
Worcester Polytechnic Institute D. Richard Brown III 02-Apr-2009 2 / 14
ECE531 Lecture 10b: Dynamic Parameter Estim. System Model
[Block diagram of the system model: observation map h[n] : R^m → R^k and measurement noise V[n].]
[Figure: sample trajectory plotted as velocity v versus position x.]
Remarks
The mean squared error of an estimator is MSE(X̂[ℓ]) = E{‖X[ℓ] − X̂[ℓ]‖²},
where X̂[ℓ] is the estimate of the state X[ℓ]. We assume that we know
the joint distribution of the input U[n] and the distribution of the initial
state X[0].
Given the observations Y[0], . . . , Y[k], what estimator X̂[ℓ] minimizes the
MSE? The MSE-optimal estimate is the conditional mean
E{X[ℓ] | Y[0], . . . , Y[k]}, and computing it directly would require us to
◮ Keep all the past observations in memory
◮ Compute the estimate as a function of all the past observations
◮ In other words, the general dynamic state estimation problem has
linearly growing memory and computational burdens.
◮ Additional restrictions are going to be necessary to make the dynamic
state estimation problem computationally feasible.
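The memory contrast can be sketched with the simplest possible example, a running sample mean (an illustration of the memory point only, not of the full dynamic estimation problem):

```python
# A minimal sketch of the memory issue: a batch estimator must keep every
# past observation, while a recursive estimator carries only a fixed-size
# summary. Here the estimator is the running sample mean of the observations.
observations = [1.0, 2.0, 3.0, 4.0, 5.0]

# Batch: recompute from all stored observations (O(k) memory and work at step k).
batch = [sum(observations[:k + 1]) / (k + 1) for k in range(len(observations))]

# Recursive: update the previous estimate with the newest observation only
# (O(1) memory and O(1) work per step).
recursive, est = [], 0.0
for k, y in enumerate(observations):
    est = est + (y - est) / (k + 1)
    recursive.append(est)

print(batch == recursive)  # the two sequences of estimates agree
```

Good dynamic estimators aim for the second pattern: a fixed-size state summary updated once per observation.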
The linear dynamical system model is

X[n + 1] = F[n]X[n] + G[n]U[n]
Y[n] = H[n]X[n] + V[n]

where, for each n, F[n] ∈ R^{m×m}, G[n] ∈ R^{m×s}, and H[n] ∈ R^{k×m}.
◮ We’ve already seen that one-dimensional motion fits within this linear
model.
◮ The same is true for two- and three-dimensional motion.
◮ Many nonlinear systems can be approximately fit to this model by
linearizing f and h around a nominal state (Taylor series expansion).
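A minimal sketch of how one-dimensional motion fits the linear model (the sampling period T and the choice of G below are assumptions for illustration, not taken from the slides):

```python
# One-dimensional constant-velocity motion as X[n+1] = F X[n] + G U[n],
# with state [position, velocity] and a scalar acceleration disturbance U[n].
# T and G are illustrative assumptions.
T = 0.1                      # sampling period (s), assumed
F = [[1.0, T], [0.0, 1.0]]   # position advances by velocity * T each step
G = [[T * T / 2.0], [T]]     # acceleration disturbance enters both states

def step(F, G, x, u):
    """One update of X[n+1] = F X[n] + G U[n] (scalar disturbance u)."""
    return [F[0][0] * x[0] + F[0][1] * x[1] + G[0][0] * u,
            F[1][0] * x[0] + F[1][1] * x[1] + G[1][0] * u]

# With no disturbance, the trajectory is straight-line motion.
x = [0.0, 1.0]               # start at position 0 with velocity 1
for _ in range(10):
    x = step(F, G, x, 0.0)
print(x)  # position near 1.0 after 10 steps of T = 0.1
```

Stacking two or three such position/velocity pairs in one state vector gives the two- and three-dimensional versions mentioned above.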
Iterating the time-varying recursion gives

X[n + 1] = (∏_{k=0}^{n} F[n − k]) X[0] + Σ_{j=0}^{n} (∏_{k=0}^{n−j−1} F[n − k]) G[j]U[j]

where

∏_{k=0}^{t} F[n − k] := F[n]F[n − 1]F[n − 2] · · · F[n − t]

and an empty product is understood to be the identity.
When

F[n] ≡ F
G[n] ≡ G

then the discrete-time dynamical system is time invariant and the state at
time n + 1 is simply

X[n + 1] = F^{n+1} X[0] + Σ_{j=0}^{n} F^{n−j} G U[j]
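A quick numerical check (an illustration with made-up dimensions and random matrices, not from the slides) that the closed-form time-invariant solution matches running the recursion step by step:

```python
import numpy as np

# Verify X[n+1] = F^(n+1) X[0] + sum_{j=0}^{n} F^(n-j) G U[j]
# against the recursion X[n+1] = F X[n] + G U[n].
rng = np.random.default_rng(0)
m, s, n = 3, 2, 5
F = 0.5 * rng.standard_normal((m, m))   # arbitrary time-invariant dynamics
G = rng.standard_normal((m, s))
x0 = rng.standard_normal(m)
U = rng.standard_normal((n + 1, s))     # U[0], ..., U[n]

# Recursion, n + 1 steps.
x = x0.copy()
for j in range(n + 1):
    x = F @ x + G @ U[j]

# Closed form.
x_closed = np.linalg.matrix_power(F, n + 1) @ x0
for j in range(n + 1):
    x_closed += np.linalg.matrix_power(F, n - j) @ G @ U[j]

print(np.allclose(x, x_closed))  # True
```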
We assume the input U[n] and the measurement noise V[n] are zero-mean,
white, and uncorrelated with each other:

E{U[k]} = 0
E{U[k]U⊤[j]} = Q[k] if k = j, and 0 otherwise
E{V[k]} = 0
E{V[k]V⊤[j]} = R[k] if k = j, and 0 otherwise
E{U[k]V⊤[j]} = 0 for all j and k
We also assume that the initial state X[0] ∼ N (m[0], Σ[0]) is a Gaussian
random vector independent of U [0], U [1], . . . and V [0], V [1], . . . .
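These assumptions can be sketched numerically (the dimensions and the values of Q, R, m[0], Σ[0] below are assumptions for illustration, not from the slides):

```python
import numpy as np

# Draw the random elements of the model: zero-mean white process noise U[k]
# with covariance Q[k] = Q, zero-mean white measurement noise V[k] with
# covariance R[k] = R, and Gaussian initial state X[0] ~ N(m[0], Sigma[0]).
# All values below are illustrative assumptions.
rng = np.random.default_rng(1)
N = 50_000                       # number of time steps drawn

Q = np.array([[0.5]])            # E{U[k] U^T[k]}
R = np.array([[2.0]])            # E{V[k] V^T[k]}
m0 = np.zeros(2)                 # m[0]
Sigma0 = np.eye(2)               # Sigma[0]

x0 = rng.multivariate_normal(m0, Sigma0)             # X[0]
U = rng.multivariate_normal(np.zeros(1), Q, size=N)  # U[0], ..., U[N-1]
V = rng.multivariate_normal(np.zeros(1), R, size=N)  # V[0], ..., V[N-1]

# Sample moments should be close to the assumed statistics: means near 0,
# second moments near Q and R, and (whiteness) lag-1 correlation near 0.
lag1 = (U[:-1] * U[1:]).mean()
print(U.mean(), (U * U).mean(), V.mean(), (V * V).mean(), lag1)
```

This linear model with white Gaussian noise and Gaussian initial state is exactly the setting in which a recursive MMSE estimator with fixed memory becomes possible.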