Abstract: In this paper, the state estimation problem for continuous-time linear systems with two types of sampling is considered. First, the optimal state estimator under periodic sampling is presented. Then the state estimator with event-based updates is designed, i.e., when an event occurs the estimator is updated linearly by using the output measurement, while between consecutive event times the estimator is updated by the minimum mean-squared error criterion. The average estimation errors under both sampling schemes are compared quantitatively for first and second order systems, respectively. A numerical example is given to compare the effectiveness of the two state estimators.
estimators under the same average sampling rates for first order systems and symmetric second order systems, respectively. Finally, we give a numerical example to compare the estimation errors of the two state estimators.

The following notations will be used in the paper. For a sequence of numbers a_j, j = 1, ..., m, diag{a_1, ..., a_m} denotes the diagonal matrix with a_j on the diagonal and zeros elsewhere. For a stochastic process x(t), t ≥ 0, σ(x(s), s ≤ t) denotes the σ-algebra generated by x(s), s ≤ t. E[·] and E[·|·] denote the mathematical expectation and the conditional expectation, respectively.

2. PROBLEM FORMULATION

Consider the following linear stochastic system:

    dx(t) = Ax(t)dt + DdW(t),   (1)
    y(t) = Cx(t),   (2)

whose solution satisfies

    x(t) = e^{A(t−t_0)}x(t_0) + ∫_{t_0}^t e^{A(t−s)}DdW(s), t ≥ t_0.   (3)

From (3), we obtain that

    E[x(t)|H_t] = E[e^{A(t−τ_k)}x(τ_k)|H_{τ_k}] + E[E[∫_{τ_k}^t e^{A(t−s)}DdW(s)|F_{τ_k}]|H_{τ_k}]
                = e^{A(t−τ_k)}E[x(τ_k)|H_{τ_k}], τ_k ≤ t < τ_{k+1}.   (4)

From the fundamental result of state estimation (Åström, 2006; Oksendal, 2003), we have the MMSE (minimum mean-squared error) update between two consecutive sampling times:

    x̂(t) = e^{A(t−τ_k)}x̂(τ_k), τ_k ≤ t < τ_{k+1},

where x̂(t) is the estimator for x(t).

In what follows, we develop the recursive algorithms for x̂(τ_k), k = 0, 1, .... By (2) and (3), it follows that

    x(τ_{k+1}) = e^{A(τ_{k+1}−τ_k)}x(τ_k) + ∫_{τ_k}^{τ_{k+1}} e^{A(τ_{k+1}−s)}DdW(s),
    y(τ_k) = Cx(τ_k), k = 0, 1, ....   (5)

First, consider the case of periodic sampling, i.e., τ_k = kh. Following the argument of the standard Kalman filter derivation (Anderson and Moore, 1979; Åström, 2006), we get the algorithm for the periodic estimator.

Algorithm I. Periodic estimator:

    x̂[(k+1)h] = x̂′[(k+1)h] + K_{k+1}{y[(k+1)h] − Cx̂′[(k+1)h]},
    x̂′[(k+1)h] = e^{Ah}x̂(kh),
    K_{k+1} = P′_{k+1}C^T(CP′_{k+1}C^T)^{−1},
    P_{k+1} = P′_{k+1} − K_{k+1}CP′_{k+1},
    P′_{k+1} = e^{Ah}P_k e^{A^T h} + ∫_{kh}^{(k+1)h} e^{A[(k+1)h−s]}DD^T e^{A^T[(k+1)h−s]} ds.

At the time τ_{k+1}, an event occurs and a new measurement is received by the estimator. Then by Theorem 3.2.1 in Anderson and Moore (1979), we get the linear MMSE update as follows:

    x̂(τ_{k+1}) = x̂(τ_{k+1}−) + K_{k+1}z(τ_{k+1}−), k = 0, 1, ...,   (9)

where

    K_{k+1} = P⁻_{k+1}C^T(CP⁻_{k+1}C^T)^{−1},   (10)
    P⁻_{k+1} = e^{A(τ_{k+1}−τ_k)}P_k e^{A^T(τ_{k+1}−τ_k)} + ∫_{τ_k}^{τ_{k+1}} e^{A(τ_{k+1}−s)}DD^T e^{A^T(τ_{k+1}−s)} ds,   (11)
    P_k = P⁻_k − K_k CP⁻_k.   (12)

Here, (11) is obtained from

    x̃(τ_{k+1}−) = e^{A(τ_{k+1}−τ_k)}x̃(τ_k) + ∫_{τ_k}^{τ_{k+1}} e^{A(τ_{k+1}−s)}DdW(s),

which follows by (5) and (8).
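Algorithm I can be sketched numerically. The following is a minimal sketch, not the authors' code: the discretized matrices Phi = e^{Ah} and Q = ∫_0^h e^{As}DD^T e^{A^T s} ds are assumed precomputed, and all function and variable names are illustrative.

```python
import numpy as np

def periodic_estimator_step(xhat, P, y_next, Phi, Q, C):
    """One step of Algorithm I (periodic estimator) with a noise-free
    measurement y = C x.  Phi = e^{Ah} and Q = int_0^h e^{As} D D^T e^{A^T s} ds
    are assumed precomputed for the period h; C P' C^T must be invertible."""
    x_pred = Phi @ xhat                                   # time update x'[(k+1)h]
    P_pred = Phi @ P @ Phi.T + Q                          # prediction covariance P'_{k+1}
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T)    # gain K_{k+1}
    xhat_new = x_pred + K @ (y_next - C @ x_pred)         # measurement update
    P_new = P_pred - K @ C @ P_pred                       # P_{k+1}
    return xhat_new, P_new
```

For a scalar system with C = (c), c ≠ 0, one step gives K = 1/c and P_{k+1} = 0: the noise-free measurement pins down the state exactly at the sampling instants, consistent with the expressions used later in the paper.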
19th IFAC World Congress
Cape Town, South Africa. August 24-29, 2014
By the above result we get the following algorithm for the event-based estimator.

Algorithm II. Event-based estimator:

    x̂(τ_k) = x̂(τ_k−) + K_k[y(τ_k) − Cx̂(τ_k−)],
    x̂(t) = e^{A(t−τ_k)}x̂(τ_k), τ_k ≤ t < τ_{k+1},

with the gain K_k and the error covariance P_k given by (10)–(12).

4. THE CASE OF FIRST ORDER SYSTEMS

In this section, consider the first order system

    dx(t) = ax(t)dt + σdw(t),
    y(t) = cx(t),

where x(t), y(t) ∈ R and w(t) is a one-dimensional Wiener process. We will compare the average estimation errors for both sampling schemes.

For the system above, by Algorithm II we obtain that

    x̂(τ_k) = (1/c)y(τ_k), K_k = 1/c, P_k = 0.

From (4), it follows that

    x̂(t) = (1/c)e^{a(t−τ_k)}y(τ_k), τ_k ≤ t < τ_{k+1}.

We now check the average estimation errors

    J = lim sup_{T→∞} (1/T) E[∫_0^T |x(t) − x̂(t)|² dt]   (16)

for periodic and event-based sampling, respectively. First, under the periodic sampling scheme (i.e., τ_k = kh), the average estimation error is

    J_P = lim sup_{n→∞} (1/(nh)) Σ_{k=0}^{n−1} ∫_{kh}^{(k+1)h} E[|x(t) − x̂(t)|²] dt
        = lim sup_{n→∞} (1/(nh)) Σ_{k=0}^{n−1} ∫_{kh}^{(k+1)h} E[|∫_{kh}^t σe^{a(t−s)} dw(s)|²] dt.

Thus, we have for the case a = 0,

    J_P = lim sup_{n→∞} (1/(nh)) Σ_{k=0}^{n−1} ∫_{kh}^{(k+1)h} σ²(t − kh) dt = σ²h/2.

Under the event-based sampling scheme, the sampling times are τ_{k+1} = inf{t > τ_k; |z(t−)| ≥ δ}, where z(t) = c[x(t) − x̂(t)] satisfies dz(t) = az(t)dt + cσdw(t) with z(τ_k) = 0 for τ_k ≤ t < τ_{k+1}. The corresponding average estimation error is

    J_E = lim sup_{n→∞} (1/(c²τ_n)) Σ_{k=0}^{n−1} E[∫_{τ_k}^{τ_{k+1}} |z(t)|² dt]
        = E[∫_0^{τ_1} |z(t)|² dt] / (c²E[τ_1]).   (18)

From the probabilistic representation of solutions to differential equations (see Lemma 3.1 in Wang et al. (2014) or Oksendal (2003)), it follows that E[τ_1 | z(0) = x] is the solution of

    (c²σ²/2)h″(x) + axh′(x) = −1,

with boundary conditions h(δ) = h(−δ) = 0, which gives

    E[τ_1] = (2/(c²σ²)) ∫_0^δ ∫_0^y exp[−a(y² − z²)/(c²σ²)] dz dy
           = Σ_{k=1}^∞ 2^{2k−1}(−a)^{k−1}(k−1)! δ^{2k} / [(2k)!(cσ)^{2k}].

Similarly, we have

    E[∫_0^{τ_1} |z(t)|² dt] = (2/(c²σ²)) ∫_0^δ ∫_0^y z² exp[−a(y² − z²)/(c²σ²)] dz dy.
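The expression for E[τ_1] can be cross-checked by a rough Monte Carlo sketch (ours, with illustrative parameters), simulating the error process dz = az dt + cσ dw by Euler-Maruyama until |z| reaches δ:

```python
import numpy as np

def mean_first_exit_time(delta, a=0.0, c=1.0, sigma=1.0,
                         dt=1e-4, n_paths=4000, seed=0):
    """Monte Carlo estimate of E[tau_1]: simulate dz = a z dt + c*sigma dw,
    z(0) = 0, for n_paths independent paths until |z| >= delta, and
    average the exit times.  For a = 0 the closed form is
    delta**2 / (c**2 * sigma**2)."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n_paths)
    t_exit = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    t, root_dt = 0.0, np.sqrt(dt)
    while alive.any():
        t += dt
        idx = np.flatnonzero(alive)              # paths not yet triggered
        z[idx] += a * z[idx] * dt + c * sigma * root_dt * rng.standard_normal(idx.size)
        crossed = idx[np.abs(z[idx]) >= delta]   # event: |z| reaches delta
        t_exit[crossed] = t
        alive[crossed] = False
    return t_exit.mean()
```

For a = 0, c = σ = 1 and δ = 0.5 the estimate should be close to δ²/(c²σ²) = 0.25, up to a discretization bias of order √dt from monitoring the threshold only at grid points.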
Fig. 1. J_P/J_E for a = −1, 0, 1 with the same average sampling period

From this together with (18), we get

    J_E = [∫_0^δ ∫_0^y z² exp(−a(y² − z²)/(c²σ²)) dz dy] / [c² ∫_0^δ ∫_0^y exp(−a(y² − z²)/(c²σ²)) dz dy],

which coincides with the result in (Rabi, 2006, Chapter 3). Particularly, for the case a = 0,

    E[τ_1] = δ²/(c²σ²), J_E = δ²/(6c²).

We now compare the estimation effectiveness under both sampling schemes. Specifically, the estimation error with periodic sampling is compared to the one with event-based sampling under the assumption that the average sampling rates are the same, i.e., h = h_E. For the case a = 0, we have h = E[τ_1] = δ²/(c²σ²) and

    J_P/J_E = (δ²/2c²)/(δ²/6c²) = 3.

The ratio J_P/J_E as a function of h or E[τ_1] is plotted in Figure 1 for a = −1, 0, 1. The figure shows that event-based sampling gives substantially smaller estimation errors under the same sampling rates, and that the ratio for unstable systems is larger than that for stable systems.

5. THE CASE OF SYMMETRIC SECOND ORDER SYSTEMS

In this section, consider the following second order system described by

    dx(t) = Ax(t)dt + Ddw(t),
    y(t) = Cx(t),   (19)

where x(t), y(t) ∈ R², A = diag{a, a}, C = diag{c, c} and D = diag{σ, σ}. w(t) = (w_1(t), w_2(t))^T is a two-dimensional Wiener process. Without loss of generality, assume c = σ = 1. In this case, by Algorithm II and (4), it follows that

    K_k = I, P_k = 0, x̂(τ_k) = x(τ_k),
    x̂(t) = e^{A(t−τ_k)}x(τ_k), τ_k ≤ t < τ_{k+1}.

First, for the case of periodic sampling the average estimation error is

    J_P = lim sup_{n→∞} (1/(nh)) Σ_{k=0}^{n−1} ∫_{kh}^{(k+1)h} E[∥x(t) − x̂(t)∥²] dt
        = lim sup_{n→∞} (1/(nh)) Σ_{k=0}^{n−1} ∫_{kh}^{(k+1)h} 2E[|∫_{kh}^t e^{a(t−s)} dw(s)|²] dt
        = { h,                            a = 0,
          { (e^{2ah} − 2ah − 1)/(2a²h),   a ≠ 0.

In what follows, we calculate the average estimation error under the event-based sampling. In this case, the sampling times are τ_{k+1} = inf{t > τ_k; ∥z(t−)∥ ≥ δ}, where z(t) = x(t) − x̂(t). Then

    z(t) = (∫_{τ_k}^t e^{a(t−s)} dw_1(s), ∫_{τ_k}^t e^{a(t−s)} dw_2(s))^T

satisfies

    dz(t) = Az(t)dt + dw(t), z(τ_k) = (0, 0)^T, τ_k ≤ t < τ_{k+1}.

As in Gardiner (2004) and Meng and Chen (2012), set

    z_1(t) = r(t)cos[φ(t)], z_2(t) = r(t)sin[φ(t)],

with r(t) = √(z_1²(t) + z_2²(t)). Then by using Ito's formula, we have that

    dr(t) = [ar(t) + 1/(2r(t))]dt + dv(t), r(τ_k) = 0, τ_k ≤ t < τ_{k+1},

where

    v(t) = w_1(t)cos[φ(t)] + w_2(t)sin[φ(t)].

It can be verified that v(t) is a standard Wiener process, since it is an orthogonal transformation of w(t).

From Lemma 3.1 in Wang et al. (2014) or Oksendal (2003), we obtain that E[τ_1 | r(0) = x] satisfies

    (1/2)g″(x) + (ax + 1/(2x))g′(x) = −1,

with boundary conditions g(δ) = g(−δ) = 0, which gives

    E[τ_1] = 2 ∫_0^δ ∫_0^y (z/y) exp[−a(y² − z²)] dz dy
           = Σ_{k=1}^∞ (−a)^{k−1} δ^{2k} / (2k·k!).

Similarly, we have

    E[∫_0^{τ_1} ∥z(t)∥² dt] = 2 ∫_0^δ ∫_0^y (z³/y) exp[−a(y² − z²)] dz dy
                            = Σ_{k=1}^∞ (−a)^{k−1} δ^{2k+2} / [(2k+2)(k+1)!].

From this together with (18), we get

    J_E = [∫_0^δ ∫_0^y (z³/y) exp[−a(y² − z²)] dz dy] / [∫_0^δ ∫_0^y (z/y) exp[−a(y² − z²)] dz dy].

Particularly, for the case a = 0,

    E[τ_1] = δ²/2, J_E = δ²/4.

We now compare the estimation errors for both sampling schemes under the assumption that the average sampling rates are the same, i.e., h = E[τ_1].
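This equal-rate comparison can be carried out numerically. A sketch (ours; simple trapezoidal quadrature, illustrative names) that forms J_P/J_E for the symmetric second order system from the double-integral expressions above, with h = E[τ_1]:

```python
import numpy as np

def _double_integral(power, a, delta, n=400):
    """Trapezoidal evaluation of
    int_0^delta int_0^y (z**power / y) * exp(-a*(y**2 - z**2)) dz dy
    (the integrand tends to 0 as y -> 0, so the y = 0 node contributes 0)."""
    ys = np.linspace(0.0, delta, n + 1)
    inner = np.zeros_like(ys)
    for i, y in enumerate(ys[1:], start=1):
        zs = np.linspace(0.0, y, n + 1)
        f = (zs ** power / y) * np.exp(-a * (y * y - zs * zs))
        inner[i] = np.sum((f[1:] + f[:-1]) / 2.0) * (y / n)
    return np.sum((inner[1:] + inner[:-1]) / 2.0) * (delta / n)

def jp_over_je(a, delta):
    """J_P / J_E for the symmetric second order system (c = sigma = 1)
    at equal average sampling rates, h = E[tau_1]."""
    e_tau = 2.0 * _double_integral(1, a, delta)   # E[tau_1]
    e_int = 2.0 * _double_integral(3, a, delta)   # E[int_0^{tau_1} ||z||^2 dt]
    j_e = e_int / e_tau
    h = e_tau
    j_p = h if a == 0 else (np.exp(2 * a * h) - 2 * a * h - 1) / (2 * a * a * h)
    return j_p / j_e
```

At a = 0 this returns 2 up to quadrature error; sweeping a over negative and positive values reproduces the qualitative behavior shown in Figure 2.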
Fig. 2. J_P/J_E for a = −1, 0, 1 with the same average sampling period

For the case a = 0, we have h = E[τ_1] = δ²/2, and

    J_P/J_E = (δ²/2)/(δ²/4) = 2.

The ratio J_P/J_E as a function of h or E[τ_1] is plotted in Figure 2 for a = −1, 0, 1. The figure shows that event-based sampling gives smaller estimation errors and that the ratio for unstable systems is larger than that for stable systems. Different from the case of first order systems, the improvement of event-based sampling over periodic sampling is smaller.

6. AN ILLUSTRATIVE EXAMPLE

Consider system (1)-(2) with

    A = (0 1; 0 0), D = (0; 1), and C = (1 0).

Then it follows that

    dx(t) = (0 1; 0 0)x(t)dt + (0; 1)dw_2(t),
    y(t) = (1 0)x(t).   (20)

Here x(t) = (x_1(t), x_2(t))^T is characterized as the position and speed of an object, while only the position can be measured. Noting

    e^{At} = (1 t; 0 1), C(x(τ_k) − x̂(τ_k)) = 0,

we get that for τ_k ≤ t < τ_{k+1}, k = 0, 1, ...,

    z(t) = Ce^{A(t−τ_k)}(x(τ_k) − x̂(τ_k)) + C∫_{τ_k}^t e^{A(t−s)}Ddw(s)
         = (1 0)(1 t−τ_k; 0 1)(x̃_1(τ_k); x̃_2(τ_k)) + (1 0)∫_{τ_k}^t (1 t−s; 0 1)(0; 1)dw_2(s)
         = (t − τ_k)x̃_2(τ_k) + ∫_{τ_k}^t (t − s)dw_2(s)
         = ∫_{τ_k}^t [x̃_2(τ_k) + w_2(s)]ds,   (21)

so that

    τ_{k+1} = inf{t > τ_k; |∫_{τ_k}^t [x̃_2(τ_k) + w_2(s)]ds| = δ}.

Take δ = 0.3. By implementing Algorithm II, we get the trajectory of the event-based estimator, which together with the state x(t) is shown in Figure 3. Meanwhile, we have lim_{k→∞} τ_k/k = 0.64. Let h = 0.64. From Algorithm I, we get the trajectory of the periodic estimator, which is also plotted in Figure 3. It can be seen that the event-based estimator is more consistent with the state trajectory than the periodic estimator, for both the first state and the second state.

Fig. 3. Trajectories of the state, event-based estimator and periodic estimator: (a) the first state; (b) the second state

The squared estimation errors ∥x̃(t)∥² under event-based and periodic sampling are shown in Figure 4. It can be seen that the average estimation errors under event-based sampling are smaller compared with the periodic sampling. Indeed, the time-average values of the squared estimation errors under event-based and periodic sampling are 0.4107 and 0.5636, respectively.

7. CONCLUDING REMARKS

This paper considered the state estimation problem of continuous-time linear systems with two types of sampling. We first presented the optimal state estimator under periodic sampling, and designed the state estimator with event-based updates.
Fig. 4. Squared estimation errors under event-based and periodic sampling

Then, we compared the average estimation errors for both sampling schemes in first and second order systems, respectively. A numerical example was provided to compare the effectiveness of the two state estimators. It can be shown that in these cases event-based sampling does outperform periodic sampling for state estimation. Specifically, for critically stable first order systems, the ratio of the average estimation error with periodic sampling to the one with event-based sampling is 3; for critically stable symmetric second order systems, the ratio is 2. However, due to severe mathematical difficulties, the quantitative comparison of average estimation errors for the general case under both sampling schemes was not provided in this paper, which needs to be investigated further.

ACKNOWLEDGEMENTS

We would like to thank Professor Tongwen Chen for his many insightful discussions and suggestions.

REFERENCES

Anderson, B. and Moore, J. (1979). Optimal Filtering. Prentice-Hall, Englewood Cliffs, NJ.
Åström, K. (2006). Introduction to Stochastic Control Theory. Courier Dover Publications.
Åström, K. and Bernhardsson, B. (2002). Comparison of Riemann and Lebesgue sampling for first order stochastic systems. In Proceedings of the 41st IEEE Conference on Decision and Control, volume 2, 2011–2016.
Åström, K. (2007). Event based control. In Analysis and Design of Nonlinear Control Systems: In Honor of Alberto Isidori, 127–147.
Dodds, S. (1981). Adaptive, high precision, satellite attitude control for microprocessor implementation. Automatica, 17(4), 563–573.
Gardiner, C. (2004). Handbook of Stochastic Methods: For Physics, Chemistry and the Natural Sciences. Springer.
Heemels, W., Johansson, K.H., and Tabuada, P. (2012). An introduction to event-triggered and self-triggered control. In Proceedings of the 51st IEEE Conference on Decision and Control and European Control Conference.
Henningsson, T., Johannesson, E., and Cervin, A. (2008). Sporadic event-based control of first-order linear stochastic systems. Automatica, 44(11), 2890–2895.
Holmes, D. and Lipo, T. (2003). Pulse Width Modulation for Power Converters: Principles and Practice. Wiley-IEEE Press.
Imer, O. and Basar, T. (2005). Optimal estimation with limited measurements. In Proceedings of the 44th IEEE Conference on Decision and Control and European Control Conference, 1029–1034.
Keener, J. and Sneyd, J. (2008). Mathematical Physiology I: Cellular Physiology. Springer.
Lemmon, M. (2010). Event-triggered feedback in control, estimation, and optimization. In Networked Control Systems, 293–358. Springer.
Li, L., Lemmon, M., and Wang, X. (2010). Event-triggered state estimation in vector linear processes. In Proceedings of the American Control Conference, 2138–2143.
Meng, X. and Chen, T. (2012). Optimal sampling and performance comparison of periodic and event based impulse control. IEEE Transactions on Automatic Control, 57(12), 3252–3259.
Meng, X., Wang, B., Chen, T., Darouach, M., et al. (2013). Sensing and actuation strategies for event triggered stochastic optimal control. In Proceedings of the 52nd IEEE Conference on Decision and Control.
Miskowicz, M. (2006). Send-on-delta concept: an event-based data reporting strategy. Sensors, 6(1), 49–63.
Oksendal, B. (2003). Stochastic Differential Equations: An Introduction with Applications. Springer Verlag.
Rabi, M. (2006). Packet Based Inference and Control. Ph.D. thesis, University of Maryland.
Rabi, M., Johansson, K., and Johansson, M. (2008). Optimal stopping for event-triggered sensing and actuation. In Proceedings of the 47th IEEE Conference on Decision and Control, 3607–3612.
Rabi, M., Moustakides, G., and Baras, J. (2012). Adaptive sampling for linear state estimation. SIAM Journal on Control and Optimization, 50(2), 672–702.
Shi, D., Chen, T., and Shi, L. (2013). Event-triggered maximum likelihood state estimation. Automatica, 50(1), 247–254.
Sijs, J. and Lazar, M. (2011). Event based state estimation with time synchronous updates. IEEE Transactions on Automatic Control, 57(10), 2650–2655.
Suh, Y., Nguyen, V., and Ro, Y. (2007). Modified Kalman filter for networked monitoring systems employing a send-on-delta method. Automatica, 43(2), 332–338.
Tabuada, P. (2007). Event-triggered real-time scheduling of stabilizing control tasks. IEEE Transactions on Automatic Control, 52(9), 1680–1685.
Wang, B., Meng, X., and Chen, T. (2014). Event based pulse-modulated control of linear stochastic systems. IEEE Transactions on Automatic Control. In press.
Wang, X. and Lemmon, M. (2011). Event-triggering in distributed networked control systems. IEEE Transactions on Automatic Control, 56(3), 586–601.
Wu, J., Jia, Q., Johansson, K., and Shi, L. (2013). Event-based sensor data scheduling: Trade-off between communication rate and estimation quality. IEEE Transactions on Automatic Control, 58(4), 1041–1046.
Xu, Y. and Hespanha, J. (2004). Optimal communication logics in networked control systems. In Proceedings of the 43rd IEEE Conference on Decision and Control, volume 4, 3527–3532.
You, K. and Xie, L. (2013). Kalman filtering with scheduled measurements. IEEE Transactions on Signal Processing, 61, 1520–1530.