
Thumeera Ruwansiri, Wanasinghe Arachchige
Student # 201095700
Faculty of Engineering and Applied Science, Memorial University of Newfoundland

Contents

1   Answer for Question 1
2   Answer for Question 2
    2.1 Comparison of implementation of KF (Algorithm 1) and UKF (Algorithm 2)

List of Figures

1  Comparison of actual, estimated and filtered value for X(1)
2  Comparison of actual, estimated and filtered value for X(2)
3  Comparison of actual, estimated and filtered value when Qn = 2000 and Rn = 0.1
4  Comparison of actual, estimated and filtered value when Qn = 0.0002 and Rn = 0.1
5  Comparison of actual, estimated and filtered value when Qn = 0.2 and Rn = 1000
6  Comparison of actual, estimated and filtered value when Qn = 0.2 and Rn = 0.0001
7  Comparison of actual, estimated and filtered value when Qn = 0.1 and Rn = 0.1
8  Comparison of actual, estimated and filtered value when A = [-0.8 1 ; 1.2 -2.3]
9  Comparison of actual, estimated and filtered value when A = [-1.5 1 ; -1.2 -0.3]
10 Comparison of actual, estimated and filtered value when A = [-0.1 -1 ; 1.2 0]
11 Comparison of actual, estimated and filtered value when the plant (state variables) is(are) not fully observable
12 Four tank system
13 Comparison of actual and filtered value for four tank system

List of Tables

1  Different scenarios where the process and measurement noise covariance terms are in error
2  Different scenarios where the plant model and the process and measurement noise covariance terms are in error

List of Algorithms

1  Kalman Filter
2  Unscented Kalman Filter (UKF) - additive (zero-mean) noise case

1 Answer for Question 1

The continuous-time state-space description of a system is given below:

    d/dt [x1; x2] = [0.9 0 ; 0.9 2.1] [x1; x2] + [1; 0] u + [0.5; 0] w1 + [0; 0.6] w2    (1)

    y = [0 1] [x1; x2] + v    (2)

The system and measurement noise terms are zero-mean, white-noise sequences with variances and covariances given as: E{ww'} = diag(0.2, 0.2), E{vv'} = 0.1, E{wv'} = 0.

(a) A discrete-time realization of the continuous-time LTI system given by (1) and (2) is obtained using the MATLAB function c2d, with a sampling time of 0.2 s. The code used to generate the discrete-time model is given below.

function [A,B,C,D,w1,w2]=systemmdl
%Continuous-time state-space model
a=[0.9 0;0.9 2.1];
b=[1;0];
c=[0 1];
d=0;
sys=ss(a,b,c,d);
%Continuous to discrete, 0.2 s sampling time
sysd=c2d(sys,0.2);
%Extract individual matrices
A=sysd(1,1).a;   %State matrix
B=sysd(1,1).b;   %Input matrix
C=sysd(1,1).c;   %Output matrix
D=sysd(1,1).d;   %Direct transmission matrix
w1=[0.5;0];      %First process noise input vector
w2=[0;0.6];      %Second process noise input vector
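For readers without MATLAB, the zero-order-hold conversion that c2d performs (Ad = e^{AT}, Bd = (integral of e^{As} ds from 0 to T) * B) can be sketched in pure Python with a truncated matrix-exponential series. This is an illustrative sketch, not part of the report's code; the function name c2d_zoh and the truncation length are my own choices.

```python
# Zero-order-hold discretization of x' = A x + B u, mimicking what
# MATLAB's c2d(sys, T) computes: Ad = e^{A T}, Bd = (int_0^T e^{A s} ds) B.
# Pure-Python sketch using a truncated Taylor series (illustrative only).

def mat_mul(X, Y):
    n, m, p = len(X), len(Y[0]), len(Y)
    return [[sum(X[i][k] * Y[k][j] for k in range(p)) for j in range(m)]
            for i in range(n)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def mat_scale(X, a):
    return [[v * a for v in row] for row in X]

def c2d_zoh(A, B, T, terms=25):
    n = len(A)
    eye = [[float(i == j) for j in range(n)] for i in range(n)]
    Ad = eye                  # running sum for e^{A T}
    S = mat_scale(eye, T)     # running sum for int_0^T e^{A s} ds
    term = eye                # holds (A T)^k / k!
    for k in range(1, terms):
        term = mat_scale(mat_mul(A, term), T / k)
        Ad = mat_add(Ad, term)
        S = mat_add(S, mat_scale(term, T / (k + 1)))
    Bd = mat_mul(S, B)
    return Ad, Bd

# The report's system with T = 0.2 s:
Ad, Bd = c2d_zoh([[0.9, 0.0], [0.9, 2.1]], [[1.0], [0.0]], 0.2)
```

For the small, well-conditioned matrices used here a 25-term series is more than sufficient; production code would use a Padé approximation as `expm` does.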

The resulting discrete-time model takes the form

    x_{k+1} = A x_k + B u_k + w_k    (3)
    y_k = C x_k + v_k    (4)

where A, B, C and D are the discrete-time matrices returned by systemmdl, and w_k, v_k are the discrete process and measurement noise sequences.

(b) The Kalman filter was implemented following Algorithm 1 [1].

Algorithm 1 Kalman Filter

Initialize with:
    xhat_0 = E[x_0]
    P_0 = E[(x_0 - xhat_0)(x_0 - xhat_0)^T]
    Qn = E[(w - wbar)(w - wbar)^T]  (a)
    Rn = E[(v - vbar)(v - vbar)^T]  (b)
for k in {1, ..., infinity} do
    Time update:
    (1) Project the state ahead: xhat_k^- = A xhat_{k-1} + B u_k
    (2) Project the error covariance ahead: P_k^- = A P_{k-1} A^T + Qn
    Measurement update:
    (1) Compute the Kalman gain: K_k = P_k^- H^T (H P_k^- H^T + Rn)^-1
    (2) Update estimate with measurement z_k: xhat_k = xhat_k^- + K_k (z_k - H xhat_k^-)
    (3) Update the error covariance: P_k = (I - K_k H) P_k^-
end for

(a) For the additive (zero-mean) noise case wbar = 0, hence Qn = E[w w^T].
(b) For the additive (zero-mean) noise case vbar = 0, hence Rn = E[v v^T].

Main function

clear all; close all;
duration=100;
dt=1;
N=duration/dt;

[A,B,C,D,w1,w2]=systemmdl;

%Process noise covariance E{ww'}
Q=0.2*eye(2); Qn=1*Q;
%Measurement noise covariance E{vv'}
R=0.1; Rn=1*R;
%Noise cross-covariance E{wv'}
Nn=0;
%Initial condition
x=[0;0];
x_hat=x;
P=10*eye(2);   %Initial guess for error covariance
%Variables to store data
state1actual=[]; state1estimated=[]; state1filtered=[];
state2actual=[]; state2estimated=[]; state2filtered=[];
%Piecewise-constant input sequence
u(1,:)=[1*ones(N/4,1);3.5*ones(N/4,1);3*ones(N/4,1);1*ones(N/4+1,1)]*1;

for i=1:dt:duration
    %Generate noise
    LL=chol(Q);   %SD of process noise
    L=chol(R);    %SD of measurement noise
    %Process model with noise
    x=A*x+B*u(1,i)+LL*[0.5*randn;0.6*randn];
    y=C*x+L*randn;

    %Call the Kalman estimator
    [x_hat,J,P]=kf(A,B,C,D,u(1,i),y,x_hat,P,Qn,Rn);

    %Save data to plot
    state1actual=[state1actual;x(1)];
    state1estimated=[state1estimated;J(1)];
    state1filtered=[state1filtered;x_hat(1)];
    state2actual=[state2actual;x(2)];
    state2estimated=[state2estimated;J(2)];
    state2filtered=[state2filtered;x_hat(2)];
end

%Plot results
t=(1:dt:duration)';   %Time axis

%Plot state 1
figure(1)
plot(t,state1actual,'k',t,state1estimated,'r',t,state1filtered,'.b');
grid on
title('Comparison of actual, estimated and filtered value for X(1)')
xlabel('Time'); ylabel('Magnitude')
legend('Actual value','Estimated Value','Filtered Value',0)

%Plot state 2
figure(2)
plot(t,state2actual,'k',t,state2estimated,'r',t,state2filtered,'.b');
grid on
title('Comparison of actual, estimated and filtered value for X(2)')
xlabel('Time'); ylabel('Magnitude')
legend('Actual value','Estimated Value','Filtered Value',0)

Kalman Filter

function [x_hat,J,P]=kf(A,B,C,D,u,y,x_hat,P,Qn,Rn)
%Prediction level ("Time update")
J=A*x_hat+B*u;                 %Project the state ahead
S=A*P*A'+Qn;                   %Project the error covariance ahead
%Correction level ("Measurement update")
K=S*C'*inv(C*S*C'+Rn);         %Compute the Kalman gain
x_hat=J+K*(y-C*J);             %Update estimate with measurement
P=(eye(size(K,1))-K*C)*S;      %Update the error covariance
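The predict/correct cycle in kf() can also be illustrated with a one-dimensional Python sketch (all quantities scalar; the function name kf_step and the constant-signal example are mine, not part of the assignment):

```python
# One-dimensional sketch of the predict/correct cycle of the kf() function
# above. A, B, H, Q, R are the scalar system parameters.

def kf_step(x_hat, P, u, z, A=1.0, B=0.0, H=1.0, Q=0.0, R=1.0):
    # Time update: project the state and error covariance ahead
    x_pred = A * x_hat + B * u
    P_pred = A * P * A + Q
    # Measurement update: gain, state correction, covariance update
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Filtering a constant signal (z = 1) starting from x_hat = 0, P = 1:
x_hat, P = 0.0, 1.0
for _ in range(9):
    x_hat, P = kf_step(x_hat, P, u=0.0, z=1.0)
# after n updates the estimate is n/(n+1), so here x_hat = 0.9 and P = 0.1
```

With Q = 0 the filter treats the state as truly constant, so the gain (and the covariance) shrink as 1/(n+1): each new measurement is averaged in with ever smaller weight.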

Results

Figure 1: Comparison of actual, estimated and filtered value for X(1) (plot omitted)

Figure 2: Comparison of actual, estimated and filtered value for X(2) (plot omitted)

In this simulation the ratio between Qn and Rn was 2, which is not a large ratio; moreover, neither Qn nor Rn is much closer to zero than the other. Hence the Kalman filter does not put its trust strongly in either the process model or the measurements: it takes both the prediction and the measurement into account, biasing slightly towards the measurements since the process noise covariance is comparatively higher than the measurement noise covariance, as depicted in Figures 1 and 2.

(c) If the process and measurement noise variances are not exactly known, we can simulate five scenarios, given in Table 1, in order to evaluate the filtering properties of the Kalman filter.

Table 1: Different scenarios where the process and measurement noise covariance terms are in error

Scenario    Qn        Rn        Ratio Qn/Rn
(i)         2000      0.1       20000
(ii)        0.0002    0.1       0.00002
(iii)       0.2       1000      0.00002
(iv)        0.2       0.0001    20000
(v)         0.2       0.2       1
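The effect of the Qn/Rn ratio on how much the filter trusts the measurements can be seen in a scalar steady-state gain computation. This is a Python sketch with illustrative parameters (A = 1, H = 1), not part of the report's code:

```python
# Scalar steady-state Kalman gain as a function of the Q/R ratio (A = 1,
# H = 1). Iterating the Riccati recursion to convergence shows that a
# larger Q/R ratio yields a larger gain, i.e. more trust in the
# measurements -- the behaviour described for scenarios (i)-(v).

def steady_state_gain(Q, R, iters=10_000):
    P = 1.0
    for _ in range(iters):
        P_pred = P + Q               # time update (A = 1)
        K = P_pred / (P_pred + R)    # measurement-update gain (H = 1)
        P = (1.0 - K) * P_pred
    return K

K_high = steady_state_gain(Q=2000.0, R=0.1)   # scenario (i): trusts measurements
K_low = steady_state_gain(Q=0.0002, R=0.1)    # scenario (ii): trusts the model
```

K_high comes out close to 1 (correction dominated by the measurement) while K_low is close to 0 (correction dominated by the prediction), matching the qualitative discussion below.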

In the first scenario, the process noise covariance is much higher than the measurement noise covariance, so the Kalman filter trusts the measurements more than the process model. However, from (2) we can see that state one (x1) is not measured, so there is no considerable correction to the estimated (predicted) state x1: more correction is applied to state x2 while less is applied to state x1. As depicted in Figure 3(b), the filtered state x2 overlaps with the actual state x2, while as in Figure 3(a) the filtered state x1 is biased towards the estimated x1, since there are no measurements of state x1. The Kalman gains for the two states are Kx1 = 0.3142 and Kx2 = 1.0000.

Figure 3: Comparison of actual, estimated and filtered value when Qn = 2000 and Rn = 0.1 ((a) state 1, (b) state 2; plots omitted)

In the second scenario, the process noise covariance is much lower than the measurement noise covariance, so the Kalman filter trusts the process model more. Moreover, both states of the system are observable. Hence the filtered output overlaps with the predicted (estimated) states, as depicted in Figure 4. The Kalman gains for the two states are Kx1 = 0.0016 and Kx2 = 0.0042.

Figure 4: Comparison of actual, estimated and filtered value when Qn = 0.0002 and Rn = 0.1 ((a) state 1, (b) state 2; plots omitted)

As given in Table 1, the ratio between Qn and Rn is the same in the second and third scenarios. Hence the filtering behaviour of the Kalman filter should be identical in both cases: the filtered states should overlap with the prediction, since the process noise covariance is much smaller than the measurement noise covariance. This observation is depicted in Figure 5. Since the Kalman filter has little faith in the measurements, the Kalman gain must be small for both states; the gains obtained from simulation are Kx1 = 1.6353e-04 and Kx2 = 4.2296e-04.


Figure 5: Comparison of actual, estimated and filtered value when Qn = 0.2 and Rn = 1000 ((a) state 1, (b) state 2; plots omitted)

Similarly, the ratio between Qn and Rn is the same in the first and fourth scenarios, so the filtering behaviour should again be identical in both cases: the filtered value should overlap with the actual value for state x2, and the filtered value should be biased towards the predicted value for state x1. The simulation result is depicted in Figure 6, and the corresponding Kalman gains for the two states are Kx1 = 0.3141 and Kx2 = 0.9995.

Figure 6: Comparison of actual, estimated and filtered value when Qn = 0.2 and Rn = 0.0001 ((a) state 1, (b) state 2; plots omitted)

In the final scenario the process and measurement noise covariances are equal. In this case the filtered value follows neither the estimate nor the actual value exclusively. However, the given system has no measurement of the first state (x1), so the filtered value for x1 is biased towards the estimated value rather than the actual value, while for the second state the filtered value lies between the actual and predicted values. The results are given in Figure 7. The Kalman gains for the two states are Kx1 = 0.1895 and Kx2 = 0.5716.


Figure 7: Comparison of actual, estimated and filtered value when Qn = 0.1 and Rn = 0.1 ((a) state 1, (b) state 2; plots omitted)

(d) In order to evaluate the performance of the Kalman filter when both the plant model and the process and measurement noise covariance terms are in error, three scenarios are taken into consideration.

Table 2: Different scenarios where the plant model and the process and measurement noise covariance terms are in error

Scenario    A                      (Qn, Rn) combinations
(i)         [-0.8 1 ; 1.2 -2.3]    (0.2, 1000), (0.2, 0.0001), (0.1, 0.1)
(ii)        [-1.5 1 ; -1.2 -0.3]   (0.2, 1000), (0.2, 0.0001), (0.1, 0.1)
(iii)       [-0.1 -1 ; 1.2 0]      (0.2, 1000), (0.2, 0.0001), (0.1, 0.1)

Of the tuning parameters (Qn and Rn) given in Table 2, the most appropriate are Qn = 0.2 and Rn = 0.0001, since the process model should be trusted less than the measurements. By comparing the subfigures in Figures 8, 9 and 10 with the corresponding subfigures in part (c), we can conclude that the Kalman filter still performs well even when both the plant model and the process and measurement noise covariance terms are in error.


Figure 8: Comparison of actual, estimated and filtered value when A = [-0.8 1 ; 1.2 -2.3] (plots omitted)


Figure 9: Comparison of actual, estimated and filtered value when A = [-1.5 1 ; -1.2 -0.3] (plots omitted)


Figure 10: Comparison of actual, estimated and filtered value when A = [-0.1 -1 ; 1.2 0] (plots omitted)


(e) The state transition matrix (A) of this plant is a 2 x 2 matrix, hence the observability matrix is

    O = [C ; CA]    (5)

    O = [0 1 ; 0.9 2.1]    (6)

    rank(O) = 2    (7)

From (7), the rank of the observability matrix is equal to the number of state variables of the plant; hence the system is fully observable. When we set a11 = 0, the state transition matrix becomes

    A = [0 0 ; 0.9 2.1]

However, this modification does not change the observability matrix: it is still (6), and the rank of O is still 2, so both states remain observable. In order to make the system unobservable, set a21 = 0 instead. The state transition matrix then becomes

    A = [0.9 0 ; 0 2.1]    (8)

The rank of the observability matrix for (8) is one, which implies the states are not fully observable. When a state becomes unobservable, the Kalman filter does not apply any correction to that state's estimate at the measurement-update step. However, the unobservable state does not affect the Kalman filter's performance on the observable states, so the filtering capability for the observable state is preserved. For this example, state one (x1) becomes unobservable after the modification, and the Kalman gain for that state remains zero throughout the simulation, as depicted in Figure 11.
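The rank check above can be reproduced in a few lines of Python: for a 2 x 2 observability matrix, full rank is equivalent to a nonzero determinant. The helper name observable_2x2 is mine; the matrices are the ones from the text.

```python
# Observability check for the 2-state plant: O = [C; C*A] has full rank
# iff det(O) != 0 (2 x 2 case).

def observable_2x2(A, C):
    # row vector C times matrix A
    CA = [C[0] * A[0][0] + C[1] * A[1][0],
          C[0] * A[0][1] + C[1] * A[1][1]]
    det_O = C[0] * CA[1] - C[1] * CA[0]   # det of [C; CA]
    return det_O != 0.0

C = [0.0, 1.0]
print(observable_2x2([[0.9, 0.0], [0.9, 2.1]], C))   # True: rank(O) = 2

# Setting a11 = 0 leaves C*A = [0.9, 2.1] unchanged -> still observable
print(observable_2x2([[0.0, 0.0], [0.9, 2.1]], C))   # True

# Setting a21 = 0 decouples x1 from the measured state -> unobservable
print(observable_2x2([[0.9, 0.0], [0.0, 2.1]], C))   # False
```

Because C = [0 1] only picks out the second row of A, any change confined to the first row (such as a11 = 0) cannot affect O, exactly as argued above.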


Figure 11: Comparison of actual, estimated and filtered value when the plant (state variables) is not fully observable (plots omitted)


2 Answer for Question 2

Algorithm [2]

Algorithm 2 Unscented Kalman Filter (UKF) - additive (zero-mean) noise case

Initialize with:
    xhat_0 = E[x_0]
    P_0 = E[(x_0 - xhat_0)(x_0 - xhat_0)^T]
for k in {1, ..., infinity} do
    Calculate sigma points:
        X_{k-1} = [ xhat_{k-1}   xhat_{k-1} + gamma*sqrt(P_{k-1})   xhat_{k-1} - gamma*sqrt(P_{k-1}) ]
    Time update:
        X_{k|k-1} = F[X_{k-1}, u_{k-1}]
        xhat_k^- = sum_{i=0}^{2L} W_i^(m) X_{i,k|k-1}
        P_k^- = sum_{i=0}^{2L} W_i^(c) [X_{i,k|k-1} - xhat_k^-][X_{i,k|k-1} - xhat_k^-]^T + Rv
        X_{k|k-1} = [ xhat_k^-   xhat_k^- + gamma*sqrt(P_k^-)   xhat_k^- - gamma*sqrt(P_k^-) ]  (a)
        Y_{k|k-1} = H[X_{k|k-1}]
        yhat_k^- = sum_{i=0}^{2L} W_i^(m) Y_{i,k|k-1}
    Measurement update:
        Pyy = sum_{i=0}^{2L} W_i^(c) [Y_{i,k|k-1} - yhat_k^-][Y_{i,k|k-1} - yhat_k^-]^T + Rn
        Pxy = sum_{i=0}^{2L} W_i^(c) [X_{i,k|k-1} - xhat_k^-][Y_{i,k|k-1} - yhat_k^-]^T
        K_k = Pxy Pyy^-1
        xhat_k = xhat_k^- + K_k (y_k - yhat_k^-)
        P_k = P_k^- - K_k Pyy K_k^T
end for

where gamma = sqrt(L + lambda) is a composite scaling parameter, L is the dimension of the state, Rv is the process noise covariance, Rn is the measurement noise covariance, and the weights W_i are

    W_0^(m) = lambda / (L + lambda)
    W_0^(c) = lambda / (L + lambda) + (1 - alpha^2 + beta)
    W_i^(m) = W_i^(c) = 1 / (2(L + lambda)),   i = 1, ..., 2L

where lambda = alpha^2 (L + kappa) - L is a scaling parameter. alpha determines the spread of the sigma points around xhat and is usually set to a small positive value (e.g., 1e-3); kappa is a secondary scaling parameter which is usually set to 0; and beta is used to incorporate prior knowledge of the distribution of x (for Gaussian distributions, beta = 2 is optimal).

(a) Here we have to redraw a new set of sigma points to incorporate the effect of the additive process noise.


Implementation: System Dynamics

In order to implement the UKF, we consider the non-linear system given in Figure 12 (four-tank model).

Figure 12: Four tank system (diagram omitted)

The continuous-time dynamic equations that govern the system are given in (9) [3]:

    dh1/dt = -(a1/A1) sqrt(2 g h1) + (a3/A1) sqrt(2 g h3) + (nu1 k1/A1) u1
    dh2/dt = -(a2/A2) sqrt(2 g h2) + (a4/A2) sqrt(2 g h4) + (nu2 k2/A2) u2      (9)
    dh3/dt = -(a3/A3) sqrt(2 g h3) + ((1 - nu2) k2/A3) u2
    dh4/dt = -(a4/A4) sqrt(2 g h4) + ((1 - nu1) k1/A4) u1

First, the system dynamics given by (9) are converted into a discrete-time model using the finite difference approximation given in (11):

    dh/dt ~ (h_{k+1} - h_k) / dt    (11)

The resulting discrete-time system dynamics model is given in (12):

    h1(k+1) = h1(k) + [-(a1/A1) sqrt(2 g h1(k)) + (a3/A1) sqrt(2 g h3(k)) + (nu1 k1/A1) u1] dt
    h2(k+1) = h2(k) + [-(a2/A2) sqrt(2 g h2(k)) + (a4/A2) sqrt(2 g h4(k)) + (nu2 k2/A2) u2] dt      (12)
    h3(k+1) = h3(k) + [-(a3/A3) sqrt(2 g h3(k)) + ((1 - nu2) k2/A3) u2] dt
    h4(k+1) = h4(k) + [-(a4/A4) sqrt(2 g h4(k)) + ((1 - nu1) k1/A4) u1] dt
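The forward-Euler step in (11)-(12) can be sketched for a single isolated tank (ignoring the inflow from the upper tank) in pure Python. The parameter values are tank 1's from the systemmdl2 code in this report; the function name tank_step and the clamping at zero are my own additions.

```python
# Forward-Euler discretization of a single draining tank,
# dh/dt = -(a/A) * sqrt(2 g h) + (k/A) * u, as in (11)-(12).
# Parameters: tank 1 values from systemmdl2 (a = 0.071 cm^2, A = 28 cm^2,
# k = 3.33 cc/(V s), g = 981 cm/s^2, dt = 0.1 s).

import math

def tank_step(h, u, dt=0.1, a=0.071, A=28.0, k=3.33, g=981.0):
    dh = -(a / A) * math.sqrt(2.0 * g * max(h, 0.0)) + (k / A) * u
    return max(h + dh * dt, 0.0)   # physical levels cannot go negative

h = 10.0
for _ in range(100):
    h = tank_step(h, u=0.0)        # no inflow: the level drains monotonically
```

With no inflow the level decreases at a rate proportional to sqrt(h), the nonlinearity that motivates using the UKF rather than the plain KF here.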

Implementation: Matlab Code [4]

Main Function

clear all; close all;
n=4;                          % number of states
duration=500;
global dt; dt=0.1;
N=duration/dt;

Q=(dt^2)*0.002*eye(4);        % covariance of process noise
R=0.02*eye(2);                % covariance of measurement noise
q=chol(Q);                    % std of process noise
r=chol(R);                    % std of measurement noise

[A, B, C, D]=systemmdl2;      % get the coefficients for the process

% u: input flow sequence (its definition is missing from the extracted source)

%Tuning parameters
Qf=1*Q; Rf=1*R;

%Initialisation block
P0 = 10*eye(4); P=P0;
x_initial = [14.1 11.2 7.2 4.7]*1;    % initial state
in=u(:,1);
f1=@(x,in)[x+A*sqrt(x)+B*in];         % process model (state transition)
f2=@(x,in)[C*x+D*in];                 % measurement model


s=x_initial';                 % true initial state
x=s+q*randn(4,1);             % initial state estimate

% Allocate memory
xV = zeros(n,N);
%Filtered states
hf1=[]; hf2=[]; hf3=[]; hf4=[];
%Actual states
ha1=[]; ha2=[]; ha3=[]; ha4=[];
%Outputs (measurements)
ho1=[]; ho2=[];

for i=1:N
    s = f1(s,u(:,i)) + q*randn(4,1);    % propagate the true state
    z = f2(s,u(:,i)) + r*randn(2,1);    % noisy measurement
    [x, P] = ukf(f1,x,u(:,i),P,f2,z,Q,R);
    % Save filtered estimates
    hf1=[hf1 x(1,1)]; hf2=[hf2 x(2,1)]; hf3=[hf3 x(3,1)]; hf4=[hf4 x(4,1)];
    % Save actual states
    ha1=[ha1 s(1,1)]; ha2=[ha2 s(2,1)]; ha3=[ha3 s(3,1)]; ha4=[ha4 s(4,1)];
    % Save measurements
    ho1=[ho1 z(1,1)]; ho2=[ho2 z(2,1)];
end

%Plot results
figure(1)
plot(1:N, ha1, 'k', 1:N, hf1, '.k'); grid on
title('Comparison of actual and filtered value for tank 1')
xlabel('Dynamic Steps'); ylabel('Fluid Level (cm)')
legend('Actual value','Filtered Estimation',0)

figure(2)
plot(1:N, ha2, 'k', 1:N, hf2, '.k'); grid on
title('Comparison of actual and filtered value for tank 2')
xlabel('Dynamic Steps'); ylabel('Fluid Level (cm)')
legend('Actual value','Filtered Estimation',0)

figure(3)
plot(1:N, ha3, 'k', 1:N, hf3, '.k'); grid on
title('Comparison of actual and filtered value for tank 3')
xlabel('Dynamic Steps'); ylabel('Fluid Level (cm)')
legend('Actual value','Filtered Estimation',0)

figure(4)
plot(1:N, ha4, 'k', 1:N, hf4, '.k'); grid on
title('Comparison of actual and filtered value for tank 4')
xlabel('Dynamic Steps'); ylabel('Fluid Level (cm)')
legend('Actual value','Filtered Estimation',0)

function [A, B, C, D]=systemmdl2
% Computing the A, B, C & D matrices
%===================================
global dt;

%Initial values of the levels (units in cm)
h1 = 25; h2 = 20; h3 = 10; h4 = 10;

%Parametric data
at1=0.071; at3=0.071; at2=0.057; at4=0.057;   % orifice areas in cm^2
At1=28; At3=28; At2=32; At4=32;               % tank areas in cm^2
kc=0.5;                                       % units in Volts/cm
g=981;                                        % units in cm/sec^2
nu1=0.7; nu2=0.6;                             % opening ratios of the splitter valves
kt1=3.33; kt2=3.35;                           % units in cc/(Volt*second)

T1 = (at1*dt/At1) * sqrt(2*g);
T2 = (at2*dt/At2) * sqrt(2*g);
T3 = (at3*dt/At3) * sqrt(2*g);
T4 = (at4*dt/At4) * sqrt(2*g);

a11 = -T1; a12 = 0;   a13 = (At1*T3)/At3; a14 = 0;
a21 = 0;   a22 = -T2; a23 = 0;            a24 = (At2*T4)/At4;
a31 = 0;   a32 = 0;   a33 = -T3;          a34 = 0;
a41 = 0;   a42 = 0;   a43 = 0;            a44 = -T4;

A = [a11 a12 a13 a14;          % The 'A' matrix
     a21 a22 a23 a24;
     a31 a32 a33 a34;
     a41 a42 a43 a44];

% Input coefficients, per the discrete model (12)
b11 = nu1*kt1*dt/At1;     b12 = 0;
b21 = 0;                  b22 = nu2*kt2*dt/At2;
b31 = 0;                  b32 = (1-nu2)*kt2*dt/At3;
b41 = (1-nu1)*kt1*dt/At4; b42 = 0;

B = [b11 b12;                  % The 'B' matrix
     b21 b22;
     b31 b32;
     b41 b42];

c11 = kc; c12 = 0;  c13 = 0; c14 = 0;
c21 = 0;  c22 = kc; c23 = 0; c24 = 0;
C = [c11 c12 c13 c14;          % The 'C' matrix
     c21 c22 c23 c24];

D=0;

function [x,P]=ukf(fstate,x,u1,P,hmeas,z,Q,R)
%Parameters and constants to calculate sigma points
L=numel(x);                               % number of states
m=numel(z);                               % number of measurements
alpha=1e-3;                               % default, tunable
ki=0;                                     % default, tunable
beta=2;                                   % default, tunable
%Compute scaling factors for the sigma points
lambda=alpha^2*(L+ki)-L;                  % scaling factor
c=L+lambda;                               % scaling factor
C=sqrt(c);

%************************ Weight calculation ******************************
Wm=[lambda/c 0.5/c+zeros(1,2*L)];         % weights for means
Wc=Wm;
Wc(1)=Wc(1)+(1-alpha^2+beta);             % weights for covariance

%************************ Time update *************************************
% Sigma point calculation
A = C*chol(P)';
Y = x(:,ones(1,numel(x)));
X = [x Y+A Y-A];                          % sigma points
% Unscented transformation
[x1,X1,P1,X2]=ut(fstate,X,u1,Wm,Wc,L,Q);  % process
[z1,Z1,P2,Z2]=ut(hmeas,X1,u1,Wm,Wc,m,R);  % measurements
Pxx=P1;                                   % projected error covariance

%*********************** Measurement update *******************************
Pxy=X2*diag(Wc)*Z2';                      % transformed cross-covariance
Pyy=P2;                                   % transformed auto-covariance
K=Pxy*inv(Pyy);                           % Kalman gain
x=x1+K*(z-z1);                            % update estimate with measurement
P=Pxx-K*Pxy';                             % update the error covariance

Unscented transformation

function [y,Y,P,Y1]=ut(f,X,u1,Wm,Wc,n,R)
% Unscented transformation
L=size(X,2);
y=zeros(n,1);
Y=zeros(n,L);
for k=1:L
    Y(:,k)=f(X(:,k),u1);        % transform each sigma point
    y=y+Wm(k)*Y(:,k);           % weighted sum to calculate the mean state or measurement
end
Y1=Y-y(:,ones(1,L));            % error (deviation of each point from the mean)
P=Y1*diag(Wc)*Y1'+R;            % project the error covariance ahead (weighted sum)

Results

Figure 13: Comparison of actual and filtered value for four tank system ((a) tank 1, (b) tank 2, (c) tank 3, (d) tank 4; plots omitted)

2.1 Comparison of implementation of KF (Algorithm 1) and UKF (Algorithm 2)

The initialization of the two implementations is identical. In the KF time update, the first step projects the state ahead as a single unit. The UKF does not project the state as a single unit: it first extracts a set of 2L + 1 points, called sigma points, from the current state distribution, and projects every sigma point to the next state using the nonlinear state transition equation. We have to be careful about matrix dimensions in the sigma point calculation. The sigma point equation contains the term xhat_{k-1} +/- gamma*sqrt(P_{k-1}); since the process has four state variables, P_{k-1} is a 4 x 4 square matrix, and after the Cholesky decomposition and scaling it remains a 4 x 4 square matrix. On the other hand, xhat_{k-1} is a 4 x 1 vector, so the sum cannot be formed directly. To overcome this issue, we generate a 4 x 4 matrix by duplicating the xhat_{k-1} vector four times, as given below.

function X=sigmas(x,P,c)
A = c*chol(P)';
Y = x(:,ones(1,numel(x)));
X = [x Y+A Y-A];

Because of this construction, the sigma points are always placed along the columns of the scaled Cholesky factor (the principal axes of the covariance). Hence the difference in the first step is that the KF projects the state ahead, while the UKF projects the sigma points ahead.
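The sigma-point construction in the sigmas() helper can be mirrored in pure Python for the special case of a diagonal covariance, where the Cholesky factor is simply the element-wise square root. This sketch (function name sigma_points is mine) also checks the defining property that the weighted sum of the sigma points recovers the mean:

```python
# Sigma-point generation for a diagonal covariance (Python sketch of the
# sigmas() helper). For diagonal P the Cholesky factor is just the
# element-wise square root, so no matrix decomposition is needed.

import math

def sigma_points(x, P_diag, alpha=1e-3, kappa=0.0):
    L = len(x)
    lam = alpha**2 * (L + kappa) - L
    gamma = math.sqrt(L + lam)
    pts = [list(x)]                       # point 0: the mean itself
    for i in range(L):
        step = gamma * math.sqrt(P_diag[i])
        plus = list(x);  plus[i] += step
        minus = list(x); minus[i] -= step
        pts.append(plus)
        pts.append(minus)
    # weights for the mean: W0 = lam/(L+lam), Wi = 1/(2(L+lam))
    Wm = [lam / (L + lam)] + [1.0 / (2.0 * (L + lam))] * (2 * L)
    return pts, Wm

x = [14.1, 11.2, 7.2, 4.7]                # initial tank levels from the report
pts, Wm = sigma_points(x, [10.0] * 4)     # P0 = 10*eye(4) as in the code
# the weighted sum of the sigma points recovers the mean exactly
mean = [sum(w * p[i] for w, p in zip(Wm, pts)) for i in range(4)]
```

Note that for L = 4 there are 2L + 1 = 9 sigma points, and the symmetric +/- pairs guarantee the weighted mean equals the original state.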

The second step of the time update is to project the error covariance ahead. In the KF, (13) is used to project the error covariance. Expanding the right-hand side of (13) gives A E{(x_{k-1|k-1} - xhat_{k-1|k-1})(x_{k-1|k-1} - xhat_{k-1|k-1})^T} A^T + Qn, which is the sum of the projected error covariance of the state prediction and the process noise covariance. The same thing takes place in the UKF: first it calculates the mean of the projected sigma points using xhat_k^- = sum_{i=0}^{2L} W_i^(m) X_{i,k|k-1}, and then the error covariance projection takes place using (14). Here the projected error covariance is the weighted sum of the error covariances of the individual sigma points, and finally the process noise covariance is added, as in the KF.

    P_k^- = A P_{k-1} A^T + Qn    (13)

    P_k^- = sum_{i=0}^{2L} W_i^(c) [X_{i,k|k-1} - xhat_k^-][X_{i,k|k-1} - xhat_k^-]^T + Qn    (14)

The noise covariance is added in both cases. The major difference is that the term A P_{k-1} A^T in the KF is replaced in the UKF by sum_{i=0}^{2L} W_i^(c) [X_{i,k|k-1} - xhat_k^-][X_{i,k|k-1} - xhat_k^-]^T: there is no A matrix corresponding to a linear state transition; instead, the sigma points represent the nonlinear state transition. Hence the UKF takes a weighted sum rather than a direct multiplication with the error covariance of the previous state.

Then the UKF projects the measurement ahead in the same way that it projects the state ahead, and calculates the mean of the projected measurements. Moreover, at each time update the UKF draws a new set of sigma points, which has no counterpart in the KF. The measurement update has three steps in both the KF and the UKF.

Compute the Kalman gain: In the KF, the Kalman gain is calculated using (15): P_k^- H^T is the cross-covariance of the state and measurement errors, and H P_k^- H^T + Rn is the covariance of the measurements. In the UKF, the Kalman gain is calculated using (16): Pxy is the cross-covariance of the state and measurement errors, and Pyy is the covariance of the measurements. Hence the calculation of the Kalman gain is identical in the two algorithms. The only difference is that the cross-covariance and covariance in the KF are calculated using simple matrix multiplications, while the UKF uses weighted sums over the individual sigma-point covariances, as given in (17) and (18).

    K_k = P_k^- H^T (H P_k^- H^T + Rn)^-1    (15)

    K_k = Pxy Pyy^-1    (16)

    Pyy = sum_{i=0}^{2L} W_i^(c) [Y_{i,k|k-1} - yhat_k^-][Y_{i,k|k-1} - yhat_k^-]^T + Rn    (17)

    Pxy = sum_{i=0}^{2L} W_i^(c) [X_{i,k|k-1} - xhat_k^-][Y_{i,k|k-1} - yhat_k^-]^T    (18)

The update of the estimate with the measurement is identical in both algorithms, and the error covariance update is likewise analogous.
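The claim that the two gain computations coincide can be checked numerically in the scalar case: for a linear measurement y = H x, the UKF's weighted cross-covariance reduces exactly to the KF expression P * H. This Python sketch (function name ut_cross_cov_scalar is mine) uses the standard weights with alpha = 1e-3, kappa = 0, beta = 2:

```python
# For a linear measurement y = H x, the UKF's weighted cross-covariance
# Pxy reduces exactly to the KF expression P * H (scalar sketch),
# illustrating why gain formulas (15) and (16) coincide for linear systems.

import math

def ut_cross_cov_scalar(x, P, H, alpha=1e-3, kappa=0.0, beta=2.0):
    L = 1
    lam = alpha**2 * (L + kappa) - L
    gamma = math.sqrt(L + lam)
    pts = [x, x + gamma * math.sqrt(P), x - gamma * math.sqrt(P)]
    Wm = [lam / (L + lam), 0.5 / (L + lam), 0.5 / (L + lam)]
    Wc = [Wm[0] + (1 - alpha**2 + beta), Wm[1], Wm[2]]
    ys = [H * p for p in pts]                   # linear measurement map
    x_mean = sum(w * p for w, p in zip(Wm, pts))
    y_mean = sum(w * y for w, y in zip(Wm, ys))
    return sum(w * (p - x_mean) * (y - y_mean)
               for w, p, y in zip(Wc, pts, ys))

P, H = 0.5, 2.0
Pxy = ut_cross_cov_scalar(x=3.0, P=P, H=H)
# Pxy should match the KF cross-covariance P * H = 1.0 (up to roundoff)
```

The symmetric sigma points make the mean exact, and for a linear map the weighted deviations reproduce H * P term by term, so the two gains agree up to floating-point roundoff.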


Bibliography

[1] G. Welch and G. Bishop, "An introduction to the Kalman filter," University Lecture, University of North Carolina, USA, 2001, accessed on March 1, 2012.
[2] E. A. Wan and R. van der Merwe, "The unscented Kalman filter," Online: http://www.stomach.v2.nl/docs/TechPubs/Tracking and AR/wan01unscented.pdf, accessed on March 1, 2012.
[3] I. Syed, "State estimation: Kalman filter and beyond," University Lecture, Memorial University of Newfoundland, Canada, 2012.
[4] Y. Cao, "Learning the unscented Kalman filter," Online: http://www.mathworks.com/matlabcentral/fileexchange/18217, Jan 2008, accessed on March 1, 2012.

