
Consider the jump Markov linear system

x_k = A(r_k) x_{k-1} + B(r_k) w_k,   (1)

y_k = C(r_k) x_k + D(r_k) v_k,   (2)

where

• x_k is the state, y_k is the measurement, and w_k and v_k are zero-mean white Gaussian process and measurement noises with covariances Q and R, respectively;

• the properly sized matrices A(·), B(·), C(·) and D(·) are known functions of the mode state;

• the mode state r_k ∈ {1, …, N_r} is a finite-state, homogeneous (time invariant) Markov chain with transition probability matrix (TPM) \Pi = [\pi_{ij} \triangleq P(r_k = j | r_{k-1} = i)].

The problem is to estimate or approximate the posterior distribution p(x_k | y_{0:k}) in a computationally tractable way.
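To make the setup concrete, the following is a minimal NumPy sketch that simulates such a jump Markov linear system; the function name simulate_jmls, the per-mode matrix lists, and the initial mode distribution mu0 are illustrative assumptions rather than part of the problem statement.

    import numpy as np

    def simulate_jmls(A, B, C, D, Q, R, Pi, mu0, x0, T, rng=None):
        """Simulate x_k = A(r_k) x_{k-1} + B(r_k) w_k and
        y_k = C(r_k) x_k + D(r_k) v_k, cf. (1)-(2).

        A, B, C, D are length-Nr lists of per-mode matrices and
        Pi[i, j] = P(r_k = j | r_{k-1} = i)."""
        rng = np.random.default_rng() if rng is None else rng
        Nr = Pi.shape[0]
        x, r = x0, rng.choice(Nr, p=mu0)
        xs, ys, rs = [], [], []
        for _ in range(T):
            r = rng.choice(Nr, p=Pi[r])  # Markov mode transition
            w = rng.multivariate_normal(np.zeros(Q.shape[0]), Q)
            v = rng.multivariate_normal(np.zeros(R.shape[0]), R)
            x = A[r] @ x + B[r] @ w      # state equation (1)
            y = C[r] @ x + D[r] @ v      # measurement equation (2)
            xs.append(x); ys.append(y); rs.append(r)
        return np.array(xs), np.array(ys), np.array(rs)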

We have seen in class that the optimal posterior p(x_k | y_{0:k}) is a mixture of Gaussians with an exponentially growing number of components. Hence, approximations are necessary. The so-called interacting multiple model (IMM) filter [1] makes the approximation

p(x_k | y_{0:k}) \approx \sum_{i=1}^{N_r} \mu_k^i \, \mathcal{N}(x_k; \hat{x}_{k|k}^i, \Sigma_{k|k}^i),   (3)

where

\mu_k^i \triangleq P(r_k = i | y_{0:k})   (4)

are the posterior mode probabilities. When such an approximation is given, one can calculate

the overall posterior mean x̂k|k and covariance Σk|k using the standard Gaussian mixture mean

and covariance formulas as

\hat{x}_{k|k} = \sum_{i=1}^{N_r} \mu_k^i \, \hat{x}_{k|k}^i,   (5)

\Sigma_{k|k} = \sum_{i=1}^{N_r} \mu_k^i \left[ \Sigma_{k|k}^i + (\hat{x}_{k|k}^i - \hat{x}_{k|k})(\hat{x}_{k|k}^i - \hat{x}_{k|k})^T \right].   (6)
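Equations (5) and (6) are the standard Gaussian mixture moment-matching formulas. Since exactly the same merge reappears in the mixing step (17)-(18) and in the output calculation (52)-(53), it is convenient to capture it once; the helper below is a NumPy sketch with an assumed name merge_gaussians.

    import numpy as np

    def merge_gaussians(weights, means, covs):
        """Moment-match a Gaussian mixture to a single Gaussian, cf. (5)-(6).

        weights has shape (Nr,), means (Nr, nx), covs (Nr, nx, nx)."""
        weights, means, covs = map(np.asarray, (weights, means, covs))
        mean = np.einsum('i,ij->j', weights, means)
        diffs = means - mean                       # broadcasts over modes
        cov = np.einsum('i,ijk->jk', weights, covs) \
            + np.einsum('i,ij,ik->jk', weights, diffs, diffs)
        return mean, cov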

The overall estimate \hat{x}_{k|k} and covariance \Sigma_{k|k} can be given to the user as the output. The mode-conditioned means \{\hat{x}_{k|k}^i\}_{i=1}^{N_r}, covariances \{\Sigma_{k|k}^i\}_{i=1}^{N_r} and mode probabilities \{\mu_k^i\}_{i=1}^{N_r} must be calculated recursively from their previous values \{\hat{x}_{k-1|k-1}^i, \Sigma_{k-1|k-1}^i, \mu_{k-1}^i\}_{i=1}^{N_r}. For deriving such a recursion, we write

p(x_k | y_{0:k}) = \sum_{i=1}^{N_r} p(x_k | y_{0:k}, r_k = i) \underbrace{P(r_k = i | y_{0:k})}_{\triangleq \mu_k^i}   (7)

= \sum_{i=1}^{N_r} \mu_k^i \, p(x_k | y_{0:k}, r_k = i)   (8)

= \sum_{i=1}^{N_r} \mu_k^i \, \frac{p(y_k | x_k, r_k = i)}{p(y_k | y_{0:k-1}, r_k = i)} \, p(x_k | y_{0:k-1}, r_k = i)   (9)

= \sum_{i=1}^{N_r} \mu_k^i \, \frac{p(y_k | x_k, r_k = i)}{p(y_k | y_{0:k-1}, r_k = i)} \int p(x_k | x_{k-1}, r_k = i) \, p(x_{k-1} | y_{0:k-1}, r_k = i) \, dx_{k-1}   (10)

= \sum_{i=1}^{N_r} \mu_k^i \, \frac{p(y_k | x_k, r_k = i)}{p(y_k | y_{0:k-1}, r_k = i)} \int p(x_k | x_{k-1}, r_k = i) \sum_{j=1}^{N_r} p(x_{k-1} | y_{0:k-1}, r_{k-1} = j) \underbrace{P(r_{k-1} = j | r_k = i, y_{0:k-1})}_{\triangleq \mu_{k-1|k-1}^{ji}} \, dx_{k-1}   (11)

= \sum_{i=1}^{N_r} \mu_k^i \, \frac{p(y_k | x_k, r_k = i)}{p(y_k | y_{0:k-1}, r_k = i)} \int p(x_k | x_{k-1}, r_k = i) \sum_{j=1}^{N_r} \mu_{k-1|k-1}^{ji} \, p(x_{k-1} | y_{0:k-1}, r_{k-1} = j) \, dx_{k-1}   (12)

If the previous mode-conditioned posteriors are approximated by Gaussians,

p(x_{k-1} | y_{0:k-1}, r_{k-1} = j) \approx \mathcal{N}(x_{k-1}; \hat{x}_{k-1|k-1}^j, \Sigma_{k-1|k-1}^j),   (13)

then substituting this into (12) would clearly yield a Gaussian mixture with N_r^2 components for p(x_k | y_{0:k}), since each of the N_r outer terms contains an inner mixture of N_r Gaussians. Therefore we make the following approximation:

p(x_{k-1} | y_{0:k-1}, r_k = i) = \sum_{j=1}^{N_r} \mu_{k-1|k-1}^{ji} \, p(x_{k-1} | y_{0:k-1}, r_{k-1} = j)   (14)

= \sum_{j=1}^{N_r} \mu_{k-1|k-1}^{ji} \, \mathcal{N}(x_{k-1}; \hat{x}_{k-1|k-1}^j, \Sigma_{k-1|k-1}^j)   (15)

\approx \mathcal{N}(x_{k-1}; \hat{x}_{k-1|k-1}^{0i}, \Sigma_{k-1|k-1}^{0i}),   (16)

where the mean \hat{x}_{k-1|k-1}^{0i} and covariance \Sigma_{k-1|k-1}^{0i} are obtained by moment matching as

\hat{x}_{k-1|k-1}^{0i} = \sum_{j=1}^{N_r} \mu_{k-1|k-1}^{ji} \, \hat{x}_{k-1|k-1}^j,   (17)

\Sigma_{k-1|k-1}^{0i} = \sum_{j=1}^{N_r} \mu_{k-1|k-1}^{ji} \left[ \Sigma_{k-1|k-1}^j + (\hat{x}_{k-1|k-1}^j - \hat{x}_{k-1|k-1}^{0i})(\hat{x}_{k-1|k-1}^j - \hat{x}_{k-1|k-1}^{0i})^T \right].   (18)

In order to separate the merging in (17) and (18) from the merging in the output calculation (i.e., in (5) and (6)), the merging in (17) and (18) is called "mixing" in the literature [1]. The estimate \hat{x}_{k-1|k-1}^{0i} and covariance \Sigma_{k-1|k-1}^{0i} are accordingly called the mixed estimate and covariance. Now, substituting the approximation (16) into (12), we get

p(x_k | y_{0:k}) = \sum_{i=1}^{N_r} \mu_k^i \, \frac{p(y_k | x_k, r_k = i)}{p(y_k | y_{0:k-1}, r_k = i)} \int p(x_k | x_{k-1}, r_k = i) \, \mathcal{N}(x_{k-1}; \hat{x}_{k-1|k-1}^{0i}, \Sigma_{k-1|k-1}^{0i}) \, dx_{k-1}.   (19)

Noticing that

p(x_k | x_{k-1}, r_k = i) = \mathcal{N}(x_k; A(i) x_{k-1}, B(i) Q B^T(i)),   (20)

the integral in (19) represents a prediction update with the ith model. Hence, we see that

p(x_k | y_{0:k}) = \sum_{i=1}^{N_r} \mu_k^i \, \frac{p(y_k | x_k, r_k = i)}{p(y_k | y_{0:k-1}, r_k = i)} \, \mathcal{N}(x_k; \hat{x}_{k|k-1}^i, \Sigma_{k|k-1}^i)   (21)

where

\hat{x}_{k|k-1}^i = A(i) \, \hat{x}_{k-1|k-1}^{0i},   (22)

\Sigma_{k|k-1}^i = A(i) \, \Sigma_{k-1|k-1}^{0i} \, A^T(i) + B(i) Q B^T(i).   (23)
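In code, this prediction update might be sketched as follows; kf_predict is an assumed helper name, and A_i, B_i stand for A(i), B(i).

    def kf_predict(x_mix, P_mix, A_i, B_i, Q):
        """Mode-matched prediction update, cf. (22)-(23)."""
        x_pred = A_i @ x_mix
        P_pred = A_i @ P_mix @ A_i.T + B_i @ Q @ B_i.T
        return x_pred, P_pred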

Similarly, noticing that

p(y_k | x_k, r_k = i) = \mathcal{N}(y_k; C(i) x_k, D(i) R D^T(i)),   (24)

the multiplication in (21) represents a measurement update with the ith model. Hence,

p(x_k | y_{0:k}) = \sum_{i=1}^{N_r} \mu_k^i \, \mathcal{N}(x_k; \hat{x}_{k|k}^i, \Sigma_{k|k}^i)   (25)

where

\hat{x}_{k|k}^i = \hat{x}_{k|k-1}^i + K_k^i (y_k - \hat{y}_{k|k-1}^i),   (26)

\Sigma_{k|k}^i = \Sigma_{k|k-1}^i - K_k^i S_k^i (K_k^i)^T,   (27)

\hat{y}_{k|k-1}^i = C(i) \, \hat{x}_{k|k-1}^i,   (28)

S_k^i = C(i) \, \Sigma_{k|k-1}^i \, C^T(i) + D(i) R D^T(i),   (29)

K_k^i = \Sigma_{k|k-1}^i \, C^T(i) (S_k^i)^{-1}.   (30)
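A matching NumPy sketch of this measurement update (again with assumed names); it also returns the innovation mean and covariance, which are needed for the mode probability update below.

    import numpy as np

    def kf_update(x_pred, P_pred, y, C_i, D_i, R):
        """Mode-matched measurement update, cf. (26)-(30)."""
        y_pred = C_i @ x_pred                           # (28)
        S = C_i @ P_pred @ C_i.T + D_i @ R @ D_i.T      # (29)
        K = P_pred @ C_i.T @ np.linalg.inv(S)           # (30)
        x_upd = x_pred + K @ (y - y_pred)               # (26)
        P_upd = P_pred - K @ S @ K.T                    # (27)
        return x_upd, P_upd, y_pred, S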

We have obtained the recursions for the estimates and covariances. In order to complete the recursion, all we need to do is to give recursions for the probabilities \{\mu_k^i\}_{i=1}^{N_r} in terms of \{\mu_{k-1}^j\}_{j=1}^{N_r} and \{\mu_{k-1|k-1}^{ji}\}_{j,i=1}^{N_r}. The following gives a recursion for \{\mu_k^i\}_{i=1}^{N_r}:

\mu_k^i \triangleq P(r_k = i | y_{0:k})   (31)

\propto \underbrace{p(y_k | y_{0:k-1}, r_k = i)}_{= \mathcal{N}(y_k; \hat{y}_{k|k-1}^i, S_k^i)} \, P(r_k = i | y_{0:k-1})   (32)

= \mathcal{N}(y_k; \hat{y}_{k|k-1}^i, S_k^i) \sum_{j=1}^{N_r} \underbrace{P(r_k = i | r_{k-1} = j)}_{\pi_{ji}} \, \underbrace{P(r_{k-1} = j | y_{0:k-1})}_{\mu_{k-1}^j}   (33)

= \mathcal{N}(y_k; \hat{y}_{k|k-1}^i, S_k^i) \sum_{j=1}^{N_r} \pi_{ji} \, \mu_{k-1}^j.   (34)


Hence, we have

\mu_k^i = \frac{\mathcal{N}(y_k; \hat{y}_{k|k-1}^i, S_k^i) \sum_{j=1}^{N_r} \pi_{ji} \mu_{k-1}^j}{\sum_{\ell=1}^{N_r} \mathcal{N}(y_k; \hat{y}_{k|k-1}^\ell, S_k^\ell) \sum_{j=1}^{N_r} \pi_{j\ell} \mu_{k-1}^j}.   (35)
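A NumPy sketch of this mode probability update follows; gaussian_pdf and mode_probability_update are assumed names, and the convention Pi[j, i] = \pi_{ji} matches the TPM definition above.

    import numpy as np

    def gaussian_pdf(y, mean, cov):
        """Evaluate the density N(y; mean, cov)."""
        d = y - mean
        n = d.shape[0]
        return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / \
            np.sqrt((2.0 * np.pi) ** n * np.linalg.det(cov))

    def mode_probability_update(y, y_preds, Ss, Pi, mu_prev):
        """Mode probability recursion, cf. (35): innovation likelihoods times
        the predicted mode probabilities, normalized over the modes."""
        priors = Pi.T @ mu_prev   # element i equals sum_j pi_{ji} mu_{k-1}^j
        likes = np.array([gaussian_pdf(y, yp, S) for yp, S in zip(y_preds, Ss)])
        unnorm = likes * priors
        return unnorm / unnorm.sum()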

Similarly, we can obtain the mixing probabilities \mu_{k-1|k-1}^{ji} as

\mu_{k-1|k-1}^{ji} \triangleq P(r_{k-1} = j | r_k = i, y_{0:k-1})   (36)

\propto \underbrace{P(r_k = i | r_{k-1} = j, y_{0:k-1})}_{\pi_{ji}} \, \underbrace{P(r_{k-1} = j | y_{0:k-1})}_{\mu_{k-1}^j}.   (37)

Hence, we have

\mu_{k-1|k-1}^{ji} = \frac{\pi_{ji} \, \mu_{k-1}^j}{\sum_{\ell=1}^{N_r} \pi_{\ell i} \, \mu_{k-1}^\ell}.   (40)
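The mixing step, i.e., the mixing probabilities in (40) together with the moment matching of (17)-(18), can be sketched in NumPy as follows; mix_step is an assumed name and Pi[j, i] = \pi_{ji} as before.

    import numpy as np

    def mix_step(Pi, mu_prev, x_prev, P_prev):
        """Mixing: probabilities per (40), mixed moments per (17)-(18).

        x_prev has shape (Nr, nx), P_prev has shape (Nr, nx, nx)."""
        num = Pi * mu_prev[:, None]        # entry (j, i) is pi_{ji} mu_{k-1}^j
        mu_mix = num / num.sum(axis=0)     # mu_{k-1|k-1}^{ji}; columns sum to 1
        x_mix = np.empty_like(x_prev)
        P_mix = np.empty_like(P_prev)
        for i in range(Pi.shape[0]):
            w = mu_mix[:, i]
            x_mix[i] = w @ x_prev                                  # (17)
            d = x_prev - x_mix[i]
            P_mix[i] = np.einsum('j,jkl->kl', w, P_prev) \
                + np.einsum('j,jk,jl->kl', w, d, d)                # (18)
        return mu_mix, x_mix, P_mix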

Now we can describe a single step of the IMM filter as follows.

Algorithm 1 (Single Step of the IMM Filter): Suppose we have the previous sufficient statistics \{\hat{x}_{k-1|k-1}^j, \Sigma_{k-1|k-1}^j, \mu_{k-1}^j\}_{j=1}^{N_r}. Then a single step of the IMM algorithm to obtain the current sufficient statistics \{\hat{x}_{k|k}^i, \Sigma_{k|k}^i, \mu_k^i\}_{i=1}^{N_r} is given as follows.

• Mixing:

– Calculate the mixing probabilities \{\mu_{k-1|k-1}^{ji}\}_{j,i=1}^{N_r} as

\mu_{k-1|k-1}^{ji} = \frac{\pi_{ji} \, \mu_{k-1}^j}{\sum_{\ell=1}^{N_r} \pi_{\ell i} \, \mu_{k-1}^\ell}.   (41)

– Calculate the mixed estimates \{\hat{x}_{k-1|k-1}^{0i}\}_{i=1}^{N_r} and covariances \{\Sigma_{k-1|k-1}^{0i}\}_{i=1}^{N_r} as

\hat{x}_{k-1|k-1}^{0i} = \sum_{j=1}^{N_r} \mu_{k-1|k-1}^{ji} \, \hat{x}_{k-1|k-1}^j,   (42)

\Sigma_{k-1|k-1}^{0i} = \sum_{j=1}^{N_r} \mu_{k-1|k-1}^{ji} \left[ \Sigma_{k-1|k-1}^j + (\hat{x}_{k-1|k-1}^j - \hat{x}_{k-1|k-1}^{0i})(\hat{x}_{k-1|k-1}^j - \hat{x}_{k-1|k-1}^{0i})^T \right].   (43)

• Mode Matched Prediction Update: For the ith model, i = 1, \ldots, N_r, calculate the predicted estimate \hat{x}_{k|k-1}^i and covariance \Sigma_{k|k-1}^i from the mixed estimate \hat{x}_{k-1|k-1}^{0i} and covariance \Sigma_{k-1|k-1}^{0i} as

\hat{x}_{k|k-1}^i = A(i) \, \hat{x}_{k-1|k-1}^{0i},   (44)

\Sigma_{k|k-1}^i = A(i) \, \Sigma_{k-1|k-1}^{0i} \, A^T(i) + B(i) Q B^T(i).   (45)

• Mode Matched Measurement Update: For the ith model, i = 1, \ldots, N_r:

– Calculate the updated estimate \hat{x}_{k|k}^i and covariance \Sigma_{k|k}^i from the predicted estimate \hat{x}_{k|k-1}^i and covariance \Sigma_{k|k-1}^i as

\hat{x}_{k|k}^i = \hat{x}_{k|k-1}^i + K_k^i (y_k - \hat{y}_{k|k-1}^i),   (46)

\Sigma_{k|k}^i = \Sigma_{k|k-1}^i - K_k^i S_k^i (K_k^i)^T,   (47)

\hat{y}_{k|k-1}^i = C(i) \, \hat{x}_{k|k-1}^i,   (48)

S_k^i = C(i) \, \Sigma_{k|k-1}^i \, C^T(i) + D(i) R D^T(i),   (49)

K_k^i = \Sigma_{k|k-1}^i \, C^T(i) (S_k^i)^{-1}.   (50)

• Mode Probability Update: Calculate the mode probabilities \{\mu_k^i\}_{i=1}^{N_r} as

\mu_k^i = \frac{\mathcal{N}(y_k; \hat{y}_{k|k-1}^i, S_k^i) \sum_{j=1}^{N_r} \pi_{ji} \mu_{k-1}^j}{\sum_{\ell=1}^{N_r} \mathcal{N}(y_k; \hat{y}_{k|k-1}^\ell, S_k^\ell) \sum_{j=1}^{N_r} \pi_{j\ell} \mu_{k-1}^j}.   (51)

• Output Estimate Calculation: Calculate the overall estimate \hat{x}_{k|k} and covariance \Sigma_{k|k} as

\hat{x}_{k|k} = \sum_{i=1}^{N_r} \mu_k^i \, \hat{x}_{k|k}^i,   (52)

\Sigma_{k|k} = \sum_{i=1}^{N_r} \mu_k^i \left[ \Sigma_{k|k}^i + (\hat{x}_{k|k}^i - \hat{x}_{k|k})(\hat{x}_{k|k}^i - \hat{x}_{k|k})^T \right].   (53)
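Putting the steps together, one full IMM recursion step might look like the following NumPy sketch. It reuses the hypothetical helpers mix_step, kf_predict, kf_update, gaussian_pdf and merge_gaussians sketched earlier, and it is meant as an illustration of Algorithm 1 rather than a reference implementation.

    import numpy as np

    def imm_step(y, x_prev, P_prev, mu_prev, A, B, C, D, Q, R, Pi):
        """One step of the IMM filter, cf. Algorithm 1."""
        Nr = Pi.shape[0]
        # Mixing, cf. (41)-(43).
        _, x_mix, P_mix = mix_step(Pi, mu_prev, x_prev, P_prev)
        x_upd, P_upd = np.empty_like(x_prev), np.empty_like(P_prev)
        likes = np.empty(Nr)
        for i in range(Nr):
            # Mode-matched prediction and measurement updates, cf. (44)-(50).
            x_pred, P_pred = kf_predict(x_mix[i], P_mix[i], A[i], B[i], Q)
            x_upd[i], P_upd[i], y_pred, S = kf_update(
                x_pred, P_pred, y, C[i], D[i], R)
            likes[i] = gaussian_pdf(y, y_pred, S)
        # Mode probability update, cf. (51).
        mu = likes * (Pi.T @ mu_prev)
        mu = mu / mu.sum()
        # Output calculation, cf. (52)-(53); reported but not fed back.
        x_out, P_out = merge_gaussians(mu, x_upd, P_upd)
        return x_upd, P_upd, mu, x_out, P_out

In a filtering run, imm_step would be called once per measurement, carrying x_upd, P_upd and mu forward as the sufficient statistics.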

Remark 1 The output calculation step serves only to report an overall estimate to the user and therefore does not affect the sufficient statistics of the recursion.

A block diagram of a single step of the IMM filter is given in Figure 1 below.

References

[1] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with Applications to Tracking and

Navigation. New York: Wiley, 2001.

[2] Y. Bar-Shalom, S. Challa, and H. A. P. Blom, “IMM estimator versus optimal estimator for

hybrid systems,” IEEE Trans. Aerosp. Electron. Syst., vol. 41, no. 3, pp. 986–991, Jul. 2005.


[Figure 1 here: the previous mode probabilities \{\mu_{k-1}^i\}_{i=1}^{N} and mode-conditioned pairs (\hat{x}_{k-1|k-1}^i, \Sigma_{k-1|k-1}^i) enter the mixing block; the mixed pairs (\hat{x}_{k-1|k-1}^{0i}, \Sigma_{k-1|k-1}^{0i}) feed the mode-matched Kalman filters KF-1, \ldots, KF-N; the measurement y_k drives the mixing probability and mode probability calculations; the updated pairs (\hat{x}_{k|k}^i, \Sigma_{k|k}^i) and mode probabilities \{\mu_k^i\}_{i=1}^{N} enter the output estimate calculation, which produces \hat{x}_{k|k} and \Sigma_{k|k}.]

Figure 1: Block diagram of a single step of the IMM algorithm for N models.
