May 08, 2014

1 Comparison of MMSE estimators for an example

(a) The minimum MSE estimator of X of the form g(U) is given by g(u) = u^3, because this estimator has MSE equal to zero. That is, E[X|U] = X.

(b) We begin by calculating some moments.

E[U^n] = ∫_{-1}^{1} (0.5) u^n du = 1/(n+1) if n is even, and 0 if n is odd.

Var(U) = E[U^2] - E[U]^2 = E[U^2] = 1/3
E[X] = E[U^3] = 0
Var(X) = E[X^2] = E[U^6] = 1/7
Cov(X,U) = E[XU] - E[X]E[U] = E[U^4] = 1/5

So

Ê[X|U] = E[X] + (Cov(X,U)/Var(U))(U - E[U]) = Cov(X,U) U / Var(U) = 3U/5,

and the mean square error is

Var(X) - Cov(X,U)^2/Var(U) = 1/7 - 3/25 = 4/175.
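As a sanity check (not part of the original solution), a short Monte Carlo sketch can confirm that the linear estimator 3U/5 for X = U^3 attains mean square error 4/175 ≈ 0.0229:

```python
import random

# Hypothetical numerical check: draw U uniform on [-1, 1], set X = U^3,
# and estimate the MSE of the linear estimator 3U/5 by simulation.
random.seed(0)
n = 200_000
mse = 0.0
for _ in range(n):
    u = random.uniform(-1.0, 1.0)
    x = u ** 3
    err = x - 3.0 * u / 5.0
    mse += err * err
mse /= n
print(round(mse, 4))          # close to 4/175 ≈ 0.0229
print(round(4.0 / 175.0, 4))  # 0.0229
```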

2 Estimation with jointly Gaussian random variables

(a) E[W] = E[X + 2Y + 3] = E[X] + 2E[Y] + 3 = 13.

We use the fact Cov(X,Y) = ρ σ_X σ_Y = (0.2)(3)(5) = 3 to get

Var(W) = Var(X + 2Y + 3) = Var(X) + 4 Var(Y) + 2·2·Cov(X,Y) = 9 + 100 + 12 = 121.

(b) Since W is a linear combination of jointly Gaussian random variables, W is Gaussian. So

P{W ≥ 20} = P{ (W - 13)/11 ≥ (20 - 13)/11 } = Q((20 - 13)/11) = Q(0.6364) = 0.2623.

(c) Since W and Y are linear combinations of the jointly Gaussian random variables X and Y , the

variables W and Y are jointly Gaussian. Therefore, the best unconstrained estimator of Y given

W is the best linear estimator of Y given W.

So

g*(W) = Ê[Y|W] = E[Y|W] = E[Y] + (Cov(Y,W)/Var(W))(W - E[W]).

Using

Cov(Y,W) = Cov(Y, X + 2Y + 3) = Cov(Y,X) + 2 Var(Y) = 3 + 50 = 53,

we find

g*(W) = E[Y|W] = 4 + (53/121)(W - 13)

and

MSE = Var(Y) - (Cov(Y,W))^2/Var(W) = 25 - (53)^2/121 = 1.7851.
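The two numerical answers above can be reproduced in a few lines (a sketch, not part of the original solution), using Q(x) = 0.5·erfc(x/√2) for the standard normal tail:

```python
from math import erfc, sqrt

# Numerical check of parts (b) and (c) using the moments derived above:
# E[W] = 13, Var(W) = 121, Cov(Y, W) = 53, Var(Y) = 25.
def q(x: float) -> float:
    """Standard normal tail probability P{N(0,1) >= x}."""
    return 0.5 * erfc(x / sqrt(2.0))

p = q((20.0 - 13.0) / 11.0)     # part (b): P{W >= 20}
mse_c = 25.0 - 53.0**2 / 121.0  # part (c): MSE of E[Y|W]
print(round(p, 4))      # 0.2623
print(round(mse_c, 4))  # 1.7851
```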

3 Projections onto nested linear subspaces

(a) By the Orthogonality Principle, since Z_1 ∈ V_1, it must only be shown that (Z_0 - Z_1) ⊥ Z for all Z ∈ V_1. But this follows from the three facts:

Z_0 - Z_1 = (X - Z_1) - (X - Z_0),

(X - Z_1) ⊥ Z for all Z ∈ V_1,

(X - Z_0) ⊥ Z for all Z ∈ V_0, so in particular for all Z ∈ V_1.

(b) (i) V_0 = {a + bY_1 + cY_2 : a, b, c are real constants}, V_1 = {a + bY_1 : a, b are real constants}.

(ii) V_0 = {g(Y_1, Y_2) : g such that E[g(Y_1, Y_2)^2] < ∞}, V_1 = {g(Y_1) : g such that E[g(Y_1)^2] < ∞}.

(iii) V_0 = {a + bY_1 : a, b are real constants}, V_1 = set of real constants.
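The two-stage projection identity of part (a) can be illustrated numerically for the subspaces in (b)(i), using least squares as the sample analogue of projection (a hypothetical sketch, not part of the original solution; the joint distribution of X, Y_1, Y_2 below is arbitrary):

```python
import numpy as np

# Project x onto span{1, y1, y2} (giving z0), then project z0 onto
# span{1, y1}; this should recover the direct projection z1 of x.
rng = np.random.default_rng(0)
n = 1000
y1, y2 = rng.normal(size=n), rng.normal(size=n)
x = 2.0 * y1 - y2 + y1 * y2 + rng.normal(size=n)

A0 = np.column_stack([np.ones(n), y1, y2])  # basis for V0
A1 = np.column_stack([np.ones(n), y1])      # basis for V1, V1 ⊂ V0

z0 = A0 @ np.linalg.lstsq(A0, x, rcond=None)[0]    # project x onto V0
z1 = A1 @ np.linalg.lstsq(A1, x, rcond=None)[0]    # project x onto V1
z01 = A1 @ np.linalg.lstsq(A1, z0, rcond=None)[0]  # project z0 onto V1

print(np.allclose(z1, z01))  # True
```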

4 Conditional third moment for jointly Gaussian variables

(a) Z has the same distribution as W + μ, where W is a N(0, σ^2) random variable. Since E[W] = 0 and E[W^3] = 0,

E[Z^3] = E[W^3 + 3W^2 μ + 3W μ^2 + μ^3] = 3σ^2 μ + μ^3.

An alternative approach is to use the characteristic function of Z.

(b) The conditional distribution of X given Y = y is N(ρy, 1 - ρ^2). Therefore, the answer to this part is obtained by replacing μ by ρY and σ^2 by 1 - ρ^2 in the answer to part (a). That is,

E[X^3|Y] = 3(1 - ρ^2)ρY + ρ^3 Y^3.
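A quick simulation (not part of the original solution) confirms the part (a) formula E[Z^3] = 3σ^2 μ + μ^3; the values of μ and σ below are arbitrary illustrative choices:

```python
import random

# Monte Carlo check of E[Z^3] for Z ~ N(mu, sigma^2).
random.seed(1)
mu, sigma = 0.7, 1.3
n = 300_000
m3 = sum(random.gauss(mu, sigma) ** 3 for _ in range(n)) / n
closed_form = 3.0 * sigma**2 * mu + mu**3
print(round(closed_form, 3))  # 3.892
print(round(m3, 2))           # close to the closed form
```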


5 Some identities for estimators, version 3

(a) TRUE. In fact, the estimators E[X|Y] and E[X|Y, Y^2] are identical because any function of Y, Y^2 is a function of Y alone, so equality always holds in (a).

(b) FALSE. (It would be true if the inequality pointed in the other direction.) For example, suppose Y is a N(0, 1) random variable and X = Y^2. Then Ê[X|Y] = E[X] because X and Y are uncorrelated. However, Ê[X|Y, Y^2] = X, which has MSE equal to zero.
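The counterexample in (b) can be checked numerically (a hypothetical sketch, not part of the original solution): fitting X = Y^2 by least squares that is affine in Y alone leaves MSE near Var(Y^2) = 2, while allowing Y^2 as a regressor fits X exactly.

```python
import numpy as np

# Sample version of the counterexample: Y ~ N(0,1), X = Y^2.
rng = np.random.default_rng(2)
y = rng.normal(size=20_000)
x = y ** 2

A_lin = np.column_stack([np.ones_like(y), y])         # affine in Y
A_quad = np.column_stack([np.ones_like(y), y, y**2])  # affine in Y, Y^2

mse_lin = np.mean((x - A_lin @ np.linalg.lstsq(A_lin, x, rcond=None)[0]) ** 2)
mse_quad = np.mean((x - A_quad @ np.linalg.lstsq(A_quad, x, rcond=None)[0]) ** 2)

print(round(float(mse_lin), 2))   # near Var(Y^2) = 2
print(round(float(mse_quad), 8))  # essentially 0
```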

(c) TRUE. If X and Y are jointly Gaussian, E[X|Y] has the minimum MSE over all possible functions of Y, so it also has the minimum MSE over all functions of Y of the form a + bY + cY^2. Therefore, E[X|Y] = Ê[X|Y, Y^2].

(d) TRUE. The estimator E[X|Y] minimizes the MSE over all functions of Y; in particular, it has MSE at least as small as that of E[E[X|Z]|Y], which is also a function of Y.

(e) TRUE. The given condition implies that the mean E[X] has the minimum MSE over all possible functions of Y (i.e., E[X] = E[X|Y]). Therefore, E[X] also has the minimum MSE over all possible affine functions of Y, so Ê[X|Y] = E[X]. Thus, E[X|Y] = E[X] = Ê[X|Y].

6 Steady state gains for one-dimensional Kalman filter

(a) Let b_k = σ_k^2. The given equations show that the sequence b_k satisfies the recursion b_{k+1} = F(b_k), where

F(b) = b f^2/(1 + b) + 1,

with the initial condition b_0 = σ_0^2 given. The function F is positive, strictly increasing, and bounded. Thus if b_k ≤ b_{k+1} for some k, then b_{k+1} = F(b_k) ≤ F(b_{k+1}) = b_{k+2}. Therefore, if b_0 ≤ b_1, then the sequence (b_k : k ≥ 0) is monotone nondecreasing and bounded. Similarly, if b_0 ≥ b_1, then the sequence (b_k : k ≥ 0) is monotone nonincreasing and bounded. Since bounded monotone sequences have finite limits, the sequence (b_k : k ≥ 0) converges.

(b) Denote the limit by b_∞ (so b_∞ = σ_∞^2). Letting k → ∞ in the equation b_{k+1} = F(b_k) yields b_∞ = F(b_∞), which solves to

b_∞ = (f^2 + √(f^4 + 4))/2.

(c) If f = 0, the states x_k are uncorrelated with variance one. The observations y_0, ..., y_{k-1} are therefore orthogonal to x_k, and the variance of the error, σ_k^2, is just the variance of x_k, equal to one for all k. The limiting variance of error is thus also one.
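A minimal sketch of parts (a)-(b): iterate the recursion numerically and compare the limit with the closed-form fixed point. The values of f and b_0 below are arbitrary illustrative choices.

```python
from math import sqrt

# Iterate b_{k+1} = b_k * f^2 / (1 + b_k) + 1 until it settles, then
# compare with the fixed point (f^2 + sqrt(f^4 + 4)) / 2.
f = 0.9
b = 5.0  # initial error variance sigma_0^2
for _ in range(200):
    b = b * f * f / (1.0 + b) + 1.0

fixed_point = (f * f + sqrt(f ** 4 + 4.0)) / 2.0
print(abs(b - fixed_point) < 1e-9)  # True
```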

7 Kalman filter for a rotating state with 2D observations

(a) Given a nonzero vector x_k, the vector F x_k is obtained by rotating the vector x_k one tenth of a revolution counter-clockwise about the origin, and then shrinking the vector towards zero by one percent. Thus, successive iterates F^k x_0 spiral in towards zero, with one-tenth revolution per time unit, shrinking by about ten percent per revolution.

(b) The equations for Σ_{k+1|k} and K_k can be written as

Σ_{k+1|k} = F( Σ_{k|k-1} - Σ_{k|k-1}(Σ_{k|k-1} + I)^{-1} Σ_{k|k-1} ) F^T + I

K_k = F Σ_{k|k-1} (Σ_{k|k-1} + I)^{-1},

with the usual initial condition Σ_{0|-1} = P_0.

(c) If P_0 = σ_0^2 I, then we see by induction that Σ_{k|k-1} is proportional to I for all k ≥ 0. Moreover, we can write Σ_{k|k-1} = σ_k^2 I, where σ_k^2 = σ_{k|k-1}^2 is the conditional variance sequence arising for the one-dimensional Kalman filter in the previous problem, and

K_k = ( σ_k^2/(σ_k^2 + 1) ) F.

(d) The steady state covariance of error and gain do not depend on the initial covariance matrix P_0, so we can assume without loss of generality that the initial condition of part (c) holds. Using the results of the previous problem, we find

Σ_∞ = σ_∞^2 I,   K_∞ = ( σ_∞^2/(σ_∞^2 + 1) ) F,   where σ_∞^2 = (f^2 + √(f^4 + 4))/2.
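Parts (c)-(d) can be checked numerically (a sketch, not part of the original solution): run the matrix covariance recursion from part (b) with F equal to 0.99 times a one-tenth-revolution rotation and a P_0 proportional to I, and compare the limit with σ_∞^2 I taking f = 0.99.

```python
import numpy as np

# Riccati-style covariance recursion from part (b), specialized to
# F = 0.99 * R(2*pi/10) and P0 = sigma_0^2 * I (here sigma_0^2 = 4).
theta = 2.0 * np.pi / 10.0
F = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
I = np.eye(2)
sigma = 4.0 * I
for _ in range(500):
    sigma = F @ (sigma - sigma @ np.linalg.inv(sigma + I) @ sigma) @ F.T + I

f = 0.99
sigma_inf_sq = (f**2 + np.sqrt(f**4 + 4.0)) / 2.0
print(np.allclose(sigma, sigma_inf_sq * I))  # True
```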
