
EE5111 - Estimation Theory (Spring 2020)
Tutorial 2 - Least Squares

1. In this problem we examine the initialization of a sequential LSE. Assume that we choose $\hat\theta[-1]$ and $\Sigma[-1] = \alpha I$ to initialize the LSE. We will show that, as $\alpha \to \infty$, the batch LSE
$$\hat\theta_B[n] = \left(H^T[n]C^{-1}[n]H[n]\right)^{-1}H^T[n]C^{-1}[n]x[n] = \left(\sum_{k=0}^{n}\frac{1}{\sigma_k^2}\,h[k]h^T[k]\right)^{-1}\left(\sum_{k=0}^{n}\frac{1}{\sigma_k^2}\,h[k]x[k]\right)$$
for $n \ge p$ is identical to the sequential LSE. First, assume that the initial observation vectors $\{h[-p], h[-(p-1)], \ldots, h[-1]\}$ and noise variances $\{\sigma_{-p}^2, \sigma_{-(p-1)}^2, \ldots, \sigma_{-1}^2\}$ exist, so that for the chosen initial LSE and covariance we can use a batch estimator. Hence,
$$\hat\theta[-1] = \hat\theta_B[-1] = \left(H^T[-1]C^{-1}[-1]H[-1]\right)^{-1}H^T[-1]C^{-1}[-1]x[-1]$$
and
$$\Sigma[-1] = \left(H^T[-1]C^{-1}[-1]H[-1]\right)^{-1},$$
where
$$H[-1] = \begin{bmatrix} h^T[-p] \\ h^T[-(p-1)] \\ \vdots \\ h^T[-1] \end{bmatrix}, \qquad C[-1] = \operatorname{diag}\left(\sigma_{-p}^2, \sigma_{-(p-1)}^2, \ldots, \sigma_{-1}^2\right).$$
Thus, we may view the initial estimate for the sequential LSE as the result of applying a batch estimator to the initial observation vectors. Since the sequential LSE initialized using a batch estimator is identical to the batch LSE using all the observation vectors, we have for the sequential LSE with the assumed initial conditions
$$\hat\theta_S[n] = \left(\sum_{k=-p}^{n}\frac{1}{\sigma_k^2}\,h[k]h^T[k]\right)^{-1}\left(\sum_{k=-p}^{n}\frac{1}{\sigma_k^2}\,h[k]x[k]\right).$$
Prove that this can be rewritten as
$$\hat\theta_S[n] = \left(\Sigma^{-1}[-1] + \sum_{k=0}^{n}\frac{1}{\sigma_k^2}\,h[k]h^T[k]\right)^{-1}\left(\Sigma^{-1}[-1]\hat\theta[-1] + \sum_{k=0}^{n}\frac{1}{\sigma_k^2}\,h[k]x[k]\right).$$
Then examine what happens as $\alpha \to \infty$ for $0 \le n \le p-1$ and for $n \ge p$.
2. Consider the minimization of $h(\theta)$ with respect to $\theta$ and assume that $\hat\theta = \arg\min_\theta h(\theta)$. Let $\alpha = g(\theta)$, where $g$ is a one-to-one mapping. Prove that if $\hat\alpha$ minimizes $h(g^{-1}(\alpha))$, then $\hat\theta = g^{-1}(\hat\alpha)$.

3. Let the observed data be
$$x[n] = \exp(\theta) + w[n], \qquad n = 0, 1, \ldots, N-1.$$
Set up a Newton-Raphson iteration to find the LSE of $\theta$. Can you avoid the nonlinear optimization and find the LSE analytically?
Solutions

A1. We have been given, for the assumed initial conditions:
$$\hat\theta_S[n] = \left(\sum_{k=-p}^{n}\frac{1}{\sigma_k^2}\,h[k]h^T[k]\right)^{-1}\left(\sum_{k=-p}^{n}\frac{1}{\sigma_k^2}\,h[k]x[k]\right) = \left(\sum_{k=-p}^{-1}\frac{1}{\sigma_k^2}\,h[k]h^T[k] + \sum_{k=0}^{n}\frac{1}{\sigma_k^2}\,h[k]h^T[k]\right)^{-1}\left(\sum_{k=-p}^{-1}\frac{1}{\sigma_k^2}\,h[k]x[k] + \sum_{k=0}^{n}\frac{1}{\sigma_k^2}\,h[k]x[k]\right).$$
Now look at the batch estimate for the observations $\{-p, \ldots, -1\}$:
$$\hat\theta[-1] = \hat\theta_B[-1] = \left(\sum_{k=-p}^{-1}\frac{1}{\sigma_k^2}\,h[k]h^T[k]\right)^{-1}\left(\sum_{k=-p}^{-1}\frac{1}{\sigma_k^2}\,h[k]x[k]\right).$$
Also,
$$\Sigma^{-1}[-1] = H^T[-1]C^{-1}[-1]H[-1] = \sum_{k=-p}^{-1}\frac{1}{\sigma_k^2}\,h[k]h^T[k]. \tag{1}$$
From the above two equations we get
$$\sum_{k=-p}^{-1}\frac{1}{\sigma_k^2}\,h[k]x[k] = \Sigma^{-1}[-1]\,\hat\theta[-1]. \tag{2}$$
Using (1) and (2), we get
$$\hat\theta_S[n] = \left(\Sigma^{-1}[-1] + \sum_{k=0}^{n}\frac{1}{\sigma_k^2}\,h[k]h^T[k]\right)^{-1}\left(\Sigma^{-1}[-1]\hat\theta[-1] + \sum_{k=0}^{n}\frac{1}{\sigma_k^2}\,h[k]x[k]\right). \tag{3}$$
As $\alpha \to \infty$, $\Sigma^{-1}[-1] = \frac{1}{\alpha}I \to 0$. The remaining matrix in the above equation is then invertible only if $n \ge p$ (each $h[k]$ is a vector in $\mathbb{R}^p$, and a sum of rank-one $p \times p$ matrices $h[k]h^T[k]$ can be invertible only when it contains at least $p$ linearly independent contributions). Hence, if $n < p$, we cannot let $\alpha \to \infty$. When $n \ge p$, taking the limit $\alpha \to \infty$ in (3) gives
$$\hat\theta_S[n] = \left(\sum_{k=0}^{n}\frac{1}{\sigma_k^2}\,h[k]h^T[k]\right)^{-1}\left(\sum_{k=0}^{n}\frac{1}{\sigma_k^2}\,h[k]x[k]\right) = \hat\theta_B[n]. \tag{4}$$
This is the required proof. Also note that when $\Sigma^{-1}[-1] \to 0$, the choice of $\hat\theta[-1]$ is inconsequential.
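As a numerical sanity check of (3) and (4) (not part of the original solution), here is a minimal NumPy sketch with made-up $h[k]$, $\sigma_k^2$, and $x[k]$: as $\alpha$ grows, the initialized estimate approaches the batch LSE, and the arbitrary choice of $\hat\theta[-1]$ stops mattering.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 10                                # parameter dimension, number of samples (n >= p)
theta_true = rng.standard_normal(p)
H = rng.standard_normal((n, p))             # rows are h^T[k], k = 0, ..., n-1
sigma2 = rng.uniform(0.5, 2.0, size=n)      # noise variances sigma_k^2
x = H @ theta_true + rng.standard_normal(n) * np.sqrt(sigma2)

# Batch weighted LSE, equation (4): no prior information
A = (H.T / sigma2) @ H                      # sum_k h[k] h^T[k] / sigma_k^2
b = (H.T / sigma2) @ x                      # sum_k h[k] x[k] / sigma_k^2
theta_batch = np.linalg.solve(A, b)

# Initialized estimate, equation (3), with Sigma[-1] = alpha * I
theta_init = np.ones(p)                     # arbitrary theta_hat[-1]
for alpha in [1.0, 1e2, 1e4, 1e6]:
    Sinv = np.eye(p) / alpha                # Sigma^{-1}[-1] -> 0 as alpha -> infinity
    theta_s = np.linalg.solve(Sinv + A, Sinv @ theta_init + b)
    print(alpha, np.linalg.norm(theta_s - theta_batch))   # gap shrinks with alpha
```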
A2. Given that $\hat\alpha$ minimizes the function $h(g^{-1}(\alpha))$, by assumption
$$h(g^{-1}(\hat\alpha)) \le h(g^{-1}(\alpha)) \quad \forall\,\alpha \tag{5}$$
$$\implies h(\theta_o) \le h(\theta) \quad \forall\,\theta, \tag{6}$$
where $\theta_o = g^{-1}(\hat\alpha)$; (6) follows from (5) because $g$ is invertible, so every $\theta$ can be written as $g^{-1}(\alpha)$ for some $\alpha$. Equation (6) implies that $\theta_o$ minimizes $h(\theta)$, i.e., $\theta_o = \hat\theta$, and hence $\hat\theta = g^{-1}(\hat\alpha)$.
A3. Given $x[n] = e^\theta + w[n]$. Using the Newton-Raphson method, we try to minimize
$$J = (x - s(\theta))^T (x - s(\theta)).$$
The update equation is
$$\theta_{k+1} = \theta_k + \left[H(\theta_k)^T H(\theta_k) - \sum_{n=0}^{N-1} G_n(\theta_k)\left[x[n] - [s(\theta_k)]_n\right]\right]^{-1} H(\theta_k)^T (x - s(\theta_k)). \tag{7}$$
Here $s(\theta) = e^\theta \mathbf{1}_N$. Since there is only one parameter to estimate, $H$ is a vector of length $N$ and $G_n(\theta_k)$ is a scalar for each $n$. Hence we can evaluate
$$[H(\theta)]_i = \frac{\partial s[i]}{\partial \theta} = e^\theta \quad \forall\, i = 0, 1, \ldots, N-1, \tag{8}$$
so $H(\theta) = e^\theta \mathbf{1}_N$. Also,
$$G_n(\theta) = \frac{\partial^2 s[n]}{\partial \theta^2} = e^\theta. \tag{9}$$
Substituting (8) and (9) into (7) and denoting $\bar{x} = \frac{1}{N}\sum_{n=0}^{N-1} x[n]$,
$$\theta_{k+1} = \theta_k + \left[N e^{2\theta_k} - \sum_{n=0}^{N-1} e^{\theta_k}\left[x[n] - e^{\theta_k}\right]\right]^{-1} e^{\theta_k}\mathbf{1}_N^T\left(x - e^{\theta_k}\mathbf{1}_N\right) = \theta_k + \frac{e^{\theta_k} N\bar{x} - N e^{2\theta_k}}{2N e^{2\theta_k} - e^{\theta_k} N\bar{x}} = \theta_k + \frac{\bar{x} - e^{\theta_k}}{2e^{\theta_k} - \bar{x}}.$$
Hence the Newton-Raphson update equation to find the LS estimate is
$$\theta_{k+1} = \theta_k + \frac{\bar{x} - e^{\theta_k}}{2 e^{\theta_k} - \bar{x}}. \tag{10}$$
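A minimal sketch of update (10) on synthetic data (the true $\theta = 1$, $N = 100$, and the starting point are arbitrary choices, not from the original):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true, N = 1.0, 100
x = np.exp(theta_true) + rng.standard_normal(N)   # x[n] = e^theta + w[n]
xbar = x.mean()

theta = 2.0   # starting guess; Newton-Raphson is only locally convergent,
              # and this iteration can diverge if 2*exp(theta_0) < xbar
for _ in range(50):
    step = (xbar - np.exp(theta)) / (2.0 * np.exp(theta) - xbar)  # update (10)
    theta += step
    if abs(step) < 1e-12:
        break
print(theta, np.log(xbar))   # the iteration converges to log(xbar)
```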
From the original minimization of $J$, after dropping the $\theta$-independent term $\sum_n x^2[n]$:
$$\hat\theta = \arg\min_\theta h(\theta) = \arg\min_\theta\; \left(N e^{2\theta} - 2N\bar{x}\, e^{\theta}\right). \tag{11}$$
Consider the transformation $\alpha = g(\theta) = e^\theta$. Note that this is a one-to-one transformation. Therefore,
$$\hat\alpha = \arg\min_\alpha h(g^{-1}(\alpha)) = \arg\min_\alpha h(\log\alpha) \tag{12}$$
$$= \arg\min_\alpha\; \left(N\alpha^2 - 2N\bar{x}\,\alpha\right), \tag{13}$$
which is the ordinary least squares problem with $H = \mathbf{1}_N$, so
$$\hat\alpha = \bar{x}. \tag{14}$$
From problem 2, since $\hat\alpha$ minimizes (12), we get $\hat\theta = g^{-1}(\hat\alpha) = \log(\bar{x})$ (assuming $\bar{x} > 0$, which holds for large $N$ since $e^\theta > 0$). This is the analytical solution for this problem.
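As a final check (again on made-up synthetic data), a brute-force grid minimization of (11) agrees with the closed form $\hat\theta = \log(\bar{x})$:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.exp(1.0) + rng.standard_normal(100)   # x[n] = e^theta + w[n], theta = 1
xbar, N = x.mean(), len(x)

# J(theta) on a grid, expanded as in (11) (constant term sum(x^2) dropped)
thetas = np.linspace(-2.0, 4.0, 600_001)
J = N * np.exp(2 * thetas) - 2 * N * xbar * np.exp(thetas)
print(thetas[np.argmin(J)], np.log(xbar))    # both ~ log(xbar)
```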
