PAGE 4.0
MODULE 4: THE WIENER FILTER

- The Wiener filter addresses the additive-noise MMSE signal estimation problem.

ASSUMPTIONS:
- There is a "desirable" statistical signal and an "undesirable" random noise.
- Both are modeled as stochastic processes.
- The autocorrelations, power spectral densities, cross-correlation, and cross-PSD are known.

GIVEN: The sum of the signal and noise.
FIND: The best estimate of the signal.

FIGURE OF MERIT: The best estimate of the signal is the one that minimizes the mean (expected value) of the squared error.

Note: If the signal and noise are spectrally disjoint, then they can be separated perfectly. The interesting case is when the signal and noise have overlapping spectra. In this case, no linear filter can separate them perfectly.

- Thus, given that no linear filter can perform a perfect separation, we ask: "What filter will do the best job in the MMSE sense?"

Symbology:
- $x(t)$: the desired signal.
- $n(t)$: the corrupting noise.
- $z(t) = x(t) + n(t)$: the given signal.
- $y(t) = \hat{x}(t)$: our estimate of the signal $x(t)$.

Goal:
    $z(t) = x(t) + n(t)$ → [ $G$ ] → $y(t) = \hat{x}(t)$
- Design the filter $G$ so that $\hat{x}(t)$ is the optimal estimate in the MMSE sense.
- Since we seek to optimize the squared error, we must first find an expression for it:
$$Z(s) = X(s) + N(s), \qquad Y(s) = G(s)Z(s) = G(s)[X(s) + N(s)].$$

PAGE 4.2
- The error in the estimate is given by $e(t) = x(t) - \hat{x}(t)$. So
$$E(s) = X(s) - \hat{X}(s) = X(s) - G(s)[X(s) + N(s)] = [1 - G(s)]X(s) - G(s)N(s),$$
where $[1 - G(s)]X(s)$ is the error due to filtering the signal and $G(s)N(s)$ is the error due to the noise getting through.
- Until otherwise stated, we will assume that $x(t)$ and $n(t)$ are WSS and zero mean. Then $z(t)$ and $\hat{x}(t)$ are also WSS.

More symbols:
- $S_x(s) = \mathcal{L}[R_x(\tau)]$ = PSD of $x(t)$.
- $S_n(s) = \mathcal{L}[R_n(\tau)]$ = PSD of $n(t)$.

Note: For WSS processes, the zero-mean assumption is not too restrictive. Each process has a constant mean. If the mean is known, it can be easily subtracted.

PAGE 4.3
- Now for the mean squared error. It will take some work...
$$R_e(\tau) = E\{e(t)e(t+\tau)\} = E\{[x(t)-\hat{x}(t)][x(t+\tau)-\hat{x}(t+\tau)]\} = R_x(\tau) - R_{x\hat{x}}(\tau) - R_{\hat{x}x}(\tau) + R_{\hat{x}}(\tau)$$
$$\Rightarrow\; S_e(s) = S_x(s) - S_{x\hat{x}}(s) - S_{\hat{x}x}(s) + S_{\hat{x}}(s). \qquad (*)$$
$$R_z(\tau) = E\{z(t)z(t+\tau)\} = E\{[x(t)+n(t)][x(t+\tau)+n(t+\tau)]\} = R_x(\tau) + R_{xn}(\tau) + R_{nx}(\tau) + R_n(\tau)$$
$$\Rightarrow\; S_z(s) = S_x(s) + S_{xn}(s) + S_{nx}(s) + S_n(s).$$
- Now, using the Wiener-Khinchine relation from page 3.8,
$$S_{\hat{x}}(s) = G(s)G(-s)S_z(s) = G(s)G(-s)[S_x(s) + S_{xn}(s) + S_{nx}(s) + S_n(s)]. \qquad (**)$$

PAGE 4.4
$$R_{x\hat{x}}(\tau) = E\{x(t)\hat{x}(t+\tau)\} = E\left\{x(t)\int_{-\infty}^{\infty}[x(t+\tau-\theta) + n(t+\tau-\theta)]\,g(\theta)\,d\theta\right\}$$
$$= \int_{-\infty}^{\infty} R_x(\tau-\theta)g(\theta)\,d\theta + \int_{-\infty}^{\infty} R_{xn}(\tau-\theta)g(\theta)\,d\theta = R_x(\tau)*g(\tau) + R_{xn}(\tau)*g(\tau)$$
$$\Rightarrow\; S_{x\hat{x}}(s) = S_x(s)G(s) + S_{xn}(s)G(s) = G(s)[S_x(s) + S_{xn}(s)]. \qquad (\dagger)$$
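Before deriving the remaining cross-spectrum, here is a quick numerical sanity check of (**): pass a WSS input through an arbitrary LSI filter and compare the estimated output PSD against $|G(j\omega)|^2 S_z(\omega)$. This is a minimal sketch; the signal model, filter, and all parameter values are illustrative and not from the notes.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
n = 2**18
# A "desired" signal shaped by an AR(1) filter, plus white noise
x = signal.lfilter([1.0], [1.0, -0.9], rng.standard_normal(n))
z = x + 0.5 * rng.standard_normal(n)       # observed signal z = x + n
b = signal.firwin(31, 0.2)                 # an arbitrary LSI filter g
xhat = signal.lfilter(b, [1.0], z)         # filter output x-hat

f, S_z = signal.welch(z, nperseg=4096)         # estimated input PSD
_, S_xhat = signal.welch(xhat, nperseg=4096)   # estimated output PSD
_, G = signal.freqz(b, worN=f, fs=1.0)         # filter response on the same grid

# (**) predicts S_xhat ~= |G|^2 * S_z, up to spectral-estimation error
err = np.max(np.abs(S_xhat - np.abs(G)**2 * S_z)) / np.max(S_xhat)
print(f"max relative deviation: {err:.3f}")    # should be small (a few percent)
```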
PAGE 4.5
$$R_{\hat{x}x}(\tau) = E\{\hat{x}(t)x(t+\tau)\} = E\left\{\left[\int_{-\infty}^{\infty}[x(t-\theta)+n(t-\theta)]\,g(\theta)\,d\theta\right]x(t+\tau)\right\}$$
$$= \int_{-\infty}^{\infty} E[x(t-\theta)x(t+\tau)]\,g(\theta)\,d\theta + \int_{-\infty}^{\infty} E[n(t-\theta)x(t+\tau)]\,g(\theta)\,d\theta = \int_{-\infty}^{\infty} R_x(\tau+\theta)g(\theta)\,d\theta + \int_{-\infty}^{\infty} R_{nx}(\tau+\theta)g(\theta)\,d\theta$$
$$= R_x(\tau)*g(-\tau) + R_{nx}(\tau)*g(-\tau)$$
$$\Rightarrow\; S_{\hat{x}x}(s) = S_x(s)G(-s) + S_{nx}(s)G(-s) = G(-s)[S_x(s) + S_{nx}(s)]. \qquad (\ddagger)$$

PAGE 4.6
- Plug into (*) on page 4.3 the following:
  - $S_{\hat{x}}(s)$ from (**) on page 4.3,
  - $S_{x\hat{x}}(s)$ from (†) on page 4.4,
  - $S_{\hat{x}x}(s)$ from (‡) on page 4.5:
$$\Rightarrow\; S_e(s) = S_x(s) - G(s)[S_x(s)+S_{xn}(s)] - G(-s)[S_x(s)+S_{nx}(s)] + G(s)G(-s)[S_x(s)+S_{xn}(s)+S_{nx}(s)+S_n(s)].$$
- In general, the MSE is given by
$$E[e^2(t)] = R_e(0) = \left.\frac{1}{2\pi j}\int_{-j\infty}^{j\infty} S_e(s)e^{s\tau}\,ds\right|_{\tau=0} = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} S_e(s)\,ds. \qquad (\#)$$

PAGE 4.7
NOTE: Recall our assumptions that $x(t)$ and $n(t)$ are WSS and zero mean.
- In the special case where $x(t)$ and $n(t)$ are uncorrelated (assumed often in practice), we have
$$R_{xn}(\tau) = 0 \Rightarrow S_{xn}(s) = 0, \qquad R_{nx}(\tau) = 0 \Rightarrow S_{nx}(s) = 0.$$
- In this case,
$$S_e(s) = S_x(s) - G(s)S_x(s) - G(-s)S_x(s) + G(s)G(-s)[S_x(s)+S_n(s)]$$
$$= [1 - G(s) - G(-s) + G(s)G(-s)]S_x(s) + G(s)G(-s)S_n(s) = [1-G(s)][1-G(-s)]S_x(s) + G(s)G(-s)S_n(s).$$

PAGE 4.8
- Plugging the last result on page 4.7 into (#) on page 4.6, we have
$$E[e^2(t)] = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}[1-G(s)][1-G(-s)]S_x(s)\,ds + \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}G(s)G(-s)S_n(s)\,ds,$$
for $x(t)$ and $n(t)$ uncorrelated, which is Eq. (4.2.5) on page 162 of the book.
- Early attempts to minimize the MSE went like this:
  1. Assume a parameterized form for $G(s)$.
  2. Plug into (#) on page 4.6, or into the uncorrelated-case expression above.
  3. Solve the integral for $E[e^2(t)]$ in terms of the parameter(s).
  4. Use standard calculus to find the parameter value(s) that minimize $E[e^2(t)]$.
NOTE: This procedure optimizes only over the parameterized class of filters assumed. See page 162 of the book for an example. (A numerical sketch of this parameterized search appears after the problem formulation below.)

PAGE 4.9
Wiener Filter Problem Formulation
    $z(t) = x(t) + n(t)$ → [ $G$ ] → $y(t) = \hat{x}(t+\alpha)$
- Input $z(t)$ is the sum of the desired signal $x(t)$ and a corrupting noise $n(t)$.
- $x(t)$ and $n(t)$ are covariance-stationary stochastic processes with known autocorrelations, cross-correlations, spectral densities, and cross power spectrum.
- $G$ is an LSI filter.
- Output $y(t)$ is covariance stationary (any transients have died out).

Problem: Find $G(s)$ to minimize $E\{e^2(t)\}$, where $e(t) = x(t+\alpha) - y(t) = x(t+\alpha) - \hat{x}(t+\alpha)$.

Notes:
- When $\alpha > 0$, $G$ tries to estimate future values of $x(t)$ based on the values of $z(t)$. This is the classical Prediction problem.
- When $\alpha = 0$, $G$ tries to estimate the current value of $x(t)$ from the values of $z(t)$. This is the classical Filtering problem.

PAGE 4.10
- When $\alpha < 0$, $G$ tries to estimate past values of $x(t)$ from the values of $z(t)$. This is the classical Smoothing problem.
- If the filter is allowed to consider all values of $z(t)$ in computing $y(t) = \hat{x}(t+\alpha)$, then it is not causal, i.e., it is a noncausal filter.
- If $G$ is only allowed to consider past and present values of $z(t)$ in computing $y(t) = \hat{x}(t+\alpha)$, then it is a causal filter.
- In either case (causal or noncausal), the $G(s)$ that minimizes $E[e^2(t)]$ is called a Wiener filter.
- The stationary problem can be formulated in the time domain or in the frequency domain. The frequency-domain formulation is elegant; however, it does not generalize to the nonstationary case. Therefore, we will formulate the problem in the time domain.
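Here is the promised numerical version of the parameterized search from page 4.8. It is a minimal sketch under assumed spectra: a first-order signal PSD $S_x(\omega) = 2a/(\omega^2+a^2)$, flat noise $S_n = N_0$ over the integration band, and the one-parameter causal lowpass class $G(j\omega) = c/(j\omega + c)$. All values are illustrative.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize_scalar

a, N0 = 1.0, 0.1
w = np.linspace(-200.0, 200.0, 200001)   # frequency grid for the MSE integral
S_x = 2.0 * a / (w**2 + a**2)            # assumed signal PSD
S_n = np.full_like(w, N0)                # assumed (band-limited) noise PSD

def mse(c):
    # uncorrelated-case MSE integrand: |1-G|^2 S_x + |G|^2 S_n
    G = c / (1j * w + c)
    integrand = np.abs(1.0 - G)**2 * S_x + np.abs(G)**2 * S_n
    return trapezoid(integrand, w) / (2.0 * np.pi)

res = minimize_scalar(mse, bounds=(0.01, 100.0), method="bounded")
print(f"best cutoff c = {res.x:.3f}, minimum MSE = {res.fun:.4f}")
```

Note that the search can only do as well as the best filter in the assumed one-parameter class, which is exactly the limitation flagged in the NOTE on page 4.8.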
PAGE 4.11
- The estimation error is given by $e(t) = x(t+\alpha) - \hat{x}(t+\alpha)$.
- The MSE is
$$E\{e^2(t)\} = E\{x^2(t+\alpha) - 2x(t+\alpha)\hat{x}(t+\alpha) + \hat{x}^2(t+\alpha)\}$$
$$= E\left\{x^2(t+\alpha) - 2x(t+\alpha)\int_{-\infty}^{\infty} z(t-\theta)g(\theta)\,d\theta + \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} z(t-\lambda)z(t-\theta)g(\lambda)g(\theta)\,d\lambda\,d\theta\right\}$$
$$= R_x(0) - 2\int_{-\infty}^{\infty} R_{zx}(\alpha+\theta)g(\theta)\,d\theta + \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} R_z(\lambda-\theta)g(\lambda)g(\theta)\,d\lambda\,d\theta. \qquad (\S)$$

PAGE 4.12
- The problem is to find the $g(t)$ that minimizes $E[e^2(t)]$ in (§) above.
- This is a problem in the calculus of variations.
- We are assuming that $g(t)$ is the impulse response of the optimal filter.
- For an arbitrary filter $H$ with impulse response $h(t)$, we may write $h(t) = g(t) + \varepsilon\eta(t)$, where
  - $\eta(t)$ is a perturbing function,
  - $\varepsilon$ is a perturbation scale factor.
- For the filter $H$, the MSE is
$$E[e^2(t)] = R_x(0) - 2\int R_{zx}(\alpha+\theta)[g(\theta)+\varepsilon\eta(\theta)]\,d\theta + \int\!\!\int R_z(\lambda-\theta)[g(\lambda)+\varepsilon\eta(\lambda)][g(\theta)+\varepsilon\eta(\theta)]\,d\lambda\,d\theta. \qquad (\spadesuit)$$
- In the limit as $\varepsilon \to 0$, $h(t) \to g(t)$ and the above becomes the MSE of the optimal filter $g(t)$.
- The MSE above is a function of $\varepsilon$ and $\eta(t)$. We can optimize with respect to $\varepsilon$ by differentiating with respect to $\varepsilon$ and setting the result equal to zero.

PAGE 4.13
- Since the optimum filter is obtained when $\varepsilon = 0$, we can obtain a constraint on $g(t)$ by evaluating the equation obtained in the previous step at $\varepsilon = 0$.
- With luck, this will give us a constraint that can be solved for $g(t)$. The solution must be valid for arbitrary choices of $\eta(t)$.
- Differentiate ($\spadesuit$) on page 4.12 with respect to $\varepsilon$ and set the result to zero:
$$\int\!\!\int R_z(\lambda-\theta)\eta(\lambda)g(\theta)\,d\lambda\,d\theta + \int\!\!\int R_z(\lambda-\theta)g(\lambda)\eta(\theta)\,d\lambda\,d\theta + 2\varepsilon\int\!\!\int R_z(\lambda-\theta)\eta(\lambda)\eta(\theta)\,d\lambda\,d\theta - 2\int R_{zx}(\alpha+\theta)\eta(\theta)\,d\theta = 0.$$

PAGE 4.14
- For the optimal filter $g(t)$, $\varepsilon = 0$; this becomes
$$0 = \int\!\!\int R_z(\lambda-\theta)\eta(\lambda)g(\theta)\,d\lambda\,d\theta + \int\!\!\int R_z(\lambda-\theta)g(\lambda)\eta(\theta)\,d\lambda\,d\theta - 2\int R_{zx}(\alpha+\theta)\eta(\theta)\,d\theta.$$
- In the first double integral let $\lambda = \tau$, $\theta = \vartheta$; in the second let $\theta = \tau$, $\lambda = \vartheta$:
$$0 = \int\!\!\int R_z(\tau-\vartheta)\eta(\tau)g(\vartheta)\,d\vartheta\,d\tau + \int\!\!\int R_z(\vartheta-\tau)g(\vartheta)\eta(\tau)\,d\vartheta\,d\tau - 2\int R_{zx}(\alpha+\tau)\eta(\tau)\,d\tau.$$

PAGE 4.15
- But $R_z(\tau) = R_z(-\tau)$, so the two double integrals are equal. Dividing both sides by 2:
$$\int\!\!\int R_z(\vartheta-\tau)g(\vartheta)\eta(\tau)\,d\vartheta\,d\tau - \int R_{zx}(\alpha+\tau)\eta(\tau)\,d\tau = 0$$
$$\Rightarrow\; \int_{-\infty}^{\infty}\eta(\tau)\left[-R_{zx}(\alpha+\tau) + \int_{-\infty}^{\infty} R_z(\vartheta-\tau)g(\vartheta)\,d\vartheta\right]d\tau = 0. \qquad (\heartsuit)$$
- This is Eq. (4.3.7) on page 165 of the book.
- The noncausal and causal Wiener filters are considered separately for the remainder of the solution.

Noncausal Solution
- Since ($\heartsuit$) above must hold for arbitrary $\eta(t)$, we have
$$-R_{zx}(\alpha+\tau) + \int_{-\infty}^{\infty} R_z(\vartheta-\tau)g(\vartheta)\,d\vartheta = 0 \;\Rightarrow\; \int_{-\infty}^{\infty} R_z(\vartheta-\tau)g(\vartheta)\,d\vartheta = R_{zx}(\alpha+\tau).$$

PAGE 4.16
- Since $R_z(\tau) = R_z(-\tau)$, this becomes
$$\int_{-\infty}^{\infty} R_z(\tau-\vartheta)g(\vartheta)\,d\vartheta = R_{zx}(\alpha+\tau), \quad\text{i.e.,}\quad R_z(\tau)*g(\tau) = R_{zx}(\tau+\alpha)$$
$$\Rightarrow\; S_z(s)G(s) = S_{zx}(s)e^{\alpha s} \;\Rightarrow\; G(s) = \frac{S_{zx}(s)\,e^{\alpha s}}{S_z(s)}.$$
- This is the transfer function of the optimal (in the mean-squared-error sense) noncausal filter. This filter is called the noncausal continuous-time Wiener filter.
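As a sketch of this result in action: with $x$ and $n$ uncorrelated and $\alpha = 0$, the noncausal solution reduces to $G(\omega) = S_x(\omega)/[S_x(\omega)+S_n(\omega)]$, which can be applied block-wise in the frequency domain. In this illustration the spectra are estimated from data rather than assumed known, and block-edge (circular convolution) effects are ignored for brevity; all parameters are illustrative.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
n = 2**16
x = signal.lfilter([1.0], [1.0, -0.95], rng.standard_normal(n))  # desired signal
noise = rng.standard_normal(n)                                   # corrupting noise
z = x + noise                                                    # observations

nper = 1024
f, S_x = signal.welch(x, nperseg=nper, return_onesided=False)
_, S_n = signal.welch(noise, nperseg=nper, return_onesided=False)
G = S_x / (S_x + S_n)          # noncausal Wiener gain, 0 <= G <= 1, zero phase

# Apply per block via FFT (edge effects ignored in this sketch)
xhat = np.zeros_like(z)
for k in range(0, n, nper):
    xhat[k:k+nper] = np.fft.ifft(G * np.fft.fft(z[k:k+nper])).real

print("MSE before:", np.mean((z - x)**2), " after:", np.mean((xhat - x)**2))
```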
PAGE 4.17
- Note: This is an LSI filter.
- Note: By choosing $\alpha > 0$, $\alpha = 0$, or $\alpha < 0$, the same solution covers the prediction, filtering, and smoothing problems.
- The resulting MSE is the lowest MSE that can be achieved by any LSI filter, noncausal or causal.
- Although the optimal filter will generally be noncausal, it may sometimes turn out to be causal.

PAGE 4.18
Causal Solution
- We now return to Eq. ($\heartsuit$) on page 4.15 and solve for the optimal causal LSI filter.
- Since the class of causal LSI filters is a subset of the class of noncausal LSI filters, the optimal causal filter cannot be better than the optimal noncausal filter.
- For the optimal $g(t)$, we have again that
$$\int_{-\infty}^{\infty}\eta(\tau)\left[-R_{zx}(\alpha+\tau) + \int_0^{\infty} R_z(\vartheta-\tau)g(\vartheta)\,d\vartheta\right]d\tau = 0. \qquad (\heartsuit\heartsuit)$$
- In this case, $g(t)$ and $h(t) = g(t) + \varepsilon\eta(t)$ are both causal. Therefore $\eta(t)$ must also be causal: $\eta(\tau) = 0$ for $\tau < 0$.
- So ($\heartsuit\heartsuit$) above is satisfied for $\tau < 0$ independent of $g(t)$.

PAGE 4.19
- The constraint on $g(t)$ then becomes
$$\int_0^{\infty} R_z(\vartheta-\tau)g(\vartheta)\,d\vartheta - R_{zx}(\alpha+\tau) = 0, \qquad \tau \geq 0. \qquad (\dagger\dagger)$$
- This is the famous "Wiener-Hopf" equation.
- The classic solution to the Wiener-Hopf equation makes use of spectral factorization.
- The left side of (††) must be zero for $\tau \geq 0$. For $\tau < 0$, however, it is unknown and may be nonzero.
- So we rewrite (††) as
$$\int_0^{\infty} R_z(\vartheta-\tau)g(\vartheta)\,d\vartheta - R_{zx}(\alpha+\tau) = a(\tau), \qquad\text{where } a(\tau) = \begin{cases}\text{unknown}, & \tau < 0,\\ 0, & \tau \geq 0.\end{cases}$$
- Applying the Laplace transform, we have
$$G(s)S_z(s) - S_{zx}(s)e^{\alpha s} = A(s) \;\Rightarrow\; G(s)S_z(s) = A(s) + S_{zx}(s)e^{\alpha s}.$$
- Now we spectrally factorize to obtain $S_z(s) = S_z^+(s)\,S_z^-(s)$, where $S_z^+(s)$ has left half-plane poles only and $S_z^-(s)$ has right half-plane poles only.

PAGE 4.20
- This gives
$$G(s)S_z^+(s)S_z^-(s) = A(s) + S_{zx}(s)e^{\alpha s},$$
where
  - $G(s)$: left half-plane poles → transform of a causal signal,
  - $S_z^+(s)$: left half-plane poles → transform of a causal signal,
  - $S_z^-(s)$: right half-plane poles → transform of an anticausal signal,
  - $A(s)$: right half-plane poles → transform of an anticausal signal,
  - $S_{zx}(s)e^{\alpha s}$: may have poles in both half-planes → transform of a signal that is not restricted to be causal or anticausal.
- Dividing through by $S_z^-(s)$, we obtain
$$G(s)S_z^+(s) = \frac{A(s)}{S_z^-(s)} + \frac{S_{zx}(s)e^{\alpha s}}{S_z^-(s)}, \qquad (\clubsuit\clubsuit)$$
where the left side is the transform of a causal signal, $A(s)/S_z^-(s)$ is the transform of an anticausal signal, and $S_{zx}(s)e^{\alpha s}/S_z^-(s)$ is the transform of a two-sided signal.

PAGE 4.21
- For equality to hold in (♣♣) above, $A(s)/S_z^-(s)$ and the anticausal part of $S_{zx}(s)e^{\alpha s}/S_z^-(s)$ must cancel each other for $\tau < 0$.
- For $\tau \geq 0$, $\mathcal{L}^{-1}\{A(s)/S_z^-(s)\} = 0$. Thus, for $\tau \geq 0$,
$$\mathcal{L}^{-1}\{G(s)S_z^+(s)\} = \mathcal{L}^{-1}\left\{\frac{S_{zx}(s)e^{\alpha s}}{S_z^-(s)}\right\}, \qquad \tau \geq 0.$$
- In other words,
$$\mathcal{L}^{-1}\{G(s)S_z^+(s)\} = \mathcal{L}^{-1}\left\{\frac{S_{zx}(s)e^{\alpha s}}{S_z^-(s)}\right\}u(\tau).$$
- Now take the transform of both sides and solve for $G(s)$:
$$G(s)S_z^+(s) = \mathcal{L}\left\{\mathcal{L}^{-1}\left[\frac{S_{zx}(s)e^{\alpha s}}{S_z^-(s)}\right]u(t)\right\} \;\Rightarrow\; G(s) = \frac{1}{S_z^+(s)}\,\mathcal{L}\left\{\mathcal{L}^{-1}\left[\frac{S_{zx}(s)e^{\alpha s}}{S_z^-(s)}\right]u(t)\right\}.$$
- In words:
  1. Compute $S_{zx}(s)e^{\alpha s}/S_z^-(s)$.
  2. Take the inverse transform of the result from step (1) and multiply by $u(t)$.
  3. Take the transform of the result of step (2).
  4. Divide the result of step (3) by $S_z^+(s)$ to obtain $G(s)$.

PAGE 4.22
- Examples are given in the book on pages 169-172. (A numerical check of one closed-form instance of this recipe appears below, after the nonstationary case.)

The Nonstationary Case
- We will not cover this in detail.
- As before, $x(t)$ and $n(t)$ are covariance-stationary stochastic processes with known autocorrelations, power spectral densities, cross-correlation, and cross power spectrum.
- In this case, the input $z(t) = [x(t)+n(t)]u(t)$ is applied at time $t = 0$:
    $z(t) = [x(t)+n(t)]u(t)$ → [ $h(t,\tau)$ ] → $y(t) = \hat{x}(t+\alpha)$
- A specialized variational approach for finding $h(t,\tau)$ is given in the book.
- The solution is a causal, time-varying linear filter.
- This leads to a treatment of optimal estimation in terms of the so-called "innovations process."
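Here is the promised check of one closed-form instance of the four-step recipe. This is an assumed example, not taken from the notes: let $R_x(\tau) = e^{-a|\tau|}$ (so $S_x(s) = 2a/(a^2 - s^2)$), let $n$ be white with PSD $N$, uncorrelated with $x$, and take $\alpha = 0$. Spectral factorization then gives $b = \sqrt{a^2 + 2a/N}$, and the recipe yields the causal Wiener filter $g(t) = K e^{-bt}u(t)$ with $K = 2a/[N(a+b)]$. The sketch verifies that this $g$ satisfies the Wiener-Hopf equation (††) numerically, with $R_z(\tau) = e^{-a|\tau|} + N\delta(\tau)$ and $R_{zx}(\tau) = e^{-a|\tau|}$.

```python
import numpy as np
from scipy.integrate import trapezoid

a, N = 1.0, 1.0
b = np.sqrt(a**2 + 2.0 * a / N)          # from spectral factorization of S_z
K = 2.0 * a / (N * (a + b))              # gain of the causal Wiener filter

theta = np.linspace(0.0, 60.0, 600001)   # integration grid over theta >= 0
g = K * np.exp(-b * theta)               # g(theta) = K e^{-b theta} u(theta)

# Check: integral of R_z(theta - tau) g(theta) dtheta  ==  R_zx(tau), tau >= 0.
# The N*delta term in R_z contributes N*g(tau) to the left side.
for tau in np.arange(0.0, 6.1, 1.0):
    lhs = trapezoid(np.exp(-a * np.abs(theta - tau)) * g, theta) + N * g[0] * np.exp(-b * tau)
    print(f"tau={tau:.0f}: lhs={lhs:.4f}, R_zx(tau)={np.exp(-a * tau):.4f}")
```

The two columns agree, confirming that the factorization recipe produced a $g(t)$ satisfying (††) for all $\tau \geq 0$.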
PAGE 4.23
Orthogonality Principle
- If $X$ and $Y$ are two RVs such that $E[XY] = 0$, then $X$ and $Y$ are called orthogonal.
- If $X$ and $Y$ are zero-mean, then orthogonality is equivalent to uncorrelatedness.
- If $X$ and $Y$ are orthogonal, we write $X \perp Y$.
- Suppose $X$ is a random vector to be estimated. Thus $X$ can be a single RV, a vector of finite dimension, or a vector of infinite dimension.
- Let $Z$ be an RV, random vector, or stochastic process containing observations that are related to $X$ in some way. The indexing set could be finite, countably infinite, or uncountably infinite.
- A general function of the observations is denoted $h(Z)$; $h$ is an arbitrary function and could be nonlinear.

PAGE 4.24
- Let the optimal MMSE estimator be $\hat{X} = g(Z)$.
- The error in the optimal estimate is given by $e = X - \hat{X} = X - g(Z)$.
- The orthogonality principle states that the error in the optimal estimate is orthogonal to any function $h(Z)$ of the observations.
- NOTE: If $h(\cdot)$ and $g(\cdot)$ come from a restricted class, linear functions for example, then the principle states that the error in the optimal MMSE estimator from the class is orthogonal to all functions in the class.
- We will prove this for unit-dimensional $X$ and $Z$ (scalar RVs). The proof can easily be extended to finite-dimensional vectors on a component-wise basis.

PAGE 4.25
Theorem (orthogonality principle): A function $g(Z)$ of the observations is the MMSE estimator of $X$, i.e., $g(Z) = \hat{X}$ (statement B), if and only if
$$E\{h(Z)[X - g(Z)]\} = 0 \quad \forall\, h(Z) \qquad \text{(statement A).}$$

Proof:
A) Sufficiency (A ⇒ B): For a particular $g(Z)$, suppose $E\{h(Z)[X-g(Z)]\} = 0$ for all $h(Z)$. Then
$$E\{[X-h(Z)]^2\} = E\{[X-g(Z)+g(Z)-h(Z)]^2\} = E\{[X-g(Z)]^2\} + 2E\{[g(Z)-h(Z)][X-g(Z)]\} + E\{[g(Z)-h(Z)]^2\}.$$

PAGE 4.26
Note that $[g(Z)-h(Z)]$ is a function of the observations. Then by hypothesis $E\{[g(Z)-h(Z)][X-g(Z)]\} = 0$. We have
$$E\{[X-h(Z)]^2\} = E\{[X-g(Z)]^2\} + E\{[g(Z)-h(Z)]^2\} \geq E\{[X-g(Z)]^2\},$$
since $E\{[g(Z)-h(Z)]^2\} \geq 0$. Therefore the MSE of $g(Z)$ is less than or equal to the MSE of $h(Z)$ for all $h(Z)$. Then $g(Z) = \hat{X}$, the optimal MMSE estimator.

B) Necessity (B ⇒ A): The proof will be by contrapositive. That is, if we show NOT(A) ⇒ NOT(B), then we have also established that (B) ⇒ (A). In this case, we will show that if any $h(Z)$ is not orthogonal to $X - g(Z)$, then $g(Z)$ is not the MMSE estimator.

PAGE 4.27
Suppose an $h(Z)$ exists such that $E\{h(Z)[X-g(Z)]\} = \varepsilon \neq 0$. Assume without loss of generality that $E\{h^2(Z)\} = 1$, since if this is not true, appropriate scaling can be applied to $h(Z)$ without changing the fact that $E\{h(Z)[X-g(Z)]\} \neq 0$. Then
$$E\{[X - g(Z) - \varepsilon h(Z)]^2\} = E\{[X-g(Z)]^2\} - 2\varepsilon E\{h(Z)[X-g(Z)]\} + \varepsilon^2 E\{h^2(Z)\} = E\{[X-g(Z)]^2\} - 2\varepsilon^2 + \varepsilon^2 = E\{[X-g(Z)]^2\} - \varepsilon^2 < E\{[X-g(Z)]^2\}.$$
So the estimator $g(Z) + \varepsilon h(Z)$ has smaller MSE, and $g(Z)$ is not the optimal estimator $\hat{X}$. QED.
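A Monte Carlo illustration of the theorem (the setup is illustrative, not from the notes): for jointly Gaussian zero-mean $X$ and $Z$, the MMSE estimator is the conditional mean $g(Z) = [\mathrm{cov}(X,Z)/\mathrm{var}(Z)]\,Z$. Its error should be orthogonal to every function of $Z$, while a deliberately suboptimal estimator fails the test.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 10**6
z = rng.standard_normal(m)
x = 0.8 * z + 0.6 * rng.standard_normal(m)   # cov(X,Z) = 0.8, var(Z) = 1

g_opt = 0.8 * z      # MMSE estimator (conditional mean in the Gaussian case)
g_bad = 0.5 * z      # deliberately suboptimal linear estimator

for h, name in ((z, "z"), (z**3, "z^3"), (np.sin(z), "sin(z)")):
    print(f"h(Z)={name:7s} E[h(Z)e_opt]={np.mean(h*(x - g_opt)):+.4f}  "
          f"E[h(Z)e_bad]={np.mean(h*(x - g_bad)):+.4f}")
# left column ~ 0 for every h(Z); right column is clearly nonzero
```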
PAGE 4.28
The Discrete Case
- Suppose we have a vector of observations $Z_1, \ldots, Z_N$ from which we want to estimate a related sequence $\{Y_k\}$.
- We consider linear estimators for $Y_k$ of the form
$$\hat{Y}_k = \sum_{i=1}^{N} h_{k,i} Z_i, \qquad (\lozenge)$$
where a different set of $N$ $h$'s is used for each $Y_k$.
- Assume without loss of generality that all the $Y$'s and $Z$'s are zero mean.
- By the orthogonality principle, the optimal estimator satisfies $E\{[Y_k - \hat{Y}_k]Z_j\} = 0$ for all $j \in [1,N]$.
- But $E\{[Y_k - \hat{Y}_k]Z_j\} = E\{Y_k Z_j\} - E\{\hat{Y}_k Z_j\}$, so
$$E\{Y_k Z_j\} = E\{\hat{Y}_k Z_j\}, \quad \forall\, j \in [1,N]. \qquad (\lozenge\lozenge)$$

PAGE 4.29
- Substituting ($\lozenge$) into ($\lozenge\lozenge$) above, we have
$$E\{Y_k Z_j\} = E\left\{\sum_{i=1}^{N} h_{k,i} Z_i Z_j\right\} = \sum_{i=1}^{N} h_{k,i} E\{Z_i Z_j\}, \quad j \in [1,N].$$
- In other words,
$$R_{YZ}(k,j) = \sum_{i=1}^{N} h_{k,i} R_Z(i,j), \quad \forall\, j \in [1,N].$$
- For each $Y_k$, this gives a set of $N$ equations ($j \in [1,N]$) that can be solved for the $N$ filter coefficients $h_{k,i}$, $i \in [1,N]$, of the optimal estimator for $Y_k$.
- If the processes $Y = \{Y_k\}$ and $Z = \{Z_j\}$ are jointly WSS, this becomes
$$R_{YZ}(j-k) = \sum_{i=1}^{N} h_{k,i} R_Z(j-i), \quad j \in [1,N].$$

PAGE 4.30
- Then, letting $m = k - i$ (so $i = k - m$) and using the symmetry of the correlation functions,
$$R_{ZY}(k-j) = \sum_{m=k-N}^{k-1} h_{k,k-m} R_Z(k-m-j), \quad j \in [1,N].$$
- Let $\ell = k - j$:
$$R_{ZY}(\ell) = \sum_{m=k-N}^{k-1} h_{k,k-m} R_Z(\ell-m), \quad \ell \in [k-N,\, k-1]. \qquad (\star)$$
- For each $k$, this gives a set of $N$ linear equations that can be solved for the optimal weights $h_{k,k-m}$, $m \in [k-N,\, k-1]$.
- If the sequence $Y$ and the observations $Z$ are countably infinite in extent, then ($\star$) above becomes
$$R_{ZY}(\ell) = \sum_{m=-\infty}^{\infty} h_{k,k-m} R_Z(\ell-m), \quad \ell \in \mathbb{Z}. \qquad (\star\star)$$
- The left side of ($\star\star$) does not depend on $k$. Therefore the right side cannot depend on $k$ either; i.e., the $h_{k,k-m}$ are independent of $k$: they are the same for each $k$.

PAGE 4.31
- Define $h_m = h_{k,k-m}$. Then ($\star\star$) on page 4.30 becomes
$$R_{ZY}(\ell) = \sum_{m=-\infty}^{\infty} h_m R_Z(\ell-m) = (h * R_Z)(\ell), \quad \ell \in \mathbb{Z}.$$
- This is an infinite set of equations in the optimal filter weights $\{h_m\}$; thus we cannot solve for the $h_m$ directly.
- However, we can use the Z-transform: $S_{ZY}(z) = H(z)S_Z(z)$.
- The optimal (noncausal) LSI estimator is then the filter with transfer function
$$H(z) = \frac{S_{ZY}(z)}{S_Z(z)}$$
(analogous to the continuous-time case).
- If we restrict the optimization to causal LSI filters only, then the equation above becomes
$$R_{ZY}(\ell) = \sum_{m=0}^{\infty} h_m R_Z(\ell-m), \quad \ell \geq 0.$$
- This is the discrete Wiener-Hopf equation. It can be solved by z-domain spectral factorization techniques that are completely analogous to the continuous-time case.
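A minimal sketch of the finite-$N$ case above: solve the normal equations $\sum_i h_i R_Z(j-i) = R_{ZY}(j)$ using the Toeplitz structure that appears in the jointly-WSS setting. The example (AR(1) signal in white noise, $Y_k = X_k$, i.e., the filtering problem, with sample-estimated correlations) is illustrative, not from the notes.

```python
import numpy as np
from scipy import signal
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(3)
m, N = 200000, 16
x = signal.lfilter([1.0], [1.0, -0.9], rng.standard_normal(m))  # AR(1) signal
z = x + rng.standard_normal(m)                                  # observations

def xcorr(u, v, nlags):
    # biased sample estimate of E{u_k v_{k+l}} for l = 0..nlags-1
    return np.array([np.mean(u[:m - l] * v[l:]) for l in range(nlags)])

r_z = xcorr(z, z, N)    # first column of the symmetric Toeplitz matrix R_Z
r_zy = xcorr(z, x, N)   # right-hand side R_ZY(l), here Y_k = X_k

h = solve_toeplitz(r_z, r_zy)          # N optimal FIR weights in O(N^2)
xhat = np.convolve(z, h)[:m]           # Y-hat_k = sum_i h_i z_{k-i}

print("MSE of raw z:", np.mean((z - x)**2),
      " MSE of Wiener FIR:", np.mean((xhat - x)**2))
```

The Toeplitz solve is exactly the per-$k$ system ($\star$) specialized to a causal FIR window; letting $N \to \infty$ recovers the discrete Wiener-Hopf equation above.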
