CHAPTER 2

Wiener Filters

With the material on stationary stochastic processes at hand, we are ready to develop the theory of a class of linear optimum discrete-time filters known collectively as Wiener filters. The final details of the filter specification depend on two choices that have to be made:

1. Whether the impulse response of the filter has finite or infinite duration, that is, a finite-duration impulse response (FIR) or an infinite-duration impulse response (IIR) filter.
2. The type of statistical criterion used for the optimization.

The theory developed in this chapter is largely that for FIR filters; we do so because an FIR filter is inherently stable and mathematically tractable.

2.1 LINEAR OPTIMUM FILTERING: STATEMENT OF THE PROBLEM

Consider the block diagram of Fig. 2.1, built around a linear discrete-time filter. The filter input consists of a time series u(0), u(1), u(2), ..., and the filter is characterized by the impulse response w_0, w_1, w_2, .... At some discrete time n, the filter produces an output that is used as an estimate of a desired response d(n). With the filter input and the desired response representing single realizations of jointly wide-sense stationary stochastic processes, the estimation is ordinarily accompanied by an error with statistical characteristics of its own. In particular, the estimation error e(n), defined as the difference between the desired response d(n) and the filter output y(n),

e(n) = d(n) - y(n),   (2.1)

is a sample value of a random variable. The requirement is to make the estimation error e(n) "as small as possible" in some statistical sense. Two restrictions have so far been placed on the filter: (1) the filter is linear, which makes the mathematical analysis less demanding, and (2) the filter operates in discrete time, which makes it possible to implement it in hardware or software.

To optimize the filter design, we choose to minimize the mean-square value of the estimation error e(n); we thus define the cost function as the mean-square error

J = E[e(n) e*(n)] = E[|e(n)|^2],   (2.2)

where E denotes the statistical expectation operator. Among the wider range of possibilities (e.g., the expectation of the absolute value of the error, or of higher powers of it), the mean-square error is chosen because it leads to tractable mathematics. To develop the mathematical solution of the optimization problem, there are two fundamentally different approaches that are complementary. One approach leads to an important theorem commonly known as the principle of orthogonality; the other approach first describes the second-order dependence of the cost function J on the filter coefficients and is taken up in Section 2.5. We proceed by deriving the principle of orthogonality first, because the derivation is relatively simple and the principle is highly insightful.

2.2 PRINCIPLE OF ORTHOGONALITY

Consider again the filtering problem described in Fig. 2.1. The purpose of the filter is to produce an estimate of the desired response d(n). We assume that the filter input and the desired response are single realizations of jointly wide-sense stationary stochastic processes, both with zero mean. (If the means are nonzero, a preprocessing stage removes them; see the remarks on preprocessing in this section.) The filter output is defined by the linear convolution sum

d^(n|U_n) = sum_{k=0}^{infinity} w_k* u(n-k),   (2.3)

where U_n denotes the space spanned by the filter inputs u(n), u(n-1), .... The estimation error is

e(n) = d(n) - d^(n|U_n).   (2.4)

According to Eq. (2.2), the cost function J is a scalar that is independent of time n. Applying the gradient operator to the cost function, the kth element of the gradient vector is

grad_k J = -2 E[u(n-k) e*(n)].   (2.5)

The necessary and sufficient condition for the cost function J to attain its minimum value is for the corresponding value of the estimation error e_o(n) to be orthogonal to each input sample that enters into the estimation of the desired response at time n; that is,

E[u(n-k) e_o*(n)] = 0, k = 0, 1, 2, ....   (2.6)

This statement constitutes the principle of orthogonality. Under this set of conditions, the filter is said to be optimum in the mean-square-error sense.

Corollary to the Principle of Orthogonality. With d^_o(n|U_n) denoting the optimum estimate of the desired response and e_o(n) denoting the corresponding estimation error, it follows from the principle of orthogonality described by Eq. (2.6) that

E[d^_o(n|U_n) e_o*(n)] = 0;   (2.7)

that is, the estimate of the desired response and the estimation error are orthogonal to each other. A geometric interpretation of the corollary is illustrated in Fig. 2.2: in the optimum condition, the vector representing the estimation error is normal (perpendicular) to the vector representing the filter output, which is the reason for the term "principle of orthogonality." The analogy with vector geometry is discussed in Appendix B.
2.3 MINIMUM MEAN-SQUARE ERROR

When the filter operates in its optimum condition, we may write d(n) = d^_o(n|U_n) + e_o(n). Evaluating the mean-square values of both sides of this equation and invoking the corollary to the principle of orthogonality, we get

sigma_d^2 = sigma_d^^2 + J_min,

where sigma_d^2 is the variance of the desired response and sigma_d^^2 is the variance of the optimum estimate; both of these random variables are assumed to be of zero mean. Solving for the minimum mean-square error, we obtain

J_min = sigma_d^2 - sigma_d^^2.   (2.17)

Dividing both sides of this relation by sigma_d^2 (assumed nonzero), we define the normalized mean-square error epsilon = J_min / sigma_d^2. We note that (1) the ratio epsilon can never be negative and (2) it always lies between zero and one:

0 <= epsilon <= 1.

If epsilon is zero, the optimum filter operates perfectly, in the sense that there is complete agreement between the estimate and the desired response; if epsilon is unity, there is no agreement whatsoever between them. These two cases correspond to the best and worst possible filter performance, respectively.

2.4 WIENER-HOPF EQUATIONS

Combining the defining equation (2.3) of the filter output with the optimality condition (2.6) and rearranging terms, we obtain

sum_{i=0}^{infinity} w_oi E[u(n-k) u*(n-i)] = E[u(n-k) d*(n)], k = 0, 1, 2, ....   (2.24)

The two expectations in Eq. (2.24) may be interpreted as follows:

1. The expectation E[u(n-k)u*(n-i)] is equal to the autocorrelation function of the filter input for a lag of i - k; that is, E[u(n-k)u*(n-i)] = r(i-k).
2. The expectation E[u(n-k)d*(n)] is equal to the cross-correlation between the filter input u(n-k) and the desired response d(n) for a lag of -k; that is, E[u(n-k)d*(n)] = p(-k).   (2.25)

We may thus rewrite Eq. (2.24) as

sum_{i=0}^{infinity} w_oi r(i-k) = p(-k), k = 0, 1, 2, ....   (2.26)

These equations are called the Wiener-Hopf equations; the linear optimum filter was formulated originally for continuous time, whereas here, of course, the system of equations is in discrete time.

Solution of the Wiener-Hopf Equations for Transversal Filters

The transversal filter, depicted in Fig. 2.4, consists of a cascade of unit-delay elements and a corresponding finite set of tap weights w_0, w_1, ..., w_{M-1}. For this filter, the Wiener-Hopf equations (2.26) reduce to the system of M simultaneous equations

sum_{i=0}^{M-1} w_oi r(i-k) = p(-k), k = 0, 1, ..., M-1,   (2.27)

where w_o0, w_o1, ..., w_o,M-1 are the optimum values of the tap weights of the filter.

Matrix Formulation of the Wiener-Hopf Equations

Let R denote the M-by-M correlation matrix of the tap inputs u(n), u(n-1), ..., u(n-M+1) in the transversal filter:

R = E[u(n) u^H(n)],   (2.29)

where

u(n) = [u(n), u(n-1), ..., u(n-M+1)]^T   (2.30)

is the M-by-1 tap-input vector. In expanded form, R is a Hermitian Toeplitz matrix whose first row is [r(0), r(1), ..., r(M-1)]. Correspondingly, let p denote the M-by-1 cross-correlation vector between the tap inputs and the desired response d(n):

p = E[u(n) d*(n)] = [p(0), p(-1), ..., p(1-M)]^T.   (2.31)

Note that the lags used in the definition of p are zero or negative. We may thus rewrite the Wiener-Hopf equations (2.27) in the compact matrix form

R w_o = p,   (2.33)

where w_o denotes the M-by-1 optimum tap-weight vector of the transversal filter. Assuming that the correlation matrix R is nonsingular, the solution is

w_o = R^{-1} p.   (2.34)
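Equation (2.34) lends itself to a direct numerical check. The following listing is a minimal Python sketch (not part of the original text) that solves the matrix form of the Wiener-Hopf equations, borrowing the two-by-two statistics used later in the example of Section 2.7:

    import numpy as np

    # Second-order statistics of the tap inputs and desired response
    # (the values are those of the M = 2 case in Section 2.7).
    R = np.array([[1.1, 0.5],
                  [0.5, 1.1]])          # correlation matrix of the tap inputs
    p = np.array([0.5272, -0.4458])     # cross-correlation vector

    # Wiener-Hopf equations in matrix form: R w_o = p.
    # Solving the linear system is numerically preferable to forming R^{-1}.
    w_o = np.linalg.solve(R, p)
    print("optimum tap-weight vector w_o =", w_o)   # approx. [0.8360, -0.7853]

For larger M, the Toeplitz structure of R could be exploited (e.g., by the Levinson-Durbin recursion), but a general linear-system solver suffices for illustration.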
2.5 ERROR-PERFORMANCE SURFACE

The Wiener-Hopf equations (2.27), as derived in the previous section, are traceable to the principle of orthogonality, which itself was derived in Section 2.2. We may also derive the Wiener-Hopf equations by examining the dependence of the cost function J on the tap weights of the transversal filter in Fig. 2.4. Substituting the filter output into the definition of the estimation error e(n) and expanding the cost function J = E[e(n)e*(n)], we recognize four expectations on the right-hand side: the variance sigma_d^2 of the desired response, the cross-correlations p(-k) between each tap input and the desired response, and the autocorrelation function r(i-k) of the tap inputs. The expanded form of the cost function is

J = sigma_d^2 - sum_{k=0}^{M-1} w_k* p(-k) - sum_{k=0}^{M-1} w_k p*(-k) + sum_{k=0}^{M-1} sum_{i=0}^{M-1} w_k* w_i r(i-k).   (2.43)

Equation (2.43) states that, for the case of a transversal filter whose tap inputs and desired response are jointly stationary, the cost function J is precisely a second-order function of the tap weights of the filter. Consequently, we may visualize the dependence of J on the tap weights as a bowl-shaped surface with a uniquely defined minimum; this surface is referred to as the error-performance surface of the transversal filter.

At the bottom, or minimum point, of the error-performance surface, the cost function attains its minimum value, denoted by J_min. At this point, the gradient vector is identically zero. In other words,

grad_k J = 0, k = 0, 1, ..., M-1,   (2.44)

where grad_k J is the kth element of the gradient vector. As before, we write the kth tap weight as w_k = a_k + j b_k. Hence, using Eq. (2.43), we may express grad_k J as

grad_k J = -2 p(-k) + 2 sum_{i=0}^{M-1} w_i r(i-k).   (2.45)

Applying the necessary and sufficient condition of Eq. (2.44) for optimality to Eq. (2.45), we find that the optimum tap weights w_o0, w_o1, ..., w_o,M-1 for the transversal filter in Fig. 2.4 are defined by the system of equations

sum_{i=0}^{M-1} w_oi r(i-k) = p(-k), k = 0, 1, ..., M-1.

This system of equations is identical to the Wiener-Hopf equations (2.27) derived in Section 2.4.

Minimum Mean-Square Error

Let d^(n|U_n) denote the estimate of the desired response d(n), produced at the output of the transversal filter in Fig. 2.4 that is optimized in the mean-square-error sense, given the tap inputs u(n), u(n-1), ..., u(n-M+1) that span the space U_n. Then, from the figure, we deduce

d^(n|U_n) = sum_{k=0}^{M-1} w_ok* u(n-k) = w_o^H u(n),   (2.46)

where w_o is the tap-weight vector of the optimum filter and u(n) is the tap-input vector defined in Eq. (2.30). Note that w_o^H u(n) denotes an inner product of the optimum tap-weight vector w_o and the tap-input vector u(n). We assume that u(n) has zero mean, which makes the estimate d^(n|U_n) have zero mean, too. Hence, we may use Eq. (2.46) to evaluate the variance of d^(n|U_n), obtaining

sigma_d^^2 = E[w_o^H u(n) u^H(n) w_o] = w_o^H R w_o,   (2.47)

where R is the correlation matrix of the tap-input vector u(n), as defined in Eq. (2.29). We may eliminate the dependence of the variance on the optimum tap-weight vector w_o by using Eq. (2.34). In particular, we may rewrite Eq. (2.47) as

sigma_d^^2 = p^H R^{-1} p.   (2.48)

To evaluate the minimum mean-square error produced by the transversal filter of Fig. 2.4, we may use Eq. (2.47) or Eq. (2.48) in Eq. (2.17), obtaining

J_min = sigma_d^2 - w_o^H R w_o = sigma_d^2 - p^H R^{-1} p,   (2.49)

which is the desired result.

Canonical Form of the Error-Performance Surface

Equation (2.43) defines the expanded form of the cost function of the transversal filter in Fig. 2.4. We may rewrite it, using the definitions of the correlation matrix R and the cross-correlation vector p given in Eqs. (2.29) and (2.31), respectively, as shown by

J(w) = sigma_d^2 - w^H p - p^H w + w^H R w,   (2.50)

where the cost function is written as J(w) to emphasize its dependence on the tap-weight vector w. Completing the square in w, we may rewrite Eq. (2.50) as

J(w) = sigma_d^2 - p^H R^{-1} p + (w - R^{-1}p)^H R (w - R^{-1}p).   (2.51)

From Eq. (2.51), we now immediately see that

J(w) = sigma_d^2 - p^H R^{-1} p for w = R^{-1} p.

In effect, starting from Eq. (2.50), we have rederived the Wiener filter in a simple way. Moreover, we may use the defining equations for the Wiener filter to write

J(w) = J_min + (w - w_o)^H R (w - w_o).   (2.52)

This equation shows explicitly the unique optimality of the minimizing tap-weight vector w_o, since we immediately see that J(w_o) = J_min.

Although the quadratic form on the right-hand side of Eq. (2.52) is quite informative, it is nevertheless desirable to change the basis on which it is defined so that the representation of the error-performance surface is considerably simplified.
To this end, we use eigendecomposition. Specifically, the correlation matrix R may be expressed in terms of its eigenvalues and associated eigenvectors (see Appendix E) as

R = Q Lambda Q^H,   (2.53)

where Lambda is a diagonal matrix consisting of the eigenvalues lambda_1, lambda_2, ..., lambda_M of the correlation matrix and the matrix Q has for its columns the eigenvectors q_1, q_2, ..., q_M associated with those eigenvalues, respectively. Substituting Eq. (2.53) into Eq. (2.52) and defining the transformed version of the difference between the tap-weight vector w and the optimum solution w_o,

nu = Q^H (w - w_o),   (2.54)

we may put the quadratic form of Eq. (2.52) into its canonical form,

J = J_min + nu^H Lambda nu = J_min + sum_{k=1}^{M} lambda_k |nu_k|^2.   (2.55)

This new formulation of the mean-square error contains no cross-product terms: the transformed coordinates nu_1, nu_2, ..., nu_M define the principal axes of the error-performance surface, and the kth eigenvalue lambda_k governs the curvature of the surface along the kth principal axis.

2.6 MULTIPLE LINEAR REGRESSION MODEL

The minimum mean-square error of Eq. (2.49) depends on the relationship between the desired response d(n) and the underlying mechanism responsible for generating the observable data u(n). When that mechanism is a multiple linear regression model, the response to the input can be closely approximated by

d(n) = a^H u_m(n) + v(n),   (2.57)

where a denotes an unknown model parameter vector of order m, u_m(n) is the corresponding input vector, and the noise v(n) is additive and white, with zero mean and variance sigma_v^2. The question of how the minimum mean-square error, which is quadratic in the Wiener filter's tap weights, varies with the prescribed number of tap weights M comes down to a comparison between the filter length M and the model order m. Three cases arise:

1. Underfitted model: M < m. The filter length is smaller than the model order; the filter cannot capture the full structure of the regression model, and J_min remains above the irreducible noise floor, improving as M is increased.

2. Critically fitted model: M = m. The Wiener filter is perfectly matched to the regression model, in that w_o = a and R = R_m. Correspondingly, the minimum mean-square error attains the irreducible value J_min = sigma_v^2.

3. Overfitted model: M > m. When the length of the Wiener filter is greater than the model order, the tail end of the tap-weight vector of the Wiener filter is zero, and no further reduction of J_min is obtained.

2.7 EXAMPLE

To illustrate the optimum filtering theory developed in the preceding sections, consider observable data generated by a multiple linear regression model of order three, and a Wiener filter of varying length M. The second-order statistics are as follows:

(a) The correlation matrix of the input data, up to fourth order, is

R = [  1.1   0.5   0.1  -0.05
       0.5   1.1   0.5   0.1
       0.1   0.5   1.1   0.5
      -0.05  0.1   0.5   1.1  ].

(b) The cross-correlation vector between the input data and the observable data is

p = [0.5272, -0.4458, -0.1003, -0.0124]^T.

The smallness of the fourth entry of p suggests that the model parameters of order higher than three are negligible (see Problem 9).

(c) The variance of the observable data is sigma_d^2 = 0.9486.

(d) The variance of the additive noise is sigma_v^2 = 0.1066.

The requirement is to do the following:

* Compute the minimum mean-square error J_min produced by a Wiener filter of varying length M = 1, 2, 3, 4.
* Plot the error-performance surface of a Wiener filter with length M = 2.
* Compute the canonical form of the error-performance surface for M = 2.

A plot of J_min versus the filter length M exhibits a steep drop as M is increased from the degenerate condition M = 0, for which J_min(0) = sigma_d^2, toward the irreducible value sigma_v^2 = 0.1066 attained at M = 3; increasing the filter length beyond the model order produces no further improvement.

For filter length M = 2, the pertinent values are the two-by-two leading submatrix of R and the first two entries of p. The optimum tap-weight vector, in accordance with Eq. (2.34), is

w_o = R^{-1} p = [0.8360, -0.7853]^T,

and the minimum mean-square error, in accordance with Eq. (2.49), is

J_min = 0.1579.

The point defined by the optimum tap-weight vector w_o = (0.8360, -0.7853) and J_min = 0.1579 locates the bottom of the error-performance surface plotted in Fig. 2.7 and the center of the contour plots of Fig. 2.8, which shows the contour plot of the error-performance surface depicted in Fig. 2.7.
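The first of the three tasks reduces to a few lines of code. The sketch below (not part of the original text) computes w_o and J_min for the leading M-by-M blocks of R; the printed values for M = 2 agree with those quoted above, and the exact digits should be read as approximate:

    import numpy as np

    # Statistics of the example in Section 2.7.
    R = np.array([[ 1.1 , 0.5, 0.1, -0.05],
                  [ 0.5 , 1.1, 0.5,  0.1 ],
                  [ 0.1 , 0.5, 1.1,  0.5 ],
                  [-0.05, 0.1, 0.5,  1.1 ]])
    p = np.array([0.5272, -0.4458, -0.1003, -0.0124])
    sigma_d2 = 0.9486                   # variance of the observable data

    # J_min(M) = sigma_d^2 - p_M^H R_M^{-1} p_M, using leading M-by-M blocks.
    for M in range(1, 5):
        w_o = np.linalg.solve(R[:M, :M], p[:M])
        J_min = sigma_d2 - p[:M] @ w_o
        print(f"M = {M}: w_o = {np.round(w_o, 4)}, J_min = {J_min:.4f}")

For M = 3, the computed J_min comes out at approximately 0.1066 = sigma_v^2, confirming the critically fitted case of Section 2.6; M = 4 yields essentially no further reduction.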
Computation of the Canonical Form of the Error-Performance Surface

For the eigenvalue analysis, we formulate the characteristic equation of the two-by-two correlation matrix:

(1.1 - lambda)^2 - (0.5)^2 = 0.

The two eigenvalues of the correlation matrix R are therefore

lambda_1 = 1.6 and lambda_2 = 0.6.

The canonical form of the error-performance surface is accordingly defined, in accordance with Eq. (2.55), by

J = 0.1579 + 1.6 nu_1^2 + 0.6 nu_2^2.

2.8 LINEARLY CONSTRAINED MINIMUM-VARIANCE FILTER

The essence of a Wiener filter is that it minimizes the mean-square value of an estimation error, defined as the difference between a desired response and the filter output. In some applications, however, no desired response is available; a linear constraint imposed on the filter coefficients then takes its place. Consider a transversal filter whose tap inputs and tap weights are denoted by u(n), u(n-1), ..., u(n-M+1) and w_0, w_1, ..., w_{M-1}, respectively. For a complex sinusoidal excitation u(n) = e^{j omega n}, the filter output may be written as

y(n) = e^{j omega n} sum_{k=0}^{M-1} w_k* e^{-j omega k}.   (2.72)

The temporal version of the constrained optimization problem is stated as follows: Find the optimum set of filter coefficients that minimizes the mean-square value of the filter output, subject to the linear constraint

sum_{k=0}^{M-1} w_k* e^{-j omega_0 k} = g,   (2.74)

where omega_0 is a prescribed value of the normalized angular frequency omega, lying inside the range -pi < omega <= pi, and g is a complex-valued gain.

The spatial version of the problem arises in adaptive beamforming. Figure 2.9 depicts a linear array of uniformly spaced antenna elements with adjustable elemental weights (not shown in the figure), on which a plane wave impinges along a direction specified by the electrical angle theta, measured with respect to the normal to the array. The beamformer output is a weighted sum of the element outputs, and the problem is to find the optimum set of elemental weights that minimizes the mean-square value of the beamformer output, subject to the constraint that, along the prescribed look direction theta_0, the response of the array equals the complex-valued gain g. In this sense, the beamformer is narrowband: its response needs to be constrained at only a single frequency.

Comparing the temporal structure of the constrained filter with the spatial picture of Fig. 2.9, we see that, although they address physically different situations, their formulations are identical in mathematical terms. Indeed, in both cases we have exactly the same constrained optimization problem on our hands: minimize the output power w^H R w subject to the linear constraint w^H s(omega_0) = g, where

s(omega_0) = [1, e^{-j omega_0}, ..., e^{-j(M-1) omega_0}]^T

is the steering vector. To solve the problem, we use the method of Lagrange multipliers, combining the output power and the real part of the constraint into a single real-valued cost function. Proceeding in a manner similar to that described in Section 2.2, we set the kth element of the gradient vector of this cost function equal to zero, obtaining the system of M simultaneous equations

R w_o = -(lambda*/2) s(omega_0),   (2.80)

where R is the M-by-M correlation matrix, s(omega_0) is the M-by-1 steering vector of the constrained beamformer, and lambda is the Lagrange multiplier. Solving Eq. (2.80) for w_o, we thus have

w_o = -(lambda*/2) R^{-1} s(omega_0),   (2.82)

where R^{-1} is the inverse of the correlation matrix R, assuming that R is nonsingular; this assumption is perfectly justified in practice by virtue of the additive receiver noise at the output of each antenna element. To eliminate lambda* from this expression, we use the linear constraint

w_o^H s(omega_0) = g.   (2.83)

Hence, taking the Hermitian transpose of both sides of Eq. (2.82), postmultiplying by s(omega_0), and then using the linear constraint of Eq. (2.83), we get

lambda = -2g / (s^H(omega_0) R^{-1} s(omega_0)).   (2.84)

Finally, substituting Eq. (2.84) into Eq. (2.82), we obtain the optimum solution

w_o = g* R^{-1} s(omega_0) / (s^H(omega_0) R^{-1} s(omega_0)).   (2.85)

Note that, by minimizing the output power subject to the look-direction constraint, signals incident on the array along directions different from the prescribed look direction tend to be attenuated.
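A minimal numerical sketch of Eq. (2.85) follows (it is not from the original text; the look direction, interferer direction, and snapshot model are assumptions chosen only to exercise the formula):

    import numpy as np

    def steering(theta, M):
        # Steering vector of a uniform linear array; theta is the electrical angle.
        return np.exp(-1j * theta * np.arange(M))

    M, theta0, theta_i, g = 8, 0.4, 1.2, 1.0        # assumed scenario; g = 1 below
    rng = np.random.default_rng(0)

    # Simulated snapshots: one strong interferer plus white sensor noise.
    N = 2000
    s_i = steering(theta_i, M)
    amp = np.sqrt(10) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    X = amp[:, None] * s_i \
        + (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
    R = X.conj().T @ X / N                          # sample correlation matrix

    # Optimum constrained weights, Eq. (2.85): w_o = g* R^{-1} s / (s^H R^{-1} s).
    s0 = steering(theta0, M)
    Rinv_s = np.linalg.solve(R, s0)
    w_o = np.conj(g) * Rinv_s / (s0.conj() @ Rinv_s)

    print("response along look direction:", w_o.conj() @ s0)     # = g
    print("gain toward interferer:       ", abs(w_o.conj() @ s_i))  # close to 0

With g = 1, this is precisely the distortionless case introduced next.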
Minimum-Variance Distortionless Response Beamformer

The complex constant g defines the response of the linearly constrained minimum-variance (LCMV) beamformer at the electrical angle theta_0. For the special case of g = 1, the optimum solution given in Eq. (2.85) reduces to

w_o = R^{-1} s(theta_0) / (s^H(theta_0) R^{-1} s(theta_0)),   (2.86)

and the beamformer is constrained to pass the signal arriving along the look direction theta_0 with no distortion, hence the name minimum-variance distortionless response (MVDR) beamformer. The minimum mean-square value (average power) of the optimum beamformer output may be expressed as the quadratic form

J_min = w_o^H R w_o = 1 / (s^H(theta_0) R^{-1} s(theta_0)).   (2.87)

Letting the electrical angle (or, in the temporal case, the angular frequency) vary, the quantity S(theta) = 1 / (s^H(theta) R^{-1} s(theta)) may be viewed as a power spectrum estimate; it is referred to as the minimum-variance distortionless response (MVDR) spectrum, discussed further in Chapter 9.

2.9 GENERALIZED SIDELOBE CANCELLERS

The generalized sidelobe canceller (GSC) extends the single linear constraint of the LCMV beamformer to a set of linear constraints, written in the matrix form

C^H w = g,   (2.90)

where C is a constraint matrix and g is the corresponding gain vector. Let C_a denote a signal-blocking matrix whose columns span the null space of the constraints; that is,

C^H C_a = O.   (2.92)

The weight vector of the beamformer may then be decomposed into two orthogonal components,

w = w_q - C_a w_a,   (2.96)

where the quiescent weight vector w_q = C (C^H C)^{-1} g satisfies the constraints by itself, and the adjustable vector w_a operates on data that have passed through the signal-blocking matrix C_a and therefore contain no look-direction component. Substituting this decomposition into the output-power criterion transforms the constrained minimization in w into an unconstrained minimization in w_a, a structure that will prove useful when we take up adaptive implementations in Chapter 5.

CHAPTER 4

Method of Steepest Descent

In this chapter, we begin the study of gradient-based adaptation by describing the method of steepest descent; an understanding of this classical optimization technique is basic to an understanding of the various ways in which gradient-based adaptation is implemented in practice. The method of steepest descent is recursive in the sense that it can be represented by a feedback system whereby the computation of the filter proceeds in a step-by-step manner. When the method is applied to the Wiener filter, it provides an algorithmic solution that avoids solving the Wiener-Hopf equations each time the statistics change. In a stationary environment, we find that, starting from an arbitrary initial value of the tap-weight vector, the solution improves with an increased number of iterations. The important point to note is that, under the appropriate conditions, the solution so obtained converges to the Wiener solution without our having to invert the correlation matrix of the input vector.

4.1 BASIC IDEA OF THE STEEPEST-DESCENT ALGORITHM

Consider a cost function J(w) that is a continuously differentiable function of some unknown weight vector w. We want to find an optimum solution w_o that satisfies the condition

J(w_o) <= J(w) for all w,

which is a mathematical statement of unconstrained optimization. A class of optimization algorithms that is well suited for adaptive filtering is based on the idea of local iterative descent: Starting with an initial guess denoted by w(0), generate a sequence of weight vectors w(1), w(2), ..., such that the cost function J(w) is reduced at each iteration of the algorithm; that is,

J(w(n+1)) < J(w(n)),

where w(n) is the old value of the weight vector and w(n+1) is its updated value.
4.2 THE STEEPEST-DESCENT ALGORITHM APPLIED TO THE WIENER FILTER

Consider a transversal filter with tap inputs u(n), u(n-1), ..., u(n-M+1) and a corresponding set of tap weights w_0(n), w_1(n), ..., w_{M-1}(n) that are updated from one iteration to the next. As in Chapter 2, the expanded form of the cost function is

J(n) = sigma_d^2 - w^H(n) p - p^H w(n) + w^H(n) R w(n),

where sigma_d^2 is the variance of the desired response d(n), p is the cross-correlation vector between the tap-input vector u(n) and d(n), and R is the correlation matrix of the tap-input vector u(n). According to the method of steepest descent, the updated value of the tap-weight vector at time n+1 is computed by making a correction in the direction of the negative of the gradient:

w(n+1) = w(n) + (1/2) mu [-grad J(n)],

where mu is a positive step-size parameter and the factor 1/2 is introduced for convenience of presentation. From Chapter 2, the gradient vector is given by

grad J(n) = -2p + 2R w(n),

so that we may compute the gradient vector for a given value of the tap-weight vector w(n). Thus, we may compute the updated value of the tap-weight vector by using the simple recursive relation

w(n+1) = w(n) + mu [p - R w(n)], n = 0, 1, 2, ...,   (4.10)

which is the mathematical formulation of the steepest-descent algorithm. The correction delta-w(n) = mu [p - R w(n)] applied to the tap-weight vector w(n) may also be written as mu E[u(n) e*(n)]; the elements of the correction vector are therefore proportional to the cross-correlations between the tap inputs and the estimation error. Observe that we may view the steepest-descent algorithm as a feedback model, as illustrated by the signal-flow graph shown in Fig. 4.2, in the sense that the "signals" at the nodes of the graph are vectors flowing through branches whose transmittances are matrices: for branches connected in parallel, the transmittance matrices add; for branches in cascade, the overall transmittance is the product of the individual transmittance matrices.

4.3 STABILITY OF THE STEEPEST-DESCENT ALGORITHM

Since the steepest-descent algorithm involves the presence of feedback, the algorithm can become unstable; whether it is stable depends on two factors: the step-size parameter mu and the correlation matrix R of the tap-input vector. We begin the analysis by defining a weight-error vector

c(n) = w(n) - w_o,

where w_o is the optimum value of the tap-weight vector, as defined by the Wiener-Hopf equations (2.34). Then, eliminating the cross-correlation vector p by means of R w_o = p and rewriting the result in terms of the weight-error vector, we get

c(n+1) = (I - mu R) c(n),   (4.13)

where I is the identity matrix. Equation (4.13) is represented by the feedback model shown in Fig. 4.4; this diagram further emphasizes the fact that the stability of the steepest-descent algorithm depends exclusively on mu and R. Using the eigendecomposition R = Q Lambda Q^H and defining the new set of coordinates

nu(n) = Q^H c(n) = Q^H [w(n) - w_o],   (4.16)

we may rewrite Eq. (4.13) in the transformed form

nu(n+1) = (I - mu Lambda) nu(n).   (4.17)

The initial value of nu(n) is nu(0) = Q^H [w(0) - w_o]; assuming that the initial tap-weight vector is zero, this reduces to nu(0) = -Q^H w_o. Because (I - mu Lambda) is diagonal, the kth natural mode of the algorithm evolves independently of the others:

nu_k(n+1) = (1 - mu lambda_k) nu_k(n), k = 1, 2, ..., M,

which has the solution

nu_k(n) = (1 - mu lambda_k)^n nu_k(0).

The signal-flow representation of the kth natural mode is thus a scalar feedback loop, and the numbers generated by this solution form a geometric series with ratio (1 - mu lambda_k). For stability, or convergence, of the steepest-descent algorithm, the magnitude of this ratio must be less than one for all k. Provided that all the eigenvalues lambda_k are positive, the necessary and sufficient condition for convergence is

-1 < 1 - mu lambda_k < 1 for all k,

or, equivalently,

0 < mu < 2/lambda_max,

where lambda_max is the largest eigenvalue of the correlation matrix R. Assuming this condition is satisfied, an exponential envelope can be fitted to the geometric decay of the kth natural mode; the kth time constant can be expressed in terms of the step-size parameter and the kth eigenvalue as

tau_k = -1 / ln(1 - mu lambda_k),

which, for small mu, is approximately tau_k = 1/(mu lambda_k).

Transient Behavior of the Mean-Square Error

Using the canonical form of the mean-square error together with the solution for the natural modes, we may express the evolution of the cost function as

J(n) = J_min + sum_{k=1}^{M} lambda_k |nu_k(0)|^2 (1 - mu lambda_k)^{2n}.

The curve obtained by plotting J(n) versus the number of iterations n is called the learning curve of the algorithm. When the steepest-descent algorithm is convergent, the learning curve consists of a sum of decaying exponentials, and J(n) approaches the minimum mean-square error J_min in the limit as n approaches infinity.
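The recursion of Eq. (4.10) is short enough to transcribe directly. The listing below (not part of the original text) iterates the steepest-descent algorithm on the two-by-two statistics of the Section 2.7 example and confirms convergence to the Wiener solution when 0 < mu < 2/lambda_max:

    import numpy as np

    # Known second-order statistics (reused from Section 2.7 for illustration).
    R = np.array([[1.1, 0.5],
                  [0.5, 1.1]])
    p = np.array([0.5272, -0.4458])

    lam_max = np.linalg.eigvalsh(R).max()
    mu = 1.0 / lam_max                 # safely inside the stability bound 2/lam_max
    w = np.zeros(2)                    # initial tap-weight vector w(0) = 0

    for n in range(200):
        w = w + mu * (p - R @ w)       # steepest-descent update, Eq. (4.10)

    print("steepest-descent solution:", w)
    print("Wiener solution R^{-1} p: ", np.linalg.solve(R, p))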
For specified values of and a,, the varian [ ono the whit ‘chosen to make the process u(n) have variance 0; = 1 o-& ‘The requirement isto evaluate the transient behavior of the steepest-dese VE atgorihrn forthe folowing conditions ‘both of which are normalized to unit length. «Varying eigenvalue spread x(R) and fixed step-size parameter a ‘Varying step-size parameter 1 and fined eigenvalue spread x(R). 4 EXPERIMENT 1 Varying Eigenvalue Spread In this experiment, the step-size parameter 1 is fixed at 0.3, and the evalu: made for the four sets of AR parameters given in Table 4.1. Since the predictor of Fig. 4.7 has two tap weighs and the AR process u() is real ant {follows thatthe correlation matrix R of the tap inputs is atwo-by [3 2h 10) = 03 where (see Chapter 1) and ry =-7 oa vo [2] ws . [f = era e \", = (0 — wAsyex(0) i a “To aleulte the iit value w(0), we use Eq (4.19) assuming that ihintial vale w(0 ae (t+y3y)4 tap-weight vector w(n) i eroThis equation requites know : 2 {ap weight vector w, Now, when the wo-tap predictor of Fig. 47is Hence, the eigenvalué spread equals (assuring that a is negative) second-order AR process of Eq. (4.33) supplying the tap inputs, we find that the mun ap-weight vector gyda toate XR) Tea ta - [2] “The cigenvectors associated with the eigenyaies hy and As are respective bea. aris and the minimum mean-square error is anvil Ing 02 “ot = (WEG) 0 = CWO) sued ua (u)'a] Jo Suoyoofen oun ea Zorowiezed24p Jo 280 241, (sey) pue (bey) sba ur ry ata s0 QDXpraudg aneusigy =z 250, rasa ay sty 229t¢0Y/[ sjoua esidaseadas (9¢'p) ba ‘pa Jo 1o4d feuorsustp-o sprweed (rep) “bat as OY (55) Wo ooh om usauese poppods Joy (Eet}e- Co][r J4- tay |= conpea jentut ayn IK (61'p) *ba Jo asn ayn ‘KiSurpio29y uring wadaais jo pow pimidey 942 pears sass Bidie poe c0 =f seraed ome dona mundi pasa stm BET OF SYNDL s9n199 8 op plno4s 0 ‘sou azenbs-veaus winuuqurr 99} (paiejari09 oxo sousooai{ ss00%d jndut 4p pus ‘soar pedads anyeatosi9 24 i 998 oq -00t Bue OTe Zt Speatds omyenuade> sno} ay) 405 w snsios (u) ¢s04y9 oxenbs-ueott 241 panoid 9484 Om “OT'y BLE UI sng igrausem® npuodsa09 os rae sjyeuoxsle, BONS FEET ain spit (ey) DE sd tu) JO. sonyes ‘sanyea soqouresed ou 1 10y UMOYS 10} [eptosd Jo asm ayp pus ed 8 tz adwesd ry vores 4 two) gy Banos « ze 222 Chapter4 Method of teepest Descent : EXPERIMENT 2 Varying Step-Size Parameter OOD In this experiment, the eigen “Observations ‘On the bass of the results presented for Experiments 1 and 2, we may make the followin observat 1 so follows a curved ps ted in Fig 4.9(¢). When the eigenvalue spread is very high (Le, the input dit test when the eigenvalues A, and ee = starting point of the algorithm is chosen prope! 
Observations

On the basis of the results presented for Experiments 1 and 2, we may make the following observations:

1. When the eigenvalue spread is very high (i.e., the input data are highly correlated), the trajectory formed by joining the points w(0), w(1), w(2), ... follows a curved path, as illustrated in Fig. 4.9(c). When the eigenvalues lambda_1 and lambda_2 are equal, and the starting point of the algorithm is chosen properly, the trajectory follows a straight line, which is the shortest possible path.

2. When the step-size parameter mu is small, the transient behavior of the algorithm is overdamped, in that the trajectory traces a continuous path; when mu approaches the maximum allowable value 2/lambda_max, the transient behavior is underdamped, in that the trajectory exhibits oscillations.

The conclusion to be drawn from these observations is that the transient behavior of the steepest-descent algorithm is highly sensitive to both the step-size parameter mu and the eigenvalue spread of the correlation matrix of the tap inputs.

4.5 THE STEEPEST-DESCENT ALGORITHM VIEWED AS A DETERMINISTIC SEARCH METHOD

The error-performance surface of an adaptive transversal filter operating in a wide-sense stationary environment is a bowl-shaped (i.e., quadratic) surface with a unique minimum point. The steepest-descent algorithm provides a local search of this surface, starting from an arbitrary initial point and moving in small incremental steps toward the minimum point; the parameter mu controls the size of the incremental change applied to the tap-weight vector at each step. The path of steepest descent terminating on the minimum point is deterministic: it is determined completely by the two ensemble-average quantities R and p, with no randomness entering into the computation.

4.6 VIRTUE AND LIMITATION OF THE STEEPEST-DESCENT ALGORITHM

The method of steepest descent computes a tap-weight vector w(n) that moves down the ensemble-average error-performance surface along a deterministic trajectory that terminates on the Wiener solution w_o. Its virtue is simplicity; its limitation is that it requires exact measurements of the gradient vector at each iteration n, which in turn would require prior knowledge of both the correlation matrix R of the tap inputs and the cross-correlation vector p between the tap inputs and the desired response. When the filter operates in an unknown environment, one whose parameter vector is unknown, such exact measurements are not possible, and the gradient vector must consequently be estimated from the available data.

CHAPTER 5

Least-Mean-Square Adaptive Filters

In this chapter, we develop the theory of a widely used algorithm named the least-mean-square (LMS) algorithm by its originators, Widrow and Hoff (1960). The LMS algorithm is a member of the family of stochastic gradient algorithms; the term "stochastic gradient" is intended to distinguish the LMS algorithm from the method of steepest descent, which uses a deterministic gradient in a recursive computation of the Wiener filter for stochastic inputs.

5.1 OVERVIEW OF THE STRUCTURE AND OPERATION OF THE LEAST-MEAN-SQUARE ALGORITHM

A significant feature of the LMS algorithm is its simplicity: it requires neither measurements of the pertinent correlation functions nor matrix inversion. The adaptive filter, built around a transversal structure, comprises two basic processes: (1) a filtering process, which involves computing the output of the transversal filter produced by a set of tap inputs and generating an estimation error by comparing this output with a desired response, and (2) an adaptive process, which involves the automatic adjustment of the tap weights of the filter in accordance with the estimation error. The combination of these two processes working together constitutes a feedback loop, as illustrated in the block diagram of Fig. 5.1(a); the components of the transversal filter and of the adaptive weight-control mechanism are presented in Fig. 5.1(b).

5.2 LEAST-MEAN-SQUARE ADAPTATION ALGORITHM

If it were possible to make exact measurements of the gradient vector at each iteration, and if the step-size parameter mu were suitably chosen, then the tap-weight vector computed by the steepest-descent algorithm would indeed converge to the optimum Wiener solution. In reality, exact measurements of the gradient vector are not possible, since that would require prior knowledge of both the correlation matrix R and the cross-correlation vector p, and the gradient vector must be estimated from the available data when we operate in an unknown environment. The most obvious strategy is to substitute instantaneous estimates for R and p, as shown by

R^(n) = u(n) u^H(n) and p^(n) = u(n) d*(n).

Correspondingly, the instantaneous estimate of the gradient vector is

grad^ J(n) = -2 u(n) d*(n) + 2 u(n) u^H(n) w^(n).

Substituting this estimate into the steepest-descent recursion, we obtain the LMS algorithm:

w^(n+1) = w^(n) + mu u(n) [d*(n) - u^H(n) w^(n)] = w^(n) + mu u(n) e*(n),

where the filter output and the estimation error are

y(n) = w^H(n) u(n) and e(n) = d(n) - y(n).

Note that we have used the symbol w^(n) for the tap-weight vector computed by the LMS algorithm, to distinguish it from the deterministic w(n) produced by the method of steepest descent: because the instantaneous gradient estimate is noisy, w^(n) is a random vector. The LMS algorithm presents an estimate whose expected value may come close to the Wiener solution; for a wide-sense stationary environment, w^(n) approaches w_o in a statistical sense as the number of iterations approaches infinity.
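The three defining equations of the LMS algorithm translate directly into code. The following function is a minimal sketch (not reproduced from the text), written for complex data as in the canonical model discussed below:

    import numpy as np

    def lms(u, d, M, mu):
        """One pass of the complex LMS algorithm.

        u  : 1-D array of input samples u(n)
        d  : 1-D array of desired-response samples d(n)
        M  : number of tap weights
        mu : step-size parameter
        Returns the final tap-weight vector and the error sequence e(n).
        """
        w = np.zeros(M, dtype=complex)            # initial condition w(0) = 0
        e = np.zeros(len(u), dtype=complex)
        for n in range(M - 1, len(u)):
            u_n = u[n - M + 1:n + 1][::-1]        # tap-input vector [u(n), ..., u(n-M+1)]
            y = np.vdot(w, u_n)                   # filtering:  y(n) = w^H(n) u(n)
            e[n] = d[n] - y                       # error:      e(n) = d(n) - y(n)
            w = w + mu * u_n * np.conj(e[n])      # adaptation: w(n+1) = w(n) + mu u(n) e*(n)
        return w, e

The same loop serves all of the applications described in Section 5.3; only the way in which u(n) and d(n) are wired up changes from one application to the next.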
A signal-flow-graph representation of the LMS algorithm in the form of a feedback model is shown in Fig. 5.2. This model bears a close resemblance to the feedback model of Fig. 4.2 describing the steepest-descent algorithm; the essential difference is that, in the LMS case, the quantities flowing around the loop are stochastic, so that during the course of adaptation the algorithm follows the direction of the true gradient only on average.

Summary of the LMS Algorithm

In Table 5.1, we present a summary of the LMS algorithm, embodied in the following three steps per iteration:

1. Filtering: y(n) = w^H(n) u(n).
2. Estimation error: e(n) = d(n) - y(n).
3. Tap-weight adaptation: w^(n+1) = w^(n) + mu u(n) e*(n).

The table also includes the initialization w^(0) = 0, the customary choice in the absence of prior knowledge.

Canonical Model of the Complex LMS Algorithm

The LMS algorithm as summarized above is complex valued, in that the combination of input and desired-response data determines complex tap weights. Writing the tap weights, signals, and error in terms of their real and imaginary parts, with the subscripts I and Q denoting the "in-phase" and "quadrature" components, respectively, and equating real and imaginary parts of the update equations, we obtain a canonical model in which the complex LMS algorithm is represented by a set of four real, cross-coupled LMS updates. This canonical model is useful when the algorithm is implemented with real arithmetic.

5.3 APPLICATIONS

The practical value of the LMS algorithm is best conveyed by presenting six widely different applications. Before proceeding with them, we note that in a nonstationary environment the algorithm has the additional requirement of tracking the statistical variations in the environment, an issue to which we return in Chapter 13. The first application is adaptive equalization, in which the LMS algorithm is used to equalize the distortion caused mainly by dispersion in a communication channel used to transmit binary data.
LL SHORI9S91 ey uyeyuon Kou au asnen0q asa seed Jo axe sro} K2EUOUND2S aH, )upuuypes Jo pues Jo sia4e'SyP04 se yas SuOHRas pa}sn19 Jo Uo! psy s woneyazdzoyu SIL -spuva aut jo singe ieaPFojOo8 yU>toqsp 2M WHO} sts jo uoweiaidrayut amp st ABojOUS!S oy joauozap 2an21p22d 1-91 saxdeun 01 Pf noassi -uomnyonuosep aanotpaxd urpozoUd} st yey) uoneutojur—uresBowstas uonnaayss20 ‘ut poutelwoo uotieutioyut aseqd 2[qenjea 10} sjunonoe yore uwonnjonuosap pu ‘fzoayy worsipaid fo sojax sanpaooud tf asne99q palle> Sst yoius,'wornjoxuonep aatayped’ Jo amjonsns aoepanagns amp Noge UoHTeULOyW Kuz09 YoU” (S99EH 2 agusos Jo suotizayes"2") suINIPS 1uDs 4} Zuypio2as pue dn Bund «| ere suonenddy es ues suanng aandepy senbs-ueayriseoy suaideu> zp 24h Chapter 5 Least Mean-Square Adaptive Filters Section 53 Applications 2 ion 3: Instantaneous Frequency Measurement In this example, we study the use of the LMS algorithm as the bs frequency content of a narrowband signal characterized by a rapidly vary 5, 1975). In so doing, we illustrate the linkage between three gressive (AR) model for describing a stochastic process, tugs 'a linear predictor for analyzing the process, studied in Chapter 3 Sian ei mae nn cee es for estimating the AR parameters 4 wap weighs: band signal we mea a signal whose bandwidth (fs smal ea tiy(n #1) = ulm) + sl AYfulm), = 524 ana strated in ig 55.4 frequency mod (#1) = ulm) # aan = Kulm), M. (328 Bin tis equation, | respect to tine) of an FM Lut) = wn) ~ Bulan ~ 4) 525 then,a narrowbend processu(n) generated by at 1 jhe prediction error-The tap weights ofthe aday Shown by the ference equation (suming real ive predictor are slated tothe AF a > model parameters a follows: vn) = Sinn ~ 4) +H) ~iy(n) = estimate of y(n) at time n, fork = 12... where the a(n) are the time varying model parameters and v(n) ia zero-mean yf ase’ Morcover, the average power ofthe prediction eror fn) provides an estimate of th he frequency ofa narrowband signal mn of ailn). Specifically, we use vageonly the tap weights of the adaptive predictor to define the time-varying frequency fat function ose proces of time-varying variance of(n). Te time-varying AB (rower) pag ‘tthe narrowband prowess un) is given by [see Fg (101)] Saa(osn) = — i), -a cose. pe Samer} (526) Given the relationship between i(n) and a(n), we see that the essen tween.te frequency function F(a) in Eq. (526) and the AR power spectrum Sy, in Bq, (5.23) lies in their numerator sale factors. The numerator of F(w n) i stant equal to unity whereas that of Saxo? ‘The advantages of F(w;n) over Sua( wi) a herent in the narrowband spectrum of Eq, (5.23) is replaced by a “comput: ‘ractable” limit of 1/0 in Eq, (5 fected by amplitude scale changes value of F(u;7) isrlated directly tothe spectral wi ‘We may use the function F(w;n) to measure Iregiency-modulated signal u( (Grits, 1975) + The adaptive predictor Hits been in operation sufficies itference be- (n), with the result ‘of the input signal. instantancous frequency ly long, so as toensure that — anya yan be ape ne doa ly constant over the sampling range of the FIGURES. Defisiton of nasowband sigan terms of ts pect adaptive predictor, whick extends from time (n"- M) to time (n ~ 1) ; z sno on unde oc Tress? + corm? 
= i dope yo wed 201 9's SUNDLE sy [0 + (2 w)to]s00¥7 jnduy dey atp Jo onjea Suxpuodsoxt09 ou au sauteoy or rouodulo ts 24 (22 suiodosd puosas axa andino aqy‘e2u Md “ sppouresed ozis-do}s ay) J0} one us # Buysooys Kq 25u2r9}190 3219 Jo Aayanbayy ayy fIa80 ‘son appus 24 vo Jofzouea 24) Jo asuodso: Kouanbayy axa Ut HDX 1 soso Aauanbouy Sunn ayy pue ‘a(geuns posnu ayy Jor fousnbayy 1 10u anifepe we se saneyon. eiosaad Jo 2oua!uoAuoo roy Syn 99 0} pawnsse ‘urpue srep ida om Sui ur spuds sap te yn 99U xf 2oUD Nop paaisrap sx wiod yjnu as0ge 1 es) (ors) : eau &q pardepe aie agian de, dope au fog 249:2}910 ne yd s0Uata}a1 UL Tag. Hoe pur [euas Suzeaq-voKewoyul ues ot ar dope enp & jo wesBeIp Y>049 2 14) UL an Snood 3 jap fo an ay fq pope stu > vey wa}gard © anny 9-03 2 pu dzeys 082901 (os) suoento a1 Jo pera 20s, sme faouanad © yeoydds oun oT sguodsos Aouonbalh ys aqqun SLL ;Ponyea [tou aq 04 pauunsse axe yep yndut x ‘24ay patapisuod ui ‘o|oura asfou axndepe ayy \depy avenbs-ueayiaseay 5 seidey> intererence FIGURE'S.7_(a)New rpreientation of adaptive noe cancalet. (Continued onthe nex pat) FIGURE 5.7 (e)Sinai-Rowsgraghepreeainn aC adage one cnc. nang {heh top weigh for dtnled tention ore sous A wd nous 28: ss fessonsuEN aN mh douonbory mde onl © 204 ape 8 sooxy au Aa apmydune wr poqe>ssyuouoduo> prosos si, 2um) qu Ho a v0 20uD4 pue "p aseyd aur wo 1uapuodap st yey tusuoduion BunKuDR ue 'p aseyd oy 9 dopuadapur st yxy Lowe) . 1) yw a postop nauodios muouneut-aui Yt yuodutoo om jo wins ain jo (ce) ba Zunmnsqas'sna, Peele (4S es) (es) (ES andor am (@)es wioysueni-2 a1 Jo wwouodwo> Burdrea-oumn 349 21003 4 ary 1s suoneriddy e's uones sang aandepy orenbs-ueayy-seay saideuy 0s 252 Chapter Least Mean Square Adaptive Filters “The model of Fig. 5.1() is recognied as a closed-loop feedback system tcanstes function H(2)istlated tothe operloop transfer uncon G(2) va the ransform of the stem output the system input d(n)- Accordingly, su approximate re ) ransfer functionof a second-order lar frequency wy. The zer0s of H(z) it circle, a radial approximate the poles of £1 also includes the haf power points of H le the adaptive noise canceller has (in theory) a notch of infinite (in dB) at w = ay. The sharpness of the notch is determined by the closeness o poles of H(z) toits zeros The 3-dB bandwidth Bis determined by locating the two ak [power points on the unit circle that are ‘V2 times as far from the poles a they ar oa ‘the zeros. Using this geometric approach, we find thatthe 3B bandwidth ofthe adapt. noise eancelleris A cis (4) ‘Therefore, the smaller we make , the smaller ‘bandwidth B is and the sh ayn “+ a) buried in wideband noise v u(n) = Asin( Section5:3 Appl aptive Line Enhancement ine enhancer (ALE) ; signal buried in a wideband noise background." Thi in Fig. 59, isa system that may be used + dy) #2 and the noise vin) is assumed to have zero ignal and thereby comps y(n that multiples of = aAsin(ayn + 8) 4 You denotes the output noise. 
The seal sia agin in o's Sa Sa eu 2) (3H's) bg yo tans}20ts towed 3yp ty syuatiodwon 4no3 ysin (el amo 20UPLTEA pte EOL, Jo prosnuis—siuauodusoa om Jo os 30 (10)4 99104 pusqoprm pue Yo AouanbatysejnduE ssuoo jem india Sry aq wey BurzUsO9y IoKs $e porepoU 2q stu Aeus ATTY U4-28}0U UDIPEAB O1 anp (u Wopuer ueau-o192 uneniony Apso © ila oyfered ut Fun9e* a uoLntos s5UoH94 3H 430 ss{sUoa o1294 aytam Paseat0> 2 J, pour a118-APeDis ag Kn Aes 0} A248 “not Joy'uos}oas vou 94 u poyuosaud st romByag st JO sISKtetEE TeuAO} ¥ “(u)Pm YON 2 01 S3819n1009 (4) J0422n 348!9M 94 Jo UeDUH aU) HDUUOHLAKD KreUO! urteq 1s Uors9g wt paruosaid wuxyuio3j@ gy] ay) jo walalaKo ax oN [[eD2z 89 9 “(8¥'5)"a Jo uontsodwo> ay) puerssopun df onaun} ey3p IeAIC esaIOUDp (.)9 2100 Go — eso $1 (i = w)s00 ~ 1 ten ayamo sil (879) “nso pa la no OW I Wat + [(00 + o)9 + (00 — me lavion + ,0) se possaudio 09 Kou (u}( indino TV 28 40 Ausuep resiaods som0od oy! vou umos 038 3484 (6U61) sAIpt2z pu¥ pileypr 40 sa0ueyu9 ot jensads, aweu S41 2ouaq—prosnuis Sutus6Sut ag Jo 8 AouENbaH seu on e Awad v s.quyxo astodsaiAouonban)asoye sop Sunn fasesesioe ITV 241°(SP'S) ba 01 BuIptoasy ary ayi Jo induy 249 Te oor astounoy jue au s21OUP S82 suonenddy e's uonsas ‘seme aun sandepy 6's 3unDIs om 2, a sianis anadepy azenbs.weayy-se—7 sundeus vse sions 2 lean-Square Adaptive Filters Section 5.3. Appl when an adequate SNR exists at the ALE inp: sproximately equal to the sinusoidal compone imple adaptive system for the detection of a sinusoi jponent of angular frequency ap and average power mathe averag >f processing the input sinusoid by the Wiener filter rep x C,on the basis ¢ is defined by (as Sci tn the GSC, the vector of weight ar array of antenna cleme represented by (ssc amy fguRe 5.12 ‘> ayonur Aout 2u'ssuas afezaqe ue ut LORE oNseyPOIS € Yons fo TOMeYAG 29UOHI95 noo au APMUS op {(4),/0(u)w ~ q]ortenba xuveur uray jo azniea} gstAIDesEYS 2m, ‘gy (H)2s0}P94 Jonsa-1ffan af Ur MonoMba aouiafip asoypos Ws (96's) wonenbs, powony Sujferany rang 39 sods uae) sea nay ov wos uo suds ay (038 tn todoud rowtowoy a pies is tou esau honsodedne Joandomad say pages a 408 sh “(4 (u)}€ 0) abo aateoe 6 a ike ‘xt (ua (ua soon day sod 2 ati yo oy (a not son 20481 worst sod nap (aot asmaHr a) non ina ue 01 ums «Jo oxo ah 1049p (4) ustouy pur wesodd sjsu srs sores segunda ae an ot “oy souayyy uenuando a9 &g paonpoud s0119 uénournso ap st ss) (ung = (ep = (uy? sasanog Kear up ~sourgiap> co2uy e pune yng $4 oneruouraydust eotshyd 84 vet : pup xireur Ayiuopy 2p st 2104 520 ST aun 01 pautojor 9m id (oss) Su) zo(usort ~ (w)a[(u) y(n ~ 1} = (1+ 4) ° AUOAHL SWTIVIUSUVIS 9's 2 (v)3 201304 souro-yBtom 9g Jo suzI91 UE MANIUOSY SIT MH tums feu aa '(6's) bat Jo apis pueu- Sis mn vo usa Hounsnpe au wos (4) 83168 tury 01 (gs) ba J. uOH Np 24 Bus pus “a JO3204 rfte-der wnwsndo ayy wos (Gs) ba Sunoenang -u woesox ve sous SIT 2p. 
The estimate produced by the ALE can be quantified by its power spectrum. Recognizing that the ALE input consists of the sum of two components, a sinusoid of angular frequency omega_0 and the wideband noise v(n), and that the converged weight vector consists of the Wiener solution acting in parallel with a slowly fluctuating, zero-mean random component due to gradient noise, it may be shown (Rickard & Zeidler, 1979) that the power spectral density of the ALE output y^(n) contains a pair of spectral lines, proportional to delta(omega - omega_0) + delta(omega + omega_0) and representing the enhanced sinusoid, plus broadband noise components. According to this result, the ALE acts as a self-tuning filter whose frequency response exhibits a peak at the angular frequency omega_0 of the incoming sinusoid, hence the name "adaptive line enhancer." When an adequate signal-to-noise ratio (SNR) exists at the ALE input, the output is approximately equal to the sinusoidal component, and the ALE constitutes a simple adaptive system for the detection of a sinusoid buried in a wideband noise background.

Application 6: Adaptive Beamforming

For our last application, we revisit the generalized sidelobe canceller (GSC) of Section 2.9, now implemented adaptively. In the GSC, the vector of weights applied to the outputs of the array of antenna elements is represented by

w(n) = w_q - C_a w_a(n),   (5.48)

where w_q is the quiescent weight vector, defined in terms of the constraint matrix C and the prescribed gain vector g by w_q = C (C^H C)^{-1} g, and C_a is the signal-blocking matrix. The input-output relation of the beamformer is

e(n) = w^H(n) u(n) = (w_q - C_a w_a(n))^H u(n),   (5.49)

and the adjustable weight vector w_a(n), which lies in the subspace spanned by the columns of C_a, is adapted by the LMS algorithm so as to minimize the mean-square value of the beamformer output e(n). Figure 5.12 shows the block diagram of the arrangement.

5.4 STATISTICAL LMS THEORY

Although the LMS filter is profoundly simple to implement, a complete statistical analysis of its convergence behavior is mathematically challenging. The goal of the material presented in this section is a statistical understanding of the algorithm operating in a stationary environment, which provides the practical basis for a wide class of designs. To proceed with the analysis, we work with the weight-error vector

epsilon(n) = w^(n) - w_o,

where w_o denotes the optimum Wiener solution for the tap-weight vector and w^(n) is the estimate produced by the LMS filter. Subtracting w_o from both sides of the LMS update equation and introducing the estimation error produced by the optimum Wiener filter,

e_o(n) = d(n) - w_o^H u(n),

we obtain a stochastic difference equation in the weight-error vector:

epsilon(n+1) = [I - mu u(n) u^H(n)] epsilon(n) + mu u(n) e_o*(n).   (5.56)

The characteristic feature of this equation, namely the random system matrix I - mu u(n)u^H(n), makes a direct study of its behavior difficult. To study the convergence behavior of such a stochastic algorithm in an average sense, we may invoke the direct-averaging method of Kushner: for a small step-size parameter mu, the solution of the stochastic difference equation (5.56) is close to the solution of another stochastic difference equation whose system matrix is equal to the ensemble average

E[I - mu u(n) u^H(n)] = I - mu R.

More specifically, we may replace the stochastic difference equation (5.56) with the stochastic difference equation

epsilon_0(n+1) = (I - mu R) epsilon_0(n) + mu u(n) e_o*(n),   (5.58)

where, for reasons that will become apparent presently, we have used the symbol epsilon_0(n) for the weight-error vector. Note, however, that the solutions of Eqs. (5.56) and (5.58) become equal only in the limit of a vanishingly small step-size parameter.

Butterweck's Iterative Procedure

In a procedure devised by Butterweck (1995, 2001), the solution of Eq. (5.58) is used as a starting point for generating a whole set of solutions of the original stochastic difference equation (5.56); the accuracy of the solution so obtained improves with the iteration order. Thus, starting with the solution epsilon_0(n), we may express the solution of Eq. (5.56) as the sum of partial functions

epsilon(n) = epsilon_0(n) + epsilon_1(n) + epsilon_2(n) + ...,   (5.59)

where epsilon_0(n) is the zero-order solution of Eq. (5.56) for the limiting case of mu approaching zero and epsilon_1(n), epsilon_2(n), ... are higher order corrections to the zero-order solution for mu > 0. If we now define the zero-mean difference matrix

P(n) = u(n) u^H(n) - R,   (5.60)

then, substituting Eqs. (5.59) and (5.60) into Eq. (5.56) and equating terms of like order, we deduce the set of coupled difference equations

epsilon_i(n+1) = (I - mu R) epsilon_i(n) + f_i(n), i = 0, 1, 2, ...,   (5.61)

where the subscript i refers to the iteration order.
The driving force f_i(n) for the difference equation (5.61) is defined by

f_i(n) = mu u(n) e_o*(n) for i = 0, and f_i(n) = -mu P(n) epsilon_{i-1}(n) for i = 1, 2, ...,   (5.62)

so that the solution of the ith equation follows from that of the (i-1)th. The LMS filter is thus a time-varying system characterized by the stochastic difference equation (5.56), and the analysis of the LMS filter for a wide-sense stationary stochastic process is transformed into a set of equations having the same basic format, each describing the transmission of a stochastic process through a low-pass filter whose cutoff frequency becomes extremely low as the step-size parameter mu approaches zero.

On the basis of Eq. (5.59), we may now express the correlation matrix of the weight-error vector epsilon(n) by a corresponding series:

K(n) = E[epsilon(n) epsilon^H(n)] = sum over i, k = 0, 1, 2, ... of E[epsilon_i(n) epsilon_k^H(n)].   (5.63)

Collecting terms of like powers in the step-size parameter mu, we get the corresponding series expansion

K(n) = K_0(n) + mu K_1(n) + mu^2 K_2(n) + ...,   (5.64)

where the various matrix coefficients are themselves defined as follows:

mu^j K_j(n) = E[epsilon_0(n) epsilon_0^H(n)] for j = 0, and mu^j K_j(n) = sum of E[epsilon_i(n) epsilon_k^H(n)] over all i, k >= 0 such that i + k = 2j - 1 or 2j, for j = 1, 2, ....   (5.65)

That only such combinations contribute is ascribed to a very low degree of correlation between epsilon_i(n) and epsilon_k(n) for the remaining index pairs; that is, those two weight-error vectors are almost orthogonal. Note also that K_j is not independent of mu; rather, its own Taylor expansion contains higher order terms, which makes the series expansion of K(n) less elegant than Eq. (5.64) may suggest.

The matrix coefficients in Eq. (5.64) are determined, albeit in a rather complex fashion, by the spectral and probability distribution of the environment in which the LMS filter operates. In a general setting with arbitrarily colored input signals, the calculation of K_j(n) for j >= 1 can be rather tedious, except in some special cases (Butterweck, 1995). Nevertheless, Butterweck's procedure reveals a structure in the statistical characteristics of the LMS filter.

Small-Step-Size Statistical Theory

In much of what follows, we restrict the development of statistical LMS theory to small step sizes, embodied in the following assumptions:

Assumption I. The step-size parameter mu is small, so that the LMS filter acts as a low-pass filter with a low cutoff frequency.
‘According toa, (5.73), the number of natural modes const the LMS filters equal 1 the number of adjustable parameters in the se of t components of the transfoi 2d with each ober ‘ns = (I= pai)utn) + @ the corresponding €q (5:74), we wai jpest-descent algorithm in the presence of thes (0) = -nO"ELw(ndettn] i efound implatons i pricy, rom (278), fllos that the =O the natural model fom one We next is where the expectation term is zero by virtue of the principle of orthogonality. Au(a) = 1m +1) ~ (5.79) ‘The correlation matrix of(n) is defined by = maven) + os ELS(n)o"(a)] = LOMEL(npezndes(mpu(n)]O. 6 paris ind a stochastic force bhignis b(n). ‘éu(n) is. 3 zero-mean whi the uifference equation (5.79) asthe discrete stich characterizes Brownian’ motion* (Ulenbeck & Orns. noise process, we recognize he Langevin egu 1, 1930; Reif, 1965) ‘To evaluate the expectation term in Eq. (5:77), we invoke Assumption Il or Assunpt Il, depending on which operational scenario's applicable: ‘+ When the Wiener fie is perfectly matched to the multiple regression model Eq, (5.66), the estimation error en) is white (Assumption I). According ‘ay fétrie the expectation term in Eq, (5.77) as follows Elu(nes(nea(n)ul(n)] = Elez(n)eg(n)]Efu()u(n)] = JR macsoscopic patil of ms a Hence, re Esthet fon operator elma conn and ithe E(o(}6")] = 1YpisQ"RQ The total fore exried onthe ete made op fo one = Wigs which demonsfrates Property 2 + When the Wiener filter is mismatched to the mul Eq. (5.66), we invoke Assumption I. Specifically with the input da . d(n) assumed to be jointly Gaussian, the estimation error e(n) ‘Gaussian Then, applying the Gaussian momenfactring theorem of Eq. (110) wemay write Ela(n)ed(n)e(n)a!(n)] = Elutnpez(n)]Eleatna"(n)] + eles est) Ef u(npu Which, by virtue ofthe principle of orthogonality, reduces fo = Blu(nnes(n)eo(n)ul"(n)] = Elez(n)ea(n)]E{u(n)u"(n)} = Jaa. on 3) eal the Langevin equation ach debe the bear ofthe pa mate ing Browrion tin wih Eq (579) forte LMS algorithm. 
Learning Curves

Following the common practice in the adaptive filtering literature, we use ensemble-average learning curves to characterize the transient behavior of the LMS filter. Two kinds of learning curves are identified:

1. The mean-square-error (MSE) learning curve, which is based on ensemble averaging of the squared estimation error; this curve is a plot of

J(n) = E[|e(n)|^2]

versus the number of iterations n.

2. The mean-square-deviation (MSD) learning curve, which is based on ensemble averaging of the squared weight-error deviation; this curve is a plot of

D(n) = E[||epsilon(n)||^2]

versus the number of iterations n.

Because both the estimation error e(n) and the weight-error vector epsilon(n) fluctuate with time n, the mean-square error produced by the LMS algorithm depends on the transient statistics of epsilon(n). Writing the estimation error produced by the LMS filter as

e(n) = d(n) - w^H(n) u(n) = e_o(n) - epsilon^H(n) u(n),

where e_o(n) is the estimation error produced by the corresponding Wiener filter, and substituting this decomposition into J(n) = E[|e(n)|^2], we obtain

J(n) = J_min + E[epsilon^H(n) u(n) u^H(n) epsilon(n)] + cross terms.

Two remarks are in order. First, the cross terms have zero expectation: under Assumption II this follows directly from the whiteness of e_o(n), and otherwise it follows, for small mu, from the fact that epsilon_0(n) depends only on past values of the driving force, together with the principle of orthogonality and the Gaussian factoring of Assumption III. Second, as a consequence of Assumption I, the variations of the weight-error vector epsilon_0(n) with time n are slow compared with those of the input vector u(n): the LMS filter acts as a low-pass filter, and the band of lags l = 0, 1, ..., M-1 over which the autocorrelation function r(l) of the input signal u(n) is significant is traversed much faster than the time scale on which epsilon_0(n) evolves.
(587) and (5.88) in Ei, (5.86), we ‘mean-square error produced bythe LMS ltr simpy as He) % Join + te RK Equation (5.88) indicates that, (orallnthe mean-square value ofthe estima in the LMS filter consisis of twocomponents: the minimum m ‘component depending ‘on tne transient tortelation matiix K,(n). Since the the LMS fter produces a iean su square error la We now formally define the etess mean-square error the mean-square etzor J(n) produced by the LMS filter ‘mean-squateertor Jia produced by the corresponding Wiener for I(n) that isin excess ofthe minimum mean the difference betweer mem and the minimun| Denoting the exces reansguareezor By Jan), we write Salt) = (0) ~ J = t[RK(n)] om) Employing the definition in the fst ine of (5.65) and proceeding ia aman sini ‘o that described in Eq, (5,88), we may go on to express Je(n) as. Section 54 Statistical LMS Theory Jasin) = {REL Eo(n) = {RE Qv(njw"(n)Q")} = E{te{RQr(n)v"(mo")} = EXtelv"(nyQ"RQW(m)}} Efte[v(n)Av(n)]} assumption J, we may correspondingly use Eq, (5.71) to approximate the me jeviation of Eq, (5.84) as fev B(n) = Elteo(n)P) 1, we have used the fact that the Euclidean norm ofa vect lavty transformation. Now, Tet An 206 A, een Using Eqs (591) and (5:92), wemay therefore bound the mean-square devi Arann) $ Jet) Anae(01) Equivalently we may write Jaln) forall n of ‘bounded by Jal) ye 21 upper bowrded by J(n) the mean-square deviation decays with an increasing) similar to that ofthe excess mean-square error. It ‘on the convergence behavior of Je,(1). refore suffices to focus attentic Transient Behavior and Convergence Considerations Accorditig tog (581), the exponential factor (1 ~ 44,)" governs the evolu ‘mean of the kth natural mode ofthe LMS filter with time step n.A necessary co for this exponential factor to decay fo ero is to have L<1 may < +1 for ‘which, in turn, imposes the following condi 2m on the step-size parameter: 0hsen w a: 1 a Ws (1) Jo wonnpOsD au ayy uegg r9ye0F8 yusarad gf St 3EW) (P ous <( zavvi ae? 2g me fe soonposd unpo8[e SwWT 34) Tey) suesU 1U9- esas © ‘ord ‘wens vet = (ak eo” vg +S waa te srexa Jog-aGeyuaorod wse yssoidxa-oyArewysn9 st n ka paso; not =2 ue Wad Bulag uowve Buusayy sandepe oe st parnduroo 3 1 lte af ote kor) 7 wt a Mp = (ule spine oon sntsarnbrat oP CC ‘Moy Jo aingeour v sopiaodd yeqy s2youeed sto[uo|suaUup w st yy yURKUISM(pesTEX UL ‘wors) wR Lay Sesto anseoduy ened aq seta asejra orgs po ana ua oTerone es 5m sy Sets (G6) (365) Bassons rene wna 99 10)" ota ENE > 7334 Jo ann spans fo ones a 8 pauIps woUmANpESKN ap SpHONT ou 24} Jo ouaS9Au09 a Jo BURLCOHONS SA PHOAE 9 “0192 01 puss Poou sajquuea wopues ueow-o107 4 jase Jo 2ouonbas & 2001s ‘ani {pain yo st uooiua 2ouadzequo> e yons"rK9AOf{ Uolin(os Z9UDKy aU sts BOHN, ‘sss) wor” abso wo eu se we [wala 2 pouson A ; y SWIM anny am (Pa 4) jo wonrunordy an pue (11's) pur (6¢') sba yo onus hq inuoeambra ye) Sew we om MONH)e ‘aitin Kew 99 2509 Yonge 1) J01¢} enusuodys 2p" senfposuu 24) soiouesed wou & Bupsnponut £9 4 "uoro9s 2) Ut 2949] ans sup Uo ples 29 Ts 200 i a ‘wduseaso0y 7 swore aq wey Jo}ouesed sa-doys wunupeet 24) Uo UonpueD Aussao0d 18H Dou e soxinbex sy S71 9 Jo AimIGeIs “\BuIpioroy ‘pataprsu0d ag o7 a4e4 PIM 10201 Burkeaap yo pane srt 9 teoudinas a uum posed eur 94 es 94 fyaueted zi-d3389 rpolgns st Bip) xs0 wonessp aus vey e104 ane ow Az Kooy. 
Summary of the Small-Step-Size Theory

Average time constant. To formulate useful design rules, we define an average eigenvalue for the underlying correlation matrix R of the tap inputs:

λ_av = (1/M) Σ_{k=1}^{M} λ_k.

Provided that the step-size parameter μ is small, the ensemble-average learning curve of the LMS filter then decays in an essentially deterministic manner, in accordance with the average time constant

τ_mse,av ≈ 1/(2 μ λ_av),

and the misadjustment may be expressed as

M ≈ (μ/2) Σ_{k=1}^{M} λ_k = (μ/2) tr[R] = μ M λ_av / 2 = M / (4 τ_mse,av),

where M on the right-hand side of the last two expressions denotes the filter length. On the basis of these formulas, we may make the following observations (illustrated numerically in the sketch at the end of this passage):

1. The misadjustment increases linearly with the filter length M for a fixed average time constant τ_mse,av.
2. The settling time of the LMS filter is proportional to the average time constant τ_mse,av and therefore inversely proportional to the step-size parameter μ.
3. The misadjustment is directly proportional to the step-size parameter μ, whereas the settling time is inversely proportional to μ. If the step size is reduced so as to reduce the misadjustment, then the settling time of the LMS filter is increased; careful attention must therefore be given to the choice of μ in designing an LMS filter.

Wave Theory of Long LMS Filters

Long LMS filters, with lengths running into the hundreds or thousands of taps, are in widespread use, particularly in acoustic applications. It is therefore informative to consider what happens to the statistical behavior of an LMS filter as the length M grows. For small step sizes, the statistical wave theory developed by Butterweck shows that the solution takes on a wave-like character: the eigenspectrum (i.e., the set of eigenvalues) of the Toeplitz correlation matrix R resolves into a continuum of spatially distributed sinusoidal modes; the spatial weight-error correlations are related to the tap-input correlations; and, in an actual (finite-length) LMS filter, corrections have to be made in the vicinity of the beginning and end of the filter, corrections that can be interpreted in terms of reflected waves, with the size of the "reflection region" (where the reflected waves cannot be neglected) remaining essentially fixed as the filter length grows.

A cautionary note is in order. In deriving the small-step-size theory, we retained only the leading term of the iterative expansion for the weight-error vector. When a large step size is used to accelerate convergence, the contributions of the higher-order terms ε1(n), ε2(n), ... to the statistical analysis of LMS filters become significant and would therefore have to be included; those higher-order terms have a stochastic nature of their own, and neglecting them is then no longer justified.

The small-step-size theory should also be contrasted with the traditional independence theory, which assumes that the tap-input vectors u(1), u(2), ..., u(n) constitute a sequence of statistically independent vectors. That assumption may be justified in certain applications, such as adaptive beamforming, where the snapshots of data received by an array of antennas at different instants of time can indeed be independent of each other.
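Returning to the design rules at the head of this summary, the following small Python check evaluates them for an assumed set of eigenvalues and an assumed step size (neither taken from the text):

```python
import numpy as np

# Illustrative eigenvalues of R and step size; stability needs 0 < mu < 2/lambda_max.
lam = np.array([0.1, 0.4, 0.7, 1.8])
mu = 0.05

lam_av = lam.mean()                      # average eigenvalue
tau_mse_av = 1 / (2 * mu * lam_av)       # average time constant of the MSE curve
misadj = 0.5 * mu * lam.sum()            # misadjustment ~ (mu/2) tr[R]

print(f"lambda_av     = {lam_av:.3f}")
print(f"tau_mse,av    ~ {tau_mse_av:.1f} iterations")
print(f"misadjustment ~ {100 * misadj:.1f} percent")
print(f"M/(4 tau)     = {len(lam) / (4 * tau_mse_av):.4f}")  # same value, by the identity above
```

The last line confirms numerically that the two expressions for the misadjustment agree, exhibiting the trade-off between settling time and misadjustment.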
In temporal filtering, however, the independence assumption is violated in a fundamental way: the tap-input vector is produced by a sliding window, so that, with the arrival of each new sample, the newest sample enters the delay line and the oldest sample is discarded to make room for it. Any two successive tap-input vectors therefore share all but one of their entries and are strongly dependent, as are the weight deviations computed by the LMS filter in correspondence with them. Even so, the independence theory leads to conclusions about the transient and steady-state responses of LMS filters that are similar to the conclusions based on the small-step-size theory discussed in this section, which is better grounded in reality and just as easy to apply. Noteworthy is the fact that both of these results are also found with the aid of the independence assumption (Douglas & Pan, 1995); in the statistical wave theory developed by Butterweck, no such assumption is required.

The fluctuations superimposed on the exponential decay of the weight errors depend on the form of the autocorrelation function r(l) of the input; in terms of r(l), the convergence condition on the kth natural mode may be restated in an equivalent form involving the spectral content of the input.

5.5 COMPARISON OF THE LMS ALGORITHM WITH THE STEEPEST-DESCENT ALGORITHM

The method of steepest descent computes the exact gradient of the error-performance surface at each iteration and therefore follows a well-defined deterministic trajectory toward the Wiener solution, with a mean-square error that converges exactly to J_min. The LMS algorithm, by contrast, relies on a noisy instantaneous estimate of the gradient because it uses instantaneous estimates of R and p at each iteration of the algorithm; a single realization of the tap-weight estimate ŵ(n) consequently moves about the steepest-descent trajectory in a random fashion, and the final value J(∞) exceeds J_min by the amount of the excess mean-square error. Another basic difference between the steepest-descent and LMS algorithms lies in their learning curves: the learning curve of the steepest-descent algorithm consists of a sum of decaying exponentials, whereas that of a single LMS realization consists of noisy decaying exponentials, the noise becoming smaller as the step-size parameter is reduced. In an experimental study of LMS filters, this noisy behavior also means that the onset of divergence is not easily detected in a single computer run: a filter may behave well for a long period of time and then suddenly become unstable, which is one reason why learning curves are determined by ensemble-averaging operations over many independent trials.

5.6 COMPUTER EXPERIMENT ON ADAPTIVE PREDICTION

Consider an autoregressive (AR) process u(n) of order one, described by the difference equation

u(n) = -a u(n - 1) + v(n),   (5.121)

where a is the (scalar) parameter of the process and v(n) is a zero-mean white-noise process of variance σ_v^2. To estimate the parameter a, we use an adaptive predictor of order one, whose single tap weight is adapted by means of the LMS rule

ŵ(n + 1) = ŵ(n) + μ u(n - 1) f(n),

where f(n) = u(n) - ŵ(n) u(n - 1) is the prediction error. Figure 5.14 shows plots of ŵ(n) versus the number of iterations for single realizations of the experiment under two sets of conditions: a single realization of ŵ(n) follows a noisy exponential curve, whereas a much smoother trajectory is obtained by averaging ŵ(n) over 100 independent trials of the experiment, which corresponds to an ensemble-averaging operation on the learning curve of the figure. (A sketch reproducing this ensemble-averaging experiment is given next.)

FIGURE 5.14 Transient behavior of the adaptive first-order predictor: tap weight ŵ(n) versus number of iterations.
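The following minimal Python sketch reproduces the spirit of this experiment. The process parameter a, the noise variance, the step size, and the trial count are illustrative assumptions rather than the values used in the text; for the AR process of Eq. (5.121), the Wiener solution for the predictor's tap weight is -a, which the ensemble-averaged estimate approaches.

```python
import numpy as np

rng = np.random.default_rng(1)
a = 0.9                                    # process parameter (assumed value)
sigma_v = np.sqrt(1 - a * a)               # chosen so that var[u(n)] = 1
mu, N, trials = 0.05, 500, 100

w_avg = np.zeros(N)                        # ensemble-averaged tap weight
for _ in range(trials):
    u_prev, w = 0.0, 0.0
    for n in range(N):
        u = -a * u_prev + sigma_v * rng.standard_normal()  # AR(1) model, Eq. (5.121)
        f = u - w * u_prev                 # prediction error f(n)
        w += mu * u_prev * f               # one-tap LMS update
        w_avg[n] += w
        u_prev = u
w_avg /= trials

print(f"final averaged tap weight: {w_avg[-1]:+.3f} (Wiener solution: {-a:+.1f})")
```

A single trial of this loop traces out a noisy exponential curve; only the average over the 100 trials exposes the smooth underlying transient.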
Figure 5.16 shows the results of the experiment performed under the previously mentioned condition 1 with a varying step-size parameter μ; specifically, the values used for μ are 0.01, 0.05, and 0.1, and the ensemble averaging was again performed over 100 independent trials of the experiment. From the figure, we observe the following:

- As the step-size parameter μ is reduced, the rate of convergence of the LMS algorithm is correspondingly decreased.
- A reduction in the step-size parameter μ also has the effect of reducing the variation in the experimentally computed learning curve.

Comparison of Experimental Results with Small-Step-Size Theory

For the AR process of order one (i.e., M = 1) described in Eq. (5.121), the "correlation matrix" has an eigenspectrum that consists of a single eigenvalue, equal to the variance of u(n), with an associated scalar eigenvector, and the Wiener solution for the tap weight of the predictor equals -a. For the theoretical curves, the initial condition is ŵ(0) = 0. To compare theory with experiment, we have plotted pairs of curves of the squared prediction error versus n, with each pair referring to a different value of the parameter a: one curve of each pair corresponds to experimentally derived results obtained by ensemble averaging over 100 independent trials (that is, by averaging the squared value of the prediction error over the trials), and the other to the small-step-size theory. The plots show very good agreement between theory and experiment for the learning curve; the choice of the step-size parameter has a profound effect on that agreement and on the convergence of the LMS predictor, as is explored in Problem 23.

5.7 COMPUTER EXPERIMENT ON ADAPTIVE EQUALIZATION

In this computer experiment, we study the use of the LMS algorithm for the adaptive equalization of a linear dispersive channel that produces (unknown) distortion, using the block diagram of the system shown in the accompanying figure. Random-number generator 1 provides the test signal x_n used for probing the channel, whereas random-number generator 2 serves as the source of additive white noise that corrupts the channel output. The impulse response of the channel is described by the raised cosine

h(n) = (1/2)[1 + cos(2π(n - 2)/W)],   n = 1, 2, 3,   (5.128)

and h(n) = 0 otherwise, where the parameter W controls the amount of amplitude distortion produced by the channel. Accordingly, making use of Eqs. (5.128) and (5.129), the correlation function of the tap inputs of the equalizer has the values

r(0) = h_1^2 + h_2^2 + h_3^2 + σ_v^2,
r(1) = h_1 h_2 + h_2 h_3,
r(2) = h_1 h_3,

and r(l) = 0 for l > 2, where h_1, h_2, and h_3 are determined by the value assigned to W, and σ_v^2 is the variance of the additive noise.

Experiment: Effect of Eigenvalue Spread. For the first part of the experiment, the step-size parameter is held fixed and the parameter W is varied so as to vary the eigenvalue spread χ(R) of the correlation matrix of the tap inputs. For each eigenvalue spread, an approximate ensemble-averaged learning curve of the adaptive equalizer is obtained by averaging over independent trials of the experiment. (A sketch for computing χ(R) from W follows.)
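The dependence of the eigenvalue spread on W is easy to compute. The following Python sketch builds the Toeplitz correlation matrix of the equalizer input from the values of r(0), r(1), and r(2) given above and evaluates χ(R) = λ_max/λ_min; the equalizer length M = 11 and the noise variance 0.001 are assumptions chosen for illustration.

```python
import numpy as np

def eigenvalue_spread(W, M=11, noise_var=0.001):
    """chi(R) for the 3-tap raised-cosine channel h(n), n = 1, 2, 3, of Eq. (5.128)."""
    h = np.array([0.5 * (1 + np.cos(2 * np.pi * (n - 2) / W)) for n in (1, 2, 3)])
    r = np.zeros(M)                        # autocorrelation of the equalizer input
    r[0] = h @ h + noise_var
    r[1] = h[0] * h[1] + h[1] * h[2]
    r[2] = h[0] * h[2]
    R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])  # Toeplitz
    lam = np.linalg.eigvalsh(R)
    return lam.max() / lam.min()

for W in (2.9, 3.1, 3.3, 3.5):
    print(W, round(eigenvalue_spread(W), 4))
```

Increasing W increases h_1 and h_3 relative to h_2 and hence increases the eigenvalue spread, which is reflected directly in the convergence behavior of the equalizer.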
Directionality of Convergence

The convergence behavior of the LMS algorithm depends not only on the eigenvalue spread χ(R) but also on the direction, in weight space, along which the convergence of the algorithm takes place; indeed, it is possible for the convergence to be accelerated by an increase in the eigenvalue spread χ(R). These two aspects of the directionality of convergence are illustrated in the following example.

Consider a two-tap transversal filter whose input u(n) is a stochastic process consisting of two sinusoids,

u(n) = A_1 cos(ω_1 n) + A_2 cos(ω_2 n),

where A_1 and A_2 are their respective amplitudes. The correlation matrix of the tap inputs is the two-by-two matrix

R = [ r(0)  r(1) ; r(1)  r(0) ],

with

r(0) = (A_1^2 + A_2^2)/2  and  r(1) = (A_1^2 cos ω_1 + A_2^2 cos ω_2)/2.

The eigenvalues and associated eigenvectors of this matrix are as follows:

λ_1 = (A_1^2/2)(1 + cos ω_1) + (A_2^2/2)(1 + cos ω_2),   q_1 = [1, 1]^T;
λ_2 = (A_1^2/2)(1 - cos ω_1) + (A_2^2/2)(1 - cos ω_2),   q_2 = [1, -1]^T.

To study the convergence behavior of the LMS algorithm in this setting, we use the following specifications:

- step-size parameter μ = 0.01;
- initial condition ŵ(0) = [0, 0]^T;
- true tap-weight vector of the transversal filter w = q_2 for Case 1 (minimum eigenfilter) and w = q_1 for Case 2 (maximum eigenfilter);
- two sets of tap inputs, one with an eigenvalue spread χ(R) = 29 and the other, of increased eccentricity, with an eigenvalue spread χ(R) = 129.

There are thus four distinct combinations to be considered, which we group under the two cases. In Case 1 (minimum eigenfilter), the convergence of the LMS algorithm is along a "slow" trajectory, traversing only about halfway to the true parameterization of the filter in the allotted number of iterations, starting from ŵ(0) = [0, 0]^T; in Case 2 (maximum eigenfilter), we find that the convergence along the corresponding trajectory is faster, even for the input with the eigenvalue spread of 129, compared with 29 for the other input. Thus, in light of the results presented in this example, the directionality of convergence of the LMS algorithm may be exploited by choosing a suitable value of the initial condition ŵ(0). That approach, of course, assumes prior knowledge of the environment in which the LMS algorithm is operating; in such a scenario, the LMS algorithm performs essentially the sole role of fine tuning.

5.10 ROBUSTNESS OF THE LMS FILTER: H∞ CRITERION

The derivation of the LMS filter presented in Section 5.2 used the method of steepest descent as the basis for deriving the adaptive transversal filter. However, once instantaneous estimates of the correlation matrix R and the cross-correlation vector p are invoked, the optimality properties of least-mean-square estimation are destroyed. If, then, a "single" realization of the LMS filter is not optimum in the least-mean-square sense, what is the actual optimization criterion that governs it? To answer this fundamental question, we address two related issues; in focusing on a single realization of the LMS filter, we avoid the invocation of statistical assumptions and work instead with the weight-error vector and the uncorrupted (undisturbed) estimation error.

To deal with the first issue, suppose that we have a set of observable data that fits into the multiple regression model

d(n) = w^H u(n) + v(n),   (5.138)

where w is an unknown parameter vector and v(n) is a disturbance. The model of Eq. (5.138) is of the same mathematical form as that of Eq. (5.6), but a different notation is used for the parameter vector and noise so as to emphasize that the viewpoint here is deterministic and therefore different from the stochastic approach adopted earlier. Any tail of the impulse response beyond M taps is ignored in order to accommodate the use of an FIR model as in Eq. (5.138); other disturbances originate from unknown sources. We may view the H∞ criterion in a worst-case sense: Nature acts as an adversary, maximizing the energy gain. Since no statistical assumptions are made, an estimator designed to this criterion has to account for all conceivable disturbances and may therefore be "overconservative."

We may now formulate the optimal estimation problem as follows: find a causal estimator that minimizes the H∞ norm of the transfer operator T that maps the disturbances to the estimation errors. In what follows, we show that the standard LMS algorithm is just such an estimator, the pertinent estimation error being the undisturbed estimation error

ξ_u(n) = ε^H(n) u(n),   (5.140)

where

ε(n) = w - ŵ(n)   (5.139)

is the weight-error vector with respect to the unknown parameter vector w; this definition is different from that of Eq. (5.5). The estimation error ξ_u(n), signified by the subscript u, is used here to distinguish it from the estimation error e(n) of Eq. (5.7): specifically, ξ_u(n) compares the filter's response ŵ^H(n)u(n) with the "undisturbed" response w^H u(n) of the multiple regression model of Eq. (5.138), rather than with the desired response d(n). From the defining equations (5.7) and (5.140), we readily find that the two errors are related by

ξ_u(n) = e(n) - v(n).   (5.141)

Typically, the initial value ŵ(0) is different from w. Hence, there are two kinds of disturbance to be considered in evaluating the deterministic estimation strategy:

- the initial weight-error vector, ε(0) = w - ŵ(0); and
- the disturbance v(n) in the multiple regression model.

Let T denote the transfer operator that maps these disturbances at the input of the recursive estimation strategy to the estimation errors at the output; note that T is a function of the estimation strategy used to construct ŵ(n). We may then define the energy gain of the estimator as the ratio of the error energy at the output to the total disturbance energy at the input.
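As a concrete, purely illustrative rendering of this definition, the Python sketch below runs the standard LMS recursion on data generated by the regression model of Eq. (5.138) and evaluates the ratio of undisturbed-error energy to disturbance energy for one particular disturbance sequence. The weighting of the initial weight-error energy by 1/μ follows the usual H∞ formulation; all numerical values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, mu = 3, 500, 0.05
w = rng.standard_normal(M)               # unknown parameter vector of the model
w_hat = np.zeros(M)                      # filter starts away from w: eps(0) = w

num = 0.0                                # undisturbed-error energy, sum of xi_u(n)^2
den = np.dot(w - w_hat, w - w_hat) / mu  # disturbance energy: ||eps(0)||^2 / mu + ...
for n in range(N):
    u = rng.standard_normal(M)
    v = 0.1 * rng.standard_normal()      # one admissible disturbance sequence
    d = np.dot(w, u) + v                 # multiple regression model, Eq. (5.138)
    xi_u = np.dot(w - w_hat, u)          # undisturbed error, Eq. (5.140)
    e = d - np.dot(w_hat, u)             # ordinary error: e = xi_u + v, Eq. (5.141)
    w_hat = w_hat + mu * u * e           # standard LMS update
    num += xi_u ** 2
    den += v ** 2

print("energy gain for this disturbance:", num / den)
```

Any single disturbance sequence of this kind only lower-bounds the worst-case (maximum) energy gain considered next.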
Clearly, this ratio depends on the particular choice of input disturbances. To remove this dependence, we consider the largest energy gain over all conceivable disturbance sequences; in so doing, we will have defined the H∞ norm of the transfer operator T.*

* The H∞ approach to the LMS filter was first discussed in Hassibi et al. (1993, 1996); it is also briefly discussed in Sayed and Kailath (1994) and Sayed and Rupp (1997). The arguments presented in Section 5.10 apply to the standard LMS filter; they can be generalized in various ways (e.g., to the normalized LMS filter considered in Chapter 6). The point to note here is that the H∞-optimal solution is not unique, and the LMS filter is the so-called central solution in H∞ theory.

Accordingly, we define the maximum energy gain of the LMS filter with step-size parameter μ as

γ^2(μ) = sup (error energy)/(disturbance energy),

where the supremum is taken over all disturbances of finite energy: the denominator of the ratio is the total disturbance energy, made up of the weighted initial weight-error energy and the energy of v(n), and the numerator is the energy of the undisturbed estimation error. An application of the Cauchy-Schwarz inequality to the weight-update equation, assuming a causal estimator and a suitably bounded step-size parameter, yields an upper bound of unity on γ^2(μ). Conversely, for any positive number less than unity, one may pick an integer N and a disturbance sequence such that, using these choices in Eq. (5.148) and cancelling common terms, the energy gain exceeds that number. These two statements together imply that the LMS filter attains the smallest possible maximum energy gain: in the worst-case (H∞) sense, the LMS filter is robust, in that it will not change its estimate of the unknown weight vector w unduly when confronted with disturbances of finite energy. This observation is confirmed by practical experience; notably, an adaptive equalizer based on the LMS algorithm was found to be robust with respect to disturbances on a telephone channel.

UPPER BOUNDS ON THE STEP-SIZE PARAMETER FOR DIFFERENT SCENARIOS

In a comprehensive treatment, upper bounds on the step-size parameter μ required for this deterministic robustness result can be stated for different operating scenarios; roughly speaking, μ must not exceed the reciprocal of the largest value attained by the squared Euclidean norm of the tap-input vector.

The behavior of long LMS filters can also be characterized in the transform domain: a result of this kind (Theorem 2) states, in effect, that the steady-state mean-square value of the error process equals that produced by an equivalent linear discrete-time system with x(n) providing the input, the frequency response H(e^{jω}) of that system being given by Eq. (5.162).

FIGURE 5.30 Linear discrete-time approximation for the long LMS filter.

In summary, the LMS filter is simple to implement, converges at a rate governed by the step-size parameter and the eigenstructure of the correlation matrix R, produces an excess mean-square error proportional to the step size, and is robust in the H∞ sense.

CHAPTER 6
Normalized Least-Mean-Square Adaptive Filters

This chapter develops an adaptive filter in which the adjustment applied to the tap-weight vector of the adaptive filter is normalized, the normalization arising naturally from a constraint imposed on the updated weight vector; the chapter may be viewed as a generalization of the discussion of the LMS filter.

6.1 NORMALIZED LMS FILTER AS THE SOLUTION TO A CONSTRAINED OPTIMIZATION PROBLEM

The normalized LMS filter may be formulated as the solution to the following constrained optimization problem: minimize the squared Euclidean norm of the change in the tap-weight vector,

δŵ(n + 1) = ŵ(n + 1) - ŵ(n),

subject to the constraint

ŵ^H(n + 1) u(n) = d(n).
To solve this constrained optimization problem, we use the method of Lagrange multipliers, proceeding in three steps.

Step 1. Form the real-valued cost function

J(n) = ||δŵ(n + 1)||^2 + Re[λ*(d(n) - ŵ^H(n + 1) u(n))],

where λ is the complex-valued Lagrange multiplier.

Step 2. Differentiate J(n) with respect to ŵ(n + 1), following the rules described in Appendix B for differentiating a real cost function with respect to a complex weight vector. Setting the result equal to zero and solving for the optimum value of ŵ(n + 1), we obtain

ŵ(n + 1) = ŵ(n) + (1/2) λ u(n).

Step 3. Combine the results of steps 1 and 2 by substituting the optimum update into the constraint. With the estimation error e(n) = d(n) - ŵ^H(n) u(n), this yields

λ = 2 e*(n)/||u(n)||^2,

so that the optimal incremental change is

δŵ(n + 1) = (1/||u(n)||^2) u(n) e*(n).

To exercise control over the change in the tap-weight vector from one iteration to the next without changing its direction, we introduce a positive real scaling factor μ̃ and write the update as

ŵ(n + 1) = ŵ(n) + (μ̃/||u(n)||^2) u(n) e*(n).

However, the normalized LMS filter so defined introduces a problem of its own: when the tap-input vector u(n) is small, numerical difficulties may arise because then we have to divide by a small value for the squared norm ||u(n)||^2. To overcome this problem, we modify the recursion slightly to produce

ŵ(n + 1) = ŵ(n) + (μ̃/(δ + ||u(n)||^2)) u(n) e*(n),   δ > 0;

for δ = 0, this reduces to the term given in the previous equation.

6.2 STABILITY OF THE NORMALIZED LMS FILTER

The normalized LMS filter was derived by minimizing the squared norm of the weight change from iteration n to iteration n + 1, subject to a constraint imposed on the updated tap-weight vector ŵ(n + 1). In light of this idea, it is logical that we base the stability analysis of the normalized LMS filter on the mean-square deviation [see Eq. (5.84)]

D(n) = E[||ε(n)||^2].

Writing the weight-error vector as ε(n) = w - ŵ(n) for a regression model with parameter vector w, subtracting the update equation from w, squaring, and then taking expectations, we obtain the difference D(n + 1) - D(n), whose sign determines stability: the mean-square deviation decreases monotonically provided that the normalized step size μ̃ lies between 0 and 2, the fastest decrease being obtained at an optimal intermediate value of μ̃.
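A compact real-valued Python sketch of the regularized update follows; the values of μ̃ and δ, the toy system, and the noise level are illustrative assumptions (a complex-valued implementation would use conjugates, as in the equations above).

```python
import numpy as np

def nlms_update(w_hat, u, d, mu_tilde=0.5, delta=1e-6):
    """One iteration of the normalized LMS recursion with regularization delta."""
    e = d - np.dot(w_hat, u)                            # estimation error e(n)
    w_next = w_hat + (mu_tilde / (delta + np.dot(u, u))) * u * e
    return w_next, e

# Usage on a toy identification problem (all values assumed for illustration):
rng = np.random.default_rng(3)
w_true = np.array([0.8, -0.3, 0.1])
w_hat = np.zeros(3)
for n in range(1000):
    u = rng.standard_normal(3)
    d = np.dot(w_true, u) + 0.01 * rng.standard_normal()
    w_hat, _ = nlms_update(w_hat, u, d)
print(np.round(w_hat, 3))                               # close to w_true
```

Because the adjustment is divided by δ + ||u(n)||^2, the size of the update is insensitive to the scale of the input, which is precisely the normalization property that motivates the algorithm.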
6.3 STEP-SIZE CONTROL FOR ACOUSTIC ECHO CANCELLATION

Almost every telephone set is connected to its nearest central office by a two-wire line, called the subscriber loop, which serves the need for communication in either direction; for longer circuits, a separate path is required for each direction of transmission, and provision has to be made for connecting the two-wire circuit to a four-wire circuit. The connection is accomplished by means of a hybrid, and unavoidable impedance mismatches there give rise to echoes (Sondhi & Berkley, 1980; Breining et al., 1999). The problem is compounded in hands-free telephones and teleconferencing systems, where a loudspeaker and a microphone are placed in the same acoustic environment: acoustic coupling from the loudspeaker to the microphone means that users hear their own speech delayed by the round-trip time of the system, and, if the loop gain becomes too large, the acoustic circuit may become unstable and produce howling. To overcome these annoying effects, the common practice is to use an adaptive echo canceller, which adaptively synthesizes a replica of the echo and subtracts it from the echo-corrupted microphone signal, denoted by d(n).

Several factors are responsible for the difficulty of controlling the step size of the canceller, all of them influencing the operation through the disturbance v of an acoustic echo canceller (Breining et al., 1999):

- The speech signal produced by the local speaker leads to a large disturbance at the adaptive filter.
- When both the local and the far-end speakers talk simultaneously (double-talk), adaptation with a large step size is dangerous; however, the use of a permanently small step size slows the adaptation unacceptably.
- In the likely case of a strong local disturbance, continuing to adapt rapidly may cause the adaptive filter to become unstable.
- When the local disturbance is small, fast adaptation is both possible and desirable.

Let μ(n) denote a nonzero step size and μ_opt(n) the optimal step size for iteration n. Ideally, the computed value of μ(n) should track μ_opt(n); since μ_opt(n) is unknown, the scheme described in Mader et al. (2000) gets around the problem by using an approximation to the optimal step size instead. A scheme for doing so is described next. As remarked earlier, Eq. (6.28) provides a good approximation for speech inputs; according to that equation, the estimation of the optimal step size involves two separate estimation problems, namely, estimation of the mean-square deviation D(n) and estimation of the power of the error signal e(n).

For the power estimation, we may formulate a first-order recursive procedure applied to a signal denoted by x(n):

P(n) = γ P(n - 1) + (1 - γ) |x(n)|^2,

where P(n) is the power estimate at time step n and γ is a smoothing constant whose typical values lie close to, but below, unity. Such estimators are applied to both the error signal e(n) and the input signal u(n), whose average powers vary slowly enough for the recursion to track them. Estimating the unknown portion of the mean-square deviation D(n) calls for more detailed consideration; one practical method inserts artificial delay coefficients into the model of the loudspeaker-enclosure-microphone path and uses those leading taps, which should ideally converge to zero, to infer the current misalignment of the filter.

CHAPTER 9
Recursive Least-Squares Adaptive Filters

In this chapter, we extend the use of the method of least squares to the design of adaptive transversal filters: given the least-squares estimate of the tap-weight vector of the filter at iteration n - 1, the updated estimate of that vector is computed at iteration n upon the arrival of new data. We begin by reviewing some basic relations in the method of least squares; then, by exploiting a relation in matrix algebra known as the matrix inversion lemma, we develop the recursive least-squares (RLS) algorithm. An important feature of the RLS algorithm is that its rate of convergence is typically an order of magnitude faster than that of the simple LMS filter; this improvement in performance, however, is achieved at the expense of an increase in the computational complexity of the RLS filter.

9.1 SOME PRELIMINARIES

In recursive implementations of the method of least squares, we start the computation with prescribed initial conditions and use the information contained in new data samples to update the old estimates. We therefore find that the length of the observable data is variable. Accordingly, we express the cost function to be minimized as E(n), where n is the variable length of the observable data. Also, it is customary to introduce a weighting factor into the definition of E(n). We thus write

E(n) = Σ_{i=1}^{n} β(n, i) |e(i)|^2,

where e(i) is the difference between the desired response d(i) and the output produced by the filter with its tap weights held at their current values, and the weighting factor satisfies 0 < β(n, i) ≤ 1 for i = 1, 2, ..., n. A commonly used form is the exponential weighting factor, or forgetting factor,

β(n, i) = λ^{n-i},   i = 1, 2, ..., n,

where λ is a positive constant close to, but less than, one; in contrast with the ordinary method of least squares, recent errors are thereby emphasized over old ones. The inverse of 1 - λ is, roughly speaking, a measure of the memory of the algorithm.
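The memory interpretation of 1/(1 - λ) is easy to verify numerically. The following short Python sketch (illustrative values only) sums the weights β(n, i) = λ^{n-i} and also shows that, for a fixed error sequence, the weighted cost can be accumulated recursively one sample at a time; in the actual RLS algorithm the errors are of course recomputed as the weights adapt.

```python
import numpy as np

lam, n = 0.99, 1000
i = np.arange(1, n + 1)
beta = lam ** (n - i)                     # exponential weighting factor beta(n, i)
print(beta.sum(), 1 / (1 - lam))          # ~ 100 samples of "effective memory"

rng = np.random.default_rng(8)
e = rng.standard_normal(n)                # a fixed error sequence
E = 0.0
for err in e:
    E = lam * E + abs(err) ** 2           # E(n) = lam * E(n-1) + |e(n)|^2
print(E, (beta * np.abs(e) ** 2).sum())   # the two computations agree
```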
The special case λ = 1 corresponds to infinite memory. The cost function is also commonly augmented by a regularizing term, so that

E(n) = Σ_{i=1}^{n} λ^{n-i} |e(i)|^2 + δ λ^n ||w(n)||^2,

which is the sum of weighted error squares plus a term that depends on the squared norm of the tap-weight vector; the regularizing term smooths the otherwise ill-posed least-squares problem in the early stages of adaptation, and its influence is forgotten exponentially with time. Two reasons are commonly cited for this design of the cost function: the exponential weighting enables the algorithm to track statistical variations of the input-output data, and the regularization stabilizes the solution when too few data have been observed.

Minimizing the cost function E(n) with respect to the tap-weight vector leads to the normal equations, written in matrix form as

Φ(n) ŵ(n) = z(n),

where Φ(n) is the (regularized) time-average correlation matrix of the tap inputs,

Φ(n) = Σ_{i=1}^{n} λ^{n-i} u(i) u^H(i) + δ λ^n I,

and z(n) is the time-average cross-correlation vector between the tap inputs and the desired response,

z(n) = Σ_{i=1}^{n} λ^{n-i} u(i) d*(i).

Isolating the terms corresponding to i = n, these definitions yield the recursion

Φ(n) = λ Φ(n - 1) + u(n) u^H(n)

for updating the value of the correlation matrix of the tap inputs, where Φ(n - 1) is the "old" value and the matrix product u(n) u^H(n) plays the role of a correction term, and, similarly,

z(n) = λ z(n - 1) + u(n) d*(n).

To compute the least-squares estimate ŵ(n) from the normal equations, we have to determine the inverse of the correlation matrix Φ(n), a computationally demanding operation, particularly if the number of taps M is high. We would therefore like to work recursively on the inverse itself, which we may do with the aid of a basic result in matrix algebra.

9.2 THE MATRIX INVERSION LEMMA

Let A and B be two positive-definite M-by-M matrices related by

A = B^{-1} + C D^{-1} C^H,

where D is a positive-definite N-by-N matrix and C is an M-by-N matrix. According to the matrix inversion lemma, we may express the inverse of the matrix A as

A^{-1} = B - B C (D + C^H B C)^{-1} C^H B.

In effect, the lemma states that, if we are given a matrix A in the first form, its inverse is described by the second; in part of the literature, the result is known as Woodbury's identity. The exact origin of the matrix inversion lemma is not known: Householder (1964) attributes it to Woodbury, and it was brought into the filtering literature by Kailath; among its early applications is the detection of a signal amidst additive white Gaussian noise (Brooks & Reed). (See also Problem 18 of this chapter.)

The matrix inversion lemma can be applied to the recursion for Φ(n) by making the identifications A = Φ(n), B^{-1} = λΦ(n - 1), C = u(n), and D = 1. Defining P(n) = Φ^{-1}(n), the gain vector is

k(n) = P(n) u(n);

in other words, the gain vector k(n) is the tap-input vector u(n) transformed by the inverse of the correlation matrix Φ(n).

Update for the Tap-Weight Vector

To develop a recursive equation for updating the least-squares estimate ŵ(n) of the tap-weight vector, we use Eqs. (9.8), (9.13), and (9.17), obtaining

ŵ(n) = ŵ(n - 1) + k(n) ξ*(n),

where

ξ(n) = d(n) - ŵ^H(n - 1) u(n)

is the a priori estimation error, whose complex conjugate multiplies the gain vector in the update. The a priori error uses the old tap-weight estimate and is, in general, different from the a posteriori estimation error e(n) = d(n) - ŵ^H(n) u(n). Table 9.1 summarizes the resulting RLS algorithm: initialize by setting ŵ(0) = 0 and P(0) = δ^{-1} I and then, for each instant of time n = 1, 2, ..., compute

π(n) = P(n - 1) u(n),
k(n) = π(n) / (λ + u^H(n) π(n)),
ξ(n) = d(n) - ŵ^H(n - 1) u(n),
ŵ(n) = ŵ(n - 1) + k(n) ξ*(n),
P(n) = λ^{-1} P(n - 1) - λ^{-1} k(n) u^H(n) P(n - 1).

Figure 9.2(b) depicts a signal-flow-graph representation of the RLS algorithm that expands on the block diagram of Fig. 9.2(a). Note that, in the summary presented in Table 9.1, the computation of the gain vector k(n) proceeds in two stages:

- First, an intermediate quantity, denoted by π(n), is computed.
- Second, π(n) is used to compute k(n).

This two-stage computation of k(n) is preferred over the direct computation from its defining equation because it is less prone to the accumulation of numerical error (a discussion of finite-precision effects is deferred to Chapter 13).

To initialize the RLS filter, we need to specify two quantities:

1. The initial weight vector ŵ(0). The customary practice is to set ŵ(0) = 0.
2. The initial correlation matrix Φ(0). Setting n = 0 in Eq. (9.8), we find that, with the use of prewindowing, we obtain Φ(0) = δI, where δ is the regularization parameter.
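The following Python sketch implements the recursions of Table 9.1 for real-valued data (the complex case would use conjugate transposes); the forgetting factor, the regularization, and the toy identification problem are illustrative assumptions.

```python
import numpy as np

def rls(u_seq, d_seq, M, lam=0.99, delta=0.01):
    """Exponentially weighted RLS in the style of Table 9.1 (real-valued sketch)."""
    w = np.zeros(M)                        # w(0) = 0
    P = np.eye(M) / delta                  # P(0) = delta^{-1} I, i.e., Phi(0) = delta I
    for u, d in zip(u_seq, d_seq):
        pi = P @ u                         # intermediate quantity pi(n)
        k = pi / (lam + u @ pi)            # gain vector k(n)
        xi = d - w @ u                     # a priori estimation error xi(n)
        w = w + k * xi                     # tap-weight update
        P = (P - np.outer(k, pi)) / lam    # update for P(n) = Phi^{-1}(n)
    return w

# Toy usage (assumed data): identify a 3-tap FIR system from noisy observations.
rng = np.random.default_rng(4)
w_true = np.array([1.0, -0.5, 0.25])
U = [rng.standard_normal(3) for _ in range(300)]
D = [w_true @ u + 0.01 * rng.standard_normal() for u in U]
print(np.round(rls(U, D, M=3), 3))
```

Note how δ enters only through P(0); its influence is progressively forgotten as data accumulate, in line with the regularization role discussed next.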
The parameter δ should be assigned a small value for a high signal-to-noise ratio (SNR) and a large value for a low SNR, a prescription that may be justified on regularization grounds. A refined statement of the dependence of the regularization parameter δ on the SNR is given in Moustakides (1997). In particular, Φ(0) is reformulated in terms of a deterministic positive-definite matrix R_0 and the variance σ^2 of a data sample u(n); according to Eqs. (9.31) and (9.32), the regularization parameter δ is then expressed as a power of σ^2 whose exponent, denoted by α, is at the designer's disposal. (In Chapter 14, the step-size parameter μ in the LMS algorithm plays an analogous role; hence the notation of Eq. (9.32).) The parameter α thus provides a handle on the initial value of the correlation matrix Φ(0) in situations in which the SNR varies, and it forms the basis for the following discussion.

9.4 SELECTION OF THE REGULARIZATION PARAMETER

In a detailed study reported in Moustakides (1997), the convergence behavior of the RLS algorithm was evaluated for two variable parameters:

- the signal-to-noise ratio (SNR) of the tap-input data, which is determined by the prevalent operating conditions; and
- the regularization parameter δ, which is under the designer's control.

In terms of the exponent α of the definition introduced in Eq. (9.32), we may distinguish three cases:

1. α > 0, which corresponds to a small initial value Φ(0);
2. 0 > α > -1, which corresponds to a medium initial value Φ(0); and
3. -1 ≥ α, which corresponds to a large initial value Φ(0).

To set the stage for a summary of the findings presented in Moustakides (1997), let F(x) denote a matrix function of x, and let f(x) denote a nonnegative scalar function, where the variable x assumes values in some set A. We may then introduce the definition

F(x) = Θ(f(x)),

meaning that there exist constants c_1 and c_2, independent of the variable x, such that

c_1 f(x) ≤ ||F(x)|| ≤ c_2 f(x)   for all x ∈ A,

where ||F(x)|| is the matrix norm of F(x), which is itself defined by

||F(x)|| = (tr[F^H(x) F(x)])^{1/2}.

The significance of this definition will become apparent presently. The findings may be summarized as follows for situations in which the initialization period matters:

1. High SNR. When the noise level in the tap inputs is low (i.e., the SNR is on the order of 30 dB or more), the RLS algorithm exhibits an exceptionally fast rate of convergence, provided that the correlation matrix is initialized with a small enough norm; this requirement is satisfied by setting α = 1. As the matrix norm of Φ(0) is increased, the convergence behavior of the RLS algorithm deteriorates.

2. Medium SNR. In a medium-SNR environment (i.e., an input SNR on the order of 10 dB), the rate of convergence of the RLS algorithm is less sensitive to the initialization, and a medium initial value of Φ(0) is close to optimal.

As an illustrative application, the RLS algorithm may be applied to a single-weight adaptive noise canceller, supplied with a primary signal consisting of an information-bearing component plus interference and with a reference signal u(n); the application yields a simple set of recursive equations following the same steps as above.

9.7 CONVERGENCE ANALYSIS OF THE RLS ALGORITHM

For the convergence analysis, we make the following assumptions. First, the exponential weighting factor is λ = 1 and the requirement n > M holds, which ensures that the input signal spreads across all the taps of the transversal filter; the approximation invoked in the analysis improves with an increasing number of time steps n. Second (Assumption III), the fluctuations in the weight-error vector ε(n) are slow compared with those of the input signal vector u(n), so that expectations over the two may be taken separately; under this assumption, the weight-error correlation matrix reduces to a form proportional to the inverse of the ensemble-average correlation matrix R.
Convergence of the RLS Algorithm in the Mean Value

Using the normal equations (9.10) for n > M and substituting the multiple regression model for the desired response, we may write

ŵ(n) = w_0 + Φ^{-1}(n) Σ_{i=1}^{n} u(i) e_0*(i),

where w_0 is the parameter vector of the regression model and e_0(i) is the corresponding measurement noise. Invoking the stated assumptions and taking the expectation of both sides, the expectation of the sum vanishes because the noise has zero mean and is independent of the input; hence,

E[ŵ(n)] = w_0   for n > M.

That is, the RLS algorithm is convergent in the mean value, and it attains this unbiased condition as soon as n exceeds the filter length M.

Learning Curve of the RLS Algorithm

The a posteriori error e(n) has a learning curve of the same general shape as that of the a priori error, but a direct graphical comparison between the learning curves of the RLS and LMS algorithms requires a common error measure. We therefore base the computation of the ensemble-average learning curve of the RLS algorithm on the a priori estimation error ξ(n). The analysis leads to a mean-square deviation proportional to Σ_i (1/λ_i), where the λ_i are the eigenvalues of the ensemble-average correlation matrix R. On the basis of Eq. (9.61), we may now make the following two observations for n > M:

1. The mean-square deviation D(n) is magnified by the inverse of the smallest eigenvalue; hence, it is sensitive to the eigenvalue spread of the input data.
2. The mean-square deviation D(n) decays almost linearly with the number of iterations n; hence, the estimate ŵ(n) converges in the norm to the parameter vector w_0 almost linearly with time.

In analyzing the learning curve, there are two types of error to be considered: the a priori error ξ(n) and the a posteriori error e(n). Given the relation between the mean-square values of these two errors, the mean-square value of ξ(n) begins large, equal at the outset to the mean-square value of the desired response d(n), and then decays with increasing n, whereas the mean-square value of e(n) is small from the outset. Substituting Eq. (9.59) into Eq. (9.65) yields the steady-state value of the learning curve; the evaluation involves the expectation E[ε(n - 1) ε^H(n - 1) u(n) u^H(n)], which, under the stated assumptions, reduces to a term involving tr[R^{-1} R]. This step is also justified in Appendix G on complex Wishart distributions: the correlation matrix Φ^{-1}(n) is described by a complex Wishart distribution under the following conditions:

- the input vectors u(i) are independent and identically distributed (i.i.d.); and
- they are drawn from a multivariate Gaussian distribution of zero mean and ensemble-average correlation matrix R.

These two assumptions hold in an array-processing system that operates in a stationary environment, where M is the filter length.

9.8 COMPUTER EXPERIMENT ON ADAPTIVE EQUALIZATION

For this computer experiment, we use the RLS algorithm, with the exponential weighting factor λ = 1, for the adaptive equalization of a linear dispersive channel; the LMS version of this study was presented in Section 5.7. The block diagram of the system is the same as before: two independent random-number generators are used, one, denoted by x(n), for probing the channel and the other serving as the source of additive white noise; the channel impulse response is the raised cosine defined earlier, whose parameter W determines the eigenvalue spread χ(R) of the correlation matrix of the tap inputs of the equalizer. The experiment evaluates the ensemble-average learning curve (the averaged squared a priori error) of the RLS equalizer for four different eigenvalue spreads, with the regularization parameter held fixed.

FIGURE 9.7 Learning curves (ensemble-average squared error) for the RLS algorithm with four different eigenvalue spreads; δ = 0.004 and λ = 1.
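To convey the qualitative outcome of such experiments, the Python sketch below, an illustrative stand-in rather than the book's experiment, ensemble-averages squared a priori errors for RLS and LMS on a common system-identification task; RLS is typically near steady state within about 2M iterations, well before LMS.

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, trials = 5, 300, 100
mu, lam, delta = 0.01, 1.0, 0.01           # assumed values for illustration
J_lms, J_rls = np.zeros(N), np.zeros(N)

for _ in range(trials):
    w_true = rng.standard_normal(M)
    w_l, w_r = np.zeros(M), np.zeros(M)
    P = np.eye(M) / delta
    for n in range(N):
        u = rng.standard_normal(M)
        d = w_true @ u + 0.05 * rng.standard_normal()
        e = d - w_l @ u                    # LMS a priori error
        w_l += mu * u * e
        J_lms[n] += e * e
        xi = d - w_r @ u                   # RLS a priori error
        pi = P @ u
        k = pi / (lam + u @ pi)
        w_r += k * xi
        P = (P - np.outer(k, pi)) / lam
        J_rls[n] += xi * xi

J_lms /= trials
J_rls /= trials
print("J at n = 2M:  LMS", round(J_lms[2 * M], 3), " RLS", round(J_rls[2 * M], 3))
```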
The learning curves so obtained all converge; moreover, unlike the corresponding curves derived in Section 5.7 for the LMS algorithm, they are essentially insensitive to the eigenvalue spread as far as the rate of convergence is concerned. As for the steady-state value of the averaged squared error, we now see that the RLS and LMS algorithms perform in roughly the same manner, with the differences traceable to the assumptions used in deriving the respective theoretical curves and to the nature of the signal involved.

9.9 ROBUSTNESS OF THE RLS FILTER

In this section, we present that part of H∞ theory that bears on the robustness of RLS filters (Hassibi & Kailath, 2001). As in Section 5.10, suppose that the observable data fit the multiple regression model

d(n) = w^H u(n) + v(n),

where w is an unknown parameter vector and v(n) is a disturbance that includes measurement noise and modeling errors. For λ = 1, the weight vector of the RLS filter is recursively computed in accordance with the equations of Table 9.1, and the undisturbed estimation error is defined, as before, in terms of the weight-error vector and the tap-input vector. From the defining relations, the maximum energy gain of the filter is the supremum, over all disturbances of finite energy, of the ratio of the undisturbed estimation error energy to the total disturbance energy, where the undisturbed estimation error energy appears as a sum of squared terms. Applying the elementary inequality for (√a + √b)^2 term by term leads to upper and lower bounds on the maximum energy gain that involve the conditioning of the input data; in contrast with the LMS filter, these bounds grow with the data, so the RLS filter does not, in general, share the minimax robustness of the LMS filter.

The strategy we used to obtain a lower bound on the maximum energy gain in Section 5.10 was to construct a suitable disturbance signal and to compute the corresponding energy gain: the energy gain for any one disturbance signal serves as a lower bound on the maximum (or worst-case) energy gain. We shall adopt a similar strategy now for the RLS filter; we have thereby lower-bounded the energy gain of the RLS filter for a particular disturbance signal. A few remarks on robustness are in order:

1. The bounds for the RLS filter depend on the eigenstructure of the input data, whereas the corresponding bounds for the LMS filter depend on the step-size parameter μ; the robustness of the RLS filter therefore deteriorates as the conditioning of the inputs worsens.
2. The theory summarized herein applies to a stationary environment with the exponential weighting factor λ = 1. The case of λ < 1 is considered in the Problems, where it is shown that the main properties still hold, but the excess error is no longer zero in any event.

CHAPTER 10
Kalman Filters

In this chapter, we revisit the linear optimum filtering problem first studied in Chapter 2. Specifically, we develop the Kalman filter, which applies to nonstationary as well as stationary environments and in which a recursive minimum mean-square estimate of the state of a linear dynamical system is computed. Our interest in the Kalman filter is motivated by the fact that it provides a unifying framework for the derivation of the complete family of recursive least-squares filters:

- standard RLS filters, discussed in Chapter 9;
- square-root RLS filters, to be discussed in Chapter 11; and
- order-recursive RLS filters, to be discussed in Chapter 12.

10.1 RECURSIVE MINIMUM MEAN-SQUARE ESTIMATION FOR SCALAR RANDOM VARIABLES

Suppose we have found the minimum mean-square estimate of a random variable x(n - 1) on the basis of a complete set of observations starting with the first observation at time 1 and extending up to and including time n - 1; the estimate for time before n = 0 is zero. The space spanned by these observations is denoted by Y_{n-1}. Suppose that we now have an additional observation at time n, and the requirement is to compute an updated estimate of the random variable x(n), where Y_n denotes the space spanned by y(1), ..., y(n). We may do this computation by storing the past observations and redoing the whole calculation, but it is far more efficient to proceed recursively, by way of the innovations: there is a one-to-one correspondence between the observed data and the innovations, and the innovations process has the following properties.

Property 1. The innovation α(n) associated with the observation y(n) is orthogonal to the past observations y(1), ..., y(n - 1), as shown by

E[α(n) y*(k)] = 0,   1 ≤ k ≤ n - 1.   (10.4)

This is simply a restatement of the principle of orthogonality.

Property 2. The innovations α(1), α(2), ..., α(n) are orthogonal to each other, as shown by

E[α(n) α*(k)] = 0,   1 ≤ k ≤ n - 1.
Property 3. There is a one-to-one correspondence between the observed data {y(1), ..., y(n)} and the innovations {α(1), ..., α(n)}, in that the one sequence may be obtained from the other by means of a causal and causally invertible filter, without any loss of information; the transformation is therefore reversible. In the course of this transformation, each observation is replaced by the new information it carries. With this material at our disposal, we are now ready to study the more general filtering problem.

10.2 STATEMENT OF THE KALMAN FILTERING PROBLEM

The concept underlying the Kalman filter is the state of a linear dynamical system, denoted by the M-by-1 vector x(n); the observable of the system, portrayed by the N-by-1 vector y(n), is called the observation vector or simply the observation. The signal-flow graph of Fig. 10.2 embodies the following pair of equations:

1. A process equation,

x(n + 1) = F(n + 1, n) x(n) + ν_1(n),

where F(n + 1, n) is a known M-by-M transition matrix relating the states of the system at times n + 1 and n, and the M-by-1 vector ν_1(n) is process noise, modeled as a zero-mean, white-noise process whose correlation matrix is Q_1(n) at zero lag and zero otherwise. Note also that, if the system described in Fig. 10.2 is stationary, then the transition matrix F(n + 1, n) is a constant.

2. A measurement equation, which describes the observation vector as

y(n) = C(n) x(n) + ν_2(n),

where C(n) is a known N-by-M measurement matrix and the N-by-1 vector ν_2(n) is measurement noise, modeled as a zero-mean, white-noise process that is additive with respect to C(n)x(n), the latter depending on the state x(n). It is assumed that the initial state is uncorrelated with both noise processes for n ≥ 0, and that the noise vectors ν_1(n) and ν_2(k) are statistically independent, so that

E[ν_1(n) ν_2^H(k)] = 0   for all n and k.

10.3 THE INNOVATIONS PROCESS

Let ŷ(n | Y_{n-1}) denote the minimum mean-square estimate of the present value y(n) of the observation vector, given all the past observed data up to and including time n - 1. The innovations process associated with y(n) is then defined by

α(n) = y(n) - ŷ(n | Y_{n-1}),   n = 1, 2, ...,

where the N-by-1 vector α(n) represents the new information in the observation y(n). Using the measurement equation and noting that the minimum mean-square estimate of the measurement noise is zero, we have

ŷ(n | Y_{n-1}) = C(n) x̂(n | Y_{n-1}),   (10.21)

and therefore

α(n) = y(n) - C(n) x̂(n | Y_{n-1}) = C(n) ε(n, n - 1) + ν_2(n),

where ε(n, n - 1) = x(n) - x̂(n | Y_{n-1}) is the predicted state-error vector.

Correlation Matrix of the Innovations Process

Using the orthogonality property relating the predicted state-error vector and the measurement noise vector (the predicted state-error vector is orthogonal to ν_2(n)), the correlation matrix of the innovations process is

R(n) = E[α(n) α^H(n)] = C(n) K(n, n - 1) C^H(n) + Q_2(n),

where K(n, n - 1) is the correlation matrix of the predicted state-error vector and Q_2(n) is the correlation matrix of the measurement noise.

10.4 ESTIMATION OF THE STATE USING THE INNOVATIONS PROCESS

Using the innovations process, the minimum mean-square estimate of the state at time n + 1 may be expanded as a linear combination of the innovations:

x̂(n + 1 | Y_n) = Σ_{k=1}^{n} B_k(n) α(k),

where the B_k(n) are M-by-N matrices to be determined. According to the principle of orthogonality, the predicted state-error vector is orthogonal to the innovations process, as shown by

E[ε(n + 1, n) α^H(k)] = 0,   k = 1, ..., n;

moreover, writing E[α(n + 1) α^H(k)] = E{[F(n + 1, n) ... ] α^H(k)} and using the fact that both the state error and the measurement noise at time n + 1 are orthogonal to the past innovations confirms the mutual orthogonality of the innovations process.
Solving for the matrices B_k(n) by invoking these orthogonality conditions leads to the recursive one-step prediction

x̂(n + 1 | Y_n) = F(n + 1, n) x̂(n | Y_{n-1}) + G(n) α(n),

where

G(n) = E[x(n + 1) α^H(n)] R^{-1}(n)

is the Kalman gain. The one-step predictor thus embodies a linear dynamical system driven by the innovation α(n), with a unity negative feedback path around the delayed state estimate. The new M-by-M matrix K(n) introduced in Eq. (10.55) is described by the recursion

K(n) = K(n, n - 1) - F(n, n + 1) G(n) C(n) K(n, n - 1),   (10.56)

where we have used the property

F(n + 1, n) F(n, n + 1) = I,

which follows from the product and inverse rules governing the transition matrix. The significance of the matrix K(n) in Eq. (10.56) will be explained shortly.

10.5 FILTERING

The next signal-processing operation we wish to consider is that of filtering: we wish to compute the filtered estimate x̂(n | Y_n) by using the one-step prediction algorithm described previously. We first note that the state x(n + 1) and the process noise ν_1(n) are independent of each other. Hence, from the state equation, the minimum mean-square estimate of the state at time n + 1, given the data up to and including time n, i.e., given y(1), ..., y(n), is

x̂(n + 1 | Y_n) = F(n + 1, n) x̂(n | Y_n) + ν̂_1(n | Y_n).

Since the noise ν_1(n) is independent of the observed data, the corresponding minimum mean-square estimate ν̂_1(n | Y_n) is zero. Accordingly, the equation simplifies to

x̂(n + 1 | Y_n) = F(n + 1, n) x̂(n | Y_n).

To find the filtered estimate x̂(n | Y_n), we premultiply by the transition matrix F(n, n + 1); by so doing, we obtain

x̂(n | Y_n) = F(n, n + 1) x̂(n + 1 | Y_n).

This equation shows that, knowing the solution to the one-step prediction problem, that is, the minimum mean-square estimate x̂(n + 1 | Y_n), we may determine the corresponding filtered estimate x̂(n | Y_n) simply by multiplying by the transition matrix F(n, n + 1).

FIGURE 10.4 One-step state predictor: given the old estimate and the new observation y(n), the predictor computes the new state estimate.

Estimation Error and Conversion Factor

In a filtering framework, it is natural that we define a filtered estimation error vector in terms of the filtered estimate of the state, as follows:

e(n) = y(n) - C(n) x̂(n | Y_n).

The matrix-valued quantity relating e(n) to the innovation α(n) plays the role of a conversion factor, converting the one-step prediction error into the filtered estimation error.
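Collecting the relations obtained so far (innovations, Kalman gain, state update, the recursion (10.56) for K(n), and the Riccati difference equation K(n + 1, n) = F(n + 1, n) K(n) F^H(n + 1, n) + Q_1(n), which is taken up next), we have the complete one-step prediction algorithm. The following Python sketch is an illustrative implementation on a toy model; all matrices and numerical values are assumptions, and F is taken nonsingular so that F(n, n + 1) = F^{-1} exists.

```python
import numpy as np

def kalman_predict(F, C, Q1, Q2, y_seq, x0, K0):
    """One-step-prediction form of the Kalman filter (notation of this chapter)."""
    x, K = x0, K0                           # x_hat(1|Y_0) and K(1, 0)
    for y in y_seq:
        R = C @ K @ C.T + Q2                # correlation matrix of alpha(n)
        G = F @ K @ C.T @ np.linalg.inv(R)  # Kalman gain G(n)
        alpha = y - C @ x                   # innovation alpha(n)
        x = F @ x + G @ alpha               # x_hat(n+1|Y_n)
        K_f = K - np.linalg.inv(F) @ G @ C @ K   # filtered K(n), cf. Eq. (10.56)
        K = F @ K_f @ F.T + Q1              # Riccati difference equation
        yield x, K

# Toy example (assumed model): 2-state constant-velocity system, position measured.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q1, Q2 = 0.01 * np.eye(2), np.array([[1.0]])
rng = np.random.default_rng(6)
ys = [np.array([n + rng.standard_normal()]) for n in range(50)]
for x, K in kalman_predict(F, C, Q1, Q2, ys, np.zeros(2), np.eye(2)):
    pass
print(np.round(x, 2))                       # predicted state near [50, 1]
```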
To complete the recursion, the predicted state-error correlation matrix is propagated by means of the Riccati difference equation. Substituting the defining relations and recognizing that the error vectors and the noise vectors ν_1(n) and ν_2(n) are mutually uncorrelated, we obtain, for the recursive computation of the predicted state-error correlation matrix,

K(n + 1, n) = F(n + 1, n) K(n) F^H(n + 1, n) + Q_1(n),

where K(n) is the filtered state-error correlation matrix computed from Eq. (10.56). Expanding the right-hand side, the resulting terms may be interpreted individually; in particular, the matrix K(n) provides a statistical description of the error in the filtered estimate x̂(n | Y_n).

Initial Conditions. To operate the one-step prediction algorithm, we need initial values. Assuming that the state vector x(1) has a known mean and correlation matrix, the customary choice is

x̂(1 | Y_0) = E[x(1)],
K(1, 0) = E[(x(1) - E[x(1)])(x(1) - E[x(1)])^H].

This choice for the initial conditions not only is consistent with the available prior knowledge but also has the advantage of yielding an unbiased estimate of the state (see Problem 9). In Table 10.2, we present a summary of the Kalman filter based on one-step prediction, including the sequence in which its variables are computed; the observation y(n) is represented by the vector space Y_n that it helps to span.

10.8 KALMAN FILTER AS THE UNIFYING BASIS FOR RLS FILTERS

As mentioned in the introductory remarks to this chapter, the primary reason for our detailed treatment of Kalman filter theory is that it provides a unifying framework for the derivation of linear adaptive filtering algorithms, in particular, the family of RLS filters. The key question is: Given a Kalman filter or one of its variants, how do we derive the corresponding RLS algorithm?

In the multiple linear regression model underlying RLS, the desired response is written as d(n) = w^H u(n) + e_0(n), where w is the unknown parameter vector. Taking the complex conjugate of this relation and comparing it with the measurement equation of the Kalman filter, we obtain

d*(n) = u^H(n) w + e_0*(n),

which has the same mathematical form as a one-dimensional measurement equation with measurement matrix u^H(n) and measurement noise e_0*(n), the latter assumed to have unit variance. Moreover, the underlying dynamics of the RLS problem are unforced: the process noise is zero, and, with the exponential weighting factor λ (0 < λ ≤ 1) absorbed into the state, the transition "matrix" is the scalar λ^{-1/2}. Hence, using the Kalman filter notations adopted in this chapter, we may postulate the following pair of equations as a state-space model for the RLS problem:

x(n + 1) = λ^{-1/2} x(n),
y(n) = u^H(n) x(n) + ν(n),

which is depicted as a multiple linear regression model in Fig. 10.7.

FIGURE 10.7 Multiple linear regression model.

Because the process equation of this model is noiseless and its transition matrix is a scalar multiple of the identity, two simplifications follow:

1. The predicted state-error correlation matrix K(n + 1, n) and the filtered state-error correlation matrix K(n) have a common value, up to a scalar factor determined by λ.
2. Assuming that the measurement noise ν(n) has unit variance, the formula for the Kalman gain takes the form of the gain vector k(n) of the RLS algorithm.

From these observations, we deduce the correspondences between the Kalman variables and the RLS variables: up to scalings by powers of λ^{-1/2}, the state corresponds to the tap-weight vector w, the observation to d*(n), the measurement matrix to u^H(n), the Kalman gain to the RLS gain vector k(n), and the state-error correlation matrix to the matrix P(n) = Φ^{-1}(n).
10.8; which generélizes the multiple sn modelo! ing the inverss-matrix K’'(n Fie. 107 by including 3 dynazical omponent for Clearly ‘he fering protest The invers "Wate-pace mol of fig. 108 mote powerk f 116Fig. 107, Note however that wih the exponent the multiple regression mode! weighting factor A confined tot i For this reason, at i _ypeo iy Aenuensqns pp Ker suc, eaindo Aeayeqaae 23e *us04 paqtosap se" URI 99 suo wonemope pe meeuRAe aq BhoNy ‘OT sai povosau uiiuoaye sy Aeuine ¥-9r sy a4 198 am “(1 — HW) i pu 941 1 mepe bv= (oro 4 ynsos nn Buftdpynea pue (6prOT) Pu (90101 Sba{WooKing (x) SueURD NH oa svauiyuewiey o1saideu) ze ewes 24240 siuewen 601 WonDes 494 Chapter 10. Kalmen Filters Tsay a in Regt ip ee agular matrix and K-? ists Herm TWe'stusré-root implémientation’ 6 2 Kalman fil af the conventional Kalman fi 2d fo the developinient of a mot ‘kndwn a8 the UD-factorization algorithm (Bierman, 1977 ‘The covariance implementation of the : to’serious numerical difficulties that‘are well dacumented inthe litera and a real diagonal m = Ulm) (mW divergence phenomenon: "A.simple way of overcoming the diverge the (noiseless) process eq 3. The’ Variance ‘matrix K(h) is nonnegative forall A. * sdvantage of a standard equ Ubscotaton nyo Chapman, 199; when an the ever-increasing improvements in roots are expensive and awkward to calculate ‘no longer 2s compellingras it used.to be. Accordingly to avoid the divergence of a ‘Keli filter, ve will pursue a detailed discussion of square-root filtering inthis book. (See Chapterzt) : Kin)r + “the Cholesky factorization vas lo dtcusted in Section 37 inthe content o linear peicoe -@uykren au pouonount sossa00ud astow (ero (ero coy s2ypg uewjey papusrd2>uL O01 worDas 188 (ez ‘yy oneur yx0d Hh 1) 8 = wg yfHa(u’o — 1) = (OX (wow +) a = WD ikp © 70 f9POw LDH e UT LOID2A a ‘ordn parapistuon wojqond Susan MEwE 2, \yaLTld NYT G3QN3L 3HL vay uewex 01 121deuD 967 10 Kalman Fite Section 10.10 The Extended Kalman Filter 49% 498 Chapter 10 Kalman Fiters With the foregoing approximate expressions al hand, we may now pro jproximate the nonlinear state equations (10.135) and (10.136). The results of the g pproximation are, respectively, x(n #1) © Fl # Ay ndx(n) + y(n) + a(n) whichis taken to be either (|) oF (11%), depending on which ime Tunetonal is being considered. Once lineat mode is obtained, the standar. filter equation are applied. More explicitly the approximation proceeds in'two stages. je, The following two matrices are constructed: Stage L. The following r onus where we have introduced the two new quantities F(m) = y(n) ~ [C(n, 56 a)) ~ Clnpi(n | Y,-.}} and © and a(n) = Fi (nf %,)) — F(a + 1.m)i(n|,). “That the th entry of Fn + 1, n) is equal to the partial derivative ofthe th xg nent of Fr, x) with respect othe jth component of x. Likewise, the ith entry ey ss equalto the partial derivative of the Mt component of C(x) with respect oy ‘component of x. In the former case, the derivatives are evaluated at #(n|), whisk the latter case the derivatives aré evaluated at X(n|%,.1): The entries ofthe mata F(n + 1,1) and C(n) ae all own (ie, computable), since (11%) and (aa are made available, as described later. "Appling the definitions of Eqs (10137) and (10.138) to our exile, we ieee e model of Eqs. (10.141) and (10.142) is a linear forin as that described in Eqs. (10.132) and (10.133); ions: The extended Kalman (“applying the standard Kalman equations (10.127) through (1 and (10.134) to the preceding linearized model. 
This leads to the one-step prediction

x̂(n + 1 | Y_n) = F(n, x̂(n | Y_n)) = F(n + 1, n) x̂(n | Y_n) + d(n)

and the filtered update

x̂(n | Y_n) = x̂(n | Y_{n-1}) + G_f(n) α(n),

in which the innovations vector is computed from the nonlinear measurement functional as

α(n) = y(n) - C(n, x̂(n | Y_{n-1})).

A signal-flow graph (Fig. 10.9) shows how the one-step prediction is updated. In Table 10.6, we present a summary of the extended Kalman filtering algorithm; the matrices F(n + 1, n) and C(n) appearing there are computed from their respective nonlinear counterparts using Eqs. (10.137) and (10.138). Given a nonlinear state-space model of the form described in Eqs. (10.135) and (10.136), we may thus use this algorithm to compute state estimates recursively. Comparing the equations of the extended Kalman filter summarized herein with those of the standard Kalman filter given in Eqs. (10.126) through (10.131), we see that the only differences between them arise in the computations of the innovations vector α(n) and the predicted state, both of which are evaluated through the nonlinear functionals rather than through matrix products.

SUMMARY AND DISCUSSION

The Kalman filter is a linear, discrete-time, finite-dimensional system whose recursive structure makes it well suited for digital computation, in hardware or software. The key property of the Kalman filter is that it yields the minimum mean-square estimate of the state, given the observations. Kalman filter theory is of profound importance to adaptive filtering for two reasons:

1. The underlying dynamics of recursive least-squares (RLS) filters, characterized by the exponentially weighted cost function, can be described in state-space form; Kalman filter theory therefore provides a unifying framework for the derivation of the whole family of RLS adaptive filtering algorithms.
2. Numerous commentators have said that many of the problems in signal processing and control can be clarified by recognizing the one-to-one correspondences between the variables of the Kalman filter and those of RLS filters, as was done in this chapter.
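As a concluding illustration, the Python sketch below implements one cycle of the two-stage extended Kalman filter just summarized, in a filtered (rather than one-step-prediction) arrangement. The pendulum-like model, the analytically supplied Jacobians, and all numerical values are assumptions made purely for the example.

```python
import numpy as np

def ekf_step(f, c, F_jac, C_jac, x_pred, K_pred, y, Q1, Q2):
    """One cycle of the extended Kalman filter, following the two-stage scheme.

    f(x): nonlinear process function F(n, x);     F_jac(x): its Jacobian, Eq. (10.137)
    c(x): nonlinear measurement function C(n, x); C_jac(x): its Jacobian, Eq. (10.138)
    x_pred, K_pred: x_hat(n|Y_{n-1}) and K(n, n-1)
    """
    Cn = C_jac(x_pred)                       # evaluated at x_hat(n|Y_{n-1})
    G = K_pred @ Cn.T @ np.linalg.inv(Cn @ K_pred @ Cn.T + Q2)
    alpha = y - c(x_pred)                    # innovations via the nonlinear C(n, x)
    x_filt = x_pred + G @ alpha              # filtered estimate x_hat(n|Y_n)
    K_filt = K_pred - G @ Cn @ K_pred
    Fn = F_jac(x_filt)                       # evaluated at x_hat(n|Y_n)
    x_next = f(x_filt)                       # prediction via the nonlinear F(n, x)
    K_next = Fn @ K_filt @ Fn.T + Q1         # Riccati update for K(n+1, n)
    return x_next, K_next, x_filt

# Assumed toy model: pendulum-like dynamics, sine of the angle observed in noise.
f = lambda x: np.array([x[0] + 0.1 * x[1], x[1] - 0.1 * np.sin(x[0])])
F_jac = lambda x: np.array([[1.0, 0.1], [-0.1 * np.cos(x[0]), 1.0]])
c = lambda x: np.array([np.sin(x[0])])
C_jac = lambda x: np.array([[np.cos(x[0]), 0.0]])

rng = np.random.default_rng(7)
x_true = np.array([0.8, 0.0])
x_pred, K_pred = np.zeros(2), np.eye(2)
Q1, Q2 = 1e-6 * np.eye(2), 2.5e-3 * np.eye(1)
for n in range(300):
    x_true = f(x_true) + 1e-3 * rng.standard_normal(2)
    y = c(x_true) + 0.05 * rng.standard_normal(1)
    x_pred, K_pred, x_filt = ekf_step(f, c, F_jac, C_jac, x_pred, K_pred, y, Q1, Q2)
print(np.round(x_filt - x_true, 3))          # filtered estimation error stays small
```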
