5.5 COMPARISON OF THE LMS ALGORITHM WITH THE STEEPEST-DESCENT ALGORITHM

The learning curve of the LMS algorithm consists of noisy, decaying exponentials, with the noise becoming smaller as the step-size parameter is reduced. This is one basic difference between the LMS algorithm and the steepest-descent algorithm, whose learning curve is a deterministic sequence of decaying exponentials. Another basic difference between the steepest-descent and LMS algorithms concerns the final value of the mean-square error: whereas the steepest-descent algorithm attains the minimum mean-square error, the final mean-square error of the LMS algorithm exceeds that minimum value by an amount equal to the excess mean-square error.

The noise in the learning curve is a consequence of the stochastic nature of the LMS algorithm and is not easily studied from a single realization of the filter; it is smoothed by averaging over an ensemble of independent adaptive filters. Accordingly, ensemble-averaging operations are used with the LMS algorithm for determining learning curves in the computer experiments that follow.

5.6 COMPUTER EXPERIMENT ON ADAPTIVE PREDICTION

Consider an AR process u(n) of order one, described by the difference equation

u(n) = a u(n − 1) + v(n),   (5.121)

where a is the (only) parameter of the process and v(n) is a zero-mean white-noise process of variance σv². To estimate the parameter a, we use an adaptive predictor whose (only) tap weight is adapted with the LMS algorithm; the update for the tap weight of the predictor is written as

ŵ(n + 1) = ŵ(n) + μ u(n − 1) f(n),
where

f(n) = u(n) − ŵ(n) u(n − 1)

is the prediction error.
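The single-weight adaptive predictor just described can be sketched in a few lines. The AR parameter, noise variance, and run length below are illustrative assumptions, not values taken from the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not the book's exact values):
a = 0.95          # AR(1) parameter to be estimated
sigma_v = 0.1     # standard deviation of the white driving noise v(n)
mu = 0.05         # LMS step-size parameter
N = 1000          # number of iterations

# Generate one realization of the AR(1) process u(n) = a*u(n-1) + v(n).
u = np.zeros(N)
for n in range(1, N):
    u[n] = a * u[n - 1] + sigma_v * rng.standard_normal()

# Single-weight LMS predictor: w_hat(n+1) = w_hat(n) + mu*u(n-1)*f(n),
# where f(n) = u(n) - w_hat(n)*u(n-1) is the prediction error.
w_hat = 0.0
for n in range(1, N):
    f = u[n] - w_hat * u[n - 1]
    w_hat += mu * u[n - 1] * f

print(w_hat)  # should settle near the true parameter a
```

A single run like this yields a noisy estimate; the experiments below smooth such runs by ensemble averaging.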
Figure 5.14 shows plots of the prediction error f(n) versus the number of iterations n for a single trial of the experiment under the following two sets of conditions; in each case, the plot follows a noisy exponential curve, with each trial obtained from a different computer run. The corresponding learning curve is obtained by averaging over 100 independent trials of the experiment; note the smoothing effect of the averaging operation on the learning curve shown in the figure.

Figure 5.16 shows the results of the experiment for the previously mentioned condition 1 and a varying step-size parameter μ. Specifically, the values used for μ are 0.01, 0.05, and 0.1. The ensemble averaging was performed over 100 independent trials of the experiment. From the figure, we observe the following:

• As the step-size parameter μ is reduced, the rate of convergence of the LMS algorithm is correspondingly decreased.
• A reduction in the step-size parameter μ also has the effect of reducing the variation in the experimentally computed learning curve.
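The ensemble-averaging procedure behind these observations can be sketched as follows. Only the step-size values 0.01, 0.05, and 0.1 and the use of 100 independent trials come from the text; the AR parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
a, sigma_v = 0.95, 0.1        # illustrative AR(1) parameters (assumptions)
N, trials = 500, 100          # iterations per run, independent trials

def lms_learning_curve(mu):
    """Ensemble-averaged squared prediction error f^2(n) over independent trials."""
    avg = np.zeros(N)
    for _ in range(trials):
        # Fresh realization of the AR(1) process for each trial.
        u = np.zeros(N)
        for n in range(1, N):
            u[n] = a * u[n - 1] + sigma_v * rng.standard_normal()
        # Run the single-weight LMS predictor and record f^2(n).
        w_hat, f2 = 0.0, np.zeros(N)
        for n in range(1, N):
            f = u[n] - w_hat * u[n - 1]
            f2[n] = f * f
            w_hat += mu * u[n - 1] * f
        avg += f2
    return avg / trials

curves = {mu: lms_learning_curve(mu) for mu in (0.01, 0.05, 0.1)}

# Smaller mu -> slower convergence: the early part of the learning curve
# stays higher for mu = 0.01 than for mu = 0.1.
early = {mu: c[1:100].mean() for mu, c in curves.items()}
print(early)
```

Plotting each curve in `curves` against n reproduces the qualitative behavior described above: slower but smoother convergence as μ decreases.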
Comparison of Experimental Results with Small-Step-Size Theory

For the AR process of order one (i.e., M = 1) described in Eq. (5.121), the correlation matrix of the tap input reduces to a scalar; hence, we note that the process is characterized by:

1. an eigenspectrum that consists of a single eigenvalue, equal to the mean-square value of the tap input, with an associated scalar eigenvector equal to unity;
2. a Wiener solution for the tap weight of the predictor equal to the AR parameter a.

[Figure 5.14: Transient behavior of the adaptive first-order predictor: prediction error and squared weight error versus number of iterations.]
To compare the experimental results with the small-step-size theory, we have plotted two pairs of curves of the squared prediction error versus n, with each pair referring to a different value of the parameter a and with step-size parameter μ = 0.001. One pair of curves corresponds to experimentally derived results, obtained by ensemble-averaging the squared value of the prediction error over 100 independent trials of the experiment, with the curves labeled accordingly; the other pair is derived from the small-step-size theory. The figure shows very good agreement between theory and experiment.
For the parameter value a = 0.93627, the agreement between theory and experiment on the learning curve is very good. The choice of the step-size parameter has a profound effect on the agreement between theory and experiment insofar as the convergence of the LMS predictor is concerned; this issue is explored in Problem 23.
5.7 COMPUTER EXPERIMENT ON ADAPTIVE EQUALIZATION

In this computer experiment, we study the use of the LMS algorithm for the adaptive equalization of a linear dispersive channel; the block diagram of the system is shown in the accompanying figure. Random-number generator 1 provides the test signal x_n used for probing the channel, whereas random-number generator 2 serves as the source of additive white noise. The impulse response of the channel is described by the raised cosine

h_n = (1/2)[1 + cos(2π(n − 2)/W)],  n = 1, 2, 3,   (5.128)

and h_n = 0 otherwise, where the parameter W controls the amount of distortion produced by the channel. Accordingly, the elements of the correlation matrix of the equalizer's tap inputs are

r(0) = h1² + h2² + h3²,
r(1) = h1h2 + h2h3,
r(2) = h1h3,

where h1, h2, and h3 are determined by the value assigned to W.

Experiment: Effect of Eigenvalue Spread. For the first experiment, the step-size parameter is held fixed and the eigenvalue spread is varied. For each eigenvalue spread, an approximate ensemble-averaged learning curve of the adaptive equalizer is obtained.
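Under these definitions, the eigenvalue spread χ(R) = λmax/λmin produced by a given W can be computed directly. A minimal sketch follows; the 11-tap equalizer length and the omission of the additive-noise term in r(0) are assumptions made for illustration, not values stated in this excerpt:

```python
import numpy as np

def channel(W):
    """Raised-cosine channel impulse response h_n, n = 1, 2, 3 (Eq. 5.128)."""
    return np.array([0.5 * (1 + np.cos(2 * np.pi * (n - 2) / W)) for n in (1, 2, 3)])

def eigenvalue_spread(W, taps=11):
    """chi(R) = lambda_max / lambda_min of the equalizer's input correlation matrix.

    R is Toeplitz with first row [r(0), r(1), r(2), 0, ..., 0], where
    r(0) = h1^2 + h2^2 + h3^2,  r(1) = h1*h2 + h2*h3,  r(2) = h1*h3.
    (The 11-tap length and noiseless r(0) are assumptions for illustration.)
    """
    h1, h2, h3 = channel(W)
    row = np.zeros(taps)
    row[0] = h1**2 + h2**2 + h3**2
    row[1] = h1 * h2 + h2 * h3
    row[2] = h1 * h3
    R = np.array([[row[abs(i - j)] for j in range(taps)] for i in range(taps)])
    lam = np.linalg.eigvalsh(R)      # eigenvalues in ascending order
    return lam[-1] / lam[0]

for W in (2.9, 3.1, 3.3, 3.5):
    print(W, round(eigenvalue_spread(W), 2))
```

Increasing W makes the channel more dispersive, and the computed spread χ(R) grows accordingly, which is exactly the quantity varied in this experiment.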
Depending on the direction along which the convergence of the algorithm takes place, it is possible for the convergence to be accelerated by an increase in the eigenvalue spread χ(R). These two aspects of the directionality of convergence are illustrated in the following example.

Consider a two-tap transversal filter whose tap inputs are drawn from a deterministic process consisting of two sinusoids according to

u(n) = A1 cos(ω1 n) + A2 cos(ω2 n),

where A1 and A2 are their respective amplitudes. The corresponding correlation matrix is

R = [ E[u²(n)]        E[u(n)u(n − 1)] ]
    [ E[u(n − 1)u(n)]  E[u²(n − 1)]   ]

  = (1/2) [ A1² + A2²                  A1² cos ω1 + A2² cos ω2 ]
          [ A1² cos ω1 + A2² cos ω2   A1² + A2²               ].

The eigenvalues and associated eigenvectors of this two-by-two matrix are as follows:

λ1 = (1/2) A1²(1 + cos ω1) + (1/2) A2²(1 + cos ω2),   q1 = [1, 1]ᵀ;
λ2 = (1/2) A1²(1 − cos ω1) + (1/2) A2²(1 − cos ω2),   q2 = [1, −1]ᵀ.

To study the convergence behavior of the LMS algorithm, two cases are considered, distinguished by the true tap-weight vector of the transversal filter:

Case 1. Minimum eigenfilter. The true tap-weight vector is aligned with the eigenvector q2 associated with the smaller eigenvalue; that is, w = q2.

Case 2. Maximum eigenfilter. The true tap-weight vector of the transversal filter is aligned with q1; that is, w = q1.

In each case, the two sinusoidal tap inputs are chosen so that the eigenvalue spread is χ(R) = 29 for one choice and χ(R) = 129 for the other, giving four distinct combinations to be considered. The experiments use the following specifications:

• step-size parameter μ = 0.01;
• initial condition ŵ(0) = [0, 0]ᵀ.

For the minimum eigenfilter, the convergence of the LMS algorithm is along a "slow" trajectory, traversing only about halfway to the true parameterization of the filter, starting from ŵ(0) = [0, 0]ᵀ, as shown in Fig. 5.27(a); with the eigenvalue spread increased from 29 to 129, the eccentricity of the error-surface contours increases, as shown in Fig. 5.27(b). For the maximum eigenfilter, we now find that the convergence is correspondingly faster.

Thus, in light of the results presented in this example, the directionality of convergence of the LMS algorithm may be exploited by choosing a suitable value for the initial condition ŵ(0), such that convergence takes place along a fast direction. This approach, of course, assumes prior knowledge of the environment in which the LMS algorithm is operating. In such a scenario, the LMS algorithm performs essentially the role of a "fine tuner."
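The closed-form eigenstructure quoted above is easy to verify numerically. In the sketch below, the amplitudes and frequencies are illustrative assumptions rather than the example's actual values:

```python
import numpy as np

# Illustrative amplitudes and frequencies (assumptions, not the text's values).
A1, A2 = 1.0, 0.5
w1, w2 = 1.2, 0.1

# Correlation matrix of u(n) = A1*cos(w1*n) + A2*cos(w2*n) for a two-tap filter.
r0 = 0.5 * (A1**2 + A2**2)
r1 = 0.5 * (A1**2 * np.cos(w1) + A2**2 * np.cos(w2))
R = np.array([[r0, r1], [r1, r0]])

# Closed-form eigenvalues, with eigenvectors q1 = [1, 1]^T and q2 = [1, -1]^T.
lam1 = 0.5 * A1**2 * (1 + np.cos(w1)) + 0.5 * A2**2 * (1 + np.cos(w2))
lam2 = 0.5 * A1**2 * (1 - np.cos(w1)) + 0.5 * A2**2 * (1 - np.cos(w2))

# Verify R q = lam q for both eigenpairs.
q1 = np.array([1.0, 1.0])
q2 = np.array([1.0, -1.0])
print(np.allclose(R @ q1, lam1 * q1), np.allclose(R @ q2, lam2 * q2))  # True True
print("eigenvalue spread:", max(lam1, lam2) / min(lam1, lam2))
```

Because R is symmetric with equal diagonal entries, [1, 1]ᵀ and [1, −1]ᵀ are always its eigenvectors; only the eigenvalues (and hence the spread) depend on the amplitudes and frequencies.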
5.10 ROBUSTNESS OF THE LMS FILTER: H∞ CRITERION

The LMS filter presented in Section 5.2 was derived with the method of steepest descent as the basis for adapting the tap weights of an adaptive transversal filter. However, once instantaneous estimates of the correlation matrix R and cross-correlation vector p are invoked in the derivation, the least-mean-square optimality properties of that starting point are destroyed. If, then, a "single" realization of the LMS filter is not optimum in the least-mean-square sense, what is the actual criterion that it optimizes?

To answer this fundamental question, we address two related issues:
1. In focusing on a single realization of the LMS filter, we avoid the invocation of statistical assumptions about the data.
2. In evaluating the performance of the filter, we work with the weight-error vector and the undisturbed estimation error, defined in what follows.

To deal with the first issue, suppose that we have a set of observable data that fits into the multiple regression model

d(n) = wᴴu(n) + v(n),   (5.138)

where w is an unknown parameter vector and v(n) is an additive disturbance. Equation (5.138) is of the same mathematical form as that of Eq. (5.6), but different symbols are used for the parameter vector and noise so as to emphasize that the viewpoint here is deterministic and therefore different from the stochastic approach of Section 5.2. The disturbance v(n) accounts for:

• the tail of the impulse response that is ignored in order to validate the use of an FIR model as in Eq. (5.138);
• other disturbances originating from unknown sources.

The weight-error vector, denoted by ε(n), is defined by

ε(n) = w − ŵ(n),   (5.139)

and the undisturbed estimation error is defined by

ξᵤ(n) = εᴴ(n)u(n).   (5.140)

The subscript u, signifying "undisturbed," is used here to distinguish the estimation error ξᵤ(n) from the estimation error e(n) of Eq. (5.7). Specifically, ξᵤ(n) compares the filter's response ŵᴴ(n)u(n) with the "undisturbed" response wᴴu(n) of the multiple regression model of Eq. (5.138), rather than with the desired response d(n). From the defining equations (5.7) and (5.140), we readily find that the two errors are related by

ξᵤ(n) = e(n) − v(n).   (5.141)

Typically, the initial value ŵ(0) is different from w. Hence, there are two distinct disturbances to be considered in evaluating the estimation strategy:

• the initial weight-error vector, ε(0) = w − ŵ(0);
• the disturbance v(n) in the multiple regression model.

Let 𝒯 denote the transfer operator that maps these disturbances at the input of the recursive estimation strategy to estimation errors at the output, as depicted in Fig. 5.29. Note that 𝒯 is a function of the estimation strategy used to construct ŵ(n). We may then define the energy gain of the estimator as the ratio of the error energy at the output to the total disturbance energy at the input. Clearly, this ratio depends on the particular choice of input disturbances. To remove this dependence, we consider the largest energy gain over all conceivable disturbance sequences. In so doing, we will have defined the H∞ norm of the transfer operator 𝒯. We may now formulate the optimal estimation problem as follows:

Find a causal estimator that minimizes the H∞ norm of 𝒯, where 𝒯 is a transfer operator that maps the disturbances to the estimation errors.

We may view the H∞ optimization in the following game-theoretic sense: Nature, acting as the opponent, has access to the unknown disturbances and thereby tries to maximize the energy gain. Since no assumptions are made about the disturbances, the estimator has to account for all conceivable disturbance sequences, and the resulting solution may be "overconservative." In what follows, we show that the standard LMS filter is the solution of this optimal estimation problem.

¹The H∞ criterion applied to the LMS algorithm was first discussed in Hassibi et al. (1993, 1996); it is also briefly discussed in Sayed and Kailath (1994) and Sayed and Rupp (1997).
²The arguments presented in Section 5.10 apply to the standard LMS filter. They can be generalized (e.g., to the normalized LMS filter considered in Chapter 6). The point to note here is that the H∞-optimal solution is not unique, and the LMS filter is the so-called central solution.
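The robustness property can be checked numerically on synthetic data: for a step size satisfying μ ≤ 1/‖u(n)‖² for all n, the energy of the undisturbed estimation errors never exceeds the total disturbance energy ‖ε(0)‖²/μ + Σ|v(n)|². Everything below (dimensions, signal values) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 4, 200                      # filter length, number of samples (assumptions)
w = rng.standard_normal(M)         # unknown "true" parameter vector

U = rng.standard_normal((N, M))    # regressors u(n)
v = 0.1 * rng.standard_normal(N)   # disturbance in d(n) = w^T u(n) + v(n)
d = U @ w + v

# Choose mu so that mu * ||u(n)||^2 <= 0.9 < 1 for every n.
mu = 0.9 / max(np.sum(row * row) for row in U)

w_hat = np.zeros(M)                # so eps(0) = w - w_hat(0) = w
err_energy = 0.0                   # accumulates |xi_u(n)|^2
for n in range(N):
    u = U[n]
    xi_u = (w - w_hat) @ u         # undisturbed estimation error xi_u(n)
    e = d[n] - w_hat @ u           # ordinary estimation error e(n) = xi_u(n) + v(n)
    w_hat += mu * u * e            # standard LMS update
    err_energy += xi_u**2

disturbance_energy = np.dot(w, w) / mu + np.dot(v, v)
print(err_energy <= disturbance_energy)   # True: energy gain bounded by unity
```

The inequality holds deterministically for every run, not merely on average, which is exactly the sense in which the H∞ criterion makes no statistical assumptions about the disturbances.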
³The book by Hassibi et al. (1999) provides a comprehensive treatment of H∞ theory.

A further application of the Cauchy–Schwarz inequality yields the required upper bound. Thus far, we have considered the LMS filter with a step-size parameter μ chosen to satisfy this bound.
The maximum energy gain is defined by Eq. (5.148) as the supremum, over all disturbance sequences in h₂ (the space of square-summable sequences), of the ratio of the energy of the undisturbed estimation errors to the total energy of the disturbances, where "sup" stands for supremum. To show that the LMS filter actually attains its upper bound on this quantity, note that for any positive tolerance we can pick an integer N such that a suitably constructed disturbance sequence brings the ratio within that tolerance of the bound. Using these choices in Eq. (5.148) and cancelling common terms, we obtain the matching lower bound. The two statements together imply that the maximum energy gain of the LMS filter equals its upper bound, and we conclude that, for this choice of step-size parameter, the LMS filter is robust; indeed, it was observed early on that an adaptive equalizer based on the LMS algorithm was robust with respect to disturbances on a telephone channel. An estimator whose maximum energy gain is bounded in this way will not change its estimate of the unknown weight vector w substantially when confronted with disturbances of small energy.
UPPER BOUNDS ON THE STEP-SIZE PARAMETER FOR DIFFERENT SCENARIOS

In an effort to present a comprehensive treatment, this section develops upper bounds on the step-size parameter μ for different scenarios. Assuming causality, the weight-update equation (5.6) is analyzed directly; a feature of this new approach is that the bound obtained depends on the scenario considered, with the case of a sinusoidal excitation treated separately.
The result is given by Eq. (5.162), where H(e^{jω}) is the frequency response of the linear discrete-time system of Fig. 5.30, with x(n) providing the excitation and the filter output being the response.

THEOREM 2: The steady-state mean-square value of the process ...

[Figure 5.30: Linear discrete-time approximation for analyzing the LMS filter.]
The excitation x(n) applied to the approximating linear system produces, at its output, the response generated by the corresponding LMS filter. Where prior knowledge of the input is available, there is no need for the use of an adaptive filter to estimate it; a time-varying environment, on the other hand, affects the convergence of the weight vector of the LMS filter.
CHAPTER 6
Normalized Least-Mean-Square Adaptive Filters

In the normalized LMS filter, the adjustment applied to the tap-weight vector of an adaptive filter is normalized with respect to the squared norm of the tap-input vector, and a constraint is imposed on the updated tap-weight vector. The chapter also includes a discussion of a filter that may be viewed as a generalization of the normalized LMS filter.

6.1 NORMALIZED LMS FILTER AS THE SOLUTION TO A CONSTRAINED OPTIMIZATION PROBLEM

The normalized LMS filter arises as the solution to a constrained optimization problem: minimize the squared Euclidean norm of the change in the tap-weight vector,

δŵ(n + 1) = ŵ(n + 1) − ŵ(n),   (6.1)

subject to the constraint

ŵᴴ(n + 1)u(n) = d(n).   (6.2)
To solve this constrained optimization problem, we use the method of Lagrange multipliers, proceeding in three steps:

1. Form the cost function for the problem, combining the squared norm of the incremental change δŵ(n + 1) with the constraint of Eq. (6.2) scaled by a Lagrange multiplier λ.
2. Differentiate the cost function with respect to ŵ(n + 1), following the rules for differentiation with respect to a complex-valued weight vector as described in Appendix B; setting this result equal to zero and solving for the optimum value of ŵ(n + 1), we obtain the form of the update.
3. Combine the results of steps 1 and 2 to evaluate the optimum value of the Lagrange multiplier λ.

The outcome of this procedure is the recursion

ŵ(n + 1) = ŵ(n) + (1/‖u(n)‖²) u(n) e*(n),

where e(n) = d(n) − ŵᴴ(n)u(n). To exercise control over the change in the tap-weight vector from one iteration to the next without changing its direction, we introduce a positive real scaling factor μ̃ and write

ŵ(n + 1) = ŵ(n) + (μ̃/‖u(n)‖²) u(n) e*(n).

This recursion introduces a problem of its own, namely, that when the tap-input vector u(n) is small, numerical difficulties may arise because then we have to divide by a small value for the squared norm ‖u(n)‖². To overcome this problem, we modify the recursion slightly to obtain

ŵ(n + 1) = ŵ(n) + (μ̃/(δ + ‖u(n)‖²)) u(n) e*(n),

where δ > 0. (For δ = 0, this reduces to the unregularized form.)
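The resulting normalized LMS recursion, including the guard constant δ in the denominator, can be sketched as follows; the system-identification setup and all parameter values are illustrative assumptions:

```python
import numpy as np

def nlms(U, d, mu_tilde=0.5, delta=1e-4):
    """Normalized LMS: w(n+1) = w(n) + mu_tilde/(delta + ||u(n)||^2) * u(n) * e(n).

    delta > 0 guards against division by a small squared norm ||u(n)||^2
    when the tap-input vector is small; with delta = 0 the update reduces
    to the unregularized form derived from the constrained optimization.
    """
    N, M = U.shape
    w_hat = np.zeros(M)
    for n in range(N):
        u = U[n]
        e = d[n] - w_hat @ u                          # estimation error e(n)
        w_hat += mu_tilde / (delta + u @ u) * u * e   # normalized update
    return w_hat

# Illustrative use: identify a short FIR system from noisy observations.
rng = np.random.default_rng(3)
w_true = np.array([0.8, -0.4, 0.2])
U = rng.standard_normal((2000, 3))
d = U @ w_true + 0.01 * rng.standard_normal(2000)

w_hat = nlms(U, d)
print(np.round(w_hat, 2))   # close to w_true
```

Because the step is divided by the instantaneous input energy, the effective adaptation rate is insensitive to the scale of u(n), which is the practical motivation for the normalization.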
6.2 STABILITY OF THE NORMALIZED LMS FILTER
...into iteration n + 1, subject to a constraint imposed on the updated tap-weight vector ŵ(n + 1). In light of this idea, it is logical that we base the stability analysis of the normalized LMS filter on the mean-square deviation [see Eq. (5.88)]

D(n) = E[‖ε(n)‖²],

where ε(n) = w − ŵ(n) is the weight-error vector. Substituting the update equation ŵ(n + 1) = ŵ(n) + (μ̃/(δ + ‖u(n)‖²)) u(n) e*(n) into this definition and then taking expectations, we obtain the difference

D(n + 1) − D(n) = E[...],

whose sign determines whether the mean-square deviation decreases from one iteration to the next.