doi:10.1006/mssp.2002.1519
Least-mean square (LMS) algorithms, which are commonly used for adaptive
feedforward noise cancellation, have performance issues related to insufficient excitation,
non-stationary reference inputs, finite-precision arithmetic, quantisation noise and
measurement noise. Such factors cause weight drift and potential instability in the
conventional LMS algorithm. Here, we analyse the stability and performance of the leaky
LMS algorithm, which is widely used to correct weight drift. A Lyapunov tuning method is
developed to find an adaptive leakage parameter and adaptive step size that provide
optimum performance and retain stability in the presence of measurement noise on the
reference input of known variance. The method accounts for non-persistent excitation
conditions and non-stationary reference inputs and requires no a priori knowledge of the
reference input signal characteristics other than a lower bound on its magnitude or a
minimum signal-to-noise ratio. The Lyapunov tuning method is demonstrated for three
candidate adaptive leakage and step size parameter combinations, each of which is a
function of the instantaneous measured reference input, measurement noise variance, and/
or filter length. These candidates illustrate stability vs performance tradeoffs in the leaky
LMS algorithm elicited through the Lyapunov tuning method. The performance of each
candidate Lyapunov tuned algorithm is evaluated experimentally in a single source, single-
point acoustic noise cancellation system.
© 2003 Elsevier Science Ltd. All rights reserved.
1. INTRODUCTION
The conventional LMS algorithm has received wide recognition and use in applications
ranging from telephony to acoustic noise cancellation due to its computational simplicity.
However, well-known problems associated with its lack of stability or weight drift in the
presence of inadequate excitation, measurement noise, or finite precision effects have
resulted in variants to the standard LMS algorithm. Among the variants, the leaky LMS
algorithm has received significant attention. The leaky LMS algorithm, first proposed by
Gitlin et al. [1], introduces a leakage factor that retains stability and adds robustness, often
at significant expense to performance, due to bias of the weight vector away from the
Wiener solution. However, selection of the leakage factor in order to retain stability at
minimal performance degradation is not a simple task, and often involves empirical, on-
line tuning.
Mayyas and Aboulnasr [2] present a detailed analysis of the mean-square error (MSE)
performance of the leaky LMS algorithm for zero mean Gaussian (stationary) input data
and independence of the reference input signal and the weight vector. Their analysis
establishes conditions on a constant leakage factor and step size for which MSE
convergence is guaranteed. The focus of Nascimento and Sayed [3] is to address the
computational cost of the leaky LMS algorithm. Nascimento and Sayed [3] present
low computational cost, time-varying leakage parameters which ensure stability and
unbiased weight estimates in the presence of finite-precision arithmetic. The algorithms
presented in [3] also retain performance closely approximating that of conventional LMS
algorithms.
The analysis presented in [2] and the modified, time-varying leaky LMS algorithms
proposed in [3] provide significant steps towards reducing the need for on-line empirical
tuning of leaky LMS algorithms. In this paper, we present a Lyapunov tuning approach
for the leaky LMS algorithm that considers additional cases not considered in [2, 3], such
as quantization and/or measurement noise on reference input signals, large variations in
the dynamic range of the reference input signal, and periods of insufficient excitation.
Such factors are significant in acoustic noise cancellation applications, in which the
presence of real measurement components such as microphones, amplifiers and analog-to-
digital converters causes non-negligible measurement noise. In some applications, such as
single-source, single-point noise cancellation, LMS algorithms have been shown to
perform well, even when the requirement of a stationary noise source does not hold. In
such cases, we show that in the presence of measurement noise, the minimum reference
input level and statistics of measurement noise resulting from electrical and electro-
mechanical hardware are the only necessary parameters for choosing a time varying
leakage parameter and step size combination that retains adequate stability at minimal
expense to performance. To show this, measurement noise is considered explicitly in the
leaky LMS weight update equation, and a Lyapunov analysis is applied to the weight
update equation.
Lyapunov tuning provides a unified approach to examining the stability of the weight
update equation and resulting performance tradeoffs for candidate combinations of tuning
parameters, in the presence of bounded input signals and measurement noise with known
statistical properties. Using Lyapunov analysis, candidate time-varying leakage and step
size parameters that are functions of the instantaneous input power, variance of the noise
and/or filter length are proposed. These tuning parameters retain uniform asymptotic
stability of the LMS weight update equation in some region around the optimal weight
vector. Three candidate tuning laws are evaluated experimentally in a low frequency single
source, single-point acoustic noise cancellation system in order to evaluate the use of
the Lyapunov tuning method to provide tuning laws with acceptable stability, and
performance exceeding those of empirically tuned, leaky LMS algorithms with fixed
leakage factors.
where
    e_k = d_k - y_k    (2)
and y_k = W_k^T X_k is the LMS filter output. The well-known Wiener solution, or optimum weight vector, is
    W_0 = E[X_k X_k^T]^{-1} E[X_k d_k].    (3)
E[X_k X_k^T] is the autocorrelation of the input signal and E[X_k d_k] is the cross-correlation between the input vector and process output. The Wiener solution reproduces the unknown process, such that d_k = W_0^T X_k.
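The Wiener solution of equation (3) can be estimated from sample data by forming the sample autocorrelation and cross-correlation and solving the normal equations. The sketch below is ours, not from the paper; the process and signal choices are illustrative.

```python
import numpy as np

def wiener_solution(X, d):
    """Estimate W0 = E[Xk Xk^T]^{-1} E[Xk dk] from sample data.

    X: (samples, taps) array of input vectors Xk
    d: (samples,) array of process outputs dk
    """
    R = X.T @ X / len(d)          # sample autocorrelation E[Xk Xk^T]
    p = X.T @ d / len(d)          # sample cross-correlation E[Xk dk]
    return np.linalg.solve(R, p)  # solve R W0 = p rather than inverting R

# Recover a known 3-tap process from simulated data
rng = np.random.default_rng(0)
W_true = np.array([0.5, -0.3, 0.1])
X = rng.standard_normal((5000, 3))
d = X @ W_true                    # dk = W0^T Xk, as in the text
W0 = wiener_solution(X, d)
```

Solving the linear system rather than explicitly inverting E[X_k X_k^T] is the standard numerically preferable choice.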
By following the stochastic gradient of the cost surface, the well-known unbiased, recursive LMS solution is obtained:
    W_{k+1} = W_k + μ e_k X_k.    (4)
Stability, convergence and random noise in the weight vector at convergence are governed by the step size μ. Fastest convergence to the Wiener solution is obtained for
    μ = 1/λ_max
where λ_max is the largest eigenvalue of the autocorrelation matrix E[X_k X_k^T] [4].
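As a minimal illustration of equations (2) and (4), the following sketch (ours; the signal and parameter choices are assumptions, not the paper's) runs a conventional LMS filter identifying a known FIR process.

```python
import numpy as np

def lms(x, d, L, mu):
    """Conventional LMS: ek = dk - W^T Xk (eq. 2), W <- W + mu*ek*Xk (eq. 4).

    x: reference signal, d: desired signal, L: filter length, mu: step size.
    """
    W = np.zeros(L)
    e = np.zeros(len(x))
    for k in range(L, len(x)):
        Xk = x[k-L:k][::-1]       # tap-delay-line input vector
        e[k] = d[k] - W @ Xk      # error signal, equation (2)
        W = W + mu * e[k] * Xk    # weight update, equation (4)
    return W, e

# Identify a known 4-tap process with a stationary white reference input
rng = np.random.default_rng(1)
W_true = np.array([0.4, -0.2, 0.1, 0.05])
x = rng.standard_normal(5000)
d = np.zeros(len(x))
for k in range(4, len(x)):
    d[k] = W_true @ x[k-4:k][::-1]
W, e = lms(x, d, L=4, mu=0.05)
```

With a noise-free, exactly-modelled process the weights converge to the Wiener solution; the drawbacks discussed next arise when these idealisations fail.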
As an adaptive noise cancellation method, LMS filters have some drawbacks. First, high
input power leads to large weight updates and large excess mean-square error at
convergence. Operating at the largest possible step size enhances convergence, but also
causes large excess mean-square error, or noise in the weight vector, at convergence. A non-stationary input dictates a large adaptive step size for enhanced tracking; thus, the LMS algorithm is not guaranteed to converge for non-stationary inputs. In addition,
applications necessitate the use of finite precision components, and under such conditions,
the LMS algorithm does not always converge in the traditional form of equation (4), even
with an appropriate adaptive step size. Finally, non-persistent excitation due to a constant
or nearly constant reference input, such as can be the case during ‘quiet periods’ in
adaptive noise cancellation systems with non-stationary inputs, can also cause weight
drift. In response to such issues, the leaky LMS (LLMS) algorithm or step-size normalised
versions of the leaky LMS algorithm (LNLMS) ‘leak off’ excess energy associated with
weight drift by including a constraint on output power in the cost function to be
minimised. Minimising the resulting cost function
    J = (e_k^2 + γ W_k^T W_k)/2    (5)
results in the recursive weight update equation
    W_{k+1} = λ W_k + μ e_k X_k    (6)
928 D. A. CARTES ET AL.
Figure 2. Error signal and RMS weight vector of an LMS filter applied to a second-order system subject to a
non-stationary reference input.
where λ = 1 - γμ is the leakage factor [1]. Under conditions of constant tuning parameters λ and μ, no measurement noise (Q_k in Fig. 1) or finite-precision effects, and bounded signals X_k and e_k, equation (6) converges to [5]
    W_k = Σ_{i=0}^{k-1} λ^i μ X_{k-1-i} e_{k-1-i}    (7)
as k → ∞. Thus, for stability, 0 ≤ λ ≤ 1 is required. The lower bound on λ assures that the sign of the weight vector does not change with each iteration.
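The leaky update of equation (6) differs from equation (4) only by the factor λ on the previous weights. A sketch (ours; the λ and μ values are illustrative):

```python
import numpy as np

def leaky_lms(x, d, L, mu, lam):
    """Leaky LMS, equation (6): W <- lam*W + mu*ek*Xk.

    lam = 1 recovers conventional LMS; lam < 1 'leaks off' weight
    energy each step, bounding drift at the cost of a bias away
    from the Wiener solution.
    """
    W = np.zeros(L)
    e = np.zeros(len(x))
    for k in range(L, len(x)):
        Xk = x[k-L:k][::-1]
        e[k] = d[k] - W @ Xk
        W = lam * W + mu * e[k] * Xk   # equation (6)
    return W, e

# Bias illustration: the leaky filter settles near, but not at, W_true
rng = np.random.default_rng(2)
W_true = np.array([0.4, -0.2, 0.1, 0.05])
x = rng.standard_normal(5000)
d = np.zeros(len(x))
for k in range(4, len(x)):
    d[k] = W_true @ x[k-4:k][::-1]
W, e = leaky_lms(x, d, L=4, mu=0.05, lam=0.999)
```

For a white unit-variance input, the standard steady-state result gives a bias of roughly γ/(1+γ) relative to W_true, with γ = (1-λ)/μ; here γ = 0.02, i.e. about a 2% bias, which the next paragraph's stability/performance trade-off makes precise.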
The traditional constant leakage factor leaky LMS results in a biased weight vector that
does not converge to the Wiener solution and hence results in reduced performance over
the traditional LMS algorithm and its step size normalised variants. A substantial tradeoff
exists between stability and performance. For stability, a ‘small’ leakage factor is required,
while for maximum performance, the leakage factor must be unity. A decrease in
performance is unacceptable in many targeted applications. Hence, we seek to find time-
varying tuning parameters that maintain acceptable stability and retain maximum
performance of the leaky LMS algorithm in the presence of quantifiable measurement
noise and bounded dynamic range.
Figure 2 shows this stability-performance trade-off through simulation of an LMS filter
for a second-order process subject to measurement noise on X_k. Here, non-stationary
acoustic noise recorded from an aircraft is passed through the system and cancelled using
an LMS filter with weight vector of length 100. In practice, such a noise source can have a
large dynamic range, depending on the flight condition, or a passenger’s seat, causing a
time-varying SNR. Figures 2(a) and (b) show the convergence of the error signal and
weight vector for a reference input free of measurement noise and with a leakage factor of
unity. For this idealistic case, the steady-state error is effectively zero, and the LMS filter is
LYAPUNOV TUNING OF LEAKY LMS ALGORITHM 929
stable. Instability occurs for SNR=4, and a constant leakage factor of 0.9996 is required
to regain stability. When a constant leakage factor is used, it must be chosen so as to
maintain stability for worst-case SNR, even if such a SNR occurs infrequently. However, a
substantial performance reduction, as evidenced by the large steady-state error shown in
Fig. 2(c) and the steady-state RMS weight vector bias of Fig. 2(d), results. Thus, when λ = 0.9996 is applied under more normal high-SNR conditions, the performance reduction is unacceptable.
Since scalar tuning parameters λ_k and μ_k are required, W̃_k and W̃_{k+1} are projected in the direction of X_k + Q_k, as shown in Fig. 3:
    w̃_k = W̃_k^T (X_k + Q_k)/||X_k + Q_k||    (10)
    w̃_{k+1} = W̃_{k+1}^T (X_k + Q_k)/||X_k + Q_k||.    (11)
Combining equations (9)-(11) and simplifying the expression gives
    w̃_{k+1} = W̃_k^T (λ_k - μ_k ||X_k + Q_k||^2)(X_k + Q_k)/||X_k + Q_k||
             + W_0^T [(λ_k - 1)(X_k + Q_k)/||X_k + Q_k|| - μ_k Q_k ||X_k + Q_k||].    (12)
A candidate Lyapunov function satisfying stability condition (i) above is
    V_k = w̃_k^T w̃_k.    (13)
    φ_k = λ_k - μ_k ||X_k + Q_k||^2    (17)
    g_1k = λ_k - 1    (18)
    g_2k = -μ_k ||X_k + Q_k||^2    (19)
    a_k = Q_k/||X_k + Q_k||    (20)
where u_k = (X_k + Q_k)/||X_k + Q_k|| is the unit vector of equation (16). With these definitions, the Lyapunov function difference becomes
    V_{k+1} - V_k = (φ_k^2 - 1) W̃_k^T u_k u_k^T W̃_k + g_1k^2 W_0^T u_k u_k^T W_0 + g_2k^2 W_0^T a_k a_k^T W_0
                  + 2φ_k g_1k W̃_k^T u_k u_k^T W_0 + 2φ_k g_2k W̃_k^T u_k a_k^T W_0 + 2g_1k g_2k W_0^T u_k a_k^T W_0.    (21)
Using the projected weight vector of equations (10) and (11) and the resulting Lyapunov function candidate of equation (13), it is possible to find a time-invariant scalar function V* such that the Lyapunov candidate satisfies V_k ≤ V* for all k > 0. Since the scalar projection is always in the direction of the unit vector defined by equation (16), an example of such a function is V* = 10 W̃_0^T W̃_0. Hence, the Lyapunov function can be used to assess uniform asymptotic stability. For a stronger stability statement (global uniform asymptotic stability), the scalar function V* must be radially unbounded. The Lyapunov function
candidate of equation (13) does not satisfy this condition. However, due to the inherent performance cost, it is undesirable to consider a more stringent stability metric than necessary. Consequently, two conditions must be considered in order to assure that W̃_k remains bounded: (a) X_k = -Q_k, or (b) W̃_k is orthogonal to u_k or some component of W̃_k is orthogonal to u_k. Heuristically, condition (a) is unlikely at realistic tap lengths
and SNR. In fact, if this condition does occur, then, intuitively, it must be the case that
SNR is so low that noise cancellation is futile, since the noise floor effectively dictates the
maximum performance that can be achieved. If W̃_k is orthogonal to u_k under average SNR conditions, then it is likely that the error e_k is very close to zero, i.e. the LMS algorithm is simply unnecessary if such a condition persists. Thus, though it is possible,
but unlikely, that one or more of the weight vector components could become unbounded,
in considering such unlikely occurrences it is impossible to avoid serious performance
degradation. Moreover, the goal of the Lyapunov analysis is to enable quantitative
comparison of stability and performance tradeoffs for candidate tuning rules. Since
uniform asymptotic stability suffices to make such comparisons, and since the Lyapunov
function of equation (13) enhances the ability to make such comparisons, it was selected
for the analysis that follows.
Several approaches exist to examining Lyapunov stability condition (ii), V_{k+1} - V_k ≤ 0, for equation (21). The usual approach to determining stability is to examine V_{k+1} - V_k term by term to determine whether the two parameters λ_k and μ_k can be chosen to make each term negative, thereby guaranteeing uniform asymptotic stability. Since there are several terms that are clearly positive in equation (21), we cannot guarantee that each individual term will be negative. Furthermore, it is clear from an analysis of equation (21) that the solution is nearly always biased away from zero. At W̃_k = W_k - W_0 = 0, equation (21) becomes
    V_{k+1} - V_k = g_1k^2 W_0^T u_k u_k^T W_0 + g_2k^2 W_0^T a_k a_k^T W_0 + 2g_1k g_2k W_0^T u_k a_k^T W_0.    (22)
For 0 < λ_k < 1, all coefficients of terms in equation (22) are positive, and it is clear that a negative definite V_{k+1} - V_k results only if g_1k^2 W_0^T u_k u_k^T W_0 + g_2k^2 W_0^T a_k a_k^T W_0 ≤ -2g_1k g_2k W_0^T u_k a_k^T W_0 with g_1k g_2k ≥ 0. That the leaky LMS algorithm as examined using the Lyapunov
candidate of equation (13) is biased away from W_0 is in agreement with the literature [7]. It is possible, but difficult, to examine the remaining space of W̃_k = W_k - W_0 (i.e. the space that excludes the origin) to determine whether time-varying tuning parameters can be found to guarantee stability of some or all other points in the space or a maximal region of the space. Time-varying tuning parameters are required since constant tuning parameters found in such a manner will retain stability of points in the space at the expense of performance. However, since we seek time-varying leakage and step-size parameters that are uniquely related to measurable quantities, and since the Wiener solution is generally not known a priori, the value of such a direct analysis of the remaining space of W̃_k = W_k - W_0 is limited. Thus, the approach taken here is to define the region of stability
around the Wiener solution in terms of parameters
    A = W̃_k^T u_k / (W_0^T u_k)    (23)
    B = W_0^T a_k / (W_0^T u_k)    (24)
and to parameterise the resulting Lyapunov function difference such that the remaining
scalar parameter(s) can be chosen by optimisation. The parameters A and B physically
represent the output error ratio between the actual output and ideal output for a system
converged to the Wiener solution, and the output noise ratio, or portion of the ideal
output that is due to noise vector Qk : Physically, these parameters are statistically bounded
based on (i) the maximum output that a real system is capable of producing, (ii) average
and instantaneous SNR for the measured reference input, and (iii) the convergence
behaviour of the system. SNR will vary more in systems with non-stationary reference inputs. For example, for a stationary reference input with an average SNR of 35 dB, B is within the range of ±1, and A is within the range of ±3, for electrical and mechanical components typical of a single-source, single-point noise cancellation system [8]. For a non-stationary reference input with an average SNR of 35 dB, the range of B increases to ±3, indicating an increase in the number of instances of very low SNR conditions [8].
Such bounds can be approximated using computer simulation, given the average SNR, the
noise floor associated with microphone and amplifier components used to implement noise
cancellation, and variance of the measurement noise. Parameters A and B provide
convenient means for visualising the region of stability around the Wiener solution at
various SNR conditions and thus for comparing candidate tuning rules.
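In simulation, where W_0, W_k and the noise vector Q_k are all available, the parameters of equations (23) and (24) can be computed directly. The sketch below is ours; all names are illustrative.

```python
import numpy as np

def output_ratios(W, W0, X, Q):
    """Parameters A and B of equations (23)-(24), with
    u = (X+Q)/||X+Q|| (unit vector along the measured input) and
    a = Q/||X+Q||.

    A: output error ratio; zero when W equals the Wiener solution W0.
    B: output noise ratio; zero when the measurement noise Q is zero.
    """
    s = X + Q
    u = s / np.linalg.norm(s)
    a = Q / np.linalg.norm(s)
    denom = W0 @ u                 # ideal projected output W0^T u
    A = (W - W0) @ u / denom       # equation (23)
    B = (W0 @ a) / denom           # equation (24)
    return A, B

# Example: at the Wiener solution (W = W0), A is exactly zero
rng = np.random.default_rng(3)
W0_example = rng.standard_normal(8)
X_example = rng.standard_normal(8)
Q_example = 0.1 * rng.standard_normal(8)
A0, B0 = output_ratios(W0_example, W0_example, X_example, Q_example)
```

Repeating such draws over recorded signal and noise ensembles is one way the statistical bounds on A and B quoted above could be approximated by simulation.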
Note that B is completely independent of the tuning parameters. However, equation (21)
shows that B can affect stability and performance. In a persistently excited system with
high SNR, B approaches zero, while the Wiener solution corresponds to A=0, i.e. W_k = W_0. Thus, high performance and high SNR operating conditions imply both A and B are
near zero in the leaky LMS algorithm, though the leaky solution will always be biased
away from A=0. In a system with low excitation and/or low SNR, larger instantaneous
magnitudes of A and B are possible. However, it is unreasonable to demand global
uniform asymptotic stability; since only statistical properties of B are known a priori,
intuitively, global uniform asymptotic stability can never be achieved. Moreover, the
greater the stability demands, the more performance suffers in the leaky LMS algorithm.
Using parameters A and B, equation (21) becomes
    V_{k+1} - V_k = [(φ_k^2 - 1)A^2 + g_1k^2 + g_2k^2 B^2 + 2φ_k g_1k A + 2φ_k g_2k AB + 2g_1k g_2k B] W_0^T u_k u_k^T W_0.    (25)
By choosing an adaptive step size and/or leakage parameter that simplifies analysis of
equation (25), we parameterise and subsequently determine conditions on remaining scalar parameters such that V_{k+1} - V_k ≤ 0 for the largest region possible around the Wiener
solution. Such a region is now defined by parameters A and B, providing a means to
graphically display the stable region and to visualise performance/stability tradeoffs
introduced for candidate leakage and step-size parameters.
    λ_k = (X_k^T X_k - Q_k^T Q_k)/||X_k + Q_k||^2    (33)
results in the alternate parameterisation
    φ_k = (1 - m_0)λ_k    (34)
    g_2k = -m_0 λ_k    (35)
    g_1k = λ_k - 1.    (36)
The expression for λ_k in equation (33) is not measurable, but it can be approximated as
    λ_k = 1 - 2Lσ_q^2/||X_k + Q_k||^2    (37)
where L is the filter length. Equation (37) is a function of statistical and measurable quantities, and is a good approximation of equation (33) when ||X_k|| >> ||Q_k||. Using equations (32) and (37), equation (25) becomes
    V_{k+1} - V_k = [((1 - m_0)^2 λ_k^2 - 1)A^2 + (λ_k - 1)^2 + m_0^2 λ_k^2 B^2
                    + 2(1 - m_0)λ_k(λ_k - 1)A - 2m_0(1 - m_0)λ_k^2 AB - 2m_0 λ_k(λ_k - 1)B] W_0^T u_k u_k^T W_0.    (38)
The optimum m_0 for this candidate, which is again found by scalar optimisation subject to worst-case conditions on A and B, is m_0 = 1/2.
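The Candidate 3 leakage factor of equation (37) might be computed as follows. This sketch is ours; the clipping to [0, 1] is our addition, guarding the low-SNR regime where the approximation of equation (33) breaks down.

```python
import numpy as np

def candidate3_leakage(x_meas, sigma_q2):
    """Adaptive leakage of equation (37):
    lam_k = 1 - 2*L*sigma_q^2 / ||Xk + Qk||^2,
    using only the measured (noise-corrupted) reference vector
    x_meas = Xk + Qk, its length L, and the noise variance sigma_q^2.
    """
    L = len(x_meas)
    power = float(x_meas @ x_meas)           # ||Xk + Qk||^2
    lam = 1.0 - 2.0 * L * sigma_q2 / max(power, 1e-12)
    return min(max(lam, 0.0), 1.0)           # clipping: our addition

high_snr = candidate3_leakage(np.ones(20), 1e-4)    # input power well above noise floor
low_snr = candidate3_leakage(np.full(20, 1e-6), 1.0)  # input power at the noise floor
```

At high SNR the leakage approaches one (little leak, minimal bias); as input power falls toward the noise floor, λ_k drops and excess weight energy is leaked off.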
In summary, the three candidate adaptive leakage factor and step-size solutions are
Candidate 1: equations (26) and (27), Candidate 2: equations (26) and (31), and Candidate 3:
equations (32) and (37). All are computationally efficient, requiring little additional
computation over a fixed leakage, normalised LMS algorithm, and all three candidate
tuning laws can be implemented based on knowledge of the measured, noise-corrupted
reference input, the variance of the measurement noise and the filter length. Again,
Candidate 1 represents the currently accepted practice, while Candidate 3 is a direct result
of choosing the Lyapunov function of equation (13).
To evaluate stability and performance tradeoffs, we examine V_{k+1} - V_k for various instantaneous SNRs ||X_k||/||Q_k||, and -1 ≤ A ≤ 1, -1 ≤ B ≤ 1. Figure 4 shows plots of V_{k+1} - V_k vs A and B for SNR of 2, 10, and 100, and a filter length of 20. The two extreme SNR conditions are approximate worst- and best-case SNR conditions for the experimental conditions in which the candidate Lyapunov tuned algorithms are tested, as reported in the following section. Numerical results corresponding to Fig. 4 are shown in Table 1.
Figure 4 includes the ‘zero’ plane, such that stability regions provided by the intersection of the Lyapunov difference with this plane can be visualised. Note again that A=0 corresponds to the LMS Wiener solution. At sufficiently high SNR, for all candidates, V_{k+1} - V_k = 0 for A=B=0, i.e. operation at the Wiener solution with Q_k = 0. A notable exception to this is Candidate 3, for which V_{k+1} - V_k > 0 for A=0 and B=0 and SNR=2, due to the breakdown of the approximation of the leakage factor in equation (37) for low SNR. For A=0 and B>0, the Wiener solution is unstable, which is consistent with the bias of leaky LMS algorithms away from the Wiener solution. The uniform asymptotic stability region in Fig. 4 is the region for which V_{k+1} - V_k ≤ 0. At sufficiently high SNR, this stability region is largest for Candidate 3, followed by Candidate 1. Candidate 2 provides the smallest overall stability region. For example, if one takes a slice of each figure at B=1, the resulting range of A for which V_{k+1} - V_k > 0 is largest for Candidate 2. Near the origin, which is the most likely operating point, the stability region for all three candidates is similar for sufficiently high SNR. Note again that for global uniform asymptotic stability, the entire surface would need to be below the zero plane, which is impossible unless absolute bounds on ||Q_k|| are known, given the inability to affect parameter B through feedforward control.
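The stability regions of Fig. 4 can be reproduced numerically by evaluating the bracketed factor of equation (25) over a grid in (A, B). The sketch below is ours; the (φ_k, g_1k, g_2k) values are illustrative assumptions, not the paper's candidates.

```python
import numpy as np

def lyap_diff_factor(A, B, phi, g1, g2):
    """Bracketed factor of equation (25). Since W0^T uk uk^T W0 >= 0,
    the sign of this factor determines whether V_{k+1} - V_k <= 0
    at a given operating point (A, B)."""
    return ((phi**2 - 1) * A**2 + g1**2 + g2**2 * B**2
            + 2 * phi * g1 * A + 2 * phi * g2 * A * B
            + 2 * g1 * g2 * B)

# Illustrative values of the parameters appearing in equation (21):
# leakage lam = 0.999 and step size scaled so mu_k*||Xk+Qk||^2 = 0.01
lam, mu_pow = 0.999, 0.01
phi, g1, g2 = lam - mu_pow, lam - 1.0, -mu_pow
A, B = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
stable = lyap_diff_factor(A, B, phi, g1, g2) <= 0
frac_stable = stable.mean()   # fraction of the sampled square that is stable
```

At A = B = 0 the factor reduces to g_1k^2 > 0, matching equation (22): the Wiener solution itself is not a stable point under leakage, consistent with the bias discussed above.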
Performance of each candidate tuning law is assessed by examining both the size of the stability region and the gradient of V_{k+1} - V_k with respect to parameters A and B. Note from equation (25) that the gradient of V_{k+1} - V_k approaches zero as λ_k approaches one and μ_k approaches zero (i.e. stability, but no convergence, and hence no noise reduction performance). In this case, the V_{k+1} - V_k surface is flat. In the stable region of Fig. 4, the gradient of the Lyapunov difference is larger for tuning that provides an aggressive step size. By inference, a tuning law providing a more negative V_{k+1} - V_k in the stable region should provide the best performance, while the tuning law providing the largest region in which V_{k+1} - V_k ≤ 0 provides the best stability. Table 1 records the maximum and minimum values of V_{k+1} - V_k for the range of A and B examined, showing that Candidate 2 should provide the best performance (and least stability), while Candidate 3 provides the best overall stability/performance trade-off for high SNR, followed by Candidates 1 and 2. For all three candidates, the leakage factor approaches one as SNR increases, as expected, and Candidate 2 provides the most aggressive step size, which relates to the larger gradient of
Figure 4. Lyapunov function difference as a function of output error ratio (A) and output noise ratio (B) for three
candidate tuning parameter laws and three signal-to-noise ratios (SNR).
Table 1
Tuning parameters and bounds on Lyapunov difference for three candidate tuning laws

                           Signal-to-noise ratio
Tuning law                 2             10            100
Candidate 1
  Min(V_{k+1} - V_k)       -0.4511       -0.4447       -0.4442
  Max(V_{k+1} - V_k)       0.2690        0.2668        0.2667
  μ_k                      3.0759e+05    2.2879e+04    2.7138e+02
  λ_k                      0.9969        0.9998        1.000
Candidate 2
  Min(V_{k+1} - V_k)       -0.9988       -1.0000       -1.0000
  Max(V_{k+1} - V_k)       0.9148        0.9932        0.9999
  μ_k                      8.7449e+05    6.8354e+04    8.1410e+02
  λ_k                      0.9912        0.9931        0.9999
Candidate 3
  Min(V_{k+1} - V_k)       -0.9999       -0.9996       -1.0000
  Max(V_{k+1} - V_k)       0.5197        0.3451        0.3326
  μ_k                      2.9051e+05    3.3373e+04    4.0694e+02
  λ_k                      0.6296        0.9725        0.9997
V_{k+1} - V_k and thus the best predicted performance. An alternate view of V_{k+1} - V_k as it relates to performance is to consider V_{k+1} - V_k as the rate of change of energy of the system. The faster the energy decreases, the faster the convergence, and hence the better the performance.
The stability analysis results do not require a stationary Wiener solution, and thus these results can be applied to reduction of both stationary and non-stationary X_k. The actual value of the Wiener solution, which is embedded in the parameters A and B, does affect the stability region, and it is possible that any of the three candidates can be instantaneously unstable given an inappropriate combination of A and B. The graphical representation of
Fig. 4 shows how close to the Wiener solution one can operate as a measure of
performance and uses the size of the stability region as a measure of stability. If the Wiener
solution is significantly time variant, the possibility of operating far from the Wiener
solution increases, requiring more attention to developing candidate tuning laws that
enhance the stability region for larger magnitudes of parameters A and B.
5. EXPERIMENTAL RESULTS
The three candidate Lyapunov tuned leaky LMS algorithms are evaluated and compared to (i) an empirically tuned, fixed leakage parameter leaky normalised LMS algorithm (LNLMS), and (ii) an empirically tuned normalised LMS algorithm with no leakage parameter (NLMS). The comparisons are made for a low-frequency single-source, single-
point noise cancellation system in an acoustic test chamber designed to provide a carefully
controlled acoustic environment with a flat frequency response over the range of 0–200 Hz
[11] for sound pressure levels up to 140 dB. The system under study is a prototype
communication headset earcup. The earcup contains an external microphone to measure
the reference signal, an internal microphone to measure the error signal, and an internal
noise cancellation speaker to generate yk : Details regarding the prototype are given in [8].
Figure 5. Representative power spectrum of aircraft noise for experimental evaluation of Lyapunov tuned
leaky LMS algorithms. Shaded area denotes the band-limited range used for experiments.
Table 2
Mean tuning parameters for three candidate adaptive LNLMS algorithms

LMS algorithm    SPL (dB)    Mean leakage factor    Standard deviation of leakage factor (×10 000)
LNLMS(80)        80          0.9999306              0 (constant)
LNLMS(100)       100         0.999988305            0 (constant)
Figure 6. Performance of empirically tuned NLMS and LNLMS algorithms for non-stationary aircraft noise
at 100 dB.
Figure 7. Performance of empirically tuned NLMS and LNLMS algorithms for non-stationary aircraft noise
at 80 dB.
weight vector in Fig. 8. The results of Figs 6-8 demonstrate both the loss of stability when using an overly aggressive (large) fixed leakage parameter and the loss of performance when a less aggressive (small) leakage parameter is required in order to retain stability over a large dynamic range of the reference input signal.
Figure 8. RMS weight vector trajectory for empirically tuned NLMS and LNLMS algorithms for non-stationary aircraft noise at (a) 100 dB SPL, (b) 80 dB SPL.
Figure 9. Performance of three candidate Lyapunov tuned LNLMS algorithms for non-stationary aircraft noise at 100 dB. Candidate 1: equations (26) and (27); Candidate 2: equations (26) and (31); Candidate 3: equations (32) and (37).
Figure 10. Performance of three candidate Lyapunov tuned LNLMS algorithms for non-stationary aircraft noise at 80 dB. Candidate 1: equations (26) and (27); Candidate 2: equations (26) and (31); Candidate 3: equations (32) and (37).
algorithms are more aggressively tuned and operate closer to the Wiener solution,
providing better performance over a large dynamic range than constant leakage factor
algorithms. Finally, relative performance, which is predicted to be most aggressive for
Candidate 2, followed by Candidates 3 and 1, respectively, is seen in Fig. 9. Candidate 2
provides the fastest convergence and the largest SPL attenuation of the three candidates.
The experimental results provide evidence that stability and performance gains are
achieved in the reduction of highly non-stationary noise for an optimised combination of
both adaptive step size and adaptive leakage factor without requiring empirical tuning,
with Candidate 3 providing the best overall stability and performance trade-offs.
6. CONCLUSION
A Lyapunov tuning method is presented to develop candidate adaptive leakage and step
size combinations and to study the stability and performance trade-offs of time-varying
leakage and step size LMS algorithms for these candidate tuning laws in the presence of
bounded noise on the reference input signal. By parameterising a Lyapunov function difference, candidate tuning laws that depend only on known quantities (the instantaneous, noise-corrupted reference input, the measurement noise variance and/or the filter length) are proposed. The Lyapunov analysis enables comparison of candidate tuning
laws in terms of the region of stability around the Wiener solution for a worst-case SNR,
which in turn provides an intuitive understanding of noise reduction performance.
Analysis of stability and performance of Lyapunov tuned algorithms applies to systems
with either stationary or non-stationary reference inputs. Experimental low-frequency
noise reduction performance in a single-source, single-point acoustic noise cancellation
system is measured for three candidate tuning laws and a non-stationary noise source.
Figure 11. RMS weight vector trajectory for three candidate Lyapunov tuned LNLMS algorithms for non-stationary aircraft noise at (a) 100 dB SPL, (b) 80 dB SPL.
These results demonstrate that a candidate Lyapunov tuned leaky LMS algorithm can be
designed to retain stability and exhibit noise reduction performance superior to
empirically tuned, fixed leakage parameter LNLMS algorithms.
ACKNOWLEDGEMENTS
This research is supported in part by the United States Air Force contract number
F41624-99-C-6006 through a subcontract with Creare, Inc. The authors are grateful to the
Air Force Research Laboratory Human Effectiveness Directorate, Wright Patterson Air
Force Base, Dayton, Ohio, for the support. The authors thank Ashok Ramasubramanian
for helpful discussions regarding Lyapunov analysis.
REFERENCES
1. R. D. Gitlin, H. C. Meadows and S. B. Weinstein 1982 Bell System Technical Journal 61,
1817–1839. The tap-leakage algorithm: an algorithm for the stable operation of a digitally
implemented fractionally spaced equalizer.
2. K. Mayyas and T. Aboulnasr 1997 IEEE Transactions on Signal Processing 45, 924–934.
Leaky LMS algorithm: MSE analysis for Gaussian data.
3. V. Nascimento and A. Sayed 1999 IEEE Transactions on Signal Processing 47, 3261–3276.
Unbiased and stable leakage-based adaptive filters.
4. S. M. Kuo and D. R. Morgan 1996 Active Noise Control Systems. New York: John Wiley and
Sons.
5. O. Macchi 1995 Adaptive Processing, the Least Mean Squares Approach with Applications in
Transmission. New York: John Wiley and Sons.
6. J. E. Slotine and W. Li 1991 Applied Nonlinear Control. Englewood Cliffs, NJ: Prentice-Hall.
7. N. Kalouptsidis and S. Theodoridis 1993 System Identification and Processing Algorithms.
Englewood Cliffs, NJ: Prentice-Hall.
8. D. Cartes 2000 Ph.D. dissertation, Thayer School of Engineering, Dartmouth College.
Lyapunov tuning and optimization of feedforward noise reduction for single-point, single-source
cancellation.
9. S. Haykin 1996 Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice-Hall.
10. S. B. Gelfand, Y. Wei and J. V. Krogmeier 1999 IEEE Transactions on Signal Processing 47,
3277–3288. The stability of variable step-size LMS algorithms.
11. J. G. Ryan, E. A. G. Shaw, A. J. Brammer and G. Zhang 1993 Canadian Acoustics 21, 19–20.
Enclosure for low frequency assessment of active noise reducing circumaural headsets and
hearing protection.