IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 52, NO. 3, MARCH 2004
Abstract—This paper proposes a new structure for split transversal filtering and introduces the optimum split Wiener filter. The approach consists of combining the idea of split filtering with a linearly constrained optimization scheme. Furthermore, a continued split procedure, which leads to a multisplit filter structure, is considered. It is shown that the multisplit transform is not an input whitening transformation. Instead, it increases the diagonalization factor of the input signal correlation matrix without affecting its eigenvalue spread. A power-normalized, time-varying step-size least mean square (LMS) algorithm, which exploits the nature of the transformed input correlation matrix, is proposed for updating the adaptive filter coefficients. The multisplit approach is extended to linear-phase adaptive filtering and linear prediction. The optimum symmetric and antisymmetric linear-phase Wiener filters are presented. Simulation results enable us to evaluate the performance of the multisplit LMS algorithm.

Index Terms—Adaptive filtering, linear-phase filtering, linear prediction, linearly constrained filtering, split filtering, Wiener filtering.

I. INTRODUCTION

NONRECURSIVE systems have been frequently used in digital signal processing, mainly in adaptive filtering. Such finite impulse response (FIR) filters have the desirable properties of guaranteed stability and absence of limit cycles. However, in some applications (e.g., noise and echo cancellation and channel equalization, to name a few in the communication field), the filter order must be large in order to obtain acceptable performance. Consequently, an excessive number of multiplication operations is required, and the implementation of the filter becomes unfeasible, even on the most powerful digital signal processors. The problem grows worse in adaptive filtering: besides the computational complexity, the convergence rate and the tracking capability of the algorithms also deteriorate with an increasing number of coefficients to be updated.

Owing to its simplicity and robustness, the least mean square (LMS) algorithm is one of the most widely used algorithms for adaptive signal processing. Unfortunately, its performance in terms of convergence rate and tracking capability depends on the eigenvalue spread of the input signal correlation matrix [1]–[3]. Transform-domain LMS algorithms, based on transforms such as the discrete cosine transform (DCT) and the discrete Fourier transform (DFT), have been employed to solve this problem at the expense of a high computational complexity [2], [4]. In general, the approach consists of using an orthogonal transform together with power normalization to speed up the convergence of the LMS algorithm. Other interesting and efficient approaches have also been proposed in the literature [5], [6], but they still present the same tradeoff between performance and complexity.

Another alternative for overcoming the aforementioned drawbacks of nonrecursive adaptive systems is the split processing technique. Its fundamental principles were introduced when Delsarte and Genin proposed a split Levinson algorithm for real Toeplitz matrices in [7]. By identifying the redundancy in computing the set of symmetric and antisymmetric parts of the predictors, they reduced the number of multiplication operations of the standard Levinson algorithm by about one half. Subsequently, the same authors extended the technique to classical algorithms in linear prediction theory, such as the Schur, lattice, and normalized lattice algorithms [8].

A split LMS adaptive filter for autoregressive (AR) modeling (linear prediction) was proposed in [9] and generalized to a so-called unified approach [10], [11] by the introduction of continuous splitting and the corresponding application to the general transversal filtering problem. Nevertheless, a proper formulation of the split filtering problem has yet to be provided, and such a formulation would bring more insight into this versatile digital signal processing technique, whose structure exhibits high modularity, parallelism, and concurrency. This is the purpose of the present paper.

By means of a joint approach combining split transversal filtering and linearly constrained optimization, a new structure for the split transversal filter is proposed. The optimum split Wiener filter and the optimum symmetric and antisymmetric linear-phase Wiener filters are introduced. The approach consists of imposing the symmetry and antisymmetry conditions on the impulse responses of two filters connected in parallel by means of an appropriate set of linear constraints implemented with the so-called generalized sidelobe canceller (GSC) structure.

Furthermore, a continued splitting process is applied to the proposed approach, giving rise to a multisplit filtering structure. We show that such multisplit processing does not reduce the eigenvalue spread, but it does improve the diagonalization factor of the input signal correlation matrix. The interpretations of the splitting transform as a linearly constrained processing are then

Manuscript received March 21, 2002; revised May 19, 2003. The associate editor coordinating the review of this paper and approving it for publication was Dr. Naofal M. W. Al-Dhahir.
L. S. Resende is with the Electrical Engineering Department, Federal University of Santa Catarina, 88040-900, Florianópolis-SC, Brazil (e-mail: leonardo@eel.ufsc.br).
J. M. T. Romano is with the Communication Department, State University of Campinas, 13083-970, Campinas-SP, Brazil (e-mail: romano@decom.fee.unicamp.br).
M. G. Bellanger is with the Laboratoire d'Electronique et Communication, Conservatoire National des Arts et Métiers, 75141, Paris, France (e-mail: bellang@cnam.fr).
Digital Object Identifier 10.1109/TSP.2003.822351
RESENDE et al.: SPLIT WIENER FILTERING WITH APPLICATION IN ADAPTIVE SYSTEMS 637
and $\mathbf{J}$ is the $N$-by-$N$ reflection matrix (or exchange matrix), which has unit elements along the cross diagonal and zeros elsewhere:

$$\mathbf{J} = \begin{bmatrix} 0 & \cdots & 0 & 1 \\ 0 & \cdots & 1 & 0 \\ \vdots & & & \vdots \\ 1 & 0 & \cdots & 0 \end{bmatrix}$$

Thus, if $\mathbf{x}(n) = [x(n)\ x(n-1)\ \cdots\ x(n-N+1)]^{T}$, then $\mathbf{J}\mathbf{x}(n) = [x(n-N+1)\ \cdots\ x(n-1)\ x(n)]^{T}$. The symmetry and antisymmetry conditions of $\mathbf{b}$ and $\mathbf{c}$ are, respectively, described by

$$\mathbf{J}\mathbf{b} = \mathbf{b} \quad\text{and}\quad \mathbf{J}\mathbf{c} = -\mathbf{c}. \tag{5}$$
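The symmetry and antisymmetry conditions above are easy to verify numerically. The following sketch (illustrative only; the function name and test vector are our own) forms the symmetric and antisymmetric parts of an arbitrary impulse response and checks that $\mathbf{Jb} = \mathbf{b}$ and $\mathbf{Jc} = -\mathbf{c}$:

```python
import numpy as np

def split_filter(w):
    """Split an impulse response w into its symmetric part b and
    antisymmetric part c, so that w = b + c with Jb = b and Jc = -c,
    where J is the reflection (exchange) matrix."""
    Jw = w[::-1]            # multiplying by J simply reverses the order
    b = 0.5 * (w + Jw)      # symmetric part
    c = 0.5 * (w - Jw)      # antisymmetric part
    return b, c

w = np.array([1.0, 2.0, 3.0, 4.0])
b, c = split_filter(w)
J = np.fliplr(np.eye(4))          # explicit N-by-N reflection matrix
assert np.allclose(b + c, w)      # the split reconstructs w
assert np.allclose(J @ b, b)      # symmetry condition (5)
assert np.allclose(J @ c, -c)     # antisymmetry condition (5)
```

Note that applying $\mathbf{J}$ never requires arithmetic: it is a pure reordering of the vector entries.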
where $\sigma_d^2$ is the variance of the desired response $d(n)$, $\mathbf{R}$ is the $N$-by-$N$ correlation matrix of $\mathbf{x}(n)$, and $\mathbf{p}$ is the $N$-by-$1$ cross-correlation vector between $\mathbf{x}(n)$ and $d(n)$.

Appealing to the symmetric and square Toeplitz properties of the correlation matrix $\mathbf{R}$, it can easily be shown that $\mathbf{J}\mathbf{R}\mathbf{J} = \mathbf{R}$. A matrix with this property is said to be centrosymmetric [15] and, in the case of $\mathbf{R}$, can be partitioned into the form given in (17).
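The centrosymmetry property $\mathbf{J}\mathbf{R}\mathbf{J} = \mathbf{R}$ can be checked for any symmetric Toeplitz matrix; the autocorrelation values below are arbitrary illustrative numbers, not taken from the paper:

```python
import numpy as np

# illustrative autocorrelation sequence of a stationary process
r = np.array([1.0, 0.6, 0.3, 0.1])
N = len(r)
# symmetric Toeplitz correlation matrix R with entries r[|i-j|]
R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])

J = np.fliplr(np.eye(N))          # reflection (exchange) matrix
assert np.allclose(J @ R @ J, R)  # R is centrosymmetric: J R J = R
```

Reversing both the row and column order of a symmetric Toeplitz matrix leaves it unchanged, which is exactly the statement $\mathbf{J}\mathbf{R}\mathbf{J} = \mathbf{R}$.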
and the scheme of Fig. 3(b) corresponds to the optimum split Wiener filter:

$$\mathbf{w}_{\mathrm{opt}} = \mathbf{b}_{\mathrm{opt}} + \mathbf{c}_{\mathrm{opt}} \tag{21}$$

where the symmetric component $\mathbf{b}_{\mathrm{opt}}$ is given by (22) and the antisymmetric component $\mathbf{c}_{\mathrm{opt}}$ by (23). The proof of (21) is given in the Appendix.

Notice that (22) and (19) define the true optimum linear-phase Wiener filter, having both constant group delay and constant phase delay (symmetric impulse response). On the other hand, (23) and (20) define a second type of optimum generalized linear-phase Wiener filter (affine-phase filter), having only a constant group delay (antisymmetric impulse response).

IV. MULTISPLIT AND LINEAR TRANSFORMS

For ease of presentation, let $N = 2^m$, where $m$ is an integer greater than one. Now, if each branch in Fig. 3(b) is considered separately, the transversal filters $\mathbf{b}$ and $\mathbf{c}$ can also be split into their symmetric and antisymmetric parts. Proceeding continuously with this process and splitting the resulting filters, we arrive, after $m$ steps of splitting operations, at the multisplit scheme shown in Fig. 4, where the stage matrices are $N/2$-by-$N/2$ matrices such as in (10) and (11), respectively, and the $w_i$ are the single parameters of the resulting zero-order filters. The overall scheme corresponds to the linear transformation in (24), where the transform matrix is given in (25).

Let us denote by $\tilde{\mathbf{R}}$ the matrix on the right side of (27). It is similar to $\mathbf{R}$ and is obtained from $\mathbf{R}$ by means of a similarity transformation [16]. Therefore, based on (27), the eigenvalues of $\tilde{\mathbf{R}}$ and $\mathbf{R}$ coincide, and consequently, the linear transformation does not affect the eigenvalue spread of the input data correlation matrix.

Now, an interesting point to mention is that the columns of the transform matrix can be permuted without affecting its properties. This amounts to a rearrangement of the single parameters in Fig. 4 in different sequences, so that there are $N!$ possible permutations. The remarkable result is that one of them turns the transform into the $N$-order Hadamard matrix, so that the multisplit scheme can be represented in the compact form shown in Fig. 5.

The Hadamard matrix of order $2N$ can be constructed from the Hadamard matrix of order $N$ as follows:

$$\mathbf{H}_{2N} = \begin{bmatrix} \mathbf{H}_{N} & \mathbf{H}_{N} \\ \mathbf{H}_{N} & -\mathbf{H}_{N} \end{bmatrix} \tag{28}$$

Starting with $\mathbf{H}_{1} = [1]$, this gives $\mathbf{H}_{2}$, $\mathbf{H}_{4}$, $\mathbf{H}_{8}$, and Hadamard matrices of all orders that are powers of two. An alternative way of describing (28) is

$$\mathbf{H}_{2^{k}} = \mathbf{H}_{2} \otimes \mathbf{H}_{2^{k-1}}, \quad k = 2, 3, \ldots \tag{29}$$

where $\otimes$ denotes the Kronecker product of matrices, and

$$\mathbf{H}_{2} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}. \tag{30}$$
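The recursive construction (28) and its Kronecker-product form (29), (30) can be sketched in a few lines (the helper name `hadamard` is our own):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction (28): H_{2N} = [[H_N, H_N], [H_N, -H_N]],
    starting from H_1 = [1]; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H4 = hadamard(4)
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
# Kronecker form (29): H_4 = H_2 kron H_2
assert np.allclose(H4, np.kron(H2, H2))
# rows are mutually orthogonal: H H^T = N I
assert np.allclose(H4 @ H4.T, 4 * np.eye(4))
```

Because $\mathbf{H}_N \mathbf{H}_N^T = N\mathbf{I}$, the transform is orthogonal up to a scale factor, which is why it changes the basis of the correlation matrix without changing its eigenvalue spread.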
Fig. 5. Hadamard transform of the input $\mathbf{x}(n)$.

Fig. 6. Flow graph of butterfly computation for $\mathbf{M}\mathbf{x}(n)$.

Using (25), this results in a linear transformation of $\mathbf{x}(n)$ with the flow graph depicted in Fig. 6. Similarly, denoting this linear transform by $\mathbf{M}$, the multisplit scheme is also represented by Fig. 5 by substituting $\mathbf{M}$ for $\mathbf{H}$, with the transform and the transformed correlation matrix given in (33) and (34).

From the normalizations in (35)–(37), (34) becomes a unitary similarity transformation. As a matter of fact, (34) reveals that the multisplit operation makes the symmetric and antisymmetric parts of the transformed input uncorrelated, where these parts are as defined in (38).
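The butterfly flow graph of Fig. 6 corresponds to the standard fast Walsh–Hadamard recursion, which uses only additions and subtractions. A minimal sketch (natural Sylvester ordering assumed; the function name is our own), cross-checked against the explicit matrix:

```python
import numpy as np

def fwht(x):
    """Butterfly computation of the Hadamard transform H x using only
    additions/subtractions, as in the flow graph of Fig. 6 (sketch)."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b   # one butterfly
        h *= 2
    return x

# cross-check against the explicit Sylvester-constructed H_8
H = np.array([[1.0]])
while H.shape[0] < 8:
    H = np.block([[H, H], [H, -H]])
x = np.arange(8.0)
assert np.allclose(fwht(x), H @ x)
```

Each of the $\log_2 N$ stages performs $N$ additions or subtractions, so no multiplication is ever needed, consistent with the remarks on computational burden below.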
Finally, the optimum coefficients $w_i$, $i = 1, \ldots, N$, in the scheme of Fig. 5 can be obtained by minimizing the mean-squared error, which results in (39). From (39), we also have (40).

From here on, we describe the application of the constrained optimization interpretation for the generation of new multisplit adaptive filtering structures and algorithms.

In the adaptive context, we can exploit the aforementioned properties of the multisplit transform in order to propose a power-normalized and time-varying step-size LMS algorithm for updating the single parameters independently.

Let us start with $m = 1$. Since, in this case, the symmetric and antisymmetric parts of the transformed input are uncorrelated, a least-squares version of the Newton method can be applied to update the parameters as follows: (41), with the quantities defined in (42) and (43).

Equation (43) is an estimate of the eigenvalues of the correlation matrix of the transformed input, which defines the step sizes used to adapt the individual weights in (41). To account for adaptive filtering operation in a nonstationary environment, a forgetting factor $\lambda$ is included in this recursive computation of the eigenvalues. The case $\lambda = 1$ applies to a wide-sense stationary environment. Notice that a particular choice of step size in (41) corresponds to the RLS algorithm applied to the parameters independently.

For $m > 1$, despite the residual correlation among the variables inside the transformed subvectors, the same strategy in (41) can be used, since the diagonalization factor of the correlation matrix has been improved by the multisplit transforms. In other words, the multisplit orthogonal transform together with power normalization can be used to improve the convergence rate of the LMS algorithm. In this case, based on (27), convergence of the single parameters is assured by

$$0 < \mu < \frac{2}{\lambda_{\max}}$$

where $\lambda_{\max}$ is the largest eigenvalue of the transformed input correlation matrix. Table I presents a summary of the proposed algorithm for multisplit adaptive filtering.

It is important to stress that the use of $\mathbf{H}$ in Table I is conditioned on $N$ being a power of two. Otherwise, the linear transform matrix is not composed only of $+1$'s and $-1$'s, requiring a number of multiplication operations proportional to $N$. Notwithstanding, the number of filter coefficients can usually be set to the next power of two to take advantage of the implementation simplicity and to reduce the computational burden. The butterfly computation in Fig. 6 requires only addition operations per iteration; in other words, no multiplication operation is demanded.

Finally, the procedure can be extended to complex parameters by applying the split processing to the real part as well as to the imaginary part.

In applications that require an adaptive filter with a linear-phase response, the symmetry constraint on the impulse response has been used. In that case, techniques such as DFT-LMS, DCT-LMS, and RLS would not solve the problem directly, since a symmetry constraint would still have to be imposed. Such applications fit the multisplit technique perfectly, because the symmetric or antisymmetric conditions on the impulse response of the filter are already guaranteed. In fact, we need to consider just one branch of Fig. 3(b).

For the symmetric impulse-response constraint, the input samples used for updating the single parameters are formed from sums of input samples placed symmetrically about the center of the delay line, and for the antisymmetric constraint from the corresponding differences. However, there is another interesting point concerning linear-phase adaptive filtering.

Consider again the structure of the split filter in Fig. 3(b). The least-squares (LS) criterion can be used on either the symmetric or the antisymmetric part to obtain the linear-phase filters. For example, the optimum solution for the filter with symmetric impulse response is given by (44), with the quantities defined in (45) and (46).

Proceeding, in the adaptive context, the RLS algorithm can be directly applied to update the symmetric part. On the other hand, it is worth pointing out that fast LS algorithms, which exploit the time-shift relationship of the input data vector, cannot be applied, because the transformed input vector does not satisfy this property [3].

TABLE I
MULTISPLIT LMS (MS-LMS) ALGORITHM

A final observation is required here. The direct application of the LS criterion to the scheme of Fig. 3(b) in order to obtain the optimum Wiener filter by computing the symmetric and antisymmetric parts independently corresponds to an approximate (quasi-optimal) LS solution. Since the LS-estimated correlation matrix is not centrosymmetric, the last term in (16) becomes nonzero, and consequently, the two parts cannot be independently computed.
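The power-normalized, time-varying step-size update summarized in Table I can be sketched on a toy system-identification task. Variable names, the forgetting-factor bookkeeping, and all parameter values below are our own assumptions; the paper's exact recursions are (41)–(43) and Table I:

```python
import numpy as np

def ms_lms(x, d, N=8, mu=0.5, lam=0.99, eps=1e-6):
    """Multisplit LMS sketch: Hadamard-transform the input regressor,
    then update each single parameter with a step size normalized by a
    recursive per-branch power (eigenvalue) estimate, cf. (41)-(43)."""
    H = np.array([[1.0]])
    while H.shape[0] < N:                 # Sylvester construction of H_N
        H = np.block([[H, H], [H, -H]])
    w = np.zeros(N)                       # single parameters
    p = np.full(N, eps)                   # recursive power estimates
    xb = np.zeros(N)                      # tap-delay line
    err = np.zeros(len(x))
    for n in range(len(x)):
        xb = np.roll(xb, 1)
        xb[0] = x[n]
        u = H @ xb                        # multisplit-transformed input
        e = d[n] - w @ u                  # a-priori error
        p = lam * p + u * u               # power estimate, forgetting factor lam
        w = w + mu * e * u / p            # power-normalized update
        err[n] = e
    return w, err

# toy experiment: identify a short FIR system (illustrative values)
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
h = np.array([1.0, 0.5, -0.3, 0.1, 0.0, 0.0, 0.0, 0.0])
d = np.convolve(x, h)[:len(x)]
w, err = ms_lms(x, d)
assert np.mean(err[-200:] ** 2) < np.mean(err[:200] ** 2)  # error decreases
```

The per-branch normalization by `p` plays the role of the eigenvalue estimate in (43): each single parameter sees an effectively whitened step size even though the transform itself is not a whitening transformation.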
VIII. SIMULATION RESULTS

To evaluate the performance of the MS-LMS algorithm in adaptive filtering, the same equalization system as in [2, Ch. 5] is considered (see Fig. 8). The channel input is binary, and the impulse response of the channel is described by the raised cosine

$$h(n) = \begin{cases} \dfrac{1}{2}\left[1 + \cos\!\left(\dfrac{2\pi}{W}(n-2)\right)\right], & n = 1, 2, 3 \\ 0, & \text{otherwise} \end{cases} \tag{47}$$

where $W$ controls the eigenvalue spread $\chi(\mathbf{R})$ of the correlation matrix of the tap inputs in the equalizer, with $\chi(\mathbf{R}) = 6.0782$ for the smaller value of $W$ and $\chi(\mathbf{R}) = 46.8216$ for the larger. The sequence $v(n)$ is an additive white noise that corrupts the channel output, and the equalizer has 11 coefficients.

Fig. 9 shows a comparison of the ensemble-averaged error performances of the DCT-LMS, MS-LMS, standard LMS, and RLS algorithms for the two values of $W$ (100 independent trials). The good performance of the MS-LMS algorithm can be observed in terms of convergence rate when compared with the standard LMS algorithm. On the other hand, we can verify that the MS-LMS algorithm is somewhat sensitive to variations in the eigenvalue spread, so that the DCT-LMS exhibits better performance in the second case. This shows clearly that the multisplit preprocessing does not orthogonalize the input data vector, but it improves the diagonalization of the input signal correlation matrix, which is taken into account in the power normalization used by the MS-LMS algorithm. Nevertheless, as far as the computational burden is concerned, its simplicity is striking when compared with the DCT. This aspect, together with the convergence improvement, justifies its application.

Finally, the good performance of the linear-phase RLS (LP-RLS) algorithm is also illustrated in this equalization example, where the considered channel has the linear-phase property.

Fig. 9. Learning curves: adaptive equalization. (a) $\chi(\mathbf{R}) = 6.0782$. (b) $\chi(\mathbf{R}) = 46.8216$.

IX. CONCLUSION

In the present work, it has been shown that the split transversal filtering problem can be formulated and solved by using linearly constrained optimization and can be implemented by means of a parallel GSC structure. The optimum split Wiener filter has then been introduced, together with its symmetric and antisymmetric linear-phase parts. Furthermore, a continued split procedure, which leads to a multisplit filter structure, has been considered. It has also been shown that the multisplit transform is not an input whitening transformation. Instead, it increases the diagonalization factor of the input signal correlation matrix without affecting its eigenvalue spread. A power-normalized, time-varying step-size LMS algorithm, which exploits the nature of the transformed input correlation matrix, has been proposed for updating the adaptive filter coefficients. The approach is generic: it can be used for any value of the filter order $N$, for complex as well as real parameters, and can be extended to linear-phase adaptive filtering and linear prediction. For a power-of-two value of $N$, the proposition corresponds to a Hadamard transform-domain filter, whose structure exhibits high modularity, parallelism, and concurrency, well suited to implementation in very large-scale integration (VLSI) technology. Finally, the simulation results confirm that split processing structures provide a powerful and interesting tool for adaptive filtering.

APPENDIX
PROOF OF (21)

The purpose of this Appendix is to show that the optimum Wiener filter can be split into two filters, with symmetric and antisymmetric impulse responses, connected in parallel according to Fig. 3(b) and (21). Using (19)–(23), we need to prove (48) or, equivalently, (49). The left side of (49) can be rewritten as (50).
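The raised-cosine channel (47) and the resulting eigenvalue spread of the equalizer-input correlation matrix can be reproduced with a short script. The values of $W$ and the noise variance below are illustrative assumptions rather than the paper's settings, and the helper names are our own:

```python
import numpy as np

def channel(W):
    """Raised-cosine channel of (47): h(n) = 0.5*[1 + cos(2*pi*(n-2)/W)]
    for n = 1, 2, 3 and zero otherwise; W controls the eigenvalue spread."""
    return np.array([0.5 * (1 + np.cos(2 * np.pi * (n - 2) / W))
                     for n in (1, 2, 3)])

def tap_input_correlation(h, N, sigma_v2):
    """Correlation matrix of the N equalizer tap inputs for a binary
    (+/-1) i.i.d. channel input plus white noise of variance sigma_v2:
    r(l) = sum_k h(k) h(k+l) + sigma_v2 * delta(l)."""
    r = np.array([np.sum(h[:len(h) - l] * h[l:]) if l < len(h) else 0.0
                  for l in range(N)])
    R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
    return R + sigma_v2 * np.eye(N)

# illustrative parameter values: a small and a large W, 11-tap equalizer
spreads = []
for W in (2.9, 3.5):
    R = tap_input_correlation(channel(W), N=11, sigma_v2=0.001)
    eig = np.linalg.eigvalsh(R)        # ascending eigenvalues
    spreads.append(eig[-1] / eig[0])   # eigenvalue spread chi(R)

assert spreads[1] > spreads[0] > 1.0   # larger W -> larger eigenvalue spread
```

Since $h(1) = h(3)$, the channel impulse response is symmetric, i.e., it has the linear-phase property exploited by the LP-RLS example above, and increasing $W$ widens the raised cosine and worsens the conditioning of $\mathbf{R}$.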