
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 52, NO. 3, MARCH 2004

Split Wiener Filtering With Application in Adaptive Systems

Leonardo S. Resende, Member, IEEE, João Marcos T. Romano, Member, IEEE, and Maurice G. Bellanger, Fellow, IEEE

Abstract—This paper proposes a new structure for split transversal filtering and introduces the optimum split Wiener filter. The approach consists of combining the idea of split filtering with a linearly constrained optimization scheme. Furthermore, a continued split procedure, which leads to a multisplit filter structure, is considered. It is shown that the multisplit transform is not an input whitening transformation. Instead, it increases the diagonalization factor of the input signal correlation matrix without affecting its eigenvalue spread. A power normalized, time-varying step-size least mean square (LMS) algorithm, which exploits the nature of the transformed input correlation matrix, is proposed for updating the adaptive filter coefficients. The multisplit approach is extended to linear-phase adaptive filtering and linear prediction. The optimum symmetric and antisymmetric linear-phase Wiener filters are presented. Simulation results enable us to evaluate the performance of the multisplit LMS algorithm.

Index Terms—Adaptive filtering, linear-phase filtering, linear prediction, linearly constrained filtering, split filtering, Wiener filtering.

I. INTRODUCTION

NONRECURSIVE systems have been frequently used in digital signal processing, mainly in adaptive filtering. Such finite impulse response (FIR) filters have the desirable properties of guaranteed stability and absence of limit cycles. However, in some applications, the filter order must be large (e.g., noise and echo cancellation and channel equalization, to name a few in the communication field) in order to obtain an acceptable performance. Consequently, an excessive number of multiplication operations is required, and the implementation of the filter becomes unfeasible, even on the most powerful digital signal processors. The problem grows worse in adaptive filtering. Besides the computational complexity, the convergence rate and the tracking capability of the algorithms also deteriorate with an increasing number of coefficients to be updated.

Owing to its simplicity and robustness, the least mean square (LMS) algorithm is one of the most widely used algorithms for adaptive signal processing. Unfortunately, its performance in terms of convergence rate and tracking capability depends on the eigenvalue spread of the input signal correlation matrix [1]–[3]. Transform-domain LMS algorithms, based on transforms like the discrete cosine transform (DCT) and the discrete Fourier transform (DFT), have been employed to solve this problem at the expense of a high computational complexity [2], [4]. In general, the approach consists of using an orthogonal transform together with power normalization to speed up the convergence of the LMS algorithm. Very interesting, efficient, and different approaches have also been proposed in the literature [5], [6], but they still present the same tradeoff between performance and complexity.

Another alternative to overcome the aforementioned drawbacks of nonrecursive adaptive systems is the split processing technique. The fundamental principles were introduced when Delsarte and Genin proposed a split Levinson algorithm for real Toeplitz matrices in [7]. Identifying the redundancy of computing the set of the symmetric and antisymmetric parts of the predictors, they reduced the number of multiplication operations of the standard Levinson algorithm by about one half. Subsequently, the same authors extended the technique to classical algorithms in linear prediction theory, such as the Schur, the lattice, and the normalized lattice algorithms [8].

A split LMS adaptive filter for autoregressive (AR) modeling (linear prediction) was proposed in [9] and generalized to a so-called unified approach [10], [11] by the introduction of continuous splitting and the corresponding application to a general transversal filtering problem. Actually, an appropriate formulation of the split filtering problem has yet to be provided, and such a formulation would bring more insight into this versatile digital signal processing technique, whose structure exhibits high modularity, parallelism, or concurrency. This is the purpose of the present paper.

By using an original and elegant joint approach combining split transversal filtering and linearly constrained optimization, a new structure for the split transversal filter is proposed. The optimum split Wiener filter and the optimum symmetric and antisymmetric linear-phase Wiener filters are introduced. The approach consists of imposing the symmetry and the antisymmetry conditions on the impulse responses of two filters connected in parallel by means of an appropriate set of linear constraints implemented with the so-called generalized sidelobe canceller structure.

Furthermore, a continued splitting process is applied to the proposed approach, giving rise to a multisplit filtering structure. We show that such multisplit processing does not reduce the eigenvalue spread, but it does improve the diagonalization factor of the input signal correlation matrix. The interpretations of the splitting transform as a linearly constrained processing are then

Manuscript received March 21, 2002; revised May 19, 2003. The associate editor coordinating the review of this paper and approving it for publication was Dr. Naofal M. W. Al-Dhahir.

L. S. Resende is with the Electrical Engineering Department, Federal University of Santa Catarina, 88040-900, Florianópolis-SC, Brazil (e-mail: leonardo@eel.ufsc.br).

J. M. T. Romano is with the Communication Department, State University of Campinas, 13083-970, Campinas-SP, Brazil (e-mail: romano@decom.fee.unicamp.br).

M. G. Bellanger is with the Laboratoire d'Electronique et Communication, Conservatoire National des Arts et Métiers, 75141, Paris, France (e-mail: bellang@cnam.fr).

Digital Object Identifier 10.1109/TSP.2003.822351

(a)
(b)
Fig. 1. Split adaptive transversal filtering.

considered in adaptive filtering, and a power normalized and time-varying step-size LMS algorithm is suggested for updating the parameters of the proposed scheme. We also extend such an approach to linear-phase adaptive filtering and linear prediction. Finally, simulation results obtained with the multisplit algorithm are presented and compared with the standard LMS, DCT-LMS, and recursive least squares (RLS) algorithms.

II. SPLIT TRANSVERSAL FILTERING

Let us start with the following sentence: any finite sequence (e.g., the impulse response of a transversal filter) can be expressed as the sum of a symmetric sequence and an antisymmetric sequence. The symmetric (antisymmetric) sequence is given by the sum (difference) of the original sequence and its backward version, dividing by two the resulting sequence.

Specifically, in matrix notation, let

$\mathbf{w} = [w_0\;\; w_1\;\; \cdots\;\; w_{N-1}]^T$ (1)

denote the $N$-by-1 tap-weight vector of a transversal filter [see Fig. 1(a)]. With $\mathbf{w}_s$ and $\mathbf{w}_a$ denoting the vectors of the symmetric and antisymmetric parts of $\mathbf{w}$, then

$\mathbf{w} = \mathbf{w}_s + \mathbf{w}_a$ (2)

where

$\mathbf{w}_s = \tfrac{1}{2}(\mathbf{w} + \mathbf{J}\mathbf{w})$ (3)

$\mathbf{w}_a = \tfrac{1}{2}(\mathbf{w} - \mathbf{J}\mathbf{w})$ (4)

and $\mathbf{J}$ is the $N$-by-$N$ reflection matrix (or exchange matrix), which has unit elements along the cross diagonal and zeros elsewhere, so that $\mathbf{J}\mathbf{w} = [w_{N-1}\;\; \cdots\;\; w_1\;\; w_0]^T$ is the backward version of $\mathbf{w}$. The symmetry and antisymmetry conditions of $\mathbf{w}_s$ and $\mathbf{w}_a$ are, respectively, described by

$\mathbf{J}\mathbf{w}_s = \mathbf{w}_s$ (5)

and

$\mathbf{J}\mathbf{w}_a = -\mathbf{w}_a$ (6)

which can be easily verified from (3) and (4).

Fig. 2. Generalized sidelobe canceller.

Now, consider the classical scheme of a transversal filter, as shown in Fig. 1, in which the tap-weight vector $\mathbf{w}$ has been split into its symmetric and antisymmetric parts. The input signal $x(n)$ and the desired response $d(n)$ are modeled as wide-sense stationary discrete-time stochastic processes of zero mean. Without loss of generality, all the parameters have been assumed to be real valued.

III. SPLIT FILTERING AS A LINEARLY CONSTRAINED FILTERING PROBLEM

The principle of linearly constrained transversal optimal filtering is to minimize the power of the estimation error $e(n)$, subject to a set of linear equations defined by

$\mathbf{C}^T\mathbf{w} = \mathbf{f}$ (7)

where $\mathbf{C}$ is the $N$-by-$K$ constraint matrix, and $\mathbf{f}$ is a $K$-element response vector. An alternative implementation is represented in block diagram form in Fig. 2. This structure is called the generalized sidelobe canceller (GSC) [12], [14]. Essentially, it consists of changing a constrained minimization problem into an unconstrained one. The columns of the $N$-by-$(N-K)$ matrix $\mathbf{B}$ represent a basis for the orthogonal complement of the subspace spanned by the columns of $\mathbf{C}$. The matrix $\mathbf{B}$ is termed the signal blocking matrix. The $(N-K)$-element vector $\mathbf{w}_B$ represents an unconstrained filter, and the coefficient vector $\mathbf{w}_q = \mathbf{C}(\mathbf{C}^T\mathbf{C})^{-1}\mathbf{f}$ represents a filter that satisfies the constraints ($\mathbf{C}^T\mathbf{w}_q = \mathbf{f}$).

The splitting of $\mathbf{w}$ into its symmetric and antisymmetric parts (see Fig. 1) can be interpreted as a linearly constrained optimization problem. Let us define matrices $\mathbf{C}_s$ and $\mathbf{C}_a$, as well as vectors $\mathbf{b}_s$ and $\mathbf{b}_a$, such that $\mathbf{w}_s = \mathbf{C}_s\mathbf{b}_s$ and $\mathbf{w}_a = \mathbf{C}_a\mathbf{b}_a$. For $N$ odd ($N = 2M+1$)

$\mathbf{C}_s = \begin{bmatrix} \mathbf{I}_{M+1} \\ \begin{matrix}\mathbf{J}_M & \mathbf{0}\end{matrix} \end{bmatrix}$ (8)
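The decomposition in (1)-(6) can be sketched numerically. The following is a minimal illustration (variable names are ours, not the paper's), building the exchange matrix and checking the three identities:

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
w = rng.standard_normal(N)          # tap-weight vector, cf. (1)

J = np.fliplr(np.eye(N))            # reflection (exchange) matrix
w_s = 0.5 * (w + J @ w)             # symmetric part, cf. (3)
w_a = 0.5 * (w - J @ w)             # antisymmetric part, cf. (4)

assert np.allclose(w, w_s + w_a)    # decomposition (2)
assert np.allclose(J @ w_s, w_s)    # symmetry condition (5)
assert np.allclose(J @ w_a, -w_a)   # antisymmetry condition (6)
```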

and

$\mathbf{C}_a = \begin{bmatrix} \mathbf{I}_M \\ \mathbf{0}^T \\ -\mathbf{J}_M \end{bmatrix}$ (9)

or, for $N$ even ($N = 2M$)

$\mathbf{C}_s = \begin{bmatrix} \mathbf{I}_M \\ \mathbf{J}_M \end{bmatrix}$ (10)

and

$\mathbf{C}_a = \begin{bmatrix} \mathbf{I}_M \\ -\mathbf{J}_M \end{bmatrix}$ (11)

where $\mathbf{I}_M$ and $\mathbf{J}_M$ denote the $M$-by-$M$ identity and reflection matrices, $\mathbf{b}_s$ is an $(M+1)$-by-1 vector and $\mathbf{b}_a$ an $M$-by-1 vector for $N$ odd, and both are $M$-by-1 vectors for $N$ even. Then, consider imposing the constraints defined by

$\mathbf{C}_a^T\mathbf{w}_s = \mathbf{0}$ (12)

and

$\mathbf{C}_s^T\mathbf{w}_a = \mathbf{0}$ (13)

on $\mathbf{w}_s$ and $\mathbf{w}_a$. It establishes the symmetry and antisymmetry properties of $\mathbf{w}_s$ and $\mathbf{w}_a$, respectively.

Notice that (12), with $\mathbf{f} = \mathbf{0}$, requires that $\mathbf{w}_s$ be orthogonal to the subspace spanned by the columns of $\mathbf{C}_a$. Likewise, $\mathbf{w}_a$ is orthogonal to the subspace spanned by the columns of $\mathbf{C}_s$ for (13).

Using the GSC structure and these constraints on the respective branches of $\mathbf{w}_s$ and $\mathbf{w}_a$ in Fig. 1(b) leads to the block diagram shown in Fig. 3(a) ($N$ even). However, since $\mathbf{f}_s = \mathbf{0}$ and $\mathbf{f}_a = \mathbf{0}$, the quiescent vectors $\mathbf{w}_{qs}$ and $\mathbf{w}_{qa}$ are zero, eliminating the two corresponding branches. Moreover, it is easy to verify that $\mathbf{C}_s^T\mathbf{C}_a = \mathbf{0}$ and $\mathbf{C}_a^T\mathbf{C}_s = \mathbf{0}$. Thus, $\mathbf{C}_s$ is a possible choice of signal blocking matrix to span the subspace that is the orthogonal complement of the subspace spanned by the columns of $\mathbf{C}_a$. This property can also be verified by the fact that $\mathbf{C}_s$ forces $\mathbf{w}_s$ to be symmetric through (12), whereas $\mathbf{C}_a$ would force it to be antisymmetric. Considering the above properties, Fig. 3(a) can be simplified to the block diagram shown in Fig. 3(b).

(a)
(b)
Fig. 3. GSC implementation of the split filter.

It is interesting to observe that the vectors $\mathbf{b}_s$ and $\mathbf{b}_a$ are merely composed of the first coefficients of $\mathbf{w}_s$ and $\mathbf{w}_a$. This can be easily verified by noting that the premultiplication of $\mathbf{w}_s$ by $\mathbf{C}_s^T$ yields $2\mathbf{b}_s$ and of $\mathbf{w}_a$ by $\mathbf{C}_a^T$ yields $2\mathbf{b}_a$. The estimation error is then given by

$e(n) = d(n) - (\mathbf{w}_s + \mathbf{w}_a)^T\mathbf{x}(n)$ (14)

where

$\mathbf{x}(n) = [x(n)\;\; x(n-1)\;\; \cdots\;\; x(n-N+1)]^T$ (15)

denotes the $N$-by-1 tap-input vector. In the mean-squared-error sense, the vectors $\mathbf{w}_s$ and $\mathbf{w}_a$ are chosen to minimize the following cost function:

$J = \sigma_d^2 - 2\mathbf{w}_s^T\mathbf{p} - 2\mathbf{w}_a^T\mathbf{p} + \mathbf{w}_s^T\mathbf{R}\mathbf{w}_s + \mathbf{w}_a^T\mathbf{R}\mathbf{w}_a + 2\mathbf{w}_s^T\mathbf{R}\mathbf{w}_a$ (16)

where $\sigma_d^2$ is the variance of the desired response $d(n)$, $\mathbf{R}$ is the $N$-by-$N$ correlation matrix of $\mathbf{x}(n)$, and $\mathbf{p}$ is the $N$-by-1 cross-correlation vector between $\mathbf{x}(n)$ and $d(n)$.

Appealing to the symmetric and square Toeplitz properties of the correlation matrix $\mathbf{R}$, it can easily be shown that $\mathbf{J}\mathbf{R}\mathbf{J} = \mathbf{R}$. A matrix with this property is said to be centrosymmetric [15] and, in the case of $N$ even, can be partitioned into the form

$\mathbf{R} = \begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{J}\mathbf{B}\mathbf{J} & \mathbf{J}\mathbf{A}\mathbf{J} \end{bmatrix}$ (17)

where $\mathbf{A}$ and $\mathbf{B}$ are $M$-by-$M$ correlation matrices of $\mathbf{x}(n)$. When $N$ is odd ($N = 2M+1$), it can be partitioned as

$\mathbf{R} = \begin{bmatrix} \mathbf{A} & \mathbf{r} & \mathbf{B} \\ \mathbf{r}^T & \sigma_x^2 & \mathbf{r}^T\mathbf{J} \\ \mathbf{J}\mathbf{B}\mathbf{J} & \mathbf{J}\mathbf{r} & \mathbf{J}\mathbf{A}\mathbf{J} \end{bmatrix}$ (18)

where $\mathbf{r}$ is an $M$-by-1 correlation vector of $\mathbf{x}(n)$, and $\sigma_x^2$ is a scalar denoting

the variance of the input signal $x(n)$. Then, it is easy to show by direct substitution of (10), (11), and (17) (or (8), (9), and (18) for $N$ odd) that the last term in (16) is equal to zero. In other words, $\mathbf{w}_s^T\mathbf{x}(n)$ and $\mathbf{w}_a^T\mathbf{x}(n)$ are statistically uncorrelated ($\mathbf{w}_s^T\mathbf{R}\mathbf{w}_a = 0$), and consequently, the symmetric and antisymmetric parts can be optimized separately. Thus, the optimum solutions are given by

$\mathbf{b}_{s,\mathrm{opt}} = (\mathbf{C}_s^T\mathbf{R}\mathbf{C}_s)^{-1}\mathbf{C}_s^T\mathbf{p}$ (19)

and

$\mathbf{b}_{a,\mathrm{opt}} = (\mathbf{C}_a^T\mathbf{R}\mathbf{C}_a)^{-1}\mathbf{C}_a^T\mathbf{p}$ (20)

and the scheme of Fig. 3(b) corresponds to the optimum split Wiener filter:

$\mathbf{w}_{\mathrm{opt}} = \mathbf{w}_{s,\mathrm{opt}} + \mathbf{w}_{a,\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{p}$ (21)

where

$\mathbf{w}_{s,\mathrm{opt}} = \mathbf{C}_s\mathbf{b}_{s,\mathrm{opt}}$ (22)

and

$\mathbf{w}_{a,\mathrm{opt}} = \mathbf{C}_a\mathbf{b}_{a,\mathrm{opt}}$ (23)

The proof of (21) is given in the Appendix.

Notice that (22) and (19) define the true optimum linear-phase Wiener filter, having both constant group delay and constant phase delay (symmetric impulse response). On the other hand, (23) and (20) define a second type of optimum generalized linear-phase Wiener filter (affine phase filter), having only the group delay constant (antisymmetric impulse response).

IV. MULTISPLIT AND LINEAR TRANSFORMS

For ease of presentation, let $N = 2^v$, where $v$ is an integer number greater than one. Now, if each branch in Fig. 3(b) is considered separately, the transversal filters $\mathbf{b}_s$ and $\mathbf{b}_a$ can also be split into their symmetric and antisymmetric parts. By proceeding continuously with this process and splitting the resulting filters, we arrive, after $v$ successive splitting steps, at the multisplit scheme shown in Fig. 4. At each stage, the constraint matrices are of the forms (10) and (11) at the appropriate (halved) dimensions, and $w_i$, for $i = 0, 1, \ldots, N-1$, are the single parameters of the resulting zero-order filters.

Fig. 4. Multisplit adaptive filtering.

The above multisplit scheme can be viewed as a linear transformation of $\mathbf{x}(n)$, which is denoted by

$\mathbf{x}_T(n) = \mathbf{T}\mathbf{x}(n)$ (24)

where

$\mathbf{T} = \mathbf{T}_v \cdots \mathbf{T}_2\mathbf{T}_1$ (25)

and

$\mathbf{T}_k = \mathbf{I}_{2^{k-1}} \otimes \begin{bmatrix}\mathbf{C}_s^T \\ \mathbf{C}_a^T\end{bmatrix}_{2^{v-k+1}}$ (26)

with the subscript indicating the dimension of the single-split matrix built from (10) and (11). It can be verified by direct substitution that, for $N = 2^v$, $\mathbf{T}$ is a matrix of $+1$'s and $-1$'s, in which the inner product of any two distinct columns is zero. In fact, $\mathbf{T}$ is a nonsingular matrix, with $\mathbf{T}^T\mathbf{T} = N\mathbf{I}$. In other words, the columns of $\mathbf{T}$ are mutually orthogonal.

The correlation matrix of $\mathbf{x}_T(n)$ is given by

$\mathbf{R}_T = E[\mathbf{x}_T(n)\mathbf{x}_T^T(n)] = \mathbf{T}\mathbf{R}\mathbf{T}^T = N\,\mathbf{T}\mathbf{R}\mathbf{T}^{-1}$ (27)

Let us denote by $\tilde{\mathbf{R}} = \mathbf{T}\mathbf{R}\mathbf{T}^{-1}$ the matrix at the right side of the above equation. It is similar to $\mathbf{R}$ and is obtained from $\mathbf{R}$ by means of a similarity transformation [16]. Therefore, based on (27), we can stress that $\chi(\mathbf{R}_T) = \chi(\mathbf{R})$, and consequently, the linear transformation does not affect the eigenvalue spread of the input data correlation matrix $\mathbf{R}$.

Now, an interesting point to mention is that the columns of $\mathbf{T}$ can be permuted without affecting its properties. This amounts to a rearrangement of the single parameters in Fig. 4 in different sequences. Then, there are $N!$ possible permutations. The remarkable result is that one of them turns $\mathbf{T}$ into the $N$-order Hadamard matrix so that the multisplit scheme can be represented in the compact form shown in Fig. 5.

The Hadamard matrix of order $2N$ can be constructed from $\mathbf{H}_N$ as follows:

$\mathbf{H}_{2N} = \begin{bmatrix}\mathbf{H}_N & \mathbf{H}_N \\ \mathbf{H}_N & -\mathbf{H}_N\end{bmatrix}$ (28)

Starting with $\mathbf{H}_1 = 1$, this gives $\mathbf{H}_2$, $\mathbf{H}_4$, $\mathbf{H}_8$, and Hadamard matrices of all orders which are powers of two. An alternative way of describing (28) is

$\mathbf{H}_{2^k} = \mathbf{H}_2 \otimes \mathbf{H}_{2^{k-1}}, \quad k = 2, 3, \ldots$ (29)

where $\otimes$ denotes the Kronecker product of matrices, and

$\mathbf{H}_2 = \begin{bmatrix}1 & 1 \\ 1 & -1\end{bmatrix}$ (30)
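The Sylvester construction (28)-(30) and the column-orthogonality property claimed for the transform can be sketched as follows (a minimal illustration; the function name is ours):

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix of order n (n a power of two), built by (29)-(30)."""
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])   # H_2, cf. (30)
    H = np.array([[1.0]])                       # H_1
    while H.shape[0] < n:
        H = np.kron(H2, H)                      # recursion (29)
    return H

H = hadamard(8)
# entries are +/-1, and the columns are mutually orthogonal: H^T H = N I
assert np.all(np.abs(H) == 1.0)
assert np.allclose(H.T @ H, 8.0 * np.eye(8))
```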

At this stage, it is worth pointing out that the aforementioned linear transforms do not convert the vector $\mathbf{x}(n)$ into a corresponding input vector of uncorrelated variables. Therefore, the single parameters in Figs. 4 and 5 cannot be optimized separately by the mean-squared error criterion.

Fig. 5. Hadamard transform of the input $\mathbf{x}(n)$.

Consider the linear transform $\mathbf{T}$. Premultiplying and postmultiplying $\mathbf{R}$ in (17) by $\mathbf{T}$ and $\mathbf{T}^T$, we get

$\mathbf{T}\mathbf{R}\mathbf{T}^T = 2\,\tilde{\mathbf{T}}\begin{bmatrix}\mathbf{A}+\mathbf{B}\mathbf{J} & \mathbf{0} \\ \mathbf{0} & \mathbf{A}-\mathbf{B}\mathbf{J}\end{bmatrix}\tilde{\mathbf{T}}^T$ (34)

where $\tilde{\mathbf{T}}$ denotes the block-diagonal matrix of the two half-size multisplit transforms. This enables us to verify that the correlation matrix of $\mathbf{x}_T(n)$ is diagonal by blocks and that the multisplit operation improves the diagonalization of $\mathbf{R}$. This improvement can be measured by the diagonalization factor defined in [11] (35).

Another very interesting linear transform, denoted $\mathbf{M}$, is obtained from particular choices of the stage matrices [(31) and (32)]. Using (25), this results in a linear transformation of $\mathbf{x}(n)$ with the flow graph depicted in Fig. 6. The multisplit scheme is then also represented by Fig. 5 by substituting $\mathbf{M}$ for $\mathbf{H}$, with the relation between the two transforms given in (33).

Fig. 6. Flow graph of butterfly computation for $\mathbf{M}\mathbf{x}(n)$.

With the $\mathbf{M}$ and $\mathbf{H}$ transforms, the zeros in $\mathbf{R}_T$ change position, but they never take place in the main diagonal. Hence, $\mathbf{T}$, $\mathbf{M}$, and $\mathbf{H}$ are not orthogonal transforms that decorrelate the input signal. In other words, (34) is not a unitary similarity transformation. As an exception, when the conditions in (36) and (37) hold, (34) becomes a unitary similarity transformation.

As a matter of fact, (34) reveals that the multisplit operation makes the symmetric and antisymmetric components of the input, $\mathbf{x}_s(n) = \mathbf{C}_s^T\mathbf{x}(n)$ and $\mathbf{x}_a(n) = \mathbf{C}_a^T\mathbf{x}(n)$ (38), uncorrelated.
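The two claims of this section, that the eigenvalue spread is preserved while the diagonalization improves, can be checked numerically. The sketch below uses a Sylvester-built Hadamard matrix and an illustrative autocorrelation sequence of our own choosing, and measures diagonalization by the fraction of off-diagonal energy (a proxy for the factor of [11], whose exact definition is not reproduced here):

```python
import numpy as np

def hadamard(n):
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(H2, H)
    return H

N = 8
r = 0.9 ** np.arange(N)                      # example autocorrelation sequence
R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
H = hadamard(N)
RT = H @ R @ H.T                             # correlation matrix of x_T(n), cf. (27)

spread = lambda A: np.linalg.eigvalsh(A)[-1] / np.linalg.eigvalsh(A)[0]
assert np.isclose(spread(RT), spread(R))     # eigenvalue spread is unchanged

offdiag_ratio = lambda A: 1.0 - np.sum(np.diag(A) ** 2) / np.sum(A ** 2)
assert offdiag_ratio(RT) < offdiag_ratio(R)  # but R_T is much closer to diagonal
```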

Finally, the optimum coefficients $w_i$, for $i = 0, 1, \ldots, N-1$, in the scheme of Fig. 5 can be obtained by minimizing the mean-squared error, which results in

$\mathbf{w}_{T,\mathrm{opt}} = \mathbf{R}_T^{-1}\mathbf{p}_T$ (39)

where $\mathbf{p}_T = E[\mathbf{x}_T(n)d(n)] = \mathbf{H}\mathbf{p}$. From (39), we also have that

$\mathbf{w}_{T,\mathrm{opt}} = \frac{1}{N}\mathbf{H}\mathbf{R}^{-1}\mathbf{p} = \frac{1}{N}\mathbf{H}\mathbf{w}_{\mathrm{opt}}$ (40)

From here, we describe the application of the constrained optimization interpretation for the generation of new multisplit adaptive filtering structures and algorithms.

V. MULTISPLIT ADAPTIVE FILTERING

In the adaptive context, we can explore the aforementioned properties of the multisplit transform in order to propose a power normalized and time-varying step-size LMS algorithm for updating the single parameters independently.

Let us start with $N = 2$. Since, in this case, the two transformed inputs are uncorrelated, a least-squares version of the Newton method can be applied to update the parameters $w_0(n)$ and $w_1(n)$ as follows:

$w_i(n+1) = w_i(n) + \frac{\mu}{\lambda_i(n)}\,e(n)\,x_{T,i}(n)$ (41)

where

$e(n) = d(n) - \sum_{i} w_i(n)\,x_{T,i}(n)$ (42)

and

$\lambda_i(n) = \gamma\,\lambda_i(n-1) + x_{T,i}^2(n)$ (43)

Equation (43) is an estimate of the eigenvalues of the correlation matrix of $\mathbf{x}_T(n)$, which defines the step-sizes used to adapt the individual weights in (41). To account for adaptive filtering operation in a nonstationary environment, a forgetting factor $0 < \gamma \le 1$ is included in this recursive computation of the eigenvalues. The case $\gamma = 1$ applies to a wide-sense stationary environment. Notice that making $\mu = 1$ in (41) corresponds to the RLS algorithm applied to the parameters $w_0(n)$ and $w_1(n)$ independently.

For $N > 2$, despite the residual correlation among the variables inside of $\mathbf{x}_s(n)$ and $\mathbf{x}_a(n)$, the same strategy in (41) can be used since the diagonalization factor of $\mathbf{R}$ has been improved by the multisplit transforms. In other words, the multisplit orthogonal transform together with power normalization can be used to improve the convergence rate of the LMS algorithm. In this case, based on (27), convergence of the single parameters is assured by a bound on the step size involving $\lambda_{\max}$, the largest eigenvalue of $\mathbf{R}$. Table I presents a summary of the proposed algorithm for multisplit adaptive filtering.

It is important to stress that the use of $\mathbf{H}$ in Table I is conditioned to $N = 2^v$. Otherwise, the linear transform matrix is not composed only of $+1$'s and $-1$'s, requiring a number of multiplication operations proportional to $N^2$. Notwithstanding, the number of filter coefficients can usually be set to the next power of two to take advantage of the implementation simplicity and to reduce the computational burden. The butterfly computation in Fig. 6 requires only addition operations per iteration. In other words, no multiplication operation is demanded.

Finally, the procedure can be extended to complex parameters by applying the split processing to the real part as well as to the imaginary part.

VI. LINEAR-PHASE ADAPTIVE FILTERING

In applications that require an adaptive filter with a linear-phase response, the symmetry constraint on the impulse response has been used. In that case, the utilization of the DFT-LMS, DCT-LMS, and RLS techniques, to name a few, would not solve the problem directly, since a symmetry constraint would still have to be imposed. The multisplit technique fits such applications perfectly, due to the fact that the symmetric or antisymmetric condition on the impulse response of the filter is already guaranteed. In fact, we need to consider just one branch in Fig. 3(b).

For the symmetric impulse response constraint, the input samples for updating the single parameters are given by $\mathbf{C}_s^T\mathbf{x}(n)$, and for the antisymmetric impulse response constraint by $\mathbf{C}_a^T\mathbf{x}(n)$. However, there is another interesting point concerning linear-phase adaptive filtering.

Consider again the structure of the split filter in Fig. 3(b). The least-squares (LS) criterion can be used in either the symmetric or the antisymmetric part to obtain the linear-phase filters. For example, the optimum solution for the filter with symmetric impulse response is given by

$\mathbf{b}_s(n) = \boldsymbol{\Phi}_s^{-1}(n)\,\boldsymbol{\theta}_s(n)$ (44)

where

$\boldsymbol{\Phi}_s(n) = \sum_{i=1}^{n}\lambda^{\,n-i}\,\mathbf{x}_s(i)\,\mathbf{x}_s^T(i)$ (45)

and

$\boldsymbol{\theta}_s(n) = \sum_{i=1}^{n}\lambda^{\,n-i}\,\mathbf{x}_s(i)\,d(i)$ (46)

with $\mathbf{x}_s(i) = \mathbf{C}_s^T\mathbf{x}(i)$ and $0 < \lambda \le 1$ the exponential weighting factor. Proceeding, in the adaptive context, the RLS algorithm can be directly applied to update $\mathbf{b}_s(n)$. On the other hand, it is worth
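The recursion (41)-(43) can be sketched as follows. This is a minimal sketch under our own assumptions (function and parameter names `ms_lms`, `mu`, `gamma` are ours, and the initialization is a simplification of Table I): each Hadamard-domain parameter is updated with its own step size, normalized by a running power estimate with forgetting factor $\gamma$.

```python
import numpy as np

def hadamard(n):
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(H2, H)
    return H

def ms_lms(x, d, N, mu=0.5, gamma=0.9, eps=1e-8):
    """Multisplit LMS sketch: per-parameter power-normalized steps, cf. (41)-(43)."""
    H = hadamard(N)
    w = np.zeros(N)                          # single parameters of Fig. 5
    lam = np.full(N, eps)                    # running eigenvalue (power) estimates
    for n in range(N - 1, len(x)):
        xt = H @ x[n - N + 1:n + 1][::-1]    # Hadamard-transformed tap-input, cf. (24)
        e = d[n] - w @ xt                    # estimation error, cf. (42)
        lam = gamma * lam + xt ** 2          # eigenvalue estimates, cf. (43)
        w += mu * e * xt / lam               # time-varying step sizes, cf. (41)
    return w, H

# identification of a known FIR response with white input (illustrative)
rng = np.random.default_rng(0)
w_o = np.array([1.0, 0.5, -0.25, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, w_o)[:len(x)]
w, H = ms_lms(x, d, N=4)
# the equivalent tap-domain filter is H^T w, since H H^T = N I, cf. (40)
assert np.allclose(H.T @ w, w_o, atol=0.05)
```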

TABLE I
MULTISPLIT LMS (MS-LMS) ALGORITHM

pointing out that fast LS algorithms, which exploit the time-shift relationship of the input data vector, cannot be applied, due to the fact that $\mathbf{x}_T(n)$ does not satisfy this property [3].

A final observation is required here. The direct application of the LS criterion in the scheme of Fig. 3(b) in order to obtain the optimum Wiener filter by computing $\mathbf{b}_s(n)$ and $\mathbf{b}_a(n)$ independently corresponds to an approximate LS solution (quasioptimal). Since the LS-estimated correlation matrix is not centrosymmetric, the last term in (16) becomes nonzero, and consequently, $\mathbf{b}_s(n)$ and $\mathbf{b}_a(n)$ cannot be independently computed.

VII. LINEAR PREDICTION

All the split and multisplit transversal filtering theory developed above can be applied to linear prediction by choosing the desired response as an advanced or a delayed version of the input signal. In the case of linear-phase prediction, the appropriate structure of the prediction-error filter is presented in Fig. 7, in which both forward and backward prediction have been considered. For forward prediction, the sign of the first input term in the sum block is positive, whereas the sign of the other depends on the symmetry or antisymmetry condition. The opposite combination corresponds to backward prediction.

Fig. 7. Linear-phase prediction-error filter.

VIII. SIMULATION RESULTS

To evaluate the performance of the MS-LMS algorithm in adaptive filtering, the same equalization system as in [2, Ch. 5] is considered (see Fig. 8). The channel input is binary, and the impulse response of the channel is described by the raised cosine:

$h(n) = \begin{cases}\dfrac{1}{2}\left[1 + \cos\left(\dfrac{2\pi}{W}(n-2)\right)\right], & n = 1, 2, 3\\[2mm] 0, & \text{otherwise}\end{cases}$ (47)

Fig. 8. Adaptive equalizer for simulation.
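The channel of (47) and the way $W$ drives the eigenvalue spread of the equalizer's input correlation matrix can be sketched as follows. The noise variance and the $W$ values below are illustrative assumptions, not the paper's exact simulation settings:

```python
import numpy as np

def channel(W):
    """Raised-cosine channel impulse response of (47): h(1), h(2), h(3)."""
    n = np.arange(1, 4)
    return 0.5 * (1.0 + np.cos(2.0 * np.pi * (n - 2) / W))

def eigenvalue_spread(W, taps=11, noise_var=0.001):
    """chi(R) of the tap-input correlation matrix for i.i.d. binary input."""
    h = channel(W)
    r = np.correlate(h, h, mode="full")[len(h) - 1:]   # channel output autocorrelation
    r = np.concatenate([r, np.zeros(taps - len(r))])
    r[0] += noise_var                                  # additive channel noise (assumed)
    R = np.array([[r[abs(i - j)] for j in range(taps)] for i in range(taps)])
    lam = np.linalg.eigvalsh(R)
    return lam[-1] / lam[0]

# a larger W widens the channel response and increases the eigenvalue spread
assert eigenvalue_spread(3.5) > eigenvalue_spread(2.9)
```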

where $W$ controls the eigenvalue spread $\chi(\mathbf{R})$ of the correlation matrix of the tap inputs in the equalizer, with $\chi(\mathbf{R}) = 6.0782$ for the smaller value of $W$ considered and $\chi(\mathbf{R}) = 46.8216$ for the larger one. An additive white noise sequence corrupts the channel output, and the equalizer has 11 coefficients.

Fig. 9 shows a comparison of the ensemble-averaged error performances of the DCT-LMS, MS-LMS, standard LMS, and RLS algorithms for the two eigenvalue spreads (100 independent trials). The good performance of the MS-LMS algorithm can be observed in terms of convergence rate when compared with the standard LMS algorithm. On the other hand, we can verify that the MS-LMS algorithm is somewhat sensitive to variations in the eigenvalue spread, so that the DCT-LMS exhibits a better performance in the second case. This shows clearly that the multisplit preprocessing does not orthogonalize the input data vector, but it improves the diagonalization of the input signal correlation matrix, which is taken into account in the power normalization used by the MS-LMS algorithm. Nevertheless, as far as computational burden is concerned, its simplicity is notable when compared with the DCT transform. Such an aspect, together with the convergence improvement, is decisive in justifying its application.

Finally, the good performance of the linear-phase RLS (LP-RLS) algorithm is also illustrated in this equalization example, where the considered channel has the linear-phase property. It is clear that the LP-RLS technique surpasses all other approaches in terms of convergence rate, even standard RLS, because the symmetry of the response is built into the adaptive filtering structure.

(a)
(b)
Fig. 9. Learning curves: adaptive equalization. (a) $\chi(\mathbf{R}) = 6.0782$. (b) $\chi(\mathbf{R}) = 46.8216$.

IX. CONCLUSION

In the present work, it has been shown that the split transversal filtering problem can be formulated and solved by using a linearly constrained optimization and can be implemented by means of a parallel GSC structure. Then, the optimum split Wiener filter can be introduced together with its symmetric and antisymmetric linear-phase parts. Furthermore, a continued split procedure, which led to a multisplit filter structure, has been considered. It has also been shown that the multisplit transform is not an input whitening transformation. Instead, it increases the diagonalization factor of the input signal correlation matrix without affecting its eigenvalue spread. A power normalized, time-varying step-size LMS algorithm, which exploits the nature of the transformed input correlation matrix, has been proposed for updating the adaptive filter coefficients. The novel approach is generic since it can be used for any value of the filter order $N$ and for complex as well as real parameters, and it can be extended to linear-phase adaptive filtering and linear prediction. For a power-of-two value of $N$, such a proposition corresponds to a Hadamard transform-domain filter, whose structure exhibits high modularity, parallelism, or concurrency, which is suited to implementation using very large-scale integration (VLSI). Finally, the simulation results confirm that the split processing structures provide a powerful and interesting tool for adaptive filtering.

APPENDIX
PROOF OF (21)

The purpose of this Appendix is to show that the optimum Wiener filter can be split into two symmetric and antisymmetric impulse response filters connected in parallel according to Fig. 3(b) and (21). Using (19)–(23), we need to prove that

$\mathbf{C}_s\mathbf{b}_{s,\mathrm{opt}} + \mathbf{C}_a\mathbf{b}_{a,\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{p}$ (48)

or

$\mathbf{C}_s(\mathbf{C}_s^T\mathbf{R}\mathbf{C}_s)^{-1}\mathbf{C}_s^T\mathbf{p} + \mathbf{C}_a(\mathbf{C}_a^T\mathbf{R}\mathbf{C}_a)^{-1}\mathbf{C}_a^T\mathbf{p} = \mathbf{R}^{-1}\mathbf{p}$ (49)

The left side of (49) can be rewritten as

$\mathbf{T}\begin{bmatrix}(\mathbf{C}_s^T\mathbf{R}\mathbf{C}_s)^{-1} & \mathbf{0} \\ \mathbf{0} & (\mathbf{C}_a^T\mathbf{R}\mathbf{C}_a)^{-1}\end{bmatrix}\mathbf{T}^T\mathbf{p} = \mathbf{T}(\mathbf{T}^T\mathbf{R}\mathbf{T})^{-1}\mathbf{T}^T\mathbf{p}$ (50)

where $\mathbf{T} = [\mathbf{C}_s\;\;\mathbf{C}_a]$, and it has been taken into account that $\mathbf{C}_s^T\mathbf{R}\mathbf{C}_a = \mathbf{0}$. The square matrix $\mathbf{T}$ denotes the similarity transformation that splits the Wiener filter once into symmetric and antisymmetric parts. In fact, the subspaces spanned by the columns of $\mathbf{C}_s$ and $\mathbf{C}_a$ are complementary, and $\mathbf{T}$ is an orthogonal transform with $\mathbf{T}^T\mathbf{T} = 2\mathbf{I}$. From (50), we have

$\mathbf{T}(\mathbf{T}^T\mathbf{R}\mathbf{T})^{-1}\mathbf{T}^T\mathbf{p} = \mathbf{T}\mathbf{T}^{-1}\mathbf{R}^{-1}(\mathbf{T}^T)^{-1}\mathbf{T}^T\mathbf{p} = \mathbf{R}^{-1}\mathbf{p}$ (51)

which proves the veracity of (21).

ACKNOWLEDGMENT

The authors wish to thank Prof. J. C. M. Bermudez and R. D. Souza, from the Federal University of Santa Catarina, and the anonymous reviewers for their suggestions, which have improved the presentation of the material in this paper.

REFERENCES

[1] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1985.
[2] S. Haykin, Adaptive Filter Theory, 4th ed. Englewood Cliffs, NJ: Prentice-Hall, 2002.
[3] M. G. Bellanger, Adaptive Digital Filters and Signal Analysis, 2nd ed. New York: Marcel Dekker, 2001.
[4] F. Beaufays, "Transform-domain adaptive filters: An analytical approach," IEEE Trans. Signal Processing, vol. 43, pp. 422–431, Feb. 1995.
[5] J. S. Goldstein, I. S. Reed, and L. L. Scharf, "A multistage representation of the Wiener filter based on orthogonal projections," IEEE Trans. Inform. Theory, vol. 44, pp. 2943–2959, Nov. 1998.
[6] P. Strobach, "Low-rank adaptive filters," IEEE Trans. Signal Processing, vol. 44, pp. 2932–2947, Dec. 1996.
[7] P. Delsarte and Y. V. Genin, "The split Levinson algorithm," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 470–478, June 1986.
[8] ——, "On the splitting of classical algorithms in linear prediction theory," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-35, pp. 645–653, May 1987.
[9] K. C. Ho and P. C. Ching, "Performance analysis of a split-path LMS adaptive filter for AR modeling," IEEE Trans. Signal Processing, vol. 40, pp. 1375–1382, June 1992.
[10] P. C. Ching and K. F. Wan, "A unified approach to split structure adaptive filtering," in Proc. IEEE ISCAS, Detroit, MI, May 1995.
[11] K. F. Wan and P. C. Ching, "Multilevel split-path adaptive filtering and its unification with discrete Walsh transform adaptation," IEEE Trans. Circuits Syst. II, vol. 44, pp. 147–151, Feb. 1997.
[12] L. J. Griffiths and C. W. Jim, "An alternative approach to linearly constrained adaptive beamforming," IEEE Trans. Antennas Propagat., vol. AP-30, pp. 27–34, Jan. 1982.
[13] B. D. Van Veen and K. M. Buckley, "Beamforming: A versatile approach to spatial filtering," IEEE Acoust., Speech, Signal Processing Mag., vol. 5, pp. 4–24, Apr. 1988.
[14] S. Haykin and A. Steinhardt, Eds., Adaptive Radar Detection and Estimation. New York: Wiley, 1992.
[15] S. L. Marple, Jr., Digital Spectral Analysis with Applications. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[16] B. Noble and J. W. Daniel, Applied Linear Algebra, 3rd ed. Englewood Cliffs, NJ: Prentice-Hall, 1988.

Leonardo S. Resende (M'96) received the electrical engineering degree from the Pontifical Catholic University of Minas Gerais (PUC-MG), Belo Horizonte, Brazil, in 1988, and the M.S. and Ph.D. degrees in electrical engineering from the State University of Campinas (UNICAMP), Campinas, Brazil, in 1991 and 1996, respectively.
From October 1992 to September 1993, he worked toward the doctoral degree at the Laboratoire d'Electronique et Communication, Conservatoire National des Arts et Métiers (CNAM), Paris, France. Since 1996, he has been with the Electrical Engineering Department, Federal University of Santa Catarina (UFSC), Florianópolis, Brazil, where he is an Associate Professor. His research interests are in constrained and unconstrained digital signal processing and adaptive filtering.

João Marcos T. Romano (M'90) was born in Rio de Janeiro, Brazil, in 1960. He received the B.S. and M.S. degrees in electrical engineering from the State University of Campinas (UNICAMP), Campinas, Brazil, in 1981 and 1984, respectively. In 1987, he received the Ph.D. degree from the University of Paris-XI, Paris, France.
In 1988, he joined the Communications Department of the Faculty of Electrical and Computer Engineering, UNICAMP, where he is now Professor. He served as an Invited Professor with the University René Descartes, Paris, during the winter of 1999 and in the Communications and Electronic Laboratory at CNAM/Paris during the winter of 2002. He is responsible for the Signal Processing for Communications Laboratory, and his research interests concern adaptive and intelligent signal processing and its applications in telecommunications problems like channel equalization and smart antennas. Since 1988, he has been a recipient of the Research Fellowship of CNPq-Brazil.
Prof. Romano is a member of the IEEE Electronics and Signal Processing Technical Committee. Since April 2000, he has been the President of the Brazilian Communications Society (SBrT), a sister society of ComSoc-IEEE, and since April 2003, he has been the Vice-Director of the Faculty of Electrical and Computer Engineering at UNICAMP.

Maurice G. Bellanger (F'84) graduated from the Ecole Nationale Supérieure des Télécommunications (ENST), Paris, France, in 1965 and received the doctorate degree from the University of Paris in 1981.
He joined T.R.T. (Philips Communications in France), Paris, in 1967, and since then, he has worked on digital signal processing and its applications in telecommunications. From 1974 to 1983, he was head of the telephone transmission department of the company, which developed speech, audio, video, and data terminals as well as multiplexing and line equipment for digital communication networks. He then became deputy scientific director of the company and, from 1988 to 1991, its scientific director. In 1991, he joined the Conservatoire National des Arts et Métiers (CNAM), Paris, a public education and research institute, where he is a professor of electronics. He is the head of the Electronics and Communications research team.
Dr. Bellanger has published 100 papers, has been granted 16 patents, and is the author of two textbooks: Theory and Practice of Digital Signal Processing (New York: Wiley, 3rd ed., 2000) and Adaptive Filtering and Signal Analysis (New York: Marcel Dekker, 1st ed., 1987; 2nd ed., 2001). He was elected a Fellow of the IEEE for contributions to the theory of digital filtering and its applications to communication systems. He is a former associate editor of the IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING and was the technical program chairman of ICASSP, Paris, in 1982. He was the president of EURASIP, the European Association for Signal Processing, from 1987 to 1992. He is a member of the French Academy of Technology.
