
Signal Processing 80 (2000) 2567–2578

Turbo estimation algorithms: general principles, and applications to modal analysis

Letizia Lo Presti, Gabriella Olmo*, Davide Bosetto
Dipartimento di Elettronica, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
Received 22 July 1999; received in revised form 6 June 2000

Abstract
In this paper, a new class of parameter estimation algorithms, called turbo estimation algorithms (TEA), is introduced. The basic idea is that each estimation algorithm (EA) must perform a sort of intrinsic denoising of the input data in order to achieve reliable estimates. Optimum algorithms implement the best possible noise reduction, compatible with the problem definition and the related lower bounds to the estimation error variance; however, their computational complexity is often overwhelming, so that in real life one must often resort to suboptimal algorithms; in this case, some amount of noise could still be eliminated. The TEA methods reduce the residual noise by means of a closed loop configuration, in which an external denoising system, fed by the master estimator output, generates an enhanced signal to be input to the estimator for the next iteration. The working principle of such schemes can be described in terms of a more general turbo principle, well known in an information theory context. In this paper, an example of a turbo algorithm for modal analysis is described, which employs the Tufts and Kumaresan (TK) method as a master EA. © 2000 Elsevier Science B.V. All rights reserved.

Zusammenfassung
In this article a new class of parameter estimation algorithms is presented, the so-called turbo estimation algorithms (TEA). The basic idea is that every estimation algorithm (EA) must perform a kind of intrinsic noise reduction of the input data in order to obtain reliable estimates. Optimal algorithms implement the best possible noise reduction that is compatible with the problem definition and the corresponding lower bound on the estimation error variance; the associated computational effort, however, is often very large, so that in practical situations one resorts to suboptimal algorithms, in which case some of the noise can still be suppressed. The TEA method reduces the remaining noise by means of a loop configuration, in which an external noise-reduction system, fed by a master estimator, generates an enhanced signal which in turn is the input signal of the next iteration. The working principles of such schemes can be described in general as turbo principles, which are well known in the context of information theory. In this article, as an example, a turbo algorithm is described which uses the Tufts and Kumaresan method as the master estimation algorithm. © 2000 Elsevier Science B.V. All rights reserved.

Résumé

In this article, we introduce a new class of parameter estimation algorithms called turbo estimation algorithms (TEA). The basic idea is that each estimation algorithm (EA) must perform a sort of intrinsic denoising of the input data in order to reach reliable estimates. Optimal algorithms implement the best possible noise reduction, compatible with the definition of the problem and with the corresponding lower bounds on the estimation error variance. However, their computational complexity is often prohibitive, so that in practice one must often settle for suboptimal algorithms. In this case, a certain amount of noise could still be eliminated. The TEA methods reduce the residual noise by means of a closed loop configuration, in which an external denoising system, fed by the output of the master estimator, generates an enhanced signal which becomes the input of the estimator at the next iteration. The working principle of such schemes can be described in terms of a more general principle, the turbo principle, well known in the context of information theory. In this article, an example of a turbo algorithm for modal analysis is described, which uses the Tufts and Kumaresan method as the master EA. © 2000 Elsevier Science B.V. All rights reserved.


The authors are with the Signal Analysis and Simulation group at Politecnico di Torino.
* Corresponding author. Tel.: +39-11-5644033; fax: +39-11-5644149.
E-mail addresses: lopresti@polito.it (L.L. Presti), olmo@polito.it (G. Olmo).



Keywords: Parameter estimation; Modal analysis; Iterative algorithm

1. Introduction

The performance of any estimation algorithm (EA) is affected by the noise which impairs the measured data samples. Given an input signal-to-noise ratio (SNR) and the noise statistical properties, the performance of each algorithm depends on its capability of extracting the useful information from the available data, getting rid of the largest possible amount of noise. In other words, each EA performs a sort of intrinsic denoising of the data. A very intuitive example of such a mechanism, in the field of modal analysis, is offered by those methods which exploit the properties of the principal eigenvectors of the autocorrelation matrix R_yy [3]. In fact, under the assumption that the signal is the superimposition of a known number p of modes in additive white gaussian noise (AWGN), it can be shown that the signal subspace is spanned by the p principal eigenvectors of R_yy, whereas the remaining M − p eigenvectors, where M is the linear dimension of R_yy, span the noise subspace. This simple result makes it possible to achieve a separation between the signal and the noise subspaces, i.e. to operate a sort of data denoising. Of course, as is pointed out in [3], some amount of noise is actually present also in the signal subspace; however, this noise cannot be separated from the signal. The presence of a certain amount of unavoidable noise leads to lower bounds to the estimation variance [15], which are reached by maximum likelihood estimators (MLE). However, the MLE is often infeasible due to the associated computational complexity. Therefore, most commonly employed EAs are suboptimal, and some amount of theoretically removable noise still affects the estimation.

Following these considerations, it is worth questioning whether it is possible to improve the performance of a given suboptimal EA by operating an external reduction of the residual noise which is not suppressed by the EA but does not overlap with the signal. This task can be done by means of a closed loop configuration, in which an external denoising system (EDS), fed by the master EA output, reduces the residual noise, generating an enhanced signal to be input to the EA for the next iteration. The obtained class of algorithms is named turbo estimation algorithms (TEA).

This paper is organized as follows. Section 2 introduces the turbo estimation principle and describes the intrinsic and external denoising operations. Section 3.1 reports the basics of the modal analysis problem. In Section 3.2 a modified TK algorithm is described and interpreted in the TEA context; specifications are given for the design of the EDS. Sections 4 and 5 report the experimental results and a complexity analysis of the proposed algorithm. Finally, in Section 6 conclusions are drawn and future research directions are identified.
2. TEA description

The general scheme of a TEA is reported in Fig. 1, where y is the data vector, containing N samples of the input process. This latter is the sum of a random noise process realization and a signal component, which is a known function of an M-dimensional, deterministic but unknown parameter vector q = [q_1, q_2, …, q_M]. The vector parameter must be estimated by means of some proper EA, yielding a vector estimate q̂.

Fig. 1. Scheme of a turbo estimation algorithm.

In the following, we will make the assumption that in the noise-free case a procedure does exist, able to exactly obtain the vector parameter q from a finite number of available data; in other words, we do not consider ill-posed problems. Moreover, we assume that, in the presence of noise, an efficient estimator of q does exist, and consequently tight bounds to the variance of each component of the vector estimation error e = q̂ − q can be obtained by inverting the Fisher information matrix [15]. These variances are zero only in the noise-free case, when the efficient estimator must yield q̂ = q.

As discussed, each EA implements some sort of noise reduction, which can be active, if the EA is actually able to improve the input SNR (for example, by means of filtering operations), or passive, if, due to the algorithm structure, not all the noise components present in the input signal affect the estimation process (as in the periodogram, where the estimation is not affected by the noise components at frequencies other than those to be estimated). In the MLE, all the possible denoising is performed, and the estimation variance reaches its lower bound [15]. However, in many cases the complexity of the optimal algorithm is overwhelming, and suboptimal solutions must be adopted. In this case an external reduction of the noise not suppressed by the EA, but affecting its performance (which will be named residual noise), can be tried.

This basic principle is implemented in the block diagram of Fig. 1, where, at each iteration k ≥ 1, the EA is fed with s_k, which is an improved version of the measured input signal y, processed by an EDS which in turn is controlled by the (k − 1)th iteration estimates produced by the EA.

The convergence check (CC) module controls the termination of the iterative procedure on the basis of a predetermined convergence criterion, which takes into account the variations of the estimated parameter values during the iterations.

A TEA scheme can achieve the MLE if the residual noise is completely suppressed by the EDS after an arbitrary number of iterations. If the residual noise can only be reduced by the EDS, a suboptimal solution is obtained, whose advantage lies in the lower estimation error variance with respect to the master EA, or, alternatively, in the lower computational complexity for the same estimation accuracy. This is a typical characteristic of feedback systems, which, in many practical cases, represent the only feasible solution to match the quality requirements with a given computational complexity.

From the above discussion, it is clear that TEAs can be classified as approximations of the MLE, in which one starts from a known EA and enhances its performance through an iterative mechanism in order to approach the MLE, instead of devising an approximate MLE algorithm from scratch. Actually, well known approximate MLE algorithms, such as the Iterative Filtering Algorithm (IFA) [3], can be re-interpreted within the framework of iterative denoising.

If the input SNR is so low that the EA is forced to operate below (or close to) threshold at the initialization step, and consequently an outlier occurs in the initial parameter estimates which feed the EDS, the well known steep departure of the estimation error variances from the asymptotic values occurs [13]. The overall outlier probability is related (but not equal) to the outlier probability of the master EA; in fact, due to the external denoising capability, it is possible that an outlier in the initial estimate is corrected through subsequent iterations. This error correction capability of turbo algorithms has been observed experimentally during the simulations of TEA systems, and represents a further advantage of the scheme.
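To make the closed-loop structure of Fig. 1 concrete, the following minimal Python sketch iterates a generic master EA and EDS until the parameter estimates settle. The callables `estimate` and `denoise`, as well as the tolerance-based convergence check, are illustrative assumptions of this sketch and not part of the paper's specification.

```python
import numpy as np

def turbo_estimate(y, estimate, denoise, tol=1e-8, max_iter=20):
    """Generic TEA loop: alternate master EA and EDS until the estimates settle."""
    q_hat = estimate(y)              # initialization step (k = 0): EA on the raw data
    for k in range(1, max_iter + 1):
        s_k = denoise(y, q_hat)      # EDS driven by the (k-1)th estimates
        q_new = estimate(s_k)        # EA on the enhanced signal s_k
        if np.max(np.abs(q_new - q_hat)) < tol:   # convergence check (CC)
            return q_new, k
        q_hat = q_new
    return q_hat, max_iter
```

Any EA returning a parameter vector and any EDS taking the raw data plus the previous estimates can be plugged into this loop; the TKIF algorithm of Section 3 is one such instance.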
Several improvements can be devised to the basic TEA scheme; different EAs, or the same EA with different parameters, can be employed at different iterations, in order to achieve a trade-off between the EA robustness (crucial at the first iteration, when all the noise is present) and the estimation accuracy [10]. Moreover, the EDS can be designed taking into account some sort of soft information yielded by the EA and representing the reliability of the estimate.

Iterative estimation schemes share some features with the well known turbo principle [2], introduced in an information theory context to generalize the well known turbo codes [1]. In fact, a suboptimal EA and an EDS block, put in a closed loop configuration, with a possible exchange of reliability information, can provide a feasible way to approach the theoretical limits of the estimation accuracy. Thanks to this analogy, iterative EA/EDS systems have been named turbo estimation algorithms (TEAs).

3. A turbo algorithm for modal analysis

In this section, the basic TEA concepts are described by means of an example in the context of modal analysis. Further developments can be found in [8]. It is important to remark that the scope of TEAs is not limited to modal analysis; for example, in [9,10] TEA schemes are devised for carrier frequency estimation; so, the main concepts described in this paper can be easily generalized to many other estimation problems and master EAs.

3.1. The frequency estimation problem

The problem of estimating closely spaced frequencies of multiple, superimposed sinusoids from noisy measurements is representative of a class of nonlinear parameter estimation problems with a vast range of signal processing applications. The signal y[n] is assumed to be the sum of p complex undamped exponentials

y[n] = Σ_{k=1}^{p} A_k e^{j(2π f_k n + φ_k)} + w[n],   (1)

A_k being the constant (complex) amplitudes, f_k the numeric frequencies, φ_k the initial phases and w[n] an additive noise term. The problem of interest is to determine the frequencies f_k given observations of y[n] over a finite time interval n = 0, …, N − 1.

The MLE approach to the estimation of the frequencies and complex amplitudes in Eq. (1) would be to set up a nonlinear least squares minimization problem, which is computationally burdensome, and necessary only at very low SNR values. In recent years a great deal of attention has been devoted to model-based eigendecomposition methods. The use of linear models for the signal to be processed allows one to convert a nonlinear estimation problem into the simpler task of estimating the parameters of the linear model [4]. The nonlinearity is postponed to the second step, which, in all model-based methods, is devoted to the extraction of the desired information (frequencies in our case) from the estimated model parameters. Often in signal processing, a polynomial realization of the linear model is assumed and used. The polynomial realization, first suggested by Prony [12], exploits the fact that in the absence of noise, the complex exponential signal is exactly predictable as a linear combination of its p past samples

y[n] = Σ_{k=1}^{p} a_k y[n − k].   (2)

So, the Prony linear prediction (LP) formulation converts the nonlinear problem of frequency estimation into the linear parameter estimation problem for this polynomial model. Once the model parameters are estimated, the frequencies f_k are obtained from the roots of the polynomial

H(z) = 1 − Σ_{k=1}^{p} a_k z^{−k}.

As Eq. (2) holds for n = p + 1, …, N, when the order p is known a priori, a linear system of equations is obtained which, if N > 2p, can be solved for the linear prediction coefficients a_1, a_2, …, a_p. The problem reduces to the solution of the set of linear equations

⎡ y[L]     y[L−1]   …   y[1]    ⎤ ⎡ a_1 ⎤   ⎡ y[L+1] ⎤
⎢ y[L+1]   y[L]     …   y[2]    ⎥ ⎢ a_2 ⎥   ⎢ y[L+2] ⎥
⎢   ⋮        ⋮            ⋮     ⎥ ⎢  ⋮  ⎥ = ⎢   ⋮    ⎥    (3)
⎣ y[N−1]   y[N−2]   …   y[N−L]  ⎦ ⎣ a_L ⎦   ⎣ y[N]   ⎦

or Y · a = y. When the data is noisy, based on prior knowledge of the number of harmonics, L is chosen to be equal to p, and the LP parameters can be obtained from the least squares solution of the overdetermined system of equations [12].
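As an illustration of Eqs. (2)–(3), the sketch below (NumPy assumed) builds the LP data matrix, solves the overdetermined system in the least-squares sense and reads the frequencies from the angles of the roots of H(z). The selection of the p roots closest to the unit circle is a common heuristic added here for completeness; it is not prescribed by the text.

```python
import numpy as np

def lp_frequencies(y, p, L=None):
    """Least-squares Prony/LP frequency estimates, following Eqs. (2)-(3)."""
    y = np.asarray(y)
    N = len(y)
    L = p if L is None else L                       # prediction order (L = p in the plain LS case)
    # Data matrix: row m is [y[L+m-1], ..., y[m]] so that Y @ a = y[L:N]
    Y = np.array([y[m + L - 1::-1][:L] for m in range(N - L)])
    a, *_ = np.linalg.lstsq(Y, y[L:N], rcond=None)  # LP coefficients a_1 ... a_L
    # H(z) = 1 - sum_k a_k z^{-k}  <=>  z^L - a_1 z^{L-1} - ... - a_L
    roots = np.roots(np.concatenate(([1.0], -a)))
    # Keep the p roots closest to the unit circle and read off their angles.
    keep = np.argsort(np.abs(np.abs(roots) - 1.0))[:p]
    return np.sort(np.angle(roots[keep]) / (2 * np.pi))
```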
In the last years, there has been a burst of activity in singular value decomposition (SVD) based methods for sinusoid retrieval, and they all use the close-to-p-rank property of Y in the presence of additive noise at moderate SNR. One of the first methods of this class was introduced by Tufts and Kumaresan [5], who obtained significant improvements with respect to the least squares solution in the presence of noise [6]. Based on the fact that the noiseless matrix Y has rank p, they suggest that in the noisy case the matrix Y be replaced by a p-rank approximant Ŷ_p before the pseudo-inverse operation. To this end, one computes the SVD of matrix Y and sets all but the p largest singular values to zero. With r = min(L, N − L) ≥ p,

Y = U Σ V^H = Σ_{k=1}^{r} σ_k u_k v_k^H,

where Σ is a diagonal matrix containing the L singular values σ_k of Y, the symbol H denotes conjugate transposition, and U and V are two unitary matrices [7]. The p-rank approximation is [7]

Ŷ_p = Σ_{k=1}^{p} σ_k u_k v_k^H.

This low-rank approximation of Y before the pseudo-inverse operation considerably improves the quality of the parameter estimates, because the (L − p) singular vectors corresponding to the cluster of small singular values no longer contribute to the solution, thus removing a major source of sensitivity to noise.
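A hedged sketch of this TK modification: the least-squares solve of the previous example is replaced by the rank-p truncated pseudo-inverse, so that only the p principal singular triplets of Y contribute to the LP vector (this is the expansion written out as Eq. (4) in the next section). Function and variable names are illustrative.

```python
import numpy as np

def tk_lp_coefficients(Y, rhs, p):
    """LP vector via the rank-p truncated SVD pseudo-inverse of Y."""
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    a_hat = np.zeros(Y.shape[1], dtype=complex)
    # Only the p principal singular triplets (signal subspace) contribute;
    # the remaining directions, dominated by noise, are discarded.
    for k in range(p):
        a_hat += (U[:, k].conj() @ rhs) / s[k] * Vh[k].conj()
    return a_hat
```

With Y and rhs built as in the previous sketch, but with a prediction order L chosen well above p (the experiments below use L = 48 or L = 10), the frequencies again follow from the roots of H(z).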
3.2. The TKIF method and its interpretation as a TEA

Let us consider the block diagram of Fig. 2. The block labeled H_k(z), where k ≥ 0 is the index of the iterative process, is a digital IIR multiband filter performing a noise suppression which, for each iteration k ≥ 1, benefits from the knowledge of the frequency vector estimated at the previous step, f̂_{k−1}. From now on, the frequency estimates will be labeled with the iteration index k, as they are progressively refined.

Fig. 2. Scheme of the TKIF algorithm.

At the initialization step k = 0, the filtering is omitted, so H_0(z) = 1, s_0 = y, and a rough pre-estimate of f, f̂_0, is obtained using the TK algorithm with relatively small values of N and L (numerical values of these parameters are given in Section 4). Then, at each subsequent iteration:

(1) the IIR multiband filter is designed, exploiting the knowledge of the previous cycle frequency estimates f̂_{k−1};
(2) y is passed through the filter H_k(z), giving rise to s_k;
(3) s_k is processed by the TK algorithm and a new frequency estimate f̂_k is obtained.

The procedure stops at the Kth iteration, with K determined according to the convergence criterion, and the last iteration yields the final estimated frequency vector f̂ = f̂_K.

The described algorithm, being an iterative method for frequency estimation, and involving frequency-selective filtering, can be interpreted as a TEA. The TK plays the role of the basic EA, whereas the filter H_k(z) can be identified with the EDS module. The algorithm has been named TK algorithm with iterative filtering (TKIF). The key feature of TKIF lies in its capability of progressively refining the estimates of the linear prediction vector â, thanks to the noise reduction performed on the measured data y by the filtering operation. In fact, let us compare the initial estimate [7]

â = Σ_{k=1}^{p} (1/σ_k) v_k u_k^H y,   (4)

and the estimate at iteration k

â_k = Σ_{i=1}^{p} (1/σ_i^(k)) v_i^(k) (u_i^(k))^H s_k,

where the label k has been attached to the matrices output by the SVD at step k. It can be noticed that at the kth iteration, the SVD operates in enhanced SNR conditions; consequently, it is able to yield enhanced frequency estimates, which in turn can be used to drive a more effective (k + 1)th iteration filter design.

In order to completely define TKIF, it is necessary to design the EDS filter. A possible choice is to employ a multiband filter H_k(z), with each band centered around a component f̂_{k−1}[m], 1 ≤ m ≤ p, of the estimated frequency vector f̂_{k−1}. In this way, at each iteration k ≥ 1, it suppresses most of the out-of-band noise power, exploiting the information of the estimated frequency vector at the previous iteration, f̂_{k−1}. An IIR filter structure has been chosen, because it can be easily designed from the estimated vector f̂_k, it involves a low computational complexity both in the filtering and in the design phase, and it can offer a good frequency selectivity.

In the following we propose two actual implementations of the EDS module. The first one (which leads to the algorithm labelled TKIF-1) is an all-pole filter whose transfer function in the z domain at the kth iteration is

H_EDS^k(z) = z^M / ∏_{i=1}^{M} (z − p_i^k),   (5)

where

p_i^k = ρ e^{jω_i^k},   ω_i^k = 2π f̂_k[i].   (6)

The second choice (leading to the algorithm labelled TKIF-2) is a modification of the previous one, motivated by the obtained simulation results, as will be discussed in the following: the filter transfer function in the z domain at the kth iteration is

H_EDS^k(z) = ∏_{m=1}^{M} (z − w_m^k) / ∏_{i=1}^{M} (z − p_i^k),   (7)

where

p_m^k = ρ e^{jω_m^k},   w_m^k = (ρ − δ) e^{jω_m^k}.

The new parameter δ, which assumes values in the range [0, ρ], has been introduced for reasons which will be clarified in the following.
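Under the parameterization of Eqs. (5)–(7), the EDS filter could be assembled and applied as in the sketch below (SciPy's lfilter assumed); `f_prev` stands for the previous-iteration frequency estimates, and the default ρ and the optional δ follow the values discussed in Section 3.3. This is an illustrative implementation, not the authors' code.

```python
import numpy as np
from scipy.signal import lfilter

def eds_filter(y, f_prev, rho=0.93, delta=None):
    """Apply the EDS filter of Eq. (5) (delta=None, all-pole TKIF-1)
    or of Eq. (7) (TKIF-2) to the measured data y."""
    # Poles on a circle of radius rho at the previously estimated frequencies.
    poles = rho * np.exp(2j * np.pi * np.asarray(f_prev))
    den = np.poly(poles)                 # prod_i (z - p_i), expressed in powers of z^-1
    if delta is None:
        num = np.zeros_like(den)         # numerator z^M reduces to a unit gain in z^-1 form
        num[0] = 1.0
    else:
        zeros = (rho - delta) * np.exp(2j * np.pi * np.asarray(f_prev))
        num = np.poly(zeros)             # prod_m (z - w_m)
    return lfilter(num, den, y)
```

With the open-loop setting of the next subsection one would call, for instance, `eds_filter(y, [0.20, 0.22], rho=0.93, delta=0.18)` and then discard the transient samples before running TK.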
3.3. EDS parameter definition

This section is devoted to the evaluation of the effects of the EDS filter parameters on the algorithm performance. A test signal made of two undamped modes with frequencies f = [0.20, 0.22], phases φ = [−3.63, 0] and amplitudes A = [1, 1] has been used throughout all the related simulations. A number N = 128 of available data points has been considered. The original noise contribution is AWGN, and all the simulations are carried out with SNR = 0 dB. When the signal consists of two equal-level complex tones, the Cramér–Rao bounds (CRBs) for the two frequencies are equal, as the mutual interference is reciprocal [13]. Using the above numerical values and inverting the Fisher information matrix, we obtain that CRB = 70 dB for each frequency.

First of all, the TEA scheme has been tested in an open loop configuration: the EDS has been kept fixed, with pole locations in p_i = ρ e^{j2πf_i}, where f_i are the true frequencies. As in this phase we are not interested in evaluating the effects of the filter initial conditions on the estimation process, but only in investigating the parameter selection, after the filtering operation we skip those data samples which are affected by the filter transient and employ, for the actual estimation process, only N_i = 64 samples out of the N = 128 available ones. Obviously, as the simulations are worked out exploiting only the information carried by the stationary signal segment, the obtained performance turns out to be rather far from the corresponding CRB. On the other hand, simulation results, worked out with optimized values of N_i and reported in Section 4, show that the actual performance of the proposed algorithm is very close to the CRB.

In summary, the following two-step frequency estimation procedure is addressed:

(1) pass the input data sequence y through the EDS filter to obtain s;
(2) enter the second half of s (N_i = 64 data samples) into the TK algorithm with L = 48 to obtain an estimate f̂.

One hundred independent realizations of the data vector y with SNR = 0 dB have been generated, and the same data sequences have been used for all the addressed experiments. With this Monte Carlo method, the following quantities have been estimated and employed as overall performance indices: the average mean-square error (MSE), denoted by

μ = ½ {E[(f̂_1 − f_1)²] + E[(f̂_2 − f_2)²]},   (8)

which depends on both the squared bias and the estimation variance; the average squared bias

b = ½ {[E(f̂_1) − f_1]² + [E(f̂_2) − f_2]²},   (9)

and the average variance

σ² = ½ {var[f̂_1] + var[f̂_2]}.   (10)
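A possible Monte Carlo evaluation of the indices (8)–(10) is sketched below with the test signal of this section; the per-tone SNR convention used to set the noise variance and the placeholder `estimator` callable (e.g. a TKIF implementation) are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
f_true = np.array([0.20, 0.22])          # test signal of Section 3.3
phi = np.array([-3.63, 0.0])
A = np.array([1.0, 1.0])
N, snr_db, runs = 128, 0.0, 100

def make_data():
    n = np.arange(N)
    x = (A * np.exp(1j * (2 * np.pi * np.outer(n, f_true) + phi))).sum(axis=1)
    sigma2 = (np.abs(A) ** 2).mean() / 10 ** (snr_db / 10)   # assumed per-tone SNR convention
    w = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    return x + w

def mc_indices(estimator):
    """Average MSE, squared bias and variance over the two tones (Eqs. (8)-(10))."""
    est = np.array([np.sort(estimator(make_data())) for _ in range(runs)])
    mse = np.mean((est - f_true) ** 2, axis=0)
    bias2 = (est.mean(axis=0) - f_true) ** 2
    var = est.var(axis=0)
    return mse.mean(), bias2.mean(), var.mean()   # mu, b, sigma^2
```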
In the first experiment, we employ the EDS filter of Eq. (5). The choice of the pole modulus ρ is crucial for the TKIF performance, as it affects the filter effectiveness in rejecting the residual noise. A natural choice would be to select values of ρ very close to unity; however, this choice presents two serious drawbacks. First of all, a high frequency selectivity implies that the filter impulse response decays slowly; this induces a long transient in the filtered signal s_k, which leads to the need for extra measured data in order to accommodate both the transient and the steady-state data to be processed by the EA. (It must be remarked that this number of extra data has little effect on the computational complexity, as they are only used in the filtering process; see Section 5.) Moreover, it has been experimentally verified that the filter selectivity is related to the overall outlier probability; this can be intuitively explained by noticing that, when the pre-estimated frequencies are imprecise (due either to an outlier occurrence in the initialization EA or to simple estimation variance), a very selective filtering around the pre-estimated frequencies could completely cut off some of the true signal components, forcing an outlier in the overall estimation process. On the other hand, if less selective filters are employed, an initial outlier can be recovered by means of the noise suppression mechanism.

If we impose an insertion gain of about 10 dB per sinusoid, the corresponding pole modulus of Eq. (6) turns out to be ρ = 0.82; with this value, it can be verified that the filter transient does not exceed the first 64 samples of s; in fact, ρ = 0.82 corresponds to an exponential decay time constant τ ≈ 5, and therefore 64 is about 13τ. Therefore, the filter transient does not affect the estimation procedure, as this latter is based on the second half of the N = 128 data record. The related simulation results are presented in the first row of Table 1.

In order to verify whether a better filter selectivity is able to improve the estimation accuracy, we have carried out a second experiment, modifying the pole location so as to achieve a larger gain. From the previous evaluation it can be noticed that ρ can be increased to some extent without exceeding a filter transient of 64 samples. Imposing 64 ≈ 5τ, a value ρ ≈ 0.93 can be worked out, which yields an insertion gain of about 14 dB per sinusoid.

The corresponding experimental values of μ, σ² and b are shown in the second row of Table 1. As can be noticed, the performance improvement obtained using a larger gain EDS filter is negligible, as a small variance decrease is paid in terms of a larger bias. The obtained results reveal that the performance of the EDS filter is affected by some factor other than the gain. It can be noticed that the filter alters the AWGN condition, which is one of the assumptions required by the TK algorithm to assure proper operation. Therefore, a possible hypothesis is that the obtained results are negatively affected by the presence of colored noise.

In order to test this assumption, the design of the EDS has been modified so that the filter effectively enhances the sinusoids but, at the same time, keeps the output noise spectrum sufficiently smooth (or 'locally white'). To this end, the EDS filter of Eq. (7) is employed in the third experiment. The parameter δ determines both the selectivity and the gain of the prefilter. Taking ρ = 0.93 and δ = 0.18, the results reported in the last row of Table 1 can be obtained. It can be noticed that this second EDS filter achieves improved performance with respect to the filter of Eq. (5), especially in terms of estimation bias; the bias reduction represents a remarkable advantage of the filter of Eq. (7) over that of Eq. (5), also confirmed by the simulation results reported in Section 4. Moreover, from Table 1, we see that the second EDS filter achieves improved performance also in terms of estimation variance, with an increase of 1 dB when switching from the filter of Eq. (5) to that of Eq. (7). We remark once again that these results are not significant as for the actual performance of the algorithm, but are only focussed on the filter design, and therefore must be considered as terms of comparison of the addressed experiments.

Table 1
Simulation results for the design of the EDS module. N = 128, N_i = 64, SNR = 0 dB. The CRB is 70 dB

Method                          MSE, μ      Variance, σ²   Bias, b      MSE in dB, −10 log₁₀(μ)
TKIF-1 (ρ = 0.82)               4.66e−7     4.64e−7        1.56e−9      63.3
TKIF-1 (ρ = 0.93)               4.21e−7     4.7e−7         3.55e−9      63.8
TKIF-2 (ρ = 0.93, δ = 0.18)     3.30e−7     3.30e−7        2.96e−10     64.8

4. The TKIF algorithm: experimental results

From the results of Table 1, it could seem that the cascade EDS-EA does not allow the algorithm to reach the CRB with any of the two considered EDS filters. Actually, it is indeed possible to approach the CRB provided that the whole transient, or a portion of it, is included in the data sequence passed on to the EA block. So far, the influence of the data filter initial conditions has been neglected; the filtered noise has been considered stationary and the amplitude of the filtered sinusoid constant in time. These assumptions are not realistic, especially when the filter transition period becomes comparable to N. The influence of the biasedness of the frequency estimation algorithm in an iterative filtering procedure has already been pointed out in [3,11], as well as in this paper.

Simulations have been carried out, assuming that either N = 64 or 128 data samples are available. It has been experimentally verified that, as the bandwidth parameter ρ increases towards unity, the MSE of both methods does not decrease monotonically, but reaches a minimum and then increases. A closer examination of the simulation results reveals that this is due to an increase of the squared bias b, while the estimation variance monotonically decreases as ρ grows towards unity. Actually, when ρ is low, the squared bias never dominates the variance, and hence μ shares the same decreasing trend as the variance. On the other hand, the squared bias becomes dominant as ρ tends to unity, and a tradeoff effect between b and σ² takes place. Consequently, using a value of ρ very close to unity is no longer a good strategy to minimize the MSE. Due to the same effect, the difference between TKIF-1 and TKIF-2, whose EDS filter introduces lower bias, becomes apparent. Note that the estimation bias can also be reduced by discarding T_0 initial samples from the sequence after the first estimate.

In order to show the performance of TKIF-1 and TKIF-2 in different SNR conditions, Fig. 3 reports the estimation variance σ² as a function of SNR, for the two situations of N = 128 and 64 available samples. The thin lines represent the CRB for N = 128 and 64. The results are obtained for experimentally optimized values of the parameters ρ, δ and T_0. The final estimates are reached in K = 3 iterations, and the initial estimate is obtained with TK and L = 10. The TK algorithm with L = 10 is also employed as the EA for the closed loop iterations. It can be noticed that both the curves of TKIF-1, especially the one with N = 64, exceed the CRB. This is due to the bias present in both estimates. On the contrary, the variance of TKIF-2 closely follows, but never exceeds, the CRB.
Moreover, the TKIF-2 threshold is lower, thanks to the better selectivity of the EDS filter.

Fig. 3. Plot of −10 log₁₀(var) against SNR in dB for TKIF-1 and TKIF-2; the thin lines indicate the CRBs for N = 64 and N = 128.

Table 2
TKIF-1 and TKIF-2 estimation indices in the case N = 64

        TKIF-1                               TKIF-2
SNR     μ          b          σ²             μ          b          σ²
4       1.41e−2    1.40e−2    9.85e−5        8.55e−7    2.15e−8    8.33e−7
5       1.01e−6    1.22e−7    8.85e−7        4.62e−7    1.52e−8    4.47e−7
6       2.54e−7    9.10e−8    1.63e−7        3.24e−7    6.55e−9    3.17e−7
10      1.46e−7    8.43e−8    6.12e−8        1.21e−7    2.98e−9    1.18e−7
20      8.23e−8    7.78e−8    4.53e−9        1.17e−8    8.65e−10   1.08e−8
30      7.65e−8    7.61e−8    4.33e−10       1.16e−9    1.54e−10   1.01e−9

Table 3
TKIF-1 and TKIF-2 estimation indices in the case N = 128

        TKIF-1                               TKIF-2
SNR     μ          b          σ²             μ          b          σ²
3       4.43e−6    1.14e−7    4.32e−6        1.04e−7    1.80e−9    1.02e−7
5       4.38e−8    7.68e−9    3.61e−8        3.85e−8    1.54e−9    3.70e−8
10      2.57e−8    1.90e−8    6.68e−9        9.96e−9    1.67e−10   9.79e−9
20      2.12e−8    2.06e−8    5.92e−10       1.30e−9    3.63e−10   9.38e−10

The numerical performance indices for the same SNR values are presented in Table 2 for N = 64 and in Table 3 for N = 128. These results show that both TKIF-1 and TKIF-2 achieve an MSE very close to the CRB. Moreover, a closer examination of Tables 2 and 3 reveals that the bias of TKIF-1 is not negligible, but instead plays a role comparable to the standard deviation in the mean-square error definition.

Finally, it is worth noticing that the performance of both algorithms degrades dramatically when the SNR falls below 2.5 dB. The poor initial accuracy of TK with L = 10 is largely responsible for this threshold. In fact, simulations have shown that the threshold can be drastically reduced if the two frequencies corresponding to the largest absolute values of the data discrete Fourier transform are taken as the initial estimates.
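A minimal sketch of such a DFT-based initialization is given below; the zero-padded spectrum and the minimum peak separation of one DFT bin are assumptions added here to keep the two picked peaks on distinct tones.

```python
import numpy as np

def dft_init(y, p=2, nfft=4096):
    """Initial frequency estimates from the p largest |DFT| values."""
    spectrum = np.abs(np.fft.fft(y, nfft))
    freqs = np.fft.fftfreq(nfft)
    order = np.argsort(spectrum)[::-1]      # bins sorted by decreasing magnitude
    picked = []
    min_sep = 1.0 / len(y)                  # assumed guard: one unpadded-DFT bin
    for i in order:
        if all(abs(freqs[i] - f) > min_sep for f in picked):
            picked.append(freqs[i])
        if len(picked) == p:
            break
    return np.sort(np.array(picked))
```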
5. The TKIF method: analysis of complexity

In this section, we address the topic of analyzing the computational complexity of TKIF. It is worth recalling that the objective of TEA is either to reduce the basic EA complexity, maintaining the same performance, or to yield higher estimation accuracy with the same complexity of a single EA processing. As a metric for computational complexity, we evaluate the number F of floating point operations (flops) involved by TK and by its TEA versions.

The key property of TKIF is determined by the TK procedure and by the number of iterations, but is almost independent of the filtering operation. Let us define the complexity of the TK method as

F_TK(N, L) ≈ F_SVD(N, L) + F_R(L),   (11)

where F_SVD(N, L) is the SVD complexity, and F_R(L) is the complexity of the root-extraction procedure for an Lth order polynomial. It is known that [14]

F_R(L) ≈ (20/3) L³,   (12)

whereas the SVD complexity F_SVD can be approximated, according to the efficient SVD computation described in [14], as

F_SVD(N, L) ≈ 14 R C² + 8 L³,   (13)

where R and C are, respectively, the numbers of rows and columns of the LP data matrix Y; in our case, R = N − L and C = L, as can be inferred from Eq. (3). Eqs. (12) and (13) put into evidence that the Prony filter order L is very critical for the algorithm complexity, and must be chosen so as to minimize the computational cost to the maximum possible extent.

The complexity of the TKIF algorithm can then be defined as

F_TKIF(N, T_0, L, K) ≈ F_TK(N, L) + K [F_TK(N − T_0, L) + F_FIL],   (14)

where K is the number of iterations after the pre-estimation, T_0 is the number of transient samples to be skipped after the first estimation, and F_FIL is the complexity of the IIR filtering performed by the EDS module. This latter is defined as

F_FIL = N (N_w + N_p),

where N_w and N_p are, respectively, the numbers of zeros w_m and poles p_i of H_k(z) (supposed independent of k) and N is the number of signal samples. The filtering complexity is then determined by:

• the number N of input samples to be processed;
• the number M (proportional to N_p and N_w) of modes which compose the signal;
• the number of transmission zeros.

If the complexities of the TK and TKIF algorithms are directly compared, it could seem that the latter is far more complex than the former. Actually, this consideration holds true when the two algorithms are compared with the same values of N and L, which, in practice, is not a reasonable assumption. In fact:

• The complexity of the IIR filtering operation is very low, if compared with that of the SVD or root finding procedure. Indeed, for a sample length N = 64 and an optimal choice L = 20, we have F_R ≈ 5.3×10⁴ and F_SVD ≈ 3.1×10⁵, while for the filtering operation we need only 64 floating point operations per pole/zero.
• The TKIF method requires smaller values of N and L to achieve the same performance as TK. This fact, which is a consequence of the TEA concepts, has been experimentally verified, as will be described in the following. The complexity of the TK is so strongly dependent on N and L that it can be more economical to process some iterations with lower N and L than to perform only one TK estimation with higher values of N and L. In fact, the square and cubic terms L² and L³ in Eq. (13) for F_SVD can be predominant against the linear term K·F_TK in Eq. (14).
• The individual estimation error variances decrease very quickly after the first iteration, so that it is often possible to achieve a high estimation accuracy with a limited number of iterations.

All these considerations are confirmed by experimental results. A comparison between the TKIF and the TK algorithms can be made in terms of the complexity ratio

c = F_TK / F_TKIF.   (15)
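The flop counts of Eqs. (11)–(15) can be reproduced with the short sketch below; the EDS cost assumes one pole and one zero per mode for the two-mode test signal, and the printed ratios are consistent with the values quoted in the following paragraphs.

```python
def f_root(L):                   # Eq. (12): root extraction of an Lth order polynomial
    return 20.0 / 3.0 * L ** 3

def f_svd(N, L):                 # Eq. (13): R = N - L rows, C = L columns
    R, C = N - L, L
    return 14 * R * C ** 2 + 8 * L ** 3

def f_tk(N, L):                  # Eq. (11)
    return f_svd(N, L) + f_root(L)

def f_fil(N, n_zeros, n_poles):  # IIR filtering cost of the EDS
    return N * (n_zeros + n_poles)

def f_tkif(N, T0, L, K, n_zeros=2, n_poles=2):   # Eq. (14), two modes assumed
    return f_tk(N, L) + K * (f_tk(N - T0, L) + f_fil(N, n_zeros, n_poles))

# Complexity ratio c = F_TK / F_TKIF (Eq. (15)) for the two cases discussed below
print(f_tk(64, 40) / f_tkif(64, 20, 10, 3))    # about 5.31, cf. c = 5.39 in the text
print(f_tk(128, 30) / f_tkif(128, 40, 10, 3))  # about 2.95, cf. c = 2.91 in the text
```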
In Section 4, the parameters ρ, T_0 and δ of TKIF-2 have been optimized for SNR = 10 dB and the two sample lengths N = 64 and 128. In these conditions, the TEA method with TK and L = 10 as the basic EA attains the CRB with negligible bias. In order to achieve the same performance with the TK algorithm alone, L = 40 when N = 64 and L = 30 when N = 128 are necessary. Replacing the numerical values into Eqs. (11) and (14), we obtain for the case N = 64 (T_0 = 20):

F_TK(64, 40) = 1.5×10⁶,
F_TKIF-2(64, 20, 10, 3) = 2.8×10⁵,

and consequently c = 5.39. Likewise, for N = 128 we obtain (note that T_0 = 40)

F_TK(128, 30) = 1.6×10⁶,
F_TKIF-2(128, 40, 10, 3) = 5.5×10⁵,

and c = 2.91.

The results reveal that the TKIF algorithm is very efficient with respect to the TK alone. More precisely, in the considered numerical examples, the total number of flops is reduced by a factor between 2.91 and 5.39, depending on the sample data length N.

6. Conclusions

In this paper we have introduced the class of turbo estimation algorithms (TEA), which represent a feasible approximation of optimum parameter estimation, based on a closed loop configuration able to achieve a very efficient noise suppression. The denoising mechanism of TEA has been described, and an interpretation in terms of a more general turbo principle has been given. As an example, the TKIF algorithm has been defined and analyzed in the context of modal analysis; it employs TK as a master EA and uses an iterative filtering procedure to perform the external denoising. The algorithm has been tested through simulations and the obtained results show that it is able either to reduce the complexity of TK, achieving the same performance, or to yield higher accuracy estimates with the same complexity of a single TK processing.

Further research could be spent to refine the theoretical foundations of TEAs, for example by evaluating the asymptotic performance of a generic TEA versus its master EA. The TEA capability of correcting an outlier occurred in the initial estimation step, which has only been experimentally observed in the present work, could be verified and formalized in detail. The conditions to be satisfied by an EA in order to be admissible as the master EA within a turbo scheme should be investigated. Finally, other turbo algorithms could be devised, in order to solve various estimation problems other than the modal analysis one, possibly with extensions to the case of random parameter estimation.

References

[1] C. Berrou, A. Glavieux, Near optimum error correcting coding and decoding: turbo-codes, IEEE Trans. Commun. 44 (10) (1996) 1261–1271.
[2] J. Hagenauer, The turbo principle: tutorial introduction and state of the art, Proceedings of the International Symposium on Turbo Codes, Brest, France, 1997.
[3] S. Kay, Accurate frequency estimation at low signal-to-noise ratio, IEEE Trans. Acoust. Speech Signal Process. ASSP-32 (June 1984) 540–547.
[4] S.M. Kay, S.L. Marple, Spectrum analysis – a modern perspective, Proc. IEEE 69 (November 1981) 1380–1419.
[5] R. Kumaresan, D.W. Tufts, Estimation of frequencies of multiple sinusoids: making linear prediction perform like maximum likelihood, Proc. IEEE 70 (9) (September 1982) 975–989.
[6] R. Kumaresan, D.W. Tufts, Estimation of frequencies of multiple sinusoids: making linear prediction work like maximum likelihood, Proc. IEEE 70 (9) (September 1982) 975–989.
[7] S.J. Leon, Linear Algebra with Applications, Macmillan, New York, 1994.
[8] G. Olmo, L. Lo Presti, An enhanced TEA algorithm for modal analysis, ICASSP 99, August 1998.
[9] G. Olmo, L. Lo Presti, D. Bosetto, A turbo estimation algorithm for PSK carrier frequency recovery, Internal Report of Politecnico di Torino, 1999.
[10] G. Olmo, L. Lo Presti, D. Bosetto, High performance estimation algorithm for PSK carrier frequency recovery, Eusipco 2000, September 2000, accepted for presentation.
[11] K.K. Paliwal, Some comments about the iterative filtering algorithm for spectral estimation of sinusoids, Signal Process. 10 (1986) 307–310.
[12] B.R. de Prony, Essai expérimental et analytique: sur les lois de la dilatabilité de fluides élastiques et sur celles de la force expansive de la vapeur de l'eau et de la vapeur de l'alcool, à différentes températures, J. de l'École Polytechnique, Paris, 1795.
[13] D.C. Rife, R.R. Boorstyn, Multiple tone parameter estimation from discrete-time observations, Bell System Tech. J. 55 (9) (November 1976).
[14] W.M. Steedly, C.J. Ying, R.L. Moses, A modified TLS-Prony method using data decimation, IEEE Trans. Signal Process. 42 (9) (September 1994) 2292–2303.
[15] H.L. Van Trees, Detection, Estimation and Modulation Theory, Wiley, New York, 1968.
