
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 47, NO. 4, APRIL 1999, 1099

**The Wigner Distribution of Noisy Signals with Adaptive Time–Frequency Varying Window**

LJubiša Stanković, Senior Member, IEEE, and Vladimir Katkovnik, Member, IEEE

Abstract—Time–frequency representations using the Wigner distribution (WD) may be significantly obscured by the noise in observations. The analysis performed for the WD of discrete-time noisy signals shows that this time–frequency representation can be optimized by the appropriate choice of the window length. However, the practical value of this analysis is limited because the optimization requires knowledge of the bias, which depends on the unknown derivatives of the WD. A simple adaptive algorithm for the efficient time–frequency representation of noisy signals is developed in this paper. The algorithm uses only the noisy estimate of the WD and the analytical formula for the variance of this estimate. The quality of this adaptive algorithm is close to the one that could be achieved by the algorithm with the optimal window length, provided that the WD derivatives were known in advance. The proposed algorithm is based on the idea developed in our previous work for instantaneous frequency (IF) estimation. Here, addressing the WD itself directly, rather than the instantaneous frequency, results in a time and frequency varying window length and shows that the assumption of small noise and bias is no longer necessary. A simplified version of the algorithm, using only two different window lengths, is presented. It is shown that the procedure developed for the adaptive window length selection can be generalized for application to multicomponent signals with any distribution from the Cohen class. Simulations show that the developed algorithms are efficient, even for a very low value of the signal-to-noise ratio.

I. INTRODUCTION

A VARIETY of tools is used for the time–frequency analysis of nonstationary signals. Many of these tools have the form of energy distributions in the time–frequency plane. The Wigner distribution (WD) is one of the most prominent members of this class. It was defined by Wigner in quantum mechanics and first used by Ville in signal analysis. All quadratic time–frequency distributions belonging to the general Cohen class, including the widely used spectrogram, may be written as two-dimensionally smoothed forms of the WD. The properties of these distributions have been investigated in detail during the last two decades [2], [4], [5], [7]–[9], [13], [14], [28], [34]. In particular, one of the topics of intensive study has been the influence of noise on the time–frequency representations [1], [4], [10], [12], [15], [18], [22], [24], [25], [29], [30]. It has been concluded that the lag window length is one of the most important parameters; it has to be chosen by a tradeoff between the variance and the bias of the WD estimate [25].

The aim of this paper is to present an algorithm that produces a window length close to the optimal one without knowing the bias, since the bias depends on derivatives of the unknown distribution to be estimated. The proposed algorithm is based on the idea developed in [11], [16], and [17]. The orientation of the algorithms presented in [16] and [17] toward instantaneous frequency (IF) estimation resulted in a time-varying-only window length that is the best for the IF estimation, i.e., for the reconstruction of the WD at its peak along frequency, for a given time instant; that window length is then used for all frequencies at this time instant. The algorithm developed in this paper directly addresses the problem of "denoising" the WD that was itself obtained from the noisy signals. This new algorithm produces an adaptive window length that is both time and frequency varying. Two points determine the difference of the new algorithm versus the algorithms studied in [16] and [17]. First, the adaptive window in the new algorithm is assumed to be dependent on both time and frequency. Second, the variance of the WD itself, instead of the variance of the IF, is used in this algorithm; the calculation of this variance requires no assumption about the smallness of the noise. A theoretical analysis of the algorithm parameters is given in this paper as well. Furthermore, the new adaptive algorithm appears to be efficient for the analysis of multicomponent signals when it is applied in combination with the reduced interference distributions. On the whole, the idea of the developed algorithm is quite universal and flexible and can be adapted to various time–frequency analysis problems.

Manuscript received February 18, 1998; revised August 23, 1998. This work was supported in part by the Alexander von Humboldt Foundation. The associate editor coordinating the review of this paper and approving it for publication was Dr. Frans M. Coetzee. LJ. Stanković is with the Signal Theory Group, Ruhr University Bochum, Bochum, Germany, on leave from the Elektrotehnicki Fakultet, University of Montenegro, Montenegro, Yugoslavia (e-mail: l.stankovic@ieee.org). V. Katkovnik is with the Statistics Department, University of South Africa, Pretoria, Republic of South Africa. Publisher Item Identifier S 1053-587X(99)02149-2.
Finally, the algorithm is simple in implementation. In particular, a simplified version of the algorithm, based on the WD calculation with only two different (small and large) window lengths, is developed in this paper. The paper is organized as follows. The analysis of the variance and bias of the WD of a deterministic noisy signal is reviewed in Section II. The idea of the adaptive algorithm, as well as the choice of its key parameters, is presented in Section III. The basic adaptive algorithm is described in Section IV. The simplified "two-window" version of the algorithm is presented in Section V. Sections VI and VII deal with examples, including the algorithm application on the

1053–587X/99$10.00 © 1999 IEEE

S-method, and a generalization to the other distributions from the Cohen class.

II. THEORY

The WD of a discrete-time noisy signal x(n) is defined as [3], [35]

WD(n, ω) = 2 Σ_k w(k) w(−k) x(n + k) x*(n − k) e^(−j2ωk)    (1)

with

x(n) = s(n) + ε(n)    (2)

where the asterisk denotes a complex conjugate value and w(k) is a symmetric window of a finite length N. Note that the constant factor of 2 in (1) is used in accordance with the original definition of the WD. The variance of the unwindowed WD approaches infinity as the number of samples grows; therefore, in practice, truncated signals are used in the WD calculation, and the WD of the signal truncated by the window w(k) is referred to as the pseudo-WD. We assume that the signal s(n) is deterministic and that the noise ε(n) is complex-valued white Gaussian, with E{ε(n)ε*(m)} = σ_ε² δ(n − m), where δ(n) is the Kronecker delta function.

Let Δ(n, ω) be the error of estimation of WD(n, ω) by the pseudo-WD WD_N(n, ω); the bias of the estimate is equal to E{Δ(n, ω)}. Calculation of the mean of WD_N(n, ω) gives [25]

E{WD_N(n, ω)} = WD(n, ω) *_ω W(ω) + σ_ε² Σ_k w(k) w(−k)

where W(ω) is the Fourier transform (FT) of w(k)w(−k) and *_ω denotes a convolution in ω. The first term shows that decreasing the lag-window length (i.e., increasing the width of its FT) causes a corresponding increase of the WD bias. The second term does not depend on (n, ω): it means that the WD of the noise-free signal is superimposed on a pedestal whose height is constant for all (n, ω) [18]. Since this constant term does not play a significant role in the time–frequency analysis of the signal, it is omitted in the sequel; formally, we may consider the pedestal-free distribution to be the true value, since only the terms depending on the window length are essential for the window length optimization. Using a Taylor series expansion of WD(n, ω) with respect to ω [25], the approximate expression for the bias is obtained in the form

bias(n, ω; N) ≅ (m₂(N)/2) ∂²WD(n, ω)/∂ω²

where m₂(N) is the second-order moment of W(ω). For many commonly used windows, including the Hanning window, m₂(N) decreases in proportion to 1/N², so the bias decreases as the window gets longer; the higher order terms of the expansion can be omitted in the following analysis.

The variance of the WD estimate is defined by σ²(n, ω; N) = E{[WD_N(n, ω) − E{WD_N(n, ω)}]²}. For the considered white complex-valued Gaussian noise and for the FM signal with a slowly varying real-valued amplitude

x(n) = A e^(jφ(n)) + ε(n)    (7)

the variance of (1) is obtained as [1], [25], [30]

σ²(N) ≅ σ_ε² (2A² + σ_ε²) E_w(N)

where E_w(N) is an amplitude moment of the window proportional to Σ_k w²(k) w²(−k). Since E_w(N) grows in proportion to the window length N, the variance increases as the window gets longer; this is the reason why the variance of the unwindowed WD approaches infinity. For a nonrectangular window, according to the previous discussion, the variance should be written in this more general, window-dependent form rather than for the rectangular window only.
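As a concrete illustration of the windowed (pseudo) WD discussed above, a minimal numerical sketch follows; the function name `pseudo_wd`, the Hanning lag window, and the FFT-based evaluation over N frequency bins are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pseudo_wd(x, N):
    """Pseudo-Wigner distribution with a symmetric Hanning lag window of length N.

    For each time instant n, computes a discretization of (1):
        WD(n, m) = 2 * Re{ sum_k w(k) w(-k) x(n+k) conj(x(n-k)) e^{-j 2 pi k m / N} }
    over N frequency bins m.
    """
    L = len(x)
    h = N // 2
    w = np.hanning(N + 1)               # w[h] is the center of the window
    wd = np.zeros((L, N))
    for n in range(L):
        r = np.zeros(N, dtype=complex)  # local autocorrelation r(k), k stored mod N
        for k in range(-h + 1, h):
            if 0 <= n + k < L and 0 <= n - k < L:
                r[k % N] = w[h + k] * w[h - k] * x[n + k] * np.conj(x[n - k])
        # r(-k) = conj(r(k)), so the DFT of r is real up to rounding errors
        wd[n] = 2 * np.real(np.fft.fft(r))
    return wd
```

For a pure complex exponential x(n) = e^(jω₀n), the distribution concentrates at the bin m ≈ ω₀N/π; the doubling of the frequency axis is the factor of 2 characteristic of the WD.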

III. BASIC IDEA OF ADAPTIVE WINDOW LENGTH SELECTION

According to the presented analysis, the bias of the WD estimate decreases while its variance grows as the window length N increases. The optimal window length is obtained by minimizing the mean squared error (MSE) defined by

MSE(n, ω; N) = bias²(n, ω; N) + σ²(N).    (16)

Assuming that w(k) is the Hanning window and that the signal is the FM signal with a slowly varying amplitude (7), the optimal window length follows from the equation ∂MSE(n, ω; N)/∂N = 0 (17); its solution (18) is very interesting for a theoretical analysis, but it is not useful for practical applications because it depends on the second derivative ∂²WD(n, ω)/∂ω² of the unknown distribution, which is definitely unknown in advance. The optimal window length depends on (n, ω), meaning that it is time–frequency varying. Our aim is to design an adaptive algorithm that produces a value of N close to the optimal one at any given point (n, ω) without knowledge of WD(n, ω) and its derivatives.

The MSE (16) can be represented in the form

MSE(n, ω; N) = σ²(N)[1 + ρ²(N)]    (19)

where

ρ(N) = |bias(n, ω; N)| / σ(N)    (20)

is the ratio of the bias to the standard deviation of the WD estimate. It is easy to show that, for the given formulae of the bias and variance, the ratio of the bias to the standard deviation at the optimal window length is a constant (21). Let us emphasize this quite interesting point: this constant, denoted by κ, does not depend on the WD or any other parameter of the problem.

The algorithm will be defined on the basis of the analysis that follows. First, since the Gaussian distribution is a good approximation for the distribution of the WD estimation errors,¹ we can conclude that the random error of the WD estimation can be represented in the form

WD_N(n, ω) = WD(n, ω) + bias(n, ω; N) + ξ(N),  |ξ(N)| ≤ χσ(N)    (22)

which holds with probability P(χ), where χ is the corresponding quantile of the standard Gaussian distribution. Second, a few approaches could be used for the choice of the parameter that determines a proportion between the bias and the standard deviation in the window length selection:

a) The simplest choice assumes the equality of the random error and the bias.
b) Another choice assumes a proportion between the bias and the standard deviation corresponding to the one that holds for the optimal window length minimizing the MSE (21). This value will be a good intuitive choice for the simplified "two-window" algorithm that is going to be presented.
c) Finding the parameter through an optimal estimation of the unknown quantities, as we did in the IF estimation in [17].

Here, the last approach is considered in the sequel of this section. Let us assume that, for some window length N,

|bias(n, ω; N)| ≤ κσ(N)    (23)

holds. We wish to make clear the idea behind assumption (23): it relates the unknown bias to the known standard deviation. Then, with probability P(χ), inequality (22) can be strengthened to the form

|WD_N(n, ω) − WD(n, ω)| ≤ (κ + χ)σ(N).    (24)

In order to derive the algorithm, we have to define a finite set of window lengths for which the WD will be calculated; the number of lengths is determined by the computational time we may allow. The widest length may be chosen, for example, so that the bias is dominant, whereas the narrowest length may assume any reasonably small value; in between these two values, a finite number of window lengths has to be assumed. Let us introduce a dyadic set of descending window lengths

N_s ∈ {N_s : N_s = N_{s−1}/2, s = 1, 2, ..., J}.    (25)

¹Precisely speaking, the random error ξ(N) is a quadratic function of the Gaussian noise ε(n) and therefore has a non-Gaussian distribution; the Gaussianity assumption, however, is not crucial for the analysis that follows.
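The interval-based selection idea above can be sketched in a few lines for a single (n, ω) point; the helper name `ici_select`, the successive-pair intersection test, and the default confidence factor are illustrative assumptions.

```python
def ici_select(wds, sigmas, gamma=2.0):
    """Intersection-of-confidence-intervals rule for one (n, omega) point.

    wds[s]    : WD estimates for the dyadic window set, ordered from the widest
                window (small bias, large variance) to the narrowest.
    sigmas[s] : the corresponding standard deviations.
    Returns the index of the narrowest window whose confidence interval still
    intersects the interval of the preceding, wider window.
    """
    s_opt = 0
    for s in range(1, len(wds)):
        lo_prev = wds[s - 1] - gamma * sigmas[s - 1]
        hi_prev = wds[s - 1] + gamma * sigmas[s - 1]
        lo, hi = wds[s] - gamma * sigmas[s], wds[s] + gamma * sigmas[s]
        if max(lo, lo_prev) <= min(hi, hi_prev):  # intervals intersect: bias still small
            s_opt = s
        else:                                     # bias dominates: stop the search
            break
    return s_opt
```

In the last synthetic example below, the fourth estimate is strongly biased, so the rule settles on the third window.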

The simplicity and efficiency of the WD calculation for the window lengths from the dyadic set (25), using the standard radix-2 FFT algorithms, is one of the reasons in favor of selecting the window lengths as a dyadic set. From here on, we use two different indices for the window lengths: s, which denotes the indexing starting from the widest toward the narrowest window length, and r, which denotes the indexing in the opposite direction.

Now, we are going to derive the algorithm for the determination of the adaptive window length close to the optimal one, using only the estimate WD_s(n, ω), obtained with the window length N_s, and the variance σ²(N_s) of this estimate. We assume from the beginning that the noise variance σ_ε² and the amplitude A of the FM signal (7) are known. Let us consider the confidence intervals corresponding to (24),

D_s = [WD_s(n, ω) − (κ + χ)σ(N_s), WD_s(n, ω) + (κ + χ)σ(N_s)]    (26)

where the random value of the estimate WD_s(n, ω) determines the middle point of the confidence interval. These confidence intervals play a key role in the adaptive algorithm, which is based on the following statement.

Statement: Assume that (22) holds with the probability P(χ), that the bias is positive (without loss of generality), and that the optimal window length belongs to the dyadic set (25). Consider the confidence intervals D_s for the successive values of s, and determine ŝ(n, ω) as the index of the narrowest window length for which every two successive confidence intervals up to ŝ still have a point in common. Then, N_ŝ is close to the optimal window length, provided that the parameters κ and χ satisfy the condition (31).

Proof (outline): Because of the monotonicity of the bias and of the variance as functions of N, for the window lengths wider than the optimal one, the bias is much smaller than the standard deviation; each estimate then lies within its confidence interval around the true value WD(n, ω) with probability P(χ), and the successive confidence intervals therefore have at least a point in common. For the window lengths narrower than the optimal one, we have an inverse picture: the variance is small, and the bias is large; the middle points of the successive intervals then drift apart faster than the interval widths shrink, and the intersection fails. The value of the bias admitted by the intersection of D_s and D_{s+1}, together with the value required for the nonintersection of the next pair of intervals, bounds the bias at the adaptive window length from above and from below. Rewriting these bounds in terms of the ratio of the bias to the standard deviation gives the interval (34) for the admissible pairs (κ, χ); inequality (34) shows that such a pair exists, which completes the proof of the statement.

IV. ALGORITHM

Let us initially assume that the amplitude A and the standard deviation σ_ε of the noise are known. Define a dyadic set of window lengths (25). The basic algorithm is as follows.

1) The WD is calculated for all window lengths of the dyadic set; for every point (n, ω), we obtain a set of distributions WD_s(n, ω).
2) The adaptive window length N_ŝ(n,ω) is determined as the narrowest one for which every two successive confidence intervals (26) still intersect.
3) The WD with the adaptive time–frequency varying window length is

WD(n, ω) = WD_ŝ(n,ω)(n, ω).    (37)

Comments on the Algorithm:

a) The standard deviation σ(N) depends on the amplitude A of the signal, the standard deviation σ_ε of the noise, and the energy of the window; the window energy is a signal-independent constant. Provided that the sampling period is small, the quantity A² + σ_ε² can be estimated as the average of |x(n)|² over the observations, where the number of observations is assumed to be large (39). The noise variance can be estimated by the robust median estimator

σ̂_ε = median{|x_R(n) − x_R(n − 1)|} / 0.6745    (40)

where x_R(n) and x_I(n) are the real and imaginary parts of x(n), and an analogous estimate follows from x_I(n). In the case of a very high noise, i.e., of a low signal-to-noise ratio, the average of |x(n)|² could also be used as an estimate of σ_ε². While the first estimate is restricted by the requirement of a small sampling period for an accurate estimation, the other is valid for low signal-to-noise ratio cases only; which one of the previous methods will be used for the estimation depends on the specific situation.

b) A note on the realization: The WD with the window of N_s samples can be easily obtained from the distribution with 2N_s samples as its smoothed version. This is possible due to the specific form of this time–frequency representation; the topic is discussed in Section V.

c) A search for the optimal window length over the finite set simplifies the optimization, but it inevitably leads to a suboptimal solution due to the discretization effects since, in general, the optimal window length does not belong to the dyadic set. Fortunately, this loss of accuracy is not significant in many cases because the MSE (19) varies slowly around the stationary point; a small loss is the price of the efficient calculation schemes and the simplicity of the presented algorithm. We may go in two directions. One is to increase the number of discrete window lengths between the narrowest and the widest ones; if we may computationally allow a large number of window lengths, a similar algorithm can be used, with the only difference that, when two successive window lengths are very close to each other, the intersection of the confidence interval with all previous intervals should be used as the indicator that the optimal window length is achieved [11]. Here, however, we will go in the other direction: our aim is to reduce the number of considered window lengths while keeping the high quality of the presented adaptive approach.

Fig. 1. Wigner distribution of the noisy signal. (a) With the constant window length N1 = 64. (b) With the constant window length N2 = 512. (c) With the adaptive time–frequency varying window length.
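The median-based noise estimate in comment a) can be sketched as follows; the function name and the way the real- and imaginary-part estimates are combined are illustrative assumptions.

```python
import numpy as np

def estimate_noise_std(x):
    """Robust estimate of the standard deviation of complex white noise in x,
    from the medians of the absolute successive differences of its real and
    imaginary parts (0.6745 is the median of |N(0, 1)|).

    Differencing suppresses a slowly varying signal component. For complex
    white noise with E|eps|^2 = sigma^2, each part has variance sigma^2 / 2,
    so each differenced part has standard deviation sigma; the two per-part
    estimates are combined in the mean-square sense.
    """
    d = np.diff(x)
    s_re = np.median(np.abs(d.real)) / 0.6745
    s_im = np.median(np.abs(d.imag)) / 0.6745
    return np.sqrt((s_re**2 + s_im**2) / 2.0)
```

The median makes the estimate insensitive to the comparatively few samples where the signal itself changes quickly.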

V. TWO-WINDOW ALGORITHM

Let us consider only two window lengths, N₁ and N₂ with N₁ < N₂, such that N₁ is small enough that the variance of the distribution WD₁(n, ω) is small, and N₂ is large enough that the bias of WD₂(n, ω) is small. Using only these two distributions, we may define a very simple "two-window" algorithm. If the bias of WD₁(n, ω) at the point (n, ω) is small, then the two corresponding confidence intervals intersect; at such points, we use the distribution calculated with the smaller window length, which is the better choice with respect to the variance. Otherwise, the confidence intervals do not intersect, since the bias is now dominant; this fact indicates that the algorithm has to use the distribution WD₂(n, ω), which is the better choice with respect to the bias. The algorithm thus has the following form:

WD(n, ω) = WD₁(n, ω) if |WD₁(n, ω) − WD₂(n, ω)| ≤ (κ + χ)[σ(N₁) + σ(N₂)]; otherwise, WD(n, ω) = WD₂(n, ω).    (41)

We may expect that this "black-and-white" approach is very suitable for this kind of problem, since a time–frequency representation is usually either close to zero or very highly concentrated. Note that the variance σ²(N₁) can be calculated from σ²(N₂): if we scale any window along the time axis by some factor, its energy is scaled by the same factor.

A note on the realization: Assuming that the samples of the Fourier transform of the Hanning window may be neglected outside the main lobe, the distribution calculated with the Hanning window of a given length can be obtained as a frequency-domain-smoothed form of the distribution calculated with the rectangular window of the same length; using a twice narrower Hanning window means a further smoothing of these values. The previous procedure may easily be generalized to any set of Wigner distributions in which each distribution may be treated as a smoothed form of the preceding one. The "two-window" algorithm may be very efficiently implemented in this way, which may significantly save the computation time. This very simple algorithm will be demonstrated on several examples.

VI. EXAMPLES

In the examples, we present results obtained by the two-window method for signals with a very low signal-to-noise ratio (SNR). We use the two window lengths N₁ = 64 and N₂ = 512; the variance in (41) is estimated according to the comments in Section IV.

1) First, consider a linear FM (chirp) signal (42). The time–frequency analysis is done using the WD with the Hanning window within the considered time interval; the noise variance σ_ε² corresponds to a very low SNR. Where the distribution has significant variations along the frequency axis, producing a large bias, the algorithm uses the low-biased distribution values; elsewhere, it uses the small-variance distribution. The adaptive window length N_ŝ(n,ω) is, therefore, a very fast varying 2-D function of (n, ω).

Fig. 2. Wigner distribution of the noisy signal: (a) With the constant window length N1 = 64. (b) With the constant window length N2 = 512. (c) With the adaptive time–frequency varying window length.

Fig. 3. Time–frequency varying window length N_ŝ(n,ω) obtained using the "two-window" algorithm, N ∈ {64, 512}.
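The pointwise two-window rule (41) vectorizes naturally over the whole time–frequency grid; the function name and the use of scalar standard deviations for the two distributions are illustrative assumptions.

```python
import numpy as np

def two_window_wd(wd_small, wd_large, sigma_small, sigma_large, gamma=2.0):
    """'Two-window' selection over a time-frequency grid: keep the short-window
    (low-variance) estimate wherever its confidence interval intersects that of
    the long-window (low-bias) estimate; elsewhere the bias dominates and the
    long-window estimate is used instead."""
    intersect = np.abs(wd_small - wd_large) <= gamma * (sigma_small + sigma_large)
    return np.where(intersect, wd_small, wd_large)
```

Since the test is elementwise, the cost beyond computing the two distributions is negligible.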

Applying algorithm (41), we get the adaptive distribution as presented in Fig. 1(c), whereas Fig. 1(a) and (b) present the WD's with the constant windows N₁ = 64 and N₂ = 512, respectively. The improvement is evident: the adaptive algorithm gives a very concentrated distribution with a reduced noise influence.

2) Next, consider a two-component signal that is a combination of a linear frequency modulated signal and a signal with a constant frequency (43). The parameters of the algorithm are the same as in the first example. The WD's with the constant window lengths, as well as the distribution with the adaptive variable window length, are presented in Fig. 2(a)–(c), respectively. Note that the signal components do not significantly overlap in time; therefore, the lag window was sufficient to reduce the cross-terms in the WD [13], [14]. The adaptive algorithm uses the low-variance distribution everywhere except in the areas where the wider window significantly improves the distribution concentration. A 3-D plot of the adaptive time–frequency varying window length is given in Fig. 3.

3) Consider a sum of three noisy chirp signals, with the same parameters as in the previous examples. The WD's with the constant and adaptive window lengths are presented in Fig. 4(a)–(c), respectively. The cross-term effects are, of course, present here. The distribution with the larger window length is chosen by the adaptive algorithm only in the regions with significant signal components; this choice results in a better concentration of the distribution, including the third signal component.

4) Finally, the following are the zero-bias and the zero-noise cases:

a) If the bias is zero, then the confidence intervals always intersect, and, according to (41), the adaptive algorithm will always choose the smaller window length for any (n, ω); this gives the low variance in all regions where the bias is small.

b) If there is no noise and the difference between WD₁(n, ω) and WD₂(n, ω) is not equal to zero, then, according to (41), the algorithm chooses the distribution with the larger window length; this choice results in a better concentration of the distribution. Approximately, this situation has appeared for the third signal's component in the previous example.

Fig. 4. Wigner distribution of the noisy multicomponent signal. (a) With the constant window length N1 = 64. (b) With the constant window length N2 = 512. (c) With the adaptive time–frequency varying window length.

Fig. 5. Time–frequency representation of the multicomponent noisy signal. (a) Wigner distribution of the signal without noise. (b) Wigner distribution of the noisy signal with the constant window length N2 = 256. (c) S-method of the noisy signal with the adaptive time–frequency varying window length.

VII. GENERALIZATION TO THE COHEN CLASS OF DISTRIBUTIONS

Consider an arbitrary quadratic time–frequency distribution from the Cohen class [7], [13], [14], [34]. In the discrete time-lag form, it may be written as

CD(n, ω) = Σ_m Σ_τ c(m, τ) x(n + m + τ) x*(n + m − τ) e^(−j2ωτ)    (44)

where c(m, τ) is the kernel in the time-lag domain.

47. as well as and any . In the discrete time-lag form. ThereFor the Choi-Williams distribution. . that the variance invariant components contains one component that has zero mean value and is time–frequency dependent. 3) It satisﬁes the condition Then.and frequency-invariant and for any part obtained by summation for . [1]. of holds 1) For all product kernels. It will increase the probability that the algorithm takes the parameter value . we may directly apply the algorithm described by (41) in Section V. we want to emphasize. This part of variance. . and we may directly apply variance estimation techniques described in Section IV: “Comments on the algorithm”. in the adaptive algorithm. we get where is the Fourier transform of with respect to . Thus. Since the kernel energy is a constant for a given distribution. Comments on the Estimation Variance: The above analysis has been presented for the mean (over frequency ) value of the estimation variance. as an optimization-relevant parameter (49) is the energy of where distribution’s kernel. we will use lower values for the estimation variance than the true ones in the regions where the nonnoisy time–frequency distribution has signiﬁcant values. instead of the mean variance value. It is very interesting to . given by zero mean value).1106 IEEE TRANSACTIONS ON SIGNAL PROCESSING. where the distribution therefore. . The estimation variance may be time and frequency dependent. VOL. Here. we will need to estimate the noise variance only. Here. [29] (45) where The constant term is neglected as in (10). is equal to . over frequency . this variance is given by [1]. for the Choi–Williams distribution [5]. and the other much that the bias is small but still within a region larger where the crossterms are sufﬁciently suppressed [28]). we get when bias (46) The mean of the estimation variance in the Cohen class of distribution is proportional to the kernel energy. Although the analysis of this part is very complicated. 
is also note that the mean of equal to the same value [1] These were the reasons why. [7]. for frequency-modulated signals. Since the value of estimation variance at a speciﬁc point is a relevant parameter in the algorithm. [29] (47) . It is a two-dimensional (2–D) Fourier transform of the . APRIL 1999 where is a kernel in the analog time–frequency domain. . we have fore. In the case of the reduced interference distributions. [12]. usual ambiguity domain kernel The bias of estimate (44) is derived using a Taylor expansion and the following facts. [29] It is important to point out that this part of the estimation variance is time and frequency invariant. the estimation variance has two parts: one that depends on the noise only and the other. however. we have a more complex situation than in the Wigner distribution case. it has been shown [30] that this time-frequency-variant part of variance increases the within the region where the time–frequency value of distribution of a nonnoisy signal has signiﬁcant values and. we have used the mean value of the variance over the frequency [1]. This procedure has already been described in Section IV. 4. once beside the time–frequencymore. the total MSE has the same form with respect to the parameter as (19) with respect to (48) (one very small Now. decreases value of of nonnoisy signal assumes values close to zero (since it has . producing . [29] Note that contains the time. however. For example. in order to estimate . The noise only dependent part of the estimation variance is given by [1]. has The second part of the estimation variance the form [1]. we will consider this parameter in more detail. which is signal and noise dependent [1]. we came up to a formally the same expression as (14). [29]. taking only two values for such that the variance may be neglected. using the mean value of (49). NO. Thus. is an even function with respect to the 2) Kernel coordinate axis.
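The kernel-energy quantity (49), which drives the variance in the Cohen class, is easy to evaluate numerically; the function name, the grid limits, and the normalization below are illustrative assumptions.

```python
import numpy as np

def cw_kernel_energy(sigma_cw, n_theta=256, n_tau=256):
    """Normalized energy of a sampled Choi-Williams ambiguity-domain kernel
    c(theta, tau) = exp(-theta^2 * tau^2 / sigma_cw).

    Larger sigma_cw brings the kernel closer to 1 everywhere (closer to the
    WD): less smoothing and smaller bias, but a larger kernel energy and,
    per (49), a larger mean estimation variance."""
    theta = np.linspace(-np.pi, np.pi, n_theta)
    tau = np.arange(-(n_tau // 2), n_tau // 2)
    T, Th = np.meshgrid(tau, theta)
    c = np.exp(-(Th**2) * (T**2) / sigma_cw)
    return np.sum(c**2) / (n_theta * n_tau)
```

The monotone growth of this energy with σ_cw mirrors the bias/variance tradeoff that the adaptive parameter selection exploits.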

“Alias-free generalized discrete-time timefrequency distributions. Nemirovski. Black. IEEE. 38. vol. all factors (partial derivatives) in series (45) expansion are equal to zero up to the term (51) . Rao. vol. Boashash and P. called the twowindow algorithm.. Katkovnik. vol. “Improved time-frequency representation of multicomponent signals using exponential kernels. [27]. Implementations. 183–189. U.. Thus. [11] A. 1992. 43. Englewwod Cliffs. IEEE. Zhao. Haifa. An appropriate rectangular window eliminates (reduces) cross-terms and reduces the noise inﬂuence..” IEEE Trans. and [31]. 1348. [15] V. 35. Hearon and M. Nov. “Time-frequency distributions—A review. “The use of cone shape kernels for generalized time-frequency representations of nonstationary signals. Fortunately. we may conclude that we get the distribution in accordance with (53) with a very suppressed noise inﬂuence. For the cases with the small distribution values. the mean of estimation variance will be higher than the true one so that the algorithm will chose the lower variance parameter value . 183–189. 1987. IEEE-SP Time-Freq. 1992. Technol. “Adaptive local polynomial periodogram for time-varying frequency estimation. Cohen. CONCLUSION The algorithm with the varying and adaptive window length is developed in order to obtain the improved time–frequency distributions of noisy signals. Grifﬁn. but for For . Ed. 2352–2356. “Roundoff error analysis of the discrete Wigner distribution using ﬁxed-point arithmetic. NJ: PrenticeHall. Jeong and W.” in Proc. “Minimum variance time-frequency distribution kernels for signals in additive noise. from the point of view of the algorithm presented in this paper. Speech. Cohen and C. pp. May 1995. E.. “Instantaneous bandwidth.. 1258–1262. 519–538.” Proc. vol. pp. The multicomponent signal case will be illustrated on the adaptive version of the method [26]. Inform. “Nonparametric estimation of instantaneous frequency. It is deﬁned by STFT denoted by STFT DFT . 
5(b) shows the WD of noisy signal with the same constant window length. pp. 1994. 1991. and F. vol. “On spatial adaptive estimation of nonparametric regression.´ AND KATKOVNIK: WIGNER DISTRIBUTION OF NOISY SIGNALS STANKOVIC 1107 a low bias since it will conclude that the variance is already small. From Fig.” in Time-Frequency Signal Analysis. Amin. [7] L. the method can produce a sum of the WD’s of each of these signal components separately Noise (53) of the signal provided that the components do not overlap in the time–frequency plane. pp. 5/94. Taylor. It is deﬁned as STFT For the multicomponent signals STFT (52) the distribution presented in Fig. 77. at the frequency in (52). vol. [30].. [8] L.” IEEE Trans. B. we have the same expression as (19). The algorithm is simple and uses only the estimate of the distribution and its variance. 1996. Katkovnik. Arch.” IEEE Trans. vol. 1990. Acoust.” IEEE Trans. [31]. 43. [12] S. Cohen. [16] V. we get frequency . Speech.” Proc. 1992. 1990. Signal Processing. The -method with simpliﬁed algorithm (41) is applied in order to produce a time–frequency representation of the signal (54) Using the same parameters as in the ﬁrst example and the with the rectangular window window lengths . Boudreaux-Bartels. II. 1992. J. “An efﬁcient real-time implementation of the Wigner-Vile distribution. Goldenshluger and A. 1084–1091. B. Signal Processing. vol. The simpliﬁed version of the algorithm. London. pp. Lee. Hlawatsch and G. [3] B. 39. Paris. [5] H. pp. for this distribution .: Longman Cheshire. [2] L.” IEEE Signal Processing Mag. The -method belongs to the Cohen class of distributions [28]. Signal Processing. Boashash. France.. 5(c). Choi and W. 80. whereas Fig. Marks. Adv. Signal Processing. 5(c). we have a much faster decreasing bias that allows us and for better ﬁnal results (sense to use closer values of of the derivations in Section III). as explained in detail in [26]. Amin. pp.. G. is time Fourier transform. [6] L. June 1996. 
VIII. CONCLUSION

The algorithm with the varying and adaptive window length is developed in order to obtain improved time–frequency distributions of noisy signals. The algorithm is simple and uses only the noisy estimate of the distribution and the analytical formula for the variance of this estimate. A simplified version of the algorithm, called the two-window algorithm, which uses only two different values of the window length, is proposed. The procedure developed for the adaptive window length selection can be generalized for application to multicomponent signals with any distribution from the Cohen class. The numerical and qualitative efficiency of the proposed simplified algorithm is demonstrated on the time–frequency representations of monocomponent and multicomponent signals corrupted by noise with a very low SNR.

REFERENCES

[1] M. G. Amin, "Minimum variance time-frequency distribution kernels for signals in additive noise," IEEE Trans. Signal Processing, vol. 44, pp. 2352–2356, Sept. 1996.
[2] B. Boashash, Ed., Time-Frequency Signal Analysis. London, U.K.: Longman Cheshire, 1992.
[3] B. Boashash, "Estimating and interpreting the instantaneous frequency of a signal—Part 1: Fundamentals," Proc. IEEE, vol. 80, pp. 519–538, Apr. 1992.
[4] B. Boashash and P. Black, "An efficient real-time implementation of the Wigner-Ville distribution," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-35, pp. 1611–1618, Nov. 1987.
[5] H. Choi and W. J. Williams, "Improved time-frequency representation of multicomponent signals using exponential kernels," IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, pp. 862–871, June 1989.
[6] L. Cohen, Time-Frequency Analysis. Englewood Cliffs, NJ: Prentice-Hall, 1995.
[7] ——, "Time-frequency distributions—A review," Proc. IEEE, vol. 77, pp. 941–981, July 1989.
[8] ——, "Distributions concentrated along the instantaneous frequency," in Proc. SPIE Adv. Signal Process. Alg., Arch., Implementations, vol. 1348, pp. 149–157, 1990.
[9] L. Cohen and C. Lee, "Instantaneous bandwidth," in Time-Frequency Signal Analysis, B. Boashash, Ed. London, U.K.: Longman Cheshire, 1992.
[10] C. Griffin, P. Rao, and F. J. Taylor, "Roundoff error analysis of the discrete Wigner distribution using fixed-point arithmetic," IEEE Trans. Signal Processing, vol. 39, pp. 2096–2098, 1991.
[11] A. Goldenshluger and A. Nemirovski, "On spatial adaptive estimation of nonparametric regression," Res. Rep. 5/94, Technion—Israel Inst. Technol., Haifa, 1994.
[12] S. B. Hearon and M. G. Amin, "Minimum-variance time-frequency distribution kernels," IEEE Trans. Signal Processing, vol. 43, pp. 1258–1262, May 1995.
[13] F. Hlawatsch and G. F. Boudreaux-Bartels, "Linear and quadratic time-frequency signal representations," IEEE Signal Processing Mag., vol. 9, pp. 21–67, Apr. 1992.
[14] J. Jeong and W. J. Williams, "Alias-free generalized discrete-time time-frequency distributions," IEEE Trans. Signal Processing, vol. 40, pp. 2757–2765, Nov. 1992.
[15] V. Katkovnik, "Adaptive local polynomial periodogram for time-varying frequency estimation," in Proc. IEEE-SP Int. Symp. Time-Freq. Time-Scale Anal., Paris, France, June 1996.
[16] ——, "Nonparametric estimation of instantaneous frequency," IEEE Trans. Inform. Theory, vol. 43, pp. 183–189, Jan. 1997.

[17] V. Katkovnik and LJ. Stanković, "Instantaneous frequency estimation using the Wigner distribution with varying and data-driven window length," IEEE Trans. Signal Processing, vol. 46, pp. 2315–2325, Sept. 1998.
[18] W. Martin and P. Flandrin, "Wigner-Ville spectral analysis of nonstationary processes," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 1461–1470, Dec. 1985.
[19] A. Papoulis, Signal Analysis. New York: McGraw-Hill, 1977.
[20] D. Petranović, S. Stanković, and LJ. Stanković, "Special purpose hardware for time-frequency analysis," Electron. Lett., vol. 33, pp. 464–466, Mar. 1997.
[21] K. J. R. Liu, "Novel parallel architectures for short-time Fourier transform," IEEE Trans. Circuits Syst. II, vol. 40, Dec. 1993.
[22] P. Rao and F. J. Taylor, "Estimation of the instantaneous frequency using the discrete Wigner distribution," Electron. Lett., vol. 26, pp. 246–248, 1990.
[23] B. Ristic and B. Boashash, "Relationship between the polynomial and the higher order Wigner-Ville distribution," IEEE Signal Processing Lett., vol. 2, pp. 227–229, Dec. 1995.
[24] LJ. Stanković and S. Stanković, "Wigner distribution of noisy signals," IEEE Trans. Signal Processing, vol. 41, pp. 956–960, Feb. 1993.
[25] LJ. Stanković, "On the Wigner distribution of discrete-time noisy signals with application to the study of quantization effects," IEEE Trans. Signal Processing, vol. 42, pp. 1863–1867, July 1994.
[26] ——, "A method for time-frequency analysis," IEEE Trans. Signal Processing, vol. 42, pp. 225–229, Jan. 1994.
[27] ——, "A method for improved distribution concentration in the time-frequency analysis of multicomponent signals using the L-Wigner distribution," IEEE Trans. Signal Processing, vol. 43, pp. 1262–1268, May 1995.
[28] ——, "Auto-term representation by the reduced interference distributions: A procedure for kernel design," IEEE Trans. Signal Processing, vol. 44, pp. 1557–1564, June 1996.
[29] LJ. Stanković and V. Ivanović, "Further results on the minimum variance time-frequency distribution kernels," IEEE Trans. Signal Processing, vol. 45, pp. 1650–1655, June 1997.
[30] LJ. Stanković and V. Katkovnik, "Algorithm for the instantaneous frequency estimation using time-frequency representations with adaptive window length," IEEE Signal Processing Lett., vol. 5, pp. 224–227, Sept. 1998.
[31] S. Stanković and LJ. Stanković, "An architecture for the realization of a system for time-frequency analysis," IEEE Trans. Circuits Syst. II, vol. 44, pp. 600–604, July 1997.
[32] LJ. Stanković, V. Ivanović, and Z. Petrović, "Unified approach to the noise analysis in the Wigner distribution and spectrogram using the S-method," Ann. Télécommun., vol. 51, nos. 11/12, pp. 585–594, Nov./Dec. 1996.
[33] D. Wu and J. M. Morris, "Time-frequency representation using a radial Butterworth kernel," in Proc. IEEE-SP Int. Symp. Time-Freq. Time-Scale Anal., Philadelphia, PA, Oct. 1994, pp. 60–63.
[34] ——, "Discrete Cohen's class of distributions," in Proc. IEEE-SP Int. Symp. Time-Freq. Time-Scale Anal., Philadelphia, PA, Oct. 1994, pp. 532–535.
[35] Y. Zhao, L. E. Atlas, and R. J. Marks, II, "The use of cone shape kernels for generalized time-frequency representations of nonstationary signals," IEEE Trans. Acoust., Speech, Signal Processing, vol. 38, pp. 1084–1091, July 1990.
[36] Y. M. Zhu, F. Peyrin, and R. Goutte, "Transformation de Wigner-Ville: Description d'un nouvel outil de traitement du signal et des images," Ann. Télécommun., vol. 42, nos. 3–4, pp. 105–117, 1987.

LJubiša Stanković (M'91–SM'96) was born in Montenegro, Yugoslavia, on June 1, 1960. He received the B.S. degree in electrical engineering from the University of Montenegro in 1982, with the honor of "the best student at the University," the M.S. degree in electrical engineering in 1984 from the University of Belgrade, Belgrade, Yugoslavia, and the Ph.D. degree in electrical engineering in 1988 from the University of Montenegro.

As a Fulbright grantee, he spent the 1984–1985 academic year at the Worcester Polytechnic Institute, Worcester, MA. Since 1982, he has been on the Faculty at the University of Montenegro, where he presently holds the position of a Full Professor. From 1997 to 1998, he was on leave at the Signal Theory Group, Ruhr University Bochum, Bochum, Germany, supported by the Alexander von Humboldt Foundation. He was also active in politics as a Vice President of the Republic of Montenegro from 1989 to 1991 and then as the leader of the democratic (anti-war) opposition in Montenegro from 1991 to 1993. His current interests are in signal processing and electromagnetic field theory. He has published more than 100 technical papers, about 40 of them in leading international journals, mainly the IEEE editions. He has also published several textbooks in signal processing (in Serbo-Croat) and the monograph Time–Frequency Signal Analysis (in English). He is a member of the National Academy of Science and Art of Montenegro (CANU).

Prof. Stanković was awarded the highest state award of the Republic of Montenegro for scientific achievements in 1997.

Vladimir Katkovnik (M'96) received the M.S., Ph.D., and Doctor of Sc. degrees from the Leningrad Polytechnic Institute, Leningrad, Russia, in 1960, 1964, and 1974, respectively, all in technical cybernetics.

From 1961 to 1986, he held positions of Assistant, Associate, and Professor with the Department of Mechanics and Control Processes, Leningrad Polytechnic Institute. From 1981 until 1991, he was with the Department of Automatic Machines of the same institute. Since 1991, he has been a Professor of Statistics with the Department of Statistics of the University of South Africa, Pretoria. He has published five books and more than 150 papers. His research interests include linear and nonlinear filtering, nonparametric and robust estimation, nonstationary systems, time–frequency analysis, stochastic optimization, and adaptive stochastic control.
