
PHYSICAL REVIEW E VOLUME 61, NUMBER 4 APRIL 2000

Efficient implementation of the Gaussian kernel algorithm in estimating invariants and noise level from noisy time series data

Dejin Yu,¹,* Michael Small,¹ Robert G. Harrison,¹ and Cees Diks²
¹Department of Physics, Heriot-Watt University, Riccarton, Edinburgh EH14 4AS, United Kingdom
²CeNDEF, Department of Economics, University of Amsterdam, Roetersstraat 11, NL-1018 WB Amsterdam, The Netherlands
(Received 2 June 1999)

We describe an efficient algorithm which computes the Gaussian kernel correlation integral from noisy time series; this is subsequently used to estimate the underlying correlation dimension and noise level in the noisy data. The algorithm first decomposes the integral core into two separate calculations, reducing the computing time from $O(N^2 N_b)$ to $O(N^2 + N_b^2)$. With other further improvements, this algorithm can speed up the calculation of the Gaussian kernel correlation integral by a factor of $(2\text{–}10)N_b$. We use typical examples to demonstrate the use of the improved Gaussian kernel algorithm.

PACS number(s): 05.45.Tp, 02.50.−r, 05.45.Ac, 05.45.Jn

I. INTRODUCTION

Estimation of dimensions, entropies, and Lyapunov exponents has become a standard method of characterizing the complex temporal behavior of a chaotic trajectory from measured time series data [1]. Of these, the most widely used is the correlation dimension. This is largely due to the fact that Grassberger and Procaccia found a simple algorithm to compute the correlation integral, and the correlation dimension is believed to be a more relevant measure of the attractor than the other dimension quantities [2]. Many fast and efficient algorithms have since been developed for calculation of the correlation dimension and found to be successful in characterizing dynamics from clean time series data [3,4].

When applied to experimental data, the dimension algorithms have limitations, since all recorded data are to some extent corrupted by noise, which masks the scaling region at small scales. It has been shown that a 2% noise is serious enough to prevent accurate estimation [5]. From an applied point of view, the problem of characterizing nonlinear time series in the presence of noise is therefore nontrivial and of practical significance. Considerable effort has been given to understanding the influence of noise on dimension measurements and exploring new scaling laws; see [6–15]. A comparison of methods dealing with the influence of Gaussian noise on the correlation dimension can be found in [16]. In addition, a Grassberger-Procaccia-type algorithm for estimating entropy has been developed to characterize experimental data [17].

In this paper, we address the estimation of correlation exponents and noise level in the presence of Gaussian noise in time series data. In the presence of noise, the dimension and entropy are defined as those invariants of the underlying clean time series [12,14,16]. Particular attention is paid to efficient implementation of the Gaussian kernel algorithm (GKA) developed by Diks [12]. We first decompose the integral core into two separate calculations, which reduces the computing time from $O(N^2 N_b)$ to $O(N^2 + N_b^2)$, where $N$ and $N_b$ are the length of reconstructed state vectors and the number of computed bandwidth values, respectively. With other further improvements, this algorithm can speed up the calculation of the Gaussian kernel correlation integral by a factor of $(2\text{–}10)N_b$. We also present robust methods used for nonlinear least squares fitting in extracting correlation exponents and noise level. Armed with the improved algorithms, we find that the GKA provides a reasonable estimate of the correlation dimension and noise level even when the contaminating noise is as high as 50% of the signal content.

Our problem is formulated as follows. Suppose that we have a scalar time series $\{s_i : i=1,2,\ldots,N_s\}$ sampled at equally spaced times $t_i = i\Delta t$, where $\Delta t$ is the sampling time interval. The data are assumed to be corrupted by Gaussian noise. Our aim is to measure two dynamical invariants (correlation dimension and entropy) and to estimate the noise level in the time series.

Following Takens [18], the underlying attractor can be reconstructed using delay coordinates. The reconstruction embeds the measured time series $\{s_i\}$ in an $m$-dimensional Euclidean space to create $N = N_s - (m-1)\tau$ delay state vectors $\{\mathbf{x}_i : i=1,2,\ldots,N\}$ in terms of

$$\mathbf{x}_i = (s_i, s_{i+\tau}, s_{i+2\tau}, \ldots, s_{i+(m-1)\tau})^T, \qquad (1)$$

where $\tau$ is an integer, referred to as a time lag (the delay time is $\tau_d = \tau\Delta t$), and $m$ is called the embedding dimension.
This paper is organized as follows. After this introduction, we describe the Gaussian kernel algorithm and its direct implementation in Sec. II. We explore the efficient implementation and simplified calculation of the Gaussian kernel correlation integral in Sec. III. Section IV is devoted to technical considerations and further improvements of the GKA. Numerical tests are presented by way of examples in Sec. V. Finally, we conclude this work in Sec. VI.

*Author to whom correspondence should be addressed. Electronic address: phydjy@phy.hw.ac.uk; URL: http://www.phy.hw.ac.uk/resrev/ndos/index.html

II. GAUSSIAN KERNEL ALGORITHM

A. Theoretical background

The essence of the Gaussian kernel algorithm in estimating correlation exponents and noise level is summarized as follows [12].


(1) The Gaussian kernel correlation integral $T_m(h)$ for the noise-free case scales as

$$T_m(h) = \int d\mathbf{x}\,\rho_m(\mathbf{x}) \int d\mathbf{y}\,\rho_m(\mathbf{y})\, e^{-\|\mathbf{x}-\mathbf{y}\|^2/4h^2} \propto \left(\frac{h}{\sqrt{m}}\right)^{\!D} e^{-mK\tau\Delta t} \quad \text{for } h\to 0,\ m\to\infty, \qquad (2)$$

where $D$ and $K$ are the correlation dimension and entropy, $h$ is referred to as the bandwidth, and $\rho_m(\mathbf{x})$ is the distribution function. The scaling behavior $T_m(h)\propto e^{-mK\tau\Delta t}h^{D}$ in Eq. (2) was first justified by Ghez et al. [19] and later by Diks [12] with inclusion of the factor $(1/\sqrt{m})^{D}$, in which the $m$ dependence was originally introduced by Frank et al. [20] to improve the convergence of $K$.

(2) In the presence of Gaussian noise, the distribution function $\tilde{\rho}_m(\mathbf{y})$ can be expressed in terms of a convolution between the underlying noise-free distribution function $\rho_m(\mathbf{x})$ and a normalized Gaussian distribution function with standard deviation $\sigma$ [12,21], i.e.,

$$\tilde{\rho}_m(\mathbf{y}) = \int d\mathbf{x}\,\rho_m(\mathbf{x})\,\Phi_m^g(\mathbf{y}-\mathbf{x}) = \frac{1}{(\sqrt{2\pi}\,\sigma)^m}\int d\mathbf{x}\,\rho_m(\mathbf{x})\,e^{-\|\mathbf{y}-\mathbf{x}\|^2/2\sigma^2}, \qquad (3)$$

where

$$\Phi_m^g(\mathbf{y}-\mathbf{x}) = \frac{1}{(\sqrt{2\pi}\,\sigma)^m}\,e^{-\|\mathbf{y}-\mathbf{x}\|^2/2\sigma^2} \qquad (4)$$

accounts for noise effects in the $m$-dimensional space. Here $\|\cdot\|$ is the Euclidean ($L_2$) norm.

Under conditions (2) and (3), the Gaussian kernel correlation integral $T_m(h)$ in the presence of Gaussian noise and the corresponding scaling law become [12]

$$T_m(h) = \int d\mathbf{x}\,\tilde{\rho}_m(\mathbf{x}) \int d\mathbf{y}\,\tilde{\rho}_m(\mathbf{y})\, e^{-\|\mathbf{x}-\mathbf{y}\|^2/4h^2} \qquad (5)$$

$$= \left(\frac{h^2}{h^2+\sigma^2}\right)^{m/2} \int d\mathbf{x}\,\rho_m(\mathbf{x}) \int d\mathbf{y}\,\rho_m(\mathbf{y})\, e^{-\|\mathbf{x}-\mathbf{y}\|^2/4(h^2+\sigma^2)}$$

$$\propto \alpha\left(\frac{h^2}{h^2+\sigma^2}\right)^{m/2} e^{-mK\tau\Delta t} \left(\frac{h^2+\sigma^2}{m}\right)^{D/2} \qquad (6)$$

for $h^2+\sigma^2\to 0$ and $m\to\infty$, where $\alpha$ is a normalization constant.
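As a quick consistency check (ours, not spelled out in Ref. [12]), setting $\sigma=0$ in Eq. (6) recovers the noise-free scaling of Eq. (2):

$$\left.T_m(h)\right|_{\sigma=0} \propto \alpha\left(\frac{h^2}{h^2}\right)^{m/2} e^{-mK\tau\Delta t}\left(\frac{h^2}{m}\right)^{D/2} = \alpha\, e^{-mK\tau\Delta t}\left(\frac{h}{\sqrt{m}}\right)^{D}.$$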
In Eq. (6), $D$ and $K$ are the two invariants to be estimated, and the parameter $\sigma$ is referred to as the noise level, defined as

$$\sigma = \frac{\sigma_n}{\sigma_s} = \frac{\sigma_n}{\sqrt{\sigma_c^2+\sigma_n^2}}, \qquad (7)$$

where $\sigma_s$, $\sigma_c$, and $\sigma_n$ are the standard deviations of the input noisy signal $\{s_i\}$, the underlying clean component $\{c_i\}$, and the Gaussian noise part $\{n_i\}$, respectively. In the total signal $s_i = c_i + n_i$, we have assumed $\{c_i\}$ and $\{n_i\}$ to be statistically independent, i.e., $\bar{s}=\bar{c}+\bar{n}$ and $\sigma_s^2=\sigma_c^2+\sigma_n^2$, where $\bar{s}$, $\bar{c}$, and $\bar{n}$ are the means of $\{s_i\}$, $\{c_i\}$, and $\{n_i\}$, respectively.

B. Direct implementation of the GKA

The numerical implementation of the Gaussian kernel algorithm requires the transformation of the input time series $\{s_i : i=1,2,\ldots,N_s\}$ into a new time series $\{v_i : i=1,2,\ldots,N_s\}$ according to

$$v_i = \frac{s_i - \bar{s}}{\sigma_s}. \qquad (8)$$

Under the transformation (8), the noise effect is described by the distribution function (4), and the standard deviation of the noise part is the $\sigma$ of Eq. (7). Accordingly, the delay state vectors are reconstructed by replacing $s_i$ with $v_i$ in Eq. (1).

In the case of discrete sampling, we assume vector points on the attractor to be dynamically independently distributed according to $\tilde{\rho}_m(\mathbf{x})$ and use an average over delay vectors to replace the integrals over the vector distributions in Eq. (5). Consequently, $T_m(h)$ can be computed by [12]

$$T_m(h) = \frac{1}{N(N-1)}\sum_{i=1}^{N}\sum_{j\neq i} e^{-\|\mathbf{x}_i-\mathbf{x}_j\|^2/4h^2}. \qquad (9)$$

In practice, $T_m(h)$ is calculated at a series of discrete bandwidth values $h_k$ ($k=0,1,2,\ldots,N_b$). The parameters $D$, $K$, and $\sigma$ are then extracted by fitting the scaling relation (6) to the Gaussian kernel correlation sum $T_m(h)$ computed from Eq. (9). One can see that such a direct implementation of the GKA has a computational complexity of $O(N^2 N_b)$.
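The direct implementation is a straightforward double loop. The sketch below (our own Python illustration, reusing numpy and the delay_embed helper above) makes the $O(N^2 N_b)$ cost explicit, since the full pairwise-distance scan is repeated for every bandwidth:

```python
def gka_direct(s, m, tau, h_values):
    """Direct GKA: normalize per Eq. (8), embed per Eq. (1), then
    evaluate the double sum of Eq. (9) at every bandwidth h_k."""
    v = (s - s.mean()) / s.std()            # Eq. (8)
    x = delay_embed(v, m, tau)
    N = len(x)
    T = np.empty(len(h_values))
    for k, h in enumerate(h_values):
        # Every interpoint distance is revisited for each bandwidth,
        # which is what makes the direct method O(N^2 * N_b).
        d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
        T[k] = (np.exp(-d2 / (4.0 * h**2)).sum() - N) / (N * (N - 1))
    return T
```

The subtraction of $N$ removes the $i=j$ terms, so the sum matches Eq. (9) exactly.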
C. Nonlinear fitting

After computing the Gaussian kernel correlation sum, the parameters $D$, $K$, and $\sigma$ in Eq. (6) can be extracted using nonlinear least squares fitting [22]. Here we exemplify the GKA described above; a robust fitting procedure will be presented in Sec. IV.

Figure 1 illustrates the fitted $D$, $K$, and $\sigma$ as a function of the embedding dimension. The clean data are generated by the standard Henon map [23]. The noisy data are produced by adding 5% Gaussian noise to the output of the first variable. In calculating $T_m(h)$ from Eq. (9), we use the following parameters: $N=5000$, $N_{\rm ref}=500$, $N_b=100$, $\log_2(\epsilon_l)=-10$, and $m=1,2,\ldots,10$. We choose $\epsilon_u$ to be equal to the attractor diameter. A definition and discussion of these parameters will be detailed below. As seen in Fig. 1, the fitted correlation exponents $D$ and $K$ and the noise level $\sigma$ converge to their true values when the embedding dimension is increased beyond $m=3$. We note that convergence in $D$ and $\sigma$ is readily achieved while $K$ exhibits fluctuations, since saturation should in principle appear only as $m\to\infty$. In this example, the average values of the correlation exponents and noise level, taken over embedding dimensions $m=3\text{–}10$, are obtained as $D=1.227\pm0.011$, $K=0.301\pm0.003$, and $\sigma=(4.984\pm0.028)\%$.

FIG. 1. Estimation of (a) correlation dimension $D$, (b) correlation entropy $K$, and (c) noise level $\sigma$ using Henon map data. The dotted lines give the true values $D_{\rm true}=1.22$, $K_{\rm true}=0.29$ [18], and $\sigma_{\rm in}=5\%$. $h_c=0.2$ is used; see below for its definition.
III. SIMPLIFIED ALGORITHM

We notice that $T_m(h)$ in Eq. (9) is essentially an average of the function $e^{-\|\mathbf{x}_i-\mathbf{x}_j\|^2/4h^2}$ over all pairs $(i,j)$ with equal unit weight. For a given bandwidth $h_k$, this algorithm must perform $(N-1)N$ operations to scan all distances. For another bandwidth $h_{k'}$ ($k'\neq k$), all the same distances are revisited. Such a distance scanning process is repeated $N_b+1$ times, leading to $O(N^2 N_b)$ operations, which wastes a large amount of computing time.

The algorithm can be simplified by eliminating the repetition in distance scanning. Since the same distances will be used for all bandwidth values, we can calculate a binned interpoint distance distribution $\tilde{C}_m(\epsilon_k)$ once and then take the average by summing over the bin indices ($k=0,1,2,\ldots,N_b$) instead of over pairs $(i,j)$. As a result, the Gaussian kernel correlation sum is approximated by averaging $e^{-\epsilon_k^2/4h^2}$ over all interpoint distances with the weight function $\tilde{C}_m(\epsilon_k)$. Under these considerations, Eq. (9) can be rewritten as

$$T_m(h) = \frac{1}{N(N-1)}\sum_{k=0}^{N_b}\sum_{i=1}^{N}\sum_{j\neq i}\Theta_2(\epsilon_{ij},\epsilon_k,\epsilon_{k+1})\,e^{-\epsilon_k^2/4h^2}, \qquad (10)$$

where $\epsilon_{ij}=\|\mathbf{x}_i-\mathbf{x}_j\|$, and $k=0$ and $k=N_b$ correspond to the minimum ($\epsilon_l$) and maximum ($\epsilon_u$) distances, respectively. Here we assume that $\epsilon_l\to 0$ and that $\epsilon_u$ equals the diameter of the attractor. The summand $\Theta_2(\epsilon_{ij},\epsilon_k,\epsilon_{k+1})$ is a double step function with $\epsilon_k<\epsilon_{k+1}$, defined through the Heaviside step function $\Theta(\cdot)$ by $\Theta_2(\epsilon,\epsilon_1,\epsilon_2)=\Theta(\epsilon-\epsilon_1)\Theta(\epsilon_2-\epsilon)$, that is,

$$\Theta_2(\epsilon,\epsilon_1,\epsilon_2)=\begin{cases}1 & \text{if } \epsilon_1\le\epsilon<\epsilon_2,\\ 0 & \text{otherwise}.\end{cases} \qquad (11)$$

Since the term $\sum_{i=1}^{N}\sum_{j\neq i}\Theta_2(\epsilon_{ij},\epsilon_k,\epsilon_{k+1})$ in Eq. (10) depends on the index $k$ only, we can decompose Eq. (10) into two separate steps.

(1) Calculation of the binned interpoint distance distribution $\tilde{C}_m(\epsilon_k)$, given by

$$\tilde{C}_m(\epsilon_k) = \text{number of pairs } (i,j) \text{ with } \epsilon_{ij}=\|\mathbf{x}_i-\mathbf{x}_j\|\in[\epsilon_k,\epsilon_{k+1}) = \sum_{i=1}^{N}\sum_{j\neq i}\Theta_2(\|\mathbf{x}_i-\mathbf{x}_j\|,\epsilon_k,\epsilon_{k+1}); \qquad (12)$$

(2) calculation of the Gaussian kernel correlation sum in terms of

$$T_m(h) = \frac{1}{N(N-1)}\sum_{k=0}^{N_b}\tilde{C}_m(\epsilon_k)\,e^{-\epsilon_k^2/4h^2}. \qquad (13)$$

In Eq. (12), we choose the binned interpoint distances $\epsilon_k$ such that they are equidistant on a logarithmic scale. This choice has a numerical advantage, since it gives high resolution at the small scales where the interpoint distance distributions are most relevant to our calculations. In addition, the bandwidth values $h_k$ are determined in the same way as the $\epsilon_k$.

Since $\tilde{C}_m(\epsilon_k)$ can be readily computed, the calculation of $T_m(h)$ based on Eq. (13) becomes simple. Equations (12) and (13) jointly lead to a computational complexity $O(N^2+N_b^2)$. By comparison with the direct calculation based on Eq. (9), this reduces the computing time by a factor of $N_b[1+(N_b/N)^2]^{-1}\approx N_b$.
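A compact sketch of this two-step scheme (our own Python illustration; np.histogram plays the role of the binning loop of Eq. (12), and the bin edges are log-equidistant as prescribed above):

```python
def gka_binned(x, h_values, eps_l, eps_u, Nb):
    """Two-step GKA: bin interpoint distances once [Eq. (12)], then
    average the Gaussian kernel over the bins [Eq. (13)]."""
    N = len(x)
    # Log-equidistant bin edges from eps_l to eps_u.
    edges = np.logspace(np.log2(eps_l), np.log2(eps_u), Nb + 1, base=2.0)
    # Step 1: binned interpoint distance distribution, O(N^2), done once.
    i, j = np.triu_indices(N, k=1)
    d = np.sqrt(np.sum((x[i] - x[j]) ** 2, axis=1))
    counts, _ = np.histogram(d, bins=edges)
    counts = 2 * counts                 # Eq. (12) counts ordered pairs
    # Step 2: weighted kernel average, O(N_b) per bandwidth.
    eps_k = edges[:-1]                  # lower edge represents each bin
    T = np.array([np.sum(counts * np.exp(-eps_k**2 / (4.0 * h**2)))
                  for h in h_values]) / (N * (N - 1))
    return T, counts
```

Here x is the array of delay vectors from delay_embed; the distance scan is performed once, and each additional bandwidth costs only $N_b$ exponentials.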
The relation between $\tilde{C}_m(\epsilon_k)$ and the Grassberger-Procaccia (GP) correlation integral $C_m(\epsilon)$ in the Euclidean norm is

$$C_m(\epsilon) = \frac{1}{N(N-1)}\sum_{k=0}^{N'}\tilde{C}_m(\epsilon_k), \qquad (14)$$

where the integer $N'$ corresponds to $\epsilon_{N'}=\epsilon$.
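In code, Eq. (14) is just a cumulative sum over the binned counts (a one-line illustration of ours, reusing counts and N from the sketch above):

```python
# GP correlation integral at the bin edges, per Eq. (14):
C_gp = np.cumsum(counts) / (N * (N - 1))
```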

A further understanding of Eq. (13) can be gained from an alternative expression of the Gaussian kernel correlation integral. To do so, we define the Gaussian kernel correlation integral $T_m(h)$ in terms of the interpoint distance distribution density $\rho_m(\epsilon)$ and the Gaussian kernel function $w(\epsilon/h)=e^{-\epsilon^2/4h^2}$ as [12]

$$T_m(h) = \int_0^\infty \rho_m(\epsilon)\,w\!\left(\frac{\epsilon}{h}\right)d\epsilon, \qquad (15)$$

where $\rho_m(\epsilon)$ is given by

$$\rho_m(\epsilon) = \frac{dC_m(\epsilon)}{d\epsilon}, \qquad (16)$$

and $C_m(\epsilon)$ is the GP correlation integral with the hard kernel function $w(r/\epsilon)=\Theta(\epsilon-r)$. Substituting Eq. (16) into Eq. (15), the Gaussian kernel correlation integral $T_m(h)$ is expressed as

$$T_m(h) = \int_0^\infty e^{-\epsilon^2/4h^2}\,dC_m(\epsilon). \qquad (17)$$

This is the integral form of Eq. (13).

If one performs a partial integration in Eq. (17), $T_m(h)$ can be expressed in the form

$$T_m(h) = \frac{1}{2h^2}\int_0^\infty \epsilon\, e^{-\epsilon^2/4h^2}\,C_m(\epsilon)\,d\epsilon. \qquad (18)$$

In previous work, this formula was used to estimate $T_m(h)$ [24,25].

IV. COMPUTATIONAL CONSIDERATIONS

A. Efficient binning

The use of the simplified algorithm given by Eqs. (12) and (13) can speed up the calculation of $T_m(h)$. For instance, for $N=10\,000$ and $N_b=200$, this gives rise to a gain factor of $\approx 200$. We note that most of the time is consumed in computing the binned interpoint distance distribution $\tilde{C}_m(\epsilon_k)$ in Eq. (12), which takes $N(N-1)$ operations. Apart from the remarkable advance gained by using Eqs. (12) and (13), a further reduction of computing time is achievable. In this algorithm, we adopt three improvements; a sketch combining all three follows this list.

(1) A set of $N_{\rm ref}$ representative points randomly chosen on the attractor is used for the sum over $i$, replacing the original $N$ points in Eq. (12). This reduces $O(N(N-1))$ to $O((N-1)N_{\rm ref})$. Usually, we use $N_{\rm ref}=N/10$.

(2) We observe that large distances make a negligible contribution to the Gaussian kernel correlation sum $T_m(h)$, as in the GP algorithm [26]. Thus we set an upper limit of the correlation distance $\epsilon_u$ smaller than the attractor diameter, above which no binning is done. This can save computing time by a factor of $10^0\text{–}10^1$, depending on the value of $\epsilon_u$.

(3) A recursive version is used. Since we compute $\tilde{C}_m(\epsilon_k)$ for a series of consecutive embedding dimensions $m=1,2,\ldots,M$, and in the $L_2$ norm $\epsilon_k^2(m+1)=\epsilon_k^2(m)+(v_{i+m\tau}-v_{j+m\tau})^2$, the distances computed in dimension $m$ are successively reused in dimension $m+1$.

These three binning methods are found to be very efficient and speed up the calculation by a factor of $10^1\text{–}10^2$ at least.
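The sketch below (ours; variable names are illustrative) combines the three improvements: random reference points, an upper cutoff built into the last bin edge, and the recursive $L_2$ distance update across embedding dimensions:

```python
def binned_counts_all_m(v, tau, M, n_ref, edges, seed=0):
    """Binned interpoint distance distributions for m = 1,...,M, using
    N_ref random reference points, an upper cutoff (the last bin edge),
    and the recursion d2(m+1) = d2(m) + (v[i+m*tau] - v[j+m*tau])**2."""
    rng = np.random.default_rng(seed)
    N = len(v) - (M - 1) * tau        # delay vectors valid for all m <= M
    ref = rng.choice(N, size=n_ref, replace=False)
    d2 = np.zeros((n_ref, N))         # running squared distances
    counts = []
    for m in range(M):
        # Recursive update: each new dimension adds one coordinate per pair.
        d2 += (v[ref + m * tau][:, None] - v[m * tau : m * tau + N][None, :]) ** 2
        # Distances above the cutoff edges[-1] simply fall outside the bins.
        c, _ = np.histogram(np.sqrt(d2), bins=edges)
        counts.append(c)              # counts[m-1] ~ C_m(eps_k), up to the
    return counts                     # normalization n_ref * (N - 1)
```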
B. Robust fitting procedure

Direct fitting based on Eq. (6) does not make sense. We find that though $D$ and $\sigma$ can be extracted with high precision, there is an uncertainty between $\alpha$ and $K$. This is because $\alpha$ and $e^{-mK\tau\Delta t}$ are not independent for a fixed $m$; only their product $\Psi=\alpha e^{-mK\tau\Delta t}$ is a real independent parameter. In the nonlinear fitting process, as long as a stable $\Psi$ is obtained, the program will return values of $\alpha$ and $K$, but such $\alpha$ and $K$ are in general arbitrary.

In the following, we introduce a three-step fitting method which is found to be quite robust. The described method can extract $D$ and $\sigma$ to high precision and obtain a fair estimate of $K$. The procedure is detailed as follows; a sketch of the first two steps is given at the end of this subsection.

(1) Fitting $D$, $\sigma$, and $\Psi$. We use the relation (6) between $T_m(h)$ and $h$ to fit $D$, $\sigma$, and $\Psi$. The values of $D$ and $\sigma$ obtained will be used in the next step. Letting $\Psi=\alpha e^{-mK\tau\Delta t}$ yields the model equation

$$y(h) = T_m(h) = \Psi\, h^m\, m^{-D/2}\,(h^2+\sigma^2)^{(D-m)/2}. \qquad (19)$$

(2) Fitting $K$ and deriving $\alpha$. We use $y(h)=T_{m+1}(h)/T_m(h)$ to eliminate $\alpha$, leading to the model

$$y(h) = h\,e^{-K\tau\Delta t}\left(\frac{m}{m+1}\right)^{D/2}(h^2+\sigma^2)^{-1/2}. \qquad (20)$$

We fit $K$ with $D$ and $\sigma$ fixed. Then $\alpha$ can be derived from $\Psi=\alpha e^{-mK\tau\Delta t}$, where $\Psi$ is given in the last step.

(3) Refitting $D$, $K$, and $\sigma$. We use the original relation (6) to fit $D$, $K$, and $\sigma$ with $\alpha$ fixed at the value derived in the last step. The $D$, $K$, and $\sigma$ obtained in the last two steps are used as trial values.

Note that the use of Eq. (20) requires computing the Gaussian kernel correlation integrals for two consecutive embedding dimensions. In practice, we compute $T_m(h)$ for $m=1,2,\ldots,M$ and extract $D$, $K$, and $\sigma$ as functions of $m$; thus the fitting procedure described above is performed for consecutive embedding dimensions. Furthermore, the standard deviations of the correlation integral $T_m(h)$ are used as weights in the fitting procedure [12]. This can greatly reduce fitting errors by comparison with a calculation using equal weights; the latter results in large deviations at higher embedding dimensions.
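A sketch of the first two steps using scipy.optimize.curve_fit (our own illustration; step 3 is the analogous call with $\alpha$ held fixed, and a full implementation would also pass the inverse-variance weights mentioned above through the sigma argument of curve_fit). Here tau_dt stands for the product $\tau\Delta t$:

```python
from scipy.optimize import curve_fit

def fit_step1(h, T_m, m):
    """Step 1: fit Psi, D, sigma in the model of Eq. (19)."""
    model = lambda h, Psi, D, sig: (Psi * h**m * m**(-D / 2)
                                    * (h**2 + sig**2) ** ((D - m) / 2))
    (Psi, D, sig), _ = curve_fit(model, h, T_m, p0=[1.0, 2.0, 0.05])
    return Psi, D, sig

def fit_step2(h, T_m, T_m1, m, D, sig, tau_dt):
    """Step 2: fit K in Eq. (20) from the ratio T_{m+1}/T_m,
    with D and sigma held fixed at the step-1 values."""
    y = T_m1 / T_m
    model = lambda h, K: (h * np.exp(-K * tau_dt)
                          * (m / (m + 1)) ** (D / 2) / np.sqrt(h**2 + sig**2))
    (K,), _ = curve_fit(model, h, y, p0=[0.3])
    return K
```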
C. Choice of cutoff bandwidth $h_c$

There remains an unsolved technical problem in the nonlinear fitting procedure described above, namely the determination of the largest bandwidth $h_c$ to be used within the scaling region. This is a difficult task in the presence of noise. The reason is simple: noise masks the scaling region at the small scale. Intuitively, $h_c$ should be small enough to keep the fitting within the scaling region presented by the underlying noise-free dynamics, but $h_c$ cannot be so small as to prevent extracting $D$. On the other hand, $h_c$ should not be so large as to exceed the scaling region. In the general case, we suggest a choice of $h_c\approx 3\sigma$.

Numerical simulations show that the fitted $\sigma$ is insensitive to the choice of the cutoff bandwidth $h_c$. Thus, an iteration scheme can be adopted (see the short sketch at the end of this subsection): in the first run, a trial $h_c$, for example $h_c=0.5$, is used so as to obtain a fitted $\sigma$; the value of $3\sigma$ is in turn adopted as the cutoff bandwidth $h_c$. Caution should be used when the noise level is either high or low. For example, a 30% noise level will give rise to $h_c\approx 0.9$; this value is close to the upper limit of the linear scaling region for most chaotic attractors tested. On the other hand, when the noise level is low, say below 3%, the value of $3\sigma$ is too small to be used for $h_c$. In practice, we recommend fitting $D$, $K$, and $\sigma$ for a series of cutoff bandwidth values starting from a small value and from this identifying the saturation region of the scaling parameters as a function of $h_c$. A good cutoff bandwidth is then chosen as a value below the deformation region, but as large as possible to reduce the fitting errors.
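The iteration scheme reads, in a few lines (our sketch, reusing fit_step1 from above and assuming arrays h_values and T_vals of bandwidths and correlation sums for a given embedding dimension m are in scope):

```python
h_c = 0.5                         # trial cutoff for the first run
for _ in range(2):                # one refinement is typically enough
    mask = h_values <= h_c
    _, D, sig = fit_step1(h_values[mask], T_vals[mask], m)
    h_c = 3.0 * sig               # adopt h_c = 3*sigma for the next run
```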
V. EXAMPLES AND TESTS

In this section, we demonstrate the performance of the simplified Gaussian kernel algorithm presented in Secs. III and IV by applying it to some well-known chaotic systems, namely the Henon map and Lorenz chaos. First the clean data are generated from the standard models [23]. The noisy time series are then prepared by adding a Gaussian noise component with standard deviation $\sigma_{\rm in}$ to the noise-free data. In all cases, we fix $N_b=200$ and $\log_2(\epsilon_l)=-10$. The number of delay vectors $N$, the embedding dimension $m$, and the upper limit of scale $\epsilon_u$ are left adjustable. The computing time quoted below is the full CPU time of the program, and all tests are done on a SUN Ultra 10 workstation. In addition, the recursive algorithm is adopted for both the direct and the improved Gaussian kernel algorithms, and the number of reference points is set to $N_{\rm ref}=N/10$.

A. Speed with respect to direct implementation

In order to make a comparison with the direct implementation of the GKA in Secs. II B and II C, we first set the upper limit of the correlation distance $\epsilon_u$ equal to the diameter of the attractor. In this case, all interpoint distances are binned, in order to test the improvement of the simplified algorithm [Eqs. (12) and (13)] with respect to the direct calculation of Eq. (9). Two groups of controlled tests have been conducted using $N=10\,000$, $m=1,2,\ldots,15$ for the Lorenz chaos and $N=5000$, $m=1,2,\ldots,10$ for the Henon map. Not surprisingly, the direct calculation of $T_m(h)$ takes quite a long time: for the former, 29\,569 seconds ($\approx 8$ hours) are required, while for the latter 4679 seconds are needed. By contrast, the simplified algorithm is much faster than the direct implementation of the GKA and, as expected, gives a speed-up factor $T_{\rm direct}/T_{\rm simplified}\approx 400$. This value is about twice $N_b$, since the exponential operation in Eq. (9) is time consuming. It follows that the improvement from using Eqs. (12) and (13) is indeed significant.

We next examine the dependence of the computing time on the upper limit of the correlation distance $\epsilon_u$. Figure 2 depicts the computing time for the Henon map and Lorenz system. Our tests are terminated at the smallest scale $\epsilon_0$, below which the measured $D$ and $\sigma$ deviate from their corresponding true values by 10%; in Fig. 2, $\log_2(\epsilon_0)=0$ for the Henon map and $\log_2(\epsilon_0)=0.5$ for the Lorenz system. By comparison with the direct implementation, a total speed-up factor of $\approx 2500$ has been achieved at $\epsilon_0$; the use of $\epsilon_u$ alone thus contributes a further speed improvement of roughly a factor of 6. A further feature is observed in Fig. 2: there is a turning point which separates two linear regions. Below this point, the computing time shows a relatively weak dependence on $\log_2(\epsilon_u)$, while in the second region decreasing $\epsilon_u$ gives rise to a significant reduction of computing time. This leads us to suggest that $\epsilon_u$ be set within the linear region I.

FIG. 2. Computing time as a function of the upper limit of scale $\epsilon_u$, given by $\log_2(\epsilon_u)$, for $\sigma_{\rm in}=10\%$. (a) Henon map, $N=5000$ and $m=1,2,\ldots,10$; (b) Lorenz chaos, $N=10\,000$ and $m=1,2,\ldots,15$. Two linear regions I and II are indicated by the dashed and solid lines, respectively. Note that the appearance of steps in (a) is because we use one second as the time unit.
B. Cutoff bandwidth $h_c$

Figure 3 shows the average values of $D$, $K$, and $\sigma$ on increasing the cutoff bandwidth $h_c$ in the Henon map and Lorenz system. Plotted are average values taken over $m=3\text{–}6$ for the Henon map ($N=5000$) and over $m=6\text{–}10$ for the Lorenz model ($N=10\,000$). These represent measurements in discrete mapping and continuous flow systems. For both types of dynamics, it is obvious that there is a broad scaling region where both the fitted correlation dimension $D$ and noise level $\sigma$ saturate around their true values, and from this $h_c$ can be chosen. Notice that the correlation entropy $K$ exhibits a sensitive dependence on the cutoff bandwidth $h_c$ for $\sigma_{\rm in}=5\%$, decreasing with $h_c$, and shows considerable differences between noise levels at large bandwidth. The latter indicates that it is more difficult to measure $K$ when the noise level is high. Further, $K$ should be estimated when $m$ is sufficiently large [20]; the tests given in Fig. 3 do not meet this limiting condition.

FIG. 3. Fitted parameters $D$, $K$, and $\sigma$ as a function of the cutoff bandwidth $h_c$ in the Henon map (left column) and Lorenz model (right column). The solid, dotted, and dashed lines correspond to three input noise levels: $\sigma_{\rm in}=2\%$, 5%, and 10% for the Henon map and $\sigma_{\rm in}=5\%$, 10%, and 20% for the Lorenz model.

C. Precision against noise level

We examine the estimated $D$ and $\sigma$ as a function of $\sigma_{\rm in}$ for different types of independent and identically distributed (IID) noise: Gaussian, uniform, and a combination of Gaussian with uniform IID noise, with each contributing 50%. The Lorenz system is used as a representative example. The input noise level $\sigma_{\rm in}$ is set from 0% to 50%. Numerical results are shown in Fig. 4. As can be seen, the measurements of the correlation dimension $D$ and noise level $\sigma$ are in good agreement with their true values for pure Gaussian IID noise up to $\sigma_{\rm in}=50\%$. The results also show good consistency for the combined noise when the noise level is $\sigma_{\rm in}\le 40\%$. This is as expected, since the GKA is established under an assumption of Gaussian noise. By contrast, the correlation dimension is underestimated and exhibits a large deviation in the case of pure uniform IID noise for $\sigma_{\rm in}>20\%$, but the Gaussian kernel algorithm still provides a fair estimate of $D$ and $\sigma$, as indicated by circles in Fig. 4, in particular when $\sigma_{\rm in}\le 20\%$.
These results show that the Gaussian kernel algorithm is a reliable tool for measuring the correlation dimension and noise level from noisy time series and works well for different types of noise source when the noise level is below 20%. This nice property provides a basis for characterizing experimental data that are believed to contain both types of noise.

FIG. 4. Measurements of (a) correlation dimension $D$ and (b) noise level $\sigma$ using the Gaussian kernel algorithm. The three symbols correspond to Gaussian, uniform, and a combination of uniform with Gaussian noise, respectively. The dotted lines give the true values. Calculations are performed based on averages over embedding dimensions $m=8\text{–}15$.

VI. CONCLUDING REMARKS

The main point of this paper is to develop an efficient algorithm to simplify the calculation of the Gaussian kernel correlation integral. Numerical simulations show that our improved algorithm is computationally efficient and speeds up the calculation by a factor of $(2\text{–}10)N_b$ by comparison with the direct implementation. We hope that this improved algorithm meets broad computational needs and can find widespread applications in characterizing experimental data.

We find that the GKA not only works for pure Gaussian noise, but is also applicable to other types of noise provided that the underlying noise level is relatively low, say below 20%, such as a combination of Gaussian with uniform IID noise and even uniformly distributed noise. This property is of practical importance, since the noise type is usually unknown a priori in experimental time series data. More generally, any filtering and the presence of multiple noise sources will turn the noise into an approximately Gaussian distribution according to the central limit theorem. Recently, the GKA has been successfully used to characterize electrocardiograph data of ventricular fibrillation [27].

The simplified GKA provides a reliable estimation of the correlation dimension and noise level. However, it seems difficult to extract the correlation entropy $K$ exactly from the nonlinear fitting procedure described in Sec. IV B. This is understandable because we have assumed $\alpha$ to be a constant in Eq. (6), which in turn leads to Eq. (20); this is true only when $h^2+\sigma^2$ is small. Therefore, for fixed $\sigma$, the deviation of $K$ will increase with the bandwidth $h_c$. On the other hand, the correlation entropy is asymptotically obtained only as $m\to\infty$ according to its definition.

Note that, in writing Eq. (9), we assume that the delay vectors are independently distributed on the attractor according to the distribution $\tilde{\rho}_m(\mathbf{x})$. This is not always true. For data generated from continuous dynamical systems, serial temporal correlation must be ruled out [28]. Finally, though the simplified algorithm is much faster than the direct implementation of the GKA, it is not suitable for very long time series, since the algorithm for $\tilde{C}_m(\epsilon_k)$ is still $O(N N_{\rm ref})$. Nevertheless, the improved GKA is fast enough for $N\approx 10^4\text{–}10^5$ on most current workstations and personal computers.

Source codes (FORTRAN 77) can be obtained on request from the first author.

ACKNOWLEDGMENT

This work was funded by a Research Development Grant from the Scottish Higher Education Funding Council (SHEFC), Grant No. RDG/078.

[1] Measures of Complexity and Chaos, edited by N. B. Abraham, A. M. Albano, A. Passamante, and P. E. Rapp (Plenum, New York, 1989).
[2] P. Grassberger and I. Procaccia, Phys. Rev. Lett. 50, 346 (1983); Physica D 9, 189 (1983).
[3] Dimensions and Entropies in Chaotic Systems, edited by G. Mayer-Kress (Springer-Verlag, Berlin, 1986).
[4] J. Theiler, J. Opt. Soc. Am. A 7, 1055 (1990).
[5] T. Schreiber and H. Kantz, in Predictability of Complex Dynamical Systems, edited by Y. A. Kravtsov and J. B. Kadtke (Springer, New York, 1996).
[6] E. Ott, E. D. Yorke, and J. A. Yorke, Physica D 16, 62 (1985).
[7] M. Moller, W. Lange, F. Mitschke, N. B. Abraham, and U. Hubner, Phys. Lett. A 138, 176 (1989).
[8] R. L. Smith, J. R. Stat. Soc., Ser. B (Methodol.) 54, 329 (1992).
[9] G. G. Szpiro, Physica D 65, 289 (1993).
[10] T. Schreiber, Phys. Rev. E 48, R13 (1993).
[11] J. C. Schouten, F. Takens, and C. M. van den Bleek, Phys. Rev. E 50, 1851 (1994).
[12] C. Diks, Phys. Rev. E 53, R4263 (1996).
[13] H. Oltmans and P. J. T. Verheijen, Phys. Rev. E 56, 1160 (1997).
[14] D. Kugiumtzis, Int. J. Bifurcation Chaos Appl. Sci. Eng. 7, 1283 (1997).
[15] J. Argyris, I. Anderadis, G. Pavlos, and M. Athanasiou, Chaos, Solitons and Fractals 9, 343 (1998).
[16] T. Schreiber, Phys. Rev. E 56, 274 (1997).
[17] J. G. Caputo and P. Atten, Phys. Rev. A 35, 1311 (1987).
[18] F. Takens, in Dynamical Systems and Turbulence, Warwick, 1980, edited by D. A. Rand and L. S. Young (Springer-Verlag, New York, 1981), Vol. 898, pp. 366–381.
[19] J. M. Ghez and S. Vaienti, Nonlinearity 5, 777 (1992); J. M. Ghez, E. Orlandini, M. C. Tesi, and S. Vaienti, Physica D 63, 282 (1993).
[20] M. Frank, H.-R. Blank, J. Heindl, M. Kaltenhauser, H. Kochner, W. Kreische, N. Muller, S. Poscher, R. Sporer, and T. Wagner, Physica D 65, 359 (1993).
[21] M. Casdagli, S. Eubank, J. D. Farmer, and J. Gibson, Physica D 51, 52 (1991).
[22] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in FORTRAN, 2nd ed. (Cambridge University Press, Cambridge, 1992).
[23] A. Wolf, J. B. Swift, H. L. Swinney, and J. A. Vastano, Physica D 16, 285 (1985).
[24] T. Schreiber, Phys. Rep. 308, 1 (1999).
[25] A large error in estimating $D$, $K$, and $\sigma$ may result. This is because Eq. (17) is derived under the condition that $\epsilon_l\to 0$ and $\epsilon_u\to\infty$, which is not the case when $T_m(h)$ is numerically computed.
[26] J. Theiler, Phys. Rev. A 36, 4456 (1987).
[27] Dejin Yu, M. Small, R. G. Harrison, C. Robertson, G. Clegg, M. Holzer, and F. Sterz, Phys. Lett. A 265, 68 (2000).
[28] J. Theiler, Phys. Rev. A 34, 2427 (1986).
