In order to visualize the meaning of Eq. (4), let us refer to the hydrological context in which the method was devised by Hurst. Here x_i is the annual water input into a reservoir in the i-th year of a series of N years, and Y_{t,k} is the net gain or loss of stored water in the year t + k, i.e. at some time within the time lag in question. That is, the annual increment is the object of analysis. The ideal reservoir never empties and never overflows, so the required storage capacity is equal to the difference between the maximum and minimum value of Y_{t,j} over j. This difference is called the range, R. In the R/S analysis the series is divided into [N/j] non-overlapping subseries of length j, and the rescaled range is averaged over them:

    R/S = (1/[N/j]) Σ_{i=1}^{[N/j]} R_{j,i}/S_{j,i}

where S_{j,i} is the standard deviation of the corresponding subseries. Hurst observed that there is a great number of natural phenomena for which the ratio R/S obeys an empirical scaling law of the form R/S ∝ N^H.

An ordinary Brownian record is the sum of independent events with Gaussian distribution and zero mean. The Hurst exponent for such a time record is 1/2:

    X(t) − X(t_0) ≈ ξ |t − t_0|^H    (5)

where H = 1/2, ξ is a normalized independent Gaussian process and X(t_0) the initial position [4][5]. Replacing the exponent H = 1/2 in Eq. (5) with any other number in the range 0 < H < 1 defines an fBm function X_H(t). The exponent H here corresponds to the statistic H that R/S analysis calculates.

For 1/2 < H < 1 the time series is called persistent, i.e. an increasing trend in the past implies, on the average, a continued increasing trend in the future, and a decreasing trend in the past implies a decrease in the future. If 0 < H < 1/2 prevails, the time series observed is anti-persistent, i.e. an increasing trend in the past implies a decreasing trend in the future and vice versa. Persistency is found also in cases where the time series exhibits clear trends with relatively little noise [11][14][22].
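The averaging procedure above is easy to sketch numerically. The following is a minimal illustration (not the authors' implementation; the function names `rescaled_range` and `hurst_rs` are ours): it averages R_{j,i}/S_{j,i} over non-overlapping windows of each length, fits the slope of log(R/S) against log(window length), and for an independent Gaussian record recovers an exponent near 1/2 (finite samples bias the estimate slightly upward).

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic of one window: range of the cumulative
    mean-adjusted series divided by its standard deviation."""
    y = np.cumsum(x - x.mean())      # accumulated departures from the mean
    return (y.max() - y.min()) / x.std()

def hurst_rs(x, windows):
    """Estimate H as the log-log slope of the averaged R/S
    statistic versus window length."""
    rs = []
    for w in windows:
        k = len(x) // w              # number of non-overlapping windows [N/j]
        vals = [rescaled_range(x[i * w:(i + 1) * w]) for i in range(k)]
        rs.append(np.mean(vals))
    slope, _ = np.polyfit(np.log(windows), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
noise = rng.standard_normal(10_000)  # independent Gaussian increments
H = hurst_rs(noise, [16, 32, 64, 128, 256, 512])
print(H)                             # near 1/2 for white noise (small upward bias)
```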
When H > 1/2, increments are positively correlated: given a positive past increment, future increments are more likely to be positive. This is known as persistence. When H < 1/2, increments are negatively correlated, which means an increase in the past makes a decrease more likely in the future. This is called anti-persistence.
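For fractional Brownian motion this correlation between past and future increments has the standard closed form C = 2^{2H−1} − 1 (cf. [4]), which is exactly zero at H = 1/2. A one-line check:

```python
def increment_correlation(H):
    """Correlation between past and future fBm increments:
    C = 2**(2*H - 1) - 1. Zero at H = 1/2, positive for a
    persistent series, negative for an anti-persistent one."""
    return 2 ** (2 * H - 1) - 1

for H in (0.3, 0.5, 0.9):
    print(H, increment_correlation(H))
```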
To begin with, we verified whether the change of a stock price time series follows the random walk hypothesis using rescaled range analysis. We analyzed the stock price time series of the Nikkei Stock Average. The analysis period covered 1500 days of data from July 1996 to October 2002. In the analysis, the logarithmic rate of return was applied to the original analysis object.

Figure 2. Relationship of scaling interval (N) versus Hurst exponent (H)

As shown in Figure 2, H = 0.88, corresponding to N = 3, is the maximum. Therefore, it is found that data for 3 days show the strongest correlation in the fractal analysis of the Nikkei Stock Average prices. This knowledge is discovered using a feature map with rough set theory [26], shown in Table 1. Firstly, a Hurst exponent is obtained. Then the fractal dimension and the autocorrelation coefficient are calculated from the Hurst exponent.
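The paper does not restate the formulas, but the standard relations are D = 2 − H for the fractal dimension of the record and C = 2^{2H−1} − 1 for the autocorrelation of increments. Assuming these, the values for H = 0.88 follow directly:

```python
H = 0.88                     # Hurst exponent reported for N = 3 in Figure 2

D = 2 - H                    # fractal dimension of the record
C = 2 ** (2 * H - 1) - 1     # autocorrelation of successive increments

print(f"fractal dimension D = {D:.2f}")   # 1.12
print(f"autocorrelation  C = {C:.2f}")    # 0.69
```

The strongly positive C confirms the persistence of the 3-day series that motivates the short-term prediction in the next section.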
4 Short-term Prediction of Brownian Motion using BPNN

We obtained the features (knowledge: N = 3) of the time series by the fractal analysis. Two kinds of 3-layer BP neural networks, which have a delay element between each input node, are considered. The first one has no filter connected at each input node. The second one has a 3rd-order FIR filter at each input node.

Table 2. Pawlak's Lower and Upper Approximation

Class: N = 3
Number of objects: 3
Approximation accuracy: 1
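The delay elements between the input nodes mean that each training pattern is a short window of past values, with the next value as target. A minimal sketch of how such delayed inputs could be assembled for N = 3 (the function and variable names are ours, not the paper's):

```python
import numpy as np

def make_delay_inputs(series, n_inputs=3):
    """Delay-line inputs for the BPNN: each pattern is
    (x[t-n_inputs+1], ..., x[t]) and the target is x[t+1]."""
    X = np.array([series[t - n_inputs + 1:t + 1]
                  for t in range(n_inputs - 1, len(series) - 1)])
    y = np.array(series[n_inputs:])
    return X, y

prices = np.array([100.0, 101.0, 99.0, 102.0, 103.0, 101.0, 104.0])
X, y = make_delay_inputs(prices, n_inputs=3)
print(X.shape, y.shape)   # (4, 3) (4,)
```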
The configuration of the FIR filter is shown in Figure 3. It is a finite impulse response (FIR) digital filter which is connected to each input of a back-propagation type feedforward neural network (BPNN). A time delay element (Z^{-1}) is also put between the inputs of the filter.

Error: 59.4803
Epochs: 201

4.1.2 BPNN simulation with 5 input nodes

In the same way, the structure of the neural network is illustrated in Figure 6. The simulation result with 5 input nodes is shown in Figure 7. The error and the number of epochs are listed in Table 5.

Error: 304.9743
Epochs: 447
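The FIR input stage described above can be sketched as an order-3 tapped delay line. The tap weights below are illustrative placeholders, not values from the paper (in the actual system they would be adapted along with the network):

```python
import numpy as np

def fir3(x, taps=(0.4, 0.3, 0.2, 0.1)):
    """3rd-order FIR filter: y[n] = sum_{k=0}^{3} taps[k] * x[n-k],
    realized with three Z^-1 delay elements. Taps are placeholders."""
    return np.convolve(x, taps)[:len(x)]

x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # unit impulse
print(fir3(x))                             # the impulse response equals the taps
```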
Figure 4. The structure of the BPNN without filter with 3 input nodes
Figure 5. BPNN simulation without filters with 3 input nodes
Figure 6. The structure of the BPNN without filter with 5 input nodes
Figure 7. BPNN simulation without filters with 5 input nodes
Figure 8. The structure of the BPNN with filter with 3 input nodes
Figure 9. BPNN simulation with filter with 3 input nodes
Figure 10. The structure of the BPNN with filter with 5 input nodes
Figure 11. BPNN simulation with filter with 5 input nodes
4.2 Simulation by the 3-layer BPNN with filters

Table 7 shows the structure of the 3-layer BPNN with filters.

Table 7. 3-layer BPNN with filters network structure

Table 10. Comparison of both networks (without filters)

                  Error      Epochs
3 Input Nodes     59.4803    201
5 Input Nodes     304.9743   447

Table 11. Comparison of both networks (with filters)

                  Error      Epochs
3 Input Nodes     47.3381    37
5 Input Nodes     88.2962    154

References

[4] J. Feder, Fractals, Plenum Press, New York, 1988, p. 288.
[5] N. Wiener, Differential space, J. Math. Phys. Mass. Inst. Technol. 2, 1923, pp. 131-174.
[10] B. Mandelbrot and J. R. Wallis, Water Resour. Res. 5, 1969, p. 228.