
2, September 2007: 82 - 87

Parameter Estimation using Least Square Method for MIMO Takagi-Sugeno Neuro-Fuzzy in Time Series Forecasting

Institut für Automatisierungstechnik, Universität Bremen
Email: indi@peter.petra.ac.id, natarajan@uni-bremen.de

ABSTRACT

This paper describes the LSE method for improving a Takagi-Sugeno neuro-fuzzy model of a multi-input, multi-output system using a set of data (the Mackey-Glass chaotic time series). The performance of the generated model is verified using a separate set of validation/test data. The LSE method is used to compute the consequent parameters of the Takagi-Sugeno neuro-fuzzy model, while the means and variances of the Gaussian membership functions are initially set to certain values and are then updated using the backpropagation algorithm. The simulation using Matlab shows that the developed neuro-fuzzy model is capable of forecasting the future values of the chaotic time series and adaptively reduces the amount of error during its training and validation.

Keywords: forecasting, time series, gaussian membership function, neuro-fuzzy, least square.

INTRODUCTION

When considering fuzzy logic for analyzing and solving engineering problems that require computational intelligence, one practically chooses between the Mamdani model and the Takagi-Sugeno model. Of course, other models could be used as well, but the Takagi-Sugeno model is preferred for data-based (or numerically-data-driven) fuzzy modeling [1, 2, 3] as well as for forecasting time series data, where in many cases the system can be seen as a collection of locally linear models. Another advantage is that the inference formula of the Takagi-Sugeno model is only a two-step procedure based on a weighted average defuzzifier, whereas the Mamdani type of fuzzy model basically consists of four steps [4, 5].

In this paper, we implement neuro-fuzzy modeling, which combines the ability to reason on a linguistic level and to handle uncertain or imprecise data with the learning and adaptation abilities of intelligent systems. Here we choose a Takagi-Sugeno type fuzzy model along with differentiable operators and membership functions, and the weighted average defuzzifier for defuzzification of the output data. The corresponding output inference can then be represented in a multilayer feedforward network structure as described in [4, 5], which is identical to the architecture of ANFIS [6]. Since we use a Takagi-Sugeno fuzzy model with Gaussian Membership Functions (GMF), we have three types of free parameters: the mean and variance of each Gaussian membership function, and the rule consequent parameters (which we usually name theta). Optimizing these parameters so that the speed of convergence can be accelerated is of special interest in the research field of neuro-fuzzy techniques.

This paper concentrates on the optimization of theta by estimation using the least square method (LSE) for a MIMO system. We arrange our presentation as follows. First, we explain the importance of the MIMO Takagi-Sugeno neuro-fuzzy implementation for time series forecasting and the treatment procedure of the data, in this case the Mackey-Glass chaotic time series. Next, we explain how we implement the least square method for optimizing the free parameters of the Takagi-Sugeno rule consequents. All important equations, or equations derived directly from their definitions, will be numbered, while their derivation steps will not be numbered. Finally, the results from the Matlab simulation will be discussed.

Note: Discussion is expected before December 1st, 2007, and will be published in the Jurnal Teknik Elektro, volume 8, number 1, March 2008.

MIMO TAKAGI-SUGENO NEURO-FUZZY FOR FORECASTING CHAOTIC TIME SERIES

Here we consider short-term forecasting as the implementation case of the MIMO Takagi-Sugeno neuro-fuzzy model for chaotic time series. Chaotic time series are generated by deterministic nonlinear systems that are sufficiently complicated that they appear to be random. The chaotic time series we use here is the Mackey-Glass time series, which is available in Matlab. This series is generated by solving the Mackey-Glass time delay differential equation [7]:

dx(t)/dt = 0.2·x(t−τ) / (1 + x^10(t−τ)) − 0.1·x(t)    (1)

where x(0) = 1.2, τ = 17, and x(t) = 0 for t < 0.
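Equation (1) can be integrated numerically. The paper takes the pre-generated data set from Matlab; the following is only a coarse forward-Euler sketch in Python/NumPy (our own illustrative assumption — Matlab's stored series comes from a more accurate solver, so exact values will differ):

```python
import numpy as np

def mackey_glass(n=1200, tau=17):
    """Forward-Euler integration (step dt = 1) of eq (1):
    dx/dt = 0.2 x(t-tau) / (1 + x(t-tau)^10) - 0.1 x(t),
    with x(0) = 1.2 and x(t) = 0 for t < 0."""
    x = np.zeros(n)
    x[0] = 1.2
    for t in range(n - 1):
        x_tau = x[t - tau] if t >= tau else 0.0   # delayed term; zero history
        x[t + 1] = x[t] + 0.2 * x_tau / (1.0 + x_tau ** 10) - 0.1 * x[t]
    return x

series = mackey_glass()
```

With a unit step the trajectory stays bounded and reproduces the qualitative oscillatory behavior; a smaller step or a Runge-Kutta scheme would track the true chaotic attractor more faithfully.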

Parameter Estimation using Least Square Method for MIMO Takagi-Sugeno Neuro-Fuzzy in Time Series Forecasting

[Indar Sugiarto, et al]

Since the data already exist, data preparation for forecasting consists only of structuring the data and determining the training and validation data sets. According to the Matlab documentation, the Mackey-Glass data set consists of 1200 data points. For our experiment, we neglected the first 100 points due to their harmonic nature, then took the next 200 points for training, and finally used the next 800 points for validation. So we have a time series of the form X = {X_1, X_2, X_3, ..., X_n} taken at regular intervals of time t = {1, 2, 3, ..., n}.

For this forecasting problem, a set of known values of the time series up to a point in time, say t, is usually used to predict the future value of the time series at some point, say (t+P). We therefore create a mapping from S sample data points, sampled every s units in time, to a predicted future value of the time series, which results in an S-dimensional input vector of the form:

XI(t) = [X(t−(S−1)s), X(t−(S−2)s), ..., X(t)]

Palit [3] shows that for predicting the Mackey-Glass time series, the values S = 4 and s = P = 6 are good choices. The output data of the fuzzy predictor corresponds to the trajectory prediction, which for the MIMO case can be written as:

XO(t) = [X(t+P), X(t+2P), X(t+3P), ...]
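The windowing above can be sketched as follows in Python/NumPy (the helper name make_windows and the stand-in sine series are our illustrative assumptions; the paper applies this to the Mackey-Glass data with S = 4 and s = P = 6):

```python
import numpy as np

def make_windows(x, S=4, s=6, P=6, n_out=3):
    """Build inputs XI(t) = [x(t-(S-1)s), ..., x(t)] and outputs
    XO(t) = [x(t+P), x(t+2P), ..., x(t+n_out*P)] for every valid t."""
    XI, XO = [], []
    start = (S - 1) * s                # first t with a full history window
    stop = len(x) - n_out * P          # last t with all future targets
    for t in range(start, stop):
        XI.append([x[t - (S - 1 - k) * s] for k in range(S)])
        XO.append([x[t + (j + 1) * P] for j in range(n_out)])
    return np.array(XI), np.array(XO)

x = np.sin(0.1 * np.arange(300))       # stand-in series for illustration
XI, XO = make_windows(x)
```

Each row of XI pairs with the corresponding row of XO, which is exactly the sample layout the XIO matrix below collects.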

Figure 1. Structure of Takagi-Sugeno neuro-fuzzy as proposed by Palit (a) and Jang (b).

Referring to Figure 1(a) above, the fuzzy logic system considered here for constructing neuro-fuzzy structures is based on the Takagi-Sugeno fuzzy model with Gaussian membership functions. It uses product inference rules and a weighted average defuzzifier defined as:

f_j(x_p) = Σ_{l=1..M} y_j^l z^l / Σ_{l=1..M} z^l    (2)

where l = 1, 2, 3, ..., M reflects the number of rules (Gaussian membership functions). The output consequent of the l-th rule for each output j is:

y_j^l = θ_0j^l + Σ_{i=1..n} θ_ij^l x_i    (3)

Then we generate the degree of fulfillment through the fuzzification procedure:

z^l = Π_{i=1..n} G^l(x_i), where G^l(x_i) = exp(−(x_i − c_i^l)^2 / (σ_i^l)^2)    (4)

We combine XI and XO into a single matrix, named XIO, which stands for the Input-Output matrix. For our MIMO case we use 4 inputs and 3 outputs, so that the XIO matrix can be written in the form:

        | X_1      X_2      X_3      X_4     |  X_5      X_6      X_7  |
XIO =   | X_4      X_5      X_6      X_7     |  X_8      X_9      X_10 |
        | ...      ...      ...      ...     |  ...      ...      ...  |
        | X_{n-6}  X_{n-5}  X_{n-4}  X_{n-3} |  X_{n-2}  X_{n-1}  X_n  |
          \---------- XI matrix ----------/    \----- XO matrix -----/

The neuro-fuzzy structure used here is the same as the one described in [4, 5], which is similar to the MANFIS (Multiple-output ANFIS) model [1, 6]. For convenience, Figure 1 redraws the neuro-fuzzy model proposed by [5] in comparison with the MANFIS model [1], both for a two-input, two-output system.
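The inference of eqs (2)-(4) can be sketched in Python/NumPy (the paper's own simulations use Matlab; the function name ts_forward and the random sizes below are our illustrative assumptions):

```python
import numpy as np

def ts_forward(x, c, sigma, theta):
    """Takagi-Sugeno inference, eqs (2)-(4).
    x: (n,) input vector; c, sigma: (M, n) Gaussian means/spreads;
    theta: (m, M, n+1) consequents, theta[j, l, 0] being theta_0j^l."""
    # eq (4): degree of fulfillment via the product of Gaussian memberships
    G = np.exp(-((x - c) ** 2) / sigma ** 2)        # (M, n)
    z = G.prod(axis=1)                              # (M,)
    # eq (3): rule consequents y_j^l = theta_0j^l + sum_i theta_ij^l x_i
    y = theta[:, :, 0] + theta[:, :, 1:] @ x        # (m, M)
    # eq (2): weighted average defuzzifier
    return (y * z).sum(axis=1) / z.sum()            # (m,)

rng = np.random.default_rng(0)
x_in = rng.normal(size=4)                           # n = 4 inputs
c = rng.normal(size=(5, 4))                         # M = 5 rules
sig = np.ones((5, 4))
theta = rng.normal(size=(3, 5, 5))                  # m = 3 outputs
f = ts_forward(x_in, c, sig, theta)
```

A quick sanity check of the defuzzifier: if every rule consequent is the same constant, the weighted average returns that constant regardless of the fulfillments.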

Each input x_i is fuzzified by a Gaussian membership function with the corresponding mean and variance parameters c_i^l and σ_i^l. We assume that c_i^l ∈ I_i, σ_i^l > 0, and y_j^l ∈ O_j, where I_i and O_j are the input and output universes of discourse, respectively. The corresponding l-th rule of the above fuzzy logic system can be written as:

If x_1 is G_1^l and x_2 is G_2^l and ... and x_n is G_n^l,
then y_j^l = θ_0j^l + θ_1j^l x_1 + θ_2j^l x_2 + ... + θ_nj^l x_n

where y_j^l are the rule output consequents and f_j are the m outputs. If all the parameters of the neuro-fuzzy network are properly selected, the system can correctly approximate any nonlinear system based on the given XIO matrix data.

For training this neuro-fuzzy network, which is equivalent to a multi-input multi-output (MIMO) feedforward network, we use the backpropagation algorithm (BPA). Generally, the BPA based on the steepest descent gradient rule uses a low learning rate (η) for network training in order to avoid oscillations in the final phase of the training. That is why the BPA needs a large number of epochs (recursive steps) to reach the smallest SSE (Sum of Squared Errors). The steepest descent gradient rule used for neuro-fuzzy network training is based on the recursive expressions:

θ_0j^l(k+1) = θ_0j^l(k) − η·(∂S/∂θ_0j^l)
c_i^l(k+1) = c_i^l(k) − η·(∂S/∂c_i^l)
σ_i^l(k+1) = σ_i^l(k) − η·(∂S/∂σ_i^l)

The squared error for output j over the N training samples is

S_j = 0.5 Σ_{p=1..N} (e_j^p)^2 = 0.5 E_j^T E_j    (5)

and the overall performance index is

SSE = Σ_{j=1..m} S_j    (6)

where the error e_j is the difference between the real sampled data d_j from the XIO matrix and its prediction f_j, that is, e_j = (d_j − f_j).

Recalling

f_j = Σ_{l=1..M} y_j^l z^l / b, with b = Σ_{l=1..M} z^l,

the derivative ∂f_j/∂θ_0^l is obtained as follows:

∂f_j/∂θ_0^l = (z^1/b)·(∂y_j^1/∂θ_0^l) + (z^2/b)·(∂y_j^2/∂θ_0^l) + ... + (z^M/b)·(∂y_j^M/∂θ_0^l) = z^l/b    (7)

since ∂y_j^m/∂θ_0^l = 0 if l ≠ m and ∂y_j^l/∂θ_0^l = 1. With e_j = (d_j − f_j), we also have ∂e_j/∂θ_0^l = −∂f_j/∂θ_0^l.

Here l = 1, 2, 3, ..., M reflects the number of rules (Gaussian membership functions) and i = 1, 2, 3, ..., n the number of inputs. In the sense of the BPA, θ, c, and σ are the parameters that will be updated during the training phase. We treat θ_0 separately from θ_i since it does not multiply any part of the input vector and hence is calculated independently. The SSE for each epoch reflects the training performance function; we use the notation S for the squared error, S = 0.5 E^T E = 0.5 [e_1^2 + e_2^2 + ... + e_n^2].

Applying the chain rule term by term,

∂S/∂θ_0^l = e_1·(∂e_1/∂θ_0^l) + e_2·(∂e_2/∂θ_0^l) + ... + e_n·(∂e_n/∂θ_0^l)
          = (f_1 − d_1)·(∂f_1/∂θ_0^l) + ... + (f_n − d_n)·(∂f_n/∂θ_0^l)    (8)

which, using ∂f_j/∂θ_0^l = z^l/b, yields for output j:

∂S/∂θ_0^l = (f_j − d_j)·z^l/b    (9)

Similarly, the derivatives for the remaining adjustable parameters can be obtained as:

∂S/∂θ_ij^l = (f_j − d_j)·(z^l/b)·x_i
∂S/∂c_i^l = A·{2·(z^l/b)·(x_i − c_i^l)/(σ_i^l)^2}
∂S/∂σ_i^l = A·{2·(z^l/b)·(x_i − c_i^l)^2/(σ_i^l)^3}
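These analytic gradients can be sanity-checked against finite differences. Below is a minimal single-output sketch for eq (9), holding the fulfillments z fixed (the helper name ts_output and all sizes are our illustrative assumptions):

```python
import numpy as np

def ts_output(x, z, theta0, theta):
    """Single-output TS inference with fixed fulfillments z: f = sum_l z^l y^l / b."""
    b = z.sum()
    y = theta0 + theta @ x              # y^l = theta_0^l + sum_i theta_i^l x_i
    return (y * z).sum() / b, b

rng = np.random.default_rng(1)
M, n = 4, 3
x = rng.normal(size=n)
z = rng.uniform(0.1, 1.0, size=M)       # degrees of fulfillment, held fixed here
theta0 = rng.normal(size=M)
theta = rng.normal(size=(M, n))
d = 0.7                                 # desired output d_j

f, b = ts_output(x, z, theta0, theta)
grad_analytic = (f - d) * z / b         # eq (9): dS/dtheta_0^l = (f_j - d_j) z^l / b

# forward-difference check on S = 0.5 (d - f)^2
eps = 1e-6
grad_fd = np.zeros(M)
for l in range(M):
    theta0[l] += eps
    f_plus, _ = ts_output(x, z, theta0, theta)
    theta0[l] -= eps
    grad_fd[l] = (0.5 * (d - f_plus) ** 2 - 0.5 * (d - f) ** 2) / eps
```

Since S is quadratic in θ_0, the forward difference agrees with eq (9) to within O(eps), which makes this a cheap regression test when implementing the full training loop.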


where

A = Σ_{j=1..m} [y_j^l − f_j]·(f_j − d_j)    (10)

Recall the rule base of the Takagi-Sugeno model:

R^1: If x_1 is G_1^1 and x_2 is G_2^1 and ... x_n is G_n^1,
     then y_j^1 = θ_0j^1 + θ_1j^1 x_1 + θ_2j^1 x_2 + ... + θ_nj^1 x_n
...
R^M: If x_1 is G_1^M and x_2 is G_2^M and ... x_n is G_n^M,
     then y_j^M = θ_0j^M + θ_1j^M x_1 + θ_2j^M x_2 + ... + θ_nj^M x_n

The Takagi-Sugeno inference will then be:

f_j = Σ_{l=1..M} y_j^l z^l / Σ_{l=1..M} z^l = (z^1 y_j^1 + z^2 y_j^2 + ... + z^M y_j^M) / (z^1 + z^2 + ... + z^M)

OPTIMIZING THE FREE PARAMETERS OF TAKAGI-SUGENO RULES CONSEQUENT PARAMETER

The consequent parameters of the l-th rule can be collected in the vector

θ^l = [θ_0^l, θ_1^l, θ_2^l, ..., θ_n^l]    (11)

The degree of fulfillment is computed for the n-input system using the product operator, as in the previous section:

z^l = Π_{i=1..n} G^l(x_i)    (12)

and the normalized fulfillment of each rule is λ^l = z^l / Σ_{l=1..M} z^l, so that the Takagi-Sugeno inference for the i-th training sample becomes:

f_i = λ_i^1 (θ_0^1 + θ_1^1 x_{1,i} + ... + θ_n^1 x_{n,i}) + λ_i^2 (θ_0^2 + θ_1^2 x_{1,i} + ... + θ_n^2 x_{n,i}) + ... + λ_i^M (θ_0^M + θ_1^M x_{1,i} + ... + θ_n^M x_{n,i})

We can apply least square estimation using the XI matrix and the output matrix f_j. But first, we append a 1 alongside the n inputs of XI (call the result XIe), which takes care of θ_0^l above. The matrix form of the corresponding Takagi-Sugeno inference can then be written as:

| f_1 |   | λ_1^1 XIe_1   λ_1^2 XIe_1   ...   λ_1^M XIe_1 |   | θ^1 |
| f_2 | = | λ_2^1 XIe_2   λ_2^2 XIe_2   ...   λ_2^M XIe_2 | · | θ^2 |
| ... |   |     ...            ...      ...       ...     |   | ... |
| f_N |   | λ_N^1 XIe_N   λ_N^2 XIe_N   ...   λ_N^M XIe_N |   | θ^M |

or [f] = [XIe][θ]. Multiplying both sides by [XIe]^T gives the normal equations

[XIe]^T [XIe] [θ] = [XIe]^T [f]

which yield:

[θ] = ([XIe]^T [XIe])^{-1} [XIe]^T [f]    (13)

SIMULATION USING MATLAB

We arrange all of the above equations in matrix form and use Matlab to simulate them so that we can observe the model's performance. For the first experiment, we generate the matrix c (means) and σ (variances) of the Gaussian membership functions randomly, and the matrix θ is also generated randomly. Here is the result of the simulation using these randomly generated free parameters of the Takagi-Sugeno neuro-fuzzy network.

Figure: simulation result with randomly generated free parameters of the Takagi-Sugeno neuro-fuzzy network.

Because the free parameters are generated randomly, repeated runs give different results from the figure above. We recorded the best simulation, which yielded SSE = 0.14347. Here is the best simulation for initially random parameters during the validation process.

Figure: best simulation for initially random parameters during the validation process.

The next experiment uses the LSE to optimize the rule consequent parameters θ. First we modify the equation for θ with a small adaptation step δ, to avoid singularity problems during the matrix calculation, so that it becomes:

[θ] = ([XIe]^T [XIe] + δI)^{-1} [XIe]^T [f]    (14)
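A minimal NumPy sketch of the batch solve of eqs (13)-(14); the sample sizes, the Dirichlet-generated fulfillment weights, and the variable names (XIe, lam, delta) are our illustrative assumptions, not the paper's code:

```python
import numpy as np

# Rows of the design matrix are [lam^1*[1, x], ..., lam^M*[1, x]] per sample,
# so theta stacks the M vectors [theta_0^l, theta_1^l, ..., theta_n^l].
rng = np.random.default_rng(2)
N, n, M = 200, 4, 5
X = rng.normal(size=(N, n))                  # training inputs
lam = rng.dirichlet(np.ones(M), size=N)      # normalized fulfillments (stand-in)
ones = np.ones((N, 1))
XIe = np.hstack([lam[:, [l]] * np.hstack([ones, X]) for l in range(M)])

theta_true = rng.normal(size=M * (n + 1))
f = XIe @ theta_true                         # noise-free targets from a known model

delta = 1e-6                                 # small adaptation step, eq (14)
A = XIe.T @ XIe + delta * np.eye(M * (n + 1))
theta = np.linalg.solve(A, XIe.T @ f)
```

With noise-free targets the regularized solve recovers the generating parameters almost exactly; δ only matters when [XIe]^T[XIe] is near-singular, which is precisely the case eq (14) guards against.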

Here is the optimized result of the Gaussian membership functions after being updated over 200 iterations (recall that we use the first 200 data points of the XIO matrix for training). The implementation of the LSE method for the rule consequent parameters reduces the SSE down to 0.0206, much smaller than the previous SSE; the performance of the neuro-fuzzy system improves by a factor of up to 700%. Simulation during the validation process yields the following result.

Figure: validation result with the LSE method, a clear improvement over the previous result.

Figure: Gaussian membership functions after being updated over 200 iterations.

DISCUSSION AND CONCLUSION

We conclude with two important remarks:

a. From the simulation, we can see that the LSE method can enhance the performance of the Takagi-Sugeno neuro-fuzzy model for time series forecasting, although it requires more computational power. Although the GMFs are not equally distributed along the universe of discourse, the system gives satisfactory predictions of the Mackey-Glass data.

b. The value of η for updating the parameters should be chosen properly, since this value is sensitive: too high a value makes the program tend to oscillate. The computation also involves a huge amount of data, so one must find an optimization procedure to simplify the program or to accelerate the calculation process.

The results of this paper certainly confirm that the same performance could otherwise only be achieved with a high number of rules, which in turn would require much effort to reduce redundancies and conflicting rules.

REFERENCES

[1] Fuzzy Modelling, Proceedings of the International Conference on Neural Networks, 1995.
[2] Min-You Chen, D.A. Linkens, Rule-base self-generation and simplification for data-driven fuzzy models, Fuzzy Sets and Systems, 2003.
[3] Hossein Salehfar, Nagy Bengiamin, Jun Huang, A Systematic Approach to Linguistic Fuzzy Modeling Based on Input-Output Data, Proceedings of the 2000 Winter Simulation Conference, 2000.
[4] Time Series using Neuro-Fuzzy Approach, Proceedings of IEEE-IJCNN, 1999.
[5] Palit A.K., Popovic D., Computational Intelligence in Time Series Forecasting: Theory and Engineering Applications, Springer-Verlag London, 2005.
[6] J.-S. Roger Jang, ANFIS: Adaptive-Network-based Fuzzy Inference Systems, IEEE Transactions on Systems, Man and Cybernetics, 1993.
[7] MATLAB, The Language of Technical Computing, User's Guide and Programming, The MathWorks Inc., 1998.
[8] Felix Pasila, Indar Sugiarto, Mamdani Model for Automatic Rule Generation of a MISO System, Proceedings of Soft Computing, Intelligent Systems and Information Technology (SIIT2005), 2005.
