Fuzzy Model Identification and Self-Learning for Dynamic Systems

CHEN-WEI XU AND YONG-ZAI LU

Abstract—Algorithms of fuzzy model identification and self-learning for multi-input/multi-output dynamic systems are proposed. The required computer capacity and time for implementing the proposed algorithms and the resulting models are significantly reduced by introducing the concept of the "referential fuzzy sets." Two numerical examples are given to show that the proposed algorithms can provide fuzzy models with satisfactory accuracy.

Manuscript received April 19, 1986; revised December 12, 1986. This work was supported in part by the National Science Foundation from the National Educational Committee of China.
The authors are with the Laboratory for Industrial Process Modelling and Control, Zhejiang University, Hangzhou, China.
IEEE Log Number 8714515.

I. INTRODUCTION

A number of approaches to the identification of system dynamics have been proposed during the last two decades [1]. However, many difficulties still exist in applying the existing methods to real complex systems with nonlinear time-varying characteristics. One possible approach to overcoming these difficulties is to use a fuzzy model [5]-[7] to describe the static and/or dynamic behavior of such systems. The identification of such fuzzy models may be done in two ways: the linguistic approach [7]-[10], [12] and the approach based on resolving fuzzy relational equations [5], [6], [11].

Fuzzy relational model based identification was proposed a few years ago. Tong [7] proposed a "logical examination" method to solve the linguistic identification problem [17]. Li et al. modified Tong's method and obtained better results [8]. They also proposed an adaptive model modification algorithm based on the "decision table." However, these algorithms could not be used for multivariable systems of high dimension because of the large amounts of computer memory and time required. In addition, the correlation analysis method for determining the structure of the rule set, as proposed in [7] and [8], is also difficult to extend to multivariable systems.

Pedrycz emphasized that the "referential fuzzy set" is an important concept in linguistic modeling [10]. A new composition rule and the corresponding fuzzy model identification algorithm were also proposed by Pedrycz in [10]. Li et al. studied the self-learning of fuzzy models [9] and proposed a self-learning algorithm for simple SISO (single-input/single-output) fuzzy models.

This correspondence proposes a general fuzzy model identification algorithm for MISO (multi-input/single-output) systems, based on Pedrycz's work [10], together with a related self-learning algorithm. Two numerical examples show that the proposed identification algorithm can provide a fuzzy model with fairly high accuracy and that the proposed self-learning algorithm can make the model still more accurate. Note that the self-learning algorithm can also readily be converted into a real-time form for on-line applications.

II. FUZZY MODEL IDENTIFICATION

A. Problem Statement

A discrete-time fuzzy relational model for a MISO system with p inputs may be written as

y(t) = y(t-T) o y(t-T-1) o ... o y(t-T-ny) o u1(t-T1) o ... o u1(t-T1-n1) o ... o up(t-Tp) o ... o up(t-Tp-np) o R    (1)

where the output y(.) and the inputs u1(.), ..., up(.) are all fuzzy variables, and R is the fuzzy relation between the inputs and the output. The symbol "o" denotes the fuzzy composition operator; T, T1, ..., Tp are time delays, and ny, n1, ..., np represent the system orders. Equation (1) is one of the general forms of MISO discrete-time fuzzy models. As is well known, the identification problem usually involves both structure identification and parameter estimation. Obviously, structure identification for the problem under study amounts to determining the delays (T, T1, ..., Tp) and the orders (ny, n1, ..., np). The approach to structure identification can be similar to the regular methods.

For convenience, let

x1(t) = y(t-T)
x2(t) = y(t-T-1)
...
xny+1(t) = y(t-T-ny)
xny+2(t) = u1(t-T1)
...
xn(t) = up(t-Tp-np)    (2)

where

n = ny + 1 + Σ_{i=1}^{p} (ni + 1).

Substituting (2) into (1) yields

y(t) = x1(t) o x2(t) o ... o xn(t) o R.    (3)

It should be emphasized that the fuzzy relational model (3) is based on the concept of "referential fuzzy sets" used by several investigators [7]-[10], [21]. A new composition rule has been adopted such that the fuzzy relation R in (3) does not directly connect the elements of all universes but connects the prespecified linguistic constants (referential sets) on these universes, as proposed in [10].
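The composed model (3) can be sketched in code. The following is a minimal illustration, not the authors' implementation: each xi is represented by its possibility degrees over the r referential sets, the relation R is stored on tuples of referential-set indices, and the output is obtained by max-min composition over all index combinations. All names here are hypothetical.

```python
import itertools

def compose(xs, R, r):
    """Max-min composition (3): xs is a list of n possibility vectors over
    the r referential sets; R maps index tuples (s1, ..., sn, s) to grades.
    Hypothetical representation, for illustration only."""
    out = []
    for s in range(r):
        deg = 0.0
        for idx in itertools.product(range(r), repeat=len(xs)):
            # grade of this path: min of all antecedent degrees and R
            grade = min(min(x[i] for x, i in zip(xs, idx)), R[idx + (s,)])
            deg = max(deg, grade)
        out.append(deg)
    return out
```

The triple loop makes explicit why the referential-set representation saves memory: R is indexed by r^(n+1) linguistic labels rather than by the full discretized universes.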
Tmax = max(T1, ..., Tp) + ...

... = sup min[Anj(xn), xn(k, xn)]

pj(k) = poss(Bj | y(k))

for all xi ∈ Xi there exists j ∈ T such that Aij(xi) > 0, i = 1, ..., n    (6)

and

for all y ∈ Y there exists j ∈ T such that Bj(y) > 0    (7)

where T = {1, 2, ..., r}.

The number of referential fuzzy sets in each universe, r, should be selected according to experience and tests. In general, the model accuracy may be improved by increasing r [10]. However, ...

for all s1, ..., sn ∈ T.    (12)

c) Calculate R:

R = R1 ∪ R2 ∪ ... ∪ RN    (13)

where ∪ denotes the union operation, i.e.,

R(s1, ..., sn, s) = max_{k=1,...,N} Rk(s1, ..., sn, s).
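The two operations surviving in this excerpt, the sup-min possibility measure and the pointwise union (13), can be illustrated with a small sketch (assumed helper names, not the paper's code):

```python
def poss(A, datum):
    """Possibility of referential set A given a fuzzy datum:
    poss(A | x) = sup_x min[A(x), x(k, x)], on a discretized universe."""
    return max(min(a, d) for a, d in zip(A, datum))

def union(relations):
    """R = R1 u ... u RN taken pointwise as the maximum, as in (13)."""
    return {key: max(Rk[key] for Rk in relations)
            for key in relations[0]}
```

Here each per-datum relation Rk is assumed to be stored as a dict over referential-set index tuples, so the union is just an elementwise max across the N data.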
TABLE IV
(Max-product composition and de F2.)
T    1      1      1      1      1      1      1      1      1      2      3      4
T1   0      0      0      0      0      1      2      3      4      0      0      0
T2   2      3      4      5      6      4      4      4      4      4      4      4
J1   0.0853 0.0798 0.0331 0.0843 0.0827 0.0927 0.1129 0.1286 0.0855 0.0932 0.0858 0.0875
J2   0.0892 0.0833 0.0369 0.0858 0.0818 0.0989 0.1249 0.1313 0.0893 0.0970 0.0922 0.0926

TABLE V
(Max-min composition and de F2.)
T    1      1      1      1      1      1      1      1      1      2      3      4
T1   0      0      0      0      0      1      2      3      4      0      0      0
T2   2      3      4      5      6      4      4      4      4      4      4      4
J1   0.0932 0.0925 0.0393 0.0918 0.0866 0.0863 0.0988 0.0986 0.1017 0.0947 0.1213 0.1271
J2   0.0934 0.0938 0.0429 0.0933 0.0900 0.0902 0.1071 0.1045 0.1022 0.1037 0.1236 0.1308
in Table I. It can be seen that J reaches its minimum when T1 = 1 and T2 = 4.

A comparison between different composition and defuzzification methods is shown in Table II. The minimal J corresponds to max-product composition and de F2.

Self-Learning of the Model: The relation R with J = 0.4555, taken as the initial relation, is modified by algorithm A2. Let Ji denote J after i modifications, i = 0, 1, 2, .... The effect of modification with different "step lengths" h is shown in Fig. 2. Self-learning significantly reduces J. It is noted that in self-learning h plays an important role, somewhat like that of the step length in gradient methods. An adequate h will result in good convergence of the self-learning.

When algorithm A2 is converted into an on-line form, the requirement on h may be different: in the off-line case J may be reduced by carrying out self-learning step by step, whereas in the on-line case a larger h is usually desired so as to follow the time-varying characteristics of the system as quickly as possible.

Table III compares the proposed algorithms with the published literature. The results clearly demonstrate the advantages of the proposed methods.

Example 2: A Simulation Dynamic Model

A two-input/single-output bilinear model

y(k) = 0.8y(k-1)u1(k) + 0.5u1(k-1)y(k-2) + u2(k-4) + a·e(k)    (35)

is used to provide the input-output data sequences, expressed as follows:

data 1 (a = 0): {y(k), u1(k), u2(k), k = 1, ..., 400}    (36)

data 2 (a = 1): {y(k), u1(k), u2(k), k = 1, ..., 400}.    (37)

In model (35), e(k) is an uncorrelated random noise uniformly distributed on (-0.08, 0.08). Therefore, data 1 are noise-free and data 2 are noisy. The inputs u1(k) and u2(k) are both uncorrelated random sequences uniformly distributed on (0.1, 0.9).

Identification Procedures:
a) Determine the universes Y, U1, and U2.
b) Determine the referential sets. Again r = 5. The memberships of the referential sets A11, ..., A15, A21, ..., A25, B1, ..., B5 are shown in Fig. 3.
c) Suppose that the rule has the following structure:

(y(t-T), u1(t-T1), u2(t-T2)) → y(t)    (38)

where the delays T, T1, and T2 will be determined later.
d) Define a performance index

J1, J2 = (1/390) Σ_{k=11}^{400} |y(k) - ŷ(k)|

where J1 corresponds to data 1 and J2 to data 2.

The delays T, T1, and T2 and the composition and defuzzification methods will be determined by minimizing J1/J2.

It may be intuitively seen that the effects of the delays T, T1, and T2 on J1/J2 are not affected by the composition and defuzzification methods. To verify this, let us look for T, T1, and T2 that minimize J1/J2 under two different combinations of composition and defuzzification methods. The results are shown in Tables IV and V. It can be seen in Tables IV and V that when T = 1, T1 = 0, and T2 = 4, J1/J2 are minimal. This result shows that in identification we can first choose appropriate T, T1, and T2 by minimizing J1/J2 using any composition and defuzzification methods, then fix the delays at their best values and minimize J1/J2 over the composition and defuzzification methods.

The effect of the composition and defuzzification methods on J1/J2 is shown in Table VI. Here the best choices are max-product composition and de F3. This is different from Example 1. In addition, de F1 causes very poor performance.
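The data generation of (35)-(37) and the index J can be sketched as follows. The initial conditions (taken as zero here) are an assumption, since the correspondence does not specify them; `simulate` and its names are illustrative only.

```python
import random

def simulate(a, n=400, seed=0):
    """Generate {y(k), u1(k), u2(k)} from the bilinear model (35);
    a = 0 gives noise-free data 1, a = 1 gives noisy data 2.
    Initial conditions are an assumption (zeros), not from the paper."""
    rng = random.Random(seed)
    u1 = [rng.uniform(0.1, 0.9) for _ in range(n + 1)]
    u2 = [rng.uniform(0.1, 0.9) for _ in range(n + 1)]
    y = [0.0] * (n + 1)
    for k in range(1, n + 1):
        e = rng.uniform(-0.08, 0.08)
        y[k] = (0.8 * y[k - 1] * u1[k]
                + 0.5 * u1[k - 1] * y[max(k - 2, 0)]
                + u2[k - 4 if k >= 4 else 0]
                + a * e)
    return y, u1, u2

def J(y, yhat):
    """Performance index: mean absolute error over k = 11, ..., 400."""
    return sum(abs(y[k] - yhat[k]) for k in range(11, 401)) / 390.0
```

Summing from k = 11 discards the start-up transient, which is why the normalizing factor is 390 rather than 400.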
688 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, VOL. SMC-17, NO. 4, JULY/AUGUST 1987
TABLE VI
(T = 1, T1 = 0, T2 = 4.)

Composition    de F1          de F2          de F3
Max-min        J1 = 0.0663    J1 = 0.0393    J1 = 0.0442
               J2 = 0.0652    J2 = 0.0429    J2 = 0.0468
Max-product    J1 = 0.0821    J1 = 0.0331    J1 = 0.0364
               J2 = 0.0780    J2 = 0.0369    J2 = 0.0328
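The two compositions compared in Table VI differ only in the t-norm applied inside the outer maximum: min gives max-min composition, and the product gives max-product composition. A small sketch for a single-antecedent relation (hypothetical names):

```python
def compose_vec(x, R, t_norm):
    """Single-antecedent relational composition:
    y(s) = max over s' of t_norm(x(s'), R(s', s))."""
    return [max(t_norm(x[sp], R[sp][s]) for sp in range(len(x)))
            for s in range(len(R[0]))]

def min_norm(a, b):
    """t-norm for max-min composition."""
    return min(a, b)

def prod_norm(a, b):
    """t-norm for max-product composition."""
    return a * b
```

Since a·b ≤ min(a, b) on [0, 1], max-product never yields larger output grades than max-min for the same x and R, which is one reason the two methods rank differently across examples.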
[Fig. 6. Self-learning reduces J (II).]

[Fig. 4. Self-learning reduces J (II).]

[Fig. 7. Self-learning reduces J.]
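The self-learning curves summarized in Figs. 4-7 come from algorithm A2, whose details are not reproduced in this excerpt. As a loose illustration only, a gradient-like rule modification with step length h might look as follows; this is an assumed update rule, not the paper's A2.

```python
def compose(x, R):
    """Max-min composition of a possibility vector x with relation R."""
    return [max(min(x[sp], R[sp][s]) for sp in range(len(x)))
            for s in range(len(R[0]))]

def self_learn_step(R, x, target, h):
    """One hypothetical rule-modification step: each entry of R is nudged
    toward reducing the output error, scaled by the step length h and by
    how strongly x activates that row; grades are clipped to [0, 1].
    Assumed update rule, not the paper's algorithm A2."""
    yhat = compose(x, R)
    return [[min(1.0, max(0.0, v + h * x[sp] * (target[s] - yhat[s])))
             for s, v in enumerate(row)]
            for sp, row in enumerate(R)]
```

Even in this toy form, h behaves as the text describes: too small and the error shrinks slowly, too large and the clipped updates oscillate instead of converging.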
[Fig. 5. Self-learning reduces J (II).]

In this example, we examine not only the effect of h on J1/J2 but also the effects of the different composition and defuzzification methods on J. The results are shown in Figs. 4-6. The effectiveness of the self-learning algorithm is again proven. It is interesting to note that in Figs. 4 and 5 the least possible values J can reach are the same (J1 = 0.0230) for the different composition and defuzzification methods used. Fig. 6 again shows the poor performance of de F1.

Fig. 7 shows the effect of self-learning on J2. Due to the noise in the data, J2 is usually greater than J1 both before and after self-learning.

V. CONCLUSION

In this correspondence a linguistic identification method (algorithm A1) has been proposed. The concepts of linguistic variables and conditional possibility are used for constructing a fuzzy model. Two numerical examples have shown that the proposed identification method can produce fuzzy models with fairly high accuracy, although the models are inherently imprecise.

To improve the fuzzy model accuracy, a self-learning algorithm (A2) associated with algorithm A1 has further been developed in this correspondence. In fact, algorithm A2 is a linguistic rule-modification algorithm. Numerical examples show that the self-learning algorithm A2 can considerably improve the fuzzy model accuracy.

The methods proposed in this correspondence might also be used for deriving human control strategies. They can also be used in fuzzy self-organizing control algorithms and other decision-making processes.

REFERENCES

[1] P. Eykhoff, Ed., Trends and Progress in System Identification. Oxford, England: Pergamon, 1981.
[2] L. A. Zadeh, "Outline of a new approach to the analysis of complex systems and decision processes," IEEE Trans. Syst., Man, Cybern., vol. SMC-3, pp. 28-44, 1973.