CHAPTER 3
[Figure: block diagram of a fuzzy inference system. Crisp inputs pass through a fuzzifier to produce fuzzy input sets; an inference engine, driven by a rule base, maps these to fuzzy output sets; a defuzzifier then produces crisp outputs.]
Fuzzy inference is the process that maps a given input to an output using fuzzy logic. Any fuzzy inference system can be represented by four interconnected blocks: a fuzzifier, a rule base, an inference engine, and a defuzzifier.
hence it denotes the set-theoretic operation of union. The slash in these expressions associates the elements of U with their membership grades, where μ_F(x) > 0.
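This notation can be mirrored in code by storing only the elements of U whose membership grade is nonzero; the sets below (and the max interpretation of union) are a minimal sketch, with illustrative element values:

```python
# A discrete fuzzy set stores each element of U together with its
# membership grade mu_F(x); only elements with mu_F(x) > 0 are kept.
tall = {5.5: 0.2, 6.0: 0.8, 6.5: 1.0}
medium = {5.0: 1.0, 5.5: 0.7, 6.0: 0.2}

def fuzzy_union(f, g):
    # Union of two fuzzy sets takes the maximum grade element-wise,
    # matching the set-theoretic union the summation notation denotes.
    return {x: max(f.get(x, 0.0), g.get(x, 0.0)) for x in set(f) | set(g)}

print(fuzzy_union(tall, medium))
```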
Linguistic variables are variables whose values are not numbers but words or sentences in a natural or artificial language. In general, linguistic values are less specific than numerical ones. Let u denote the name of a linguistic variable; numerical values of the linguistic variable u are denoted x, where x ∈ U. Sometimes x and u are used interchangeably. A linguistic variable is usually decomposed into a set of terms, T(u), which covers its universe of discourse.
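A term set T(u) can be sketched as a mapping from term names to membership functions; the term names, triangular shapes, and breakpoints below are assumptions for illustration, not taken from the text:

```python
def triangular(a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)
    return mu

# T(Height): a term set covering the universe of discourse of "Height".
T_height = {
    "short":  triangular(3.0, 4.0, 5.0),
    "medium": triangular(4.5, 5.5, 6.5),
    "tall":   triangular(6.0, 7.0, 8.0),
}

x = 5.0
grades = {term: mu(x) for term, mu in T_height.items()}
print(grades)  # each term grades the same numerical value x
```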
Height Height
4 5 6 7 4 5 6 7
(a) (b)
IF x is A and y is B THEN z is C
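A rule of this form can be evaluated by combining the antecedent grades with min (one common interpretation of AND) to get the rule's firing strength; the triangular membership functions below are assumed purely for illustration:

```python
def fire_rule(mu_A, mu_B, x, y):
    # "x is A AND y is B": take the minimum of the two membership grades
    # as the firing strength of the rule.
    return min(mu_A(x), mu_B(y))

# Assumed triangular memberships for A and B (peaks at 2.0 and 3.0).
mu_A = lambda x: max(0.0, 1.0 - abs(x - 2.0))
mu_B = lambda y: max(0.0, 1.0 - abs(y - 3.0))

w = fire_rule(mu_A, mu_B, 2.5, 3.0)
print(w)  # min(0.5, 1.0) -> 0.5
```

The consequent "z is C" would then be clipped (or scaled) at this firing strength before aggregation and defuzzification.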
In a two-hidden-layer MLP, the inputs are fed into the input layer and multiplied by interconnection weights as they pass from the input layer to the first hidden layer. Within the first hidden layer they are summed and then processed by a nonlinear function (usually the hyperbolic tangent). As the processed data leaves the first hidden layer, it is again multiplied by interconnection weights, then summed and processed by the second hidden layer. Finally, the data is multiplied by interconnection weights and processed one last time within the output layer to produce the neural network output.
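The forward pass just described can be sketched in a few lines of NumPy; the layer sizes and random weights here are illustrative, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 3 inputs, two hidden layers of 4 units, 1 output.
W1 = rng.standard_normal((3, 4)); b1 = np.zeros(4)
W2 = rng.standard_normal((4, 4)); b2 = np.zeros(4)
W3 = rng.standard_normal((4, 1)); b3 = np.zeros(1)

def forward(x):
    # Inputs are multiplied by interconnection weights, summed, and
    # passed through the hyperbolic tangent in each hidden layer.
    h1 = np.tanh(x @ W1 + b1)   # first hidden layer
    h2 = np.tanh(h1 @ W2 + b2)  # second hidden layer
    return h2 @ W3 + b3         # linear output layer

y = forward(np.array([0.1, -0.2, 0.3]))
print(y.shape)  # (1,)
```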
The MLP and many other neural networks learn using an algorithm called back-propagation. With back-propagation, the input data is repeatedly presented to the neural network. With each presentation, the output of the neural network is compared to the desired output and an error is computed. This error is then fed back (back-propagated) through the neural network and used to adjust the weights so that the error decreases with each iteration and the neural model comes closer and closer to producing the desired output. This process is known as "training".
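The training loop described above can be sketched with a tiny one-hidden-layer network and plain gradient descent; the data, layer sizes, iteration count, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative regression problem: 32 samples, 2 inputs, 1 target.
X = rng.standard_normal((32, 2))
t = (X[:, :1] - X[:, 1:]) * 0.5          # desired output

W1 = rng.standard_normal((2, 8)) * 0.5; b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5; b2 = np.zeros(1)
lr = 0.1

errors = []
for _ in range(200):                      # repeated presentations
    h = np.tanh(X @ W1 + b1)              # forward pass
    y = h @ W2 + b2
    errors.append(float(np.mean((y - t) ** 2)))  # compare with desired output
    # Back-propagate the error and adjust the weights.
    dy = 2 * (y - t) / len(X)
    dW2 = h.T @ dy; db2 = dy.sum(0)
    dh = dy @ W2.T * (1 - h ** 2)         # derivative of tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(errors[0] > errors[-1])  # error should decrease with training
```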
O_i^1 = μ_{A_i}(x)    (3.2)

where x is the input to node i and A_i is the linguistic label associated with this node function.
O_i^3 = w̄_i = w_i / (w_1 + w_2),  where i = 1, 2    (3.4)
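Equation (3.4) amounts to dividing each rule's firing strength by the sum of all firing strengths, so the normalized strengths always add to one; a minimal sketch with assumed firing strengths:

```python
def normalize(w1, w2):
    # Layer-3 normalization of Eq. (3.4): each rule's firing strength
    # is divided by the total firing strength.
    total = w1 + w2
    return w1 / total, w2 / total

# Assumed firing strengths from layer 2 of the network.
wb1, wb2 = normalize(0.6, 0.2)
print(round(wb1, 2), round(wb2, 2))  # 0.75 0.25
```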