Outline
General Definition
Applications
Operations
Rules
Fuzzy Logic Toolbox
FIS Editor
Tipping Problem: Fuzzy Approach
Defining Inputs & Outputs
Defining MFs
Defining Fuzzy Rules
General Definition
Fuzzy logic was introduced in 1965 by Lotfi Zadeh at UC Berkeley.
It is a superset of conventional (Boolean) logic that has been extended to handle the concept of partial truth. The central notion of fuzzy systems is that truth values (in fuzzy logic) or membership values (in fuzzy sets) are indicated by a value in the range [0.0, 1.0], with 0.0 representing absolute falseness and 1.0 representing absolute truth. Fuzzy logic deals with real-world vagueness.
Applications
Expert systems
Control units (e.g., the bullet train between Tokyo and Osaka)
Video cameras
Automatic transmissions
Operations
[Figure: fuzzy set operations on sets A and B — intersection A ∩ B and union A ∪ B.]
Controller Structure
Fuzzification: scales and maps input variables to fuzzy sets.
Inference mechanism: approximate reasoning; deduces the control action.
Defuzzification: converts fuzzy output values to control signals.
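As a concrete illustration of the fuzzification step, the sketch below maps a crisp motor speed onto triangular membership functions. The speed ranges, set-point value, and set names are hypothetical, chosen only for this example.

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets for motor speed (rpm), set point at 2500 rpm.
def fuzzify(speed):
    return {
        "too_slow":   trimf(speed, 1000, 2000, 2500),
        "just_right": trimf(speed, 2000, 2500, 3000),
        "too_fast":   trimf(speed, 2500, 3000, 4000),
    }

print(fuzzify(2250))  # partial membership in too_slow and just_right
```

A speed of 2250 rpm belongs to "too slow" and "just right" with degree 0.5 each — the partial truth that distinguishes fuzzy sets from crisp ones.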
The MATLAB Fuzzy Logic Toolbox facilitates the development of fuzzy-logic systems using:
graphical user interface (GUI) tools
command-line functionality
fuzzy expert systems
adaptive neuro-fuzzy inference systems (ANFIS)
There are five primary GUI tools for building, editing, and observing fuzzy inference systems in the Fuzzy Logic Toolbox:
Fuzzy Inference System (FIS) Editor
Membership Function Editor
Rule Editor
Rule Viewer
Surface Viewer
The goal is to control the speed of a motor by changing the input voltage. When a set point is defined, if for some reason the motor runs faster, we need to slow it down by reducing the input voltage. If the motor slows below the set point, the input voltage must be increased so that the motor speed reaches the set point.
Input/Output
Let the input status words be: too slow, just right, too fast. Let the output action words be: more voltage, no change, less voltage.
Membership Functions
Rules
If the motor is running too slow, then more voltage. If the motor speed is about right, then no change. If the motor speed is too fast, then less voltage.
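The three rules above can be sketched as a minimal Mamdani-style controller: fuzzify the speed, fire each rule with min, aggregate with max, and defuzzify by centroid over a discretized voltage-change universe. All membership shapes and voltage ranges here are assumptions made up for illustration, not values from the slides.

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def controller(speed):
    # Fuzzify (hypothetical speed sets around a 2500 rpm set point).
    too_slow   = trimf(speed, 1000, 2000, 2500)
    just_right = trimf(speed, 2000, 2500, 3000)
    too_fast   = trimf(speed, 2500, 3000, 4000)

    # Hypothetical output sets over voltage change (volts).
    rules = [(too_slow,   lambda v: trimf(v,  0.0,  1.0, 2.0)),  # more voltage
             (just_right, lambda v: trimf(v, -1.0,  0.0, 1.0)),  # no change
             (too_fast,   lambda v: trimf(v, -2.0, -1.0, 0.0))]  # less voltage

    # Mamdani inference: clip each output set by its rule's firing strength
    # (min), aggregate with max, then take the centroid of the result.
    vs = [i / 100.0 for i in range(-200, 201)]
    mu = [max(min(w, out(v)) for w, out in rules) for v in vs]
    den = sum(mu)
    return sum(v * m for v, m in zip(vs, mu)) / den if den else 0.0

print(controller(2250))  # positive -> increase voltage
print(controller(3250))  # negative -> decrease voltage
```

At the set point (2500 rpm) only "just right" fires and the centroid lands at zero voltage change, which is exactly the behavior the rules describe.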
Rule Base
Rule Viewer
Surface Viewer
Save the file as one.fis. Now type in the command window to get the result:
>> fis = readfis('one');
>> out = evalfis(2437.4, fis)
out = 2.376
Takagi-Sugeno-Kang (TSK) is a method of fuzzy inference similar to the Mamdani method in many respects: fuzzifying the inputs and applying the fuzzy operator are exactly the same. The main difference between Mamdani and Sugeno is that the Sugeno output membership functions are either linear or constant.
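A minimal sketch of the Sugeno difference: each rule's consequent is a constant (zeroth-order Sugeno) rather than a fuzzy set, and the crisp output is just the firing-strength-weighted average — no output membership functions to defuzzify. The firing strengths and consequent values below are made up for illustration.

```python
def sugeno_output(firing_strengths, outputs):
    """Weighted average of constant (zeroth-order Sugeno) rule outputs."""
    num = sum(w * z for w, z in zip(firing_strengths, outputs))
    den = sum(firing_strengths)
    return num / den

# Hypothetical firing strengths for "too slow", "just right", "too fast"
# and constant voltage-change consequents for each rule.
w = [0.5, 0.5, 0.0]
z = [1.0, 0.0, -1.0]   # more voltage, no change, less voltage
print(sugeno_output(w, z))  # 0.5
```

Because the output is an explicit algebraic expression of the inputs, this form is what makes Sugeno systems compact and easy to combine with optimization and adaptive techniques.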
Add Input MF
Define Input MF
Add output MF
Define output MF
Add rules
View rules
Rules viewer
Surface viewer
Advantages of the Sugeno method:
It is a more compact and computationally efficient representation than a Mamdani system.
It works well with linear techniques (e.g., PID control).
It works well with optimization and adaptive techniques.
It has guaranteed continuity of the output surface.
It is well suited to mathematical analysis.
Overview
Discussion
The fundamental principle of classification using the SVM is to separate the two categories of patterns: map the data x into a higher-dimensional feature space via a nonlinear mapping. Linear classification (regression) in the high-dimensional space is equivalent to nonlinear classification (regression) in the low-dimensional space.
Linear Classifiers
f(x, w, b) = sign(w · x + b)
[Figure: a linear decision boundary; points with w · x + b > 0 are classified +1, points with w · x + b < 0 are classified −1.]
[Figure sequence: several different separating lines are possible for the same data; a poorly chosen one misclassifies a point to the +1 class.]
Classifier Margin
Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint.
[Figure: the decision boundary f(x, w, b) = sign(w · x + b) with its margin band.]
Maximum Margin
The maximum margin linear classifier is the linear classifier with the maximum margin. This is the simplest kind of SVM (called an LSVM).
1. Maximizing the margin is good according to intuition and PAC theory.
2. It implies that only support vectors are important; other training examples are ignorable.
3. Empirically it works very, very well.
Support vectors are those datapoints that the margin pushes up against.
Linear SVM
Let x+ and x− be points on the plus and minus margin planes, and let M be the margin width. What we know:
w · x+ + b = +1
w · x− + b = −1
Subtracting: w · (x+ − x−) = 2
Therefore M = (x+ − x−) · w / ‖w‖ = 2 / ‖w‖
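To make the margin-width formula M = 2/‖w‖ concrete, here is a tiny numeric check with a hypothetical weight vector w = (3, 4), where ‖w‖ = 5 and so M = 0.4.

```python
import math

def margin_width(w):
    """Margin M = 2 / ||w|| for a hyperplane whose margin planes satisfy
    w . x + b = +1 and w . x + b = -1."""
    return 2.0 / math.sqrt(sum(wi * wi for wi in w))

print(margin_width([3.0, 4.0]))  # 0.4
```

Note the inverse relationship: minimizing ‖w‖ (equivalently (1/2) wᵀw) is exactly what maximizes the margin.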
Goal:
1) Correctly classify all training data:
w · xi + b ≥ +1 if yi = +1
w · xi + b ≤ −1 if yi = −1
equivalently, yi(w · xi + b) ≥ 1 for all i
2) Maximize the margin M = 2/‖w‖, which is the same as minimizing (1/2) wᵀw.
That is: minimize Φ(w) = (1/2) wᵀw subject to yi(w · xi + b) ≥ 1 for all i.
We need to optimize a quadratic function subject to linear constraints: for all {(xi, yi)}, yi(wᵀxi + b) ≥ 1. Quadratic optimization problems are a well-known class of mathematical programming problems, and many (rather intricate) algorithms exist for solving them. The solution involves constructing a dual problem in which a Lagrange multiplier αi is associated with every constraint in the primal problem:
Find α1 … αN such that Q(α) = Σi αi − (1/2) Σi Σj αiαjyiyj xiᵀxj is maximized and
(1) Σi αiyi = 0
(2) αi ≥ 0 for all i
The solution has the form:
w = Σi αiyixi
b = yk − wᵀxk for any xk such that αk ≠ 0
Each non-zero αi indicates that the corresponding xi is a support vector. The classifying function then has the form:
f(x) = Σi αiyi xiᵀx + b
Notice that it relies on an inner product between the test point x and the support vectors xi; we will return to this later. Also keep in mind that solving the optimization problem involved computing the inner products xiᵀxj between all pairs of training points.
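A tiny worked instance of this solution form, using a hand-constructed 1-D problem (not from the slides): two points x = −1 with y = −1 and x = +1 with y = +1. The dual constraints Σαiyi = 0 and w = Σαiyixi = 1 give α1 = α2 = 0.5, so b = 0 and f(x) reduces to x.

```python
def classify(x, alphas, ys, xs, b):
    """f(x) = sum_i alpha_i * y_i * (x_i . x) + b, here in one dimension."""
    return sum(a * y * (xi * x) for a, y, xi in zip(alphas, ys, xs)) + b

# Support vectors x = -1 (y = -1) and x = +1 (y = +1); alphas solve the dual.
alphas, ys, xs = [0.5, 0.5], [-1.0, 1.0], [-1.0, 1.0]

# b = y_k - w^T x_k for any support vector x_k with alpha_k != 0.
b = ys[1] - sum(a * y * xi * xs[1] for a, y, xi in zip(alphas, ys, xs))

print(classify(2.0, alphas, ys, xs, b))   # 2.0, so sign gives +1
print(classify(-0.5, alphas, ys, xs, b))  # -0.5, so sign gives -1
```

Note that both the computation of b and the classifier itself touch the training data only through inner products — the property the kernel trick exploits below.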
OVERFITTING! With noisy data, insisting on a hard margin that classifies every training point correctly can overfit; the soft-margin formulation relaxes the constraints and bounds each αi by C.
Linear SVMs: Overview
The classifier is a separating hyperplane. The most important training points are the support vectors; they define the hyperplane. Quadratic optimization algorithms can identify which training points xi are support vectors, i.e., those with non-zero Lagrange multipliers αi. Both in the dual formulation of the problem and in the solution, training points appear only inside dot products:
Find α1 … αN such that Q(α) = Σi αi − (1/2) Σi Σj αiαjyiyj xiᵀxj is maximized and
(1) Σi αiyi = 0
(2) 0 ≤ αi ≤ C for all i
f(x) = Σi αiyi xiᵀx + b
Non-linear SVMs
Datasets that are linearly separable with some noise work out great:
[Figure: 1-D data separable by a threshold on x.]
But what are we going to do if the dataset is just too hard? How about mapping the data to a higher-dimensional space:
[Figure: the same data mapped to (x, x²), where it becomes linearly separable.]
General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable:
Φ: x → φ(x)
The linear classifier relies on a dot product between vectors: K(xi, xj) = xiᵀxj. If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the dot product becomes K(xi, xj) = φ(xi)ᵀφ(xj). A kernel function is a function that corresponds to an inner product in some expanded feature space.
Example: 2-dimensional vectors x = [x1 x2]; let K(xi, xj) = (1 + xiᵀxj)².
We need to show that K(xi, xj) = φ(xi)ᵀφ(xj):
K(xi, xj) = (1 + xiᵀxj)²
= 1 + xi1²xj1² + 2 xi1xj1 xi2xj2 + xi2²xj2² + 2xi1xj1 + 2xi2xj2
= [1, xi1², √2 xi1xi2, xi2², √2 xi1, √2 xi2]ᵀ [1, xj1², √2 xj1xj2, xj2², √2 xj1, √2 xj2]
= φ(xi)ᵀφ(xj), where φ(x) = [1, x1², √2 x1x2, x2², √2 x1, √2 x2]
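The identity (1 + xiᵀxj)² = φ(xi)ᵀφ(xj) can be checked numerically; the sketch below compares the two sides for arbitrary 2-D vectors.

```python
import math

def K(x, z):
    """Polynomial kernel (1 + x . z)^2 for 2-D vectors."""
    return (1.0 + x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    """Explicit feature map whose inner product reproduces K."""
    r2 = math.sqrt(2.0)
    return [1.0, x[0] ** 2, r2 * x[0] * x[1], x[1] ** 2, r2 * x[0], r2 * x[1]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, z = [0.3, -1.2], [2.0, 0.7]
print(K(x, z), dot(phi(x), phi(z)))  # the two values agree
```

The point of the trick is the left-hand side: the kernel evaluates a 6-dimensional inner product without ever constructing φ(x).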
For some functions K(xi, xj), checking that K(xi, xj) = φ(xi)ᵀφ(xj) can be cumbersome. Mercer's theorem: every positive semi-definite symmetric function is a kernel. Positive semi-definite symmetric functions correspond to a positive semi-definite symmetric Gram matrix:
K = the N × N Gram matrix with entries K(xi, xj) over all pairs of training points.
Gaussian (RBF) kernel:
K(xi, xj) = exp(−‖xi − xj‖² / (2σ²))
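A direct transcription of the Gaussian kernel above; σ is a free parameter, set to 1.0 here purely for illustration.

```python
import math

def rbf_kernel(x, z, sigma=1.0):
    """K(x, z) = exp(-||x - z||^2 / (2 * sigma^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

print(rbf_kernel([0.0, 0.0], [0.0, 0.0]))  # 1.0 for identical points
print(rbf_kernel([0.0], [2.0]))            # exp(-2)
```

Note the behavior that makes it a similarity function: the value is 1 for identical points and decays toward 0 as the points move apart, at a rate controlled by σ.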
Dual problem formulation: Find α1 … αN such that Q(α) = Σi αi − (1/2) Σi Σj αiαjyiyj K(xi, xj) is maximized and
(1) Σi αiyi = 0
(2) αi ≥ 0 for all i
The SVM locates a separating hyperplane in the feature space and classifies points in that space. It does not need to represent the space explicitly; it simply defines a kernel function, which plays the role of the dot product in the feature space.
Properties of SVM
Flexibility in choosing a similarity function.
Sparseness of the solution when dealing with large data sets: only support vectors are used to specify the separating hyperplane.
Ability to handle large feature spaces: complexity does not depend on the dimensionality of the feature space.
Overfitting can be controlled by the soft-margin approach.
Nice math property: a simple convex optimization problem that is guaranteed to converge to a single global solution.
Feature selection.
SVM Applications
Weakness of SVM
It is sensitive to noise: a relatively small number of mislabeled examples can dramatically decrease performance.
Some Issues
Choice of kernel
Choice of kernel parameters
Optimization criterion: hard margin vs. soft margin
Wind Power Forecasting (WPF)
ε-SVM
The objective function of the ε-SVM is based on an ε-insensitive loss function.
Structure of SVM
Data Resolution
The data value x̄j(ti) is the average of the raw signal xj(t) over one sampling interval Ts:
x̄j(ti) = (1/Ts) ∫ from ti to ti+Ts of xj(t) dt
Fixed-Step Prediction Scheme
Wind speed normalization
Autocorrelations of the wind speed samples
SVM model and the RBF model
CONCLUSIONS