
Introduction to Fuzzy Logic Control

Outline

- General Definition
- Applications
- Operations
- Rules
- Fuzzy Logic Toolbox
- FIS Editor
- Tipping Problem: Fuzzy Approach
- Defining Inputs & Outputs
- Defining MFs
- Defining Fuzzy Rules

General Definition
Fuzzy Logic - introduced in 1965 by Lotfi Zadeh, Berkeley

- A superset of conventional (Boolean) logic that has been extended to handle the concept of partial truth.
- The central notion of fuzzy systems is that truth values (in fuzzy logic) or membership values (in fuzzy sets) are indicated by a value in the range [0.0, 1.0], with 0.0 representing absolute falseness and 1.0 representing absolute truth.
- Deals with real-world vagueness.

Applications

- Expert systems
- Control units (e.g., the bullet train between Tokyo and Osaka)
- Video cameras
- Automatic transmissions

Operations

[Figures: fuzzy set operations on A and B, e.g. the union A ∪ B and the intersection A ∩ B.]
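
These operations are commonly realized with max for OR and min for AND (the toolbox's Mamdani defaults); a minimal sketch on discretized membership grades (the grade vectors are illustrative, not from the slides):

% Minimal sketch (assumption: standard max/min fuzzy operators, the
% toolbox's Mamdani defaults; the membership grades are illustrative).
muA = [0.0 0.3 0.7 1.0];          % membership grades of fuzzy set A
muB = [0.2 0.5 0.4 0.8];          % membership grades of fuzzy set B
union_AB        = max(muA, muB);  % fuzzy OR  (A union B)
intersection_AB = min(muA, muB);  % fuzzy AND (A intersection B)
complement_A    = 1 - muA;        % fuzzy NOT (complement of A)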

Controller Structure

- Fuzzification: scales and maps input variables to fuzzy sets.
- Inference mechanism: performs approximate reasoning and deduces the control action.
- Defuzzification: converts fuzzy output values to control signals.
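
The defuzzification step can be made concrete; a minimal sketch of centroid (center-of-gravity) defuzzification over a discretized output universe (the universe and membership shape are illustrative, not from the slides):

% Minimal sketch: centroid defuzzification of an aggregated fuzzy output.
y  = linspace(0, 10, 101);        % discretized output universe (illustrative)
mu = trapmf(y, [2 4 6 8]);        % aggregated output membership (illustrative)
crisp = sum(y .* mu) / sum(mu)    % crisp control signal (center of gravity)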

MATLAB fuzzy logic toolbox

The MATLAB Fuzzy Logic Toolbox facilitates the development of fuzzy-logic systems using:

- graphical user interface (GUI) tools
- command line functionality

The tool can be used for building:

- Fuzzy Expert Systems
- Adaptive Neuro-Fuzzy Inference Systems (ANFIS)


Graphical User Interface (GUI) Tools

There are five primary GUI tools for building, editing, and observing fuzzy inference systems in the Fuzzy Logic Toolbox:

- Fuzzy Inference System (FIS) Editor
- Membership Function Editor
- Rule Editor
- Rule Viewer
- Surface Viewer

MATLAB: Fuzzy Logic Toolbox

Fuzzy Inference System

There are two types of inference system:

- Mamdani inference method
- Sugeno inference method

Mamdani's fuzzy inference method is the most common methodology.


FIS Editor: Mamdani's inference system


Fuzzy Logic Examples Using MATLAB

Goal: to control the speed of a motor by changing the input voltage. When a set point is defined, if for some reason the motor runs faster than the set point, we need to slow it down by reducing the input voltage. If the motor slows below the set point, the input voltage must be increased so that the motor speed reaches the set point.


Input/Output

Let the input status words be:

- Too slow
- Just right
- Too fast

Let the output action words be:

- Less voltage (slow down)
- No change
- More voltage (speed up)
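
The same variables can also be created from the command line; a minimal sketch using the toolbox's classic command-line API (the variable names and ranges are illustrative assumptions, not taken from the slides):

% Minimal sketch (assumption: classic Fuzzy Logic Toolbox command-line API;
% the variable ranges are illustrative).
fis = newfis('motor');                             % new Mamdani FIS
fis = addvar(fis, 'input',  'speed',   [0 5000]);  % motor speed (rpm)
fis = addvar(fis, 'output', 'voltage', [0 10]);    % control voltage (V)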


FIS Editor: Adding Input / Output


Membership Function Editor


Input Membership Function


Output Membership Function


Membership Functions


Rules

Define the rule base (a command-line sketch follows below):

1) If the motor is running too slow, then more voltage.
2) If the motor speed is about right, then no change.
3) If the motor speed is too fast, then less voltage.
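
Continuing the command-line sketch above, the status and action words become membership functions and the three rules become an addrule list (the MF shapes, parameters, and index order are illustrative assumptions):

% Minimal sketch (assumption: triangular MFs with illustrative parameters;
% MF indices 1/2/3 follow the order in which the MFs are added).
fis = addmf(fis, 'input',  1, 'too slow',     'trimf', [0 0 2500]);
fis = addmf(fis, 'input',  1, 'just right',   'trimf', [1500 2500 3500]);
fis = addmf(fis, 'input',  1, 'too fast',     'trimf', [2500 5000 5000]);
fis = addmf(fis, 'output', 1, 'less voltage', 'trimf', [0 0 5]);
fis = addmf(fis, 'output', 1, 'no change',    'trimf', [2.5 5 7.5]);
fis = addmf(fis, 'output', 1, 'more voltage', 'trimf', [5 10 10]);
% Each rule row: [input-MF index, output-MF index, weight, connective (1 = AND)]
ruleList = [1 3 1 1;   % too slow   -> more voltage
            2 2 1 1;   % just right -> no change
            3 1 1 1];  % too fast   -> less voltage
fis = addrule(fis, ruleList);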


Rule Editor: Adding Rules


Rule Base


Rule Viewer


Surface Viewer


Save the file as one.fis. Now type in the command window to get the result:

>> fis = readfis('one');
>> out = evalfis(2437.4, fis)

out = 2.376


Sugeno-Type Fuzzy Inference


The Takagi-Sugeno-Kang method of fuzzy inference is similar to the Mamdani method in many respects: fuzzifying the inputs and applying the fuzzy operator are exactly the same. The main difference between Mamdani and Sugeno is that Sugeno output membership functions are either linear or constant.
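
For concreteness, a first-order Sugeno rule and the resulting weighted-average output can be written as follows (a standard textbook formulation, not reproduced from the slides):

\text{If } x \text{ is } A_i \text{ and } y \text{ is } B_i \text{ then } z_i = p_i x + q_i y + r_i, \qquad z = \frac{\sum_i w_i z_i}{\sum_i w_i}

where w_i is the firing strength of rule i; a zero-order Sugeno model is the special case z_i = r_i (constant).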


FIS Editor: Sugeno inference system


Add Input/output variables


Define Input/output variables


Add Input MF


Define Input MF


Add output MF


Define output MF


Add rules


Define Rule Base


View rules


Rules viewer


Surface viewer


Advantages of the Sugeno Method

- It is a more compact and computationally efficient representation than a Mamdani system.
- It works well with linear techniques (e.g., PID control).
- It works well with optimization and adaptive techniques.
- It has guaranteed continuity of the output surface.
- It is well suited to mathematical analysis.

Advantages of the Mamdani Method


- It is intuitive.
- It has widespread acceptance.
- It is well suited to human input.


Support Vector Machine & Its Applications

Overview

- Introduction to Support Vector Machines (SVM)
- Properties of SVM
- Applications
  - Gene Expression Data Classification
  - Text Categorization (if time permits)
- Discussion

Support Vector Machine (SVM)

The fundamental principle of classification with the SVM is to separate two categories of patterns. The data x are mapped into a higher-dimensional feature space via a nonlinear mapping; linear classification (regression) in the high-dimensional space is then equivalent to nonlinear classification (regression) in the low-dimensional space.

Linear Classifiers

f(x, w, b) = sign(w \cdot x + b)

The classifier labels a point +1 where w \cdot x + b > 0 and -1 where w \cdot x + b < 0.

[Figure sequence: the same two-class dataset (points denoted +1 and -1) shown with several different candidate separating lines; a poor choice leaves points misclassified to the +1 class.]

How would you classify this data? Any of these would be fine... but which is best?

Classifier Margin

Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint.

Maximum Margin

The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM). Support vectors are those datapoints that the margin pushes up against.

Maximizing the margin is good because:
1. It accords with intuition and PAC theory.
2. It implies that only the support vectors are important; the other training examples are ignorable.
3. Empirically it works very, very well.

Linear SVM

Linear SVM Mathematically


Let x^+ be a point on the plus-plane, x^- a point on the minus-plane, and M the margin width.

What we know:

- w \cdot x^+ + b = +1
- w \cdot x^- + b = -1
- hence w \cdot (x^+ - x^-) = 2

Therefore

M = (x^+ - x^-) \cdot \frac{w}{\|w\|} = \frac{2}{\|w\|}

Linear SVM Mathematically

Goal:

1) Correctly classify all training data:
   w \cdot x_i + b \ge +1 if y_i = +1
   w \cdot x_i + b \le -1 if y_i = -1
   i.e. y_i (w \cdot x_i + b) \ge 1 for all i

2) Maximize the margin M = 2 / \|w\|, which is the same as minimizing \tfrac{1}{2} w^T w.

We can formulate a quadratic optimization problem and solve for w and b:

Minimize \Phi(w) = \tfrac{1}{2} w^T w subject to y_i (w \cdot x_i + b) \ge 1 for all i.

Solving the Optimization Problem


Find w and b such that \Phi(w) = \tfrac{1}{2} w^T w is minimized and, for all {(x_i, y_i)}, y_i (w^T x_i + b) \ge 1.

We need to optimize a quadratic function subject to linear constraints. Quadratic optimization problems are a well-known class of mathematical programming problems, and many (rather intricate) algorithms exist for solving them. The solution involves constructing a dual problem in which a Lagrange multiplier \alpha_i is associated with every constraint in the primal problem:

Find \alpha_1 \ldots \alpha_N such that Q(\alpha) = \sum_i \alpha_i - \tfrac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y_i y_j x_i^T x_j is maximized and

(1) \sum_i \alpha_i y_i = 0
(2) \alpha_i \ge 0 for all i

The Optimization Problem Solution

The solution has the form:

w = \sum_i \alpha_i y_i x_i
b = y_k - w^T x_k for any x_k such that \alpha_k \ne 0

Each non-zero \alpha_i indicates that the corresponding x_i is a support vector. The classifying function then has the form:

f(x) = \sum_i \alpha_i y_i x_i^T x + b

Notice that it relies on an inner product between the test point x and the support vectors x_i; we will return to this later. Also keep in mind that solving the optimization problem involved computing the inner products x_i^T x_j between all pairs of training points.
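
The dual can be solved numerically with a generic quadratic programming solver; a minimal sketch using MATLAB's quadprog from the Optimization Toolbox (the toy data set is an illustrative assumption):

% Minimal sketch: solve the dual QP for a tiny separable toy set.
X = [2 2; 2 0; -2 -2; -2 0];   y = [1; 1; -1; -1];
N = size(X, 1);
H = (y*y') .* (X*X');           % H_ij = y_i y_j x_i' x_j
f = -ones(N, 1);                % maximizing sum(a) - 0.5*a'*H*a is the same
                                % as minimizing 0.5*a'*H*a - sum(a)
alpha = quadprog(H, f, [], [], y', 0, zeros(N,1), []);
w  = X' * (alpha .* y);         % w = sum_i alpha_i y_i x_i
sv = find(alpha > 1e-6);        % support vectors: non-zero multipliers
b  = y(sv(1)) - X(sv(1),:)*w;   % b = y_k - w'x_k for any support vector k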

Dataset with noise


Hard margin: so far we require all data points to be classified correctly; no training error is allowed. What if the training set is noisy? Solution 1: use very powerful kernels, but this invites OVERFITTING!

Soft Margin Classification


Slack variables \xi_i can be added to allow misclassification of difficult or noisy examples.

What should our quadratic optimization criterion be? Minimize

\tfrac{1}{2} w \cdot w + C \sum_{k=1}^{R} \xi_k

Hard Margin vs. Soft Margin

The old formulation:

Find w and b such that \Phi(w) = \tfrac{1}{2} w^T w is minimized and for all {(x_i, y_i)}: y_i (w^T x_i + b) \ge 1.

The new formulation, incorporating slack variables:

Find w and b such that \Phi(w) = \tfrac{1}{2} w^T w + C \sum_i \xi_i is minimized and for all {(x_i, y_i)}: y_i (w^T x_i + b) \ge 1 - \xi_i and \xi_i \ge 0 for all i.

Parameter C can be viewed as a way to control overfitting.
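
The role of C can be explored directly; a minimal sketch using fitcsvm from MATLAB's Statistics and Machine Learning Toolbox (not part of the original slides; the noisy toy data is illustrative):

% Minimal sketch: soft-margin linear SVM; BoxConstraint plays the role of C.
rng(1);
X = [randn(20,2) + 1; randn(20,2) - 1];    % two overlapping (noisy) classes
Y = [ones(20,1); -ones(20,1)];
mdl = fitcsvm(X, Y, 'KernelFunction', 'linear', 'BoxConstraint', 1);
nnz(mdl.IsSupportVector)                   % smaller C -> wider margin, more SVs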

Linear SVMs: Overview

The classifier is a separating hyperplane. The most important training points are the support vectors; they define the hyperplane. Quadratic optimization algorithms can identify which training points x_i are support vectors, i.e. those with non-zero Lagrange multipliers \alpha_i. In both the dual formulation of the problem and in the solution, training points appear only inside dot products:

Find \alpha_1 \ldots \alpha_N such that Q(\alpha) = \sum_i \alpha_i - \tfrac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y_i y_j x_i^T x_j is maximized and

(1) \sum_i \alpha_i y_i = 0
(2) 0 \le \alpha_i \le C for all i

f(x) = \sum_i \alpha_i y_i x_i^T x + b

Non-linear SVMs

Datasets that are linearly separable with some noise work out great. But what are we going to do if the dataset is just too hard? How about mapping the data to a higher-dimensional space?

[Figure: 1-D data along the x axis that is not linearly separable becomes separable after mapping x to (x, x^2).]

Non-linear SVMs: Feature spaces

General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable:

\Phi: x \to \varphi(x)

The Kernel Trick


The linear classifier relies on the dot product between vectors: K(x_i, x_j) = x_i^T x_j. If every data point is mapped into a high-dimensional space via some transformation \Phi: x \to \varphi(x), the dot product becomes K(x_i, x_j) = \varphi(x_i)^T \varphi(x_j). A kernel function is a function that corresponds to an inner product in some expanded feature space.

Example: 2-dimensional vectors x = [x_1\; x_2]; let K(x_i, x_j) = (1 + x_i^T x_j)^2. We need to show that K(x_i, x_j) = \varphi(x_i)^T \varphi(x_j):

K(x_i, x_j) = (1 + x_i^T x_j)^2
= 1 + x_{i1}^2 x_{j1}^2 + 2 x_{i1} x_{j1} x_{i2} x_{j2} + x_{i2}^2 x_{j2}^2 + 2 x_{i1} x_{j1} + 2 x_{i2} x_{j2}
= [1\;\; x_{i1}^2\;\; \sqrt{2} x_{i1} x_{i2}\;\; x_{i2}^2\;\; \sqrt{2} x_{i1}\;\; \sqrt{2} x_{i2}]^T [1\;\; x_{j1}^2\;\; \sqrt{2} x_{j1} x_{j2}\;\; x_{j2}^2\;\; \sqrt{2} x_{j1}\;\; \sqrt{2} x_{j2}]
= \varphi(x_i)^T \varphi(x_j), where \varphi(x) = [1\;\; x_1^2\;\; \sqrt{2} x_1 x_2\;\; x_2^2\;\; \sqrt{2} x_1\;\; \sqrt{2} x_2]
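
The algebra above can be checked numerically; a minimal sketch comparing the kernel value with the explicit feature-space inner product (the sample vectors are illustrative):

% Minimal sketch: verify K(xi,xj) = phi(xi)' * phi(xj) for the example above.
xi = [0.7; -1.2];   xj = [0.3; 0.9];
K   = (1 + xi.'*xj)^2;                       % kernel value
phi = @(x) [1; x(1)^2; sqrt(2)*x(1)*x(2); x(2)^2; sqrt(2)*x(1); sqrt(2)*x(2)];
Kphi = phi(xi).' * phi(xj);                  % inner product in feature space
abs(K - Kphi)                                % ~0 up to rounding error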

What Functions are Kernels?

For some functions K(x_i, x_j), checking that K(x_i, x_j) = \varphi(x_i)^T \varphi(x_j) can be cumbersome. Mercer's theorem: every positive semi-definite symmetric function is a kernel. Positive semi-definite symmetric functions correspond to a positive semi-definite symmetric Gram matrix:

K = \begin{bmatrix}
K(x_1, x_1) & K(x_1, x_2) & K(x_1, x_3) & \cdots & K(x_1, x_N) \\
K(x_2, x_1) & K(x_2, x_2) & K(x_2, x_3) & \cdots & K(x_2, x_N) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
K(x_N, x_1) & K(x_N, x_2) & K(x_N, x_3) & \cdots & K(x_N, x_N)
\end{bmatrix}

Examples of Kernel Functions

- Linear: K(x_i, x_j) = x_i^T x_j
- Polynomial of power p: K(x_i, x_j) = (1 + x_i^T x_j)^p
- Gaussian (radial-basis function network): K(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right)
- Sigmoid: K(x_i, x_j) = \tanh(\beta_0 x_i^T x_j + \beta_1)
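
These kernels are easy to express as anonymous functions; a minimal sketch (the parameters p, sigma, b0, b1 are free choices, shown only for illustration):

% Minimal sketch: the four kernels above for column vectors xi, xj.
linK  = @(xi, xj) xi.' * xj;                                 % linear
polyK = @(xi, xj, p) (1 + xi.' * xj)^p;                      % polynomial, power p
rbfK  = @(xi, xj, sigma) exp(-norm(xi - xj)^2/(2*sigma^2));  % Gaussian (RBF)
sigK  = @(xi, xj, b0, b1) tanh(b0*(xi.' * xj) + b1);         % sigmoid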

Non-linear SVMs Mathematically

Dual problem formulation:

Find \alpha_1 \ldots \alpha_N such that Q(\alpha) = \sum_i \alpha_i - \tfrac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y_i y_j K(x_i, x_j) is maximized and

(1) \sum_i \alpha_i y_i = 0
(2) \alpha_i \ge 0 for all i

The solution is: f(x) = \sum_i \alpha_i y_i K(x_i, x) + b

Optimization techniques for finding the \alpha_i remain the same!

Nonlinear SVM - Overview

The SVM locates a separating hyperplane in the feature space and classifies points in that space. It does not need to represent the space explicitly; it simply defines a kernel function. The kernel function plays the role of the dot product in the feature space.

Properties of SVM
- Flexibility in choosing a similarity function
- Sparseness of the solution when dealing with large data sets: only support vectors are used to specify the separating hyperplane
- Ability to handle large feature spaces: complexity does not depend on the dimensionality of the feature space
- Overfitting can be controlled by the soft margin approach
- Nice math property: a simple convex optimization problem which is guaranteed to converge to a single global solution
- Feature selection

SVM Applications

SVM has been used successfully in many real-world problems


- text (and hypertext) categorization
- image classification
- bioinformatics (protein classification, cancer classification)
- hand-written character recognition

Weakness of SVM

It is sensitive to noise: a relatively small number of mislabeled examples can dramatically decrease performance.

It only considers two classes. How do we do multi-class classification with SVM? Answer (see the sketch below):

1) With output arity m, learn m SVMs:
   - SVM 1 learns Output == 1 vs Output != 1
   - SVM 2 learns Output == 2 vs Output != 2
   - ...
   - SVM m learns Output == m vs Output != m
2) To predict the output for a new input, predict with each SVM and find out which one puts the prediction the furthest into the positive region.
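
The one-vs-rest scheme translates directly into code; a minimal sketch using fitcsvm (an assumption, since the slides describe the scheme only in prose; the toy data is illustrative):

% Minimal sketch: one-vs-rest multi-class classification with m binary SVMs.
rng(1);
X = [randn(20,2); randn(20,2) + 4; randn(20,2) - 4];
Y = [ones(20,1); 2*ones(20,1); 3*ones(20,1)];
m = 3;  models = cell(m, 1);
for c = 1:m
    models{c} = fitcsvm(X, 2*double(Y == c) - 1);  % class c vs the rest
end
xq = [3.5 3.8];  scores = zeros(m, 1);
for c = 1:m
    [~, s] = predict(models{c}, xq);   % s(:,2) = score for the positive class
    scores(c) = s(2);
end
[~, yhat] = max(scores)                % furthest into the positive region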

Some Issues

Choice of kernel:
- a Gaussian or polynomial kernel is the default
- if these are ineffective, more elaborate kernels are needed
- domain experts can give assistance in formulating appropriate similarity measures

Choice of kernel parameters:
- e.g., \sigma in the Gaussian kernel: roughly the distance between the closest points with different classifications
- typically settled by a lengthy series of experiments in which various parameter values are tested

Optimization criterion:
- hard margin vs. soft margin

Wind Power Forecasting (WPF)

WPF is a technique which provides information on how much wind power can be expected at a given point of time. It matters because of the increasing penetration of wind power into the electric power grid: a good short-term forecast ensures grid stability and a favorable trading performance on the electricity markets.

ε-SVM

The objective function of the ε-SVM is based on an ε-insensitive loss function. The formula for the ε-SVM is given in the figure below.

[Figure: structure of the ε-SVM.]

Data Resolution

The resolution of the data set is 10 minutes. Each data point represents the average wind speed and power within one hour. The data values between two adjacent samples x_i and x_{i+1} are linearly interpolated, that is:

\hat{x}_j(t) = x_i + \frac{x_{i+1} - x_i}{dt_i}\, t, \qquad 0 \le t \le dt_i

where dt_i is the time interval between x_i and x_{i+1}.
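
The interpolation can be sketched in a few lines (the sample values are illustrative assumptions):

% Minimal sketch of the linear interpolation above.
xi  = 5.0;  xip1 = 6.2;            % two adjacent wind speed samples
dti = 10;                          % time interval between them (minutes)
t   = 0:dti;                       % minutes within the interval
xj  = xi + (xip1 - xi)/dti .* t;   % interpolated trajectory x_j(t)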

Data Value

The average value of the data within T_s can be calculated as

\bar{x}_j = \frac{1}{T_s} \int_{t_i}^{t_i + T_s} \hat{x}_j(t)\, dt

where T_s = 60 minutes is used in very-short-term forecasting (less than 6 hours) and T_s = 2 hours is used for short-term forecasting.

Fixed-Step Prediction Scheme

With a prediction horizon of h steps, fixed-step forecasting means only the value of the h-th next sample is predicted using the historical data:

\hat{y}(t + h) = f(y_t, y_{t-1}, \ldots, y_{t-d})

where f is a nonlinear function generated by the SVM.

y_{t+h} is predicted with the data before y_t (the red blocks); y_{t+h-1} is predicted with the data before y_{t-1} (the green blocks).

Wind speed normalization

Autocorrelations of the wind speed samples

SVM model and the RBF model

1-hour-ahead wind power prediction using the SVM model.

CONCLUSIONS

The SVM has been successfully applied to problems of pattern classification, particularly the classification of two different categories of patterns. The SVM model is more suitable for very-short-term and short-term WPF, and it provides a powerful tool for enhancing WPF accuracy.
