ANN Simulink Examples (For ECE662)
In class, we discussed Artificial Feed-forward Neural Network theory in detail. I found a website that may help other students get a jump start on using MATLAB and Simulink to develop either supervised or unsupervised classifiers for their research. See: this Lab Exercise. DISCLAIMER: I have not used these techniques for an actual project, but the NN Toolbox looks very promising. I just think it's neat. --mreeder

Supervised Learning
The downside to any ANN is a long training time; in this case, 80 "epochs" (passes through the training set) by default. For simpler nonlinear systems, this might not be an issue. The following MATLAB code uses the Neural Network Toolbox to train a network on a one-dimensional feature space, from the umu.se link above.

% Supervised Learning Example
% File name: bp_ex1.m
% Date: 01-10-09

% Here is a problem consisting of inputs P and targets T that we would
% like to solve with a network.
P = [0 1 2 3 4 5 6 7 8 9 10];
T = [0 1 2 3 4 3 2 1 2 3 4];

% Here a two-layer feed-forward network is created. The network's
% input ranges from [0 to 10]. The first layer has six TANSIG
% neurons, the second layer has one PURELIN neuron. The TRAINLM
% network training function is to be used.
net = newff([0 10],[6 1],{'tansig' 'purelin'});

% Here the network is simulated and its output plotted against
% the targets.
Y = sim(net,P);
figure(1);
plot(P,T,'+',P,Y,'o');
title('network before training, + is teacher');

% Here the network is trained for 80 epochs. Again the network's
% output is plotted.
net.trainParam.epochs = 80;
net = train(net,P,T);
Y = sim(net,P);
figure(2);
plot(P,T,'+',P,Y,'o');
title('Supervised output (+), Network output (o)');
xlabel('x'); ylabel('y');

whos          % Which variables do I have?
net           % Gives structure information of the trained net
input_weights = net.iw{1}
input_bias    = net.b{1}
layer_weights = net.lw{2}
layer_bias    = net.b{2}
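To see how well the trained network generalizes between the integer training points, one can probe it on a finer grid. This is a small sketch of my own (the variables Ptest and Ytest are not from the lab exercise):

Ptest = 0:0.25:10;        % a finer grid than the training inputs
Ytest = sim(net,Ptest);   % network response at unseen points
figure(3);
plot(P,T,'+',Ptest,Ytest,'-');
title('Training targets (+) and network response on a fine grid');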

Unsupervised Learning Example
The Self-Organizing Map (SOM) method for unsupervised learning is well suited to complex systems with multiple inputs and outputs, since it maps them onto a grid of only 2 dimensions. Example 2 from the link, above.

% SOM Example, 2-dim
% File name: som2d_1.m
% Date: 01-10-09

% NEWSOM - Creates a self-organizing map.
% TRAIN  - Trains a neural network.
% SIM    - Simulates a neural network.

% We would like to classify 1000 two-element vectors occurring in a
% rectangular shaped vector space.
P = rand(2,1000);    % here input is random (0 to 1)

% We will use a 5 by 6 layer of neurons to classify the vectors above.
% This self-organizing map will learn to represent different regions of
% the input space where input vectors occur. We would like each neuron
% to respond to a different region of the rectangle, and neighboring
% neurons to respond to adjacent regions. In this demo, the neurons will
% arrange themselves in a two-dimensional grid, rather than a line. We
% create a layer of 30 neurons spread out in a 5 by 6 grid:
net = newsom([0 1; 0 1],[5 6]);

% Initially all the neurons have the same weights in the middle of the
% vectors, so only one dot appears.
plotsom(net.iw{1,1},net.layers{1}.distances)

% Now we train the map on the 1000 vectors for 1 epoch and replot the
% network weights. Please wait for it.
net.trainParam.epochs = 1;
net.trainParam.show = 1;
net = train(net,P);
plotsom(net.iw{1,1},net.layers{1}.distances)
% Note that the layer of neurons has begun to self-organize so that each
% neuron now classifies a different region of the input space, and
% adjacent (connected) neurons respond to adjacent regions. Blue lines
% show neurons with distance = 1.

% We can now use SIM to classify vectors by giving them to the network
% and seeing which neuron responds.
p = [0.5; 0.3];
a = sim(net,p)
% The neuron in parenthesis above responded with a "1", so p belongs to
% that class.

Mathworks SOM Example
This example shows a more non-linear classifier, which still outputs classes in an intuitive visual display.

load iris_dataset
net = newsom(irisInputs,[5 5]);
[net,tr] = train(net,irisInputs);
plotsomhits(net,irisInputs)

For better documentation: >> help plotsomhits

Real World, Real Time Motivation
In 1991, NASA used a 3-input ANN to help diagnose catastrophic failures in the Space Shuttle's Main Engine (a 7,000 lb liquid Hydrogen/Oxygen rocket). The network was trained to detect failures during the first six seconds of operation when the engine was fired, and to trigger a shutdown when required. Paper link: here.
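Following the Mathworks SOM example above, here is a small sketch of my own (not from either example) for reading the classification result numerically. vec2ind converts the one-of-N output of SIM into the index of the winning neuron for each sample:

classes = vec2ind(sim(net,irisInputs));  % winning neuron per iris sample
hist(classes,25)                         % how the 25 neurons share the data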

Introduction to the Matlab Neural Network Toolbox 3.0

Note: This tutorial describes version 3.0 of the NNT. The newest version is 4.0, available since Matlab 6.0 (R12).

It consists of the following sections:
1. Introduction
2. Network Layers
   o Constructing Layers
   o Connecting Layers
   o Setting Transfer Functions
3. Weights and Biases
4. Training Functions & Parameters
   o The difference between train and adapt
   o Performance Functions
   o Train Parameters
   o Adapt Parameters
5. Conclusion and Change log

INTRODUCTION

Matlab's Neural Network Toolbox (NNT) is powerful, yet at times completely incomprehensible. The NNT is an all-purpose neural network environment; everything but the kitchen sink is included, and most of it has somehow been incorporated in the network object. Trying to understand this object and its properties can be a bewildering experience, especially since the documentation is of the usual Matlab quality (which is a Bad Thing(TM)). This is mainly due to the complexity of the network object. Even though high-level network creation functions, like newp and newff, are included in the Toolbox, there will probably come a time when it will be necessary to directly edit the network object properties. The purpose of this document is to try to explain to all those interested how to build a custom feed-forward network starting from scratch (i.e. a `blank' neural network object).

Part of my job is teaching a neural networks practicum. Since we wanted the students to concern themselves with the ideas behind neural networks rather than with implementation and programming issues, some software was written to hide the details of the network object behind a Matlab GUI. In the course of writing this software, I learned a lot about the NNT. Some of it I learned from the manual, but most of it I learned through trial-and-error. And that's the reason I wrote this introduction: to save you a lot of the time I already had to spend learning about the NNT.

This document is far from extensive, and is naturally restricted to my own field of application; therefore, only feed-forward networks will be treated. I do think however that reading this can give you a firm enough background to start building your own custom networks, relying on the Matlab documentation for specific details. Any errors, omissions etc. are completely my responsibility, so if you have any comments, questions or hate mail, please send them to me at portegie@science.uva.nl.

All Matlab commands given in this document assume the existence of an NNT network object named `net'. To construct such an object from scratch, type

>> net = network;

which gives you a `blank' network, without any properties. Which properties to set, and how to set them, is the subject of this document.

NETWORK LAYERS

The term `layer' in the neural network sense means different things to different people. In the NNT, a layer is defined as a layer of neurons, with the exception of the input layer. So in NNT terminology, a network with an input layer and one layer of neurons is a one-layer network, and a network with an input layer, a hidden layer and an output layer is a two-layer network. We will use the latter, with a hidden layer of 3 neurons and an output layer of 1 neuron, as an example throughout the text.
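As a quick aside of mine (not from the tutorial): you can inspect the blank object at any point, since typing its name lists all its properties, and the counters start at zero:

>> net = network;   % a blank network object
>> net.numInputs    % returns 0: no input layers defined yet
>> net.numLayers    % returns 0: no layers of neurons defined yet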

Each layer has a number of properties, the most important being the transfer function of the neurons in that layer, and the function that defines the net input of each neuron given its weights and the output of the previous layer.

Constructing Layers

OK, so let's get to it. I'll assume you have an empty network object named `net' in your workspace; if not, type

>> net = network;

to get one. Let's start with defining the properties of the input layer. The NNT supports networks which have multiple input layers. I've never used such networks, and don't know of anybody who has, so let's set this to 1:

>> net.numInputs = 1;

Now we should define the number of neurons in the input layer. The appropriate property to set is net.inputs{i}.size, where i is the index of the input layer. This should of course be equal to the dimensionality of your data set. So to make a network which has 2-dimensional points as inputs, type:

>> net.inputs{1}.size = 2;

This defines (for now) the input layer. The next properties to set are net.numLayers, which not surprisingly sets the total number of layers in the network, and net.layers{i}.size, which sets the number of neurons in the ith layer. To build our example network, we define 2 extra layers (a hidden layer with 3 neurons and an output layer with 1 neuron), using:

>> net.numLayers = 2;
>> net.layers{1}.size = 3;
>> net.layers{2}.size = 1;

Connecting Layers

Now it's time to define which layers are connected. First, define to which layer the inputs are connected by setting net.inputConnect(i) to 1 for the appropriate layer i (usually the first, so i = 1). The connections between the rest of the layers are defined by a connectivity matrix called net.layerConnect, which can have either 0 or 1 as element entries. If element (i,j) is 1, then the outputs of layer j are connected to the inputs of layer i. We also have to define which layer is the output layer, by setting net.outputConnect(i) to 1 for the appropriate layer i. Finally, if we have a supervised training set, we also have to define which layers are connected to the target values. (Usually, this will be the output layer.) This is done by setting net.targetConnect(i) to 1 for the appropriate layer i. So, for our example, the appropriate commands would be

>> net.inputConnect(1) = 1;
>> net.layerConnect(2, 1) = 1;
>> net.outputConnect(2) = 1;
>> net.targetConnect(2) = 1;

Setting Transfer Functions

Each layer has its own transfer function, which is set through the net.layers{i}.transferFcn property. So to make the first layer use sigmoid transfer functions, and the second layer linear transfer functions, use

>> net.layers{1}.transferFcn = 'logsig';
>> net.layers{2}.transferFcn = 'purelin';

For a list of possible transfer functions, check the Matlab documentation.

WEIGHTS AND BIASES

Now, define which layers have biases by setting the elements of net.biasConnect to either 0 or 1, where net.biasConnect(i) = 1 means layer i has biases attached to it.

To attach biases to each layer in our example network, we'd use

>> net.biasConnect = [1; 1];

Now you should decide on an initialisation procedure for the weights and biases. The first thing to do is to set net.initFcn. Unless you have built your own initialisation routine, the value 'initlay' is the way to go:

>> net.initFcn = 'initlay';

This lets each layer of weights and biases use their own initialisation routine. Exactly which routine that is should of course be specified as well. This is done through the property net.layers{i}.initFcn, for each layer. The two most practical options here are Nguyen-Widrow initialisation ('initnw', type 'help initnw' for details), or 'initwb', which lets you choose the initialisation for each set of weights and biases separately. When using 'initnw' you only have to set

>> net.layers{i}.initFcn = 'initnw';

for each layer i and you're done. When using 'initwb', you have to specify the initialisation routine for each set of weights and biases separately. First, set

>> net.layers{i}.initFcn = 'initwb';

for each layer i. The most common option for the individual routines is 'rands', which sets all weights or biases to a random number between -1 and 1. First, define the initialisation for the input weights:

>> net.inputWeights{1,1}.initFcn = 'rands';

and for the layer weight matrices:

>> net.layerWeights{i,j}.initFcn = 'rands';

where net.layerWeights{i,j} denotes the weights from layer j to layer i. And for each set of biases:

>> net.biases{i}.initFcn = 'rands';

When done correctly, you should be able to simply issue

>> net = init(net);

to reset all weights and biases according to your choices.
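Putting the preceding sections together, the whole construction reads as one script. This is just the tutorial's own commands collected in order (with 'initnw' chosen for both layers), not additional API:

net = network;                         % blank network object
net.numInputs = 1;                     % one input layer
net.inputs{1}.size = 2;                % 2-dimensional input vectors
net.numLayers = 2;                     % hidden layer + output layer
net.layers{1}.size = 3;                % 3 hidden neurons
net.layers{2}.size = 1;                % 1 output neuron
net.inputConnect(1) = 1;               % inputs feed layer 1
net.layerConnect(2,1) = 1;             % layer 1 feeds layer 2
net.outputConnect(2) = 1;              % layer 2 produces the network output
net.targetConnect(2) = 1;              % targets are compared at layer 2
net.layers{1}.transferFcn = 'logsig';  % sigmoid hidden layer
net.layers{2}.transferFcn = 'purelin'; % linear output layer
net.biasConnect = [1; 1];              % biases on both layers
net.initFcn = 'initlay';               % per-layer initialisation
net.layers{1}.initFcn = 'initnw';      % Nguyen-Widrow for both layers
net.layers{2}.initFcn = 'initnw';
net = init(net);                       % initialise weights and biases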

TRAINING FUNCTIONS & PARAMETERS

The difference between train and adapt

One of the more counterintuitive aspects of the NNT is the distinction between train and adapt. Both functions are used for training a neural network, and most of the time both can be used for the same network. What then is the difference between the two? The most important one has to do with incremental training (updating the weights after the presentation of each single training sample) versus batch training (updating the weights after each presenting of the complete data set). When using adapt, both incremental and batch training can be used. Which one is actually used depends on the format of your training set. If it consists of two matrices of input and target vectors, like

>> P = [0.3 0.2 0.54 0.6; 1.2 2.0 1.4 1.5]
P =
    0.3000    0.2000    0.5400    0.6000
    1.2000    2.0000    1.4000    1.5000
>> T = [0 1 1 0]
T =
     0     1     1     0

the network will be updated using batch training. (In this case, we have 4 samples of 2-dimensional input vectors, and 4 corresponding 1D target vectors.) If the training set is given in the form of a cell array,

>> P = {[0.3; 1.2] [0.2; 2.0] [0.54; 1.4] [0.6; 1.5]}
P =
    [2x1 double]    [2x1 double]    [2x1 double]    [2x1 double]

>> T = {[0] [1] [1] [0]}
T =
    [0]    [1]    [1]    [0]

then incremental training will be used. When using train on the other hand, only batch training will be used, regardless of the format of the data (you can use both). The big plus of train is that it gives you a lot more choice in training functions (gradient descent, gradient descent w/ momentum, Levenberg-Marquardt, etc.) which are implemented very efficiently. So when you don't have a good reason for doing incremental training, train is probably your best choice. (And it usually saves you setting some parameters.)

To conclude this section, my own favourite difference between train and adapt, which is trivial yet annoying, and the reason for which completely escapes me: the difference between passes and epochs. When using adapt, the property that determines how many times the complete training data set is used for training the network is called net.adaptParam.passes. Fair enough. But, when using train, the exact same property is now called net.trainParam.epochs! If anybody can find any sort of ratio behind this design choice (or better, design flaw), please let me know!
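As a side note of mine (not part of the original tutorial): the Toolbox also provides con2seq and seq2con to convert between the two training-set formats shown above, so you don't have to retype your data:

P_mat  = [0.3 0.2 0.54 0.6; 1.2 2.0 1.4 1.5];  % matrix ("concurrent") form
P_cell = con2seq(P_mat);   % cell array ("sequential") form, for incremental adapt
P_back = seq2con(P_cell);  % back again; seq2con returns the matrix wrapped in a cell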

Performance Functions

The two most common options here are the Mean Absolute Error (mae) and the Mean Squared Error (mse). The mae is usually used in networks for classification, while the mse is most commonly seen in function approximation networks. The performance function is set with the net.performFcn property, for instance:

>> net.performFcn = 'mse';

Train Parameters

If you are going to train your network using train, the last step is defining net.trainFcn (the training function), and setting the appropriate parameters in net.trainParam. Which parameters are present depends on your choice for the training function. So if you for example want to train your network using a Gradient Descent w/ Momentum algorithm, you'd set

>> net.trainFcn = 'traingdm';

and then set the parameters

>> net.trainParam.lr = 0.1;
>> net.trainParam.mc = 0.9;

to the desired values. (In this case, lr is the learning rate, and mc the momentum term.) Check the Matlab documentation for possible training functions and their parameters. Two other useful parameters are net.trainParam.epochs, which is the maximum number of times the complete data set may be used for training, and net.trainParam.show, which is the time between status reports of the training function. For example,

>> net.trainParam.epochs = 1000;
>> net.trainParam.show = 100;

Adapt Parameters

The same general scheme is also used in setting adapt parameters. First, set net.adaptFcn to the desired adaptation function. We'll use adaptwb (from 'adapt weights and biases'), which allows for a separate update algorithm for each layer. Again, check the Matlab documentation for a complete overview of possible update algorithms.

>> net.adaptFcn = 'adaptwb';

Next, since we're using adaptwb, we'll have to set the learning function for all weights and biases:

>> net.inputWeights{1,1}.learnFcn = 'learnp';
>> net.biases{1}.learnFcn = 'learnp';

where in this example we've used learnp, the Perceptron learning rule. (Type 'help learnp', etc.)

Finally, a useful parameter is net.adaptParam.passes, which is the maximum number of times the complete training set may be used for updating the network:

>> net.adaptParam.passes = 10;

CONCLUSION

I hope this tutorial will help any of you struggling with the NNT. If you have any comments or questions, you can e-mail me at portegie@science.uva.nl.

Updates
October 13, 2000: Corrected cell array syntax and some spelling errors. (Thanks to Homera Saeed.)
May 11, 2000: Corrected some weird typos.
Feb 22, 2000: First version.

Starting with neural network in matlab

A neural network is a way to model any input-to-output relation based on some input-output data, when nothing is known about the model itself. This example shows a very simple case and its modelling through a neural network using MATLAB.

Actual Model

Let us say that our model has three inputs a, b and c and generates an output y. For data generation purposes, let us take this model as y = 5a + bc + 7c. In actual cases you don't have the mathematical model; you generate the data by running the real system. Let us first write a small script to generate the data:

a = rand(1,1000);
b = rand(1,1000);
c = rand(1,1000);

n = rand(1,1000)*0.05;
y = a*5 + b.*c + 7*c + n;

n is the noise; we added it deliberately to make the data more like real data. The magnitude of the noise is at most 0.05, and it is uniform. So our input is the set of a, b and c, and the output is y:

I = [a; b; c];
O = y;

Understanding Neural Networks

A neural network is like a brain, full of neurons and made of different layers. The first layer, which takes the input and feeds it into the internal (hidden) layers, is known as the input layer. The outer layer, which takes the output from the inner layers and gives it to the outer world, is known as the output layer. The internal layers can be any number of layers. Each layer is basically a function which takes some variables (in the form of a vector u) and transforms them into another variable (another vector v) by multiplying with coefficients and adding some biases b. These coefficients are known as the weight matrix w:

v = sum(w.*u) + b

The size of the v vector is known as the v-size of the layer.

Creating a simple Neural FF Network

We will use Matlab's inbuilt function newff for generation of the model. We will make a very simple neural network for our case: 1 input and 1 output layer. We will take the input layer v-size as 5. Since we have three inputs, our input layer will take u with three values and transform it to a vector v of size 5, and our output layer now takes this 5-element vector as its input u and transforms it to a vector of size 1, because we have only one output.

First we will make a matrix R of size 3x2. The first column will show the minimum of all three inputs and the second will show the maximum. In our case all three inputs are in the 0 to 1 range, so:

R = [0 1; 0 1; 0 1];

Now we make a size matrix S which has the v-size of all the layers:

S = [5 1];

Now call the newff function as follows. Here {'tansig','purelin'} gives the mapping functions of the two layers (let us not waste time on this), and net is the neural model:

net = newff([0 1; 0 1; 0 1],[5 1],{'tansig','purelin'});
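An optional check of my own (not in the original post): the freshly created network can already be simulated, just with meaningless weights, which makes the effect of training easy to see later:

O0 = sim(net,I);                  % response of the untrained network
mse_before = mean((O - O0).^2)    % typically large before training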

Now, as each brain needs training, this neural network too needs it. We will train this neural network with the data we generated earlier:

net = train(net,I,O);

Now net is trained. You can see the performance curve as it gets trained. So now simulate our neural network again on the same data and compare the outputs:

O1 = sim(net,I);
plot(1:1000,O,1:1000,O1);

You can observe how closely the two data sets, green and blue, follow each other.
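To quantify the agreement the plot shows, here is a one-line check of my own:

mse_after = mean((O - O1).^2)     % should be small, on the order of the noise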

Let us try a scatter plot between the simulated output and the actual target output:

scatter(O,O1);

Let us observe the weight matrices of the trained model (the exact values vary from run to run, since the weights are initialised randomly):

net.IW{1}      % input weights (5x3)
net.LW{2,1}    % layer weights (1x5)

Now test it again on some other data. What about a = 1, b = 1 and c = 1? The input matrix will be [1 1 1]':

y1 = sim(net,[1 1 1]')

You will see 13.0279, which is close to 13, the actual output (5*1 + 1*1 + 7*1 = 13).
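As a further sketch of my own, one can test generalisation on fresh random inputs from the same range, compared against the noiseless model:

at = rand(1,100); bt = rand(1,100); ct = rand(1,100);
yt = 5*at + bt.*ct + 7*ct;         % true model output (no noise)
y1t = sim(net,[at; bt; ct]);       % network prediction on new data
max_abs_err = max(abs(yt - y1t))   % worst-case deviation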
