ACE7460 Computational Intelligence
Tutorial #2: MATLAB Neural Network Toolbox
February 14, 2007

New Networks

  newc     Create a competitive layer.
  newcf    Create a cascade-forward backpropagation network.
  newelm   Create an Elman backpropagation network.
  newff    Create a feed-forward backpropagation network.
  newfftd  Create a feed-forward input-delay backprop network.
  newgrnn  Design a generalized regression neural network.
  newhop   Create a Hopfield recurrent network.
  newlin   Create a linear layer.
  newlind  Design a linear layer.
  newlvq   Create a learning vector quantization network.
  newp     Create a perceptron.
  newpnn   Design a probabilistic neural network.
  newrb    Design a radial basis network.
  newrbe   Design an exact radial basis network.
  newsom   Create a self-organizing map.

NEWFF Create a feed-forward backpropagation network.

Syntax
  net = newff
  net = newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)

Description
  NET = NEWFF creates a new network with a dialog box.

  NEWFF(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes,
    PR   R x 2 matrix of min and max values for R input elements
    Si   Size of ith layer, for Nl layers
    TFi  Transfer function of ith layer, default = 'tansig'
    BTF  Backpropagation network training function, default = 'traingdx'
    BLF  Backpropagation weight/bias learning function, default = 'learngdm'
    PF   Performance function, default = 'mse'
  and returns an Nl layer feed-forward backprop network.

  The transfer functions TFi can be any differentiable transfer function
  such as TANSIG, LOGSIG, or PURELIN. The training function BTF can be any
  of the backprop training functions such as TRAINLM, TRAINBFG, TRAINRP,
  TRAINGD, etc.
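For concreteness, here is a minimal sketch of the NEWFF call above. The input ranges, layer sizes, and the choice of TRAINLM are illustrative assumptions, not values taken from this tutorial.

  % Sketch (illustrative values): two inputs with ranges [0 1] and [-1 1],
  % a 3-neuron tansig hidden layer, one purelin output neuron,
  % trained with Levenberg-Marquardt backpropagation.
  net = newff([0 1; -1 1],[3 1],{'tansig' 'purelin'},'trainlm');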

net.performFcn Performance Functions

  mae     Mean absolute error performance function.
  mse     Mean squared error performance function.
  msereg  Mean squared error w/reg performance function.
  sse     Sum squared error performance function.

net.trainFcn Training Functions

  trainb    Batch training with weight and bias learning rules.
  trainbfg  BFGS quasi-Newton backpropagation.
  trainbr   Bayesian regularization.
  trainc    Cyclical order incremental training with learning functions.
  traincgb  Powell-Beale conjugate gradient backpropagation.
  traincgf  Fletcher-Powell conjugate gradient backpropagation.
  traincgp  Polak-Ribiere conjugate gradient backpropagation.
  traingd   Gradient descent backpropagation.
  traingda  Gradient descent with adaptive lr backpropagation.
  traingdm  Gradient descent with momentum backpropagation.
  traingdx  Gradient descent with momentum and adaptive lr backprop.
  trainlm   Levenberg-Marquardt backpropagation.
  trainoss  One-step secant backpropagation.
  trainr    Random order incremental training with learning functions.
  trainrp   Resilient backpropagation (Rprop).
  trainscg  Scaled conjugate gradient backpropagation.

net.trainParam

  net2.trainParam.epochs = maximum_number_of_epochs
  net2.trainParam.goal = performance_goal

TRAIN Train a neural network.

Syntax
  [net,tr,Y,E,Pf,Af] = train(NET,P,T,Pi,Ai,VV,TV)

Description
  TRAIN trains a network NET according to NET.trainFcn and NET.trainParam.

  TRAIN(NET,P,T,Pi,Ai,VV,TV) takes,
    NET  Network.
    P    Network inputs.
    T    Network targets, default = zeros.
    Pi   Initial input delay conditions, default = zeros.
    Ai   Initial layer delay conditions, default = zeros.
    VV   Structure of validation vectors, default = [].
    TV   Structure of test vectors, default = [].
  and returns,
    NET  New network.
    TR   Training record (epoch and perf).
    Y    Network outputs.
    E    Network errors.
    Pf   Final input delay conditions.
    Af   Final layer delay conditions.
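A short hedged sketch tying TRAIN to the trainParam fields above; the epoch count and goal are illustrative values, and net, P, and T are assumed to be a network and input/target matrices defined beforehand.

  % Sketch: choose a training function, set its stopping criteria,
  % then train on inputs P and targets T.
  net.trainFcn = 'traingdx';      % gradient descent w/ momentum + adaptive lr
  net.trainParam.epochs = 300;    % maximum number of epochs (illustrative)
  net.trainParam.goal = 1e-4;     % performance goal (illustrative)
  [net,tr] = train(net,P,T);      % tr is the training record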

PLOTPERF Plot network performance.

Syntax
  plotperf(tr,goal,name,epoch)

Description
  PLOTPERF(TR,GOAL,NAME,EPOCH) takes these inputs,
    TR     Training record returned by train.
    GOAL   Performance goal, default = NaN.
    NAME   Training function name, default = ''.
    EPOCH  Number of epochs, default = length of training record.
  and plots the training performance, and if available, the performance
  goal, validation performance, and test performance.

Example
  Here are 8 input values P and associated targets T, plus a like number
  of validation inputs VV.P and targets VV.T.

    P = 1:8; T = sin(P);
    VV.P = P; VV.T = T+rand(1,8)*0.1;

  The code below creates a network and trains it on this problem.

    net = newff(minmax(P),[4 1],{'tansig','tansig'});
    [net,tr] = train(net,P,T,[],[],VV);

  During training PLOTPERF was called to display the training record.
  You can also call PLOTPERF directly with the final training record TR,
  as shown below.

    plotperf(tr)

  Reference page in Help browser: doc plotperf

SIM Simulate a neural network.

Syntax
  [Y,Pf,Af,E,perf] = sim(net,P,Pi,Ai,T)
  [Y,Pf,Af,E,perf] = sim(net,{Q TS},Pi,Ai,T)

Description
  SIM simulates neural networks.

  [Y,Pf,Af,E,perf] = SIM(net,P,Pi,Ai,T) takes,
    NET  Network.
    P    Network inputs.
    Pi   Initial input delay conditions, default = zeros.
    Ai   Initial layer delay conditions, default = zeros.
    T    Network targets, default = zeros.
  and returns:
    Y     Network outputs.
    Pf    Final input delay conditions.
    Af    Final layer delay conditions.
    E     Network errors.
    perf  Network performance.

  Note that arguments Pi, Ai, Pf, and Af are optional and need only be
  used for networks that have input or layer delays.
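A minimal usage sketch of SIM for a static (no-delay) network; net, P, and T are assumed to exist already, e.g. from the TRAIN example above.

  % Static network: Pi and Ai are passed empty since there are no delays.
  [Y,Pf,Af,E,perf] = sim(net,P,[],[],T);
  % When only the outputs are needed, the short form suffices:
  Y = sim(net,P);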

Neural Network Toolbox Transfer Function Graphs
[Figure page: graphs of the toolbox transfer functions.]
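A few of these graphs can be reproduced with the small sketch below; the choice of functions is illustrative (tansig, logsig, and purelin are the ones named in the NEWFF description), and the input range is arbitrary.

  % Plot three common transfer functions over a sample input range.
  n = -5:0.1:5;
  plot(n, tansig(n), n, logsig(n), n, purelin(n));
  legend('tansig','logsig','purelin');
  xlabel('n'); ylabel('a = f(n)');
  title('Neural Network Toolbox transfer functions');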


% MLP_SampleProgram.m
% ===================
% To simulate a two-layer feed-forward network for approximating a
% function y = sin(x).*cos(3*x) with 1 input, 15 neurons in the hidden
% layer and 1 output.

% Step 1
% create a two-layer feed-forward backpropagation network with
% 1 input (ranges from -pi to pi) and 15 neurons in the hidden layer.
% set bipolar sigmoid function as the activation function for the hidden
% nodes and linear function as the activation function for the output node.
% Note: unipolar sigmoid function -- 'logsig'
net2=newff([-pi pi],[15 1],{'tansig' 'purelin'});

% Step 2
% set training options.
net2.trainFcn='traingdx';       % define gradient descent training function
net2.performFcn='mse';
net2.trainParam.epochs=5000;    % maximum number of epochs
net2.trainParam.goal=0.0001;    % performance goal

% Step 3
% define training patterns.
x=-pi:0.1:pi;
y=sin(x).*cos(3*x);
P=x;
T=y;

% Step 4
% perform training.
% a trained network 'trained_net2' will be returned.
% 'tr' is the training record which is shown by plotperf(tr).
[trained_net2, tr]=train(net2,P,T);

% Step 5
% test the trained network.
x_t=-pi:0.15:pi;
y_t=sim(trained_net2,x_t);

% Step 6
% plot the result.
figure;
plot(x,y,'kx');     % training sample
hold on;
stem(x_t,y_t);      % testing result
xx=-pi:0.01:pi;
yy=sin(xx).*cos(3*xx);
plot(xx,yy,'c');    % original function
title('y=sin(x)*cos(3x)');
xlabel('x');
ylabel('y');
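As an optional follow-up (not part of the original listing), one might inspect the training record and quantify the test error; plotperf and mse are the toolbox functions already introduced above.

  % Show the training record returned in Step 4.
  plotperf(tr);
  % Quantify test error against the true function values (illustrative).
  e_t = sin(x_t).*cos(3*x_t) - y_t;   % errors at the test points
  perf_t = mse(e_t)                   % mean squared error on the test set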