Matlab NN Toolbox

http://www.mathworks.com/

Implementation
1. Loading data source.
2. Selecting attributes required.
3. Decide training, validation, and testing data.
4. Data manipulations and Target generation (for supervised learning).
5. Neural Network creation (selection of network architecture) and initialisation.
6. Network Training and Testing.
7. Performance evaluation.
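The steps above can be sketched end-to-end in one script. This is a minimal sketch only: the file name `nndata.txt` and the 826x7 data layout follow the loading example later in these notes, while the 6-input/1-target column split, the 500-pattern training split, and the [10 1] layer sizes are assumptions for illustration.

```matlab
% Hypothetical end-to-end sketch of the implementation steps
data = load('nndata.txt');                    % 1. load data source (826x7 assumed)
inputs  = data(:, 1:6)';                      % 2. select attributes: columns 1-6 as inputs
targets = data(:, 7)';                        %    column 7 as target (supervised learning)
train_in  = inputs(:, 1:500);                 % 3. first 500 patterns for training...
train_tgt = targets(:, 1:500);
test_in   = inputs(:, 501:end);               %    ...the rest for testing
test_tgt  = targets(:, 501:end);
PR = [min(inputs,[],2) max(inputs,[],2)];     % 4. min/max range of each input row
net = newff(PR, [10 1], {'tansig' 'purelin'});% 5. create network (10 hidden, 1 output)
net = train(net, train_in, train_tgt);        % 6. train...
Y   = sim(net, test_in);                      %    ...and test
perf = mse(test_tgt - Y);                     % 7. evaluate performance
```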

Loading and Saving data
• load : retrieve data from disk, in ascii or .mat format.
>> data = load('nndata.txt');
>> whos data
   Name    Size     Bytes    Class
   data    826x7    46256    double array
• save : saves variables in the matlab environment, in ascii or .mat format.
>> save nnoutput.txt x y z;

16].*w => [1. w.4.2.1).:) • w=[1.4].2. 1 2 2 4 • w=[1.2]. 1 4 4 16 . • training = data([1:500].4].4.Matrix manipulation • region = data(:.2. w*w‟ => [1.2.

2)). figure(1). plot(power(y.power(y.2)). >> plot(x. . Redefine x axis: >> x = [2 4 6 8].Plotting Data • plot : plot the vector in 2D or 3D >> y = [1 2 3 4].

Network creation
>> net = newff(PR, [S1 S2...SNl], {TF1 TF2...TFNl}, BTF, BLF, PF)
• newff : creates and returns "net" = a feed-forward backpropagation network.
• PR - Rx2 matrix of min and max values for R input elements.
• Si - Size of ith layer, for Nl layers.
• TFi - Transfer function of ith layer, default = 'tansig'.
• BTF - Backprop network training function, default = 'trainlm'.
• BLF - Backprop weight/bias learning function, default = 'learngdm'.
• PF - Performance function, default = 'mse'.

Network creation (cont.)
• Number of inputs decided by PR
• S1: number of hidden neurons
• S2: number of output neurons
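Putting the newff arguments together, a hypothetical two-input network would look like this (the ranges, layer sizes, and transfer functions here are illustrative choices, not values from these notes):

```matlab
% Hypothetical example: 2 inputs (2 rows in PR), 3 hidden neurons, 1 output
PR  = [-1 1; -1 1];                           % each input ranges over [-1, 1]
net = newff(PR, [3 1], {'tansig' 'purelin'}); % S1 = 3 (hidden), S2 = 1 (output)
% BTF, BLF, PF are omitted, so the defaults trainlm/learngdm/mse apply
```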

-1 1]. net. -1 1.layers{1}. net. net. % init is called after newff • re-initialise with other function: – – – – net.biases{1. -1 1.Network Initialisation >> PR = [-1 1.initFcn = 'initwb'. .initFcn = 'rands'.1}.initFcn = 'rands'.inputWeights{1.1}.biases{2. Min -1 -1 -1 -1 1 1 1 1 neuron 1 Max • Initialise the net‟s weighting and biases • >> net = init(net).initFcn = 'rands'.1}.

Neurons activation
>> net = newff([-1 1; -1 1; -1 1; -1 1], [4,1], {'logsig' 'logsig'});
• TF1: logsig (hidden layer)
• TF2: logsig (output layer)

lr =0. net. of epochs to train) [100] (stop training if the error goal hit) [0] (learning rate. net.trainParam.001. not default trainlm) [0. net.trainParam.01] net.goal =0.epochs =1000. net. (no. (Max no.trainParam.01.time =1000.trainParam.trainParam.show =1. • variable can be reset.Network Training • The overall architecture of your neural network is store in the variable net. epochs between showing error) [25] (Max time to train in sec) [inf] .

net.1000 mu_inc: 10 mu_max: 1.0000e+010 show: 25 time: Inf .0000e-010 mu: 0.trainParam parameters: • • • • • • • • • • • epochs: 100 goal: 0 max_fail: 5 mem_reduc: 1 min_grad: 1.0010 mu_dec: 0.

net.trainFcn options
• net.trainFcn = 'trainlm'; a variant of BP based on a second-order algorithm (Levenberg-Marquardt)
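For comparison, the toolbox also ships first-order training functions that can be swapped in the same way; 'traingd' and 'traingdm' are standard toolbox functions, though these notes only discuss trainlm:

```matlab
net.trainFcn = 'trainlm';   % Levenberg-Marquardt (default, second order)
net.trainFcn = 'traingd';   % plain gradient-descent backpropagation
net.trainFcn = 'traingdm';  % gradient descent with momentum
```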

Network Training (cont.)
TRAIN trains a network NET according to NET.trainFcn and NET.trainParam.
>> TRAIN(NET, P, T, Pi, Ai)
• NET - Network.
• P - Network inputs.
• T - Network targets, default = zeros.
• Pi - Initial input delay conditions, default = zeros.
• Ai - Initial layer delay conditions, default = zeros.
>> p = [-0.5  1    0.5 -1;
        -0.5  1   -0.5  1;
         0.5 -1    0.5  1;
         1   -0.5 -1    0.5];
(column 1 is training pattern 1; row 1 feeds input neuron 1)

Network Training (cont.)
>> TRAIN(NET, P, T, Pi, Ai)
• NET - Network.
• P - Network inputs.
• T - Network targets, default = zeros. (optional, only for NN with targets)
• Pi - Initial input delay conditions, default = zeros.
• Ai - Initial layer delay conditions, default = zeros.
>> p = [-0.5  1    0.5 -1;
        -0.5  1   -0.5  1;
         0.5 -1    0.5  1;
         1   -0.5 -1    0.5];
>> t = [-1 1 -1 1];  (one target per training pattern; -1 for pattern 1)
>> net = train(net, p, t);

Simulation of the network
>> [Y] = SIM(model, UT)
• Y : Returned output in matrix or structure format.
• model : Name of a block diagram model.
• UT : For table inputs, the input to the model is interpolated.
>> UT = [-0.5; 1; -1; 0.5];  (one test pattern, one value per input neuron)
>> Y = sim(net, UT);

Performance Evaluation
• Comparison between target and network's output in the testing set.
• Comparison between target and network's output in the training set.
• Design a metric to measure the distance/similarity of the target and output, or simply use mse.
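As a sketch, the mse comparison looks like this; the variable names p_test and t_test are hypothetical placeholders for a held-out test set:

```matlab
% Hypothetical sketch: compare test-set targets against network outputs
Y_test = sim(net, p_test);          % network outputs on the test inputs
err    = t_test - Y_test;           % per-pattern error
perf   = mse(err);                  % toolbox mean-squared-error metric
% or equivalently, without the toolbox helper:
perf2  = sum(err.^2) / numel(err);
```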

NEWSOM
• Create a self-organizing map.
>> net = newsom(PR, [d1,d2,...], tfcn, dfcn, olr, osteps, tlr, tnd)
• PR - Rx2 matrix of min and max values for R input elements.
• Di - Size of ith layer dimension, defaults = [5 8].
• TFCN - Topology function, default = 'hextop'.
• DFCN - Distance function, default = 'linkdist'.
• OLR - Ordering phase learning rate, default = 0.9.
• OSTEPS - Ordering phase steps, default = 1000.
• TLR - Tuning phase learning rate, default = 0.02.
• TND - Tuning phase neighborhood distance, default = 1.

NewSom parameters
• The topology function TFCN can be HEXTOP, GRIDTOP, or RANDTOP.
• The distance function DFCN can be LINKDIST, DIST, or MANDIST.
• Example:
>> P = [rand(1,400)*2; rand(1,400)];
>> net = newsom([0 2; 0 1], [3 5]);
>> plotsom(net.layers{1}.positions)
• TRAINWB1 : By-weight-&-bias 1-vector-at-a-time training function
>> [net,tr] = trainwb1(net,Pd,Tl,Ai,Q,TS,VV,TV)
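Continuing the example above, the created map is normally fitted with the generic train function rather than by calling trainwb1 directly (the plotsom call with the weight matrix and layer distances is a standard toolbox usage, shown here as an illustrative follow-up):

```matlab
% Train the 3x5 SOM on the 2-D data P (unsupervised: no targets needed)
P   = [rand(1,400)*2; rand(1,400)];
net = newsom([0 2; 0 1], [3 5]);
net = train(net, P);    % runs the ordering and tuning phases internally
% Plot the trained weight vectors over the input space
plotsom(net.iw{1,1}, net.layers{1}.distances);
```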
