
ANAND PARTHASARATHY UFID: 6692-7926

Approximation and optimization in engineering - Homework 4


(a) Fit a radial basis neural network to the data from the noisy y=x of HW 1 with three centers,
using Matlab newrb with SPREAD=0.5.

The values of X and Y with noise from HW 1 are given below:

X = [0 0.2 0.4 0.6 0.8 1]

Y = [0.1190 0.1600 0.4330 0.6170 0.7810 0.9280]

The Matlab code used to fit a radial basis function to the above values is given in Appendix 1. The values
of Y from the RBNN for a spread of 0.5 with 3 centers and a mean squared error goal of 0.01 are given
below:

YRBNN = [0.11006 0.20843 0.37955 0.61017 0.82256 0.90723]

A plot showing both the function Y with noise and the values of Y from the Radial Basis neural network
function is shown below:

[Plot: "Function Y with noise" and "Function fitted with RBNN with Spread= 0.5", plotted against X from 0 to 1; Y axis from -1 to 1.]

NOTE THAT THE PLOT SHOULD SHOW ONLY THE DATA POINTS RATHER THAN CONNECT
THEM BY A LINE, SINCE THE LINE DOES NOT HAVE ANY MEANING.

(b) Calculate the error statistics of the fit using the equations developed in Chapters 3 and 4 of the
notes (including R², the standard error, and the maximum prediction variance)

Y  YRBNN  (YRBNN ‐ Y)^2  (Y‐Yavg)^2 


0.119  0.11006  7.99E‐05 0.1500
0.16  0.20843  2.35E‐03 0.1199
0.433  0.37955  2.86E‐03 0.0054
0.617  0.61017  4.66E‐05 0.0122
0.781  0.82256  1.73E‐03 0.0754
0.928  0.90723  4.31E‐04 0.1778
      SSe = 0.007488 SSy = 0.5408

R² = 1 − SSe/SSy = 1 − 0.007488/0.5408 = 0.9862

RMS error (standard error of the fit):

σ̂_rms = sqrt( SSe / (n − nβ) )

where nβ = number of centers in the radial basis function = 3.

σ̂_rms = sqrt( 0.007488 / (6 − 3) ) = 0.0499
Since we have 6 data points, choosing 3 centers from the six data points leads to 20 possible
combinations of centers. Since we already have the values of Yhat from the newrb function, we generated
the values of Y using the following weighted sum:

Y = w 1 Ф1 + w 2 Ф2 + w 3 Ф3

where Ф1, Ф2, Ф3 – are the shape functions

and w1, w2, w3 – are the weights of the shape functions corresponding to the three centers.

The Matlab code is given in Appendix 2. When the shape functions and the corresponding weights were
evaluated for each combination, the three centers [0 0.8 1] most closely matched the values generated
by the Matlab function newrb. So the three centers chosen for the calculation of the error measures
are [0 0.8 1].

The weights for the shape functions using the above-mentioned centers are calculated using the formula:

w = (ФᵀФ)⁻¹ ФᵀY

The shape function used is of the form:

Ф(r) = exp( −r² / (2σ²) )

where r = |X − C|, X being the data points, C the centers, and σ the spread, which is 0.5.

Therefore, for each center we have six values of the shape function since there are six data points.

The shape-function matrix for the six points corresponding to centers X = 0, 0.8 and 1 is given below (one row per center):

Ф = 1.0000 0.9231 0.7261 0.4868 0.2780 0.1353

0.2780 0.4868 0.7261 0.9231 1.0000 0.9231

0.1353 0.2780 0.4868 0.7261 0.9231 1.0000
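As a check, the shape-function matrix and the least-squares weights can be reproduced directly from the formulas above. The following Python sketch is an illustration, not part of the original Matlab code; here Ф is stored as points × centers (the transpose of the matrix printed above):

```python
import numpy as np

X = np.array([0, 0.2, 0.4, 0.6, 0.8, 1.0])               # data points
Y = np.array([0.119, 0.16, 0.433, 0.617, 0.781, 0.928])  # noisy responses
C = np.array([0.0, 0.8, 1.0])                            # chosen centers
sigma = 0.5                                              # spread

# Gaussian shape functions: Phi[i, j] = exp(-|X_i - C_j|^2 / (2 sigma^2))
Phi = np.exp(-(X[:, None] - C[None, :])**2 / (2 * sigma**2))

# Least-squares weights: w = (Phi^T Phi)^(-1) Phi^T Y
w = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)

Yhat = Phi @ w  # surrogate values at the data points
```

The entries of `Phi` match the matrix above, e.g. Ф at X = 0.2 for the center C = 0 is exp(−0.04/0.5) ≈ 0.9231.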

The prediction variance is calculated using the formula:

Pred variance = σ̂² xmᵀ (ФᵀФ)⁻¹ xm

where xm is the vector of shape-function values at the prediction point.

Prediction variances at the six points are:

Pred Var = σ̂² × [0.6922 0.3421 0.3740 0.4428 0.3509 0.7979]

Therefore, maximum prediction variance occurs at the point X = 1 with the value

Maximum Pred Var = 0.0499² × 0.7979 = 0.00198
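The prediction-variance computation can be sketched as follows (a Python illustration of the same formula; Ф and xm are the Gaussian shape-function values defined above, and at the data points xm is simply a row of Ф):

```python
import numpy as np

X = np.array([0, 0.2, 0.4, 0.6, 0.8, 1.0])
C = np.array([0.0, 0.8, 1.0])
sigma = 0.5

# Shape-function matrix at the data points (rows = points, cols = centers)
Phi = np.exp(-(X[:, None] - C[None, :])**2 / (2 * sigma**2))

A_inv = np.linalg.inv(Phi.T @ Phi)

# Unscaled prediction variance xm^T (Phi^T Phi)^(-1) xm at each data point;
# multiplying by sigma_hat^2 gives the prediction variance itself
pv = np.array([Phi[i] @ A_inv @ Phi[i] for i in range(len(X))])

sigma_hat = 0.0499
max_pred_var = sigma_hat**2 * pv.max()  # largest variance, at X = 1
```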

(c) Compare the error statistics with the actual errors at the data points and in a dense grid of 101
points in (0,1)

After having generated the RBNN surrogate with three centers, the data range of (0,1) is divided into a
dense grid of 101 points with an interval of 0.01. The values of the function at all these 101 points are
calculated using the surrogate toolbox. The actual error at each of these 101 points is the difference between
the value predicted by the surrogate and the value of the true function at that point. The table in
Appendix 3 gives the values and the corresponding actual errors at the 101 points and the six data points:

The average of the actual error = 0.008116

Sum of the squares of the actual error, SSe,actual = 0.13573

RMS of the actual error at the 101 points:

e_rms,actual = sqrt( 0.13573 / 101 ) = 0.0366
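The same arithmetic in Python, using the sum of squared errors from the Appendix 3 table:

```python
import math

sse_actual = 0.13573  # sum of squared actual errors over the 101 grid points
n_grid = 101

# RMS of the actual error: sqrt(SSe_actual / N)
e_rms_actual = math.sqrt(sse_actual / n_grid)  # approximately 0.0366
```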

The error statistics from section (b) are:

e_rms = 0.03553

σ̂_rms = 0.0499

0.0814

(d) Kriging:

Fit with Kriging using a constant trend (ordinary Kriging) and compare accuracy at 101 points with
RBNN (remember that the exact function is y = x).

A Kriging surrogate is fitted using the surrogate toolbox (Matlab code attached in Appendix 4) and
compared with the Radial Basis Neural Network (RBNN) fit, as shown in the graph below.

[Plot: "Function fitted with KRG" and "Function fitted with RBNN", plotted against X from 0 to 1.]

Kriging actual rms error = 1.27176E‐09 (almost zero) 

RBNN actual rms error = 0.01
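The surrogate toolbox hides the Kriging math; as background, the ordinary-Kriging predictor (constant trend, Gaussian correlation) can be sketched in a few lines of Python. This is an illustrative re-implementation, not the toolbox code, and the correlation parameter theta is fixed here rather than fitted by maximum likelihood:

```python
import numpy as np

def ordinary_kriging(x_train, y_train, theta=10.0):
    """Build an ordinary-Kriging (constant-trend) predictor in 1-D."""
    n = len(x_train)
    # Gaussian correlation matrix R_ij = exp(-theta (x_i - x_j)^2)
    R = np.exp(-theta * (x_train[:, None] - x_train[None, :])**2)
    R_inv = np.linalg.inv(R)
    ones = np.ones(n)
    # Generalized least-squares estimate of the constant trend beta
    beta = (ones @ R_inv @ y_train) / (ones @ R_inv @ ones)

    def predict(x):
        r = np.exp(-theta * (x - x_train)**2)  # correlations to training points
        return beta + r @ R_inv @ (y_train - beta)
    return predict

x = np.array([0, 0.2, 0.4, 0.6, 0.8, 1.0])
y = np.array([0.119, 0.16, 0.433, 0.617, 0.781, 0.928])
krg = ordinary_kriging(x, y)
```

With no noise (nugget) term, Kriging interpolates its training data exactly, which is why the rms error at the points it was fitted on is essentially zero.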



THIS IS RMS AT THE DATA POINTS, WHILE WHAT NEEDS TO BE COMPARED IS RMS ERROR AT
THE 101 POINTS.

(e) Fit with support vector regression using a polynomial of degree 1 as kernel and an epsilon-
insensitive loss function with epsilon of 0.1 (corresponding to the standard deviation of the noise)
and compare the accuracy at 101 points with the other two surrogates.

We have to fit a support vector regression with a linear polynomial kernel. The epsilon value given in the
problem is 0.1, and the loss function is epsilon-insensitive. The following parameters were used for the
support vector regression in the surrogate toolbox; the code used to generate the SVR is given in
Appendix 5.

optionsSVR = srgtsOptionsSet(... % SVR model
    'P', P, ...
    'T', T, ...
    'Surrogates', @srgtsSVR, ...
    'Loss', 'einsensitive', ...
    'Insensitivity', 0.1, ...
    'Kernel', Kernel, ...
    'C', C, ...
    'Display', 'diagnose');

The value of C is given by the equation:

C = max( [ abs(mean(T) + 3*std(T)) , abs(mean(T) - 3*std(T)) ] )


where T is the vector of Y values with noise.
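With the six noisy Y values from part (a), this formula evaluates as follows (a Python sketch of the same arithmetic; Matlab's std is the sample standard deviation):

```python
import statistics

T = [0.119, 0.16, 0.433, 0.617, 0.781, 0.928]  # noisy Y values from HW 1

mean_T = statistics.mean(T)
std_T = statistics.stdev(T)  # sample standard deviation, as in Matlab's std

# C = max(|mean + 3*std|, |mean - 3*std|)
C = max(abs(mean_T + 3 * std_T), abs(mean_T - 3 * std_T))
```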

The plot comparing the Support Vector Regression function, the RBNN and the Kriging models is shown
below:

[Plot: "Function with SVR", "Function fitted with Kriging", and "Function fitted with RBNN", plotted against X from 0 to 1.]

Kriging actual rms error = 1.27176E‐09 

RBNN actual rms error = 0.01

SVR actual rms error = 0.05831 

As seen in the graph, the SVR does not map the true function exactly and hence has the highest rms 
error.

(f) Check if you can get a better fit with newrb using a different SPREAD:

With a spread of 1, as against a spread of 0.5, the following results are obtained for the radial basis
function on the grid of 101 points:

Spread = 1

[Plot: "Grid of 101 points" and "Function fitted with RBNN" (spread = 1), plotted against X from 0 to 1.]

As seen in the above plot, for a spread of 1 the surrogate improves appreciably and almost reproduces the
true function Y = X.
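The effect of the spread can be seen directly in the Gaussian basis function: a larger σ gives wider, more overlapping bumps and hence a smoother fit. A small Python illustration (the distance 0.4 is chosen arbitrarily):

```python
import math

def phi(r, spread):
    """Gaussian shape function exp(-r^2 / (2 spread^2))."""
    return math.exp(-r**2 / (2 * spread**2))

# Value of a basis function 0.4 away from its center:
narrow = phi(0.4, 0.5)  # spread 0.5 -> decays quickly (about 0.726)
wide = phi(0.4, 1.0)    # spread 1.0 -> still near 1, bases overlap more
```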

Appendix 1:

Matlab code to generate the RBNN using newrb function:

P = [0 0.2 0.4 0.6 0.8 1];
T = [0.119 0.16 0.433 0.617 0.781 0.928];
[net tr] = newrb(P,T,0.01,0.5,3,25);
YRBNN = sim(net, P);
h = plot(P, T, P, YRBNN); grid
set(h, 'LineWidth', 3);
set(h(1), 'LineStyle', '--', 'Color', [0 0 0]);
set(h(2), 'LineStyle', '-', 'Color', [1 0 0]);
axis tight;
legend('Function Y with noise',...
    'Function fitted with RBNN with Spread= 0.5',...
    'Location', 'SE');
xlabel('X -->');
a = axis;
axis([a(1) a(2) -1 1]);

Appendix 2: Matlab code to generate shape functions and compute prediction variance

X = [0;0.2;0.4;0.6;0.8;1];
Y = [ 0.119;0.16;0.433;0.617;0.781;0.928];
S = 0.5; % Spread
% Different possible centers combinations
c(1,:) = [ 0 0.2 0.4];
c(2,:) = [0 0.2 0.6];
c(3,:) = [0 0.2 0.8];
c(4,:) = [0 0.2 1];
c(5,:) = [0 0.4 0.6];
c(6,:) = [0 0.4 0.8];
c(7,:) = [0 0.4 1];
c(8,:) = [0 0.6 0.8];
c(9,:) = [0 0.6 1];
c(10,:) = [0 0.8 1];
c(11,:) = [0.2 0.4 0.6];
c(12,:) = [0.2 0.4 0.8];
c(13,:) = [0.2 0.4 1];
c(14,:) = [0.2 0.6 0.8];
c(15,:) = [0.2 0.6 1];
c(16,:) = [0.2 0.8 1];
c(17,:) = [0.4 0.6 0.8];
c(18,:) = [0.4 0.8 1];
c(19,:) = [0.6 0.8 1];
c(20,:) = [0.4 0.6 1];

for j = 1:20
for i =1:6
phi1(i) = exp(-(sum(X(i,:) - c(j,1)))^2/(2 * S^2));
phi2(i) = exp(-(sum(X(i,:) - c(j,2)))^2/(2 * S^2));
phi3(i) = exp(-(sum(X(i,:) - c(j,3)))^2/(2 * S^2));
end
phi = [phi1;phi2;phi3]';
% Compute the weights of the shape functions for each of the above
% mentioned 20 centers
w(:,j) = ((inv(phi'*phi))*phi')*Y;
end;

for j = 1:20
for i =1:6
phi1(i) = exp(-(sum(X(i,:) - c(j,1)))^2/(2 * S^2));
phi2(i) = exp(-(sum(X(i,:) - c(j,2)))^2/(2 * S^2));
phi3(i) = exp(-(sum(X(i,:) - c(j,3)))^2/(2 * S^2));
y(i,j) = w(1,j) * phi1(i) + w(2,j) * phi2(i) + w(3,j) * phi3(i);
end
end

P1 = [ 0 0.2 0.4 0.6 0.8 1];


T1 = [ 0.119 0.16 0.433 0.617 0.781 0.928];
net = newrb(P1,T1,0.01,0.5,3,25);
Y_newrb = sim(net,P1);

% Calculate the value of Y for the 10th center combination [0 0.8 1]
for i = 1:6
    phi101(i) = exp(-(sum(X(i,:) - c(10,1)))^2/(2 * S^2));
    phi102(i) = exp(-(sum(X(i,:) - c(10,2)))^2/(2 * S^2));
    phi103(i) = exp(-(sum(X(i,:) - c(10,3)))^2/(2 * S^2));
    Y10(i,1) = w(1,10) * phi101(i) + w(2,10) * phi102(i) + w(3,10) * phi103(i);
end

phi10 = [phi101;phi102;phi103]';
l = length(X);
for k=1:l
xm1(k) = exp(-(sum(X(k) - 0))^2/(2 * S^2));
xm2(k) = exp(-(sum(X(k) - 0.8))^2/(2 * S^2));
xm3(k) = exp(-(sum(X(k) - 1))^2/(2 * S^2));
end;
% Compute the prediction variance
for i = 1:6
    pv(i) = [xm1(i) xm2(i) xm3(i)] * (inv(phi10'*phi10)) * [xm1(i) xm2(i) xm3(i)]';
end

Appendix 3:

Actual errors at the data points:

X      Surrogate value at data point      True function value      Actual error
0  0.11006 0 0.11006
0.2  0.20843 0.2 0.00843
0.4  0.37955 0.4 ‐0.02045
0.6  0.61017 0.6 0.01017
0.8  0.82256 0.8 0.02256
1  0.90723 1 ‐0.09277

Actual errors at the dense grid of 101 points:

X      Surrogate value at 101 points      True function value      Actual error
0  0.11006 0 0.11006
0.01  0.11356 0.01 0.10356
0.02  0.11719 0.02 0.09719
0.03  0.12095 0.03 0.09095
0.04  0.12484 0.04 0.08484
0.05  0.12888 0.05 0.07888
0.06  0.13305 0.06 0.07305
0.07  0.13737 0.07 0.06737
0.08  0.14184 0.08 0.06184
0.09  0.14647 0.09 0.05647
0.1  0.15125 0.1 0.05125
0.11  0.15619 0.11 0.04619
0.12  0.1613 0.12 0.0413
0.13  0.16657 0.13 0.03657
0.14  0.17201 0.14 0.03201
0.15  0.17763 0.15 0.02763
0.16  0.18343 0.16 0.02343
0.17  0.1894 0.17 0.0194
0.18  0.19556 0.18 0.01556
0.19  0.2019 0.19 0.0119
0.2  0.20843 0.2 0.00843
0.21  0.21515 0.21 0.00515
0.22  0.22206 0.22 0.00206
0.23  0.22916 0.23 ‐0.00084
0.24  0.23646 0.24 ‐0.00354
0.25  0.24395 0.25 ‐0.00605
0.26  0.25164 0.26 ‐0.00836
0.27  0.25953 0.27 ‐0.01047
0.28  0.26761 0.28 ‐0.01239
0.29  0.27589 0.29 ‐0.01411
0.3  0.28436 0.3 ‐0.01564
0.31  0.29304 0.31 ‐0.01696
0.32  0.3019 0.32 ‐0.0181
0.33  0.31096 0.33 ‐0.01904
0.34  0.32021 0.34 ‐0.01979
0.35  0.32965 0.35 ‐0.02035
0.36  0.33927 0.36 ‐0.02073
0.37  0.34908 0.37 ‐0.02092
0.38  0.35906 0.38 ‐0.02094
0.39  0.36922 0.39 ‐0.02078
0.4  0.37955 0.4 ‐0.02045
0.41  0.39004 0.41 ‐0.01996
0.42  0.40069 0.42 ‐0.01931
0.43  0.41149 0.43 ‐0.01851
0.44  0.42244 0.44 ‐0.01756
0.45  0.43352 0.45 ‐0.01648
0.46  0.44474 0.46 ‐0.01526
0.47  0.45608 0.47 ‐0.01392
0.48  0.46754 0.48 ‐0.01246
0.49  0.4791 0.49 ‐0.0109
0.5  0.49075 0.5 ‐0.00925
0.51  0.5025 0.51 ‐0.0075
0.52  0.51431 0.52 ‐0.00569
0.53  0.52619 0.53 ‐0.00381
0.54  0.53813 0.54 ‐0.00187
0.55  0.55011 0.55 0.00011
0.56  0.56211 0.56 0.00211
0.57  0.57413 0.57 0.00413
0.58  0.58616 0.58 0.00616
0.59  0.59818 0.59 0.00818
0.6  0.61017 0.6 0.01017
0.61  0.62213 0.61 0.01213
0.62  0.63403 0.62 0.01403
0.63  0.64587 0.63 0.01587
0.64  0.65763 0.64 0.01763
0.65  0.6693 0.65 0.0193
0.66  0.68085 0.66 0.02085
0.67  0.69228 0.67 0.02228
0.68  0.70357 0.68 0.02357
0.69  0.71471 0.69 0.02471
0.7  0.72567 0.7 0.02567
0.71  0.73645 0.71 0.02645
0.72  0.74702 0.72 0.02702
0.73  0.75738 0.73 0.02738
0.74  0.76751 0.74 0.02751
0.75  0.77738 0.75 0.02738
0.76  0.787 0.76 0.027
0.77  0.79634 0.77 0.02634
0.78  0.80539 0.78 0.02539
0.79  0.81413 0.79 0.02413
0.8  0.82256 0.8 0.02256
0.81  0.83065 0.81 0.02065
0.82  0.8384 0.82 0.0184
0.83  0.84579 0.83 0.01579
0.84  0.85281 0.84 0.01281
0.85  0.85945 0.85 0.00945
0.86  0.8657 0.86 0.0057
0.87  0.87155 0.87 0.00155
0.88  0.87698 0.88 ‐0.00302
0.89  0.88199 0.89 ‐0.00801
0.9  0.88656 0.9 ‐0.01344
0.91  0.8907 0.91 ‐0.0193
0.92  0.8944 0.92 ‐0.0256
0.93  0.89764 0.93 ‐0.03236
0.94  0.90042 0.94 ‐0.03958
0.95  0.90273 0.95 ‐0.04727
0.96  0.90458 0.96 ‐0.05542
0.97  0.90596 0.97 ‐0.06404
0.98  0.90686 0.98 ‐0.07314
0.99  0.90729 0.99 ‐0.08271
1  0.90723 1 ‐0.09277

Appendix 4: Matlab code for Kriging:

X = [0:.01:1]';
Y = [0:.01:1]';
P = [0:.01:1];
T = [0:.01:1];
NbVariables = 1;
NbPointsTraining = length(X);

Theta0 = (NbPointsTraining^(-1/NbVariables))*ones(1,NbVariables);
LowerBound = 1e-3*ones(1,NbVariables);
UpperBound = 3*Theta0;
optionsKRG = srgtsOptionsSet(...
'P', X, ...
'T', Y, ...
'Surrogates', @srgtsKRG,...
'RegressionModel', @regpoly0, ...
'CorrelationModel', @corrgauss, ...
'Theta0', Theta0, ...
'LowerBound', LowerBound, ...
'UpperBound', UpperBound, ...
'Display','final');

[surrogateKRG, stateKRG] = srgtsSurrogate(optionsKRG);


coeffKRG = surrogateKRG.PRSBeta;
[CVEMatrixKRG, PRESSRMSKRG, PRESSKRG] = srgtsComputeCrossValidation(optionsKRG,surrogateKRG);
PRESSRMSKRG
PRESSKRG

[YhatKRG outputsKRG] = srgtsSimulate(X,surrogateKRG,optionsKRG);

PredictionMetrics = srgtsPredictionMetrics(X,Y,YhatKRG);
CorrelationCoefficient = PredictionMetrics.CorrelationCoefficient
RMSError = PredictionMetrics.RMSError
MaximumAbsoluteError = PredictionMetrics.MaximumAbsoluteError
[net tr] = newrb(P,T,0.01,0.5,3,25);
YRBNN = sim(net, P);

h = plot(X, YhatKRG, P, YRBNN); grid
set(h, 'LineWidth', 3);
set(h(1), 'LineStyle', '-', 'Color', [0 0 0]);
set(h(2), 'LineStyle', '--', 'Color', [0 0 1]);
% set(h(3), 'LineStyle', '--', 'Color', [1 0 0]);
axis tight;
legend('Function fitted with KRG',...
'Function fitted with RBNN',...
'Location', 'SE');
xlabel('X -->');
a = axis;
axis([a(1) a(2) 0 1]);

Appendix 5: Matlab code for SVR:

P = [0:.01:1]';
T = [0:.01:1]';
Kernel = 'Linear';
KernelOptions = 1;
C = max( [ abs(mean(T) + 3*std(T)) , abs(mean(T) - 3*std(T)) ] );
% Insensitivity = std(T)/sqrt( NbPoints );

optionsSVR = srgtsOptionsSet(... % SVR model
    'P', P, ...
    'T', T, ...
    'Surrogates', @srgtsSVR, ...
    'Loss', 'einsensitive', ...
    'Insensitivity', 0.1, ...
    'KernelOptions', KernelOptions, ...
    'Kernel', Kernel, ...
    'C', C, ...
    'Display', 'diagnose');

disp('Fitting SVR model...');


[surrogateSVR stateSVR] = srgtsSurrogate(optionsSVR);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%
% computation of cross-validation errors
disp(sprintf('\nPerforming cross-validation calculations...'));
% [CVEMatrix, PRESSRMS, PRESS] = srgtsComputeCrossValidation(optionsSVR,surrogateSVR);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%
[YhatSVR outputsSVR] = srgtsSimulate(P,surrogateSVR,optionsSVR);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% KRIGING
NbVariables = 1;
NbPointsTraining = length(P);

Theta0 = (NbPointsTraining^(-1/NbVariables))*ones(1,NbVariables);
LowerBound = 1e-3*ones(1,NbVariables);
UpperBound = 3*Theta0;
optionsKRG = srgtsOptionsSet(...
'P', P, ...
'T', T, ...
'Surrogates', @srgtsKRG,...
'RegressionModel', @regpoly0, ...
'CorrelationModel', @corrgauss, ...
'Theta0', Theta0, ...
'LowerBound', LowerBound, ...
'UpperBound', UpperBound, ...
'Display','final');

[surrogateKRG, stateKRG] = srgtsSurrogate(optionsKRG);


% coeffKRG = surrogateKRG.PRSBeta;
% [CVEMatrixKRG, PRESSRMSKRG, PRESSKRG] = srgtsComputeCrossValidation(optionsKRG,surrogateKRG);
%PRESSRMSKRG
%PRESSKRG

[YhatKRG outputsKRG] = srgtsSimulate(P,surrogateKRG,optionsKRG);

PredictionMetrics = srgtsPredictionMetrics(P,T,YhatKRG);
CorrelationCoefficient = PredictionMetrics.CorrelationCoefficient
RMSError = PredictionMetrics.RMSError
MaximumAbsoluteError = PredictionMetrics.MaximumAbsoluteError

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%RBNN
optionsRBNN = srgtsOptionsSet(...
'P', P, ...
'T', T, ...
'Surrogates', @srgtsRBNN,...
'Spread',0.5,...
'Goal',0.01,...
'MN',3,...
'DF',25,...
'Display','final');

[surrogateRBNN, stateRBNN] = srgtsSurrogate(optionsRBNN);


stateRBNN

% coeffRBNN = surrogateRBNN.PRSBeta;
[CVEMatrixRBNN, PRESSRMSRBNN, PRESSRBNN] = srgtsComputeCrossValidation(optionsRBNN,surrogateRBNN);
PRESSRMSRBNN
PRESSRBNN

[YhatRBNN outputsRBNN] = srgtsSimulate(P,surrogateRBNN,optionsRBNN);


% [YhatRBNN outputsRBNN] = srgtsSimulate(Xtest,surrogateRBNN,optionsRBNN);

PredictionMetrics = srgtsPredictionMetrics(P,T,YhatRBNN);
CorrelationCoefficient = PredictionMetrics.CorrelationCoefficient
RMSError = PredictionMetrics.RMSError
MaximumAbsoluteError = PredictionMetrics.MaximumAbsoluteError
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

h = plot(P, YhatSVR,...
P, YhatKRG,...
P, YhatRBNN);grid
set(h, 'LineWidth', 3);
set(h(1), 'LineStyle', '--', 'Color', [0 0 0]);
set(h(2), 'LineStyle', '-', 'Color', [0 0 1]);
set(h(3), 'LineStyle', '-.', 'Color', [1 0 0]);
axis tight;
legend('Function with SVR',...
'Function fitted with Kriging',...
'Function fitted with RBNN',...
'Location', 'SE');
xlabel('X -->');
a = axis;
axis([a(1) a(2) 0 1]);
