Devdutt Gautam
1904796
B.Tech (Mechanical)
Observation Table:
Observation:
The table compares several neural network configurations, each characterized by a distinct architecture, learning rate, and momentum value.
Notably, the number of neurons in hidden layers, represented as "2-3-1," "2-4-1," "2-6-1," and "2-10-1,"
demonstrates that increasing layer complexity does not consistently translate to improved performance. The
range of learning rates (from 0.5 to 0.95) reveals that higher rates do not universally yield superior
outcomes, exemplified by a higher final testing error in the case with a learning rate of 0.95. Momentum
values at 0.01 and 0.02 show no clear pattern of impact on performance. The variable number of training
epochs, spanning 4 to 11, indicates that a lower epoch count does not consistently lead to optimal results,
suggesting possible convergence issues. Interestingly, the relationship between training and testing errors is
not straightforward, highlighting the importance of considering overfitting. Overall, the observations
emphasize the nuanced interplay of architecture and hyperparameters, suggesting that further
analysis and fine-tuning may be necessary to optimize neural network performance.
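The interaction between learning rate and momentum noted above can be illustrated with a minimal sketch, written here in Python rather than MATLAB for brevity. The quadratic objective and the step counts are hypothetical and chosen only for illustration; the learning rates (0.5 and 0.95) and momentum values (0.01 and 0.02) mirror the ranges in the table:

```python
import numpy as np

def gd_momentum(grad, w0, lr, mom, steps):
    """Gradient descent with classical momentum.
    grad  -- gradient function of the objective
    w0    -- initial weights, lr -- learning rate, mom -- momentum."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = mom * v - lr * grad(w)   # velocity update
        w = w + v                    # weight update
    return w

# Hypothetical objective f(w) = w^2, gradient 2w, minimum at w = 0.
grad = lambda w: 2 * w
w_low  = gd_momentum(grad, [1.0], lr=0.5,  mom=0.01, steps=20)
w_high = gd_momentum(grad, [1.0], lr=0.95, mom=0.02, steps=20)
```

On this toy problem the smaller learning rate converges rapidly, while the larger one overshoots the minimum and oscillates toward it much more slowly, consistent with the observation that a higher rate does not universally yield superior outcomes.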
Modified Code:
clear all
format long

% creating data set
x = linspace(-2,2,25);
y = x;
k = 1;
for i = 1:25
    for j = 1:25
        pattern(k,:) = [x(i) y(j) exp(-(x(i)^2 + y(j)^2))];
        k = k + 1;
    end
end

% shuffle, then split into 80 percent training / 20 percent testing
index = randperm(625);
trainpattern = pattern(index(1:500),:);   % 80 percent
testpattern  = pattern(index(501:625),:); % 20 percent

test_error = 2 * tol;   % force at least one pass through the loop
train_errors = [];      % store training errors for each epoch
test_errors = [];       % store testing errors for each epoch

% (tol, epoch, Q, Si, and the weight matrix Wih are initialized
%  earlier in the full script; this is an excerpt)
while (test_error > tol)
    epoch = epoch + 1;
    train_error = 0;
    for k = 1:Q
        % Forward propagation
        Zh = Si(k,:) * Wih;           % net input to hidden layer
        Sh = [1 1./(1 + exp(-Zh))];   % sigmoid activation, bias prepended
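The forward-propagation step in the code (Zh = Si(k,:) * Wih followed by Sh = [1 1./(1 + exp(-Zh))]) can be sketched in Python. The 3x4 weight shape assumes a hypothetical 2-4-1 network with a bias unit prepended to the input, matching the bias that the MATLAB line prepends to the hidden activations; the random weights are illustrative only:

```python
import numpy as np

def forward_hidden(si, Wih):
    """One forward step through the hidden layer.
    si  -- one input pattern with bias, e.g. [1, x, y]
    Wih -- input-to-hidden weight matrix."""
    zh = si @ Wih                                            # net input to hidden neurons
    sh = np.concatenate(([1.0], 1.0 / (1.0 + np.exp(-zh))))  # sigmoid, bias prepended
    return sh

# Hypothetical 2-4-1 network: 3 inputs (bias, x, y) -> 4 hidden neurons.
rng = np.random.default_rng(0)
Wih = rng.standard_normal((3, 4)) * 0.1
sh = forward_hidden(np.array([1.0, 0.5, -0.5]), Wih)
```

The returned vector has five entries: the fixed bias of 1 followed by four sigmoid activations, each strictly between 0 and 1.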
Results: