Composite Structures
journal homepage: www.elsevier.com/locate/compstruct
Note
Article history:
Available online 13 November 2010

Keywords:
Neural networks
Non-linear model
Size of database
Database enlargement
Extrapolation

Abstract

This short communication comments on a series of papers using artificial neural networks published by Guessasma and co-workers in structures journals. The issues discussed include the size of the database for training a neural network, database enlargement for training a neural network, and extrapolation based on modelling results. The paper includes criticisms of other mistakes in those previously published papers.

© 2010 Elsevier Ltd. All rights reserved.
doi:10.1016/j.compstruct.2010.11.003
W. Sha / Composite Structures 93 (2011) 1309–1310
analysis (3%) and predict the performance for this larger domain", implying extrapolation. However, the later discussion of [2, Fig. 5] shows the lack of extrapolation ability, as expected, arguing against the authors' own statement of "advantages". Extrapolation should not be used. Contradictorily again, the authors conclude that "This result proved the extrapolation capability of the ANN as it permitted us to predict instability situations outside the intervals adopted for the databases", against their own findings. The original 3% is extended to 5–10%, but no explanation is given of how these larger percentages are obtained. There is no validation of accuracy, or even an estimate of it. Later in that paper, it is stated, "When using database 1 for the optimisation, the decrease of performance (increase of quadratic function) is overestimated. The scatter between the two curves relative to the databases used for the optimisation of the neural network was on average 4.38%, which is reasonable." Again, there is no explanation or justification of why it is overestimated or why 4.38% is reasonable.

In [1, Fig. 7], a comparison between NN calculations and the training data is made. Such a comparison is not useful, because an NN is known to overfit the training data when the database is small. The comparison should instead be made between NN calculations and the so-called testing data, i.e., data not used during the training process.

In [3, Figs. 2 and 4–6], the two sub-figures a and b in each figure are placed the wrong way round.

The final comment is on the number of epochs selected by Guessasma and co-workers (different in each of their papers), chosen to "avoid overtraining" of the ANN. In [1], 10,000 epochs are used, and it would have been important to show that this number is indeed low enough to prevent overtraining.

References