ICT Express xxx (xxxx) xxx
www.elsevier.com/locate/icte
Received 7 June 2021; received in revised form 19 July 2021; accepted 26 September 2021
Available online xxxx
Abstract
The design of bridge piers and abutments is significantly affected by hydrodynamic processes that cause scouring of the foundation. Although many empirical formulae are available in the literature to estimate the depth of scour, they suffer from several limitations. A major limitation of empirical formulae is that they are largely applicable only to the hydraulic conditions for which they were derived. In this research, a deep neural network (DNN) has been developed and applied to predict the depth of scour around bridge piers and abutments. The practicality of the proposed model has been demonstrated using experimental data sets consisting of 211 data points. The novelty of the DNN model applied herein lies in the use of the Adam optimizer for optimizing the parameters of the DNN model. The performance of the DNN model was evaluated for each parameter set using statistical indicators such as the coefficient of determination, root mean square error, and mean absolute error. A regression equation based upon the available data set has also been proposed. Based upon the values of the statistical indicators, the DNN model was found to perform significantly better than the regression model. A distinct practical advantage of the proposed model is that it eliminates the need for a trial-and-error procedure to determine the optimal parameter set.
© 2021 The Author(s). Published by Elsevier B.V. on behalf of The Korean Institute of Communications and Information Sciences. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Keywords: DNN; Hyperparameter; Scour; Adam optimizer
Please cite this article as: M. Asim, A. Rashid and T. Ahmad, Scour modeling using deep neural networks based on hyperparameter optimization, ICT Express (2021),
https://doi.org/10.1016/j.icte.2021.09.012.
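The abstract identifies the Adam optimizer [13] as the key ingredient of the proposed model. Its update rule can be illustrated with a short, self-contained Python sketch; this is a generic toy example, not the authors' code, and β1, β2, and ε are Adam's standard default values:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square,
    # with bias correction, as in Kingma and Ba [13].
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy demo: minimize f(theta) = (theta - 3)^2, whose gradient is 2(theta - 3).
theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2.0 * (theta - 3.0), m, v, t)
print(theta)  # converges toward 3.0
```

The per-parameter scaling by the second-moment estimate is what makes the learning rate of 0.01 used later in this paper robust across parameters of different gradient magnitudes.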
Fig. 4. Scatter plot of observed values and values predicted by the regression equation using the entire dataset.
Fig. 5. Progression of the loss curve with epochs during the training process.
Table 1
Statistical performance of the DNN and regression models.

Model        Testing data            Entire data
             MSE     MAE    R2       MSE     MAE    R2
DNN          0.0156  0.087  0.987    0.0151  0.077  0.988
Regression   0.103   0.253  0.933    0.219   0.286  0.821
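The three indicators reported in Table 1 can be computed from any set of observed and predicted ds/y values as follows. This is a generic NumPy sketch; the arrays below are hypothetical values used only to show usage, not the study's data:

```python
import numpy as np

def mse(y_obs, y_pred):
    # Mean squared error
    return float(np.mean((y_obs - y_pred) ** 2))

def mae(y_obs, y_pred):
    # Mean absolute error
    return float(np.mean(np.abs(y_obs - y_pred)))

def r2(y_obs, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - np.mean(y_obs)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical observed and predicted ds/y values, for illustration only:
y_obs = np.array([0.5, 1.0, 1.5, 2.0])
y_pred = np.array([0.6, 0.9, 1.6, 1.9])
print(mse(y_obs, y_pred), mae(y_obs, y_pred), r2(y_obs, y_pred))
```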
The performance evaluation of the regression equation based upon R2, MSE, and MAE was carried out for the test data as well as the entire data set. The values of R2, MSE, and MAE for the test data and the entire data set are presented in Table 1. The value of R2 was 0.933 when the test data were used to predict the values of ds/y using the equation developed in this work. A considerable degree of scatter was obtained around the 45° line, as shown in Fig. 3. The MSE computed from the observed values and the values predicted by the regression equation was 0.103, which is acceptable. For the test data, the value of the MAE was found to be 0.253.

A comparison of the observed values of ds/y and the values of ds/y predicted by the regression equation for the entire dataset is presented in Fig. 4. The value of R2 was found to be 0.821, which was lower than that obtained when only the test data were provided as input to the regression model. This was due to the greater number of data points considered in computing R2 when the entire dataset was provided as input to the regression equation. The MSE in this case was also higher (0.219) than when only the test data were used, and a value of 0.286 was obtained for the MAE. An R2 value of 0.81 during the testing stage and 0.77 during the training stage of their regression model was reported in [20]. In our case, the values of R2 were significantly higher than those reported in [20].

3.2. Performance evaluation of the DNN model

The input layer of the DNN comprised five neurons (U/Uc, L/y, σg, Fr, d50/y), and the output layer consisted of a single neuron (ds/y). The DNN model was trained with 147 sets of observed values, whereas the testing of the model was carried out using 64 sets of observed values. Using hyperparameter optimization, a DNN model with three hidden layers of 50, 90, and 60 neurons, respectively, and a learning rate of 0.01 was found to be optimal. With this optimal set of parameters, the DNN model was trained for 500 epochs.

The progression of the loss, defined by the MSE, during the training process is shown in Fig. 5. The loss curve measures the model error, and it can be seen from Fig. 5 that the MSE drops abruptly from a value greater than 4 at the start of training to close to zero within the first few epochs. Thereafter, the MSE decreases steadily with epochs and becomes constant at around 450 epochs; no further improvement is visible beyond that point. It can be seen from the loss curves (Figs. 5 and 6) that the initial choice of training the model for 500 epochs was appropriate. A zig-zag loss curve indicates over-fitting of the model; in the present case, the loss curves for both training (Fig. 5) and validation (Fig. 6) are relatively smooth, which indicates that the model is not over-fitted.

Fig. 6. Progression of the loss curve with epochs during the validation process.

The scatter plots of the observed values of ds/y and those predicted by the DNN model for the test data and the entire data set are shown in Figs. 7 and 8, respectively. It is evident from Figs. 7 and 8 that there is very good agreement between the observed and predicted values for the DNN model trained with the optimal parameter set using the Adam optimizer.
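The authors implemented their model in Keras [17]; the following plain-NumPy sketch only illustrates the layer shapes stated above (5 inputs, hidden layers of 50, 90, and 60 neurons, 1 output). The ReLU hidden activations, linear output, and random initialization are assumptions, as the paper does not state them:

```python
import numpy as np

# Layer widths as reported above: five inputs (U/Uc, L/y, sigma_g,
# Fr, d50/y), three hidden layers, and a single output neuron (ds/y).
sizes = [5, 50, 90, 60, 1]
rng = np.random.default_rng(0)
params = [(rng.normal(0.0, 0.1, size=(n_in, n_out)), np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(x, params):
    # ReLU on the hidden layers and a linear output are assumptions;
    # a linear output is the usual choice for a regression target.
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

x = rng.normal(size=(4, 5))   # a batch of 4 hypothetical input samples
y = forward(x, params)
print(y.shape)  # (4, 1): one predicted ds/y value per sample
```

In Keras, the same architecture would be a `Sequential` stack of `Dense` layers compiled with the Adam optimizer and an MSE loss, matching the training setup described above.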
[6] A.E. Shahidi, M.S. Rohani, Prediction of scour at abutments using piecewise regression, in: Proceedings of the Institution of Civil Engineers – Water Management 167 (WM2) (2014) 79–87, http://dx.doi.org/10.1680/wama.11.00100.
[7] S.Y. Kayatürk, Scour and Scour Protection at Bridge Abutments (Ph.D. thesis), Civil Eng. Department, METU, 2005.
[8] S. Dey, A.K. Barbhuiya, Time variation of scour at abutments, J. Hydraul. Eng. 131 (1) (2005) 11–23.
[9] E.S.R. Chaurasia, P.B.B. Lal, Local scour around bridge abutments, Int. J. Sed. Res. 17 (1) (2002) 48–74.
[10] E.V. Richardson, S.R. Davis, Evaluating scour at bridges, in: Hydraulic Engineering Circular No. 18, fourth ed., Federal Highway Administration, Arlington, VA, 2001.
[11] D.C. Froehlich, Local scour at bridge abutments, in: Proc. ASCE National Hydraulic Conference, Colorado Springs, Colorado, 1989, pp. 13–18.
[12] M.A. Gill, Erosion of sand beds around spur dikes, J. Hydraul. Div. ASCE 98 (HY9) (1972) 1587–1602.
[13] D.P. Kingma, J. Ba, Adam: A method for stochastic optimization, 2015, arXiv preprint arXiv:1412.6980.
[14] C. Affonso, A.L.D. Rossi, F.H.A. Vieira, A.C.P. de Leon Ferreira, Deep learning for biological image classification, Expert Syst. Appl. 85 (2017) 114–122.
[15] E. Chong, C. Han, F.C. Park, Deep learning networks for stock market analysis and prediction: Methodology, data representations, and case studies, Expert Syst. Appl. 83 (2017) 187–205.
[16] A. Guven, A multi-output descriptive neural network for estimation of scour geometry downstream from hydraulic structures, Adv. Eng. Softw. 42 (3) (2011) 85–93.
[17] F. Chollet, et al., Keras, 2015, URL https://github.com/keras-team/keras.
[18] J. Duchi, et al., Adaptive subgradient methods for online learning and stochastic optimization, Stanford, 2011.
[19] G. Hinton, N. Srivastava, K. Swersky, Neural Networks for Machine Learning (Lecture 6), UToronto and Coursera, 2012.
[20] M. Muzzammil, Application of neural networks to scour depth prediction at the bridge abutments, Eng. Appl. Comput. Fluid Mech. 2 (1) (2008) 30–40, http://dx.doi.org/10.1080/19942060.2008.11015209.