
Tunnelling and Underground Space Technology 95 (2020) 103103

Contents lists available at ScienceDirect

Tunnelling and Underground Space Technology


journal homepage: www.elsevier.com/locate/tust

Prediction of rock mass parameters in the TBM tunnel based on BP neural network integrated simulated annealing algorithm

B. Liu (a,b), R. Wang (a,b), G. Zhao (a,b), X. Guo (a), Y. Wang (a,b), J. Li (c), S. Wang (a)

a Geotechnical and Structural Engineering Research Center, Shandong University, Shandong, China
b School of Qilu Transportation, Shandong University, Shandong, China
c China Railway Engineering Equipment Group, Henan, China

ARTICLE INFO

Keywords: Simulated annealing; BP neural network; TBM

ABSTRACT

The prediction of rock mass parameters is of great significance in ensuring the safety and efficiency of tunnel boring machine (TBM) tunnel construction. Previous studies have confirmed the existence of a relationship between TBM driving parameters and rock mass parameters. In this work, we attempt to utilize the TBM driving parameters to predict rock mass parameters, including uniaxial compressive strength (UCS), brittleness index (Bi), distance between planes of weakness (DPW), and the orientation of discontinuities (α). We propose a hybrid algorithm (SA-BPNN) which integrates the back propagation neural network (BPNN) with simulated annealing (SA). A three-layer BPNN model was trained using TBM driving and rock mass parameters from the Songhua River water conveyance project. We collected 320 samples and randomly selected 280 of these to train the model, while the remaining 40 samples made up the first dataset to test the model. The predicted mean absolute percentage errors (MAPEs) of α, UCS, DPW, and Bi were 7.7%, 13.9%, 12.9%, and 11.0%, respectively, with corresponding determination coefficients (R²) of 0.845, 0.737, 0.731, and 0.657. Another 40 samples with different lithology were collected to verify the model. Although the prediction results were not as good as those from the first dataset, they were still acceptable. The results reveal that the SA-BPNN model has a relatively high accuracy. To verify the optimization effect of the SA method on the BPNN algorithm, a BPNN model was established and tested. The results of the SA-BPNN model were more accurate than those of the BPNN model.

1. Introduction

With the advantages of economy and efficiency, tunnel boring machines (TBMs) are widely used in the construction of deep and long tunnels. TBMs are sensitive to rock mass conditions during excavation. Unknown rock mass information may lead to inappropriate operating parameters and even low safety and efficiency of excavation. However, the complex environment and limited space make it hard to obtain rock mass parameters by direct observation or in-situ tests (Yamamoto et al., 2003). An accurate and reliable prediction method of rock mass parameters is therefore required for the security and efficiency of TBMs.

Research on TBM performance prediction has confirmed the correlation between rock mass and TBM driving parameters. Those studies provide a reference for the prediction of rock mass parameters using TBM driving parameters as input variables. Ozdemir obtained the TBM penetration rate based on full-scale laboratory cutting tests and multiple regression analysis, following which one of the most famous prediction models of TBM penetration rate, the CSM model, was established (Rostami et al., 1977). The Norwegian University of Science and Technology developed another widely used prediction model of TBM performance, the NTNU model, after regression analysis of rock mass parameters and driving parameters (Bruland, 1998). In addition to these two typical prediction models of TBM performance, other common rock classification methods were also used in this field. Hamidi et al. (2010) analyzed the relationship between the TBM field penetration index (FPI) and the five basic parameters of the Rock Mass Rating (RMR) system; the results showed good correlations among FPI, the orientation of discontinuities (α), and the uniaxial compressive strength (UCS), which is helpful in predicting the FPI value. The parameters QTBM, RME, and GSI were also used to predict TBM performance by multiple regression (Barton, 2000; Benato and Oreste, 2015; Frough et al., 2015; Preinl, 2006). Gong and Zhao (2009) conducted multivariable regression analysis on the TBM penetration rate using rock mass parameters, and their research proved that


Corresponding author.
E-mail address: sdgeowsg@gmail.com (S. Wang).

https://doi.org/10.1016/j.tust.2019.103103
Received 1 March 2019; Received in revised form 13 July 2019; Accepted 25 August 2019
Available online 11 October 2019
0886-7798/ © 2019 Elsevier Ltd. All rights reserved.

Nomenclature

List of symbols

α     The angle between tunnel axis and the plane of weakness
AI    Artificial intelligence
ANN   Artificial neural networks
AR    Advance rate
Bi    Brittleness index
BPNN  Back propagation neural network
CSM   Colorado School of Mines
DE    Differential Evolution
DPW   Distance between planes of weakness
FL    Fuzzy logic
GSI   Geological Strength Index
GWO   Grey Wolf Optimizer
Jv    Volumetric joint count
MAPE  Mean absolute percentage error
NTNU  Norwegian University of Science and Technology
PR    Penetration rate
PSO   Particle swarm optimization
QTBM  Modified Q system
R²    Determination coefficient
Ra    Roughness average
RME   Rock mass excavability
RMR   Rock mass rating
RPM   Revolutions per minute
Rt    Roughness maximum
SA    Simulated annealing algorithm
SVR   Support vector regression
Th    Thrust of TBM cutterhead
Tor   Torque of TBM cutterhead
UCS   Uniaxial compressive strength
UI    Utilization index

UCS, brittleness (Bi), and volumetric joint count (Jv) had a certain effect on TBM penetration, with a negative correlation with UCS and positive correlations with Bi and Jv. In all of the above studies, regression analysis was used to obtain the relationship between the two types of parameters, which formed the basis of TBM performance prediction. Encouraged by the results of these studies, we attempt to develop a rapid and reasonable way to obtain rock mass parameters during TBM excavation, i.e., to establish a prediction model with TBM driving parameters as input variables. The key to this problem is to capture the complex and nonlinear relationship between rock mass and TBM driving parameters.

In essence, the relationship between TBM driving and rock mass parameters is a regression problem, and artificial intelligence algorithms are widely used to solve such problems. For instance, Yagiz established multiple prediction models of the TBM penetration rate using several artificial intelligence methods, such as support vector regression (SVR), Differential Evolution (DE), and the Grey Wolf Optimizer (GWO) (Mahdevari et al., 2014; Yagiz and Karahan, 2015). Particle swarm optimization (PSO), artificial neural networks (ANNs), and fuzzy logic (FL) were also used in this field (Minh et al., 2017; Okubo et al., 2003; Yagiz and Karahan, 2011), with acceptable results. In addition to the penetration rate, other TBM properties such as the cutter load and chamber pressure have been studied by previous researchers. Sun et al. (2018) introduced an effective method to handle heterogeneous field data and predicted the thrust and torque of TBM. Gao et al. (2019) adopted sequential machine learning methods, including recurrent neural networks (RNNs), long short-term memory (LSTM), and the gated recurrent unit (GRU), to continuously evaluate the thrust, torque, velocity, and chamber pressure of TBM. Zhang et al. (2019) applied cluster analysis to big TBM operational data and obtained accurate prediction results of rock types. These studies reflect the rapid development of artificial intelligence and its significant contribution to understanding the relationship. The back propagation neural network (BPNN) is one of the most notable of these methods. A complete neural network consists of several nodes (called neurons), which weigh the input variables and perform nonlinear calculations to obtain a nonlinear prediction model. The errors between the predicted and actual output are then obtained and transmitted to each neuron. Based on these errors, the key calculation factors of each neuron are continuously optimized until the optimal nonlinear model is obtained. BPNN-based models have recently been employed successfully to obtain the relationship between rock mass and TBM driving parameters (Benardos et al., 2004; Zhao et al., 2007). Traditional BPNN adjusts the key factors (weights and biases) using a linear optimization algorithm, usually the gradient descent method. However, model optimization by gradient descent is driven by the gradient, and the prediction results depend on the initial value of the target. If the initial value is close to a local optimum, the model often converges in this locality and stops the optimization procedure, because the gradient remains zero there. Compared with linear optimization algorithms, non-linear optimization algorithms based on random search can globally search for the optimal solution and better escape a local optimum. Therefore, to offset the shortcomings of linear algorithms and solve the problem of local optima, nonlinear optimization methods are adopted, such as the simulated annealing algorithm (SA), which is already widely used to improve the BPNN method; such hybrid methods are conventionally hyphenated, e.g., SA-BPNN. Chen et al. (2010) solved a prediction problem in the electric industry using a hybrid method that combined traditional BPNN and the SA optimization algorithm, effectively predicting several parameters of wire electrical discharge machining, such as the cutting velocity, roughness average (Ra), and roughness maximum (Rt). In metallurgy, hybrid methods combining BPNN and SA were used to predict the pyrite remaining in a coal washing waste pile (Bahrami and Doulati Ardejani, 2016). Similar methods were also used in other fields, such as water quality analysis and earthquake prediction (Chaki and Ghosal, 2011; Gandomi et al., 2016).

Referring to the applications of this hybrid algorithm, we adopt the SA-BPNN method and establish a corresponding prediction model. For both the gradient descent and SA methods, the optimization targets are the weights and biases, which are the key factors of the BPNN model. At the beginning, gradient descent is used to obtain relatively good weights and biases for the BPNN model. Then, SA is adopted to further train the BPNN toward a global optimal solution and a more suitable model. In this procedure, the SA algorithm applies random perturbations over multiple iterations and globally searches for an optimal solution, which a linear optimization method cannot do; this is beneficial for escaping local extrema and obtaining optimal results. Accordingly, we modified the traditional BPNN by integrating the SA method into it, rather than using BPNN directly, to predict the rock mass parameters. Although this kind of integrated algorithm has been introduced in the computer science field, to the best of our knowledge it has not been applied to predicting rock mass parameters. Further, this research also provides an approach for improving the prediction accuracy of rock mass parameters by integrating a non-linear optimization algorithm into BPNN.

This research adopts the BPNN and SA methods to predict rock mass parameters on the basis of TBM driving parameters. In Section 2, the basic principles of the BPNN, SA, and SA-BPNN methods are introduced. Section 3 provides an engineering background and the results of the SA-BPNN method applied to engineering data. In Section 4, we


carry out a comparative analysis between SA-BPNN and traditional BPNN in the prediction of rock mass parameters.

2. Method

In this paper, SA-BPNN is used to train a prediction model of rock mass parameters. The weights and biases of the BPNN are calculated by gradient descent and are further optimized by the SA method to obtain more suitable ones and finally improve the prediction accuracy. In this section, we introduce the BPNN and SA algorithms and explain their combination.

2.1. Introduction to BP neural network

BPNN was proposed by a group of scientists led by Rumelhart et al. (1986). A BPNN model has three major components: layers (input, hidden, and output layers), neurons, and the weights between neurons. Fig. 1 shows the two-step calculation procedure of each neuron in the BPNN method. First, neurons in the output layer produce the initial results of the network. The computation between input and output in the BPNN model can be expressed as (Hechtnielsen, 1988)

$$Y_{output} = f_{output}\Big(\sum_{j=1}^{H} w_{kj}\, f_{hidden}\Big(\sum_{i=1}^{I} w_{ji} X_i + b_j\Big) + b_k\Big) \qquad (1)$$

where I and H are the numbers of neurons in the input and hidden layers; b_j and b_k are respectively the hidden- and output-layer biases; f_{hidden} and f_{output} are the respective transfer functions of the hidden and output neurons; the w_{ji} are the weights connecting the input layer and the hidden layer; and the w_{kj} are the weights between the hidden layer and the output layer.

Then the error between the output and the actual value is measured by Eq. (2). If the error exceeds the tolerance, the weights and biases are modified by gradient descent. The output values are recomputed with the modified weights and biases, and the above process is repeated until the output is within the tolerance:

$$E = \frac{1}{N}\sum_{n=1}^{N} (\hat{Y}_n - Y_n)^2, \quad n = 1, 2, 3, \ldots, N \qquad (2)$$

where \hat{Y}_n and Y_n are the respective predicted and actual outputs for the n-th training vector, and N is the number of training samples.

Fig. 1. Principle of neuron calculation (Mcculloch and Pitts, 1943).

2.2. Introduction to the simulated annealing algorithm

Simulated annealing is an optimization algorithm proposed by Metropolis et al. (1953). Based on the physical process of annealing, it is a meta-heuristic technique for solving complex optimization problems.

SA is a general optimization algorithm with three steps: a heating process, an isothermal process, and a cooling process. The heating process is also an algorithm-initialization process, in which an initial temperature and solution are given to start the algorithm. In the isothermal process, multiple iterations are conducted at the same temperature to update the vector of solutions. In each iteration, the old vector of solutions is given a small random perturbation related to the current temperature, using Eq. (3), and the new vector of solutions is accepted if the results become better. Otherwise, the new solution is accepted randomly, with an acceptance probability controlled by the temperature, as in Eq. (4) (Kirkpatrick et al., 1983). After a certain number of iterations, the temperature is reduced by a certain proportion, as in Eq. (5); this is called the cooling process. When the cooling process ends, the isothermal process is conducted again to obtain a new solution vector at a lower temperature. The isothermal and cooling processes are conducted alternately until the temperature reaches the prescribed minimum. Temperature is the key factor in the training process of the SA method, and it affects not only the size of the random perturbation but also the probability of accepting a worse solution. At the primary stage of the SA method, a high temperature leads to rapid random perturbation, which is good for accelerating the global search and escaping local optima. As the iteration continues and the temperature descends, random disturbances decrease and a worse solution is less likely to be accepted, so the model tends to stabilize and converge. This is expressed as

$$x_i' = x_i + r \cdot v_i \cdot \frac{T}{t} \qquad (3)$$

$$P(\Delta E) = \begin{cases} 1, & \text{if } \Delta E < 0 \\ \exp\left(-\dfrac{\Delta E}{T}\right), & \text{otherwise} \end{cases} \qquad (4)$$

where T is called the analogous temperature, which controls the cooling down or "annealing" dictated by a specified cooling schedule; x_i is the current vector of solutions and x_i' is the solution given a small random perturbation; r is a uniformly distributed random number in [−1, 1]; v_i are step-length vectors with the same length as x; t is the initial temperature; E is the corresponding objective function value; and ΔE is the change in the objective value from the old solution to the new one. As the iteration continues, T decreases, and the acceptance probability of a worse solution decreases. T is updated using the following equation (Matias et al., 2014; Tavakkoli-Moghaddam et al., 2008):

$$T_{K+1} = T_K \cdot \theta \qquad (5)$$

where T_K and T_{K+1} represent the analogous temperatures in the K-th and (K+1)-th iterations, and θ is a constant which controls the speed of descent of T; θ is usually set between 0.95 and 0.99.

2.3. Hybrid artificial neural network and simulated annealing

Conventional BP neural networks suffer from several drawbacks, such as a low convergence rate, sensitivity to weight initialization, and a high probability of entrapment in local extrema (Chatterjee et al., 2009; Lee et al., 2001; Zain et al., 2011). The SA algorithm introduces randomness into the neural network, enabling the weights and biases of the BPNN to effectively avoid becoming entrapped in local extrema. To obtain the global optimal solution, a relatively high initial temperature and slow cooling speed are often adopted, which leads to long training times. To solve this problem, we adopt gradient descent at the beginning of BPNN training to obtain a relatively good solution; on this basis, SA is adopted to train the BPNN model with a relatively low temperature and rapid cooling speed. This method saves training time while still obtaining a reasonable model. The training process of SA-BPNN is summarized as follows:

Step 1: Normalize the data using Z-score normalization, as defined


in Eq. (6). After normalization, the average value and standard deviation of the data are 0 and 1, respectively:

$$X = \frac{x - \mu}{\sigma} \qquad (6)$$

where μ and σ are respectively the average and standard deviation of all samples. We normalize the data of each parameter separately.

Step 2: Set the weight and bias of each neuron randomly. Then train the neural network with gradient descent until convergence, and obtain a series of weights and biases as the initial solution vector for the SA method.

Step 3: Use the SA method to acquire optimal weights and biases. In this step, the optimal weights and biases are calculated during the iteration process and the decrease of T, starting from the initial ones obtained in Step 2. As the main difference between SA-BPNN and traditional BPNN, this step can often obtain more suitable weights and biases, which is the key to improving the prediction accuracy.

Step 4: Terminate the training process if the temperature T decreases to its limit, or if the E value of the prediction results satisfies the expectation. Otherwise, the optimization algorithm returns to Step 3 to train a better solution of weights and biases.

SA-BPNN is conducted using the above four steps. To establish the proposed prediction model, the method was programmed in MATLAB; the flowchart is shown in Fig. 2. The part of the program before the initial convergence is nearly the same as for conventional neural networks. After the initial convergence obtained by gradient descent, the SA algorithm is implemented to calculate more suitable weights and biases.

Fig. 2. Flowchart of the SA-BPNN method (Chen et al., 2010).

3. Dataset establishment and results

3.1. Data collection

The Songhua River water conveyance project is the cross-regional water diversion project with the largest investment scale in Jilin Province. The project is located at the center of Jilin Province, with a length of 263.58 km. Approximately 72 km of the project between Fengman Reservoir and the Yinma River is connected by tunnel and divided into four sections. The No. 4 bid of the project runs from northeast to southwest, with an elevation from 264 m to 484 m and a length of 22.9 km. The study area is at the mileage between K70 + 690 and K58 + 100, as shown in Fig. 3. The study area is divided into two parts. The first part passes through sections with multiple lithologies, including limestone, diorite, tuff, and sandstone, and is located between K70 + 690 and K59 + 400; in this part, 320 samples were collected to train and test the SA-BPNN model. The second part, between K58 + 900 and K58 + 100, passes through granite; another 40 samples were collected there to verify the generalization of the model under different lithologies. The tunnel of this section was excavated with an open TBM equipped with 56 cutters, and the cutter-head had a diameter of 7.9 m.

Before training the proposed model, appropriate input variables and prediction targets had to be selected. In this paper, TBM-related performance and operational parameters are considered as input parameters according to previous research. TBM performance is mainly measured by the penetration rate (PR), utilization index (UI), and advance rate (AR) (Mahdevari et al., 2014). Considering how we collected samples, with a short interval between sampling positions, often several meters, the selected index should change continuously along the tunnel axis; therefore, PR was selected as an input variable. Operational parameters mainly influencing TBM excavation were also chosen, including TBM thrust (Th), torque (Tor), and revolutions per minute (RPM) of the cutter-head. These parameters were often involved in previous research (Preinl, 2006; Ribacchi and Fazio, 2005; Rostami, 2008).

Four rock mass parameters were selected as prediction targets in the same way. Besides strength parameters such as UCS and Bi, DPW and α are also important parameters which influence the tunnel excavation. They characterize the condition of joints and rock integrity, which changes the overall properties of the rock (Zhu and Zhao, 2013). Mahdevari et al. (2014) chose several rock mass parameters as input variables for predicting the TBM penetration rate, and UCS and α had slightly greater effects than the others. Gong and Zhao (2007) and Zhao (2007) proved the effect of Bi and Jv, which is an important index of rock integrity used to express joint spacing and can also be measured by DPW. Liu et al. (2015) proved that the bedding direction, which is measured by α in this paper, has a significant influence on the rock strength. Mikaeil et al. (2009) and Bejari (2013) evaluated the


Fig. 3. Location of the No. 4 bid of the Songhua River water conveyance project and the study area.

penetrability using UCS, Brazilian tensile strength (BTS), Bi, DPW, and α, which are regarded as the key factors determining the efficiency of TBM excavation under approximately the same operational parameters. Referring to those studies, the four rock mass parameters (UCS, Bi, DPW, and α) were considered to be relevant factors in TBM excavation and selected as prediction targets, and corresponding field and laboratory tests were carried out. At each location of the field and laboratory tests, the TBM driving parameters (RPM, Th, Tor, PR) were recorded. A total of 320 items of rock mass and TBM machine data made up a dataset used to train and test the proposed model. The basic information is listed in Table 1. Accordingly, the frequencies of each parameter in the dataset were obtained. Further, assuming the parameters follow a Gaussian distribution, the probability density curve of each parameter was also obtained from its average value and standard deviation. The distribution information of each parameter is given in Fig. 4 and Fig. 5.

As Fig. 4 and Fig. 5 show, DPW, Bi, and Th are in good accordance with the Gaussian distribution, while PR and RPM are also close to it and have more samples near the mean value. As for α and UCS, there is no tendency to concentrate on the mean value. Excluding the samples greater than 70° and less than 20°, α is approximately uniformly distributed. UCS samples are mainly distributed between 20 MPa and 60 MPa, accounting for about 70% of the total number of samples. The 360 samples collected in the study area were used for training and testing the proposed model. The study area was divided into two parts. The first part is located at the mileage of K70 + 690–K59 + 400, passing through sections with multiple lithologies, including limestone, diorite, tuff, and sandstone. We collected 320 samples in this part. Among these, 280 samples were randomly chosen and used for training the model, and the remaining 40 samples were selected as the first test set to evaluate the prediction accuracy. The second part is located at the mileage of K58 + 900–K58 + 100, with completely different lithology than the first part (granite). There were 40 samples in the second test set, to verify the generalization of the model in different lithologies.

3.2. Design of neural network

The main factors of the neural network structure are the numbers of neurons in the input, hidden, and output layers; these three numbers determine the structure of the network. Generally, the numbers of neurons in the input and output layers equal the numbers of input and output variables, respectively. The most suitable number of neurons in the hidden layer lies in an interval, which is usually calculated as (Matias et al., 2014)

$$H = (I + O)^{1/2} + a \qquad (7)$$

where I, O, and H represent the numbers of neurons in the input, output, and hidden layers, respectively, and a is a constant taking values in the interval [1, 10]. Therefore, we tried the range [4, 13]. Under the same parameters and number of iterations, the relation between the number of hidden-layer neurons and the mean squared training error is shown in Fig. 6: the minimum mean squared error appears when the number of neurons is 8. The numbers of neurons in the input, hidden, and output layers were therefore chosen as 4, 8, and 4, respectively, which determined the structure of the neural network. In addition, activation functions of each neuron in the hidden and output layers were used to add nonlinear factors and solve problems that cannot be solved by linear models; these have an important influence on prediction accuracy. In this paper, the TanSig and Purelin functions, which are widely used in neural networks, were selected as the activation functions in the hidden and output layers, respectively (Fig. 7):

$$\mathrm{tansig}(x) = \frac{2}{1 + \exp(-2x)} - 1 \qquad (8)$$

$$\mathrm{purelin}(x) = x \qquad (9)$$

The main factors controlling the SA method are the initial and minimum temperatures, the iterations per temperature step, and the temperature descending index. The minimum temperature is usually selected as 1 °C. Considering training time and prediction errors, we tried several combinations of the other three factors, and the parameter combination with the smallest error was selected; when the error index was similar, the combination with the shorter training time was selected. The initial temperature affects both the iteration count and the randomness of the training process. A large initial temperature allows the algorithm to escape local minima more frequently, but often leads to a long training time (Zameer et al., 2014). Since the inputs of SA are relatively suitable

Table 1
Input and output parameters with their maximum, minimum, mean value, and standard deviation.

Parameter      Maximum    Minimum   Mean value   Standard deviation
α (°)          78.00      26.00     51.61        12.66
UCS (MPa)      98.90      14.09     51.20        20.65
DPW (cm)       70.90      10.50     38.23        8.97
Bi             17.16      2.05      8.42         3.10
Th (kN)        18819.53   4528.31   11929.02     3103.87
Tor (kN·m)     3406.07    126.21    2166.69      733.95
PR (mm/min)    91.84      22.97     63.75        13.42
RPM (r/min)    16.58      3.40      9.99         3.87
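The sizing rule in Eq. (7) and the activation functions in Eqs. (8) and (9) can be sketched as follows. This is a Python illustration (the paper's own implementation is in MATLAB), and rounding the interval bounds to the nearest integer is an assumption made so that the result matches the range [4, 13] the paper reports trying.

```python
import math

def hidden_size_bounds(n_in, n_out):
    """Eq. (7): H = sqrt(I + O) + a with a in [1, 10].
    Bounds rounded to the nearest integer (assumption)."""
    base = math.sqrt(n_in + n_out)
    return round(base + 1), round(base + 10)

def tansig(x):
    """Eq. (8): tan-sigmoid hidden-layer activation (mathematically equal to tanh)."""
    return 2.0 / (1.0 + math.exp(-2.0 * x)) - 1.0

def purelin(x):
    """Eq. (9): linear output-layer activation."""
    return x

# With 4 input and 4 output neurons, the candidate range is [4, 13].
lo, hi = hidden_size_bounds(4, 4)
```

In the paper's setting the best model inside this range has 8 hidden neurons, found by scanning the candidate sizes and comparing mean squared training errors (Fig. 6).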


Fig. 4. Rock mass parameters distribution of the database: (a) α, (b) UCS, (c) DPW, and (d) Bi.

weights and biases obtained by gradient descent instead of completely random values, a high initial temperature was abandoned, and we tried temperatures in the range from 100 °C to 400 °C with a step of 100 °C. Iterations per temperature step were required to train the network, for which a relatively small value is recommended, because the benefits of the SA algorithm are often lost under too many iterations per temperature (Bahrami and Doulati Ardejani, 2016); therefore, only 50, 100, 150, and 200 were tried in this paper. Regarding the temperature descending index, the range from 0.95 to 0.99 is widely used in the SA method; we tried this range with a step of 0.01. After testing several combinations of those factors, we selected 200 °C, 150, and 0.98, respectively, as the initial temperature, iterations per temperature, and temperature descending index. We found that the prediction accuracy nearly reached its maximum with these values; higher values of any of the three indexes did not obviously improve accuracy, while resulting in longer training times.

3.3. Results

As mentioned above, we used the SA-BPNN method to train a prediction model based on 320 samples of rock mass and TBM driving parameters. To train the proposed model, we programmed the SA-BPNN method in MATLAB; the code structure is shown in Fig. 8. The SA algorithm is an independent module nested in the BPNN method to solve for the optimal weights and biases; the weights and biases calculated by gradient descent serve as the initial solution of the SA module. This program implements the SA-BPNN method used in this paper. The whole modeling process takes 65 s.

At the beginning of training, the training set is randomly divided into 10 phases (P1–P10) to conduct 10-fold cross-validation. Each phase has 28 different samples and is used to evaluate the models with different controlling factors, and the best models are chosen according to their mean absolute percentage error (MAPE) and determination coefficient (R²). The expressions of the two indices are provided as Eq. (10) and Eq. (11), respectively (Fattahi and Bazdar, 2017; Fazeli et al., 2013), and the chosen model prediction results of the 10 phases are listed in Table 2.

$$\mathrm{MAPE} = \frac{1}{N}\sum_{n=1}^{N} \left| \frac{\hat{Y}_n - Y_n}{Y_n} \right| \qquad (10)$$

$$R^2 = 1 - \frac{\sum_{n=1}^{N} (\hat{Y}_n - Y_n)^2}{\sum_{n=1}^{N} (Y_n - \bar{Y})^2} \qquad (11)$$

To obtain stable and acceptable prediction models, each phase is tested by a series of indexes, i.e., the MAPE and R² of the four parameters. In addition, the average value and the range of the MAPE and R² over the 10 phases are also calculated to evaluate the accuracy and robustness of the method. As Table 2 shows, the average MAPEs of α, UCS, DPW, and Bi are 7.63%, 11.54%, 11.06%, and 10.15%, respectively, and the


Fig. 5. TBM driving parameters distribution of the database: (a) Th, (b) Tor, (c) PR, and (d) RPM.

Fig. 6. Curve of the mean squared error versus the number of hidden layer
neurons.

Fig. 7. Architecture of the proposed neural network.

corresponding average R² values are 0.888, 0.797, 0.799, and 0.723, respectively. These results demonstrate the accuracy of the method. Meanwhile, the prediction accuracies of the phases are relatively close to each other: the ranges of the predicted MAPE and R² of the four targets across the 10 phases are all below 1% and 0.03, which reflects that the performance of the method on the 10 phases is similar and proves its robustness.

The SA-BPNN model was then applied on the first test set, which consisted of 40 randomly selected samples. The prediction results and actual values were compared at each location where the 40 samples were obtained; the comparison of the results is shown in Figs. 9–12.

Figs. 9–12 show the prediction results of the four parameters for each sample. In general, predicted and measured values exhibit excellent agreement. On the basis of the statistical results, the MAPEs of


Fig. 9. Predicted values of α.

Fig. 8. Code structure for the SA-BPNN algorithm.
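To make the structure of Fig. 8 concrete, the sketch below outlines a minimal simulated-annealing loop of the kind that can be nested in a BPNN to search for its weight vector. The hyper-parameter values mirror those chosen above (initial temperature 200, 150 iterations per temperature, cooling index 0.98); the perturbation scheme and loss interface are illustrative assumptions, not the authors' MATLAB code.

```python
import math, random

def sa_optimize(loss, w0, t0=200.0, iters_per_t=150, cooling=0.98,
                t_min=1e-3, step=0.1, seed=0):
    """Minimal SA loop in the spirit of Fig. 8: perturb the weight vector and
    accept a worse candidate with probability exp(-dE/T); geometric cooling."""
    rng = random.Random(seed)
    w, e = list(w0), loss(w0)
    best_w, best_e = list(w), e
    t = t0
    while t > t_min:
        for _ in range(iters_per_t):
            cand = [wi + rng.uniform(-step, step) for wi in w]
            ec = loss(cand)
            # Metropolis acceptance criterion
            if ec < e or rng.random() < math.exp(-(ec - e) / t):
                w, e = cand, ec
                if e < best_e:
                    best_w, best_e = list(w), e
        t *= cooling  # temperature descending index of 0.98
    return best_w, best_e

# toy usage: recover the minimizer of a simple quadratic "loss"
best_w, best_e = sa_optimize(lambda w: (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2,
                             [0.0, 0.0])
print(best_e < 0.1)  # True
```

In the actual hybrid model, `loss` would be the network's training error as a function of all BPNN weights and biases.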

the four parameters (α, UCS, DPW, and Bi) are 7.7%, 13.9%, 12.9%, and 11.0%, respectively, with corresponding R² values of 0.845, 0.737, 0.713, and 0.657. These accuracy indices reflect that the prediction results are close to the measured results. The prediction results of α are obviously better than those of the other three parameters, and all of the prediction errors of α are within 10°. Similarly, the prediction errors of UCS, DPW, and Bi are acceptable, with an average MAPE and R² of 12.6% and 0.70, respectively. Most errors of these three parameters are less than 8 MPa, 8 cm, and 2, respectively; only samples 12, 13, 37, and 38 of UCS, samples 13, 15, 16, 29, and 32 of DPW, and samples 4, 5, 15, and 35 of Bi exceed these limits, accounting for 10%, 12.5%, and 10% of the total samples, respectively.

Fig. 10. Predicted values of UCS.
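The per-sample screening above (counting samples whose absolute error exceeds 8 MPa for UCS, 8 cm for DPW, or 2 for Bi) amounts to an exceedance-rate calculation; a small illustrative sketch with hypothetical UCS values follows.

```python
def exceedance_rate(y_pred, y_true, limit):
    """Fraction of samples whose absolute prediction error exceeds `limit`
    (e.g. 8 MPa for UCS, 8 cm for DPW, 2 for Bi)."""
    errors = [abs(p - t) for p, t in zip(y_pred, y_true)]
    return sum(e > limit for e in errors) / len(errors)

# hypothetical UCS values (MPa): 1 of 4 samples misses by more than 8 MPa
print(exceedance_rate([90, 101, 120, 77], [95, 100, 130, 80], 8))  # 0.25
```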
To verify the generalization of the SA-BPNN model, we applied the same model to the second test set, which was collected from the granite section and has a totally different lithology. The prediction results are shown in Figs. 13–16.

Fig. 11. Predicted values of DPW.

The two curves in Figs. 13–16 still show good agreement, with the MAPEs of α, UCS, DPW, and Bi equal to 11.07%, 17.01%, 11.46%, and 16.33%, respectively, and the corresponding R² values of 0.692, 0.789, 0.537, and 0.68. This proves the prediction results of the second

Table 2
Predicted results of 10-fold cross-validation.
Number  Training           Testing   Results of α     Results of UCS   Results of DPW   Results of Bi
                                     MAPE     R²      MAPE     R²      MAPE     R²      MAPE     R²

1 Training data set P1 7.60% 0.892 11.60% 0.792 11.10% 0.798 9.93% 0.733
2 Training data set P2 7.75% 0.886 11.49% 0.802 11.35% 0.784 10.32% 0.717
3 Training data set P3 7.74% 0.886 11.73% 0.782 10.77% 0.814 9.92% 0.733
4 Training data set P4 7.50% 0.888 11.53% 0.801 10.78% 0.812 10.30% 0.715
5 Training data set P5 7.87% 0.884 11.57% 0.798 10.99% 0.809 10.43% 0.713
6 Training data set P6 7.84% 0.884 11.35% 0.810 11.26% 0.787 10.64% 0.711
7 Training data set P7 7.66% 0.890 11.43% 0.802 11.32% 0.792 10.18% 0.724
8 Training data set P8 7.32% 0.893 11.76% 0.784 10.69% 0.808 10.43% 0.714
9 Training data set P9 7.37% 0.893 11.30% 0.803 11.13% 0.795 9.71% 0.735
10 Training data set P10 7.69% 0.885 11.64% 0.793 11.24% 0.794 9.66% 0.735
Average – – 7.63% 0.888 11.54% 0.797 11.06% 0.799 10.15% 0.723
Range – – 0.55% 0.009 0.41% 0.028 0.66% 0.03 0.98% 0.024
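The per-fold averages and ranges reported in Table 2 can be reproduced with a simple k-fold split and summary. The sketch below is illustrative (the train/test procedure itself is not shown); the summary of the α MAPE column matches the Average and Range rows of Table 2.

```python
import random

def kfold_indices(n, k=10, seed=42):
    """Split n sample indices into k disjoint folds (the 10 phases of Table 2)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def summarize(metric_per_fold):
    """Average and range (max - min) of a per-fold metric, as in Table 2."""
    avg = sum(metric_per_fold) / len(metric_per_fold)
    return avg, max(metric_per_fold) - min(metric_per_fold)

folds = kfold_indices(320, k=10)
assert sum(len(f) for f in folds) == 320  # the folds partition the 320 samples

# the α MAPE values of the 10 phases, taken from Table 2
avg, rng_ = summarize([7.60, 7.75, 7.74, 7.50, 7.87, 7.84, 7.66, 7.32, 7.37, 7.69])
print(round(avg, 2), round(rng_, 2))  # 7.63 0.55
```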


Fig. 12. Predicted values of Bi.

Fig. 16. Bi predicted values of the second test set.

test set to be accurate and relatively acceptable. However, the accuracy of the results is not as good as that of the first test set, in terms of either average errors or correlation. The average MAPE and R² of the second test set are approximately 14% and 0.68, respectively, while the corresponding values for the first test set are 11.4% and 0.74. Moreover, using the same limits as for the first test set, there are 4, 11, 3, and 7 samples of the four targets (α, UCS, DPW, and Bi) that exceed the standard, accounting for 10%, 27.5%, 7.5%, and 17.5% of the total samples. In general, the R² values decreased compared with the results of the first test set, especially the R² of DPW. This phenomenon may be attributed to the different data distributions of the two data sets: the DPW range of the training set is from 10.50 to 70.90 cm, while that of the second test set is from 21.20 to 49.60 cm. Given a similar error between the measured and predicted values, the total sum of squares ∑_{n=1}^{N}(Y_n − Ȳ)² in the denominator of Eq. (11) decreases, which finally results in a lower R². The above results prove that although the model can be used to predict the four rock mass parameters under granite conditions, it is more suitable for use in areas of limestone, diorite, tuff, or sandstone.

Fig. 13. α predicted values of the second test set.
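The sensitivity of R² to the spread of the measured values can be illustrated numerically. In the sketch below (synthetic values, chosen only to mimic a wide training-set-like DPW spread versus a narrow test-set-like spread), every sample carries the same +2 cm error, yet the narrower spread yields a lower R² because the denominator of Eq. (11) shrinks.

```python
def r_squared(y_pred, y_true):
    """Determination coefficient, Eq. (11)."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((p - t) ** 2 for p, t in zip(y_pred, y_true))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# same +2 cm error on every sample, two different measured DPW spreads (cm)
wide   = [15.0, 30.0, 45.0, 60.0, 70.0]   # spread similar to the training set
narrow = [25.0, 30.0, 35.0, 40.0, 45.0]   # spread similar to the second test set
r2_wide   = r_squared([y + 2.0 for y in wide], wide)
r2_narrow = r_squared([y + 2.0 for y in narrow], narrow)
print(r2_wide > r2_narrow)  # True: equal errors, smaller ss_tot, lower R^2
```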

4. Discussion

4.1. Result comparison between SA-BPNN and conventional BPNN models

The results obtained in Section 3 prove that the SA-BPNN-based prediction model has good prediction ability. However, the improvement effect of SA on BPNN has not yet been verified in isolation. For this purpose, the traditional BPNN was used to establish another prediction model, and the results obtained by the two methods were compared. To maximize the prediction ability of the BPNN model, we redesigned its structure and reselected its key factors. The number of neurons in the hidden layer was calculated by Eq. (7) again. As shown in Fig. 17, we determined a BPNN structure with 12 neurons in the hidden layer.

Fig. 14. UCS predicted values of the second test set.
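The resulting comparison network therefore has 4 inputs, 12 hidden neurons, and 4 outputs, with TanSig (hyperbolic tangent) and Purelin (identity) activations. A minimal illustrative forward pass is sketched below in Python; the input normalization and weight values are hypothetical, and the paper's own implementation is in MATLAB.

```python
import math

def tansig(x):
    """MATLAB's tansig is the hyperbolic tangent."""
    return math.tanh(x)

def forward(x, w1, b1, w2, b2):
    """One forward pass of the 4-12-4 network: tansig hidden layer, purelin
    (identity) output layer. Weight shapes: w1 is 12x4, w2 is 4x12."""
    hidden = [tansig(sum(wij * xj for wij, xj in zip(row, x)) + bi)
              for row, bi in zip(w1, b1)]
    return [sum(wij * hj for wij, hj in zip(row, hidden)) + bi
            for row, bi in zip(w2, b2)]

# shape check with zero weights: the output equals the output biases
x = [0.5, 0.1, -0.3, 0.8]                      # normalized Th, Tor, PR, RPM (hypothetical)
w1 = [[0.0] * 4 for _ in range(12)]; b1 = [0.0] * 12
w2 = [[0.0] * 12 for _ in range(4)]; b2 = [0.1, 0.2, 0.3, 0.4]
print(forward(x, w1, b1, w2, b2))  # [0.1, 0.2, 0.3, 0.4]
```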

Fig. 15. DPW predicted values of the second test set.

Fig. 17. Curve of mean squared error versus the number of neurons in the hidden layer.


TanSig and Purelin were used as the activation functions in the hidden and output layers, respectively. The BPNN algorithm was again programmed in MATLAB, and the whole training procedure took 15 s.

The results of the 40 samples tested by the BPNN- and SA-BPNN-based models were compared and are shown in Figs. 18–21. In addition, the MAPE and R² values predicted by the two models are given in Table 3. The results show that the BPNN model has a certain prediction ability, but it is obviously worse than that of SA-BPNN. For the four parameters (α, UCS, DPW, and Bi), there are, respectively, 28, 24, 26, and 23 samples whose prediction results from the BPNN model are obviously worse than those from the SA-BPNN model. As shown in Table 3, the MAPE values predicted by BPNN are 21.6%, 22.8%, 22.2%, and 22.9%, and the corresponding R² values are 0.651, 0.328, 0.449, and 0.529. Compared with the SA-BPNN model, these two indices show that the prediction ability of the BPNN model is obviously worse.

It is worth noting that sharp rises and falls of the rock mass parameters greatly affect TBM excavation, so the ability to judge the changing trend of the rock mass parameters should also be used to evaluate the model. In this aspect, SA-BPNN is again better than BPNN. Taking the result of α for the 32nd sample as an example, the measured value is 28°, which is very low in the dataset. Although the corresponding result of SA-BPNN is 11° higher than the actual value, the downward trend of α can be seen at this location, whereas the prediction result of BPNN shows an incorrect trend at the same location. There are many similar examples, such as the α of samples 1, 21, and 26; the UCS of samples 6 and 14; the DPW of samples 21 and 31; and the Bi of samples 17 and 24. This proves that SA-BPNN can better predict a changing trend. Furthermore, the better prediction ability of SA-BPNN is due to the optimization of SA on the BPNN algorithm, which is the only difference between SA-BPNN and BPNN.

Fig. 18. Comparison of α results predicted by BPNN and SA-BPNN.

Fig. 19. Comparison of UCS results predicted by BPNN and SA-BPNN.

Fig. 20. Comparison of DPW results predicted by BPNN and SA-BPNN.

Fig. 21. Comparison of Bi results predicted by BPNN and SA-BPNN.

Table 3
Statistical parameters of the SA-BPNN and simple BPNN models for the prediction of rock mass parameters.

         BPNN              SA-BPNN
      MAPE     R²       MAPE     R²
α     19.2%    0.685    7.7%     0.845
UCS   20.8%    0.382    13.9%    0.737
DPW   21.6%    0.494    12.9%    0.713
Bi    20.9%    0.566    11.0%    0.657

4.2. Application of other non-linear optimization algorithms in BPNN

A non-linear algorithm (the simulated annealing algorithm) was adopted to improve the ability of BPNN to predict rock mass parameters. The results show that a non-linear algorithm is more suitable for integration with BPNN than the gradient descent method. However, there is more than one type of non-linear optimization algorithm. To extend our research, two other non-linear optimization algorithms, the Grey Wolf Optimizer (GWO) and the Genetic Algorithm (GA), are compared with the SA method. For this purpose, GWO-BPNN and GA-BPNN are adopted and applied to the second data set, and their results are compared with the above-mentioned ones.

GWO is a kind of meta-heuristic with global optimization ability, which is inspired by the hunting behavior of Canis lupus (Mirjalili et al., 2014). Previous research by Yagiz and Karahan (2015) has proved the rationality and accuracy of GWO for predicting TBM performance. In the optimization procedure of GWO, multiple (more than three) solutions are randomly initialized and divided into four groups according to their accuracy. The three best solutions are named alpha, beta, and delta, respectively, and their movement is affected by randomness. The remaining solutions are called omega and are guided by the three above-mentioned solutions. During iterations, the 'wolves' update their positions to get close to, or sometimes away from, the prey, i.e. the optimal solution.

Similar to GWO, GA is another global optimization algorithm, which refers to the evolutionary laws of biology. It is generally considered to have good robustness and parallelism (Bahrami and Doulati Ardejani, 2016). When GA is working, a group of candidate solutions is defined, and each candidate solution randomly undergoes copying, crossover, and mutation to create new solutions. The fitness of all candidate solutions is evaluated, and some solutions with low fitness are abandoned during iterations. The fitness of the solution group is improved continuously by creating new solutions and abandoning old ones multiple times, until an accurate and acceptable solution is finally obtained. The basic principles and pseudo-code of GWO and GA are introduced in the literature by Mirjalili et al. (2014) and Deb et al. (2002), respectively, and we programmed GWO-BPNN and GA-BPNN in MATLAB with reference to these studies. In the two integrated algorithms, GWO and GA play the same role as SA, i.e. the two optimization algorithms are used to calculate the optimal weights and bias of BPNN.

The programmed GWO-BPNN and GA-BPNN are used to train two other prediction models on the training set consisting of 280 samples, and they are applied to the first test set. The training times of GWO-BPNN and GA-BPNN are 74 and 69 s, respectively, which are slightly longer than that of SA-BPNN. Comparisons of the predicted results using GWO-BPNN and GA-BPNN are shown in Figs. 22–25, and the MAPE and R² of each result are also given in these figures.

As optimization algorithms, GWO and GA can also modify BPNN by being integrated into it. Compared with the traditional BPNN algorithm, the MAPEs of GWO-BPNN and GA-BPNN are reduced to different degrees, which verifies the positive effect of the two non-linear algorithms on BPNN. The two integrated methods are also accurate and achieve similar accuracy on the first test set, with their MAPE and R² values close to each other. However, according to the testing results on the first test set, the accuracy of GWO-BPNN and GA-BPNN is slightly lower than that of SA-BPNN. This is particularly significant in the prediction of α: the MAPEs of α predicted by GWO-BPNN and GA-BPNN are 14.1% and 14.9%, while that of SA-BPNN is only 7.7%.

The test results prove that it is a reasonable and practical idea to integrate a non-linear optimization algorithm (at least the GWO and GA algorithms) into BPNN to improve the prediction accuracy of rock mass parameters, not just the simulated annealing algorithm.

Fig. 22. Comparison of α results predicted by GWO-BPNN and GA-BPNN.

Fig. 23. Comparison of UCS results predicted by GWO-BPNN and GA-BPNN.

Fig. 24. Comparison of DPW results predicted by GWO-BPNN and GA-BPNN.

Fig. 25. Comparison of Bi results predicted by GWO-BPNN and GA-BPNN.

5. Conclusions

This study introduces a method to predict rock mass parameters based on TBM driving parameters. For this purpose, SA-BPNN was adopted and applied to a dataset consisting of 360 samples, which were collected in the No. 4 bid of the Songhua River water conveyance project. In the SA-BPNN-based model, the TBM driving parameters (RPM, Th, Tor, and PR) are the input parameters and the rock mass parameters (α, UCS, DPW, and Bi) are the prediction targets. The main conclusions of this research can be summarized as follows:

1. The SA-BPNN-based model was trained by randomly selecting 280 samples and tested on the first test set, made up of the remaining 40 samples. The MAPE results of α, UCS, DPW, and Bi were 7.7%, 13.9%, 12.9%, and 11.0%, respectively, while the corresponding R² values were 0.845, 0.737, 0.713, and 0.657. It was proved that the SA-BPNN-based model can predict rock mass parameters with a relatively low error and high correlation.
2. SA-BPNN was also applied to another 40 samples, collected from the granite area, as the second test set. The MAPEs of α, UCS, DPW, and Bi were 11.07%, 17.01%, 11.46%, and 16.33%, respectively, while the R² values were 0.692, 0.789, 0.537, and 0.68. The results show that although the model can still predict rock mass parameters under the granite conditions, it is more suitable in areas of limestone, diorite, tuff, or sandstone.
3. To verify the optimization of the SA method on the BPNN algorithm, we established the BPNN-based model and applied it to the first test set. A comparison of the results shows that the SA-BPNN


model can better predict both the value and the changing trend of rock mass parameters. The optimization effect of the SA method on BPNN is proved to exist and to be significant.

Acknowledgements

The authors wish to thank the China Railway Tunnel Stock Company Limited for sharing their experiences of data gathering efforts in the field. This research was supported by the National Natural Science Foundation of China (NSFC) (No. 51739007), the National Key Research and Development Program of China (No. 2016YFC0401805), the National Natural Science Foundation of China (NSFC) (No. U1806226), the Key Research and Development Program of Shandong Province (No. Z135050009107), and the Interdisciplinary Development Program of Shandong University (No. 2017JC001, 2017JC002).

References

Bahrami, S., Doulati Ardejani, F., 2016. Prediction of pyrite oxidation in a coal washing waste pile using a hybrid method, coupling artificial neural networks and simulated annealing (ANN/SA). J. Clean. Prod. 137, 1129–1137.
Barton, N., 2000. TBM Tunnelling in Jointed and Faulted Rock. Balkema.
Bejari, H., 2013. Simultaneous effects of joint spacing and orientation on TBM cutting efficiency in jointed rock masses. Rock Mech. Rock Eng. 46 (4), 897–907.
Benardos, A.G., Kaliampakos, D.C., 2004. Modelling TBM performance with artificial neural networks. Tunn. Undergr. Space Technol. 19 (6), 597–605.
Benato, A., Oreste, P., 2015. Prediction of penetration per revolution in TBM tunneling as a function of intact rock and rock mass characteristics. Int. J. Rock Mech. Min. Sci. 74, 119–127.
Bruland, A., 1998. Hard Rock Tunnel Boring. Doctoral thesis, Norwegian University of Science and Technology, Trondheim.
Chaki, S., Ghosal, S., 2011. Application of an optimized SA-ANN hybrid model for parametric modeling and optimization of LASOX cutting of mild steel. Prod. Eng. Res. Devel. 5 (3), 251–262.
Chatterjee, A., Maitra, M., Goswami, S.K., 2009. Classification of overcurrent and inrush current for power system reliability using Slantlet transform and artificial neural network. Expert Syst. Appl. 36 (2), 2391–2399.
Chen, H.C., Lin, J.C., Yang, Y.K., Tsai, C.H., 2010. Optimization of wire electrical discharge machining for pure tungsten using a neural network integrated simulated annealing approach. Expert Syst. Appl. 37 (10), 7147–7153.
Deb, K., Pratap, A., Agarwal, S., Meyarivan, T., 2002. A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6 (2), 182–197.
Fattahi, H., Bazdar, H., 2017. Applying improved artificial neural network models to evaluate drilling rate index. Tunn. Undergr. Space Technol. 70, 114–124.
Fazeli, H., Soleimani, R., Ahmadi, M.A., Badrnezhad, R., Mohammadi, A.H., 2013. Experimental study and modeling of ultrafiltration of refinery effluents using a hybrid intelligent approach. Energy Fuels 27, 3523–3537.
Frough, O., Torabi, S.R., Yagiz, S., 2015. Application of RMR for estimating rock-mass-related TBM utilization and performance parameters: a case study. Rock Mech. Rock Eng. 48 (3), 1305–1312.
Gandomi, M., Soltanpour, M., Zolfaghari, M.R., Gandomi, A.H., 2016. Prediction of peak ground acceleration of Iran's tectonic regions using a hybrid soft computing technique. Geosci. Front. 7 (1), 75–82.
Gao, X.J., Shi, M.L., Song, X.G., Zhang, C., Zhang, H.W., 2019. Recurrent neural networks for real-time prediction of TBM operating parameters. Autom. Constr. 98, 225–235.
Gong, Q.M., Zhao, J., 2007. Influence of rock brittleness on TBM penetration rate in Singapore granite. Tunn. Undergr. Space Technol. 22 (3), 317–324.
Gong, Q.M., Zhao, J., 2009. Development of a rock mass characteristics model for TBM penetration rate prediction. Int. J. Rock Mech. Min. Sci. 46 (1), 8–18.
Hamidi, J.K., Shahriar, K., Rezai, B., Rostami, J., 2010. Performance prediction of hard rock TBM using Rock Mass Rating (RMR) system. Tunn. Undergr. Space Technol. 25, 333–345.
Hecht-Nielsen, R., 1988. Theory of the backpropagation neural network. In: Neural Networks for Perception, vol. 1, pp. 445.
Kirkpatrick, S., Gelatt, C.D., Vecchi, M., 1983. Optimization by simulated annealing. Science 220, 671–680.
Lee, H.M., Chen, C.M., Huang, T.C., 2001. Learning efficiency improvement of back propagation algorithm by error saturation prevention method. Neurocomputing 41 (1–4), 125–143.
Liu, X.H., Dai, F., Zhang, R., Liu, J.F., 2015. Static and dynamic uniaxial compression tests on coal rock considering the bedding directivity. Environ. Earth Sci. 73 (10), 5933–5949.
Mahdevari, S., Shahriar, K., Yagiz, S., Shirazi, M.A., 2014. A support vector regression model for predicting tunnel boring machine penetration rates. Int. J. Rock Mech. Min. Sci. 72, 214–229.
Matias, T., Souza, F., Araújo, R., Antunes, C.H., 2014. Learning of a single-hidden layer feedforward neural network using an optimized extreme learning machine. Neurocomputing 129, 428–436.
McCulloch, W.S., Pitts, W., 1943. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5 (4), 115–133.
Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., Teller, E., 1953. Equation of state calculations by fast computing machines. J. Chem. Phys. 21, 1087–1091.
Mikaeil, R., Naghadehi, M.Z., Sereshki, F., 2009. Multifactorial fuzzy approach to the penetrability classification of TBM in hard rock conditions. Tunn. Undergr. Space Technol. 24 (5), 500–505.
Minh, V.T., Katushin, D., Antonov, M., Veinthal, R., 2017. Regression models and fuzzy logic prediction of TBM penetration rate. Open Eng. 7 (1), 60–68.
Mirjalili, S., Mirjalili, S.M., Lewis, A., 2014. Grey wolf optimizer. Adv. Eng. Softw. 69 (3), 46–61.
Okubo, S., Kfukie, K., Chen, W., 2003. Expert systems for applicability of tunnel boring machine in Japan. Rock Mech. Rock Eng. 36, 305–322.
Preinl, Z.T.B.V., 2006. Rock mass excavability indicator: new way to selecting the optimum tunnel construction method. Tunn. Undergr. Space Technol. 21 (3), 237.
Ribacchi, R., Fazio, A.L., 2005. Influence of rock mass parameters on the performance of a TBM in a gneissic formation (Varzo tunnel). Rock Mech. Rock Eng. 38 (2), 105–127.
Rostami, J., 2008. Hard rock TBM cutterhead modeling for design and performance prediction. Geomech. Tunnel. (Aust. J. Geotech. Eng.) 1, 18–28.
Rostami, J., Ozdemir, L., Nilsen, B., 1977. Comparison between CSM and NTH hard rock TBM performance prediction models. In: Proceedings of the Annual Technical Meeting, Institute of Shaft Drilling.
Rumelhart, D.E., Hinton, G.E., Williams, R.J., 1986. Learning representations by back-propagating errors. Nature 323 (6088), 533–536.
Sun, W., Shi, M.L., Zhang, C., Zhao, J.H., Song, X.G., 2018. Dynamic load prediction of tunnel boring machine (TBM) based on heterogeneous in-situ data. Autom. Constr. 92, 23–34.
Tavakkoli-Moghaddam, R., Safaei, N., Sassani, F., 2008. A new solution for a dynamic cell formation problem with alternative routing and machine costs using simulated annealing. J. Oper. Res. Soc. 59 (4), 443–454.
Yagiz, S., Karahan, H., 2011. Prediction of hard rock TBM penetration rate using particle swarm optimization. Int. J. Rock Mech. Min. Sci. 48, 427–433.
Yagiz, S., Karahan, H., 2015. Application of various optimization techniques and comparison of their performances for predicting TBM penetration rate in rock mass. Int. J. Rock Mech. Min. Sci. 80, 308–315.
Yamamoto, T., Shirasagi, S., Yamamoto, S., Mito, Y., Aoki, K., 2003. Evaluation of the geological condition ahead of the tunnel face by geostatistical techniques using TBM driving data. Tunn. Undergr. Space Technol. 18 (2), 213–221.
Zain, A.M., Haron, H., Sharif, S., 2011. Estimation of the minimum machining performance in the abrasive waterjet machining using integrated ANN-SA. Expert Syst. Appl. 38 (7), 8316–8326.
Zameer, A., Mirza, S.M., Mirza, N.M., 2014. Core loading pattern optimization of a typical two-loop 300 MWe PWR using Simulated Annealing (SA), novel crossover Genetic Algorithms (GA) and hybrid GA(SA) schemes. Ann. Nucl. Energy 65, 122–131.
Zhang, Q.L., Liu, Z.Y., Tan, J.R., 2019. Prediction of geological conditions for a tunnel boring machine using big operational data. Autom. Constr. 100, 73–83.
Zhao, Z.Y., Gong, Q.M., Zhang, Y., Zhao, J., 2007. Prediction model of tunnel boring machine performance by ensemble neural networks. Geomech. Geoeng. Int. J. 2 (2), 123–128.
Zhu, J.B., Zhao, J., 2013. Obliquely incident wave propagation across rock joints with virtual wave source method. J. Appl. Geophys. 88, 23–30.
