
Estimation of Rock Mass Rating System with an Artificial Neural Network

Zhi Qiang Zhang 1,2, Qing Ming Wu 1, Qiang Zhang 1, and Zhi Chao Gong 1

1 School of Power and Mechanical Engineering, Wuhan University, Wuhan 430072, China
2 Hubei Provincial Key Laboratory of Fluid Machinery and Power Equipment Technology, Wuhan 430072, China
zhangzhiqiang78@263.net

Abstract. The geo-mechanical classification known as rock mass rating (RMR) is used to categorize rock masses. Assessing the RMR is an important factor in the successful completion of a tunneling project. The rock mechanics and mining literature contains empirical relations between the rock mass rating and other rock properties, such as rock characteristics and geological structure. However, these methods are limited to particular rock types. After analyzing the information used to identify the RMR, a new parameter was introduced as an additional input neuron to develop predictive relations. Eight parameters are presented as the inputs to an artificial neural network (ANN). In-situ test data from the tunnel face were measured, and the experimental results indicate that the proposed method is effective.

Keywords: Rock mass rating (RMR), artificial neural network (ANN), tunnel
boring machine (TBM), root mean square error (RMSE).

1 Introduction
Nowadays the trend in tunnel construction is toward larger and longer tunnels, and the tunnel boring machine (TBM) is one of the most important modern tunnel construction machines. However, in tunnel excavation by TBM, it is difficult to grasp the ground conditions ahead of and surrounding the tunnel face, because the face cannot be observed during excavation [1]. In situ, the environment of the tunnel face is complicated: fractures, faults and wet layers can all restrict tunnel operation. Given the trend toward the use of TBMs, geological evaluation ahead of the tunnel face is an important issue for making effective use of the machine [2]. Most rock mass rating systems assign numerical values to the different rock mass parameters that influence its behavior, and then combine these parametric values into an overall rock mass rating (RMR) value [3]. It is therefore necessary to apply advance geological prediction with the RMR in order to reduce the risk of disasters.
Because ANN models can cope with the complexity of intricate and ill-defined systems in a flexible and consistent way, recent years have seen an increase in their application to various problems in mechanics and mining geo-mechanics [4-8]. Chua and Goh [9] used Bayesian neural network theory for

W. Yu, H. He, and N. Zhang (Eds.): ISNN 2009, Part III, LNCS 5553, pp. 963–972, 2009.
© Springer-Verlag Berlin Heidelberg 2009

estimating wall deflections in deep excavations. A neural network system was developed by Javadi [10] for estimating air losses in compressed-air tunneling.

2 Artificial Neural Network


The foundations of the artificial neural network (ANN) paradigm were laid in the 1950s. ANN models offer non-linearity, high parallelism, robustness, fault and failure tolerance, learning, and the ability to handle imprecise and fuzzy information [11]. Sarajedini [12] also indicates that ANN neurons can perform massively parallel computations for data processing and knowledge representation.
In this paper, a typical feedforward neural network (FNN) topology, the backpropagation (BP) network, is introduced. It comprises an input layer, one or more hidden layers and an output layer. The topology of a simple FNN is presented in Fig. 1. Each layer includes a certain number of neurons that transfer signals from one layer to the next.

Fig. 1. A topology of a neural network
Fig. 2. The model of the jth BP neuron and its signal flow

Fig. 1 also shows the feedforward process of the neurons:


y_k^{(3)} = f\left( \sum_{j=0}^{n_1-1} w_{jk}^{(2)} x_j^{(2)} - \theta_k^{(2)} \right), \quad k = 0, 1, \ldots, m-1
x_j^{(2)} = f\left( \sum_{i=0}^{n-1} w_{ij}^{(1)} x_i^{(1)} - \theta_j^{(1)} \right), \quad j = 0, 1, \ldots, n_1-1    (1)

Clearly, equation (1) indicates that the neurons map an n-dimensional input into an m-dimensional output.
Fig. 2 describes the general model of one BP neuron, where x = input value, w = weight, Σ = summation, θ = bias, f = activation (transfer) function and y = output value. The signal flow of one neuron is given by equations (2) and (3):
u_j = \sum_{i=1}^{n} w_i x_i - \theta_j    (2)

y_j = f(u_j) = \frac{1}{1 + e^{-u_j}}    (3)

where f is the sigmoid (logistic) activation function.
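As a concrete illustration, the signal flow of equations (2) and (3) can be sketched in a few lines of Python (the function name `neuron_output` is illustrative, not from the paper):

```python
import math

def neuron_output(x, w, theta):
    """Forward pass of one BP neuron: the weighted sum minus the bias
    (Eq. 2) is passed through the logistic sigmoid activation (Eq. 3)."""
    u = sum(wi * xi for wi, xi in zip(w, x)) - theta  # Eq. (2)
    return 1.0 / (1.0 + math.exp(-u))                 # Eq. (3)

# A neuron with zero net input fires at the sigmoid midpoint.
print(neuron_output([1.0, 1.0], [0.5, -0.5], 0.0))  # -> 0.5
```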


Based on the classic BP algorithm, the Levenberg-Marquardt algorithm is one of the best-known variants [7]. It relies on numerical optimization techniques to minimize and accelerate the required calculations, resulting in much faster training (Demuth and Beale, 1994). Fig. 2 also illustrates the iterative process:

y_k(n) = y_{k-1}(n) - Y_{k-1}^{-1}(n) \, g_{k-1}(n)    (4)

where Y_{k-1}(n) is the Hessian matrix of the error function at the current values of the weights and biases, and g_{k-1}(n) is the gradient of the error function.
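The update of equation (4) can be illustrated on a toy linear least-squares problem. This is only a sketch under simplifying assumptions: the function `lm_step` and the data are illustrative, and J^T J + mu*I plays the role of the Hessian Y, as in the usual Levenberg-Marquardt approximation:

```python
import numpy as np

def lm_step(w, X, d, mu):
    """One Levenberg-Marquardt update for the linear model y = X @ w.
    r is the residual vector, J its Jacobian w.r.t. w, and
    (J^T J + mu*I) approximates the Hessian of Eq. (4)."""
    r = X @ w - d
    J = X  # Jacobian of the residuals for a linear model
    H = J.T @ J + mu * np.eye(len(w))
    g = J.T @ r
    return w - np.linalg.solve(H, g)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d = np.array([1.0, 2.0, 3.0])
w = lm_step(np.zeros(2), X, d, mu=1e-8)
print(np.round(w, 6))  # approximately [1. 2.], the exact LS solution
```

With a small damping term mu, one step reduces to Gauss-Newton, which solves this linear problem exactly; larger mu blends toward plain gradient descent.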

3 Parametric Establishment
It is very important to obtain accurate input parameters based on a sufficient number of training samples. For this purpose, Kavzoglu [13] and Gallagher [14] proposed using more than 30 times as many training samples as weights. Staufer [15], however, suggested that about 2/3 of the samples is optimal for training, while Lee [16] and Tong [17] proposed about 80% of the data, and Curry [18] recommended approximately 75%. In the present study, a 120-sample subset of the 150-sample dataset (80% of the database) was used in the training stage, and the remaining 30 samples were used for testing.
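The 80/20 split described above can be sketched as follows (the helper `split_dataset` is illustrative; in practice the samples would usually be shuffled first):

```python
def split_dataset(samples, train_fraction=0.8):
    """Split a dataset into training and testing subsets, e.g.
    120 of 150 samples (80%) for training and 30 for testing."""
    n_train = int(len(samples) * train_fraction)
    return samples[:n_train], samples[n_train:]

data = list(range(150))        # stand-in for the 150 field samples
train, test = split_dataset(data)
print(len(train), len(test))   # -> 120 30
```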
Several other parameters (the initial weights, the momentum coefficient (μ) and the learning rate (η)) also influence the convergence accuracy of the BP network. In the literature, the initial weights are generally set to small values, and different ranges have been applied: [-0.1, 0.1] by Staufer [15] and Rayburn [19]; [-0.25, 0.25] by Gallagher [14]; [-0.4, 0.4] by Weigend [20]; and [-1, 1] by Looney [21]. The small random values used as initial weights have a significant effect on both the convergence and the final network architecture, because a range that is too small can produce small error gradients that slow down the initial learning process. In this study, [-0.1, 0.1] was selected as the initial range.
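Initialization in the chosen range can be sketched as (the helper `init_weights` is illustrative; the fixed seed is only for reproducibility):

```python
import random

def init_weights(n_in, n_out, low=-0.1, high=0.1, seed=0):
    """Build an n_in x n_out weight matrix with small random values
    drawn uniformly from [low, high], the range adopted in this study."""
    rng = random.Random(seed)
    return [[rng.uniform(low, high) for _ in range(n_out)]
            for _ in range(n_in)]

W = init_weights(7, 10)
print(all(-0.1 <= w <= 0.1 for row in W for w in row))  # -> True
```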
The training rate of an ANN is also sensitive to the learning rate η and the momentum coefficient μ. The larger the learning rate, the faster the training, because a large η causes larger changes to the weights in the network; however, the training phase can oscillate when η is too large. Lee [22] and Feng [23] set the learning rate to 0.1, Tong [24] set it to 0.6, while Wang [25] used 0.001. To reduce the sensitivity to the learning rate, the momentum coefficient μ is introduced; to a certain extent, it has a stabilizing effect and smooths the learning curves. Curry [26] and Lee [22] set the momentum coefficient to 0.4 and 0.6 respectively; Feng [23] and Tong [24] suggested 0.1 and 0.9; and Wang [25] proposed 0.95. In this study, therefore, the learning rate was set to 0.1 and the momentum coefficient to 0.9.
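A common form of the gradient-descent update with learning rate η and momentum μ can be sketched as follows (the paper does not spell out its exact update rule, so this standard variant is an assumption; the name `momentum_update` is illustrative):

```python
def momentum_update(w, grad, velocity, eta=0.1, mu=0.9):
    """One weight update with momentum:
    v <- mu*v - eta*grad;  w <- w + v.
    eta=0.1 and mu=0.9 are the values chosen in this study."""
    v_new = [mu * v - eta * g for v, g in zip(velocity, grad)]
    w_new = [wi + vi for wi, vi in zip(w, v_new)]
    return w_new, v_new

w, v = momentum_update([0.5, -0.5], [1.0, -1.0], [0.0, 0.0])
print(w, v)  # w is approximately [0.4, -0.4]; v is [-0.1, 0.1]
```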
Moreover, the root mean square error (RMSE) is important to the ANN model, since the learning rules are based on it. The error is defined as:

E = \frac{1}{2} (D - Y)^2    (5)
where D is the expected output and E is the error. The weight coefficients are adapted according to E so as to keep the network output Y close to D: the signals flow forward to the output layer, and the results are compared with the target output D. Fig. 2 also illustrates this iterative process for one neuron.
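Equation (5), summed over the output neurons, can be sketched directly (the helper name `half_squared_error` is illustrative; the target vector is the grade II coding from Table 1):

```python
def half_squared_error(d, y):
    """Error measure of Eq. (5): E = 1/2 * (D - Y)^2, summed over the
    output neurons; the learning rule adapts the weights to reduce E."""
    return 0.5 * sum((di - yi) ** 2 for di, yi in zip(d, y))

target = [0.9, 0.1, 0.9, 0.9, 0.9]   # grade II coding from Table 1
output = [0.8, 0.2, 0.9, 0.9, 0.9]   # hypothetical network output
print(half_squared_error(target, output))  # approximately 0.01
```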
It is necessary to have the correct network architecture in order to reduce the RMSE to a minimum. Equation (5) is therefore used as the error function in this study, as it is the most common error measure implemented in the literature [27-30]. In addition, the number of neurons in the hidden layer is an important factor that determines training efficiency and optimization. Generally, an excessive number of hidden neurons, a situation known as over-fitting, can produce near-zero error on the training data but may lead to longer training times, slower training, and a network that learns the training data well yet cannot generalize to test data. Conversely, when the training set is too small, the network cannot learn effectively, leading to under-fitting and weak generalization. In short, the optimal ANN architecture combines an appropriate number of hidden-layer neurons with the minimal error on the test data.

4 Design of the Optimal ANN


It is very important to verify the classification of the rock mass, because this differentiation is the basis of TBM excavation and geological disaster prediction. Many scholars have proposed rock mass parameters that govern excavation: Benardos and Kaliampakos [7] used eight parameters in their ANN model for TBM excavation; Suwansawat and Einstein [8] used 13 input nodes; Chua and Goh [9] developed 35 input units; and Palmstrom and Broch [31] considered 12 factors that influence the RMR. In this paper, several significant characteristics that influence the rock mass rating are adopted. These factors are:

• Rock mass quality (Q), represented by the RMR classification;
• Characteristics of the rock mass (RMC);
• Hydrogeological conditions (HC), represented by the water surface relative to the tunnel;
• Geological structure (GS);
• Rock mass fracture degree (RMFD), represented by the rock quality designation (RQD);
• Weathering degree (WD) of the rock mass;
• Elastic longitudinal wave speed (Vp) of the rock mass.

Moreover, since the output of the logsig function lies between 0 and 1, the training samples must be normalized. The normalized data of each parameter are presented in Table 2. The rock mass rating comprises five grades [32], and the normalized coding of each grade is shown in Table 1.

Consequently, there are seven input values and five output values, namely Ni = 7 and No = 5. In order to obtain good ANN performance, an optimal ANN model is indispensable. Empirical formulas for the number of hidden-layer neurons are given in Table 3. Details on the implementation of this system are addressed in [4].

Table 1. Rock mass lithology description, classification and results

Rock mass lithology description                               Rating  Results
Fault, chlorite schist, cataclasm and loose, weathering       Ⅰ       (0.1, 0.9, 0.9, 0.9, 0.9)
Crash, dolerite, mid-weathering, ground water                 Ⅱ       (0.9, 0.1, 0.9, 0.9, 0.9)
Block crack, mid-weathering, ground water, mid-stiffness      Ⅲ       (0.9, 0.9, 0.1, 0.9, 0.9)
Block, mid-weathering and fresh, without ground water, stiff  Ⅳ       (0.9, 0.9, 0.9, 0.1, 0.9)
Block, fresh, without ground water, stiff                     Ⅴ       (0.9, 0.9, 0.9, 0.9, 0.1)
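The coding of Table 1 (the component set to 0.1 marks the active grade) can be decoded from a network output as follows (the helper `decode_grade` is illustrative):

```python
def decode_grade(output):
    """Map a 5-component network output back to a rock mass grade I-V:
    the grade is the position of the smallest component, which Table 1
    codes as 0.1 (the other four components are coded 0.9)."""
    grades = ["I", "II", "III", "IV", "V"]
    return grades[output.index(min(output))]

print(decode_grade([0.9, 0.1, 0.9, 0.9, 0.9]))  # -> II
```

Taking the minimum (rather than an exact match against 0.1) makes the decoding robust to real-valued network outputs.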

Table 2. Normalized data of the principal parameters

Q            types: horniness | middle | soft
             value: 0.1 | 0.5 | 0.9
RMC          types: integrity | block | sandwich | chip | smash
             value: 0.1 | 0.3 | 0.5 | 0.7 | 0.9
HC           types: dry | seep | drop | flow | stream
             value: 0.1 | 0.3 | 0.5 | 0.7 | 0.9
GS           types: slight | less severe | severe | more severe
             value: 0.1 | 0.4 | 0.7 | 0.9
RMFD         types: none | less developed | developed | highly developed
             value: 0.1 | 0.4 | 0.7 | 0.9
WD           types: none | slight | feeble | strong | full
             value: 0.1 | 0.3 | 0.5 | 0.7 | 0.9
Vp (km/s)    Vp / Vp(max), where Vp(max) = 5
DR (s/20cm)  types: >10 | 7~9 | 5~7 | 3~5 | <3
             value: 0.1 | 0.3 | 0.5 | 0.7 | 0.9
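A few of the parameter rows of Table 2 can be transcribed into lookup tables to show how a field description becomes a normalized input vector (the dictionary names `RMC` and `HC` and the helper `normalize_vp` are illustrative):

```python
# Lookup tables transcribed from Table 2: category label -> normalized input.
RMC = {"integrity": 0.1, "block": 0.3, "sandwich": 0.5,
       "chip": 0.7, "smash": 0.9}
HC = {"dry": 0.1, "seep": 0.3, "drop": 0.5, "flow": 0.7, "stream": 0.9}

def normalize_vp(vp_kms, vp_max=5.0):
    """Wave speed is scaled by its maximum, Vp / Vp(max), Vp(max) = 5 km/s."""
    return vp_kms / vp_max

# A blocky rock mass with seeping water and Vp = 4 km/s:
x = [RMC["block"], HC["seep"], normalize_vp(4.0)]
print(x)  # -> [0.3, 0.3, 0.8]
```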

As can be seen from Table 3, the number of hidden-layer neurons calculated by the empirical formulas from 7 input neurons varies between 6 and 21. The candidate ANN models were established with:

• initial momentum coefficient μ = 0.9;
• learning rate η = 0.1;
• number of hidden layers: 1 or 2;
• number of hidden neurons in each hidden layer: 6, 10, 14 or 21;
• training goal: 0.01;
• maximum training epochs: 20000.

Table 3. The empirically calculated number of hidden-layer neurons (Ni: number of input neurons, No: number of output neurons)

Empirical formula                                  input neurons = 7   input neurons = 8
≤ 2Ni + 1                                          ≤ 15                ≤ 17
3Ni                                                21                  24
(Ni + No)/2                                        6                   7
(2 + No·Ni + 0.5·No·(No² + Ni) − 3)/(Ni + No)      10                  10
2Ni/3                                              5                   6
√(Ni·No)                                           6                   6
2Ni                                                14                  16
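The empirical formulas of Table 3 can be evaluated directly. In this sketch the rounding conventions (ceiling versus nearest integer) are inferred so that the results reproduce the table's two columns, and the function name `hidden_neuron_candidates` is illustrative:

```python
import math

def hidden_neuron_candidates(n_i, n_o):
    """Evaluate the empirical hidden-neuron formulas of Table 3.
    The choice of ceil vs. round per formula is inferred from the
    published table values, not stated in the paper."""
    return {
        "2Ni+1 (upper bound)": 2 * n_i + 1,
        "3Ni": 3 * n_i,
        "(Ni+No)/2": math.ceil((n_i + n_o) / 2),
        "(2+No*Ni+0.5*No*(No^2+Ni)-3)/(Ni+No)":
            math.ceil((2 + n_o * n_i + 0.5 * n_o * (n_o**2 + n_i) - 3)
                      / (n_i + n_o)),
        "2Ni/3": math.ceil(2 * n_i / 3),
        "sqrt(Ni*No)": round(math.sqrt(n_i * n_o)),
        "2Ni": 2 * n_i,
    }

print(hidden_neuron_candidates(7, 5))  # matches the Ni=7 column of Table 3
print(hidden_neuron_candidates(8, 5))  # matches the Ni=8 column of Table 3
```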

Fig. 3 shows the training and validation RMSE curves of the 7-input-neuron models, covering both single-layer and double-layer hidden structures. Each ANN model is trained on the training set until it reaches the pre-defined training goal. The parameters of the validation set are consistent with the corresponding model, and the results are compared with the desired outputs; if the outputs on the validation samples agree with the target data, training is finished. To evaluate the network architectures, the RMSE of each model is compared with that of the others in Fig. 3.

Fig. 3. The RMSE of the 7-input-neuron models
Fig. 4. The RMSE of the 8-input-neuron models

The numbers of hidden neurons and layers are important variables, as the results in Fig. 3 show; Table 4 defines the neural network models. Theoretically, the more hidden layers and neurons, the better the ANN fits the training data, as with curves ① and ③. However, more neurons may lead to the over-fitting described at the end of Section 3, as shown by curves ② and ④: the models with larger numbers of hidden layers and neurons failed to estimate the rock mass rating. Among these models, model 2 is therefore the optimal ANN model, with the highest prediction rate (88%).

5 A New Optional Parameter, Results and Discussion


In the previous section, a neural network model with 7 input neurons was applied to estimate the rock mass classification. The predictive model did not perform well, which shows that an ANN with these seven inputs alone cannot be used successfully to estimate the RMR.
Determining the advance geological prediction from test samples is almost impossible due to the presence of discontinuities. To overcome this difficulty, many approaches [1-3, 33] have been proposed for predicting the rock mass rating. However, they have limitations: the parameters defined in past work are insufficient to predict the RMR. Since drilling machines are used on site, the drilling rate (DR) is a factor that should not be ignored either, so in this study DR is set as an additional parameter, as can be seen from Table 2. The number of input parameters of the optimal model is therefore eight, and the corresponding hidden-layer neuron numbers are also shown in Table 3. Based on the previous section, the optimal ANN model was built with only the hidden-layer neuron counts changed and all other parameters unchanged; that is, the numbers of hidden neurons in each hidden layer become 6, 10, 17 and 24.
The results of the new neural network are given in Fig. 4. Among the double-layer models, the one with 10 neurons per hidden layer (curves ⑦ and ⑧) is optimal on the training data (RMSE = 0.002585), and the validation samples tested with this model give the lowest error (RMSE = 0.00075). In addition, this model has higher efficiency and accuracy while consuming fewer resources. Both the training and testing results of the network are plotted in Fig. 5.

Fig. 5. Performance of the optimal ANN (model 14)

As can be seen from Fig. 5, the new neural network model establishes a high correlation between the predicted and actual results. The X-axis shows the nominal results given by Table 1, while the Y-axis shows the ANN predictions. If the predictions were perfect, the points would lie on a straight line at 45 degrees. The linearity in this figure is good, and the prediction rate is 92%. This shows that the neural network is capable of capturing the main features of the relationship among the eight parameters and reflects the rules of rock mass rating with high reliability, so the rock mass rating of the tunnel face can be predicted accurately.
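The prediction rate reported above can be computed as the fraction of samples whose predicted grade matches the field classification. This sketch uses illustrative data, not the paper's actual test samples:

```python
def prediction_rate(predicted, actual):
    """Fraction of tunnel-face samples whose predicted grade matches
    the field classification."""
    hits = sum(p == a for p, a in zip(predicted, actual))
    return hits / len(actual)

# Hypothetical grades for five tunnel-face samples:
pred = ["II", "III", "III", "IV", "II"]
true = ["II", "III", "II", "IV", "II"]
print(prediction_rate(pred, true))  # -> 0.8
```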

Based on Table 4, seven other combinations of hidden layers were examined when the successful ANN model 14 was established. In these models, the numbers of hidden layers and neurons were taken from Table 3, and their architectures are shown in Fig. 4. The results show that, in all cases, the neural network behaved in accordance with the pre-defined goal. For example, model 10 had one hidden layer with 10 hidden neurons; it saved time and was efficient and accurate. Model 11 could also learn the prediction with high accuracy.
However, there are several reasons why these models were not selected as the optimal ANN model. First, the RMSE is larger than that of the others (models 11, 12, 16); second, the prediction rate is lower than that of the others (models 9, 12, 13, 16). Another crucial point is over-fitting, which consumes more resources and lowers efficiency (models 11, 12, 15, 16), as well as under-fitting, which makes the network learn ineffectively (models 9, 13). By comparing the results of these ANN models, model 14 is found to be optimal.

Table 4. Definition of the neural network models

Model  Network architecture (7 input neurons)   Model  Network architecture (8 input neurons)
1      single-layer, 6 hidden neurons           9      single-layer, 6 hidden neurons
2      single-layer, 10 hidden neurons          10     single-layer, 10 hidden neurons
3      single-layer, 14 hidden neurons          11     single-layer, 17 hidden neurons
4      single-layer, 21 hidden neurons          12     single-layer, 24 hidden neurons
5      double-layer, 6 hidden neurons           13     double-layer, 6 hidden neurons
6      double-layer, 10 hidden neurons          14     double-layer, 10 hidden neurons
7      double-layer, 14 hidden neurons          15     double-layer, 17 hidden neurons
8      double-layer, 21 hidden neurons          16     double-layer, 24 hidden neurons

In short, model 14 suggests that if a database exists for the estimation of rock mass rating, a new ANN trained in this way can be applied to predict the RMR for a new project. The quality of the predictions will clearly improve as more accurate RMR values from tunnel excavation become available.

6 Conclusions
In order to fully grasp the classification of the rock mass at the tunneling face and reduce the incidence of disasters, advance geological prediction is necessary. In view of the complexity and non-linearity of the tunnel face, RMR prediction takes full advantage of artificial neural networks. In this study, a method using a new parameter as an input neuron has been developed. To achieve high performance in predicting the rock mass rating, a four-layer BP network (8-10-10-5) is proposed and adopted as the optimal ANN model, with the network weights updated using the momentum coefficient. The optimal ANN model demonstrates very satisfactory results in RMR prediction with field data. The following remarks can be drawn:

A. This ANN system gives a fairly fast response for RMR prediction.
B. This ANN prediction scheme learns efficiently from in-situ test data; its RMSE is lower than that of the other models, and it achieves the performance goal.
C. A new parameter is proposed as an input neuron; it improves the precision of the RMR prediction in application.
D. This ANN system can be adapted to rock mechanics and mining projects in which the RMR must be predicted ahead of the tunnel face. The results show the effectiveness of the presented method.
E. The open source code increases the optimal model's flexibility, also allowing the insertion of additional parameters to enhance the accuracy and efficiency of RMR prediction.

Acknowledgments. This work is supported by the 10th Five-Year National Key Technological Equipment Plan of P.R. China (No. ZZ02-03-03-02-02).

References
1. Shirasagi, S., Mito, Y., Aoki, K.: Evaluation of the Geological Condition Ahead of the
Tunnel Face by Geostatistical Techniques Using TBM Driving Data. Tunneling and Un-
derground Space Technology 18, 213–221 (2003)
2. Ashida, Y.: Seismic Imaging Ahead of a Tunnel Face with Three-component Geophones.
International Journal of Rock Mechanics & Mining Sciences 38, 823–831 (2001)
3. El-Naqa, A.: Application of RMR and Q Geomechanical Classification Systems Along the
Proposed Mujib Tunnel Route, Central Jordan. Bull. Eng. Geol. Env. 60, 257–269 (2001)
4. Sonmez, H., Gokceoglu, C., Nefeslioglu, H.A., Kayabasi, A.: Estimation of Rock
Modulus: For Intact Rocks with an Artificial Neural Network and for Rock Masses with a
New Empirical Equation. International Journal of Rock Mechanics & Mining Sciences 43,
224–235 (2006)
5. Karri, V.: Drilling Performance Prediction Using General Regression Neural Networks. In:
Logananthara, R., Palm, G., Ali, M. (eds.) IEA/AIE 2000. LNCS (LNAI), vol. 1821, pp.
67–73. Springer, Heidelberg (2000)
6. Lin, S.C., Ting, C.J.: Drill Wear Monitoring Using Neural Networks. Int. J. Mach. Tools
Manufact. 36(4), 465–475 (1996)
7. Benardos, A.G., Kaliampakos, D.C.: Modeling TBM Performance with Artificial Neural
Network. Tunneling and Underground Space Technology 19, 597–605 (2004)
8. Suwansawat, S., Einstein, H.H.: Artificial Neural Networks for Predicting the Maximum
Surface Settlement Caused by EPB Shield Tunneling. Tunneling and Underground Space
Technology 21, 133–150 (2006)
9. Chua, C.G., Goh, A.T.C.: Estimating Wall Deflections in Deep Excavations Using Bayes-
ian Neural Networks. Tunneling and Underground Space Tech. 20, 400–409 (2005)
10. Javadi, A.A.: Estimation of Air Losses in Compressed Air Tunneling Using Neural Net-
work. Tunneling and Underground Space Technology 21, 9–20 (2006)
11. Jain, A., Kumar, A.M.: Hybrid neural network models for hydrologic time series forecast-
ing. Applied Soft Computing 7, 585–592 (2007)
12. Sarajedini, A., Hecht-Nielsen, R., Chau, P.M.: Conditional Probability Density Function
Estimation with Sigmoidal Neural Networks. IEEE Transactions on Neural Net-
works 10(2), 231–238 (1999)
13. Kavzoglu, T., Mather, P.M.: Using Feature Selection Techniques to Produce Smaller Neu-
ral Networks with Better Generalisation Capabilities. In: Geoscience and Remote Sensing
Symposium, vol. 7, pp. 3069–3071 (2000)

14. Gallagher, M., Downs, T.: Visualization of Learning in Multilayer Perceptron Networks
Using Principal Component Analysis. IEEE Transactions on Systems, Man, and Cybernet-
ics-Part B: Cybernetics 33(1), 28–34 (2003)
15. Staufer, P.: Spatial Analysis and GeoComputation, pp. 183–207. Springer, Berlin (2006)
16. Lee, Y.C.: Application of Support Vector Machines to Corporate Credit Rating Prediction.
Expert Systems with Applications 33, 67–74 (2007)
17. Tong, L.I., Chao, L.C.: Novel Yield Model for Integrated Circuits with Clustered Defects.
Expert Systems with Applications 34(4), 2334–2341 (2007)
18. Curry, B., Morgan, P., Silver, M.: Neural Networks and Non-linear Statistical Methods: an
Application to the Modeling of Price-quality Relationships. Computers & Operations Re-
search 29, 951–969 (2002)
19. Rayburn, D.B., Klimasauskas, C.C.: The Use of Back Propagation Neural Networks to
Identify Mediator-Specific Cardiovascular Waveforms. In: International Joint Conference
on Neural Networks, vol. 2, pp. 105–110 (1990)
20. Weigend, A.S., Rumelhart, D.E., Huberman, B.A.: Generalization by Weight-Elimination
applied to Currency Exchange Rate Prediction. In: Seattle International Joint Conference
on Neural Networks, vol. 1, pp. 837–841 (1991)
21. Looney, G.G.: Advances in Feedforward Neural Networks: Demystifying Knowledge Acquiring Black Boxes. IEEE Transactions on Knowledge and Data Engineering 8(2), 211–226 (1996)
22. Lee, Y.C.: Application of support Vector Machines to Corporate Credit Rating Prediction.
Expert Systems with Applications 33, 67–74 (2007)
23. Feng, X.Y., Wang, Q.Q., Zhang, J.: Studying Aromatic Compounds in Infrared Spectra
Based on Support Vector Machine. Vibrational Spectroscopy 44(2), 243–247 (2007)
24. Tong, L.I., Chao, L.C.: Novel Yield Model for Integrated Circuits with Clustered Defects.
Expert Systems with Applications 34(4), 2334–2341 (2007)
25. Wang, Y.S., Lee, C.M.: Sound-quality Prediction for Nonstationary Vehicle Interior Noise
Based on Wavelet Pre-processing Neural Network Model. Journal of Sound and Vibra-
tion 299, 933–947 (2007)
26. Curry, B., Morgan, P., Silver, M.: Neural Networks and Non-linear Statistical Methods: an
Application to the Modeling of Price-quality Relationships. Computers & Operations Re-
search 29, 951–969 (2002)
27. Zhang, C.L., Mei, D.Q., Chen, Z.C.: Active Vibration Isolation of a Micro-manufacturing
Platform Based on a Neural Network. Journal of Materials Processing Technology 129,
634–639 (2002)
28. Hecht-Nielsen, R.: Theory of the Backpropagation Neural Network. In: International Joint
Conference on Neural Networks, vol. 1, pp. 593–650 (1989)
29. Sonmez, H., Gokceoglu, C., Nefeslioglu, H.A., Kayabasi, A.: Estimation of Rock
Modulus: For Intact Rocks with an Artificial Neural Network and for Rock Masses with a
New Empirical Equation. International Journal of Rock Mechanics & Mining Sciences 43,
224–235 (2006)
30. Karri, V.: Drilling Performance Prediction Using General Regression Neural Networks. In:
Logananthara, R., Palm, G., Ali, M. (eds.) IEA/AIE 2000. LNCS (LNAI), vol. 1821, pp.
67–73. Springer, Heidelberg (2000)
31. Palmstrom, A., Broch, E.: Use and Misuse of Rock Mass Classification Systems with Par-
ticular Reference to the Q-system. Tunneling and Underground Space Technology 21,
575–593 (2006)
32. Wang, L.K.: Application of Several Kinds of Advanced Forecast of Geology in Tunnel
Construction. Metal Mine 305, 45–47 (2001) (in Chinese)
33. Banks, D.: Rock Mass Ratings (RMRs) Predicted from Slope Angles of Natural Rock Out-
crops. International Journal of Rock Mechanics & Mining Sciences 42, 440–449 (2005)
