
Artificial Intelligence and Prediction of Rock Fragmentation

P.Y. Dhekne, Manoj Pradhan, and R.K. Jade


Department of Mining Engineering, National Institute of Technology Raipur, Chhattisgarh, India

Abstract. Over the last few years, the use of artificial intelligence (AI) has increased in many areas of mining and allied branches of engineering. The technique has been successfully applied to solve many engineering problems and has demonstrated reasonable feasibility in them. A review of the literature reveals that geotechnical studies, mineral processing, reserve estimation and rock fragmentation are some of the areas of mining engineering where AI-based approaches have been successfully implemented. This paper briefly provides a general view of some of the existing AI-based models for the prediction of rock fragmentation. Empirical studies for the estimation and assessment of fragmentation are also reviewed. The reach and flexibility of basic AI techniques are elucidated.

Keywords: Rock fragmentation, ANN, Fuzzy set.

1 Introduction
Rock fragmentation is governed by numerous parameters relating to the rock, the explosive and the blast geometry. Estimation of rock fragmentation for a blast round is an important stage before the implementation of the production blast. Many models have been developed and are in practice. Bergmann (1973), Cunningham (1983), Kou and Rustan (1993) and Chung and Katsabanis (2000) developed models with which the fragmentation can be estimated (see Ouchterlony, 2003) [12]. The model developed by Cunningham is the most widely used worldwide. The Julius Kruttschnitt Mineral Research Centre (JKMRC) has done notable work in the development of models: it developed the Two Component Model (TCM) and the Crushed Zone Model (CZM). The TCM uses experimental data obtained from a blast chamber to estimate the fines end of the distribution, while the CZM uses a semi-mechanistic approach to estimate fines. Onederra et al. proposed a new model to predict the proportion of fines generated during blasting. Kuznetsov (1973) suggested an empirical equation that predicts the mean fragment size resulting from rock blasting from the rock factor, the rock volume and the mass of explosive per blast hole [2]. Kuznetsov also suggested using the Rosin-Rammler equation to estimate the
complete fragmentation distribution resulting from rock blasting. Cunningham (1983, 1987) modified Kuznetsov's equation to estimate the mean fragment size and used the Rosin-Rammler distribution to describe the entire size distribution [2, 3]. The uniformity exponent of the Rosin-Rammler distribution was estimated as a function of the blast design parameters. These empirical models, based on theoretical and mechanistic reasoning, have been developed for the prediction of rock fragmentation. The number of parameters involved makes it an arduous task to predict fragmentation with such models, and the form of their output does not suit the needs of field engineers. Further, these models overlook some of the important factors involved in blasting and assign equal weights to all parameters irrespective of their importance. For such a problem, involving a variety of governing parameters, AI techniques can provide better solutions for the prediction of rock fragmentation. This paper discusses the basic elements of neural networks and the fundamental rules underlying network selection for the present problem, and presents AI applications that consider the dominant parameters affecting rock fragmentation.
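The Kuz-Ram chain just described can be sketched numerically. The sketch below uses a common form of Cunningham's modification of Kuznetsov's equation together with the Rosin-Rammler passing fraction; all parameter values are invented for illustration:

```python
import math

def kuz_ram_x50(A, K, Q, E=100.0):
    """Mean fragment size (cm) via Cunningham's form of Kuznetsov's equation.
    A: rock factor, K: powder factor (kg/m^3), Q: explosive mass per hole (kg),
    E: relative weight strength of the explosive (ANFO = 100)."""
    return A * K ** -0.8 * Q ** (1 / 6) * (115.0 / E) ** (19 / 20)

def rosin_rammler_passing(x, x50, n):
    """Fraction passing screen size x for a Rosin-Rammler distribution with
    median x50 and uniformity exponent n (0.693 = ln 2 fixes P(x50) = 0.5)."""
    return 1.0 - math.exp(-0.693 * (x / x50) ** n)

# Invented example values: rock factor 7, powder factor 0.6 kg/m^3,
# 120 kg of ANFO-equivalent explosive per hole, uniformity exponent 1.3.
x50 = kuz_ram_x50(A=7.0, K=0.6, Q=120.0)
for x in (10, 30, 50, 100):
    print(f"passing at {x} cm: {rosin_rammler_passing(x, x50, 1.3):.2f}")
```

In the full Kuz-Ram model the uniformity exponent is itself computed from the blast design parameters (burden, spacing, hole diameter, charge length); here it is supplied as a constant to keep the sketch short.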

2 Artificial Intelligence and Neural Network Model


Artificial intelligence (AI) studies how computers may be programmed to yield intelligent behavior without necessarily attempting to provide a correlation between structures in the program and structures in the brain. Neural computing, a branch of AI, has found applications across a wide spectrum of fields, including life sciences, medicine, engineering, management, and even stocks and finance. Neural networks act as both dynamic and adaptive systems. The machine learning community frequently works in areas where no plausible data model suggests itself, owing to our lack of knowledge of the generating mechanisms [14]. Neural networks perform pattern classification, pattern completion, optimization, data clustering, approximation, and function evaluation. A neural network performs a nonlinear mapping of a set of inputs to a set of outputs. Learning molds the mapping surface according to a desired response, either with or without an explicit training process. A network can learn under supervised or unsupervised training. When external prototypes are available, they can be used as target outputs for specific inputs, enabling supervised learning. Learning algorithms use two main rules of learning: Hebbian learning, used with unsupervised learning, and the delta rule (or least mean squared error [LMS] rule), used with supervised learning. It is estimated that over 80% of all neural network projects use backpropagation [17]. In the backpropagation algorithm for training a multilayer (feed-forward)
perceptron (MLP), there are two phases in the learning cycle: one propagates the input pattern through the network, and the other adapts the output by changing the weights in the network. It is the error signals that are backpropagated through the network to the hidden layers. The portion of the error signal that a hidden-layer neuron receives in this process is an estimate of that neuron's contribution to the output error. By adjusting the connection weights on this basis, the squared error, or some other metric, is reduced in each cycle and, if possible, finally minimized. Three main issues need to be addressed [9]. Complexity: is the network complex enough to encode a solution method? Practicality: can the network achieve such a solution within a feasible period of time? Efficacy: how do we guarantee that the generalization achieved by the machine matches our conception of a useful solution? Threshold functions perform the final mapping of the activations of the output neurons into the network outputs. The outputs from a single cycle of operation of a neural network may not be the final outputs; the network is iterated through further cycles of operation until convergence is achieved. If convergence seems possible but is too slow, a tolerance level may be assigned so as to settle for near convergence. Thresholding is sometimes done to scale down the activation and map it into a meaningful output for the problem, and sometimes to add a bias. Commonly used functions include the sigmoid, linear, ramp, and step functions. A stable network indicates convergence, which ends the iterative process, e.g., the same output for two consecutive cycles, or convergence of the weights. A neural network can comprise single or multiple layers (Fig. 1).
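The two-phase learning cycle can be illustrated with a minimal sketch (not any of the paper's models): a 2-2-1 feed-forward perceptron, of the kind shown in Fig. 2, trained by backpropagation of the squared error on the XOR problem. The hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR training patterns and targets (invented example task).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# 2-2-1 topology: 2 inputs, 2 hidden neurons, 1 output neuron.
W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros(2)
W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr, losses = 0.5, []
for _ in range(5000):
    # Phase 1: propagate the input pattern through the network.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    losses.append(float(np.mean((T - Y) ** 2)))
    # Phase 2: backpropagate the error signals to the hidden layer
    # and adjust the connection weights (delta rule with sigmoid derivative).
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

print(f"squared error: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Each cycle reduces the squared error, exactly as the text describes; in practice a tolerance on the error is used as the stopping criterion instead of running a fixed number of cycles.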

2.1 Single Layer


A neural network with a single layer is capable of processing for some important applications, such as integrated circuit implementations or assembly line control. The most common capability of the different models of single-layer neural networks is pattern recognition. For example, the Hopfield network, with its neurons all fully connected with one another, makes an association between different patterns (heteroassociation) or associates a pattern with itself (autoassociation). The perceptron technically has two layers but has only one group of weights, and is hence referred to as a single-layer network. In the perceptron, neurons in the same layer, the input layer in this case, are not interconnected.
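A minimal sketch of such a single-layer network: a perceptron trained with the delta rule on the linearly separable AND function. The data and learning rate are invented for illustration:

```python
import numpy as np

# Truth table of the AND function: four input patterns, one target each.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

# One group of weights plus a bias: a single-layer network.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                       # a few passes over the patterns
    for xi, ti in zip(X, y):
        out = 1.0 if xi @ w + b > 0 else 0.0   # step threshold function
        err = ti - out                          # delta rule update
        w += lr * err * xi
        b += lr * err

preds = [1.0 if xi @ w + b > 0 else 0.0 for xi in X]
print(preds)
```

Because AND is linearly separable, the perceptron converges in a handful of passes; XOR, by contrast, cannot be learned by any single-layer network, which is why the multilayer case matters.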
Fig. 1 A typical neural network

Fig. 2 A feed-forward neural network with topology 2-2-1

2.2 Two-Layer
Many important neural network models have two layers. The feed-forward backpropagation network, in its simplest form, is one example. Grossberg and Carpenter's ART1 paradigm uses a two-layer network. The Counterpropagation network has a Kohonen layer followed by a Grossberg layer. Bidirectional Associative Memory (BAM), the Boltzmann Machine, Fuzzy Associative Memory, and Temporal Associative Memory are other two-layer networks. For autoassociation, a single-layer network can do the job, but for heteroassociation or other such mappings, at least a two-layer network is needed [17].

2.3 Multi-layer
Kunihiko Fukushima's Neocognitron, noted for identifying handwritten characters, is a network with several layers. It is also possible to combine two or more neural networks into one by creating appropriate connections between the layers of one subnetwork and those of the others. This would create a multilayer network.

2.4 Connections between Layers


Many possibilities for establishing connections exist. First, as in the Hopfield network, every neuron may be connected to every other neuron in the single layer that forms the network, the connections being lateral. Second, as in the perceptron, neurons within the same layer may not be connected with one another, with connections only between the neurons in one layer and those in the next; the connections are forward and the signals are fed forward within the network. Third, all the neurons in any layer may have extra connections, with each neuron connected to itself, i.e., recurrent connections. The fourth possibility is connections from the neurons in one layer to the neurons in a previous layer, in which case there is both forward and backward signal feeding; this occurs if feedback is a feature of the network model. The layout of the network neurons and the type of connections between them constitute the architecture of the particular neural network model. Fig. 2 presents a feed-forward neural network with architecture 2-2-1. The type of connections in the network and the type of learning algorithm used must be chosen appropriately for the application. A network with lateral connections can do autoassociation, while a feed-forward type can do forecasting.
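The lateral-connection case can be illustrated with a small Hopfield-style sketch: two invented bipolar patterns are stored by the Hebbian rule, and a corrupted copy of one is restored by autoassociation. All values are for illustration only:

```python
import numpy as np

# Two orthogonal bipolar patterns to store (invented for illustration).
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1, -1, -1]])

# Hebbian learning: lateral weights are the sum of pattern outer products.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)          # no neuron connects to itself here

def recall(state, steps=5):
    """Iterate the network until it settles (synchronous sign updates)."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Corrupt the first pattern by flipping one bit, then let the network
# associate the pattern with itself (autoassociation).
noisy = patterns[0].copy()
noisy[0] *= -1
print(recall(noisy))
```

With only two orthogonal patterns in eight neurons the network is far below its storage capacity, so the stored pattern is a stable state and the flipped bit is corrected in a single update.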

3 Neural Network Applications


Neural networks must generalize well so as to have the capacity to apply learned knowledge to new situations. Quantitatively, generalization can be defined as the expected performance of the MLP, measured by a misclassification rate or matrix. In practice, the generalization ability of the MLP is often measured by its performance on a set of previously unseen data. Fitting a model to data involves negotiating a trade-off between bias and variance, as a model with either an inflated bias or an inflated variance will have an inflated mean squared error; such a model may not perform well on new data [16]. Thus, although the NN approach provides advantages, it still has limitations. The NN approach minimizes assumptions and eliminates empiricism; weighting the input parameters helps ensure that only the influencing ones drive the solution, and results can be obtained even in the absence of certain data, though with caution.
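The bias-variance trade-off and the role of previously unseen data can be sketched as follows. Polynomial fits stand in for models of different complexity; the data and degrees are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented noisy data; the first 20 points train, the last 10 are unseen.
x = rng.uniform(-1, 1, 30)
y = np.sin(3 * x) + rng.normal(0, 0.2, 30)
x_tr, y_tr = x[:20], y[:20]       # training set
x_te, y_te = x[20:], y[20:]       # previously unseen data

def mse(deg):
    """Train/test mean squared error of a degree-`deg` polynomial fit."""
    coef = np.polyfit(x_tr, y_tr, deg)
    return (float(np.mean((np.polyval(coef, x_tr) - y_tr) ** 2)),
            float(np.mean((np.polyval(coef, x_te) - y_te) ** 2)))

for deg in (1, 3, 9):
    tr, te = mse(deg)
    print(f"degree {deg}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

The low-degree fit has high bias; the high-degree fit has high variance and a training error that keeps shrinking even as generalization stops improving, which is why performance must be judged on the unseen points.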

Of the myriad solutions obtained from neural networks, some applications in mining [9] are found in geophysics, for the interpretation of seismic and geophysical data from various sources; in mineral processing [4, 5, 15], for pattern classification, e.g., particle shape and size analysis (ANN has been used to estimate the mean bubble diameter and the bubble size distribution of mineralized froth surfaces [13]); in image analysis, for the multispectral classification of Landsat images so as to highlight land use, habitation, over-mining, subsidence, pollution, etc.; in process control optimization in mineral processing plants and in equipment selection; and in optimal blast design, for the estimation of burden and spacing to achieve an optimum production blast [6].

4 AI Approach in Determination of Rock Fragmentation


Mishnaevsky Jr and Schmauder (1996) modeled the damage evolution in loaded rock on the basis of the methods of the theory of fuzzy sets [10]. The heterogeneity of the rock and its damage were characterized by membership functions of the rock in the fuzzy sets of materials with given properties and of failed materials, respectively. On the basis of the developed concept of the fuzzy damage parameter, the damage evolution in heterogeneous rock was simulated and the influence of initial rock heterogeneity on strength was studied numerically. It was concluded that the more heterogeneous the rock, the lower its strength, and that in order to achieve the maximal degree of rock fragmentation, one should form a maximally heterogeneous stress field in the loaded rock. Kulatilake et al. (2010) predicted the mean particle size in rock blast fragmentation using neural networks and multivariate regression [7]. They found that the prediction capability of the trained neural network models as well as the multivariate regression models was strong, and better than that of the most widely applied existing fragmentation prediction model. The diversity of the blast data used is one of the most important aspects of the developed models, and both the neural network and multivariate regression models are suitable for practical use at mines. Oraee and Asi developed a prediction model using an ANN, compared its predictions with those of the Kuz-Ram model, and concluded that the ANN provided results with better accuracy [11]. They showed that the effect of rock mass specifications is not evaluated comprehensively in the Kuz-Ram model, which therefore cannot be relied upon in all conditions. Bahrami et al. (2010) also predicted rock fragmentation due to blasting using an artificial neural network [1]. They concluded that the artificial neural network, with its high correlation capability, seems to be a competent means to predict rock
fragmentation. The results of the model were compared with those of a regression model, for which R² and RMSE were calculated as 0.701 and 1.958, respectively. The linearity hypothesis applied may be the main cause of the poor efficiency of the statistical method. They adopted the cosine amplitude method (CAM) to identify the most sensitive factors affecting rock fragmentation. It was concluded that blastability index, charge per delay, burden and powder factor were the most effective parameters on rock fragmentation, whereas average hole depth, specific drilling, spacing, stemming length and hole diameter were the least effective. AI-based models still have potential for application in the field of blasting and fragmentation. Besides neural network models, the fuzzy technique can provide a good measure of prediction, especially in cases where the information on which outputs are to be based is incomplete or missing.
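The cosine amplitude method used for that sensitivity ranking measures the strength of relation between two data series as r_ij = |Σ_k x_ik x_jk| / sqrt(Σ_k x_ik² · Σ_k x_jk²). A brief sketch, with data values invented for illustration (not Bahrami et al.'s data):

```python
import numpy as np

def cam(x, y):
    """Cosine amplitude method: strength of relation between two series,
    r = |x . y| / sqrt((x . x)(y . y)), in [0, 1]."""
    return abs(np.dot(x, y)) / np.sqrt(np.dot(x, x) * np.dot(y, y))

rng = np.random.default_rng(2)

# Invented output series (normalized mean fragment sizes for 10 blasts).
fragment_size = rng.uniform(0.2, 0.8, 10)

# Invented input series: one constructed to track the output closely,
# one unrelated, to show how CAM separates effective from weak parameters.
params = {
    "powder factor": fragment_size * 0.9 + rng.normal(0, 0.05, 10),
    "hole diameter": rng.uniform(0.2, 0.8, 10),
}
for name, series in params.items():
    print(f"{name}: r = {cam(series, fragment_size):.2f}")
```

Parameters with r close to 1 are ranked as the most effective on fragmentation; in practice the series are normalized blast records rather than the synthetic values used here.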

5 Conclusion
Rock fragmentation is governed by numerous parameters relating to the rock, the explosive and the blast geometry. Empirical models to estimate rock fragmentation are tedious to work with and remain difficult to implement in the field. Few attempts have been made to use the neural network approach for predicting fragmentation. The fragmentation models developed so far using ANN suffer from drawbacks such as an insufficient number of datasets, the inconsequentiality of those datasets, and their collection from entirely diverse rock masses, e.g., those of uranium and coal mines. It therefore becomes difficult to apply these models in a universal form under field conditions. However, models based on artificial intelligence score over regression models because of their distinctive advantages: flexibility, non-linearity, greater fault tolerance, adaptive learning, and ease in handling incompleteness, inexactness, uncertainty, probabilistic reasoning and fuzzy reasoning. The diversity of AI techniques has not yet been exhaustively applied to the complex problem of rock fragment classification and prediction. There is sufficient scope and a need for the development of a new model which is adaptive, handles noise in the data, and can provide near-tolerance results.
