
Tsinghua Science and Technology, June 2010, 15(3): 281-287

Approximating Nonlinear Relations Between Susceptibility and Magnetic Contents in Rocks Using Neural Networks

William W Guo*, Michael Li, Zhengxiang Li†, Greg Whymark

Faculty of Arts, Business, Informatics and Education, Central Queensland University, Rockhampton QLD 4701, Australia
† The Institute for Geoscience Research (TIGeR), Curtin University of Technology, Perth WA 6845, Australia

Abstract: Correlations between magnetic susceptibility and the contents of magnetic minerals in rocks are important in interpreting magnetic anomalies in geophysical exploration and in understanding the magnetic behaviors of rocks in rock magnetism studies. Previous studies focused on describing such correlations using a sole expression, or a set of expressions, obtained through statistical analysis. In this paper, we use neural network techniques to approximate the nonlinear relations between susceptibility and magnetite and/or hematite contents in rocks. This is the first time that neural networks have been used for such a study in rock magnetism and magnetic petrophysics. Three multilayer perceptrons are trained to produce the best possible estimation of susceptibility based on magnetic contents. These trained models are capable of producing accurate mappings between susceptibility and magnetite and/or hematite contents in rocks. This approach opens a new way of quantitative simulation using neural networks in rock magnetism and petrophysical research and applications.

Key words: neural networks; nonlinear function approximation; rock magnetism; magnetic susceptibility; magnetic contents

Introduction

The magnetism of a rock depends on the magnetic minerals that the rock contains. Usually the magnetic properties of a rock are determined by its ferromagnetic minerals if they are present. Magnetite and hematite are the most common ferromagnetic minerals in rocks; for example, they often form some 5% by weight of igneous and metamorphic rocks, and are present in many sedimentary rocks in various fractions[1]. Although many factors, such as the grain size of the magnetic minerals, may affect the magnetic properties of a rock, the content of magnetic minerals in a rock is the predominant factor.

Received: 2010-04-13; revised: 2010-05-08

* To whom correspondence should be addressed.
E-mail: w.guo@cqu.edu.au; Tel: 61-7-49309687

Great effort has been made in understanding the relations between magnetic properties (particularly magnetic susceptibility) and the content of magnetite or hematite, both for interpreting magnetic anomalies[2-5] and for rock magnetism study[1], and a few statistical correlations between susceptibility and magnetite content have been reported in some of these studies. However, most of the magnetite contents used in these early studies were determined by magnetic separation plus chemical analysis[2-4] or by microscopic grain counting[5], methods that are far less accurate than the analytical techniques in use today. Currently, the more accurate X-ray diffraction (XRD) analysis is widely used to determine magnetic contents in rocks. Therefore, to couple magnetic susceptibility with magnetic contents obtained from XRD analysis, there is a need to establish new correlations between susceptibility

and the weight percentage of magnetite and/or hematite in rocks. Although susceptibility data are relatively easy and cheap to obtain in the laboratory, XRD analysis is more expensive and thus is only applied to some selected samples. Making XRD data availability even worse is that most XRD analysis is financially supported by industry partners, who usually impose some restrictions on data release. This could partly be the reason why only very few new studies in this area have been reported in the last two decades[6,7].

The magnetic susceptibility and XRD data used in this study were collected in a petrophysical study partly supported by three mining companies during 1995-1999[6]. These data were reanalyzed using a statistical data mining approach[7], which revealed that the nonlinearity of the correlation between susceptibility and magnetite could not be sufficiently described using one single expression; instead, segmentation-based multiple fittings seemed more useful. This study showed that magnetite dominates the susceptibility of rocks in a power law if the rocks contain magnetite higher than 0.5% by weight, whereas for rocks containing magnetite less than 0.5% by weight, an exponential correlation exists between susceptibility and the weight percent of hematite. This complexity of the nonlinear relations between susceptibility and magnetic contents implies that new strategies, different from the traditional philosophy that a correlation can be described by a sole expression or a set of expressions through statistical analysis, must be adopted in dealing with the approximation of such nonlinearity.

In this paper, neural network techniques are used for approximating the nonlinear relations between magnetic susceptibility and the contents of magnetic minerals in rocks. This is an innovative application of neural networks in rock magnetism and magnetic petrophysics. This study focuses on training a neural network to produce the best possible estimation of susceptibility with respect to the given magnetic contents, rather than on finding a general expression for describing such a correlation. In the following sections, we briefly introduce the processes of collection, preprocessing, and classification of the rock magnetic data used for our study, and then present the mathematical description of this nonlinear problem. This is followed by an outline of the processes of neural network model selection, training, and testing using these classified data. At last, discussion and conclusions are drawn based on the outcomes of the neural network simulation.

1 Data Collection, Preprocessing, and Classification

A total of 573 rock samples were collected from 114 sites in the northwest of Western Australia. Typically six to nine standard specimens were extracted from each rock sample, and the magnetic susceptibility of all specimens was measured individually, which returned more than 3000 susceptibility datasets. Since XRD analysis is expensive, only 43 composite samples were selected for XRD analysis. Each composite sample was made of a few small rock pieces taken from the same rock sample so as to minimize bias toward a particular specimen. This is necessary because magnetic minerals are normally distributed unevenly in a rock, and even within a rock sample. Each composite sample was then crushed to fine grains for XRD analysis. Both the magnetic susceptibility measurements and the XRD analysis were carried out in the laboratories at The University of Western Australia. Magnetite and hematite were found to be the main carriers of magnetism in these samples.

Nearly 300 susceptibility measurements can be mapped to the 43 XRD samples because multiple specimens can be mapped to the same rock sample. These one-to-many mappings must be rationalized before data analysis takes place. The simplest way to do so is to use the average susceptibility from all specimens taken from the same rock sample. This gives 43 one-to-one mappings between magnetic content and susceptibility, which were indeed used in the previous studies[6,7]. These 43 mappings are probably sufficient for statistical data analysis; however, they are obviously not sufficient for training a reliable neural network.

Instead of directly using the average susceptibility from all specimens of the same rock sample, the magnetic contents from an XRD sample can be redistributed to the individual specimens through their susceptibilities based on the following formula:

c_ij = s_ij c_j / s̄_j    (1)

where c_j is the magnetic content of the j-th XRD sample, c_ij is the magnetic content redistributed to the i-th specimen from the j-th XRD sample, s_ij is the susceptibility of the i-th specimen from the j-th rock sample, and s̄_j is the average susceptibility of all specimens from the j-th corresponding rock sample.
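The redistribution of Eq. (1) is straightforward to implement. Below is a minimal sketch in Python (our illustration, not the authors' code; the specimen susceptibilities and the XRD content are hypothetical values):

```python
def redistribute(content_j, susceptibilities_j):
    """Redistribute the magnetic content c_j of one XRD sample to its
    individual specimens in proportion to their susceptibilities, Eq. (1):
    c_ij = s_ij * c_j / s_bar_j, where s_bar_j is the specimen average."""
    s_bar = sum(susceptibilities_j) / len(susceptibilities_j)
    return [s_ij * content_j / s_bar for s_ij in susceptibilities_j]

# Hypothetical rock sample: six specimen susceptibilities (SI) and a
# magnetite content of 1.2 wt% from the composite XRD analysis.
s = [0.010, 0.012, 0.008, 0.015, 0.011, 0.010]
c = redistribute(1.2, s)

# By construction the redistributed contents average back to the XRD
# value, so the 43 one-to-many mappings become ~300 one-to-one mappings.
assert abs(sum(c) / len(c) - 1.2) < 1e-9
```

Applied to all 43 XRD samples, this turns the averaged one-to-one mappings into roughly 300 specimen-level mappings, which is what makes neural network training feasible here.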

Such redistribution produces about 300 new mappings. These new mappings are further classified into three subclasses: magnetite-susceptibility (Mag-Sus); hematite-susceptibility (Hem-Sus), where magnetite is less than 0.5%; and magnetite-hematite-susceptibility (Mag/Hem-Sus), where both magnetite and hematite are present. This classification is based on the findings of the previous statistical analysis[6]. The mappings in each subclass are split between neural network training and testing at a ratio of about 83% to 17%. A summary of these subclasses is shown in Table 1.

Table 1  Classifications of susceptibility and magnetic content mappings

Subclass      Mappings   Training size   Testing size
Mag-Sus       239        202 (82%)       37 (18%)
Hem-Sus       268        231 (84%)       37 (16%)
Mag/Hem-Sus   144        123 (83%)       21 (17%)

2 Problem Description

Previous studies have revealed some correlations between magnetic contents and susceptibility in rocks and ores[2-4,6,7]. In summary, the following knowledge has been discovered in these studies. A power law seems to exist between susceptibility (s) and magnetite content (m) when magnetite is higher than 0.5% by weight in rocks[2-4,6,7]:

s = a m^b    (2)

where a and b are statistical constants depending on the datasets used. For rocks containing magnetite less than 0.5% by weight, an exponential correlation seems to exist between susceptibility and hematite content (h)[6]:

s = c d^h    (3)

where c and d are statistical constants depending on the datasets used. For a general case where both magnetite and hematite are present in rocks, a logical inference for such a relation would lead to the following expression:

s = a m^b + c d^h + e f(m, h)    (4)

where e is another data-dependent constant and f(m, h) is an unknown function depending on both magnetite and hematite contents.

Since magnetite and hematite are the most common and persistent magnetic minerals in rocks, the susceptibility of rocks is largely determined by the composition of these two minerals. The relations above are all nonlinear, data-dependent functions without unique solutions; they only indicate the general trends between susceptibility and magnetic contents. Statistical approximation has been proven too coarse for the purpose of simulation in magnetic petrophysics[6,7], so new approaches are needed to achieve a better approximation for this purpose. The best effort to get a usable solution is through approximation using a collection of data for a particular case.

3 Approximating Nonlinear Relations by Neural Networks

3.1 Neural network model selection

The core of a neural network is an adaptive mathematical model that is capable of approximating an arbitrary unknown function constrained by training datasets. It has been proven that a three-layer multilayer perceptron (MLP) neural network can approximate any continuous function mapped from one finite-dimensional space to another by adjusting the number of nodes in the hidden layer[8]. The structure of a three-layer MLP with a hidden layer of L nodes, a p-dimensional input vector x, and a q-dimensional output vector y is illustrated in Fig. 1.

Fig. 1  Three-layer MLP

The relationship between the input and output components of this MLP can be generally expressed as

y_k = φ( Σ_{j=1..L} w2_kj ψ( Σ_{i=1..p} w1_ji x_i ) )    (5)

where φ and ψ are the transfer functions, w1_ji denotes the input-to-hidden layer weight at hidden neuron j, and w2_kj denotes the hidden-to-output layer weight at output unit k.
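A minimal NumPy sketch of the forward pass of Eq. (5) is given below. It assumes, for illustration, a tansig (tanh) hidden layer and a linear output — the combination adopted for the MLPs in this study — with bias terms omitted as in Eq. (5); the weights and inputs are random placeholders, not trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

# p inputs (magnetite, hematite), L hidden nodes, q = 1 output (susceptibility)
p, L, q = 2, 80, 1
w1 = rng.standard_normal((L, p)) * 0.1   # input-to-hidden weights w1_ji
w2 = rng.standard_normal((q, L)) * 0.1   # hidden-to-output weights w2_kj

def mlp(x, w1, w2):
    """Three-layer MLP of Eq. (5): y_k = phi(sum_j w2_kj psi(sum_i w1_ji x_i)),
    with psi = tanh (tansig) and phi = identity (linear)."""
    return w2 @ np.tanh(w1 @ x)

x = np.array([0.8, 2.5])   # hypothetical magnetite and hematite contents (wt%)
y = mlp(x, w1, w2)         # simulated susceptibility (untrained, illustrative)
```

For the 2-D subclasses (Mag-Sus, Hem-Sus) p would be 1; for the 3-D Mag/Hem-Sus subclass p is 2, as here.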

For our problem, the single output of such an MLP is obviously susceptibility, but the input varies with the different magnetic minerals. We use not only a single input vector of either magnetite or hematite for approximating the 2-D correlations between susceptibility and either mineral, but also two input vectors of both magnetite and hematite for approaching the 3-D correlation between both minerals and susceptibility, which has not been reported before now. Although a single hidden layer is technically sufficient for achieving a satisfactory approximation[8,9], there is no universal rule for selecting the number of nodes in the hidden layer, even though some simple rules of thumb[10] and practitioners' strategies[11] have been proposed. Hence this selection is determined by running a number of experiments for individual cases.

3.2 Neural network training

The neural network process involves two phases: training the network with known datasets, and testing the trained network using different known datasets for model generalization. If the network is well trained, the MLP returns the closest approximated values in response to new input data. Note that the outcome of the MLPs is a set of numerical values, rather than an analytical formula like that resulting from statistical analysis, should it exist. The datasets are split randomly into training and testing subsets at a ratio of approximately 83% to 17% in general; the details are given in Table 1.

Normally a performance function is used to control the network training process. A good reference on performance functions commonly used for controlling neural network training is given by Qi and Zhang[12]. Mean square error (MSE) is chosen as the performance function to control the process of neural network training in this study:

MSE = (1/N) Σ_{t=1..N} (s_o(t) − s_s(t))²    (6)

where s_o and s_s are the original and simulated values, respectively.

We choose the Levenberg-Marquardt (LM) algorithm[13] to train the selected MLPs because this algorithm has been reported to be the fastest method for training moderate-sized feedforward neural networks[14,15]. For the LM algorithm, the weights (w) are updated according to the following formula:

w_ij(t+1) = w_ij(t) + Δw_ij(t)    (7)

with

Δw_ij = (JᵀJ + μI)⁻¹ Jᵀe    (8)

where J is the Jacobian matrix containing the first derivatives of the network errors with respect to the weights, and e is a vector of network errors. The LM algorithm was designed to approach second-order training speed without having to compute the Hessian matrix, which it approximates as

H = JᵀJ    (9)

Therefore, it is faster in computation than Newton's method and the gradient methods. A detailed description of the LM algorithm can be found in Marquardt[13], Hagan and Menhaj[14], and Hagan et al.[15]

Our MLP models are built using the neural network tools in MATLAB®[16,17]. Experiments indicate that there is no significant difference between the logsig-linear and the tansig-linear combinations as the transfer functions for the hidden and output layers, respectively; the tansig-linear combination is chosen for our MLPs.

The training of the three-layer MLPs is based on running a number of experiments for the datasets in the different subclasses. Assuming that an MSE smaller than 0.0001 indicates that a good fit has been achieved, experiments using hidden layers of 25, 50, 80, 100, 150, 200, and 250 nodes show that a hidden layer with 80 nodes can achieve the target MSE within 10 epochs (Fig. 2) and produce the most balanced outcome, i.e., one that is neither under-fit nor over-fit. Other neighboring MLPs also produce satisfactory outcomes, which are shown in Table 2 for comparison. Therefore, the outcomes of the hidden layer of 80 nodes will be used in our discussion.

3.3 Training and testing results

Among the three subclasses, both Mag-Sus and Mag/Hem-Sus return an almost perfect correlation between the targets and the simulated data, whereas Hem-Sus shows a trend of underestimating the targets at the higher end (Fig. 3). The 80-neuron MLPs return consistently satisfactory results for all three subclasses in terms of both the mean absolute error (MAE) and the maximum error (Max) (Table 3). These features are demonstrated clearly in Fig. 4, in which the fittings of both Mag-Sus and Mag/Hem-Sus are intuitively perfect, whereas the simulated values are mostly smaller than the targets for the Hem-Sus model.

Fig. 2  Training curves for the Mag-Sus (a), Hem-Sus (b), and Mag/Hem-Sus (c) subclasses with 80-node hidden-layer MLPs

Fig. 3  Linear regression between the targets (T) and simulated outcomes (S) with 80-node hidden-layer MLPs for the Mag-Sus (a), Hem-Sus (b), and Mag/Hem-Sus (c) subclasses
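The LM update of Eqs. (7)-(8) can be illustrated on a small curve-fitting problem. The sketch below (our illustration, not the MATLAB toolbox implementation used in the paper) fits the power law of Eq. (2) to synthetic data and tracks the MSE of Eq. (6); the generating constants, the noise level, and the fixed damping factor μ are all assumed for demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Mag-Sus-style data following Eq. (2): s = a m^b,
# with assumed a = 0.3, b = 1.5, plus a little measurement noise.
m = np.linspace(0.5, 5.0, 40)
s_obs = 0.3 * m**1.5 + rng.normal(0.0, 0.01, m.size)

def mse(w):
    """Performance function of Eq. (6): mean square error."""
    a, b = w
    return np.mean((s_obs - a * m**b) ** 2)

w = np.array([1.0, 1.0])   # initial guess for the parameters (a, b)
mu = 1e-2                  # assumed fixed LM damping factor
for _ in range(50):
    a, b = w
    e = s_obs - a * m**b   # error vector e
    # Jacobian of the model w.r.t. (a, b): columns d/da and d/db
    J = np.column_stack([m**b, a * m**b * np.log(m)])
    # LM step of Eq. (8): dw = (J^T J + mu I)^-1 J^T e, applied as Eq. (7)
    w = w + np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ e)
```

Because μ I regularizes JᵀJ, the step interpolates between gradient descent (large μ) and the Gauss-Newton step using the Hessian approximation of Eq. (9) (small μ); in a full LM implementation μ is adapted at each iteration rather than held fixed as in this sketch.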

Table 2  MLP training results (MSE for hidden layers of 50, 80, and 100 nodes, for the Mag-Sus, Hem-Sus, and Mag/Hem-Sus subclasses)

Table 3  Testing results of the 80-node hidden-layer MLPs (MAE, maximum error, and correlation, for the Mag-Sus, Hem-Sus, and Mag/Hem-Sus subclasses)

Fig. 4  Plots of targets and simulated outcomes with 80-node hidden-layer MLPs for samples of the Mag-Sus (a), Hem-Sus (b), and Mag/Hem-Sus (c) subclasses

4 Discussion and Conclusions

All three MLPs produce satisfactory approximations to the nonlinear functions between the contents of magnetic minerals and the susceptibility in rocks. These trained neural networks are capable of producing accurate mappings between susceptibility and magnetite and/or hematite contents in rocks. Undoubtedly the best outcome is achieved by combining the two inputs together for training the neural networks. For the single-input MLPs, the Mag-Sus model performs much better than the Hem-Sus model. This may be attributed to the fact that magnetite is far more magnetic than any other mineral in rocks, so its presence at more than 0.5% by weight sees it dominate the other minerals in magnetism. On the other hand, statistical analysis has shown that an exponential correlation seems to exist between susceptibility and hematite content for rocks containing magnetite less than 0.5% by weight[6]. However, a trace of magnetite, even one too small to be detected by XRD, can still make a non-negligible contribution to the magnetism of a hematite-dominated rock. This introduces some deflection into the training of the Hem-Sus MLP, which results in a general underestimation of the susceptibility attributed to hematite alone.

The MLP models can produce accurate simulations, which cannot be achieved using statistical methods because the statistical correlations only offer qualitative trends between the factors[6,7]. Nevertheless, the role that statistics plays is still important in discovering general patterns among the relevant factors. Such knowledge provides a general guide for researchers to interpret and better understand some magnetic phenomena in rock magnetism, even though it is too coarse for producing a reliable susceptibility value, and such general descriptions of the hidden nonlinear functions can hardly be provided by neural networks. The quantitative simulation provided by the MLP models opens a new way in rock magnetism and petrophysical research and applications.

Acknowledgements

The Commonwealth Government of Australia and The University of Western Australia are thanked for supporting this research through scholarship schemes. The Hamersley Iron Pty Ltd, BHP Iron Ore, and Robe River Iron Association are thanked for the financial and field assistance.

References

[1] Tarling D H, Hrouda F. The Magnetic Anisotropy of Rocks. London, England: Chapman & Hall, 1993.
[2] Mooney H M, Bleifuss R. Magnetic susceptibility measurements in Minnesota: II. Analysis of field results. Geophysics, 1953, 18: 383-393.
[3] Balsley J R, Buddington A F. Iron-titanium oxide minerals, rocks and aeromagnetic anomalies of the Adirondack area, New York. Economic Geology, 1958, 53: 777-805.
[4] Jahren C E. Magnetic susceptibility of bedded iron formation. Geophysics, 1963, 28: 756-766.
[5] Webb J E. The search for iron ore, Eyre Peninsula, South Australia. In: Mining Geophysics: Vol. I, Case Histories. Tulsa, Oklahoma, USA: Society of Exploration Geophysicists, 1966.
[6] Guo W. Magnetic petrophysics and density investigations of the Hamersley Province, Western Australia: Implications for magnetic and gravity interpretation [Dissertation]. Perth, Australia: The University of Western Australia, 1999.
[7] Guo W. A regression algorithm for rock magnetic data mining. WSEAS Transactions on Information Science and Applications, 2005, 2: 671-678.
[8] Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators. Neural Networks, 1989, 2: 359-366.
[9] White H. Some asymptotic results for learning in single hidden layer feedforward network models. Journal of American Statistical Association, 1989, 84: 1008-1013.
[10] Rumelhart D E, Hinton G E, Williams R J. Learning internal representations by error propagation. In: Rumelhart D E, McClelland J L, eds. Parallel Distributed Processing. Cambridge, USA: MIT Press, 1986.
[11] Curry B, Morgan P H. Model selection in neural networks: Some difficulties. European Journal of Operational Research, 2006, 170: 567-577.
[12] Qi M, Zhang G P. An investigation of model selection criteria for neural network time series forecasting. European Journal of Operational Research, 2001, 132: 666-680.
[13] Marquardt D. An algorithm for least-squares estimation of nonlinear parameters. SIAM Journal of Applied Mathematics, 1963, 11: 431-441.
[14] Hagan M T, Menhaj M. Training feedforward networks with the Marquardt algorithm. IEEE Transactions on Neural Networks, 1994, 5: 989-993.
[15] Hagan M T, Demuth H B, Beale M H. Neural Network Design. Boston, USA: PWS Publishing, 1996.
[16] Demuth H, Beale M. Neural Network Toolbox for Use with MATLAB. Natick, USA: The MathWorks, 2004.
[17] Demuth H, Beale M, Hagan M. Neural Network Toolbox 5. Natick, USA: The MathWorks, 2007.
