
Artificial Neural Network Application for Tribological Studies of Composite Materials


Dr. N. Raghavendra¹, Dr. D. Shivalingappa²
¹ Associate Professor, Dept. of Mechanical Engineering, BNMIT, Bangalore-70.
² Professor, Dept. of Mechanical Engineering, BNMIT, Bangalore-70.
Introduction
Composites may have polymers, ceramics, or metals as the matrix material. Polymer matrix
composites degrade at high temperature, and ceramic composites are brittle and lack
strength. Aluminium-based metal matrix composites (MMCs) are developed with both
continuous and discontinuous reinforcements. In the early stages of MMC development,
continuous reinforcements were used; however, the higher cost of production, improper
alignment of fibers, fiber damage, and fiber surface degradation gave rise to the development of
discontinuously reinforced composites. Particulate composites are a more recent development
and are extensively researched for both academic interest and industrial applications.
Developing a metal matrix composite is a challenging task in which one must take care of the
processing parameters, selection of the matrix material, reinforcement type, size and
volume fraction, selection of the secondary process, interface reactions, grain growth during
solidification, strengthening mechanisms, etc. The controlling parameters for a particulate
composite are numerous, and they have to be studied from both the development and the
performance points of view. Composites are applied from different perspectives, such as
increased strength and hardness, increased wear resistance, increased fracture toughness,
high temperature performance, and creep strength.

Wear Behavior of MMC


Wear is a complex phenomenon which shows nonlinear behavior under high speed and load
conditions. It is a thermo-mechanical process in which both thermal and mechanical
conditions affect the outcome. The wear rate, wear coefficient, and coefficient of friction
characterize the tribological behavior of any pair of interacting surfaces, and for composite
materials they have to be studied at various operating speeds, loads, and sliding distances.
The wear data generated for a given composite material are large in number and have to be
represented suitably. Wear data can be represented by tables, graphical methods, and 3-D
spectrums. The graph should include all the wear regions, wear rates, and wear mechanisms.
Wear rate maps or wear mechanism maps are used to represent large databases of wear
behavior. A wear map provides details of the wear rate, wear mechanism, and transition
regions at various speeds and loads [1]. It gives comprehensive details of the nature of the
wear behavior, its severity, and the possible operating range of load and speed. Further
optimization of the material parameters and prediction of the behavior can be carried out
with mathematical and statistical approaches. This reduces the cost and time involved in
conducting a large number of experiments, and a database of the performance of the
composite can be created easily.

Tribological Properties of Particulate MMCs


Tribological properties describe the behavior of a system whose surfaces interact with
each other in motion. The tribological aspect consists of wear and friction. These are not
intrinsic properties but depend on the interacting system parameters, which are
classified as
1. Intrinsic parameters - composition, geometry, and microstructure
2. Operational parameters - stress, load, temperature, surface roughness, and velocity
3. Structural parameters - materials, lubricant, and environment
4. Operational parameters - loading, kinematics, and temperature
5. Interaction parameters - contact mode and lubrication regime
The wear behavior of materials is studied to determine the wear rate and wear coefficient. Apart
from this, the mass loss and volume loss of the materials are also studied. Tribolayers are studied
to evaluate the mechanism of wear and to quantify the wear rate [2].
Wear classification has been carried out by researchers in different ways. Figure 1 shows the
types of wear processes and their relations with the major thermal, chemical, and mechanical
causes.
The following are the major modes of wear on most interacting surfaces:
1. Adhesive wear
2. Abrasive wear
3. Fatigue wear
4. Corrosive wear
1. Adhesive Wear
Wear in the plastic region is called adhesive wear. Due to compression and shearing,
deformation of the metal occurs; some portion adheres to either surface and the
remaining hard particles are fractured out of the surface.

Fig 1 Classification of Wear and their Interrelations [3]

2. Abrasive Wear
Abrasive wear is caused by hard particles interacting with the surface of a softer material. The
rate of abrasive wear depends on the characteristics of the surface, the shape of the particles
between the two surfaces, the sliding speed, and the environmental conditions. Two-body and
three-body abrasion are the common modes. In abrasive wear, material is removed by fracture,
fatigue, and melting.

3. Fatigue Wear
Fatigue wear occurs in components such as bearings, gears, cams, and friction drives. Small
load-carrying contacts such as Hertzian contacts generate fatigue wear. Pure rolling and sliding
of the contacting surfaces generate repeated loading, which has a fatigue nature. The process of
fatigue wear begins with the generation of stress at the contact points, which develops plastic
deformation and nucleation growth. Sub-surface crack nucleation and propagation then take
place. In the final stage, particles fracture from the surface, resulting in wear.

4. Corrosive Wear
Corrosive wear is the loss of material due to chemical action in the presence of water. At high
temperature, the chemical process induces the loss of material from the surface to form
corrosive wear. A surface in contact with the atmosphere develops a layer of corrosion
products, and mechanical action then separates the wear particles from the surface. Work
hardening together with chemical action results in corrosive wear.

Importance of ANN for Wear study


An automobile consists of various tribological parts interacting in various kinds of
motion. The major interacting components are the engine parts, transmission, drive line, tires,
brakes, and windshield wipers. The engine parts require reduced wear and friction to achieve
fuel economy, emission control, longer service intervals, and improved reliability. Materials
with increased wear resistance can be used in engine parts such as the piston, connecting rod,
and piston ring inserts to reduce the wear rate. Surface interactions are also seen in valve train
parts such as the valve springs, valve mechanism, and rocker arms, and high temperatures
prevail at the valves attached to the engine exhaust. The transmission clutch and associated
parts experience wear during operation. Brakes are another automobile part subjected to
extreme wear and friction [4]. The wear phenomenon in each of these machine parts has to be
controlled so that the efficiency of the vehicle can be enhanced.

Composite materials are developed to control the tribological parameters of interacting
surfaces. Hard ceramic particles in a soft, ductile matrix influence the wear and friction
behavior of the material at various operating conditions. Particulate MMCs are replacing
metal alloys as good candidates for tribological applications. Second-phase particles such as
borides, carbides, and oxides are used due to their high hardness and wear resistance.
Aluminium matrices reinforced with fibers or particulates are developed for application in
automotive engine components, pistons, cylinder bores, and piston inserts due to their high
wear resistance. Wear models have been developed with various controlling parameters; more
than 100 parameters influence the wear phenomenon. Empirical equations have been developed
for various wear mechanisms, and experimental validation is necessary for the wear models [5].

Numerical Modelling by Artificial Neural Network for Composite Wear Behavior

Development of a mathematical model for a discontinuously reinforced composite is more
complex than for fibers due to the random orientation of the particles. Simulation of
tribological processes and prediction of wear rates have been carried out in the past with
different numerical approaches such as FEM analysis, statistical methods, Taguchi methods,
genetic algorithms, and artificial neural networks. The most comprehensive prediction tool
with the best accuracy is the Artificial Neural Network (ANN). It uses neurons trained on the
basis of experimental values with various standard networks, algorithms, and transfer
functions. The trained neurons are tested and validated with another set of experimental values
containing both input and output parameters. Once the ANN algorithm is optimized in terms of
minimum error, it is used to predict the behavior of the material for an entirely new set of
parameters. A back propagation algorithm used to model an ANN with one hidden layer can
predict the wear rate effectively with minimum error [6]. An ANN model developed for the
wear behavior of a brake disc indicates that, apart from the training algorithm, the network
architecture, the distribution of the input parameters, the output parameters, and the transfer
function affect the predictions [7]. The prediction accuracy can be further increased by
increasing the number of input data sets [8]. ANN can be used effectively to predict both the
mechanical and the tribological properties of aluminium-based composites [9]. The most
popular ANN for prediction of data in materials science and composite materials is the
multilayer perceptron (MLP) [10].

Artificial Neural Network Applied to Study Particulate MMCs


Neural Networks (NN) are one of the developments of artificial intelligence. The NN
originated from biological research carried out about 50 years ago. McCulloch and Pitts
explained the functioning of a neural network in 1943. In 1949, Hebb described neuron
activation in the learning process and neuron distribution for the storage of data and
information. Computers were introduced into neural network research in 1956, and in 1960
Rosenblatt's perceptron network gained popularity. Commercial application of improved
neural networks started in the 1980s, after the limitations of the rule-based approach were
addressed by shifting from sequential methods to parallelism. Neural networks gained
popularity in the fields of pattern recognition, prediction of trends and behavior, data
filtering to separate noise, nonlinear data analysis, and optimization. In the present work,
optimization and prediction are employed for nonlinear system behavior.

A neural network is a simplified representation of the human brain, which consists of ten to
one hundred billion cells. Each cell is called a neuron. Neurons have the capability to receive,
store, and send information. The neurons are connected by means of dendrites, which act as a
passage medium through which a neuron sends signals. A complex network is formed by the
connections established between a single neuron and tens of thousands of other neurons. The
neurons communicate with each other through the dendrites by sending electrical and chemical
signals. Large numbers of these pulses can be transmitted simultaneously, so the network
functions with massive neuron activity. These neurons can process enormous amounts of data
and information [11].

The architecture of an artificial neural network consists of an input layer for accepting the
input parameters, followed by a hidden layer for processing and an output layer for the result.
Each neuron present in one layer is connected to the neurons present in the other layers. The
input layer represents the parameters used to describe the problem variables. The output
parameters represent the solution to the problem. The hidden layer neurons are intermediaries
between the input and the output, and have no outside connections. Every neuron has its own
connections, and its own specific weights and signals, which are referred to with respect to the
connections. A weight may have a positive, negative, or zero value. The weight is important,
as it carries the information or knowledge in the network, and it is adjusted along with a
bias [11].

A neuron receives values (weights) from other neurons; before the neuron sends an
output signal, it compares them with a threshold value. The activation of the neuron is
determined by a transfer function or activation function. A neuron is activated when it receives
a signal greater than the threshold value. The weights of the neuron are modified as the neuron
learns with respect to new inputs; the network is altered in terms of weights to fit the model
parameters. With a step transfer function, the neuron passes the signal at one particular value,
whereas with a sigmoidal transfer function a range of positive and negative values acts as the
limits between which the signal is passed. Different types of network can be constructed by
varying the number of layers and neurons, and by specifying which neurons should receive
signals. A neuron can send a signal to the same layer or to a different layer. Artificial neural
networks process information sequentially, whereas in the human brain signals are processed
over massively parallel connections. However, an ANN works on electrical signals, which are
faster to process than the biological (chemical) signals of the human brain [11].

In the development of an ANN, the performance of the neurons depends on the network
structure, the quality of the training set, the training method, and the threshold error rate. There
are no standard development guidelines that guarantee good results for an ANN, and it is a
difficult and complicated process to develop in a simple way. ANNs are always developed
through a number of trial-and-error iterations over each of the factors and options; the
development of an ANN is an iterative process. Four steps are essential in the development of
an ANN:
1. Defining the task that is to be simulated and collecting sufficient data for training
2. Developing the neural network structure
3. Training the network based on the available data
4. Testing and validating with a new data set
The entities to be considered in the design of an ANN [12] are as follows.
Neuron - the simplest entity, which has inputs and an output and is activated by a threshold
value of the signal. The representation of the neuron is shown in figure 2.

Fig 2. Comparison of Biological and Artificial neuron

Transfer function - the function, together with its threshold value, used in an ANN to decide
the activation of a neuron. Figure 3 shows the transfer functions.

Fig 3 Types of Transfer functions
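The step and sigmoidal transfer functions in figure 3 can be sketched in a few lines. This is an illustrative Python sketch (not part of the original study); the threshold value is an assumption.

```python
import math

def step(x, threshold=0.0):
    # Step transfer function: the neuron fires only when the signal
    # reaches one particular (threshold) value.
    return 1.0 if x >= threshold else 0.0

def sigmoid(x):
    # Sigmoidal transfer function: squashes any signal into the open
    # range (0, 1), continuously and nonlinearly.
    return 1.0 / (1.0 + math.exp(-x))

print(step(-0.5), step(0.5))   # below vs. above the threshold
print(sigmoid(0.0))            # midpoint of the sigmoid: 0.5
```

The step function switches abruptly between two values, whereas the sigmoid passes a graded signal, which is why it is preferred where differentiability matters.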


Network architecture - the representation of the complete neural network with all layers
considered. Its block diagram representation is shown in figure 4.

Fig 4 Artificial Neural Network Configuration

ANNs are considered an improved prediction tool compared to regression or statistical
approaches. The parameters in regression analysis are linear, but in an ANN they are
nonlinear; an ANN is therefore referred to as nonlinear regression analysis. The back
propagation ANN tool is superior to regression analysis. An ANN can be considered a
multivariate, non-parametric, stochastic, and nonlinear type of analysis with dynamic feature
extraction and selection. ANNs can be applied as the complexity and nonlinearity of the input
parameters increase. In general, ANNs are a more advanced simulation and prediction
approach than Gaussian statistical regression [13]. ANNs are employed for solving a variety
of problems, such as
1. Pattern recognition for images
2. Data clustering
3. Function approximation in equations
4. Forecasting of data
5. Optimization of parameters in any process
6. Association
7. Control
ANNs are classified in different contexts:
1. Learning - supervised, unsupervised
2. Data - binary, continuous
3. Flow of data - feed forward, feedback
Types of Neural Networks - Many researchers have classified neural networks in different
ways. Some are old models and some are modifications of existing models [13].
1. Hopfield Network - used for solving optimization problems
2. Adaptive Resonance Theory Network - used for pattern recognition, computation, and
classification
3. Kohonen Network - used for data and image compression
4. Back Propagation Network - used for forecasting, prediction, modelling of data, and data
and image compression
5. Recurrent Network - used in dynamic memories
6. Counter Propagation Network - used for simulation of data
7. Radial Basis Function Network - used for prediction and simulation of data
General Considerations in ANN
1. Database size and partitioning - Good predictions from an ANN can be expected only with
sufficient data at the input; a sufficient quantity of data provides accurate predictions. The
data set is partitioned into training, test, and validation subsets. The training subset takes the
largest share, since training is the major function of the ANN, and the smaller subsets are
used for testing and validation. Based on the performance of the network during training,
the architecture of the ANN is altered by trial and error and finally optimized. There is no
standard percentage partition of the data sets used in an ANN. One suggestion, by Looney,
is 65% for training, 25% for network testing, and 10% for validation; a common practice is
20% for testing and 80% for training.
2. Data processing, balancing, and enrichment - The data set used for training should be
processed suitably to remove noise, reduce the input dimensionality, and allow data
inspection. Balancing is carried out by removing extra data from classes that are
over-represented. If the database is small the prediction will not be accurate; to increase the
database size, new sets of input data have to be generated by a simplified method.
3. Data normalization - Scaling (normalizing) the data between 0 and 1 is essential to reduce
complex data values and prevent premature saturation of the neuron weights. It is usually
practical to normalize between 0.1 and 0.9 instead of 0 to 1; this prevents saturation of the
function, in which no learning occurs.
4. Input/output representation - The output and input data in an ANN may be continuous,
discrete, or a mix of both classes. The levels of an input can be represented by binary
numbers such as 0 and 1; if four levels exist for an input, then 00, 01, 10, and 11 can be
used. Continuous input and output variables can also be represented by binary numbers.
5. Network weight initialization - At the beginning of the ANN run, the weights and
thresholds have to be initialized. Initialization has an effect on convergence, so these
changes are carried out over a small range.
6. BP Learning Rate –The learning rate of neuron can be controlled in optimum rate to have
good convergence. It is carried out by changing the weight. If high learning rate are used it
will never converge. Therefore the learning rate as found in the literature = 0.1 to 1.0 or =
0.1 to 0.6 are used.
7. BP momentum coefficient (α) - Momentum is used in updating the weights. A high value
of α reduces the risk of the network getting stuck, while a low value of α makes training
very slow. A value of α in the range 0.4 to 0.9 is recommended; values of α from 0 to 1 are
also used.
8. Transfer function - The activation of a neuron is achieved by comparing the weighted sum,
through the transfer function, with the threshold value. The sigmoid transfer function is
most commonly used in back propagation ANNs, as it provides the continuity and
differentiability required for back propagation learning.
9. Convergence criterion - The training can be stopped based on the training error, the error
gradient, or cross-validation. Cross-validation-based decisions are more reliable for
checking convergence. The coefficient of determination R², indicating the similarity of the
predicted and target outputs, can be used. Sum square error (SSE) and mean square error
(MSE) based methods for stopping the training are used most commonly.
10. Number of training cycles - The number of cycles used for training an ANN varies and can
be decided by trial and error. The minimum error defines the ANN goal, but excessive
training sometimes results in near-zero training error while the predictions on the test data
do not agree. The training cycles are repeated until the error is found to start increasing.
11. Training mode - Training is carried out example by example (EET) or in batch mode. EET
consumes less memory, but with a wrong example the learning trend will follow an
improper direction. In batch training all the previous training examples are stored and taken
into account; it requires large weight storage but predicts values accurately.
12. Hidden layer size - Identifying the number of hidden layers and the number of hidden
neurons in an ANN is a difficult task. In most approximation problems one hidden layer
works satisfactorily; two hidden layers are required for learning with discrete random
variables. A number of proposals are available in the literature that give relations for
arriving at the number of hidden neurons, for example the upper bound
NHN = NTRN / [R · (NINP + NOUT)]
where NTRN is the number of training patterns and R = 5-10.
13. Parameter optimization - All the parameters discussed should be selected suitably to arrive
at a better ANN. A set of about six parameters is considered optimum, being neither too
high nor too low. Careful selection of the parameters by trial and error should be used.
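The upper-bound relation for the hidden layer size in item 12 can be evaluated directly. This is a hedged sketch: the 1000 training patterns are an assumed example, while the 12 inputs and 8 outputs match the parameter counts used later in Table 1.

```python
def hidden_neuron_upper_bound(n_trn, n_inp, n_out, r=5):
    # NHN = NTRN / [R * (NINP + NOUT)], with R typically taken as 5-10.
    return n_trn / (r * (n_inp + n_out))

# Assumed example: 1000 training patterns, 12 inputs, 8 outputs.
print(hidden_neuron_upper_bound(1000, 12, 8))        # R = 5  -> 10.0
print(hidden_neuron_upper_bound(1000, 12, 8, r=10))  # R = 10 -> 5.0
```

A larger R gives a smaller, more conservative bound, guarding against a hidden layer big enough to memorize the training patterns.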

The field of materials research has in the past been studied experimentally, and large amounts
of time and money are spent preparing databases of material behavior in various situations.
However, with fast-developing technology and strong simulation and prediction tools now
available, the conventional technique of establishing a database from a series of laboratory
experiments has slowly been giving way to analytical and statistical tools such as ANNs,
genetic algorithms, etc. The ANN is the most widely adopted tool in this process, as it can
easily predict the nonlinear, random, large-data-set behavior of a material system.
An artificial neural network works on a set of experimentally developed input data, which is
used to train the neurons. The input layer, hidden layer, and output layer are selected suitably.
The transfer function, bias, neurons, and ANN architecture are developed to predict the output
based on the trained neurons. ANN models are developed with several training functions and
algorithms for the given problem, and the ANN is optimized by trial and error. A large amount
of the input data is used for training, and the model is validated with the remaining data set.
Levenberg-Marquardt with a feed forward network is the most popular ANN configuration. A
comprehensive material system for advanced materials applications can be developed through
a systematic approach combining materials science, manufacturing, experimental, and
mathematical tools in interdisciplinary work. The resulting material system will prove suitable
in all respects demanded by the technological need.
The prediction or estimation of data by statistical methods, learned from the specific patterns
in a dataset, represents the primary function of an ANN. It is a machine learning concept in
which a large dataset trains the system and makes it capable of predicting the output. The
ANN is an alternative to experimental and physical models for the prediction and simulation
of behavior. It handles nonlinearity, irregular data sets, and data patterns, accepts numerous
variables, and provides acceptable general solutions. ANNs are large distributed and parallel
structures containing small units called neurons, connected by signal channels referred to as
weights. An ANN essentially contains 1. links that are referred to as weights, 2. an adder that
forms the sum of the weighted inputs, and 3. a transfer function for producing the output.
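The three essential parts just listed (weighted links, adder, transfer function) can be combined into one artificial neuron. This is a minimal sketch; the input values, weights, and bias are assumptions chosen only for illustration.

```python
import math

def neuron_output(inputs, weights, bias):
    # 1. Links: each input arrives over a weighted connection.
    # 2. Adder: form the weighted sum of the inputs plus the bias.
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    # 3. Transfer function: the sigmoid squashes the sum into (0, 1).
    return 1.0 / (1.0 + math.exp(-net))

# Assumed values purely for illustration.
out = neuron_output(inputs=[0.5, 0.3], weights=[0.8, -0.4], bias=0.1)
print(out)
```

Whatever the inputs, the output always lies strictly between 0 and 1, which is what makes the sigmoid neuron composable into layers.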
A neuron in the ANN is like a brain cell: it assesses the signal it receives, adds the
weighted signals, and compares the sum with a threshold value to produce the output. The
neurons are provided with many inputs, based on the variable parameters of the experiment,
and generate only one output. Neurons are of three types: input, output, and hidden. The input
neuron receives the signal from the output neuron through the feedback mechanism, compares
it with the input parameter, and after weight adjustment generates the signal and passes it to
the hidden layer. The hidden layer neurons connect the input layer and output layer by
transferring the signal within the system. A neuron containing a transfer function converts the
signal at its input to an output based on the weights of the associated parameters. The weights
of the input neurons, the bias, and the output neurons are represented in figure 5.

Fig 5. Basic neuron model

Activation of a neuron is performed by the transfer function. The commonly used transfer
function is the sigmoid function, similar to that shown in figure 5. This transfer function works
in the range (0, 1) and is continuous and nonlinear. The activation value created by the transfer
function for each neuron is compared with the threshold value; if it is greater than the threshold
value a signal is generated by the neuron, otherwise no signal is generated. The processing of
data in an ANN is carried out by a network architecture such as a feed forward network, a
feedback network, or a self-organizing network. A feed forward network contains a
unidirectional flow of signals. In a feedback network the signal from the output is compared
and fed to the input of the previous layer; networks containing interconnecting layers have
feedback. In a recurrent network, which is a self-organizing network, the organization of the
predicted pattern is based on the input received. The Kohonen network is a recurrent network
containing an input layer; the Kohonen layer, after receiving the input, identifies the patterns of
the data and develops the self-organized pattern in the form of a table.
Training of a neural network is carried out through the transfer functions and the signal
transfer between the layers. The weights used to produce the neuron outputs between the
layers are computed based on previous memory. The ANN architecture is developed as a first
step, and then the network is trained by supervised or unsupervised methods. The training or
learning used in an ANN involves remembering and generalizing the output from a given set
of inputs. Supervised learning is used in most ANNs; in supervised learning both the input and
the expected output are provided to the network for training. The generated output is compared
with the expected output to form the error. The error is sent back to the previous layer for
generation of an optimal signal computed by weight adjustment and the transfer function. This
process of generating the error by comparison is carried out for a number of trials until the
error reaches its least value. The network is thus optimized, tested against known output
values, and finally generalized for prediction.
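The supervised-learning cycle just described (generate an output, compare it with the expected output, and propagate the error back as weight adjustments) can be sketched for a single sigmoid neuron. The training data and learning rate below are assumptions for illustration, not values from the wear study.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_neuron(samples, epochs=3000, eta=0.5):
    # Start from zero weights and repeatedly adjust them from the error.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, expected in samples:
            out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            error = expected - out             # expected vs. generated output
            delta = error * out * (1.0 - out)  # sigmoid derivative term
            w = [wi + eta * delta * xi for wi, xi in zip(w, x)]
            b += eta * delta
    return w, b

# Hypothetical training set: the target follows the first input only.
data = [([0.0, 0.0], 0.1), ([1.0, 0.0], 0.9),
        ([0.0, 1.0], 0.1), ([1.0, 1.0], 0.9)]
w, b = train_neuron(data)
print(sigmoid(w[0] * 1.0 + w[1] * 0.0 + b))   # approaches the 0.9 target
```

After enough trials the error becomes small and the trained weights generalize: the neuron responds near 0.9 whenever the first input is high, exactly the remembering-and-generalizing behavior described above.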

5.3 Design of Artificial Neural Network for Predicting Wear Behavior

In the problem definition step, the parameters controlling the wear behavior of the composite
materials are numerous. The maximum number of data points that can be generated, so that
accurate prediction is possible, was obtained through laboratory tests. The major contributing
factors for the wear behavior, in terms of mechanical properties, working environment, and
testing parameters, are generated for the given composite material. The nonlinear wear rate
behavior of a composite is controlled by more than 100 parameters, but in the present work the
effect of a limited set (13 parameters) is studied.
In the data collection process, the input parameters from the wear test are limited to
speed, load, and sliding distance, but the predicted output is based on additional parameters
such as the composite's mechanical properties, the operating environment, and the composite's
thermal properties. The final input parameters contain all the above parameters, for accurate
prediction of the output with a reduction in noise. Table 1 shows the typical input/output
parameters used for ANN prediction.
In the optimal model design, the selection of the transfer function, the number of hidden
layers, the number of neurons, and the layer configuration are considered. A trial-and-error
approach was used to arrive at the optimal ANN model. The comparison of the predicted and
expected outputs is used to generate an error called the Mean Square Error (MSE); for an
efficient ANN model, the performance of the system converges towards zero error. The
transfer functions and training algorithms available for selection are shown in table 2. Based
on the literature, two algorithms were selected for the present work: 1. Levenberg-Marquardt
and 2. Bayesian Regularization. The corresponding training functions, trainlm and trainbr, are
used for the network models.
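The MSE used as the error measure here, and the regression (R²) value used later to judge the tested networks, can be computed in a few lines. This is an illustrative sketch: the target and predicted values are assumed numbers, not results from the study.

```python
def mean_square_error(targets, outputs):
    # MSE between expected and predicted values; 0 means a perfect fit.
    return sum((t - o) ** 2 for t, o in zip(targets, outputs)) / len(targets)

def r_squared(targets, outputs):
    # Coefficient of determination: similarity of predicted and target output;
    # 1 means the predictions match the targets exactly.
    mean_t = sum(targets) / len(targets)
    ss_res = sum((t - o) ** 2 for t, o in zip(targets, outputs))
    ss_tot = sum((t - mean_t) ** 2 for t in targets)
    return 1.0 - ss_res / ss_tot

target = [0.2, 0.4, 0.6, 0.8]         # assumed normalized wear rates
predicted = [0.22, 0.38, 0.61, 0.79]  # assumed network outputs
print(mean_square_error(target, predicted))
print(r_squared(target, predicted))
```

Training is judged converged when the MSE approaches zero and R² approaches one on data the network has not seen.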
Table 1 Input parameters for ANN prediction

Sl No Input Parameters Output Results


1 Speed Weight Loss
2 Load Volume Loss
3 Sliding Distance Wear rate
4 Pressure Specific wear rate
5 Density Wear resistance
6 Track radius Wear coefficient
7 Hardness Coefficient of friction
8 Tensile stress Friction force
9 Compressive stress
10 Young’s modulus
11 CTE
12 Thermal Conductivity

These algorithms are capable of computing the nonlinear wear behavior of the composite,
and relatively large data sets can be handled by them. The sigmoid transfer functions used
in the ANN also account for the nonlinear behavior.
Table 2 Training Algorithms and Transfer Functions

The transfer function continuously adjusts the weights during the error computation and back
propagation of the signal. The weights and biases are random numbers generated during each
trial. The adjustment of the weights and biases decides the duration of the training; for larger
sets of neurons the weight adjustment should be carried out so as to minimize the computation
time, and random weight initialization was used for this purpose. Normalization of the input
and output data is necessary so that the ANN works with values of the same order of
magnitude; otherwise some inputs will override the others. Min-max normalization is the most
widely used in ANN.
The outputs of the ANN are finally renormalized to convert the values from the range (0, 1)
back to their normal values. The trained network should be optimal, and over-fitting can be
avoided by generalization. Improved generalization is obtained by stopping the training
process early: the MSE is compared against the number of iterations, and at the minimum MSE
value the network computations are stopped. The optimal architecture and ANN models were
trained and then tested for generalization. By comparing the MSE and regression values of the
tested networks, the models are generalized for further prediction. The optimal models are
tested and validated by regression curves for further use.
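The min-max normalization and final renormalization described above amount to a pair of inverse mappings. This is a small sketch; the load values are assumed figures, and the 0.1-0.9 range follows the earlier recommendation for avoiding sigmoid saturation.

```python
def normalize(values, lo=0.1, hi=0.9):
    # Min-max scaling into [lo, hi]; 0.1-0.9 is often preferred over 0-1
    # to keep the sigmoid away from its saturated ends.
    vmin, vmax = min(values), max(values)
    scaled = [lo + (hi - lo) * (v - vmin) / (vmax - vmin) for v in values]
    return scaled, vmin, vmax

def renormalize(scaled, vmin, vmax, lo=0.1, hi=0.9):
    # Inverse mapping: convert network outputs back to physical values.
    return [vmin + (s - lo) * (vmax - vmin) / (hi - lo) for s in scaled]

loads = [10.0, 20.0, 30.0, 40.0]   # assumed applied loads in N
scaled, vmin, vmax = normalize(loads)
print(scaled)                      # spans 0.1 ... 0.9
print(renormalize(scaled, vmin, vmax))
```

The minimum and maximum of the raw data must be stored with the model so that predictions can be renormalized into physical units afterwards.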

5.4 Structure of Artificial Neural Network Model


The structure of an ANN consists of the algorithm, the transfer function, the number of hidden
layers, the number of neurons, and the generalized optimal model. A trial-and-error approach
is followed in the development to obtain the minimum error difference between the prediction
and the set target. The development of an ANN involves the selection of the training
algorithm, which is the basic foundation of the ANN model. Three types of neural network
architecture are available: 1. feed forward network, 2. feedback network, 3. self-organizing
network. In the present work a feedback neural network with a back propagation algorithm
was used; it feeds the output of a neuron back to the same layer or the next layer for the
weight adjustments.
The primary functions of the feedback network are a) forwarding the input to the hidden
layer and then to the output layer, b) calculating the error and back propagating the calculated
error, and c) adjusting the weights in each layer at each computation. The flow of signals
through the neurons in the network is shown in figure 6. In the input layer the neurons receive
signals from the training pattern. These signals are sent to the hidden layer containing the
neurons, where they are computed with a nonlinear activation function to produce the output.

Fig 6 Basic configuration and flow of a neuron

This output is sent to the output neuron. The signal in the output-layer neuron is compared with
the set target, and the difference is generated as an error. This error signal is sent back to
the hidden layer, where the bias and weights are adjusted to improve the prediction.
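The three functions a)–c) above can be sketched as one forward/backward pass of a single-hidden-layer network. The layer sizes, learning rate, and sigmoid activation are illustrative assumptions, not the values used in this work.

```python
# Sketch of one back-propagation step: forward the input, compute the
# error against the set target, and adjust the weights of each layer.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 5, 1
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))   # random initial weights
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
lr = 0.1                                            # assumed learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([[0.2, 0.5, 0.8]])   # one normalized training pattern
t = np.array([[0.4]])             # set target

# a) forward the input through the hidden layer to the output layer
h = sigmoid(x @ W1)
y = sigmoid(h @ W2)

# b) compute the error and back-propagate it
err = y - t
delta_out = err * y * (1 - y)               # sigmoid derivative at output
delta_hid = (delta_out @ W2.T) * h * (1 - h)

# c) adjust the weights in each layer
W2 -= lr * h.T @ delta_out
W1 -= lr * x.T @ delta_hid
```

Repeating this step over the whole training set is what drives the prediction toward the set target.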

5.5 Artificial Neural Network Architecture


The ANN architecture is crucial for predicting the outputs and results of the linear and
nonlinear behavior of any system. The network architecture involves selection of the number of
hidden layers, the number of neurons in each hidden layer, the transfer function, and the
training algorithm. In the present work one hidden layer was used for the ANN, with the number
of hidden neurons selected as 10, 30, and 50. The number of hidden neurons cannot be calculated
by any standard formula, so most researchers use a trial-and-error method for its estimation:
starting from a low neuron count and increasing it until the minimum MSE is achieved. The number
of neurons is fixed once the MSE reaches its minimum value, at which point the training is
stopped. One proposed empirical relation is

Number of hidden neurons = 2 × √(OP − IP) (1)

where OP = number of outputs and IP = number of inputs.
The training process is carried out by starting with a minimum number of neurons and increasing
the count until the MSE becomes least; this point is identified from the graph of MSE vs. number
of neurons shown in Figure 7.

Fig 7 Selection of the optimum number of neurons for ANN

When the minimum error is obtained the training is stopped; this early-stopping criterion is
used in the present work. With further increase in the number of neurons the error remains the
same and a threshold limit is reached; beyond this point additional neurons only increase the
computation time without any variation in the output.
The activation function of the hidden neurons is selected to modify the signal according to the
linear or nonlinear nature of the system. Tanh, sinh, logistic, and sigmoidal functions are used
in ANN. These functions solve the problem faster because of their steeper slopes. The logistic
(sigmoid) function maps its input to the range 0 to 1, while tanh maps it to the range −1 to 1.
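The common activation functions named above, with their output ranges, can be written out directly; the sample inputs are arbitrary.

```python
# Logistic (sigmoid) and tanh activations: the logistic output lies in
# (0, 1), while tanh output lies in (-1, 1); both saturate for large |z|.
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    return math.tanh(z)

# Both are steepest near z = 0, which is where learning is fastest.
assert 0.0 < logistic(-5.0) < logistic(5.0) < 1.0
assert -1.0 < tanh(-5.0) < tanh(5.0) < 1.0
```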
Initial weights must be set for the network before the learning process. The weights between the
layers are adjusted during learning and prediction; this adjustment lets each neuron compare the
set target with the predicted value and generate a signal based on the error. For large data
sets the weight adjustments show more oscillation. To reduce the oscillation, the weight change
is made a function of the previously computed weight change. This is carried out through a
momentum factor: the momentum decides what portion of the previous weight change is added to the
present weight change, minimizing the oscillation of the weight change. Momentum thus helps the
predicted data attain convergence.
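The momentum rule just described can be sketched as follows. The toy error surface (a quadratic whose gradient equals the weight itself) and the coefficient values are illustrative assumptions.

```python
# Momentum weight update: the present change combines the gradient step
# with a portion (beta) of the previous change, damping oscillations.
import numpy as np

def momentum_step(w, grad, prev_dw, lr=0.1, beta=0.9):
    dw = -lr * grad + beta * prev_dw   # gradient step + momentum portion
    return w + dw, dw

# Toy quadratic error surface with minimum at w = 0, so grad = w.
w, prev_dw = np.array([1.0]), np.array([0.0])
for _ in range(100):
    w, prev_dw = momentum_step(w, w, prev_dw)
```

After repeated steps the weight spirals into the minimum instead of oscillating back and forth, which is the convergence benefit noted above.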
Once the network is designed and optimized, the ANN is trained with a large data set from the
experiments; testing is carried out after effective training of the ANN model. The training
process is carried out over different trials based on the number of neurons, the number of
hidden layers, and the training algorithm, and the optimum is decided by the minimum error for
the given epoch. The series of training data presented to the network between each weight update
is called an epoch, and the epoch size is a function of the data used for the computation. The
epoch is used for early stopping; if the process is not stopped, over-fitting of the data
occurs. The plot of epoch vs. MSE is shown in Figure 8.

Fig 8 Comparison of training, testing and validation based on MSE and Epoch
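The early-stopping behavior of Figure 8 can be sketched as a check on the validation MSE per epoch: training stops once the validation error fails to improve for a run of epochs. The MSE history and patience value here are mocked examples, not results from this work.

```python
# Early stopping: track the best validation MSE and stop after `patience`
# consecutive epochs without improvement (validation failures).
def early_stop(val_mse_by_epoch, patience=3):
    best, best_epoch, fails = float("inf"), 0, 0
    for epoch, mse in enumerate(val_mse_by_epoch):
        if mse < best:
            best, best_epoch, fails = mse, epoch, 0
        else:
            fails += 1
            if fails >= patience:   # over-fitting has set in; stop here
                break
    return best_epoch, best

# Validation error falls, then rises again once the network over-fits.
history = [0.9, 0.5, 0.3, 0.2, 0.25, 0.3, 0.4, 0.5]
```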

The mean square error is the performance measure used for determining the epoch. The training
process generates the graph of epoch vs. MSE; the epoch with the minimum MSE is found and the
training is stopped there. Similarly, the number of hidden neurons is varied and the
corresponding MSE values are plotted to find the optimal neuron count for minimum MSE. Typical
ANN results are shown in Figure 9, which consists of a) the mean square error plot, b) the error
histogram, c) the regression plots for training, testing, validation, and overall, and d) the
plots of gradient, mu, and validation failures.
The network is trained first, and testing is carried out on the trained network with the
available input and output data set. During training the model looks for a particular pattern in
the data set and remembers that pattern. In each trial the output is compared with the target
and an error is generated. A number of iterations are carried out to minimize the error, and
finally the optimum network is obtained. This optimal network is generalized after testing at
various input parameters.

Fig 9 Typical training results of ANN for coefficient of friction: a) mean square error plot,
b) histogram of error, c) comparison of regression plots of training, testing, validation, and
overall, d) plot of gradient, mu, and validation failures.

Conclusion
Wear behavior is a nonlinear and complex phenomenon, and particulate composites further exhibit
unique tribological behavior at different operating parameters. The experimental burden of
carrying out a large number of tests can be reduced with mathematical prediction tools.
Artificial neural network models can incorporate a large number of input variables to predict
the output in terms of tribological properties. ANN models are well-established tools for data
prediction and simulation in materials science applications. An ANN model can be developed for
studying the tribological behavior of particulate composite materials by incorporating as many
controlling parameters as possible.
Acknowledgment
The authors wish to acknowledge the Management and the Department of Mechanical Engineering,
BNMIT, for extending their support for writing this book chapter.
