
Application of a Multi-Objective Optimization Algorithm and Artificial Neural Networks in the Machining Process

Farshid Jafarian, Hosein Amirabadi, Javad Sadri

Abstract-- Since experimental investigation of machining processes is difficult and costly, the problem becomes even harder when the aim is the simultaneous optimization of the machining outputs. This paper presents a novel hybrid method based on Artificial Neural Networks (ANNs), Multi-Objective Optimization (MOO) and the Finite Element Method (FEM) for the evaluation of thermo-mechanical loads during the turning process. After calibrating the controllable parameters of the simulation by comparing FE results with experimental results from the literature, the FE simulation results were employed to train neural networks with a genetic algorithm. Finally, the functions implemented by the neural networks were used as the objective functions of the Non-dominated Sorting Genetic Algorithm (NSGA-II), and the optimal non-dominated solution set was determined for different states of thermo-mechanical loads. Comparison between the results obtained by NSGA-II and the predicted results of the FE simulation showed that the hybrid FEM-ANN-MOO technique developed in this study provides a robust framework for manufacturing processes.

Index Terms – Intelligent methods, Hybrid technique, Machining process.

F. Jafarian, Department of Mechanical Engineering, University of Birjand, Birjand, Iran (e-mail: farshid.jafarian@ymail.com).
H. Amirabadi, Department of Mechanical Engineering, University of Birjand, Birjand, Iran (e-mail: hamirabadi@birjand.ac.ir).
J. Sadri, Department of Electrical and Computer Engineering, University of Birjand, Birjand, Iran, and School of Computer Science, McGill University, Montreal, Quebec, Canada (e-mail: javad.sadri@cs.mcgill.ca).

I. INTRODUCTION

So far, several investigations have been carried out on machining. However, performing numerous experimental tests to find optimal machining parameters is very time consuming and expensive. To solve this problem, some researchers have tried to model machining processes by various methods, such as statistical and intelligent ones [1]. Intelligent methods are among the most popular and have been used for predicting and optimizing machining processes [2].

Furthermore, the application of the Finite Element Method (FEM) to the simulation of manufacturing processes has great advantages, such as reducing time and cost by eliminating experimental tests, as well as the ability to predict results that are difficult or impossible to assess experimentally.

Although the above-mentioned methods have several advantages, they may show deficiencies when used separately. For instance, FEM can predict the process outputs only for specific input parameters. On the other hand, intelligent methods based on predictive models need numerous experimental data in order to estimate the outputs of the process. Therefore, some research has been done on hybrid techniques combining intelligent and FE methods in order to eliminate the aforementioned deficiencies [3]. In such approaches, the predicted results of the FE simulation are fed to the intelligent methods. However, the hybrid methods employed so far for the analysis of machining operations have mainly been restricted to the FEM-ANN (Finite Element Method - Artificial Neural Network) technique [3]. Since the quality of a machined surface is evaluated from several aspects, mere prediction of the machining outputs is not adequate, especially in practical applications; the outputs need to be investigated and optimized simultaneously. Consequently, developing a novel hybrid technique for machining processes is necessary and deserves further investigation.

According to the above points, in the present study a novel numerical investigation was implemented based on the development of a new hybrid FEM-ANN-MOO technique for the machining of AISI H13 hardened die steel.

II. MODEL VALIDATION

In this section, the predicted results, including the principal and thrust cutting forces and the maximum chip surface temperature, were validated by comparison with experimental results from the literature [4]. Validation was performed on AISI H13 die steel (52 HRC) at a feed rate of 0.25 mm/rev and cutting speeds of 75, 150 and 200 m/min. The cutting tool was of PCBN material with a rake angle of -5°, a clearance angle of 5° and a chamfered edge of 20° × 0.2 mm. The comparison between the predicted results of the simulation and the corresponding experimental tests is given in Table 1.

Table 1. Comparison between the predicted results of the finite element simulation and the corresponding experimental results at different cutting speeds

Percentage of error    75 m/min    150 m/min    200 m/min
Principal forces       3%          12%          8%
Thrust forces          13%         11%          9%
Temperature            3%          2%           2%

As shown in Table 1, there is good agreement between the simulation results and the corresponding experimental tests. In the following, the data required for the optimization of the process are extracted by FE simulation. Fig. 1 shows an example of the machining simulation at a cutting speed of 200 m/min. In the following sections, the results of Table 2 are applied to the intelligent methods.
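The percentage-of-error figures in Table 1 are ordinary relative errors of the FE predictions against the measured values. A minimal sketch of that calculation (the force values below are hypothetical placeholders for illustration only; the paper reports just the resulting percentages):

```python
def percentage_error(predicted: float, measured: float) -> float:
    """Relative error of an FE prediction against an experimental value, in percent."""
    return abs(predicted - measured) / abs(measured) * 100.0

# Hypothetical principal-force values (N); the paper does not list the raw forces.
fe_force, exp_force = 412.0, 400.0
print(round(percentage_error(fe_force, exp_force), 1))  # → 3.0
```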

Fig. 1. FE simulation of the temperature distribution during the machining process at a cutting speed of 200 m/min.

After the validation stage, numerical investigations were performed to evaluate the thermo-mechanical loads. In this regard, the effects of the machining parameters, including the cutting speed (Vc), feed rate (f), chamfer angle (Ø), chamfer width (W) and rake angle (γ), on the effective strain and temperature of the workpiece during machining were evaluated. In the present study, an L32 orthogonal array of the Taguchi method was used to determine the desired testing conditions. The details of the machining parameters, the testing conditions and the obtained FE simulation results are given in Table 2.

Table 2. Predicted results of the simulation

No   Vc (m/min)   f (mm/rev)   γ (deg)   W (mm)   Ø (deg)   Workpiece temperature   Effective strain
1    150          0.10         0         0.10     15        151                     0.73
2    150          0.15         -5        0.15     20        219                     0.94
3    150          0.20         -10       0.20     25        339                     1.41
4    150          0.25         -15       0.25     30        495                     1.69
5    200          0.10         0         0.15     20        184                     0.86
6    200          0.15         -5        0.10     15        173                     0.73
7    200          0.20         -10       0.25     30        502                     1.86
8    200          0.25         -15       0.20     20        410                     1.24
9    250          0.10         -5        0.20     30        382                     1.93
10   250          0.15         0         0.25     25        265                     1.07
11   250          0.20         -15       0.10     20        293                     1.09
12   250          0.25         -10       0.15     15        287                     0.91
13   300          0.10         -5        0.25     25        276                     1.32
14   300          0.15         0         0.20     30        367                     1.24
15   300          0.20         -15       0.15     15        360                     1.06
16   300          0.25         -10       0.10     20        279                     0.97
17   150          0.10         -15       0.10     30        358                     1.71
18   150          0.15         -10       0.15     25        316                     1.39
19   150          0.20         -5        0.20     20        234                     1.14
20   150          0.25         0         0.25     15        219                     0.71
21   200          0.10         -15       0.15     25        367                     1.82
22   200          0.15         -10       0.10     30        282                     1.29
23   200          0.20         -5        0.25     15        209                     0.93
24   200          0.25         0         0.20     20        240                     0.91
25   250          0.10         -10       0.20     15        242                     1.04
26   250          0.15         -15       0.25     20        415                     1.55
27   250          0.20         0         0.10     25        254                     0.81
28   250          0.25         -5        0.15     30        356                     1.09
29   300          0.10         -10       0.25     20        287                     1.24
30   300          0.15         -15       0.20     15        402                     1.40
31   300          0.20         0         0.15     30        346                     1.18
32   300          0.25         -5        0.10     25        301                     0.89

III. ARTIFICIAL INTELLIGENCE

Intelligent methods based on predictive and optimization techniques have been significantly developed and used in several fields, such as mechanical engineering [5]. In the rest of the paper, after briefly explaining some basic intelligent methods, these aims are pursued using programs written in the MATLAB environment.

A. Genetic Algorithm (GA)

The genetic algorithm, which is inspired by nature (human genetics), is one of the well-known population-based algorithms for finding optimal solutions [1]. Each member of the population is called a chromosome and consists of genes equal in number to the optimization variables of the problem. In every generation, all the chromosomes are evaluated by the objective function and, based on their fitness, fitter chromosomes are produced in the next generation. Finally, after a number of iterations, the algorithm converges toward the optimal solution.

B. Multi-Objective Optimization (MOO)

Many real-life problems are not limited to the optimization of one objective; two or more conflicting objectives must be optimized at the same time. Normally, for these problems a single solution cannot be found that simultaneously optimizes all the objectives. While searching for solutions, one reaches points where attempting to improve one objective causes the other objectives to deteriorate [6]. Therefore, Multi-Objective Optimization (MOO) can be defined as the process of simultaneously optimizing two or more conflicting objectives, and it arises in many fields wherever optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives [7]. In multi-objective optimization problems there are two spaces: the decision space and the objective space. The decision space contains the input variables of the problem, while the objective space contains its output values. Since the output values correspond to particular input values, the main goal of the optimization problem is to find the optimal input variables that minimize/maximize the corresponding objective values.

C. Fitness evaluation in MOO

Fitness evaluation in MOO differs from single-objective optimization: there is a set of optimal solutions which dominate the other solutions in the decision space. For example, in minimization problems a feasible solution x is said to dominate another feasible solution y if and only if zi(x) ≤ zi(y) for i = 1, …, m and zj(x) < zj(y) for at least one objective function j. Under this definition, the set of all feasible non-dominated solutions in the decision space is called the Pareto optimal set, and the corresponding objective function values in the objective space are called the Pareto front [6, 8]. No solution in the Pareto front is absolutely better than the other solutions of the front; each solution is better than the others in at least one objective.

D. Non-dominated Sorting Genetic Algorithm (NSGA-II)

One of the most popular evolutionary algorithms for solving multi-objective optimization problems is NSGA-II (Non-dominated Sorting GA-II) [9]. It extends the simple Genetic Algorithm (GA) with non-dominated sorting of the candidate solutions (Pareto-optimal points) in order to solve multi-objective problems. Fig. 2 illustrates the main procedure of basic NSGA-II. The algorithm starts with a random population of input variables Pt. Then the population Rt at time t is created from population Pt and population Qt (which is created from the parent population Pt by the usual genetic operators, such as mutation and crossover). After that, the entire population Rt is evaluated by the objective functions and, based on the non-domination procedure, all members of Rt are classified in ascending order of dominance. The best Pareto fronts (stored at the top of the list) are then transferred to the new parent population Pt+1; this operation is called elitism. Since the size of population Pt+1 is half that of Rt (in fact, the size of Pt+1 equals the size of Pt), half of the Pareto fronts are discarded during the transfer. This procedure continues until the individuals of a particular Pareto front can no longer be accommodated entirely in the parent population Pt+1. Therefore, to choose the exact number of individuals of that particular front to fill the remaining space of the population Pt+1, a crowded-comparison operator based on the crowding distance is employed. According to this method, the individuals with the greatest distance from the other individuals of that particular front (i.e., individuals in less dense regions) are selected to fill the rest of the parent population Pt+1. It should be noted that the crowding-distance operation leads to more diversity of the solutions in the Pareto front (this operation is explained further in the next section). Finally, population Pt+1 is used instead of population Pt in the next generation of NSGA-II, and the optimal Pareto front is determined after a given number of generations [9].

Fig. 2. Basics of the NSGA-II procedure

E. Artificial Neural Network (ANN)

Artificial neural networks simulate a simplified model of the human brain and are capable of estimating complex nonlinear relationships between input machining parameters and the corresponding outputs [12]. According to the topology of a neural network, there are three kinds of layers in ANNs: the input layer, the hidden layers and the output layer. Each network is composed of several neurons, which are organized into the mentioned layers. Neurons of the various layers are connected to each other by weighted connection links. An independent weight, the so-called bias, can also be added to each neuron. Furthermore, the transfer function determines the effect of the weights and biases of a neuron on the neurons of the next layer and may be linear or nonlinear. The types of transfer functions (such as purelin, logsig, tansig, …), the number of neurons and the number of hidden layers are the hyperparameters of a neural network, which are selected in order to build different network structures. The process of adjusting the weights and biases is called network training, which is usually evaluated by minimizing the mean squared error (MSE) between the predicted outputs of the neural network and the actual outputs. Fig. 3a shows a schematic view of a neuron with its corresponding weights, bias and transfer function, and an example of a multilayer perceptron neural network with two hidden layers is illustrated in Fig. 3b.

Fig. 3. (a) Schematic view of a neuron. (b) Structure of an artificial multilayer perceptron neural network.

F. Training the ANN by GA

Previously, various methods have been used to train neural networks, most of them mathematically based, such as back-propagation [10]. Recently, some researchers have presented an efficient method for training neural networks in which the weights and biases of the network are updated by a GA [11]. In this method, the initial network topology is a multilayer perceptron neural network, and none of the conventional training methods are used. In this approach, the weights and biases of the neural network are the variables of the GA. The flowchart of training the ANN by GA is shown in Fig. 4.

Fig. 4. Training the ANN by GA
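The GA-based training idea above (weights and biases flattened into a chromosome, MSE as the fitness) can be sketched as follows. This is an illustrative assumption, not the authors' MATLAB implementation: the toy data, the single hidden layer of 4 tansig units, and the GA settings (population 60, 10 elites, averaging crossover, Gaussian mutation) are all choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data for the 32 normalized FE samples of Table 2 (5 inputs -> 1 output);
# the target here is a known smooth function, not the paper's data.
X = rng.random((32, 5))
y = X.mean(axis=1, keepdims=True)

N_HID = 4                              # one hidden layer of tansig neurons
N_W = 5 * N_HID + N_HID + N_HID + 1    # flattened weights + biases (29 genes)

def mse(chrom):
    """Decode a chromosome into network weights and return the MSE fitness."""
    w1 = chrom[:5 * N_HID].reshape(5, N_HID)
    b1 = chrom[5 * N_HID:6 * N_HID]
    w2 = chrom[6 * N_HID:7 * N_HID].reshape(N_HID, 1)
    b2 = chrom[-1]
    pred = np.tanh(X @ w1 + b1) @ w2 + b2      # tansig hidden, purelin output
    return float(np.mean((pred - y) ** 2))

pop = rng.normal(0.0, 1.0, (60, N_W))          # random initial chromosomes
for _ in range(200):                           # GA generations
    fitness = np.array([mse(c) for c in pop])
    elite = pop[np.argsort(fitness)[:10]]      # elitism: keep the 10 fittest
    # crossover (average of two random elite parents) plus Gaussian mutation
    pa = elite[rng.integers(0, 10, 50)]
    pb = elite[rng.integers(0, 10, 50)]
    pop = np.vstack([elite, 0.5 * (pa + pb) + rng.normal(0.0, 0.1, (50, N_W))])

best = min(mse(c) for c in pop)
print(f"best MSE after 200 generations: {best:.5f}")
```

Because the elites are carried over unchanged, the best fitness is non-increasing across generations, which mirrors the convergence curves of Fig. 6.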
During network training, the testing and training errors gradually decrease up to a certain number of iterations, which depends on the network capacity; beyond these iterations the testing error decreases no further. At this stage the network is said to be over-trained, which leads to an increasing testing error and decreasing network performance. Therefore, by training the ANN with the GA, it is possible to find the most suitable number of training iterations and prevent over-training of the neural network by monitoring the testing error over the GA iterations.

G. Leave-One-Out Cross Validation (LOOCV) method

Cross validation is one of the most important and effective approaches used for the verification of models in statistical machine learning; it estimates how well a predictive model learned from training data will perform on future unseen (test) data. In other words, using cross-validation techniques we can measure the generalization power (accuracy) of the trained model in practice and avoid over-fitting. One of the popular methods of cross validation is leave-one-out cross validation (LOOCV). This method is usually used when the number of training and testing data is not very large, or when it is too difficult to create a large training/testing dataset for learning the model. In this method, in each iteration one data sample is temporarily set aside as validation data, and the remaining data are used for training the model. After training, the prediction error of the model is calculated on the validation sample. If the original training set has K samples, this procedure is repeated K times (equal to the number of observations in the original training set). The average of these errors is then reported as the prediction error of the predictive model on the whole dataset. In this validation approach, over its K iterations, every sample has the chance to act both as a training sample and as a testing sample. For more information on LOOCV, refer to [13].

IV. OBTAINED RESULTS

The results obtained in this work are divided into two subdivisions. The first subdivision is devoted to the presentation of a precise model for predicting the maximum temperature and the effective strain of the workpiece using two separate neural networks; it is divided into two parts: choosing the best neural network structure and training the chosen network structure. The second subdivision is devoted to the simultaneous optimization of the thermo-mechanical loads during the orthogonal turning operation; it is also divided into two parts: simultaneous optimization of the process by NSGA-II and evaluation of the NSGA-II results by performing FE simulations for some of the obtained solutions.

A. Prediction of the effective strain and temperature of the machined workpiece by ANN

First, the best structure of the ANNs was determined by the leave-one-out cross validation method. In this method, 31 data samples for training and 1 sample for testing were selected from the data of Table 2, and the training operations of the network were carried out by GA. This operation was repeated 32 times, and each time a new data sample was selected as the testing data while the training process was performed on the remaining 31 data. Then, the average absolute error over the 32 iterations was taken as the network error. Fig. 5 shows the flowchart of the method applied in this section.

Fig. 5. LOOCV method for choosing the network structure

This method was repeated three times for each of the different neural network structures and, at the end, the average of the obtained errors was taken as the error of each network. These results are reported in Table 3. The obtained results showed that the neural networks with the structures 5-5-3-1 and 5-6-4-1 are the best among the examined structures for predicting the maximum temperature and the effective strain of the workpiece, respectively. It should be noted that all the input and output data were normalized linearly between 0 and 1 in order to improve the training process. Also, the purelin and tansig transfer functions were chosen empirically for the output and hidden layers of the neural networks, respectively. In addition, some of the GA parameters were set as follows: a population size of 135, an elite count of 8, 100 iterations, a migration fraction of 0.7 and a migration interval of 6.

Table 3. Choosing the best structure of the Artificial Neural Networks (ANNs) by the leave-one-out cross validation method

ANN structure   5-12-3-1   5-9-4-1   5-6-4-1   5-5-3-1   5-3-2-1   5-4-3-1   5-12-1   5-7-1   5-3-1
Temperature     11         13        9         7         8         9         18       21      24
Strain          12         8         7         9         13        12        23       19      16

After selecting the suitable structures, the NNs had to be trained. For this purpose, GA was employed to train the two chosen networks using 27 training samples and 5 testing samples from the data of Table 2. The training process was carried out over different numbers of GA iterations in order to prevent over-training. In the end, by a trial-and-error procedure of changing the number of GA iterations, the neural networks were trained to predict the maximum temperature and the effective strain of the workpiece very accurately. The obtained results of each network are shown in Table 4, and Fig. 6 shows the convergence diagram of the GA while training the NNs.

Table 4. Final errors of the ANNs trained by GA

ANN           GA iterations   MSE training   Mean training error   Mean testing error
Temperature   900             0.00121        2.8%                  3.7%
Strain        1500            0.00057        1.7%                  3.2%

Fig. 6. Convergence curves of the training procedure

B. Simultaneous optimization of thermo-mechanical loads

To achieve the desired surface properties of the machined workpiece, the thermo-mechanical loads were optimized. Here, suitable machining conditions were determined by the multi-objective optimization algorithm (NSGA-II) in order to simultaneously minimize the temperature and maximize the effective strain of the workpiece. The functions implemented by the NNs in the previous section were used as the objective functions of NSGA-II. By doing this, NSGA-II determined the optimal Pareto front by searching the decision space (the input machining parameters) and sorting the non-dominated solutions (based on minimizing the temperature and maximizing the effective strain). Some adjusted parameters of NSGA-II include a crossover probability of 0.8, a mutation probability of 0.25, a population size of 90 and 1000 iterations. The optimal Pareto front (non-dominated solution set) obtained after the optimization procedure is shown in Fig. 7. A set of 27 solutions (out of the 90 obtained solutions) and the corresponding decision variables (machining parameters), sorted by increasing temperature, are reported in Table 5. As can be seen, moving down the solution set of this table, the state of objective 1 (temperature) becomes worse while objective 2 (effective strain) becomes better. Since none of the solutions is absolutely better than another (they do not dominate each other), each of them is an optimal solution that can be selected according to the requirements of the process engineer.

Fig. 7. Optimal Pareto front

Table 5. Predicted results of multi-objective optimization by NSGA-II

No   Vc      f       γ       W       Ø       Temp    Strain
1    156.4   0.100   0       0.101   15.14   149.4   0.686
2    156.4   0.100   0       0.110   15.12   150.8   0.699
3    156.4   0.101   0       0.141   15.45   154.3   0.739
4    161.5   0.103   -1.31   0.152   16.25   164.9   0.803
5    163.7   0.101   -1.88   0.148   16.83   167.5   0.830
6    168.2   0.100   -3.02   0.168   17.88   173.8   0.936
7    170.6   0.101   -3.65   0.170   18.51   179.0   0.996
8    172.3   0.100   -4.09   0.171   18.97   183.5   1.044
9    176.0   0.100   -5.14   0.164   19.96   193.9   1.131
10   178.6   0.100   -5.71   0.176   20.61   208.1   1.244
11   180.2   0.100   -6.12   0.17    21.01   215.7   1.294
12   182.5   0.100   -6.70   0.178   21.61   229.5   1.377
13   184.2   0.100   -7.12   0.179   22.04   239.9   1.436
14   185.2   0.100   -7.39   0.180   22.31   247.3   1.474
15   188.2   0.102   -8.16   0.185   23.08   271.9   1.586
16   189.7   0.100   -8.53   0.187   23.46   283.0   1.643
17   191.0   0.100   -8.86   0.180   23.84   288.6   1.665
18   193.1   0.100   -9.41   0.186   24.36   308.8   1.747
19   194.9   0.100   -9.87   0.191   24.83   326.4   1.816
20   196.7   0.100   -10.3   0.191   25.29   340.7   1.869
21   198.2   0.100   -10.7   0.195   25.65   354.8   1.917
22   200.9   0.100   -11.3   0.195   26.38   374.7   1.985
23   202.2   0.100   -11.7   0.201   26.70   389.6   2.034
24   206.4   0.100   -12.8   0.201   27.83   417.5   2.133
25   210.2   0.100   -13.7   0.208   28.76   444.8   2.215
26   211.4   0.100   -14.0   0.210   29.07   453.3   2.240
27   215.0   0.100   -14.9   0.218   30.00   476.9   2.306

To verify the proper interaction between FEM and MOO, some FE simulations were carried out using machining parameters from Table 5. In this regard, Fig. 8 shows the comparison between the predicted results of the FE simulation and the results obtained by NSGA-II. As shown in this figure, the simulation results are very close to the corresponding results of the multi-objective optimization, and there is very good agreement between the two methods. Hence, it can be said that the hybrid FEM-ANN-MOO technique proposed in this paper was implemented successfully. In fact, by employing this hybrid technique, suitable machining parameters were determined for optimizing the thermo-mechanical loads during the machining process, results that could not be obtained using the previous simulations alone.
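The dominance relation defined in Section III-C can be applied directly to the data of this study. The sketch below filters a few (temperature, effective strain) rows taken from Table 2 under the two objectives used here (minimize temperature, maximize effective strain); it is a plain illustration of non-dominated filtering, not the NSGA-II implementation used in the paper:

```python
def dominates(a, b):
    """a dominates b for (minimize temperature, maximize strain):
    a is no worse in both objectives and strictly better in at least one."""
    return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

def pareto_front(points):
    """Return the non-dominated subset (the Pareto optimal set)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (temperature, effective strain) pairs from a few rows of Table 2
samples = [(151, 0.73), (495, 1.69), (382, 1.93), (502, 1.86),
           (219, 0.71), (209, 0.93), (358, 1.71), (367, 1.82)]
front = sorted(pareto_front(samples))
print(front)  # → [(151, 0.73), (209, 0.93), (358, 1.71), (367, 1.82), (382, 1.93)]
```

Each surviving point is better than every other surviving point in one of the two objectives, exactly the trade-off behavior visible in Table 5.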
Fig. 8. Comparison between the results of FE and NSGA-II

V. CONCLUSION

The main objective of this paper is the presentation of a useful new hybrid method based on FEM-ANN-MOO, as well as the further development of the application of intelligent methods to manufacturing processes. It is possible to state that:

• The application of ANNs to manufacturing processes has always suffered from several limitations because of the very few data available for these processes. To alleviate this problem, in the present study a new method was developed that applies ANNs to the machining process more efficiently than previous works. Accordingly, suitable neural network structures were determined by the leave-one-out cross validation method, and the training of the selected networks was then carried out by a combination of GA and the LOOCV method. Although these methods were time consuming in terms of programming and computation, the applied methods ultimately increased the accuracy and effectiveness of the neural networks despite the small amount of training data (32 samples). In this regard, testing errors between 0.5% and 6.1% were obtained for both ANNs.

• Some FE simulations were performed with the same machining parameters obtained by NSGA-II. Comparison between the results of NSGA-II and the results of the FE simulation showed very good agreement between the FE and intelligent methods employed in this paper.

• The optimal solution set of Table 5 showed that, for the simultaneous optimization of the thermo-mechanical loads within the investigated range of input parameters, the feed rate was kept at its minimum (0.1 mm/rev). Moreover, it was found that the cutting speed, rake angle, chamfer width and chamfer angle ranged between 156 and 215 m/min, 0 and -15 degrees, 0.1 and 0.21 mm, and 15 and 30 degrees, respectively; increasing each of them improved the state of the mechanical loads and deteriorated the state of the thermal loads.

VI. REFERENCES

[1] A. M. Zain, H. Haron and S. Sharif, "Application of GA to optimize cutting conditions for minimizing surface roughness in end milling machining process," Expert Systems with Applications, vol. 37, pp. 4650–4659, 2010.
[2] G. Quintana, M. L. G. Romeu and J. Ciurana, "Surface roughness monitoring application based on artificial neural networks for ball-end milling operations," Journal of Intelligent Manufacturing, vol. 22, pp. 607–617, 2009.
[3] D. Umbrello, G. Ambrogio, L. Filice and R. Shivpuri, "A hybrid finite element method–artificial neural network approach for predicting residual stresses and the optimal cutting conditions during hard turning of AISI 52100 bearing steel," Materials and Design, vol. 29, pp. 873–883, 2008.
[4] E. G. Ng, D. K. Aspinwall, D. Brazil and J. Monaghan, "Modelling of temperature and forces when orthogonally machining hardened steel," International Journal of Machine Tools & Manufacture, vol. 39, pp. 885–903, 1999.
[5] N. Muthukrishnan and J. P. Davim, "Optimization of machining parameters of Al/SiC-MMC with ANOVA and ANN analysis," Journal of Materials Processing Technology, vol. 209, pp. 225–232, 2009.
[6] E. Zitzler, M. Laumanns and S. Bleuler, "A tutorial on evolutionary multiobjective optimization," Lecture Notes in Economics and Mathematical Systems, Springer, 2004.
[7] L. N. Xing, Y. W. Chen and K. W. Yang, "An efficient search method for multi-objective flexible job shop scheduling problems," Journal of Intelligent Manufacturing, vol. 20, pp. 283–293, 2009.
[8] N. Srinivas and K. Deb, "Multiobjective optimization using nondominated sorting in genetic algorithms," Evolutionary Computation, vol. 2, pp. 221–248, 1994.
[9] K. Deb, A. Pratap, S. Agarwal and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, pp. 182–197, 2002.
[10] T. Özel and A. Nadgir, "Prediction of flank wear by using back propagation neural network modeling when cutting hardened H-13 steel with chamfered and honed CBN tools," International Journal of Machine Tools & Manufacture, vol. 42, pp. 287–297, 2002.
[11] A. C. P. Filho and R. M. Filho, "Hybrid training approach for artificial neural networks using genetic algorithms for rate of reaction estimation: Application to industrial methanol oxidation to formaldehyde on silver catalyst," Chemical Engineering Science, vol. 157, pp. 501–508, 2010.
[12] I. N. Tansel, S. Gulmez, M. Demetgul and S. Aykut, "Taguchi Method–GONNS integration: Complete procedure covering from experimental design to complex optimization," Expert Systems with Applications, vol. 38, pp. 4780–4789, 2010.
[13] R. Kohavi, "A study of cross-validation and bootstrap for accuracy estimation and model selection," in Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 1995, pp. 1137–1143.
