
Materials Today Communications 30 (2022) 103193


Machine learning-enabled prediction of density and defects in additively manufactured Inconel 718 alloy

Aman Kumar Sah a, M. Agilan b, S. Dineshraj b, M.R. Rahul a,*, B. Govind b

a Department of Fuel, Minerals and Metallurgical Engineering, Indian Institute of Technology (ISM), Dhanbad 826004, Jharkhand, India
b Materials and Mechanical Entity, Vikram Sarabhai Space Centre, Indian Space Research Organisation, Thiruvananthapuram 695022, Kerala, India

* Corresponding author. E-mail address: rahulmr@iitism.ac.in (M.R. Rahul).
https://doi.org/10.1016/j.mtcomm.2022.103193
Received 15 September 2021; Received in revised form 20 January 2022; Accepted 23 January 2022; Available online 29 January 2022

Keywords: Inconel 718; Additive manufacturing; Machine learning; Density; Defects

Abstract

Additive manufacturing (AM) of engineering materials is getting wider attention, and the properties of AM samples are widely different from those of conventionally processed alloys. The prediction of properties or defect formation in AM samples will accelerate AM component development. The current study uses trained machine learning (ML) algorithms to predict the density and defect formation in AM samples. Experiments were carried out to generate the dataset for ML algorithm training and testing, and the analyzed data show a strong correlation between the energy density and the density of the sample. Multiple ML algorithms were trained; Naïve Bayes and the Artificial Neural Network show more than 85% accuracy in porosity prediction for the test dataset, and the RF algorithm shows the best fit for density prediction on the trained dataset. The experimental comparison shows that the trained algorithms can predict the density and defect formation in AM samples reasonably well.

1. Introduction

Additive manufacturing of materials is getting wider attention due to reduced material wastage and the ability to manufacture complex geometries. Many researchers have reported AM studies on superalloys [1–3]. Different parameters influence the quality and density of the AM component [4,5]. Successful AM component development depends on many factors, including identifying an optimum processing domain. Series of experimental trials, simulation methods, and integrated approaches are reported in the literature [1,5,6]. Recently, machine learning-enabled frameworks have been getting wider attention in many research domains, including AM [7–10]. Data available in the literature or from laboratory experiments can be used to train ML algorithms and identify the optimum processing domain or the properties of AM components. Christian et al. [11] used the Support Vector Machine (SVM) technique to identify defects by monitoring in-situ images during the AM process and reported an accuracy of >80%. Kenta et al. [12] used SVM to generate a processing map and establish the optimum processing domain for additively manufactured CoCr alloy. Ren et al. [13] used a physics-based machine learning model to establish the thermal field for various scanning strategies. Erik et al. [14] used a CNN-based approach to classify defective AM components, and the VGG-based network showed an accuracy of more than 95%. Rui et al. [15] used a synthetic dataset to train multiple algorithms for AM defect prediction and found that Bagging and Random Forest (RF) have high predictive capability. Different ML frameworks to control the AM process have been proposed by many researchers [16,17]. Machine learning-enabled material development for AM has also been reported [10].

Apart from process optimization and defect monitoring, machine learning algorithms are also used to predict mechanical properties [18]. Zhixin et al. [19] used ANN, RF, and SVM for the prediction of fatigue life in additively manufactured 316 L stainless steel, working from a database developed by a continuum damage mechanics technique. Meng et al. [20] used a machine learning approach based on a neural network to predict the high cycle fatigue life of SS 316 L alloy. Zhixiong et al. [21] used an ensemble learning-based algorithm to predict the surface roughness of AM samples. Luo et al. [22] used a machine learning framework to establish the correlation between pore location and fatigue life based on data developed from Inconel 718 AM samples. Density prediction of materials using machine learning has also been reported in the literature [23]; Mariana et al. [23] predicted the bulk density of flash-sintered material using KNN, SVM, ANN, and RF. Additive manufacturing of Inconel 718 has been reported by many researchers, and the influence of processing parameters on mechanical properties, density, etc. has been studied [24–27]. The properties of an AM component will depend on the density of the part, which in turn is affected by the processing parameters that need to be established for successful AM process development.


One could note that establishing a trained ML model based on the processing parameters, the density of the build, and the defects generated will accelerate AM component development.

In the current study, a dataset was generated by varying the processing parameters; the density was evaluated, and defects such as porosity and Lack of fusion were analyzed. Seven machine learning algorithms, namely Artificial Neural Network (ANN), Decision Tree (DT), Random Forest (RF), Support Vector Regression (SVR), XG Boost (XGB), Multi-Variable Linear Regression (MVLR), and Gradient Boosting (GB), are trained and used for density prediction of the AM samples. Defects such as porosity and Lack of fusion were predicted by nine different algorithms: ANN, RF, DT, SVM, GB, Naïve Bayes (NB), K-Nearest Neighbors (KNN), Kernel SVM, and Logistic Regression (LR). The trained algorithms are validated against unknown experimental data.
2. Experimental details

Inconel 718 alloy was 3D printed using the laser powder bed fusion (LPBF) technique with EOS M290-400 W equipment. 125 cube samples (10 mm × 10 mm × 10 mm) were fabricated with different parameters (Fig. 1a). LPBF parameters such as laser power, scan speed, and hatch spacing were varied at five levels each to generate a 5 × 5 × 5 matrix. Laser power was varied between 250 and 350 W at 25 W intervals, scan speed was changed from 750 to 1150 mm/s at 100 mm/s intervals, and hatch spacing was varied between 90 µm and 130 µm at 10 µm intervals. The layer thickness was constant (40 µm) for all the samples. The effect of these process parameters on density and defect (porosity and Lack of fusion) formation was studied. Density was measured using Archimedes' principle on a Mettler Toledo balance. Defects were analyzed along the build direction using an optical microscope at 100x magnification. Specimens were polished using different grades of emery paper and chemically etched using Kalling's No. 2 reagent.
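The 5 × 5 × 5 parameter matrix can be enumerated directly in code. The following is a minimal Python sketch (not from the paper) that builds the 125 parameter combinations from the levels listed above; the variable names are illustrative.

```python
from itertools import product

# Five levels per parameter, as used in the 5 x 5 x 5 build matrix
laser_power_W = [250, 275, 300, 325, 350]      # 25 W steps
scan_speed_mm_s = [750, 850, 950, 1050, 1150]  # 100 mm/s steps
hatch_spacing_um = [90, 100, 110, 120, 130]    # 10 um steps
layer_thickness_um = 40                        # constant for all samples

# Enumerate all 125 parameter combinations (hatch and layer converted to mm)
design = [
    {"P_W": p, "v_mm_s": v, "h_mm": h / 1000.0, "d_mm": layer_thickness_um / 1000.0}
    for p, v, h in product(laser_power_W, scan_speed_mm_s, hatch_spacing_um)
]
print(len(design))  # -> 125
```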
3. Results and discussion

The current dataset is initially trained to predict density, and the dataset with density is then used to train for the prediction of defect formation in the AM sample. One could note that only porosity and Lack of fusion are considered in the current study.

3.1. Prediction of density

The current dataset used for the ML studies consists of 123 points obtained from the experimental AM studies. Fig. 1a shows the additively manufactured samples generated from the experiments. The interrelation between the various process parameters, the energy density, and the density of the sample is shown in Fig. 1b & c.

Fig. 1. Data collection and correlation a) Image shows the 125 samples prepared using AM for the current study, b) Heatmap shows the Pearson correlation values
(ED- Energy Density), c) Hexbin plot shows the distribution of data with respect to energy density (J/mm3) and density (g/cm3).


The volumetric energy density (J/mm³) [28] is calculated as

    E_d = P / (v × h × d)                                  (1)

where the laser power in watts is denoted by P, v is the laser scan speed, h is the hatch distance, and d is the powder bed layer thickness. Fig. 1b shows the heatmap, where the Pearson correlation values can vary between −1 and +1, indicating a strong negative and a strong positive correlation, respectively. In the current dataset, the maximum correlation value of 0.52 was obtained between the energy density and the density of the AM sample, which implies that the density shows an increasing trend as the energy density increases. Fig. 1c confirms the positive correlation; the hexbin plot shows the increase in density with an increase in energy density. One could note that the dataset is distributed over a wide range of energy density and density. The maximum negative correlation value of −0.65, obtained between the scan speed and the energy density, implies that the energy density will decrease with increasing scan speed. One could note that the parameters considered have reasonable positive or negative correlation values with density.
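For illustration, the energy density of Eq. (1) and the Pearson correlations shown in Fig. 1b can be computed with pandas along the following lines. This is a sketch under assumptions: the file name lpbf_in718_builds.csv and the column names are placeholders, not the actual data file used in the study.

```python
import pandas as pd

# Placeholder file with one row per build: laser power P_W, scan speed v_mm_s,
# hatch distance h_mm, layer thickness d_mm and measured density (g/cm^3)
df = pd.read_csv("lpbf_in718_builds.csv")

# Volumetric energy density, Eq. (1): E_d = P / (v * h * d), in J/mm^3
df["energy_density"] = df["P_W"] / (df["v_mm_s"] * df["h_mm"] * df["d_mm"])

# Pearson correlation matrix corresponding to the heatmap of Fig. 1b
cols = ["P_W", "v_mm_s", "h_mm", "energy_density", "density"]
print(df[cols].corr(method="pearson").round(2))
```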
3.1.1. Training and testing of ML algorithms

The current dataset with 123 values was divided such that 80% of the data was taken for training and 20% for testing. Seven algorithms [14,29–36], [37–40] are trained, and the optimum parameters are identified using the GridSearchCV method. The ANN algorithm behaves like the human brain, where an artificial neuron receives information and passes it to the next neuron after analysis. The ANN has layers of neurons, and the information moves from the input to the output layers. In the case of the ANN, the number of hidden layers was varied between 2 and 5 with an interval of one, and the number of neurons in each layer was varied between 100 and 250 with an interval of 50. Drop-out [41] values were varied from 0.1 to 0.3 for regularization, and the Tanh (hyperbolic tangent) and ReLU (rectified linear unit) activation functions were tried in the hidden layers. For the current dataset, the optimum parameters are 2 hidden layers with ReLU activation in the hidden layers. A mean squared error (MSE) value of 0.0023 was obtained for the testing dataset.
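A minimal sketch of an ANN regressor of the kind described above, assuming a TensorFlow/Keras implementation (the paper does not state which framework was used). The two ReLU hidden layers and the 0.1 drop-out reflect values from the reported search; the layer width of 100, the optimizer, and the stand-in data are illustrative.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_ann(n_features):
    """Two ReLU hidden layers with drop-out, single output for density (g/cm^3)."""
    model = keras.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(100, activation="relu"),
        layers.Dropout(0.1),
        layers.Dense(100, activation="relu"),
        layers.Dropout(0.1),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Illustrative run with stand-in data (123 samples, 4 process features)
X = np.random.rand(123, 4)
y = 8.1 + 0.1 * np.random.rand(123)
model = build_ann(X.shape[1])
model.fit(X, y, epochs=20, validation_split=0.2, verbose=0)
```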
In the DT algorithm, the maximum tree depth was varied from 1 to 12, and the minimum sample leaf was varied from 1 to 10; an optimum minimum sample leaf of 1 and an MSE of 0.00055 were obtained for the current dataset. In the Gradient Boosting algorithm the error is reduced iteratively and the model is built sequentially. Here, the maximum depth, which sets the maximum depth of the individual regression estimators, was varied from 1 to 10; the learning rate, which controls the contribution of each tree, was varied from 0.01 to 0.1 with an interval of 0.01; and the number of estimators was varied from 10 to 120 with an interval of 10. The optimum parameters for the current dataset are a maximum depth of 1, a learning rate of 0.08, and 70 estimators, giving an MSE of 0.000323 for the testing data. The RF is based on a large number of independent decision trees, with the final prediction based on the decision-tree pool. In the RF algorithm, the number of estimators (the number of trees in the forest) was varied from 10 to 100 with an interval of 5. The optimum value used for the current dataset is 75, with an MSE of 0.0000629. The SVR creates a hyperplane or a series of hyperplanes in a higher-dimensional space; the nearest data points on either side of the hyperplane are termed support vectors and are used to define the boundary that separates the data points. In the case of SVR, the regularization parameter (C) was varied from 1 to 100 with an interval of 10, and the optimum value is 1; an MSE of 0.000895 was obtained for the testing data. In the XG Boost algorithm, which is a gradient boosting-based ensemble machine learning technique, the number of estimators (the number of boosting rounds the algorithm will attempt) was varied from 10 to 150, and the optimum is 90, giving an MSE of 0.000399.

Fig. 2 shows the testing data comparison for the algorithms; one could note that the density distribution in the test data covers a reasonable range of the density of the total dataset. Linear curves connect the predicted data points and the actual data points separately for visualization. One could see that the Random Forest algorithm predicts reasonably well (lowest mean squared error value) on the testing data, whereas the ANN prediction shows variation from the actual data. XG Boost and Gradient Boosting also show reasonable matching with the actual data. The SVR prediction is constant (8.14), which does not match the actual data.
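The hyperparameter search described above maps naturally onto scikit-learn's GridSearchCV. The sketch below tunes the RF regressor over the stated n-estimator range (10 to 100 in steps of 5) with an 80/20 split; the stand-in arrays X and y are placeholders for the experimental process parameters and measured densities.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_squared_error

# Stand-in data: illustrative features are laser power, scan speed,
# hatch distance and energy density; target is density in g/cm^3
X = np.random.rand(123, 4)
y = 8.1 + 0.1 * np.random.rand(123)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# n_estimators searched from 10 to 100 in steps of 5, as stated in the text
grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": list(range(10, 101, 5))},
    scoring="neg_mean_squared_error",
    cv=5,
)
grid.fit(X_train, y_train)

best_rf = grid.best_estimator_
mse = mean_squared_error(y_test, best_rf.predict(X_test))
print(grid.best_params_, mse)
```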

Fig. 2. Prediction of data during testing and comparison with the actual data point using various algorithms a) Gradient Boosting, b) Multi-Variable Regression, c)
ANN, d) RF, e) XG Boost, f) DT (density is in g/cm3).


3.2. Prediction of defects in AM sample

In the current study, we have considered the porosity and Lack of fusion of additively manufactured samples. These defects will affect the quality and performance of AM components, and it is vital to monitor and avoid processing parameters that cause these defects. Fig. 3a shows the optical micrograph of an AM sample without any defect, and the inset shows the plane considered for characterization, which is parallel to the build direction. Fig. 3b, c & d show the different types of defects considered for the current study. Based on the morphology, the presence of porosity is confirmed in Fig. 3b and Lack of fusion in Fig. 3c. Fig. 3d shows a sample with both porosity and Lack of fusion. Samples with these defects in the current dataset have also been evaluated separately. Fig. 3e shows the heatmap with porosity, density, and the other processing parameters; it can be noted that samples having porosity have been assigned a value of 1 and those without porosity a value of zero. The Pearson correlation value varies from −0.65 to 0.52. Porosity shows a negative correlation with scan speed and hatch distance and a positive correlation with energy density and laser power. Fig. 3f shows the heatmap with Lack of fusion and the other parameters. The map shows a positive correlation between the Lack-of-fusion-defected samples and scan speed and hatch distance, and a negative correlation with energy density. Fig. 3g shows the heatmap obtained by considering samples with or without defects; here, the result label takes a value of 1 for a sample with either Lack of fusion or porosity or both. In this condition, the near-zero correlation value with defect formation may be attributed to the opposite tendency of the correlation values of porosity and Lack of fusion. Fig. 3h shows the pair plot for the distribution of data points with respect to the different processing parameters and the energy density. The plot shows a reasonable distribution of points in the domain considered, and the trend of the different defected samples is readable from Fig. 3h. One could note that the overlapping tendency of the diagonal plots for samples with porosity, Lack of fusion, both, and without defect confirms that all parameters should be considered in the ML training for defect classification.
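A hedged sketch of how the defect labels and the exploratory plots of Fig. 3e–h can be generated with pandas and seaborn. The file name and the column names (including the 0/1 porosity and lack-of-fusion flags recorded from the micrographs) are assumed placeholders for the experimental records.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("lpbf_in718_builds.csv")   # placeholder file name

# Binary label: 1 if either defect was observed in the micrograph, else 0
df["defective"] = ((df["porosity"] == 1) | (df["lof"] == 1)).astype(int)

cols = ["P_W", "v_mm_s", "h_mm", "energy_density", "density"]

# Heatmap analogous to Fig. 3e (porosity against parameters and density)
sns.heatmap(df[cols + ["porosity"]].corr(), annot=True, cmap="coolwarm")
plt.show()

# Pair plot analogous to Fig. 3h, coloured by defect class
sns.pairplot(df, vars=cols, hue="defective")
plt.show()
```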

Fig. 3. (a to d): Microstructures of additively manufactured samples: a) without defects (inset shows the defect analysis plane), b) with porosities, c) with Lack of fusion, d) with porosities and Lack of fusion. Data correlation: e) Heatmap by considering porosity, f) Heatmap by considering Lack of fusion, g) Heatmap of the samples by considering the defects together, h) Pair plot showing the data distribution along with density (g/cm3) (ED is energy density (J/mm3); P denotes samples with porosity, 0 denotes samples without defects, L denotes samples with Lack of fusion, and P + L denotes samples with both porosity and Lack of fusion).


3.2.1. Training and testing of ML algorithms

The algorithms are trained separately for each condition, namely samples with porosity, with Lack of fusion, and with either of these defects. For all conditions, the optimum parameters are identified using GridSearchCV. The DT algorithm has internal nodes that represent an attribute test, each branch indicates the test's outcome, and the terminal node contains a class label; for porosity prediction, the DT uses optimum parameters of a maximum tree depth of 7 with a minimum sample split of 0.1. For Gradient Boosting, a maximum depth of 2 with a learning rate of 1 and 70 estimators is used. The KNN algorithm classifies a new data point based on information from its neighbors; KNN for the current porosity prediction uses 3 neighbors. The RF algorithm uses a maximum depth of 8 and 90 estimators. The SVM operates with a radial basis function kernel and a regularization value of 1. The optimum parameters for the ANN for the current dataset include 3 hidden layers with the ReLU activation function in the hidden layers. For Lack of fusion prediction, the DT algorithm uses a maximum tree depth of 7 with a minimum sample split of 0.1; for Gradient Boosting, a maximum tree depth of 7 with a learning rate of 0.01; for KNN, 10 neighbors; and for the RF algorithm, 20 estimators with the entropy criterion. In the case of the ANN, we use 3 hidden layers with the ReLU activation function. Similar parameter optimization was carried out for predicting samples with either porosity or Lack of fusion or both.
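A sketch of one of the defect classifiers and the normalized confusion matrix of the kind plotted in Fig. 4, using scikit-learn. The RF settings follow the porosity values quoted above (maximum depth 8, 90 estimators); the feature and label arrays are stand-ins for the experimental data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Stand-in data: process parameters plus density as features,
# porosity flag (0/1) as the target
X = np.random.rand(123, 5)
y = np.random.randint(0, 2, size=123)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# RF settings quoted in the text for porosity prediction
clf = RandomForestClassifier(max_depth=8, n_estimators=90, random_state=0)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
# Row-normalized confusion matrix, as shown in Fig. 4
print(confusion_matrix(y_test, y_pred, normalize="true"))
```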
Fig. 4 shows the confusion matrices and the algorithm testing accuracies for each condition. Fig. 4a shows the confusion matrix corresponding to the ANN algorithm for porosity prediction; one could note from the comparison plot in Fig. 4d that the ANN reaches a high accuracy of more than 85% for porosity prediction. Fig. 4b shows the confusion matrix for KNN for Lack of fusion prediction, and the comparative plot shows more than 65% accuracy. Fig. 4c shows the confusion matrix for the Naïve Bayes algorithm for porosity prediction, and from the comparative bar chart it is clear that this algorithm reached a maximum accuracy of more than 90% for porosity prediction.


Fig. 4. Normalized confusion matrix for testing data a) for porosity using ANN, b) for Lack of fusion using KNN, c) for porosity using Naïve Bayes and d) comparison
of testing accuracies (LOF- Lack of fusion).

From Fig. 4d, it is also clear that KNN has a high accuracy of more than 75% for the prediction of either porosity or Lack of fusion or both. For the current study, algorithms such as SVM, Naïve Bayes, and ANN show more than 80% testing accuracy for porosity prediction. Algorithms such as SVM, KNN, RF, and ANN show a testing accuracy of more than 60% for Lack of fusion prediction; the limited accuracy may be due to the smaller number of data points. The optimum algorithm for the current study differs from previous reports [15] on classification studies, mainly due to the nature of the data and its distribution.

4. Comparison with experimental data

The trained and tested algorithms were used to predict the density and defect formation for two new data points. The parameters and density of the new data points are shown in Table 1. Fig. 5 shows the density prediction and its comparison with the actual value. Fig. 5a shows the comparison for point 1; one could note that RF and XG Boost predict values close to the actual one, as they did on the training dataset, and that the densities predicted by the algorithms are lower than the actual value. Fig. 5b shows the prediction for point 2; in this case, ANN and RF predict close values. The defect formation prediction for these two data points is compared in Table 2. Point 1 has porosity, and point 2 has Lack of fusion. ANN exhibited high testing accuracy in porosity prediction, but the same is not evident for the new dataset, although it performs reasonably well for the prediction of Lack of fusion. One could note that all algorithms can predict whether the AM component is defective or not, but a few fail in the specific prediction, which may be attributed to the smaller number of data points used for training. From the comparison data, one can conclude that the trained algorithms will predict the density and defects in the AM sample with reasonable accuracy; this can be improved by adding more training data.

Table 1
Parameters and density of the experimental data points used for comparison.

                            Data 1 (point 1)    Data 2 (point 2)
Scan Speed (mm/s)           750                 950
Hatch Distance (mm)         0.13                0.1
Laser Power (Watts)         275                 325
Energy Density (J/mm3)      70.51282            85.52632
Density (g/cm3)             8.186               8.161
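As a usage illustration, a trained regressor can be applied to the two new points of Table 1 as sketched below. This continues the earlier GridSearchCV sketch (best_rf) and assumes the model was trained on the four features listed, in that order; it is not the authors' exact pipeline.

```python
import numpy as np

# New experimental points from Table 1, ordered as
# [laser power (W), scan speed (mm/s), hatch distance (mm), energy density (J/mm^3)]
new_points = np.array([
    [275, 750, 0.13, 70.51282],   # point 1, measured density 8.186 g/cm^3
    [325, 950, 0.10, 85.52632],   # point 2, measured density 8.161 g/cm^3
])

# best_rf: fitted regressor from the GridSearchCV sketch in Section 3.1.1
predicted = best_rf.predict(new_points)
for pred, actual in zip(predicted, (8.186, 8.161)):
    print(f"predicted {pred:.3f} g/cm^3 vs measured {actual:.3f} g/cm^3")
```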


Fig. 5. Comparison of density (g/cm3) prediction with experimental data points a) point 1, b) point 2.

Table 2
Algorithm prediction for unknown data points from experiments (red indicates a wrong prediction and green indicates a correct prediction).

5. Conclusion

Additive manufacturing of 125 samples was carried out to generate the dataset for ML algorithm development. The analyzed data show a strong positive correlation between the energy density and the density of the AM sample. Seven ML algorithms were trained for density prediction, and the RF algorithm shows a reasonably good fit to the test data. The correlation between the parameters and defect formation, such as porosity and Lack of fusion, is established. Nine algorithms were trained for defect prediction in the AM sample. ANN and Naïve Bayes show the highest testing accuracy (>85%) for porosity, whereas KNN shows the highest testing accuracy for Lack of fusion. The experimental comparison shows that the trained algorithms can predict the defect formation in the AM sample with reasonable accuracy.

CRediT authorship contribution statement

Aman Kumar Sah: Formal analysis, Investigation, Software, Visualization, Writing – original draft, Writing – review & editing. Muthumanickam Agilan: Data curation, Formal analysis, Investigation, Resources, Writing – review & editing. Subburaj Dineshraj: Data curation, Formal analysis, Investigation, Resources, Writing – review & editing. Rahul M R: Conceptualization, Investigation, Methodology, Supervision, Writing – review & editing. Govind Bajargan: Formal analysis, Funding acquisition, Resources, Supervision, Writing – review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] E. Hosseini, V.A. Popovich, A review of mechanical properties of additively manufactured Inconel 718, Addit. Manuf. 30 (2019), 100877.
[2] S. Ghorbanpour, et al., Effect of microstructure induced anisotropy on fatigue behaviour of functionally graded Inconel 718 fabricated by additive manufacturing, Mater. Charact. 179 (2021), 111350.
[3] H.Y. Wan, W.K. Yang, L.Y. Wang, Z.J. Zhou, C.P. Li, G.F. Chen, L.M. Lei, G.P. Zhang, Toward qualification of additively manufactured metal parts: tensile and fatigue properties of selective laser melted Inconel 718 evaluated using miniature specimens, J. Mater. Sci. Technol. 97 (2022) 239–253.
[4] G. Liu, X. Zhang, X. Chen, Y. He, L. Cheng, M. Huo, J. Yin, F. Hao, S. Chen, P. Wang, S. Yi, L. Wan, Z. Mao, Z. Chen, X. Wang, Z. Cao, J. Lu, Additive manufacturing of structural materials, Mater. Sci. Eng. R. 145 (2021), 100596.
[5] S. Cooke, K. Ahmadi, S. Willerth, R. Herring, Metal additive manufacturing: technology, metallurgy and modelling, J. Manuf. Process. 57 (2020) 978–1003.
[6] Z. Snow, A.R. Nassar, E.W. Reutzel, Invited review article: review of the formation and impact of flaws in powder bed fusion additive manufacturing, Addit. Manuf. 36 (2020) 1–15, 101457.
[7] C. Liu, L. Le Roux, Z. Ji, P. Kerfriden, F. Lacan, S. Bigot, Machine learning-enabled feedback loops for metal powder bed fusion additive manufacturing, Procedia Comput. Sci. 176 (2020) 2586–2595, https://doi.org/10.1016/j.procs.2020.09.314.
[8] D.J. Huang, H. Li, A machine learning guided investigation of quality repeatability in metal laser powder bed fusion additive manufacturing, Mater. Des. 203 (2021), https://doi.org/10.1016/j.matdes.2021.109606.
[9] S. Liu, A.P. Stebner, B.B. Kappes, X. Zhang, Machine learning for knowledge transfer across multiple metals additive manufacturing printers, Addit. Manuf. 39 (2021), https://doi.org/10.1016/j.addma.2021.101877.
[10] N.S. Johnson, P.S. Vulimiri, A.C. To, X. Zhang, C.A. Brice, B.B. Kappes, P. Stebner, Invited review: machine learning for materials developments in metals additive manufacturing, Addit. Manuf. 36 (2020), 101641.
[11] C. Gobert, E.W. Reutzel, J. Petrich, A.R. Nassar, S. Phoha, Application of supervised machine learning for defect detection during metallic powder bed fusion additive manufacturing using high resolution imaging, Addit. Manuf. 21 (2018) 517–528, https://doi.org/10.1016/j.addma.2018.04.005.
[12] K. Aoyagi, H. Wang, H. Sudo, A. Chiba, Simple method to construct process maps for additive manufacturing using a support vector machine, Addit. Manuf. 27 (2019) 353–362, https://doi.org/10.1016/j.addma.2019.03.013.
[13] K. Ren, Y. Chew, Y.F. Zhang, J.Y.H. Fuh, G.J. Bi, Thermal field prediction for laser scanning paths in laser aided additive manufacturing by physics-based machine learning, Comput. Methods Appl. Mech. Eng. 362 (2020), https://doi.org/10.1016/j.cma.2019.112734.
[14] E. Westphal, H. Seitz, A machine learning method for defect detection and visualization in selective laser sintering based on convolutional neural networks, Addit. Manuf. 41 (2021), https://doi.org/10.1016/j.addma.2021.101965.
[15] R. Li, M. Jin, V.C. Paquit, Geometrical defect detection for additive manufacturing with machine learning models, Mater. Des. 206 (2021), https://doi.org/10.1016/j.matdes.2021.109726.


[16] C. Wang, X.P. Tan, S.B. Tor, C.S. Lim, Machine learning in additive manufacturing: state-of-the-art and perspectives, Addit. Manuf. 36 (2020), https://doi.org/10.1016/j.addma.2020.101538.
[17] I. Baturynska, O. Semeniuta, K. Martinsen, Optimization of process parameters for powder bed fusion additive manufacturing by combination of machine learning and finite element method: a conceptual framework, Procedia CIRP 67 (2018) 227–232, https://doi.org/10.1016/j.procir.2017.12.204.
[18] S. Nasiri, M.R. Khosravani, Machine learning in predicting mechanical behavior of additively manufactured parts, J. Mater. Res. Technol. 14 (2021) 1137–1153, https://doi.org/10.1016/j.jmrt.2021.07.004.
[19] Z. Zhan, H. Li, Machine learning based fatigue life prediction with effects of additive manufacturing process parameters for printed SS 316L, Int. J. Fatigue 142 (2021), https://doi.org/10.1016/j.ijfatigue.2020.105941.
[20] M. Zhang, C.N. Sun, X. Zhang, P.C. Goh, J. Wei, D. Hardacre, H. Li, High cycle fatigue life prediction of laser additive manufactured stainless steel: a machine learning approach, Int. J. Fatigue 128 (2019), https://doi.org/10.1016/j.ijfatigue.2019.105194.
[21] Z. Li, Z. Zhang, J. Shi, D. Wu, Prediction of surface roughness in extrusion-based additive manufacturing with machine learning, Robot. Comput. Integr. Manuf. 57 (2019) 488–495, https://doi.org/10.1016/j.rcim.2019.01.004.
[22] Y.W. Luo, B. Zhang, X. Feng, Z.M. Song, X.B. Qi, C.P. Li, G.F. Chen, G.P. Zhang, Pore-affected fatigue life scattering and prediction of additively manufactured Inconel 718: an investigation based on miniature specimen testing and machine learning approach, Mater. Sci. Eng. A. 802 (2021), https://doi.org/10.1016/j.msea.2020.140693.
[23] M.G. d Abreu, E.M.J.A. Pallone, J.A. Ferreira, J.V. Campos, R.V. d Sousa, Evaluation of machine learning based models to predict the bulk density in the flash sintering process, Mater. Today Commun. 27 (2021), 102220, https://doi.org/10.1016/j.mtcomm.2021.102220.
[24] J. Strößner, M. Terock, U. Glatzel, Mechanical and microstructural investigation of nickel-based superalloy IN718 manufactured by selective laser melting (SLM), Adv. Eng. Mater. 17 (2015) 1099–1105, https://doi.org/10.1002/adem.201500158.
[25] D. Deng, R.L. Peng, H. Brodin, J. Moverare, Microstructure and mechanical properties of Inconel 718 produced by selective laser melting: sample orientation dependence and effects of post heat treatments, Mater. Sci. Eng. A. 713 (2018) 294–306, https://doi.org/10.1016/j.msea.2017.12.043.
[26] P. Kumar, J. Farah, J. Akram, C. Teng, J. Ginn, M. Misra, Influence of laser processing parameters on porosity in Inconel 718 during additive manufacturing, Int. J. Adv. Manuf. Technol. 103 (2019) 1497–1507, https://doi.org/10.1007/s00170-019-03655-9.
[27] X. Wang, X. Gong, K. Chou, Review on powder-bed laser additive manufacturing of Inconel 718 parts, Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 231 (2016) 1890–1903, https://doi.org/10.1177/0954405415619883.
[28] S. Bai, N. Perevoshchikova, Y. Sha, X. Wu, The effects of selective laser melting process parameters on relative density of the AlSi10Mg parts and suitable procedures of the Archimedes method, Appl. Sci. 9 (2019), https://doi.org/10.3390/app9030583.
[29] F. Pedregosa, et al., Scikit-learn: machine learning in Python, J. Mach. Learn. Res. 12 (2011) 2825–2830.
[30] T. Wang, C. Zhang, H. Snoussi, G. Zhang, Machine learning approaches for thermoelectric materials research, Adv. Funct. Mater. 30 (2020) 1–14, https://doi.org/10.1002/adfm.201906041.
[31] C. Chen, Y. Zuo, W. Ye, X. Li, Z. Deng, S.P. Ong, A critical review of machine learning of energy materials, Adv. Energy Mater. 10 (2020) 1–36, https://doi.org/10.1002/aenm.201903242.
[32] W. Huang, P. Martin, H.L. Zhuang, Machine-learning phase prediction of high-entropy alloys, Acta Mater. 169 (2019) 225–236, https://doi.org/10.1016/j.actamat.2019.03.012.
[33] Y. Liu, C. Niu, Z. Wang, Y. Gan, Y. Zhu, S. Sun, T. Shen, Machine learning in materials genome initiative: a review, J. Mater. Sci. Technol. 57 (2020) 113–122.
[34] A.F.M. Agarap, Deep learning using rectified linear units (ReLU), arXiv:1803.08375, pp. 2–8.
[35] L. Qiao, Y. Liu, J. Zhu, A focused review on machine learning aided high-throughput methods in high entropy alloy, J. Alloy. Compd. 877 (2021), 160295, https://doi.org/10.1016/j.jallcom.2021.160295.
[36] A. Shrestha, A. Mahmood, Review of deep learning algorithms and architectures, IEEE Access 7 (2019) 53040–53065, https://doi.org/10.1109/ACCESS.2019.2912200.
[37] C. Cortes, V. Vapnik, Support-vector networks, Mach. Learn. 20 (1995) 273–297.
[38] U.M.R. Paturi, S. Cheruku, Application and performance of machine learning techniques in manufacturing sector from the past two decades: a review, Mater. Today Proc. 38 (2020) 2392–2401, https://doi.org/10.1016/j.matpr.2020.07.209.
[39] J. Schmidhuber, Deep learning in neural networks: an overview, Neural Netw. 61 (2015) 85–117, https://doi.org/10.1016/j.neunet.2014.09.003.
[40] T. Kanungo, D.M. Mount, N.S. Netanyahu, C.D. Piatko, R. Silverman, A.Y. Wu, An efficient k-means clustering algorithm: analysis and implementation, IEEE Trans. Pattern Anal. Mach. Intell. 24 (2002) 881–892.
[41] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting, J. Mach. Learn. Res. 15 (2014) 1929–1958.
