
ISIJ International, Vol. 39 (1999), No. 10, pp. 966-979


Review
Neural Networks in Materials Science

H. K. D. H. BHADESHIA
University of Cambridge, Department of Materials Science and Metallurgy, Pembroke Street, Cambridge CB2 3QZ, UK.
E-mail: hkdb@cus.cam.ac.uk   www.msm.cam.ac.uk/phase-trans
(Received on April 2, 1999; accepted in final form on June 12, 1999)

There are difficult problems in materials science where the general concepts might be understood but which are not as yet amenable to scientific treatment. We are at the same time told that good engineering has the responsibility to reach objectives in a cost and time-effective way. Any model which deals with only a small part of the required technology is therefore unlikely to be treated with respect. Neural network analysis is a form of regression or classification modelling which can help resolve these difficulties whilst striving for longer term solutions. This paper begins with an introduction to neural networks and contains a review of some applications of the technique in the context of materials science.

KEYWORDS: neural networks; materials science; introduction; applications.

1. Introduction

The development and processing of materials is complex. Although scientific investigations on materials have helped greatly in understanding the underlying phenomena, there remain many problems where quantitative treatments are dismally lacking. For example, whereas dislocation theory can be used to estimate the yield strength of a microstructure, it is not yet possible to predict the strain hardening coefficient of an engineering alloy. It follows that the tensile strength, elongation, fatigue life, creep life and toughness, all of which are vital engineering design parameters, cannot even be estimated using dislocation theory. A more comprehensive list of what needs to be done in this context is presented in Table 1.

The lack of progress in predicting mechanical properties is because of their dependence on large numbers of variables. Nevertheless, there are clear patterns which experienced metallurgists recognise and understand. For example, it is well understood that the toughness of a steel can be improved by making its microstructure more chaotic so that propagating cracks are frequently deflected. It is not clear exactly how much the toughness is expected to improve, but the qualitative relationship is well established on the basis of a vast number of experiments.

Neural network models are extremely useful in such circumstances, not only in the study of mechanical properties but wherever the complexity of the problem is overwhelming from a fundamental perspective and where simplification is unacceptable. The purpose of this review is to explain how vague ideas might be incorporated into quantitative models using the neural network methodology. We begin with an introduction to the method, followed by a review of its application to materials.

Table 1. Mechanical properties that need to be expressed in quantitative models as a function of large numbers of variables.

Property                    Relevance
Yield strength              All structural applications
Ultimate tensile strength   All structural applications
YS/UTS ratio                Tolerance to plastic overload
Elongation                  Resistance to brittle fracture
Uniform elongation          Related to YS and UTS
Non-uniform elongation      Related to inclusions
Toughness                   Tolerance to defects
Fatigue                     Cyclic loading, life assessments
Stress corrosion            Slow corrosion & cracking
Creep strength              High temperature service
Creep ductility             Safe design
Creep-fatigue               Fatigue at creep temperatures
Elastic modulus             Deflection, stored energy
Thermal expansivity         Thermal fatigue/stress/shock
Hardness                    Tribological properties

2. Physical and Empirical Models

A good theory must satisfy at least two criteria. It must describe a large class of observations with few arbitrary parameters. And secondly, it must make predictions which can be verified or disproved. Physical models, such as the crystallographic theory of martensite,1,2) satisfy both of these requirements. Thus, it is possible to predict the habit plane, orientation relationship and shape deformation of martensite with a precision greater than that of most experimental techniques, from a knowledge of just the crystal structures of the parent and product phases.


By contrast, a linear regression equation requires at least as many parameters as the number of variables to describe the experimental data, and the equation itself may not be physically justified. Neural networks fall in this second category of empirical models; we shall see that they have considerable advantages over linear regression. In spite of the large number of parameters usually necessary to define a trained network, they are useful in circumstances where physical models do not exist.

3. Linear Regression

Most scientists are familiar with regression analysis where data are best-fitted to a specified relationship which is usually linear. The result is an equation in which each of the inputs x_j is multiplied by a weight w_j; the sum of all such products and a constant θ then gives an estimate of the output y = Σ_j w_j x_j + θ. As an example, the temperature at which the bainite reaction starts (i.e. B_S) in steel may be written3):

    B_S (°C) = 830 − 270 × c_C − 90 × c_Mn − 37 × c_Ni − 70 × c_Cr − 83 × c_Mo   ...(1)

where c_i is the wt% of element i which is in solid solution in austenite. The term w_i is then the best-fit value of the weight by which the concentration is multiplied; θ is a constant. Because the variables are assumed to be independent, this equation can be stated to apply for the concentration range:

    Carbon 0.1-0.55 wt%      Nickel 0.0-5.0
    Silicon 0.1-0.35         Chromium 0.0-3.5
    Manganese 0.2-1.7        Molybdenum 0.0-1.0

and for this range the bainite-start temperature can be estimated with 90% confidence to ±25°C.

Equation (1) assumes that there is a linear dependence of B_S on concentration.* It may also be argued that there is an interaction between carbon and molybdenum since the latter is a strong carbide-forming element. This can be expressed by introducing an additional term as follows:

    B_S (°C) = 830 − 270 × c_C − 90 × c_Mn − 37 × c_Ni − 70 × c_Cr − 83 × c_Mo + 22 × (c_C × c_Mo)   ...(2)

where the final term represents the interaction. Of course, there is no justification for the choice of the particular form of relationship. This and other difficulties associated with ordinary linear regression analysis can be summarised as follows:

Difficulty (a): A relationship has to be chosen before analysis.
Difficulty (b): The chosen relationship tends to be linear, or with non-linear terms added together to form a pseudo-linear equation.
Difficulty (c): The regression equation, once derived, applies across the entire span of the input space. This may not be reasonable. For example, the relationship between strength and the carbon concentration of an iron-base alloy must change radically as steel gives way to cast iron.

Fig. 1. (a) A neural network representation of linear regression. (b) A non-linear network representation.

4. Neural Networks

A general method of regression which avoids these difficulties is neural network analysis, illustrated at first using the familiar linear regression method. A network representation of linear regression is illustrated in Fig. 1(a). The inputs x_i (concentrations) define the input nodes, and the bainite-start temperature the output node. Each input is multiplied by a random weight w_i and the products are summed together with a constant θ to give the output y = Σ_i w_i x_i + θ. The summation is an operation which is hidden at the hidden node. Since the weights and the constant θ were chosen at random, the value of the output will not match with the experimental data. The weights are systematically changed until a best-fit description of the output is obtained as a function of the inputs; this operation is known as training the network.

The network can be made non-linear as shown in Fig. 1(b). As before, the input data x_j are multiplied by weights w_j^(1), but the sum of all these products forms the argument of a hyperbolic tangent:

    h = tanh( Σ_j w_j^(1) x_j + θ^(1) )   with   y = w^(2) h + θ^(2)   ...(3)

where w^(2) is a weight and θ^(2) another constant. The strength of the hyperbolic tangent transfer function is determined by the weight w_j^(1). The output y is therefore a non-linear function of x_j, the function usually chosen being the hyperbolic tangent because of its flexibility. The exact shape of the hyperbolic tangent can be varied by altering the weights (Fig. 2(a)). Difficulty (c) is avoided because the hyperbolic function varies with position in the input space.
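To make the structure of Eq. (3) concrete, the following is a minimal sketch in Python/NumPy of a one-hidden-unit tanh model fitted by gradient descent on a sum-of-squares error. It is not the Bayesian implementation of Refs. 5, 6); the synthetic data, learning rate and iteration count are illustrative assumptions only.

```python
import numpy as np

# Sketch of Eq. (3): y = w2 * tanh(sum_j w1_j * x_j + b1) + b2,
# fitted by gradient descent.  Data and settings are synthetic.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 3))                 # 200 examples, 3 inputs x_j
y = np.tanh(X @ np.array([1.5, -2.0, 0.5]) + 0.2) + 0.05 * rng.normal(size=200)

w1 = rng.normal(scale=0.1, size=3)   # weights w_j^(1), initially random
b1 = 0.0                             # constant theta^(1)
w2, b2 = 1.0, 0.0                    # weight w^(2) and constant theta^(2)
eta = 0.05                           # learning rate, an arbitrary choice

for epoch in range(2000):            # "training the network"
    h = np.tanh(X @ w1 + b1)         # hidden-unit output
    y_hat = w2 * h + b2              # network output, Eq. (3)
    err = y_hat - y
    g_a = err * w2 * (1.0 - h**2)    # back-propagate the error through tanh
    w2 -= eta * (err @ h) / len(y)
    b2 -= eta * err.sum() / len(y)
    w1 -= eta * (X.T @ g_a) / len(y)
    b1 -= eta * g_a.sum() / len(y)

print("fitted parameters:", w1.round(2), round(b1, 2), round(w2, 2), round(b2, 2))
```

The weights are adjusted repeatedly until the sum-of-squares error stops decreasing, which is precisely the "systematic change of weights" described in the text.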
* In this particular case, a physical model exists which proves that the dependence on concentration is not linear.4) In general, blind models such as linear regression or neural network analysis are only used when physical models are not available or when the latter are tedious to apply.


Fig. 2. (a) Three hyperbolic tangent functions; the "strength" of each depends on the weights. (b) A combination of two hyperbolic tangents to produce a more complex model.

Fig. 3. The structure of a two hidden-unit neural network. Details have been omitted for simplicity.

Fig. 4. Variations in the test and training errors as a function of model complexity, for noisy data in a case where y should vary with x³. The filled points were used to create the models (i.e. they represent training data), and the circles constitute the test data. (a) A linear function which is too simple. (b) A cubic polynomial with optimum representation of both the training and test data. (c) A fifth order polynomial which generalises poorly. (d) Schematic illustration of the variation in the test and training errors as a function of the model complexity.

A one hidden-unit model may not, however, be sufficiently flexible. Further degrees of non-linearity can be introduced by combining several of the hyperbolic tangents (Fig. 2(b)), permitting the neural network method to capture almost arbitrarily non-linear relationships. The number of tanh functions is the number of hidden units; the structure of a two hidden-unit network is shown in Fig. 3.

The function for a network with i hidden units is given by

    y = Σ_i w_i^(2) h_i + θ^(2)   ...(4)

where

    h_i = tanh( Σ_j w_ij^(1) x_j + θ_i^(1) )   ...(5)

Notice that the complexity of the function is related to the number of hidden units. The availability of a sufficiently complex and flexible function means that the analysis is not as restricted as in linear regression, where the form of the equation has to be specified before the analysis.

The neural network can capture interactions between the inputs because the hidden units are non-linear. The nature of these interactions is implicit in the values of the weights, but the weights may not always be easy to interpret. For example, there may exist more than just pairwise interactions, in which case the problem becomes difficult to visualise from an examination of the weights. A better method is to actually use the network to make predictions and to see how these depend on various combinations of inputs.
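The sketch below shows how Eqs. (4) and (5) might be evaluated for a small network, and how the fitted function can then be probed by varying combinations of inputs, as recommended above. The weight values are arbitrary placeholders and do not come from any trained model discussed in this review.

```python
import numpy as np

def network(x, W1, b1, w2, b2):
    """Evaluate Eqs. (4) and (5): y = sum_i w2_i * h_i + b2,
    with h_i = tanh(sum_j W1[i, j] * x_j + b1_i)."""
    h = np.tanh(W1 @ x + b1)        # hidden-unit outputs, Eq. (5)
    return w2 @ h + b2              # network output, Eq. (4)

# Arbitrary illustrative weights for a 2-input, 3-hidden-unit network.
W1 = np.array([[ 1.2, -0.7],
               [-0.4,  2.1],
               [ 0.3,  0.9]])
b1 = np.array([0.1, -0.2, 0.0])
w2 = np.array([0.8, -1.1, 0.5])
b2 = 0.05

# Probe the fitted function directly: vary both inputs over a grid and inspect
# the predictions, rather than trying to read interactions from the weights.
x1_values = np.linspace(-1, 1, 5)
x2_values = np.linspace(-1, 1, 5)
surface = np.array([[network(np.array([x1, x2]), W1, b1, w2, b2)
                     for x2 in x2_values] for x1 in x1_values])
print(surface.round(3))
```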


5. Overfitting

A potential difficulty with the use of powerful non-linear regression methods is the possibility of overfitting data. To avoid this difficulty, the experimental data can be divided into two sets, a training dataset and a test dataset. The model is produced using only the training data. The test data are then used to check that the model behaves itself when presented with previously unseen data. This is illustrated in Fig. 4, which shows three attempts at modelling noisy data for a case where y should vary with x³. A linear model (Fig. 4(a)) is too simple and does not capture the real complexity in the data. An overcomplex function such as that illustrated in Fig. 4(c) accurately models the training data but generalises badly. The optimum model is illustrated in Fig. 4(b). The training and test errors are shown schematically in Fig. 4(d); not surprisingly, the training error tends to decrease continuously as the model complexity increases. It is the minimum in the test error which enables that model to be chosen which generalises best to unseen data.

This discussion of overfitting is rather brief because the problem does not simply involve the minimisation of the test error. There are other parameters which control the complexity, which are adjusted automatically to try to achieve the right complexity of model.5,6)

Error Estimates

The input parameters are generally assumed in the analysis to be precise and it is normal to calculate an overall error by comparing the predicted values (y_j) of the output against those measured (t_j), for example,

    E_D ∝ Σ_j (t_j − y_j)²   ...(6)

E_D is expected to increase if important input variables have been excluded from the analysis. Whereas E_D gives an overall perceived level of noise in the output parameter, it is, on its own, an unsatisfying description of the uncertainties of prediction. Figure 5 illustrates the problem; the practice of using the best-fit function (i.e. the most probable values of the weights) does not adequately describe the uncertainties in regions of the input space where data are sparse (B), or where the data are noisy (A).

Fig. 5. Schematic illustration of the uncertainty in defining a fitting function in regions where data are sparse (B) or where they are noisy (A). The thinner lines represent error bounds due to uncertainties in determining the weights.

MacKay has developed a particularly useful treatment of neural networks in a Bayesian framework,5,6) which allows the calculation of error bars representing the uncertainty in the fitting parameters. The method recognises that there are many functions which can be fitted or extrapolated into uncertain regions of the input space, without unduly compromising the fit in adjacent regions which are rich in accurate data. Instead of calculating a unique set of weights, a probability distribution of sets of weights is used to define the fitting uncertainty. The error bars therefore become large when data are sparse or locally noisy, as illustrated in Fig. 5.

This methodology has proved to be extremely useful in materials science where properties need to be estimated as a function of a vast array of inputs. It is then most unlikely that the inputs are uniformly distributed in the input space.
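As an illustration of these ideas, the sketch below computes the overall error of Eq. (6) and approximates input-dependent error bars by the spread of predictions from several independently fitted models. This ensemble device is only a crude stand-in for the Bayesian marginalisation of Refs. 5, 6), and the toy models are assumptions for illustration.

```python
import numpy as np

def sum_of_squares_error(t, y):
    """Overall error of Eq. (6), proportional to sum_j (t_j - y_j)^2."""
    t, y = np.asarray(t), np.asarray(y)
    return 0.5 * np.sum((t - y) ** 2)

def ensemble_prediction(models, x):
    """Crude stand-in for Bayesian error bars: the mean and standard deviation
    of predictions from several independently fitted models.  'models' is any
    sequence of callables x -> y, e.g. networks trained from different random
    starts or on resampled data."""
    preds = np.array([m(x) for m in models])
    return preds.mean(), preds.std()

# Illustrative use with three toy "models" standing in for trained networks.
models = [lambda x: 1.00 * x + 0.1,
          lambda x: 0.95 * x - 0.1,
          lambda x: 1.10 * x + 0.0]
mean, spread = ensemble_prediction(models, 2.0)
print(f"prediction = {mean:.2f} +/- {spread:.2f}")
print("E_D =", sum_of_squares_error([1.0, 2.0], [1.1, 1.8]))
```

Where the individual models disagree (typically in sparse or noisy regions of the input space), the spread grows, mimicking the behaviour of the error bars in Fig. 5.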
6. Miscellany

Neural networks are often associated with the working of the human brain or with artificial intelligence. This can be misleading:

(1) The neural network method described here can be expressed as an equation which is precise and which is precisely reproducible for a given set of inputs. It is a regression method of which linear regression is a subset. One has to use considerable imagination to see in this a connection with "intelligence" or with the working of the brain.

(2) The method is sometimes incorrectly described as a "black box" technique.7) On the contrary, it is transparent, consisting of an equation and associated coefficients (the weights). Both the equation and the weights can be studied to reveal relationships and interactions.

We now proceed to discuss specific applications of neural networks in the context of materials. There are some general principles which emerge; these are emphasised in narrow paragraphs.

7. Welding

Welding has seen major applications of the neural network method. Examples include: weld seam tracking, where the output from sensors is interpreted by a trained network to control a welding robot8); the interpretation of sensor information measured during welding to determine weld quality9,10); the detection of defects in welds using ultrasonic, radiation or other signals11-19); and the estimation of weld profile (including penetration) from variations in welding parameters or other sensed parameters.20-27)

The use of neural networks is therefore well established in the control and monitoring of welds. We shall focus here on specific applications of greatest interest in materials science. There are other papers in the present issue of ISIJ International which are not discussed here.

Charpy Toughness of Steel Weld Metal

The concept of toughness as a measure of the energy absorbed during fracture is well-developed.28,29) It is often measured using notched-bar impact tests, of which the most common is the Charpy test.


A square-section notched bar is fractured under specified conditions and the energy absorbed during fracture is taken as a measure of toughness. The Charpy test is empirical in that the data cannot be used directly in engineering design. It does not provide the most searching mechanical conditions. The sample has a notch, but this is less sharp than the atomically sharp brittle crack. Although the test involves impact loading, there is a requirement to start a brittle crack from rest at the tip of the notch, suggesting that the test is optimistic in its comparison against a propagating brittle crack.29) Most materials can be assumed to contain sub-critical cracks so that the initiation of a crack seems seldom to be an issue.

The Charpy test is nevertheless a vital quality control measure which is specified widely in international standards, and in the ranking of samples in research and development exercises. It is the most common first assessment of toughness and in this sense has a proven record of reliability. The test is usually carried out at a variety of temperatures in order to characterise the ductile-brittle transition intrinsic to body-centred cubic metals with their large Peierls barriers to dislocation motion.

The toughness of ferritic steel welds has been studied using neural networks.30) The Charpy toughness was expressed as a function of the welding process (manual metal arc or submerged arc), the chemical composition (C, Mn, Si, Al, P, S, O and N), the test temperature and the microstructure (primary, secondary, allotriomorphic ferrite, Widmanstätten ferrite, and acicular ferrite). The inclusion of microstructure greatly limited the quantity of data available for analysis because few such results are reported in the literature. Nevertheless, the aim of the analysis was to see if the network recognised known trends in toughness as a function of the microstructure. The welding process was numerically distinguished in the analysis by using 0 and 1 for the manual and submerged arc methods.

Fig. 6. Bar chart showing a measure of the model-perceived significance of each of the input variables in influencing toughness.30)

Figure 6 illustrates the significance (σ_w) of each of the input variables, as perceived by the neural network, in influencing the toughness of the weld. As expected, the welding process has its own large effect; it is well known that submerged arc welds are in general of a lower quality than manual metal arc welds. The yield strength has a major effect; it is known that an increase in the yield strength frequently leads to a deterioration in the toughness. It is also widely believed, as seen in Fig. 6, that acicular ferrite greatly influences toughness. Nitrogen has a large effect, as is well established experimentally.

    A trained neural network is associated with revealing parameters other than just the transfer function and weights. For example, the extent to which each input explains variations in the output parameter can easily be examined.

It is surprising that carbon has such a small effect (Fig. 6), but what the results really demonstrate is that the influence of carbon comes in via the strength and microstructure. Other trends have been discussed in Ref. 30).

Oxygen influences welds in both beneficial and harmful ways, e.g. by helping the nucleation of acicular ferrite or contributing to fracture by nucleating oxides. The predicted effect of oxygen concentration is illustrated in Fig. 7 along with the ±1 standard deviation predicted error bars. It is clear that extrapolation into regions where data are sparse or noisy is identified with large error bars. The training data used for the toughness model had a maximum oxygen concentration of 821 parts per million.

Fig. 7. Variation in the normalised toughness as a function of the oxygen concentration. Oxygen is varied here without changing any of the other inputs. The maximum oxygen concentration in the training data was 821 p.p.m.

    Neural network models which indicate appropriately large error bars in regions of the input space where the fitting is uncertain are less dangerous in applications than those which simply identify a global level of noise in the output.

Figure 8 shows how the toughness varies as a function of the manganese concentration and the test temperature. It is obvious that the effect of temperature is smaller at large concentrations of manganese, i.e. there is an interaction between the manganese and temperature. This interaction has been recognised naturally by the model and is expected from a metallurgical point of view.30)
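A sketch of the interrogation procedure used to produce trends such as those of Figs. 7 and 8 is given below: one input is swept while the others are held at fixed reference values, and a grid over two inputs exposes their interaction. The predictor shown is a hypothetical placeholder, not the trained toughness model of Ref. 30).

```python
import numpy as np

def sweep_one_input(predict, x_ref, index, values):
    """Predict while varying a single input and holding all the others at
    reference values, as in the oxygen sweep of Fig. 7."""
    results = []
    for v in values:
        x = x_ref.copy()
        x[index] = v
        results.append(predict(x))
    return np.array(results)

# Hypothetical predictor standing in for a trained model.  It returns a made-up
# mean with an interaction between inputs 0 and 4, and an error bar that grows
# away from the origin, mimicking regions where training data are sparse.
def toy_predict(x):
    mean = np.tanh(x[0] + 0.5 * x[4] + 0.8 * x[0] * x[4]) + 0.1 * x[1:4].sum()
    error_bar = 0.1 + 0.2 * float(np.abs(x).max())
    return mean, error_bar

x_reference = np.zeros(5)            # all inputs held at reference values
sweep = sweep_one_input(toy_predict, x_reference, index=3, values=np.linspace(-2, 2, 9))
print(sweep.round(3))                # columns: predicted mean, error bar

# A grid over two inputs (cf. manganese and temperature in Fig. 8) reveals
# whether the effect of one input depends on the level of the other.
grid = np.array([[toy_predict(np.array([mn, 0.0, 0.0, 0.0, temp]))[0]
                  for temp in np.linspace(-1, 1, 5)]
                 for mn in np.linspace(-1, 1, 5)])
print(grid.round(2))
```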

    Neural networks deduce the relationship between variables, including any interactions. In complex cases involving many variables, the interactions are revealed both qualitatively and quantitatively by examining the predictions, as illustrated in Fig. 8.

Analyses carried out without an examination of the consequences have tended to lead to the incorrect conclusion that the neural network method lacks transparency. For example, Chan et al.31) created a model for the hardness of the heat affected zone of steel welds as a function of the carbon concentration, the carbon-equivalent and the cooling rate within a specified temperature range. Having produced the model, they did not continue to study how the hardness depends on each of the input parameters, whether the relationship differs for low and high carbon steels, etc.

Fig. 8. Variation in the normalised toughness as a function of the manganese concentration and the test temperature.

The toughness model described above30) is revealing but nevertheless impractical for routine use because the inputs include the microstructure, which can be difficult to predict or measure. This can be resolved by eliminating the microstructural inputs and including the welding conditions (current, voltage, speed, interpass temperature, arc efficiency) which determine the cooling rate of the weld. The microstructure is a function of the cooling rate and chemical composition (both easy to measure), so it does not have to be included explicitly.

    For practical applications, the most useful neural network models are those whose inputs are easily measured, perhaps as a part of the quality control process. On the other hand, it may be revealing to use inputs which are related directly to the output parameter, so that mechanisms are revealed.

Strength of Steel Welds

The tensile strength of a weld metal is frequently set higher than that of the corresponding base metal.32) It may also be necessary to maintain a significant difference between the yield and ultimate tensile strengths in order to ensure considerable plasticity before failure, and to resist the growth of fatigue cracks. Svensson32) has compiled an extensive list of linear regression equations for estimating the strength as a function of the weld metal chemical composition. The equations are limited to specific alloy systems and cover no more than five alloying elements. There is no facility for estimating the effect of heat treatments.

Table 2. The variables used in the analysis of strength. The abbreviation p.p.m.w. stands for parts per million by weight. Notice that this table cannot be used to define the range over which a neural network might be expected to give safe predictions.

Variable                            Range         Mean     Standard deviation
Carbon / wt%                        0.029-0.16    0.074    0.024
Silicon / wt%                       0.04-1.14     0.34     0.124
Manganese / wt%                     0.27-2.25     1.20     0.39
Sulphur / wt%                       0.001-0.14    0.0097   0.0069
Phosphorus / wt%                    0.004-0.25    0.013    0.011
Nickel / wt%                        0.00-3.50     0.22     0.63
Chromium / wt%                      0.00-9.35     0.734    2.07
Molybdenum / wt%                    0.00-1.50     0.17     0.35
Vanadium / wt%                      0.00-0.24     0.018    0.049
Copper / wt%                        0.00-1.63     0.074    0.224
Cobalt / wt%                        0.00-2.80     0.011    0.147
Tungsten / wt%                      0.00-2.99     0.0115   0.146
Titanium / p.p.m.w.                 0-690         40.86    79.9
Boron / p.p.m.w.                    0-69          1.17     5.78
Niobium / p.p.m.w.                  0-1000        57.4     151
Oxygen / p.p.m.w.                   132-1650      441      152
Heat input / kJ mm⁻¹                0.6-7.9       1.85     1.47
Interpass temperature / °C          20-300        207.7    48.93
Tempering temperature / °C          0-760         320.4    257
Tempering time / hours              0-24          5.72     6.29
Yield strength / MPa                315-920       507.3    92.8
Ultimate tensile strength / MPa     447-1151      599      92


A more generally applicable neural network model has been created by Cool et al.33) using data from some 1652 experiments reported in the published literature. The extensive set of variables included in the analysis is presented in Table 2, which contains information about the range of each variable. It is emphasised, however, that unlike equation (1), the information in Table 2 cannot be used to define the range of applicability of the neural network model. This is because the inputs are in general expected to interact. We shall see later that it is the Bayesian framework which allows the calculation of error bars which define the range of useful applicability of the trained network.

The analysis demonstrated conclusively that there are strong interactions between the inputs; thus, the effect of molybdenum on the strength at low chromium concentrations was found to be quite different from that at high chromium concentrations. The entire dataset used in the analysis is available on the world wide web (www.msm.cam.ac.uk/map/mapmain.html). When new data are generated, these can be added to the dataset to enable the creation of more knowledgeable neural nets.

Another feature of the analysis was that Cool et al. used a committee of best models to make predictions. It is possible that a committee of models can make a more reliable prediction than an individual model.34) The best models are ranked using the values of the test errors. Committees are then formed by combining the predictions of the best L models, where L = 1, 2, ...; the size of the committee is therefore given by the value of L. A plot of the test error of the committee versus its size L gives a minimum which defines the optimum size of the committee.

    It is good practice to use multiple good models and combine their predictions.34) This will make little difference in the region of the input space where the fit is good, but may improve the reliability where there is great uncertainty.
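The committee procedure just described can be written down directly: rank the models by their individual test errors, average the predictions of the best L of them, and select the committee size at the minimum combined test error. In the sketch below the toy models are illustrative stand-ins for trained networks.

```python
import numpy as np

def committee_error(models, X_test, t_test, L):
    """Test error of a committee formed from the best L models.
    'models' must already be ranked by their individual test errors."""
    preds = np.mean([m(X_test) for m in models[:L]], axis=0)
    return np.mean((preds - t_test) ** 2)

def choose_committee(models, X_test, t_test):
    """Rank models by test error, then pick the committee size L whose
    combined prediction gives the minimum test error (cf. Ref. 34))."""
    ranked = sorted(models, key=lambda m: np.mean((m(X_test) - t_test) ** 2))
    errors = [committee_error(ranked, X_test, t_test, L)
              for L in range(1, len(ranked) + 1)]
    best_L = int(np.argmin(errors)) + 1
    return ranked[:best_L], errors

# Illustrative use with toy "models" standing in for trained networks.
X_test = np.linspace(0, 1, 20)
t_test = 2.0 * X_test + 0.3
models = [lambda X: 2.1 * X + 0.25,
          lambda X: 1.8 * X + 0.40,
          lambda X: 2.0 * X + 0.10,
          lambda X: 2.5 * X - 0.10]
committee, errors = choose_committee(models, X_test, t_test)
print("committee size:", len(committee), "test errors vs L:", np.round(errors, 4))
```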
Weld Cooling Rate

The time taken for a weld to cool in the temperature range 800-500°C is an important parameter in determining the microstructure and mechanical properties of the final deposit, and indeed of the adjacent heat-affected zone. An analytical approximation to the heat flow problem35,36) has indicated that the time should depend on the heat input (i.e. welding current, voltage, speed and arc transfer efficiency), the substrate temperature and the thickness of the plate (i.e. the dimensionality of the heat flow). Separate equations are required for 2, 3 or 2½ dimensional heat flow.

Chan et al.37) used the inputs indicated by Adams' physical model and incorporated them into a neural network model, to obtain a single model for all dimensionalities of heat flow. They demonstrated that the accuracy achieved is better with the neural network, given the approximations inherent in the analytical equations of Adams' theory.

One difficulty is that the neural network model used does not give error bars which are dependent on the local fit, so that the actual range of applicability of the model is not clear. The reliability of the model is not established when extrapolated or interpolated beyond the domain of the training dataset.

Hot Cracking of Welds

Solidification cracking occurs in welds during cooling from the liquidus temperature, if the density changes associated with solidification and thermal contraction cannot be accommodated by fluid flow or by the motion of the solid components which constitute the weld assembly. Hot-cracking tests frequently lead to a discrete outcome, i.e. whether cracking occurs or not. Such a problem is known as a binary classification problem, and a classification neural network is used to model the probability of a crack as a function of the input variables.34) A probability of 1 represents definite cracking whereas a probability of 0 represents the absence of cracking. In the Bayesian framework the uncertainty of the prediction is accounted for by a process of marginalisation. The effect of marginalisation is to take the output of the best-fit neural network and move it closer to a probability of 0.5 by an amount depending on the parameters' uncertainty. The value 0.5 in the output represents the highest level of uncertainty.

An application of the classification neural network to the hot-cracking of ferritic steel welds is shown in Fig. 9.38)

Fig. 9. Predicted effect of molybdenum and carbon on solidification cracking in steel welds.38)
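The sketch below illustrates a classification network of the kind described: hidden tanh units feed a logistic output, so the prediction is a probability between 0 and 1. The averaging over several parameter sets at the end only imitates the effect of marginalisation (pulling uncertain predictions towards 0.5); it is not the Bayesian calculation of Ref. 34), and all parameter values are placeholders.

```python
import numpy as np

def crack_probability(x, W1, b1, w2, b2):
    """Classification network: hidden tanh units feeding a logistic output,
    so the result is a probability of cracking between 0 and 1."""
    h = np.tanh(W1 @ x + b1)
    a = w2 @ h + b2
    return 1.0 / (1.0 + np.exp(-a))     # logistic (sigmoid) output

# Placeholder parameters for a 2-input (e.g. C and Mo), 3-hidden-unit classifier.
W1 = np.array([[ 2.0,  1.5],
               [-1.0,  0.5],
               [ 0.7, -0.3]])
b1 = np.array([0.0, 0.2, -0.1])
w2 = np.array([1.5, -0.8, 0.6])
b2 = -0.2

# Averaging the outputs of several plausible parameter sets imitates, very
# crudely, the effect of marginalisation: where the sets disagree, the mean
# probability is pulled towards the maximally uncertain value of 0.5.
parameter_sets = [(W1, b1, w2, b2),
                  (W1 * 0.8, b1, w2 * 1.2, b2 + 0.3),
                  (W1 * 1.2, b1 - 0.1, w2 * 0.9, b2 - 0.3)]
x = np.array([0.10, 0.4])               # e.g. carbon and molybdenum, normalised
probs = [crack_probability(x, *p) for p in parameter_sets]
print("individual:", np.round(probs, 3), "averaged:", round(float(np.mean(probs)), 3))
```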


8. Superalloys

Overall Strength

The yield and ultimate tensile strength of nickel-base superalloys with γ/γ' microstructures has been modelled39,40) using the neural network method, as a function of the Ni, Cr, Co, Mo, W, Ta, Nb, Al, Ti, Fe, Mn, Si, C, B, and Zr concentrations, and of the test temperature. The analysis is based on data selected from the published literature. The trained models were subjected to a variety of metallurgical tests. As expected, the test temperature (in the range 25-1100°C) was found to be the most significant variable influencing the tensile properties, both via the temperature dependence of strengthening mechanisms and due to variations in the γ' fraction with temperature. Since precipitation hardening is a dominant strengthening mechanism, it was encouraging that the network recognised Ti, Al and Nb to be key factors controlling the strength. The physical significance of the neural network was apparent in all the interrogations performed.

One example illustrating this last point is presented in Fig. 10. The softening of the γ matrix is offset by the remarkable reversible increase in the strength of the γ' with increasing temperature.

Fig. 10. Predicted temperature dependence of the yield strength of a γ/γ' superalloy (after Tancret).

A further revelation from the neural network analysis came clearly from the error estimates, which demonstrated that there are great uncertainties in the experimental data on the effect of large concentrations of molybdenum on the tensile properties. This has identified a region where careful experiments are needed; molybdenum is known to have a large influence on the γ/γ' lattice misfit.

9. Fatigue Properties

Fatigue is one of the most difficult mechanical properties to predict. An extensive literature review has been carried out to assess methods for predicting the fatigue crack growth rates.41) This included an examination of physical models, which were either found to lack generality or to give only qualitative indications of the trends.41,42) Experiments on the initiation and propagation of cracks in turbine-disc superalloys have failed to clarify how fatigue theory could be used to make quantitative predictions.

A neural network method has therefore been used after identifying some 51 variables that could be expected to influence the fatigue crack growth rate in nickel base superalloys.43) In fact, it is not difficult to compile an even larger list of variables which could influence fatigue properties, but an over-ambitious choice of inputs is likely to reduce the number of data available in the literature.

    The number of inputs chosen is a compromise between the definition of the problem and the availability of data. The neural network method cannot cope with missing values. An over-ambitious choice of input variables will in general limit the number of complete sets of data available for analysis. On the other hand, neglect of an important variable will lead to an increase in the noise associated with predictions.

The variables studied included the stress intensity range ΔK, log{ΔK}, chemical composition, temperature, grain size, heat treatment, frequency, load waveform, atmosphere, R-ratio, the distinction between short and long crack growth, sample thickness and yield strength. The analysis was conducted on some 1894 data collected from the published literature. The reason for including both ΔK and log{ΔK} as inputs is because the latter has metallurgical significance, since a plot of the logarithm of the crack growth rate versus log{ΔK} is a simple and well-established relationship. However, there may exist unknown and separate effects of ΔK, in which case that should be included as an additional input. It is encouraging that the trained network in fact assigned the greatest significance to log{ΔK}.

    If a certain functional relationship is expected between the raw output and a particular input, then that input can be included twice, in its functional form and in its raw form. The inclusion of the latter prevents bias.

Fig. 11. Astroloy: (a) effect of grain size alone; (b) effect of heat treatment alone.43)
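The advice in the preceding narrow paragraph amounts to a simple piece of feature construction, sketched below with illustrative numbers rather than data from Ref. 43).

```python
import numpy as np

# Include an input both in raw and in functional form: here the stress intensity
# range is supplied both as dK and as log(dK).  The values are illustrative only.
dK = np.array([5.0, 8.0, 12.0, 20.0, 35.0])                  # MPa m^0.5
temperature = np.array([20.0, 20.0, 500.0, 500.0, 650.0])    # degrees C
X = np.column_stack([dK, np.log(dK), temperature])           # input matrix for training
print(X)
```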


The model, unlike any experimental approach, could be used to study the effect of each variable in isolation. This gave interesting results. For example, it was verified that an increase in the grain size should lead to a decrease in the fatigue crack growth rate, when the grain size is varied without affecting any other input. This cannot be done in experiments because the change in grain size is achieved by altering the heat treatment, which in turn influences other features of the microstructure (Fig. 11).

It was also possible to confirm that log{ΔK} is more strongly linked to the fatigue crack growth rate than ΔK, as expected from the Paris law. There are many other metallurgical trends revealed.43)

    The neural network model can in principle be used to examine the effect of an individual input on the output parameter, whereas this may be incredibly difficult to do experimentally.

In another approach,44) a different neural computing method was used to focus on stage II of the Paris regime, where the growth rate should depend mainly on the stress intensity range, Young's modulus and yield strength. The model was used successfully in estimating new test data. The effect of the ultimate tensile strength and phase stability was also investigated; although this proved promising, it is probable that the results will be more convincing when a greater range of data become available.

Fatigue Threshold

In a recent study of the fatigue thresholds in nickel-base superalloys, Schooling et al. have attempted to compare a "neurofuzzy" modelling approach with the classical neural network.7) The application of fuzzy rules to the network involves the biassing of the inputs according to human experience.

It was suggested that the fuzzy method has an advantage with restricted datasets because the complexity of the relationships can be restricted by the operator. This conclusion is surprising because the complexity of a classical neural network, when assessed for generalisation, naturally tends to be minimised for small datasets. Schooling et al. found it necessary to make significant adjustments to the fuzzy rules in order to reduce the mean square error to a value comparable to the classical network. It is evident that there is considerable operator bias introduced in designing fuzzy networks. This may not be satisfactory for complex problems where the actual relationships are not understood to begin with.

The comparison of the two methods by Schooling et al. does not seem justified because the predictions of the neurofuzzy method were not accompanied by error bars (other than the mean square error).

Creep of Superalloys

The creep rupture life of nickel-base superalloys has been modelled as a function of 42 variables including Cr, Co, C, Si, Mn, P, S, Mo, Cu, Ti, Al, B, N, Nb, Ta, Zr, Fe, W, V, Hf, Re, Mg, La and ThO2.45) Other variables include four heat treatment steps (characterised by temperature, duration and cooling rate), the sample shape and the solidification method.

Sathyanarayanan et al.46) have developed a neural network model for the "creep feed grinding" of nickel-base superalloys and titanium alloys, using the feed rate, depth of cut and wheel bond type as inputs, and the surface finish, force and power as outputs.

Lattice Parameters of Superalloys

The lattice constants of the γ and γ' phases of nickel superalloys have been modelled using a neural network within a Bayesian framework.47) The analysis was based on new X-ray measurements and peak separation techniques, for a number of alloys and as a function of temperature. These data were supplemented using the published literature.

The lattice parameters of the two phases were expressed as a non-linear function of eighteen variables including the chemical composition and temperature. It was possible to estimate the uncertainties, and the method has proved to be extremely useful in understanding both the effect of solutes on the lattice mismatch, and how this mismatch changes with temperature.

10. Transformations

Martensite-start Temperature

Martensite forms without diffusion and has a well-defined start-temperature (M_S), which for the majority of engineering steels is only sensitive to the chemical composition of the austenite. There are numerous regression equations in the published literature, which have been used for many decades in the estimation of M_S, mostly as a linear function of the chemical composition. Vermeulen et al.48) demonstrated that a neural network model can do this much more effectively, and at the same time demonstrated clear interactions between the elements. For example, the magnitude of the effect of a given carbon content on M_S is much larger at low manganese levels than at high manganese concentrations. This is in contrast to all published equations, where the sensitivity to carbon is independent of the presence of other alloying elements.

Continuous Cooling Transformation Diagrams

The transformation of austenite as the temperature decreases during continuous cooling has been modelled with the neural network method using the chemical composition, the austenitisation temperature and the cooling rate as inputs.49) The results were concluded to be satisfactory but with large errors, particularly for the bainite reaction. These large errors were attributed partly to noise in the experimental data, to the neglect of austenite grain size as an input, and to the assumption that all transformations occur at all cooling rates, whereas this is not the case in practice.

The network is empirical and hence permits a calculation of each transformation under all circumstances, even when this involves extrapolation into forbidden domains. This difficulty might be avoided by modelling the total fraction of transformation and the transformation-start temperatures separately, and then using a logical rule to determine whether the transformation is real or not.
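The logical rule suggested above for avoiding predictions in forbidden domains might be implemented along the following lines; the two predictor functions and the threshold are hypothetical stand-ins for separately trained models.

```python
def transformation_is_real(predict_fraction, predict_start_temperature,
                           composition, cooling_rate, min_fraction=0.01):
    """Logical rule: accept a predicted transformation only if the separately
    modelled fraction exceeds a (hypothetical) threshold; otherwise report that
    it does not occur at this cooling rate."""
    fraction = predict_fraction(composition, cooling_rate)
    if fraction < min_fraction:
        return None                                    # transformation suppressed
    return predict_start_temperature(composition, cooling_rate), fraction

# Toy stand-ins for the two separately trained models.
toy_fraction = lambda comp, rate: max(0.0, 0.5 - 0.001 * rate)
toy_start = lambda comp, rate: 600.0 - 0.05 * rate     # degrees C
print(transformation_is_real(toy_fraction, toy_start, {"C": 0.2}, cooling_rate=100.0))
print(transformation_is_real(toy_fraction, toy_start, {"C": 0.2}, cooling_rate=1000.0))
```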


A further effect is illustrated by a later study50); the neural network calculations of CCT curves as illustrated in Fig. 12 appear jagged, rather than the expected smooth curves. This is because the points on the curve are calculated individually and have been joined without accounting for errors. If smooth curves are definitely required, then the experimental curves should be represented mathematically, and the parameters required to describe these curves can be used as inputs. There will, of course, be error bars to be taken into account but these will be smoothly disposed about the most expected curve.

Fig. 12. The effect of carbon on the CCT diagram of an alloy steel as a function of the carbon concentration.50)

Austenite Formation

Most commercial processes rely to some extent on heat treatments which cause the steel to revert into the austenitic condition. This includes the processes involved in the manufacture of wrought steels, and in the fabrication of steel components by welding.

The formation of austenite during heating differs in many ways from those transformations that occur during the cooling of austenite. Both the diffusion coefficient and the driving force increase with the extent of superheat above the equilibrium temperature, so that the rate of austenite formation must increase indefinitely with temperature.

There is another important difference between the transformation of austenite, and the transformation to austenite. In the former case, the kinetics of transformation can be described completely in terms of the alloy composition and the austenite grain size. By contrast, the microstructure from which austenite may grow can be infinitely varied. Many more variables are therefore needed to describe the kinetics of austenite formation.

Gavard et al.51) have created a neural network model in which the Ac1 and Ac3 temperatures of steel are estimated as a function of the chemical composition and the heating rate. The model has been very revealing; some of the features are illustrated in Fig. 13.

Fig. 13. (a, b) The predicted variation in the Ac1 and Ac3 temperatures of plain carbon steels as a function of the carbon concentration at a heating rate of 1°C s⁻¹. (c, d) The predicted variation in the Ac1 and Ac3 temperatures of an Fe-0.2 wt% C alloy as a function of the heating rate. In all of these diagrams, the lines represent the ±1σ error bars about the calculated points. All the results presented here are based on models with four and two hidden units for the Ac1 and Ac3 temperatures respectively.

As might be expected, the Ac1 temperature decreases with carbon concentration, reaching a limiting value which is very close to the eutectoid temperature of about 723°C.
models with four and two hidden units for the Acl and Ac3 temperatures respectively.


This latter limit is expected because of the slow heating rate and the fact that the test steel does not contain any substitutional solutes. Note that there is a slight underprediction of the Ac1 temperature for pure iron, although the expected temperature of about 910°C is within the 95% confidence limits of the prediction (twice the width of the error limits illustrated in Fig. 13).

By contrast, the Ac3 temperature appears to go through a minimum at about the eutectoid carbon concentration. This is also expected because the Ae3 temperature also goes through a minimum at the eutectoid composition. Furthermore, unlike the Ac1 temperature, the minimum value of the calculated Ac3 never reaches the eutectoid temperature; even at the slow heating rate it is expected that the austenite transformation finishes at some superheat above the eutectoid temperature, the superheat being reasonably predicted to be about 25°C.

At slow heating rates, the predicted Ac1 and Ac3 temperatures are in fact very close to the equilibrium Ae1 and Ae3 temperatures and insensitive to the rate of heating. As expected, they both increase more rapidly when the heating rate exceeds about 10°C s⁻¹. The significant maximum as a function of the heating rate is unexpected and is a prediction which suggests that further experiments are necessary.

    Neural network analysis of published data can help identify experiments in two ways. First, unexpected trends might emerge. Secondly, the error bars may be so large as to make the prediction uncertain, in which case experiments are necessary.

11. Steel Processing and Mechanical Properties

Hot Rolling

The properties of steel are greatly enhanced by the rolling process. It is possible to cast steel into virtually the final shape, but such a product will not have the quality or excellence of a carefully rolled product.

Singh et al.52) have developed a neural network model in which the yield and tensile strength of the steel is estimated as a function of some 108 variables, including the chemical composition and an array of rolling parameters. Implicit in the rolling parameters is the thermal history and mechanical reduction of the slab as it progresses to the final product. The training data come from sensors on the rolling mill. There is therefore no shortage of data, the limitation in this case being the need to economise on computations. There are some exciting results which make sense from a metallurgical point of view, together with some novel predictions on a way to control the yield to tensile strength ratio. A similar model by Korczak et al.53) uses microstructural parameters as inputs and has been applied to the calculation of the ferrite grain size and property distribution through the thickness of the final plate.

Vermeulen et al.54) have similarly modelled the temperature of the steel at the last finishing stand. They demonstrated that it is definitely necessary to use a non-linear representation of the input variables to obtain an accurate prediction of the temperature. The control of strip temperature on a hot strip mill runout table has also been modelled by Loney et al.55)

Heat Treatment

A Jominy test is used to measure the hardenability of steel during heat treatment. Vermeulen et al.56) have been able to accurately represent the Jominy hardness profiles of steels as a function of the chemical composition and the austenitising temperature.

Mechanical Properties

There are many other examples of the use of neural networks to describe the mechanical properties of steels: Dumortier et al. have modelled the properties of microalloyed steels57); Myllykoski58,59) has addressed the problem of strength variations in thin steel sheets; and further work covers the microstructure-property relationships of C-Mn steels60), the tensile properties of mechanically alloyed iron61,62) with a comparison against predictions using physical models, and the hot-torsion properties of austenite.63)

12. Polymeric and Inorganic Compounds

Neural network methods have been used to model the glass transition temperatures of amorphous and semi-crystalline polymers to an accuracy of about 10 K, and similar models have been developed for relaxation temperatures, degradation temperature, refractive index, tensile strength, elongation, notch strength, hardness, etc.64-66) The molecular structure of the monomeric repeating unit is described using topological indices from graph theory. The techniques have been exploited, for example, in the design of polycarbonates for increased impact resistance. In another analysis, the glass transition temperature of linear homopolymers has been expressed as a function of the monomer structure, and the model has been shown to generalise to unseen data to an accuracy of about 35 K.67)

Comparison with Quantum Mechanical Calculations

There is an interesting study68) which claims that neural networks are able to predict the equilibrium bond length, bond dissociation energy and equilibrium stretching frequency more accurately, and far more rapidly, than quantum mechanical calculations. The work dealt with diatomic molecules such as LiBr, using thirteen inputs: the atomic number, the atomic weight (to include isotope effects), the valence electron configuration (s, p, d, f electrons) for both atoms, and the overall charge. The corresponding quantum mechanical calculations used effective core potentials as inputs. It was found that all three molecular properties could be predicted more accurately using neural networks, with a considerable reduction in the computational effort.

Such a comparison of a physical model with one which is empirical is not always likely to be fair. In general, an appropriate neural network model should perform badly when compared with a physical model, when both are presented with precisely identical data. This is because the neural network can only learn from the data it is exposed to.

By contrast, the physical model will contain relationships which have some justification in science, and which impose constraints on the behaviour of the model during extrapolation. As a consequence, the neural network is likely to violate physical principles when used without restriction. The continuous cooling transformation curve model discussed above49,50) is an example where the neural network produces information in forbidden domains and produces jagged curves, which a physical model using the same data would not, because the form of the curves would be based on phase transformation theory [e.g. Ref. 69)].

13. Ceramics

Ceramic Matrix Composites

Ceramic matrix composites rely on a weak interface between the matrix and fibre. This introduces slip and debonding during deformation, thus avoiding the catastrophic propagation of failure. The mathematical treatment of the deformation has a large number of variables with many fitting parameters. For an Al2O3 matrix-SiC whisker composite a constitutive law has been derived using an artificial neural network, using inputs generated by finite element analysis.70)

    Hybrid models can be created by training neural networks on data generated by physical models.

Machining and Processing

There are many examples where neural networks have been used to estimate machine-tool wear. For example, Ezugwu et al.71) have modelled the tool life of a mixed-oxide ceramic cutting tool as a function of the feed rate, cutting speed and depth of cut. Tribology issues in machining, including the use of neural networks, have been reviewed by Jahanmir.72)

Neural networks are also used routinely in the control of cast ceramic products made using the slip casting technique, using variables such as the ambient conditions, raw material information and production line settings.73) In another application, scanning electron microscope images of ceramic powders were digitised and processed to obtain the particle boundary profile; this information was then classified using a neural approach, with exceptionally good results even on unseen data.74)

14. Thin Films and Superconductors

A lot of the materials-science issues about thin films naturally involve deposition and characterisation. The deposition process can be very complicated to control and is ideally suited for neural network applications.

Neural networks have been used to interpret Raman spectroscopy data to deduce the superconducting transition temperature of YBCO thin films during the deposition process75); to characterise reflection high-energy electron diffraction patterns from semiconductor thin films in order to monitor the deposition process76); and to rapidly estimate the optical constants of thin films using the computational results of a physical model of thin films77); and there are numerous other similar examples.

There is one particular application which falls in the category of "alloy design": Asada et al.78) trained a neural network on a database of (Y1-xCax)Ba2Cu3Oz and Y(Ba2-xCax)Cu3Oz, where z is generally less than 7, the ideal number of oxygen atoms. The output parameter was the superconducting transition temperature as a function of x and z. They were thus able to predict the transition temperature of YBa2Cu3Oz doped with calcium. It was demonstrated that the highest temperature is expected for x = 0.3 and z = 6.5 in (Y1-xCax)Ba2Cu3Oz, whereas a different behaviour occurs for Y(Ba2-xCax)Cu3Oz.

15. Composites

There are many applications where vibration information can be used to assess the damage in composite structures, e.g. Refs. 79)-81). Acoustic emission signals have been used to train a neural network to determine the burst pressure of fiberglass epoxy pressure vessels.82) There has even been an application to the detection of cracks in eggs.83)

One different application is in the optimisation of the curing process for polymer-matrix composites made using thermosetting resins.84) An interesting application is the modelling of damage evolution during forging of Al-SiC particle reinforced brake discs.85) The authors were able to predict damage in a brake component previously unseen by the neural network model.

Hwang et al.86) compared a prediction of the failure strength of carbon fibre reinforced polymer composite, made using a neural network model, against the Tsai-Wu theory and an alternative hybrid model. Of the three models, the neural network gave the smallest root-mean-square error. Nevertheless, the earlier comments about the validity of the neural network in extrapolation etc. remain as a cautionary note in comparisons of neural and physical models.

16. Publication

The application of neural networks in materials science is a rapidly growing field. There are numerous papers being published but the vast majority are of little use other than to the authors. This is because the publications almost never include detailed algorithms, weights and databases of the kind necessary to reproduce the work. Work which cannot be reproduced or checked goes against the principles of scientific publication.

The minimum information required to reproduce a trained network is the structure of the network, the nature of the transfer functions, the weights corresponding to the optimised network and the range of each input and output variable. Such detailed numerical information is unlikely to be accepted for publication in journals. There is now a world wide web site where this information can be logged for common access:

www.msm.cam.ac.uk/map/mapmain.html

It is also good practice to deposit the datasets used in the development of neural networks in this materials algorithms library.87)
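The minimum information listed above can be recorded in a small machine-readable file. The sketch below uses a hypothetical JSON layout with illustrative values (the ranges are taken from Table 2); it is not the actual format of the MAP library.

```python
import json

# Hypothetical record of the minimum information needed to reproduce a trained
# network; the field names are illustrative, not the MAP library's format.
model_record = {
    "structure": {"inputs": 2, "hidden_units": 3, "outputs": 1},
    "transfer_functions": {"hidden": "tanh", "output": "linear"},
    "weights": {
        "W1": [[1.2, -0.7], [-0.4, 2.1], [0.3, 0.9]],
        "b1": [0.1, -0.2, 0.0],
        "w2": [0.8, -1.1, 0.5],
        "b2": 0.05,
    },
    "input_ranges": {"carbon_wt_pct": [0.029, 0.16], "manganese_wt_pct": [0.27, 2.25]},
    "output_range": {"yield_strength_MPa": [315, 920]},
}
with open("trained_network.json", "w") as f:
    json.dump(model_record, f, indent=2)
```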


17. Summary

Neural networks are clearly extremely useful in recognising patterns in complex data. The resulting quantitative models are transparent; they can be interrogated to reveal the patterns, and the model parameters can be studied to illuminate the significance of particular variables. A trained network embodies the knowledge within the training dataset, and can be adapted as knowledge is accumulated.

Neural network analysis has had a liberating effect on materials science, by enabling the study of incredibly diverse phenomena which are not as yet accessible to physical modelling. The methodology is used extensively in process control, process design and alloy design. It is a technique which should now form a standard part of the undergraduate curriculum.

Acknowledgments

I am immensely grateful to David MacKay for his help and friendship over many years, and to Franck Tancret for reviewing the draft manuscript. I would also like to thank Dr Kaneaki Tsuzaki and the editorial board of ISIJ International for inviting us to assemble a special issue on neural networks.

REFERENCES

1) J. S. Bowles and J. K. MacKenzie: Acta Metall., 2 (1954), 129.
2) M. S. Wechsler, D. S. Lieberman and T. A. Read: Trans. Amer. Inst. Min. Metall. Engrs., 197 (1953), 1503.
3) W. Stevens and A. J. Haynes: J. Iron Steel Inst., 183 (1956), 349.
4) H. K. D. H. Bhadeshia: Acta Metall., 29 (1981), 1117.
5) D. J. C. MacKay: Neural Computation, 4 (1992), 415.
6) D. J. C. MacKay: Neural Computation, 4 (1992), 448.
7) J. M. Schooling, M. Brown and P. A. S. Reed: Mater. Sci. Eng. A, A260 (1999), 222.
8) S. H. Liam and S. V. Oh: Applied Intelligence, 10 (1999), 53.
9) D. C. Lim and D. G. Gweon: J. Eng. Manuf., 213 (1999), 51.
10) T. K. Meng and C. Butler: Int. J. Adv. Manuf. Technol., 13 (1997), 666.
11) W. Yi and I. S. Yun: KSME Int. J., 12 (1998), 1150.
12) R. Polikar, L. Udpa, S. S. Udpa and T. Taylor: IEEE Trans. on Ultrasonics, Ferroelectrics and Frequency Control, 45 (1998), 614.
13) T. W. Liao and K. Tang: Applied Artificial Intelligence, 11 (1997), 197.
14) A. Masnata and M. Sunseri: NDT & E Int., 29 (1996), 87.
15) D. Farson and J. Kern: Lasers in Engineering, 4 (1995), 13.
16) L. M. Brown and R. Denale: Nav. Eng. J., 107 (1995), 179.
17) A. McNab and I. Dunlop: Insight, 37 (1995), 11.
18) E. V. Hill, P. L. Israel and G. L. Knotts: Mater. Eval., 51 (1993), 1040.
19) C. G. Windsor, F. Anselme, L. Capineri and J. P. Mason: Br. J. Non-Destr. Test., 35 (1993), 15.
20) J. D. Brown, M. G. Rodd and N. T. Williams: Ironmaking Steelmaking, 25 (1998), 199.
21) S. C. Juang, Y. S. Tarng and H. R. Lii: J. Mater. Process. Technol., 75 (1998), 54.
22) P. Li, M. T. C. Fang and J. Lucas: J. Mater. Process. Technol., 71 (1997), 288.
23) Y. M. Zhang, L. Li and R. Kovacevic: J. Manuf. Sci. Eng., Trans. ASME, 119 (1997), 631.
24) H. S. Moon and S. J. Na: J. Manuf. Syst., 15 (1996), 392.
25) R. Kovacevic, Y. M. Zhang and L. Li: Weld. J., 75 (1996), S317.
26) D. Farson, K. Hillsley, J. Sames and R. Young: J. Laser Applications, 8 (1996), 33.
27) Y. M. Zhang, R. Kovacevic and L. Li: Int. J. Mach. Tools Manuf., 36 (1996), 799.
28) J. F. Knott: Fundamentals of Fracture Mechanics, Butterworths, London, (1973).
29) A. H. Cottrell: Int. J. Pressure Vessels Piping, 64 (1995), 171.
30) H. K. D. H. Bhadeshia, D. J. C. MacKay and L.-E. Svensson: Mater. Sci. Technol., 11 (1995), 1046.
31) B. Chan, M. Bibby and N. Holtz: Can. Metall. Q., 34 (1995), 353.
32) L.-E. Svensson: Control of Microstructure and Properties in Steel Arc Welds, CRC Press, London, (1994).
33) T. Cool, H. K. D. H. Bhadeshia and D. J. C. MacKay: Mater. Sci. Eng. A, 223 (1997), 179.
34) D. J. C. MacKay: Mathematical Modelling of Weld Phenomena III, ed. by H. Cerjak and H. K. D. H. Bhadeshia, The Institute of Materials, London, (1997), 359.
35) C. M. Adams, Jr.: Weld. J., 37 (1958), S210.
36) P. Jhaveri, W. G. Moffat and C. M. Adams, Jr.: Weld. J., 42 (1962), 12.
37) B. Chan, M. Bibby and N. Holtz: Trans. Can. Soc. Mech. Eng. (CSME), 20 (1996), 75.
38) K. Ichikawa, H. K. D. H. Bhadeshia and D. J. C. MacKay: Sci. Technol. Weld. Joining, 1 (1996), 43.
39) J. Jones and D. J. C. MacKay: 8th Int. Symp. on Superalloys, ed. by R. D. Kissinger et al., TMS, Warrendale, (1996), 417.
40) J. Jones, D. J. C. MacKay and H. K. D. H. Bhadeshia: Proc. 4th Int. Symp. on Advanced Materials, ed. by Anwar ul Haq et al., A. Q. Khan Research Laboratories, Pakistan, (1995), 659.
41) J. M. Schooling: The modelling of fatigue in nickel base alloys, Ph.D. Thesis, University of Cambridge, (1997).
42) J. M. Schooling and P. A. S. Reed: 8th Int. Symp. on Superalloys, ed. by R. D. Kissinger et al., TMS, Warrendale, (1996), 409.
43) H. Fujii, D. J. C. MacKay and H. K. D. H. Bhadeshia: ISIJ Int., 36 (1996), 1373.
44) J. M. Schooling and P. A. S. Reed: Proc. 4th Int. Symp. on Advanced Materials, ed. by Anwar ul Haq et al., A. Q. Khan Research Laboratories, Pakistan, (1995), 555.
45) H. Fujii, D. J. C. MacKay, H. K. D. H. Bhadeshia, H. Harada and K. Nogi: 6th Liège Conference, Materials for Advanced Power Engineering, Univ. of Liège, (1998), 1401.
46) G. Sathyanarayanan, I. J. Lin and M. K. Chen: Int. J. Production Research, 30 (1992), 2421.
47) S. Yoshitake, V. Narayan, H. Harada, H. K. D. H. Bhadeshia and D. J. C. MacKay: ISIJ Int., 38 (1998), 495.
48) W. G. Vermeulen, P. F. Morris, A. P. de Weijer and S. van der Zwaag: Ironmaking Steelmaking, 23 (1996), 433.
49) W. G. Vermeulen, S. van der Zwaag, P. F. Morris and T. de Weijer: Steel Res., 68 (1997), 72.
50) P. J. van der Wolk, C. Dorrepaal, J. Sietsma and S. van der Zwaag: Intelligent Processing of High Performance Materials, Research and Technology Organization, Neuilly-sur-Seine, France, (1998), 6-1 to 6-6.
51) L. Gavard, H. K. D. H. Bhadeshia, D. J. C. MacKay and S. Suzuki: Mater. Sci. Technol., 12 (1996), 453.
52) S. B. Singh, H. K. D. H. Bhadeshia, D. J. C. MacKay, H. Carey and I. Martin: Ironmaking Steelmaking, 25 (1998), 355.
53) P. Korczak, H. Dyja and E. Labuda: J. Mater. Process. Technol., 80-1 (1998), 481.
54) W. G. Vermeulen, A. Bodin and S. van der Zwaag: Steel Res., 68 (1997), 20.
55) D. Loney, I. Roberts and J. Watson: Ironmaking Steelmaking, 24 (1997), 34.
56) W. G. Vermeulen, P. J. van der Wolk, A. P. de Weijer and S. van der Zwaag: J. Mater. Eng. Performance, 5 (1996), 57.
57) C. Dumortier, P. Lehert, P. Krupa and A. Charlier: Mater. Sci. Forum, 284 (1998), 393.
58) P. Myllykoski: J. Mater. Process. Technol., 79 (1998), 9.
59) P. Myllykoski, J. Larkiola and J. Nylander: J. Mater. Process. Technol., 60 (1996), 399.
60) Z. Y. Liu, W. D. Wang and W. Gao: J. Mater. Process. Technol., 57 (1996), 332.
61) A. Y. Badmos, H. K. D. H. Bhadeshia and D. J. C. MacKay: Mater. Sci. Technol., 14 (1998), 793.
62) A. Y. Badmos and H. K. D. H. Bhadeshia: Mater. Sci. Technol., 14 (1998), 1221.
63) V. Narayan, R. Abad-Lera, B. Lopez, H. K. D. H. Bhadeshia and D. J. C. MacKay: ISIJ Int., 39 (1999), 999.

64) C. W. Ulmer, D. A. Smith, B. G. Sumpter and D. I. Noid: Comput. Theoretical Polym. Sci., 8 (1998), 311.
65) B. G. Sumpter and D. W. Noid: J. Therm. Anal., 46 (1996), 833.
66) B. G. Sumpter and D. W. Noid: Macromolecular Theory and Simulations, 3 (1994), 363.
67) S. J. Joyce, D. J. Osguthorpe, J. A. Padgett and G. J. Price: J. Chem. Soc.-Faraday Trans., 91 (1995), 2491.
68) T. R. Cundari and E. W. Moody: J. Chem. Inf. Comput. Sci., 37 (1997), 871.
69) H. K. D. H. Bhadeshia: Metal Sci., 16 (1982), 159.
70) H. S. Rao, J. M. Deshpande and A. Mukherjee: Sci. Eng. Compos. Mater., 6 (1997), 225.
71) E. O. Ezugwu, S. J. Arthur and E. L. Hines: J. Mater. Process. Technol., 49 (1995), 225.
72) S. Jahanmir: Machining Science and Technology, 2 (1998), 137.
73) S. E. Martinez, A. E. Smith and B. Bidanda: J. Intelligent Manuf., 5 (1994), 277.
74) G. Bonifazi and P. Burrascano: Adv. Powder Technol., 5 (1994), 225.
75) J. D. Busbee, B. Igelnik, D. Liptak, R. R. Biggers and I. Maartense: Engineering Applications of Artificial Intelligence, 11 (1998), 637.
76) A. Bensaoula, H. A. Malki and A. M. Kwari: IEEE Transactions on Semiconductor Manufacturing, 11 (1998), 421.
77) Y. S. Ma, X. Liu, P. F. Gu and J. F. Tang: Applied Optics, 35 (1996), 5035.
78) Y. Asada, E. Nakada, S. Matsumoto and H. Uesaka: Journal of Superconductivity, 10 (1997), 23.
79) M. Q. Feng and F. Y. Bahng: J. Struct. Eng. ASCE, 125 (1999), 265.
80) X. Xu, J. Evans and H. Vanderveldt: Modelling and Control of Joining Processes, ed. by T. Zacharia, American Welding Society, Florida, USA, (1993), 203.
81) C. Doyle and G. Fernando: Smart Materials and Structures, 7 (1998), 543.
82) M. E. Fisher and E. V. Hill: Mater. Eval., 56 (1998), 1395.
83) V. C. Patel, R. W. McClendon and J. W. Goodrum: Artificial Intelligence Applications, 10 (1996), 19.
84) N. Rai and R. Pitchumani: J. Mater. Process. Manuf. Sci., 6 (1997), 39.
85) S. M. Roberts, J. Kusiak, Y. L. Liu, A. Forcellese and P. J. Withers: J. Mater. Process. Technol., 80-1 (1998), 507.
86) W. Hwang, H. C. Park and K. S. Han: Mech. Compos. Mater., 34 (1998), 28.
87) S. Cardie and H. K. D. H. Bhadeshia: Mathematical Modelling of Weld Phenomena 4, ed. by H. Cerjak and H. K. D. H. Bhadeshia, The Institute of Materials, London, (1998), 235.
