
MODELING CONCRETE STRENGTH WITH

AUGMENT-NEURON NETWORKS

By I-Cheng Yeh¹

ABSTRACT: In this paper, a novel neural network architecture, the augment-neuron network, is proposed and
examined for its efficiency and accuracy in modeling concrete strength with seven factors (water/cement ratio,
water, cement, fine aggregate, coarse aggregate, maximum grain size, and age of testing). The architecture of
the augment-neuron network is that of a standard back-propagation neural network, but augment neurons, i.e.,
logarithm neurons and exponent neurons, are added to the input layer and the output layer of the network. Two
hundred examples were collected from actual experimental data from 15 sources. The system was trained based
on 100 training examples chosen randomly from the example set, and then tested using the remaining 100
examples. The results showed that the logarithm neurons and exponent neurons in the network provide an
enhanced network architecture that significantly improves the performance of these networks for modeling
concrete strength. A neural network-based concrete mix optimization methodology is proposed and is verified
to be a promising tool for mix optimization.

Downloaded from ascelibrary.org by Universidad Politecnica De Valencia on 06/26/15. Copyright ASCE. For personal use only; all rights reserved.

INTRODUCTION

Most research in material modeling aims to construct mathematical models to describe the relationship between components and material behavior. These models consist of mathematical rules and expressions that capture these varied and complex behaviors. Concrete is a highly nonlinear material, so modeling its behavior is a difficult task.

Artificial neural networks are a family of massively parallel architectures that solve difficult problems via the cooperation of highly interconnected but simple computing elements (or artificial neurons). Basically, the processing elements of a neural network are similar to neurons in the brain, which consist of many simple computational elements arranged in layers. Interest in neural networks has expanded rapidly in recent years. Much of the success of neural networks is due to such characteristics as nonlinear processing and parallel processing. In the past decade, considerable attention has been focused on the problem of applying neural networks in diverse fields, such as system modeling, fault diagnosis, and control. This is because neural networks offer the advantages of performance improvement through learning by using parallel processing. The neural network's performance can be measured by the speed of learning (efficiency) and generalization capability (accuracy) of these networks. The speed of learning can be expressed either as CPU time or as the number of epochs required for convergence of the network and thus can form the basis for comparison. There is at present no formal definition of what it means to generalize correctly, but the generalization capability of the network may be assessed based on how well it performs on the test data set.

The back-propagation algorithm is now recognized as a powerful tool in many neural-network applications. Most applications of neural networks are based on the back-propagation paradigm, which uses the gradient-descent method to minimize the error function (Rumelhart et al. 1986; Yeh et al. 1992, 1993). In the area of material modeling, Ghaboussi et al. (1991) modeled the behavior of concrete in the state of plane stress under monotonic biaxial loading and compressive uniaxial cycle loading with a back-propagation neural network. Their results look very promising. Brown et al. (1991) demonstrated the applicability of neural networks to composite material characterization. In their approach, a back-propagation neural network had been trained to accurately predict composite thermal and mechanical properties when provided with basic information concerning the environment, constituent materials, and component ratios used in the creation of the composite. Kasperkiewicz et al. (1995) demonstrated that the fuzzy-ARTMAP neural network can model strength properties of high-performance concrete mixes and optimize the concrete mixes.

A back-propagation neural network consists of a number of interconnected processing elements (artificial neurons). The elements are logically arranged into two or more layers, and interact with each other via weighted connections. These scalar weights determine the nature and strength of the influence between the interconnected elements. Each element is connected to all the neurons in the next layer. There is an input layer where data are presented to the neural network, and an output layer that holds the response of the network to the input. It is the intermediate layers (hidden layers) that enable these networks to represent the interaction between inputs as well as the nonlinear property between inputs and outputs. Traditionally, the learning process is used to determine proper interconnection weights, and the network is trained to make proper associations between the inputs and their corresponding outputs. Once trained, the network provides rapid mapping of a given input into the desired output quantities. A thorough treatment of neural network methodology is beyond the scope of this paper. The basic architecture of neural networks has been covered widely (Rumelhart et al. 1986; Yeh et al. 1992, 1993).

The basic strategy for developing a neural-based model of material behavior is to train a neural network on the results of a series of experiments on a material. If the experimental results contain the relevant information about the material behavior, then the trained neural network would contain sufficient information about the material behavior to qualify as a material model. Such a trained neural network not only would be able to reproduce the experimental results it was trained on, but through its generalization capability should be able to approximate the results of other experiments (Ghaboussi et al. 1991).

Although a back-propagation neural network can be better suited than an analytical approach for problems involving predicting the output response of a complex and nonlinear physical system to its inputs, the network's speed of learning is often unacceptably slow, or its generalization capability is often unsatisfactorily low, for solving highly nonlinear function mapping problems. To improve the efficiency and accuracy of

¹Assoc. Prof., Dept. of Civ. Engrg., Chung-Hua Univ., 30 Tung Shiang, Hsin Chu, Taiwan, 30067.
Note. Discussion open until April 1, 1999. To extend the closing date one month, a written request must be filed with the ASCE Manager of Journals. The manuscript for this paper was submitted for review and possible publication on September 20, 1996. This paper is part of the Journal of Materials in Civil Engineering, Vol. 10, No. 4, November, 1998. ©ASCE, ISSN 0899-1561/98/0004-0263-0268/$8.00 + $.50 per page. Paper No. 14196.

JOURNAL OF MATERIALS IN CIVIL ENGINEERING / NOVEMBER 1998 / 263

J. Mater. Civ. Eng. 1998.10:263-268.


the back-propagation neural networks, a number of variations of the back-propagation paradigm have been developed. While many researchers have been focusing on improving the back-propagation algorithm itself (Jacobs 1988; Cho and Kim 1991; Schultz 1991), very little attention has been given to network architecture for back-propagation networks (Gunaratnam and Gero 1994; Hajela and Berke 1991).

In this paper, a novel neural network architecture, the augment-neuron network, is proposed and examined for its efficiency and accuracy in modeling concrete strength. This network is a modification of the standard back-propagation neural network, where enhancements with logarithm neurons and exponent neurons in the input and output layers are used to improve the network's performance, including efficiency and accuracy.

The following four sections describe the augment-neuron network; examine the network in modeling the compressive strength of concrete, and give some remarks on this network; propose a neural network-based concrete mix optimization model; and give a summary and conclusions.

ARCHITECTURE OF AUGMENT-NEURON NETWORKS

Normalization of Data

Before the neural nets are trained, the input and output data must be normalized. In this paper, the normalization formula of input data is as follows:

Xnew = (Xold - mu) / (k * sigma)   (1)

where Xnew = normalized data of the input variable; Xold = original data of the input variable; mu = mean of the input variable; sigma = standard deviation of the input variable; and k = parameter that controls the mapping range. When the input variable is normally distributed and k is 1.96, 95% of the data of the input variable will be mapped into the [-1, 1] range. In this paper, 1.96 is adopted for this parameter.

The normalization formula of output data is as follows:

Ynew = (Yold - Ymin) / (Ymax - Ymin) * (Yd,max - Yd,min) + Yd,min   (2)

where Ynew = normalized data of the output variable; Yold = original data of the output variable; Ymin = minimum value of the output variable; Ymax = maximum value of the output variable; Yd,min = minimum desired value of the output variable; and Yd,max = maximum desired value of the output variable.

This formula can transfer the minimum value of the output variable into the minimum desired value, as well as transfer the maximum value of the output variable into the maximum desired value. In this approach, the minimum and maximum desired values of the output variable are set to 0.2 and 0.8, respectively.

Network Architecture

The architecture of the augment-neuron network is that of a standard back-propagation neural network, but logarithm neurons and exponent neurons are added to the input layer and the output layer of the network (Fig. 1).

FIG. 1. Architecture of Augment-Neuron Network [input layer, hidden layer, and output layer; legend: normal, logarithm, and exponent neurons]

The logarithm neuron in the input layer will receive the natural logarithm transformation of the corresponding input value of the training data with the following formula:

Ai = ln(1.175 * Xi + 1.543)   (3)

where Xi = ith input value of the training data; and Ai = output of the ith logarithm neuron in the input layer.

The exponent neuron in the input layer will receive the natural exponent transformation of the corresponding input value of the training data with the following formula:

Bi = 0.851 * exp(Xi) - 1.313   (4)

where Bi = output of the ith exponent neuron in the input layer.

Under (3) and (4), -1 will be transferred into -1 identically, and +1 will be transferred into +1 identically (Fig. 2).

FIG. 2. Transfer Function of Augment Neurons on Input Layer

The logarithm neuron in the output layer will receive the natural logarithm transformation of the corresponding output value of the training data with the following formula:

Cj = ln(1.718 * Yj + 1)   (5)

where Yj = jth output value of the training data; and Cj = output of the jth logarithm neuron in the output layer.

The exponent neuron in the output layer will receive the natural exponent transformation of the corresponding output value of the training data with the following formula:

Dj = exp(0.6931 * Yj) - 1   (6)

where Dj = output of the jth exponent neuron in the output layer.
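The normalization formulas and augment-neuron transfer functions above are simple enough to sketch directly. The Python below is an illustration only (function names are my own; the constants are those of Eqs. (1)-(6), with 1.718 ≈ e - 1 and 0.6931 ≈ ln 2):

```python
import math

def normalize_input(x_old, mean, std, k=1.96):
    """Eq. (1): map an input variable to roughly [-1, 1]."""
    return (x_old - mean) / (k * std)

def normalize_output(y_old, y_min, y_max, yd_min=0.2, yd_max=0.8):
    """Eq. (2): map [y_min, y_max] linearly onto the desired range [0.2, 0.8]."""
    return (y_old - y_min) / (y_max - y_min) * (yd_max - yd_min) + yd_min

def log_neuron_in(x):
    """Eq. (3): logarithm neuron on the input layer."""
    return math.log(1.175 * x + 1.543)

def exp_neuron_in(x):
    """Eq. (4): exponent neuron on the input layer."""
    return 0.851 * math.exp(x) - 1.313

def log_neuron_out(y):
    """Eq. (5): logarithm neuron on the output layer."""
    return math.log(1.718 * y + 1.0)

def exp_neuron_out(y):
    """Eq. (6): exponent neuron on the output layer."""
    return math.exp(0.6931 * y) - 1.0
```

Evaluating Eqs. (3) and (4) at x = -1 and x = +1, and Eqs. (5) and (6) at y = 0 and y = 1, confirms the identity mappings stated in the text: the augment neurons agree with the normal neurons at the ends of the normalized range and differ only in between.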


Under (5) and (6), 0 will be transferred into 0 identically, and 1 will be transferred into 1 identically (Fig. 3).

FIG. 3. Transfer Function of Augment Neurons on Output Layer [curves of Eqs. (5) and (6) over output values 0-1]

In addition to network architecture, the learning rule is the same as that of the standard back-propagation neural network; that is, the general delta rule (Rumelhart et al. 1986) will be employed to modify the connection weights of the network. In this architecture, the output neurons that receive the original output values will provide the reasoning output values.

Learning Rate and Momentum Factor

In this approach, the learning rate and the momentum factor of the general delta rule decay under the following formulas:

eta(t+1) = r_eta * eta(t) >= eta_min   (7)

alpha(t+1) = r_alpha * alpha(t) >= alpha_min   (8)

where eta = learning rate; r_eta = reduced factor of learning rate; eta_min = minimum bound of learning rate; alpha = momentum factor; r_alpha = reduced factor of momentum factor; and alpha_min = minimum bound of momentum factor.

MODELING OF CONCRETE STRENGTH

System Models

Compressive strength of concrete is a function of the following seven input features:

1. Water/cement ratio (WC)
2. Cement (C) (kg/m³)
3. Water (W) (kg/m³)
4. Fine aggregate (FA) (kg/m³)
5. Coarse aggregate (CA) (kg/m³)
6. Maximum grain size (MG) (mm)
7. Age of testing (T) (days)

Four models are considered in this approach:

• Model 1. Seven-inputs model

f'c = fMODEL1(WC, C, W, FA, CA, MG, T)   (9)

• Model 2. Six-inputs model

f'c = fMODEL2(C, W, FA, CA, MG, T)   (10)

• Model 3. Five-inputs model

f'c = fMODEL3(WC, FA, CA, MG, T)   (11)

• Model 4. Two-inputs model

f'c = fMODEL4(WC, T)   (12)

Two hundred examples were collected from actual experimental data from 15 sources (Lay 1993). Their ranges are listed in Table 1. From these examples, 100 are sampled randomly as training examples, and the remaining 100 are regarded as testing examples.

TABLE 1. Ranges of Experimental Data

Variable                      | Minimum | Maximum
Water/cement ratio            | 0.23    | 0.90
Water (kg/m³)                 | 153     | 314
Cement (kg/m³)                | 200     | 897
Fine aggregate (kg/m³)        | 613     | 945
Coarse aggregate (kg/m³)      | 879     | 1,136
Maximum aggregate size (mm)   | 12.7    | 25.4
Age of testing (day)          | 1       | 365
Compressive strength (MPa)    | 4.5     | 86.9

Network Parameters

The values of network parameters considered in this approach are as follows:

• Number of hidden layers = 1.
• Number of hidden units:
  1. Back-propagation network: 4 * (number of input variables + number of output variables).
  2. Augment-neuron network: 2 * (number of input variables + number of output variables).
• Learning rate: initial value = 5.0; reduced factor = 0.95; minimum value = 0.1.
• Momentum factor: initial value = 0.5; reduced factor = 0.95; minimum value = 0.1.
• Learning cycles = 3,000.

Training Results

The neural network's performance can be measured by accuracy and efficiency. To evaluate the network's accuracy, the root-mean-square (RMS) error is adopted. Table 2 shows the results of the four models obtained with the standard back-propagation neural network and the augment-neuron network, and verifies that the augment-neuron network is superior to the standard back-propagation neural network in generalization capability in all four models; model 1 is the best model. The predicted values of model 1 with the two networks compared with values actually observed in tests are shown in Figs. 4 and 5.

TABLE 2. Training Results of BPN and Augment-Neuron Networks

               | Back-Propagation Network          | Augment-Neuron Network
Strength model | RMS of train set | RMS of test set | RMS of train set | RMS of test set
               | (MPa)            | (MPa)           | (MPa)            | (MPa)
Model 1        | 3.80             | 4.13            | 2.95             | 3.69
Model 2        | 5.44             | 5.78            | 3.96             | 4.11
Model 3        | 5.41             | 5.78            | 3.82             | 4.48
Model 4        | 4.57             | 5.29            | 3.64             | 4.15

The efficiency varies with the number of training iterations the network must perform to reach a specific error limit. Fig. 6 shows the convergence histories of model 1 with the two networks, and demonstrates that the augment-neuron network is superior to the standard back-propagation neural network in
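The decay schedules of Eqs. (7) and (8), with the parameter values listed under Network Parameters (initial learning rate 5.0, initial momentum 0.5, reduced factor 0.95, minimum bound 0.1), can be sketched as follows. This is my own illustration of the geometric decay-with-floor rule, not code from the paper:

```python
def decayed(initial, factor, floor, cycles):
    """Geometric decay of a training parameter with a minimum bound,
    as in Eqs. (7)-(8): p(t+1) = max(factor * p(t), floor)."""
    p = initial
    history = [p]
    for _ in range(cycles):
        p = max(factor * p, floor)
        history.append(p)
    return history

# Learning rate: starts at 5.0, reduced by 0.95 each cycle, floored at 0.1
lr = decayed(5.0, 0.95, 0.1, 3000)
# Momentum factor: starts at 0.5, same reduced factor and minimum bound
mom = decayed(0.5, 0.95, 0.1, 3000)
```

With these values the learning rate reaches its minimum bound after roughly 77 cycles (5.0 * 0.95^77 < 0.1), so most of the 3,000 learning cycles run at the floor value.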


speed of learning, because the augment-neuron network takes fewer iterations (about 870) to reach the error limit, while the standard back-propagation neural network takes 3,000 iterations to reach it.

FIG. 4. Predicted Values of Strength of Concrete with Back-Propagation Network Based on Model 1 Compared with Actual Values

FIG. 5. Predicted Values of Strength of Concrete with Augment-Neuron Network Based on Model 1 Compared with Actual Values

FIG. 6. RMS Error Convergence Histories of Model 1 [BPN test vs. augment-neuron test, over 3,000 learning cycles]

Comparison with Other Neural Networks

To compare with other neural networks, the aforementioned model 1 (the seven-inputs model) is remodeled with the following well-known neural network paradigms:

1. Back-propagation neural network with delta-bar-delta algorithm (Jacobs 1988)
2. Radial basis function network (Moody and Darken 1989)
3. Modular neural network (Jacobs et al. 1991)
4. General regression neural network (Specht 1991)

Table 3 shows the results and verifies that the augment-neuron network is the most accurate neural network paradigm.

TABLE 3. Training Results Based on Various Neural Networks

Neural networks                        | RMS of train set (MPa) | RMS of test set (MPa)
Standard back-propagation              | 3.80                   | 4.13
Augment-neuron network                 | 2.95                   | 3.69
Back-propagation with delta-bar-delta  | 3.72                   | 4.11
Radial basis function network          | 4.17                   | 4.47
Modular neural network                 | 3.85                   | 4.22
General regression neural network      | 5.75                   | 6.65

Comparison with Statistical Techniques

To compare with statistical techniques, the aforementioned model 1 (the seven-inputs model) is remodeled with the following regression formulas:

f'c = 57.26 - 23.93*WC - 0.2653*W + 0.07864*C + 0.01319*FA + 0.006995*CA - 0.6253*MG + 0.09012*T   (13)

f'c = 145.32 * 0.185887^WC * 0.997669^W * 1.000875^C * 0.999677^FA * 0.999785^CA * 0.983425^MG * 1.003236^T   (14)

f'c = 1857.5 * WC^(-1.666) * W^0.7557 * C^(-0.499) * FA^(-0.m) * CA^(-0.345) * MG^0.0055 * T^0.2675   (15)

Table 4 shows the results and verifies that the augment-neuron network is the most accurate paradigm for modeling concrete strength.

TABLE 4. Training Results Based on Statistical Techniques

Statistical techniques           | RMS of train set (MPa) | RMS of test set (MPa)
Linear regression [Eq. (13)]     | 8.07                   | 6.99
Nonlinear regression [Eq. (14)]  | 9.41                   | 9.19
Nonlinear regression [Eq. (15)]  | 7.99                   | 7.95

Generation of C Code of Neural-Based Model

In this approach, after the neural network has been trained, C source code can be generated from the trained network with a transfer program. This code can conveniently be employed by end users.

TABLE 5. Data of Analysis

Variable                  | Case 1 | Case 2 | Case 3 | Case 4 | Case 5
Water/cement ratio        | 0.5    | 0.6    | 0.7    | 0.8    | 0.9
Water (kg/m³)             | 180    | 180    | 180    | 180    | 180
Cement (kg/m³)            | 360    | 300    | 257    | 225    | 200
Fine aggregate (kg/m³)    | 700    | 750    | 800    | 850    | 900
Coarse aggregate (kg/m³)  | 1,170  | 1,171  | 1,157  | 1,134  | 1,105
Note: Maximum aggregate size = 19.05 mm.
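As a sanity check on the statistical baselines, the linear regression of Eq. (13) can be evaluated directly. The Python below is my own illustration; the mix plugged in is Case 1 of Table 5 at 28 days with the 19.05-mm maximum aggregate size noted there:

```python
def strength_linear(wc, w, c, fa, ca, mg, t):
    """Linear regression of Eq. (13); returns f'c in MPa."""
    return (57.26 - 23.93 * wc - 0.2653 * w + 0.07864 * c
            + 0.01319 * fa + 0.006995 * ca - 0.6253 * mg + 0.09012 * t)

# Case 1 of Table 5: W/C = 0.5, W = 180, C = 360, FA = 700, CA = 1,170,
# maximum aggregate size 19.05 mm, tested at 28 days
fc = strength_linear(0.5, 180, 360, 700, 1170, 19.05, 28)
```

For this mix the fitted formula gives about 33.9 MPa. Note that Table 4 shows this fit carries an RMS test error of roughly 7 MPa, far looser than the augment-neuron network's 3.69 MPa, so such point estimates should be read accordingly.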


Evaluation of Neural Network-Based Model

To evaluate the neural network-based model of concrete strength, two diagrams are obtained based on the data listed in Table 5. Fig. 7 shows the water/cement ratio-strength curves under various ages (14, 28, 90, and 365 days). Fig. 8 shows the age-strength curves under various water/cement ratios (0.5, 0.6, 0.7, 0.8, and 0.9).

FIG. 7. Water/Cement Ratio and Strength of Concrete Curves

FIG. 8. Age and Strength of Concrete Curves

OPTIMIZATION OF CONCRETE MIX

Most of the design methods for concrete mixtures are based on a generalization of previous experience, available as tables or empirical formulas. In this approach, a concrete mix optimization methodology based on the neural network-based model of concrete strength is proposed as follows:

Find W, C, and CA

Minimize cost = Cw*W + Cc*C + CFA*FA + CCA*CA   (16)

Subject to

f'c = fMODEL1(WC, C, W, FA, CA, MG, T) >= f'c,d   (17)

Wmin <= W <= Wmax   (18)

Cmin <= C <= Cmax   (19)

FAmin <= FA <= FAmax   (20)

CAmin <= CA <= CAmax   (21)

FA = rho_FA * (1,000 - C/rho_C - W/rho_W - CA/rho_CA)   (22)

WCmin <= W/C <= WCmax   (23)

where Cw, Cc, CFA, CCA = unit price (NT$/kg) of water, cement, fine aggregate, and coarse aggregate; f'c,d = desired concrete strength (MPa); Wmin, Wmax, Cmin, Cmax, FAmin, FAmax, CAmin, CAmax, WCmin, and WCmax = minimum and maximum bounds (kg/m³) of water (W), cement (C), fine aggregate (FA), and coarse aggregate (CA), and of the water/cement ratio (WC); and rho_W, rho_C, rho_FA, rho_CA = specific gravity of water, cement, fine aggregate, and coarse aggregate.

In this approach, it is assumed that Cw = 0.01; Cc = 2.8; CFA = 0.264; CCA = 0.264; Wmin = 150; Wmax = 325; Cmin = 200; Cmax = 425; FAmin = 600; FAmax = 950; CAmin = 850; CAmax = 1,150; WCmin = 0.35; WCmax = 0.9; rho_W = 1.0; rho_C = 3.15; rho_FA = 2.65; rho_CA = 2.65; T (age of testing) = 28 days; and MG (maximum grain size) = 19.05 mm (3/4 in.).

Because this problem has only three independent design variables (W, C, and CA), a simple random search can get a near-optimum solution. Therefore, 20,000 random combinations of these three design variables (water, cement, and coarse aggregate) between the minimum and maximum bounds [(18), (19), and (21)] were generated, and fine aggregate was calculated with (22). These combinations were tested by (20) and (23) to generate feasible mix designs. Then their strengths were predicted with the augment-neuron network based on the seven-inputs model, and the results are shown in Fig. 9. The optimum mix design can be deduced with the filter [to satisfy (17)] and sort [to optimize (16)] functions of a commercial spreadsheet. Table 6 shows the optimum mix designs for various desired concrete strengths.

FIG. 9. Strength and Cost Scatter Diagram Based on 20,000 Random Samples (1 NT$ = $0.0290)

TABLE 6. Optimum Mix Based on 20,000 Random Samples

Desired        |       |         |         | Fine      | Coarse    | Predicted |
strength (MPa) | W/C   | Water   | Cement  | aggregate | aggregate | strength  | Cost
               |       | (kg/m³) | (kg/m³) | (kg/m³)   | (kg/m³)   | (MPa)     | (NT$/m³)
13.7           | 0.896 | 181.9   | 202.9   | 883.1     | 1,114.2   | 15.4      | 1,097.2
20.6           | 0.766 | 153.6   | 200.5   | 937.0     | 1,137.3   | 22.1      | 1,110.5
27.4           | 0.688 | 152.4   | 221.5   | 920.8     | 1,139.1   | 27.9      | 1,165.6
34.3           | 0.631 | 151.3   | 239.9   | 941.8     | 1,105.4   | 34.6      | 1,213.8
41.2           | 0.561 | 151.1   | 269.1   | 947.7     | 1,075.6   | 41.4      | 1,289.2
Note: 1 NT$ = $0.0290.

CONCLUSIONS

Concrete is a highly nonlinear material, so modeling its behavior is a difficult task. An artificial neural network is a good tool to model nonlinear systems. The major disadvantage of
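The random-search procedure of Eqs. (16)-(23) can be sketched in a few lines. In the sketch below (my own code), the strength predictor is a stand-in for the trained seven-inputs augment-neuron network — the linear fit of Eq. (13) is substituted only so the example is self-contained — so the resulting mixes are illustrative and will not reproduce Table 6. The prices, bounds, and specific gravities are those assumed in the text:

```python
import random

# Unit prices (NT$/kg), specific gravities, and fixed mix conditions from the text
PRICES = {"W": 0.01, "C": 2.8, "FA": 0.264, "CA": 0.264}
SG = {"W": 1.0, "C": 3.15, "FA": 2.65, "CA": 2.65}
T, MG = 28, 19.05  # age of testing (days), maximum grain size (mm)

def predict_strength(wc, w, c, fa, ca):
    # Stand-in for fMODEL1 (the trained augment-neuron network);
    # here the linear regression of Eq. (13) is used instead.
    return (57.26 - 23.93 * wc - 0.2653 * w + 0.07864 * c
            + 0.01319 * fa + 0.006995 * ca - 0.6253 * MG + 0.09012 * T)

def random_mix(rng):
    """One candidate: draw W, C, CA within the bounds of (18), (19), (21),
    then derive FA from the absolute-volume relation of Eq. (22)."""
    w = rng.uniform(150, 325)
    c = rng.uniform(200, 425)
    ca = rng.uniform(850, 1150)
    fa = SG["FA"] * (1000 - c / SG["C"] - w / SG["W"] - ca / SG["CA"])
    return w, c, fa, ca

def optimize(fc_desired, n=20000, seed=0):
    """Random search: keep mixes feasible under (20) and (23) that meet the
    desired strength (17); return the cheapest by the cost of Eq. (16)."""
    rng = random.Random(seed)
    best = None
    for _ in range(n):
        w, c, fa, ca = random_mix(rng)
        wc = w / c
        if not (600 <= fa <= 950 and 0.35 <= wc <= 0.9):   # (20), (23)
            continue
        if predict_strength(wc, w, c, fa, ca) < fc_desired:  # (17)
            continue
        cost = (PRICES["W"] * w + PRICES["C"] * c
                + PRICES["FA"] * fa + PRICES["CA"] * ca)     # (16)
        if best is None or cost < best[0]:
            best = (cost, w, c, fa, ca)
    return best
```

Calling `optimize(20.6)` returns the cheapest feasible mix found among the 20,000 samples; in the paper the same filter-and-sort step is performed on the network's predictions with a commercial spreadsheet.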


the neural-network approach is that it may take a very long time to tune the weights in the net to generate an accurate model for a complex and nonlinear system. In this paper, a novel network architecture, the augment-neuron network, is examined for its efficiency and accuracy in modeling highly complex and nonlinear concrete behavior. The results showed that the logarithm neurons and exponent neurons in the network provide an enhanced network architecture to improve the performance of these networks. A neural network-based concrete mix optimization methodology is proposed and is verified to be a promising tool for mix optimization.

From a physical point of view, the seven-input model [(9)] and the six-input model [(10)] are the same, because one of the parameters in the seven-input model, W/C, is made up of two other parameters, W and C. In other words, even in the seven-input model, there are only six independent physical parameters. Why do the two models give different results? The reason is that the six-input model must discover the dominant factor of concrete strength, W/C, by itself, so it needs more effort than the seven-input model. Although an artificial neural network can discover complex relations in data, previous experience and knowledge in a specific domain are very useful for building a more accurate model efficiently.

Like other data-fitting techniques, the neural network only possesses predictive capability within the range of data employed for model fitting. The range of applicability of the present work is limited to the ranges of the various parameters of the experimental data listed in Table 1.

Future efforts will be directed toward other components like superplasticizers, fly ash, and blast furnace slag. In addition, other factors that can significantly affect the strength of concrete, like its casting and curing temperature, will be considered.

APPENDIX I. REFERENCES

Brown, D. A., Murthy, P. L. N., and Berke, L. (1991). "Computational simulation of composite ply micromechanics using artificial neural networks." Microcomputers in Civ. Engrg., 6, 87-97.
Cho, S.-B., and Kim, J. H. (1991). "A fast back-propagation learning method using Aitken's process." Int. J. Neural Networks, 2(1), 37-47.
Ghaboussi, J., Garrett, J. H., and Wu, X. (1991). "Knowledge-based modeling of material behavior with neural networks." J. Engrg. Mech., ASCE, 117(1), 129-134.
Gunaratnam, D. J., and Gero, J. S. (1994). "Effect of representation on the performance of neural networks in structural engineering applications." Microcomputers in Civ. Engrg., 9, 97-108.
Hajela, P., and Berke, L. (1991). "Neurobiological computational models in structural analysis and design." Comp. and Struct., 41(4), 657-667.
Jacobs, R. A. (1988). "Increased rates of convergence through learning rate adaptation." Neural Networks, 1, 295-307.
Jacobs, R. A., Jordan, M. I., Nowlan, S. J., and Hinton, G. E. (1991). "Adaptive mixtures of local experts." Neural Computation, 3, 79-87.
Kasperkiewicz, J., Racz, J., and Dubrawski, A. (1995). "HPC strength prediction using artificial neural network." J. Computing in Civ. Engrg., ASCE, 9(4), 279-284.
Lay, H.-C. (1993). "Application of neural network learning models in predicting the strength of concrete material." Master's thesis, Dept. of Civ. Engrg., National Chiao Tung University, Hsin Chu, Taiwan (in Chinese).
Moody, J., and Darken, C. J. (1989). "Fast learning in networks of locally tuned processing units." Neural Computation, 1, 281-294.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). "Learning internal representation by error propagation." Parallel Distributed Processing, Vol. I, D. E. Rumelhart and J. L. McClelland, eds., MIT Press, Cambridge, Mass., 318-362.
Schultz, A. (1991). "Differentiating similar patterns using a weight-decay term." J. Neural Network Computing, Winter, 5-14.
Specht, D. F. (1991). "A general regression neural network." IEEE Trans. on Neural Networks, 2(6), 568-576.
Yeh, I.-C., Kuo, Y.-H., and Hsu, D.-S. (1992). "Building an expert system for debugging FEM input data with artificial neural networks." Expert Sys. with Applications, 5, 59-70.
Yeh, I.-C., Kuo, Y.-H., and Hsu, D.-S. (1993). "Building KBES for diagnosing PC pile with artificial neural network." J. Computing in Civ. Engrg., ASCE, 7(1), 71-93.

APPENDIX II. NOTATION

The following symbols are used in this paper:

C = amount of cement in mix;
CA = amount of coarse aggregate in mix;
Cw, Cc, CFA, CCA = unit price of water, cement, fine aggregate, and coarse aggregate;
FA = amount of fine aggregate in mix;
f'c = concrete strength;
f'c,d = desired concrete strength;
r_alpha = reduced factor of momentum factor;
r_eta = reduced factor of learning rate;
W = amount of water in mix;
alpha = momentum factor;
alpha_min = minimum bound of momentum factor;
eta = learning rate;
eta_min = minimum bound of learning rate; and
rho_W, rho_C, rho_FA, rho_CA = specific gravity of water, cement, fine aggregate, and coarse aggregate.