

State-of-the-Art Review

Artificial Intelligence and Parametric Construction Cost Estimate Modeling: State-of-the-Art Review

Haytham H. Elmousalami1

Abstract: This study reviews the common practices and procedures conducted to identify the cost drivers that the past literature has classified into two main categories: qualitative and quantitative procedures. In addition, the study reviews different computational intelligence (CI) techniques and ensemble methods conducted to develop practical cost prediction models. This study discusses the hybridization of these modeling techniques and the future trends for cost model development, limitations, and recommendations. The study focuses on reviewing the most common artificial intelligence (AI) techniques for cost modeling such as fuzzy logic (FL) models, artificial neural networks (ANNs), regression models, case-based reasoning (CBR), hybrid models, decision trees (DT), random forest (RF), support vector machines (SVM), AdaBoost, scalable boosting trees (XGBoost), and evolutionary computing (EC) such as the genetic algorithm (GA). Moreover, this paper provides the comprehensive knowledge needed to develop a reliable parametric cost model at the conceptual stage of the project. Additionally, field canals improvement projects (FCIPs) are used as an actual case study to analyze the performance of the ML models. Out of 20 AI techniques, the results showed that the most accurate and suitable method is XGBoost, with a mean absolute percentage error (MAPE) of 9.091% and an adjusted R2 of 0.929. Nonlinear adaptability, handling missing values and outliers, model interpretation, and uncertainty are discussed for the 20 developed AI models. In addition, this study presents a publicly open data set for FCIPs to be used for future model validation and analysis. DOI: 10.1061/(ASCE)CO.1943-7862.0001678. © 2019 American Society of Civil Engineers.

Author keywords: Artificial intelligence; Feature engineering; Ensemble methods; Hybrid intelligent systems; Fuzzy analytic hierarchy process; Genetic algorithm; Factor analysis; Fuzzy logic; XGBoost; Project cost modeling.

Introduction

Cost estimating is a primary part of construction projects, in which cost is considered as one of the major criteria in the projects' feasibility studies and decision making at the early stages of the project. The accuracy of estimation is a critical factor in the success of any construction project, for which cost overruns are a major problem, especially with the current emphasis on tight budgets. Indeed, cost overruns can lead to the cancellation of a project (Zhu et al. 2010; AACE 2004).

Subsequently, the cost of a construction project needs to be estimated within a specified accuracy range, but the largest obstacles standing in front of a cost estimate, particularly in the early stage, are lack of preliminary information and larger uncertainties as a result of engineering solutions. As such, to overcome this lack of detailed information, cost estimation techniques are used to approximate the cost within an acceptable accuracy range (AACE 2004; Chen 2002; Waty et al. 2018).

The American Association of Cost Engineers (AACE) defines five classes of cost estimates (AACE 2004). The first class is the detailed cost estimate at 65% of project design completion to check estimate of bid or tender based on detailed unit cost, for which accuracy ranges from −10% to +15%. The second class is the definitive cost estimate at 30% of project design completion to control of bid or tender based on detailed unit cost, for which accuracy ranges from −15% to +20%. The third class is the preliminary cost estimate at 10% of project design completion to budget authorization based on semi-detailed unit cost, for which accuracy ranges from −20% to +30%. The fourth class is the studying cost estimate at 1% of project design completion for feasibility studying based on the parametric model, for which accuracy ranges from −30% to +50%. The fifth class is the conceptual cost estimate at 0% of project design completion for conceptual screening based on capacity factored, judgment, analog, and parametric models, for which accuracy ranges from −50% to +100%. Conceptual estimating works as the main stage of project planning in which limited project information is available and high levels of uncertainty and risk exist. Moreover, the estimating should be completed within a limited time period. Therefore, the accurate conceptual cost estimate is a challenging task and a crucial process for cost engineers, project managers, and decision makers (Jrade 2000).

This research aims to review the common computational intelligence (CI) techniques used for parametric cost models and to highlight the future trends. An accurate cost estimate is a critical aspect of the project's success (AACE 2004; Hegazy 2014). At the conceptual stage, cost prediction models can be based on numerous techniques, including probabilistic techniques such as Monte Carlo simulation, and machine learning based techniques such as support vector machines (SVM) and artificial neural networks (ANNs) (Elfaki et al. 2014; Friedman et al. 2001). The object of selecting the optimal technique for cost modeling is to provide accurate results, minimize prediction errors, and provide a more reliable model.

1Project Engineer and Project Management Professional (PMP), General Petroleum Company, St. 8, Ash Sharekat, Nasr City, Cairo Governorate, Egypt; Fellow, Dept. of Energy Engineering, Technical Univ. Berlin, Campus El Gouna, Governorate 84513, Egypt. Email: Haythamelmousalami2014@gmail.com

Research Methodology

The objective of the study is to review and analyze the most common techniques and practices to develop a reliable parametric cost model.



In addition, the study emphasizes the important role of machine learning and computational intelligence that can effectively improve and automate cost modeling predictions. As shown in Fig. 1, the research methodology consists of three main phases. The first phase is reviewing the past literature. Past literature has been surveyed to identify the practices for cost drivers identification and cost prediction modeling in the construction industry. Many high-impact research journals have been reviewed, such as the Journal of Construction Engineering and Management, the Journal of Civil Engineering and Management, and Construction Management and Economics. More than 100 studies have been reviewed and analyzed, of which most papers were published from 2002 to 2018, to draw the studies' recommendations and conclusions.

Fig. 1. Research methodology.

According to the first phase, developing the parametric cost model can be divided into two main steps: cost drivers identification and computational intelligence modeling. Therefore, the scope of this study is reviewing the past practices used to identify the key parameters for parametric cost models and then reviewing the different intelligent models for cost prediction. The first part of the literature survey is reviewing the different approaches used for key parameters identification. The second part of the literature survey is reviewing the different intelligent models for cost prediction. The second phase of the research methodology is an analysis and discussion of the reviewed papers, whereas the third phase is developing the concluding remarks and future research trends. Thus, a methodology can be generalized to review similar and relevant studies as follows. Content analysis is one of the research methodologies to analyze the content of the selected texts. However, content analysis suffers from several disadvantages; it can be extremely time-consuming, information can be lost because of poor selection of categorization, and it may be prone to bias. Content analysis can be difficult to automate or computerize in cases that computationally need text mining techniques. Therefore, content analysis is subject to increased error. With complex texts, content analysis may provide poor performance in text interpretation and analysis. Quantitatively, content analysis tends often to simply consist of word counts. However, it needs a qualitative approach to correctly interpret the collected texts. This study has conducted a narrative review method (Green et al. 2006) to obtain a broad qualitative perspective on AI applications for construction cost prediction.

Poor identification of parameters means poor performance and accuracy of the parametric model. On the other hand, the optimal set of parameters produces the optimal performance of the developed model with less computation effort and fewer parameters needed to run the model (Kan 2002). As illustrated in Fig. 2, such procedures can be categorized into two main types: qualitative and quantitative. The qualitative procedure includes all practices that depend on the experts' questionnaire and gathering opinions. On the other hand, the quantitative procedure depends on the collected data, where statistical techniques are required to discover and learn the patterns of data to extract the knowledge based on the collected data (Hastie et al. 2009).

Fig. 2. Qualitative and quantitative procedure (qualitative: traditional Delphi method, Likert scale, fuzzy Delphi method, fuzzy analytic hierarchy process; quantitative: exploratory factor analysis, regression methods, correlation matrix, genetic algorithms, Boruta feature selection algorithm).

Key Cost Drivers Identification by Qualitative Procedures

Conceptual cost estimation mainly depends on the conceptual parameters of the project. Therefore, defining such parameters is the first and the most significant step in the cost model development. Identifying cost model parameters is the critical stage in model development in which poor identification of cost parameters decreases the model accuracy. Cost drivers can be determined based on a questionnaire survey or the collected data. The qualitative procedures for key parameters identification are dependent on experts' interviews and field surveys. Many approaches such as the traditional Delphi method, the Likert scale, the fuzzy Delphi method (FDM), and the fuzzy analytic hierarchy process (FAHP) have been conducted to select and evaluate the key cost drivers based on the viewpoints of experts.

Traditional Delphi Method and Likert Scale

The traditional Delphi method (TDM) is conducted to collect experts' opinions about a certain case. Based on the collected experts' opinions, all parameters affecting a system can be identified. The Delphi technique consists of several rounds for collecting, ranking, and revising the collected parameters. Therefore, experts are also asked to give their feedback and revise their opinions to enhance the quality of the survey. The Delphi rounds continue until no other opinions remain and a consensus is reached (Hsu and Sandford 2007). Therefore, the first step is to select the experts to be asked based on their experience. The second step is to prepare a list of questions to discover the knowledge and parameters of the proposed case study. The third step is to apply Delphi rounds, where all experts should be asked through interviews or their answers can be collected via emails. The fourth step is to collect all experts' answers and make a list of all the collected parameters. The fifth step is to ask experts again to assess and evaluate the parameters. Finally, the experts can revise their parameters and state the reasons for their rating (Hsu and Sandford 2007).

The Likert scale is a rating scale to represent the opinions of experts that can consist of three points, five points, or seven points. For example, a five-point Likert scale may include ratings of



"Extremely Important," "Important," "Moderately Important," "Unimportant," and "Extremely Unimportant," which the experts will select to answer the questions received (Bertram 2017).

Based on the completed survey forms, statistical indices can be calculated to gather the final rank for each question or criterion by experts' ratings. The mean score (MS) is used to gather the final rating for each criterion of the survey as in Eq. (1), whereas the standard error (SE) is calculated to check the sample size of experts as in Eq. (2):

$MS = \dfrac{\sum (f \times s)}{n}$  (1)

where MS = mean score to represent the impact of each parameter based on the respondents' answers; s = a score set to each parameter by the respondents; f = frequency of responses to each rating for each parameter; and n = total number of participants.

$SE = \dfrac{\sigma}{\sqrt{n}}$  (2)

where SE = standard error; $\sigma$ = standard deviation among participants' opinions for each cost parameter; and n = total number of participants. Thus, all parameters will be collected and ranked based on the experts' opinions.

Fuzzy Delphi Method

The fuzzy Delphi method (FDM) combines the traditional Delphi method and fuzzy theory (Ishikawa et al. 1993). Maintaining the fuzziness and uncertainty in participants' opinions is the advantage of this method over the traditional Delphi method. Instead of applying the experts' opinions as deterministic values, this method uses membership functions such as triangular, trapezoidal, or Gaussian functions to map the deterministic numbers to fuzzy numbers. Accordingly, the reliability and quality of the Delphi method are improved (Liu 2013). The objective of the FDM is to avoid misunderstanding of the experts' opinions and to make a good generalization of the experts' opinions.

The first step of FDM is collecting initial parameters affecting a proposed system, like the first round of TDM. The second step is to assess each parameter by fuzzy terms in which each linguistic term consists of three fuzzy values $(l, m, u)$ as shown in Fig. 3. For example, an unimportant term would be (0.00, 0.25, 0.50), and an important term would be (0.50, 0.75, 1.00). The third step applies triangular fuzzy numbers to handle the fuzziness of the experts' opinions, where the minimum of the experts' common consensuses is taken as point $L_{ij}$ and the maximum as point $U_{ij}$. This is illustrated in Eqs. (3)-(6) (Klir and Yuan 1995; Elmousalami et al. 2018a):

$L_j = \min(l_{ij}), \quad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, m$  (3)

$M_j = \left( \prod_{i=1}^{n} m_{ij} \right)^{1/n}, \quad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, m$  (4)

$U_j = \max(u_{ij}), \quad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, m$  (5)

$W_{ij} = (L_j, M_j, U_j)$  (6)

where i = an individual expert; j = the cost parameter; $L_{ij}$ = minimum of the experts' common consensuses; $M_{ij}$ = average of the experts' common consensuses; $U_{ij}$ = maximum of the experts' common consensuses; $L_j$ = opinions mean of the minimum of the experts' common consensuses ($L_{ij}$); $M_j$ = opinions mean of the average of the experts' common consensuses ($M_{ij}$); $U_j$ = opinions mean of the maximum of the experts' common consensuses ($U_{ij}$); $W_{ij}$ = fuzzy number of all experts' opinions; n = number of experts; and m = number of cost parameters.

The fourth step is using a simple center of gravity method to defuzzify the fuzzy weight $w_j$ of each parameter to develop the value $S_j$ by Eq. (7):

$S_j = \dfrac{L_j + M_j + U_j}{3}$  (7)

where $S_j$ = crisp number after the defuzzification process. Finally, the fifth step is that the experts provide a threshold $\alpha$ to select or delete the collected parameters as follows: if $S_j \geq \alpha$, the parameter should be selected; if $S_j < \alpha$, the parameter should be deleted, where $\alpha$ = defined threshold. The FDM can be summarized in the following steps (a short code sketch of Eqs. (3)-(7) follows):
1. Identify all possible variables affecting a proposed system.
2. Assess an evaluation score for each parameter by fuzzy terms.
3. Aggregate fuzzy numbers ($W_{ij}$).
4. Apply defuzzification ($S_j$).
5. Define a threshold ($\alpha$).

Fig. 3. Triangular fuzzy number.

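To illustrate Eqs. (3)-(7), the following is a minimal sketch (my own illustration, not from the original paper) of FDM aggregation and screening in Python; the expert ratings, the triangular scale, and the threshold value are illustrative assumptions.

```python
import numpy as np

# Illustrative linguistic scale mapped to triangular fuzzy numbers (l, m, u).
SCALE = {"unimportant": (0.00, 0.25, 0.50), "important": (0.50, 0.75, 1.00)}

def fdm_screen(ratings, alpha=0.7):
    """ratings: list over experts of lists over parameters of (l, m, u) tuples.
    Returns the defuzzified score S_j per parameter and a keep/drop mask."""
    r = np.asarray(ratings, dtype=float)                    # shape (n_experts, n_params, 3)
    L = r[:, :, 0].min(axis=0)                              # Eq. (3): minimum of lower bounds
    M = np.prod(r[:, :, 1], axis=0) ** (1.0 / r.shape[0])   # Eq. (4): geometric mean of middles
    U = r[:, :, 2].max(axis=0)                              # Eq. (5): maximum of upper bounds
    S = (L + M + U) / 3.0                                    # Eq. (7): center-of-gravity defuzzification
    return S, S >= alpha                                     # keep parameter j only if S_j >= alpha

# Two hypothetical experts rating three candidate cost drivers.
ratings = [
    [SCALE["important"], SCALE["unimportant"], SCALE["important"]],
    [SCALE["important"], SCALE["unimportant"], SCALE["unimportant"]],
]
scores, keep = fdm_screen(ratings, alpha=0.7)
print(scores.round(3), keep)
```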
Fuzzy Analytic Hierarchy Process

The analytic hierarchy process (AHP) is a decision-making approach to evaluate and rank the priorities among different alternatives and criteria (Saaty 1980; Vaidya and Kumar 2006). The conventional AHP cannot deal with the vague or imprecise nature of linguistic terms. Accordingly, Laarhoven and Pedrycz (1983) combined fuzzy theory and AHP to develop FAHP. In the traditional FAHP method, the deterministic values of AHP could be expressed by fuzzy values to apply uncertainty during decision making. The aim is to assess the most critical cost parameters determined by FDM.

In FAHP, linguistic terms have been applied in pairwise comparison that could be expressed by triangular fuzzy numbers (Srichetta and Thurachon 2012; Erensal et al. 2006). Triple triangular fuzzy set numbers $(l, m, u)$ are used as fuzzy values where $l \leq m \leq u$. Ma et al. (2010) and Elmousalami et al. (2018a) have applied the following steps:



1. Identify criteria and construct the hierarchical structure.
2. Set up pairwise comparative matrices and transfer linguistic terms to positive triangular fuzzy numbers by a linguistic scale of importance.
3. Generate group integration by Eq. (8).
4. Estimate the fuzzy weight.
5. Defuzzify the triangular fuzzy number into a crisp number.
6. Rank the defuzzified numbers.

Experts' opinions are used to construct the fuzzy pairwise comparison matrix to construct a fuzzy judgment matrix. After collecting the fuzzy judgment matrices from all experts using Eq. (8), these matrices can be aggregated by using the fuzzy geometric mean (Buckley 1985). The aggregated triangular fuzzy numbers of (n) decision makers' judgment in a certain case are $W_{ij} = (L_{ij}, M_{ij}, U_{ij})$ where, for example, civil (C), mechanical works (M), and electrical works (E) refer to three different criteria, respectively (Elmousalami et al. 2018a):

$W_{ijn} = \left( \prod_{n=1}^{n} l_{ijn}^{1/n}, \; \prod_{n=1}^{n} m_{ijn}^{1/n}, \; \prod_{n=1}^{n} u_{ijn}^{1/n} \right)$  (8)

where i = a criterion such as C, M, or E; j = screened cost parameter for a defined case study; n = number of experts; $L_{ij}$ = minimum of the experts' common consensuses; $M_{ij}$ = average of the experts' common consensuses; $U_{ij}$ = maximum of the experts' common consensuses; $L_j$ = opinions mean of the minimum of the experts' common consensuses ($L_{ij}$); $M_j$ = opinions mean of the average of the experts' common consensuses ($M_{ij}$); $U_j$ = opinions mean of the maximum of the experts' common consensuses ($U_{ij}$); and $W_{ijn}$ = aggregated triangular fuzzy numbers of the nth expert's view.

Based on the aggregated pairwise comparison matrix, the value of fuzzy synthetic extent $S_i$ with respect to the ith criterion can be computed by Eq. (9) by algebraic operations on triangular fuzzy numbers (Saaty 1994; Srichetta and Thurachon 2012):

$S_i = \sum_{j=1}^{m} W_{ij} \times \left[ \sum_{i=1}^{n} \sum_{j=1}^{m} W_{ij} \right]^{-1}$  (9)

where i = a criterion; j = screened parameter; $W_{ij}$ = aggregated triangular fuzzy numbers of the nth expert's view; and $S_i$ = value of fuzzy synthetic extent. Based on the fuzzy synthetic extent values, this study used Chang's method (Saaty 1980) to determine the degree of possibility by Eq. (10). Accordingly, the degree of possibility can assess and evaluate the system alternatives:

$V(S_m \geq S_c) = \begin{cases} 1, & \text{if } m_m \geq m_c \\ 0, & \text{if } l_c \geq u_m \\ \dfrac{l_c - u_m}{(m_m - u_m) - (m_c - l_c)}, & \text{otherwise} \end{cases}$  (10)

where $V(S_m \geq S_c)$ = degree of possibility between the (C) criterion and the (M) criterion; $(l_c, m_c, u_c)$ = fuzzy synthetic extent of the (C) criterion; and $(l_m, m_m, u_m)$ = fuzzy synthetic extent of the (M) criterion.
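A minimal sketch (my own illustration, not from the paper) of the group aggregation in Eq. (8) and the degree of possibility in Eq. (10) might look as follows; the expert judgment values are hypothetical.

```python
import numpy as np

def aggregate_fuzzy_judgments(judgments):
    """Fuzzy geometric mean of several experts' triangular judgments, Eq. (8).
    judgments: array of shape (n_experts, 3) holding (l, m, u) for one matrix cell."""
    j = np.asarray(judgments, dtype=float)
    n = j.shape[0]
    return np.prod(j, axis=0) ** (1.0 / n)        # element-wise geometric mean of l, m, u

def degree_of_possibility(S_m, S_c):
    """Chang's degree of possibility V(S_m >= S_c) for two triangular numbers, Eq. (10)."""
    l_m, m_m, u_m = S_m
    l_c, m_c, u_c = S_c
    if m_m >= m_c:
        return 1.0
    if l_c >= u_m:
        return 0.0
    return (l_c - u_m) / ((m_m - u_m) - (m_c - l_c))

# Three hypothetical experts comparing criterion C against criterion M.
cell = aggregate_fuzzy_judgments([(1, 2, 3), (2, 3, 4), (1, 1, 2)])
print(cell.round(3))
print(degree_of_possibility((0.2, 0.4, 0.6), (0.3, 0.5, 0.7)))
```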
Key Cost Drivers Identification by Quantitative Procedures

One of the key challenges of predictive modeling is the high dimensionality of data with small data size. Therefore, the dimensionality reduction technique should be equipped in the prediction model to simplify the model to a decision maker, to enable faster computation, and to provide high accuracy by removing noisy and redundant features. Moreover, Kohavi and John (1997) concluded that too many features, higher than optimal, diminish the model performance. As shown in Fig. 4, a feature selection algorithm can be categorized into three main categories, filter, wrapper, and embedded, according to their selection manners (Guyon and Elisseeff 2003). Filter algorithms rank features by evaluating correlation with independent variables such as Boruta feature analysis and principal component analysis. They do not optimize the prediction accuracy. Therefore, filter algorithms are often more computationally efficient than wrapper algorithms. However, the key limitation of the filter algorithms is not taking into consideration the effects of the selected features. Wrapper algorithms depend on an inductive algorithm for sample regression in an iterative manner. Wrapper algorithms include optimization based on the genetic algorithm (GA) or greedy approaches such as forward selection and backward elimination. However, wrapper algorithms can be prone to overfitting and high-cost computation. The embedded approach is a hybrid approach between filter algorithms and wrapper algorithms. The embedded approach depends on certain types of inductive algorithms such as SVM-based recursive feature elimination (RFE) (Lin et al. 2012).

Fig. 4. (a) Filter; (b) Wrapper; and (c) Embedded algorithms.

The objective of variables identification is to increase the model prediction accuracy and provide a better understanding of collected data (Guyon and Elisseeff 2003). Accurate cost drivers identification leads to the optimal performance of the developed cost model. Quantitative methods depend on the collected data, such as factor analysis, regression methods, and correlation methods, in which machine learning models can be applied and conducted to figure out patterns and relations in the collected data. Therefore, quantitative techniques can automatically identify the key cost drivers.

Factor Analysis

Factor analysis (FA) is a machine learning method to cluster correlated variables into a lower number of factors that is used to filter data and determine key parameters. Many types of factoring exist such as principal component analysis (PCA), canonical factor analysis, and image factoring (Polit and Beck 2012). The advantage of exploratory factor analysis (EFA) is to combine two or more variables into a single factor that reduces the number of variables. However, factor analysis cannot provide results' causality to interpret the factored data.

EFA is conducted by PCA to reduce the number of variables, as well as to understand the structure of a set of variables (Field 2009). The following questions should be answered before conducting EFA (a sketch of these checks in code follows the list):
1. How large does the sample need to be?
2. Is there multicollinearity or singularity?
3. What is the method of data extraction?
4. What is the number of factors to retain?
5. What is the method of factor rotation?
6. Should factor analysis or principal component analysis be used?
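As a rough illustration of questions 2 and 4 (my own sketch, not from the paper), the correlation-matrix checks and the eigenvalue-based retention rule can be computed directly from the data matrix; the DataFrame name and thresholds below are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def efa_preliminary_checks(df: pd.DataFrame, high_corr=0.90, kaiser_cut=1.0):
    """Screen a candidate-driver table before EFA/PCA."""
    corr = df.corr()
    det = np.linalg.det(corr.values)                     # singularity flag if < 0.00001
    too_high = [(a, b) for a in corr.columns for b in corr.columns
                if a < b and abs(corr.loc[a, b]) > high_corr]   # multicollinear pairs
    z = StandardScaler().fit_transform(df.values)
    eigenvalues = PCA().fit(z).explained_variance_       # component eigenvalues
    n_retain = int((eigenvalues > kaiser_cut).sum())     # Kaiser criterion: eigenvalue > 1
    return det, too_high, eigenvalues, n_retain

# Hypothetical data set of candidate cost drivers (columns) for past projects (rows).
df = pd.DataFrame(np.random.rand(120, 6),
                  columns=[f"driver_{k}" for k in range(6)])
det, pairs, eig, k = efa_preliminary_checks(df)
print(f"det={det:.6f}, collinear pairs={pairs}, retain {k} factors")
```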



Table 1. Survey of sample size for FA

Nunnally (1978): Sample size is 10 times the number of variables.
Kass and Tinsley (1979): Sample size is between 5 and 10 cases per variable.
Tabachnick and Fidell (2007): Sample size is at least 300 cases. 50 observations = very poor; 100 = poor; 200 = fair; 300 = good; 500 = very good; 1,000 or more = excellent.
Comrey and Lee (1992): Sample size can be classified to 300 as a good sample size, 100 as poor, and 1,000 as excellent.
Guadagnoli and Velicer (1988): A minimum sample size of 100-200 observations.
MacCallum et al. (1999): The minimum sample size depends on the design of the study, where a sample of 300 or more would probably provide a stable factor solution.
Kaiser (1970, 1974), and Hutcheson and Sofroniou (1999): Based on the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy (Kaiser 1970), values greater than 0.5 are barely acceptable (values below this should lead you to either collect more data or rethink which variables to include). Moreover, values between 0.5 and 0.7 are mediocre, values between 0.7 and 0.8 are good, values between 0.8 and 0.9 are great, and values above 0.9 are superb.
Kline (1999): The absolute minimum sample size required is 100 cases.

Sample Size

Factors obtained from small data sets cannot generalize as well as those derived from larger samples. Some researchers have suggested using the ratio of sample size to the number of variables. As illustrated in Table 1, this ratio can be 10 times the number of all variables (Nunnally 1978) or between 5 and 10 cases per variable (Kass and Tinsley 1979). For example, if the number of variables is ten, the sample size will be at least 100 observations (Nunnally 1978) or between 50 and 100 observations (Kass and Tinsley 1979).

Multicollinearity or Singularity

The first step is to check the correlation among variables and avoid multicollinearity and singularity (Tabachnick and Fidell 2007; Hays 1983). Multicollinearity means variables are correlated too highly, whereas singularity means variables are perfectly correlated (a correlation coefficient of 1 or −1). There are two methods for assessing multicollinearity or singularity. The first method depends on scanning the correlation matrix, whereas the second method depends on the determinant of the correlation matrix. The first method is conducted by scanning the correlation matrix among all independent variables to eliminate variables with correlation coefficients greater than 0.90 (Field 2009; Hays 1983) or correlation coefficients greater than 0.80 (Rockwell 1975). The second method is to scan the determinant of the correlation matrix. Multicollinearity or singularity may exist if the determinant of the correlation matrix is less than 0.00001; one simple heuristic is that the determinant of the correlation matrix should be greater than 0.00001 (Field 2009; Hays 1983). If the visual inspection reveals no substantial number of correlations greater than 0.3, PCA probably is not appropriate. Also, any variables that correlate with no others (R = 0) should be eliminated (Field 2009; Hays 1983).

Bartlett's test can be used to test the adequacy of the correlation matrix. It tests the null hypothesis that the correlation matrix is an identity matrix, where all the diagonal values are equal to 1 and all off-diagonal values are equal to 0. A significant test indicates that the correlation matrix is not an identity matrix; with a significance value less than 0.05, the null hypothesis can be rejected (Dziuban and Shirkey 1974).

According to factor extraction, factor (component) extraction is conducting EFA to determine the smallest number of components that can be used to represent interrelations among a set of variables (Tabachnick and Fidell 2007). Factors can be retained based on eigenvalues, where a scree plot diagram can be developed to retain factors (Cattell 1966). All factors that have eigenvalues greater than 1 can be retained (Kaiser 1960). On the other hand, Jolliffe (1972, 1986) recommended retaining factors that have eigenvalues more than 0.7.

According to factor rotation, two types of rotation exist: orthogonal rotation and oblique rotation (Field 2009). Orthogonal rotation can be varimax, quartimax, and equamax, whereas oblique rotation can be direct oblimin and promax. Accordingly, the resulting outputs depend on the selected rotation method. For a first analysis, the varimax rotation should be selected to easily interpret the factors, and this method can generally be conducted. The objective of the varimax is to maximize the loadings dispersion within factors and to load a smaller number of clusters (Field 2009). Stevens (2002) concludes that no difference between factor analysis and component analysis exists if there are 30 or more variables and communalities greater than 0.7 for all variables. On the other hand, a difference between factor analysis and component analysis exists if the variables are fewer than 20 and there are low communalities (<0.4).

Regression Methods for Key Cost Drivers Selection

Regression analysis can be used for both cost drivers selection and cost prediction modeling (Ratner 2010). Regression models can learn from the given data by adjusting the regression parameters to map a mathematical relationship based on the given data. The current study focuses on cost drivers selection. Therefore, the forward, backward, and stepwise methods are reviewed as follows. Forward selection initiates with no variables in the model, where each added variable is tested by a comparison criterion to improve the model performance (Wilkinson and Dallal 1981). If the independent variable significantly improves the ability of the model to predict the dependent variable, then this predictor is retained in the model and the method searches for a second independent variable (Field 2009; Draper and Smith 1998).

Backward selection is the opposite of the forward method. In this method, all input independent variables are initially selected, and then the most unimportant independent variables are eliminated one by one based on the significance value of the t-test for each variable. The contribution of the remaining variables is then reassessed (Field 2009; Draper and Smith 1998).

Stepwise selection is an extension of the forward selection approach, in which input variables may be removed at any subsequent iteration (Field 2009; Draper and Smith 1998). Unlike forward selection, stepwise selection tests at each step for variables to be included or excluded; stepwise is thus a combination of the backward and forward methods (Flom and Cassell 2007).
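A minimal sketch of forward and backward selection using scikit-learn (an illustration under assumed data, not the procedure used in any of the reviewed studies):

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 8))                     # 80 hypothetical projects, 8 candidate drivers
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=80)   # cost depends on drivers 0 and 3

for direction in ("forward", "backward"):
    selector = SequentialFeatureSelector(LinearRegression(),
                                         n_features_to_select=2,
                                         direction=direction, cv=5)
    selector.fit(X, y)
    print(direction, np.flatnonzero(selector.get_support()))    # indices of retained drivers
```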



Correlation Method

The relations among all variables are shown in the correlation matrix; the aim is to screen variables based only on the correlation matrix. Therefore, all independent variables that are highly correlated with each other are eliminated (R ≥ 0.8), and all independent variables that are weakly correlated with the dependent variable (R ≤ 0.3) are eliminated. Such an approach depends on the hypothesis that a relevant input independent variable is highly correlated with the output dependent variable and less correlated with the other input independent variables in the input subset (Ozdemir et al. 2001).

Pearson correlation is a measure of the linear correlation between two variables, giving a value between +1 and −1, where 1, 0, and −1 mean positive correlation, no correlation, and negative correlation, respectively. It was developed by Karl Pearson as a measure of the degree of linear dependence between two variables (Field 2009). On the other hand, Spearman correlation is a nonparametric measure of statistical dependence between two variables using a monotonic function. A perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other (Field 2009).
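The screening rule above can be written directly against a correlation matrix; the following is a small illustrative sketch (the thresholds and column names are assumptions, not from the paper):

```python
import pandas as pd

def correlation_screen(df: pd.DataFrame, target: str,
                       redundant=0.8, weak=0.3, method="pearson"):
    """Drop drivers weakly related to the target or redundant with a stronger driver."""
    corr = df.corr(method=method)
    relevance = corr[target].drop(target).abs()
    keep = [c for c in relevance.index if relevance[c] > weak]   # drop weak drivers (|R| <= 0.3)
    selected = []
    for c in sorted(keep, key=lambda c: -relevance[c]):          # strongest drivers first
        if all(abs(corr.loc[c, s]) < redundant for s in selected):
            selected.append(c)                                   # drop redundant drivers (|R| >= 0.8)
    return selected

# Usage with a hypothetical table where 'cost' is the dependent variable:
# drivers = correlation_screen(projects_df, target="cost")
```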
Genetic Algorithm for Key Cost Drivers Selection

Evolutionary computing (EC) is a set of natural selection-inspired methods based on evolutionary theory (Darwin 1859), such as the genetic algorithm (GA) (Holland 1975). The genetic algorithm is an evolutionary algorithm (EA) used for search and optimization based on a fitness function (Siddique and Adeli 2013). GA can be applied to select the input parameters of a prediction machine learning model such as an artificial neural network (ANN). All irrelevant, redundant, and useless parameters can be removed to reduce the size of the ANN. The chromosome can be represented in a binary-coded system in which the number of bits in the chromosome string equals the number of input variables. This approach was proposed by Kohavi and John (1997) and by Siedlecki and Sklansky (1988, 1989) and is called the wrapper approach to screen input variables (features).

The genetic information can be represented as chromosomes; it is a powerful tool for optimization and search problems. Chromosome representation is the first design step of the EC, in which a chromosome is the possible candidate solution for searching and optimization purposes. The gene is the functional unit of inheritance, and any chromosome is expressed by a number of genes. A chromosome can be represented as a vector (C) consisting of (n) genes denoted by ($c_n$) as follows: $C = \{c_1, c_2, c_3, \ldots, c_n\}$. Each chromosome (C) represents a point in the n-dimensional search space.

The first task in chromosome construction is to encode the genetic information. Binary coding is commonly one of the most used chromosome representations, in which ($b_i$) is a binary value equal to zero or one and X is a search space of chromosomes as follows:

$X = \{(b_1, b_2, \ldots, b_l), (b_1, b_2, \ldots, b_l), \ldots\}, \quad b_i \in \{0, 1\}$

A fitness function ($FC_i$) is a problem-dependent function that guides the search model to converge and get the optimal solution. $FC_i$ is applied to evaluate the fitness of each chromosome to select the best subset of chromosomes for crossover and mutation processes. The crossover process is intended to produce new generations and solutions, whereas the mutation process is intended to randomly escape from converging to a local minimum solution. As illustrated in the following example, where two chromosomes A and B are composed of seven genes, Offspring 1 and Offspring 2 are produced by a crossover of the first chromosome and the second chromosome, where the one-point crossover is applied at the third gene of the chromosomes. The next process is mutation, in which the sixth gene of the second offspring is mutated to the value one. An example is given in Table 2 (a code sketch of these two operators appears after Fig. 5).

Table 2. Genetic algorithm (GA) processes
Crossover: Chromosome A 1011001; Chromosome B 1111111; Offspring 1 1011111; Offspring 2 1111001
Mutation: Offspring 1 1011111; Offspring 2 1111011

Relative fitness is the fitness function for each chromosome, in which relative fitness is the criterion to select the next generation of chromosomes. Genetic operators are selection processes of the fittest chromosomes and then conducting crossover and mutation processes subsequently. Many selection approaches exist such as random selection, proportional selection (roulette wheel selection), tournament selection, and rank-based selection. The main steps required to develop an optimization problem by EC are:
1. chromosome representation,
2. an initial population representation,
3. definition of the fitness function as a chromosome selection criterion, and
4. determination of EA parameter values such as population size and the maximum number of generations.

As shown in Fig. 5, data can be screened to key cost drivers in which each chromosome represents a possible solution for input parameters.

Fig. 5. GA for cost driver identification.
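The crossover and mutation operations in Table 2 can be reproduced with a few lines of code; this is an illustrative sketch only (the bit strings are the ones from the table).

```python
def one_point_crossover(a: str, b: str, point: int):
    """Swap the tails of two binary chromosomes after the given gene position."""
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(chrom: str, position: int):
    """Flip a single gene (0 -> 1 or 1 -> 0) at the given zero-based position."""
    flipped = "1" if chrom[position] == "0" else "0"
    return chrom[:position] + flipped + chrom[position + 1:]

off1, off2 = one_point_crossover("1011001", "1111111", point=3)
print(off1, off2)              # 1011111 1111001, as in Table 2
print(mutate(off2, 5))         # 1111011: sixth gene of Offspring 2 flipped to 1
```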



The chromosome consists of a binary gene for which one represents the existence of a parameter and zero represents the absence of the parameter. Each gene in the chromosome is associated with an input feature for which the value of one represents the input feature existence and the value of zero represents the elimination of this variable. Thus, the number of ones in a chromosome is the number of the screened variables by GA. Chromosomes build a population of a set of possible solutions ($S_i$). The objective of EA is to select the best subset of parameters (P) based on a fitness function (F) that inherently minimizes the total system error. The fitness function is minimizing ANNs' prediction error. The main disadvantage of this methodology is a high computational effort (Siddique and Adeli 2013).

The fitness function is the guide to EA to converge, and wrong fitness function formulation means false search and inaccurate optimization. The fitness function can be formulated in terms of a minimum number of selected ANNs features, maximum accuracy, and minimized computational cost (Yang and Honavar 1998; Ozdemir et al. 2001).
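The wrapper idea can be sketched in a few lines; the following illustrative GA (my own simplification, with a linear model standing in for the ANN and all hyperparameters assumed) evolves binary masks over the candidate drivers and uses cross-validated error as the fitness function.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def fitness(mask, X, y):
    """Negative MSE of the surrogate model trained on the selected drivers only."""
    if mask.sum() == 0:
        return -np.inf
    return cross_val_score(LinearRegression(), X[:, mask == 1], y,
                           scoring="neg_mean_squared_error", cv=3).mean()

def ga_feature_selection(X, y, pop_size=20, generations=30, p_mut=0.1):
    n = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n))          # random binary chromosomes
    for _ in range(generations):
        fit = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(fit)][-pop_size // 2:]   # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                       # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n) < p_mut                   # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents] + children)
    best = pop[np.argmax([fitness(ind, X, y) for ind in pop])]
    return np.flatnonzero(best)                            # indices of retained cost drivers

# X, y would be the historical project drivers and costs, e.g.:
# selected = ga_feature_selection(X, y)
```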
Boruta Feature Selection Algorithm

Cao et al. (2018) have conducted a Boruta feature analysis to rank 57 related variables and filter them to 20 features. The Boruta algorithm is an ensemble based on multiple decision trees voting, for which the computed votes are applied to rank the importance of the variables. The Boruta feature selection algorithm selects the features with the greatest potential for increasing the prediction accuracy (Kursa and Rudnicki 2010). Unimportant features are identified based on a two-tailed hypothesis test. The procedure of the Boruta feature selection algorithm is as follows (a sketch in code follows the list):
1. The Boruta feature selection algorithm identifies shadow variables that have no correlation with the target variable.
2. Each explanatory variable has shadow variables, in which a new data set is formulated based on both the original explanatory variables and the shadow variables.
3. A random forest algorithm is conducted for fitting the resulting data set.
4. The z-score of each feature (the original and the shadow) is computed, in which the z-score is the average loss in accuracy divided by the standard deviation.
5. The original attributes that have higher z-scores than the shadow attributes are retained as important features of the given data set.
6. Steps 1-5 are repeated iteratively several times until all original attributes that are significantly higher than the maximum shadow variable are retained.
7. Features that are significantly below the maximum z-score of the shadow variables are recorded as unimportant.
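A simplified sketch of the shadow-feature idea (steps 1-5) is shown below; it is an illustration only and omits the iterative statistical test of the full algorithm (packaged implementations such as BorutaPy exist for production use).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def shadow_feature_screen(X, y, n_rounds=20, random_state=0):
    """Keep features whose importance beats the best shuffled (shadow) copy."""
    rng = np.random.default_rng(random_state)
    hits = np.zeros(X.shape[1])
    for _ in range(n_rounds):
        shadows = rng.permuted(X, axis=0)            # shadow variables: shuffled copies
        both = np.hstack([X, shadows])               # original + shadow data set
        rf = RandomForestRegressor(n_estimators=200, random_state=random_state)
        rf.fit(both, y)
        imp = rf.feature_importances_
        best_shadow = imp[X.shape[1]:].max()         # strongest shadow importance
        hits += imp[:X.shape[1]] > best_shadow       # count wins over the best shadow
    return hits / n_rounds                           # fraction of rounds each driver wins

# keep = shadow_feature_screen(X, y) > 0.8           # e.g., retain drivers winning 80% of rounds
```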
Review Survey and Discussion of Key Cost Drivers Identification

Past literature has been surveyed to identify the practices for cost drivers identification in the construction industry. Table 3 presents 37 papers to review the key cost drivers techniques; most papers were published from 2002 to 2018. For example, Petroutsatou et al. (2012) applied a questionnaire survey to determine the basic parameters affecting the final construction cost. The variables were then classified into independent variables and dependent variables. Accordingly, the data have been collected and analyzed for ANN model development. Similarly, ElSawy et al. (2011) have conducted a questionnaire survey to select the 10 most influential key cost drivers out of 52 factors of building construction in Egypt based on the experts' opinions. Consequently, ElSawy et al. (2011) have used the selected 10 factors to develop an ANN in which the selected 10 factors represent the 10 neurons in the first layer of the ANN model. Table 3 provides a structured review of 37 papers that can be used effectively in future research, in which the reviewed papers can be categorized by the method used and year of publication. In addition, the table summarizes the paper contributions in a high-level description. Based on Table 3, the questionnaire survey approach is the most common approach conducted to identify and assess the cost drivers of a certain case study. Therefore, the qualitative approach is more common than quantitative methods. Such a claim can be a result of unavailability of data for the studied cases. Moreover, asking experts is a simple approach for which no deep statistical knowledge is needed. On the other hand, the data-driven procedures require understanding and conducting statistical methods.

As shown in Fig. 6, based on the literature survey, FAHP is the most commonly used technique. However, traditional techniques have many advantages such as identifying divergence of opinions among participants and sharing of knowledge and reasoning among participants. In addition, rounds enable participants to review, reevaluate, and revise all their previous opinions. Moreover, these methods require simple calculations. However, the major disadvantage of the traditional techniques is their inability to maintain uncertainty among different participants' opinions. Accordingly, information extracted from a selected group of experts may not be representative. Alternatively, the advanced techniques such as FDM and FAHP have applied fuzzy methods to maintain uncertainty among participants' opinions, and this feature is the major advantage of the advanced techniques. However, the advanced techniques require more calculations and statistical forms to be conducted. Generally, qualitative techniques such as the Delphi method, FDM, and FAHP are time-consuming because they require collecting experts' opinions, interviews, and meetings. In contrast, quantitative techniques need only the collected data to automatically explore the key cost drivers.

On the one hand, the most common qualitative methods are questionnaire surveys and FAHP. On the other hand, the most common quantitative methods are regression methods and factor analysis. The key advantage of the quantitative methods is that they depend on machine learning models that can learn from iterations and automatically identify accurate key cost drivers based on the given data. The key disadvantage of the quantitative methods is related to data availability, which controls the applicability of the machine learning.

Cost Estimating Methods

According to estimating methods, top-down and bottom-up approaches are the main two approaches to the cost estimate. On the one hand, the top-down approach occurs in the conceptual phase and depends on the historical cost data, from which the data from similar projects are retrieved to estimate the current project. On the other hand, the bottom-up approach requires detailed information about the studied project. First, all projects are divided into items to create a cost breakdown structure (CBS). The main items of a CBS are dependent on the number of resources (labor, equipment, materials, and subcontractors). The next step is to calculate the cost of each broken item and sum up the total construction cost (AACE 2004).

Cost estimating methods have been classified into four types (Dell'Isola 2002): single-unit rate methods, parametric cost modeling, elemental cost analysis, and quantity survey.



Table 3. Review of cost drivers identification
Reference Method Key findings
ElSawy et al. (2011) Questionnaire survey Based on 52 factors of building construction in Egypt, 10 cost drivers have been selected by a
questionnaire survey of experts for an ANN cost model.
Emsley et al. (2002) Questionnaire survey + factor Based on 300 building projects, FL is investigated to select the key cost drivers to be used by
analysis ANNs and regression models.
El-Sawah and Trial and error approach Based on trial and error approach and combination of input variables, ANN models have been
Moselhi (2014) built for cost prediction of steel building and timber bridge.
Moselhi and Questionnaire survey A questionnaire survey has been conducted to discover the input variables for ANNs for
Hegazy (1993) markup estimation.
Attalla and Hegazy Questionnaire survey A questionnaire survey has been operated to identify the input variables for ANNs for cost
(2003) deviation; 36 factors have been identified.
El-Sawalhi and Questionnaire survey Eighty questionnaires have been conducted to determine significant variables for cost
Shehatto (2014) prediction for the building project.

Petroutsatou et al. Questionnaire survey A questionnaire survey has been conducted to determine significant parameters for an ANN
(2012) cost prediction model for tunnel construction in Greece.
Williams (2002) Regression methods Based on bidding data, the stepwise regression method has been utilized to check the
significance of each parameter and select the key cost drivers for the regression model.
El Sawalhi (2012) Questionnaire survey Both a questionnaire survey and relative index ranking technique have been conducted
to investigate and rank the factors affecting the cost of building construction for the fuzzy
logic model.
Knight and Fayek Questionnaire survey Based on past related literature and interview surveys, all parameters affecting cost overruns
(2002) for building projects have been identified and ranked for the fuzzy logic model.
Choi et al. (2014) Questionnaire survey Based on a questionnaire survey, attributes of the road construction project have been
identified.
Alroomi et al. Questionnaire survey + factor Based on 228 completed questionnaires, all relevant cost data of competencies have been
(2012) analysis collected by experts, whereas the factor analysis has been conducted to investigate the
correlation effects of the estimating competencies.
Kim (2013) Questionnaire survey + factor Based on a questionnaire survey and factor analysis, all parameters affecting best practices of
analysis infrastructure projects have been identified and ranked.
Manoliadis et al. FAHP Based on qualifications survey, FDM is conducted to assess bidders’ suitability for improving
(2009) bidder selection.
Pan (2008) FAHP Fuzzy AHP is conducted to provide the vagueness and uncertainty for selecting a
bridge construction method. Therefore, FAHP obtains more reliable results than the
conventional AHP.
Marzouk and Questionnaire survey A questionnaire survey has been conducted to identify and evaluate 14 parameters affecting
Ahmed (2011) the construction costs of pump station projects.
Liu (2013) FDM + FAHP Both FDM and FAHP were conducted to evaluate and filter all factors affecting indicators of
managerial competence.
Saaty (2008) AHP For a countless application, AHP has been conducted as a powerful decision-making
procedure among different criteria and alternatives.
Laarhoven and FAHP AHP and fuzzy theory were combined to produce FAHP in which the objective was to evaluate
Pedrycz (1983) the most important cost parameters.
Ma et al. (2010) FAHP FAHP was conducted for pile-type selection based on the collected field factors where the
fuzzy AHP approach produces an efficient performance for pile-type selection.
Erensal et al. (2006) FAHP FAHP was conducted for evaluating key parameters in technology management.
Srichetta and FAHP FAHP was conducted for evaluating notebook computer products.
Thurachon (2012)
Hsu et al. (2010) FDM + FAHP This study utilized two processes of selection and decision making. FDM is the first process to
identify the most important factors, whereas the second process is FAHP to identify the
importance of each factor.
Marzouk and Elkadi Questionnaire survey + FA EFA was conducted to select the cost drivers of water treatment plants in which a total of
(2016) 33 variables were reduced to four components. Such components are used as inputs to the
ANN model.
Woldesenbet and FA Based on the collected roadway project data, factor analysis of a covariance and correlation
Jeong (2012) matrix have been investigated to identify critical factors of the project.
Park and Kwon Questionnaire survey + FA Both questionnaire and FA have been investigated to discover the critical success factors for
(2011) infrastructure projects in Korea.
Akintoye (2000) FA Seven factors out of 24 factors influencing contractors’ cost estimating have been selected
by FA.
Stoy et al. (2012) Regression method Backward regression method has been computed to determine key cost drivers based on a total
of 75 residential projects.
Lowe et al. (2006) Regression method Based on 286 sets of data collected in the United Kingdom, both forward and backward
stepwise regression have been used to develop six parametric cost models.
Yang (2005) Correlation method Correlation matrix should be scanned to reduce variable and to detect redundant variables.
Ranasinghe (2000) Correlation method This study presents induced correlation concept to analyze input cost variables for residential
building projects in Germany.



Table 3. (Continued.)
Reference Method Key findings
Stoy et al. (2008); Questionnaire survey + regression Based on 70 residential properties in Germany, Stoy et al. used a regression method to select
Stoy and Schalcher method cost drivers.
(2007)
Kim et al. (2005) GAs for parameter selection This study has built three cost ANN models by back-propagation (BP) algorithm, GA for
optimizing ANNs weights, and GA for parameters optimization of the BP algorithm.
Optimizing parameters of the BP algorithm produces better results.
Xu et al. (2015) GA + correlation method A correlation method was used to rank model features, whereas GA was used for selecting the
optimal subset of features for the model.
Elmousalami et al. FDM + FAHP + traditional This study has compared the traditional Delphi method, FDM, and FAHP to evaluate and
(2018a) Delphi method select the key cost drivers of field canals improvement projects in Egypt.

Fig. 6. Cost drivers identification methods (share of reviewed studies: literature survey 24%; questionnaire survey + FAHP 24%; regression methods 12%; FDM + FAHP 8%; GA 8%; factor analysis 8%; correlation method 8%; AHP 4%; trial and error approach 4%).

Single-unit rate methods calculate the total cost of the project based on a unit such as the area of the building or an accommodation method such as cost per bed for hotels or hospitals. Parametric cost modeling is to develop a model based on statistical relations of the key parameters extracted by conducting qualitative techniques (Elmousalami et al. 2018a) or statistical analyses such as regression models, ANNs, and FL models. Elemental cost analysis is dividing the project into the main elements and estimating the cost of each element based on historical data. A quantity survey is a detailed cost estimate based on quantities surveyed and contract unit cost rates, in which such quantities include the resources used such as materials, labor, and equipment for each activity. Therefore, estimators usually apply the single-unit rate method or parametric cost estimate at the conceptual stage where no detailed information of the project is available.

The estimating process consists of six main elements (Kan 2002): project information, historical data, current data, estimating methodology, cost estimator, and estimates. Project information includes the project characteristics that can be used as inputs to the cost model. Historical data are the collected data of the previous projects to statistically develop the cost model. Current data are the data extracted from the project information such as unit cost rates of material, labor, and equipment. Estimating methodology is the method used for the cost estimate, such as a parametric cost model. The cost estimator is the user who uses the cost model and enters the input parameters or data to obtain the cost estimate. The estimates are the outputs of the cost model.

Computational Intelligence

Computational intelligence (CI) techniques are aspects of human knowledge and make computations adaptively to become more vigorous in system modeling than classical mathematical modeling (Bezdek 1994). Based on CI, an intelligent system can be developed to produce consequent outputs and actions depending on the observed inputs and outputs of the system (Siddique and Adeli 2013).

The objective is to solve complex real-world problems based on data analytics such as classification, regression, prediction, and optimization in an uncertain environment. The core advantage of the intelligent systems is their human-like capability to make decisions that depend on information with uncertainty and imprecision. The basic approaches to computational intelligence are fuzzy logic (FL), artificial neural networks (ANNs), and evolutionary computing (EC). Accordingly, CI is a combination of FL, neurocomputing, and EC (Engelbrecht 2002). The scope of this study focuses on the three methodologies of computational intelligence (FL, ANNs, and EC) and their fusion.

Multiple Regression Analysis

Multiple regression analysis (MRA) is a statistical analysis that uses given data for prediction applications. Based on historical cases, regression analysis develops a mathematical form to fit the given data (Field 2009; Walker 1989). This mathematical form can be formulated as Eq. (11):

$Y = B_0 + B_1 X_1 + B_2 X_2 + \cdots + B_n X_n$  (11)

where Y = dependent variable; $B_0$ = constant; $B_i$ = variable coefficient; and $X_i$ = independent variables. The change by one unit of the independent variable $X_1$ causes a change by $B_1$ in the dependent variable Y. Similarly, the change by one unit of the independent variable $X_2$ causes a change by $B_2$ in the dependent variable Y. In addition, the sign of $B_1$ and $B_2$ determines the decrease or increase in the dependent variable Y. The objective of the regression model is to mathematically represent data with minimal prediction error. Therefore, regression analysis is applied in cost estimate modeling to represent the cost-estimate relationships, where the cost prediction is represented as the dependent variable and the cost drivers are represented as the independent variables.
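The following is a minimal, illustrative sketch of fitting Eq. (11) with statsmodels and computing the diagnostics discussed below (VIF, Durbin-Watson, and Cook's distance); the variable names are placeholders rather than the FCIP case-study fields.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

def fit_cost_regression(drivers: pd.DataFrame, cost: pd.Series):
    X = sm.add_constant(drivers)                       # adds the intercept B0 of Eq. (11)
    model = sm.OLS(cost, X).fit()
    vif = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
    dw = durbin_watson(model.resid)                    # ~2 means uncorrelated residuals
    cooks = model.get_influence().cooks_distance[0]    # values > 1 flag influential cases
    return model, vif, dw, cooks

# Hypothetical usage:
# model, vif, dw, cooks = fit_cost_regression(projects[["area", "floors"]], projects["cost"])
# print(model.summary())
```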



based on powers of the independent variables. Mathematically, the polynomial regression model can be formulated as Eq. (12)

Y = B0 + B1Xi + B2Xi^2 + … + BnXi^K + ei,  for i = 1, 2, …, n    (12)

where K = degree of the polynomial.

Regarding the sample size, (50 + 8k) may be taken as the minimum sample size, where k is the number of predictors (Green 1991). Regarding the deletion of outliers, Cook's distance detects the impact of a certain case on the regression model (Cook and Weisberg 1982). If the Cook's value is <1, there is no need to delete that case (Stevens 2002); otherwise, if the Cook's value is >1, the case should be deleted. Variables are highly correlated where the coefficient of determination is higher than 0.8 (R2 > 0.8) (Rockwell 1975). The variance inflation factor (VIF) examines the linear relationship with the other variables (Field 2009); if the average VIF is greater than 1, then multicollinearity occurs and can be detected (Bowerman and O'Connell 1990; Myers 1990). Homoskedasticity occurs when the residual terms vary constantly, and the residual variance should be constant to avoid a biased regression model (Field 2009). The Durbin–Watson test is conducted to check the correlations among errors, where the test values range between 0 and 4; a value of two denotes that residuals are uncorrelated (Durbin and Watson 1951). Accordingly, regression modeling can be summarized in the following steps, illustrated by the sketch after this list:
1. Collect and prepare the historical cases.
2. Divide the collected cases into a training set and a validation set.
3. Check the sample size of the collected training data (Green 1991; Stevens 2002).
4. Define the key independent parameters (cost drivers) and the dependent parameter (cost variable).
5. Develop a regression model and check the significance (P-value) of each coefficient (Field 2009).
6. Check outliers (Cook and Weisberg 1982).
7. Check the variance inflation factor (VIF) (Bowerman and O'Connell 1990; Myers 1990).
8. Check homoskedasticity (Durbin and Watson 1951).
9. Calculate the resulting error such as the mean absolute percentage error (MAPE).
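To make these steps concrete, the following minimal sketch screens a parametric cost regression for coefficient significance, influential cases, multicollinearity, and autocorrelated residuals using statsmodels. The synthetic data set, the hypothetical cost drivers (floor_area, n_stories, duration), and all thresholds are assumptions for illustration only, not the FCIP data analyzed in this study.

```python
# Sketch of the regression workflow (steps 1-9); synthetic data, hypothetical drivers.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 120
X = pd.DataFrame({                       # step 4: hypothetical cost drivers
    "floor_area": rng.uniform(500, 5000, n),
    "n_stories": rng.integers(1, 10, n),
    "duration": rng.uniform(6, 36, n),
})
cost = 200 * X["floor_area"] + 15000 * X["n_stories"] + rng.normal(0, 50000, n)

# Steps 1-2: split the historical cases into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(X, cost, test_size=0.2, random_state=0)

# Step 3: rule-of-thumb minimum sample size (Green 1991): 50 + 8k
assert len(X_train) >= 50 + 8 * X_train.shape[1]

# Step 5: fit OLS and inspect coefficient P-values
ols = sm.OLS(y_train, sm.add_constant(X_train)).fit()
print(ols.pvalues)

# Step 6: Cook's distance; values > 1 flag influential cases
cooks_d, _ = ols.get_influence().cooks_distance
print("influential cases:", int((cooks_d > 1).sum()))

# Step 7: variance inflation factors for multicollinearity
exog = sm.add_constant(X_train).values
vif = [variance_inflation_factor(exog, i) for i in range(1, exog.shape[1])]
print("VIF:", np.round(vif, 2))

# Step 8: Durbin-Watson statistic (approximately 2 means uncorrelated residuals)
print("Durbin-Watson:", durbin_watson(ols.resid))

# Step 9: validation error (MAPE)
pred = ols.predict(sm.add_constant(X_val))
print("validation MAPE: %.1f%%" % (np.mean(np.abs(y_val - pred) / y_val) * 100))
```

A polynomial model in the sense of Eq. (12) could be screened the same way after adding powers of the drivers as extra columns.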
Fuzzy Logic

Fuzzy logic (FL) is the modeling of human decision making by representing the uncertainty, incompleteness, and randomness of the real-world system (Zadeh 1965, 1973). In addition, FL represents the experts' experience and knowledge by developing fuzzy rules. Such knowledge is represented in fuzzy systems by membership functions (MFs), which range from zero to one. MFs can be triangular, trapezoidal, Gaussian, and bell-shaped functions, in which the selection of the MF is problem-dependent. Fig. 7 illustrates a trapezoidal MF that consists of a core set {a2, a3} and a support set {a1, a2, a3, a4}. The shape of the MF significantly influences the performance of a fuzzy model (Wang 1997; Chi et al. 1996). Therefore, many methods, such as clustering approaches and genetic algorithms, are applied to develop MFs automatically and to select the optimal shape of the MFs.

Fig. 7. Fuzzy trapezoidal membership function (MF).

Once MFs have been identified for each dependent and independent parameter, a set of operations on fuzzy sets can be conducted. Such operations are the union of fuzzy sets, the intersection of fuzzy sets, the complement of a fuzzy set, and the α-cut of a fuzzy set. Linguistic terms are used to approximately represent the system features where such features cannot be represented in quantitative terms (Zadeh 1976). Once MFs and linguistic terms have been defined, if-then rules can be developed to establish rule-based systems. Each rule represents human logic and experience, in which all rules together represent the brain of the fuzzy system. A single fuzzy if-then rule can be represented by the following:

If ⟨fuzzy proposition (x is A1)⟩ Then ⟨fuzzy proposition (y is B2)⟩

where x = input parameter; A1 = MF of x; y = output parameter; and B2 = MF of y. Rule-based systems are systems that have more than one rule to represent human logic and experience in the developed system. Aggregation of rules is the process of developing the overall consequent from the individual consequents added by each rule (Siddique and Adeli 2013).

As shown in Fig. 8, there are two parameters X1 and X2, where μX1 = {a1, b1, c1, d1}, μX2 = {a2, b2, c2, d2}, and μY = {ay, by, cy, dy}, and the fuzzy system consists of two rules as follows:

Rule 1: IF X1 is a1 AND X2 is c2 THEN y is ay
Rule 2: IF X1 is b1 AND X2 is d2 THEN y is by

where the two inputs used are {X1 = 4, X2 = 6}. These two inputs intersect with the antecedent MFs of the two rules, where two rule consequents {R1 and R2} are produced based on minimum intersections. The rule consequents are aggregated based on maximum intersections, where the final crisp value is 3. The aggregated output for the Ri rules is given by

Rule 1: μR1 = min[μa1(X1), μc2(X2)]
Rule 2: μR2 = min[μb1(X1), μd2(X2)]
Y: defuzzification[max[R1, R2]]

Fuzzification is transforming crisp values into fuzzy inputs. Conversely, defuzzification is transforming a fuzzy quantity into a crisp output. Many different methods of defuzzification exist, such as max-membership, center of gravity, weighted average, mean-max, and center of sums (Runker 1997). An inference mechanism is the process of converting the input space to the output space, such as Mamdani fuzzy inference, Sugeno fuzzy inference, and Tsukamoto fuzzy inference (Mamdani and Assilian 1974; Takagi and Sugeno 1985; Sugeno and Kang 1988; Tsukamoto 1979).



Fig. 8. Fuzzy rules firing: antecedent rules, consequent rules, and aggregation and defuzzification for the inputs X1 = 4 and X2 = 6, producing the crisp output YC = 3.

Fuzzy modeling identification includes two phases: structure identification and parameter identification (Emami et al. 1998). Structure identification is to define the input and output variables and to develop the input–output relations through if-then rules. The following points summarize the structure identification of a fuzzy system:
1. Determine relevant inputs and outputs.
2. Select the fuzzy inference system, e.g., Mamdani, Sugeno, or Tsukamoto.
3. Define the linguistic terms associated with each input and output variable.
4. Develop a set of fuzzy if-then rules to represent the relation between the inputs and outputs.
On the other hand, parameter identification is an optimization problem in which the objective is to maximize the performance of the developed system. Defining the MFs, such as the shape of the MF (triangular, trapezoidal, Gaussian, and bell-shaped functions) and its corresponding values, can significantly optimize the system performance.
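The following is a minimal Mamdani-style inference sketch in plain NumPy that mirrors the two-rule example of Fig. 8 (minimum for AND, maximum for aggregation, centroid defuzzification). The universes of discourse, the trapezoid parameters, and the rule labels are illustrative assumptions, not values taken from this study.

```python
# Minimal Mamdani-style inference for two rules (cf. Fig. 8); all MF parameters are illustrative.
import numpy as np

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function evaluated on array x."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12), (d - x) / (d - c + 1e-12)), 0.0, 1.0)

# Universe of discourse for the output (assumed 0-10 scale)
y_universe = np.linspace(0, 10, 1001)

# Antecedent MFs for X1 and X2 and consequent MFs for Y (hypothetical shapes)
x1_low, x1_high = (0, 0, 3, 6), (4, 7, 10, 10)
x2_low, x2_high = (0, 0, 4, 7), (5, 8, 10, 10)
y_low,  y_high  = (0, 0, 2, 5), (4, 7, 10, 10)

def degree(value, mf):
    return float(trapmf(np.array([value]), *mf)[0])

x1, x2 = 4.0, 6.0                                    # crisp inputs, as in the Fig. 8 example

# Rule strengths: AND is modeled with the minimum operator
w1 = min(degree(x1, x1_low),  degree(x2, x2_low))    # Rule 1: IF X1 low  AND X2 low  THEN Y low
w2 = min(degree(x1, x1_high), degree(x2, x2_high))   # Rule 2: IF X1 high AND X2 high THEN Y high

# Clip each consequent MF at its rule strength, then aggregate with the maximum operator
aggregated = np.fmax(np.fmin(w1, trapmf(y_universe, *y_low)),
                     np.fmin(w2, trapmf(y_universe, *y_high)))

# Centroid (center of gravity) defuzzification
crisp = float(np.sum(y_universe * aggregated) / (np.sum(aggregated) + 1e-12))
print(f"crisp output: {crisp:.2f}")
```

Fuzzy toolkits package the same four stages (fuzzification, rule firing, aggregation, and defuzzification) behind a higher-level interface; the sketch only exposes the mechanics described above.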
Artificial Neural Networks

ANNs are biologically inspired models that mimic the human neural system for information-processing and computation purposes. An ANN is a machine learning (ML) technique that can learn from past data. Learning forms can be supervised, unsupervised, and reinforcement learning. Contrary to traditional modeling techniques such as linear regression analysis, ANN models have the ability to approximate nonlinear functions to a specified accuracy. The first model of artificial neural networks came in 1943, when Warren McCulloch, a neurophysiologist, and Walter Pitts, a young mathematician, outlined the first formal model of an elementary computing neuron (McCulloch and Pitts 1943). This model, which is based on the concept of electrical circuits and produces an output of zero or one, is called a perceptron or neuron, and such a neuron is the basic unit of the ANN (McCulloch and Pitts 1943). Hopfield connected these neurons and developed a network to create ANNs (Hopfield 1982). Generally, ANNs can be categorized into two main categories: feedforward networks and recurrent networks.

In a feedforward network, all neurons are connected together. The feedforward network consists of the input vector (x), a weight matrix (W), a bias vector (b), and an output vector (Y) that can be formulated as Eq. (13)

Y = f(W · x + b)    (13)

where f() includes a nonlinear activation function. Different types of activation functions exist, such as the linear function, step function, ramp function, and tan-sigmoid function. The selection of ANN parameters such as the number of neurons, connections, transfer functions, and hidden layers mainly depends on the ANN's application. Several types of feedforward neural network architectures exist, such as multilayer perceptron (MLP) networks, radial basis function networks, generalized regression neural networks, probabilistic neural networks, belief networks, Hamming networks, and stochastic networks, in which each architecture is problem-dependent (Siddique and Adeli 2013). In this study, multilayer perceptron (MLP) networks are explained in some detail.

As shown in Fig. 9, an MLP network is a network with several layers of perceptrons in which each layer has a weight matrix (W), a bias vector (b), and an output vector (Y). The input vector X = {X1, X2, X3, …} feeds forward to n neurons in the hidden layer with a transfer function f(), where the weights w = {w1, w2, w3, …} are combined to produce the output. The outputs of each layer are computed as Ykn = f(Wn,m,k · Xm + bi,k), where k is the number of layers, d is the number of inputs, i is the number of bias nodes, n is the number of neurons, m is the number of weights for each sending neuron, and f() is the activation function (e.g., sigmoid and tan-sigmoid functions). No exact rule exists to determine the number of hidden layers and the number of neurons in each hidden layer. Huang and Huang (1991) and Choi et al. (2001) stated that a one-hidden-layer MLP needs at least (P − 1) hidden neurons to classify P patterns.



Fig. 9. Multilayer perceptron network (MLP).




A standard rectified linear unit (ReLU) is an activation function that can enhance the computing performance of ANNs (Nair and Hinton 2010). Mathematically, ReLU is defined as follows:

ReLU(Xi) = Xi if Xi ≥ 0; 0 if Xi < 0

For big data and high dimensionality of data, deep neural networks (DNNs) can be conducted with a ReLU activation function for high-performance computing (LeCun et al. 2015).

A three-layer network (input layer, hidden layer, and output layer) can solve a wide range of prediction, approximation, and classification problems. Moreover, to avoid overfitting problems and enhance the generalization capability, the number of training cases should be more than the size of the network (Rutkowski 2005). The learning mechanism of ANNs is modifying the weights and biases of the network to minimize the in-sample error. Developing ANNs can be summarized in the following steps, and a compact sketch follows the list:
1. Collect and prepare the historical cases.
2. Divide the collected cases into a training set and a validation set.
3. Determine relevant inputs and outputs.
4. Select the number of hidden layers.
5. Select the number of neurons in each hidden layer.
6. Select the transfer function.
7. Set initial weights.
8. Select the learning algorithm to develop the ANNs' weights.
9. Train the model for several iterations to get minimal prediction error.
10. Calculate the resulting error such as the mean absolute percentage error (MAPE).
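As one way to realize these steps, the sketch below trains a small MLP regressor with scikit-learn. The synthetic data, the single eight-neuron hidden layer, and the ReLU activation are illustrative choices rather than the configuration used for the FCIP case study.

```python
# Sketch of an MLP cost model (steps 1-10); synthetic data and hypothetical drivers.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 200
X = rng.uniform(0, 1, size=(n, 4))                    # four hypothetical cost drivers
y = 1e5 * (1 + 2 * X[:, 0] + X[:, 1] ** 2 + 0.5 * X[:, 2]) + rng.normal(0, 5e3, n)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=1)

# One hidden layer with eight neurons and a ReLU transfer function (steps 4-6);
# weights are initialized and then updated by the Adam learning algorithm (steps 7-9).
ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                 solver="adam", max_iter=3000, random_state=1),
)
ann.fit(X_train, y_train)

pred = ann.predict(X_val)
print("validation MAPE: %.1f%%" % (np.mean(np.abs(y_val - pred) / y_val) * 100))  # step 10
```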
Support Vector Machines

A support vector machine (SVM) is a nonparametric supervised ML algorithm that can be applied to regression and classification problems (Vapnik 1979). The objective is to minimize the misclassified cases by optimizing the margin between the hyperplanes, as shown in Fig. 10.

Fig. 10. Linear support vector machine. (Data from Burges 1998.)

Slack variables are added to solve the inseparability problem (Cortes and Vapnik 1995). In the linearly separable case, the objective is to maximize the distance between the hyperplanes bounding the two classes

Linear SVM: W · Xi + b ≥ +1 if yi = +1;  W · Xi + b ≤ −1 if yi = −1    (14)

for i = 1, 2, 3, …, m. Eq. (14) can be generalized as yi(W · Xi + b) ≥ 1. For optimum separation, the margin between the two bounding hyperplanes is maximized, which is equivalent to minimizing (1/2) w · wT. For nonlinear (inseparable) data, a positive slack variable (ξi) is added to relax the constraint, as shown in Eq. (15)

yi(W · Xi + b) ≥ 1 − ξi,  i = 1, 2, 3, …, m    (15)

Accordingly, the objective function for SVM optimization is expressed in Eq. (16)

Min (1/2) w · wT + C Σ(i = 1 to m) ξi    (16)
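For cost prediction the regression variant (support vector regression, SVR) is typically used. The sketch below fits an RBF-kernel SVR with scikit-learn, where the penalty C plays the role of the slack-weighting constant in Eq. (16); the data set and hyperparameter values are placeholders.

```python
# Sketch of a support vector regression cost model; synthetic data, illustrative hyperparameters.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(150, 3))                       # hypothetical cost drivers
y = 100 * (1 + 3 * X[:, 0] + np.sin(3 * X[:, 1])) + rng.normal(0, 5, 150)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=2)

# C weights the slack variables (Eq. 16); epsilon sets the width of the insensitive tube.
svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
svm.fit(X_train, y_train)

pred = svm.predict(X_val)
print("validation MAPE: %.1f%%" % (np.mean(np.abs(y_val - pred) / y_val) * 100))
```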
Decision Trees

Decision trees (DT) are a supervised ML model that divides the given data into hierarchical rules at each tree node by a repetitive splitting algorithm. Three of the most commonly applied algorithms for decision tree modeling are the chi-square automatic interaction detector (CHAID), classification and regression trees (CART), and the C4.5/C5.0 algorithms; CHAID is for categorical variables, and CART and C4.5/C5.0 are for both continuous and categorical variables (Berry and Linoff 1997; Breiman et al. 1984). CART is a tree learning model that can be applied for both regression (continuous variables) and classification (categorical variables) applications (Quinlan 2014). As an alternative to the black-box nature of ANNs, DT generates logic statements and interpretable rules that can be used for identifying the importance of data features (Perner et al. 2001). Another advantage of DT is avoiding the curse of dimensionality and providing high computing efficiency through its splitting procedure (Prasad et al. 2006). However, DT produces unsatisfactory performance on time series, noisy, or nonlinear data (Curram and Mingers 2017).
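The following sketch fits a CART-style regression tree with scikit-learn and prints the learned splitting rules, which illustrates the interpretability discussed above; the data, the feature names, and the depth limit are hypothetical.

```python
# Sketch of a CART regression tree for cost data; synthetic data, illustrative depth limit.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(200, 3))
y = 50_000 * X[:, 0] + 20_000 * (X[:, 1] > 0.5) + rng.normal(0, 2_000, 200)

tree = DecisionTreeRegressor(max_depth=3, random_state=3)   # CART-style binary splitting
tree.fit(X, y)

# Interpretable if-then rules recovered from the fitted tree
print(export_text(tree, feature_names=["area", "served_units", "duration"]))
print("feature importances:", np.round(tree.feature_importances_, 2))
```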
Case-Based Reasoning

Case-based reasoning (CBR) is a sustained and incremental learning approach that solves problems by searching for the most similar past case and reusing it for the new problem situation (Aamodt and Plaza 1994). Therefore, CBR mimics human problem solving (Ross 1989; Kolodner 1992). As illustrated in Fig. 11, CBR is a cyclic process of learning from past cases to solve a new case. The main processes of CBR are retrieving, reusing, revising, and retaining. The retrieving process solves a new case by retrieving the past cases. The case can be defined by key attributes. Such attributes are used to retrieve the most similar case, whereas the reusing process uses the new case information to solve the problem. The revising process evaluates the suggested solution to the problem. Finally, the retaining process updates the stored past cases with the new case by incorporating it into the existing case base (Aamodt and Plaza 1994).

Fig. 11. CBR processes. (Data from Aamodt and Plaza 1994.)

A CBR model can be developed to predict the conceptual cost based on the similar attributes of the entered case compared with the stored cases. Once attributes are entered, attribute similarities (AS) can be computed based on Eq. (17) (Kim et al. 2004)

AS = Min(AVN, AVR) / Max(AVN, AVR)    (17)

where AS = attribute similarity; AVN = attribute value of the newly entered case; and AVR = attribute value of the retrieved case. Depending on AS and the attribute weights (AW), the case similarity (CS) can be computed by Eq. (18) (Perera and Watson 1998). AW are selected by an expert to emphasize the existence and importance of the case attributes

CS = Σ(i = 1 to n) (ASi × AWi) / Σ(i = 1 to n) (AWi)    (18)

where CS = case similarity; AS = attribute similarity; AW = attribute weight; and i = number of the attributes (key cost drivers). The advantage of CBR is that it deals with a vast amount of data, where all past cases and new cases are stored using database techniques (Kim et al. 2004). Developing CBR methods can be summarized in the following steps, and a small retrieval sketch follows the list:
1. Collect and prepare the historical cases.
2. Divide the collected cases into a training set and a validation set.
3. Determine relevant input attributes and outputs.
4. Identify the similarity function and conduct the CBR processes.
5. Calculate the resulting error such as the mean absolute percentage error (MAPE).
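A direct implementation of Eqs. (17) and (18) is sketched below: attribute similarity is the min/max ratio, and case similarity is the weight-normalized sum used to retrieve the closest stored project. The case base, the attribute weights, and the new case are invented for illustration.

```python
# Sketch of CBR retrieval using Eq. (17) and Eq. (18); case base and weights are illustrative.
import numpy as np

# Historical case base: rows are past projects, columns are key cost attributes
case_base = np.array([
    [1200.0, 3.0, 14.0],
    [2500.0, 5.0, 20.0],
    [800.0,  2.0, 10.0],
])
case_costs = np.array([450_000.0, 930_000.0, 260_000.0])
weights = np.array([0.5, 0.3, 0.2])          # expert-assigned attribute weights (AW)

def attribute_similarity(av_new, av_ret):
    """Eq. (17): AS = min(AVN, AVR) / max(AVN, AVR)."""
    return np.minimum(av_new, av_ret) / np.maximum(av_new, av_ret)

def case_similarity(new_case, stored_case, aw):
    """Eq. (18): CS = sum(AS_i * AW_i) / sum(AW_i)."""
    as_i = attribute_similarity(new_case, stored_case)
    return float(np.sum(as_i * aw) / np.sum(aw))

new_case = np.array([1000.0, 3.0, 12.0])
scores = np.array([case_similarity(new_case, c, weights) for c in case_base])

best = int(np.argmax(scores))                 # retrieve: the most similar past case
print(f"retrieved case #{best}, similarity {scores[best]:.3f}")
print(f"reused cost estimate: {case_costs[best]:,.0f}")
```

The revise and retain steps would then adjust the reused estimate for the new case and append the confirmed case to the case base.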
Ensemble Methods

Ensemble methods (fusion learning) are elegant data mining techniques that combine multiple learning algorithms to enhance the overall performance (Hansen and Salamon 1990). Ensemble methods can apply any ML algorithm, such as ANN, decision tree, and SVM, which are called base models or base learners, as inputs to the ensemble. The concept behind ensemble methods can be illustrated as in Fig. 12 and mathematically as Eq. (19) (Chen and Guestrin 2016).

Fig. 12. Additive function concept.

For a given data set (D) with n examples and m features, D = {(xi, yi)} (xi ∈ Rm, yi ∈ R), where R is the set of real numbers, K additive functions are used to predict the output as in Eq. (19)

ŷi = Σ(k = 1 to K) fk(xi),  fk ∈ F    (19)

where F = {f(x) = wq(x)} (q: Rm → T) is the space of regression trees; q is the structure of each tree that maps an example to the corresponding leaf; T corresponds to the number of leaves in the tree; ŷi is the predicted dependent variable; each fk represents an independent tree structure q and leaf weights w; and xi represents the independent variables. Fig. 12 shows a tree ensemble model, where wi corresponds to the score on the ith leaf; unlike decision trees, each leaf has a continuous score.

Ensemble methods comprise different strategies such as voting, stacking, bagging, and boosting. Voting and averaging are two of the simplest ensemble methods, in which averaging is used for regression and voting is used for classification (Opitz and Maclin 1999). Ensemble learning methods can effectively deal with the problems of high-dimension data, complex data structures, and small sample size (Dietterich 2000). Bagging algorithms [Fig. 13(a)] can increase generalization by decreasing variance (Breiman 1998), whereas boosting [Fig. 13(c)] can improve generalization by decreasing bias error (Schapire et al. 1998).

In addition, ensemble models can be classified into two main types: homogeneous and heterogeneous. The homogeneous model applies the same base algorithm on different training data sets, whereas the heterogeneous model uses different base algorithms on the same training data (Reid 2007). Ensemble methods can effectively handle continuous, categorical, and dummy features with missing values. However, ensemble methods may increase model complexity, which decreases the model interpretability (Kuncheva 2004).
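As a minimal illustration of the averaging strategy, the sketch below combines a linear model and a regression tree with scikit-learn's VotingRegressor, which averages the base learners' predictions (a small heterogeneous ensemble). The data set and the choice of base learners are arbitrary.

```python
# Sketch of a simple heterogeneous averaging ensemble; synthetic data, arbitrary base learners.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import VotingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(200, 3))
y = 10_000 * (1 + X[:, 0] + 0.5 * X[:, 1] ** 2) + rng.normal(0, 300, 200)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=4)

# Averaging the outputs of two different base learners (heterogeneous ensemble)
ensemble = VotingRegressor([
    ("linear", LinearRegression()),
    ("tree", DecisionTreeRegressor(max_depth=4, random_state=4)),
])
ensemble.fit(X_train, y_train)
pred = ensemble.predict(X_val)
print("validation MAPE: %.1f%%" % (np.mean(np.abs(y_val - pred) / y_val) * 100))
```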



Fig. 13. (a) Bagging; (b) RF; and (c) Boosting.

Bagging

Bagging is a variance reduction algorithm that trains several classifiers based on bootstrap aggregation. A bagging algorithm randomly draws replicas of a training data set with replacement to train each classifier (Breiman 1996). As a result, diversity is obtained by resampling several data subsets. On average, each bootstrap sample contains 63.2% of the original training data set. The following three steps summarize the algorithm execution (Breiman 1999), and a short sketch follows the list:
1. T bootstrap samples BS1, BS2, …, BST are generated.
2. A classifier Ci is developed based on each bootstrap sample BSi.
3. An optimal classifier C is selected from C1, C2, …, CT whose output is the class predicted most often by its subclassifiers, with ties broken arbitrarily.
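A bagged regressor following these steps is available directly in scikit-learn, as sketched below; by default each bootstrap replica trains a decision tree, and the bagged prediction is the average of the individual outputs. The data and the number of estimators are illustrative.

```python
# Sketch of bagging for a cost regression task; synthetic data, illustrative settings.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(250, 4))
y = 5_000 * (1 + 2 * X[:, 0] + X[:, 1] * X[:, 2]) + rng.normal(0, 200, 250)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=5)

# T = 50 bootstrap samples, one base learner (a regression tree by default) per sample;
# the bagged prediction averages the individual learners' outputs.
bagger = BaggingRegressor(n_estimators=50, random_state=5)
bagger.fit(X_train, y_train)
pred = bagger.predict(X_val)
print("validation MAPE: %.1f%%" % (np.mean(np.abs(y_val - pred) / y_val) * 100))
```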
Random Forest

The random forest (RF) is a bagging ensemble learning model that can produce accurate performance without overfitting issues (Breiman 2001). RF algorithms draw bootstrap samples to develop a forest of trees based on random subsets of features. Therefore, some features may be selected more than once, whereas others might never be selected (Breiman 2001). RF is more robust against noisy data or big data than the DT algorithm (Breiman 1996; Dietterich 2000). Because RF does not search over all variables for the best split, the correlation among the developed trees is diminished along with the strength of every single tree; as a result, RF decreases the generalization error (Breiman 2001). However, a key limitation of the RF algorithm is that it cannot easily interpret the mechanism by which its predictions are produced. An extremely randomized tree (ERT) algorithm merges the randomization of the random subspace with a random selection of the cut-point during the tree node splitting process. The extremely randomized tree mainly controls the attribute randomization and smoothing parameters (Geurts et al. 2006).
zation and smoothing parameters (Geurts et al. 2006). tree functions.
XGBoost is a traditional gradient boosting tree algorithm
with a regularization parameter. Once the regularization term
Boosting and Adaptive Boosting (AdaBoost)
is removed, the XGBoost is converted back to the traditional
Schapire has presented a boosting procedure (also known as adap- gradient tree boosting. A differentiable convex cost function
tive resampling) as an algorithm that boosts the performance of can be replaced by Taylor series as a second-order approxima-
weak learning algorithms (Schapire 1990). Bagging generates clas- tion for fasting optimization (Friedman et al. 2000). Another key
sifiers in parallel, whereas boosting develops the classifiers sequen- advantage of XGBoost is handling the missing values for which
tially as shown in Fig. 13(c). Thus, boosting converts weak models defaults direction is identified as shown in Fig. 12. Accordingly,
to strong ones. Freund and Schapire (1997) have presented an adap- no effort is needed for cleaning the collected data (Fan et al.
tive boosting algorithm (AdaBoost). AdaBoost is selected as one of 2008).



Stochastic Gradient Boosting

The performance of gradient boosting can be improved iteratively in a stochastic gradient boosting (SGB) algorithm by injecting randomization into the selected data subsets. Adding randomization to a boosting algorithm can substantially improve both the fitting accuracy and the computational cost of the gradient boosting algorithm (Breiman 1996; Freund and Schapire 1996). The training data are randomly drawn at each iteration without replacement from the data set. Stochastic gradient boosting can be viewed in this sense as a boosting–bagging hybrid.
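In scikit-learn, setting subsample below 1.0 in GradientBoostingRegressor turns ordinary gradient boosting into the stochastic variant described here, because each boosting iteration then sees only a random fraction of the training data. The sketch and its settings are illustrative.

```python
# Sketch of stochastic gradient boosting (subsample < 1.0 injects the randomization).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
X = rng.uniform(0, 1, size=(300, 4))
y = 3_000 * (1 + 2 * X[:, 0] + np.sin(4 * X[:, 1])) + rng.normal(0, 100, 300)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=8)

sgb = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                subsample=0.7, max_depth=3, random_state=8)
sgb.fit(X_train, y_train)
pred = sgb.predict(X_val)
print("validation MAPE: %.1f%%" % (np.mean(np.abs(y_val - pred) / y_val) * 100))
```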
Hybrid Intelligent System

The fusion of the CI methodologies is called a hybrid intelligent system, and Zadeh (1994) predicted that hybrid intelligent systems would be the way of the future. FL is an approximate reasoning technique; however, it does not have any adaptive capacity or learning ability. On the other hand, ANNs provide an efficient mechanism for learning from given data and accounting for the uncertainty that naturally exists. EC provides an optimization structure for the developed system. Combining these methodologies can enhance the computational model so that the limitations of any single method can be compensated by the other methods (Siddique and Adeli 2013). Fig. 14 illustrates the fusion of the three basic models: FL, ANNs, and EC. Based on such models, many hybrid models can be evolved, such as neurofuzzy models and evolutionary neural networks.

Moreover, the objective of data transformation is to address the normality assumption of the data distribution, in which the shape of the probability distribution plays an important role in statistical modeling by transforming the error terms of linear models (Tabachnick and Fidell 2007). Data transformation may produce more accurate results. Stoy et al. (2008, 2012) developed a semilog model to predict the cost of residential construction in which the MAPE of the semilog model (9.6%) was better than that of the linear regression model (9.7%). This result shows that semilog models may produce a more accurate model than a plain regression model. However, this is not a rule; in other words, plain regression models may produce more accurate and simpler models than transformed models. Lowe et al. (2006) established a predictive model based on 286 historical cases in which three alternatives (cost/m2, the log of the cost variable, and the log of cost/m2) were developed instead of a raw cost data model, and such data transformation approaches had more accurate results than an untransformed data model. Love et al. (2005) represented the project time–cost relationship using a logarithmic regression model. Wheaton and Simonton (2007) performed a semilog regression model to assess a building cost index. Thus, a transformation of the raw data can help to produce a more reliable cost model.

Genetic-Fuzzy Model

Many approaches exist for evolutionary fuzzy hybridization (Angelov 2002; Pedrycz 1997). Traditionally, an expert is consulted to define such fuzzy rules, or the fuzzy designer can use a trial and error approach to map the fuzzy rules and MFs. However, such an approach is time-consuming and does not guarantee the optimal set of fuzzy rules. Moreover, the number of fuzzy if-then inference rules increases exponentially by increasing the number of inputs, linguistic variables, or outputs. In addition, experts cannot easily define all required fuzzy rules and the associated MFs. In many engineering problems, the evolutionary algorithm (EA) has been conducted to automatically develop fuzzy rules and MFs to improve system performance (Chou 2006; Loop et al. 2010). The genetic-fuzzy model has been developed to optimally generate fuzzy inference rules. The formulation of the genetic algorithm model depends mainly on defining two core terms: a chromosome representation and an objective function. Based on the Michigan approach, the chromosomes represent the fuzzy rules, where the number of chromosomes is the number of fuzzy rules. Each chromosome contains a number of genes. The process of the developed model consists of five main steps, and a compact sketch follows the list:
1. An initial population of chromosomes is identified to represent the initial state of the fuzzy rules. The four key cost drivers are fed to the fuzzy system.
2. The fuzzy system produces the final predicted output of the system, ŷi.
3. The predicted cost ŷi is fed to the fitness function (F) to evaluate the model performance, where the fitness function (F) is the model evaluation function.
4. The GA uses the fitness function (F) to guide the search process, in which the crossover probability and mutation probability have been set at 0.7 and 0.01, respectively.
5. A new population of fuzzy rules is produced based on the crossover and mutation processes to form the optimal fuzzy rules.
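The following toy sketch illustrates steps 1–5 with a plain NumPy genetic algorithm that tunes the consequent constants of a two-rule Sugeno-style model. The rule structure, the Gaussian antecedents, the synthetic data, and the population settings are assumptions made purely for illustration; only the crossover (0.7) and mutation (0.01) probabilities follow the values stated above.

```python
# Toy genetic-fuzzy sketch: a GA tunes the consequents of a two-rule Sugeno-style model.
import numpy as np

rng = np.random.default_rng(9)
X = rng.uniform(0, 10, size=(100, 2))                       # two hypothetical cost drivers
y_true = 20 + 3 * X[:, 0] + 1.5 * X[:, 1]                   # synthetic "actual" costs

def gaussian(x, center, width=3.0):
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

def predict(consequents, X):
    """Weighted average of rule consequents; rule 1 fires for 'low' inputs, rule 2 for 'high'."""
    w1 = gaussian(X[:, 0], 2.5) * gaussian(X[:, 1], 2.5)
    w2 = gaussian(X[:, 0], 7.5) * gaussian(X[:, 1], 7.5)
    return (w1 * consequents[0] + w2 * consequents[1]) / (w1 + w2 + 1e-12)

def fitness(consequents):
    pred = predict(consequents, X)
    return -np.mean(np.abs(y_true - pred) / y_true) * 100    # negative MAPE (to maximize)

pop = rng.uniform(0, 100, size=(30, 2))                      # step 1: initial population
for generation in range(200):
    scores = np.array([fitness(ind) for ind in pop])         # steps 2-3: evaluate fitness
    parents = pop[np.argsort(scores)[::-1][:15]]             # keep the fitter half
    children = []
    for _ in range(15):                                      # step 4: crossover and mutation
        pa, pb = parents[rng.integers(0, 15)], parents[rng.integers(0, 15)]
        child = pa.copy()
        if rng.uniform() < 0.7:                              # crossover probability = 0.7
            child[1] = pb[1]
        mutate = rng.uniform(size=2) < 0.01                  # mutation probability = 0.01
        child[mutate] += rng.normal(0, 5.0, int(mutate.sum()))
        children.append(child)
    pop = np.vstack([parents, np.array(children)])           # step 5: new population

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best consequents:", np.round(best, 1), "MAPE: %.1f%%" % -fitness(best))
```

A full genetic-fuzzy system would encode complete rule antecedents and MF shapes in the chromosome rather than only the consequent constants; the sketch keeps the chromosome to two genes so the evolutionary loop stays readable.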

Fig. 14. Hybrid intelligent systems.



Evaluation Techniques

The whole collected data set is divided into two main sets: a training set and a validation set. The validation set ranges from 10% to 30% of the whole set and is used to evaluate the performance of the developed model. Moreover, the training set can be divided into training and cross-validation data to apply the K-fold cross-validation technique. The K-fold cross-validation data are used to set the optimal hyperparameters. Evaluation techniques for a predictive regression model can be the mean absolute percentage error (MAPE), the mean squared error (MSE), the root-mean squared error (RMSE), the coefficient of determination (R2), or the adjusted R2.

MAPE compares the predicted and actual outcomes (Makridakis et al. 1998). MAPE can be classified as an excellent prediction if it is less than 10%; between 10% and 20% it provides good prediction; between 20% and 50% it generates acceptable forecasting; and more than 50% gives inaccurate prediction (Lewis 1982). MAPE can be expressed as follows:

MAPE = (1/n) × Σ(i = 1 to n) |yi − ŷi| / ŷi × 100    (21)

where n = number of cases; i = number of the case; ŷi = outcome of the model; and yi = actual outcome. The MSE measures how well the regression fits the data, as in Eq. (22) (Aczel 1989):

MSE = (1/n) × Σ(i = 1 to n) (yi − ŷi)²    (22)

where yi = observed values; and ŷi = predicted values of the dependent variable Y for the ith case. The RMSE equals the square root of the MSE.

The R2 is expressed as Eq. (23):

R2 = 1 − SSE/SST = 1 − [Σ(i = 1 to n) (yi − ŷi)²] / [Σ(i = 1 to n) (yi − ȳ)²]    (23)

where SSE = sum of squares of the residuals; SST = total sum of squares; ȳ = arithmetic mean of the variable Y; and R2 measures the percentage of the variation of the dependent variable Y explained by the independent variables X. Thus, R2 indicates how well the regression model fits the data. R2 ranges from zero to one (0 ≤ R2 ≤ 1). If the R2 value is 0.9 or above, it is classified as very good; above 0.8 is good; above 0.5 is satisfactory; and below 0.5 is poor (Aczel 1989; Ostertagová 2011). The adjusted R-squared (R̄2) is computed by Eq. (24):

R̄2 = R2 − (1 − R2)K / [n − (K + 1)]    (24)

where R̄2 is adjusted for the number of variables (K) included in the regression equation and is lower than the R2 value. For model evaluation, R̄2 is always preferred to R2 to avoid the overfitting problem (Aczel 1989; Ostertagová 2011).
model evaluation, R2 is always preferred to R2 to avoid the over- verted the time series model into a graph to forecast the construc-
fitting problem (Aczel 1989; Ostertagová 2011). tion cost index, and the application showed its ability to provide
more accurate estimations.
A parametric model mainly depends on parameters to simu-
Cost Modeling Review late and describe the case studied (AACE 2004; Elfaki et al. 2014).
Parametric modeling builds a mathematical relationship between
The objective of the cost modeling review is to provide an overview dependent and independent variables. CI, machine learning, and
of the recent and future trends in construction cost model develop- data science are the disciplines that map the relationships among
ment. The study has reviewed the past practices of parametric these variables and figure out such patterns. The most common
cost estimation at the conceptual stage for construction projects. techniques for those disciplines have been reviewed and discussed
Recently, many international journals have been reviewed such as in the paper, including MRA, FL, ANNs, CBR, and hybrid systems



Table 4. Review of past practices of cost model development
Algorithm Application Findings and characteristics Reference
ANNs Mark up estimation In 1992, This study showed that ANNs produce better performance than the Moselhi and
hierarchical model for markup estimation. Moreover, GA is used for optimizing Hegazy (1993)
ANN weights for markup estimation. Such a model has displayed good
generalization results.
ANNs Highway construction In 1998, this study conducted a regularization neural networks model that produced a Adeli and Wu
more predictable and reliable model for highway construction projects. (1998)
ANNs Building projects Based on 300 examples, three cost models composed of 5, 9, and 15 input parameters Emsley et al.
have been developed to predict the cost per m2 and the log of cost per m2 in the (2002)
United Kingdom in 2002. The cost per m2 model produces higher R2 , whereas the
log model produces lower values of MAPE. For the selected model, the R2 value is
0.789 and the MAPE is 16.6%
ANNs Structural systems of Based on 30 examples, a cost model composed of eight design parameters was Günaydın and

residential buildings developed to predict the cost per m2 of reinforced concrete for 4- to 8-story Doğan (2004)
residential buildings in Turkey in 2004. The cost estimation accuracy is 93%
ANNs Highway construction In 2005, an ANN model was built for highway construction costs in which the index Wilmot and Mei
of highway construction cost reflected the change in overall cost over time. (2005)
ANNs Building projects Based on 286 past cases of data collected in the United Kingdom, linear regression Lowe et al. (2006)
models and ANN models have been established to assess the cost of buildings. Three
alternatives—cost=m2 , the log of cost variable, and the log of cost=m2 —have been
conducted instead of raw cost data when such data transformation approaches have
better results than an untransformed data model. A total of six models have been
developed based on forward and backward stepwise regression analyses. The best
regression model was the log of cost backward model.
ANNs Building projects Based on 169 examples, an ANN cost model has been developed for building El-Sawalhi and
construction projects with acceptable prediction error. Shehatto (2014)
ANNs Highway construction A prediction model has been developed with an MAPE of 1.4% for the unit cost of Elbeltagi et al.
the highway project in Libya by changing ANNs structure, training functions, and (2014)
training algorithms until the optimum model was reached.
ANNs Public construction Based on 232 public construction projects in Turkey, a multilayer perceptron Bayram et al.
projects (MLP) model and radial basis function (RBF) model were developed to (2015)
estimate construction cost. RBF shows superior performance to MAPE, with
approximately 0.7%.
ANNs Building projects Based on 657 building projects in Germany, a multistep ahead approach is conducted Dursun and Stoy
to increase the accuracy of the model’s prediction. (2016)
ANNs Water treatment plant First, cost drivers that influence construction costs of water treatment plants have Marzouk and
costs been identified. Cost drivers have been determined through descriptive statistics Elkadi (2016)
ranking (DSR) and exploratory factor analysis. Principal component analysis (PCA)
with varimax rotation through five iterations has been used to minimize the
multicollinearity problem. Kaiser criterion was used so that a total of 33 variables
were reduced to eight components, whereas using Cattell’s scree test reduced
variables to four components.
ANNs and Highway projects Radial basis neural networks and regression models were developed for completed Williams (2002)
regression project cost estimation. The regression model produced better performance than the
ANN model. Moreover, a hybrid model was developed and produced reliable results.
A natural log transformation helped to improve the linear relationship between
variables.
ANNs and Cost deviation in Based on 41 examples, this study compared an ANN model with the regression Attalla and
regression reconstruction projects model for cost deviation in reconstruction projects in 2003. Hegazy (2003)
ANNs and Tunnel construction Based on 33 constructed tunnels, both ANNs and regression models have been Petroutsatou et al.
regression developed for tunnel construction in which the developed models were fitted for their (2012)
purpose and were reliable for cost prediction.
ANNs and Structural steel Based on 35 examples, a cost model consisted of three input parameters was El-Sawah and
regression buildings developed to predict the preliminary cost of structural steel buildings in 2014. ANNs Moselhi (2014)
produced better performance than regression models in which the ANNs model had
improved the MAPE by approximately 4% compared to the regression model.
ANNs and Field canals This paper developed a quadratic regression model and ANNs that can predict ElMousalami
regression improvement projects the conceptual cost at 9% MAPE. Data transformation plays important role in et al. (2018b)
prediction accuracy.
CBR Building projects This study incorporated the decision tree into CBR to identify attribute weights of Doğan et al.
CBR. Such an approach shows more reliable results for residential building projects (2008)
cost assessment.
CBR Pavement Based on the library of past cases, this study developed a CBR model for pavement Chou (2009)
maintenance operation maintenance operations costs based on computing case similarity.
CBR Pump stations A parametric cost model was presented in which a questionnaire survey was Marzouk and
organized to analyze the most critical factors affecting the final cost of pump Ahmed (2011)
stations. Using a Likert scale, these factors were screened to determine the key
factors. A case-based reasoning was built and tested to develop the proposed model.



CBR Military barrack Based on 129 military barrack projects, a CBR model has been developed for cost Ji et al. (2012)
projects estimation in which the model produces reliable results.
CBR Building projects This study introduced an improved CBR model based on multiple regression analysis Jin et al. (2012)
(MRA) technique in which MRA has been applied in the revision stage of the CBR
model. Such a model significantly improved the prediction accuracy; the
performance of the business facilities model was improved by 17.23%.
CBR Storage structures This study built a CBR model to estimate resources and quantities in construction De Soto and Adey
projects. The nearest neighbor technique was conducted to measure the retrieval (2015)
phase similarity of the CBR model. The model has shown reliable MAPE ranging
from 8.16% to 28.40%.
CBR Building projects Ahn et al. (2017) examined the CBR similarity measure based on the weighted Ahn et al. (2017)
Mahalanobis distance to take attributes covariance into consideration.

CBR Building projects Ji et al. (2018) have proposed a learning method to handle missing data values Ji et al. (2018)
based on a data mining algorithm to improve the stability and performance of the
CBR model.
CBR and GA Bridges A cost estimation model was developed based on CBR and GA for bridge projects Kim and Kim
which was used for optimizing parameters of CBR. Such methodology improves the (2010)
accuracy compared to the conventional cost model.
CBR and AHP Highway Analytic hierarchy process (AHP) was incorporated into CBR to build a reliable cost Kim (2013)
estimation model for highway projects in South Korea.
Evolutionary neural Highway Based on 18 examples, a reliable NN cost model was developed based on optimizing Hegazy and Ayed
network (NN) NN weights for highway projects. Simplex optimization of neural network weights is (1998)
more accurate than trial and error and GA optimization for which the MAPE was 1%.
Evolutionary NN Residential buildings Based on 498 cases, a reliable NN cost model was developed based on optimizing Kim et al. (2005)
NN weights for residential buildings. GA optimization of NN parameters was more
accurate than trial and error model for which the MAPE was 4.63%.
Evolutionary fuzzy Building projects This study incorporated computation intelligence models such as ANNs, FL, and EA Cheng et al.
neural inference to make a hybrid model that improves the prediction accuracy in a complex project. (2009)
model (EFNIM), As a result, an evolutionary fuzzy neural model was developed for conceptual cost
estimation for building projects with reliable accuracy.
Evolutionary fuzzy Building projects An evolutionary fuzzy hybrid neural network model was developed for conceptual Cheng and Roy
hybrid neural cost estimation. FL was used for fuzzification and defuzzification for inputs and (2010)
network outputs, respectively. GA was used for optimizing the parameters of models such as
NN layer connections and FL memberships.
Evolutionary fuzzy Building projects Hybrid AI system based on SVM, FL, and GA were built for decision making for Cheng and Roy
and SVM project construction management. The system used FL to handle uncertainty in the (2010)
system, SVM to map fuzzy inputs and outputs, and GA to optimize the FL and SVM
parameters. The objective of such a system is to produce accurate results with less
human intervention in which MF shapes and distributions can be automatically
mapped.
GA for ANNs Residential buildings This study has built three cost NN models by back-propagation (BP) algorithm, GA Kim et al. (2005)
for optimizing NN weights, and GA for parameters optimization of the BP algorithm.
Optimizing the parameters of the BP algorithm produced the best results.
GA for ANNs Bridge construction GA was used as an optimizing tool for ANNs and CBR cost models in which two Chou et al. (2015)
and CBR projects such models have been developed for bridge projects in Taiwan. Both models have
displayed reliable results.
Fuzzy linear Wastewater treatment Based on 48 wastewater treatment plants, a fuzzy logic model was developed with Chen (2002)
regression plants acceptable error and uncertainty considerations.
Fuzzy logic Design cost overruns Based on the collected building projects in 2002, a fuzzy logic model was developed Knight and Fayek
on building projects for estimating design cost overruns on building projects with acceptable error and (2002)
uncertainty considerations.
Fuzzy sets Cost range estimation This study proposed the use of fuzzy numbers for cost range estimation and claimed Shaheen et al.
the fuzzy numbers for fuzzy scheduling range assessment. (2007)
Fuzzy model Wastewater treatment This study compared a linear regression model with a fuzzy linear regression model Papadopoulos
projects for wastewater treatment plants in Greece. The results of both models are similar and et al. (2007)
reliable.
Fuzzy model Building projects A fuzzy model is built based on four inputs and one output in which a set of if-then Yang and Xu
rules, the center of gravity fuzzification, the product inference engine, and singleton (2010)
fuzzifier are applied. The maximal error is 3.2%.
Fuzzy model Building projects This study applied index values for membership degree and exponential smoothing Shi et al. (2010)
method to develop a construction cost model.
Fuzzy neural Cost estimation An evolutionary fuzzy neural network model was developed for cost estimation Zhu et al. (2010)
network based on 18 examples and 2 examples for training and testing, respectively. GA is
used to avoid sinking in local minimum results.
Fuzzy logic Building projects Based on 106 building projects in the Gaza Strip in 2012, a fuzzy logic model was El Sawalhi (2012)
developed for building projects with acceptable error and good generalization.



Fuzzy model Cost prediction An improved fuzzy system is established based on fuzzy c-means (FCM) to solve the Zhai et al. (2012)
problem of fuzzy rules generation. Such a model has produced better results for
scientific cost prediction.
Fuzzy logic and Construction materials This study developed an ANN model for predicting construction material prices, Marzouk and
neural networks prices whereas a fuzzy logic model was applied to determine the degree of importance of Amin (2013)
each material to use for an ANN model. Such modeling has acceptable accuracy in
training and testing phases.
Fuzzy subtractive Telecommunication Based on 568 cases, a four-input fuzzy clustering model and sensitivity analysis were Marzouk and
clustering towers conducted for estimating telecommunication towers construction cost with Alaraby (2014)
acceptable MAPE.
Fuzzy logic Satellite cost Based on two input parameters, a fuzzy logic model was developed for satellite cost Karatas and Ince
estimation estimation. Such models work as fuzzy expert tools for satellite cost prediction. (2016)

Regression Building projects Based on 530 examples, three cost models composed of nine parameters were Kim et al. (2004)
analysis, NN, and developed to predict the building costs in Korea in 2004. The NN model produces
CBR better results than the CBR and regression models. However, CBR produces better
results than NN for long-term use due to updating cases to the CBR system.
Neurogenetic Residential buildings Based on 530 cases of residential buildings, a neurofuzzy cost estimation model was Kim et al. (2005)
built in which GA is applied for optimizing BP algorithm parameters. Such an
approach has more accurate results than a trial and error BP algorithm.
Neurofuzzy Residential This study developed an adaptive neurofuzzy model for cost estimation for Yu and
construction projects residential construction projects. Such a model is the integration of the ratio Skibniewski
estimation method with the adaptive neurofuzzy method to obtain mining assessment (2010)
knowledge that is not available in traditional approaches.
Neurofuzzy and GA Semiconductor Based on 54 case studies of semiconductor hookup construction, a neurofuzzy cost Hsiao et al. (2012)
hookup construction estimation model was built and optimized by GA. Such a model has an accuracy
better than the conventional cost method by approximately 20%.
Neurofuzzy Water infrastructure Based on 98 examples, a combination of neural networks and fuzzy set theory was Ahiaga-Dagbui
incorporated to develop a more accurate and precise model for water infrastructure et al. (2013)
projects in which MAPE is 0.8%.
Neurofuzzy Water infrastructure Based on 1,600 water infrastructure projects in the United Kingdom, a neurofuzzy Tokede et al.
projects hybrid cost model has been built in which max-product composition produces better (2014)
results than max-min composition.
Regression Building projects A logarithmic regression model has been developed to examine the project time–cost Love et al. (2005)
relationship. Projects in various Australian states have performed a transformed
regression model (semilog) to estimate a building cost index based on historical
construction projects in several markets (Wheaton and Simonton 2007).
Regression Building projects A semilog model has used to predict the cost of residential construction projects in Stoy et al. (2008)
which the MAPE for the semilog model (9.6%) is more than a linear regression
model (9.7%). The previous result proved that semilog models may produce a more
accurate model than plain regression models. However, this is not a rule; in other
words, plain regression models may produce more accurate and simpler models than
transformed models.
Regression Building projects A semilog regression model was performed to develop cost models for residential Stoy et al. (2012)
building projects in Germany. The most significant variables were identified by
backward regression method. For the selected population, the proposed model has a
prediction accuracy of 7.55%.
ANNs and MRA Highway Gardner et al. (2016) built an ANN and MRA for conceptual cost estimating for Gardner et al.
infrastructure Highway infrastructure (2016)
Bayesian regression Masonry retrofit Nasrazadani et al. (2017) have created a Bayesian regression to develop probabilistic Nasrazadani et al.
cost models for retrofit actions based on 167 masonry retrofit projects. (2017)
LASSO regularized Highway construction Zhang et al. (2017) developed a LASSO regularized regression for Forecasting cost Zhang et al.
regression projects of highway construction projects (2017)
ANN and GA Sale prices of real Rafiei and Adeli (2015) have proposed a novel hybrid model of deep belief restricted Rafiei and Adeli
estate units Boltzmann machine and genetic algorithm for estimation of sale prices of real (2015)
estate units
SVM Building construction Based on 62 cases of building construction projects in Korea, the SVM model was An et al. (2007a,
project developed to evaluate conceptual cost estimation. Such a model can help clients to b)
know the quality and accuracy of cost prediction.
SVM Building projects This study utilized the theory of the rough set (RS) with SVM to improve the HongWei (2009)
prediction accuracy. RS was used for attributes reduction.
SVM and ANNs Building projects Based on 92 building projects, ANNs and SVM were used to predicted cost and Wang et al. (2012)
schedule success at the conceptual stage. Such a model has a prediction accuracy of
92% and 80% for cost success and schedule success, respectively.
SVM Commercial building Based on 84 cases of commercial building projects, a principal component analysis Son et al. (2012)
projects method was developed into SVM to predict cost estimate based on project
parameters.



SVM Building projects This study incorporated a least squares support vector machine (LS-SVM), Cheng et al.
differential evolution (DE), and machine learning–based interval estimation (MLIE) (2013)
for interval estimation of construction cost. DE was used for optimizing
cross-validation process to avoid overfitting.
SVM Building projects Based on 122 historical cases, a hybrid intelligence model was developed for Cheng et al.
construction cost index estimation with 1% MAPE. Such a model consists of least (2013)
squares support vector machine (LS-SVM) and differential evolution (DE), and DE
is applied to optimize LS-SVM tuning parameters.
SVM Building projects This study developed a hybrid cost prediction model for the building based on the Cheng and Hoang
machine learning based interval, least squares support vector machine (LS-SVM), (2014)
and estimation (MLIE), and differential evolution (DE).
SVM Predicting bidding Based on 54 tenders, an SVM model was developed with 2.5% MAPE for bidding Petruseva et al.

price price prediction. (2016)


SVM, ANN, School building This study presented a comparison for a reliable cost prediction model based on Kim et al. (2013)
regression construction SVM, ANN, and regression models.
SVM, regression Construction costs This study proved the superiority of SVM compared to regression models in Petruseva et al.
prediction accuracy. (2017)
Structural equation Cost of construction This study discussed reducing construction costs based on reducing the Ajayi and Oyedele
modeling wastes construction wastes. (2018)

(Siddique and Adeli 2013). For example, ElMousalami et al. (2018b) have identified the key cost drivers based on both qualitative and quantitative techniques. Consequently, ElMousalami et al. (2018b) have conducted two machine learning models by utilizing MRA and ANNs. Accordingly, the selected quadratic regression model produces a prediction accuracy of 9.12% and 7.82% for training and validation, respectively. Similarly, Marzouk and Elkadi (2016) have identified the key cost drivers based on qualitative techniques such as questionnaires and quantitative techniques such as exploratory factor analysis. Consequently, the next stage is the model development. Marzouk and Elkadi (2016) have applied ANNs, where the MAPE for the test sets was 21.18%.

Fan et al. (2006) have developed a decision tree approach for investigating the relationship between house prices and housing characteristics. Moussa et al. (2006) have established a decision tree model using integrated multilevel stochastic networks. Cao et al. (2018) have proposed a multilayer ensemble of methods for prediction of unit price bids of resurfacing highway projects based on more than 1,400 projects. The ensemble of methods was composed of gradient boosting machine, XGBoost, and RF models, of which XGBoost has the best accuracy. Monte Carlo simulation and a multiple linear regression model have been developed as benchmark models to evaluate the model's performance, where the MAPE was 7.56%. Wang and Ashuri (2016) have developed a highly accurate model based on random tree ensembles to predict a construction cost index, in which the model's accuracy has reached 0.8%. Williams and Gong (2014) have built a stacking ensemble learning and text mining model to estimate cost overrun using the project contract document, in which the accuracy was 44%. Chou and Lin (2012) have established an ensemble learning model of ANNs, SVM, and a decision tree for predicting the potential for disputes in public–private partnership (PPP) with an accuracy of 84%.

Review Analysis and Discussion for Cost Modeling Techniques

Based on the reviewed studies shown in Table 4, the survey has been classified into six main categories to represent the models used for cost model development. These categories are the ANN model, FL model, regression model, SVM model, CBR model, and hybrid models, where hybrid models represent all combined methods such as the fuzzy neural network and the evolutionary fuzzy hybrid neural network. As shown in Fig. 15(a), the percentages of the categories are 27%, 25%, 14%, 13%, 11%, and 10% for hybrid models, ANNs, fuzzy models, regression, SVM, and CBR, respectively. These percentages indicate that hybrid models are the current trend in parametric cost estimate modeling, in which researchers use such hybrid models to enhance the performance of the developed model and the accuracy of the prediction results. In addition, hybrid models avoid the limitations of a single method. For example, the hybrid model of ANNs and FL produces a neurofuzzy model that provides uncertainty handling for ANNs. On the other hand, the ANN model provides learning ability to the fuzzy system.

The second percentage of 25% represents the use of ANNs, which is a powerful ML technique for representing nonlinear data. The third percentage of 14% represents fuzzy models. The fuzzy model should be widely conducted since it provides vagueness and uncertainty handling in the results and more reliable prediction for future real-world cases. The fourth percentage of 13% represents the regression model. Generally, the regression model has been widely conducted because of its simplicity. However, ANNs can provide better results than the regression model, specifically with nonlinear data. SVM and CBR have similarly small percentages. However, CBR represents a promising technique in which a CBR works as an incremental search engine for similar cases.

Based on the reviewed studies as shown in Table 4 and Fig. 15(a), the survey has also been classified into four main categories to represent the projects used for the cost estimate. These categories are buildings, highway, water infrastructure, and other projects. The buildings category includes residential, industrial, and commercial building projects, whereas the highway category includes highway, road, pavement maintenance, and bridge construction projects. Water infrastructure includes wastewater treatment and water infrastructure projects. Other projects include tunnel projects, steel projects, telecommunication towers, etc.

As shown in Fig. 15(b), the building category represents 48% of all collected projects, whereas the other projects category represents 30%, the highway category represents 13%, and the water infrastructure category represents 9%. Subsequently, building projects and highway projects have the greatest share of researchers' interest, whereas the other projects have received fewer research efforts.



Based on the surveyed studies, the collected sample sizes range from 18 cases to 1,600 cases. The sample size can be divided into three categories: less than 100 cases, over 100 cases, and over 300 cases. These three categories represent the most frequent values of the collected sample size. In addition, many sample size studies stand on these values of 100 and 300 observations (Tabachnick and Fidell 2007; Comrey and Lee 1992; Guadagnoli and Velicer 1988; MacCallum et al. 1999). As shown in Fig. 15(c), about 50% of the studies use less than 100 cases, whereas the studies with above 100 and above 300 cases represent 27% and 23%, respectively. Most of the sample sizes are less than 100 cases, and that may reflect model bias and less model ability for generalization. Most of the studies do not provide detailed steps for model development such as singularity, multicollinearity, outliers, and sample size. For example, Hegazy and Ayed (1998) have applied only 14 cases and 4 cases for training and validation, respectively. Based on Green (1991), this study may have concerns about the sample size, and Green (1991) has recommended that 50 + 8k may be the minimum sample size, where k is the number of predictors (e.g., a model with k = 4 predictors would require at least 82 cases).

Fig. 15. Percentage breakdown of the reviewed studies: (a) AI models; (b) project categories; and (c) data sample size.

Other Modeling Methods

In addition to the common or established models previously discussed, multiple simple methods exist in industry practice. Time series is a statistical model that uses successive time data points to build predictive models (Bishop 2006; Wang and Ashuri 2016). Ashuri and Lu (2010) have developed a time series model for construction cost index (CCI) prediction with approximately 1% MAPE.

Monte Carlo simulation can model the project cost where no independent attributes exist. Monte Carlo simulation uses a randomness procedure that maintains the uncertainty of the estimation. However, its prediction performance is relatively lower than that of the ML algorithms. Back et al. (2000) have applied Monte Carlo simulation to randomly generate an escalation rate at any point in time for project cost likelihood quantification. However, Ilbeigi et al. (2016) pointed out that the limitation of the Monte Carlo technique is ignoring the autocorrelation effects of the collected data. Anderson et al. (2006) have estimated highway project costs based on a simple escalation approach where possible changes in the future prices of materials have been modeled as an inflation cost estimator.

Application to Field Canals Improvement Projects

AI can automatically develop the relations among cost drivers and the project costs for which the final prediction error can be minimized. Therefore, AI can diminish human interventions to estimate the project cost. Moreover, the automated parametric model needs a few cost drivers as inputs to predict the final cost without quantity survey, using low computational time and memory (ElMousalami et al. 2018b). In this section, the selected AI techniques are applied to the conceptual cost prediction of field canals improvement projects (FCIPs) in Egypt as an actual case study.

Case Background

FCIPs are one of the main projects in irrigation improvement projects (IIPs) in Egypt. The strategic aim of these projects is to save fresh water, facilitating water usage and distribution among stakeholders and farmers. To finance these projects, conceptual cost models are important to accurately predict preliminary costs at the early stages of the project (ElMousalami et al. 2018b; Radwan 2013).

Data Collection and Feature Selection

Based on contract information, cost drivers of FCIPs can be represented through a total of 17 parameters (ElMousalami et al. 2018b). Data preprocessing includes data normalization, cleaning, and transformation. Once the inputs (key cost drivers) and the output (conceptual construction cost of FCIP) are identified, relevant data are collected to build the parametric cost model. The quantity and quality of the collected instances are significant for the conceptual estimate and affect the accuracy of the developed model (Bode 2000).

These 17 parameters can be filtered before being entered into the ML models. Therefore, Elmousalami et al. (2018a) have conducted qualitative approaches such as the fuzzy Delphi method and fuzzy analytical hierarchy process to rank the cost drivers. Moreover, ElMousalami et al. (2018b) have developed a quantitative hybrid approach based on both Pearson correlation and stepwise regression to filter the key cost drivers. Accordingly, the final key cost drivers were area served (P1), pipeline total length (P2), the number of irrigation valves (P3), and construction year (P4), and a total of 144 FCIPs constructed between 2010 and 2015 have been collected.

For validation purposes, this collected sample has been randomly branched into a training sample (111 instances) and a testing sample (33 instances). The training sample in the present case study of 111 instances would be sufficiently acceptable to train reliable ML models (Green 1991).
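To make this filtering step concrete, the following minimal sketch (not the study's original code) assumes the 144 collected FCIP records are held in a CSV file containing the 17 candidate drivers and a cost column, with all columns numeric; it ranks the candidate drivers by absolute Pearson correlation with the cost, keeps the strongest four, and reproduces the 111/33 training–testing split described above. The file name and column names are hypothetical.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def filter_cost_drivers(df: pd.DataFrame, target: str = "cost", k: int = 4):
    """Rank candidate cost drivers by absolute Pearson correlation with the
    target and keep the k strongest (a simple stand-in for the Pearson
    correlation + stepwise regression filter reported above)."""
    corr = df.corr()[target].drop(target).abs()
    return corr.sort_values(ascending=False).head(k).index.tolist()

# Hypothetical file holding the 144 FCIP records (17 candidate drivers + cost).
df = pd.read_csv("fcip_projects.csv")
drivers = filter_cost_drivers(df, target="cost", k=4)

# 111 training / 33 testing instances, matching the split described above.
X_train, X_test, y_train, y_test = train_test_split(
    df[drivers], df["cost"], test_size=33, random_state=42)
```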


Table 5. Accuracy of the developed algorithms
Notation Algorithm/model Algorithm type MAPE (%) MAPE (%) categorization R2 Adjusted R2
M1 XGBoost Ensemble methods 9.091 Below 10 0.931 0.929
M2 Quadratic regressiona MRA 9.120 Below 10 0.857 0.851
M3 Plain regressiona MRA 9.130 Below 10 0.803 0.796
M4 Quadratic MLPa ANNs 9.200 Below 10 0.904 0.902
M5 Plain MLPa ANNs 9.270 Below 10 0.913 0.912
M6 Semilog regressiona MRA 9.300 Below 10 0.915 0.910
M7 Extra trees Ensemble methods 9.714 Below 10 0.948 0.947
M8 Natural log MLPa ANNs 10.230 Below 20 0.905 0.910
M9 Bagging Ensemble methods 10.246 Below 20 0.914 0.911
M10 RF Ensemble methods 10.503 Below 20 0.916 0.913
M11 AdaBoost Ensemble methods 10.679 Below 20 0.875 0.871

M12 SGB Ensemble methods 11.008 Below 20 0.926 0.924


M13 Reciprocal regressiona MRA 11.200 Below 20 0.814 0.801
M14 Power (2) regressiona MRA 11.790 Below 20 0.937 0.931
M15 DNNs ANNs 12.059 Below 20 0.785 0.779
M16 DT Tree model 12.488 Below 20 0.886 0.883
M17 Genetic fuzzy Hybrid model 14.700 Below 20 0.863 0.857
M18 CBR Case-based 17.300 Below 20 0.859 0.852
M19 SVM Kernel-based 21.217 Unacceptable 0.136 0.133
M20 Fuzzy Fuzzy theory 26.300 Unacceptable 0.857 0.851
a
ElMousalami et al. (2018b).
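For reference, the two accuracy measures reported in Table 5 can be computed as follows. This is a generic sketch of the standard formulas rather than code from the study; p denotes the number of predictors (p = 4 key cost drivers in the FCIP case).

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def adjusted_r2(y_true, y_pred, p):
    """Adjusted coefficient of determination for n observations and p predictors."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    n = len(y_true)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
```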

Comparison and Analysis

MAPE and adjusted R2 have been evaluated for the 20 developed models as displayed in Table 5, and the developed models have been sorted in ascending order of MAPE from M1 to M20. ElMousalami et al. (2018b) have presented a quadratic regression model (M2) as the most accurate model for FCIPs among the developed regression and ANN models (M3, M4, M5, M6, M8, M13, and M14), with 9.120% and 0.851 for MAPE and adjusted R2, respectively. However, this study presents that XGBoost (M1) is more accurate than quadratic regression (M2). XGBoost (M1) has obtained the first place, slightly ahead of M2, with 9.091% and 0.929 for MAPE and adjusted R2, respectively. Moreover, the unique advantage of XGBoost is its high scalability; it can process noisy data and fit high-dimension data without overfitting. XGBoost applies parallel computing to effectively reduce computational complexity and learn faster (Chen and Guestrin 2016). Another key advantage of XGBoost is handling missing values, for which a default direction is identified. Accordingly, no effort is needed for cleaning the collected data.

Ensemble methods such as Extra Trees (M7), bagging (M9), RF (M10), AdaBoost (M11), and SGB (M12) have produced high, acceptable performance, with MAPE ranging from 9.714% to 11.008%. The ensemble learning methods can effectively deal with the problems of high-dimension data, complex data structures, and small sample size. Bagging algorithms can increase generalization by decreasing variance error (Breiman 1998), whereas boosting can improve generalization by decreasing bias error (Schapire et al. 1998). Ensemble methods can effectively handle continuous, categorical, and dummy features with missing values. However, ensemble methods increase the model complexity, which decreases the model interpretability (Kuncheva 2004). RF (M10) is a more robust algorithm against noisy data or big data than the DT (M16) algorithm (Breiman 1996; Dietterich 2000). However, the RF algorithm is unable to interpret the importance of features or the mechanism of producing the results.

DNNs (M15) produce 12.059% MAPE, which is less accurate than all the developed MLP models (M4, M5, and M8). Accordingly, DNNs provide poor performance with a small data set. Conversely, deep learning and DNNs can produce the most accurate performance with high-dimension data (LeCun et al. 2015). As an alternative to the black box nature of ANNs and DNNs, DTs generate logic statements and interpretable rules that can be used for identifying the importance of data features (Perner et al. 2001). Another advantage of DT is avoiding the curse of dimensionality and providing high computing efficiency through its splitting procedure (Prasad et al. 2006). However, DT produces unsatisfactory performance for time series, noisy, or nonlinear data (Curram and Mingers 2017). Although DT (CART) is inherently used as a base learner for the ensemble methods, DT (M16) produces 12.488% MAPE, which is less accurate than all developed ensemble methods (M1, M7, M9, M10, M11, and M12). Therefore, ensemble methods produce better performance than a single learning algorithm. Moreover, ensemble methods can effectively handle missing values and noisy data because of their scalability.

Ensemble methods and data transformation play an important role in prediction accuracy. However, the main gap of the previous models is the lack of uncertainty modeling in the cost prediction model. Therefore, fuzzy logic theory has been conducted to maintain the uncertainty concept through the traditional fuzzy logic model (M20), shown in Fig. 16, and the hybrid genetic fuzzy model (M17), shown in Fig. 17. The number of rules generated by the fuzzy genetic model (M17) is 63 rules, and the MAPE is 14.7%. On the other hand, a traditional fuzzy logic model (M20) has been built based on the experts' experience, in which a total of 190 rules were generated to cover the possible combinations of the fuzzy system, and the MAPE is 26.3%. Moreover, the fuzzy rules (if-then rules) generated by experts contain redundant rules that can be deleted to improve the model computation and performance. Moreover, the experts' knowledge cannot cover all combinations needed to represent all possible rules (2,401 rules, i.e., seven membership functions for each of the four inputs, or 7^4 combinations). In addition, the generation of the experts' rules is a time- and effort-consuming process. Consequently, hybrid fuzzy systems are more effective than the traditional fuzzy logic system. Although the prediction accuracy of the fuzzy genetic model (M17) and the fuzzy logic model (M20) is 14.7% and 26.3% MAPE, respectively, the fuzzy models would produce more reliable prediction results because they take uncertainty into account. However, the traditional fuzzy model gives an unacceptable accuracy of 26.3% MAPE (Peurifoy and Oberlender 2002). Therefore, maintaining uncertainty decreases the predictive model accuracy.

CBR (M18) produces an acceptably low accuracy of 17.3% MAPE. The advantage of CBR is dealing with a vast amount of data, in which all past cases and new cases are stored in database techniques (Kim et al. 2004). Moreover, finding similarities and similar cases improves the reliability and confidence in the output. Hybrid models can be incorporated into CBR to enhance its performance, such as applying GA and decision trees to optimize attribute weights and applying regression analysis for the revision process. SVM can be applied for both regression and classification tasks. SVM (M19) produces an unacceptable accuracy of 21.217% MAPE (Peurifoy and Oberlender 2002). Finally, Table 6 summarizes the strengths and weaknesses of the developed models.
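The following is a minimal sketch of how the best-performing model (M1) could be fitted and scored on the training–testing split described above. It is a generic illustration based on the open-source xgboost package, not the study's implementation, and the hyperparameter values shown are assumptions rather than the study's tuned settings.

```python
from xgboost import XGBRegressor
from sklearn.metrics import r2_score, mean_absolute_percentage_error

def fit_and_score_xgboost(X_train, y_train, X_test, y_test, n_predictors=4):
    """Fit an XGBoost regressor on the FCIP training split and report
    MAPE (%) and adjusted R^2 on the testing split."""
    model = XGBRegressor(
        n_estimators=300,        # assumed settings; the study's tuned
        learning_rate=0.05,      # values are not restated here
        max_depth=4,
        objective="reg:squarederror",
    )
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)

    mape_pct = 100.0 * mean_absolute_percentage_error(y_test, y_pred)
    r2 = r2_score(y_test, y_pred)
    n = len(y_test)
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)
    return model, mape_pct, adj_r2
```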



Fig. 16. Fuzzy logic model. (The figure sketches the fuzzy system: the key cost drivers, namely PVC pipeline length, command area, number of irrigation valves, and construction year, feed an if-then rule base whose output is the FCIP cost; each variable is described by seven membership functions, MF1 to MF7, with membership degrees between 0 and 1.)

Fig. 17. Genetic fuzzy model. (The figure sketches the hybrid model: a GA iteratively generates candidate rule chromosomes, CH1 to CHn, each assigning one membership function, MF1 to MF7, to the inputs P1 to P4 and to the cost output; each candidate fuzzy system is evaluated against the data, and the fittest rule base is retained.)

Concluding Remarks and Future Research Trends

This study presents a general methodology to develop a robust parametric cost model at the conceptual stage of the project, and the study explores qualitative and quantitative techniques to identify key cost drivers and develop an intelligent predictive model. All results and recommendations have been extracted based on reviewing a total of 100 papers relevant to construction cost modeling, of which most were published in the period of 2002 to 2018, as illustrated in Tables 3 and 4. The methodology and the potential recommendations can be validated by selecting any relevant paper and comparing its methodology and recommendations to this study's methodology and recommendations. For example, Cao et al. (2015) have developed a hybrid computational model for forecasting Taiwan's construction cost index by conducting regression, ANN, and evolutionary computing to develop the hybrid model. This paper is part of the literature survey sample and recommends the hybrid computational model for better prediction accuracy. Therefore, this paper presents a comparison of AI techniques to develop a reliable conceptual cost prediction model. Twenty machine learning models are developed utilizing tree-based models, ensemble methods, fuzzy systems, CBR, ANNs, SVM, and transformed regression models. The accuracy of the developed models is tested from two perspectives: MAPE and adjusted R2. The results show that the most accurate and suitable method is XGBoost, with 9.091% and 0.929 for MAPE and adjusted R2, respectively.

According to the key cost driver identification, this study has reviewed the common practices of cost driver identification for parametric cost model development. Cost driver identification has been classified into two main types of procedures: qualitative and quantitative. The trends and recommendations of the study are as follows:
1. This study recommends using fuzzy approaches such as FDM and FAHP over traditional methods such as TDM and AHP because the fuzzy approaches produce more reliable performance.
2. This study recommends applying both qualitative and quantitative approaches to obtain the most reliable cost drivers. Such a procedure can be called a hybrid approach for cost driver identification. The limitation of the hybrid approach is that both experts' opinions and historical cases are required.
3. This study recommends establishing a database for every construction project and making such databases open source to be used for research development.
4. The genetic algorithm is a powerful tool to select the optimal set of cost drivers for which the prediction error is minimized.
5. The paper emphasizes the importance of machine learning models such as factor analysis and GA-ANN models to automatically identify the key cost drivers based on the collected data without human errors and interventions.

Moreover, the core trend of cost modeling is to computerize and automate the cost model so that less human intervention is required for operating such models while obtaining higher accuracy and optimal results. According to cost modeling techniques, based on the survey literature (Table 4), this study has performed a survey and analysis of construction cost modeling development. The study is presented in two main parts. The first part presents the most common AI modeling techniques used for cost models, and the second part presents the review of the current state of cost model development. The first part explains statistical methods such as MRA and intelligent methods such as FL, ANNs, EC, CBR, and hybrid models. The second part reviews the model development summarized in Table 4, where modeling techniques, construction project, parameters used, sample size, and model accuracy have been extracted and summarized.



Table 6. Characteristics of the developed algorithms
Algorithm notation Strengths Weaknesses Interpretation Uncertainty Missing values and noisy data
M1 High scalability, handling missing values, No uncertainty and interpretation No No Yes
high accuracy, low computational cost
M2 More accurate than plain regression, handles Prone to overfitting Yes No No
nonlinearity of data
M3 Works on small size of data set Linear assumptions Yes No No
M4 High accuracy, handling complex patterns Black box nature, need sufficient data for training No No No
M5 High accuracy, handling complex patterns Black box nature No No No
M6 Producing better results than plain regression Unable to capture complex patterns Yes No No
M7 Handling data randomness Black box nature and sufficient data No No Yes
M8 Producing better results than plain MLP Black box nature and sufficient data No No No

M9 Providing higher performance than a single Depending on other algorithms performance No No Yes
algorithm
M10 Accurate and high performance on many No interpretability, need to choose the number No No Yes
problems including nonlinear of trees
M11 High scalability, and high adaptability Depends on other algorithms’ performance No No Yes
M12 Handling difficult examples High sensitivity to noisy data No No Yes
M13 Handling data nonlinearity and training Unable to capture complex patterns Yes No No
small sample size
M14 Handling data nonlinearity and training Unable to capture complex patterns Yes No No
small sample size
M15 Capturing complex patterns, processing big Sufficient training data and high cost computation No No No
data and high performance computing
M16 Working on both linear and nonlinear data, Poor results on too small data sets, overfitting can Yes No No
and producing logical expressions easily occur
M17 Handling uncertainty and more accurate than More complex than fuzzy model and needs more Yes Yes No
fuzzy model computational resources
M18 Handling small data sets, simple and needs Poor performance for case in which the optimal Yes No No
less computational time case cannot be retrieved
M19 Easily adaptable, works very well on Compulsory to apply feature scaling, more No No No
nonlinear problems, not biased by outliers difficult to understand
M20 Handling uncertainty Low accuracy Yes Yes No
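As a companion to Tables 5 and 6, and to recommendation 6 in the list that follows, the sketch below shows how several of the tabulated learners could be benchmarked on the same training and testing split using scikit-learn; the model settings are illustrative assumptions, not the configurations used in the study.

```python
from sklearn.ensemble import (AdaBoostRegressor, BaggingRegressor,
                              ExtraTreesRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

def benchmark_models(X_train, y_train, X_test, y_test):
    """Fit several of the tabulated learners on the same split and report
    their testing MAPE (%) so a benchmark model can be selected."""
    candidates = {
        "Extra Trees (M7)": ExtraTreesRegressor(n_estimators=200, random_state=0),
        "Bagging (M9)": BaggingRegressor(n_estimators=200, random_state=0),
        "RF (M10)": RandomForestRegressor(n_estimators=200, random_state=0),
        "AdaBoost (M11)": AdaBoostRegressor(n_estimators=200, random_state=0),
        "SGB (M12)": GradientBoostingRegressor(random_state=0),
        "DT (M16)": DecisionTreeRegressor(random_state=0),
        "SVM (M19)": SVR(kernel="rbf", C=100.0),
    }
    results = {}
    for name, model in candidates.items():
        model.fit(X_train, y_train)
        y_pred = model.predict(X_test)
        results[name] = 100.0 * mean_absolute_percentage_error(y_test, y_pred)
    return results
```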

The following points summarize the recommendations and future trends:
1. Computational models and information systems have been applied in business and the construction industry to effectively improve job efficiency (Davis 1993). Therefore, the hybrid model represents the current trend of parametric cost modeling to improve the model performance and accuracy so that the limitations of each technique can be avoided. The objective is to develop computerized automated systems with fewer human interventions to save time and effort and to avoid human error in the cost estimate. Moreover, computer technologies have a great ability to deal with vast data and complicated computations.
2. AI and CI models such as ANNs, FL, and GA are used widely for hybrid model development. Moreover, ML techniques can be efficiently conducted for parametric cost modeling. Therefore, the cost modeling researcher should first study ML, CI, and AI techniques.
3. CBR represents the increasing importance of ML tools and data mining techniques for knowledge acquisition, prediction, and decision making. Specifically, CBR efficiently deals with vast data and has the ability to update the case base for future problem solving. Moreover, finding similarities and similar cases improves the reliability and confidence in the output.
4. Hybrid models can be incorporated into CBR to enhance the performance of CBR, such as by applying GA and decision trees to optimize attribute weights and by applying regression analysis for the revision process.
5. Most of the studies focus on building types of construction projects, so a need exists to apply cost models widely to different kinds of construction projects to help project managers and cost estimation engineers.
6. Automated cost models are prone to many machine learning problems such as overfitting and hyperparameter selection. It is recommended to develop more than one cost prediction model, such as regression, ANNs, FL, or CBR. As a result, the researcher can compare the results of the developed models and set a benchmark to select the most accurate model. In addition, the comparisons of the developed models enhance the quality of the cost estimate and the decisions based on it (Amason 1996).
7. There is a need to develop a model that has the ability to give justification for the model's results and to give answers and interpretations for the predicted cost. That may require a higher level of AI and may represent the future trend of cost modeling. Moreover, such a concept may be generalized for any prediction model. The objective is to avoid the estimator's biases, warn the user about the input parameters of the model, and avoid the limitation of the black box nature.
8. The conceptual cost estimate is conducted under uncertainty. Therefore, this study recommends using fuzzy theory such as FL and developing a hybrid model based on FL to obtain uncertainty handling for the developed model and produce a more reliable performance, as illustrated in Fig. 18 (Elmousalami 2019).
9. One of the key challenges in cost estimation accuracy is substantial variation over time. Therefore, each cost prediction model should have an input time-related macroeconomic indicator that represents the change in the market inflation rate over time, because significant variability in material cost or inflation rate can reduce the prediction performance.



ElMousalami et al. (2018b) maintain reliability for future prediction based on the following formula:

Future cost = Predicted cost × (1 + IR)^n

where IR = average inflation rate; and n = number of years from the present to the future time (a short sketch applying this adjustment is given after the list below).
10. ML gives satisfactory performance and accuracy, but it needs sufficient features and a sufficient data size to train ML algorithms. Monte Carlo simulation, regression, and time series analysis are not sufficiently robust for cost prediction with high variability and nonlinearity of data. Cao et al. (2018) reported that the ensemble learning models provide 37.98% and 26.89% more accurate results than the regression method and the Monte Carlo simulation method, respectively, using the mean absolute error (MAE) scale.
11. Ensemble methods are promising techniques that can handle a large number of features, model both numerical and categorical variables, capture nonlinear patterns, and fit data with missing values.
12. Decision tree algorithms and ensemble methods can provide an alternative technique to many ML algorithms such as multiple regression analysis and ANNs. The study emphasizes the importance of ensemble methods for improving the prediction accuracy and handling noisy and missing data. However, the key limitation of the ensemble methods is an inability to interpret the produced results.

Fig. 18. Intelligent methodology for project conceptual cost prediction. (The figure outlines two stages: (1) feature selection, in which the key cost drivers are extracted from the data features; and (2) parametric modeling, in which a hybrid GA-fuzzy prediction model is built from the generated fuzzy rules and then evaluated on test data.) [Reprinted from Elmousalami 2019, under Creative Commons BY-NC-ND 4.0 license (https://creativecommons.org/licenses/by-nc-nd/4.0/).]
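A minimal sketch of the inflation adjustment given in the formula above; the cost, rate, and horizon in the example are assumed values for illustration only.

```python
def future_cost(predicted_cost: float, avg_inflation_rate: float, years: int) -> float:
    """Escalate a predicted conceptual cost to a future year:
    Future cost = Predicted cost * (1 + IR) ** n."""
    return predicted_cost * (1.0 + avg_inflation_rate) ** years

# Example with assumed numbers: a predicted cost of 1,000,000 (any currency unit),
# an average inflation rate of 10%, and a 2-year horizon.
print(future_cost(1_000_000, 0.10, 2))   # -> 1210000.0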
Most of the studies focus on building types of construction projects, so a need exists to apply cost models widely to different kinds of construction projects. In addition, more reviewed studies mean more generalization and better quality of the results. Finally, an accurate cost estimate means accurate decisions about the project management. Therefore, this study has analyzed the past cost modeling practices to provide a recent direction for construction cost modeling. The study shows that computational intelligence, artificial intelligence, and machine learning techniques have a powerful ability to develop applicable and accurate cost predictive models. Moreover, the cost modeling research area needs more studies to develop intelligent models that have the ability to interpret the resulting cost prediction and analyze the input model's parameters. In addition, this study has provided a list of recommendations and references for cost model developers to build a more practical parametric cost model.

Data Availability Statement

Data generated by the authors or analyzed during the study are available at: https://github.com/HaythamElmousalami/Field-canals-improvement-projects-FCIPs-.

References

AACE (Association for the Advancement of Cost Engineering). 2004. AACE International recommended practices. Morgantown, WV: AACE.
Aamodt, A., and E. Plaza. 1994. “Case-based reasoning: Foundational issues, methodological variations, and system approaches.” AI Commun. 7 (1): 39–59.
Aczel, A. D. 1989. Complete business statistics. Burlington, MA: Irwin.
Adeli, H., and M. Wu. 1998. “Regularization neural network for construction cost estimation.” J. Constr. Eng. Manage. 124 (1): 18–24. https://doi.org/10.1061/(ASCE)0733-9364(1998)124:1(18).
Ahiaga-Dagbui, D. D., O. Tokede, S. D. Smith, and S. Wamuziri. 2013. “A neuro-fuzzy hybrid model for predicting final cost of water infrastructure projects.” In Proc., 29th Annual ARCOM Conf., 2–4. Reading, UK: Association of Researchers in Construction Management.
Ahn, J., M. Park, H. S. Lee, S. J. Ahn, S. H. Ji, K. Song, and B. S. Son. 2017. “Covariance effect analysis of similarity measurement methods for early construction cost estimation using case-based reasoning.” Autom. Constr. 81 (Sep): 254–266. https://doi.org/10.1016/j.autcon.2017.04.009.
Ajayi, S. O., and L. O. Oyedele. 2018. “Waste-efficient materials procurement for construction projects: A structural equation modelling of critical success factors.” Waste Manage. 75: 60–69.
Akintoye, A. 2000. “Analysis of factors influencing project cost estimating practice.” Constr. Manage. Econ. 18 (1): 77–89. https://doi.org/10.1080/014461900370979.
Alroomi, A., D. H. S. Jeong, and G. D. Oberlender. 2012. “Analysis of cost-estimating competencies using criticality matrix and factor analysis.” J. Constr. Eng. Manage. 138 (11): 1270–1280. https://doi.org/10.1061/(ASCE)CO.1943-7862.0000351.
Amason, A. 1996. “Distinguishing the effects of functional and dysfunctional conflict on strategic decision making: Resolving a paradox for top management teams.” Acad. Manage. J. 39 (1): 123–148.
Ambrule, V. R., and A. N. Bhirud. 2017. “Use of artificial neural network for pre design cost estimation of building projects.” Int. J. Recent Innovation Trends Comput. Commun. 5 (2): 173–176.
An, S.-H., G.-H. Kim, and K.-I. Kang. 2007a. “A case-based reasoning cost estimating model using experience by analytic hierarchy process.” Build. Environ. 42 (7): 2573–2579. https://doi.org/10.1016/j.buildenv.2006.06.007.
An, S.-H., U.-Y. Park, K.-I. Kang, M.-Y. Cho, and H.-H. Cho. 2007b. “Application of support vector machines in assessing conceptual cost estimates.” J. Comput. Civ. Eng. 21 (4): 259–264. https://doi.org/10.1061/(ASCE)0887-3801(2007)21:4(259).
Anderson, S. D., K. R. Molenaar, and C. J. Schexnayder. 2006. Guidance for cost estimation and management for highway projects during planning, programming, and preconstruction. NCHRP Rep. No. 574. Washington, DC: Transportation Research Board.
Angelov, P. P. 2002. Evolving rule-based models: A tool for design of flexible adaptive systems. Wurzburg, Germany: Physica-Verlag.
Arabzadeh, V., S. T. A. Niaki, and V. Arabzadeh. 2018. “Construction cost estimation of spherical storage tanks: Artificial neural networks and hybrid regression—GA algorithms.” J. Ind. Eng. Int. 14 (4): 747. https://doi.org/10.1007/s40092-017-0240-8.
Ashuri, B., and J. Lu. 2010. “Time series analysis of ENR construction cost index.” J. Constr. Eng. Manage. 136 (11): 1227–1237. https://doi.org/10.1061/(ASCE)CO.1943-7862.0000231.
Attalla, M., and T. Hegazy. 2003. “Predicting cost deviation in reconstruction projects: Artificial neural networks versus regression.” J. Constr. Eng. Manage. 129 (4): 405–411. https://doi.org/10.1061/(ASCE)0733-9364(2003)129:4(405).
Back, W. E., W. W. Boles, and G. T. Fry. 2000. “Defining triangular probability distributions from historical cost data.” J. Constr. Eng. Manage. 126 (1): 29–37. https://doi.org/10.1061/(ASCE)0733-9364(2000)126:1(29).
Bauer, E., and R. Kohavi. 1999. “An empirical comparison of voting classification algorithms: Bagging, boosting, and variants.” Mach. Learn. 36 (1–2): 105–139. https://doi.org/10.1023/A:1007515423169.



Bayram, S., M. E. Ocal, E. L. Oral, and C. D. Atis. 2015. “Comparison of Choi, S., D. Y. Kim, S. H. Han, and Y. H. Kwak. 2014. “Conceptual cost-
multi-layer perceptron (MLP) and radial basis function (RBF) for con- prediction model for public road planning via rough set theory and case-
struction cost estimation: The case of Turkey.” J. Civ. Eng. Manage. based reasoning.” J. Constr. Eng. Manage. 140 (1): 04013026. https://
22 (4): 480–490. https://doi.org/10.3846/13923730.2014.897988. doi.org/10.1061/(ASCE)CO.1943-7862.0000743.
Berry, M. J., and G. Linoff. 1997. Data mining techniques: For marketing, Choi, S., K. Ko, and D. Hong. 2001. “A multilayer feedforward neural net-
sales, and customer support. New York: Wiley. work having N/4 nodes in two hidden layers.” In Vol. 3 of Proc., IEEE
Bertram, D. 2017. “Likert Scales are the meaning of life.” Accessed Int. Joint Conf. on Neural Networks, 1675–1680. New York: IEEE.
February 20, 2017. http://poincare.matf.bg.ac.rs/∼kristina/topic-dane Chou, C.-H. 2006. “Genetic algorithm-based optimal fuzzy controller de-
-likert.pdf. sign in the linguistic space.” IEEE Trans. Fuzzy Syst. 14 (3): 372–385.
Bezdek, J. C. 1994. “What is computational intelligence?” In Computa- https://doi.org/10.1109/TFUZZ.2006.876329.
tional intelligence imitating life, edited by J. M. Zurada, II, R. J. Marks, Chou, J. S., and C. Lin. 2012. “Predicting disputes in public-private part-
and C. J. Robinson, 1–12. New York: IEEE. nership projects: Classification and ensemble models.” J. Comput.
Bishop, C. M. 2006. “Introduction.” In Pattern recognition and machine Civ. Eng. 27 (1): 51–60. https://doi.org/10.1061/(ASCE)CP.1943-5487
learning, 1–58. New York: Springer. .0000197.
Bode, J. 2000. “Neural networks for cost estimation: Simulations and pilot Chou, J.-S. 2009. “Web-based CBR system applied to early cost budget-

application.” Int. J. Prod. Res. 38 (6): 1231–1254. https://doi.org/10


ing for pavement maintenance project.” Expert Syst. Appl. 36 (2):
.1080/002075400188825.
2947–2960. https://doi.org/10.1016/j.eswa.2008.01.025.
Bowerman, B. L., and R. T. O’Connell. 1990. Linear statistical models:
Chou, J.-S., C.-W. Lin, A.-D. Pham, and J.-Y. Shao. 2015. “Optimized
An applied approach. 2nd ed. Belmont, CA: Duxbury.
artificial intelligence models for predicting project award price.” Autom.
Breiman, L. 1996. “Bagging predictors.” Mach. Learn. 24 (2): 123–140.
Constr. 54 (Jun): 106–115. https://doi.org/10.1016/j.autcon.2015
Breiman, L. 1998. “Arcing classifier (with discussion and a rejoinder by
.02.006.
the author).” Ann. Stat. 26 (3): 801–849. https://doi.org/10.1214/aos
/1024691079. Comrey, A. L., and H. B. Lee. 1992. A first course in factor analysis.
Breiman, L. 1999. “Pasting small votes for classification in large databases 2nd ed. Hillsdale, NJ: Erlbaum.
and on-line.” Mach. Learn. 36 (1–2): 85–103. https://doi.org/10.1023 Cook, R. D., and S. Weisberg. 1982. Residuals and influence in regression.
/A:1007563306331. New York: Chapman & Hall.
Breiman, L. 2001. “Random forests.” Mach. Learn. 45 (1): 5–32. https://doi Cortes, C., and V. Vapnik. 1995. “Support-vector networks.” Mach. Learn.
.org/10.1023/A:1010933404324. 20 (3): 273–297.
Breiman, L., J. H. Friedman, R. Olshen, and C. Stone. 1984. Classification Curram, S. P., and J. Mingers. 2017. “Neural networks, decision tree in-
and regression trees. Belmont, CA: Wadsworth. duction and discriminant analysis: An empirical comparison.” J. Oper.
Buckley, J. J. 1985. “Fuzzy hierarchical analysis.” Fuzzy Sets Syst. 34 (2): Res. Soc. 45 (4): 440–450. https://doi.org/10.1057/jors.1994.62.
187–195. https://doi.org/10.1016/0165-0114(90)90158-3. Darwin, C. 1859. The origin of species by means of natural selection or the
Burges, C. J. C. 1998. “A tutorial on support vector machines for pattern preservation of favoured races in the struggle for life. New York:
recognition.” Data Min. Knowl. Discovery 2 (2): 121–167. https://doi Mentor.
.org/10.1023/A:1009715923555. Davis, F. D. 1993. “User acceptance of information technology: System
Cao, M.-T., M.-Y. Cheng, and Y.-W. Wu. 2015. “Hybrid computational characteristics, user perceptions, and behavioral impacts.” Int. J. Man
model for forecasting Taiwan construction cost index.” J. Constr. Mach. Stud. 38 (3): 475–487. https://doi.org/10.1006/imms.1993.1022.
Eng. Manage. 141 (4): 04014089. https://doi.org/10.1061/(ASCE)CO Dell’Isola, M. D. 2002. Architect’s essentials of cost management.
.1943-7862.0000948. New York: Wiley.
Cao, Y., B. Ashuri, and M. Baek. 2018. “Prediction of unit price bids De Soto, B. G., and B. T. Adey. 2015. “Investigation of the case-based
of resurfacing highway projects through ensemble machine learning.” reasoning retrieval process to estimate resources in construction proj-
J. Comput. Civ. Eng. 32 (5): 04018043. https://doi.org/10.1061/(ASCE) ects.” Procedia Eng. 123: 169–181. https://doi.org/10.1016/j.proeng
CP.1943-5487.0000788. .2015.10.074.
Cattell, R. B. 1966. “The scree test for the number of factors.” Dietterich, T. G. 2000. “An experimental comparison of three methods for
Multivariate Behav. Res. 1 (2): 245–276. https://doi.org/10.1207 constructing ensembles of decision trees: Bagging, boosting, and ran-
/s15327906mbr0102_10. domization.” Mach. Learn. 40 (2): 139–157. https://doi.org/10.1023
Chen, H. 2002. “A comparative analysis of methods to represent uncer- /A:1007607513941.
tainty in estimating the cost of constructing wastewater treatment Doğan, S. Z., D. Arditi, and H. M. Günaydin. 2008. “Using decision trees
plants.” J. Environ. Manage. 65 (4): 383–409. https://doi.org/10.1016 for determining attribute weights in a case-based model of early cost
/S0301-4797(01)90563-8. prediction.” J. Constr. Eng. Manage. 134 (2): 146–152. https://doi.org
Chen, T., and C. Guestrin. 2016. “XGBOOST: A scalable tree boosting /10.1061/(ASCE)0733-9364(2008)134:2(146).
system.” In Proc., 22nd ACM SIGKDD Int. Conf. on Knowledge
Draper, N. R., and H. Smith. 1998. Applied regression analysis. New York:
Discovery and Data Mining, 785–794. New York: ACM.
Wiley.
Cheng, M.-Y., and N.-D. Hoang. 2014. “Interval estimation of construction
Durbin, J., and G. S. Watson. 1951. “Testing for serial correlation in least
cost at completion using least squares support vector machine.” J. Civ.
squares regression, II.” Biometrika 38 (1–2): 159–178. https://doi.org
Eng. Manage. 20 (2): 223–236. https://doi.org/10.3846/13923730.2013
.801891. /10.1093/biomet/38.1-2.159.
Cheng, M.-Y., N.-D. Hoang, and Y.-W. Wu. 2013. “Hybrid intelligence Dursun, O., and C. Stoy. 2016. “Conceptual estimation of construction
approach based on LS-SVM and differential evolution for construction costs using the multistep ahead approach.” J. Constr. Eng. Manage.
cost index estimation: A Taiwan case study.” Autom. Constr. 35 (Nov): 142 (9): 04016038. https://doi.org/10.1061/(ASCE)CO.1943-7862
306–313. https://doi.org/10.1016/j.autcon.2013.05.018. .0001150.
Cheng, M.-Y., and A. F. Roy. 2010. “Evolutionary fuzzy decision model for Dziuban, C. D., and E. C. Shirkey. 1974. “When is a correlation matrix
construction management using support vector machine.” Expert Syst. appropriate for factor analysis? Some decision rules.” Psychol. Bull.
Appl. 37 (8): 6061–6069. https://doi.org/10.1016/j.eswa.2010.02.120. 81 (6): 358–361. https://doi.org/10.1037/h0036316.
Cheng, M.-Y., H.-C. Tsai, and W.-S. Hsieh. 2009. “Web-based conceptual Elbeltagi, E., O. Hosny, R. Abdel-Razek, and A. El-Fitory. 2014. “Concep-
cost estimates for construction projects using evolutionary fuzzy neural tual cost estimate of Libyan highway projects using artificial neural
inference model.” Autom. Constr. 18 (2): 164–172. https://doi.org/10 network.” Int. J. Eng. Res. Appl. 4 (8): 56–66.
.1016/j.autcon.2008.07.001. Elfaki, A. O., S. Alatawi, and E. Abushandi. 2014. “Using intelligent
Chi, Z., H. Yan, and T. Phan. 1996. Fuzzy algorithms: With applications to techniques in construction project cost estimation: 10-year survey.”
image processing and pattern recognition. Singapore: World Scientific. Adv. Civ. Eng. 2014: 107926. https://doi.org/10.1155/2014/107926.



Elmousalami, H. H. 2019. “Intelligent methodology for project conceptual J. Chiropr. Med. 5 (3): 101–117. https://doi.org/10.1016/S0899-3467
cost prediction.” Heliyon 5 (5): e01625. https://doi.org/10.1016/j (07)60142-6.
.heliyon.2019.e01625. Green, S. B. 1991. “How many subjects does it take to do a regression
Elmousalami, H. H., A. H. Elyamany, and A. H. Ibrahim. 2018a. “Evalu- analysis?” Multivariate Behav. Res. 26 (3): 499–510. https://doi.org/10
ation of cost drivers for field canals improvement projects.” Water .1207/s15327906mbr2603_7.
Resour. Manage. 32 (1): 53–65. https://doi.org/10.1007/s11269-017 Guadagnoli, E., and W. F. Velicer. 1988. “Relation of sample size to the
-1747-x. stability of component patterns.” Psychol. Bull. 103 (2): 265–275.
ElMousalami, H. H., A. H. Elyamany, and A. H. Ibrahim. 2018b. “Predict- https://doi.org/10.1037/0033-2909.103.2.265.
ing conceptual cost for field canal improvement projects.” J. Constr. Günaydın, H. M., and S. Z. Doğan. 2004. “A neural network approach for
Eng. Manage. 144 (11): 04018102. https://doi.org/10.1061/(ASCE)CO early cost estimation of structural systems of buildings.” Int. J. Project
.1943-7862.0001561. Manage. 22 (7): 595–602. https://doi.org/10.1016/j.ijproman.2004
El-Sawah, H., and O. Moselhi. 2014. “Comparative study in the use of neu- .04.002.
ral networks for order of magnitude cost estimating in construction.” Guyon, I., and A. Elisseeff. 2003. “An introduction to variable and feature
ITcon 19: 462–473. selection.” J. Mach. Learn. Res. 3 (Mar): 1157–1182.
El Sawalhi, N. I. 2012. “Modeling the parametric construction project cost Hansen, L. K., and P. Salamon. 1990. “Neural network ensembles.” IEEE

estimate using fuzzy logic.” Int. J. Emerging Technol. Adv. Eng. 2 (4): Trans. Pattern Anal. Mach. Intell. 12 (10): 993–1001. https://doi.org/10
631–636. .1109/34.58871.
El-Sawalhi, N. I., and O. Shehatto. 2014. “A neural network model for Hastie, T., R. Tibsharani, and J. Friedman. 2009. The elements of statistical
building construction projects cost estimating.” J. Constr. Eng. Project learning: Data mining, inference, and prediction. 2nd ed. New York:
Manage. 4 (4): 9–16. https://doi.org/10.6106/JCEPM.2014.4.4.009. Springer. https://doi.org/10.1007/b94608.
ElSawy, I., H. Hosny, and M. Abdel Razek. 2011. “A neural network model Hays, W. L. 1983. “Review of using multivariate statistics. [Review of the
for construction projects site overhead cost estimating in Egypt.” Int. J. book Using Multivariate Statistics. B. G. Tabachnick & L. S. Fidell].”
Comput. Sci. Issues 8 (3): 273–283. Contemp. Psychol. 28 (8): 642. https://doi.org/10.1037/022267.
Emami, M. R., I. B. Turksen, and A. A. Goldberg. 1998. “Development of a Hegazy, T. 2014. Computer-based construction project management.
systematic methodology of fuzzy logic modeling.” IEEE Trans. Fuzzy Essex: Pearson Education.
Syst. 6 (3): 346–361. https://doi.org/10.1109/91.705501. Hegazy, T., and A. Ayed. 1998. “Neural network model for parametric
Emsley, M. W., D. J. Lowe, A. R. Duff, A. Harding, and A. Hickson. 2002. cost estimation of highway projects.” J. Constr. Eng. Manage. 124 (3):
“Data modeling and the application of a neural network approach to the 210–218. https://doi.org/10.1061/(ASCE)0733-9364(1998)124:3(210).
prediction of total construction costs.” Constr. Manage. Econ. 20 (6): Holland, J. H. 1975. Adaptation in natural and artificial systems. Ann
465–472. https://doi.org/10.1080/01446190210151050. Arbor, MI: University Michigan Press.
Engelbrecht, A. P. 2002. Computational intelligence: An introduction. HongWei, M. 2009. “An improved support vector machine based on rough
New York: Wiley. set for construction cost prediction.” In Vol. 2 of Proc., 2009 Int. Forum
on Computer Science-Technology and Applications. New York: IEEE.
Erensal, Y. C., T. Öncan, and M. L. Demircan. 2006. “Determining key
Hopfield, J. J. 1982. “Neural networks and physical systems with emergent
capabilities in technology management using fuzzy analytic hierarchy
collective computational abilities.” Proc. Natl. Acad. Sci. 79 (8): 2554–
process: A case study of Turkey.” Inf. Sci. 176 (18): 2755–2770. https://
2558. https://doi.org/10.1073/pnas.79.8.2554.
doi.org/10.1016/j.ins.2005.11.004.
Hsiao, F.-Y., S.-H. Wang, W.-C. Wang, C.-P. Wen, and W.-D. Yu. 2012.
Fan, G. Z., S. E. Ong, and H. C. Koh. 2006. “Determinants of house price:
“Neuro-fuzzy cost estimation model enhanced by fast messy genetic
A decision tree approach.” Urban Stud. 43 (12): 2301–2315. https://doi
algorithms for semiconductor hookup construction.” Comput.-Aided
.org/10.1080/00420980600990928.
Civ. Infrastruct. Eng. 27 (10): 764–781. https://doi.org/10.1111/j.1467
Fan, R. E., K. W. Chang, C. J. Hsieh, X. R. Wang, and C. J. Lin. 2008.
-8667.2012.00786.x.
“LIBLINEAR: A library for large linear classification.” J. Mach. Learn.
Hsu, C.-C., and B. A. Sandford. 2007. “The Delphi technique: Making
Res. 9 (Aug): 1871–1874.
sense of consensus.” Pract. Assess. Res. Eval. 12 (10): 1–8.
Field, A. 2009. Discovering statistics using SPSS for windows. London:
Hsu, Y.-L., C.-H. Lee, and V. Kreng. 2010. “The application of fuzzy
Sage Publications. Delphi method and fuzzy AHP in lubricant regenerative technology
Flom, P. L., and D. L. Cassell. 2007. “Stopping stepwise: Why stepwise and selection.” Expert Syst. Appl. 37 (1): 419–425. https://doi.org/10
similar selection methods are bad, and what you should use.” In Proc., .1016/j.eswa.2009.05.068.
2007 Conf. NorthEast SAS Users Group (NESUG): Statistics and Data Huang, S.-C., and Y.-F. Huang. 1991. “Bounds on the number of hidden
Analysis. Portland, OR: NorthEast SAS Users Group. neurons in multilayer neurons.” IEEE Trans. Neural Networks 2 (1):
Freund, Y., and R. E. Schapire. 1996. “Experiments with a new boosting 47–55. https://doi.org/10.1109/72.80290.
algorithm.” In Proc., 13th Int. Conf. on Machine Learning. Princeton, Hutcheson, G., and N. Sofroniou. 1999. The multivariate social scientist.
NJ: International Machine Learning Society. London: Sage.
Freund, Y., and R. E. Schapire. 1997. “A decision-theoretic generalization Ilbeigi, M., B. Ashuri, and A. Joukar. 2016. “Time-series analysis for fore-
of on-line learning and an application to boosting.” J. Comput. Syst. Sci. casting asphalt-cement price.” J. Manage. Eng. 33 (1): 04016030.
55 (1): 119–139. https://doi.org/10.1006/jcss.1997.1504. https://doi.org/10.1061/(ASCE)ME.1943-5479.0000477.
Friedman, J., T. Hastie, and R. Tibshirani. 2000. “Additive logistic regres- Ishikawa, A., M. Amagasa, T. Shiga, G. Tomizawa, R. Tatsuta, and
sion: A statistical view of boosting (with discussion and a rejoinder by H. Mieno. 1993. “The max-min Delphi method and fuzzy Delphi
the authors).” Ann. Stat. 28 (2): 337–407. https://doi.org/10.1214/aos method via fuzzy integration.” Fuzzy Sets Syst. 55 (3): 241–253.
/1016218223. https://doi.org/10.1016/0165-0114(93)90251-C.
Friedman, J., T. Hastie, and R. Tibshirani. 2001. Vol. 1 of The elements of Ji, S.-H., J. Ahn, E.-B. Lee, and Y. Kim. 2018. “Learning method for
statistical learning. New York: Springer. knowledge retention in CBR cost models.” Autom. Constr. 96 (Dec):
Gardner, B. J., D. D. Gransberg, and H. D. Jeong. 2016. “Reducing data- 65–74. https://doi.org/10.1016/j.autcon.2018.08.019.
collection efforts for conceptual cost estimating at a highway agency.” Ji, S.-H., M. Park, and H.-S. Lee. 2012. “Case adaptation method of case-
J. Constr. Eng. Manage. 142 (11): 04016057. https://doi.org/10.1061 based reasoning for construction cost estimation in Korea.” J. Constr.
/(ASCE)CO.1943-7862.0001174. Eng. Manage. 138 (1): 43–52. https://doi.org/10.1061/(ASCE)CO.1943
Geurts, P., D. Ernst, and L. Wehenkel. 2006. “Extremely randomized -7862.0000409.
trees.” Mach. Learn. 63 (1): 3–42. https://doi.org/10.1007/s10994-006 Jin, R., K. Cho, C. Hyun, and M. Son. 2012. “MRA-based revised CBR
-6226-1. model for cost prediction in the early stage of construction projects.”
Green, B. N., C. D. Johnson, and A. Adams. 2006. “Writing narrative Expert Syst. Appl. 39 (5): 5214–5222. https://doi.org/10.1016/j.eswa
literature reviews for peer-reviewed journals: Secrets of the trade.” .2011.11.018.



Jolliffe, I. T. 1972. “Discarding variables in a principal component analysis. Laarhoven, P. J. M., and W. Pedrycz. 1983. “A fuzzy extension of Sati’s
I: Artificial data.” Appl. Stat. 21 (2): 160–173. https://doi.org/10.2307 priority theory.” Fuzzy Sets Syst. 11 (1–3): 229–241. https://doi.org/10
/2346488. .1016/S0165-0114(83)80082-7.
Jolliffe, I. T. 1986. Principal component analysis. New York: Springer. LeCun, Y., Y. Bengio, and G. Hinton. 2015. “Deep learning.” Nature
Jrade, A. 2000. “A conceptual cost estimating computer system for building 521 (7553): 436. https://doi.org/10.1038/nature14539.
projects.” Masters thesis, Dept. of Building Civil and Environmental Leśniak, A., and K. Zima. 2018. “Cost calculation of construction proj-
Engineering, Concordia Univ. ects including sustainability factors using the case based reasoning
Juszczyk, M. 2017. “Studies on the ANN implementation in the macro (CBR) method.” Sustainability 10 (5): 1608. https://doi.org/10.3390
BIM cost analyzes.” Przegląd Naukowy. Inżynieria i Kształtowanie /su10051608.
Środowiska 26 (2): 183–192. Lewis, C. D. 1982. Industrial and business forecasting methods. London:
Juszczyk, M., A. Leśniak, and K. Zima. 2018. “ANN based approach Butterworth.
for estimation of construction costs of sports fields.” Complexity 2018: Lin, X., F. Yang, L. Zhou, P. Yin, H. Kong, W. Xing, X. Lu, L. Jia,
1–11. https://doi.org/10.1155/2018/7952434. Q. Wang, and G. Xu. 2012. “A support vector machine-recursive feature
Kaiser, H. F. 1960. “The application of electronic computers to factor elimination feature selection method based on artificial contrast varia-
bles and mutual information.” J. Chromatogr. B 910: 149–155. https://
analysis.” Educ. Psychol. Meas. 20 (1): 141–151. https://doi.org/10

doi.org/10.1016/j.jchromb.2012.05.020.
.1177/001316446002000116.
Liu, W.-K. 2013. “Application of the fuzzy Delphi method and the fuzzy
Kaiser, H. F. 1970. “A second generation little jiffy.” Psychometrika 35 (4):
analytic hierarchy process for the managerial competence of multina-
401–415. https://doi.org/10.1007/BF02291817.
tional corporation executives.” IJEEEE 3 (4): 313–317. https://doi.org
Kaiser, H. F. 1974. “An index of factorial simplicity.” Psychometrika 39 (1):
/10.7763/IJEEEE.2013.V3.248.
31–36. https://doi.org/10.1007/BF02291575.
Loop, B. P., S. D. Sudhoff, S. H. Zak, and E. L. Zivi. 2010. “Estimating
Kan, P. 2002. “Parametric cost estimating model for conceptual cost esti- regions of asymptotic stability of power electronics systems using
mating of building construction projects.” Ph.D. thesis, Faculty of the genetic algorithms.” IEEE Trans. Control Syst. Technol. 18 (5):
Graduate School, Univ. of Texas. 1011–1022. https://doi.org/10.1109/TCST.2009.2031325.
Karatas, Y., and F. Ince. 2016. “Feature article: Fuzzy expert tool for small Love, P. E. D., R. Y. C. Tse, and D. J. Edwards. 2005. “Time–cost relation-
satellite cost estimation.” IEEE Aerosp. Electron. Syst. Mag. 31 (5): ships in Australian building construction projects.” J. Constr. Eng.
28–35. https://doi.org/10.1109/MAES.2016.140210. Manage. 131 (2): 187–194. https://doi.org/10.1061/(ASCE)0733-9364
Kass, R. A., and H. E. A. Tinsley. 1979. “Factor analysis.” J. Leisure Res. (2005)131:2(187).
11 (4): 120–138. Lowe, D. J., M. W. Emsley, and A. Harding. 2006. “Predicting construction
Kim, G. H., D. S. Seo, and K. I. Kang. 2005. “Hybrid models of neural cost using multiple regression techniques.” J. Constr. Eng. Manage.
networks and genetic algorithms for predicting preliminary cost esti- 132 (7): 750–758. https://doi.org/10.1061/(ASCE)0733-9364(2006)
mates.” J. Comput. Civ. Eng. 19 (2): 208–211. https://doi.org/10.1061 132:7(750).
/(ASCE)0887-3801(2005)19:2(208). Ma, L., S. Shen, J. Zhang, Y. Huang, and F. Shi. 2010. “Application of
Kim, G.-H., S.-H. An, and K.-I. Kang. 2004. “Comparison of construction fuzzy analytic hierarchy process model on determination of optimized
cost estimating models based on regression analysis, neural networks, pile-type.” Front. Archit. Civ. Eng. China 4 (2): 252–257. https://doi
and case-based reasoning.” Build. Environ. 39 (10): 1235–1242. https:// .org/10.1007/s11709-010-0017-2.
doi.org/10.1016/j.buildenv.2004.02.013. MacCallum, R. C., K. F. Widaman, S. Zhang, and S. Hong. 1999. “Sample
Kim, G.-H., J.-M. Shin, S. Kim, and Y. Shin. 2013. “Comparison of school size in factor analysis.” Psychol. Methods 4 (1): 84–99. https://doi.org
building construction costs estimation methods using regression analy- /10.1037/1082-989X.4.1.84.
sis, neural network, and support vector machine.” J. Build. Constr. Makridakis, S., S. C. Wheelwright, and R. J. Hyndman. 1998. Forecasting
Plann. Res. 1 (1): 1–7. https://doi.org/10.4236/jbcpr.2013.11001. methods and applications. New York: Wiley.
Kim, K. J., and K. Kim. 2010. “Preliminary cost estimation model Mamdani, E. H., and S. Assilian. 1974. “Application of fuzzy algorithms
using case-based reasoning and genetic algorithms.” J. Comput. Civ. for control of simple dynamic plant.” Proc., Institution of Electrical
Eng. 24 (6): 499–505. https://doi.org/10.1061/(ASCE)CP.1943-5487 Engineers 121 (12), 1585–1588.
.0000054. Manoliadis, O. G., J. P. Pantouvakis, and S. E. Christodoulou. 2009.
Kim, S. 2013. “Hybrid forecasting system based on case-based reasoning “Improving qualifications-based selection by use of the fuzzy Delphi
and analytic hierarchy process for cost estimation.” J. Civ. Eng. Manage. method.” Constr. Manage. Econ. 27 (4): 373–384. https://doi.org/10
19 (1): 86–96. https://doi.org/10.3846/13923730.2012.737829. .1080/01446190902758993.
Kim, S., S. Chin, and S. Kwon. 2019. “A discrepancy analysis of BIM- Marzouk, M., and M. Alaraby. 2014. “Predicting telecommunication tower
costs using fuzzy subtractive clustering.” J. Civ. Eng. Manage. 21 (1):
based quantity take-off for building interior components.” J. Manage.
67–74. https://doi.org/10.3846/13923730.2013.802736.
Eng. 35 (3): 05019001. https://doi.org/10.1061/(ASCE)ME.1943-5479
Marzouk, M., and A. Amin. 2013. “Predicting construction materials
.0000684.
prices using fuzzy logic and neural networks.” J. Constr. Eng. Manage.
Kline, P. 1999. The handbook of psychological testing. 2nd ed. London:
139 (9): 1190–1198. https://doi.org/10.1061/(ASCE)CO.1943-7862
Routledge.
.0000707.
Klir, G. J., and B. Yuan. 1995. Fuzzy sets and fuzzy logic theory and
Marzouk, M., and M. Elkadi. 2016. “Estimating water treatment plants
applications. Upper Saddle River, NJ: Prentice Hall. costs using factor analysis and artificial neural networks.” J. Clean.
Knight, K., and A. R. Fayek. 2002. “Use of fuzzy logic for predicting Prod. 112 (Part 5): 4540–4549. https://doi.org/10.1016/j.jclepro.2015
design cost overruns on building projects.” J. Constr. Eng. Manage. .09.015.
128 (6): 503–512. https://doi.org/10.1061/(ASCE)0733-9364(2002) Marzouk, M. M., and R. M. Ahmed. 2011. “A case-based reasoning ap-
128:6(503). proach for estimating the costs of pump station projects.” J. Adv. Res.
Kohavi, R., and G. John. 1997. “Wrappers for feature subset selection.” 2 (4): 289–295. https://doi.org/10.1016/j.jare.2011.01.007.
Artif. Intell. 97 (1/2): 273–324. https://doi.org/10.1016/S0004-3702 McCulloch, W. S., and W. H. Pitts. 1943. “A logical calculus of the ideas
(97)00043-X. imminent in nervous activity.” Bull. Math. Biophys. 5 (4): 115–133.
Kolodner, J. L. 1992. “An introduction to case-based reasoning.” Artif. https://doi.org/10.1007/BF02478259.
Intell. Rev. 6 (1): 3–34. https://doi.org/10.1007/BF00155578. Moselhi, O., and T. Hegazy. 1993. “Markup estimation using neural net-
Kuncheva, L. I. 2004. Combining pattern classifiers: Methods and work methodology.” Comput. Syst. Eng. 4 (2–3): 135–145. https://doi
algorithms. New York: Wiley. .org/10.1016/0956-0521(93)90039-Y.
Kursa, M., and W. Rudnicki. 2010. “Feature selection with the Boruta Moussa, M., J. Ruwanpura, and G. Jergeas. 2006. “Decision tree modeling
package.” J. Stat. Software 36 (11): 1–13. using integrated multilevel stochastic networks.” J. Constr. Eng.
