Abstract: A variety of building information modeling (BIM) performance evaluation initiatives have been proposed to quantify the BIM utilization capacity of enterprises. These initiatives were designed for evaluating, rather than benchmarking, an organization's performance in BIM utilization. Unlike evaluation, which mainly focuses on ascertaining the achievement of BIM utilization within an organization, benchmarking is more interested in comparing one organization's BIM performance to that of its industry peers. By identifying gaps in specific areas, decisions to make improvements can be facilitated. This paper proposes a cloud-based BIM performance benchmarking application called building information modeling cloud score (BIMCS) to automatically collect BIM performance data from a wide range of BIM users nationwide. It utilizes the software as a service (SaaS) model of cloud computing to make the collection, aggregation, and presentation of benchmarking data autonomous and interactive. Based on the big data collected in the BIMCS database, an overall view of the industry status quo of BIM utilization may be obtained, and ultimately, a protocol for BIM performance can be developed on the basis of a better knowledge discovery process. Furthermore, BIMCS data will help individual companies compare and improve their performance in BIM utilization with respect to their industry competitors. DOI: 10.1061/(ASCE)CO.1943-7862.0000891. © 2014 American Society of Civil Engineers.
Author keywords: Building information modeling (BIM); Cloud computing; Building information modeling cloud score (BIMCS); Benchmarking; Information technologies.
Literature Review
Benchmarking
…defects per unit of measure (Construction Industry Institute 2013). However, by comparing four major benchmarking initiatives, including the CII BM&M and the CBPP, Costa et al. (2006) still found significant divergence among the different benchmarking initiatives, suggesting the lack of a benchmarking protocol in the construction industry.
• The second challenge is the weighting method for the benchmarking metrics. Although the literature has suggested multiple objective weighting methods (Datta et al. 2009; Greco et al. 2001; Zeleny and Cochrane 1982), many of these benchmarking initiatives still apply subjective weighting methods and rely extensively on the opinions of domain experts. This is questionable when a benchmarking system is designed for a wider range of subjects.
• The last challenge is the method used to collect the benchmarking data. Although some benchmarking initiatives, such as the CII BM&M (Construction Industry Institute 2013), have created web applications to make the benchmarking process more user-friendly, the process is still not fully open to industry practitioners (for example, CII requires membership for its benchmarking service) and is not autonomous (most benchmarking initiatives require manual input of performance data). Cloud computing can be used to address the above challenges.

Cloud Computing
According to NIST, cloud computing is a model for "enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction" (Mell and Grance 2011). NIST (Mell and Grance 2011) identified three service models currently employed in cloud computing: (1) software as a service (SaaS), (2) platform as a service (PaaS), and (3) infrastructure as a service (IaaS). Users have different capacities for exercising control over the components of the infrastructure. For example, IaaS users are allowed to provision processing, storage, networks, and other fundamental computing resources, whereas users of SaaS may only use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client (e.g., a web browser or a program interface) or a fat client (e.g., a tower personal computer). Depending on the accessibility of a cloud service, a cloud can be deployed as a private cloud, a community cloud, a public cloud, or a hybrid cloud (Subashini and Kavitha 2011). The choice of a cloud deployment method depends on the concerns of the users, such as missions, security policies, and compliance considerations (Marinescu 2013).
Despite deployment … the limitations of time or distance. Redmond et al. (2012) found that a main problem of the industry foundation classes (IFCs) was the difficulty of storing and carrying relevant data for all multifeatured construction processes. They therefore proposed a cloud-based platform for seamless data exchange among the various disciplines at the early stage of a project. The proposed platform harnesses the capability of the IFC extensible markup language (XML) with simplified markup language (SML) subsets of XML for parametric modeling and interaction. Recently, Zhang and Issa (2012) compared various BIM cloud computing systems, including Revit Server, Revit Cloud, and STRATUS, and general-purpose cloud services such as Advance 2000 and Amazon. Based on the findings, they proposed a framework for making decisions on adopting cloud computing for BIM applications in specific contexts. The Autodesk 360 suite of cloud-based BIM services (Field, Glue, etc.) is an example of a commercially available package (Autodesk 2014).
Methodology

To develop the BIMCS, a three-step roadmap was followed, as shown in Fig. 1:
1. The first step focuses on developing a list of metrics that are suitable for benchmarking BIM performance. As discussed above, a prerequisite of benchmarking is an agreement on the definitions and measures of the common set of metrics (Costa et al. 2006). First, an initial list of metrics was proposed based on the existing literature and modeling theories (Sargent 2005; Succar 2009; Succar et al. 2013; Underwood and Isikdag 2010). Then, a survey was distributed to domain experts to validate the list. The pool of respondents included architects, contractors, engineers, owners, and BIM software developers from the AEC industry. They were asked to select the metrics that they believed were irrelevant to the context and to provide metrics that they believed to be critical for BIM performance benchmarking. Another purpose of the survey was to obtain the subjective weights of the proposed metrics by asking the domain experts to respond to five-point Likert scale questions (Issa and Suermann 2009). At the time of writing this paper, the work for step 1 is ongoing.
2. The second step aims to perform the cloud-based benchmarking of BIM performance. Since multiple metrics are considered, the benchmarking starts with a multivariate evaluation (Fornell 1982). For any BIM project, the performance on each of the 20 metrics will first be scored individually according to predetermined standards. Then, based on the initial weights obtained from step 1, the overall performance score of a BIM project is calculated as the weighted summation of the subscores of the 20 metrics. A cloud-based questionnaire is used to collect the calculated BIM performance scores. With the permission of the end users, BIMCS automatically calculates the performance score and submits the information to a cloud server maintained for the purposes of the study. The information is then aggregated into a database called the BIM performance database. A probability density function (PDF) will be generated on the basis of the aggregated data. End users of BIMCS are therefore able to compare their performance scores to those of other industry peers. BIMCS can be implemented as an add-in for off-the-shelf BIM software or as a pure web application. Also of importance is the fact that BIM performance depends on the project type and organizational scale. An inquiry about background information, such as project type and magnitude, will be sent to the end users before the benchmarking process takes effect. The data collected in step 2 can later be classified, grouped, and processed for further analysis based on the associated background information. Such background information can be used to filter the benchmarking results, because users may only be interested in particular sectors.
3. The third step is designed to improve the system on the basis of the accumulated information. The quality of benchmarking depends on an appropriate list of metrics and a scientific weighting of the metrics (Costa et al. 2006). The BIMCS developed in step 1 is based on the perceptions of domain experts. Once sufficient data is obtained in the BIM performance database, a set of data mining analyses will be performed to generate a more proper weighting system for the metrics. Possible methods include the maximum entropy method, principal component analysis, and the multiple correlation method (Diakoulaki et al. 1995; Srdjevic et al. 2004; Sheng et al. 2007). A factor analysis (Var 1998) will also be applied to find a new list of metrics that are linear combinations of the original list. This process will be repeated through several iterations until an objective and unbiased benchmarking system is obtained.

The importance of incorporating the opinions of domain experts in the final version of the benchmarking system is also recognized. As a result, subjective weighting methods, such as Delphi sessions, will also be applied. These sessions will be based on the BIM performance database, and therefore informed discussions can be conducted. The final version of the BIMCS will reflect both the pattern of the collected data and the perspectives of the domain experts.

BIMCS: A BIM Performance Benchmarking Apparatus

Overview

The BIMCS is a web-based application for BIM performance benchmarking. The BIMCS utilizes the SaaS service model (Fig. 2). Compared to PaaS and IaaS, SaaS allows the users of BIMCS the full capacity for transporting the performance data while reducing the risk of manipulating the system to a minimal level. The BIMCS is deployed as a community cloud (Fig. 2). The community cloud environment ensures accessibility for an exclusive community of users from different organizations that have shared concerns, such as mission, policy, and compliance considerations (Marinescu 2013). Unlike a private cloud or a public cloud, it provides a secured cross-organizational environment that satisfies the requirements of the BIMCS.

A BIMCS user may choose to submit BIM-related information, such as generic background information and BIM model outputs, to a remote Hadoop distributed file system (HDFS) server (Borthakur 2007) called the benchmarking server. The benchmarking server aggregates the submitted information and calculates statistics of interest. The user can then see their own performance as a percentile, or probability of success, compared with all other industry peers. A set of filters may be set so that the user can query and focus only on the benchmarking aspects/metrics that the user is interested in.

Benchmarking Metrics

To facilitate the benchmarking of BIM performance metrics, the following requirements must be satisfied:
• Objective: Metrics should be objective. BIM users may have divergent perceptions and interpretations of a given metric. Objective metrics ensure that the measure of any metric is built on the same denotation.
• Quantifiable: Following the objectiveness requirement, metrics should also be quantifiable so that they can be definitively defined and defended.
• Inherent: Metrics should be inherent in the BIM database. In other words, measures of metrics must be able to be read and pulled directly from the BIM database. As a result, fast and automatic performance benchmarking can be realized.
• Generic: Metrics should focus on the common aspects of different BIM projects. Specific BIM projects may have very unique features that can hardly be compared against other projects, and thus benchmarking works on the common features that are valid in general settings. For example, the development durations of a complex BIM project and a simple BIM project are not comparable, but the number of objects developed per unit of time reflects the productivity of development to some extent. Therefore, the latter is a better candidate for a benchmarking metric.
• Representative: Metrics should be representative of the critical aspects of the BIM model and modeling processes.
Based on the above requirements, 20 metrics were developed, categorized into six aspects: (1) productivity, (2) effectiveness, (3) quality, (4) accuracy, (5) usefulness, and (6) economy (Table 1). The first two aspects quantify the performance of BIM modeling (the production), and the rest capture the performance of BIM models (the product). Some metrics rely on increasing values as good performance indicators (e.g., number of objects created per week), while others rely on decreasing values as good performance indicators (e.g., number of structural warnings/number of nonstructural warnings); therefore, the selected metrics need to be normalized before their use in BIMCS.
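The paper does not specify the normalization scheme, so the following is a minimal C# sketch assuming simple min-max normalization, in which "smaller is better" metrics are inverted so that every normalized sub-score reads as "higher is better." The class and method names are illustrative, not part of the BIMCS source:

using System.Linq;

public static class MetricNormalizer
{
    // Min-max normalization to [0, 1]. For metrics where a smaller raw value
    // indicates better performance (biggerIsBetter = false), the scale is
    // inverted so that a higher normalized score always means "better."
    public static double[] Normalize(double[] rawScores, bool biggerIsBetter)
    {
        double min = rawScores.Min();
        double max = rawScores.Max();
        if (max == min)
        {
            // No spread: all projects perform identically on this metric.
            return rawScores.Select(x => 0.5).ToArray();
        }
        return rawScores
            .Select(x => biggerIsBetter
                ? (x - min) / (max - min)
                : (max - x) / (max - min))
            .ToArray();
    }
}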
Cloud Benchmarking

Fig. 3 illustrates the architecture of the BIMCS. It applies the client-server model of centralized computing, i.e., off-loading computational tasks to a remote Hadoop server in the network (Nieh et al. 2000). There are four major components in this design:
• Client: A fat client that has many resources and does not rely on the server for essential functions, or a thin client that relies heavily on network resources for computation and storage;
• Internet: A computer network that uses the standard Internet protocol suite (transmission control protocol/IP) to serve as the communication channel;
• Web server: A web server that supports not only the hypertext transfer protocol (HTTP) but also server-side scripting using active server pages (ASP), hypertext preprocessor (PHP), or other scripting languages; and
• Benchmarking server: A Hadoop server (HDFS) that assumes the mass storage and benchmarking-related computation requirements.

The first step in implementing this architecture is to set up the benchmarking application, i.e., to configure the client for executing the necessary application files. Although the main benchmarking computation is done on the remote HDFS server, some application files need to be set up on the client to access the local BIM databases and to compute metric values. One solution is through a web browser application. Users access the benchmarking application server through the uniform resource locator (URL) of the SaaS, which sends the necessary files for executing the benchmarking application, such as JavaServer Pages (JSP), to the users' terminals. Port 80 is used to send requests to the benchmarking server as a pure HTTP endpoint. It is compatible with Microsoft Internet Explorer and therefore easier for the client side. Other ports, such as port 20, can be used to send back application files.

On the back end, the installed application files query the BIM database to compute metric values. IFC, as an industry standard, can be queried by the BIM server through Java (Liu 2012). On the front end, users have the ability to view and operate the benchmarking applications. The best way to implement this kind of interactive application is to use the model-view-controller (MVC) pattern (Krasner and Pope 1988). MVC is a classic design pattern that depends on three major components: (1) models for maintaining data, (2) views for display purposes, and (3) controllers for handling user-initiated events (Krasner and Pope 1988). The other way to set up the benchmarking application is through add-ins to off-the-shelf BIM software. The Revit database can be queried in Visual Studio through C#. The following are sample code lines for such an add-in in Revit:

using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Drawing;
using System.Text;
using Autodesk.Revit;
using Autodesk.Revit.ApplicationServices;
using Autodesk.Revit.DB;
using Autodesk.Revit.UI;

namespace BIMCloudScore
{
    /// <summary>
    /// This class contains the data about object changes (obtained from Revit).
    /// </summary>
    public class ChangesofObjects
    {
        private UIApplication m_revit;      // Application of Revit
        private WireFrame m_sketch;         // Profile information of the opening
        private BoundingBox m_boundingBox;  // BoundingBox of the opening

        public UIApplication Revit
        { ... }

        // Other methods not shown
        ...
    }
}

After the application is set up on the client, metric scores are metered locally and then sent to the benchmarking server to calculate the overall performance scores.
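To illustrate the local metering step, the sketch below is a minimal Revit external command that counts the element instances in the active model, the raw input behind the "number of objects created per week" metric in Table 1. Only the Revit API types (IExternalCommand, FilteredElementCollector, TaskDialog) are standard; the class name and the reporting of the count are assumptions:

using Autodesk.Revit.Attributes;
using Autodesk.Revit.DB;
using Autodesk.Revit.UI;

namespace BIMCloudScore
{
    [Transaction(TransactionMode.Manual)]
    public class CountObjectsCommand : IExternalCommand
    {
        // Counts the placeable element instances in the active model; sampled
        // over time, this count feeds the productivity metrics of Table 1.
        public Result Execute(ExternalCommandData commandData,
            ref string message, ElementSet elements)
        {
            Document doc = commandData.Application.ActiveUIDocument.Document;
            int objectCount = new FilteredElementCollector(doc)
                .WhereElementIsNotElementType()
                .ToElementIds()
                .Count;
            TaskDialog.Show("BIMCS", "Objects in model: " + objectCount);
            return Result.Succeeded;
        }
    }
}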
Table 1. BIM Performance Benchmarking Metrics

Modeling (production)

Productivity (how fast a BIM model was developed):
• Number of objects created per week = number of objects/duration (weeks). Indicates on average how many objects were created per week.
• Number of absolute object number changes per week = number of absolute changes of object number/duration (weeks). Indicates the absolute object number changes per week, which may include created or removed objects.
• Model LOD per number of coordination meetings = model LOD/number of coordination meetings. Indicates on average how many meetings were needed to increase the LOD a level.
• Project data changes per week = total number of project data entries/duration (weeks). Shows on average how many nongeometric project parameters (entries) were entered per week.

Effectiveness (how effectively a BIM model was developed):
• Variance of QTO = Σ(QTO produced by the model at each phase − average QTO)²/number of phases. Consistent QTO prediction indicates a more effective development process.
• Number of steps per object = total number of modeling steps/total number of objects. Shows on average how many steps it took to develop an object; too many steps per object indicates ineffective development (e.g., corrections/rework).
• Average changes per object = number of object changes/number of objects. Shows on average how many times an object was changed during development; more changes indicate a less effective development.

Model (product)

Quality (degree to which a set of inherent characteristics of the BIM model fulfills the desired requirements):
• Number of warnings per object = total number of warnings/total number of objects. More warnings indicate a lower-quality model.
• Criticality of warnings = number of structural warnings/number of nonstructural warnings. Structural warnings (e.g., warnings pertaining to beams and columns) are more critical; this ratio shows the criticality of the warnings.
• Consistency of 3D model and 2D references = number of errors in reference to 2D deliverables/number of objects. Reflects how consistent the BIM-generated 2D drawings are with the 2D drawings in use.
• Models' analytical reporting quality = number of objects that need to be modified or added before reporting/number of objects. Before utilizing the BIM model in any analytical study (such as a structural study), if more objects need to be modified or added, the model is less reliable.

Accuracy (degree to which the BIM model precisely reflects the …):
• QTO accuracy = summation of the absolute differences between the BIM-yielded QTO and the actual quantity obtained by the end of the project. A lagging indicator showing the ability of a BIM model to predict quantity …

Note: LOD = level of development; MB = megabyte; QTO = quantity takeoff; SF = square foot.
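As a worked instance of one tabulated formula, the following C# sketch computes the effectiveness metric "variance of QTO" exactly as Table 1 defines it; the class and method names are illustrative:

using System;
using System.Linq;

public static class EffectivenessMetrics
{
    // Variance of QTO per Table 1: sum over phases of
    // (QTO at that phase - average QTO)^2, divided by the number of phases.
    // Lower variance = more consistent QTO predictions across phases,
    // i.e., a more effective development process.
    public static double QtoVariance(double[] qtoPerPhase)
    {
        if (qtoPerPhase == null || qtoPerPhase.Length == 0)
            throw new ArgumentException("At least one phase is required.");
        double average = qtoPerPhase.Average();
        return qtoPerPhase.Sum(q => Math.Pow(q - average, 2)) / qtoPerPhase.Length;
    }
}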
Following the multicriteria evaluation process, the final score of a BIM project is given as a weighted summation (Fornell 1982):

X_i = \sum_{j=1}^{m} \omega_j x_{ij}, \quad i = 1, 2, 3, \ldots, n    (1)

where X_i is the performance score of the ith BIM project in the database; \omega_j is the weight of the jth metric (obtained by standardizing the survey results); x_{ij} is the score of the ith BIM project on the jth metric; m is the number of metrics (20 in the current version); and n is the number of BIM projects contained in the database. The X_i and the scores of the separate metrics are sent to the benchmarking server as strings encoded in the UTF-8 (universal character set transformation format, 8-bit) format (Unicode Consortium 2011). UTF-8 is compatible with ASCII and is not as complicated as UTF-16 and UTF-32. More importantly, it is increasingly being used as the default character encoding in application programming interfaces, which typically serve as the basis of add-ins to off-the-shelf BIM software.

In a similar fashion, all users will send performance metrics information to the benchmarking server, as shown in Fig. 4. Upon reception, the performance information is first formatted and processed into a range of structured query language (SQL) tables. As a standard domain-specific language (DSL), SQL is suitable for the relational database queries of the BIMCS (ISO 2008). Then, the benchmarking server will aggregate the metrics information, denoted as X:

X = \{X_1, X_2, X_3, \ldots, X_n\}    (2)
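A minimal C# sketch of Eq. (1), computing a project's overall score X_i from its metric sub-scores x_ij and the current weights ω_j; the names are illustrative, not BIMCS source code. On the server side, the aggregation of Eq. (2) is then simply the collection of these X_i values over all n projects:

using System;

public static class Scoring
{
    // Eq. (1): X_i = sum over j of (w_j * x_ij), the weighted summation of a
    // project's normalized sub-scores over the m benchmarking metrics
    // (m = 20 in the current version of BIMCS).
    public static double OverallScore(double[] weights, double[] subScores)
    {
        if (weights.Length != subScores.Length)
            throw new ArgumentException("One weight is required per metric.");
        double score = 0.0;
        for (int j = 0; j < weights.Length; j++)
            score += weights[j] * subScores[j];
        return score;
    }
}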
…data-processing applications (Welch et al. 2008). Using the master-slave structure, many inexpensive commodity servers are deployed to yield scalable data-processing power (Welch et al. 2008). HDFS provides reliable interfaces to the BIMCS application and delivers the high reliability and scalability of distributed systems.

Posterior Reformation

BIMCS is dynamic in nature. It evolves as information accumulates. Fig. 6 illustrates how the system is reformed when sufficient performance data has been obtained in the BIM performance database. In particular, three aspects of the system are to be improved to reflect the pattern of the performance data: (1) the weights of the metrics, (2) the metrics themselves, and (3) the classification.

Reform Weights of Metrics

In the survey (step 1), respondents were asked to weight the importance of the BIM benchmarking metrics, and these weights are used to calculate the BIM performance score in the earlier phase of the benchmarking process. When there is a lack of existing data, this method is easier to implement. However, the data should speak for itself (Shannon 1948). As data starts to accumulate in the BIM performance database, the importance of the metrics can be calculated by investigating the pattern of the data. Metric importance has multifold meanings in statistics: (1) metric variability (Zeleny and Cochrane 1982) reflects how much information is contributed by a metric; if the value of a metric is more volatile, the metric contains more useful information (Shannon 1948) and is more important; (2) metric independence (Datta et al. 2009) reflects the pure information explained by a metric; if a metric is more independent of the other metrics, it contains less repeated information and is thus more important; and (3) metric distinguishing capability (Greco et al. 2001) reflects the capability of a metric to distinguish the differences between samples; if removing a metric would significantly change the classification of the outcomes, the metric is more important. Following the above notions, a variety of objective weighting methods can be applied to reform the weights of the metrics based on the solicited performance data, including the coefficient of variation method (Zeleny and Cochrane 1982), the maximum entropy method (Srdjevic et al. 2004), the intercriteria correlation method (Diakoulaki et al. 1995), the rough set method (Greco et al. 2001), and the principal component analysis method (Sheng et al. 2007). For example, following Shannon's information theory (Shannon 1948), entropy is a measure of a system's disorder (or uncertainty) that can be used to quantify the expected value of the information contained in a message. If a benchmarking metric contains more information (i.e., its entropy is larger), it should be assigned a larger importance weight.
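The weighting formulas are not reproduced in this excerpt, so the following C# sketch illustrates only the entropy idea under a literal reading of the sentence above: each metric's observed scores are normalized into a distribution, the Shannon entropy of that distribution is computed, and the weight is set proportional to the entropy. (The established entropy-weight method instead uses 1 − normalized entropy; either rule plugs into the same skeleton.) All names are illustrative:

using System;
using System.Linq;

public static class EntropyWeights
{
    // scoresByMetric[j][i] is the observed (non-negative) score of project i
    // on metric j. Returns one weight per metric, proportional to the Shannon
    // entropy of that metric's normalized score distribution, summing to 1.
    public static double[] FromObservedScores(double[][] scoresByMetric)
    {
        double[] entropies = scoresByMetric.Select(ShannonEntropy).ToArray();
        double total = entropies.Sum();
        if (total == 0)
            throw new InvalidOperationException("No information in any metric.");
        return entropies.Select(h => h / total).ToArray();
    }

    private static double ShannonEntropy(double[] values)
    {
        double sum = values.Sum();
        double h = 0.0;
        foreach (double v in values)
        {
            if (v <= 0) continue; // p*log(p) tends to 0 as p tends to 0
            double p = v / sum;
            h -= p * Math.Log(p);
        }
        return h;
    }
}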
Fig. 6. Reforming BIM cloud score based on the aggregated performance data
…allows users to upload their BIM performance information to the server, and the third function displays the benchmarking results. After the monitoring function is started, the BIMCS will continuously screen and monitor the BIM database and meter the scores of each performance metric on the back end. The user's BIM modeling activities will not be affected. If the start/terminate uploading function is turned on, the performance information will be uploaded to the benchmarking server automatically on a regular basis. The uploaded information is classified, processed, and aggregated in the remote server.

Then, the user can view the results using the add-in. By clicking "view benchmarking result," a window pops up and shows the results as a probability distribution curve and as tabular results (Fig. 8). The weights of the metrics are also shown on the right-hand side. Users may revise the weights according to their own needs. The final result of BIMCS is shown as a percentile value (from 0 to 100%). The percentile value shows the proportion of assessed projects that the given BIM project outperforms. For example, if a BIM project's result is 95%, it means that 95% of the assessed projects are worse than it, or that it is in the top 5%.
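A minimal sketch of that percentile computation: count how many aggregated project scores fall below the assessed project's score. The names are illustrative:

using System;
using System.Linq;

public static class Benchmarking
{
    // Returns the percentage (0-100) of previously assessed projects whose
    // overall scores fall below the given project's score; a result of 95
    // means the project outperforms 95% of its peers (top 5%).
    public static double PercentileRank(double projectScore, double[] allScores)
    {
        if (allScores == null || allScores.Length == 0)
            throw new ArgumentException("No benchmarking data available.");
        int outperformed = allScores.Count(s => s < projectScore);
        return 100.0 * outperformed / allScores.Length;
    }
}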
Filters are also provided with the results. Users may check the benchmarking results for specific categories, such as productivity and quality. Users may also view the results for specific market sectors, such as commercial or residential.

A detailed analysis report may also be generated (Fig. 9). It provides a new window with a summary presentation of the benchmarking results. In the detailed analysis section, the performance on each of the 20 metrics is reviewed and analyzed.

When sufficient data has been collected, BIMCS can be updated automatically. One possible update is to the weights of the metrics. For example, the statistical analysis of the aggregated benchmarking data may find the variance of the number of objects developed per month to be bigger than that of the other metrics. Eqs. (3)–(5) will be used to recalculate the importance weights of the 20 metrics. The updated weights are sent to the BIMCS add-in automatically with a notice to the end users, giving the end users the option of adopting the new weights. This procedure can be done on a regular basis to reflect the latest trends in BIM performance. Another significant update of BIMCS is to the list of metrics. For example, a factor analysis may find that the scores for how often the model is accessed are highly correlated with the scores for the ease of construction documentation creation, which indicates that these two metrics capture similar performance aspects and should be combined into one. If a new list of metrics is suggested, it will be sent to the user add-in for validation, and if the new metrics are accepted, a new version of BIMCS will be launched for installation.

Discussion

The authors have introduced the architecture and implementation method of a cloud-based application called BIMCS for benchmarking BIM performance. Two unique features make the success of this system possible:
1. BIMCS provides an open environment for BIM users, where they can identify their competitiveness with respect to all of their industry competitors. It provides sufficient accessibility to practitioners and is free of charge. This feature is attractive to the practitioners and companies who wish to continuously improve their performance in BIM utilization by identifying their gaps with the industry's best practitioners.
2. BIMCS is able to correct/adjust itself based on the big data collected nationwide. A variety of data mining techniques will be applied to dynamically improve the benchmarking metrics, the metric importance weights, and the classification method. Each newer version of the BIMCS reflects the direct pattern of the national BIM performance data and provides an overall view of the status quo of the user's BIM utilization.
Because the modeling process of BIM, such as the productivity of object development, is of concern, it is a nontrivial task to monitor and meter the performance metrics in the BIMCS. One challenge is that time stamps should be implanted into the BIM databases to document the dynamic changes of the performance metrics. For example, to monitor the rework of a BIM model or the access frequency of a BIM model, the exact time stamps of such changes should be well documented. Therefore, an add-in called BIM diagnostics will be developed in the future. The add-in has two functions:
1. It creates a log file for each BIM model to record the changes, model outcomes, and project indicators that were retrieved from the BIM model at different times (see the sketch after this list).
2. It provides BIM model forensic analysis for comparing the differences between two BIM models or between the same BIM model at different times.
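Function 1 amounts to an append-only, time-stamped log per model. A minimal sketch of such a log entry writer is shown below; the entry format and the file naming are assumptions for illustration, not the planned BIM diagnostics design:

using System;
using System.IO;

public class ModelChangeLogger
{
    private readonly string _logPath;

    // One log file per BIM model (the naming convention is an assumption).
    public ModelChangeLogger(string modelName)
    {
        _logPath = modelName + ".bimcs.log";
    }

    // Appends a time-stamped record of a change, model outcome, or project
    // indicator retrieved from the BIM model.
    public void Append(string category, string description)
    {
        string line = string.Format("{0:o}\t{1}\t{2}",
            DateTime.UtcNow, category, description);
        File.AppendAllText(_logPath, line + Environment.NewLine);
    }
}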
The BIM diagnostics add-in should be able to communicate seamlessly with the BIM benchmarking database. The next task will be focused on the development of BIM diagnostics. Although the proposed application has the potential to be utilized by BIM …
Conclusions

… BIM utilization, a variety of BIM performance evaluation frameworks have been proposed. Because of the designs of these frameworks, they are mostly used for evaluating an organization's performance in BIM utilization, i.e., for assessing the level of BIM utilization within an organization. In contrast, benchmarking, compared to internal evaluation, was found to be more helpful for BIM utilizers in making decisions about improvement plans. Through cross-organizational comparison, an organization can locate its competitive position among all of its industry peers. Then, lessons learned from other organizations can be used to establish improvement targets and to promote changes in the organization.

This paper introduces a cloud-based BIM performance benchmarking application called BIMCS to automatically collect BIM performance data from a wide range of BIM users nationwide. Twenty validated benchmarking metrics are used to quantify BIM utilization performance in terms of the BIM model (the product) and BIM modeling (the production). BIMCS is a community cloud environment that utilizes SaaS to make the collection, aggregation, and presentation of benchmarking data autonomous and interactive. Based on the big data collected in the BIMCS database, an overall view of the industry status quo of BIM utilization may be obtained, and ultimately, a protocol for BIM performance may be developed on the basis of this improved knowledge discovery process.
References

Achtert, E., Böhm, C., Kriegel, H.-P., Kröger, P., Müller-Gorman, I., and Zimek, A. (2007). "Detection and visualization of subspace cluster hierarchies." Advances in databases: Concepts, systems and applications, Springer, Berlin, Germany, 152–163.
Aggarwal, C. C., Wolf, J. L., Yu, P. S., Procopiuc, C., and Park, J. S. (1999). "Fast algorithms for projected clustering." Proc., Association for Computing Machinery Special Interest Group on Management of Data (ACM SIGMOD) Record, Association for Computing Machinery, New York, 61–72.
Agrawal, R., Gehrke, J., Gunopulos, D., and Raghavan, P. (1998). "Automatic subspace clustering of high dimensional data for data mining applications." Proc., Association for Computing Machinery Special Interest Group on Management of Data (ACM SIGMOD) Record, Association for Computing Machinery, New York.
Anderberg, M. R. (1973). "Cluster analysis for applications." Defense technical information center (DTIC) document, Academic Press, New York.
Autodesk. (2014). "Revit overview." 〈http://www.autodesk.com/products/autodesk-revit-family/overview〉 (Jan. 15, 2014).
Barber, E. (2004). "Benchmarking the management of projects: A review of current thinking." Int. J. Project Manage., 22(4), 301–307.
Bartholomew, D. J., Knott, M., and Moustaki, I. (2011). Latent variable models and factor analysis: A unified approach, Wiley, West Sussex, U.K.
… J. Comput. Civ. Eng., 10.1061/(ASCE)CP.1943-5487.0000377, 04014066.
Chuang, T.-H., Lee, B.-C., and Wu, I.-C. (2011). "Applying cloud computing technology to BIM visualization and manipulation." Proc., 28th Int. Symp. on Automation and Robotics in Construction, International Association for Automation and Robotics in Construction (IAARC), Bratislava, Slovakia, 144–149.
Construction Excellence. (2013). "Construction best practice programme." 〈http://www.constructingexcellence.org.uk〉 (Jan. 15, 2014).
Construction Industry Institute. (2013). "CII benchmarking and metrics." 〈https://www.construction-institute.org/nextgen/index.cfm〉 (Jan. 15, 2014).
Costa, D., Formoso, C., Kagioglou, M., Alarcón, L., and Caldas, C. (2006). "Benchmarking initiatives in the construction industry: Lessons learned and improvement opportunities." J. Manage. Eng., 10.1061/(ASCE)0742-597X(2006)22:4(158), 158–167.
Datta, S., Nandi, G., Bandyopadhyay, A., and Pal, P. K. (2009). "Application of PCA-based hybrid Taguchi method for correlated multicriteria optimization of submerged arc weld: A case study." Int. J. Adv. Manuf. Technol., 45(3–4), 276–286.
Diakoulaki, D., Mavrotas, G., and Papayannakis, L. (1995). "Determining objective weights in multiple criteria problems: The critic method." Comput. Oper. Res., 22(7), 763–770.
Domeniconi, C., Papadopoulos, D., Gunopulos, D., and Ma, S. (2004). "Subspace clustering of high dimensional data." Proc., 2004 SIAM (Society for Industrial and Applied Mathematics) Int. Conf. on Data Mining, SIAM, Philadelphia, PA.
Fornell, C. (1982). A second generation of multivariate analysis. 2. Measurement and evaluation, Praeger Publishers, Westport, CT.
Friedman, J. H., and Meulman, J. J. (2004). "Clustering objects on subsets of attributes." J. R. Stat. Soc. Series B (Stat. Method.), 66(4), 815–849.
Garvin, D. A. (1993). "Building a learning organization." Harv. Bus. Rev., 71, 78–91.
Gong, J., and Azambuja, M. (2013). "Visualizing construction supply chains with Google cloud computing tools." Proc., ICSDEC 2012: Developing the Frontier of Sustainable Design, Engineering, and Construction, ASCE, Reston, VA, 671–678.
Gracia, J., and Bayo, E. (2013). "Integrated 3D web application for structural analysis software as a service." J. Comput. Civ. Eng., 10.1061/(ASCE)CP.1943-5487.0000217, 159–166.
Greco, S., Matarazzo, B., and Slowinski, R. (2001). "Rough sets theory for multicriteria decision analysis." Eur. J. Oper. Res., 129(1), 1–47.
Hagerty, M. R., and Land, K. C. (2007). "Constructing summary indices of quality of life: A model for the effect of heterogeneous importance weights." Socio. Meth. Res., 35(4), 445–496.
Handfield, R., Walton, S. V., Sroufe, R., and Melnyk, S. A. (2002). "Applying environmental criteria to supplier assessment: A study in the application of the analytical hierarchy process." Eur. J. Oper. Res., 141(1), 70–87.
International Organization for Standardization (ISO). (2008). "Information technology—Database languages—SQL—Part 1: Framework (SQL/framework)." ISO/International Electrotechnical Commission (IEC) …
Latu, K., Swain, N., Christensen, S., Jones, N., Nelson, E., and Williams, G. (2013). "Essential GIS technologies for hydrologic simulation applications in cloud computing." Proc., World Environmental and Water Resources Congress 2013, ASCE, Reston, VA.
Liang, F., and Luo, Y. (2013). "A framework of the civil engineering CAD experimental platform based on cloud computing." Proc., Int. Conf. on Construction and Real Estate Management 2013: Construction and Operation in the Context of Sustainability, ASCE, Reston, VA.
Linstone, H. A., and Turoff, M. (1975). The Delphi method, Addison-Wesley, Reading, MA.
Liu, J., Wang, H., Ge, Y., and Huang, J. (2013). "Application of multi-source information fusion technology in the construction of a secure and emergent transportation platform." Proc., Int. Conf. on Transportation Information and Safety 2013: Improving Multimodal Transportation Systems—Information, Safety, and Integration, ASCE, Reston, VA, 237–243.
Liu, R. (2012). "BIM-based life cycle information management: Integrating knowledge of facility management into design." Ph.D. thesis, Univ. of Florida, Gainesville, FL.
Lohuis, M. M., Dekkers, J., and Smith, C. (1992). "Probability of success and predicted returns of sires in progeny test programs." J. Dairy Sci., 75(6), 1660–1671.
Marinescu, D. C. (2013). Cloud computing: Theory and practice, Elsevier, Philadelphia, PA.
Marosszekey, M., and Karim, K. (1997). "Benchmarking—A tool for lean construction." Proc., 5th Int. Conf. of the International Group for Lean Construction, International Group for Lean Construction (IGLC), Gold Coast, Australia, 157–167.
Mell, P., and Grance, T. (2011). "The NIST definition of cloud computing." National Institute of Standards and Technology (NIST) special publication, Vol. 800, Gaithersburg, MD, 7.
Moore, D. S. (1978). "Chi-square tests." Studies in statistics, R. V. Hogg, ed., Vol. 19, Mathematical Association of America, Washington, DC.
Mulaik, S. A. (1987). "A brief history of the philosophical foundations of exploratory factor analysis." Multivariate Behav. Res., 22(3), 267–305.
National Institute of Building Sciences (NIBS). (2007). "National building information modeling standard—Version 1.0." NIBS Rep., Washington, DC.
National Institute of Building Sciences (NIBS). (2012). "National building information modeling standard—Version 2.0—Chapter 5.2 minimum BIM." NIBS Rep., Washington, DC.
Nieh, J., Yang, S. J., and Novik, N. (2000). "A comparison of thin-client computing architectures." Technical Rep. CUCS-022-00, Dept. of Computer Science, Columbia Univ., New York.
Procopiuc, C. M., Jones, M., Agarwal, P. K., and Murali, T. (2002). "A Monte Carlo algorithm for fast projective clustering." Proc., 2002 Association for Computing Machinery Special Interest Group on Management of Data (ACM SIGMOD) Int. Conf. on Management of Data …
… cess communication methodology: Improving the effectiveness and efficiency of collaboration, sharing, and understanding." J. Archit. Eng., 10.1061/(ASCE)AE.1943-5568.0000122, 05013001.
Shannon, C. (1948). "A mathematical theory of communication." Bell Syst. Tech. J., 27(3), 379–423.
Sheng, Z., Sun, S., Wang, J., and Chu, W. (2007). "Comprehensive evaluation of river water environmental quality based on the principal component analysis." Environ. Sci. Manage., 32(12), 172–175.
Srdjevic, B., Medeiros, Y., and Faria, A. (2004). "An objective multi-criteria evaluation of water management scenarios." Water Resour. Manage., 18(1), 35–54.
Subashini, S., and Kavitha, V. (2011). "A survey on security issues in service delivery models of cloud computing." J. Network Comput. Appl., 34(1), 1–11.
Succar, B. (2009). "Building information modelling framework: A research and delivery foundation for industry stakeholders." Autom. Constr., 18(3), 357–375.
Succar, B. (2010). "Building information modelling maturity matrix." Handbook of research on building information modelling and construction informatics: Concepts and technologies, J. Underwood and U. Isikdag, eds., IGI, Hershey, PA, 65–103.
Succar, B., Sher, W., and Williams, A. (2012). "Measuring BIM performance: Five metrics." Archit. Eng. Des. Manage., 8(2), 120–142.
Succar, B., Sher, W., and Williams, A. (2013). "An integrated approach to BIM competency assessment, acquisition and application." Autom. Constr., 35, 174–189.
Suermann, P., Issa, R., and McCuen, T. (2008). "Validation of the U.S. national building information modeling standard interactive capability maturity model." Proc., 12th Int. Conf. on Computing in Civil and Building Engineering, International Society for Computing in Civil and Building Engineering, Nottingham, U.K.
Suermann, P. C. (2009). "Evaluating the impact of building information modeling (BIM) on construction." Ph.D. dissertation, Univ. of Florida, Gainesville, FL.
Underwood, J., and Isikdag, U. (2010). Handbook of research on building information modeling and construction informatics: Concepts and technologies, Information Science Reference.
Unicode Consortium. (2011). "Unicode 6.0.0." 〈http://www.unicode.org/versions/Unicode6.0.0/〉 (Jan. 15, 2014).
Var, I. (1998). "Multivariate data analysis." Vectors, 8(2), 125–136.
Welch, B., et al. (2008). "Scalable performance of the Panasas parallel file system." Proc., File and Storage Technologies, USENIX (Advanced Computing Systems Association), Berkeley, CA, 1–17.
Yang, Y. (2006). "Weighting methods in multivariate evaluation." Stat. Decis., 2006(13), 17–19.
Zeleny, M., and Cochrane, J. L. (1982). Multiple criteria decision making, McGraw-Hill, New York.
Zhang, L., and Issa, R. (2012). "Comparison of BIM cloud computing frameworks." Proc., Computing in Civil Engineering (2012), ASCE, Reston, VA, 389–396.