

BIM Cloud Score: Benchmarking BIM Performance
Jing Du 1; Rui Liu 2; and Raja R. A. Issa, F.ASCE 3

Abstract: A variety of building information modeling (BIM) performance evaluation initiatives have been proposed to quantify the BIM utilization capacity of enterprises. These initiatives were designed for evaluating, rather than benchmarking, an organization's performance in BIM utilization. Unlike evaluation, which mainly focuses on ascertaining the achievement of BIM utilization within an organization, benchmarking is more interested in comparing one organization's BIM performance to that of its industry peers. By identifying gaps in specific areas, decisions to make improvements can be facilitated. This paper proposes a cloud-based BIM performance benchmarking application called building information modeling cloud score (BIMCS) to automatically collect BIM performance data from a wide range of BIM users nationwide. It utilizes the software as a service (SaaS) model of cloud computing to make the collection, aggregation, and presentation of benchmarking data autonomous and interactive. Based on the big data collected in the BIMCS database, an overall view of the industry status quo of BIM utilization may be obtained, and ultimately, a protocol for BIM performance can be developed on the basis of a better knowledge discovery process. Furthermore, BIMCS data will help individual companies compare and improve their performance in BIM utilization with respect to their industry competitors. DOI: 10.1061/(ASCE)CO.1943-7862.0000891. © 2014 American Society of Civil Engineers.
Author keywords: Building information modeling (BIM); Cloud computing; Building information modeling cloud score (BIMCS);
Benchmarking; Information technologies.

1 Assistant Professor, Dept. of Construction Science, Univ. of Texas at San Antonio, 501 W. César E. Chávez Blvd., San Antonio, TX 78207 (corresponding author). E-mail: jing.du@utsa.edu
2 Assistant Professor, Dept. of Construction Science, Univ. of Texas at San Antonio, 501 W. César E. Chávez Blvd., San Antonio, TX 78207. E-mail: rui.liu@utsa.edu
3 Univ. of Florida Research Foundation and Holland Professor, Rinker School of Construction Management, Univ. of Florida, P.O. Box 115703, Gainesville, FL 32611. E-mail: raymond-issa@ufl.edu

Introduction

A messy building information modeling (BIM) model will ultimately lead to a messy project, which is difficult to execute, control, and coordinate, and will probably introduce additional cost (Suermann 2009). To continuously increase the industry's awareness of BIM performance issues, a variety of BIM performance evaluation frameworks have been proposed to score enterprises' capacity for BIM development, technological maturity, and other administrative and technical perspectives of BIM performance (CIFE 2013; NIBS 2012; Succar 2009; Succar et al. 2013).

Despite the great success of current BIM performance evaluation initiatives, they are susceptible to omissions and several limitations. The greatest shortcoming is that current initiatives were designed for evaluation instead of benchmarking. Evaluation is the systematic process of determining the merit, value, and significance of the design, implementation, and outcomes of the evaluated subject (e.g., a program) (Rossi et al. 2004). The primary purpose of evaluation is to ascertain the degree of achievement of an organization, program, or project with regard to the aims, objectives, and results that have been achieved (Rossi et al. 2004). In contrast, benchmarking is the process of comparing one's business processes and performance metrics to the industry bests (Costa et al. 2006). The greatest difference between evaluation and benchmarking is the cross-organizational comparison. Through cross-organizational comparison, an organization can learn how well the targets perform and, more importantly, the business processes that explain why the leading organizations are successful. Then, lessons learned from other organizations can be used to establish improvement targets and promote changes in the organization (Barber 2004).

Cross-organizational benchmarking, compared to internal evaluation, can significantly improve an organization's performance by explicitly identifying the gaps with its best peers (Costa et al. 2006). Unfortunately, current BIM performance evaluation initiatives have several limitations in their designs that make benchmarking harder. First, most initiatives were originally designed for internal evaluation of BIM utilization within a single organization (NIBS 2007; Succar et al. 2013); the metrics used are either subjective or overspecific and thus unsuitable for benchmarking purposes. Second, although certain BIM evaluation initiatives were claimed to support benchmarking (CIFE 2013; Sebastian and van Berlo 2010), they use self-reported questionnaires as the source of data, which is questionable in a benchmarking process. Last, many initiatives are commercialized (not free of charge), which makes it difficult to collect sufficient data for statistical conclusions. Without a proper benchmarking function, these initiatives can hardly improve BIM performance, point to future directions, or identify gaps easily. Following the above notions, two challenges need to be addressed:
1. Can a system be designed that allows automated BIM performance benchmarking?
2. How can enough data be aggregated for benchmarking BIM performance?
To overcome these challenges, this study proposes a free cloud-based BIM performance benchmarking application called building information modeling cloud score (BIMCS) to automatically collect BIM performance data from a wide range of BIM users nationwide. The overall BIM performance is quantified based on two critical aspects: the BIM model (product) and BIM modeling (process).
By comparing their BIM performance scores against industry peers, users of BIMCS are able to assess the competitive position of their BIM performance, to identify gaps in their BIM model and modeling processes, and to decide on improvement directions. Facilitated by the software as a service (SaaS) model of cloud computing, the collection, aggregation, and presentation of benchmarking data in BIMCS are autonomous and interactive. Furthermore, based on the big data collected from the nationwide benchmarking process, BIMCS makes in-depth data mining possible to discover critical BIM performance aspects and the groupings of BIM utilization areas and market sectors. It provides an overall view of the status of the industry's BIM utilization efforts. Ultimately, a protocol of BIM performance may be developed on the basis of the better knowledge discovery of BIM utilization provided by BIMCS.

Literature Review

BIM Evaluation

Since 2007, there have been various attempts at evaluating BIM performance in the construction industry. One of the pioneering efforts was the national BIM standard's interactive capability maturity model (ICMM) (NIBS 2007). ICMM is an interactive Microsoft Excel worksheet that helps organizations measure the degree to which their BIM projects implemented a mature BIM standard (NIBS 2007). Eleven metrics were used to evaluate information management pertaining to BIM utilization, including data richness, life-cycle views, roles or disciplines, business processes, timeliness of response, delivery methods, graphical information, spatial capability, information accuracy, change management, and interoperability/industry foundation classes (IFC) support. ICMM was designed for internal use only and not for comparing BIM projects across organizations (Suermann et al. 2008).

Since 2009, Succar (Succar 2010; Succar et al. 2012, 2013) has developed a series of BIM evaluation frameworks to measure the quality of individuals, teams, and organizations that implement BIM. His first model, the BIM maturity matrix, aimed to evaluate the administrative capacity of an organization in utilizing BIM, rather than assessing a specific BIM-assisted project (Succar 2010). Ten key maturity areas (KMAs) were used to assign an organization to one of five maturity levels (Succar 2010). Shortly after this work, Succar et al. (2012) developed a BIM competency index to quantify the abilities of individuals (BIM developers) within professional and academic settings. This framework highlights individuals as a key metric of BIM performance measurement. Competency is then measured as an individual competency index (ICI) consisting of five levels of increasing competence: (1) none, (2) basic, (3) intermediate, (4) advanced, and (5) expert. These levels are treated as significant predictors of BIM performance outcomes (Succar et al. 2013). Recently, the above framework has been integrated into a larger project called BIM excellence (BIMe) (Succar et al. 2013). BIMe provides a 360-degree BIM performance assessment tool to assess individuals, organizations, teams, or projects according to multiple competency areas.

Unlike ICMM and Succar's works, certain BIM performance evaluation initiatives declared themselves as benchmarking tools from the very beginning. One such work is the virtual design and construction (VDC) and BIM scorecard developed by Stanford's Center for Integrated Facility Engineering (CIFE 2013). This framework evaluates the maturity of virtual design and BIM practices on specific projects on the basis of the results of four survey forms pertaining to planning, adoption, technology, and performance (CIFE 2013). Users are able to benchmark BIM performance against industry standards. Another representative framework, QuickScan, was introduced in the Netherlands in 2009 by the Netherlands Organization for Applied Scientific Research (TNO) (Sebastian and van Berlo 2010). According to TNO, QuickScan is a benchmarking tool to evaluate the BIM performance of organizations utilizing BIM. Four categories of criteria are used: (1) organization and management, (2) mentality and culture, (3) information structure and information flow, and (4) tools and applications, with each category consisting of weighted key performance indicators (KPIs). Similar to the CIFE VDC and BIM scorecard, the QuickScan performance data are collected via multiple-choice questionnaires from BIM management personnel (Sebastian and van Berlo 2010). These benchmarking instruments are based on self-reported questionnaires as the source of data and thus tend to be questionable.

Benchmarking

Benchmarking is a structured approach of measuring and comparing an organization's processes, activities, and performance against those of other organizations in key business areas (Barber 2004). Although in practice benchmarking may refer to multiple types, such as process benchmarking, strategy benchmarking, and performance benchmarking (Bogan and English 1994), this paper is concerned with performance benchmarking, i.e., focused on the comparison of key performance outcomes.

During the process of benchmarking, decision makers identify the best peers in the same area and compare the performance outcomes of those studied organizations (called targets) to their own performance outcomes (Costa et al. 2006). In this process, several benefits can be obtained. The greatest benefit is that it allows an organization to assess its competitive position in the industry. Performance outcomes are not only quantified but also ranked and shown as a percentile value among industry competitors (Bendell et al. 1993). It is also possible to convert such percentile values to the possibility of success (Lohuis et al. 1992). The benchmarking process can also encourage innovation (Garvin 1993). Benchmarking involves a community of similar organizations that compare results and share practices for continuous improvement. This circumstance establishes benchmarking clubs where people share the same concerns and are willing to learn from the best practices (Costa et al. 2006). Finally, benchmarking can also raise the awareness of decision makers about the key business processes of an organization. It helps managers understand the ways that lead to better performance (Camp and Camp Robert 1995). According to Garvin (1993), it also encourages managers to act more proactively in business processes rather than depending exclusively on outcomes.

In recognition of the importance of benchmarking, various benchmarking initiatives have been established in the construction industry. Representative works include the following:
• Construction industry institute benchmarking and metrics (CII BM&M) (Construction Industry Institute 2013);
• U.K.'s construction best practice programme (CBPP) (Construction Excellence 2013);
• National benchmarking system for the Chilean construction industry (NBS-Chile) (Ramírez et al. 2004);
• Benchmarking for lean construction (Marosszekey and Karim 1997); and
• Other initiatives developed in Brazil, Chile, and Hong Kong (Costa et al. 2006).
Although serving different regions with different foci and targeted at different market sectors, these construction benchmarking initiatives are all aimed at developing common definitions and guidance for performance measurement, providing performance norms for the industry, and disseminating the identified best practices via reports and benchmarking networks. Certain benchmarking initiatives also attempt to establish support networks for participants and domain experts to discuss the best practices in the industry and determine how to implement them for performance improvement (Construction Industry Institute 2013).

Current benchmarking initiatives face three challenges:
• One challenge is a commonly agreed list of benchmarking metrics. For example, one of the CII BM&M's objectives is to reach an "agreement on common definitions for metrics of performance and practice use" (Costa et al. 2006). The metrics used in benchmarking must be specific enough for all organizations. The metrics typically measured are quality, time, and cost. Others may include cost per unit of measure, productivity per unit of measure, cycle time of x per unit of measure, or defects per unit of measure (Construction Industry Institute 2013). However, by comparing four major benchmarking initiatives, including CII BM&M and CBPP, Costa et al. (2006) still found significant divergence among different benchmarking initiatives, suggesting the lack of a benchmarking protocol in the construction industry.
• The second challenge is the weighting method for benchmarking metrics. Although the literature has suggested multiple objective weighting methods (Datta et al. 2009; Greco et al. 2001; Zeleny and Cochrane 1982), it was noticed that many of these benchmarking initiatives still apply subjective weighting methods and rely extensively on the opinions of domain experts. This is questionable when a benchmarking system is designed for a wider range of subjects.
• The last challenge is the method used to collect benchmarking data. Although some benchmarking initiatives, such as the CII BM&M (Construction Industry Institute 2013), have created web applications to make the benchmarking process friendlier, this process is still not fully open to industry practitioners (for example, CII requires membership for its benchmarking service) and is not autonomous (most benchmarking initiatives require manual input of performance data). Cloud computing can be used to address the above challenges.

Cloud Computing

According to NIST, cloud computing is a model for "enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction" (Mell and Grance 2011). NIST (Mell and Grance 2011) identified three service models that are currently employed by cloud computing: (1) software as a service (SaaS), (2) platform as a service (PaaS), and (3) infrastructure as a service (IaaS). Users have different capacities for exercising control over the components of the infrastructure. For example, IaaS users are allowed to provision processing, storage, networks, and other fundamental computing resources, whereas users of SaaS may only use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client (e.g., a web browser or a program interface) or a fat client (e.g., a tower personal computer). Depending on the accessibility of a cloud service, a cloud can be deployed as a private cloud, community cloud, public cloud, or hybrid cloud (Subashini and Kavitha 2011). The choice of a cloud deployment method depends on the concerns of the users, such as missions, security policies, and compliance considerations (Marinescu 2013). Despite deployment differences, a cloud infrastructure may be owned, managed, and operated by the organizations themselves, third-party service providers, or some combination of them.

Cloud computing has attracted increasing interest from the architecture, engineering, and construction (AEC) industry. Applications can be grouped into three categories: (1) cloud computing in construction management (Gong and Azambuja 2013; Kumar et al. 2010; Liang and Luo 2013; Senescu et al. 2014); (2) cloud computing in engineering design and analysis (Chen et al. 2013; Gracia and Bayo 2013; Karamouz et al. 2013; Latu et al. 2013; Liu et al. 2013); and (3) cloud computing in BIM. In particular, BIM cloud applications focus on utilizing cloud technologies to advance the collaboration, interoperability, and usability of BIM applications. Chuang et al. (2011) developed a web application for BIM visualization and operation by utilizing the SaaS aspect of cloud computing. Users can operate 3D BIM models through the web without the limitations of time or distance. Redmond et al. (2012) found that a main problem of the industry foundation classes (IFCs) was the difficulty of storing and carrying relevant data for all multifeatured construction processes. They therefore proposed a cloud-based platform for seamless data exchange among various disciplines at the early stage of a project. The proposed platform harnesses the capability of IFC extensible markup language (XML) with simplified markup language (SML) subsets of XML for parametric modeling and interaction. Recently, Zhang and Issa (2012) compared various BIM cloud computing systems, including Revit Server, Revit Cloud, and STRATUS, and general-purpose cloud services such as Advance 2000 and Amazon. Based on the findings, they proposed a framework for making decisions on adopting cloud computing for BIM applications in specific contexts. The Autodesk 360 suite of cloud-based BIM services (Field, Glue, etc.) is an example of a commercially available package (Autodesk 2014).

Methodology

To develop the BIMCS, a three-step roadmap was followed, as shown in Fig. 1:
1. The first step focuses on developing a list of metrics that are suitable for benchmarking BIM performance. As discussed above, a prerequisite of benchmarking is an agreement on the definitions and measures of a common set of metrics (Costa et al. 2006). At first, an initial list of metrics was proposed based on existing literature and modeling theories (Sargent 2005; Succar 2009; Succar et al. 2013; Underwood and Isikdag 2010). Then, a survey was distributed to domain experts to validate the list. The pool of respondents included architects, contractors, engineers, owners, and BIM software developers from the AEC industry. They were asked to select the metrics that they believed were irrelevant to the context and to provide metrics that they believed to be critical for BIM performance benchmarking. Another purpose of the survey was to obtain the subjective weights of the proposed metrics by asking domain experts to respond to five-point Likert scale questions (Issa and Suermann 2009). At the time of writing this paper, the work for step 1 is ongoing.
2. The second step aims to perform the cloud-based benchmarking of BIM performance. Since multiple metrics are considered, the benchmarking starts with a multivariate evaluation (Fornell 1982). For any BIM project, the performance on each of the 20 metrics will first be scored individually according to predetermined standards.
Then, based on the initial weights obtained from step 1, the overall performance score of a BIM project is calculated as the weighted summation of the subscores of the 20 metrics. A cloud-based questionnaire is used to collect the calculated BIM performance scores. With the permission of the end users, BIMCS automatically calculates the performance score and submits the information to a cloud server maintained for the purposes of the study. The information is then aggregated into a database called the BIM performance database. A probability density function (PDF) will be generated on the basis of the aggregated data. End users of BIMCS are therefore able to compare their performance scores to other industry peers. BIMCS can be implemented as an add-in for off-the-shelf BIM software or as a pure web application. Also of importance is the fact that BIM performance depends on the various project types and their organizational scales. An inquiry about background information, such as project type and magnitude, will be sent out to the end users before the benchmarking process takes effect. The data collected from step 2 can later be classified, grouped, and processed for further analysis based on the associated background information. Such background information can be used as filters of the benchmarking results because users may only be interested in particular sectors.
3. The third step is designed to improve the system on the basis of accumulated information. The quality of benchmarking depends on an appropriate list of metrics and scientific weighting of the metrics (Costa et al. 2006). The BIMCS developed in step 1 is based on the perceptions of domain experts. Once sufficient data are obtained in the BIM performance database, a set of data mining analyses will be performed to generate a more proper weighting system for the metrics. Possible methods include the maximum entropy method, principal component analysis, and the multiple correlation method (Diakoulaki et al. 1995; Srdjevic et al. 2004; Sheng et al. 2007). A factor analysis (Var 1998) will also be applied to find a new list of metrics that are linear combinations of the original list. This process will be repeated through several iterations until an objective and unbiased benchmarking system is obtained.

The importance of incorporating the opinions of domain experts in the final version of the benchmarking system is also recognized. As a result, subjective weighting methods, such as Delphi sessions, will also be applied. These sessions will be based on the BIM performance database, and therefore informed discussions can be conducted. The final version of the BIMCS will reflect the pattern of the collected data and the perspectives of the domain experts.

Fig. 1. Methodology for developing BIM cloud score

BIMCS: A BIM Performance Benchmarking Apparatus

Overview

The BIMCS is a web-based application for BIM performance benchmarking. The BIMCS utilizes the SaaS service model (Fig. 2). Compared to PaaS and IaaS, SaaS allows the users of BIMCS to have the full capacity to transmit the performance data while reducing the risk of manipulating the system to a minimal level. The BIMCS is deployed as a community cloud (Fig. 2). The community cloud environment ensures the accessibility of an exclusive community of users from different organizations that have shared concerns such as mission, policy, and compliance considerations (Marinescu 2013). Unlike a private cloud or a public cloud, it provides a secured cross-organizational environment that satisfies the requirements of the BIMCS.

A BIMCS user may choose to submit BIM-related information, such as generic background information and BIM model outputs, to a remote Hadoop distributed file system (HDFS) server (Borthakur 2007) called the benchmarking server. The benchmarking server aggregates the submitted information and calculates statistics of interest. The user can then see their own performance as a percentile or probability of success compared with all other industry peers. A set of filters may be set so that the user can query and focus only on the benchmarking aspects/metrics that the user is interested in.

Benchmarking Metrics

To facilitate the benchmarking of BIM performance metrics, the following requirements must be satisfied:
• Objective: Metrics should be objective. BIM users may have divergent perceptions and interpretations of a given metric. Objective metrics ensure that the measure of any metric is built on the same denotation.
• Quantifiable: Following the objectiveness requirement, metrics should also be able to be quantified so that they can be definitively defined and defended.
• Inherent: Metrics should be inherent in the BIM database. In other words, measures of metrics must be able to be read and pulled directly from the BIM database. As a result, fast and automatic performance benchmarking can be realized.
• Generic: Metrics should focus on the common aspects of different BIM projects.
Specific BIM projects may have very unique features that can hardly be compared against other projects; thus, benchmarking works on the common features that are valid in general settings. For example, the development duration of a complex BIM project and that of a simple BIM project are not comparable, but the number of objects developed per unit of time reflects the productivity of development to some extent. Therefore, the latter is a better candidate for a benchmarking metric.
• Representative: Metrics should be representative of critical aspects of the BIM model and modeling processes.
Based on the above requirements, 20 metrics were developed that are categorized into six aspects: (1) productivity, (2) effectiveness, (3) quality, (4) accuracy, (5) usefulness, and (6) economy (Table 1). The first two aspects quantify the performance of BIM modeling (production), and the rest capture the performance of BIM models (product). Some metrics rely on increasing values as good performance indicators (e.g., number of objects created per week), while others rely on decreasing values as good performance indicators (e.g., number of structural warnings/number of nonstructural warnings); therefore, the metrics selected need to be normalized before their use in BIMCS.

Fig. 2. BIM cloud score as SaaS

Cloud Benchmarking

Fig. 3 illustrates the architecture of the BIMCS. It applies the client-server model of centralized computing, i.e., off-loading computational tasks to a remote Hadoop server in the network (Nieh et al. 2000). There are four major components in this design:
• Client: A fat client that has many resources and does not rely on the server for essential functions, or a thin client that relies heavily on network resources for computation and storage;
• Internet: A computer network that uses the standard Internet protocol suite (TCP/IP) to serve as the communication channel;
• Web server: A web server that supports not only hypertext transfer protocol (HTTP) but also server-side scripting using active server pages (ASP), hypertext preprocessor (PHP), or other scripting languages; and
• Benchmarking server: A Hadoop server (HDFS) that assumes the mass storage and benchmarking-related computation requirements.
The first step in implementing this architecture is to set up the benchmarking application, i.e., configuring the client for executing the necessary application files. Although the main benchmarking computation is done on the remote HDFS server, some application files need to be set up in the client to access local BIM databases and to compute metric values. One solution is through a web browser application. Users access the benchmarking application server through the uniform resource locator (URL) of SaaS, which sends the necessary files for executing the benchmarking application, such as Java Server Pages (JSP), to the users' terminals. Port 80 is used to send requests to the benchmarking server as a pure endpoint for HTTP. It is compatible with Microsoft Internet Explorer and therefore easier for the client side. Other ports, such as Port 20, can be used to send back application files.

On the back end, the installed application files query the BIM database to compute metric values. IFC as an industry standard can be queried by the BIM server through Java (Liu 2012). On the front end, users have the ability to view and operate benchmarking applications. The best way to implement this kind of interactive application is to use the model-view-controller (MVC) pattern (Krasner and Pope 1988). MVC is a classic design pattern that depends on three major components: (1) models for maintaining data, (2) views for display purposes, and (3) controllers for handling user-initiated events (Krasner and Pope 1988). The other way to set up the benchmarking application is through add-ins to off-the-shelf BIM software. The Revit database can be queried in Visual Studio through C#. The following are sample code lines for such an add-in in Revit:

using System;
using System.Collections.Generic;
using System.Text;
using Autodesk.Revit.DB;
using Autodesk.Revit.UI;
using Autodesk.Revit.ApplicationServices;
using System.Drawing;
using Autodesk.Revit;
using System.Collections.ObjectModel;

namespace BIMCloudScore
{
    /// <summary>
    /// This class contains the data about object changes (obtained from Revit).
    /// </summary>
    public class ChangesofObjects
    {
        private UIApplication m_revit;        // Application of Revit
        private WireFrame m_sketch;           // Profile information of opening
        private BoundingBox m_boundingBox;    // BoundingBox of opening

        public UIApplication Revit
        { /* ... */ }

        // Other methods not shown
        // ...
    }
}

After the application is set up in the client, metric scores will be metered locally. Then, they are sent to the benchmarking server to calculate overall performance scores.
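As an illustration of how one metric might be metered locally on the back end, the hedged sketch below uses the Revit API to count model elements and derive the number of objects created per week from the project duration. It is a minimal sketch only: the BIMCloudScore namespace is reused from the sample above, but the ProductivityMeter class, the MeterObjectsPerWeek method, and the durationInWeeks parameter are illustrative assumptions rather than the paper's actual implementation.

using Autodesk.Revit.DB;

namespace BIMCloudScore
{
    // Hypothetical helper, shown only to illustrate how one Table 1 metric
    // could be metered from the BIM database on the client side.
    public static class ProductivityMeter
    {
        // Counts model elements (excluding element types) and divides by the
        // project duration to approximate "number of objects created per week".
        public static double MeterObjectsPerWeek(Document doc, double durationInWeeks)
        {
            if (durationInWeeks <= 0)
                throw new System.ArgumentException("Duration must be positive.");

            // FilteredElementCollector enumerates elements in the model database.
            int objectCount = new FilteredElementCollector(doc)
                .WhereElementIsNotElementType()
                .GetElementCount();

            return objectCount / durationInWeeks;
        }
    }
}

A value metered this way would then be normalized and combined with the other metric scores before being submitted to the benchmarking server.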


Table 1. Proposed BIMCS Benchmarking Metrics

Modeling (production)

Productivity (how fast a BIM model was developed):
• Number of objects created per week. Measurement: number of objects/duration (weeks). It indicates on average how many objects were created per week.
• Number of absolute object number changes per week. Measurement: number of absolute changes of object number/duration (weeks). It indicates the absolute object number change per week, which may include created or removed objects.
• Model LOD per number of coordination meetings. Measurement: model LOD/number of coordination meetings. It indicates on average how many meetings were needed to increase the LOD by a level.
• Project data changes per week. Measurement: total number of project data entries/duration (weeks). It shows on average how many nongeometric project parameters (entries) were entered per week.

Effectiveness (how effectively a BIM model was developed):
• Variance of QTO. Measurement: (QTO produced by the model at each phase − average QTO)²/number of phases. Consistent QTO prediction indicates a more effective development process.
• Number of steps per object. Measurement: total number of modeling steps/total number of objects. It shows on average how many steps it took to develop an object; too many steps per object indicates ineffective development (e.g., corrections/rework).
• Average changes per object. Measurement: number of object changes/number of objects. It shows on average how many times an object was changed during development; more changes indicate a less effective development.

Model (product)

Quality (degree to which a set of inherent characteristics of BIM models fulfills desired requirements):
• Number of warnings per object. Measurement: total number of warnings/total number of objects. More warnings indicate a lower quality model.
• Criticality of warnings. Measurement: number of structural warnings/number of nonstructural warnings. Structural warnings (e.g., warnings pertaining to beams and columns) are more critical; this ratio shows the criticality of warnings.
• Consistency of 3D model and 2D references. Measurement: number of errors in reference to 2D deliverables/number of objects. It reflects how consistent the BIM-generated 2D drawings are in reference to the 2D drawings in use.
• Models' analytical reporting quality. Measurement: number of objects that need to be modified or added before reporting/number of objects. Before utilizing the BIM model in any analytical study (such as a structural study), if more objects need to be modified or added, it reflects a less reliable model.

Accuracy (degree to which the BIM models precisely reflect the physical real-world conditions of a project):
• QTO accuracy. Measurement: summation of the absolute difference between the BIM-yielded QTO and the actual quantity obtained by the end of the project. A lagging indicator showing the ability of a BIM model to predict quantities.
• Discrepancies between each discipline's models. Measurement: number of discrepancies between each discipline's model per number of objects. It captures the situation where information in one discipline's model contradicts information in another discipline's model.
• Average number of generic objects per assembly. Measurement: number of generic objects per number of assemblies. This refers to the quality of the assemblies that are modeled; a bigger portion of generic objects means very little data to extract.
• Constructability (clash detection). Measurement: number of detected clashes per number of objects. More detected clashes indicate a less accurate model.

Usefulness (how useful the model information and geometry are over the project lifecycle):
• How often the model gets accessed. Measurement: number of times the model was accessed per duration (weeks). A more accessed model indicates that it was more useful in the project lifecycle.
• Ease of construction documentation creation. Measurement: number of objects that need to be added or modified before documentation creation per number of objects. Before generating a construction document, if more objects need to be modified or added, it reflects a less useful model.
• Reliability of model data for end users during operations and maintenance. Measurement: number of discrepancies between the BIM model and the actual physical building. It indicates how useful a BIM model can be during operations and maintenance.

Economy (degree to which the BIM model is constructed in the most cost- and time-efficient manner):
• File size per SF. Measurement: file size (MB) per SF of the building (at a certain LOD). A larger file size in relation to the square footage of the modeled building indicates a less economical (or more complex) model.
• Number of objects created per SF. Measurement: number of objects per SF of the building. A larger number of objects in relation to the square footage of the modeled building indicates a less economical (or more complex) model.

Note: LOD = level of development; MB = megabyte; QTO = quantity takeoff; SF = square foot.
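Because some Table 1 metrics are better when larger (e.g., objects created per week) and others are better when smaller (e.g., warnings per object), the raw values need to be brought to a common direction before they enter the weighted score. The sketch below is one plausible min-max normalization under that assumption; the MetricDefinition type and the HigherIsBetter flag are illustrative names, not part of the published system.

using System;
using System.Collections.Generic;
using System.Linq;

namespace BIMCloudScore
{
    // Hypothetical metric descriptor used only for this illustration.
    public class MetricDefinition
    {
        public string Name { get; set; }
        public bool HigherIsBetter { get; set; } // objects/week = true, warnings/object = false
    }

    public static class MetricNormalizer
    {
        // Min-max normalization to [0, 1], flipped for metrics where smaller raw values are better.
        public static double[] Normalize(MetricDefinition metric, IList<double> rawValues)
        {
            double min = rawValues.Min();
            double max = rawValues.Max();
            double range = max - min;

            return rawValues
                .Select(v => range == 0 ? 0.5 : (v - min) / range)   // scale to [0, 1]
                .Select(s => metric.HigherIsBetter ? s : 1.0 - s)    // flip "smaller is better" metrics
                .ToArray();
        }
    }
}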


Fig. 3. Architecture of BIM cloud score

Following the multicriteria evaluation process, the final score of a BIM project is given as a weighted summation (Fornell 1982)

X_i = \sum_{j=1}^{m} \omega_j x_{ij}, \quad i = 1, 2, 3, \ldots, n \qquad (1)

where X_i is the performance score of the ith BIM project in the database; \omega_j is the weight of the jth metric (obtained by standardizing survey results); x_{ij} is the score of the ith BIM project on the jth metric; m is the number of metrics (in the current version, 20); and n is the number of BIM projects contained in the database. X_i and the scores of the separate metrics are sent to the benchmarking server as strings encoded in UTF-8 format (universal character set transformation format, 8-bit) (Consortium 2011). UTF-8 is compatible with ASCII and is not as complicated as UTF-16 and UTF-32. More importantly, it is increasingly being used as the default character encoding in application programming interfaces, which typically serve as the basis of add-ins to off-the-shelf BIM software.

In a similar fashion, all users will send performance metrics information to the benchmarking server, as shown in Fig. 4. Upon reception, the performance information will first be formatted and processed in a range of structured query language (SQL) tables. As a standard domain-specific language (DSL), SQL is suitable for the relational database queries of the BIMCS (ISO 2008). Then, the benchmarking server will aggregate the metrics information, denoted as X

X = \{X_1, X_2, X_3, \ldots, X_n\} \qquad (2)

X is a collection of the performance scores of n BIM projects. It is used to fit the probability distribution of all submitted data. Statistical tests such as the chi-squared test are then applied to find the best-fitting PDFs (Moore 1976). A set of statistics may also be calculated to demonstrate the pattern of the results, including μ, σ, skewness, kurtosis, and critical percentiles that are of interest.

Fig. 4. Benchmarking BIM performance (clients submit performance data and requests to the benchmarking server, which aggregates them and returns benchmarking results)

The users may opt into a process of benchmarking specific BIM performance metrics, rather than the overall performance. The set of filters shown in Fig. 5 can be set to classify the BIM projects in the performance database so that the user can tailor the query and focus only on the areas or aspects of interest.

Given the large amount of data to be processed in BIMCS (typically gigabytes to terabytes in size), the Apache HDFS is used to deploy the benchmarking server (Borthakur 2007). HDFS is a standards-based open-source framework built on Google's MapReduce and the Google file system (GFS). It is designed to leverage the power of massive parallel computation to deal with Big Data, i.e., data sets that are too large and complex for traditional data processing applications (Welch et al. 2008). Using a master-slave structure, many inexpensive commodity servers are deployed to yield scalable data processing power (Welch et al. 2008). HDFS provides reliable interfaces to the BIMCS application and builds high reliability and scalability into the distributed system.
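To make Eqs. (1) and (2) concrete, the hedged sketch below computes a project's overall score as the weighted sum of its normalized metric scores and then reports the empirical percentile of that score against the aggregated collection X. It only illustrates the arithmetic: the method names and the use of an in-memory list in place of the HDFS/SQL back end are assumptions made for brevity.

using System;
using System.Collections.Generic;
using System.Linq;

namespace BIMCloudScore
{
    public static class ScoreCalculator
    {
        // Eq. (1): X_i = sum_j (w_j * x_ij), assuming weights and metric scores are aligned by index.
        public static double OverallScore(double[] weights, double[] metricScores)
        {
            if (weights.Length != metricScores.Length)
                throw new ArgumentException("One weight is required per metric.");

            return weights.Zip(metricScores, (w, x) => w * x).Sum();
        }

        // Empirical percentile of a project's score within the collection X of Eq. (2):
        // the share of benchmarked projects whose overall score is lower.
        public static double Percentile(double projectScore, IEnumerable<double> allScores)
        {
            var scores = allScores.ToList();
            if (scores.Count == 0) return 0.0;

            int outperformed = scores.Count(s => s < projectScore);
            return 100.0 * outperformed / scores.Count;
        }
    }
}

For example, a project whose overall score is higher than 95% of the stored scores would be reported at the 95th percentile, matching the interpretation given later in the implementation example.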


Fig. 5. Filters of BIM cloud score (market sector: residential, commercial, industrial, heavy civil, health care, other; annual volume: ≥1 billion, 100 MM–999 MM, 50 MM–99 MM, 5 MM–49 MM, <5 MM; project delivery method: design-bid-build, CM at risk, CM agency, design-build, IPD)

Posterior Reformation

BIMCS is dynamic in nature. It evolves as information accumulates. Fig. 6 illustrates how the system is reformed when sufficient performance data are obtained in the BIM performance database. In particular, three aspects of the system are to be improved to reflect the pattern of the performance data: (1) weights of metrics, (2) metrics, and (3) classification.

Reform Weights of Metrics

In the survey (step 1), respondents were asked to weight the importance of the BIM benchmarking metrics, which are used to calculate the BIM performance score in the earlier phase of the benchmarking process. When there is a lack of existing data, this method is easier to implement. However, the data should speak for themselves (Shannon 1948). As data start to accumulate in the BIM performance database, the importance of metrics can be calculated by investigating the pattern of the data. Metric importance has multifold meanings in statistics: (1) metric variability (Zeleny and Cochrane 1982) reflects how much information is contributed by a metric; if the value of a metric is more volatile, the metric contains more useful information (Shannon 1948) and is more important; (2) metric independence (Datta et al. 2009) reflects the pure information explained by a metric; if a metric is more independent of other metrics, it contains less repeated information and thus is more important; and (3) metric distinguishing capability (Greco et al. 2001) reflects the capability of a metric in distinguishing the difference between samples; if removing a metric will significantly change the classification of the outcomes, the metric is more important. Following the above notion, a variety of objective weighting methods can be applied to reform the weights of metrics based on the solicited performance data, including the coefficient of variation method (Zeleny and Cochrane 1982), the maximum entropy method (Srdjevic et al. 2004), the intercriteria correlation method (Diakoulaki et al. 1995), the rough set method (Greco et al. 2001), and the principal component analysis method (Sheng et al. 2007). For example, following Shannon's information theory (Shannon 1948), entropy is a measure of a system's disorder (or uncertainty) that can be used to quantify the expected value of the information contained in a message. If a benchmarking metric contains more information (i.e., its entropy is bigger), it should be assigned a larger importance weight.

Fig. 6. Reforming BIM cloud score based on the aggregated performance data
To calculate the entropy of a metric, the results of the metrics should first be standardized

a_{ij} = \frac{x_{ij}}{\sum_{i=1}^{n} x_{ij}}, \quad j = 1, 2, 3, \ldots, m \qquad (3)

where x_{ij} is the performance score of the ith BIM project on the jth metric; m is the number of metrics (in the current version, 20); and n is the number of BIM projects contained in the database. Then, Shannon's entropy of the jth metric is given by

H_j = -k \sum_{i=1}^{n} a_{ij} \ln a_{ij}, \quad k = \frac{1}{\ln n} \qquad (4)

The importance weight of the jth metric is given by

w_j = \frac{1 - H_j}{m - \sum_{j=1}^{m} H_j} \qquad (5)

Hagerty and Land (2007) recognized the importance of the value judgments of domain experts in multicriteria evaluation. Such judgments should be made in an educated and structured manner. With the collected performance data in the BIM performance database, a range of methods can be employed to elicit educated opinions from domain experts. These methods include the Delphi method (Linstone and Turoff 1975) and analytical hierarchy processing (Handfield et al. 2002). These methods are designed to reach a consensus among a group by utilizing repetitive and informed information and statistical tests. The results from these methods will be integrated with those from the objective weighting methods.

Reform Metrics

The initial list of benchmarking metrics may contain overlapping information. There are likely more fundamental metrics that are harder to identify at the early phase of benchmarking, which are called latent variables (Bartholomew et al. 2011). Unlike observable variables, latent variables are those that cannot be directly observed but are rather inferred from other variables that are directly measured (Bartholomew et al. 2011). Finding latent variables is critical to the success of BIMCS because certain aspects of BIM performance may be impossible to measure with a single metric and may require a linear combination of several metrics (Mulaik 1987). Factor analysis is used to find a new list of metrics that are independent of each other (Var 1998). Suppose there are n BIM project data points and m metrics. All the data in the BIM performance database can then be denoted as an m × n matrix, X_{m×n}. Then, the mathematical model of the factors is

(X - \mu)_{m \times n} = L_{m \times n} Z_{m \times n} + \varepsilon_{m \times n} \qquad (6)

where \mu_{m×n} = mean matrix of X_{m×n}; L_{m×n} = loading matrix; Z_{m×n} = factor matrix; and \varepsilon_{m×n} = matrix of independent error terms. It is also known that

E(Z_{m \times n}) = 0 \quad \text{and} \quad \mathrm{Cov}(Z_{m \times n}) = I \qquad (7)

Therefore, L_{m×n} can be found by solving

\mathrm{Cov}(X - \mu)_{m \times n} = L_{m \times n} L_{m \times n}^{T} + \psi \qquad (8)

Z_{m×n} contains a list of new metrics that are linear combinations of the original metrics. The new metrics are independent of each other, and several of them account for the maximum possible variation of the data.

Reform Classification

The classification of BIM projects, such as the one shown in Fig. 5, can be improved based on cluster analysis of the accumulated information in the database (Anderberg 1973). Cluster analysis groups outcomes in such a way that outcomes in the same group are more similar to each other than to those in other groups. In the BIMCS, this means that BIM projects with the same group membership are more similar in terms of metric scores and overall performance. Compared to the original classification of BIM projects shown in Fig. 5, the classification that emerges from the data may be less biased. It may provide a better guideline for users to query the benchmarking results and focus only on their areas of interest.

A challenge of applying cluster analysis to the BIM performance database is its high dimensionality. Beyer et al. (1999) have proven that distance-based cluster methods fail as dimensionality increases. This is because when dimensionality reaches a certain point, as few as 10-15 dimensions, the distance to the nearest data point approaches the distance to the farthest data point

\lim_{d \to \infty} \frac{dist^{d}_{\max} - dist^{d}_{\min}}{dist^{d}_{\min}} = 0 \qquad (9)

As a result, high-dimensional cluster analysis methods are used in BIMCS to generate a new classification of BIM projects. Available methods include PROCLUS (Aggarwal et al. 1999), CLIQUE (Agrawal et al. 1998), LAC (Domeniconi et al. 2004), COSA (Friedman and Meulman 2004), and hybrid methods such as DOC (Procopiuc et al. 2002) and DiSH (Achtert et al. 2007). Because of the limited scope of this study, the details of these methods will not be discussed.

Consistency Test

Kendall's rank correlation analysis is used in BIMCS to check whether results from multiple weighting methods are consistent (Kendall 1938). Kendall's rank correlation analysis builds on a statistic called Kendall's tau (τ) to measure the similarity between two ranking results. The τ is obtained from

\tau_B = \frac{n_c - n_d}{\sqrt{(n_0 - n_1)(n_0 - n_2)}} \qquad (10)

where n_c = number of concordant pairs; n_d = number of discordant pairs; n_0 = n(n-1)/2; n_1 = \sum_i t_i(t_i - 1)/2, where t_i is the number of tied values in the ith group of ties for the first ranking result; and n_2 = \sum_j u_j(u_j - 1)/2, where u_j is the number of tied values in the jth group of ties for the second ranking result.

If Kendall's rank correlation analysis cannot be satisfied, it suggests a significant discrepancy between different weighting methods. Then, compound weighting methods are to be used to integrate the results, including the minimum bias synthesis method, the multiplicative synthesis method, the Spearman synthesis method, and the sum of squared deviation (SSD) based synthesis method (Yang 2006).
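The entropy-based reform of metric weights described by Eqs. (3)-(5) can be sketched as follows. This is a minimal illustration assuming the performance scores are available in memory as a projects-by-metrics matrix, which stands in for the BIM performance database; the class and method names are illustrative, not the system's actual code.

using System;
using System.Linq;

namespace BIMCloudScore
{
    public static class EntropyWeighting
    {
        // scores[i, j] = score of BIM project i on metric j (values assumed positive).
        public static double[] ReformWeights(double[,] scores)
        {
            int n = scores.GetLength(0);            // number of projects
            int m = scores.GetLength(1);            // number of metrics
            if (n < 2) throw new ArgumentException("At least two projects are required.");

            double k = 1.0 / Math.Log(n);           // Eq. (4): k = 1 / ln n
            var entropy = new double[m];

            for (int j = 0; j < m; j++)
            {
                double columnSum = 0.0;
                for (int i = 0; i < n; i++) columnSum += scores[i, j];

                double h = 0.0;
                for (int i = 0; i < n; i++)
                {
                    double a = scores[i, j] / columnSum;     // Eq. (3): standardization
                    if (a > 0) h -= k * a * Math.Log(a);     // Eq. (4): Shannon entropy
                }
                entropy[j] = h;
            }

            double denominator = m - entropy.Sum();          // Eq. (5) denominator
            return entropy.Select(h => (1.0 - h) / denominator).ToArray();
        }
    }
}

Metrics whose columns vary little across projects have entropy close to 1 and receive small weights, whereas highly variable metrics receive larger weights, which is the behavior described above.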


Implementation Example

The following example shows how the BIMCS is implemented. The BIMCS can be installed as an add-in to Revit (Autodesk 2014). After installing it, a link is created under the external tools tab of Revit (Fig. 7). There are three main functions: (1) start/terminate monitoring, (2) start/terminate uploading information, and (3) view benchmarking results. The first function controls the start or end of the monitoring actions. The second function allows users to upload their BIM performance information to the server, and the third function displays the benchmarking results.

Fig. 7. Add-in of Revit (hypothetical case)

After starting the monitoring function, the BIMCS will screen and monitor the BIM database continuously and meter the scores of each performance metric on the back end. The user's BIM modeling activities will not be affected. If start/terminate uploading information is turned on, performance information will be uploaded to the benchmarking server automatically on a regular basis. The uploaded information is classified, processed, and aggregated in the remote server.

Then, the user can view the results using the add-in. By clicking view benchmarking result, a window will pop up and show the results as a probability distribution curve and tabular results (Fig. 8). The weights of the metrics are also shown on the right-hand side. Users may revise the weights according to their own needs. The final result of BIMCS is shown as a percentile value (from 0 to 100%). The percentile value shows the proportion of projects that the assessed BIM project outperforms. For example, if a BIM project's result is 95%, it means that 95% of the assessed projects are worse than it, or that it is in the top 5%.

Filters are also provided with the results. Users may check the benchmarking results for specific categories such as productivity and quality. Users may also view the results for specific market sectors such as commercial or residential.

A detailed analysis report may also be generated (Fig. 9). It provides a new window for a summary presentation of the benchmarking results. In the detailed analysis section, performance based on each of the 20 metrics is reviewed and analyzed.

When sufficient data have been collected, BIMCS can be updated automatically. One possible update is the weights of the metrics. For example, the statistical analysis of the aggregated benchmarking data may find the variance of the number of objects developed per month to be bigger than that of other metrics. Eqs. (3)-(5) will be used to recalculate the importance weights of the 20 metrics. The updated weights are sent to the BIMCS add-in automatically with a notice to the end users, giving the end users the option of adopting the new weights. This procedure can be done on a regular basis to reflect the latest trend of BIM performance. Another significant update of BIMCS is the list of metrics. For example, a factor analysis may find that the scores of how often the model gets accessed are highly correlated with the scores of ease of construction documentation creation, which indicates that these two metrics capture similar performance aspects and should be combined into one. If a new list of metrics is suggested, it will be sent to the user add-in for validation, and if the new metrics are accepted, a new version of BIMCS will be launched for installation.

Discussion

The authors have introduced the architecture and implementation method of a cloud-based application called BIMCS for benchmarking BIM performance. Two unique features make the success of this system possible:
1. BIMCS provides an open environment for BIM users, where they can identify their competitiveness with respect to all their industry competitors. It provides sufficient accessibility to practitioners and is free of charge. This feature is attractive to the practitioners and companies who wish to continuously improve their performance in BIM utilization by identifying their gaps with the industry's best practitioners.
2. BIMCS is able to correct/adjust itself based on the Big Data collected nationwide. A variety of data mining techniques will be applied to dynamically improve the benchmarking metrics, metric importance weights, and classification method. Each newer version of the BIMCS reflects the direct pattern of the national BIM performance data and provides an overall view of the status quo of the user's BIM utilization.
Fig. 8. Showing benchmarking results in Revit (hypothetical case)

Because the modeling process of BIM, such as the productivity of object development, is of concern, it is a nontrivial task to monitor and meter performance metrics in the BIMCS. One challenge is that time stamps should be embedded in BIM databases to document the dynamic change of performance metrics. For example, to monitor the rework of a BIM model or the access frequency of a BIM model, the exact time stamps of such changes should be well documented. Therefore, an add-in called BIM diagnostics will be developed in the future. The add-in has two functions:
1. It creates a log file for each BIM model to record the changes, model outcomes, and project indicators that were retrieved from the BIM model at different times.
2. It provides BIM model forensic analysis for comparing the difference between two BIM models or the same BIM model at different times.
The BIM diagnostics add-in should be able to communicate seamlessly with the BIM benchmarking database. The next task will be focused on the development of BIM diagnostics.

Fig. 9. Sample report of BIM cloud score
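A minimal sketch of what the planned BIM diagnostics log could record is given below. The add-in itself has not yet been built, so the ChangeLogEntry fields and the append-only text format are purely illustrative assumptions about how time-stamped model changes might be documented for later forensic comparison.

using System;
using System.IO;

namespace BIMCloudScore
{
    // Hypothetical log record for the future BIM diagnostics add-in.
    public class ChangeLogEntry
    {
        public DateTime TimeStamp { get; set; }   // exact time of the change or access
        public string ModelName { get; set; }
        public string ElementId { get; set; }
        public string Action { get; set; }        // e.g., "Created", "Modified", "Deleted", "Accessed"
    }

    public static class DiagnosticsLog
    {
        // Appends one time-stamped entry per change so that two snapshots of the
        // same model can later be compared in a forensic analysis.
        public static void Append(string logFilePath, ChangeLogEntry entry)
        {
            string line = string.Join("\t",
                entry.TimeStamp.ToString("o"),    // ISO 8601 round-trip format
                entry.ModelName,
                entry.ElementId,
                entry.Action);

            File.AppendAllText(logFilePath, line + Environment.NewLine);
        }
    }
}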


Although the proposed application has the potential to be utilized by BIM coordinators (mainly employed by general contractors), it currently targets BIM authors (mainly employed by subcontractors and designers) as its main users. The proposed metrics aim to capture the technical aspects of the development process and the final products of BIM, and they should be used by BIM authors to examine and compare the performance of BIM development efforts. In fact, these metrics were developed and validated based on a survey in which 60.9% of the respondents were architects, BIM developers, or BIM managers, and 41.3% of the respondents were from architecture companies, compared with 13% from general contractors. Therefore, the present version of BIMCS mainly meets the needs of BIM authors.
Conclusion

In order to focus the industry's attention on the performance of BIM utilization, a variety of BIM performance evaluation frameworks have been proposed. Because of their design, these frameworks are mostly used for evaluating an organization's performance in BIM utilization, i.e., for assessing the level of BIM utilization within an organization. In contrast, benchmarking, compared with internal evaluation, was found to be more helpful to BIM users in making decisions about improvement plans. Through cross-organizational comparison, an organization can locate its competitive position among its industry peers; lessons learned from other organizations can then be used to establish improvement targets and to promote changes in the organization.

This paper introduces a cloud-based BIM performance benchmarking application called BIMCS to automatically collect BIM performance data from a wide range of BIM users nationwide. Twenty validated benchmarking metrics are used to quantify BIM utilization performance in terms of the BIM model (the product) and BIM modeling (the production). BIMCS is a community cloud environment that utilizes SaaS to make the collection, aggregation, and presentation of benchmarking data autonomous and interactive. Based on the Big Data collected in the BIMCS database, an overall view of the industry's status quo of BIM utilization may be obtained and, ultimately, a protocol for BIM performance may be developed on the basis of this improved knowledge discovery process.
