
Article in Journal of Management in Engineering, December 2015. DOI: 10.1061/(ASCE)ME.1943-5479.0000416



Integration Evaluation Framework for Integrated Design Teams of Green Buildings:
Development and Validation

Rahman Azari1 and Yong-Woo Kim2


1 Assistant Professor, College of Architecture, Construction and Planning, University of Texas at San Antonio, 501 W. Cesar E. Chavez Blvd., San Antonio, TX 78207, USA; PH (210) 458-3010; Corresponding Author email: rahman.azari@utsa.edu
2 Associate Professor, Department of Construction Management, University of Washington, 120 Architecture Hall, Box 351610, Seattle, WA 98195, USA; PH (206) 616-1916; email: yongkim@uw.edu

ABSTRACT
The Integrated Design (ID) process encourages integration of team members in the design phase of green building projects through intense collaborative processes and free exchange of information. While 'integration' in general and ID in particular have been well theorized by the construction management research community, the field lacks a systematic mechanism to help owners, architects and managers of green project teams assess, in a practical manner, the level of integration in their projects' ID team environment. The key objective of the present article is therefore to use a qualitative-quantitative methodology to propose and validate an integration evaluation framework for green project teams, and to statistically test the association between integration level and project success. The framework can be used by green project teams for comparison, benchmarking or educational purposes as well as for integration evaluation and improvement in ID team environments. This research also provides empirical evidence for anecdotal claims of a positive link between team integration and project success in green projects.
Keywords: Evaluation, integration, integrated design, green buildings, validation, CIPP

INTRODUCTION
Due to the complexity of green project requirements and the diversity of disciplines involved in the design process of these projects, owners and project team managers apply the Integrated Design (ID) process to ensure delivery of targeted outcomes. In the ID process, an integrated team of key stakeholders (owner, architect, contractors, suppliers, users, etc.) is formed early in the design process, and the members collaborate intensely in 'eco-charrettes', that is, inclusive brainstorming sessions, to define and agree upon project goals pertaining to sustainable design and construction and to collectively decide on important aspects of the project design. Similar to Integrated Project Delivery (IPD) and Concurrent Engineering (CE), the focus of ID teams is to create 'integration', but integration in ID teams differs in scope and expected outcome. Integration in IPD concerns the integration of information, leadership, agreements and processes (CMAA 2010), and in CE the integration of processes and people, while integration in the ID context is oriented toward integration of people (i.e., disciplines), information, and building systems in order to meet a project's green targets.
Previous studies in the field (Korkmaz 2007, Yudelson 2008, Kibert 2008, 7group and
Reed 2009) have well theorized the issue of integration in green construction projects and have
even anecdotally linked it to project success. However, only a few (e.g., Korkmaz 2007) offer tangible metrics for assessing integration in the ID team environment, and even these do not organize the metrics into a quantifiable evaluation framework. As a result, project teams and managers in green construction projects often lack a systematic mechanism to self-evaluate and quantify integration, a gap that has also been highlighted by researchers (Xue et al 2010) in the context of the construction field in general. The green construction community also needs empirical evidence for anecdotal suggestions about the link between integration and project success. A systematic and quantifiable integration evaluation framework can help teams not only understand the integration and collaboration level that exists between key participants in a green project but also identify the integration-related areas that need to be improved. This improvement of integration would in turn have direct positive impacts on project success, as the empirical results of this research suggest. Ideally, the statistical association between integration and project success can be used to predict success from the integration score.
The research questions of interest in this research include:
1. How can the degree of integration in ID team interactions be measured in the design process of green building projects?
2. Are green building projects with higher degrees of integration in the ID team
environment more successful?
To address the research questions, the authors conducted a sequential qualitative-
quantitative research to develop, implement and validate an evaluation framework to assess the
ID team integration in green building projects. More specifically, the research objectives were
framed as the following:
a. Develop the evaluation framework
• Identify the evaluation factors and indicators that could be used to measure the integration level within the ID team environment;
• Organize the factors/indicators in a systematic evaluation model;
• Propose a measurement format to quantify the evaluation
b. Implement and validate the proposed evaluation framework; and
c. Test the hypothesis that: ‘Higher integration in the ID team environment results in more
successful project outcomes’.

LITERATURE REVIEW
The concept of integration is widely used in the construction industry. Nam and Tatum
(1992) use integration as an antonym to disintegration which, according to them, is the outcome
of “incongruent goals and consequent divergent behaviors” in construction projects. They also
state that there is a close relationship between integration and cooperative project environments.
Similarly, Baiden and Price (2011) define integration as “where different disciplines or
organizations with different goals, needs and cultures merge into a single cohesive and mutually
supporting unit with collaborative alignment of processes and cultures”.
The need for integration in the construction industry primarily stems from the fact that the fragmented procurement and delivery of construction projects leads to a lack of efficiency and effectiveness in project delivery with respect to time, cost, etc. (Love et al 1998, Baiden et al 2006). The literature links high waste, increased project cost and time, and adversarial relationships in the industry to fragmented project organizations (CMAA 2010). Mitropoulos and Tatum (2000) believe that the fragmentation in project organization is a result of two major factors: the complexity of construction projects and the high degrees of specialization involved in the design and construction of projects. Numerous studies highlight a need for more collaborative and integrated approaches in order to address this challenge as well as to decrease the uncertainties associated with construction projects and improve the predictability of results (Love et al 1998, CMAA 2010, Mitropoulos and Tatum 2000, Egan 1998, Egan 2002, Fairclough 2002).
Mitropoulos and Tatum (2000) maintain that integration is critical in building design for two reasons: to avoid problems that could arise after the design phase and to achieve an optimized design solution. Integration, according to them, becomes more necessary in projects with higher levels of uncertainty, complexity, and speed of construction (Mitropoulos and Tatum 2000). Indeed, in complex construction projects, there is a need to create an environment of free information exchange from the early stages of project development, which necessitates integration of team members. This integration can also tackle the problem of adversarial relationships and lack of trust within project teams, which is a main criticism of current practices in the delivery of construction projects (Ruparathna and Hewage 2015).
Collaboration is a primary element of the integration concept and is defined as "a creative
process undertaken by two or more interested individuals, sharing their collective skills,
expertise, understanding and knowledge (information) in an atmosphere of openness, honesty,
trust and mutual respect, to jointly deliver the best solution that meets their common goal”
(Wilkinson 2005). Collaboration in construction occurs across organizational boundaries
between owners, design team, contractors, etc. (Smyth and Pryke 2008). Table 1 displays some
of the parameters that affect the quality of collaboration in construction projects, along with the
supporting literature.

Table 1. Factors affecting collaboration


Factor Literature
Accountability Chan et al 2004, Gabriel 1991, Yudelson 2008
Commitment Chan et al 2004, Yeung et al 2007, Yudelson 2008
Communication Diallo & Thuillier 2005, Hauck et al 2004, Yeung et al 2007, Yudelson 2008
Compatibility Kumaraswamy et al 2005a, 2005b
Timely Involvement Chan et al 2004, Yeomans et al 2006, Yudelson 2008
Joint Operations Chan et al 2004, Hauck et al 2004, Yeomans et al 2006, Yudelson 2008
Mutual Respect Forbes & Ahmed 2010
Trust Ballard 2006, Chan et al 2004, Cheung et al 2003, Diallo & Thuillier 2005, Rahman & Kumaraswamy 2004, Xu et al 2005, Yudelson 2008

While collaboration and free exchange of information are widely mentioned in discussions around integration, 'integration thinking', or 'systems-thinking', seems to be less explicitly examined. Systems-thinking refers to considering the relationships among the constituent subsystems of a system in order to create optimized overall performance. In systems-thinking, the decision-makers of a project recognize that a decision about one subsystem impacts the performance of other subsystems; therefore, any decision about a given subsystem must be made by involving all disciplines, as representatives of the subsystems, in the decision-making process.
Recent trends in the construction industry have tried to improve integration from multiple
angles. Building Information Modeling (BIM) has shown great potential to enhance the
integration of AEC team members by providing an environment of free exchange of information.
The increased collaboration that is achieved through BIM can lead to increased efficiency, time
and cost-savings, profitability, improved relationships, and lower rates of errors (Azhar 2011,
Porwal and Hewage 2013). The industry has even developed BIM-based applications that can be
used to optimize project cost, time, and environmental impacts during design phase (Inyim et al
2015). Integrated Project Delivery (IPD), as another development in the industry, tries to create
integration in four aspects of agreements, leadership, information and processes so that the
maximum value is delivered to the owner (CMAA 2010). Integrated Design and Delivery Solutions
(IDDS), established by the International Council for Research and Innovation in Building and
Construction (CIB), is “the framework for an integrated and coordinated merger of people,
process and technology issues in order to enact a radical and sustained transformation of the
construction industries” (Owen et al 2010). This framework, which is founded on four
components of collaborative process, enhanced skills, integrated information and automation
systems, and knowledge management, tries to take the industry from the current levels of BIM
adoption to more holistic and sophisticated integration solutions proposed by IDDS (Owen et al
2010).
While the aforementioned integration approaches in the industry are broad in concept, the ID process, another integration approach, is narrower in scope and addresses only the design process of green building projects. The US Department of Energy (DOE 2009) defines the ID process as "the
process in which multiple disciplines and seemingly unrelated aspects of design are integrated in
a manner that permits synergic benefits to be realized”. The extensive literature on Integrated
Design (ID) process agrees that the process recognizes the interdependency of building systems,
the complexity of green building design, and the need for early collaboration of all disciplines in
order to make optimum decision for the whole building (IEA 2003, Yudelson 2008, 7group and
Reed 2009, Kibert 2008).
The level of complexity that exists in green projects usually requires more collaboration
and integration of project parties. Robichaud and Anantatmula (2011) highlight coordination and
communication across a team of multidisciplinary members as the most significant challenge to
delivery of cost-effective green buildings. Indeed, to design an optimally-functioning green
project, the team has to design all building systems as a unified whole, by considering the
impacts of each system on other systems’ performance (Yudelson 2008, 7group & Reed 2009,
Kibert 2008). Achieving the integration of systems requires integration of the representatives of
all design disciplines in a team to freely share their information and collectively design the
building.
Therefore, integration, in the context of the ID process of green buildings, is about
integration of people (i.e., disciplines), information and building systems in a holistic life-cycle-
oriented way that ensures delivery of value and sustainable outcomes to the owner. It implies using a
collaborative team of architects, engineers, building occupants, contractors, etc. to collectively
consider various aspects of sustainability (water efficiency, energy efficiency, site, indoor
environmental quality, etc.) from the earliest stages of building design and to realize the
synergies and tradeoffs between these aspects over the life-cycle of a project. The outcome
would be an optimized building that provides a healthy, productive environment for the occupants, is environmentally sustainable, and generates value for the owner.
Researchers and practitioners in the industry usually have a similar approach to the ID
process, even though their focus might be slightly different. Kwok and Grondzik (2007)
highlight several steps toward integrated design. According to them, these steps include
establishing commitment, team formation and setting of goals, information gathering, conceptual
and schematic design, testing, design development, construction, and assessment and
verification. Yudelson (2008) lists the elements of integrated design as commitment to integrated
design, setting goals and criteria for the team, commitment to zero cost increase, front-loading
the design process with environmental charrettes, allocating time to feedback and revisions
before committing to final design selection, and engagement of all team members in all
decisions. Robichaud and Anantatmula (2011) define a green project delivery process that
emphasizes early definition of goals and priorities, integration of project team, design with the
entire team, use of rewards in contracts and provision of training.
Synthesizing previous definitions and elements, integration in this research is defined as
‘timely collaboration of relevant project stakeholders with the goal of encouraging systems-
thinking to deliver optimized value to the client', and the ID process is the means through which this integration takes place in the setting of green building projects.

RESEARCH METHODS
Similar to social science research, construction management research applies a variety of
research methods and paradigms, ranging from qualitative (i.e., inductive approach;
interpretivism paradigm) to quantitative (i.e., deductive approach; positivism and realism
paradigms). Wing et al (1998) characterize construction management research as having a
practical nature that requires generalizability of results and testing hypotheses (hence, positivism
paradigm and quantitative methods should be applied). Yet, they believe that qualitative methods
(interpretivism paradigm) can be applied in addressing certain problems in construction research,
such as understanding human behaviors (Wing et al, 1998).
Due to the nature of the problem of interest in this research, which required a combination
of in-depth exploration of a phenomenon (i.e., integration in the ID team interactions) and
development of generalizable outcome and results, the authors chose to use a sequential
qualitative-quantitative research methodology to achieve the research objectives. This
methodology has been illustrated in Figure 1.
In the qualitative step of research, case-study examination and interviews with industry
and academic experts were conducted to identify the key factors to be assessed for evaluation of
the ID team integration and to provide a checklist of evaluation indicators to be used for
operationalization of the factors. Three case studies, representing various levels of integration in
the ID process, were selected based on their reputation and media coverage and were explored in
detail. Table 2 shows the characteristics of the case-studies. Extensive interviews were conducted
with the stakeholders of these projects to understand how the ID process in these projects
functioned, what parties were involved in the design process and at what point during the project
development they got involved, what factors facilitated success or inhibited it, and so forth.
In addition to extensive interviews with case-study project stakeholders (a total of 12
interviews), 15 additional industry professionals with extensive experience in the area of
integrated design that represented major ID disciplines were interviewed to achieve a further
insight into what major team-related factors impact the design process and its success, and to
identify obstacles, challenges, and facilitators of integration. By interviewing the professionals, the researchers also aimed to isolate operationalized definitions, or evaluation indicators, for the evaluation factors. The interviews were then coded and analyzed using the Dedoose qualitative research tool. The content data analysis helped identify the evaluation factors and indicators that
are important in assessing the ID team integration. Through the content analysis the authors
listed the themes related to integration assessment that emerged in responses by all the
interviewees. This list of themes, i.e., integration evaluation factors, became the backbone of the
evaluation framework. During interview sessions, once a theme (collaboration, for instance)
emerged in the discussion, the authors asked the interviewee how they defined it and what the
best and the worst forms of that theme in an ID team environment would be. This helped us write
operational definitions, i.e., evaluation indicators, for the identified evaluation factors. The
outcome of content analysis included a checklist of integration evaluation factors and their
corresponding indicators. The integration evaluation factors in this checklist were included based
on the unanimous opinion of the interviewees. In the case of 'collaboration' as an evaluation factor, because of the breadth of the concept, the authors used the literature to break it down into eight sub-factors. Finally, the evaluation indicators were determined based on the interview data as
well as literature suggestions.

Figure 1. Methodological steps in evaluation framework development and validation

Table 2. Information on green project case-studies


Case ID Green Rating Floor Area (sf) Construction Year Building Type Project Delivery
Case 1 Living Building 52,000 2013 Office CM@R
Case 2 LEED Gold 196,845 2012 Educational CM@R
Case 3 LEED Silver 77,000 2012 Research Lab CM@R

To ensure that the evaluation indicators cover the content domains (i.e., all important aspects of the ID team integration), a content validity test was conducted by circulating the checklist of indicators to a group of experts (representing both industry and academia) and asking them to check: a) whether the checklist covers all aspects of the ID team integration and b) whether the indicators present in the checklist can properly assess those aspects. The checklist was
then revised based on the comments received. More specifically, the experts suggested the
addition of two indicators to the list; one to capture ‘commitment’ and another to capture
‘systems-thinking’. They also suggested changes in the wording of some indicator statements.
In the final step of the framework development efforts, the evaluation factors and indicators
were organized into an evaluation model. The authors chose to adopt Context, Input, Process and
Product (CIPP) (Stufflebeam 2003) as the evaluation model following a review of various
models used in different disciplines. CIPP is one of the most popular models in education and
business contexts and is regarded by the literature among the best with respect to feasibility,
utility, and accuracy (Zhang et al 2011). The CIPP model, proposed by Stufflebeam (1983), evaluates the subject of evaluation by assessing its context, inputs, process and products. Following
the organization of the factors and indicators within the categories of the evaluation model, a
measurement format was designed to help quantify the evaluation results.
Once the framework to assess the integration was developed, the authors had to make
sure it was valid and fitted the purpose it had been designed for. The objective of the second
phase of research, the quantitative phase, was to implement the proposed evaluation framework
and validate it. The literature highlights various measures for validation. Some of the widely
mentioned measures include content validity, construct validity, internal validity, external
validity, and reliability (Babbie 2010, Creswell 2012, Hinkin 1995, Yin 2002). Content validity
was met in the qualitative phase through expert feedback. In the quantitative phase, the authors
converted the proposed evaluation framework into a questionnaire survey and distributed it
among the ID teams of buildings rated by Leadership in Energy and Environmental Design
(LEED) green rating system and asked them to self-assess the integration level of their ID team
environment. Statistical analysis was then used to validate the proposed evaluation framework
with respect to construct validity, internal validity, external validity and reliability.
To determine the needed sample size for questionnaire-based study, the authors used a
rule of thumb suggested by several studies (Green 1991, VanVoorhis and Morgan 2007), according to which the required sample size for multiple regression should be at least 50 + m, where m is the number of independent variables in a regression model. In the present research,
the maximum number of potential independent variables was determined to be 7; hence, a
sample size of at least 57 cases was determined to be needed.
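As a minimal sketch of this rule of thumb (in Python; the function name is ours, introduced for illustration only):

```python
# Rule-of-thumb sample size for multiple regression (Green 1991):
# required n >= 50 + m, where m is the number of independent variables.
def required_sample_size(m: int) -> int:
    return 50 + m

m = 7                                   # maximum number of candidate predictors
print(required_sample_size(m))          # -> 57, matching the target reported above
```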
As mentioned, the questionnaire survey was designed by incorporating the proposed
evaluation framework developed in the qualitative phase of research. The statement of the
questions reflected the evaluation indicators, and the response format was based on a 5-point
Likert-scale, representing various levels of agreement with each evaluation indicator statement.
In addition, some questions were added to collect demographic information about the owner
type, project size, cost, etc. By filling out this survey, LEED project participants served as the
self-evaluator of their own projects.
The survey was pilot-tested and then distributed through several online social networking
platforms for professionals, such as LinkedIn. The researchers also recruited their professional
and personal contacts to reach other potential LEED project participants. A total of 79 responses were
collected over a two-month period. This sample size was considered to be sufficient, given the
fact that the needed sample size for the research was 57.
The collected responses constituted a representative sample of LEED-rated projects. Figure
2 illustrates a profile of the collected sample with respect to project parties involved in
evaluation, floor area, owner type, project delivery systems, LEED versions, and LEED
certification level.
Figure 2. A profile of the questionnaire respondents and represented projects

Using Stata IE 10 as the statistical analysis software, the collected data were recoded,
examined for missing values (and treated, if needed) and organized. Also, three indices were
constructed, based on the formulas 1, 2, and 3 (which will be presented later in the measurement
format of the proposed framework), in order to consolidate the scores assigned by the evaluator
to the evaluation indicators under different categories. ‘Challenge Index’ (CI) was constructed to
represent the evaluation indicators in the ‘context’ category. ‘Integration Assessment Index’
(IAI) was built to represent the ‘input’ and ‘process’ categories; and ‘Performance Index’ (PI)
consolidated the evaluation indicator scores associated with the ‘product’ category of the
proposed framework. Table 3 shows the list of variables, their assigned codes and type of data.
Once the database was completed, the authors started analyzing the data, first for
reliability assessment. Reliability is a validation measure that concerns the consistency of
measurement (Weiner and Greene 2011) and the repeatability of results (Babbie 2010). The
most widely accepted statistical measure for reliability of evaluation indicators is internal
consistency method using Cronbach’s Alpha (Hinkin 1995) which measures the degree to which
evaluation indicators complement each other in their measurement of the subject of evaluation
(Yin 2002): the higher Cronbach’s Alpha, the higher the reliability. In order to avoid the
misleading effects of potential inflation in Cronbach's Alpha, which can happen due to the numerous
indicators in the framework (Streiner 2003), the authors used Cronbach’s Alpha in this research
along with Item-Test, Item-Rest, and Inter-Item correlations. The authors also used 0.60 as the
minimum acceptable level of Cronbach’s Alpha (Hair et al 1998).
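The study computed these statistics in Stata; purely as an illustration, a minimal Python sketch of the same internal consistency measures might look as follows (the data layout, one column per indicator and one row per response, is our assumption, not the authors' actual dataset):

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's Alpha: internal consistency of a set of Likert-scored indicators.

    items: one column per evaluation indicator, one row per survey response (1-5).
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def item_rest_correlations(items: pd.DataFrame) -> pd.Series:
    """Correlation of each indicator with the sum of the remaining indicators."""
    total = items.sum(axis=1)
    return pd.Series({c: items[c].corr(total - items[c]) for c in items.columns})
```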
Table 3. Variables along with their associated evaluation category/factor and assigned code
Code    Evaluation Category    Variable    Type
PAR Party represented by the respondent Nominal Categorical
LOC Location Nominal Categorical
YEAR Construction year Continuous
OWN Owner type Nominal Categorical
SIZE Project size (sf) Continuous
COST Project cost (dollar) Continuous
LDVR LEED version (1.0, 2.0, etc.) Nominal Categorical
LDTCR Total LEED credits achieved Continuous
LDECR Energy & Atmosphere credits achieved Continuous
CMP Context Complexity and Uncertainty level Ordinal Categorical
PTY Context Priority level Ordinal Categorical
PDS Context Project delivery system Ordinal Categorical
CONT1 Context Integration level of contract Ordinal Categorical
CONT2 Context Sustainability inclusion in contract Ordinal Categorical
CI Context Challenge Index Continuous
TEAM Input Team capability level Ordinal Categorical
TOOL Input Implementation of tools and technology Ordinal Categorical
COL1 Process Collaboration > Accountability Ordinal Categorical
COL2 Process Collaboration > Commitment Ordinal Categorical
COL3 Process Collaboration > Communication 1 Ordinal Categorical
COL4 Process Collaboration > Communication 2 Ordinal Categorical
COL5 Process Collaboration > Compatibility Ordinal Categorical
COL6 Process Collaboration > Involvement Ordinal Categorical
COL7 Process Collaboration > Joint operations 1 Ordinal Categorical
COL8 Process Collaboration > Joint operations 2 Ordinal Categorical
COL9 Process Collaboration > Mutual Respect Ordinal Categorical
COL10 Process Collaboration > Trust 1 Ordinal Categorical
COL11 Process Collaboration > Trust 2 Ordinal Categorical
LEAD Process Leadership Ordinal Categorical
SYS1 Process System-thinking 1 Ordinal Categorical
SYS2 Process System-thinking 2 Ordinal Categorical
IAI Input & Process Integration Assessment Index Continuous
LDCR Product LEED certification level Ordinal Categorical
SUSC Product Cost success level Ordinal Categorical
SUST Product Schedule success level Ordinal Categorical
SUSI Product Innovation level Ordinal Categorical
SUSS Product Safety success level Ordinal Categorical
PI Product Performance Index Continuous

To test for construct validity, internal validity and external validity, a multiple regression
analysis technique was applied. Regression analysis can demonstrate construct validity by reflecting the statistical association between variables in the model, which can then be checked for
agreement with the literature suggestions (DeVellis 2003). Also, because regression analysis can
provide the opportunity to study research variables under controlled conditions, it can help assess
internal validity, which is about ruling out plausible rival hypotheses (Rosenthal and Rosnow
1991). The authors used project cost and challenges as control variables in this research. In
addition, external validity, which is about generalizability of the research findings to other
samples (Rosenthal and Rosnow 1991), could be tested by showing representativeness (Babbie
2010) of the sampled data.
Through the multiple regression analysis, the authors also tested the hypothesis that:
‘higher integration in the ID team environment results in more successful project outcomes’. The
regression model to test this hypothesis initially included four variables, as shown in Figure 3:
a. Independent variable: integration level (measured through Integration Assessment
Index (IAI), as defined by the evaluation framework, and representing evaluation
indicators present in ‘input’ and ‘process’ categories of evaluation framework);
b. Dependent variable: project success (measured through Performance Index (PI), as
defined by the evaluation framework, and representing evaluation indicators present
in ‘product’ category of evaluation framework); and
c. Control variable: challenges level (measured through Challenge Index (CI), as
defined by the evaluation framework, and representing evaluation indicators present
in ‘context category’ of evaluation framework); and
d. Control variable: project cost

Figure 3. Variables in initial multiple regression model and direction of their effects

Figure 4. Multiple regression analysis methodology

The authors initially included the challenge level and cost as control variables in the
model because integration level is not the only factor that can impact the success of green
building projects. Indeed, success or failure of a project can be a result of the level of challenges
(complexity, uncertainty, priorities, level of scope definition, effect of project delivery systems and contractual support) that it faces, or the monetary resources available to it.
Therefore, the addition of the project cost and the challenge level as control variables in the
regression model would help isolate the effect of integration on project outcomes. In other
words, by holding constant the challenge level and project cost, it becomes possible to measure
how integration alone affects project outcomes. Figure 3 shows the variables in the multiple
regression model and potential direction of their effects on project success.
Multiple regression analysis of the model shown in Figure 3 entailed a methodology that
is illustrated in Figure 4. Based on this methodology, the data were visually examined for
outliers. Following the removal of outliers from the data, a preliminary bivariate regression
analysis was conducted between the dependent variable and each of the other variables.
The purpose of this analysis was to examine if there was a relationship between variables in the
first place. Then, a multiple regression analysis was run with all variables included.
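As an illustrative sketch of this two-step procedure (in Python/statsmodels rather than the Stata used in the study; the file name and column layout are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per evaluated project; PI, IAI, CI already built from the survey
# scores per Formulas 1-3, plus the reported project cost.
df = pd.read_csv("leed_survey_indices.csv")   # hypothetical file name

# Step 1: preliminary bivariate regressions of PI on each candidate variable,
# to check whether a relationship exists in the first place.
for predictor in ["IAI", "CI", "COST"]:
    fit = smf.ols(f"PI ~ {predictor}", data=df).fit()
    print(predictor, round(fit.params[predictor], 4), round(fit.pvalues[predictor], 4))

# Step 2: full model with challenge level and cost held constant as controls.
model = smf.ols("PI ~ IAI + CI + COST", data=df).fit()
print(model.summary())
```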
To ensure the accuracy of results, multiple regression results should be diagnosed, as several problems might be present in the analysis with the potential to bias and skew the results. These problems include outlier data, non-normality, heteroscedasticity, multicollinearity,
and non-linearity. The results of diagnosis analysis revealed the presence of outliers and non-
linearity. To detect and remove outliers, DfFit and studentized residuals were used as the
measures of detection. DfFit focuses on the change in parameter estimates resulting from
exclusion of an observation (Miles and Shevlin 2001). Studentized residuals examine regression
residuals that have been divided by their standard deviations (Wooldridge 2009). Five observations were flagged by both measures and were excluded from the analysis.
The regression model was re-run using the data that were now free from the effect of outliers.
Non-linearity was also present in regression analysis. This occurs when the relationship between
independent and dependent variables is not linear. This is a major concern in Ordinary Least
Square (OLS) regression models. Non-linearity was checked by visually inspecting the residual
scatterplots, which revealed a concerning deviation of the cost variable from linearity. To treat the
problem, this variable was transformed through a natural logarithmic function. The transformed
variable was labeled LGCOST (natural logarithmic function of COST). Re-analysis of the model
using the transformed cost variable failed to show any statistical significance of its effect on
project success. Therefore, project cost was dropped from the model and the regression analysis
was re-run using the new model (Figure 5).
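A hedged sketch of these diagnostic steps, continuing the statsmodels example above (the |studentized residual| > 2 cutoff is our assumption; the paper does not report its exact thresholds):

```python
import numpy as np
import statsmodels.formula.api as smf

influence = model.get_influence()               # `model` and `df` from the sketch above
dffits, dffits_cutoff = influence.dffits        # DfFit value per observation + default cutoff
student = influence.resid_studentized_external  # externally studentized residuals

# Exclude observations flagged by BOTH measures (five such cases in the study).
flagged = (np.abs(dffits) > dffits_cutoff) & (np.abs(student) > 2)
df_clean = df.loc[~flagged].assign(LGCOST=lambda d: np.log(d["COST"]))

# Re-fit with the log-transformed cost; since LGCOST showed no significant
# effect, it is dropped and the reduced model (Figure 5) is re-run.
refit = smf.ols("PI ~ IAI + CI + LGCOST", data=df_clean).fit()
reduced = smf.ols("PI ~ IAI + CI", data=df_clean).fit()
```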

RESULTS
The authors present the results of this research in two parts. First, the proposed CIPP-based
integration evaluation framework, which is illustrated in Figure 6 and presented as a checklist in
the Appendix, is described and then validation results are presented.
Figure 6. Proposed CIPP-based integration evaluation model for the ID teams of green projects
CIPP-based Integration Evaluation Framework for the ID Teams
The outcome of the qualitative phase of research was the CIPP-based integration evaluation
framework for the ID teams of green building projects. As Figure 6 shows, this framework
consists of four major components: a) the CIPP evaluation model and its four categories of
context, input, process and product, b) a list of evaluation factors organized under CIPP
evaluation categories, c) evaluation indicators corresponding to the evaluation factors, and d) a
measurement format. The framework is explained in the following sections.
Evaluation Model and its Categories: As mentioned before, the CIPP evaluation model
was adopted in this research to ensure comprehensiveness of assessment. The CIPP model is
widely used for evaluation purposes in business and education contexts. Based on the CIPP
model, an evaluation effort should be conducted through a “comprehensive framework” under
the four categories of context, input, process, and product (Stufflebeam 1983, 2003). Context
evaluation focuses on the needs, challenges, and opportunities within a defined environment that
affect the performance of the process being evaluated. Input evaluation assesses the resources
available and proposed strategies. Process evaluation focuses on the activities and factors critical
to successful completion, and product evaluation determines whether the intended outcomes
were achieved (Stufflebeam 1983). Figure 6 displays the evaluation factors for the ID team
integration, separated by the CIPP evaluation categories.
Evaluation Factors (20 factors): The second component of the proposed evaluation
framework represents the evaluation factors, i.e., macro-level areas to be evaluated under the four categories of the CIPP model, in order to assess the integration level of the ID teams in green building projects. Twenty factors were identified through qualitative research. Due to its
breadth, the authors broke collaboration, as an evaluation factor, into 8 sub-factors (such as
accountability, commitment, communication, etc.) to help specify it, as shown in Figure 6.
Evaluation Indicators (65 items): The evaluation factors identified in the qualitative
phase of research were broad concepts difficult to evaluate. To facilitate integration evaluation
and provide tangible and measurable criteria for evaluation of the factors, they were
operationalized (i.e. specified) into 65 evaluation indicators. These evaluation indicators were
identified based on the interviews with industry experts as well as suggestions by previous
studies in the field. The resultant final list of evaluation indicators included 65 items. Figure 6
shows the number of evaluation indicators for each CIPP category. The complete list of
evaluation items can be found in the appendix section of this article. To show an example of
evaluation indicators specified for an evaluation factor, Figure 7 shows the indicators that were
designed to capture the presence of ‘systems-thinking’.

Figure 7. Four (4) evaluation indicators were specified to capture ‘systems-thinking’


Measurement Format: The final component of the proposed evaluation framework is a
measurement format for quantifying the results of evaluation. Quantification of the evaluation
provides the opportunity to compare integration levels across projects (and their associated ID
teams) and link those to project outcomes.
In designing the measurement format, first a response format for evaluation indicators
was provided. The response format would show various degrees of change in each evaluation
indicator and allow the evaluator to choose the one among them that best reflects the subject of
evaluation. Likert-scale is one of the most common response formats used in various studies
(DeVellis 2003); therefore the authors chose it as the response format for the evaluation
framework of this research. The points in the Likert-scale were designed to represent various degrees
of agreement with the declarative statement of the evaluation indicator. A 5-point Likert-scale
provides the highest reliability and is the most widely used type of Likert-scale in the scale
development literature (Hinkin 1995). Accordingly, a 5-point scale was used in this research
with five degrees of ‘strongly disagree’, ‘disagree’, ‘neutral’, ‘agree’, and ‘strongly agree’.
The next issue in designing a measurement format is scoring and indexing. Since
validation and hypothesis-testing were subsequent steps in this research, the evaluator's scores on the
evaluation indicators needed to be consolidated into indices that could quantify the performance
of teams under each category of the CIPP model. To this end, one important issue was the
‘weighting’ of evaluation indicators. The researchers had to decide whether the evaluation
indicators had equal or different weights with regard to their importance in measuring the subject
of evaluation. While there is no firm rule as to weighting, the literature in the field recommends
equal weighting of evaluation indicators unless there are compelling, proven reasons to use
differential weighting (Babbie 2010). Moreover, equal weighting can be considered objective
because the subjectivity of the researcher with this method remains limited to the influence of the
subject matter experts on selection of evaluation indicators (Maggino and Ruviglioni 2009). For
the purpose of the evaluation framework developed in this research, equal weighting was used
for two reasons: 1. There is no literature and theoretical framework in the field supporting the
differential significance of the evaluation factors (such as trust, mutual respect, collaboration,
etc.) in achieving integration; 2. Equal weighting improves the objectivity, simplicity, and
robustness of the developed evaluation framework.
With equal weighting, and assigning the scores of 1, 2, 3, 4, and 5 to ‘strongly disagree’,
‘disagree’, ‘neutral’, ‘agree’ and ‘strongly agree’, respectively, the following equations could be
applied to construct the indices of interest. Three indices were designed: a) Challenge Index (CI), representing challenges arising from the 'context' of a project; b) Integration Assessment Index (IAI), representing the 'input' and 'process' categories, which shows the level of integration maturity; and c) Performance Index (PI), representing the 'product' category, which could be used
as a measure of project success. The following equations were used for building these indices:

Formula 1: CI = Σ Sc
Formula 2: IAI = Σ (Si + Sps)
Formula 3: PI = Σ Spt
Where,
- CI, IAI, and PI refer to Challenge Index, Integration Assessment Index and
Performance Index, respectively;
- Sc, Si, Sps, and Spt refer to the scores assigned to evaluation indicators in the context,
input, process, and product categories, respectively.
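To make the scoring concrete, a minimal sketch of the index construction (the response lists below are toy examples; per the ranges in Table 4, the real checklist has 11 context indicators, 45 input/process indicators, and 9 product indicators):

```python
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def index_score(responses):
    """Equally weighted sum of Likert scores for one set of evaluation indicators."""
    return sum(LIKERT[r] for r in responses)

# Illustrative (toy) responses for each category of the framework.
context = ["agree", "neutral", "strongly agree"]
input_process = ["agree", "agree", "disagree", "strongly agree"]
product = ["neutral", "agree"]

ci = index_score(context)            # Formula 1: CI  = sum of Sc
iai = index_score(input_process)     # Formula 2: IAI = sum of (Si + Sps)
pi = index_score(product)            # Formula 3: PI  = sum of Spt
```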

The minimum value for each index can be determined by assigning a score of 1
(representing ‘strongly disagree’) to all evaluation indicators. Likewise, using a score of 5
(representing ‘strongly agree’) for all indicators results in the maximum values for CI, IAI, and
PI. The performance of a given project under these indices would vary within the range between
minimum and maximum values. To rate projects based on their performance under these indices,
the range between minimum and maximum for each index was translated into multiple intervals,
as shown in Table 4.

Table 4. CI, IAI and PI indices and their score ranges


Challenge Index (CI)    Score    Integration Assessment Index (IAI)    Score    Performance Index (PI)    Score
Extremely Challenging 47-55 Extremely Integrated 190-225 Extremely Successful 39-45
Moderately Challenging 38-46 Moderately Integrated 154-189 Moderately Successful 32-38
Somewhat Challenging 29-37 Somewhat Integrated 117-153 Somewhat Successful 23-31
Mildly challenging 20-28 Mildly Integrated 81-116 Mildly Successful 16-22
Not challenging 11-19 Fragmented 45-80 Unsuccessful 9-15

Validation Results
As mentioned before in the methods section, the proposed evaluation framework was
implemented through a questionnaire survey of LEED-rated projects. The authors then analyzed
the collected data for validity check (reliability, construct validity, internal validity and external
validity) and tested the hypothesis proposed by the research.
Reliability Assessment: Tables 5, 6, and 7 display the results of reliability assessment for
the evaluation indicators present in the proposed evaluation framework. As Table 5 shows, the
Cronbach’s Alpha for the context category of evaluation framework is 0.6461, which is above
the minimum acceptable level of 0.6 (Hair et al 1998). The last column in the table also reports
the Alpha for each evaluation indicator, which is essentially the alpha level that the category
would have if that indicator were removed. Since the alpha levels for all indicators are lower
than their category’s alpha level, their presence in the category contributes to the concept they
represent. Moreover, examination of Inter-Item correlation and Item-Rest correlation shows
acceptable levels of correlations.

Table 5. Reliability assessment results for ‘Context’ indicators using internal consistency method.

Item    Obs.    Sign    Item-Test Correlation    Item-Rest Correlation    Average Inter-Item Covariance    Alpha
CMP 75 + 0.6298 0.4168 0.4748 0.5950
PTY 75 + 0.5518 0.4272 0.5904 0.6351
CONT1 75 + 0.7716 0.4799 0.3110 0.5414
CONT2 75 + 0.8513 0.5714 0.2122 0.4753
Category 0.3971 0.6461
As shown before, the evaluation indicators in the 'input' and 'process' categories of the
proposed framework were consolidated to form a single index, Integration Assessment Index
(IAI). Therefore, reliability assessment for the items in these two categories is performed in a
single analysis, as they relate to one underlying construct. Table 6 shows the results of reliability
assessment for these items. The review of Table 6 reveals that the Cronbach’s Alpha (0.9586) is
well above the minimum acceptable level. The high Cronbach’s Alpha either indicates the
internal consistency, and therefore, reliability of the indicators, or it may imply their redundancy;
however, redundancy cannot be the cause in this case because the correlation analysis of the
items does not show high correlation of the indicators. Moreover, the Item-Rest correlation is
within the acceptable levels.

Table 6. Reliability assessment results for ‘Input’ and ‘Process’ indicators using internal
consistency method.

Item    Obs.    Sign    Item-Test Correlation    Item-Rest Correlation    Average Inter-Item Covariance    Alpha
TOOL 75 + 0.7618 0.7245 1.0366 0.9564
TEAM 75 + 0.6969 0.6568 1.0616 0.9575
COL1 75 + 0.7261 0.6828 1.0407 0.9572
COL2 75 + 0.7540 0.7162 1.0408 0.9565
COL3 75 + 0.7838 0.7532 1.0435 0.9559
COL4 75 + 0.7414 0.7015 1.0416 0.9569
COL5 75 + 0.7782 0.7429 1.0317 0.9561
COL6 75 + 0.8609 0.8367 1.0096 0.9543
COL7 75 + 0.8684 0.8445 1.0049 0.9542
COL8 75 + 0.7715 0.7344 1.0308 0.9562
COL9 75 + 0.8584 0.8336 1.0119 0.9544
COL10 75 + 0.8131 0.7819 1.0231 0.9554
COL11 75 + 0.7749 0.7421 1.0428 0.9561
LEAD 75 + 0.7032 0.6617 1.0574 0.9574
SYS1 75 + 0.7839 0.7513 1.0390 0.9560
SYS2 75 + 0.8729 0.8500 1.0037 0.9540
Category 1.0366 0.9586

Table 7. Reliability assessment results for ‘Product’ indicators using internal consistency method.

Item    Obs.    Sign    Item-Test Correlation    Item-Rest Correlation    Average Inter-Item Covariance    Alpha
SUSC 75 + 0.7291 0.6013 0.6238 0.7483
SUST 75 + 0.6503 0.5005 0.6780 0.7743
SUSI 75 + 0.8494 0.7117 0.4487 0.6994
LDCR 75 + 0.9032 0.7818 0.3722 0.6787
SUSS 75 + 0.5309 0.3506 0.7544 0.8094
Category 0.5752 0.7902
Finally, the evaluation indicators in the 'product' category of the framework were checked for reliability. The results, as shown in Table 7, indicate that the evaluation indicators represent
acceptable levels of internal consistency as judged by the Cronbach’s Alpha and Item-Rest
correlations. There is only one indicator in this list whose removal can improve the alpha
coefficient: SUSS, i.e., the level of a project’s success in achieving safety. However, because its
Inter-Item correlation was within normal limits and its presence was meaningful based on the
theory and literature in the field, this indicator was still kept in the model.
Validity assessment: As mentioned before, construct and internal validity were checked
using a regression analysis of the main hypothesis involving the key constructs/variables of the
proposed framework. The hypothesis of this research was defined to be: ‘higher integration in
the ID team environment results in more successful project outcomes’.

Table 8. Multiple Regression Analysis Results

Variable    Slope Coefficient    Standard Error    t    P>|t|    95% CI Lower    95% CI Upper
IAI 0.1875 0.0082 22.64 0.000 0.1709 0.2040
CI -0.4793 0.1756 -2.73 0.008 -0.8299 -0.1286
Constant 1.6820 1.2480 1.35 0.182 -0.8089 4.1730
Number of Observations = 70 F(2,67) = 389.72 R-squared = 0.9208
Root MSE=2.2443 Prob > F = 0.0000 Adjusted R-squared = 0.9185

Figure 5. Revised multiple regression model

Integration Assessment Index (IAI), Challenge Index (CI) and Performance Index (PI)
were used as measures of integration, challenges, and project success, respectively. The results
of multiple regression analysis of the model shown in Figure 5 are displayed in Table 8. As the
results indicate, the effects of integration level (IAI) and challenges (CI) on project success (PI)
are statistically significant at the 99% confidence level (p<0.01). This indicates that the primary hypothesis of this research is supported. The effect of integration on project success is positive.
The slope coefficient of 0.187 for IAI indicates that one scale level increase of IAI would result
in a 0.187 increase in PI scale level, holding the level of challenges constant. In other words, one
Likert-level increase, for instance from ‘agree’ to ‘strongly agree’, in one of the evaluation
indictors comprising the Integration Assessment Index (IAI) is expected to result in a 0.187
increase in Performance Index scale level. It is important here, however, to highlight some of the
psychometric limitations associated with the use of Likert-scale that need to be considered in any
interpretation of the results. First of all, in a Likert-scale, it is generally assumed that the distances
between successive categories are equivalent (e.g., the difference between ‘strongly disagree’
and 'disagree' is the same as the difference between 'disagree' and 'neutral'). While this may not always be the case, the equidistance assumption simplifies the analysis of results. Another
limitation has to do with the lack of a point of reference that often occurs in Likert-scale
statements which leads to potential response inconsistency from individual to individual, even
when data on the same phenomenon or object are collected. Also, there is much debate over whether
the Likert-scale should be treated as interval or ordinal data. When measuring the same variable,
the Likert-scales are usually treated as interval.
Examining the effect of the Challenge Index on Performance Index reveals that the effect
is statistically significant and negative (-0.4793; p=0.008). This indicates that increasing
challenges in a green building project reduces the project’s success. R-squared, the coefficient of
determination, for this analysis is 0.9208, which indicates that this model can explain about 92%
of the changes in Performance Index.
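For illustration, plugging the fitted coefficients from Table 8 into the model gives a rough within-sample predictor of PI (a sketch; the index values used in the example are invented):

```python
# Fitted model from Table 8: PI = 1.682 + 0.1875 * IAI - 0.4793 * CI
def predict_pi(iai: float, ci: float) -> float:
    return 1.682 + 0.1875 * iai - 0.4793 * ci

# Hypothetical project: moderately integrated (IAI = 180) in a somewhat
# challenging context (CI = 30).
print(round(predict_pi(180, 30), 1))   # -> 21.1, 'Mildly Successful' per Table 4
```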
The results of the multiple linear regression analysis revealed that integration positively
affects success in green building projects, when controlling for the level of challenges. This
adheres with what the literature in the field suggests. Indeed, various studies in the field have
theorized (7group and Reed 2009, Kibert 2008, Yudelson 2008), or qualitatively and empirically
shown (Korkmaz 2007, Korkmaz et al 2013), that integration level in design process positively
impacts project outcomes. In an in-depth qualitative study of 12 case-studies, Korkmaz, Swarup
& Riley (2013) measured team integration level using parameters such as involvement, design
charrettes, communication, compatibility, presence of LEED Accredited Professional (AP) in
team, prior experience, use of energy modeling, and LEED education to contractors, and
examined the relationship between project delivery systems, level of integration, and
achievement of the outcomes. They proposed that higher levels of integration result in more
successful outcomes with respect to sustainability, cost, schedule, etc.
The regression analysis results also indicate a negative relation between the level of
challenges in a green building project, as represented by the Challenge Index, and project
success. Considering the fact that the Challenge Index is made of a consolidation of several
factors including project uncertainty, the lack of contractual support for integration, and so forth,
the results could highlight the importance of reducing the challenges level by improving
contractual integration of project parties and encouraging free exchange of information.
To summarize the results, the authors tested and found support for the research hypothesis that 'higher integration in the ID team environment results in more successful project outcomes'. Also, the
results helped validate the proposed integration evaluation framework based on five measures of
content validity, construct validity, internal validity, reliability and external validity.
The content validity was already met during development of the evaluation framework
when a group of experts were asked to peer-review the framework and offer their comments and
critiques. The framework was then revised to address the comments. It was ensured that the
revised framework covers all factors affecting the ID process and that the evaluation indicators are
adequate to measure those factors.
To establish construct and internal validity, two key strategies were applied:
a. All possible factors and variables with the potential of affecting the integration of project
participants in the ID process of green buildings were investigated through literature
review and qualitative research, and were included in the evaluation framework. In
particular, the addition of contextual factors (such as complexity, sustainability, etc.) to
the evaluation framework was intended to improve internal validity by capturing aspects
of change in project outcomes that are not related to the integration of project
participants.
b. Multiple regression analysis was conducted to examine the association of variables in the
model. The results revealed a positive link between integration and project success,
which accords with anecdotal literature suggestions (Korkmaz 2007, 7group and Reed
2009, Kibert 2008, Yudelson 2008).

The framework’s quantitative reliability was examined through the internal consistency
method, and measures such as Cronbach’s Alpha, Inter-Item correlation, and Item-Rest
correlation. Finally, the external validity was met by ensuring that the collected sample was
representative of the target population. This research relied on a sample of 79 projects which
represented the characteristics of the population of LEED-rated projects with respect to location,
size, cost, and certification level. Therefore, the requirements of representativeness and external validity were met.

CONCLUSION
A CIPP-based evaluation framework is proposed by the authors to help assess the
integration of team members in the design phase of green building projects. The proposed
framework, which is illustrated in Figure 6 and is presented in more detail in the Appendix,
consists of four major components:
a. Evaluation Model; based on the Context, Input, Process, and Product (CIPP) evaluation
model;
b. Evaluation Factors; representing various dimensions of the integration of the ID team in
green building projects
c. Evaluation Indicators; to work as reasonably tangible definitions for evaluation factors
d. Measurement Format; to provide a schema for quantifying the evaluation framework.
Table 9 summarizes the major characteristics of the proposed framework. To evaluate the
ID team environment with respect to integration maturity, a third-party evaluator or an ID team
member would use the checklist of 65 evaluation indicators in the Appendix and would express
their level of agreement with the statement of each evaluation indicator on a 5-point Likert-
scale ranging from ‘strongly disagree’ to ‘strongly agree’. Through the proposed measurement
format, the responses would be scored and converted into three indices representing the
challenges level, integration level, and performance level. Based on these indices, the project is
then rated according to Table 4. The first two indices would be used over the design process; and
the third one is applicable only when the project is completed.
This research contributes to the knowledge and practice in the field of construction
management in two specific ways. The first and most important contribution of this research is
that it proposes a comprehensive and quantifiable integration evaluation framework, which can
be used by the research community and industry to measure and quantify integration of the ID
teams in the context of green building projects. The lack of such a framework in the construction industry and the need to address it have been highlighted by previous studies (Xue et al 2010). The
validated framework proposed by this research enables owners, architects and managers of green
projects to evaluate, diagnose and improve their ID team environments. Integration evaluation
through this framework provides project teams with the opportunity to benchmark their
performance against past and future projects. While the proposed evaluation framework has
been designed to fit the context of green building projects, it can be adjusted to fit other types of
construction projects too. The framework can also function as a learning tool in academia to help
architecture and construction management students gain insight about various dimensions of
integration in the ID team environment. It can also teach them how to evaluate a process or
project in a systematic way by exploring the organized evaluation method embedded in the
proposed framework. The second contribution of this research is in its provision of empirical
evidence to the anecdotes suggesting a positive link between integration in the ID team
environment and green project outcomes. The statistically positive association between
integration and project success that is shown in this research can also be used as a means to
predict project performance during design phase.

Table 9. Characteristics of the proposed CIPP-based integration evaluation framework.

Evaluation subject: Integration in the ID team environment of green buildings

Purpose: To diagnose the ID team environment; to improve the ID team environment; to provide an overall assessment of integration in the ID team; to guide decision-making with regard to integration; to facilitate benchmarking of integration

Components: a. Evaluation Model (Context, Input, Process, Product (CIPP) model); b. Evaluation Factors; c. Evaluation Indicators; d. Measurement Format (Challenge Index (CI), Integration Assessment Index (IAI), Performance Index (PI))

Suggested timing: During the design process; following project completion

Foci: Process-related issues; team dynamics

Audience: Owners, architects, and core project teams of green building projects

It is suggested that future research enhance the comprehensiveness of the proposed
framework by exploring additional evaluation indicators. The challenge in the present research
was to strike a balance between simplicity and comprehensiveness so that the framework remains
readily usable by project managers, which prevented us from expanding the framework further.
Future research can also explore other quantification methods for translating evaluation
indicators into numerical scores.

REFERENCES
7group, and Reed, B. (2009). The integrative design guide to green building: redefining the
practice of sustainability, Wiley, Hoboken, New Jersey.
Azhar, S. (2011). “Building Information Modeling (BIM): Trends, Benefits, Risks, and
Challenges for the AEC Industry.” Leadership Manage. Eng., 11(3):241-252.
Babbie, E. (2010). The practice of social research. 12th ed. Wadsworth, Belmont.
Baiden, B. K., Price, A. D., and Dainty, A. R. (2006). “The extent of team integration within
construction projects.” Int. J. Proj. Manag., 24:13-23.
Baiden, B. K., and Price, A. D. (2011). “The effect of integration on project delivery team
effectiveness.” Int. J. Proj. Manag., 29:129-136.
Ballard, G. (2006). “Rethinking project definition in terms of target costing.” Proc., 14th annual
Congress of Int. Group for Lean Construction, Santiago, Chile, 77-90.
Chan, A. P., Scott, D., and Chan, A. P. (2004). “Factors affecting the success of a construction
project.” J. Constr. Eng. Manag., 130(1):153-155.
Cheung, S. O., Ng, T. S., Wong, S. P., and Suen, H. C. (2003). “Behavioral aspects in
construction partnering.” Int. J. Proj. Manag., 21(5):333-343.
CMAA. (2010). Integrated project delivery; an overview. Construction Management Association
of America.
Creswell, J. W. (2012). Qualitative inquiry and research design: choosing among five
approaches. SAGE Publications, Inc, Thousand Oaks, CA.
DeVellis, R. F. (2003). Scale Development; theory and applications. 2nd ed. SAGE Publications,
Thousand Oaks.
Diallo, A., and Thuillier, D. (2005). “The success of international development projects, trust and
communication: an African perspective.” Int. J. Proj. Manag., 23(3):237-252.
DOE. (2009). Net-zero energy building definitions. US Department of Energy.
Egan, J. (1998). Rethinking construction. Department of the Environment, Transport and the
Regions, U.K.
Egan, J. (2002). Accelerating change. Department of the Environment, Transport and the
Regions, U.K.
Fairclough, J. (2002). Rethinking construction innovation and research - a review of the
government's R&D policies and practices. Department of the Environment, Transport
and the Regions, U.K.
Forbes, L. H., and Ahmed, S. M. (2010). Modern construction: lean project delivery and
integrated practices. Taylor & Francis, Boca Raton.
Gabriel, E. (1991). “Teamwork - fact and fiction.” Int. J. Proj. Manag., 195-198.
Green, S. B. (1991). “How many subjects does it take to do a regression analysis?” Multivar.
Behav. Res., 26:499‐510.
Hair, J. F., Black, B., Babin, B., Anderson, R. E., and Tatham, R. (1998). Multivariate data
analysis. 6th ed. Prentice-Hall International.
Hauck, A. J., Walker, D. H., Hampson, K. D., and Peters, R. J. (2004). “Project alliancing at
National Museum of Australia—collaborative process.” J. Constr. Eng. Manag.,
130(1):143-152.
Hinkin, T. R. (1995). “A review of scale development practices in the study of organizations.” J.
Manag., 21(5):967-988.
IEA. (2003). Integrated design process; a guideline for sustainable and solar-optimized building
design. Task 23, optimization of solar energy use in large buildings, subtask B, design
process guidelines. International Energy Agency (IEA).
Inyim, P., Rivera, J., and Zhu, Y. (2015). “Integration of Building Information Modeling and
Economic and Environmental Impact Analysis to Support Sustainable Building
Design.” J. Manag. Eng., 31, SPECIAL ISSUE: Information and Communication
Technology (ICT) in AEC Organizations: Assessment of Impact on Work Practices,
Project Delivery, and Organizational Behavior, A4014002.
Kibert, C. J. (2008). Sustainable construction: green building design and delivery. 2nd ed. John
Wiley & Sons, US.
Korkmaz, S. (2007). Piloting evaluation metrics for high performance green building project
delivery. PhD Dissertation, Pennsylvania State University.
Korkmaz, S., Swarup, L., and Riley, D. (2013). “Delivering Sustainable, High Performance
Buildings: Influence of Project Delivery Methods on Integration and Project Outcomes.”
J. Manag. Eng., 29(1):71-78.
Kumaraswamy, M. M., Rahman, M. M., Ling, F. Y., and Phng, S. T. (2005). “Constructing
relationally integrated teams.” J. Constr. Eng. Manag., 131(10):1076-1086.
Kumaraswamy, M. M., Rahman, M. M., Ling, F. Y., and Phng, S. T. (2005). “Reconstructing
cultures for relational contracting.” J. Constr. Eng. Manag., 131(10):1065-1075.
Love, P., Gunasekaran, A., and Li, H. (1998). “Concurrent engineering: a strategy for procuring
construction projects.” Int. J. Proj. Manag., 16(6):375-383.
Maggino, F., and Ruviglioni, E. (2009). Obtaining weights: from objective to subjective
approaches in view of more participative methods in the construction of composite
indicators. New Techniques and Technologies for Statistics, 37-46, Brussels, European
Commission, EUROSTAT.
Miles, J., and Shevlin, M. (2001). Applying regression and correlation: a guide for students and
researchers. SAGE Publications, Inc., Thousand Oaks.
Mitropoulos, P., and Tatum, C. B. (2000). “Management-Driven Integration.” J. Manag. Eng.,
16(1):48-58.
Nam, C. H., and Tatum, C. B. (1992). “Noncontractual Methods of Integration on Construction
Projects.” J. Constr. Eng. Manag., 118(2):385-398.
Owen, R., Amor, R., Palmer, M., Dickinson, J., Tatum, C. B., Kazi, A. S., Prins, M., Kiviniemi,
A., and East, B. (2010). “Challenges for integrated design and delivery solutions.” J.
Archit. Eng. Design Manag., 6: 232–240.
Porwal, A., and Hewage, K. N. (2013). “Building Information Modeling partnering framework
for public construction projects.” Automat. Constr., 31:204-214.
Rahman, M. M., and Kumaraswamy, M. M. (2004). “Contracting relationship trends and
transitions.” J. Manag. Eng., 20(4):147-161.
Robichaud, L., and Anantatmula, V. (2011). “Greening Project Management Practices for
Sustainable Construction.” J. Manag. Eng., 27(1):48-57.
Rosenthal, R., and Rosnow, R. L. (1991). Essentials of behavioral research: methods and data
analysis. McGraw-Hill, New York.
Ruparathna, R., and Hewage, K. (2015). “Review of Contemporary Construction Procurement
Practices.” J. Manag. Eng., 31(3):04014038.
Smyth, H., and Pryke, S. (2008). Collaborative relationships in construction: developing
frameworks and networks. John Wiley & Sons, Malden.
Streiner, D. L. (2003). “Starting at the beginning: an introduction to coefficient alpha and
internal consistency.” J. Pers. Assessment, 80(1):99-103.
Stufflebeam, D. L. (1983). “The CIPP model for program evaluation.” In Madaus, G. F., Scriven,
M., and Stufflebeam, D. L., editors. Evaluation models: viewpoints on educational and
human services evaluation. Kluwer-Nijhoff, Boston.
Stufflebeam, D. L. (2003). “The CIPP model for evaluation.” In Stufflebeam, D. L., and
Kellaghan, T., editors, The international handbook of educational evaluation. Kluwer
Academic Publishers, Boston.
UW. (2012). WebQ. University of Washington, Seattle.
VanVoorhis, C. R., and Morgan, B. L. (2007). “Understanding power and rules of thumb for
determining sample sizes.” Tutorials in Quantitative Methods for Psychology, 3(2):43-50.
Weiner, I. B., and Greene, R. L. (2011). Handbook of personality assessment. John Wiley &
Sons, New Jersey.
Wilkinson, P. (2005). Construction collaboration technologies: an extranet evolution. Taylor &
Francis, New York.
Wing, C. K., Raftery, J., and Walker, A. (1998). “The baby and the bathwater: research methods
in construction management.” Construction Management & Economics, 16(1): 99-104.
Wooldridge, J. M. (2009). Introductory econometrics; a modern approach. 4th ed. South-
Western Cengage Learning, Mason.
Xu, T., Smith, N. J., and Bower, D. A. (2005). “Form of collaboration and project delivery in
Chinese construction markets: probable emergence of strategic alliances and
design/build.” J. Manag. Eng., 21(3):100-109.
Xue, X., Shen, Q., and Ren, Z. (2010). “Critical review of collaborative working in construction
projects: business environment and human behaviors.” J. Manag. Eng., 26(4):196-208.
Yeomans, S. G., Bouchlaghem, N. M., and El-Hamalawi, A. (2006). “An evaluation of current
collaborative prototyping practices within the AEC industry.” Automat. Constr.,
15(2):139-149.
Yeung, J. F., Chan, A. P., and Chan, D. W. (2007). “Definition of alliancing in construction as a
Wittgenstein family-resemblance concept.” Int. J. Proj. Manag., 25(3):219-231.
Yin, R. K. (2002). Case study research: design and methods. 3rd ed. Sage Publications, Inc,
Thousand Oaks.
Yudelson, J. (2008). Green building through integrated design. McGraw-Hill Professional, US.
Zhang, G. Z., Griffith, R., Metcalf, D., Williams, J., Shea, C., and Misulis, K. (2011). “Using the
context, input, process, and product evaluation model (CIPP) as a comprehensive
framework to guide the planning, implementation, and assessment of service-learning
programs.” J. High. Educ. Outreach Engagement, 15(4):57-84.
Appendix. Checklist of CIPP-based Evaluation Factors and Indicators.
(Each evaluation indicator is rated on a 5-point Likert scale: Strongly disagree / Disagree / Neutral / Agree / Strongly agree.)

Context
Complexity (Financial): Project is complex with respect to providing financial resources and paying the parties for services.
Complexity (Temporal): Project is complex with respect to planning and timing of activities, given the environmental context.
Complexity (Technical): Project is technically complex, especially in terms of the systems, size, and project requirements.
Priority (Cost): Cost is a major priority in this project.
Priority (Schedule): Schedule is a major priority in this project.
Priority (Sustainability): Sustainability is a major priority in this project.
Uncertainty: There are high levels of uncertainty, which expose the project to high degrees of risk.
Scope: Project scope is fully defined before the architect is brought on board.

Input
Project Delivery: Roles and responsibilities of project parties are assigned under delivery systems that do not support integration.
Contract support: The type and terms of the contract do not support the integration of project parties and their full collaboration.
Contract support: Sustainability requirements and goals for the project have not been integrated into the contract of the core team.

(The indicators above are aggregated into the Challenge Index, CI.)

Budget: There is a balance between the project budget and the expected outcomes.
Team capability: The project owner is resourceful enough to make the needed decisions for this project in a timely manner.
Team capability: The team as a whole possesses the sufficient experience and expertise needed in the design and construction of this project.
Tools & Technology: Building performance tools are widely used in the schematic design phase.
Tools & Technology: Building Information Models are widely used in the design and construction phases.

Process
Accountability: Team members are held responsible by the leadership for timely accomplishment of the assigned tasks.
Commitment: Team members show patience and willingness in meetings to explain issues not clear to other parties.
Commitment: Team members listen patiently and eagerly in meetings to the concerns raised by other parties.
Commitment: Project team members do not get frustrated in addressing the received feedback on their work.
Collaboration: Project goals are communicated effectively among the team members.
Collaboration: The team has regular formal meetings and members can readily reach others formally or informally.
Collaboration: Team members attend the meetings prepared.
Communication: Team members spend meeting times critically engaged in exploring solutions to the design problems.
Communication: Team members are able to get needed information in the least possible time through formal means of communication (email exchange, phone calls, etc.).
Communication: There is a strict but easy-to-use protocol in place for sharing and updating the documents, drawings, models, etc. among project parties, which prevents confusion and conflicts.
Compatibility: Team members are familiar with each other through previous work or reputation.
Compatibility: Team members are compatible at both personal and organizational levels.
Involvement: Project owner or its representative is actively involved throughout the design process.
Involvement: In addition to the architect and owner, representatives of the following disciplines are present in the team and actively engaged in the design: civil/structural engineering, mechanical/electrical engineering, lighting design, cost estimating, general contractor, major trade contractors and suppliers, users.
Involvement: The addition of representatives of the following disciplines to the team happens in a timely manner during the design process: civil/structural engineering, mechanical/electrical engineering, lighting design, cost estimating, general contractor, major trade contractors and suppliers, users.
Joint operations (Team selection): Consultants, subcontractors, suppliers, and vendors are selected to join the team collaboratively and based on the inputs of the core team.
Joint operations (Goal-setting): Team members collaboratively set time, cost, and sustainability goals.
Joint operations (Target-setting): Team members collaboratively set performance targets to meet the defined goals.
Joint operations (Idea-generation): Many innovative ideas with respect to sustainability issues, form, program, value-adding, etc., are generated during the joint meetings.
Joint operations (Design iteration): Team collaboratively produces several design alternatives based on the jointly-generated ideas and revises/refines them based on their input.
Joint operations (Design evaluation): Before selecting the final design, the design alternatives are discussed by the team and their achievement of performance targets is evaluated.
Mutual respect: Team members are sympathetic towards other parties' situation.
Mutual respect: Team members go beyond their obligations in meeting other parties' requests.
Mutual respect: Team members feel valued by other team members.
Trust: Team members are confident they could receive the right information from other parties at the right time without too much effort.
Trust: Team members believe in the capabilities of each other and the team.
Trust: Team members are confident their voice would be heard and the team's decisions would reflect the concerns of everyone on the team.
Trust: Team members are confident the team's decisions would be the most beneficial for the project.
Trust: Team members are confident no team member will take action against the interests of others to achieve what he or she wants.
Trust: Team members are confident all parties will be transparent in their communication with other parties.
Trust: Team members are confident the team makes sufficient efforts to examine all potential solutions to the design problem.
Leadership: The owner possesses the needed capability and makes sufficient efforts in setting directions and aligning team resources.
Leadership: The owner possesses the needed capability and makes sufficient efforts in motivating the team members, fostering a sense of ownership, and building trust among them.
Leadership: The owner possesses the capability to make fast and stable decisions based on the input of the design team.
Leadership: The architect possesses the capability to predict design issues in a timely manner, invite appropriate participants at the right time, and effectively engage participants in the team discussions.
Leadership: The architect possesses the capability to lead the design team by communicating and pursuing the project goals and targets.
System-thinking: Tradeoffs and synergies of the following major sustainability elements are thoroughly discussed in the joint meetings before making design decisions: form and energy use, site potentials and energy use, site potentials and daylighting, site potentials and ventilation, daylighting and energy use, ventilation and energy use, etc.
System-thinking: Impacts of design decisions across relevant disciplines are discussed before making design decisions.
System-thinking: Impacts of design decisions over the project lifecycle are discussed before making design decisions.
System-thinking: The team as a whole is motivated to achieve sustainable design and pursues opportunities for it through exploration and discussion rather than the mere pursuit of LEED credits.

(The indicators from Budget through System-thinking are aggregated into the Integration Assessment Index, IAI.)

Product
Cost: Project is considered successful in meeting or exceeding the cost targets defined in the initial contract.
Schedule: Project is considered successful in meeting or exceeding the schedule targets defined in the initial contract.
Sustainability: Project is considered successful in addressing sustainability, based on LEED certification.
Sustainability: Project is considered successful in addressing sustainability compared with other LEED-rated projects.
Innovation: The project is considered innovative at the time of design/construction, compared with the regular practice in the market.
Customer satisfaction: The project owner is satisfied with the final outcome of the project.
Safety: High levels of safety compared to similar projects were achieved.
Learning: Being part of this project improved the learning by project parties.
Relationships: Team members are satisfied with the quality of relationships in the design phase, and relationships survived the project.

(The Product indicators above are aggregated into the Performance Index, PI.)
