
Performance Dashboard for a Pharmaceutical Project Benchmarking Program

Sung-Joon Suk1; Bon-Gang Hwang2; Jiukun Dai, M.ASCE3; Carlos H. Caldas, M.ASCE4; and Stephen P. Mulva, M.ASCE5

Abstract: Performance measurement is essential to controlling and improving capital projects. Increasingly, industry sectors demand an industry-specific performance measurement system because their unique processes can result in significant differences in performance outcomes. This paper presents the development of a performance dashboard for a pharmaceutical project benchmarking program. The proposed approach adopts a relative comparison method and uses weighted key performance indicators (KPIs) to compare project performance. The dashboard generates an overall performance score both at the project level and at the company level, as shown in tabular and graphical formats. The dashboard provides flexibility for comparing the performance of projects even when the project types or KPIs are different. Although the study focuses on a performance dashboard for pharmaceutical industry projects, the dashboard development process described in this paper can be applied to other industry projects. DOI: 10.1061/(ASCE)CO.1943-7862.0000503. © 2012 American Society of Civil Engineers.

CE Database subject headings: Benchmark; Construction management; Performance characteristics.

Author keywords: Benchmarking; Construction management; Performance dashboard; Performance evaluation; Performance measurement; Pharmaceutical project.

Introduction

Performance measurement is "the process of quantifying the efficiency and effectiveness of actions" (Neely et al. 1995). In other words, it is the process of evaluating how successfully a project is completed or an organization is managed. In a very competitive global environment, performance measurement becomes critical to improving and managing performance in the construction industry. Over the past two decades, a number of performance measurement frameworks have been introduced to measure and improve different aspects of performance [Kumaraswamy and Thorpe 1996; Fong et al. 2001; Palaneeswaran and Kumaraswamy 2000; Sommerville and Robertson 2000; Key Performance Indicators (KPI) Working Group 2000; Kagioglou et al. 2001; Li et al. 2001; Cox et al. 2003; Mohamed 2003; Beatham et al. 2004; Chan et al. 2004; Love et al. 2004; Ramirez et al. 2004; Bassioni et al. 2005; Lee et al. 2005; Nitithamyong and Skibniewski 2006; Abdel-Razek et al. 2007; Yu et al. 2007; Luu et al. 2008; Toor and Ogunlana 2010; Yeung et al. 2009]. These frameworks can be classified into three categories on the basis of their area of focus: (1) benchmarking programs; (2) project performance measurements; and (3) company performance measurements. Benchmarking programs focus on comparing and improving the performance of either projects or companies on the basis of project performance measurements and/or company performance measurements. Project performance measurements evaluate the success level of individual projects, whereas company performance measurements assess the overall business success level of companies.

Although many performance measurement frameworks have been developed, few studies have focused on a performance dashboard, which is defined as "a multilayered application built on a business intelligence and data integration infrastructure that enables organizations to measure, monitor, and manage performance more effectively" (Eckerson 2006). Eckerson stated that a performance dashboard needs applications and layers. Applications include monitoring, to convey information at a glance; analyzing, to let users analyze exceptional conditions; and managing, to improve alignment, coordination, and collaboration. Layers consist of a summarized graphical view that allows users to monitor aggregated key performance indicators (KPIs), a multidimensional view to facilitate exploration of information from various perspectives, and a detailed reporting view that provides information about performance in detail. To attain this functionality, a performance dashboard requires a detailed implementation and scoring technique that goes beyond current research on performance measurement (Bassioni et al. 2005) and a method to represent performance outputs in a quantifiable and aggregated fashion.

The purpose of this study is to build a comprehensive performance dashboard that is practical and relevant to an industry-tailored benchmarking program. The paper focuses on a performance dashboard for a pharmaceutical project benchmarking program, which was built as part of a benchmarking study by the Construction Industry Institute (CII). Data on 198 pharmaceutical capital projects are used to develop the performance dashboard; the data were submitted by 12 companies: 11 U.S. companies and one European company. The development process of the performance dashboard is presented with a discussion of the underlying rationale for each process and its results. Furthermore, results of a survey of 16 pharmaceutical industry experts validate the performance dashboard framework.

1Ph.D. Candidate, Univ. of Texas at Austin, Dept. of Civil, Architectural and Environmental Engineering, 1 University Station C1752, Austin, TX 78712 (corresponding author). E-mail: sungjoon.suk@gmail.com
2Assistant Professor, National Univ. of Singapore, Dept. of Building, SDE2-03-05, 4 Architecture Drive, Singapore 117566. E-mail: bdghbg@nus.edu.sg
3Research Engineer, Construction Industry Institute, Benchmarking and Metrics, Univ. of Texas at Austin, 3925 West Braker Lane (R4500), Austin, TX 78759. E-mail: jiukun.dai@cii.utexas.edu
4Associate Professor, Univ. of Texas at Austin, Dept. of Civil, Architectural and Environmental Engineering, 1 University Station C1752, Austin, TX 78712. E-mail: caldas@mail.utexas.edu
5Associate Director, Construction Industry Institute, Benchmarking and Metrics, Univ. of Texas at Austin, 3925 West Braker Lane (R4500), Austin, TX 78759. E-mail: smulva@cii.utexas.edu
Note. This manuscript was submitted on April 20, 2011; approved on October 24, 2011; published online on October 26, 2011. Discussion period open until December 1, 2012; separate discussions must be submitted for individual papers. This paper is part of the Journal of Construction Engineering and Management, Vol. 138, No. 7, July 1, 2012. ©ASCE, ISSN 0733-9364/2012/7-864–876/$25.00.



Table 1. Specific Benchmarking Programs

Researchers | Area | Result
Palaneeswaran and Kumaraswamy (2000) | Contractor selection | Process model: a conceptual benchmarking model
Sommerville and Robertson (2000) | Total quality management (TQM) | Metrics/process model: 18 metrics considered key to the business and a six-step process toward TQM
Fong et al. (2001) | Value management | Process model: management commitment, facilitator's skills, brainstorming, group effectiveness, customer satisfaction
Li et al. (2001) | Partnering | Process model: eight-stage process of a cooperative benchmarking approach
Mohamed (2003) | Organizational safety culture | Measurement system: balanced scorecard across four perspectives, including management, operational, customer, and learning
Love et al. (2004) | Information technology | Metrics: benefits, costs, and risks
Ramirez et al. (2004) | Management practices | Measurement system: application of a management evaluation system
Abdel-Razek et al. (2007) | Labor productivity | Metrics: disruption index, performance ratio, and project management index

Project Performance Measurement Frameworks

Benchmarking Programs

McGeorge et al. (2002) defined benchmarking as "a process of continuous improvement based on the comparison of an organization's processes or products with those identified as best practice." Benchmarking is recognized as a commonly used tool to identify successful performance of projects and construction companies (El-Mashaleh et al. 2007). The two most widely known benchmarking programs are the Benchmarking and Metrics (BM&M) Program at CII and the Construction Best Practice Programme (CBPP), which is jointly supported by the Department of the Environment, Transport and the Regions and the Construction Industry Board in the United Kingdom.

Based at the University of Texas at Austin, CII is a consortium of leading owners, engineering and construction contractors, and suppliers whose mission is to measurably improve the delivery of capital facilities (CII 2010b). The CII organized the BM&M Program in 1996 and established its first set of performance metrics: project cost growth, project schedule growth, project budget factor, project schedule factor, change cost factor, and practice use (Lee et al. 2005). As the program matured, CII established standard metric definitions for engineering and construction productivity in selected disciplines, including concrete, structural steel, electrical, piping, instrumentation, and equipment (Park et al. 2005; Kim 2007). Using these metrics, CII provides its participants with web-based benchmarking reports detailing project performance norms, including values on individual metrics with mean, median, quartile cutoffs, and data distribution (CII 2010b).

The CBPP launched its KPIs for performance measurement in the United Kingdom in the late 1990s. Ten KPIs were developed for measuring both project performance and company performance, comparing a project or company with the other projects or companies in the construction industry. Seven of them are related to project performance: construction cost, construction time, cost predictability, time predictability, defects, client satisfaction with product, and client satisfaction with service. The other three KPIs are related to company performance: safety, profitability, and productivity. The 10 KPIs have been used by many construction companies in the United Kingdom (Beatham et al. 2004).

In addition to CII and CBPP, research efforts have been made to establish area-specific benchmarking programs. Such efforts generated more relevant metrics, measurement systems, and/or process models to benchmark a particular area. Table 1 shows recent efforts related to benchmarking programs that narrowed down to a specific area.

Although these benchmarking programs have developed many useful metrics to measure and evaluate performance, they focus on individual metrics rather than on aggregated indicators. As a result, it is difficult to obtain insight into the overall performance of a company (El-Mashaleh et al. 2007) or project. In addition, a competitive benchmarking program requires an industry-tailored project hierarchy because processes and facilities vary by industry (Hwang et al. 2008), and such distinct processes and facilities often result in significant differences in project performance. Moreover, each industry needs particular metrics that reflect industry-specific characteristics. For example, pharmaceutical projects need to meet regulatory requirements for the validation and qualification of facilities. The following are examples of pharmaceutical industry-specific metrics used to measure and evaluate performance related to validation and qualification: (commissioning cost + qualification cost)/total installed cost (TIC); [installation qualification (IQ) thru operation qualification (OQ) duration]/(# IQ + OQ protocols); and (process space square foot + process-related space square foot)/gross square foot (GSF).
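To make these example metrics concrete, the short sketch below computes each of them from its stated definition. It is illustrative only: the function and parameter names are hypothetical and are not drawn from the CII questionnaire.

```python
# Illustrative computation of the pharmaceutical industry-specific example
# metrics quoted above. All names here are hypothetical placeholders.

def commissioning_qualification_ratio(commissioning_cost: float,
                                      qualification_cost: float,
                                      total_installed_cost: float) -> float:
    """(commissioning cost + qualification cost) / TIC."""
    return (commissioning_cost + qualification_cost) / total_installed_cost

def iq_oq_duration_per_protocol(iq_thru_oq_duration: float,
                                num_iq_protocols: int,
                                num_oq_protocols: int) -> float:
    """(IQ thru OQ duration) / (# IQ + OQ protocols)."""
    return iq_thru_oq_duration / (num_iq_protocols + num_oq_protocols)

def process_space_ratio(process_sf: float,
                        process_related_sf: float,
                        gross_sf: float) -> float:
    """(process space SF + process-related space SF) / GSF."""
    return (process_sf + process_related_sf) / gross_sf
```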
Project Performance Measurements

Project performance measurements evaluate the success level of individual projects. These measurements can have different purposes: business performance and project delivery performance. Business performance evaluates the success and effectiveness of projects over the project lifetime. Business performance is usually measured by the owner because it needs information on the profitability of products or services in addition to the performance of the project delivery itself. Project delivery performance, on the other hand, assesses the efficiency of completing facilities constructed for production or occupancy. Project delivery performance measurement is of major interest to the construction industry and has been widely adopted for project performance evaluation. Both business performance and project delivery performance use performance metrics to evaluate outcomes objectively.



Table 2. Project Performance Indicators

Indicator columns: cost; time; quality; safety; environment; client satisfaction; business performance; project-team satisfaction; communication; others. An O marks an indicator used by the study.

Kumaraswamy and Thorpe (1996): O O O O O O O; others: technology transfer
Audit Office of New South Wales (1999): O O O O O O O
Key Performance Indicators (KPI) Working Group (2000): O O O O O; others: change orders
Cox et al. (2003): O O O O; others: productivity
Chan et al. (2004): O O O O O O O; others: functionality of building
Nitithamyong and Skibniewski (2006): O O O O; others: strategic performance, risk improvement
Toor and Ogunlana (2010): O O O O O O; others: minimized conflicts
Yeung et al. (2009): O O O O O O; others: innovation and improvement
This paper: O O O O; others: design/space efficiency

Note: O = Yes.

Traditionally, the performance of construction projects was measured by three performance indicators: cost, time, and quality (Ward et al. 1991). However, recent studies employed additional performance indicators to evaluate construction project execution from a more balanced perspective. Table 2 shows the performance indicators used by several previous studies. In addition to cost, time, and quality, safety is included in most of the studies. Additionally, some studies suggest the use of indicators such as environmental friendliness, client satisfaction, business performance, project-team satisfaction, and communication. Other indicators found in the literature include technology transfer, change orders, productivity, building functionality, strategic performance, risk improvement, minimized conflicts, and innovation and improvement.

The definition of performance indicators was the primary focus and contribution of the previous studies shown in Table 2, which primarily dealt with the criteria required for assessing the performance of construction projects. However, a generic framework is required to compare the performance of various construction projects regardless of their industry or facility type. Specifically, the framework needs a scoring algorithm for the selected performance metrics and a comparison method to evaluate the performance of projects and/or companies.

Company Performance Measurements

For a long time before the 1990s, company performance measurements relied on financial and accounting measurements: return on investment, sales per employee, cash flow, profit, and more. However, financial-focused performance measurements were criticized for their insufficiency in addressing overall business performance (Eccles 1991; Kaplan and Norton 1992; Parker 2000). To overcome this deficiency, Kaplan and Norton (1992) introduced the balanced scorecard (BSC), which added three nonfinancial perspectives: customers, innovation and improvement, and internal processes, in addition to the financial perspective. Consequently, the BSC provided a balanced picture by including operational performance in addition to economic performance (Amaratunga et al. 2001).

Many studies have been conducted to measure company performance by adopting and modifying the BSC. Kagioglou et al. (2001) presented a conceptual framework that integrated the perspectives of project and supplier with the BSC. Bassioni et al. (2005) established a conceptual framework to measure the business performance of construction companies by the BSC, business excellence models, and empirical feedback from expert interviews. Yu et al. (2007) developed a framework to calculate a company performance score by combining weighted metrics, which were categorized by the four BSC perspectives; the weights for the metrics were obtained from a survey of construction companies. Luu et al. (2008) incorporated a strengths-weaknesses-opportunities-threats (SWOT) matrix into the BSC so that a typical construction company can devise its short- and long-term strategies. Stewart and Mohamed (2001) adapted the BSC concept to develop a framework to evaluate information technology (IT) performance. The framework measured IT performance at three different decision-making tiers: project, business unit, and enterprise.

These studies contributed to improving company performance measurement on the basis of the BSC. They developed performance metrics and frameworks appropriate for the construction industry from various perspectives. Nonetheless, more studies need to be conducted to investigate the relationship between company performance and project performance (Bassioni et al. 2004). Furthermore, little has been done to measure company performance from the perspective of aggregated project performance. The most practical need is a detailed implementation method and scoring algorithm that can be used for company performance measurement (Bassioni et al. 2005).



Development of a Performance Dashboard

Background

To develop a more specific benchmarking system for the pharmaceutical industry, CII and pharmaceutical industry experts worked together as a team to launch the pharmaceutical facility benchmarking program in 2004. The research team developed the hierarchical structure for categorizing pharmaceutical projects, as shown in Table 3. They identified three major project types: bulk manufacturing, secondary manufacturing, and laboratory, each of which has subcategories, some of which are divided into further subcategories. The rationale for this categorization is that the distinct processes and characteristics of pharmaceutical facilities will show different project performance results. Therefore, benchmarking requires an appropriate hierarchical structure for pharmaceutical projects. The team also defined 80 pharmaceutical industry-specific metrics, which include cost, schedule, dimension, and quality metrics (Hwang et al. 2008).

The CII constructed a tailored pharmaceutical questionnaire by modifying and adding questions to the existing questionnaire, which is generic to capital construction projects in all industries. The pharmaceutical questionnaire was programmed and released online to allow the team to submit project data. The submitted data are analyzed on the basis of the individual metrics categorized by the project hierarchy. The metrics of a project are compared with those of other projects by using a statistical method. The comparison results are reported by showing a relative performance location of each metric in a quartile-based distribution. For example, if a metric for a project belongs to the first quartile, then the performance of the metric falls within the top performing 25%. In contrast, if a metric is in the fourth quartile, then the performance of the metric is within the 25% with the worst performance. More details on the first stage of the pharmaceutical benchmarking system can be found in Hwang et al. (2008).

However, the team perceived that the pharmaceutical benchmarking system should include more aggregated information to assess project performance from an overall perspective. Although the metrics of a project are individually compared with those of other projects, the overall performance of projects cannot be evaluated with only the metrics and the project hierarchy. To objectively assess the overall performance of projects, an approach is required to reasonably aggregate the performance results of a number of metrics. The performance dashboard developed in this research suggests a rational solution to evaluate the overall performance of projects. The following sections discuss the components and algorithm of the performance dashboard in detail.

Components of the Performance Dashboard

The proposed performance dashboard framework includes a tabular-format dashboard and two graphic-format dashboards. The tabular-format dashboard summarizes the performance results of projects as numbers, whereas the graphic-format dashboards show the deviations of the performance of projects as distributions. This section describes the components of the tabular-format dashboard. The outputs of all dashboards are discussed in the "Performance Dashboard Outputs" section, where their layout and design are shown in a table and figures.

The tabular-format dashboard consists of aggregated quartile scores for project-level performance, company-level performance, safety performance, industrial project definition rate index (PDRI), project type, project comparison level, number of projects compared, and project priority. Each of these elements is explained in the following paragraphs.

Table 3. Hierarchical Structure of Pharmaceutical Projects


Level I Level II Level III Level IV
Pharmaceutical bulk Biological (41) Direct derived (2) —
manufacturing (65) Fermentation (14) —
Cell culture (20) Attachment dependent (7)
Stirred tank (13)
Pilot plant (5) —
Chemical (24) Manufacturing (21) —
Pilot plant (3) —
Pharmaceutical secondary Fill finish (50) Parenteral (20) Syringe (2)
manufacturing (68) Delivery device (2)
Vial (15)
Pilot plant (1)
Non-parenteral (30) Inhalants (4)
Solid dosage (22)
Cream/ointment (2)
Pilot plant (2)
Medical devices (3) — —
Nutritional (4) — —
Packaging (3) — —
Pharmaceutical warehouse (8) — —
Pharmaceutical Research (34) Biological (23) —
laboratory (65) Chemical (11) —
Quality control/quality assurance (12) — —
Vivarium (9) — —
Process development (10) — —
Note: Number of projects shown in parentheses.



The performance of each project is evaluated on the basis of an overall project performance that is a composite score of four performance categories: cost, schedule, quality, and dimension. In addition to the first three most commonly used performance categories, the performance dashboard includes the dimension performance category, which measures the design/space efficiency of facilities. The first two performance categories, cost performance and schedule performance, have subcategories, absolute performance and relative performance, each of which has different key metrics. In contrast, the dimension and quality performance categories consist of key metrics without subcategories.

Apart from the overall project performance, safety performance is independently assessed. The team decided to exclude it from the overall project performance because many projects had no accident record or did not have data, and thus safety performance may not have enough power of discrimination for calculating the overall project performance. However, the team still wanted to examine the results of safety performance because of the importance of safety itself. Consequently, safety performance was included in a separate column of the performance dashboard and not as a part of the overall project performance.

Company-level performance shows the quartiles and rankings for the overall project performance and the four performance categories by averaging the performance scores of all the projects in a company portfolio. Quartiles and ranks are determined by comparing the average performance scores of a company with those of other companies, which enables each company to know its overall relative level of performance. To protect confidentiality, companies can view only their own performance and relative position in quartiles; other companies' data are blinded. To help users understand quartiles and ranks, the distributions of the average performance scores are examined and graphically represented.

The PDRI score measures how well a project is defined and is included in the performance dashboard to support examination of the relationship between project performance and the adequacy of the definition. Project type indicates the original project type in the hierarchical structure; comparison level represents the project type at which a project is compared with other projects; number of projects compared shows how many projects are compared in a comparison level; and project priority describes the project driver. Three different project priorities are used: cost-driven, schedule-driven, and balanced/others.

In addition to this performance dashboard, dashboards are also provided for the three Level I groups in the hierarchical structure shown in Table 3 to allow the team to separately examine the performance of bulk manufacturing, secondary manufacturing, and laboratory.

Performance Categories, Key Performance Metrics, and Weights

Bulk manufacturing, secondary manufacturing, and laboratory use the same four performance categories: cost, schedule, dimension, and quality. First, the cost performance category consists of two subcategories: absolute cost performance and relative cost performance. Absolute cost performance represents ratios of actual costs to other costs or dimensions, and relative cost performance measures cost variation. Second, the schedule performance category also has two subcategories: absolute schedule performance and relative schedule performance. Absolute schedule performance assesses activity durations relative to the number of qualification protocols, dimension, or cost. On the other hand, relative schedule performance evaluates schedule variation. However, both the dimension and quality performance categories have only key metrics, without subcategories. Third, the dimension performance category evaluates how efficiently a process facility is designed. For bulk and secondary manufacturing, the proportion of process space is assessed; for laboratory, the length of the benchtop and the number of persons to be admitted are evaluated. Lastly, the quality performance category includes planning and execution of qualification and validation (PEQV) and the Pharmaceutical Project Definition Index (PPDI). Terms related to PEQV and PPDI are defined in the CII pharmaceutical owner large questionnaire (CII 2010a) as follows: qualification is the action of providing equipment or ancillary systems that are properly installed, work correctly, and actually lead to the expected results; validation is a documented program that provides a high degree of assurance that a specific process, method, or system will consistently produce a result meeting predetermined acceptance criteria; and PPDI indicates how well user requirements, design qualification planning, and validation master plans were defined before the total project budget authorization. Both PEQV and PPDI measure the level of activities undertaken to meet the requirements of the Current Good Manufacturing Practice (CGMP) regulations. On the basis of the CGMP regulations, the U.S. Food and Drug Administration (FDA) inspects the facilities used in manufacturing, processing, and packing of a drug before it can be approved (FDA 2010). The activities for qualification and validation are essential to receive approval from the FDA and to deliver pharmaceutical projects.

Although bulk manufacturing, secondary manufacturing, and laboratory use the same performance categories, the key metrics included in each performance category are not the same. Bulk manufacturing and secondary manufacturing have the same key metrics (Fig. 1), but laboratory employs different key metrics (Fig. 2) to measure aspects of laboratories that are distinct from those of manufacturing facilities. All of the key metrics for the four performance categories were determined by team discussion.

Weights were defined for the four performance categories, the KPIs, and the project priorities to reflect their relative importance. The Delphi method, an iterative process leading to a group consensus, was used to determine the weights. The iterative process involves experts, a questionnaire, and a facilitator. Once the experts complete the questionnaire, the facilitator reports a summary of the answers. Considering the summary as a group opinion, the experts can revise their answers. This process can be concluded after a predetermined number of rounds, when the summary does not change with more rounds, or when the group reaches consensus on the summary. The iterative process of the Delphi method thus enables a group to converge to consensus. In this research, the industry experts of the team answered the questionnaire for the weights of the KPIs. The CII facilitated the Delphi method and provided the team a summary with the averages of the weights. The process stopped after the second round because the team achieved consensus on the averages.

Bulk manufacturing and secondary manufacturing use the same set of weights (Fig. 1), but laboratory uses a different set of weights (Fig. 2) because its characteristics differ from those of manufacturing projects. Each set of weights has three different groups of weights for the four performance categories depending on project priority: cost, schedule, or balanced. The team agreed that project priorities lead to significant differences in the project performance of pharmaceutical capital projects. According to the project priority, the weights of the four performance categories are applied differently for the weighted percentiles. For example, when cost is the project priority for a bulk project, the weights of the four performance categories are, respectively, 55, 22, 10, and 13. In contrast, when schedule is the project priority for a bulk project, the weights of the four performance categories are 22, 55, 10, and 13. The subcategories and indicators under the four performance categories also have weights. All of the weights were determined by the Delphi method described previously.



Fig. 1. Key performance metrics and weights for bulk and secondary manufacturing

Performance category weights by project priority (cost / balanced / schedule): cost performance 55/44/22; schedule performance 22/33/55; dimension performance 10/10/10; quality performance 13/13/13.

Cost performance (absolute 66, relative 34). Absolute cost metrics and weights: TIC / process equipment cost (22); facility construction cost / GSF (18); TIC / GSF (33); soft cost / hard cost (10); (design cost + construction management cost) / TIC (9); (commissioning + qualification cost) / TIC (8). Relative cost metrics and weights: cost growth = (actual cost − planned cost) / planned cost (50); delta cost growth (40); change cost factor = change cost / actual cost (10).

Schedule performance (absolute 48, relative 52). Absolute schedule metrics and weights: (IQ thru OQ duration) / (# IQ + OQ protocols) (33); (design through OQ duration) / GSF (26); (design through OQ duration) / TIC (41). Relative schedule metrics and weights: schedule growth = (actual duration − planned duration) / planned duration (70); delta schedule growth (20); change schedule factor = change duration / actual duration (10).

Dimension performance: (process space SF + process-related space SF) / GSF (100).

Quality performance: planning and execution of qualification and validation (50); pharmaceutical project definition index (50).
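To illustrate how the priority-dependent category weights in Fig. 1 roll the four category scores into one overall project score, consider the following sketch. The weight values are read from Fig. 1 (bulk and secondary manufacturing); the data layout and function name are assumptions, not part of the CII implementation.

```python
# Priority-dependent category weights from Fig. 1 (bulk/secondary
# manufacturing); each set of four weights sums to 100.
CATEGORY_WEIGHTS = {
    "cost":     {"cost": 55, "schedule": 22, "dimension": 10, "quality": 13},
    "schedule": {"cost": 22, "schedule": 55, "dimension": 10, "quality": 13},
    "balanced": {"cost": 44, "schedule": 33, "dimension": 10, "quality": 13},
}

def overall_score(category_scores: dict[str, float], priority: str) -> float:
    """Weight the four category scores by the project's priority and sum them."""
    weights = CATEGORY_WEIGHTS[priority]
    return sum(weights[c] * category_scores[c] / 100.0 for c in weights)

# A cost-driven project is scored 55/22/10/13 across cost, schedule,
# dimension, and quality.
print(overall_score({"cost": 80, "schedule": 40, "dimension": 60,
                     "quality": 50}, "cost"))  # 65.3
```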

Algorithm of the Performance Dashboard

On the basis of the hierarchical structure, components, performance categories, KPIs, and weights described previously, the following steps delineate the calculation process of generating the performance dashboard.

Step 1: Calculation of the Percentiles of the Key Performance Metrics

Step 1 calculates the percentiles of the KPIs. Percentiles indicate relative scores or positions represented by values ranging from 0–100. For example, a metric may have six values: 1, 2, 3, 4, 5, and 6. If a lower value means better performance, the percentiles of the six values are 0, 20, 40, 60, 80, and 100, respectively. Percentiles are used to represent relative performance scores for each key metric because percentiles enable the performance dashboard to combine approximately 20 key metrics that measure different aspects of project performance and thus have various units and distribution ranges.

In accordance with the project hierarchy, the KPI values of a project are compared with those of other projects. During this comparison, percentiles are calculated and assigned to the KPIs as long as the number of projects is sufficient. The team established a rule that the calculation of percentiles for each KPI requires a comparison level with more than seven projects submitted by more than three separate companies. The objective of this rule is to ensure reasonable comparison and confidentiality. Comparison with a small number of projects may not have adequate power to differentiate the performance level of projects, and may reveal the company that is the source of a metric's value. The team reasoned that having more than seven projects in a comparison level ensures at least two projects for each quartile, and that the interval of percentiles is not too wide to calculate their relative performance. In addition, having more than three companies in a comparison level prevents a company from easily recognizing the other companies. If the rule is not satisfied in a comparison level, then percentiles are calculated and reported at the next higher comparison level in the project hierarchy that meets the rule.
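The following minimal sketch reproduces the evenly spaced percentile ranks described in Step 1, including the worked example in the text (six values 1–6, lower is better, yielding 0, 20, 40, 60, 80, and 100). How ties and higher-is-better metrics are handled is an assumption, as the paper does not specify either.

```python
# Evenly spaced percentile ranks for one KPI within a comparison level
# (Step 1). Assumes distinct values; ties would need an explicit rule.

def percentile_ranks(values: list[float],
                     lower_is_better: bool = True) -> dict[float, float]:
    """Map each metric value to a percentile between 0 and 100 by rank."""
    ordered = sorted(values, reverse=not lower_is_better)
    n = len(ordered)
    return {v: 100.0 * i / (n - 1) for i, v in enumerate(ordered)}

# The worked example from Step 1.
assert percentile_ranks([1, 2, 3, 4, 5, 6]) == \
    {1: 0.0, 2: 20.0, 3: 40.0, 4: 60.0, 5: 80.0, 6: 100.0}
```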


Fig. 2. Key performance metrics and weights for laboratory

Performance category weights by project priority (cost / balanced / schedule): cost performance 59/49/28; schedule performance 14/24/45; dimension performance 16/16/16; quality performance 11/11/11.

Cost performance (absolute 62, relative 38). Absolute cost metrics and weights: facility construction cost / GSF (20); TIC / GSF (31); soft cost / hard cost (7); (design cost + construction management cost) / TIC (10); (commissioning + qualification cost) / TIC (6); TIC / lab population (15); TIC / total building population (11). Relative cost metrics and weights: cost growth (50); delta cost growth (40); change cost factor (10).

Schedule performance (absolute 54, relative 46). Absolute schedule metrics and weights: (design through OQ duration) / GSF (45); (design through OQ duration) / TIC (55). Relative schedule metrics and weights: schedule growth (60); delta schedule growth (30); change schedule factor (10).

Dimension performance: benchtop linear foot / lab population (49); GSF / total building population (51).

Quality performance: planning and execution of qualification and validation (36); pharmaceutical project definition index (64).

Table 4. Example of Calculating the Weighted Percentile

Metric definitions: cost growth = (actual cost − planned cost)/planned cost; delta cost growth = |(actual cost − planned cost)/planned cost|; change cost factor = change cost/actual cost.

Project O1001: cost growth percentile 80, weight 50, weighted percentile 40.0 (80 × 50 ÷ 100); delta cost growth percentile 40, weight 40, weighted percentile 16.0 (40 × 40 ÷ 100); change cost factor percentile 70, weight 10, weighted percentile 7 (70 × 10 ÷ 100); weighted percentile of relative cost performance 63.0 (40 + 16 + 7).

Project O1002: cost growth percentile 60, weight 50, weighted percentile 33.3 (60 × 50 ÷ 90); delta cost growth percentile 30, weight 40, weighted percentile 13.3 (30 × 40 ÷ 90); change cost factor not available; weighted percentile of relative cost performance 46.6 (33.3 + 13.3).

Step 2: Calculation of the Weighted Percentiles

The percentiles of each key metric are multiplied by their weights, thereby producing weighted percentiles. Then the sum of the weighted percentiles in a performance category becomes the performance score for that category. Each performance category also has its own weight, which multiplies the performance scores of the four performance categories. Finally, the overall performance score for each project is calculated by summing the weighted percentiles of the four performance categories. Table 4 shows an example of the calculation process of weighted percentiles for the relative cost performance category. In the O1001 project, the three KPIs (cost growth, delta cost growth, and change cost factor) are in the 80, 40, and 70 percentiles, respectively. The weights of the three KPIs are 50, 40, and 10, as shown in Table 4. Because the weighted percentile is the product of percentile and weight, the weighted percentile of cost growth is 40 (80 × 0.5). In the same way, the weighted percentiles of delta cost growth and change cost factor are 16 and 7. Subsequently, the weighted percentile of the relative cost performance category is 63, which is the sum of 40, 16, and 7.

Weights are adjusted when values are missing in the key metrics. Although the ground rule is that the weighted percentiles of each performance category are calculated from every KPI, projects sometimes did not have data for all KPIs. The O1002 project shows an example of how the weights are adjusted. The O1002 project has cost growth and delta cost growth but not change cost factor. In this case, the weighted percentile of the relative cost performance category is calculated from only the two available KPIs (cost growth and delta cost growth), and the weighted percentiles of the two KPIs are divided by the sum of the weights of the two KPIs. That is, the weighted percentile of cost growth is 33.3 [(60 × 0.5)/0.9]. In the same way, the weighted percentile of delta cost growth is 13.3 [(30 × 0.4)/0.9]. Finally, the weighted percentile of the relative cost performance is 46.6, which is the sum of 33.3 and 13.3.
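A minimal sketch of Step 2, including the weight renormalization for missing KPIs, is shown below; it reproduces both rows of Table 4. The use of None to mark a missing KPI value is an assumption.

```python
# Step 2: weighted percentile of one performance category, renormalizing
# the weights when a KPI has no data (None).

def category_score(percentiles: dict[str, float | None],
                   weights: dict[str, float]) -> float:
    """Sum weight x percentile over the available KPIs, rescaled so that
    the weights of the available KPIs sum to 100."""
    available = {k: p for k, p in percentiles.items() if p is not None}
    weight_sum = sum(weights[k] for k in available)  # 90 in the O1002 row
    return sum(p * weights[k] / weight_sum for k, p in available.items())

weights = {"cost_growth": 50, "delta_cost_growth": 40, "change_cost_factor": 10}
# O1001: all KPIs present -> 40 + 16 + 7 = 63
print(category_score({"cost_growth": 80, "delta_cost_growth": 40,
                      "change_cost_factor": 70}, weights))
# O1002: change cost factor missing -> 33.3 + 13.3 = 46.7
# (Table 4 reports 46.6 because it truncates each term to one decimal)
print(category_score({"cost_growth": 60, "delta_cost_growth": 30,
                      "change_cost_factor": None}, weights))
```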



Step 3: Search of the Comparison Level of Projects

Step 3 determines at which level a project is compared in the hierarchical structure (Table 3). Ideally, projects need to be compared against their original project category. For instance, comparing an attachment-dependent project with other attachment-dependent projects is the most desirable method. However, not all projects can be compared within their original project category because of an insufficient number of projects. As described, the team established a rule that a comparison level requires eight or more projects submitted by at least four companies. Although 198 projects have been submitted over 5 years, not every project category in the hierarchical structure has a large enough number of projects to meet the rule at the most specific level. If a project category does not meet the rule, then this category cannot be a comparison level. Accordingly, projects in such a category cannot be compared in their original project category. Nonetheless, they need to be compared somehow so that the overall performance can reflect all of the projects in a company's portfolio.

Comparing projects with an upper-level project category may be the next best way to compare projects whose comparison level does not satisfy the rule. For example, currently the project category of attachment dependent has seven projects, as shown in Table 3. In this case, attachment-dependent projects are compared at the project category of cell culture, which meets the rule and is the next upper level of attachment dependent in the hierarchy. Likewise, if any project category does not have enough projects, projects are compared at the next upper category, which becomes the comparison level. Consequently, all projects can be reasonably compared, which enables the performance dashboard to include all of the projects in a company's portfolio.

Table 5. Example of Adjusting the Weighted Percentile (Comparison Level: Bulk Manufacturing/Biological/Cell Culture/Stirred Tank)

Project | Weighted percentile | Rank | Adjusted percentile
O2001 | 74 | 2 | 90
O2002 | 54 | 7 | 40
O2003 | 57 | 6 | 50
O2004 | 48 | 8 | 30
O2005 | 64 | 5 | 60
O2006 | 44 | 9 | 20
O2007 | 71 | 3 | 80
O2008 | 16 | 11 | 0
O2009 | 25 | 10 | 10
O2010 | 85 | 1 | 100
O2011 | 67 | 4 | 70
projects in a company’s portfolio.
Step 4: Adjustment of the Weighted Percentiles of the Performance Categories and the Overall Performance

Step 4 adjusts the weighted percentiles calculated at Step 2 so that the weighted percentiles for the performance categories and for the overall performance range from 0–100. Because the weighted percentiles from Step 2 are the sum of the products of several metric percentiles and their weights, the performance categories and the overall performance do not range from 0–100. Accordingly, the ranges vary depending on comparison levels. For example, the weighted percentiles at the bulk manufacturing/biological/cell culture level vary from 35–85, whereas those at the bulk manufacturing/chemical/manufacturing level vary from 25–90. Comparison across these different ranges may lead to a faulty assessment of project performance. At the bulk manufacturing/biological/cell culture level, the project whose weighted percentile is 85 is the best; at the bulk manufacturing/chemical/manufacturing level, the project whose weighted percentile is 90 is the best. If the two best projects were compared by only their weighted percentiles, the project with 90 would appear better. However, from the relative performance perspective, this comparison is unfair to the project with 85, because this project is the best at the bulk manufacturing/biological/cell culture level. Similarly, it is not correct to say that the worst project with 35 is better than a project with 25, because the weighted percentiles come from different comparison levels.

To avoid such unfair comparisons, the weighted percentiles are adjusted by Eq. (1). Eq. (1) makes the weighted percentiles range from 0–100 as relative scores, which enables projects to be compared with each other even though they have different comparison levels:

Adjusted percentile = 100 − 100 × (R − 1)/(N − 1)    (1)

where R = rank of the weighted percentile; and N = total number of percentiles.

Table 5 shows an example of how the weighted percentiles are adjusted. The comparison level is bulk manufacturing/biological/cell culture/stirred tank, with 11 projects. Regarding project O2007, its weighted percentile for overall project performance is 71, and it ranks third of 11 projects. On the basis of Eq. (1), the adjusted percentile of the overall performance of project O2007 is 80, against which the project can be compared with other projects in different comparison levels. In this way, each project can be evaluated by using adjusted percentiles ranging from 0–100.

Step 5: Comparison of the Performance of Companies

Step 5 compares the performance of companies on the basis of the averages of the adjusted percentiles of the projects in each company portfolio. The overall performance of a company is calculated by averaging the overall performance scores of all projects in that company's portfolio. The comparison results are described in the next section.

Performance Dashboard Outputs

Performance dashboards for each company include several dashboard outputs. Table 6 shows the tabular performance dashboard of an example company X with 12 projects. The projects have adjusted percentiles for the four performance categories and for the overall project performance. For instance, the adjusted percentiles for the four performance categories of the O1001 project are 69, 15, 62, and 82, which results in a 69 adjusted percentile and second quartile for the overall project performance. In addition to these relative performance scores, the performance dashboard outputs provide detailed criteria documenting how the performance of a project is evaluated and compared, such as project type, comparison level, number of projects compared, and project priority.

The overall performance of each company is evaluated by the average adjusted percentiles of the projects in its portfolio. In this example (Table 6), the overall performance of the 12 projects of company X is 70 on average, which is compared with that of other companies. Similarly, the averages of the four performance categories are compared with those of other companies. The comparison results are reported by rank and quartile in the last two rows. The overall performance, 70, ranks second of 11 companies, which is shown in the overall performance column. Consequently, the overall performance of company X falls in the first quartile (2/11 ≈ 0.18, which is less than 0.25, the first quartile limit). In the same fashion, the rankings of the four performance categories are 3/11, 8/11, 2/11, and 7/9, and their quartiles are second, third, first, and fourth.

In addition to the tabular dashboard (Table 6), the performance evaluation results of projects in a company portfolio can be examined through graphical dashboards in a summarized way. The two graphical dashboards (Figs. 3 and 4) encapsulate the tabular performance dashboard. The objective of the graphical dashboards is to present a summarized view of the performance results; as the number of projects increases, the tabular dashboard becomes difficult to read and interpret.
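A minimal sketch of Steps 4 and 5 described above follows: Eq. (1) rescales the weighted percentiles to relative 0–100 scores within a comparison level, and a company score is the equally weighted average over its portfolio. The tie handling is an assumption; the example reproduces project O2007 in Table 5.

```python
# Step 4 (Eq. (1)) and Step 5. Rank 1 is the highest weighted percentile;
# ties share the best rank (an assumption; the paper does not discuss ties).

def adjusted_percentiles(weighted: list[float]) -> list[float]:
    """Adjusted percentile = 100 - 100 * (R - 1) / (N - 1)."""
    n = len(weighted)
    order = sorted(weighted, reverse=True)
    rank = {v: order.index(v) + 1 for v in weighted}
    return [100.0 - 100.0 * (rank[v] - 1) / (n - 1) for v in weighted]

def company_score(project_scores: list[float]) -> float:
    """Step 5: equally weighted average over the company's portfolio."""
    return sum(project_scores) / len(project_scores)

# Table 5: project O2007 has weighted percentile 71 and rank 3 of 11 -> 80.
weighted = [74, 54, 57, 48, 64, 44, 71, 16, 25, 85, 67]
print(adjusted_percentiles(weighted)[6])  # 80.0
```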



Table 6. Tabular Performance Dashboard for Company X

Columns: cost performance; schedule performance; dimension performance; quality performance; overall project performance (adjusted percentile, quartile); safety performance (TRIR, DART); project definition rate index (PDRI) score; project type; comparison level; number of projects compared in data set; project priority.

O1001: 69; 15; 62; 82; overall 69, quartile 2; TRIR 100, DART 100; PDRI 138; Bulk/bio/fermentation; Bulk/bio/fermentation; 14; Cost
O1002: 77; 77; 85; 89; overall 100, quartile 1; TRIR 100, DART 100; PDRI 242; Bulk/bio/cell/attachment; Bulk/bio/cell; 14; Schedule
O1003: 79; 36; 67; 17; overall 61, quartile 2; TRIR 100, DART 100; PDRI —; Bulk/bio/pilot plant; Bulk/bio; 34; Cost
O1004: 91; 76; 55; not available; overall 94, quartile 1; TRIR 12, DART 100; PDRI —; Bulk/bio/pilot plant; Bulk/bio; 34; Cost
O1005: 88; 8; 76; 76; overall 67, quartile 2; TRIR 41, DART N/A; PDRI —; Secondary/warehouse; Secondary; 52; Cost
O1006: 100; 25; 92; 18; overall 67, quartile 2; TRIR and DART not available; PDRI —; Secondary/fill finish/parenteral/vial; Secondary/fill finish/parenteral/vial; 13; Schedule
O1007: 56; 50; 94; 80; overall 75, quartile 1; TRIR 100, DART 100; PDRI 76; Secondary/fill finish/parenteral/delivery; Secondary/fill finish/parenteral/delivery; 17; Cost
O1008: 78; 80; 78; 11; overall 71, quartile 2; TRIR not available, DART 34; PDRI 331; Secondary/nutritional; Secondary; 52; Balanced
O1009: 20; 100; 25; 38; overall 50, quartile 2; TRIR 63, DART 100; PDRI —; Laboratory/quality control and quality assurance; Laboratory/quality control and quality assurance; 11; Cost
O1010: 15; not available; 100; 13; overall 40, quartile 3; TRIR 0, DART 0; PDRI —; Laboratory/research/bio; Laboratory/research/bio; 21; Cost
O1011: 95; 10; 61; 0; overall 80, quartile 1; TRIR 13, DART 100; PDRI 414; Laboratory/research/bio; Laboratory/research/bio; 21; Cost
O1012: 85; not available; not available; 50; overall 70, quartile 2; TRIR 100, DART not available; PDRI —; Laboratory/research/bio; Laboratory/research/bio; 21; Schedule

Average: 71; 48; 72; 43; overall 70; TRIR 63, DART 82
Company-level performance, rank/number of companies: 3/11; 8/11; 2/11; 7/9; overall 2/11; TRIR 3/9, DART 3/9
Company-level performance, quartile: 2; 3; 1; 4; overall 1; TRIR 2, DART 2
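The company-level quartiles in the last row of Table 6 follow from each rank and the number of companies compared; the ceiling formula below is inferred from those examples (e.g., rank 2 of 11 gives 2/11 ≈ 0.18 ≤ 0.25, hence the first quartile) rather than stated in the paper.

```python
# Rank-to-quartile mapping inferred from the company-level rows of Table 6.
import math

def quartile(rank: int, n: int) -> int:
    """Smallest q in 1..4 such that rank / n <= q / 4."""
    return math.ceil(4 * rank / n)

assert quartile(2, 11) == 1  # overall performance: rank 2 of 11
assert quartile(3, 11) == 2  # cost performance
assert quartile(8, 11) == 3  # schedule performance
assert quartile(7, 9) == 4   # quality performance
```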

Fig. 3. Graphical performance dashboard I for company X

The first graphical dashboard (Fig. 3) is a doughnut chart that summarizes the distribution of the averages for the overall performance and the four performance categories. This chart provides information on how the overall performance of a company is derived from the performance results of the individual projects. In Fig. 3, the number in the center of the top circle shows that the overall performance of company X is in the first quartile. The percentages in the ring of the top circle indicate that 33% are in the first quartile, 59% are in the second quartile, 8% are in the third quartile, and 0% are in the fourth quartile. Because company X has 12 projects in its portfolio, four projects are in the first quartile, seven projects are in the second quartile, one project is in the third quartile, and no projects are in the fourth quartile. This distribution of the overall performance averages of the 12 projects results in the first quartile for the performance of company X. The circles at the bottom illustrate the distributions of the performance averages of the four performance categories. Because the doughnut chart shows the performance results by distribution, it provides a summary of the performance of projects regardless of the number of projects.

To examine the performance of projects in more detail, the team developed similar performance dashboards for bulk manufacturing, secondary manufacturing, and laboratory. These performance dashboards further investigate the performance of projects as classified by the three primary project types in Level I of the hierarchical structure of pharmaceutical projects (Table 3). The performance dashboards for the three primary project types have the same tabular and graphical formats described in the previous paragraphs. They assist in conveying company portfolio performance at a more detailed level.

The second graphical dashboard (Fig. 4) provides the distributions of the average adjusted percentiles for the overall performance and the four performance categories. This dashboard enables companies to investigate the relative level of their performance. The distributions help them understand how their performance differs from that of other companies in terms of the average adjusted percentile of the projects in their portfolio. The first bar graph from the left indicates the overall performance, which summarizes the next four bar graphs for the performances of cost, schedule, dimension, and quality. For example, the average adjusted percentiles for company X are depicted by large diamonds in Fig. 4. A significant difference is shown in the average adjusted percentiles of the overall performance between the first-ranked company and company X. In contrast, little difference is shown in the average adjusted percentiles of the cost performance between the first- or second-ranked companies and company X. The distributions imply that company X falls behind the first-ranked company from the overall performance perspective, although it ranked second, and that company X is similar to the first and second companies from a cost performance perspective. Consequently, company X can gain a better understanding of its performance.

Fig. 4. Graphical performance dashboard II for company X

Validation

The developed performance dashboards were reported to each company through the reporting system. To validate the overall framework of the performance dashboards, pharmaceutical industry experts were surveyed on whether the performance dashboards are applicable to their needs as a decision support system. The survey solicited feedback on the rationale underlying the comparison algorithm and on the structure of the developed dashboard framework from the user's standpoint. The respondents were 16 pharmaceutical industry experts from 10 pharmaceutical companies participating in the CII pharmaceutical BM&M Program. They were asked to rate different aspects of the framework on 5-point Likert scales (strongly agree, agree, neutral, disagree, and strongly disagree). Fifteen of the 16 answered agree or strongly agree to the question "The comparison algorithm consisting of the relative importance of the indicators, adjusted composite index scores, and level of data references is appropriate." The same number of respondents answered agree or strongly agree to the question "The structure of the performance dashboard is well organized." Almost 94% of the respondents thus agreed that the rationale of the performance dashboard framework and the structure of the performance dashboard are appropriate for providing appropriate

levels of information on the performance of pharmaceutical proj- schedule performance, absolute schedule performance and relative
ects. The performance dashboard has been implemented as a regu- schedule performance had almost equal importance in their
lar component of the CII benchmarking program, which further weights.
validates its applicability. The weights for the KPIs were also determined to relatively
Discussion

The purpose of this study was to develop a performance dashboard for a pharmaceutical project benchmarking system that could reasonably assess the overall performance of pharmaceutical projects of different types and characteristics. The performance dashboard used weights that were assigned to the project priorities, the performance categories, and the KPIs.

Depending on whether the project priority was cost, schedule, or balanced, different sets of weights were applied to the cost performance category and the schedule performance category in evaluating the overall project performance. When the project priority was cost, the weights showed that cost performance was considered more important than schedule performance, by as much as 2.5 times (cost/schedule = 55/22) for manufacturing projects (Fig. 1) and 4.2 times (cost/schedule = 59/14) for laboratory projects (Fig. 2). On the other hand, when the project priority was schedule, the weights indicated that schedule performance was more important than cost performance, by as much as 2.5 times (schedule/cost = 55/22) for manufacturing projects and 1.6 times (schedule/cost = 45/28) for laboratory projects. When the project priority was balanced, cost performance was still considered more important than schedule performance, by as much as 1.3 times (cost/schedule = 44/33) for manufacturing projects and 2.0 times (cost/schedule = 49/24) for laboratory projects. Meanwhile, the weights implied that cost performance was emphasized more on laboratory projects than on manufacturing projects.
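To make the priority-dependent weighting concrete, the following Python sketch selects a category weight set by project type and project priority and computes a weighted overall score. The cost and schedule weights reproduce the ratios quoted above; the dimension and quality values are hypothetical placeholders, because only the cost/schedule ratios are reported here (the full weight sets appear in Figs. 1 and 2).

```python
# Minimal sketch of priority-dependent category weighting. Cost and
# schedule weights follow the ratios reported in the Discussion (e.g.,
# 55/22 for cost-priority manufacturing projects); the dimension and
# quality splits are illustrative placeholders only.
CATEGORY_WEIGHTS = {
    ("manufacturing", "cost"):     {"cost": 55, "schedule": 22, "dimension": 12, "quality": 11},
    ("manufacturing", "schedule"): {"cost": 22, "schedule": 55, "dimension": 12, "quality": 11},
    ("manufacturing", "balanced"): {"cost": 44, "schedule": 33, "dimension": 12, "quality": 11},
    ("laboratory", "cost"):        {"cost": 59, "schedule": 14, "dimension": 14, "quality": 13},
    ("laboratory", "schedule"):    {"cost": 28, "schedule": 45, "dimension": 14, "quality": 13},
    ("laboratory", "balanced"):    {"cost": 49, "schedule": 24, "dimension": 14, "quality": 13},
}

def overall_score(category_percentiles, project_type, project_priority):
    """Combine category percentile scores (0-100) into a weighted overall score."""
    weights = CATEGORY_WEIGHTS[(project_type, project_priority)]
    total = sum(weights.values())
    return sum(category_percentiles[c] * w for c, w in weights.items()) / total

# Example: a cost-priority manufacturing project.
print(overall_score({"cost": 80, "schedule": 60, "dimension": 70, "quality": 90},
                    "manufacturing", "cost"))
```

Because each weight set is normalized by its own total, the computation also tolerates weight sets that do not sum to exactly 100.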
For cost performance, which consists of absolute cost performance and relative cost performance, the former was regarded as approximately twice as important as the latter for both manufacturing projects and laboratory projects. However, for schedule performance, absolute schedule performance and relative schedule performance had almost equal weights.

The weights for the KPIs were also determined to measure cost, schedule, dimension, and quality performance in relative terms. The weights for the KPIs differed between manufacturing projects and laboratory projects because the two project types had different KPIs. For absolute cost performance, manufacturing projects had TIC/process equipment cost, whereas laboratory projects had TIC/lab population and TIC/total building population. Because of the FDA's inspection criteria, manufacturing projects included (IQ ~ OQ duration)/(# IQ + OQ protocols) in evaluating absolute schedule performance, which was not necessary for laboratory projects. For dimension performance, manufacturing projects used (process space square foot + process-related space square foot)/GSF, whereas laboratory projects involved linear foot benchtop/lab population and GSF/total building population.
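The type-specific KPI sets described above can be represented as a simple registry, so that a dashboard implementation scores a project only on the KPIs defined for its type. The sketch below restates the metrics named in the text; it is illustrative and not the complete CII KPI list.

```python
# Registry of the type-specific KPIs named above (not the complete CII
# list). A project is scored only on the KPIs defined for its type.
KPI_SETS = {
    "manufacturing": {
        "absolute_cost": ["TIC / process equipment cost"],
        "absolute_schedule": ["(IQ ~ OQ duration) / (# IQ + OQ protocols)"],
        "dimension": ["(process sq ft + process-related sq ft) / GSF"],
    },
    "laboratory": {
        "absolute_cost": ["TIC / lab population",
                          "TIC / total building population"],
        "dimension": ["linear ft benchtop / lab population",
                      "GSF / total building population"],
    },
}

def kpis_for(project_type, category):
    """Return the KPIs that apply to a given project type within a category."""
    return KPI_SETS.get(project_type, {}).get(category, [])

print(kpis_for("laboratory", "dimension"))
```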
Regarding the comparison method used for the dashboard, a relative comparison approach was adopted so that different types of KPIs could be aggregated and distinct types of projects could be compared. As a result, the performance of every project could be evaluated with an adjusted weighted percentile ranging from 0 to 100, which indicated a relative performance score. In addition, the overall performance of each company's portfolio could be assessed, showing how successfully a company completed its projects compared with its competitors. Because the projects in each company's portfolio were weighted equally in calculating the company's overall performance, regardless of project cost, the overall performance evaluated a company's effectiveness in completing the projects in its portfolio. If the overall performance were instead weighted by project cost, the dashboard would evaluate a company's efficiency in completing those projects. A sensitivity analysis showed that the results of the effectiveness evaluation differed from those of the efficiency evaluation.
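The effectiveness/efficiency distinction described above comes down to the weights used when rolling project scores up to the company level. The short sketch below contrasts the two aggregations; the project scores and costs are invented for illustration.

```python
# Sketch contrasting the two company-level aggregations discussed above:
# an unweighted mean of project scores (effectiveness) versus a
# cost-weighted mean (efficiency). Data values are hypothetical.

projects = [
    # (adjusted weighted percentile 0-100, total installed cost in $M)
    (82.0, 250.0),
    (55.0, 40.0),
    (68.0, 15.0),
]

# Effectiveness: every project counts equally, regardless of its cost.
effectiveness = sum(score for score, _ in projects) / len(projects)

# Efficiency: large projects dominate because scores are cost-weighted.
efficiency = (sum(score * cost for score, cost in projects)
              / sum(cost for _, cost in projects))

print(f"effectiveness = {effectiveness:.1f}")  # 68.3
print(f"efficiency    = {efficiency:.1f}")     # 77.8
```

In this example, one large, well-performing project pulls the cost-weighted (efficiency) score well above the unweighted (effectiveness) score, which is why the two evaluations can rank the same portfolio differently.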

On the basis of the relative project or company performance scores, the performance dashboards enabled companies to facilitate communication regarding where the strong or weak points of their projects are and to understand how they performed in delivering pharmaceutical capital projects compared with their competitors. Furthermore, the two graphical dashboards effectively summarize the performance of all projects in each company's portfolio, regardless of the number of projects in the portfolio. Because the performance dashboard has been developed as part of the CII Benchmarking Program, high-level managers at pharmaceutical companies participating in the CII pharmaceutical BM&M Program have used the dashboards to compare the performance of their projects with other projects in the CII database. However, the performance dashboard is open to any company that joins the CII pharmaceutical BM&M Program. Currently, the dashboards show comparison results on the basis of 198 projects from 12 companies. These comparison results will be updated at the end of every year with newly submitted projects.

Although this study focuses on pharmaceutical industry projects, it may be possible to apply the process used for developing a performance dashboard to other industries. The process is as follows: (1) develop a hierarchical structure for categorizing projects in a certain industry on the basis of project characteristics, such as project type and project priority; (2) identify KPIs appropriate to the type of projects; (3) group the identified KPIs into relevant performance categories, such as cost, schedule, quality, dimension, and others; (4) define weights for the KPIs and the performance categories; (5) calculate the percentiles of the KPIs and the performance categories; (6) calculate the weighted percentiles of the KPIs and the performance categories; (7) adjust the weighted percentiles for each project; (8) determine the relative performance level of each project by the adjusted weighted percentiles; (9) calculate the average adjusted weighted percentiles for both the performance categories and the overall project performance for each company; and (10) determine the relative performance level of each company by the average adjusted percentiles of the projects in their portfolio.
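A minimal Python sketch of steps (5) through (8) is given below, together with the company-level averaging of steps (9) and (10). The percentile-rank convention and the treatment of missing KPIs in the adjustment step are assumptions made for illustration; the actual CII implementation may differ.

```python
# Illustrative sketch of steps (5)-(10) of the process above. The
# percentile convention and the missing-KPI adjustment are assumptions.
from bisect import bisect_right

def percentile_rank(value, reference_values, lower_is_better=True):
    """Step (5): percentile of one KPI value against the benchmark database."""
    ranked = sorted(reference_values)
    pct = 100.0 * bisect_right(ranked, value) / len(ranked)
    return 100.0 - pct if lower_is_better else pct

def adjusted_weighted_percentile(kpi_percentiles, kpi_weights):
    """Steps (6)-(7): weight the available KPI percentiles and re-normalize
    the weights, so a project missing a KPI remains comparable."""
    available = {k: p for k, p in kpi_percentiles.items() if p is not None}
    weight_sum = sum(kpi_weights[k] for k in available)
    return sum(p * kpi_weights[k] for k, p in available.items()) / weight_sum

def company_relative_level(project_scores):
    """Steps (9)-(10): equally weighted average of the adjusted weighted
    percentiles of the projects in a company's portfolio."""
    return sum(project_scores) / len(project_scores)

# Step (8): a project's relative performance level is its adjusted
# weighted percentile; "cost growth" here is a hypothetical KPI name.
score = adjusted_weighted_percentile(
    {"TIC / process equipment cost": 72.0, "cost growth": None},
    {"TIC / process equipment cost": 60, "cost growth": 40})
print(score)  # 72.0 (weights re-normalized over the available KPIs)
```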
Conclusions

This study described the development of a performance dashboard for a pharmaceutical project benchmarking system. The developed dashboard integrated four performance categories: cost, schedule, dimension, and quality. Each performance category consisted of KPIs. The weights of the project priorities, the performance categories, and the KPIs were assigned through the Delphi method. The dashboard was formulated with a relative comparison approach. As a result, the performance of different types of projects and the performance of a company's portfolio could be compared and evaluated. Although the dashboard was developed for pharmaceutical industry projects, the development process could be applicable to other industries.

The results of this paper open several areas for future research. The proposed dashboard can facilitate the study of the relationships between the results and the causes of project performance. To improve project performance, the causes or factors resulting in good or bad project performance need to be identified. Possible approaches include analyzing the relationships between project performance and managerial factors, such as delivery methods, contract types, and project organization, and the relationships between project performance and the use of practices such as front-end planning, alignment, change management, and partnering. If identified, these causes and factors could be used as leading indicators. In addition, the strategies or performance goals of each company can differ; thus, a flexible performance dashboard framework needs to be studied so that a company can dynamically assign preferred weights to the dashboard.

References

Abdel-Razek, R. H., Elshakour, M. H., and Abdel-Hamid, M. (2007). "Labor productivity: Benchmarking and variability in Egyptian projects." Int. J. Proj. Manage., 25(2), 189–197.
Amaratunga, D., Baldry, D., and Sarshar, M. (2001). "Process improvement through performance measurement: The balanced scorecard methodology." Work Study, 50(5), 179–189.
Audit Office of New South Wales. (1999). Performance audit report: Key performance indicators, Sydney, Australia.
Bassioni, H. A., Price, A. D. F., and Hassan, T. M. (2004). "Performance measurement in construction." J. Manage. Eng., 20(2), 42–50.
Bassioni, H. A., Price, A. D. F., and Hassan, T. M. (2005). "Building a conceptual framework for measuring business performance in construction: An empirical evaluation." Constr. Manage. Econ., 23(5), 495–507.
Beatham, S., Anumba, C., Thorpe, T., and Hedges, I. (2004). "KPIs: A critical appraisal of their use in construction." Benchmarking, 11(1), 93–117.
Chan, A. P. C., Scott, D., and Chan, A. P. L. (2004). "Factors affecting the success of a construction project." J. Constr. Eng. Manage., 130(1), 153–155.
Construction Industry Institute (CII). (2010a). "Owner pharmaceutical questionnaire version 10.2." 〈https://www.construction-institute.org/scriptcontent/BM_GUEST/downloads/Owner_Pharma_v10_2.pdf〉 (Jan. 28, 2010).
Construction Industry Institute (CII). (2010b). "About CII." 〈https://www.construction-institute.org/scriptcontent/aboutcii.cfm?section=aboutcii〉 (Jan. 28, 2010).
Cox, R. F., Issa, R. R. A., and Ahrens, D. (2003). "Management's perception of key performance indicators for construction." J. Constr. Eng. Manage., 129(2), 142–151.
Eccles, R. G. (1991). "The performance measurement manifesto." Harv. Bus. Rev., 69(1), 131–137.
Eckerson, W. W. (2006). Performance dashboards: Measuring, monitoring, and managing your business, Wiley, New York.
El-Mashaleh, M., Minchin, R. E., Jr., and O'Brien, W. J. (2007). "Management of construction firm performance using benchmarking." J. Manage. Eng., 23(1), 10–17.
Fong, P. S., Shen, Q., and Cheng, E. W. L. (2001). "A framework for benchmarking the value management process." Benchmarking, 8(4), 306–316.
Hwang, B. G., Thomas, S. R., Degezelle, D., and Caldas, C. H. (2008). "Development of a benchmarking framework for pharmaceutical capital projects." Constr. Manage. Econ., 26(2), 177–195.
Kagioglou, M., Cooper, R., and Aouad, G. (2001). "Performance management in construction: A conceptual framework." Constr. Manage. Econ., 19(1), 85–95.
Kaplan, R. S., and Norton, D. P. (1992). "The balanced scorecard measures that drive performance." Harv. Bus. Rev., 70(1), 71–79.
Key Performance Indicators (KPI) Working Group. (2000). KPI report for the Minister for Construction, Dept. of the Environment, Transport and the Regions, London.
Kim, I.-H. (2007). "Development and implementation of an engineering productivity measurement system (EPMS) for benchmarking." Ph.D. dissertation, Univ. of Texas at Austin, Austin, TX.
Kumaraswamy, M. M., and Thorpe, A. (1996). "Systematizing construction project evaluations." J. Manage. Eng., 12(1), 34–39.
Lee, S. H., Thomas, S. R., and Tucker, R. L. (2005). "Web-based benchmarking system for the construction industry." J. Constr. Eng. Manage., 131(7), 790–798.
Li, H., Cheng, E. W. L., Love, P. E. D., and Irani, Z. (2001). "Co-operative benchmarking: A tool for partnering excellence in construction." Int. J. Proj. Manage., 19(3), 171–179.
Love, P. E. D., Irani, Z., and Edwards, D. J. (2004). "Industry-centric benchmarking of information technology benefits, costs and risks for small-to-medium sized enterprises in construction." Autom. Constr., 13(4), 507–524.
Luu, T. V., Kim, S. Y., Cao, H. L., and Park, Y. M. (2008). "Performance measurement of construction firms in developing countries." Constr. Manage. Econ., 26(4), 373–386.
McGeorge, W. D., Palmer, A., and London, K. (2002). Construction management: New directions, Wiley-Blackwell, Hoboken, NJ.
Mohamed, S. (2003). "Scorecard approach to benchmarking organizational safety culture in construction." J. Constr. Eng. Manage., 129(1), 80–88.
Neely, A., Gregory, M., and Platts, K. (1995). "Performance measurement system design." Int. J. Oper. Prod. Manage., 15(4), 80–116.
Nitithamyong, P., and Skibniewski, M. J. (2006). "Success/failure factors and performance measures of web-based construction project management systems: Professionals' viewpoint." J. Constr. Eng. Manage., 132(1), 80–87.
Palaneeswaran, M. M., and Kumaraswamy, M. M. (2000). "Benchmarking contractor selection practices in public-sector construction: A proposed model." Eng. Constr. Archit. Manage., 7(3), 285–299.
Park, H. S., Thomas, S. R., and Tucker, R. L. (2005). "Benchmarking of construction productivity." J. Constr. Eng. Manage., 131(7), 772–778.
Parker, C. (2000). "Performance measurement." Work Study, 49(2), 63–66.
Ramirez, R. R., Alarcon, L. F. C., and Knights, P. (2004). "Benchmarking system for evaluating management practices in the construction industry." J. Manage. Eng., 20(3), 110–117.
Sommerville, J., and Robertson, H. W. (2000). "A scorecard approach to benchmarking for total quality construction." Int. J. Qual. Reliab. Manage., 17(4/5), 453–466.
Stewart, R. A., and Mohamed, S. (2001). "Utilizing the balanced scorecard for IT/IS performance evaluation in construction." Constr. Innovation, 1(3), 147–164.
Toor, S. R., and Ogunlana, S. O. (2010). "Beyond the 'iron triangle': Stakeholder perception of key performance indicators (KPIs) for large-scale public sector development projects." Int. J. Proj. Manage., 28(3), 228–236.
U.S. Food and Drug Administration (FDA). (2010). "Drug applications and current good manufacturing practice (CGMP) regulations." 〈http://www.fda.gov/Drugs/DevelopmentApprovalProcess/Manufacturing/ucm090016.htm〉 (Jan. 28, 2010).
Ward, S. C., Curtis, B., and Chapman, C. B. (1991). "Objectives and performance in construction projects." Constr. Manage. Econ., 9(4), 343–353.
Yeung, J. F., Chan, A. P. C., and Chan, D. W. C. (2009). "Developing a performance index for relationship-based construction projects in Australia: Delphi study." J. Manage. Eng., 25(2), 59–68.
Yu, I., Kim, K., Jung, Y., and Chin, S. (2007). "Comparable performance measurement system for construction companies." J. Manage. Eng., 23(3), 131–139.
