Performance Dashboard for a Pharmaceutical Project Benchmarking Program
Downloaded from ascelibrary.org by "National Institute of Technical Teachers Training & Research, Chennai" on 10/03/23. Copyright ASCE. For personal use only; all rights reserved.
Sung-Joon Suk1; Bon-Gang Hwang2; Jiukun Dai, M.ASCE3; Carlos H. Caldas, M.ASCE4;
and Stephen P. Mulva, M.ASCE5
Abstract: Performance measurement is essential to controlling and improving capital projects. Increasingly, industry sectors demand an
industry-specific performance measurement system because their unique processes can result in significant differences in performance out-
comes. This paper presents the development of a performance dashboard for a pharmaceutical project benchmarking program. The proposed
approach adopts a relative comparison method and uses weighted key performance indicators (KPIs) to compare project performance. The
dashboard generates an overall performance score both at the project level and at the company level, as shown in tabular and graphical formats.
The dashboard provides flexibility for comparing the performance of projects even when the project types or KPIs are different. Although the
study focuses on a performance dashboard for pharmaceutical industry projects, the dashboard development process described in this paper can
be applied to other industry projects. DOI: 10.1061/(ASCE)CO.1943-7862.0000503. © 2012 American Society of Civil Engineers.
CE Database subject headings: Benchmark; Construction management; Performance characteristics.
Author keywords: Benchmarking; Construction management; Performance dashboard; Performance evaluation; Performance
measurement; Pharmaceutical project.
Table 1. Recent Benchmarking Efforts Focused on Specific Areas
Fong et al. (2001) | Value management | Process model: management commitment, facilitator’s skills, brainstorming, group effectiveness, customer satisfaction
Li et al. (2001) | Partnering | Process model: eight-stage process of a cooperative benchmarking approach
Mohamed (2003) | Organizational safety culture | Measurement system: balanced scorecard across four perspectives, including management, operational, customer, and learning
Love et al. (2004) | Information technology | Metrics: benefits, costs, and risks
Ramirez et al. (2004) | Management practices | Measurement system: application of a management evaluation system
Abdel-Razek et al. (2007) | Labor productivity | Metrics: disruption index, performance ratio, and project management index
Construction Industry Institute (CII). Data on 198 pharmaceutical capital projects are used to develop the performance dashboard; the data were submitted by 12 companies: 11 U.S. companies and one European company. The development process of the performance dashboard is presented with a discussion of the underlying rationale for each process and its results. Furthermore, results of a survey of 16 pharmaceutical industry experts validate the performance dashboard framework.

Project Performance Measurement Frameworks

Benchmarking Programs

McGeorge et al. (2002) defined benchmarking as “a process of continuous improvement based on the comparison of an organization’s processes or products with those identified as best practice.” Benchmarking is recognized as a commonly used tool to identify successful performance of projects and construction companies (El-Mashaleh et al. 2007). The two most widely known benchmarking programs are the Benchmarking and Metrics (BM&M) Program at CII and the Construction Best Practice Programme (CBPP), which is jointly supported by the Department of the Environment, Transport and the Regions and the Construction Industry Board in the United Kingdom.

Based at the University of Texas at Austin, CII is a consortium of leading owners, engineering and construction contractors, and suppliers whose mission is to measurably improve the delivery of capital facilities (CII 2010b). CII organized the BM&M Program in 1996 and established its first set of performance metrics: project cost growth, project schedule growth, project budget factor, project schedule factor, change cost factor, and practice use (Lee et al. 2005). As the program matured, CII established standard metric definitions for engineering and construction productivity in selected disciplines, including concrete, structural steel, electrical, piping, instrumentation, and equipment (Park et al. 2005; Kim 2007). Using these metrics, CII provides its participants with web-based benchmarking reports detailing project performance norms, including values on individual metrics with mean, median, quartile cutoffs, and data distribution (CII 2010b).

The CBPP launched its KPIs for performance measurement in the United Kingdom in the late 1990s. Ten KPIs were developed for measuring both project performance and company performance, comparing a project or company with the other projects or companies in the construction industry. Seven of them are related to project performance: construction cost, construction time, cost predictability, time predictability, defects, client satisfaction with product, and client satisfaction with service. The other three KPIs are related to company performance: safety, profitability, and productivity. The 10 KPIs have been used by many construction companies in the United Kingdom (Beatham et al. 2004).

In addition to CII and CBPP, research efforts have been made to establish area-specific benchmarking programs. Such efforts generated more relevant metrics, measurement systems, and/or process models to benchmark a particular area. Table 1 shows recent efforts related to benchmarking programs that narrowed down to a specific area.

Although these benchmarking programs have developed many useful metrics to measure and evaluate performance, they focus on individual metrics rather than on aggregated indicators. As a result, it is difficult to obtain insight into the overall performance of the company (El-Mashaleh et al. 2007) or project. In addition, a competitive benchmarking program requires an industry-tailored project hierarchy because processes and facilities vary by industry (Hwang et al. 2008), and such distinct processes and facilities often result in significant differences in project performance. Moreover, each industry needs particular metrics that reflect industry-specific characteristics. For example, pharmaceutical projects need to meet regulatory requirements for validation and qualification of facilities. The following are examples of pharmaceutical industry-specific metrics for measuring and evaluating performance related to validation and qualification: (commissioning cost + qualification cost)/total installed cost (TIC); [installation qualification (IQ) through operational qualification (OQ) duration]/(# IQ + OQ protocols); and (process space square feet + process-related space square feet)/gross square feet (GSF).

Project Performance Measurements

Project performance measurements evaluate the success level of individual projects. These measurements can have different purposes: business performance and project delivery performance. Business performance evaluates the success and effectiveness of projects over the project lifetime. Business performance is usually measured by the owner because it needs information on the profitability of products or services in addition to the performance of the project delivery itself. Project delivery performance, on the other hand, assesses the efficiency of completing facilities constructed for production or occupancy. Project delivery performance measurement is of major interest to the construction industry and has
Table 2. Performance Indicators Used by Previous Studies
Thorpe (1996)
Audit Office of New South Wales (1999): O O O O O O O
Key Performance Indicators (KPI) Working Group (2000): O O O O O; other: change orders
Cox et al. (2003): O O O O; other: productivity
Chan et al. (2004): O O O O O O O; other: functionality of building
Nitithamyong and Skibniewski (2006): O O O O; other: strategic performance, risk improvement
Toor and Ogunlana (2010): O O O O O O; other: minimized conflicts
Yeung et al. (2009): O O O O O O; other: innovation and improvement
This paper: O O O O; other: design/space efficiency
Note: O = Yes.
been widely adopted for project performance evaluation. Both business performance and project delivery performance use performance metrics to evaluate outcomes objectively.

Traditionally, the performance of construction projects was measured by three performance indicators: cost, time, and quality (Ward et al. 1991). However, recent studies have employed additional performance indicators to evaluate construction project execution from a more balanced perspective. Table 2 shows the performance indicators used by several previous studies. In addition to cost, time, and quality, safety is included in most of the studies. Additionally, some studies suggest the use of indicators such as environmental friendliness, client satisfaction, business performance, project-team satisfaction, and communication. Other indicators found in the literature include technology transfer, change orders, productivity, building functionality, strategic performance, risk improvement, minimized conflicts, and innovation and improvement.

The definitions of performance indicators were the primary focus and contribution of the previous studies shown in Table 2, which primarily dealt with the criteria required for assessing the performance of construction projects. However, a generic framework is required to compare the performance of various construction projects regardless of their industry or facility type. Specifically, the framework needs a scoring algorithm for the selected performance metrics and a comparison method to evaluate the performance of projects and/or companies.

Company Performance Measurements

For a long time before the 1990s, company performance measurements relied on financial and accounting measurements: return on investment, sales per employee, cash flow, profit, and more. However, financially focused performance measurements were criticized for their insufficiency in addressing overall business performance (Eccles 1991; Kaplan and Norton 1992; Parker 2000). To overcome this deficiency, Kaplan and Norton (1992) introduced the balanced scorecard (BSC), which added three nonfinancial perspectives (customers, innovation and improvement, and internal processes) in addition to the financial perspective. Consequently, the BSC provided a balanced picture by including operational performance in addition to economic performance (Amaratunga et al. 2001).

Many studies have been conducted to measure company performance by adopting and modifying the BSC. Kagioglou et al. (2001) presented a conceptual framework that integrated the perspectives of project and supplier with the BSC. Bassioni et al. (2005) established a conceptual framework to measure the business performance of construction companies using the BSC, business excellence models, and empirical feedback from expert interviews. Yu et al. (2007) developed a framework to calculate a company performance score by combining weighted metrics, which were categorized by the four BSC perspectives; the weights for the metrics were obtained from a survey of construction companies. Luu et al. (2008) incorporated a strengths-weaknesses-opportunities-threats (SWOT) matrix into the BSC so that a typical construction company can devise its short- and long-term strategies. Stewart and Mohamed (2001) adapted the BSC concept to develop a framework to evaluate information technology (IT) performance. The framework measured IT performance at three different decision-making tiers: project, business unit, and enterprise.

These studies contributed to improving company performance measurement on the basis of the BSC. They developed performance metrics and frameworks appropriate for the construction industry from various perspectives. Nonetheless, more studies need to be conducted to investigate the relationship between company performance and project performance (Bassioni et al. 2004). Furthermore, little has been done to measure company performance from the perspective of aggregated project performance. The most practical need is a detailed implementation method and scoring algorithm that can be used for company performance measurement (Bassioni et al. 2005).
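As a concrete illustration of the weighted-metric approach attributed to Yu et al. (2007) above, a company score can be computed by averaging metric scores within each BSC perspective and then weighting across perspectives. This is only a sketch: the perspective names, metric scores, and weights below are hypothetical, not values from that study.

```python
def bsc_company_score(metrics, perspective_weights):
    """Combine metric scores grouped by BSC perspective into one company score.

    metrics: {perspective: [(score, weight), ...]}
    perspective_weights: {perspective: weight}
    Takes the weighted average within each perspective, then the weighted
    average across perspectives.
    """
    total = 0.0
    for perspective, p_weight in perspective_weights.items():
        scored = metrics[perspective]
        weight_sum = sum(w for _, w in scored)
        perspective_avg = sum(s * w for s, w in scored) / weight_sum
        total += perspective_avg * p_weight
    return total / sum(perspective_weights.values())

# Hypothetical illustration with two perspectives:
metrics = {
    "financial": [(80, 1), (60, 1)],  # two equally weighted metrics
    "customer": [(70, 2)],            # a single metric
}
score = bsc_company_score(metrics, {"financial": 2, "customer": 1})
```

Here both perspectives average to 70, so the company score is 70 regardless of the perspective weights; unequal perspective averages would shift the result toward the heavier-weighted perspective.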
together as a team to launch the pharmaceutical facility benchmarking program in 2004. The research team developed the hierarchical structure for categorizing pharmaceutical projects, as shown in Table 3. They identified three major project types: bulk manufacturing, secondary manufacturing, and laboratory, each of which has subcategories, some of which are divided into further subcategories. The rationale for this categorization is that the distinct processes and characteristics of pharmaceutical facilities will show different project performance results. Therefore, benchmarking requires organization of an appropriate hierarchical structure for pharmaceutical projects. The team also defined 80 pharmaceutical industry-specific metrics, which include cost, schedule, dimension, and quality metrics (Hwang et al. 2008).

The CII constructed a tailored pharmaceutical questionnaire by modifying and adding questions to the existing questionnaire, which is generic to capital construction projects in all industries. The pharmaceutical questionnaire was programmed and released online to allow the team to submit project data. The submitted data are analyzed on the basis of the individual metrics categorized by the project hierarchy. The metrics of a project are compared with those of other projects by using a statistical method. The comparison results are reported by showing the relative performance location of each metric in a quartile-based distribution. For example, if a metric for a project belongs to the first quartile, then the performance of the metric falls within the top performing 25%. In contrast, if a metric is in the fourth quartile, then the performance of the

metrics of a project are individually compared with those of other projects, the overall performance of projects cannot be evaluated only with the metrics and the project hierarchy. To objectively assess the overall performance of projects, an approach is required to reasonably aggregate the performance results of a number of metrics. The performance dashboard developed in this research suggests a rational solution to evaluate the overall performance of projects. The following sections discuss the components and algorithm of the performance dashboard in detail.

Components of the Performance Dashboard

The proposed performance dashboard framework includes a tabular-format dashboard and two graphic-format dashboards. The tabular-format dashboard summarizes the performance results of projects numerically, whereas the graphic-format dashboards show the deviations of project performance as distributions. This section describes the components of the tabular-format dashboard. The outputs of all dashboards are discussed in the “Performance Dashboard Outputs” section, where their layout and design are shown in a table and figures.

The tabular-format dashboard consists of aggregated quartile scores for project-level performance, company-level performance, safety performance, industrial project definition rate index (PDRI), project type, project comparison level, number of projects compared, and project priority. Each of these elements is explained in the following paragraphs.
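The quartile-based reporting just described can be sketched with a small helper. This is an illustration only, not the program's actual implementation; it assumes scores are oriented so that a higher relative score is better and the first quartile is the top performing 25%.

```python
def quartile(relative_score):
    """Map a 0-100 relative performance score to its reporting quartile.

    Quartile 1 holds the top performing 25% and quartile 4 the lowest,
    mirroring the quartile-based distribution described in the text.
    Assumes higher scores are better (an assumption of this sketch).
    """
    if relative_score >= 75:
        return 1
    if relative_score >= 50:
        return 2
    if relative_score >= 25:
        return 3
    return 4
```

A metric with a relative score of 80 thus reports in the first quartile, while one at 10 falls in the fourth.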
facilities. The first two performance categories, cost performance and schedule performance, have subcategories, absolute performance and relative performance, each of which has different key metrics. In contrast, the dimension and quality performance categories consist of key metrics without subcategories.

Apart from the overall project performance, safety performance is assessed independently. The team decided to exclude it from the overall project performance because many projects had no accident record or did not have data, and thus safety performance may not have enough discriminating power to calculate the overall project performance. However, the team still wanted to examine the results of safety performance because of the importance of safety itself. Consequently, safety performance was included in a separate column of the performance dashboard and not as a part of the overall project performance.

Company-level performance shows the quartiles and rankings for the overall project performance and the four performance categories by averaging the performance scores of all the projects in a company portfolio. Quartiles and ranks are determined by comparing the average performance scores of a company with those of other companies, which enables each company to know its overall relative level of performance. To protect confidentiality, companies can view only their own performance and relative position in quartiles; other companies’ data are blinded. To help users understand quartiles and ranks, the distributions of the average performance scores are examined and graphically represented.

The PDRI score measures how well a project is defined and is included in the performance dashboard to support examination of the relationship between project performance and the adequacy of the definition. Project type indicates the original project type in the hierarchical structure; comparison level represents the project type at which a project is compared with other projects; number of projects compared shows how many projects are compared in a comparison level; and project priority describes the project driver for a project. Three different project priorities are used: cost-driven, schedule-driven, and balanced/others.

In addition to this performance dashboard, dashboards are also provided for the three Level I groups in the hierarchical structure shown in Table 3 to allow the team to separately examine the performance of bulk manufacturing, secondary manufacturing, and laboratory.

Performance Categories, Key Performance Metrics, and Weights

Bulk manufacturing, secondary manufacturing, and laboratory use the same four performance categories: cost, schedule, dimension, and quality. First, the cost performance category consists of two subcategories: the absolute cost performance category and the relative cost performance category. Absolute cost performance represents ratios of actual costs to other costs or dimensions, and relative cost performance measures cost variation. Second, the schedule performance category also has two subcategories: the absolute schedule performance category and the relative schedule performance category. Absolute schedule performance assesses activity durations relative to the number of protocols for qualification, dimension, or cost. Relative schedule performance, on the other hand, evaluates schedule variation. However, both the dimension and quality performance categories have only key metrics without

fication and validation (PEQV) and Pharmaceutical Project Definition Index (PPDI). Terms related to PEQV and PPDI are defined in the CII pharmaceutical owner large questionnaire (CII 2010a) as follows: qualification is the action of providing equipment or ancillary systems that are properly installed, work correctly, and actually lead to the expected results; validation is a documented program that provides a high degree of assurance that a specific process, method, or system will consistently produce a result meeting predetermined acceptance criteria; and PPDI indicates how well user requirements, design qualification planning, and validation master plans were defined before the total project budget authorization. Both PEQV and PPDI measure the level of activities performed to meet the requirements of the Current Good Manufacturing Practice (CGMP) regulations. On the basis of the CGMP regulations, the U.S. Food and Drug Administration (FDA) inspects the facilities used in manufacturing, processing, and packing of a drug before it can be approved (FDA 2010). The activities for qualification and validation are essential to receive approval from the FDA and to deliver pharmaceutical projects.

Although bulk manufacturing, secondary manufacturing, and laboratory use the same performance categories, the key metrics included in each performance category are not the same. Bulk manufacturing and secondary manufacturing have the same key metrics (Fig. 1), but laboratory employs different key metrics (Fig. 2) to measure aspects of laboratory facilities that are distinct from those of manufacturing facilities. All of the key metrics for the four performance categories were determined by team discussion.

Weights were defined for the four performance categories, the KPIs, and the project priorities to reflect their relative importance. The Delphi method, an iterative process leading to a group consensus, was used to determine the weights. The iterative process involves experts, a questionnaire, and a facilitator. Once the experts complete the questionnaire, the facilitator reports a summary of the answers. Considering the summary as a group opinion, the experts can revise their answers. This process can be concluded after a predetermined number of rounds, when the summary does not change with more rounds, or when the group reaches consensus on the summary. The iterative process of the Delphi method enables a group to converge to consensus. In this research, the industry experts of the team answered the questionnaire for the weights of the KPIs. CII facilitated the Delphi method and provided the team a summary with the averages of the weights. The process stopped after the second round because the team achieved consensus on the averages.

Bulk manufacturing and secondary manufacturing use the same set of weights (Fig. 1), but laboratory uses a different set of weights (Fig. 2) because its characteristics differ from those of manufacturing projects. Each set of weights has three different groups of weights for the four performance categories, depending on project priority: cost, schedule, or balanced. The team agreed that project priorities lead to significant differences in the project performance of pharmaceutical capital projects. According to the project priority, the weights of the four performance categories are applied differently for the weighted percentiles. For example, when cost is the project priority for a bulk project, the weights of the four performance categories are, respectively, 55, 22, 10, and 13. In contrast, when schedule is the project priority for a bulk project, the weights of the four performance categories are 22, 55, 10, and 13. The subcategories and indicators under the four performance categories
Fig. 1. Key performance metrics and weights for bulk and secondary manufacturing (excerpt: Quality Performance: Planning & Execution of Qualification & Validation, weight 50; Pharma Project Definition Index, weight 50)
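The priority-dependent weighting of the four categories quoted in the text (55/22/10/13 for cost-driven and 22/55/10/13 for schedule-driven bulk projects) might be applied as sketched below. The balanced/others weight set is not quoted in this excerpt, so it is omitted, and the category scores in the usage line are hypothetical.

```python
# Category order: cost, schedule, dimension, quality.
# Weight values are those quoted in the text for bulk projects; the
# "balanced/others" priority is omitted because its weights are not given here.
BULK_CATEGORY_WEIGHTS = {
    "cost-driven": (55, 22, 10, 13),
    "schedule-driven": (22, 55, 10, 13),
}

def overall_score(category_scores, priority):
    """Sum the four category performance scores weighted by project priority."""
    weights = BULK_CATEGORY_WEIGHTS[priority]
    return sum(s * w for s, w in zip(category_scores, weights)) / 100.0

# Hypothetical category scores for a cost-driven bulk project:
example = overall_score((63, 50, 40, 70), "cost-driven")  # 58.75
```

Because the weights in each set sum to 100, the overall score stays on the same 0-100 scale as the category scores.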
also have weights. All of the weights were determined by the Delphi method described previously.

Algorithm of the Performance Dashboard

On the basis of the hierarchical structure, components, performance categories, KPIs, and weights described previously, the following steps delineate the calculation process for generating the performance dashboard.

Step 1: Calculation of the Percentiles of the Key Performance Metrics

Step 1 calculates the percentiles of the KPIs. Percentiles indicate relative scores or positions represented by values ranging from 0 to 100. For example, a metric may have six values: 1, 2, 3, 4, 5, and 6. If a lower value means better performance, the percentiles of the six values are 0, 20, 40, 60, 80, and 100, respectively. Percentiles are used to represent relative performance scores for each key metric because they enable the performance dashboard to combine approximately 20 key metrics that measure different aspects of project performance and thus have various units and distribution ranges.

In accordance with the project hierarchy, the KPI values of a project are compared with those of other projects. During this comparison, percentiles are calculated and assigned to the KPIs as long as the number of projects is sufficient. The team established a rule that the calculation of percentiles for each KPI requires a comparison level to have more than seven projects submitted by more than three separate companies. The objective of this rule is to ensure reasonable comparison and confidentiality. Comparison with a small number of projects may not have adequate power to differentiate the performance levels of projects, and may reveal the company that is the source of a metric’s value. The team reasoned that having more than seven projects in a comparison level ensures at least two projects for each quartile and keeps the percentile intervals narrow enough to calculate relative performance. In addition, having more than three companies in a comparison level prevents a company from easily recognizing the other companies. If the rule is not satisfied at a comparison level, then percentiles are calculated and reported at the next higher comparison level in the project hierarchy that meets the rule.

Step 2: Calculation of the Weighted Percentiles

The percentiles of each key metric are multiplied by their weights, thereby producing weighted percentiles. The sum of the weighted percentiles in a performance category then becomes the performance score for that category. Each performance category also has its own weight, which multiplies the performance scores of the four performance categories. Finally, the overall performance score for each project is calculated by summing the weighted percentiles of the four performance categories. Table 4 shows an example of the calculation process of weighted percentiles for the relative cost performance category. In the O1001 project, the three KPIs (cost growth, delta cost growth, and change cost factor) are in the 80th, 40th, and 70th percentiles, respectively. The weights of the three KPIs are 50, 40, and 10, as shown in Table 4. Because the weighted percentile is the product of percentile and weight, the weighted percentile of cost growth is 40 (80 × 0.5).
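Steps 1 and 2 can be sketched as follows. The percentile function is a minimal reading of the 1-to-6 example (a linear rank-to-percentile mapping; tie handling is not specified in the text), and the category score renormalizes the weights when a KPI value is missing, as the text later illustrates with project O1002.

```python
def percentile_scores(values):
    """Assign 0-100 percentiles by rank: (zero-based rank) * 100 / (n - 1).

    Reproduces the Step 1 example: the six values 1-6 receive
    0, 20, 40, 60, 80, and 100, respectively.
    """
    ranked = sorted(values)
    n = len(values)
    return [ranked.index(v) * 100 / (n - 1) for v in values]

def rule_satisfied(num_projects, num_companies):
    """Comparison-level rule from Step 1: more than seven projects
    submitted by more than three separate companies."""
    return num_projects > 7 and num_companies > 3

def category_score(percentiles, weights):
    """Step 2: weighted percentile of one performance category.

    Weights sum to 100; KPIs without data (None) are skipped and the
    remaining weights renormalized, as in the O1002 example.
    """
    present = [(p, w) for p, w in zip(percentiles, weights) if p is not None]
    weight_sum = sum(w for _, w in present)
    return sum(p * w for p, w in present) / weight_sum
```

With the Table 4 numbers, `category_score([80, 40, 70], [50, 40, 10])` gives the relative cost performance score of 63, and dropping the change cost factor (`[60, 30, None]`) reproduces the renormalized score of approximately 46.7.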
[Figure: key performance metrics and weights for laboratory (Fig. 2). Recoverable excerpt: Absolute Schedule Performance: (Design ~ OQ Duration)/GSF, weight 45; (Design ~ OQ Duration)/TIC, weight 55. Relative Schedule Performance, weight 46: Schedule Growth ((Actual Duration - Planned Duration)/Planned Duration), weight 60; Delta Schedule Growth, weight 30; Change Schedule Factor (Change Duration/Actual Duration), weight 10. Dimension Performance: Benchtop Linear Foot/Lab Population, weight 49; GSF/# Total Building Population, weight 51. Quality Performance: Planning & Execution of Qualification & Validation, weight 36; Pharma Project Definition Index, weight 64.]
In the same way, the weighted percentiles of delta cost growth and change cost factor are 16 and 7. Subsequently, the weighted percentile of the relative cost performance category is 63, which is the sum of 40, 16, and 7.

Weights are adjusted when values are missing in the key metrics. Although the ground rule is that the weighted percentile of each performance category is calculated from every KPI, projects sometimes did not have data for all KPIs. The O1002 project shows an example of how the weights are adjusted. The O1002 project has cost growth and delta cost growth but not change cost factor. In this case, the weighted percentile of the relative cost performance category is calculated from only the two available KPIs (cost growth and delta cost growth). Therefore, the weighted percentiles of the two KPIs are divided by the sum of the weights of the two KPIs. That is, the weighted percentile of cost growth is 33.3 [(60 × 0.5)/0.9]. In the same way, the weighted percentile of delta cost growth is 13.3 [(30 × 0.4)/0.9]. Finally, the weighted percentile of the relative cost performance is 46.6, which is the sum of 33.3 and 13.3.

Step 3: Search for the Comparison Level of Projects

Step 3 determines at which level a project is compared in the hierarchical structure (Table 3). Ideally, projects need to be compared against their original project category. For instance, comparing an attachment-dependent project with other attachment-dependent projects is the most desirable method. However, not all projects can be compared within their original project category because of an insufficient number of projects. As described, the team established a rule that a comparison level requires eight or more projects submitted by at least four companies. Although 198 projects have been submitted over 5 years, not every project category in the
for the overall performance and the four performance categories. This chart provides information on how the overall performance of a company is derived from the performance results of the individual projects. In Fig. 3, the number in the center of the top circle shows that the overall performance of company X is in the first quartile. The percentages in the ring of the top circle indicate that 33% of its projects are in the first quartile, 59% are in the second quartile, 8% are in the third quartile, and 0% are in the fourth quartile. Because company X has 12 projects in its portfolio, three projects are in the first quartile, seven projects are in the second quartile, one project is in the third quartile, and no projects are in the fourth quartile. This distribution of the overall performance averages of the 12 projects results in the first quartile for the performance of company X. The circles at the bottom illustrate the distribution of the performance averages of the four performance categories. Because the doughnut chart shows the performance results as a distribution, it provides a summary of the performance of projects regardless of the number of projects.

To examine the performance of projects in more detail, the team developed similar performance dashboards for bulk manufacturing, secondary manufacturing, and laboratory. These performance dashboards further investigate the performance of projects as classified by the three primary project types in Level I of the hierarchical structure of pharmaceutical projects (Table 3). The performance dashboards for the three primary project types have the same tabular and graphical format described in the previous paragraphs. They assist in conveying company portfolio performance at a more detailed level.

The second graphical dashboard (Fig. 4) provides the distributions of the average adjusted percentiles for the overall performance and the four performance categories. This dashboard enables companies to investigate the relative level of their performance. The distributions help them understand how their performance differs from that of other companies in terms of the average adjusted percentile of the projects in their portfolio. The first bar graph from the left indicates the overall performance, which summarizes the next four bar graphs for the performance of cost, schedule, dimension, and quality. For example, the average adjusted percentiles for company X are depicted by large diamonds in Fig. 4. A significant difference is shown in the average adjusted percentiles of the overall performance between the first-ranked company and company X. In contrast, little difference is shown in the average adjusted percentiles of the cost performance between the first- or second-ranked companies and company X. The distributions imply that company X falls behind the first-ranked company from the overall performance perspective, although it ranked second, and that company X is similar to the first and second companies from a cost performance perspective. Consequently, company X can gain a better understanding of its performance.

Validation

The developed performance dashboards were reported to each company through the reporting system. To validate the overall framework of the performance dashboards, pharmaceutical industry experts were surveyed on whether the performance dashboards are applicable to their needs as a decision support system. The survey solicited feedback on the rationale underlying the comparison algorithm and on the structure of the developed dashboard framework from the user’s standpoint. The respondents were 16 pharmaceutical industry experts from 10 pharmaceutical companies participating in the CII pharmaceutical BM&M Program. They were asked to rate different aspects of the framework on 5-point Likert scales (strongly agree, agree, neutral, disagree, and strongly disagree). Fifteen of the 16 answered agree or strongly agree to the question “The comparison algorithm consisting of the relative importance of the indicators, adjusted composite index scores, and level of data references is appropriate.” The same number of respondents also answered agree or strongly agree to the question “The structure of the performance dashboard is well organized.” Almost 94% of the respondents agreed that the rationale of the performance dashboard framework and the structure of the performance dashboard are appropriate for providing appropriate
levels of information on the performance of pharmaceutical proj- schedule performance, absolute schedule performance and relative
ects. The performance dashboard has been implemented as a regu- schedule performance had almost equal importance in their
lar component of the CII benchmarking program, which further weights.
validates its applicability. The weights for the KPIs were also determined to relatively
measure each cost, schedule, dimension, and quality performances.
The weights for the KPIs differed between manufacturing projects
Discussion and laboratory projects because they had different KPIs. For abso-
lute cost performance, manufacturing projects have TIC/process
The purpose of this study was to develop a performance dashboard equipment cost, whereas laboratory projects had TIC/lab popula-
for a pharmaceutical project benchmarking system that could rea- tion and TIC/total building population. Because of the FDA’s in-
sonably assess the overall performance of different types and char- spection criteria, manufacturing projects included (IQ ∼ OQ
acteristics of pharmaceutical projects. The performance dashboard duration)/(# IQ + OQ protocols) in evaluating absolute schedule
used weights that were assigned to the project priorities, the per- performance, which was not necessary for laboratory projects.
formance categories, and the KPIs. For dimension performance, manufacturing projects used (process
Depending on whether the project priority was cost, schedule, or space square foot + process-related space square foot)/GSF,
balanced, the different sets of weights were applied for the cost whereas laboratory projects involved linear foot benchtop/lab pop-
performance category and the schedule performance category in ulation and GSF/total building population.
evaluating the overall project performance. When the project prior- Regarding the comparison method used for the dashboard, a rel-
ity was cost, the weights showed that cost performance was con- ative comparison approach was adopted so that different types of
sidered more important than schedule as much as 2.5 times the KPIs could be aggregated, and distinct types of projects could
(cost∕ schedule ¼ 55∕22) for manufacturing projects (Fig. 1) be compared. As a result, the performance of every project could be
and 4.2 times (cost ∕ schedule ¼ 59∕14) for laboratory projects evaluated with the adjusted weighted percentile ranging from
(Fig. 2). On the other hand, when the project priority was schedule, 0–100, which indicated a relative performance score. In addition,
the weights indicated that schedule performance was more the overall performance of each company portfolio could be as-
important than cost performance as much as 2.5 times sessed, showing how successfully a company completed their proj-
(schedule∕ cost ¼ 55∕22) for manufacturing projects and 1.6 times ects compared with their competitors. Because projects in each
(schedule∕ cost ¼ 45∕28) for laboratory projects. When the project company’s portfolio were equally weighted in calculating the over-
priority was balanced, cost performance was still considered more all performance of a company regardless of the project cost, the
important than schedule performance as much as 1.3 times overall performance evaluated a company’s effectiveness in com-
(cost ∕ schedule ¼ 44∕33) for manufacturing projects and 2.0 times pleting the projects in their portfolio. If the overall performance was
(cost ∕ schedule ¼ 49∕24) for laboratory projects. Meanwhile, the weighted by the cost of projects, then the dashboard could evaluate
weights implied that cost performance was more emphasized on a company’s efficiency in completing the projects in their portfolio.
laboratory projects than on manufacturing projects. The authors recognized that the results from the effectiveness
For cost performance that consists of absolute cost performance evaluation were different from those from the efficiency evaluation
and relative cost performance, the former was regarded as approx- by a sensitivity analysis.
imately twice as important as relative cost performance for both On the basis of the relative project or company performance
manufacturing projects and laboratory projects. However, for scores, the performance dashboards enabled companies to facilitate
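The aggregation logic discussed above can be sketched in a short script. The sketch below is illustrative, not the program's implementation: the cost/schedule weight splits for the three priorities follow the manufacturing-project weights reported above (55/22, 22/55, and 44/33), but the dimension and quality shares, the plain percentile-rank in place of the adjusted composite index, and all sample figures are assumptions made here. It produces a 0–100 overall score per project and then contrasts the equal-weighted (effectiveness) and cost-weighted (efficiency) portfolio averages.

```python
# Illustrative sketch of the dashboard's weighting and relative comparison.
# Only the cost/schedule splits (55/22, 22/55, 44/33 for manufacturing
# projects) come from the text; the dimension and quality shares are
# assumed so that each weight set sums to 100, and the "adjusted"
# percentile is simplified to a plain percentile rank.
CATEGORY_WEIGHTS = {
    "cost":     {"cost": 55, "schedule": 22, "dimension": 12, "quality": 11},
    "schedule": {"cost": 22, "schedule": 55, "dimension": 12, "quality": 11},
    "balanced": {"cost": 44, "schedule": 33, "dimension": 12, "quality": 11},
}


def percentile_rank(value, peers, lower_is_better=True):
    """Relative score 0-100: percentage of peer projects this value beats."""
    if lower_is_better:
        worse = sum(1 for v in peers if v > value)
    else:
        worse = sum(1 for v in peers if v < value)
    return 100.0 * worse / len(peers)


def overall_score(category_percentiles, priority):
    """Weight the four category percentiles by the project's priority."""
    weights = CATEGORY_WEIGHTS[priority]
    total = sum(weights.values())
    return sum(weights[c] * category_percentiles[c] for c in weights) / total


def portfolio_scores(projects):
    """projects: list of (overall_score, project_cost) pairs.

    Equal weighting of projects measures a company's effectiveness;
    cost weighting would measure its efficiency, as discussed above.
    """
    effectiveness = sum(score for score, _ in projects) / len(projects)
    efficiency = (sum(score * cost for score, cost in projects)
                  / sum(cost for _, cost in projects))
    return effectiveness, efficiency


# Example: a cost-priority project whose cost KPI beats half of its peers.
cost_pct = percentile_rank(100.0, [100.0, 120.0, 150.0, 90.0])  # 50.0
score = overall_score(
    {"cost": cost_pct, "schedule": 60.0, "dimension": 50.0, "quality": 70.0},
    "cost",
)
```

In the actual program, the category percentiles would come from the weighted KPI comparisons and the adjustments for the level of data references described earlier; comparing the two averages returned by `portfolio_scores` is analogous to the effectiveness-versus-efficiency sensitivity analysis mentioned above.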
Mohamed, S. (2003). "Scorecard approach to benchmarking organizational safety culture in construction." J. Constr. Eng. Manage., 129(1), 80–88.
Neely, A., Gregory, M., and Platts, K. (1995). "Performance measurement system design." Int. J. Oper. Prod. Manage., 15(4), 80–116.
Nitithamyong, P., and Skibniewski, M. J. (2006). "Success/failure factors and performance measures of web-based construction project management systems: Professionals' viewpoint." J. Constr. Eng. Manage., 132(1), 80–87.
Palaneeswaran, E., and Kumaraswamy, M. M. (2000). "Benchmarking contractor selection practices in public-sector construction: A proposed model." Eng. Constr. Archit. Manage., 7(3), 285–299.
Park, H. S., Thomas, S. R., and Tucker, R. L. (2005). "Benchmarking of construction productivity." J. Constr. Eng. Manage., 131(7), 772–778.
Parker, C. (2000). "Performance measurement." Work Study, 49(2), 63–66.
Ramirez, R. R., Alarcon, L. F. C., and Knights, P. (2004). "Benchmarking system for evaluating management practices in the construction industry." J. Manage. Eng., 20(3), 110–117.
Toor, S. R., and Ogunlana, S. O. (2010). "Beyond the 'iron triangle': Stakeholder perception of key performance indicators (KPIs) for large-scale public sector development projects." Int. J. Proj. Manage., 28(3), 228–236.
U.S. Food and Drug Administration (FDA). (2010). "Drug applications and current good manufacturing practice (CGMP) regulations." 〈http://www.fda.gov/Drugs/DevelopmentApprovalProcess/Manufacturing/ucm090016.htm〉 (Jan. 28, 2010).
Ward, S. C., Curtis, B., and Chapman, C. B. (1991). "Objectives and performance in construction projects." Constr. Manage. Econ., 9(4), 343–353.
Yeung, J. F., Chan, A. P. C., and Chan, D. W. C. (2009). "Developing a performance index for relationship-based construction projects in Australia: Delphi study." J. Manage. Eng., 25(2), 59–68.
Yu, I., Kim, K., Jung, Y., and Chin, S. (2007). "Comparable performance measurement system for construction companies." J. Manage. Eng., 23(3), 131–139.