International Journal of Construction Management, 2016
Vol. 16, No. 1, 13‒26, http://dx.doi.org/10.1080/15623599.2015.1115245

Performance measurement tools for Ghanaian contractors


J.K. Ofori-Kuragu*, B.K. Baiden and E. Badu

Department of Building Technology, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana

This paper addresses the absence of performance measurement systems in the Ghanaian construction industry. A thorough
review of literature and existing academic publications on performance measurement was undertaken. As part of the
process, three performance measurement systems used in the construction industries in the United Kingdom, the United
States of America and Denmark were reviewed. The performance measurement system used by Danish contractors was
selected and adapted for Ghanaian contractors. Two performance measurement tools are developed in this paper. The
Project Scorecard (ProScor) can be used to measure project performance whilst the Contractor Scorecard (ConScor) can
be used to measure organizational performance. The developed tools were validated using focus group sessions with
experts drawn from the Ghanaian construction industry and later using a questionnaire-based survey of selected Ghanaian
contractors. Future work will focus on developing interactive versions of the tools to enhance their use.
Keywords: performance measurement; performance; performance improvement; Ghana; contractors

Introduction
Underperformance by Ghanaian contractors is a major cause of concern amongst client groups and other stakeholders in
the Ghanaian construction industry. The failure to meet performance targets in the management of construction projects in
the Ghanaian construction industry is a common occurrence. There is a general perception that limited knowledge of con-
tractors in the application of requisite management techniques is largely responsible for the poor performance of Ghanaian
contractors (Ahadzie 2008). Many of the larger indigenous contractors are owned by proprietors who have little or no for-
mal knowledge of construction, project or organizational management. Such proprietors generally do not employ person-
nel with the necessary technical know-how to manage their firms for sustainable growth. Management of construction
firms’ resources is undertaken haphazardly and therefore does not promote growth (Vulink 2004).
There is a general perception in Ghanaian society of widespread poor performance and underperformance in the management of
construction projects by Ghanaian contractors. As a result, the majority of major projects in Ghana are awarded to a
very few large firms, mostly foreign owned. These large foreign-owned firms have dominated the Ghanaian construction
industry for decades (Chileshie & Yirenkyi-Fianko 2012). Both large and small contractors in Ghana find it difficult
accessing finance for projects (Badu et al. 2012). Delays in the payment of contractors for work done are very common
and constitute a major cause of delays in the completion of projects (Adams 2008). On average, construction projects in
Ghana record cost overruns of 60% to 180% and time overruns of between 12 and 24 months (Kpamma & Adjei-Kumi
2010). There is a lack of commitment to the health and safety of Ghanaian construction workers who work in a generally
unsafe environment (Boakye et al. 2010). The combined effect of these and many other problems affecting Ghanaian contractors
is a high incidence of poor performance and underperformance in the Ghanaian construction industry. This makes it difficult to attract
investment into Ghanaian construction firms (GSE 2012). There are currently no listed construction companies on the
Ghana stock exchange and there are no Ghanaian construction firms in the Ghana Club 100 list of prestigious companies
which demonstrate excellence in performance (GIPC 2012). There is an urgent need to improve contractor performance in
the Ghanaian construction industry.
According to Beatham et al. (2004), performance measurement is a critical key to improving performance. It is a major
cornerstone to any effort to attain world-class performance (Alarcon et al. 2001). Performance measurement is thus rele-
vant to the quest to improve the management of projects by Ghanaian contractors. There is broad agreement between the
government of Ghana and major contractor groups that the lack of a performance measurement tool for Ghanaian contrac-
tors is a major cause of poor project delivery. In a bid to improve construction project delivery, the Ghana government has
expressed the need for a performance measurement tool and a ranking system for contractors as a means to ensure that
projects are awarded only to competent contractors (Yeboah 2007). The development of a performance measurement sys-
tem for Ghanaian contractors will enable third parties to independently assess the performance of contractors and ensure

*Corresponding author. Email: kofori-kuragu.cap@knust.edu.gh

© 2015 Taylor & Francis



that only the best contractors are awarded contracts. Improvements in practices and processes are urgently needed to
improve overall performance of Ghanaian contractors.
In this paper, performance measurement tools have been developed for Ghanaian contractors. These tools are intended to measure
performance and to support improved management and delivery of construction projects in Ghana.

Literature review
Performance
Several definitions of ‘performance’ have been proposed in the literature reviewed. It has been described as the valued pro-
ductive output of a system in the form of goods and services, with units of performance describing the actual fulfilment of
those goods and services, measured in terms of features of production, quality, quantity and/or
time (Swanson 1995). Salaheldin (2009) defines performance as the degree to which an operation fulfils primary measures
in order to meet the needs of the customers. Ahadzie (2007) defines it as the behavioural competencies that are relevant to
achieving the goals of project-based organizations.
Much of the literature reviewed for this paper on the definitions of performance describes the concept in terms of achievement and
fulfilment arising from an operation in relation to set goals. For example, the Baldridge National Quality Programme,
BNQP (2009) describes performance as ‘outputs and outcomes from processes, products and services that permit evalua-
tion and comparison relative to goals, standards, past results, and other organisations’. Four types of performance are iden-
tified: product and service; customer-focused; financial and marketplace; and operational (BNQP 2009). In this
classification, product and service performance refers to performance relative to measures and indicators of product and
service characteristics important to customers whilst customer-focused performance refers to performance relative to
measures and indicators of customer perceptions, reactions and behaviours. Financial performance, on the other hand,
describes performance relative to measures of cost, revenue and market position including asset utilization, asset growth
and market share, and operational performance refers to workforce, leadership, organizational and ethical performance rel-
ative to effectiveness, efficiency and accountability measures and indicators (BNQP 2008).
There is a level of performance where organizations achieve high scores in every area of practice and performance. At
this level of performance, they are described as ‘world-class’ organizations, the best in their sectors both in their practices
and results (Prabhu & Robson 2000). To achieve this level of international competitiveness in performance, organizations
should emulate and surpass the best international companies in their sector (Munro-Faure & Munro-Faure 1992).

Performance theories
Performance can be described as the outputs and outcomes from processes, products and services that permit evaluation
and comparison relative to goals, standards, past results and other organisations. Four types of performance may be identi-
fied: product and service, customer-focused, financial and marketplace and operational (BNQP 2009). The different perfor-
mance types have different measures of performance. It is possible for organizations to achieve high scores in every area of
practice and performance. Organizations which are the best in their sectors and demonstrate international competitiveness
are described as ‘world-class’ (Prabhu & Robson 2000). To achieve international competitiveness in performance, organi-
zations should measure up to the best international companies in their sector (Munro-Faure & Munro-Faure 1992).
Three theories of performance improvement can be identified. Psychological theory acknowledges human beings as the
brokers of productivity, along with their cultural and behavioural nuances. Economic theory recognizes economic performance
as the primary driver and survival metric of organizations, whilst systems theory recognizes the complex interactions needed
to make systems and sub-systems effective (Swanson 1995).
Swanson (1995) alludes to Peters and Waterman's (1982) 'In Search of Excellence', which identifies 43 'excellent'
companies and explores the secrets of their organizational success. Referring to the fact that two-thirds of the so-called
excellent companies ceased to be excellent after five years, Swanson (1995) argued that the companies did not possess a
key to excellence. This argument is weakened by the possibility that losses in performance levels may be due to related
or unrelated internal factors, or to external factors which impact on overall performance. Questions can also be asked as to
whether the measures used are comparable. It is therefore relevant to explore one set of factors which enhance performance and
an alternative, independent set of factors which may be responsible for the failure of organizations.

Performance of best-in-class organizations


The Construction Task Force (1998) described the levels of performance achieved by the best organizations in key perfor-
mance areas. Amongst the best companies, there were average reductions of between 6% and 14% in capital cost year on
year, with the highest being 40% year-on-year reduction. Such companies are regularly able to achieve 10‒15% reduction
in construction time, 20% increases on average in the number of projects completed on time and within cost and predict-
ability rates regularly exceeding 95% (Construction Task Force 1998). World-class organizations are also able to achieve
as a minimum 30% annual reductions in project time, zero defects, 50‒60% reductions in accident rates in two years or
less, 10‒15% gains in productivity per year, 10‒20% increases in turnover and profits year on year and 65% decrease in
absenteeism (Construction Task Force 1998).

Performance measurement
Performance measurement is the process of quantifying the efficiency and effectiveness of action (Sousa et al. 2006). It is a
critical factor for effective management since ‘without measuring something; it is difficult to improve it’ (Salaheldin 2009,
p. 3). It can be used to undertake comparisons with successful entities and allow organizations to put their own perfor-
mance into context. A lack of effective performance measurement systems hinders efforts to improve performance. Perfor-
mance measurement enables organizations to identify areas where improvements are needed (Robson 2004). Performance
measurement, however, does not automatically result in improved performance. For effective performance measurement,
a set of critical failure indicators should be identified (Robson 2004). These failure indicators are described by Deros et al.
(2006) as critical success factors. The ideal future or customer requirements (Robson 2004) thus become the measures and
indicators of performance excellence used to determine appropriate success factors (Deros et al. 2006). Performance
measurement is integral to performance management and provides a basis for performance improvement programmes. It
provides data which can be collected and analysed to support effective business decisions, leading to improved business
performance, and it provides a basis for justifying business-related expenditure and for measuring progress against
organizational objectives. Performance measurement also provides a framework which can be used for analysing business
improvement efforts (Artley & Stroh 2001).

Performance measures and indicators


A performance measure is a metric which is used for quantifying the efficiency and effectiveness of actions (Sousa et al.
2006). These are short-term metrics which have to be continually calculated and reviewed (Zairi
1994). BNQP (2009) does not distinguish between ‘measures and indicators’, describing both as numerical information
used to quantify the input, output and performance dimensions of processes, products, programmes, projects, services and
the overall outcomes of an organization. ‘Measures and indicators’ may be derived from a single measurement or compos-
ite measurements (BNQP 2009). Performance measures provide a mechanism for relating product or process improvement
policies developed by senior management for action at a local organizational level (Bond 1999). An ‘indicator’ may be
used when the measurement relates to performance and is not a direct measure of such performance but a predictor of sig-
nificant performance (BNQP 2009). In other situations, performance indicators are used as a representation for absolute
measures of performance recorded by organizations. Perception measures are obtained directly from service users and
other stakeholders (Moullin 2004).

Selected existing construction industry performance measurement tools


In this section, three performance measurement tools used in the construction industries of the United Kingdom, Denmark
and the United States of America are described, from which one is selected and adapted for Ghanaian contractors. This
was necessary given the differences between the Ghanaian construction industry and the industries considered. The selec-
tion was based on the easy adaptability of the systems reviewed for use in the Ghanaian construction industry.

Example of a performance measurement in the UK construction industry


The UK’s Constructing Excellence developed charts for each of the major sub-divisions of the industry as well as graphs
for each of the headline key performance indicators (KPIs). The graphs are developed using performance data for the
respective headline KPIs collected across the entire construction industry. Company or project performance can be mea-
sured by reading off the performance along the performance axis on the relevant graph. Over a specified duration, the per-
formance curve can be drawn by joining the points representing the scores at various points out of a maximum 100
(Constructing Excellence 2009).
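The underlying benchmark-score calculation can be illustrated with a minimal sketch in Python. The scoring rule assumed here (a company's score for a headline KPI is the percentage of industry results it equals or betters) and all of the figures used are illustrative only; Constructing Excellence (2009) should be consulted for the published method.

# Hedged sketch: estimating a 0-100 benchmark score from industry KPI data.
# Assumption (not taken from the paper): the score is the percentage of
# industry results that the company's own result equals or betters.

def benchmark_score(industry_results, company_result, higher_is_better=True):
    """Return a 0-100 score for one headline KPI."""
    if higher_is_better:
        beaten = sum(1 for r in industry_results if company_result >= r)
    else:  # e.g. a KPI where a lower value is better, such as cost overrun
        beaten = sum(1 for r in industry_results if company_result <= r)
    return 100.0 * beaten / len(industry_results)

# Invented industry data for a 'client satisfaction' KPI (scores out of 10)
industry = [5.2, 6.1, 6.8, 7.0, 7.4, 7.9, 8.3, 8.8, 9.1, 9.5]
print(benchmark_score(industry, 8.0))  # 60.0: the company outperforms 60% of the sample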

Online performance measurement in the US construction industry


The Construction Industry Institute (CII) of the United States of America uses an online performance measurement system.
Using the online Performance Assessment System, the CII performance assessment programme provides a means for
members to compare their performance on both capital and maintenance projects with the ‘best in class’. Contractors can
do this by inputting performance on projects and practice data into the secure benchmarking site. Using data from a large
sample of projects from reputable American construction firms, real-time project performance may be accessed in both
graphic and tabular format. Training is offered to contractors to enable them to use the online Performance Assessment
System (CII 2014).
The CII performance measurement system has four main KPI areas for measuring performance, as follows: perfor-
mance; construction productivity; engineering productivity; and practices. These measures are broken down into their
respective sub-areas for effective measurement. Performance, for example, covers the sub-areas: cost, schedule, changes,
work hours and accident data, project impacts and re-work (CII 2012). Whilst the CII (2012) uses fewer KPI categories
than the other two systems, its KPIs have many sub-areas, making it comparable with the UK and Danish systems.

Contractor performance evaluation in the Danish construction industry


The Benchmark Centre for the Danish Construction Sector (BEC) produces a Grade Book for all Danish Construction
firms. These present details of projects undertaken by contractors. For each such project, a factsheet is issued which con-
tains details of the contractor’s performance on the project, valid for three years. When compiling the Grade Book, the
KPIs from each factsheet are weighted with the contract price of the particular task. The Grade Book is automatically
updated when the company receives a new factsheet and when a factsheet is no longer valid. For some categories of proj-
ects, it is a legal requirement for contractors to provide the Grade Book (BEC 2010). The contractor’s grade in each KPI
features the average performance of the contractor over a three-year maximum period. This is based on assessments in the
respective KPIs for each contract or construction activity. The company is provided with a Grade Book for all evaluated
contracts (BEC 2010).
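The weighting principle described above can be sketched in a few lines of Python. The factsheet layout, field names and figures below are assumptions made purely for illustration; the BEC's actual data model is not described in this paper.

from datetime import date, timedelta

# Hedged sketch of the Grade Book principle: each factsheet's KPI value is
# weighted by the contract price of the task, and factsheets older than
# three years drop out of the company average.
factsheets = [
    {"kpi": "accident_frequency", "value": 0.4, "price_dkk_m": 30, "issued": date(2014, 3, 1)},
    {"kpi": "accident_frequency", "value": 0.9, "price_dkk_m": 10, "issued": date(2013, 6, 1)},
    {"kpi": "accident_frequency", "value": 0.6, "price_dkk_m": 60, "issued": date(2010, 1, 1)},  # expired
]

def grade_book_average(sheets, kpi, today=date(2015, 1, 1), validity_years=3):
    """Price-weighted average of a KPI over factsheets that are still valid."""
    valid = [s for s in sheets if s["kpi"] == kpi
             and (today - s["issued"]) <= timedelta(days=365 * validity_years)]
    total_price = sum(s["price_dkk_m"] for s in valid)
    return sum(s["value"] * s["price_dkk_m"] for s in valid) / total_price

print(round(grade_book_average(factsheets, "accident_frequency"), 3))  # 0.525, ignoring the expired factsheet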

Choosing the best system for the Ghanaian construction industry


The three performance measurement systems used in the UK, the USA and Denmark show similarities especially in the
measures of performance used. However, the USA system uses only four KPIs, which limits the scope of metrics that can
be measured. The Danish system can involve up to 14 KPIs, which allows for a wider scope for measurement. Again
unlike the Danish system, the USA performance measurement system is purely internet-based, which can present difficul-
ties in a developing country context where internet access may be limited. The UK system helps to compare an organ-
ization’s performance with the rest of the industry since the graphs are developed using data from across the entire
industry. However, the graphs based on industry-wide data can be expensive to develop. Also outliers arising from perfor-
mance extremes can distort the graphs since they are based on industry-wide data. In the context of the Ghanaian construc-
tion industry, the non-availability of reliable project data will affect the quality of graphs developed if they are based on
largely unreliable data from construction firms. In most cases, Ghanaian construction firms do not keep sufficient organiza-
tional, project or performance data. This may be due to the absence of suitably qualified staff to keep such records, whilst
in other cases construction firms are unwilling to share the correct data relating to their firms for taxation reasons. The Danish
system, however, focuses on a single company's performance, independent of the performance of other firms. Following
an assessment of the three systems in use in the UK, Denmark and the USA and potential for adapting them for use in a
developing country context, the Danish approach was adopted and adapted for use by Ghanaian contractors owing to its
simplicity of design, ease of use and adaptability to other contexts. The next section further discusses how the Danish sys-
tem works.

The Danish construction performance system in operation


In Denmark, contractors bidding for jobs since 2005 have had to demonstrate competence in a set of 14 KPIs which were
used in the Danish construction industry (BEC 2006). However, according to BEC (2010), since 2010 a new set of seven
KPIs has been used to benchmark contractor performance for Danish contractors. These are:

(1) Actual construction time in relation to planned construction time.
(2) Number of defects entered in the handing-over protocol, classified according to degree of severity (comprising the
    following four KPIs):
    • number of minor defects;
    • number of less serious defects;
    • number of serious and critical defects;
    • number of defects to be investigated further.
    • Economic value of defects.
(3) Defects in delivery which hamper the intended use of the essential parts of the building.
(4) Accident frequency.
(5) Customer satisfaction with the construction process.
(6) Customer loyalty.

For the purposes of this paper, the more recent version of Danish construction KPIs from BEC (2010) is used. It shows
how many construction projects have been evaluated and their scale. The more projects in the Grade Book, the more reli-
able the indicators. A Grade Book cannot be used until at least three projects completed in the last three years have been
evaluated. This ensures that the companies are judged on their current performance (BEC 2010) and gives a fair reflection
of the overall performance of contractors.
BEC staff subject all data submitted by contractors to rigorous scrutiny to ensure accuracy. Secondly, some projects
may be selected for random checks by the BEC staff. Thirdly, data, where applicable, are subject to confirmation by
other parties involved in projects. So if the client gives information about defects, for example, these may be passed on to
the contractor to confirm the details as supplied by the client (BEC 2010). These checks ensure the integrity of the informa-
tion provided by the contractors in relation to evaluated projects and prevent arbitrariness in the data supplied by respond-
ents. BEC (2014) uses four main KPIs: deadlines, defects, health and safety and customer satisfaction. Apart from
'deadlines' and 'health and safety', which have single categories, there are two categories for measuring 'customer satisfaction'
and six categories for 'defects' (Table 1).

Table 1. Example of Grade Book for contractors.

Company: Construction company A/S
VAT: 12345678

Area                     Key performance indicators                               Company average
Deadlines                Actual construction time measured against                101.4%
                         planned construction time
Defects                  1. Number of minor defects                               3.628 per DKK 1 million
                         2. Number of less serious defects                        1.039 per DKK 1 million
                         3. Number of serious and critical defects                0.069 per DKK 1 million
                         4. Number of defects to be investigated further          0.077%
                         5. Economic value of defects                             0.666%
                         6. Proportion of cases with defects in the delivery,     4.7%
                            which hampered, or actually prevented, the intended
                            use of the essential parts of the building
Health and safety        Accident frequency                                       57 accidents per DKK 1 million
Customer satisfaction    Customer satisfaction                                    3.9
                         Customer loyalty                                         4.2

Project type             Number of projects evaluated    Total contract sum for evaluated projects
                                                         expressed in million DKK (2004 price level)
New build                13                              25‒100                                          1
Repair and maintenance   4                               Less than 25                                    0

Source: BEC (2014).



Methodology and methods


Methodology
In this section, some approaches to research are discussed as well as the specific approaches used in this paper and the jus-
tification for choosing these approaches.

Research paradigms
The major research paradigm for construction and built environment research has been mainly positivistic and quantita-
tive. In recent times, qualitative constructivist paradigms employing interpretivism, grounded theory and ethnomethodol-
ogy have become widely used (Dainty 2008). The emerging trend for research in the construction and built environment employs
multi-methodology based on triangulation (Fellows & Liu 2009). This approach has been used in this paper: a field survey
was used to confirm the findings arising from the literature review, and the outputs were later validated using expert
interviews.

Inductive and deductive reasoning


There are two main reasoning approaches which can be used in the acquisition of new knowledge. These are deductive rea-
soning and inductive reasoning (Hyde 2000). In the development of research theory, deductive reasoning involves the deri-
vation of an expectation and a testable hypothesis from a general theoretical understanding (Babbie 2007). It is an
approach used for testing theory and commences with an established theory or generalization. Further investigation is
undertaken to see if the theory applies to specific instances (Hyde 2000). In the inductive model, research is used to build
theories. It is a logical model in which general principles are developed based on specific observations. This involves using
a set of specific observations to discover a pattern that shows a degree of order among all the identified events (Hyde
2000). This research is based on the deductive approach in which a model of performance measurement is developed based
on examples reviewed from literature and the review of existing models.

Choosing a research method


There are three main approaches to research: field research, focus groups and the survey approach. Field research is the direct
observation of events in progress. It is frequently used to develop theories through observation (Babbie 2007). Whilst this
method provides the opportunity to observe events first-hand as they take place, there is a limit to the observations that can
be made.
As a research method, focus groups present the best method for accessing group norms. The discussions which occur
within focus groups provide rich data on the group meanings associated with a topic (Bloor et al. 2001). Also called group
interviewing, the focus group method is a qualitative method which may be based on structured, semi-structured or
unstructured interviews (Babbie 2007). Focus groups can be used to obtain data on the underlying meanings of assessments
made by a group as well as data on the ambiguities and the processes which lead to assessments made by the group.
Survey research is a method of collecting information from different sources by asking questions. These may be done
using questionnaires or without questionnaires. Questionnaire-based interviews may use structured, semi-structured or
unstructured questionnaires. Interviews done during surveys may be face-to-face, by telephone or using electronic alterna-
tives. Postal questionnaires can also be used (Babbie 2007).
Focus groups provide rich qualitative information and can be an essential link between qualitative and quantitative
research stages (Jenkins & Harrison 1992). The relevance of focus groups in academic research lies in the access they pro-
vide to group meanings, processes and norms and its potential to provide a platform for participants to articulate normative
assumptions which would otherwise not be articulated (Bloor et al. 2001). This presents a clear advantage over question-
naire-based surveys where respondents may apply subjective interpretations to questions. Groups of approximately 12 peo-
ple (Bloor et al. 2001) ‒ can be up to 15 people (Babbie 2007) ‒ may take part in focus group sessions. The focus group
method of research is flexible with high face-validity, low in costs and able to produce speedy results. A major weakness
of focus groups is that they may offer researchers less control than in individual interviews; they may be difficult to ana-
lyse; and differences between groups may present problems (Babbie 2007). Also, the focus group process may present a
picture which does not truly reflect individual beliefs and attitudes. The most fundamental limitation of the focus group
approach is that the findings obtained cannot be projected to the population as a whole (Jenkins & Harrison 1992).
The varied approaches to surveys afford opportunities to reach wide audiences and increase participation in surveys
(Babbie 2007). The challenge, however, is to find the right respondent group that correctly reflects the intended target
group. The main drawback of surveys is that they can be expensive to undertake. Also where structured questionnaires are
used, opportunities to further interrogate respondents are restricted (Hyde 2000). Following consideration of the three
approaches, it was decided to use the focus group and survey approaches in developing this paper.

Approaches to validation
Ahadzie (2007) identified five main techniques for undertaking external validation as follows:

(1) Using independent verification obtained through the use of surrogate variables.
(2) Splitting samples and using one part for estimating the model and the other for validation.
(3) Taking repeated samples from the original sample and re-fitting the model each time.
(4) Using Stein's equation to re-calculate the adjusted coefficient of determination (R²).
(5) Approaching experts to comment on relevant aspects of the model.

Because the performance measurement tool developed in this study has been customized for Ghanaian contractors, it
was decided to use experts from the Ghanaian construction industry to test it and provide feedback. This approach to vali-
dation is similar to the approach used by Agbodjah (2008), which involved review meetings with a panel of experts to vali-
date a People Management Policy Development (PMPD) framework developed for large construction companies in
Ghana. Agbodjah (2008) used a panel of 12 experts comprising eight drawn from industry professional and trade associa-
tions and four academics. Following a presentation on the PMPD framework, the panel answered questions on an assess-
ment form developed using a Likert scale which were later collected and assessed.

Methods
In developing this paper, a thorough review of literature and content analysis of existing academic publications was under-
taken. Literature on performance, performance measurement, existing systems and databases used to measure and rank
performance were reviewed. Three performance measurement systems used in the UK, the USA and the Danish construc-
tion industries were reviewed. The respective performance measurement tools and approaches reviewed were compared to
identify their respective weaknesses and strengths. The Danish system, deemed to be the simplest and easiest to use, was
selected and adapted to the Ghanaian construction industry. This involved using KPIs relevant to the Ghanaian industry
and changing terminology to reflect the Ghanaian context. Also the layout was modified to reflect new categories.
Ofori-Kuragu (2014) examined and compared a set of 16 KPI groups and identified the most common KPIs. These
KPIs were selected through a scoring and ranking exercise of KPI groups identified from literature. Following a field sur-
vey of Ghanaian contractors, a list of the 10 most common KPIs from the 16 KPI groups deemed to be most relevant to
Ghanaian contractors was selected. These are: client satisfaction, cost, time, quality, health and safety, business perfor-
mance, productivity, predictability, people and environment (Ofori-Kuragu 2014). These 10 KPIs are adopted and inte-
grated into the performance measurement tools developed in this paper. Two variants of the performance measurement
tool were developed. These are the Contractor Scorecard, ConScor (Table 2), and the Project Scoresheet, ProScor
(Table 3). The tools developed ‒ ConScor and ProScor ‒ were validated using focus group sessions with a group of experts
drawn from the Ghanaian construction industry and, subsequently, questionnaire-based interviews with selected
Ghanaian contractors.

Results and discussion


The performance measurement system developed in this paper is based on the performance measurement system used in
the Danish construction industry. It consists of two separate tools ‒ the Project Scoresheet (ProScor) and the Contractor
Scorecard (ConScor). ProScor is used to measure contractor performance on specific projects whilst ConScor tracks the
overall performance of contractors over a period. Generally, projects included in ProScor and ConScor should not be
more than three years old to ensure only projects which are representative of the company’s current performance are
included. ProScor allows for details of projects to be noted to prevent multiple counting of projects. Project types are spec-
ified from a range of three ‒ new build, renovation and civil engineering or road projects. Provision is made for miti-
gating circumstances that may have negatively impacted on performance to be recorded. Where extenuating
circumstances are determined, discussions should be held with the contractor to determine whether or not to include the
projects involved in the scoresheet.
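The roll-up from ProScor project scores to ConScor company averages can be sketched as follows. The data layout, the three-year cut-off implemented as 3 × 365 days and the simple unweighted mean are assumptions made for illustration; the paper does not prescribe a particular aggregation formula.

from datetime import date

# Hedged sketch: aggregating eligible ProScor project scores into ConScor
# company averages per indicator. Projects older than three years or excluded
# for extenuating circumstances are left out.
projects = [
    {"finish": date(2014, 8, 1), "excluded": False,
     "scores": {"client satisfaction": 7, "cost": 6, "time": 5}},
    {"finish": date(2013, 2, 1), "excluded": False,
     "scores": {"client satisfaction": 8, "cost": 7, "time": 6}},
    {"finish": date(2010, 5, 1), "excluded": False,  # too old to count
     "scores": {"client satisfaction": 9, "cost": 9, "time": 9}},
]

def conscor_averages(projects, today=date(2015, 1, 1)):
    """Unweighted mean of each indicator over eligible projects."""
    eligible = [p for p in projects
                if not p["excluded"] and (today - p["finish"]).days <= 3 * 365]
    indicators = sorted({k for p in eligible for k in p["scores"]})
    return {k: sum(p["scores"][k] for p in eligible) / len(eligible) for k in indicators}

print(conscor_averages(projects))
# {'client satisfaction': 7.5, 'cost': 6.5, 'time': 5.5}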

Table 2. Contractor Scorecard for Ghanaian contractors (ConScor).

CONTRACTOR SCORECARD (ConScor)

Construction company:                                   Financial class:

Project type                 Number of projects     Total contract sum for evaluated    Number of projects on which
                             evaluated              projects (in millions GH¢)          evaluation abandoned*
New build
Repairs and maintenance
Roads/civil works

Performance indicator        Sub-criteria                                      Company average score
                                                                               (1 minimum to 10 maximum)
Client satisfaction          Client satisfaction (product)
                             Client satisfaction (service)
Cost
Time
Quality                      Defects at available to use
                             Defects after defects liability period
Health and safety            Reportable accidents (incl. fatalities)
                             Reportable accidents (excl. fatalities)
Productivity
Business performance         Pre-tax profit
                             Operating profit
                             Turnover
People
Total score
ConScor index score

*Projects on which the evaluation has been abandoned or where the parties could not agree or did not wish to participate.

Validation of performance measurement tools


In the first stage of the validation process, two focus group sessions were conducted with 10 experts drawn from across the
Ghanaian construction industry. Following these sessions, the feedback received from the responses was used to improve
the performance measurement tools developed. The improved tools were then presented to Ghanaian contractors to sample
their views using a questionnaire-based survey. Further improvements were made to the tools using the outcome of this
survey.

First stage validation interviews


The focus group sessions were conducted with selected experts drawn from a broad spectrum of the Ghanaian construction
industry to assess the effectiveness of the performance measurement tools. The sessions were conducted using semi-
structured questionnaires, whose development was based on the main findings of the study and the tools developed from
the research. The 10 Ghanaian experts targeted
included contractors, consultants, academics and researchers drawn from the construction industry. Respondent contrac-
tors were selected to include a large, a medium and a small contractor respectively to ensure that feedback was provided
from their respective perspectives. The experts were put into two groups to ensure their independence in arriving at deci-
sions and ensure maximum participation of the members. A major criterion for selecting participants was to choose profes-
sionals with substantial experience of working in the Ghanaian construction industry.

Design of instrument used for focus group sessions


The questionnaire was designed to assess the effectiveness and usability of the performance measurement tools ‒ ProScor
and ConScor. The questionnaire was made up of two sections. The first section with four questions was used to collect

Table 3. Project Scorecard for Ghanaian contractors (ProScor).

PROJECT SCORESHEET (ProScor)

Construction company:                                   Class:

Project type (tick one)      Project description      Project start and      Total contract sum expressed
                             and location             finish dates           in million GH¢
New build
Repairs and maintenance
Roads/civil works

Performance indicator        Sub-criteria (if any)                            Project score (1 minimum to
                                                                              10 maximum)
Client satisfaction          Client satisfaction (product)
                             Client satisfaction (service)
Cost
Time
Quality                      Defects at available to use
                             Defects after defects liability period
Health and safety            Reportable accidents (incl. fatalities)
                             Reportable accidents (excl. fatalities)
Productivity
Business performance         Pre-tax profit
                             Operating profit
                             Turnover
People
Total project score
ProScor index score

Is there any special event(s) which could have negatively impacted on performance on this project? Yes [ ] No [ ]
If yes, please state briefly below and provide further details on reverse.
Should this project be included in your performance scorecard? Yes [ ] No [ ]
THIS SECTION FOR EXTERNAL ASSESSOR'S USE: Can the project be included in the company's project record? Yes [ ] No [ ]

details of the respondents to ensure that all the respondents met the criteria for the target sample. Questions asked about
their professional background, level of education, years of experience in the construction industry and the type of organiza-
tion they worked for. In the second section, questions were used to explore the respondents’ views on the effectiveness of
the two performance measurement tools developed for Ghanaian contractors. In Questions 5 and 6, a sample of the Con-
tractor Scorecard and the Project Scorecard respectively were produced for respondents’ review. Six questions followed
which asked about the simplicity of the tools; how easily the terminology used could be understood; how easy the tool was
to use; the perceived effect of the tool on the performance measurement process; how confident the respondents were that
the tool would improve performance; and how ready the respondents were to implement the tools in their respective organ-
izations. These questions were based on a Likert scale of 1‒5 where 1 represented ‘strongly disagree’ and 5 ‘strongly
agree’, with the option N provided if the question was not applicable. There were two further questions which provided
space for respondents to describe any perceived weaknesses of the tools and offer specific suggestions for improving their
effectiveness. Lastly there was space for the respondents to offer suggestions for improving the validation questionnaire
which would subsequently be used for the contractor validation survey.

Pre-testing of measuring instrument


The validation questionnaire was pre-tested on three experienced construction industry professionals ‒ an academic, a con-
sultant and a contractor ‒ to identify potential flaws in the design and any general improvements required. Feedback from
the pre-testing was used to improve the structure and layout of the questionnaires. Less familiar terminology was either
explained or removed.

Table 4. Validation feedback scores for Contractor Scorecard (ConScor).

Validation question Average score

The Contractor Scorecard (ConScor) for Ghanaian contractors is simple to use 4


The terminology used in the Contractor Scorecard (ConScor) is easy to understand 3.7
The Contractor Scorecard (ConScor) is easy to use 3.6
The Contractor Scorecard (ConScor) makes it easy to compare contractor organizational performance 2.6
I am confident that The Contractor Scorecard (ConScor) can help us improve our performance 3.3
I will be ready to try out the Contractor Scorecard (ConScor) 3.1

Profile of respondents
The 10 experts each had a minimum of 15 years’ experience working in the Ghanaian construction industry. Consideration
was given to selecting senior officials able to make a contribution to the policy-making process in their respective estab-
lishments whilst ensuring balance in the spread of professional backgrounds. This was to ensure that they had access to the
relevant information required to address the issues raised in the questionnaire whilst bringing on board a broad range of
experiences. Owners of five large Ghanaian construction firms and five other experts were selected to provide feedback as
part of the process. The five other experts comprised a senior architect, a senior quantity surveyor, a leading academic ‒ a pro-
fessor of international repute with a good understanding of the Ghanaian construction industry, a senior official of the Pub-
lic Procurement Authority with responsibility for benchmarking and a background in construction and the director of a
leading construction research institute in Ghana.

Feedback from expert interviews


The responses received from the focus group sessions provided useful feedback which was used to improve the tools
developed in this paper. The comments provided by the respondents covered both tools presented in the questionnaire for
review. The usability and usefulness of the respective tools were explored using a Likert scale.
The commonest issue amongst respondents was the fact that the two scorecards appeared to have been designed for
larger projects worth more than 1 million of the local currency, Ghana Cedis, a position held by five respondents. This has
been amended to reflect their use for both small and large projects. Some of the more specific observations included one
which asked if there were ‘methods for “measuring” the qualitative indicators such as client satisfaction, quality and health
and safety’. General comments about the structure of the scorecards included suggestions to provide space or more space
for specified items. These have been responded to with modifications where necessary. Table 4 and Table 5 represent aver-
age scores for the respective validation criteria for ConScor and ProScor respectively.
From a maximum of 5, average responses were between 3 and 4 for all categories. The only exception was to the ques-
tion of ConScor's potential and effectiveness as a tool for comparing contractor performance, which scored 2.6 out of 5;
more than half of the respondents did not agree with this proposition. There is also significant variance between the score for this question
for ConScor and ProScor, with 2.6 and 4.1, respectively, raising the prospect of the ProScor score being anomalous. Apart
from this, there is a reasonable measure of consistency between the results obtained for ConScor and ProScor. The overall
average of the mean scores is 3.472. On the Likert scale of 1 to 5 used in the validation questionnaire, this suggests that
responses are generally positive for the tools in terms of their usability and usefulness.

Table 5. Feedback scores for Project Scorecard (ProScor).

Factor Average score

The Project Scoresheet (ProScor) for Ghanaian contractors is simple to use 3.2
The terminology used in the Project Scoresheet (ProScor) for Ghanaian contractors is easy to understand 3.6
The Project Scoresheet (ProScor) for Ghanaian contractors is easy to use 4.1
The Project Scoresheet (ProScor) for Ghanaian contractors makes it easy to compare contractor project performance 4.0
I am confident that Project Scoresheet (ProScor) for Ghanaian contractors can help us improve performance 3.7
I will be ready to try out the Project Scoresheet (ProScor) for Ghanaian contractors 3.7

The comments and observations made by the respondents have generally been incorporated into the developed tools as
required. Suggestions have been considered and identified weaknesses have been addressed. In response to the feedback,
more specific measures were introduced to enable health and safety performance to be more easily measured in the respec-
tive scorecards. The final outputs in this paper reflect all the modifications and improvements made.

Final stage validation


Feedback from the initial validation was used to improve the tools developed. The modified tools were presented again to
selected Ghanaian contractors using a questionnaire-based survey. The survey instrument was the questionnaire used with
experts for the first stage of validation after it had been improved based on comments and suggestions.

Population and sample size for validation survey


The survey targeted Ghanaian construction contractors in the largest financial categories, D1K1. Categories are based on
the maximum size of projects they can undertake. D1K1 contractors are not limited as to the size of project they can under-
take. According to the Ministry of Works and Housing (MoWH), which keeps a register of all Ghanaian contractors in the
respective classes, there were 139 D1K1 contractors in Kumasi and Accra at the end of 2012 and thus the population size
(N) for D1K1 contractors is 139. D1K1 contractors are the largest of four contractor financial categories in Ghana. The sur-
vey was restricted to Accra and Kumasi because experience shows that almost all D1K1 contractors are based in these two
cities in Ghana. Where they operate outside Accra and Kumasi, they tend to maintain an office in these cities so it was
administratively easier to work with these contractors. Also it was felt that the subject of the study would be most relevant
to the largest contractors which would be most likely to have or be interested in performance measurement systems.
It is recommended that where N is less than 200, the entire population should be sampled using the census approach
(Israel 2009). In the absence of a credible central database of contractors, it was impossible to identify all the contractors
in the population. The sample size was thus calculated mathematically to give a more precise sample for the research.
The risk that a sample selected will not show the true value of the population is reduced for confidence levels of 99%
and raised for lower confidence of 90% or less (Israel 2009). Using a confidence level of 95%, the sample size, n, was cal-
culated using Israel (2009) as follows:
n = N / (1 + N(e)²)

Given N = 139 and e = 0.05 for a confidence level of 95%:

n = 139 / (1 + (139)(0.05)²) = 139 / 1.3475 = 103.153, approximated to 103.
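The same calculation can be reproduced in code as a direct transcription of the simplified formula above (the function name is ours):

def sample_size(population, margin_of_error=0.05):
    """Simplified sample-size formula n = N / (1 + N * e**2) (Israel 2009)."""
    return population / (1 + population * margin_of_error ** 2)

n = sample_size(139, 0.05)
print(n)  # 103.15..., approximated to 103 for the D1K1 contractor survey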

Research instrument for contractor survey


The questionnaire was made up of two sections. The first section comprised four questions used to collect details of the
respondents. Respondents were asked about their professional background, level of education and the number of years of
experience in the construction industry. In the second section, questions were used to explore the respondents’ views on
the effectiveness of the two performance measurement tools developed for Ghanaian contractors. In Questions 5 and 6, a
sample of the Contractor Scorecard and the Project Scorecard respectively were produced for respondents’ review. Six
questions followed which asked about the simplicity of the tools; how easily the terminology used could be understood;
how easy the tool was to use; the perceived effect of the tool on the performance measurement process; how confident the
respondents were that the tool would improve performance; and how ready the respondents were to implement the tools in
their respective organizations. These questions were based on a Likert scale of 1 to 5 where 1 was the lowest and 5 the
highest, with the option N provided if the question was not applicable to them. There were two further questions which
provided space for respondents to describe observed or perceived weaknesses of the tools and offer specific suggestions
for improving the effectiveness of the tools.

Results analysis and discussion of final validation stage


Out of 103 questionnaires distributed 56 were returned, representing a 54% return rate. Analysis of the questionnaires
showed that 63.8% of respondents agreed that the Contractor Scorecard, ConScor, was simple to use. This consisted of
44.7% who strongly agreed and 19.1% who agreed. However, 2.1% ‘strongly disagreed’ that it was easy to use whilst
19.1% said they ‘disagreed’, making the dissenters altogether 21.2% with 14.9% unsure.
On the terminologies used, 57.4% agreed that the language and terminology used was easy to understand. However
19.2% disagreed, including 6.4% who ‘strongly disagreed’ and 23.4% who were unsure. Whilst the percentage that found
the terminology easy to understand was significant, the 23% who were not sure is still high and suggests more work needs
to be done in ensuring the terminology used is further simplified.
On the usability of ConScor, 85.1% agreed that it was easy to use. This included 25.5% who strongly agreed. Only
6.4% thought it was not easy to use, with a further 8.5% unsure. A total 68% of respondents agreed that ConScor made it
easy to compare contractor performance whilst 12.8% disagreed, with 19.1% uncertain about its effectiveness as a tool for
distinguishing between the performances of different contractors.
On improving contractor performance, 59.6% of respondents agreed that the Contractor Scorecard would help improve
contractor performance, whilst 12.8% disagreed and 27.7% were not certain if the Contractor Scorecard could help
improve their performance. Of the total, almost 32% of respondents were willing to try out the contractor performance
measurement tool, ConScor, whilst 13% stated definitively that they were not willing to try it in their organizations, with
6.4% being strongly opposed to any suggestion of trying it out in their organizations; 55% of the respondents were unable
to make their minds up whether to try it out or not.
On the question of whether ‘the Project Scoresheet for Ghanaian contractors was simple to use’, 68% of respondents
agreed, including 21% who ‘strongly agreed’ whilst 20% disagreed, of whom 11% ‘strongly disagreed’; 12% of respond-
ents were not sure of the simplicity of the Project performance measurement tool, ProScor.
Some 58% of respondents agreed that the terminology and language of ProScor was easy to understand: 9% of these
‘strongly agreed’ whilst 17% disagreed and 25% were not entirely sure about the language and terminology. On the usabil-
ity of the Project Scorecard, 89% of respondents agreed that it was easy to use, with 4% disagreeing and 6% uncertain.
In response to the question on perceptions of whether the Project Scorecard made it easy to compare the performance
of contractors, 69% agreed, 10% disagreed and 21% of respondents were not sure of its effectiveness or otherwise as a
tool for comparing contractor performance. Regarding improving performance, 53% of the respondents considered Pro-
Scor as a tool which could improve their performance, with 17% disagreeing and 30% uncertain whether it could lead to
performance improvements or otherwise. Asked if they would try out ProScor in their organizations, 39% were ready and
willing to try it out, 6% would not try it out and 55% could not decide whether they would try it out in their organizations
or not.
For the Likert scale, a coding system is used ranging from ‒2 to +2 (Table 6). The percentage responses for the respec-
tive questions were multiplied by the codes and the coded sums were calculated as shown in Table 7 and Table 8.
A positive overall sum for each of the areas tested by the validation questions indicated agreement with the proposition
whilst a negative overall result would imply disagreement. In this sense, high values would indicate ‘strong’ agreement
and disagreement respectively.
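The coded-sum calculation can be reproduced with a short sketch; the response split below is read off the first row of Table 7 and the codes follow Table 6.

# Coding from Table 6: strongly disagree = -2 ... strongly agree = +2
CODES = {"strongly disagree": -2, "disagree": -1, "neutral": 0, "agree": 1, "strongly agree": 2}

def coded_sum(percentages):
    """Multiply each percentage response by its Likert code and sum (as in Tables 7 and 8)."""
    return round(sum(CODES[label] * pct for label, pct in percentages.items()), 1)

# Percentage split consistent with the first ConScor row of Table 7
simple_to_use = {"strongly disagree": 2.1, "disagree": 19.1, "neutral": 14.9,
                 "agree": 44.7, "strongly agree": 19.1}
print(coded_sum(simple_to_use))  # 59.6, a net positive score indicating overall agreement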
The trends in the results show that the performance measurement tools ‒ ConScor and ProScor ‒ developed in this
paper were approved in the second stage of validation as being fit for purpose by the respondents. The net positive results
for all the areas show that the proportion of those who agreed with the statements used in the research instrument was con-
sistently larger than those who disagreed. The highest scores were for the usability of the tools, with a net result of 104 and
120 for ConScor and ProScor respectively. This shows that most of the respondents agreed that the tools were easy to
use. The lowest scores were for the level of willingness amongst the respondents to use the tools in their organizations.
Whilst the overall approval ratings for the respective categories are generally high for all the questions asked, the level of
willingness to trial the tools in their companies is relatively low, with 24 and 46 aggregated scores for ConScor and Pro-
Scor. This can be ascribed to the novelty factor. This will be addressed as contractors get used to these tools and the bene-
fits that can be derived from using them.
The results of the second-stage validation exercise on selected contractors were largely in line with and reinforced the
results of the first stage of the validation exercise. In both cases, the highest score obtained was for ‘usability’ of the tools.
This is a positive validation of the tools developed in this paper.

Table 6. Coding for Likert scale.

Likert scale grade    1                     2            3          4        5
                      (Strongly disagree)   (Disagree)   (Neutral)  (Agree)  (Strongly agree)
Code                  ‒2                    ‒1           0          1        2

Table 7. Summated coded responses for Contractor Scorecard (ConScor).

Factor 1 2 3 4 5 Total

The Contractor Scorecard (ConScor) for Ghanaian contractors is simple to use ‒4.2 ‒19.1 0 44.7 38.2 59.6
The terminology used in the Contractor Scorecard (ConScor) is easy to understand ‒12.4 ‒19.2 0 29.4 56 53.8
The Contractor Scorecard (ConScor) is easy to use 0 ‒6.4 0 59.6 51 104
The Contractor Scorecard (ConScor) makes performance measurement simple ‒10 ‒7.8 0 38 60 80
I am confident that the Contractor Scorecard (ConScor) can help us improve our performance ‒6 ‒9.8 0 31.6 56 71.8
I will be ready to try out the Contractor Scorecard (ConScor) ‒12.8 ‒6.6 0 20 24 24

Table 8. Summated coded responses for Project Scorecard (ProScor).

Factor 1 2 3 4 5 Total

The Project Scoresheet (ProScor) for Ghanaian contractors is simple to use ‒22 ‒9 0 21 94 84
The terminology used in the Project Scoresheet (ProScor) for Ghanaian contractors is easy to understand ‒14 ‒10 0 49 18 43
The Project Scoresheet (ProScor) for Ghanaian contractors is easy to use ‒2 ‒4 0 61 56 120
The Project Scoresheet (ProScor) for Ghanaian contractors makes performance measurement simple ‒4 ‒8 0 46 46 80
I am confident that Project Scoresheet (ProScor) for Ghanaian contractors can help us improve performance ‒12 ‒11 0 31 44 52
I will be ready to try out the Project Scoresheet (ProScor) for Ghanaian contractors ‒2 ‒5 0 25 28 46

Verification of contractor self-assessment using ConScor and ProScor


In the first instance, both ProScor and ConScor will be completed by the contractor. The contractor’s version of records
will then be verified by confirming with the client or consultant on the projects involved using documentation and records
provided by the contractor, consultant and client. Where decisions regarding specific projects are not agreed, such projects
should be excluded from the records. To ensure the effectiveness of the system, there must be strong collaboration among
the government agencies responsible for construction (such as the Ministry of Works and Housing (MoWH), the Ministry of
Roads and Transport, the Department of Feeder Roads and the Highway Authority) and the contractor associations, with support
from all key stakeholders.

Conclusions
In this paper, two performance measurement tools ‒ Contractor Scorecard (ConScor) and Project Scoresheet (ProScor) ‒
have been developed for Ghanaian contractors. ConScor tracks the overall performance of contractors over a period of
time whilst ProScor is used to measure contractor performance on specific projects. Generally, projects included in Con-
Scor and ProScor should not be more than three years old. This allows for only projects which are fairly representative of
the company’s current or recent performance to be included.
ProScor allows for mitigating circumstances that negatively impact on performance to be recorded. Where extenuating
circumstances are determined, discussions are held with the contractor to determine whether or not to include the projects
involved in the scoresheet.
Feedback provided by contractors and selected experts from the Ghanaian construction industry suggests that the perfor-
mance tools developed in this paper are generally easy to use and the terminology used is simple and easy to understand.
More than a third of contractors surveyed were willing to use the performance measurement tools in their organizations. It
is hoped that following education on the tools and their potential benefits, more contractors will be willing to use them in
their organizations.
The next stage of the research will focus on the development of an electronic version of the performance measurement
tools to enhance their use and administration.

Disclosure statement
No potential conflict of interest was reported by the authors.

References
Adams F. 2008. Risk perception and Bayesian analysis of international construction contract risks: The case of payment delays in devel-
oping countries. Int J Proj Manag. 26:138‒148.
Agbodjah SL. 2008. A people management policy development framework for large construction companies operating in Ghana [disser-
tation]. Kumasi: Kwame Nkrumah University of Science and Technology.
Ahadzie D. 2007. A Model for predicting performance of project managers in mass housing building projects in Ghana [dissertation].
Wolverhampton: Wolverhampton University.
Alarcon L, Grill A, Freire J, Diethelm S. 2001. Learning from collaborative benchmarking in the construction industry. Paper presented
at the 9th International Group for Lean Construction Conference; 2001 Aug 6‒8; Singapore.
Artley W, Stroh S. 2001. The performance-based management handbook: A six-volume compilation of techniques and tools for imple-
menting the Government Performance and Results Act of 1993. Washington, DC: Performance-based Management Special Interest
Group.
Babbie E. 2007. The practice of social research. Belmont: Thomson Wadsworth.
Badu E, Edwards P, Owusu-Manu D. 2012. Trade credit and supply chain delivery in the Ghanaian construction industry: analysis of
vendor interactions with small to medium enterprises. J Eng Des Tech. 10:360‒379.
Baldridge National Quality Programme (BNQP). 2009. Baldridge Award Criteria [Internet]. Baldridge National Quality Programme;
[cited 2009 May 14]. Available from: http://www.quality.nist.gov.
Beatham S, Anumba C, Thorpe T, Hedges I. 2004. KPIs: a critical appraisal of their use in construction. Benchmarking Int J. 11:93‒117.
Bloor M, Frankland J, Thomas M, Robson K. 2001. Focus groups in social research. London: Sage.
Benchmark Centre for the Danish Construction Sector (BEC). 2006. Benchmarking Danish construction. Copenhagen: Benchmark Cen-
tre for the Danish Construction Sector.
Benchmark Centre for the Danish Construction Sector (BEC). 2010. Applying and improving key performance indicators (KPI) in the
Danish construction sector. Copenhagen: Benchmark Centre for the Danish Construction Sector.
Benchmark Centre for the Danish Construction Sector (BEC). 2014. Benchmarking Danish construction. Copenhagen: Benchmark Cen-
tre for the Danish Construction Sector.
Boakye NA, Ankomah B, Fugar F. 2010. Safety on construction sites: The role of the employer and employee. Proceedings of the West
Africa Built Environment Research (WABER) Conference; 2010 Jul 27‒28; Accra. Reading: Reading University.
Bond T. 1999. The role of performance measurement in continuous improvement. Int J Operations Prod Manag. 19:1318‒1334.
Chileshie N, Yirenkyi-Fianko B. 2012. An evaluation of risk factors impacting construction projects in Ghana. J Eng Des Tech.
10:306‒329.
Construction Industry Institute (CII). 2012. Benchmarking and metrics [Internet]; [cited 2012 Jul 12]. Available from: http://www.con
struction-institute.org/
Construction Industry Institute (CII). 2014. Performance assessment [Internet]; [cited 2014 Oct 13]. Available from: http://www.con
struction-institute.org
Constructing Excellence. 2009. UK construction industry KPIs [Internet]; [cited 2009 May 16]. Available from: http://www.constructin
gexcellence.org
Construction Task Force. 1998. Rethinking construction. London: UK Department of Trade and Industry.
Dainty A. 2008. Research design. Research methods for construction. Oxford: Blackwell Science.
Deros B, Yusof S, Salleh A. 2006. A benchmarking implementation framework for automotive industry. Benchmarking Int J.
13:396‒430.
Fellows R, Liu A. 2009. Research methods for construction. Oxford: John Wiley & Sons.
Ghana Stock Exchange (GSE). 2012. Listed companies [Internet]; [cited 2012 May 7]. Available from: http://www.gse.com.gh
GIPC. 2012. Ghana Club 100. Ghana Investment Promotion Council. Available from: http://www.gipcghana.com
Hyde KF. 2000. Recognising deductive process in qualitative research. Qual Market Res Int J. 3:82‒90.
Israel GD. 2009. Sampling the evidence of extension program impact. PEOD Vol 6, October. Florida: University of Florida.
Jenkins SK, Harrison M. 1992. Focus groups: A discussion. Brit Food J. 9:33‒37.
Kpamma Z, Adjei-Kumi T. 2010. The lean project delivery system (LPDS): application at design stage for construction projects in
Ghana. Proceedings of the West Africa Built Environment Research (WABER) Conference; 2010 Jul 27‒28; Reading: Reading
University.
Moullin M. 2004. Eight essentials for performance measurement. Int J Health Care Qual Ass. 17:110‒112.
Munro-Faure L, Munro-Faure M. 1992. Implementing total quality management. London: Pitman.
Ofori-Kuragu JK. 2014. Enabling world-class performance ‒ a framework for benchmarking [dissertation]. Kumasi: Kwame Nkrumah
University of Science and Technology (KNUST).
Peters TJ, Waterman RH. 1982. In search of excellence: Lessons from America’s best run companies. New York, NY: Harper and Row.
Prabhu V, Robson R. 2000. Achieving service excellence. Managing Serv Qual. 10:307‒317.
Robson I. 2004. From process measurement to performance improvement. Bus Process Manag J. 10:510‒521.
Salaheldin SI. 2009. Critical success factors for TQM implementation and their impact on performance of SMEs. Int J Prod Perf Manag.
58:215‒237.
Sousa SD, Aspinall EM, Rodrigues G. 2006. Performance measures in English small and medium enterprises. Benchmarking Int J.
13:120‒134.
Swanson R. 1995. Human resource development: Performance is key. Human Resource Development Quarterly, Vol 6. San Francisco,
CA: Jossey Bass Publishers.
Vulink M. 2004. Technology transfer in the construction industry of Ghana [dissertation]. Eindhoven: Technische Universiteit
Eindhoven.
Yeboah A. 2007 Nov 7. Task force proposes ranking of contractors. Daily Graphic. Accra.
Zairi M. 1994. Benchmarking: the best tool for measuring competitiveness. Benchmarking Qual Manag Tech. 1:11‒24.
