


CI 10,3

Contractor selection innovation: examination of two decades' published research

Gary Holt
Department of Civil and Building Engineering, Loughborough University, Loughborough, UK
Purpose - The problem of selecting a contractor has attracted significant academic research endeavour over the last two decades. The principal aim here is to examine that research via published academic outputs for the period circa 1990-2009.
Design/methodology/approach - A sample of published contractor selection (CSn) research is critically appraised. Aspects highlighted include: stated aims and research justification; methodological approaches employed; research tools used; and products of CSn research.
Findings - Main research foci are observed as: modelling the CSn process; studying selection criteria; and interrogation of existing CSn systems. Foci justifiers are linked mainly to the importance and difficulties of CSn decision making. Deterministic modelling of CSn is the favoured methodological approach, followed by documentary synthesis, then questionnaire surveys. Preferred research tools are found to be system interrogation, rank order analysis and Likert scale/importance indices, with hypothesis testing and other methods used less so. Almost two-thirds of research products are CSn models; derived or proffered processes, and knowledge relating to CSn criteria, between them represent approximately the remaining third of output.
Research limitations/implications - It is suggested that many of the CSn models exhibit as much complexity as the original problem they sought to resolve, while the reliability and longevity of suggested "cocktails" of CSn criteria (in practice) might be questioned. A call is made for future research products to more closely consider end-user impact and potential for take-up by industry. An empirical follow-on study to assess (inter alia) practitioner use and value of CSn research is proposed.
Practical implications - The paper signals a possible need for greater industrial engagement in the research domain.
Originality/value - The findings are novel to this paper.
Keywords Procurement, Contractor workers, Tendering, Subcontracting, Clients
Paper type Literature review

Received 16 April 2009 Accepted 15 October 2009

Construction Innovation Vol. 10 No. 3, 2010 pp. 304-328 © Emerald Group Publishing Limited 1471-4175 DOI 10.1108/14714171011060097

Introduction
The construction industry is recognised for many distinct characteristics and, in a procurement context, its separation of design from production and resulting transient fragmentation arguably reigns supreme (Langford and Male, 2001, Chapter 2; Walker, 2002, Chapter 1; Loosemore et al., 2003, p. 2; Cooke and Williams, 2004, p. 2). As a result, selecting the most appropriate contractor to undertake the production function[1] has for decades been the subject of research, debate and published guidance.
The combined constructive comments of the anonymous reviewers are acknowledged, for helping develop and improve the paper.

Contractor selection (CSn) normally comprises choosing from a theoretically infinite group of contractors (Holt, 1996), whose attributes (El-Sawalhi et al., 2008) are assessed, often on the basis of past performance (Kadefors et al., 2007; Holt et al., 1994a), against client objectives or criteria (Russell and Skibniewski, 1988; Wong and Holt, 2003). Effectively, this is the judging of contractors' capabilities (Ng and Skitmore, 1999; Marzouk, 2008); but that judgment, often performed as a snapshot in time, must be considered against the potential for these capabilities to significantly change over short periods. In practice, CSn procedures vary significantly (Palaneeswaran and Kumaraswamy, 2001; Egemen and Mohamed, 2005) and are typified by: non-linearity (Lam et al., 2005); uncertainty (Watt et al., 2009); subjectivity (Holt et al., 1993; Watt et al., 2009); and volatility/competitiveness (Fong and Choi, 2000). These aspects will be expanded upon as this study unfolds. Given the importance of CSn to project performance (Holt, 1998), numerous construction industry reports have focused on it (Cooke and Williams, 2004, p. 1.2). For example, The Simon Committee (1944) strongly advocated selective rather than open tendering (Masterman, 1996, p. 8), while Emmerson (1962) highlighted the issue of construction design being removed from production, as referred to above. Shortly after Emmerson, Banwell (1964) reiterated aspects of The Simon Committee's report, advocating what were then non-conventional procurement methods such as negotiation; a follow-on report from the National Economic Development Office also supported greater use of serial tendering and negotiated contracts (NEDO, 1967). In the title of its report, The Tavistock Institute (Tavistock, 1966) reflected that the building industry was characterised by "interdependence and uncertainty", concluding somewhat ironically (given said calls for increased negotiation) that the industry thrived on the latter (Woodward, 1997).
The oft-cited Latham (1993, 1994) reports were quite influential in procurement terms, raising issues of mistrust between stakeholders (Cooke and Williams, 2004, p. 4), whilst, among other aspects, promoting greater use of selection based on quality in addition to price (i.e. value). Egan (1998) subsequently encouraged longer-term procurement relationships in favour of selective methods, as a mechanism for achieving quality and efficiency improvement; while a second report (Egan, 2003) arguably rooted the term "integrated supply chains" into procurement dialogue. Aside from government, the Construction Industry Institute offered explicit CSn guidance (CII, 1988), along with direction on choosing contractors when striving to improve construction safety (CII, 2003); while the issue of long-term (contractor/client) relationships surfaced yet again when the European Construction Institute highlighted value enhancement via strategic alliances (ECI, 1999). Given the adoption of information technology within construction, it was inevitable that problems (and their avoidance) of CSn and e-tendering have also been commented upon (CRC, 2006). Since the middle of the last century, a shift is witnessed from promoting broadest CSn competition towards integrated supply chain mechanisms that encourage mutual benefit, organisational learning, partnerships, profitability and quality while embracing information technology (Li et al., 2000; Love et al., 2002b; Walker and Johannes, 2003; Akintoye and Main, 2007). The emphasis within the literature has moved from lowest cost to value (CIRIA, 1998, 2009) (while the author's anecdotal evidence suggests that, among smaller contractors at least, cost often remains the ultimate selection criterion). The credit crunch has certainly placed strain on the literature's call for value, with price competition presently fierce[2] (RICS, 2008, p. 3), resulting from unprecedented negative forces acting on international construction activity (The Times, 2008; CECA, 2008; CIOB, 2009; ASCE, 2009). The issue of CSn, and academic research in the field for that matter, has been an evolving one over the last two decades; moulded by reaction to changes in the procurement environment and, in the case of academic CSn research, to advancements in the science of methodology. Regarding the latter, developing techniques such as neural networks, fuzzy decision making, and web-based technologies have witnessed application among a range of construction management problems (Graham et al., 2006; de Vries and Steins, 2008; Han et al., 2008, respectively). Somewhat expectedly therefore, examples of their use in CSn research are equally easy to find; for example, see Albino and Garavelli (1998), Singh and Tiong (2005) and Palaneeswaran and Kumaraswamy (2005), respectively.

Aim and objectives of this study
This backcloth of evolvement brings matters conveniently to the focus of this study, the aim of which was to critically appraise academic research within the field of construction CSn, published over a time window of circa 20 years prior to 2009. In so doing, such review and concomitant synthesis sought accord with the contention of Fellows and Liu (2003, p. 64) that a literature review must provide "[. . .] a summary of the state-of-the-art". Objectives related to this aim included particular consideration of:

(1) the foci of that research;
(2) stated research drivers;
(3) favoured methodological approaches;
(4) research tools employed; and
(5) the products of research effort.

That is, the study aspired to capture a temporal transit through the observed window of literature, with particular clusters of synthesis emanating from focus on five areas of the subject defined by objectives (1)-(5) above.
Based on observation of outcomes resulting from satisfying these objectives, the contribution of published academic research is considered, and future research direction intimated.

The literature review: rationale and method
The literature review is discussed in terms of its:
• rationale; and
• method.

Rationale
Levin (2007, p. 75) asserted that to write a good literature review requires one to ask: "What am I reviewing the literature for?" In the present instance, some answers to this question mirror uses of a review included in O'Leary's (2004, p. 78) list, these being to inform, argue rationale, and develop questions. Notwithstanding that some regard a review as simply "an essay" (Davies, 2007, p. 39), for this study the answer to Levin connects with two particular definitions:

(1) that a review can be "an expert's general review of current literature on a particular topic" (Walliman, 2006, p. 182); and
(2) that it should overview "[. . .] who the key writers are, [. . .] prevailing theories and hypotheses [. . .] what questions are being asked, [. . .] what methods and methodologies are appropriate [. . .]" and show relationships "[. . .] so that key themes emerge" (Emerald, 2009).

Literature reviews have frequently been used to good effect in construction management research (CMR). For example, Sidwell and Budiawan (2001) undertook a review to highlight problems with competitive tendering in relation to contractor-led innovation; Wong et al. (2005) reviewed the literature to investigate intelligent building research and discover three research foci as a basis for summarising future research direction; Donohoe (2005) reviewed literature and case law to consider a (then) recent appeal case and its implications for building surveyors; and constraints on information communication technology were identified through the literature by Peansupap and Walker (2006). More recently, common measures for improving constructability have been studied via review (Wong et al., 2007). This is but an indicative sample.

Method
The academic literature upon which this study focussed was identified via online search among selected construction management journals and other research databases (see later). Additional kinds of literature, such as trade journals, comments, magazine articles and theses[3], do exist and are not excluded herein; but academic journal papers were afforded emphasis, because they represent "the most important wealth of literature" available (Fellows and Liu, 2003, p. 62). Further, journal papers are (should be) academically robust in that most are subject to blind review; a process stated as "key to scientific progress" (Smith, 1999, p. 9).
Search keywords were limited to "contractor selection", intending only to identify research directly relating to CSn or, to consider this conversely, to avoid contractor research in a broader sense, such as that relating to: performance measurement per se (Luu et al., 2008); specific contractor assessment such as safety performance (Hinze and Gambatese, 2003); or contractor metrics such as financial stability (Chan et al., 2005). While it is difficult to definitively justify which journals should, or should not, be included in this kind of (targeted) literature search, those chosen (Table I) were considered to represent contemporary eminence in the field of construction management, and hence able to yield a representative sample (note journal acronyms in Table I, which are used hereafter). By way of supporting this contention, several ASCE publications, BRI, CME, CI, ECAM, and some in the "others" category of the table are all included in the Australian Business Deans Council list of top journals in the field (ABDC, 2009); while a list of journals most respected by UK academics, based on an analysis by Hughes (2009) relating to papers submitted for the 2007 UK Research Assessment Exercise (RAE, 2008), ranked virtually all journals targeted in the present study highly. Notwithstanding this emphasis on journal papers, some conference papers in the field were also identified via the Royal Institution of Chartered Surveyors conference series (RICS, 2009); the Association of Researchers in Construction Management database (ARCOM, 2009); and the American Society of Civil Engineers database (ASCE, 2008).



Table I. Results of searches

Journal/source | No. papers returned from search(a) | No. papers relevant to study(b)
ASCE Database | 86 | 24
Building and Environment (ISSN 0360-1323) | 133 | 13
Building Research and Information (BRI; ISSN 0961-3218) | 35 | 6
Construction Management and Economics (CME; ISSN 0144-6193) | 121 | 14
Construction Innovation (CI; ISSN 1471-4175) | 8 | 2
Engineering, Construction and Architectural Management (ECAM; ISSN 0969-9988) | 164 | 10
International Journal of Project Management (IJPM; ISSN 0263-7863) | 312 | 12
Other, non-specified journals | - | 12
Total | - | 93

Notes: (a) Using search terms "Contractor Selection"; (b) subjectively chosen (refer to narrative)

The decision regarding a paper's relevance to this study was a subjective one, based entirely on the author's experiential judgement. This decisional characteristic of literature reviews has previously been recognised. For example, Birmingham (2000) suggested that the problem of what to include or otherwise is an "[. . .] evaluative decision based on the researcher's own judgement of adequacy"; while Fink (2005, p. 5) confirmed its being a subjective task that might use screening criteria for evaluating a paper's coverage of a subject, along with its scientific quality. Similarly, the categorisation of papers (regarding, for example, subject focus and methodological standpoint) was a result of subjective evaluation, and harmonized with Cooper's (1998) suggestion that "[. . .] literature reviews can focus on research, research methods, theories, applications [. . .] to identify the central issues in a field", while also noting that identifying related work [assists in helping] "to rationalise the work and identify a focus" (Hart, 1998). Table I summarises the search sources and respective numbers of works identified/included from each. The sample does not purport to be exhaustive or a definitive listing in the field, taking into account the above issue of inclusion and the fact that new research is always working through to publication. However, given the scope, depth, and targeted nature of the review, the sample was considered appropriate to meaningful analysis.

Analysis of the literature
The analysis is focussed under each of the headings:
• CSn research focus and justification (to observe favoured aspects of study and their motives for attention);
• Methodological approaches to CSn research (to consider methodological predilection);

• Research tools used to solve CSn problems (to identify solutions chosen); and
• Products of research (to highlight the results of this combined effort).

Accordingly, a discussion ensues; salient observations are highlighted; and suggestions for future research direction in the field are proffered.

CSn research focus and justification
The majority of CSn research has focussed on the selection problem as a process, with justification centred on decisional aspects relating to (inter alia) its issues of difficulty, uncertainty, and subjectivity announced in the introduction. Coupled with this, the importance of the selection decision is often mentioned. Differing CSn models have been proffered as solutions thereto. On justification, Hatush and Skitmore (1998) highlighted the important link between CSn and project success in terms of achieving schedule, cost and quality; such link additionally highlighted by many others (Russell and Jaselskis, 1992). Kumaraswamy (1996) stated that "The right choice of contractor is crucial to success"; further examples confirming that CSn "[. . .] is important to ensure the success of projects" (Bubshait and Al-Gobali, 1996); "[. . .] is a decisive event for project success" (Alarcon and Mourgues, 2002); that "[. . .] the project's success level is largely dependent on it" (Li et al., 2005); and that (selecting a capable bidder) is crucial to "ensure the success of construction projects" (Li et al., 2007). Fong and Choi (2000), meanwhile, identified that CSn is one of the most important tasks facing a client who wishes to achieve successful project outcomes. As Kumaraswamy et al. (2000) summed it up, "[. . .] smarter selection does in fact matter". The issue of importance is closely linked to the decisional difficulties of CSn and hence addressing these difficulties has been the ambition of many researchers. Hatush and Skitmore (1997) confirmed, "One of the most difficult decisions taken by the client [. . .]
is selecting the contractor"; while Albino and Garavelli (1998) referred to the complexity of the decision in terms of "analysis and judgement". El-Sawalhi et al. (2007) identified with the need to "provide uncertain, incomplete, or imprecise assessments due to lack of information", in harmony with Watt et al. (2009), who stated that the decision is "[. . .] plagued with uncertainties". Similar rhetoric includes "[. . .] the task is challenging" (Fong and Choi, 2000); and its description as "[. . .] a complicated two-group non-linear classification problem" by Lam et al. (2005). Elsewhere, Pongpeng and Liston (2003) recognised that, whether in public or private sector arenas, multiple decision makers are involved. In contrast, others have justified (and focused) their work differently. Khosrowshahi (1999), for instance, in developing a CSn model for the public sector, suggested that prequalification should be an "[. . .] important issue for contractors who seek to obtain work" and that they might therefore direct [more] attention to their qualifying attributes. The need to aggregate both quantitative and linguistic selection data was the justification of a model suggested by Sonmez et al. (2001); while Phillips et al. (2007) described the need to achieve best value in delivering savings identified in the Gershon (2004) review as the basis for their study. Juan et al. (2009) addressed the "[. . .] few efforts [previously] focused on" CSn in refurbishment projects; while Holt and Edwards (2005) justified their treatise by highlighting problems of selecting from a preponderance of "cowboy" firms in the domestic building sector. Some research has substantiated the targeting of a country-specific CSn setting or problem. Examples include Lai et al. (2004) who studied Chinese bid evaluation;
Ferguson et al. (1995), focussing on Dutch tendering practice; Kumaraswamy (1996), who presented a Hong Kong perspective; and observation of European contract award by Lambropoulos (2007) and Topcu (2004). Other, setting-specific justification includes the work of Yiu et al. (2002), which addressed the dearth of research aimed specifically at building and maintenance CSn; Aziz (2008), whose study related to highway pavement project contractors; a study aimed at apartment block maintenance contractors by Zavadskas and Vilutiene (2006); and Juan et al. (2009), whose focus was housing refurbishment contractors. Finally, regarding research focus, it was perhaps inevitable that the broader aspect of research into main contractor CSn would eventually be applied to subcontractors; and recent outputs confirm this. For example, see Kumaraswamy and Mathews (2000), Ng and Tang (2008), Mbachu (2008), Ng et al. (2008) and Arslan et al. (2008). By categorising each paper making up the sample (n = 93) into one principal research focus, the results shown in Table II were achieved. Two caveats are acknowledged regarding this categorisation:

(1) it was a subjective, evaluative process (as described in the introduction); and
(2) many papers spanned more than one research focus.

Nonetheless, the exercise does provide "thematic insight" (Emerald, 2009) that is elucidated within the Discussion later.

Methodological approaches to CSn research
Analysis of methodological approaches was based on the chosen categories of:
• statistical/deterministic modelling;
• literature/documentary analysis;
• surveys;
• other (non-deterministic) forms of statistical enquiry; and
• interviewing.

The sixth classification, designated "Other" (Figure 1), represented the note offered by Skitmore and Mills (1999), which did not fall into a methodological category.
Statistical/deterministic modelling generally sets out to mathematically model a process, such that the derived model can replicate a previous event as a means of predicting a future one (Fellows and Liu, 2003, p. 76); this being different from stochastic modelling, which draws on probability (Fellows and Liu, 2003). Although, as elsewhere highlighted, "[. . .] models whose solutions include chaotic dynamics are a link between deterministic and stochastic [types]" (Metcalfe, 1996, p. 208). In the present context, typical deterministic examples tend to comprise an algorithm that, given specific inputs, will yield contractor classification output and/or optimum choice. Statistical/deterministic modelling of CSn was the most favoured methodological approach among the sample (almost 40 per cent of papers), being represented by a range of selection models. For instance, Tam and Harris (1996) used a discriminant function to classify contractors into performance groups; Wong et al. (2003) used a similar function and logistic regression to achieve contractor classification; Lam et al. (2005) employed principal component analysis during prequalification decision making to "alleviate multicollinearity and largely reduce the dimensionality of the prequalification data set";

Table II. Analysis of research focus

Research focus | Description | Example papers | Number | %(a)
Selection model | Model embracing the contractor evaluation and selection process to yield optimum result | Pongpeng and Liston (2003) and Abdelrahman et al. (2008) | 39 | 42
Assessment criteria | Investigation of assessment criteria (e.g. for prequalification, project-specific evaluation, or tender evaluation) | Russell and Skibniewski (1988), Waara and Brochner (2006) and Holt et al. (1994b) | 16 | 17
Survey of practice | Survey of, for example, present selection or evaluation methods | Ferguson et al. (1995) and Palaneeswaran and Kumaraswamy (2001) | 13 | 14
Decision support | Model to embrace selection or a part thereof, explicitly stated as decision support or aid | Lam et al. (2000) and Shen et al. (2003) | 8 | 9
Contractor performance | Contractor evaluation directly related to past or predicted contractor performance | Ng et al. (2005) and Mbachu (2008) | 7 | 8
Opinion survey | Survey of opinion (for example, of clients, contractors or consultants) | Jennings and Holt (1998) and Singh and Tiong (2006) | 4 | 4
Case study | In-depth analysis of a past case or an existing/proposed system | Holt (1997) and Lai et al. (2004) | 3 | 3
Other | For any papers not fitting into the above foci | Skitmore and Mills (1999) | 3 | 3
Totals | | | 93 | 100

Note: (a) Percentage of sample, rounded


[Figure 1. Summary of methodological approaches employed (bar chart of number and percentage of papers): statistical/deterministic modelling; literature/documentary synthesis; postal survey; statistical enquiry; interviews; other (note).]

while Bendana et al. (2008) presented a fuzzy-logic-based selection method. Deterministic techniques that embrace CSn variables represented on different measurement scales are appropriate to the multivariate character of CSn input data. Literature/documentary synthesis is taken here to include examination of all kinds of existing (written) knowledge or data, as a means of conceptualising, developing theory, or presenting propositions (Davies, 2007, p. 3). Almost one-third of the sample employed this methodology, such as within the review of research on transaction costs related to CSn method (Lingard et al., 1998); the comparative overview of documented knowledge of practices in design/build (sic) CSn (Palaneeswaran and Kumaraswamy, 2000b); the benchmarking of public sector selection practices (Palaneeswaran and Kumaraswamy, 2000a); and an examination of the management and tender evaluation literature by Watt et al. (2009). Many texts expound the virtues of this interrogative research methodology; see, for example, Silverman (2001) and May (2001, Chapter 8). Within the broader field of CMR, questionnaires are favoured, having even been used to study the journals within which the results of these kinds of surveys are often published (Chau, 1997). But such persistent usage has been questioned of late. For instance, McCaffer (2008) asked, "How do you move from analyses of questionnaire returns to making a real change in industry's practices?" Surveys were used in 22 per cent of the papers making up the sample, such as to rate the importance of a list of 35 prequalification criteria (Ng et al., 1999); to study the perceived importance of factors affecting choice of contractors in the UK (Holt et al., 1994c); and to compare use of project-specific vis-à-vis value-specific tender selection criteria (Wong et al., 2000).
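The weighted-score, multi-attribute logic underlying many of the deterministic CSn models reviewed above can be illustrated with a minimal sketch. This is a simplified, hypothetical example: the criteria, weights and ratings are invented for illustration and are not drawn from any of the cited models, which employ considerably more sophisticated formulations (discriminant functions, fuzzy aggregation, and so on).

```python
# A minimal, hypothetical sketch of the weighted-score logic common to many
# deterministic CSn models. Criteria, weights and 0-10 ratings are invented
# for illustration; they are not taken from any cited study.

CRITERIA_WEIGHTS = {  # client-assigned importance weights (sum to 1.0)
    "past_performance": 0.35,
    "financial_stability": 0.25,
    "technical_capacity": 0.25,
    "tender_price": 0.15,
}

def utility_score(ratings):
    """Aggregate per-criterion ratings (0-10) into one weighted utility."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

def select_contractor(candidates):
    """Return the candidate name with the highest aggregate utility."""
    return max(candidates, key=lambda name: utility_score(candidates[name]))

candidates = {
    "Contractor A": {"past_performance": 8, "financial_stability": 6,
                     "technical_capacity": 7, "tender_price": 9},
    "Contractor B": {"past_performance": 6, "financial_stability": 9,
                     "technical_capacity": 6, "tender_price": 7},
}

best = select_contractor(candidates)  # "Contractor A" in this invented data
```

In practice, such models must also normalise criteria measured on different scales before aggregation, which is precisely the multivariate difficulty noted above.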

Other, non-deterministic forms of statistical enquiry, such as correlation analysis (Colman and Pulford, 2006), hypothesis testing (Colman and Pulford, 2006, sections 7-8), and ordinal or rank analysis (Meddis, 1984), were used in six studies (Chinyio et al., 1998; Holt, 1998). Interviewing and/or case studies were employed among three studies, including Hatush and Skitmore (1997), who developed a questionnaire for structured interviews to investigate project success factors; and Palaneeswaran and Kumaraswamy (2001), who combined correspondence with interviews to seek international expert opinion on CSn.

Research tools used to solve CSn problems
Specific research tools used among the sample were examined (in respect of 16 classifications), to look in more detail at the mechanisms by which research problems were resolved and the popularity of these. These classifications and respective usage were ordered by rank, based on the number of times each was observed among the sample (see column 1, Table III). The number of usage observations (column 3; total 113) is greater than the sample (n = 93) because some researchers used more than one classification of tool. To offer some examples, Assaf et al. (1996) used rank order analysis with regression analysis; while Chau et al. (2003) combined Likert scale/importance ranking with multi-attribute utility theory. Often, the combining of several methods in this way underpins a triangulated CMR methodology (Love et al., 2002a; Edwards and Holt, 2010). Here, system interrogation was defined as any method of inquiry that observed qualitative data and/or observation of existing systems, and was the favoured research tool (40 observations). Examples include Holt et al.'s (1995b) review of UK selection practice; a cross-sectional review of CSn methodologies embedded in the literature (Holt, 1998); and the study of "[. . .]
current literature and actual experience" refined through a review of public-sector owners' selection procedures by Al-Reshaid and Kartam (2005). Forms of rank order analysis were the second-favourite tool. Zavadskas and Vilutiene (2006) presented values of qualitative performance criteria obtained from a survey and, in part, rank analysed these using a quadrant analysis matrix. Marzouk (2008) applied a superiority and inferiority ranking model to CSn, while several other researches demonstrated rank analysis of one form or another (Hatush and Skitmore, 1997; Wong et al., 2000; Chau et al., 2003). The latter two studies (as did several others) employed Likert/importance ranks as a means of abstracting subjective personal opinion (Likert, 1932; Oppenheim, 2000). This was the third favoured research tool and (given that such analysis is often used to analyse survey returns) mirrors the earlier identified popularity of questionnaires. Other tools used include: multi-attribute analysis (Assaf and Jannadi, 1994) and multi-attribute utility theory (Holt et al., 1994d, 1995a) (especially among earlier studies in the field), along with several interpretations of fuzzy theory (Singh and Tiong, 2005). Artificial neural networks (Lam et al., 2000), multivariate discriminant analysis (Wong and Holt, 2003), analytical hierarchy process (Abdelrahman et al., 2008) and cluster analysis (Chinyio et al., 1998) were all observed four times each. Slightly lesser use (three observations) was noted for regression analysis (Tam and Harris, 1996), variance analysis (Hatush and Skitmore, 1997), correlation analysis (Zavadskas and Vilutiene, 2006) and evidential or case-based reasoning (Juan, 2009).
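The Likert-scale/importance-index and rank-order analyses described above can be sketched as follows. The relative importance index (RII) formulation shown, the sum of ratings divided by the maximum scale point times the number of respondents, is one common convention in CMR surveys; the cited studies may define their indices differently, and the criteria and responses here are hypothetical.

```python
# A hypothetical sketch of Likert-scale importance analysis followed by rank
# ordering. RII = sum(ratings) / (max scale point * number of respondents) is
# one common formulation; criteria and ratings below are invented.

def rii(ratings, max_point=5):
    """Relative importance index for 1..max_point Likert ratings."""
    return sum(ratings) / (max_point * len(ratings))

# Invented ratings from five respondents per criterion
# (1 = unimportant ... 5 = extremely important).
responses = {
    "past_performance":    [5, 4, 5, 5, 4],
    "financial_stability": [4, 4, 5, 3, 4],
    "tender_price":        [3, 5, 4, 4, 3],
}

indices = {criterion: rii(r) for criterion, r in responses.items()}

# Rank-order the criteria from most to least important.
ranked = sorted(indices, key=indices.get, reverse=True)
```

On these invented data, `past_performance` ranks first (RII 0.92), ahead of `financial_stability` (0.80) and `tender_price` (0.76).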



Table III. Analysis of research tools

Research tool | Rank by use | No. of observations(a) | % of observations(b) | Example publications
System interrogation | 1 | 40 | 35.4 | Holt (1998) and Al-Reshaid and Kartam (2005)
Other rank analysis | 2 | 11 | 9.7 | Assaf et al. (1996) and Wong et al. (2001)
Likert/importance ranks | 3 | 10 | 8.8 | Wong et al. (2000) and Chau et al. (2003)
MAA | 4 | 6 | 5.3 | Assaf and Jannadi (1994) and Holt et al. (1994d)
MAUT | 4 | 6 | 5.3 | Holt et al. (1995a) and Holt et al. (1994e)
Fuzzy theory | 6 | 5 | 4.4 | Bendana et al. (2008) and Juan et al. (2009)
ANN | 7 | 4 | 3.5 | Khosrowshahi (1999) and Lam et al. (2000)
MDA | 7 | 4 | 3.5 | Tam and Harris (1996) and Ng et al. (1999)
AHP | 7 | 4 | 3.5 | Mustafa and Ryan (1990) and Mahdi et al. (2002)
Cluster analysis | 7 | 4 | 3.5 | Holt (1997) and Chinyio et al. (1998)
Regression analysis | 11 | 3 | 2.7 | Wong (2004) and Aziz (2008)
Variance analysis | 11 | 3 | 2.7 | Hatush and Skitmore (1997) and Wong et al. (2003)
Correlation analysis | 13 | 2 | 1.8 | Zavadskas and Vilutiene (2006)
Evidential/CB reasoning | 13 | 2 | 1.8 | Juan (2009)
Hypothesis testing | 13 | 2 | 1.8 | Ng and Tang (2008)
Other(c) | 16 | 1 (×7 instances) | 6.2 | Darvish et al. (2008) and Ip et al. (2004)
Totals | | 113 | 100 |

Notes: (a) Number of observations differs from sample size due to some papers utilising more than one research tool; (b) minor error in total due to rounding; (c) other: one observed instance each of analytical network process, principal component analysis, factor analysis, combined matrix determination, genetic neural network, graph theory and matrix methods, branch and bound algorithm. Key: MAA, multi-attribute analysis; MAUT, multi-attribute utility theory; ANN, artificial neural network; MDA, multivariate discriminant analysis; AHP, analytical hierarchy process; CB, case-based (reasoning)

Seven papers used novel tools in this setting, viz.: analytic network process (Cheng and Li, 2004), principal component analysis (Lam et al., 2005), factor analysis (Ng and Tang, 2008), combined radix determination (Wang et al., 2007), genetic-neural network (El-Sawalhi et al., 2007), graph theory and matrix methods (Darvish et al., 2008), and branch and bound algorithm (Ip et al., 2004).

Products of research
Figure 2 shows that the research products of the sample may be broadly grouped into four categories, the largest of which (63 per cent) comprised CSn models of one form or another. Of products presented, 17 per cent derived, or proffered, processes ("derived" representing or conceptualising an existing process; "proffered" suggesting an improvement or variation of existing process). Suggested groupings of selection criteria for given CSn scenarios, and/or developed importance metrics attached to these, evolved as a result of 15 per cent of the studies undertaken; with the remaining four per cent of studies comprising the miscellaneous grouping, whose products did not fit with these three former classifications of output. By considering authors' own descriptions of the 59 CSn models contained within the sample (Figure 2), emphases of intended use were determined:

• 26 models were described as intended for contractor (or sub-contractor, or tender) selection (Jaselskis and Russell, 1992; Fong and Choi, 2000; Kashiwagi and Byfield, 2002; Cheng and Li, 2004; Arslan et al., 2008);
• eight were related to assessing or predicting contractor (or sub-contractor) performance (Holt et al., 1994a; Tam and Harris, 1996; Yasamis et al., 2002; El-Sawalhi et al., 2008);
• seven were related to prequalification (Russell and Skibniewski, 1990; Russell et al., 1990; Assaf and Jannadi, 1994; Lam et al., 2005; Palaneeswaran and Kumaraswamy, 2005; El-Sawalhi et al., 2007);



Figure 2. Broad classifications of research products


- six addressed tender or bid evaluation (Mustafa and Ryan, 1990; Wong et al., 2001; Pongpeng and Liston, 2003; Watt et al., 2009); and
- four were designed for contractor/client classification (Holt, 1996; Chinyio et al., 1998; Wong and Holt, 2003; Wong et al., 2003).


The remainder had specific intended applications, such as contractor rating (Albino and Garavelli, 1998), contractor ranking (Darvish et al., 2008), or assessing competitiveness (Shen et al., 2003, 2006). See also Lo et al. (1998) in this respect.

Process products have contributed to a range of issues, such as better understanding of: practices in the public sector (Palaneeswaran and Kumaraswamy, 2000a); geographically specific CSn tradition (Kumaraswamy, 1996); procurement-specific synthesis (Gransberg and Molenaar, 2003); and contrasts between CSn stakeholders (Egemen and Mohamed, 2005). Equal diversity is evident in criteria products, including guidance on prequalification criteria (Hatush and Skitmore, 1997); with respect to cost-benefit (Ng and Skitmore, 2001); and reporting practitioner preference (Singh and Tiong, 2006).

Discussion
Figure 3 shows graphically the above findings of the review and may be cross-referenced with the following discussion.

CSn importance
Selecting the most appropriate contractor for a given selection setting is of exceptional importance, largely because of the positive association between employing a good contractor and achieving project aspirations. (Or its antipode: the association between employing a poor contractor and the likelihood of
Figure 3. Model of CSn research
[Figure content: interrelationships between observed research foci (selection models; assessment criteria; survey of practice; decision support; contractor performance; opinion survey; case study; other); observed research justification (importance of selection decision; difficulties of selection decision; other selection-setting-specific challenges); observed methodological design (deterministic models; literature and documentary synthesis; postal surveys; statistical enquiry; interviews); observed research tools (system interrogation; rank analysis; Likert/importance data; MAA; MAUT; fuzzy theory; ANN; MDA; AHP; cluster analysis; others) of increasing complexity; observed research products (process: existing - mapping, formalisation, understanding; proffered - improvements, variations on existing; criteria: relevance to selection setting or scenario; importance ranks and indices; knowledge advancement in specific CSn contexts); and questions for future research products (closer dialogue with industry and practitioners? more use of face-to-face research? greater end-user impact and take-up? web-based technologies as transitional interface between academic products and industry?).]

unsatisfactory project outturn). Such interrelationship, between an event that (normally) occurs at the outset of a project and one representing its conclusion, demonstrates CSn's relevance in the overall context of construction product procurement and helps explain why importance is a significant research justifier in this field.

Uniqueness of the CSn environment
The uniqueness of each construction product is matched by the (resulting) individuality of its CSn decisional parameters (such as the CSn criteria to be applied). Accordingly, appropriate CSn criteria and their degrees of importance will fluctuate, both as a function of objective factors related to the project (such as time, quality, and value targets) and of more subjective counterparts, such as decision makers' predilection, disposition and perceived utility of (those) criteria. To this uniqueness may be added the dynamic state within which contractors operate and within which selection decisions are made, impacted for example by inconstant macroeconomic, political, and commercial ambience. These characteristics, intuitively, explain the focus of much research observed in this study being the CSn process itself and, equally, why importance of that process is an oft-cited principal research justifier.

Research foci and justifiers
The uniqueness of construction product procurement, however, also demonstrates why some research has focussed on (and been justified by) distinctive CSn settings, set apart for example by type of project, form of contractual arrangement, geographical characteristics, or some other peculiarity. This also helps illuminate the decision by several researchers to study selection criteria for given selection scenarios: to seek, for example, their applicability and concomitant importance levels, which, even for the same criteria, will differ between these varying selection environments.
A caveat here, however, relates to the general applicability of any CSn research outputs across differing selection settings and the dynamic state referred to above.

Methodological considerations
The methodological approaches observed lean toward deterministic modelling, reflecting the leading desire to model the selection process per se; that is, to determine optimal output (contractor identification) given numerous multivariate decisional input parameters. The predominance of literature and other documentary analysis underlining methodological design (the basis of almost one-third of the sample) corresponds with the third most popular research focus, of surveying existing practice; the former desire can often, in part, be informed by results of the latter approach. Postal surveys were the research tool of choice among one-quarter of the sample, whereas interviews represented only 4 per cent. It seems that, in line with earlier calls for [. . .] closer dialogue with practitioners and more demonstrations of application [. . .] within CMR generally (McCaffer, 2008), face-to-face research tools appear under-utilised in this branch of the field also. This is somewhat surprising in that, given a desire to produce optimum selection models that by definition are intended for practitioner use, one might have expected to witness more discourse with end-users if these particular products of research are to be of maximum utility to, and ultimately adopted by, them.
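To make the deterministic modelling aspiration concrete, the sketch below reduces it to its simplest additive form: weighted scoring of candidates against decision criteria, with the highest scorer selected. This is purely illustrative and drawn from no specific reviewed model; the criterion names, weights and ratings are hypothetical.

```python
# A minimal, illustrative sketch of deterministic multi-attribute selection:
# each candidate contractor is rated against weighted criteria and the
# candidate with the highest weighted score is identified.
# All criteria, weights and ratings below are hypothetical examples.

def select_contractor(candidates, weights):
    """Return the candidate name whose weighted criteria score is highest.

    candidates: dict of contractor name -> dict of criterion -> rating (0-10)
    weights:    dict of criterion -> weight (weights sum to 1.0)
    """
    def score(ratings):
        # Additive weighted score across all criteria.
        return sum(weights[c] * ratings[c] for c in weights)
    return max(candidates, key=lambda name: score(candidates[name]))

weights = {"financial standing": 0.4, "past performance": 0.35, "resources": 0.25}
candidates = {
    "Contractor A": {"financial standing": 7, "past performance": 9, "resources": 6},
    "Contractor B": {"financial standing": 8, "past performance": 6, "resources": 7},
}
print(select_contractor(candidates, weights))  # Contractor A (7.45 vs 7.05)
```

In practice, as the review notes, published models layer far more sophistication (utility functions, fuzzy membership, neural networks) onto this basic multivariate input-to-decision mapping.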




The favoured research tool was system interrogation of different kinds (35 per cent of research tool observations on the sample), relating to the above recurring research focus of surveying existing practice. The use of rank analysis (10 per cent of observations) relates mainly to those studies that investigated, and subsequently positioned in various ways, selection criteria. The predominant remaining research tools were statistical or algorithmic in character, being applied principally to said modelling aspirations and to other forms of interrogating data accrued by way of the 25 per cent of studies employing questionnaire and interview surveys.

General observations
One general observation is that earlier studies tended toward simpler modelling techniques, predominantly of an additive multi-attribute nature and sometimes embracing utility theory. Later models have employed newer research tools such as artificial neural networks and fuzzy theory. A few recent studies, meanwhile, have demonstrated use of tools such as combined matrix determination and graph theory and matrix methods. Referring to the earlier statement concerning end-user take-up, then given the complexity of some models, one might speculate as to how acceptable these are to practitioners unless presented in a user-friendly (for instance, ready-to-use computer package or web-based) form (Ng et al., 2003). That is, algorithmic solutions alone require some form of transitional interface by which non-academic users can employ them in practice. Of note on this issue, White (2005) suggested that, with respect to tendering procedures in the public sector, an environment exists that [. . .] mandates procedures which make the implementation of superior methods unlikely; so one might reasonably ask, how many of these new models are used in public sector practice?
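The Likert-scale importance indices and rank analysis referred to above can be illustrated with a minimal sketch. A relative importance index of the form "sum of ratings divided by (maximum scale point times number of respondents)" is one common formulation in this literature; the criterion names and responses below are invented for illustration only.

```python
# An illustrative sketch of Likert-scale importance indexing and rank
# ordering of CSn criteria. Criteria and respondent ratings are hypothetical.

def importance_index(ratings, scale_max=5):
    # Relative importance index: sum of ratings / (scale maximum x respondents).
    return sum(ratings) / (scale_max * len(ratings))

# Hypothetical 1-5 Likert responses from five respondents per criterion.
responses = {
    "past performance": [5, 4, 5, 4, 5],
    "financial standing": [4, 4, 3, 5, 4],
    "health and safety record": [3, 4, 3, 3, 4],
}

indices = {c: importance_index(r) for c, r in responses.items()}
ranked = sorted(indices, key=indices.get, reverse=True)
for rank, criterion in enumerate(ranked, start=1):
    print(rank, criterion, round(indices[criterion], 2))
# first line printed: 1 past performance 0.92
```

Such indices supply the "importance ranks" attached to criteria groupings, though, as discussed above, their stability across differing selection settings remains open to question.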
The issue of industry take-up, or impact value, of CSn research products is perhaps best answered definitively by survey of (dialogue with) practitioners, and points to a future research direction as a follow-on to the present study. Other research products of the sample include formulation of existing, and proffering of proposed, CSn systems (18 per cent of observations), along with knowledge of selection criteria (15 per cent). If the dialogue with practitioners mentioned above is performed, it would be valuable also to assess end-user adoption of these suggested "cocktails" of criteria, to identify the degree of linkage between what academia suggests should impact contractor assessment decisions and the extent to which practitioners are actually aware of, or employ, such knowledge in practice.

Conclusion
Much academic research has targeted CSn over the last two decades. The focus of this has predominantly been the decisional processes involved in CSn and the decisional criteria applied thereto, with principal research drivers often cited as being to optimally model that complex decisional process, linked to the importance of the decisional task. Nonetheless, where equally complex statistical or algorithmic CSn solutions evolve from that research (as often observed among the sample), then, paradoxically, these solutions may simply be replacing one demanding state with another. Similarly, one may question the reliability or relevance of "cocktails" of optimal selection criteria, given the transient and dynamic environments in which they are applied. That is, while a set of criteria might be optimal in one setting, will this be the

case in a different setting, or a similar setting at a different point in time? With the benefit of hindsight (i.e. since CSn research began in earnest), there is justification in asking whether such recommended criteria groupings can make the transition from their academic evolvement to sustained and reliable application in practice. Only dialogue with end-users could determine the answer.

Statistical and/or deterministic modelling was the most prominent methodological approach observed, with earlier models tending towards (arguably simpler) multi-attribute analysis, while later models have employed (arguably more complex) deterministic techniques such as artificial neural networks and fuzzy logic. The above comments regarding model complexity and relevance similarly apply here too. Specific research tools employed within CSn methodological research frameworks are equally varied. System interrogation, primarily as a function of qualitative data analysis, has frequently been used. This predominantly subjective, interpretive tool may be welcomed by those within the broader field of CMR who espouse a shift from (more mechanistic) survey-based interrogative approaches, while simultaneously encouraging discourse between researchers and practitioners. Accrual of Likert scale data via questionnaire surveys, and analysis of (often resulting) interval data using rank analysis methods, were also popular, but might be viewed as more convenient to the researcher than relevant to the research problem, in light of the above comments.

Given said complexity of some evolving CSn models and an arguable under-utilisation of dialogue-based research methods in this subject domain, a call is made for future CSn research to more enthusiastically employ the latter techniques, as a means of:

(1) Assessing the take-up of CSn research products by practitioners. For instance, what products actually filter through to practice?
And which aspects of research output can actually demonstrate real end-user impact? Accordingly, the answers here would better inform CSn research design if industry impact is the intention of research.

(2) Identifying exemplars and/or barriers associated with (1). Such practical knowledge could then be fed back to the CSn research community, informing methodological designs with a view to maximising research value.

(3) Helping to steer future CSn research direction in the context of maximising its end-user impact and potential for industrial take-up.

In sum, CSn research should henceforth be targeted at satisfying, and perhaps its success be measured more by, the latter criterion (3). These propositions represent a natural extension of this review of the subject and will form the basis of a follow-on, empirical study in the field.
Notes
1. Here, the term "production" is used loosely; procurement variants embracing contractor design, for example, are not overlooked.
2. Asserted partly on anecdotal evidence from the author's own network of construction professionals.
3. Many research theses have addressed the CSn issue but are not considered here, because much of that research ultimately filters down into published papers. A searchable electronic index of theses is, however, available (ITT, 2009).




References
ABDC (2009), Australian Business Deans Council Journal Ratings List, Australian Business Deans Council, Sydney, available at:,177,1 (accessed August 2009).
Abdelrahman, M., Zayed, T. and Elyamany, A. (2008), Best-value model based on project specific characteristics, Journal of Construction Engineering and Management, Vol. 134 No. 3, pp. 179-88.
Akintoye, A. and Main, J. (2007), Collaborative relationships in construction: the UK contractors' perception, Engineering, Construction and Architectural Management, Vol. 14 No. 6, pp. 597-617.
Alarcon, L.F. and Mourgues, C. (2002), Performance modelling for contractor selection, Journal of Management in Engineering, Vol. 18 No. 2, pp. 52-60.
Albino, V. and Garavelli, C. (1998), A neural network application to subcontractor rating in construction firms, The International Journal of Project Management, Vol. 16 No. 1, pp. 9-14.
Al-Reshaid, K. and Kartam, N. (2005), Design-build pre-qualification and tendering approach for public projects, The International Journal of Project Management, Vol. 23, pp. 309-20.
ARCOM (2009), The Association of Researchers in Construction Management Database, The Association of Researchers in Construction Management, Nottingham, available at: (accessed February 2009).
Arslan, G., Kivrak, S., Birgonul, M.T. and Dikmen, I. (2008), Improving sub-contractor selection process in construction projects: web-based sub-contractor evaluation system (WEBSES), Automation in Construction, Vol. 17, pp. 480-8.
ASCE (2008), ASCE Publications. Research Databases, American Society of Civil Engineers, Reston, VA, available at: (accessed November 2008).
ASCE (2009), Civil Engineers Present Guidelines for Economic Stimulus Infrastructure Investment, The American Society of Civil Engineers, Reston, VA, available at: www. (accessed February 2009).
Assaf, S. and Jannadi, O. (1994), A multi-criterion decision-making model for contractor prequalification selection, Building Research and Information, Vol. 22 No. 6, pp. 332-5.
Assaf, S.A., Al-Hammad, A. and Ubaid, A. (1996), Factors affecting construction contractors' performance, Building Research and Information, Vol. 24 No. 3, pp. 159-63.
Aziz, A.M.A. (2008), Minimum performance bounds for evaluating contractors' performance during construction of highway pavement projects, Construction Management and Economics, Vol. 26, pp. 507-29.
Banwell, H. (1964), The placing and management of contracts for building and civil engineering work, Report chaired by Sir H. Banwell, Ministry of Public Buildings and Works, HMSO, London.
Bendaña, R., del Caño, A. and de la Cruz, M.P. (2008), Contractor selection: fuzzy control approach, Canadian Journal of Civil Engineering, Vol. 35 No. 5, pp. 473-86.
Birmingham, P. (2000), Reviewing the literature, in Wilkinson, D. (Ed.), The Researcher's Toolkit: The Complete Guide to Practitioner Research, Routledge, London.
Bubshait, A.A. and Al-Gobali, K.H. (1996), Contractor prequalification in Saudi Arabia, Journal of Management in Engineering, Vol. 12 No. 2, pp. 50-4.
CECA (2008), CECA surveys show contractors hit hard times, CECA Communicates. Economy Special, Civil Engineering Contractors Association, London, available at: NewsDetail.aspx?NewsID188&NewsTypeID1 (accessed February 2009).

Chan, J.K.W., Tam, C.M. and Cheung, R. (2005), Monitoring financial health of contractors at the aftermath of the Asian economic turmoil: a case study in Hong Kong, Construction Management and Economics, Vol. 23 No. 5, pp. 451-8.
Chau, C.K., Sing, W.L. and Leung, T.M. (2003), An analysis on the HVAC maintenance contractors selection process, Building and Environment, Vol. 38, pp. 583-91.
Chau, K.W. (1997), The ranking of construction management journals, Construction Management and Economics, Vol. 15 No. 4, pp. 387-98.
Cheng, E.W.L. and Li, H. (2004), Contractor selection using the analytic network process, Construction Management and Economics, Vol. 22, pp. 1021-32.
Chinyio, E.A., Olomolaiye, P.O., Kometa, S.T. and Harris, F.C. (1998), A needs-based methodology for classifying construction clients and selecting contractors, Construction Management and Economics, Vol. 16, pp. 91-8.
CII (1988), Project materials management primer, Best Practice Guidance, The Construction Industry Institute, Austin, TX, Document Ref. RS7-2, available at: (accessed August 2009).
CII (2003), The owner's role in construction safety, The Construction Industry Institute, Document Ref. RR190-11, available at: more/rr190_11_more.cfm (accessed August 2009).
CIOB (2009), The impact of the global financial crisis on the construction industry, Policy Brief, 2009, The Chartered Institute of Building, Ascot, available at: ciobpolicies (accessed February 2009).
CIRIA (1998), Selecting contractors by value, CIRIA, London, Document Ref. SP150.
CIRIA (2009), Selecting contractors by value update (CON167), available at: service/current_projects/AM/ContentManagerNet/ContentDisplay.aspx?Sectioncurrent_projects&ContentID11900 (accessed April 2009).
Colman, A. and Pulford, B. (2006), A Crash Course in SPSS for Windows, 3rd ed., Blackwell, Oxford.
Cooke, B. and Williams, P. (2004), Construction Planning, Programming and Control, 2nd ed., Blackwell, Oxford.
Cooper, H.M. (1998), Synthesising Research: A Guide for Literature Reviews, Sage, New York, NY.
CRC (2006), E-tendering Security and Legal Issues, The Co-operative Research Centre for Construction Innovation, Brisbane.
Darvish, M., Yasaei, M. and Saeedi, A. (2008), Application of the graph theory and matrix methods to contractor ranking, The International Journal of Project Management, Vol. 27 No. 6, pp. 610-19.
Davies, M.B. (2007), Doing a Successful Research Project, Palgrave Macmillan, Basingstoke.
de Vries, B. and Steins, R.J. (2008), Assessing working conditions using fuzzy logic, Automation in Construction, Vol. 17 No. 5, pp. 584-91.
Donohoe, S. (2005), Building surveyors and adjudication: the implications of Hurst Stores v. M L Europe Property Ltd (2004), Structural Survey, Vol. 23 No. 1, pp. 55-62.
ECI (1999), Summaries of ECI Value Enhancement Practices, The European Construction Institute, Loughborough.
Edwards, D.J. and Holt, G.D. (2010), The case for 3D Triangulation when applied to construction management research, Construction Innovation: Information, Process, Management, Vol. 10 No. 1, pp. 25-41.




Egan, J. (1998), Rethinking Construction, Report of the Construction Task Force to the Deputy Prime Minister, John Prescott, on the scope of improving the quality and efficiency of UK construction, Department of Environment, Transport and the Regions, London, chaired by Sir J. Egan.
Egan, J. (2003), Accelerating change, A report by the Strategic Forum for Construction, Rethinking Construction, London.
Egemen, M. and Mohamed, A. (2005), Different approaches of clients and consultants to contractors' qualification and selection, Journal of Civil Engineering and Management, Vol. 11 No. 4, pp. 267-76.
El-Sawalhi, N., Eaton, D. and Rustom, R. (2007), Contractor prequalification model: state-of-the-art, The International Journal of Project Management, Vol. 25, pp. 465-74.
El-Sawalhi, N., Eaton, D. and Rustom, R. (2008), Forecasting contractor performance using a neural network and genetic algorithm in a pre-qualification model, Construction Innovation, Vol. 8 No. 4, pp. 280-98.
Emerald (2009), How to Write a Literature Review, Emerald, available at: http://info. (accessed August 2009).
Emmerson, H. (1962), Survey of problems before the construction industries, Report chaired by Sir H. Emmerson, Ministry of Works, HMSO, London.
Fellows, R. and Liu, A. (2003), Research Methods for Construction, 2nd ed., Blackwell Science, Oxford.
Ferguson, N.S., Langford, D.A. and Chan, W.M. (1995), Empirical study of tendering practice of Dutch municipalities for the procurement of civil-engineering contracts, The International Journal of Project Management, Vol. 13 No. 3, pp. 157-61.
Fink, A.G. (2005), Conducting Research Literature Reviews: From the Internet to Paper, Sage, New York, NY.
Fong, P.S. and Choi, S.K. (2000), Final contractor selection using the analytical hierarchy process, Construction Management and Economics, Vol. 18, pp. 547-57.
Gershon, S.P. (2004), Releasing Resources to the Front Line. Independent Review of Public Sector Efficiency, HMSO, London, available at: efficiency.htm (accessed February 2009).
Graham, L.D., Forbes, D.R. and Smith, S.D. (2006), Modelling the ready mixed concrete delivery system with neural networks, Automation in Construction, Vol. 15 No. 5, pp. 656-63.
Gransberg, D.G. and Molenaar, D. (2003), A synthesis of design-builder selection methods for public infrastructure projects, Journal of Construction Procurement, Vol. 9 No. 2, pp. 40-51.
Han, S.H., Kim, D.Y., Kim, H. and Jang, W. (2008), A web-based integrated system for international project risk management, Automation in Construction, Vol. 17 No. 3, pp. 342-56.
Hart, C. (1998), Doing a Literature Review: Releasing the Social Science Research Imagination, Sage, New York, NY.
Hatush, Z. and Skitmore, R.M. (1997), Criteria for contractor selection, Construction Management and Economics, Vol. 15, pp. 19-38.
Hatush, Z. and Skitmore, R.M. (1998), Contractor selection using multicriteria utility theory: an additive model, Building and Environment, Vol. 33 Nos 2/3, pp. 105-15.
Hinze, J. and Gambatese, J. (2003), Factors that influence safety performance of speciality contractors, Journal of Construction Engineering and Management, Vol. 129 No. 2, pp. 159-64.

Holt, G.D. (1996), Applying cluster analysis to construction contractor classification, Building and Environment, Vol. 31 No. 6, pp. 557-68.
Holt, G.D. (1997), Classifying construction contractors: a case study using cluster analysis, Building Research and Information, Vol. 25 No. 6, pp. 374-82.
Holt, G.D. (1998), Which contractor selection methodology?, The International Journal of Project Management, Vol. 16 No. 3, pp. 153-64.
Holt, G.D. and Edwards, D.J. (2005), Domestic builder selection in the UK housing repair and maintenance sector: a critique, Journal of Construction Research, Vol. 6 No. 1, pp. 123-37.
Holt, G.D., Olomolaiye, P.O. and Harris, F.C. (1993), A conceptual alternative to current tendering practice, Building Research and Information, Vol. 21 No. 3, pp. 167-72.
Holt, G.D., Olomolaiye, P.O. and Harris, F.C. (1994a), Applying multi-attribute analysis to contractor selection decisions, European Journal of Purchasing & Supply Management, Vol. 1 No. 3, pp. 139-48.
Holt, G.D., Olomolaiye, P.O. and Harris, F.C. (1994b), Evaluating performance potential in the selection of construction contractors, Engineering, Construction and Architectural Management, Vol. 1 No. 1, pp. 29-50.
Holt, G.D., Olomolaiye, P.O. and Harris, F.C. (1994c), Evaluating prequalification criteria in contractor selection, Building and Environment, Vol. 29 No. 4, pp. 437-48.
Holt, G.D., Olomolaiye, P.O. and Harris, F.C. (1994d), Factors influencing UK construction clients' choice of contractor, Building and Environment, Vol. 29 No. 2, pp. 241-8.
Holt, G.D., Olomolaiye, P.O. and Harris, F.C. (1994e), Incorporating project-specific criteria and client utility in the evaluation of construction tenderers, Building Research and Information, Vol. 22 No. 4, pp. 214-21.
Holt, G.D., Olomolaiye, P.O. and Harris, F.C. (1995a), Application of an alternative contractor selection model, Building Research and Information, Vol. 23 No. 5, pp. 255-64.
Holt, G.D., Olomolaiye, P.O. and Harris, F.C. (1995b), A review of contractor selection practice in the UK construction industry, Building and Environment, Vol. 30 No. 4, pp. 553-61.
Hughes, W. (2009), New wine in old bottles: how about the ranking of journals in construction engineering and management, reply to debate on the Co-operative Network for Building Researchers (CNBR) web site, June 12, available at: cnbr-l/ (accessed August 2009), (registration required for access).
Ip, W.H., Yung, K.L. and Wang, D. (2004), A branch and bound algorithm for sub-contractor selection in agile manufacturing environment, International Journal of Production Economics, Vol. 87, pp. 195-205.
ITT (2009), Index to Theses, a comprehensive listing of theses with abstracts accepted for higher degrees by universities in Great Britain and Ireland since 1716, available at: www. (accessed February 2009).
Jaselskis, E.J. and Russell, J.S. (1992), Risk analysis approach to selection of contractor evaluation method, Journal of Construction Engineering and Management, Vol. 118 No. 4, pp. 814-21.
Jennings, P. and Holt, G.D. (1998), Prequalification and multi-criteria selection: a measure of contractors' opinions, Construction Management and Economics, Vol. 16, pp. 651-60.
Juan, Y. (2009), A hybrid approach using data envelopment analysis and case-based reasoning for housing refurbishment contractors selection and performance improvement, Expert Systems with Applications, Vol. 36 No. 3, pp. 5702-10.




Juan, Y., Perng, Y., Castro-Lacouture, C. and Lu, K. (2009), Housing refurbishment contractors selection based on a hybrid fuzzy-QFD approach, Automation in Construction, Vol. 18 No. 2, pp. 139-44.
Kadefors, A., Bjorlingson, E. and Karlsson, A. (2007), Procuring service innovations: contractor selection for partnering projects, The International Journal of Project Management, Vol. 25, pp. 375-85.
Kashiwagi, D. and Byfield, R.E. (2002), Selecting the best contractor to get performance: on time, to budget, meeting quality expectations, Journal of Facilities Management, Vol. 1 No. 2, pp. 103-16.
Khosrowshahi, F. (1999), Neural network model for contractors' prequalification for local authority projects, Engineering, Construction and Architectural Management, Vol. 6 No. 3, pp. 315-28.
Kumaraswamy, M.M. (1996), Contractor evaluation and selection: a Hong Kong perspective, Building and Environment, Vol. 31 No. 3, pp. 273-82.
Kumaraswamy, M.M. and Mathews, J.D. (2000), Improved subcontractor selection employing partnering principles, Journal of Management in Engineering, Vol. 16 No. 3, pp. 47-57.
Kumaraswamy, M.M., Palaneeswaran, E. and Humphreys, P. (2000), Selection matters in construction supply chain optimisation, International Journal of Physical Distribution & Logistics Management, Vol. 30 Nos 7/8, pp. 661-80.
Lai, K.K., Liu, S.L. and Wang, S.Y. (2004), A method used for evaluating bids in the Chinese construction industry, The International Journal of Project Management, Vol. 22, pp. 193-201.
Lam, K.C., Hu, T.S. and Ng, S.T. (2005), Using the principal component analysis method as a tool in contractor pre-qualification, Construction Management and Economics, Vol. 23, pp. 673-84.
Lam, K.C., Ng, S.T., Hu, T., Skitmore, R.M. and Cheung, S.O. (2000), Decision support system for contractor pre-qualification: artificial neural network model, Engineering, Construction and Architectural Management, Vol. 7 No. 3, pp. 251-66.
Lambropoulos, S. (2007), The use of time and cost utility for construction contract award under European legislation, Building and Environment, Vol. 42, pp. 452-63.
Langford, D. and Male, S. (2001), Strategic Management in Construction, 2nd ed., Blackwell Science, Oxford.
Latham, M. (1993), Trust and money, Joint Government/Industry Review of Procurement and Contractual Arrangements in the United Kingdom Construction Industry, Interim Report chaired by Sir M. Latham, HMSO, London.
Latham, M. (1994), Constructing the team, Final Report of the Joint Government/Industry Review of Procurement and Contractual Arrangements in the United Kingdom Construction Industry, chaired by Sir Michael Latham, HMSO, London.
Levin, P. (2007), Excellent Dissertations, McGraw-Hill International (UK), Maidenhead.
Li, H., Cheng, E.W.L. and Love, P.E.D. (2000), Partnering research in construction, Engineering, Construction and Architectural Management, Vol. 7 No. 1, pp. 76-92.
Li, Y., Nie, X. and Chen, S. (2007), Fuzzy approach to prequalifying contractors, Journal of Construction Engineering and Management, Vol. 133 No. 1, pp. 40-9.
Li, Y., Shouyu, C. and Xiangtian, N. (2005), Fuzzy pattern recognition approach to construction contractor selection, Fuzzy Optimization and Decision Making, Vol. 4, pp. 103-18.
Likert, R.A. (1932), Techniques in the measurement of attitudes, Archives of Psychology, Vol. 140, pp. 1-55.

Lingard, H., Hughes, W. and Chinyio, E. (1998), The impact of contractor selection method on transaction costs: a review, Journal of Construction Procurement, Vol. 4 No. 2, pp. 89-102.
Lo, W., Chao, C., Hadavi, A. and Krizek, J. (1998), Contractor selection process for Taipei mass rapid transit system, Journal of Management in Engineering, Vol. 14 No. 3, pp. 57-65.
Loosemore, M., Dainty, A. and Lingard, H. (2003), Human Resource Management in Construction Projects: Strategic and Operational Approaches, Spon Press, London.
Love, P.E.D., Holt, G.D. and Li, H. (2002a), Triangulation in construction management research, Engineering, Construction and Architectural Management, Vol. 9 No. 4, pp. 294-303.
Love, P.E.D., Tse, R.Y., Holt, G.D. and Proverbs, D.G. (2002b), Transaction costs, learning, and alliances in construction, Journal of Construction Research, Vol. 3 No. 2, pp. 193-207.
Luu, V.T., Kim, S. and Huynh, T. (2008), Improving project management performance of large contractors using benchmarking approach, International Journal of Project Management, Vol. 26 No. 7, pp. 758-69.
McCaffer, R. (2008), Editorial, Engineering, Construction and Architectural Management, Vol. 15 No. 2.
Mahdi, I.M., Riley, M.J., Fereig, S.M. and Alex, A.P. (2002), A multi-criteria approach to contractor selection, Engineering, Construction and Architectural Management, Vol. 9 No. 1, pp. 29-37.
Marzouk, M. (2008), A superiority and inferiority ranking model for contractor selection, Construction Innovation, Vol. 8 No. 4, pp. 250-69.
Masterman, J.W.E. (1996), An Introduction to Building Procurement Systems, E. & F.N. Spon, London.
May, T. (2001), Social Research Issues, Methods and Process, Open University Press, Buckingham.
Mbachu, J. (2008), Conceptual framework for the assessment of subcontractors' eligibility and performance in the construction industry, Construction Management and Economics, Vol. 26, pp. 471-84.
Meddis, R. (1984), Statistics Using Ranks: A Unified Approach, Basil Blackwell, Oxford.
Metcalfe, A. (1996), Mathematical models and simulation, in Greenfield, T. (Ed.), Research Methods, Guidance for Postgraduates, Arnold, London.
Mustafa, M.A. and Ryan, T. (1990), Decision support for bid evaluation, The International Journal of Project Management, Vol. 8 No. 4, pp. 230-5.
NEDO (1967), The Economic Development Committee for Building, Action on the Banwell Report, National Economic Development Office, London.
Ng, S.T. and Skitmore, R.M. (1999), Client and consultant perspectives of prequalification criteria, Building and Environment, Vol. 34, pp. 607-21.
Ng, S.T. and Skitmore, R.M. (2001), Contractor selection criteria: a cost-benefit analysis, IEEE Transactions on Engineering Management, Vol. 48 No. 1, pp. 96-106.
Ng, S.T. and Tang, Z. (2008), Delineating the predominant criteria for subcontractor appraisal and their latent relationships, Construction Management and Economics, Vol. 26, pp. 249-59.
Ng, S.T., Cheng, K.P. and Skitmore, R.M. (2005), A framework for evaluating the safety performance of construction contractors, Building and Environment, Vol. 40, pp. 1347-55.
Ng, S.T., Luu, C.D.T. and Chu, A.W.K. (2008), Delineating criteria for subcontractors' registration considering divergence in skill base and scales, The International Journal of Project Management, Vol. 26 No. 4, pp. 448-56.

Contractor selection innovation 325

CI 10,3


Ng, S.T., Palaneeswaran, E. and Kumaraswamy, M.M. (2003), Web-based centralized multiclient cooperative contractor registration system, Journal of Computing in Civil Engineering, Vol. 17 No. 1, pp. 28-37. Ng, S.T., Skitmore, R.M. and Smith, N.J. (1999), Decision makers perceptions in the formulation of prequalication criteria, Engineering, Construction and Architectural Management, Vol. 6 No. 2, pp. 155-65. OLeary, Z. (2004), The Essential Guide to Doing Research, Sage, Thousand Oaks, CA. Oppenheim, A.N. (2000), Questionnaire Design, Interviewing and Attitude Measurement, Continuum International, London. Palaneeswaran, E. and Kumaraswamy, M. (2001), Recent advances and proposed improvements in contractor prequalication methodologies, Building and Environment, Vol. 36, pp. 73-87. Palaneeswaran, E. and Kumaraswamy, M. (2005), Web-based client advisory decision support system for design-builder prequalication, Journal of Computing in Civil Engineering, Vol. 19 No. 1, pp. 69-82. Palaneeswaran, E. and Kumaraswamy, M.M. (2000a), Benchmarking contractor selection practices in public-sector construction a proposed model, Engineering, Construction and Architectural Management, Vol. 7 No. 3, pp. 285-99. Palaneeswaran, E. and Kumaraswamy, M.M. (2000b), Contractor selection for design/build projects, Journal of Construction Engineering and Management, Vol. 126 No. 5, pp. 331-9. Peansupap, V. and Walker, D.H.T. (2006), Information communication technology (ICT) implementation constraints. A construction industry perspective, Engineering, Construction and Architectural Management, Vol. 13 No. 4, pp. 364-79. Phillips, S., Dainty, A. and Price, A. (2007), The development of a tender analysis support tool for use in social housing best value procurement, in Boyd, D. (Ed.), Proceedings of the 23rd Annual ARCOM Conference, 3-5 September, The Association of Researchers in Construction Management, Belfast, pp. 797-808. Pongpeng, J. and Liston, J. 
(2003), TenSem: a multicriteria and multidecision-makers model in tender evaluation, Construction Management and Economics, Vol. 21, pp. 21-30. RAE (2008), rae 2008 (sic), Results and Analysis of the UKs Research Assessment Exercise 2008, available at: (accessed August 2009). RICS (2008), Expected prot margins. Graph of expected pressure on construction prot margins, RICS Construction Market Survey United Kingdom. Third Quarter 2008, The Royal Institution of Chartered Surveyors, London, available at: rdonlyres/7FDB18D6-B0E4-4777-A777-5FA044873D71/0/RICSConstructionMarketSurvey Q32008.pdf (accessed February 2009). RICS (2009), Royal Institution of Chartered Surveyors Research Publications Database, available at: (accessed February 2009). Russell, J.S. and Jaselskis, E.J. (1992), Quantitative study of contractor evaluation programs and their impact, Journal of Construction Engineering and Management, Vol. 118 No. 3, pp. 612-24. Russell, J.S. and Skibniewski, M.J. (1988), Decision criteria in contractor selection, Journal of Management in Engineering, Vol. 4 No. 2, pp. 148-64. Russell, J.S. and Skibniewski, M.J. (1990), QUALIFIER-1: contractor prequalication model, Journal of Computing in Civil Engineering, Vol. 4 No. 1, pp. 77-90.

Russell, J.S., Skibniewski, M.J. and Cozier, D.R. (1990), QUALIFIER-2: knowledge based system for contractor pre-qualication, Journal of Construction Engineering and Management, Vol. 116 No. 1, pp. 155-69. Shen, L.Y., Lu, W. and Yam, M.C.H. (2006), Contractor key competitiveness indicators: a China study, Journal of Construction Engineering and Management, Vol. 132 No. 4, pp. 416-24. Shen, L.Y., Lu, W., Shen, Q. and Li, H. (2003), A computer-aided decision support system for assessing a contractors competitiveness, Automation in Construction, Vol. 12, pp. 577-87. Sidwell, A.C. and Budiawan, D. (2001), The signicance of the tendering contract on the opportunities for clients to encourage contractor-led innovation, Construction Innovation, Vol. 1 No. 2, pp. 107-16. Silverman, D. (2001), Interpreting Qualitative Data: Methods for Analysing Talk, Text and Interaction, Sage, London. (The) Simon Committee (1944), The placing and management of building contracts, The Simon Report, Central Council for Building and Works, Chairman Sir E. Simon, HMSO, London. Singh, D. and Tiong, L.K. (2005), A fuzzy decision framework for contractor selection, Journal of Construction Engineering and Management, Vol. 131 No. 1, pp. 62-70. Singh, D. and Tiong, L.K. (2006), Contractor selection criteria: investigation of opinions of Singapore construction practitioners, Journal of Construction Engineering and Management, Vol. 132 No. 9, pp. 998-1008. Skitmore, M. and Mills, A. (1999), Note. A needs based methodology for classifying construction clients and selecting contractors: comment, Construction Management and Economics, Vol. 17, pp. 5-7. Smith, A.J. (1999), The task of the referee, IEEE Computer, Vol. 23 No. 4, pp. 65-73, available at: (accessed February 2009). Sonmez, M., Yang, J.B. and Holt, G.D. (2001), Addressing the contractor selection problem using an evidential reasoning approach, Engineering, Construction and Architectural Management, Vol. 8 No. 3, pp. 198-210. Tam, C.M. 
and Harris, F.C. (1996), Model for assessing building contractors performance, Engineering, Construction and Architectural Management, Vol. 3 No. 3, pp. 187-203. Tavistock (1966), Interdependence and Uncertainty A Study of the Building Industry, Tavistock Institute of Human Relations, Tavistock Publications, London. (The) Times (2008), Galliford says construction downturn is spreading, Times Online, available at: and_property/article4735858.ece (accessed February 2009). Topcu, Y.I. (2004), A decision model proposal for construction contractor selection in Turkey, Building and Environment, Vol. 39, pp. 469-81. Waara, F. and Brochner, J. (2006), Price and nonprice criteria for contractor selection, Journal of Construction Engineering and Management, Vol. 132 No. 8, pp. 797-804. Walker, A. (2002), Project Management in Construction, 4th ed., Blackwell, London. Walker, D.H.T. and Johannes, D.S. (2003), Preparing for organisational learning by HK infrastructure project joint ventures organisations, The Learning Organisation, Vol. 10 No. 2, pp. 106-17. Walliman, N. (2006), Social Research Methods, Sage, Thousand Oaks, CA.
Corresponding author
Gary Holt can be contacted at:
