European Journal of Operational Research 154 (2004) 236–250
www.elsevier.com/locate/dsw

O.R. Applications

A methodology for strategic sourcing
Srinivas Talluri *, Ram Narasimhan
Department of Marketing and Supply Chain Management, Eli Broad College of Business, Michigan State University, N370 North Business Complex, East Lansing, MI 48824, USA

Received 16 May 2001; accepted 29 July 2002

Abstract

Strategic sourcing is critical for firms practicing the principles of supply chain management. It specifically deals with managing the supply base in an effective manner: identifying and selecting suppliers for strategic long-term partnerships, engaging in supplier development initiatives by effectively allocating resources to enhance supplier performance, providing benchmarks and continuous feedback to suppliers, and in some cases undertaking supplier pruning activities. The methodologies currently in practice for strategic sourcing have mostly been subjective in nature, with few objective decision models focused on supplier evaluation, and those models are not free of limitations. This paper proposes an objective framework for effective supplier sourcing that considers multiple strategic and operational factors in the evaluation process. Suppliers are categorized into groups based on performance, which assists managers in identifying candidates for strategic long-term partnerships, supplier development programs, and pruning. In addition, this research investigates the differences among supplier groups in proposing possible improvement strategies for ineffectively performing suppliers. We also demonstrate the methodological richness of our framework when compared to some of the traditional methods proposed and utilized for supplier evaluation. The supplier data utilized in the study is obtained from a large multinational corporation in the telecommunications industry.
© 2002 Elsevier B.V. All rights reserved.
Keywords: Nonparametric efficiency analysis; Purchasing; Strategic sourcing

1. Introduction

Strategic sourcing is a critical challenge faced by many firms involved in the latest innovations of supply chain management. With the recent emphasis on the just-in-time (JIT) manufacturing philosophy, strategic sourcing that establishes long-term relationships with suppliers has become even more important for enhancing organizational performance. In today's dynamic environment, strategic relationships with suppliers are a key ingredient of supply chain success. Strategic sourcing decisions must not be based solely on operational metrics such as cost, quality, and delivery; they must also incorporate strategic dimensions and capabilities of suppliers, such as quality management practices, process capabilities, management practices, design and development capabilities, and cost reduction capabilities, into the decision-making process.

* Corresponding author. Tel.: +1-517-3536381; fax: +1-517-4321112. E-mail addresses: talluri@pilot.msu.edu (S. Talluri), narasimh@pilot.msu.edu (R. Narasimhan).

0377-2217/$ - see front matter © 2002 Elsevier B.V. All rights reserved. doi:10.1016/S0377-2217(02)00649-5


These supplier attributes provide information to a firm's managers on the infrastructure and practices employed by the suppliers, which are key elements for long-term strategic relationships (SR). It is well established in the strategic supplier evaluation literature that sourcing decisions significantly impact various aspects of a product such as cost, design, manufacturability, and quality (Burt, 1984; Burton, 1988). Other research that emphasizes the importance of supplier evaluation includes works by Banker and Khosla (1995) and Dobler et al. (1990). Banker and Khosla (1995) identified the supplier evaluation and justification problem as an important one in operations management. While several methods have been proposed and utilized for the evaluation and selection of suppliers, they have limitations, including: evaluation based solely on operational metrics without consideration of strategic capabilities, simple weighted scoring methods based on subjective assessments, inappropriate or arbitrary methods for deriving factor weights, and lack of relative evaluation across suppliers. We now expand on each of these limitations.

While operational metrics such as price, quality, and delivery are important and critical in evaluating suppliers, strategic evaluation of suppliers leading to a long-term relationship requires consideration of supplier capabilities and practices. This is important because, as a firm's products evolve over time, it is critical to form relationships with suppliers that can effectively meet the changing requirements from the perspective of new product development, design, manufacturing processes, and manufacturing capability, at lower costs. Such suppliers are more likely in the long run to have the infrastructure and organizational capabilities in place to effectively meet the changing demands of the buying firms. For example, it has been suggested in the literature that quality management practices with strategic implications such as total quality management, zero defects, process improvement, statistical process control, and continuous process improvement lead to tangible improvements in quality and cost reduction (De Ron, 1998; Lederer and Rhee, 1995; Tham, 1988). Similarly, design-based practices that encompass initiatives such as design for manufacturability,

modularity, product redesign, concurrent engineering, and standardization have also been associated with cost reduction and better delivery performance (Koulamas, 1992; Tummala et al., 1997; Coughlan and Wood, 1992). Thus, it is important to consider these factors in supplier evaluation decisions.

Several techniques utilized for the evaluation of suppliers assign importance weights to the various evaluation factors in a subjective and/or arbitrary manner. As the complexity of the decision-making process increases in terms of the factors and alternatives considered, it becomes increasingly difficult to assign a consistent set of weights. Finally, relative evaluation methods that compare suppliers and identify potential reasons for differences in supplier performance have not been fully explored in the literature. The primary advantage of relative evaluation methods is that they allow for grouping suppliers based on performance, which provides useful insights to management in identifying benchmarks for ineffective suppliers and assists in decisions relating to supplier development initiatives (SDI) and programs.

This paper proposes a methodology for strategic sourcing that addresses the aforementioned issues. The methodology utilizes a combination of traditional and advanced data envelopment analysis (DEA) models in estimating the efficiencies of alternative suppliers and the variability in their efficiency scores. Nonparametric statistical techniques are utilized to identify homogeneous groups of suppliers based on their efficiency scores, which assists management in selecting suppliers for strategic partnerships, SDI, and supply base rationalization decisions. Inter-group differences with respect to various factors are identified in order to assist in benchmarking and process improvement efforts. In summary, some of the questions our methodology addresses, which current supplier evaluation techniques do not comprehensively answer, are:

• Which suppliers to consider for strategic partnerships?
• Which suppliers must be a part of supplier development initiatives?
• Which suppliers must be pruned from the supply base?
• How can ineffective suppliers improve their performance? Against whom should they benchmark?
• How can firms effectively allocate resources to supplier improvement programs?

While our paper proposes a new methodology for evaluating suppliers, the primary focus of our study is on the managerial implications and usefulness of the results in addressing strategic sourcing issues faced by companies.

2. Literature review

Supplier evaluation is one of the most widely researched areas in purchasing, with methodologies ranging from conceptual to empirical and modeling streams. It is beyond the scope of this paper to discuss all of these works in detail. Since our framework is primarily related to the modeling area, we mainly limit our discussion to quantitative models proposed for supplier evaluation.

Empirical work in supplier evaluation dates back to the 1960s. Dickson (1966) conducted a study that investigated the importance of supplier evaluation criteria for industrial purchasing managers. The study concluded that cost, quality, and delivery performance were the three most important criteria in supplier evaluation. Other relevant works in this area emphasized the strategic importance of supplier evaluation and the relative importance of, and tradeoffs among, cost, quality, and delivery (Hahn et al., 1983; Jackson, 1983; Kralijic, 1983; Browning et al., 1983; Ansari and Modarress, 1986; Treleven, 1987; Burton, 1988; Bernard, 1989; Benton and Krajeski, 1990; Ellram, 1990). Other researchers that specifically addressed issues relating to the relative importance of various supplier attributes include Monczka et al. (1981), Moriarity (1983), Woodside and Vyas (1987), Chapman and Carter (1990), Tullous and Munson (1991), and Weber et al. (1991). Based on a review of 74 articles on supplier evaluation, Weber et al. (1991) concluded that quality was the most important factor, followed by delivery performance and cost. It is evident from these studies that multiple factors need to be incorporated into the supplier evaluation process and that it should not be based solely on a single criterion such as cost. However, these works have not developed decision models for supplier evaluation.

2.1. Supplier evaluation techniques

In a comprehensive review of supplier selection methods, Weber et al. (1991) reported that 47 of the 74 articles in the review utilized multiple criteria. Some of the traditional multi-criteria approaches have utilized factors such as cost, quality, and delivery, which have become increasingly important with the emphasis on the JIT manufacturing philosophy (Chapman, 1989; Chapman and Carter, 1990). However, these measures are primarily at the operational level. Table 1 depicts the supplier evaluation techniques by methodological area. Several of these techniques have utilized multiple supplier criteria in the evaluation process. However, issues with many of these techniques include the lack of objective methods for assigning factor weights, the lack of relative comparison of alternative suppliers for facilitating benchmarking and SDI, minimal emphasis on strategic-level capabilities or practices, and the failure to address the reasons for ineffective supplier performance.

Application of DEA as a tool for strategic sourcing of suppliers has been limited. To date there have been few works that have applied this tool for supplier evaluation purposes. Kleinsorge et al. (1992) utilized DEA as a tool for performance monitoring of a single supplier over time. However, their work did not address issues relating to strategic supplier selection or benchmarking. Two articles by Weber and Desai (1996) and Weber et al. (1998) addressed the issue of supplier selection and negotiation using DEA. However, the supplier metrics they utilized were strictly operational ones. Also, their analysis is based on a traditional DEA model, which has certain limitations as discussed in the next section of the paper.

Table 1
Vendor evaluation techniques

Evaluation technique                    Authors
Weighted linear models                  Lamberson et al. (1976), Timmerman (1986)
Linear programming                      Pan (1989), Turner (1988)
Mixed integer programming               Weber and Current (1993)
Grouping methods                        Hinkle et al. (1969)
Analytic hierarchy process              Barbarosoglu and Yazgac (1997), Hill and Nydick (1992), Narasimhan (1983)
Matrix method                           Gregory (1986)
Multi-objective programming             Weber and Ellram (1993)
Total cost of ownership                 Ellram (1995)
Human judgment models                   Patton (1996)
Principal component analysis            Petroni and Braglia (2000)
DEA                                     Narasimhan et al. (2001), Weber and Desai (1996), Weber et al. (1998)
Interpretive structural modeling        Mandal and Deshmukh (1994)
Statistical analysis                    Mummalaneni et al. (1996)
Discrete choice analysis experiments    Verma and Pullman (1998)
Neural networks                         Siying et al. (1997)

Narasimhan et al. (2001) applied DEA to the strategic evaluation of suppliers, considering various factors at both strategic and operational levels. While their approach provided some useful insights into supplier evaluation and rationalization, it was also limited by the traditional DEA model evaluations. In addition, their work did not investigate the reasons behind the differences in efficiency scores of suppliers, and thus did not delve into supplier improvement strategies. In a more recent paper, Talluri and Narasimhan (in press) developed a vendor evaluation model that effectively considers performance variability issues, but their approach only incorporated operational measures into the decision-making process. The advantage of our methodology over existing DEA approaches is that we utilize a combination of methods that effectively discriminates among suppliers and avoids some of the pitfalls associated with traditional DEA models. Also, our approach utilizes robust statistical methods for investigating the differences among suppliers and providing recommendations for improvement. We now provide an introduction to the DEA models utilized in our analysis.

3. DEA models

In this section we provide a brief description of the DEA models utilized in our study. These models include the basic Charnes, Cooper, and Rhodes (CCR) model and the aggressive cross-efficiency model. For more details on the model development, readers are encouraged to refer to the original works in this area.

3.1. CCR DEA model

It is well known in the productivity literature that DEA is a multi-factor analysis tool that measures the relative efficiencies of a set of decision-making units (DMUs). It effectively considers multiple input and output factors in evaluating the efficiency scores. In the context of our study, the input and output factors correspond to supplier capabilities and performance metrics, respectively. Following the notation of Doyle and Green (1994), the efficiency measure utilized in DEA is defined by Eq. (1):

E_{ks} = \frac{\sum_{y} O_{sy} v_{ky}}{\sum_{x} I_{sx} u_{kx}}    (1)

where E_{ks} is the efficiency (productivity) measure of supplier s, using the weights of test supplier k; O_{sy} is the value of output y for supplier s; I_{sx} is the value of input x for supplier s; v_{ky} is the weight assigned by supplier k to output y; and u_{kx} is the weight assigned by supplier k to input x.

In the ratio DEA model proposed by Charnes et al. (1978), which is also referred to as the CCR model, each test supplier k selects optimal weights for capabilities (inputs) and performance metrics (outputs) to achieve the highest possible efficiency score, subject to the restriction that these weights prevent any supplier s from achieving an efficiency score greater than 1. The CCR model is presented in expression (2):

\max\; E_{kk} = \frac{\sum_{y} O_{ky} v_{ky}}{\sum_{x} I_{kx} u_{kx}}
subject to:  E_{ks} \le 1 \;\; \forall \text{ suppliers } s; \quad u_{kx}, v_{ky} \ge 0    (2)

The conversion of (2) into a linear programming problem is shown below in (3):

\max\; E_{kk} = \sum_{y} O_{ky} v_{ky}
subject to:  \sum_{x} I_{kx} u_{kx} = 1; \quad E_{ks} \le 1 \;\; \forall \text{ suppliers } s; \quad u_{kx}, v_{ky} \ge 0    (3)

This conversion is performed by equating the denominator of the efficiency ratio in (2) to a value of 1, represented by the constraint \sum_{x} I_{kx} u_{kx} = 1. The result of problem (3) is an optimal efficiency score E^{*}_{kk}, which does not exceed a value of 1. If E^{*}_{kk} = 1 and the corresponding slack variables are 0, then supplier k is considered to be efficient. If E^{*}_{kk} < 1, then supplier k does not lie on the efficient frontier and is dominated by at least one other supplier or a linear combination of suppliers. Problem (3) is solved s times in evaluating the efficiency scores of all the suppliers.
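To make the linear program in (3) concrete, the sketch below sets it up with SciPy's linprog. The function name, variable layout, and the tiny data set are ours for illustration only; they are not from the paper.

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, k):
    """CCR efficiency of supplier k via LP (3).

    inputs  : (n, m) array of capability scores I_sx
    outputs : (n, p) array of performance scores O_sy
    The decision vector is [u_1..u_m, v_1..v_p] >= 0.
    """
    n, m = inputs.shape
    p = outputs.shape[1]

    # Objective: maximize sum_y O_ky v_y  ->  minimize its negative.
    c = np.concatenate([np.zeros(m), -outputs[k]])

    # Normalization constraint: sum_x I_kx u_x = 1.
    a_eq = np.concatenate([inputs[k], np.zeros(p)]).reshape(1, -1)

    # E_ks <= 1 for every supplier s:  sum_y O_sy v_y - sum_x I_sx u_x <= 0.
    a_ub = np.hstack([-inputs, outputs])

    res = linprog(c, A_ub=a_ub, b_ub=np.zeros(n), A_eq=a_eq, b_eq=[1.0],
                  bounds=(0, None), method="highs")
    return -res.fun  # optimal E*_kk

# Tiny illustrative data (not the paper's): 4 suppliers, 2 inputs, 2 outputs.
I = np.array([[0.9, 1.0], [0.7, 0.8], [1.1, 1.0], [0.6, 0.9]])
O = np.array([[0.6, 1.2], [0.7, 0.9], [1.0, 1.3], [0.4, 0.8]])
print(np.round([ccr_efficiency(I, O, k) for k in range(len(I))], 3))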

3.2. Cross-efficiency models

Cross-efficiency models are primarily utilized to overcome the unrestricted weight flexibility problem of the CCR model. The CCR model allows DMUs to emphasize relatively few inputs and outputs in achieving a high efficiency score while ignoring other important factors. Sexton et al. (1986) introduced the concept of cross-efficiencies and the cross-efficiency matrix (CEM) in DEA. In the context of the current paper, the CEM provides information on the efficiency of a specific supplier under the optimal weighting schemes determined for the other suppliers. In the CEM, the element in the kth row and the sth column represents the efficiency measure of supplier s when evaluated against the optimal weights of supplier k (E_{ks}). Each column of the CEM is then averaged to obtain a mean cross-efficiency score for each supplier, and the suppliers can be ranked based on these mean scores. Thus, the CEM provides a mechanism for effectively differentiating among the suppliers.

One issue that may arise in utilizing the cross-efficiency scores is the non-uniqueness of the input and output factor weights obtained from the CCR model used in their evaluation. This makes the cross-efficiency analysis arbitrary and limits its applicability. To overcome this potential limitation, a formulation developed by Doyle and Green (1994) may be used for the cross-efficiency evaluations and the development of a CEM. This formulation, shown as (4), generates a unique set of weights:

\min\; \sum_{y} v_{ky} \Big( \sum_{s \ne k} O_{sy} \Big)
subject to:  \sum_{x} u_{kx} \Big( \sum_{s \ne k} I_{sx} \Big) = 1; \quad \sum_{y} O_{ky} v_{ky} - E^{*}_{kk} \sum_{x} I_{kx} u_{kx} = 0; \quad E_{ks} \le 1 \;\; \forall \text{ suppliers } s \ne k; \quad u_{kx}, v_{ky} \ge 0    (4)

The above formulation has a primary goal of obtaining the maximum CCR efficiency score for supplier k (the test unit) and a secondary goal of determining a set of weights that minimizes the other suppliers' aggregate output, as defined by the objective function. The test unit k is defined as an average unit whose efficiency is minimized; this model has been defined as an aggressive formulation. The data required in (4) include the optimal efficiency scores E^{*}_{kk} from the CCR model, as shown by the second constraint set. Thus, a key advantage of the aggressive model utilized in cross-efficiency analysis is that units emphasize their strengths, which are the weaknesses of their competitors. A unit with a high mean cross-efficiency score can therefore be considered a superior performer because it is excelling across many dimensions.
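A minimal sketch of how a cross-efficiency matrix could be assembled is shown below. For brevity it reuses whichever CCR-optimal weights the solver returns rather than resolving the weight non-uniqueness with the aggressive secondary goal of formulation (4), so it illustrates the structure of the CEM rather than the exact Doyle and Green procedure; all names are ours.

import numpy as np
from scipy.optimize import linprog

def ccr_weights(inputs, outputs, k):
    """Return (u, v), one set of CCR-optimal input/output weights for supplier k."""
    n, m = inputs.shape
    p = outputs.shape[1]
    c = np.concatenate([np.zeros(m), -outputs[k]])
    a_eq = np.concatenate([inputs[k], np.zeros(p)]).reshape(1, -1)
    a_ub = np.hstack([-inputs, outputs])
    res = linprog(c, A_ub=a_ub, b_ub=np.zeros(n), A_eq=a_eq, b_eq=[1.0],
                  bounds=(0, None), method="highs")
    return res.x[:m], res.x[m:]

def cross_efficiency_matrix(inputs, outputs):
    """CEM[k, s] = efficiency of supplier s under the weights chosen for supplier k."""
    n = len(inputs)
    cem = np.zeros((n, n))
    for k in range(n):
        u, v = ccr_weights(inputs, outputs, k)
        cem[k] = (outputs @ v) / (inputs @ u)
    return cem

# Column means and standard deviations of the CEM correspond to the
# "X-Eff. mean" and variability figures of the kind reported in Table 3:
# cem = cross_efficiency_matrix(I, O); print(cem.mean(axis=0), cem.std(axis=0))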

Table 2
Key differences between the CCR and cross-efficiency models

CCR model:
• Allows for the selection of weights in an unrestricted manner, resulting in some units achieving a high relative efficiency score by emphasizing relatively few inputs and outputs.
• The model is run one time for each unit to obtain the relative efficiency scores.

Cross-efficiency model:
• Computes the efficiency of each unit with respect to the optimal weights of the other units for a more comprehensive peer evaluation. This effectively differentiates between good overall performers and niche performers.
• The model is run one time for each unit to determine the input and output weights that not only maintain its CCR efficiency score but also minimize the efficiency scores of all other units. These weights are utilized in deriving a mean cross-efficiency score.
The key differences and the relative advantages of the cross-efficiency models over the CCR model are summarized in Table 2.

The second reason for utilizing the cross-efficiency analysis in our study is to identify variability in the efficiency scores of a supplier when evaluated against the optimal weights of its peers. This facilitates identifying homogeneous groups of suppliers for strategic relationships, SDI, and pruning decisions for the buying firm. The third reason for utilizing the cross-efficiency analysis is associated with the limitations in our sample size. Since a small sample size exacerbates the unrestricted weight flexibility problem in DEA (Boussofiane et al., 1991), we utilize cross-evaluations to discriminate better among the suppliers. It is important to note that other methods developed in the literature, such as reconstructing virtual input and output combinations from expert consultation as proposed by Thanassoulis and Allen (1998), and the bootstrap procedure for replicating input–output combinations by Simar and Wilson (1998), can be utilized as alternative approaches for improving the discriminatory power of DEA models.

Fig. 1. A framework for strategic sourcing. (Flow of the framework: supply base → efficiency evaluation, with capabilities as inputs and performance metrics as outputs → identification of supplier groups → identification of group differences on inputs and outputs → performance feedback and resource allocation decisions → managerial decisions on SR, SDI, and pruning.)

4. Methodology for strategic sourcing

Fig. 1 depicts the framework utilized for strategic sourcing. The first step involves the identification of the suppliers to be evaluated, followed by data collection. Data collection can be performed through questionnaires and/or site visits to candidate suppliers. The efficiency evaluations are performed by obtaining data on the capabilities (inputs) and performance metrics (outputs) of the suppliers being evaluated; the following section provides details on the data acquisition process. In general, any resource can be utilized as a possible input measure, and outputs can encompass activity/performance measures. The next step involves the evaluation of efficiency scores and the ranking of suppliers based on the DEA models. Subsequently, supplier groups are identified by a nonparametric procedure, which effectively incorporates variability in the efficiency measures into the evaluation process. This step is performed by utilizing the procedure proposed by Talluri et al. (2000) to categorize suppliers into groups based on their cross-efficiencies. The next part of the framework addresses the managerial decisions associated with supplier evaluation that can be handled through our analysis.

Here, we stress the identification of suppliers for SR, SDI, and pruning, which assists managers in decisions relating to partnerships and effective allocation of resources to various SDI programs. Also, we identify the differences among the supplier groups in terms of capabilities and performance metrics, and provide feedback to ineffective supplier groups regarding the necessary improvements across various dimensions.

5. Strategic sourcing: An illustrative case

Our study was carried out in a large multinational telecommunications company, which we refer to as Company X throughout the paper in order to maintain anonymity. Company X is a global leader in the design, production, and marketing of communication systems. It operates production plants, research and development facilities, and distribution systems on a global basis. The critical objectives of the company in procurement and supply management include:

• improving the quality of purchased products/services;
• reducing lead-time and improving on-time delivery;
• developing long-term relationships with key suppliers;
• securing globally competitive pricing.

In order to achieve these objectives, Company X has placed emphasis on supplier rationalization by evaluating and developing suppliers for long-term SR, providing continuous feedback for improving performance, achieving excellence across multiple competitive dimensions, and decreasing the supply base by pruning inefficient suppliers. As discussed earlier, our framework specifically addresses these issues.

The factor selection and data acquisition process was initiated by first defining the relevant input and output dimensions to be utilized in the efficiency analysis. This was accomplished in focus group sessions with the management of Company X. Due to the decentralized nature of Company X's supply management system, these focus group

sessions had to be carefully planned. A series of meetings was conducted in order to identify the specific product line to be examined and the input and output dimensions to be used in DEA. It was decided that the data gathering effort must be performed as objectively as possible while ensuring convenience of data collection. Also, in order to ensure full participation, it was decided early in the study that the time and effort required to collect data from suppliers and buyers must be kept to an acceptable minimum.

After the identification of the input and output dimensions, we developed two separate questionnaires: one to assess supplier capabilities (comprising the input dimensions of DEA) and the other to assess supplier performance (comprising the output dimensions of DEA). The questionnaires utilized multiple items to measure the input and output dimensions. The individual items were measured on a binary scale (yes/no responses) to afford maximum objectivity and accuracy of survey responses. The questions were carefully worded so that the responses of the suppliers could be easily verified for accuracy. In addition, the binary scale obviated the need for suppliers to make subjective judgments regarding their capabilities, which could have distorted the input data. The design of the questionnaire and the binary scale were conscious design choices made by us in the data collection phase of the study to ensure a high degree of reliability of the input data. It should be noted that the alternative of conducting a detailed audit of suppliers by a team from Company X was rejected as both time consuming and requiring an inordinate amount of effort.

The questionnaires were reviewed by Company X's management and revised to reflect their comments and suggestions. The Supplier Capability Questionnaire was sent out to the suppliers, and the Supplier Performance Assessment Questionnaire was sent out to the purchasing staff of Company X. The returned questionnaires were sent to the project staff for data coding, entry, and analysis.

To test the hypothesis that the questionnaires might have contained difficult or ambiguous questions, an analysis of responses to individual items was carried out by examining the proportion

of the sample with missing data on individual items. The analysis showed that there was no evidence to support the hypothesis, confirming that the questionnaire was acceptable (i.e., not difficult to fill out) as a data collection instrument. The following section describes the questionnaires, and subsequent sections discuss the data analysis and managerial implications of the study.

5.1. Supplier capability questionnaire

Items on the Supplier Capability Questionnaire were grouped into the following categories, which are utilized as the inputs in the DEA model evaluations:

• quality management practices and systems (QMP);
• documentation and self-audit (SA);
• process/manufacturing capability (PMC);
• management of the firm (MGT);
• design and development capabilities (DD);
• cost reduction capability (CR).

These six categories were measured with a composite score between 0 and 1. The score was computed as the proportion of 'yes' answers to the individual questionnaire items in the category. 'Blank' and 'not applicable' responses were not considered in the calculation of the proportion of 'yes' responses.

5.2. Supplier performance assessment questionnaire

Items on the Supplier Performance Assessment Questionnaire were grouped into the following categories, constituting the outputs:

• quality;
• price;
• delivery;
• cost reduction performance (CRP);
• other.

The above categories were also measured with a composite score between 0 and 1. To evaluate the score, the proportion of 'yes' answers was identified in each category to provide an 'objective' measure of the variables in the category. Table 3 shows the scaled composite scores for the input and output variables for the 23 suppliers. While we utilized the actual composite scores in the DEA evaluations, in order to maintain confidentiality of the data we have scaled them by dividing each measure by a factor. For the categories in which subjective questions were included, the answers to those questions were normalized to a value between 0 and 1 and then combined with the responses to the 'objective' items belonging to the category. This was performed by taking a weighted average of the 'subjective' and 'objective' measures, with weights of 0.4 and 0.6, respectively, based on managerial input from Company X.
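A small sketch of the composite scoring just described, assuming the 'yes'/'no'/blank coding and the 0.4/0.6 subjective–objective weighting; the function and its argument names are ours.

def composite_score(responses, subjective=None, w_subj=0.4, w_obj=0.6):
    """Composite category score as described in Sections 5.1 and 5.2.

    responses  : list of 'yes' / 'no' / None (None = blank or not applicable)
    subjective : optional list of subjective item scores already normalized to [0, 1]
    """
    answered = [r for r in responses if r is not None]
    objective = sum(r == "yes" for r in answered) / len(answered)
    if not subjective:
        return objective
    subj = sum(subjective) / len(subjective)
    # 0.4 / 0.6 weighting of subjective vs. objective items (per Company X's input).
    return w_subj * subj + w_obj * objective

print(composite_score(["yes", "no", None, "yes"]))             # 2/3, blanks ignored
print(composite_score(["yes", "no"], subjective=[0.5, 0.75]))  # 0.4*0.625 + 0.6*0.5 = 0.55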

6. Data analysis

6.1. DEA results

First, we determined the CCR efficiency scores of the 23 suppliers with respect to the six capabilities (inputs) and five performance metrics (outputs). The scaled supplier input and output data are shown in Table 3. The CCR model identified suppliers 2, 3, 4, 6, 7, 10, 12, 15, 20, 22, and 23 as efficient with a score of 1.000; the other 12 suppliers are inefficient with scores of less than 1.000. These results are shown in Table 3 under the heading "CCR Eff."

Since the assumptions of linearity in inputs and outputs and the implicit technology assumption underlying the CCR model are not necessarily justified, we tested the sensitivity of the results using the Banker, Charnes, and Cooper (BCC) model (Banker et al., 1984), which works under the assumption of variable returns to scale. While the BCC model identified 12 suppliers as efficient and the rest as inefficient, the CCR model identified 11 of those 12 suppliers as efficient and the remaining suppliers as inefficient. Also, we utilized the Mann–Whitney test (a nonparametric analogue of the t-test) to test for differences in mean efficiency scores between the CCR and BCC model results, and failed to reject the null hypothesis at the 0.05 significance level, indicating that the differences are statistically insignificant.
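The CCR-versus-BCC sensitivity check could be reproduced along the following lines with SciPy's Mann–Whitney test. The CCR values below are the first few entries of Table 3; the BCC scores are not reported in the paper, so the figures used here are placeholders.

from scipy.stats import mannwhitneyu

# Illustrative placeholders: the CCR scores are the first six entries of Table 3;
# the BCC scores are invented for this sketch only.
ccr_scores = [0.602, 1.000, 1.000, 1.000, 0.855, 1.000]
bcc_scores = [0.640, 1.000, 1.000, 1.000, 0.900, 1.000]

stat, p = mannwhitneyu(ccr_scores, bcc_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")   # p > 0.05 -> no significant difference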

Table 3
Scaled supplier data with inputs and outputs and efficiency scores

Supplier  QMP     SA      PMC     MGT     DD      CR      Quality  Price   Delivery  CRP     Other   CCR Eff.  X-Eff. mean  Std. dev.
1         0.9662  0.9742  1.0385  1.0808  1.1417  0.7839  0.6211   0.8922  0.1284    1.2107  0.6359  0.602     0.427        0.129
2         0.7054  1.0438  0.7500  0.8782  0.0000  0.8750  0.6932   0.8922  0.3855    0.0000  0.3179  1.000     0.412        0.288
3         0.5611  0.8947  0.7789  0.7205  0.8372  0.7404  1.0205   0.4341  1.5420    0.0000  1.2719  1.000     0.536        0.326
4         1.1272  1.0438  0.9520  0.9607  0.9661  1.1402  1.6639   1.1333  1.5420    1.2107  1.8019  1.000     0.752        0.243
5         1.1272  1.0438  1.1251  1.0808  1.2560  1.2115  0.9983   1.3503  1.1565    1.2107  0.9540  0.855     0.615        0.207
6         0.9877  1.0438  0.9376  1.0808  1.0466  0.9422  1.0426   1.3263  1.7990    2.4214  1.2719  1.000     0.810        0.171
7         0.8051  0.8351  1.0385  0.9607  1.2560  1.0768  1.2201   1.2056  0.7710    2.4214  1.2719  1.000     0.821        0.207
8         1.1809  1.0438  1.1251  1.0208  1.0627  1.0096  0.8429   1.1333  0.6424    1.2107  0.8479  0.723     0.523        0.156
9         1.2346  1.0438  1.1251  1.0808  1.2560  1.1442  0.6433   0.8922  0.3855    0.0000  0.5299  0.562     0.316        0.201
10        0.5904  1.0438  0.6058  0.7629  0.5796  0.4038  1.4419   0.4341  1.4135    0.0000  1.2719  1.000     0.578        0.369
11        0.8642  0.8118  0.8182  0.9536  0.9661  0.8076  0.4215   0.8922  1.0279    0.0000  0.8479  0.805     0.459        0.275
12        0.6441  0.8351  1.0227  1.0208  0.9661  1.0768  1.0205   1.3263  0.7710    1.2107  0.7418  1.000     0.722        0.252
13        1.2346  1.0438  1.1251  1.0808  1.2560  1.2115  0.5546   1.1092  1.0279    1.2107  1.1660  0.773     0.518        0.175
14        1.0662  1.0438  1.1251  1.0808  1.1593  1.2115  0.8208   0.8922  0.8994    1.2107  0.8479  0.609     0.479        0.115
15        1.0100  1.0438  0.8654  1.0208  0.7322  0.6815  1.2423   1.5674  1.4135    2.4214  1.2719  1.000     0.906        0.160
16        0.8978  0.9742  1.0385  1.0208  0.9420  0.8076  1.0205   0.8922  0.3855    0.0000  0.4240  0.764     0.392        0.255
17        1.1272  0.9742  1.0385  1.0208  1.2560  1.0768  1.0205   0.8681  0.7710    0.0000  0.5299  0.702     0.398        0.242
18        1.1809  1.0438  1.1251  1.0808  1.2560  1.2115  1.2201   0.2411  0.0000    0.0000  0.4240  0.733     0.193        0.204
19        1.0735  1.0438  1.1251  0.9007  1.1593  0.9422  1.1647   0.8922  1.4135    1.2107  1.0599  0.904     0.596        0.159
20        1.0735  1.0438  1.1251  1.0808  0.6762  1.1442  0.8429   1.0550  1.4135    1.2107  1.4839  1.000     0.618        0.193
21        1.2346  1.0438  1.1251  1.0133  1.2560  1.2115  0.7764   0.8922  1.0279    0.0000  0.9540  0.658     0.405        0.232
22        1.2346  1.0438  0.9520  1.0808  1.0466  1.2115  1.4642   1.3263  1.7990    2.4214  1.4839  1.000     0.817        0.186
23        1.0735  1.0438  1.0385  1.0172  0.8695  1.0768  1.2423   1.3503  1.2849    2.4214  1.5900  1.000     0.813        0.168

This indicates that our results are not very sensitive to model changes. In order to further test the sensitivity of these results, we evaluated the supplier efficiency scores based on the free disposal hull (FDH) model proposed by Tulkens (1993), but the model did not show effective discrimination among the suppliers. This may be a result of the small sample size in the current application. It is also important to note in these comparisons that the ratio efficiency measure shown in (1) does not apply to the BCC and FDH models.

Since the CCR model has certain limitations as discussed earlier, we utilize the Doyle and Green (1994) aggressive cross-efficiency model for a more complete evaluation of supplier performance. The mean cross-efficiency scores of the suppliers, identified from the weights obtained from the Doyle and Green formulation, are shown in Table 3 under the heading "X-Eff. mean". Based on the mean cross-efficiency scores it can be concluded that supplier 15 is the best performer with a score of 0.906, and supplier 18 is the worst performer with a score of 0.193. It is interesting to note that supplier 2, which is efficient based on the CCR model evaluations, is ranked very low based on the cross-efficiency evaluations, with a mean score of only 0.412. In fact, some of the CCR-inefficient suppliers, such as 1, 5, 8, 11, 13, 14, and 19, are better performers than supplier 2 based on the cross-efficiency evaluations. Supplier 2 is a typical case of a "false positive" or niche performer. Also, supplier 5, with a CCR efficiency score of 0.855, achieved a mean cross-efficiency score of 0.615, which is higher than that of the CCR-efficient suppliers 2, 3, and 10. These types of insights and differentiation among suppliers are not possible when using the CCR model alone, which demonstrates the strength of cross-efficiency evaluation as a more comprehensive technique for efficiency evaluation. It is essential for the decision-maker to consider these issues in supplier rationalization in order to avoid making a Type II error in the selection process.
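The "false positive" screening described above can be mechanized as a simple filter on two Table 3 columns. The sketch below flags CCR-efficient suppliers whose mean cross-efficiency falls below the overall mean; the threshold is our own illustrative heuristic, not a rule from the paper, and only the first six suppliers are used.

import numpy as np

# CCR efficiency and mean cross-efficiency for the first six suppliers (Table 3).
ccr   = np.array([0.602, 1.000, 1.000, 1.000, 0.855, 1.000])
x_eff = np.array([0.427, 0.412, 0.536, 0.752, 0.615, 0.810])

# A supplier that is CCR-efficient but falls below the overall mean
# cross-efficiency is a candidate "niche performer" (e.g., supplier 2).
niche = np.where((ccr == 1.0) & (x_eff < x_eff.mean()))[0] + 1
print(niche)   # -> [2 3] for this truncated sample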

6.2. Identifying homogeneous groups of suppliers

Since the traditional cross-efficiency analysis is primarily based on the mean scores and does not take into consideration the variability in the efficiency scores, the ranking obtained from the analysis may not be the best. In fact, it can be concluded from Table 3 that mean scores alone may not be appropriate for ranking suppliers: while certain suppliers, such as 3 and 8, have nearly the same mean scores, their variability in terms of cross-efficiencies is quite different. Thus, we utilized the method proposed by Talluri et al. (2000), which effectively incorporates the variability measures in identifying homogeneous groups of suppliers. Talluri et al. (2000) performed this step by applying the nonparametric statistical test due to Friedman (Friedman, 1937) to the CEM. They utilized a nonparametric test because the efficiency scores do not lend themselves to the assumptions of normality. We conduct Friedman's test for the following null and alternative hypotheses:

H0: The suppliers have identical cross-efficiency scores.
Ha: At least one of the suppliers tends to yield larger cross-efficiency scores than at least one other supplier.

We treated the rows of the CEM as blocks and the columns as the treatments. The cross-efficiency scores within each block are ranked by assigning 1 to the lowest, 2 to the second lowest, and so on; mean ranks are assigned in the event of ties. The cross-efficiency matrix based on ranks is shown in Table 4. The test resulted in a p-value of 0.000, thereby rejecting the null hypothesis at α = 0.05. Thus, there is sufficient evidence to conclude that at least one supplier tends to yield larger cross-efficiency scores than at least one other supplier. Using the least significant difference tests (Conover, 1980) on the rank-transformed data, all pairwise comparisons were performed to identify suppliers that are different. For this case, for a pair of suppliers to be significantly different at α = 0.05, the absolute difference between the sums of their ranks must be greater than 54.95. The results of the analysis are shown in Table 5. We identified three groups of suppliers based on the overlaps in the least significant difference tests.
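A sketch of the Friedman step on a CEM, assuming SciPy's friedmanchisquare and a synthetic 23 × 23 matrix in place of the actual cross-efficiencies; each supplier's column is passed as one treatment, with the rows acting as blocks.

import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
# Stand-in CEM: 23 weighting schemes (rows/blocks) x 23 suppliers (columns/treatments).
cem = rng.uniform(0.2, 1.0, size=(23, 23))

# One argument per treatment (supplier), i.e., one CEM column per supplier.
stat, p = friedmanchisquare(*[cem[:, s] for s in range(cem.shape[1])])
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
# A small p-value would indicate that at least one supplier tends to receive
# higher cross-efficiency scores; pairwise comparisons on the rank sums (the
# least significant difference step) can then be used to form the groups.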

Table 4
Ranked cross-evaluation scores (listed by supplier: each line gives the ranks supplier s received within each of the 23 rows of the CEM; ties carry mean ranks)

Supplier 1:  11, 6, 2, 6, 6, 14, 17, 4.5, 10, 8, 2, 10, 5, 5, 18, 5, 6, 4, 2, 5, 2, 10, 10
Supplier 2:  8, 21.5, 6, 2, 5, 4, 5, 13, 9, 7, 3.5, 19, 2, 2, 5, 7, 4, 5, 5, 22.5, 5, 3, 2
Supplier 3:  3, 14, 23, 21, 3, 8, 5, 3, 3, 20, 11, 6, 12, 15, 5, 15, 11, 15, 21, 17, 19, 9, 9
Supplier 4:  14, 19, 18, 22.5, 16, 17, 12.5, 16, 16, 21, 17, 14, 21.5, 20, 14, 21.5, 22, 23, 21, 20, 22, 18, 18
Supplier 5:  18, 12, 12, 10, 17, 12, 12.5, 19, 19, 9, 16, 17, 15, 14, 11, 12, 15, 10, 14, 7.5, 14, 14, 11
Supplier 6:  19, 15, 21, 16, 19.5, 22, 22, 17.5, 17.5, 14, 22, 20, 18, 18, 22, 16, 16, 11, 18, 15, 20, 21.5, 19
Supplier 7:  21, 17, 11, 15, 21, 22, 23, 20, 21, 15, 18, 21, 21.5, 22, 20.5, 21.5, 22, 22, 13, 14, 13, 21.5, 22
Supplier 8:  15, 9, 5, 9, 12, 10, 11, 15, 15, 10, 9.5, 11, 10, 8, 16, 10, 10, 8.5, 8, 10, 8, 11, 14
Supplier 9:  6, 1, 3, 4, 4, 2, 5, 6, 4.5, 3, 3.5, 2.5, 3, 3, 5, 2, 1, 3, 3, 2, 3, 2, 4
Supplier 10: 2, 21.5, 22, 22.5, 2, 9, 5, 2, 2, 23, 6, 4, 8, 10, 5, 21.5, 17, 20, 21, 21, 16, 8, 8
Supplier 11: 16, 5, 13, 12, 15, 7, 5, 10, 14, 2, 15, 15, 14, 13, 5, 3, 3, 1, 11, 11, 10, 7, 6
Supplier 12: 22.5, 21.5, 15, 7, 22.5, 18, 18, 21, 22.5, 11.5, 20, 23, 17, 17, 15, 19, 19, 19, 12, 9, 12, 17, 15
Supplier 13: 13, 3, 8.5, 14, 13, 11, 10, 14, 13, 1, 12, 9, 13, 12, 11, 1, 2, 2, 9, 13, 11, 13, 13
Supplier 14: 5, 7, 10, 8, 8, 13, 16, 4.5, 4.5, 5, 8, 8, 7, 7, 11, 6, 7, 7, 7, 6, 7, 12, 12
Supplier 15: 22.5, 21.5, 19, 18, 22.5, 22, 21, 23, 22.5, 22, 22, 22, 21.5, 22, 23, 21.5, 22, 17.5, 21, 18, 22, 21.5, 22
Supplier 16: 10, 13, 4, 3, 7, 3, 5, 8, 11, 19, 5, 13, 4, 4, 5, 14, 13, 12.5, 4, 4, 4, 4, 3
Supplier 17: 9, 8, 7, 5, 9, 5, 5, 7, 7, 11.5, 7, 5, 6, 6, 5, 11, 12, 12.5, 6, 3, 6, 5, 5
Supplier 18: 1, 2, 1, 1, 1, 1, 5, 1, 1, 13, 1, 1, 1, 1, 5, 8, 8, 16, 1, 1, 1, 1, 1
Supplier 19: 7, 10, 16.5, 13, 11, 15, 14.5, 12, 8, 18, 13, 7, 11, 11, 17, 13, 14, 14, 17, 12, 17, 16, 17
Supplier 20: 12, 11, 16.5, 17, 14, 15, 14.5, 11, 12, 6, 14, 12, 16, 16, 13, 9, 9, 8.5, 15, 22.5, 15, 15, 16
Supplier 21: 4, 4, 8.5, 11, 10, 6, 1, 9, 6, 4, 9.5, 2.5, 9, 9, 5, 4, 5, 6, 10, 7.5, 9, 6, 7
Supplier 22: 17, 16, 20, 19, 19.5, 19, 19, 17.5, 17.5, 17, 22, 16, 21.5, 22, 19, 18, 20, 21, 21, 16, 22, 21.5, 20
Supplier 23: 20, 18, 14, 20, 18, 20, 20, 22, 20, 16, 19, 18, 19, 19, 20.5, 17, 18, 17.5, 16, 19, 18, 19, 22

Table 5
Supplier groups based on Friedman's test

Group 1: 4, 6, 7, 12, 15, 22, 23
Group 2: 1, 2, 3, 5, 8, 10, 11, 13, 14, 16, 17, 19, 20, 21
Group 3: 9, 18

It is evident from this analysis that suppliers 15, 22, 7, 23, 4, 6, and 12 are the best performers (in the sense of being the highest ranked group). These are the suppliers that management must consider as potential candidates for SR. These suppliers are the stars that are excelling with respect to several input and output dimensions, i.e., capabilities and performance metrics. From a resource allocation standpoint, management must primarily invest in improving integration with these suppliers, for example by implementing systems such as electronic data interchange (EDI) and web-based procurement for effective and rapid transactions with the suppliers. Also, as we discuss later, these suppliers can serve as potential benchmarks for ineffectively performing suppliers. In essence, management must find possible ways of transferring their best practices to other suppliers.

Suppliers 5, 20, 19, 10, 3, 8, 13, 11, 14, 16, 1, 2, 17, and 21 are in the second category. These are the suppliers that management should consider as potential candidates for supplier development programs and initiatives. While these suppliers have demonstrated potential, they have scope for further improvement. The exact supplier development programs to implement will depend on the areas in which they are weak, which we address later in this section by identifying the differences among the three groups of suppliers in terms of capabilities and performance metrics. Finally, suppliers 9 and 18 are possible candidates for pruning.

6.3. Identifying differences in performance across supplier groups

From process improvement and supplier development perspectives, we further investigated the reasons for the differences in performance across the three supplier groups by analyzing the differences in their capabilities (inputs) and performance metrics (outputs). The ANOVA results on the inputs and outputs for the three groups are shown in Table 6. It is interesting to note that there is no significant difference in terms of the inputs, or capabilities, of the supplier groups; i.e., the QMP, SA, PMC, MGT, DD, and CR levels are not statistically different. However, the performance metrics Quality, Price, Delivery, CRP, and Other are all significantly different at α = 0.05. In order to investigate which groups differ, we performed Duncan's multiple range tests on the output measures. These results are summarized in Table 7.

Based on the results in Table 7, we can conclude that the lower supplier efficiency groups 2 and 3 are ranked well below group 1 with respect to several output variables. The results also show that group 3 is the lowest ranked with respect to all performance metrics, or outputs. It can be seen that group 1 suppliers' performance is vastly superior on Price and CRP compared to groups 2 and 3. Group 2 suppliers, who will be the primary targets of SDI programs, could learn from group 1 suppliers how to reduce their costs by effectively implementing cost reduction programs.

Table 6
ANOVA results on inputs and outputs for supplier groups

Factor     Type     F-value   Significance
QMP        Input    1.050     0.369
SA         Input    0.440     0.650
PMC        Input    0.840     0.446
MGT        Input    1.110     0.348
DD         Input    0.820     0.454
CR         Input    0.990     0.389
Quality    Output   5.390     0.013
Price      Output   11.320    0.001
Delivery   Output   5.850     0.010
CRP        Output   17.020    0.000
Other      Output   7.050     0.005
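The one-way ANOVA in Table 6 can be checked for a single output with SciPy. The sketch below uses the scaled CRP column of Table 3 and the group memberships from Table 5; because the F statistic is invariant to the per-measure scaling, the result should be close to the published value.

from scipy.stats import f_oneway

# Scaled CRP output from Table 3, indexed by supplier number.
crp = {1: 1.2107, 2: 0.0, 3: 0.0, 4: 1.2107, 5: 1.2107, 6: 2.4214, 7: 2.4214,
       8: 1.2107, 9: 0.0, 10: 0.0, 11: 0.0, 12: 1.2107, 13: 1.2107, 14: 1.2107,
       15: 2.4214, 16: 0.0, 17: 0.0, 18: 0.0, 19: 1.2107, 20: 1.2107, 21: 0.0,
       22: 2.4214, 23: 2.4214}

groups = {1: [4, 6, 7, 12, 15, 22, 23],
          2: [1, 2, 3, 5, 8, 10, 11, 13, 14, 16, 17, 19, 20, 21],
          3: [9, 18]}

f, p = f_oneway(*[[crp[s] for s in members] for members in groups.values()])
print(f"CRP: F = {f:.3f}, p = {p:.3f}")   # Table 6 reports F = 17.020, p = 0.000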

Table 7
Duncan's multiple comparison results on supplier group differences (group means by homogeneous subset)

Factor     Subset 1                  Subset 2                  Subset 3      Level of significance
Quality    0.525 (G2), 0.560 (G3)    0.764 (G1)                              0.1
Price      0.313 (G3)                0.499 (G2)                0.730 (G1)    0.05
Delivery   0.100 (G3)                0.490 (G2), 0.695 (G1)                  0.05
CRP        0.000 (G3), 0.167 (G2)    0.572 (G1)                              0.05
Other      0.250 (G3)                0.472 (G2)                0.706 (G1)    0.1

It is conceivable that group 1 suppliers might refuse to divulge their best practices to group 2 firms for fear of intensified competition from them in the future. This can be mitigated by entering into strategic partnering agreements with group 1 firms that essentially make them the principal beneficiaries of the buying firm's competitive performance.

In summary, the managerial implications are that group 2 suppliers must improve with respect to Quality, Price, CRP, and Other; however, they are categorized as being in the same subset as group 1 suppliers with respect to Delivery. This is the type of feedback that management should provide to group 2 suppliers. Since there are no significant differences in the inputs across the three groups, it follows that groups 2 and 3 have all the capabilities in place but are poor at executing these capabilities and transforming them into a high level of performance. Thus, these groups must benchmark themselves against group 1 suppliers and identify ways to execute their capabilities better. The buying firm's SDI programs can be targeted at group 2 suppliers and at specific areas of performance improvement. The knowledge transfer from group 1 suppliers to group 2 suppliers can be filtered through the buying firm and kept at a level acceptable to the group 1 suppliers.

7. Conclusions

In this paper, we have proposed a framework and methodology for strategic sourcing.

We utilized a combination of DEA models for effectively discriminating among suppliers on performance, and we used both strategic capabilities and performance metrics in evaluating suppliers. Our analysis yielded a number of managerial insights that would not have been possible with traditional supplier evaluation methods. These include the identification of suppliers for strategic partnerships, the deployment of resources for SDI, the identification of the factors on which ineffective suppliers need to improve, and the selection of targets for improvement.

The principal advantages of our methodology are that: it simultaneously considers supplier capabilities and performance metrics in evaluating the efficiency of alternative suppliers; it does not require the decision-maker to select a priori weights or preferences for the supplier factors; it overcomes some of the problems associated with traditional DEA models, including unrestricted weight flexibility in the selection of input and output weights in supplier evaluation decisions; it effectively incorporates efficiency variability measures into the analysis in determining homogeneous groups of suppliers; and it identifies the key differences across the supplier groups in terms of performance.

While we have considered the input side of the DEA model somewhat comprehensively, the output measures might need further examination. In addition, it should be pointed out that although the input and output dimensions considered in this paper are generally useful, they are context specific. Also, in a specific application of this methodology, if the set of ineffective suppliers is deemed an unacceptable result by management, the output dimensions of the DEA model must be reexamined for relevant but missing dimensions that might cause those suppliers to appear ineffective. A re-evaluation of the proposed methodology along these lines would yield additional insights and lead to a better approach for strategic sourcing.

References
Ansari, A., Modarress, B., 1986. Just-in-time purchasing: Problems and solutions. Journal of Purchasing and Materials Management 22 (2), 19–26.

Banker, R.D., Khosla, I.S., 1995. Economics of operations management: A research perspective. Journal of Operations Management 12, 423–425.
Banker, R.D., Charnes, A., Cooper, W.W., 1984. Some models for estimation of technical and scale efficiencies in data envelopment analysis. Management Science 30 (9), 1078–1092.
Barbarosoglu, G., Yazgac, T., 1997. An application of the analytic hierarchy process to the supplier selection problem. Production and Inventory Management Journal 38 (1), 14–21.
Benton, W.C., Krajeski, L., 1990. Vendor performance and alternative manufacturing environments. Decision Sciences 21, 403–415.
Bernard, P., 1989. Managing vendor performance. Production and Inventory Management Journal 30, 1–7.
Boussofiane, A., Dyson, R.G., Thanassoulis, E., 1991. Applied data envelopment analysis. European Journal of Operational Research 52 (1), 1–15.
Browning, J.M., Zabriskie, N.B., Huellmantle, A.B., 1983. Strategic purchasing planning. Journal of Purchasing and Materials Management 19 (1), 19–24.
Burt, D.N., 1984. Proactive Procurement. Prentice-Hall, Englewood Cliffs, NJ.
Burton, T.T., 1988. JIT/Repetitive sourcing strategies: Tying the knot with your suppliers. Production and Inventory Management Journal (4th Quarter), 38–41.
Chapman, S.N., 1989. Just-in-time supplier inventory: An empirical implementation model. International Journal of Production Research 27 (12), 1993–2007.
Chapman, S.N., Carter, P.L., 1990. Supplier/customer inventory relationships under just-in-time. Decision Sciences 21, 35–51.
Charnes, A., Cooper, W.W., Rhodes, E., 1978. Measuring the efficiency of decision making units. European Journal of Operational Research 2 (6), 429–444.
Conover, W.J., 1980. Practical Nonparametric Statistics. Wiley, New York.
Coughlan, P.D., Wood, A.R., 1992. Getting product designs right. Business Quarterly 56 (4), 63–67.
De Ron, A.J., 1998. Sustainable production: The ultimate result of a continuous improvement. International Journal of Production Economics 56–57 (1–3), 99–110.
Dickson, G., 1966. An analysis of vendor selection systems and decisions. Journal of Purchasing 2, 28–41.
Dobler, D.W., Burt, D.N., Lee, L., 1990. Purchasing and Materials Management. McGraw-Hill, New York.
Doyle, J., Green, R., 1994. Efficiency and cross-efficiency in DEA: Derivations, meanings and uses. Journal of Operational Research Society 45 (5), 567–578.
Ellram, L.M., 1990. The supplier selection decision in strategic partnerships. Journal of Purchasing and Materials Management 26 (4), 8–14.
Ellram, L.M., 1995. Total cost of ownership: An analysis approach for purchasing. International Journal of Physical Distribution and Logistics Management 25 (8), 4–23.


Friedman, M., 1937. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. Journal of American Statistical Association 32, 675–701.
Gregory, R.E., 1986. Source selection: A matrix approach. Journal of Purchasing and Materials Management (Summer), 24–29.
Hahn, C.K., Pinto, P.A., Bragg, D.J., 1983. Just-in-time production and planning. Journal of Purchasing and Materials Management 19 (3), 2–10.
Hill, R.P., Nydick, R.J., 1992. Using the analytic hierarchy process to structure the supplier selection procedure. International Journal of Purchasing and Materials Management 28 (2), 31–36.
Hinkle, C.L., Robinson, P.J., Green, P.E., 1969. Vendor evaluation using cluster analysis. Journal of Purchasing, 49–58.
Jackson, G.C., 1983. Just-in-time production: Implications for logistics managers. Journal of Business Logistics 4, 1–19.
Kleinsorge, I.K., Schary, P.B., Tanner, R.D., 1992. Data envelopment analysis for monitoring customer supplier relationships. Journal of Accounting and Public Policy 11, 357–372.
Koulamas, C., 1992. Quality improvement through product redesign and the learning curve. OMEGA: The International Journal of Management Science 20 (2), 161–168.
Kralijic, P., 1983. Purchasing must become supply management. Harvard Business Review 61, 109–117.
Lamberson, L.R., Diederich, D., Wuori, J., 1976. Quantitative vendor evaluation. Journal of Purchasing and Materials Management (Spring), 19–28.
Lederer, P.J., Rhee, S.K., 1995. Economics of total quality management. Journal of Operations Management 12 (3/4), 353–367.
Mandal, A., Deshmukh, S.G., 1994. Vendor selection using interpretative structural modeling (ISM). International Journal of Operations and Production Management 14 (6), 52–59.
Monczka, R.M., Giunipero, L.C., Reck, R.F., 1981. Perceived importance of supplier information. Journal of Purchasing and Materials Management 17, 21–29.
Moriarity, R.T., 1983. Industrial Buying Behavior. Lexington Books, Lexington, Massachusetts.
Mummalaneni, V., Dubas, K.M., Chao, C., 1996. Chinese purchasing managers' preferences and trade-offs in supplier selection and performance evaluation. Industrial Marketing Management 25 (2), 115–124.
Narasimhan, R., 1983. An analytical approach to supplier selection. Journal of Purchasing and Materials Management 19 (1), 27–32.
Narasimhan, R., Talluri, S., Mendez, D., 2001. Supplier evaluation and rationalization via data envelopment analysis: An empirical examination. Working paper, Michigan State University.
Pan, A.C., 1989. Allocation of order quantity among suppliers. Journal of Purchasing and Materials Management, 36–39.
Patton, W.W., 1996. Use of human judgment models in industrial buyer's vendor selection decisions. Industrial Marketing Management 25, 135–149.

Petroni, A., Braglia, M., 2000. Vendor selection using principal component analysis. Journal of Supply Chain Management (Spring), 63–69.
Sexton, T.R., Silkman, R.H., Hogan, A., 1986. Data envelopment analysis: Critique and extensions. In: Silkman, R.H. (Ed.), Measuring Efficiency: An Assessment of Data Envelopment Analysis. Publication no. 32 in the series New Directions for Program Evaluation. Jossey-Bass, San Francisco.
Simar, L., Wilson, P., 1998. Sensitivity analysis of efficiency scores: How to bootstrap in non-parametric frontier models. Management Science 44, 49–61.
Siying, W., Jinlong, Z., Zhicheng, L., 1997. A supplier-selecting system using a neural network. In: 1997 IEEE International Conference on Intelligent Processing Systems. IEEE, New York, NY, pp. 468–471.
Talluri, S., Narasimhan, R., in press. Vendor evaluation with performance variability: A max–min approach. European Journal of Operational Research.
Talluri, S., Whiteside, M.M., Seipel, S.J., 2000. A nonparametric stochastic procedure for FMS evaluation. European Journal of Operational Research 124 (3), 529–538.
Tham, W.S., 1988. Transactions of the American Association of Cost Engineers R.1.1.
Thanassoulis, E., Allen, R., 1998. Simulating weight restrictions in data envelopment analysis. Management Science 44, 586–594.
Timmerman, E., 1986. An approach to vendor performance evaluation. Journal of Purchasing and Materials Management (Winter), 2–8.
Treleven, M., 1987. Single sourcing: A management tool for the quality supplier. Journal of Purchasing and Materials Management 23 (1), 19–24.
Tulkens, H., 1993. On FDH efficiency analysis: Some methodological issues and applications to retail banking, courts, and urban transit. Journal of Productivity Analysis 4, 183–210.
Tullous, R., Munson, J.M., 1991. Trade-offs under uncertainty: Implications for industrial purchasers. International Journal of Purchasing and Materials Management 27, 24–31.
Tummala, R.V.M., Chin, K.S., Ho, S.H., 1997. Assessing success factors for implementing CE: A case study in Hong Kong electronics industry. International Journal of Production Economics 49 (3), 265–283.
Turner, I., 1988. An independent system for the evaluation of contract tenders. Journal of Operational Research Society 39 (6), 551–561.
Verma, R., Pullman, M.E., 1998. An analysis of the supplier selection process. Omega 26 (6), 739–750.
Weber, C.A., Current, J.R., 1993. A multiobjective approach to vendor selection. European Journal of Operational Research 68, 173–184.
Weber, C.A., Current, J.R., Benton, W.C., 1991. Vendor selection criteria and methods. European Journal of Operational Research 50, 2–18.
Weber, C.A., Current, J.R., Desai, A., 1998. Non-cooperative negotiation strategies for vendor selection. European Journal of Operational Research 108, 208–223.
Weber, C.A., Desai, A., 1996. Determination of paths to vendor market efficiency using parallel coordinates representation: A negotiation tool for buyers. European Journal of Operational Research 90, 142–155.
Weber, C.A., Ellram, L.M., 1993. Supplier selection using multi objective programming: A decision support systems approach. International Journal of Physical Distribution and Logistics Management 23 (2), 3–14.
Woodside, A.G., Vyas, N., 1987. Industrial purchasing strategies. Lexington Books, Lexington, Massachusetts.