
Journal of the Operational Research Society (1999) 50, 916-930. © 1999 Operational Research Society Ltd. All rights reserved. 0160-5682/99 $15.00
http://www.stockton-press.co.uk/jor

Vendor rating for an entrepreneur development programme: a case study using the analytic hierarchy process method

S Yahya¹ and B Kingsman*²
¹Universiti Pertanian Malaysia, Malaysia and ²Lancaster University, Lancaster

*Correspondence: Dr B Kingsman, Department of Management Science, The Management School, Lancaster University, Bailrigg, Lancaster LA1 4YX, UK. E-mail: b.kingsman@lancaster.ac.uk

With collaborative purchasing programmes where one of the aims is to develop suppliers, vendor rating is important not
only in supplier selection and in deciding how to allocate business but also to determine where scarce development effort
is best applied. This paper describes a case study into vendor rating for a government sponsored Entrepreneur
Development programme in Malaysia. The paper reviews current methods for vendor rating and finds them wanting.
It illustrates a new approach based on the use of Saaty's Analytic Hierarchy Process method, which was developed to
assist in multi-criteria decision problems. The new method overcomes the difficulties associated with the categorical and
simple linear weighted average criteria ranking methods. It provides a more systematic way of deriving the weights to be
used and for scoring the performance of vendors.

Keywords: vendor rating; analytic hierarchy process; multi-criteria decision making; purchasing; development programmes

Introduction

Selecting the right vendor is always a difficult task for buyers and purchasing managers. Suppliers have varied strengths and weaknesses which require careful assessment by the purchasers before orders can be given to them. The vendor selection process would be simple if only one criterion were used in the decision making process. However, in many situations purchasers have to take account of a range of criteria in making their decisions. If several criteria are used then it is necessary to determine how far each criterion influences the decision making process, whether all are to be equally weighted or whether the influence varies according to the type of criterion.

Vendor selection for normal commercial purchasing purposes is usually a one-sided affair. Buyers evaluate the vendors based on criteria that are to the benefit of the purchasing organisation. Entrepreneur Development Programmes (EDPs) are established to improve the business skills and capabilities of small to medium sized enterprises (SMEs) to enable them to operate more effectively in the competitive commercial world. The particular scheme discussed in this paper, known as the Umbrella Scheme, is intended to promote Malaysia's furniture industry. A major part of the scheme is that all wooden furniture requirements for government departments and services, including schools, administration, police, hospitals, military etc., are bought only from supplier companies that are members of the scheme. The scheme is managed on a commercial basis on behalf of the government by a large private company, Guthrie Furniture Private Limited Company (GFSB). In such an EDP the selection of vendors has to be done not only to ensure benefits to the purchaser customers but also to develop the vendors. The emphasis has to include a mutual benefit. The multiple and conflicting objectives, both getting good quality furniture to satisfy government needs and assisting the supplier companies to improve their operations, imply that the criteria to use in selecting vendors might be different from those for normal commercial purchasing of goods. Given also the need to identify the strengths and weaknesses of vendors for the development purposes of the scheme, a vendor rating system is essential and cannot be avoided.

There are two major components to the Umbrella scheme. The first consists of various forms of direct assistance to help the SMEs in the areas of design, management, supervision, wood finishing activities, productivity and marketing. This is achieved through intensive training courses, seminars, workshops, technical advice and assistance within the factory. Promotional programmes including exhibitions of well designed local and foreign products, design awards and design competitions are organised on a regular basis for the companies in the scheme.
The second component is being accepted as a potential supplier of the wood furniture business needs of government departments and services. The actual government orders for furniture are sent to GFSB, the company managing the scheme. The orders come in the form of contracts from three co-ordinating government agencies: education, central and special projects. GFSB is awarded the contracts and then allocates orders amongst the vendors. The vendors deliver the orders directly to the particular government department or service which placed them. Provided that the goods delivered are satisfactory, the supplier company is paid immediately by GFSB, who is then reimbursed by the Ministry of Finance.

The prices for the products are set by the government, so there is no competition between potential vendors on price. There are many contracts for a variety of products that become available at varying times over the year. Vendors have to satisfy minimum quality and delivery performance standards, but one of the scheme's objectives is to improve these continually. The annual amount of business is limited, but all vendors are seeking more orders as more orders mean more profit. Government sponsored schemes are often thought to be prone to political bias and to the growth of envy among vendors, especially when some get more business than others. Any co-ordinator involved in such schemes would agree that an equitable order allocation among vendors is the most vital and difficult task to perform. Currently the allocation of business has had to be based mainly on subjective judgements which might be over-influenced by only one of the possible vendor performance criteria and which could differ between the GFSB personnel operating the scheme. The business allocation procedure must be fair and equitable in the eyes of all vendors but, in addition, it must achieve all of the Umbrella scheme's stated goals, otherwise it could easily collapse. It is particularly important for each SME to have a continuing amount of business as it provides the ability to maintain the company operations and have jobs to offer in order to keep its pool of skilled workers. Otherwise, expertise could be lost.

Vendor rating methods

The first step in any vendor rating procedure is to establish the criteria to be used for assessing the vendors. The pioneering effort in this regard was the work of Dickson.1 In his study, Dickson validated 23 criteria for assessing vendors' performance, as listed in Table 1, based on the responses of 170 purchasing agents and managers. Dickson asked the respondents to assess the importance of each criterion on a five point scale: of extreme, considerable, average, slight and of no importance. The five points were given scores of 4 down to 0 and the average values over all the respondents then computed. The resulting values are also given in Table 1. They provide some indication of the relative importance of the different criteria.

Table 1  Dickson's criteria for assessing vendors

                                      Dickson's 1966 study(a)       1991 exercise       Number of
Criteria                              Mean   Rank   Evaluation      Mean    Rank        articles(b)
Quality                               3.51    1     EI              3.6      1            40
Delivery                              3.42    2     CI              2.9      4=           44
Performance history                   3.0     3     CI              3.4      2             7
Warranties and claim policies         2.84    4     CI              2.9      4=            0
Production facilities/capacity        2.78    5     CI              2.8      6            23
Price                                 2.76    6     CI              3.1      3            61
Technical capability                  2.55    7     CI              2.4     10=           15
Financial position                    2.51    8     CI              2.5      8=            7
Bidding procedural compliance         2.49    9     AI              2.5      8=            2
Communication system                  2.43   10     AI              -        -             2
Industry reputation and position      2.41   11     AI              2.4     10=            8
Desire for business                   2.26   12     AI              2.6      7             1
Management and organisation           2.22   13     AI              2.3     12            10
Operating controls                    2.21   14     AI              1.9     14             3
Repair service                        2.19   15     AI              1.6     15             7
Attitude                              2.12   16     AI              -        -             6
Impression                            2.05   17     AI              1.4     17             2
Packaging ability                     2.01   18     AI              -        -             3
Labour relations record               2.00   19     AI              1.1     18             2
Geographical location                 1.87   20     AI              1.5     16            16
Amount of past business               1.60   21     AI              2.1     13             1
Training aids                         1.54   22     AI              -        -             2
Reciprocal arrangement                0.61   23     SI              -        -             2

(a) Source: Dickson (1966). EI = extreme importance, CI = considerable importance, AI = average importance, SI = slight importance.
(b) Number of articles in Weber, Current and Benton's 1991 review of 74 papers mentioning the criterion.

Dickson regarded the ability of each vendor to meet the quality specification consistently as being an Extremely Important criterion in vendor selection. The second to eighth ranked criteria are graded as of Considerable Importance and the ninth to twenty-second ranked criteria as being of Average Importance. The final criterion, 'reciprocal arrangement or the future purchases each vendor will make from your firm', was categorised as of only Slight Importance to the assessment of vendors.

One of the authors was involved with a repetition of this exercise in a major engineering company in 1991. The study had the eight senior buyers in the company making the assessments. It found very similar results to Dickson's study, as shown by comparing the ranks and mean scores given in the final two columns of Table 1 with those of Dickson. Interestingly not only the ordering but the relative weights are surprisingly similar. The top six criteria are the same, although price has moved up in importance. Five criteria, the communication system, attitude, packaging ability, training aids and reciprocal arrangements, were not thought an issue of concern to that company. A survey of purchasing in the pharmaceutical industry, Hatherall,2 found that eight criteria were generally used. In order of priority they were quality, price, service, technical capability, financial strength, geographical location, reputation and reciprocal arrangements. Weber, Current and Benton3 reviewed 74 papers on vendor selection in the academic literature in terms of the criteria they used. They found that net price was the most discussed criterion, followed by delivery and quality, see the final column of Table 1. Production facilities and capacity, geographical position and technical capability were considered in about one quarter of the papers. The other Dickson criteria were mentioned less often. Warranties and claim policies received no mention at all.

Therefore, it can be concluded that, even though it is 30 years since the Dickson paper was published, the basic criteria for vendor selection are little changed. The Dickson criteria can still be regarded as a starting benchmark for assessing vendor selection by industrial purchasers. Of course there are variations in terms of the ranking of these criteria.

Having established the list of criteria to use, the simplest method for rating vendors is the categorical method. This involves assessing vendors' performance on each criterion in simple categorical terms such as 'good', 'neutral' and 'unsatisfactory'. The vendor's overall rating can then be based on the number of criteria ranked as good, neutral and unsatisfactory. These can then be interpreted as excellent, good, acceptable and unacceptable, for example, where excellent could be good for all criteria; good, a mixture of goods and neutrals but no unsatisfactory ranks; and unsatisfactory would be having more than some pre-set number of criteria ranked as unsatisfactory. It might be possible to establish clear rules for this aggregation but a 'good' score on any one criterion is treated as just as valuable as a 'good' score on any other criterion. This approach is largely an intuitive process relying heavily on memory, personal judgement and the experience and ability of the buyer. Consequently it is the least precise of the evaluation techniques. The main advantage of this method is that it is inexpensive and requires a minimum of data. The categorical method could work well for smaller firms without the resources to operate more sophisticated approaches, but with sufficient management experience and intuition to make good value judgements.

Dickson's research indicates that buyers perceive that each criterion has a different influence and effect on the overall performance of the buyer; hence, any overall assessment has to be based on using differential weights. The categorical method treats all criteria as being of equal importance. The simple linear weighted average method assumes that a set of weights indicating the relative importance of each criterion can be determined, adding up to unity or 100%. If each vendor can be assessed on each criterion, say marked out of 100, a performance measure for each vendor can be obtained by summing over all criteria the products of the mark awarded and the weight for the criterion. The final results of these total weighted scores can then be used to rank the vendors. The difficulty is the determination of the weights to use. This has to be done subjectively by the management of each company for its own particular situation. Articles on the use of this method can be found in Wind and Robinson,4 Timmerman,5 Gregory,6 Cooper7 and Roberts.8

The Dickson 'scores' are indications of how important the respondents believed each criterion to be. The fifth and sixth of Dickson's criteria have virtually identical scores, of 2.78 and 2.76 respectively, so both are regarded as being of considerable importance in Dickson's terminology. However, this does not necessarily imply that a purchaser would rank a vendor scoring highly on production facilities/capacity (criterion 5) and poorly on price (criterion 6) as being equal to a vendor scoring poorly on production facilities and highly on price. Therefore Dickson's average scores cannot be treated as a basis for calculating the weights required for the simple weighted average ranking method.

A third possible approach for vendor rating is the cost-ratio method, which evaluates supplier performance using the tools of standard cost analysis. The total cost of each purchase to be evaluated is calculated as the vendor's selling price plus the buyer's internal operating costs that are associated with the quality, delivery and service elements of the purchase. Firstly, the internal costs associated with quality, delivery and service have to be determined and then expressed as a proportion of the total expected value of the firm's purchase. The sum of the three individual cost ratios (proportions) gives the overall cost ratio, which is then the proportional margin to be added to the vendor's quoted price.

In practice the choice of cost to be allocated depends on the products involved. However, quality, delivery, service and price are the most commonly employed categories. This method is a complex approach, requiring a comprehensive cost-accounting system to generate the precise cost data needed.

The categorical method is a qualitative approach, and so yields the least definite results. It requires the lowest effort of the three methods and has the lowest cost, but correspondingly produces the least reliable results. The cost-ratio method, on the other hand, yields results that tend to be extremely cost-control oriented, but which may not be useful in comparing vendor performances because of the difficulties of translating all aspects of vendor performance into precise cost figures. The set-up cost for this process can also be quite high if the requirements of the cost-ratio plan are not directly available in the existing cost accounting system. Furthermore, it can only ever deal with a small number of the criteria identified by Dickson. It cannot cope where the performance on a criterion has to be assessed qualitatively.

The advantage of the simple linear weighted average method is that it is potentially more precise than the categorical plan, but without the rigid accounting demands of the cost-ratio plan. This plan is extremely flexible and can accommodate any number of evaluation factors. However, it requires the purchaser to find appropriate weights for a large range of factors showing their comparative importance; not an easy task. It also has the same problems as the categorical method in scoring each vendor's performance on each criterion.

Thomson9 attempted to take account of the latter point, errors and uncertainties in scoring vendors' performance on the criteria. He argued that 'At best, for a given criterion, decision makers may be able only to accurately appraise a performance range within which an individual vendor actual performance may fall'. The decision maker still has to set the linear weights subjectively. Given a set of linear weights and the assumption that the scores on a criterion are random variates from a rectangular distribution over the range specified by the decision maker, it is possible by repeated random sampling to determine the distribution of the simple linear weighted average score for each vendor. The choice of vendors to use can then be made on the basis of the modal value, or by taking into account the variance as well as the mode. Despite its drawbacks, this remains the only paper on vendor rating to deal directly with the uncertainties in the scoring of performance.

Cook and Johnston10 pointed out that although a simple weighted average approach is commonly adopted for dealing with multiple criteria decision problems such as vendor selection and business allocation, and is simple to understand, it possesses at least two major weaknesses. The buyers are required to specify explicitly the numerical sizes of weights, using some arbitrarily chosen scale. This exercise, although done by consensus and backed up with best advice and relevant information, often results in varying opinions as to what the size of the weights should be. The method also treats the point ordinal scale on which vendors are ranked as if it were a cardinal (interval) scale. The overall score for a vendor is taken as a weighted sum of the rank positions which the vendor achieved. This, however, is generally not the intention of such a ranking. The numbers should be interpreted as rank positions only and not as the absolute worth of each vendor according to the criterion in question.

Cook and Johnston overcame these problems by ignoring the vendor rating issue. They modified the problem to consider directly the question of how to choose and how to allocate business between suppliers. They developed a linear programming model based on the Data Envelopment Analysis technique. This does not require the explicit specification of criteria weights nor that these weights be points on some arbitrarily chosen scale.

An alternative solution to these problems was taken by Lamberson et al,11 who considered that any vendor rating system should be entirely a quantitative procedure. This was necessary in their view to remove some of the bias that exists in many vendor selection processes. Their limited criteria were constrained to characteristics which enhance a vendor's ability to produce and deliver a product. They also introduced the notion of the criticality of a part based on the level of difficulty the vendor would have in producing the part, considering factors such as engineering, quality, material, safety and purchasing. The system devised, essentially a weighted average method, was also believed to be a self-correcting system by its constant updating of a vendor's status.

As can be seen from Dickson's list, some of the criteria for rating vendors, and hence for vendor selection, can be measured using qualitative methods, some using quantitative methods and some require employing both methods simultaneously. The restriction to quantitative measurement will clearly limit the number of criteria that can be covered and hence severely restrict the field of application of Lamberson et al's approach. Any vendor rating system has to be able to make use of both qualitative and quantitative data if it is to be of general use.

Some of the criteria can themselves be multi-faceted. Take for example the problem of assessing delivery performance. Cooper7 argued that there were at least two different factors to take into account in assessing delivery performance. The percentage of parts late on each order or line item should be taken into account as well as the actual number of parts delivered later than the supplier's promised date. He found that this proved to be efficient in assessing supplier performance. It was able to produce a more smoothly running production system because good performance suppliers can be rewarded with extra orders whilst the scarce purchasing department effort can be

concentrated on the problem suppliers of parts. Thus within each rating criterion there may be sub-criteria or factors that have to be taken into account differentially in determining the performance on the rating criterion. This may give rise to further weighting problems or significantly increase the number of criteria for which weights have to be subjectively determined.

The important issues raised by Cook and Johnston were addressed to some extent by Golany and Kress.12 They evaluated some of the currently known mathematical techniques for extracting weights from ratio-scale matrices, such as eigenvector, modified eigenvector, weighted least squares, direct least squares, logarithmic least squares and logarithmic least absolute values. Using a variety of performance characteristics they singled out the modified eigenvector method as the most effective method for calculating weights. The remaining five methods have different weaknesses and advantages but none of them is dominated by the others. The modified eigenvector method underlies a by now well known method for dealing with multi-criteria decision problems, initiated by Thomas Saaty13 and known as the Analytic Hierarchy Process method.

The approach taken in this study was firstly to determine a vendor rating procedure and then secondly to use this as input to a Goal Programming model for the allocation of contracts to the vendors in the Umbrella scheme, followed thirdly by a linear programming model if there were alternative solutions to the Goal Program. The Cook and Johnston approach of a linear programming model based on Data Envelopment Analysis was also evaluated later in the research. It was found not to give as good a performance as the first approach. Details will be provided in a later paper.

Analytic hierarchy process

The Analytic Hierarchy Process (AHP) is a theory of measurement to determine the relative importance of a set of activities or criteria. It provides a possible new approach, similar to the linear weighted average idea, which has the ability to generate weights from subjective assessments. It is a more systematic process for determining the weights, which still relies on subjective judgement to determine them. It does this by a series of pairwise comparisons of all the criteria. It is the use of these pairwise comparisons that is at the heart of the method. The method consists of three steps: structuring the problem hierarchy, the evaluation process, and calculating the weights and the consistency index of the judgements made.

To reduce the number of pairwise comparisons required, the method uses a hierarchy of elements (criteria) starting with a high level description which is broken down into finer detail at succeeding levels. The first stage in the process is to identify the levels and elements (criteria) in the hierarchy. These will be criteria, sub-criteria and factors by which the achievement of the criterion is measured. This is achieved by identifying and evaluating all the issues or attributes that influence and contribute to the solution of the problem. The second step is to formulate the questions that are to be used to generate the comparison between elements. This is done by asking the decision maker to compare the elements in paired comparisons using a scale from 1-9. A value of 1 means that the two elements are of equal importance in achieving the desired objective, a value of 5 that the first element is strongly favoured over the second, whilst the upper end value of 9 means that the first element is of absolute importance relative to the second.

The outcome of the first stage is the pairwise comparison matrix, an example of which is shown in Figure 1. The entries in the matrix are the values of comparison between row and column elements, using the scale of relative importance from 1-9 discussed above. The entry for the ith row and the jth column gives the importance of that row's criterion relative to the column's criterion. Therefore a good performance on Delivery, the criterion for the first row, is slightly preferred to one on Quality (shown by the value of 2), and of strong importance compared to the vendor having good facilities and technical capability (shown by the value of 5). A good performance on Quality, the criterion for the second row and column, is moderately more important than having good facilities and technical capability (shown by the value of 4). Having good facilities, the third row criterion, is weakly more important than good technical capability (value of 3). The decision makers only need to fill in the upper half of the comparison matrix. A value of 1 is assigned to the diagonal elements since delivery (row) is equally preferred to delivery (column). The elements in the lower half are the reciprocals of the corresponding elements in the upper half, as can be seen in the figure.

Figure 1  Example of the pairwise comparison matrix.

A mathematical procedure is then used to find the set of weights, w1, w2, ..., wn, associated with each of the n criteria that are consistent with the pairwise comparisons determined. The weights must add up to unity. This is based on solving what is known as the eigenvalue problem. Standard software is available to do this. Alternatively, in most cases, a simple approximation method can be used. Firstly, divide all elements in the matrix by the sum down the column. The weights are then the average values across the rows of these 'normalised elements'. The advantage of using the software is that it also provides tests that there is consistency in the relative pairwise rating of the criteria, through the Consistency Index (CI) and the Consistency Ratio (CR).

Only one published paper, Narasimhan,14 considered the possible use of AHP for supplier selection and vendor ranking. However, his assessment was on the basis of a hypothetical example only. His conclusions were that the approach facilitates the development of 'accurate' judgements and that the formalisation systematised what is largely a subjective decision. He also commented that the ease of the computations makes it possible to do sensitivity analysis quickly. However, he did not consider the many practical problems and issues in applying the method that many commentators have raised as important to its possible success and usefulness.

Ever since the AHP was introduced by Saaty there has been a wide discussion about the empirical effectiveness and theoretical validity of the technique; see Saaty15 for a review. One objection frequently raised is the impact of important criteria that have not been identified and thus have been excluded from the comparisons. This means that a row and column have been removed from the pairwise comparison matrix for each excluded criterion. The inclusion of these 'missing' criteria can lead to major changes in the relative weights for the existing criteria. Clearly it is necessary to ensure that as comprehensive a set of criteria as possible, which may later be reduced, is determined at the start of the process and that it includes all those likely to be important. Given the discussion in the section on vendor rating methods on the general agreement over time on the criteria to use in vendor rating, this particular criticism of AHP is unlikely to detract substantially from its possible usefulness in vendor rating. The criticisms of AHP may have validity in its application to some types of problem, one-off major strategic decisions for example. However, the purchasing function has to choose a subset of potential suppliers with whom to do business. It cannot avoid, at least implicitly, rating the vendors. It is at least well worth exploring how the method would work in a practical vendor rating and supplier selection situation. The remainder of this paper describes how the method was applied to the rating of vendors in an EDP as described earlier.

A systematic vendor ranking procedure

Establishing the vendor rating criteria

A total of 16 respondents participated in this study, all of whom were managers and supervisors of GFSB. They were first briefed about the overall objective of the study and then specifically on vendor rating. A set of potential criteria, together with the factors that might contribute to each criterion, was presented to each participant individually. This took the form of a simple list, produced by the researchers from vendor rating criteria reported in the literature, particularly relying on the pioneering work of Dickson (1966), see Table 1. The briefing included a short description of each criterion and the associated factors, plus the answering of any queries on them. The participants at this stage were merely asked to indicate which criteria and factors were important and relevant to GFSB in managing the Umbrella Scheme and to add any other criterion or factor that they thought might be useful. The whole process typically required an interview of one hour with each participant. This short time was possible because many of the participants had had some experience of evaluating vendor performance as part of their usual management role. The completed individual lists were then combined by the researchers to produce one master list containing all those criteria and factors indicated as relevant by any participant; this is shown in columns one and two of Table 2.

The master list was circulated to all 16 participants. The list contained columns three and four which enabled the participants to record their initial judgement on which criteria and factors should be used. A meeting was then held where all participants were allowed to discuss and openly criticise each of the criteria proposed. This took about three hours to complete, longer than had been expected. This was mainly due to confusion between participants on the definition and role played by each criterion and factor. This showed the importance of not just listing possible rating criteria but of producing clear definitions, so that the managers involved understand thoroughly what is being discussed.

A consensus agreement was eventually reached on 8 criteria to use and 13 factors to be taken into account in assessing those criteria. These are indicated by a yes in the final column of Table 2. One result from the discussion at the meeting was the addition of two extra associated factors for the technical capability criterion. These were technical problem solving ability and the range of products the vendor could make. A fuller definition of the 8 criteria and 13 associated factors is given in the Appendix.

The outcome of this first stage can thus be seen to be a Vendor Rating Hierarchy. This hierarchy contains all the criteria that will be used in the vendor rating process. Each level of the hierarchy goes into finer detail about the factors to be included. At each level the criteria or factors should be independent of each other, as this eases the making of judgements on the relative importance of the criteria. These judgements are made on each level in turn. The factors or sub-criteria at the second level are the alternative aspects of the associated criterion at the level above which need to be taken into account in assessing performance on that criterion. They can be regarded as alternative ways of measuring that criterion for the purposes for which the ratings will be used.

Table 2  The master list and final list of criteria and factors to use for vendor rating in this case

Evaluation criterion       Associated factors                   Included in final list
Delivery                                                        yes
Quality                                                         yes
                           Customer rejects                     yes
                           Factory audit                        yes
Facility                                                        yes
                           Machinery                            yes
                           Layout                               yes
                           Infrastructure                       yes
Technical capability(a)                                         yes
Management                                                      yes
                           Overall quality of management(b)     yes
                           Business skill                       yes
                           Operational controls                 no
                           Labour relations                     no
                           Organisational structure             no
Financial position                                              yes
                           Stability                            no
                           Strength                             no
                           Past performance                     no
Attitude(c)                                                     yes
                           Honesty                              yes
                           Procedural compliance                yes
Communication system                                            no
                           Telecommunications                   no
                           Record keeping                       no
Responsiveness                                                  yes
                           On quality problem                   yes
                           Urgent delivery                      yes
Desire for business                                             no
(Please add any other criteria which you believe relevant)

(a) Two associated factors were added: technical problem solving ability and the range of products the vendor could make.
(b) Defined as attitude to improving operations, co-operation in the scheme and desire to grow.
(c) GFSB preferred to label this as discipline rather than attitude.
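For readers who want to work with the agreed hierarchy programmatically, a minimal sketch of how the final list in Table 2 might be held as a data structure is shown below, using the factor names as they appear in the Appendix; Delivery and Financial are criteria assessed directly, without sub-factors.

```python
# The agreed vendor rating hierarchy: 8 criteria and 13 associated factors
# (Table 2 and the Appendix). An empty list means the criterion is scored directly.
VENDOR_RATING_HIERARCHY = {
    "Delivery": [],
    "Quality": ["Customer reject", "Factory audit"],
    "Facility": ["Machinery", "Layout", "Infrastructure"],
    "Technical capability": ["Technical problem solving", "Product range"],
    "Management": ["Attitude", "Business skill"],
    "Financial": [],
    "Discipline": ["Honesty", "Procedural compliance"],
    "Responsiveness": ["Quality problem", "Urgent delivery"],
}

n_factors = sum(len(factors) for factors in VENDOR_RATING_HIERARCHY.values())
print(len(VENDOR_RATING_HIERARCHY), "criteria,", n_factors, "factors")  # 8 criteria, 13 factors
```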

are those that are dif®cult to quantify and thus have to be Dickson criteria, except for the communication system
evaluated using qualitative judgement, which include criterion. There is however one new criterion, Responsive-
Discipline and Management. In this particular case the ness. Part of this corresponds to Dickson's repair service
managers determining the criteria viewed Quality as criterion. The addition of recognising the need for suppliers
conformance to speci®cation. It was something that was to respond more quickly to customer demands and deal
clear, either the item met the speci®cation or not. The more quickly with customer problems has been one of the
speci®cation of the particular table or desk, for example, major changes that has occurred over the thirty years since
was made in the contract. Therefore, in this case it was Dickson's research was carried out.
appropriate to assess quality as an objective measure.
Price is not an issue in this case in considering allocating
Determining the comparison matrix
business between vendors. Desire for business applies to all
the vendor companies taking part in the Umbrella scheme. Here the issue is to determine the comparison matrix, the
Similarly, the scheme is designed to develop the skills of managers' views on the degree of importance of each
the vendors, so being in the scheme indicates having a criteria over the others, and also for the alternative factors
reputation for wanting to improve oneself and a position in within each criterion. The values are obtained by asking the
the industry of wanting to improve, so as to compete in the respondents to evaluate the elements in a pairwise fashion
commercial market. Accepting the same warranty and with respect to their common parent criterion in a higher
claim policies was a requirement for being accepted as a level. The process starts by presenting the participants with
vendor in the scheme. Comparing Table 2 with the Dickson the full set of pairwise comparisons to be made. The
criteria of Table 1 shows that the criteria and associated comparisons that need to be made are developed from the
factors found for this situation cover the top sixteen of the problem hierarchy of the criteria and associated factors of

Table 2. The comparisons at the first level include all the 8 criteria identified. All the pairwise comparisons are presented to the managers concerned. The first part would be, for example, to compare delivery with all of the other 7 criteria as shown in Table 3. The process is continued by comparing quality with the remaining 6 criteria, excluding delivery which has already been done. The process continues with the remaining criteria. The table assumes, for example, that a good delivery performance is more important than a good quality performance, so that the comparison score will be one or more. If it is the other way round, then the process should start with comparing quality with the other 7 criteria, rather than delivery. In fact in this case, the discussion on establishing the criteria and factors had indicated that most taking part regarded delivery as the most important one to get right. The actual values determined from the pairwise comparisons of delivery with the other criteria then show which is the next criterion to consider. This is continued until all factors are compared with each other. The process continues at the second level by comparing for each criterion in turn its associated factors: for example, for the quality criterion, the relative importance of customer reject as a factor compared to factory audit is assessed, whilst for the facility criterion, the comparisons are machinery with layout, machinery with infrastructure and then layout with infrastructure.

Even though the task for respondents looks easy, it requires a lot of effort to accomplish. The respondents need to be trained so as to be certain of their roles. Training sessions were provided with a briefing on AHP, especially on how to code answers using the 1-9 scale, followed by several examples of how to perform the comparison between criteria. As stated earlier, AHP compares only two criteria at a time to decide the level of importance. An illustrative example would be 'how much more does delivery contribute to vendor performance compared to quality?'. If the answer is 'absolutely more important' then the respondent should circle '9' in the first row of Table 3, whilst if it is of equal importance then he/she should circle '1'. Some judgement is clearly required if it is weakly important, whether that is at the near equal end of the spectrum or at the near strong end, and so circling '3' or '4'. The same procedure applies to the factors for any criterion, for example, 'how much more important a customer reject is in influencing quality compared to factory audit'. If the answer is 'equally important', the value '1' in the appropriate row of the full version of Table 3 should be circled.

Since the comparison process takes a considerable time and it is important that it is done as carefully as possible, the participants were given a week to accomplish the tasks. The results for each pairwise comparison were collected and the simple arithmetic average across all respondents used to represent the general opinion of GFSB on weighting criteria for vendor rating. The results are then transformed into the form of a matrix, as in Table 4, by including the reciprocal values in the bottom half of the matrix. Given the time needed and the geographical separation of the managers participating in the exercise it was not possible to keep bringing them together for meetings to try to argue out a consensus view. It was felt that the averaging process used would achieve this satisfactorily. With a group meeting there is a danger that one person, because of his rank or experience, may gain an over dominance in a face-to-face meeting seeking consensus.

Calculating the weights

The weights are determined for each level in the hierarchy from the various comparison matrices using a standard piece of software based on the eigenvector method as discussed earlier. There are commercial programs available to do this, although it is quite easy to program. The output from this processing is the relative weighting of the different criteria and associated factors. The values are as in Figure 2. The values in the first level of the figure represent the normalised weights for the eight criteria; these are 0.336 for Delivery, 0.246 for Quality, 0.152 for Facility, 0.084 for Technical Capability, 0.067 for Financial, 0.048 for Management, 0.036 for Discipline and 0.031 for Responsiveness. So in rating the vendors GFSB regards achieving a good performance on Delivery as the most important criterion to be satisfied, followed by achieving a good Quality and having a good standard on Facilities.

Table 3  The relative importance of the criteria using AHP. (For relative importance, please circle in the appropriate column)

Factor      Scale: 1 = Equal, 3 = Weak, 5 = Strong, 7 = Very strong, 9 = Absolute      Factor

Delivery    1  2  3  4  5  6  7  8  9    Quality
Delivery    1  2  3  4  5  6  7  8  9    Facility
Delivery    1  2  3  4  5  6  7  8  9    Technical Capability
Delivery    1  2  3  4  5  6  7  8  9    Management
Delivery    1  2  3  4  5  6  7  8  9    Financial
Delivery    1  2  3  4  5  6  7  8  9    Discipline
Delivery    1  2  3  4  5  6  7  8  9    Responsiveness
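As an illustration of how the completed questionnaires are turned into a matrix like Table 4, the sketch below averages hypothetical upper-half judgements from two respondents (the study averaged across all 16 GFSB respondents) and fills the lower half with reciprocals; the criteria subset and the numbers are illustrative only.

```python
import numpy as np

criteria = ["Delivery", "Quality", "Facility"]  # illustrative subset of the 8 criteria
respondents = [                                 # invented judgements on the 1-9 scale
    {("Delivery", "Quality"): 2, ("Delivery", "Facility"): 5, ("Quality", "Facility"): 4},
    {("Delivery", "Quality"): 3, ("Delivery", "Facility"): 6, ("Quality", "Facility"): 3},
]

n = len(criteria)
A = np.ones((n, n))
for i in range(n):
    for j in range(i + 1, n):
        # simple arithmetic average of the judgements, as used in the study
        avg = np.mean([r[(criteria[i], criteria[j])] for r in respondents])
        A[i, j] = avg          # upper half: averaged judgement
        A[j, i] = 1.0 / avg    # lower half: reciprocal value, as in Table 4
print(A.round(2))
```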

Table 4a  The pairwise comparison matrix for the criteria

Vendor rating       D        Q        FA       TC       M        FI       DI       R
Delivery (D)        1        1.91     5.38     4.88     5.13     5.13     5.63     5
Quality (Q)         1/1.91   1        3.88     3.75     4.38     5        4.5      4.63
Facility (FA)       1/5.38   1/3.88   1        3.04     5.25     3.92     4.63     4.38
Tech. Cap. (TC)     1/4.88   1/3.75   1/3.04   1        2.29     2.34     3.04     3.16
Management (M)      1/5.13   1/4.38   1/5.25   1/2.29   1        1.1      1.81     1.47
Financial (FI)      1/5.13   1/5      1/3.92   1/2.34   1/1.1    1        4.38     4.38
Discipline (DI)     1/5.63   1/4.5    1/4.63   1/3.04   1/1.81   1/4.38   1        2.07
Responsive (R)      1/5      1/4.29   1/4.38   1/3.14   1/1.47   1/4.38   1/2.07   1

Table 4b  Pairwise comparison matrices for the factors on quality and management

Quality                CR       FA          Management             A        BS
Customer reject (CR)   1        2.29        Attitude (A)           1        3.88
Factory audit (FA)     1/2.29   1           Business skill (BS)    1/3.88   1

Table 4c  Pairwise comparison matrices for the factors on discipline and responsiveness

Discipline                 H        PC         Responsiveness          QP       UD
Honesty (H)                1        2.04       Quality problem (QP)    1        1.42
Procedural comp. (PC)      1/2.04   1          Urgent delivery (UD)    1/1.42   1

Table 4d  Pairwise comparison matrix for the factors on technical capability

Technical capability (TC)          TPS      PR
Technical problem solving (TPS)    1        4.38
Product ranges (PR)                1/4.38   1

Table 4e  Pairwise comparison matrix for the factors on the facility criterion

Facility             M        L        I
Machinery (M)        1        3.75     4.63
Layout (L)           1/3.75   1        1.71
Infrastructure (I)   1/4.63   1/1.71   1
These three criteria in total would be given a weight of 0.734 (or 73%) compared to a total of only 0.266 (or 27%) for the other five criteria. Put another way, it is three times more important to achieve a high standard on Delivery, Quality and Facility than on the other five criteria together. Indeed, achieving a good standard on Quality alone is better than achieving a good standard on all of the five criteria Technical Capability, Financial, Management, Discipline and Responsiveness. The consistency index and consistency ratio for the solution are 0.122 and 0.086 respectively, which satisfy the acceptable level for consistency in the judgements made.

The second level gives the normalised values for all the thirteen factors. The sum of the weights for the factors for each criterion must add up to unity. So the quality criterion performance will be made up from weighting Customer Reject performance by 0.696 and the Factory Audit performance by 0.304. The values in the bottom level are the global weights for each of the thirteen factors; they are the factor weight multiplied by the criterion weight, so for the Customer Reject factor the value is 0.696 times 0.246. As the actual performance data is collected for the factor values, these weights in the bottom level of Figure 2 can be used directly to calculate the overall rating of the vendor, provided a performance 'score' can be derived for each factor.

Figure 2  Results of the AHP analysis.

Scoring factor performance

The next step requires the managers to assess the performance of all vendors on the thirteen factors identified as important for vendor rating. In the particular situation studied, this performance evaluation is quite a difficult and fuzzy activity, since it involves measuring performance using subjective data and different officers assessing different vendors. A major problem was thus to ensure consistency between the different company officers and to avoid any bias creeping in. A set of standard guidelines was set up after discussions with the managers of the company. The results are thus their own views but guided by the researchers. It was agreed that all performance scores would be based on an eleven point grade scale. Each grade would have an adjectival descriptor and an associated point score or range of point scores. The managers preferred in the first

instance to make their judgements on the qualitative scale of adjectival descriptors, and then after this consider the grade point to award if there was a choice. These general performance score guidelines are given in Table 5. Therefore each vendor can be awarded a 'score' from 0 to 10 on each factor. Multiplying the score on each factor by the weights given at the bottom of Figure 2 will give a vendor rating score of between 0 and 10 for each vendor; the closer the rating to 10, the better the overall performance of that vendor.
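A small sketch of the arithmetic just described: the global (effective) weight of each factor is its factor weight multiplied by its criterion weight, and a vendor's rating is the weighted sum of its 0-10 factor scores. The Quality figures (0.246, 0.696 and 0.304) are those reported above; the two factor scores are invented for illustration.

```python
# Global weight = criterion weight x factor weight (values reported in the text).
quality_weight = 0.246
factor_weights = {"Customer reject": 0.696, "Factory audit": 0.304}
global_weights = {f: quality_weight * w for f, w in factor_weights.items()}
# -> roughly 0.171 for Customer reject and 0.075 for Factory audit,
#    matching the effective weights shown later in Table 7.

# A vendor's rating is the weighted sum of its 0-10 factor scores.
scores = {"Customer reject": 8, "Factory audit": 7}   # illustrative scores
quality_contribution = sum(global_weights[f] * s for f, s in scores.items())
print(round(quality_contribution, 3))                 # Quality's contribution to the overall rating
```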

Table 5  Vendor criteria score guideline

Point   Grade            Description
10      Exceptional      demonstrates substantially excellent performance; has been in the excellence band for the last 12 months
9/8     Excellence       exceeds GFSB and customer expectations, demonstrates extra effort; is superior to the vast majority of vendors
7/6     Good             meets GFSB and customer expectations
5       Acceptable       meets minimum GFSB requirements
4/3     Needs attention  overall performance does not meet GFSB and customer minimum acceptable level
2/1     Poor             overall performance is well below GFSB and customer acceptable level; is inferior to the vast majority of vendors
0       Bad              performance has not improved; has been in the poor band for the last 12 months

It is clearly necessary to clarify further what 'excellence' in performance compared to 'good' means for any particular factor. Further detailed discussions led to agreement on this. The process was firstly to see what were the sources of the faults identified for that factor and how they might be measured. It is important to realise that such faults may be measured on a multi-dimensional basis rather than one single scale. An example is given in Table 6 for the Quality: Customer Rejects factor. This was concerned with the occurrence of defective goods. The sources were goods returned by the customer within the 8 months warranty period and finished goods at the vendor's premises that could not be despatched because they failed to meet the quality requirements and thus required some re-work. The impact of defects was measured on three dimensions: the quantity of goods defective, the type of defect occurring and the seriousness of the defect. Each of the three can be measured quantitatively. However, the performance on each needs to be combined to give an overall 'score' for customer rejects. So four classes of performance for each measurement dimension were defined: none, low, acceptable and high for quantity of defects; none, minimum, acceptable and high for type of defect in terms of the effect on the item's ability to perform the role for which it was designed; and none, insignificant, minor and major for the seriousness of the defect. The top and bottom points of 10 and 0 have the meaning given in Table 6. The grades 1-9 are defined in terms of the categories on the three measurement dimensions as given in Table 6. For example, if a vendor scores 'low' for Quantity of defects, 'minimum' for Type of defect and 'insignificant' for Seriousness of defect, then a score value of '8' or grade 'excellence' is awarded to that vendor. The detailed guidelines for the other twelve factors are given in summary in the Appendix.

Vendor rating based on total criteria weighted score

Simple score sheets were provided to assist the managers to record the scores for each vendor on the thirteen factors. Once the scores for each factor have been determined, it is relatively easy to calculate the resulting vendor rating score. An example of this is shown in Table 7, for the best of the vendors. Mathematically, the vendor rating is equivalent to the sum of the products of each factor weight and the vendor performance score on that factor. The vendor rating value for vendor 37 is obtained by summing the products of the respective elements in columns 3 and 4 for each row, given in the final column, over all the rows to give a value of 7.847.

The same procedure was carried out for the remaining 61 vendors; the results are in Table 8. The vendor with the highest vendor rating value should be regarded as the best performing vendor and the rest can be ranked accordingly. It can be seen that many of the vendors end up with a low performance. A partial validation of the process just described is that the top three vendors were indicated as numbers 37, 66 and 23, which are generally regarded by the GFSB management as the best vendors and receive more business than any other individual vendor, whilst the bottom three, numbers 19, 59 and 67, are generally regarded as the worst. The AHP analysis, however, suggests that vendor 19 is noticeably better than 59 and only negligibly worse than vendors 12, 38, 61 and 27.

Table 6  Quality: customer reject

Two sources of customer reject are:
  Goods returned within 8 months warranty
  Collection delayed due to quality problem

Measured by quantity of defect, type of defect and the seriousness of the defect.

Type of measurement       Scale of measurement
Quantity of defect        None, Low, Acceptable, High
Type of defect            None, Minimum, Acceptable, High
Seriousness of defect     None, Insignificant, Minor, Major

Point   Grade            Quantity      Type          Seriousness
9       Excellence       None          None          None
8       Excellence       Low           Minimum       Insignificant
7       Good             Low           Acceptable    Insignificant
6       Good             Low           Acceptable    Minor
5       Acceptable       Acceptable    Acceptable    Minor
4       Needs attention  Acceptable    High          Minor
3       Needs attention  Acceptable    Acceptable    Major
1       Poor             High          High          Major
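A sketch of how the Table 6 guideline could be applied mechanically is shown below: it simply looks up the combination of the three measurement categories in the printed grid. Combinations not listed in Table 6, and the 10 and 0 grades defined over the last 12 months, are left to the assessing officer.

```python
# (quantity, type of defect, seriousness) -> point score, as printed in Table 6.
CUSTOMER_REJECT_GRID = {
    ("None", "None", "None"): 9,
    ("Low", "Minimum", "Insignificant"): 8,
    ("Low", "Acceptable", "Insignificant"): 7,
    ("Low", "Acceptable", "Minor"): 6,
    ("Acceptable", "Acceptable", "Minor"): 5,
    ("Acceptable", "High", "Minor"): 4,
    ("Acceptable", "Acceptable", "Major"): 3,
    ("High", "High", "Major"): 1,
}

def customer_reject_score(quantity, defect_type, seriousness):
    # Returns None for combinations the grid does not cover, so that they can
    # be referred back to the assessing officer for a judgement.
    return CUSTOMER_REJECT_GRID.get((quantity, defect_type, seriousness))

print(customer_reject_score("Low", "Minimum", "Insignificant"))  # 8
```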

Table 7 Rating of vendor number 37

Criteria Sub-criteria Effective weight Criteria score Sub-Total


Quality Customer Reject 0.171 8 1.368
Factory Audit 0.075 7 0.525
Responsive Urgent Delivery 0.013 9 0.117
Quality Problem 0.018 9 0.162
Discipline Honesty 0.024 7 0.168
Procedural Compliance 0.012 6 0.072
Delivery 0.336 7 2.352
Financial 0.067 9 0.603
Management Attitude 0.038 7 0.266
Business Skill 0.010 9 0.09
Tech. Capability Tech. Prob. Solving 0.068 9 0.612
Product Ranges 0.016 9 0.144
Facility Machinery 0.102 9 0.918
Infrastructure 0.02 9 0.18
Layout 0.03 9 0.27
Total Vendor Rating 7.847
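The calculation in Table 7 can be checked with a few lines of code: multiplying each effective weight by the corresponding score and summing reproduces the rating of 7.847 for vendor 37.

```python
# (effective weight, score) pairs taken directly from Table 7 for vendor 37.
vendor_37 = [
    (0.171, 8), (0.075, 7),             # Quality: customer reject, factory audit
    (0.013, 9), (0.018, 9),             # Responsiveness: urgent delivery, quality problem
    (0.024, 7), (0.012, 6),             # Discipline: honesty, procedural compliance
    (0.336, 7),                         # Delivery
    (0.067, 9),                         # Financial
    (0.038, 7), (0.010, 9),             # Management: attitude, business skill
    (0.068, 9), (0.016, 9),             # Technical capability: problem solving, product range
    (0.102, 9), (0.020, 9), (0.030, 9), # Facility: machinery, infrastructure, layout
]

rating = sum(weight * score for weight, score in vendor_37)
print(round(rating, 3))  # 7.847
```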

It was pointed out in Table 5, discussing the general guidelines for vendor rating, that a 'score' of 5.0 represented the level that the company found acceptable. Table 8 shows that 20 out of the 68 vendors failed to achieve this level. In a normal commercial situation, provided there is sufficient capacity amongst the remaining vendors, all those failing to achieve the '5.0' level would be excluded from receiving business. However, unlike a normal commercial situation, this scheme was also aimed at developing vendors as part of an entrepreneurial development programme to improve the indigenous furniture industry. In this situation, the vendor rating assists in determining those vendors most in need of aid, management assistance as well as development aid. The actual allocation of the different classes of business between all of the 62 vendors is a very complex multi-objective allocation process. This can be formulated as a two stage Goal Programming and Linear Programming model. But how to do this is a task for a further paper.

Conclusions

The analysis and results presented above have demonstrated that the process presented is a useful and practical method for carrying out vendor rating for a commercial organisation. It has the advantage of being a more formal and systematic method than existing approaches. It allows for a wider range of options and thus relative weightings than, for example, the simple categorical methods based on terms such as good, neutral or unsatisfactory. Unlike simple linear weighted averaging methods, which require the weights to be provided directly by the managers, the weights are determined by the system on the basis of simple pairwise choices.

Table 8  The results of rating for all of the vendors

Vendor k  Rating average    Vendor k  Rating average    Vendor k  Rating average    Vendor k  Rating average
37 7.8 63 5.9 33 5.4 48 4.7
66 7.3 21 5.9 1 5.4 54 4.7
23 6.8 46 5.9 10 5.4 51 4.6
25 6.4 52 5.9 64 5.3 62 4.5
16 6.2 28 5.8 56 5.3 17 4.5
45 6.2 53 5.8 50 5.2 65 4.5
18 6.2 5 5.7 24 5.2 57 4.4
43 6.1 3 5.7 44 5.2 36 4.3
31 6.1 11 5.6 4 5.2 40 4.3
39 6.1 60 5.6 55 5.2 26 4.1
22 6.1 49 5.6 32 5.1 27 3.7
6 6.0 15 5.6 2 5.1 61 3.7
13 6.0 8 5.5 30 5.0 38 3.7
41 6.0 58 5.5 42 5.0 12 3.7
34 6.0 14 5.5 47 4.9 19 3.6
7 5.9 20 5.5 29 4.8 59 2.2
9 5.9 35 5.4 68 4.7 67 2.1
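A sketch of how the figures in Table 8 are used: rank the vendors by rating and flag those below the 5.0 level that Table 5 defines as acceptable. Only two of the ratings from Table 8 are shown; the rest would be entered in the same way.

```python
# vendor number -> rating average, as read from Table 8 (remaining vendors omitted here).
ratings = {37: 7.8, 67: 2.1}

ranked = sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)
below_acceptable = [vendor for vendor, r in ratings.items() if r < 5.0]
print(ranked)            # best performing vendor first
print(below_acceptable)  # vendors most in need of development assistance
```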

The analysis in this case demonstrated that the company management believed that 8 criteria together with 13 factors should be taken into account in assessing vendors for an EDP. Delivery performance was most important in this situation because most of the business given to the vendors originated from government agencies where the prices are fixed and prompt deliveries are vital. The importance of the Quality, Facility and Technical Capability factors reflects the objective of the scheme of assisting entrepreneurs to upgrade themselves in terms of production, better product quality and having better technical facilities and knowledge. The other criteria are regarded as less important. They still play a role, however, in the vendor rating exercise. They have value because each vendor must achieve some minimum level of attainment on them to be registered in the scheme.

A number of general lessons were learned from this research that might help others in the implementation of vendor rating using the Analytic Hierarchy Process. These are:

• To ensure good results, AHP has to be conducted on the basis of face to face study and discussion. It cannot be carried out effectively as a postal questionnaire. This is because maximum interaction is necessary to ensure respondents understand their functions and how to make comparisons between criteria.
• All information on definitions, questions and procedures must be made clear to respondents.
• Training is usually required to ensure respondents understand the definitions and guidelines and what is involved, especially on how to make the comparison between criteria. In fact the comparison process is the most critical and important aspect of an AHP study.
• Allowing enough time for respondents to complete the comparison matrix is important to ensure the success of the study.

The results in this case gave a greater difference between the relative weights than implied by Dickson's original study, for the reasons described earlier. It should be noted that price did not enter this assessment since the prices for the different goods were set by the government once per year and were given to any vendor, whatever the quality of product produced. It is likely that carrying out a similar exercise in other situations will give a different set of criteria and associated factors and, of course, a different set of weights. However, the process described can easily be applied to any vendor rating situation, irrespective of whether measurement has to be done quantitatively or qualitatively.

Appendix
The measurement guidelines for the factors in the vendor rating

Quality: Customer Reject
  Definition: two sources of customer rejects are goods returned within the 8 months warranty and collection delayed due to quality problems.
  Measurements: Quantity of defective pieces (None, Low, Acceptable, High); Type of defect (None, Minimum, Acceptable, High); Seriousness of defect (None, Insignificant, Minor, Major).

Quality: Factory Audit
  Definition: scored based on the monthly factory audit performed by GFSB's officers. Measured by violations of the standard specification of raw material, assembly and finishing activities.
  Measurements: Number of violations (Low, Acceptable, High); Seriousness of violation (Insignificant, Minor, Major).

Delivery
  Definition: number of cancelled and delayed orders where the vendor was unable to fulfil the delivery commitment.
  Measurements: Cancelled orders (None, Low, High); Delayed orders (Low, Acceptable, High).

Responsiveness: Urgent Delivery
  Definition: delivery within a two weeks lead-time. Measured by delays beyond two weeks.
  Measurements: Frequency of delays (None, Low, Sometimes, High); Duration of delays (number of weeks).

Responsiveness: Quality Problem
  Definition: defined by response time, the time taken by the vendor to take action on the complaint, and turn around time, the time taken to solve the quality problem.
  Measurements: Response time (number of days); Turn around time (number of weeks).

Discipline: Honesty
  Definition: determined by the vendor's honesty on transactions, commitment and negotiation related to the scheme and customer. Subjective assessment by GFSB management.
  Measurements: Frequency of untruths (Never, Low, High); Seriousness (Insignificant, Minor, Serious).

Discipline: Procedural Compliance
  Definition: vendor attitude toward compliance with the rules, guidelines and policies of the scheme.
  Measurements: Frequency of violations (None, Low, Normal, High); Impact of violations (Minimal, Normal, Serious).

Management: Attitude
  Definition: vendor attitude towards improvements (initiative, willingness, etc.), co-operation, and business (positive thinking, hardworking, ambitious, etc.).
  Measurement: grading on all three separately (Strong, Adequate, Weak).

Management: Business Skill
  Definition: vendor's skills in terms of customer service, managing employees and managing processes.
  Measurement: grading on all three (Effective, Adequate, Weak).

Technical Capability: Technical Problem Solving
  Definition: vendor ability to provide corrective and preventive action on technical problems.
  Measurements: Ability to find a solution (High, Normal, Low); Completeness of solution (Full, Temporary fix, Unable).

Facilities: Machinery
  Definition: amount of machinery, range and level of automation, and machine maintenance.
  Measurements: Amount of machinery (Ample, Adequate, Inadequate); Range of machinery (Wide, Normal, Narrow); Maintenance (Well, Adequate, Inadequate).

Facilities: Layout
  Definition: building structure, material storage, production process and space availability.
  Measurements: Building structure (Closed, Partially closed, Mainly open); Material storage (Accurate/clean/safe, Normal, Messy/unsafe); Production process (Smooth/clean/spacious, Normal, Jam/messy).

Facilities: Infrastructure (for furniture manufacturing operation)
  Definition: critical infrastructure (communication, modes, location accessibility, etc.) and basic infrastructure (labour availability, environmental control, transportation means).
  Measurements: Critical infrastructure (Strong, Adequate, Weak); Basic infrastructure (Strong, Adequate, Weak).

For the above factors, the scoring guidelines (see Table 5) were based on appropriate combinations of the three categories for each type of measurement. The scoring guidelines for the remaining two factors were defined directly.

Technical Capability: Product Range
The range of products manufactured by the vendor. Measured by the type, variety and features of the product.

Points   Grade            Definition
9        Excellence       Very wide product range
7        Good             Wide, has many non-UMSC products
5        Acceptable       Mainly (about 90%) UMSC products
3        Needs attention  Only (100%) UMSC products
1        Poor             Only a very narrow range of UMSC products

Financial

Points   Grade            Debt/equity ratio   Paid-up capital
9        Excellence       < 1.0               > 500 K
8        Good             1.0-1.5             > 500 K
7        Good             < 1.0               100 K-500 K
6        Good             < 1.0               < 100 K
5        Acceptable       1.0-1.5             100 K-500 K
4        Needs attention  1.0-1.5             < 100 K
3        Needs attention  > 1.5               100 K
1        Poor             > 1.5               < 100 K
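A sketch implementing the financial scoring grid above: the debt/equity and paid-up capital bands follow the printed table ('K' is taken to mean thousands), and the capital band for the '> 1.5, 100 K' row is assumed, hypothetically, to mean the 100 K-500 K band, since the printed table does not make this explicit.

```python
def financial_score(debt_equity_ratio, paid_up_capital):
    """Map a debt/equity ratio and paid-up capital to the points in the appendix grid."""
    # Band the two inputs as in the printed table ('K' read as thousands).
    if debt_equity_ratio < 1.0:
        de = "low"
    elif debt_equity_ratio <= 1.5:
        de = "mid"
    else:
        de = "high"
    if paid_up_capital > 500_000:
        cap = "large"
    elif paid_up_capital >= 100_000:
        cap = "medium"
    else:
        cap = "small"

    grid = {
        ("low", "large"): 9, ("mid", "large"): 8,
        ("low", "medium"): 7, ("low", "small"): 6,
        ("mid", "medium"): 5, ("mid", "small"): 4,
        ("high", "medium"): 3,   # assumed reading of the '> 1.5, 100 K' row
        ("high", "small"): 1,
    }
    return grid.get((de, cap))   # None where the printed grid gives no score

print(financial_score(0.8, 600_000))  # 9
```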

Acknowledgement — The authors would like to acknowledge the co-operation of Guthrie Furniture Private Limited Company (GFSB) and its managers in this research. The views expressed in this paper are those of the authors alone.

References

1 Dickson GW (1966). An analysis of vendor selection systems and decisions. J Purchasing 2: 5-17.
2 Hatherall DA (1988). Purchasing in the pharmaceutical industry. Unpublished MPhil Thesis, Department of Management Science, Lancaster University, UK.
3 Weber CA, Current JR and Benton WC (1991). Vendor selection criteria and methods. Eur J Opl Res 50: 2-18.
4 Wind Y and Robinson PJ (1968). The determinants of vendor selection: the evaluation function approach. J Purchasing and Materials Mgmt August: 29-41.
5 Timmerman E (1986). An approach to vendor performance evaluation. J Purchasing and Materials Mgmt Winter: 2-8.
6 Gregory RE (1968). Source selection: a matrix approach. J Purchasing and Materials Mgmt Summer: 24-29.
7 Cooper S (1977). A total system for measuring delivery performance. J Purchasing and Materials Mgmt Fall: 22-26.
8 Roberts BJ (1978). A vendor delivery rating model. J Purchasing and Materials Mgmt Fall: 12-16.
9 Thomson KN (1990). Vendor profile analysis. J Purchasing and Materials Mgmt Winter: 11-18.
10 Cook WD and Johnston DA (1992). Evaluating suppliers of complex systems: a multiple criteria approach. J Opl Res Soc 43: 1055-1061.
11 Lamberson LR, Diederich D and Wuori J (1976). Quantitative vendor evaluation. J Purchasing and Materials Mgmt Spring: 19-28.
12 Golany B and Kress M (1993). A multicriteria evaluation of methods for obtaining weights from ratio-scale matrices. Eur J Opl Res 69: 210-220.
13 Saaty TL (1980). The Analytic Hierarchy Process. McGraw-Hill: New York.
14 Narasimhan R (1983). An analytical approach to supplier selection. J Purchasing and Materials Mgmt Winter: 27-32.
15 Saaty TL (1994). Highlights and critical points in the theory and application of the Analytic Hierarchy Process. Eur J Opl Res 74: 426-447.

Received July 1998; accepted May 1999 after one revision
