3.1 Introduction
There is an increasing concern with measuring and comparing the
efficiency of organizational units such as local authority departments, schools,
hospitals, shops, bank branches and similar instances where there is a
relatively homogeneous set of units.
The usual measure of efficiency, i.e.,

$$\text{efficiency} = \frac{\text{output}}{\text{input}},$$
is often inadequate due to the existence of multiple inputs and outputs related
to different resources, activities and environmental factors.
Frontiers have been estimated using many different methods over the
past 40 years. The two principal methods are:
1. Data envelopment analysis (DEA) and
2. Stochastic frontiers,
which involve mathematical programming and econometric methods,
respectively.
We will examine three variations of the DEA methodology and study the
differences in assumptions of the three as well as compare the different
outcomes of each. The purpose of this analysis is to understand whether the
choice of methodology may have an impact on the outcome of an analysis.
The DMU with the greatest inherent efficiency in converting inputs $x_1, x_2, \ldots, x_n$ into outputs $y_1, y_2, \ldots, y_m$ is
identified, and then all other DMUs are ranked relative to that most efficient
DMU. For DMU$_0$, the basic CRS input-oriented model (so-called CCR, after
Charnes, Cooper, and Rhodes) is calculated as follows:
$$\max \; h_0 = \frac{\sum_{r=1}^{s} u_r y_{r0}}{\sum_{i=1}^{m} v_i x_{i0}}$$

subject to

$$\frac{\sum_{r=1}^{s} u_r y_{rj}}{\sum_{i=1}^{m} v_i x_{ij}} \le 1 \quad \text{for each unit } j; \qquad u_r, v_i \ge 0 \tag{3.1}$$
where the $u_r$ and $v_i$ are the weights attached to outputs $y_{rj}$ and inputs $x_{ij}$, chosen to maximize the efficiency score
$h_0$ for DMU$_0$. The constraint forces the efficiency score to be no greater than 1
for any DMU. An efficiency frontier is calculated, enveloping all data points
in a convex hull. The DMU(s) located on the frontier represent an efficiency
level of 1.0, and those located inside the frontier are operating at a less than
full efficiency level, i.e. less than 1.0. The above fractional program is
executed once for each participating DMU, resulting in the optimal weights
being determined for each DMU. Before solving the problem, the denominator
in the objective function is removed and instead an additional constraint is
added. Also, the original constraint is manipulated in order to convert the
fractional program to a linear program. These two steps result in the
following:
$$\max_{u,v} \; h_0 = \sum_{r=1}^{s} u_r y_{r0}$$

subject to

$$\sum_{i=1}^{m} v_i x_{i0} = 1,$$
$$\sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij} \le 0 \quad \text{for each unit } j; \qquad u_r, v_i \ge 0 \tag{3.2}$$
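To make the linearization concrete, the following is a minimal sketch of model (3.2) solved with SciPy's `linprog`; the function name and the toy data are illustrative assumptions, not part of the original text.

```python
# Sketch of the linearized CCR multiplier model (3.2) with scipy.
import numpy as np
from scipy.optimize import linprog

def ccr_multiplier(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.

    X: (n, m) array of inputs, one row per DMU.
    Y: (n, s) array of outputs, one row per DMU.
    """
    n, m = X.shape
    s = Y.shape[1]
    # variables: [u_1..u_s, v_1..v_m]; linprog minimizes, so negate u.y_0
    c = np.concatenate([-Y[j0], np.zeros(m)])
    A_ub = np.hstack([Y, -X])            # u.y_j - v.x_j <= 0 for every DMU j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None, :]   # v.x_0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun                      # optimal h_0 in (0, 1]

# hypothetical toy example: 4 DMUs, 2 inputs, 1 output
X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0], [5.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(ccr_multiplier(X, Y, j), 3) for j in range(4)])
```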
Finally, before solving, the linear program is converted to its dual for
computational efficiency reasons:
$$\min_{\theta,\lambda} \; \theta$$

subject to

$$\theta x_0 - X\lambda \ge 0, \qquad Y\lambda \ge y_0, \qquad \lambda \ge 0 \tag{3.3}$$

With the addition of slack variables, the dual problem becomes:

$$\min_{\theta,\lambda} \; \theta$$

subject to

$$\theta x_0 - X\lambda = s^-, \qquad Y\lambda = y_0 + s^+, \qquad \lambda, s^-, s^+ \ge 0 \tag{3.4}$$

A second phase then maximizes the sum of the slacks while holding $\theta$ at its optimal value $\theta^*$:

$$\max \; \sum_{i=1}^{m} s_i^- + \sum_{r=1}^{s} s_r^+ \tag{3.5}$$

subject to

$$\sum_{j=1}^{n} x_{ij}\lambda_j + s_i^- = \theta^* x_{i0}, \quad i = 1,2,\ldots,m;$$
$$\sum_{j=1}^{n} y_{rj}\lambda_j - s_r^+ = y_{r0}, \quad r = 1,2,\ldots,s. \tag{3.6}$$
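A corresponding sketch of the envelopment (dual) form (3.3), reusing the imports from the previous sketch; the second-phase slack maximization of (3.5)-(3.6) is omitted for brevity.

```python
# Sketch of the input-oriented CCR envelopment (dual) model (3.3).
def ccr_envelopment(X, Y, j0):
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([[1.0], np.zeros(n)])        # minimize theta
    A_in = np.hstack([-X[j0][:, None], X.T])        # X'lambda <= theta * x_0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])     # Y'lambda >= y_0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[j0]]),
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun                                  # theta*, in (0, 1]
```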
$$\min_{u,v} \; \frac{\sum_{i=1}^{m} v_i x_{i0}}{\sum_{r=1}^{s} u_r y_{r0}}$$

subject to

$$\frac{\sum_{i=1}^{m} v_i x_{ij}}{\sum_{r=1}^{s} u_r y_{rj}} \ge 1 \quad \text{for } j = 1, \ldots, n; \qquad u_r, v_i \ge \varepsilon > 0 \text{ for all } i \text{ and } r. \tag{3.7}$$

Again, the Charnes-Cooper (1962) transformation for linear fractional
programming yields model (3.8) (multiplier model) below, with associated
dual problem (3.9) (envelopment model), as in the following pair:
Multiplier model:

$$\min \; q = \sum_{i=1}^{m} v_i x_{i0}$$

subject to

$$\sum_{i=1}^{m} v_i x_{ij} - \sum_{r=1}^{s} u_r y_{rj} \ge 0, \quad j = 1,2,\ldots,n;$$
$$\sum_{r=1}^{s} u_r y_{r0} = 1; \qquad u_r, v_i \ge \varepsilon \;\; \forall r, i \tag{3.8}$$

Envelopment model:

$$\max \; \eta + \varepsilon\left(\sum_{i=1}^{m} s_i^- + \sum_{r=1}^{s} s_r^+\right)$$

subject to

$$\sum_{j=1}^{n} x_{ij}\lambda_j + s_i^- = x_{i0}, \quad i = 1,2,\ldots,m;$$
$$\sum_{j=1}^{n} y_{rj}\lambda_j - s_r^+ = \eta\, y_{r0}, \quad r = 1,2,\ldots,s; \qquad \lambda_j, s_i^-, s_r^+ \ge 0 \tag{3.9}$$
A second phase again maximizes the sum of the slacks at the optimal $\eta^*$:

$$\max \; \sum_{i=1}^{m} s_i^- + \sum_{r=1}^{s} s_r^+$$

subject to

$$\sum_{j=1}^{n} x_{ij}\lambda_j + s_i^- = x_{i0}, \quad i = 1,2,\ldots,m; \tag{3.10}$$
$$\sum_{j=1}^{n} y_{rj}\lambda_j - s_r^+ = \eta^* y_{r0}, \quad r = 1,2,\ldots,s.$$
Table 3.1 presents the CRS model in input- and output-oriented versions, each
in the form of a pair of dual linear programs.
Input-oriented

Envelopment model:
$$\min \; \theta - \varepsilon\left(\sum_{i=1}^{m} s_i^- + \sum_{r=1}^{s} s_r^+\right)$$
subject to
$$\sum_{j=1}^{n} x_{ij}\lambda_j + s_i^- = \theta x_{i0}, \quad i = 1,2,\ldots,m;$$
$$\sum_{j=1}^{n} y_{rj}\lambda_j - s_r^+ = y_{r0}, \quad r = 1,2,\ldots,s;$$
$$\lambda_j \ge 0, \quad j = 1,2,\ldots,n.$$

Multiplier model:
$$\max \; z = \sum_{r=1}^{s} u_r y_{r0}$$
subject to
$$\sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij} \le 0, \quad j = 1,2,\ldots,n;$$
$$\sum_{i=1}^{m} v_i x_{i0} = 1;$$
$$u_r, v_i \ge \varepsilon > 0.$$

Output-oriented

Envelopment model:
$$\max \; \phi + \varepsilon\left(\sum_{i=1}^{m} s_i^- + \sum_{r=1}^{s} s_r^+\right)$$
subject to
$$\sum_{j=1}^{n} x_{ij}\lambda_j + s_i^- = x_{i0}, \quad i = 1,2,\ldots,m;$$
$$\sum_{j=1}^{n} y_{rj}\lambda_j - s_r^+ = \phi y_{r0}, \quad r = 1,2,\ldots,s;$$
$$\lambda_j \ge 0, \quad j = 1,2,\ldots,n.$$

Multiplier model:
$$\min \; q = \sum_{i=1}^{m} v_i x_{i0}$$
subject to
$$\sum_{i=1}^{m} v_i x_{ij} - \sum_{r=1}^{s} u_r y_{rj} \ge 0, \quad j = 1,2,\ldots,n;$$
$$\sum_{r=1}^{s} u_r y_{r0} = 1;$$
$$u_r, v_i \ge \varepsilon > 0.$$
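As a companion to Table 3.1, here is a sketch of the output-oriented CRS envelopment model; under CRS the optimal $\phi^*$ is the reciprocal of the input-oriented $\theta^*$. It reuses the imports and conventions of the earlier sketches.

```python
# Sketch of the output-oriented CRS envelopment model from Table 3.1:
# maximize phi subject to X'lambda <= x_0 and Y'lambda >= phi * y_0.
def ccr_output_oriented(X, Y, j0):
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([[-1.0], np.zeros(n)])       # maximize phi
    A_in = np.hstack([np.zeros((m, 1)), X.T])       # X'lambda <= x_0
    A_out = np.hstack([Y[j0][:, None], -Y.T])       # phi*y_0 - Y'lambda <= 0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([X[j0], np.zeros(s)]),
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return -res.fun                # phi* >= 1; 1/phi* equals theta* under CRS
```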
As will be shown in the next chapter, this extra variable makes it possible to effect returns-to-scale
evaluations (increasing, constant and decreasing). The BCC model is therefore also
referred to as the VRS (Variable Returns to Scale) model and is distinguished
from the CCR model, which is referred to as the CRS (Constant Returns to
Scale) model.
The CRS model is designed with the assumption of constant
returns to scale. This means that there is no assumption that any positive or
negative economies of scale exist; it is assumed that a small unit should be
able to operate as efficiently as a large one, that is, constant returns to scale.
In order to address this, Banker, Charnes, and Cooper developed the BCC
model, also referred to as the VRS model. The VRS model is closely related to
the standard CRS model, as is evident in the dual of the BCC model:
$$\min_{\theta,\lambda} \; \theta$$

subject to

$$\theta x_0 - X\lambda = s^-,$$
$$Y\lambda = y_0 + s^+, \tag{3.11}$$
$$e\lambda = 1,$$
$$\lambda \ge 0, \; s^+ \ge 0, \; s^- \ge 0$$
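A sketch of (3.11): the only change from the CRS envelopment sketch above is the convexity constraint $e\lambda = 1$.

```python
# Sketch of the BCC/VRS envelopment model (3.11).
def bcc_envelopment(X, Y, j0):
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([[1.0], np.zeros(n)])
    A_in = np.hstack([-X[j0][:, None], X.T])               # X'lam <= theta*x_0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])            # Y'lam >= y_0
    A_eq = np.concatenate([[0.0], np.ones(n)])[None, :]    # sum(lambda) = 1
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[j0]]),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun
```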
If output increases by the same proportional change as inputs, there are constant returns to scale (CRS), sometimes
referred to simply as returns to scale. If output increases by less than that
proportional change, there are decreasing returns to scale (DRS). If output
increases by more than that proportion, there are increasing returns to scale
(IRS).
Example: if all inputs increase by a factor of 2, the new output
should be: twice the previous output under constant returns to scale (CRS);
less than twice the previous output under decreasing returns to scale (DRS);
and more than twice the previous output under increasing returns to scale (IRS).
Cross-efficiency analysis distinguishes between poor overall performers and good overall performers. The cross-efficiency [48] score of a DMU
represents how well the unit is performing with respect to the optimal weights
of other DMUs, and is a widely used measure [62] in DEA. The basic idea is to use DEA in a peer-appraisal instead
of a self-appraisal, so that a single efficiency value is given to each DMU. This technique can also identify
'overall' efficient and 'false positive' DMUs, and it selects appropriate targets for poorly performing units.
Cross Efficiency Models: Aggressive and Benevolent Approaches

Aggressive Model

$$\min \; \sum_{k=1}^{s} v_k \left( \sum_{j \ne p} y_{kj} \right)$$

subject to

$$\sum_{i=1}^{m} u_i \left( \sum_{j \ne p} x_{ij} \right) = 1,$$
$$\sum_{k=1}^{s} v_k y_{kj} - \sum_{i=1}^{m} u_i x_{ij} \le 0, \quad \forall j \ne p,$$
$$\sum_{k=1}^{s} v_k y_{kp} - \theta_p \sum_{i=1}^{m} u_i x_{ip} = 0,$$
$$v_k, u_i \ge 0 \quad \forall k, i,$$

where $\theta_p$ is the relative efficiency score of DMU $p$ obtained from the CCR model.

Benevolent Model

$$\max \; \sum_{k=1}^{s} v_k \left( \sum_{j \ne p} y_{kj} \right)$$

subject to

$$\sum_{i=1}^{m} u_i \left( \sum_{j \ne p} x_{ij} \right) = 1,$$
$$\sum_{k=1}^{s} v_k y_{kj} - \sum_{i=1}^{m} u_i x_{ij} \le 0, \quad \forall j \ne p,$$
$$\sum_{k=1}^{s} v_k y_{kp} - \theta_p \sum_{i=1}^{m} u_i x_{ip} = 0,$$
$$v_k, u_i \ge 0 \quad \forall k, i,$$

where $\theta_p$ is the relative efficiency score of DMU $p$ obtained from the CCR model.
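A simplified sketch of the cross-efficiency computation: each DMU is scored with every other DMU's optimal CCR weights, and column means give the cross-efficiency scores. This plain version deliberately ignores the aggressive/benevolent secondary objective above, which exists precisely because optimal CCR weights need not be unique.

```python
# Sketch of a basic cross-efficiency matrix (no secondary goal).
def ccr_weights(X, Y, p):
    """Optimal CCR multiplier weights (outputs u, inputs v) for DMU p."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[p], np.zeros(m)])
    A_eq = np.concatenate([np.zeros(s), X[p]])[None, :]
    res = linprog(c, A_ub=np.hstack([Y, -X]), b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return res.x[:s], res.x[s:]

def cross_efficiency_matrix(X, Y):
    n = X.shape[0]
    E = np.zeros((n, n))
    for p in range(n):                         # p: evaluating DMU
        u, v = ccr_weights(X, Y, p)
        for j in range(n):                     # j: evaluated DMU
            E[p, j] = (u @ Y[j]) / (v @ X[j])  # DMU j under p's weights
    return E          # column means are the cross-efficiency scores
```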
3.8 Scale Efficiency (SE) Model
In DEA, scale efficiency [15] is routinely calculated. This measure
may, however, tell us very little about whether a production unit is over- or
undersized. An empirical case is used to illustrate that, under some
circumstances, scale inefficiency may simply reflect that a production unit
is producing too little, given its use of factors of production, and not that it is
over- or undersized. A fictitious sample based on a production function
with variable returns to scale is used to demonstrate that, in small
samples with large deviations from the efficiency frontier and limited
variability between units in terms of factor proportions, scale efficiency may
not reflect very well how far the production units are from being of an
optimal size.
In practice, scale efficiency is obtained by solving the CRS and VRS models for the same unit (the VRS model adds the constraint $\sum_{j} \lambda_j = 1$, with $v_k, u_j \ge 0 \;\; \forall k, j$ in the multiplier form) and taking the ratio of the two scores:

$$SE = \frac{\theta^*_{CRS}}{\theta^*_{VRS}}, \qquad 0 < SE \le 1,$$

with $SE = 1$ indicating that the unit is operating at its most productive scale size.
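A sketch of the scale efficiency computation, reusing the `ccr_envelopment` and `bcc_envelopment` sketches from earlier in the chapter.

```python
# Scale efficiency as the ratio of the CRS and VRS envelopment scores.
def scale_efficiency(X, Y, j0):
    theta_crs = ccr_envelopment(X, Y, j0)   # CRS (CCR) score
    theta_vrs = bcc_envelopment(X, Y, j0)   # VRS (BCC) score
    return theta_crs / theta_vrs            # SE = 1: scale-efficient (MPSS)
```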
The super-efficiency data envelopment analysis (DEA) model is obtained when the
decision making unit (DMU) under evaluation is excluded from the reference
set. Because of the possible infeasibility of the super-efficiency DEA model, its
use has been restricted to situations where
constant returns to scale (CRS) are assumed. It is shown that one of the input-
oriented and output-oriented super-efficiency DEA models must be feasible
for any efficient DMU under evaluation if the variable returns to scale
(VRS) frontier consists of increasing, constant, and decreasing returns to scale
DMUs. We use both input- and output-oriented super-efficiency models to
fully characterize the super-efficiency. When super-efficiency is used as an
efficiency stability measure, infeasibility means the highest super-efficiency
(stability). If super-efficiency is interpreted as the input saving or output surplus
achieved by a specific efficient DMU, infeasibility does not necessarily mean
the highest super-efficiency.
$$\min_{\theta,\lambda} \; \theta$$

subject to

$$\theta x_0 - X\lambda = s^-, \qquad Y\lambda = y_0 + s^+, \qquad \lambda, s^-, s^+ \ge 0, \tag{3.12}$$

where the column corresponding to the DMU under evaluation is excluded from $X$, $Y$ and $\lambda$.
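A sketch of the super-efficiency computation: the evaluated DMU is dropped from the reference set, so efficient DMUs can score above 1; under VRS the program may be infeasible, as discussed above.

```python
# Sketch of input-oriented super-efficiency (CRS by default, VRS optional).
def super_efficiency(X, Y, j0, vrs=False):
    n, m = X.shape
    s = Y.shape[1]
    keep = np.arange(n) != j0
    Xr, Yr = X[keep], Y[keep]                 # reference set without DMU j0
    k = Xr.shape[0]
    c = np.concatenate([[1.0], np.zeros(k)])
    A_in = np.hstack([-X[j0][:, None], Xr.T])
    A_out = np.hstack([np.zeros((s, 1)), -Yr.T])
    A_eq = np.concatenate([[0.0], np.ones(k)])[None, :] if vrs else None
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[j0]]),
                  A_eq=A_eq, b_eq=[1.0] if vrs else None,
                  bounds=[(None, None)] + [(0, None)] * k, method="highs")
    return res.fun if res.success else np.inf   # inf marks infeasibility
```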
3.11 Window Analysis
In the examples of the previous sections, each DMU was observed only
once, i.e., each example was a cross-sectional analysis of data. In actual
studies, observations for DMUs are frequently available over multiple time
periods (time series data), and it is often important to perform an analysis
where interest focuses on changes in efficiency over time. In such a setting, it
is possible to perform DEA over time by using a moving average analogue,
where a DMU in each different period is treated as if it were a "different"
DMU. Specifically, a DMU's performance in a particular period is contrasted
with its performance in other periods in addition to the performance of the
other DMUs.
The window analysis technique represents one area for further research
extending DEA. For example, the problem of choosing the width for a window
(and the sensitivity of DEA solutions to window width) is currently
determined by trial and error. Similarly, the theoretical implications of
representing each DMU as if it were a different DMU for each period in the
window remain to be worked out in full detail.
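A sketch of the windowing mechanics described above; the `panel` layout (a mapping from period to that period's input and output arrays) is an assumption for illustration.

```python
# Window analysis sketch: DMU-period observations inside each window are
# pooled and treated as separate DMUs, then scored with any DEA scorer.
def window_scores(panel, width, scorer):
    periods = sorted(panel)
    results = {}
    for start in range(len(periods) - width + 1):
        win = periods[start:start + width]
        Xw = np.vstack([panel[t][0] for t in win])   # stack all DMU-periods
        Yw = np.vstack([panel[t][1] for t in win])
        results[tuple(win)] = [scorer(Xw, Yw, j) for j in range(len(Xw))]
    return results

# e.g. window_scores(panel, width=3, scorer=ccr_envelopment)
```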
The analysis so far does not require the use of prices or other "weights." Now we extend the
analysis to situations in which unit prices and unit costs are available. This
allows us to introduce the concepts of “allocative” and “overall” efficiency
and relate them to “technical efficiency” in a manner first introduced by M.J.
Farrell (1957).
For this introduction we utilize Figure 3.1, in which the solid line segments
connecting points ABCD constitute an "isoquant" or "level line" that
represents the different amounts of two inputs (x1, x2) which can be used to
produce the same amount (usually one unit) of a given output. This line
represents the "efficiency frontier" of the "production possibility set" because
it is not possible to reduce the value of one of the inputs without increasing the
other input if one is to stay on this isoquant. The dashed line represents an
isocost (= budget) line for which all (x1, x2) pairs on the line yield the same total
cost when the unit costs are c1 and c2, respectively. When positioned on C,
the total cost is k. However, shifting this budget line upward in parallel
fashion until it reaches a point of intersection with R would increase the cost
to $\hat{k} > k$. In fact, as this figure shows, k is the minimum total cost needed to
produce the specified output, since any parallel shift downward below C would
yield a line that fails to intersect the production possibility set. Thus, the
intersection at C gives an input pair (x1, x2) that minimizes the total cost of
producing the specified output amount, and the point C is therefore said to be
"allocatively" as well as "technically" efficient.
Now let R represent an observation that produced this same output
amount. The ratio $0 < OQ/OR \le 1$ is said to provide a "radial" measure of
technical efficiency, with $0 \le 1 - OQ/OR < 1$ yielding a measure of
technical inefficiency. Now consider the point P, which is at the intersection of
the cost line through C with the ray from the origin to R. We can also obtain a
radial measure of "overall efficiency" from the ratio $0 < OP/OR \le 1$. In
addition, we can form the ratio $0 < OP/OQ \le 1$ to obtain a measure of what
Farrell (1957) referred to as "price efficiency" but is now more commonly
called "allocative efficiency." Finally, we can relate these three measures to
each other by noticing that
$$\frac{OP}{OR} = \frac{OP}{OQ} \times \frac{OQ}{OR},$$

which we can verbalize by saying that the product of allocative and technical
efficiency equals overall efficiency in these radial measures.
[Figure 3.1: Isoquant through points A, B, C, D with an isocost line; points P, Q and R illustrate overall, allocative and technical efficiency.]
To implement these ideas we use the following model, as taken from Cooper,
Seiford and Tone (2000, p. 236),
$$\min \; \sum_{i=1}^{m} c_{i0} x_i$$

subject to

$$\sum_{j=1}^{n} x_{ij}\lambda_j \le x_i, \quad i = 1,\ldots,m; \tag{3.14}$$
$$\sum_{j=1}^{n} y_{rj}\lambda_j \ge y_{r0}, \quad r = 1,\ldots,s;$$
$$L \le \sum_{j=1}^{n} \lambda_j \le U, \qquad \lambda_j \ge 0,$$
where the objective is to choose the $x_i$ and $\lambda_j$ values to minimize the total cost
of satisfying the output constraints. The $c_{i0}$ in the objective represent unit
costs. This formulation differs from standard models, as in Fare, Grosskopf
and Lovell (1985, 1994), in that these unit costs are allowed to vary from one
DMU to another in (3.14). Finally, using the standard approach, we can
obtain a measure of relative cost (= overall efficiency) by utilizing the ratio:
$$\gamma = \frac{\sum_{i=1}^{m} c_{i0} x_i^*}{\sum_{i=1}^{m} c_{i0} x_{i0}} \le 1, \tag{3.15}$$

where the $x_i^*$ are the optimal values obtained from (3.14) and the $x_{i0}$ are the
observed values for DMU$_0$.
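A sketch of (3.14)-(3.15) with the bounds $L$ and $U$ on $\sum_j \lambda_j$ dropped (i.e., the CRS case); `c0` holds the unit costs $c_{i0}$ of the evaluated DMU, and the imports are those of the earlier sketches.

```python
# Sketch of the cost model (3.14) and the overall-efficiency ratio (3.15).
def cost_efficiency(X, Y, c0, j0):
    n, m = X.shape
    s = Y.shape[1]
    # variables: [x_1..x_m, lambda_1..lambda_n]
    c = np.concatenate([c0, np.zeros(n)])            # minimize c0 . x
    A_in = np.hstack([-np.eye(m), X.T])              # X'lambda <= x
    A_out = np.hstack([np.zeros((s, m)), -Y.T])      # Y'lambda >= y_0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[j0]]),
                  bounds=[(0, None)] * (m + n), method="highs")
    return res.fun / float(c0 @ X[j0])               # gamma of (3.15), <= 1
```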
$$\max \; \sum_{r=1}^{s} s_r^+ + \sum_{i=1}^{m} s_i^-$$

subject to

$$y_{r0} = \sum_{j=1}^{n} y_{rj}\lambda_j - s_r^+, \quad r = 1,2,\ldots,s; \tag{3.16}$$
$$x_{i0} = \sum_{j=1}^{n} x_{ij}\lambda_j + s_i^-, \quad i = 1,2,\ldots,m;$$
$$0 \le \lambda_j, s_r^+, s_i^- \quad \forall i, j, r.$$
This model uses a metric that differs from the one used in the "radial measure"
model. It also dispenses with the need for distinguishing between an "output"
and an "input" orientation, as was done in the discussion of the above-mentioned
models, because the objective in (3.16) simultaneously maximizes outputs and
minimizes inputs, in the sense of vector optimization. This can be seen by
utilizing the solution to (3.16) to introduce new variables $\hat{y}_{r0}$, $\hat{x}_{i0}$ defined as
follows:
$$\hat{y}_{r0} = y_{r0} + s_r^{+*} \ge y_{r0}, \quad r = 1,\ldots,s; \tag{3.17}$$
$$\hat{x}_{i0} = x_{i0} - s_i^{-*} \le x_{i0}, \quad i = 1,2,\ldots,m.$$
Now note that the slacks are all independent of each other. Hence an optimum
is not reached until it is not possible to increase an output $\hat{y}_{r0}$ or reduce an
input $\hat{x}_{i0}$ without decreasing some other output or increasing some other
input.
$$\max \; z = e s^- + e s^+$$

subject to

$$x_0 = X\lambda + s^-, \qquad y_0 = Y\lambda - s^+, \qquad e\lambda = 1, \qquad \lambda \ge 0, \; s^-, s^+ \ge 0.$$
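A sketch of this additive model; a zero optimal value indicates an efficient DMU, and positive slacks point directly at the inefficiencies. It reuses the imports of the earlier sketches.

```python
# Sketch of the additive (slacks-only) model with the convexity constraint.
def additive_model(X, Y, j0):
    n, m = X.shape
    s = Y.shape[1]
    # variables: [lambda (n), s_minus (m), s_plus (s)]
    c = np.concatenate([np.zeros(n), -np.ones(m), -np.ones(s)])
    A_eq = np.vstack([
        np.hstack([X.T, np.eye(m), np.zeros((m, s))]),   # X'lam + s- = x_0
        np.hstack([Y.T, np.zeros((s, m)), -np.eye(s)]),  # Y'lam - s+ = y_0
        np.concatenate([np.ones(n), np.zeros(m + s)])[None, :],  # e.lam = 1
    ])
    b_eq = np.concatenate([X[j0], Y[j0], [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + m + s), method="highs")
    return -res.fun          # total slack at optimum; 0 iff DMU j0 efficient
```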
The basic DEA models allow complete flexibility in determining the efficiency scores of DMUs. This allows units to achieve
relatively high efficiency scores by indulging in inappropriate input and output
factor weights. Weight restrictions allow for the integration of managerial
preferences in terms of the relative importance levels of various inputs and
outputs. For example, if output 1 is at least twice as important as output 2, then
this can be incorporated into the DEA model by using the linear constraint
$v_1 \ge 2v_2$. Methods for incorporating weight restrictions have been suggested by
several researchers. Included in this stream of research are works by Charnes
et al. (1990), Dyson and Thanassoulis (1988), Thompson et al. (1986, 1990,
1995), and Wong and Beasley (1990). Although weight restrictions effectively
discriminate between efficient and inefficient units, ranking DMUs can still be
an issue. In order to allow for a ranking of units in the presence of weight
restrictions, a combination of models proposed by Talluri and Yoon (2000) can be used.
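A sketch of how such a restriction enters the multiplier LP. In the notation of (3.1) the output weights are the $u_r$, so "output 1 at least twice as important as output 2" becomes $u_1 \ge 2u_2$, added as the row $-u_1 + 2u_2 \le 0$; the specific restriction is the hypothetical example from the text, and the function reuses the imports of the earlier sketches.

```python
# Sketch: CCR multiplier model with the weight restriction u_1 >= 2*u_2.
def ccr_weight_restricted(X, Y, j0):
    n, m = X.shape
    s = Y.shape[1]                                   # assumes s >= 2 outputs
    c = np.concatenate([-Y[j0], np.zeros(m)])
    A_ub = np.hstack([Y, -X])                        # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    restr = np.zeros(s + m)
    restr[0], restr[1] = -1.0, 2.0                   # -u_1 + 2*u_2 <= 0
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None, :]   # v.x_0 = 1
    res = linprog(c, A_ub=np.vstack([A_ub, restr]),
                  b_ub=np.append(b_ub, 0.0),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun
```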
These are the four models commonly available in the literature and described
with respect to the efficient frontier [Charnes A. et al. (1994)]. During the last ten
years many extensions to these four models have been developed that allow
further fine-tuning of the basic models. Most of these extensions of DEA [67]
have been a result of the application of the technique to real-life problems
(Allen et al. (1997)). In this section, some of these numerous methodological extensions are discussed:
a) The basic DEA models always assume that inputs and outputs can be measured precisely.
a framework for deriving quantitative estimates using expert opinion.
proposed new methodology for the same and done detailed theoretical
the literature regarding this process is that an inefficient DMU and its
This is primarily due to the fact that the composite DMU that
weigh few favorable measures and completely ignore other inputs and
outputs. These DMUs can be considered as niche members and are not
the performance of a DMU with respect to the optimal input and output
along its column in the CEM. On the other hand, a poorly performing
DMU should have several low values. The column mean can be
e) A limitation in using the CEM is that the factor weights obtained may
have been proposed for obtaining robust factor weights for use in the
set of formulations for this purpose. The one that is most appropriate
optimal weights that not only maximize the efficiency of a unit but also
minimize the efficiency of the average unit that is constructed from the
which compares a pair of DMUs each time. In this model, the target
DMU (evaluator) not only maximizes its efficiency score but also
the optimal weights of the target DMU may vary depending on the
good overall performers. Sarkis and Talluri (1999) extended the above
case to include both cardinal and ordinal input and output factors.
In this version of DEA, based upon comparison of efficient DMUs,
developed. In this model, the test DMU is removed from the constraint
The procedure provides a framework for ranking efficient units and
h) The size of the data set is also an important factor when using some of
the traditional DEA models. As a general rule, with five inputs and five
outputs, at least 25 or so units will appear efficient and so the set needs
models.
into the DEA model by using the linear constraint $v_1 \ge 2v_2$. Methods
Thompson et al. (1986, 1990, 1995), Wong and Beasley (1990), and
Roll et al. (1993). Roll et al. (1993) suggested a conceptual framework
guidelines for setting bounds on factor weights; it then develops and
factor weights are allowed to vary. Allen et al. (1997) classified the
categories.
Restrictions,
judgments.
functions.
utilized.
k) The introduction of categorical variables extends the application focus
factors in DEA can be found in Cook et al. (1993, 1996), Sarkis and
Talluri (1999).
For example, if there are n units with data on their input and output
effectively monitor the performance of a unit over time and assist in
performing period instead of the earliest period. This allows for a new
n. Apart from the above-mentioned areas, significant work is being carried
out on stochastic DEA, sensitivity analysis in DEA, target setting in
DEA, and more effective ways of imposing weight restrictions in DEA.
Some of the interesting extensions in this area include improving the
discriminatory power of non-constant returns to scale
models, better methods for benchmarking, and developing the robustness
of cross-efficiency models. Multi-output forms of stochastic
production frontiers have been developed but remain highly complex
(Kumbhakar and Lovell (2000)). The development of stochastic DEA
models is currently a key area of research (Resti (2000) and Ruggiero
(2000)). Approaches have been developed to capture some of the
random variability in data (e.g. chance constrained DEA). Details on
some of the recent developments in this area are given in Cooper,
Seiford and Tone (2000). Bootstrapping techniques have also been
applied to estimate the effects of random variation on the estimates of
efficiency and methods have been developed to compensate for some
of these effects (Simar and Wilson, 2000). Each of these new models
and methods can be useful in a variety of manufacturing and service
areas.
3.18 Application of DEA Methodology in Various Sectors
This section reviews applications of the DEA methodology in various
sectors, with special focus on the weight restriction models that were discussed above.
DEA is useful when a comparison is sought against "best practices" where the analyst
does not want the frequency of poorly run operations to affect the analysis
(Banker and Maindiratta (1986)). Some other settings in which the DEA technique has been
applied include coal mining (Byrnes, Fare and Grosskopf (1984)) and pharmacy stores (Banker and Morey
cards and also some post-optimal analysis on the basis of programs of study
and locations of schools. Finally, he performed a regression analysis of the DEA
based upon ideas drawn from DEA. Computational results are given for
ratio analysis has been a tool of analysis for as long as financial statements
and one denominator severely limit its usefulness. This paper extends the
The study made by Roy et al. (1991) combined DEA with regression
and other socio-economic factors. DEA was performed with the school inputs
Variation in the managerial efficiency is much less than what is implied by the
DEA results.
DEA has also been utilized as a resource allocation tool. A good example of
its use in resource allocation can be found in Bessent et al. (1983). Doyle et
al. (1991) tried to compare products which vary in excellence along a number of
dimensions using DEA; the method was illustrated by comparing published
of restructured companies that have reduced staff numbers and also companies
that have found it necessary to downsize due to declining demand for their
products.
process.
general, and vendor evaluation, in particular. Cook et al. (1990) used absolute
analysis of 7 years (1980-1986) was made for 45 oil/gas firms called
'independents'.
DEA was carried out by Thompson et al. (1996a) to determine the ideal site for
locating a high-energy physics lab. The assurance region approach was applied to the
site location problem; it was found that the region of dominance of one of the
Therefore, this site was the only one with an efficiency score of 1 and was also
the preferred site. Thompson et al. (1996b) examined the efficiency and profit potential
bounds. Thompson et al. (1996c) solved the AR-DEA model for 48 banks for
the years 1980-1990, computing DEA/AR efficiency and profit ratios
output cones, a cone-ratio assurance region (CR-AR) was set up. While most
the Analytic Hierarchy Process (AHP) to gather and present expert opinion for
the AHP were used in this paper to set bounds on the weights.
DEA and linked-cone assurance region models were used by Taylor et al.
banks. Schaffnit et al. (1997) carried out a best-practice analysis of the
Chilingerian and Sherman (1997) used the assurance region model to spot
Byrnes et al. (1984) applied a generalized version of the Farrell measure of
Farrell measure (which was designed to measure lost output or wasted inputs
due to underutilization of inputs) into three mutually exclusive and exhaustive
components: (1) a measure of purely technical efficiency, (2) a measure of
input congestion and (3) a measure of scale efficiency. Their paper describes
Ramani et al. (1988) discussed the problems with traditional mine
management and its unique characteristics. They identified the need for
taking total salaries and wages, total store expenses, power expenses, interest,
output, using the CCR model, and compared the efficiencies of the UG mines.
data (beyond the input-output data per se) and expert opinion to bind the
function. The authors contend that DEA technical efficiency does not imply a
DEA maximum profit ratio and vice versa. The analysis of Illinois coal
The data used in Byrnes et al. (1984) was reformulated and used for this
study by Thompson. The capacity inputs were defined in real capital terms as
$$K = K_1 + K_2 + K_3.$$
The four geographical characteristics were consolidated into one mine quality
input,
$$T = T_1/D_1 + T_2/D_2,$$
the inverse of an aggregated stripping ratio, i.e., tons of coal per ton of overburden.
The output was taken as thousands of tons of coal mined; the inputs were
thousands of labor days and thousands of dollars of capital investment (deflated to a base year).
One notable feature of this study is the aggregation of input factors, by which the
total number of inputs and outputs has been brought within the limit set by
the size of the data set.
Another feature of this study is that the role of profitability in decision making
is examined. If both profitability and efficiency scores are available, units can
be classified on both dimensions. Units which are both profitable and efficient
provide examples of good operating practice; units which are profitable but not
efficient can be subjected to an efficiency drive; and those which are efficient
but not profitable call for close scrutiny of their operating environments. The
study shows that the efficiency scores under assurance region constructs are
lower than in a normal DEA, which shows the need to restrict weight flexibility.
Niraj Kumar et al. (2002) used DEA and fuzzy logic techniques for
comparing 40 UG mines. They chose capacity, man-shifts and cost per tonne as inputs and
(SEI) as outputs.
Niraj Kumar et al. used a two-stage DEA model to rank the mining units.
In the first stage, DEA is used to obtain efficiency scores of the various mining
units. In the second stage, the AHP is applied to differentiate among mines
which have the same efficiency score based upon the DEA method. This helps
in further ranking each mining unit. This combined approach not only
helps to overcome the limitations of both methods, but also enables the full
ranking of the mines by taking capacity, man-shifts and cost per tonne as inputs and