1. Introduction
(Ref. 7), Sawaragi, Nakayama, and Tanino (Ref. 8), Stadler (Ref. 9), Steuer
(Ref. 10), Yu (Refs. 11-12), Zeleny (Ref. 13), and references therein.
Recently, researchers and practitioners have become increasingly interested
in problems that involve optimizing functions over the efficient sets of
multiple objective mathematical programs. Interest in these problems is
motivated by many factors.
First, these problems arise in many applications. For instance, practical
problems in production planning (Ref. 14), value theory (Ref. 15), portfolio
management (Ref. 16), and a variety of other areas can be represented as
optimizations over efficient sets.
Second, optimizing a function over an efficient set, instead of using
one of the standard algorithms to find and compare efficient solutions and
their tradeoffs, takes into account the absolute importance, rather than
merely the relative importance, of each criterion in the multiple objective
mathematical program. Thus, an efficient solution found via the optimization
approach can be expected to be superior to those found by standard
approaches (Ref. 17).
Third, the approach of optimizing over efficient sets is relatively easy
for decision makers to work with. In particular, it relieves decision makers of
the bulk of the burdensome work entailed by more standard approaches
(Refs. 14, 18, 19).
Finally, the special case of optimization over an efficient set in which
a criterion of a multiple objective mathematical program is minimized over
the efficient set of the program has several distinct uses of its own. For
instance, solutions for this special case aid decision makers in setting goals,
ranking or eliminating objective functions, and comparing efficient solutions
to one another (Refs. 20-24). In addition, these solutions are needed to
ensure the effectiveness of several standard interactive algorithms for
multiple objective mathematical programming, including STEM (Refs. 25-26),
the Belenson-Kapur algorithm (Ref. 27), and the algorithm of Kok and
Lootsma (Ref. 28).
Mathematically, problems of optimizing functions over efficient sets of
multiple objective mathematical programming problems are difficult global
optimization problems; i.e., they generally possess local optima that are not
global. This is true regardless of the function to be optimized, because the
efficient sets of multiple objective mathematical programs are generally
nonconvex. Even in the case of multiple objective linear programming, the
efficient set is generally nonconvex. Furthermore, in problems of optimizing
over efficient sets, as in most global optimization problems, the number of
local optima that are not global can be very large (Refs. 29-32). Confounding
the situation further is the fact that, since the feasible regions of these
problems are efficient sets, they cannot be expressed in the traditional convex
programming format as a system of functional inequalities.
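As a concrete illustration of this nonconvexity, consider the following small hypothetical instance (invented data, not taken from this article): both coordinates are to be maximized over a polytope in R^2, and a brute-force dominance check over the vertices shows that the efficient set is the union of two edges, so a chord between efficient points leaves the set.

```python
# Hedged sketch: the efficient set of even a tiny bicriteria linear program
# can be nonconvex.  X = {x >= 0 : x1 + 2*x2 <= 4, 2*x1 + x2 <= 4} is a
# hypothetical polytope; both coordinates are maximized.

def dominates(y, x):
    """True if y is at least as good as x in both criteria and better in one."""
    return y[0] >= x[0] and y[1] >= x[1] and (y[0] > x[0] or y[1] > x[1])

vertices = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (4.0 / 3.0, 4.0 / 3.0)]

# Efficient vertices: those not dominated by any other vertex.
efficient = [x for x in vertices
             if not any(dominates(y, x) for y in vertices if y != x)]
print(sorted(efficient))  # three efficient vertices, hence two efficient edges

# The midpoint of the efficient vertices (2,0) and (0,2) is dominated by
# (4/3, 4/3), so the efficient set (the union of the two edges) is not convex.
mid = (1.0, 1.0)
print(dominates((4.0 / 3.0, 4.0 / 3.0), mid))  # True
```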
JOTA: VOL. 88, NO. 1, JANUARY 1996 79
2. Theoretical Prerequisites
    (P) min f(x),
        s.t. x ∈ X_E.
Let v denote the optimal value of problem (P). Notice that, since X is
compact, X_E will also be compact (Ref. 12). This implies that problem (P)
possesses at least one global optimal solution (e.g., Ref. 64). In fact, when
f is linear or quasiconcave, at least one global optimal solution will coincide
with an extreme point of X (Refs. 14, 34).
Let
    Y = {Cx | x ∈ X}.
From Rockafellar (Ref. 65), Y is a nonempty, compact polyhedron. We will
refer to Y as the outcome set or image of X under C (Ref. 65). Notice that,
in this context, the matrix C represents a linear mapping of the feasible
decision set X of problem (BX) onto the outcome set Y. Furthermore, it is
easy to see that the outcome set CX_E of X_E under C, denoted Y_E, is precisely
the set of efficient points of the bicriteria linear programming problem
where I_2 denotes the 2 × 2 identity matrix. The set Y_E is also called the set
of admissible points of Y; see for instance Refs. 66-69.
To solve problem (P), the algorithm to be presented in this article will
use an indirect search of X_E organized according to the faces of Y_E, rather
than according to points or faces of X_E. As we shall see, in this way, generally
far fewer efficient faces need to be generated than if X_E were directly
searched. Furthermore, the algorithm will need to generate only faces of
dimension zero and one in Y_E, i.e., efficient extreme points and edges of Y.
In contrast, direct face searches in X_E could require searching faces
and facets of much higher dimension. In several articles, Dauer, Benson,
and other authors (Refs. 34-36, 43, 59-63) have previously demonstrated
that focusing on the outcome sets, rather than the decision sets, in multiple
objective mathematical programming can have major computational benefits
of this type.
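A small sketch of this computational point (invented data; the matrix C and the cube below are hypothetical): several decision-space vertices can map to the same outcome point, or into the relative interior of a face of the outcome polytope, so the outcome set has fewer extreme points to examine.

```python
from itertools import product

# Hypothetical criterion matrix mapping decision vectors in R^3 to R^2.
C = [[1, 1, 0], [0, 0, 1]]

def image(x):
    """Compute Cx for a decision vector x."""
    return tuple(sum(c * xi for c, xi in zip(row, x)) for row in C)

# Vertices of the unit cube X = [0,1]^3.
decision_vertices = list(product([0, 1], repeat=3))
outcomes = {image(x) for x in decision_vertices}

print(len(decision_vertices))  # 8 decision-space vertices
print(len(outcomes))           # only 6 distinct outcome points
# Y itself is the rectangle [0,2] x [0,1], which has just 4 extreme points:
# some decision vertices land in the relative interiors of faces of Y.
```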
To motivate and later verify the geometrical workings of the algorithm,
some general geometric results concerning the outcome sets of certain
    Z = {Dw | w ∈ W},
and let Z_E denote the set of efficient points of Z. Notice that Z is a nonempty
convex set (Ref. 65). Also notice that W_E and Z_E may be empty (e.g.,
if W is not compact).
Recall that a face of a convex set V is a convex subset F of V such that
any closed line segment in V with at least one relative interior point in F
must have both of its endpoints in F. The zero-dimensional faces of V are
called the extreme points of V. The set V_ex of extreme points of V may be
empty. However, the set of faces of V is always nonempty, since the empty
set and V itself are always faces of V.
The following result will be used frequently in the sequel. It confirms
the fact that ZE is equal to the image of WE under D and can be proven
easily from the definitions.
Proposition 2.1. Z_E = DW_E.
Benson (Ref. 63) has shown that W_E and Z_E consist of unions of efficient
faces of W and Z, respectively. However, the mapping D in problem (M)
does not necessarily map an efficient face of W onto an efficient face of Z
(Ref. 63). For instance, Dauer (Ref. 59) and Benson (Ref. 63) have given
several examples in which large numbers of efficient zero-dimensional and
one-dimensional faces (i.e., extreme points and edges) of W are mapped by
D into strict subsets of the relative interiors of efficient faces of Z. This
phenomenon can occur even in the special case of multiple objective linear
programming. Fortunately, however, the inverse mapping of D maps efficient
faces of Z onto efficient faces of W, as shown by the following result.
Theorem 2.1 states in part that, given any efficient face Z_F in the outcome
set Z, the set of all decisions in W that map into Z_F under D forms an
efficient face of W. However, from the comments preceding the theorem,
the reverse is not true; i.e., not every efficient face of W is mapped by D
onto an efficient face of Z. In particular, while some efficient faces of W are
mapped by D onto efficient faces of Z, there may easily be many more
efficient faces of W that are strict subsets of faces of this type and that map
into nonfacial convex subsets of Z (Ref. 63).
Theorem 2.1 also states that, for each efficient face Z_F in the outcome
set Z, the corresponding preimage under D is an efficient face W_F of W of
equal or greater dimension. From Ref. 63, the dimension of W_F can in fact
often be much greater than the dimension of Z_F. Furthermore, there can
exist many efficient faces that are strict subfaces of faces of the type W_F,
are mapped into nonfacial subsets of Z_F = DW_F, and yet have dimensions
far exceeding the dimension of Z_F (Ref. 63).
Taken together, Theorem 2.1 and the discussions accompanying it
imply that, to solve problem (P) efficiently, it is preferable to organize
the search for an optimal solution according to the faces of Y_E rather than
those of X_E. In particular, a search organized in this way would generally
need to generate significantly fewer efficient faces of Y_E than if the faces
of X_E were searched directly. Furthermore, the dimensions of the faces in
Y_E can be expected to be significantly smaller than those in X_E. These
observations motivated the development of the outcome-based algorithm
for problem (P) to be presented in this article.
In the remainder of this section, we present some more particular results
that will be needed to develop the outcome-based algorithm. Toward this
end, for each i = 1, 2, let
    M_i = max ⟨c_i, x⟩,   (1a)
          s.t. x ∈ X,     (1b)
and let m_2 equal the optimal value of the linear program
    (Q) max ⟨c_2, x⟩,
        s.t. ⟨c_1, x⟩ = M_1,
             x ∈ X.
In addition, let
    I = {b ∈ R | m_2 ≤ b ≤ M_2},
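Under the assumption that X has the form {x ≥ 0 : Ax ≤ a} with both criteria maximized, the quantities M_1, M_2, and m_2 above can be computed with three linear programs. A minimal sketch on invented data (scipy assumed available; the instance is hypothetical, not from the article):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: X = {x >= 0 : Ax <= a}, criteria c1, c2 maximized.
A = np.array([[1.0, 2.0], [2.0, 1.0]])
a = np.array([4.0, 4.0])
c1 = np.array([1.0, 0.0])
c2 = np.array([0.0, 1.0])

def maximize(c, A_ub, b_ub, A_eq=None, b_eq=None):
    # linprog minimizes, so negate the objective to maximize.
    res = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None))
    return -res.fun, res.x

M1, _ = maximize(c1, A, a)   # problem (1a)-(1b) with i = 1
M2, _ = maximize(c2, A, a)   # problem (1a)-(1b) with i = 2
# problem (Q): maximize <c2, x> subject to <c1, x> = M1, x in X
m2, _ = maximize(c2, A, a, A_eq=c1.reshape(1, -1), b_eq=[M1])

print(M1, M2, m2)  # the interval I is [m2, M2]
```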
The algorithm will use problem (P_b) and the dual linear program to
problem (P_b) to systematically generate faces of Y_E. The next result will be
fundamental to this process. Although this result is well known, we give a
short proof that will be useful later on. For each b ∈ I, let w(b) denote the
optimal value of problem (P_b).
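A sketch of w on invented data, assuming (P_b) has the form max ⟨c_1, x⟩ subject to ⟨c_2, x⟩ ≥ b, x ∈ X, which is consistent with the dual (Q_b) used later in this section. The computed values exhibit the concavity, monotonicity, and piecewise linearity of w that the surrounding results establish:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: X = {x >= 0 : Ax <= a}, with c1 = (1,0), c2 = (0,1).
A = np.array([[1.0, 2.0], [2.0, 1.0]])
a = np.array([4.0, 4.0])

def w(b):
    """Optimal value of (P_b): max x1 s.t. x2 >= b, x in X (assumed form)."""
    # <c2, x> >= b is rewritten as -x2 <= -b for linprog's A_ub convention.
    res = linprog([-1.0, 0.0],
                  A_ub=np.vstack([A, [0.0, -1.0]]),
                  b_ub=np.append(a, -b),
                  bounds=(0, None))
    return -res.fun

bs = [0.0, 0.5, 1.0, 4.0 / 3.0, 1.5, 2.0]
vals = [w(b) for b in bs]
print([round(v, 3) for v in vals])
# w is concave and strictly decreasing on I = [m2, M2]; its graph here is
# piecewise linear with a breakpoint at b = 4/3, where the active efficient
# edge of the outcome set changes.
```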
addressed, various tests have been formulated for detecting complete
efficiency. One of the more convenient tests is contained in the following
theorem. In this theorem, e denotes the vector in R^2 whose entries are each
equal to one.
Theorem 2.4. See Ref. 71. Problem (BX) is completely efficient if and
only if t = 0, where t is the optimal value of the linear program
    (T) min ⟨a, q⟩,
        s.t. -C^T u + A^T q - z = C^T e,
             A^T q - z ≥ 0,
             u, z ≥ 0.
Since X is nonempty and compact, the linear program (T) in Theorem 2.4
always has an optimal solution, and t is always nonnegative and finite (Ref.
71). Notice that, when problem (BX) is completely efficient, problem (P) is
identical to the problem
    (PR) min f(x),
         s.t. x ∈ X.
3. Outcome-Based Algorithm
Outcome-Based Algorithm.
Initialization Step. See Steps 1 through 5 below.
    (P1) min f(x),
         s.t. ⟨c_i, x⟩ = M_i, i = 1, 2,
              x ∈ X,
and stop: x^1 is an optimal solution for problem (P).
Step k1. Set b = b_k and a = a_k, and find any optimal solution (u_k, q^k)
         to the linear program
    (P_{b,a}) max u,
              s.t. -bu + ⟨a, q⟩ = a,
                   -c_2 u + A^T q ≥ c_1,
                   u ≥ 0.
and set
    b_{k+1} = ⟨c_2, x̂^{k+1}⟩,   a_{k+1} = ⟨c_1, x̂^{k+1}⟩.
If b_{k+1} ≥ M_2, stop: x^C is an optimal solution for problem (P).
Otherwise, continue.
Step k5. If ⟨c_2, x^R⟩ ≥ b_{k+1}, set k = k + 1 and go to iteration k. Other-
         wise, with b = b_{k+1}, find any optimal solution x^R to the
         problem
    (PR_b) min f(x),
           s.t. ⟨c_2, x⟩ ≥ b,
                x ∈ X,
and continue.

Step k6. Set LB = f(x^R). If LB ≥ UB, stop: x^C is an optimal solution
         for problem (P). Otherwise, set k = k + 1 and go to iteration
         k.
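The iterative sweep above can be sketched schematically on a small hypothetical instance. This is a simplified illustration, not the algorithm's exact bookkeeping: the data (A, a, c1, c2, and the linear objective f) are invented, and the breakpoints of w are hardcoded for brevity rather than discovered via (P_{b,a}). Each linear piece of w corresponds to an efficient edge of the outcome set, and f is minimized over the preimage in X of each edge, keeping the best value found.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: X = {x >= 0 : Ax <= a}, criteria c1, c2 maximized.
A = np.array([[1.0, 2.0], [2.0, 1.0]])
a = np.array([4.0, 4.0])
c1 = np.array([1.0, 0.0])
c2 = np.array([0.0, 1.0])
f = np.array([1.0, -3.0])        # hypothetical linear f(x) = x1 - 3*x2

# Breakpoints of w for this instance (hardcoded for brevity): b-values of the
# efficient extreme points of Y, with the corresponding values w(b).
breaks = [0.0, 4.0 / 3.0, 2.0]
w_vals = [2.0, 4.0 / 3.0, 0.0]

best_val, best_x = np.inf, None
for (b_lo, b_hi), (w_lo, w_hi) in zip(zip(breaks, breaks[1:]),
                                      zip(w_vals, w_vals[1:])):
    # Preimage of one efficient edge: x in X with <c1,x> on this linear piece
    # of w, i.e. <c1,x> - s*<c2,x> = w_lo - s*b_lo, and b_lo <= <c2,x> <= b_hi.
    s = (w_hi - w_lo) / (b_hi - b_lo)
    res = linprog(f,
                  A_ub=np.vstack([A, c2, -c2]),
                  b_ub=np.append(a, [b_hi, -b_lo]),
                  A_eq=(c1 - s * c2).reshape(1, -1),
                  b_eq=[w_lo - s * b_lo],
                  bounds=(0, None))
    if res.status == 0 and res.fun < best_val:
        best_val, best_x = res.fun, res.x

print(best_val, best_x)  # the minimum of f over the efficient set
```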
f(x^R) is a lower bound for the optimal value v of problem (P), and the face
search procedure must be invoked to solve problem (P).
Although Step 1 checks for complete efficiency of problem (BX), there
are certain other special cases of problem (P) that should be checked by the
user before the algorithm is invoked. When one of these cases arises, problem
(P) can be solved by special methods that are more efficient than the algo-
rithm. For a discussion of some of these special cases, the reader may consult
Ref. 37.
In Steps 2 through 4, the interval I is constructed, and the optimal solution
to problem (P1) found in Step 5 minimizes f over X_E.
The iterative steps k ≥ 1 of the algorithm are executed when Y_E consists
of one or more edges of Y. At the beginning of a typical iteration k, the
incumbent solution minimizes f over all faces F_X of X_E for which x ∈ F_X
implies that m_2 ≤ ⟨c_2, x⟩ ≤ b_k, and a_k equals the optimal value w(b_k) of the
linear program (P_b) at b = b_k. In Step k1, as we shall see later, the values of
u_k and q^k are calculated in such a way that
describes the face of Y_E given by the line segment connecting the points
[w(b_k), b_k], [w(b_{k+1}), b_{k+1}] ∈ R^2, where b_{k+1} is defined as in Step k4. From
Theorem 2.1, this implies that the point x^k calculated in Step k2 minimizes
f over F_X^k, where F_X^k denotes the face of X_E given by
Proof. Recall from the proof of Theorem 2.3 that, for each b ∈ I, the
dual linear program to problem (P_b) is given by
    (Q_b) min -bu + ⟨a, q⟩,
          s.t. -c_2 u + A^T q ≥ c_1,
               u ≥ 0,
and that the optimal value of problem (Q_b) equals w(b).
Assume that k ≥ 1. Then, since x̂^{k+1} is an optimal solution to the
problem solved in Step k4 of the algorithm, it follows that
    ⟨c_2, x̂^{k+1}⟩ = b_{k+1},                               (3)
    ⟨c_1, x̂^{k+1}⟩ + u_k⟨c_2, x̂^{k+1}⟩ = ⟨a, q^k⟩,          (4)
    x̂^{k+1} ∈ X.                                           (5)
From (3) and (5), x̂^{k+1} is a feasible solution to problem (P_b) at b = b_{k+1}.
From (3) and (4), it follows that
    ⟨c_1, x̂^{k+1}⟩ = -u_k b_{k+1} + ⟨a, q^k⟩.               (6)
From Step k1 and the definition of problem (Q_b), (u_k, q^k) is a feasible
solution to problem (Q_b) at b = b_{k+1}. From (6), since x̂^{k+1} is a feasible
solution to problem (P_b) at b = b_{k+1}, by the duality theory of linear
programming (Ref. 70), this implies that x̂^{k+1} and (u_k, q^k) are optimal
solutions to problems (P_b) and (Q_b) at b = b_{k+1}, respectively.
Notice from the definitions of M_1 and m_2, and from Step 4 of the
algorithm, that a_1 = w(b_1). Furthermore, since x̂^{k+1} is an optimal solution
to problem (P_b) at b = b_{k+1}, Step k4 implies that a_{k+1} = w(b_{k+1}). It follows
that b = b_k and a = w(b_k) in the linear program (P_{b,a}) in Step k1. Since
(u_k, q^k) is a feasible solution to this linear program, and since b = b_k and
a = w(b_k) in this program, this implies that (u_k, q^k) is a feasible solution to
problem (Q_b) at b = b_k which satisfies
    -b_k u_k + ⟨a, q^k⟩ = w(b_k).
But from the proof of Theorem 2.3, w(b_k) is the optimal value of problem
(Q_b) at b = b_k. Therefore, (u_k, q^k) is an optimal solution to this problem.
    Assume now that
    b̄ ∈ {b ∈ R | b_k ≤ b ≤ b_{k+1}}.
Then, for some γ ∈ R satisfying 0 ≤ γ ≤ 1,
    b̄ = γb_k + (1 - γ)b_{k+1}.
From Theorem 2.3, w is a concave function on I. Therefore, it follows that
    w(b̄) ≥ γw(b_k) + (1 - γ)w(b_{k+1}).   (7)
Since (u_k, q^k) is an optimal solution to problem (Q_b) at both b = b_k and b =
b_{k+1}, and since w(b) is the optimal value of problem (Q_b) for b = b_k ∈ I and
for b = b_{k+1} ∈ I, (7) implies that
    w(b̄) ≥ γ(-b_k u_k + ⟨a, q^k⟩) + (1 - γ)(-b_{k+1} u_k + ⟨a, q^k⟩)
         = [-γb_k - (1 - γ)b_{k+1}]u_k + ⟨a, q^k⟩
         = -b̄u_k + ⟨a, q^k⟩.
If we let Z denote the feasible region of problem (Q_b̄), then this implies that
the optimal value of problem (Q_b̄) is bounded below by -b̄u_k + ⟨a, q^k⟩.
Since (u_k, q^k) ∈ Z, this implies that
    w(b̄) = -b̄u_k + ⟨a, q^k⟩,
and that (u_k, q^k) is an optimal solution for problem (Q_b̄). □
Lemma 3.2. For each k ≥ 1, there exists an ε_k > 0 such that the vector
(u_k, q^k) computed in Step k1 of the outcome-based algorithm satisfies
    w(b̄) = -b̄u_k + ⟨a, q^k⟩,
for all b̄ such that b_k ≤ b̄ ≤ b_k + ε_k.
for each k ≥ 2. For each k ≥ 2, since b_k = ⟨c_2, x̂^k⟩ and x̂^k ∈ X, this implies
that x̂^k is a feasible solution to the linear program solved in Step k4 of the
algorithm. From Step k4, it follows that b_k ≤ b_{k+1} for all k ≥ 2. To show that
b_1 ≤ b_2, the argument for k ≥ 2 can be repeated with k = 1 and with x̂^k
replaced by the incumbent solution x^C computed in Step 2 of the algorithm.
□
Lemma 3.4. For each k ≥ 1, the vector (u_k, q^k) computed in Step k1
of the outcome-based algorithm satisfies u_k > 0.
where
    -c_2 u_k + A^T q^k ≥ c_1.   (9)
Since x* ∈ X, x* is a feasible solution for the dual linear program to the latter
problem, which may be written as
    max ⟨c_1, x⟩ + u_k⟨c_2, x⟩,
        s.t. x ∈ X.
Remark 3.1. It is easy to show that Lemma 3.4 implies that the optimal
value function w(b) of problem (P_b) is strictly decreasing on I.

Lemma 3.5. For each k ≥ 1, the values of b_k and b_{k+1} computed in the
outcome-based algorithm satisfy b_k < b_{k+1}. Furthermore, for each
b̄ ∈ {b ∈ R | b_k ≤ b ≤ b_{k+1}}, w(b̄) = -b̄u_k + ⟨a, q^k⟩.
that b_{k+1} > b_k. The second statement in the lemma follows immediately from
Lemma 3.1 and the proof of Theorem 2.3. □
Remark 3.2. It is evident from Lemmas 3.1 and 3.5 that, for each
k ≥ 1, after iteration k of the outcome-based algorithm, one can find an
additional linear piece of the graph of the function w. This piece of the
graph of w is the line segment in R^2 that lies on the line with equation

Theorem 3.1. For each k ≥ 1, let L_k denote the line segment in R^2 with
endpoints [w(b_i), b_i], i = k, k + 1, that is generated by the algorithm after
iteration k. Then, L_k is an efficient face of Y, and L_k = F_Y^k, where F_Y^k is given
by (2).
Furthermore, in the proof of Lemma 3.4, the only property of x* that was
used to derive (10) was x* ∈ X. Therefore,
for all x ∈ X. From (14), since x^b̄ ∈ X, this implies that ⟨a, q^k⟩ is the optimal
value of the linear program
where
    w(b̄) = ⟨c_1, x^b̄⟩ and b̄ = ⟨c_2, x^b̄⟩,
Case 1. y_2 ≥ m_2. In this case, since y ∈ Y, y_2 ≤ M_2 must also hold. Notice
from Lemmas 3.1 and 3.5 that, for each j ≥ 1, since -u_{j+1} is the unique right-
hand derivative of w at b_{j+1}, (u_j, q^j) is a feasible but nonoptimal solution
to the linear program given in Step (j+1)1 of the algorithm. Therefore, for
each j ≥ 1, Step (j+1)1 implies that u_j < u_{j+1}. Since the feasible region Z of
problem (Q_b) is invariant over b ∈ I and has a finite number of extreme
points, and since, for each j ≥ 1, the optimal value of the linear program
solved in Step j1 is achieved by one of these extreme points, by Remark 3.2,
if the tests involving the inequality LB ≥ UB are eliminated from the
outcome-based algorithm, then, for some finite number k̄, the algorithm will
terminate in Step k̄4 after generating the entire graph of w over I. Assume
in the remainder of this proof that the algorithm has been used to generate
this graph. Then, since y_2 ∈ I, y_2 satisfies b_j ≤ y_2 ≤ b_{j+1} for some integer j ≥ 1.
Therefore, j ≤ k̄.
By Lemma 3.5,
or equivalently,

Case 2. y_2 < m_2. From Step 4 of the algorithm, m_2 = b_1. We may thus
Proof. If x^R ∈ X_E is detected in Step 1, then from x^R ∈ X_E and the
definition of problem (PR), v ≥ f(x^R) ≥ v. In this case, v = f(x^R) and the
algorithm appropriately terminates with the optimal solution x^R to problem
(P).
    If x^R ∉ X_E, then as explained at the beginning of Section 3.1, either Y
is completely efficient, Y_E is a singleton, or Y_E consists of one or more edges
of Y. We deal with these three cases separately.
When Y is completely efficient, as explained in Section 3.1, Step 1 of
the algorithm detects this case, and the algorithm computes the optimal
solution x^R to problem (P) and terminates. If Y_E is a singleton, then it
follows from Theorem 2.2 that m_2 = M_2 and that Y_E = {(M_1, M_2)^T}.
From Steps 2 through 5 of the algorithm, in this case, if the algorithm does
not terminate in Step 3, then it detects that Y_E is a singleton, finds a point
in X that minimizes f over all x ∈ X such that Cx equals this singleton, and
terminates. It is easy to see that (M_1, M_2)^T ∈ Y_ex, so that {(M_1, M_2)^T} is a
face of Y. By Theorem 2.1, the latter two statements imply that, when Y_E
is a singleton and the algorithm does not terminate in Step 3, it finds an
optimal solution to problem (P) in the initialization step.
If the algorithm terminates in Step 3, then from the values of LB and
UB in that step, it follows that f(x^C) ≤ f(x^R), where x^C and x^R are optimal
solutions to problems (Q) and (PR), respectively. It is easy to see
that this implies that x^C ∈ X_E and that f(x^R) ≤ v. Therefore, when the
algorithm terminates in Step 3,
    v ≤ f(x^C) ≤ f(x^R) ≤ v.
Assume now that Y ≠ Y_E, that Y_E is not a singleton, and that the
algorithm does not terminate in Step 3. These assumptions and the arguments
given so far in this proof imply that the algorithm proceeds beyond
the initialization step. Thus, either the algorithm terminates during some
iteration k̄ ≥ 1 by detecting that LB ≥ UB is satisfied, or it does not terminate
in this way.
Suppose first that the algorithm does not terminate by detecting that
LB ≥ UB is satisfied. Then, since the algorithm proceeds beyond the initiali-
zation step, Theorem 3.1 and its proof imply that, in each iteration k ≥ 1
that is executed, a distinct efficient face F_Y^k of Y of the form (2) is generated.
By Theorem 2.1, it follows that, for each such k, the point x^k found in Step
k2 of the algorithm minimizes f over the efficient face F_X^k of X that consists
of all x ∈ X such that Cx ∈ F_Y^k. Since Y is polyhedral, it has a finite number
of efficient faces (Ref. 11). From Theorem 2.2 and Step k4, since the algo-
rithm does not terminate by detecting LB ≥ UB, it follows that, after some
finite number of iterations k̄, the algorithm will terminate in Step k̄4 after
identifying all of the efficient faces of Y_E. At that point, x^C will minimize f
over X_E.
Now, suppose that, during some iteration, the algorithm detects that
LB ≥ UB is satisfied. Then, by using arguments similar to those used for the
case where the algorithm terminates in Step 3, it is easy to show that, at the
point of termination, x^C will be an optimal solution for problem (P). □
4. Examples
Initialization Step. In this step, the algorithm detects that t > 0, so that
problem (BX) is not completely efficient. It calculates LB = -4.00 as the
initial lower bound for v, and detects that the optimal solution x^R to problem
(PR) which provides this lower bound does not satisfy x^R ∈ X_E. Therefore,
the minimum of f over X equals -4.00, v ≥ -4.00, and the algorithm must
continue. It next finds that M_1 = 2.667, M_2 = 4.500, and that
    (x^C)^T = (0, 0, 0, 0, 1, 1, 1, 1, 0, 0).
It then sets
Since LB < UB and m_2 < M_2, the algorithm must proceed to iteration 1 with
    b_1 = -1.333,   a_1 = 2.667,
because it has detected that Y has at least one efficient edge to be identified.
Iteration 1. In this iteration, the algorithm finds a minimizer x^1 of f
over the efficient face of X consisting of all x ∈ X such that Cx is an element
of the efficient edge F_Y^1 of Y given by
with f(x^1) = -1.0. Since f(x^1) < UB, the incumbent solution x^C is set equal
to x^1, and UB is set equal to -1.0. Since LB < UB,
    b_2 = 2.667,   a_2 = -1.333
are calculated. Since b_2 < M_2, Y has at least one more efficient edge to be
potentially identified. Before proceeding to iteration 2 to do so, however,
the algorithm checks to see whether LB can now perhaps be increased. To do so,
it calculates ⟨c_2, x^R⟩ and finds that ⟨c_2, x^R⟩ < b_2. Therefore, LB can now
perhaps be increased. The algorithm then solves the linear program (PR_b)
with b = b_2 for an optimal solution x^R. Using x^R, it sets LB = f(x^R) = -1.917.
Although this increases the value of LB as compared to its previous value,
LB < UB still holds, so that the search must continue.
Iteration 2. In this iteration, the algorithm finds a minimizer x^2 of f
over the efficient face of X consisting of all x ∈ X such that Cx is an element
of the efficient edge F_Y^2 of Y given by
Example 4.2. Consider the same data as in Example 4.1, except let
f: R^10 → R be the convex quadratic function defined for each x ∈ R^10 by
    f(x) = Σ_{j=1}^{10} (11 - j)(x_j - 0.25)^2.
Since the only difference between Example 4.1 and this example is in the
definition of f, X_E and Y_E are unchanged. In particular, as in Example 4.1,
since Y_E consists of exactly three edges of Y, the outcome-based algorithm
will require at most three iterations to solve problem (P). However, since f
is now a convex function, problem (P) need not have an extreme point
optimal solution. Furthermore, while most of the subproblems that must
be solved when applying the outcome-based algorithm will again be linear
programming problems, a minority of these problems will be convex
quadratic programs.
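To illustrate the quadratic subproblems on invented 2-variable data (not the example's actual 10-variable instance), minimizing a convex quadratic over a single efficient edge can be sketched with a general convex solver. Note that the minimizer falls in the relative interior of the edge, mirroring the observation that (P) need not have an extreme-point optimal solution when f is convex:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical convex quadratic analog of the example's objective.
def f(x):
    return (x[0] - 0.25) ** 2 + (x[1] - 0.25) ** 2

# One efficient edge of a hypothetical X: the segment
# {x : x1 + 2*x2 = 4, 4/3 <= x2 <= 2, x1 >= 0}.
cons = [{"type": "eq", "fun": lambda x: x[0] + 2 * x[1] - 4}]
bnds = [(0, None), (4.0 / 3.0, 2.0)]
res = minimize(f, x0=np.array([0.0, 2.0]), bounds=bnds,
               constraints=cons, method="SLSQP")

print(np.round(res.x, 3), round(res.fun, 4))
# The minimizer (0.9, 1.55) lies strictly inside the edge: the projection of
# (0.25, 0.25) onto the line x1 + 2*x2 = 4 satisfies the bound constraints.
```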
In this case, as in Example 4.1, the outcome-based algorithm solves
problem (P) in two iterations. In particular, it searches the same two edges
5. Conclusions
References
41. DAUER, J. P., Optimization over the Efficient Set Using an Active Constraint
Approach, Zeitschrift für Operations Research, Vol. 35, pp. 185-195, 1991.
42. BENSON, H. P., and SAYIN, S., A Face Search Heuristic for Optimizing over the
Efficient Set, Naval Research Logistics, Vol. 40, pp. 103-116, 1993.
43. KORHONEN, P., SALO, S., and STEUER, R., A Heuristic for Estimating Nadir
Criterion Values in Multiple Objective Linear Programming, Working Paper,
Helsinki School of Economics and Business Administration, Helsinki, Finland,
1992.
44. AKSOY, Y., An Interactive Branch-and-Bound Algorithm for Bicriterion Noncon-
vex/Mixed Integer Programming, Naval Research Logistics, Vol. 37, pp. 403-
417, 1990.
45. ANEJA, Y. P., and NAIR, K. P. K., Bicriteria Transportation Problem, Manage-
ment Science, Vol. 25, pp. 73-78, 1979.
46. BENSON, H. P., Vector Maximization with Two Objective Functions, Journal of
Optimization Theory and Applications, Vol. 28, pp. 253-257, 1979.
47. BENSON, H. P., and MORIN, T. L., A Bicriteria Mathematical Programming
Model for Nutrition Planning in Developing Nations, Management Science, Vol.
33, pp. 1593-1601, 1987.
48. COHON, J. L., CHURCH, R. L., and SHEER, D. P., Generating Multiobjective
Tradeoffs: An Algorithm for Bicriterion Problems, Water Resources Research,
Vol. 15, pp. 1001-1010, 1979.
49. GEARHART, W. B., On the Characterization of Pareto-Optimal Solutions in Bicrit-
eria Optimization, Journal of Optimization Theory and Applications, Vol. 27,
pp. 301-307, 1979.
50. GEOFFRION, A. M., Solving Bicriterion Mathematical Programs, Operations
Research, Vol. 15, pp. 39-54, 1967.
51. KIZILTAN, G., and YUCAOGLU, E., An Algorithm for Bicriterion Linear Program-
ming, European Journal of Operational Research, Vol. 10, pp. 406-411, 1982.
52. KLEIN, G., MOSKOWITZ, H., and RAVINDRAN, A., Comparative Evaluation of
Prior versus Progressive Articulation of Preferences in Bicriterion Optimization,
Naval Research Logistics, Vol. 33, pp. 309-323, 1986.
53. SHIN, W. S., and ALLEN, D. B., An Interactive Paired Comparison Method for
Bicriterion Integer Programming, Naval Research Logistics, Vol. 41, pp. 423-
434, 1994.
54. SHIN, W. S., and LEE, J. J., A Multirun Interactive Method for Bicriterion Optimi-
zation Problems, Naval Research Logistics, Vol. 39, pp. 115-135, 1992.
55. WALKER, J., An Interactive Method as an Aid in Solving Bicriterion Mathematical
Programming Problems, Journal of the Operational Research Society, Vol. 29,
pp. 915-922, 1978.
56. HAIMES, Y. Y., WISMER, D. A., and LASDON, L. S., On the Bicriterion Formula-
tion of Integrated System Identification and Systems Optimization, IEEE Transac-
tions on Systems, Man, and Cybernetics, Vol. 1, pp. 296-297, 1971.
57. COHON, J. L., SCAVONE, G., and SOLANKI, R., Multicriterion Optimization in
Resources Planning, Multicriteria Optimization in Engineering and in the
Sciences, Edited by W. Stadler, Plenum Press, New York, New York, pp. 117-
160, 1988.
58. VAN WASSENHOVE, L. N., and GELDERS, L. F., Solving a Bicriterion Scheduling
Problem, European Journal of Operational Research, Vol. 4, pp. 42-48, 1980.
59. DAUER, J. P., Analysis of the Objective Space in Multiple Objective Linear Pro-
gramming, Journal of Mathematical Analysis and Applications, Vol. 126, pp.
579-593, 1987.
60. DAUER, J. P., On Degeneracy and Collapsing in the Construction of the Set of
Objective Values in a Multiple Objective Linear Program, Annals of Operations
Research, Vol. 47, pp. 279-292, 1993.
61. DAUER, J. P., and LIU, Y. H., Solving Multiple Objective Linear Programs in
Objective Space, European Journal of Operational Research, Vol. 46, pp. 350-
357, 1990.
62. DAUER, J. P., and SALEH, O. A., Constructing the Set of Efficient Objective
Values in Multiple Objective Linear Programs, European Journal of Operational
Research, Vol. 46, pp. 358-365, 1990.
63. BENSON, H. P., A Geometrical Analysis of the Efficient Outcome Set in Multiple
Objective Convex Programs with Linear Criterion Functions, Journal of Global
Optimization, Vol. 6, pp. 231-251, 1995.
64. MANGASARIAN, O. L., Nonlinear Programming, McGraw-Hill Book Company,
New York, New York, 1969.
65. ROCKAFELLAR, R. T., Convex Analysis, Princeton University Press, Princeton,
New Jersey, 1970.
66. ARROW, K. J., BARANKIN, E. W., and BLACKWELL, D., Admissible Points of
Convex Sets, Contributions to the Theory of Games, Edited by H. W. Kuhn
and A. W. Tucker, Princeton University Press, Princeton, New Jersey, pp. 87-
91, 1953.
67. BITRAN, G. R., and MAGNANTI, T. L., The Structure of Admissible Points with
Respect to Cone Dominance, Journal of Optimization Theory and Applications,
Vol. 29, pp. 573-614, 1979.
68. BENSON, H. P., Admissible Points of a Convex Polyhedron, Journal of Optimiza-
tion Theory and Applications, Vol. 38, pp. 341-361, 1982.
69. BLACKWELL, D., and GIRSHICK, M. A., Theory of Games and Statistical Deci-
sions, Dover Publications, New York, New York, 1954.
70. MURTY, K. G., Linear Programming, John Wiley and Sons, New York, New
York, 1983.
71. BENSON, H. P., Complete Efficiency and the Initialization of Algorithms for Multi-
ple Objective Programming, Operations Research Letters, Vol. 10, pp. 481-487,
1991.
72. BAZARAA, M. S., JARVIS, J. J., and SHERALI, H. D., Linear Programming and
Network Flows, 2nd Edition, John Wiley and Sons, New York, New York, 1990.