
STOR601: Research Topic I

Multi-Objective Optimization
Ciara Pike-Burke

1 Introduction
Optimization is a widely used technique in Operational Research that has been employed in a
range of applications. The aim is to maximize or minimize a function (e.g. maximizing profit or
minimizing environmental impact) subject to a set of constraints. However, in many situations,
decision makers find themselves wanting to optimize several different objective functions at the
same time. This leads to Multi-Objective Optimization (MOO). It is easy to see that if the
multiple objectives do not coincide, the problem becomes considerably more difficult. Many
methods have been suggested for MOO; this report will look at some of them.

1.1 Background
In Multi-Objective Optimization, it is often unclear what constitutes an optimal solution. A
solution may be optimal for one objective function, but suboptimal for another. Let y_i = f_i(x),
for i = 1, ..., p, denote the p objective functions to be optimized over the feasible set X.
Throughout this report, the optimization problem will be assumed to be of minimization type.
A feasible solution x* ∈ X is efficient (or, in some literature, Pareto optimal) if there is
no other x ∈ X satisfying both f_k(x) ≤ f_k(x*) for k = 1, ..., p, and f_i(x) < f_i(x*) for some
i ∈ {1, ..., p}. The solution x* is weakly efficient if there is no x ∈ X satisfying f_k(x) < f_k(x*)
for all k = 1, ..., p. The image of a (weakly) efficient solution, y* = f(x*), is called a (weakly)
non-dominated point. If X is a polytope, then x ∈ X is an extreme point if, for 0 < λ < 1
and x^1, x^2 ∈ X, x = λx^1 + (1 - λ)x^2 implies that x^1 = x^2 = x. Another way of comparing
solutions is to use lexicographic ordering: y^1 <_lex y^2 if y^1_q < y^2_q for q = min{k | y^1_k ≠ y^2_k}. A
lexicographically optimal solution is a feasible solution x* ∈ X such that there is no x ∈ X
with f(x) <_lex f(x*). The utopia point or ideal point of a MOO problem, denoted by
y^I = (y^I_1, ..., y^I_p), is defined by y^I_k := min_{x ∈ X} f_k(x) = min_{y ∈ Y} y_k, where Y denotes the image of
the feasible set X under f. If X is convex and the objectives are linear, then the image Y is also convex.

1.2 Example
In this report, the following simple example will be used to demonstrate different methods:

Minimize    f_1(x) = 3x_1 + x_2,    f_2(x) = -x_1 + 4x_2
Subject to  x_2 ≤ 6
            x_1 + 3x_2 ≥ 3
            2x_1 - x_2 ≤ 6
            2x_1 + x_2 ≤ 10
            x_1 ≥ 0, x_2 ≥ 0.
The feasible region and the projected feasible region are shown in Figure 1.

2 Basic Techniques
2.1 The Weighted Sum Method
One of the most intuitive methods for solving a Multi-Objective Optimization problem is to
optimize a weighted sum of the objective functions using any method for single objective
optimization.

(a) The feasible region X. (b) The projected feasible region Y = f(X).

Figure 1: The feasible region and projected feasible region of Example 1.2.

The general approach is to assign to each objective function f_i(x) a weight w_i > 0
and minimize the single objective Σ_{i=1}^p w_i f_i(x) subject to the problem constraints. Note that
if we take the weights to be w = e_i, the standard basis vector, the weighted sum method is
equivalent to minimizing f_i alone.
It has been shown that the weighted sum method as stated above will produce efficient solutions.
However, if the positivity requirement on the w_i is weakened to w_i ≥ 0, there is the potential to obtain
only weakly efficient solutions (Marler and Arora, 2010). The method is simple to implement, but
the results obtained are highly dependent on the weights used, which have to be specified before
the optimization process begins. Additionally, the weighted sum method is not able to represent
complex preferences and in some cases will only approximate the decision maker's preferences.
The results of applying the weighted sum method to Example 1.2 are shown in Figure 2.
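As an illustration, the following sketch applies the weighted sum method to Example 1.2 with scipy.optimize.linprog; the particular weights chosen are arbitrary, and the constraint and objective data are those of the example as reconstructed above.

```python
# A minimal sketch of the weighted sum method for Example 1.2 using SciPy.
import numpy as np
from scipy.optimize import linprog

# Constraints of Example 1.2 written as A_ub @ x <= b_ub
# (the constraint x1 + 3x2 >= 3 is negated to fit the <= form).
A_ub = np.array([[0.0, 1.0],    # x2 <= 6
                 [-1.0, -3.0],  # -(x1 + 3x2) <= -3
                 [2.0, -1.0],   # 2x1 - x2 <= 6
                 [2.0, 1.0]])   # 2x1 + x2 <= 10
b_ub = np.array([6.0, -3.0, 6.0, 10.0])

c1 = np.array([3.0, 1.0])   # f1(x) = 3x1 + x2
c2 = np.array([-1.0, 4.0])  # f2(x) = -x1 + 4x2

def weighted_sum(w):
    """Minimize w*f1 + (1-w)*f2 over the feasible region of Example 1.2."""
    res = linprog(w * c1 + (1 - w) * c2, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)])
    return res.x, (c1 @ res.x, c2 @ res.x)

for w in (0.3, 0.5, 0.9):   # weights either side of the reported 0.46 threshold
    x, y = weighted_sum(w)
    print(f"w={w}: x={x}, y={y}")
```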

2.2 The ε-Constraint Method


Another simple approach to MOO is to minimize one objective, say f_i(x), subject to the additional
constraints that f_j(x) ≤ ε_j for all j ≠ i and some ε_j > 0, where ε_j represents the worst
value that f_j is allowed to take. This method is known as the ε-constraint method and is very simple
to implement. It has been shown that if the solution to the ε-constraint problem is unique then
it is efficient (Marler and Arora, 2004). One issue with this approach is that it is necessary to
preselect which objective to minimize as well as the values ε_j. This is problematic, as for many values
of ε there will be no feasible solution. Some results for Example 1.2 are shown in Figure 3.
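A sketch of the ε-constraint method for Example 1.2 is given below, again using scipy.optimize.linprog; the value ε2 = 2 mirrors Figure 3(a), while the second call uses a deliberately unattainable ε2 to illustrate the feasibility issue mentioned above.

```python
# A hedged sketch of the epsilon-constraint method for Example 1.2:
# minimize f1 subject to f2(x) <= eps2 on top of the original constraints.
import numpy as np
from scipy.optimize import linprog

A_ub = np.array([[0.0, 1.0], [-1.0, -3.0], [2.0, -1.0], [2.0, 1.0]])
b_ub = np.array([6.0, -3.0, 6.0, 10.0])
c1, c2 = np.array([3.0, 1.0]), np.array([-1.0, 4.0])

def eps_constraint(eps2):
    A = np.vstack([A_ub, c2])          # add the extra row f2(x) <= eps2
    b = np.append(b_ub, eps2)
    res = linprog(c1, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
    if not res.success:                # infeasible when eps2 is too tight
        return None
    return res.x, (c1 @ res.x, c2 @ res.x)

print(eps_constraint(2.0))    # expected y roughly (23/7, 2), as in Figure 3(a)
print(eps_constraint(-10.0))  # None: no feasible solution for this eps2
```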

2.3 The Goal Programming Method


Goal Programming is a method commonly used in mathematical programming when it is not
possible to exactly meet some constraints. Charnes and Cooper (1977) present a way of using
goal programming in the Multi-Objective setting. Their method is to solve the following LP:
    min_{x, δ^+, δ^-}   Σ_{i=1}^p (δ_i^+ + δ_i^-)
    s.t.   f_i(x) + δ_i^+ - δ_i^- = g_i,    i = 1, ..., p
           Ax ≤ b
           δ_i^+, δ_i^- ≥ 0,    i = 1, ..., p
           x ≥ 0.

This linear program is minimizing the deviations of the objective functions from some pre-
specified goals, gi . One fairly intuitive option is to use the utopia point as the goal for each
objective and try to minimize the deviations from this perfect optimum (even if it is not feasi-
ble for the problem). In this case, the goal programming method is equivalent to compromise
programming (Romero et al., 1998). However, the solution obtained by the goal programming
method will not necessarily be an efficient solution (Marler and Arora, 2004). The goal
programming method was implemented for Example 1.2 and the result is given in Figure 2.
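The goal programming LP above can be written out explicitly for Example 1.2; the sketch below does this with scipy.optimize.linprog, taking the goal to be the ideal point of the example (an assumption of this sketch) and using a decision vector (x1, x2, d1+, d1-, d2+, d2-), where the d variables are the deviation variables δ of the LP above.

```python
# A minimal sketch of the goal programming LP for Example 1.2 with SciPy.
# The goal g is assumed here to be the utopia point of the example.
import numpy as np
from scipy.optimize import linprog

g = np.array([1.0, -3.0])                      # assumed goals g_i
c1, c2 = np.array([3.0, 1.0]), np.array([-1.0, 4.0])

# objective: minimize the total deviation d1+ + d1- + d2+ + d2-
cost = np.array([0, 0, 1, 1, 1, 1], dtype=float)

# equality rows: f_i(x) + d_i+ - d_i- = g_i
A_eq = np.array([[*c1, 1, -1, 0, 0],
                 [*c2, 0, 0, 1, -1]])
b_eq = g

# original inequality constraints on (x1, x2), padded for the deviation variables
A_ub = np.hstack([np.array([[0.0, 1], [-1, -3], [2, -1], [2, 1]]),
                  np.zeros((4, 4))])
b_ub = np.array([6.0, -3.0, 6.0, 10.0])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
x = res.x[:2]
print("x =", x, " y =", (c1 @ x, c2 @ x))      # expect y = (1, 4) for this goal
```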


(a) Optimal solution at y* = (1, 4). (b) Optimal solution at y* = (9, -3). (c) Optimal solution at y* = (6.2143, -0.5625).
Figure 2: Three solutions to Example 1.2. (a) was found to be optimal for the weighted sum
method with weight w > 0.46¹ and for the goal programming method with goal g = (1, -3) = y^I.
(b) was found to be optimal for the weighted sum method with weight w ≤ 0.46 and for the
bi-objective simplex method. (c) was found to be optimal for the game theoretic approach.

(a) Minimize y_1 subject to the additional constraint that y_2 ≤ ε_2, where ε_2 = 2. (b) Minimize y_2 subject to the additional constraint that y_1 ≤ ε_1, where ε_1 = 4.

Figure 3: The ε-constraint method for Example 1.2. The optimal solution in (a) is
y* = (23/7, 2) and in (b) it is y* = (4, 11/8).

3 The Simplex Method


It is possible to extend the simplex method commonly used in single objective optimization
to the bi-objective case. In order to do so, the mathematical program must be linear and in
standard form (slack/surplus variables may have to be added). The bi-objective simplex method
is outlined in Algorithm 1. Note that in the algorithm, N represents the set of non-basic columns
of A, c^1_i and c^2_i denote the reduced costs of column i for the two objectives in the current
tableau, and e is the vector of ones.
If the initial basis corresponds to an efficient solution then the bi-objective simplex method
pivots between efficient solutions. Furthermore, if the LP in Phase II has an optimal basic solution
then this will correspond to an initial efficient solution to the bi-objective problem. Hence, as
long as an optimal basic solution is found in Phase II, the method should find efficient solutions
(Ehrgott, 2006). An issue with this method is that, due to the definition of the optimality
criterion, the order of the objective functions will influence the final solution.

3.1 Example
We return to Example 1.2 and apply the bi-objective simplex algorithm.
Phase I: Adding slack/surplus variables s_1, ..., s_4 and an artificial variable z for the constraint
x_1 + 3x_2 ≥ 3 gives the initial tableau below; pivoting x_2 into the basis (z leaves) gives the second tableau.

           x_1   x_2   s_1   s_2   s_3   s_4     z  |  RHS
  s_1        0     1     1     0     0     0     0  |    6
  z          1     3     0    -1     0     0     1  |    3
  s_3        2    -1     0     0     1     0     0  |    6
  s_4        2     1     0     0     0     1     0  |   10
  e^T z      1     3     0    -1     0     0     0  |    3

           x_1   x_2   s_1   s_2   s_3   s_4     z  |  RHS
  s_1     -1/3     0     1   1/3     0     0  -1/3  |    5
  x_2      1/3     1     0  -1/3     0     0   1/3  |    1
  s_3      7/3     0     0  -1/3     1     0   1/3  |    7
  s_4      5/3     0     0   1/3     0     1  -1/3  |    9
  e^T z      0     0     0     0     0     0    -1  |    0

Phase II: c(λ) = λ(3x_1 + x_2) + (1 - λ)(-x_1 + 4x_2) = (4λ - 1)x_1 + (4 - 3λ)x_2. An optimal
basis for λ = 1 is given by {x_2, s_1, s_3, s_4}, with optimal solution
(x_1, x_2, s_1, s_2, s_3, s_4) = (0, 1, 5, 0, 7, 9) and c*(1) = 1.
¹ In the bi-objective case, we can use a single weight, 0 ≤ w ≤ 1, and define the objective wf_1(x) + (1 - w)f_2(x).


Algorithm 1: Bi-Objective Simplex

Input: A bi-objective LP of the form min{Cx | Ax = b, x ≥ 0}, where C has objective rows c^1 and c^2.
Phase I: Solve the auxiliary LP min{e^T z | Ax + z = b, x ≥ 0, z ≥ 0} to get optimal solution z*.
if e^T z* > 0 then
    stop: there are no feasible solutions.
else
    Define B to be the optimal basis.
    Go to Phase II.
Phase II: Define c(λ) := λc^1 + (1 - λ)c^2.
Solve the LP min{c(λ)^T x | Ax = b, x ≥ 0} for λ = 1 using initial basis B.
Phase III: while I = {i ∈ N | c^2_i < 0, c^1_i ≥ 0} ≠ ∅ do
    λ = max_{i ∈ I} c^2_i / (c^2_i - c^1_i),
    s ∈ arg max_{i ∈ I} c^2_i / (c^2_i - c^1_i),
    r ∈ arg min_{j ∈ B} { b_j / A_{js} : A_{js} > 0 }.
    Perform a simplex pivot on column x_s and row r.
Return: A sequence of λ values and the corresponding optimal basic feasible solutions (BFSs).

Phase III:

           x_1   x_2   s_1   s_2   s_3   s_4  |  RHS
  c^1      8/3     0     0   1/3     0     0  |   -1
  c^2     -7/3     0     0   4/3     0     0  |   -4
  x_2      1/3     1     0  -1/3     0     0  |    1
  s_1     -1/3     0     1   1/3     0     0  |    5
  s_3      7/3     0     0  -1/3     1     0  |    7
  s_4      5/3     0     0   1/3     0     1  |    9

Here I = {1}, λ = 7/15, s = 1 and r = 3, so x_1 enters the basis and s_3 leaves. Pivoting gives:

           x_1   x_2   s_1   s_2   s_3   s_4  |  RHS
  c^1        0     0     0   5/7  -8/7     0  |   -9
  c^2        0     0     0     1     1     0  |    3
  x_2        0     1     0  -2/7  -1/7     0  |    0
  s_1        0     0     1   2/7   1/7     0  |    6
  x_1        1     0     0  -1/7   3/7     0  |    3
  s_4        0     0     0   4/7  -5/7     1  |    4

Now I = ∅ and the algorithm stops. The optimal solution from this tableau is given by the negative
of the final entries in the c^1 and c^2 rows, so y_1 = 9, y_2 = -3. This solution is shown in Figure 2.
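As a cross-check of the final tableau (the entries above are reconstructed), the basic solution and both reduced-cost rows for the basis {x_2, s_1, x_1, s_4} can be recomputed directly from the standard-form data of Example 1.2; the sketch below does this with NumPy.

```python
# A hedged numerical check of the final bi-objective simplex tableau for
# Example 1.2: recompute the basic solution and the two reduced-cost rows
# for the basis {x2, s1, x1, s4} from the standard-form data.
import numpy as np

# columns: x1, x2, s1, s2, s3, s4 (s2 is the surplus of x1 + 3x2 >= 3)
A = np.array([[0.0, 1, 1, 0, 0, 0],
              [1.0, 3, 0, -1, 0, 0],
              [2.0, -1, 0, 0, 1, 0],
              [2.0, 1, 0, 0, 0, 1]])
b = np.array([6.0, 3, 6, 10])
C = np.array([[3.0, 1, 0, 0, 0, 0],     # c1: f1 = 3x1 + x2
              [-1.0, 4, 0, 0, 0, 0]])   # c2: f2 = -x1 + 4x2

basis = [1, 2, 0, 5]                    # columns of x2, s1, x1, s4
B_inv = np.linalg.inv(A[:, basis])
x_B = B_inv @ b                         # values of the basic variables
reduced = C - C[:, basis] @ B_inv @ A   # reduced-cost rows for both objectives
print(x_B)                              # expect (0, 6, 3, 4) for (x2, s1, x1, s4)
print(reduced)                          # the c2 row is nonnegative: optimal for f2
print(C[:, basis] @ x_B)                # objective values y = (9, -3)
```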

3.2 Multi-Objective Simplex


Ehrgott (2006) also presents a simplex algorithm for the case where p > 2. Even for just one
objective, the simplex algorithm may require an exponential number of pivots, and so the same is
true of the bi- and multi-objective simplex algorithms. Furthermore, as dimensionality increases,
so does the number of efficient extreme points (which must be considered in the algorithm), thus
making the problem more computationally difficult.
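The output of the bi-objective simplex on Example 1.2 can also be checked by brute force: sweep the weight λ in c(λ) = λc^1 + (1 - λ)c^2 and record which extreme points become optimal. The sketch below is only such a check (under the reconstructed example data), not an implementation of the pivoting algorithm itself.

```python
# Sweep lambda in c(lambda) = lambda*c1 + (1-lambda)*c2 over Example 1.2 and
# record the optimal vertices; the set of optimal vertices should change at
# roughly lambda = 7/15, matching the weighted-sum threshold reported above.
import numpy as np
from scipy.optimize import linprog

A_ub = np.array([[0.0, 1.0], [-1.0, -3.0], [2.0, -1.0], [2.0, 1.0]])
b_ub = np.array([6.0, -3.0, 6.0, 10.0])
c1, c2 = np.array([3.0, 1.0]), np.array([-1.0, 4.0])

seen = {}
for lam in np.linspace(0.0, 1.0, 201):
    res = linprog(lam * c1 + (1 - lam) * c2, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)])
    seen.setdefault(tuple(np.round(res.x, 6)), lam)   # first lambda per vertex

for vertex, lam in seen.items():
    print(f"vertex {vertex} first optimal at lambda = {lam:.3f}")
# Expected vertices: (3, 0) for small lambda and (0, 1) for lambda above ~0.467.
```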

4 The Game Theoretic Approach


An interesting approach to Multi-Objective Optimization is to think of it as a multi-player
co-operative game where each objective function to be minimized is a player in the game. A
game is said to be co-operative if the players are able to reach an agreement on strategies.
In Multi-Objective Optimization, the players are the objective functions which are ultimately
controlled by the decision maker and so can be expected to reach an agreement, meaning the
game is co-operative. Building on the foundational text on co-operative games (Nash, 1953), a game
theoretic method for Multi-Objective Optimization was proposed by Rao (1987); it is outlined
in Algorithm 2.


Algorithm 2: Multi-Objective Optimization using Game Theory

Input: A multi-objective LP of the form min{f_1(x), ..., f_p(x) | Ax ≤ b, x ≥ 0}.
Step 1: Normalize the objective functions f_i(x) to F_i(x) = m_i f_i(x) such that
F_1(x̄) = ... = F_p(x̄) = M at a chosen feasible solution x̄.
Step 2: for i = 1, ..., p do
    solve min{F_i(x) | Ax ≤ b, x ≥ 0} to get solution x^i.
Step 3: for i = 1, ..., p do
    set F_{w_i} = max_{1 ≤ j ≤ p} F_i(x^j).
Step 4: Set S = Π_{i=1}^p [F_{w_i} - F_i(x)] and solve max{S | Ax ≤ b, x ≥ 0}.
Return: An efficient solution.

In Step 1, the objective functions must be normalized because we are multiplying them, so differences
in scale could have an effect on the solution. Rao suggests doing this by finding a feasible
solution, x̄, and using the equality m_1 f_1(x̄) = ... = m_p f_p(x̄) = M to calculate the m_i for some
constant M. This way of normalizing the f_i depends on the solution x̄, so better methods that do
not depend on x̄ could be investigated. In Step 3, we calculate the worst value that each F_i can
take over the individual optima, and in Step 4 we then try to find a solution, x, such that each
F_i(x) is as far as possible from its worst value.
The game theoretic method was implemented for Example 1.2 and the result is shown in
Figure 2. It is interesting to observe that the solution in this case lies between the other two
solutions (which represent the optima of f_1 and f_2), suggesting that a compromise has been made.
The efficiency of the solution obtained by Algorithm 2 is stated in Ghotbi (2013). This method
involves optimizing a non-linear function, which is generally more difficult than the linear case;
however, if S is concave the problem becomes significantly easier.
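The sketch below applies Algorithm 2 to Example 1.2 with SciPy. The feasible point x̄ used for the normalization, the solver starting point, and the extra constraints F_i(x) ≤ F_{w_i} (added so that the product is maximized only where no player does worse than its worst value) are assumptions of this sketch, and the compromise point obtained depends on the choice of x̄.

```python
# A hedged sketch of the game-theoretic method (Algorithm 2) on Example 1.2.
import numpy as np
from scipy.optimize import linprog, minimize

A_ub = np.array([[0.0, 1.0], [-1.0, -3.0], [2.0, -1.0], [2.0, 1.0]])
b_ub = np.array([6.0, -3.0, 6.0, 10.0])
C = np.array([[3.0, 1.0], [-1.0, 4.0]])            # rows: f1, f2

# Step 1: normalize so that F_1(x_bar) = F_2(x_bar) = M = 1 at a feasible x_bar.
x_bar = np.array([1.0, 2.0])                       # assumed feasible point
m = 1.0 / (C @ x_bar)                              # F_i(x) = m_i * f_i(x)

# Step 2: minimize each normalized objective individually.
xs = [linprog(m[i] * C[i], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)]).x for i in range(2)]

# Step 3: worst value of each F_i over the individual optima.
Fw = np.array([max(m[i] * C[i] @ x for x in xs) for i in range(2)])

# Step 4: maximize S(x) = prod_i (Fw_i - F_i(x)) over the feasible region,
# restricted (an assumption of this sketch) to F_i(x) <= Fw_i.
def neg_S(x):
    return -np.prod(Fw - m * (C @ x))

cons = [{"type": "ineq", "fun": lambda x: b_ub - A_ub @ x},   # Ax <= b
        {"type": "ineq", "fun": lambda x: Fw - m * (C @ x)}]  # F_i <= Fw_i
res = minimize(neg_S, x0=np.array([1.0, 1.0]), bounds=[(0, None), (0, None)],
               constraints=cons, method="SLSQP")
print("x =", res.x, " y =", C @ res.x)  # a compromise point on the efficient edge
```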

5 The Two Phase Method for Two Objectives


A subclass of Multi-Objective Optimization problems is Multi-Objective Combinatorial Optimization
(MOCO). Formally, a MOCO problem can be stated as
min{Cx | Ax = b, x ∈ {0, 1}^n}
and interpreted as MOO with the additional requirements that all variables are binary and the
constraints are linear. A two phase method for solving this type of problem was presented in Ehrgott
and Gandibleux (2014).
In phase 1, the objective is to find a complete set of extreme efficient solutions. A standard
way to do this is to find two lexicographically optimal solutions² and then calculate a weight
vector, λ, normal to the line connecting their images (a small sketch of this computation is given
below). This weight vector defines the weighted sum LP min{λ^T Cx | Ax = b, x ∈ {0, 1}^n}, the
solution of which is used to split the problem into two sub-problems. In each sub-problem the same
technique is applied, and this is repeated until no further non-dominated extreme points are found.
Once all the non-dominated extreme points are found, phase 2 aims to find any other efficient
solutions. It has been shown that this search can be reduced to the triangles created by
connecting adjacent non-dominated extreme points found in phase 1 (see Figure 4(c)). In fact,
by using a ranking algorithm to order the feasible solutions in each triangle according to λ^T Cx,
the search can be stopped when a solution is found that has a worse value of λ^T Cx than all the
corners of the triangle.
By construction, the two phase method finds a set of efficient solutions to the problem. Work
has been done on extending the two phase method to situations with more than two objective
functions; however, even with just three objective functions, the problem becomes considerably
more difficult (Ehrgott and Gandibleux, 2014).
² In a bi-objective problem, the first lexicographically optimal solution is found using the definition in Section 1.1. The second can then be found by switching the order of the objective functions.
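As referenced in the description of phase 1 above, the weight vector normal to the segment joining two non-dominated points y^r and y^s (a standard choice in dichotomic two phase schemes) can be computed as λ = (y^r_2 - y^s_2, y^s_1 - y^r_1). Below is a small sketch of this computation; it uses the two lexicographic optima of Example 1.2 purely as illustrative points (Example 1.2 is an LP rather than a MOCO).

```python
# A small sketch of the phase 1 weight computation: the weight vector normal
# to the line joining two non-dominated points in objective space.
import numpy as np

def normal_weight(y_r: np.ndarray, y_s: np.ndarray) -> np.ndarray:
    """Weight vector orthogonal to the segment from y_r to y_s."""
    return np.array([y_r[1] - y_s[1], y_s[0] - y_r[0]], dtype=float)

# e.g. the images of the two lexicographically optimal solutions of Example 1.2
y_r, y_s = np.array([1.0, 4.0]), np.array([9.0, -3.0])
lam = normal_weight(y_r, y_s)
print(lam)                    # weights for the LP  min lambda^T C x
print(lam @ y_r, lam @ y_s)   # both points attain the same weighted-sum value
```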


(a) Phase 1: a weighted average normal to the lexicographical solutions. (b) Phase 1: no further non-dominated extreme points can be found. (c) Phase 2: the triangles to which the search for efficient solutions is restricted.

Figure 4: The two phase method for bi-objective problems.

6 Conclusion
This report has looked into several methods for solving multi-objective optimization problems.
However, there exist many more approaches, details of which can be found in Marler and Arora
(2004), as well as many combinations of existing methods. Aside from the two phase method, which
is only suitable for MOCO problems, all of the methods described have been applied to Example
1.2. Interestingly, only three different solutions to this problem were obtained, all of which are
efficient. Most methods produced solutions that were lexicographically optimal for one of the
objective functions; only the game theoretic approach produced a compromise solution, and this
came at the cost of solving a non-linear program. Therefore, it would be useful to develop
methods for generating compromise solutions that are more computationally efficient.
In multi-objective optimization, different methods are often used to generate a set of efficient
solutions from which the decision maker can choose. Hence, methods that are able to produce
the entire set of efficient solutions (such as the two phase method for MOCO) are preferable, and
more such methods should be investigated. Each of the methods discussed has advantages
and disadvantages, and many of them can be adapted to specific problems. However, there is
still no general best method for solving Multi-Objective Optimization problems.

References
Charnes, A. and Cooper, W. W. (1977). Goal programming and multiple objective optimizations:
Part 1. European Journal of Operational Research, 1(1):39–54.
Ehrgott, M. (2006). Multicriteria Optimization. Springer Science & Business Media.
Ehrgott, M. and Gandibleux, X. (2014). Multi-objective combinatorial optimisation: Concepts,
exact algorithms and metaheuristics. In Al-Mezel, S. A. R., Al-Solamy, F. R. M., and Ansari,
Q. H., editors, Fixed Point Theory, Variational Analysis, and Optimization, pages 307–341.
CRC Press.
Ghotbi, E. (2013). Bi- and Multi Level Game Theoretic Approaches in Mechanical Design. PhD
thesis, University of Wisconsin-Milwaukee.
Marler, R. T. and Arora, J. S. (2004). Survey of multi-objective optimization methods for
engineering. Structural and Multidisciplinary Optimization, 26(6):369–395.
Marler, R. T. and Arora, J. S. (2010). The weighted sum method for multi-objective optimization:
new insights. Structural and Multidisciplinary Optimization, 41(6):853–862.
Nash, J. (1953). Two-person cooperative games. Econometrica: Journal of the Econometric
Society, pages 128–140.
Rao, S. (1987). Game theory approach for multiobjective structural optimization. Computers
& Structures, 25(1):119–127.
Romero, C., Tamiz, M., and Jones, D. (1998). Goal programming, compromise programming
and reference point method formulations: linkages and utility interpretations. Journal of the
Operational Research Society, 49(9):986–991.
