Multi-Objective Optimization
Ciara Pike-Burke
1 Introduction
Optimization is a widely used technique in Operational Research that has been employed in a
range of applications. The aim is to maximize or minimize a function (e.g. maximizing profit or
minimizing environmental impact) subject to a set of constraints. However, in many situations,
decision makers find themselves wanting to optimize several different objective functions at the
same time. This leads to Multi-Objective Optimization (MOO). It is easy to see that if the
multiple objectives do not coincide, the problem becomes considerably more difficult. Many
methods have been suggested for MOO; this report will look at some of them.
1.1 Background
In Multi-Objective Optimization, it is often unclear what constitutes an optimal solution. A
solution may be optimal for one objective function but suboptimal for another. Let yi = fi(x)
for i = 1, . . . , p denote the p objective functions to be optimized over the feasible set X.
Throughout this report, the optimization problem will be assumed to be of minimization type.
A feasible solution x* ∈ X is efficient (or, in some literature, Pareto optimal) if there is
no other x ∈ X satisfying both fk(x) ≤ fk(x*) for k = 1, . . . , p, and fi(x) < fi(x*) for some
i ∈ {1, . . . , p}. x* is weakly efficient if there is no x ∈ X satisfying fk(x) < fk(x*) for all k =
1, . . . , p. The image of a (weakly) efficient solution, y* = f(x*), is called a (weakly) non-dominated
point. If X is a polytope, then x ∈ X is an extreme point if, for 0 ≤ λ ≤ 1
and x1, x2 ∈ X, x = λx1 + (1 − λ)x2 implies that x1 = x2 = x. Another way of comparing
solutions is to use lexicographic ordering: y¹ <lex y² if y¹q < y²q for q = min{k | y¹k ≠ y²k}. A
lexicographically optimal solution is a feasible solution x* ∈ X such that there is no x ∈ X
with f(x) <lex f(x*). The utopia point or ideal point of a MOO problem, denoted by
yI = (y1I, . . . , ypI), is defined by ykI := min_{x ∈ X} fk(x) = min_{y ∈ Y} yk, where Y denotes the image of
the feasible set X under f. If X is convex and the objective functions are linear, then Y is also convex.
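The componentwise definition of efficiency above can be checked mechanically on a finite set of outcome points. The following sketch (the function names are illustrative, not from the report) filters a finite set of objective vectors down to its non-dominated points:

```python
def dominates(y, z):
    """y dominates z: y is no worse in every objective and strictly better in one."""
    return all(a <= b for a, b in zip(y, z)) and any(a < b for a, b in zip(y, z))

def non_dominated(Y):
    """Return the points of Y that are not dominated by any other point of Y."""
    return [y for y in Y if not any(dominates(z, y) for z in Y if z != y)]
```

For instance, non_dominated([(1, 4), (9, -3), (5, 5)]) keeps (1, 4) and (9, -3) but drops (5, 5), since (1, 4) is at least as good as (5, 5) in both coordinates and strictly better in both.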
1.2 Example
In this report, the following simple example will be used to demonstrate different methods:
Minimize    (3x1 + x2, −x1 + 4x2)
Subject To  x2 ≤ 6
            x1 + 3x2 ≥ 3
            2x1 − x2 ≤ 6
            2x1 + x2 ≤ 10
            x1 ≥ 0, x2 ≥ 0
The feasible region and the projected feasible region are shown in Figure 1.
2 Basic Techniques
2.1 The Weighted Sum Method
One of the most intuitive methods for solving a Multi-Objective Optimization problem is to
optimize a weighted sum of the objective functions using any method for single-objective
optimization.

STOR601: Research Topic I

Figure 1: The feasible region and projected feasible region of Example 1.2.

The general approach is to assign to each objective function fi(x) a weight wi > 0
and minimize the weighted objective w1 f1(x) + · · · + wp fp(x) subject to the problem constraints.
Note that if we take the weights to be w = ei, the i-th standard basis vector, the weighted sum
method is equivalent to minimizing fi alone.
It has been shown that the weighted sum method as stated above will produce efficient solutions.
However, if the positivity requirement on wi is weakened to wi ≥ 0, there is the potential to obtain
only weakly efficient solutions (Marler and Arora, 2010). The method is simple to implement, but
the results obtained are highly dependent on the weights used, which have to be specified before
the optimization process begins. Additionally, the weighted sum method is not able to represent
complex preferences and in some cases will only approximate the decision maker's preferences.
The results of applying the weighted sum method to Example 1.2 are shown in Figure 2.
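As a concrete check, the weighted sum method can be applied to Example 1.2 directly. The sketch below (a minimal implementation, not the code used for the report's figures) enumerates the vertices of the feasible polytope by intersecting pairs of constraint boundaries, then minimizes w1 f1 + w2 f2 over them; this suffices because a linear program attains its optimum at a vertex:

```python
from itertools import combinations

# Example 1.2 with f1 = 3*x1 + x2 and f2 = -x1 + 4*x2.  Every constraint is
# written as a*x1 + b*x2 <= c, so x1 + 3*x2 >= 3 becomes -x1 - 3*x2 <= -3.
CONSTRAINTS = [(0, 1, 6), (-1, -3, -3), (2, -1, 6), (2, 1, 10),
               (-1, 0, 0), (0, -1, 0)]          # last two: x1 >= 0, x2 >= 0

def vertices():
    """Intersect each pair of constraint boundaries and keep feasible points."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(CONSTRAINTS, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-9:
            continue                             # parallel boundaries
        x1 = (c1 * b2 - c2 * b1) / det
        x2 = (a1 * c2 - a2 * c1) / det
        if all(a * x1 + b * x2 <= c + 1e-9 for a, b, c in CONSTRAINTS):
            pts.append((x1, x2))
    return pts

def weighted_sum(w1, w2):
    """Minimize w1*f1 + w2*f2 over the polytope; return (f1, f2) at the optimum."""
    x1, x2 = min(vertices(),
                 key=lambda p: w1 * (3*p[0] + p[1]) + w2 * (-p[0] + 4*p[1]))
    return (3*x1 + x2, -x1 + 4*x2)
```

With w = (1, 0) this recovers the minimum of f1 alone, y = (1, 4); with w = (0, 1) it recovers the minimum of f2, y = (9, −3).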
This linear program minimizes the deviations of the objective functions from some pre-specified
goals, gi. One fairly intuitive option is to use the utopia point as the goal for each
objective and try to minimize the deviations from this perfect optimum (even if it is not feasible
for the problem). In this case, the goal programming method is equivalent to compromise
programming (Romero et al., 1998). However, the solution obtained by the goal programming
method will not necessarily be an efficient solution (Marler and Arora, 2004). The goal
programming method was implemented for Example 1.2 and the result is given in Figure 2.
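With the utopia point as goal, the computation for Example 1.2 can be sketched as follows. This is an illustrative sketch only: it assumes equal weights and the L1 (sum of deviations) measure, which the report's exact formulation may not use. Since every fi(x) is at least its utopia value gi, minimizing the total deviation then reduces to minimizing f1 + f2, a linear function whose optimum sits at an extreme point:

```python
# Goal programming for Example 1.2 with the utopia point as goal -- a sketch,
# assuming equal weights and the L1 deviation measure.
f1 = lambda x1, x2: 3*x1 + x2
f2 = lambda x1, x2: -x1 + 4*x2

# Extreme points of the feasible region of Example 1.2.
V = [(0, 1), (0, 6), (2, 6), (4, 2), (3, 0)]

# Utopia point: each objective minimized separately over the feasible region.
g = (min(f1(*v) for v in V), min(f2(*v) for v in V))

# fi(x) >= gi everywhere, so the total deviation (f1 - g1) + (f2 - g2) is
# linear and is minimized at an extreme point.
x = min(V, key=lambda v: (f1(*v) - g[0]) + (f2(*v) - g[1]))
y = (f1(*x), f2(*x))
```

Under these assumptions the utopia point is g = (1, −3) and the goal programming solution is x = (0, 1) with image y = (1, 4).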
Figure 3: The epsilon constraint method for Example 1.2. (a) Minimize y1 subject to the
additional constraint y2 ≤ ε2, where ε2 = 2; the optimal solution is y = (23/7, 2). (b) Minimize
y2 subject to the additional constraint y1 ≤ ε1, where ε1 = 4; the optimal solution is y = (4, 11/8).
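The computation behind panel (a) of Figure 3 can be sketched directly: append the constraint f2 ≤ ε2 to the constraint system of Example 1.2 and minimize f1 over the vertices of the enlarged polytope (a minimal sketch via vertex enumeration, not the report's implementation):

```python
from itertools import combinations

def solve_lp(c, A):
    """Minimize c . x over {x : a*x1 + b*x2 <= rhs for (a, b, rhs) in A},
    by enumerating vertices as pairwise intersections of constraint boundaries."""
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(A, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-9:
            continue                             # parallel boundaries
        x = ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
        if all(a * x[0] + b * x[1] <= r + 1e-9 for a, b, r in A):
            if best is None or c[0]*x[0] + c[1]*x[1] < c[0]*best[0] + c[1]*best[1]:
                best = x
    return best

# Example 1.2 constraints (as <=), plus the epsilon constraint f2 <= 2,
# i.e. -x1 + 4*x2 <= 2.
A = [(0, 1, 6), (-1, -3, -3), (2, -1, 6), (2, 1, 10),
     (-1, 0, 0), (0, -1, 0), (-1, 4, 2)]
x1, x2 = solve_lp((3, 1), A)                 # minimize f1 = 3*x1 + x2
print(3*x1 + x2, -x1 + 4*x2)                 # approximately 23/7 and 2
```

Panel (b) is the same computation with the roles of the objectives swapped: minimize f2 with the added constraint 3x1 + x2 ≤ 4.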
3.1 Example
We return to Example 1.2 and apply the bi-objective simplex algorithm.
Phase I:
Initial tableau (columns x1, x2, s1, s2, s3, s4, z | RHS; z is the artificial variable):

         x1    x2   s1    s2   s3   s4     z  | RHS
  s1      0     1    1     0    0    0     0  |   6
  z       1     3    0    -1    0    0     1  |   3
  s3      2    -1    0     0    1    0     0  |   6
  s4      2     1    0     0    0    1     0  |  10
  eᵀz     1     3    0    -1    0    0     0  |   3

After pivoting x2 into the basis on the artificial-variable row:

         x1    x2   s1    s2   s3   s4     z  | RHS
  s1    -1/3    0    1   1/3    0    0  -1/3  |   5
  x2     1/3    1    0  -1/3    0    0   1/3  |   1
  s3     7/3    0    0  -1/3    1    0   1/3  |   7
  s4     5/3    0    0   1/3    0    1  -1/3  |   9
  eᵀz     0     0    0     0    0    0    -1  |   0
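The Phase I pivot can be reproduced with exact rational arithmetic. Below is a minimal pivot routine (illustrative, not taken from the report) applied to the initial tableau, pivoting x2 into the basis on the artificial-variable row:

```python
from fractions import Fraction as F

def pivot(T, r, c):
    """Gaussian pivot on entry T[r][c]: scale row r so the entry becomes 1,
    then eliminate column c from every other row."""
    p = T[r][c]
    T[r] = [v / p for v in T[r]]
    for i, row in enumerate(T):
        if i != r and row[c]:
            T[i] = [v - row[c] * w for v, w in zip(row, T[r])]
    return T

# Columns: x1, x2, s1, s2, s3, s4, z | RHS.
# Rows: s1, z (artificial), s3, s4, and the Phase I objective row e^T z.
T = [[F(v) for v in row] for row in [
    [0,  1, 1,  0, 0, 0, 0,  6],
    [1,  3, 0, -1, 0, 0, 1,  3],
    [2, -1, 0,  0, 1, 0, 0,  6],
    [2,  1, 0,  0, 0, 1, 0, 10],
    [1,  3, 0, -1, 0, 0, 0,  3],   # e^T z
]]
pivot(T, 1, 1)                      # x2 enters on the artificial row
```

After the pivot, the e^T z row is zero except for a −1 in the z column, confirming that the artificial variable has been driven out and a feasible basis found.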
Phase II: c(λ) = λ(3x1 + x2) + (1 − λ)(−x1 + 4x2) = (4λ − 1)x1 + (4 − 3λ)x2.¹ An optimal
basis for λ = 1 is given by {x2, s1, s3, s4}, with optimal solution (x1, x2, s1, s2, s3, s4) =
(0, 1, 5, 0, 7, 9) and c(λ) = 1.

¹In the bi-objective case, we can use a single weight, 0 ≤ w ≤ 1, and define the objective wf1(x) + (1 − w)f2(x).
Phase III:

Tableau for the basis {x2, s1, s3, s4} (columns x1, x2, s1, s2, s3, s4 | RHS):

         x1    x2   s1    s2    s3   s4  | RHS
  c1     8/3    0    0   1/3     0    0  |  -1
  c2    -7/3    0    0   4/3     0    0  |  -4
  x2     1/3    1    0  -1/3     0    0  |   1
  s1    -1/3    0    1   1/3     0    0  |   5
  s3     7/3    0    0  -1/3     1    0  |   7
  s4     5/3    0    0   1/3     0    1  |   9

Here I = {1}, λ = 7/15, s = 1 and r = 3: the weighted reduced cost of x1 is
λ(8/3) + (1 − λ)(−7/3) = (15λ − 7)/3, which changes sign at λ = 7/15, so x1 enters the basis
and r = 3 selects the s3 row as pivot row. Pivoting gives:

         x1    x2   s1    s2    s3   s4  | RHS
  c1      0     0    0   5/7  -8/7    0  |  -9
  c2      0     0    0     1     1    0  |   3
  x2      0     1    0  -2/7  -1/7    0  |   0
  s1      0     0    1   2/7   1/7    0  |   6
  x1      1     0    0  -1/7   3/7    0  |   3
  s4      0     0    0   4/7  -5/7    1  |   4

The optimal solution from this tableau is given by the negative of the final entries in the c1 and
c2 rows, so y1 = 9, y2 = −3. This solution is shown in Figure 2.
In Step 1, the objective functions must be normalized, as we are multiplying them, so differences
in scale could have an effect on the solution. Rao suggests doing this by finding a feasible
solution, x, and using the equality m1 f1(x) = · · · = mp fp(x) = M to calculate the mi for some
constant M. This way of normalizing fi depends on the solution x, so better methods that do
not depend on x could be investigated. In Step 3, we are calculating the worst value that each Fi can
take and then trying to find a solution, x, such that each Fi(x) is furthest from its worst value.
The game theoretic method was implemented for Example 1.2 and the result is shown in
Figure 2. It is interesting to observe that the solution in this case lies between the other two
solutions (representing the optima of f1 and f2), suggesting a compromise has been made. The
efficiency of the solution obtained by Algorithm 2 is stated in Ghotbi (2013). This method
involves optimizing a non-linear function, which is generally more difficult than the linear case.
However, if S is convex the problem becomes significantly easier.
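The two steps described above can be sketched as follows for Example 1.2. Several details here are assumptions not stated in this section: the choice of feasible point x_bar, the constant M = 1, the grid resolution, and the use of a product of deviations from the worst values as the supercriterion (in the spirit of Nash bargaining). The report's implementation may differ, and with these assumed choices the sketch need not reproduce the compromise shown in Figure 2, since the outcome is sensitive to x_bar and the normalization; it only demonstrates the mechanics of Steps 1 and 3:

```python
# A sketch of the normalization (Step 1) and worst-value search (Step 3)
# for Example 1.2.  x_bar, M = 1, the grid, and the product supercriterion
# are all illustrative assumptions, not the report's exact choices.
f1 = lambda x1, x2: 3*x1 + x2
f2 = lambda x1, x2: -x1 + 4*x2

def feasible(x1, x2):
    return (x2 <= 6 and x1 + 3*x2 >= 3 and 2*x1 - x2 <= 6
            and 2*x1 + x2 <= 10 and x1 >= 0 and x2 >= 0)

# Step 1: choose mi so that m1*f1(x_bar) = m2*f2(x_bar) = M = 1.
x_bar = (2.0, 2.0)                      # an arbitrary feasible point
m1, m2 = 1 / f1(*x_bar), 1 / f2(*x_bar)
F1 = lambda x1, x2: m1 * f1(x1, x2)
F2 = lambda x1, x2: m2 * f2(x1, x2)

# Step 3: worst (largest, since we minimize) value of each Fi over a grid,
# then pick the point jointly furthest from those worst values.
grid = [(i / 20, j / 20) for i in range(0, 101) for j in range(0, 141)
        if feasible(i / 20, j / 20)]
w1 = max(F1(*p) for p in grid)
w2 = max(F2(*p) for p in grid)
x = max(grid, key=lambda p: (w1 - F1(*p)) * (w2 - F2(*p)))
```

Replacing the grid search with a non-linear solver would recover the continuous problem, which, as noted above, is generally harder than the linear case.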
Figure 4: The two phase method for bi-objective problems. (a) Phase 1: a weighted average
normal to the lexicographical solutions. (b) Phase 1: no further non-dominated extreme points
can be found. (c) Phase 2: the triangles to which the search for efficient solutions is restricted.
6 Conclusion
This report has looked into several methods for solving multi-objective optimization problems.
However, there exist many more approaches, details of which can be found in Marler and Arora
(2004), as well as many combinations of existing methods. Aside from the two phase method,
which is only suitable for MOCO problems, all of the methods described have been applied to
Example 1.2. Interestingly, only three different solutions to this problem were obtained, all of
which are efficient. Most methods produced solutions that were lexicographically optimal for
one of the objective functions; only the game theoretic approach produced a compromise solution,
but this came at the cost of solving a non-linear program. It would therefore be useful to develop
methods for generating compromise solutions that are more computationally efficient.
In multi-objective optimization, different methods are often used to generate a set of efficient
solutions from which the decision maker can choose. Hence methods that are able to produce
the entire set of efficient solutions (such as the two-phase method for MOCO) are preferable, and
more such methods should be investigated. Each of the methods discussed has advantages
and disadvantages, and many of them can be adapted to specific problems. However, there is
still no general best method that can be used to solve Multi-Objective Optimization problems.
References
Charnes, A. and Cooper, W. W. (1977). Goal programming and multiple objective optimizations:
Part 1. European Journal of Operational Research, 1(1):39–54.
Ehrgott, M. (2006). Multicriteria optimization. Springer Science & Business Media.
Ehrgott, M. and Gandibleux, X. (2014). Multi-objective combinatorial optimisation: Concepts,
exact algorithms and metaheuristics. In Al-Mezel, S. A. R., Al-Solamy, F. R. M., and Ansari,
Q. H., editors, Fixed Point Theory, Variational Analysis, and Optimization, pages 307–341.
CRC Press.
Ghotbi, E. (2013). Bi- and Multi Level Game Theoretic Approaches in Mechanical Design. PhD
thesis, University of Wisconsin-Milwaukee.
Marler, R. T. and Arora, J. S. (2004). Survey of multi-objective optimization methods for
engineering. Structural and Multidisciplinary Optimization, 26(6):369–395.
Marler, R. T. and Arora, J. S. (2010). The weighted sum method for multi-objective optimization:
new insights. Structural and Multidisciplinary Optimization, 41(6):853–862.
Nash, J. (1953). Two-person cooperative games. Econometrica: Journal of the Econometric
Society, pages 128–140.
Rao, S. (1987). Game theory approach for multiobjective structural optimization. Computers
& Structures, 25(1):119–127.
Romero, C., Tamiz, M., and Jones, D. (1998). Goal programming, compromise programming
and reference point method formulations: linkages and utility interpretations. Journal of the
Operational Research Society, 49(9):986–991.