
Nanjing University of Aeronautics and Astronautics

Graduation Thesis
Multiobjective Gradient Optimization Applied in
Aerodynamic Shape Optimization

Student Name Allan Adriano Dias

ID Number 191561174

College College of International Education

Major Aeronautical Engineering

Class 1915612

Advisor Prof. 唐智礼

June 2019
Graduation Thesis Report Sheet

Contents

Contents
List of Figures
List of Equations
List of Algorithms

Introduction
    Preface
    Problem
    Objectives
        Primary Objective
        Secondary Objectives
    Hypotheses
    Method of Approach

Literature Review
    Multi-Objective Optimization
    Pareto Front
    Optimization Algorithms
        ZDT Problems
        DTLZ Problems
        Elitism
    Gradient Optimization Methods
        Gradient
        Hessian
    BFGS Method
    Geometry Parametrization
        Bézier Curve
        Control Points
    Mesh
        Mesh Deformation
    RANS
        Spalart-Allmaras Turbulence Model

Algorithm Study
    Algorithm Development
        Single-Objective BFGS Algorithm
        Multi-Objective Adaptation of the BFGS Algorithm
        Elitist Multi-Objective BFGS Algorithm
    Algorithm Testing & Benchmarking
        Performance in regards to number of variables
        Performance on Higher-Than-Two MOOPs

Software Interfacing
    Introduction to Software Suite
        MATLAB
        Pointwise
        ANSYS Fluent
    Interfacing of Software Suite for Optimization Applications
        Mesh Data I/O
        Fluent aaS

Aerodynamic Shape Optimization

References

Appendix
    Appendix 1: BFGS2.m
    Appendix 2: nonDominatedFront.m
    Appendix 3: Final Multi-Objective Algorithm
    Appendix 4: Mesh I/O Functions
        CASextractPoints.m
        findIndex.m
        updateMesh.m

List of Figures

Figure 1: Method of Approach
Figure 2: Example of Airfoil with Control Points
Figure 3: Weighted Function Behavior in a Non-Convex Pareto Front
Figure 4: BFGS2.m Optimization of Rosenbrock's Banana Function
Figure 5: ZDT 1 Optimized by Algorithm 3
Figure 6: ZDT 2 Optimized by Algorithm 3
Figure 7: ZDT 3 Optimized by Algorithm 3
Figure 8: ZDT 4 Optimized by Algorithm 3
Figure 9: ZDT 1 Optimized by Algorithm 5
Figure 10: ZDT 2 Optimized by Algorithm 5
Figure 11: ZDT 3 Optimized by Algorithm 5
Figure 12: ZDT 4 Optimized by Algorithm 5
Figure 13: Theoretical Pareto Front of the 3-Objective Problem
Figure 14: Optimized Results of the DTLZ Adapted Problem
Figure 15: Detailed View of the Solution of the 3-Objective Problem

List of Equations

Equation 1: General Form of the Multi-Objective Optimization Problem
Equation 2: Dominance Test
Equation 3: General Form of ZDT Problems
Equation 4: Gradient of a Function
Equation 5: Hessian of a Function
Equation 6: BFGS Update Equation
Equation 7: Bézier Curve
Equation 8: Reynolds-Averaged Navier-Stokes Equations
Equation 9: Spalart-Allmaras Model Transport Equation
Equation 10: The Armijo Condition
Equation 11: Weighted Sum of the Objective Functions
Equation 12: 3-Objective Test Problem

List of Algorithms

Algorithm 1: Generalized Form of Gradient Algorithms
Algorithm 2: BFGS Optimization Algorithm
Algorithm 3: Multi-Objective BFGS Algorithm with Stochastic Variable Weighted Sum
Algorithm 4: Non-Dominated Front Search
Algorithm 5: Elitist Multi-Objective BFGS Algorithm with Variable Stochastic Weighted Sum

Introduction

Preface
In the field of aircraft design, especially the portion of the design process concerned with the exterior shape of the aircraft, many variables are at play, and each must be considered very carefully: in most cases, even a slight modification to any aspect of the wetted area will affect the aerodynamic performance of the aircraft as a whole, sometimes drastically. Furthermore, how the many surfaces of an aircraft interact with each other aerodynamically is not always clear to the designer, and a clear picture of these interactions is often impossible without careful technical analysis. Yet selecting the best option from a wide range of designs by programming, testing and analyzing each one is not humanly feasible, especially as the number of variables increases. Optimizing a single objective by 'brute force', that is, testing each option to find the one that works best, is already a hard task; deciding the best trade-offs among many objectives is harder still. In aircraft design, an example of this problem can be given in the form:

"How does an engineer know what values to select for wingspan, wing sweep angle, and what
airfoil section to select, so that their design will give the best possible trade-off between a low drag
in cruise and a high lift in take-off?"

To help solve these problems comes the field of optimization. Rooted in both mathematics and computational engineering, optimization, or more specifically multi-objective optimization, deals with finding the best possible trade-offs between a set of options.
In this paper, this discipline is explored through the computational use of gradient algorithms, a class of optimization algorithms, and their application to solving multi-objective optimization problems related to aerodynamics. The research proposes a novel method for using gradient optimization algorithms in multi-objective optimization problems, and interfaces the final version of the algorithm with aerodynamic problem-solving software to perform case studies in aerodynamic shape optimization.
The paper is organized as follows. Section 2 (Literature Review) reviews the sources used and their relevance to the work developed here; Section 3 (Algorithm Study) presents the development of the optimization algorithm and tests its performance on common MDO (multi-disciplinary optimization) benchmarking problems; Section 4 (Software Interfacing) presents the communication between the algorithm and the CFD solver; Section 5 (Airfoil Geometry Parametrization) presents the method used to translate airfoil geometry into the optimizing algorithm, along with a case study on airfoil optimization; Section 6 (Aircraft Structure Parametrization) presents the method used to translate general aspects of the aircraft structure into the algorithm, along with a case study on general aircraft structure optimization; finally, Section 7 (Conclusion and Further Studies) concludes the study, reviewing the solutions to the problem, and presents propositions for further study on the topic.

Problem
Can a gradient algorithm be developed to solve multi-objective problems in general, and then be applied to optimize aerodynamic shapes in the context of finding optimal trade-offs between several designs?

Objectives
Primary Objective
To develop a gradient algorithm to find optimal trade-offs between several aerodynamic designs,
to satisfy several stated objectives.

Secondary Objectives
• To find a uniformly distributed front of many such trade-offs, to allow for choice-making when selecting a final design;
• To develop such an algorithm to be fast-converging and robust;
• To effectively express the geometries of interest in ways that can be read and tested efficiently;
• To interface the algorithm with the CFD solver, so that it can solve real-world engineering problems as efficiently as theoretical mathematical problems;
• To effectively test the algorithm's application in at least one case study of aerodynamic optimization.

Hypotheses
• Applying the suggested novel approach with variable stochastic weights will make single-objective algorithms work for multi-objective problems.

Method of Approach
The suggested method to approach the problem will be to divide the study into Algorithm
Development and Aerodynamic Analysis, following the structure below:

Algorithm Development:
• Algorithm Theory;
• Algorithm Coding;
• Algorithm Testing;
• Software Interfacing;
• Debugging.

Aerodynamic Analysis:
• Geometry Parametrization;
• Mesh Perturbation;
• Airfoil Case Study;
• Aircraft Structure Case Study.

Figure 1: Method of Approach



Throughout the course of this paper, standard formatting was used uniformly to represent different
kinds of information. These formatting rules are as follows:

• Standard formatting in Times New Roman was used to represent text and research in general;
• 𝐶𝑎𝑚𝑏𝑟𝑖𝑎 𝑀𝑎𝑡ℎ 𝐼𝑡𝑎𝑙𝑖𝑐 was used to represent equations, mathematical notation and information
of mathematical significance. Furthermore, variables in bold, such as 𝑿, represent vectors;
• Lucida Console was used to represent code, portions of scripts, functions, software inputs and
outputs and file names;
• Times New Roman bound by a grey textbox was used to represent algorithms.

Literature Review

In this section, the literature relevant to the development of this paper is reviewed, showcasing the important points each source contributes to the research that follows. The review also includes a posteriori knowledge acquired during the research, to better organize the theoretical material presented in the methodology section, i.e., mathematical equations, theorems, etc.

Multi-Objective Optimization
Multi-Objective Optimization is a subfield of optimization dealing with problems that require more than one objective to be optimized at the same time. As in single-objective optimization, the goal is to either minimize or maximize a function, but the final answer obtained by solving these problems is a set of solutions defining the best trade-offs between the competing objectives [1].
In this work, optimization will deal only with the direct minimization of mathematical functions,
and thus the general form of a Multi-Objective Optimization Problem (MOOP) will be considered to
be:

\[
\text{minimize } \{ f_1(\mathbf{X}), f_2(\mathbf{X}), \dots, f_m(\mathbf{X}) \},
\quad \text{with } \mathbf{X} = (x_1, x_2, \dots, x_n)^T,
\quad \text{for } x_i^{(L)} \le x_i \le x_i^{(U)}, \; i = 1, 2, \dots, n
\]

Equation 1: General form of the Multi-Objective Optimization Problem

Pareto Front
Perhaps the most important concept of the Multi-Objective Optimization Problem is the Pareto
Front. The Pareto Front, or Pareto Optimal Solution, is the set of all best trade-offs between the
objectives. All points composing the Pareto Front are points which cannot be further optimized for
one of the objectives without compromising the other ones. The metric used to classify these points
is called dominance, and every point of the Pareto Front is called a non-dominated point, which means
these solutions are not dominated by any other solution. For problems investigated in this paper,
represented by Equation 1, a solution A is said to dominate B if its values for all objectives are smaller than those of B.

\[
\text{if }
\begin{cases}
f_1(\mathbf{A}) < f_1(\mathbf{B}) \\
f_2(\mathbf{A}) < f_2(\mathbf{B}) \\
\quad\;\;\vdots \\
f_m(\mathbf{A}) < f_m(\mathbf{B})
\end{cases}
\quad \text{then } \mathbf{A} \text{ dominates } \mathbf{B}
\]

Equation 2: Dominance Test
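The dominance test of Equation 2 is simple to implement. Below is a minimal sketch in Python (the thesis's own code is written in MATLAB, so this is an illustrative transcription rather than the thesis implementation):

```python
import numpy as np

def dominates(f_a, f_b):
    """Dominance test of Equation 2 (minimization): A dominates B
    when every objective value of A is strictly smaller than B's."""
    f_a = np.asarray(f_a, dtype=float)
    f_b = np.asarray(f_b, dtype=float)
    return bool(np.all(f_a < f_b))
```

Note that, with this strict definition, two identical solutions do not dominate each other, so both would be kept in a non-dominated set.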

Optimization Algorithms
Although optimization is a classic mathematical field, for solving MOOPs efficiently the use of
computer algorithms is necessary, since hundreds, or sometimes thousands, of solutions must be calculated. For this reason, most methods for solving MOOPs rely on very simple calculations, to keep the computations efficient. Broadly, optimization algorithms can be divided into derivative-based and non-derivative-based methods, and all optimization algorithms are iterative, progressively approximating the Pareto front with each iteration.

ZDT Problems
The ZDT problems, named after their creators Zitzler, Deb and Thiele, are a set of six mathematical problems usually regarded as a standard for benchmarking multi-objective optimization algorithms (MOOAs) on two-objective problems. Each tests an algorithm's efficacy on a specific kind of problem, and they have become popular in engineering applications since their Pareto fronts share the characteristics of those found in common engineering optimization problems. These functions take the form below:

\[
\text{minimize } \mathcal{T}(\mathbf{X}) = \big( f_1(x_1), f_2(\mathbf{X}) \big),
\quad \text{subject to } f_2(\mathbf{X}) = g(x_2, \dots, x_m)\, h\big( f_1(x_1), g(x_2, \dots, x_m) \big)
\]
Equation 3: General form of ZDT Problems

The ZDT functions used to benchmark the algorithm presented in this study are:
• 𝒯1(𝑿), to benchmark for convex problems;
• 𝒯2(𝑿), to benchmark for nonconvex problems;
• 𝒯3(𝑿), to benchmark for discrete problems;
• 𝒯4(𝑿), to benchmark for multimodal problems [2].
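As an illustration, the convex case 𝒯1 (ZDT1) can be written compactly. The sketch below is a Python transcription of the standard definition by Zitzler et al., with all variables bounded in [0, 1]; it is not the thesis's MATLAB implementation:

```python
import numpy as np

def zdt1(x):
    """ZDT1: f1 = x1 and f2 = g*h, with g = 1 + 9*sum(x2..xm)/(m-1)
    and h = 1 - sqrt(f1/g). The Pareto front (g = 1) is the convex
    curve f2 = 1 - sqrt(f1), reached when x2..xm are all zero."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (x.size - 1)
    h = 1.0 - np.sqrt(f1 / g)
    return np.array([f1, g * h])
```

For example, with the standard 30 variables, any point with x2..x30 = 0 lies on the theoretical Pareto front.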

DTLZ Problems
The DTLZ problems are a scalable variant of the ZDT problems, proposed to test, on problems with more than two objectives, algorithms that have already demonstrated the ability to solve two-objective optimization problems. They are a set of seven problems that scale indefinitely in the number of objectives. This work uses only 3-objective formulations as a benchmark, to allow an intuitive presentation of the Pareto fronts, and analyzes only selected DTLZ problems, since their main purpose here is to test more-than-two-objective performance [3].

Elitism
A concept also present in the algorithm development is elitism. Elitist algorithms use the concepts of generations and fitness to filter their solutions. Since the first proposed solutions are randomly generated and later optimized, after a specified number of iterations the algorithm stops and discards the worst results, keeping only the best ones, drawn from every iteration, not only the last. This can be thought of as the 'survival of the fittest' rule applied to optimization.
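The elitist filter can be sketched as a non-dominated search over the pooled solutions of all iterations. The Python helpers below are a hypothetical illustration of the idea; the thesis's actual routine is the MATLAB function nonDominatedFront.m (Appendix 2), which may differ in detail:

```python
import numpy as np

def nondominated(points):
    """Keep only the points not strictly dominated by any other point."""
    pts = [np.asarray(p, dtype=float) for p in points]
    front = []
    for i, p in enumerate(pts):
        # p survives unless some other point beats it on every objective
        if not any(np.all(q < p) for j, q in enumerate(pts) if j != i):
            front.append(p)
    return front

def update_archive(archive, new_points):
    """Elitism: pool the archive with this generation's solutions and
    keep the fittest (non-dominated) ones, regardless of which
    iteration produced them."""
    return nondominated(list(archive) + list(new_points))
```

Calling update_archive after every generation preserves the best trade-offs found so far, counting every iteration rather than only the last one.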

Gradient Optimization Methods


Gradient Optimization Methods are a class of optimization algorithms that fall under the derivative-
based methods. They use differential information of the objective functions they are analyzing, to
account not only for the result, but also for the rate of change of the result in different directions. All
Gradient Algorithms will follow the basic form below:

Compute search direction (using gradient information);
Compute step length (find a positive scalar that decreases the function in the search direction);
Update the design variables;
Repeat until convergence or the iteration limit is met.

Algorithm 1: Generalized form of Gradient Algorithms [4]

Gradient
The gradient is the multivariable generalization of the derivative, and points in the direction of steepest ascent. For any function 𝑓(𝑿), the gradient is given by the vector:

\[
\nabla f(\mathbf{X}) = \left[ \frac{\partial f}{\partial x_1} \;\; \frac{\partial f}{\partial x_2} \;\; \cdots \;\; \frac{\partial f}{\partial x_n} \right]^T
\]
Equation 4: Gradient of a function

Hessian
The Hessian is a symmetric matrix, which serves as the multivariable generalization of the second
derivative for a multivariable function. It contains information about the function's curvature at any
point in its domain, and for a function 𝑓(𝑿) it is given by:

\[
\nabla^2 f(\mathbf{X}) \equiv H(\mathbf{X}) \equiv
\begin{bmatrix}
\dfrac{\partial^2 f}{\partial x_1^2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\, \partial x_n} \\
\vdots & \ddots & \vdots \\
\dfrac{\partial^2 f}{\partial x_n\, \partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2}
\end{bmatrix}
\]
Equation 5: Hessian of a function
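When analytic derivatives are unavailable, the gradient and Hessian of Equations 4 and 5 can be approximated numerically. The sketch below uses central finite differences in Python; it is one common illustrative scheme, not necessarily how derivatives are obtained elsewhere in this work:

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Central-difference approximation of the gradient (Equation 4)."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def fd_hessian(f, x, h=1e-4):
    """Central-difference approximation of the Hessian (Equation 5)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        ei = np.zeros(n)
        ei[i] = h
        for j in range(n):
            ej = np.zeros(n)
            ej[j] = h
            # four-point central formula for d2f / (dx_i dx_j)
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H
```

For a quadratic such as f(x) = x1² + 3x2², these formulas recover the exact gradient and the constant Hessian up to rounding error.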

BFGS Method
The BFGS method, named for its discoverers Broyden, Fletcher, Goldfarb and Shanno, is the most popular of the quasi-Newton algorithms. Newton optimization algorithms use the exact calculation of a function's Hessian to solve for the best possible descent direction, but calculating the Hessian is usually infeasible in iterative methods; instead, quasi-Newton algorithms compute an approximation of the Hessian and update it at each iteration. For an optimization problem, the BFGS method updates the inverse Hessian approximation at iteration 𝑘 + 1, denoted 𝐻𝑘+1, by the formula:

\[
H_{k+1} = \left( I - \rho_k s_k y_k^T \right) H_k \left( I - \rho_k y_k s_k^T \right) + \rho_k s_k s_k^T,
\quad \text{for } \rho_k = \left( y_k^T s_k \right)^{-1}
\]


Equation 6: BFGS Update Equation

The complete algorithm can be presented by the loop:

Initialize solution x₀ and the inverse Hessian approximation H₀;
k ← 0;
while (iteration limit not reached) and (convergence not met)
    Compute search direction pₖ;
    Compute step size αₖ;
    Set xₖ₊₁ = xₖ + αₖ pₖ;
    Define sₖ = xₖ₊₁ − xₖ and yₖ = ∇fₖ₊₁ − ∇fₖ;
    Compute Hₖ₊₁ from Equation 6;
    k ← k + 1;
end (while)

Algorithm 2: BFGS Optimization Algorithm

The BFGS method has very good properties when dealing with functions of many variables, which makes it especially attractive for modern engineering applications. Although it was discovered for, and is mainly applied to, single-objective optimization problems, it was adapted in this research for effective use in multi-objective applications [5].
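A compact single-objective sketch of Algorithm 2 is shown below in Python, using a simple backtracking (Armijo-type) line search for the step size; it is an illustrative implementation, not the BFGS2.m listed in Appendix 1. Rosenbrock's banana function, the same case used to validate BFGS2.m in Figure 4, serves as the test problem:

```python
import numpy as np

def bfgs(f, grad, x0, max_iter=500, tol=1e-6):
    """Single-objective BFGS loop following Algorithm 2."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                        # inverse Hessian approximation H_0
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:      # convergence test
            break
        p = -H @ g                       # search direction p_k
        alpha = 1.0                      # backtracking (Armijo-type) search
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        ys = y @ s
        if ys > 1e-12:                   # skip update on bad curvature info
            rho = 1.0 / ys
            I = np.eye(n)
            # Equation 6: BFGS update of the inverse Hessian approximation
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Rosenbrock's banana function, minimum at (1, 1)
rosen = lambda x: (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([
    -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
    200.0 * (x[1] - x[0]**2)])
```

Starting from the classic point (−1.2, 1), the loop converges to the minimum at (1, 1).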

Geometry Parametrization
The bridge between the aerodynamic shape (or any shape for that matter) and an optimization
method is the way in which the geometry is represented in mathematical terms, also called
parametrization. In engineering terms, a parametric geometry is a precisely defined geometrical curve
or surface that can be changed and adjusted through the manipulation of several parameters, called
design variables. All the design variables and the relationship between themselves and the curve are
precisely defined, and as such, can be integrated in the optimization algorithm.
A proper parametric geometry must have some key features, introduced by Sóbester and Forrester
as:
• Conciseness: Limit the number of design variables as much as possible, keeping the design-variable space as small as possible without losing the expressiveness of the geometry;
• Robustness: The parametrization must be able to represent shapes that make sense physically
and geometrically, for example, not generate self-intersecting shapes or shapes with negative
volume or infinite asymptotes;
• Flexibility: The parametrization of the shape should be able to provide control over a wide
enough range of different shapes, to ensure the diversity in results, while still keeping the search
space limited to normal and not 'weird' shapes [6];

Bézier Curve
The Bézier Curve is a parametric curve, discovered in the 1960s by French engineer Pierre Bézier,
who used it to model the body of Renault automobiles. Mathematically, the Bézier Curve is a
superposition of the Bernstein Polynomials of its degree; a Bézier curve of degree 𝒏 is defined by
𝒏 + 𝟏 points. If 𝒂(𝑖) denotes the ith control point of a Bézier curve of degree 𝒏, the curve is given
by the function:

\[
\mathcal{B}(u) = \sum_{i=0}^{n} \mathbf{a}^{(i)} b_{i,n}(u),
\quad \text{where } b_{i,n}(u) = \binom{n}{i} u^i (1-u)^{n-i}, \quad u \in [0,1]
\]

Equation 7: Bézier Curve

The Bézier curve is used in this project to parametrize airfoil sections for optimization, since it is easily calculated, easily differentiable, smooth over its whole domain and tangent to the first and last segments of its control polygon.
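Equation 7 translates directly into code. The sketch below evaluates a Bézier curve from its control points in Python (the thesis toolchain is MATLAB, so this is an illustrative transcription):

```python
import numpy as np
from math import comb

def bezier(ctrl, u):
    """Evaluate the Bezier curve of Equation 7.

    ctrl: (n+1, d) array of control points a^(i);
    u:    scalar or array of parameter values in [0, 1].
    Returns one evaluated point per value of u."""
    ctrl = np.asarray(ctrl, dtype=float)
    u = np.atleast_1d(np.asarray(u, dtype=float))
    n = ctrl.shape[0] - 1
    pts = np.zeros((u.size, ctrl.shape[1]))
    for i in range(n + 1):
        # Bernstein basis polynomial b_{i,n}(u)
        b = comb(n, i) * u**i * (1.0 - u)**(n - i)
        pts += np.outer(b, ctrl[i])
    return pts
```

Since the curve passes through the first and last control points (u = 0 and u = 1), points such as an airfoil's leading and trailing edges can be pinned directly.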

Control Points
Control points are the units used to control the parametrized geometry in an intuitive manner. For
two-dimensional lines, for example, control points can be used to dictate how the curve behaves.
Figure 2 shows an example of an airfoil-like curve made by a Bézier curve in red, with its control
points in blue. It can be seen how the shape drawn by the control points relates to the Bézier curve
they generate.
For Bézier curves, altering the position of one control point alters the entire curve, and the whole curve is contained in the convex hull of its control points. Control points can also be weighted, so that points with larger weights pull the curve toward them.

Figure 2: Example of Airfoil with Control Points



Mesh
A mesh is a topological domain used to translate a geometrical space into a computational space.
It is the domain used to perform Finite-Element simulations. A mesh is composed of nodes, cells and
edges. Meshes are generated by specialized software according to CAD geometry and specific mesh-
generation parameters which can be specified by the user.

Mesh Deformation
Usually, generating a high-quality mesh takes little computational time for two-dimensional
meshes, but a considerable amount for three-dimensional ones. For simple and infrequent changes in
the geometry, the user may alter the geometry and remesh through the software. In optimization
problems, however, thousands of slight changes to the geometry are needed to probe its behavior, so
remeshing becomes unfeasible.
To address this issue, the concept of mesh deformation is used: instead of remeshing through the
software, the mesh is analyzed nodewise, and a function relating each node's distance to the analysis
geometry is applied to displace every node. Several mesh deformation strategies exist, designed to
address problems such as loss of mesh quality under extreme deformations.
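One of the simplest such strategies is inverse-distance weighting, where each interior node inherits a blend of the boundary displacements weighted by its distance to the boundary, so nodes near the moving surface follow it closely while far-field nodes barely move. The following Python sketch is purely illustrative (made-up node coordinates, and not necessarily the strategy used later in the thesis):

```python
def idw_deform(nodes, boundary_old, boundary_new, power=2.0, eps=1e-12):
    """Displace interior mesh nodes by inverse-distance weighting.

    Each node receives a blend of the boundary displacements, weighted
    by 1 / distance**power with respect to the old boundary positions.
    """
    deformed = []
    for (x, y) in nodes:
        wsum = dx = dy = 0.0
        for (bx, by), (nx2, ny2) in zip(boundary_old, boundary_new):
            d2 = (x - bx) ** 2 + (y - by) ** 2
            w = 1.0 / (d2 ** (power / 2.0) + eps)
            wsum += w
            dx += w * (nx2 - bx)
            dy += w * (ny2 - by)
        deformed.append((x + dx / wsum, y + dy / wsum))
    return deformed

# Toy case: the boundary point at the origin moves up by 0.1 while a
# far-field anchor at (0, 10) stays fixed
old_b = [(0.0, 0.0), (0.0, 10.0)]
new_b = [(0.0, 0.1), (0.0, 10.0)]
print(idw_deform([(0.0, 1.0), (0.0, 9.0)], old_b, new_b))
```

The node close to the displaced boundary point moves almost the full 0.1, while the node near the fixed anchor is nearly unaffected, which is the qualitative behavior a mesh deformation scheme needs.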

RANS
The Navier-Stokes equations are the differential equations that govern the behavior of viscous flow.
Since an exact solution of these equations over the continuum domain is generally unattainable, CFD
software uses the concept of Reynolds averaging, decomposing the instantaneous (exact) variables of
the Navier-Stokes equations into a mean (time-averaged) component and a fluctuating component;
for example, the velocity $u_i = \bar{u}_i + u_i'$. Substituting this relation into the exact equations
and writing all velocities as time-averaged quantities, the equations become:

$$\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x_i}(\rho u_i) = 0$$

$$\frac{\partial}{\partial t}(\rho u_i) + \frac{\partial}{\partial x_j}(\rho u_i u_j) = -\frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j}\left[\mu\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\delta_{ij}\frac{\partial u_l}{\partial x_l}\right)\right] + \frac{\partial}{\partial x_j}\left(-\rho\,\overline{u_i' u_j'}\right)$$

Equation 8: Reynolds-Averaged Navier-Stokes Equations

Software packages that solve this class of equations are called RANS solvers. In this paper, the
RANS solver of choice was ANSYS Fluent.
For a complete description of the equation terms and their usage, refer to the ANSYS Inc.
documentation [7].

Spalart-Allmaras Turbulence Model

The Spalart-Allmaras model is a one-equation turbulence model that solves a modeled transport
equation for the kinematic eddy viscosity. It was designed by a pair of Boeing engineers specifically
for aeronautics and aerospace applications involving wall-bounded flows, and it shows good results
for boundary layers at a low computational cost. The transport equation for the model is given by:

$$\frac{\partial}{\partial t}(\rho\tilde{\nu}) + \frac{\partial}{\partial x_i}(\rho\tilde{\nu}u_i) = G_\nu + \frac{1}{\sigma_{\tilde{\nu}}}\left[\frac{\partial}{\partial x_j}\left\{(\mu + \rho\tilde{\nu})\frac{\partial\tilde{\nu}}{\partial x_j}\right\} + C_{b2}\,\rho\left(\frac{\partial\tilde{\nu}}{\partial x_j}\right)^{2}\right] - Y_\nu + S_{\tilde{\nu}}$$

Equation 9: Spalart-Allmaras Model Transport Equation

The model yields a smooth laminar-turbulent transition at points of interest, and was shown to
provide good results for a variety of test cases proposed by its authors, including high-lift airfoils
and transonic wings.
Given these computational advantages and the absence of complex turbulence activity expected in
the cases studied in this paper, the Spalart-Allmaras turbulence model will be used for all the CFD
analyses.
Further details about the model's usage are given in the original work by Spalart and Allmaras
[8].

Algorithm Study

In this section, the part of the paper concerning the optimization algorithm is presented: the whole
development process, the results of its tests, the debugging and changes made along the way, and
finally the resulting algorithm.
The first step of the algorithm study was to determine which kind of optimization algorithm would
prove most suitable to the problems in question. In the early stages of the project, gradient methods
were clearly preferred over derivative-free methods (such as genetic algorithms), since they converge
much faster, a considerable advantage when working with aerodynamic calculations, where the
computational cost of each evaluation drives up the total cost of the optimization.
With the class of gradient methods in mind, further reading of past literature on the topic, especially
the Stanford University AA222 lectures, in which several kinds of gradient methods are compared,
led to Quasi-Newton algorithms being chosen as the methods of preference [4]. Conjugate gradient
methods were briefly considered and tested according to past research [8], but previous results showed
a clear superiority of Quasi-Newton algorithms in terms of convergence rates. Among the many
Quasi-Newton algorithms, further reading led to the selection of the BFGS algorithm (named after
Broyden, Fletcher, Goldfarb and Shanno), whose advantages are laid out clearly in texts such as
Numerical Optimization by Nocedal and Wright [5]: fast convergence, low computational cost, and
robustness when dealing with an extremely large number of variables.

Algorithm Development
As the BFGS algorithm is by nature a single-objective optimizer, early in the development stage
of the project the idea of a novel stochastic weighting variable approach was considered as a way to
adapt it to multi-objective problems. Naturally, the final algorithm would still behave exactly like the
original BFGS algorithm within the scope of single-objective optimization.
The idea behind this novel approach stems from the fact that gradient algorithms are known for
not spreading solutions evenly along the Pareto front when optimizing multi-objective problems, and
in particular for pushing the results towards extreme points when the Pareto front is non-convex.
The problem persists even if random weights are assigned to the objective functions at the first
iteration, as shown at points A and B of the following diagram, taken from an optimization algorithm
trade-off study by Obayashi et al. [9]:

Figure 3: Weighted function behavior in a non-convex Pareto front

Furthermore, gradient methods may stall when optimizing an initially weighted set of objective
functions, as shown at point C, because of the complexity of the objective functions' distribution.
Since this behavior appears for every combination of weights assigned to the objective functions at
initialization, the suggestion here is to assign new random weights at every iteration. Applying this
method in the optimization algorithm is expected to spread the solutions in different directions at
each iteration, at the cost of a slightly reduced convergence rate caused by the constantly changing
parameters.

Single-Objective BFGS Algorithm

Because the suggested approach involves modifying the algorithm itself, which in computing terms
means changing the code of the function that carries out the BFGS optimization, commercially
available optimization functions cannot be used. For instance, fminunc, the MATLAB Optimization
Toolbox function for unconstrained minimization, internally uses the BFGS algorithm, but the
optimization is carried out entirely by the predefined function, and altering its factory code could
make the optimization unstable. So, based on the optimization literature, the author coded the BFGS
algorithm into an easily modifiable script, using the MATLAB language and environment.
The script for the first attempt at coding the BFGS algorithm is based on the algorithm given by
Nocedal and Wright [5], presented in Algorithm 2. As already discussed, even though the original
BFGS algorithm updates the Hessian matrix of the function, the approach used in Algorithm 2 and
in this paper updates the inverse of the Hessian matrix instead, via Equation 6, since this is vastly
faster than building the Hessian and inverting it at every step. The algorithm was coded as a
MATLAB function that takes a mathematical function of two variables, upper and lower limits
bounding the search space (applied equally to all variables), and an initial guess lying within those
limits.
The first attempts showed problems in the line search part of the algorithm, in which the step size
𝛼 must be calculated. These attempts used the MATLAB function fminbnd, which searches for the
minimum of a single-variable function within a specified interval. This approach proved both
extremely inaccurate, leading to severe cases of divergence, and computationally inefficient.
A method used in a paper by Povalej [10] suggests taking the step size 𝛼 as the largest number in
the interval (0, 1] that reduces the function, instead of a minimizing step; the same research suggests
that 𝛼 be chosen from the set $\ell = \{1/2^n;\ n = 0, 1, 2, \dots\}$. To test the validity of the step
size, a condition presented by Nocedal and Wright, the Armijo condition, was used. The Armijo
condition states that a step size 𝛼 providing an acceptable decrease of the objective function must
satisfy the inequality:

$$f(x_k + \alpha p_k) \le f(x_k) + c_1\,\alpha\,\nabla f_k^T p_k, \qquad c_1 \in (0, 1)$$

Equation 10: The Armijo Condition

The value of 𝑐1 suggested for the Armijo condition in Quasi-Newton algorithms is 0.1, which was
the value used in the script. Combining the Armijo condition with the step-size rule suggested by
Povalej proved successful in terms of convergence, robustness and calculation time.
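The step-size rule described above, taking the largest α in {1/2^n} that satisfies the Armijo condition with c1 = 0.1, can be sketched as follows. This is an illustrative Python version (the thesis implementation is in MATLAB), tested on a simple quadratic:

```python
def armijo_step(f, grad_f, x, p, c1=0.1, n_max=30):
    """Pick the largest alpha in {1/2**n} satisfying the Armijo condition.

    f, grad_f -- objective function and its gradient (callables)
    x, p      -- current point and search direction (lists of floats)
    c1 = 0.1 is the value suggested for quasi-Newton algorithms.
    """
    fx = f(x)
    # directional derivative of f along p at x
    slope = sum(g * d for g, d in zip(grad_f(x), p))
    alpha = 1.0
    for _ in range(n_max):
        x_new = [xi + alpha * pi for xi, pi in zip(x, p)]
        if f(x_new) <= fx + c1 * alpha * slope:  # sufficient decrease
            return alpha
        alpha *= 0.5  # next element of {1, 1/2, 1/4, ...}
    return alpha

# Example on f(x) = x1^2 + x2^2 with the steepest-descent direction
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: [2 * x[0], 2 * x[1]]
x0 = [1.0, 1.0]
p0 = [-gi for gi in g(x0)]
print(armijo_step(f, g, x0, p0))  # -> 0.5
```

Here α = 1 overshoots (the quadratic returns to the same value), so the rule halves once and accepts α = 0.5, which lands exactly on the minimum.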
Another point considered in this first implementation of the BFGS algorithm was the limitation of
the search space. The solution must be kept within the bounding box provided in the function call
because certain initial guesses cause some objective functions to diverge quickly. To deal with this,
whenever the step increment added to the current optimization variables would exceed the bounding
limits, the program instead increases the value in the exceeding direction by an amount smaller than
the distance to the bounding limit.
Implementing these two approaches, the step size and the limit bounding box, the code for this first
attempt was finalized and tested, and it is presented in Appendix 1: BFGS2.m.
To test the algorithm, various common optimization testing functions were used and successfully
optimized. Furthermore, since this first algorithm is essentially the original single-objective BFGS
algorithm, without any modification for MOOPs, it should in theory give the same results as the
fminunc MATLAB function. Tests showed that identical initial guesses lead to the same final
optimized results for both the coded BFGS2.m script and the fminunc function, and that the initial
guesses causing divergence are also the same for both.
A test of this first implementation can be seen in the figure below, in which the BFGS2.m code was
used to optimize 50 random initial guesses of Rosenbrock's banana function, a common benchmark
for optimization algorithms. Rosenbrock's function is essentially a parabolic-shaped valley: finding
the valley is easy for any algorithm, since its walls are very steep, but the flatness of the valley floor
makes it hard to locate the minimum, at [1, 1]. The bounding box of the optimization problem is the
interval [−2, 2] in each variable, a common search space for this function. The optimization results
are presented in Figure 4:

Figure 4: BFGS2.m optimization of Rosenbrock's Banana Function

From the figure above, it can be seen that the results quickly converge from the exponentially
increasing region of the valley, with values near 100, to the valley floor, with values close to 0. For
most guesses, the algorithm converges to more than 8 digits of precision in fewer than 30 iterations,
which can be considered a very satisfactory result.
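The single-objective procedure of this section, the inverse-Hessian BFGS update combined with the Armijo backtracking step, can be sketched in a few dozen lines. The following is an illustrative Python analogue of BFGS2.m (not the thesis script itself), applied to Rosenbrock's function; the reset of H when the search direction fails to descend is an extra safeguard, not part of Algorithm 2:

```python
def bfgs(f, grad, x0, tol=1e-8, max_iter=500):
    """Minimize f with BFGS, updating the INVERSE Hessian approximation H
    directly, with a backtracking Armijo line search (c1 = 0.1)."""
    n = len(x0)
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # H0 = I
    x = list(x0)
    g = grad(x)
    for _ in range(max_iter):
        if max(abs(gi) for gi in g) < tol:
            break
        # search direction p = -H g
        p = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]
        slope = sum(gi * pi for gi, pi in zip(g, p))
        if slope >= 0:  # safeguard: fall back to steepest descent
            H = [[float(i == j) for j in range(n)] for i in range(n)]
            p = [-gi for gi in g]
            slope = -sum(gi * gi for gi in g)
        alpha, fx = 1.0, f(x)
        while f([xi + alpha * pi for xi, pi in zip(x, p)]) > fx + 0.1 * alpha * slope:
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x_new = [xi + alpha * pi for xi, pi in zip(x, p)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        if sy > 1e-12:  # skip the update if the curvature condition fails
            # H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T
            rho = 1.0 / sy
            A = [[(i == j) - rho * s[i] * y[j] for j in range(n)] for i in range(n)]
            AH = [[sum(A[i][k] * H[k][j] for k in range(n)) for j in range(n)]
                  for i in range(n)]
            H = [[sum(AH[i][k] * ((k == j) - rho * y[k] * s[j]) for k in range(n))
                  + rho * s[i] * s[j] for j in range(n)] for i in range(n)]
        x, g = x_new, g_new
    return x

# Rosenbrock's banana function, minimum at [1, 1]
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
rosen_g = lambda x: [-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                     200 * (x[1] - x[0] ** 2)]
print(bfgs(rosen, rosen_g, [-1.2, 1.0]))
```

From the classic starting point [−1.2, 1], the iterates follow the valley floor and converge to [1, 1], mirroring the behavior reported in Figure 4.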

Multi-Objective Adaptation of the BFGS Algorithm

Starting from the BFGS2.m script, the algorithm was adapted into a multi-objective optimizer by
introducing the stochastic variables, hereafter referred to as λ. The number of λ variables equals the
number of objective functions to be optimized simultaneously. The value given to λ_n, the weight of
the n-th objective function, is a random value in the interval [0, 1]. The values of λ_n are initialized
at the first evaluation of the algorithm and updated at every iteration.
To turn the algorithm into a multi-objective optimizer, the n objective functions (represented by
𝒻 = {f_1, f_2, …, f_n}) are weighted and summed into a mean objective function f_mean, according to
Equation 11:

$$f_{mean} = \frac{\lambda_1 f_1 + \lambda_2 f_2 + \dots + \lambda_n f_n}{\sum_{i=1}^{n} \lambda_i}$$

Equation 11: Weighted Sum of the Objective Functions

By implementing the weighted sum of objectives, the original BFGS algorithm can be turned into a
multi-objective optimizer, presented in Algorithm 3:

Initialize the objective functions f_1, …, f_n;
Initialize the solution x_0, the inverse Hessian approximation H_0 and
the initial stochastic variables λ_1, …, λ_n;
k ← 0;
while (iteration number not met) or (convergence not met)
    Compute the mean objective function f_mean via Equation 11;
    Compute the search direction p_k;
    Compute the step size α_k;
    Set x_{k+1} = x_k + α_k p_k;
    Define s_k = x_{k+1} − x_k and y_k = ∇f_{k+1} − ∇f_k;
    Compute H_{k+1} from the BFGS update equation;
    k ← k + 1;
    Assign new random values to λ_1, …, λ_n;
end (while)

Algorithm 3: Multi-Objective BFGS Algorithm with Stochastic Variable Weighted Sum
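For a single individual, Algorithm 3 can be sketched as follows. This Python sketch (the thesis uses MATLAB) draws fresh random weights λ at every iteration and performs one BFGS step on the resulting weighted sum. The bi-objective test problem at the bottom is a made-up convex example, not from the thesis; its Pareto set in design space is the segment x1 + x2 = 1, so the final iterate should land near that segment:

```python
import random

def mo_bfgs(funcs, grads, x0, iters=60):
    """One individual of Algorithm 3: BFGS on the weighted-sum objective
    f_mean, with fresh random weights lambda drawn at EVERY iteration."""
    n = len(x0)
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # H0 = I
    x = list(x0)
    for _ in range(iters):
        lam = [random.random() for _ in funcs]      # stochastic weights
        tot = sum(lam) or 1.0
        w = [l / tot for l in lam]                  # normalized, as in Eq. 11
        fmean = lambda z: sum(wi * fi(z) for wi, fi in zip(w, funcs))
        gmean = lambda z: [sum(wi * gi(z)[k] for wi, gi in zip(w, grads))
                           for k in range(n)]
        g = gmean(x)
        p = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]
        # Armijo backtracking with alpha in {1, 1/2, 1/4, ...}, c1 = 0.1
        alpha, fx = 1.0, fmean(x)
        slope = sum(gi * pi for gi, pi in zip(g, p))
        while (fmean([xi + alpha * pi for xi, pi in zip(x, p)])
               > fx + 0.1 * alpha * slope) and alpha > 1e-10:
            alpha *= 0.5
        x_new = [xi + alpha * pi for xi, pi in zip(x, p)]
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(gmean(x_new), g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        if sy > 1e-12:  # BFGS inverse-Hessian update, skipped if curvature fails
            rho = 1.0 / sy
            A = [[(i == j) - rho * s[i] * y[j] for j in range(n)] for i in range(n)]
            AH = [[sum(A[i][k] * H[k][j] for k in range(n)) for j in range(n)]
                  for i in range(n)]
            H = [[sum(AH[i][k] * ((k == j) - rho * y[k] * s[j]) for k in range(n))
                  + rho * s[i] * s[j] for j in range(n)] for i in range(n)]
        x = x_new
    return x

# Made-up convex bi-objective problem: Pareto set is the segment x1 + x2 = 1
f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2
g1 = lambda x: [2 * (x[0] - 1), 2 * x[1]]
g2 = lambda x: [2 * x[0], 2 * (x[1] - 1)]
random.seed(0)
print(mo_bfgs([f1, f2], [g1, g2], [0.8, 0.9]))
```

Because the weights change every iteration, each run walks along the Pareto set rather than settling on one fixed trade-off, which is exactly the spreading effect the stochastic weighting is meant to produce.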

Translating Algorithm 3 into a MATLAB script allowed the first multi-objective tests to be made.
For this first set of MOOP tests, the ZDT functions 1 to 4 were used. Although these test functions
are scalable to up to 30 variables, for ease of representation the tests were carried out with a 3-variable
implementation. The number of individuals in the initial population was set very high for these first
tests, to better visualize whether the stochastic variable weighted sum would spread the results as
expected. The results of these first tests are shown in Figure 5 to Figure 8, where the red dots show
the initial population, the grey lines trace the paths of the solutions up to the final iteration, and the
blue stars show the final optimized results. The Pareto front is shown as a black dotted line:

Figure 5: ZDT 1 optimized by Algorithm 3



Figure 5 shows the optimization of ZDT function 1, the easiest of the set. Using 200 individuals
per variable produced a clear distribution of the non-dominated solutions along the Pareto front, and
convergence also seems decent, especially near the (0.15, 0.6) region of the graph.

Figure 6: ZDT 2 optimized by Algorithm 3

Figure 6 shows the optimization of ZDT function 2. This function is naturally harder to optimize,
due to its non-convex Pareto front, which led to the use of 400 individuals per variable. Careful
analysis of the results reveals the phenomenon described by point C of Figure 3, also mentioned by
Obayashi et al., in which the gradient algorithm stalls the solution somewhere along the plot because
of the complexity of the objective space. However, the behavior shown at points A and B of Figure 3
is not observed, which means the stochastic variable weights did successfully prevent the solutions
from clumping at the extreme points (0, 1) and (1, 0).

Figure 7: ZDT 3 optimized by Algorithm 3

Figure 7 shows the optimization of ZDT function 3. This function is notably hard to optimize, and
especially hard to resolve into a well-distributed Pareto front, due to the non-uniform nature of the
front. Here the algorithm showed fairly satisfactory performance: even though the final results are not
perfectly distributed along the optimal front, they are well distributed along the five non-uniform
portions of the front. Furthermore, taking into consideration the initial guesses, represented by the
red dots, and how much the solutions moved around during the optimization process, it can
reasonably be said that the stochastic variables helped spread the solutions while keeping the
optimization process effective.

Figure 8: ZDT 4 optimized by Algorithm 3

Figure 8 shows the optimization of ZDT function 4, which differs from the others in the nature of
its Pareto front. It is called 'multimodal' by its creators, meaning its objective space contains many
'fake' local Pareto fronts that trick the algorithm into stalling on them, while only the front highlighted
by the black dotted line (the same front as ZDT 1) is the true global front; the fake fronts can be seen
in the many layers formed by the final solutions. On this function the algorithm also showed
satisfactory performance. Unlike for the other functions, the initial guesses of ZDT 4 lie much further
away from the front, as can be seen by the lack of red dots in the figure. Also, even though not many
final optimized results sit directly on the Pareto front, careful analysis shows that intermediate
solutions passed through the Pareto front during the optimization before ending up elsewhere.

The results from this first round of algorithm tests showed that the stochastic variable weights
successfully prevented the negative 'clumping' behavior usually associated with the use of gradient
algorithms for solving MOOPs. The results are indeed inexact, which suggests Algorithm 3 needs
further improvement, but its main premise, suitability for multi-objective optimization, is confirmed.

Elitist Multi-Objective BFGS Algorithm

As previously stated, analysis of the results of Algorithm 3 on the ZDT problems, especially ZDT
functions 3 and 4, shows that between the initial and final populations some points passed through
the optimal Pareto front during the optimization process and were then dragged away. Furthermore,
since the final optimized result depends directly on the initial guess, some fully optimized individuals
may end up less close to the Pareto front than the intermediate results of other individuals. This
phenomenon can be seen clearly in Figure 8, where gray lines pass directly over the Pareto front,
meaning points visited it during the optimization, although only a small fraction of the final optimized
results actually lie there.
With these considerations in mind, the concept of elitism was applied to the algorithm, introducing
a 'survival of the fittest' among the results. To implement elitism, a non-dominated front search
function was created in MATLAB. First, a second iteration metric, the generation, was introduced,
so that the algorithm separates the results by iteration and generation. Each generation comprises a
predefined number of iterations, at the end of which the non-dominated front search categorizes the
best results into a set of non-dominated fronts, i.e., sets of solutions that no other solution is directly
better than. After enough elements are categorized into non-dominated fronts, the algorithm repeats
the process, optimizing the next generation, until the convergence conditions are met or a predefined
number of generations has been computed. It is important to emphasize that this generation approach
requires that:

1. A whole set of solutions is partially optimized and all its initial, final and intermediate
values saved (in contrast to completely optimizing each individual one by one);
2. The inverse Hessian approximation is reset at the end of each generation.

Item 2 has both positive and negative effects on the final optimized results: it may cause some
elements to be optimized more slowly, since the accumulated Hessian approximation gives a better
search direction at each iteration; on the other hand, it may also allow elements that are 'stuck' at
complex points of the objective space to move closer to the optimized solution.
The approach used to carry out the non-dominated search is shown in Algorithm 4:

Load current generation of elements and their function values;
Determine N as the number of points necessary to continue to the next generation;
Initialize number of points categorized P_n ← 0;
Initialize front number F_n ← 1;
Initialize an empty list of checked points;
while P_n < N
    for A = function value of point 1 to all points
        for B = function value of point 1 to all points
            Compare A with B:
            if A is not a dominated point, AND all coordinates of point A are
            smaller than all coordinates of point B: B is dominated;
            Save the list of all dominated points;
        end (for)
    end (for)
    Update P_n;
    Update front F_n with the list of all non-dominated points and their
    respective function values;
    Update the list of checked points with the points assigned to front F_n;
    F_n ← F_n + 1;
end (while)

Algorithm 4: Non-Dominated Front Search
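The front-peeling idea of Algorithm 4 can be sketched compactly using the standard dominance definition (no worse in every objective, strictly better in at least one, for minimization). The thesis's own wording compares raw coordinates, so treat this as an illustrative Python equivalent rather than a transcription of the MATLAB code:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_fronts(points, n_needed):
    """Peel off successive non-dominated fronts until at least n_needed
    points have been categorized (the role of Algorithm 4).

    Returns a list of fronts, each a list of indices into `points`.
    """
    remaining = list(range(len(points)))
    fronts = []
    while sum(len(f) for f in fronts) < n_needed and remaining:
        # points not dominated by any other remaining point form a front
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Made-up set of 2-objective values
pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
print(non_dominated_fronts(pts, 4))  # -> [[0, 1, 2], [3]]
```

The first three points are mutually non-dominated and form front 1; (3, 3) is dominated by (2, 2) and lands in front 2, at which point four points have been categorized and the peeling stops.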

The MATLAB implementation of Algorithm 3 stores the elements in a 3-dimensional floating-point
array: coordinates along dimension 1, iterations along dimension 2, and the different elements along
dimension 3. Since the non-dominated front search analyzes all points, the array's 3rd dimension is
concatenated into the 2nd, so that for an 𝑥 × 𝑦 × 𝑧 array the number of points analyzed is 𝑦 × 𝑧.
The MATLAB implementation of Algorithm 4 is given in Appendix 2: nonDominatedFront.m.
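The concatenation step can be sketched as follows, with a made-up 2 × 3 × 2 history (plain Python lists standing in for the MATLAB array):

```python
# Hypothetical history: arr[c][k][e] holds coordinate c of element e
# at iteration k (an x-by-y-by-z structure, here x=2, y=3, z=2)
arr = [[[1, 7], [2, 8], [3, 9]],      # coordinate 1
       [[4, 10], [5, 11], [6, 12]]]   # coordinate 2

x, y, z = len(arr), len(arr[0]), len(arr[0][0])
# Concatenate the 3rd dimension into the 2nd: every (iteration, element)
# pair becomes one candidate point, giving y*z points of dimension x
points = [[arr[c][k][e] for c in range(x)]
          for e in range(z) for k in range(y)]
print(len(points))  # -> 6, i.e. y * z
```

Every iterate of every element thus competes in the front search, which is what lets intermediate solutions that touched the Pareto front survive into the next generation.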
By including the elitist approach in the existing multi-objective BFGS algorithm, it is possible to
come up with a new algorithm, represented by Algorithm 5:

Initialize the objective functions f_1, …, f_n;
Initialize the solution x_0, the inverse Hessian approximation H_0 and
the initial stochastic variables λ_1, …, λ_n;
k ← 0;
while (generation number not met)
    while (iteration number not met)
        Compute the mean objective function f_mean via Equation 11;
        Compute the search direction p_k;
        Compute the step size α_k;
        Set x_{k+1} = x_k + α_k p_k;
        Define s_k = x_{k+1} − x_k and y_k = ∇f_{k+1} − ∇f_k;
        Compute H_{k+1} from the BFGS update equation;
        k ← k + 1;
        Assign new random values to λ_1, …, λ_n;
    end (while)
    Categorize current generation points by Algorithm 4;
    Load the categorized non-dominated fronts as the initial
    population for the next generation;
end (while)

Algorithm 5: Elitist Multi-Objective BFGS Algorithm with Variable Stochastic Weighted Sum

Implementing Algorithm 5 in a MATLAB script, by adapting the previously coded script to include
the non-dominated front search, allowed a more robust analysis of the optimization methods. To
benchmark the algorithm, the same ZDT functions 1 to 4 were tested, with the results given below in
Figure 9 to Figure 12. The functions are scalable to any number of variables from 1 to 30; although
preliminary tests were performed with 10 variables, the following tests used only 3 variables, for
reasons of computational cost. It is worth noting that the preliminary 10-variable tests showed
essentially no difference from the 3-variable tests below, so they are omitted for brevity.
Each of the ZDT functions was optimized with 5 generations of 5 iterations each. The final front
of each generation is shown as dots, with lighter dots marking the first iterations and darker dots the
final ones. The last optimized results, which also form the optimized front found by the algorithm,
are shown as black crosses:

Figure 9: ZDT 1 optimized by Algorithm 5

First of all, from Figure 9, the Pareto front of ZDT 1 can be seen exceptionally clearly. An initial
population of 300 individuals per variable generated a very satisfactory result: the points are almost
uniformly spaced and well distributed along the front, and the front found not only coincides with
the theoretical front but also has a smooth, well-defined shape.

Figure 10: ZDT 2 optimized by Algorithm 5

The optimization of ZDT 2 required a larger initial population, since it is naturally harder to
optimize due to its non-convex Pareto front. Nevertheless, increasing the initial population to 400
individuals per variable produced a clear and exact final representation of the Pareto optimal front.
The result is arguably satisfactory: the shape of the front is very clear and coincides with the
theoretical front, even considering occasional discontinuities in the shape. The distribution of the
points along the curve is also fairly uniform, which shows the efficacy of the stochastic variable
weighted sum approach. Furthermore, judging by the quality of the front obtained in Figure 10, good
results can be expected in practical Pareto-search applications whose fronts share these
characteristics, regardless of the occasional discontinuities in the final results.

Figure 11: ZDT 3 optimized by Algorithm 5

The optimization of the ZDT 3 function could be carried out with the same lower value of 300
individuals per variable used for ZDT 1, and presented very satisfactory results as well. The
distribution along the fronts seems fairly uniform, with results slightly denser on the lower portions
of each of the discrete fronts, as expected from the more curved convex profile of the function in
those areas. The aspect that makes ZDT 3 hard to optimize, the discontinuity of the Pareto front, is
handled impeccably by the algorithm, with the final results presenting each of the separate fronts
clearly.

Figure 12: ZDT 4 optimized by Algorithm 5

Finally, the optimization of ZDT function 4 is impeccable, presenting a final set of optimized
points lying continuously and smoothly along the global Pareto optimum, which demonstrates the
algorithm's strong performance on multimodal problems (problems with many local Pareto fronts
but only one global Pareto front). The same initial population of 300 individuals per variable was
used, again proving more than enough to reveal the Pareto front.
Accepting the results of Figure 9 to Figure 12 as validating a successful implementation of a
multi-objective optimization algorithm, the research proceeded with Algorithm 5 as the final baseline
algorithm for all subsequent analyses.
The MATLAB implementation of Algorithm 5 is given in Appendix 3: Final Multi-Objective
Algorithm. The version of the script presented is the same one used to generate the final plots of the
ZDT functions, with 2 objective functions and 3 variables. However, as stated in the appendix, the
code can easily be modified to include any number of variables and objective functions, as will be
shown in the next section.
To summarize the algorithm development:
• The novel Variable Stochastic Weighted Sum approach successfully adapted the BFGS
algorithm into a multi-objective optimizer;
• The use of elitism, through a non-dominated front search function, significantly improved
the final optimization results.

Algorithm Testing & Benchmarking

After the final optimizing algorithm, Algorithm 5 (hereafter referred to as the Final Algorithm or
Final Multi-Objective Optimizer), was validated through the tests presented in the previous section,
further work on its implementation began. Since the algorithm was already validated, and the work
presented in this section only evaluates its performance in several respects, it was performed in
parallel with the subsequent work presented in the Software Interfacing and Airfoil Geometry
Parametrization sections.

Performance with Regard to the Number of Variables

As briefly mentioned in the previous section, the tests whose results are presented in Figure 9 to
Figure 12 were performed with a 3-variable version of the ZDT test functions, although preliminary
tests with 10 variables had been made.
Changing the number of variables produces no apparent change in the algorithm's overall ability
to optimize. It does, however, make the Pareto optimal front less clear if the same number of elements
in the initial population is used for a higher number of variables. This is why the initial population is
always given in terms of individuals per variable: when scaled up to more variables, the algorithm is
expected to give the same result for the same number of individuals per variable.

Performance on Higher-Than-Two MOOPs

The successful implementation of the Variable Stochastic Weighted Sum approach, seen at the end
of the previous section, proves the algorithm's ability to solve multi-objective optimization problems.
Indeed, the gap between single-objective and multi-objective optimizers is wide: the former seeks the
single best value of a single function, while the latter seeks a set of best trade-offs; that step up,
enabling BFGS to solve multi-objective problems, has been successfully tackled. Nevertheless, it is
worth testing the multi-objective optimizer on MOOPs with more than two objective functions, both
to check its performance and to see how the increase in objective functions affects the results.
To test the performance of the Final Algorithm, the test problem approach suggested by Deb et al.
[3] was used, adapting one of their DTLZ problems. The DTLZ problems are indefinitely scalable,
and for convenience in visualizing the resulting data, a 3-objective problem was tested. Inspired by
DTLZ problem 6 and adapted following the suggestions in the work of Deb et al., the optimization
problem 𝒫3 was defined as:

$$\mathcal{P}_3 \;\rightarrow\; \text{minimize} \begin{cases} f_1(\boldsymbol{X}) = x_1 \\ f_2(\boldsymbol{X}) = x_2 \\ f_3(\boldsymbol{X}) = \left(1 + g(\boldsymbol{X})\right) h(\boldsymbol{X}) \end{cases}$$

$$\text{subject to} \quad g(\boldsymbol{X}) = 1 + \frac{x_3}{20}, \qquad h(\boldsymbol{X}) = 6 - \sum_{i=1}^{2} \left( 2x_i + \sin 3\pi x_i \right)$$

Equation 12: 3-Objective Test Problem
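Equation 12 can be evaluated directly. The following Python sketch follows the equation as printed, with the summation in h taken over i = 1, 2 (an assumption where the original typesetting is ambiguous):

```python
import math

def p3(x):
    """Evaluate the three objectives of problem P3 (Equation 12).

    Assumes the summation in h runs over the first two variables,
    as in the printed equation.
    """
    f1, f2 = x[0], x[1]
    g = 1.0 + x[2] / 20.0
    h = 6.0 - sum(2 * x[i] + math.sin(3 * math.pi * x[i]) for i in range(2))
    f3 = (1.0 + g) * h
    return [f1, f2, f3]

print(p3([0.0, 0.0, 0.0]))  # -> [0.0, 0.0, 12.0]
```

The sinusoidal term inside h is what breaks the Pareto front into the separate convex patches discussed below.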

The proposed problem is adapted from DTLZ function 6 because the originally proposed function,
although scalable, calls for more than 20 variables and an extra step to normalize the magnitude of
the vector formed by the variables. Since the objective of this test does not demand that level of
mathematical depth, the problem was adapted using the guidelines of the same work, leading to
Equation 12 above. The shape of the theoretical Pareto optimal front of problem 𝒫3 is shown below
in Figure 13:

Figure 13: Theoretical Pareto front of the 3-Objective Problem

The problem was formulated as above to produce a Pareto optimal front similar to that of ZDT
problem 3, seen in Figure 11, in which several discontinuous convex patches form the front due to
the sinusoidal term in the function. This kind of Pareto front was considered a good test shape, since
a single problem can probe the algorithm's ability to:

• converge to a precise approximation of the theoretical Pareto front;
• find all the patches of a discontinuous Pareto front;
• clearly portray the shape of the patches;
• converge to final solutions uniformly distributed throughout each of the patches.

To optimize this problem, Equation 12 was implemented in the MATLAB script containing the
Final Algorithm, presented in Appendix 3. Several different densities of individuals per variable were
tested for the 3 objective functions. The result presented contains the final optimization of a set of
900 individuals, i.e., 300 individuals per variable, the same density used for ZDT function 3, which
is somewhat 'equivalent' to the problem of Equation 12. As with its 2-objective counterparts, this
population was optimized through 5 generations of 5 iterations each.
The results of this optimization of Equation 12 are presented below, in Figure 14:

Figure 14: Optimized Results of the DTLZ Adapted Problem

Compared with the theoretical Pareto front presented in Figure 13, the points in the solution accurately describe the four discrete patches comprising the Pareto front. Judging qualitatively by their shape, the results appear satisfactorily converged to the actual Pareto front, since the four patches follow its sinusoidal shape.
A detailed view of each of the individual Pareto fronts can be seen below, comparing the final
optimized solution against the theoretical Pareto front:

Figure 15: Detailed View of the Solution of the 3-Objective Problem

Judging from Figure 15, all the nodes show a satisfactory approximation of the Pareto front; all but the first, the highest node, show seemingly exact convergence. The poorer convergence at the top node may be attributed to its higher location, and could likely be improved by increasing the number of iterations and/or generations.
As for the shape of the Pareto front formed by the final results, the performance needs further improvement. However, as seen in previous tests, omitted here for brevity, the shape of the Pareto front becomes more apparent, clear and smooth as the number of individuals in the first generation increases. That the 3-objective optimization yields a less clear Pareto front than its 2-objective counterpart for the same number of individuals is somewhat intuitive, since there is an extra dimension for the results to scatter over. Indeed, the result in Figure 14, with 300 individuals per variable, was accepted as satisfactory after trials with 30, 50 and 100 individuals per variable, each increment yielding a more defined and uniform Pareto front. This observation suggests the following conjecture:

➢ Algorithm 5 can find any point on the Pareto front, provided that sufficiently many individuals are initialized for optimization.

Another point that deserves emphasis is that solutions still tend to converge and accumulate in the lower, more curved and convex portions of the objective space. This is exactly the phenomenon that inspired the novel Stochastic Variable Weighted Sum approach, which proved to mitigate it to some extent. It occurs because more curved, convex regions naturally attract solutions, whereas less curved, non-convex regions spread them out, much as a ball rolls to the bottom of a convex surface and away from a non-convex one. Tests showed that multiplying one of the objective functions by a factor in the interval (0, 1) somewhat lowers the rate at which this happens. However, further analysis of how much this mitigates the tendency, and how it affects the rest of the optimization, such as the convergence rate, is left as a topic for further study.
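The scalarization behind the stochastic variable weighted-sum idea described above can be sketched briefly. The following Python snippet is an illustrative sketch only, not the thesis implementation (which is in MATLAB), and the two toy objectives are hypothetical; the key point is that fresh random weights are drawn on every evaluation, so successive descent steps target different points on the Pareto front:

```python
import random

def scalarize(objectives, weights):
    """Weighted-sum scalarization: combine objective values using normalized weights."""
    total = sum(weights)
    return sum(w / total * f for w, f in zip(weights, objectives))

def stochastic_weighted_objective(fs, x):
    """Draw fresh random weights for each evaluation, as in the stochastic
    variable weighted-sum approach: each descent step minimizes a different
    convex combination of the objectives, spreading solutions along the front."""
    weights = [random.random() for _ in fs]   # new random weights every call
    values = [f(x) for f in fs]
    return scalarize(values, weights)

# Hypothetical toy objectives with a trade-off (for illustration only)
f1 = lambda x: x**2
f2 = lambda x: (x - 2)**2

val = stochastic_weighted_objective([f1, f2], 1.0)
# f1(1) = f2(1) = 1, so any convex combination gives val ≈ 1
```

Because the weights are renormalized to sum to one, the scalarized value always lies between the smallest and largest objective values at that point.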

Finally, to conclude the section on Algorithm Development, it should be clarified that the convergence rate and computational cost of the algorithm will not be studied in depth in this paper. Convergence appears acceptable: the results obtained on the standard test functions (ZDT and DTLZ) compare well with those presented in the research where these problems were first introduced, and the overall convergence rate (good results in around 25 iterations in total) is significantly faster than that of other algorithms that produce good Pareto fronts, most of which need more than 500 iterations and often around 10,000 for more complicated problems. Computational cost is also not a concern, since this multi-objective optimizer is aimed entirely at aerodynamic shape problems, whose RANS calculations take immensely longer than the algorithm's own computations. Because very satisfactory solutions can be obtained in a small number of iterations, and thus with a small number of RANS calculations, the convergence rate and computational cost are considered acceptable.
Software Interfacing

This section presents the portion of the work concerned with organizing the software used into a concise, integrated interface to carry out the proposed task of aerodynamic shape optimization. With the algorithm complete, several practical considerations about its application led to the work presented here.

Introduction to Software Suite


The problem of multi-objective optimization applied to engineering naturally spans many specialties and fields, and thus requires specialized software for each of them. The problem presented in this paper has dealt heavily with calculus, numerical mathematics, data sorting, geometry analysis, geometry parametrization and aerodynamics. The complete software suite used over the course of this work is presented below, listing each program, its role, and the portion of the research in which it was applied.

MATLAB
MATLAB is a multidisciplinary programming language and environment developed by MathWorks, Inc. It was the cornerstone of this project, serving not only as the center for managing the data involved in the optimization problem, but also as the environment for performing all the computations.
MATLAB has extended capabilities for many mathematical, scientific and engineering applications, as well as an extensive library for data representation and visualization. It is usually applied to general engineering tasks involving data management, but its use was explored in much more depth for this paper:
• Due to its vast library of multidisciplinary functions, including symbolic mathematics, data I/O, plotting, optimization and scripting, it was used as the programming tool for implementing the optimization algorithm, as well as for performing benchmarking tests and displaying results;
• Due to its ability to integrate functions from external libraries and other programming languages, it was used as the main environment for the final optimization program, which requires access to, and data exchange between, several different programs.

Pointwise
Pointwise is a meshing program dedicated to generating highly customizable meshes for FEM and CFD, with support for several numerical solvers and straightforward geometry-import capabilities. It also provides a very complete guide on creating several types of meshes for different applications, along with concise documentation.
In this paper, Pointwise was used to generate the baseline meshes for CFD analysis. It is not expected to be invoked during the iterative optimization process, although it does offer functionality for automating mesh generation.
ANSYS Fluent
ANSYS Fluent, created and distributed by ANSYS Inc., is perhaps the most popular CFD solver in the world. Fluent is a RANS solver (see Literature Review) and handles several different classes of fluid dynamics problems. Although it supports thermal, electrical, energy-exchange and other effects in its analyses, only its viscous flow capabilities are used in this paper: it is the tool used to evaluate the aerodynamic characteristics of the objectives to be optimized, constantly receiving information from the optimization algorithm, calculating, and sending results back.
Fluent also offers a batch mode, which can calculate bulks of problems in the background without user interaction, as well as a journaling mode, in which the user writes scripts in the software's own command language to be run automatically.
Perhaps the key Fluent feature used in this paper, and the main feature explored in this section, is its ability to run in server mode: it can be initialized so that external software can access it over a server address, allowing data and analysis instructions to be passed to it in real time without interacting with the user interface.

Interfacing of Software Suite for Optimization Applications


After the Final Algorithm was implemented in MATLAB script as shown in the previous section and in Appendix 3: Final Multi-Objective Algorithm, work began on adapting it to aerodynamic shape applications. Since that script works solely with optimization functions defined explicitly in mathematical form (they are applied using MATLAB's Symbolic Math Toolbox), it had to be adapted to exchange information with software specialized in solving aerodynamics problems, which cannot be easily represented, much less solved, in symbolic mathematical notation.
Since MATLAB does not support advanced aerodynamic calculations, ANSYS Fluent was selected as the solver of choice, due to its past use by the author and its renowned popularity. Fluent's journaling feature was briefly considered as the method for integrating MATLAB and Fluent, with communication done through text I/O on a file shared by both programs.
However, the server mode of Fluent was quickly chosen over the alternatives once the reference documentation introducing its use was reviewed. The choice of this mode, called Fluent aaS (as a Server) in the documentation, came from the real-time control it delivers, whereas the other approaches rely on pre-programming. With a server allowing real-time control, the optimization algorithm can constantly manipulate data and call Fluent for analysis, without wasting time repeatedly closing and reopening the software.

Mesh Data I/O


Since mesh data needs to be continuously read and altered by both ANSYS Fluent and MATLAB, a format readable by both programs had to be found. Fortunately, .cas files, the file format used by Fluent CAE cases and generated by Pointwise, are not encrypted like other file formats generated by ANSYS software, so they can be easily read by MATLAB as text. Furthermore, the format first lists all the node coordinates, then the relationships between them that form faces and cells. This representation is especially convenient for mesh deformation methods, since deformation changes only the positions of the nodes, not the relationships between faces and cells. An example of this file format is shown below: the header contains information about the mesh metrics, along with some formatting rules that can be disregarded, followed by the beginning of the node-coordinate section, showing the formatting used for the first 10 of the 16,357 nodes in the file:

(1 "Exported from Pointwise")
(0 "22:14:25 Mon May 06 2019")
(0 "Dimension : 2")
(2 2)
(0 "Number of Nodes : 16357")
(10 (0 1 3fe5 0 2))
(0 "Total Number of Faces : 48656")
(0 " Boundary Faces : 415")
(0 " Interior Faces : 48241")
(13 (0 1 be10 0))
(0 "Total Number of Cells : 32299")
(0 " Tri cells : 32299")
(0 " Quad cells : 0")
(12 (0 1 7e2b 0))
(0 "Zone 1 Number of Nodes : 16357")
(10 (1 1 3fe5 1 2)(
5.500000000000000e+00 0.000000000000000e+00
5.500000000000000e+00 5.000000000000000e-04
5.500000000000000e+00 1.168439275588080e-03
5.500000000000000e+00 2.062043233175066e-03
5.500000000000000e+00 3.256626426237982e-03
5.500000000000000e+00 4.853505145734158e-03
5.499999999999999e+00 6.988055320077761e-03
5.500000000000000e+00 9.841126451512666e-03
5.500000000000000e+00 1.365425226972153e-02
5.500000000000000e+00 1.874989877155121e-02
...

In addition to being easily read, preliminary tests showed that altering the coordinates of the points does not affect the readability of the mesh by Fluent, even if the geometry becomes structurally impossible. This means that if a mesh deformation method can be successfully implemented in MATLAB, Fluent will not generate errors for the deformed meshes. It also allows a single baseline.cas mesh to be generated in advance, with all subsequent meshes used in the optimization process being deformed versions of the baseline, altered in MATLAB.
To read the data from the .cas file in MATLAB, the function importdata() was used; it reads only the node coordinates into a variable containing a numerical array that can be easily extracted. This extraction was implemented in a MATLAB function called CASextractPoints.m. To write the updated points back to the mesh in the same format, two other MATLAB functions were written: findIndex.m and updateMesh.m. MATLAB's text-writing functions use a file position indicator to mark where the program should continue writing. The function findIndex.m places this indicator at the beginning of the point-coordinate section, so that updateMesh.m can rewrite the new values (obtained after deformation) with the same formatting. The scripts for CASextractPoints.m, findIndex.m and updateMesh.m are shown in Appendix 4: Mesh I/O Functions.
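As a language-neutral illustration of the extraction step performed by CASextractPoints.m, the following Python sketch pulls node coordinates out of .cas text. It is not part of the thesis toolchain, and it assumes (as in the excerpt above) that each node line contains exactly two scientific-notation floats while header and connectivity lines do not:

```python
import re

def extract_points(cas_text):
    """Extract 2-D node coordinates from Fluent .cas text.

    Assumption: coordinate lines hold exactly two floats in scientific
    notation; all other lines (headers, connectivity) are skipped.
    """
    float_pair = re.compile(
        r'^\s*([+-]?\d+\.\d+e[+-]\d+)\s+([+-]?\d+\.\d+e[+-]\d+)\s*$')
    points = []
    for line in cas_text.splitlines():
        m = float_pair.match(line)
        if m:
            points.append((float(m.group(1)), float(m.group(2))))
    return points

# Minimal excerpt in the format shown above
sample = """(0 "Number of Nodes : 2")
(10 (1 1 3fe5 1 2)(
5.500000000000000e+00 0.000000000000000e+00
5.500000000000000e+00 5.000000000000000e-04
"""
pts = extract_points(sample)
# pts -> [(5.5, 0.0), (5.5, 0.0005)]
```

Writing the deformed coordinates back, as updateMesh.m does, is the mirror operation: seek to the start of the coordinate section and print the new values with the same fixed-width scientific format.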

Fluent aaS
Currently, ANSYS Inc. provides a dedicated MATLAB toolbox called the aaS Toolbox, which it recommends as the method of choice for interfacing MATLAB and Fluent. However, since this toolbox is only available to commercial clients of ANSYS software, its use was discarded in favor of the native aaS mode of Fluent.
The ANSYS Fluent aaS [7] mode allows connection to and control of Fluent sessions remotely. It works by creating a built-in Internet Inter-ORB Protocol (IIOP) server, providing a text file with the address needed to connect to it.
To access Fluent in server mode on Windows, it is necessary to declare FLUENT_AAS=1 as an environment variable before launching the software. It is also possible to run the command fluent (options) -aas in the Windows command prompt (CMD). A working folder should also be specified upon launch; right after launching, Fluent writes a set of text files there containing the address for accessing the server.


Of the several options given by ANSYS for accessing Fluent aaS, the one chosen for this paper was the Software Development Kit (SDK), a server-client package that lets users enable their own client application to connect and communicate with a Fluent as a Server session, providing whatever level of control the application may require. The main mechanism for connecting to Fluent aaS is a CORBA IDL compiler, which is not available in MATLAB; however, ANSYS also provides a .NET connector, which was the chosen approach, since MATLAB has native support for the .NET Framework.
Upon installation, Fluent creates a set of Dynamic-Link Library files that contain wrappers for the CORBA interface and can be accessed through .NET. Once the .NET assembly is loaded into MATLAB, the DotNetCoFluentUnit class is accessed in the AAS_CORBA namespace (here named fluent); the address of the server can then be read and the connection established through the fluent object. After the connection is established, a new object, here called fl, can be created and assigned the controller returned by getDotNetCoFluentSchemeControllerInstance. This controller exposes an interface with functions for more advanced control of Fluent, such as issuing commands to the Fluent command line. In MATLAB, this process is coded as follows:

NET.addAssembly('C:\Program Files\ANSYS Inc\v180\fluent\fluent18.0.0\addons\corba\DotNetFramework40\DotNetCoFluentUnit.dll');
fluent = AAS_CORBA.DotNetCoFluentUnit;
fluent.ConnectToServerFromIorFile('aaS_FluentId.txt');
fl = fluent.getDotNetCoFluentSchemeControllerInstance;

After the fl object is created, commands can be sent to the Fluent command line through calls of the form fl.doMenuCommand(''). Examples of its usage are as follows:

• Read a case file: fl.doMenuCommand('file/read-case/variableCase.cas\nok\n');
• Change the turbulence model to Spalart-Allmaras: fl.doMenuCommand('define/models/viscous/spalart-allmaras/y\n');
• Set the convergence criteria: fl.doMenuCommand('solve/monitors/residual/convergence-criteria/1e-4\n1e-4\n1e-4\n3e-4\n');
• Calculate 300 iterations: fl.doMenuCommand('solve/iterate/300\n');

The complete list of commands can be found in the ANSYS documentation [7]. Commands in the format above were used throughout the subsequent parts of the project to apply analysis parameters programmatically, in real time, from the MATLAB environment. These commands also allow control of the flow of information to Fluent, such as loading case files and saving result data to files. A major advantage of this implementation is that MATLAB 'listens' to Fluent while it runs: for example, while iterations are being calculated in Fluent (the longest part of the analysis), MATLAB pauses and waits for Fluent to finish before continuing.

To conclude the Software Interfacing section: MATLAB and Fluent are interfaced through Fluent's 'as a Server' mode, accessed in MATLAB through a .NET assembly provided by ANSYS. Once the two programs are linked, they constantly exchange analysis information about a baseline mesh, generated in Pointwise, that is repeatedly altered by MATLAB and analyzed by Fluent.
The completion of this section into a clean, smart and efficient control interface allows the final stage of this work to be successfully implemented, letting MATLAB control advanced aerodynamic analyses and thereby transforming it into a powerful aerodynamic shape optimizer.
Aerodynamic Shape Optimization

References

[1] S. D. Sudhoff, "Lecture 9: Multi-Objective Optimization," 2007. [Online]. Available:
https://engineering.purdue.edu/~sudhoff/ee630/Lecture09.pdf. [Accessed February 2019].
[2] E. Zitzler, K. Deb and L. Thiele, "Comparison of Multiobjective Evolutionary Algorithms:
Empirical Results," Evolutionary Computation, vol. 8, no. 2, pp. 173-195, 2000.
[3] K. Deb, L. Thiele, M. Laumanns and E. Zitzler, "Scalable multi-objective optimization test
problems," in Congress on Evolutionary Computation, Honolulu, HI, 2002.
[4] Stanford University, "AA222: Introduction to MDO," [Online]. Available:
http://adl.stanford.edu/aa222/Lecture_Notes_files/AA222-Lecture3.pdf. [Accessed February
2019].
[5] J. Nocedal and S. J. Wright, Numerical Optimization, Springer Science, 2006.
[6] A. Sóbester and A. I. J. Forrester, Aircraft Aerodynamic Design: Geometry and Optimization,
John Wiley & Sons, 2014.
[7] ANSYS, Inc, "ANSYS Help," 2016.
[8] P. R. Spalart and S. R. Allmaras, "A one-equation turbulence model for aerodynamic flows," La
Recherche Aérospatiale, no. 1, pp. 5-21, 1994.
[9] R. Fletcher and C. M. Reeves, "Function minimization by conjugate gradients," The Computer
Journal, vol. 7, no. 2, pp. 149-154, 1964.
[10] S. Obayashi, A. Oyama and S. Daisuke, "Finding Tradeoffs by Using Multiobjective
Optimization Algorithms," Transactions of the Japan Society for Aeronautical and Space
Sciences, vol. 47, no. 155, pp. 51-58, 2004.
[11] Ž. Povalej, "Quasi-Newton’s method for multiobjective optimization," Journal of
Computational and Applied Mathematics, vol. 255, pp. 765-777, 2014.
[12] K. Deb, "Multi-Objective Optimization Using Evolutionary Algorithms: An Introduction,"
Indian Institute of Technology Kanpur, Kanpur, PIN 208016, India, 2011.
[13] R. J. Kuo and F. E. Zulvia, "The gradient evolution algorithm: A new metaheuristic,"
Information Sciences, vol. 316, pp. 246-265, 2015.
[14] Pointwise, Inc., "Pointwise Tutorial Workbook," Fort Worth, TX, 2016.
Appendix

Appendix 1: BFGS2.m
The following script is the first attempt at coding the BFGS algorithm for optimizing a single two-variable function:

function x = BFGS2(fsym, x0, llim, ulim)

f = @(x1, x2) double(fsym(x1,x2));
H = eye(2);
gsym = gradient(fsym);
g0 = double(gsym(x0(1),x0(2)));
p = -H*g0;
phi = @(t) f(p(1)*t + x0(1), p(2)*t + x0(2));
N = 0;
alpha = 1;
while (phi(alpha) > f(x0(1), x0(2)) + 0.1*alpha*g0'*p)
    N = N+1;
    alpha = 0.1/2^N;
end
s = p*alpha;
if max([max((x0 + s) > ulim) max((x0 + s) < llim)])
    s = min([abs(llim - x0); abs(ulim - x0)]) * p/norm(p);
end
xNew = x0 + s;
y = double(gsym(xNew(1),xNew(2))) - g0;
rho = 1/(y'*s);
H = (eye(2) - rho*(s*y'))*H*(eye(2) - rho*(y*s')) + rho*(s*s');
n = 1;
x(n,1:2) = x0;

while norm(g0) > 1e-6
    n = n+1;
    x0 = xNew;
    x(n,1:2) = x0;
    g0 = double(gsym(x0(1), x0(2)));
    p = -H*g0;
    phi = @(t) f(p(1)*t + x0(1), p(2)*t + x0(2));
    N = 0;
    alpha = 1;
    while (phi(alpha) > f(x0(1), x0(2)) + 0.1*alpha*g0'*p)
        N = N+1;
        alpha = 1/2^N;
    end
    s = p*alpha;
    if max([max((x0 + s) > ulim) max((x0 + s) < llim)])
        s = min([abs(llim - x0); abs(ulim - x0)]) * p/norm(p);
    end
    xNew = x0 + s;
    y = double(gsym(xNew(1),xNew(2))) - g0;
    rho = 1/(y'*s);
    H = (eye(2) - rho*(s*y'))*H*(eye(2) - rho*(y*s')) + rho*(s*s');
    if isnan(xNew)
        break
    end
    if n > 50
        break
    end
end
end
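The same structure, a BFGS inverse-Hessian update driven by a simple backtracking (Armijo) line search, can be sketched compactly in Python. This is an illustrative sketch only: the bound handling of BFGS2.m is omitted, and the test function at the end is a hypothetical quadratic, not one of the thesis problems. The Armijo constant 0.1 mirrors the script above:

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-6, max_iter=50):
    """Minimal BFGS with Armijo backtracking line search (no bound handling)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                      # inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        p = -H @ g                     # quasi-Newton search direction
        alpha = 1.0
        # backtrack until the Armijo sufficient-decrease condition holds
        while f(x + alpha * p) > f(x) + 0.1 * alpha * (g @ p) and alpha > 1e-12:
            alpha *= 0.5
        s = alpha * p
        x_new = x + s
        y = grad(x_new) - g
        rho = 1.0 / (y @ s)
        I = np.eye(n)
        # BFGS update of the inverse Hessian
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
        x, g = x_new, grad(x_new)
    return x

# Hypothetical quadratic test: minimum of (x1-1)^2 + (x2+2)^2 is at (1, -2)
f = lambda x: (x[0] - 1)**2 + (x[1] + 2)**2
grad = lambda x: np.array([2*(x[0] - 1), 2*(x[1] + 2)])
xmin = bfgs(f, grad, [0.0, 0.0])
```

On a quadratic the method recovers the exact inverse Hessian quickly, which is why only a handful of iterations are needed.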
Appendix 2: nonDominatedFront.m
The following script is the MATLAB implementation of the non-dominated front search. The function takes in a whole generation of optimized points: the variable result is an array of the optimized variables of the objective functions, and the variable points is an array of the corresponding objective-function values (the function evaluations of result). The script below works only for 3-variable, 3-objective problems; for different settings, the if statement and the definitions of front{} have to be altered:

function front = nonDominatedFront(result, points)

numEls = size(points,3);
numPoints = size(points,2)*size(points,3);
points = reshape(points,[3,numPoints]); % 3 objectives (the dominance checks below use points(1:3,:))
result = reshape(result,[3,numPoints]);

frontPoints = 0;
frontNo = 1;
checkedPoints = zeros(1,numPoints);

while frontPoints < numEls


dominated = checkedPoints;
for n = 1:numPoints
for m = 1:numPoints
if (dominated(n)~=1 && ...
(points(1,n)<points(1,m) && points(2,n)<points(2,m)...
&& points(3,n)<points(3,m)))
% Check if point n is not dominated
% Check if point m is dominated by n
dominated(m) = 1;
end
end
end
frontPoints = frontPoints + sum(~dominated);
% dominated contains 1 for dominated points;
% IF THE NUMBER OF VARIABLES IS DIFFERENT, CHANGE THE FOLLOWING LINE
front{1,frontNo} = ...
[(nonzeros(~dominated.*(points(1,:)+1))-1)'; ...
(nonzeros(~dominated.*(points(2,:)+1))-1)'; ...
(nonzeros(~dominated.*(points(3,:)+1))-1)'];

front{2,frontNo} = ...
[(nonzeros(~dominated.*(result(1,:)+1))-1)'; ...
(nonzeros(~dominated.*(result(2,:)+1))-1)'; ...
(nonzeros(~dominated.*(result(3,:)+1))-1)'];

checkedPoints = checkedPoints + ~dominated;

frontNo = frontNo + 1;
end
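Since the MATLAB function above hard-codes three objectives, the dominance test is also worth seeing in a dimension-agnostic form. The Python sketch below is illustrative only and uses the standard weak-dominance definition (no worse in every objective, strictly better in at least one), which is slightly stronger than the strictly-less-in-all-objectives check in the script:

```python
def dominates(a, b):
    """True if point a dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """Return the non-dominated subset of `points`, for any number of
    objectives (each point is a tuple of objective values)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy 2-objective example (hypothetical values, for illustration only)
pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = non_dominated_front(pts)
# front -> [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]   ((3,3) is dominated by (2,2))
```

This quadratic-time filter suffices for the population sizes used here; successive fronts, as in the MATLAB script, are obtained by removing the current front and filtering again.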
Appendix 3: Final Multi-Objective Algorithm


The following script presents the final algorithm used for optimization. It was used to generate all the results presented in Section 3 (Algorithm Study), and the code was modified for the applications presented in Sections 4 and 5. Note that although there is a variable-number declaration, the code must be changed manually to include extra variables wherever these are assigned to a symfun-type object in the script. Also, as mentioned in Appendix 2, the code of nonDominatedFront.m needs to be altered to include extra objective functions and function variables. The objective functions are declared as f1(X), f2(X), …, fn(X) at the beginning of the script.

nVars = 3;
X = sym('x',[1 nVars]);

f1(X) = 1-exp(-4*X(1)).*sin(6*pi*X(1)).^6
g(X) = 1 + 9*(sum(X(2:end))/2).^0.25;
h = 1-(f1./g).^2;
f2(X) = g.*h

llim = 0;
ulim = 1;
nPop0 = nVars*300;
nIt = 5;
x = llim + (ulim - llim)*rand(nVars, nPop0); % initial population; assumed uniform random within bounds (initialization not shown in the original listing)
H = zeros(nVars,nVars,nPop0);

figure
hold on
for genNum = 1:5
result = zeros(nVars,nIt, nPop0);
for n = 1:nPop0
H(:,:,n) = eye(nVars);

lambda = rand(2,1);
fMean = lambda(1)/sum(lambda)*f1 + lambda(2)/sum(lambda)*f2;

g0 = gradient(fMean);
gVal = zeros(nVars,nIt);

gVal(:,1) = double(g0(x(1,n), x(2,n), x(3,n)));


p = -H(:,:,n)*gVal(:,1);
phi = @(t) fMean(x(1,n) + t*p(1), x(2,n) + t*p(2), x(3,n) + t*p(3));

N = 0;
alpha = 0.1;
while (double(phi(alpha)) > ...
double(fMean(x(1,n), x(2,n), x(3,n)) + 0.1*alpha*gVal(:,1)'*p))
N = N+1;
alpha = 0.1/2^N;
end
s = p*alpha;

if max([max((x(:,n) + s) > ulim) max((x(:,n) + s) < llim)])


s = min([abs(llim - x(:,n)); abs(ulim - x(:,n))]) * p/norm(p);
end

xNew = x(:,n) + s;
y = double(g0(xNew(1),xNew(2), xNew(3))) - gVal(:,1);
rho = 1/(y'*s);
Hold = H(:,:,n);

H(:,:,n) = (eye(nVars) - rho*(s*y'))*H(:,:,n)*(eye(nVars) - rho*(y*s'))...


+ rho*(s*s');
result(:,1,n) = x(:,n);

for m = 2:nIt

result(:,m,n) = xNew;
lambda = rand(2,1);
fMean = lambda(1)/sum(lambda)*f1 + lambda(2)/sum(lambda)*f2;
g0 = gradient(fMean);

gVal(:,m) = double(g0(xNew(1), xNew(2), xNew(3)));


p = -H(:,:,n)*gVal(:,m);
phi = @(t) fMean(xNew(1) + t*p(1), xNew(2) + t*p(2), xNew(3) + t*p(3));
N = 0;
alpha = 0.1;
while (double(phi(alpha)) > ...
double(fMean(xNew(1), xNew(2), xNew(3)) + 0.1*alpha*gVal(:,m)'*p))
N = N+1;
alpha = 0.1/2^N;
end
s = p*alpha;

if max([max((xNew + s) > ulim) max((xNew + s) < llim)])


s = min([abs(llim - xNew); abs(ulim - xNew)]) * p/norm(p);
H(:,:,n) = Hold;
end

if max(isnan(xNew+s))
for ii = 1:nVars
result(ii,m:end, n) = xNew(ii,1);
end
break
end

xNew = xNew + s;
y = double(g0(xNew(1),xNew(2), xNew(3))) - gVal(:,m);
rho = 1/(y'*s);

H(:,:,n) = (eye(nVars) - rho*(s*y'))*H(:,:,n)*(eye(nVars) - rho*(y*s'))...


+ rho*(s*s');
end

end

points = double([f1(result(1,:,:), result(2,:,:), result(3,:,:));...


f2(result(1,:,:), result(2,:,:), result(3,:,:))]);

front = nonDominatedFront(result,points);
x = cat(2,front{2,:});

for nn = 1:size(front,2)
scatter(front{1,nn}(1,:),front{1,nn}(2,:), 'MarkerEdgeColor', (5-genNum)/5*[1 1 1], 'Marker','.')
end
end

grid on
scatter(front{1,1}(1,:),front{1,1}(2,:), 'MarkerEdgeColor', [0 0 0], 'Marker','o')
Appendix 4: Mesh I/O Functions


The following function scripts are used to import the mesh point data into MATLAB and to rewrite
new points into the file. It is important to emphasize that the number of points written to the file with
updateMesh.m should be equal to the number of points imported with CASextractPoints.m.

CASextractPoints.m
function points = CASextractPoints(filename)
importedData = importdata(filename);
points = importedData.data;
end

findIndex.m
function index = findIndex(fileID)
frewind(fileID);
for n = 1:21
fgetl(fileID);
end
index = ftell(fileID);
end

updateMesh.m
function updateMesh(fileID, coordinateIndex, newPoints)
fseek(fileID,coordinateIndex-1,'bof');
fprintf(fileID, '\n % 20.15e % 20.15e ', newPoints);
end
