Graduation
Thesis
Multiobjective Gradient Optimization Applied in
Aerodynamic Shape Optimization
ID Number 191561174
Class 1915612
June 2019
Graduation Thesis Report Sheet
Contents
Introduction
    Preface
    Problem
    Objectives
        Primary Objective
        Secondary Objectives
    Hypotheses
    Method of Approach
Literature Review
References
Appendix
    Appendix 1: BFGS2.m
    Appendix 2: nondominatedFront.m
    Appendix 3: Final Multi-Objective Algorithm
    Appendix 4: Mesh I/O Functions
        CASextractPoints.m
        findIndex.m
        updateMesh.m
List of Figures
Figure 1: Method of approach
Figure 2: Example of airfoil with control points
Figure 3: Weighted function behavior in a non-convex Pareto front
Figure 4: BFGS2.m optimization of Rosenbrock's banana function
Figure 5: ZDT 1 optimized by Algorithm 3
Figure 6: ZDT 2 optimized by Algorithm 3
Figure 7: ZDT 3 optimized by Algorithm 3
Figure 8: ZDT 4 optimized by Algorithm 3
Figure 9: ZDT 1 optimized by Algorithm 5
Figure 10: ZDT 2 optimized by Algorithm 5
Figure 11: ZDT 3 optimized by Algorithm 5
Figure 12: ZDT 4 optimized by Algorithm 5
Figure 13: Theoretical Pareto front of the 3-objective problem
Figure 14: Optimized results of the DTLZ adapted problem
Figure 15: Detailed view of the solution of the 3-objective problem
List of Equations
Equation 1: General form of the multi-objective optimization problem
Equation 2: Dominance test
Equation 3: General form of ZDT problems
Equation 4: Gradient of a function
Equation 5: Hessian of a function
Equation 6: BFGS update equation
Equation 7: Bézier curve
Equation 8: Reynolds-averaged Navier-Stokes equations
Equation 9: Spalart-Allmaras model transport equation
Equation 10: The Armijo condition
Equation 11: Weighted sum of the objective functions
Equation 12: 3-objective test problem
List of Algorithms
Algorithm 1: Generalized form of gradient algorithms
Algorithm 2: BFGS optimization algorithm
Algorithm 3: Multi-objective BFGS algorithm with stochastic variable weighted sum
Algorithm 4: Non-dominated front search
Algorithm 5: Elitist multi-objective BFGS algorithm with variable stochastic weighted sum
Introduction
Preface
In the field of aircraft design, especially in the portion of the design process concerned with the exterior shape of the aircraft, many variables are at play, and every change to them must be considered very carefully; in most cases, even a slight modification to any part of the wetted area will affect the aerodynamic performance of the aircraft as a whole over its operational life, sometimes drastically. Furthermore, how the many surfaces of an aircraft interact with each other aerodynamically is not always clear to the designer, and a clear picture of these interactions is often impossible without careful technical analysis. However, to select the best option from a wide range of designs, programming, testing, and analyzing each candidate is not humanly feasible, especially as the number of variables increases. Moreover, since optimizing even a single objective is already a hard task to perform by 'brute force', that is, by testing every option to find the one that works best, deciding the best trade-offs among many objectives is harder still. In aircraft design, an example of this problem can be stated as:
"How does an engineer know what values to select for wingspan, wing sweep angle, and what
airfoil section to select, so that their design will give the best possible trade-off between a low drag
in cruise and a high lift in take-off?"
To help solve these problems comes the field of optimization. Rooted in both mathematics and
computational engineering, optimization, or more specifically, multi-objective optimization deals
with finding the best possible trade-offs between a set of options.
In this paper, this discipline is explored through the computational use of gradient algorithms, a
class of optimization algorithms, and their application in solving multi-objective optimization
problems related to aerodynamics. The research will propose the use of a novel method to use gradient
optimization algorithms in multi-objective optimization problems, and interface the final version of
the algorithm with an aerodynamic problem-solving software to perform case studies related to
aerodynamic shape optimization.
The paper is organized as follows: in Section 2 (Literature Review), the sources used and their relevance to the work are reviewed; in Section 3 (Algorithm Study)
the development of the optimization algorithm is presented, and its performance is tested in common
MDO (multi-disciplinary optimization) benchmarking problems; in Section 4 (Software Interfacing)
the communication between the algorithm and the CFD solver is presented; in Section 5 (Airfoil
Geometry Parametrization) the method used to translate airfoil geometry into the optimizing
algorithm is presented, along with a case study on airfoil optimization; in Section 6 (Aircraft Structure
Parametrization) the method used to translate general aspects of the aircraft structure into the
algorithm is presented, along with a case study on general aircraft structure optimization; finally, in
Section 7 (Conclusion and Further Studies) the study is concluded, reviewing the solutions for the
problem, and further study propositions on the topic are presented.
Problem
Can a gradient algorithm be developed to solve multi-objective problems in general, and then be applied to optimize aerodynamic shapes, in the context of finding optimal trade-offs between several designs?
Objectives
Primary Objective
To develop a gradient algorithm to find optimal trade-offs between several aerodynamic designs,
to satisfy several stated objectives.
Secondary Objectives
• To find a uniformly distributed front of many such trade-offs, to allow for choice making
when selecting a final design;
• To develop such an algorithm to be fast-converging and robust;
• To effectively express the geometries of interest in ways that can be read and tested
efficiently;
• To interface the algorithm with the CFD solver to be able to solve real-world engineering
problems as efficiently as theoretical mathematical problems;
• To effectively test the algorithm’s application in at least one case study of aerodynamic
optimization;
Hypotheses
• Applying the suggested novel approach with variable stochastic weights will make single-objective algorithms work for multi-objective problems.
Method of Approach
The suggested method to approach the problem will be to divide the study into Algorithm
Development and Aerodynamic Analysis, following the structure below:
• Algorithm Development:
    o Algorithm Theory;
    o Algorithm Coding;
    o Algorithm Testing;
    o Software Interfacing;
    o Debugging;
• Aerodynamic Analysis:
    o Geometry Parametrization;
    o Mesh Perturbation;
    o Airfoil Case Study;
    o Aircraft Structure Case Study.
(Figure 1: Method of approach)
Throughout the course of this paper, standard formatting was used uniformly to represent different
kinds of information. These formatting rules are as follows:
• Standard formatting in Times New Roman was used to represent text and research in general;
• 𝐶𝑎𝑚𝑏𝑟𝑖𝑎 𝑀𝑎𝑡ℎ 𝐼𝑡𝑎𝑙𝑖𝑐 was used to represent equations, mathematical notation and information
of mathematical significance. Furthermore, variables in bold, such as 𝑿, represent vectors;
• Lucida Console was used to represent code, portions of scripts, functions, software inputs and
outputs and file names;
• Times New Roman bound by a grey textbox was used to represent algorithms.
Literature Review
In this section, the literature relevant to the development of this paper is reviewed, highlighting the points each source contributes to the research that follows. The review also includes knowledge acquired a posteriori during the research, so as to better organize the theoretical material presented in the methodology section, i.e., mathematical equations, theorems, etc.
Multi-Objective Optimization
Multi-Objective Optimization is a subfield of optimization that deals with problems in which more than one objective must be optimized at the same time. As in single-objective optimization, the goal is to either minimize or maximize a function, and the final answer obtained by solving these problems is a set of solutions defining the best trade-offs between the competing objectives [1].
In this work, optimization will deal only with the direct minimization of mathematical functions,
and thus the general form of a Multi-Objective Optimization Problem (MOOP) will be considered to
be:
$$\text{minimize}\;\{\, f_1(\mathbf{X}),\, f_2(\mathbf{X}),\, \dots,\, f_m(\mathbf{X}) \,\},\quad \mathbf{X} = [x_1,\, x_2,\, \dots,\, x_n]^T,\quad x_i^{(L)} \le x_i \le x_i^{(U)},\;\; i = 1, 2, \dots, n$$

Equation 1: General form of the multi-objective optimization problem
Pareto Front
Perhaps the most important concept of the Multi-Objective Optimization Problem is the Pareto
Front. The Pareto Front, or Pareto Optimal Solution, is the set of all best trade-offs between the
objectives. All points composing the Pareto Front are points which cannot be further optimized for
one of the objectives without compromising the other ones. The metric used to classify these points
is called dominance: every point of the Pareto Front is a non-dominated point, meaning it is not dominated by any other solution. For the problems investigated in this paper, represented by Equation 1, a solution A is said to dominate a solution B if A is no worse than B in every objective and strictly better in at least one.
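The dominance test of Equation 2 can be sketched as below. This is an illustrative Python snippet, not part of the thesis's MATLAB code, and the function name is hypothetical:

```python
def dominates(a, b):
    """Return True if objective vector a dominates b (minimization):
    a is no worse than b in every objective and strictly better in at
    least one of them."""
    no_worse = all(ai <= bi for ai, bi in zip(a, b))
    strictly_better = any(ai < bi for ai, bi in zip(a, b))
    return no_worse and strictly_better
```

Note that two identical vectors do not dominate each other, since neither is strictly better anywhere.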
Optimization Algorithms
Although optimization is a classic mathematical field, solving MOOPs efficiently requires the use of specialized numerical algorithms.
ZDT Problems
The ZDT Problems, named after their creators Zitzler, Deb and Thiele, are a set of six mathematical problems usually regarded as a standard for benchmarking Multi-Objective Optimization Algorithms (MOOAs) on problems with two objectives. Each problem tests an algorithm's efficacy in dealing with a specific kind of difficulty, and the set has become popular in engineering applications because its Pareto fronts share the characteristics of those found in common engineering optimization problems.
The ZDT functions used to benchmark the algorithm presented in this study are:
• 𝒯1(𝑿), to benchmark for convex problems;
• 𝒯2(𝑿), to benchmark for nonconvex problems;
• 𝒯3(𝑿), to benchmark for discrete problems;
• 𝒯4(𝑿), to benchmark for multimodal problems [2].
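As an illustration, ZDT1, the convex member of the family, in its standard form can be written as the Python sketch below (illustrative only; the thesis implementation is in MATLAB):

```python
import math

def zdt1(x):
    """ZDT1 test function (convex Pareto front); x is a list of n
    variables, each in [0, 1]. Pareto-optimal solutions have
    x_2 = ... = x_n = 0, so that g = 1 and f2 = 1 - sqrt(f1)."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return [f1, f2]
```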
DTLZ Problems
The DTLZ Problems are a scalable variant of the ZDT Problems, proposed to test on more-than-two-objective problems the algorithms that have successfully demonstrated the ability to solve two-objective problems. They comprise a set of seven problems that scale to any number of objectives. This work will use only 3-objective formulations as a benchmark, to allow an intuitive presentation of the Pareto fronts, and will analyze only selected DTLZ problems, since their main purpose here is to test more-than-two-objective performance [3].
Elitism
A concept also present in the algorithm development is elitism. Elitist algorithms use the concepts of generations and fitness to filter their candidate solutions. Since the first proposed solutions are randomly generated and later optimized, after a specified number of iterations the algorithm stops and discards the worst results, keeping only the best ones found across every iteration, not only the last one. This can be thought of as the 'survival of the fittest' rule applied to optimization.
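The elitist selection step can be sketched as a brute-force non-dominated filter over the pooled results (an illustrative Python snippet, not the nondominatedFront.m implementation; in the elitist loop the pool would be the union of the archive and the latest generation):

```python
def nondominated(points):
    """Return the non-dominated subset of a pool of objective vectors
    (minimization), i.e. the current approximation of the Pareto front."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    # keep a point only if no other point in the pool dominates it
    return [p for p in points if not any(dominates(q, p) for q in points)]
```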
Gradient Algorithms
Gradient algorithms are optimization methods that account not only for the value of the result, but also for the rate of change of the result in different directions. All gradient algorithms follow the basic form of Algorithm 1 [4].
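The generalized form can be sketched as the loop below. This is an illustrative Python skeleton under the assumption that the generic structure is gradient, direction, step, update; steepest descent with a fixed step is used only as the simplest instance of that form:

```python
def gradient_optimize(grad, x0, step=0.1, tol=1e-8, max_iter=10000):
    """Skeleton shared by gradient algorithms: evaluate the gradient,
    pick a descent direction, choose a step size, update, repeat until
    the gradient norm falls below a tolerance."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:      # converged
            break
        d = [-gi for gi in g]                          # descent direction
        x = [xi + step * di for xi, di in zip(x, d)]   # update step
    return x
```

Different members of the family (steepest descent, conjugate gradient, quasi-Newton) differ only in how the direction and step size are chosen.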
Gradient
The gradient is the multivariable generalization of the derivative, and it points in the direction of steepest ascent. For any function 𝑓(𝑿), the gradient is given by the vector:

$$\nabla f(\mathbf{X}) = \left[ \frac{\partial f}{\partial x_1}\;\; \frac{\partial f}{\partial x_2}\;\; \cdots\;\; \frac{\partial f}{\partial x_n} \right]^T$$

Equation 4: Gradient of a function
Hessian
The Hessian is a symmetric matrix, which serves as the multivariable generalization of the second
derivative for a multivariable function. It contains information about the function's curvature at any
point in its domain, and for a function 𝑓(𝑿) it is given by:
$$\nabla^2 f(\mathbf{X}) \equiv H(\mathbf{X}) = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\, \partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_n\, \partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{bmatrix}$$

Equation 5: Hessian of a function
BFGS Method
The BFGS method, named for its discoverers Broyden, Fletcher, Goldfarb and Shanno, is the most popular of the quasi-Newton algorithms. Newton optimization algorithms use the exact Hessian of a function to solve for the best possible descent direction, but computing the Hessian is usually infeasible in iterative methods; instead, the quasi-Newton algorithms compute an approximation of the Hessian and update it at each iteration. For an optimization problem, the BFGS method updates the approximate inverse Hessian at iteration 𝑘 + 1, denoted 𝐻𝑘+1, by the formula:
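In the standard form given by Nocedal and Wright, with $s_k = \mathbf{X}_{k+1} - \mathbf{X}_k$ and $y_k = \nabla f(\mathbf{X}_{k+1}) - \nabla f(\mathbf{X}_k)$, the update of Equation 6 reads:

```latex
H_{k+1} = \left(I - \rho_k\, s_k y_k^{T}\right) H_k \left(I - \rho_k\, y_k s_k^{T}\right)
          + \rho_k\, s_k s_k^{T},
\qquad \rho_k = \frac{1}{y_k^{T} s_k}
```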
The BFGS method has very good properties when dealing with functions of many variables, which makes it especially attractive for modern engineering applications. Although it was discovered for, and is mainly applied to, single-objective optimization problems, its application was adapted in this research for effective use in multi-objective settings [5].
Geometry Parametrization
The bridge between the aerodynamic shape (or any shape for that matter) and an optimization
method is the way in which the geometry is represented in mathematical terms, also called
parametrization. In engineering terms, a parametric geometry is a precisely defined geometrical curve
or surface that can be changed and adjusted through the manipulation of several parameters, called
design variables. All the design variables and the relationship between themselves and the curve are
precisely defined, and as such, can be integrated in the optimization algorithm.
A proper parametric geometry must have some key features, introduced by Sóbester and Forrester
as:
• Conciseness: limit the number of design variables as much as possible, to keep the design-variable space as small as possible without losing the expressiveness of the geometry;
• Robustness: the parametrization must represent only shapes that make sense physically and geometrically; for example, it must not generate self-intersecting shapes or shapes with negative volume or infinite asymptotes;
• Flexibility: the parametrization should provide control over a wide enough range of different shapes to ensure diversity in the results, while still keeping the search space concise.
Bézier Curve
The Bézier Curve is a parametric curve, discovered in the 1960s by French engineer Pierre Bézier,
who used it to model the body of Renault automobiles. Mathematically, the Bézier Curve is a
superposition of the Bernstein Polynomials of its degree; a Bézier curve of degree 𝒏 is defined by
𝒏 + 𝟏 points. If 𝒂(𝑖) denotes the ith control point of a Bézier curve of degree 𝒏, the curve is given
by the function:
$$\mathcal{B}(u) = \sum_{i=0}^{n} \mathbf{a}^{(i)}\, b_{i,n}(u), \quad \text{where } b_{i,n}(u) = \binom{n}{i}\, u^{i} (1-u)^{n-i}, \quad u \in [0,1]$$

Equation 7: Bézier curve
The Bézier curve is used in this project to parametrize airfoil sections for optimization, since it is easy to calculate, easy to differentiate, smooth over its whole domain, and tangent to the first and last segments of its control polygon.
Control Points
Control points are the units used to control the parametrized geometry in an intuitive manner. For
two-dimensional lines, for example, control points can be used to dictate how the curve behaves.
Figure 2 shows an example of an airfoil-like curve made by a Bézier curve in red, with its control
points in blue. It can be seen how the shape drawn by the control points relates to the Bézier curve
they generate.
For Bézier curves, altering the position of any one control point alters the entire curve, and the entire curve is contained in the convex hull drawn by the points. It is also important to note that control points can be weighted, so that control points with larger weights pull the curve closer to them.
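Evaluating a point on a Bézier curve from its control points amounts to summing the Bernstein basis of Equation 7. A minimal Python sketch (illustrative; the thesis code is in MATLAB, and the unweighted form is shown):

```python
from math import comb

def bezier_point(ctrl, u):
    """Evaluate a Bézier curve of degree n = len(ctrl) - 1 at u in [0, 1].
    ctrl is a list of (x, y) control points a^(i); each is weighted by the
    Bernstein basis b_{i,n}(u) = C(n, i) * u**i * (1 - u)**(n - i)."""
    n = len(ctrl) - 1
    x = y = 0.0
    for i, (px, py) in enumerate(ctrl):
        b = comb(n, i) * u ** i * (1.0 - u) ** (n - i)
        x += b * px
        y += b * py
    return (x, y)
```

At u = 0 and u = 1 the basis collapses to the first and last control points, which is why the curve interpolates its endpoints.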
Mesh
A mesh is a topological domain used to translate a geometrical space into a computational space.
It is the domain used to perform Finite-Element simulations. A mesh is composed of nodes, cells and
edges. Meshes are generated by specialized software according to CAD geometry and specific mesh-
generation parameters which can be specified by the user.
Mesh Deformation
Usually, generating a high-quality mesh takes little computational time for two-dimensional meshes, but a considerable amount of time for three-dimensional ones. For simple and limited changes in the geometry, the user may change the geometry and remesh through the software. In optimization problems, however, thousands of slight changes to the geometry are needed to test its behavior, so remeshing becomes infeasible.
To address this issue, the concept of mesh deformation is used: instead of remeshing through the software, the mesh is analyzed nodewise, and a function relating each node's distance to the analysis geometry is applied to displace the nodes. Several mesh deformation strategies exist to address problems such as loss of mesh quality under extreme deformations.
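The nodewise idea can be sketched as below: each node follows the displacement of its nearest boundary point, scaled by a smooth falloff with distance. This is a minimal illustration of the concept under assumed names, not one of the production deformation schemes:

```python
import math

def deform_nodes(nodes, boundary_disp, radius):
    """Displace interior mesh nodes by the motion of the nearest boundary
    point, scaled so nodes on the surface follow it exactly and nodes
    farther than `radius` stay fixed.

    nodes: list of (x, y); boundary_disp: list of ((x, y), (dx, dy))."""
    deformed = []
    for x, y in nodes:
        (bx, by), (ux, uy) = min(
            boundary_disp,
            key=lambda bd: math.hypot(x - bd[0][0], y - bd[0][1]),
        )
        d = math.hypot(x - bx, y - by)
        s = max(0.0, 1.0 - d / radius) ** 2    # 1 on the surface, 0 far away
        deformed.append((x + s * ux, y + s * uy))
    return deformed
```

Real schemes (e.g. radial-basis-function interpolation) refine the falloff to preserve cell quality near the wall.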
RANS
The Navier-Stokes equations are the differential equations that govern the behavior of viscous flow. Since they cannot generally be solved exactly in a continuous domain, CFD software uses the concept of Reynolds averaging, decomposing the instantaneous (exact) variables of the Navier-Stokes equations into a mean (time-averaged) component and a fluctuating component; for example, the velocity 𝑢𝑖 = 𝑢̅𝑖 + 𝑢𝑖′. Substituting this relation into the exact equations, and representing all velocities as time-averaged, the equations become:
$$\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x_i}(\rho u_i) = 0$$

$$\frac{\partial}{\partial t}(\rho u_i) + \frac{\partial}{\partial x_j}(\rho u_i u_j) = -\frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j}\left[ \mu \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\,\delta_{ij}\,\frac{\partial u_l}{\partial x_l} \right) \right] + \frac{\partial}{\partial x_j}\left( -\rho\, \overline{u_i' u_j'} \right)$$

Equation 8: Reynolds-averaged Navier-Stokes equations
Software packages used to solve this kind of equation are called RANS solvers. In this paper, the RANS solver of choice was ANSYS Fluent.
For a complete description of the equations' usage and terms, refer to the ANSYS Inc. documentation [7].
Spalart-Allmaras Turbulence Model

$$\frac{\partial}{\partial t}(\rho \tilde{\nu}) + \frac{\partial}{\partial x_i}(\rho \tilde{\nu} u_i) = G_\nu + \frac{1}{\sigma_{\tilde{\nu}}}\left[ \frac{\partial}{\partial x_j}\left\{ (\mu + \rho\tilde{\nu})\, \frac{\partial \tilde{\nu}}{\partial x_j} \right\} + C_{b2}\, \rho \left( \frac{\partial \tilde{\nu}}{\partial x_j} \right)^{2} \right] - Y_\nu + S_{\tilde{\nu}}$$

Equation 9: Spalart-Allmaras model transport equation
The model yields smooth laminar-turbulent transition at points of interest, and it has been shown to provide good results for a variety of test cases proposed by its authors, including high-lift airfoils and transonic wings.
For these computational advantages, and given the absence of complex turbulence activity expected in the cases studied in this paper, the Spalart-Allmaras turbulence model will be used for all CFD analyses.
More information about the model's usage can be found in the original work by Spalart and Allmaras [8].
Algorithm Study
In this section, the part of the paper concerning the optimization algorithm is presented: the whole development process, the results of its tests, the debugging and changes made along the way, and finally the resulting algorithm.
The first step in the algorithm study was to determine the kind of optimization algorithm that would prove most suitable for the problems in question. In the early stages of the project, the gradient methods were clearly preferred over derivative-free methods (such as genetic algorithms), since they converge much faster, a considerable advantage when working with aerodynamic calculations, where the computational cost of each evaluation drives up the total cost of the optimization.
With the class of gradient methods in mind, further reading on the topic, especially the Stanford University AA222 lectures, in which several kinds of gradient methods are compared, led to Quasi-Newton algorithms being chosen as the method of preference [4]. Conjugate-gradient methods were briefly considered and tested according to past research [8], but previous results showed a clear superiority of the Quasi-Newton algorithms in terms of convergence rates. Out of the many Quasi-Newton algorithms, further reading led to the selection of the BFGS algorithm (discovered by Broyden, Fletcher, Goldfarb and Shanno) as the final choice, as its advantages are laid out clearly in texts such as Numerical Optimization by Nocedal and Wright [5]. These advantages include fast convergence, low computational cost, and robustness when dealing with an extremely large number of variables.
Algorithm Development
As the BFGS algorithm is originally single-objective in nature, early in the development stage of the project the novel stochastic weighting variable approach was considered as a way to adapt it for multi-objective problems. Naturally, the final algorithm would still perform as well as the original BFGS algorithm within the scope of single-objective optimization.
The idea behind this novel approach comes from the fact that gradient algorithms are known for
not spreading the solutions evenly throughout the Pareto front when optimizing for multi-objective
problems, especially moving the results towards extreme points when the Pareto front is non-convex.
This problem still persists if random variables are assigned to weight the objective functions at the
first iteration, as shown in the following diagram from an optimization algorithm trade-off study made
by Obayashi et al., in points A and B [9]:
Graduation Thesis Report Sheet
Furthermore, gradient methods might stall when optimizing an initially weighted set of objective functions, as shown at point C, because of the complexity of the objective functions' distribution.
Since this behavior appears for every combination of weights assigned to the objective functions at initialization, the new suggestion is to assign fresh random weights at every iteration. Applying this method in the optimization algorithm is expected to spread the solutions in different directions at each iteration, at the cost of a slightly reduced convergence rate caused by the constant change of parameters.
The first step-size approach tried proved both extremely inaccurate, leading to severe cases of divergence, and computationally ineffective.
A method used in an algorithm from a paper by Povalej [10] suggests that the step size 𝛼 be the largest number in the interval (0, 1] that reduces the function, instead of a minimizing step; the same research suggests that 𝛼 be taken from the set ℓ = {1⁄2ⁿ ; 𝑛 = 0, 1, 2, … }. To test the validity of the step size, a condition presented by Nocedal and Wright, called the Armijo condition, was used. The Armijo condition states that a step size 𝛼 providing an acceptable decrease of the objective function must satisfy the inequality:
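In the notation of Nocedal and Wright, for current iterate $x_k$, descent direction $p_k$ and constant $c_1 \in (0, 1)$, the sufficient-decrease inequality (Equation 10) is:

```latex
f(x_k + \alpha\, p_k) \le f(x_k) + c_1\, \alpha\, \nabla f_k^{T} p_k
```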
The value of 𝑐1 suggested for the Armijo condition in Quasi-Newton algorithms is 0.1, which was the value used in the script. Using the Armijo condition together with the step-size rule suggested by Povalej proved successful in terms of convergence, robustness, and calculation times.
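The combined rule, halving the step until the Armijo condition holds, can be sketched as below. This is an illustrative Python snippet with hypothetical names, not the thesis's MATLAB implementation:

```python
def povalej_step(f, grad, x, p, c1=0.1):
    """Return the largest alpha in {1, 1/2, 1/4, ...} satisfying the
    Armijo condition f(x + alpha*p) <= f(x) + c1*alpha*grad(x)'p,
    with c1 = 0.1 as used in the thesis."""
    fx = f(x)
    slope = sum(gi * pi for gi, pi in zip(grad(x), p))  # directional derivative
    alpha = 1.0
    while f([xi + alpha * pi for xi, pi in zip(x, p)]) > fx + c1 * alpha * slope:
        alpha *= 0.5
        if alpha < 1e-12:   # safeguard: p was not a usable descent direction
            return 0.0
    return alpha
```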
Another point considered in this first implementation of the BFGS algorithm was the limitation of the search space. The need to keep the solution within the bounding box provided in the function call comes from the fact that some initial guesses can cause some optimization functions to diverge quickly. To deal with this, the following approach was implemented: when the step increment is added to the current value of the optimization variables, the program checks whether the increment would exceed the bounding limits; if it would, the variable is advanced in the exceeding direction by a value lower than its remaining distance to the bound.
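The bound-limiting step described above can be sketched as below; the `backoff` fraction is an assumed value, since the thesis only requires the advance to be smaller than the distance to the bound (illustrative Python, not the MATLAB code):

```python
def bounded_step(x, dx, lower, upper, backoff=0.5):
    """Apply update dx to x while keeping each variable inside its bounds:
    a component that would cross a bound advances only a fraction of its
    remaining distance to that bound instead."""
    out = []
    for xi, di, lo, hi in zip(x, dx, lower, upper):
        cand = xi + di
        if cand > hi:
            cand = xi + backoff * (hi - xi)   # stop short of the upper bound
        elif cand < lo:
            cand = xi - backoff * (xi - lo)   # stop short of the lower bound
        out.append(cand)
    return out
```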
Implementing these two approaches, the step size and the limit bounding box, the code for this first
attempt was finalized and tested, and it is presented in Appendix 1: BFGS2.m.
To test the algorithm, various common optimization test functions were used and successfully optimized. Furthermore, since this first algorithm is essentially the original single-objective BFGS algorithm, without any modification for MOOPs, in theory it should give the same results as MATLAB's fminunc function. Tests showed that similar initial guesses lead to the same final optimized results for both the coded BFGS2.m script and fminunc, and that the initial guesses causing divergence are the same for both.
A test of this first implementation can be seen in the figure below, in which the BFGS2.m code was used to optimize 50 random initial guesses for Rosenbrock's banana function, a function commonly used to test optimization algorithms. Rosenbrock's banana function is essentially a parabola-shaped valley. Finding the valley is easy for any algorithm, since the valley walls are very steep, but the flatness of the valley floor makes it hard for algorithms to find its minimum point, located at [1, 1]. The bounding box of the optimization problem is the interval [−2, 2], a common search space for this function. The optimization results are presented in Figure 4:
From the figure above, it can be seen that the results quickly converge from the exponentially increasing region of the valley, with values near 100, to the valley floor, with values closer to 0. For most guesses, the algorithm converges to more than 8 digits of precision in fewer than 30 iterations, which can be considered a very satisfactory result.
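The same experiment is easy to reproduce with an off-the-shelf single-objective BFGS implementation (a Python sketch using SciPy, standing in for the BFGS2.m/fminunc comparison; the random seed and sample count are arbitrary choices):

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    # Rosenbrock's banana function; global minimum f = 0 at [1, 1]
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

rng = np.random.default_rng(0)
guesses = rng.uniform(-2.0, 2.0, size=(50, 2))   # 50 random starts in [-2, 2]^2

results = [minimize(rosenbrock, x0, method='BFGS') for x0 in guesses]
best = min(results, key=lambda r: r.fun)          # best of the 50 runs
```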
𝑓𝑚𝑒𝑎𝑛 = (𝜆1𝑓1 + 𝜆2𝑓2 + ⋯ + 𝜆𝑛𝑓𝑛) / (𝜆1 + 𝜆2 + ⋯ + 𝜆𝑛)
Equation 11: Weighted Sum of the Objective Functions
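A minimal sketch of the scalarization in Equation 11 with stochastic weights (Python for illustration; in this work the weights are redrawn for each individual and iteration, and the helper name is an assumption):

```python
import numpy as np

def stochastic_weighted_sum(objectives, x, rng):
    """Combine several objective functions into one scalar objective
    using random weights normalized to sum to one (Equation 11)."""
    lam = rng.random(len(objectives))   # fresh random weights lambda_i
    lam = lam / lam.sum()               # normalize so sum(lam) == 1
    return sum(l * f(x) for l, f in zip(lam, objectives))
```

Because the weights are redrawn, each individual descends a slightly different scalarized objective, which is what spreads the solutions along the Pareto front.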
By implementing the weighted sum of objectives, the original BFGS algorithm can be made into a
multi-objective optimizer, presented in Algorithm 3:
Translating Algorithm 3 into a MATLAB script allowed the first multi-objective tests to be made. For this first set of MOOP tests, the ZDT functions 1 to 4 were used. Although these test functions are scalable to up to 30 variables, for ease of representation the tests were carried out with a three-variable implementation of the functions. The number of individuals in the initial population was set very high for the first tests, to better visualize whether the implementation of the stochastic variable weighted sum would successfully spread the results as expected. The results of these first tests are shown in Figure 5 to Figure 8, where red dots show the initial population, grey lines show the path described by the solutions up to the final iteration, and blue stars show the final optimized result. The Pareto front is shown as a black dotted line:
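For reference, the ZDT benchmark functions used in these tests can be written compactly as follows (a Python sketch of the standard definitions; the tests here use n = 3 variables):

```python
import numpy as np

def zdt(x, variant=1):
    """Standard two-objective ZDT test functions 1-4; returns (f1, f2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    f1 = x[0]
    if variant == 4:
        # multimodal g: many local ('false') Pareto fronts
        g = 1 + 10 * (n - 1) + np.sum(x[1:]**2 - 10 * np.cos(4 * np.pi * x[1:]))
    else:
        g = 1 + 9 * np.sum(x[1:]) / (n - 1)
    if variant == 2:
        h = 1 - (f1 / g)**2                                            # non-convex front
    elif variant == 3:
        h = 1 - np.sqrt(f1 / g) - (f1 / g) * np.sin(10 * np.pi * f1)   # discontinuous front
    else:                                                              # ZDT 1 and ZDT 4
        h = 1 - np.sqrt(f1 / g)
    return f1, g * h
```

On the Pareto set (x2 = x3 = 0, so g = 1), ZDT 1 gives f2 = 1 − √f1.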
Figure 5 shows the optimization of the ZDT function 1, which is the easiest to optimize. Using 200 individuals per variable produced a clear distribution of the non-dominated solutions along the Pareto front, and convergence also seems decent, especially at the points closer to the (0.15, 0.6) area of the graph.
Figure 6 shows the optimization of the ZDT function 2. This function is naturally harder to optimize, due to its non-convex Pareto front, which led to the use of 400 individuals per variable. Careful analysis of the results reveals the phenomenon described by point C of Figure 3, also mentioned by Obayashi et al., in which the gradient algorithm stalls the solution somewhere along the plot, due to the complexity of the objective space. However, the behavior shown at points A and B of Figure 3 is not observed, which means the use of the stochastic variable weights did successfully prevent the solutions from clumping at the extreme points (0, 1) and (1, 0).
Figure 7 shows the optimization of the ZDT function 3. This function is notably hard to optimize, and especially hard to cover with a well-distributed Pareto front, due to the non-uniform nature of the front. Here the algorithm showed fairly satisfactory performance: even though the final results are not perfectly distributed along the optimal front, they are well distributed along the five non-uniform portions of the front. Furthermore, taking into consideration the initial guesses, represented by the red dots, and how much the solutions 'moved around' during the optimization process, it can be argued that the use of the stochastic variables helped to spread the solutions while still keeping the optimization process effective.
Figure 8 shows the optimization of the ZDT function 4, which differs from the others in the nature of its Pareto front. Its creators call it 'multimodal', meaning its objective space has many 'false' Pareto fronts that trick the algorithm into stalling on them; only the front highlighted by the black dotted line, the same front as ZDT 1, is the true optimal front (the false fronts can be seen in the many layers formed by the final solutions). On this function, the algorithm also showed satisfactory performance. Unlike for the other functions, the initial guesses for ZDT 4 usually lie much further away from the front, as can be seen from the lack of red dots in the figure. Also, even though not many final optimized results sit directly on the Pareto front, careful analysis shows that intermediate solutions passed through the Pareto front before ending up at other points.
The results from this first round of algorithm tests show that the implementation of the stochastic variable weights successfully prevented the negative 'clumping' behavior usually associated with using gradient algorithms to solve MOOPs. The results are, indeed, inexact, which suggests Algorithm 3 needs further improvement, but its basic suitability for multi-objective optimization is confirmed. Moreover, since each optimized result is directly related to its initial guess, some fully optimized individuals may not be as close to the Pareto front as the intermediate results from the computations of other individuals. This phenomenon can be seen clearly in Figure 8, where gray lines pass directly over the Pareto front, meaning that points lay there during the optimization process, although only a small number of the final optimized results actually remain there.
With these considerations in mind, the concept of elitism was applied to the algorithm, allowing a 'survival of the fittest' selection over the results. To implement elitism in the already developed algorithm, a non-dominated front search function was created in MATLAB. First, a second iteration metric, called a generation, was introduced, so that the algorithm separates the results by iteration and generation. A predefined number of iterations composes each generation, and at the end of these iterations a non-dominated front search function categorizes the best results into a set of non-dominated fronts, that is, sets of solutions that no other solution is directly better than. After enough elements have been categorized into non-dominated fronts, the algorithm repeats the process, optimizing the next generation, until the convergence conditions are met or a predefined number of generations has been computed. It is important to emphasize that this generation approach requires that:
1. A whole set of solutions is partially optimized, and all their initial, intermediate, and final values are saved (in contrast to completely optimizing them one by one);
2. The inverse Hessian approximation is reset at the end of each generation.
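This generation structure can be sketched compactly (Python for illustration; bfgs_iterate and select_elite stand in for the BFGS step and the non-dominated front selection, and are assumptions, not the thesis code):

```python
import numpy as np

def optimize_population(population, n_generations, n_iterations,
                        bfgs_iterate, select_elite):
    """Generation loop as described in the text: every individual is advanced
    a fixed number of BFGS iterations per generation, all intermediate points
    are kept (item 1), and the inverse Hessian approximation is reset to the
    identity at the start of each generation (item 2)."""
    n_vars = population.shape[1]
    history = []
    for gen in range(n_generations):
        # item 2: reset the inverse Hessian approximation each generation
        H = [np.eye(n_vars) for _ in population]
        for it in range(n_iterations):
            for k, x in enumerate(population):
                population[k], H[k] = bfgs_iterate(x, H[k])
                history.append(population[k].copy())   # item 1: keep all values
        # elitism: the non-dominated front search selects the next generation
        population = select_elite(population)
    return population, history
```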
Item 2 has both positive and negative effects on the final optimized results: it may cause some elements to be optimized more slowly, since the Hessian approximation gives a better search direction at each iteration; in contrast, it may also allow elements that are 'stuck' at complex points of the objective space to move closer to the optimized solution.
The approach used to carry out the non-dominated search can be seen in Algorithm 4:
…
Update 𝑃𝑛 ;
Update front number 𝐹𝑛 with a list of all the non-dominated points and their respective
function values.
Update the list of checked points with the points that have been assigned to front 𝐹𝑛 ;
𝐹𝑛+1 = 𝐹𝑛 + 1;
end (while)
Algorithm 5: Elitist Multi-Objective BFGS Algorithm with Variable Stochastic Weighted Sum
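The dominance test and the successive extraction of fronts can be sketched as follows (an O(n²)-per-front Python illustration of the idea, not the MATLAB implementation of Appendix 2):

```python
def dominates(a, b):
    """True if objective vector a dominates b: no worse in every objective
    and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Split a list of objective vectors into successive non-dominated
    fronts, mirroring the while loop of the algorithm above: find the
    points no remaining point dominates, record them as a front, mark
    them as checked, and repeat."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```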
Implementing Algorithm 5 as a MATLAB script, by adapting the previously coded script to include the non-dominated front search, allowed a more robust analysis of the optimization method. Again, the same ZDT functions 1 to 4 were used to benchmark the algorithm, with the results given below in Figure 9 to Figure 12. The functions are scalable to any number of variables from 1 to 30; although preliminary tests were performed with 10 variables, the following tests used only 3 variables, for reasons of computational cost. It should be highlighted that the preliminary 10-variable tests showed essentially no difference from the 3-variable tests below, so they are omitted for brevity.
Each one of the ZDT functions was optimized with 5 generations of 5 iterations each. The final
front of each generation is shown in dots, with lighter dots showing the first iterations, and darker
dots showing final ones. The last optimized results, which also form the optimized front found by the
algorithm, are shown in black crosses:
First of all, from Figure 9, the Pareto front of ZDT 1 can be seen to be exceptionally clear. An initial population of 300 individuals per variable produced a very satisfactory result: the points are almost uniformly spaced and well distributed along the front, and the found Pareto front is not only coincident with the theoretical front, but also has a smooth, well-defined shape.
The optimization of ZDT 2 required a larger initial population, since the function is naturally harder to optimize due to its non-convex Pareto front. Nevertheless, increasing the initial population to 400 individuals per variable produced a clear and accurate final representation of the Pareto optimal front. The result is arguably satisfactory, since the shape of the front is very clear and coincident with the theoretical front, even considering some occasional discontinuities in the shape. The distribution of the points along the curve is also arguably uniform, which shows the efficacy of the stochastic variable weighted sum approach. Furthermore, judging by the quality of the front obtained in Figure 10, good results can be expected for practical Pareto search problems whose fronts share the same characteristics, regardless of occasional discontinuities in the final results.
The optimization of the ZDT 3 function could be carried out with the same lower value of 300 individuals per variable used for ZDT 1, and presented very satisfactory results as well. The distribution along the fronts seems fairly uniform, with results slightly denser on the lower portions of each of the discrete fronts, as expected from the more curved convex profile of the function in those areas. The aspect that makes ZDT 3 hard to optimize, the discontinuity of the Pareto front, is handled impeccably by the algorithm, with the final results presenting each of the separate fronts clearly.
Finally, the optimization of the ZDT function 4 can be seen to be excellent, presenting a final set of optimized points lying continuously and smoothly along the global Pareto optimal front, which suggests the algorithm performs very well on multimodal problems (problems with many local Pareto fronts but only one global Pareto front). The same initial population of 300 individuals per variable was used, again proving more than enough to resolve the Pareto front.
Accepting the results of Figure 9 to Figure 12 as validating a successful implementation of a multi-objective optimization algorithm, the research proceeded with Algorithm 5 as the final baseline algorithm for all subsequent analyses.
The MATLAB implementation of Algorithm 5 is given in Appendix 3: Final Multi-Objective Algorithm. The presented version of the script is the same one used to generate the final plots of the ZDT functions, using 2 objective functions and 3 variables. However, as stated in the appendix, the code can easily be modified to include any number of variables and objective functions, as will be shown in the next section.
To summarize the algorithm development, the following points can be stated:
• The implementation of the novel approach of Variable Stochastic Weighted Sum to adapt the
BFGS algorithm into a multi-objective optimizer proved to be successful;
• The use of elitism, through a non-dominated front search function, significantly improved the final results of the optimization;
Once the final algorithm (the Elitist Multi-Objective Optimizer) was validated through the tests presented in the previous section, further work on its implementation was started. Since the algorithm was already validated, and the work presented in this section was carried out only to evaluate its performance in several aspects, it was performed in parallel with the subsequent work presented in the Software Interfacing and Airfoil Geometry Parametrization sections.
𝒫3 → 𝑚𝑖𝑛𝑖𝑚𝑖𝑧𝑒 { 𝑓1(𝑿) = 𝑥1 ;  𝑓2(𝑿) = 𝑥2 ;  𝑓3(𝑿) = (1 + 𝑔(𝑿)) × ℎ(𝑿) }

𝑠𝑢𝑏𝑗𝑒𝑐𝑡 𝑡𝑜  𝑔(𝑿) = 1 + 𝑥3/20 ,  ℎ(𝑿) = 6 − ∑ᵢ₌₁² (2𝑥𝑖 + sin 3𝜋𝑥𝑖)

Equation 12: Adapted DTLZ Test Problem 𝒫3
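A direct transcription of the objectives above, useful for checking values (a Python sketch; minimization is over X = (x1, x2, x3)):

```python
import numpy as np

def p3_objectives(x):
    """Objectives of the adapted DTLZ problem P3 (Equation 12):
    f1 = x1, f2 = x2, f3 = (1 + g) * h, with
    g = 1 + x3/20 and h = 6 - sum_{i=1}^{2} (2*x_i + sin(3*pi*x_i))."""
    f1, f2 = x[0], x[1]
    g = 1 + x[2] / 20.0
    h = 6 - sum(2 * x[i] + np.sin(3 * np.pi * x[i]) for i in range(2))
    f3 = (1 + g) * h
    return f1, f2, f3
```

For example, at X = (0, 0, 0): g = 1, h = 6, so f3 = 12.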
The proposed problem is adapted from the DTLZ function 6: the original function, although scalable, calls for more than 20 variables and an extra step to normalize the magnitude of the vector formed by the variables. Since this test does not need to be that mathematically demanding, the problem was adapted using the guidelines in the same paper, leading to Equation 12 above. The shape of the theoretical Pareto optimal front of problem 𝒫3 is shown below in Figure 13:
The problem was formulated as above to produce a Pareto optimal front similar to that of the ZDT problem 3, seen in Figure 11, in which several non-continuous convex patches form the front due to the sinusoidal term in the function. This kind of Pareto front was considered a good test shape, since it exercises several of the algorithm's abilities at once while optimizing a single problem.
To optimize this problem, Equation 12 was implemented in the MATLAB script containing the final algorithm, as presented in Appendix 3. To optimize the 3 objective functions, several different densities of individuals per variable were tested. The result presented here is the final optimization of a set of 900 individuals, i.e., 300 individuals per variable; this is the same density used to optimize the ZDT function 3, which is roughly 'equivalent' to the problem in Equation 12. As with its 2-objective counterparts, this population was optimized through 5 generations of 5 iterations each.
The results of this optimization of Equation 12 are presented below, in Figure 14:
Compared with the theoretical Pareto front presented in Figure 13, the results show that the points in the solution accurately describe the four discrete patches comprising the Pareto front. Judging qualitatively by their shape, the results appear satisfactorily converged to the actual Pareto front, since the four patches follow the sinusoidal shape of the theoretical front.
A detailed view of each of the individual Pareto fronts is shown below, comparing the final optimized solution against the theoretical Pareto front:
Judging from Figure 15, all the nodes show a satisfactory approximation of the Pareto front; all but the first, which is the highest node, show seemingly exact convergence. The poorer convergence at the top node is likely due to the node's higher location, and could be improved by increasing the number of iterations and/or generations.
As for the shape of the Pareto front formed by the final results, the performance needs further improvement. However, as seen in earlier tests (not presented here for brevity), the shape of the Pareto front becomes more apparent, clear, and smooth as the number of individuals in the first generation increases. That the 3-objective optimization shows a less clear Pareto front than its 2-objective counterpart for the same number of individuals is somewhat intuitive, since there is an extra dimension for the results to scatter into. Indeed, the result in Figure 14, with 300 individuals per variable, was accepted as satisfactory after trials with 30, 50, and 100 individuals per variable, each of which showed a more defined and uniform Pareto front with every increase in the number of individuals. From this, the following conjecture can be stated:
➢ Algorithm 5 can find any point on the Pareto front, provided that sufficiently many individuals are initialized for optimization;
Another point that deserves emphasis is that the solutions still tend to converge and accumulate in the lower, more curved and convex portions of the objective space. This is exactly the phenomenon that inspired the novel Stochastic Variable Weighted Sum approach, which proved to mitigate it to some extent. The phenomenon occurs because more curved, convex areas naturally attract solutions, while less curved, non-convex regions of the objective space tend to spread the results around, much as a ball rolls to the bottom of a convex surface and away from a non-convex one. Tests showed that this tendency is somewhat reduced when one of the objective functions is multiplied by a factor in the interval (0, 1). However, further analysis of how much this mitigates the tendency, and how it affects the rest of the optimization (such as the convergence rate), is left as a topic for further study.
Finally, to conclude the section on algorithm development, it should be clarified that the convergence rate and computational cost of the algorithm will not be studied in detail in this paper. The convergence appears acceptable, judging from the results obtained on the standard test functions (ZDT and DTLZ) compared with the results presented in the papers where these problems were first introduced, and the general convergence rate (obtaining results in around 25 iterations in total) is significantly faster than that of other kinds of algorithms that produce good Pareto fronts (which mostly need more than 500 iterations, and often around 10,000 iterations for more complicated problems). Computational cost is also not considered, since this multi-objective optimizer is aimed entirely at aerodynamic shape problems, which will be evaluated with the RANS equations, whose computation time vastly exceeds that of the algorithm itself. Since very satisfactory solutions can be obtained with fewer iterations, meaning fewer RANS calculations, the convergence rate and computational cost are considered acceptable.
Software Interfacing
This section presents the portion of the work concerned with organizing the software used into a concise, integrated interface to carry out the proposed task of aerodynamic shape optimization. With the algorithm complete, a number of considerations about its practical application led to the work presented here.
MATLAB
MATLAB is a multidisciplinary programming language and environment developed by MathWorks, Inc. The MATLAB environment was the cornerstone of this project, serving not only as the center for managing the data involved in the optimization problem, but also as the environment for performing all the computations.
MATLAB has extensive capabilities for many mathematical, scientific, and engineering applications, as well as an extensive library for data representation and visualization. It is usually applied to general engineering applications that involve data management, but its use was explored in much more depth in this paper:
• Due to its vast library of multidisciplinary functions, including functions for symbolic mathematics, data I/O, plotting, optimization, and scripting, it was used as the programming tool for implementing the optimization algorithm, as well as for performing benchmarking tests and displaying results;
• Due to its ability to integrate functions from external libraries and other programming languages, it was used as the main environment for the final optimization program, which requires access to, and data exchange between, several different software packages;
Pointwise
Pointwise is a meshing software package dedicated to generating highly customizable meshes for FEM and CFD, with support for several numerical solvers as well as simple geometry import capabilities. It also provides a very complete guide on creating several types of meshes for different applications, along with concise documentation.
In this paper, Pointwise was used to generate the baseline meshes for the CFD analysis. It is not operated inside the iterative optimization loop, although it does offer functionality for automating mesh generation.
ANSYS Fluent
ANSYS Fluent is a CFD solver, perhaps the most popular in the world, created and distributed by ANSYS, Inc. Fluent is a RANS solver (see Literature Review) and handles several different classes of fluid dynamics problems. Although it also supports thermal, electrical, energy-exchange, and other effects in its analyses, only its viscous flow capabilities are used in this paper: it is the tool used to evaluate the aerodynamic characteristics of the objectives being optimized, constantly receiving information from the optimization algorithm, calculating, and sending results back.
Fluent also offers a batch mode, which can compute bulk sets of problems in the background without user interaction, as well as a journaling mode, in which the user can create scripts in the software's own scripting language to be run automatically.
Perhaps the key Fluent feature used in this paper, and the main feature explored in this section, is its ability to run in server mode: Fluent can be initialized so that external software can access it over a server address, allowing data and analysis instructions to be passed to it in real time without interacting with the software directly.
When the mesh is deformed, the connectivity of faces and cells is not changed; only the positions of the nodes change. An example of this file format is shown below: the header contains information about the mesh metrics, along with some formatting rules that can be disregarded, followed by the beginning of the section where the node coordinates are declared, showing the formatting used for the first 10 of the 16,357 nodes in the file:
In addition to being easy to read, preliminary tests also showed that altering the coordinates of the points does not affect Fluent's ability to read the mesh, even if the geometry becomes structurally impossible. This means that if a method of mesh deformation can be implemented in MATLAB, Fluent will not raise errors for the deformed meshes. It also allows a single baseline.cas mesh to be generated in advance, with all the other meshes used in the optimization process being deformed versions of the baseline, altered in MATLAB.
To read the data from the .cas file in MATLAB, the function importdata() was used, which was found to read only the node coordinates into a variable containing a numerical array that can be easily extracted. This extraction was implemented in a MATLAB function called CASextractPoints.m. To write the updated points back to the mesh in the same format, two other MATLAB functions were written: findIndex.m and updateMesh.m. MATLAB's text-writing functions use a file position index to mark where the program should continue writing. The function findIndex.m places this index at the beginning of the point coordinate section, so that updateMesh.m can rewrite the new values (obtained after deformation) in the same formatting. The scripts for CASextractPoints.m, findIndex.m, and updateMesh.m are shown in Appendix 4: Mesh I/O Functions.
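The same skip-the-header-then-rewrite idea can be sketched in a few lines (a Python illustration; the 21-line header length and the fixed-width format string are assumptions mirroring findIndex.m and updateMesh.m, and must be checked against the actual .cas file):

```python
def update_mesh_nodes(path, new_points, header_lines=21):
    """Sketch of the mesh-update idea: keep the header of the mesh file
    intact and rewrite only the node-coordinate block, using the exact
    same fixed-width formatting so the byte layout is preserved.
    NOTE: header_lines and the format string are assumptions; the real
    header length must be located as findIndex.m does."""
    with open(path, 'r+') as f:
        for _ in range(header_lines):        # skip past the header
            f.readline()
        f.seek(f.tell())                     # position at the coordinate block
        for xx, yy in new_points:
            # same fixed-width scientific-notation layout as updateMesh.m
            f.write('\n % 20.15e % 20.15e ' % (xx, yy))
```

Writing with an identical format string is what lets the deformed file remain readable by the solver without regenerating the mesh.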
Fluent aaS
Currently, ANSYS, Inc. offers a dedicated MATLAB toolbox, the aaS Toolbox, which it recommends as the method of choice for interfacing MATLAB and Fluent. However, since this toolbox is only available to commercial clients of ANSYS software, its use was discarded, and the native aaS mode of Fluent was preferred instead.
The ANSYS Fluent aaS [7] mode allows connection to and control of Fluent sessions remotely. It works by creating a built-in Internet Inter-ORB Protocol (IIOP) server, providing a text file with the address needed to connect to the server.
To access Fluent in server mode on Windows, it is necessary to declare FLUENT_AAS=1 as an environment variable before launching the software. Alternatively, the command fluent (options) -aas can be run in the Windows command prompt (CMD). A working folder should also be specified at launch, so that right after launching, Fluent will write a set of server files to it, including the aaS_FluentId.txt file containing the address needed for the connection. In MATLAB, the connection is then established through the .NET assembly shipped with Fluent:
NET.addAssembly('C:\Program Files\ANSYS
Inc\v180\fluent\fluent18.0.0\addons\corba\DotNetFramework40\DotNetCoFluentUnit.dll');
fluent = AAS_CORBA.DotNetCoFluentUnit;
fluent.ConnectToServerFromIorFile('aaS_FluentId.txt');
fl = fluent.getDotNetCoFluentSchemeControllerInstance;
After the fl class is created, commands can be sent to the Fluent command line through the function fl.doMenuCommand(''). The complete list of commands can be found in the ANSYS documentation [7]. This command format was used throughout the subsequent parts of the project to apply analysis parameters programmatically, in real time, from the MATLAB environment. These commands also control the flow of information to Fluent, such as loading case files, saving result data to files, and so on. A major advantage of this implementation is that MATLAB 'listens' to Fluent while it runs: for example, while iterations are being calculated in Fluent (the part of the analysis that takes the longest), MATLAB blocks and waits for Fluent to finish before continuing.
To conclude the Software Interfacing section: the interfacing of MATLAB and Fluent is done through Fluent's 'as a Server' mode, which can be accessed from MATLAB through a .NET assembly provided by ANSYS. Once the two programs are linked, they continuously exchange analysis information about a baseline mesh, generated in Pointwise, that is repeatedly altered by MATLAB and analyzed by Fluent.
The completion of this section's work into a clean, efficient control interface allows the final part of this work to be implemented, enabling MATLAB to control advanced aerodynamic analyses and thereby transforming it into a powerful aerodynamic shape optimizer.
References
Appendix
Appendix 1: BFGS2.m
The following script is the first attempt at coding the BFGS algorithm for the optimization of a single two-variable function:
Appendix 2: nonDominatedFront.m
The following script is the MATLAB implementation of the non-dominated front search. The function takes in a whole generation of optimized points. The variable result is an array of the optimized variables of the objective functions, and the variable points is an array of the optimized function values (the function evaluations of result). The script below works only for 3-variable, 2-objective problems; for different settings, the if statement and the definitions of front{} have to be altered:
numEls = size(points,3);
numPoints = size(points,2)*size(points,3);
points = reshape(points,[2,numPoints]);
result = reshape(result,[3,numPoints]);
frontPoints = 0;
frontNo = 1;
checkedPoints = zeros(1,numPoints);
% ... (the while loop performing the pairwise dominance check over the
% unchecked points, which builds the logical vector 'dominated', is
% omitted here) ...
front{2,frontNo} = ...
[(nonzeros(~dominated.*(result(1,:)+1))-1)'; ...
(nonzeros(~dominated.*(result(2,:)+1))-1)'; ...
(nonzeros(~dominated.*(result(3,:)+1))-1)'];
frontNo = frontNo + 1;
end
nVars = 3;
X = sym('x',[1 nVars]);
f1(X) = 1-exp(-4*X(1)).*sin(6*pi*X(1)).^6;
g(X) = 1 + 9*(sum(X(2:end))/2).^0.25;
h = 1-(f1./g).^2;
f2(X) = g.*h;
llim = 0;
ulim = 1;
nPop0 = nVars*300;
nIt = 5;
H = zeros(nVars,nVars,nPop0);
figure
hold on
for genNum = 1:5
result = zeros(nVars,nIt, nPop0);
for n = 1:nPop0
H(:,:,n) = eye(nVars);
lambda = rand(2,1);
fMean = lambda(1)/sum(lambda)*f1 + lambda(2)/sum(lambda)*f2;
g0 = gradient(fMean);
gVal = zeros(nVars,nIt);
N = 0;
alpha = 0.1;
while (double(phi(alpha)) > ...
double(fMean(x(1,n), x(2,n), x(3,n)) + 0.1*alpha*gVal(:,1)'*p))
N = N+1;
alpha = 0.1/2^N;
end
s = p*alpha;
xNew = x(:,n) + s;
y = double(g0(xNew(1),xNew(2), xNew(3))) - gVal(:,1);
rho = 1/(y'*s);
Hold = H(:,:,n);
for m = 2:nIt
result(:,m,n) = xNew;
lambda = rand(2,1);
fMean = lambda(1)/sum(lambda)*f1 + lambda(2)/sum(lambda)*f2;
g0 = gradient(fMean);
if max(isnan(xNew+s))
for ii = 1:nVars
result(ii,m:end, n) = xNew(ii,1);
end
break
end
xNew = xNew + s;
y = double(g0(xNew(1),xNew(2), xNew(3))) - gVal(:,m);
rho = 1/(y'*s);
end
front = nonDominatedFront(result,points);
x = cat(2,front{2,:});
for nn = 1:size(front,2)
scatter(front{1,nn}(1,:),front{1,nn}(2,:), 'MarkerEdgeColor', ...
    (5-genNum)/5*[1 1 1], 'Marker','.')
end
end
grid on
scatter(front{1,1}(1,:),front{1,1}(2,:), 'MarkerEdgeColor', [0 0 0], 'Marker','o')
CASextractPoints.m
function points = CASextractPoints(filename)
importedData = importdata(filename);
points = importedData.data;
end
findIndex.m
function index = findIndex(fileID)
frewind(fileID);
for n = 1:21
fgetl(fileID);
end
index = ftell(fileID);
end
updateMesh.m
function updateMesh(fileID, coordinateIndex, newPoints)
fseek(fileID,coordinateIndex-1,'bof');
fprintf(fileID, '\n % 20.15e % 20.15e ', newPoints);
end