
Part 11: Optimisation

Table of contents:
Chapter I - General information
  I.A. Introduction
  I.B. Applications within the forging process
  I.C. Principle and definitions
Chapter II - Description of the algorithm
  II.A. Mono-objective algorithm
  II.B. Evolutionary algorithm
  II.C. Notion of the kriging metamodel
  II.D. Operation of the genetic algorithm
  II.E. Initialisation of the population
  II.F. Computation of a generation
Chapter III - Optimisation strategies
  III.A. Initialisation by experimental design
  III.B. Discrete optimisation
Chapter IV - Example: optimisation of a billet volume
Bibliography

Part 11: Optimisation -1- Transvalor


CHAPTER I - GENERAL INFORMATION

I.A. Introduction
Automatic optimisation is a new approach to problem resolution and process enhancement in material forming. It automates the long and repetitive work of data setup and parameter selection. Formerly, the user would launch a series of computations, progressively modifying the data setup according to the results obtained from each simulation. With the automatic optimisation feature, the user directly submits their optimisation problem to the software by choosing the criteria to be improved, the constraints to be satisfied, and the variables to be modified. The application automatically launches a series of simulations with a judicious choice of parameters, with the goal of finding an optimal solution. The work required of the user is simply to formalise the objectives, the imposed constraints, and the parameters that can vary. In this document, "minimisables" are the criteria to improve, "constraints" the constraints to satisfy, and "parametered actions" the process variables to modify.

To resolve a variety of optimisation problems, an algorithm based on genetic methods was chosen to respond effectively and robustly to a wide range of situations. It is also possible to limit variable parameters to a series of imposed values (discrete optimisation) and to use a defined experimental design for the simulations.

I.B. Applications within the forging process


The ultimate goal of an optimisation problem applied to a forging process could be to reduce production costs or to improve the quality of the manufactured part. With this in mind, we will look, for example, at reducing the weight of a blocker while taking filling constraints into account, minimising loads, obtaining precise shapes, improving material quality, etc. Variable parameters can be tied to part or die geometries, to kinematics, or to rheological laws. Likewise, it is possible to look at reducing the temperature in the part by modifying lubrication conditions or process speed.

A first example is to minimise the weight/volume of the blocker in a forging context. The
constraints are: die completely filled and no folds. The parameters that can vary are: billet
diameter and height. The user can choose the amount and types of parameter they want to
vary in order to optimise the process. They can also choose to vary billet diameter by
imposing a series of defined values that correspond to their supplier's diameters for example.

In the optimisation example provided in Figure 1, a 1.4 kg savings of material per forged part
was accomplished. This gain also reduced the maximum tonnage required to forge the part
and, as a result, made it possible to use a smaller size press. Figure 1 draws attention to the
fact that the flash of the part has shrunk while maintaining the same filling rate.



                 Before optimisation    After optimisation
Weight:          8.9 kg                 7.5 kg
Forging force:   2800 t                 2000 t

Figure 1: Example of a part before and after optimisation

A second example could be to minimise part temperature. The constraint is the maximum
force of the press. The parameters that can vary are the lubrication conditions and press speed.

I.C. Principle and definitions


The principle of optimisation is to minimise the value of a quantity called the "cost function". This cost function is evaluated based on the values of the minimisables. A simulation is performed using a traditional data setup. Among its parameters, some are designated as parametered actions for the optimisation and are defined together with the limits they must observe. In addition, simulation results are subject to certain constraints that must be satisfied. At the end of the simulation, the cost function is evaluated and used to determine whether the chosen set of parameters provides an acceptable solution. If it does not, another set of parameters must be chosen, the simulation run again, the function evaluated again, and so on. If done manually, this work is repetitive and error-prone, often limited to a first approximation and, though it may produce an acceptable result, it will not be optimal.

Automatic optimisation makes it possible to choose the pertinent parametered actions to test by having them vary within set limits and according to previously obtained results. It also verifies whether constraints are satisfied, evaluates the cost function, and automates the conduct of computations to minimise it. The mono-objective optimisation algorithm chosen is based on an evolution strategy coupled with a metamodel.



CHAPTER II - DESCRIPTION OF THE ALGORITHM

II.A. Mono-objective algorithm


The algorithm used is mono-objective, which means it searches for the optimal solution for a
given objective, in other words, for a minimisable. It can however happen that the problem
needs to meet several objectives, some of which can be completely unrelated to each other
and be optimal when parameter sets lean towards opposite values. If for example the problem
needs to meet two antagonistic objectives, two mono-objective optimisations will give a result
for each that will be optimal for one of the minimisables but will have a tendency to
deteriorate the second. For a problem such as this, the principle is actually not to search for
the optimal solution, but rather the best compromise.

To resolve this type of problem, a multi-objective algorithm is generally used. This algorithm
will provide a set of optimal solutions to form what we call a "Pareto Frontier". Depending on
the priority given to one objective or the other, it's a different point of the Pareto Frontier that
will be selected and become the optimal solution to the problem.

To find a solution to this type of problem, you can also use several minimisables with a penalised mono-objective algorithm and thereby obtain a pseudo multi-objective algorithm that provides a unique optimal solution. To do this, the cost function is obtained as a weighted sum of the different minimisables chosen. This is the type of algorithm used in our software.
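The weighted-sum, penalised cost function described above can be sketched as follows. This is an illustrative sketch only: the function name, the penalty coefficient, and the way violations are measured are assumptions, not the software's documented implementation.

```python
# Sketch of a penalised, weighted-sum cost function (illustrative names;
# the software's exact weighting scheme is not documented here).

def cost_function(minimisables, weights, constraint_violations, penalty=1e3):
    """Weighted sum of minimisables plus penalty terms for violated constraints.

    minimisables: scalar criteria to reduce (e.g. billet volume).
    weights: relative importance of each minimisable.
    constraint_violations: non-negative violation measures
        (0 when a constraint is satisfied).
    """
    objective = sum(w * m for w, m in zip(weights, minimisables))
    penalties = sum(penalty * v for v in constraint_violations)
    return objective + penalties

# A parameter set that fills the die (no violation) is compared on its
# objective alone; an unfilled die is heavily penalised.
print(cost_function([7.5], [1.0], [0.0]))   # 7.5
print(cost_function([6.8], [1.0], [0.2]))   # 206.8
```

The penalty terms dominate the objective whenever a constraint is violated, so the optimiser is steered back towards feasible parameter sets.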

II.B. Evolutionary algorithm


An algorithm based on an evolution strategy is called an evolutionary algorithm. This type of algorithm is well-adapted to the optimisation of complex problems, because it has the particularity of converging towards the global optimum and of being independent of the initial distribution of the parameter set. It does, however, need a substantial number of evaluations to converge towards the optimum. A classic minimisation algorithm, such as a descent direction algorithm, which is frequently used for optimisation, is based on the computation of the cost function gradient. Each new parameter set defined by this algorithm will produce a lower cost function value. The success of an optimisation using this type of method, however, varies drastically depending on the problem to be minimised. Moreover, it converges towards a local optimum, which makes results very dependent on the initial set of parametered actions chosen. Figure 2, for example, shows a case where, for an initial value x0 of the parametered action, the algorithm converges towards the global minimum xg. If however the initial value of the parameter is x'0, the gradient algorithm converges towards the local minimum xl. An evolutionary algorithm, which is based on a principle of exploration and exploitation, will converge more easily towards the global minimum.



Figure 2: Illustration of the search for extrema using a descent
direction method on a dual extremum cost function [Roux 2011]

II.C. Notion of the kriging metamodel


In general, the resolution of an optimisation problem by a genetic algorithm requires a substantial number of cost function evaluations (Figure 3), that is, the generation of a large number of individuals and therefore the resolution of a large number of finite element problems (when the algorithm is coupled with a finite element solver, as is the case here). This is because, in order to effectively search the solution space for the optimal solution, a random component is included in the individual generation technique. This makes the method costly in terms of cost function evaluations.

[Diagram: the optimisation algorithm submits each individual to the FEM solver for evaluation, then updates the population.]

Figure 3: Principle of the evolutionary algorithm with no metamodel

Using a metamodel greatly reduces this computation cost [Ejday 2011, Roux 2011]. The
principle is to replace the direct evaluation of an individual with an estimation using a
simplified model (the metamodel) of the problem to be solved. This metamodel is constructed
using the database of evaluated individuals. It can therefore not be constructed or used until
after the complete evaluation of the first generation of individuals. With each new generation,
the metamodel becomes more precise. This metamodel is used to get a first cost function
approximation for an individual. Based on this estimation and the error estimation given to us
by the metamodel, the individual can be selected for a complete finite element evaluation
(Figure 4). This method avoids having to compute a very large number of individuals in order to search the space: the individuals that are computed are chosen judiciously, both to improve the metamodel and to converge towards the optimum.



[Diagram: the optimisation algorithm first estimates each individual with the metamodel; selected individuals are then evaluated exactly by the FEM solver, which updates the population and improves the metamodel.]

Figure 4: Principle of the evolutionary algorithm with metamodel

There are several types of metamodel in the literature, such as neural networks, radial basis functions, and kriging. The last is the method used in our case. The objective of kriging is to provide a cost function approximation over the entire domain defined by the limits of the optimisation parameters; each optimisation parameter represents one dimension of the optimisation space. Kriging was originally developed empirically for geostatistics by D. Krige in the 1950s, then mathematically formalised in the 1960s by G. Matheron, a mathematician at the École des Mines de Paris.

The advantage of the kriging method is that, based on the exact evaluation of the individuals in the database, it provides both an approximation of the solution and an error estimation for that approximation. On computed points, the error is of course zero. The points selected for exact evaluation are chosen by both exploration and exploitation. Exploration consists of searching the areas where the error estimation of the metamodel is largest, in order to improve it. Exploitation consists of sampling the areas where the metamodel estimates the cost function to be minimal. Figure 5 illustrates this method of selecting individuals using a kriging metamodel. The choice falls on points where the minimum is most likely to be found, but also on areas where the metamodel is the least precise. This helps avoid converging towards a local minimum.

Figure 5: Selection of an individual using a kriging metamodel



II.D. Operation of the genetic algorithm
Evolutionary algorithms are inspired by Darwin's principle of natural selection, or survival of the fittest, which is why the vocabulary used is borrowed from biology. We therefore talk about populations of individuals. Each individual represents a potential solution to the optimisation problem: each set of parameters (parametered actions) corresponds to an individual. The principle is to compute the cost function of each individual; this is referred to as evaluating the population. The genetic algorithm used works as follows (Figure 6):
- Initialisation of the population: an initial population of µ individuals is generated. This first generation can be defined in several different ways, as detailed further on in this document.
- Evaluation of the initial population: each individual corresponds to a set of parametered actions and, therefore, to a simulation. All simulations are conducted.
- Creation of a population of offspring: offspring are created by applying evolution operators to parent individuals. These evolution operators are based on the principle of natural selection. The selection operator consists of selecting the best parents to create the population of offspring; as in natural selection, only the fittest survive to reproduce. The crossover operator uses two parents to obtain one or more offspring: the offspring's set of parametered actions is obtained by a linear combination of the parents' parameters. The mutation operator consists of randomly modifying certain parameters of an offspring. It is this last operator that makes it possible to explore a larger assembly of parametered action sets and avoid converging towards a local optimum.
- Selection of offspring: the number of offspring obtained after applying the evolution operators is very high. A selection must now be made to keep only µ individuals. This selection takes place in several stages, in particular using the metamodel; details are provided later on.
- Evaluation of the offspring population.
- If necessary, this population becomes the new population of parents and an additional optimisation stage is carried out.
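The steps above can be sketched as a toy implementation. This is an illustrative miniature, not the software's actual algorithm: the cost function stands in for a full FEM simulation, and the operator details (keeping the best half as breeders, convex crossover, Gaussian mutation, elitist record of the best individual) are assumptions chosen for the sketch.

```python
import random

def evaluate(x):
    return (x - 0.3) ** 2          # stand-in for a full FEM simulation

def run_ga(mu=4, n_offspring=8, generations=10, bounds=(0.0, 1.0), seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    parents = [rng.uniform(lo, hi) for _ in range(mu)]     # initialisation
    costs = [evaluate(p) for p in parents]                 # exact evaluation
    best_x = min(zip(costs, parents))[1]                   # elitist record
    for _ in range(generations):
        ranked = sorted(zip(costs, parents))
        fittest = [p for _, p in ranked[: mu // 2]]        # selection
        offspring = []
        for _ in range(n_offspring):
            a, b = rng.sample(fittest, 2)
            t = rng.random()
            child = t * a + (1.0 - t) * b                  # crossover
            if rng.random() < 0.3:                         # mutation
                child = min(hi, max(lo, child + rng.gauss(0.0, 0.1)))
            offspring.append(child)
        costs = [evaluate(c) for c in offspring]
        ranked = sorted(zip(costs, offspring))[:mu]        # keep mu fittest
        costs = [c for c, _ in ranked]                     # replacement
        parents = [p for _, p in ranked]
        if evaluate(parents[0]) < evaluate(best_x):
            best_x = parents[0]
    return best_x

print(round(run_ga(), 2))   # close to the known optimum 0.3
```

In the real setting each call to `evaluate` is a complete simulation, which is exactly why the metamodel of the previous section is inserted between offspring creation and exact evaluation.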



[Flowchart of the algorithm:
1. Generation of the initial population (µ individuals).
2. Exact evaluation of all individuals in the initial population (µ computations).
3. The population of µ parents is updated.
4. Application of the evolution operators: selection, crossover, mutation.
5. Selection of µ individuals from among the population of λ offspring obtained (kriging method), with replacement (update) of the parent population.
6. Exact evaluation of the µ individuals in the population (µ computations).
7. Stop test on the number of generations wanted: if NO, return to step 4; if YES, extraction of solutions.]

Figure 6: Genetic algorithm used



In our case, optimisation applied to forming, a few terms deserve particular attention: the minimisable, the parametered action, and the constraint.
• A minimisable is a scalar whose value the optimisation algorithm seeks to reduce. The simplest example is the reduction of billet volume. The definition of a minimisable can require additional data: coefficients for dimensionalisation, sets to define the areas covered by the scalar computation, etc. Other examples of minimisables are the force applied by the press, the torque, the minimum or maximum value of a field, etc.

• A parametered action is a sequence of commands that modifies the reference data setup so that it complies with the variations of the optimisation parameters. Examples of parametered actions include the position of an object, the values of friction law or material behaviour law parameters, a press parameter, etc. It is possible to have certain geometric parameters vary directly on the native format of the geometry by coupling the evolutionary algorithm with certain CAD programs.

• A constraint is a condition that computation results must satisfy. Constraints are also associated with data setup objects and can be defined locally using sets. They are treated as cost function penalties: the cost function is the sum of the quantity to minimise and the different penalty terms that ensure constraints are satisfied. Examples of constraints include absence of folds, die fill, a constraint on force or torque, a constraint on a field value, etc.

Here are some examples of triplets (minimisable/constraints/parametered actions) that will help distinguish between the above-mentioned notions:

Minimisable (objective)            | Constraints (obligations)            | Parametered actions (size variations)
Reduction of initial billet volume | Absence of folds and no underfilling | Variation of initial billet dimensions
Reduction of wear on the die       | No constraint                        | Variation of friction
Reduction of wear on the die       | Absence of folds and no underfilling | Variation of initial billet dimensions
Reduction of force                 | Absence of folds and no underfilling | Variation of final press height
Reduction in damage                | No constraint                        | Process conditions (geometry, friction, etc.)

In Transvalor software, the generation of the initial population uses a number of individuals µ equal to 2 * (number of parameters). The total number of generations is set by default to 10.



II.E. Initialisation of the population
In an initialisation stage (Figure 7), the algorithm uses the definition of the parametered actions (and their limits) to compute a fixed number of parameter sets (or individuals). This group of individuals is a generation. Generally, the initialisation algorithm proceeds such that the values chosen for the parameters cover the largest possible domain of validity, by choosing points as far away from each other as possible, that is, points located at the edges of the domain.

We note µ the number of individuals in a generation and Dim the number of parameters of an individual.
[Diagram: a single parameter (a position) with limits Min = 0.8 and Max = 1 produces, after initialisation, Individual 1 with Parameter 1 = 0.8 and Individual 2 with Parameter 1 = 1.]

Figure 7: Initialisation stage of the optimisation algorithm
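An initialisation that spreads individuals over the domain, placing the first points at its edges as in Figure 7, can be sketched as follows. Real initialisation schemes (Latin hypercube designs, for instance) are more sophisticated; this one-parameter version is an illustrative assumption.

```python
# Sketch of an edge-covering initialisation for a single parameter.

def initialise(bounds, mu):
    """bounds: (min, max) of one parameter; mu: number of individuals.
    Returns mu values evenly spaced from min to max, endpoints included."""
    lo, hi = bounds
    if mu == 1:
        return [(lo + hi) / 2.0]
    step = (hi - lo) / (mu - 1)
    return [lo + i * step for i in range(mu)]

# Figure 7's example: one parameter in [0.8, 1.0], two individuals.
print(initialise((0.8, 1.0), 2))   # [0.8, 1.0]
```

With two individuals the points land exactly on the limits of the domain, which matches the behaviour illustrated in Figure 7.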

II.F. Computation of a generation


This section describes how the optimisation cycle works, that is, how it runs, starting from the parent population up to the evaluation of the offspring population:

• We have the population of µ evaluated parents available. The evolution operators are applied to these parents in order to obtain a population of offspring (Figure 8). These evolution operators are selection, crossover, and mutation.

[Diagram: two evaluated parents (Individual 1: Parameter 1 = 0.825, Cost 1 = 12.099; Individual 2: Parameter 1 = 0.975, Cost 2 = 0.975) are combined to create two offspring (Individual 3: Parameter 1 = 0.958; Individual 4: Parameter 1 = 0.903).]

Figure 8: Loop of the optimisation algorithm, creation of new individuals stage

Selection consists of selecting the fittest parents on which to apply the other evolution operators; the fittest parents are those for whom the evaluated cost function is the smallest. Crossover consists of taking the parameter sets of selected parents and combining them to obtain the parameter sets of offspring. Finally, mutation consists of applying, to certain offspring, modifications of the parameters based on statistical computations. This makes it possible to search a larger range of parameter sets.



• The offspring population obtained can be very large. Consequently, µ individuals must be selected to obtain a population of acceptable size. Two methods are applied. First, individuals with parameter sets that are too close to one of the parents are eliminated; this makes it possible to search a much larger range of parameter sets, rather than focus on an area that could be a local minimum. Second, a metamodel is used to make a first evaluation of the offspring individuals, and in this way the most interesting individuals can be chosen. The metamodel used is a kriging metamodel.
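The two filters described above can be sketched in a few lines. The surrogate here is an arbitrary stand-in function, not a kriging model, and the distance threshold and data values are illustrative assumptions.

```python
# Sketch of the two offspring-selection filters: a diversity filter that
# discards children too close to an existing parent, then a surrogate
# pre-screen that keeps the mu most promising children.

def select_offspring(offspring, parents, mu, surrogate, min_dist=0.05):
    # 1) diversity filter: drop offspring nearly identical to a parent
    diverse = [c for c in offspring
               if all(abs(c - p) >= min_dist for p in parents)]
    # 2) surrogate pre-screen: keep the mu lowest estimated costs
    diverse.sort(key=surrogate)
    return diverse[:mu]

parents = [0.2, 0.8]
offspring = [0.21, 0.5, 0.6, 0.79, 0.3, 0.9]
kept = select_offspring(offspring, parents, mu=2,
                        surrogate=lambda x: (x - 0.55) ** 2)
print(sorted(kept))   # [0.5, 0.6] -- 0.21 and 0.79 were too close to parents
```

Only the individuals that survive both filters would then be sent to the expensive finite element evaluation.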

The reason for using a metamodel is simple: without a metamodel, an optimisation method using a genetic algorithm must evaluate a very substantial number of individuals in order to converge towards the optimum. This number can reach several tens of thousands. It is obvious that, in this case, the method is not effective in terms of computation cost. A metamodel is in fact a simplified model of the optimisation problem, that is, an approximate model of the cost function as a function of the optimisation parameters. This model is constructed from results already obtained. Thus, as the optimisation progresses, the metamodel becomes more precise (Figure 9).

Figure 9: Representation of the data ranges collected to form the sample frame of the metamodel.

This figure shows that our optimisation problem has numerous extrema. This explains why descent direction algorithms, such as BFGS, do not perform well in our case: they only find the optimum closest to their point of departure, in other words, a local optimum. The result heavily depends on the set of parameters used for initialisation. Evolutionary algorithms do not have this limitation; they lean towards the global minimum when the number of individuals evaluated is sufficient.

The metamodel is used to rapidly evaluate an offspring individual. If retained, this individual will be evaluated exactly by a finite element computation. The true value of the cost function obtained then feeds the metamodel, which consequently becomes more precise and makes it possible to better choose the next generation of offspring individuals. Not only does the kriging metamodel estimate the value of the cost function, it also provides an error estimation on this cost function. This allows us to choose not only individuals with parameter sets that will probably minimise the optimisation problem, but also those that will help improve the metamodel and, consequently, converge more rapidly towards the global extremum.

• The offspring population is evaluated by finite element computations (Figure 10). If the number of generations desired by the user is reached, the fittest individual gives us the solution to the optimisation problem. Otherwise, these offspring complete the parent population and also become parent individuals of the next generation; their evaluation improves the metamodel used for selecting offspring. In practice, the optimisation algorithm replaces existing parents with the offspring that obtained the best results. If a parent survives several generations (this varies according to the lifespan and user-imposed parameters), that parent is systematically replaced.

Computations can run at the same time if the number of processors allows it.

[Diagram: Individual 1 (Parameter 1 = 0.825) is evaluated, giving Cost 1 = 12.099; Individual 2 (Parameter 1 = 0.975) is evaluated, giving Cost 2 = 0.975.]

Figure 10: Computation of a generation of individuals, evaluation stage



CHAPTER III - OPTIMISATION STRATEGIES

III.A. Initialisation by experimental design


Sometimes, it is a good idea not to let the software determine the first generation of individuals, that is, the first sets of parameters, itself. In this case, it is possible to provide an experimental design as the initial generation of individuals for the optimisation.

This means we must provide the list of individuals, that is, a table with one row per individual in the experimental design. An individual is actually the list of values of each of its parameters. These values can be either the true values between the limits of variation, or values between 0 and 1 (in which case computation values are obtained by applying a proportionality law, 0 corresponding to the true minimum and 1 corresponding to the true maximum of the parameter). In this case, the optimiser will evaluate the true value by using the limits that have been defined. This second way of providing an experimental design is useful when we want to use an experimental design from the literature, or when we want to apply the same design to several different optimisations whose parameter limits can vary.

The experimental design must however obey the user settings, namely the number of individuals per generation and the limits of each of the parameters. Individuals in the design that have at least one parameter with a value outside the defined limits will be eliminated from the design and will not be evaluated. If the number of individuals in the design is smaller than the number of individuals per generation, the optimiser will determine the missing parameter sets itself. If the number of individuals is too large, the extra individuals will not be considered.

It is possible to give the optimiser a design limited to just certain parameters. Parameters that
are not restricted by the experimental design will be automatically deduced by the optimiser.

An example of an optimisation design for two parameters is provided below. The recommendation for two parameters is to have an initial generation of 2*2 = 4 individuals.

! example of Initial Design File in 2 dimensions (2 parameters)
0.5,5 ! first individual (this is a comment)
10,10 ! second individual
1, 1 ! third individual
P1,50 ! the first parameter will be determined by the optimiser

We have defined the following individuals: (0.5,5), (10,10), (1,1), and (P1 chosen by the optimiser, 50).
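A reader of such a file could be sketched as follows. This parser is illustrative only (the software's own reader is not documented here); it assumes one individual per line, comma-separated values, "!" starting a comment, and a "P<n>" entry meaning the parameter is left to the optimiser (represented below by None).

```python
# Illustrative parser for an initial design file in the format shown above.

def parse_design(text):
    individuals = []
    for line in text.splitlines():
        line = line.split("!", 1)[0].strip()   # strip "!" comments
        if not line:
            continue
        values = []
        for token in line.split(","):
            token = token.strip()
            if token.upper().startswith("P"):
                values.append(None)            # left to the optimiser
            else:
                values.append(float(token))
        individuals.append(tuple(values))
    return individuals

design = """! example of Initial Design File in 2 dimensions (2 parameters)
0.5,5 ! first individual
10,10 ! second individual
1, 1 ! third individual
P1,50 ! the first parameter will be determined by the optimiser
"""
print(parse_design(design))
# [(0.5, 5.0), (10.0, 10.0), (1.0, 1.0), (None, 50.0)]
```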



III.B. Discrete optimisation
In certain situations, parameters cannot take just any value. This is, for example, the case
for a billet diameter, which is determined by what the supplier can deliver. In this case, the
parameter is no longer continuous. It is discrete.

In a situation like this, there is no point in the optimiser proposing a parameter set that is impossible to obtain in reality. Discrete optimisation is used to take these parameters into account in the optimisation process. The list of values for each discrete parameter must be provided; for the other parameters, the value will be determined automatically by the optimiser. In practice, during optimisation, the algorithm proposes a continuous parameter set when a new individual is created, and the discrete parameter set closest to this continuous set is retained to continue with the optimisation.

It is possible to provide the list of values for each discrete parameter. It is also possible to
provide values between 0 and 1. The true values of the parameters will then be computed
according to the limits defined by the user. This means a list of discrete values can be
reused for several types of optimisation.

An example of a list of discrete values, for an optimisation on two discrete parameters and one continuous parameter, is provided below:

! example of Discrete Optimization File in 3 dimensions (3 parameters)
0.5,7, ! (this is a comment)
1,7.2, ! the third parameter will always be determined by the optimizer
1.5,7.8, !
2,8.0,

This corresponds to a first parameter that takes the values 0.5, 1, 1.5, and 2, a second parameter that takes the values 7, 7.2, 7.8, and 8, and a third continuous parameter, the values of which are determined by the optimiser.

It is possible to combine an experimental design and discrete optimisation in the same optimisation. Note also that if all parameters are discrete, it is better to define an experimental design that contains all the discrete values and to launch a single generation since, in this case, there is no search for optimisation points.



CHAPTER IV - EXAMPLE: OPTIMISATION OF A BILLET VOLUME

Using the genetic algorithm described in Figure 11, the objective is to find the following for a billet:
− the optimal diameter and length (the parametered actions)
− in order to minimise its volume (the minimisable)
− with a guarantee of filling the die with no folds (the constraints).

[Diagram of a generation cycle: initialisation generates a first generation of three individuals. Individual 1: volume = 512, filled, no folds (good). Individual 2: volume = 448, filled, no folds (perfect). Individual 3: volume = 342, not filled, no folds (constraint not satisfied). The individuals (that is, their parameters) are then combined to obtain new, "fitter" individuals, and the cycle ends after the given number of generations.]

Figure 11: Example of generation cycle

Besides the results from the simulations conducted, the optimisation produces results specific to the algorithm: the parameter values, cost function, quantity to minimise, and constraints of each individual evaluated during the procedure.
[Diagram: billet length versus diameter, showing a region of underfill, a region of overfill (significant flash), and between them the dimensions that fill the die as best as possible.]

Figure 12: Example of billet dimension choices according to length and diameter



It is easier to represent these results by means of a "quantity to minimise" / "constraint" graph representing the individuals (Figure 13).

Figure 13: Example of a "quantity to minimise" / "constraint" graph

In this example, the point on the axis farthest to the left, with the coordinates (4.256270; 0.00), is the best minimisable obtained (the entire die is filled and the weight is the lowest). The blue curve represents the Pareto Frontier, which corresponds to the individuals that are optimal in their range (the fittest individuals cannot be improved without deteriorating other criteria).





Bibliography:

[Ducloux 2010a] R. Ducloux, L. Fourment, S. Marie, D. Monnereau, Automatic optimisation techniques applied to large range of industrial test cases, 13th International ESAFORM Conference, Brescia, Italy, 2010

[Ducloux 2010b] R. Ducloux, S. Marie, D. Monnereau, N. Behr, L. Fourment, M. Ejday, Automatic optimisation techniques applied to a large range of industrial test cases using meta-model assisted evolutionary algorithm, 10th International NUMIFORM Conference, Pohang, Korea, 2010

[Ejday 2011] M. Ejday, Optimisation multi-objectifs à base de métamodèle pour les procédés de mise en forme, Doctoral thesis, Mines ParisTech, 2011

[Fourment 2010] L. Fourment, R. Ducloux, S. Marie, M. Ejday, D. Monnereau, T. Masse, P. Montmitonnet, Mono and multi objective optimisation techniques applied to a large range of industrial test cases using metamodel assisted evolutionary algorithms, 10th International NUMIFORM Conference, Pohang, Korea, 2010

[Fourment 2009] L. Fourment, T. Massé, S. Marie, R. Ducloux, M. Ejday, C. Bobadilla, P. Montmitonnet, Optimization of a range of 2D and 3D bulk forming processes by a meta-model assisted evolution strategy, 12th ESAFORM Conference on Material Forming, Enschede, 2009

[Roux 2011] Stratégies d'optimisation des procédés et d'identification des comportements mécaniques des matériaux, Doctoral thesis, Mines ParisTech, 2011
