
Design Optimization

Homework 1

Eduardo Marcos Larangeira

Problem (A)

The area functional to be minimized is shown in eq. 1.


A = 2\pi \int_{x_0}^{x_1} y \, ds \qquad (1)

where the values assigned to the radii and the distance give the boundary
conditions y(x₀) = y(0) = 2 and y(x₁) = y(3) = 3.

The arc length along the curve can be expressed as ds = √(1 + y′²) dx, which, applied to eq.
1, gives the equation that was discretized in MATLAB in order to use its optimization
functions. The resulting formula is shown in eq. 2.
A = 2\pi \int_{x_0}^{x_1} y \sqrt{1 + y'^2} \, dx \qquad (2)

Applying an optimization method to minimize eq. 2 under these boundary conditions
yields the minimal surface of revolution, which is the equilibrium shape of a soap film under those
circumstances. Equation 3 shows the discretized functional, using a midpoint
quadrature scheme and a centered finite difference at the midpoint for the derivative,
over an array of N equispaced points along the x-axis.
A \approx \tilde{A} = 2\pi \, \frac{x_1 - x_0}{N - 1} \sum_{j=1}^{N-1} \frac{y_{j+1} + y_j}{2} \sqrt{1 + \left( \frac{y_{j+1} - y_j}{(x_1 - x_0)/(N - 1)} \right)^2} \qquad (3)

where N is the number of points used to discretize the function.

The following listing shows the discretized equation coded in MATLAB.

function A = A(Y, N)
% Discretized surface area, eq. 3. Y holds only the interior points;
% the boundary values are appended below.
A = 0;
% integration limits
x0 = 0;
x1 = 3;

Yy = [2 Y 3];   % adding the B.C. to the design vector

deltaX = (x1 - x0)/(length(Yy) - 1);

for j = 1:(length(Yy) - 1)
    % midpoint quadrature with a centered finite difference for y'
    A = A + 0.5*(Yy(j+1) + Yy(j))*sqrt(1 + ((Yy(j+1) - Yy(j))/deltaX)^2);
end
A = A*deltaX*2*pi;
end
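
As a quick sanity check of this routine (a sketch, not part of the handed-in script): for the straight line y = x/3 + 2 through both boundary points, the exact area of revolution is 2π · 7.5 · √10⁄3 ≈ 49.67, which the discretized function should approximate.

N = 200;
xx = 0:(3/(N-1)):3;
A(xx(2:N-1)/3 + 2, N)   % interior points of y = x/3 + 2; returns ≈ 49.67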
The BFGS method was chosen to solve this problem due to its good compromise between
efficiency and required computational resources. The routine supplied in the course
material was used as follows:

% integration limits, solver settings, and tolerances
x0 = 0;
x1 = 3;
MaxIter = 5000;   % maximum number of iterations
DxToler = 1e-8;   % step tolerance
myF = 'A';        % objective function name
metodo = 2;       % remaining settings left as supplied by the course routine
gr = 0;
he = 0;
N = 200;          % number of discretization points

xx = x0:((x1 - x0)/(N-1)):x1;

Y = 0.33*xx(2:N-1) + 2;   % initial guess: straight line through the B.C.

[X, Fo, Iters, Neval] = bfgs(N-2, Y, DxToler, MaxIter, myF, gr);

The variable “Y” holds the initial guess, chosen close to the straight line passing
through both boundary points. Since gradient-based methods converge to a local minimum,
the closer the initial guess is to the final optimized function, the more accurate and
efficient the solver will be.

The number of design variables is N minus 2 because the boundary values are fixed rather
than optimized; they are re-attached by the line “Yy = [2 Y 3];” inside the objective function.

First, the BFGS method was run with 4 different values of N. The first plot below displays
the evolution of the function with respect to N, and the second one its maximum error
(max[min(F) − Analytical]). As expected, the error decreases as the number of points increases.
Solving for N = 700 yielded an optimal surface area of 46.01 u.a. and an arc length of 3.56 u.
l., after 714 iterations and 1124447 function evaluations in 66.13 seconds. The following figure
presents the plot of the minimized function along with the analytical catenary
computed using the arc length obtained from the optimized function. Both lines follow
almost the same path, with differences on the order of ~10⁻⁴, as shown in the second plot
below.
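
This comparison can be reproduced with a minimal sketch like the one below, assuming X is the interior vector returned by bfgs and that the second output of the catenary helper (used later in this report) is the analytical profile:

N = 700;
xx = 0:(3/(N-1)):3;
Yopt = [2 X 3];                              % optimized profile with B.C. re-attached
[~, Yan] = catenary([0,2], [3,3], 3.56, N);  % analytical catenary, arc length 3.56
maxErr = max(abs(Yopt - Yan))                % max[min(F) - Analytical]
plot(xx, Yopt, xx, Yan, '--'); legend('BFGS', 'Analytical')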
When first trying the genetic algorithm, the results were completely off, with the
minimized function reaching meaningless values. The solver was set to a population
of 200 and run for N = 1000. The following listing displays the commands used to call the
MATLAB function.

opts = optimoptions(@ga, 'PlotFcn', {@gaplotbestf, @gaplotstopping});
rng default
lb = 0*ones(1, N-2);   % lower bounds on the interior points
ub = 3*ones(1, N-2);   % upper bounds on the interior points
Aineq = [];            % linear constraints left empty (named Aineq instead of A,
bineq = [];            % which would shadow the objective function A)
Aeq = [];
beq = [];
opts.PopulationSize = 200;
[X, Fo, exitFlag, Output] = ga(@A, N-2, Aineq, bineq, Aeq, beq, lb, ub, [], opts);

The following images depict the calculated function under the mentioned settings, the
analytical catenary traced with the arc length obtained from the optimized function, and the
values retrieved during the process.
To get closer to the results of the gradient-based optimization, the lower and upper
bounds were set to the catenary function values shifted by minus and plus 0.02,
respectively, as shown in the code below.

[~, yc] = catenary([0,2], [3,3], 3.56, N);   % analytical catenary through the B.C.
lb = yc - 0.02;
ub = yc + 0.02;

The results are plotted in the figure below. The function did get closer, but it
seems to follow the lower bound, with some disturbances along the line.
To conclude on the genetic algorithm, its application to this case is considered to have
failed; one likely reason is an incorrect set-up of the algorithm and of the model of
the function to be minimized.

To solve the problem via the calculus of variations, eq. (2) is taken as the starting point.
Since the integrand F = y√(1 + y′²) does not depend explicitly on x, the first integral of
Euler's equation (the Beltrami identity) gives:

F - y' F_{y'} = C \qquad (4)
Substituting F and F_y′ = y y′⁄√(1 + y′²) into eq. 4 gives y⁄√(1 + y′²) = C, which, solved
for dx, yields:

dx = \frac{C \, dy}{\sqrt{y^2 - C^2}} \qquad (5)

Integrating both sides results in,

x + C_1 = C \ln \frac{y + \sqrt{y^2 - C^2}}{C} \qquad (6)

so that:

y = C \cosh \left( \frac{x + C_1}{C} \right) \qquad (7)

The constants are determined by applying the boundary conditions, which in this case are
y(x₀) = y(0) = 2 and y(x₁) = y(3) = 3.
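
These two conditions form a 2×2 nonlinear system in C and C₁ with no closed-form solution, so it can be solved numerically; a minimal sketch using MATLAB's fsolve (the initial guess [1; -1] is an assumption chosen by inspection):

% boundary conditions y(0) = 2 and y(3) = 3 applied to eq. 7
bc = @(p) [p(1)*cosh((0 + p(2))/p(1)) - 2;
           p(1)*cosh((3 + p(2))/p(1)) - 3];
p = fsolve(bc, [1; -1]);   % p(1) = C, p(2) = C1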


Problem (B)

The hanging chain problem is quite similar to the soap film problem, except that there is a
constraint on the chain's length. The functional to be minimized, which represents the potential
energy, is presented below; it is essentially the same as eq. 2, with the 2π term replaced by
the density times gravity. The constraint equation is also presented.
E = \rho g \int_{x_0}^{x_1} y \sqrt{1 + y'^2} \, dx \qquad (8)

R = \int_{x_0}^{x_1} \sqrt{1 + y'^2} \, dx = L = 6 \qquad (9)

In addition to the constraint, there are the boundary points: y(x₀) = y(−2) = 2 and
y(x₁) = y(2) = 2.

As the chain's properties are not given, the constants outside the integral were not considered
in the discretized equation. Both the functional and its length constraint are shown below in
discretized form.
E \approx \tilde{E} = \frac{x_1 - x_0}{N - 1} \sum_{j=1}^{N-1} \frac{y_{j+1} + y_j}{2} \sqrt{1 + \left( \frac{y_{j+1} - y_j}{(x_1 - x_0)/(N - 1)} \right)^2} \qquad (10)

R \approx \tilde{R} = \frac{x_1 - x_0}{N - 1} \sum_{j=1}^{N-1} \sqrt{1 + \left( \frac{y_{j+1} - y_j}{(x_1 - x_0)/(N - 1)} \right)^2} - 6 \qquad (11)

The following listing shows the MATLAB code that implements the functions depicted
above.
function E = E(Y, N)
% Discretized potential energy, eq. 10 (constants omitted).
E = 0;
x0 = -2;
x1 = 2;
Yy = [2 Y 2];   % imposing the B.C.

deltaX = (x1 - x0)/(length(Yy) - 1);

for j = 1:(length(Yy) - 1)
    E = E + 0.5*(Yy(j+1) + Yy(j))*sqrt(1 + ((Yy(j+1) - Yy(j))/deltaX)^2);
end
E = E*deltaX;
end
%*************************%
function [c, ceq] = R(Y, N)
% Discretized length constraint, eq. 11: ceq = arc length - 6 = 0.
r = 0;
x0 = -2;
x1 = 2;
len = 6;
Yy = [2 Y 2];   % imposing the B.C.
deltaX = (x1 - x0)/(length(Yy) - 1);

for j = 1:(length(Yy) - 1)
    r = r + sqrt(1 + ((Yy(j+1) - Yy(j))/deltaX)^2);
end
r = r*deltaX - len;
ceq = r;        % equality constraint
c = [];         % no inequality constraints
end
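
As a quick sanity check of both routines, a hand-computable case can be used: for the flat profile y = 2 over [−2, 2], the energy integral evaluates to 2 · 4 = 8 and the arc length to 4, so ceq should be 4 − 6 = −2. A minimal sketch:

N = 100;
E(2*ones(1, N-2), N)              % flat chain: returns ≈ 8
[~, ceq] = R(2*ones(1, N-2), N)   % arc length 4 minus len 6: ceq ≈ -2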

This time, the function fmincon from the MATLAB Optimization Toolbox was used, as shown by
the code below. The initial guess y0 was based on the second-order equation x², whose shape
is a parabola and thus approximates the catenary, making it easier for the algorithm
to reach the optimal solution.

x0 = -2;
x1 = 2;
N = 300;
fun = @E;   % objective function handle

xx = x0:((x1 - x0)/(N-1)):x1;
y0 = xx(2:N-1).^2;   % parabolic initial guess

Aineq = [];
bineq = [];
Aeq = [];
beq = [];
lb = -0.2*ones(1, N-2);
ub = 2*ones(1, N-2);

options = optimoptions('fmincon', ...
    'Algorithm', 'sqp', ...
    'FiniteDifferenceType', 'central', ...
    'MaxFunctionEvaluations', 1000000, ...
    'MaxIterations', 1000000);
tic
[y, fval, exitflag, output] = fmincon(fun, y0, Aineq, bineq, Aeq, beq, lb, ub, @R, options);
t = toc;
The following plots display the results for N = 200, showing that the values are close to the
analytical function, with a maximum error of 2.0210e-05 as presented in the plot. The final
min(E) value is 4.73, reached after 281 iterations and 113686 function evaluations in
49.68 seconds.
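
As a minimal sketch of a consistency check (assuming y is the interior vector returned by fmincon above), the length constraint can be verified directly from the optimized profile:

Yy = [2 y 2];                                    % re-attach the B.C.
deltaX = 4/(length(Yy) - 1);
L = sum(sqrt(1 + (diff(Yy)/deltaX).^2))*deltaX   % should be close to 6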

Again, on the first try, the genetic algorithm did not reach the correct value with the
settings used. The following listing shows the MATLAB script used to call the ga
function.
lbga = (xx(2:N-1).^2) - 0.5;   % bounds: the parabolic guess shifted down/up by 0.5
ubga = (xx(2:N-1).^2) + 0.5;
A = [];
b = [];
Aeq = [];
beq = [];

mName = 'GA';
disp('Genetic Algorithm method 2 results')
disp('*********************************')
opts = optimoptions(@ga,'PlotFcn',{@gaplotbestf,@gaplotstopping});
rng default
opts.PopulationSize = 50;
opts.UseParallel = true;
[y,Fo,exitFlag,Output] = ga(@E,N-2,A,b,Aeq,beq,lbga,ubga,@R,opts);

After 2768600 function evaluations, 5 generations, and 3437 seconds, the run was
manually stopped, resulting in the function shown in the following plot.

To make the search easier for the solver, the lower and upper bounds were set to values from
the catenary function, as demonstrated below.

[~, lbga] = catenary([-2,2], [2,2], 6.1, N-2);   % longer chain hangs lower: lower bound
[~, ubga] = catenary([-2,2], [2,2], 5.9, N-2);   % shorter chain hangs higher: upper bound

The solver reached a solution after just 54 seconds, with an approximate arc length of
5.9997 u.l., 37800 function evaluations over 3 generations, a final min(E) value of 4.91,
and a maximum error of 0.09. The next figures show the optimized and analytical functions,
the errors, and a table comparing both algorithms (‘fmincon/sqp’ and ‘ga’).

Solver | Error      | Eval. Functions | CPU Time (s) | Fval
SQP    | 2.0210e-05 | 113686          | 49.68        | 4.73
GA     | 0.09       | 37800           | 54.4         | 4.91
Analysing the table, the CPU times and Fval values are quite similar, but overall the
gradient-based method was a better fit for this case, since it did not need the initial guess
to be as close to the analytical values as the heuristic method did. In addition, the accuracy
of the SQP method was far greater than that of the GA.
