
COURSEWORK

UNCONSTRAINED OPTIMIZATION.

Rio Adi Prasetyo 02311840000034


Dian Permana 02311840000076

Muhammad Fikri Mumtaz 02311840000115

Departemen Teknik Fisika


Fakultas Teknologi Industri dan Rekayasa Sistem
Institut Teknologi Sepuluh Nopember
Surabaya
2020
Problem A
a. Basic Theory
There are various methods for solving the unconstrained minimization problem. A practical design problem is rarely unconstrained; still, a study of this material is important for several reasons. First, the constraints do not have a significant influence in certain design problems. Second, some of the powerful and robust methods for solving constrained minimization problems require the use of unconstrained minimization techniques. Third, the study of unconstrained minimization techniques provides the basic understanding necessary for the study of constrained minimization methods. Fourth, unconstrained minimization methods can be used to solve certain complex engineering analysis problems. For example, the displacement response (linear or nonlinear) of any structure under any specified load condition can be found by minimizing its potential energy. Similarly, the eigenvalues and eigenvectors of any discrete system can be found by minimizing the Rayleigh quotient.
A point X∗ will be a relative minimum of f(X) if the necessary conditions
∂f/∂xᵢ (X∗) = 0, i = 1, 2, …, n
are satisfied. The point X∗ is guaranteed to be a relative minimum if the Hessian matrix is positive definite, that is,
H(X∗) = [∂²f/∂xᵢ∂xⱼ (X∗)] is positive definite.
These conditions can be used to identify the optimum point during numerical computations. However, if the function is not differentiable, they cannot be applied. For example, consider the function
f(x) = a x for x ≥ 0 and f(x) = −b x for x ≤ 0,
where a > 0 and b > 0. The graph of this function is shown in Fig. 6.1. It can be seen that this function is not differentiable at the minimum point x∗ = 0, and hence the conditions above are not applicable in identifying x∗. In all such cases, the commonly understood notion of a minimum, namely f(X∗) < f(X) for all X, can be used only to identify a minimum point.
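
As a small illustration of these conditions (an added sketch, not part of the original text), the quadratic objective used later in this problem, f(x, y) = 6x² − 6xy + 2y² − x − 2y, can be checked symbolically in MATLAB (Symbolic Math Toolbox assumed, as in the listings below):

% Sketch: checking the necessary and sufficient optimality conditions
syms x y
f = 6*x^2 - 6*x*y + 2*y^2 - x - 2*y; % objective used in the MATLAB solutions below
sol = solve(gradient(f, [x y]) == 0, [x y]); % necessary condition: gradient equal to zero
H = hessian(f, [x y]); % Hessian matrix (constant for a quadratic)
disp([sol.x sol.y]) % stationary point, expected [4/3, 5/2]
disp(eig(H)) % both eigenvalues positive, so H is positive definite and X* is a minimum
disp(subs(f, [x y], [sol.x sol.y])) % minimum value, expected -19/6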
b. Problems
A cantilever beam is subjected to an end force P0 and an end moment M0 as shown in Fig. 1(a). Using the one-finite-element model indicated in Fig. 1(b), the transverse displacement can be expressed in terms of the shape functions Nᵢ as
w(x) = N1(α)u1 + N2(α)u2 + N3(α)u3 + N4(α)u4,
with α = x/l, where u1, u2, u3, and u4 are the end displacements (or slopes) of the beam and the boundary conditions are u1 = u2 = 0. The deflection of the beam at point A can be found by minimizing the potential energy of the beam,
F = (1/2) ∫₀ˡ EI (d²w/dx²)² dx − P0 u3 − M0 u4,
where E is Young's modulus and I is the area moment of inertia of the beam. Formulate the optimization problem in terms of the variables x1 = u3 and x2 = u4 l for the case that
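
As a sketch of how the quadratic objective arises (an added note using the standard Hermite beam-element stiffness coefficients; the specific load values of the original case statement are not reproduced here), carrying out the integration with u1 = u2 = 0 gives

\[
F(x_1, x_2) = \frac{EI}{l^3}\left(6x_1^2 - 6x_1 x_2 + 2x_2^2\right) - P_0 x_1 - \frac{M_0}{l}\, x_2 ,
\qquad x_1 = u_3,\; x_2 = u_4 l .
\]

The MATLAB objective used in the solutions below, f(x1, x2) = 6x1² − 6x1x2 + 2x2² − x1 − 2x2, corresponds to one particular choice of EI/l³, P0, and M0/l.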
c. Solution with the univariate method
To solve the optimization problem, we use MATLAB as a tool. Here is the source code.

clear all
close all
clc
%% Initialization
syms X Y lambda %Declare a time-changing function
i = 1; %Declare initialization
var = 2; %Declare the number of variations
eps = 0.01; %Declare the epsilon value
x = [0;0]; %Declare x with the value 0 because it is still in the first iteration
f = symfun(6*X^2 - 6*X*Y + 2*Y^2 - X - 2*Y, [X Y]); %The objective function of the problem
S = eye(var); %variable identity
%% Looping
for i = i:100 %to do an iteration with 100 max iterations
x_temp = x(:,i);% to save the value of x because the value of x varies
if mod(i,2) == 1 % using modulo to separate and create a different column when the value is odd or even
s = S(:,1); %the odd value of s
else
s = S(:,2);%the even value of s
end

fval = f(x_temp(1), x_temp(2)); %to calculate the value of f with the temporary value of x
xp_temp = x_temp + eps*s;
xn_temp = x_temp - eps*s;
fval_p = f(xp_temp(1), xp_temp(2)); %to determine f when the x value is positive
fval_n = f(xn_temp(1), xn_temp(2)); %to determine f when the x value is negative
if fval_p < fval %when the value of f-positive is less than f, so the value of s is positive
s = s;
elseif fval_n < fval %when the value of f-negative is less than f, so the value of s is negative
s = -s;
else %to stop the iterations when there is an optimum value
break
end

xopt_temp = x_temp + lambda*s; %to calculate the optimum value of x
fval_opt = f(xopt_temp(1), xopt_temp(2)); %to calculate the optimum value of f
grad_fopt = gradient(fval_opt); %differentiate with respect to lambda to get the gradient value
lambda_opt = solve(grad_fopt); %to calculate the optimal lambda

x(:,i+1) = x_temp + lambda_opt*s; %insert the value of each iteration into x
end

%% Result Table
Iter = 1:i;
X_coordinate = x(1,:)';
Y_coordinate = x(2,:)';
Iterations = Iter';
T = table(Iterations,X_coordinate,Y_coordinate);

%% Output
fprintf('Initial Objective Function Value: %d\n\n',subs(f,[X,Y],[x(1),x(2)]));
disp(T)
fprintf('Number of Iterations : %d\n\n', i);
fprintf('Point of Minimal: [%d,%d]\n\n', x(1,i), x(2,i));
fprintf('X: [%d]\n\n', x(1,i));
fprintf('Y: [%d]\n\n', x(2,i));
fprintf('Objective Function Minimum Value after Optimization: %f\n\n',double(f(x(1,i),x(2,i))));

%% Plotting
axis square
X = linspace(-8,8); Y = linspace(-8,8);
[A,B] = meshgrid(X,Y);
f_fig = f(A,B);
levels = 10:10:350;
figure(1), contour(X,Y,f_fig,levels,'linewidth',1.2), colorbar
hold on;
plot(x(1,:), x(2,:))

d. Output
Initial Objective Function Value: 0

Iterations X_coordinate Y_coordinate


__________ __________________ ________________

1 0 0
2 0.0833333333333333 0
3 0.0833333333333333 0.625
4 0.395833333333333 0.625
5 0.395833333333333 1.09375
6 0.630208333333333 1.09375
7 0.630208333333333 1.4453125
8 0.805989583333333 1.4453125
9 0.805989583333333 1.708984375
10 0.937825520833333 1.708984375
11 0.937825520833333 1.90673828125
12 1.03670247395833 1.90673828125
13 1.03670247395833 2.0550537109375
14 1.11086018880208 2.0550537109375
15 1.11086018880208 2.16629028320313
16 1.1664784749349 2.16629028320313
17 1.1664784749349 2.24971771240234
18 1.20819218953451 2.24971771240234
19 1.20819218953451 2.31228828430176
20 1.23947747548421 2.31228828430176
21 1.23947747548421 2.35921621322632
22 1.26294143994649 2.35921621322632
23 1.26294143994649 2.39441215991974
24 1.2805394132932 2.39441215991974
25 1.2805394132932 2.4208091199398
26 1.29373789330324 2.4208091199398
27 1.29373789330324 2.44060683995485
28 1.30363675331076 2.44060683995485
29 1.30363675331076 2.45545512996614
30 1.3110608983164 2.45545512996614
31 1.3110608983164 2.4665913474746
32 1.31662900707064 2.4665913474746
33 1.31662900707064 2.47494351060595

Number of Iterations : 33

Point of Minimal: [1.316629e+00,2.474944e+00]

X: [1.316629e+00]

Y: [2.474944e+00]

Objective Function Minimum Value after Optimization: -3.166248


e. Solution with Powell's method
Algorithm
File run_powell.m

clear;
close all
clc
var = 2; %number of variables
syms X1 X2
f = symfun(6*X1^2 - 6*X1*X2 + 2*X2^2 - X1 - 2*X2, [X1 X2]);
X = [0 0]; %row array of initial guesses
Eps_Fx = 1e-7; %tolerance for function value
Eps_Step = [1e-5 1e-5]; %tolerance for step values
MaxIter = 1000; %maximum number of iterations
myFx ='fx1'; %string name of the target functions

[X, FxVal, Iters] = powell_opt(var, X, Eps_Fx, Eps_Step, MaxIter, myFx)
axis square
M = linspace(-8,8); G = linspace(-8,8);
[A,B] = meshgrid(M,G);
f_fig = f(A,B);
levels = 10:10:350;
figure(1), contour(M,G,f_fig,levels,'linewidth',1.2), colorbar
hold on;

File fx1.m
function y=fx1(X, N)
y = 6*X(1)^2 - 6*X(1)*X(2) + 2*X(2)^2 - X(1) - 2*X(2);
end

File powell_opt.m
function [X, FxVal, Iters] = powell_opt(N, X, Eps_Fx, Eps_Step, MaxIter, f)
%set I = 1
Iters = 1;
f1 = feval(f, X, N);
%set X1 = X
X1 = X;
S = eye(N+1,N);

condition = true;

while condition % Iteration
S(N+1,:) = 0; % reset row N+1
for i= 1:N
lambda = 0.1;
% find lambda to minimize f(x + lamda*s)
lambda = linsearch(X, N, lambda, S, i, f);
% the algorithm jumps to the linesearch function
X = X + lambda * S(i,:);
S(N+1,:) = S(N+1,:) + lambda * S(i,:);
end

lambda = 0.1;
lambda = linsearch(X, N, lambda, S, N+1, f);
X = X + lambda * S(N+1,:);
X2 = X;

f2 = feval(f, X2, N);

if abs(f2 - f1) < Eps_Fx
break;
end

if norm(X2 - X1) < Eps_Step
break
end

Iters = Iters + 1;

if Iters >= MaxIter
break
end

X1 = X2;
for k=1:N
for m=1:N
S(k, m) = S(k+1,m);
end
end

end

FxVal = feval(f, X, N);

function y = myFxEx(N, X, S, ii, lambda, f)

X = X + lambda * S(ii,:);
y = feval(f, X, N);

function lambda = linsearch(X, N, lambda, S, ii, f)

MaxIt = 100;
Toler = 0.0001;

iter = 0;
condition = true;
while condition % Set I = I + 1
iter = iter + 1;
if iter > MaxIt
lambda = 0;
break
end

h = 0.01 * (1 + abs(lambda));
f0 = myFxEx(N, X, S, ii, lambda, f);
fp = myFxEx(N, X, S, ii, lambda+h, f);
fm = myFxEx(N, X, S, ii, lambda-h, f);
deriv1 = (fp - fm) / 2 / h;
deriv2 = (fp - 2 * f0 + fm) / h ^ 2;
if deriv2 == 0
break
end
diff = deriv1 / deriv2;
lambda = lambda - diff;
if abs(diff) < Toler % to know f is optimum?
condition = false;
end
end
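
For reference (an added note describing the listing above): linsearch performs a Newton iteration on λ for φ(λ) = f(X + λ S(ii,:)), using central finite differences with step h for the derivatives,

\[
\varphi'(\lambda) \approx \frac{\varphi(\lambda+h)-\varphi(\lambda-h)}{2h}, \qquad
\varphi''(\lambda) \approx \frac{\varphi(\lambda+h)-2\varphi(\lambda)+\varphi(\lambda-h)}{h^{2}}, \qquad
\lambda \leftarrow \lambda - \frac{\varphi'(\lambda)}{\varphi''(\lambda)},
\]

which is exactly what deriv1, deriv2, and diff compute; the iteration stops when the Newton step falls below Toler or when the second-derivative estimate vanishes.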

OUTPUT:
X = 1.333333333333183 2.499999999999760

FxVal = -3.166666666666667

Iters = 3

f. Conclusions
The univariate and Powell's methods give similar optimum values, with only a small difference: the univariate method reaches -3.166248, while Powell's method reaches -3.166666666666667. However, the univariate method takes 33 iterations, while Powell's method takes only 3, so Powell's method is far more efficient.
Problem B
a. Basic Theory
There are two unconstrained optimization methods to explain here: the univariate method and Powell's method.

Univariate method
In this method we change only one variable at a time and seek to produce a sequence
of improved approximations to the minimum point. By starting at a base point Xi in
the ith iteration, we fix the values of n − 1 variables and vary the remaining variable.
Since only one variable is changed, the problem becomes a one-dimensional
minimization problem and any of the methods discussed in Chapter 5 can be used to
produce a new base point Xi+1. The search is now continued in a new direction. This
new direction is obtained by changing any one of the n − 1 variables that were fixed
in the previous iteration. In fact, the search procedure is continued by taking each
coordinate direction in turn. After all the n directions are searched sequentially, the
first cycle is complete and hence we repeat the entire process of sequential
minimization. The procedure is continued until no further improvement is possible in
the objective function in any of the n directions of a cycle. The univariate method can
be summarized as follows:
1. Choose an arbitrary starting point X1 and set i = 1.
2. Find the search direction Si as
Siᵀ = (0, …, 0, 1, 0, …, 0), where the single 1 appears in the position of the variable being varied, cycling through positions 1, 2, …, n as i increases (position 1 for i = 1, n + 1, 2n + 1, …; position 2 for i = 2, n + 2, …; and so on).
3. Determine whether λi should be positive or negative. For the current direction Si, this means finding whether the function value decreases in the positive or negative direction. For this we take a small probe length (ε) and evaluate fi = f(Xi), f⁺ = f(Xi + εSi), and f⁻ = f(Xi − εSi). If f⁺ < fi, Si will be the correct direction for decreasing the value of f, and if f⁻ < fi, −Si will be the correct one. If both f⁺ and f⁻ are greater than fi, we take Xi as the minimum along the direction Si.
4. Find the optimal step length λi∗ such that
f(Xi ± λi∗ Si) = min over λi of f(Xi ± λi Si),
where the + or − sign is used depending on whether Si or −Si is the direction for decreasing the function value.
5. Set Xi+1 = Xi ± λi∗ Si depending on the direction for decreasing the function value, and fi+1 = f(Xi+1).
6. Set the new value of i = i + 1 and go to step 2. Continue this procedure until
no significant change is achieved in the value of the objective function.
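
For concreteness, here is a worked example (added for illustration) of the first two steps applied to the Problem A objective f(x, y) = 6x² − 6xy + 2y² − x − 2y, starting from X1 = (0, 0):

\[
S_1 = (1,0)^T:\quad f(\lambda, 0) = 6\lambda^2 - \lambda \;\Rightarrow\; \lambda_1^{*} = \tfrac{1}{12}, \quad X_2 = \left(\tfrac{1}{12},\, 0\right),
\]
\[
S_2 = (0,1)^T:\quad f\!\left(\tfrac{1}{12}, \lambda\right) = 2\lambda^2 - \tfrac{5}{2}\lambda - \tfrac{1}{24} \;\Rightarrow\; \lambda_2^{*} = \tfrac{5}{8}, \quad X_3 = \left(\tfrac{1}{12},\, \tfrac{5}{8}\right),
\]

which matches the first rows of the iteration table in the Problem A output (0.0833…, then 0.625).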

The univariate method is very simple and can be implemented easily. However, it will
not converge rapidly to the optimum solution, as it has a tendency to oscillate with
steadily decreasing progress toward the optimum. Hence it will be better to stop the
computations at some point near to the optimum point rather than trying to find the
precise optimum point. In theory, the univariate method can be applied to find the
minimum of any function that possesses continuous derivatives. However, if the
function has a steep valley, the method may not even converge. For example, consider
the contours of a function of two variables with a valley as shown in Fig. 6.5. If the
univariate search starts at point P, the function value cannot be decreased either in
the direction ±S1 or in the direction ±S2. Thus the search comes to a halt and one may
be misled to take the point P, which is certainly not the optimum point, as the
optimum point. This situation arises whenever the value of the probe length ε needed
for detecting the proper direction (±S1 or ±S2) happens to be less than the number of
significant figures used in the computations.

Powell Method
Powell's method belongs to the direct search methods, i.e., no first- or second-order derivatives are required. It is based on so-called conjugate directions; Powell states that its main justification rests on its properties when the objective function f(x) is convex and quadratic [1]. Two directions di and dj, i ≠ j, are mutually conjugate with respect to the matrix A of the quadratic form if
diᵀ A dj = 0.
A set of n mutually conjugate directions di ∈ Rⁿ constitutes a basis of Rⁿ. The related conjugate gradient method works as follows. Let X0 be the initial guess of a minimum of the function f. In iteration k we require the gradient gk = g(Xk). If k = 1, dk is the steepest descent direction, dk = −gk. For k > 1 Powell applies the update
dk = −gk + βk dk−1,
with the Euclidean vector norms
βk = ‖gk‖² / ‖gk−1‖².
The main idea of the conjugate direction method is to search for the minimal value of f(x) along direction dk to obtain the next solution Xk+1, i.e., to find the λ that minimizes
f(Xk + λ dk).
For a minimizing λk, set the vector Xk+1 to
Xk+1 = Xk + λk dk.
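
As a quick illustration (added example) using the Hessian of the Problem A objective, A = [12 −6; −6 4]: with d1 = (1, 0)ᵀ, a conjugate direction d2 must satisfy

\[
d_1^{T} A\, d_2 = 12\, d_{2x} - 6\, d_{2y} = 0 \;\Rightarrow\; d_2 = (1, 2)^{T},
\]

and indeed d1ᵀ A d2 = 0 for this choice. Exact line searches along two such conjugate directions minimize a convex quadratic in two variables exactly, which is why Powell's method needs only a few iterations on the quadratic objectives in this coursework.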


b. Problems
To contrast the performance, in terms of behavior and speed of convergence, of various unconstrained optimization methods, it is customary to construct test functions. The proposed function to be examined is the Rosenbrock function,
f(x, y) = 100(y − x²)² + (1 − x)².
B1 : Compute analytically the stationary points of the function and verify whether each is a maximum, a minimum, or a saddle point.
B2 : Do an analytical (math) iteration from i = 1 to 3 (univariate method).
     Do an analytical (math) iteration from Sn to P1 (Powell's method).
B3 : Plot, using MATLAB, the level sets of the function f(x, y).
B4 : Implement the procedures (in MATLAB) for minimization of the function f(x, y) using the univariate method and Powell's method such that the stationary point (optimal point) is achieved. Do they converge? If so, how many iterations does each take?

c. Solutions
B1 :
- Simplify the function f(x, y):
f(x, y) = 100(y − x²)² + (1 − x)²
f(x, y) = 100(y² − 2x²y + x⁴) + (x² − 2x + 1)
f(x, y) = 100x⁴ + 100y² − 200x²y + x² − 2x + 1


- Determine the values x∗ and y∗ at which the first derivatives of f(x, y) are zero:
∂f/∂x = 400x³ − 400xy + 2x − 2 = 0 …… (1)
∂f/∂y = 200y − 200x² = 0
200y = 200x²
y = x² …… (2)
- Substitute equation (2) into equation (1):
400x³ − 400x³ + 2x − 2 = 0
2x = 2
Hence,
x∗ = 1
- Since y = x², we get
y = 1²
y∗ = 1
- Applying the Hessian matrix to the function f(x, y), we get
H(x, y) = [ ∂²f/∂x²  ∂²f/∂x∂y ; ∂²f/∂x∂y  ∂²f/∂y² ]
H(x, y) = [ 1200x² − 400y + 2  −400x ; −400x  200 ]
- Substituting the values x∗ = 1 and y∗ = 1,
H(x∗, y∗) = [ 802  −400 ; −400  200 ]

- Whether the stationary point is a maximum, a minimum, or a saddle point can be determined from the Hessian matrix: if its determinant D is positive and ∂²f/∂x² > 0 the point is a minimum, if D is positive and ∂²f/∂x² < 0 it is a maximum, if D is negative it is a saddle point, and if D is zero the test is inconclusive.
- Calculating the determinant of the matrix,
D = (802 × 200) − (−400 × −400)
D = 160400 − 160000
D = 400
Since the determinant is positive and ∂²f/∂x² = 802 > 0, the function f(x, y) has a minimum at the stationary point (x∗, y∗) = (1, 1).
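
The same conclusion can be checked symbolically in MATLAB (a small verification sketch added here; it assumes the Symbolic Math Toolbox, which the other listings already use):

% Sketch: verifying the stationary point and Hessian of the Rosenbrock function
syms x y
f = 100*(y - x^2)^2 + (1 - x)^2;
sol = solve(gradient(f, [x y]) == 0, [x y]); % stationary point, expected x = 1, y = 1
H = subs(hessian(f, [x y]), [x y], [sol.x sol.y]); % Hessian at the stationary point, [802 -400; -400 200]
disp([sol.x sol.y])
disp(eig(H)) % both eigenvalues are positive, so (1, 1) is a minimum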
B2 :
B3 :
Plot, using MATLAB, the level sets of the function f(x, y):
clear;
close all
clc
var = 2; %number of variables
syms X1 X2
f = symfun(100*(X2 - X1^2)^2 + (1 - X1)^2, [X1 X2]);
X = [0 0]; %row array of initial guesses
Eps_Fx = 1e-7; %tolerance for function value
Eps_Step = [1e-5 1e-5]; %tolerance for step values
MaxIter = 1000; %maximum number of iterations
myFx ='fx1'; %string name of the target functions

[X, FxVal, Iters] = powell_opt(var, X, Eps_Fx, Eps_Step, MaxIter, myFx)
axis square
M = linspace(-8,8); G = linspace(-8,8);
[A,B] = meshgrid(M,G);
f_fig = f(A,B);
levels = 10:10:350;
figure(1), contour(M,G,f_fig,levels,'linewidth',1.2), colorbar
hold on;

function y=fx1(X, N)
y = 100*(X(2) - X(1)^2)^2 + (1 - X(1))^2;
end

function [X, FxVal, Iters] = powell_opt(N, X, Eps_Fx, Eps_Step, MaxIter, f)
%set I = 1
Iters = 1;
f1 = feval(f, X, N);
%set X1 = X
X1 = X;
S = eye(N+1,N);

condition = true;

while condition % Iteration
S(N+1,:) = 0; % reset row N+1
for i= 1:N
lambda = 0.1;
% find lambda to minimize f(x + lamda*s)
lambda = linsearch(X, N, lambda, S, i, f);
% the algorithm jumps to the linesearch function
X = X + lambda * S(i,:);
S(N+1,:) = S(N+1,:) + lambda * S(i,:);
end

lambda = 0.1;
lambda = linsearch(X, N, lambda, S, N+1, f);
X = X + lambda * S(N+1,:);
X2 = X;

f2 = feval(f, X2, N);

%check the optimum condition
if abs(f2 - f1) < Eps_Fx
break;
end

if norm(X2 - X1) < Eps_Step
break
end

Iters = Iters + 1;

if Iters >= MaxIter
break
end

X1 = X2;
for k=1:N
for m=1:N
S(k, m) = S(k+1,m);
end
end
end

FxVal = feval(f, X, N);

function y = myFxEx(N, X, S, ii, lambda, f)

X = X + lambda * S(ii,:);
y = feval(f, X, N);

function lambda = linsearch(X, N, lambda, S, ii, f)

MaxIt = 100;
Toler = 0.0001;

iter = 0;
condition = true;
while condition % Set I = I + 1
iter = iter + 1;
if iter > MaxIt
lambda = 0;
break
end

h = 0.01 * (1 + abs(lambda));
f0 = myFxEx(N, X, S, ii, lambda, f);
fp = myFxEx(N, X, S, ii, lambda+h, f);
fm = myFxEx(N, X, S, ii, lambda-h, f);
deriv1 = (fp - fm) / 2 / h;
deriv2 = (fp - 2 * f0 + fm) / h ^ 2;
if deriv2 == 0
break
end
diff = deriv1 / deriv2;
lambda = lambda - diff;
if abs(diff) < Toler % to know f is optimum?
condition = false;
end
end
B4.1
Univariate Method
clear all
close all
clc
%% Initialization
syms X Y lambda %Declare a time-changing function
i = 1; %Declare initialization
var = 2; %Declare the number of variations
eps = 0.01; %Declare the epsilon value
x = [0;0]; %Declare x with the value 0 because it is still in the first iteration
f = symfun(X - Y + 2*X^2 + 2*X*Y + Y^2, [X Y]); %The objective function of the problem
S = eye(var); %variable identity
%% Looping
for i = i:100 %to do an iteration with 100 max iterations
x_temp = x(:,i);% to save the value of x because the value of x varies
if mod(i,2) == 1 % using modulo to separate and create a different column when the value is odd or even
s = S(:,1); %the odd value of s
else
s = S(:,2);%the even value of s
end

fval = f(x_temp(1), x_temp(2)); %to calculate the value of f with the temporary value of x
xp_temp = x_temp + eps*s;
xn_temp = x_temp - eps*s;
fval_p = f(xp_temp(1), xp_temp(2)); %to determine f when the x value is positive
fval_n = f(xn_temp(1), xn_temp(2)); %to determine f when the x value is negative
if fval_p < fval %when the value of f-positive is less than f, so the value of s is positive
s = s;
elseif fval_n < fval %when the value of f-negative is less than f, so the value of s is negative
s = -s;
else %to stop the iterations when there is an optimum value
break
end

xopt_temp = x_temp + lambda*s; %to calculate the optimum value of x
fval_opt = f(xopt_temp(1), xopt_temp(2)); %to calculate the optimum value of f
grad_fopt = gradient(fval_opt); %differentiate with respect to lambda to get the gradient value
lambda_opt = solve(grad_fopt); %to calculate the optimal lambda

x(:,i+1) = x_temp + lambda_opt*s; %insert the value of each iteration into x
end

%% Result Table
Iter = 1:i;
X_coordinate = x(1,:)';
Y_coordinate = x(2,:)';
Iterations = Iter';
T = table(Iterations,X_coordinate,Y_coordinate);

%% Output
fprintf('Initial Objective Function Value: %d\n\n',subs(f,[X,Y],[x(1),x(2)]));
disp(T)
fprintf('Number of Iterations : %d\n\n', i);
fprintf('Point of Minimal: [%d,%d]\n\n', x(1,i), x(2,i));
fprintf('X: [%d]\n\n', x(1,i));
fprintf('Y: [%d]\n\n', x(2,i));
fprintf('Objective Function Minimum Value after Optimization: %f\n\n',double(f(x(1,i),x(2,i))));

%% Plotting
axis square
X = linspace(-8,8); Y = linspace(-8,8);
[A,B] = meshgrid(X,Y);
f_fig = f(A,B);
levels = 10:10:350;
figure(1), contour(X,Y,f_fig,levels,'linewidth',1.2), colorbar
hold on;
plot(x(1,:), x(2,:))

Initial Objective Function Value: 0

Iterations X_coordinate Y_coordinate


__________ ____________ ____________

1 0 0
2 -0.25 0
3 -0.25 0.75
4 -0.625 0.75
5 -0.625 1.125
6 -0.8125 1.125
7 -0.8125 1.3125
8 -0.90625 1.3125
9 -0.90625 1.40625
10 -0.953125 1.40625
11 -0.953125 1.453125
12 -0.9765625 1.453125
13 -0.9765625 1.4765625
14 -0.98828125 1.4765625
15 -0.98828125 1.48828125
16 -0.994140625 1.48828125
17 -0.994140625 1.494140625

Number of Iterations : 17

Point of Minimal: [-9.941406e-01,1.494141e+00]

X: [-9.941406e-01]

Y: [1.494141e+00]

Objective Function Minimum Value after Optimization: -1.249966


B4.2 Powell’s Method

clear;
close all
clc
var = 2; %number of variables
syms X Y
f = symfun(X - Y + 2*X^2 + 2*X*Y + Y^2, [X Y]);
X = [0 0]; %row array of initial guesses
Eps_Fx = 1e-7; %tolerance for function value
Eps_Step = [1e-5 1e-5]; %tolerance for step values
MaxIter = 1000; %maximum number of iterations
myFx ='fx1'; %string name of the target functions

[X, FxVal, Iters] = powell_opt(var, X, Eps_Fx, Eps_Step, MaxIter, myFx)
axis square
M = linspace(-8,8); G = linspace(-8,8);
[A,B] = meshgrid(M,G);
f_fig = f(A,B);
levels = 10:10:350;
figure(1), contour(M,G,f_fig,levels,'linewidth',1.2), colorbar
hold on;
plot(X(1),X(2))

function y=fx1(X, N)
y = X(1) - X(2) + 2*X(1)^2 + 2*X(1)*X(2) + X(2)^2;
end
function [X, FxVal, Iters] = powell_opt(N, X, Eps_Fx, Eps_Step, MaxIter, f)
%set I = 1
Iters = 1;
f1 = feval(f, X, N);
%set X1 = X
X1 = X;
S = eye(N+1,N);

condition = true;

while condition % Iteration
S(N+1,:) = 0; % reset row N+1
for i= 1:N
lambda = 0.1;
% find lambda to minimize f(x + lamda*s)
lambda = linsearch(X, N, lambda, S, i, f);
% the algorithm jumps to the linesearch function
X = X + lambda * S(i,:);
S(N+1,:) = S(N+1,:) + lambda * S(i,:);
end

lambda = 0.1;
lambda = linsearch(X, N, lambda, S, N+1, f);
X = X + lambda * S(N+1,:);
X2 = X;

f2 = feval(f, X2, N);

%check the optimum condition
if abs(f2 - f1) < Eps_Fx
break;
end

if norm(X2 - X1) < Eps_Step
break
end

Iters = Iters + 1;

if Iters >= MaxIter
break
end

X1 = X2;
for k=1:N
for m=1:N
S(k, m) = S(k+1,m);
end
end

end

FxVal = feval(f, X, N);

function y = myFxEx(N, X, S, ii, lambda, f)

X = X + lambda * S(ii,:);
y = feval(f, X, N);

function lambda = linsearch(X, N, lambda, S, ii, f)

MaxIt = 100;
Toler = 0.0001;

iter = 0;
condition = true;
while condition % Set I = I + 1
iter = iter + 1;
if iter > MaxIt
lambda = 0;
break
end

h = 0.01 * (1 + abs(lambda));
f0 = myFxEx(N, X, S, ii, lambda, f);
fp = myFxEx(N, X, S, ii, lambda+h, f);
fm = myFxEx(N, X, S, ii, lambda-h, f);
deriv1 = (fp - fm) / 2 / h;
deriv2 = (fp - 2 * f0 + fm) / h ^ 2;
if deriv2 == 0
break
end
diff = deriv1 / deriv2;
lambda = lambda - diff;
if abs(diff) < Toler % to know f is optimum?
condition = false;
end
end

X=

-0.999999999999995 1.500000000000009

FxVal =

-1.250000000000000

Iters = 3
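
As a cross-check (added note), the stationary point of f(x, y) = x − y + 2x² + 2xy + y² can be obtained by hand:

\[
\frac{\partial f}{\partial x} = 1 + 4x + 2y = 0, \qquad
\frac{\partial f}{\partial y} = -1 + 2x + 2y = 0
\;\Rightarrow\; x^{*} = -1,\; y^{*} = \tfrac{3}{2},\; f(x^{*}, y^{*}) = -\tfrac{5}{4},
\]

in agreement with both numerical results above.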
Conclusions :

The univariate and Powell's methods reach similar optimum points: (-0.994140625, 1.494140625) for the univariate method and (-0.999999999999995, 1.500000000000009) for Powell's method. The optimum values also differ only slightly: the univariate method gives -1.249966, while Powell's method gives -1.250000000000000. However, the univariate method takes 17 iterations, while Powell's method takes only 3, so Powell's method is far more efficient.
REFERENCES

[1] Kramer, O. (2010). Iterated local search with Powell's method: a memetic algorithm for continuous global optimization. Memetic Computing, 2(1), pp. 69-83.
[2] Li, L.P. and Wang, L. (2009). Hybrid algorithms based on harmony search and differential evolution for global optimization. In: GEC '09: Proceedings of the First ACM/SIGEVO Summit on Genetic and Evolutionary Computation.
[3] Shammas, N. MATLAB Program to Find a Function Minimum Using the Powell Search Method. http://www.namirshammas.com/MATLAB/Optim_Powell.htm
[4] Sitole, S. Univariate Search Method (Optimizing Quadratic Equations with Two Variables). https://www.mathworks.com/matlabcentral/fileexchange/62017-univariate-search-method-optimizing-quadratic-equations-with-two-variables
[5] Powell, M.J.D. (1964). An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Computer Journal, 7(2), pp. 155-162.
[6] Powell, M.J.D. (1977). Restart procedures for the conjugate gradient method. Mathematical Programming, 12(1), pp. 241-254.
