
Linear and Nonlinear Optimization

HW5

Naeemullah Khan & Sultan Albarakati

Problem 2:

min f = 50x1 + 80x2
subject to:
3x1 ≥ 6
2x1 + 4x2 ≥ 10
2x1 + 5x2 ≥ 8
x1 ≥ 0, x2 ≥ 0

In the form expected by linprog (A x ≤ b), the ≥ constraints are negated:

c = [50; 80], A = [-3 0; -2 -4; -2 -5], b = [-6; -10; -8], lb = [0; 0]

Code:

clc
clear
f = [50; 80];
A = [-3 0; -2 -4; -2 -5];
b = [-6; -10; -8];
lb = zeros(2,1);
[x, f, exitflag, output, lambda] = linprog(f, A, b, [], [], lb)
xt = x';
inequality_lambda = lambda.ineqlin'

Results:
Optimization terminated.
x =
    2.0000
    1.5000
f =
  220.0000
exitflag = 1
output =
      iterations: 7
       algorithm: 'large-scale: interior point'
    cgiterations: 0
         message: 'Optimization terminated.'


    constrviolation: 0
      firstorderopt: 6.3469e-11
lambda =
    ineqlin: [3x1 double]
      eqlin: [0x1 double]
      upper: [2x1 double]
      lower: [2x1 double]
inequality_lambda = 3.3333 20.0000 0.0000

Part 2: Each multiplier λi gives the sensitivity of the optimal cost to its constraint: if the required amount of a specific nutrient is increased by a small amount, the objective increases by approximately that change multiplied by the corresponding λi. For example, a slight perturbation of the second constraint (λ2 = 20) has a more profound effect on the objective than a perturbation of the first (λ1 ≈ 3.33) or third (λ3 = 0) constraint.
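The LP and the multiplier interpretation can also be cross-checked outside MATLAB; below is a sketch using SciPy's linprog (the function and its result fields are SciPy's, not part of the original assignment). Perturbing the second requirement by 0.1 should raise the optimal cost by roughly λ2 × 0.1 = 2.

```python
# Sketch: reproduce Problem 2 with SciPy and check the multiplier interpretation.
# The >= constraints are flipped to <= by negating A and b, as in the MATLAB code.
from scipy.optimize import linprog

c = [50, 80]
A_ub = [[-3, 0], [-2, -4], [-2, -5]]
base = linprog(c, A_ub=A_ub, b_ub=[-6, -10, -8], bounds=[(0, None)] * 2)
pert = linprog(c, A_ub=A_ub, b_ub=[-6, -10.1, -8], bounds=[(0, None)] * 2)

print(base.x, base.fun)      # optimum (2, 1.5), cost 220
print(pert.fun - base.fun)   # ~2.0 = lambda2 * 0.1
```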

Part 3:

L(x, λ) = f(x) − Σ_{i=1..5} λi hi(x)

f(x) = 50x1 + 80x2
h1(x) = 3x1 − 6
h2(x) = 2x1 + 4x2 − 10
h3(x) = 2x1 + 5x2 − 8
h4(x) = x1, h5(x) = x2


KKT optimality condition:

hi(x*) ≥ 0, i = 1, ..., 5
λi ≥ 0
λi hi(x*) = 0
∇x L(x*, λ*) = 0


From MATLAB:

x1* = 2, x2* = 1.5, λ1 = 3.3333, λ2 = 20, λ3 = λ4 = λ5 = 0

∇x L(x*, λ) = [50 − 3λ1 − 2λ2 − 2λ3 − λ4 ; 80 − 4λ2 − 5λ3 − λ5] = [0 ; 0]

h1(x*) = 0, h2(x*) = 0, h3(x*) = 3.5 ≥ 0, h4(x*) = 2 ≥ 0, h5(x*) = 1.5 ≥ 0
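As a quick numerical cross-check of the stationarity condition, the gradients can be assembled directly (a Python sketch; the arrays simply transcribe f and h1..h5 from this page):

```python
# Sketch: verify grad_x L(x*, lambda) = grad f - sum_i lambda_i * grad h_i = 0.
import numpy as np

lam = np.array([10/3, 20, 0, 0, 0])   # multipliers reported by linprog
grad_f = np.array([50, 80])
grad_h = np.array([[3, 0],            # h1 = 3*x1 - 6
                   [2, 4],            # h2 = 2*x1 + 4*x2 - 10
                   [2, 5],            # h3 = 2*x1 + 5*x2 - 8
                   [1, 0],            # h4 = x1
                   [0, 1]])           # h5 = x2
grad_L = grad_f - lam @ grad_h
print(grad_L)                         # ~ [0, 0]
```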

All multipliers are non-negative, so every KKT condition is satisfied.

Part 4:

min 50x1 + 80x2
subject to:
3x1 − x3 = 6
2x1 + 4x2 − x4 = 10
2x1 + 5x2 − x5 = 8
x1, x2, x3, x4, x5 ≥ 0

c^T = (50, 80, 0, 0, 0), Aeq = [3 0 -1 0 0; 2 4 0 -1 0; 2 5 0 0 -1], beq = [6; 10; 8]

(the code multiplies the equalities by −1, which does not change the solution)

Code:

clc
f = [50; 80; 0; 0; 0];
Aeq = [-3 0 1 0 0; -2 -4 0 1 0; -2 -5 0 0 1];
beq = [-6; -10; -8];
lb = zeros(5,1);
[x, f, exitflag, output, lambda] = linprog(f, [], [], Aeq, beq, lb, []);
xt = x'
f
exitflag
equality_lambdas = lambda.eqlin'

Results:
Optimization terminated.
xt = 2.0000 1.5000 0.0000 0.0000 3.5000
f = 220.0000
exitflag = 1


equality_lambdas = 3.3333 20.0000 0.0000
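The standard-form model can be cross-checked the same way (a SciPy sketch; the equalities are written here with positive right-hand sides, which is equivalent to the negated version used in the MATLAB code):

```python
# Sketch: Problem 2 in standard form, with surplus variables x3, x4, x5.
from scipy.optimize import linprog

c = [50, 80, 0, 0, 0]
A_eq = [[3, 0, -1, 0, 0],
        [2, 4, 0, -1, 0],
        [2, 5, 0, 0, -1]]
b_eq = [6, 10, 8]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5)
print(res.x, res.fun)   # (2, 1.5, 0, 0, 3.5), cost 220
```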

Problem 3: This problem has three variables: x1, x2 (the coordinates of the center) and R (the radius of the inscribed circle). We want to maximize R such that the circle lies completely inside the polygon {x : ai^T x ≤ bi}.

max R
subject to:

ai^T x − bi + R‖ai‖ ≤ 0,  R ≥ 0,  for all i = 1, ..., n

In this problem x can be positive or negative, so to keep all LP variables nonnegative we split each coordinate into two nonnegative parts: x1 = x1⁺ − x1⁻, x2 = x2⁺ − x2⁻. The problem then takes the form

max R, i.e. min p^T z with p = (0, 0, 0, 0, −1)

subject to

[A  −A  N] [x⁺; x⁻; R] ≤ b,   R, x⁺, x⁻ ≥ 0

where N is the column vector of the norms ‖ai‖. Given A and b we solve this with linprog.

Code:
clc
f = [0; 0; 0; 0; -1];
A = [0 -1; 2 -1; 1 1; -1/3 1; -1 0; -1 -1];
N = zeros(6,1);
for i = 1:6
    N(i) = norm(A(i,:));
end
A = [A -A N];
b = [0; 8; 7; 3; 0; -1];
lb = [0; 0; 0; 0; 0];
[x, f, exitflag, output, lambda] = linprog(f, A, b, [], [], lb, []);
x'
x1 = x(1) - x(3)
x2 = x(2) - x(4)


R = x(5)
exitflag
inequality_lambdas = lambda.ineqlin'

Results:
x1 = 2.4961
x2 = 1.8656
R = 1.8656
exitflag = 1
inequality_lambdas = 0.4664 0.0000 0.1166 0.3498 0.0000 0.0000

Part 3: The non-zero multipliers indicate the active constraints, i.e. the edges of the polygon that the circle touches; if λi = 0 the circle does not touch line i. The magnitude of a non-zero λi also measures the sensitivity of the objective to the corresponding constraint: if a constraint has a large λi, a small perturbation of that constraint causes a large change in the objective, and vice versa.
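Since SciPy's linprog accepts free variables directly, the same inscribed-circle LP can be written without the x⁺/x⁻ split (a sketch; A and b are the polygon data from the code above):

```python
# Sketch: Chebyshev center of {x : A x <= b}, variables z = (x1, x2, R).
import numpy as np
from scipy.optimize import linprog

A = np.array([[0, -1], [2, -1], [1, 1], [-1/3, 1], [-1, 0], [-1, -1]], float)
b = np.array([0, 8, 7, 3, 0, -1], float)
norms = np.linalg.norm(A, axis=1)        # ||a_i||
A_ub = np.hstack([A, norms[:, None]])    # a_i^T x + R*||a_i|| <= b_i
c = [0, 0, -1]                           # maximize R
bounds = [(None, None), (None, None), (0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b, bounds=bounds)
x1, x2, R = res.x
print(x1, x2, R)   # ~ (2.496, 1.866, 1.866)
```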


Problem 4

min f(x) = Σi pi xi

subject to flow conservation at every node j:

Σi x(i→j) − Σk x(j→k) = bj   for all j

where x(i→j) is the flow on the arc from node i to node j and bj is the net supply at node j.

So: f(x) = 2x1 + 7x2 + x3 + 5x4 + 3x5
Subject to:

−x1 − x4 = −4
x1 − x2 − x5 = 0
x2 + x3 = 4
−x3 + x4 + x5 = 0
x1, x2, x3, x4, x5 ≥ 0
x1 ≤ 5, x2 ≤ 2, x3 ≤ 4, x4 ≤ 2, x5 ≤ 1
Code:
clc
clear
f = [2; 7; 1; 5; 3];
Aeq = [-1 0 0 -1 0; 1 -1 0 0 -1; 0 1 1 0 0; 0 0 -1 1 1];
beq = [-4; 0; 4; 0];
lb = zeros(5,1);
ub = [5; 2; 4; 2; 1];
[x, f, exitflag, output, lambda] = linprog(f, [], [], Aeq, beq, lb, ub);
xt = x'
total_cost = f
exitflag
equality_lambdas = lambda.eqlin'
UpperBound_lambdas = lambda.upper


Results:
Optimization terminated.
xt = 2.0000 1.0000 3.0000 2.0000 1.0000
total_cost = 27.0000
exitflag = 1
equality_lambdas = 2.0000 0 -7.0000 -6.0000
UpperBound_lambdas = 0.0000 0.0000 0.0000 3.0000 3.0000

Part 2: The upper-bound lambdas represent the sensitivity of the total cost to the capacity of a specific arc: if we increase the capacity of arc i by Δci, the cost changes by −λi Δci. Likewise for the node-constraint lambdas: if we increase the demand of a specific node j by Δbj, the total cost changes by −λj Δbj.
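The same network LP can be cross-checked with SciPy (a sketch; Aeq, the bounds, and the costs are exactly the data above):

```python
# Sketch: the min-cost flow LP of Problem 4, with arc capacities as bounds.
from scipy.optimize import linprog

c = [2, 7, 1, 5, 3]                 # arc costs
A_eq = [[-1, 0, 0, -1, 0],          # node-balance rows
        [1, -1, 0, 0, -1],
        [0, 1, 1, 0, 0],
        [0, 0, -1, 1, 1]]
b_eq = [-4, 0, 4, 0]
bounds = [(0, 5), (0, 2), (0, 4), (0, 2), (0, 1)]   # 0 <= x_i <= capacity
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, res.fun)   # flows (2, 1, 3, 2, 1), total cost 27
```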


Problem 5: The code for the problem is given below.

Code:


% This template generates synthetic data (missing pixels) to use in testing
% the procedures developed in Assignment 5. Unknown pixels are defined by
% the array called "unknown". The obscured image is in the array U1.

% Read a sample grayscale image
close all
clc
clear
U0 = double(imread('hovegray.png'));
[m, n] = size(U0);
M = m; N = n;

% Image courtesy of S. Boyd.

% Create 50% mask of known pixels and use it to obscure the original
rand('state', 1029);
unknown = rand(m,n) < 0.5;
U1 = U0.*(1-unknown) + 150.*unknown;

% Display images
figure(1); cla; colormap gray;
subplot(121); imagesc(U0); title('Original image'); axis image;
subplot(122); imagesc(U1); title('Obscured image'); axis image;

% my code from here
% Build the quadratic-penalty matrix C from the forward differences
n = M*N;
C = zeros(n,n);
for i = 2:M
    for j = 2:N
        k1 = M*(j-2) + i;       % left neighbor
        k2 = M*(j-1) + i - 1;   % upper neighbor
        k3 = M*(j-1) + i;       % current pixel
        C([k1,k2,k3],[k1,k2,k3]) = C([k1,k2,k3],[k1,k2,k3]) + [1 0 -1; 0 1 -1; -1 -1 2];
    end
end


% Equality constraints fixing the known pixels
p1 = sum(sum(unknown));
p = M*N - p1;          % number of known pixels
r = 1;
A = zeros(p, M*N);
b = zeros(p, 1);
for i = 1:M
    for j = 1:N
        if unknown(i,j) == 0
            A(r, M*(j-1)+i) = 1;
            b(r) = U0(i,j)/255;
            r = r + 1;
        end
    end
end
figure
size(A)
spy(A)
x = reshape(U1, M*N, 1);
% test1 = A*x - b;

% Solve the KKT system of the equality-constrained quadratic program
Mbig = [2*C, A'; A, zeros(p,p)];
y = [zeros(M*N,1); b];
U2 = Mbig \ y;
showx(U2(1:M*N))

Result of Denoising using L2 norm
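The linear solve at the heart of the l2 code — the saddle-point system [2C, A'; A, 0] — can be illustrated on a toy equality-constrained quadratic program (a sketch; the 2×2 data here is made up for illustration):

```python
# Sketch: minimize x' C x subject to A x = b by solving the KKT saddle-point
# system [[2C, A'], [A, 0]] [x; y] = [0; b], as in the image-inpainting code.
import numpy as np

C = np.array([[2.0, -1.0], [-1.0, 2.0]])   # toy positive-definite quadratic
A = np.array([[1.0, 0.0]])                 # constraint: fix x[0]
b = np.array([1.0])

n, p = C.shape[0], A.shape[0]
K = np.block([[2*C, A.T], [A, np.zeros((p, p))]])
z = np.linalg.solve(K, np.concatenate([np.zeros(n), b]))
x = z[:n]
print(x)   # x[0] pinned to 1, x[1] chosen to minimize the quadratic
```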


The code for denoising using the l1 norm is given below.


% This template generates synthetic data (missing pixels) to use in testing
% the procedures developed in Assignment 5. Unknown pixels are defined by
% the array called "unknown". The obscured image is in the array U1.

% Read a sample grayscale image
tic
close all
clear
clc
M = 51; N = 59;
U0 = double(imread('hovegray.png'));
[m, n] = size(U0);

% Image courtesy of S. Boyd.

% Create 50% mask of known pixels and use it to obscure the original
rand('state', 1029);
unknown = rand(m,n) < 0.5;
U1 = U0.*(1-unknown) + 150.*unknown;

% Display images
figure(1); cla; colormap gray;
subplot(121); imagesc(U0); title('Original image'); axis image;
subplot(122); imagesc(U1); title('Obscured image'); axis image;

% my code from here
p = M*N - sum(sum(unknown));   % number of known pixels
s = (M-1)*(N-1);               % number of difference terms
A = zeros(p, M*N);
B = zeros(s, M*N);
C = zeros(s, M*N);
r = 1;
for i = 1:M
    for j = 1:N
        if unknown(i,j) == 0
            A(r, M*(j-1)+i) = 1;
            b(r,1) = U0(i,j);
            r = r + 1;
        end



    end
end
r = 1;
for i = 2:M
    for j = 2:N
        B(r, M*(j-1)+i) = 1;  B(r, M*(j-1)+i-1) = -1;   % vertical difference
        C(r, M*(j-1)+i) = 1;  C(r, M*(j-2)+i) = -1;     % horizontal difference
        r = r + 1;
    end
end

% l1 reformulation: each |difference| is bounded by an auxiliary variable
Mbig = [B -eye(s) zeros(s); -B -eye(s) zeros(s); C zeros(s) -eye(s); -C zeros(s) -eye(s)];
bbig = zeros(4*s, 1);
Abig = [A, zeros(p,s), zeros(p,s)];
f = [zeros(1,M*N) ones(1,s) ones(1,s)];
xans = linprog(f, Mbig, bbig, Abig, b);
u3 = xans(1:M*N);
u3 = reshape(u3, M, N);
imagesc(u3)
toc
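The absolute-value reformulation used above — replacing each |d| by an auxiliary variable s with d ≤ s and −d ≤ s, then minimizing the sum of the auxiliaries — can be checked on a tiny example (a sketch with made-up data):

```python
# Sketch: minimize |u - 1| + |u - 3| as an LP with variables z = (u, s1, s2).
from scipy.optimize import linprog

c = [0, 1, 1]                  # minimize s1 + s2
A_ub = [[1, -1, 0],            #  (u - 1) <= s1
        [-1, -1, 0],           # -(u - 1) <= s1
        [1, 0, -1],            #  (u - 3) <= s2
        [-1, 0, -1]]           # -(u - 3) <= s2
b_ub = [1, -1, 3, -3]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (0, None), (0, None)])
print(res.fun)   # minimum value 2.0, attained by any u between 1 and 3
```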

Result of Denoising using L1 norm


Discussion: It can be seen from the results that, since the l2 norm penalizes large deviations more heavily, the reconstructed image is much smoother and sharp edges are distorted. Compared to the l2 norm, the l1 norm is much more effective at preserving boundaries and sharp edges, because it penalizes the absolute value of the gradient rather than its square.

