Published by Naeemullah Khan on Sep 29, 2013.

Linear and Nonlinear Optimization

HW5

Naeemullah Khan & Sultan Albarakati

Problem 2: Min f = 50x1 + 80x2
Subject to:
3x1 ≥ 6
2x1 + 4x2 ≥ 10
2x1 + 5x2 ≥ 8
x1 ≥ 0, x2 ≥ 0

In the linprog form A x ≤ b, the ≥ constraints (including the non-negativity bounds) are negated:

c = [50; 80],  A = [-3 0; -2 -4; -2 -5; -1 0; 0 -1],  b = [-6; -10; -8; 0; 0],  lb = [0; 0]

In the code the bounds x ≥ 0 are passed through lb, so A and b keep only the three nutrition constraints.

Code:
clc
clear
f = [50; 80];
A = [-3 0; -2 -4; -2 -5];
b = [-6 -10 -8];
lb = zeros(2,1);
[x,f,exitflag,output,lambda] = linprog(f,A,b,[],[],lb)
xt = x';
f;
exitflag
inequality_lambda = lambda.ineqlin'

Results:
Optimization terminated.
x =
    2.0000
    1.5000
f = 220.0000
exitflag = 1
output =
       iterations: 7
        algorithm: 'large-scale: interior point'
     cgiterations: 0
          message: 'Optimization terminated.'
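As a sanity check (not part of the original MATLAB submission), the same LP can be reproduced in Python with scipy.optimize.linprog; the data below simply mirrors the MATLAB call.

```python
# Hypothetical Python cross-check of Problem 2 using scipy's linprog,
# which also minimizes c @ x subject to A_ub @ x <= b_ub.
from scipy.optimize import linprog

c = [50, 80]
A_ub = [[-3, 0], [-2, -4], [-2, -5]]   # the ">=" rows, negated
b_ub = [-6, -10, -8]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, res.fun)   # optimum (2, 1.5) with value 220
```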

    constrviolation: 0
     firstorderopt: 6.3469e-11
lambda =
    ineqlin: [3x1 double]
      eqlin: [0x1 double]
      upper: [2x1 double]
      lower: [2x1 double]
exitflag = 1
inequality_lambda = 3.3333  20.0000  0.0000

Part 2: The physical meaning of λ is that if we need to increase the required amount of a specific nutrient by a small amount, the increase in the objective equals that change multiplied by the corresponding λ. For example, a slight perturbation of the second constraint (λ2 = 20) will have a more profound effect on the objective than a perturbation of the first (λ1 = 3.3333) or third (λ3 = 0) constraint.

Part 3:
L(x, λ) = f(x) - Σ_{i=1..5} λi hi(x)

f(x) = 50x1 + 80x2
h1(x) = 3x1 - 6
h2(x) = 2x1 + 4x2 - 10
h3(x) = 2x1 + 5x2 - 8
h4(x) = x1
h5(x) = x2

KKT optimality conditions:
hi(x*) ≥ 0,  i = 1, …, 5
λi ≥ 0
λi · hi(x*) = 0
∇L(x*, λ) = 0
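The stationarity condition ∇L(x*, λ) = 0 can also be checked numerically. A minimal sketch (Python, not in the original submission) that plugs x* = (2, 1.5) and the multipliers λ = (10/3, 20, 0, 0, 0) into the gradient of L = f - Σ λi hi:

```python
# Numerical KKT stationarity check for Problem 2 (illustrative sketch).
import numpy as np

c = np.array([50.0, 80.0])                      # gradient of f
G = np.array([[3.0, 0], [2, 4], [2, 5],         # gradients of h1..h3
              [1, 0], [0, 1]])                  # gradients of h4, h5
lam = np.array([10 / 3, 20.0, 0.0, 0.0, 0.0])   # multipliers found above

grad_L = c - G.T @ lam    # gradient of the Lagrangian L = f - sum(lam_i * h_i)
print(grad_L)             # -> [0. 0.]
```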

From matlab: x1* = 2, x2* = 1.5, and λ1 = 3.3333, λ2 = 20, λ3 = λ4 = λ5 = 0.

∇x L(x*, λ) = (50 - 3λ1 - 2λ2 - 2λ3 - λ4,  80 - 4λ2 - 5λ3 - λ5) = (0, 0)

h1(x*) = 0,  h2(x*) = 0,  h3(x*) = 3.5 ≥ 0,  h4(x*) = 2 ≥ 0,  h5(x*) = 1.5 ≥ 0

All lambdas are non-negative and all conditions are met.

Part 4:
Min 50x1 + 80x2
Subject to:
3x1 - x3 = 6
2x1 + 4x2 - x4 = 10
2x1 + 5x2 - x5 = 8
x1, x2, x3, x4, x5 ≥ 0

with cost vector p^T = (50, 80, 0, 0, 0) and

A = [3 0 -1 0 0; 2 4 0 -1 0; 2 5 0 0 -1],  b = (6; 10; 8)

The code passes the equalities multiplied by -1 (Aeq = -A, beq = -b), which is equivalent.

Code:
clc
f = [50; 80; 0; 0; 0];
Aeq = [-3 0 1 0 0; -2 -4 0 1 0; -2 -5 0 0 1];
beq = [-6; -10; -8];
lb = [0; 0; 0; 0; 0];
[x,f,exitflag,output,lambda] = linprog(f,[],[],Aeq,beq,lb,[]);
xt = x'
f
exitflag
equality_lambdas = lambda.eqlin'

Results:
Optimization terminated.
xt = 2.0000  1.5000  0.0000  0.0000  3.5000
f = 220.0000
exitflag = 1
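The standard (equality) form should reproduce the same optimum with surplus values x3 = x4 = 0 and x5 = 3.5. A Python sketch of the equality-constrained solve, mirroring the MATLAB Aeq/beq data (not part of the original submission):

```python
# Hypothetical cross-check of the standard (equality) form of Problem 2.
from scipy.optimize import linprog

c = [50, 80, 0, 0, 0]
A_eq = [[-3, 0, 1, 0, 0],
        [-2, -4, 0, 1, 0],
        [-2, -5, 0, 0, 1]]
b_eq = [-6, -10, -8]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5)
print(res.x, res.fun)   # expected (2, 1.5, 0, 0, 3.5) with value 220
```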

equality_lambdas = 3.3333  20.0000  0.0000

Problem 3:
We have three variables in this problem: X1, X2 (the coordinates of the center of the circle) and R (the radius of the inscribed circle). We want to maximize R such that the circle lies completely inside the polygon {x : Ax ≤ b}:

Max R
Subject to:
ai^T x - bi + R·‖ai‖ ≤ 0,  for i = 1, …, n
R ≥ 0

In this case x can be positive or negative, so to turn the formulation into a linear program with non-negative variables we introduce split variables X1 = X1+ - X1-, X2 = X2+ - X2-. This gives the form:

Max R
Subject to:
A(x+ - x-) - b + R·N ≤ 0,  i.e.  [A  -A  N] [x+; x-; R] ≤ b
x+, x-, R ≥ 0

where N is the column vector of the norms ‖ai‖. Given A and b we solve it using linprog.

Code:
clc
f = [0; 0; 0; 0; -1];    % maximize R, i.e. minimize -R
A = [0 -1; 2 -1; -1/3 1; 1 1; -1 -1];   % polygon edges (the scan shows a 6-edge loop; only these rows are legible)
b = [0; 3; 8; 7; 0];                    % right-hand sides (ordering partially illegible in the source)
for i = 1:size(A,1)
    N(i,:) = norm(A(i,:));
end
A = [A -A N];
lb = zeros(5,1);
[x,f,exitflag,output,lambda] = linprog(f,A,b,[],[],lb,[]);
x'
x1 = x(1) - x(3)
x2 = x(2) - x(4)

R = x(5)
exitflag
inequality_lambdas = lambda.ineqlin'

Results:
x1 = 2.4961
x2 = 1.8656
R = 1.8656
exitflag = 1
inequality_lambdas = 0.4664  0.1166  0.3498  0.0000  0.0000  0.0000

Part 3: The non-zero multipliers indicate the constraints that are active, i.e. the lines that the circle is touching: if λi = 0 the circle does not touch line i, otherwise it does. The values of the non-zero lambdas also indicate the sensitivity of the objective to the corresponding constraint: if a constraint has a large λ, a small perturbation of that constraint will cause a large change in the objective, and vice versa.
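Since the polygon data in the scanned source is partly illegible, here is the same [A, -A, N] construction demonstrated on a hypothetical polygon (the unit square), where the largest inscribed circle is known to have center (0.5, 0.5) and radius 0.5. This is an illustrative Python sketch, not the assignment's data:

```python
# Chebyshev-center LP with split variables [x+, x-, R], on a unit square.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0], [-1, 0], [0, 1], [0, -1]])   # square: A x <= b
b = np.array([1.0, 0, 1, 0])
N = np.linalg.norm(A, axis=1, keepdims=True)         # row norms ||a_i||

A_lp = np.hstack([A, -A, N])       # rows: a_i'(x+ - x-) + R*||a_i|| <= b_i
c = np.array([0, 0, 0, 0, -1.0])   # maximize R  <=>  minimize -R
res = linprog(c, A_ub=A_lp, b_ub=b, bounds=[(0, None)] * 5)

x1 = res.x[0] - res.x[2]
x2 = res.x[1] - res.x[3]
R = res.x[4]
print(x1, x2, R)   # -> 0.5 0.5 0.5
```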

Problem 4:
min f(x) = Σ_i pi xi
s.t. for every node j: Σ_i x_ij - Σ_i x_ji = bj

So:
f(x) = 2x1 + 7x2 + x3 + 5x4 + 3x5

Subject to:
-x1 - x4 = -4
x2 + x3 = 4
x1 - x2 - x5 = 0
-x3 + x4 + x5 = 0
x1, x2, x3, x4, x5 ≥ 0
x1 ≤ 5, x2 ≤ 2, x3 ≤ 4, x4 ≤ 2, x5 ≤ 1

Code:
clc
clear
f = [2; 7; 1; 5; 3];
Aeq = [-1 0 0 -1 0; 0 1 1 0 0; 1 -1 0 0 -1; 0 0 -1 1 1];
beq = [-4; 4; 0; 0];
lb = [0; 0; 0; 0; 0];
ub = [5; 2; 4; 2; 1];
[x,f,exitflag,output,lambda] = linprog(f,[],[],Aeq,beq,lb,ub);
xt = x'
total_cost = f
exitflag
equality_lambdas = lambda.eqlin'
UpperBound_lambdas = lambda.upper'

Results:
Optimization terminated.
xt = 2.0000  1.0000  3.0000  2.0000  1.0000
total_cost = 27.0000
exitflag = 1
equality_lambdas = 2.0000  -7.0000  0.0000  -6.0000
UpperBound_lambdas = 0.0000  0.0000  0.0000  3.0000  3.0000

Part 2: The upper-bound lambdas represent the sensitivity of the total cost to the capacity of a specific arc: if we increase the capacity of arc i, the cost changes by -λi·Δci, so the arcs with non-zero multipliers are the ones whose capacity limits the solution. Similarly, for the node-constraint lambdas, if we increase the demand of a specific node, the total cost changes by -λi·Δbi.
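The optimal flows can be double-checked independently; a Python sketch (not part of the original submission) with the same node-balance matrix and arc bounds as the MATLAB data above:

```python
# Hypothetical cross-check of the Problem 4 min-cost-flow LP.
import numpy as np
from scipy.optimize import linprog

p = [2, 7, 1, 5, 3]                      # arc costs
A_eq = [[-1, 0, 0, -1, 0],               # node-balance rows, as in the
        [0, 1, 1, 0, 0],                 # MATLAB Aeq above
        [1, -1, 0, 0, -1],
        [0, 0, -1, 1, 1]]
b_eq = [-4, 4, 0, 0]
ub = [5, 2, 4, 2, 1]                     # arc capacities

res = linprog(p, A_eq=A_eq, b_eq=b_eq, bounds=[(0, u) for u in ub])
print(res.x, res.fun)   # expected flows (2, 1, 3, 2, 1), total cost 27
```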

Problem 5: The code for the problem is given below.

Code (L2 reconstruction):
% This template generates synthetic data (missing pixels) to use in
% testing the procedures developed in Assignment 5.
% Unknown pixels are defined by the array called "unknown".
% Obscured image is in the array U1.

% Read a sample grayscale image
close all
clc
clear
U0 = double(imread('hovegray.png'));   % Image courtesy of S. Boyd
[m, n] = size(U0);

% Create 50% mask of known pixels and use it to obscure the original
rand('state', 1029);
unknown = rand(m,n) < 0.5;
U1 = U0.*(1-unknown) + 150.*unknown;

% Display images
figure(1); cla; colormap gray;
subplot(121); imagesc(U0); axis image; title('Original image');
subplot(122); imagesc(U1); axis image; title('Obscured image');

% my code from here
M = m; N = n;
n = M*N;
C = zeros(n,n);
for i = 2:M
    for j = 2:N
        k1 = M*(j-2)+i;
        k2 = M*(j-1)+i-1;
        k3 = M*(j-1)+i;
        C([k1,k2,k3],[k1,k2,k3]) = C([k1,k2,k3],[k1,k2,k3]) + [1 0 -1; 0 1 -1; -1 -1 2];
    end
end

p)].b] U2=Mbig\y. %test1=A*x-b. end end end figure size(A) spy(A) x=reshape(U1. for i=1:M for j=1:N if unknown(i.1).M*(j-1)+i)=1. p=M*N-p1. b(r)=U0(i.M*N. showx(U2(1:M*N)) Result of Denoising using L2 norm 9 .1). y=[zeros(M*N. A=zeros(M*N-p. zeros(p.1). b=zeros(p.j)==0 A(r.M*N). r=1.j)/255. A'. A. Mbig=[2*C .Linear and Nonlinear Optimization HW5 Naeemullah Khan & Sultan Albarakati p1=sum(sum(unknown)). r=r+1.

The code for denoising using the L1 norm is given below.

Code (L1 reconstruction):
% This template generates synthetic data (missing pixels) to use in
% testing the procedures developed in Assignment 5.
% Unknown pixels are defined by the array called "unknown".
% Obscured image is in the array U1.

% Read a sample grayscale image
tic
close all
clear
clc
M = 51; N = 59;
U0 = double(imread('hovegray.png'));   % Image courtesy of S. Boyd
[m, n] = size(U0);

% Create 50% mask of known pixels and use it to obscure the original
rand('state', 1029);
unknown = rand(m,n) < 0.5;
U1 = U0.*(1-unknown) + 150.*unknown;

% Display images
figure(1); cla; colormap gray;
subplot(121); imagesc(U0); axis image; title('Original image');
subplot(122); imagesc(U1); axis image; title('Obscured image');

% my code from here
p = M*N - sum(sum(unknown));
s = (M-1)*(N-1);
A = zeros(p, M*N);
b = zeros(p,1);
B = zeros(s, M*N);
C = zeros(s, M*N);
f = [zeros(1,M*N) ones(1,s) ones(1,s)];
r = 1;
for i = 1:M
    for j = 1:N
        if unknown(i,j) == 0
            A(r, M*(j-1)+i) = 1;
            b(r,1) = U0(i,j);
            r = r + 1;
        end
    end
end

r = 1;
for i = 2:M
    for j = 2:N
        B(r, M*(j-1)+i) = 1;
        B(r, M*(j-2)+i) = -1;
        C(r, M*(j-1)+i) = 1;
        C(r, M*(j-1)+i-1) = -1;
        r = r + 1;
    end
end
Mbig = [B -eye(s) zeros(s); -B -eye(s) zeros(s); C zeros(s) -eye(s); -C zeros(s) -eye(s)];
bbig = zeros(4*s,1);
Abig = [A zeros(p,s) zeros(p,s)];
xans = linprog(f, Mbig, bbig, Abig, b);
u3 = xans(1:M*N);
u3 = reshape(u3, M, N);
imagesc(u3)
toc

Result of denoising using the L1 norm: (figure not reproduced)
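The Mbig construction linearizes the absolute values: each gradient term g gets an auxiliary variable t with g - t ≤ 0 and -g - t ≤ 0, and the objective sums the t's. A 1-D Python sketch of the same encoding (a hypothetical 5-sample signal with pinned endpoints, not the assignment's 2-D problem):

```python
# L1 (total-variation) minimization as an LP: minimize sum |x[i+1]-x[i]|
# subject to x[0]=0 and x[4]=1, using auxiliary variables t >= |D x|.
import numpy as np
from scipy.optimize import linprog

n = 5
D = np.diff(np.eye(n), axis=0)        # first differences, m x n with m = n-1
m = n - 1

c = np.concatenate([np.zeros(n), np.ones(m)])            # minimize sum(t)
A_ub = np.block([[D, -np.eye(m)], [-D, -np.eye(m)]])     # D x <= t, -D x <= t
b_ub = np.zeros(2 * m)
A_eq = np.zeros((2, n + m))
A_eq[0, 0] = 1.0
A_eq[1, n - 1] = 1.0
b_eq = [0.0, 1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * (n + m))
print(res.fun)   # any monotone path from 0 to 1 has total variation 1
```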

Discussion: It can be seen from the results that, since the L2 norm penalizes large deviations more heavily, the reconstructed image is much smoother and the sharp edges are distorted. The L1 norm is much more effective at preserving boundaries and sharp edges, because it penalizes the absolute values of the gradient terms rather than their squares.
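This claim can be made concrete with a toy computation (Python, illustrative only): fill two missing samples of a step signal either sharply or with a ramp, and compare the two penalties.

```python
# Why L1 preserves edges: compare both penalties on a sharp step versus
# a smoothed ramp over the samples [0, 0, ?, ?, 1, 1].
import numpy as np

def l2_penalty(u):   # sum of squared first differences
    return float(np.sum(np.diff(u) ** 2))

def l1_penalty(u):   # sum of absolute first differences
    return float(np.sum(np.abs(np.diff(u))))

step = np.array([0, 0, 0, 1, 1, 1.0])        # sharp edge preserved
ramp = np.array([0, 0, 1/3, 2/3, 1, 1.0])    # edge smeared out

print(l2_penalty(step), l2_penalty(ramp))  # 1.0 vs 0.33...: L2 prefers the ramp
print(l1_penalty(step), l1_penalty(ramp))  # 1.0 vs 1.0: L1 is indifferent
```

Under the squared penalty the ramp is strictly cheaper, so the L2 solution smears edges; under the absolute penalty both fills cost the same, so nothing pushes the solution away from a sharp edge.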
