$$\lim_{n\to\infty}\frac{u_n}{\|u_n\|} = x_{\max}$$
hence we already have everything we need to set up the iteration:
A=randn(5);             % random 5x5 test matrix
n=size(A,1);
x=ones(n,1);            % initial vector x0
y=norm(x,2);            % y0: 2-norm of x0
for i=1:100             % number of iterations (100 is more than enough here)
    x=A*x/y;
    y=norm(x,2);
    xmax=x/y;           % normalized iterate
end
ymax=xmax'*A*xmax       % Rayleigh quotient: dominant eigenvalue WITH its sign
                        % (y=norm(x,2) is never negative, so a test "if y<0"
                        % could never detect a negative eigenvalue)
if ymax<0
    xmax=-xmax;         % flip the eigenvector to match the sign convention
end
xmax                    % eigenvector of the largest-magnitude eigenvalue
[EV,DV]=eig(A);         % EV: eigenvector columns, DV: diagonal eigenvalue matrix
DV=diag(real(DV))
EV=real(EV)
2) For a 5x5 matrix we obtain:
As can be seen from the results obtained, the code computes the maximum eigenvalue and the corresponding eigenvector very accurately; the values returned by the code agree closely with those returned by MATLAB's eig command. It also follows that the power method only finds the dominant eigenvalue (the largest in absolute value); to find the smallest eigenvalue, the inverse power method must be applied, which also yields the corresponding eigenvector.
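The behavior described above can be illustrated with a minimal pure-Python sketch of the power method (a hypothetical 2x2 symmetric matrix with known eigenvalues 3 and 1, not the report's random 5x5 MATLAB matrix):

```python
# Minimal power-method sketch on a hypothetical 2x2 symmetric matrix
# with known eigenvalues 3 and 1 (not the report's random MATLAB matrix).

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def norm2(x):
    return sum(v * v for v in x) ** 0.5

def power_method(A, x, iters=50):
    for _ in range(iters):
        y = mat_vec(A, x)
        x = [v / norm2(y) for v in y]   # normalize at every step
    # Rayleigh quotient of the unit iterate: signed dominant eigenvalue
    return sum(u * v for u, v in zip(x, mat_vec(A, x)))

A = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 3 (vector [1,1]) and 1
lam = power_method(A, [1.0, 0.0])
print(round(lam, 6))           # close to the dominant eigenvalue 3
```

Only the dominant eigenvalue 3 is found this way; the smaller eigenvalue 1 would require applying the same iteration to the inverse of the matrix, exactly as the report does in the next practice.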
In this case the same thing happens as in the computation of the maximum eigenvalue of the 5x5 matrix.
3) For the matrix [7 0 0; 0 2 -1; 0 2 5] we obtain:
As can be seen, the result obtained by the code is very accurate and close to the one obtained with the eig command.
These calculations cannot be carried out. To multiply the column vector rand(3,1) by another vector, that vector would have to be a row vector, but our initial vector is a column vector, so the product is undefined. Furthermore, the matrix must be square (n x n) for the power method to be applicable. In the second case the same problem occurs: a three-element column vector would have to be multiplied by another column vector.
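The inner-dimension argument above can be sketched with a simple shape check (hypothetical helper, pure Python):

```python
# Dimension-check sketch: a matrix-vector product A*x requires the inner
# dimensions to match; the power method additionally needs A square.

def can_multiply(shape_a, shape_b):
    # (rows, cols) * (rows, cols): valid only if cols of A == rows of B
    return shape_a[1] == shape_b[0]

print(can_multiply((3, 3), (3, 1)))  # True: square matrix times column
print(can_multiply((3, 1), (3, 1)))  # False: column times column
```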
This matrix has the property that any column vector multiplied by it is returned with its entries permuted, over and over; its norm is 1, and because of this the only way to obtain the eigenvector corresponding to the maximum eigenvalue is to take an initial vector whose entries are all equal and nonzero.
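Assuming the matrix in question is a permutation matrix, as the description suggests, a small pure-Python sketch shows why an arbitrary start vector only cycles while the all-ones vector is already the eigenvector of the dominant eigenvalue 1:

```python
# Sketch assuming a cyclic permutation matrix (hypothetical 3x3 example):
# power iteration with a one-hot start just cycles the entries forever,
# while the all-ones vector is a fixed point (eigenvector, eigenvalue 1).

P = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]   # cyclic permutation matrix

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

x = [1.0, 0.0, 0.0]
seq = []
for _ in range(3):
    x = mat_vec(P, x)
    seq.append(x)
print(seq[-1])            # back to the start vector: the iteration cycles

ones = [1.0, 1.0, 1.0]
print(mat_vec(P, ones))   # unchanged: eigenvector for eigenvalue 1
```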
A=randi(100,1000);        % random 1000x1000 integer matrix
n=size(A,1);
x=ones(n,1);              % initial vector x0
I=eye(n);
E=eig(A); j=real(max(E)); % initial guess for the eigenvalue; can be any eigenvalue of A
a=95;                     % shift between 1 and 100
h=1/(j-a);                % dominant eigenvalue of inv(A-a*I)
K=(A-a*I);
[L,U]=lu(K);              % factor once, reuse the factors at every iteration
t1=cputime;
for i=1:10                % number of iterations
    y=U\(L\x);            % solve K*y = x with the LU factors
    c=(y'*x)/(x'*x);      % Rayleigh coefficient; converges to h, the dominant eigenvalue of inv(A-a*I)
    x=y/norm(y,inf);      % this sequence converges to the eigenvector v of inv(A-a*I)
end
t2=cputime;
t=t2-t1
E=1/h+a                   % corresponding eigenvalue of A
[EV,DV]=eig(inv(A-a*I));  % EV: eigenvector columns, DV: diagonal of eigenvalues
EVmin=min(min(EV));
EVmax=max(max(EV));
xmin=min(min(x));
xmax=max(max(x));
DVmin=min(min(DV));
DVmax=max(max(DV));
disp(' Eigenvalue ')
[c DVmin DVmax]
disp(' Eigenvector ')
[min(x) xmin max(x) xmax]
t
norm(min(x)-xmin,2)
2)
A=randi(100,1000);        % random 1000x1000 integer matrix
n=size(A,1);
x=ones(n,1);              % initial vector x0
I=eye(n);
E=eig(A); j=real(max(E)); % initial guess for the eigenvalue; can be any eigenvalue of A
a=95;                     % shift between 1 and 100
h=1/(j-a);                % dominant eigenvalue of inv(A-a*I)
K=(A-a*I);
t1=cputime;
for i=1:10                % number of iterations
    y=K\x;                % backslash solve of K*y = x (K is refactorized at every iteration)
    c=(y'*x)/(x'*x);      % Rayleigh coefficient; converges to h, the dominant eigenvalue of inv(A-a*I)
    x=y/norm(y,inf);      % this sequence converges to the eigenvector v of inv(A-a*I)
end
t2=cputime;
t=t2-t1
E=1/h+a                   % corresponding eigenvalue of A
[EV,DV]=eig(inv(A-a*I));  % EV: eigenvector columns, DV: diagonal of eigenvalues
EVmin=min(min(EV));
EVmax=max(max(EV));
xmin=min(min(x));
xmax=max(max(x));
DVmin=min(min(DV));
DVmax=max(max(DV));
disp(' Eigenvalue ')
[c DVmin DVmax]
disp(' Eigenvector ')
[min(x) xmin max(x) xmax]
t
norm(min(x)-xmin,2)
3)
A=randi(100,1000);        % random 1000x1000 integer matrix
n=size(A,1);
x=ones(n,1);              % initial vector x0
I=eye(n);
E=eig(A); j=real(max(E)); % initial guess for the eigenvalue; can be any eigenvalue of A
a=95;                     % shift between 1 and 100
h=1/(j-a);                % dominant eigenvalue of inv(A-a*I)
K=(A-a*I);
t1=cputime;
for i=1:10                % number of iterations
    y=inv(K)*x;           % explicit inverse: the most expensive variant
    c=(y'*x)/(x'*x);      % Rayleigh coefficient; converges to h, the dominant eigenvalue of inv(A-a*I)
    x=y/norm(y,inf);      % this sequence converges to the eigenvector v of inv(A-a*I)
end
t2=cputime;
t=t2-t1
E=1/h+a                   % corresponding eigenvalue of A
[EV,DV]=eig(inv(A-a*I));  % EV: eigenvector columns, DV: diagonal of eigenvalues
EVmin=min(min(EV));
EVmax=max(max(EV));
xmin=min(min(x));
xmax=max(max(x));
DVmin=min(min(DV));
DVmax=max(max(DV));
disp(' Eigenvalue ')
[c DVmin DVmax]
disp(' Eigenvector ')
[min(x) xmin max(x) xmax]
t
norm(min(x)-xmin,2)
4) Shift = 95

min(x) - xmin: 0 0 0 (in every run)
Iterations: 1000; CPU time t: 11.5700 (LU), 56.2100 (backslash), 119.5300 (inv(K))
The results obtained are as expected: as the number of iterations increased, the CPU time increased, and the same ordering held across the methods, since the LU factorization method has the lowest computational cost. Moreover, the computed eigenvector approximates the one returned by the eig command.
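The cost argument can be made concrete with a rough operation-count sketch (assumed textbook dense-matrix costs of about (2/3)n^3 operations for an LU factorization and about 2n^2 for the pair of triangular solves; illustrative only, not a measurement):

```python
# Rough operation-count sketch: factoring K = L*U once outside the loop
# versus re-solving K\x from scratch at every iteration.
# Costs below are standard dense-matrix estimates (assumption).

def flops_factor(n):      # LU factorization: ~ (2/3) n^3 operations
    return 2 * n**3 // 3

def flops_solve(n):       # two triangular solves: ~ 2 n^2 operations
    return 2 * n**2

n, iters = 1000, 10
lu_once = flops_factor(n) + iters * flops_solve(n)
refactor_each = iters * (flops_factor(n) + flops_solve(n))
print(lu_once, refactor_each)   # factoring once is roughly 10x cheaper here
```

This matches the measured ordering above: the prefactored LU loop is fastest, backslash (which refactorizes each time) is slower, and inv(K) is the slowest.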
Vandermonde matrix
min(x) - xmin: 0 0 0 (in every run)
This result was not expected, since the Vandermonde matrix is ill-conditioned: its condition number is very high. The execution times are quite close for the three methods used; this may be because the matrix has few non-zero elements, so the CPU load is reduced.
Practice 15
1)
A=randi(100,1000);        % random 1000x1000 integer matrix
n=size(A,1);
x=ones(n,1);              % initial vector x0
I=eye(n);
E=eig(A); j=real(max(E)); % initial guess for the eigenvalue; can be any eigenvalue of A
a=95;                     % shift between 1 and 100
h=1/(j-a);                % dominant eigenvalue of inv(A-a*I)
K=(A-a*I);
[L,U]=lu(K);              % factor once, reuse the factors at every iteration
t1=cputime;
for i=1:10                % number of iterations
    y=U\(L\x);            % solve K*y = x with the LU factors
    c=x'*K*x;             % modified coefficient x'*K*x, used here in place of the Rayleigh quotient (y'*x)/(x'*x)
    x=y/norm(y,inf);      % this sequence converges to the eigenvector v of inv(A-a*I)
end
t2=cputime;
t=t2-t1
E=1/h+a                   % corresponding eigenvalue of A
[EV,DV]=eig(inv(A-a*I));  % EV: eigenvector columns, DV: diagonal of eigenvalues
EVmin=min(min(EV));
EVmax=max(max(EV));
xmin=min(min(x));
xmax=max(max(x));
DVmin=min(min(DV));
DVmax=max(max(DV));
disp(' Eigenvalue ')
[c DVmin DVmax]
disp(' Eigenvector ')
[min(x) xmin max(x) xmax]
t
norm(c-DVmax,2)
2)
A=randi(100,1000);        % random 1000x1000 integer matrix
n=size(A,1);
x=ones(n,1);              % initial vector x0
I=eye(n);
E=eig(A); j=real(max(E)); % initial guess for the eigenvalue; can be any eigenvalue of A
a=95;                     % shift between 1 and 100
h=1/(j-a);                % dominant eigenvalue of inv(A-a*I)
K=(A-a*I);
t1=cputime;
for i=1:10                % number of iterations
    y=K\x;                % backslash solve of K*y = x (K is refactorized at every iteration)
    c=x'*K*x;             % modified coefficient x'*K*x, used here in place of the Rayleigh quotient (y'*x)/(x'*x)
    x=y/norm(y,inf);      % this sequence converges to the eigenvector v of inv(A-a*I)
end
t2=cputime;
t=t2-t1
E=1/h+a                   % corresponding eigenvalue of A
[EV,DV]=eig(inv(A-a*I));  % EV: eigenvector columns, DV: diagonal of eigenvalues
EVmin=min(min(EV));
EVmax=max(max(EV));
xmin=min(min(x));
xmax=max(max(x));
DVmin=min(min(DV));
DVmax=max(max(DV));
disp(' Eigenvalue ')
[c DVmin DVmax]
disp(' Eigenvector ')
[min(x) xmin max(x) xmax]
t
norm(c-DVmax,2)
3)
A=randi(100,1000);        % random 1000x1000 integer matrix
n=size(A,1);
x=ones(n,1);              % initial vector x0
I=eye(n);
E=eig(A); j=real(max(E)); % initial guess for the eigenvalue; can be any eigenvalue of A
a=95;                     % shift between 1 and 100
h=1/(j-a);                % dominant eigenvalue of inv(A-a*I)
K=(A-a*I);
t1=cputime;
for i=1:10                % number of iterations
    y=inv(K)*x;           % explicit inverse: the most expensive variant
    c=x'*K*x;             % modified coefficient x'*K*x, used here in place of the Rayleigh quotient (y'*x)/(x'*x)
    x=y/norm(y,inf);      % this sequence converges to the eigenvector v of inv(A-a*I)
end
t2=cputime;
t=t2-t1
E=1/h+a                   % corresponding eigenvalue of A
[EV,DV]=eig(inv(A-a*I));  % EV: eigenvector columns, DV: diagonal of eigenvalues
EVmin=min(min(EV));
EVmax=max(max(EV));
xmin=min(min(x));
xmax=max(max(x));
DVmin=min(min(DV));
DVmax=max(max(DV));
disp(' Eigenvalue ')
[c DVmin DVmax]
disp(' Eigenvector ')
[min(x) xmin max(x) xmax]
t
norm(c-DVmax,2)
The results obtained are not as expected, since the difference between the eigenvalues is enormous; the LU factorization method again has the lowest computational cost.
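The size of the discrepancy is consistent with dropping the normalization: on a vector that is not of unit length, x'*A*x differs from the Rayleigh quotient (x'*A*x)/(x'*x) by the factor x'*x. A minimal pure-Python sketch on a hypothetical 2x2 example:

```python
# Sketch of why an unnormalized x'*A*x differs wildly from the Rayleigh
# quotient (x'*A*x)/(x'*x) unless x is a unit vector (hypothetical 2x2).

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[2.0, 0.0], [0.0, 1.0]]
x = [3.0, 0.0]                               # eigenvector of A, not normalized
rayleigh = dot(x, mat_vec(A, x)) / dot(x, x) # = 2.0, the true eigenvalue
unnormalized = dot(x, mat_vec(A, x))         # = 18.0, off by the factor x'*x = 9
print(rayleigh, unnormalized)
```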
Vandermonde matrix
This result is analogous to the previous case in execution time, but the difference between the eigenvalues is again enormous.
3) The code is included with each exercise. The precision is correct with the original coefficient; after the changes made in this last section, it is not accurate at all.
4) The error in the first iterations is zero, which agrees with the theory, since the eigenvalues and eigenvectors converge.
5)
norm(A,2)
ans =
1.0829e+11
The norm of A is huge, but so are the elements of A, so this does not affect the precision; the eigenvalues and eigenvectors converge.