
Team:

1. Marisnelvys Cabreja Consuegra


2. Dannys García Miranda
Answers
Practice 13
1) The power method can be used to find the eigenvalue of a matrix A that is largest in absolute value; this eigenvalue is called the dominant eigenvalue of A. The matrix A is n x n, and the power method for approximating eigenvalues is iterative. For large powers, and by properly scaling the resulting sequence, we obtain a good approximation of the dominant eigenvector of A.
Below we show the code we have written for the power method. First we define the matrix whose eigenvector and largest-absolute-value eigenvalue we want to find, and, taking into account the characteristics of the method, we perform iterations of the form:
u_n = A * u_{n-1} / ||u_{n-1}||
We start from an arbitrary vector u_0, which in this case will be ones(n,1); the following limits then hold:
lim_{n→∞} ||u_n|| = y_max

lim_{n→∞} u_n / ||u_n|| = x_max
With this we have everything we need to start the iteration:

A=randn(5); % test matrix
n=size(A,1);
x=ones(n,1); % initial vector x0
y=norm(x,2); % y0: norm of the vector x0
for i=1:100 % fixed number of iterations, enough for convergence in these tests
    x=A*x/y;
    y=norm(x,2);
    xmax=x/y; % normalized eigenvector estimate
end
ymax=xmax'*A*xmax % Rayleigh quotient: dominant eigenvalue with its sign (y alone only gives |ymax|)
xmax % eigenvector of the dominant eigenvalue
[EV, DV] = eig(A); % EV: columns of eigenvectors, DV: diagonal of eigenvalues, for comparison
DV=diag(real(DV))
EV=real(EV)
2) For a 5x5 matrix we have:

As can be seen in the results obtained, the code calculates the maximum eigenvalue and its corresponding eigenvector very accurately: the output of the code closely matches that of MATLAB's eig command. We also conclude that the power method only finds the dominant eigenvalue (the largest in absolute value); to find the smallest one, the inverse power method must be applied, which also yields the corresponding eigenvector. A minimal sketch of that method is shown below.
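A minimal sketch of the inverse power method (our assumptions: A is invertible, its smallest-magnitude eigenvalue is real, and the iteration count is arbitrary). Solving with A instead of multiplying by it makes the iterates converge to the eigenvector of the eigenvalue of smallest absolute value:

A=randn(5); % test matrix (assumed invertible)
n=size(A,1);
x=ones(n,1); % initial vector x0
for i=1:200 % fixed number of iterations
    y=A\x; % one step of inverse iteration: solve A*y = x
    x=y/norm(y,2); % renormalize to a unit vector
end
ymin=x'*A*x % Rayleigh quotient: approximates the smallest-magnitude eigenvalue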

For an 8x8 matrix we have:

In this case the same happens as in the calculation of the maximum eigenvalue of the 5x5 matrix.
3) For the matrix [7 0 0; 0 2 -1; 0 2 5] we obtain:

As can be seen, the result obtained by the code is very accurate and close to the one obtained with the eig command.

For cases (i) and (ii):

These computations cannot be performed. The power method requires a square n x n matrix, but rand(3,1) is a 3x1 column vector: a column vector cannot multiply another column vector (the left factor would have to be a row vector), so the iteration x = A*x cannot even take its first step, as illustrated below. The same happens in the second case, where a three-row column vector would again have to be multiplied by another column vector.
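A minimal illustration of the failure (the exact error message depends on the MATLAB version):

A=rand(3,1); % not square: a 3x1 column vector
x=ones(3,1); % column initial vector
try
    y=A*x; % inner dimensions (3x1)*(3x1) do not agree
catch err
    disp(err.message) % the multiplication, and hence the iteration, fails at step one
end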

4) For the matrix [0 1; 1 0] and [a; b], a > 0, b > 0, a ≠ b:

This matrix swaps the two components of every column vector it multiplies, so the iterates repeat with their entries transposed again and again, and its norm is 1. Its eigenvalues are 1 and -1, both of absolute value 1, so there is no dominant eigenvalue; because of these characteristics, the only way to obtain the eigenvector corresponding to the maximum eigenvalue is to take an initial vector whose elements are equal and nonzero. A short sketch follows the tested initial vectors below.

x0=[1;2] x0=[1;3] x0=[2;5]
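A sketch of the oscillation, taking x0 = [1; 2] as an example (any a ≠ b behaves the same way):

A=[0 1; 1 0];
x=[1;2]/norm([1;2],2); % normalized initial vector with unequal entries
for i=1:4
    x=A*x; % swaps the two components; the 2-norm stays equal to 1
    disp(x') % alternates between [1 2]/sqrt(5) and [2 1]/sqrt(5), never converging
end
% With x0=[1;1] the iteration is stationary: [1;1] is the eigenvector of the eigenvalue 1.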


Practice 14
1) To create our algorithm for the shift-and-invert method, we will take into account the following theorems (a quick numerical check is sketched right after them):
a) If y is an eigenvalue and v a nonzero eigenvector of the matrix A, and a is any constant (we will take it close to y), then y - a and v form an eigenpair of the matrix (A - a*I).
b) If y is not equal to a, then 1/(y - a) and v form an eigenpair of the matrix (A - a*I)^(-1).
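A quick numerical check of both theorems, sketched on a random symmetric matrix so that the eigenvalues are real and easy to sort:

A=randn(4); A=A+A'; % symmetric test matrix: real eigenvalues
a=0.5; % arbitrary shift
E=eig(A); % eigenvalues y of A
[sort(eig(A-a*eye(4))) sort(E-a)] % theorem a): the two columns coincide
[sort(eig(inv(A-a*eye(4)))) sort(1./(E-a))] % theorem b): the two columns coincide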
Now, according to the shift-and-invert method: assume the n x n matrix A has distinct eigenvalues y1, y2, ..., yk, ..., yn, and choose the constant a so that h = 1/(j - a), with j = yn, is the dominant eigenvalue of (A - a*I)^(-1). If the initial vector X_0 is chosen appropriately, the sequence (X_k) defined by Y_k = (A - a*I)^(-1) * X_k, X_{k+1} = Y_k / norm(Y_k), converges to the corresponding eigenvector V_1 of the matrix (A - a*I)^(-1), and the Rayleigh quotient c_{k+1} = (Y_k' * X_k) / (X_k' * X_k) converges to the eigenvalue h. In conclusion, at the end of the iterations c_k converges to the dominant eigenvalue and X_k to the dominant eigenvector of the matrix (A - a*I)^(-1); therefore the corresponding eigenvalue of A is recovered as j = 1/h + a. Our code is as follows:

A=randi(100,1000); % 1000x1000 random integer matrix
n=size(A,1);
x=ones(n,1); % initial vector x0
I=eye(n);
E=eig(A); j=real(max(E)); % reference eigenvalue; can be any of the eigenvalues of A
a=95; % shift between 1 and 100
h=1/(j-a); % dominant eigenvalue of inv(A-a*I) when j is the eigenvalue closest to a
K=(A-a*I);
[L,U]=lu(K); % LU factorization computed once and reused in every iteration
t1=cputime;
for i=1:10 % number of iterations
    y=U\(L\x); % solve K*y = x using the LU factors
    c=(y'*x)/(x'*x); % Rayleigh quotient, converges to the dominant eigenvalue of inv(A-a*I)
    x=y/norm(y,inf); % this sequence converges to the eigenvector v of inv(A-a*I)
end
t2=cputime;
t=t2-t1
E=1/c+a % eigenvalue of A recovered from the converged Rayleigh quotient
[EV, DV] = eig(inv(A-a*I)); % EV: columns of eigenvectors, DV: diagonal of eigenvalues
EVmin=min(min(EV));
EVmax=max(max(EV));
xmin=min(min(x));
xmax=max(max(x));
DVmin=min(min(DV));
DVmax=max(max(DV));
disp(' Eigenvalue ')
[c DVmin DVmax]
disp(' Eigenvector ')
[min(x) xmin max(x) xmax]
t
norm(min(x)-xmin,2)

2)

A=randi(100,1000); % 1000x1000 random integer matrix
n=size(A,1);
x=ones(n,1); % initial vector x0
I=eye(n);
E=eig(A); j=real(max(E)); % reference eigenvalue; can be any of the eigenvalues of A
a=95; % shift between 1 and 100
h=1/(j-a); % dominant eigenvalue of inv(A-a*I) when j is the eigenvalue closest to a
K=(A-a*I);
t1=cputime;
for i=1:10 % number of iterations
    y=K\x; % backslash solve: K is refactorized at every iteration
    c=(y'*x)/(x'*x); % Rayleigh quotient, converges to the dominant eigenvalue of inv(A-a*I)
    x=y/norm(y,inf); % this sequence converges to the eigenvector v of inv(A-a*I)
end
t2=cputime;
t=t2-t1
E=1/c+a % eigenvalue of A recovered from the converged Rayleigh quotient
[EV, DV] = eig(inv(A-a*I)); % EV: columns of eigenvectors, DV: diagonal of eigenvalues
EVmin=min(min(EV));
EVmax=max(max(EV));
xmin=min(min(x));
xmax=max(max(x));
DVmin=min(min(DV));
DVmax=max(max(DV));
disp(' Eigenvalue ')
[c DVmin DVmax]
disp(' Eigenvector ')
[min(x) xmin max(x) xmax]
t
norm(min(x)-xmin,2)
3)

A=randi(100,1000); % 1000x1000 random integer matrix
n=size(A,1);
x=ones(n,1); % initial vector x0
I=eye(n);
E=eig(A); j=real(max(E)); % reference eigenvalue; can be any of the eigenvalues of A
a=95; % shift between 1 and 100
h=1/(j-a); % dominant eigenvalue of inv(A-a*I) when j is the eigenvalue closest to a
K=(A-a*I);
t1=cputime;
for i=1:10 % number of iterations
    y=inv(K)*x; % explicit inverse, recomputed at every iteration: the most expensive option
    c=(y'*x)/(x'*x); % Rayleigh quotient, converges to the dominant eigenvalue of inv(A-a*I)
    x=y/norm(y,inf); % this sequence converges to the eigenvector v of inv(A-a*I)
end
t2=cputime;
t=t2-t1
E=1/c+a % eigenvalue of A recovered from the converged Rayleigh quotient
[EV, DV] = eig(inv(A-a*I)); % EV: columns of eigenvectors, DV: diagonal of eigenvalues
EVmin=min(min(EV));
EVmax=max(max(EV));
xmin=min(min(x));
xmax=max(max(x));
DVmin=min(min(DV));
DVmax=max(max(DV));
disp(' Eigenvalue ')
[c DVmin DVmax]
disp(' Eigenvector ')
[min(x) xmin max(x) xmax]
t
norm(min(x)-xmin,2)

4) Shift = 95 (T: CPU time in seconds)

                           First method   Second method   Third method
Iter. 10      T                 0.1500          0.6400         1.2600
              min(x)-xmin            0               0              0
Iter. 100     T                 1.3000          5.2400         9.7400
              min(x)-xmin            0               0              0
Iter. 1000    T                11.5700         56.2100       119.5300
              min(x)-xmin            0               0              0
Iter. 10000   T               111.3100        568.6300       964.2900
              min(x)-xmin            0               0              0

The results obtained are as expected: as the number of iterations increased, the CPU time increased, and the same ordering holds across the methods, since the LU factorization method has the lowest computational cost (the factors are computed once and reused, while the second method refactorizes at every solve and the third forms the explicit inverse). Moreover, the eigenvector approximates the one computed by the eig command.

Vandermonde matrix

                           First method   Second method   Third method
Iter. 10      T                 0.0300          0.0300         0.0300
              min(x)-xmin            0               0              0
Iter. 100     T                 0.0300          0.0200         0.0300
              min(x)-xmin            0               0              0
Iter. 1000    T                 0.0400          0.0400         0.0400
              min(x)-xmin            0               0              0
Iter. 10000   T                 0.0800          0.0800         0.0700
              min(x)-xmin            0               0              0
This result was not expected, since the Vandermonde matrix is ill-conditioned: its condition number is very high, as checked below. The execution times are quite close for the three methods; this could be because the matrix has few non-zero elements, which reduces CPU usage.
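The ill-conditioning can be verified directly; a minimal sketch (an assumption on our part: the matrix is built with MATLAB's vander and may differ from the exact one used in the tests above):

v=linspace(1,2,20); % 20 sample points
A=vander(v); % 20x20 Vandermonde matrix
cond(A) % condition number: many orders of magnitude above 1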

Practice 15
1)
A=randi(100,1000); % 1000x1000 random integer matrix
n=size(A,1);
x=ones(n,1); % initial vector x0
I=eye(n);
E=eig(A); j=real(max(E)); % reference eigenvalue; can be any of the eigenvalues of A
a=95; % shift between 1 and 100
h=1/(j-a); % 1/(j-a), used below to report an eigenvalue of A
K=(A-a*I);
[L,U]=lu(K); % LU factorization computed once and reused in every iteration
t1=cputime;
for i=1:10 % number of iterations
    y=U\(L\x); % solve K*y = x using the LU factors
    c=x'*K*x; % modified quotient of this practice: taken against K itself, not against y
    x=y/norm(y,inf); % this sequence converges to the eigenvector v of inv(A-a*I)
end
t2=cputime;
t=t2-t1
E=1/h+a % eigenvalue of A (from the initial guess j, since c no longer approximates h)
[EV, DV] = eig(inv(A-a*I)); % EV: columns of eigenvectors, DV: diagonal of eigenvalues
EVmin=min(min(EV));
EVmax=max(max(EV));
xmin=min(min(x));
xmax=max(max(x));
DVmin=min(min(DV));
DVmax=max(max(DV));
disp(' Eigenvalue ')
[c DVmin DVmax]
disp(' Eigenvector ')
[min(x) xmin max(x) xmax]
t
norm(c-DVmax,2)

2)

A=randi(100,1000); % 1000x1000 random integer matrix
n=size(A,1);
x=ones(n,1); % initial vector x0
I=eye(n);
E=eig(A); j=real(max(E)); % reference eigenvalue; can be any of the eigenvalues of A
a=95; % shift between 1 and 100
h=1/(j-a); % 1/(j-a), used below to report an eigenvalue of A
K=(A-a*I);
t1=cputime;
for i=1:10 % number of iterations
    y=K\x; % backslash solve: K is refactorized at every iteration
    c=x'*K*x; % modified quotient of this practice: taken against K itself, not against y
    x=y/norm(y,inf); % this sequence converges to the eigenvector v of inv(A-a*I)
end
t2=cputime;
t=t2-t1
E=1/h+a % eigenvalue of A (from the initial guess j, since c no longer approximates h)
[EV, DV] = eig(inv(A-a*I)); % EV: columns of eigenvectors, DV: diagonal of eigenvalues
EVmin=min(min(EV));
EVmax=max(max(EV));
xmin=min(min(x));
xmax=max(max(x));
DVmin=min(min(DV));
DVmax=max(max(DV));
disp(' Eigenvalue ')
[c DVmin DVmax]
disp(' Eigenvector ')
[min(x) xmin max(x) xmax]
t
norm(c-DVmax,2)

3)

A=randi(100,1000); % 1000x1000 random integer matrix
n=size(A,1);
x=ones(n,1); % initial vector x0
I=eye(n);
E=eig(A); j=real(max(E)); % reference eigenvalue; can be any of the eigenvalues of A
a=95; % shift between 1 and 100
h=1/(j-a); % 1/(j-a), used below to report an eigenvalue of A
K=(A-a*I);
t1=cputime;
for i=1:10 % number of iterations
    y=inv(K)*x; % explicit inverse, recomputed at every iteration: the most expensive option
    c=x'*K*x; % modified quotient of this practice: taken against K itself, not against y
    x=y/norm(y,inf); % this sequence converges to the eigenvector v of inv(A-a*I)
end
t2=cputime;
t=t2-t1
E=1/h+a % eigenvalue of A (from the initial guess j, since c no longer approximates h)
[EV, DV] = eig(inv(A-a*I)); % EV: columns of eigenvectors, DV: diagonal of eigenvalues
EVmin=min(min(EV));
EVmax=max(max(EV));
xmin=min(min(x));
xmax=max(max(x));
DVmin=min(min(DV));
DVmax=max(max(DV));
disp(' Eigenvalue ')
[c DVmin DVmax]
disp(' Eigenvector ')
[min(x) xmin max(x) xmax]
t
norm(c-DVmax,2)

                           First method   Second method   Third method
Iter. 10      T                 0.1500          0.5100         1.2300
              min(e)-DVmin  1.0661e+03      1.5528e+03     1.5340e+03

The results obtained are not as expected, since the difference between the eigenvalues is enormous; the LU factorization method is again the one with the lowest computational cost.

Vandermonde matrix

                           First method   Second method   Third method
Iter. 10      T                 0.0400          0.0300         0.0200
              min(x)-xmin     102.0217        102.0217       102.0217

This result is analogous in execution time to the previous case, but the difference between the eigenvalues is again enormous.

3) The code is included in each exercise. The precision is correct with the original Rayleigh quotient; after the change described above it is not accurate at all (see the sketch below).
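This loss of accuracy is consistent with the modification: x is normalized in the infinity norm and the product is taken against K = A - a*I itself, so x'*K*x approximates the shifted eigenvalue of K scaled by x'*x, not h = 1/(j - a). A minimal sketch of the repaired line inside the loop, restoring the Practice 14 form:

% inside the iteration, replace
%   c = x'*K*x;
% with the quotient taken against the solve result y and normalized by x'*x:
c=(y'*x)/(x'*x); % converges to h, the dominant eigenvalue of inv(A-a*I)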

4) The error in the first iterations is zero, which agrees with the theory, since the eigenvalues and eigenvectors converge.

5)
norm(A,2)

ans =

1.0829e+11
The norm of A is huge, but so are the elements of A, so this does not affect the precision: the eigenvalues and eigenvectors still converge.
