
Team:

1. Marisnelvys Cabreja Consuegra
2. Dannys García Miranda

Answers
Practice 24
To import the places.txt file, we click on the file name, which takes us to the
Imported Data section, where, to shorten steps, we choose how we want the data
to be imported. The selected option was Numeric Matrix, which imports the
selected data as an m-by-n numeric array. In this case, the places array of
size 329x9 was imported. We are then ready to perform operations on this
matrix, which we will call 'x'.
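
As an alternative to the interactive Import Tool, the same matrix can be loaded
programmatically; this is a minimal sketch, assuming places.txt contains only
the 329x9 numeric block, whitespace-delimited:

% Programmatic alternative to the Import Tool
x = readmatrix('places.txt'); % 329-by-9 numeric matrix
[a,b] = size(x)               % a = 329 observations, b = 9 variables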

%Practice_24_25
x = places;
[a,b] = size(x);

%1)
% Standardize the data: subtract the column mean and divide by the
% column standard deviation, variable by variable.
m = mean(x); % row vector of column means
d = std(x);  % row vector of column standard deviations
xstd = zeros(a,b);
for i = 1:b
    xstd(1:a,i) = (x(1:a,i) - m(1,i))./d(1,i);
end
% xstd is the standardized matrix
Co = cov(xstd);     % covariance matrix of the standardized data
[Var,M] = var(xstd) % Var: column variances (all 1); M: column means (all 0)
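
The same standardization can be obtained in one call with zscore (a quick
cross-check of the loop above, assuming the Statistics and Machine Learning
Toolbox is available):

% Cross-check of the standardization loop with zscore
xstd2 = zscore(x);           % column-wise (x - mean(x))./std(x)
max(abs(xstd(:) - xstd2(:))) % should be ~0 up to rounding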

%2)
[U,S,V] = svd(xstd); % singular value decomposition of the standardized matrix

%3)
[coeff,score,latent,tsquared,explained,mu] = pca(x);
scatter3(score(:,1),score(:,2),score(:,3))
axis equal
xlabel('1st Principal Component')
ylabel('2nd Principal Component')
zlabel('3rd Principal Component')

%4
biplot(coeff(:,1:2),'scores',score(:,1:2),'varlabels',{'v_1','v_2','v_3','v_4','v_5','v_6','v_7','v_8','v_9'});

%5
Xcentered = score*coeff'
%The new data in Xcentered are the original data centered by subtracting the
%column means from the corresponding columns.
coeff: the principal component coefficients, also known as loadings, for the
n-by-p data matrix X. The rows of X correspond to observations and the columns
to variables. The coefficient matrix is p-by-p; each column of coeff contains
the coefficients for one principal component, and the columns are in descending
order of component variance. By default, pca centers the data and uses the
singular value decomposition (SVD) algorithm. Additional Name,Value pair
arguments can specify, for example, the number of principal components that pca
returns or the use of an algorithm other than SVD.
score,latent: the principal component scores in score and the principal
component variances in latent. The scores are the representations of X in the
principal component space; the rows of score correspond to observations and the
columns to components. The variances of the principal components are the
eigenvalues of the covariance matrix of X.
tsquared: the Hotelling's T-squared statistic for each observation in X.
explained,mu: explained contains the percentage of the total variance explained
by each principal component, and mu is the estimated mean of each variable in X.
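
These relationships can be checked directly on the outputs above (a small
sketch of the identities pca guarantees, using the names already computed):

% Consistency checks on the pca outputs
norm(Xcentered - (x - mu), 'fro')             % score*coeff' reproduces the centered data
norm(latent - var(score)', inf)               % latent holds the variances of the scores
norm(explained - 100*latent/sum(latent), inf) % explained is latent as a percentage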

explained =

75.2903
13.5940
5.0516
3.3194
1.4752
0.7428
0.2862
0.2066
0.0338

The first principal component is a single axis in space. When you project each
observation onto that axis, the resulting values form a new variable, and the
variance of this variable is the maximum among all possible choices of the
first axis. The second principal component is another axis in space,
perpendicular to the first. Projecting the observations onto this axis
generates another new variable, whose variance is the maximum among all
possible choices of this second axis.
The full set of principal components is the same size as the original set of
variables. However, it is common for the sum of the variances of the first few
principal components to exceed 80% of the total variance of the original data.
Here this already happens with the first two components (88.9%), and the first
three components explain 93.9% of all the variability. The representation of
the data in the space of the first three principal components is then
visualized.
The data show the greatest variability along the first principal component
axis; this is the largest possible variance among all possible choices of the
first axis. The variability along the second principal component axis is the
largest among all remaining choices of the second axis. The third principal
component axis has the third largest variability, which is significantly less
than the variability along the second axis. The remaining principal component
axes, fourth through ninth, each explain a low percentage of the total
variability of the data.
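
The percentages quoted above come straight from the explained vector; a quick
sketch:

% Cumulative variance explained by the leading components
cumsum(explained) % first two: 88.88, first three: 93.94, ...
pareto(explained) % bar chart of explained variance with cumulative line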

Mean: the estimated mean mu returned by pca coincides with the column mean m
computed in part 1:
mu =

1.0e+03 *

0.5387 8.3466 1.1857 0.9611 4.2101 2.8149 3.1509 1.8460 5.5254

m =

1.0e+03 *

0.5387 8.3466 1.1857 0.9611 4.2101 2.8149 3.1509 1.8460 5.5254

coeff =

0.0064 0.0155 -0.0067 -0.0263 0.0163 -0.0012 0.0814 0.0421 0.9951
0.2691 0.9372 -0.0826 -0.1778 -0.0838 -0.0486 0.0267 0.0121 -0.0229
0.1783 -0.0205 0.0278 -0.0266 -0.1591 0.9295 0.1371 -0.2414 0.0014
0.0281 -0.0109 0.0376 0.0990 0.1160 -0.0540 0.9448 0.2668 -0.0877
0.1493 0.0188 0.9715 -0.0384 -0.1466 -0.0922 -0.0135 -0.0415 0.0094
0.0252 -0.0014 0.0415 0.0216 -0.1063 0.2532 -0.2412 0.9292 -0.0169
0.9309 -0.2823 -0.1510 0.0278 0.0087 -0.1676 -0.0430 0.0159 0.0006
0.0698 0.1038 0.1496 0.0690 0.9543 0.1733 -0.1271 0.0188 -0.0050
0.0251 0.1734 0.0127 0.9745 -0.1022 0.0052 -0.0702 -0.0544 0.0327
score =

1.0e+04 *

-0.2760 -0.1068 0.0260 0.2373 -0.0296 -0.0478 -0.0208 -0.0012 0.0103
0.2388 -0.0994 0.0403 -0.1042 0.0767 0.0020 -0.0048 -0.0395 0.0020
-0.3418 -0.0293 -0.1285 -0.0161 -0.0491 -0.0073 0.0252 -0.0093 -0.0066
0.1722 -0.0754 0.2394 0.0304 -0.0738 -0.0120 -0.0551 0.0272 0.0007
0.1813 -0.0195 0.2237 0.0224 0.0308 0.0335 0.0344 0.0107 0.0096
-0.3731 -0.1727 -0.1226 0.0113 -0.0273 0.0158 -0.0134 0.0211 0.0033
-0.1142 0.0017 -0.1296 -0.0452 -0.0456 -0.0284 -0.0403 0.0375 0.0031
-0.2011 -0.1264 0.1057 0.0462 -0.0547 0.0044 -0.0238 -0.0000 0.0079
-0.3497 -0.1471 0.0495 -0.1042 -0.0265 -0.0210 -0.0386 -0.0017 0.0080
-0.2286 -0.1076 0.1000 0.0911 -0.0651 -0.0256 0.0100 0.0058 0.0133
0.4606 0.6756 -0.1032 -0.0551 0.0464 0.0250 0.0044 -0.0026 0.0182
0.0438 0.4327 0.0840 0.1473 0.0585 -0.0580 0.0064 -0.0274 -0.0375
-0.3688 -0.2198 -0.0890 -0.1724 -0.0223 -0.0123 0.0265 -0.0119 -0.0016
-0.3889 -0.1982 -0.1006 -0.0867 -0.0568 0.0194 0.0302 0.0083 0.0060
0.4965 0.1094 -0.1524 -0.1227 0.0193 0.0633 0.0056 0.0398 -0.0108
-0.3798 -0.2045 -0.0606 -0.0339 -0.0008 -0.0109 0.0434 -0.0397 0.0046
-0.2159 0.0118 -0.0505 -0.0178 0.0629 0.0223 -0.0436 0.0013 -0.0114
-0.1977 -0.1092 0.1193 -0.0176 -0.0329 0.0321 -0.0318 -0.0012 0.0221
… … … … … … … … …
latent =

1.0e+07 *

2.4414
0.4408
0.1638
0.1076
0.0478
0.0241
0.0093
0.0067
0.0011

Each column of score corresponds to one principal component. The latent vector
stores the variances of the nine principal components.
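
The latent variances can also be recovered from an SVD of the centered data
(note that the SVD in %2 used the standardized matrix instead); a sketch using
the names defined above:

% latent equals the squared singular values of the centered data,
% divided by (number of observations - 1)
[~,Sc,~] = svd(x - mean(x), 'econ');
latent_check = diag(Sc).^2/(a-1); % matches latent from pca(x)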

tsquared =

8.4050
5.1931
3.2688
9.4194
6.1923
3.3970
5.9722
3.2165
4.6678
4.7782
15.9169
22.7917
6.0554
5.0675
9.3016
6.4867
4.6551
7.5485
5.8749
21.3506
15.4678
7.5842
7.2807
9.8120
5.4753
13.8910
3.9544
10.3084
4.6434
4.3817
7.7022



S =

33.4353 0 0 0 0 0 0 0 0
0 19.9546 0 0 0 0 0 0 0
0 0 19.3496 0 0 0 0 0 0
0 0 0 17.3799 0 0 0 0 0
0 0 0 0 15.7187 0 0 0 0
0 0 0 0 0 14.3814 0 0 0
0 0 0 0 0 0 12.7169 0 0
0 0 0 0 0 0 0 10.2136 0
0 0 0 0 0 0 0 0 6.2843

%4) Each of the nine variables is represented in this biplot by a vector; the
direction and length of each vector indicate how that variable contributes to
the two principal components in the graph.
Practice 25
We assume the matrix defining the problem is of full rank. Given A ∈ ℂ^(m×n) of
full rank, m ≥ n, and b ∈ ℂ^m, find x ∈ ℂ^n such that ||b − Ax|| is minimized.
The solution x and the corresponding point y = Ax that is closest to b in
range(A) are given by x = A⁺b, y = Pb, where A⁺ ∈ ℂ^(n×m) is the pseudoinverse
of A and P = AA⁺ ∈ ℂ^(m×m) is the orthogonal projector onto range(A).

sin(θ) = ||r||/||b|| = ||b − Ax||/||b|| = ||b − y||/||b|| → θ = asin(||b − y||/||b||)

cos(θ) = ||y||/||b|| → θ = acos(||y||/||b||)

η = (||A||·||x||)/||Ax|| = (||A||·||x||)/||y||
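
A minimal standalone sketch of these quantities on an arbitrary full-rank
example (random data chosen only for illustration, independent of the script
below):

% x = pinv(A)*b, projector P = A*pinv(A), and the quantities theta and eta
A0 = randn(100,15); b0 = randn(100,1); % arbitrary full-rank example
x0 = pinv(A0)*b0;                      % least squares solution
P0 = A0*pinv(A0);                      % orthogonal projector onto range(A0)
y0 = P0*b0;                            % equals A0*x0, the point closest to b0
theta0 = asin(norm(b0 - y0)/norm(b0)); % angle between b0 and range(A0)
eta0 = norm(A0)*norm(x0)/norm(y0);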

The map from vectors of coefficients of polynomials p of degree < n = 15 to
vectors (p(x1), p(x2), …, p(xm)) of sampled polynomial values is linear. Any
such linear map can be expressed by an m×n Vandermonde matrix
A = [x.^0 x.^1 … x.^14], where x = t.
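
To see that this matrix really evaluates the polynomial, A*c can be compared
against polyval (a sketch with a hypothetical coefficient vector c in
ascending powers; tt and Av are local names for this check only):

% Av*c evaluates the degree-14 polynomial whose ascending coefficients are c
tt = ((0:99)/99)';                        % sample nodes in [0,1]
Av = fliplr(vander(tt)); Av = Av(:,1:15); % powers 0 through 14 of tt
c = randn(15,1);                          % arbitrary coefficient vector
max(abs(Av*c - polyval(flipud(c), tt)))   % ~0: both give the same values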

So:

%1

n = 15, so the polynomial degree is 14 < 15; m = 100 and x = t. The structure
of the coefficient matrix A is A = [t.^0 t.^1 t.^2 … t.^14], i.e.
A = fliplr(vander(t)) keeping the first 15 columns. The right-hand side b of
the LSP problem Ax = b, obtained after imposing that the 100 nodes of the form
(ti, f(ti)) are in the graph of the polynomial p(x), is b = exp(sin(4*t)),
because we require p(ti) ≈ f(ti). The size of the matrix A is m×n (100×15) and
the vector b is m×1 (100×1).

%2
m = 100;               % number of nodes
n = 15;                % number of coefficients (polynomial degree 14 < 15)
t = (0:1:99)/99;       % discretization of [0,1]: the values 0-99 divided by 99
A = fliplr(vander(t)); % Vandermonde matrix with ascending powers of t
A = A(1:100,1:15);     % vander returns an m-by-m matrix; keep the m-by-n (100x15) block
b = exp(sin(4*t'));    % right-hand side sampled from f(t) = exp(sin(4t))
bnorm = b/2006.787453080206; % scale b so that the exact 15th coefficient equals 1

%3
x_inv = pinv(A)*bnorm; % solution via the pseudoinverse
x15_inv = x_inv(15,1)  % 1.0000

%4 5
[Q,R] = qr(A);      % QR factorization of A
x_qr = R\(Q\bnorm); % solution via QR
x15_qr = x_qr(15,1) % 1.0000

%6
x_md = A\bnorm;     % solution via backslash (mldivide)
x15_md = x_md(15,1) % 1.0000

%7
x = A\b;                     % solve the unnormalized problem
exact_x15 = x(15,1)/x(15,1); % exact normalized 15th coefficient: equals 1 by construction

ex_inv = norm(x15_inv - exact_x15)/norm(exact_x15) % 1.6150e-08 forward error
ex_qr  = norm(x15_qr  - exact_x15)/norm(exact_x15) % 1.6143e-08 forward error
ex_md  = norm(x15_md  - exact_x15)/norm(exact_x15) % 9.1774e-08 forward error
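
As a complementary check (an addition, not one of the original questions), the
residual norms of the three solutions can be compared directly; all should be
small and nearly identical, since each solves the same LSP:

% Residual norms of the three computed solutions
norm(A*x_inv - bnorm)
norm(A*x_qr - bnorm)
norm(A*x_md - bnorm)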

%8
The errors are very small, of the order of 1e-08, which means that the
polynomial approximation is good: the degree-14 polynomial of this type
minimizes the vertical least squares error.
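
The quality of the fit can also be inspected visually (a plotting sketch,
assumed here rather than part of the original solution):

% Visual check of the degree-14 least squares fit
plot(t, bnorm, 'o', t, A*x_md, '-')
xlabel('t'); legend('scaled data bnorm','degree-14 LS fit')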

%9

x=A\bnorm;
y = A*x; % projection of bnorm onto range(A)
K_2= cond(A) % 2.2718e+10
theta = asin(norm(bnorm-y)/norm(bnorm)) % 3.7461e-06
eta = norm(A)*norm(x)/norm(y) % 2.1036e+05

%10
forward error ≲ condition number × backward error (where a ≲ b means that a ≤ b
up to a modest multiplicative constant).

forward error = ||xcomput − xexact||/||xexact|| = ||x15_inv − exact_x15||/||exact_x15||, and so on for the other methods

backward error = ||bcomput − bexact||/||bexact|| = ||bnorm − b||/||b|| = ||bnorm(15,1) − b(15,1)||/||b(15,1)||

forward error / backward error ≲ condition number

fe_xinv/be = 1.6158e-08 = Cinv ≪ 1
fe_xqr/be = 1.6151e-08 = Cqr ≪ 1
fe_xmd/be = 9.1820e-08 = Cmd ≪ 1
be=norm(bnorm(15,1)-b(15,1))/norm(b(15,1)) % 0.9995
Cinv=1.6150e-08/be % 1.6158e-08 << 1
Cqr=1.6143e-08/be % 1.6151e-08 << 1
Cmd=9.1774e-08/be % 9.1820e-08 << 1

We can interpret C as an amplification factor of the change in the input data.
In this case it takes values ≪ 1; therefore, the errors are in accordance with
the bound on the condition number for solving the LSP given in the course
notes, the problem is well conditioned, and the solution is stable, with low
sensitivity to errors in the input data.
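
For comparison (an extra cross-check, not in the original computation), the
rule fe ≲ cond × be can also be tested with the full-vector relative residual
as a backward error estimate; the resulting bound is loose but comfortably
satisfied by the observed forward errors:

% Full-vector relative residual as a backward error estimate
be_full = norm(A*x_md - bnorm)/norm(bnorm)
K_2*be_full % bound of fe ≲ cond × be; far above the observed ~1e-08 errors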

%11
The three methods (pseudoinverse, QR, and backslash) give results of the same
order of accuracy, around 1e-08 of forward error.
