roberto.costas@uah.es
Class schedule (October):
Mon 5, 10:00-12:00, ONLINE
Wed 7, 10:00-12:00, in room SA5B
Mon 12: FESTIVITY (no class)
Wed 14, 10:00-12:00, in room SA5B
Mon 19, 10:00-12:00, ONLINE
Wed 21, 10:00-12:00, in room SA5B
Mon 26, 10:00-12:00, ONLINE
Wed 28, 10:00-12:00, in room SA5B
Matrices, Determinants and Linear Systems
Transpose of a matrix, Aᵀ: it is the matrix obtained when interchanging rows and columns.
Row matrix, column matrix.
Diagonal matrix: aij = 0 for i ≠ j; e.g. Diag(2, 3, 5).
Identity matrix I.
Null matrix O: all entries are 0.
Triangular matrices (upper, lower).
Symmetric matrix (Aᵀ = A, i.e. aij = aji); skew-symmetric matrix (Aᵀ = −A; this implies that the diagonal entries are 0).
Diagonal-by-blocks matrix.
(Homework: handwritten examples of each type omitted.)
Determinant via the LU decomposition: if A is n by n and A = L · U, where L is n × n lower triangular with unit diagonal and U is n × n upper triangular with diagonal entries u11, . . . , unn, then

det(A) = u11 · u22 · · · unn
For a 3 × 3 matrix,

|a11 a12 a13|
|a21 a22 a23| = a11 · A11 + a12 · A12 + a13 · A13
|a31 a32 a33|

(expansion along the first row; you can also try the LU decomposition)
where Aij denotes the cofactor of the entry aij (i.e. the signed minor
of aij ). Also, for 3 × 3 matrices, Sarrus' rule may be useful.
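The cofactor expansion above can be sketched as a small recursive routine, with the sign (−1)^(i+j) built into the cofactor. The sample matrix is an arbitrary illustration, not taken from the notes:

```python
def det_cofactor(M):
    """Determinant by cofactor expansion along the first row:
    det(A) = a11*A11 + a12*A12 + ... , where Aij is the signed minor."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += M[0][j] * (-1) ** j * det_cofactor(minor)
    return total

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(det_cofactor(A))  # -3
```

This is exponential in n, which is why the notes recommend Gauss' method for large matrices.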
Basic properties:
1. |A| = |AT |
2. If A and B are square matrices of the same order, then
|A · B| = |A| · |B|.
3. If all the elements in a row (or column) admit a same factor, then
that number can be taken out of the determinant.
4. If we interchange two rows (or columns), the determinant changes
sign.
Basic properties:
5. If A has a row or a column of 0’s, then det(A) = 0.
6. If A has two rows (or columns) which are either equal or
proportional, then det(A) = 0. The value is also 0 if there is some
row (column) which is a linear combination of others.
7. The value of the determinant does not change if we add to a row (or
column) other rows (or columns) multiplied by numbers. This
property is essential for efficiently computing determinants.
Alternative: Gauss' method (we will explain it during the next class).
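Property 7 is exactly what makes Gauss' method work for determinants: reduce A to triangular form by adding multiples of rows (determinant unchanged) and row swaps (sign flips), then multiply the pivots. A minimal sketch with exact fractions; the sample matrix is illustrative:

```python
from fractions import Fraction

def det_gauss(M):
    """Determinant via Gaussian elimination: row additions (property 7)
    preserve the determinant, row swaps (property 4) flip its sign;
    the result is the product of the pivots."""
    A = [[Fraction(x) for x in row] for row in M]
    n, sign = len(A), 1
    for i in range(n):
        # find a nonzero pivot in column i, swapping rows if needed
        p = next((r for r in range(i, n) if A[r][i] != 0), None)
        if p is None:
            return Fraction(0)   # no pivot => det = 0 (property 5)
        if p != i:
            A[i], A[p] = A[p], A[i]
            sign = -sign
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    prod = Fraction(sign)
    for i in range(n):
        prod *= A[i][i]
    return prod

print(det_gauss([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```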
α1 · ri1 + · · · + αs · ris
Question: When are two rows (resp. two columns) linearly dependent?
Definition
The rank of a matrix A, rank(A), is the maximum number of rows (or
columns) which are linearly independent.
Some observations/properties:
We say that a square matrix of order n has full rank (or is regular) if rank(A) = n. It can be proven that this happens if and only if the determinant of A is different from 0 (therefore, if and only if A is invertible). If A is square and does not have full rank, it is called singular; such a matrix has no inverse.
The rank by rows coincides with the rank by columns.
rank(A) = rank(Aᵀ).
If the dimension of A is m × n, then rank(A) ≤ min(m, n).
When we compute the rank, we find rows/columns which are linearly independent!
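These observations can be checked numerically; a small sketch with NumPy (the matrix is an arbitrary example with one redundant row, not from the notes):

```python
import numpy as np

A = np.array([[1, 2, 0],
              [0, 1, 1],
              [1, 3, 1]])   # third row = first + second

# rank by rows equals rank by columns: rank(A) = rank(A^T)
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)
print(np.linalg.matrix_rank(A))   # 2, one row is a linear combination

# rank(A) <= min(m, n) always holds
m, n = A.shape
assert np.linalg.matrix_rank(A) <= min(m, n)
```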
xi ’s: unknowns
aij ’s: coefficients
bj ’s: constant terms
In abbreviated form,
A · ~x = ~b
A: Coefficients matrix.
~x : vector of unknowns
~b: vector of constant terms.
Two possibilities:
1 Cramer's Method: uses determinants and must be applied to a Cramer system (i.e. a system where the coefficient matrix has full rank). It is not efficient for big systems.
2 Gauss and Gauss-Jordan Method: does not require computing determinants, just simple operations with rows/columns. Efficient for big systems.
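A sketch contrasting the two approaches: Cramer's rule via determinants versus a library solver based on Gaussian elimination (`numpy.linalg.solve`). The 2 × 2 system is an arbitrary illustration:

```python
import numpy as np

def cramer_solve(A, b):
    """Cramer's rule: x_i = det(A_i)/det(A), where A_i is A with its
    i-th column replaced by b. Requires a Cramer system: det(A) != 0."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # replace the i-th column by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = [[2, 1], [1, 3]]
b = [3, 5]
x_cramer = cramer_solve(A, b)
x_gauss = np.linalg.solve(A, b)   # LU/Gaussian elimination under the hood
print(x_cramer, x_gauss)          # both give (0.8, 1.4)
```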
These are those linear systems where the constant terms are all 0:

a11 · x1 + a12 · x2 + · · · + a1n · xn = 0
a21 · x1 + a22 · x2 + · · · + a2n · xn = 0
  ...
am1 · x1 + am2 · x2 + · · · + amn · xn = 0
(Handwritten example omitted: the determinant of a concrete matrix computed via LU and checked by Sarrus' method.)

Adjugate matrix of A, Adj(A): its (i, j) entry is (−1)^{i+j} · det(A_{i,j}), where A_{i,j} is the matrix A where the i-th row and the j-th column are deleted. Then

A⁻¹ = (1/det(A)) · Adj(A)ᵀ

Examples (handwritten, omitted).
Remember that there are different ways to solve a linear system, for example by using the inverse of the coefficient matrix:

A · ~x = ~b  ⇒  A⁻¹ · A · ~x = A⁻¹ · ~b  ⇒  ~x = A⁻¹ · ~b

Let us compute A⁻¹ in two different ways.
1) By using the formula for the inverse, A⁻¹ = (1/det(A)) · Adj(A)ᵀ. (Handwritten computation omitted.)
2) Use the Gauss elimination method: the idea is to go from (A | I) to (I | A⁻¹) by pivoting and row operations. (Handwritten row-reduction steps omitted.)
We also solved Problem 11 of Sheet 1:
(a) compute the LU decomposition of A;
(b) by using such decomposition, solve A · ~x = (1, 1, 3)ᵀ.
(Handwritten solution omitted: working with the rows gives L and U, and then the two triangular systems are solved.)
If we consider the Euclidean plane R², points and complex numbers coexist. (Handwritten sketch omitted.)
Lesson 2: Vector Spaces
Vector Spaces
Let V be a set, and let +, · be two operations, the first one (sum)
defined between the elements of V , and the second one (product by
scalars) defined between V , and the elements of R (resp. C). We say
that (V , +, ·) is a vector space over R (resp. C) if the following
properties hold:
(Handwritten examples of vector spaces omitted; e.g. {(a, b, c)} ⊂ R³, which has dimension 3.)
Vector Spaces
Proposition
Let (V(R), +, ·) be a vector space over R. Then for all λ ∈ R and for all ~u ∈ V, it holds that:
(1) λ · ~0 = ~0.
(2) 0 · ~u = ~0.
(3) λ · ~u = ~0 if and only if λ = 0 or ~u = ~0.
(4) λ · (−~u) = (−λ) · ~u = −(λ · ~u).
Definition
A linear combination of vectors ~u1, ~u2, . . . , ~un is another vector of the form
λ1~u1 + λ2~u2 + · · · + λn~un
where λi ∈ R for i = 1, . . . , n.
Definition
We say that {~u1, ~u2, . . . , ~un} are linearly dependent (l.d.) if at least one of them is a linear combination of the rest. If {~u1, ~u2, . . . , ~un} are not l.d., we say that they are linearly independent (l.i.).
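The definition can be tested mechanically via the rank: vectors are linearly independent exactly when the matrix having them as rows has rank equal to the number of vectors. A minimal sketch with arbitrary example vectors:

```python
import numpy as np

def linearly_independent(vectors):
    """Vectors are linearly independent iff the matrix having them as
    rows has rank equal to the number of vectors."""
    M = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(M) == len(vectors)

print(linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
# (1,1,0) = (1,0,0) + (0,1,0), so these are linearly dependent
print(linearly_independent([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # False
```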
Lesson 2: Vector Spaces

(Handwritten worked examples omitted: checking linear dependence/independence of concrete vectors, and LU/LDLᵀ decompositions of several matrices, including solving A · ~x = ~b by using the decomposition.)
6 Homework (handwritten solutions omitted).

A is Positive Definite (PD) if ~xᵀ · A · ~x > 0 for every ~x ≠ ~0. If A is symmetric and A = L · U = L · D · Lᵀ, the signs of the diagonal entries of D determine whether A is positive or negative definite.
Inverse of A by using the Gauss-Jordan method: since det(A) ≠ 0, A is regular (invertible). (Handwritten row-reduction steps, going from (A | I) to (I | A⁻¹), omitted.)
Linear Dependence. Bases.
Theorem
The vectors {~u1, ~u2, . . . , ~un} are linearly independent if and only if the only linear combination of them fulfilling λ1~u1 + · · · + λn~un = ~0 satisfies λ1 = · · · = λn = 0.
Lesson 2: Vector Spaces
Linear Dependence. Bases.
Definition
We say that S = {~ u1 , . . . , u~n } is a spanning set of V if any vector in V
can be written as a linear combination of the vectors in S.
Definition
We say that B = {~ u1 , . . . , u~n } is a basis of V if it is a spanning set of V
and they are linearly independent.
Examples
Important remark: vector spaces may have infinitely many bases!!
Definition
We say that a vector space has finite dimension if it has a basis
consisting of finitely many vectors.
Examples:
1 Rn has dimension n, since it admits the basis (called the canonical
basis)
Theorem
If V has finite dimension, then all the bases of V have the same number
of vectors (the dimension of V , dim(V )).
Theorem
Let V be a vector space with finite dimension, and let dim(V ) = n. Then
the following statements are true:
(i) If S spans V , then you can extract a basis from S.
(ii) Every system consisting of more than n vectors is linearly dependent.
(iii) Every spanning system contains at least n vectors.
(iv) Given B = {~u1, . . . , ~un} ⊂ V (a subset of exactly n vectors), the following statements are equivalent: (a) B is a basis; (b) B is linearly independent; (c) B is a spanning system.
Definition
Let B = {~u1, . . . , ~un} be a basis of V, and let ~v ∈ V. The coordinates of ~v with respect to B are the scalars λ1, . . . , λn ∈ R such that
~v = λ1~u1 + · · · + λn~un
Usually we write ~v = (λ1, . . . , λn)B.
Theorem
Let V be a vector space of finite dimension n, and let B = {~u1, . . . , ~un}
be a basis of V . Then every vector ~v 2 V has unique coordinates with
respect to B.
Definition
Let (V(R), +, ·) be a vector space. We say that W ⊂ V is a vector subspace of V if (W(R), +, ·) has also a structure of vector space.
Theorem
W ⊂ V is a vector subspace if and only if for all ~u, ~v ∈ W and all λ, µ ∈ R, λ~u + µ~v ∈ W (i.e. if and only if every linear combination of two vectors in W stays in W).
Examples
Observation: If W is a vector subspace, then ~0 ∈ W.
Definition
Let S = {~u1, . . . , ~un} be a subset of V. The linear variety spanned by S (or simply the linear span of S) is the set consisting of all the vectors which are linear combinations of the vectors in S, i.e.
L(S) = {λ1~u1 + · · · + λn~un : λi ∈ R}.
(Handwritten worked example omitted: for ~u = (x, y, z), ~u′ = (x′, y′, z′) ∈ W and λ ∈ R, one checks that ~u + ~u′ and λ~u still fulfill the implicit equations of W; the homogeneous system has infinitely many solutions, obtained with 1 free parameter.)
(Handwritten solutions omitted: (a) solving the implicit equations with one free parameter gives dim(W) = 1 and a spanning vector; (b) for W = {M ∈ M3×3(R) : M = Mᵀ} one checks that M1 + M2 ∈ W and λM ∈ W, then finds the implicit equations and the free parameters.)
In order to compute a basis, you set each parameter equal to 1, and the rest equal to 0 to get every
element of the basis.
(Handwritten computation omitted: the six basis matrices M1, . . . , M6 of the subspace of symmetric 3 × 3 matrices, obtained one free parameter at a time.)
9. Let us consider the matrices
A = [ 1 0 0 ; 2 1 3 ; 0 0 1 ]   B = [ 1 1 1 ; 1 2 2 ; 1 2 3 ]
Obtain, by using the Gauss-Jordan method, the inverse of A, B and AB. Check that (AB)⁻¹ = B⁻¹ · A⁻¹.
(Handwritten computations of A⁻¹, B⁻¹ and (AB)⁻¹ by the Gauss-Jordan method omitted.)
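The identity (AB)⁻¹ = B⁻¹ · A⁻¹ from Exercise 9 can be verified numerically with the matrices as reconstructed above (their entries are my reading of the scan, so treat them as an assumption):

```python
import numpy as np

A = np.array([[1, 0, 0],
              [2, 1, 3],
              [0, 0, 1]], dtype=float)
B = np.array([[1, 1, 1],
              [1, 2, 2],
              [1, 2, 3]], dtype=float)

inv = np.linalg.inv
# the inverse of a product is the product of inverses in reverse order
assert np.allclose(inv(A @ B), inv(B) @ inv(A))
print(inv(A @ B))
```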
10. Let A be the 2-by-2 matrix:
A = [ 1 2 ; 3 7 ]
By using the Gauss-Jordan method, compute the inverse matrix of A and of its transpose Aᵀ and check that (Aᵀ)⁻¹ = (A⁻¹)ᵀ.
(Handwritten solution omitted.)
11. Let A be the matrix
A = [ 1 1 1 ; 3 6 6 ; 2 11 13 ]
Homework: (a) compute A = L · U; (b) solve the system by first solving L · ~y = ~b and then U · ~x = ~y. (Handwritten solution omitted.)
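A sketch of the homework scheme for this matrix: a Doolittle LU factorization (no pivoting) with exact fractions, then forward and back substitution, taking ~b = (1, 1, 3)ᵀ as in the earlier worked problem (the scanned right-hand side is hard to read, so this is an assumption):

```python
from fractions import Fraction

def lu(A):
    """Doolittle LU (no pivoting): L unit lower triangular, U upper."""
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            U[i] = [u - L[i][k] * v for u, v in zip(U[i], U[k])]
    return L, U

def forward(L, b):   # solve L*y = b (top to bottom)
    y = []
    for i, row in enumerate(L):
        y.append(b[i] - sum(row[j] * y[j] for j in range(i)))
    return y

def backward(U, y):  # solve U*x = y (bottom to top)
    n = len(y)
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[1, 1, 1], [3, 6, 6], [2, 11, 13]]
L, U = lu(A)
x = backward(U, forward(L, [1, 1, 3]))
print(x)   # solution: x = (5/3, -25/6, 7/2)
```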
12. Let B be the matrix
B = [ 1 1 1 ; 3 6 6 ; 2 11 k ]
LU decomposition: det(B) = 1 · 3 · (k − 11), the product of the pivots, which is nonzero if k ≠ 11. (Handwritten solution omitted.)
13. Let
A = [ 1 1 2 0 0 ; 1 2 2 2 4 ; 1 0 2 2 4 ]
(Handwritten discussion omitted: row reduction of the augmented matrix shows that the rank is 2 or 3 depending on a condition on b1, b2, b3; the system has a solution when rank(A) = rank(A*), where A* is the augmented matrix.)
LINEAR ALGEBRA (350000) Course 2019/20
1. Analyze whether the following subsets are vector subspaces of the given ones. In affirmative case, obtain a basis for such subset.
(a) W = {(x, y, z) ∈ R³ | x − y + 2z = 3x + y = 0} of R³. YES (it is a homogeneous linear system); dim 1.
(b) W = {M ∈ M3×3(R) | M = Mᵀ} of M3×3(R). YES; dim 6.
(c) W = {p(t) ∈ P2(R) | 2p(0) − p(1) = 2} of P2(R). NO: the equation is not homogeneous.
2. Analyze if the given families of vectors form bases of M2×2(R), P2(R) and R³. (Handwritten work omitted; in the last case the determinant of the coordinate matrix is 0, so those vectors do not form a basis.)
(Handwritten diagram omitted: linear mappings between polynomial spaces, composing the matrix of f with the change-of-basis matrices between the bases B1, B2 and the canonical ones.)
Lesson 3: Linear Mappings
Definition
We say that a mapping f : S → S′ is injective if there are no two different elements of S with the same image. Also, we say that a mapping is surjective if every element of S′ is the image of some element of S. Finally, we say that a mapping is bijective if it is both injective and surjective.
Proposition
Let f : V → V′ be a linear mapping. Then it holds that:
(1) f(~0) = ~0.
(2) If {~u1, . . . , ~un} are linearly dependent, then {f(~u1), . . . , f(~un)} are also linearly dependent.
(3) If S ⊂ V is a vector subspace of V, then f(S) is a vector subspace of V′.
Properties:
(1) Given f : Rⁿ → R, f is linear if and only if
f(~x) = f(x1, . . . , xn) = a1x1 + · · · + anxn
Similarly when f : Vn → V′1 (i.e. if the final space has dimension 1).
(2) Given f : Rⁿ → Rᵐ, we can write
Properties:
(3) Every linear mapping can be written as
~y = A · ~x ,
Properties:
(6) Let B, B′ be fixed bases in V, V′ respectively. Given a linear mapping f : V → V′ there exists a matrix A associated with it in the bases B, B′. Conversely, any matrix A defines a linear mapping with respect to the considered bases. So,
Proposition
Let f : Vn → V′m, g : Vn → V′m be two linear mappings, with associated matrices Af and Ag, respectively, and let k ∈ R. Then, it holds that:
(1) f + g is also linear, and its associated matrix is Af + Ag.
(2) k · f is also linear, and its associated matrix is k · Af.
Proposition
Let f : Vn → V′m, g : V′m → V″p be two linear mappings, with associated matrices Af and Ag, respectively. Then the composition g ∘ f : Vn → V″p is also linear, and the matrix associated with g ∘ f is Ag · Af.
We can apply this to study how A changes when the bases B, B′ are changed. Khan Academy (click)
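A quick numerical check of the composition rule, the matrix of g ∘ f being Ag · Af; the two matrices below are arbitrary illustrations (f : R³ → R², g : R² → R²):

```python
import numpy as np

# associated matrices of f : R^3 -> R^2 and g : R^2 -> R^2
Af = np.array([[1, 0, 1],
               [0, 2, 0]])
Ag = np.array([[0, 1],
               [1, 1]])

v = np.array([1, 2, 3])
# (g o f)(v) computed in two ways: applying Af then Ag, or Ag . Af at once
assert np.array_equal(Ag @ (Af @ v), (Ag @ Af) @ v)
print(Ag @ Af)   # the matrix associated with g o f
```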
4. Obtain the null space and the vectorial space generated by the columns of the following matrices:
A = [ 1 2 0 ; 0 1 1 ; 2 1 3 ; 1 0 0 ]   B = [ 1 1 2 1 ; 0 4 2 2 ; 1 3 0 3 ; 0 1 1 1 ]
5. Given the vectors v1 = (1, 2, 1, 1), v2 = (2, 3, 1, 2), v3 = (1, 3, 2, 1) and v4 = (2, 1, 1, 2) of R4, we want:
(a) To check that … (handwritten row reduction omitted; the rank obtained is 2, so the vectors are linearly dependent).
Obtain the matrix of the change of basis from B1 to the canonical basis Bc and from Bc to B1, where
B1 = {(0, 2, 2), (2, 0, 1), (3, 0, 0)} and B2 = {(2, 0, 2), (1, 2, 0), (0, 0, 1)}.
Given the vector u = (1, 1, 3), obtain its coordinates with respect to the basis B1.
(Handwritten solution omitted.)
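A sketch of the computation with B1 = {(0, 2, 2), (2, 0, 1), (3, 0, 0)} and u = (1, 1, 3) from the statement: the change-of-basis matrix from B1 to Bc has the B1 vectors as columns, and the coordinates with respect to B1 come from solving a linear system:

```python
import numpy as np

# columns of M are the vectors of B1 expressed in the canonical basis,
# so M is the change-of-basis matrix from B1 to Bc
B1 = [(0, 2, 2), (2, 0, 1), (3, 0, 0)]
M = np.array(B1, dtype=float).T

u = np.array([1, 1, 3], dtype=float)
# coordinates of u w.r.t. B1: solve M * c = u (M^-1 is the Bc -> B1 matrix)
c = np.linalg.solve(M, u)
print(c)   # [ 0.5  2.  -1. ]
assert np.allclose(M @ c, u)
```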
8. In the vectorial space P1 (R) we consider the basis
We want to:
(a) Obtain the matrices of change of basis from B1 to B2, and the one from B2 to B1.
(b) Given the polynomial p(t) = t+3, use the correct matrix of change of basis to obtain
its coordinates with respect to the basis B2 .
9. In the vectorial space M2×2(R) we consider the basis
B1 = { [1 0; 0 1], [0 1; 1 0], [1 0; 0 0], [0 1; 0 0] }.
Find the matrix of change of basis from the canonical basis of M2×2(R) to B1, and obtain the coordinates of
A = [ 3 4 ; 2 1 ]
with respect to B1.
(Handwritten solution omitted: using the coordinate correspondence M2×2(R) ↔ R⁴, [a b; c d] ↔ (a, b, c, d), the change-of-basis matrix and the coordinates of A with respect to B1 are computed.)
10. In the vectorial space M2×2(R) we consider the canonical basis Bc, and the basis
B1 = { [1 0; 0 0], [0 1; 0 0], [0 1; 1 0], [0 0; 1 1] }.
We want to:
11. Given the following basis of R4:
(Handwritten computation omitted: the change-of-basis matrices, obtained by Gauss-Jordan elimination, with pivoting and row interchanges, on the matrix whose columns are the basis vectors.)
12. In the vectorial space P3 (R) we consider the canonical basis Bc = {1, t, t2 , t3 } and the
basis B1 = {t2 + t3 , t + t2 , t, 1}.
We want to:
(Handwritten solution omitted: the matrices MB1,Bc and MBc,B1, and the coordinates of a given polynomial with respect to B1.)
13. In the vectorial space P3(R) we consider the basis B1 = {t³, t², t, 1} and the basis B2 = {t³, t + t², 1 + t, 1}.
We want to:
Matrix Equation of a Linear Mapping
5th Week
Definition
Two matrices A, A′ such that there exist regular matrices P, Q satisfying
A′ = Q⁻¹ · A · P
are said to be equivalent.
If two matrices are equivalent, then they have the same rank.
If two matrices represent the same linear mapping but in different bases, then they are equivalent.
Conversely, two equivalent matrices represent the same linear mapping, in different bases.
(Handwritten worked example omitted: the matrix of a concrete linear mapping in two different pairs of bases; the homework is to check that A′ = Q⁻¹ · A · P for the corresponding change-of-basis matrices P and Q.)
Matrix Equation of a Linear Mapping
Definition
Two square matrices A, A′ such that there exists a regular matrix P satisfying
A′ = P⁻¹ · A · P
are said to be similar.
(Handwritten example omitted: the matrix of a mapping in a new basis B, similar to the original one; checking the similarity relation is left as homework.)
Kernel and Image
Definition
Let f : V → V′ be a linear mapping. We define:
(i) The nullspace or kernel of f, Ker(f), is the set of all the vectors of V that transform themselves into the vector ~0 ∈ V′.
(ii) The image of f, Im(f), is the set of all the vectors of V′ that are the image of some vector of V.
Proposition
Ker(f ) is a vector subspace of V .
Proposition
Im(f) is a vector subspace of V′, and its dimension is rank(A), where A is the matrix associated with f (regardless of the bases used). In fact Im(f) = L({f(~u1), . . . , f(~un)}), where B = {~u1, . . . , ~un} is a basis of V.
Theorem
Let f : V → V′ be a linear mapping. The following statements are true:
(1) f is injective if and only if Ker(f) = {~0}.
(2) f is surjective if and only if Im(f) = V′.
Theorem
If f : V → V′ is an isomorphism and {~u1, . . . , ~un} is a basis of V, then {f(~u1), . . . , f(~un)} is a basis of V′.
Definition
A linear mapping f : Vn → Vn of a vector space onto itself is called an endomorphism.
Proposition
Let f : Vn → Vn be an automorphism, and let Af be the matrix of f. Then f has an inverse, f⁻¹ : Vn → Vn, which is also linear, and its associated matrix is Af⁻¹.
(Handwritten worked examples omitted: computing bases of Ker(f) and Im(f) from the implicit equations and from the columns of A, the rank, and a homework exercise repeating the computation for another matrix as in the previous example.)
14/10/2019
Problem 1. Given the following subspaces of R4:
V1 = {(x, y, z, t) ∈ R⁴ : x − y + z = 0, x − z − t = 0}  (a homogeneous linear system),
and
V2 = Span {(1, 1, 1, 0), (1, 0, 1, 1), (0, 0, 1, 1)}.
(c) Extend such bases to bases of R4, and call them B1 and B2 respectively.
(Handwritten solution omitted: V1 is a vector subspace since it is defined by a homogeneous linear system; a basis of each subspace and the implicit equations of V2 are computed, and both bases are extended to bases of R⁴.)
Problem 2. Given the linear mapping f : R3 → R3 defined as:
f(x, y, z) = (x − y, y − z, x − z).
(a) Obtain the matrix associated to f.
B1 = {(1, 1, 0), (0, 1, 1), (1, 1, 1)} and B2 = {(−1, 0, 1), (−1, 1, 0), (0, 1, 0)},
(Handwritten solution omitted: the matrix A of f in the canonical basis, Ker(f) = Span{(1, 1, 1)}, so f is not injective, and Im(f), spanned by the columns of A.)
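Assuming the reconstruction f(x, y, z) = (x − y, y − z, x − z) above (the minus signs are my reading of the scan), the kernel can be checked numerically:

```python
import numpy as np

# matrix of f(x, y, z) = (x - y, y - z, x - z) in the canonical basis
# (each row holds the coefficients of one component of f)
A = np.array([[1, -1,  0],
              [0,  1, -1],
              [1,  0, -1]], dtype=float)

# (1, 1, 1) is mapped to 0, and rank(A) = 2, so
# dim Ker(f) = 3 - 2 = 1 and Ker(f) = Span{(1, 1, 1)}: f is not injective
v = np.array([1, 1, 1], dtype=float)
assert np.allclose(A @ v, 0)
print(3 - np.linalg.matrix_rank(A))   # dim Ker(f) = 1
```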
LINEAR ALGEBRA (350000) Course 2019/20
Linear Mappings
f(x, y) = (x + y, 2x − y, 2x + 2y).
(a) Obtain the coordinate matrix of f with respect to the canonical bases.
(b) Obtain a basis, the dimension and a set of equations of the kernel of f and of the
image space of f .
(c) Obtain the coordinate matrix of f with respect to the basis B1 = {(1, 1), (−1, 2)} of R2 and of the basis B2 = {(−1, 1, 0), (1, 0, 1), (0, 0, 1)} of R3.
(Handwritten solution omitted: the matrix of f with respect to the canonical bases, bases and equations of Ker(f) and Im(f), and the coordinate matrix MB1,B2(f) = MBc,B2 · M(f) · MB1,Bc.)
f(x, y, z) = (x + 2y − z, y + z, x + y − 2z).
(a) Obtain the matrix of f with respect to the basis B = {(0, 2, 1), (1, 0, 1), (−1, 0, 0)}.
(b) Use this matrix to compute f(−2, 2, 2).
(Handwritten solution omitted.)
3. Let f : R3 → R4 be the linear mapping defined by:
f(1, 0, 1) = (1, 1, 1, 0)
f(−1, 2, 0) = (1, 3, 0, 1)
f(0, 1, 1) = (−1, 0, 1, 0).
Obtain the matrix associated to f with respect to the canonical bases and Ker(f).
(Handwritten solution omitted: the matrix of f in the canonical bases is obtained via the change-of-basis matrix MB,Bc; then Ker(f) and its equations are computed; finally, f cannot be surjective, since Im(f) is spanned by the 3 columns of A and thus cannot give a basis of R⁴.)
4. Let f : M2×2(R) → R3 be the linear mapping defined by:
f( [x y; z t] ) = (x + y, z, t).
We want:
(a) Prove that the application f is linear.
(b) Obtain a basis, the dimension and equations of the Ker(f) and the image of f.
(Handwritten solution omitted: taking A, B ∈ M2×2(R) and λ ∈ R, one checks f(A + B) = f(A) + f(B) and f(λA) = λf(A); Ker(f) has equations x + y = 0, z = 0, t = 0 and dimension 1, while Im(f) = R³ has dimension 3.)
5. Let f : P2(R) → R4 be the linear mapping defined by:
f(a0 + a1t + a2t²) = (a0 + a1 + a2, a0 − 2a1, a2, a0),
where E is the vector space of the polynomials of degree less or equal to 2 with real coefficients. Find the coordinate matrix of f with respect to the bases
B1 = {1, t, t²},
and
B2 = {(1, 0, 0, 1), (0, 1, 0, 0), (0, 0, 1, 1), (1, 1, 1, 0)}.
(b) Use such matrix to obtain f(1 + t).
(Handwritten solution omitted.)
Lesson 4: Diagonalization
Eigenvalues, eigenvectors, eigenspaces
The rough idea: From the preceding lesson, we know that a matrix A
represents a linear mapping in a certain basis. On the other hand, the
matrix associated with a linear mapping changes when we change the
basis. So, maybe there exists some basis where the matrix is “specially
nice”...
Definition
Let f : V → V be an endomorphism. We say that λ ∈ R (or C) is an eigenvalue of f if there exists ~v ∈ V, ~v ≠ ~0, such that f(~v) = λ~v. Furthermore, in that case we say that ~v is an eigenvector associated with λ.
Examples
How do we compute the eigenvalues: by solving det(A − λI) = 0.
Properties:
(1) p(λ) = |A − λI| is called the characteristic polynomial of the matrix A. Its degree is dim(V).
(2) It is usual to refer to "the eigenvalues of the matrix" (values λ such that A · ~v = λ~v), instead of the linear mapping.
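A minimal NumPy sketch of the computation; the 2 × 2 matrix is an arbitrary symmetric example, not from the notes:

```python
import numpy as np

A = np.array([[2, 1],
              [1, 2]], dtype=float)

# the eigenvalues are the roots of p(lambda) = det(A - lambda*I)
lams, vecs = np.linalg.eig(A)
print(sorted(lams))   # [1.0, 3.0]

# each column of `vecs` is an eigenvector: A v = lambda v
for lam, v in zip(lams, vecs.T):
    assert np.allclose(A @ v, lam * v)
```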
Properties:
(3) If λi is an eigenvalue, then it is a root of p(λ), and therefore
p(λ) = (λ − λi)^{ni} · · ·
The number ni is called the algebraic multiplicity of λi.
(4) When we consider vector spaces over R, the eigenvalues can be either real or complex.
Properties:
(5) If λ is an eigenvalue of A and A · ~v = λ~v, we say that ~v is an eigenvector associated with λ.
Proposition
The set of all the eigenvectors associated with a same eigenvalue of a matrix A is a vector subspace.
Proof (handwritten sketch): if A · ~w1 = λ~w1 and A · ~w2 = λ~w2, then A · (~w1 + ~w2) = λ · (~w1 + ~w2) and A · (a~w1) = λ · (a~w1) for every a ∈ R.
Definition
For each eigenvalue λi, the set of eigenvectors associated with it is called the eigenspace of λi. We represent it by L_{λi}. From the above result, it is a vector subspace, and its dimension is called the geometric multiplicity of λi, mg(λi).
Observations:
(1) L_{λi} is the solution of (A − λiI) · ~v = ~0.
(2) dim(L_{λi}) = n − rank(A − λiI), where n is the order of A.
(3) Denoting the algebraic multiplicity of λi by ni, it holds that
1 ≤ dim(L_{λi}) ≤ ni
Example (handwritten, omitted): comparing ma(λi) and mg(λi) for a concrete matrix.
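Observation (2) gives a direct recipe for the geometric multiplicity; a sketch on the classic non-diagonalizable example [[2, 1], [0, 2]] (not taken from the notes):

```python
import numpy as np

A = np.array([[2, 1],
              [0, 2]], dtype=float)   # eigenvalue 2 with n_i = 2

# observation (2): dim(L_lambda) = n - rank(A - lambda*I)
n = A.shape[0]
lam = 2.0
geo_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n))
print(geo_mult)   # 1, strictly smaller than the algebraic multiplicity 2
assert 1 <= geo_mult <= 2   # observation (3)
```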
Lesson 4: Diagonalization

(Handwritten example omitted: for an endomorphism f of P2(R), after fixing a basis one computes the matrix of f, the characteristic polynomial det(λI − A), its roots with their algebraic multiplicities, and the geometric multiplicities via ranks; here one eigenvalue has mg < ma, so f is NOT diagonalizable.)
Eigenvalues, eigenvectors, eigenspaces
Theorem
Let f : V → V be a linear mapping with p different eigenvalues λ1, . . . , λp. Then the eigenvectors ~v1, . . . , ~vp associated with them are linearly independent.
Diagonalization of a square matrix
Definition
Let A be a square matrix, and let f be the endomorphism that it
represents. We say that A (or f ) is diagonalizable if there exists some
basis such that the matrix associated with f in that basis is diagonal
(equivalently, if it is similar to some diagonal matrix).
Theorem
An endomorphism f : Vn → Vn is diagonalizable if and only if there exists a basis of Vn consisting of eigenvectors.
Proof.
Theorem
Let V be a vector space over R of dimension n, and let f : Vn → Vn be an endomorphism. Then f is diagonalizable (over the reals) if and only if the following two conditions hold:
(i) The total number of real eigenvalues, counting multiplicities, is n.
(ii) The geometric multiplicity of each eigenvalue equals its algebraic multiplicity.
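The two conditions of the theorem can be checked numerically; a rough sketch (the floating-point grouping of repeated eigenvalues is done by rounding, which is fragile in general):

```python
import numpy as np

def is_diagonalizable(A, tol=1e-9):
    """Checks the theorem over the reals: (i) all n eigenvalues (with
    multiplicity) are real, (ii) for each eigenvalue the geometric
    multiplicity equals the algebraic one."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    lams = np.linalg.eigvals(A)
    if np.any(np.abs(lams.imag) > tol):
        return False                      # condition (i) fails
    for lam in np.unique(np.round(lams.real, 6)):
        alg = np.sum(np.abs(lams.real - lam) < 1e-6)
        geo = n - np.linalg.matrix_rank(A - lam * np.eye(n))
        if geo != alg:
            return False                  # condition (ii) fails
    return True

print(is_diagonalizable([[2, 1], [1, 2]]))   # True: distinct eigenvalues
print(is_diagonalizable([[2, 1], [0, 2]]))   # False: mg = 1 < 2 = ma
```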
Jordan Matrix
In that case, there exists a matrix called Jordan matrix of A, which is "block-diagonal":

J = [ J1 0 ··· 0 ; 0 J2 ··· 0 ; ... ; 0 0 ··· Jr ]

where each block Ji has λi on its main diagonal and possibly 1's just above it:

Ji = [ λi ? ··· 0 0 ; 0 λi ··· 0 0 ; ... ; 0 0 ··· λi ? ; 0 0 ··· 0 λi ]

• The number and positions of the 1's in each Jordan block must be computed. We just mention the two "easy" rules:
1 If dim(L_{λi}) = ni, then the Jordan block is diagonal (no 1's);
2 if dim(L_{λi}) = 1 ≠ ni, then the block has 1's above all the elements in the main diagonal; so, in that case it looks:

Ji = [ λi 1 ··· 0 0 ; 0 λi ··· 0 0 ; ... ; 0 0 ··· λi 1 ; 0 0 ··· 0 λi ]
where P fulfills A = P · J · P⁻¹.
The columns of P include independent eigenvectors (but we need more!!).
The remaining, unknown, columns must be computed:
(i) by using that AP = PJ (not very efficient...);
(ii) using a more sophisticated/efficient method that we will skip here (but it exists!)
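The two "easy" rules can be turned into a small builder for J. Each block is described here by (λi, ni, geometric multiplicity); only the two cases covered by the rules are handled, and the intermediate cases would need the skipped method:

```python
def jordan_matrix(blocks):
    """Builds a block-diagonal Jordan matrix from the two 'easy' rules.
    blocks: list of (lambda_i, n_i, geo_i). Rule 1: geo_i == n_i gives a
    diagonal block; rule 2: geo_i == 1 != n_i puts 1's above the whole
    main diagonal of the block."""
    size = sum(n for _, n, _ in blocks)
    J = [[0] * size for _ in range(size)]
    pos = 0
    for lam, n, geo in blocks:
        for k in range(n):
            J[pos + k][pos + k] = lam
            if geo == 1 and n > 1 and k < n - 1:
                J[pos + k][pos + k + 1] = 1   # rule 2: superdiagonal 1's
        pos += n
    return J

# eigenvalue 3 with n = 2, geo = 1 (rule 2) and simple eigenvalue 5
for row in jordan_matrix([(3, 2, 1), (5, 1, 1)]):
    print(row)
# [3, 1, 0]
# [0, 3, 0]
# [0, 0, 5]
```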
Lesson 4: Diagonalization

(Handwritten examples omitted. First, a concrete matrix A whose eigenvalues are all simple, hence A is diagonalizable. Then: let a ∈ R; study for which values of a a given matrix A is diagonalizable, and obtain matrices P and D so that A = P · D · P⁻¹; for one value of a an eigenvalue becomes double with mg = 1 < 2 = ma, and A is not diagonalizable. Finally, note that A = P · D · P⁻¹ implies Aⁿ = P · Dⁿ · P⁻¹.)
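The closing remark Aⁿ = P · Dⁿ · P⁻¹ can be sketched numerically; the matrix is an arbitrary symmetric example (hence real eigenvalues), not from the notes:

```python
import numpy as np

A = np.array([[2, 1],
              [1, 2]], dtype=float)

lams, P = np.linalg.eig(A)        # columns of P are eigenvectors
D = np.diag(lams)
assert np.allclose(A, P @ D @ np.linalg.inv(P))   # A = P D P^-1

# A^n = P D^n P^-1, and D^n is cheap: just power the diagonal entries
n = 5
An = P @ np.diag(lams ** n) @ np.linalg.inv(P)
assert np.allclose(An, np.linalg.matrix_power(A, n))
print(np.round(An))               # A^5 = [[122, 121], [121, 122]]
```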
6. Let E be the space of polynomials of degree at most 2 with real coefficients and let F be the 2-by-2 squared matrices with real entries. Let us consider the linear mapping f : E → F defined as follows:
f(a + bt + ct²) = [ p(0) p(1) ; p(1) p(0) ],  where p(t) = a + bt + ct².
(a) Obtain the coordinate matrix of f with respect to the basis
B1 = {1, 1 + t, 1 + t + t²}
of E and the canonical basis of F.
(b) By using the matrix obtained in (a), compute f(p) where p = 3 + 2t + t².
(c) By using the matrix obtained in (a), compute a basis of the subspace Ker(f).
(Handwritten solution omitted: the images of the elements of B1 in coordinates, the resulting coordinate matrix, the coordinates of p with respect to B1 obtained by solving a system, and a basis of Ker(f), which is a collection of polynomials since Ker(f) ⊂ E.)
7. Let E be the 2-by-2 squared matrices with real entries and let F be the space of polynomials of degree at most 2 with real coefficients. We consider the linear mapping f : E → F defined as follows:
f( [a b; c d] ) = a + (b + c)t + dt².
(a) Obtain the coordinate matrix of f with respect to the basis
B1 = { [1 0; 0 0], [1 1; 0 0], [1 1; 1 0], [1 1; 1 1] }
of E and the canonical basis of F.
(b) By using the matrix obtained in (a), compute f(C), being C = [2 2; 1 0].
(Handwritten solution omitted.)
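A sketch of problem 7 with the basis and C as reconstructed above (the signs are my reading of the scan): the coordinate matrix has as columns the images of the B1 matrices, and applying it to the B1-coordinates of C reproduces f(C):

```python
import numpy as np

def f(m):
    """f([a b; c d]) = a + (b + c)t + d t^2, returned as coordinates
    in the canonical basis {1, t, t^2} of P2(R)."""
    a, b, c, d = m
    return np.array([a, b + c, d], dtype=float)

# basis B1 of M2x2(R), each matrix flattened as (a, b, c, d); columns of P
P = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [1, 1, 1, 0],
              [1, 1, 1, 1]], dtype=float).T

# coordinate matrix of f w.r.t. B1 and Bc: columns = images of the basis
M = np.column_stack([f(P[:, j]) for j in range(4)])

C = np.array([2, 2, 1, 0], dtype=float)   # C = [2 2; 1 0] as reconstructed
coords_C = np.linalg.solve(P, C)          # coordinates of C w.r.t. B1
assert np.allclose(M @ coords_C, f(C))    # same result through the matrix
print(M @ coords_C)                       # [2. 3. 0.], i.e. f(C) = 2 + 3t
```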
8. Let E be the space of 2-by-2 matrices with real entries. We consider the linear mapping
f : E ! E given by:
f (A) = A + AT ,
where AT is the matrix transposed of A. AIBT
(a) Prove that f is a linear mapping.
HA iB CA B CAIBYICA AH CB.pt
(b) Obtain the coordinate matrix of f with respect to the basis
B = { ( 1 0 ; 0 0 ), ( 1 1 ; 0 0 ), ( 1 1 ; 1 0 ), ( 1 1 ; 0 1 ) }
of E.

(c) By using the matrix obtained in (b), compute f(C), being C = ( 2 3 ; 2 1 ).
(b) Denote the elements of B by u₁, u₂, u₃, u₄. A general combination is α u₁ + β u₂ + γ u₃ + δ u₄ = ( α+β+γ+δ  β+γ+δ ; γ  δ ). Computing images and their B-coordinates:

f(u₁) = ( 2 0 ; 0 0 ) = 2u₁ → (2, 0, 0, 0),
f(u₂) = ( 2 1 ; 1 0 ) → (1, 0, 1, 0),
f(u₃) = ( 2 2 ; 2 0 ) → (0, 0, 2, 0),
f(u₄) = ( 2 1 ; 1 2 ) → (1, −2, 1, 2),

so

M_{B,B}(f) = ( 2 1 0 1 ; 0 0 0 −2 ; 0 1 2 1 ; 0 0 0 2 ).

(c) Expressing C = ( 2 3 ; 2 1 ) in B gives δ = 1, γ = 2, β = 0, α = −1, i.e. C = (−1, 0, 2, 1)_B. Then

M_{B,B}(f)(−1, 0, 2, 1)ᵀ = (−1, −2, 5, 2)ᵀ,

which corresponds to −u₁ − 2u₂ + 5u₃ + 2u₄ = ( 4 5 ; 5 2 ). Check: C + Cᵀ = ( 4 5 ; 5 2 ).
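A quick numerical check of (b)–(c); a sketch assuming NumPy (`coords_in_B` is a helper of ours):

```python
import numpy as np

# Coordinate matrix of f(A) = A + A^T with respect to the basis
# B = {[[1,0],[0,0]], [[1,1],[0,0]], [[1,1],[1,0]], [[1,1],[0,1]]}.
B = [np.array(m, dtype=float) for m in
     ([[1, 0], [0, 0]], [[1, 1], [0, 0]], [[1, 1], [1, 0]], [[1, 1], [0, 1]])]

# Change-of-basis matrix from B to the canonical basis (columns = vec of B)
P = np.column_stack([m.reshape(4) for m in B])

def coords_in_B(A):
    """B-coordinates of a 2x2 matrix A."""
    return np.linalg.solve(P, A.reshape(4))

M = np.column_stack([coords_in_B(m + m.T) for m in B])
print(M)

C = np.array([[2.0, 3.0], [2.0, 1.0]])
fC = M @ coords_in_B(C)       # B-coordinates of f(C)
print(P @ fC)                 # back to canonical: vec of C + C^T
```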
9. Let E be the space of polynomials of degree at most 2 with real coefficients. We consider
the linear mapping f : E ! E given by:
f(p(t)) = p′(t)

(i.e. the image of a polynomial is its derivative).
(a) Obtain the coordinate matrix of f with respect to the basis B = {t, 1 + t, t2 } of E.
(b) By using the matrix obtained in (a), compute f (1 + 2t + 2t2 ).
(a) We compute the image of each element of B = {t, 1 + t, t²} and express it with respect to B. Since 1 = −t + (1 + t),

f(t) = 1 → (−1, 1, 0)_B,
f(1 + t) = 1 → (−1, 1, 0)_B,
f(t²) = 2t → (2, 0, 0)_B,

so M_{B,B}(f) = ( −1 −1 2 ; 1 1 0 ; 0 0 0 ).

(b) The coordinates of 1 + 2t + 2t² with respect to B are (1, 1, 2)_B, since 1 + 2t + 2t² = t + (1 + t) + 2t². Then M_{B,B}(f)(1, 1, 2)ᵀ = (2, 2, 0)ᵀ, which corresponds to 2t + 2(1 + t) = 2 + 4t. Indeed, (1 + 2t + 2t²)′ = 2 + 4t.

(NOT FOR THE EXAM.) Is f diagonalizable? The characteristic polynomial of M_{B,B}(f) is −λ³, so λ = 0 is the only eigenvalue, with ma(0) = 3. But Ker(f) is the set of constant polynomials, Span{1}, so mg(0) = 1 < 3 = ma(0) and f is NOT diagonalizable.
10. Let E be the space of polynomials of degree at most 2 with real coefficients and let F be
the space of polynomials of degree at most 3 with real coefficients. We consider the linear
mapping f : E ! F defined as follows
f(p(t)) = ∫₀ᵗ p(s) ds
(i.e. the image of a polynomial is its definite integral between 0 and t: a primitive of the
polynomial so that at 0 is equal to 0.)
(a) Obtain the coordinate matrix of f with respect to the basis B = {1, 1 + t, 1 + t2 } of
E and the canonical basis of F .
(b) By using the matrix obtained in (a), compute f(3 + t + t²).
(a) We integrate each element of B = {1, 1 + t, 1 + t²} and express the result in the canonical basis {1, t, t², t³} of F:

f(1) = t → (0, 1, 0, 0),
f(1 + t) = t + t²/2 → (0, 1, 1/2, 0),
f(1 + t²) = t + t³/3 → (0, 1, 0, 1/3),

so M_{B,Bc}(f) = ( 0 0 0 ; 1 1 1 ; 0 1/2 0 ; 0 0 1/3 ).

(b) We first write 3 + t + t² with respect to B: 3 + t + t² = 1 + (1 + t) + (1 + t²), so its coordinates are (1, 1, 1)_B. Then M_{B,Bc}(f)(1, 1, 1)ᵀ = (0, 3, 1/2, 1/3)ᵀ, i.e.

f(3 + t + t²) = 3t + t²/2 + t³/3.
Example: a series RLC circuit with inductance L, resistance R, capacitance C and a voltage source E. We solve for the charge q(t), which satisfies

L d²q(t)/dt² + R dq(t)/dt + (1/C) q(t) = E.

We assume E is constant.
Lesson 5: Linear Di↵erential Equations
Introduction
Example: an electrical network (inductors of 1 H, resistors of 1 Ω, 2 Ω and 3 Ω) with loop currents i₁(t), i₂(t), i₃(t) leads to the first order linear system

i₁′(t) = i₂(t) − 2 i₃(t),
i₂′(t) = −4 i₂(t) + i₃(t),
i₃′(t) = 2 i₂(t) − 6 i₃(t).
Examples: What are the solutions of y′(t) = y(t)? They are y(t) = K eᵗ, with K an arbitrary constant. In general, the solutions of an o.d.e. form a family (t, y, C₁, . . . , Cₙ).
Non-homogeneous case
Theorem
Denote by ynh(t) the general solution of the above non-homogeneous, first order linear equation, denote by yp(t) a particular solution of the above non-homogeneous equation, and denote by yh(t) the general solution of the associated homogeneous first order linear equation. Then it holds that

ynh(t) = yh(t) + yp(t).
Theorem
The set of solutions of an homogeneous second order linear equation has
a structure of vector space of dimension 2. So, the general solution of
such an o.d.e. is
yh (t) = C1 y1 (t) + C2 y2 (t),
where y1 (t), y2 (t) are linearly independent solutions.
So, whenever we find two independent solutions y1 (t), y2 (t) of the o.d.e.,
we get the general solution.
Definition
The Wronskian W (y1 , y2 , . . . , yn ) of y1 (t), y2 (t), . . . , yn (t) is the
determinant
W(y₁, y₂, . . . , yₙ) = det ( y₁ y₂ ··· yₙ ; y₁′ y₂′ ··· yₙ′ ; ⋮ ⋮ ⋱ ⋮ ; y₁⁽ⁿ⁻¹⁾ y₂⁽ⁿ⁻¹⁾ ··· yₙ⁽ⁿ⁻¹⁾ ).
Proposition
If y1 (t), . . . , yn (t) are linearly dependent, then W (y1 , y2 , . . . , yn ) = 0
(i.e. it is identically 0).
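The definition can be checked numerically for n = 2; a small sketch using central differences (the function `wronskian2` is ours):

```python
import math

# Numerical 2x2 Wronskian W(y1, y2)(t) = y1*y2' - y1'*y2, using central
# differences for the derivatives.
def wronskian2(y1, y2, t, h=1e-6):
    d1 = (y1(t + h) - y1(t - h)) / (2 * h)
    d2 = (y2(t + h) - y2(t - h)) / (2 * h)
    return y1(t) * d2 - d1 * y2(t)

# sin and cos are independent solutions of y'' + y = 0: W = -sin^2 - cos^2 = -1
print(wronskian2(math.sin, math.cos, 0.7))            # approximately -1

# t and 2t are linearly dependent: the Wronskian is identically 0
print(wronskian2(lambda t: t, lambda t: 2 * t, 0.7))  # approximately 0
```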
Characteristic equation and discussion

For a y′′(t) + b y′(t) + c y(t) = 0: replace y′′ by λ², y′ by λ and y by 1, and solve the resulting second degree polynomial equation (the characteristic equation) a λ² + b λ + c = 0.
Lesson 5: Linear Di↵erential Equations
The roots are λ = (−b ± √(b² − 4ac)) / (2a), and there are three cases:

If b² − 4ac > 0, there are two distinct real roots λ₁ ≠ λ₂, and y₁(t) = e^{λ₁t}, y₂(t) = e^{λ₂t} are independent solutions.

If b² − 4ac = 0, there is a double root λ₁ = λ₂ = λ, and y₁(t) = e^{λt}, y₂(t) = t e^{λt}, so the general solution is y(t) = K₁ e^{λt} + K₂ t e^{λt} (K₁, K₂ constants).

If b² − 4ac < 0, the roots are complex conjugates λ = α ± jβ, giving the real solutions y₁(t) = e^{αt} cos(βt), y₂(t) = e^{αt} sin(βt).
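The case analysis above can be sketched in code (a toy helper of ours returning the two fundamental solutions as strings):

```python
# Two independent solutions of a y'' + b y' + c y = 0, from the roots of
# the characteristic equation a*l^2 + b*l + c = 0.
def fundamental_solutions(a, b, c):
    disc = b * b - 4 * a * c
    if disc > 0:                               # two distinct real roots
        r = disc ** 0.5
        l1, l2 = (-b + r) / (2 * a), (-b - r) / (2 * a)
        return [f"exp({l1:g} t)", f"exp({l2:g} t)"]
    if disc == 0:                              # double root
        l = -b / (2 * a)
        return [f"exp({l:g} t)", f"t exp({l:g} t)"]
    alpha = -b / (2 * a) + 0.0                 # complex pair alpha +/- j beta
    beta = (-disc) ** 0.5 / (2 * a)            # (+ 0.0 normalizes -0.0)
    return [f"exp({alpha:g} t) cos({beta:g} t)",
            f"exp({alpha:g} t) sin({beta:g} t)"]

print(fundamental_solutions(1, -3, 2))   # roots 2 and 1
print(fundamental_solutions(1, 2, 1))    # double root -1
print(fundamental_solutions(1, 0, 1))    # roots +/- j
```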
Second Order Linear Di↵erential Equations
Non-homogeneous case:
Theorem
Denote by ynh (t) the general solution of the above non-homogeneous,
second order linear equation, denote by yp (t) a particular solution of the
above non-homogeneous equation, and denote by yh (t) the general
solution of the associated homogeneous second order linear equation.
Then, it holds
ynh (t) = yh (t) + yp (t)
Diagonalization
1. Study if the following matrices can be diagonalized over R and, if so, obtain matrices D
(Diagonal) and P (regular) so that A = PDP⁻¹.
(a) A = ( 1 1 1 ; 0 1 0 ; 0 1 2 )   (b) A = ( 1 0 0 ; 1 1 2 ; 0 0 1 )   (c) A = ( 1 0 1 1 ; 2 1 2 4 ; 1 1 1 1 ; 1 1 1 3 )
(a) One eigenvalue λ of A has algebraic multiplicity ma(λ) = 2, but dim Ker(λI − A) = 3 − rank(λI − A) = 1, so mg(λ) = 1 < ma(λ) and A is not diagonalizable.

(b) Again some eigenvalue satisfies mg(λ) = dim Ker(λI − A) = 1 < 2 = ma(λ), so A is not diagonalizable.

(c) Here every eigenvalue satisfies mg(λ) = ma(λ). Collecting a basis of eigenvectors as the columns of P and the corresponding eigenvalues in the diagonal matrix D, A is DIAGONALIZABLE and A = PDP⁻¹.
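The mg-versus-ma test used above can be automated; a NumPy sketch (tolerances are ad hoc):

```python
import numpy as np

# Decide diagonalizability over R by comparing, for each eigenvalue, the
# algebraic multiplicity with dim Ker(lambda*I - A) (the geometric one).
def is_diagonalizable_over_R(A, tol=1e-9):
    A = np.asarray(A, dtype=float)
    eigvals = np.linalg.eigvals(A)
    if np.any(np.abs(np.imag(eigvals)) > tol):
        return False                    # complex eigenvalues: not over R
    lams = np.real(eigvals)
    for lam in np.unique(np.round(lams, 6)):
        ma = int(np.sum(np.abs(lams - lam) < 1e-6))
        rank = np.linalg.matrix_rank(lam * np.eye(A.shape[0]) - A, tol=1e-6)
        mg = A.shape[0] - rank
        if mg < ma:
            return False
    return True

print(is_diagonalizable_over_R([[1, 1], [0, 1]]))   # Jordan block: False
print(is_diagonalizable_over_R([[2, 0], [0, 3]]))   # already diagonal: True
```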
2. Let f : R3 → R3 be a linear mapping which has the following matrix representation with respect to the canonical basis:
A = ( 1 3 0 ; 0 a 0 ; 2 1 1 ).
The characteristic polynomial is p(λ) = det(λI − A). Discussing its roots in terms of the parameter a: for the value of a that makes an eigenvalue multiple, one checks that dim Ker(λI − A) < ma(λ), so A is not diagonalizable; for the remaining values of a the eigenvalues are simple and A is diagonalizable. When A = PDP⁻¹, we can also compute e^{At} = P e^{Dt} P⁻¹.
3. Let us consider the matrix

A = ( b 0 2b ; 1 1 2 ; b 0 2b )
that depends on the real parameter b.
The characteristic polynomial is p(λ) = det(λI − A) = λ(λ − 1)(λ − 3b), so the eigenvalues are 0, 1 and 3b.

If 3b ≠ 0 and 3b ≠ 1, there are three different single eigenvalues, so A is diagonalizable.

If b = 0, then ma(0) = 2, and since rank(0I − A) = 1 we get dim Ker(0I − A) = 3 − 1 = 2 = ma(0), so A is diagonalizable.

If 3b = 1 (b = 1/3), then ma(1) = 2 but dim Ker(1I − A) = 3 − 2 = 1 < 2, so A is NOT diagonalizable.

Conclusion: A is diagonalizable for every b with b ≠ 1/3.
4. Let f : R3 ! R3 be the linear mapping defined by:
where ↵ is real.
(a) Compute A the coordinate matrix with respect to the canonical basis.
(b) Determine the values of ↵ so that A is diagonalizable over R.
(c) For α = 1 find the matrices D (diagonal) and P (regular) so that A = PDP⁻¹.
(d) Compute, as an application of the previous result, the power Aⁿ (for the α = 1 case) as a function of n. Simplify as much as possible.
Linear Di↵erential Equations of Order n
General form:
Homogenous case:
Theorem
The set of solutions of an homogeneous linear equation of order n has a
structure of vector space of dimension n. So, the general solution of such an o.d.e. is yₕ(t) = C₁y₁(t) + · · · + Cₙyₙ(t), where y₁(t), . . . , yₙ(t) are linearly independent solutions.
Non-Homogenous case:
Theorem
If ynh (t) denotes the general solution of a non-homogeneous linear
equation of order n, yh (t) is the general solution of the associated
homogeneous equation and yp(t) is a particular solution of the non-homogeneous, then ynh(t) = yh(t) + yp(t).
Characteristic equation:
aₙrⁿ + aₙ₋₁rⁿ⁻¹ + · · · + a₀ = 0
As in the case n = 2, each root rk (real or complex), of multiplicity
mk , gives rise to mk independent solutions:
Every real root rk of multiplicity mk gives rise to
e^{rₖt}, t e^{rₖt}, . . . , t^{mₖ−1} e^{rₖt},
and every pair of complex roots α ± jβ of multiplicity mₖ gives rise to e^{αt}cos(βt), . . . , t^{mₖ−1}e^{αt}cos(βt) and e^{αt}sin(βt), . . . , t^{mₖ−1}e^{αt}sin(βt).
EXAMPLES
Lesson 5: Linear Di↵erential Equations
Transforming Equations of High Order into First Order
Linear Systems
EXAMPLE
we get

du(t)/dt = A(t)u + b(t).

In general, we have

du(t)/dt = A(t)u + b(t),
where
A(t) is an n-by-n square matrix whose elements are functions of t.
b(t) = (b1 (t), . . . , bn (t)) is a vector whose n coordinates are
functions of t.
u = (u1 , . . . , un ) where ui = ui (t) for i = 1, . . . , n.
If, in addition, we have initial conditions u1 (t0 ) = u10 , . . . , un (t0 ) = un0 ,
denoting (u10 , . . . , un0 ) by u(t0 ), then we have the following IVP:
du(t)/dt = A(t)u + b(t),
u(t₀) = u₀.
Given
du(t)/dt = A(t)u + b(t),
we say that
It is homogeneous if b(t) = 0, and non-homogeneous otherwise.
It has constant coefficients if A(t) = A, i.e. A is a constant matrix.
Homogeneous case:
du(t)/dt = A u.
If we think of the scalar case,

dy/dt = a y  ⇒  y(t) = C e^{at}.
Introduction: Taylor’s polynomial.
EXAMPLES
eᵗ = 1 + t + t²/2! + t³/3! + · · ·
sin(t) = t − t³/3! + t⁵/5! − · · ·
e^{At} = I + (At)/1! + (A²t²)/2! + · · · + (Aⁿtⁿ)/n! + · · ·
Here n! represents the factorial number defined as n! = 1 ⇥ 2 ⇥ · · · ⇥ n.
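The truncated series gives a direct (if numerically naive) way to approximate e^{At}; a NumPy sketch:

```python
import numpy as np

# Truncated series e^{At} ~ I + At/1! + (At)^2/2! + ... + (At)^n/n!
def expm_series(A, t, n_terms=30):
    A = np.asarray(A, dtype=float)
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, n_terms):
        term = term @ (A * t) / n        # (At)^n / n! built incrementally
        result = result + term
    return result

# For a diagonal matrix the exponential is e^(lambda_i t) on the diagonal
A = np.diag([1.0, 2.0])
print(expm_series(A, 1.0))
print(np.diag(np.exp([1.0, 2.0])))       # same result
```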
Lesson 5: Linear Di↵erential Equations
Linear First Order Systems of Di↵erential Equations with
Constant Coefficients
If

A = Diag(λ₁, λ₂, . . . , λₙ),

then

e^A = Diag(e^{λ₁}, e^{λ₂}, . . . , e^{λₙ}).
Properties:
e 0 = I,
If A and B commute, then e A+B = e A e B .
The matrix e^A is regular and (e^A)⁻¹ = e^{−A}.
(d/dt) e^{At} = A e^{At}.
Theorem:
The general solution of

du(t)/dt = A u

is

u(t) = e^{At} C.
EXAMPLE
Theorem:
The general solution of the Initial Values Problem
du(t)/dt = A u,
u(t₀) = u₀
is
u(t) = e^{A(t−t₀)} u(t₀).
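When A is diagonalizable, the IVP solution u(t) = e^{A(t−t₀)}u(t₀) can be computed via eigendecomposition; a NumPy sketch (it assumes A has a full set of eigenvectors):

```python
import numpy as np

# u' = A u, u(t0) = u0, solved as u(t) = e^{A(t - t0)} u0, where e^{At} is
# computed by diagonalization: A = P D P^-1  =>  e^{At} = P e^{Dt} P^-1.
def solve_ivp_diag(A, u0, t, t0=0.0):
    lam, P = np.linalg.eig(np.asarray(A, dtype=float))
    expDt = np.diag(np.exp(lam * (t - t0)))
    return np.real(P @ expDt @ np.linalg.inv(P) @ u0)

A = np.array([[0.0, 1.0], [1.0, 0.0]])      # eigenvalues +1 and -1
u0 = np.array([2.0, 0.0])
u1 = solve_ivp_diag(A, u0, 1.0)
print(u1)          # components are 2*cosh(1) and 2*sinh(1)
```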
If A = PDP⁻¹, then e^{At} = P e^{Dt} P⁻¹.
Non-homogeneous case:

By variation of constants we look for u(t) = e^{At} C(t); substituting into the equation gives C′(t) = e^{−At} b(t), i.e.

C(t) = ∫ e^{−As} b(s) ds.

Hence, the solution of the equation

du(t)/dt = A u + b(t)

is

u(t) = e^{At} C + e^{At} ∫ e^{−As} b(s) ds,

and, imposing the initial condition u(t₀) = u₀,

u(t) = e^{A(t−t₀)} u₀ + e^{At} ∫_{t₀}^{t} e^{−As} b(s) ds.
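The variation-of-constants formula can be approximated numerically; a sketch (the series e^{At} and the trapezoidal quadrature are our choices, not part of the notes):

```python
import numpy as np

# u(t) = e^{A(t-t0)} u0 + e^{At} * integral_{t0}^{t} e^{-As} b(s) ds,
# for constant A, with a truncated-series matrix exponential and a
# trapezoidal-rule integral.
def expm(M, n_terms=40):
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for n in range(1, n_terms):
        term = term @ M / n
        out = out + term
    return out

def solve_nonhomogeneous(A, b, u0, t, t0=0.0, steps=2001):
    A = np.asarray(A, dtype=float)
    s = np.linspace(t0, t, steps)
    vals = np.array([expm(-A * si) @ b(si) for si in s])
    h = s[1] - s[0]
    integral = h * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)
    return expm(A * (t - t0)) @ u0 + expm(A * t) @ integral

# Scalar sanity check: u' = -u + 1, u(0) = 0 has solution u(t) = 1 - e^{-t}
u = solve_nonhomogeneous([[-1.0]], lambda s: np.array([1.0]),
                         np.array([0.0]), 2.0)
print(u)
```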
5. Let us consider the matrix

A = ( b 2 0 ; 2 b 0 ; 1 0 2 )
that depends on the real parameter b.
The characteristic polynomial is p(λ) = det(λI − A) = (λ − 2)(λ² − 2bλ + b² − 4) = (λ − 2)(λ − (b + 2))(λ − (b − 2)), so the eigenvalues are 2, b + 2 and b − 2.

If b ≠ 0 and b ≠ 4, there are 3 different single eigenvalues, so A is diagonalizable.

If b = 4, the eigenvalues are 2 (with ma(2) = 2) and 6 (with ma(6) = 1). Since rank(2I − A) = 2, dim Ker(2I − A) = 1 < 2 = ma(2), so A is NOT diagonalizable.

If b = 0, the eigenvalues are 2 (with ma(2) = 2) and −2. Again rank(2I − A) = 2, so mg(2) = 1 < 2 and A is NOT diagonalizable.

When A is diagonalizable, taking the eigenvectors as the columns of P and D = Diag(2, b + 2, b − 2), we have A = PDP⁻¹ and Aⁿ = PDⁿP⁻¹.
6. Let us consider the matrix

A = ( a 1 1 ; 1 a 0 ; 0 0 1 )
that depends on the real parameter a.
The characteristic polynomial is p(λ) = (λ − 1)(λ − (a − 1))(λ − (a + 1)), so the eigenvalues are 1, a − 1 and a + 1.

If a ≠ 0 and a ≠ 2, the three eigenvalues are different and A is diagonalizable.

If a = 2, then ma(1) = 2 and ma(3) = 1, but dim Ker(1I − A) = 1 < 2, so A is NOT diagonalizable. The case a = 0 is analogous (ma(1) = 2 but mg(1) = 1).

(b) For a = 1 the spectrum is σ(A) = {1, 0, 2}: three different single eigenvalues, so A is diagonalizable; finding P and D is left as homework.
7. Let us consider the matrix

A = ( 1 0 3 ; 0 2 0 ; 0 2 a )
that depends on the real parameter a.
Homework
p(λ) = det(λI − A) = (λ − 1)(λ − 2)(λ − a), so σ(A) = {1, 2, a}.

If a ≠ 1 and a ≠ 2, A has 3 different single eigenvalues, hence A is diagonalizable.

If a = 2, then ma(2) = 2 but dim Ker(2I − A) = 1 < 2, so A is NOT diagonalizable.

If a = 1, then ma(1) = 2 but dim Ker(1I − A) = 1 < 2, so A is NOT diagonalizable.
8. Let us consider the matrix

A = ( 1 0 0 ; 0 3 2 ; 2 0 b )
that depends on the real parameter b.
Homework
p(λ) = det(λI − A) = (λ − 1)(λ − 3)(λ − b), so the spectrum is σ(A) = {1, 3, b}.

Discussion: If b ≠ 1 and b ≠ 3, A has three different single eigenvalues and A is diagonalizable.

If b = 3, then ma(3) = 2 and ma(1) = 1, but dim Ker(3I − A) = 1 < 2, so A is NOT diagonalizable.

If b = 1, then ma(1) = 2, but dim Ker(1I − A) = 1 < 2, so A is NOT diagonalizable.
LINEAR ALGEBRA (350000) Course 2019/20
1. Solve the following initial values problem:

u′ = Au,  u(0) = (1, 1, 1)ᵀ,  where A = ( 0 1 1 ; 1 0 1 ; 1 1 0 ).
The characteristic polynomial is p(λ) = det(λI − A) = λ³ − 3λ − 2 = (λ − 2)(λ + 1)² (factoring, e.g., by Ruffini's rule), so σ(A) = {2, −1} with ma(−1) = 2.

For λ = −1: Ker(−I − A) is given by x + y + z = 0, so mg(−1) = 2 and a basis is {(1, −1, 0), (1, 0, −1)}. For λ = 2: Ker(2I − A) = Span{(1, 1, 1)}.

Hence A = PDP⁻¹ with D = Diag(2, −1, −1), and the solution of the IVP is u(t) = e^{At} u(0) = P e^{Dt} P⁻¹ u(0).

Shortcut: u(0) = (1, 1, 1)ᵀ is an eigenvector of eigenvalue 2, so directly u(t) = e^{2t}(1, 1, 1)ᵀ.
Lesson 6: Euclidean Spaces
We want to find the coordinates (x, y , z) of the point where we are. Let
us address first the problem in the plane (we want to find (x, y ))
We call a satellite. The satellite “knows” where it is; also, by calling the
satellite, we know the distance from the satellite to us. So, we get a
circle, and we know that we are a point on the circle.
If we call another satellite, then we find two circles containing us. So,
two possible solutions. But one of them can be easily discarded.
The satellites are very far away, so in fact, in the vicinity of our position the circles can be approximated by lines. Equivalently, the spheres can be approximated by planes.
a11 x + a12 y + a13 z = b1
a21 x + a22 y + a23 z = b2
    ⋮            ⋮
am1 x + am2 y + am3 z = bm
In other words,

f(t) = a₀ + Σₙ₌₁^∞ [aₙ sin(nt) + bₙ cos(nt)].
Rough idea: the signal f (t) is decomposed into simpler signals, with
increasing frequency. Often, the most relevant information on f (t) is
carried by the terms of small frequency.
(2) Linearity:

u⃗ • (λ v⃗ + µ w⃗) = λ (u⃗ • v⃗) + µ (u⃗ • w⃗),  for all u⃗, v⃗, w⃗ ∈ V.
(3) Positive-definiteness: u⃗ • u⃗ ≥ 0 for all u⃗ ∈ V. Moreover,

u⃗ • u⃗ = 0 if and only if u⃗ = 0⃗.
A vector space V over R furnished with an inner product •, and by that we mean (V, R, •), is called a Euclidean vector space or an inner product space.
‖·‖ : V → R, v⃗ ∈ V ↦ ‖v⃗‖.

‖v⃗ + w⃗‖ ≤ ‖v⃗‖ + ‖w⃗‖, for all v⃗, w⃗ ∈ V.
Examples.
A vector space V over R furnished with a norm ‖·‖, and by that we mean (V, R, ‖·‖), is called a normed space.
Let V be a vector space over R. Then for all u~, ~v 2 V , the following
inequality holds:
|u⃗ • v⃗| ≤ √(u⃗ • u⃗) √(v⃗ • v⃗),   i.e.   |u⃗ • v⃗| ≤ ‖u⃗‖ ‖v⃗‖.
d(u⃗, v⃗) = ‖u⃗ − v⃗‖.
So, for instance, for two signals, namely f (t) and g (t), such distance
between them is
d(f(t), g(t)) = √( ∫ₐᵇ (f(t) − g(t))² dt ).
In particular, if f (t) is the signal that was sent, and g (t) the signal
received, then the above quantity is measuring the noise.
cos(θ) = (u⃗ • v⃗) / (‖u⃗‖ ‖v⃗‖).

u⃗ ⊥ v⃗ ⟺ θ = π/2 ⟺ cos θ = 0.

Whenever this happens (u⃗, v⃗ ∈ V are orthogonal),

u⃗ ⊥ v⃗ ⟺ cos θ = 0 ⟺ u⃗ • v⃗ = 0.
We say that a basis B of a Euclidean vector space V is orthogonal, if
every two vectors e~i , e~j 2 B, with i 6= j, satisfy that
e~i • e~j = 0.
Examples.
I
Lesson 6: Euclidean Spaces
2. Solve the following initial values problem where the initial value is given at t = 1:
(
u0 = Au II
type where A = 4 3 .
✓ ◆
p(λ) = det(λI − A). Once the eigenvalues and eigenvectors are found, if A is diagonalizable, A = PDP⁻¹, then e^{At} = P e^{Dt} P⁻¹ and the solution with the initial value given at t = 1 is u(t) = e^{A(t−1)} u(1).
4. Solve the following initial values problem:

u′(t) = Au(t) + B(t),  u(0) = (1, 1)ᵀ,  where A = ( 2 1 ; 0 2 ), B(t) = (3e²ᵗ, 4e²ᵗ)ᵀ.
p(λ) = (λ − 2)², so σ(A) = {2} with ma(2) = 2 but dim Ker(2I − A) = 1: A is not diagonalizable and we use its Jordan structure. Writing A = 2I + N with N = ( 0 1 ; 0 0 ) nilpotent (N² = 0), the matrices 2I and N commute, so

e^{At} = e^{2t} e^{Nt} = e^{2t}(I + Nt) = e^{2t} ( 1 t ; 0 1 ),

and u(t) = e^{At} u(0) + e^{At} ∫₀ᵗ e^{−As} B(s) ds.
5. Solve the following initial values problem:

u′(t) = Au(t),  u(0) = (1, 1)ᵀ,  where A = ( 1 1 ; −1 3 ).
p(λ) = λ² − 4λ + 4 = (λ − 2)², so σ(A) = {2} with ma(2) = 2, while Ker(2I − A) = Span{(1, 1)} has dimension 1: A is not diagonalizable and we use its Jordan structure. Writing A = 2I + N with N = A − 2I = ( −1 1 ; −1 1 ), N² = 0, so e^{At} = e^{2t}(I + Nt). Since N u(0) = 0 (u(0) is an eigenvector), the solution is u(t) = e^{2t}(1, 1)ᵀ.
Orthogonality in V
(x1 , x2 , . . . , xn )B ,
Proof.
Example.
Lemma
Let {~v1 , ~v2 , . . . , ~vn } be vectors of V , and let
(?) One may see that in fact every vector in W is orthogonal to every
vector in Span(~v1 , ~v2 , . . . , ~vn ). So, we might write
Given a vector subspace U, the space of all the vectors which are
orthogonal to U is called the orthogonal complement of U. It is
denoted by U ? .
Properties:
(1) If W = U⊥, then W⊥ = U; that is, U⊥⊥ = U.
{ω⃗₁, ω⃗₂, · · · , ω⃗ₘ}

The projection is obtained from the normal equations

AᵀA x̂ = Aᵀ b.

The projection of b onto V is

p = A x̂ = A(AᵀA)⁻¹Aᵀ b.
6. Solve the following initial values problem:
u′(t) = Au(t),  u(0) = (500, 100)ᵀ,  where A = ( 2 1 ; −1 4 ).
p(λ) = λ² − 6λ + 9 = (λ − 3)², so λ = 3 with ma(3) = 2 and mg(3) = 1: again a Jordan case. With N = A − 3I = ( −1 1 ; −1 1 ), N² = 0, so e^{At} = e^{3t}(I + Nt) and

u(t) = e^{3t}(I + Nt) u(0) = e^{3t} ( 500 − 400t ; 100 − 400t ).
7. Solve the following initial values problem:
u′(t) = Au(t),  u(0) = (−1, 2, 30)ᵀ,  where A = ( 1 0 0 ; 4 1 0 ; 3 6 2 ).
A is triangular, so p(λ) = (λ − 1)²(λ − 2): ma(1) = 2, ma(2) = 1. Since dim Ker(1I − A) = 1 < 2, A is NOT diagonalizable, and we use a Jordan form J = P⁻¹AP with a 2 × 2 Jordan block for λ = 1. Then e^{Jt} is computed block by block (eᵗ(I + Nt) on the Jordan block, e²ᵗ on the remaining entry) and the solution is u(t) = P e^{Jt} P⁻¹ u(0).
EASY
8. Solve the following initial values problem:

u′(t) = Au(t),  u(0) = (1, 0, 0)ᵀ,  where A = ( 1 0 0 ; 1 2 0 ; 1 0 1 ).
A is lower triangular, so σ(A) = {1, 2} with ma(1) = 2. Since rank(1I − A) = 2, mg(1) = 1 < 2 and A is not diagonalizable, but the system is triangular and can be solved equation by equation: u₁′ = u₁ with u₁(0) = 1 gives u₁(t) = eᵗ; substituting into the remaining equations and solving the resulting scalar linear equations gives the other components.
9. Solve the following initial values problem:
u′(t) = Au(t),  u(0) = (2, 2)ᵀ,  where A = ( 0 1 ; −1 0 ).
p(λ) = λ² + 1, so the eigenvalues are ±j and we diagonalize over C: Ker(jI − A) = Span{(1, j)}, Ker(−jI − A) = Span{(1, −j)}. Using e^{jt} = cos t + j sin t,

e^{At} = P e^{Dt} P⁻¹ = ( cos t  sin t ; −sin t  cos t ),

and hence

u(t) = e^{At} u(0) = ( 2 cos t + 2 sin t ; 2 cos t − 2 sin t ).
10. Solve the following initial values problem:
u′(t) = Au(t),  u(0) = (1, 2)ᵀ,  where A = ( 0 2 ; −1 2 ).
p(λ) = λ² − 2λ + 2, whose roots are λ = 1 ± j. Proceeding as in the previous problem (diagonalizing over C and using e^{(1±j)t} = eᵗ(cos t ± j sin t)), the solution is obtained in terms of eᵗ cos t and eᵗ sin t.
11. Solve the following initial values problem:

u′(t) = Au(t) + B(t),  u(0) = (1, 1)ᵀ,  where A = ( 2 5 ; −1 −2 ), B(t) = (2, 3)ᵀ.
Solution: u(t) = e^{At} u(0) + e^{At} ∫₀ᵗ e^{−As} B(s) ds. Here p(λ) = λ² + 1 = 0 gives λ = ±j; Ker(jI − A) = Span{(5, j − 2)} and e^{At} is obtained by diagonalizing over C as in the previous problems. Since B is constant, the integral can be evaluated directly.
12. Solve the following initial values problem:
u′(t) = Au(t) + B(t),  u(0) = (7, 9)ᵀ,  where A = ( 0 1 ; −1 0 ), B(t) = (0, t)ᵀ.
12. Solve the following initial values problem:

y′′′(t) − 2y′′(t) − y′(t) + 2y(t) = 0,
y(0) = 3, y′(0) = 2, y′′(0) = 6.
Setting u₁ = y, u₂ = y′, u₃ = y′′ transforms the equation into the first order system u′ = Au with

A = ( 0 1 0 ; 0 0 1 ; −2 1 2 ),  u(0) = (3, 2, 6)ᵀ.

Alternatively, solve the characteristic equation r³ − 2r² − r + 2 = (r − 1)(r + 1)(r − 2) = 0 and impose the initial conditions on y(t) = C₁eᵗ + C₂e⁻ᵗ + C₃e²ᵗ, which gives C₁ = C₂ = C₃ = 1, i.e. y(t) = eᵗ + e⁻ᵗ + e²ᵗ.
Homework i Solve y 2g t y get ateat
1 1,2
LINEAR ALGEBRA
November 11th, 2019
where a ∈ R.
(a) (1.5 points) Obtain the LU decomposition of A.
(b) (1.5 points) By using the previous LU decomposition, solve the linear
system AX = b, where bT = (4, 6, 16).
Solution:
(a) We are looking for a lower triangular matrix L with main diagonal entries equal to 1 and an upper triangular matrix U such that A = LU. We get such matrices by applying Gaussian elimination, obtaining:
A = ( 1 2 a ; 2 2 2a+1 ; 3 10 3a+1 ) = LU = ( 1 0 0 ; 2 1 0 ; 3 −2 1 ) ( 1 2 a ; 0 −2 1 ; 0 0 3 ).

(b) First we solve LY = b by forward substitution:

( 1 0 0 ; 2 1 0 ; 3 −2 1 ) (y₁ ; y₂ ; y₃) = (4 ; 6 ; 16)  ⇒  (y₁ ; y₂ ; y₃) = (4 ; −2 ; 0),

and then UX = Y by back substitution: 3x₃ = 0 ⇒ x₃ = 0; −2x₂ + x₃ = −2 ⇒ x₂ = 1; x₁ + 2x₂ + a x₃ = 4 ⇒ x₁ = 2. Hence X = (2, 1, 0)ᵀ.
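A quick check of the factorization and the two substitutions (taking a = 1 for concreteness; the factorization holds for any a):

```python
import numpy as np

a = 1.0
A = np.array([[1, 2, a], [2, 2, 2 * a + 1], [3, 10, 3 * a + 1]])
L = np.array([[1, 0, 0], [2, 1, 0], [3, -2, 1]], dtype=float)
U = np.array([[1, 2, a], [0, -2, 1], [0, 0, 3]])

print(np.allclose(L @ U, A))     # the factorization A = LU holds

b = np.array([4.0, 6.0, 16.0])
y = np.linalg.solve(L, b)        # forward substitution: L y = b
x = np.linalg.solve(U, y)        # back substitution:    U x = y
print(y)                         # ( 4, -2, 0)
print(x)                         # ( 2,  1, 0)
```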
2. In the vector space of polynomials of degree at most 3 with real coefficients
T
we consider the bases
B1 = t, 1, t2 , t3
and t
B2 = t3 + t2 , t2 , t, 1 .
(a) (1.5 points) Obtain the matrix of change of bases from B1 to B2 , and
from B2 to B1 , identifying clearly which one is which.
(b) (1.5 points) Given the polynomial 1 t + t3 , use the proper matrix of
change of basis to calculate its coordinates with respect to B2 . Check
the result is correct.
Solution:
(a) There are many ways to solve this problem, we are going to use here
one of such methods. In order to obtain the change of basis from B1
to B2 we need to express each element of B1 with respect to the basis
B2 :
t = 0(t³ + t²) + 0(t²) + 1(t) + 0(1) → t = (0, 0, 1, 0)_{B2},
1 = 0(t³ + t²) + 0(t²) + 0(t) + 1(1) → 1 = (0, 0, 0, 1)_{B2},
t2 = 0(t3 + t2 ) + 1(t2 ) + 0(t) + 0(1) ! t2 = (0, 1, 0, 0)B2 ,
and
t3 = 1(t³ + t²) − 1(t²) + 0(t) + 0(1) → t³ = (1, −1, 0, 0)_{B2}.
Therefore, the change of basis from B1 to B2 is
M_{B1,B2} = ( 0 0 0 1 ; 0 0 1 −1 ; 1 0 0 0 ; 0 1 0 0 ).
And for the other direction we have

t³ + t² = 0(t) + 0(1) + 1(t²) + 1(t³) → t³ + t² = (0, 0, 1, 1)_{B1},
t² = 0(t) + 0(1) + 1(t²) + 0(t³) → t² = (0, 0, 1, 0)_{B1},
t = (1, 0, 0, 0)_{B1} and 1 = (0, 1, 0, 0)_{B1},

which give, column by column, the matrix M_{B2,B1}.
3. Let E be the space of squared matrices of dimension 2 with real entries.
We consider the linear mapping f : E ! E defined as follows:
f(A) = M A  (an endomorphism of E),

where

M = ( 1 2 ; 2 4 ).
(a) (2 points) Obtain the coordinate matrix of f with respect to the basis
of E
B = { ( 1 0 ; 0 0 ), ( 1 1 ; 0 0 ), ( 1 1 ; 1 0 ), ( 1 1 ; 1 1 ) }.
(b) (1 point) By using the matrix obtained in (a), obtain f(C), where C = ( 4 3 ; 2 1 ).
(c) (1 point) By using the matrix obtained in (a), obtain a basis of the
subspace ker(f ).
Solution:
(a) We need to express the image of each element from the basis into the
same basis, let us see how:
f( ( 1 0 ; 0 0 ) ) = M ( 1 0 ; 0 0 ) = ( 1 2 ; 2 4 ) ( 1 0 ; 0 0 ) = ( 1 0 ; 2 0 ),
and proceeding in the same way with the other elements of B, we obtain

M_{B,B}(f) = ( 1 0 2 0 ; −2 −1 −5 −3 ; 2 0 4 0 ; 0 2 2 6 ).
(b) We express C with respect to the basis B, obtaining
C = 1·( 1 0 ; 0 0 ) + 1·( 1 1 ; 0 0 ) + 1·( 1 1 ; 1 0 ) + 1·( 1 1 ; 1 1 ) = (1, 1, 1, 1)_B.
Therefore, we get

f(C) → M_{B,B}(f)(1, 1, 1, 1)ᵀ = (3, −11, 6, 10)ᵀ,

i.e., with respect to B and then written as a matrix,

f(C) = (3, −11, 6, 10)_B = ( 8 5 ; 16 10 ).
(c) The kernel of the mapping is formed by the matrices A whose image is the null matrix:

ker(f) = { A : f(A) = ( 0 0 ; 0 0 ) }.

In B-coordinates, M_{B,B}(f)(x, y, z, t)ᵀ = 0 gives the system

x + 2z = 0, −2x − y − 5z − 3t = 0, 2x + 4z = 0, 2y + 2z + 6t = 0  ⇒  x + 2z = 0, y + z + 3t = 0.

Taking (z, t) = (1, 0) and (z, t) = (0, 1) we get the coordinate vectors (−2, −1, 1, 0)_B and (0, −3, 0, 1)_B; so, writing the elements with respect to the original vector space, the basis is

ker(f) = Span( ( −2 0 ; 1 0 ), ( −2 −2 ; 1 1 ) ).
TIME: 1 hour and 20 minutes.
Sheet 7 Euclidean Spaces
Department of Physics and Mathematics LINEAR ALGEBRA (350000) Course 2019/20
Euclidean Spaces
3. Consider the vector subspace U of R3 , furnished with the usual dot product, defined by
U ⌘ x + y = 0. Find the parametric and implicit equations of the orthogonal complement
U ? , and the orthogonal projection of b = (1, 2, 3) onto U .
U ≡ x + y = 0 has normal vector (1, 1, 0), so U⊥ = Span{(1, 1, 0)}. Parametric equations of U⊥: (x, y, z) = λ(1, 1, 0); implicit equations: x − y = 0, z = 0.

Since projU(b) + projU⊥(b) = b and

projU⊥(b) = ((b • (1, 1, 0)) / ‖(1, 1, 0)‖²) (1, 1, 0) = (3/2)(1, 1, 0) = (3/2, 3/2, 0),

we get projU(b) = b − projU⊥(b) = (−1/2, 1/2, 3).
4. In R4 , with the usual dot product, consider the vector subspace
U = L ((1, 1, 1, 1), (1, 1, 1, 1), (1, 1, 1, 1)) .
(a) Find a basis and the implicit equations of the orthogonal complement U ? of U .
(b) Construct an orthonormal basis for U .
(c) Express v = (1, 2, 3, 4) as a sum of two vectors, one of them belonging to U and the
other to U ? .
(d) Find the distance from v to U and U ? .
(a) Since dim U = 3, dim U⊥ = 4 − 3 = 1. The implicit equations of U⊥ are the three conditions u⃗ᵢ • (x, y, z, t) = 0, one per generator of U; solving this homogeneous system (one free variable) gives the parametric equations and a basis of U⊥.

(b) Apply Gram–Schmidt to the generators of U: take v⃗₁ = u⃗₁, then v⃗₂ = u⃗₂ − ((u⃗₂ • v⃗₁)/(v⃗₁ • v⃗₁)) v⃗₁, then v⃗₃ = u⃗₃ − ((u⃗₃ • v⃗₁)/(v⃗₁ • v⃗₁)) v⃗₁ − ((u⃗₃ • v⃗₂)/(v⃗₂ • v⃗₂)) v⃗₂, and finally normalize: w⃗ᵢ = v⃗ᵢ/‖v⃗ᵢ‖.

(c) v = projU(v) + projU⊥(v), where projU⊥(v) is computed by projecting v onto the basis vector of U⊥ obtained in (a), and projU(v) = v − projU⊥(v).

(d) d(v, U) = ‖projU⊥(v)‖ and d(v, U⊥) = ‖projU(v)‖.
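The Gram–Schmidt step in (b) can be sketched as follows (the sign pattern of the generators is our reading of the scan, so treat it as illustrative):

```python
import numpy as np

# Gram-Schmidt: orthonormalize a list of independent vectors.
def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for b in basis:
            w = w - (w @ b) * b          # subtract the projection onto b
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

U = [(1, 1, 1, 1), (1, -1, 1, -1), (1, 1, -1, -1)]   # illustrative generators
Q = gram_schmidt(U)
print(Q @ Q.T)        # identity: the rows are orthonormal
```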
5. Find an orthogonal basis of the subspace U ✓ R4 of equation x+y z +w = 0. Determine
the projection of v = (1, 0, 0, 1) onto U and the distance from v to U .
6. As in problem 3, determine the orthogonal projection of b = (1, 2, 3) onto the subspace
spanned by the columns of the matrix
A = ( 1 0 ; −1 0 ; 0 1 ).
We compute p = A(AᵀA)⁻¹Aᵀb. Here AᵀA = ( 2 0 ; 0 1 ) and Aᵀb = (−1, 3)ᵀ, so x̂ = (AᵀA)⁻¹Aᵀb = (−1/2, 3)ᵀ and

p = A x̂ = (−1/2, 1/2, 3)ᵀ,

the same projection obtained in problem 3.
7. Find the regression line b = C + Dt for the data t1 = 1, t2 = 2, t3 = 3, t4 = 4, b1 = 3,
b2 = 4, b3 = 5, b4 = 7 (i.e. the line of equation b = C + Dt that best approximates, in the
least squares sense, the points (1, 3), (2, 4), (3, 5), (4, 7)). Denoting the solution p = Ab
(the projection of b onto the column space of A, which provides the values at the points
ti of the computed regression line), explain why the following relationships, well-known
in statistics, hold:
e1 + e2 + e3 + e4 = 0, t1 e1 + t2 e2 + t3 e3 + t4 e4 = 0.
Here A = ( 1 1 ; 1 2 ; 1 3 ; 1 4 ) and b = (3, 4, 5, 7)ᵀ. The normal equations AᵀAx = Aᵀb are

( 4 10 ; 10 30 ) (C ; D) = (19 ; 54),

whose solution is C = 3/2, D = 13/10, so the regression line is b = 3/2 + (13/10)t.

The residue vector e = b − p = b − Ax̂ is orthogonal to the column space of A, hence to each column of A. Orthogonality to the first column (1, 1, 1, 1)ᵀ gives e₁ + e₂ + e₃ + e₄ = 0, and orthogonality to the second column (t₁, t₂, t₃, t₄)ᵀ gives t₁e₁ + t₂e₂ + t₃e₃ + t₄e₄ = 0.
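The regression line and the two orthogonality relations can be checked numerically:

```python
import numpy as np

# Regression line b = C + D t via the normal equations A^T A x = A^T b.
t = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([3.0, 4.0, 5.0, 7.0])
A = np.column_stack([np.ones_like(t), t])

C, D = np.linalg.solve(A.T @ A, A.T @ b)
print(C, D)                      # 1.5 1.3

e = b - A @ np.array([C, D])     # residues
print(e.sum())                   # ~0: e orthogonal to the column of ones
print((t * e).sum())             # ~0: e orthogonal to the column of t's
```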
8. Consider the problem of calculating a regression parabola b = Dt + Et2 (i.e. we impose
that the constant term is C = 0) for the data t1 = 1, t2 = 2, t3 = 3, t4 = 4. Let x be the
solution vector of the normal equations AᵀAx = Aᵀb and let e = b − Ax be the residues vector. Explain why (regardless of the vector b) it holds that

t₁e₁ + t₂e₂ + t₃e₃ + t₄e₄ = 0  and  t₁²e₁ + t₂²e₂ + t₃²e₃ + t₄²e₄ = 0,

although in this case we cannot ensure that the sum of all the residues vanishes, e₁ + e₂ + e₃ + e₄ = 0.
Here the model is b = Dt + Et², so

A = ( 1 1 ; 2 4 ; 3 9 ; 4 16 ).

The least squares method gives AᵀAx = Aᵀb, and the residue vector e = b − Ax is orthogonal to the columns of A. Orthogonality to the first column (t₁, t₂, t₃, t₄)ᵀ gives t₁e₁ + t₂e₂ + t₃e₃ + t₄e₄ = 0, and orthogonality to the second column (t₁², t₂², t₃², t₄²)ᵀ gives t₁²e₁ + t₂²e₂ + t₃²e₃ + t₄²e₄ = 0. However, (1, 1, 1, 1)ᵀ is not a column of A (there is no constant term C), so we cannot conclude that e₁ + e₂ + e₃ + e₄ = 0.
LINEAR ALGEBRA
Third Evaluation Test
January, 16th, 2019
U = ⟨(−1, 1, 0, 1), (1, 1, 2, 1), (−3, 1, 0, 1)⟩.

We are asked:

a) Obtain a basis of the orthogonal complement U⊥ of U. (1 pt.)
b) Compute the orthogonal projection of the vector b = (2, 5, 1, 1) onto the subspace U. (1.5 pts.)
c) Compute the distance between b and U. (1 pt.)
Solution: a) Taking into account the definition of U⊥ (the vectors orthogonal to the three generators of U), we obtain

U⊥ = Span{(1, 2, 1, 1)}.

b) Taking into account projU(b) + projU⊥(b) = b, and writing A for the column matrix of (1, 2, 1, 1),

projU⊥(b) = A(AᵀA)⁻¹Aᵀb = ((b • (1, 2, 1, 1)) / 7)(1, 2, 1, 1) = (14/7)(1, 2, 1, 1) = (2, 4, 2, 2)ᵀ,

so projU(b) = b − projU⊥(b) = (0, 1, −1, −1)ᵀ.

c) And the distance is

d(b, U) = ‖projU⊥(b)‖ = √(2² + 4² + 2² + 2²) = √28 = 2√7.
2. For the following data:
x1 = −1, x2 = 0, x3 = 2, x4 = 3,
y1 = 0, y2 = 1, y3 = 3, y4 = 2.
We want:
a) To compute the regression line y = a + bx (i.e. the line y = a + bx that best fits the points (xi, yi) in the least squares sense). (2 pts.)
b) Evaluate the line of a) in the values xi . (0,5 pts.)
c) If e = (e1, e2, e3, e4) is the residue vector, explain why, in this case, the relation e₂ + 3e₃ + 4e₄ = 0 holds. (1 pt.)
Solution: a) With these values we impose

(0, 1, 3, 2)ᵀ = A (a, b)ᵀ,  with  A = ( 1 −1 ; 1 0 ; 1 2 ; 1 3 ).
To get the solution of the regression line, y = a + bx, we need to solve the
linear system

( 4 4 ; 4 14 ) (a ; b) = (6 ; 12).
We get as solution:

a = 9/10, b = 6/10  ⇒  10y = 9 + 6x.
b) With this solution we have
y(x₁) = 3/10, y(x₂) = 9/10, y(x₃) = 21/10, y(x₄) = 27/10.
c) In this case we have

e₁ = y₁ − y(x₁) = −3/10, e₂ = y₂ − y(x₂) = 1/10, e₃ = 9/10, e₄ = −7/10.
10 10 10 10
Since e⃗ is orthogonal to the columns of A, i.e.

e₁ + e₂ + e₃ + e₄ = 0,  −e₁ + 2e₃ + 3e₄ = 0,

the required relation is the sum of these two conditions (one per column of A): e₂ + 3e₃ + 4e₄ = 0.
3. Given the boolean function
we want to compute, by using its value table, its disjunctive normal form
and its conjunctive normal form. (3 pts.)
Solution: The table of values of f is
ω x y z   x z̄ y   ω y   n    f
0 0 0 0     0      0    0    0
0 0 0 1     0      0    1    0
0 0 1 0     0      0    2    0
0 0 1 1     0      0    3    0
0 1 0 0     0      0    4    1
0 1 0 1     0      0    5    1
0 1 1 0     1      0    6    1
0 1 1 1     0      0    7    1
1 0 0 0     0      0    8    0
1 0 0 1     0      0    9    0
1 0 1 0     0      1   10    1
1 0 1 1     0      1   11    1
1 1 0 0     0      0   12    1
1 1 0 1     0      0   13    1
1 1 1 0     1      1   14    1
1 1 1 1     0      1   15    1
Therefore the d.n.f. of f is

f(ω, x, y, z) = Σ m(4, 5, 6, 7, 10, 11, 12, 13, 14, 15)
= ω̄xȳz̄ + ω̄xȳz + ω̄xyz̄ + ω̄xyz + ωx̄yz̄ + ωx̄yz + ωxȳz̄ + ωxȳz + ωxyz̄ + ωxyz,
and its c.n.f. is

f(ω, x, y, z) = Π M(0, 1, 2, 3, 8, 9)
= (ω + x + y + z)(ω + x + y + z̄)(ω + x + ȳ + z)(ω + x + ȳ + z̄)(ω̄ + x + y + z)(ω̄ + x + y + z̄).
[Handwritten work: f and g expressed in Σm notation; illegible in the scan.]
We want:
a) The table of values of f, g and f·g.
b) The dnf of f and the cnf of g.
[Handwritten solution: the value tables, the dnf/cnf terms and a Karnaugh-map simplification; illegible in the scan.]
a) [Handwritten computation of the factorization A = LU by Gaussian elimination, with L unit lower triangular and U upper triangular; illegible in the scan.]
b) To solve Ax = b with A = LU: since L(Ux) = b, set y = Ux, solve Ly = b by forward substitution and then Ux = y by back substitution.
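The scheme of part b) can be sketched in NumPy; the functions below implement Doolittle elimination without pivoting, and the 3-by-3 system is a made-up example, since the matrices in the scan are illegible:

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle factorization A = L @ U (no pivoting), L with unit diagonal."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def solve_lu(L, U, b):
    """Solve Ax = b via L(Ux) = b: forward substitution, then back substitution."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                    # Ly = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):          # Ux = y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Hypothetical system (the one in the scan is illegible).
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([4.0, 10.0, 24.0])
L, U = lu_no_pivot(A)
x = solve_lu(L, U, b)
```

For this example L @ U reproduces A and the solution is x = (1, 1, 1).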
2. In the vector space R³ we consider the bases
B1 = {(1, 2, 3), (3, 1, 1), (−4, 1, 0)}
and
B2 = {(1, 0, 1), (3, 0, 0), (0, 1, 1)}.
Let P be the matrix of the change of basis from B1 to B2, and Q the matrix
of the change of basis from B2 to B1.
[Handwritten computation of P and Q, and of the coordinates of a vector in both bases; illegible in the scan.]
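One way to compute P and Q numerically: if M1 and M2 hold the basis vectors as columns, the B2-coordinates of a vector v are M2⁻¹v, so P = M2⁻¹M1 and Q = P⁻¹. A NumPy sketch (the entry −4 in B1's third vector is read from the scan, where the sign is unclear):

```python
import numpy as np

# Basis vectors as columns.
M1 = np.array([[1, 3, -4], [2, 1, 1], [3, 1, 0]], dtype=float)
M2 = np.array([[1, 3, 0], [0, 0, 1], [1, 0, 1]], dtype=float)

# Change of basis from B1 to B2: P = M2^{-1} M1; from B2 to B1: Q = P^{-1}.
P = np.linalg.solve(M2, M1)
Q = np.linalg.inv(P)
```

By construction M2 P = M1 and P Q = I.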
3. Let E be the vector space of 2-by-2 matrices with real entries, let F
be the vector space of polynomials of degree less than or equal to 2 with real
coefficients, and let f : E → F be the linear mapping defined by:

f( ( a b ; c d ) ) = (a + b)x² + cx + d,
[Handwritten computation of the matrix of f with respect to the chosen bases, and of the remaining parts; illegible in the scan.]
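The matrix of f can be sketched with respect to standard bases; the ordering {E11, E12, E21, E22} for E and {x², x, 1} for F is an assumption, since the bases in the handwritten solution are illegible:

```python
import numpy as np

# Column j holds the coordinates of f applied to the j-th basis matrix:
# f(E11) = f(E12) = x^2, f(E21) = x, f(E22) = 1, in the basis {x^2, x, 1}.
M = np.array([[1, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

# Example: (a, b, c, d) = (2, 3, 5, 7) should map to (a + b, c, d) = (5, 5, 7),
# i.e. the polynomial 5x^2 + 5x + 7.
coords = M @ np.array([2, 3, 5, 7], dtype=float)
```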
LINEAR ALGEBRA
Second Evaluation Test
January 17th, 2020
[Handwritten computation of the eigenvalues of A, their algebraic and geometric multiplicities, and Ker(3I − A); illegible in the scan.]
A is not diagonalizable.
b) [Handwritten solution for the modified case; illegible in the scan.]
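A numerical way to test diagonalizability is to check whether the eigenvectors returned by np.linalg.eig span the whole space; the matrices J and D below are made-up examples, since the matrix of the exercise is illegible in the scan:

```python
import numpy as np

def is_diagonalizable(A, tol=1e-9):
    """A square matrix is diagonalizable iff its eigenvectors span the space,
    i.e. the eigenvector matrix returned by np.linalg.eig has full rank."""
    _, V = np.linalg.eig(A)
    return np.linalg.matrix_rank(V, tol=tol) == A.shape[0]

# Hypothetical examples:
J = np.array([[3.0, 1.0], [0.0, 3.0]])   # Jordan block: not diagonalizable
D = np.array([[3.0, 0.0], [0.0, 2.0]])   # distinct eigenvalues: diagonalizable
```

This check is numerical, so a tolerance is needed to decide when eigenvectors are linearly dependent.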
2. We consider the initial value problem

X′ = AX + G,   X(0) = ( 1 )
                      ( 4 ),

where

A = ( 1 1 )
    ( 2 2 )

and

G(t) = ( e^t + 3 )
       ( e^t − 6 ).
[Handwritten computation of e^{At} via the eigenvalues of A; illegible in the scan.]
By the variation of constants formula,

X(t) = e^{At} X(0) + e^{At} ∫₀ᵗ e^{−As} G(s) ds,

where the integral term gives a particular solution.
Then the solution is
[Handwritten final expression; illegible in the scan.]
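The solution can also be approximated numerically without evaluating the integral by hand; a minimal Runge-Kutta sketch (the minus sign in the second component of G is an assumption, since the sign is unclear in the scan):

```python
import numpy as np

# Data of the initial value problem as printed; the -6 is an assumption.
A = np.array([[1.0, 1.0], [2.0, 2.0]])
X0 = np.array([1.0, 4.0])
G = lambda t: np.array([np.exp(t) + 3.0, np.exp(t) - 6.0])

def rk4(f, x0, t0, t1, n=1000):
    """Classical fourth-order Runge-Kutta integration of x' = f(t, x)."""
    h, t, x = (t1 - t0) / n, t0, x0.astype(float)
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# Approximate X(1) for the system X' = AX + G.
X1 = rk4(lambda t, x: A @ x + G(t), X0, 0.0, 1.0)
```

This gives a numerical check against whatever closed form the variation of constants formula produces.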
[Handwritten solution of a projection exercise: proj_U(b) and the distance d(b, U); illegible in the scan.]
2. For the following data:
x1 = 0, x2 = 1, x3 = 2, x4 = 3,
y1 = 1, y2 = 0, y3 = 2, y4 = 2,
we want:
a) To compute the regression line y = C + Dx (i.e. the line y = C + Dx
that best fits the points (xi, yi) in the least-squares sense). (2 pts.)
b) To evaluate the line obtained in a) at the values xi. (0.5 pts.)
c) If e = (e1, e2, e3, e4) is the residue vector, to explain why, in this case,
the relation e1 + 2e2 + 3e3 + 4e4 = 0 holds. (1 pt.)
Solution: a) The linear system we need to solve is given by the normal
equations AᵀA(C, D)ᵀ = Aᵀb, where AᵀA is a regular matrix.
[Handwritten computation of AᵀA, Aᵀb and the solution; illegible in the scan.]
b) [Handwritten evaluation of the line at the values xi; illegible in the scan.]
c) [Handwritten check that e1 + 2e2 + 3e3 + 4e4 = 0, using that e is orthogonal to both columns of A and that (1, 2, 3, 4) is their sum; illegible in the scan.]
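This exercise can be checked numerically; a minimal NumPy sketch with the data as printed:

```python
import numpy as np

# Data of the exercise, as printed in the statement.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 0.0, 2.0, 2.0])
A = np.column_stack([np.ones_like(x), x])

# Normal equations A^T A (C, D)^T = A^T y; A^T A is regular here.
C, D = np.linalg.solve(A.T @ A, A.T @ y)
e = y - A @ np.array([C, D])   # residue vector

# e is orthogonal to both columns of A, hence to their sum (1, 2, 3, 4):
rel = e @ np.array([1.0, 2.0, 3.0, 4.0])
```

With these data this yields C = D = 1/2, i.e. 2y = 1 + x, and rel ≈ 0.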
3. Given the boolean function