
Academic Year 2020-21 Linear Algebra

Roberto Costas (E223)

roberto.costas@uah.es

Bibliography: LAY, D., Linear Algebra and its Applications.

Other material: Blackboard; www.rscosan.com (docencia.htm)


Class schedule:

30/9 (W) 10:00-12:00 in room SA5B
5/10 (M) 10:00-12:00 ONLINE
7/10 (W) 10:00-12:00 in room SA5B
12/10 (M) FESTIVITY
14/10 (W) 10:00-12:00 in room SA5B
19/10 (M) 10:00-12:00 ONLINE
21/10 (W) 10:00-12:00 in room SA5B
26/10 (M) 10:00-12:00 ONLINE
28/10 (W) 10:00-12:00 in room SA5B
Matrices, Determinants and Linear Systems

September 23, 2020



Matrices

A matrix A of dimension m × n is a rectangular array of numbers arranged in m rows and n columns:

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$

The i-th row is denoted r_i (i = 1, ..., m) and the j-th column c_j (j = 1, ..., n).

We say that the dimension of A is m × n (we also say that A is m-by-n). If m = n, we say that A is square; otherwise we say that it is rectangular.



Matrices

Main diagonal. If A is a square matrix of dimension n, the elements a_ii, i = 1, . . . , n form the main diagonal of the matrix; the sum of these elements is called the trace of the matrix:

Tr(A) = a₁₁ + a₂₂ + · · · + aₙₙ

Transpose of a matrix. It is the matrix obtained when interchanging rows and columns: (Aᵀ)_ij = a_ji.
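These two definitions can be checked with a small NumPy sketch (the matrices are illustrative, not taken from the notes):

```python
import numpy as np

# A 2x3 rectangular matrix and a 3x3 square matrix (illustrative values).
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[1, 2, 0],
              [0, 3, 1],
              [2, 1, 4]])

print(A.shape)      # dimension m x n: (2, 3)
print(A.T)          # transpose: rows and columns interchanged
print(np.trace(B))  # trace: sum of the main-diagonal elements 1 + 3 + 4 = 8
```

Note that transposing a 2×3 matrix yields a 3×2 matrix, while the trace only makes sense for square matrices.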



Generalities on Matrices

Row matrix, column matrix.
Diagonal matrix, e.g. Diag(2, 3, 5).
Identity matrix I.
Null matrix O.
Triangular matrices (upper: a_ij = 0 for i > j; lower: a_ij = 0 for i < j).
Symmetric matrix (Aᵀ = A, i.e. a_ij = a_ji), skew-symmetric matrix (Aᵀ = −A, i.e. a_ij = −a_ji; this implies that the diagonal entries are 0).
Diagonal by blocks matrix.
Matrices

Operations: Khan Academy


1. Addition. Properties:
Commutative: A + B = B + A
Associative: A + (B + C) = (A + B) + C
Neutral element: the null matrix. A + O = A
Inverse element: the negative of a matrix. A + (−A) = O
2. Multiplication by a number. Properties:
λ · (A + B) = λ · A + λ · B
(λ + µ) · A = λ · A + µ · A
λ · (µ · A) = (λ · µ) · A
1 · A = A.
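All of these properties can be verified numerically; here is a minimal sketch with illustrative matrices and scalars:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])
lam, mu = 2.0, 3.0
O = np.zeros((2, 2))  # the null matrix

assert np.allclose(A + B, B + A)                      # commutative
assert np.allclose(A + (B + O), (A + B) + O)          # associative
assert np.allclose(A + (-A), O)                       # inverse element
assert np.allclose(lam * (A + B), lam * A + lam * B)  # distributes over matrix sum
assert np.allclose((lam + mu) * A, lam * A + mu * A)  # distributes over scalar sum
print("all addition/scalar properties hold")
```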

Sometimes we will write λA instead of λ · A.


Matrices

3. Multiplication of two matrices. Properties:

Not commutative in general, i.e. A · B ≠ B · A.
Associative: A · (B · C) = (A · B) · C
Neutral element for square matrices: the identity matrix. A · I = I · A = A
Inverse element for certain square matrices: the inverse matrix.
(A · B)ᵀ = Bᵀ · Aᵀ.
Obs: The matrices A and B are said to commute if A · B = B · A.
A is said to be regular or invertible if it has an inverse; otherwise it is said to be singular.
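The non-commutativity and the transpose rule can be seen on a small example (the matrices are illustrative):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
I = np.eye(2, dtype=int)

print(A @ B)  # [[2 1] [4 3]]
print(B @ A)  # [[3 4] [1 2]]  -- different: A.B != B.A
assert not np.array_equal(A @ B, B @ A)              # not commutative here
assert np.array_equal(A @ I, A)                      # identity is neutral
assert np.array_equal(I @ A, A)
assert np.array_equal((A @ B).T, B.T @ A.T)          # (A.B)^T = B^T . A^T
```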



Matrices

Inverse of a matrix: given a square matrix A, its inverse A⁻¹ is the matrix, if it exists, fulfilling

A · A⁻¹ = A⁻¹ · A = I

A⁻¹ does not always exist. A characterization of its existence can be achieved by using determinants (A⁻¹ exists exactly when det A ≠ 0), or the notion of rank.
(A⁻¹)ᵀ = (Aᵀ)⁻¹.
(A · B)⁻¹ = B⁻¹ · A⁻¹.
Two ways of computing it: determinants or the Gauss-Jordan method (see later).

Homework: if A·B = I and A·C = I, then B = C; moreover, B·A = I. Why must A be n-by-n? (We solved this in class. Note that A·B = A·C gives A·(B − C) = O, which alone does not force B = C.)



Result. Any m-by-n matrix A can be factorized as

A = L · U,  with L of size m × m and U of size m × n,

where L is a lower triangular matrix with main diagonal entries equal to 1 and U is an upper triangular matrix. This is the so-called LU decomposition.

If A is n-by-n, then A = L·U where U is n × n with diagonal entries u₁₁, . . . , uₙₙ. Then

det A = u₁₁ · u₂₂ ⋯ uₙₙ

The Gaussian method lets us compute L and U; a pivot is a non-zero entry used to eliminate the entries below it. Example: for a 2 × 2 matrix this gives det A = a₁₁a₂₂ − a₁₂a₂₁.
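A sketch of the factorization with SciPy; note that `scipy.linalg.lu` applies partial pivoting, so it returns a permutation matrix P as well (the slides assume the case where no row swaps are needed):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])  # illustrative matrix

# P is a permutation matrix, L is unit lower triangular, U is upper triangular.
P, L, U = lu(A)
assert np.allclose(P @ L @ U, A)

# det(A) = det(P) * product of the diagonal of U (det(P) = +-1 fixes the sign).
detA = np.linalg.det(P) * np.prod(np.diag(U))
assert np.isclose(detA, np.linalg.det(A))
```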


Determinants

In fact, if A is upper triangular, det(A) is the product of its diagonal entries.


Given a square matrix A, the determinant of A, denoted by |A|, or
det(A), is a number associated with A.
|A| is defined first for 2 ⇥ 2 matrices. For matrices of higher order,
in principle it can be computed by expanding along any row/column.
For example, if A is 3 ⇥ 3, then we can expand along the first row
(we might also choose any other row or column)

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11} \cdot A_{11} + a_{12} \cdot A_{12} + a_{13} \cdot A_{13}$$

where Aij denotes the cofactor of the entry aij (i.e. the signed minor of aij). Also, for 3 ⇥ 3 matrices, Sarrus’ rule may be useful. (You can also try computing the determinant via the LU decomposition.)
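The expansion along the first row can be implemented directly and compared with the library determinant (a sketch with an illustrative 3×3 matrix):

```python
import numpy as np

def det_first_row(A):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    total = 0.0
    for j in range(3):
        # delete row 0 and column j to form the minor
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        cofactor = (-1) ** j * np.linalg.det(minor)  # sign (-1)^(1+j+2) = (-1)^j here
        total += A[0, j] * cofactor
    return total

A = np.array([[1., 2., 3.],
              [0., 4., 5.],
              [1., 0., 6.]])
assert np.isclose(det_first_row(A), np.linalg.det(A))
print(det_first_row(A))  # 1*24 - 2*(-5) + 3*(-4) = 22
```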



Determinants

Basic properties:
1. |A| = |AT |
2. If A and B are square matrices of the same order, then
|A · B| = |A| · |B|.
3. If all the elements in a row (or column) admit a same factor, then
that number can be taken out of the determinant.
4. If we interchange two rows (or columns), the determinant changes
sign.



Determinants

Basic properties:
5. If A has a row or a column of 0’s, then det(A) = 0.
6. If A has two rows (or columns) which are either equal or
proportional, then det(A) = 0. The value is also 0 if there is some
row (column) which is a linear combination of others.
7. The value of the determinant does not change if we add to a row (or
column) other rows (or columns) multiplied by numbers. This
property is essential for efficiently computing determinants.



Determinants

Computation of the inverse of a square matrix A.

The inverse A⁻¹ exists if and only if |A| ≠ 0.
A⁻¹ = (1/|A|) · Adjᵀ(A) = (1/|A|) · Adj(Aᵀ), where Adj(A) is the cofactor matrix, i.e. for each i, j, the corresponding element is the cofactor of aij.

Alternative: Gauss' method (we will explain it during the next class).
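The adjugate formula can be coded in a few lines and checked against `np.linalg.inv` (the 2×2 matrix is the one from exercise 10 of the sheet, |A| = 1):

```python
import numpy as np

def cofactor_matrix(A):
    """Adj(A): the matrix whose (i, j) entry is the cofactor of a_ij."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[1., 2.], [3., 7.]])              # |A| = 7 - 6 = 1 != 0
A_inv = cofactor_matrix(A).T / np.linalg.det(A)  # A^-1 = Adj^T(A) / |A|
assert np.allclose(A_inv, np.linalg.inv(A))
assert np.allclose(A @ A_inv, np.eye(2))
```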



Rank of a Matrix

We say that a certain row r (similarly for columns) is a linear


combination of the rows ri1 , . . . , ris if r can be obtained from these
rows by means of an expression like

α₁ · r_{i₁} + · · · + α_s · r_{i_s}

for certain numbers α₁, . . . , α_s, which are called the coefficients of


the linear combination.
We say that certain rows (similarly for columns) are linearly
independent, if none of them can be obtained as a linear
combination of the rest. Otherwise, we say that they are linearly
dependent.

Question: When are two rows (resp. two columns) linearly dependent?



Rank of a Matrix

Definition
The rank of a matrix A, rank(A), is the maximum number of rows (or
columns) which are linearly independent.

An equivalent definition of rank, in terms of determinants. A minor,


in a matrix A, is any determinant that you can get by eliminating some
rows and/or columns. Then one may see that rank(A) is the maximum
order of the non-zero minors of A.
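In practice the rank is computed numerically; a sketch with an illustrative matrix whose second row is a multiple of the first:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],   # 2 * row 1: linearly dependent on row 1
              [0, 1, 1]])

print(np.linalg.matrix_rank(A))  # 2: only two linearly independent rows
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)  # rank(A) = rank(A^T)
```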



Rank of a Matrix

Some observations/properties:
We say that a square matrix of order n has full rank (or is regular) if rank(A) = n. It can be proven that this happens if and only if the determinant of A is different from 0 (therefore, if and only if A is invertible). If A is square and does not have full rank, it is called singular; such a matrix has no inverse.
The rank by rows coincides with the rank by columns: rank(A) = rank(Aᵀ).
If the dimension of A is m × n, then rank(A) ≤ min(m, n).
When we compute the rank, we find rows/columns which are linearly independent!



Rank of a Matrix

Some rules for computing rank(A):


A matrix has rank 0 if and only if all its elements are 0, i.e., A = O.
A row/column of 0's does not count for determining the rank of A. Similarly, a row/column which is clearly a multiple of another row/column, or a linear combination of other rows/columns, does not count either.
The rank does not change if we perform elementary operations on A (swapping rows/columns, multiplying a row/column by a non-zero number, adding a linear combination of rows/columns to another row/column).
From a practical point of view, the computation of the rank can be done either using determinants or by means of Gauss' method.



Linear Systems: Definitions

A System of Linear Equations is a set of equations of the type


$$\begin{cases} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2 \\ \quad \vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = b_m \end{cases}$$

xi's: unknowns
aij's: coefficients
bi's: constant terms



Linear Systems: Definitions

The system can be written in matrix form in the following way:


$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \cdot \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}$$

In abbreviated form, A · x = b, where:

A: coefficients matrix
x: vector of unknowns
b: vector of constant terms

Question: why is the above equality true?
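Solving the matrix equation numerically is one line with NumPy (the system below is an illustrative 2×2 example):

```python
import numpy as np

# 2x + y = 5
#  x + 3y = 10
A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([5., 10.])

x = np.linalg.solve(A, b)   # solves A @ x = b (requires A square and regular)
assert np.allclose(A @ x, b)
print(x)                    # the solution x = 1, y = 3
```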



Classification of Linear Systems

Classification of Linear Systems: a linear system can be

1. Inconsistent, if it has no solution.
2. Consistent, if it has some solution (i.e. it is solvable). In this case, it can have
   a unique solution, or
   infinitely many solutions.



Classification of Linear Systems

In order to classify a given linear system, we use the augmented matrix and the Rouché–Frobenius Theorem. The augmented matrix, which we denote by B, is

$$B = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{pmatrix}$$



Classification of Linear Systems

Theorem (Rouché–Frobenius Theorem)

Let A · x = b be a linear system of m equations with n unknowns, and let B denote the augmented matrix of the system. Then the system is consistent if and only if rank(A) = rank(B); furthermore, the system has a unique solution if rank(A) = rank(B) = n, and infinitely many solutions if rank(A) = rank(B) < n.

When rank(A) = rank(B) < n, the difference n − rank(A) is in fact the number of degrees of freedom of the system, i.e. the number of parameters the solutions depend on.
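The theorem translates directly into a small classification routine (the test systems are illustrative):

```python
import numpy as np

def classify(A, b):
    """Classify the system A @ x = b with the Rouche-Frobenius theorem."""
    B = np.column_stack([A, b])                  # augmented matrix (A | b)
    rA = np.linalg.matrix_rank(A)
    rB = np.linalg.matrix_rank(B)
    n = A.shape[1]                               # number of unknowns
    if rA != rB:
        return "inconsistent"
    return "unique solution" if rA == n else "infinitely many solutions"

A = np.array([[1., 1.],
              [2., 2.]])                         # rank(A) = 1
assert classify(A, np.array([1., 3.])) == "inconsistent"
assert classify(A, np.array([1., 2.])) == "infinitely many solutions"
assert classify(np.eye(2), np.array([1., 2.])) == "unique solution"
```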



Solving Linear Systems

Two possibilities:
1. Cramer's Method: uses determinants and must be applied to a Cramer system (i.e. a system whose coefficients matrix is square with full rank). It is not efficient for big systems.
2. Gauss and Gauss-Jordan Method: does not require computing determinants, just simple operations with rows/columns. Efficient for big systems.



Homogeneous Linear Systems

These are the linear systems where the constant terms are all 0:

$$\begin{cases} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = 0 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = 0 \\ \quad \vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = 0 \end{cases}$$

Always consistent (why?)

The interesting question is whether it has solutions other than the trivial one (in such a case it has infinitely many!)
This happens if and only if rank(A) < n.
If A is square, this is equivalent to |A| = 0.
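A sketch with SymPy, whose `nullspace` method returns a basis of the solution set of A·x = 0 (the matrix is illustrative, with rank 1 and n = 3 unknowns):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])            # second row = 2 * first row, so rank(A) = 1

assert A.rank() < A.cols           # rank(A) < n: non-trivial solutions exist
basis = A.nullspace()              # basis of {x : A x = 0}
for v in basis:
    assert A * v == Matrix([0, 0])  # each basis vector really solves the system
print(len(basis))                  # n - rank(A) = 2 degrees of freedom
```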



(Worked example from class: for a 3 × 3 matrix A, the determinant of A⁻¹ is computed via the LU decomposition of A, using det A = u₁₁u₂₂u₃₃, and the result is checked with Sarrus' rule.)

The adjugate matrix of A: (Adjᵀ(A))_{i,j} = (−1)^{i+j} · det(M_{ji}), where M_{ji} is the matrix A with the j-th row and the i-th column deleted.

Example: if A = (a b; c d), then Adjᵀ(A) = (d −b; −c a), so A⁻¹ = (1/(ad − bc)) · (d −b; −c a).
Remember that there are different ways to solve a linear system. For example, by using the inverse of the matrix of the coefficients:

A·x = b ⟹ A⁻¹·A·x = A⁻¹·b ⟹ x = A⁻¹·b

Let us compute A⁻¹ in two different ways.
1) By using the formula for the inverse, A⁻¹ = (1/|A|) · Adjᵀ(A).
2) By the Gauss elimination method: the idea is to go from (A | I) to (I | A⁻¹) by elementary row operations, choosing a non-zero pivot at each step.
We also solved problem 11 of sheet 1:
(a) compute the LU decomposition of A = (1 1 1; 3 6 6; 2 11 13);
(b) by using such decomposition, solve A·x = (1, 3, 0)ᵀ.

Working with the rows: r₂ = 3·r₁ + (0, 3, 3) and r₃ = 2·r₁ + 3·(0, 3, 3) + (0, 0, 2), hence

L = (1 0 0; 3 1 0; 2 3 1),  U = (1 1 1; 0 3 3; 0 0 2)

Then, writing A·x = L·(U·x) = b with b = (1, 3, 0)ᵀ: forward substitution in L·y = b gives y = (1, 0, −2)ᵀ, and back substitution in U·x = y gives x = (1, 1, −1)ᵀ.
Lesson 2: Vector Spaces

September 15, 2019

(Sketch from class: in the Euclidean plane ℝ², the same space hosts points, position vectors, and complex numbers; a line through two points and its slope were drawn as motivation.)
Vector Spaces

Let V be a set, and let +, · be two operations, the first one (sum)
defined between the elements of V , and the second one (product by
scalars) defined between V , and the elements of R (resp. C). We say
that (V , +, ·) is a vector space over R (resp. C) if the following
properties hold:

(i) (V, +) is a commutative group:

Internal law: ∀ u, v ∈ V, u + v ∈ V.
Associative: ∀ u, v, w ∈ V, u + (v + w) = (u + v) + w.
Neutral element: there exists 0 ∈ V such that u + 0 = u.
Inverse element: for all u ∈ V, there exists −u ∈ V such that u + (−u) = 0.
Commutative: ∀ u, v ∈ V, u + v = v + u.



Vector Spaces

Let V be a set, and let +, · be two operations, the first one (sum)
defined between the elements of V , and the second one (product by
scalars) defined between V , and the elements of R (resp. C). We say
that (V , +, ·) is a vector space over R (resp. C) if the following
properties hold:

(ii) The operation · satisfies that:

∀ λ ∈ ℝ (resp. ℂ), ∀ u, v ∈ V: λ · (u + v) = λ · u + λ · v.
∀ λ, µ ∈ ℝ (resp. ℂ), ∀ u ∈ V: (λ + µ) · u = λ · u + µ · u.
∀ λ, µ ∈ ℝ (resp. ℂ), ∀ u ∈ V: λ · (µ · u) = (λ · µ) · u.
1 · u = u.



Vector Spaces

In order to make explicit whether we work over R or C, one writes


(V (R), +, ·) or (V (C), +, ·). In the sequel, we will assume that we work
over R, although the results are equivalent for C. In fact, vector spaces
can be defined over sets other than R or C (fields).

Examples

The set W = {A ∈ M₂ₓ₂(ℝ) | A = Aᵀ} of 2 × 2 symmetric matrices with real entries is a vector space (you must check it!). It has dimension 3: two elements of W are of the form (a b; b c) and (a′ b′; b′ c′), for example, and both their sum and the multiples λ·(a b; b c) = (λa λb; λb λc) are again symmetric.
Vector Spaces

Proposition
Let (V(ℝ), +, ·) be a vector space over ℝ. Then for all λ ∈ ℝ and for all u ∈ V, it holds that:
(1) λ · 0 = 0.
(2) 0 · u = 0.
(3) λ · u = 0 if and only if λ = 0 or u = 0.
(4) λ · (−u) = (−λ) · u = −(λ · u).



Linear Dependence. Bases.

Definition
A linear combination of vectors u₁, u₂, . . . , uₙ is another vector of the form

λ₁ u₁ + λ₂ u₂ + · · · + λₙ uₙ

where λᵢ ∈ ℝ for i = 1, . . . , n.

Remark The vector ~0 can be considered as a linear combination of any


vectors u~1 , u~2 , . . . , u~n , because ~0 = 0 · u~1 + 0 · u~2 + · · · + 0 · u~n .



Linear Dependence. Bases.

Definition
We say that {u₁, u₂, . . . , uₙ} are linearly dependent (l.d.) if at least one of them is a linear combination of the rest. If {u₁, u₂, . . . , uₙ} are not l.d., we say that they are linearly independent (l.i.).

(Worked examples from class, solved on the board: computing an LU decomposition by multiplying L · U row by row and comparing the result with the rows of A; and factorizing symmetric matrices as A = L · D · Lᵀ.)

Homework: compute D so that A = L · D · Lᵀ.
(Further worked examples: LU decompositions of 3 × 3 and 4 × 4 matrices, and solving A · x = b via the factorization, with forward substitution for L and back substitution for U.)
6. Homework.

A is Positive Definite (PD) if xᵀ · A · x > 0 for every x ≠ 0.

If A = Aᵀ, then A is PD exactly when, writing A = L · D · Lᵀ, the matrix D has positive diagonal entries. (For a symmetric A, the factorization A = L·U can be rewritten as A = L·(D·Lᵀ) = L·D·Lᵀ; A is negative definite when the diagonal entries of D are all negative.)
(Worked example: since det A ≠ 0, A is regular; its inverse is computed with the Gauss–Jordan method, transforming (A | I) into (I | A⁻¹) by row operations, choosing a non-zero pivot at each step.)
Linear Dependence. Bases.

For vectors in Rn : in order to check the linear independence of


several vectors, form a matrix with them as columns (or rows), and
compute the rank (Example)
The above idea can be extended to other cases (but we need the
notion of coordinates of a vector, to do this; we will see it later).
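The rank test for independence in ℝⁿ is a one-liner; the vectors below are the ones from exercise 2(a) of the sheet:

```python
import numpy as np

vectors = [(1, 2, 1), (0, 1, 1), (1, 0, 3)]   # candidate vectors in R^3
M = np.array(vectors)                          # one vector per row

# Independent exactly when rank equals the number of vectors.
independent = np.linalg.matrix_rank(M) == len(vectors)
print(independent)  # True: these three vectors are linearly independent
```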



Linear Dependence. Bases.

Theorem
The vectors {u₁, u₂, . . . , uₙ} are linearly independent if and only if the only linear combination of them fulfilling λ₁ u₁ + · · · + λₙ uₙ = 0 satisfies λ₁ = · · · = λₙ = 0.

Linear Dependence. Bases.

Definition
We say that S = {~ u1 , . . . , u~n } is a spanning set of V if any vector in V
can be written as a linear combination of the vectors in S.

Definition
We say that B = {~ u1 , . . . , u~n } is a basis of V if it is a spanning set of V
and they are linearly independent.

Examples
Important remark: vector spaces may have infinitely many bases!!

Linear Dependence. Bases.

Definition
We say that a vector space has finite dimension if it has a basis
consisting of finitely many vectors.

Examples:
1 Rn has dimension n, since it admits the basis (called the canonical
basis)

{(1, 0, 0, . . . , 0), (0, 1, 0, . . . , 0), . . . , (0, 0, 0, . . . , 1)}

Notation: e₁ = (1, 0, . . . , 0), e₂ = (0, 1, . . . , 0), . . . , eₙ = (0, 0, . . . , 1).
2 Matrices of fixed dimension; polynomials of degree at most a fixed bound.
3 Can you give an example of a vector space which is NOT of this kind?

Linear Dependence. Bases.

Theorem
If V has finite dimension, then all the bases of V have the same number
of vectors (the dimension of V , dim(V )).

Intuitively, the dimension of a vector space is the number of parameters


that one needs to specify in order to identify a concrete element in the
space.

Linear Dependence. Bases.

Theorem
Let V be a vector space with finite dimension, and let dim(V ) = n. Then
the following statements are true:
(i) If S spans V , then you can extract a basis from S.
(ii) Every system consisting of more than n vectors is linearly dependent.
(iii) Every spanning system contains at least n vectors.
(iv) Given B = {u₁, . . . , uₙ} ⊂ V (a subset of exactly n vectors), the
following statements are equivalent: (a) B is a basis; (b) B is linearly
independent; (c) B is a spanning system.

Linear Dependence. Bases.

Definition
Let B = {u₁, . . . , uₙ} be a basis of V, and let v ∈ V. The coordinates of v with respect to B are the scalars λ₁, . . . , λₙ ∈ ℝ such that

v = λ₁ u₁ + · · · + λₙ uₙ

Usually we write v = (λ₁, . . . , λₙ)_B; we also say v has coordinates (λ₁, . . . , λₙ) in B.

Linear Dependence. Bases.

Theorem
Let V be a vector space of finite dimension n, and let B = {~
u1 , . . . , u~n }
be a basis of V . Then every vector ~v 2 V has unique coordinates with
respect to B.
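Finding the coordinates of a vector amounts to solving a linear system; here is a sketch using the basis of ℝ³ from exercise 6 of the sheet:

```python
import numpy as np

# Basis vectors of R^3 placed as the COLUMNS of M, and a vector v (canonical coords).
B = [(1, 0, 2), (0, 2, 1), (1, 1, 2)]
M = np.array(B).T
v = np.array([1., 1., 3.])

coords = np.linalg.solve(M, v)   # unique coordinates of v with respect to B
assert np.allclose(M @ coords, v)
print(coords)                    # the coordinates (2, 1, -1) of v in B
```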



Vector Subspaces.

Definition
Let (V (R), +, ·) be a vector space. We say that W ⇢ V is a vector
subspace of V if (W (R), +, ·) has also a structure of vector space.



Vector Subspaces.

Theorem
W ⇢ V is a vector subspace if and only if 8 u~, ~v 2 W , 8 , µ 2 R,
u~ + µ~v 2 W (i.e. if and only if every linear combination of two vectors
in W , stays in W ).

Examples
Observation: If W is a vector subspace, then ~0 2 W .

Vector Subspaces.

Since a vector subspace is in fact a vector space inside of another


bigger vector space, it makes sense to speak about its dimension,
about bases, etc.
Again, the dimension of a vector subspace is the number of
parameters defining an element of the subspace.



Vector Subspaces.

Definition
Let S = {u₁, . . . , uₙ} be a subset of V. The linear variety spanned by S (or simply the linear span of S) is the set consisting of all the vectors which are linear combinations of the vectors in S, i.e.

L(S) = {x ∈ V | x = λ₁ u₁ + · · · + λₙ uₙ}

L(S) is a vector subspace, and its dimension is the rank of the


system S, i.e. the number of linearly independent vectors in S.
In R2 , the linear span of a vector is a line.
In R3 , the linear span of one vector is also a line; the linear span of
two independent vectors is a plane.
In ℝⁿ, n ≥ 4, one vector spans a line, two independent vectors span
a plane, three independent vectors span a (3-dimensional) space, ...



Vector Subspaces.

Equations of a vector subspace:


1 Vector equation.
2 Parametric equations.
3 Implicit equations.
Change of basis

SHEET 2

1. (c) is NOT a vector space: the condition 2p(0) − p(1) = 2 is not homogeneous, so the polynomial identically equal to zero does not fulfil it and is not in the set.

(a) Yes. If u = (x, y, z) ∈ W and v = (x′, y′, z′) ∈ W, and λ, µ ∈ ℝ, we need to check that λ·u + µ·v fulfils the equations; substituting, both λ(x − y + 2z) + µ(x′ − y′ + 2z′) = 0 and λ(3x + y) + µ(3x′ + y′) = 0 hold, so W is a vector subspace.

Obtaining a basis from the implicit equations x − y + 2z = 0, 3x + y = 0: Gauss elimination (R₂ → R₂ − 3R₁ gives 4y − 6z = 0) shows the system has infinitely many solutions depending on 1 parameter, so dim W = 1 and

W = span{(−1, 3, 2)}

(b) Yes. If M₁ = M₁ᵀ ∈ W and M₂ = M₂ᵀ ∈ W, then (M₁ + M₂)ᵀ = M₁ + M₂ ∈ W; for λ·M₁ ∈ W it is analogous. The implicit equations say that each entry below the diagonal equals the corresponding entry above it, leaving 6 free parameters, so dim W = 6. The parametric equations follow by naming the free entries; to obtain a basis, set each parameter equal to 1 and the rest equal to 0, getting the 6 symmetric matrices E₁₁, E₂₂, E₃₃, E₁₂ + E₂₁, E₁₃ + E₃₁, E₂₃ + E₃₂.
9. Let us consider the matrices
0 1 0 1
1 0 0 1 1 1
A = @2 1 3A B = @1 2 2 A
0 0 1 1 2 3

Obtain, by using the Gauss-Jordan method, the inverse of A, B and AB. Check that
(AB) 1 = B 1 A 1 .

Al

a naii.at lie i'I


as me li i
HiEEiatooioI
i tE EL iiog
o Il

B
I I
fempu
I 00

f
AB In II ABY GS method
AB fool
o
EEE
ki B t I EEEur.fi 1
fEedEEEiII l

i iii'iit
2 I O
it
l O0
AB
BA
it I 9 I3
2 I 4 1
page58
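A quick numerical check of the identity in exercise 9, using the matrices as printed in the statement:

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [2., 1., 3.],
              [0., 0., 1.]])
B = np.array([[1., 1., 1.],
              [1., 2., 2.],
              [1., 2., 3.]])

inv = np.linalg.inv
assert np.allclose(inv(A @ B), inv(B) @ inv(A))      # (AB)^-1 = B^-1 A^-1
assert not np.allclose(inv(A @ B), inv(A) @ inv(B))  # the order matters here
```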
10. Let A be the 2-by-2 matrix:

A = (1 2; 3 7)

By using the Gauss-Jordan method, compute the inverse matrix of A and of its transpose Aᵀ, and check that (Aᵀ)⁻¹ = (A⁻¹)ᵀ.

(Worked in class with the Gauss–Jordan method; since |A| = 1, one gets A⁻¹ = (7 −2; −3 1).)
11. Let A be the matrix 0 1
1 1 1
A = @3 6 6 A
2 11 13

(a) Obtain its LU decomposition.


(b) Solve the linear system Ax = (1, 3, 0)T by using the decomposition obtained in (a).

Homework.
(a) Compute L and U by Gauss elimination on the rows of A.
(b) Writing A·x = L·(U·x), set y = U·x; first solve L·y = (1, 3, 0)ᵀ by forward substitution, then solve U·x = y by back substitution.
12. Let B be the matrix

B = (1 1 1; 3 6 6; 2 11 k)

(a) Obtain the values for k so that B is singular.
(b) Find the values for k so that the linear system B·x = (2, 9, 13)ᵀ has infinitely many solutions. Obtain the general solution for such a case.
(c) Find the values for k so that the linear system B·x = (2, 9, 13)ᵀ has one solution. Obtain such a solution.

Via the LU decomposition, B = L·U with L = (1 0 0; 3 1 0; 2 3 1) and U = (1 1 1; 0 3 3; 0 0 k − 11), so det(B) = 1 · 3 · (k − 11), which vanishes exactly when k = 11: B is singular if and only if k = 11.

(b) For k = 11: forward substitution in L·y = (2, 9, 13)ᵀ gives y = (2, 3, 0)ᵀ, and the last equation of U·x = y reads 0 = 0, so the system has infinitely many solutions. Setting x₃ = t free: x₂ = 1 − t, x₁ = 1, i.e. x = (1, 1 − t, t)ᵀ.

(c) For k ≠ 11 the system has exactly one solution, x = (1, 1, 0)ᵀ.
13. Let

A = (1 1 2 0 0; 1 2 2 2 4; 1 0 2 2 4)

(a) Solve the homogeneous system A·x = 0.
(b) Solve the underdetermined linear system A·x = (2, 4, 0)ᵀ.
(c) Given three values b₁, b₂, and b₃, find the linear condition such values need to fulfill so that the linear system A·x = (b₁, b₂, b₃)ᵀ has a solution.

(Worked in class: Gauss elimination gives rank(A) = 2; the augmented matrix has rank 2 when the compatibility condition on b₁, b₂, b₃ coming from its zero row holds, and rank 3 otherwise. By Rouché–Frobenius there is a solution exactly when rank(A) = rank(A | b); in particular, for b₁ = b₂ = b₃ = 0 there is always a solution.)
LINEAR ALGEBRA (350000) Course 2019/20

Practice exercises. Sheet 2

Vector Spaces. Change of basis

1. Analyze whether the following subsets are vector subspaces of the given ones. In the affirmative case, obtain a basis for such subset.

(a) W = {(x, y, z) ∈ ℝ³ | x − y + 2z = 3x + y = 0} of ℝ³.  [YES: homogeneous linear system; dim 1]
(b) W = {M ∈ M₃ₓ₃(ℝ) | M = Mᵀ} of M₃ₓ₃(ℝ).  [YES; dim 6]
(c) W = {p(t) ∈ P₂(ℝ) | 2p(0) − p(1) = 2} of P₂(ℝ).  [NO: the condition is not homogeneous]
2. Analyze if

(a) B = {(1, 2, 1), (0, 1, 1), (1, 0, 3)} is a basis of ℝ³;
(b) B = {(1 0; 0 0), (1 0; 1 1), (0 1; 1 0), (0 1; 1 1)} is a basis of M₂ₓ₂(ℝ);
(c) B = {−x² + x + 5, 3x² + 5x + 3, 2x² + 2x − 1} is a basis of P₂(ℝ).

(a) The three vectors form a basis of ℝ³: the matrix having them as rows has non-zero determinant, hence rank 3.

(b) We consider two ways: either impose λ₁V₁ + λ₂V₂ + λ₃V₃ + λ₄V₄ = O and check whether necessarily λ₁ = λ₂ = λ₃ = λ₄ = 0 (if YES, linearly independent; if NO, linearly dependent); or use the isomorphism M₂ₓ₂(ℝ) ≅ ℝ⁴, (a b; c d) ↦ (a, b, c, d), and compute the rank of the 4 × 4 coordinate matrix. Either way, B is a basis of M₂ₓ₂(ℝ).

(c) Via the isomorphism P₂(ℝ) ≅ ℝ³, a·x² + b·x + c ↦ (a, b, c), proving that the polynomials form a basis is equivalent to proving that the vectors p₁ = (−1, 1, 5), p₂ = (3, 5, 3), p₃ = (2, 2, −1) are linearly independent in ℝ³. The determinant of the matrix having these vectors as rows equals 0, so they are linearly dependent: they do NOT form a basis of P₂(ℝ).

Change of bases (worked example): given two bases B₁ and B₂ of a space of polynomials, the coordinate maps f₁, f₂ : V → ℝⁿ (coordinates with respect to B₁ and B₂) are linear bijections, and the change of basis g = f₂ ∘ f₁⁻¹ is represented by the matrix M_{B₁,B₂} whose columns are the coordinates, with respect to B₂, of the vectors of B₁. In class these coordinates were computed with the Gauss–Jordan method.
Lesson 3: Linear Mappings

September 15, 2019

Linear Mappings

(Figures from the slides: examples of an injective mapping, a surjective mapping, and bijective mappings between two sets S and S′.)


Linear Mappings

Definition
We say that a mapping is injective if no two different elements of S have the same image. Also, we say that a mapping is surjective if every element of S′ is the image of some element of S. Finally, we say that a mapping is bijective if it is both injective and surjective.

Here we will be interested in a particular kind of mappings, namely linear


mappings.



Linear Mappings

Proposition
Let f : V → V′ be a linear mapping. Then it holds that:
(1) f(0) = 0.
(2) If {u₁, . . . , uₙ} are linearly dependent, then {f(u₁), . . . , f(uₙ)} are also linearly dependent.
(3) If S ⊂ V is a vector subspace of V, then f(S) is a vector subspace of V′.



Matrix Equation of a Linear Mapping

Let f : V → V′ be a linear mapping between vector spaces, let B = {u₁, . . . , uₙ} be a basis of V, and let B′ = {u′₁, . . . , u′ₘ} be a basis of V′. All that we need in order to define f is to know the images of the vectors in B.

Matrix equation of a linear mapping. Examples.


Matrix Equation of a Linear Mapping

Properties:
(1) Given f : Rn ! R, f is linear if and only if

f (~x ) = f (x1 , . . . , xn ) = a1 x1 + · · · + an xn

Similarly when f : Vn ! V10 (i.e. if the final space has dimension 1).
(2) Given f : Rn ! Rm , we can write

f (~x ) = (f1 (~x ), . . . , fm (~x ))

where each fi is said to be a component of f . Then, f is linear if


and only if each fi (~x ) (which is a mapping from Rn to R) is linear
(see (1)). Similarly when f : Vn ! Vm0 (i.e. when f goes from a
space of dimension n, to another space of dimension m).



Matrix Equation of a Linear Mapping

Properties:
(3) Every linear mapping can be written as

~y = A · ~x ,

where A is said to be the matrix associated with f (in certain


bases B and B 0 ). If f : Vn ! Vm0 , then A 2 Mm⇥n .
(4) The columns of A are the images of the vectors of B under f ,
expressed in the basis B 0 .
(5) The matrix A depends on the chosen bases B, B 0 : if we change
some basis, A changes too!



Matrix Equation of a Linear Mapping

Properties:
(6) Let B, B 0 be fixed bases in V , V 0 respectively. Given a linear
mapping f : V ! V 0 there exists a matrix A associated with it in
the bases B, B 0 . Conversely, any matrix A defines a linear mapping
with respect to the considered bases. So,

Matrices ↔ Linear Mappings

Examples of linear mappings: Khan Academy (click)



Matrix Equation of a Linear Mapping

Proposition
Let f : Vn ! Vm0 , g : Vn ! Vm0 be two linear mappings, with associated
matrices Af and Ag , respectively, and let k 2 R. Then, it holds that:
(1) f + g is also linear, and its associated matrix is Af + Ag .
(2) k · f is also linear, and its associated matrix is k · Af .



Matrix Equation of a Linear Mapping

Proposition
Let f : Vₙ → V′ₘ, g : V′ₘ → V″ₚ be two linear mappings, with associated matrices A_f and A_g, respectively. Then the composition g ∘ f : Vₙ → V″ₚ is also linear, and the matrix associated with g ∘ f is A_g · A_f.

We can apply this to study how A changes when the bases B, B′ are changed. Khan Academy (click)



3. In the space of polynomials of degree at most 3 with real coefficients, we consider the vector subspace of the polynomials fulfilling the conditions

p′(1) = 0,  p′(2) = 0.

Obtain a basis of this vector space.

W = {p(x) ∈ P₃(ℝ) | p′(1) = 0, p′(2) = 0}. Writing p(x) = a·x³ + b·x² + c·x + d, the conditions read 3a + 2b + c = 0 and 12a + 4b + c = 0. Solving, b = −(9/2)·a and c = 6a, with a and d free parameters. One basis of W is

{2x³ − 9x² + 12x, 1}
4. Obtain the null space and the vectorial space generated by the columns of the following
matrices: 0 1 0 1
1 2 0 1 1 2 1
B0 1 1 C B0 4 2 2C
A=B C B CRzRs R
@2 1 3 A B = @ 1 3 0 3A
1 0 0 0 1 1 1

Solution sketch: Col(A) = span of the columns of A, and Null(A) = {x ∈ R³ : Ax = 0}; likewise Col(B) and Null(B) = {x ∈ R⁴ : Bx = 0}. Row-reduce each matrix: the pivot columns give a basis of the column space, and solving the reduced homogeneous system gives the null space. Recall that dim Null(A) = n − rank(A).
page68
5. Given the vectors v1 = (1, 2, 1, 1), v2 = (2, 3, 1, 2), v3 = (1, 3, 2, 1) and v4 = (2, 1, 1, 2)
of R4 , we want:

(a) To know if they form a basis of R4 .


(b) If not, we want to complete a basis of R4 and obtain the coordinates of the vector
v = (4, 1, 0, 1) in such a base.

(a) Form the matrix whose rows are v1, v2, v3, v4 and row-reduce; its rank is 2, so the vectors are linearly dependent and do not form a basis of R⁴.

(b) Keep two independent vectors among v1, . . . , v4 and complete with two suitable canonical vectors to obtain a basis of R⁴. The coordinates of v in that basis are then found by solving the corresponding linear system.

6. Given the basis of R3
B1 = {(1, 0, 2), (0, 2, 1), (1, 1, 2)},

obtain the matrix of the change of basis from B1 to the canonical basis Bc and from Bc
to B1 , where the canonical basis of R3 is

Bc = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.

Given the vector u = (1, 1, 3), obtain its coordinates with respect to the basis B1 .

Solution: M_{B1→Bc} has the vectors of B1 as columns,

M_{B1→Bc} =
1 0 1
0 2 1
2 1 2

and M_{Bc→B1} = (M_{B1→Bc})⁻¹. Solving M_{B1→Bc} · c = u gives the coordinates of u = (1, 1, 3): (u)_{B1} = (2, 1, −1).
7. We consider the following basis of R3 :

B1 = {(0, 2, 2), (2, 0, 1), (3, 0, 0)} and B2 = {(2, 0, 2), (1, 2, 0), (0, 0, 1)}.

(a) Obtain the matrix of the change of basis from B1 to B2 .


(b) Given the vector (1, 1, 1) with coordinates with respect to B1 , obtain its coordinates
with respect to B2 .

Solution: with P = M_{B1→Bc} and Q = M_{B2→Bc} (columns: the basis vectors), the change of basis matrix is M_{B1→B2} = Q⁻¹ · P. For (b), the vector with coordinates (1, 1, 1) in B1 is (5, 2, 3) in the canonical basis, and solving Q · c = (5, 2, 3) gives the coordinates (2, 1, −1) with respect to B2.

4
8. In the vectorial space P1 (R) we consider the basis

B1 = {t, 1} and B2 = {1 + t, 1 t}.

We want to:

(a) Obtain the matrices of change of basis from B1 to B2 , and the one form B2 to B1 .
(b) Given the polynomial p(t) = t+3, use the correct matrix of change of basis to obtain
its coordinates with respect to the basis B2 .

Solution: (a) Express each vector of B1 in B2: t = (1/2)(1 + t) − (1/2)(1 − t) and 1 = (1/2)(1 + t) + (1/2)(1 − t), so

M_{B1→B2} =
 1/2  1/2
−1/2  1/2

Conversely, 1 + t has coordinates (1, 1) in B1 = {t, 1} and 1 − t has coordinates (−1, 1), so

M_{B2→B1} =
1  −1
1   1

(b) p(t) = t + 3 has coordinates (1, 3) in B1, so (p)_{B2} = M_{B1→B2} · (1, 3) = (2, 1); indeed t + 3 = 2(1 + t) + 1(1 − t).
5
9. In the vectorial space M2⇥2 (R) we consider the basis
⇢✓ ◆ ✓ ◆ ✓ ◆ ✓ ◆
1 0 0 1 1 0 0 1
B1 = , , , .
0 1 1 0 0 0 0 0

Find the matrix of change of basis from the canonical basis of M2⇥2 (R) to B1 , and obtain
the coordinates of ✓ ◆
3 4
A=
2 1
with respect to B1 .

Solution: identify M2×2(R) with R⁴ via (a b; c d) ↦ (a, b, c, d). Then M_{B1→Bc} has as columns the coordinates of the matrices of B1, and M_{Bc→B1} is its inverse. For the coordinates of A, solve

A = α (1 0; 0 1) + β (0 1; 1 0) + γ (1 0; 0 0) + δ (0 1; 0 0),

i.e. α + γ = 3, β + δ = 4, β = 2, α = 1, so (A)_{B1} = (1, 2, 2, 2).
6
10. In the vectorial space M2⇥2 (R) we consider the canonical basis Bc , and the basis
⇢✓ ◆ ✓ ◆ ✓ ◆ ✓ ◆
1 0 0 1 0 1 0 0
B1 = , , , .
0 0 0 0 1 0 1 1

We want to:

(a) Find the matrix of change of basis from B1 to Bc , and from Bc to B1 .


(b) Given the matrix ✓ ◆
1 2
C=
2 1
obtain its coordinates with respect to the basis B1 .

Homework: check the solution in the PDF.

7
11. Given the following basis of R4 :

B = {(1, 0, 2, 1), ( 1, 1, 0, 0), ( 1, 0, 2, 1), (1, 1, 0, 1)},


we want to obtain the matrix of the change of basis from B to the canonical basis of R4 ,
the matrix of the change of basis from the canonical basis of R4 to B, and the coordinates
of the vector (1, 2, 0, 1) with respect to the basis B.
Solution sketch: the four vectors of B are linearly independent (the matrix having them as columns has nonzero determinant), so B is a basis. M_{B→Bc} is the matrix whose columns are the vectors of B, and M_{Bc→B} = (M_{B→Bc})⁻¹, computed by Gauss–Jordan elimination on (M_{B→Bc} | I). Finally, the coordinates of (1, 2, 0, 1) with respect to B are obtained by solving M_{B→Bc} · x = (1, 2, 0, 1), i.e. x = M_{Bc→B} · (1, 2, 0, 1).
8
12. In the vectorial space P3 (R) we consider the canonical basis Bc = {1, t, t2 , t3 } and the
basis B1 = {t2 + t3 , t + t2 , t, 1}.
We want to:

(a) Obtain the matrix of change of basis from B1 to Bc , and from Bc to B1 .


(b) Given the polynomial 1 + t + t2 + t3 , use the right matrix of change of basis to obtain
its coordinates in the basis B1 .

Solution: (a) the columns of M_{B1→Bc} are the coordinates of the vectors of B1 in Bc = {1, t, t², t³}:

M_{B1→Bc} =
0 0 0 1
0 1 1 0
1 1 0 0
1 0 0 0

and M_{Bc→B1} = (M_{B1→Bc})⁻¹.

(b) Writing 1 + t + t² + t³ = a(t² + t³) + b(t + t²) + ct + d gives a = 1, b = 0, c = 1, d = 1, so its coordinates in B1 are (1, 0, 1, 1).
9
13. In the vectorial space P3 (R) we consider the basis B1 = {t3 , t2 , t, 1} and the basis B2 =
{t3 , t + t2 , 1 + t, 1}.
We want to:

(a) Obtain the matrix of change of basis from B1 to B2 , and from B2 to B1 .


(b) Given the polynomial t3 + t2 + t, use the right matrix of change of basis to obtain
its coordinates in the basis B2 .

Check the solution in the PDF

10
Matrix Equation of a Linear Mapping
Definition
Two matrices A, A0 such that there exist regular matrices P, Q satisfying

A′ = Q⁻¹ · A · P

are called equivalent.

If two matrices are equivalent, then they have the same rank.
If two matrices represent the same linear mapping but in di↵erent
bases, then they are equivalent.
Conversely, two equivalent matrices represent the same linear
mapping, in di↵erent bases.
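The rank statement above can be checked on a small example. This is a sketch (matrices chosen for illustration, with Q⁻¹ written down explicitly so no inversion routine is needed): A and A′ = Q⁻¹AP have the same rank.

```python
# Rank via exact row reduction; equivalent matrices share it.
from fractions import Fraction as F

def rank(A):
    M = [[F(v) for v in row] for row in A]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A    = [[1, 2], [2, 4]]    # rank 1
Qinv = [[1, -1], [0, 1]]   # inverse of Q = [[1, 1], [0, 1]]
P    = [[1, 0], [1, 1]]    # regular
print(rank(A), rank(mul(mul(Qinv, A), P)))  # 1 1
```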

Lesson 3: Linear Mappings


Matrix Equation of a Linear Mapping

Definition
Two square matrices A, A0 such that there exists a regular matrix P
satisfying
A′ = P⁻¹ · A · P
are said to be similar.

The notion comes from that of equivalent matrices, when Q = P.


If one considers a linear mapping f : V ! V (original and final
spaces coincide!!), and uses the same basis B in both cases, then
when B is changed to B 0 , the associated matrices A, A0 are similar,
and P is the change of basis matrix.
Conversely, two similar matrices represent the same linear mapping
in different bases (related by P).
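Similar matrices share the coefficients of the characteristic polynomial. A small sketch (example matrices; P⁻¹ written down directly): A and P⁻¹AP have the same trace and determinant.

```python
# Similar 2x2 matrices share trace and determinant.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A    = [[2, 1], [0, 3]]
P    = [[1, 1], [0, 1]]
Pinv = [[1, -1], [0, 1]]  # inverse of P

S = mul(mul(Pinv, A), P)
trace = lambda M: M[0][0] + M[1][1]
det   = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(trace(A) == trace(S), det(A) == det(S))  # True True
```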

Lesson 3: Linear Mappings


Kernel and Image

Definition
Let f : V ! V 0 be a linear mapping. We define:
(i) The nullspace or kernel of f , Ker(f ), is the set of all the vectors
of V that transform themselves into the vector ~0 2 V 0 .
(ii) The image of f , Im(f ), is the set of all the vectors of V 0 that are
the image of some vector of V .

Lesson 3: Linear Mappings


Kernel and Image

Proposition
Ker(f ) is a vector subspace of V .

Proposition
Im(f ) is a vector subspace of V 0 , and its dimension is rank(A), where A
is the matrix associated with f (regardless of the bases used). In fact
Im(f ) = L({f (u1 ), . . . , f (un )}), where B = {u1 , . . . , un } is a basis of V .

Proof. Khan Academy (click)

Lesson 3: Linear Mappings


Kernel and Image

Theorem
Let f : V ! V 0 be a linear mapping. The following statements are true:
(1) f is injective if and only if Ker (f ) = {~0}.
(2) f is surjective if and only if Im(f ) = V 0 .

Examples of how to compute Ker(f ) and Im(f ): Khan Academy (click)
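A small computational sketch (the map is an illustrative example, not one from the notes): for f(x, y, z) = (x + y, y + z), the matrix is already in echelon form, so the kernel can be read off from the free variable and verified.

```python
# Ker(f) for the example map f(x, y, z) = (x + y, y + z):
# x = -y, y = -z, z free  =>  Ker(f) = span{(1, -1, 1)}.
A = [[1, 1, 0],
     [0, 1, 1]]
v = [1, -1, 1]
image = [sum(a * x for a, x in zip(row, v)) for row in A]
print(image)  # [0, 0]
```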

Lesson 3: Linear Mappings


Kernel and Image

Theorem
If f : V → V ′ is an isomorphism and {~u1 , . . . , ~un } is a basis of V , then
{f (~u1 ), . . . , f (~un )} is a basis of V ′ .

Lesson 3: Linear Mappings


Endomorphisms

Definition
A linear mapping f : Vn ! Vn of a vector space onto itself is called
endomorphism.

The matrix associated with an endomorphism is a square matrix.


Conversely, any square matrix corresponds to some endomorphism.

Lesson 3: Linear Mappings


Endomorphisms

Classification of endomorphisms: let f : Vn ! Vn be an


endomorphism, and let A be its associated matrix.
If |A| ≠ 0, then f is bijective. In that case, we say that f is an
automorphism.
If |A| = 0, then f is neither injective nor surjective.

Lesson 3: Linear Mappings


Endomorphisms

Proposition
Let f : Vn ! Vn be an automorphism, and let Af be the matrix of f .
Then f has an inverse, f 1 : Vn ! Vn , which is also linear, and its
associated matrix is A 1 .

Lesson 3: Linear Mappings


14/10/2019
Problem 1. Given the following spaces of R4 :
V1 = {(x, y, z, t) ∈ R⁴ : x − y + z = 0, x − z − t = 0},

and
V2 = Span {(1, 1, 1, 0), (1, 0, 1, 1), (0, 0, 1, 1)} .

(a) Explain why they are vector spaces.

(b) Calculate the dimension and a basis of V1 , and V2 .

(c) Extend such bases to bases of R4 , and call them B1 and B2 respectively.

(d) Obtain the matrix of change of bases from B2 to B1 .

Solution sketch: (a) V1 is the solution set of a homogeneous linear system and V2 is the span of a set of vectors; both are therefore vector subspaces of R⁴. (b) dim V1 = 4 − 2 = 2 (two independent equations); a basis is obtained by solving the system with two free parameters. dim V2 equals the rank of the matrix whose rows are the three spanning vectors, and its nonzero reduced rows give a basis. (c) Complete each basis with canonical vectors, keeping linear independence. (d) If P1 and P2 are the matrices whose columns are the vectors of B1 and B2, then M_{B2→B1} = P1⁻¹ · P2.
Problem 2. Given the linear mapping f : R3 ! R3 defined as:

f (x, y, z) = (x − y, y − z, x − z).
(a) Obtain the matrix associated to f .

(b) Is f injective? Obtain its kernel.


(c) Obtain a basis of Im(f ).
(d) Given the bases

B1 = {(1, 1, 0), (0, 1, 1), (1, 1, 1)} and B2 = {( 1, 0, 1), ( 1, 1, 0), (0, 1, 0)},

obtain the matrix of f with respect to the bases, from B1 to B2 .

Solution: (a) the columns of A are f(e1) = (1, 0, 1), f(e2) = (−1, 1, 0), f(e3) = (0, −1, −1):

A =
1 −1  0
0  1 −1
1  0 −1

(b) Ker(f): x − y = 0, y − z = 0, x − z = 0 give x = y = z, so Ker(f) = span{(1, 1, 1)} ≠ {0} and f is not injective.

(c) Im(f) is spanned by the columns of A; since rank(A) = 2, a basis is {(1, 0, 1), (−1, 1, 0)}.

(d) M_{B1→B2}(f) = M_{Bc→B2} · A · M_{B1→Bc} = Q⁻¹ · A · P, where P and Q have the vectors of B1 and B2 as columns.
LINEAR ALGEBRA (350000) Course 2019/20

Practice exercises. Sheet 3

Linear Mappings

1. Let f : R2 → R3 be the linear mapping defined by

f (x, y) = (x + y, 2x − y, 2x + 2y).

(a) Obtain the coordinate matrix of f with respect to the canonical bases.
(b) Obtain a basis, the dimension and a set of equations of the kernel of f and of the
image space of f .
(c) Obtain the coordinate matrix of f with respect to the basis B1 = {(1, 1), ( 1, 2)} of
R2 and of the basis B2 = {( 1, 1, 0), (1, 0, 1), (0, 0, 1)} of R3 .

Solution: (a) f(e1) = (1, 2, 2) and f(e2) = (1, −1, 2), so

M(f; Bc, Bc) =
1  1
2 −1
2  2

(b) Ker(f): x + y = 0, 2x − y = 0, 2x + 2y = 0 force x = y = 0, so Ker(f) = {0}, dim Ker(f) = 0, and f is injective. Im(f) = span{(1, 2, 2), (1, −1, 2)}, dim Im(f) = 2; a vector (x, y, z) belongs to Im(f) iff 2x − z = 0 (one equation, since the dimension is 2 inside R³).

(c) M_{B1→B2}(f) = M_{Bc→B2} · M(f; Bc, Bc) · M_{B1→Bc}: compute f on the vectors of B1 and express the results in B2.
2. Let f : R3 ! R3 be the linear mapping defined by

f (x, y, z) = (x + 2y − z, y + z, x + y − 2z).

(a) Obtain the matrix of f with respect to the basis B = {(0, 2, 1), (1, 0, 1), (−1, 0, 0)}.
(b) Use this matrix to compute f (−2, 2, 2).
Solution: (a) f(0, 2, 1) = (3, 3, 0), f(1, 0, 1) = (0, 1, −1), f(−1, 0, 0) = (−1, 0, −1). Expressing each image in B and placing the coordinates as columns:

M_B(f) =
 3/2   1/2   0
−3/2  −3/2  −1
−9/2  −3/2   0

(b) (−2, 2, 2) has coordinates (1, 1, 3) in B; M_B(f) · (1, 1, 3) = (2, −6, −6), and 2u1 − 6u2 − 6u3 = (0, 4, −4). Indeed, f(−2, 2, 2) = (0, 4, −4).
3. Let f : R3 ! R4 be the linear mapping defined by:

f (1, 0, 1) = (1, 1, 1, 0)
f ( 1, 2, 0) = (1, 3, 0, 1)
f (0, 1, 1) = ( 1, 0, 1, 0).

Obtain the matrix associated to f with respect to the canonical bases and Ker(f ).

Solution sketch: the vectors (1, 0, 1), (−1, 2, 0), (0, 1, 1) form a basis B of R³, so express each canonical vector ei in B and use linearity to obtain f(ei); the vectors f(ei) are the columns of M(f; Bc, Bc). Equivalently, M(f; Bc, Bc) = M(f; B, Bc) · M_{Bc→B}, where M_{Bc→B} = (M_{B→Bc})⁻¹ is computed by the Gauss–Jordan process. Then Ker(f) = {x ∈ R³ : M(f; Bc, Bc) x = 0}, solved by row reduction. Note that f cannot be surjective: dim Im(f) = rank ≤ 3 < 4 = dim R⁴.
4. Let f : M2⇥2 (R) ! R3 be the linear mapping defined by:
f ( x y ; z t ) = (x + y, z, t).

We want:
(a) Prove that the application f is linear.
(b) Obtain a basis, the dimension and equations of the Ker(f ) and the image of f .

Solution: (a) For A, B ∈ M2×2(R) and α ∈ R, writing out the entries shows f(A + B) = f(A) + f(B) and f(αA) = αf(A), so f is linear.

(b) Ker(f): x + y = 0, z = 0, t = 0, so Ker(f) = span{(1 −1; 0 0)}, dim Ker(f) = 1, with equations x + y = 0, z = 0, t = 0. Since dim Im(f) = dim M2×2(R) − dim Ker(f) = 4 − 1 = 3, we get Im(f) = R³ (no equations), with the canonical basis of R³ as a basis.
f
5. Let f : P2 (R) → R4 be the linear mapping defined by:

f (a0 + a1 t + a2 t2 ) = (a0 + a1 + a2 , a0 − 2a1 , a2 , a0 ),

where E is the vector space of the polynomials of degree less or equal to 2 with real
coefficients. Find the coordinate matrix of f with respect to the bases
B1 = {1, t, t2 },

and
B2 = {(1, 0, 0, 1), (0, 1, 0, 0), (0, 0, 1, 1), (1, 1, 1, 0)}.
(b) Use such matrix to obtain f (1 + t).

Solution: (a) f(1) = (1, 1, 0, 1), f(t) = (1, −2, 0, 0), f(t²) = (1, 0, 1, 0). Expressing each image in B2 and placing the coordinates as columns:

M_{B1→B2}(f) =
1   1/2   0
1  −5/2  −1
0  −1/2   0
0   1/2   1

(b) (1 + t)_{B1} = (1, 1, 0), and M_{B1→B2}(f) · (1, 1, 0) = (3/2, −3/2, −1/2, 1/2), the coordinates of f(1 + t) in B2; converting back, f(1 + t) = (2, −1, 0, 1) = f(1) + f(t).
2 1
Lesson 4: Diagonalization

September 15, 2019

Lesson 4: Diagonalization
Eigenvalues, eigenvectors, eigenspaces

The rough idea: From the preceding lesson, we know that a matrix A
represents a linear mapping in a certain basis. On the other hand, the
matrix associated with a linear mapping changes when we change the
basis. So, maybe there exists some basis where the matrix is “specially
nice”...

Lesson 4: Diagonalization
Eigenvalues, eigenvectors, eigenspaces

Definition
Let f : V → V be an endomorphism. We say that λ ∈ R (or C) is an
eigenvalue of f if there exists ~v ∈ V , ~v ≠ ~0, such that f (~v ) = λ~v .
Furthermore, in that case we say that ~v is an eigenvector associated
with λ.

Examples

Lesson 4: Diagonalization
Eigenvalues, eigenvectors, eigenspaces

How do we compute the eigenvalues?

Properties:
(1) p(λ) = |A − λI| is called the characteristic polynomial of the
matrix A. Its degree is dim(V ).
(2) It is usual to refer to “the eigenvalues of the matrix” (values λ such
that A · ~v = λ~v ), instead of the linear mapping.
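For a 2×2 matrix the characteristic polynomial is p(λ) = λ² − tr(A)λ + det(A), so the eigenvalues follow from the quadratic formula. A small sketch (example matrix chosen for illustration):

```python
# Eigenvalues of a 2x2 matrix from its characteristic polynomial.
import math

A = [[4, 2], [1, 3]]
tr  = A[0][0] + A[1][1]                       # trace = 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant = 10
disc = tr * tr - 4 * det                      # discriminant = 9
lams = sorted((tr + s * math.sqrt(disc)) / 2 for s in (1, -1))
print(lams)  # [2.0, 5.0]
```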

Lesson 4: Diagonalization
Eigenvalues, eigenvectors, eigenspaces

How do we compute the eigenvalues?

Properties:
(3) If λi is an eigenvalue, then it is a root of p(λ), and therefore

p(λ) = (λ − λi)^{ni} · · ·

The number ni is called the algebraic multiplicity of λi .
(4) When we consider vector spaces over R, the eigenvalues can be
either real or complex.

Lesson 4: Diagonalization
Eigenvalues, eigenvectors, eigenspaces

How do we compute the eigenvalues?

Properties:
(5) If λ is an eigenvalue of A and A · ~v = λ~v , we say that ~v is an
eigenvector associated with λ.

Lesson 4: Diagonalization
Eigenvalues, eigenvectors, eigenspaces

Proposition
The set of all the eigenvectors associated with a same eigenvalue of a
matrix A, is a vector subspace.

Proof. If w1 , w2 are eigenvectors associated with λ and α ∈ R, then
A(w1 + w2) = Aw1 + Aw2 = λ(w1 + w2) and A(αw1) = αAw1 = λ(αw1),
so the set {~v : A~v = λ~v} = Ker(A − λI) is closed under the operations: it is a subspace.
Definition
For each eigenvalue λi , the set of eigenvectors associated with it is called
the eigenspace of λi . We represent it by L_{λi} . From the above result, it
is a vector subspace, and its dimension is called the geometric
multiplicity of λi : mg(λi) = dim(L_{λi}).
Lesson 4: Diagonalization
Eigenvalues, eigenvectors, eigenspaces

Observations:
(1) L_{λi} is the solution set of (A − λi I) · ~v = ~0.
(2) dim(L_{λi}) = n − rank(A − λi I), where n is the order of A.
(3) Denoting the algebraic multiplicity of λi by ni , it holds that

1 ≤ dim(L_{λi}) ≤ ni

Example

Lesson 4: Diagonalization
Eigenvalues, eigenvectors, eigenspaces

Theorem
Let f : V → V be a linear mapping with p different eigenvalues
λ1 , . . . , λp . Then the eigenvectors ~v1 , . . . , ~vp associated with them are
linearly independent.

Lesson 4: Diagonalization
Diagonalization of a square matrix

Introduction: every square matrix A of order n is the matrix assoc. with


some endomorphism f : Vn ! Vn in some basis, i.e. A = M(f ; B, B).
Furthermore, if we change the basis B ! B 0 , then the matrix changes
according to A′ = P⁻¹ · A · P. The question is: given A, is there any
basis where the expression of the corresponding endomorphism is
diagonal? In other words, given A, is it similar to any diagonal matrix?

Definition
Let A be a square matrix, and let f be the endomorphism that it
represents. We say that A (or f ) is diagonalizable if there exists some
basis such that the matrix associated with f in that basis is diagonal
(equivalently, if it is similar to some diagonal matrix).

Lesson 4: Diagonalization
Diagonalization of a square matrix

Theorem
An endomorphism f : Vn ! Vn is diagonalizable if and only if there exists
a basis of Vn consisting of eigenvectors.

Proof sketch: in a basis of eigenvectors, each basis vector is mapped to a multiple of itself, so the associated matrix is diagonal (with the eigenvalues on the diagonal); and conversely.

Lesson 4: Diagonalization
Diagonalization of a square matrix

Theorem
Let V be a vector space over R of dimension n, and let f : Vn ! Vn be
an endomorphism. Then f is diagonalizable (over the reals) if and only if
the following two conditions hold:
(i) The total number of real eigenvalues, counting multiplicities, is n.
(ii) The geometric multiplicity of each eigenvalue equals its algebraic
multiplicity.

Lesson 4: Diagonalization
Jordan Matrix
In that case, there exists a matrix called Jordan matrix of A, which is
“block-diagonal”: 0 1
J1 0 ··· 0
B0 J2 ··· 0C
B C
J=B. .. .. .. C
@ .. . . .A
0 0 ··· Jr
where

• Each Ji (Jordan block) corresponds to a different eigenvalue λi , and it
is a square matrix of order n_{λi} of the form

      λi  ?  ···  0   0
      0   λi ···  0   0
Ji =  ..  ..  ..  ..  ..
      0   0  ···  λi  ?
      0   0  ···  0   λi
and each ? is either a 1, or a 0.


Lesson 4: Diagonalization
Jordan Matrix

• The number and positions of the 1's in each Jordan block must be
computed. We just mention the two “easy” rules:
1 If dim(L_{λi}) = n_{λi}, then the Jordan block is diagonal (no 1's);
2 if dim(L_{λi}) = 1 ≠ n_{λi}, then the block has 1's above all the
elements in the main diagonal;
so, in that case it looks like:
      λi  1  ···  0   0
      0   λi ···  0   0
Ji =  ..  ..  ..  ..  ..
      0   0  ···  λi  1
      0   0  ···  0   λi

Lesson 4: Diagonalization
Jordan Matrix

It also happens that

A = P · J · P⁻¹

where P (regular) fulfills:
The columns of P include independent eigenvectors (but we need
more!!).
The remaining, unknown, columns must be computed:
(i) by using that AP = PJ (not very efficient...);
(ii) using a more sophisticated/efficient method that we will skip here
(but it exists!)

Lesson 4: Diagonalization
Remark: if A = P · D · P⁻¹, then Aⁿ = P · Dⁿ · P⁻¹, since

Aⁿ = (PDP⁻¹)(PDP⁻¹) · · · (PDP⁻¹) = P D (P⁻¹P) D · · · D P⁻¹ = P Dⁿ P⁻¹,

and powers of a diagonal matrix are computed entrywise.
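The power formula can be sketched on a 2×2 example (matrix chosen for illustration; its eigenvectors, P, and P⁻¹ are written down explicitly): Aⁿ = P Dⁿ P⁻¹ agrees with repeated multiplication.

```python
# A has eigenvalues 2 and 1, with eigenvectors (1,1) and (0,1).
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A    = [[2, 0], [1, 1]]
P    = [[1, 0], [1, 1]]    # columns: eigenvectors for λ = 2 and λ = 1
Pinv = [[1, 0], [-1, 1]]
n = 5
Dn = [[2 ** n, 0], [0, 1 ** n]]
An = mul(mul(P, Dn), Pinv)

# Compare with repeated multiplication of A
B = [[1, 0], [0, 1]]
for _ in range(n):
    B = mul(B, A)
print(An == B)  # True
```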
6. Let E be the space of polynomials of degree at most 2 with real coefficients and let F
be the space of 2-by-2 square matrices with real entries. Let us consider the linear mapping
f : E ! F defined as follows:
f (a + bt + ct²) =
p(0)  p(1)
p(1)  p(0)
(a) Obtain the coordinate matrix of f with respect to the basis

B1 = {1, 1 + t, 1 + t + t2 }
of E and the canonical basis of F .
(b) By using the matrix obtained in (a), compute f (p) where p = 3 + 2t + t2 .
(c) By using the matrix obtained in (a), compute a basis of the subpsace Ker(f ).

Solution (reading the values of p at 0 and 1 as printed): (a) f(1) = (1 1; 1 1) ↦ (1, 1, 1, 1), f(1 + t) = (1 2; 2 1) ↦ (1, 2, 2, 1), f(1 + t + t²) = (1 3; 3 1) ↦ (1, 3, 3, 1), so

M_{B1→Bc}(f) =
1 1 1
1 2 3
1 2 3
1 1 1

(b) p = 3 + 2t + t² = 1 · 1 + 1 · (1 + t) + 1 · (1 + t + t²), so (p)_{B1} = (1, 1, 1) and M · (1, 1, 1) = (3, 6, 6, 3), i.e. f(p) = (3 6; 6 3). Indeed p(0) = 3, p(1) = 6.

(c) Ker(f): M x = 0 gives x1 + x2 + x3 = 0 and x1 + 2x2 + 3x3 = 0, so x2 = −2x3 and x1 = x3. The coordinates (1, −2, 1) in B1 correspond to 1 − 2(1 + t) + (1 + t + t²) = t² − t, hence Ker(f) = span{t² − t}, a polynomial which indeed vanishes at 0 and 1.
7. Let E be the space of 2-by-2 square matrices with real entries and let F be the space of polynomials of degree at most 2 with real coefficients. We consider the linear mapping f : E → F
defined as follows:

f ( a b ; c d ) = a + (b + c) t + d t².
(a) Obtain the coordinate matrix of f with respect to the basis

B1 = { (1 0; 0 0), (1 1; 0 0), (1 1; 1 0), (1 1; 1 1) }
of E and the canonical basis of F .
(b) By using the matrix obtained in (a), compute f (C), being C = (2 2; 1 0).
Solution: (a) f(1 0; 0 0) = 1 ↦ (1, 0, 0), f(1 1; 0 0) = 1 + t ↦ (1, 1, 0), f(1 1; 1 0) = 1 + 2t ↦ (1, 2, 0), f(1 1; 1 1) = 1 + 2t + t² ↦ (1, 2, 1), so

M_{B1→Bc}(f) =
1 1 1 1
0 1 2 2
0 0 0 1

(b) C = 0 · (1 0; 0 0) + 1 · (1 1; 0 0) + 1 · (1 1; 1 0) + 0 · (1 1; 1 1), so (C)_{B1} = (0, 1, 1, 0) and M · (0, 1, 1, 0) = (2, 3, 0), i.e. f(C) = 2 + 3t.
8. Let E be the space of 2-by-2 matrices with real entries. We consider the linear mapping
f : E ! E given by:
f (A) = A + AT ,
where AT is the matrix transposed of A.
(a) Prove that f is a linear mapping.
(b) Obtain the coordinate matrix of f with respect to the basis
B = { (1 0; 0 0), (1 1; 0 0), (1 1; 1 0), (1 1; 0 1) }
of E.

(c) By using the matrix obtained in (b), compute f (C), being C = (2 3; 2 1).
Solution: (a) f(A + B) = (A + B) + (A + B)ᵀ = (A + Aᵀ) + (B + Bᵀ) = f(A) + f(B), and f(αA) = αA + αAᵀ = αf(A), so f is linear.

(b) Compute f on each vector of B and express the results in B; the coordinate columns give

M_B(f) =
2  1  0  1
0  0  0 −2
0  1  2  1
0  0  0  2

(c) C = (2 3; 2 1) has coordinates (−1, 0, 2, 1) in B, M_B(f) · (−1, 0, 2, 1) = (−1, −2, 5, 2), and converting back, f(C) = (4 5; 5 2) = C + Cᵀ.
9. Let E be the space of polynomials of degree at most 2 with real coefficients. We consider
the linear mapping f : E ! E given by:

f (p(t)) = p0 (t),
(i.e. the image of a polynomial is its derivative).
(a) Obtain the coordinate matrix of f with respect to the basis B = {t, 1 + t, t2 } of E.
(b) By using the matrix obtained in (a), compute f (1 + 2t + 2t2 ).

Solution: (a) f(t) = 1, f(1 + t) = 1, f(t²) = 2t. In B = {t, 1 + t, t²} we have 1 = (1 + t) − t ↦ (−1, 1, 0) and 2t ↦ (2, 0, 0), so

M_B(f) =
−1 −1  2
 1   1  0
 0   0  0

(b) 1 + 2t + 2t² = 1 · t + 1 · (1 + t) + 2 · t², so its coordinates in B are (1, 1, 2), and M_B(f) · (1, 1, 2) = (2, 2, 0), i.e. f(1 + 2t + 2t²) = 2t + 2(1 + t) = 2 + 4t, as expected.
Study if f is diagonalizable: p(λ) = det(λI − M_B(f)) = λ³, so λ = 0 is the only eigenvalue, with ma(0) = 3. Since rank(M_B(f)) = 2, mg(0) = dim L0 = 3 − 2 = 1 ≠ 3 = ma(0), hence f is NOT diagonalizable.
10. Let E be the space of polynomials of degree at most 2 with real coefficients and let F be
the space of polynomials of degree at most 3 with real coefficients. We consider the linear
mapping f : E ! F defined as follows
f (p(t)) = ∫₀ᵗ p(s) ds

(i.e. the image of a polynomial is its definite integral between 0 and t: a primitive of the
polynomial so that at 0 is equal to 0.)

(a) Obtain the coordinate matrix of f with respect to the basis B = {1, 1 + t, 1 + t2 } of
E and the canonical basis of F .
(b) By using the matrix obtained in (a), compute f (3 + t + t²).
Solution: (a) f(1) = t ↦ (0, 1, 0, 0), f(1 + t) = t + t²/2 ↦ (0, 1, 1/2, 0), f(1 + t²) = t + t³/3 ↦ (0, 1, 0, 1/3), so

M_{B→Bc}(f) =
0  0    0
1  1    1
0  1/2  0
0  0    1/3

(b) 3 + t + t² = 1 · 1 + 1 · (1 + t) + 1 · (1 + t²), so its coordinates in B are (1, 1, 1), and M · (1, 1, 1) = (0, 3, 1/2, 1/3), i.e. f(3 + t + t²) = 3t + t²/2 + t³/3, which is indeed the primitive vanishing at 0.

Lesson 5: Linear Di↵erential Equations

September 29, 2019

Lesson 5: Linear Di↵erential Equations


Introduction

A general problem in Applied Mathematics is the following:


We have two variables, y and t, which are related, i.e. y = y (t), and
we want to find the function y (t).
Nature, in general, does not make it easy to find, directly, y = y (t).
Instead, it usually provides a certain relationship between the
derivatives of y (t) with respect to t, the function y (t), and t that
we have to exploit to find y = y (t).
Examples: planetary movement, elasticity, radioactive decay,
population dynamics, circuits, . . .

Lesson 5: Linear Di↵erential Equations


Introduction

Example (RLC circuit): the charge q(t) in the circuit satisfies

L d²q(t)/dt² + R dq(t)/dt + (1/C) q(t) = E,

where we assume the source E is constant. Solving this o.d.e. gives q(t).
Lesson 5: Linear Di↵erential Equations
Introduction

Example (circuit with several loops): the currents satisfy the linear system

i1′(t) = i2(t) − 2 i3(t)
i2′(t) = −4 i2(t) + i3(t)
i3′(t) = 2 i2(t) − 6 i3(t)
Lesson 5: Linear Di↵erential Equations


Introduction

So, the first problem is: given F (t, y , y 0 , y 00 , . . . , y (n) ) = 0,

can we find y = y (t)?

Difficult, and not solvable in general.


Here, we focus on some treatable cases, whose study is related to
the topics addressed in previous lessons.

Lesson 5: Linear Di↵erential Equations


Di↵erential equations
An ordinary di↵erential equation (o.d.e) is an equality involving an
unknown function y , and its derivatives with respect to a certain
independent variable t:
F (t, y, y′, y″, . . . , y⁽ⁿ⁾) = 0
Examples:
y′(t) = y(t)   (a solution: y(t) = eᵗ)
y′(t) − 2y(t)(1 − y(t)/10) = 0
y′(t) = 2y(t)(1 − y(t)/10) − t²
y″(t) sin(t − y′(t)) + y(t) = 0
...
The order of an o.d.e. is the order of the highest derivative appearing in
the equation.



Lesson 5: Linear Di↵erential Equations
Di↵erential equations

• An explicit solution of F (t, y, y′, ..., y⁽ⁿ⁾) = 0 is a function y = φ(t)

fulfilling the equation, i.e. satisfying that

F (t, φ(t), φ′(t), ..., φ⁽ⁿ⁾(t)) = 0

Examples:

Solutions of y′(t) = y(t)? y(t) = K eᵗ, K constant.
Solutions of y″(t) = −y(t)? y(t) = K1 cos(t) + K2 sin(t).
Check that y(t) = eᵗ is a solution of y″(t) − 2y′(t) + y(t) = 0.
(Check as well that y(t) = t eᵗ is a solution.)
Lesson 5: Linear Di↵erential Equations


Di↵erential equations

An o.d.e. always has infinitely many solutions.


A solution of an o.d.e. is called a particular solution.
The general solution of an o.d.e. of order n is an expression,
depending on n constants, which provides infinitely many solutions:

(t, y , C1 , . . . , Cn )

(examples relative to the o.d.e’s in the previous slide).

Lesson 5: Linear Di↵erential Equations


Di↵erential equations

Initial Value Problem (I.V.P.) It is an o.d.e. together with some


initial conditions referred to a same value of t.
Boundary Value Problem (B.V.P.) It is an o.d.e. together with
some conditions referred to di↵erent values of t.
Both I.V.P. and B.V.P may or may not have solution!! Furthermore,
if it has a solution, it may or may not be unique. In general, we will
look for the solutions by imposing the initial conditions on the
general solution.


Lesson 5: Linear Di↵erential Equations


Di↵erential equations

Here, we will restrict ourselves to the easiest case, namely that of


linear di↵erential equations:

an(t) y⁽ⁿ⁾(t) + an−1(t) y⁽ⁿ⁻¹⁾(t) + · · · + a0(t) y(t) = b(t)

In the above case, we say that the equation is homogeneous if b(t)


is identically 0. In other case, we say that it is non-homogeneous.
If ai (t) = ai (constant) for i = 0, . . . , n, we say that it is linear with constant
coefficients.

Lesson 5: Linear Di↵erential Equations


Di↵erential equations
• Linear di↵erential equations have the following nice property:
Theorem
Let an(t) y⁽ⁿ⁾(t) + an−1(t) y⁽ⁿ⁻¹⁾(t) + · · · + a0(t) y(t) = b(t), and let
J ⊂ R be an interval where all the functions ai(t) and b(t) are
continuous. Then, for any t0 ∈ J, and any choice of

y0, y0′, . . . , y0⁽ⁿ⁻¹⁾ ∈ R,

the Initial Value Problem


an(t) y⁽ⁿ⁾(t) + an−1(t) y⁽ⁿ⁻¹⁾(t) + · · · + a0(t) y(t) = b(t)
y(t0) = y0
y′(t0) = y0′
...
y⁽ⁿ⁻¹⁾(t0) = y0⁽ⁿ⁻¹⁾

has a solution, and it is unique.



Lesson 5: Linear Di↵erential Equations


First Order Linear Di↵erential Equations

First order linear equation: The general form is

y 0 (t) + a(t)y (t) = b(t)

We consider first the homogeneous case:

y′(t) + a(t) y(t) = 0,

whose solutions are y(t) = K e^{−∫a(t)dt}, K constant.
Theorem
The set of solutions of an homogeneous first order linear equation has a
structure of vector space of dimension 1.
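A minimal worked example (not from the notes) of the theorem above, for a constant coefficient:

```latex
% Example: y'(t) + 2 y(t) = 0, so a(t) = 2. The general solution is
y(t) = K e^{-\int 2\,dt} = K e^{-2t}, \qquad K \in \mathbb{R}.
% The solution set \mathrm{span}\{e^{-2t}\} is a vector space of
% dimension 1, as the theorem states.
```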

Lesson 5: Linear Di↵erential Equations


First Order Linear Di↵erential Equations

Non-homogeneous case

y 0 (t) + a(t)y (t) = b(t)

Theorem
Denote by ynh (t) the general solution of the above non-homogeneous,
first order linear equation, denote by yp (t) a particular solution of the
above non-homogeneous equation, and denote by yh (t) the general
solution of the associated homogeneous first order linear
equation. Then it holds that

ynh (t) = yh (t) + yp (t)
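An illustrative example of the decomposition (the equation is chosen for illustration, not taken from the notes):

```latex
% Example: y'(t) + y(t) = t.
% Homogeneous part:   y_h(t) = C e^{-t}.
% Particular solution: try y_p(t) = at + b; then a + at + b = t
% forces a = 1, b = -1, so y_p(t) = t - 1. Hence
y_{nh}(t) = C e^{-t} + t - 1, \qquad C \in \mathbb{R}.
```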

Second Order Linear Di↵erential Equations

The general form of these equations is:

y 00 (t) + p(t)y 0 (t) + q(t)y (t) = b(t)

As before, if b(t) = 0 we say that the equation is homogeneous.

Lesson 5: Linear Di↵erential Equations


Second Order Linear Di↵erential Equations

Theorem
The set of solutions of an homogeneous second order linear equation has
a structure of vector space of dimension 2. So, the general solution of
such an o.d.e. is
yh (t) = C1 y1 (t) + C2 y2 (t),
where y1 (t), y2 (t) are linearly independent solutions.


Lesson 5: Linear Di↵erential Equations


Second Order Linear Di↵erential Equations

So, whenever we find two independent solutions y1 (t), y2 (t) of the o.d.e.,
we get the general solution.

How do we check if the functions

y1 (t), y2 (t), ..., yn (t)

are linearly independent?

Lesson 5: Linear Di↵erential Equations


Higher order Linear Di↵erential Equations

Definition
The Wronskian W (y1 , y2 , . . . , yn ) of y1 (t), y2 (t), . . . , yn (t) is the
determinant

                     | y1       y2       · · ·  yn       |
                     | y1′      y2′      · · ·  yn′      |
W (y1, y2, ..., yn) =| ..       ..       . .    ..       |
                     | y1⁽ⁿ⁻¹⁾  y2⁽ⁿ⁻¹⁾  · · ·  yn⁽ⁿ⁻¹⁾  |

Remark: Notice that W (y1 , y2 , . . . , yn ), in general, is a function on the


variable t.

Lesson 5: Linear Di↵erential Equations


Higher order Linear Di↵erential Equations

Proposition
If y1 (t), . . . , yn (t) are linearly dependent, then W (y1 , y2 , . . . , yn ) = 0
(i.e. it is identically 0).

Consequence: if W (y1 , y2 , . . . , yn ) is not identically 0 (i.e. it might be 0


for some t’s, but not for all t’s!!), then y1 (t), . . . , yn (t) are linearly
independent.
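The consequence can be sketched numerically (standard example, assuming the solutions y1 = eᵗ, y2 = t eᵗ of y″ − 2y′ + y = 0): their Wronskian is e^{2t}, nonzero everywhere.

```python
# Wronskian of y1 = e^t and y2 = t e^t (a 2x2 determinant).
import math

def wronskian(t):
    y1, dy1 = math.exp(t), math.exp(t)
    y2, dy2 = t * math.exp(t), (1 + t) * math.exp(t)
    return y1 * dy2 - y2 * dy1  # simplifies to e^{2t}

print(wronskian(0.0))  # 1.0
```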

Lesson 5: Linear Di↵erential Equations


Second Order Linear Di↵erential Equations

Case of constant coefficients: we start with the homogeneous

y 00 (t) + a1 y 0 (t) + a0 y (t) = 0

Characteristic equation: replace y″ by λ², y′ by λ, and y by 1, obtaining the 2nd-degree polynomial equation

λ² + a1 λ + a0 = 0,

and discuss according to its roots.
Lesson 5: Linear Di↵erential Equations
The roots are λ1,2 = (−a1 ± √(a1² − 4 a0)) / 2. Three cases:

If λ1 ≠ λ2 are real, then the solutions are

y(t) = k1 e^{λ1 t} + k2 e^{λ2 t},   k1, k2 constants.

Indeed, W(y1, y2) = (λ2 − λ1) e^{(λ1+λ2)t}, which vanishes only if λ1 = λ2, so e^{λ1 t} and e^{λ2 t} are linearly independent.

If λ1 = λ2 = λ, then y1(t) = e^{λt} and y2(t) = t e^{λt}; here W(y1, y2) = e^{2λt} ≠ 0, so

y(t) = (k1 + k2 t) e^{λt},   k1, k2 constants.

If λ1,2 = α ± βj are complex, then y(t) = e^{αt} (k1 cos(βt) + k2 sin(βt)).
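A short worked instance of the first case (the equation is an illustrative example):

```latex
% Example: y''(t) - 3y'(t) + 2y(t) = 0.
% Characteristic equation: \lambda^2 - 3\lambda + 2 = 0
% \implies \lambda_1 = 1,\ \lambda_2 = 2 (real and distinct), hence
y(t) = k_1 e^{t} + k_2 e^{2t}, \qquad k_1, k_2 \in \mathbb{R}.
```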
Non-homogeneous case:

    y''(t) + p(t) y'(t) + q(t) y(t) = b(t)

Theorem
Denote by ynh(t) the general solution of the above non-homogeneous
second order linear equation, by yp(t) a particular solution of it, and by
yh(t) the general solution of the associated homogeneous second order
linear equation. Then

    ynh(t) = yh(t) + yp(t)

LINEAR ALGEBRA (350000) Course 2019/20

Practice exercises: Sheet 4

Diagonalization

1. Study if the following matrices can be diagonalized over R and, if so, obtain matrices D
(diagonal) and P (regular) so that A = P D P⁻¹.

            ( 1 1 1 )            ( 1 0 0 )            ( 1 0 1 1 )
    (a) A = ( 0 1 0 )    (b) A = ( 1 1 2 )    (c) A = ( 2 1 2 4 )
            ( 0 1 2 )            ( 0 0 1 )            ( 1 1 1 1 )
                                                      ( 1 1 1 3 )

Solution notes (recovered from the handwritten work): in (a) and (b) the characteristic
polynomial has a repeated eigenvalue whose eigenspace has dimension smaller than its
algebraic multiplicity, so A is not diagonalizable in either case. In (c) every eigenspace
has dimension equal to the algebraic multiplicity of its eigenvalue, so A is
diagonalizable; the columns of P are the corresponding eigenvectors (each determined
up to a nonzero constant).
2. Let f : R³ → R³ be a linear mapping which has the following matrix representation
with respect to the canonical basis:

        ( 1 3 0 )
    A = ( 0 a 0 ).
        ( 2 1 1 )

(a) Find the values of a so that A is diagonalizable over R.
(b) For a = 3 obtain the matrices D (diagonal) and P (regular) such that A = P D P⁻¹.

Solution notes (recovered from the handwritten work): the characteristic polynomial
p(λ) = det(λI − A) factors so that the spectrum is σ(A) = {1, −1, a}. If a ≠ 1 and
a ≠ −1, then A has three single eigenvalues and is therefore diagonalizable. If a = 1
(and likewise a = −1), the repeated eigenvalue λ has algebraic multiplicity ma(λ) = 2,
but its eigenspace L = Ker(λI − A) has dimension 3 − rank(λI − A) = 1, so A is not
diagonalizable. For a = 3 the eigenvalues are simple: D collects them on the diagonal,
P collects the corresponding eigenvectors, and one checks A = P D P⁻¹.
It
µ Ub 24th Uz
3. Let us consider the matrix

        ( b 0 2b )
    A = ( 1 1 2  )
        ( b 0 2b )

that depends on the real parameter b.

a) Determine the values of b so that A is diagonalizable over R.
b) For b = 0 find the matrices D (diagonal) and P (regular) so that A = P D P⁻¹.
c) Compute, as an application of (b), the power Aⁿ (for the b = 0 case) for any positive
integer n.
Solution notes (recovered from the handwritten work): p(λ) = det(λI − A) factors with
σ(A) = {0, 1, 3b}. If 3b ∉ {0, 1}, A has three single eigenvalues and is diagonalizable.
For b = 0 the double eigenvalue 0 satisfies dim Ker(0·I − A) = 2, equal to its algebraic
multiplicity, so A is still diagonalizable; for b = 1/3 the repeated eigenvalue has a
one-dimensional eigenspace, so A is not diagonalizable. Conclusion: A is diagonalizable
for every b ≠ 1/3. For b = 0 one assembles D (with the eigenvalues 0, 0, 1 on the
diagonal) and P (whose columns are eigenvectors, each determined up to a nonzero
constant), and Aⁿ = P Dⁿ P⁻¹.
4. Let f : R³ → R³ be the linear mapping defined by:

    f(x, y, z) = (αx − αy + 3z, 2αx + 2αy + z, 3z),

where α is real.

(a) Compute A, the coordinate matrix of f with respect to the canonical basis.
(b) Determine the values of α so that A is diagonalizable over R.
(c) For α = 1 find the matrices D (diagonal) and P (regular) so that A = P D P⁻¹.
(d) Compute, as an application of the previous result, the power Aⁿ (for the α = 1
case) as a function of n. Simplify as much as possible.

Linear Di↵erential Equations of Order n

General form:

    an(t) y^(n)(t) + an-1(t) y^(n-1)(t) + · · · + a0(t) y(t) = b(t)

Homogeneous case:

    an(t) y^(n)(t) + an-1(t) y^(n-1)(t) + · · · + a0(t) y(t) = 0

Theorem
The set of solutions of a homogeneous linear equation of order n has the
structure of a vector space of dimension n. So, the general solution of such
an o.d.e. is

    yh(t) = C1 y1(t) + C2 y2(t) + · · · + Cn yn(t),

where y1(t), y2(t), . . . , yn(t) are linearly independent solutions.


Non-homogeneous case:

    an(t) y^(n)(t) + an-1(t) y^(n-1)(t) + · · · + a0(t) y(t) = b(t)

Theorem
If ynh(t) denotes the general solution of a non-homogeneous linear
equation of order n, yh(t) is the general solution of the associated
homogeneous equation, and yp(t) is a particular solution of the
non-homogeneous one, then

Constant coefficients, homogeneous case

    an y^(n)(t) + an-1 y^(n-1)(t) + · · · + a0 y(t) = 0

Characteristic equation:

    an r^n + an-1 r^(n-1) + · · · + a0 = 0

As in the case n = 2, each root rk (real or complex), of multiplicity mk,
gives rise to mk independent solutions:

Every real root rk of multiplicity mk gives rise to

    e^{rk t}, t e^{rk t}, . . . , t^{mk−1} e^{rk t}.

Every pair of complex roots ak ± j bk of multiplicity mk gives rise to

    e^{ak t} sin(bk t), t e^{ak t} sin(bk t), . . . , t^{mk−1} e^{ak t} sin(bk t)

and

    e^{ak t} cos(bk t), t e^{ak t} cos(bk t), . . . , t^{mk−1} e^{ak t} cos(bk t)

First Order Systems of Differential Equations

A first order system of differential equations is a collection of equations


    du1/dt = f1(t, u1, u2, · · · , un),
    du2/dt = f2(t, u1, u2, · · · , un),
    ...
    dun/dt = fn(t, u1, u2, · · · , un).


If, in addition, we impose some initial conditions, meaning


    u1(t0) = u1^0,
    u2(t0) = u2^0,
    ...
    un(t0) = un^0,

then we have an IVP.

EXAMPLES

What is now a "particular solution"? A concrete tuple of functions

    u1(t), u2(t), . . . , un(t)

satisfying all the equations. What is a "general solution"? A family

    u1(t, C1, . . . , Cn), u2(t, C1, . . . , Cn), . . . , un(t, C1, . . . , Cn)

depending on constants C1, . . . , Cn: as many constants as equations.

EXAMPLES
Transforming Equations of High Order into First Order
Linear Systems

An o.d.e. of order m (not necessarily linear) of the following type can be
transformed into a first-order differential system:

    y^(m)(t) = f(t, y, y', . . . , y^(m−1))

in the following way:

    u1 = y            u1' = u2
    u2 = y'           u2' = u3
    ...          ⇒    ...
    um = y^(m−1)      um' = f(t, u1, u2, . . . , um).

EXAMPLE
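As an illustrative sketch of the transformation (my own example, not from the notes): for y'' = −y with y(0) = 0, y'(0) = 1, set u1 = y, u2 = y', and integrate the resulting first-order system with Euler steps.

```python
import math

def euler_first_order_system(f, u0, t0, t1, steps):
    """Euler integration of u' = f(t, u) for a vector u given as a list."""
    h = (t1 - t0) / steps
    t, u = t0, list(u0)
    for _ in range(steps):
        du = f(t, u)
        u = [ui + h * dui for ui, dui in zip(u, du)]
        t += h
    return u

# y'' = -y  ->  u1' = u2, u2' = -u1   (u1 = y, u2 = y')
f = lambda t, u: [u[1], -u[0]]

u = euler_first_order_system(f, [0.0, 1.0], 0.0, math.pi / 2, 100_000)
print(u[0])  # approximates y(pi/2) = sin(pi/2) = 1
```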

Linear First Order Systems of Differential Equations

Consider just two functions (i.e. n = 2). A first order system of
differential equations with two unknowns is linear if it can be written as

    x' = a11(t) x + a12(t) y + b1(t)
    y' = a21(t) x + a22(t) y + b2(t).

Denoting (x, y) by u, (b1(t), b2(t)) by b(t), and

    A(t) = ( a11(t)  a12(t) ),
           ( a21(t)  a22(t) )

we get

    du(t)/dt = A(t) u + b(t).


In general, we have

    du(t)/dt = A(t) u + b(t),

where
A(t) is an n-by-n square matrix whose elements are functions of t,
b(t) = (b1(t), . . . , bn(t)) is a vector whose n coordinates are
functions of t, and
u = (u1, . . . , un), where ui = ui(t) for i = 1, . . . , n.

If, in addition, we have initial conditions u1(t0) = u1^0, . . . , un(t0) = un^0,
denoting (u1^0, . . . , un^0) by u0, then we have the following IVP:

    du(t)/dt = A(t) u + b(t),
    u(t0) = u0.


Given

    du(t)/dt = A(t) u + b(t),

we say that the system is homogeneous if b(t) = 0, and non-homogeneous
otherwise; and that it has constant coefficients if A(t) = A, i.e. A is a
constant matrix.

In the sequel we focus only on systems with constant coefficients.

Linear First Order Systems of Differential Equations with
Constant Coefficients

Homogeneous case:

    du(t)/dt = A u.

If we think of the scalar case,

    dy/dt = a y  ⇒  y(t) = C e^{at}.

So, could we try something like u(t) = e^{At} C?

Does this make sense?


Introduction: Taylor series.

EXAMPLES

    e^t = 1 + t + t²/2! + t³/3! + · · ·
    sin t = t − t³/3! + t⁵/5! − · · ·

The exponential matrix
Let A be an n-by-n matrix. The exponential matrix of A, e^A, is another
matrix defined as follows:

    e^A = I + A/1! + A²/2! + · · · + Aⁿ/n! + · · ·

Observe that then

    e^{At} = I + At/1! + A²t²/2! + · · · + Aⁿtⁿ/n! + · · ·

Here n! represents the factorial number defined as n! = 1 × 2 × · · · × n.
Differentiating the series term by term gives (d/dt) e^{At} = A e^{At}.
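The series definition can be checked directly. A minimal stdlib-only sketch (my own, not from the notes) that truncates the series for 2-by-2 matrices; on a diagonal matrix it reproduces Diag(e^{λ1}, e^{λ2}), as stated in the next slide.

```python
import math

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(A, terms=30):
    """e^A = I + A/1! + A^2/2! + ...  truncated after `terms` terms."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # I
    power = [[1.0, 0.0], [0.0, 1.0]]    # running A^n
    for n in range(1, terms):
        power = mat_mul(power, A)
        result = [[result[i][j] + power[i][j] / math.factorial(n)
                   for j in range(2)] for i in range(2)]
    return result

# For a diagonal matrix the series reproduces Diag(e^l1, e^l2).
E = expm_series([[1.0, 0.0], [0.0, 2.0]])
print(E)
```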

If

    A = Diag(λ1, λ2, . . . , λn),

then

    e^A = Diag(e^{λ1}, e^{λ2}, . . . , e^{λn}).

Properties:
e^O = I.
If A and B commute, then e^{A+B} = e^A e^B.
The matrix e^A is regular and (e^A)⁻¹ = e^{−A}.
The matrix e^{At} is differentiable (all its entries are differentiable), and

    d e^{At}/dt = A e^{At}.


Theorem:
The general solution of

    du(t)/dt = A u

is

    u(t) = e^{At} C.

What does the solution look like when n = 2?

EXAMPLE


Theorem:
The solution of the Initial Value Problem

    du(t)/dt = A u,
    u(t0) = u0

is

    u(t) = e^{A(t−t0)} u(t0).


PROBLEM: how do we compute e^{At} when A is not diagonal?

If A is not diagonal, but it is diagonalizable, then we use

    A = P D P⁻¹  ⇒  e^{At} = P e^{Dt} P⁻¹.

If A is not diagonalizable, it can also be done (eigenvalues help here
too), but further notions are necessary (Jordan forms); we skip this case.
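A small sketch of the diagonalization route (my own hand-worked example, not from the notes): A = [[4, -2], [1, 1]] has eigenvalues 2 and 3 with eigenvectors (1, 1) and (2, 1), so e^{At} = P e^{Dt} P⁻¹ can be assembled directly.

```python
import math

def mat_mul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_diagonalizable(P, eigvals, P_inv, t):
    """e^{At} = P e^{Dt} P^{-1} when A = P D P^{-1}, D = Diag(eigvals)."""
    eDt = [[math.exp(eigvals[0] * t), 0.0], [0.0, math.exp(eigvals[1] * t)]]
    return mat_mul(mat_mul(P, eDt), P_inv)

# Hand-worked example: A = [[4, -2], [1, 1]] = P D P^{-1} with D = Diag(2, 3),
# P = [[1, 2], [1, 1]]  (columns are eigenvectors),  P^{-1} = [[-1, 2], [1, -1]].
P = [[1.0, 2.0], [1.0, 1.0]]
P_inv = [[-1.0, 2.0], [1.0, -1.0]]

E0 = expm_diagonalizable(P, (2.0, 3.0), P_inv, 0.0)   # should be the identity
print(E0)
```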

Non-homogeneous case:

    du(t)/dt = A u + b(t).

Again, in this case we have

    u_nh(t) = u_h(t) + u_p(t).

We know how to compute u_h(t); in fact we have u_h(t) = e^{At} C.

In order to find u_p(t), we use the variation of parameters method:
we look for a particular solution u_p(t) = e^{At} C(t), where C(t) is an
unknown (vector) function. By imposing that u_p(t) is a solution of the
equation, we obtain

    C(t) = ∫ e^{−As} b(s) ds.


Non-homogeneous case:
Hence, the general solution of the equation

    du(t)/dt = A u + b(t)

is

    u(t) = e^{At} C + e^{At} ∫ e^{−As} b(s) ds.

If we add the initial condition u(t0) = u0, then

    u(t) = e^{A(t−t0)} u0 + e^{At} ∫_{t0}^{t} e^{−As} b(s) ds.
5. Let us consider the matrix

        ( b 2 0 )
    A = ( 2 b 0 )
        ( 1 0 2 )

that depends on the real parameter b.

a) Study the values of b for which the matrix A is diagonalizable over R.
b) For the b = 1 case obtain matrices D (diagonal) and P (regular) so that A = P D P⁻¹.
c) Obtain, as an application of the previous result, the power Aⁿ (in the b = 1 case)
for any positive integer n as a function of n. Simplify it as much as possible.

Solution notes (recovered from the handwritten work): p(λ) = det(λI − A) =
(2 − λ)((b − λ)² − 4), so σ(A) = {2, b − 2, b + 2}. If b ≠ 0 and b ≠ 4 the three
eigenvalues are distinct and A is diagonalizable. For b = 4 (and likewise b = 0) the
eigenvalue 2 is double but rank(2I − A) = 2, so its eigenspace has dimension 1 and A
is not diagonalizable. For b = 1, σ(A) = {2, −1, 3}: D = Diag(2, −1, 3), P collects the
corresponding eigenvectors (each determined up to a nonzero constant), and
Aⁿ = P Dⁿ P⁻¹.
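The Aⁿ = P Dⁿ P⁻¹ trick used in part (c) can be sketched generically. A hand-worked example of my own (not the exercise matrix): A = [[2, 1], [0, 3]] has eigenvalues 2, 3 with eigenvectors (1, 0) and (1, 1), giving the closed form Aⁿ = [[2ⁿ, 3ⁿ − 2ⁿ], [0, 3ⁿ]].

```python
def mat_mul(X, Y):
    """Product of two 2x2 integer matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def power_via_diag(P, eigvals, P_inv, n):
    """A^n = P D^n P^{-1} for a diagonalizable A = P D P^{-1}."""
    Dn = [[eigvals[0] ** n, 0], [0, eigvals[1] ** n]]
    return mat_mul(mat_mul(P, Dn), P_inv)

# Hand-worked example: A = [[2, 1], [0, 3]], eigenvectors (1, 0) and (1, 1),
# so P = [[1, 1], [0, 1]] and P^{-1} = [[1, -1], [0, 1]].
P, P_inv = [[1, 1], [0, 1]], [[1, -1], [0, 1]]

A5 = power_via_diag(P, (2, 3), P_inv, 5)
print(A5)  # closed form: [[2^n, 3^n - 2^n], [0, 3^n]] -> [[32, 211], [0, 243]]
```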
6. Let us consider the matrix

        ( a 1 1 )
    A = ( 1 a 0 )
        ( 0 0 1 )

that depends on the real parameter a.

a) Study the values of a for which the matrix A is diagonalizable over R.
b) For the a = 1 case obtain matrices D (diagonal) and P (regular) so that A = P D P⁻¹.
c) Obtain, as an application of the previous result, the power Aⁿ (in the a = 1 case)
for any positive integer n as a function of n. Simplify it as much as possible.

Solution notes (recovered from the handwritten work): the spectrum is
σ(A) = {1, a − 1, a + 1}. If a ≠ 0 and a ≠ 2, the three eigenvalues are distinct, so A
is diagonalizable; for a = 0 and for a = 2 the double eigenvalue has a one-dimensional
eigenspace, so A is not diagonalizable. For a = 1, σ(A) = {1, 0, 2}: D = Diag(1, 0, 2)
and the columns of P are eigenvectors of 1, 0 and 2, respectively.
7. Let us consider the matrix

        ( 1 0 3 )
    A = ( 0 2 0 )
        ( 0 2 a )

that depends on the real parameter a.

a) Study the values of a for which the matrix A is diagonalizable over R.
b) For the a = 3 case obtain matrices D (diagonal) and P (regular) so that A = P D P⁻¹.
c) Obtain, as an application of the previous result, the power Aⁿ (in the a = 3 case)
for any positive integer n as a function of n. Simplify it as much as possible.

Homework.

Solution notes (recovered from the handwritten work): p(λ) = det(λI − A) factors with
σ(A) = {1, 2, a}. If a ≠ 1 and a ≠ 2, A has three single eigenvalues and hence is
diagonalizable. For a = 1 and for a = 2 the repeated eigenvalue has algebraic
multiplicity 2 but geometric multiplicity 1, so A is not diagonalizable.
8. Let us consider the matrix

        ( 1 0 0 )
    A = ( 0 3 2 )
        ( 2 0 b )

that depends on the real parameter b.

a) Study the values of b for which the matrix A is diagonalizable over R.
b) For the b = 2 case obtain matrices D (diagonal) and P (regular) so that A = P D P⁻¹.
c) Obtain, as an application of the previous result, the power Aⁿ (in the b = 2 case)
for any positive integer n as a function of n. Simplify it as much as possible.

Homework.

Solution notes (recovered from the handwritten work): p(λ) = det(λI − A) =
(λ − 1)(λ − 3)(λ − b), so the spectrum is σ(A) = {1, 3, b}. If b ≠ 1 and b ≠ 3, A is
diagonalizable. For b = 1 and for b = 3 the repeated eigenvalue has geometric
multiplicity 1 (the corresponding rank is 2), so A is not diagonalizable.
LINEAR ALGEBRA (350000) Course 2019/20

Practice exercises: Sheet 5

Linear Differential Equations

1. Solve the following initial values problem:

                                            ( 0 1 1 )
    u' = A u,  u(0) = (1, 1, 1)ᵀ,  where A = ( 1 0 1 ).
                                            ( 1 1 0 )

Solution notes (recovered from the handwritten work): p(λ) = det(λI − A) =
(λ + 1)²(λ − 2), so σ(A) = {−1, −1, 2}. The eigenspace of −1 is two-dimensional, so A
is diagonalizable, and the eigenspace of 2 is spanned by (1, 1, 1)ᵀ. Since the initial
condition u(0) = (1, 1, 1)ᵀ is itself an eigenvector of eigenvalue 2,

    u(t) = e^{At} u(0) = e^{2t} (1, 1, 1)ᵀ.
Lesson 6: Euclidean Spaces

October 27, 2019



Motivation: (1) GPS Technology

We want to find the coordinates (x, y , z) of the point where we are. Let
us address first the problem in the plane (we want to find (x, y ))


We call a satellite. The satellite “knows” where it is; also, by calling the
satellite, we know the distance from the satellite to us. So, we get a
circle, and we know that we are a point on the circle.


If we call another satellite, then we find two circles containing us. So,
two possible solutions. But one of them can be easily discarded.

But in fact there are many more satellites, so we can call them and get
information from them also. In fact, when we move to the 3D case we
need at least three satellites to find our position as the intersection of
three spheres (this provides two solutions; again, one is discarded).


The satellites are very far away. So in fact, in the vicinity of our position
the circles can be approximated by lines. Equivalently, the spheres can be
approximated by planes.


In an "ideal" setup, all these lines (planes) intersect exactly at our
position. But in the real world, any measure carries a certain error. As a
consequence, these lines (planes) do NOT intersect at a point, although
they are certainly "close" to intersecting exactly.


Since we need to intersect planes, we need to solve a linear system of
equations:

    a11 x + a12 y + a13 z = b1
    a21 x + a22 y + a23 z = b2
    ...
    am1 x + am2 y + am3 z = bm

where m is bigger than 3. But, due to errors in the measures, this
system (see the previous slide) is not consistent. Still, it is very "close"
to being consistent... So, does it make sense to look for something
that we can call a "solution" of this inconsistent system?

Motivation: (2) Fourier Series

Given a signal f (t), we want to write it as:

f (t) = a0 + a1 sin(t) + a2 sin(2t) + · · · + b1 cos(t) + b2 cos(2t) + · · · .

In other words,

    f(t) = a0 + Σ_{n=1}^{∞} [ an sin(nt) + bn cos(nt) ].

Rough idea: the signal f (t) is decomposed into simpler signals, with
increasing frequency. Often, the most relevant information on f (t) is
carried by the terms of small frequency.

How do we do this, i.e. how do we compute the an ’s and the bn ’s?
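As a purely numerical preview (my own illustration, not the method developed in this lesson): for a signal that is already a small sum of sines, projecting onto sin(nt) over [0, 2π] with a simple quadrature recovers the coefficients.

```python
import math

def sine_coefficient(f, n, samples=20_000):
    """a_n ~ (1/pi) * integral over [0, 2pi] of f(t) sin(nt) dt (rectangle rule)."""
    h = 2 * math.pi / samples
    return sum(f(k * h) * math.sin(n * k * h) for k in range(samples)) * h / math.pi

f = lambda t: math.sin(t) + 0.5 * math.sin(3 * t)

a1 = sine_coefficient(f, 1)
a3 = sine_coefficient(f, 3)
print(a1, a3)  # close to 1.0 and 0.5
```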

Euclidean Spaces. Inner product
Let V be a vector space over R. An inner product or dot product is an
operation (•) between two elements of a vector space, satisfying the
following three properties:
(1) Commutativity:

    u • v = v • u  for all u, v ∈ V.

(2) Linearity:

    u • (λv + µw) = λ(u • v) + µ(u • w)  for all u, v, w ∈ V and λ, µ ∈ R.

(3) Positive-definiteness:

    For all u ∈ V, u ≠ 0, u • u > 0.

Moreover, u • u = 0 if and only if u = 0.

Sometimes the dot product of u, v is denoted as ⟨u, v⟩. Examples.

Euclidean Spaces. Inner product Space

A vector space V over R furnished with an inner product •, and for that
we mean (V , R, •) is called a Euclidean vector space or an inner
product space.

Euclidean Spaces. Norm

Let V be a vector space over R. A norm is a map

    ‖.‖ : V → R,  v ∈ V ↦ ‖v‖,

satisfying the following three properties:

(1) Positivity: ‖v‖ > 0 for all v ∈ V, v ≠ 0. Moreover, ‖v‖ = 0 if and
only if v = 0.
(2) Homogeneity: ‖αv‖ = |α| ‖v‖, for all v ∈ V, α ∈ R.
(3) Triangle inequality: ‖v + w‖ ≤ ‖v‖ + ‖w‖, for all v, w ∈ V.

Examples.

Euclidean Spaces. Normed space

A vector space V over R furnished with a norm ‖.‖, denoted (V, R, ‖.‖),
is called a normed space.

Euclidean Spaces. Cauchy-Schwarz's Inequality

Let V be a Euclidean vector space over R. Then for all u, v ∈ V, the
following inequality holds:

    |u • v| ≤ √(u • u) √(v • v).

Euclidean Spaces

1. The notion of norm generalizes the idea of "length" for elements of
any Euclidean space.
2. If V is furnished with an inner product, then we can define the norm

    ‖v‖ = √(v • v).

3. Cauchy-Schwarz's inequality is needed to prove the triangle
inequality. More common form of Cauchy-Schwarz's inequality:

    |u • v| ≤ ‖u‖ ‖v‖.

Euclidean Spaces. Distance. Noise

A very important application of norms is that they provide a notion of
distance for spaces where that idea is not intuitive.
In a normed space (V, ‖.‖), given vectors u, v ∈ V we can define

    d(u, v) = ‖u − v‖.

So, for instance, for two signals f(t) and g(t) on [a, b], such a distance
between them is

    d(f(t), g(t)) = √( ∫_a^b (f(t) − g(t))² dt ).

In particular, if f(t) is the signal that was sent, and g(t) the signal
received, then the above quantity measures the noise.
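As a small numerical sketch of this distance (my own example, rectangle-rule integration; the "received" signal and the tolerance are assumptions for illustration):

```python
import math

def l2_distance(f, g, a, b, samples=10_000):
    """d(f, g) = sqrt( integral_a^b (f(t)-g(t))^2 dt ), rectangle rule."""
    h = (b - a) / samples
    total = sum((f(a + k * h) - g(a + k * h)) ** 2 for k in range(samples)) * h
    return math.sqrt(total)

# "sent" signal vs. "received" signal with a small perturbation
sent = math.sin
received = lambda t: math.sin(t) + 0.1 * math.cos(5 * t)

noise = l2_distance(sent, received, 0.0, 2 * math.pi)
print(noise)  # about 0.1 * sqrt(pi) ~ 0.177
```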

Orthogonality in Rⁿ

In R² and R³, the angle θ between two vectors u, v is defined by

    cos(θ) = (u • v) / (‖u‖ ‖v‖).

Furthermore, also in R² and R³, we say that u, v are orthogonal (or
perpendicular) if θ = π/2. In other words,

    u ⊥ v  ⇔  θ = π/2  ⇔  cos θ = 0.

Notice that whenever u, v ≠ 0, this happens if and only if u • v = 0.

Notice that θ is the angle starting at u and ending at v.


Orthogonality in V

Given a Euclidean vector space V (therefore also furnished with the
corresponding norm) and u, v ∈ V, we define the angle between them
as the angle θ which fulfills

    cos(θ) = (u • v) / (‖u‖ ‖v‖).

Similarly, we say that u, v ∈ V are orthogonal if cos(θ) = 0.
Whenever this happens (u, v ∈ V orthogonal),

    u ⊥ v  ⇔  cos θ = 0  ⇔  u • v = 0.

Lesson 6: Euclidean Spaces


Orthogonality in V

We say that a basis B of a Euclidean vector space V is orthogonal if
every two vectors ei, ej ∈ B, with i ≠ j, satisfy

    ei • ej = 0.

Also, we say that B is orthonormal if it is orthogonal and ‖ei‖ = 1 for
all i.

Examples.

Homework (handwritten): in R³ with the standard inner product, work with the plane
x + y + z = 9: find a basis of the associated subspace, compute the projection of a
given point onto it, and compute its distance to the plane.
2. Solve the following initial values problem where the initial value is given at t = 1:

                                                     ( 4 3 )
    u' = A u,  u(1) = (8e⁴ − 5e, 5e)ᵀ,  where A =    ( 0 1 ).

Solution notes (recovered from the handwritten work): A is diagonalizable:
p(λ) = (4 − λ)(1 − λ), σ(A) = {4, 1}, with eigenvectors (1, 0)ᵀ and (1, −1)ᵀ, so
A = P D P⁻¹ with D = Diag(4, 1). Since the initial value is given at t = 1,

    u(t) = e^{A(t−1)} u(1) = P e^{D(t−1)} P⁻¹ u(1),

which works out to u(t) = (8e^{4t} − 5eᵗ, 5eᵗ)ᵀ.
3. Solve the following initial values problem:

    y''(t) − y'(t) − 2y(t) = eᵗ,  y(0) = 1,  y'(0) = 3.

Solution notes (recovered from the handwritten work; marked "VERY HARD"): set
u1 = y, u2 = y', so that u' = A u + B(t) with

    A = ( 0 1 ),   B(t) = (0, eᵗ)ᵀ,   u(0) = (1, 3)ᵀ.
        ( 2 1 )

Here p(λ) = λ² − λ − 2, σ(A) = {2, −1}, and A is diagonalizable, so

    u(t) = e^{At} u(0) + e^{At} ∫₀ᵗ e^{−As} B(s) ds

(Barrow's rule is used for the integral). The first component gives

    y(t) = (5/3) e^{2t} − (1/6) e^{−t} − (1/2) eᵗ.
4. Solve the following initial values problem:

                                                            ( 2 1 )
    u'(t) = A u(t) + B(t),  u(0) = (1, 1)ᵀ,  where A =      ( 0 2 ),  B(t) = (3e^{2t}, 4e^{2t})ᵀ.

Solution notes (recovered from the handwritten work): p(λ) = (λ − 2)², so σ(A) = {2}
with ma(2) = 2 but a one-dimensional eigenspace: A is NOT diagonalizable; it is
already a Jordan block. For Jordan blocks one may use (with no proof)

    e^{At} = e^{2t} ( 1  t ),
                    ( 0  1 )

and then the variation of parameters formula
u(t) = e^{At} u(0) + e^{At} ∫₀ᵗ e^{−As} B(s) ds yields

    u(t) = ( (1 + 4t + 2t²) e^{2t}, (1 + 4t) e^{2t} )ᵀ.
5. Solve the following initial values problem:

                                                 ( 1 1 )
    u'(t) = A u(t),  u(0) = (1, 1)ᵀ,  where A =  ( −1 3 ).

Solution notes (recovered from the handwritten work): p(λ) = (λ − 2)², so σ(A) = {2}
with a one-dimensional eigenspace: A is not diagonalizable and has a Jordan form J.
Writing A = P J P⁻¹ and using e^{Jt} = e^{2t} (1 t; 0 1), the solution is
u(t) = P e^{Jt} P⁻¹ u(0); here u(0) = (1, 1)ᵀ is an eigenvector of eigenvalue 2, so
u(t) = e^{2t} (1, 1)ᵀ.
Orthogonality in V

Why are orthonormal bases useful?

Proposition
Let B = {u1, u2, . . . , un} be an orthonormal basis. Then the coordinates
of any vector x with respect to B, namely

    (x1, x2, . . . , xn)B,

can be obtained by simply multiplying x by u1, u2, . . . , un: xi = x • ui.

Proof.

Example.
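A quick numeric check of the proposition (my own example: a rotated orthonormal basis of R² with the standard inner product):

```python
import math

def dot(u, v):
    """Standard inner product on R^n."""
    return sum(ui * vi for ui, vi in zip(u, v))

# Orthonormal basis of R^2: rotation of the canonical basis by pi/6.
c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
u1, u2 = (c, s), (-s, c)

x = (3.0, 4.0)
coords = (dot(x, u1), dot(x, u2))   # coordinates of x in the basis {u1, u2}

# Reconstruction check: x = coords[0]*u1 + coords[1]*u2
x_rec = tuple(coords[0] * a + coords[1] * b for a, b in zip(u1, u2))
print(coords, x_rec)
```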

Euclidean Spaces. Gram-Schmidt's process

Gram-Schmidt's orthogonalization process: given a non-orthogonal
basis B = {u1, u2, . . . , un}, we want to construct, from B, another basis
which is orthonormal.

Idea: at each step, construct a new vector ej (starting from j = 1 and
ending at j = n) which is a linear combination of uj and the j − 1
previous ei's, and which is orthogonal to e1, . . . , e_{j−1}.

Algorithm (and example).
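The idea translates almost line by line into code. A minimal sketch (my own, standard inner product, normalizing each vector as it is built):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthonormal list."""
    ortho = []
    for v in vectors:
        # subtract the projections onto the already-built vectors
        w = list(v)
        for e in ortho:
            c = dot(v, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        norm = math.sqrt(dot(w, w))
        ortho.append([wi / norm for wi in w])
    return ortho

B = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(B)
```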

Orthogonality
The notion of orthogonality can be applied not only to vectors, but also
to vector subspaces. The reason is the following result.

Lemma
Let v1, v2, . . . , vn be vectors of V, and let

    W = {v ∈ V such that v ⊥ vi, for all i = 1, . . . , n}.

Then W is a vector subspace of V.

(⋆) We will call W the orthogonal subspace to v1, v2, . . . , vn.

(⋆) One may see that in fact every vector in W is orthogonal to every
vector in Span(v1, v2, . . . , vn). So, we might write

    W ⊥ Span(v1, v2, . . . , vn).

Euclidean Spaces. Orthogonal Spaces

Two vector subspaces U and W are orthogonal if every vector in U is
orthogonal to every vector in W, and conversely.

Euclidean Spaces. Orthogonal Complement

Given a vector subspace U, the space of all the vectors which are
orthogonal to U is called the orthogonal complement of U. It is
denoted by U^⊥.

Properties:
(1) If W = U^⊥, then

    W^⊥ = U  ⇒  U^⊥⊥ = U,

and moreover dim U + dim U^⊥ = n.
(2) Two subspaces can be orthogonal although one is not the
complement of the other (e.g. two lines in space).

Euclidean Spaces. Projection of a vector onto a subspace

Given a vector v and a vector subspace V, the projection of v onto V can
be computed with the following algorithm:

1. We compute an orthonormal basis of V: {ω1, ω2, · · · , ωm}.
2. The projection is

    Proj_V(v) = (v • ω1) ω1 + (v • ω2) ω2 + · · · + (v • ωm) ωm.

Observe, moreover, that the distance between v and V is equal to

    d(v, V) = ‖Proj_{V⊥}(v)‖ = ‖v − Proj_V(v)‖.
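A minimal sketch of the algorithm (my own example: projecting onto the xy-plane of R³, whose canonical orthonormal basis is (1, 0, 0), (0, 1, 0)):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(v, onb):
    """Projection of v onto the subspace spanned by an orthonormal basis `onb`."""
    proj = [0.0] * len(v)
    for w in onb:
        c = dot(v, w)
        proj = [pi + c * wi for pi, wi in zip(proj, w)]
    return proj

# Subspace of R^3: the xy-plane, with orthonormal basis (1,0,0), (0,1,0).
onb = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
v = [3.0, 4.0, 12.0]

p = project(v, onb)
diff = [a - b for a, b in zip(v, p)]
dist = math.sqrt(dot(diff, diff))   # d(v, V) = ||v - Proj_V(v)||
print(p, dist)  # [3.0, 4.0, 0.0] and distance 12.0
```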

Least (minimum) squares method (handwritten notes): given vectors
u1, . . . , uk ∈ Rⁿ and b ∈ Rⁿ, form the matrix A = (u1 | · · · | uk) ∈ M_{n×k}(R)
with rank A = k. The normal equations are

    AᵀA x̄ = Aᵀ b,   so   x̄ = (AᵀA)⁻¹ Aᵀ b.

The projection of b onto V = Span(u1, . . . , uk) is

    p = A x̄ = A (AᵀA)⁻¹ Aᵀ b.
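A small sketch of the normal equations (my own example: least-squares fit of a line x1 + x2·t through three points, an inconsistent overdetermined system):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve_2x2(M, y):
    """Cramer's rule for a 2x2 system M x = y."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(y[0] * M[1][1] - M[0][1] * y[1]) / det,
            (M[0][0] * y[1] - y[0] * M[1][0]) / det]

# Overdetermined system A x = b: fit x1 + x2*t to the points (0,0), (1,1), (2,3).
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]   # columns: ones, t
b = [0.0, 1.0, 3.0]

At = transpose(A)
AtA = mat_mul(At, A)
Atb = [sum(a * bi for a, bi in zip(row, b)) for row in At]
x = solve_2x2(AtA, Atb)   # normal equations: (A^T A) x = A^T b
print(x)
```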
6. Solve the following initial values problem:

                                                    ( 2 1 )
    u'(t) = A u(t),  u(0) = (500, 100)ᵀ,  where A = ( −1 4 ).

Solution notes (recovered from the handwritten work): p(λ) = λ² − 6λ + 9 = (λ − 3)²,
so σ(A) = {3} with ma(3) = 2 and a one-dimensional eigenspace: A is not
diagonalizable. Using the Jordan form, A = P J P⁻¹ with e^{Jt} = e^{3t} (1 t; 0 1), and

    u(t) = P e^{Jt} P⁻¹ u(0) = e^{At} u(0).
7. Solve the following initial values problem:

                                                      ( 1 0 0 )
    u'(t) = A u(t),  u(0) = (−1, 2, 30)ᵀ,  where A =  ( 4 1 0 ).
                                                      ( 3 6 2 )
Solution notes (recovered from the handwritten work): σ(A) = {1, 2} with ma(1) = 2
and ma(2) = 1; the eigenspace of 1 is one-dimensional, so A is NOT diagonalizable and
a Jordan form is needed: A = P J P⁻¹ with

    J = ( 1 1 0 ),   e^{Jt} = ( eᵗ  t eᵗ  0      ),
        ( 0 1 0 )             ( 0   eᵗ   0      )
        ( 0 0 2 )             ( 0   0    e^{2t} )

so u(t) = P e^{Jt} P⁻¹ u(0).
8. Solve the following initial values problem:

                                                    ( 1 0 0 )
    u'(t) = A u(t),  u(0) = (1, 0, 0)ᵀ,  where A =  ( 1 2 0 ).
                                                    ( 1 0 1 )

Solution notes (recovered from the handwritten work): A is diagonalizable here;
writing A = P D P⁻¹, the solution of the IVP is

    u(t) = e^{At} u(0) = P e^{Dt} P⁻¹ u(0).
9. Solve the following initial values problem:

                                                 ( 0 1 )
    u'(t) = A u(t),  u(0) = (2, 2)ᵀ,  where A =  ( −1 0 ).

Solution notes (recovered from the handwritten work): p(λ) = λ² + 1, so the
eigenvalues are ±j. Diagonalizing over C and converting back to real form,

    e^{At} = (  cos t   sin t ),
             ( −sin t   cos t )

so

    u(t) = e^{At} u(0) = ( 2 cos t + 2 sin t ).
                         ( 2 cos t − 2 sin t )
10. Solve the following initial values problem:

                                                 ( 0 2 )
    u'(t) = A u(t),  u(0) = (1, 2)ᵀ,  where A =  ( −1 2 ).

Solution notes (recovered from the handwritten work): p(λ) = λ² − 2λ + 2, with
complex eigenvalues 1 ± j. The solution is

    u(t) = eᵗ ( cos t (1, 2)ᵀ + sin t (3, 1)ᵀ ),

that is, u1(t) = eᵗ(cos t + 3 sin t) and u2(t) = eᵗ(2 cos t + sin t).
11. Solve the following initial values problem:

                                                            ( 2 5 )
    u'(t) = A u(t) + B(t),  u(0) = (1, 1)ᵀ,  where A =      ( −1 −2 ),  B(t) = (2, 3)ᵀ.

Solution notes (recovered from the handwritten work): p(λ) = λ² + 1, eigenvalues ±j.
The solution is obtained from

    u(t) = e^{At} u(0) + e^{At} ∫₀ᵗ e^{−As} B(s) ds,

computing e^{At} by diagonalizing over C.
12. Solve the following initial values problem:

                                                            ( 0 1 )
    u'(t) = A u(t) + B(t),  u(0) = (7, 9)ᵀ,  where A =      ( 1 0 ),  B(t) = (0, t)ᵀ.
12. Solve the following initial values problem:

    y'''(t) − 2y''(t) − y'(t) + 2y(t) = 0,
    y(0) = 3,  y'(0) = 2,  y''(0) = 6.

Solution notes (recovered from the handwritten work): setting u1 = y, u2 = y',
u3 = y'' converts the equation into the first-order system u' = A u with the companion
matrix of r³ − 2r² − r + 2 = (r − 1)(r + 1)(r − 2). The roots 1, −1, 2 are simple, so

    y(t) = C1 eᵗ + C2 e^{−t} + C3 e^{2t},

and imposing the initial conditions gives C1 = C2 = C3 = 1, i.e.
y(t) = eᵗ + e^{−t} + e^{2t}.

Homework (handwritten): solve y'' − 2y' + y = t eᵗ by the same method.
LINEAR ALGEBRA
November 11th, 2019

1. Given the matrix

        ( 1  2   a    )
    A = ( 2  2   2a−1 ),
        ( 3  10  3a+1 )

where a ∈ R.
(a) (1.5 points) Obtain the LU decomposition of A.
(b) (1.5 points) By using the previous LU decomposition, solve the linear
system AX = b, where bᵀ = (4, 6, 16).
Solution:
(a) We are looking for matrices L, lower triangular with main diagonal
entries equal to 1, and U, upper triangular, so that A = LU. We get
such matrices after applying the Gauss-Jordan method, obtaining:

    L = ( 1 0 0 ),   U = ( 1 2 a ).
        ( 2 1 0 )        ( 0 2 1 )
        ( 3 2 1 )        ( 0 0 3 )

(b) We set Y = UX; then we need to solve first LY = (4, 6, 16)ᵀ,
obtaining Y = (4, 2, 0)ᵀ. Finally, we solve the linear system UX = Y,
which gives X = (2, 1, 0)ᵀ.
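The two triangular solves in (b) are mechanical. A minimal sketch (with a small toy factorization of my own, not the exam matrix):

```python
def lu_solve(L, U, b):
    """Solve A x = b given A = L U: forward then back substitution."""
    n = len(b)
    # L y = b  (L has unit diagonal)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    # U x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Toy factorization: A = L U = [[2, 1], [6, 7]].
L = [[1.0, 0.0], [3.0, 1.0]]
U = [[2.0, 1.0], [0.0, 4.0]]
x = lu_solve(L, U, [5.0, 19.0])
print(x)  # solves [[2, 1], [6, 7]] x = (5, 19), giving x = (2, 1)
```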
2. In the vector space of polynomials of degree at most 3 with real coefficients
we consider the bases

    B1 = {t, 1, t², t³}   and   B2 = {t³ + t², t², t, 1}.

(a) (1.5 points) Obtain the matrix of change of bases from B1 to B2, and
from B2 to B1, identifying clearly which one is which.
(b) (1.5 points) Given the polynomial 1 − t + t³, use the proper matrix of
change of basis to calculate its coordinates with respect to B2. Check
the result is correct.
Solution:
(a) There are many ways to solve this problem; we are going to use here
one of them. In order to obtain the change of basis from B1 to B2 we
need to express each element of B1 with respect to the basis B2:

    t  = 0(t³ + t²) + 0(t²) + 1(t) + 0(1)  →  t = (0, 0, 1, 0)_B2,
    1  = 0(t³ + t²) + 0(t²) + 0(t) + 1(1)  →  1 = (0, 0, 0, 1)_B2,
    t² = 0(t³ + t²) + 1(t²) + 0(t) + 0(1)  →  t² = (0, 1, 0, 0)_B2,
    t³ = 1(t³ + t²) − 1(t²) + 0(t) + 0(1)  →  t³ = (1, −1, 0, 0)_B2.

Therefore, the change of basis from B1 to B2 is

    M_{B1,B2} = ( 0 0 0 1  ).
                ( 0 0 1 −1 )
                ( 1 0 0 0  )
                ( 0 1 0 0  )

For the other direction we have

    t³ + t² = 0(t) + 0(1) + 1(t²) + 1(t³)  →  t³ + t² = (0, 0, 1, 1)_B1,
    t²      = 0(t) + 0(1) + 1(t²) + 0(t³)  →  t² = (0, 0, 1, 0)_B1,
    t       = 1(t) + 0(1) + 0(t²) + 0(t³)  →  t = (1, 0, 0, 0)_B1,
    1       = 0(t) + 1(1) + 0(t²) + 0(t³)  →  1 = (0, 1, 0, 0)_B1.

Therefore, the change of basis from B2 to B1 is

    M_{B2,B1} = ( 0 0 1 0 ).
                ( 0 0 0 1 )
                ( 1 1 0 0 )
                ( 1 0 0 0 )

Observe that M_{B2,B1} = (M_{B1,B2})⁻¹.
(b) Because we need to calculate the coordinates of 1 t+t3 with respect
to the bases B2 we need to express first the polynomial with respect
the bases B1 . In fact, we have

1I j t + t3t= ( 1,I 1,th


0, 1)fB ,
1

then the coordinates of the polynomial with respect to the basis B2


are
1 t + t3 = MB1 ,B2 ( 1, 1, 0, 1)T = (1, 1, 1, 1)B2 .

T
1 ttt
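The coordinate change in (b) is a single matrix-vector product; a minimal sketch (the signs of M_{B1,B2} are reconstructed from the scan, so treat them as an assumption):

```python
def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a coordinate vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Change of basis from B1 = (t, 1, t^2, t^3) to B2 = (t^3+t^2, t^2, t, 1);
# the columns are the B2-coordinates of the B1 elements.
M_B1_B2 = [[0, 0, 0, 1],
           [0, 0, 1, -1],
           [1, 0, 0, 0],
           [0, 1, 0, 0]]

coords_B1 = [-1, 1, 0, 1]        # 1 - t + t^3 in the basis B1
coords_B2 = mat_vec(M_B1_B2, coords_B1)
print(coords_B2)  # [1, -1, -1, 1]
```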
3. Let E be the space of square matrices of dimension 2 with real entries.
We consider the linear mapping f : E -> E defined by
   f(A) = M A,   where   M = ( 1  2
                               2  4 ).
(a) (2 points) Obtain the coordinate matrix of f with respect to the basis of E
   B = { ( 1 0 ; 0 0 ), ( 1 1 ; 0 0 ), ( 1 1 ; 1 0 ), ( 1 1 ; 1 1 ) }.
(b) (1 point) By using the matrix obtained in (a), obtain f(C), where
   C = ( 4 3 ; 2 1 ).
(c) (1 point) By using the matrix obtained in (a), obtain a basis of the
subspace ker(f).
Solution:
(a) We express the image of each element of the basis in the same basis:
   f( 1 0 ; 0 0 ) = M ( 1 0 ; 0 0 ) = ( 1 0 ; 2 0 ),
and we express this matrix with respect to the basis B:
   ( 1 0 ; 2 0 ) = 1( 1 0 ; 0 0 ) - 2( 1 1 ; 0 0 ) + 2( 1 1 ; 1 0 ) + 0( 1 1 ; 1 1 ) = (1, -2, 2, 0)_B.
Next,
   f( 1 1 ; 0 0 ) = M ( 1 1 ; 0 0 ) = ( 1 1 ; 2 2 ),
and
   ( 1 1 ; 2 2 ) = 0( 1 0 ; 0 0 ) - 1( 1 1 ; 0 0 ) + 0( 1 1 ; 1 0 ) + 2( 1 1 ; 1 1 ) = (0, -1, 0, 2)_B.
Then
   f( 1 1 ; 1 0 ) = M ( 1 1 ; 1 0 ) = ( 3 1 ; 6 2 ),
and
   ( 3 1 ; 6 2 ) = 2( 1 0 ; 0 0 ) - 5( 1 1 ; 0 0 ) + 4( 1 1 ; 1 0 ) + 2( 1 1 ; 1 1 ) = (2, -5, 4, 2)_B.
For the last element we have
   f( 1 1 ; 1 1 ) = M ( 1 1 ; 1 1 ) = ( 3 3 ; 6 6 ),
and
   ( 3 3 ; 6 6 ) = 0( 1 0 ; 0 0 ) - 3( 1 1 ; 0 0 ) + 0( 1 1 ; 1 0 ) + 6( 1 1 ; 1 1 ) = (0, -3, 0, 6)_B.
Hence, with all this information, the coordinate matrix we are looking for is
   M_{B,B}(f) = (  1   0   2   0
                  -2  -1  -5  -3
                   2   0   4   0
                   0   2   2   6 ).
(b) We express C with respect to the basis B:
   C = 1( 1 0 ; 0 0 ) + 1( 1 1 ; 0 0 ) + 1( 1 1 ; 1 0 ) + 1( 1 1 ; 1 1 ) = (1, 1, 1, 1)_B,
so we get
   M_{B,B}(f)(1, 1, 1, 1)^T = (3, -11, 6, 10)^T,
and therefore
   f(C) = (3, -11, 6, 10)_B = ( 8  5 ; 16 10 ).
(c) The kernel of the mapping is formed by the matrices A whose image is the
null matrix:
   ker(f) = { A : f(A) = ( 0 0 ; 0 0 ) }.
In other words, using the previous matrix (remember that it is with respect to
the basis B), we have
   ker(f) = { (x, y, z, t)_B : M_{B,B}(f)(x, y, z, t)^T = (0, 0, 0, 0)^T }.
From here we obtain the equations
   x + 2z = 0,   -2x - y - 5z - 3t = 0,   2x + 4z = 0,   2y + 2z + 6t = 0,
which reduce to
   x + 2z = 0,   y + z + 3t = 0.
Therefore, a basis for the kernel (with respect to B) is
   ker(f) = Span( (-2, -1, 1, 0)_B , (0, -3, 0, 1)_B ),
so, writing the elements with respect to the original vector space, the basis is
   ker(f) = Span( ( -2 0 ; 1 0 ), ( -2 -2 ; 1 1 ) ).
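Both routes to f(C) — the direct product M C and the coordinate matrix — can be cross-checked in a few lines (a sketch; the coordinate vector (3, -11, 6, 10) is the reconstructed one from the scan):

```python
def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lin_comb(coeffs, basis):
    """Linear combination of 2x2 matrices."""
    out = [[0, 0], [0, 0]]
    for c, Bm in zip(coeffs, basis):
        for i in range(2):
            for j in range(2):
                out[i][j] += c * Bm[i][j]
    return out

M = [[1, 2], [2, 4]]
C = [[4, 3], [2, 1]]
basis = [[[1, 0], [0, 0]], [[1, 1], [0, 0]],
         [[1, 1], [1, 0]], [[1, 1], [1, 1]]]

direct = matmul(M, C)                            # f(C) = M C
via_coords = lin_comb([3, -11, 6, 10], basis)    # f(C) from B-coordinates
print(direct)      # [[8, 5], [16, 10]]
print(via_coords)  # [[8, 5], [16, 10]]
```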
TIME: 1 hour and 20 minutes.
(Handwritten worked example: values of a parameter for which a matrix A is diagonalizable, with eigenvalue and eigenspace computations; illegible in the scan.)
Sheet 7: Euclidean Spaces

(Handwritten notes deriving the Gram-Schmidt orthogonalization process; illegible in the scan.)
Department of Physics and Mathematics LINEAR ALGEBRA (350000) Course 2019/20

Practice exercises. Sheet 6

Euclidean Spaces

1. Find, using Gram-Schmidt orthogonalization, an orthonormal basis of the subspace of R4


spanned by a1 = (2, 0, 2, 0), a2 = (1, 1, 1, 0), a3 = (2, 2, 0, 0).

Solution: Let a1 = (2, 0, 2, 0), a2 = (1, 1, 1, 0), a3 = (2, 2, 0, 0).
We take u1 = a1 = (2, 0, 2, 0), so Span(a1) = Span(u1).
Next,
   u2 = a2 - (a2 . u1 / u1 . u1) u1 = (1, 1, 1, 0) - (4/8)(2, 0, 2, 0) = (0, 1, 0, 0),
and Span(a1, a2) = Span(u1, u2) with u1 . u2 = 0.
Finally,
   u3 = a3 - (a3 . u1 / u1 . u1) u1 - (a3 . u2 / u2 . u2) u2
      = (2, 2, 0, 0) - (4/8)(2, 0, 2, 0) - 2(0, 1, 0, 0) = (1, 0, -1, 0).
Normalizing, an orthonormal basis of the subspace is
   (1/sqrt(2))(1, 0, 1, 0),   (0, 1, 0, 0),   (1/sqrt(2))(1, 0, -1, 0).
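The computation above can be scripted with exact arithmetic; a minimal Gram-Schmidt sketch (not part of the original sheet):

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthogonalize a list of vectors (no normalization)."""
    basis = []
    for v in vectors:
        w = list(map(F, v))
        for u in basis:
            c = dot(w, u) / dot(u, u)   # projection coefficient
            w = [wi - c * ui for wi, ui in zip(w, u)]
        basis.append(w)
    return basis

u1, u2, u3 = gram_schmidt([(2, 0, 2, 0), (1, 1, 1, 0), (2, 2, 0, 0)])
print([int(v) for v in u2])  # [0, 1, 0, 0]
print([int(v) for v in u3])  # [1, 0, -1, 0]
```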
2. Consider the vector subspace U = Span{(1, 0, 1)} of R^3, with the usual
dot product. Find the implicit equations of the orthogonal complement U^⊥.
Also, find the orthogonal projection of (1, 1, 2) onto U and U^⊥, respectively.

Solution: U^⊥ = { (x, y, z) ∈ R^3 : (x, y, z) . (1, 0, 1) = 0 }, so the
implicit equation of U^⊥ is x + z = 0. Solving it (y and z are the free
variables) gives U^⊥ = Span{ (-1, 0, 1), (0, 1, 0) }.
The projection onto U is
   proj_U(1, 1, 2) = ((1, 1, 2) . (1, 0, 1) / (1, 0, 1) . (1, 0, 1)) (1, 0, 1)
                   = (3/2)(1, 0, 1) = (3/2, 0, 3/2),
and, since proj_U(1, 1, 2) + proj_{U^⊥}(1, 1, 2) = (1, 1, 2),
   proj_{U^⊥}(1, 1, 2) = (1, 1, 2) - (3/2, 0, 3/2) = (-1/2, 1, 1/2).
3. Consider the vector subspace U of R^3, furnished with the usual dot
product, defined by U ≡ x + y = 0. Find the parametric and implicit equations
of the orthogonal complement U^⊥, and the orthogonal projection of
b = (1, 2, 3) onto U.

Solution: U ≡ x + y = 0, so U = Span{ (1, -1, 0), (0, 0, 1) } and
   U^⊥ = { (x, y, z) ∈ R^3 : (x, y, z) . (1, -1, 0) = 0, (x, y, z) . (0, 0, 1) = 0 }
       = Span{ (1, 1, 0) }.
Parametric equations of U^⊥: x = λ, y = λ, z = 0; implicit equations:
x - y = 0, z = 0. Then
   proj_{U^⊥}(1, 2, 3) = ((1, 2, 3) . (1, 1, 0) / 2)(1, 1, 0) = (3/2, 3/2, 0),
and
   proj_U(1, 2, 3) = (1, 2, 3) - proj_{U^⊥}(1, 2, 3) = (-1/2, 1/2, 3).
4. In R^4, with the usual dot product, consider the vector subspace
   U = L((1, 1, 1, 1), (1, 1, 1, 1), (1, 1, 1, 1)) .
(a) Find a basis and the implicit equations of the orthogonal complement U^⊥ of U.
(b) Construct an orthonormal basis for U.
(c) Express v = (1, 2, 3, 4) as a sum of two vectors, one of them belonging
to U and the other to U^⊥.
(d) Find the distance from v to U and U^⊥.

5. Find an orthogonal basis of the subspace U ⊆ R^4 of equation
x + y - z + w = 0. Determine the projection of v = (1, 0, 0, 1) onto U and
the distance from v to U.

6. As in problem 3, determine the orthogonal projection of b = (1, 2, 3)
onto the subspace spanned by the columns of the matrix
   A = (  1  0
         -1  0
          0  1 ).
(a) Solve the normal equations
   A^T A x = A^T b,
and after finding the solution x of this system, determine the projection as p = Ax.

(Handwritten solution to problem 4: a basis of U^⊥ from the implicit equations u_i . v = 0, an orthonormal basis of U by Gram-Schmidt, the decomposition v = proj_U(v) + proj_{U^⊥}(v), and the distances d(v, U) = ||proj_{U^⊥}(v)|| and d(v, U^⊥) = ||proj_U(v)||; largely illegible in the scan.)

Homework: solve the same problem without the vector (1, 1, 1, 1) in U.


(Handwritten variant of the exercise with a non-standard inner product; illegible in the scan.)
6. As in problem 3, determine the orthogonal projection of b = (1, 2, 3) onto the subspace
spanned by the columns of the matrix
   A = (  1  0
         -1  0
          0  1 ).

(a) Solve the normal equations


AT Ax = AT b,
and after finding the solution x of this system, determine the projection as p = Ax.
(b) Find the projection matrix
P = A(AT A) 1 AT ,
and check that p = P b is the projection computed in (a).

Solution:
(a) A^T A = ( 2  0 ; 0  1 ) and A^T b = ( -1 ; 3 ), so the normal equations
A^T A x = A^T b give x = ( -1/2 ; 3 ), and the projection is
   p = A x = ( -1/2, 1/2, 3 )^T,
which agrees with the projection onto U obtained in problem 3.
(b) The projection matrix is
   P = A (A^T A)^{-1} A^T = (  1/2  -1/2  0
                              -1/2   1/2  0
                                0     0   1 ),
and P b = ( -1/2, 1/2, 3 )^T = p, the projection computed in (a).
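The normal-equation route can be verified mechanically (a sketch; the -1 entry of A is a reconstruction based on "as in problem 3"):

```python
from fractions import Fraction as F

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Columns of A span the subspace U of problem 3 (x + y = 0).
A = [[F(1), F(0)], [F(-1), F(0)], [F(0), F(1)]]
b = [[F(1)], [F(2)], [F(3)]]

At = transpose(A)
AtA = matmul(At, A)   # [[2, 0], [0, 1]]
Atb = matmul(At, b)   # [[-1], [3]]
# A^T A is diagonal here, so the normal equations solve componentwise:
x = [[Atb[0][0] / AtA[0][0]], [Atb[1][0] / AtA[1][1]]]
p = matmul(A, x)
print(p)  # the projection (-1/2, 1/2, 3)
```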
7. Find the regression line b = C + Dt for the data t1 = 1, t2 = 2, t3 = 3,
t4 = 4, b1 = 3, b2 = 4, b3 = 5, b4 = 7 (i.e. the line of equation b = C + Dt
that best approximates, in the least squares sense, the points (1, 3), (2, 4),
(3, 5), (4, 7)). Denoting by p = Ax the projection of b onto the column space
of A (which provides the values at the points ti of the computed regression
line), explain why the following relationships, well-known in statistics, hold:
   e1 + e2 + e3 + e4 = 0,   t1 e1 + t2 e2 + t3 e3 + t4 e4 = 0.

Solution: With A = ( 1 1 ; 1 2 ; 1 3 ; 1 4 ), the normal equations
A^T A x = A^T b read
   ( 4  10 ; 10  30 ) ( C ; D ) = ( 19 ; 54 ),
whose solution is C = 3/2, D = 13/10, so the regression line is
b = 3/2 + (13/10) t. The residue vector e = b - p is orthogonal to the column
space of A; taking the dot product of e with the first column (1, 1, 1, 1)
gives e1 + e2 + e3 + e4 = 0, and with the second column (t1, t2, t3, t4)
gives t1 e1 + t2 e2 + t3 e3 + t4 e4 = 0.
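A short least-squares sketch for this data set (Cramer's rule on the 2x2 normal equations; not part of the original sheet):

```python
from fractions import Fraction as F

t = [1, 2, 3, 4]
b = [3, 4, 5, 7]
n = len(t)

# Normal equations for b = C + D t:
# [ n      sum(t)   ] [C]   [ sum(b)   ]
# [ sum(t) sum(t^2) ] [D] = [ sum(t*b) ]
St, St2 = sum(t), sum(x * x for x in t)
Sb, Stb = sum(b), sum(x * y for x, y in zip(t, b))
det = n * St2 - St * St
C = F(Sb * St2 - St * Stb, det)
D = F(n * Stb - St * Sb, det)

e = [bi - (C + D * ti) for ti, bi in zip(t, b)]   # residues
print(C, D)                                  # 3/2 13/10
print(sum(e))                                # 0
print(sum(ti * ei for ti, ei in zip(t, e)))  # 0
```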
8. Consider the problem of calculating a regression parabola b = Dt + Et^2
(i.e. we impose that the constant term is C = 0) for the data t1 = 1, t2 = 2,
t3 = 3, t4 = 4. Let x be the solution vector of the normal equations
A^T A x = A^T b and let e = b - Ax be the residues vector. Explain why
(regardless of the vector b) it holds that
   e1 + 2 e2 + 3 e3 + 4 e4 = 0,
although in this case we cannot ensure that the sum of all the residues
   e1 + e2 + e3 + e4
is zero. Deduce another relationship that is also fulfilled by e1, e2, e3, e4.

Solution: Here the columns of A are (t1, t2, t3, t4) = (1, 2, 3, 4) and
(t1^2, t2^2, t3^2, t4^2) = (1, 4, 9, 16). The residue vector e = b - Ax is
orthogonal to the column space of A, so
   e1 + 2 e2 + 3 e3 + 4 e4 = 0   and   e1 + 4 e2 + 9 e3 + 16 e4 = 0,
the latter being the other fulfilled relationship. Since the vector
(1, 1, 1, 1) is not among the columns of A, nothing forces e1 + e2 + e3 + e4
to vanish.
LINEAR ALGEBRA
Third Evaluation Test
January 16th, 2019

1. In R^4 with the standard scalar product, we consider the subspace
   U = <(-1, 1, 0, -1), (1, 1, 2, -1), (-3, 1, 0, 1)> .
We ask:
a) Obtain a basis of the orthogonal complement U^⊥ of U. (1 pt.)
b) Compute the orthogonal projection of the vector b = (2, 5, -1, 1)
onto the subspace U. (1.5 pts.)
c) Compute the distance between b and U. (1 pt.)
Solution: a) Taking into account the definition of U^⊥, we solve the
homogeneous system u_i . v = 0 for the three generators u_i of U and obtain
   U^⊥ = Span{ (1, 2, -1, 1) }.
b) Taking into account that proj_U(b) + proj_{U^⊥}(b) = b, and setting
A = (1, 2, -1, 1)^T, we have
   proj_{U^⊥}(b) = A (A^T A)^{-1} A^T b = (1, 2, -1, 1)^T (1/7) 14 = (2, 4, -2, 2)^T,
so
   proj_U(b) = b - proj_{U^⊥}(b) = (0, 1, 1, -1)^T.
c) And the distance is
   d(b, U) = ||proj_{U^⊥}(b)|| = sqrt(2^2 + 4^2 + (-2)^2 + 2^2) = sqrt(28) = 2 sqrt(7).
2. For the following data:
   x1 = -1, x2 = 0, x3 = 2, x4 = 3,
   y1 = 0,  y2 = 1, y3 = 3, y4 = 2.
We want:
a) To compute the regression line y = a + bx (i.e. the line y = a + bx that
best fits the points (xi, yi) in the least squares sense). (2 pts.)
b) Evaluate the line of a) at the values xi. (0.5 pts.)
c) If e = (e1, e2, e3, e4) is the residue vector, explain why, in this case,
the relation e2 + 3 e3 + 4 e4 = 0 holds. (1 pt.)
Solution: a) With these values we have the (inconsistent) system
   ( 0 ; 1 ; 3 ; 2 ) = ( 1 -1 ; 1 0 ; 1 2 ; 1 3 ) ( a ; b ).
To get the regression line y = a + bx, we solve the normal equations
   ( 6 ; 12 ) = ( 4  4 ; 4  14 ) ( a ; b ).
We get as solution
   a = 9/10,  b = 6/10   =>   10 y = 9 + 6 x.
b) With this solution we have
   y(x1) = 3/10,  y(x2) = 9/10,  y(x3) = 21/10,  y(x4) = 27/10.
c) In this case we have
   e1 = y1 - y(x1) = -3/10,  e2 = y2 - y(x2) = 1/10,  e3 = 9/10,  e4 = -7/10.
Since e is orthogonal to the columns of the matrix A, i.e.
   e1 + e2 + e3 + e4 = 0   and   -e1 + 2 e3 + 3 e4 = 0,
the given relation is the sum of these two, because
(1, 1, 1, 1) + (-1, 0, 2, 3) = (0, 1, 3, 4), so e2 + 3 e3 + 4 e4 = 0.
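A check of this regression with exact arithmetic (x1 = -1 is an assumption: the minus sign is lost in the PDF extraction):

```python
from fractions import Fraction as F

x = [-1, 0, 2, 3]
y = [0, 1, 3, 2]
n = len(x)

# Cramer's rule on the 2x2 normal equations for y = a + b x.
Sx, Sx2 = sum(x), sum(v * v for v in x)
Sy, Sxy = sum(y), sum(u * v for u, v in zip(x, y))
det = n * Sx2 - Sx * Sx
a = F(Sy * Sx2 - Sx * Sxy, det)
b = F(n * Sxy - Sx * Sy, det)
print(a, b)  # 9/10 3/5

e = [yi - (a + b * xi) for xi, yi in zip(x, y)]   # residues
print(sum(e))                                     # 0
print(sum(xi * ei for xi, ei in zip(x, e)))       # 0
print(e[1] + 3 * e[2] + 4 * e[3])                 # 0
```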
3. Given the boolean function
   f(w, x, y, z) = (x z' + w) y + x,
we want to compute, by using its value table, its disjunctive normal form
and its conjunctive normal form. (3 pts.)
Solution: The table of values of f is (writing v' for the complement of v)

   w x y z | x z' y | w y | n  | f
   0 0 0 0 |   0    |  0  |  0 | 0
   0 0 0 1 |   0    |  0  |  1 | 0
   0 0 1 0 |   0    |  0  |  2 | 0
   0 0 1 1 |   0    |  0  |  3 | 0
   0 1 0 0 |   0    |  0  |  4 | 1
   0 1 0 1 |   0    |  0  |  5 | 1
   0 1 1 0 |   1    |  0  |  6 | 1
   0 1 1 1 |   0    |  0  |  7 | 1
   1 0 0 0 |   0    |  0  |  8 | 0
   1 0 0 1 |   0    |  0  |  9 | 0
   1 0 1 0 |   0    |  1  | 10 | 1
   1 0 1 1 |   0    |  1  | 11 | 1
   1 1 0 0 |   0    |  0  | 12 | 1
   1 1 0 1 |   0    |  0  | 13 | 1
   1 1 1 0 |   1    |  1  | 14 | 1
   1 1 1 1 |   0    |  1  | 15 | 1

Therefore the d.n.f. of f is
   f(w, x, y, z) = Σ m(4, 5, 6, 7, 10, 11, 12, 13, 14, 15)
      = w'x y'z' + w'x y'z + w'x y z' + w'x y z + w x'y z'
      + w x'y z + w x y'z' + w x y'z + w x y z' + w x y z,
and the c.n.f. of f is
   f(w, x, y, z) = Π M(0, 1, 2, 3, 8, 9)
      = (w + x + y + z)(w + x + y + z')(w + x + y' + z)(w + x + y' + z')
        (w' + x + y + z)(w' + x + y + z').
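The minterm and maxterm lists can be regenerated by brute force (a sketch; the complement z' in the formula is reconstructed from the truth table, so treat it as an assumption):

```python
def f(w, x, y, z):
    """(x z' + w) y + x, with z' the complement of z."""
    return ((x and not z) or w) and y or x

# n runs over the 16 rows, with binary digits w x y z (w most significant).
minterms = [n for n in range(16)
            if f((n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1)]
maxterms = [n for n in range(16) if n not in minterms]
print(minterms)  # [4, 5, 6, 7, 10, 11, 12, 13, 14, 15]
print(maxterms)  # [0, 1, 2, 3, 8, 9]
```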
(Handwritten Karnaugh-map simplification exercises on boolean functions of four variables; illegible in the scan.)
LINEAR ALGEBRA
Resit of the First Evaluation Test
January 17th, 2020

1. Given the matrix
   A = (  1  0  -1  1
          2  2  -1  2
         -1  2   3  0
          2  0  -1  6 )
a) Obtain the LU decomposition of A, i.e. A = LU. (1.5 pts.)
b) Use the previous result to solve the linear system Ax = b, where
b = (1, 5, 4, 7)^T. (1.5 pts.)

Solution: a) Applying Gaussian elimination and keeping the multipliers, we get
   L = (  1  0  0  0        U = ( 1  0  -1  1
          2  1  0  0              0  2   1  0
         -1  1  1  0              0  0   1  1
          2  0  1  1 ),           0  0   0  3 ),
and A = LU.
b) From Ax = b, writing Ux = y, we first solve Ly = b by forward substitution:
   y = (1, 3, 2, 3)^T,
and then Ux = y by back substitution:
   x = (1, 1, 1, 1)^T.
2. In the vector space R^3 we consider the bases
   B1 = {(1, 2, 3), (3, 1, 1), (-4, 1, 0)}
and
   B2 = {(1, 0, 1), (3, 0, 0), (0, 1, 1)}.
Let P be the matrix of the change of bases from B1 to B2, and Q the matrix
of the change of bases from B2 to B1.
a) Obtain the matrices P and Q. (2 pts.)
b) Given the vector (4, 1, 2) ∈ R^3, by using the matrix obtained in a),
obtain the coordinates of such vector with respect to the basis B2. (1 pt.)
Solution: a) Expressing each vector of B1 in the basis B2 (solving
a(1, 0, 1) + b(3, 0, 0) + c(0, 1, 1) = v for each v in B1) gives the columns of P:
   P = M_{B1,B2} = ( 1  0  -1
                     0  1  -1
                     2  1   1 ),
and
   Q = M_{B2,B1} = P^{-1} = (1/4) (  2  -1  1
                                    -2   3  1
                                    -2  -1  1 ).
b) First, (4, 1, 2) = (1/2, 1/2, -1/2)_{B1}, so its coordinates with respect
to B2 are
   P (1/2, 1/2, -1/2)^T = (1, 1, 1)_{B2}.
Check: 1(1, 0, 1) + 1(3, 0, 0) + 1(0, 1, 1) = (4, 1, 2).
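A quick consistency check of the change of basis (the entries of P are reconstructed from the scan, so treat them as an assumption):

```python
from fractions import Fraction as F

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# P maps B1-coordinates to B2-coordinates.
P = [[F(1), F(0), F(-1)],
     [F(0), F(1), F(-1)],
     [F(2), F(1), F(1)]]

coords_B1 = [F(1, 2), F(1, 2), F(-1, 2)]   # (4, 1, 2) in the basis B1
coords_B2 = mat_vec(P, coords_B1)
print(coords_B2)  # the B2-coordinates of (4, 1, 2)

# Rebuild the vector from its B2-coordinates as a cross-check:
B2 = [(1, 0, 1), (3, 0, 0), (0, 1, 1)]
v = [sum(c * e[i] for c, e in zip(coords_B2, B2)) for i in range(3)]
print(v)  # should reproduce (4, 1, 2)
```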
3. Let E be the vector space of 2-by-2 matrices with real entries, let F be
the vector space of polynomials of degree less than or equal to 2 with real
coefficients, and let f : E -> F be the linear mapping defined by
   f( a b ; c d ) = (a + b) x^2 + c x + d.
a) Compute the matrix of the mapping with respect to the basis
   B1 = { ( 1 0 ; 0 0 ), ( 1 1 ; 1 0 ), ( 0 1 ; 1 0 ), ( 0 1 ; 0 1 ) }
of E and B2 = {1, 1 - x, x^2} of F. (2 pts.)
b) Let M = ( 1 2 ; 0 1 ). Find f(M) by using the matrix obtained in a). (1 pt.)
c) By using the previous matrix obtained in a), compute a basis and the
dimension of the kernel of f, ker(f). (1 pt.)

Exam duration: 1h 20 minutes

Solution: a) We express the image of each element of B1 in the basis B2; for
instance, f( 1 0 ; 0 0 ) = x^2 = (0, 0, 1)_{B2}, and similarly for the
remaining elements; the resulting coordinate vectors form the columns of
M_{B1,B2}(f).
b) f(M) = (1 + 2) x^2 + 0 x + 1 = 3 x^2 + 1 = (1, 0, 3)_{B2}.
c) f( a b ; c d ) = 0 if and only if a + b = 0, c = 0 and d = 0, hence
   ker(f) = Span( ( 1 -1 ; 0 0 ) )
and dim ker(f) = 1.
LINEAR ALGEBRA
Second Evaluation Test
January 17th, 2020

1. Given the matrix
   A = ( 1  b+1   2
         0   b    0
         4   1   -1 ),
with b ∈ R.
a) Determine the values of b for which A is diagonalizable over R. (3 pts.)
b) For b = 1, obtain the matrices D, diagonal, and P, invertible, such that
A = P D P^{-1}. (2 pts.)

Solution: a) We compute the characteristic polynomial of A, expanding the
determinant along the second row:
   det(λI - A) = | λ-1  -(b+1)  -2 |
                 |  0    λ-b     0 |
                 | -4    -1     λ+1 |
               = (λ - b)[(λ - 1)(λ + 1) - 8] = (λ - b)(λ^2 - 9),
so the spectrum of A is {b, 3, -3}.
- If b ≠ 3 and b ≠ -3, then A has 3 different eigenvalues, so A is
diagonalizable.
- If b = 3, then m_a(3) = 2 and we need to compute m_g(3):
   3I - A = (  2  -4  -2 ; 0  0  0 ; -4  -1  4 ),
   rank(3I - A) = 2  =>  m_g(3) = 1 ≠ m_a(3) = 2,
so A is not diagonalizable.
- If b = -3, then m_a(-3) = 2 and we need to compute m_g(-3):
   -3I - A = ( -4  2  -2 ; 0  0  0 ; -4  -1  -2 ),
   rank(-3I - A) = 2  =>  m_g(-3) = 1 ≠ m_a(-3) = 2,
so A is not diagonalizable.
Therefore A is diagonalizable if and only if b ≠ 3 and b ≠ -3.
b) If we set b = 1, then
   A = ( 1  2   2
         0  1   0
         4  1  -1 ),
with eigenvalues 1, 3, -3. Solving (λI - A)v = 0 for each eigenvalue gives
   Ker(I - A)   = Span{ (3, -4, 4) },
   Ker(3I - A)  = Span{ (1, 0, 1) },
   Ker(-3I - A) = Span{ (1, 0, -2) },
so A = P D P^{-1} with
   D = ( 1  0   0         P = (  3  1   1
         0  3   0               -4  0   0
         0  0  -3 ),             4  1  -2 ).
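The reconstructed eigenpairs can be verified directly (a sketch; the signs are an assumption recovered from the scan):

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# A with b = 1, and its candidate eigenpairs (lambda, v).
A = [[1, 2, 2], [0, 1, 0], [4, 1, -1]]
pairs = [(1, [3, -4, 4]), (3, [1, 0, 1]), (-3, [1, 0, -2])]

for lam, v in pairs:
    ok = mat_vec(A, v) == [lam * x for x in v]
    print(lam, ok)  # True for every eigenpair
```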
2. We consider the initial value problem
   X' = A X + G,   X(0) = (  1
                            -4 ),
where
   A = ( 1  1        G(t) = ( e^t + 3
         2  2 ),              e^t - 6 ).
a) Compute the matrix e^{At}. (2 pts.)
b) Solve the initial value problem by using the matrix e^{At}. (3 pts.)

Exam duration: 1h 30 minutes.

Solution: a) In order to obtain e^{At} we first diagonalize A = P D P^{-1}.
Here
   det(λI - A) = λ^2 - 3λ = 0   =>   λ = 0, 3,
with
   Ker(0I - A) = Span{ (1, -1) },   Ker(3I - A) = Span{ (1, 2) },
so
   D = ( 0  0 ; 0  3 ),   P = ( 1  1 ; -1  2 ),   P^{-1} = (1/3)( 2  -1 ; 1  1 ),
and
   e^{At} = P e^{Dt} P^{-1} = (1/3) (   2 + e^{3t}     e^{3t} - 1
                                       2e^{3t} - 2    2e^{3t} + 1 ).
b) Since X' = AX + G and the initial condition is at t = 0, the solution is
   X(t) = e^{At} X(0) + ∫_0^t e^{A(t-s)} G(s) ds.
The homogeneous part is
   e^{At} X(0) = e^{At} ( 1 ; -4 ) = ( 2 - e^{3t} ; -2 - 2e^{3t} ),
and computing the integral gives the particular part
   X_p(t) = ∫_0^t e^{A(t-s)} G(s) ds = ( 4t ; 1 - e^t - 4t ).
Then the solution is
   X(t) = (  2 - e^{3t} + 4t
            -1 - 2e^{3t} - e^t - 4t ).


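The reconstructed solution can be checked against the ODE numerically (a sketch, assuming the data X(0) = (1, -4) and G(t) = (e^t + 3, e^t - 6) recovered from the scan):

```python
import math

def X(t):
    """Candidate solution of the initial value problem."""
    return (2 - math.exp(3 * t) + 4 * t,
            -1 - 2 * math.exp(3 * t) - math.exp(t) - 4 * t)

def dX(t):
    # derivative of X, computed by hand
    return (-3 * math.exp(3 * t) + 4,
            -6 * math.exp(3 * t) - math.exp(t) - 4)

def rhs(t):
    # right-hand side A X + G with A = [[1, 1], [2, 2]]
    x1, x2 = X(t)
    g = (math.exp(t) + 3, math.exp(t) - 6)
    return (x1 + x2 + g[0], 2 * x1 + 2 * x2 + g[1])

assert X(0) == (1.0, -4.0)                           # initial condition
for t in (0.0, 0.3, 1.0):
    a, b = dX(t)
    c, d = rhs(t)
    assert abs(a - c) < 1e-9 and abs(b - d) < 1e-9   # X' = A X + G
print("checks passed")
```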
LINEAR ALGEBRA
Third Evaluation Test
January 17th, 2020

1. In R^4 with the standard scalar product, we consider the subspace
   U = <(1, 1, 2, 0), (1, 3, 2, 1), (2, 0, 1, 2)> .
a) Obtain a basis of the orthogonal complement U^⊥ of U. (1 pt.)
b) Compute the orthogonal projection of the vector b = (0, 1, 2, 2)
onto the subspace U^⊥. (1.5 pts.)
c) Compute the distance between b and U^⊥. (1 pt.)

Solution: a) By definition, U^⊥ = { v ∈ R^4 : u_i . v = 0 for the three
generators u_i of U }; solving this homogeneous linear system gives a basis
of U^⊥ (here dim U = 3, so dim U^⊥ = 1).
b) If v spans U^⊥, then proj_{U^⊥}(b) = (b . v / v . v) v.
c) Since proj_U(b) + proj_{U^⊥}(b) = b and d(b, U^⊥) = ||proj_U(b)||, we get
   d(b, U^⊥) = ||b - proj_{U^⊥}(b)||.
(The numeric values in the handwritten solution are illegible in the scan.)
2. For the following data:
   x1 = 0, x2 = 1, x3 = 2, x4 = 3,
   y1 = -1, y2 = 0, y3 = 2, y4 = 2.
We want:
a) To compute the regression line y = C + Dx (i.e. the line y = C + Dx that
best fits the points (xi, yi) in the least squares sense). (2 pts.)
b) Evaluate the line obtained in a) at the values xi. (0.5 pts.)
c) If e = (e1, e2, e3, e4) is the residue vector, explain why, in this case,
the relation e1 + 2 e2 + 3 e3 + 4 e4 = 0 holds. (1 pt.)
Solution: a) The linear system we need to solve (in the least squares sense) is
   ( 1 0 ; 1 1 ; 1 2 ; 1 3 ) ( C ; D ) = ( -1 ; 0 ; 2 ; 2 ),
so the normal equations A^T A x = A^T b read
   ( 4  6 ; 6  14 ) ( C ; D ) = ( 3 ; 10 ),
with solution C = -9/10, D = 11/10, and the regression line is
   y = -9/10 + (11/10) x.
b) y(x1) = -9/10,  y(x2) = 1/5,  y(x3) = 13/10,  y(x4) = 12/5.
c) The residue vector e = b - Ax is orthogonal to the columns of A:
   e1 + e2 + e3 + e4 = 0   (column (1, 1, 1, 1)),
   e2 + 2 e3 + 3 e4 = 0    (column (0, 1, 2, 3)),
and adding these two relations gives e1 + 2 e2 + 3 e3 + 4 e4 = 0.
3. Given the boolean function
   f(w, x, y, z) = (y + x'z)(z + x'y),
we want to compute, by using its table of values, its disjunctive normal form
and its conjunctive normal form. (3 pts.)

Exam duration: 1h 30 minutes

Solution: Listing the 16 inputs in the order n = 0, 1, ..., 15 (n with binary
digits w x y z), the rows of the table are
   x'z     : 0101 0000 0101 0000
   y + x'z : 0111 0011 0111 0011
   x'y     : 0011 0000 0011 0000
   z + x'y : 0111 0101 0111 0101
   f       : 0111 0001 0111 0001
so f = 1 exactly at n = 1, 2, 3, 7, 9, 10, 11, 15. Therefore the d.n.f. is
   f(w, x, y, z) = Σ m(1, 2, 3, 7, 9, 10, 11, 15)
      = w'x'y'z + w'x'y z' + w'x'y z + w'x y z
      + w x'y'z + w x'y z' + w x'y z + w x y z,
and the c.n.f. is
   f(w, x, y, z) = Π M(0, 4, 5, 6, 8, 12, 13, 14)
      = (w + x + y + z)(w + x' + y + z)(w + x' + y + z')(w + x' + y' + z)
        (w' + x + y + z)(w' + x' + y + z)(w' + x' + y + z')(w' + x' + y' + z).
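A brute-force check of the minterm list (the complements x' in the formula are reconstructed from the handwritten table, so treat them as assumptions):

```python
def f(w, x, y, z):
    """(y + x'z)(z + x'y), with x' the complement of x."""
    return (y or ((not x) and z)) and (z or ((not x) and y))

# n runs over the 16 rows, with binary digits w x y z (w most significant).
minterms = [n for n in range(16)
            if f((n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1)]
print(minterms)  # [1, 2, 3, 7, 9, 10, 11, 15]
```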
