
Part 1 – basics

1. Vectors
2. Linear (in)dependence and basis
3. Matrices
4. Linear equations
5. Solving linear equations by Gaussian elimination
6. Linear transformations (identity, composition, inverse)
7. Kernels and ranges of matrices, vector subspaces in general
8. Dot product and orthogonality

1 / 37
Vectors in R^n

A vector in R^n has the form

v = \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} = v_1 e_1 + \dots + v_n e_n ,

where

e_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \dots, \quad e_n = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}

are the canonical basis vectors and v1 , v2 , . . . , vn are real numbers.

2 / 37
Basic operations: addition of vectors and multiplication by
a scalar

 
v + w = \begin{pmatrix} v_1 + w_1 \\ \vdots \\ v_n + w_n \end{pmatrix}, \qquad \alpha v = \begin{pmatrix} \alpha v_1 \\ \vdots \\ \alpha v_n \end{pmatrix}, \quad \alpha \in R

w is a linear combination of v1 , . . . , vk if

w = \alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_k v_k

for some constants α1 , . . . , αk .
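These operations can be sketched numerically; the following minimal NumPy illustration uses made-up vectors and coefficients, not values from the slides:

```python
import numpy as np

# Vectors in R^3 as NumPy arrays (illustrative choices).
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

v_plus_w = v + w     # componentwise addition (v1+w1, ..., vn+wn)
two_v = 2.0 * v      # scalar multiplication (alpha*v1, ..., alpha*vn)

# A linear combination alpha_1*v_1 + alpha_2*v_2 of two chosen vectors.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
combo = 4.0 * v1 + 5.0 * v2
```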


3 / 37
Zero vector and linear dependence

The zero vector:

0 = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.

Vectors v1 , v2 , . . . , vk are linearly dependent if there exist constants α1 , . . . , αk , not all equal to 0, such that

0 = \alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_k v_k .

In words: 0 is a non-trivial linear combination of v1 , v2 , . . . , vk .

4 / 37
Linear dependence - alternative statements

\alpha_1 v_{11} + \alpha_2 v_{12} + \dots + \alpha_k v_{1k} = 0
\alpha_1 v_{21} + \alpha_2 v_{22} + \dots + \alpha_k v_{2k} = 0
\vdots
\alpha_1 v_{n1} + \alpha_2 v_{n2} + \dots + \alpha_k v_{nk} = 0,

or

\begin{pmatrix} v_{11} & v_{12} & \dots & v_{1k} \\ \vdots & & & \vdots \\ v_{n1} & v_{n2} & \dots & v_{nk} \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_k \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}.
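Numerically, dependence can be detected by checking whether the matrix with the given vectors as columns has rank below k, i.e. whether A α = 0 has a non-zero solution. A hedged sketch with deliberately dependent example vectors:

```python
import numpy as np

# Illustrative vectors; v3 is built to be dependent: v3 - v1 - v2 = 0.
v1 = np.array([1.0, 2.0, -1.0])
v2 = np.array([1.0, -3.0, 1.0])
v3 = v1 + v2

A = np.column_stack([v1, v2, v3])
dependent = np.linalg.matrix_rank(A) < 3   # rank < k means dependent columns

alpha = np.array([1.0, 1.0, -1.0])          # a non-trivial combination giving 0
residual = A @ alpha
```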

5 / 37
Linear independence and basis

• Vectors v1 , v2 , . . . , vk are linearly independent if they are not linearly dependent.
• A set of n linearly independent vectors in R^n is called a basis of R^n.
Example: {e1 , e2 , . . . , en } is a basis of R^n.
Theorem The two statements below are equivalent:
(1) {v1 , v2 , . . . , vn } is a basis of R^n,
(2) each vector w ∈ R^n can be represented as a linear combination of v1 , v2 , . . . , vn .
Proof by Gaussian elimination will be given later.

6 / 37
Matrices

An n by k matrix consists of k column vectors of length n:


 
A = \begin{pmatrix} v_1 & v_2 & \dots & v_k \end{pmatrix}, \quad v_i = \begin{pmatrix} v_{1i} \\ v_{2i} \\ \vdots \\ v_{ni} \end{pmatrix}, \quad or

A = \begin{pmatrix} v_{11} & v_{12} & \dots & v_{1k} \\ \vdots & & & \vdots \\ v_{n1} & v_{n2} & \dots & v_{nk} \end{pmatrix}, \quad or \quad A = (v_{ij})_{i=1,\dots,n,\ j=1,\dots,k} .

Notation a_{ij} is often used instead of v_{ij}, i.e. A = (a_{ij})_{i=1,\dots,n,\ j=1,\dots,k} .

7 / 37
Example of a matrix

 
A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & -3 & -1 \\ -1 & 1 & 2 \end{pmatrix}.
Question: Are the columns (rows) of A linearly dependent?

8 / 37
Multiplication of a vector by a matrix
Given

A = \begin{pmatrix} v_1 & v_2 & \dots & v_k \end{pmatrix} \quad and \quad \alpha = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_k \end{pmatrix},
we have
Aα = α1 v1 + α2 v2 + . . . + αk vk .
In other words Aα is a linear combination of the columns of A with
the coefficients α1 , α2 , . . . , αk .
Also

A\alpha = \begin{pmatrix} \alpha_1 v_{11} + \alpha_2 v_{12} + \dots + \alpha_k v_{1k} \\ \alpha_1 v_{21} + \alpha_2 v_{22} + \dots + \alpha_k v_{2k} \\ \vdots \\ \alpha_1 v_{n1} + \alpha_2 v_{n2} + \dots + \alpha_k v_{nk} \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^{k} \alpha_i v_{1i} \\ \sum_{i=1}^{k} \alpha_i v_{2i} \\ \vdots \\ \sum_{i=1}^{k} \alpha_i v_{ni} \end{pmatrix}.
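The column-combination reading of Aα can be checked directly; a small NumPy sketch (the matrix and coefficients below are illustrative choices):

```python
import numpy as np

# A alpha computed two ways: matrix-vector product, and a linear
# combination of the columns of A with coefficients alpha_i.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, -3.0, -1.0],
              [-1.0, 1.0, 2.0]])
alpha = np.array([2.0, -1.0, 3.0])

by_matrix = A @ alpha
by_columns = sum(alpha[i] * A[:, i] for i in range(A.shape[1]))
```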

9 / 37
Matrix multiplication

Given matrices A and B of dimensions n by k and k by m we can


form their product, as follows. If
 
A = \begin{pmatrix} a_1 & a_2 & \dots & a_k \end{pmatrix}, \quad B = \begin{pmatrix} b_1 & b_2 & \dots & b_m \end{pmatrix},

then AB is an n by m matrix given by

AB = \begin{pmatrix} Ab_1 & Ab_2 & \dots & Ab_m \end{pmatrix}, \quad or \quad AB = \left( \sum_{l=1}^{k} a_{il} b_{lj} \right)_{i=1,\dots,n,\ j=1,\dots,m} .

Example of matrix multiplication...
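The example the slide leaves open can be sketched in NumPy (matrices chosen for illustration); note that column j of AB is exactly A applied to column j of B:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # 2 by 2
B = np.array([[5.0, 6.0, 7.0],
              [8.0, 9.0, 10.0]])    # 2 by 3

AB = A @ B                          # the 2 by 3 product
col0 = A @ B[:, 0]                  # first column of AB, computed as A b_1
```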

10 / 37
Systems of linear equations

Suppose

A = (a_{ij})_{i=1,\dots,n,\ j=1,\dots,k} \quad and \quad b = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix} are given.

A system of linear equations in the matrix form is

Ax = b,

where

x = \begin{pmatrix} x_1 \\ \vdots \\ x_k \end{pmatrix} is an unknown vector.

11 / 37
Familiar form

a_{11} x_1 + a_{12} x_2 + \dots + a_{1k} x_k = b_1
a_{21} x_1 + a_{22} x_2 + \dots + a_{2k} x_k = b_2
\vdots
a_{n1} x_1 + a_{n2} x_2 + \dots + a_{nk} x_k = b_n .

Solving a linear system is equivalent to representing b as a linear


combination of the columns of A:

x1 a1 + x2 a2 + . . . + xk ak = b.

12 / 37
A concrete (high school level) example:

x +y +z =1
2x − 3y − z = 0
−x + y + 2z = −1.
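This small system can be checked numerically; a sketch using `numpy.linalg.solve`, whose LU factorization is a form of the Gaussian elimination discussed next:

```python
import numpy as np

# The high-school example in matrix form Ax = b.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, -3.0, -1.0],
              [-1.0, 1.0, 2.0]])
b = np.array([1.0, 0.0, -1.0])

x = np.linalg.solve(A, b)   # solution vector (x, y, z)
check = A @ x               # should reproduce b
```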

13 / 37
Gaussian elimination
Given a system of equations
a_{11} x_1 + a_{12} x_2 + \dots + a_{1k} x_k = b_1
a_{21} x_1 + a_{22} x_2 + \dots + a_{2k} x_k = b_2
\vdots
a_{n1} x_1 + a_{n2} x_2 + \dots + a_{nk} x_k = b_n ,
Gaussian elimination consists of two types of elementary operations
that are carried out in succession:
(RI) Interchange of equations,
(RA) Adding a multiple of an equation to another equation and
replacing it by the resulting equation.
The system obtained by any of these operations has the same
solutions.
Note: Multiplying an equation by a scalar does not change
solutions and can be used in numerical approaches.
14 / 37
Gaussian elimination on the level of matrices

We form the extended matrix, where we add an extra column to A


equal to b:
M = (A | b).
Equivalently, the elementary operations of Gaussian elimination are now:
(RI) Row interchange,
(RA) Adding a multiple of a row to another row and replacing it by the result.
Theorem Using Gaussian elimination one can transform (A, b) to (U, c) so that U is upper triangular (all elements under the diagonal are 0).
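The forward pass of the theorem can be sketched in a few lines of NumPy. This is a minimal illustration, not a production routine; picking the largest available pivot (partial pivoting) is a numerical safeguard, the theory only needs a non-zero one:

```python
import numpy as np

def forward_eliminate(M):
    """Reduce the extended matrix M = (A | b) to (U | c) using only (RI) and (RA)."""
    M = M.astype(float).copy()
    n = M.shape[0]
    for j in range(n):
        p = j + np.argmax(np.abs(M[j:, j]))   # (RI): bring a non-zero pivot up
        M[[j, p]] = M[[p, j]]
        if M[j, j] == 0:
            continue                          # zero column below the diagonal
        for i in range(j + 1, n):             # (RA): clear entries below the pivot
            M[i] -= (M[i, j] / M[j, j]) * M[j]
    return M

M = np.array([[0.0, 1.0, 1.0, 1.0],
              [1.0, 1.0, 3.0, 2.0],
              [2.0, 3.0, 4.0, 1.0]])          # first example on the next slide
U = forward_eliminate(M)                      # upper triangular left block
```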

15 / 37
Examples

   
(A, b) = \begin{pmatrix} 0 & 1 & 1 & | & 1 \\ 1 & 1 & 3 & | & 2 \\ 2 & 3 & 4 & | & 1 \end{pmatrix} \to (U, c) = \begin{pmatrix} 1 & 1 & 3 & | & 2 \\ 0 & 1 & 1 & | & 1 \\ 0 & 0 & -3 & | & -4 \end{pmatrix}

Solution: x_3 = 4/3, x_2 = -1/3, x_1 = -5/3.

(A, b) = \begin{pmatrix} 1 & 1 & 2 & | & 1 \\ 0 & 1 & 1 & | & 1 \\ 2 & 3 & 5 & | & 1 \end{pmatrix} \to (U, c) = \begin{pmatrix} 1 & 1 & 2 & | & 1 \\ 0 & 1 & 1 & | & 1 \\ 0 & 0 & 0 & | & -2 \end{pmatrix}

0 x_3 = -2 cannot be solved, so there is no solution.

16 / 37
   
(A, b) = \begin{pmatrix} 1 & 1 & 2 & | & 1 \\ 0 & 1 & 1 & | & 1 \\ 2 & 3 & 5 & | & 3 \end{pmatrix} \to (U, c) = \begin{pmatrix} 1 & 1 & 2 & | & 1 \\ 0 & 1 & 1 & | & 1 \\ 0 & 0 & 0 & | & 0 \end{pmatrix}

0 x_3 = 0 gives no condition, so there is a continuum of solutions (infinitely many).
The elements on the diagonal of U are called pivots.
Theorem If A is n by n and all the pivots are non-zero, then Ax = b has a unique solution for any choice of b. Otherwise, for most choices of b there are no solutions, but there exist choices of b for which there are infinitely many solutions.

17 / 37
Return to linear independence

Consider:

v_1, \dots, v_k \quad and \quad A = \begin{pmatrix} v_1 & v_2 & \dots & v_k \end{pmatrix}.

Note that
• the zero vector 0 is always a solution of Ax = 0,
• linear independence of v1 , . . . , vk is equivalent to the requirement that 0 is the only solution of Ax = 0,
• linear independence of v1 , . . . , vk is equivalent to the requirement that all the pivots of U are non-zero,
• consequently (by the Theorem above), if A is an n by n matrix with linearly independent columns, then Ax = b has a unique solution for any choice of b.

18 / 37
Linear transformations

f : R^n → R^k is a linear function (transformation) if
(i) f(v + w) = f(v) + f(w), for all v, w ∈ R^n,
(ii) f(λv) = λf(v), for all v ∈ R^n, λ ∈ R.
Any linear transformation f : R^n → R^k is represented by a matrix:

let v_1 = f(e_1), v_2 = f(e_2), \dots, v_n = f(e_n) and

A = \begin{pmatrix} v_1 & v_2 & \dots & v_n \end{pmatrix}.

Then, for any \alpha = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{pmatrix}, we have f(\alpha) = A\alpha.
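The construction can be sketched concretely: build the matrix from the images of the canonical basis vectors and compare. The map f below is an illustrative linear map, not one from the slides:

```python
import numpy as np

def f(v):
    # An illustrative linear map R^2 -> R^3.
    return np.array([2.0 * v[0] + v[1], v[0] - v[1], 3.0 * v[1]])

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.column_stack([f(e1), f(e2)])        # columns are f(e1), f(e2)

alpha = np.array([4.0, -2.0])
same = np.allclose(f(alpha), A @ alpha)    # f(alpha) agrees with A alpha
```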

19 / 37
Composition and identity

Let A and B be n by k and k by m matrices and

f (x) = Ax and g (x) = Bx

the associated linear transformations. Then the composition f ◦ g


is given by
f ◦ g (x) = f (g (x)) = ABx.
The identity function f (x) = x is given by the identity matrix
 
I = \begin{pmatrix} 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \dots & 0 & 1 \end{pmatrix}

20 / 37
Inverse

Suppose that A is an n by n matrix with linearly independent columns. Then there exist unique vectors

\alpha_1, \alpha_2, \dots, \alpha_n

such that A\alpha_j = e_j , j = 1, . . . , n. Let

B = \begin{pmatrix} \alpha_1 & \alpha_2 & \dots & \alpha_n \end{pmatrix}.

Note that AB = I . The matrix B is called the inverse of A,


denoted by A−1 .
If f (x) = Ax is the linear transformation associated to A then
g (x) = A−1 x is the inverse transformation of f (g = f −1 ).
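The slide's column-by-column construction of the inverse can be sketched as follows, with `numpy.linalg.solve` standing in for Gaussian elimination and an illustrative 2 by 2 matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # invertible example
n = A.shape[0]
I = np.eye(n)

# The j-th column of the inverse is the solution alpha_j of A alpha_j = e_j.
cols = [np.linalg.solve(A, I[:, j]) for j in range(n)]
B = np.column_stack(cols)    # candidate inverse

AB = A @ B                   # should be the identity
BA = B @ A                   # note (3): also the identity
```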

21 / 37
Notes
(1) A−1 can be found by Gaussian elimination by solving for the
columns αj .
(2) (AB)−1 = B −1 A−1
(3) A−1 A = I .
Proof of (3)
• We have defined A−1 by the requirement AA−1 = I and showed
that it can be found by Gaussian elimination.
• Also by Gaussian elimination we can find a matrix B such that
BA = I .
Now it is ‘easy’ to show that B = A−1 :

B = BI = B(AA−1 ) = (BA)A−1 = IA−1 = A−1 .

Hence (3) holds.


22 / 37
Kernels and Ranges

Given an n by k matrix A, the range of A, Im(A), consists of all vectors b in R^n such that Ax = b has a solution:

Im(A) = \{b \in R^n : Ax = b has a solution\}.

Given an n by k matrix A, the kernel of A, ker(A), consists of all vectors x in R^k such that Ax = 0:

ker(A) = \{x \in R^k : Ax = 0\}.

23 / 37
Vector subspace of R^n

A subset V of R^n is a vector subspace if the two conditions below hold:
(1) if two vectors v1 and v2 are in V then their sum v1 + v2 is in V .
(2) if a vector v is in V and λ is any scalar, then λv is in V .
Examples of vector subspaces are Im(A) and ker(A).
Specific example: the line in R^2 defined by 2x + 3y = 0 is the kernel of the matrix A = (2 3).

24 / 37
Spanning set and basis of a subspace

A set of vectors {v1 , v2 , . . . , vk } is a spanning set of a subspace V of R^n if every element of V can be represented as a linear combination of v1 , v2 , . . . , vk :

V = span\{v_1, \dots, v_k\} = \{\alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_k v_k : \alpha_1, \dots, \alpha_k \ arbitrary\}.

A basis of V is a spanning set consisting of linearly independent vectors.


Each basis of a subspace V has the same number of elements. This number of elements is the dimension of V, or rank(A), where A = \begin{pmatrix} v_1 & \dots & v_k \end{pmatrix}.
The dimension of V is the number of non-zero pivots of U, where U is obtained from A by Gaussian elimination.

25 / 37
Example

Let

A = \begin{pmatrix} 1 & 1 & 2 & 0 \\ 0 & 1 & 1 & 1 \\ 2 & 3 & 5 & 1 \end{pmatrix}.

(1) Find the k for which ker(A) is a subspace of R^k.
(2) What is the dimension of ker(A)?
(3) What is rank(A)?
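A hedged sketch of how such questions can be checked numerically; `numpy.linalg.matrix_rank` (based on the SVD) stands in here for counting non-zero pivots after elimination:

```python
import numpy as np

A = np.array([[1.0, 1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0, 1.0],
              [2.0, 3.0, 5.0, 1.0]])

k = A.shape[1]                   # ker(A) is a subspace of R^k
r = np.linalg.matrix_rank(A)     # rank(A) = dim Im(A)
dim_ker = k - r                  # dimension of the kernel (rank-nullity count)
```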

26 / 37
The transpose of a matrix

Given is an n by k matrix A = (a_{ij})_{i=1,\dots,n,\ j=1,\dots,k} .

The transpose of A is given by A^T = (a_{ji})_{j=1,\dots,k,\ i=1,\dots,n} ;
A^T is the matrix whose rows are the columns of A written horizontally.
 
  1 0 2
1 1 2 0
1 1 3 
A =  0 1 1 1  , AT = 

.
 2 1 5 
2 3 5 1
0 1 1

Important formula: (AB)T = B T AT (easy but tedious to verify).
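Tedious on paper, but quick to check numerically; a sketch with random matrices of illustrative sizes:

```python
import numpy as np

# Numeric check of (AB)^T = B^T A^T; note the reversed order on the right.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # n by k
B = rng.standard_normal((4, 2))   # k by m

lhs = (A @ B).T                   # m by n
rhs = B.T @ A.T                   # also m by n
agree = np.allclose(lhs, rhs)
```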

27 / 37
By Gaussian elimination:
(1) The number of pivots in U equals the number of independent
rows in A.
(2) The number of pivots in U equals the number of independent
columns in A.
Consequently, rank(A) = rank(A^T).

28 / 37
Dot product (inner product), length and orthogonality

Given are vectors

v = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} \quad and \quad w = \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix}.

Then

v · w = v1 w1 + v2 w2 + . . . + vn wn is the dot product of v and w.



Note: \ell(v) = \sqrt{v \cdot v} is the length of v (Pythagorean theorem).

29 / 37
Triangle inequality and Cauchy-Schwarz inequality

|v| = \ell(v) = \sqrt{v \cdot v} \quad (norm of v)
|v + w| \le |v| + |w| \quad (triangle inequality)
|v \cdot w| \le |v| |w| \quad (Cauchy-Schwarz inequality)
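Both inequalities are easy to illustrate numerically; a sketch with arbitrarily chosen vectors:

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
w = np.array([3.0, 0.0, -4.0])

norm_v = np.sqrt(v @ v)                                        # |v| = sqrt(v . v)
norm_w = np.sqrt(w @ w)
triangle = np.sqrt((v + w) @ (v + w)) <= norm_v + norm_w       # |v+w| <= |v|+|w|
cauchy_schwarz = abs(v @ w) <= norm_v * norm_w                 # |v.w| <= |v||w|
```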

30 / 37
A note on dot product

A vector can be identified with an n by 1 matrix. Then:

v · w = vT w,

where the right hand side is understood as matrix multiplication


and the resulting 1 by 1 matrix identified with its entry.

31 / 37
Orthogonality of vectors and spaces

Vectors

v = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} \quad and \quad w = \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix}

are orthogonal if v · w = 0.

Example: v = \begin{pmatrix} 1 \\ 2 \end{pmatrix} and w = \begin{pmatrix} 2 \\ -1 \end{pmatrix} are orthogonal.
A vector v is perpendicular to a vector subspace W if v is perpendicular to each vector in W.
Vector subspaces V and W are orthogonal if v · w = 0 for every v in V and every w in W.

32 / 37
Orthogonal complement

If V is a vector subspace of R^n then the orthogonal complement V^⊥ consists of all vectors in R^n that are perpendicular to V.
(i) dim(V) + dim(V^⊥) = n.
(ii) The only vector contained in both V and V^⊥ is the zero vector.
(iii) If k and m are the dimensions of V and V^⊥, and {v1 , . . . , vk } and {w1 , . . . , wm } are bases of V and V^⊥, then

{v1 , . . . , vk , w1 , . . . , wm }

is a basis of R^n.

33 / 37
The image of a matrix and the kernel of its transpose

The following formula connects transpose and dot product:

Av \cdot w = v \cdot A^T w

Proof

Av \cdot w = (Av)^T w = (v^T A^T) w = v^T (A^T w) = v \cdot A^T w.

The following equality is a consequence of this formula:

(Im(A))⊥ = ker(AT ).

Important for symmetric matrices (AT = A).
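Both the identity and its consequence can be illustrated numerically. The sketch below checks the identity on random data, then uses an illustrative matrix A0 whose rows satisfy r3 = r1 + r2, so w0 = (1, 1, -1) lies in ker(A0^T) and must be orthogonal to every column of A0, i.e. to Im(A0):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))    # n by k
v = rng.standard_normal(2)
w = rng.standard_normal(3)

# The identity Av . w = v . (A^T w).
identity_holds = np.isclose((A @ v) @ w, v @ (A.T @ w))

# A concrete matrix with a visible left-null vector.
A0 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])
w0 = np.array([1.0, 1.0, -1.0])          # in ker(A0^T)
orthogonal_to_image = np.allclose(A0.T @ w0, 0)
```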

34 / 37
Exercises

1. Given

A = \begin{pmatrix} 2 & -3 & 0 \\ 4 & -5 & 1 \\ 2 & -1 & -3 \end{pmatrix} \quad and \quad b = \begin{pmatrix} 3 \\ 7 \\ 5 \end{pmatrix},

solve Ax = b using Gaussian elimination.


2. Each elementary operation of Gaussian elimination is given by a
matrix multiplication. In other words if k is the number of
elementary operations needed to get from A to U then

U = Lk Lk−1 . . . L1 A = LA.

and multiplication by Lj results in the jth elementary row operation. What is k for the example of exercise 1? Find the Lj 's and L = Lk Lk−1 . . . L1 .
35 / 37
Exercises cont.

3. Given

A = \begin{pmatrix} 2 & -3 & 0 \\ 4 & -5 & d \\ 2 & -1 & -3 \end{pmatrix},
find the number d such that

Ax = 0

has multiple solutions. What can you say about the solutions of Ax = b, where b is from exercise 1?
4. For A from exercise 1, find M such that A = MU.

36 / 37
Exercises cont.

5. Using Gaussian elimination find A^{-1} for

A = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 1 & 1 \\ 3 & 2 & 2 \end{pmatrix}.

6. For the matrix

A = \begin{pmatrix} 1 & 1 & 2 & 0 \\ 0 & 1 & 1 & 1 \\ 2 & 3 & 5 & 1 \end{pmatrix}
find bases of ker A and Im A. You can use the formula
(Im(A))⊥ = ker(AT ).

37 / 37
