
Linear Algebra

SEG : S4
Pr. Tazi Ennouri

Licence in Economics / Business Studies, 2023-2024

Contents
1 Matrix calculation
  1.1 Introduction
  1.2 Matrices
    1.2.1 Definition
    1.2.2 Examples
  1.3 Particular square matrices
    1.3.1 Diagonal matrix
    1.3.2 Null matrix and unit matrix
    1.3.4 Triangular matrix
  1.4 Elementary operations on matrices
    1.4.1 Equality of two matrices
    1.4.2 Addition and subtraction of matrices
    1.4.3 Product of a matrix by a real
    1.4.4 Product of two matrices
  1.5 Transpose of a matrix
    1.5.1 Definition
  1.6 Properties of the transpose
    1.6.1 Properties
  1.7 Symmetric matrix and antisymmetric matrix
  1.8 Trace of a square matrix
    1.8.1 Definition
    1.8.2 Properties
  1.9 Determinant of a square matrix
    1.9.1 Determinant of a square matrix of order 2
    1.9.2 Determinant of a square matrix of order 3 by the method of Sarrus
    1.9.3 Determinant of a square matrix of order n > 2 by the method of cofactors
    1.9.5 Properties of determinants
  1.10 The comatrix of a matrix
  1.12 Inverse of a matrix
    1.12.1 Definition and examples
    1.12.2 Some properties of the inverse
  1.13 Rank of a matrix
    1.13.1 Definitions
    1.13.2 Examples
    1.13.3 Elementary transformations that do not change the rank of a matrix
    1.13.4 Rank properties
  1.14 The normal form of a matrix
    1.14.1 Definition and examples
    1.14.2 Technique for finding the normal form of a matrix

2 Solving Systems of Linear Equations
  2.1 Introduction
  2.2 Definitions
  2.3 Elementary operations on the rows of a system
  2.4 Matrix writing of a linear system
  2.6 Solving linear systems by Cramer's method
    2.6.1 Definition of a Cramer system
    2.6.2 Principle of Cramer's method
  2.7 Solving linear systems by the Gaussian method
    2.7.1 Solving a system of equations using matrices
    2.7.2 Finding the rank of a matrix
    2.7.3 Gaussian method technique
  2.8 Application of Gauss's method
    2.8.1 Solving a system of equations using matrices
    2.8.2 Finding the rank of a matrix
    2.8.3 Calculation of the inverse of a square matrix by Gauss's algorithm

3 Vector space
  3.1 Definition of a vector space
  3.2 Linear combinations
  3.3 Free family, linked family, generating family
    3.3.1 Generating family
    3.3.2 Free family, linked family
  3.4 Basis of a vector space
  3.5 Linear maps
    3.5.1 Kernel and image of a linear map
1 Matrix calculation
1.1 Introduction
In many economic analyses, the different variables are related to each other
by linear equations. Linear algebra provides a clear and precise notation for
formulating and solving such problems. In this chapter, we first define the
notion of a matrix. We then look at the different types of matrices and the
usual operations: arithmetic, and the calculation of the determinant and of the
inverse.

1.2 Matrices
1.2.1 Definition
Definition 1. A matrix can be thought of as a rectangular array of numbers,
which has m rows and n columns:

    A = (a_{i,j}) = [ a_{1,1}  a_{1,2}  ...  a_{1,n} ]
                    [ a_{2,1}  a_{2,2}  ...  a_{2,n} ]
                    [   ...      ...    ...    ...   ]
                    [ a_{m,1}  a_{m,2}  ...  a_{m,n} ]

• a_{i,j} is called an element or coefficient of the matrix.

• The element a_{i,j} is characterized by its value and by its position.
• i is the row number and j is the column number.
Definition 2. A matrix of m rows and n columns is said to be of type (m, n).
• If m = n, the matrix is called a square matrix of order n.
• A matrix of order (m, 1) is called a column vector.
• A matrix of order (1, n) is called a row vector.

1.2.2 Examples
Example 1.1. The following matrix A is square of order 3:

    A = [  4   0  -2 ]
        [ 17  -9   2 ]
        [ 34   1  -6 ]

Example 1.2. The matrices A, of order (3, 1), and B, of order (1, 2), are called
a column vector and a row vector respectively:

    A = [  19 ]        and        B = [ 101  23 ]
        [  23 ]
        [ -25 ]
1.3 Particular square matrices
1.3.1 Diagonal Matrix
Definition 3. We call the diagonal of a square matrix the set of its coefficients
whose row index is equal to its column index.
Example 1.3. Let A be a square matrix of order 3:

    A = [ 2  7  5 ]
        [ 3  0  6 ]
        [ 1  4  1 ]

The diagonal of A is the sequence of elements (2, 0, 1).


Definition 4. A diagonal matrix is any square matrix whose coefficients are
all zero, except possibly those of the diagonal.
Example 1.4. Let A be a diagonal matrix of order 4:

    A = [ 3  0  0  0 ]
        [ 0  5  0  0 ]
        [ 0  0  7  0 ]
        [ 0  0  0  1 ]

The diagonal of A is the sequence of elements (3, 5, 7, 1).

1.3.2 Null matrix and unit matrix

Definition 5. The null matrix is a matrix all of whose terms are zero; it is
denoted 0.

Definition 6. The unit matrix or identity matrix is a diagonal matrix in which
all the terms of the diagonal are equal to 1. We denote it I_n, with n the order
of the matrix.
Example 1.5. I_3, the identity matrix of order 3:

    I_3 = [ 1  0  0 ]
          [ 0  1  0 ]
          [ 0  0  1 ]

1.3.4 Triangular matrix

Definition 7. We call an upper (respectively lower) triangular matrix any square
matrix whose coefficients located below (respectively above) the diagonal are all
zero.

Example 1.6. A, an upper triangular matrix of order 3, and B, a lower triangular
matrix of order 3:

    A = [ 7  5  1 ]        B = [  4  0  0 ]
        [ 0  2  3 ]            [  9  2  0 ]
        [ 0  0  4 ]            [ 11  8  3 ]

1.4 Elementary operations on matrices

1.4.1 Equality of two matrices
Definition 8. Consider two matrices A = (a_ij) and B = (b_ij). These two
matrices are equal if and only if:
• they are of the same order;
• ∀(i, j), a_ij = b_ij.

1.4.2 Addition and subtraction of matrices

Definition 9. Let A = (a_ij) and B = (b_ij) be two matrices of order (m, n).
• Their sum A + B is defined by the matrix C = (c_ij) such that
  ∀(i, j), c_ij = a_ij + b_ij.
• Their difference A − B is defined by the matrix D = (d_ij) such that
  ∀(i, j), d_ij = a_ij − b_ij.
Example 1.7. Consider the following matrices A and B:

    A = [ 15  25 ]        B = [   1  -20 ]
        [ 39  47 ]            [   9    7 ]
        [  2  -8 ]            [ -10   12 ]

    A + B = [ 16   5 ]        A - B = [ 14   45 ]
            [ 48  54 ]                [ 30   40 ]
            [ -8   4 ]                [ 12  -20 ]

Note 1. The sum and difference of two matrices of different orders are not
defined.
Definition 10. If A, B and C are matrices of the same order, then:
• Addition of matrices is commutative: A + B = B + A.
• Addition of matrices is associative: A + (B + C) = (A + B) + C.
• The null matrix 0 is the neutral element for addition: A + 0 = 0 + A = A.
• If A + B = 0, then B = −A is called the opposite matrix of A; if A = (a_ij),
  then B = (−a_ij).

Example 1.8. Consider the following matrices A and B:

    A = [ 1  -1 ]        B = [ 6  -5 ]
        [ 3   0 ]            [ 2   1 ]

    A + B = [ 1+6  (-1)+(-5) ] = [ 6+1  (-5)+(-1) ] = B + A
            [ 3+2    0+1     ]   [ 2+3    1+0     ]

    A + B = B + A = [ 7  -6 ]
                    [ 5   1 ]
Example 1.9. Consider the following matrices A, B and C:

    A = [ 1  -1 ]    B = [ 6  -5 ]    C = [ 0  2 ]
        [ 3   0 ]        [ 2   1 ]        [ 2  4 ]

    (A + B) + C = [ 7  -6 ] + [ 0  2 ] = [ 7  -4 ]
                  [ 5   1 ]   [ 2  4 ]   [ 7   5 ]

    A + (B + C) = [ 1  -1 ] + [ 6  -3 ] = [ 7  -4 ]
                  [ 3   0 ]   [ 4   5 ]   [ 7   5 ]
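Purely as an illustration (this sketch is not part of the course text and assumes a Python interpreter with NumPy installed), the commutativity and associativity rules above can be checked numerically on the matrices of Examples 1.8 and 1.9:

```python
import numpy as np

# Matrices from Examples 1.8 and 1.9
A = np.array([[1, -1], [3, 0]])
B = np.array([[6, -5], [2, 1]])
C = np.array([[0, 2], [2, 4]])

# Commutativity: A + B = B + A
assert np.array_equal(A + B, B + A)
# Associativity: (A + B) + C = A + (B + C)
assert np.array_equal((A + B) + C, A + (B + C))

print((A + B).tolist())  # [[7, -6], [5, 1]]
```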

1.4.3 Product of a matrix by a real

Definition 11. If λ ∈ R is a scalar and A = (a_ij) a matrix, the product λ·A
is the matrix of the same order as A, obtained by multiplying each element
a_ij of A by λ:

    λ·A = A·λ = (λ·a_ij)

Example 1.10. Let A be a matrix of order (4, 3):

    A = [  4   6  2 ]              [ 12  18   6 ]
        [  9   2  5 ]   ⇒   3A =  [ 27   6  15 ]
        [ 11   8  3 ]              [ 33  24   9 ]
        [  0  -3  1 ]              [  0  -9   3 ]
Definition 12. Let A and B be two matrices of the same order and λ and µ
two scalars. We have:
• λ(A + B) = λA + λB
• (λ + µ)A = λA + µA
• (λµ)A = λ(µA) = µ(λA)
• The matrix A multiplied by the scalar 1 is equal to A.
• The matrix A multiplied by the scalar 0 is equal to the null matrix.
• The null matrix multiplied by any scalar λ is equal to the null matrix.

1.4.4 Product of two matrices

Definition 13. Let A and B be two matrices.
• The product AB is defined if and only if the number of columns of A is
  equal to the number of rows of B.
• If A is of order (m × n) and B of order (n × q), then the product AB
  is defined by the matrix C = (C_ij) of order (m × q) whose elements are
  obtained by:

    C_ij = a_i1·b_1j + a_i2·b_2j + ... + a_in·b_nj = Σ_{k=1..n} a_ik·b_kj

  for i = 1...m and j = 1...q.

We say that the coefficient C_ij is obtained by multiplying the i-th row of A by
the j-th column of B.
Example 1.11. Let A and B be the following matrices:

    A = [  1  3  0 ]        B = [ 1   2 ]
        [ -1  1  2 ]            [ 0   2 ]
                                [ 0  -1 ]

    A·B = [  1×1 + 3×0 + 0×0    1×2 + 3×2 + 0×(-1) ] = [  1   8 ]
          [ -1×1 + 1×0 + 2×0   -1×2 + 1×2 + 2×(-1) ]   [ -1  -2 ]
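The row-by-column rule of Definition 13 can be spelled out directly in code. The following is an illustrative sketch (not part of the course; it assumes NumPy is available for the cross-check) using the matrices of Example 1.11:

```python
import numpy as np

A = [[1, 3, 0], [-1, 1, 2]]    # order (2 x 3)
B = [[1, 2], [0, 2], [0, -1]]  # order (3 x 2)

# C_ij = sum over k of a_ik * b_kj : i-th row of A times j-th column of B
C = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(2)]
     for i in range(2)]
print(C)  # [[1, 8], [-1, -2]]

# Cross-check with NumPy's built-in matrix product
assert np.array_equal(np.array(A) @ np.array(B), np.array(C))
```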

Definition 14. If A, B and C are three matrices whose products and sums are
defined, we have:
1. (AB)C = A(BC) (associativity)

2. A·(B + C) = AB + AC (distributivity on the right)

3. (A + B)·C = AC + BC (distributivity on the left)
4. If I_n and I_p are the unit matrices of order n and p and A a matrix of
order (n, p), then

    I_n·A = A   and   A·I_p = A

Example 1.12. Let A, B and C be three matrices of order 2.

    A = [ 1  -1 ]    B = [ 6  -5 ]    C = [ 0  2 ]
        [ 3   0 ]        [ 2   1 ]        [ 2  4 ]

    (A·B)·C = [  4   -6 ] · [ 0  2 ] = [ -12  -16 ]
              [ 18  -15 ]   [ 2  4 ]   [ -30  -24 ]

    A·(B·C) = [ 1  -1 ] · [ -10  -8 ] = [ -12  -16 ]
              [ 3   0 ]   [   2   8 ]   [ -30  -24 ]

Note 2.
• The multiplication of two matrices is not commutative.

• The division of two matrices does not exist, but we will see that
  multiplication by the inverse of a matrix has similarities with division
  in R.

1.5 Transpose of a matrix

1.5.1 Definition
Definition 15. Let A = (a_ij) be a matrix of order (n × p). We call the
transpose of A the matrix ᵗA of order (p × n) defined by

    ᵗA = (a'_ij),   with a'_ij = a_ji,   ∀1 ≤ i ≤ p and ∀1 ≤ j ≤ n


1.6 Properties of the transpose

Example 1.13. Consider the matrix A of order (2 × 3):

    A = [ 1  3   2 ]   ⇒   ᵗA = [ 1   0 ]
        [ 0  1  -1 ]            [ 3   1 ]
                                [ 2  -1 ]

Note 3. The product of a matrix and its transpose is a square matrix.

1.6.1 Properties
Proposal 1. If A and B are two matrices whose sum and product are defined,
then we have the following relations:
• ᵗ(A + B) = ᵗA + ᵗB

• ᵗ(A·B) = ᵗB·ᵗA
• ᵗ(ᵗA) = A

Example 1.14. Let A and B be two matrices of order 2.

    A = [ 1  -1 ]    and    B = [ 6  -5 ]
        [ 3   0 ]               [ 2   1 ]

    ᵗA = [  1  3 ]    and    ᵗB = [  6  2 ]
         [ -1  0 ]                [ -5  1 ]

So

    ᵗA + ᵗB = [  7  5 ]    and    ᵗB·ᵗA = [  4   18 ]
              [ -6  1 ]                   [ -6  -15 ]

    A + B = [ 7  -6 ]   ⇒   ᵗ(A + B) = [  7  5 ]
            [ 5   1 ]                  [ -6  1 ]

    A·B = [  4   -6 ]   ⇒   ᵗ(A·B) = [  4   18 ]
          [ 18  -15 ]                [ -6  -15 ]
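As an illustrative aside (not part of the course; it assumes NumPy is installed), the three relations of Proposal 1 can be verified on the matrices of Example 1.14:

```python
import numpy as np

A = np.array([[1, -1], [3, 0]])
B = np.array([[6, -5], [2, 1]])

assert np.array_equal((A + B).T, A.T + B.T)   # t(A + B) = tA + tB
assert np.array_equal((A @ B).T, B.T @ A.T)   # t(A·B) = tB·tA (order reversed)
assert np.array_equal(A.T.T, A)               # t(tA) = A

print((A @ B).T.tolist())  # [[4, 18], [-6, -15]]
```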

1.7 Symmetric matrix and antisymmetric matrix

Definition 16.

• A square matrix A is said to be symmetric if ᵗA = A.

• A square matrix A is said to be antisymmetric if ᵗA = −A.
Example 1.15.
• A symmetric matrix:

    A = [ -1  2   4 ]
        [  2  5   9 ]
        [  4  9  45 ]

• An antisymmetric matrix:

    A = [  0   2   4   -9 ]
        [ -2   0  -5    3 ]
        [ -4   5   0  -10 ]
        [  9  -3  10    0 ]
Note 4.
1. The elements of the main diagonal of an antisymmetric matrix are zero.
2. Let A be any square matrix:

• A + ᵗA is a symmetric matrix;
• A − ᵗA is an antisymmetric matrix.

Example 1.16. Consider the following square matrix:

    A = [ 1  3   2 ]
        [ 0  1  -1 ]
        [ 2  5   4 ]

    A + ᵗA = [ 1  3   2 ]   [ 1   0  2 ]   [ 2  3  4 ]
             [ 0  1  -1 ] + [ 3   1  5 ] = [ 3  2  4 ]
             [ 2  5   4 ]   [ 2  -1  4 ]   [ 4  4  8 ]

    A - ᵗA = [ 1  3   2 ]   [ 1   0  2 ]   [  0  3   0 ]
             [ 0  1  -1 ] - [ 3   1  5 ] = [ -3  0  -6 ]
             [ 2  5   4 ]   [ 2  -1  4 ]   [  0  6   0 ]
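The two facts of Note 4 can be checked numerically on the matrix of Example 1.16. The following is a small illustrative sketch (not from the course text; it assumes NumPy is available):

```python
import numpy as np

A = np.array([[1, 3, 2], [0, 1, -1], [2, 5, 4]])

S = A + A.T   # symmetric:      S equals its transpose
K = A - A.T   # antisymmetric:  K equals minus its transpose

assert np.array_equal(S, S.T)
assert np.array_equal(K, -K.T)
assert np.all(np.diag(K) == 0)  # diagonal of an antisymmetric matrix is zero

print(S.tolist())  # [[2, 3, 4], [3, 2, 4], [4, 4, 8]]
```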

1.8 Trace of a square matrix

1.8.1 Definition
Definition 17. The trace of a square matrix is the sum of its diagonal elements.
If A = (a_ij) is a square matrix of order n, we define its trace as follows:

    tr(A) = Σ_{i=1..n} a_ii

Example 1.17. Consider the square matrix of the previous example:

    A = [ 1  3   2 ]
        [ 0  1  -1 ]   ⇒   tr(A) = 1 + 1 + 4 = 6
        [ 2  5   4 ]

1.8.2 Properties
Proposal 2.
• tr(ᵗA) = tr(A).
• If A and B are two square matrices of the same order, then we have:
  tr(A + B) = tr(A) + tr(B).

• If A is of order (m, n) and B of order (n, m), then we have:
  tr(A·B) = tr(B·A).

Example 1.18. Given the square matrix of the previous example, verify that
tr(A) = tr(ᵗA):

    A = [ 1  3   2 ]
        [ 0  1  -1 ]   ⇒   tr(A) = 1 + 1 + 4 = 6
        [ 2  5   4 ]

    ᵗA = [ 1   0  2 ]
         [ 3   1  5 ]   ⇒   tr(ᵗA) = 1 + 1 + 4 = 6 = tr(A)
         [ 2  -1  4 ]

Example 1.19. Given the following two matrices A and B, verify that

    tr(A) + tr(B) = tr(A + B)

    A = [ 1  -1 ]   ⇒   tr(A) = 1 + 0 = 1
        [ 3   0 ]

    B = [ 6  -5 ]   ⇒   tr(B) = 6 + 1 = 7
        [ 2   1 ]

    A + B = [ 7  -6 ]   ⇒   tr(A + B) = 7 + 1 = 8 = tr(B) + tr(A)
            [ 5   1 ]

Example 1.20. Consider the matrix A of order (2, 3) and the matrix B of order
(3, 2) as follows:

    A = [ 2  1  -1 ]        B = [ 0  -1 ]
        [ 1  2   0 ]            [ 1   0 ]
                                [ 1   1 ]

We check that tr(A·B) = tr(B·A):

    A·B = [ 0  -3 ]   ⇒   tr(A·B) = 0 + (-1) = -1
          [ 2  -1 ]

    B·A = [ -1  -2   0 ]
          [  2   1  -1 ]   ⇒   tr(B·A) = -1 + 1 + (-1) = -1
          [  3   3  -1 ]
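The third property of Proposal 2 is worth a numerical check, since A·B and B·A do not even have the same order. The following sketch is illustrative only (it assumes NumPy is installed) and reuses the matrices of Example 1.20:

```python
import numpy as np

A = np.array([[2, 1, -1], [1, 2, 0]])    # order (2, 3)
B = np.array([[0, -1], [1, 0], [1, 1]])  # order (3, 2)

# A·B is (2 x 2) and B·A is (3 x 3), yet their traces agree
assert np.trace(A @ B) == np.trace(B @ A) == -1
print(np.trace(A @ B))  # -1
```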

1.9 Determinant of a square matrix

1.9.1 Determinant of a square matrix of order 2
Definition 18. Let

    A = [ a11  a12 ]
        [ a21  a22 ]

be a square matrix of order 2. The determinant of A is defined by the equality:

    |A| = | a11  a12 | = a11·a22 - a21·a12
          | a21  a22 |

Example 1.21. Find the determinant of the matrix:

    A = [ 3  -2 ]
        [ 1   4 ]

Solution: The determinant is:

    |A| = (3 × 4) - (1 × (-2)) = 12 + 2 = 14
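The order-2 formula translates into a one-line function. This is an illustrative sketch only (the function name is ours, not the course's):

```python
def det2(a11, a12, a21, a22):
    """Determinant of a 2x2 matrix: a11*a22 - a21*a12 (Definition 18)."""
    return a11 * a22 - a21 * a12

# Example 1.21
print(det2(3, -2, 1, 4))  # 14
```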

1.9.2 Determinant of a square matrix of order 3 by the method of
Sarrus
Definition 19. Let

    A = [ a11  a12  a13 ]
        [ a21  a22  a23 ]
        [ a31  a32  a33 ]

be a square matrix of order 3. Sarrus' rule consists in rewriting part of the
matrix A and reading off products along diagonals:

• Rewrite the first two rows below the matrix A to obtain the following
  row-augmented array:

    a11  a12  a13
    a21  a22  a23
    a31  a32  a33
    a11  a12  a13
    a21  a22  a23

It then suffices to form the product of the coefficients along each diagonal of
three elements, adding the product when the diagonal is descending and
subtracting it when the diagonal is ascending:

    |A| = a11·a22·a33 + a21·a32·a13 + a31·a12·a23
          - a21·a12·a33 - a11·a32·a23 - a31·a22·a13

• Alternatively, rewrite the first two columns to the right of the matrix A to
  obtain the column-augmented array:

    a11  a12  a13  a11  a12
    a21  a22  a23  a21  a22
    a31  a32  a33  a31  a32

and again add the products along the descending diagonals and subtract the
products along the ascending diagonals:

    |A| = a11·a22·a33 + a12·a23·a31 + a13·a21·a32
          - a33·a21·a12 - a32·a23·a11 - a31·a22·a13

Note 5. WARNING: Sarrus' method only works for square matrices of order 3.
Example 1.22. Calculate the determinant of the matrix

    A = [ 2  -3   1 ]
        [ 3   2  -3 ]
        [ 5   4  -2 ]

Solution: Using the row-augmented array

    2  -3   1
    3   2  -3
    5   4  -2
    2  -3   1
    3   2  -3

we obtain

    |A| = 2×2×(-2) + 3×4×1 + 5×(-3)×(-3) - 3×(-3)×(-2) - 2×4×(-3) - 5×2×1
        = -8 + 12 + 45 - 18 + 24 - 10 = 45

The column-augmented array

    2  -3   1   2  -3
    3   2  -3   3   2
    5   4  -2   5   4

gives the same value for the determinant.
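Sarrus' rule is easy to encode. The following sketch is illustrative only (the function name is ours); it applies the second form of the formula and reproduces Example 1.22:

```python
def det3_sarrus(m):
    """Determinant of a 3x3 matrix by Sarrus' rule: add the products along
    the descending diagonals, subtract those along the ascending diagonals."""
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = m
    return (a11 * a22 * a33 + a12 * a23 * a31 + a13 * a21 * a32
            - a33 * a21 * a12 - a32 * a23 * a11 - a31 * a22 * a13)

# Example 1.22
print(det3_sarrus([[2, -3, 1], [3, 2, -3], [5, 4, -2]]))  # 45
```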

1.9.3 Determinant of a square matrix of order n > 2 by the method
of cofactors
To calculate the determinant of a matrix of order n > 2, we introduce the
notions of sub-matrix and cofactor, which make it possible to reduce the
calculation of a determinant of order n to the calculation of determinants
of order 2.
Definition 20. 1. Let A be a square matrix of order n. We call sub-matrix
A_ij the matrix of order (n − 1) obtained by deleting the i-th row and the
j-th column of matrix A.

2. The determinant of a sub-matrix A_ij is called the minor of the element a_ij.
3. The signature of an element a_ij is given by (−1)^(i+j).
4. The cofactor of an element a_ij is the product of its minor by its signature, i.e.:

    (−1)^(i+j)·|A_ij|

Example 1.23. Find the sub-matrix A23, the minor of a12 and the cofactor of
a32 of the matrix:

    [  5   2  -3 ]
    [ -6   4   7 ]
    [  8  -1  -4 ]

Solution: The sub-matrix

    A23 = [ 5   2 ]
          [ 8  -1 ]

is obtained by deleting row 2 and column 3 of the matrix A. The minor
of a12 is the determinant of the sub-matrix A12, i.e.:

    |A12| = | -6   7 | = 24 - 56 = -32
            |  8  -4 |

The cofactor of the element a32 is the product of the minor of a32 and its
signature, that is:

    (-1)^(3+2)·|A32| = (-1)^5 · |  5  -3 | = -1·(35 - 18) = -17
                                | -6   7 |

Note 6. The signature is positive when (i + j) is even and negative when (i + j)
is odd. In practice, when the signature of an element is negative, we change the
sign of its minor to obtain the cofactor; when the signature is positive, we keep
the same sign. The signature alternates from one element to the next; the
pattern of signatures for a square matrix of order n is:

    [ +           -           ...  (-1)^(1+j)  ...  (-1)^(1+n) ]
    [ -           +           ...  (-1)^(2+j)  ...  (-1)^(2+n) ]
    [ ...         ...         ...  ...         ...  ...        ]
    [ (-1)^(i+1)  ...         ...  (-1)^(i+j)  ...  (-1)^(i+n) ]
    [ ...         ...         ...  ...         ...  ...        ]
    [ (-1)^(n+1)  ...         ...  (-1)^(n+j)  ...  +          ]

Definition 21. The determinant of a square matrix of order n is obtained by
calculating the sum of the products of each element of a row (or column) by its
respective cofactor.

Theorem 1. If A is a square matrix of order n and Δ_ij = (−1)^(i+j)·|A_ij|
denotes the cofactor of a_ij, then

    |A| = a_i1·Δ_i1 + a_i2·Δ_i2 + ... + a_in·Δ_in

for every row i, 1 ≤ i ≤ n. Likewise,

    |A| = a_1j·Δ_1j + a_2j·Δ_2j + ... + a_nj·Δ_nj

for every column j, 1 ≤ j ≤ n.
Example 1.24. Calculate the determinant of the matrix:

    [ 3  -2  1 ]
    [ 2  -3  4 ]
    [ 5  -1  2 ]

Solution: Let's expand the determinant along the first row (the signature of
a12 is negative, which turns −(−2) into +2):

    |A| = 3·| -3  4 |  + 2·| 2  4 |  + 1·| 2  -3 |
            | -1  2 |      | 5  2 |      | 5  -1 |

So

    |A| = 3(-6 + 4) + 2(4 - 20) + 1(-2 + 15) = -6 - 32 + 13 = -25

Example 1.25. Calculate the determinant of the matrix:

    A = [ 3  -2  1  0 ]
        [ 2  -3  4  0 ]
        [ 5  -1  2  0 ]
        [ 4   3  1  9 ]

Solution: Let's expand the determinant along the fourth column, whose only
non-zero element is a44 = 9:

    |A| = (-1)^(4+4) × 9 × | 3  -2  1 |
                           | 2  -3  4 |  = 9 × (-25) = -225
                           | 5  -1  2 |
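The cofactor expansion of Theorem 1 lends itself to a recursive function: expand along the first row, deleting that row and one column at a time. This sketch is illustrative only (the function name is ours), and it reproduces Examples 1.24 and 1.25:

```python
def det(m):
    """Determinant by cofactor expansion along the first row (recursive)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    if n == 2:
        return m[0][0] * m[1][1] - m[1][0] * m[0][1]
    total = 0
    for j in range(n):
        # sub-matrix A_1j: delete row 1 and column j
        sub = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(sub)  # signature (-1)^(1+j)
    return total

print(det([[3, -2, 1], [2, -3, 4], [5, -1, 2]]))  # -25  (Example 1.24)
print(det([[3, -2, 1, 0], [2, -3, 4, 0],
           [5, -1, 2, 0], [4, 3, 1, 9]]))         # -225 (Example 1.25)
```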

1.9.5 Properties of determinants

The properties of determinants make it possible to simplify their calculation,
especially for determinants of high order.
Theorem 2. Let A be a square matrix of order n.
1. If all the elements of a row (or of a column) of A are zero, then |A| = 0.

2. If two rows or two columns of A are proportional, then |A| = 0:

    | a11   a12   a13  |
    | a21   a22   a23  | = 0
    | ka11  ka12  ka13 |

3. If we permute two rows or two columns, the determinant is multiplied
by (−1):

    | a11  a12  a13 |     | a21  a22  a23 |
    | a21  a22  a23 | = - | a11  a12  a13 |
    | a31  a32  a33 |     | a31  a32  a33 |

4. If each element of a row (or column) of a matrix A is the sum of two
quantities, then the determinant of A can be written as a sum of two
determinants:

    | a11      a12      a13     |   | a11  a12  a13 |   | a11  a12  a13 |
    | a21      a22      a23     | = | a21  a22  a23 | + | a21  a22  a23 |
    | a31+b31  a32+b32  a33+b33 |   | a31  a32  a33 |   | b31  b32  b33 |

5. If one row (resp. one column) is a linear combination of the other rows
(resp. of the other columns), then |A| = 0:

    | a11        a12        a13       |
    | a21        a22        a23       | = 0
    | αa11+βa21  αa12+βa22  αa13+βa23 |

6. The determinant does not change if we add to a row (resp. to a column)
a linear combination of the other rows (resp. of the other columns):

    | a11+ka11... |
    | a21+ka11   a22+ka12   a23+ka13 |

    | a11       a12       a13      |   | a11  a12  a13 |
    | a21+ka11  a22+ka12  a23+ka13 | = | a21  a22  a23 |
    | a31       a32       a33      |   | a31  a32  a33 |

7. Let A and B be two matrices that differ only in that the elements of one
row (or of one column) of B are obtained by multiplying by k the elements
with the same address in A; then |B| = k·|A|:

    | a11   a12   a13  |       | a11  a12  a13 |
    | ka21  ka22  ka23 | = k · | a21  a22  a23 |
    | a31   a32   a33  |       | a31  a32  a33 |

8. det(A) = det(ᵗA).
9. det(A·B) = det(A)·det(B).

10. The determinant of a triangular matrix is equal to the product of the
elements of the main diagonal.
11. The determinant of a diagonal matrix is equal to the product of the
elements of the main diagonal.
Note 7. The determinant of an identity matrix (whatever its order) is equal
to 1.

1.10 The comatrix of a matrix

Definition 22. Let A = (a_ij) be a square matrix of order n. We call the
comatrix (or adjoint matrix) of A the square matrix of order n, denoted com(A)
(or Adj(A)), defined by:

    com(A) = [ Δ11  Δ12  ...  Δ1n ]
             [ Δ21  Δ22  ...  Δ2n ]
             [ ...  ...  ...  ... ]
             [ Δn1  Δn2  ...  Δnn ]

where Δij = (−1)^(i+j)·|A_ij| is the cofactor associated with the element a_ij
of A.
Example 1.26. Let

    A = [  2  -4 ]
        [ -3   1 ]

    com(A) = [ (-1)^(1+1)·|A11|   (-1)^(1+2)·|A12| ] = [ 1  3 ]
             [ (-1)^(2+1)·|A21|   (-1)^(2+2)·|A22| ]   [ 4  2 ]

Example. Let

    B = [  1   2   3 ]
        [  0   1   2 ]
        [ -1  -4  -1 ]

Then

    com(B) = [   7  -2  1 ]
             [ -10   2  2 ]
             [   1  -2  1 ]
1.12 Inverse of a matrix

1.12.1 Definition and examples
Definition 23. We call the inverse of the square matrix A of order n the
matrix, if it exists, denoted A⁻¹, such that

    A·A⁻¹ = A⁻¹·A = I_n

It is obtained by the following relation:

    A⁻¹ = (1/|A|) · ᵗcom(A)

Note 8. A matrix is invertible if and only if its determinant is nonzero.

Example 1.27. Let A be a matrix of order 2.

    A = [ 3  1 ]
        [ 2  0 ]

To calculate its inverse, we start by calculating its determinant:

    det(A) = (3 × 0) - (2 × 1) = -2

then the comatrix of A and its transpose:

    com(A) = [  0  -2 ]   ⇒   ᵗcom(A) = [  0  -1 ]
             [ -1   3 ]                 [ -2   3 ]

and finally the inverse of A:

    A⁻¹ = (1/det(A)) · ᵗcom(A) = -(1/2) · [  0  -1 ] = (1/2) · [ 0   1 ]
                                          [ -2   3 ]           [ 2  -3 ]
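The formula of Definition 23 can be implemented directly: build the comatrix cofactor by cofactor, transpose it, and divide by the determinant. This is an illustrative sketch only (the function name is ours; it assumes NumPy is installed, and uses `numpy.linalg.det` for the minors):

```python
import numpy as np

def inverse_via_comatrix(A):
    """A^{-1} = (1/|A|) * transpose(com(A)), for an invertible square A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("matrix is not invertible")
    com = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # minor of a_ij: determinant of A with row i and column j deleted
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            com[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return com.T / d

A = [[3, 1], [2, 0]]
inv = inverse_via_comatrix(A)
assert np.allclose(inv, [[0, 0.5], [1, -1.5]])    # matches Example 1.27
assert np.allclose(np.array(A) @ inv, np.eye(2))  # A·A^{-1} = I
```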

1.12.2 Some properties of the inverse

Properties 1. Let A be an invertible matrix of order n.

• The inverse A⁻¹ of A is unique.

• A⁻¹ is invertible and (A⁻¹)⁻¹ = A.
• ᵗA is invertible and (ᵗA)⁻¹ = ᵗ(A⁻¹).

• det(A⁻¹) = 1/det(A).

Properties 2. Let A and B be two matrices of order n.

• If A is invertible, then AB = 0 ⇒ B = 0.

• If A and B are invertible, then A·B is invertible and

    (A·B)⁻¹ = B⁻¹·A⁻¹

1.13 Rank of a matrix
1.13.1 Definitions
Definition 24. (Sub-matrix) Let A be a matrix of order (m × n). We call a
sub-matrix of A any matrix obtained from A by deleting one or more rows
(or columns) of A.
Definition 25. (Rank of a matrix) Let A be a matrix of order (m × n). We call
the rank of A the order of the largest square sub-matrix whose determinant
is non-zero.

Note 9.
• The rank of a matrix A is denoted rg(A).
• If A is of order (m × n), then rg(A) ≤ min(m, n).

1.13.2 Examples
Example 1.28. If A is the following matrix of order (2 × 3), find its rank:

    A = [ 1  3  2 ]
        [ 1  3  1 ]

The square sub-matrix of order 2 composed of the second and third columns,

    A1 = [ 3  2 ]
         [ 3  1 ]

is invertible: det(A1) = 3 − 6 = −3 ≠ 0. There therefore exists at least one
sub-matrix of order 2 whose determinant is different from zero. Consequently
rg(A) = 2.
Example 1.29. Let

    B = [  1  0   1 ]
        [  0  5  -1 ]
        [ -1  0  -1 ]

Note that L3 = −L1, therefore det(B) = 0, which implies that rg(B) < 3. Now
consider the sub-matrix

    B1 = [ 5  -1 ]
         [ 0  -1 ]

    det(B1) = -5 ≠ 0   ⇒   rg(B) = 2

Note 10.
• The rank of an invertible square matrix of order n is equal to n (rg(A) = n).

• If all the elements of a matrix are zero, the rank is said to be zero.
• If a matrix A of order (m × n) has maximum rank rg(A) = min(m, n),
  we say that the matrix has full rank.
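As an illustrative cross-check of Examples 1.28 and 1.29 (not part of the course; it assumes NumPy is installed), NumPy's `matrix_rank` computes the same ranks:

```python
import numpy as np

A = np.array([[1, 3, 2], [1, 3, 1]])               # Example 1.28
B = np.array([[1, 0, 1], [0, 5, -1], [-1, 0, -1]]) # Example 1.29

assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(B) == 2
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # 2 2
```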

1.13.3 Elementary transformations that do not change the rank of
a matrix
Calculating the rank of a matrix directly can be a very long operation. Take
the case of a square matrix of order n. First we need to calculate the
determinant of the matrix itself. If the determinant is different from zero,
the rank is n; but if it is equal to zero, it is necessary to calculate the
determinants of the sub-matrices of order (n − 1), of which there are n². If all
these sub-matrices have a zero determinant, it is necessary to pass to the
sub-matrices of order (n − 2), and so on. For this reason, we are going to
see a method that facilitates the determination of the rank of a matrix.
This method is based on elementary transformations. There are three types of
elementary transformations:
1. Swapping two rows (columns):

    Li ↔ Lj   (Ci ↔ Cj),   i ≠ j

2. Multiplying a row (column) by a non-zero real:

    Li ← αLi   (Ci ← αCi),   α ≠ 0

3. Adding to a row (column) a multiple of another row (column):

    Li ← Li + αLj   (Ci ← Ci + αCj),   i ≠ j

These elementary transformations do not change the rank of the matrix. Thus,
we will use these transformations to reduce a matrix A to its normal form.

1.13.4 Rank properties

Theorem 3. Consider the matrices A of order (m, n), B of order (m, n) and
D of order (n, p).
1. rank(A) = rank(ᵗA)
2. rank(A) ≤ min(m, n)
3. rank(A + B) ≤ rank(A) + rank(B)

4. rank(AD) ≤ min(rank(A), rank(D))

1.14 The normal form of a matrix
1.14.1 Definition and examples
Definition 26. If A is a matrix of order (m × n), its normal form is the block
matrix

    [ I_r  0 ]
    [ 0    0 ]

where I_r is the identity matrix of order r, the zero block to its right has r
rows and (n − r) columns, and the two bottom zero blocks have (m − r) rows.

The rank of this matrix is r, because the largest invertible sub-matrix is of
order r.

Example 1.30. 1. The normal form of an invertible third-order square matrix is:

    I_3 = [ 1  0  0 ]
          [ 0  1  0 ]
          [ 0  0  1 ]

2. The normal form of a matrix of order (3 × 2) of rank 2 is:

    [ 1  0 ]
    [ 0  1 ]
    [ 0  0 ]

1.14.2 Technique for finding the normal form of a matrix

Given a matrix A = (a_ij) of order (m × n), we can obtain the normal form of
A and, consequently, its rank by proceeding in a systematic way:

1. Use an elementary transformation of type 1 if the element at position
(1, 1) is zero:

    if a11 = 0, L1 ↔ Li for some i such that ai1 ≠ 0

2. To get a11 = 1:

    L1 ← (1/a11)·L1

3. To get ai1 = 0 for i ≠ 1:

    Li ← Li − ai1·L1

4. To get a1j = 0 for j ≠ 1:

    Cj ← Cj − a1j·C1

5. Repeat points 1 to 4 for the element a22.

6. Continue the process along the main diagonal.

7. The process ends when the end of the diagonal is reached and all non-zero
elements have been used.
Example 1.31. Using the normal form of the following matrix, find its rank.

    D = [ 0  2  1  2 ]
        [ 3  3  6  9 ]
        [ 4  3  3  5 ]

We apply points 1 to 4 for the element a11:

    L1 ↔ L2:                  D = [ 3  3  6  9 ]
                                  [ 0  2  1  2 ]
                                  [ 4  3  3  5 ]

    L1 ← (1/3)·L1:            D = [ 1  1  2  3 ]
                                  [ 0  2  1  2 ]
                                  [ 4  3  3  5 ]

    L2 ← L2 − 0·L1,           D = [ 1   1   2   3 ]
    L3 ← L3 − 4·L1:               [ 0   2   1   2 ]
                                  [ 0  -1  -5  -7 ]

    C2 ← C2 − 1·C1,           D = [ 1   0   0   0 ]
    C3 ← C3 − 2·C1,               [ 0   2   1   2 ]
    C4 ← C4 − 3·C1:               [ 0  -1  -5  -7 ]

We repeat points 1 to 4 for the element (2, 2). Point 1 is not necessary because
the element (2, 2) = 2 is non-zero.

    L2 ← (1/2)·L2:            D = [ 1   0    0    0 ]
                                  [ 0   1   1/2   1 ]
                                  [ 0  -1   -5   -7 ]

    L3 ← L3 + L2:             D = [ 1   0    0     0 ]
                                  [ 0   1   1/2    1 ]
                                  [ 0   0  -9/2   -6 ]

    C3 ← C3 − (1/2)·C2,       D = [ 1   0    0     0 ]
    C4 ← C4 − C2:                 [ 0   1    0     0 ]
                                  [ 0   0  -9/2   -6 ]

Let's apply point 2 to row 3, then point 4 to column 4:

    L3 ← (−2/9)·L3:           D = [ 1  0  0   0  ]
                                  [ 0  1  0   0  ]
                                  [ 0  0  1  4/3 ]

    C4 ← C4 − (4/3)·C3:       D = [ 1  0  0  0 ]
                                  [ 0  1  0  0 ]
                                  [ 0  0  1  0 ]

So rg(D) = 3.
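The rank can also be obtained by a row-only version of the same pivoting idea: normalize a pivot, clear its column, and count how many pivots survive. The following sketch is illustrative only (the function name is ours, and it uses row operations only rather than the full row-and-column reduction of the normal form):

```python
def rank(m, eps=1e-10):
    """Rank by row reduction: count the pivots that survive elimination."""
    m = [list(map(float, row)) for row in m]
    rows, cols = len(m), len(m[0])
    r = 0  # index of the current pivot row
    for c in range(cols):
        # step 1: find a row with a non-zero entry in column c
        pivot = next((i for i in range(r, rows) if abs(m[i][c]) > eps), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        p = m[r][c]
        m[r] = [x / p for x in m[r]]          # step 2: make the pivot 1
        for i in range(rows):                 # step 3: clear the column
            if i != r:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

D = [[0, 2, 1, 2], [3, 3, 6, 9], [4, 3, 3, 5]]
print(rank(D))  # 3, as in Example 1.31
```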

2 Solving Systems of Linear Equations

2.1 Introduction
Solving a system of linear equations is a problem that has preoccupied
mathematicians in almost every era of history. All sorts of solutions have been
imagined, but the invention of matrices made it possible to systematize the
methods of solution. In this chapter, we will see that matrix calculation is a
powerful tool, well adapted to this kind of problem: it increases the speed with
which such systems can be solved and considerably simplifies the computations.

2.2 Definitions
Definition 27.
• A linear equation with p unknowns is any equation of the form:

    a1·x1 + a2·x2 + ... + ap·xp = b

where x1, x2, ..., xp are the unknowns and a1, a2, ..., ap, b are numerical
coefficients belonging to R.
• A p-tuple (c1, c2, ..., cp) of elements of R verifies the equation, or is a
solution of the equation, if the equality a1·c1 + a2·c2 + ... + ap·cp = b holds.

• The equation 0·x1 + 0·x2 + ... + 0·xp = β is impossible if β ≠ 0 and
indeterminate if β = 0.
Note 11. When the equation has few unknowns, they are denoted for convenience
x, y, z, t, ...

Definition 28.
• We call a system of n linear equations with p unknowns the collection of
equations:


    (S):  a11·x1 + ... + a1j·xj + ... + a1p·xp = b1
          a21·x1 + ... + a2j·xj + ... + a2p·xp = b2
          ...
          ai1·x1 + ... + aij·xj + ... + aip·xp = bi
          ...
          an1·x1 + ... + anj·xj + ... + anp·xp = bn

where the coefficients aij and bi (1 ≤ i ≤ n, 1 ≤ j ≤ p) are elements of R
and the unknowns are x1, ..., xp.
• The i-th equation, ai1·x1 + ... + aij·xj + ... + aip·xp = bi, is denoted Li
and called the i-th row of (S).

• A homogeneous system is a system whose right-hand sides are all zero:
∀1 ≤ i ≤ n, bi = 0.
• A p-tuple (c1, c2, ..., cp) is a solution of the system (S) if it is a solution
of each of the n equations composing the system.

• Solving the system (S) consists in finding the set of solutions of (S).
• A system with no solution is said to be impossible.
• A system with several solutions is said to be indeterminate.
Note 12. A homogeneous system always admits the zero solution (0, 0, ..., 0).

Definition 29. Two systems S and S' are equivalent if they have the same set
of solutions.

2.3 Elementary operations on the rows of a system

Let (S) be a system of n linear equations with p unknowns.
Definition 30. We call an elementary operation on the rows of a system:
• swapping two rows: Li ↔ Lj (i ≠ j);

• multiplying a row by a nonzero real: Li ← αLi (α ≠ 0);

• adding to a row a multiple of another row: Li ← Li + αLj (i ≠ j).
Proposal 3. By performing an elementary operation on the rows of a system
(S), we obtain a system (S') that is equivalent to it.

Note 13. Be careful to perform only one elementary operation at a time.

2.4 Matrix writing of a linear system
 
x1
 x2 
By introducing an unknown column array X =  , we can write the
 
..
 . 
xp
system (S) as an equation between matrices:

AX = b

With    
a11 a12 ... a1p b1
 a21 a22 ... a2p   b2 
A= and b=
   
.. .. .. ..  .. 
 . . . .   . 
an1 an2 ... anp bn

Note 14. .
• A is a matrix of order (n,p) called matrix of the system S
• X is a column-vector with p components called vector of unknowns
• b is an n-component column-vector called the vector of constants.
Definition 31. (Rank of a linear system). We call the rank of a linear system,
that of its matrix.
Example 2.1. Write the following system of linear equations S1 in matrix form:

(S1)   x1 − 2x2 + 3x3 = 16
       2x1 + 2x2 − x3 = 0
       x1 + x2 + 5x3 = 11
       x1 + x2 − 6x3 = −11
We have four equations (n = 4) and three unknowns (p = 3). The matrix of coefficients A is therefore a matrix of order (4, 3):

    [ 1 −2  3 ]
A = [ 2  2 −1 ]
    [ 1  1  5 ]
    [ 1  1 −6 ]



The vector of unknowns X is a vector with three components:

    [ x1 ]
X = [ x2 ]
    [ x3 ]

The vector of constants b is a vector with four components:

    [  16 ]
b = [   0 ]
    [  11 ]
    [ −11 ]
We finally have the following system:

[ 1 −2  3 ]   [ x1 ]   [  16 ]
[ 2  2 −1 ] · [ x2 ] = [   0 ]
[ 1  1  5 ]   [ x3 ]   [  11 ]
[ 1  1 −6 ]            [ −11 ]
The rank of the system S1 is equal to three since the rank of its associated matrix A is equal to three. Indeed, the determinant of the submatrix

[ 1 −2  3 ]
[ 1  1  5 ]
[ 1  1 −6 ]

is different from 0. Performing the operation L3 = L3 − L2, we obtain

| 1 −2  3 |   | 1 −2   3 |
| 1  1  5 | = | 1  1   5 | = −11 (−1)^{3+3} | 1 −2 | = −11(1 + 2) = −33 ≠ 0
| 1 −2 −6 |   | 0  0 −11 |                  | 1  1 |
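If a numerical cross-check is wanted, the rank claim can be verified with numpy (assumed available, not part of the course material); np.linalg.matrix_rank estimates the rank from the singular values:

```python
import numpy as np

# coefficient matrix of system S1 (4 equations, 3 unknowns)
A = np.array([[1, -2, 3],
              [2, 2, -1],
              [1, 1, 5],
              [1, 1, -6]])

rank = np.linalg.matrix_rank(A)   # rank of A
sub = A[[0, 2, 3], :]             # submatrix built from rows 1, 3, 4
d = np.linalg.det(sub)            # its determinant

print(rank)      # 3
print(round(d))  # -33, non-zero as computed by hand
```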

2.6 Solving linear systems by Cramer’s method


Cramer’s method, also known as the determinants method, is a method which is
simple in its application, but which can become very tedious when the number of
variables becomes high since it is based solely on the calculation of determinants.

2.6.1 Definition of a Cramer system


Definition 32. We say that a linear system (S) is a Cramer system if the number of its unknowns p is equal to the number of its equations n (i.e. a system of n equations with n unknowns) and in addition its order is equal to its rank r (n = r).
Proposal 4. • A linear system of n equations with n unknowns is Cramer if and only if the determinant of the associated matrix is nonzero.
• A linear system of n equations with n unknowns is Cramer if and only if it has a unique solution.

2.6.2 Principle of Cramer’s method


Cramer’s method calculates the values of the unknowns by following three suc-
cessive steps:
1. Calculate the determinant of the matrix A associated with the system S.
This result must be different from 0 otherwise the system has no solution.

2. Calculate the determinant of the matrix Ai obtained by replacing the i-th column by the elements of the column vector of the second member b.
3. These two preliminary operations make it possible to obtain the solutions of the system S by setting

xi = |Ai| / |A|
Example 2.2. Solve by Cramer's method the following system of equations:

2x1 − 3x2 + 5x3 = −8
4x1 − 2x2 + x3 = 12
x1 + 5x2 − x3 = −3

Solution: The determinant of the coefficient matrix is:

| 2 −3  5 |
| 4 −2  1 | = 89
| 1  5 −1 |

The determinant being different from zero, the solution is unique and given by:

     | −8 −3  5 |           | 2 −8  5 |            | 2 −3 −8 |
     | 12 −2  1 |           | 4 12  1 |            | 4 −2 12 |
     | −3  5 −1 |           | 1 −3 −1 |            | 1  5 −3 |
x1 = ----------- = 3   x2 = ----------- = −2   x3 = ----------- = −4
         89                     89                      89
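The three steps above translate directly into code. The sketch below (assuming numpy is available) is a generic Cramer solver, checked against Example 2.2:

```python
import numpy as np

def cramer(A, b):
    """Cramer's rule: x_i = |A_i| / |A|, where A_i is A with
    its i-th column replaced by the second member b."""
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("det(A) = 0: the system is not a Cramer system")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b              # replace column i by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2., -3., 5.],
              [4., -2., 1.],
              [1., 5., -1.]])
b = np.array([-8., 12., -3.])
x = cramer(A, b)
print(np.round(x))  # [ 3. -2. -4.]
```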

2.7 Solving linear systems by the Gaussian method

The object of this section is to present a systematic method allowing any system to be transformed into an equivalent system that is easy to solve. This is the Gaussian elimination method, which consists of eliminating x1 from the second line downwards, then x2 from the third line downwards, and so on. This method of resolution therefore consists in constructing a sequence of equivalent systems until an echelon system is obtained.

2.7.3 Gaussian method technique


Let S be a system of n linear equations with p unknowns.
1. We swap the rows from 1 to n so that the coefficient of x1 in the first row is non-zero. This is our pivot a11.
2. A multiple of the first line is subtracted from the i-th line so that the coefficient of x1 in the i-th line is zero:

   Li ↔ Li − (ai1/a11) L1   (2 ≤ i ≤ n)

   The system (S) is transformed into the equivalent system {L1, (S1)}, where (S1) is a system of (n − 1) equations in (p − 1) unknowns: x2, x3, . . . , xp.
3. We apply the method described in 1. and 2. to (S1), which will in turn be transformed into an equivalent system {L2, (S2)}, where (S2) is a system of (n − 2) equations in (p − 2) unknowns: x3, x4, . . . , xp.
4. The process stops because at each step the number of equations and the number of unknowns decrease.
5. We find at the end a triangular system; it is then enough to "go back up".
Note 15.
• Choice of pivots: at each step, if possible, it is in your interest to choose a simple pivot (1 is often the best choice).
• If during a step an equation of the form 0 = β appears:
  – If β ≠ 0, the process stops because (S) is impossible.
  – If β = 0, we can delete the equation (0 = 0) and continue the solution.
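The technique above can be sketched as code. This minimal implementation (assuming numpy) always swaps in the row with the largest available pivot, a slight refinement of step 1 that avoids both zero pivots and rounding trouble:

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination then back-substitution, for a square
    system with a unique solution (assumption of this sketch)."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n):
        # swap rows so the pivot M[k, k] is non-zero (largest in column)
        p = k + int(np.argmax(np.abs(M[k:, k])))
        M[[k, p]] = M[[p, k]]
        # L_i <- L_i - (a_ik / a_kk) L_k for the rows below the pivot
        for i in range(k + 1, n):
            M[i] -= (M[i, k] / M[k, k]) * M[k]
    # "go back up" the triangular system
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = (M[k, -1] - M[k, k + 1:n] @ x[k + 1:]) / M[k, k]
    return x

# the system of Example 2.3; expected solution (-1, 0, 2)
A = np.array([[0, 2, 1], [1, 2, 1], [1, -2, 2]])
b = np.array([2, 1, 3])
print(gauss_solve(A, b))  # [-1.  0.  2.]
```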

2.8 Application of Gauss’s method


Example 2.3. Solve

2x2 + x3 = 2
x1 + 2x2 + x3 = 1
x1 − 2x2 + 2x3 = 3


L1 ↔ L2:
x1 + 2x2 + x3 = 1
2x2 + x3 = 2
x1 − 2x2 + 2x3 = 3

L2 = L2 − (a21/a11) L1 = L2,   L3 = L3 − (a31/a11) L1 = L3 − L1:

(S1)   x1 + 2x2 + x3 = 1
       2x2 + x3 = 2
       −4x2 + x3 = 2


L3 = L3 − (a32/a22) L2 = L3 − (−4/2) L2 = L3 + 2L2:

2x2 + x3 = 2
3x3 = 6

We then obtain the following upper triangular system:

x1 + 2x2 + x3 = 1
2x2 + x3 = 2
3x3 = 6

whose solution is

x1 = 1 − 2x2 − x3 = −1
x2 = (1/2)(2 − x3) = 0
x3 = 2

S = {(−1, 0, 2)}
Example 2.4. Solve

x1 − x4 = 0
x1 + 2x2 + x3 + 2x4 = 3
2x1 + 2x2 + 3x3 = 5
x1 − 2x2 + 2x3 + x4 = 4


L2 = L2 − (a21/a11) L1 = L2 − L1
L3 = L3 − (a31/a11) L1 = L3 − 2L1
L4 = L4 − (a41/a11) L1 = L4 − L1

(S1)   x1 − x4 = 0
       2x2 + x3 + 3x4 = 3
       2x2 + 3x3 + 2x4 = 5
       −2x2 + 2x3 + 2x4 = 4
 

L3 = L3 − (a32/a22) L2 = L3 − (2/2) L2 = L3 − L2
L4 = L4 − (a42/a22) L2 = L4 − (−2/2) L2 = L4 + L2

x1 − x4 = 0
2x2 + x3 + 3x4 = 3
2x3 − x4 = 2
3x3 + 5x4 = 7


L4 = L4 − (a43/a33) L3 = L4 − (3/2) L3:   (13/2) x4 = 4

We get the following triangular system:

x1 − x4 = 0
2x2 + x3 + 3x4 = 3
2x3 − x4 = 2
(13/2) x4 = 4

whose solution is

x1 = x4 = 8/13
x2 = (1/2)(3 − x3 − 3x4) = −1/13
x3 = (1/2)(2 + x4) = 17/13
x4 = 8/13

So S = {(8/13, −1/13, 17/13, 8/13)}
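As a quick cross-check of the back-substitution above, the same system can be handed to numpy's built-in solver (availability of numpy is an assumption; this is not the course method itself):

```python
import numpy as np

# the system of Example 2.4
A = np.array([[1., 0., 0., -1.],
              [1., 2., 1., 2.],
              [2., 2., 3., 0.],
              [1., -2., 2., 1.]])
b = np.array([0., 3., 5., 4.])

x = np.linalg.solve(A, b)
print(np.round(13 * x))  # [ 8. -1. 17.  8.], i.e. x = (8/13, -1/13, 17/13, 8/13)
```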

Example 2.5. Solve

2x1 + 2x2 + x3 = 2
x1 + 2x2 + x3 = 1
x1 − 2x2 − x3 = 1


L1 ↔ L2:
x1 + 2x2 + x3 = 1
2x1 + 2x2 + x3 = 2
x1 − 2x2 − x3 = 1

L2 = L2 − (a21/a11) L1 = L2 − 2L1
L3 = L3 − (a31/a11) L1 = L3 − L1

(S1)   x1 + 2x2 + x3 = 1
       −2x2 − x3 = 0
       −4x2 − 2x3 = 0

L3 = L3 − (a32/a22) L2 = L3 − (−4/−2) L2 = L3 − 2L2:   0 = 0

We get the following upper triangular system:

x1 + 2x2 + x3 = 1
−2x2 − x3 = 0

We then have an infinity of solutions, which have the following form:

x1 = 1 − 2x2 − x3 = 1
x2 = −(1/2) x3 = −(1/2) λ,   λ ∈ R
x3 = λ

S = {(1, −λ/2, λ)}   with λ ∈ R

Example 2.6. Solve

2x1 + x2 − 3x3 + 4x4 = 5
3x1 + 5x2 + 2x3 − 3x4 = 8
−x1 + 3x2 + 8x3 − 11x4 = −7


L1 ↔ L3:
−x1 + 3x2 + 8x3 − 11x4 = −7
3x1 + 5x2 + 2x3 − 3x4 = 8
2x1 + x2 − 3x3 + 4x4 = 5

L2 = L2 − (a21/a11) L1 = L2 + 3L1
L3 = L3 − (a31/a11) L1 = L3 + 2L1

(S1)   −x1 + 3x2 + 8x3 − 11x4 = −7
       14x2 + 26x3 − 36x4 = −13
       7x2 + 13x3 − 18x4 = −9

L3 = L3 − (a32/a22) L2 = L3 − (7/14) L2 = L3 − (1/2) L2:

−x1 + 3x2 + 8x3 − 11x4 = −7
14x2 + 26x3 − 36x4 = −13
0 = −5/2

The last line is impossible, so the system has no solution.

2.8.1 Solving a system of equations using matrices


When trying to solve a system of equations, the names of the variables do not
matter. The important elements in such a system are the number of variables
and the coefficient in front of each of the variables in each of the equations as
well as the elements of the second member. If we have the following system:


(S)   a11 x1 + . . . + a1j xj + . . . + a1p xp = b1
      a21 x1 + . . . + a2j xj + . . . + a2p xp = b2
      . . .
      ai1 x1 + . . . + aij xj + . . . + aip xp = bi
      . . .
      an1 x1 + . . . + anj xj + . . . + anp xp = bn

We can easily associate with it the following matrix (known as the matrix augmented by the second member):

          [ a11 a12 . . . a1p | b1 ]
[A | b] = [ a21 a22 . . . a2p | b2 ]
          [  ⋮   ⋮         ⋮ |  ⋮ ]
          [ an1 an2 . . . anp | bn ]

Here, the vertical bar only serves to separate the left parts from the right parts
of the equations (it only serves to facilitate reading). Solving the system S
amounts to scaling the augmented matrix [A | b] by performing the same kind
of Gaussian operations as on systems of equations.
Example 2.7. Solve the following system:

(S)   x + y + z = 1
      3x + 2y + z = 6
      y − z = 3

The associated augmented matrix is:

          [ 1 1  1 | 1 ]
[A | b] = [ 3 2  1 | 6 ]
          [ 0 1 −1 | 3 ]

By using Gaussian operations, we obtain a sequence of equivalent systems.

                            [ 1  1  1 | 1 ]
(L2 = L2 − 3L1) ⇒ [A | b] ∼ [ 0 −1 −2 | 3 ]
                            [ 0  1 −1 | 3 ]

                            [ 1  1  1 | 1 ]
(L3 = L3 + L2) ⇒ [A | b] ∼  [ 0 −1 −2 | 3 ]
                            [ 0  0 −3 | 6 ]
By analyzing the last matrix obtained, we realize:
1. the last row of the matrix implies

−3z = 6 ⇔ z = −2

2. The second row of the matrix results in the equation

−y − 2z = 3 ⇒ −y + 4 = 3 ⇒ y = 1

3. the first line gives the equation

x+y+z =1⇒x+1−2=1⇒x=2

Note 16. .
1. Solving a system of equations by matrices or by directly manipulating
the equations is exactly the same. The only difference is that the ma-
trix method is much less time consuming to use since we don’t need to
transcribe the variables (in this example, x, y and z) at each step.
2. Solving a system of equations by matrix is more advantageous when solv-
ing several systems of equations which differ only by the second member.

Example 2.8. Solve the following systems:

(S1)   x + y + z = 1         (S2)   x + y + z = 4
       3x + 2y + z = 6              3x + 2y + z = 5
       y − z = 3                    y − z = 9

We notice that the two systems differ only by their second members b1 and b2. They therefore have the same associated matrix A, which we augment with the two second members in order to solve both systems simultaneously.
 
               [ 1 1  1 | 1 4 ]
[A | b1, b2] = [ 3 2  1 | 6 5 ]
               [ 0 1 −1 | 3 9 ]

                                 [ 1  1  1 | 1  4 ]
(L2 = L2 − 3L1) ⇒ [A | b1, b2] ∼ [ 0 −1 −2 | 3 −7 ]
                                 [ 0  1 −1 | 3  9 ]

                                 [ 1  1  1 | 1  4 ]
(L3 = L3 + L2) ⇒ [A | b1, b2] ∼  [ 0 −1 −2 | 3 −7 ]
                                 [ 0  0 −3 | 6  2 ]
So

(S1) ⇔   x + y + z = 1      and   (S2) ⇔   x + y + z = 4
         −y − 2z = 3                       −y − 2z = −7
         −3z = 6                           −3z = 2

1. The solutions of S1 are z = −2, y = 1 and x = 2.

2. The solutions of S2 are z = −2/3, y = 25/3 and x = −11/3.
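Point 2 of Note 16 is exactly how numerical libraries are used in practice: the same matrix A serves every second member. A sketch with numpy (assumed available), stacking b1 and b2 as columns:

```python
import numpy as np

A = np.array([[1., 1., 1.],
              [3., 2., 1.],
              [0., 1., -1.]])
# second members b1 and b2 stacked as the columns of one matrix
B = np.column_stack([[1., 6., 3.], [4., 5., 9.]])

X = np.linalg.solve(A, B)     # one call solves both systems
print(np.round(X[:, 0]))      # solution of S1: [ 2.  1. -2.]
print(np.round(3 * X[:, 1]))  # 3 * solution of S2: [-11. 25. -2.]
```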

2.8.2 Find the rank of a matrix


One can notice that the elementary Gaussian operations are among those which do not modify the rank of a matrix. So to calculate the rank of a matrix, it suffices to reduce it to echelon form using the Gaussian method; the rank is then equal to the number of non-zero rows of the echelon matrix.
Note 17. If matrix A is a triangular matrix, its rank is equal to the number of
nonzero pivots on the diagonal.
Example 2.9.

[ 1  0  9 ]
[ 0 −4 11 ]   rank = 2
[ 0  0  0 ]

[ 1 5  0 ]
[ 0 3 21 ]   rank = 3
[ 0 0  7 ]

[ 5 0 11 ]
[ 0 0 19 ]   rank = 2
[ 0 0  0 ]

Example 2.10. What is the rank of the following matrix A?

    [  0 0  1 3 ]
    [  1 0 −1 2 ]
A = [  0 0  1 2 ]
    [ −2 4 −4 1 ]
    [ −1 0  3 0 ]
To find the rank of A, we will reduce it to echelon form using the Gauss method.

L1 ↔ L2:
[  1 0 −1 2 ]
[  0 0  1 3 ]
[  0 0  1 2 ]
[ −2 4 −4 1 ]
[ −1 0  3 0 ]

L4 = L4 + 2L1, L5 = L5 + L1:
[ 1 0 −1 2 ]
[ 0 0  1 3 ]
[ 0 0  1 2 ]
[ 0 4 −6 5 ]
[ 0 0  2 2 ]

L2 ↔ L4:
[ 1 0 −1 2 ]
[ 0 4 −6 5 ]
[ 0 0  1 2 ]
[ 0 0  1 3 ]
[ 0 0  2 2 ]

L4 = L4 − L3, L5 = L5 − 2L3:
[ 1 0 −1  2 ]
[ 0 4 −6  5 ]
[ 0 0  1  2 ]
[ 0 0  0  1 ]
[ 0 0  0 −2 ]

L5 = L5 + 2L4:
[ 1 0 −1 2 ]
[ 0 4 −6 5 ]
[ 0 0  1 2 ]
[ 0 0  0 1 ]
[ 0 0  0 0 ]

The number of nonzero rows of the echelon matrix is 4, so the rank of A is 4.
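The hand computation can be confirmed numerically (assuming numpy), since matrix_rank computes the same quantity via the singular values:

```python
import numpy as np

A = np.array([[0, 0, 1, 3],
              [1, 0, -1, 2],
              [0, 0, 1, 2],
              [-2, 4, -4, 1],
              [-1, 0, 3, 0]])
r = np.linalg.matrix_rank(A)
print(r)  # 4
```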

2.8.3 Calculation of the inverse of a square matrix by Gauss's algorithm

To invert the matrix A = (aij) of format (n, n), we will use the following augmented matrix:

          [ a11 . . . a1n | 1 . . . 0 ]
(A | I) = [  ⋮         ⋮ | ⋮       ⋮ ]
          [ an1 . . . ann | 0 . . . 1 ]

The Gaussian transformation consists in transforming this system into an equiv-
alent system whose left block is the identity, i.e. the matrix (A | I) must be
modified so that it becomes the form (I | B). Matrix B is none other than the
inverse matrix of A : A−1 .
Theorem 4. Let A be an invertible matrix of order n. By elementary row oper-
ations, A can be transformed into In . The sequence of operations that transform
A into In , transform In into A−1 .
Process to transform (A | I) into (I | B) using Gaussian method
1. Step 1: Transform the matrix A into an upper triangular matrix using
Gauss’s method. Check if the rank of A is equal to the order of the
matrix, that is to say that all the elements of the diagonal are non-zero:
• If yes, the matrix is invertible, go to step 2.
• Otherwise, the matrix is not invertible and the process is stopped.
2. Step 2: To get 1 on the diagonal, we set:

   Li = (1/aii) Li

3. Step 3: To have 0 for all the elements located above the diagonal, we set:

   Li = Li − aij Lj,   for 1 ≤ i ≤ (j − 1) and j = n, n − 1, . . . , 2

Example 2.11. Check if the matrix A is invertible; if so, give its inverse.

    [  2 1  3 ]
A = [ −1 5 −2 ]
    [  5 8  7 ]

We augment the matrix A with I3:

     [  2 1  3 | 1 0 0 ]
Ag = [ −1 5 −2 | 0 1 0 ]
     [  5 8  7 | 0 0 1 ]
Let's transform A into an upper triangular matrix.

L1 ↔ L2:
[ −1 5 −2 | 0 1 0 ]
[  2 1  3 | 1 0 0 ]
[  5 8  7 | 0 0 1 ]

L2 = L2 + 2L1, L3 = L3 + 5L1:
[ −1  5 −2 | 0 1 0 ]
[  0 11 −1 | 1 2 0 ]
[  0 33 −3 | 0 5 1 ]

L3 = L3 − 3L2:
[ −1  5 −2 |  0  1 0 ]
[  0 11 −1 |  1  2 0 ]
[  0  0  0 | −3 −1 1 ]

Note that the number of nonzero pivots is equal to 2, so the matrix A is not invertible.

Example 2.12. Give the inverse of the following matrix:

    [  2 1 1 ]
B = [  4 1 0 ]
    [ −2 2 1 ]

We augment the matrix B with I3:

[  2 1 1 | 1 0 0 ]
[  4 1 0 | 0 1 0 ]
[ −2 2 1 | 0 0 1 ]
Let's transform B into an upper triangular matrix.

L2 = L2 − 2L1, L3 = L3 + L1:
[ 2  1  1 |  1 0 0 ]
[ 0 −1 −2 | −2 1 0 ]
[ 0  3  2 |  1 0 1 ]

L3 = L3 + 3L2:
[ 2  1  1 |  1 0 0 ]
[ 0 −1 −2 | −2 1 0 ]
[ 0  0 −4 | −5 3 1 ]

The number of non-zero pivots is equal to 3, therefore the matrix is invertible. To get 1 on the diagonal: L1 = (1/2) L1, L2 = −L2, L3 = −(1/4) L3:

[ 1 1/2 1/2 | 1/2    0    0 ]
[ 0  1   2  |  2   −1    0 ]
[ 0  0   1  | 5/4 −3/4 −1/4 ]
To get 0 for all elements above the diagonal:

L2 = L2 − 2L3:
[ 1 1/2 1/2 |  1/2    0    0 ]
[ 0  1   0  | −1/2  1/2  1/2 ]
[ 0  0   1  |  5/4 −3/4 −1/4 ]

L1 = L1 − (1/2) L3:
[ 1 1/2 0 | −1/8  3/8  1/8 ]
[ 0  1  0 | −1/2  1/2  1/2 ]
[ 0  0  1 |  5/4 −3/4 −1/4 ]

L1 = L1 − (1/2) L2:
[ 1 0 0 |  1/8  1/8 −1/8 ]
[ 0 1 0 | −1/2  1/2  1/2 ]
[ 0 0 1 |  5/4 −3/4 −1/4 ]

So

       [  1/8  1/8 −1/8 ]
B^−1 = [ −1/2  1/2  1/2 ]
       [  5/4 −3/4 −1/4 ]
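A numerical check of the result (numpy assumed): the computed inverse must reproduce the hand values and satisfy B · B⁻¹ = I3.

```python
import numpy as np

B = np.array([[2., 1., 1.],
              [4., 1., 0.],
              [-2., 2., 1.]])
Binv = np.linalg.inv(B)

expected = np.array([[1/8, 1/8, -1/8],
                     [-1/2, 1/2, 1/2],
                     [5/4, -3/4, -1/4]])
print(np.allclose(Binv, expected))       # True
print(np.allclose(B @ Binv, np.eye(3)))  # True
```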

3 Vector space
3.1 Definition of a vector space
Theorem 5. We call a vector space over R any set E endowed with two operations satisfying:

1. an internal law, addition, noted +, such that

   + : E × E −→ E
   (u, v) 7−→ u + v

   where:
   • + is commutative: ∀(u, v) ∈ E2, u + v = v + u
   • + is associative: ∀(u, v, w) ∈ E3, (u + v) + w = u + (v + w)
   • + admits a neutral element noted 0E: ∀u ∈ E, u + 0E = u
   • ∀u ∈ E, ∃u′ ∈ E, a symmetric element of u, such that u + u′ = 0E; we write u′ = −u

2. an external law, multiplication, noted ·, such that

   · : R × E −→ E
   (λ, v) 7−→ λ.v

   and such that ∀(λ, µ) ∈ R2 and ∀(u, v) ∈ E2:
   • (λ + µ).u = λ.u + µ.u
   • λ.(u + v) = λ.u + λ.v
   • λ.(µ.u) = (λµ).u
   • 1.u = u

Note 18.
• The elements of E are called vectors.
• The elements of R are called scalars.
• The order λ.u must be respected: u.λ and u/λ have no meaning.
Example 3.1.
1. The set of real matrices is an R-vector space.
2. The set of continuous functions C(R) is an R-vector space.
3. The set of real sequences S is an R-vector space.
4. Rn is an R-vector space (n ≥ 1):
   • the vectors are the n-tuples (x1, · · · , xn) of elements of R
   • the law +: (x1, · · · , xn) + (y1, · · · , yn) = (x1 + y1, · · · , xn + yn)
   • the law ·: λ.(x1, · · · , xn) = (λ.x1, · · · , λ.xn)

Theorem 6. For all (u, v) ∈ E2 and all (α, β) ∈ R2:

• α.u = 0E ⇔ α = 0 or u = 0E
• −u = (−1).u; we set u − v = u + (−v)
• (α − β).u = α.u − β.u
Definition 33. Let (E, +, .) be an R-vector space and let F be a subset of E. If (F, +, .) is itself an R-vector space, we say that (F, +, .) is a vector subspace of E, and we denote F s.e.v of E.
Proposal 5. Let F be a subset of an R-vector space E. F is a vector subspace of E if and only if:
• F ≠ ∅ (0E ∈ F)
• ∀(u, v) ∈ F2, (u + v) ∈ F
• ∀u ∈ F and ∀λ ∈ R, (λ.u) ∈ F
Example 3.2. F = {(x, x); x ∈ R} is a s.e.v of R2. Indeed:
• 0R2 = (0, 0) ∈ F.
• Let u and v be two elements of F; let us show that (u + v) ∈ F:
  u ∈ F ⇒ u = (x, x), x ∈ R and v ∈ F ⇒ v = (y, y), y ∈ R,
  so u + v = (x, x) + (y, y) = (x + y, x + y) ∈ F since x + y ∈ R.
• Let u ∈ F and λ ∈ R; let us show that (λu) ∈ F:
  u ∈ F ⇒ u = (x, x), x ∈ R, so λu = (λx, λx) ∈ F since λx ∈ R.

Proposal 6. If F and G are two vector subspaces of E, then F ∩ G is a vector subspace of E.
Note 19. In general F ∪ G is not a s.e.v of E.
Example 3.3. E = R2; F = {(x, 0)/x ∈ R} and G = {(0, y)/y ∈ R} are vector subspaces of E. F ∪ G = {(x, y)/x = 0 or y = 0} is not a s.e.v of E. Indeed (1, 0) ∈ F ∪ G and (0, 1) ∈ F ∪ G, but (1, 0) + (0, 1) = (1, 1) ∉ F ∪ G.

3.2 Linear combinations


Using two vectors u and v, we can construct the vectors:
2u + v, 4u − 5v, 9u + 13v . . .
They are all linear combinations of vectors of the family {u, v}.
Theorem 7. Let {u1 , · · · , un } be a finite family of vectors of a R. vector
space E. We say that a vector u of E is a linear combination of the vectors
{u1 , · · · , un } if there exists a family of scalars {α1 , · · · , αn } such that

u = α1 u1 + · · · + αn un

Example 3.4. E = R2 : any vector (x, y) ∈ R2 is linear combination of vectors
(1, 0) and (0, 1)

∀(x, y) ∈ R2 , (x, y) = x(1, 0) + y(0, 1)

Theorem 8.
1. In a vector space E, the set of linear combinations of a finite family of vectors {u1, · · · , un} is a vector subspace of E, said to be generated (or spanned) by the family u1, · · · , un. We denote it Vect{u1, · · · , un} or Span{u1, · · · , un}.
2. Vect{u1, · · · , un} is the smallest s.e.v of E containing the family {u1, · · · , un}.

Note 20. This theorem is often used in practice. To show that F is a vector
subspace of the vector space E, we show that F is the set of all linear combina-
tions of a finite number of vectors in E.

Example 3.5. E = {(x, y, z) ∈ R3 / 2x + y − z = 0} is an R-vector space. Indeed E ⊂ R3, which is a vector space over R, and in addition we have:

u = (x, y, z) ∈ E ⇔ z = 2x + y
⇔ u = (x, y, 2x + y)
⇔ u = x(1, 0, 2) + y(0, 1, 1)

So E = Vect(u1, u2) with u1 = (1, 0, 2) and u2 = (0, 1, 1), hence E is a s.e.v of R3 and hence E is a vector space over R.

3.3 Free family, Linked family, Generating family


Let E be a R. vector space and {u1 , · · · , un } a family of n vectors of E.

3.3.1 Generating family


Definition 34. The family {u1 , · · · , un } is said to be the generating family
of E if every vector of E is a linear combination of the vectors of the family
{u1 , · · · , un } i.e.

∀u ∈ E, ∃ {α1 , · · · , αn } ∈ Rn , u = α1 u1 + · · · + αn un

Note 21. The family {u1 , · · · , un } generates E if and only if E = vect ({u1 , · · · , un })
Example 3.6. 1. E = R, the family (1) is a generating family of E. indeed
∀x ∈ R, x = x.(1)

2. E = R2 , the family {(1, 0), (0, 1)} is a generating family of E. Indeed

∀(x, y) ∈ R2 , (x, y) = x(1, 0) + y(0, 1)

3.3.2 Free family, Linked family
Definition 35. The family {u1 , · · · , un } is said to be free or linearly indepen-
dent if it satisfies:

∀ (λ1 , . . . , λn ) ∈ Rn : λ1 u1 + . . . + λn un = 0 ⇒ λ1 = . . . = λn = 0

Definition 36. {u1, · · · , un} is said to be linked or linearly dependent if it is not free. Which means:

∃ (λ1, . . . , λn) ∈ Rn : (λ1, . . . , λn) ≠ (0, . . . , 0) and λ1u1 + . . . + λnun = 0

Example 3.7. Let E = R4 be an R-vector space.

1. The vectors u1 = (−1, 2, 0, 1), u2 = (0, 1, 0, −1) and u3 = (−2, 5, 0, 1) are linked. Indeed (2, 1, −1) ≠ (0, 0, 0) and 2u1 + u2 − u3 = 0.
2. The vectors u1 = (1, 0, 0, 0), u2 = (0, 1, 0, 0) and u3 = (0, 0, 0, 1) are free. Indeed, let λ1, λ2 and λ3 be scalars of R:

(λ1u1 + λ2u2 + λ3u3 = 0E) ⇒ (λ1, λ2, λ3, 0) = 0E ⇒ λ1 = λ2 = λ3 = 0

Theorem 9. Let E be an R-vector space and {u1, · · · , un} a family of n vectors of E. ({u1, · · · , un} is linked) ⇔ (one of the ui (1 ≤ i ≤ n) is a linear combination of the others).
Example 3.8. Let E = R3. Consider the vectors u = (1, 2, 1), v = (1, −1, 1) and w = (1, 1, 0). Let us study the freedom of the family F = {u, v, w}. Let α, β and γ be scalars such that αu + βv + γw = 0R3. Then (α + β + γ, 2α − β + γ, α + β) = 0R3, i.e.

α + β + γ = 0
2α − β + γ = 0
α + β = 0

After solving the system, we obtain α = β = γ = 0; the family F is therefore free.
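For n vectors of Rn, freedom can also be tested at once: the family is free if and only if the determinant of the matrix whose columns are the vectors is non-zero (the Cramer criterion of section 2.6 applied to the system above). A sketch with numpy (assumed available):

```python
import numpy as np

# columns are the vectors u, v, w of Example 3.8
M = np.column_stack([(1, 2, 1), (1, -1, 1), (1, 1, 0)])
d = np.linalg.det(M)
print(round(d, 6))  # 3.0, non-zero: the family {u, v, w} is free

# the linked family of Example 3.7: 2*u1 + u2 - u3 = 0
u1 = np.array([-1, 2, 0, 1])
u2 = np.array([0, 1, 0, -1])
u3 = np.array([-2, 5, 0, 1])
print(np.array_equal(2*u1 + u2 - u3, np.zeros(4)))  # True
```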

3.4 Basis of a vector space


Definition 37. Let E be an R-vector space and {u1, · · · , un} a family of n vectors of E. We say that {u1, · · · , un} is a basis of E if:
1. {u1, · · · , un} is a generating family of E;
2. {u1, · · · , un} is free.
The dimension of a vector space is the number of vectors in any of its bases.

Note 22. {u1 , · · · , un } is a basis of E if:
1. E = vect {u1 , · · · , un }
2. {u1 , · · · , un } is free
Example 3.9. Let E = Rn be an R-vector space. Consider the vectors e1 = (1, 0, · · · , 0), e2 = (0, 1, · · · , 0), . . . , en = (0, 0, · · · , 1). The family {e1, e2, · · · , en} is a basis of E, called the canonical basis of Rn, and dim(E) = n.
Theorem 10. Let E be a R.vector space; B = {u1 , · · · , un } a basis of E. Then
any vector of E can be written in a unique way as a linear combination of the
vectors u1 , · · · , un .

Definition 38. If U = λ1u1 + . . . + λnun is the writing of U in the basis B = {u1, · · · , un}, the scalars λ1, · · · , λn are called coordinates or components of U in the basis B.

3.5 Linear Maps


E and F denote two vector spaces
Definition 39. A mapping f from E to F is said to be linear if:
1. ∀(u, v) ∈ E 2 , f (u + v) = f (u) + f (v)

2. ∀u ∈ E, ∀λ ∈ R f (λu) = λf (u)
We denote by L(E, F ) the set of linear maps from E to F .
Note 23. We also say that f is a morphism of vector spaces.
Proposal 7. (Usual characterization of linear maps). A mapping f from E to
F is said to be linear if:

∀(u, v) ∈ E2 and ∀(λ, µ) ∈ R2, f (λu + µv) = λf (u) + µf (v).




Definition 40. Let f be a linear map from E to F

• If F = R, we say that f is a linear form


• If F = E, we say that f is an endomorphism
The set of endomorphisms of E is denoted L(E)
• If f is bijective, we say that f is an isomorphism

• If f is bijective and E = F , we say that f is an automorphism


Proposal 8. If f is a linear map from E to F , then
• f (u − v) = f (u) − f (v), ∀(u, v) ∈ E 2

• f (−u) = −f (u), ∀u ∈ E
• f (0E ) = 0F
Example 3.10.
f : R → R
x 7→ ax   (a ∈ R)
is a linear map. It is the simplest there is; in analysis it is called a linear function. Indeed, for all (x, y) ∈ R2 and (λ, µ) ∈ R2, we have

f (λx + µy) = a(λx + µy)


= λ(ax) + µ(ay)
= λf (x) + µf (y).

Example 3.11.
f: R2 → R3
(x, y) 7→ (x + y, x − y, 2y)
is a linear map. We have, for all (x, y), (x′, y′) of R2:

f ((x, y) + (x′, y′)) = f (x + x′, y + y′)
= ((x + x′) + (y + y′), (x + x′) − (y + y′), 2(y + y′))
= ((x + y) + (x′ + y′), (x − y) + (x′ − y′), 2y + 2y′)
= ((x + y), (x − y), 2y) + ((x′ + y′), (x′ − y′), 2y′)
= f (x, y) + f (x′, y′).

We have for all (x, y) of R2 and all λ of R

f (λ(x, y)) = f (λx, λy)


= (λx + λy, λx − λy, 2λy)
= λ(x + y, x − y, 2y)
= λf (x, y).

Proposal 9. Let E, F , and G be vector spaces. If f is a linear map from E to


F and g is a linear map from F to G, then g ◦ f is a linear map from E to G.
Theorem 11. Let E and F be two vector spaces. If f and g are two linear maps from E to F , then ∀(λ, µ) ∈ R2, λf + µg is a linear map from E into F . L(E, F ) is a vector space over R.

3.5.1 Kernel and image of a linear map


Theorem 12. Let E and F be two vector spaces and f : E → F a linear map.
1. If V is a vector subspace of E then f (V ) is a vector subspace of F .
2. If W is a vector subspace of F then f −1 (W ) is a vector subspace of E.
f −1 (W ) is the reciprocal image of W by f .

Definition 41. Let E and F be two vector spaces and f : E → F a linear map.
We call
1. Kernel of f : the set {u ∈ E/f (u) = 0F } = f −1 ({0F }). We note it Ker(f ).
2. the range of f (or image of f ): the set {v ∈ F/∃u ∈ E such that v = f (u)} = f (E). We note it Im(f ).
Theorem 13. Let E and F be two vector spaces and f : E → F a linear map.
1. ker(f ) is a vector subspace of E.

2. Im(f ) is a vector subspace of F .


3. f injective ⇔ ker(f ) = {0E }
4. f surjective ⇔ Im(f ) = F
Note 24.
1. To determine the Im(f ) of a linear map f , we determine the values taken by f , i.e. the y ∈ F such that there exists x ∈ E for which y = f (x).
2. To determine the kernel of a linear map f , we solve the equation f (x) = 0F of unknown x ∈ E.

Example 3.12. Let’s determine the kernel and the range of the linear map

f: R2 → R2
(x, y) 7→ (x + y, x + y).

Solution
Im(f ) = {(a, b) ∈ R2 / ∃(x, y) ∈ R2 such that (a, b) = (x + y, x + y)}

(a, b) = (x + y, x + y) ⇔ (S)   a = x + y
                                b = x + y

The system S admits solutions only if a = b, so

Im(f ) = {(a, a)/a ∈ R}

Im(f ) ≠ R2, so f is not surjective.

ker(f ) = {(x, y)/f (x, y) = 0R2 = (0, 0)}

f (x, y) = (0, 0) ⇔ (x + y, x + y) = (0, 0) ⇔ x + y = 0 ⇔ y = −x


ker(f ) = {(x, −x)/x ∈ R}

ker(f ) ≠ {(0, 0)}, so f is not injective.

Example 3.13. Let’s determine the kernel and the range of the linear map

f : R2 → R2
(x, y) 7→ (x + y, x − y).

Solution
Im(f ) = {(a, b) ∈ R2 / ∃(x, y) ∈ R2 such that (a, b) = (x + y, x − y)}

(a, b) = (x + y, x − y) ⇔ (S)   a = x + y
                                b = x − y

The solutions of the system (S) are x = (a + b)/2 and y = (a − b)/2, which exist for all (a, b) ∈ R2, so

Im(f ) = R2
f is therefore surjective
ker(f ) = {(x, y)/f (x, y) = 0R2 = (0, 0)}

f (x, y) = (0, 0) ⇔ (x + y, x − y) = (0, 0) ⇔ x = y = 0


ker(f ) = {(0, 0)} = {0R2 }
f is therefore injective. f is injective and surjective, so it is bijective. It is an
automorphism of R2
Example 3.14. Let's determine the kernel and the range of the linear map

f : R3 → R2
(x, y, z) 7→ (x − y, y − z).

ker(f ) = {(x, y, z)/f (x, y, z) = 0R2 = (0, 0)}

f (x, y, z) = (0, 0) ⇔ (x − y, y − z) = (0, 0) ⇔ x = y and y = z

ker(f ) = {(x, y, z)/x = y = z} ≠ {0R3} ⇒ f is not injective.

Im(f ) = {(a, b) ∈ R2 / ∃(x, y, z) ∈ R3 : (a, b) = f (x, y, z)}

(a, b) = (x − y, y − z) ⇔ { x − y = a and y − z = b } ⇒ { y = b + z and x = a + b + z }

Im(f ) = R2 ⇒ f is therefore surjective.
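The three examples can be summarized numerically. Assuming each map is represented by its matrix in the canonical bases (and numpy is available), rank(F) = dim Im(f); f is injective when the rank equals the number of columns (free columns, i.e. ker(f) = {0}) and surjective when it equals the number of rows:

```python
import numpy as np

F1 = np.array([[1, 1], [1, 1]])          # f(x, y) = (x + y, x + y)
F2 = np.array([[1, 1], [1, -1]])         # f(x, y) = (x + y, x - y)
F3 = np.array([[1, -1, 0], [0, 1, -1]])  # f(x, y, z) = (x - y, y - z)

for F in (F1, F2, F3):
    r = np.linalg.matrix_rank(F)
    injective = (r == F.shape[1])   # full column rank <=> ker(f) = {0}
    surjective = (r == F.shape[0])  # full row rank <=> Im(f) is the whole target
    print(r, injective, surjective)
# F1: 1 False False   (neither injective nor surjective)
# F2: 2 True True     (bijective: an automorphism of R2)
# F3: 2 False True    (surjective, not injective)
```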
