
Chapter 3: Systems of linear equations

1 General principles for the solutions of linear equations, with examples.
2 Computational aspects of systems of linear equations.
3 The Gauss elimination method for solving linear equations.
4 We will introduce elementary matrices and show how they can be used in solving
linear equations.
5 Any invertible matrix is a product of elementary matrices.
6 We will discuss a practical algorithm, the Gauss-Jordan algorithm, for
computing the inverse of an invertible matrix.

A pair of homogeneous linear equations
1 Let us consider a pair of linear equations
2x + 3y − z = 0
x + y + z = 0.
2 These are equations of two planes in R^3 passing through the origin. Their normal
vectors are n1 = (2, 3, −1) and n2 = (1, 1, 1).
3 Since the normal vectors are not parallel, we expect that the set of solutions
constitutes a line passing through the origin, namely the intersection of these planes.
4 Eliminate x from the first equation by subtracting 2 times the second equation:
y − 3z = 0
x + y + z = 0.

5 Substitute y = 3z in the second equation x + y + z = 0 to get x = −4z.


6 Hence the solution vectors are exactly (x, y, z) = (−4z, 3z, z) = z(−4, 3, 1), z ∈ R.
7 Therefore the solution vectors constitute a line passing through the origin and
parallel to the vector w = (−4, 3, 1). Note that w ⊥ n1 and w ⊥ n2 .
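One quick way to check this computation: the line of intersection must be parallel to the cross product n1 × n2. A minimal NumPy sketch (the arrays are just the vectors above):

```python
import numpy as np

# Normal vectors of the planes 2x + 3y - z = 0 and x + y + z = 0.
n1 = np.array([2, 3, -1])
n2 = np.array([1, 1, 1])

# The intersection line is parallel to the cross product n1 x n2.
w = np.cross(n1, n2)
print(w)                             # [ 4 -3 -1], a multiple of (-4, 3, 1)

# w is orthogonal to both normals, so z*w solves both equations.
print(np.dot(w, n1), np.dot(w, n2))  # 0 0
```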
General facts about systems of linear equations
1 A general system of linear equations with coefficients in F where F is either R or
C can be written as
a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
⋮
am1 x1 + am2 x2 + · · · + amn xn = bm .

2 The entries aij , bk ∈ F for all i, j and k.


3 A compact way of writing the above m equations in n variables is to use
matrices. Let A = (aij) ∈ F^{m×n}, b = (bi) ∈ F^m and x = (xj).
4 Then the above system of equations can be written as Ax = b.
5 Let Aj denote the jth column of A for j = 1, 2, . . . , n.
6 A useful way of writing Ax = b is x1 A1 + x2 A2 + · · · + xn An = b.
7 Hence Ax = b has a solution ⇐⇒ b is a linear combination of the column
vectors of A.
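A minimal NumPy sketch of this column-combination view, with a hypothetical A and x chosen only for illustration:

```python
import numpy as np

# Hypothetical 2 x 3 coefficient matrix and vector, for illustration only.
A = np.array([[2.0, 3.0, -1.0],
              [1.0, 1.0,  1.0]])
x = np.array([1.0, 2.0, 3.0])

# The matrix-vector product Ax ...
Ax = A @ x
# ... equals the combination x1*A1 + x2*A2 + x3*A3 of the columns of A.
comb = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
print(np.allclose(Ax, comb))   # True
```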
Four fundamental spaces associated to Ax = b.
1 Let u1, u2, . . . , um be vectors in F^n. The vector

a1 u1 + a2 u2 + · · · + am um

where a1 , a2 , . . . , am ∈ F is called a linear combination of u1 , . . . , um .


2 The set L(u1 , u2 , . . . , um ) of all linear combinations of u1 , u2 , . . . , um is called the
linear span of u1 , u2 , . . . , um .
3 The linear span C(A) of the column vectors of A is called the column space of
A. Hence C(A) ⊂ F^m.
4 The linear span R(A) of the row vectors of A is called the row space of A.
5 A vector c = (c1, c2, . . . , cn)^t ∈ F^n is called a solution of Ax = b if Ac = b.
Therefore Ax = b has a solution if and only if b ∈ C(A).
6 If b = 0 then Ax = 0 is called a homogeneous system. If b ≠ 0 then Ax = b is
called an inhomogeneous system.
7 The set of vectors c ∈ F^n that are solutions of Ax = 0 is called the null space of
A. It is denoted by N(A).
8 The fourth space associated to Ax = b is the space N(A^t).
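These spaces can be computed exactly with SymPy; a short sketch using the coefficient matrix of the opening example (columnspace, rowspace and nullspace are SymPy's built-in methods):

```python
import sympy as sp

# The coefficient matrix of the opening example, now as an exact matrix.
A = sp.Matrix([[2, 3, -1],
               [1, 1,  1]])

print(A.columnspace())   # basis of C(A), a subspace of F^2
print(A.rowspace())      # basis of R(A), a subspace of F^3
print(A.nullspace())     # basis of N(A): spanned by (-4, 3, 1)^t
print(A.T.nullspace())   # basis of N(A^t): empty, i.e. N(A^t) = {0}
```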
General facts about linear equations

1 The solutions of Ax = b and those of the corresponding homogeneous system
Ax = 0 are closely related. If c, d are solutions of Ax = 0 then for any scalars
α, β, αc + βd is also a solution of Ax = 0, since

A(αc + βd) = αAc + βAd = 0.

2 Recall that the set of solutions of Ax = 0, denoted by N(A), is called the null
space of A.
3 Theorem. Let s ∈ F^n be any solution of Ax = b. Then the set of solutions of
Ax = b is given by
s + N(A) = {s + c | c ∈ N(A)}.
4 Proof. Let c ∈ N(A). Then A(s + c) = As + Ac = b + 0 = b, so s + c is a solution of Ax = b.
5 Conversely let d be a solution of Ax = b. Then A(d − s) = Ad − As = b − b = 0.
6 Hence d − s ∈ N(A). Therefore d ∈ s + N(A).
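A numerical illustration of the theorem, using the same hypothetical A with a nonzero b; np.linalg.lstsq merely supplies one particular solution s, and c is the null-space basis vector found earlier:

```python
import numpy as np

# Hypothetical inhomogeneous system with the same A as before.
A = np.array([[2.0, 3.0, -1.0],
              [1.0, 1.0,  1.0]])
b = np.array([1.0, 2.0])

s = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution s
c = np.array([-4.0, 3.0, 1.0])             # basis vector of N(A)

# By the theorem, every s + z*c is again a solution of Ax = b.
for z in (0.0, 1.0, -2.5):
    print(np.allclose(A @ (s + z * c), b))   # True, True, True
```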

Basic facts about systems of linear equations
1 Proposition. A system of linear equations Ax = b has either no solution, exactly
one solution, or infinitely many solutions.
2 Proof. Suppose that Ax = b has two distinct solutions c, d ∈ F^n. Then for z ∈ F,

A(zc − zd) = zAc − zAd = zb − zb = 0.

3 Hence for any nonzero z ∈ F, z(c − d) is a nonzero vector in N(A), since c − d ≠ 0.


4 Therefore c + z(c − d) is a solution of Ax = b for all z ∈ F, and these solutions
are distinct for distinct values of z since c − d ≠ 0.
5 Hence Ax = b has infinitely many solutions.
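A sketch of this argument on a hypothetical system: c and d below are two distinct solutions picked by hand, and each c + z(c − d) is checked against b:

```python
import numpy as np

# Hypothetical system with two distinct solutions c and d (chosen by hand).
A = np.array([[2.0, 3.0, -1.0],
              [1.0, 1.0,  1.0]])
c = np.array([1.0, 0.0, 1.0])
d = np.array([-3.0, 3.0, 2.0])
b = A @ c                      # both c and d satisfy Ax = b
print(np.allclose(A @ d, b))   # True

# As in the proof, c + z*(c - d) solves Ax = b for every scalar z.
for z in (0.5, 2.0, -7.0):
    print(np.allclose(A @ (c + z * (c - d)), b))   # True each time
```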
6 Proposition. Let A be an m × n matrix over F, b ∈ F^m, and E an invertible m × m
matrix over F. Then Ax = b has the same solutions as EAx = Eb.
7 Proof. If Ax = b then EAx = Eb.
8 If EAx = Eb then E^{−1}(EAx) = E^{−1}(Eb), i.e., Ax = b.
9 Remark. We will introduce invertible matrices called elementary matrices which
can be used to simplify a system of linear equations so that the solutions of the
new system are easy to find.
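A quick numerical check of this proposition on a hypothetical 2 × 2 system; the matrix E below is invertible (it adds −2 times the second row to the first, in the spirit of the elementary matrices mentioned in the remark):

```python
import numpy as np

# Hypothetical 2 x 2 system; E is invertible (det E = 1).
A = np.array([[2.0, 3.0],
              [1.0, 1.0]])
b = np.array([5.0, 2.0])
E = np.array([[1.0, -2.0],
              [0.0, 1.0]])

x1 = np.linalg.solve(A, b)           # solution of Ax = b
x2 = np.linalg.solve(E @ A, E @ b)   # solution of EAx = Eb
print(np.allclose(x1, x2))           # True: the solutions agree
```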
Linear independence and dependence of vectors
1 Vectors u1, u2, . . . , um ∈ F^n are called linearly dependent if there are
scalars x1, x2, . . . , xm ∈ F, not all zero, such that

x1 u1 + x2 u2 + · · · + xm um = 0.

2 The vectors u1 , u2 , . . . , um are called linearly independent if they are not


linearly dependent.
3 Consider the row vectors

a = (−3, 2, 1, 4), b = (4, 1, 0, 2), c = (−10, 3, 2, 6).

4 We wish to find whether a, b, c are linearly dependent.


5 This is the same as solving for x, y, z such that xa + yb + zc = 0.
6 This is a homogeneous system of linear equations.
7 If we can find a nontrivial solution then a, b, c are linearly dependent.
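Carrying this out with SymPy: stacking a, b, c as the columns of a matrix M turns xa + yb + zc = 0 into the homogeneous system M(x, y, z)^t = 0, whose null space SymPy computes exactly:

```python
import sympy as sp

a = sp.Matrix([-3, 2, 1, 4])
b = sp.Matrix([4, 1, 0, 2])
c = sp.Matrix([-10, 3, 2, 6])

# Columns of M are a, b, c, so M * (x, y, z)^t = x*a + y*b + z*c.
M = sp.Matrix.hstack(a, b, c)
print(M.nullspace())   # [Matrix([[-2], [1], [1]])]: a nontrivial solution
```

The basis vector (−2, 1, 1)^t gives the nontrivial solution −2a + b + c = 0, i.e. c = 2a − b, so a, b, c are linearly dependent.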