
Chapter 2

SYSTEMS OF LINEAR EQUATIONS

In this chapter, we will consider the following linear system of equations:


$$
\begin{cases}
a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1\\
a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2\\
\qquad\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n = b_n
\end{cases}
\iff AX = B \tag{2.1}
$$

where
     
$$
A=\begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix},\quad
B=\begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{pmatrix},\quad
X=\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix} \tag{2.2}
$$

We assume that $\det A \neq 0$; this means that the system (2.1) has a unique solution $X = A^{-1}B$. First, we consider three cases in which the matrix $A$ has a special form:

Matrix $A$ has a diagonal form:

$$
A=\begin{pmatrix} a_{11} & 0 & \dots & 0\\ 0 & a_{22} & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & a_{nn} \end{pmatrix}
\implies x_k=\frac{b_k}{a_{kk}},\quad k=1,2,\dots,n
$$

Matrix $A$ has an upper triangular form:

$$
A=\begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n}\\ 0 & a_{22} & \dots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & a_{nn} \end{pmatrix}
\implies
\begin{cases}
x_n=\dfrac{b_n}{a_{nn}}\\[2mm]
x_k=\dfrac{1}{a_{kk}}\Big(b_k-\sum\limits_{j=k+1}^{n} a_{kj}x_j\Big),\quad k=n-1,n-2,\dots,1
\end{cases}
$$

Matrix $A$ has a lower triangular form:

$$
A=\begin{pmatrix} a_{11} & 0 & \dots & 0\\ a_{21} & a_{22} & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}
\implies
\begin{cases}
x_1=\dfrac{b_1}{a_{11}}\\[2mm]
x_k=\dfrac{1}{a_{kk}}\Big(b_k-\sum\limits_{j=1}^{k-1} a_{kj}x_j\Big),\quad k=2,3,\dots,n
\end{cases}
$$
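The three special cases above translate directly into code. Below is a minimal Python sketch of the diagonal, back-substitution, and forward-substitution solvers (the function names are illustrative, and a nonsingular coefficient matrix is assumed):

```python
def solve_diagonal(a, b):
    # diagonal system: x_k = b_k / a_kk
    return [b[k] / a[k][k] for k in range(len(b))]

def solve_upper(a, b):
    # back substitution: k = n-1, n-2, ..., 0 (0-based indices)
    n = len(b)
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(a[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (b[k] - s) / a[k][k]
    return x

def solve_lower(a, b):
    # forward substitution: k = 0, 1, ..., n-1
    n = len(b)
    x = [0.0] * n
    for k in range(n):
        s = sum(a[k][j] * x[j] for j in range(k))
        x[k] = (b[k] - s) / a[k][k]
    return x
```

Each solver costs $O(n^2)$ operations, versus $O(n^3)$ for full Gaussian elimination; this is why factoring $A$ into triangular pieces, as in the next section, pays off.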

2.1 LU Factorization

Gaussian elimination is the principal tool in the direct solution of linear systems of equations, so it should be no surprise that it appears in other guises. In this section we will see that the steps used to solve a system of the form $AX = B$ can also be used to factor a matrix into a product of matrices. The factorization is particularly useful when it has the form $A = LU$, where $L$ is lower triangular and $U$ is upper triangular.
     
$$
A=\begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}
=\begin{pmatrix} l_{11} & 0 & \dots & 0\\ l_{21} & l_{22} & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ l_{n1} & l_{n2} & \dots & l_{nn} \end{pmatrix}
\begin{pmatrix} u_{11} & u_{12} & \dots & u_{1n}\\ 0 & u_{22} & \dots & u_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & u_{nn} \end{pmatrix}
=LU \tag{2.3}
$$
Then we have $AX=B \iff LUX = L(UX) = B \iff \begin{cases} LY=B\\ UX=Y \end{cases}$. This means that instead of solving one system we solve two easier systems, whose coefficient matrices are triangular. There are many ways to determine the matrices $L$ and $U$. In this section we consider two of them:
Doolittle’s Method: In this case we assume $l_{11}=l_{22}=\dots=l_{nn}=1$. The formulas to compute $l_{ij}$ and $u_{ij}$ are as follows:
$$
\begin{cases}
u_{1j}=a_{1j}, & j=1,2,\dots,n\\[1mm]
l_{i1}=\dfrac{a_{i1}}{u_{11}}, & i=2,3,\dots,n\\[2mm]
u_{ij}=a_{ij}-\sum\limits_{k=1}^{i-1} l_{ik}u_{kj}, & 2\le i\le j\le n\\[2mm]
l_{ij}=\dfrac{1}{u_{jj}}\Big(a_{ij}-\sum\limits_{k=1}^{j-1} l_{ik}u_{kj}\Big), & 2\le j<i\le n
\end{cases} \tag{2.4}
$$
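As a sketch of how formulas (2.4) can be organized into a single loop (the function name is illustrative; no pivoting is performed, and nonzero pivots $u_{jj}$ are assumed):

```python
def doolittle(a):
    # LU factorization with unit diagonal in L; a is an n x n list of lists
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):      # row i of U
            U[i][j] = a[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):  # column i of L
            L[j][i] = (a[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```

Note that row $i$ of $U$ must be computed before column $i$ of $L$, since the latter divides by the pivot $u_{ii}$.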
Crout’s Method: In this case we assume $u_{11}=u_{22}=\dots=u_{nn}=1$. The formulas to compute $l_{ij}$ and $u_{ij}$ are as follows:
$$
\begin{cases}
l_{i1}=a_{i1}, & i=1,2,\dots,n\\[1mm]
u_{1j}=\dfrac{a_{1j}}{l_{11}}, & j=2,3,\dots,n\\[2mm]
l_{ij}=a_{ij}-\sum\limits_{k=1}^{j-1} l_{ik}u_{kj}, & 2\le j\le i\le n\\[2mm]
u_{ij}=\dfrac{1}{l_{ii}}\Big(a_{ij}-\sum\limits_{k=1}^{i-1} l_{ik}u_{kj}\Big), & 2\le i<j\le n
\end{cases} \tag{2.5}
$$
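A corresponding sketch of Crout’s formulas (2.5), under the same assumptions (illustrative name, no pivoting, nonzero pivots $l_{jj}$):

```python
def crout(a):
    # LU factorization with unit diagonal in U; a is an n x n list of lists
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = 1.0
        for i in range(j, n):      # column j of L
            L[i][j] = a[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):  # row j of U
            U[j][i] = (a[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U
```

The structure mirrors Doolittle’s method with the roles of $L$ and $U$ exchanged: here a column of $L$ is computed first, then the row of $U$ that divides by the new pivot $l_{jj}$.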
 
Example 2.1. Factor the following matrix using Doolittle’s method:
$$A=\begin{pmatrix} 1 & 1 & 2\\ 1 & 2 & 3\\ 2 & -1 & 4 \end{pmatrix}.$$
Use this to solve the system $AX=B$ where $B=(1,-2,3)^T$.

From the formulas (2.4) we have:
$$u_{11}=a_{11}=1;\quad u_{12}=a_{12}=1;\quad u_{13}=a_{13}=2;\quad l_{21}=\frac{a_{21}}{u_{11}}=1;\quad l_{31}=\frac{a_{31}}{u_{11}}=2;$$
$$u_{22}=a_{22}-l_{21}u_{12}=2-(1)(1)=1;\quad u_{23}=a_{23}-l_{21}u_{13}=3-(1)(2)=1;$$
$$l_{32}=\frac{1}{u_{22}}\big(a_{32}-l_{31}u_{12}\big)=\frac{1}{1}\big({-1}-(2)(1)\big)=-3;$$
$$u_{33}=a_{33}-l_{31}u_{13}-l_{32}u_{23}=4-(2)(2)-(-3)(1)=3.$$
So we obtain
$$A=\begin{pmatrix} 1 & 1 & 2\\ 1 & 2 & 3\\ 2 & -1 & 4 \end{pmatrix}
=\begin{pmatrix} 1 & 0 & 0\\ 1 & 1 & 0\\ 2 & -3 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 1 & 2\\ 0 & 1 & 1\\ 0 & 0 & 3 \end{pmatrix}=LU$$
In order to solve the system, we proceed in two steps:
$$LY=B \iff \begin{pmatrix} 1 & 0 & 0\\ 1 & 1 & 0\\ 2 & -3 & 1 \end{pmatrix}Y=\begin{pmatrix} 1\\ -2\\ 3 \end{pmatrix} \implies Y=\begin{pmatrix} 1\\ -3\\ -8 \end{pmatrix}$$
$$UX=Y \iff \begin{pmatrix} 1 & 1 & 2\\ 0 & 1 & 1\\ 0 & 0 & 3 \end{pmatrix}X=\begin{pmatrix} 1\\ -3\\ -8 \end{pmatrix} \implies X=\begin{pmatrix} 20/3\\ -1/3\\ -8/3 \end{pmatrix}$$
2.2 Choleski’s Factorization

A matrix $A$ is called positive-definite if it is symmetric and $X^T A X > 0$ for every $n$-dimensional column vector $X \neq 0$.

Theorem 2.1. A matrix $A$ is positive-definite if and only if all of its main subdeterminants $\Delta_k$, $k=1,2,\dots,n$, are positive.

Example 2.2. Given $A=\begin{pmatrix} 1 & 1 & 2\\ 1 & 3 & 3\\ 2 & 3 & 10 \end{pmatrix}$. We have $\Delta_1=1>0$, $\Delta_2=\begin{vmatrix} 1 & 1\\ 1 & 3 \end{vmatrix}=2>0$, and $\Delta_3=\det A=11>0$. Therefore, $A$ is positive-definite.

Example 2.3. Find all values of $\alpha$ such that the matrix $A=\begin{pmatrix} 1 & \alpha & 1\\ \alpha & 4 & 1\\ 1 & 1 & 8 \end{pmatrix}$ is positive-definite.

We have: $\Delta_1=1>0$; $\Delta_2=\begin{vmatrix} 1 & \alpha\\ \alpha & 4 \end{vmatrix}=4-\alpha^2>0 \iff -2<\alpha<2$; $\Delta_3=\det A=27+2\alpha-8\alpha^2>0 \iff -1.7164<\alpha<1.9664$. Finally, $-1.7164<\alpha<1.9664$.

Theorem 2.2. The matrix $A$ is positive-definite if and only if $A$ can be factorized in the form $A=CC^T$, where $C$ is a lower triangular matrix with nonzero diagonal entries.

If $C=(c_{ij})$, we obtain
$$
\begin{cases}
c_{11}=\sqrt{a_{11}},\\[1mm]
c_{i1}=\dfrac{a_{i1}}{c_{11}}, & i=2,3,\dots,n\\[2mm]
c_{kk}=\sqrt{a_{kk}-\sum\limits_{j=1}^{k-1} c_{kj}^2}, & k=2,3,\dots,n\\[2mm]
c_{ik}=\dfrac{1}{c_{kk}}\Big(a_{ik}-\sum\limits_{j=1}^{k-1} c_{ij}c_{kj}\Big), & k+1\le i\le n
\end{cases} \tag{2.6}
$$
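Formula (2.6) can be sketched in Python as follows (illustrative name; assumes $A$ is symmetric positive-definite, so every square root argument is positive):

```python
import math

def cholesky(a):
    # Choleski factorization A = C * C^T, with C lower triangular
    n = len(a)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(i + 1):
            s = sum(C[i][j] * C[k][j] for j in range(k))
            if i == k:
                # diagonal entry: c_kk = sqrt(a_kk - sum of squares in row k)
                C[i][i] = math.sqrt(a[i][i] - s)
            else:
                # off-diagonal entry in column k, row i > k
                C[i][k] = (a[i][k] - s) / C[k][k]
    return C
```

Compared with a general LU factorization, only the triangle $C$ is stored and roughly half the arithmetic is needed, which is the practical appeal of Choleski’s method for positive-definite systems.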

Example 2.4. Solve the following system using Choleski’s method:
$$
\begin{pmatrix} 1 & 1 & -1\\ 1 & 4 & 1\\ -1 & 1 & 5 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix}
=\begin{pmatrix} 3\\ 4\\ 5 \end{pmatrix}
$$
We have $\Delta_1=1>0$, $\Delta_2=3>0$, $\Delta_3=8>0$, so the matrix $A$ is positive-definite. From (2.6), we have $c_{11}=1$, $c_{21}=1$, $c_{31}=-1$, $c_{22}=\sqrt{4-1^2}=\sqrt{3}$, $c_{32}=\big(1-(-1)\big)/\sqrt{3}=2/\sqrt{3}$, $c_{33}=\sqrt{5-(-1)^2-(2/\sqrt{3})^2}=\sqrt{8/3}$. The solution of the system is:
$$
CY=B \iff \begin{pmatrix} 1 & 0 & 0\\ 1 & \sqrt{3} & 0\\ -1 & 2/\sqrt{3} & \sqrt{8/3} \end{pmatrix}Y=\begin{pmatrix} 3\\ 4\\ 5 \end{pmatrix}
\implies Y=\begin{pmatrix} 3\\ 1/\sqrt{3}\\ (22/3)\sqrt{3/8} \end{pmatrix}
$$
$$
C^TX=Y \iff \begin{pmatrix} 1 & 1 & -1\\ 0 & \sqrt{3} & 2/\sqrt{3}\\ 0 & 0 & \sqrt{8/3} \end{pmatrix}X=\begin{pmatrix} 3\\ 1/\sqrt{3}\\ (22/3)\sqrt{3/8} \end{pmatrix}
\implies X=\begin{pmatrix} 29/4\\ -3/2\\ 11/4 \end{pmatrix}
$$

2.3 Norms of Vectors and Matrices

A vector norm on $\mathbb{R}^n$ is a function, denoted by $\|\cdot\|$, from $\mathbb{R}^n$ into $\mathbb{R}$ with the following properties:

(a) $\forall x\in\mathbb{R}^n$: $\|x\|\ge 0$, and $\|x\|=0 \iff x=0$.

(b) $\forall x\in\mathbb{R}^n$, $\forall\alpha\in\mathbb{R}$: $\|\alpha x\|=|\alpha|\cdot\|x\|$.

(c) $\forall x,y\in\mathbb{R}^n$: $\|x+y\|\le\|x\|+\|y\|$.

The following are specific norms on $\mathbb{R}^n$. If $x=[x_1,x_2,\dots,x_n]^T\in\mathbb{R}^n$, then we define:
$$\|x\|_1=|x_1|+|x_2|+\dots+|x_n|=\sum_{k=1}^{n}|x_k| \tag{2.7}$$
$$\|x\|_\infty=\max(|x_1|,|x_2|,\dots,|x_n|)=\max_{k=\overline{1,n}}|x_k| \tag{2.8}$$

The distance between two vectors $x$ and $y$ is defined by $\|x-y\|$.

Depending on the specific norm, we obtain the following formulas for the distance between two vectors:
$$\|x-y\|_1=\sum_{k=1}^{n}|x_k-y_k|,\qquad \|x-y\|_\infty=\max_{k=\overline{1,n}}|x_k-y_k|$$

A sequence $\{x^{(k)}\}_{k=1}^{\infty}$ of vectors in $\mathbb{R}^n$ is said to converge to $x$ with respect to the norm $\|\cdot\|$ if, given any $\varepsilon>0$, there exists an integer $N(\varepsilon)$ such that
$$\big\|x^{(k)}-x\big\|<\varepsilon \quad\text{for all } k\ge N(\varepsilon)$$

Theorem 2.3. A sequence of vectors $\{x^{(k)}\}_{k=1}^{\infty}$ converges to $x$ in $\mathbb{R}^n$ with respect to the norm $\|\cdot\|_\infty$ if and only if $\lim\limits_{k\to\infty} x_i^{(k)}=x_i$ for each $i=1,2,\dots,n$.
The natural, or induced, matrix norm associated with the vector norm is defined as follows:
$$\|A\|=\max_{\|x\|=1}\|Ax\| \tag{2.9}$$
Theorem 2.4. If $A=(a_{ij})$ is an $n\times n$ matrix, then
$$\|A\|_1=\max_{1\le j\le n}\sum_{i=1}^{n}|a_{ij}| \tag{2.10}$$
$$\|A\|_\infty=\max_{1\le i\le n}\sum_{j=1}^{n}|a_{ij}| \tag{2.11}$$
 
Example 2.5. Given $A=\begin{pmatrix} 3 & 11 & 2\\ 1 & 12 & 3\\ 2 & 13 & 5 \end{pmatrix}$, then
$$\|A\|_1=\max(6,36,10)=36,\qquad \|A\|_\infty=\max(16,16,20)=20$$
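The two matrix norms of Theorem 2.4 are straightforward to compute directly from the entries; a minimal sketch (the function names are illustrative):

```python
def norm1(a):
    # ||A||_1: maximum absolute column sum, formula (2.10)
    n = len(a)
    return max(sum(abs(a[i][j]) for i in range(n)) for j in range(n))

def norm_inf(a):
    # ||A||_inf: maximum absolute row sum, formula (2.11)
    return max(sum(abs(v) for v in row) for row in a)
```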
The condition number of a matrix $A$ is defined by
$$k(A)=\|A\|\cdot\big\|A^{-1}\big\|,\qquad k_1(A)=\|A\|_1\cdot\big\|A^{-1}\big\|_1,\qquad k_\infty(A)=\|A\|_\infty\cdot\big\|A^{-1}\big\|_\infty$$
 
Example 2.6. Given $A=\begin{pmatrix} 1 & 1 & 2\\ 1 & 2 & 3\\ 2 & 3 & 4 \end{pmatrix}$. Find $k_\infty(A)$.

We have $A^{-1}=\begin{pmatrix} 1 & -2 & 1\\ -2 & 0 & 1\\ 1 & 1 & -1 \end{pmatrix}$, and $\|A\|_\infty=9$, $\big\|A^{-1}\big\|_\infty=4 \implies k_\infty(A)=36$.
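A small sketch of the condition-number computation, with the inverse supplied explicitly since computing it is a separate problem (the function name is illustrative):

```python
def cond_inf(a, a_inv):
    # k_inf(A) = ||A||_inf * ||A^{-1}||_inf, with both row-sum norms
    # computed directly; a_inv must be the inverse of a.
    row_sum = lambda m: max(sum(abs(v) for v in row) for row in m)
    return row_sum(a) * row_sum(a_inv)
```

A large condition number signals that small perturbations in $B$ can produce large changes in the solution $X$, which is why $k(A)$ is the standard measure of how trustworthy a computed solution is.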

2.4 Iterative Methods

An iterative technique to solve the linear system $AX=B$ starts with an initial approximation $X^{(0)}$ to the solution $X$ and generates a sequence of vectors $\{X^{(k)}\}$ that converges to $X$. Iterative techniques involve a process that converts the system $AX=B$ into an equivalent system $X=TX+C$ for some fixed matrix $T$ and vector $C$. After the initial vector $X^{(0)}$ is selected, the sequence of approximate solution vectors is generated by computing
$$X^{(k)}=TX^{(k-1)}+C \tag{2.12}$$
for each $k=1,2,3,\dots$
Theorem 2.5. If $\|T\|=q<1$, then the sequence of vectors $X^{(k)}$ from (2.12) converges to the solution $X$ and satisfies:
$$\big\|X^{(k)}-X\big\|\le\frac{q}{1-q}\big\|X^{(k)}-X^{(k-1)}\big\| \tag{2.13}$$
We consider the case in which the matrix $A$ is split into three matrices as follows:
$$
A=\begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}
=\begin{pmatrix} a_{11} & 0 & \dots & 0\\ 0 & a_{22} & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & a_{nn} \end{pmatrix}
-\begin{pmatrix} 0 & 0 & \dots & 0\\ -a_{21} & 0 & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ -a_{n1} & -a_{n2} & \dots & 0 \end{pmatrix}
-\begin{pmatrix} 0 & -a_{12} & \dots & -a_{1n}\\ 0 & 0 & \dots & -a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & 0 \end{pmatrix}
=D-L-U
$$

The equation $AX=B$ is equivalent to $(D-L-U)X=B$, and depending on how we transform this equation, we obtain different iterative methods. We consider two of them:
Jacobi Iterative Method: We transform the above equation into $DX=(L+U)X+B$ and, finally,
$$X=D^{-1}(L+U)X+D^{-1}B$$
Introducing the notation $T_j=D^{-1}(L+U)$ and $C_j=D^{-1}B$, the Jacobi technique has the form
$$X^{(k)}=T_jX^{(k-1)}+C_j$$
or, in coordinates:
$$x_i^{(k)}=\frac{1}{a_{ii}}\Big(-\sum_{j=1}^{i-1}a_{ij}x_j^{(k-1)}-\sum_{j=i+1}^{n}a_{ij}x_j^{(k-1)}+b_i\Big),\quad i=1,2,\dots,n$$
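The coordinate formula above can be sketched as follows (illustrative names; nonzero diagonal entries $a_{ii}$ are assumed):

```python
def jacobi_step(a, b, x_prev):
    # one Jacobi sweep: every component uses only the previous iterate
    n = len(b)
    return [
        (b[i] - sum(a[i][j] * x_prev[j] for j in range(n) if j != i)) / a[i][i]
        for i in range(n)
    ]

def jacobi(a, b, x0, steps):
    # iterate X^(k) = T_j X^(k-1) + C_j, written componentwise
    x = x0
    for _ in range(steps):
        x = jacobi_step(a, b, x)
    return x
```

In practice the loop stops when successive iterates agree to a tolerance (which Theorem 2.5 turns into a bound on the true error), rather than after a fixed number of steps.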

"
8x1 2x2  5
. Use Jacobian method with X p0q  p0, 0qT to find
Example 2.7. Given
x1 4x2  6
X p3q and its error.  
This system can be transformed into the form X  Tj X Cj where Tj  0 0.25 and
0.25 0
 
 . With X p0q  p0, 0qT we obtain
0.625
Cj
1.5
     
X p1q  , X p2q  , X p3q 
0.625 0.25 0.2109375
1.5 1.65625 1.5625

Because of }T }8  0.25, the error of X p3q is


   

X  X p3q8 ¤ 1 0.25
0.25
 p3q p2q 
X  X   0.03125
8
Gauss-Seidel Iterative Method: We transform the above equation into $(D-L)X=UX+B$ and, finally,
$$X=(D-L)^{-1}UX+(D-L)^{-1}B$$
Introducing the notation $T_g=(D-L)^{-1}U$ and $C_g=(D-L)^{-1}B$, the Gauss-Seidel technique has the form
$$X^{(k)}=T_gX^{(k-1)}+C_g$$
or, in coordinates:
$$x_i^{(k)}=\frac{1}{a_{ii}}\Big(-\sum_{j=1}^{i-1}a_{ij}x_j^{(k)}-\sum_{j=i+1}^{n}a_{ij}x_j^{(k-1)}+b_i\Big),\quad i=1,2,\dots,n$$
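The corresponding Gauss-Seidel sweep differs from the Jacobi sweep only in that components already updated in the current sweep are used immediately (illustrative name, nonzero diagonal entries assumed):

```python
def gauss_seidel_step(a, b, x):
    # one Gauss-Seidel sweep: x[j] for j < i already holds the new value
    n = len(b)
    x = list(x)  # copy so the caller's iterate is not mutated
    for i in range(n):
        s = sum(a[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / a[i][i]
    return x
```

Because each component is refreshed with the newest available values, Gauss-Seidel typically converges in fewer sweeps than Jacobi on the same system, at the cost of being inherently sequential within a sweep.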

Example 2.8. Consider the same system as in Example 2.7. The Gauss-Seidel technique gives $T_g=\begin{pmatrix} 0 & -0.25\\ 0 & -0.0625 \end{pmatrix}$ and $C_g=\begin{pmatrix} 0.625\\ 1.65625 \end{pmatrix}$. With $X^{(0)}=(0,0)^T$ we obtain
$$X^{(1)}=\begin{pmatrix} 0.625\\ 1.65625 \end{pmatrix},\quad X^{(2)}=\begin{pmatrix} 0.2109375\\ 1.552734375 \end{pmatrix},\quad X^{(3)}=\begin{pmatrix} 0.2368164063\\ 1.559204102 \end{pmatrix}$$
Because $\|T_g\|_\infty=0.25$, the error of $X^{(3)}$ is
$$\big\|X-X^{(3)}\big\|_\infty\le\frac{0.25}{1-0.25}\big\|X^{(3)}-X^{(2)}\big\|_\infty\approx 0.00863$$

2.5 Exercises

Question 1. Use Doolittle’s and Crout’s methods to find a factorization of the form $A=LU$ for the following matrices:
$$\text{(a) } A=\begin{pmatrix} 3 & 1 & 2\\ 1 & 2 & 3\\ 2 & 3 & 5 \end{pmatrix}\qquad
\text{(b) } A=\begin{pmatrix} 2 & 1 & 0\\ 1 & 2 & 1\\ 0 & 1 & 2 \end{pmatrix}\qquad
\text{(c) } A=\begin{pmatrix} 7 & 2 & 3\\ 2 & 8 & 5\\ 4 & 3 & 9 \end{pmatrix}$$

Question 2. Find $\alpha$ so that the following matrices are positive-definite:
$$\text{(a) } A=\begin{pmatrix} \alpha & 1 & 1\\ 1 & 2 & 1\\ 1 & 1 & 4 \end{pmatrix}\qquad
\text{(b) } A=\begin{pmatrix} 2 & \alpha & 1\\ \alpha & 2 & 1\\ 1 & 1 & 4 \end{pmatrix}\qquad
\text{(c) } A=\begin{pmatrix} 3 & \alpha & 2\\ \alpha & 2 & 2\\ 2 & 2 & 11 \end{pmatrix}$$

Question 3. Use Choleski’s method to solve the following systems:
$$\text{(a) } \begin{cases} 2x_1-x_2=3\\ -x_1+2x_2-x_3=4\\ -x_2+2x_3=5 \end{cases}\qquad
\text{(b) } \begin{cases} x_1+x_2+2x_3=3\\ x_1+2x_2+3x_3=3\\ 2x_1+3x_2+3x_3=1 \end{cases}$$

Question 4. Find the condition numbers of the following matrices:
$$\text{(a) } A=\begin{pmatrix} 3 & 1 & 1\\ 1 & 3 & 1\\ 1 & 1 & 3 \end{pmatrix}\qquad
\text{(b) } A=\begin{pmatrix} 5 & 1 & 0\\ 1 & 5 & 1\\ 0 & 1 & 5 \end{pmatrix}\qquad
\text{(c) } A=\begin{pmatrix} 1.2 & 0.3 & 0.2\\ 0.4 & 1.1 & 0.5\\ 0.4 & 0.3 & 0.9 \end{pmatrix}$$

Question 5. Use the Jacobi technique to find the approximate solution $X^{(3)}$ and its error for the following systems:
$$\text{(a) } \begin{cases} 12x_1+3x_2=7\\ 4x_1+11x_2=9 \end{cases},\quad X^{(0)}=\begin{pmatrix} 0\\ 0 \end{pmatrix}\qquad
\text{(b) } \begin{cases} 6x_1+x_2=5\\ 2x_1+7x_2=3 \end{cases},\quad X^{(0)}=\begin{pmatrix} 0.7\\ 0.4 \end{pmatrix}$$
$$\text{(c) } \begin{cases} 5x_1-x_2+x_3=3\\ x_1+6x_2-2x_3=4\\ 2x_1-x_2+7x_3=5 \end{cases},\quad X^{(0)}=\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}\qquad
\text{(d) } \begin{cases} 12x_1+3x_2-2x_3=11\\ 2x_1+15x_2-x_3=12\\ 3x_1-2x_2+16x_3=13 \end{cases},\quad X^{(0)}=\begin{pmatrix} 0.4\\ 0.5\\ 0.6 \end{pmatrix}$$

Question 6. Repeat Question 5 using the Gauss-Seidel technique.
