
Introduction to Finite Element Method

Judy P. Yang (楊子儀)

Sept. 20, 2016

Department of Civil Engineering

National Chiao Tung University (NCTU)
Chapter 2 Matrix Algebra
• 2.1 Definitions
• 2.2 Addition and Subtraction
• 2.3 Multiplication
• 2.4 Determinant
• 2.5 Inverse Matrix
• 2.6/2.7 Linear Equations
• 2.8 Quadratic Forms and Positive Definiteness
• 2.9 Partitioning
• 2.10 Differentiation and Integration
Introduction
• The FEM is a numerical approach that results in
systems of equations often involving thousands of
unknowns
• To handle such expressions in a compact fashion
that emphasizes the physical content, matrix
algebra turns out to be convenient
2.1 Definitions
• Matrix
– Consists of a collection of quantities which are
termed the components of the matrix
– The components are ordered in rows and columns
– Dimension of a matrix: the number of rows and
columns
– Example: for a matrix with r rows and c columns
  [ M11  M12  ⋯  M1c ]
  [ M21  M22  ⋯  M2c ]
  [  ⋮    ⋮        ⋮  ]
  [ Mr1  Mr2  ⋯  Mrc ]
2.1 Definitions
• One-dimensional matrix
– Usually denoted by a lower-case letter in bold type
– A column matrix
      [ a1 ]
  a = [ a2 ]
      [ a3 ]
– The transpose of a is a row matrix
a T = [ a1 a2 a3 ]
2.1 Definitions
• Two-dimensional matrix
– Usually denoted by an upper-case letter in bold type
      [ C11  C12 ]
  C = [ C21  C22 ]
      [ C31  C32 ]
– A square matrix: the numbers of rows and columns are equal
      [ B11  B12  B13 ]
  B = [ B21  B22  B23 ]
      [ B31  B32  B33 ]
– The transpose of B is obtained by interchanging rows and
columns
       [ B11  B21  B31 ]
  BT = [ B12  B22  B32 ]
       [ B13  B23  B33 ]
– The matrix B is symmetric if B = BT
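These definitions can be checked numerically. Below is a minimal sketch using NumPy (an assumption; the text itself is tool-agnostic):

```python
import numpy as np

# A two-dimensional matrix and its transpose (rows and columns interchanged)
B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
BT = B.T

# B is symmetric only if B equals its transpose; this particular B is not
is_symmetric = np.array_equal(B, BT)

# B + B^T is always symmetric, a common way to build a symmetric matrix
S = B + BT
```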
2.2 Addition and Subtraction
• Two-dimensional matrix
– A diagonal matrix: only the diagonal components are
different from zero
2 0 0 
A =  0 1 0 
 0 0 −1
– A unit matrix I
1 0 0 
I = 0 1 0 
0 0 1 
– A zero matrix 0
0 0 
0= 
0 0 
2.2 Addition and Subtraction
• If two matrices have the same dimension, they can be
added and subtracted, with addition and subtraction
being carried out for each component of the matrices
  a ± b = ±b + a          (a ± b)T = aT ± bT
  aT ± bT = ±bT + aT      (aT ± bT)T = a ± b

  A ± B = ±B + A          (A ± B)T = AT ± BT
  AT ± BT = ±BT + AT      (AT ± BT)T = A ± B
– e.g.
      [ A11  A12 ]   [ B11  B12 ]   [ A11 ± B11   A12 ± B12 ]
  C = [ A21  A22 ] ± [ B21  B22 ] = [ A21 ± B21   A22 ± B22 ]
      [ A31  A32 ]   [ B31  B32 ]   [ A31 ± B31   A32 ± B32 ]
 2.3 Multiplication
• A matrix multiplied by a number c
         [ A11  A12 ]   [ cA11  cA12 ]
  cA = c [ A21  A22 ] = [ cA21  cA22 ]
         [ A31  A32 ]   [ cA31  cA32 ]
• Scalar product: defined for two column matrices having the same dimension
                        [ b1 ]
  aT b = [ a1  a2  a3 ] [ b2 ] = a1 b1 + a2 b2 + a3 b3 = bT a
                        [ b3 ]
• Length of a (or aT)

  |a| = (a1^2 + a2^2 + ⋯ + an^2)^(1/2)

  where n is the number of rows in a
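The scalar product and the length formula can be sketched numerically; NumPy is an assumption here, not something the text prescribes:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 4.0])

# Scalar product: a^T b = a1*b1 + a2*b2 + a3*b3, and a^T b = b^T a
s = a @ b

# Length of a: |a| = (a1^2 + ... + an^2)^(1/2)
length = np.sqrt(np.sum(a ** 2))
```

For this a, |a| = (1 + 4 + 4)^(1/2) = 3.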
 2.3 Multiplication
• Matrix product
    c   =   A    x
  (m×1)  (m×n) (n×1)

  ci = Σ(j=1..n) Aij xj   (index notation)

           [ A11  A12 ]            [ A11 x1 + A12 x2 ]
  c = Ax = [ A21  A22 ] [ x1 ]  =  [ A21 x1 + A22 x2 ]
           [ A31  A32 ] [ x2 ]     [ A31 x1 + A32 x2 ]
– If x has the dimension m×1

    cT   =   xT    A
  (1×n)   (1×m) (m×n)

  cj = Σ(i=1..m) xi Aij   (index notation)

                         [ A11  A12  A13 ]
  cT = xT A = [ x1  x2 ] [ A21  A22  A23 ] = [ A11 x1 + A21 x2   A12 x1 + A22 x2   A13 x1 + A23 x2 ]
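Both product forms above can be verified with a short NumPy sketch (NumPy being an assumption of this example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])    # dimension 3x2 (m = 3, n = 2)

# c = A x with c_i = sum_j A_ij x_j
x = np.array([1.0, 1.0])      # dimension 2x1
c = A @ x

# c^T = x^T A for an x of dimension m x 1
xm = np.array([1.0, 0.0, 1.0])  # dimension 3x1
cT = xm @ A                      # a 1x2 row result
```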
 2.3 Multiplication
• Matrix multiplication
    C   =   A    B
  (m×p)  (m×n) (n×p)

      [ A11  A12 ]                [ A11 B11 + A12 B21   A11 B12 + A12 B22 ]
  C = [ A21  A22 ] [ B11  B12 ] = [ A21 B11 + A22 B21   A21 B12 + A22 B22 ]
      [ A31  A32 ] [ B21  B22 ]   [ A31 B11 + A32 B21   A31 B12 + A32 B22 ]

– The matrix multiplication is only defined if the matrices
possess the correct dimensions
• If A has the dimension m×n and B has the dimension
n×m, both AB and BA are defined
– In general, AB ≠ BA
– AB = BA can hold only if m = n
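The non-commutativity AB ≠ BA is easy to exhibit with a concrete pair of square matrices; the NumPy sketch below is illustrative, not part of the text:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Both products are defined for 2x2 matrices, but in general AB != BA
AB = A @ B
BA = B @ A
```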
2.3 Multiplication
• Matrix multiplication
  AI = A
  (AB)T = BT AT
  (ABC)T = ((AB)C)T = CT (AB)T = CT BT AT
  (Ax)T = xT AT
  cAB = A(cB)
• Distribution law
( A + B ) x =Ax + Bx
xT ( A + B ) = xT A + xT B
( A + B ) C =AC + BC
C ( A + B ) = CA + CB
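The transpose rules and the distribution law can be spot-checked on random matrices; this sketch assumes NumPy and arbitrarily chosen dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
B = rng.standard_normal((2, 4))
C = rng.standard_normal((4, 2))
x = rng.standard_normal(2)

ok_AB   = np.allclose((A @ B).T, B.T @ A.T)            # (AB)^T = B^T A^T
ok_ABC  = np.allclose((A @ B @ C).T, C.T @ B.T @ A.T)  # (ABC)^T = C^T B^T A^T
ok_dist = np.allclose((A + A) @ x, A @ x + A @ x)      # (A+B)x = Ax + Bx
```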
2.4 Determinant
• For a square matrix A with the dimension n×n, it is
possible to calculate the determinant of A, denoted det A
      [ A11  A12 ]
  A = [ A21  A22 ]
• Minor of A (det Mik)
– The determinant of the matrix in which row number i and
column number k are deleted to form a new square matrix
with the dimension (n-1)x(n-1)
– e.g.

  det M11 = A22 ,  det M12 = A21 ,  det M21 = A12 ,  det M22 = A11

• Cofactor of A (Aik^c)

  Aik^c = (−1)^(i+k) det Mik
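A short sketch of the minor/cofactor definition, assuming NumPy and using 0-based indices (the text uses 1-based; the sign (−1)^(i+k) has the same parity either way):

```python
import numpy as np

def cofactor(A, i, k):
    """Cofactor A_ik^c = (-1)^(i+k) * det(M_ik), where the minor
    matrix M_ik is A with row i and column k deleted (0-based)."""
    M = np.delete(np.delete(A, i, axis=0), k, axis=1)
    return (-1.0) ** (i + k) * np.linalg.det(M)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
```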
2.4 Determinant
• Determinant of A
  det A = Σ(k=1..n) Aik Aik^c   where i indicates any row number in the range 1 ≤ i ≤ n

– e.g. consider i = 1

  det A = Σ(k=1..n) A1k A1k^c = A11 A11^c + A12 A12^c
        = A11 (−1)^(1+1) det M11 + A12 (−1)^(1+2) det M12
        = A11 A22 − A12 A21

– or  det A = Σ(k=1..n) Akj Akj^c   where j indicates any column number in the range 1 ≤ j ≤ n

– Other properties

  det AT = det A
  det AB = det A det B
  det (A + B) ≠ det A + det B
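These determinant properties can be checked on small deterministic matrices; the sketch below assumes NumPy:

```python
import numpy as np

A = np.diag([2.0, 1.0, 3.0])   # det A = 2*1*3 = 6
B = np.eye(3)                  # det B = 1

p1 = np.isclose(np.linalg.det(A.T), np.linalg.det(A))                       # det A^T = det A
p2 = np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))  # det AB = det A det B
# det(A + B) = det diag(3, 2, 4) = 24, while det A + det B = 7
p3 = np.isclose(np.linalg.det(A + B), np.linalg.det(A) + np.linalg.det(B))
```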
2.4 Determinant
• For theoretical considerations, the expansion formulae above are
important because they allow one to establish the following
properties
– If a row (or column) consists of zeros, the determinant is zero
– If two rows (or columns) are proportional, the determinant is
zero
– If a row (or column) is multiplied by a factor c, the
determinant is also multiplied by the factor c
– Row (or column) operations do not change the determinant
• Recall a row (or column) operation is an operation where
all components of one row (or column) are multiplied by a
factor and then added to another row (or column)
– If two rows (or columns) are interchanged, the determinant
changes its sign
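The last two properties (row operations preserve the determinant, row interchanges flip its sign) can be demonstrated on a 2×2 example; NumPy is assumed:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 3.0]])
d0 = np.linalg.det(A)      # 2*3 - 1*4 = 2

# Row operation: add (-2) x (row 1) to row 2 -> determinant unchanged
R = A.copy()
R[1] = R[1] - 2.0 * R[0]
d1 = np.linalg.det(R)

# Interchanging the two rows -> determinant changes its sign
S = A[[1, 0], :]
d2 = np.linalg.det(S)
```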
2.5 Inverse Matrix
• Inverse of a square matrix 𝑨 is defined by 𝑨−1 𝑨 = 𝑨𝑨−1 = 𝑰
• The adjoint matrix of 𝑨
          [ A11^c  A12^c  ⋯  A1n^c ]T
          [ A21^c  A22^c  ⋯  A2n^c ]
  adj𝑨 =  [   ⋮      ⋮          ⋮   ]      ⇒   𝑨−1 = adj𝑨 ⁄ det𝑨
          [ An1^c  An2^c  ⋯  Ann^c ]
– 𝑨−1 exists only if det𝑨 ≠ 0
– Singular matrix: a square matrix having det𝑨 = 0
– Orthogonal matrix: 𝑨−1 = 𝑨T and 𝑨T 𝑨 = 𝑨𝑨T = 𝑰
  (𝑨−1)T = (𝑨T)−1
  (𝑨𝑩)−1 = 𝑩−1𝑨−1
  (𝑨𝑩𝑪)−1 = 𝑪−1𝑩−1𝑨−1
2.5 Inverse Matrix
• For a 2×2 matrix 𝑨

          [ a  c ]
  – 𝑨 =   [ d  b ]

  – 𝑨−1 = 1⁄(ab − cd) [  b  −c ]
                      [ −d   a ]   , provided det𝑨 = ab − cd ≠ 0
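The 2×2 inverse via the adjoint can be sketched directly; the code below assumes NumPy and uses the row-wise labels [[a, b], [c, d]] (a labeling choice of this example, not the slide's):

```python
import numpy as np

def inv2(A):
    """2x2 inverse via the adjoint matrix: A^-1 = adj(A) / det(A).
    Requires det(A) = a*d - b*c != 0, otherwise A is singular."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    adj = np.array([[ d, -b],
                    [-c,  a]])
    return adj / det

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
Ainv = inv2(A)
```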
2.6 Linear Equations: Number of Equations
Equals Number of Unknowns
• Consider the system of equations 𝑨𝒙 = 𝒃
– 𝒃 = 𝟎: a homogeneous system of equations
– 𝒃 ≠ 𝟎: an inhomogeneous system of equations
• Assume det𝑨 ≠ 0
– 𝑨−1 𝑨𝒙 = 𝑨−1 𝒃
– 𝒙 = 𝑨−1 𝒃
• For homogeneous systems of equations 𝑨𝒙 = 𝟎
– If det𝑨 = 0, a non-trivial solution exists
– If det𝑨 ≠ 0, only the trivial solution 𝒙 = 𝟎 exists
– Application: buckling and vibration problems
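A singular coefficient matrix and a non-trivial solution of the homogeneous system can be exhibited numerically; the SVD-based extraction below is one possible technique, assuming NumPy:

```python
import numpy as np

# Rows are proportional -> det A = 0 -> a non-trivial solution of Ax = 0 exists
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
detA = np.linalg.det(A)

# One way to extract a non-trivial solution: the right singular vector
# belonging to the zero singular value of A
_, _, Vt = np.linalg.svd(A)
x = Vt[-1]
residual = np.linalg.norm(A @ x)
```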
2.6 Linear Equations: Number of Equations
Equals Number of Unknowns
• For inhomogeneous systems of equations 𝑨𝒙 = 𝒃 (𝒃 ≠ 𝟎)
– If det𝑨 ≠ 0, one unique solution 𝒙 = 𝑨−1 𝒃 exists
– If det𝑨 = 0, no unique solution exists
• Note: no solution or infinite solutions depending on 𝒃
–    𝑥 + 𝑦 = 1            𝑥 + 𝑦 = 1
     𝑥 + 𝑦 = 2     ,      2𝑥 + 2𝑦 = 2
     (no solution)        (infinitely many solutions)
– Gauss elimination is used to obtain the solution 𝒙 when det𝑨 ≠ 0
• since the direct establishment of 𝑨−1 is cumbersome
2.6 Linear Equations: Number of Equations
Equals Number of Unknowns
• Gauss elimination
– 𝑨𝒙 = 𝒃 where 𝑨 is called the coefficient matrix
– Triangularization
• The equations are combined in such a manner that the lower left
part of the new coefficient matrix 𝑨′ consists of zeros
       [ A′11  A′12  A′13  ⋯  A′1n ]
       [  0    A′22  A′23  ⋯  A′2n ]
  A′ = [  0     0    A′33  ⋯  A′3n ]
       [  ⋮     ⋮     ⋮         ⋮  ]
       [  0     0     0    ⋯  A′nn ]

• The elimination technique transforms the system into 𝑨′𝒙 = 𝒃′
• The system is triangularized
• The diagonal elements 𝐴′11, …, 𝐴′𝑛𝑛 are termed pivot elements
2.6 Linear Equations: Number of Equations
Equals Number of Unknowns
• Gauss elimination
– Back substitution
  [ A′11  A′12  A′13  ⋯  A′1n ] [ x1 ]   [ b′1 ]
  [  0    A′22  A′23  ⋯  A′2n ] [ x2 ]   [ b′2 ]
  [  0     0    A′33  ⋯  A′3n ] [ x3 ] = [ b′3 ]
  [  ⋮     ⋮     ⋮         ⋮  ] [  ⋮ ]   [  ⋮  ]
  [  0     0     0    ⋯  A′nn ] [ xn ]   [ b′n ]
• The unknown in the last equation can be solved by 𝑥𝑛 = 𝑏𝑛 ′ ⁄𝐴𝑛𝑛 ′
• This solution is substituted into the (n-1)th equation to solve for
𝑥𝑛−1
• The process is continued until all components in 𝒙 are determined
• det𝑨′ = det𝑨 (row operations do not change the determinant)
2.6 Linear Equations: Number of Equations
Equals Number of Unknowns
• e.g. Illustration of Gauss elimination
 200 −100 0   x1  8 
 −100 200 −100   x  = 8 
    
2

 0 −100 100   x3   2 
– Multiplying the 1st eq. by 12 and adding to the 2nd eq. result in
 200 −100 0   x1   8 
 0 150 −100   x2  =
12 
  
 0 −100 100   x3   2 
– The first component in the third row is already zero. Thus, multiply
the 2nd eq. by 2⁄3 and add to the 3rd eq. to obtain
 200 −100 0   x1   8 
 0 150 −100  x  = 12 
    
2

 0 0 100 3  x3  10   x1  18 


Use back substitution, we have  x2  =  28
1

100
 x3  30 
2.7 Linear Equations: Number of Equations is
Different from Number of Unknowns
• Consider the system of equations 𝑨𝒙 = 𝟎, where 𝑨 has the
dimension m×n
– For 𝑛 > 𝑚, i.e. more unknowns than equations, there exist at
least (𝑛 − 𝑚) linearly independent non-trivial solutions
2.8 Quadratic Forms and
Positive Definiteness
• If the quadratic form of a square matrix 𝑨 satisfies 𝒙T 𝑨𝒙 > 0
∀𝒙 ≠ 𝟎, the matrix 𝑨 is positive definite
• If 𝑨 is positive definite, then det𝑨 ≠ 0
– The inverse argument does not hold
– Proof
• Assume 𝒙T𝑨𝒙 > 0 ∀𝒙 ≠ 𝟎 and det𝑨 = 0
• Since det𝑨 = 0, there exists a non-trivial solution 𝒙 ≠ 𝟎 to the homogeneous
system of equations 𝑨𝒙 = 𝟎; for that 𝒙, 𝒙T𝑨𝒙 = 0, which violates 𝒙T𝑨𝒙 > 0 ∀𝒙 ≠ 𝟎
• Therefore, det𝑨 ≠ 0
• If a matrix is positive definite, it is required that all diagonal
components of the matrix are positive
– Proof
• Choose 𝒙T = [0 … 0 𝑥𝑘 0 … 0]
• It follows that 𝒙T𝑨𝒙 = 𝐴𝑘𝑘 𝑥𝑘^2 > 0, which proves the statement
• The square matrix 𝑨 is positive semi-definite if 𝒙T𝑨𝒙 ≥ 0 ∀𝒙 ≠ 𝟎
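For symmetric matrices, positive definiteness can be checked via the eigenvalues; the sketch below assumes NumPy and uses that equivalence (which the text does not itself state):

```python
import numpy as np

A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])   # symmetric, eigenvalues 1 and 3

# For symmetric A, positive definiteness is equivalent to all
# eigenvalues being positive
eigvals = np.linalg.eigvalsh(A)
is_pd = bool(np.all(eigvals > 0.0))

# The quadratic form x^T A x for one particular x != 0
x = np.array([1.0, 5.0])
quad = x @ A @ x
```

Consistent with the text: the diagonal components of this A are positive, and det A = 3 ≠ 0.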
2.9 Partitioning
• To facilitate matrix manipulation, it may be
useful to partition a matrix into submatrices
– Submatrix: a matrix obtained from the original matrix
by including only the components of certain rows
and columns
      [ A11  A12 | A13 ]
  A = [ A21  A22 | A23 ]
      [ A31  A32 | A33 ]

– Using this partitioning, the matrix A can be written as

      [ B  C ]             [ A11  A12 ]        [ A13 ]
  A = [ D  E ]  where  B = [ A21  A22 ] ,  C = [ A23 ] ,  D = [ A31  A32 ] ,  E = [ A33 ]
2.9 Partitioning
• e.g. example of using partitioning
  Ax = f

            [ x1 ]        [ f1 ]
  Given x = [ x2 ] ,  f = [ f2 ]
            [ x3 ]        [ f3 ]

             [ x1 ]                      [ f1 ]
  Define y = [ x2 ] ,  z = [ x3 ] ,  g = [ f2 ] ,  h = [ f3 ]

• The system of equations can be written as

  [ B  C ] [ y ]   [ g ]
  [ D  E ] [ z ] = [ h ]

• Multiplication of the matrices implies

  By + Cz = g
  Dy + Ez = h
• The partition must be made in such a way that the submatrices
possess the correct dimensions
• By partitioning matrices, not only multiplications but also
additions and subtractions can be carried out
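The block product above can be checked on a concrete 3×3 example; NumPy is assumed:

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)
x = np.array([1.0, 2.0, 3.0])

# Partition A into [[B, C], [D, E]] and x into [y; z] as in the text
B, C = A[:2, :2], A[:2, 2:]
D, E = A[2:, :2], A[2:, 2:]
y, z = x[:2], x[2:]

# Block multiplication: By + Cz stacked on Dy + Ez equals Ax
top = B @ y + C @ z
bottom = D @ y + E @ z
blocked = np.concatenate([top, bottom])
full = A @ x
```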
2.10 Differentiation and Integration
• If the components of the matrix depend on a variable x
         [ A11  A12  A13 ]
  A(x) = [ A21  A22  A23 ]

• Matrix differentiation

          [ dA11/dx  dA12/dx  dA13/dx ]
  dA/dx = [ dA21/dx  dA22/dx  dA23/dx ]
– e.g. A(x) b = f(x) where b is constant

  (dA/dx) b = df/dx
• Matrix integration
          [ ∫A11 dx  ∫A12 dx  ∫A13 dx ]
  ∫A dx = [ ∫A21 dx  ∫A22 dx  ∫A23 dx ]
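Component-wise differentiation can be checked numerically with a finite difference; the particular entries of A(x) below are illustrative assumptions, and NumPy is assumed:

```python
import numpy as np

# Illustrative entries for a 2x3 matrix A(x)
def A(x):
    return np.array([[x,          x ** 2, 1.0],
                     [np.sin(x),  x,      2.0 * x]])

# Component-wise derivatives dA_ij/dx
def dA(x):
    return np.array([[1.0,        2.0 * x, 0.0],
                     [np.cos(x),  1.0,     2.0]])

# Check dA/dx against a central finite difference of A(x)
x0, h = 0.7, 1e-6
fd = (A(x0 + h) - A(x0 - h)) / (2.0 * h)
err = np.max(np.abs(fd - dA(x0)))
```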