
CHAPTER 2

MATHEMATICAL BACKGROUND

2.1 INTRODUCTION
The mathematics required for the study of finite element analysis can vary from elementary
to sophisticated. Most concepts can be mastered with a reasonable knowledge of vector
analysis, matrix analysis, and differential equations.

2.2 VECTOR ANALYSIS


A vector is defined as a physical quantity that can be described by a single magnitude and a
direction that is related to a coordinate reference frame. A fundamental concept, which
justifies the use of vector analysis, is that physical quantities arbitrarily directed in
space can be resolved into orthogonal components corresponding to the reference frame.
Once the components are found, they can be manipulated using standard algebraic operations.
A graphical representation of a vector A is shown in Figure 2.1; mathematically, the vector is
expressed as

A = Ax i + Ay j + Az k ………. (2.1)

where i, j, and k are unit vectors directed along the x, y, and z axes, respectively.
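As a concrete illustration (not part of the original text), the component form of Equation (2.1) maps directly onto an array, and the magnitude follows from the Pythagorean rule. A minimal Python/NumPy sketch with arbitrary component values:

```python
import numpy as np

# Vector A resolved into orthogonal components [Ax, Ay, Az] (Eq. 2.1).
# The numeric values are illustrative only.
A = np.array([3.0, 4.0, 12.0])

magnitude = np.linalg.norm(A)   # |A| = sqrt(Ax^2 + Ay^2 + Az^2)
print(magnitude)                # 13.0
```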

Figure 2.1: Representation of vector A

The vector differential operator del (∇) is defined as

∇ = (∂/∂x) i + (∂/∂y) j + (∂/∂z) k ………. (2.2)

This operator, by definition, has vector properties and is used to define three fundamental
vector operations: the gradient, the divergence, and the curl. These vector operations are
useful when defining integral theorems such as the divergence theorem and Green's theorem,
which is sometimes called the Green-Gauss theorem.
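As an aside not in the original text, the three operations can be exercised symbolically; a small SymPy sketch with arbitrary example fields:

```python
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')   # Cartesian frame with unit vectors N.i, N.j, N.k

# Arbitrary scalar and vector fields, chosen only for illustration.
f = N.x**2 * N.y + N.z
F = N.x*N.y*N.i + N.y*N.z*N.j + N.z*N.x*N.k

print(gradient(f))     # 2*x*y i + x**2 j + 1 k  (printed in N components)
print(divergence(F))   # x + y + z
print(curl(F))         # -y i - z j - x k
```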

2.3 MATRIX THEORY
2.3.1 Definitions
The mathematical description of many physical problems is often simplified by the use of
rectangular arrays of scalar quantities of the form

[A] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} ………. (2.3)

Such an array is known as a matrix, and the scalar values that compose the array are the
elements of the matrix. The position of each element aij is identified by the row subscript i
and the column subscript j.

The number of rows and columns determine the order of a matrix. A matrix having m rows
and n columns is said to be of order “m by n” (usually denoted as m × n). If the number of
rows and columns in a matrix are the same, the matrix is a square matrix and said to be of
order n. A matrix having only one row is called a row matrix or row vector. Similarly, a
matrix with a single column is a column matrix or column vector.

If the rows and columns of a matrix [A] are interchanged, the resulting matrix is known as the
transpose of [A], denoted by [A]T . For the matrix defined in Equation (2.3), the transpose is

[A]T = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix} ………. (2.4)

and we observe that, if [A] is of order m by n, then [A]T is of order n by m.
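A short NumPy check of this order rule (illustrative values, not from the text):

```python
import numpy as np

# A 2 x 3 matrix, i.e., order m x n with m = 2, n = 3.
A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A.shape)     # (2, 3)
print(A.T)         # rows and columns interchanged
print(A.T.shape)   # (3, 2) -- the transpose is of order n x m
```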

Several important special types of matrices are defined next. A diagonal matrix is a square
matrix composed of elements such that aij = 0 for i ≠ j. Therefore, the only nonzero terms are
those on the main diagonal (upper left to lower right).

An identity matrix (denoted [I ]) is a diagonal matrix in which the value of the nonzero terms
is unity.

A null matrix (also known as a zero matrix [0]) is a matrix of any order in which the value of
all elements is 0.

A symmetric matrix is a square matrix composed of elements such that the nondiagonal
values are symmetric about the main diagonal. Mathematically, symmetry is expressed as
aij = aji for i ≠ j. Note that the transpose of a symmetric matrix is the same as the original
matrix.

A skew symmetric matrix is a square matrix in which the diagonal terms aii have a value of 0
and the off-diagonal terms have values such that aij = −aji . For a skew symmetric matrix, we
observe that the transpose is obtained by changing the algebraic sign of each element of the
matrix.
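These definitions translate directly into simple numerical checks; a sketch with illustrative entries:

```python
import numpy as np

I = np.eye(3)                 # identity matrix [I]
D = np.diag([2.0, 5.0, 7.0])  # diagonal matrix: a_ij = 0 for i != j
Z = np.zeros((2, 4))          # null matrix [0], of any order

S = np.array([[1, 4, 5],
              [4, 2, 6],
              [5, 6, 3]])
print(np.array_equal(S, S.T))   # True: symmetric, a_ij = a_ji

K = np.array([[ 0,  2, -3],
              [-2,  0,  1],
              [ 3, -1,  0]])
print(np.array_equal(K.T, -K))  # True: skew symmetric, a_ii = 0, a_ij = -a_ji
```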

2.3.2 Algebraic Operations
Addition and subtraction of matrices can be defined only for matrices of the same order. If
[A] and [B] are both m × n matrices, the two are said to be conformable for addition or
subtraction. The sum of two m × n matrices is another m × n matrix having elements obtained
by summing the corresponding elements of the original matrices. Symbolically, matrix
addition is expressed as
[C] = [A] + [B] ………. (2.5)
where
cij = aij + bij , i = 1, …, m; j = 1, …, n ………. (2.6)

The operation of matrix subtraction is similarly defined. Matrix addition and subtraction are
commutative and associative; that is,
[A] + [B] = [B] + [A] ………. (2.7)
[A] + ([B] + [C]) = ([A] + [B]) + [C] ………. (2.8)

The product of a scalar and a matrix is a matrix in which every element of the original matrix
is multiplied by the scalar. If a scalar u multiplies matrix [A], then
[B] = u[A] ……… (2.9)
where the elements of [B] are given by
bij = u aij , i = 1, …, m; j = 1, …, n ………. (2.10)

Matrix multiplication is defined in such a way as to facilitate the solution of simultaneous
linear equations. The product of two matrices [A] and [B], denoted
[C] = [A][B] ………. (2.11)
exists only if the number of columns in [A] is equal to the number of rows in [B]. If this
condition is satisfied, the matrices are said to be conformable for multiplication. If [A] is of
order m × p and [B] is of order p × n, the matrix product [C] = [A][B] is an m × n matrix
having elements defined by
c_{ij} = \sum_{k=1}^{p} a_{ik} b_{kj} ………. (2.12)

Thus, each element cij is the sum of products of the elements in the ith row of [A] and the
corresponding elements in the jth column of [B]. When referring to the matrix product [A][B],
matrix [A] is called the premultiplier and matrix [B] is the postmultiplier.
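The conformability condition and the element formula of Equation (2.12) can be verified directly; a brief sketch with arbitrary values:

```python
import numpy as np

# [A] is m x p and [B] is p x n; here m = 2, p = 3, n = 2.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[ 7,  8],
              [ 9, 10],
              [11, 12]])

C = A @ B                # conformable: columns of A equal rows of B
print(C.shape)           # (2, 2) -- an m x n result

# Element check of Eq. (2.12): c_ij = sum over k of a_ik * b_kj
c00 = sum(A[0, k] * B[k, 0] for k in range(3))
print(c00 == C[0, 0])    # True
```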

In general, matrix multiplication is not commutative; that is,
[A][B] ≠ [B][A] ………. (2.13)

Matrix multiplication does satisfy the associative and distributive laws, and we can therefore
write
([A][B])[C] = [A]([B][C]) ………. (2.14)
[A]([B] + [C]) = [A][B] + [A][C] ………. (2.15)

([A] + [B])[C] = [A][C] + [B][C] ………. (2.16)

In addition to being noncommutative, matrix algebra differs from scalar algebra in other
ways. For example, the equality [A][B] = [A][C] does not necessarily imply [B] = [C], since
algebraic summing is involved in forming the matrix products. As another example, if the
product of two matrices is a null matrix, that is, [A][B] = [0], the result does not necessarily
imply that either [A] or [B] is a null matrix.
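Both caveats are easy to demonstrate; the matrices below are constructed purely for illustration:

```python
import numpy as np

A = np.array([[1, 1],
              [0, 0]])
B = np.array([[ 1, 0],
              [-1, 0]])

print(A @ B)   # [[0 0], [0 0]] -- a null product, yet neither factor is null
print(B @ A)   # [[ 1  1], [-1 -1]] -- and clearly [A][B] != [B][A]
```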

2.3.3 Determinants
The determinant of a square matrix is a scalar value that is unique for a given matrix. The
determinant of an n × n matrix is represented symbolically as

det[A] = |A| = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} ………. (2.17)

and is evaluated according to a very specific procedure. First, consider the 2 × 2 matrix

[A] = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} ………. (2.18)

for which the determinant is defined as

|A| = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} − a_{12}a_{21} ………. (2.19)

Next, consider the determinant of a 3 × 3 matrix

|A| = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} ………. (2.20)

defined as
|A| = a11(a22a33 − a23a32) − a12(a21a33 − a23a31) + a13(a21a32 − a22a31) ………. (2.21)

Note that the expressions in parentheses are the determinants of the second-order matrices
obtained by striking out the first row and the first, second, and third columns, respectively.
These are known as minors. A minor of a determinant is another determinant formed by
removing an equal number of rows and columns from the original determinant. The minor
obtained by removing row i and column j is denoted |Mij |. Using this notation, Equation
(2.21) becomes
|A| = a11|M11| − a12|M12| + a13|M13| ………. (2.22)
and the determinant is said to be expanded in terms of the cofactors of the first row. The
cofactors of an element aij are obtained by applying the appropriate algebraic sign to the
minor |Mij | as follows. If the sum of row number i and column number j is even, the sign of
the cofactor is positive; if i + j is odd, the sign of the cofactor is negative. Denoting the
cofactor as Cij we can write
Cij = (−1)i+j |Mij | ………. (2.23)

The determinant given in Equation (2.22) can then be expressed in terms of cofactors as

|A| = a11C11 + a12C12 + a13C13 ………. (2.24)

The determinant of a square matrix of any order can be obtained by expanding the
determinant in terms of the cofactors of any row i as
|A| = \sum_{j=1}^{n} a_{ij} C_{ij} ………. (2.25)
or any column j as
|A| = \sum_{i=1}^{n} a_{ij} C_{ij} ………. (2.26)

Application of Equation (2.25) or (2.26) requires that the cofactors Cij be further expanded to
the point that all minors are of order 2 and can be evaluated by Equation (2.19).
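The cofactor expansion translates into a short recursive routine; this is a didactic sketch only, since the factorial cost makes it impractical beyond small matrices (library routines such as np.linalg.det are preferred in practice):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row (Eqs. 2.22, 2.25)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    if n == 2:
        return A[0, 0]*A[1, 1] - A[0, 1]*A[1, 0]               # Eq. (2.19)
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # minor of row 1
        total += (-1)**j * A[0, j] * det_cofactor(minor)       # cofactor sign
    return total

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
print(det_cofactor(A))      # 41.0
print(np.linalg.det(A))     # ~41.0, agrees
```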

2.3.4 Matrix Inversion


The inverse of a square matrix [A] of order n is a square matrix, denoted by [A]−1, that satisfies
[A]−1[A] = [A][A]−1 = [I ] ………. (2.27)
that is, the product of a square matrix and its inverse is the identity matrix of order n.

The concept of the inverse of a matrix is of prime importance in solving simultaneous linear
equations by matrix methods. Consider the algebraic system
a11x1 + a12x2 + a13x3 = y1
a21x1 + a22x2 + a23x3 = y2 ………. (2.28)
a31x1 + a32x2 + a33x3 = y3
which can be written in matrix form as
[A]{x} = {y} ………. (2.29)
where

[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} ………. (2.30)

is the 3 × 3 coefficient matrix,

{x} = \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} ………. (2.31)

is the 3 × 1 column matrix (vector) of unknowns, and

{y} = \begin{Bmatrix} y_1 \\ y_2 \\ y_3 \end{Bmatrix} ………. (2.32)

is the 3 × 1 column matrix (vector) representing the right-hand sides of the equations (the
“forcing functions”).

If the inverse of matrix [A] can be determined, we can multiply both sides of Equation (2.29)
by the inverse to obtain
[A]−1[A]{x} = [A]−1{y} ………. (2.33)

Noting that
[A]−1[A]{x} = ([A]−1[A]){x} = [I]{x} = {x} ………. (2.34)
the solution for the simultaneous equations is given by Equation (2.33) directly as
{x} = [A]−1{y} ………. (2.35)

While presented in the context of a system of three equations, the result represented by
Equation (2.35) is applicable to any number of simultaneous algebraic equations and gives
the unique solution for the system of equations.

The inverse of matrix [A] can be determined in terms of its cofactors and determinant as
follows. Let the cofactor matrix [C] be the square matrix having as elements the cofactors
defined in Equation (2.23). The adjoint of [A] is defined as
adj[A] = [C]T ……….(2.36)

The inverse of [A] is then formally given by

[A]^{-1} = \frac{\mathrm{adj}[A]}{|A|} = \frac{[C]^T}{|A|} ………. (2.37)

If the determinant of [A] is 0, Equation (2.37) shows that the inverse does not exist. In this
case, the matrix is said to be singular and Equation (2.35) provides no solution for the system
of equations. Singularity of the coefficient matrix indicates one of two possibilities: (1) no
solution exists or (2) multiple (nonunique) solutions exist. In the latter case, the algebraic
equations are not linearly independent.
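The solution procedure of Equations (2.29) through (2.37) can be mirrored numerically; a sketch with an arbitrary nonsingular system:

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
y = np.array([4.0, 5.0, 6.0])

# Eq. (2.35): {x} = [A]^-1 {y}. np.linalg.solve avoids forming the inverse.
x = np.linalg.solve(A, y)
print(np.allclose(A @ x, y))    # True

# Eq. (2.37): [A]^-1 = adj[A]/|A| = [C]^T/|A|, via the cofactor matrix [C].
detA = np.linalg.det(A)
C = np.array([[(-1)**(i + j) *
               np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
               for j in range(3)] for i in range(3)])
print(np.allclose(C.T / detA, np.linalg.inv(A)))   # True
```

If |A| were zero, np.linalg.solve would raise LinAlgError, mirroring the singular case discussed above.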

2.3.5 Matrix Partitioning


To facilitate matrix multiplications and to take advantage of the special form of the matrices,
it may be useful to partition a matrix into submatrices. A submatrix is a matrix that is
obtained from the original matrix by including only the elements of certain rows and
columns. The idea is demonstrated using a specific case in which the dashed lines are the
lines of partitioning

[A] = ………. (2.38)

It should be noted that each of the partitioning lines must run completely across the original
matrix. Using the partitioning, matrix [A] is written as

[A] = \begin{bmatrix} [A_{11}] & [A_{12}] \\ [A_{21}] & [A_{22}] \end{bmatrix} ………. (2.39)

where

[A11] = ; [A12] = ; etc. ………. (2.40)

The right-hand side of (2.39) could again be partitioned, such as

[A] = ……….(2.41)

and we may write [A] as

[A] = = ; and = ………. (2.42)

The partitioning of matrices can be advantageous in saving computer storage; namely, if
submatrices repeat, it is necessary to store the submatrix only once. The same applies in
arithmetic: using submatrices, we may identify a typical operation that is repeated many
times. We then carry out this operation only once and use the result whenever it is needed.

The rules to be used in calculations with partitioned matrices follow from the definition of
matrix addition, subtraction, and multiplication. Using partitioned matrices, we can add,
subtract, or multiply as if the submatrices were ordinary matrix elements, provided the
original matrices have been partitioned in such a way that it is permissible to perform the
individual submatrix additions, subtractions, or multiplications.
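This block-wise rule is easy to confirm numerically; a sketch partitioning two conformable 4 × 4 matrices (arbitrary values) into 2 × 2 submatrices:

```python
import numpy as np

A = np.arange(16.0).reshape(4, 4)
B = np.arange(16.0, 32.0).reshape(4, 4)

# Partition each matrix into four 2 x 2 submatrices.
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

# Multiply treating the submatrices as ordinary elements.
C = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
              [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])
print(np.allclose(C, A @ B))   # True
```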

EXAMPLE 2.1
Evaluate the matrix product [C] = [A][B] by using the following partitioning.

[A] = ; [B] =

Here we have

[A] = ; [B] =

Therefore, [A] [B] = ………. (i)

where
[A11] [B1] = =

[A12] [B2] = =

[A21] [B1] = =

[A22] [B2] = =

Then, substituting in Equation (i), we get

[A] [B] =

2.4 VECTOR SPACE
Consider a column vector of order 3 such as

{x} = \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} ………. (2.43)

We know from elementary geometry that {x} represents a geometric vector in a chosen
coordinate system in three-dimensional space. Figure 2.2 shows assumed coordinate axes
and the vector corresponding to Equation (2.43) in this system. We should note that the
geometric representation of {x} depends completely on the coordinate system chosen; in
other words, if Equation (2.43) gave the components of a vector in a different coordinate
system, then the geometric representation of {x} would be different from the one in
Figure 2.2. Therefore, the coordinates (or components) of a vector alone do not define the
actual geometric quantity; they need to be given together with the specific coordinate
system in which they are measured.

Figure 2.2: Geometric representation of vector {x}

The concepts of three-dimensional geometry generalize to a vector of any finite order n. If
n > 3, we can no longer obtain a plot of the vector; however, we shall see that mathematically
all concepts that pertain to vectors are independent of n. As before, when we considered the
specific case n = 3, the vector of order n represents a quantity in a specific coordinate system
of an n-dimensional space.

Assume that we are dealing with a number of vectors, all of order n, defined in a
fixed coordinate system. The collection of vectors x1, x2, …, xn is said to be linearly
dependent if there exist numbers α1, α2, …, αn, not all zero, such that
α1 x1 + α2 x2 + … + αn xn = 0 ………. (2.44)

If the vectors are not linearly dependent, they are called linearly independent vectors.

EXAMPLE 2.2
Determine if the vectors {1 0 0}T, {0 1 0}T and {0 0 1}T are linearly dependent or
independent.

According to the definition of linear dependency, we check whether there are constants α1, α2,
and α3, not all zero, that satisfy the equation

α1 {1 0 0}T + α2 {0 1 0}T + α3 {0 0 1}T = {0 0 0}T

or, {α1 α2 α3}T = {0 0 0}T

which is satisfied only if α1 = α2 = α3 = 0; therefore, the given vectors are linearly independent.
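In computation, linear independence is conveniently tested through the rank of the matrix whose columns are the given vectors; a sketch using the vectors of Example 2.2 plus a deliberately dependent set:

```python
import numpy as np

# Columns are the vectors of Example 2.2; full rank means independent.
X = np.column_stack(([1, 0, 0], [0, 1, 0], [0, 0, 1]))
print(np.linalg.matrix_rank(X))   # 3 -> linearly independent

# A dependent set: the third column equals the sum of the first two.
Y = np.column_stack(([1, 0, 0], [0, 1, 0], [1, 1, 0]))
print(np.linalg.matrix_rank(Y))   # 2 < 3 -> linearly dependent
```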

2.5 COORDINATE TRANSFORMATIONS


It is convenient, and in fact often necessary, to express variables and field equations in
several different coordinate systems. This situation requires the development of particular
transformation rules for scalar, vector, matrix, and higher-order variables.

Consider two Cartesian coordinate systems as shown in Figure 2.3. The two Cartesian frames
(x1, x2, x3) and (x1', x2', x3') differ only by orientation, and the unit basis vectors for each
frame are {ei} = {e1, e2, e3} and {ei'} = {e1', e2', e3'}.

Figure 2.3: Change of Cartesian coordinate frames

Let Qij denote the cosine of the angle between the xi'-axis and the xj-axis; then
Qij = cos(xi’, xj) ………. (2.45)

Using this definition, the basis vectors in the primed coordinate frame can be easily expressed
in terms of those in the unprimed frame by the relations
e1' = Q11 e1 + Q12 e2 + Q13 e3
e2' = Q21 e1 + Q22 e2 + Q23 e3 ………. (2.46)
e3' = Q31 e1 + Q32 e2 + Q33 e3

This can also be expressed in matrix and index forms as
{e'} = [Q]{e}; or ei' = Qij ej ………. (2.46a)

Likewise, the opposite transformation can be written using the same format as
{e} = [Q]T {e'}; or ei = Qji ej' ………. (2.47)

Now an arbitrary vector v can be written in either of the two coordinate systems as
v = v1 e1 + v2 e2 + v3 e3 = vi ei
= v1' e1' + v2' e2' + v3' e3' = vi' ei' ………. (2.48)

Substituting from (2.47) into (2.48)1 gives

v = vi Qji ej'

but from (2.48)2, v = vj' ej'; comparing coefficients of ej' then gives
vi' = Qij vj ………. (2.49)

In similar fashion, using (2.46a) in (2.48)2 gives

vi = Qji vj' ………. (2.50)

Relations (2.49) and (2.50) constitute the transformation laws for the Cartesian components
of a vector under a change of rectangular Cartesian coordinate frame. It should be understood
that under such transformations, the vector is unaltered (retaining original length and
orientation), and only its components are changed. Consequently, if we know the components
of a vector in one frame, relation (2.49) and/or relation (2.50) can be used to calculate
components in any other frame.

EXAMPLE 2.3
The components of a vector in a particular coordinate frame are given by ai = {1 4 2}T.
Determine the components of the vector in a new coordinate system found through a
rotation of 60° (π/3 radians) about the x3-axis. Choose a counterclockwise rotation when
viewing down the negative x3-axis (see Figure E2.3).

Figure E2.3
The original and primed coordinate systems shown in Figure E2.3 establish the angles
between the various axes. The solution starts by determining the rotation matrix for this case:

Q_{ij} = \begin{bmatrix} \cos 60° & \cos 30° & \cos 90° \\ \cos 150° & \cos 60° & \cos 90° \\ \cos 90° & \cos 90° & \cos 0° \end{bmatrix} = \begin{bmatrix} 0.5 & 0.866 & 0 \\ -0.866 & 0.5 & 0 \\ 0 & 0 & 1 \end{bmatrix}

The transformation of the vector components follows from Equation (2.49):

a_i' = Q_{ij} a_j = \begin{bmatrix} 0.5 & 0.866 & 0 \\ -0.866 & 0.5 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{Bmatrix} 1 \\ 4 \\ 2 \end{Bmatrix} = \begin{Bmatrix} 3.96 \\ 1.13 \\ 2 \end{Bmatrix}
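The arithmetic above is quickly verified in code; a sketch that builds [Q] for the same 60° rotation about the x3-axis and applies Equation (2.49):

```python
import numpy as np

theta = np.deg2rad(60.0)
c, s = np.cos(theta), np.sin(theta)

# Direction cosines of Eq. (2.45) for a rotation about x3 (Figure E2.3).
Q = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

a = np.array([1.0, 4.0, 2.0])
print(Q @ a)   # [3.964 1.134 2.0], matching the result above
```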

EXERCISE 2
1. For the matrices A and B, verify that (AB)-1 = B-1 A-1.

A = ; B =

2. Taking advantage of partitioning, evaluate [C] = [A]{b}, where

[A] = ; {b} =

3. Investigate whether the following vectors are linearly dependent or independent.

x1 = ; x2 = ; x3 = ;

4. The components of a force expressed in the unprimed coordinate system shown in
Figure P2.4 are R = {0 1 }T. Evaluate the components of the force in the primed
coordinate system.

Figure P2.4

