
APPENDIX A

REVIEW OF MATRIX ALGEBRA

A.1 GENERAL
The following presents a summary of the rudimentary elements and concepts of
matrix algebra to assist those who require a review of the subject. Obviously,
a detailed treatment of the subject matter is beyond the scope of these notes, and the
student is referred to books on advanced engineering mathematics and numerical
analysis for a more exhaustive presentation.

A.2 BASIC CONCEPTS AND DEFINITIONS


A matrix is a rectangular array of elements enclosed in brackets and denoted by a
single symbol. For example, below is an (m × n) (read "m by n") matrix, denoted as A or
[A], having m rows and n columns.
$$A = [A] = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots & & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\
\vdots & \vdots & & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mj} & \cdots & a_{mn}
\end{pmatrix}$$
The typical term aij is referred to as an element of the matrix. In the double-subscript
notation used, the first subscript refers to the row and the second subscript refers to
the column containing the element. In general, an element may be a real number, a
complex number, a variable symbol, a function of a variable or variables, or another
matrix (referred to as a submatrix).
scalar - a matrix having only one element, i.e. m = n = 1
row matrix - a matrix having only one row, m = 1 and n > 1
column matrix or column vector or vector - a matrix with only one column, m > 1 and
n = 1. Matrices requiring only a single line of elements are usually presented as a
column matrix rather than a row matrix. Since there is only one column, a single
subscript is sufficient to identify the ith element, ai, of a column or row matrix.
square matrix - a matrix having the same number of rows as columns, i.e. m=n. The
number of rows is called the order of the square matrix. The diagonal containing
the elements $a_{11}, a_{22}, \ldots, a_{nn}$ is known as the main or principal diagonal, and these
elements are known as the diagonal elements. The elements other than the diagonal
elements are referred to as the off-diagonal elements.


rectangular matrix - a matrix whose number of rows is not equal to the number of
columns.
submatrix - a matrix obtained by omitting some rows and columns. By convention
an (mxn) matrix [A] includes itself as a submatrix, as it is the matrix obtained by
deleting no rows or columns.
A null or zero matrix, denoted by [Φ], is a matrix with all elements equal to
zero, $a_{ij} = 0$ for all i and j.
Special Square Matrices
Diagonal Matrix - a square matrix with all off-diagonal elements equal to zero,
$a_{ij} = 0$ for all $i \neq j$.
Scalar Matrix - a diagonal matrix with all diagonal elements equal to the same scalar,
aii = s for all i.
$$\begin{pmatrix}
a_{11} & 0 & \cdots & 0 & \cdots & 0 \\
0 & a_{22} & \cdots & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & & \vdots \\
0 & 0 & \cdots & a_{ii} & \cdots & 0 \\
\vdots & \vdots & & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & \cdots & a_{nn}
\end{pmatrix} \qquad \begin{pmatrix}
s & 0 & \cdots & 0 & \cdots & 0 \\
0 & s & \cdots & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & & \vdots \\
0 & 0 & \cdots & s & \cdots & 0 \\
\vdots & \vdots & & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & \cdots & s
\end{pmatrix}$$

Diagonal Matrix                                  Scalar Matrix
Unit or Identity Matrix - a scalar matrix with all diagonal elements equal to 1.0. This
is denoted by I or [ I ].
$$I = [I] = \begin{bmatrix}
1 & 0 & \cdots & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & & \vdots \\
0 & 0 & \cdots & 1 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & \cdots & 1
\end{bmatrix}$$
Triangular Matrix - a square matrix in which all elements above or below the main
diagonal are equal to zero. If all the elements below the main diagonal are zero, it is
called an upper triangular matrix. If all the elements above the main diagonal are
zero, it is called a lower triangular matrix.

$$U = [U] = \begin{bmatrix}
u_{11} & u_{12} & \cdots & u_{1n} \\
0 & u_{22} & \cdots & u_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & u_{nn}
\end{bmatrix} \qquad L = [L] = \begin{bmatrix}
l_{11} & 0 & \cdots & 0 \\
l_{21} & l_{22} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
l_{n1} & l_{n2} & \cdots & l_{nn}
\end{bmatrix}$$

Upper Triangular Matrix                          Lower Triangular Matrix


Symmetric Matrix - a square matrix whose elements aij = aji for all i and j; i.e. the
elements are symmetrical about the principal diagonal.
Skew-symmetric matrix - a square matrix whose elements aij = -aji for all i and j. Note
that to satisfy the definition for the diagonal terms, aii = -aii, the diagonal elements
must equal zero. If aii is not equal to zero, while the off-diagonal terms satisfy the
definition, the matrix is called a skew matrix.
$$\begin{bmatrix}
1 & 2 & 3 & 4 \\
2 & 5 & 6 & 7 \\
3 & 6 & 8 & 9 \\
4 & 7 & 9 & 10
\end{bmatrix} \qquad \begin{bmatrix}
0 & 1 & -2 & 3 \\
-1 & 0 & -4 & 5 \\
2 & 4 & 0 & -6 \\
-3 & -5 & 6 & 0
\end{bmatrix}$$

Symmetric Matrix                                 Skew-Symmetric Matrix

Equality Of Matrices
Two matrices [A] and [B] are equal if and only if [A] and [B] have the same number
of rows and the same number of columns, and the corresponding elements are equal,
i.e. $a_{ij} = b_{ij}$ for all i and j. The equality is denoted by the equal sign as in ordinary
algebra.

A.3 TRANSPOSE OF A MATRIX


The transpose of an (m×n) matrix [A] is denoted as $[A]^T$ and is an (n×m) matrix
obtained by interchanging the corresponding rows and columns of matrix [A]. Thus
the ith row of matrix A becomes the ith column of $A^T$; or alternatively, the jth column
of A becomes the jth row of $A^T$. For example,
$$\text{if } A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \end{bmatrix}
\quad \text{then} \quad
A^T = \begin{bmatrix} 1 & 5 \\ 2 & 6 \\ 3 & 7 \\ 4 & 8 \end{bmatrix}$$
The transpose of a row matrix is a column matrix and, conversely, the transpose of a
column matrix is a row matrix. Authors frequently write vectors in transposed form
to save space.

The following may be easily verified and are presented here without proof:
a) transposition is a reversible operation, $[A^T]^T = [A]$
b) if s is a scalar, $s^T = s$
c) if [B] is a symmetric, diagonal, or scalar matrix, $[B]^T = [B]$
d) for an identity matrix, $[I]^T = [I]$
e) for a square null matrix, $[\Phi]^T = [\Phi]$; if the null matrix is not square, the
transpose is still a null matrix but by definition is not equal to the original
null matrix, since the sizes differ
f) for a skew-symmetric matrix [C], $[C]^T = -[C]$

A.4 MATRIX ADDITION AND SUBTRACTION


The sum or difference of two matrices is defined only if the matrices have the same
number of rows and the same number of columns, and is a matrix of the same size
whose elements are equal to the sum, or difference, of the corresponding elements of
the original matrices.
Thus, if [C] = [A] + [B], then $c_{ij} = a_{ij} + b_{ij}$ for all i and j;
or if [C] = [A] - [B], then $c_{ij} = a_{ij} - b_{ij}$ for all i and j.

For example, if
$$A = \begin{bmatrix} 1 & 4 \\ -2 & 5 \\ 3 & -6 \end{bmatrix} \qquad
B = \begin{bmatrix} 12 & -11 \\ 10 & -9 \\ -8 & 7 \end{bmatrix}$$
then
$$A + B = \begin{bmatrix} 13 & -7 \\ 8 & -4 \\ -5 & 1 \end{bmatrix} \qquad
A - B = \begin{bmatrix} -11 & 15 \\ -12 & 14 \\ 11 & -13 \end{bmatrix}$$

From the definition of matrix addition and subtraction, these operations have
properties similar to the corresponding operations for real numbers, i.e.
a) A + B = B + A (commutative property)
b) (A + B) + C = A + (B + C) (associative property)
c) A + Φ = A (identity)
d) A - A = Φ
e) $(A + B)^T = A^T + B^T$
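As a quick numerical check of the element-wise definitions and the properties above, the following is a minimal sketch in Python (numpy is assumed to be available), using the example matrices of this section:

```python
# A minimal numpy sketch verifying element-wise addition and subtraction
# with the example matrices of this section.
import numpy as np

A = np.array([[1.0, 4.0], [-2.0, 5.0], [3.0, -6.0]])
B = np.array([[12.0, -11.0], [10.0, -9.0], [-8.0, 7.0]])

print(A + B)   # [[13. -7.] [ 8. -4.] [-5.  1.]]
print(A - B)   # [[-11. 15.] [-12. 14.] [ 11. -13.]]

# commutative property and the null-matrix identity
assert np.array_equal(A + B, B + A)
assert np.array_equal(A + np.zeros_like(A), A)   # A + Phi = A
```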

A.5 MULTIPLICATION OF A MATRIX BY A SCALAR


If [A] is an (m×n) matrix and s is a scalar, the product of s and [A], denoted by s[A], is
an (m×n) matrix whose elements are equal to the product of s and the corresponding
elements of [A]. Thus, if [B] = s[A], then $b_{ij} = s\,a_{ij}$. For example, if s = 2 and
$$A = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix} \qquad
2A = \begin{bmatrix} 2 & 8 \\ 4 & 10 \\ 6 & 12 \end{bmatrix}$$
If [A] and [B] are matrices, and s and t are scalars, the following can be verified:
a) s[A] = [A]s
b) s([A] + [B]) = s[A] + s[B]
c) (s + t)[A] = s[A] + t[A]
d) -1[A] = -[A]
e) $(s[A])^T = s[A]^T$
f) (st)[A] = s(t[A]) = t(s[A])
g) [A] + [A] = 2[A]
h) a scalar matrix can be expressed as s[I]

A.6 MULTIPLICATION OF A MATRIX BY ANOTHER MATRIX


If [A] is an (m×n) matrix and [B] is an (r×p) matrix, then the product [A][B] (in this
order) is defined only if the number of columns of [A] is equal to the number of rows
of [B], i.e. r = n, and is an (m×p) matrix whose elements are


$$c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj} = \sum_{k=1}^{n} a_{ik}b_{kj}$$

For example,
$$\text{if } A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \qquad
X = \begin{bmatrix} x \\ y \\ z \end{bmatrix} \qquad \text{and} \qquad
B = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$$
then
$$[A][X] = \begin{bmatrix} a_{11}x + a_{12}y + a_{13}z \\ a_{21}x + a_{22}y + a_{23}z \\ a_{31}x + a_{32}y + a_{33}z \end{bmatrix}$$
and the system of linear equations
$$\begin{aligned}
a_{11}x + a_{12}y + a_{13}z &= b_1 \\
a_{21}x + a_{22}y + a_{23}z &= b_2 \\
a_{31}x + a_{32}y + a_{33}z &= b_3
\end{aligned}$$
can be expressed in matrix form as [A][X] = [B]; see section A.10.
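The summation definition translates directly into three nested loops. The following is a minimal Python sketch (illustrative, not an efficient implementation), applied to the (3×2)(2×3) example that follows:

```python
# A minimal sketch of the definition c_ij = sum_k a_ik * b_kj written as
# explicit loops. A is (m x n); B must be (n x p) to be conformable.
def matmul(A, B):
    m, n = len(A), len(A[0])
    r, p = len(B), len(B[0])
    if r != n:
        raise ValueError("not conformable: columns of A != rows of B")
    C = [[0.0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, -2], [3, -4], [3, -2]]
B = [[1, -2, 3], [0, 1, 2]]
print(matmul(A, B))  # [[1.0, -4.0, -1.0], [3.0, -10.0, 1.0], [3.0, -8.0, 5.0]]
```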

$$\text{if } A = \begin{bmatrix} 1 & -2 \\ 3 & -4 \\ 3 & -2 \end{bmatrix} \qquad
B = \begin{bmatrix} 1 & -2 & 3 \\ 0 & 1 & 2 \end{bmatrix} \qquad \text{and} \qquad
C = \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix}$$
then
$$[A][B] = \begin{bmatrix} 1 & -4 & -1 \\ 3 & -10 & 1 \\ 3 & -8 & 5 \end{bmatrix} \qquad
[B][A] = \begin{bmatrix} 4 & 0 \\ 9 & -8 \end{bmatrix} \qquad
[B][C] = \begin{bmatrix} 7 \\ 4 \end{bmatrix}$$

When the number of columns of [A] is equal to the number of rows of [B], the
two matrices are said to be conformable with respect to matrix multiplication in the
order [A][B]. The products [A][C], [C][A], and [C][B] in the above example are
undefined, since the matrices are nonconformable in these orders.

Note also that in the above example the product [A][B] is not equal to the product
[B][A] – the products are not even of the same size. The following example shows
that, in general, matrix multiplication is not commutative even when the matrices
are conformable in both orders.


$$\text{if } A = \begin{bmatrix} 1 & 2 & 3 \\ -4 & 5 & 4 \\ 3 & -2 & 1 \end{bmatrix}
\quad \text{and} \quad
B = \begin{bmatrix} 1 & 0 & 3 \\ -1 & 2 & 2 \\ 2 & 0 & 1 \end{bmatrix}$$
then
$$[A][B] = \begin{bmatrix} 5 & 4 & 10 \\ -1 & 10 & 2 \\ 7 & -4 & 6 \end{bmatrix}
\quad \text{and} \quad
[B][A] = \begin{bmatrix} 10 & -4 & 6 \\ -3 & 4 & 7 \\ 5 & 2 & 7 \end{bmatrix}$$
The order of the matrix product must therefore be carefully observed at all times.
(Exceptions to this observation are shown below.) Because of this, the product
[A][B] can be described more precisely as: A premultiplies B (B is premultiplied
by A), or B postmultiplies A (A is postmultiplied by B).

If more than two matrices are to be multiplied, the conformability requirement
must be satisfied by adjacent matrices. Thus, for the triple product [A][B][C], matrix
[B] must have as many rows as [A] has columns, and as many columns as [C] has
rows.

If [A], [B], and [C] are conformable matrices in the order indicated, and s is a scalar,
then the following may be verified:
a) [A][B] ≠ [B][A] in general; the matrix product is not commutative
b) s([A][B]) = (s[A])[B] = [A](s[B]) = ([A][B])s
c) [A][B][C] = ([A][B])[C] = [A]([B][C])
d) [C]([A] + [B]) = [C][A] + [C][B]
e) ([A] + [B])[C] = [A][C] + [B][C]
f) [I][A] = [A][I] = [A]; the unit matrix premultiplied and
postmultiplied to A gives the same result if [A] is a square matrix
g) [Φ][A] = [A][Φ] = [Φ]; note [A] must be a square matrix
h) $[A][A] = [A]^2$; note [A] must be a square matrix
i) if [B] is a scalar matrix such that [B] = s[I],
then the product [A][B] = s[A]
j) $([A][B][C])^T = [C]^T[B]^T[A]^T$ – this is referred to as the reversal rule
for matrix transposition: the transpose of a product of matrices is
equal to the product of the transposes of the individual matrices in
reverse order
k) if [A][B] = [Φ], it does not follow that [A] = [Φ] or [B] = [Φ], or
that both [A] and [B] are null matrices. The cancellation law of
ordinary algebra does not hold for matrix multiplication in general.
For example, if
$$A = \begin{bmatrix} 2 & 1 \\ 4 & 2 \end{bmatrix}$$
then, aside from [B] = [Φ], the following also satisfies [A][B] = [Φ]:
$$B = s\begin{bmatrix} 1 & -1 \\ -2 & 2 \end{bmatrix} \quad \text{where } s \text{ is any scalar}$$


l) if [A][B] = [A][C], we cannot, in general, cancel out matrix
[A] from both sides of the equation and say [B] = [C], except when
[A] is a square nonsingular matrix (see A.7), in which case both sides
of the equation can be premultiplied by the inverse of A.
For a non-square or singular matrix, there can exist a matrix C
that is not equal to B and still satisfies the equality – see the preceding
example, where [A][B] = [A][Φ] = [Φ] with B ≠ Φ and A
singular.
For an arbitrary matrix A, therefore, [A][B] = [A][C] does not
imply that [B] = [C].
m) if [B] is a symmetric matrix, and [A] is any matrix
conformable to the indicated matrix product, then the result of the
triple product $[A]^T[B][A]$ or $[A][B][A]^T$ is also symmetric.

A.7 MATRIX INVERSION


The inverse of a square matrix [A] of order n is also a matrix of order n which, when
premultiplied or postmultiplied to [A], results in an identity matrix. The inverse is
denoted as $[A]^{-1}$.

Note that not all square matrices have a corresponding inverse. A matrix which
does not have an inverse is said to be a singular matrix, while one which has an
inverse is said to be nonsingular. If the inverse of a matrix exists, it is unique,
and it may be either premultiplied or postmultiplied to the original matrix to yield
the identity matrix.

Proof of uniqueness of $A^{-1}$:
let [B] and [C] be inverses of matrix [A] such that [A][B] = [I] and [C][A] = [I].
Using the definition of the matrix inverse and the associative property of matrix
multiplication,
$$[C] = [C][I] = [C][A][B] = [I][B] = [B]$$
thus [C] = [B].

Assuming matrices [A], [B], and [C] are nonsingular, and s is a scalar, the
following properties and observations are presented:
a) $[A^{-1}]^{-1} = [A]$; matrix inversion is reversible
b) $[A^T]^{-1} = [A^{-1}]^T$; matrix inversion and transposition are independent
operations
c) the inverse of a scalar s, a square matrix of order 1, is simply its
reciprocal, $s^{-1} = 1/s$
d) the inverse of a diagonal matrix is a diagonal matrix whose diagonal
elements are the reciprocals of the corresponding elements of the original matrix
e) the inverse of a symmetric matrix is also a symmetric matrix
f) $[I]^{-1} = [I]$
g) $[sA]^{-1} = (1/s)[A]^{-1}$

Institute of Civil Engineering, UP Diliman


Page A-8 Notes on Matrix Structural Analysis

h) $[ABC]^{-1} = C^{-1}B^{-1}A^{-1}$; reversal rule of matrix inversion.
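Properties (b) and (h) are easy to check numerically. The following is a small Python/numpy sketch with arbitrarily chosen nonsingular matrices (the specific values are illustrative assumptions):

```python
# A small numpy check of two inverse properties from the list above.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 2.0], [0.0, 1.0]])
C = np.array([[3.0, 1.0], [2.0, 1.0]])

# (b) inversion and transposition are independent operations
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)

# (h) reversal rule: inv(A B C) = inv(C) inv(B) inv(A)
lhs = np.linalg.inv(A @ B @ C)
rhs = np.linalg.inv(C) @ np.linalg.inv(B) @ np.linalg.inv(A)
assert np.allclose(lhs, rhs)
```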

A.8 THE DETERMINANT OF A SQUARE MATRIX


Before presenting methods of obtaining the inverse of a matrix, a discussion of
the determinant of a matrix is required.

The determinant is a scalar quantity associated with a square matrix, denoted as
Det[A] or |A|. (This is rather a poor definition of a determinant as it does not really explain
what it is. For our purposes, it is used as an indication of whether or not a square matrix has
an inverse, or of the solvability of a system of linear equations. A square matrix whose
determinant is equal to zero is singular.) The determinant of an n×n square matrix can
be calculated as follows:
$$|A| = \sum \pm \left( a_{1i}\, a_{2j}\, a_{3k} \cdots a_{np} \right)$$
In the summation above, the row indices appear in the normal order, while the
column indices appear in some permutation of the normal order. The positive sign
is applied for permutations that require an even number of interchanges of adjacent
column indices to bring it back to the normal order, while the negative sign is
applied for permutations requiring an odd number of interchanges. In all there are
n! permutations involved, equally divided between the (+) and (-) signs. The
determinant of a scalar equals the value of the scalar.
If n = 2,
$$[A] = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\quad \text{then} \quad
|A| = a_{11}a_{22} - a_{12}a_{21}$$
If n = 3,
$$[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
then
$$|A| = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32}
- a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} - a_{13}a_{22}a_{31}$$

From the above, the determinants of matrices of order 2 and 3 may be calculated
using simple procedures, without directly using the above summation.

For n = 2, |A| is simply the product of the main-diagonal elements, $a_{11}a_{22}$,
minus the product of the elements of the other diagonal, $a_{12}a_{21}$.

For n = 3, copying the first two columns of the matrix to the right of the matrix, the
determinant is simply the sum of the products of the elements parallel to the
principal diagonal minus the sum of the products of the elements parallel to the
other diagonal:
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{matrix}$$
again giving
$$|A| = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32}
- a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}$$
Note that this procedure of summing the products along the diagonals does not
apply to orders higher than n = 3.


For matrices having an order greater than 3, the use of the above definition to
manually determine the determinant of a matrix becomes increasingly difficult in
terms of tracking the possible permutations and applying the proper signs. It is best
to determine the determinant using a numerical procedure with the aid of a
computer.

Another useful method of determining determinants is the cofactor
expansion. In this expansion, the determinant is calculated as the sum of the
products of the elements of any row (or column) and their corresponding cofactors, i.e.

expanding about row i:
$$|A| = \sum_{j=1}^{n} a_{ij}\, c_{ij}$$
or, expanding about column j:
$$|A| = \sum_{i=1}^{n} a_{ij}\, c_{ij}$$
where:
$c_{ij}$ = cofactor of $a_{ij}$ = $(-1)^{i+j} m_{ij}$
$m_{ij}$ = minor of element $a_{ij}$
= determinant of the matrix resulting from the deletion of the ith
row and jth column of the matrix.
Obviously, the expansion can be applied to the determination of the elements of the
cofactor matrix.

Applying the expansion to the 3×3 matrix [A] and expanding about row 1 gives:
$$|A| = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}
- a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}
+ a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}$$
$$= a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})$$
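The cofactor expansion lends itself to a recursive implementation. The following is a minimal Python sketch expanding about the first row; it is illustrative only, since the n! growth of the expansion makes it impractical for large matrices:

```python
# A minimal sketch of the cofactor expansion about row 1:
# |A| = sum_j a_1j * (-1)^(1+j) * m_1j, applied recursively.
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]      # the determinant of a scalar is the scalar itself
    total = 0.0
    for j in range(n):
        # minor m_1j: delete the first row and the (j+1)th column
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

A = [[2, 0, -1], [5, 1, 0], [0, 1, 3]]
print(det(A))   # 1.0
```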

A.9 THE ADJOINT METHOD OF MATRIX INVERSION


Except for some special matrices, no method of determining the inverse of a matrix
has yet been presented. For a general nonsingular matrix, the inverse may be
determined using the adjoint method, i.e.
$$A^{-1} = \frac{\hat{A}}{|A|}$$
where: $\hat{A}$ = adjoint of matrix [A]
= transpose of the matrix of cofactors, $C^T$.

For example, if n = 2,
$$[A] = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\quad \text{with} \quad |A| = a_{11}a_{22} - a_{12}a_{21}$$
$$[\hat{A}] = \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix}$$
$$[A]^{-1} = \frac{1}{a_{11}a_{22} - a_{12}a_{21}} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix}$$


If n = 3,
$$[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
with
$$|A| = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32}
- a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} - a_{13}a_{22}a_{31}$$
$$[\hat{A}] = \begin{bmatrix}
a_{22}a_{33} - a_{23}a_{32} & -a_{12}a_{33} + a_{13}a_{32} & a_{12}a_{23} - a_{13}a_{22} \\
-a_{21}a_{33} + a_{23}a_{31} & a_{11}a_{33} - a_{13}a_{31} & -a_{11}a_{23} + a_{13}a_{21} \\
a_{21}a_{32} - a_{22}a_{31} & -a_{11}a_{32} + a_{12}a_{31} & a_{11}a_{22} - a_{12}a_{21}
\end{bmatrix}$$
$$\text{if } [A] = \begin{bmatrix} 2 & 0 & -1 \\ 5 & 1 & 0 \\ 0 & 1 & 3 \end{bmatrix}
\text{ then } |A| = 1.0 \text{ and }
[A]^{-1} = \begin{bmatrix} 3 & -1 & 1 \\ -15 & 6 & -5 \\ 5 & -2 & 2 \end{bmatrix}$$
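The adjoint method can likewise be sketched in a few lines of Python, reusing the det() function from the sketch in section A.8; the matrix and result are those of the numerical example above:

```python
# A sketch of the adjoint method: inverse = adj(A) / |A|, where adj(A) is
# the transpose of the matrix of cofactors. Reuses det() from section A.8.
def inverse_adjoint(A):
    n = len(A)
    d = det(A)
    if d == 0:
        raise ValueError("matrix is singular")
    cof = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # minor m_ij: delete row i and column j
            minor = [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]
            cof[i][j] = (-1) ** (i + j) * det(minor)
    # adjoint = transpose of the cofactor matrix; divide each element by |A|
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]

A = [[2, 0, -1], [5, 1, 0], [0, 1, 3]]
print(inverse_adjoint(A))   # [[3.0, -1.0, 1.0], [-15.0, 6.0, -5.0], [5.0, -2.0, 2.0]]
```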

In practice, except perhaps for a (2×2) matrix, the inverse of a matrix is rarely
determined by the adjoint method. As will be discussed in the next section, the
inverse of a matrix is used to denote the solution to a system of linear equations,
but the actual solution may be determined by other means.

A.10 SOLUTION TO SYSTEMS OF LINEAR EQUATIONS


As stated above, matrices are to be used to write the equations of structural analysis.
These equations generally represent a set of linear equations of the form:
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\,\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}$$
or in matrix form,
$$[A][X] = [B]$$
where: [A] = matrix of coefficients
[X] = matrix of unknowns
[B] = matrix of constants

In matrix formulation, premultiplying both sides of the equation by the inverse of A,
the solution is presented as:
$$[X] = [A]^{-1}[B]$$
Recall that in the solution of a set of simultaneous equations, a unique solution exists
only if there are as many equations as there are unknowns. As the number of rows of
the above matrices represents the number of equations, and the number of columns
of [A] must equal the number of unknowns, the matrix of coefficients must
necessarily be square and nonsingular.

Also note that although only a vector is shown for the matrix of constants [B], this
may be a rectangular matrix having several columns. In this case, the solution
would also be a rectangular matrix, with each column representing the solution for
the corresponding column of [B]. The following will not cover the special type of
problem found in stability and dynamics where the vector of constants B is a null
matrix – the system of equations is then said to be homogeneous.


For practical problems, the size of the coefficient matrix is quite large, such that the
adjoint method becomes impractical due to the large number of determinants to be
evaluated. Numerical methods are therefore almost exclusively used in the solution
of these simultaneous equations. Note also that the primary concern is to determine
the solution vector, not the inverse of A. As such, numerical procedures exist
which solve the system of equations without explicitly determining the inverse of
the coefficient matrix.

Numerical methods for solving simultaneous linear equations may be divided into
direct and indirect methods. Direct methods are those which, in the absence of
round-off or other errors, yield the exact solution in a finite number of operations.
Indirect or iterative methods are those which start with an initial approximation of
the solution vector and, by applying a suitable algorithm or recursive procedure,
lead to successively better approximations. The following discussion is limited
to direct methods.

The fundamental method for direct solutions is Gauss elimination. This involves
two major steps: first, forward elimination; second, back substitution.
Forward elimination transforms the system of equations into an
equivalent system (one having the same solution vector) whose
coefficient matrix is upper triangular. This is accomplished by applying a
series of the following elementary operations:
a) multiplication of one row (equation) by a nonzero constant,
b) addition of a multiple of one equation to another,
c) interchange of two rows (equations).
In matrix form, the elementary operations are applied to an augmented matrix which
contains the elements of [A] and [B].

Once the forward elimination has been completed, the solution vector can be
determined by back substitution, starting from the nth equation.

Example Problem: Solution by Gaussian Elimination

Consider the following linear system of equations, [A][X] = [B], with
$$[A] = \begin{bmatrix} 2 & 2 & 1 & 5 \\ 6 & 8 & 2 & 1 \\ 5 & 5 & 1 & 1 \\ 6 & 1 & 0 & 7 \end{bmatrix} \qquad
[X] = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} \qquad
[B] = \begin{bmatrix} 8 \\ -2 \\ -5 \\ 4 \end{bmatrix}$$
Solution: the Gaussian elimination is shown in the matrix on the left; the expanded form of
the system of equations is shown on the right.
a. Define an augmented matrix, C, containing the elements of A and B:
$$\left[\begin{array}{cccc|c} 2 & 2 & 1 & 5 & 8 \\ 6 & 8 & 2 & 1 & -2 \\ 5 & 5 & 1 & 1 & -5 \\ 6 & 1 & 0 & 7 & 4 \end{array}\right] \qquad
\begin{aligned} 2x_1 + 2x_2 + x_3 + 5x_4 &= 8 \\ 6x_1 + 8x_2 + 2x_3 + x_4 &= -2 \\ 5x_1 + 5x_2 + x_3 + x_4 &= -5 \\ 6x_1 + x_2 + 0x_3 + 7x_4 &= 4 \end{aligned}$$


b. Interchange the first and second equations/rows. Interchanges are performed to reduce
round-off errors, particularly for large systems of equations; this step could have been
omitted for this example. (Also, as in most numerical procedures, remember to use as many
significant figures as practicable to reduce round-off errors.) This strategy,
referred to as partial pivoting, involves selecting the row with the largest absolute
coefficient in the column under consideration (say the jth), from the jth row to the nth, and
interchanging rows so as to place the largest coefficient in the jth row.
$$\left[\begin{array}{cccc|c} 6 & 8 & 2 & 1 & -2 \\ 2 & 2 & 1 & 5 & 8 \\ 5 & 5 & 1 & 1 & -5 \\ 6 & 1 & 0 & 7 & 4 \end{array}\right] \qquad
\begin{aligned} 6x_1 + 8x_2 + 2x_3 + x_4 &= -2 \\ 2x_1 + 2x_2 + x_3 + 5x_4 &= 8 \\ 5x_1 + 5x_2 + x_3 + x_4 &= -5 \\ 6x_1 + x_2 + 0x_3 + 7x_4 &= 4 \end{aligned}$$

Interchanges of rows may also be required when forward elimination cannot
proceed because a diagonal term is zero. If the elimination cannot proceed despite
row interchanges, there is no solution to the system of equations, i.e. the coefficient
matrix is singular.
c. Divide the 1st equation/row by 6 to make the coefficient of x1 in the 1st equation equal to 1.
Let us define this as normalizing the equation/row.
$$\left[\begin{array}{cccc|c} 1 & 1.333 & 0.333 & 0.167 & -0.333 \\ 2 & 2 & 1 & 5 & 8 \\ 5 & 5 & 1 & 1 & -5 \\ 6 & 1 & 0 & 7 & 4 \end{array}\right] \qquad
\begin{aligned} x_1 + 1.333x_2 + 0.333x_3 + 0.167x_4 &= -0.333 \\ 2x_1 + 2x_2 + x_3 + 5x_4 &= 8 \\ 5x_1 + 5x_2 + x_3 + x_4 &= -5 \\ 6x_1 + x_2 + 7x_4 &= 4 \end{aligned}$$

d. Eliminate x1 from the second to fourth equations/rows. For example, for the second
equation, replace it with the equation obtained by subtracting from the
original 2nd equation the 1st equation multiplied by the coefficient of x1 of the 2nd equation
(which is 2 for this example). For the augmented matrix C, the 2nd-row elements become
$c_{2j} = c_{2j} - c_{21}c_{1j}$. This is repeated for the 3rd and 4th equations.
$$\left[\begin{array}{cccc|c} 1 & 1.333 & 0.333 & 0.167 & -0.333 \\ 0 & -0.667 & 0.333 & 4.667 & 8.667 \\ 0 & -1.667 & -0.667 & 0.167 & -3.333 \\ 0 & -7 & -2 & 6 & 6 \end{array}\right] \qquad
\begin{aligned} x_1 + 1.333x_2 + 0.333x_3 + 0.167x_4 &= -0.333 \\ -0.667x_2 + 0.333x_3 + 4.667x_4 &= 8.667 \\ -1.667x_2 - 0.667x_3 + 0.167x_4 &= -3.333 \\ -7x_2 - 2x_3 + 6x_4 &= 6 \end{aligned}$$

e. Interchange the 2nd and 4th equations and normalize the 2nd equation.
$$\left[\begin{array}{cccc|c} 1 & 1.333 & 0.333 & 0.167 & -0.333 \\ 0 & 1 & 0.286 & -0.857 & -0.857 \\ 0 & -1.667 & -0.667 & 0.167 & -3.333 \\ 0 & -0.667 & 0.333 & 4.667 & 8.667 \end{array}\right] \qquad
\begin{aligned} x_1 + 1.333x_2 + 0.333x_3 + 0.167x_4 &= -0.333 \\ x_2 + 0.286x_3 - 0.857x_4 &= -0.857 \\ -1.667x_2 - 0.667x_3 + 0.167x_4 &= -3.333 \\ -0.667x_2 + 0.333x_3 + 4.667x_4 &= 8.667 \end{aligned}$$

f. Eliminate x2 from the 3rd and 4th equations.
$$\left[\begin{array}{cccc|c} 1 & 1.333 & 0.333 & 0.167 & -0.333 \\ 0 & 1 & 0.286 & -0.857 & -0.857 \\ 0 & 0 & -0.190 & -1.262 & -4.762 \\ 0 & 0 & 0.524 & 4.095 & 8.095 \end{array}\right] \qquad
\begin{aligned} x_1 + 1.333x_2 + 0.333x_3 + 0.167x_4 &= -0.333 \\ x_2 + 0.286x_3 - 0.857x_4 &= -0.857 \\ -0.190x_3 - 1.262x_4 &= -4.762 \\ 0.524x_3 + 4.095x_4 &= 8.095 \end{aligned}$$


g. Normalize the 3rd equation. Note pivoting could have been done between the 3rd and 4th equations.
$$\left[\begin{array}{cccc|c} 1 & 1.333 & 0.333 & 0.167 & -0.333 \\ 0 & 1 & 0.286 & -0.857 & -0.857 \\ 0 & 0 & 1 & 6.625 & 25 \\ 0 & 0 & 0.524 & 4.095 & 8.095 \end{array}\right] \qquad
\begin{aligned} x_1 + 1.333x_2 + 0.333x_3 + 0.167x_4 &= -0.333 \\ x_2 + 0.286x_3 - 0.857x_4 &= -0.857 \\ x_3 + 6.625x_4 &= 25 \\ 0.524x_3 + 4.095x_4 &= 8.095 \end{aligned}$$

h. Eliminate x3 from the 4th equation.
$$\left[\begin{array}{cccc|c} 1 & 1.333 & 0.333 & 0.167 & -0.333 \\ 0 & 1 & 0.286 & -0.857 & -0.857 \\ 0 & 0 & 1 & 6.625 & 25 \\ 0 & 0 & 0 & 0.625 & -5 \end{array}\right] \qquad
\begin{aligned} x_1 + 1.333x_2 + 0.333x_3 + 0.167x_4 &= -0.333 \\ x_2 + 0.286x_3 - 0.857x_4 &= -0.857 \\ x_3 + 6.625x_4 &= 25 \\ 0.625x_4 &= -5 \end{aligned}$$

i. Normalize the 4th equation.
$$\left[\begin{array}{cccc|c} 1 & 1.333 & 0.333 & 0.167 & -0.333 \\ 0 & 1 & 0.286 & -0.857 & -0.857 \\ 0 & 0 & 1 & 6.625 & 25 \\ 0 & 0 & 0 & 1 & -8 \end{array}\right] \qquad
\begin{aligned} x_1 + 1.333x_2 + 0.333x_3 + 0.167x_4 &= -0.333 \\ x_2 + 0.286x_3 - 0.857x_4 &= -0.857 \\ x_3 + 6.625x_4 &= 25 \\ x_4 &= -8 \end{aligned}$$

j. The above completes the forward elimination. The backward substitution proceeds from
the last equation to the first.
$$\begin{aligned}
x_4 &= -8 \\
x_3 &= 25 - 6.625x_4 = 78 \\
x_2 &= -0.857 + 0.857x_4 - 0.286x_3 = -30 \\
x_1 &= -0.333 - 0.167x_4 - 0.333x_3 - 1.333x_2 = 15
\end{aligned}$$
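The whole procedure – partial pivoting, forward elimination, and back substitution – can be condensed into a short routine. The following is a minimal Python/numpy sketch that reproduces the solution of the worked example:

```python
# A minimal sketch of Gauss elimination with partial pivoting followed by
# back substitution.
import numpy as np

def gauss_solve(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # forward elimination
    for j in range(n):
        # partial pivoting: bring the largest |coefficient| in column j up
        p = j + np.argmax(np.abs(A[j:, j]))
        if A[p, j] == 0.0:
            raise ValueError("coefficient matrix is singular")
        A[[j, p]], b[[j, p]] = A[[p, j]], b[[p, j]]
        for i in range(j + 1, n):
            f = A[i, j] / A[j, j]
            A[i, j:] -= f * A[j, j:]
            b[i] -= f * b[j]
    # back substitution, starting from the nth equation
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[2, 2, 1, 5], [6, 8, 2, 1], [5, 5, 1, 1], [6, 1, 0, 7]])
b = np.array([8, -2, -5, 4])
print(gauss_solve(A, b))   # [ 15. -30.  78.  -8.]
```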

A variation of the Gaussian elimination procedure transforms the
coefficient matrix into an identity matrix using the elementary operations. This is
accomplished by eliminating the ith unknown from all the equations/rows above
and/or below the ith row. In this procedure, known as Gauss-Jordan elimination,
the resulting transformed matrix of constants is the solution.

The Gauss-Jordan procedure can also be used to obtain the inverse of the coefficient
matrix. This is accomplished by solving the set of equations with [B] = [I], i.e.
[A][X] = [I], so the solution is $[X] = [A]^{-1}$.

Example Gauss-Jordan Elimination

Using the same set of equations and performing the same transformations as in the above
example, the intermediate results are presented below:
a. Augmented matrix after interchange of the 1st and 2nd rows:
$$\left[\begin{array}{cccc|c} 6 & 8 & 2 & 1 & -2 \\ 2 & 2 & 1 & 5 & 8 \\ 5 & 5 & 1 & 1 & -5 \\ 6 & 1 & 0 & 7 & 4 \end{array}\right]$$


b. After eliminating x1 from the 2nd to 4th equations:
$$\left[\begin{array}{cccc|c} 1 & 1.333 & 0.333 & 0.167 & -0.333 \\ 0 & -0.667 & 0.333 & 4.667 & 8.667 \\ 0 & -1.667 & -0.667 & 0.167 & -3.333 \\ 0 & -7.000 & -2.000 & 6.000 & 6.000 \end{array}\right]$$
c. After interchanging the 2nd and 4th rows and eliminating x2 from the 1st, 3rd, and 4th equations:
$$\left[\begin{array}{cccc|c} 1 & 0 & -0.048 & 1.310 & 0.810 \\ 0 & 1 & 0.286 & -0.857 & -0.857 \\ 0 & 0 & -0.190 & -1.262 & -4.762 \\ 0 & 0 & 0.524 & 4.095 & 8.095 \end{array}\right]$$
d. After eliminating x3 from the 1st, 2nd, and 4th equations:
$$\left[\begin{array}{cccc|c} 1 & 0 & 0 & 1.625 & 2 \\ 0 & 1 & 0 & -2.750 & -8 \\ 0 & 0 & 1 & 6.625 & 25 \\ 0 & 0 & 0 & 0.625 & -5 \end{array}\right]$$
e. After eliminating x4 from the 1st to 3rd equations:
$$\left[\begin{array}{cccc|c} 1 & 0 & 0 & 0 & 15 \\ 0 & 1 & 0 & 0 & -30 \\ 0 & 0 & 1 & 0 & 78 \\ 0 & 0 & 0 & 1 & -8 \end{array}\right]$$

Example Gauss-Jordan Elimination – Inversion

Using matrix A from the above example, the elementary operations are performed on the
augmented matrix [A | I]:
$$\left[\begin{array}{cccc|cccc} 2 & 2 & 1 & 5 & 1 & 0 & 0 & 0 \\ 6 & 8 & 2 & 1 & 0 & 1 & 0 & 0 \\ 5 & 5 & 1 & 1 & 0 & 0 & 1 & 0 \\ 6 & 1 & 0 & 7 & 0 & 0 & 0 & 1 \end{array}\right]$$
After completion of the procedure, the inverse is obtained in the right half of the augmented
matrix:
$$\left[\begin{array}{cccc|cccc} 1 & 0 & 0 & 0 & -2.6 & 5.0 & -7.4 & 2.2 \\ 0 & 1 & 0 & 0 & 4.4 & -9.0 & 13.6 & -3.8 \\ 0 & 0 & 1 & 0 & -10.6 & 23.0 & -34.4 & 9.2 \\ 0 & 0 & 0 & 1 & 1.6 & -3.0 & 4.4 & -1.2 \end{array}\right]$$
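A corresponding sketch of Gauss-Jordan inversion in Python/numpy, reducing the augmented matrix [A | I] until the left half becomes the identity:

```python
# A minimal sketch of Gauss-Jordan inversion on the augmented matrix [A | I].
import numpy as np

def gauss_jordan_inverse(A):
    n = A.shape[0]
    C = np.hstack([A.astype(float), np.eye(n)])   # augmented matrix [A | I]
    for j in range(n):
        p = j + np.argmax(np.abs(C[j:, j]))       # partial pivoting
        C[[j, p]] = C[[p, j]]
        C[j] /= C[j, j]                           # normalize the pivot row
        for i in range(n):
            if i != j:                            # eliminate above and below
                C[i] -= C[i, j] * C[j]
    return C[:, n:]

A = np.array([[2, 2, 1, 5], [6, 8, 2, 1], [5, 5, 1, 1], [6, 1, 0, 7]])
print(gauss_jordan_inverse(A))
# [[ -2.6   5.   -7.4   2.2]
#  [  4.4  -9.   13.6  -3.8]
#  [-10.6  23.  -34.4   9.2]
#  [  1.6  -3.    4.4  -1.2]]
```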

A.11 DIFFERENTIATION AND INTEGRATION OF MATRICES


If [A] is an (m×n) matrix whose elements are functions of the variable x, the
derivative of [A] with respect to x, denoted by d[A]/dx, is an (m×n) matrix whose
elements are the derivatives with respect to x of the corresponding elements of [A].
Thus, if
$$[B] = \frac{d[A]}{dx} \quad \text{then} \quad b_{ij} = \frac{da_{ij}}{dx}$$


Note that the rules of differentiation of ordinary algebraic expressions also apply to
matrix expressions, provided the order of matrix products is preserved. For example,
the derivative of a product of matrices:
$$\frac{d([A][B])}{dx} = \frac{d[A]}{dx}[B] + [A]\frac{d[B]}{dx}$$
Similarly, the integral of a matrix is a matrix whose elements are the integrals of the
corresponding elements:
$$[C] = \int_a^b [A]\,dx \quad \text{then} \quad c_{ij} = \int_a^b a_{ij}\,dx$$
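Because both operations act element by element, a computer algebra system handles them directly. The following is a brief sketch using sympy (assumed available); the particular matrix entries are illustrative:

```python
# A brief sympy sketch: differentiation and integration of a matrix are
# applied element by element.
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[x**2, sp.sin(x)],
               [3*x,  sp.exp(x)]])

print(A.diff(x))               # Matrix([[2*x, cos(x)], [3, exp(x)]])
print(A.integrate((x, 0, 1)))  # definite integral of each element over [0, 1]
```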

A.12 PARTITIONING AND SUBMATRICES


As stated in the definition of a matrix, an element of a matrix may itself be a matrix.
The process of dividing a matrix into submatrices is known as partitioning.
Generally there are no restrictions on the number and sizes of the submatrices.

In these notes, frequent use of partitioned matrices will be made. For almost all of
these matrices, the partition results because of a natural grouping of the elements.
For example, the forces acting on a structure can be grouped together into support
reactions and other forces.

The matrix operations defined in the previous sections, including the observations
and properties stated, generally apply to partitioned matrices. Care should be taken,
however, that the matrices are partitioned such that the conformability and other
conditions required for the specific operation are satisfied at both the matrix and the
submatrix levels.

For example, let
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \qquad
B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} \qquad
C = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}$$
then the sum of [A] and [B] is
$$A + B = \begin{bmatrix} A_{11} + B_{11} & A_{12} + B_{12} \\ A_{21} + B_{21} & A_{22} + B_{22} \end{bmatrix}$$
Remember that A and B must be of the same size, as must each pair of
submatrices to be added. The partitions of A and B must therefore be the
same, although the submatrices of A need not all be the same size.

The products of [A] and [B], and of [A] and [C], are:
$$[A][B] = \begin{bmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{bmatrix}$$
$$[A][C] = \begin{bmatrix} A_{11}C_1 + A_{12}C_2 \\ A_{21}C_1 + A_{22}C_2 \end{bmatrix}$$
Remember that A, B, and C must be conformable to matrix multiplication in the
order shown, as must each pair of submatrices to be multiplied. The
order of matrix multiplication must also be preserved in the submatrix products, i.e.
the submatrices of [A] must premultiply the submatrices of [B] in the above example.


For the transpose of a partitioned matrix, note that aside from the exchange of
rows and columns, the submatrices themselves must be transposed.
$$A^T = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \end{bmatrix}^T
= \begin{bmatrix} A_{11}^T & A_{21}^T \\ A_{12}^T & A_{22}^T \\ A_{13}^T & A_{23}^T \end{bmatrix}$$
A partitioned matrix whose off-diagonal submatrices are null matrices can also be
considered a diagonal matrix. For such matrices the following may be easily
verified:
$$\text{if } A = \begin{bmatrix} A_{11} & 0 & 0 \\ 0 & A_{22} & 0 \\ 0 & 0 & A_{33} \end{bmatrix}$$
then
$$A^{-1} = \begin{bmatrix} A_{11}^{-1} & 0 & 0 \\ 0 & A_{22}^{-1} & 0 \\ 0 & 0 & A_{33}^{-1} \end{bmatrix}
\quad \text{and} \quad
A^T = \begin{bmatrix} A_{11}^T & 0 & 0 \\ 0 & A_{22}^T & 0 \\ 0 & 0 & A_{33}^T \end{bmatrix}$$
Matrix partitioning may also be resorted to for the purpose of reducing the effort
involved in performing matrix operations. This is particularly true if special
matrices, such as identity and null matrices, can be identified within a large matrix.
For example:
$$\text{if } A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}
\quad \text{and} \quad
B = \begin{bmatrix} I & \Phi \\ B_{21} & I \end{bmatrix}$$
then
$$[A][B] = \begin{bmatrix} A_{11} + A_{12}B_{21} & A_{12} \\ A_{21} + A_{22}B_{21} & A_{22} \end{bmatrix}$$

The following two sections present special applications of partitioning to the
determination of the inverse of a matrix and to the solution of a set of linear equations.
Obviously, this is intended for use when performing the mathematical
operations manually, or when the computational tool available has limited
capabilities, particularly with respect to the size of the matrix that can be inverted.

Again, additional advantage can be achieved when special matrices – null, identity,
symmetric, or diagonal – can be identified in the partitioned form.

A.13 INVERSION BY MATRIX PARTITIONING


The effort required to find the inverse of a matrix increases quite rapidly as the
size of the matrix increases (in numerical analysis, the number of elementary
operations required by inversion or solution procedures for an (n×n) matrix is
of the order of $n^3$).

The following describes a way of reducing the size of the matrix to be inverted
through matrix partitioning. As will be seen, however, there is a corresponding
increase in the number of other matrix operations to be performed.


Let [A] be a matrix whose inverse is required, partitioned such that the diagonal
submatrices are square; and let [B] be its inverse, which must necessarily be
partitioned the same way as [A]. Taking the product:
$$[A][B] = \begin{bmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{bmatrix}
= \begin{bmatrix} I & \Phi \\ \Phi & I \end{bmatrix}$$
therefore
$$A_{11}B_{11} + A_{12}B_{21} = I \qquad (A\text{-}1)$$
$$A_{21}B_{11} + A_{22}B_{21} = \Phi \qquad (A\text{-}2)$$
$$A_{11}B_{12} + A_{12}B_{22} = \Phi \qquad (A\text{-}3)$$
$$A_{21}B_{12} + A_{22}B_{22} = I \qquad (A\text{-}4)$$
From (A-2):
$$B_{21} = -A_{22}^{-1}A_{21}B_{11} \qquad (A\text{-}5)$$
Substituting into (A-1):
$$A_{11}B_{11} - A_{12}A_{22}^{-1}A_{21}B_{11} = I$$
$$[A_{11} - A_{12}A_{22}^{-1}A_{21}]B_{11} = I$$
$$B_{11} = [A_{11} - A_{12}A_{22}^{-1}A_{21}]^{-1} \qquad (A\text{-}6)$$
from which B21 can be determined from (A-5).
Similarly, from (A-3):
$$B_{12} = -A_{11}^{-1}A_{12}B_{22} \qquad (A\text{-}7)$$
Substituting into (A-4):
$$B_{22} = [A_{22} - A_{21}A_{11}^{-1}A_{12}]^{-1} \qquad (A\text{-}8)$$
from which B12 can be determined from (A-7).

Obviously, the reason the diagonal submatrices were required to be square is that
they must be inverted. Thus, through partitioning, instead of inverting one large
matrix, four inverses of smaller matrices must be computed, plus several other
matrix operations.

The number of inversions may still be reduced from four to two by obtaining
alternative derivations for [B12] and [B22] in place of equations (A-7) and
(A-8). The alternative derivation follows by recalling that the inverse of a
matrix is unique, i.e. [A][B] = [B][A] = [I]; the following also applies from the product
[B][A]:
$$B_{11}A_{12} + B_{12}A_{22} = \Phi$$
$$B_{21}A_{12} + B_{22}A_{22} = I$$
from which
$$B_{12} = -B_{11}A_{12}A_{22}^{-1} \qquad (A\text{-}9)$$
$$B_{22} = A_{22}^{-1} - B_{21}A_{12}A_{22}^{-1} \qquad (A\text{-}10)$$

Assuming that B11 and B21 have been determined using equations (A-6) and (A-5),
we can use equations (A-9) and (A-10) instead of equations (A-7) and (A-8). In
summary, if A is the matrix whose inverse B is to be determined,
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \qquad
B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}$$


then the elements of B may be determined as follows:
$$B_{11} = [A_{11} - A_{12}A_{22}^{-1}A_{21}]^{-1}$$
$$B_{21} = -A_{22}^{-1}A_{21}B_{11} \qquad \text{with } B_{11} \text{ from above}$$
$$B_{12} = -B_{11}A_{12}A_{22}^{-1} \qquad \text{with } B_{11} \text{ from above}$$
$$B_{22} = A_{22}^{-1} - B_{21}A_{12}A_{22}^{-1} \qquad \text{with } B_{21} \text{ from above}$$

Recognizing that there are common operations in the above expressions, we can
define matrices C and D and express the above more compactly as:
$$C = -A_{22}^{-1}A_{21} \qquad D = -A_{12}A_{22}^{-1}$$
then
$$B_{11} = [A_{11} + A_{12}C]^{-1}$$
$$B_{21} = C\,B_{11}$$
$$B_{12} = B_{11}\,D$$
$$B_{22} = A_{22}^{-1} + B_{21}D$$
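The compact formulas map directly into code. The following is a minimal numpy sketch, where k denotes the (assumed square) order of the leading diagonal block A11; the result is verified against numpy's built-in inverse:

```python
# A sketch of the partitioned-inverse formulas using the C and D intermediates.
import numpy as np

def inverse_by_partitioning(A, k):
    # k = order of the square leading diagonal block A11
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    A22inv = np.linalg.inv(A22)
    C = -A22inv @ A21
    D = -A12 @ A22inv
    B11 = np.linalg.inv(A11 + A12 @ C)
    B21 = C @ B11
    B12 = B11 @ D
    B22 = A22inv + B21 @ D
    return np.block([[B11, B12], [B21, B22]])

A = np.array([[2., 2, 1, 5], [6, 8, 2, 1], [5, 5, 1, 1], [6, 1, 0, 7]])
print(inverse_by_partitioning(A, 2))   # same inverse as in section A.10
assert np.allclose(inverse_by_partitioning(A, 2), np.linalg.inv(A))
```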

Example Inversion using Partitions

Using the same example as before, partitioned into 2×2 submatrices:
$$[A] = \begin{bmatrix} 2 & 2 & 1 & 5 \\ 6 & 8 & 2 & 1 \\ 5 & 5 & 1 & 1 \\ 6 & 1 & 0 & 7 \end{bmatrix}$$
$$A_{22}^{-1} = \begin{bmatrix} 1 & 1 \\ 0 & 7 \end{bmatrix}^{-1}
= \begin{bmatrix} 1 & -0.143 \\ 0 & 0.143 \end{bmatrix}$$
$$C = -A_{22}^{-1}\begin{bmatrix} 5 & 5 \\ 6 & 1 \end{bmatrix}
= \begin{bmatrix} -4.143 & -4.857 \\ -0.857 & -0.143 \end{bmatrix}$$
$$D = -\begin{bmatrix} 1 & 5 \\ 2 & 1 \end{bmatrix} A_{22}^{-1}
= \begin{bmatrix} -1 & -0.571 \\ -2 & 0.143 \end{bmatrix}$$
$$B_{11} = [A_{11} + A_{12}C]^{-1}
= \begin{bmatrix} -6.429 & -3.571 \\ -3.143 & -1.857 \end{bmatrix}^{-1}
= \begin{bmatrix} -2.6 & 5 \\ 4.4 & -9 \end{bmatrix}$$
$$B_{21} = C\,B_{11} = \begin{bmatrix} -10.6 & 23 \\ 1.6 & -3 \end{bmatrix}$$
$$B_{12} = B_{11}\,D = \begin{bmatrix} -7.4 & 2.2 \\ 13.6 & -3.8 \end{bmatrix}$$
$$B_{22} = A_{22}^{-1} + B_{21}D = \begin{bmatrix} -34.4 & 9.2 \\ 4.4 & -1.2 \end{bmatrix}$$
thus
$$[B] = \begin{bmatrix} -2.6 & 5 & -7.4 & 2.2 \\ 4.4 & -9 & 13.6 & -3.8 \\ -10.6 & 23 & -34.4 & 9.2 \\ 1.6 & -3 & 4.4 & -1.2 \end{bmatrix}$$


A.14 SOLVING SYSTEMS OF EQUATIONS BY PARTITIONING – STATIC CONDENSATION

Static condensation may be used as a mathematical tool for solving systems of
equations, with the purpose of reducing the number of equations that have to be
solved at any one time. Consider the system of linear equations in the form
[A][X] = [B]. Instead of determining the solution vector by inverting the coefficient
matrix, as indicated by $[X] = [A]^{-1}[B]$, partition the matrices as follows:
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \qquad
X = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \qquad
B = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}$$
Performing the matrix product on the partitioned matrices gives the following
two equations:
$$A_{11}X_1 + A_{12}X_2 = B_1 \qquad (A\text{-}11)$$
$$A_{21}X_1 + A_{22}X_2 = B_2 \qquad (A\text{-}12)$$
Solving for X2 from equation (A-12) gives
$$X_2 = A_{22}^{-1}[B_2 - A_{21}X_1] \qquad (A\text{-}13)$$
Substituting equation (A-13) into (A-11) gives
$$A_{11}X_1 + A_{12}A_{22}^{-1}[B_2 - A_{21}X_1] = B_1$$
and rearranging,
$$[A_{11} - A_{12}A_{22}^{-1}A_{21}]X_1 = [B_1 - A_{12}A_{22}^{-1}B_2] \qquad (A\text{-}14)$$
or
$$[\bar{A}_{11}]X_1 = \bar{B}_1 \qquad (A\text{-}14a)$$
Equation (A-14a) represents a system of equations in terms of the unknown
vector X1. The modified coefficient and constant matrices (considering the effect
of the unknown vector X2) are:
$$\bar{A}_{11} = [A_{11} - A_{12}A_{22}^{-1}A_{21}] \qquad (A\text{-}15)$$
$$\bar{B}_1 = [B_1 - A_{12}A_{22}^{-1}B_2] \qquad (A\text{-}16)$$
The unknown vector X1 can therefore be determined from equation (A-14a) as:
$$X_1 = [\bar{A}_{11}]^{-1}\bar{B}_1$$
and the unknown vector X2 can be determined from equation (A-13) as:
$$X_2 = A_{22}^{-1}[B_2 - A_{21}X_1]$$
To clarify what static condensation does, consider the following system of 3
equations in 3 unknowns, solved without using matrices:
$$\begin{aligned} a_1x + b_1y + c_1z &= d_1 \\ a_2x + b_2y + c_2z &= d_2 \\ a_3x + b_3y + c_3z &= d_3 \end{aligned}$$
Using one of the equations, say the third, we solve for z in terms of x and y:
$$z = (1/c_3)(d_3 - a_3x - b_3y)$$
The variable z corresponds to the unknown vector X2, and the above is equivalent to
equation (A-13). Substituting this into the first two equations and simplifying gives
$$\begin{aligned} (a_1 - c_1a_3/c_3)\,x + (b_1 - c_1b_3/c_3)\,y &= d_1 - (c_1/c_3)d_3 \\ (a_2 - c_2a_3/c_3)\,x + (b_2 - c_2b_3/c_3)\,y &= d_2 - (c_2/c_3)d_3 \end{aligned}$$


These two represent the reduced system of equations of equation (A-14a), which
includes the effect of the variable z:
$$\begin{aligned} a^*_1x + b^*_1y &= d^*_1 \\ a^*_2x + b^*_2y &= d^*_2 \end{aligned}$$
The unknowns x and y, comprising the unknown vector X1, can be solved for in this
reduced system of equations.

Example Solution to a System of Simultaneous Equations – Static Condensation

Consider the same problem as before, partitioned as shown:
$$[A] = \begin{bmatrix} 2 & 2 & 1 & 5 \\ 6 & 8 & 2 & 1 \\ 5 & 5 & 1 & 1 \\ 6 & 1 & 0 & 7 \end{bmatrix} \qquad
[X] = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} \qquad
[B] = \begin{bmatrix} 8 \\ -2 \\ -5 \\ 4 \end{bmatrix}$$
Solution:
$$A_{22}^{-1} = \begin{bmatrix} 1 & 1 \\ 0 & 7 \end{bmatrix}^{-1}
= \begin{bmatrix} 1 & -0.143 \\ 0 & 0.143 \end{bmatrix}$$
$$-A_{12}A_{22}^{-1}A_{21} = \begin{bmatrix} -8.429 & -5.571 \\ -9.143 & -9.857 \end{bmatrix}$$
$$\bar{A}_{11} = [A_{11} - A_{12}A_{22}^{-1}A_{21}] = \begin{bmatrix} -6.429 & -3.571 \\ -3.143 & -1.857 \end{bmatrix}$$
$$\bar{B}_1 = [B_1 - A_{12}A_{22}^{-1}B_2] = \begin{bmatrix} 10.714 \\ 8.571 \end{bmatrix}$$
$$X_1 = [\bar{A}_{11}]^{-1}\bar{B}_1 = \begin{bmatrix} 15 \\ -30 \end{bmatrix}$$
$$X_2 = A_{22}^{-1}[B_2 - A_{21}X_1] = A_{22}^{-1}\begin{bmatrix} 70 \\ -56 \end{bmatrix} = \begin{bmatrix} 78 \\ -8 \end{bmatrix}$$
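The condensation steps map directly into a short routine. The following is a minimal numpy sketch, where k is the number of unknowns retained in X1; it reproduces the solution of the example above:

```python
# A sketch of static condensation: condense out X2, solve the reduced system
# for X1, then recover X2 from equation (A-13).
import numpy as np

def solve_by_condensation(A, B, k):
    # k = number of unknowns retained in X1
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    B1, B2 = B[:k], B[k:]
    A22inv = np.linalg.inv(A22)
    Abar = A11 - A12 @ A22inv @ A21      # modified coefficients, eq. (A-15)
    Bbar = B1 - A12 @ A22inv @ B2        # modified constants, eq. (A-16)
    X1 = np.linalg.solve(Abar, Bbar)     # reduced system, eq. (A-14a)
    X2 = A22inv @ (B2 - A21 @ X1)        # recover X2, eq. (A-13)
    return np.concatenate([X1, X2])

A = np.array([[2., 2, 1, 5], [6, 8, 2, 1], [5, 5, 1, 1], [6, 1, 0, 7]])
B = np.array([8., -2, -5, 4])
print(solve_by_condensation(A, B, 2))    # [ 15. -30.  78.  -8.]
```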

