
Matrix Structural Analysis

Module Overview:

This module discusses the structural analysis procedure using matrix operations.
Analysis of actual structures becomes unwieldy when there are many members in the
structural system, since the number of equations that need to be solved
simultaneously grows accordingly. Moreover, indeterminate systems are complicated
to analyze because there are not enough equilibrium equations to solve for the
member forces.

Matrices are used to present and process systems of simultaneous equations. A
computer-based solution is resorted to in order to handle problems where the
matrices are very large.

Course Overview:

Actual structural analyses nowadays are in general done with computer software.
These programs are very advanced, fast, and powerful. Engineers often use them
without understanding what the software is doing: they see only the input data
that they supply to run the software and the output data in the form of
displacements, forces, and so on.

The objective of this course is to let the student understand what structural
analysis software is doing, so that they can prepare the input data properly in
conformance with the precept behind the acronym G.I.G.O. (Garbage In, Garbage
Out), which emphasizes that the output is only as good as the accuracy of the
input data.

The power of matrix structural analysis is in the way it solves complicated and
highly indeterminate structural systems. As a matter of fact, the procedure does
not distinguish between determinate and indeterminate systems and therefore
follows the same steps for both.

Course Outcomes:

In this course, the students are expected to learn the principles of the
“Stiffness Method”, the procedure used by computers in analyzing structures. They
will understand that the method arrives at an exact solution even for highly
indeterminate or otherwise complicated systems.

For this undergraduate course, they will learn in particular to:

 Understand the principle of the Stiffness Method
 Learn how the computer executes the Stiffness Method using only numerical data
 Formulate the System Load Vector
 Formulate the System Stiffness Matrix
 Solve for the System Displacements
 Solve for the Member Forces and Support Reactions
 Formulate the System Load and Stiffness Matrices for different structural systems like plane frames and plane grids
 Modify the Stiffness Matrix of systems with different types of members
 Formulate System Load Vectors for self-straining loads

Lecture 1: SPECIAL MATRIX OPERATIONS
The mathematical crux of matrix structural analysis is the solution of
simultaneous linear equations. The number of linear equations is large even for a
moderately sized structure. Thus, methods in matrix structural analysis require
large matrices for solving the simultaneous equations, and this is done very
quickly with computers.

A simple example of the type of equations encountered in matrix structural
analysis, specifically in the Stiffness Method, is shown below.

20x1 – 4x2 + 3x3 = -200

-4x1 +10x2 + 2x3 = 0

3x1 +2x2 + 10x3 -3x4 = 100

-3x3 +20x4 = 300

The above equations are then presented in matrix form as follows,

20  -4   3   0     x1     -200
-4  10   2   0     x2        0
 3   2  10  -3  *  x3  =   100
 0   0  -3  20     x4      300

Note that the number of rows is equal to the number of equations, while the
number of columns represents the number of unknowns.

 BASIC MATRIX OPERATIONS


Of course, other basic matrix operations will still be used, e.g. addition and
subtraction. The rules for matrix addition and subtraction are similar: the
matrices have to be of the same size, and the addition or subtraction is done
element-by-element, i.e.

[A] ± [B] = [C]


(mxn) ± (mxn) = (mxn)

aij ± bij = cij for all values of i and j.

Matrix multiplication involves multiplying the row elements of one matrix with
the column elements of the next. Multiplication of the two matrices below is only
possible if the number of columns of [A] is equal to the number of rows of [B].

[A] [B] = [C]
(m x n) (n x o) = (m x o)

In the above example, the size of [C] is m x o: the number of rows of [A] by the
number of columns of [B]. The elements of [C] are given by the expression

cij = ∑ aik·bkj , with the sum taken over k = 1 to n

Example:

 4  3                    4  3  8  6
 0  1  *  1 0 2 0   =    0  1  0  2
-4  1     0 1 0 2       -4  1 -8  2

(3 x 2)  (2 x 4)  =  (3 x 4)
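The row-by-column rule can be checked with a short Python sketch (plain lists, no external libraries; the function name `matmul` is merely illustrative):

```python
def matmul(A, B):
    # c_ij = sum over k of a_ik * b_kj; requires columns of A = rows of B
    m, n, o = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "size mismatch"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(o)]
            for i in range(m)]

# The (3 x 2)(2 x 4) example above:
C = matmul([[4, 3], [0, 1], [-4, 1]],
           [[1, 0, 2, 0], [0, 1, 0, 2]])
```

Here C reproduces the 3 x 4 product shown in the example.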

Several matrices can be multiplied together, but the number of columns of each
matrix must equal the number of rows of the next one for the multiplication to be
possible. The resulting product is a matrix with as many rows as the first matrix
and as many columns as the last matrix in the group.

[A] [B] [C] [D] = [E]


(m x n) (n x o) (o x p) (p x q) = (m x q)

The remaining basic matrix arithmetic operation is matrix division. This
operation is more complicated than the other basic operations. Thus, the division
problem is presented instead as a multiplication problem in which one of the
factors is the unknown.

[ A ] {x} = {b}
(n x n) (n x 1) = (n x 1)

In the above equation, the matrix [A] and the vector {b} are given, while the
vector {x} contains the unknowns. Vectors are single-row or single-column
matrices.

There are many ways to solve the division problem. One of them is Gaussian
Elimination, where the matrix [A] is converted into a triangular matrix. In doing
so, the unknown elements of {x} are solved one by one, eliminating previously
solved x’s and then solving for the remaining unknown x’s.

Another method solves for {x} by matrix inversion and is known as Gauss-Jordan
Inversion. In this approach, the inverse of [A] is derived, and this inverse is
then multiplied by the given vector {b} to solve for the unknown vector {x}.
{x} = [ A ]-1 {b}

Another popular method is Cramer’s Rule. However, this method becomes unwieldy
for large matrices. Nevertheless, it is a fast way of solving problems with 2 or
3 unknowns, where the size of [A] is either 2x2 or 3x3.

In the Stiffness Method for matrix structural analysis, the matrix [A] always has
special properties which are discussed in the succeeding paragraphs.

1) [A] is square – The number of rows in the matrix [A] is equal to the
number of columns. It is emphasized here that the number of rows is
equal to the number of equations, while the number of columns is equal
to the number of unknowns. It has been established in basic algebra that
for a system of simultaneous equations to have a unique solution, the
number of independent equations must equal the number of unknown
variables. Hence, [A] must be square.

In addition, to ensure that all the equations are independent, the matrix
[A] must not be a singular matrix, i.e. det[A] ≠ 0.

2) [A] is symmetrical – A matrix is symmetrical when aij = aji for all values of
i and j. A sample symmetrical matrix is shown below.

30 -5 4 -2
-5 20 -3 6
4 -3 25 -7
-2 6 -7 40

Note that the elements are symmetrical with respect to the main
diagonal. The main diagonal is the one where all elements have the same
row and column numbers, i.e. aii.

This symmetrical nature of the matrix [A] in the Stiffness Method is due to the
principle explained in the Reciprocal Theorem, which states that the
displacement at Point B due to a load at Point A is equal to the displacement at
Point A when the same load is applied at Point B. This is shown by a simple
example below.
(Figure: the same beam loaded twice: a load P applied at Point A produces a
deflection δB at Point B, and the same load P applied at Point B produces an
equal deflection δA at Point A.)

3) [A] is positive definite – A matrix is positive definite if the determinant
of the matrix is positive and, moreover, all sub-matrices formed from [A] whose
main diagonal elements are also main diagonal elements of the original matrix
[A] are likewise all positive definite.

Some of the sub-matrices, together with the original matrix, are shown
below.

30 -5 4 -2
-5 20 -3 6
4 -3 25 -7
-2 6 -7 40

30 -5 4
-5 20 -3
4 -3 25

20 -3 6
-3 25 -7
6 -7 40

30  -5      20  -3      25  -7
-5  20      -3  25      -7  40

30 20 25 40

Note that the determinants of all the matrices above are positive, including the
1x1 sub-matrices, whose determinants are simply their scalar values.
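One common numerical check (Sylvester’s criterion for symmetric matrices, which needs only the leading principal minors, i.e. the top-left k x k sub-matrices) can be sketched with numpy, assumed available, on the sample matrix above:

```python
import numpy as np

A = np.array([[30.0, -5, 4, -2],
              [-5, 20, -3, 6],
              [4, -3, 25, -7],
              [-2, 6, -7, 40]])

# determinants of the top-left 1x1, 2x2, 3x3 and 4x4 sub-matrices
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 5)]
is_positive_definite = all(m > 0 for m in minors)
```

Alternatively, np.linalg.cholesky(A) succeeds only for positive definite matrices, so it can serve as the same check.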

It is also worth mentioning at this point that the reason [A] has to be positive
definite is that the solution of the simultaneous equations for the unknown
vector {x} involves taking square roots. A positive definite [A] ensures that
all terms inside the square roots are positive. When the square root of a
negative number is encountered, it indicates instability in the system, and
structural software therefore uses it as a warning (of instability).

 CHOLESKY PROCEDURE
Due to the special properties of matrix [A], a special procedure to solve the
system of linear simultaneous equations was formulated called the Cholesky
Procedure.
The system of simultaneous equations in matrix form is again presented below.

[A]{x} = {b} (Eqn. 1)

Note that the elements of [A] and {b} are given and the unknowns are contained
in vector {x}.

The solution starts with decomposing [A] into two triangular matrices.

[A] = [L]x [U] (Eqn. 2)

[L] is a lower triangular matrix where the elements in the upper triangle above
the main diagonal are all zeroes as shown below.

x 0 0 0
x x 0 0
x x x 0
x x x x

[U] is an upper triangular matrix where the elements in the lower triangle below
the main diagonal are all zeroes as shown below.

x x x x
0 x x x
0 0 x x
0 0 0 x

Matrix [A] can be decomposed into an infinite number of pairs of [L] and [U].
However, there is only one pair where [L] and [U] are transposes of each other.
The principle is easier to appreciate with scalar values. Consider for example
the scalar value 12. An infinite number of factor pairs give a product equal to
12, e.g. 12x1, 1x12, 6x2, 2x6, 4x3, 3x4, 24x0.5, 5x2.4, etc. But only one pair
has equal factors, i.e. 12 = √12 x √12.

Similarly, the elements of [L] and [U] are equal and mirror each other about the
main diagonal, since the two matrices are transposes of each other. All elements
of [U] and [L] are related by the expression

uij = lji for all values of i and j.

From Equations 1 and 2, [A] is replaced by [L] [U].


[L] [U] {x} = {b} (Eqn. 3)

Introducing in Eqn. 3 the expression [U] {x} = {y}, (Eqn. 4)

[L] {y} = {b} (Eqn. 5)

Now, the Cholesky Procedure is presented as a 3-step procedure.

Step 1) Decomposition – Decompose [A] to [L] and [U]

Since [L] and [U] are transposes of each other, only one of the two triangular
matrices needs to be determined; the upper triangular matrix [U] is chosen here.
The solution process starts with Row 1. In each row, the elements are determined
starting with the main diagonal element and proceeding to the next element in
the row until the last column is reached. The solution then moves to the next
row, until the last row is finished.

The derivation of the formulas for each element in the decomposition process is
illustrated below. Consider a 4x4 matrix [A] that will be decomposed into [L] and [U].

a11 a12 a13 a14     l11   0   0   0     u11 u12 u13 u14
a21 a22 a23 a24  =  l21 l22   0   0  *    0 u22 u23 u24
a31 a32 a33 a34     l31 l32 l33   0       0   0 u33 u34
a41 a42 a43 a44     l41 l42 l43 l44       0   0   0 u44

The expressions for the first row elements u1j are derived below. Note that in
each line the equation from the matrix product is written first and the unknown
is then isolated.

 a11 = l11·u11 = u11²  u11 = √a11
 a12 = l11·u12 = u11·u12  u12 = a12 / u11
 a13 = l11·u13 = u11·u13  u13 = a13 / u11
 a14 = l11·u14 = u11·u14  u14 = a14 / u11

Similarly, the second row expressions for u2j are shown.

 a22 = l21·u12 + l22·u22 = u12² + u22²  u22 = √(a22 – u12²)
 a23 = l21·u13 + l22·u23 = u12·u13 + u22·u23  u23 = (a23 – u12·u13) / u22
 a24 = l21·u14 + l22·u24 = u12·u14 + u22·u24  u24 = (a24 – u12·u14) / u22
For the third row, we have

 a33 = l31·u13 + l32·u23 + l33·u33 = u13² + u23² + u33²
  u33 = √(a33 – (u13² + u23²))

 a34 = l31·u14 + l32·u24 + l33·u34 = u13·u14 + u23·u24 + u33·u34
  u34 = (a34 – (u13·u14 + u23·u24)) / u33

And finally for the fourth and last row,

 a44 = l41·u14 + l42·u24 + l43·u34 + l44·u44 = u14² + u24² + u34² + u44²
  u44 = √(a44 – (u14² + u24² + u34²))

From the above example, general expressions can be formulated from the common
patterns. Presented below are the formulas for the main diagonal elements.

u11 = √a11
u22 = √(a22 – u12²)
u33 = √(a33 – (u13² + u23²))
u44 = √(a44 – (u14² + u24² + u34²))

It can be observed that the main diagonal elements uii involve square roots.
Also, aii is used in the formula to solve for uii, and a summation term is
deducted from aii. The number of terms in the summation is i–1. Hence, the
general formula for the main diagonal elements can be presented as

uii = √( aii – ∑ uki² ) , with the sum taken over k = 1 to i–1      For Main Diagonal Elements

The expressions for the off-diagonal elements of [U] are presented below.

u12 = a12 / u11          u23 = (a23 – u12·u13) / u22
u13 = a13 / u11          u24 = (a24 – u12·u14) / u22
u14 = a14 / u11          u34 = (a34 – (u13·u14 + u23·u24)) / u33

The patterns that can be observed from the above formulas are:

a) aij is used to solve for uij
b) The denominator is uii
c) There is a summation term in the numerator. The number of terms is
equal to i–1, and the terms are products of uki and ukj.

Now, the general expression for off-diagonal elements can be formulated.

uij = ( aij – ∑ uki·ukj ) / uii , with the sum taken over k = 1 to i–1      For Off-Diagonal Elements
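The two general formulas can be collected into a short decomposition routine. The sketch below (in Python; the function name `cholesky_U` is illustrative) fills [U] row by row exactly in the order described in Step 1:

```python
import math

def cholesky_U(A):
    # returns the upper triangular U with A = transpose(U) * U
    n = len(A)
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # main diagonal element: u_ii = sqrt(a_ii - sum of u_ki^2)
        U[i][i] = math.sqrt(A[i][i] - sum(U[k][i] ** 2 for k in range(i)))
        # off-diagonal elements: u_ij = (a_ij - sum of u_ki * u_kj) / u_ii
        for j in range(i + 1, n):
            U[i][j] = (A[i][j] - sum(U[k][i] * U[k][j] for k in range(i))) / U[i][i]
    return U

U = cholesky_U([[20.0, -4, 3, 0],
                [-4, 10, 2, 0],
                [3, 2, 10, -3],
                [0, 0, -3, 20]])
```

Multiplying the transpose of U by U reproduces [A], which is a convenient check of the decomposition.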

Step 2) Forward Elimination – Solve for {y} in the formula [L]{y} = {b}.

This is called Forward Elimination because the procedure starts with the first row
and proceeds row-by-row until the last row. In the case of a 4x4 problem,

l11   0   0   0     y1     b1
l21 l22   0   0     y2     b2
l31 l32 l33   0  *  y3  =  b3
l41 l42 l43 l44     y4     b4

However, the elements of [L] can be replaced with the elements of [U] derived
earlier in Step 1, since lij = uji.

u11   0   0   0     y1     b1
u12 u22   0   0     y2     b2
u13 u23 u33   0  *  y3  =  b3
u14 u24 u34 u44     y4     b4

The unknown y’s are now solved from the equation above.

 b1 = u11·y1  y1 = b1 / u11
 b2 = u12·y1 + u22·y2  y2 = (b2 – u12·y1) / u22
 b3 = u13·y1 + u23·y2 + u33·y3  y3 = (b3 – (u13·y1 + u23·y2)) / u33
 b4 = u14·y1 + u24·y2 + u34·y3 + u44·y4  y4 = (b4 – (u14·y1 + u24·y2 + u34·y3)) / u44

Similar patterns can be discerned from the formulas above thereby arriving at the
general formula for Forward Elimination.
yi = ( bi – ∑ uki·yk ) / uii , with the sum taken over k = 1 to i–1
Step 3) Backward Substitution – Solve for {x} in the formula [U]{x} = {y}.
This is called Backward Substitution because the solution starts with the last row and
proceeds backward row-by-row until the first row. Continuing with the 4x4 problem,

u11 u12 u13 u14     x1     y1
  0 u22 u23 u24     x2     y2
  0   0 u33 u34  *  x3  =  y3
  0   0   0 u44     x4     y4

The expressions for the unknown x’s are presented below.

 y4 = u44·x4  x4 = y4 / u44
 y3 = u33·x3 + u34·x4  x3 = (y3 – u34·x4) / u33
 y2 = u22·x2 + u23·x3 + u24·x4  x2 = (y2 – (u23·x3 + u24·x4)) / u22
 y1 = u11·x1 + u12·x2 + u13·x3 + u14·x4  x1 = (y1 – (u12·x2 + u13·x3 + u14·x4)) / u11

From the common pattern in the above equations, the general formula for Backward
Substitution is derived.

xi = ( yi – ∑ uik·xk ) / uii , with the sum taken over k = i+1 to n
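Steps 2 and 3 can be sketched the same way in Python; the two routines below take an already-computed factor [U] directly, and a small 2x2 system (hypothetical numbers) keeps the arithmetic easy to follow by hand:

```python
def forward_elimination(U, b):
    # solve [L]{y} = {b}, where L is the transpose of U (l_ik = u_ki)
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(U[k][i] * y[k] for k in range(i))) / U[i][i]
    return y

def backward_substitution(U, y):
    # solve [U]{x} = {y}, starting from the last row
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

# U is the Cholesky factor of A = [[4, 2], [2, 5]]; solve A x = {10, 9}
U = [[2.0, 1.0],
     [0.0, 2.0]]
y = forward_elimination(U, [10.0, 9.0])
x = backward_substitution(U, y)
```

Here y works out to {5, 2} and x to {2, 1}, which satisfies the original 2x2 system.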

Example:
20  -4   3   0     x1     -200
-4  10   2   0     x2        0
 3   2  10  -3  *  x3  =   100
 0   0  -3  20     x4      300

1. Decomposition

20  -4   3   0     a11 a12 a13 a14
-4  10   2   0  =  a21 a22 a23 a24
 3   2  10  -3     a31 a32 a33 a34
 0   0  -3  20     a41 a42 a43 a44

    l11   0   0   0     u11 u12 u13 u14
 =  l21 l22   0   0  *    0 u22 u23 u24
    l31 l32 l33   0       0   0 u33 u34
    l41 l42 l43 l44       0   0   0 u44

 1st row:
 u11 = √a11 = √20 = 4.472
 u12 = a12 / u11 = -4 / 4.472 = -0.894
 u13 = a13 / u11 = 3 / 4.472 = 0.671
 u14 = a14 / u11 = 0 / 4.472 = 0

 2nd row:
 u22 = √(a22 – u12²) = √(10 – (-0.894)²) = 3.033
 u23 = (a23 – u12·u13) / u22 = (2 – (-0.894)(0.671)) / 3.033 = 0.857
 u24 = (a24 – u12·u14) / u22 = (0 – (-0.894)(0)) / 3.033 = 0

 3rd row:
 u33 = √(a33 – (u13² + u23²)) = √(10 – ((0.671)² + (0.857)²)) = 2.969
 u34 = (a34 – (u13·u14 + u23·u24)) / u33 = (-3 – (0.671·0 + 0.857·0)) / 2.969 = -1.010

 4th row:
 u44 = √(a44 – (u14² + u24² + u34²)) = √(20 – (0² + 0² + (-1.010)²)) = 4.356

2. Forward Elimination

[ L ] {y} = {b }

u11   0   0   0     y1     b1
u12 u22   0   0     y2     b2
u13 u23 u33   0  *  y3  =  b3
u14 u24 u34 u44     y4     b4

 b1 = u11·y1  y1 = b1 / u11
 b2 = u12·y1 + u22·y2  y2 = (b2 – u12·y1) / u22
 b3 = u13·y1 + u23·y2 + u33·y3  y3 = (b3 – (u13·y1 + u23·y2)) / u33
 b4 = u14·y1 + u24·y2 + u34·y3 + u44·y4  y4 = (b4 – (u14·y1 + u24·y2 + u34·y3)) / u44

General Formula:

yi = ( bi – ∑ uki·yk ) / uii , with the sum taken over k = 1 to i–1
From Step 1 (Decomposition), we get:

 4.472      0       0      0     y1     -200
-0.894  3.033       0      0  *  y2  =     0
 0.671  0.857   2.969      0     y3      100
     0      0  -1.010  4.356     y4      300

 y1 = b1 / u11 = -200 / 4.472 = -44.722

 y2 = (b2 – u12·y1) / u22 = (0 – (-0.894)·(-44.722)) / 3.033 = -13.182

 y3 = (b3 – (u13·y1 + u23·y2)) / u33
    = (100 – (0.671·(-44.722) + 0.857·(-13.182))) / 2.969 = 47.594

 y4 = (b4 – (u14·y1 + u24·y2 + u34·y3)) / u44
    = (300 – (0·(-44.722) + 0·(-13.182) + (-1.010)·47.594)) / 4.356 = 79.906

3. Backward Substitution
[ u ] {x} = {y }

u11 u12 u13 u14     x1     y1
  0 u22 u23 u24     x2     y2
  0   0 u33 u34  *  x3  =  y3
  0   0   0 u44     x4     y4

 y4 = u44·x4  x4 = y4 / u44
 y3 = u33·x3 + u34·x4  x3 = (y3 – u34·x4) / u33
 y2 = u22·x2 + u23·x3 + u24·x4  x2 = (y2 – (u23·x3 + u24·x4)) / u22
 y1 = u11·x1 + u12·x2 + u13·x3 + u14·x4  x1 = (y1 – (u12·x2 + u13·x3 + u14·x4)) / u11

General Formula:

xi = ( yi – ∑ uik·xk ) / uii , with the sum taken over k = i+1 to n
From Step 1 (Decomposition) and Step 2 (Forward Elimination), we get:

4.472  -0.894   0.671       0     x1     -44.722
    0   3.033   0.857       0  *  x2  =  -13.182
    0       0   2.969  -1.010     x3      47.594
    0       0       0   4.356     x4      79.906

 x4 = y4 / u44 = 79.906 / 4.356 = 18.344

 x3 = (y3 – u34·x4) / u33 = (47.594 – (-1.010)·18.344) / 2.969 = 22.271

 x2 = (y2 – (u23·x3 + u24·x4)) / u22
    = (-13.182 – (0.857·22.271 + 0·18.344)) / 3.033 = -10.639

 x1 = (y1 – (u12·x2 + u13·x3 + u14·x4)) / u11
    = (-44.722 – ((-0.894)·(-10.639) + 0.671·22.271 + 0·18.344)) / 4.472 = -15.469

Substituting the computed {x} back into the original system,

20  -4   3   0     -15.469     -200
-4  10   2   0  *  -10.639  =     0
 3   2  10  -3      22.271      100
 0   0  -3  20      18.344      300

From calculator (the small discrepancies are due to rounding the intermediate
values to three decimals):

20  -4   3   0     -15.469     -200.01
-4  10   2   0  *  -10.639  =     0.03
 3   2  10  -3      22.271       99.99
 0   0  -3  20      18.344      300.07
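As a cross-check of the hand computation, numpy (assumed available) provides a built-in Cholesky routine; np.linalg.cholesky returns the lower triangular factor [L], whose transpose is [U], and np.linalg.solve returns {x} directly:

```python
import numpy as np

A = np.array([[20.0, -4, 3, 0],
              [-4, 10, 2, 0],
              [3, 2, 10, -3],
              [0, 0, -3, 20]])
b = np.array([-200.0, 0, 100, 300])

L = np.linalg.cholesky(A)   # lower triangular, A = L @ L.T, so U = L.T
x = np.linalg.solve(A, b)   # solution to machine precision
```

The residual A·x - b of the machine-precision solution is zero to round-off, unlike the hand computation, which carries three-decimal rounding through every step.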

 ALGORITHMS

The algorithm for the Cholesky Procedure, written in the BASIC computer
language, is included here. The purpose is to understand the flow of each step
of the procedure by following the step-by-step logic of the commands line by
line. The algorithms also give readers an idea of what software programs look
like, and can even be helpful if they want to write short programs of their own.
Hence, algorithms are included in this reference where they are deemed helpful
in understanding the logic of the procedure.
INPUT N
DIM A(N,N), B(N), U(N,N), Y(N), X(N)

REM INPUT ROUTINE
For I = 1 to N
    For J = 1 to N
        INPUT A(I,J)
    Next J
    INPUT B(I)
Next I

REM DECOMPOSITION
For I = 1 to N
    For J = I to N
        If I = J Then
            Sum = 0
            For K = 1 to I - 1
                Sum = Sum + U(K,I)^2
            Next K
            U(I,I) = (A(I,I) - Sum)^0.5
        Else
            Sum = 0
            For K = 1 to I - 1
                Sum = Sum + U(K,I) * U(K,J)
            Next K
            U(I,J) = (A(I,J) - Sum) / U(I,I)
        End If
    Next J
Next I

REM FORWARD ELIMINATION
For I = 1 to N
    Sum = 0
    For K = 1 to I - 1
        Sum = Sum + U(K,I) * Y(K)
    Next K
    Y(I) = (B(I) - Sum) / U(I,I)
Next I

REM BACKWARD SUBSTITUTION
For I = N to 1 STEP -1
    Sum = 0
    For K = I + 1 to N
        Sum = Sum + U(I,K) * X(K)
    Next K
    X(I) = (Y(I) - Sum) / U(I,I)
Next I

 EXTRA SPECIAL MATRIX OPERATIONS

As mentioned before, modest-size structures require hundreds of simultaneous
equations, resulting in matrices with hundreds of rows. A structural system can
easily require a thousand equations, and the matrix [A] will then have 1,000
rows by 1,000 columns. This means that there will be 1,000,000 variables in [A]
alone. Each variable requires 8 bytes of memory (for double precision
variables), so matrix [A] will consume 8 MB of memory space.

This much data takes up a lot of space in both the RAM and the external memory
of the computer and will also slow down the processing time considerably.

To save on memory space and processing time, the matrix sizes are drastically
reduced. In most structures, many of the elements in the matrix [A] are zeroes;
the non-zero elements are the ones needed in the analysis. Therefore, these
non-zero values are clustered as near as possible to the main diagonal. The
width of the strip in matrix [A] that contains the non-zero elements is called
the band width. In the matrix example below, the number of non-zero elements in
a row is 11 = 1 main diagonal element plus 10 off-diagonal elements.

The band width is defined as

Band width = 10 + 1 + 10 = 21

(Figure: the full system [A]{x} = {b} with [A] of size 1,000 rows by 1,000
columns. The non-zero values lie in a band about the main diagonal: 10 elements
below the diagonal, the diagonal element itself, and 10 elements above it, with
zeroes elsewhere. The diagonal element plus the 10 elements above it make up the
half-band width.)

The band width of 21 is shown above. The matrix [A] is symmetrical, i.e. the 10
elements below the main diagonal are equal to and mirror the 10 elements to the
right of the main diagonal element. Therefore, only the elements on and to the
right of the main diagonal are needed in the solution process. Hence, only these
10 elements, together with the 1 main diagonal element, need to be saved in the
computer memory. This defines the half-band width, HBW (HBW = 1 + 10 = 11).
To save on memory requirements, which accordingly also speeds up the process,
software uses “block” operations. The size of the block is equal to HBW x HBW.

With block operations, the matrix [A] is converted into a “banded” matrix. A
banded matrix is shown below. The number of rows is still 1,000 in this example,
but the number of columns is reduced to HBW = 11. This drastically reduces the
external memory storage requirement of [A] from 8 MB (for 1,000,000 variables)
to 88 kB (for 11,000 variables).

During the 3 stages of the Cholesky Procedure, the number of rows required in
the [A], {b} and {y} matrices at any given time is only equal to HBW. Thus, the
portion of [A] that needs to be processed at any given time is reduced to HBW by
HBW, hence the term “block” operations. Similarly, the number of rows required
in the Cholesky processing of the {b}, {y} and {x} vectors is also equal to HBW,
and these are the same rows processed in [A]. It is emphasized that the data
being processed are the data that need to be stored in the RAM, which therefore
directly affects the processing speed. The saving is dramatic in this example:
the RAM requirement for [A] alone is reduced from 8 MB to 968 bytes (about
0.0121 % of 8 MB). Including the {b}, {y} and {x} vectors in the count yields a
bit more savings with the block operation.
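The banded storage itself is just an index mapping: element aij of the full matrix, with j ≥ i and j – i < HBW, is stored in column j – i of row i of the banded array. A minimal Python sketch (illustrative naming, and a small matrix in place of a 1,000-row system):

```python
def to_banded(A, hbw):
    # keep only the main diagonal and the (hbw - 1) elements to its right;
    # by symmetry, everything below the diagonal can be recovered from these
    n = len(A)
    banded = [[0.0] * hbw for _ in range(n)]
    for i in range(n):
        for j in range(i, min(i + hbw, n)):
            banded[i][j - i] = A[i][j]
    return banded

# A symmetric 4 x 4 matrix with half-band width 2:
A = [[2.0, 1.0, 0.0, 0.0],
     [1.0, 2.0, 1.0, 0.0],
     [0.0, 1.0, 2.0, 1.0],
     [0.0, 0.0, 1.0, 2.0]]
banded = to_banded(A, 2)   # 4 rows x 2 columns instead of 4 x 4
```

The same mapping scales directly: a 1,000 x 1,000 system with HBW = 11 stores 11,000 values instead of 1,000,000.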

(Figure: the banded form of [A], with 1,000 rows but only HBW = 11 columns,
processed block-by-block in HBW x HBW blocks: Banded [A]{x} = {b}.)

Therefore, minimizing the band width is a crucial step to accelerate the
processing time and to ensure that there will be enough memory available,
especially in the RAM.

The band width can be minimized by adopting a numbering of the joints such that
the difference between the joint numbers at the two ends of each member is
minimized. The pair of joint numbers at the ends of a member is called the
“member incidence”.

This is achieved by numbering the joints in a “wave-like” fashion advancing in
the direction of the “longer” dimension, as shown in the figure below. The
quotation marks mean that the dimensions are not to be interpreted literally;
instead, the “dimension” is measured by the number of nodes in that direction.

The member incidences input by the user can be random and generally do not
result in a minimized band width. Hence, the software renumbers the joints to
attain a minimum difference between the member incidence joint numbers before
proceeding with the Cholesky routine.
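The effect of joint numbering on the half-band width can be illustrated with a small sketch (the 2 x 4 grid of joints and its member incidences below are hypothetical, with one degree of freedom per joint assumed):

```python
def half_band_width(incidences):
    # HBW = largest joint-number difference across any member, plus the diagonal
    return max(abs(i - j) for i, j in incidences) + 1

# A 2 x 4 grid of joints with members along both directions.
# Numbering along the longer direction first (row 1-4, then row 5-8):
long_first = [(1, 2), (2, 3), (3, 4), (5, 6), (6, 7), (7, 8),
              (1, 5), (2, 6), (3, 7), (4, 8)]
# "Wave-like" numbering: consecutive numbers run across the shorter
# direction while the wave advances along the longer one (1-2, 3-4, ...):
wave = [(1, 3), (3, 5), (5, 7), (2, 4), (4, 6), (6, 8),
        (1, 2), (3, 4), (5, 6), (7, 8)]
```

For this grid, half_band_width(long_first) gives 5, while the wave-like numbering gives 3, so the renumbering nearly halves the band.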

Exercise:

Using the Cholesky Procedure, solve for the elements of vector {x} in the system
of linear equations presented in matrix form below.
