A.1 GENERAL
The following presents a summary of the fundamental elements and concepts of matrix algebra, to assist those who require a review of the subject. A detailed treatment of the subject matter is beyond the scope of these notes; the student is referred to books on advanced engineering mathematics and numerical analysis for a more exhaustive presentation.
Symmetric Matrix - a square matrix whose elements aij = aji for all i and j; i.e. the
elements are symmetrical about the principal diagonal.
Skew-symmetric matrix - a square matrix whose elements aij = -aji for all i and j. Note
that to satisfy the definition for the diagonal terms, aii = -aii, the diagonal elements
must equal zero. If aii is not equal to zero, while the off-diagonal terms satisfy the
definition, the matrix is called a skew matrix.
⎡ 1  2  3  4 ⎤        ⎡  0   1  -2   3 ⎤
⎢ 2  5  6  7 ⎥        ⎢ -1   0  -4   5 ⎥
⎢ 3  6  8  9 ⎥        ⎢  2   4   0  -6 ⎥
⎣ 4  7  9 10 ⎦        ⎣ -3  -5   6   0 ⎦
Symmetric Matrix      Skew-Symmetric Matrix
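The two defining conditions can be tested numerically; a minimal sketch using NumPy (an assumption; the original notes contain no code), with the two example matrices above:

import numpy as np

S = np.array([[1, 2, 3, 4],
              [2, 5, 6, 7],
              [3, 6, 8, 9],
              [4, 7, 9, 10]])
K = np.array([[ 0,  1, -2,  3],
              [-1,  0, -4,  5],
              [ 2,  4,  0, -6],
              [-3, -5,  6,  0]])

print(np.array_equal(S, S.T))    # True: aij = aji
print(np.array_equal(K, -K.T))   # True: aij = -aji, with a zero diagonal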
Equality Of Matrices
Two matrices [A] and [B] are equal if and only if [A] and [B] have the same number of rows and the same number of columns, and the corresponding elements are equal, i.e. aij = bij for all i and j. The equality is denoted by the equal sign, as in ordinary algebra.
The following may be easily verified and are presented here without proof:
a) Transposition is a reversible operation, ([A]^T)^T = [A]
b) If s is a scalar, s^T = s
c) If [B] is a symmetric, diagonal, or scalar matrix, [B]^T = [B]
d) For an identity matrix, [I]^T = [I]
e) For a square null matrix, [Φ]^T = [Φ]; if not a square matrix, the transpose is still a null matrix but by definition is not equal to the original null matrix since the sizes will be different.
f) For a skew-symmetric matrix [C], [C]^T = -[C]
Addition And Subtraction Of Matrices
The sum, or difference, of two matrices of the same size is a matrix whose elements are equal to the sum, or difference, of the corresponding elements of the original matrices.
Thus, if [C] = [A] + [B] then cij = aij + bij for all i and j;
or if [C] = [A] - [B] then cij = aij - bij for all i and j.
For example, if

    ⎡  1   4 ⎤        ⎡ 12  -11 ⎤
A = ⎢ -2   5 ⎥    B = ⎢ 10   -9 ⎥
    ⎣  3  -6 ⎦        ⎣ -8    7 ⎦

then

        ⎡ 13  -7 ⎤            ⎡ -11   15 ⎤
A + B = ⎢  8  -4 ⎥    A - B = ⎢ -12   14 ⎥
        ⎣ -5   1 ⎦            ⎣  11  -13 ⎦
From the definition of matrix addition and subtraction, these operations have properties similar to the corresponding operations for real numbers; i.e.
a) A + B = B + A commutative property
b) (A + B) + C = A + (B + C) associative property
c) A + Φ = A identity
d) A - A = Φ
e) (A + B)^T = A^T + B^T
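These properties are easy to verify numerically; a minimal NumPy sketch (illustrative, not part of the original notes), using the matrices of the example above:

import numpy as np

A = np.array([[ 1,  4], [-2,  5], [ 3, -6]])
B = np.array([[12, -11], [10, -9], [-8,  7]])

print(A + B)                                  # matches the sum computed above
print(A - B)                                  # matches the difference computed above
print(np.array_equal((A + B).T, A.T + B.T))   # True: property e)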
For example,

       ⎡ a11  a12  a13 ⎤        ⎡ x ⎤            ⎡ b1 ⎤
if A = ⎢ a21  a22  a23 ⎥    X = ⎢ y ⎥    and B = ⎢ b2 ⎥
       ⎣ a31  a32  a33 ⎦        ⎣ z ⎦            ⎣ b3 ⎦

then

          ⎡ a11 x + a12 y + a13 z ⎤
[A][X] =  ⎢ a21 x + a22 y + a23 z ⎥
          ⎣ a31 x + a32 y + a33 z ⎦

and the system of linear equations

a11 x + a12 y + a13 z = b1
a21 x + a22 y + a23 z = b2
a31 x + a32 y + a33 z = b3

can be expressed in matrix form as [A][X] = [B], see Section A.10.
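In practice a system in this matrix form is solved numerically. A minimal NumPy sketch follows; the coefficients below are illustrative assumptions, not values taken from the notes:

import numpy as np

A = np.array([[2.0, 1.0, -1.0],     # coefficient matrix [A]
              [1.0, 3.0,  2.0],
              [3.0, 1.0,  1.0]])
B = np.array([1.0, 13.0, 8.0])      # vector of constants [B]

X = np.linalg.solve(A, B)           # solution vector [X] = (x, y, z)
print(np.allclose(A @ X, B))        # True: [A][X] reproduces [B]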
if
    ⎡ 1  -2 ⎤        ⎡ 1  -2   3 ⎤            ⎡ 1 ⎤
A = ⎢ 3  -4 ⎥    B = ⎣ 0   1   2 ⎦    and C = ⎢ 0 ⎥
    ⎣ 3  -2 ⎦                                 ⎣ 2 ⎦

then

         ⎡ 1   -4  -1 ⎤             ⎡ 4   0 ⎤             ⎡ 7 ⎤
[A][B] = ⎢ 3  -10   1 ⎥    [B][A] = ⎣ 9  -8 ⎦    [B][C] = ⎣ 4 ⎦
         ⎣ 3   -8   5 ⎦
When the number of columns of [A] is equal to the number of rows of [B], the two matrices are said to be conformable with respect to matrix multiplication in the order [A][B]. The following products in the above example, [A][C], [C][A] and [C][B], are undefined since the matrices are nonconformable in these orders.
Note also that in the above example the product [A][B] is not equal to the product [B][A]; the products are not even of the same size. The following example shows that, in general, matrix multiplication is not commutative even when the matrices are conformable in both orders.
       ⎡  1   2   3 ⎤            ⎡  1   0   3 ⎤
if A = ⎢ -4   5   4 ⎥    and B = ⎢ -1   2   2 ⎥
       ⎣  3  -2   1 ⎦            ⎣  2   0   1 ⎦

              ⎡  5   4  10 ⎤                 ⎡ 10  -4   6 ⎤
then [A][B] = ⎢ -1  10   2 ⎥    and [B][A] = ⎢ -3   4   7 ⎥
              ⎣  7  -4   6 ⎦                 ⎣  5   2   7 ⎦
The order of the factors in a matrix product must therefore be carefully observed at all times; exceptions to this observation are shown below. Because of this, the product [A][B] can be described more precisely as: [A] is premultiplied to [B], or [B] is premultiplied by [A], or [B] is postmultiplied to [A], or [A] is postmultiplied by [B].
If [A], [B], and [C] are conformable matrices in the order indicated, and s is a scalar, then the following may be verified:
a) [A][B] ≠ [B][A] in general; the matrix product is not commutative
b) s([A][B]) = (s[A])[B] = [A](s[B]) = [A][B]s
c) [A][B][C] = ([A][B])[C] = [A]([B][C])
d) [C]([A] + [B]) = [C][A] + [C][B]
e) ([A] + [B])[C] = [A][C] + [B][C]
f) [ I ][A] = [A][ I ] = [A]; the unit matrix premultiplied and postmultiplied to [A] gives the same result if [A] is a square matrix.
g) [Φ][A] = [A][Φ] = [Φ]; note [A] must be a square matrix.
h) [A][A] = [A]^2; note [A] must be a square matrix.
i) If [B] is a scalar matrix such that [B] = s[ I ], then the product [A][B] = s[A].
j) ([A][B][C])^T = [C]^T [B]^T [A]^T; this is referred to as the reversal rule for the matrix transpose: the transpose of a product of matrices is equal to the product of the transposes of the individual matrices in reverse order.
k) If [A][B] = [Φ], it does not follow that [A] = [Φ] or [B] = [Φ], or that both [A] and [B] are null matrices. The cancellation law for matrix multiplication is not true in general, as the example and the sketch that follow illustrate.
For example, if
    ⎡ 2  1 ⎤
A = ⎣ 4  2 ⎦
then, aside from [B] = [Φ], the following also satisfies [A][B] = [Φ]:
      ⎡  1  -1 ⎤
B = s ⎣ -2   2 ⎦    where s is any scalar
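A minimal numerical sketch of properties a), j), and k), using NumPy (an assumption, not part of the original notes) and the matrices of the examples above:

import numpy as np

A = np.array([[ 1, 2, 3], [-4, 5, 4], [ 3, -2, 1]])
B = np.array([[ 1, 0, 3], [-1, 2, 2], [ 2, 0, 1]])
print(np.array_equal(A @ B, B @ A))           # False: not commutative, property a)
print(np.array_equal((A @ B).T, B.T @ A.T))   # True: reversal rule, property j)

# Cancellation law fails, property k): [A][B] = [Φ] with neither factor null
A2 = np.array([[2, 1], [4, 2]])
B2 = 3 * np.array([[1, -1], [-2, 2]])         # s = 3; any scalar works
print(A2 @ B2)                                # the 2x2 null matrix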
Note that not all square matrices have a corresponding inverse. A matrix which does not have an inverse is said to be singular, while one which has an inverse is said to be nonsingular. However, if the inverse of a matrix exists, then the inverse is unique, and it may be premultiplied or postmultiplied to the original matrix to yield the identity matrix.
Proof of uniqueness of A^-1:
let [B] and [C] be inverses of matrix [A] such that [A][B] = [ I ] and [C][A] = [ I ].
Using the definition of the matrix inverse and the associative property of matrix multiplication,
[C] = [C][ I ] = [C][A][B] = [ I ][B] = [B]
thus [C] = [B]
Assuming matrices [A], [B], and [C] are nonsingular, and s is a scalar, the following properties and observations are presented (a numerical check follows the list):
a) [A^-1]^-1 = [A] matrix inversion is reversible.
b) [A^T]^-1 = [A^-1]^T matrix inversion and transposition are independent operations.
c) The inverse of a scalar, s, a square matrix of order 1, is simply its reciprocal, s^-1 = 1/s.
d) The inverse of a diagonal matrix is a diagonal matrix whose elements are the reciprocals of the corresponding elements of the original matrix.
e) The inverse of a symmetric matrix is also a symmetric matrix.
f) [I]^-1 = [I]
g) [sA]^-1 = (1/s)[A]^-1
h) [A B C]^-1 = C^-1 B^-1 A^-1 reversal rule of matrix inversion.
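A minimal NumPy check of properties a), b), and h) (illustrative, not from the notes); random matrices are used since they are almost surely nonsingular:

import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.random((3, 3)) for _ in range(3))
inv = np.linalg.inv

print(np.allclose(inv(inv(A)), A))                             # a) inversion is reversible
print(np.allclose(inv(A.T), inv(A).T))                         # b) order-independent
print(np.allclose(inv(A @ B @ C), inv(C) @ inv(B) @ inv(A)))   # h) reversal rule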
From the above, the determinants of matrices of order 2 and 3 may be calculated
using simple procedures, without directly using the above summation.
For n = 2, |A| is simply equal to the product of the main diagonal elements, (a11*a22)
minus the product of the elements of the other diagonal, (a12*a21).
For n = 3, copying the first two columns of the matrix to the right of the matrix, the
determinant is simply the sum of the products of the elements parallel to the
principal diagonal minus the products of the elements parallel to the other
diagonals.
      ⎡ a11  a12  a13 ⎤  a11  a12
[A] = ⎢ a21  a22  a23 ⎥  a21  a22
      ⎣ a31  a32  a33 ⎦  a31  a32

Again giving: |A| = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 - a13 a22 a31 - a11 a23 a32 - a12 a21 a33
Note that this procedure of summing the products along the main diagonals minus the sum of the products along the other diagonals does not apply to matrices of order higher than 3.
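A minimal sketch of the two shortcut formulas, checked against NumPy's determinant routine (NumPy being an assumption of these sketches, not of the notes):

import numpy as np

A2 = np.array([[3.0, 1.0],
               [2.0, 5.0]])
det2 = A2[0, 0] * A2[1, 1] - A2[0, 1] * A2[1, 0]   # main diagonal minus other diagonal
print(det2, np.linalg.det(A2))                     # both 13.0

a = np.array([[ 1.0, 2.0, 3.0],
              [-4.0, 5.0, 4.0],
              [ 3.0, -2.0, 1.0]])
det3 = (a[0,0]*a[1,1]*a[2,2] + a[0,1]*a[1,2]*a[2,0] + a[0,2]*a[1,0]*a[2,1]
        - a[0,2]*a[1,1]*a[2,0] - a[0,0]*a[1,2]*a[2,1] - a[0,1]*a[1,0]*a[2,2])
print(det3, np.linalg.det(a))                      # both 24.0 (to round-off)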
For matrices having an order greater than 3, the use of the above definition to
manually determine the determinant of a matrix becomes increasingly difficult in
terms of tracking the possible permutations and applying the proper signs. It is best
to determine the determinant using a numerical procedure with the aid of a
computer.
or, expanding about column j, |A| = Σ aij cij, the sum taken over i = 1 to n.

Applying the expansion to the 3x3 matrix [A] and expanding about row 1 gives:

          | a22  a23 |        | a21  a23 |        | a21  a22 |
|A| = a11 | a32  a33 | -  a12 | a31  a33 | +  a13 | a31  a32 |

    = a11 (a22 a33 - a23 a32) - a12 (a21 a33 - a23 a31) + a13 (a21 a32 - a22 a31)
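The expansion lends itself to a short recursive routine; a minimal sketch in plain Python (an illustration, not the notes' own algorithm), expanding about row 1 as above:

def det(A):
    """Determinant by cofactor expansion about row 1 (practical only for small n)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]   # strike out row 1, column j
        total += (-1) ** j * A[0][j] * det(minor)          # cofactor sign (-1)^(1+j)
    return total

print(det([[1, 2, 3], [-4, 5, 4], [3, -2, 1]]))            # 24.0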
For example:

                ⎡ a11  a12 ⎤
If n = 2, [A] = ⎣ a21  a22 ⎦    with |A| = a11 a22 - a12 a21

      ⎡  a22  -a12 ⎤
[Â] = ⎣ -a21   a11 ⎦

                    1           ⎡  a22  -a12 ⎤
[A]^-1 = ─────────────────────  ⎣ -a21   a11 ⎦
          a11 a22 - a12 a21
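A minimal sketch of this 2x2 adjoint formula as a small function (illustrative, not from the notes):

def inverse_2x2(a11, a12, a21, a22):
    """Inverse of a 2x2 matrix via the adjoint; fails if the matrix is singular."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("matrix is singular, no inverse exists")
    return [[ a22 / det, -a12 / det],
            [-a21 / det,  a11 / det]]

print(inverse_2x2(4, 7, 2, 6))   # [[0.6, -0.7], [-0.2, 0.4]]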
Also note that although only a vector is shown for the matrix of constants, [B], this may be a rectangular matrix having several columns. For this case, the solution would also be a rectangular matrix, with each column representing the solution for the corresponding column of [B]. The following will not cover the special type of problem found in stability and dynamics where the vector of constants [B] is a null matrix; such a system of equations is said to be homogeneous.
For practical problems, the size of the coefficient matrix is quite large, such that the adjoint method becomes impractical due to the large number of determinants to be evaluated. Therefore, numerical methods are almost exclusively used in the solution of these simultaneous equations. Note also that the primary concern is to determine the solution vector and not the inverse of [A]. As such, numerical procedures exist which solve the system of equations without explicitly determining the inverse of the coefficient matrix.
Numerical methods for solving simultaneous linear equations may be divided into direct and indirect methods. Direct methods are those which, in the absence of round-off or other errors, yield the exact solution in a finite number of operations. Indirect or iterative methods are those which start with an initial approximation of the solution vector and, by applying a suitable algorithm or recursive procedure, lead to successively better approximations. The following discussion will be limited to the direct methods.
The fundamental method for direct solutions is Gauss elimination. This involves two major steps: first, the forward elimination, and second, the back substitution. Forward elimination involves the transformation of the system of equations into an equivalent system (one having the same solution vector) whose coefficient matrix is an upper-triangular matrix. This is accomplished by applying a series of the following elementary operations:
a) Multiplication of one row (equation) by a nonzero constant,
b) Addition of a multiple of one equation to another,
c) Interchange of two rows (equations).
In matrix form the elementary operations are applied to an augmented matrix which
contains the elements of [A] and [B].
Once the forward elimination has been completed, the solution vector can be
determined by back substitution, starting from the nth equation.
Interchanges of rows may also be required in cases where forward elimination cannot proceed because the diagonal term is zero. If the elimination cannot proceed despite the row interchanges, there is no unique solution for the system of equations, i.e. the coefficient matrix is singular.
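A minimal NumPy sketch of the complete procedure (forward elimination with partial pivoting, then back substitution); this is an illustration, not the notes' own code. The system used is that of the worked example below, with its first equation inferred, as an assumption, from the normalized row shown in step c:

import numpy as np

def gauss_solve(A, b):
    """Gauss elimination with partial pivoting and back substitution."""
    n = len(b)
    C = np.column_stack([np.asarray(A, float), np.asarray(b, float)])  # augmented [A | b]
    for k in range(n - 1):
        p = k + np.argmax(np.abs(C[k:, k]))    # pivot: largest coefficient in column k
        C[[k, p]] = C[[p, k]]                  # row interchange
        if C[k, k] == 0.0:
            raise ValueError("coefficient matrix is singular")
        for i in range(k + 1, n):
            C[i, k:] -= (C[i, k] / C[k, k]) * C[k, k:]   # eliminate x_k from row i
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):             # back substitution, last equation first
        x[i] = (C[i, -1] - C[i, i + 1:n] @ x[i + 1:]) / C[i, i]
    return x

A = [[6, 8, 2, 1], [2, 2, 1, 5], [5, 5, 1, 1], [6, 1, 0, 7]]
b = [-2, 8, -5, 4]
print(gauss_solve(A, b))   # [ 15. -30.  78.  -8.]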
c. Divide 1st equation/row by 6 to make the coefficient of x1 in the 1st equation equal to 1.
Let’s define this as normalizing the equation/row.
⎡ 1  1.333  0.333  0.167  -0.333 ⎤    x1 + 1.333 x2 + 0.333 x3 + 0.167 x4 = -0.333
⎢ 2  2      1      5       8     ⎥    2 x1 + 2 x2 + x3 + 5 x4 = 8
⎢ 5  5      1      1      -5     ⎥    5 x1 + 5 x2 + x3 + x4 = -5
⎣ 6  1      0      7       4     ⎦    6 x1 + x2 + 7 x4 = 4
d. Eliminate x1 from the second to fourth equations/rows. For example, for the second
equation, replace the second equation by the equation obtained by subtracting from the
original 2nd equation the 1st equation multiplied by the coefficient of x1 of the 2nd equation
(which is 2 for this example). For the augmented matrix C, the 2nd row elements become,
c2j = c2j – c21*c1j. This is repeated for the 3rd and 4th equations.
⎡ 1   1.333   0.333  0.167  -0.333 ⎤    x1 + 1.333 x2 + 0.333 x3 + 0.167 x4 = -0.333
⎢ 0  -0.667   0.333  4.667   8.667 ⎥    -0.667 x2 + 0.333 x3 + 4.667 x4 = 8.667
⎢ 0  -1.667  -0.667  0.167  -3.333 ⎥    -1.667 x2 - 0.667 x3 + 0.167 x4 = -3.333
⎣ 0  -7      -2      6       6     ⎦    -7 x2 - 2 x3 + 6 x4 = 6
e. Interchange 2nd and 4th equations and normalize the 2nd equation.
⎡ 1   1.333   0.333   0.167  -0.333 ⎤    x1 + 1.333 x2 + 0.333 x3 + 0.167 x4 = -0.333
⎢ 0   1       0.286  -0.857  -0.857 ⎥    x2 + 0.286 x3 - 0.857 x4 = -0.857
⎢ 0  -1.667  -0.667   0.167  -3.333 ⎥    -1.667 x2 - 0.667 x3 + 0.167 x4 = -3.333
⎣ 0  -0.667   0.333   4.667   8.667 ⎦    -0.667 x2 + 0.333 x3 + 4.667 x4 = 8.667
g. After eliminating x2 from the 3rd and 4th equations/rows (as in step d), normalize the 3rd equation. Note that pivoting could have been done between the 3rd and 4th equations.
⎡ 1  1.333  0.333   0.167  -0.333 ⎤    x1 + 1.333 x2 + 0.333 x3 + 0.167 x4 = -0.333
⎢ 0  1      0.286  -0.857  -0.857 ⎥    x2 + 0.286 x3 - 0.857 x4 = -0.857
⎢ 0  0      1       6.625  25     ⎥    x3 + 6.625 x4 = 25
⎣ 0  0      0.524   4.095   8.095 ⎦    0.524 x3 + 4.095 x4 = 8.095
j. The above completes the forward elimination. The back substitution starts from the last equation and proceeds to the first.

x4 = -8
x3 = 25 - 6.625 x4 = 78
x2 = -0.857 + 0.857 x4 - 0.286 x3 = -30
x1 = -0.333 - 0.167 x4 - 0.333 x3 - 1.333 x2 = 15
The Gauss-Jordan procedure can also be used to get the inverse of the coefficient
matrix. This is accomplished by solving the set of equations with [B] = [ I ]; i.e.
[A][X] = [ I ], therefore the solution is [X] = [A]-1.
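A minimal NumPy sketch of this idea (an assumption, not the notes' code): solve [A][X] = [I] column by column. Note that np.linalg.solve uses an LU (Gauss-type) factorization rather than literal Gauss-Jordan, but the principle is the same:

import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
X = np.linalg.solve(A, np.eye(2))        # solve [A][X] = [I], so [X] = [A]^-1
print(np.allclose(X, np.linalg.inv(A)))  # True
print(np.allclose(A @ X, np.eye(2)))     # True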
Note that the rules of differentiation of ordinary algebraic expressions also apply to matrix expressions. For example, the differential of a product of matrices:

d([A][B])/dx = (d[A]/dx)[B] + [A](d[B]/dx)

Similarly, the integral of a matrix is a matrix whose elements are the integrals of the corresponding elements:

if [C] = ∫(a to b) [A] dx  then  cij = ∫(a to b) aij dx
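A minimal symbolic sketch with SymPy (an assumption; the notes do not prescribe a tool) showing that differentiation and integration act element-wise and that the product rule above holds:

import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[x**2, sp.sin(x)], [1, x]])
B = sp.Matrix([[x, 0], [2, x**3]])

# Product rule in matrix form: d(AB)/dx = (dA/dx)B + A(dB/dx)
lhs = (A * B).diff(x)
rhs = A.diff(x) * B + A * B.diff(x)
print(sp.simplify(lhs - rhs))        # zero matrix

# Integration is element-wise: each c_ij is the integral of a_ij
print(sp.integrate(A, (x, 0, 1)))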
In these notes, frequent use of partitioned matrices will be made. For almost all of
these matrices, the partition results because of a natural grouping of the elements.
For example, the forces acting on a structure can be grouped together into support
reactions and other forces.
The matrix operations defined in the previous sections, including the observations and properties stated, generally apply to partitioned matrices. Care should be taken, however, that the matrices are partitioned such that the conformability and other conditions required for the specific operation are satisfied both at the matrix and the submatrix levels.
The products of [A] and [B], and of [A] and [C], are:

         ⎡ A11 B11 + A12 B21    A11 B12 + A12 B22 ⎤
[A][B] = ⎣ A21 B11 + A22 B21    A21 B12 + A22 B22 ⎦

         ⎡ A11 C1 + A12 C2 ⎤
[A][C] = ⎣ A21 C1 + A22 C2 ⎦
Remember that A, B and C must be conformable for matrix multiplication in the order shown, as must each pair of submatrices that are to be multiplied. The order of matrix multiplication must also be preserved in the submatrix products, i.e. the submatrices of [A] must premultiply the submatrices of [B] in the above example.
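A minimal NumPy check (illustrative, not from the notes): multiply blockwise and compare with the full product:

import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 4))
B = rng.random((4, 4))

# Partition each matrix into four 2x2 submatrices
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

blockwise = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
                      [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])
print(np.allclose(blockwise, A @ B))   # True: blockwise product equals full product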
For the transpose of partitioned matrices, note that, aside from the exchange of rows and columns, the submatrices themselves must be transposed.

       ⎡ A11  A12  A13 ⎤                ⎡ A11^T  A21^T ⎤
if A = ⎣ A21  A22  A23 ⎦    then A^T =  ⎢ A12^T  A22^T ⎥
                                        ⎣ A13^T  A23^T ⎦
A partitioned matrix whose off-diagonal submatrices are null matrices can also be considered a diagonal matrix. For these matrices the following may be easily verified:

       ⎡ A11   0    0  ⎤
if A = ⎢  0   A22   0  ⎥
       ⎣  0    0   A33 ⎦

            ⎡ A11^-1    0       0    ⎤              ⎡ A11^T    0      0    ⎤
then A^-1 = ⎢   0     A22^-1    0    ⎥    and A^T =  ⎢   0    A22^T    0    ⎥
            ⎣   0       0     A33^-1 ⎦              ⎣   0      0     A33^T ⎦
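A minimal NumPy check (illustrative): the inverse of a block-diagonal matrix is the block-diagonal matrix of the inverses:

import numpy as np

A11 = np.array([[2.0, 1.0], [1.0, 3.0]])
A22 = np.array([[4.0]])
Z12, Z21 = np.zeros((2, 1)), np.zeros((1, 2))

A = np.block([[A11, Z12], [Z21, A22]])
A_inv_blocks = np.block([[np.linalg.inv(A11), Z12],
                         [Z21, np.linalg.inv(A22)]])
print(np.allclose(A_inv_blocks, np.linalg.inv(A)))   # True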
Matrix partitioning may also be used to reduce the effort involved in performing matrix operations. This is particularly true when special matrices, such as identity and null matrices, can be identified within a large matrix. For example:
       ⎡ A11  A12 ⎤            ⎡  I   Φ ⎤
if A = ⎣ A21  A22 ⎦    and B = ⎣ B21  I ⎦

              ⎡ A11 + A12 B21    A12 ⎤
then [A][B] = ⎣ A21 + A22 B21    A22 ⎦
Again, additional advantage can be gained when special matrices, such as null, identity, symmetric, or diagonal matrices, can be identified in the partitioned form.
The following describes a way of reducing the size of the matrix to be inverted
through matrix partitioning. As will be seen, there will be, however, a
corresponding increase in the number of other matrix operations to be performed.
Let [A] be a matrix whose inverse is required and partitioned such that the diagonal
submatrices are square matrices; and let [B] be its inverse, which must necessarily be
partitioned the same way as [A]. Taking the product:
         ⎡ A11 B11 + A12 B21    A11 B12 + A12 B22 ⎤   ⎡ I   Φ ⎤
[A][B] = ⎣ A21 B11 + A22 B21    A21 B12 + A22 B22 ⎦ = ⎣ Φ   I ⎦
therefore
A11B11 + A12 B21 = I (A-1)
A21 B11 + A22 B21 = Φ (A-2)
A11 B12 + A12 B22 = Φ (A-3)
A21 B12 + A22 B22 = I (A-4)
from (A-2)    B21 = -A22^-1 A21 B11                      (A-5)
substituting into (A-1)
    A11 B11 - A12 A22^-1 A21 B11 = I
    [A11 - A12 A22^-1 A21] B11 = I
    B11 = [A11 - A12 A22^-1 A21]^-1                      (A-6)
from which B21 can be determined from (A-5).
Similarly, from (A-3)    B12 = -A11^-1 A12 B22           (A-7)
substituting into (A-4)
    B22 = [A22 - A21 A11^-1 A12]^-1                      (A-8)
from which B12 can be determined from (A-7).
Obviously, the reason why the diagonal submatrices were required to be square is that these must be inverted. Thus, through partitioning, instead of inverting one large matrix, four smaller matrices must be inverted, plus several other matrix operations performed.
The number of inversions may still be reduced from four to two by deriving alternative expressions for [B12] and [B22] to be used in place of equations (A-7) and (A-8). The alternative expressions follow by recalling that the inverse of a matrix is unique, i.e. [A][B] = [B][A] = [I]; hence the following also applies, from the product [B][A]:

B11 A12 + B12 A22 = Φ
B21 A12 + B22 A22 = I

from which,

B12 = -B11 A12 A22^-1                                    (A-9)
B22 = A22^-1 - B21 A12 A22^-1                            (A-10)
Assuming that B11 and B21 have been determined using equations (A-5) and (A-6), we can use equations (A-9) and (A-10) instead of equations (A-7) and (A-8). In summary, if A is the matrix whose inverse B is to be determined,

    ⎡ A11  A12 ⎤        ⎡ B11  B12 ⎤
A = ⎣ A21  A22 ⎦    B = ⎣ B21  B22 ⎦
Recognizing that there are common operations in the above expressions, we define matrices C and D and express the above more compactly as follows (a computational sketch is given after the definitions):

define    C = -A22^-1 A21
          D = -A12 A22^-1

then      B11 = [A11 + A12 C]^-1
          B21 = C B11
          B12 = B11 D
          B22 = A22^-1 + B21 D
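A minimal NumPy sketch of this block-inversion recipe (illustrative, not the notes' own code). The 4x4 matrix below is an assumption inferred from the numerical example that follows; its full statement appears on an earlier page of the notes:

import numpy as np

def block_inverse(A, k):
    """Invert [A] by partitioning after row/column k (A11 is k-by-k)."""
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    A22_inv = np.linalg.inv(A22)
    C = -A22_inv @ A21                  # C = -A22^-1 A21
    D = -A12 @ A22_inv                  # D = -A12 A22^-1
    B11 = np.linalg.inv(A11 + A12 @ C)  # the only other inversion needed
    B21 = C @ B11
    B12 = B11 @ D
    B22 = A22_inv + B21 @ D
    return np.block([[B11, B12], [B21, B22]])

A = np.array([[2.0, 2.0, 1.0, 5.0],
              [6.0, 8.0, 2.0, 1.0],
              [5.0, 5.0, 1.0, 1.0],
              [6.0, 1.0, 0.0, 7.0]])
B = block_inverse(A, 2)
print(np.round(B, 1))                   # matches [B] computed below
print(np.allclose(A @ B, np.eye(4)))    # True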
            ⎡ 5  5 ⎤   ⎡ -4.143  -4.857 ⎤
C = -A22^-1 ⎣ 6  1 ⎦ = ⎣ -0.857  -0.143 ⎦

      ⎡ 1  5 ⎤          ⎡ -1  -0.571 ⎤
D = - ⎣ 2  1 ⎦ A22^-1 = ⎣ -2   0.143 ⎦

                         ⎡ -6.429  -3.571 ⎤^-1   ⎡ -2.6   5 ⎤
B11 = [A11 + A12 C]^-1 = ⎣ -3.143  -1.857 ⎦    = ⎣  4.4  -9 ⎦

              ⎡ -10.6  23 ⎤
B21 = C B11 = ⎣   1.6  -3 ⎦

              ⎡ -7.4   2.2 ⎤
B12 = B11 D = ⎣ 13.6  -3.8 ⎦

                       ⎡ -34.4   9.2 ⎤
B22 = A22^-1 + B21 D = ⎣   4.4  -1.2 ⎦

thus

      ⎡  -2.6    5   -7.4   2.2 ⎤
[B] = ⎢   4.4   -9   13.6  -3.8 ⎥
      ⎢ -10.6   23  -34.4   9.2 ⎥
      ⎣   1.6   -3    4.4  -1.2 ⎦
These two equations represent the reduced system of equations in Equation (7-5a), which includes the effect of the variable z.

a*1 x + b*1 y = d*1
a*2 x + b*2 y = d*2

The unknowns x and y, comprising the unknown vector X1, can be solved for in this reduced system of equations.