
First Exam Coverage

ES 21 Notes
MATRICES

Definition

A matrix is a rectangular array of numbers or functions arranged in rows and columns, usually designated by a capital letter and enclosed by brackets, parentheses or double bars. A matrix A with m rows and n columns may be denoted by:

A = [aij] = [ a11  a12  ...  a1n
              a21  a22  ...  a2n
              ...
              am1  am2  ...  amn ]

Unless stated otherwise, we assume that all our matrices are composed of real numbers.

The horizontal groups of elements are called the rows of the matrix. The ith row of A is [ai1  ai2  ...  ain].

The vertical groups of elements are called the columns of the matrix. The jth column of A is the column with entries a1j, a2j, ..., amj.

The size of a matrix is denoted by "m x n" (m by n), where m is the number of rows and n is the number of columns.

We refer to aij as the entry or the element in the ith row and jth column of the matrix.

We may often write a given matrix as

A = [aij].

SOME SPECIAL TYPES OF MATRICES

A. Row Matrix or Row Vector – a matrix consisting of only one row.

Example: B = [b1  b2  ...  bj  ...  bn]

B. Column Matrix or Column Vector – a matrix consisting of only one column.

Example:

C. Square Matrix – a matrix in which the number of rows equals the number of columns.

Order of a Square Matrix – the number of rows (or columns) of the matrix. Thus, we can refer to a 3x3 matrix simply as a square matrix of order 3.

Principal Diagonal or Main Diagonal of a Square Matrix – consists of the elements a11, a22, a33, ..., ann.

The Trace of a Square Matrix – the sum of the elements on the main diagonal of the matrix.

D. Upper Triangular Matrix – a square matrix all elements of which below the principal diagonal are zero (aij = 0 for i > j).

Example:

E. Lower Triangular Matrix – a square matrix all elements of which above the principal diagonal are zero (aij = 0 for i < j).

Example:

F. Diagonal Matrix – a square matrix that is upper triangular and lower triangular at the same time; the only non-zero elements are the elements on the principal diagonal (aij = 0 for i ≠ j).

Example:

G. Scalar Matrix – a diagonal matrix whose diagonal elements are all equal.

Example:

H. Identity Matrix – represented by In, a diagonal matrix in which all the elements along the main diagonal are equal to 1 or unity.

Example:

I. Null Matrix – represented by O, a matrix in which all the elements are zero.

J. Symmetric Matrix – a square matrix whose elements satisfy aij = aji.

Example:

K. Skew Symmetric Matrix – a square matrix whose elements satisfy aij = -aji.

Example:
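As a quick illustration, the following Python sketch (assuming NumPy is available; the entries are arbitrary) builds instances of the special types listed above and computes the trace.

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 10]])        # a square matrix of order 3

    print(A.shape)                    # size "m x n" -> (3, 3)
    print(np.trace(A))                # trace = a11 + a22 + a33 = 16

    U = np.triu(A)                    # upper triangular part (aij = 0 for i > j)
    L = np.tril(A)                    # lower triangular part (aij = 0 for i < j)
    D = np.diag(np.diag(A))           # diagonal matrix built from the main diagonal
    I3 = np.eye(3)                    # identity matrix I3
    O = np.zeros((3, 3))              # null matrix
    S = A + A.T                       # a symmetric matrix (S equals its transpose)
    K = A - A.T                       # a skew-symmetric matrix (K equals -K transposed)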


EQUALITY OF MATRICES

Two matrices A = (aij) and B = (bij) are equal if and only if the following conditions are satisfied:

a) They have an equal number of rows.
b) They have an equal number of columns.
c) All elements in A agree with the corresponding elements in B (aij = bij for all i and j).

Example: The given matrices are equal if and only if x = 2, y = 1, z = 2, a = 3, b = 4 and c = 1.

ELEMENTARY OPERATIONS ON MATRICES

A. MATRIX ADDITION AND SUBTRACTION

If A = (aij) and B = (bij) are matrices of the same size m x n, then the sum A + B is another m x n matrix C = [cij] where cij = aij + bij for i = 1 to m and j = 1 to n. Matrix addition is accomplished by adding algebraically the corresponding elements in A and B.

Example:

Note: We can only add or subtract matrices with the same number of rows and columns.

B. SCALAR MULTIPLICATION

If A = (aij) is an m x n matrix and k is a real number (or a scalar), then the scalar multiple of A by k is the m x n matrix C = [cij] where cij = kaij for all i and j. In other words, the matrix C is obtained by multiplying each element of the matrix by the scalar k.

Examples:

C. MATRIX SUBTRACTION

If A and B are m x n matrices, the difference between A and B, denoted as A – B, is obtained from the addition of A and (-1)B:

A – B = A + (-1)B

Matrix subtraction is accomplished by subtracting the elements of the second matrix from the corresponding elements of the first matrix.

Example:

D. MATRIX MULTIPLICATION

If A = (aij) is an m x n matrix and B = (bij) is an n x p matrix, then the product of A and B, AB = C = [cij], is an m x p matrix where

cij = ai1b1j + ai2b2j + ... + ainbnj

for i = 1 to m and j = 1 to p.

The formula tells us that in order to get the element cij of the matrix C, take the elements of the ith row of A (the pre-multiplier) and the elements of the jth column of B (the post-multiplier). Afterwards, obtain the sum of the products of corresponding elements of the two vectors.

Note: The product is defined only if the number of columns of the first factor A (pre-multiplier) is equal to the number of rows of the second factor B (post-multiplier). If this is satisfied, we say that the matrices are conformable in the order AB.

Note: Although AB and BA may both be defined, it is not necessary that AB = BA.

Example:

1. A x B is a 3 x 3 matrix while B x A is a 2 x 2 matrix.
2. A x C is a 3 x 2 matrix but C x A is not defined.
3. B x C is not defined but C x B is defined (2 x 3).

MATRIX EXPONENTIATION

For a square matrix A, the power A^n is defined as A*A*...*A (n factors).

Example:
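The operations above can be checked numerically. The sketch below (Python with NumPy; the 2 x 2 matrices are arbitrary examples) implements the product formula cij = ai1b1j + ... + ainbnj with an explicit loop and also shows that AB and BA need not agree.

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])
    k = 3

    print(A + B)          # matrix addition: cij = aij + bij
    print(k * A)          # scalar multiplication: cij = k*aij
    print(A - B)          # subtraction: A + (-1)B

    def matmul(A, B):
        """Product from the definition: C is m x p with cij = sum over t of ait*btj."""
        m, n = A.shape
        n2, p = B.shape
        assert n == n2, "not conformable in the order AB"
        C = np.zeros((m, p))
        for i in range(m):
            for j in range(p):
                C[i, j] = sum(A[i, t] * B[t, j] for t in range(n))
        return C

    print(matmul(A, B))                   # same result as A @ B
    print(np.array_equal(A @ B, B @ A))   # False: AB != BA in general
    print(np.linalg.matrix_power(A, 3))   # matrix exponentiation A^3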
Example:
E. MATRIX TRANSPOSITION

If A = [aij] is an m x n matrix, then the transpose of A, denoted by A^T = [a'ij], is an n x m matrix defined by a'ij = aji.
The transpose of A is obtained by interchanging the rows and the columns of A.

Example:

Note: The transpose of a symmetric matrix is equal to itself.

PROPERTIES AND THEOREMS ON MATRIX OPERATIONS:

MATRIX ADDITION

A + O = A                      Existence of Additive Identity
A + (-A) = O                   Existence of Additive Inverse
A + B = B + A                  Commutative Property
(A + B) + C = A + (B + C)      Associative Property

SCALAR MULTIPLICATION

0 x A = O
1 x A = A
kl(A) = k(l A) = l(k A)
(k + l) A = k A + l A
k(A + B) = k A + k B

MATRIX MULTIPLICATION

A(BC) = (AB)C                  Associative Property
A(B + C) = AB + AC             Left Distributive Property
(A + B)C = AC + BC             Right Distributive Property
AI = IA = A                    Existence of Multiplicative Identity
kl(AB) = (k A)(l B) = (l A)(k B)

Note: In general, matrix multiplication is not commutative. That is, AB ≠ BA.

MATRIX TRANSPOSITION

(A^T)^T = A
(A + B)^T = A^T + B^T
(k A)^T = k A^T
(AB)^T = B^T A^T

In general, (A1 A2 A3 ... An-1 An)^T = An^T An-1^T ... A3^T A2^T A1^T.

DETERMINANTS

Another very important number associated with a square matrix A is the determinant of A, which we will now define. This unique number associated with a matrix A is useful in the solution of linear equations.

Permutation:

Let S = {1, 2, 3, ..., n} be the set of integers from 1 to n, arranged in increasing order. A rearrangement a1a2a3...an of the elements in S is called a permutation of S.

By the Fundamental Principle of Counting, we can put any one of the n elements of S in the first position, any one of the remaining (n-1) elements in the second position, any one of the remaining (n-2) elements in the third position, and so on until the nth position. Thus there are n(n-1)(n-2)...3*2*1 = n! permutations of S. We refer to the set of all permutations of S as Sn.

Examples:

If S = {1, 2, 3} then S3 = {123, 132, 213, 231, 312, 321}.

If S = {1, 2, 3, 4} then there are 4! = 24 elements of S4.

Odd and Even Permutations

A permutation a1a2a3...an is said to have an inversion if a larger number precedes a smaller one. If the total number of inversions in the permutation is even, then we say that the permutation is even; otherwise it is odd.

Examples: ODD and EVEN Permutations

S1 has only one permutation, namely 1, which is even since there are no inversions.

In the permutation 35241, 3 precedes 2 and 1, 5 precedes 2, 4 and 1, 2 precedes 1, and 4 precedes 1. There is a total of 7 inversions, thus the permutation is odd.

S3 has 3! = 6 permutations: 123, 231 and 312 are even, while 132, 213, and 321 are odd.

S4 has 4! = 24 permutations: 1234, 1243, 1324, 1342, 1423, 1432, 2134, 2143, 2314, 2341, 2413, 2431, 3124, 3142, 3214, 3241, 3412, 3421, 4123, 4132, 4213, 4231, 4312, 4321.

For any Sn where n > 1, it contains n!/2 even permutations and n!/2 odd permutations.

DEFINITION: DETERMINANT

Let A = [aij] be a square matrix of order n. The determinant of A, denoted by det(A) or |A|, is defined by

|A| = Σ (±) a1j1 a2j2 ... anjn

where the summation is over all permutations j1j2...jn of the set S = {1, 2, ..., n}. The sign is taken as (+) if the permutation is even and (–) if the permutation is odd.

Examples:

If A = [a11] is a 1 x 1 matrix, then det(A) or |A| = a11.

If A is a 2 x 2 matrix, then to get |A| we write down the terms a1- a2- and replace the dashes with all possible permutations of S = {1, 2}, namely 12 (even) and 21 (odd). Thus |A| = a11a22 - a12a21.

If A is a 3 x 3 matrix, then to compute |A| we write down the six terms a1-a2-a3-, a1-a2-a3-, a1-a2-a3-, a1-a2-a3-, a1-a2-a3-, a1-a2-a3-. Replace the dashes with all the elements of S3, affix a (+) or (-) sign, and get the sum of the six terms.

If A is a square matrix of order n, there will be n! terms in the determinant of A, with n!/2 positive terms and n!/2 negative terms.
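The permutation definition can be turned directly into code. The sketch below (Python; the 2 x 2 test matrix is an arbitrary example) counts inversions to decide parity and sums the signed products a1j1 a2j2 ... anjn over all n! permutations.

    from itertools import permutations
    import numpy as np

    def inversions(p):
        """Count pairs where a larger number precedes a smaller one."""
        return sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
                   if p[i] > p[j])

    print(inversions((3, 5, 2, 4, 1)))   # 7 -> the permutation 35241 is odd

    def det_by_permutations(A):
        n = A.shape[0]
        total = 0.0
        for p in permutations(range(n)):          # all n! permutations of S
            sign = -1 if inversions(p) % 2 else 1
            term = 1.0
            for i in range(n):
                term *= A[i, p[i]]                # a1j1 * a2j2 * ... * anjn
            total += sign * term
        return total

    A = np.array([[2.0, 1.0], [4.0, 3.0]])
    print(det_by_permutations(A))     # a11*a22 - a12*a21 = 2
    print(np.linalg.det(A))           # agrees (up to round-off)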

METHODS IN GETTING THE DETERMINANT

A. DIAGONAL METHOD

This method is applicable to matrices of size less than or equal to 3.

1. 2 x 2 Matrices – the determinant is the product of the main-diagonal elements minus the product of the other diagonal: |A| = a11a22 - a12a21.

2. 3 x 3 Matrices – copy the first two columns to the right of the matrix, add the products of the three left-to-right diagonals, and subtract the products of the three right-to-left diagonals.

Example:

B. METHOD OF COFACTORS

Complementary Minor, det(Mij) or |Mij|

The complementary minor, or simply minor, of an element aij of the matrix A is the determinant of the sub-matrix Mij obtained after eliminating the ith row and jth column of A.

Algebraic Complement or Cofactor, Aij

The algebraic complement or cofactor of an element aij of the matrix A is the signed minor obtained from the formula

Aij = (-1)^(i+j) |Mij|

DETERMINANT USING THE COFACTOR METHOD

The determinant of a square matrix may be obtained using expansion about a row or expansion about a column. The following formulas may be used in getting the determinant:

|A| = ai1Ai1 + ai2Ai2 + ... + ainAin        (expansion about the ith row)

and

|A| = a1jA1j + a2jA2j + ... + anjAnj        (expansion about the jth column)

Note: We may choose any row or any column in getting the determinant of a given matrix.

Example: To evaluate

It is best to expand about the fourth row because it has the most zeros. The optimal course of action is to expand about the row or column that has the largest number of zeros, because in that case the cofactors Aij of those aij which are zero need not be evaluated, since the product aijAij = (0)Aij = 0.

THEOREMS ON DETERMINANTS

1. If a square matrix A = [aij] contains a row (or a column) whose elements are all equal to zero, then |A| = 0.

Example:

2. The determinant of a square matrix A = [aij] is equal to the determinant of its transpose A^T (i.e. |A| = |A^T|).

Example:

3. If a row (or column) of a square matrix A = [aij] is multiplied by a constant k, then the determinant of the resulting matrix B = [bij] is equal to k times the determinant of A (i.e. |B| = k|A|).

Example:

4. As a corollary to the third theorem, if A has a row (or column) with a common factor k, then this k may be factored out of the determinant of A, forming a simplified matrix B (i.e. |A| = k|B|).

Example:

5. If two rows (or columns) of a square matrix A = [aij] are interchanged to form a new matrix B = [bij], then |B| = -|A|.

Example:

6. If two rows (or columns) of a matrix A = [aij] are identical, then |A| = 0.

Example:

7. As a corollary to the sixth theorem, if the elements in a row (or column) of a square matrix A = [aij] are multiples of the corresponding elements of another row (or column) of the matrix A, then |A| = 0.

Example:

8. If B = [bij] is a square matrix of order n that is derived from another square matrix A = [aij] of order n by adding to the elements of a row (or column) a multiple of the corresponding elements of another row (or column), then |B| = |A|.

Example:

9. If the elements of one row (or column) of a square matrix A = [aij] of order n may be expressed as binomials, such that two square matrices B = [bij] and C = [cij], both of order n, are formed after splitting the binomial elements, then |A| = |B| + |C|.

Example:


10. The determinant of the product of two square matrices A = [aij] and B = [bij] of the same order n is equal to the product of the determinant of A and the determinant of B.

Example: If A and B are the given matrices, then |AB| = |A| |B|.

11. The determinant of a triangular matrix is equal to the product of the elements on its principal diagonal.

Example:

12. The determinant of an Identity Matrix is equal to 1.

ADJOINT OF A MATRIX

The Adjoint of a square matrix A = [aij] of order n is the square matrix of the same order n denoted by adj(A) = [Aji], where Aij is the cofactor of the element aij of matrix A. The adjoint of a matrix is the transpose of the matrix of cofactors of the elements of A.

Input: Square Matrix
Output: Square Matrix (with the same size as the original matrix)
Notation: adj A, adj B, ...

Step 1: Get the cofactors of all the elements in the original matrix.

Recall: the cofactor of an element aij can be denoted as Aij and is defined by Aij = (-1)^(i+j) |Mij|.

Step 2: Set up the adjoint matrix by taking the transpose of the matrix of cofactors.

Example:

Inverse of a Matrix

The inverse of a square matrix A = [aij] of order n is the matrix B = [bij] of the same order n such that AB = BA = In. We denote the inverse matrix of A by A^-1. Thus, we define the inverse of A as the matrix A^-1 such that

A(A^-1) = (A^-1)A = In.

Not all matrices have an inverse. However, if the inverse of a matrix exists, it is unique.

If the inverse of a matrix exists, we say that the matrix is invertible or non-singular. Otherwise, we say that the matrix is non-invertible or singular.

Matrix Inversion Using the Adjoint and the Determinant

Matrix inversion applies only to square matrices and can be carried out using the adjoint matrix and the determinant:

A^-1 = adj(A) / |A|

Notation: A^-1, B^-1, ...

The proof of this needs the knowledge of the following theorem: the sum of the products of the elements in one row (or column) and the cofactors of the elements of another row (or column) of a given square matrix is zero.

From the above formula for the inverse, it is highly suggested that the determinant be computed first. If it so happens that the matrix is singular (i.e., the determinant is zero), then the inverse of the matrix does not exist. Note that it is a waste of effort to still produce the adjoint if the matrix is singular. Therefore, it is advised that you first check for singularity.

Example 1: Set up the inverse of the given matrix.

Using the diagonal method to compute for the determinant of the given matrix:

Since matrix A is singular, as evidenced by its zero determinant, it can thus be concluded that the inverse of A (or A^-1) does not exist.

Example 2: Set up the inverse of the given matrix.

Since the determinant is not zero, matrix A is said to be non-singular. In this case, the inverse exists and there is a need to set up the adjoint.

Getting the cofactors of all the elements in the original matrix, we form the adjoint and divide by the determinant to obtain A^-1.
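The cofactor, adjoint and inverse machinery above can be written out directly. The sketch below (Python with NumPy) uses as its test case the coefficient matrix of the system solved in a later inverse-method example; the formula A^-1 = adj(A)/|A| is applied only after checking that the determinant is non-zero.

    import numpy as np

    def minor(A, i, j):
        """Sub-matrix Mij: delete the ith row and the jth column of A."""
        return np.delete(np.delete(A, i, axis=0), j, axis=1)

    def det(A):
        """Determinant by cofactor expansion about the first row."""
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        return sum(((-1) ** j) * A[0, j] * det(minor(A, 0, j)) for j in range(n))

    def adjoint(A):
        """Transpose of the matrix of cofactors Aij = (-1)^(i+j) |Mij|."""
        n = A.shape[0]
        C = np.array([[((-1) ** (i + j)) * det(minor(A, i, j))
                       for j in range(n)] for i in range(n)])
        return C.T

    def inverse(A):
        d = det(A)
        if d == 0:
            raise ValueError("matrix is singular; the inverse does not exist")
        return adjoint(A) / d          # A^-1 = adj(A) / |A|

    A = np.array([[1.0, -1.0, 1.0],
                  [1.0,  2.0, 3.0],
                  [4.0, -2.0, 3.0]])
    print(inverse(A) @ A)              # approximately the identity I3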
SOLUTION TO SYSTEM OF LINEAR EQUATIONS

In general, we can think of a system of linear equations as a set of "m" equations that contains "n" unknowns. There are several forms by which a system of equations can be written.

We can have the equation form:

a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
am1x1 + am2x2 + ... + amnxn = bm

where the aij are constant coefficients of the unknowns xj and the bi are constants.

Or we can transform that to the matrix form. Referring to the matrix form, we can rewrite the system of equations as a compact matrix operation:

AX = B

where:

A → Coefficient Matrix
X → Column Matrix of Unknowns/Variables
B → Column Matrix of Constants

SOLUTION TO SYSTEM OF n LINEAR EQUATIONS WITH n UNKNOWNS

A. USING THE INVERSE METHOD

The Inverse Method may be applied only to a system of linear equations in which the number of independent equations is equal to the number of unknowns. If the number of equations is equal to the number of unknowns, the equation AX = B will have a matrix of coefficients that is square.

If the matrix of coefficients A is non-singular, the solution to the system is unique. On the other hand, if A is singular, the system has either infinitely many solutions or no solution at all.

Derivation of the Solution for the xi's:

AX = B
A^-1(AX) = A^-1 B
(A^-1 A)X = A^-1 B
In X = A^-1 B, thus
X = A^-1 B

Take note that the derivation assumes that A^-1 exists. If A^-1 does not exist, we cannot find the solution to the system AX = B by this method.

Example: Determine the values of x1, x2 and x3 in the following system of equations.

Solution:

The above system of equations can be written in the matrix form AX = B, so that X = A^-1 B, where A is the coefficient matrix and B is the column of constants.

Getting A^-1:

To get x1, x2 and x3, multiply A^-1 by B. Performing the operation A^-1 B will yield the solution matrix.

Make it a habit to check if all the computed values of the unknowns satisfy all the given equations. Checking is done by substituting the values x1 = 1, x2 = 1 and x3 = 1 into the original equations.

Equation 1:  1(1) – 1(1) + 1(1) =? 1   Satisfied
Equation 2:  1(1) + 2(1) + 3(1) =? 6   Satisfied
Equation 3:  4(1) – 2(1) + 3(1) =? 5   Satisfied

Since all the equations were satisfied, (x1, x2, x3) = (1, 1, 1) is indeed the solution to the system.
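The same computation can be sketched in NumPy, using the system recovered from the checking step above (x1 - x2 + x3 = 1, x1 + 2x2 + 3x3 = 6, 4x1 - 2x2 + 3x3 = 5):

    import numpy as np

    A = np.array([[1.0, -1.0, 1.0],
                  [1.0,  2.0, 3.0],
                  [4.0, -2.0, 3.0]])
    B = np.array([1.0, 6.0, 5.0])

    if np.linalg.det(A) != 0:          # check for singularity first
        X = np.linalg.inv(A) @ B       # X = A^-1 B
        print(X)                       # approximately [1. 1. 1.]
        print(np.allclose(A @ X, B))   # check: every equation is satisfied
    else:
        print("A is singular: the inverse method cannot be used")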


SOLUTION TO SYSTEM OF EQUATIONS USING CRAMER'S RULE

Recall that a system of "n" equations in "n" unknowns can be modeled as a matrix operation AX = B.

Let:

A  → coefficient matrix
xi → the ith variable
B  → right-hand-side constants
Ai → matrix resulting from replacing the ith column of A by the column vector of constants B

The solution of the system of equations can be determined by using the formula:

xi = |Ai| / |A|

Notice that regardless of the variable i that is computed, the denominator of the above formula is fixed at |A|. Therefore, it is suggested that the determinant of the coefficient matrix be the first to be computed.

Example: Using Cramer's Rule, determine the values of x1, x2 and x3 that simultaneously satisfy the following system of equations.

Solution:

Compute for the determinant of A first.

Now, let us compute for the value of x1 by using the formula x1 = |A1| / |A|. The right-hand-side matrix B is given. To set up the matrix A1, all you have to do is replace the first column of A by B; its determinant divided by |A| gives x1.

Applying the same process to solve for x2 and x3:
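A short sketch of Cramer's Rule in NumPy; since the matrices of this example are not reproduced above, the code reuses the system from the earlier inverse-method example.

    import numpy as np

    A = np.array([[1.0, -1.0, 1.0],
                  [1.0,  2.0, 3.0],
                  [4.0, -2.0, 3.0]])
    B = np.array([1.0, 6.0, 5.0])

    det_A = np.linalg.det(A)           # compute |A| first; it must be non-zero
    X = np.empty(3)
    for i in range(3):
        A_i = A.copy()
        A_i[:, i] = B                  # replace the ith column of A by B
        X[i] = np.linalg.det(A_i) / det_A   # xi = |Ai| / |A|
    print(X)                           # approximately [1. 1. 1.]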

SOLUTION TO SYSTEM OF LINEAR EQUATIONS USING L-U FACTORIZATION

Direct L-U Factorization:

In theory, any square matrix A may be factored into a product of lower and upper triangular matrices, A = LU.

Let us take the case of a 4th order matrix:

Notice that the diagonal elements of the upper triangular matrix have been set to values of 1 for reasons of simplicity. (L-U factorization is not unique.)

From matrix multiplication, we know that each element of A equals the product of the corresponding row of L and column of U; equating these element by element gives a set of equations that can be solved successively for the unknown entries of L and U.

How do we get the solution to a system of equations using the L-U Decomposition Method?

Recall: A system of equations can be written as a compact matrix operation AX = B.

If we factor the coefficient matrix A as L*U and substitute into AX = B, we can generate the equation L(UX) = B. Momentarily define UX = Y, which suggests LY = B. From this transformation, we have actually decomposed AX = B into two systems of equations.
Two-stage solution:

I.  Solve for Y in the equation LY = B using forward substitution.
II. Solve for X in the equation UX = Y using back substitution.

Example: Determine the values of the xi's in the given system.

Knowing that A = LU, we therefore obtain L and U.

Stage 1: Forward substitution using LY = B

Note that the computed values of the yi's here are not yet the solution, since the original system of equations is in terms of the xi's.

Stage 2: Back substitution using UX = Y

This time (x1, x2, x3) = (1, 2, 3) is the solution to the original system of equations.
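The two-stage procedure can be sketched in Python as follows. The factorization below follows the convention stated earlier (the diagonal of U set to 1); the 3 x 3 system used here is an assumed example, not the one worked above.

    import numpy as np

    def lu_factor(A):
        """A = LU with a unit diagonal on U (no pivoting)."""
        n = A.shape[0]
        L = np.zeros((n, n))
        U = np.eye(n)                          # diagonal of U set to 1
        for j in range(n):
            for i in range(j, n):              # column j of L
                L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
            for i in range(j + 1, n):          # row j of U
                U[j, i] = (A[j, i] - L[j, :j] @ U[:j, i]) / L[j, j]
        return L, U

    def solve_lu(A, b):
        L, U = lu_factor(A)
        n = len(b)
        y = np.zeros(n)
        for i in range(n):                     # Stage 1: forward substitution, LY = B
            y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):         # Stage 2: back substitution, UX = Y
            x[i] = y[i] - U[i, i + 1:] @ x[i + 1:]
        return x

    A = np.array([[2.0, 1.0, 1.0],
                  [4.0, 3.0, 1.0],
                  [2.0, 1.0, 3.0]])
    b = np.array([7.0, 15.0, 11.0])
    print(solve_lu(A, b))                      # [1. 3. 2.]
    print(np.allclose(A @ solve_lu(A, b), b))  # True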

AUGMENTED MATRIX OF A AND B

If A is an m x n matrix and B is an m x p matrix, then the augmented matrix of A and B, denoted by [A : B], is the matrix formed by the elements of A and B separated by pipes.

Example: If A and B are the given matrices, then [A : B] is obtained by writing the columns of B to the right of the columns of A.

The augmented matrix associated with a system of linear equations AX = B is the matrix [A : B]. For example, we can now rewrite the system of equations compactly in the form [A : B].

ECHELON FORM OF A MATRIX

An m x n matrix A is said to be in row echelon form if it satisfies the following properties:

1. All rows whose elements are all zeros, if any exist, are at the bottom of the matrix.
2. If at least one element in a row is not equal to zero, the first non-zero element is 1, and this is called the leading entry of the row.
3. If two successive rows of the matrix have leading entries, the leading entry of the lower row must appear to the right of the leading entry of the row above it.

An m x n matrix A is said to be in reduced row echelon form if, in addition to the first three properties, it satisfies a fourth property:

4. If a column contains a leading entry of some row, then all the other entries in that column must be zero.

Example:

The following matrices are not in row echelon form. (Why not?)

The following matrices are in row echelon form but not in reduced row echelon form.

The following matrices are in reduced row echelon form. (Hence, they are also in row echelon form.)

ELEMENTARY ROW (COLUMN) OPERATIONS ON MATRICES

An elementary row (column) operation on a matrix A is any one of the following operations:

Type I.   Interchange any two rows (columns).
Type II.  Multiply a row (column) by a non-zero constant k.
Type III. Add to the elements of a row (column) k times the corresponding elements of another row (column).

Example: Let A be the given matrix.

Interchanging rows 1 and 3 of A (R1↔R3), we obtain

Multiplying row 3 by ½ (R3' → ½R3), we obtain

Adding 3 times the elements in row 1 to the elements in row 2 (R2' → R2 + 3R1), we obtain

ELEMENTARY ROW OPERATIONS AS APPLIED TO A SYSTEM OF EQUATIONS [A : B]

As applied to the augmented matrix [A : B] of a system of equations, the three elementary row operations correspond to the following:

TYPE I   → rearranging the order of the equations
TYPE II  → multiplying both sides of an equation by a constant
TYPE III → working with two equations (adding a multiple of one equation to another)

From this observation, we can see that, as applied to a system of equations, the elementary row operations do not alter the solution of the system.

ROW (COLUMN) EQUIVALENT MATRICES

An m x n matrix A is row (column) equivalent to an m x n matrix B if B can be obtained from A by applying a finite sequence of elementary row (column) operations.

THEOREMS ON MATRIX EQUIVALENCE

1. Every nonzero m x n matrix A = [aij] is row (column) equivalent to a matrix in row (column) echelon form.

2. Every nonzero m x n matrix A = [aij] is row (column) equivalent to a matrix in reduced row (column) echelon form.

3. Let AX = B and CX = D be two systems of "m" linear equations in "n" unknowns. If the augmented matrices [A : B] and [C : D] are row equivalent, then the linear systems are equivalent (i.e. they have exactly the same solutions).

4. As a corollary to the third theorem, if A and B are row equivalent matrices, then the homogeneous systems AX = 0 and BX = 0 are equivalent.

SOLUTIONS TO A SYSTEM OF "m" EQUATIONS IN "n" UNKNOWNS

In general, a system of "m" equations in "n" unknowns may be written in matrix form AX = B. This system may now be represented by the augmented notation [A : B].

Applying the theorems on equivalent matrices, we now have the following methods of solution:

GAUSSIAN ELIMINATION METHOD

The objective of the Gaussian Elimination Method is to transform the augmented matrix [A : B] into a matrix [A* : B*] in row echelon form by applying a series of elementary row transformations. Getting the solution of the system [A* : B*] using back substitution will also give the solution to the original system [A : B].

To reduce any matrix to row echelon form, apply the following steps:

1. Find the leftmost non-zero column.
2. If the 1st row has a zero in the column of step 1, interchange it with a row that has a non-zero entry in the same column.
3. Obtain zeros below the leading entry by adding suitable multiples of the top row to the rows below it.
4. Cover the top row and repeat the same process starting with step 1, applied to the leftover submatrix. Repeat this process with the rest of the rows.
5. For each row, obtain a leading entry of 1 by dividing each row by its leading entry.

Example: The linear system

has the augmented matrix associated to the system

which can be transformed to a matrix in row echelon form

Using back substitution, we have

thus we have the solution (x, y, z) = (2, -1, 3).
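A sketch of the Gaussian Elimination Method in NumPy, applied to the augmented matrix of the system from the earlier inverse-method example (the system of this example is not reproduced above):

    import numpy as np

    M = np.array([[1.0, -1.0, 1.0, 1.0],       # augmented matrix [A : B]
                  [1.0,  2.0, 3.0, 6.0],
                  [4.0, -2.0, 3.0, 5.0]])
    n = 3

    for i in range(n):
        if M[i, i] == 0:                       # interchange if the pivot is zero
            for r in range(i + 1, n):
                if M[r, i] != 0:
                    M[[i, r]] = M[[r, i]]
                    break
        for r in range(i + 1, n):              # zero out the entries below the pivot
            M[r] -= (M[r, i] / M[i, i]) * M[i]
        M[i] /= M[i, i]                        # make the leading entry 1 (row echelon form)

    x = np.zeros(n)
    for i in range(n - 1, -1, -1):             # back substitution
        x[i] = M[i, n] - M[i, i + 1:n] @ x[i + 1:]
    print(x)                                   # approximately [1. 1. 1.]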


GAUSS-JORDAN REDUCTION METHOD

On the other hand, a second method, called the Gauss-Jordan Reduction Method, gets rid of the back substitution phase. The objective of the Gauss-Jordan Reduction Method is to transform the augmented matrix [A : B] into a matrix [A* : B*] in reduced row echelon form by applying a series of elementary row transformations. Doing this will automatically give the solution of the system [A* : B*], which also provides the solution to the original system [A : B].

To reduce any matrix to reduced row echelon form, apply the following steps (SINE):

1. Search – search the ith column of the augmented matrix from the ith row to the nth row for the maximum pivot, i.e. the element with the largest absolute value.

2. Interchange – assuming the maximum pivot occurs in the jth row, interchange the ith row and the jth row so that the maximum pivot now occurs in the diagonal position.
3. Normalize – normalize the new ith row by dividing it by the maximum pivot in the diagonal position.
4. Eliminate – eliminate the ith column from the first up to the nth equation, except in the ith equation itself, using the transformations.

Example: The linear system

has the augmented matrix associated to the system

which can be transformed to a matrix in reduced row echelon form

thus we have the solution (x, y, z) = (2, -1, 3).
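The SINE steps can be sketched in the same way; once the matrix is in reduced row echelon form, the last column holds the solution directly and no back substitution is needed. The augmented matrix below is the same assumed example used for Gaussian elimination above.

    import numpy as np

    M = np.array([[1.0, -1.0, 1.0, 1.0],       # augmented matrix [A : B]
                  [1.0,  2.0, 3.0, 6.0],
                  [4.0, -2.0, 3.0, 5.0]])
    n = 3

    for i in range(n):
        p = i + np.argmax(np.abs(M[i:, i]))    # Search: maximum pivot in column i
        M[[i, p]] = M[[p, i]]                  # Interchange rows i and p
        M[i] /= M[i, i]                        # Normalize the pivot row
        for r in range(n):                     # Eliminate column i from every other row
            if r != i:
                M[r] -= M[r, i] * M[i]

    print(M[:, n])                             # solution column: approximately [1. 1. 1.]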

SUBMATRIX AND RANK

SUBMATRIX

A submatrix of A = [aij] is any matrix obtained by eliminating some rows and/or columns of the matrix A.

Example: Let A be the given matrix. The following are some submatrices of A:

RANK OF A MATRIX

The rank of a matrix A = [aij] is the order of the largest square submatrix of A with a non-zero determinant. We denote the rank of A by rank(A) or simply r(A).

Example: What is the rank of A?

Solution:

Checking first the determinants of the 3x3 submatrices (e.g. ...):

Since at least one 3x3 submatrix of A has a non-zero determinant, r(A) = 3.

Example: What is the rank of B?

Solution:

The determinant of B is equal to zero (THEOREM: proportional rows), and it can also be shown that the 3x3 submatrices of B all have determinants equal to zero (rows are proportional). But at least one 2x2 submatrix has a non-zero determinant.

Therefore r(B) = 2.

THEOREMS ON RANKS

1. The rank of a matrix is not altered by any sequence of elementary row (column) transformations.

2. Let A = [aij] and B = [bij] be two m x n matrices; if rank(A) = rank(B), then A and B are equivalent.

3. If A = [aij] and B = [bij] are n x n matrices, and rank(A) = rank(B) = n, then rank(AB) = rank(BA) = n.

Example: What is the rank of C?

Solution:

Operating on the rows of matrix C, we obtain the equivalent matrix C'.

We can easily see that all 5x5, 4x4 and 3x3 submatrices of C' have determinants equal to zero (THEOREM: identical rows). But at least one 2x2 submatrix of C' has a non-zero determinant.

Consequently r(C’) = 2. But C and C’ are equivalent
matrices and hence they have equal ranks. Therefore r(C) is
also equal to 2.
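For checking rank numerically, NumPy's matrix_rank can be used; it returns the rank, which by the definition above equals the order of the largest square submatrix with a non-zero determinant. The matrices below are assumed examples, not the A, B and C of this section.

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 10]])
    B = np.array([[1, 2, 3],
                  [2, 4, 6],          # proportional to row 1
                  [1, 1, 1]])

    print(np.linalg.matrix_rank(A))   # 3: |A| is non-zero
    print(np.linalg.matrix_rank(B))   # 2: |B| = 0, but some 2x2 submatrix is non-singular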

RANKS AND THE TYPES OF SOLUTION TO A SYSTEM OF EQUATIONS

Recall that a system of "m" linear equations in "n" unknowns can be written as AX = B. We can associate the system of equations with the augmented matrix of the system, [A:B].

The type of solution may be classified as unique, non-unique or inconsistent. Applying the concept of rank to the augmented matrix [A:B], we have the following propositions:

1. If r(A) = r([A:B]) = n, then the solution to the system is unique.

Example:

2. If r(A) = r([A:B]) < n, then the solution to the system is non-unique.

Example:

3. If r(A) < r([A:B]), then the system has no solution, i.e., it is inconsistent.

Example:

Example 1: Rank and the Type of Solution to a System

For what values of k will the system of equations have
a) a unique solution
b) a non-unique solution
c) no solution

Solution: In augmented matrix form, we have:

Performing the Gaussian Elimination Method:

Therefore we have the following conclusions:

a) For a unique solution, r(A) = r[A:B] = n. There will be no value of k that will satisfy this, since r(A) = 2 < n = 3.

b) For non-unique solutions, r(A) = r[A:B] < n. This will be satisfied if r[A:B] is also equal to 2, which will happen when the last element in the third row of the augmented matrix is also equal to zero:

8k² - 8k = 0  ⇒  k = 0, 1.

c) For the system to be inconsistent, r(A) < r[A:B]. This will be satisfied if r[A:B] = 3 > 2, which will happen when the last element in the third row of the augmented matrix is not equal to zero:

8k² - 8k ≠ 0  ⇒  k ≠ 0, 1.

Example 2:

For what values of m will the system of equations have
d) a unique solution
e) a non-unique solution
f) no solution

Solution: In augmented matrix form, we have:

Performing the Gaussian Elimination Method:

Therefore we have the following conclusions:

a) For a unique solution, r(A) = r[A:B] = n. This will be satisfied if m² - 1 ≠ 0 and -m - 2 ≠ 0. Thus we have a unique solution if m ≠ ±1, -2.

b) For non-unique solutions, r(A) = r[A:B] < n.


This will happen when the last element in the third row of matrix A and the last element in the third row of the augmented matrix are both equal to zero:

m² – 1 = 0  and  m + 2 = 0.

There is no value of m that will satisfy both equations. The other value of m to be checked is m = -2. Substituting this into the system gives:

Clearly, we can see that the resulting system gives a non-unique solution because r(A) = r[A:B] = 2 < 3. Thus, the system gives non-unique solutions when m = -2.

c) For the system to be inconsistent, r(A) < r[A:B]. This will be satisfied if r[A:B] = 3 > 2, which will happen when the last element in the third row of the augmented matrix is not equal to zero but the last element of the third row of A is equal to zero:

m² – 1 = 0  and  m + 2 ≠ 0.

Thus, the system is inconsistent when m = ±1.
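The three propositions can be packaged into a small rank test. The sketch below (Python with NumPy) compares r(A), r([A:B]) and n; the coefficient matrix and right-hand sides are assumed examples, not the systems of Examples 1 and 2.

    import numpy as np

    def classify(A, B):
        n = A.shape[1]                               # number of unknowns
        aug = np.column_stack((A, B))                # augmented matrix [A : B]
        rA = np.linalg.matrix_rank(A)
        rAug = np.linalg.matrix_rank(aug)
        if rA == rAug == n:
            return "unique solution"
        if rA == rAug < n:
            return "non-unique (infinitely many) solutions"
        return "no solution (inconsistent)"          # rA < rAug

    A = np.array([[1.0, -1.0, 1.0],
                  [1.0,  2.0, 3.0],
                  [2.0, -2.0, 2.0]])                 # third row = 2 x first row
    print(classify(A, np.array([1.0, 6.0, 2.0])))    # non-unique solutions
    print(classify(A, np.array([1.0, 6.0, 5.0])))    # inconsistent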

