
MATRIX AND ITS APPLICATIONS

TABLE OF CONTENTS

CHAPTER ONE

1.1 Background of the Study

1.2 Scope of the Study

1.3 Significance of the Study

1.4 Types of Matrices

1.5 Special Matrices


CHAPTER ONE

INTRODUCTION AND LITERATURE REVIEW

1.1 BACKGROUND OF THE STUDY

The introduction and development of the notion of a matrix and the subject of linear algebra followed the development of determinants, which arose from the study of coefficients of systems of linear equations. Leibniz, one of the founders of calculus, used determinants in 1693, and Cramer presented his determinant-based formula for solving systems of linear equations (today known as Cramer's rule) in 1750.

The first implicit use of matrices occurred in Lagrange's work on bilinear forms in the late 1700s. Lagrange desired to characterize the maxima and minima of multivariate functions. His method is now known as the method of Lagrange multipliers. In order to do this he first required the first-order partial derivatives to be zero and additionally required that a condition on the matrix of second-order partial derivatives hold; this condition is today called positive or negative definiteness, although Lagrange did not use matrices explicitly.


Gauss developed elimination around 1800 and used it to solve least squares problems in celestial computations and later in computations to measure the earth and its surface (the branch of applied mathematics concerned with measuring or determining the shape of the earth, or with locating points exactly on the earth's surface, is called geodesy). Even though Gauss's name is associated with this technique of eliminating variables from systems of linear equations, there was earlier work on the subject.

Chinese manuscripts from several centuries earlier have been found that explain how to solve a system of three equations in three unknowns by "Gaussian" elimination. For years Gaussian elimination was considered part of the development of geodesy, not mathematics. The first appearance of Gauss-Jordan elimination in print was in a handbook on geodesy written by Wilhelm Jordan. Many people incorrectly assume that the famous mathematician Camille Jordan is the Jordan in "Gauss-Jordan elimination".

For matrix algebra to develop fruitfully, one needed both proper notation and a proper definition of matrix multiplication. Both needs were met at about the same time in the same place. In 1848 in England, J. J. Sylvester first introduced the term "matrix", the Latin word for "womb", as a name for an array of numbers.

Matrix algebra was nurtured by the work of Arthur Cayley in 1855. Cayley defined matrix multiplication so that the matrix of coefficients for the composite transformation ST is the product of the matrix S times the matrix T. He went on to study the algebra of these compositions, including matrix inverses. The famous Cayley-Hamilton theorem, which asserts that a square matrix is a root of its characteristic polynomial, was given by Cayley in his 1858 memoir on the theory of matrices. The use of a single letter A to represent a matrix was crucial to the development of matrix algebra. Early in the development, the formula det(AB) = det(A) det(B) provided a connection between matrix algebra and determinants. Cayley wrote, "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants".

Mathematicians also attempted to develop an algebra of vectors, but there was no natural definition of the product of two vectors that held in arbitrary dimensions. The first vector algebra that involved a non-commutative vector product (that is, V x W need not equal W x V) was proposed by Hermann Grassmann in his book Ausdehnungslehre (1844). Grassmann's text also introduced the product of a column matrix and a row matrix, which results in what is now called a simple or rank-one matrix. In the late 19th century the American mathematical physicist Willard Gibbs published his famous treatise on vector analysis. In that treatise Gibbs represented general matrices, which he called dyadics, as sums of simple matrices, which he called dyads. Later the physicist P. A. M. Dirac introduced the term "bracket" for what we now call the scalar product of a "bra" (row) vector times a "ket" (column) vector, and the term "ket-bra" for the product of a ket times a bra, resulting in what we now call a simple matrix, as above. Our convention of identifying column matrices with vectors was introduced by physicists in the 20th century.

Matrices continued to be closely associated with linear transformations. By 1900, they were just a finite-dimensional subcase of the emerging theory of linear transformations. The modern definition of a vector space was introduced by Peano in 1888. Abstract vector spaces whose elements were functions soon followed. There was renewed interest in matrices, particularly in the numerical analysis of matrices; John von Neumann and Herman Goldstine introduced condition numbers in analyzing round-off errors in 1947. Alan Turing and von Neumann were the 20th-century giants in the development of stored-program computers. Turing introduced the LU decomposition of a matrix in 1948. The L is a lower triangular matrix with 1's on the diagonal and the U is an echelon matrix. It is common to use the LU decomposition in the solution of a sequence of systems of linear equations, each having the same coefficient matrix. The benefit of the QR decomposition was realized a decade later. The Q is a matrix whose columns are orthonormal vectors and R is a square upper triangular invertible matrix with positive entries on its diagonal.

The QR factorization is used in computer algorithms for various computations, such as solving equations and finding eigenvalues. Frobenius in 1878 wrote an important work on matrices, linear substitutions and bilinear forms, although he seemed unaware of Cayley's work. However, he proved important results on canonical matrices as representatives of equivalence classes of matrices. He cites Kronecker and Weierstrass as having considered special cases of his results in 1868 and 1874 respectively.

Frobenius also proved the general result that a matrix satisfies its characteristic equation. This 1878 paper by Frobenius also contains the definition of the rank of a matrix, which he used in his work on canonical forms, and the definition of orthogonal matrices.

An axiomatic definition of a determinant was used by Weierstrass in his lectures and, after his death, published in 1903 in a note on determinant theory. In the same year Kronecker's lectures on determinants were also published posthumously. With these two publications, the modern theory of determinants was in place, but matrix theory took slightly longer to become a fully accepted theory. An important early text which brought matrices into their proper place within mathematics was Bôcher's Introduction to Higher Algebra in 1907. Turnbull and Aitken wrote influential texts in the 1930s, and Mirsky's An Introduction to Linear Algebra in 1955 saw matrix theory reach its present major role as one of the most important undergraduate mathematics topics.

1.2 SCOPE OF STUDY

In this study we focus on m x n matrices of different orders, i.e. 2 x 3, 3 x 2, 3 x 3, etc.; the algebra of matrices, i.e. the operations of addition, subtraction, scalar multiplication and matrix multiplication (under which we consider powers of matrices), and whether division is defined for matrices; and determinants of different orders, starting with orders 2 and 3.

We also treat the square matrix (under which we consider cofactors and the adjoint) and the different properties of determinants; the inverse of a square matrix; the product of a square matrix and its inverse; special types of square matrices; and the different applications of matrices.

1.3 SIGNIFICANCE OF STUDY

Matrices are key tools in linear algebra. One of the uses of matrices is to represent linear transformations, which are higher-dimensional analogues of linear functions, and matrix multiplication corresponds to composition of linear transformations. This is used in computer graphics to project 3-dimensional space onto a 2-dimensional screen.
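As a minimal sketch of this idea (the project itself contains no code, so the choice of Python with the numpy library is our own illustrative assumption), a 2 x 3 matrix can project a 3-dimensional point onto a 2-dimensional screen; the orthographic projection matrix P below is a hypothetical example:

    import numpy as np

    # A 2 x 3 orthographic projection matrix: it simply drops the z-coordinate.
    P = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])

    point_3d = np.array([2.0, 5.0, 7.0])  # a point in 3-dimensional space
    point_2d = P @ point_3d               # its image on the 2-dimensional screen
    print(point_2d)                       # [2. 5.]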

A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations. For a square matrix, the determinant and the inverse matrix (when it exists) govern the behaviour of solutions of the corresponding system of linear equations, and eigenvalues and eigenvectors provide insight into the geometry of the associated linear transformation. The study of matrices is applicable to every aspect of human endeavour.
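These quantities are all available numerically; a Python/numpy sketch (the matrix chosen is the one used later in section 1.5.9):

    import numpy as np

    A = np.array([[2.0, 5.0],
                  [1.0, 3.0]])

    print(np.linalg.det(A))   # 1.0 (approximately), so A is non-singular
    print(np.linalg.inv(A))   # [[ 3. -5.]  [-1.  2.]]
    w, v = np.linalg.eig(A)   # eigenvalues w, eigenvectors as the columns of v
    print(w)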

1.4 TYPES OF MATRICES

1.4.1 Row Matrix

A row matrix consists of one row only, e.g. (3 2 4) is a row matrix of order 1 x 3.

1.4.2 Column Matrix

A column matrix is a matrix having only one column, e.g.

3
2
4

is a column matrix of order 3 x 1. To conserve space in printing, a column matrix is sometimes written on one line with "curly" brackets, e.g. {3 2 4}, and this is the same matrix of order 3 x 1.


1.4.3 Single Element Matrix

A single number may be regarded as a matrix: [x] is a matrix having 1 row and 1 column.

1.4.4 Double Suffix Matrix

Each element in a matrix has its own particular address or location, which can be defined by a system of double suffixes, the first indicating the row and the second the column. Thus, in

a11 a12 a13 a14
a21 a22 a23 a24
a31 a32 a33 a34

a32 indicates the element in the third row and second column.

1.4.5 Matrix Notation

A whole matrix can be denoted by a single general element enclosed in brackets, or by a single letter printed in bold type. This is a very neat shorthand and saves much space; for example,

a11 a12 a13
a21 a22 a23
a31 a32 a33

can be denoted by (aij) or [aij] or by A.
1.5 SPECIAL MATRICES

1.5.1 Square Matrix

A square matrix is a matrix of order m x m, i.e. with the same number of rows and columns, e.g.

1 2 5
6 8 9   is a 3 x 3 matrix
1 7 4

A square matrix (aij) is symmetric if aij = aji, e.g.

1 2 5
2 8 9
5 9 4

i.e. it is symmetrical about the leading diagonal. Note: A = AT.

A square matrix (aij) is skew-symmetric if aij = -aji (so its diagonal elements are zero), e.g.

 0  2  5
-2  0  9
-5 -9  0

In this case AT = -A.

1.5.2 Diagonal Matrix

A square matrix is called a diagonal matrix if all its non-diagonal elements are zero, e.g.

1 0 0
0 3 0
0 0 4

1.5.3 Unit or Identity Matrix

A square matrix is called a unit matrix if all the diagonal

elements are unity and non-diagonal elements are zero e.g.

(i) 1 0 0
0 1 0
0 0 1

(ii) 1 0
0 1

1.5.4 Null Matrix or Zero Matrix

Any matrix in which all the elements are zero is called a zero matrix or null matrix, e.g.

0 0 0
0 0 0
0 0 0

1.5.5 Equal Matrices

Two matrices are said to be equal if:

(i) they are of the same order;

(ii) the elements in corresponding positions are equal.

Thus if A = 2 3   and B = 2 3
            1 4           1 4

then A = B.
1.5.6 Singular Matrix

If the determinant of a matrix is zero, then the matrix is known as a singular matrix, e.g. if

A = 1 2
    2 4

then |A| = 1(4) - 2(2) = 0, so A is a singular matrix.

1.5.7 Triangular Matrix (Echelon Form)

A square matrix all of whose elements below the leading diagonal are zero is called an upper triangular matrix. A square matrix all of whose elements above the leading diagonal are zero is called a lower triangular matrix, e.g.

1 3 2                      1 0 0
0 4 1                      4 1 0
0 0 0                      6 8 5
Upper triangular matrix    Lower triangular matrix

1.5.8 Orthogonal Matrix

A square matrix A is called an orthogonal matrix if the product of the matrix A and its transpose AT is the unit matrix, i.e.

A.AT = I

If |A| = 1, the matrix A is called proper.

1.5.9 Non-Singular or Invertible Matrix

A matrix A is called a non-singular matrix if its inverse exists, i.e. if there is a matrix B with AB = BA = I. For example, where

A = 2 5   and B =  3 -5
    1 3           -1  2

AB = 2(3) + 5(-1)   2(-5) + 5(2)   =  1 0  = I
     1(3) + 3(-1)   1(-5) + 3(2)      0 1

How to get the inverse of matrix A:

To get the inverse of matrix A the following rules must be observed or followed:

1. Interchange the two elements on the leading diagonal.

2. Take the negative of the other two elements.

3. Multiply the resulting matrix by 1/|A| or, equivalently, divide each element by |A|. In case |A| = 0, the matrix A is not invertible, i.e. it is singular.

Expressing the process of inverting matrices as a rule, we do the following:

i. We get the minors of the matrix.

ii. We sign the minors with the rule (-1)^(i+j) to obtain the cofactors.

iii. We transpose the matrix of cofactors.

iv. We multiply the result by the reciprocal of the determinant of the original matrix, i.e. by 1/|A|.

Using the matrix A = 2 5 , where adj A = CT (the transpose of the matrix C of cofactors:)
                     1 3

cofactor of a11 = (-1)^(1+1)(3) = 3,   cofactor of a12 = (-1)^(1+2)(1) = -1,
cofactor of a21 = (-1)^(2+1)(5) = -5,  cofactor of a22 = (-1)^(2+2)(2) = 2.

Putting these in matrix form, C =  3 -1   and CT =  3 -5
                                  -5  2            -1  2

Since |A| = 2(3) - 5(1) = 1, A⁻¹ = (1/|A|) CT =  3 -5
                                                -1  2
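The swap/negate/divide rule above translates directly into code; a minimal Python sketch (the function name inverse_2x2 is our own):

    def inverse_2x2(a, b, c, d):
        # Inverse of the 2 x 2 matrix [[a, b], [c, d]] by the rule above:
        # interchange the diagonal, negate the off-diagonal, divide by |A|.
        det = a * d - b * c
        if det == 0:
            raise ValueError("matrix is singular, so no inverse exists")
        return [[ d / det, -b / det],
                [-c / det,  a / det]]

    print(inverse_2x2(2, 5, 1, 3))   # [[3.0, -5.0], [-1.0, 2.0]], as worked above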
1.5.10 Conjugate of a Matrix

The conjugate of a matrix A, denoted Ā, is obtained by replacing each element by its complex conjugate. Let

A = (1 + i   2 - 3i   4)

Then the conjugate of matrix A is

Ā = (1 - i   2 + 3i   4)
1.5.11 Idempotent Matrix

A matrix such that A² = A is called an idempotent matrix, e.g.

A =  2 -2 -4
    -1  3  4
     1 -2 -3

A² =  4+2-4   -4-6+8   -8-8+12      2 -2 -4
     -2-3+4    2+9-8    4+12-12  = -1  3  4  = A
      2+2-3   -2-6+6   -4-8+9       1 -2 -3
1.5.12 Periodic Matrix

A matrix A will be called a periodic matrix if A^(k+1) = A, where k is a positive integer. If k is the least positive integer for which A^(k+1) = A, then k is said to be the period of A. If we choose k = 1 we get A² = A, and we call A an idempotent matrix.

1.5.13 Nilpotent Matrix

A matrix will be called a nilpotent matrix if A^k = 0 (the null matrix), where k is a positive integer; if k is the least positive integer for which A^k = 0, then k is the index of the nilpotent matrix, e.g.

A =  ab   b²  ,  A² =  ab   b²    ab   b²   =  0 0  = 0
    -a²  -ab          -a²  -ab   -a²  -ab      0 0
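A quick numerical check of this example (a Python/numpy sketch under our own choice a = 1, b = 1):

    import numpy as np

    # The matrix of the example with a = 1, b = 1.
    A = np.array([[ 1,  1],
                  [-1, -1]])

    print(A @ A)   # [[0 0], [0 0]]: A is nilpotent with index 2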
1.5.14 Involutory Matrix

A matrix A will be called an involutory matrix if A² = I (the unit matrix). Since I² = I always, a unit matrix is itself an involutory matrix.

1.5.15 Transpose of a Matrix

If in a given matrix A we interchange the rows and the corresponding columns, the new matrix obtained is called the transpose of matrix A and is denoted by A' or AT, e.g.

A = 2 3 4    AT = 2 1
    1 0 5         3 0
                  4 5
1.5.16 MATRIX Aθ

The transpose of the conjugate Ā of a matrix A is denoted by Aθ. Let

A = (1 + i   2 - 3i   4)

Then Ā = (1 - i   2 + 3i   4) and

Aθ = 1 - i
     2 + 3i
     4
1.5.17 Unitary Matrix

A square matrix A is said to be unitary if AθA = I, where Aθ is the transpose of the conjugate Ā of A, e.g. let

A = (1-i)/2   (-1+i)/2
    (1+i)/2    (1+i)/2

Then

Ā = (1+i)/2   (-1-i)/2
    (1-i)/2    (1-i)/2

and

Aθ = (1+i)/2    (1-i)/2
     (-1-i)/2   (1-i)/2

so that

AθA = (1+i)(1-i)/4 + (1-i)(1+i)/4      (1+i)(-1+i)/4 + (1-i)(1+i)/4
      (-1-i)(1-i)/4 + (1-i)(1+i)/4     (-1-i)(-1+i)/4 + (1-i)(1+i)/4

    = 2/4 + 2/4    -2/4 + 2/4   =  1 0  = I
     -2/4 + 2/4     2/4 + 2/4      0 1

Hence A is unitary.
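The unitary property is easy to verify numerically; a Python/numpy sketch of the check (Aθ is formed as the conjugate transpose):

    import numpy as np

    A = np.array([[1 - 1j, -1 + 1j],
                  [1 + 1j,  1 + 1j]]) / 2

    A_theta = A.conj().T                          # transpose of the conjugate of A
    print(np.allclose(A_theta @ A, np.eye(2)))    # True, so A is unitary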
CHAPTER TWO

LAWS AND ALGEBRAS OF MATRICES

2.1 Introduction

This chapter discusses the different laws of matrices: the laws of addition, subtraction, scalar multiplication and matrix multiplication, and the properties of matrices under these operations.

2.2 Addition of Matrices

Let A = [aij] and B = [bij] be two matrices of the same order m x n. Then their sum A + B is defined as the matrix each element of which is the sum of the corresponding elements of A and B:

A = a11 a12 ... a1n    B = b11 b12 ... b1n
    a21 a22 ... a2n        b21 b22 ... b2n
    ...                    ...
    am1 am2 ... amn        bm1 bm2 ... bmn

Adding both matrices, we have

A + B = a11+b11  a12+b12 ... a1n+b1n
        a21+b21  a22+b22 ... a2n+b2n
        ...
        am1+bm1  am2+bm2 ... amn+bmn

In short, if A = [aij] and B = [bij], then A + B = [aij + bij].


2.3 Laws of Matrix Addition

Only matrices of the same order can be added or subtracted. Let A, B, C be arbitrary matrices of the same order. Then, using the definition of the sum of matrices and the laws of addition of real or complex numbers:

1. Commutative Law:

A + B = [aij] + [bij] = [bij + aij] = B + A.

2. Associative Law:

A + (B + C) = [aij + (bij + cij)] = [(aij + bij) + cij] = (A + B) + C.

Proof of (1)

Let A and B be arbitrary matrices of the same order. To show that addition is commutative:

A + B = (aij) + (bij)

      = a11+b11  a12+b12 ... a1n+b1n
        ...
        am1+bm1  am2+bm2 ... amn+bmn

      = b11+a11  b12+a12 ... b1n+a1n    (addition of numbers is commutative)
        ...
        bm1+am1  bm2+am2 ... bmn+amn

      = (bij) + (aij) = B + A, as required, and the proof is complete.

Proof of (2)

Let A, B and C be arbitrary matrices of the same order. To show associativity, we have

A + (B + C) = (aij) + (bij + cij) = [aij + (bij + cij)]

Since addition of numbers is associative, aij + (bij + cij) = (aij + bij) + cij for every i, j, so

A + (B + C) = [(aij + bij) + cij] = (aij + bij) + (cij) = (A + B) + C.
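Both laws can also be checked numerically; a small Python/numpy sketch with arbitrarily chosen matrices:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])
    C = np.array([[9, 0], [1, 2]])

    print(np.array_equal(A + B, B + A))              # True: commutative law
    print(np.array_equal((A + B) + C, A + (B + C)))  # True: associative law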

2.4 SUBTRACTION OF MATRICES

Let A = [aij] and B = [bij] be two matrices of the same order. The difference of A and B is written A - B, i.e.

A - B = [aij] - [bij]

      = a11-b11  a12-b12 ... a1n-b1n
        ...
        am1-bm1  am2-bm2 ... amn-bmn
Example:

1. Let A = a11 a12   and B = b11 b12
           a21 a22           b21 b22

Find A - B.

Solution:

A - B = a11-b11  a12-b12
        a21-b21  a22-b22

as required, and the solution is obtained.

2. Let A = a11 a12  0         B = b11  0   b13
           a21 a22  0              0  b22  b23
           a31  0  a33            b31 b32   0
            0  a42 a43            b41 b42  b43

then A - B = [aij - bij]

           = a11-b11  a12-0    0-b13
             a21-0    a22-b22  0-b23
             a31-b31  0-b32    a33-0
             0-b41    a42-b42  a43-b43

           = a11-b11  a12      -b13
             a21      a22-b22  -b23
             a31-b31  -b32     a33
             -b41     a42-b42  a43-b43

as required, and the solution is obtained.

2.5 SCALAR MULTIPLICATION

This means using a single number (i.e. a scalar) to multiply a matrix (i.e. each element of the matrix):

K.A = Ka11 Ka12 ... Ka1n
      Ka21 Ka22 ... Ka2n
      ...
      Kam1 Kam2 ... Kamn

where K is the scalar multiplier.

Example

(1) Let A = a11 -a12   Find K.A.
            a21 -a22

Solution:

Multiplying the matrix by the constant K, we have

K.A = K(a11)  K(-a12)  =  Ka11 -Ka12
      K(a21)  K(-a22)     Ka21 -Ka22

Some properties of matrices under this operation (α, β scalars):

(i) (α + β)A = αA + βA

(ii) α(A + B) = αA + αB

(iii) α(βA) = (αβ)A

2.6 MATRIX MULTIPLICATION

As stated before, when multiplying two matrices we must make sure the number of columns of the first is equal to the number of rows of the second. That is, suppose A = [aij] and B = [bij] are matrices; the number of columns of A must be equal to the number of rows of B. Say A is an m x n matrix and B is an n x p matrix; then AB is the m x p matrix whose ij-entry is obtained by multiplying the ith row of A by the jth column of B, that is

A x B = a11 a12 ... a1n   b11 b12 ... b1p     C11 C12 ... C1p
        ...               ...              =  ...
        am1 am2 ... amn   bn1 bn2 ... bnp     Cm1 Cm2 ... Cmp

where Cij = ai1 b1j + ai2 b2j + ... + ain bnj

          = Σ (k=1 to n) aik bkj

Example

(1) Find AB, where A = a11 a12   and B = b11  0  b13
                       a21 a22           b21 b22 b23

Solution:

Since A is a 2 x 2 matrix and B is a 2 x 3 matrix, the product AB is a 2 x 3 matrix. To obtain it, multiply the first row [a11 a12] of A by each column of B,

b11    0    b13
b21   b22   b23

respectively:

first row of AB = [a11 x b11 + a12 x b21,  a11 x 0 + a12 x b22,  a11 x b13 + a12 x b23]

To obtain the second row of AB, multiply the second row [a21 a22] of A by each column of B. Thus,

AB = a11b11 + a12b21   a12b22   a11b13 + a12b23
     a21b11 + a22b21   a22b22   a21b13 + a22b23
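The row-times-column rule is exactly what a direct implementation computes; a minimal Python sketch (the function name matmul is our own):

    def matmul(A, B):
        # Product of an m x n matrix A and an n x p matrix B, given as lists of rows.
        m, n, p = len(A), len(B), len(B[0])
        C = [[0] * p for _ in range(m)]
        for i in range(m):
            for j in range(p):
                # c_ij = a_i1 b_1j + a_i2 b_2j + ... + a_in b_nj
                C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
        return C

    print(matmul([[1, 2], [3, 4]], [[5, 0, 1], [2, 2, 3]]))
    # [[9, 4, 7], [23, 8, 15]] -- a 2 x 2 times a 2 x 3 gives a 2 x 3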

Properties of matrices under this operation:

Let A, B, C be matrices. Then, whenever the products and sums are defined:

1. (AB)C = A(BC) (associative law)

2. A(B + C) = AB + AC (left distributive law)

3. (A + B)C = AC + BC (right distributive law)

4. Multiplication by the unit matrix: AI = IA = A

5. The multiplicative inverse of a matrix exists if |A| ≠ 0

6. K(AB) = (KA)B = A(KB), where K is a scalar

7. In general AB ≠ BA (matrix multiplication is not commutative)

Proof

(1) To prove the associative law with respect to matrix multiplication, let A be m x n, B be n x p and C be p x q. The i,jth entry of (AB)C is the ith row of AB times the jth column of C:

i,jth entry of (AB)C = Σ (r=1 to p) (AB)ir crj

                     = Σ (r=1 to p) [ Σ (k=1 to n) aik bkr ] crj

                     = Σ (r=1 to p) Σ (k=1 to n) aik bkr crj ............ (*)

Similarly, the i,jth entry of A(BC) is the ith row of A times the jth column of BC:

i,jth entry of A(BC) = Σ (k=1 to n) aik (BC)kj

                     = Σ (k=1 to n) aik [ Σ (r=1 to p) bkr crj ]

                     = Σ (k=1 to n) Σ (r=1 to p) aik bkr crj ............ (**)

The double sums (*) and (**) contain exactly the same terms, so the i,jth entries agree for every i and j; hence (AB)C = A(BC).

(2) To show the left distributive property, consider the i,jth entry of A(B + C) = ith row of A times jth column of (B + C):

= (ai1, ai2, ..., ain) x  b1j + c1j
                          b2j + c2j
                          ...
                          bnj + cnj

= ai1(b1j + c1j) + ai2(b2j + c2j) + ... + ain(bnj + cnj)

= ai1b1j + ai1c1j + ai2b2j + ai2c2j + ... + ainbnj + aincnj

= (ai1b1j + ai2b2j + ... + ainbnj) + (ai1c1j + ai2c2j + ... + aincnj)

= Σ (k=1 to n) aik bkj + Σ (k=1 to n) aik ckj

= i,jth entry of AB + i,jth entry of AC

∴ A(B + C) = AB + AC

(3) Similarly, for the right distributive property, consider the i,jth entry of (A + B)C = ith row of (A + B) times jth column of C:

= (ai1 + bi1, ai2 + bi2, ..., ain + bin) x  c1j
                                            c2j
                                            ...
                                            cnj

= (ai1 + bi1)c1j + (ai2 + bi2)c2j + ... + (ain + bin)cnj

= ai1c1j + bi1c1j + ai2c2j + bi2c2j + ... + aincnj + bincnj

= (ai1c1j + ai2c2j + ... + aincnj) + (bi1c1j + bi2c2j + ... + bincnj)

= Σ (k=1 to n) aik ckj + Σ (k=1 to n) bik ckj

= i,jth entry of AC + i,jth entry of BC

∴ (A + B)C = AC + BC
(4) Let A = [aij] be the usual 2 x 2 matrix

A = a11 a12   and let I be the unit matrix  1 0
    a21 a22                                 0 1

Then

AI = a11 a12   1 0   =  a11+0  0+a12  =  a11 a12  = A
     a21 a22   0 1      a21+0  0+a22     a21 a22

Similarly,

IA = 1 0   a11 a12   =  a11+0  a12+0  =  a11 a12  = A
     0 1   a21 a22      0+a21  0+a22     a21 a22

∴ AI = IA = A

(6) This property has already been proven when we dealt with scalar multiplication. Please check the properties of scalar multiplication in 2.5.

Powers of Matrices

Let A be an n-square matrix over a field K. Powers of A are defined as follows:

A² = AA,  A³ = A²A, ...,  A^(n+1) = A^n A, ...,  and A⁰ = I

Note: it can be clearly seen that powers of matrices fall under matrix multiplication. Here is an example. Suppose

A = a11 a12
    a21 a22

Then

A² = a11 a12   a11 a12   =  a11² + a12a21     a11a12 + a12a22
     a21 a22   a21 a22      a21a11 + a22a21   a21a12 + a22²

In like manner, A³ = A²A.
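In Python/numpy (a sketch; numpy's matrix_power carries out the repeated multiplication for us):

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])

    print(A @ A)                          # A², computed directly
    print(np.linalg.matrix_power(A, 3))   # A³ = A²A
    print(np.linalg.matrix_power(A, 0))   # A⁰ = I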

(7) Let A = (aij) and B = (bij) be the usual 2 x 2 matrices

A = a11 a12    B = b11 b12
    a21 a22        b21 b22

Then

AB = a11b11 + a12b21   a11b12 + a12b22
     a21b11 + a22b21   a21b12 + a22b22

BA = b11 b12   a11 a12   =  b11a11 + b12a21   b11a12 + b12a22
     b21 b22   a21 a22      b21a11 + b22a21   b21a12 + b22a22

Comparing corresponding entries,

a11b11 + a12b21   a11b12 + a12b22      b11a11 + b12a21   b11a12 + b12a22
a21b11 + a22b21   a21b12 + a22b22   ≠  b21a11 + b22a21   b21a12 + b22a22

in general, so AB ≠ BA.
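A concrete numerical instance of non-commutativity (a Python/numpy sketch with matrices of our own choosing):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[0, 1], [1, 0]])

    print(A @ B)   # [[2 1], [4 3]]
    print(B @ A)   # [[3 4], [1 2]] -- different, so AB != BA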
CHAPTER THREE

3.1 Each n-square matrix A = (aij) is assigned a special scalar called the determinant of A, denoted by det(A) or |A|; in the case where the matrix entries are written out in full, the determinant is denoted by surrounding the matrix entries with vertical bars instead of the brackets or parentheses of the matrix. For instance, the determinant of the matrix

a11 a12 ... a1n                |a11 a12 ... a1n|
...             is written as  |...            |
an1 an2 ... ann                |an1 an2 ... ann|

Although most often used for matrices whose entries are real or complex numbers, the definition of the determinant involves only addition, subtraction and multiplication, and so it can be defined for square matrices with entries taken from any commutative ring. Thus, for instance, the determinant of a matrix with integer entries will be an integer, and the matrix has an inverse with integer entries if and only if this determinant is 1 or -1.
DETERMINANT OF ORDER 2

A determinant of order 2 is defined as follows:

det(A) = |a11 a12|  = a11a22 - a12a21
         |a21 a22|

Let us take some examples under the determinant of order 2.

Examples 3.1

1. |5 2|  = 5(3) - 2(4) = 15 - 8 = 7
   |4 3|

2. | 2  3|  = 2(-1) - 3(-4) = -2 + 12 = 10
   |-4 -1|

3.1.1 DETERMINANT OF ORDER 3

Let us consider an arbitrary 3 x 3 matrix A = (aij). The determinant of A is defined as follows:

det(A) = |a11 a12 a13|
         |a21 a22 a23|
         |a31 a32 a33|

= a11(a22a33 - a23a32) - a12(a21a33 - a23a31) + a13(a21a32 - a22a31)

= a11a22a33 - a11a23a32 - a12a21a33 + a12a23a31 + a13a21a32 - a13a22a31

Observe that there are six products, each product consisting of three elements of the original matrix; three of the products are plus-labelled and the other three are minus-labelled.

Example

Let A = 2 3  1    B =  3 6 2
        4 2 -4        -2 3 4
        1 0  3         1 2 5

Find det(A) and det(B).

Solution:

det(A) = |A| = |2 3  1|
               |4 2 -4|
               |1 0  3|

= 2[(2 x 3) - (-4 x 0)] - 3[(4 x 3) - (-4 x 1)] + 1[(4 x 0) - (2 x 1)]

= 2[6 + 0] - 3[12 + 4] + 1[0 - 2]

= 12 - 48 - 2

= -38

Similarly, det(B) = 3[(3 x 5) - (4 x 2)] - 6[(-2 x 5) - (4 x 1)] + 2[(-2 x 2) - (3 x 1)] = 21 + 84 - 14 = 91.
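These hand computations can be checked with Python/numpy (a sketch; np.linalg.det works in floating point, hence the rounding):

    import numpy as np

    A = np.array([[2, 3, 1], [4, 2, -4], [1, 0, 3]])
    B = np.array([[3, 6, 2], [-2, 3, 4], [1, 2, 5]])

    print(round(np.linalg.det(A)))   # -38
    print(round(np.linalg.det(B)))   # 91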

3.2 DETERMINANT OF A SQUARE MATRIX

The determinant of a square matrix is the determinant having the same elements as those of the matrix. For example:

1. |A| = |3 1 5|
         |4 7 6|  = 3[28 - 30] - 1[16 - 6] + 5[20 - 7]
         |1 5 4|  = 3[-2] - 1[10] + 5[13]
                  = -6 - 10 + 65
                  = 49
3.2.1 MINORS AND COFACTORS

The cofactor of an element aij is its signed minor: Aij = (-1)^(i+j) Mij, where Mij is the minor of aij, i is the number of the row of the element and j is the number of the column of the element.

This gives

|A| = |a1 b1 c1|
      |a2 b2 c2|  = a1(minor of a1) - b1(minor of b1) + c1(minor of c1)
      |a3 b3 c3|

where the minor of a1 from |A| is |b2 c2| , the minor of b1 is |a2 c2| , and the minor of c1 is |a2 b2|
                                  |b3 c3|                      |a3 c3|                          |a3 b3|

Then the cofactor of a1 = (-1)^(1+1) [b2c3 - b3c2]

the cofactor of b1 = (-1)^(1+2) [a2c3 - a3c2]

the cofactor of c1 = (-1)^(1+3) [a2b3 - a3b2]

and, for instance, the cofactor of b2 = (-1)^(2+2) (a1c3 - a3c1) = (a1c3 - a3c1).

In conclusion, by using the Laplace expansion we can derive the determinant of a square matrix A = [aij], which is equal to the sum of the products obtained by multiplying the elements of any row (or column) by their respective cofactors:

|A| = ai1Ai1 + ai2Ai2 + ... + ainAin = Σ (j=1 to n) aijAij   (expansion by the ith row)

|A| = a1jA1j + a2jA2j + ... + anjAnj = Σ (i=1 to n) aijAij   (expansion by the jth column)

The above formulas for |A| are called the Laplace expansions of the determinant of A by the ith row and the jth column. Together with the elementary row (column) operations, they offer a method of simplifying the computation of |A|.
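The Laplace expansion by the first row translates directly into a recursive procedure; a minimal Python sketch (the function name det is our own):

    def det(M):
        # Determinant by Laplace expansion along the first row.
        n = len(M)
        if n == 1:
            return M[0][0]
        total = 0
        for j in range(n):
            # The minor of entry (1, j): delete the first row and column j.
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * M[0][j] * det(minor)  # signs alternate +, -, +, ...
        return total

    print(det([[3, 1, 5], [4, 7, 6], [1, 5, 4]]))   # 49, as in section 3.2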

3.2.2 CLASSICAL ADJOINT OF A MATRIX

Let A = (aij) be an n x n matrix over a field K and let Aij denote the cofactor of aij. The classical adjoint of A, written adj A, is the transpose of the matrix of cofactors of A; namely, adj A = [Aij]T.

We say "classical adjoint" instead of simply "adjoint" because the term "adjoint" is currently used for an entirely different concept.

Example

Let A = 2  3 -4
        0 -4  2
        1 -1  5

The cofactors of the nine elements of A are as follows:

A11 = +|-4  2| = -18    A12 = -|0 2| = 2      A13 = +|0 -4| = 4
       |-1  5|                 |1 5|                 |1 -1|

A21 = -|3 -4| = -11     A22 = +|2 -4| = 14    A23 = -|2  3| = 5
       |-1 5|                  |1  5|                |1 -1|

A31 = +|3 -4| = -10     A32 = -|2 -4| = -4    A33 = +|2  3| = -8
       |-4 2|                  |0  2|                |0 -4|

∴ The matrix formed by the cofactors, in the order of the above matrix, is

C = -18   2   4
    -11  14   5
    -10  -4  -8

but adj A = [Aij]T, so

adj A = -18 -11 -10
          2  14  -4
          4   5  -8
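The cofactor-transpose recipe can be written out for the 3 x 3 case; a Python sketch (the function name adjugate is our own, and each 2 x 2 minor determinant is computed inline):

    def adjugate(M):
        # Classical adjoint of a 3 x 3 matrix: transpose of the matrix of cofactors.
        n = len(M)
        cof = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                # The minor of entry (i, j): delete row i and column j.
                minor = [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]
                cof[i][j] = (-1) ** (i + j) * (minor[0][0] * minor[1][1]
                                               - minor[0][1] * minor[1][0])
        return [list(col) for col in zip(*cof)]   # transpose

    print(adjugate([[2, 3, -4], [0, -4, 2], [1, -1, 5]]))
    # [[-18, -11, -10], [2, 14, -4], [4, 5, -8]], matching adj A above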

3.3 PROPERTIES OF DETERMINANT

(Property I)

The value of a determinant remains unaltered if the rows are changed into columns and the columns into rows.

▲ = |a1 b1 c1|
    |a2 b2 c2|
    |a3 b3 c3|

= a1(b2c3 - b3c2) - b1(a2c3 - a3c2) + c1(a2b3 - a3b2)

= a1b2c3 - a1b3c2 - a2b1c3 + a3b1c2 + a2b3c1 - a3b2c1

= a1(b2c3 - b3c2) - a2(b1c3 - b3c1) + a3(b1c2 - b2c1)

= |a1 a2 a3|
  |b1 b2 b3|
  |c1 c2 c3|
(Property II)

The value of the determinant remains unaltered if to the elements of one row (or column) are added constant multiples of the corresponding elements of any other row (or column).

Let ▲ = |a1 b1 c1|
        |a2 b2 c2|
        |a3 b3 c3|

On multiplying the second column by l and the third column by m and adding to the first column we get

|a1 + lb1 + mc1  b1 c1|   |a1 b1 c1|     |b1 b1 c1|     |c1 b1 c1|
|a2 + lb2 + mc2  b2 c2| = |a2 b2 c2| + l |b2 b2 c2| + m |c2 b2 c2|
|a3 + lb3 + mc3  b3 c3|   |a3 b3 c3|     |b3 b3 c3|     |c3 b3 c3|

= ▲ + 0 + 0   (the last two determinants vanish because each has two identical columns)

= ▲

(Property III)

If two rows (or two columns) of a determinant are interchanged, the sign of the value of the determinant changes.

Interchanging the first two rows of ▲, we get

|a2 b2 c2|        |a1 b1 c1|
|a1 b1 c1|  from  |a2 b2 c2|
|a3 b3 c3|        |a3 b3 c3|

The new determinant is

a2(b1c3 - b3c1) - b2(a1c3 - a3c1) + c2(a1b3 - a3b1)

= a2b1c3 - a2b3c1 - b2a1c3 + b2a3c1 + c2a1b3 - c2a3b1

= -[(a1b2c3 - a1b3c2) - (b1a2c3 - b1a3c2) + (c1a2b3 - c1a3b2)]

= -[a1(b2c3 - b3c2) - b1(a2c3 - a3c2) + c1(a2b3 - a3b2)]

= -|a1 b1 c1|
   |a2 b2 c2|
   |a3 b3 c3|

= -▲

(Property IV)

If two rows (or columns) of a determinant are identical, the value of the determinant is zero.

Let ▲ = |a1 b1 c1|
        |a1 b1 c1|
        |a3 b3 c3|   so that the first two rows are identical.

By interchanging the first two rows, we get the same determinant ▲. But by Property III, on interchanging the rows the sign of the determinant changes, so ▲ = -▲, or 2▲ = 0, or ▲ = 0.

(Property V)

If the elements of any row (or column) of a determinant are each multiplied by the same number, the determinant is multiplied by that number:

▲ = |ka1 kb1 kc1|
    |a2  b2  c2 |
    |a3  b3  c3 |

= ka1(b2c3 - b3c2) - kb1(a2c3 - a3c2) + kc1(a2b3 - a3b2)

= k[a1(b2c3 - b3c2) - b1(a2c3 - a3c2) + c1(a2b3 - a3b2)]

= k |a1 b1 c1|
    |a2 b2 c2|
    |a3 b3 c3|
CHAPTER FOUR

INVERSE OF A SQUARE MATRIX

4.1 INVERSE OF A MATRIX

If A and B are two square matrices of the same order such that

AB = BA = I

then B is called the inverse of A, i.e. B = A⁻¹, and A is the inverse of B.

The condition for a square matrix A to possess an inverse is that the matrix A is non-singular, i.e. |A| ≠ 0. For if A is a square matrix and B is its inverse, then AB = I. Taking determinants of both sides,

|AB| = |I| or |A||B| = 1

and from this relation it is clear that |A| ≠ 0, i.e. the matrix A is non-singular.

Example

Suppose that A = 2 5   and B =  3 -5
                 1 3           -1  2

Show that B is the inverse of A.

Solution:

AB = 2 5    3 -5   =  6-5  -10+10  =  1 0  = I
     1 3   -1  2      3-3   -5+6      0 1

and similarly BA = I, so B = A⁻¹.
4.1.1 INVERSE OF A 2 X 2 MATRIX

Let A be an arbitrary 2 x 2 matrix, say

A = a1 b1
    a2 b2

We will derive a formula for A⁻¹, the inverse of A. Specifically, we seek 2² = 4 scalars, say x1, y1, x2, y2, such that

a1 b1   x1 x2   =  1 0
a2 b2   y1 y2      0 1

i.e.

a1x1 + b1y1   a1x2 + b1y2   =  1 0
a2x1 + b2y1   a2x2 + b2y2      0 1

We set the four entries equal to the corresponding entries in the identity matrix; this yields four equations, which can be partitioned into two 2 x 2 systems as follows:

a1x1 + b1y1 = 1,   a1x2 + b1y2 = 0
a2x1 + b2y1 = 0,   a2x2 + b2y2 = 1

Suppose we let |A| = a1b2 - b1a2 (called the determinant of A). Assuming |A| ≠ 0, we can solve uniquely for the above unknowns x1, y1, x2, y2, obtaining

x1 = b2/|A|,  y1 = -a2/|A|,  x2 = -b1/|A|,  y2 = a1/|A|

Accordingly,

A⁻¹ = a1 b1 ⁻¹  =   b2/|A|  -b1/|A|   =   1    b2 -b1
      a2 b2        -a2/|A|   a1/|A|      |A|  -a2  a1

In other words, when |A| ≠ 0, the inverse of a 2 x 2 matrix A may be obtained from A as follows:

1. Interchange the two elements on the diagonal.

2. Take the negatives of the other two elements.

3. Multiply the resulting matrix by 1/|A|. In case |A| = 0, the matrix A is not invertible.

Expressing the process of inverting matrices as a rule, we do the following:

i. We get the minors of the matrix.

ii. We sign the minors with the rule (-1)^(i+j) to obtain the cofactors.

iii. We transpose the cofactors.

iv. We multiply the result by the reciprocal of the determinant of the original matrix, i.e. by 1/|A|.
Example (4.1.1)

Find the inverse of A = 2 3
                        4 5

Firstly, the determinant of A:

|A| = |2 3|  = 2(5) - 4(3) = 10 - 12 = -2
      |4 5|

Because |A| ≠ 0, the matrix A is invertible and

A⁻¹ =  1    5 -3   =  -5/2  3/2   = B
      -2   -4  2        2   -1

The matrix B is the inverse of the matrix A, i.e. A.B = I:

2 3   -5/2  3/2   =   -5+6   3-3   =  1 0
4 5     2   -1       -10+10  6-5      0 1

4.1.2 INVERSE OF A 3 X 3 MATRIX

In this case we are going to obtain the inverse of the matrix by solving for cofactors and the adjoint.

Example

A = 2 3 5
    4 1 6
    1 4 0

Solution:

First we find |A| (the determinant):

det(A) = |A| = |2 3 5|
               |4 1 6|  = 2(0 - 24) - 3(0 - 6) + 5(16 - 1) = 45
               |1 4 0|

Secondly, we find all the cofactors:

A11 = (0 - 24) = -24    A12 = -(0 - 6) = 6      A13 = (16 - 1) = 15
A21 = -(0 - 20) = 20    A22 = (0 - 5) = -5      A23 = -(8 - 3) = -5
A31 = (18 - 5) = 13     A32 = -(12 - 20) = 8    A33 = (2 - 12) = -10

Adjoint of matrix A = transpose of the matrix of cofactors.

The cofactor matrix C = -24  6  15
                         20 -5  -5
                         13  8 -10

∴ CT = -24  20  13
         6  -5   8
        15  -5 -10

Then the inverse of A is

A⁻¹ = (1/45) CT = -24/45  20/45  13/45
                    6/45  -5/45   8/45
                   15/45  -5/45 -10/45

4.2 PRODUCT OF A SQUARE MATRIX AND ITS INVERSE

From the above example we have seen how to obtain

A⁻¹ =  1   -24  20  13            A = 2 3 5
      45     6  -5   8    from        4 1 6
            15  -5 -10                1 4 0

Now we obtain their product, i.e.

AA⁻¹ = 2 3 5   -24  20  13
       4 1 6     6  -5   8   (1/45)
       1 4 0    15  -5 -10

     = -48+18+75   40-15-25   26+24-50
       -96+6+90    80-5-30    52+8-60    (1/45)
       -24+24+0    20-20+0    13+32+0

     = 45  0  0
        0 45  0   (1/45)
        0  0 45

     = 1 0 0
       0 1 0
       0 0 1

∴ AA⁻¹ = A⁻¹A = I (where I is the identity matrix)
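The whole chain (determinant, adjoint, inverse, product) can be confirmed numerically; a Python/numpy sketch using the matrices worked out above:

    import numpy as np

    A = np.array([[2, 3, 5],
                  [4, 1, 6],
                  [1, 4, 0]])

    adj_A = np.array([[-24, 20,  13],
                      [  6, -5,   8],
                      [ 15, -5, -10]])

    A_inv = adj_A / 45                            # A⁻¹ = (1/|A|) adj A
    print(np.allclose(A @ A_inv, np.eye(3)))      # True: AA⁻¹ = I
    print(np.allclose(A_inv, np.linalg.inv(A)))   # True: agrees with numpy's inverse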
4.3 INVERSE THEOREMS

Theorem 1: If B and C are both inverses of the matrix A, then B = C.

Proof:

Suppose B and C are both inverses of the square matrix A, so that AB = BA = I and AC = CA = I. Then

B = BI = B(AC) = (BA)C = IC = C

Therefore the inverse of a matrix, when it exists, is unique, i.e. B = C.

Theorem 2: If A and B are invertible matrices of the same order, then

(1) AB is invertible

(2) (AB)⁻¹ = B⁻¹A⁻¹

Proof:

Using the associative law of matrix multiplication, we get

(AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AIA⁻¹ = AA⁻¹ = I

(B⁻¹A⁻¹)(AB) = B⁻¹(A⁻¹A)B = B⁻¹IB = B⁻¹B = I

Thus AB is invertible and (AB)⁻¹ = B⁻¹A⁻¹.
Theorem 3: If A is a non-singular matrix, the unique solution to the matrix equation AX = B is X = A⁻¹B.

Proof:

If A is non-singular, then A⁻¹ exists, and A(A⁻¹B) = (AA⁻¹)B = IB = B, so X = A⁻¹B is a solution. Conversely, if AX = B, then X = A⁻¹(AX) = A⁻¹B.

∴ The solution A⁻¹B is unique.
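Theorem 3 is the basis of solving linear systems in practice; a Python/numpy sketch with a right-hand side of our own choosing:

    import numpy as np

    A = np.array([[2.0, 5.0],
                  [1.0, 3.0]])
    b = np.array([4.0, 1.0])

    x = np.linalg.inv(A) @ b       # X = A⁻¹B, as in Theorem 3
    print(x)                       # [ 7. -2.]
    print(np.linalg.solve(A, b))   # the same solution, computed without forming A⁻¹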
