Matrix 2

The document discusses the properties and conditions for the invertibility of square matrices, including the necessary condition for a matrix A to have an inverse being that it is non-singular. It presents various theorems related to the inverses of matrices, including the relationship between the inverses of products of matrices and the definition of orthogonal and unitary matrices. Additionally, it provides examples and proofs to illustrate these concepts.


158

Mathematical Physics


AB = A[(adj A)/|A|] = [A (adj A)]/|A| = |A| I/|A| [using theorem 2.1: A(adj A) = (adj A)A = |A| I]
i.e. AB = I
Also BA = [(adj A)/|A|] A = [(adj A) A]/|A| = |A| I/|A|
i.e. BA = I
Thus AB = BA = I,
which shows that B is the inverse of A, i.e. the matrix A is invertible.
Thus the necessary and sufficient condition for a square matrix A to possess an inverse is that the matrix A be non-singular (i.e. |A| ≠ 0).
Cor. If A is an invertible matrix, then the inverse of A is given by
B = A⁻¹ = (adj A)/|A|
This corollary gives a rather powerful method of computing the inverse of a square matrix.
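The corollary A⁻¹ = (adj A)/|A| can be sketched directly in code. A minimal NumPy illustration (NumPy assumed available; the sample matrix is chosen here for illustration, not taken from the text):

```python
import numpy as np

def adjugate(A):
    """adj A: the transpose of the cofactor matrix of a square matrix A."""
    n = A.shape[0]
    cof = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # delete row i and column j, then take the signed minor
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T

A = np.array([[1.0, -1.0],
              [1.0,  1.0]])              # a small non-singular matrix, |A| = 2
A_inv = adjugate(A) / np.linalg.det(A)   # A^{-1} = adj A / |A|
print(np.allclose(A @ A_inv, np.eye(2)))   # True
```

For large matrices this cofactor expansion is far slower than `np.linalg.inv`, but it mirrors the corollary exactly.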
Theorem 2-4. If A, B are two n-square, non-singular matrices, then AB, Aᵀ and A† are all invertible, and (Agra 200)
(i) (AB)⁻¹ = B⁻¹A⁻¹
(ii) (Aᵀ)⁻¹ = (A⁻¹)ᵀ
(iii) (A†)⁻¹ = (A⁻¹)†.
Proof. As A and B are non-singular matrices, |A| ≠ 0 and |B| ≠ 0, so that
|AB| = |A||B| ≠ 0
This implies that AB is a non-singular matrix and hence invertible.
As |Aᵀ| = |A| and |A| ≠ 0, therefore Aᵀ is invertible.
Also, as |A†| = |A|* and |A| ≠ 0, therefore |A†| ≠ 0. Hence A† is invertible.
(i) Let A⁻¹ and B⁻¹ be the inverses of the matrices A and B respectively, so that
AA⁻¹ = A⁻¹A = I
BB⁻¹ = B⁻¹B = I
Now consider the matrix C given by C = B⁻¹A⁻¹; then
C(AB) = (B⁻¹A⁻¹)(AB) = B⁻¹(A⁻¹A)B = B⁻¹IB = B⁻¹B = I (since A⁻¹A = I and IB = B)
i.e.
(B⁻¹A⁻¹)(AB) = I ...(1a)
Similarly it can be shown that
(AB)(B⁻¹A⁻¹) = I ...(1b)
Equations (1a) and (1b) imply that B⁻¹A⁻¹ is the inverse of AB, i.e.
(AB)⁻¹ = B⁻¹A⁻¹ ...(2)
Remark: This result may be extended to any number of square matrices which are conformable for multiplication. For example:
(a) For three matrices A, B and C, we have
(ABC)⁻¹ = C⁻¹B⁻¹A⁻¹
Replacing B by X in equation (2), we have
(AX)⁻¹ = X⁻¹A⁻¹
Now substituting X = BC, we get
(ABC)⁻¹ = (BC)⁻¹A⁻¹ = C⁻¹B⁻¹A⁻¹ [using (2) again] ...(3)
(b) Hence for any number of matrices A, B, C, ..., G, H we get
[ABC ... GH]⁻¹ = H⁻¹G⁻¹ ... C⁻¹B⁻¹A⁻¹ ...(4)
(ii) We have
AA⁻¹ = A⁻¹A = I ...(5)
Taking the transpose of both sides and using Iᵀ = I, we get
(AA⁻¹)ᵀ = (A⁻¹A)ᵀ = I
(A⁻¹)ᵀAᵀ = Aᵀ(A⁻¹)ᵀ = I (by the reversal law of transposes)
From this it follows that (A⁻¹)ᵀ is the inverse of Aᵀ, i.e.
(Aᵀ)⁻¹ = (A⁻¹)ᵀ ...(6)
(iii) Again taking the conjugate transpose of both sides of (5), we get
(AA⁻¹)† = (A⁻¹A)† = I
(A⁻¹)†A† = A†(A⁻¹)† = I (since I† = I)
which shows that (A⁻¹)† is the inverse of A†, i.e.
(A†)⁻¹ = (A⁻¹)† ...(7)
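The three identities of theorem 2-4 are easy to spot-check numerically. A short NumPy sketch (NumPy assumed; random complex matrices are non-singular with probability 1):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
inv = np.linalg.inv

print(np.allclose(inv(A @ B), inv(B) @ inv(A)))       # (AB)^-1 = B^-1 A^-1
print(np.allclose(inv(A.T), inv(A).T))                # (A^T)^-1 = (A^-1)^T
print(np.allclose(inv(A.conj().T), inv(A).conj().T))  # (A†)^-1 = (A^-1)†
```

Note the order reversal in the first identity: inverting a product reverses the factors, just as in the proof above.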
Ex. 10. Evaluate the inverse of the matrix
A = [ 1 -1 ]
    [ 1  1 ]
(Kerala, 2003)
Solution. We have
|A| = (1)(1) - (-1)(1) = 2
The cofactors of the determinant are
A11 = (-1)¹⁺¹ · 1 = 1, A12 = (-1)¹⁺² · 1 = -1
A21 = (-1)²⁺¹ · (-1) = 1, A22 = (-1)²⁺² · 1 = 1
The matrix of cofactors of |A| is
[ 1 -1 ]
[ 1  1 ]
Taking the transpose of the above matrix, we get
adj A = [  1  1 ]
        [ -1  1 ]
∴ A⁻¹ = (adj A)/|A| = (1/2) [  1  1 ]
                            [ -1  1 ]

Ex. 11. Find the inverse of the matrix
A = [  1 -1  3 ]
    [ -1  1  2 ]
    [  3  2 -1 ]
(Purvanchal 2004, Meerut 1982, 69)
Solution.
|A| = 1(-1 - 4) + 1(1 - 6) + 3(-2 - 3) = -5 - 5 - 15 = -25
The cofactors of the elements of the first row are -5, 5, -5 respectively.
The cofactors of the elements of the second row are 5, -10, -5 respectively.
The cofactors of the elements of the third row are -5, -5, 0 respectively.
Therefore the matrix of cofactors of |A| is
[ -5   5  -5 ]
[  5 -10  -5 ]
[ -5  -5   0 ]
Taking the transpose of the above matrix, we get
adj A = [ -5   5  -5 ]
        [  5 -10  -5 ]
        [ -5  -5   0 ]
∴ A⁻¹ = (adj A)/|A| = [  1/5 -1/5  1/5 ]
                      [ -1/5  2/5  1/5 ]
                      [  1/5  1/5   0  ]
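The result of Ex. 11 can be checked numerically. A minimal NumPy sketch (NumPy assumed; the matrix A below is the one reconstructed from the cofactors listed above):

```python
import numpy as np

A = np.array([[ 1, -1,  3],
              [-1,  1,  2],
              [ 3,  2, -1]], dtype=float)
adjA = np.array([[-5,   5, -5],
                 [ 5, -10, -5],
                 [-5,  -5,  0]], dtype=float)  # transpose of the cofactor matrix

detA = np.linalg.det(A)          # -25
A_inv = adjA / detA              # A^{-1} = adj A / |A|
print(np.allclose(A @ A_inv, np.eye(3)))   # True
```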

Ex. 12. Given
A = (1/9) [ -8  1  4 ]
          [  4  4  7 ]
          [  1 -8  4 ]
show that A⁻¹ = Aᵀ, Aᵀ being the transpose of the matrix A.
Solution. We have
AAᵀ = (1/81) [ -8  1  4 ] [ -8  4  1 ]
             [  4  4  7 ] [  1  4 -8 ]
             [  1 -8  4 ] [  4  7  4 ]
= (1/81) [ 64 + 1 + 16   -32 + 4 + 28   -8 - 8 + 16 ]
         [ -32 + 4 + 28   16 + 16 + 49   4 - 32 + 28 ]
         [ -8 - 8 + 16     4 - 32 + 28   1 + 64 + 16 ]
= (1/81) [ 81  0  0 ]
         [  0 81  0 ]  =  I
         [  0  0 81 ]
i.e.
AAᵀ = I ⇒ Aᵀ = A⁻¹
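A quick numerical check of Ex. 12 (NumPy assumed; A is the matrix as reconstructed from the products above):

```python
import numpy as np

A = np.array([[-8,  1,  4],
              [ 4,  4,  7],
              [ 1, -8,  4]], dtype=float) / 9.0

print(np.allclose(A @ A.T, np.eye(3)))      # A A^T = I
print(np.allclose(np.linalg.inv(A), A.T))   # hence A^-1 = A^T
```

A matrix with this property is exactly an orthogonal matrix, as defined in section 2-17 below.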

Ex. 13. Find the matrix B such that A = BC, where
A = [ 2  3 -2 ]     C = [  1  2 -1 ]
    [ 4 -1 -2 ]         [  2 -1 -1 ]
    [ 0  1  0 ]         [ -1  2  0 ]
(Rohilkhand, 2005)
Solution. Given A = BC.
Multiplying both sides by C⁻¹ on the right, we get
AC⁻¹ = BCC⁻¹ ⇒ B = AC⁻¹ ...(1)
det C = |C| = 1(0 + 2) - 2(0 - 1) - 1(4 - 1) = 2 + 2 - 3 = 1
As |C| ≠ 0, C is non-singular and has an inverse.
The cofactors of |C| are
C11 = (-1)¹⁺¹(0 + 2) = 2, C12 = (-1)¹⁺²(0 - 1) = 1, C13 = (-1)¹⁺³ · 3 = 3
C21 = (-1)²⁺¹(0 + 2) = -2, C22 = (-1)²⁺²(0 - 1) = -1, C23 = (-1)²⁺³(2 + 2) = -4
C31 = (-1)³⁺¹(-2 - 1) = -3, C32 = (-1)³⁺²(-1 + 2) = -1, C33 = (-1)³⁺³(-1 - 4) = -5
The matrix of cofactors of det C is
[  2  1  3 ]
[ -2 -1 -4 ]
[ -3 -1 -5 ]
Taking the transpose of the above matrix, we get
adj C = [  2 -2 -3 ]
        [  1 -1 -1 ]
        [  3 -4 -5 ]
Since |C| = 1, C⁻¹ = (adj C)/|C| = adj C.
∴ B = AC⁻¹ = [ 2  3 -2 ] [ 2 -2 -3 ]
             [ 4 -1 -2 ] [ 1 -1 -1 ]
             [ 0  1  0 ] [ 3 -4 -5 ]
= [ 4 + 3 - 6   -4 - 3 + 8    -6 - 3 + 10 ]   [ 1  1  1 ]
  [ 8 - 1 - 6   -8 + 1 + 8   -12 + 1 + 10 ] = [ 1  1 -1 ]
  [ 0 + 1 + 0    0 - 1 + 0     0 - 1 + 0  ]   [ 1 -1 -1 ]
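The same computation in NumPy (NumPy assumed; A and C as reconstructed in Ex. 13):

```python
import numpy as np

A = np.array([[2,  3, -2],
              [4, -1, -2],
              [0,  1,  0]], dtype=float)
C = np.array([[ 1,  2, -1],
              [ 2, -1, -1],
              [-1,  2,  0]], dtype=float)

B = A @ np.linalg.inv(C)      # from A = BC: post-multiply both sides by C^-1
print(np.round(B))
print(np.allclose(B @ C, A))  # B reproduces A
```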

Ex. 14. If a matrix A satisfies the relation A² + A - I = 0, prove that A is invertible and
A⁻¹ = I + A, where I is the identity matrix.
Solution. We have A² + A - I = 0
⇒ A² + A = I ⇒ A² + AI = I
⇒ A(A + I) = I
⇒ |A||A + I| = |I| = 1
Obviously |A| ≠ 0, so A⁻¹ exists.
Again, A² + A - I = 0 ⇒ A² + A = I
⇒ A⁻¹(A² + A) = A⁻¹I
⇒ A + I = A⁻¹ ⇒ A⁻¹ = I + A
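A concrete instance of Ex. 14 (NumPy assumed; the matrix below is my own choice, not from the text — it is the companion matrix of λ² + λ - 1 and so satisfies the given relation):

```python
import numpy as np

I = np.eye(2)
A = np.array([[0.0,  1.0],
              [1.0, -1.0]])                  # satisfies A^2 + A - I = 0

print(np.allclose(A @ A + A - I, 0))          # the relation holds
print(np.allclose(np.linalg.inv(A), I + A))   # hence A^-1 = I + A
```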
2-17. Orthogonal Matrices
A square finite matrix A is said to be orthogonal if
AᵀA = I ...(1a)
This implies
AAᵀ = I ...(1b)
where Aᵀ is the transpose of A and I is the unit matrix.
We know that
|Aᵀ| = |A| and |AᵀA| = |Aᵀ||A|
Hence if AᵀA = I, we have |A|² = 1, i.e. |A| = ±1.
This shows that the determinant of an orthogonal matrix can only have the values +1 or -1. At the same time this shows that A is non-singular (since |A| ≠ 0), so that A⁻¹ exists.
Multiplying condition (1a) by A⁻¹ from the right, we get
AᵀAA⁻¹ = IA⁻¹, i.e. Aᵀ = A⁻¹ ...(2)
This is an alternative definition of an orthogonal matrix.

Equating the ij-th elements of both sides of (1b), we get
Σ(k = 1 to n) a_ik a_jk = δ_ij ...(3a)
where n is the order of the square matrix A.
Similarly, equating the ij-th elements of both sides of (1a), we get
Σ(k = 1 to n) a_ki a_kj = δ_ij ...(3b)
The conditions (3a) and (3b) satisfied by the elements of an orthogonal matrix are not independent, because equations (1a) and (1b) are themselves not independent: if either of equations (1a) and (1b) holds, the other also holds.
Theorem 2-5. The products of orthogonal matrices are also orthogonal, i.e., if A and B are orthogonal matrices, then AB and BA are also orthogonal.
Proof. If A and B are given orthogonal matrices, then
AᵀA = I, BᵀB = I
Now |Aᵀ| = |A| ≠ 0 and |Bᵀ| = |B| ≠ 0, so that |AB| = |A||B| ≠ 0.
Moreover
(AB)ᵀ(AB) = (BᵀAᵀ)(AB) = Bᵀ(AᵀA)B = BᵀIB = BᵀB = I
Hence the matrix AB is orthogonal. Similarly it may be shown that the matrix BA is orthogonal.
Ex. 15. Show that the following matrices are orthogonal:
(i) [ cos θ  -sin θ ]
    [ sin θ   cos θ ]
(Rohilkhand 2004, 1999, Agra 1994)
Solution. (i) Let
A = [ cos θ  -sin θ ]    then    Aᵀ = [  cos θ  sin θ ]
    [ sin θ   cos θ ]                [ -sin θ  cos θ ]
∴ AAᵀ = [ cos²θ + sin²θ                 cos θ sin θ - sin θ cos θ ] = [ 1 0 ]
        [ sin θ cos θ - cos θ sin θ     sin²θ + cos²θ             ]   [ 0 1 ]
Hence the matrix A is orthogonal.
(ii) For the second given matrix B, a direct computation in the same way gives BBᵀ = I.
Hence the matrix B is orthogonal.
(iii) Similarly, for the third given matrix C, forming the product CCᵀ gives CCᵀ = I.
Hence the matrix C is orthogonal.
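The rotation matrix of part (i) can be checked for any angle. A minimal NumPy sketch (NumPy assumed; the angle is arbitrary):

```python
import numpy as np

theta = 0.7   # any angle works
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(A.T @ A, np.eye(2)))     # A^T A = I, so A is orthogonal
print(np.isclose(np.linalg.det(A), 1.0))   # det = +1 for a rotation
```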


Ex. 16. Determine the values of a, b and c when
A = [ 0  2b  c ]
    [ a   b -c ]
    [ a  -b  c ]
is orthogonal. (Meerut 2005, AMIE, 2004)
Solution. The transpose of A is
Aᵀ = [  0  a  a ]
     [ 2b  b -b ]
     [  c -c  c ]
As A is an orthogonal matrix, AAᵀ = I, i.e.
[ 0  2b  c ] [  0  a  a ]   [ 1 0 0 ]
[ a   b -c ] [ 2b  b -b ] = [ 0 1 0 ]
[ a  -b  c ] [  c -c  c ]   [ 0 0 1 ]

[ 4b² + c²     2b² - c²       -2b² + c²     ]   [ 1 0 0 ]
[ 2b² - c²     a² + b² + c²    a² - b² - c² ] = [ 0 1 0 ]
[ -2b² + c²    a² - b² - c²    a² + b² + c² ]   [ 0 0 1 ]
Equating the corresponding elements:
4b² + c² = 1 ...(1)
2b² - c² = 0 ...(2)
a² + b² + c² = 1 ...(3)
Solving (1) and (2), 6b² = 1 and c² = 2b², so that
b = ±1/√6 and c = ±1/√3
Substituting these values in (3), we get
a = ±1/√2
Thus a = ±1/√2, b = ±1/√6, c = ±1/√3.
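A numerical check of Ex. 16 (NumPy assumed), plugging the solved values of a, b, c back into the matrix:

```python
import numpy as np

a, b, c = 1/np.sqrt(2), 1/np.sqrt(6), 1/np.sqrt(3)
A = np.array([[0, 2*b,  c],
              [a,   b, -c],
              [a,  -b,  c]])

print(np.allclose(A @ A.T, np.eye(3)))   # True: A is orthogonal
```

Any independent choice of signs for a, b and c works, since only their squares enter equations (1)-(3).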
2-18. Unitary Matrices
A square finite matrix A is said to be unitary if
A†A = I ...(1a)
This implies
AA† = I ...(1b)
where A† is the conjugate transpose of A and I is a unit matrix.
We know that |A†| = |A|* and |A†A| = |A†||A|.
Hence if A†A = I, we have |A†||A| = |A|*|A| = ||A||² = |I| = 1.
This shows that the determinant of a unitary matrix is always of unit modulus, and hence a unitary matrix is non-singular.
Multiplying (1a) from the right by A⁻¹, we get
A†AA⁻¹ = IA⁻¹, or A† = A⁻¹ ...(2)
This is an alternative definition of a unitary matrix.
Equating the ij-th elements of both sides of (1a) and (1b), we get
Σ(k = 1 to n) a_ki* a_kj = δ_ij and Σ(k = 1 to n) a_ik a_jk* = δ_ij, 1 ≤ i, j ≤ n ...(3)
where a_ik is the ik-th element of A and n is its order.
Theorem 2-6. The products of two unitary matrices are also unitary, i.e. if A and B are unitary matrices, then AB and BA are also unitary.
Proof. As A and B are unitary matrices, we have
A†A = I, B†B = I, such that |A| ≠ 0, |B| ≠ 0
Now
(AB)†(AB) = (B†A†)(AB) = B†(A†A)B = B†IB = B†B = I
and moreover |AB| = |A||B| ≠ 0.
Hence the matrix AB is unitary. Similarly it may be shown that the matrix BA is unitary.

Ex. 17. Show that the given matrix
A = [  1/√2   i/√2 ]
    [ -i/√2  -1/√2 ]
is unitary.
Solution. Taking the conjugate transpose, we find
A† = [  1/√2   i/√2 ]
     [ -i/√2  -1/√2 ]
Hence
A†A = [  1/√2   i/√2 ] [  1/√2   i/√2 ] = [ 1/2 + 1/2    i/2 - i/2 ] = [ 1 0 ]
      [ -i/√2  -1/√2 ] [ -i/√2  -1/√2 ]   [ -i/2 + i/2   1/2 + 1/2 ]   [ 0 1 ]
Hence the given matrix is unitary.
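A numerical check of Ex. 17 (NumPy assumed; A is the matrix as reconstructed above):

```python
import numpy as np

A = np.array([[1,    1j],
              [-1j, -1]]) / np.sqrt(2)

print(np.allclose(A.conj().T @ A, np.eye(2)))   # A† A = I, so A is unitary
print(np.isclose(abs(np.linalg.det(A)), 1.0))   # |det A| = 1
```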
Ex. 18. (a) If A is a real skew-symmetric matrix and A² + I = 0, then show that A is orthogonal.
(b) If H is a Hermitian matrix, what kind of matrix is e^(iH)?
(Rohilkhand 1995, Meerut 1994)
Solution. (a) Let A be the real skew-symmetric matrix. Then
Aᵀ = -A ...(1)
Also we have
A² + I = 0 (given), i.e. A² = -I ...(2)
Then
AᵀA = (-A)(A) = -A² = I, using (2)
which is the condition for the matrix A to be orthogonal. Hence the matrix A is orthogonal.
(b) Let H be the Hermitian matrix. Then
H† = H ...(3)
For any matrix M we have
(e^M)† = e^(M†) ...(4)
Let A = e^(iH). Then
A†A = (e^(iH))† · e^(iH) = e^((iH)†) · e^(iH), using (4)
= e^(-iH†) · e^(iH) = e^(-iH) · e^(iH), using (3)
i.e. A†A = I ...(5)
which is the condition for the matrix A = e^(iH) to be unitary. Hence the matrix e^(iH) is a unitary matrix.
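Part (b) can be checked numerically by building e^(iH) from the spectral decomposition of H (NumPy assumed; the Hermitian matrix below is an illustrative choice, not from the text):

```python
import numpy as np

# an illustrative Hermitian matrix: H† = H
H = np.array([[2.0,     1 - 1j],
              [1 + 1j,  3.0   ]])
assert np.allclose(H, H.conj().T)

# spectral theorem: H = V diag(w) V† with real eigenvalues w,
# so e^{iH} = V diag(e^{iw}) V†
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(1j * w)) @ V.conj().T

print(np.allclose(U.conj().T @ U, np.eye(2)))   # e^{iH} is unitary
```

Since the eigenvalues w are real, every e^(iw) has unit modulus, which is exactly why U comes out unitary.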

2-19. Trace of a Matrix
The sum of the diagonal elements of a square matrix is called the trace of the matrix.
For example, let
[A] = [ 1 0  0 ]
      [ 0 1  0 ]
      [ 0 0 -1 ]
The sum of the diagonal elements is 1 + 1 - 1 = 1, so the trace of the matrix [A] is 1.
Ex. 19. Find the traces of the following matrices:
(i) a matrix [A] with diagonal elements 1, 2, 5;
(ii) a matrix [B] with diagonal elements 2, 3, -3.
Solution. (i) Trace [A] = sum of diagonal elements = 1 + 2 + 5 = 8
(ii) Trace [B] = sum of diagonal elements = 2 + 3 - 3 = 2
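In NumPy the trace is a one-liner (NumPy assumed; since only the diagonal matters, diagonal matrices with the entries of Ex. 19 suffice for illustration):

```python
import numpy as np

A = np.diag([1.0, 2.0, 5.0])    # diagonal elements 1, 2, 5
B = np.diag([2.0, 3.0, -3.0])   # diagonal elements 2, 3, -3

print(np.trace(A))   # 8.0
print(np.trace(B))   # 2.0
```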

2-20. Elementary Operations
The following three operations upon the rows or columns of a matrix are defined as elementary operations or elementary transformations:
(i) The interchange of two rows (or columns).
The interchange of the i-th and j-th rows or columns will be denoted by Rij or Cij respectively.
(ii) The multiplication of any row or column by a non-zero number.
The multiplication of the i-th row by a non-zero number λ is denoted by Ri(λ) or λRi, while the multiplication of the i-th column by λ is denoted by Ci(λ) or λCi.
(iii) The addition of one row (or column) to another row (or column) multiplied by a non-zero number.
The addition of the j-th row multiplied by a non-zero number λ to the i-th row is denoted by Rij(λ) or Ri + λRj, while the addition of the j-th column multiplied by λ to the i-th column is denoted by Cij(λ) or Ci + λCj.
An elementary operation is called a row operation or a column operation according as it applies to rows or columns.

2-21. Elementary Matrices
A matrix obtained from a unit matrix by the application of any single elementary operation is called an elementary matrix. It is denoted as an E-matrix. For example, the matrices obtained from the unit matrix
I₃ = [ 1 0 0 ]
     [ 0 1 0 ]
     [ 0 0 1 ]
by means of the single elementary operations R13 (or C13), R1(λ) (or C1(λ)), R12(-4) (or R1 - 4R2) respectively are elementary matrices.
Notations for elementary matrices
(i) E_ij denotes the elementary matrix obtained by interchanging the i-th and j-th rows of a unit matrix I. As the matrix obtained by interchanging the i-th and j-th columns of a unit matrix I is the same as that obtained by interchanging the i-th and j-th rows of I, E_ij will denote the matrix obtained by interchanging the i-th and j-th rows (or columns) of the unit matrix I.
(ii) E_i(λ) denotes the elementary matrix obtained by multiplying the i-th row of a unit matrix I by a non-zero number λ. Again, as the matrix obtained by multiplying the i-th column of the unit matrix I by a non-zero number λ is the same as that obtained by multiplying the i-th row of I by λ, E_i(λ) will denote the matrix obtained by multiplying the i-th row (or column) of a unit matrix by a non-zero number λ.

(iii) E_ij(λ) denotes the elementary matrix obtained by adding to the i-th row, λ times the j-th row.
E'_ij(λ), which is the transpose of E_ij(λ), will denote the elementary matrix obtained by adding to the j-th column, λ times the i-th column.
Theorem 2-7. Every elementary row (or column) operation on a matrix is equivalent to pre-multiplication (or post-multiplication) by the elementary matrix corresponding to that operation.
Proof. Let A be an m × n matrix. Then we may write
A = I_m A ...(1)
where I_m is the m-square unit matrix.
Now if ρ is an elementary row operation, then from (1)
ρA = ρ(I_m A) = (ρI_m)A = EA ...(2)
where E is the elementary matrix corresponding to the elementary row operation ρ.
Again we may write
A = A I_n ...(3)
where I_n is the n-square unit matrix.
Now if σ is an elementary column operation, then from equation (3)
σA = σ(A I_n) = A(σI_n) = AE' ...(4)
where E' is the elementary matrix corresponding to the elementary column operation σ.
Thus from equations (2) and (4) we conclude that every elementary row (or column) operation on a matrix is equivalent to pre-multiplication (or post-multiplication) by the corresponding elementary matrix.
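Theorem 2-7 is easy to see in code: build each E-matrix by applying the row operation to I₃, then check that pre-multiplication by it performs the same operation on an arbitrary matrix (NumPy assumed; the sample matrix is illustrative):

```python
import numpy as np

A = np.arange(12, dtype=float).reshape(3, 4)   # any 3 x 4 matrix

# E-matrices: apply each elementary row operation to I3
E_swap = np.eye(3)[[1, 0, 2]]        # R12: interchange rows 1 and 2
E_scale = np.diag([5.0, 1.0, 1.0])   # R1(5): multiply row 1 by 5
E_add = np.eye(3)
E_add[0, 2] = 2.0                    # R13(2): add 2 * (row 3) to row 1

# pre-multiplication by each E-matrix performs the same operation on A
print(np.allclose(E_swap @ A, A[[1, 0, 2]]))

scaled = A.copy(); scaled[0] *= 5
print(np.allclose(E_scale @ A, scaled))

added = A.copy(); added[0] += 2 * A[2]
print(np.allclose(E_add @ A, added))
```

Post-multiplying by the corresponding 4 × 4 E-matrices would perform the analogous column operations.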
Theorem 2-8. Every elementary matrix is non-singular.
Proof. (i) The application of the elementary row operation Rij (or column operation Cij) to the elementary matrix E_ij gives us back the original unit matrix I. According to theorem 2-7 this operation is equivalent to pre-multiplication (or post-multiplication) of E_ij with E_ij, i.e.
E_ij E_ij = I
This implies (E_ij)⁻¹ = E_ij.
Thus (E_ij)⁻¹ exists and E_ij is its own inverse. Also, |E_ij| = -1 ≠ 0. Hence E_ij is a non-singular matrix.
(ii) The application of the elementary row operation Ri(1/λ) (or column operation Ci(1/λ)) to the elementary matrix E_i(λ) gives us back the original unit matrix I. According to theorem 2-7 this operation is equivalent to pre-multiplication (or post-multiplication) of E_i(λ) with E_i(1/λ), i.e.
E_i(1/λ) E_i(λ) = E_i(λ) E_i(1/λ) = I
This implies E_i(1/λ) = (E_i(λ))⁻¹.
Thus (E_i(λ))⁻¹ exists, and also |E_i(λ)| = λ ≠ 0. Hence E_i(λ) is a non-singular matrix.
(iii) The application of the elementary row operation Rij(-λ) to the elementary matrix E_ij(λ) gives us back the original unit matrix I. This operation is equivalent to pre-multiplication of E_ij(λ) with E_ij(-λ), i.e.
E_ij(-λ) E_ij(λ) = I
This implies E_ij(-λ) = (E_ij(λ))⁻¹.
Thus (E_ij(λ))⁻¹ exists, and also |E_ij(λ)| = 1 ≠ 0. Hence E_ij(λ) is a non-singular matrix.
From the above discussion it follows that the inverse of an elementary matrix is an elementary matrix of the same type, thereby showing that every elementary matrix is non-singular.
2-22. Equivalent Matrices
Two matrices of the same order are said to be equivalent if one can be obtained from the other by a finite chain of elementary operations. We write B ~ A, read as "B is equivalent to A".
If the matrix B is obtained by row operations alone on A, then B is said to be row-equivalent to A, and we write
B ~R A (B is row-equivalent to A)
On the other hand, if the matrix B is obtained by column operations alone on A, then B is said to be column-equivalent to A, and we write
B ~C A (B is column-equivalent to A)
If the matrix B is equivalent to the matrix A, then by the above definition B can be obtained by performing certain finite elementary row and column operations on A. Let P1, P2, ..., Ps and Q1, Q2, ..., Qt be the elementary row and column matrices corresponding to the elementary row and column operations which transform A into B. Then by theorem 2-7, we have
B = Ps ... P2 P1 A Q1 Q2 ... Qt ...(1)
Let
P = Ps ... P2 P1, Q = Q1 Q2 ... Qt
Then (1) reduces to
B = PAQ ...(2)
By theorem 2-8 every elementary matrix is non-singular, and hence their product is also non-singular. This means that P and Q are non-singular matrices.
Thus we can say that two matrices A and B are equivalent if and only if there exist non-singular matrices P and Q such that
B = PAQ ...(3)
Cor. 1. If B ~R A (i.e. if B is row-equivalent to A), then there exist elementary matrices P1, P2, ..., Ps such that
B = Ps ... P2 P1 A = PA ...(4)
Cor. 2. If B ~C A (i.e. if B is column-equivalent to A), then there exist elementary matrices Q1, Q2, ..., Qt such that
B = A Q1 Q2 ... Qt = AQ ...(5)
...(5)
Properties of equivalent matrices:
1. Reflexivity: Every matrix is equivalent to itself, i.e. A ~ A.
For A = IAI, so here P = I, Q = I.
2. Symmetry: If B ~ A, then A ~ B.
B ~ A implies that there exist two non-singular matrices P and Q such that
B = PAQ
This implies
A = P⁻¹BQ⁻¹
As P⁻¹ and Q⁻¹, being the inverses of P and Q, are non-singular matrices, therefore A ~ B.
3. Transitivity: If A ~ B and B ~ C, then A ~ C.
A ~ B and B ~ C imply that there exist non-singular matrices such that
A = PBQ, B = RCS
Then A = P(RCS)Q = (PR)C(SQ) ...(6)
As PR and SQ, being products of non-singular matrices, are non-singular, (6) implies
A ~ C
2-23. Rank of a Matrix
A natural number r is said to be the rank of a matrix A if it has the following two properties:
(i) There is at least one non-zero minor of the matrix A of order r.
(ii) Every minor of A of order (r + 1), if any, vanishes.
As every minor of order (r + 2) can be expanded as the sum of multiples of minors of order (r + 1), property (ii) implies that every minor of order (r + 2) vanishes. In fact, property (ii) implies that every minor of order greater than r will vanish.
Thus, briefly, we can say that the rank of a matrix is the largest order of any non-vanishing minor of the matrix.
Usually the rank of the matrix A is denoted by the symbol ρ(A).
From the above definition of the rank of a matrix we have the following useful results for determining the rank of a matrix:
1. The rank of every zero (null) matrix is zero.
2. The rank of every non-zero matrix is ≥ 1.
3. If every (r + 1)-rowed minor of a matrix vanishes (or if the matrix does not possess any (r + 1)-rowed minor), then the rank of the matrix is ≤ r.
4. If there is at least one non-zero minor of order r of a matrix, then the rank of the matrix is ≥ r.
5. The rank of every n-square non-singular matrix is n.
6. If every (r + 1)-rowed minor of a matrix is zero, then every higher order minor is automatically zero.
7. (a) The rank of any m × n matrix is ≤ m if m ≤ n.
(b) The rank of any m × n matrix is ≤ n if n ≤ m.
For example:
(i) If A is a square matrix of order n such that |A| ≠ 0, then ρ(A) = n.
(ii) If I_n is a unit matrix of order n, then |I_n| ≠ 0, so that ρ(I_n) = n.
(iii) If A is any diagonal matrix with n non-zero diagonal elements, then |A| ≠ 0, so that ρ(A) = n.
Theorem 2-9. The rank of a matrix remains invariant under elementary operations.
In other words, equivalent matrices have the same rank.
Proof. Let A be any m × n matrix of rank r, so that every minor of A of order (r + 1) vanishes. Let B be the matrix obtained by performing an elementary row operation on A. (The theorem can be proved for elementary column operations in the same way.) We shall prove the theorem in three stages.
(i) Let s be the rank of the matrix B obtained by performing the elementary row operation Rij.
Let B₀ be any (r + 1)-rowed square sub-matrix of B. Then the (r + 1) rows of B₀ are also rows of some (r + 1)-rowed sub-matrix of A, say A₀. As the interchange of any two rows of a determinant changes only the sign of the determinant, we have
|B₀| = ±|A₀|
Since every (r + 1)-rowed minor of A vanishes, we must have
|A₀| = 0 ...(1)
Therefore eqn. (1) gives
|B₀| = 0,
i.e. every (r + 1)-rowed minor of B vanishes, so that
ρ(B) ≤ r, i.e. s ≤ r ...(2)
Since A can be obtained by the application of the elementary row operation Rij on B, by interchanging the roles of A and B we find that
r ≤ s ...(3)
Comparing (2) and (3), we get
r = s
Hence the interchange of any two rows does not alter the rank of a matrix.
(ii) Now let the matrix B be obtained by performing the elementary row operation Ri(λ), and let s be the rank of the matrix B.
Let B₀ be any (r + 1)-rowed square sub-matrix of B and let A₀ be the correspondingly placed sub-matrix of A. The effect of the elementary row operation on A₀ is either
(a) to leave it unchanged, or
(b) to multiply one of its rows by λ.
Therefore we get either
|B₀| = |A₀| or |B₀| = λ|A₀| ...(4)
(since, if we multiply a row of a determinant by a non-zero number λ, the whole determinant is multiplied by the number λ).
Since ρ(A) = r, every minor of order (r + 1) is zero, i.e. |A₀| = 0. Then eqn. (4) gives
|B₀| = 0,
i.e. every minor of order (r + 1) of B vanishes, so that
ρ(B) ≤ r, i.e. s ≤ r ...(5)
Since A can be obtained by the application of the elementary row operation Ri(1/λ) on B, by interchanging the roles of A and B we find that
r ≤ s ...(6)
Comparing (5) and (6), we get
r = s
Thus the multiplication of the elements of a row by a non-zero number does not alter the rank of the matrix.
(iii) Now let the matrix B be obtained by performing the elementary row operation Rij(λ) on the matrix A, and let s be the rank of the matrix B.
Let B₀ be any (r + 1)-rowed square sub-matrix of B, and let A₀ be the correspondingly placed sub-matrix of A. The effect of the elementary row operation on A₀ is one of the following:
(a) to leave it unchanged;
(b) to add to the elements of one row, λ times the corresponding elements of another row of A₀;
(c) to add to the elements of one row, λ times the corresponding elements of another row which is not a row of A₀.
In cases (a) and (b), |B₀| = |A₀|.
In case (c), |B₀| = |A₀| ± λ times another (r + 1)-rowed minor of A.
Since every (r + 1)-rowed minor of A vanishes, |A₀| = 0; so in all the cases
|B₀| = 0,
i.e. every minor of order (r + 1) of B vanishes, so that
ρ(B) ≤ r, i.e. s ≤ r ...(7)
Since A can be obtained by the application of the elementary row operation Rij(-λ) on B, by interchanging the roles of A and B we find that
r ≤ s ...(8)
Comparing (7) and (8), we get
r = s
Thus the addition to the elements of a row of the corresponding elements of another row multiplied by a non-zero number does not alter the rank of the matrix.
Combining stages (i), (ii) and (iii), we conclude that the rank of a matrix remains invariant under elementary row operations. By parallel arguments, we can show that the rank of a matrix remains invariant under elementary column operations.
Thus, finally, we conclude that the rank of a matrix remains invariant under elementary operations.
Keeping in mind the definition of equivalent matrices, the above result in other words states that equivalent matrices have the same rank.
Cor. By theorem 2-7 we know that every elementary row (or column) operation on a matrix is equivalent to pre-multiplication (or post-multiplication) with the corresponding elementary matrix.
Combining this result with the above theorem, we conclude that pre-multiplication and post-multiplication by any elementary matrix (or by any finite chain of elementary matrices) does not alter the rank of a matrix.
Ex. 20. Find the ranks of the following matrices. (Kanpur Math. 1998; Agra Math. 1995; Rohilkhand Math. 2000)
Solution. (i) Here the first given matrix A is of order 3 × 4, so the highest order of any minor of A is 3. Evaluating one of the 3-rowed minors of A gives the value -6 ≠ 0.
Thus there is at least one 3-square non-zero minor of the matrix A.
Hence the rank of the matrix A is ρ(A) = 3.
(ii) For the second given 3-square matrix A, the 3-square minor, which is the highest order minor of A, vanishes, i.e. |A| = 0.
Now consider a 2-rowed minor of A, e.g.
| 2 1 |
| 0 3 | = 6 ≠ 0
Thus there is at least one 2-rowed minor of the matrix A which does not vanish. Hence the rank of the matrix is ρ(A) = 2.
(iii) For the third given matrix, |A| = 0 and also each of the 2-rowed minors of A is zero. But the matrix is non-zero, so it has at least one non-zero 1-square minor. Hence the rank of the matrix is ρ(A) = 1.
Ex. 21. Find the rank of the matrix A, where
[A] = [  6  1  3  8 ]
      [  4  2  6 -1 ]
      [ 10  3  9  7 ]
      [ 16  4 12 15 ]
Solution. The matrix is a 4 × 4 matrix, so we can have minors of orders 1, 2, 3, 4.
The only minor of order 4 is the determinant
      |  6  1  3  8 |   |  6  1  3  8 |
|A| = |  4  2  6 -1 | = |  4  2  6 -1 |   [operation R4 - R3]
      | 10  3  9  7 |   | 10  3  9  7 |
      | 16  4 12 15 |   |  6  1  3  8 |
Taking the common factor 3 out of C3,
|A| = 3 |  6  1  1  8 |
        |  4  2  2 -1 |
        | 10  3  3  7 |
        |  6  1  1  8 |
= 0, as C2 and C3 are identical.
Hence ρ(A) < 4.
One minor of order 3:
|  6  1  3 |       |  6  1  1 |
|  4  2  6 | = 3 · |  4  2  2 | = 0 [since C2 and C3 are identical]
| 10  3  9 |       | 10  3  3 |
Similarly it can be shown that all minors of order 3 are zero.
∴ ρ(A) < 3
One minor of order 2 is
| 6 1 |
| 4 2 | = 12 - 4 = 8 ≠ 0
Hence ρ(A) = 2.
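A numerical check of Ex. 21 (NumPy assumed; `np.linalg.matrix_rank` computes the rank from singular values rather than minors, but gives the same answer):

```python
import numpy as np

A = np.array([[ 6, 1,  3,  8],
              [ 4, 2,  6, -1],
              [10, 3,  9,  7],
              [16, 4, 12, 15]], dtype=float)

print(np.isclose(np.linalg.det(A), 0.0))   # the order-4 minor vanishes
print(np.linalg.matrix_rank(A))            # 2
```

In fact row 3 = row 1 + row 2 and row 4 = row 1 + row 3 here, so only two rows are independent, which is why the rank is 2.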
2-24. Reduction of a Non-zero Matrix to Normal Form
The following theorem is useful for reducing a non-zero matrix to normal form:
Theorem 2-10. Every non-zero matrix of rank r can be reduced to the form
[ I_r  0 ]
[  0   0 ]
by a finite chain of elementary operations, where I_r is the unit matrix of order r.
Proof. Let A = [a_ij] be an m × n matrix of rank r.
As A is a non-zero matrix, it will have at least one non-zero element, say a_ij ≠ 0.
