
MATRIX AND TENSOR CALCULUS

With Applications to Mechanics, Elasticity and Aeronautics

ARISTOTLE D. MICHAL

DOVER PUBLICATIONS, INC.

Mineola, New York

Bibliographical Note

This Dover edition, first published in 2008, is an unabridged republication of the work originally published in 1947 by John Wiley and Sons, Inc., New York, as part of the GALCIT (Graduate Aeronautical Laboratories, California Institute of Technology) Aeronautical Series.

Library of Congress Cataloging-in-Publication Data

Michal, Aristotle D., 1899-
Matrix and tensor calculus : with applications to mechanics, elasticity, and aeronautics / Aristotle D. Michal. - Dover ed.

p. cm.

Originally published: New York : J. Wiley, [1947]. Includes index.
ISBN-13: 978-0-486-46246-2
ISBN-10: 0-486-46246-3
1. Calculus of tensors. 2. Matrices. I. Title.

QA433.M45 2008

515'.63-dc22

2008000472

Manufactured in the United States of America Dover Publications, Inc., 31 East 2nd Street, Mineola, N.Y. 11501

",

To my wife

Luddye Kennerly Michal

EDITOR'S PREFACE

The editors believe that the reader who has finished the study of this book will see the full justification for including it in a series of volumes dealing with aeronautical subjects. However, the editor's preface usually is addressed to the reader who starts with the reading of the volume, and therefore a few words on our reasons for including Professor Michal's book on matrices and tensors in the GALCIT series seem to be appropriate.

Since the beginnings of the modern age of the aeronautical sciences a close cooperation has existed between applied mathematics and aeronautics. Engineers at large have always appreciated the help of applied mathematics in furnishing them practical methods for numerical and graphical solutions of algebraic and differential equations. However, aeronautical and also electrical engineers are faced with problems reaching much further into several domains of modern mathematics. As a matter of fact, these branches of engineering science have often exerted an inspiring influence on the development of novel methods in applied mathematics. One branch of applied mathematics which fits especially the needs of the scientific aeronautical engineer is the matrix and tensor calculus. The matrix operations represent a powerful method for the solution of problems dealing with mechanical systems with a certain number of degrees of freedom. The tensor calculus gives admirable insight into complex problems of the mechanics of continuous media, the mechanics of fluids, and elastic and plastic media.

Professor Michal's course on the subject given in the frame of the war-training program on engineering science and management has found a surprisingly favorable response among engineers of the aeronautical industry in the Southern Californian region. The editors believe that the engineers throughout the country will welcome a book which skillfully unites exact and clear presentation of mathematical statements with fitness for immediate practical applications.

THEODORE VON KÁRMÁN
CLARK B. MILLIKAN


PREFACE

This volume is based on a series of lectures on matrix calculus and tensor calculus, and their applications, given under the sponsorship of the Engineering, Science, and Management War Training (ESMWT) program, from August 1942 to March 1943. The group taking the course included a considerable number of outstanding research engineers and directors of engineering research and development. I am very grateful to these men who welcomed me and by their interest in my lectures encouraged me.

The purpose of this book is to give the reader a working knowledge of the fundamentals of matrix calculus and tensor calculus, which he may apply to his own field. Mathematicians, physicists, meteorologists, and electrical engineers, as well as mechanical and aeronautical engineers, will discover principles applicable to their respective fields. The last group, for instance, will find material on vibrations, aircraft flutter, elasticity, hydrodynamics, and fluid mechanics.

The book is divided into two independent parts, the first dealing with the matrix calculus and its applications, the second with the tensor calculus and its applications. The minimum of mathematical concepts is presented in the introduction to each part, the more advanced mathematical ideas being developed as they are needed in connection with the applications in the later chapters. The two-part division of the book is primarily due to the fact that matrix and tensor calculus are essentially two distinct mathematical studies. The matrix calculus is a purely analytic and algebraic subject, whereas the tensor calculus is geometric, being connected with transformations of coordinates and other geometric concepts. A careful reading of the first chapter in each part of the book will clarify the meaning of the word "tensor," which is occasionally misused in modern scientific and engineering literature.

I wish to acknowledge with gratitude the kind cooperation of the Douglas Aircraft Company in making available some of its work in connection with the last part of Chapter 7 on aircraft flutter. It is a pleasure to thank several of my students, especially Dr. J. E. Lipp and Messrs. C. H. Putt and Paul Lieber of the Douglas Aircraft Company, for making available the material worked out by Mr. Lieber and his research group. I am also very glad to thank the members of my seminar on applied mathematics at the California Institute for their helpful suggestions. I wish to make special mention of Dr. C. C.


Lin, who not only took an active part in the seminar but who also kindly consented to have his unpublished researches on some dramatic applications of the tensor calculus to boundary-layer theory in aeronautics incorporated in Chapter 18. This furnishes an application of the Riemannian tensor calculus described in Chapter 17. I should like also to thank Dr. W. Z. Chien for his timely help.

I gratefully acknowledge the suggestions of my colleague Professor Clark B. Millikan concerning ways of making the book more useful to aeronautical engineers.

Above all, I am indebted to my distinguished colleague and friend, Professor Theodore von Kármán, director of the Guggenheim Graduate School of Aeronautics at the California Institute, for honoring me by an invitation to put my lecture notes in book form for publication in the GALCIT series. I have also the delightful privilege of expressing my indebtedness to Dr. Kármán for his inspiring conversations and wise counsel on applied mathematics in general and this volume in particular, and for encouraging me to make contacts with the aircraft industry on an advanced mathematical level.

I regret that, in order not to delay unduly the publication of this book, I am unable to include some of my more recent unpublished researches on the applications of the tensor calculus of curved infinite-dimensional spaces to the vibrations of elastic beams and other elastic media.


ARISTOTLE D. MICHAL

CALIFORNIA INSTITUTE OF TECHNOLOGY
October, 1946

CONTENTS

PART I
MATRIX CALCULUS AND ITS APPLICATIONS

CHAPTER                                                              PAGE

1. ALGEBRAIC PRELIMINARIES
     Introduction .... 1
     Definitions and notations .... 1
     Elementary operations on matrices .... 2

2. ALGEBRAIC PRELIMINARIES (Continued)
     Inverse of a matrix and the solution of linear equations .... 8
     Multiplication of matrices by numbers, and matric polynomials .... 11
     Characteristic equation of a matrix and the Cayley-Hamilton theorem .... 12

3. DIFFERENTIAL AND INTEGRAL CALCULUS OF MATRICES
     Power series in matrices .... 15
     Differentiation and integration depending on a numerical variable .... 16

4. DIFFERENTIAL AND INTEGRAL CALCULUS OF MATRICES (Continued)
     Systems of linear differential equations with constant coefficients .... 20
     Systems of linear differential equations with variable coefficients .... 21

5. MATRIX METHODS IN PROBLEMS OF SMALL OSCILLATIONS
     Differential equations of motion .... 24
     Illustrative example .... 26

6. MATRIX METHODS IN PROBLEMS OF SMALL OSCILLATIONS (Continued)
     Calculation of frequencies and amplitudes .... 28

7. MATRIX METHODS IN THE MATHEMATICAL THEORY OF AIRCRAFT FLUTTER .... 32

8. MATRIX METHODS IN ELASTIC DEFORMATION THEORY .... 38

PART II
TENSOR CALCULUS AND ITS APPLICATIONS

9. SPACE LINE ELEMENT IN CURVILINEAR COORDINATES
     Introductory remarks .... 42
     Notation and summation convention .... 42
     Euclidean metric tensor .... 44

10. VECTOR FIELDS, TENSOR FIELDS, AND EUCLIDEAN CHRISTOFFEL SYMBOLS
     The strain tensor .... 48
     Scalars, contravariant vectors, and covariant vectors .... 49
     Tensor fields of rank two .... 50
     Euclidean Christoffel symbols .... 53

11. TENSOR ANALYSIS
     Covariant differentiation of vector fields .... 56
     Tensor fields of rank r = p + q, contravariant of rank p and covariant of rank q .... 57
     Properties of tensor fields .... 59

12. LAPLACE EQUATION, WAVE EQUATION, AND POISSON EQUATION IN CURVILINEAR COORDINATES
     Some further concepts and remarks on the tensor calculus .... 60
     Laplace's equation .... 62
     Laplace's equation for vector fields .... 65
     Wave equation .... 65
     Poisson's equation .... 66

13. SOME ELEMENTARY APPLICATIONS OF THE TENSOR CALCULUS TO HYDRODYNAMICS
     Navier-Stokes differential equations for the motion of a viscous fluid .... 69
     Multiple-point tensor fields .... 71
     A two-point correlation tensor field in turbulence .... 73

14. APPLICATIONS OF THE TENSOR CALCULUS TO ELASTICITY THEORY
     Finite deformation theory of elastic media .... 75
     Strain tensors in rectangular coordinates .... 77
     Change in volume under elastic deformation .... 79

15. HOMOGENEOUS AND ISOTROPIC STRAINS, STRAIN INVARIANTS, AND VARIATION OF STRAIN TENSOR
     Strain invariants .... 82
     Homogeneous and isotropic strains .... 83
     A fundamental theorem on homogeneous strains .... 84
     Variation of the strain tensor .... 86

16. STRESS TENSOR, ELASTIC POTENTIAL, AND STRESS-STRAIN RELATIONS
     Stress tensor .... 89
     Elastic potential .... 91
     Stress-strain relations for an isotropic medium .... 93

17. TENSOR CALCULUS IN RIEMANNIAN SPACES AND THE FUNDAMENTALS OF CLASSICAL MECHANICS .... 95
     Multidimensional Euclidean spaces .... 96
     Riemannian geometry .... 98
     Curved surfaces as examples of Riemannian spaces .... 98
     The Riemann-Christoffel curvature tensor .... 99
     Geodesics .... 100
     Equations of motion of a dynamical system with n degrees of freedom .... 101

18. APPLICATIONS OF THE TENSOR CALCULUS TO BOUNDARY-LAYER THEORY
     Incompressible and compressible fluids .... 103
     Boundary-layer equations for the steady motion of a homogeneous incompressible fluid .... 104

NOTES ON PART I .... 111
NOTES ON PART II .... 114
REFERENCES FOR PART I .... 124
REFERENCES FOR PART II .... 125
INDEX .... 129

PART I

MATRIX CALCULUS AND ITS APPLICATIONS

CHAPTER 1

ALGEBRAIC PRELIMINARIES

Introduction.

Although matrices have been investigated by mathematicians for almost a century, their thoroughgoing application to physics,¹ engineering, and other subjects² (such as cryptography, psychology, and educational and other statistical measurements) has taken place only since 1925. In particular, the use of matrices in aeronautical engineering in connection with small oscillations, aircraft flutter, and elastic deformations did not receive much attention before 1935. It is interesting to note that the only book on matrices with systematic chapters on the differential and integral calculus of matrices was written by three aeronautical engineers.‡

Definitions and Notations.

A table of mn numbers, called elements, arranged in a rectangular array of m rows and n columns is called a matrix³ with m rows and n columns. If a^i_j is the element in the ith row and jth column, then the matrix can be written down in the following pictorial form with the conventional double bar on each side.

|| a^1_1, a^1_2, ···, a^1_n ||
|| a^2_1, a^2_2, ···, a^2_n ||
|| ······················ ||
|| a^m_1, a^m_2, ···, a^m_n ||

In the expression a^i_j the index i is called a superscript and the index j a subscript. It is to be emphasized that the superscript i in a^i_j is not the ith power of a variable a. If the number m of rows is equal to the number n of columns, then

† Superior numbers refer to the notes at the end of the book.

‡ Frazer, Duncan, and Collar, Elementary Matrices and Some Applications to Dynamics and Differential Equations, Cambridge University Press, 1938.


the matrix is called a square matrix.† The number of rows, or equivalently the number of columns, will be called the order of the square matrix. Besides square matrices, two other general types of matrices occur frequently.‡ One is the row matrix

|| a_1, a_2, ···, a_n ||;

the other is the column matrix

|| a^1 ||
|| a^2 ||
|| ··· ||
|| a^m ||.

It is to be observed that the superscript 1 in the elements of the row matrix was omitted. Similarly the subscript 1 in the elements of the column matrix was also omitted. All this is done in the interest of brevity; the index notation is unnecessary when the index, whether a subscript or superscript, cannot have at least two values. It is often very convenient to have a more compact notation for matrices than the one just given. This compact notation is as follows:

if a^i_j is the element of a matrix in the ith row and jth column we can write simply

|| a^i_j ||

instead of stringing out all the mn elements of the matrix. In particular, a row matrix with element a_k in the kth column will be written

|| a_k ||,

and a column matrix with element a^k in the kth row will be written

|| a^k ||.

Elementary Operations on Matrices.

Before we can use matrices effectively we must define the addition of matrices and the multiplication of matrices. The definitions are those that have been found most useful in the general theory and in the applications. Let A and B be matrices of the same type, i.e., matrices with the same number m of rows and the same number n of columns. Let

A = || a^i_j ||,   B = || b^i_j ||.

Then by the sum A + B of the matrices A and B we shall mean the

† It will occasionally be convenient to write a_{ij} for the element in the ith row and jth column of a square matrix.

‡ See Chapter 5 and the following chapters.


uniquely obtainable matrix

C = || c^i_j ||,

where

c^i_j = a^i_j + b^i_j   (i = 1, 2, ···, m;  j = 1, 2, ···, n).

In other words, to add two matrices of the same type, calculate the matrix whose elements are precisely the numerical sum of the corresponding elements of the two given matrices. The addition of two matrices of different type has no meaning for us. To complete the preliminary definitions we must make clear what we mean when we say that two matrices are equal. Two matrices A = || a^i_j || and B = || b^i_j || of the same type are equal, written as A = B, if and only if the numerical equalities a^i_j = b^i_j hold for each i and j.

Exercise

 

I,

-1,

5

 

A=

0,

0,

3,

-2

 

1.1,

2,

-4,

1

0,

0, -~,

1

0,

0,

-I,

3

I,

0,

2,

-4

Then

 

I,

-1,

0,

6

 

A + B =

0,

0,

2,

1

 

2.1,

2,

-2,-3
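
As a modern aside, the element-by-element rule can be checked by machine. The short sketch below assumes Python with the NumPy library; it merely re-adds the two matrices of the exercise.

    import numpy as np

    # The matrices A and B of the exercise (same type: 3 rows, 4 columns).
    A = np.array([[1.0, -1.0,  5.0,  5.0],
                  [0.0,  0.0,  3.0, -2.0],
                  [1.1,  2.0, -4.0,  1.0]])
    B = np.array([[0.0,  0.0, -5.0,  1.0],
                  [0.0,  0.0, -1.0,  3.0],
                  [1.0,  0.0,  2.0, -4.0]])

    # "+" acts element by element, exactly as in the definition above.
    print(A + B)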

The following results, embodied in a theorem, show that matric addition has some of the properties of numerical addition.

THEOREM. If A and B are any two matrices of the same type, then

A + B = B + A.

If C is any third matrix of the same type as A and B, then

(A + B) + C = A + (B + C).

Before we proceed with the definition of multiplication of matrices, a word or two must be said about two very important special square matrices. One is the zero matrix, i.e., a square matrix all of whose elements are zero,

|| 0, 0, ···, 0 ||
|| 0, 0, ···, 0 ||
|| ··········· ||
|| 0, 0, ···, 0 ||.


We can denote the zero matrix by the capital letter O. Occasionally we shall use the terminology zero matrix for a non-square matrix with zero elements. The other is the unit matrix, i.e., a matrix

I = || δ^i_j ||,

where

δ^i_j = 1   if i = j,
δ^i_j = 0   if i ≠ j.

In the more explicit notation

I = || 1, 0, 0, ···, 0 ||
    || 0, 1, 0, ···, 0 ||
    || ··············· ||
    || 0, 0, 0, ···, 1 ||.

One of the most useful and simplifying conventions in all mathematics is the summation convention: the repetition of an index once as a subscript and once as a superscript will indicate a summation over the total range of that index. For example, if the range of the indices is 1 to 5, then a_i b^i means

Σ_{i=1}^{5} a_i b^i,

or

a_1 b^1 + a_2 b^2 + a_3 b^3 + a_4 b^4 + a_5 b^5.

Again we warn the reader that the superscript i in b^i is not the ith power of a variable b. The definition of the multiplication of two matrices can now be given in a neat form with the aid of the summation convention.

Let

A = || a^1_1, a^1_2, ···, a^1_m ||
    || a^2_1, a^2_2, ···, a^2_m ||
    || ······················ ||
    || a^n_1, a^n_2, ···, a^n_m ||

be a matrix of n rows and m columns, and let

B = || b^1_1, b^1_2, ···, b^1_p ||
    || b^2_1, b^2_2, ···, b^2_p ||
    || ······················ ||
    || b^m_1, b^m_2, ···, b^m_p ||

be a matrix of m rows and p columns.


Then, by the product AB of the two matrices, we shall mean the matrix

C = || c^i_j ||,

where

c^i_j = a^i_k b^k_j   (i = 1, 2, ···, n;  j = 1, 2, ···, p).

If c^i_j is written out in extenso without the aid of the summation convention, we have

c^i_j = a^i_1 b^1_j + a^i_2 b^2_j + ··· + a^i_m b^m_j.

It should be emphasized here that, in order that the product AB of two matrices be well defined, the number of rows in the matrix B must be precisely equal to the number of columns in the matrix A. It follows in particular that, if A and B are square matrices of the same type, then AB as well as BA is always well defined. However, it must be emphasized that in general AB is not equal to BA, written as AB ≠ BA, even if both AB and BA are well defined. In other words, multiplication of matrices, unlike numerical multiplication, is not always commutative.

Exercise

The following example illustrates the non-commutativity of matrix multiplication. Take

A = || 0, 1 ||      and      B = || -1, 0 ||
    || 1, 0 ||                   ||  0, 1 ||,

so that

a^1_1 = 0,  a^1_2 = 1,  a^2_1 = 1,  a^2_2 = 0,

and

b^1_1 = -1,  b^1_2 = 0,  b^2_1 = 0,  b^2_2 = 1.

Now

c^1_1 = a^1_k b^k_1 = (0)(-1) + (1)(0) = 0,
c^1_2 = a^1_k b^k_2 = (0)(0) + (1)(1) = 1,
c^2_1 = a^2_k b^k_1 = (1)(-1) + (0)(0) = -1,
c^2_2 = a^2_k b^k_2 = (1)(0) + (0)(1) = 0.

Hence

AB = ||  0, 1 ||
     || -1, 0 ||.

Similarly

BA = || 0, -1 ||
     || 1,  0 ||.

But obviously AB ≠ BA.
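
A machine check of the same computation takes only a few lines; Python with NumPy is assumed in the sketch below.

    import numpy as np

    A = np.array([[0, 1],
                  [1, 0]])
    B = np.array([[-1, 0],
                  [ 0, 1]])

    print(A @ B)                          # ||  0, 1 / -1, 0 ||
    print(B @ A)                          # ||  0, -1 / 1, 0 ||
    print(np.array_equal(A @ B, B @ A))   # False: AB and BA differ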

The unit matrix I of order n has the interesting property that it commutes with all square matrices of the same order. In fact, if A is an arbitrary square matrix of order n, then

AI = IA = A.

The multiplication of row and column matrices with the same number of elements is instructive. Let

A = || a_i ||

be the row matrix and

B = || b^i ||

the column matrix. Then AB = a_i b^i, a number, or a matrix with one element (the double-bar notation has been omitted).

Exercise

If

A = || 1, 1, 0 ||      and      B = || 0 ||
                                    || 0 ||
                                    || 1 ||,

then

AB = (1)(0) + (1)(0) + (0)(1) = 0.

This example also illustrates the fact that the product of two matrices can be a zero matrix although neither of the multiplied matrices is a zero matrix.

The multiplication of a square matrix with a column matrix occurs frequently in the applications. A system of n linear algebraic equations in n unknowns x^1, x^2, ···, x^n

a^i_j x^j = b^i

can be written as a single matrix equation

AX = B

in the unknown column matrix X = || x^i ||, with the given square matrix A = || a^i_j || and column matrix B = || b^i ||.

A system of first-order differential equations

dx^i/dt = a^i_j x^j

can be written as one matric differential equation

dX/dt = AX.

Finally a system of second-order differential equations occurring in the theory of small oscillations

d²x^i/dt² = a^i_j x^j

can be written as one matric second-order differential equation

d²X/dt² = AX.

The above illustrations suffice to show the compactness and simplicity of matric equations when use is made of matrix multiplication.
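
In modern terms, a system a^i_j x^j = b^i is set up and solved in a few lines. The sketch below assumes Python with NumPy, and the particular numbers are hypothetical, chosen only for illustration.

    import numpy as np

    # A hypothetical 2-by-2 instance of the matric equation AX = B.
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    B = np.array([4.0, 5.0])          # the column matrix || b^i ||

    X = np.linalg.solve(A, B)         # the unknown column matrix || x^i ||
    print(X)                          # [1.4  1.2]
    print(A @ X)                      # reproduces B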

Exercises

1. Compute the matrix AB when

Is BA defined?

A.IWI audB-II_~~!II·

Explain.

I. Compute the matrix AX when

A = II-l: ~:!II and X = II j II·

Is XA defined? Explain.

CHAPTER 2

ALGEBRAIC PRELIMINARIES (Continued)

Inverse of a Matrix and the Solution of Linear Equations.¹

The inverse a^{-1}, or reciprocal, of a real number a is well defined if a ≠ 0. There is an analogous operation for square matrices. If A is a square matrix

A = || a^i_j ||

of order n and if the determinant | a^i_j | ≠ 0, or in more extended notation

| a^1_1, a^1_2, ···, a^1_n |
| a^2_1, a^2_2, ···, a^2_n |
| ····················· |
| a^n_1, a^n_2, ···, a^n_n |   ≠ 0,

then there exists a unique matrix, written A^{-1} in analogy to the inverse of a number, with the important properties

(2·1)   AA^{-1} = I,   A^{-1}A = I    (I is the unit matrix).

The matrix A^{-1}, if it exists, is called the inverse matrix of A. In fact, the following more extensive result holds good. A necessary and sufficient condition that a matrix A = || a^i_j || have an inverse is that the associated determinant | a^i_j | ≠ 0.

From now on we shall refer to the determinant a = | a^i_j | as the determinant a of the matrix A. Occasionally we shall write | A | for the determinant of A.

The general form of the inverse of a matrix can be given with the aid of a few results from the theory of determinants. Let a = | a^i_j | be a determinant, not necessarily different from zero. Let α^i_j be the cofactor† of a^j_i in the determinant a; note that the indices i and j are interchanged in α^i_j as compared with a^j_i. Then the following results

† The (n - 1)-rowed determinant obtained from the determinant a by striking out the jth row and ith column in a, and then multiplying the result by (-1)^{i+j}.


come from the properties of determinants:

a^i_k α^k_j = a δ^i_j   (expansion by elements of ith row);
α^i_k a^k_j = a δ^i_j   (expansion by elements of kth column).

If then the determinant a ≠ 0, we obtain the following relations, on defining β^i_j = α^i_j / a:

(2·2)   a^i_k β^k_j = δ^i_j,   β^i_k a^k_j = δ^i_j.

Let A = || a^i_j ||, B = || β^i_j ||; then relations 2·2 state, in terms of matrix multiplication, that

AB = I,   BA = I.

In other words, the matrix B is precisely the inverse matrix A^{-1} of A. To summarize, we have the following computational result: if the determinant a of a square matrix A = || a^i_j || is different from zero, then the inverse matrix A^{-1} of A exists and is given by

A^{-1} = || β^i_j ||,

where β^i_j = α^i_j / a and α^i_j is the cofactor of a^j_i in the determinant a of the matrix A.
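
The computational result just stated translates directly into a small program. The sketch below, assuming Python with NumPy, forms each cofactor α^i_j by striking out the jth row and the ith column; it mirrors the formula and makes no claim to efficiency.

    import numpy as np

    def cofactor_inverse(A):
        # beta^i_j = alpha^i_j / a, where alpha^i_j is the cofactor of a^j_i:
        # strike out the jth row and the ith column, take the determinant of
        # the remaining (n-1)-rowed array, and attach the sign (-1)**(i+j).
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        a = np.linalg.det(A)
        if abs(a) < 1e-12:
            raise ValueError("determinant is zero; no inverse exists")
        beta = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
                beta[i, j] = (-1) ** (i + j) * np.linalg.det(minor) / a
        return beta

    A = np.array([[0.0,  1.0],
                  [-1.0, 0.0]])
    print(cofactor_inverse(A))        # || 0, -1 / 1, 0 ||  (see the exercise below)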

These results on the inverse of a matrix have a simple application to the solution of n non-homogeneous linear (algebraic) equations in n unknowns x^1, x^2, ···, x^n. Let the n equations be

a^i_j x^j = b^i

(the n² numbers a^i_j are given and the n numbers b^i are given). On defining the matrices A = || a^i_j ||, X = || x^i ||, B = || b^i ||, we can, as in the first chapter, write the n linear equations as one matric equation

AX = B

in the unknown column matrix X. If we now assume that the determinant a of the matrix A is not zero, the inverse matrix A^{-1} will exist and we shall have by matrix multiplication A^{-1}(AX) = A^{-1}B. Since A^{-1}A = I and IX = X, we obtain the solution

X = A^{-1}B

of the equation AX = B. In other words, if α^i_j is the cofactor of a^j_i in the determinant a, then

x^i = (α^i_j / a) b^j

is the solution of the n equations a^i_j x^j = b^i under the condition a ≠ 0. This is equivalent to Cramer's rule² for the solution of non-homogeneous linear equations as ratios of determinants. It is more explicit than Cramer's rule in that the determinants in the numerator of the solution expressions are expanded in terms of the given right-hand sides b^1, b^2, ···, b^n of the linear equations.

It is sometimes possible to solve the equations a^i_j x^j = b^i readily and obtain x^i = λ^i_j b^j. The inverse matrix A^{-1} to A = || a^i_j || can then be read off by inspection; in fact, A^{-1} = || λ^i_j ||.

Practical methods, including approximate methods, for the calculation of the inverse (sometimes called reciprocal) of a matrix are given in Chapter IV of the book on matrices by Frazer, Duncan, and Collar. A method based on the Cayley-Hamilton theorem will be presented at the end of the chapter. A simple example on the inverse of a matrix would be instructive at this point.

Exercise

Consider the two-rowed matrix

A = ||  0, 1 ||
    || -1, 0 ||.

According to our notations

a^1_1 = 0,  a^1_2 = 1,  a^2_1 = -1,  a^2_2 = 0.

Hence the cofactors α^i_j of A will be

α^1_1 = (cofactor of a^1_1) = 0,
α^1_2 = (cofactor of a^2_1) = -1,
α^2_1 = (cofactor of a^1_2) = 1,
α^2_2 = (cofactor of a^2_2) = 0.

But the determinant of A is a = 1. This gives us immediately

β^1_1 = 0,  β^1_2 = -1,  β^2_1 = 1,  β^2_2 = 0.

Now A^{-1} = || β^i_j ||, where β^i_j = α^i_j / a. In other words,

A^{-1} = || 0, -1 ||
         || 1,  0 ||.

Approximate numerical examples abound in the study of airplane-wing oscillations. For example, if

A = || 0.0176,   0.000128,   0.00289   ||
    || 0.000128, 0.00000824, 0.0000413 ||
    || 0.00289,  0.0000413,  0.000725  ||,

then approximately

A^{-1} = ||  170.9,    1,063.,     -741.7  ||
         ||  1,063.,   176,500., -14,290.  ||
         || -741.7,  -14,290.,     5,150.  ||.
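
Such a table is reproduced at once by machine. The check below, assuming Python with NumPy, inverts the three-rowed matrix and recovers the quoted entries to the precision given.

    import numpy as np

    A = np.array([[0.0176,   0.000128,   0.00289],
                  [0.000128, 0.00000824, 0.0000413],
                  [0.00289,  0.0000413,  0.000725]])

    # Agrees with the tabulated inverse to the number of figures quoted above.
    with np.printoptions(precision=1, suppress=True):
        print(np.linalg.inv(A))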


From the rule for the product of two determinants,³ the following result is immediate on observing closely the definition of the product of two matrices:

If A and B are two square matrices with determinants a and b respectively, then the determinant c of the matric product C = AB is given by the numerical multiplication of the two numbers a and b, i.e., c = ab.

This result enables us to calculate immediately the determinant of the inverse of a matrix. Since AA^{-1} = I, and since the determinant of the unit matrix I is 1, the above result shows that the determinant of A^{-1} is 1/a, where a is the determinant of A.

From the associativity of the operation of multiplication of square matrices and the properties of inverses of matrices, the usual index laws for powers of numbers hold good for powers of matrices even though matric multiplication is not commutative. By the associativity of the operation of matric multiplication we mean that, if A, B, C are any three square matrices of the same order, then†

A(BC) = (AB)C.

If then A is a square matrix, there is a unique matrix AA ··· A with s factors for any given positive integer s. We shall write this matrix as A^s and call it the sth power of the matrix A. Now if we define A^0 = I, the unit matrix, then the following index laws hold for all positive integral and zero indices r and s:

A^r A^s = A^s A^r = A^{r+s},   (A^r)^s = (A^s)^r = A^{rs}.

Furthermore, these index laws hold for all integral r and s, positive or negative, whenever A^{-1} exists. This is with the understanding that negative powers of matrices are defined as positive powers of their inverses, i.e., A^{-r} is defined for any positive integer r by

A^{-r} = (A^{-1})^r.

Multiplication of Matrices by Numbers, and Matric Polynomials.

Besides the operations on matrices that have been discussed up to this section, there is still another one that is of great importance. If A = || a^i_j || is a matrix, not necessarily a square matrix, and a is a number, real or complex, then by aA we shall mean the matrix || a a^i_j ||. This operation of multiplication by numbers enables us to consider matrix polynomials of type

(2·3)   a_0 A^n + a_1 A^{n-1} + a_2 A^{n-2} + ··· + a_{n-1} A + a_n I.

† Similarly, if the two square matrices A and B and the column matrix X have the same order, then A(BX) = (AB)X.


In expression 2·3, a_0, a_1, ···, a_n are numbers, A is a square matrix, and I is the unit matrix of the same order as A. In a given matric polynomial, the a_i are given numbers, and A is a variable square matrix.

Characteristic Equation of a Matrix and the Cayley-Hamilton Theorem.

We are now in a position to discuss some results whose importance cannot be overestimated in the study of vibrations of all sorts (see Chapter 6).

If A = || a^i_j || is a given square matrix of order n, one can form the matrix λI - A, called the characteristic matrix of A. The determinant of this matrix, considered as a function of λ, is a (numerical) polynomial of degree n in λ, called the characteristic function of A. More explicitly, let f(λ) = | λI - A |; then f(λ) has the form

f(λ) = λ^n + a_1 λ^{n-1} + ··· + a_{n-1} λ + a_n.

Since a_n = f(0), we see that a_n = | -A |; i.e., a_n is (-1)^n times the determinant of the matrix A. The algebraic equation of degree n for λ

f(λ) = 0

is called the characteristic equation of the matrix A, and the roots of the equation are called the characteristic roots of A.

We shall close this chapter with what is, perhaps, the most famous theorem in the algebra of matrices.

THE CAYLEY-HAMILTON THEOREM. Let

f(λ) = λ^n + a_1 λ^{n-1} + ··· + a_{n-1} λ + a_n

be the characteristic function of a matrix A, and let I and 0 be the unit matrix and zero matrix respectively with an order equal to that of A. Then the matric polynomial equation

X^n + a_1 X^{n-1} + ··· + a_{n-1} X + a_n I = 0

is satisfied by X = A.

Example

Take

A = || 0, 1 ||
    || 1, 0 ||;

then

f(λ) = |  λ, -1 |
       | -1,  λ |   = λ² - 1.

Here n = 2, and a_1 = 0, a_2 = -1. But

A² = || 1, 0 ||
     || 0, 1 ||.

Hence A² - I = 0.
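
Both this example and the general statement are easy to verify numerically. The sketch below assumes Python with NumPy; np.poly returns the coefficients of the characteristic function with leading coefficient 1, and the second matrix M is a hypothetical test case.

    import numpy as np

    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    print(A @ A - np.eye(2))          # f(A) = A^2 - I = the zero matrix

    # The same check for any square matrix M:
    M = np.array([[2.0, 1.0],
                  [0.0, 3.0]])        # a hypothetical example
    c = np.poly(M)                    # [1, a_1, ..., a_n]
    n = len(c) - 1
    f_of_M = sum(c[k] * np.linalg.matrix_power(M, n - k) for k in range(n + 1))
    print(np.round(f_of_M, 10))       # the zero matrix again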

The Cayley-Hamilton theorem is often laconically stated in the form "A matrix satisfies its own characteristic equation." In symbols, if f(λ) is the characteristic function for a matrix A, then f(A) = 0. Such statements are, of course, nonsensical if taken literally at their face value. However, such mnemonics are useful to those who thoroughly understand the statement of the Cayley-Hamilton theorem.

A knowledge of the characteristic function of a matrix enables one to compute the inverse of a matrix, if it exists, with the aid of the Cayley-Hamilton theorem. In fact, let A be an n-rowed square matrix with an inverse A^{-1}. This implies that the determinant a of A is not zero. Since 0 ≠ a_n = (-1)^n a, we find with the aid of the Cayley-Hamilton theorem that A satisfies the matric equation

I = -(1/a_n)[A^n + a_1 A^{n-1} + ··· + a_{n-2} A² + a_{n-1} A].

Multiplying both sides by A^{-1}, we see that the inverse matrix A^{-1} can be computed by the following formula:

(2·4)   A^{-1} = -(1/a_n)[A^{n-1} + a_1 A^{n-2} + ··· + a_{n-2} A + a_{n-1} I].

alA ,,-II +

To compute A -1 by formula 2·4 one has to know the coefficients a1. at, "', a,.-l, a" in the characteristic function of the given matrix A. Let A = II aj II i then the trace of the matrix A, written tr (A), is defined by tr (A) = ~, the sum of the n diagonal elements a~,~, "', 0.;. Define the numbers' 81, Bt, "', 8" by

(2·5)

so that 8r is the trace of the rth power of the given matrix A. It can be shown' by a long algebraic argument that the numbers a1, "', a,. can be computed successively by the following recurrence formulas:

81 = tr (A),

Bt =

tr (A2),

"',

8~ = tr (A~), "',

8" = tr (A")

(2·6)   a_1 = -s_1,
        a_2 = -(1/2)(a_1 s_1 + s_2),
        a_3 = -(1/3)(a_2 s_1 + a_1 s_2 + s_3),
        ·····································
        a_n = -(1/n)(a_{n-1} s_1 + a_{n-2} s_2 + ··· + a_1 s_{n-1} + s_n).

We can summarize our results in the following rule for the calculation of the inverse matrix A^{-1} to a given matrix A.

RULE FOR THE CALCULATION OF THE INVERSE MATRIX A^{-1}. First compute the first n - 1 powers A, A², ···, A^{n-1} of the given n-rowed matrix A. Then compute the diagonal elements only of A^n. Next compute the n numbers s_1, s_2, ···, s_n as defined in 2·5. Insert these values for the s_i in formula 2·6, and calculate a_1, a_2, ···, a_n successively by means of 2·6. Finally by formula 2·4 one can calculate A^{-1} from the knowledge of a_1, ···, a_n and the matrices A, A², ···, A^{n-1}. Notice that the whole A^n is not needed in the calculation but merely s_n = tr (A^n), the trace of A^n.

Punched-card methods can be used to calculate the powers of the matrix A. The rest of the calculations are easily made by standard calculating machines. Hence one method of getting numerical solutions of a system of n linear equations in the n unknowns x^i

a^i_j x^j = b^i    (| a^i_j | ≠ 0)

is to compute A^{-1} of A = || a^i_j || by the above rule with the aid of punched-card methods and then to compute A^{-1}B, where B = || b^i ||, by punched-card methods. The solution column matrix X = || x^i || is given by X = A^{-1}B.
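
In modern terms the rule is a short program. The sketch below, assuming Python with NumPy, carries out formulas 2·5, 2·6, and 2·4 exactly as stated; the matrix of Exercise 3 below serves as a test case.

    import numpy as np

    def inverse_by_traces(A):
        # The rule of this chapter: traces (2.5), recurrence (2.6), formula (2.4).
        # (The full A^n is formed here for brevity; only its trace is used.)
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        powers = [np.eye(n)]
        for _ in range(n):
            powers.append(powers[-1] @ A)                    # A, A^2, ..., A^n
        s = [np.trace(powers[r]) for r in range(1, n + 1)]   # s_1, ..., s_n

        a = []                                               # a_1, ..., a_n by 2.6
        for r in range(1, n + 1):
            total = s[r - 1]
            for k in range(1, r):
                total += a[k - 1] * s[r - k - 1]             # a_k * s_(r-k)
            a.append(-total / r)

        if a[-1] == 0:
            raise ValueError("a_n = 0, so the determinant vanishes")

        # 2.4: A^{-1} = -(1/a_n)[A^(n-1) + a_1 A^(n-2) + ... + a_(n-1) I]
        acc = powers[n - 1].copy()
        for k in range(1, n):
            acc += a[k - 1] * powers[n - 1 - k]
        return -acc / a[-1]

    A = np.array([[15, 11, 6,  -9, -15],
                  [ 1,  3, 9,  -3,  -8],
                  [ 7,  6, 6,  -3, -11],
                  [ 7,  7, 5,  -3, -11],
                  [17, 12, 5, -10, -16]], dtype=float)
    print(np.round(-225 * inverse_by_traces(A)))   # the bracketed matrix of Exercise 3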

Exercises

1. Calou1ate the inverse matrix to A '" II ~~II by the last method of tlUachapter.

Solution.

II ~~ II by the last method of tlUachapter. Solution. Now A-I '" - 1 -

Now A-I '" -

1

-

lit

[A + IItl] '" A. Hence

A-I '" - 1 - lit [A + IItl] '" A. Hence I. See the exercise

I. See the exercise given in M. D. Bingham's paper.

3. Calculate A^{-1} by the above rule when

A = || 15, 11, 6,  -9, -15 ||
    ||  1,  3, 9,  -3,  -8 ||
    ||  7,  6, 6,  -3, -11 ||
    ||  7,  7, 5,  -3, -11 ||
    || 17, 12, 5, -10, -16 ||.

Solution. After calculating A², A³, A⁴, and the diagonal elements of A⁵, calculate

s_1 = 5,   s_2 = -41,   s_3 = -217,   s_4 = -17,   s_5 = 3185.

Inserting these values in 2·6, find

a_1 = -5,   a_2 = 33,   a_3 = -51,   a_4 = 135,   a_5 = 225.

Incidentally the characteristic equation of A is

f(λ) = λ⁵ - 5λ⁴ + 33λ³ - 51λ² + 135λ + 225 = (λ + 1)(λ² - 3λ + 15)² = 0.

Finally, using formula 2·4, find

A^{-1} = -(1/225) || -207,  64, -124,  111, 171 ||
                  || -315,  30,  195, -180, 270 ||
                  || -315,  30,  -30,   45, 270 ||
                  || -225,  75,  -75,    0, 225 ||
                  || -414,  53,   52,   -3, 342 ||.

CHAPTER 3

DIFFERENTIAL AND INTEGRAL CALCULUS OF MATRICES

Power Series in Matrices.

Before we discuss the subject of power series, it is convenient to make a few introductory remarks on general series in matrices. Let

A_0, A_1, A_2, A_3, ···

be an infinite sequence of matrices of the same type (i.e., same number of rows and columns) and let

S_p = A_0 + A_1 + A_2 + ··· + A_p

be the matric sum of the matrices A_0, A_1, A_2, ···, and A_p. If every element in the matrix S_p converges (in the ordinary numerical sense) as p tends to infinity, then by S = lim_{p→∞} S_p we shall mean the matrix S of the limiting elements. If then the matrix S = lim_{p→∞} S_p exists in the above sense, we shall say, by definition, that the matric infinite series Σ_{r=0}^{∞} A_r converges to the matrix S.

Example

Take A_0 = I, A_1 = I, A_2 = (1/2!)I, A_3 = (1/3!)I, ···, A_p = (1/p!)I. Then

S_p = A_0 + A_1 + A_2 + ··· + A_p = (1 + 1 + 1/2! + 1/3! + ··· + 1/p!) I.

Hence, on recalling the expansion for the exponential e, we find that

lim_{p→∞} S_p = eI.

In other words, Σ_{r=0}^{∞} A_r = eI.

If A is a square matrix and the a_1, a_2, ··· are numbers, one can consider matric power series in A

Σ_{r=0}^{∞} a_r A^r.

In other words, matric power series are particular matric series in which each matrix A_r is of special type† A_r = a_r A^r, where A^r is the rth power of a square matrix A. (A^0 = I is the identity matrix.) Clearly matric polynomials (see Chapter 2) are special matric power series in which all the numbers a_i after a certain value of i are zero. An important example of a matric power series is the matric exponential function e^A defined by the following matric power series:

e^A = I + A + (1/2!)A² + (1/3!)A³ + ···.

† The index r is not summed.

DIFFERENTIAL AND INTEGRAL CALCULUS

The following properties of the matrix exponential have been used frequently in investigations on the matrix calculus:

1. The matric power series expansion for ~ is convergent l for all

square matrices A.

2. ~Il = ileA. = ~+B whenever A and B are commutative matrices,

i.e., whenever AB = BA.

,

3. ,~e-A. = e-A.~ = I.

(These relations express the fact that e-A.

is the inverse matrix of ~.) Every numerical power series has its matric analogue. However, the corresponding matric power series'have more complicated proper- ties-for example,~. Other ~ples are, say, the matric sine, sin A, and the matric cosine, cos A, defined by

sin A = A - (1/3!)A³ + (1/5!)A⁵ - ···,

cos A = I - (1/2!)A² + (1/4!)A⁴ - ···.

The usual trigonometric identities are not always satisfied by sin A and cos A for arbitrary matrices.
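
The failure of a familiar identity is easy to exhibit numerically. The sketch below, assuming Python with NumPy, sums the three series by brute-force truncation and shows that the addition formula for the sine fails for a non-commuting pair A, B (both matrices hypothetical).

    import numpy as np
    from math import factorial

    def matrix_series(A, coeff, terms=40):
        # Sum coeff(r) * A^r over r = 0, 1, 2, ... (truncated power series).
        total = np.zeros_like(A, dtype=float)
        power = np.eye(A.shape[0])                 # A^0 = I
        for r in range(terms):
            total += coeff(r) * power
            power = power @ A
        return total

    exp_m = lambda A: matrix_series(A, lambda r: 1.0 / factorial(r))
    sin_m = lambda A: matrix_series(
        A, lambda r: 0.0 if r % 2 == 0 else (-1.0) ** ((r - 1) // 2) / factorial(r))
    cos_m = lambda A: matrix_series(
        A, lambda r: 0.0 if r % 2 == 1 else (-1.0) ** (r // 2) / factorial(r))

    A = np.array([[0.0, 1.0], [1.0, 0.0]])
    B = np.array([[0.0, 1.0], [0.0, 0.0]])        # AB != BA

    lhs = sin_m(A + B)
    rhs = sin_m(A) @ cos_m(B) + cos_m(A) @ sin_m(B)
    print(np.allclose(lhs, rhs))                  # False: the addition formula fails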

Differentiation and Integration of Matrices Depending on a Numerical Variable.

Let A(t) be a matrix depending on a numerical variable t so that the elements of A(t) are numerical functions of t.

A(t) = || a^1_1(t), a^1_2(t), ···, a^1_n(t) ||
       || a^2_1(t), a^2_2(t), ···, a^2_n(t) ||
       || ···························· ||
       || a^m_1(t), a^m_2(t), ···, a^m_n(t) ||

Then we define the derivative of A(t), and write it dA(t)/dt, by

dA(t)/dt = || da^1_1(t)/dt, da^1_2(t)/dt, ···, da^1_n(t)/dt ||
           || da^2_1(t)/dt, da^2_2(t)/dt, ···, da^2_n(t)/dt ||
           || ·································· ||
           || da^m_1(t)/dt, da^m_2(t)/dt, ···, da^m_n(t)/dt ||.


Similarly we define the integral of A(t) by

∫A(t) dt = || ∫a^1_1(t) dt, ∫a^1_2(t) dt, ···, ∫a^1_n(t) dt ||
           || ∫a^2_1(t) dt, ∫a^2_2(t) dt, ···, ∫a^2_n(t) dt ||
           || ····································· ||
           || ∫a^m_1(t) dt, ∫a^m_2(t) dt, ···, ∫a^m_n(t) dt ||.

It is no mathematical feat to show that differentiation of matrices has the following properties:

(3·1)   d[A(t) + B(t)]/dt = dA(t)/dt + dB(t)/dt,

(3·2)   d[A(t)B(t)]/dt = (dA(t)/dt) B(t) + A(t) (dB(t)/dt),

(3·3)   d[A(t)B(t)C(t)]/dt = (dA(t)/dt) B(t) C(t) + A(t) (dB(t)/dt) C(t) + A(t) B(t) (dC(t)/dt),

etc.

There are important immediate consequences of properties 3·2 and 3·3. For example, from 3·2 and A^{-1}(t)A(t) = I, we see that

(3·4)   dA^{-1}(t)/dt = -A^{-1}(t) (dA(t)/dt) A^{-1}(t).

Also, from 3·3, we obtain

(3·5)   dA³(t)/dt = (dA(t)/dt) A²(t) + A(t) (dA(t)/dt) A(t) + A²(t) (dA(t)/dt).
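
Formula 3·4 lends itself to a finite-difference check. In the sketch below (Python with NumPy assumed) the matrix function A(t) is hypothetical; the two sides of 3·4 are compared at a single value of t.

    import numpy as np

    def A_of(t):                       # a hypothetical matrix function of t
        return np.array([[2.0 + t, 1.0],
                         [t * t,   3.0]])

    def dA_of(t):                      # its elementwise derivative
        return np.array([[1.0,     0.0],
                         [2.0 * t, 0.0]])

    t, h = 0.7, 1e-6
    Ainv = np.linalg.inv(A_of(t))
    analytic = -Ainv @ dA_of(t) @ Ainv                     # formula 3.4
    numeric = (np.linalg.inv(A_of(t + h))
               - np.linalg.inv(A_of(t - h))) / (2.0 * h)   # central difference
    print(np.allclose(analytic, numeric, atol=1e-5))       # True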

There are similar formulas for the derivative of any positive integral power of A(t). If t is a real variable and A a constant square matrix, then one obtains

d(t^r A)/dt = r t^{r-1} A.

Then, with the usual term-by-term differentiation of the numerical exponential, the following differentiation can be justified:

(3·6)   d(e^{tA})/dt = A + tA² + (t²/2!)A³ + (t³/3!)A⁴ + ··· = Ae^{tA} = e^{tA}A.


There is an important theorem in the matrix calculus that turns up in the mathematical theory of aircraft flutter (see Chapter 7). The proof, into which we cannot enter here, makes use of the modern theory of functionals.

THEOREM. If F(λ) is a power series that converges for all λ, then the matric power series F(A) can be computed by the expansion²

(3·7)   F(A) = Σ_{i=1}^{n} F(λ_i) G_i,

where A is an n-rowed square matrix with n distinct characteristic roots λ_1, λ_2, ···, λ_n, and G_1, G_2, ···, G_n are n matrices defined by³

(3·8)   G_i = Π_{j≠i} (λ_j I - A) / Π_{j≠i} (λ_j - λ_i).

There are a few matters that must be kept in mind in order to have a clear understanding of the meaning of this result. In the first place, whenever F(λ) = a_0 + a_1 λ + a_2 λ² + ···, then the matric power series F(A) = a_0 I + a_1 A + a_2 A² + ···. In other words, λ^0 = 1 is "replaced" by A^0 = I, the unit matrix, in the transition from F(λ) to F(A). Secondly, to avoid ambiguities we must write explicitly the compact products occurring in equation 3·8:

Π_{j≠i} (λ_j - λ_i) = (λ_1 - λ_i)(λ_2 - λ_i) ··· (λ_{i-1} - λ_i)(λ_{i+1} - λ_i) ··· (λ_n - λ_i),

Π_{j≠i} (λ_j I - A) = (λ_1 I - A)(λ_2 I - A) ··· (λ_{i-1} I - A)(λ_{i+1} I - A) ··· (λ_n I - A).

There are special cases of particular interest in vibration theory (see Chapters 6 and 7). They correspond to the power A^r of a matrix and the matrix exponential e^{tA}. The expansion 3·7 yields immediately

(3·9)   A^r = Σ_{i=1}^{n} λ_i^r G_i

and

(3·10)   e^{tA} = Σ_{i=1}^{n} e^{λ_i t} G_i,

where the matrices G_i have the same meaning as in 3·8.
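
The expansion 3·7, with the matrices 3·8, is likewise a few lines of code. In the sketch below, which assumes Python with NumPy, the characteristic roots are obtained from np.linalg.eigvals and are assumed distinct, as the theorem requires.

    import numpy as np

    def by_expansion(A, f):
        # Formula 3.7: F(A) = sum over i of f(lambda_i) G_i, with G_i as in 3.8.
        A = np.asarray(A, dtype=complex)
        n = A.shape[0]
        lam = np.linalg.eigvals(A)                  # the characteristic roots
        total = np.zeros((n, n), dtype=complex)
        for i in range(n):
            G = np.eye(n, dtype=complex)            # build G_i as a product
            for j in range(n):
                if j != i:
                    G = G @ (lam[j] * np.eye(n) - A) / (lam[j] - lam[i])
            total += f(lam[i]) * G
        return total

    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    print(by_expansion(A, np.exp).real)   # [[cosh 1, sinh 1], [sinh 1, cosh 1]]

The printed matrix agrees with the exercise that follows.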

Exercise

Calculate the matrix e^A when A is the matrix

A = || 0, 1 ||
    || 1, 0 ||.

Check the result by calculating e^A directly.

Solution. The characteristic roots are λ_1 = 1, λ_2 = -1. Hence the matrices G_1 and G_2 are as follows:

G_1 = (λ_2 I - A)/(λ_2 - λ_1) = (1/2)(I + A) = (1/2) || 1, 1 ||
                                                     || 1, 1 ||,

G_2 = (λ_1 I - A)/(λ_1 - λ_2) = (1/2)(I - A) = (1/2) ||  1, -1 ||
                                                     || -1,  1 ||.

Now

e^A = Σ_{i=1}^{2} e^{λ_i} G_i = (e/2) || 1, 1 || + (e^{-1}/2) ||  1, -1 ||
                                      || 1, 1 ||              || -1,  1 ||.

Hence

e^A = || cosh 1, sinh 1 ||
      || sinh 1, cosh 1 ||.
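
The direct check asked for in the exercise amounts to summing the exponential series term by term; a brief sketch, assuming Python with NumPy:

    import numpy as np

    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    # Sum I + A + A^2/2! + A^3/3! + ... term by term.
    total, term = np.zeros((2, 2)), np.eye(2)
    for r in range(1, 30):
        total += term
        term = term @ A / r            # next term: previous * A / r
    print(total)                       # [[1.5431, 1.1752], [1.1752, 1.5431]]
    print(np.cosh(1.0), np.sinh(1.0))  # 1.5431..., 1.1752...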

CHAPTER 4

DIFFERENTIAL AND INTEGRAL CALCULUS OF MATRICES (Continued)

Systems of Linear Differential Equations with Constant Coefficients.

The matric exponential has important applications to the solution of systems of n linear differential equations in n unknown functions x^1(t), x^2(t), ···, x^n(t) and with n² constant coefficients a^i_j. The variable t is usually the time in physical and engineering problems. Without defining the derivative dX/dt, we merely mentioned in the first chapter that we can write such a system of equations as one matric equation

(4·1)   dX(t)/dt = AX(t).

Having defined the matric derivative, we are enabled to view this equation with complete understanding. From formula 3·6 of the previous chapter we find that

(4·2)   d[e^{(t-t_0)A}]/dt = A e^{(t-t_0)A},

where t_0 is an arbitrarily given value of t. But this result is equivalent to saying that X(t) = [e^{(t-t_0)A}]X_0 is a solution of the matric differential equation 4·1 for an arbitrary column matrix X_0. A glance at the expansion for the matric exponential e^{(t-t_0)A} shows that the solution X(t) has the property X(t_0) = X_0. In summary, we have the result¹ that

(4·3)   X(t) = [e^{(t-t_0)A}]X_0

is a solution² of 4·1 with the property that X(t_0) = X_0 for any preassigned constant column matrix X_0.
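
In modern software the solution 4·3 is available through a matrix-exponential routine. The sketch below assumes Python with NumPy and SciPy (scipy.linalg.expm); the coefficient matrix A and the initial column matrix X_0 are hypothetical, and the two defining properties are verified numerically.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])        # a hypothetical constant coefficient matrix
    X0 = np.array([1.0, 0.0])          # X(t_0) with t_0 = 0

    def X(t):
        return expm(t * A) @ X0        # formula 4.3 with t_0 = 0

    print(np.allclose(X(0.0), X0))     # True: the initial condition holds
    t, h = 0.9, 1e-6
    dX = (X(t + h) - X(t - h)) / (2.0 * h)
    print(np.allclose(dX, A @ X(t), atol=1e-6))   # True: dX/dt = AX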

Example