V. A. Yakubovich and V. M. Starzhinskii

LINEAR DIFFERENTIAL EQUATIONS WITH PERIODIC COEFFICIENTS

Volume 1

Translated from Russian by D. Louvish

A HALSTED PRESS BOOK
JOHN WILEY & SONS, New York; Toronto
ISRAEL PROGRAM FOR SCIENTIFIC TRANSLATIONS, Jerusalem; London

© 1975 Keter Publishing House Jerusalem Ltd.

Sole distributors for the Western Hemisphere: HALSTED PRESS, a division of JOHN WILEY & SONS, INC., NEW YORK.

Library of Congress Cataloging in Publication Data

Yakubovich, Vladimir Andreevich.
Linear differential equations with periodic coefficients.
Translation of Lineinye differentsial'nye uravneniya s periodicheskimi koeffitsientami i ikh prilozheniya.
"A Halsted Press book."
Bibliography: p.
Includes index.
1. Differential equations, Linear. I. Starzhinskii, Vyacheslav Mikhailovich, joint author. II. Title.
ISBN 0-470-96953-9

Distributors for the U.K., Europe, Africa and the Middle East: JOHN WILEY & SONS, LTD., CHICHESTER.
Distributors for Japan, Southeast Asia, and India: TOPPAN COMPANY, LTD., TOKYO AND SINGAPORE.
Distributed in the rest of the world by KETER PUBLISHING HOUSE JERUSALEM LTD.
ISBN 0-7065-1480-7
IPST cat. no. 22123

This book is a translation from Russian of LINEINYE DIFFERENTSIAL'NYE URAVNENIYA S PERIODICHESKIMI KOEFFITSIENTAMI I IKH PRILOZHENIYA, Izdatel'stvo "Nauka," Moscow, 1972.

Printed and bound by Keterpress Enterprises, Jerusalem. Printed in Israel.

PREFACE

Many problems in physics and engineering ultimately involve systems of linear differential equations with periodic coefficients. Until quite recently, research engineers, frequently employing nonrigorous methods, merely reduced these problems to Hill's equation or even to Mathieu's equation. Modern engineering problems often demand the investigation of systems with several degrees of freedom (dynamic stability of elastic systems, parametric resonance in high-power transmission lines and particle accelerators, numerous problems of celestial mechanics). In this connection one is often most interested in a quantitative description of phenomena for which there are no analogs in the simple system with one degree of freedom described by Hill's equation. The old techniques prove inadequate.

It is also noteworthy that, since Lyapunov and Poincaré, practical methods for investigating the stability of periodic motions described by nonlinear differential equations have centered increasingly around systems of linear differential equations with periodic coefficients.

Thanks to the efforts of many authors, the past two decades have seen significant progress in the mathematical theory of such systems (particularly Hamiltonian systems). Practical methods are now available which frequently furnish solutions, quite satisfactory from the engineering standpoint, to many problems which previously seemed quite forbidding. It is characteristic of these methods that the computational difficulties increase only slowly with the order of the system, so that they are particularly fruitful in problems with many degrees of freedom. Many new and profound results have been obtained for Hill's equation as well.

This book aims at a systematic exposition of these results, previously available only in the periodical literature in a brief form not readily understood by the nonspecialist. Most of the book is intended for the mathematically mature engineer, equipped with the basic mathematical tools of linear algebra and Lyapunov stability theory at the level of a technical college education.
The exceptions are Chapters III and VIII, which demand more sophisticated mathematical tools and require an acquaintance with advanced university courses in mechanics, physics and mathematics.

The authors have not presented methods previously discussed in the monograph literature, such as the methods of Krylov–Bogolyubov–Mitropol'skii /10, 11, 66a, b, c/ or the treatment of Hill's equation through infinite determinants and continued fractions (see Strutt /88/, McLachlan /61/). The reader interested in the method of Lappo-Danilevskii is referred to the monographs /58a, b, 40b/. The fundamental results of Lyapunov /60/ and Poincaré /111a, b/, closely bound up with the subject of this book, may be found in the well-known monographs of Krasovskii /54/, Malkin /62c/ and Chetaev /102/. Many related questions are considered in Erugin /40a, c/, Malkin /62a, b/, Rozenvasser /78/, Shtokalo /104/, Cesari /100/ and Hale /98/.

To facilitate the use of the book, the authors have striven, without fear of repetition, to highlight the final conclusions and to state them in a form comprehensible with minimum mathematical knowledge and minimum acquaintance with previous parts of the book.

The first two chapters are introductory. The content of each of the succeeding chapters is outlined in a brief introduction. In view of the frequent appearance in applications of problems involving parameters which are combinations of real physical quantities, the authors have paid special attention to effective techniques for constructing stability and instability domains in parameter spaces for systems with many degrees of freedom. These constructions, and the various computational procedures and stability tests provided in the text, are illustrated by many examples from mechanics, physics and engineering, concentrated mainly in Chapters V and VI.

The authors lay no claim to a complete presentation of all current methods for the investigation of differential equations with periodic coefficients. The material is of course a reflection of their own scientific interests, based on lectures read at the universities of Moscow and Leningrad. References to original papers related to the content of the book, as well as a list of relevant literature, may be found in the bibliographical notes at the end of the book.

Throughout the text we denote scalars (real or complex) by Greek or Roman italics (e.g., $\alpha$, $x$, $\lambda$, $T$, $A$), vectors by bold-type upright lower-case letters ($\mathbf{x}$, $\mathbf{a}$, etc.), matrices by bold-type capitals ($\mathbf{X}$, $\mathbf{A}$, etc.), sets and subspaces by Gothic letters ($\mathfrak{M}$, $\mathfrak{R}$, etc.), and operators by Gothic or Roman bold-type capitals.

In the numbering of formulas or subsections in the same chapter, the first numeral signifies the section number, the second the ordinal number of the formula or subsection in the section. In references to formulas, sections and subsections of other chapters, a Roman numeral is added on the left to identify the chapter. Lemmas and theorems are numbered within subsections.

Chapters III, V and VIII were written by Yakubovich, the remaining chapters by both authors jointly. The logical interdependence of the chapters is as follows: [diagram].

The authors are indebted to V. B. Lidskii for many valuable remarks, which were taken into account in the final version.

CONTENTS*

Preface
Contents of Volume 2

Chapter I. PREREQUISITES FROM LINEAR ALGEBRA AND MATRIX THEORY

§1. Finite-Dimensional Complex Linear Spaces and Matrices
1.1. Complex linear space. 1.2. Matrices and operations on matrices. 1.3. Determinant and inverse of a matrix. 1.4. Linearly dependent and independent vectors. Gram matrix. 1.5. Linear subspaces. 1.6. Systems of linear algebraic equations. 1.7. Linear operator and its matrix. Change of basis. 1.8. Eigenvalues and eigenvectors. Eigensubspaces. 1.9. Reduction of a matrix to block-diagonal form. Invariant subspaces. 1.10. Root subspaces and simple cyclic subspaces. Decomposition of the space into a direct sum of simple cyclic A-invariant subspaces. 1.11. Jordan form. 1.12. Elementary divisors. Minimum polynomial. 1.13. Root subspaces of adjoint matrices. 1.14. Hermitian, antihermitian, symmetric, skew-symmetric, unitary and orthogonal matrices. Reduction of hermitian and unitary matrices to diagonal form. 1.15. Inequalities for hermitian forms. 1.16. Two lemmas on decomposition of $\mathfrak{R}_n$. Test for simplicity of elementary divisors. 1.17. Complex conjugate and real subspaces. 1.18. Jordan form of a real matrix. 1.19. Examples.

§2. Functions of Matrices
2.1. Holomorphic functions of matrices. 2.2. Piecewise-analytic functions of matrices. 2.3. Interpolation polynomials of matrices. 2.4. Multiplicative property. 2.5. Composite function lemma. 2.6. Logarithm of a matrix. 2.7. Logarithm of a real matrix. 2.8. Examples.

§3. Logarithm of an Analytic Matrix-Function
3.1. Lemma on the logarithm of an analytic matrix-function. 3.2. Lemma on radius of convergence. 3.3. Choice of branch of the logarithm guaranteeing maximum radius of convergence of the series (2.7). 3.4. Case of a 2×2 matrix.

* The subject index, bibliographical notes and bibliography to both volumes will appear at the end of Volume 2.

Chapter II. GENERAL THEORY OF SYSTEMS OF LINEAR DIFFERENTIAL EQUATIONS WITH PERIODIC COEFFICIENTS

§1. Systems of Linear Differential Equations
1.1. Existence and uniqueness of solutions. 1.2. Matrizant and fundamental set of solutions. 1.3. Matrizant as an analytic function of a parameter. 1.4. Inhomogeneous systems. 1.5. First-order equation with constant coefficients. 1.6. Second-order system of equations with constant coefficients. 1.7. Continuity of solutions with respect to coefficients. 1.8. Adjoint systems.

§2. Systems of Linear Differential Equations with Periodic Coefficients, of Arbitrary Type
2.1. Monodromy matrix. Multipliers. 2.2. Floquet–Lyapunov theorem. 2.3. Case of real coefficients. 2.4. Reducibility of systems with periodic coefficients. 2.5. Characteristic exponents. General conclusions on stability. 2.6. Structure of solutions of a system with periodic coefficients. 2.7. Properties of solutions depending on characteristic exponents and multipliers. 2.8. Remark: application of Lyapunov's second method. 2.9. T-periodic solutions of the inhomogeneous equation when the homogeneous equation has no T-periodic solutions. 2.10. T-periodic solutions of the inhomogeneous equation when the homogeneous equation has T-periodic solutions.

§3. Canonical and Hamiltonian Systems of Linear Differential Equations
3.1. Basic definitions. 3.2. Second-order canonical equations and their reduction to first-order equations. 3.3. Canonical equations with constant coefficients. 3.4. Properties of solutions of canonical and Hamiltonian equations. 3.5. Lyapunov–Poincaré theorem.
3.6. Conservation of stability under small perturbations of the Hamiltonian. 3.7. Poincaré's variational equations and their properties. 3.8. Example: flexible shaft with heavy symmetrically attached flywheel. 3.9. Systems of linear differential equations invariant under time reversal (t-invariant systems).

§4. Estimates for the Characteristic Exponents of a System of Linear Differential Equations with Periodic Coefficients
4.1. Theorem on estimates for characteristic exponents. 4.2. Second-order systems. Example. 4.3. Estimates for characteristic exponents of second-order equations. 4.4. Unbounded stability of a dynamic system described by a second-order linear differential equation. 4.5. Estimates for characteristic exponents of a system of second-order equations.

Chapter III. HAMILTONIAN SYSTEMS OF LINEAR DIFFERENTIAL EQUATIONS WITH PERIODIC COEFFICIENTS

§1. Behavior of Multipliers under Perturbation of the Hamiltonian. Krein's Theorem on Multipliers of the First and Second Kinds on the Unit Circle
1.1. Linear space with indefinite metric. 1.2. Eigenvalues of the first and second kinds. 1.3. First theorem on perturbation of G-unitary matrices. 1.4. Strong and weak stability of Hamiltonian systems. 1.5. Analytical properties of multipliers of Hamiltonian systems. 1.6. Behavior of multipliers of one kind under complex perturbations of the Hamiltonian. 1.7. Behavior of multipliers with increase in the Hamiltonian.

§2. Linear Transformations in a Space with Indefinite Metric
2.1. Classification of subspaces in a space with indefinite metric. 2.2. Gram matrix. 2.3. Lemma on real nonsingular skew-symmetric matrices. 2.4. G-orthogonality of certain root subspaces of G-unitary and G-antihermitian matrices. 2.5. Canonical decompositions of G-unitary and G-Hamiltonian matrices with simple elementary divisors. 2.6. Eigenvalues of mixed kind of G-unitary and G-Hamiltonian matrices. 2.7. Lemma on eigenvalues. 2.8. Projections of invariant subspaces of G-unitary matrices. 2.9. Projection matrix of a sum of root subspaces. 2.10. Continuity of multipliers of the first and second kinds with respect to the Hamiltonian.

§3. Krein–Gel'fand–Lidskii Strong Stability Theorem
3.1. Second theorem on perturbation of G-unitary matrices. 3.2. Krein–Gel'fand–Lidskii strong stability theorem.

§4. Hamiltonian Systems with a Parameter
4.1. Hamiltonian equations of positive type. Selfadjoint boundary-value problem for a Hamiltonian equation. 4.2. Analytical properties of the multipliers of Hamiltonian equations of positive type. 4.3. Behavior of multipliers with increase in the Hamiltonian (continuation of §1.7). 4.4. Lemmas on logarithms of G-unitary and symplectic matrices. 4.5. Floquet–Lyapunov theorem for Hamiltonian systems. 4.6. Floquet–Lyapunov theorem for Hamiltonian systems with a parameter.

§5. Stability Domains. Structure of the Space of Hamiltonians
5.1. Statement of the problem. 5.2. Multiplier types of strongly stable Hamiltonians. 5.3. Index of a strongly stable Hamiltonian. Gel'fand–Lidskii theorem. 5.4. Arguments of symplectic matrices. Other definitions of the index of a strongly stable Hamiltonian. 5.5. Structure of the set of strongly stable Hamiltonians and some related sets.

§6. Stability Tests for Canonical Equations, Based on the Theorem on Directional Wideness of Stability Domains
6.1. Method for derivation of stability tests. 6.2. Directional wideness of stability domains. 6.3. Directional convexity of certain subsets of stability domains. 6.4. Strong stability tests for almost separable canonical systems. 6.5. Canonical equations with constant coefficients. 6.6. Strong stability tests for canonical equations with almost constant Hamiltonians. 6.7. Strong stability tests for 2k-th order systems (corollaries of tests for second-order systems). 6.8. Estimate of the small parameter for which the method of averaging may be applied to linear Hamiltonian systems with periodic coefficients.

§7. Stability Tests for Canonical Equations, Based on Estimates for the Eigenvalues of Boundary-Value Problems
7.1. Lidskii–Neigauz test. 7.2. Strong stability tests for canonical equations. 7.3. Strong stability tests for second-order canonical vector equations.

Chapter IV. METHODS OF PERTURBATION THEORY (SYSTEMS WITH A SMALL PARAMETER)

§1. Perturbation Theory for Matrix Equations
1.1. Statement of the problem. General case. 1.2. Case of simple elementary divisors. 1.3. Case of simple eigenvalues. 1.4. Case of multiple elementary divisors. 1.5. Generalizations.

§2. Computation of Characteristic Exponents for a System of General Form
2.1. Characteristic exponents of the unperturbed system. 2.2. The operators S and F. Two lemmas. 2.3. Theorem on computation of characteristic exponents. 2.4. Computation of characteristic exponents in the general case. 2.5. Example. 2.6. Computation of characteristic exponents of simple type. Formulas for the first approximation. 2.7. Computation of characteristic exponents of simple type. Second and higher approximations. 2.8. Example.

§3. Computation of Characteristic Exponents of Canonical (Hamiltonian) and Almost-Canonical Systems
3.1. Stability conclusions based on approximate characteristic exponents. 3.2. Formulas for characteristic exponents of Hamiltonian systems. 3.3. Vector equation of second order. 3.4. Second-order vector equation with gyroscopic term.

§4. Computation of Exponent Matrix K(ε) and Matrizant
4.1. Notation. Singular and nonsingular cases. 4.2. Reduction of singular cases. […] 4.5. Second computational procedure. 4.6. Determination of coefficients of the expansion (4.28). Third procedure. 4.7. Solution of inhomogeneous systems of equations with periodic coefficients.

§5. Computation of Exponent Matrix K(ε) and Matrizant for Systems and Equations of Special Types
5.1. Third computational procedure. 5.2. Hill's equation: the case λ = 0. 5.3. Hill's equation (λ ≠ 0) and Mathieu's equation. Nonsingular case. 5.4. Hill's equation (λ ≠ 0) and Mathieu's equation. Singular case. 5.5. Example of a third-order equation. 5.6. Computation of the exponent matrix of the variational equation for periodic motion of a flexible shaft with heavy symmetrically placed flywheel.

§6. Stability of Periodic Motions of Nonautonomous Systems
6.1. Differential equations of the generating solution and first corrections. Nonresonant case. 6.2. Resonant case. 6.3. Variational equation for periodic motion. 6.4. […] 6.5. Summary of final results. 6.6. Example. 6.7. Example.

CONTENTS OF VOLUME TWO

Chapter V. THEORY OF PARAMETRIC RESONANCE

§1. First-Approximation Formulas for Boundaries of Dynamic Instability Domains
§2. First-Approximation Formulas for Boundaries of Dynamic Instability Domains when the Coefficients of the Equations Are Nonlinear Functions of the Parameter 1/ω
§3. Properties of Boundaries of Dynamic Instability Domains. Second and Higher Approximations for the Boundaries
§4. Parametric Resonance in Almost-Canonical Systems

Chapter VI. PARAMETRIC RESONANCE IN MECHANICAL AND PHYSICAL SYSTEMS

§1. Vibrations of Suspension Bridges
§2. Problems in which the Coefficients of the Equations Are Nonlinear in ν = 1/ω
§3. Dynamic Stability of Thin-Walled Bars with Longitudinal Periodic Load
§4. Stability of Flexural Vibrations of Rotating Coaxial Shafts
§5. Parametric Resonance in Wave Propagation Problems. Acoustic Waveguides
§6. Parametric Resonance in Wave Propagation Problems in Periodic Structures. Electromagnetic Waveguides
§7. Application of the Bubnov–Galerkin Method in Parametric Resonance Problems for Mechanical Systems

Chapter VII. LYAPUNOV'S METHOD FOR ESTIMATING THE CHARACTERISTIC CONSTANT

Introduction
§1. Properties of the Characteristic Function A(λ) of a Second-Order Canonical System of Linear Differential Equations with Periodic Coefficients
§2. Estimate for the Characteristic Constant of Hill's Equation
§3. Generalization of Lyapunov's Method to Systems of Two Linear First-Order Differential Equations with Periodic Coefficients

Chapter VIII. CANONICAL SYSTEM OF TWO LINEAR DIFFERENTIAL EQUATIONS WITH PERIODIC COEFFICIENTS. HILL'S EQUATION

§1. General Remarks. Structure of Hamiltonian Space
§2. Stability and Instability Tests. Estimates for Characteristic Exponents of a Canonical System of Two Linear Differential Equations with Periodic Coefficients
§3. Application of the Results of §§1, 2 to Hill's Equation
§4. Stability Tests and Estimates for Characteristic Exponents of Hill's Equation. Variational Methods
§5. Some Other Stability Conditions for Hill's Equation
§6. Oscillation Theorem

Appendix 1
Appendix 2
Brief Bibliographical Notes
Bibliography
Subject Index for Volumes 1 and 2

Chapter I

PREREQUISITES FROM LINEAR ALGEBRA AND MATRIX THEORY

This introductory chapter presents the algebraic prerequisites for the rest of the book. Proofs are given only when they are so simple and brief that it is pointless to refer the reader to specialized texts, and also when they do not appear in textbooks of linear algebra and matrix theory. The reader familiar with these subjects may skip this chapter, referring to it only as the need arises. For the benefit of those doing so, we mention the following nonstandard notation employed throughout the book: a block-diagonal matrix $A = \operatorname{diag}(A_1, \dots, A_s)$ will also be denoted by $A = A_1 \dotplus A_2 \dotplus \dots \dotplus A_s$; if $x$ and $y$ are column vectors, we write $z = x \dotplus y$ for their direct sum.

The main part of the book uses only the simplest material, presented in §§1.1 through 1.11 and 2.1, 2.3. The remainder of Chapter I, including §3, will be used only in Chapters III, IV and VIII.

§1. FINITE-DIMENSIONAL COMPLEX LINEAR SPACES AND MATRICES

1.1. Complex linear space

We define an $n$-vector $x$ to be an ordered $n$-tuple of complex numbers $\xi_1, \xi_2, \dots, \xi_n$, which we write as a column:
$$x = \begin{pmatrix} \xi_1 \\ \vdots \\ \xi_n \end{pmatrix}.$$
The set of all such vectors is denoted by $\mathfrak{R}_n$. We denote complex conjugation by a bar; thus
$$\bar x = \begin{pmatrix} \bar\xi_1 \\ \vdots \\ \bar\xi_n \end{pmatrix}$$
is the complex conjugate of the above vector $x$. The vector with zero components will be denoted by $0$. The transpose of an $n$-vector is an ordered $n$-tuple of numbers, written as a row:
$$x' = (\xi_1, \dots, \xi_n).$$
The hermitian conjugate of a vector $x$ is defined by
$$x^* = \bar x' = (\bar\xi_1, \dots, \bar\xi_n).$$
Addition of vectors and multiplication of vectors by complex numbers satisfy the usual rules:
$$x + y = y + x, \quad \alpha(x + y) = \alpha x + \alpha y, \quad (\alpha + \beta)x = \alpha x + \beta x, \quad \alpha(\beta x) = (\alpha\beta)x$$
(see, e.g., /21/, Chapter I, §1; /103a/, Chapter 2, §11). As usual,
$$(x, y) = y^* x = \sum_{k=1}^{n} \xi_k \bar\eta_k$$
is the scalar product of the vectors $x$ and $y$ (here $\eta_1, \dots, \eta_n$ are the components of $y$). It follows from the definition that
$$(y, x) = \overline{(x, y)}, \qquad (\alpha x, \beta y) = \alpha\bar\beta(x, y), \qquad (x + y, z) = (x, z) + (y, z),$$
and
$$(x, x) > 0 \quad \text{if} \quad x \ne 0.$$
The norm (modulus, length) of a vector $x \in \mathfrak{R}_n$ is denoted by $|x|$: $|x| = \sqrt{(x, x)}$. It has the following properties: 1) $|x| > 0$ if $x \ne 0$; 2) $|\bar x| = |x|$; 3) $|\lambda x| = |\lambda|\,|x|$ (where $\lambda$ is a scalar); 4) $|x + y| \le |x| + |y|$.
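For instance, take in $\mathfrak{R}_2$ the vectors
$$x = \begin{pmatrix} 1 \\ i \end{pmatrix}, \qquad y = \begin{pmatrix} 2 \\ 1 + i \end{pmatrix}.$$
Then $(x, y) = 1 \cdot 2 + i\,\overline{(1+i)} = 2 + i(1 - i) = 3 + i$, $(y, x) = \overline{(x, y)} = 3 - i$, and $|x| = \sqrt{2}$, $|y| = \sqrt{6}$. Note that $|(x, y)| = \sqrt{10} \le |x|\,|y| = \sqrt{12}$, in accordance with the Cauchy–Schwarz inequality, from which property 4 follows.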
1.2. Matrices and operations on matrices

A rectangular array of complex numbers with $m$ rows and $n$ columns is called an $m \times n$ matrix. For example, a vector $x$ and its transpose $x'$ are $n \times 1$ and $1 \times n$ matrices, respectively.* The word "matrix" alone, without indication of its order, will always mean a square matrix, i.e., an $n \times n$ matrix, which we write as
$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix} = \|a_{jk}\|_1^n.$$
The elements $a_{11}, \dots, a_{nn}$ of a square matrix $A$ will be called its diagonal elements. Let
$$a_j = \begin{pmatrix} a_{1j} \\ \vdots \\ a_{nj} \end{pmatrix}$$
be the $j$-th column of the matrix $A$. We shall also write the matrix as
$$A = \|a_1, \dots, a_n\|.$$

* And therefore, apart from the notation of §1.1, we shall also write $x = \|\xi_k\|$.

Let $A = \|a_{jk}\|$, $B = \|b_{jk}\|$ be rectangular matrices of the same order and $\lambda$ a complex number. The matrices $A + B$ and $\lambda A$ are defined respectively as $\|a_{jk} + b_{jk}\|$ and $\|\lambda a_{jk}\|$. The product of two square matrices $A$ and $B$ is the matrix $C = AB$ defined by
$$c_{jk} = \sum_{i=1}^{n} a_{ji} b_{ik} \qquad (j, k = 1, \dots, n). \tag{1.3}$$
Multiplication of matrices is not commutative, i.e., in general $BA \ne AB$. If $BA = AB$, we shall say that $A$ and $B$ commute (or are permutable). Matrix multiplication is associative: $(AB)C = A(BC)$, and distributive: $(A + B)C = AC + BC$, $A(B + C) = AB + AC$.
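A simple $2 \times 2$ computation with (1.3) shows how badly commutativity may fail: for
$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$$
we have $AB = 0$, while $BA = A \ne 0$. Thus $AB \ne BA$; moreover, a product of nonzero matrices may be the zero matrix.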
The identity matrix of order $n \times n$ is the matrix
$$I_n = \|\delta_{jk}\|_1^n.$$
Here $\delta_{jk}$ is the Kronecker symbol: $\delta_{jk} = 0$ for $j \ne k$, $\delta_{jj} = 1$ ($j, k = 1, \dots, n$). It is easy to see that for any $n \times n$ matrix $A$ and any $n$-vector $x$
$$A I_n = I_n A = A, \qquad I_n x = x.$$

If each of the matrices $A$, $B$ and $C = AB$ is partitioned into blocks of equal respective orders (the matrices $A_{jk}$, $B_{jk}$, $C_{jk}$ having the same orders), then the corresponding blocks $C_{jk}$ may be determined by a formula analogous to (1.3). For instance, if
$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \qquad B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix},$$
then
$$C_{11} = A_{11}B_{11} + A_{12}B_{21}, \quad C_{12} = A_{11}B_{12} + A_{12}B_{22}, \quad C_{21} = A_{21}B_{11} + A_{22}B_{21}, \quad C_{22} = A_{21}B_{12} + A_{22}B_{22}.$$

Consider a matrix $A$ of the form
$$A = \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix},$$
where $A_1$ and $A_2$ are square matrices of orders $n_1 \times n_1$ and $n_2 \times n_2$, respectively ($n_1 + n_2 = n$), and $0$ stands for $n_1 \times n_2$ and $n_2 \times n_1$ matrices all of whose elements are zero. We shall call $A$ the direct sum of $A_1$ and $A_2$, writing $A = A_1 \dotplus A_2$. (It is obvious that in general $A_1 \dotplus A_2 \ne A_2 \dotplus A_1$.) This definition may be extended to any finite number of summands: $A = A_1 \dotplus A_2 \dotplus \dots \dotplus A_s$ will denote the matrix having the matrices $A_1, A_2, \dots, A_s$ along its principal diagonal and zeros elsewhere. The matrix $A_1 \dotplus \dots \dotplus A_s$ will also be denoted by
$$A = \operatorname{diag}(A_1, A_2, \dots, A_s).$$
In particular, any matrix $A = \|a_{jk}\|_1^n$ with zero off-diagonal elements ($a_{jk} = 0$ for $j \ne k$) may be written $A = \operatorname{diag}(a_{11}, \dots, a_{nn})$, and we shall call it a diagonal matrix.

Let $A = A_1 \dotplus \dots \dotplus A_s$ and $B = B_1 \dotplus \dots \dotplus B_s$, where $A_j$, $B_j$ are matrices of the same orders. It follows from the block multiplication rule that
$$AB = A_1 B_1 \dotplus \dots \dotplus A_s B_s.$$

Let $x$ be a $k$-vector and $y$ an $l$-vector. The column vector $z$ of order $k + l$ with elements (from top to bottom) $\xi_1, \dots, \xi_k, \eta_1, \dots, \eta_l$ will be called the direct sum of $x$ and $y$ and denoted by $z = x \dotplus y$.

For an arbitrary (in general rectangular) matrix $A = \|a_{jk}\|$ we define the complex conjugate matrix $\bar A = \|\bar a_{jk}\|$, the transpose $A' = \|a_{kj}\|$ and the adjoint (or hermitian conjugate) matrix $A^* = \bar A'$. The elements of $A'$ are obtained from those of $A$ by reflection in the principal diagonal, along which stand the numbers $a_{11}, \dots, a_{nn}$: we interchange $a_{jk}$ and $a_{kj}$. To obtain $A^*$, we moreover replace each element by its complex conjugate. For any $x, y \in \mathfrak{R}_n$,
$$(Ax, y) = (x, A^* y), \tag{1.4}$$
as may be verified directly.

The norm of a matrix $A$ is defined by
$$\|A\| = \Big( \sum_{j,k} |a_{jk}|^2 \Big)^{1/2}. \tag{1.5}$$
For example, the norm of the identity matrix $I_n$ is $\|I_n\| = \sqrt{n}$. Thus defined, the norm has the usual properties: 1) $\|A\| > 0$ if $A \ne 0$; 2) $\|A + B\| \le \|A\| + \|B\|$; 3) $\|\lambda A\| = |\lambda|\,\|A\|$; 4) $\|AB\| \le \|A\|\,\|B\|$ and $|Ax| \le \|A\|\,|x|$. The last property follows from the obvious inequalities
$$|Ax|^2 = \sum_j \Big|\sum_k a_{jk}\xi_k\Big|^2 \le \sum_j \Big(\sum_k |a_{jk}|^2\Big)\Big(\sum_k |\xi_k|^2\Big) = \|A\|^2\,|x|^2.$$

1.3. Determinant and inverse of a matrix

We shall denote the determinant of a matrix $B$ by $\det B$. We recall the main points of the theory of determinants (see, e.g., /103a/, Chapter I).

1) Transposition of a matrix leaves the determinant unchanged: $\det B' = \det B$. Since $\det \bar B = \overline{\det B}$, we have the following relation for the adjoint: $\det B^* = \overline{\det B}$.

2) A homogeneous vector equation
$$Bx = 0$$
has a nontrivial solution $x \ne 0$ if and only if $\det B = 0$. (The matrix $B$ is then said to be singular.)

3) If $\det B \ne 0$ ($B$ is nonsingular), there exists a unique matrix $B^{-1}$, the inverse of $B$, such that $BB^{-1} = B^{-1}B = I_n$. This matrix is given by
$$B^{-1} = \frac{1}{\det B}\begin{pmatrix} A_{11} & \dots & A_{n1} \\ \vdots & & \vdots \\ A_{1n} & \dots & A_{nn} \end{pmatrix},$$
where $A_{jk}$ denotes the cofactor of the element $b_{jk}$ ($j, k = 1, \dots, n$) in the determinant of $B$, i.e., the determinant obtained from the latter by deleting its $j$-th row and $k$-th column and multiplying by $(-1)^{j+k}$. Hence it follows at once that in this case the inhomogeneous vector equation $Bx = b$ has a unique solution $x = B^{-1}b$, since $B(B^{-1}b) = b$.

4) $\det(AB) = \det A \det B$; if $\det B \ne 0$, then $\det B^{-1} = (\det B)^{-1}$.

5) $\det(A_1 \dotplus A_2 \dotplus \dots \dotplus A_s) = \det A_1 \det A_2 \cdots \det A_s$.

6) Let the columns of the matrix $B$ be $b_1, \dots, b_n$, and write $B = \|b_1, b_2, \dots, b_n\|$. Suppose that $b_1$ is a linear combination of vectors $b'$ and $b''$: $b_1 = \alpha b' + \beta b''$ (where $\alpha$, $\beta$ are complex numbers). Then we have the following representation of the determinant of $B$:
$$\det\|\alpha b' + \beta b'', b_2, \dots, b_n\| = \alpha\det\|b', b_2, \dots, b_n\| + \beta\det\|b'', b_2, \dots, b_n\|.$$

7) If two columns are permuted, the determinant changes sign:
$$\det\|b_1, \dots, b_j, \dots, b_k, \dots, b_n\| = -\det\|b_1, \dots, b_k, \dots, b_j, \dots, b_n\|.$$
It follows from properties 6 and 7 that if a column of the matrix is a linear combination of other columns, say $b_1 = \alpha b_2 + \beta b_3$, then the determinant vanishes. By property 1, properties 6 and 7 are also valid for rows.

Consider any $r$ rows and $r$ columns of a matrix $B$. The elements at the intersections of these rows and columns form an $r \times r$ matrix whose determinant is known as an $r$-th order minor of $B$. The rank of an $n \times n$ matrix $B$ is the number $r$ such that all minors of $B$ of order greater than $r$ (if $n > r$) vanish, but there is at least one minor of order $r$ ("basis minor") which does not vanish. For a nonsingular matrix, $r = n$, and $\det B$ is a basis minor. The columns of $B$ whose elements figure in a basis minor are called basis columns. In a singular matrix $B$ ($\det B = 0$) any column is a linear combination of basis columns (/103a/, Chapter 1, §9). The difference $d = n - r$ between the order and rank of a matrix is known as its nullity.

8) If $A$ is a nonsingular matrix, the ranks of the matrices $AB$ and $BA$ are both equal to that of $B$.

1.4. Linearly dependent and independent vectors. Gram matrix

Vectors $f_1, \dots, f_k$ are said to be linearly independent if any relation
$$\alpha_1 f_1 + \dots + \alpha_k f_k = 0,$$
where $\alpha_1, \dots, \alpha_k$ are complex numbers, implies that $\alpha_1 = \dots = \alpha_k = 0$. Otherwise $f_1, \dots, f_k$ are said to be linearly dependent. It is clear, for example, that the vectors
$$e_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \dots, \quad e_n = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix} \tag{1.8}$$
are linearly independent. Any $n$ linearly independent vectors are called a basis of the space $\mathfrak{R}_n$. The basis $e_1, \dots, e_n$ is known as the standard basis.

Vectors $f_1, \dots, f_k$ are linearly independent if and only if the rank of the $n \times k$ matrix with columns $f_1, \dots, f_k$ is $k$. In particular, for an $n \times n$ matrix $B$ we have $\det B = 0$ if and only if its columns (or its rows, by property 1) are linearly dependent (see /21/, Chapter II, §1).

The matrix of scalar products
$$G = \begin{pmatrix} (f_1, f_1) & \dots & (f_k, f_1) \\ \vdots & & \vdots \\ (f_1, f_k) & \dots & (f_k, f_k) \end{pmatrix}$$
is called the Gram matrix* of $f_1, \dots, f_k$. We shall now show that
$$\det G \ne 0$$
is a necessary and sufficient condition for the vectors $f_1, \dots, f_k$ to be linearly independent.

First suppose that $f_1, \dots, f_k$ are linearly dependent: $\sum_j \alpha_j f_j = 0$ with $\alpha = (\alpha_1, \dots, \alpha_k)' \ne 0$. Then for each $i$
$$(G\alpha)_i = \sum_j (f_j, f_i)\,\alpha_j = \Big(\sum_j \alpha_j f_j, f_i\Big) = 0,$$
so $G\alpha = 0$ and $\det G = 0$. Conversely, let $\det G = 0$. Then for some $\alpha \ne 0$ we have $G\alpha = 0$. Since
$$\alpha^* G \alpha = \Big|\sum_j \alpha_j f_j\Big|^2 = 0,$$
it follows that $\sum_j \alpha_j f_j = 0$, and so the vectors $f_1, \dots, f_k$ are linearly dependent.

* The Gram matrix is sometimes defined as the adjoint $G^*$ of the matrix $G$ defined here.
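For example, let $f_1 = (1, 0)'$, $f_2 = (i, 1)'$ in $\mathfrak{R}_2$. Then $(f_1, f_1) = 1$, $(f_2, f_1) = i$, $(f_1, f_2) = -i$, $(f_2, f_2) = 2$, so that
$$G = \begin{pmatrix} 1 & i \\ -i & 2 \end{pmatrix}, \qquad \det G = 2 - 1 = 1 \ne 0,$$
and the vectors are linearly independent. If instead $f_2 = i f_1 = (i, 0)'$, then
$$G = \begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix}, \qquad \det G = 1 - 1 = 0,$$
detecting the linear dependence.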
1.5. Linear subspaces

A set $\mathfrak{R}'$ of vectors in $\mathfrak{R}_n$ which is closed under the operations of addition and multiplication by numbers is called a subspace of $\mathfrak{R}_n$. Thus $\mathfrak{R}'$ is a subspace if $x_1 \in \mathfrak{R}'$, $x_2 \in \mathfrak{R}'$ imply $\alpha x_1 + \beta x_2 \in \mathfrak{R}'$ for any complex numbers $\alpha$ and $\beta$. The largest number of linearly independent vectors in a subspace $\mathfrak{R}'$ is known as its dimension, and the vectors themselves are a basis for the subspace. For example, the set $\mathfrak{R}'$ of all vectors
$$x = \begin{pmatrix} \xi \\ \eta \\ 0 \\ \vdots \\ 0 \end{pmatrix},$$
where $\xi$, $\eta$ are arbitrary complex numbers, is a two-dimensional subspace with basis $e_1$, $e_2$ (see (1.8)).

The subspace $\mathfrak{R}'$ of all vectors $\alpha_1 f_1 + \dots + \alpha_k f_k$, where $f_1, \dots, f_k$ are given vectors and $\alpha_1, \dots, \alpha_k$ arbitrary numbers, is known as the subspace spanned by the vectors $f_1, \dots, f_k$ (or their span). If $f_1, \dots, f_k$ are linearly independent, they form a basis for $\mathfrak{R}'$.

Let $\mathfrak{R}'$ and $\mathfrak{R}''$ be disjoint subspaces (i.e., having only the zero vector in common) of $\mathfrak{R}_n$. The set of all vectors $x' + x''$, where $x' \in \mathfrak{R}'$, $x'' \in \mathfrak{R}''$, is readily seen to be a subspace, known as the direct sum of the subspaces and denoted by $\mathfrak{R}' \dotplus \mathfrak{R}''$. If
$$\mathfrak{R}_n = \mathfrak{R}' \dotplus \mathfrak{R}'', \tag{1.9}$$
i.e., every vector $x \in \mathfrak{R}_n$ is uniquely expressible as $x = x' + x''$, where $x' \in \mathfrak{R}'$, $x'' \in \mathfrak{R}''$, we shall say that $\mathfrak{R}_n$ is decomposed into the direct sum of the subspaces $\mathfrak{R}'$ and $\mathfrak{R}''$.

Again, let $\mathfrak{R}'$ and $\mathfrak{R}''$ be arbitrary subspaces of $\mathfrak{R}_n$. They are said to be orthogonal ($\mathfrak{R}' \perp \mathfrak{R}''$) if the scalar product of any pair of vectors $x' \in \mathfrak{R}'$, $x'' \in \mathfrak{R}''$ vanishes: $(x', x'') = 0$. For example, the subspaces
$$\mathfrak{R}' = \{\xi_1 e_1 + \xi_2 e_2\} \quad \text{and} \quad \mathfrak{R}'' = \{\xi_3 e_3 + \xi_4 e_4\},$$
where $\xi_1, \xi_2, \xi_3, \xi_4$ are arbitrary complex numbers and $e_j$ the vectors of the standard basis, are orthogonal. A direct sum of orthogonal subspaces is denoted by $\mathfrak{R}' \oplus \mathfrak{R}''$. Thus, if the subspaces $\mathfrak{R}'$ and $\mathfrak{R}''$ in (1.9) are orthogonal, we write $\mathfrak{R}_n = \mathfrak{R}' \oplus \mathfrak{R}''$. In this case $\mathfrak{R}''$ is called the orthogonal complement of $\mathfrak{R}'$ (and $\mathfrak{R}'$ the orthogonal complement of $\mathfrak{R}''$); we write $\mathfrak{R}'' = (\mathfrak{R}')^\perp$. Similarly one defines a decomposition of $\mathfrak{R}_n$ into a direct sum of $l$ subspaces,
$$\mathfrak{R}_n = \mathfrak{R}^{(1)} \dotplus \dots \dotplus \mathfrak{R}^{(l)},$$
and if moreover any two of the summands are orthogonal subspaces, then
$$\mathfrak{R}_n = \mathfrak{R}^{(1)} \oplus \dots \oplus \mathfrak{R}^{(l)}.$$
If $\mathfrak{R}'$ is an $m$-dimensional subspace, $0 < m < n$, […]

[…] there exists a constant $c > 0$, independent of the vector $b$, such that for any solution $x$ satisfying (1.19),
$$|x| \le c\,|b|.$$

[…]
$$S^{-1} A S = \operatorname{diag}(\lambda_1, \dots, \lambda_n). \tag{1.32}$$
This equality may also hold in the case of multiple eigenvalues. If (1.32) is true, we shall say that the matrix $S$ reduces $A$ to diagonal form. Formula (1.32) has various corollaries in matrix theory. If $A$ has multiple eigenvalues, it need not be reducible to diagonal form, i.e., it may not admit a representation (1.32).
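For example, the matrix
$$A = \begin{pmatrix} 0 & 1 \\ -2 & 3 \end{pmatrix}$$
has the simple eigenvalues $\lambda_1 = 1$, $\lambda_2 = 2$, with eigenvectors $s_1 = (1, 1)'$ and $s_2 = (1, 2)'$. Taking $S = \|s_1, s_2\|$, we obtain
$$S^{-1} A S = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ -2 & 3 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix},$$
so this $S$ reduces $A$ to diagonal form. By contrast, the matrix
$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$
admits no representation (1.32): its only eigenvalue is $1$, so (1.32) would force $S^{-1}AS = I_2$, i.e., the matrix itself would equal $I_2$, which is false.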
Even then, however, there is a special form of the matrix, the Jordan form, to which any matrix is reducible: every matrix has a representation similar to (1.32), except that the numbers $\lambda_j$ along the diagonal are replaced by "blocks" (elementary Jordan matrices) of a special structure. We now present a detailed account.

1.9. Reduction of a matrix to block-diagonal form. Invariant subspaces

Suppose that for some nonsingular matrix $S$
$$S^{-1} A S = A_1 \dotplus A_2, \tag{1.33}$$
where $A_k$ are $m_k \times m_k$ matrices ($k = 1, 2$; $m_1 + m_2 = n$). Let $s_1, \dots, s_n$ denote the columns of $S$:
$$S = \|s_1, \dots, s_n\|. \tag{1.34}$$
It follows from (1.33) that $AS = S(A_1 \dotplus A_2)$, i.e.,
$$A\|s_1, \dots, s_n\| = \|s_1, \dots, s_n\|\,(A_1 \dotplus A_2). \tag{1.35}$$
The vectors $s_1, \dots, s_n$ are linearly independent (because $\det S \ne 0$). Consequently, the same is true of the vectors $s_1, \dots, s_{m_1}$ and $s_{m_1+1}, \dots, s_n$. Consider the subspaces $\mathfrak{M}_1$ and $\mathfrak{M}_2$ with bases $s_1, \dots, s_{m_1}$ and $s_{m_1+1}, \dots, s_n$, respectively ($\mathfrak{M}_1$ is the set of all vectors $\gamma_1 s_1 + \dots + \gamma_{m_1} s_{m_1}$, and $\mathfrak{M}_2$ the set of vectors $\gamma_{m_1+1} s_{m_1+1} + \dots + \gamma_n s_n$, where the $\gamma_j$ are arbitrary complex numbers).

A subspace $\mathfrak{M}$ is said to be invariant under $A$ if $x \in \mathfrak{M}$ implies that $Ax \in \mathfrak{M}$. It follows from (1.35) that each of the subspaces $\mathfrak{M}_1$, $\mathfrak{M}_2$ is invariant under $A$. Since $s_1, \dots, s_n$ is a basis for $\mathfrak{R}_n$, any vector $x$ is uniquely expressible as $x = x_1 + x_2$, where $x_1 \in \mathfrak{M}_1$, $x_2 \in \mathfrak{M}_2$, i.e., $\mathfrak{R}_n = \mathfrak{M}_1 \dotplus \mathfrak{M}_2$.

Now suppose that $\mathfrak{M}_1$ and $\mathfrak{M}_2$ are arbitrary $A$-invariant subspaces of dimensions $m_1$ and $m_2$, respectively, such that the above decomposition holds. It is easily shown that then (1.33) is true for some $S$, $A_1$, $A_2$. To this end, we simply note that $m_1 + m_2 = n$, take an arbitrary basis $s_1, \dots, s_{m_1}$ in $\mathfrak{M}_1$ and an arbitrary basis $s_{m_1+1}, \dots, s_n$ in $\mathfrak{M}_2$, and define $S$ by (1.34). The invariance of $\mathfrak{M}_1$ and $\mathfrak{M}_2$ implies (1.35), which is equivalent to (1.33).

All this remains valid when applied to a block-diagonal matrix with any number of blocks. In fact, if there is a matrix $S$ such that
$$S^{-1} A S = A_1 \dotplus \dots \dotplus A_l,$$
where $A_j$ are matrices of order $m_j$ ($j = 1, \dots, l$; $m_1 + \dots + m_l = n$), then the space $\mathfrak{R}_n$ splits into a direct sum of $A$-invariant subspaces $\mathfrak{M}_j$ of dimensions $m_j$,
$$\mathfrak{R}_n = \mathfrak{M}_1 \dotplus \dots \dotplus \mathfrak{M}_l, \tag{1.36}$$
and conversely, if (1.36) is true we have a representation of the above type. The basis for each subspace $\mathfrak{M}_j$ is the corresponding group of columns of $S$. Thus reduction of a matrix $A$ to block-diagonal form is equivalent to decomposition of the space $\mathfrak{R}_n$ as a direct sum of $A$-invariant subspaces.

1.10. Root subspaces and simple cyclic subspaces. Decomposition of the space into a direct sum of simple cyclic A-invariant subspaces

A vector $b \ne 0$ is called a root vector of a matrix $A$ if there exists a complex number $\lambda_j$ such that for some natural number $k$
$$(A - \lambda_j I_n)^k\, b = 0. \tag{1.37}$$
It follows from (1.37) (§1.6) that $\det(A - \lambda_j I_n) = 0$. Thus the number $\lambda_j$ in (1.37) must be a root of the characteristic equation (1.29), i.e., an eigenvalue of $A$. And then for any $k \ge 1$ there is a vector $b \ne 0$ satisfying (1.37).

The set $\mathfrak{B}_j$ of root vectors $b \in \mathfrak{R}_n$ corresponding to a fixed root $\lambda_j$ is clearly a subspace, known as a root subspace. Thus the matrix $A$ has as many root subspaces as the characteristic equation has roots. Every eigenvector is of course a root vector (with $k = 1$). Conversely, for every root vector $b$ we can find a corresponding eigenvector. Indeed, in the sequence of vectors
$$b, \quad (A - \lambda_j I_n)\,b, \quad (A - \lambda_j I_n)^2\, b, \quad \dots$$
the first is not zero, but from some point on all vectors in the sequence vanish, by (1.37). Denote the last nonzero vector by $a$: for some natural number $p$,
$$a = (A - \lambda_j I_n)^{p-1}\, b \ne 0, \qquad (A - \lambda_j I_n)^p\, b = 0.$$
Thus $(A - \lambda_j I_n)\,a = 0$, so that $a$ is an eigenvector.

Denote
$$f_1 = (A - \lambda_j I_n)^{p-1}\, b, \quad f_2 = (A - \lambda_j I_n)^{p-2}\, b, \quad \dots, \quad f_{p-1} = (A - \lambda_j I_n)\, b, \quad f_p = b.$$
With this notation, we have $(A - \lambda_j I_n) f_1 = 0$, $(A - \lambda_j I_n) f_2 = f_1$, ..., $(A - \lambda_j I_n) f_p = f_{p-1}$, i.e.,
$$A f_1 = \lambda_j f_1, \quad A f_2 = \lambda_j f_2 + f_1, \quad \dots, \quad A f_p = \lambda_j f_p + f_{p-1}. \tag{1.38}$$
The vectors $f_1, \dots, f_p$ are linearly independent. Indeed, in the case $p = 3$, say, if
$$\alpha_1 f_1 + \alpha_2 f_2 + \alpha_3 f_3 = 0,$$
then
$$(A - \lambda_j I_n)^2 (\alpha_1 f_1 + \alpha_2 f_2 + \alpha_3 f_3) = \alpha_3 f_1 = 0, \qquad (A - \lambda_j I_n)(\alpha_1 f_1 + \alpha_2 f_2) = \alpha_2 f_1 = 0.$$
Since $f_1 \ne 0$, we successively obtain $\alpha_3 = 0$, $\alpha_2 = 0$, $\alpha_1 = 0$.

A subspace $\mathfrak{M}$ which has a basis $f_1, \dots, f_p$ satisfying (1.38) is known as a simple cyclic subspace (relative to $A$).* This subspace is obviously invariant under $A$: if $x \in \mathfrak{M}$, then $Ax \in \mathfrak{M}$.

* The definition of a cyclic subspace will not be needed below and is therefore omitted.

We have shown that to each root vector $b$ there corresponds a simple cyclic subspace $\mathfrak{M}$. Note that whereas the root $\lambda_j$ uniquely determines a root subspace of $A$, there may be more than one simple cyclic subspace, depending on the choice of the vector $b$.
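The simplest illustration is furnished by the elementary Jordan matrix itself. Let
$$A = \begin{pmatrix} \lambda_j & 1 & 0 \\ 0 & \lambda_j & 1 \\ 0 & 0 & \lambda_j \end{pmatrix}.$$
Then $b = e_3$ is a root vector with $p = 3$: indeed, $(A - \lambda_j I_3)\,e_3 = e_2$, $(A - \lambda_j I_3)^2 e_3 = e_1$, $(A - \lambda_j I_3)^3 e_3 = 0$. The chain $f_1 = e_1$, $f_2 = e_2$, $f_3 = e_3$ satisfies (1.38), and the whole space $\mathfrak{R}_3$ is a simple cyclic subspace relative to $A$.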
[…]

Let $a$ and $b$ be root vectors of the matrices $A$ and $A^*$ belonging to eigenvalues $\lambda$ and $\mu$, respectively, with $\bar\mu \ne \lambda$:
$$(A - \lambda I_n)^{m_1}\, a = 0, \qquad (A^* - \mu I_n)^{m_2}\, b = 0; \tag{1.52}$$
we claim that $(a, b) = 0$. The proof will proceed by induction on $m_1 + m_2$. If $m_1 + m_2 = 2$, then $m_1 = m_2 = 1$, so that both $a$ and $b$ are eigenvectors: $Aa = \lambda a$, $A^* b = \mu b$. We have $\lambda(a, b) = (Aa, b) = (a, A^* b) = \bar\mu\,(a, b)$, so that
$$(\lambda - \bar\mu)(a, b) = 0. \tag{1.53}$$
But by assumption $\lambda \ne \bar\mu$, and so $(a, b) = 0$. Now let $m \ge 3$ be an arbitrary integer, and suppose the assertion true for all natural numbers $m_1$, $m_2$ such that $m_1 + m_2 < m$. […]

1.15. Inequalities for hermitian forms

Let $H$ be a hermitian matrix, $h_{\min}$ and $h_{\max}$ its smallest and largest eigenvalues. Then for any vector $x \in \mathfrak{R}_n$
$$h_{\min}(x, x) \le (Hx, x) \le h_{\max}(x, x). \tag{1.58}$$

Proof. (a) We first assume that $H$ is a diagonal hermitian matrix, $H = \operatorname{diag}(h_1, \dots, h_n)$. Then
$$(Hx, x) = \sum_k h_k |\xi_k|^2,$$
and clearly
$$h_{\min}\sum_k |\xi_k|^2 \le \sum_k h_k |\xi_k|^2 \le h_{\max}\sum_k |\xi_k|^2,$$
proving (1.58).

(b) In the general case, we have from (1.57) $H = SQS^*$, where $Q = \operatorname{diag}(h_1, \dots, h_n)$ and $S$ is unitary. Hence
$$(Hx, x) = (SQS^*x, x) = (Qy, y), \qquad y = S^* x.$$
By case (a), $h_{\min}(y, y) \le (Qy, y) \le h_{\max}(y, y)$; since $S$ is unitary, $(y, y) = (x, x)$, and (1.58) follows.
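For example, the hermitian matrix
$$H = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$
has eigenvalues $h_{\min} = 1$ and $h_{\max} = 3$, attained on the eigenvectors $(1, -1)'$ and $(1, 1)'$, where the ratio $(Hx, x)/(x, x)$ equals $1$ and $3$, respectively. For $x = e_1$ we get $(Hx, x) = 2$, which indeed lies between $h_{\min}(x, x) = 1$ and $h_{\max}(x, x) = 3$, as (1.58) requires.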
1.16. Two lemmas on decomposition of $\mathfrak{R}_n$. Test for simplicity of elementary divisors

[…] Then there is a vector $f_1 \ne 0$ such that
$$A f_1 = 0, \qquad f_1 = A f_2.$$
The first equality means that $f_1 \in \mathfrak{R}$, the second that $f_1 \in \mathfrak{D}$. Thus the intersection $\mathfrak{R} \cap \mathfrak{D}$ of $\mathfrak{R}$ and $\mathfrak{D}$ contains a vector $f_1 \ne 0$, and (1.59) cannot hold.

Corollary. A necessary and sufficient condition for all elementary divisors of a matrix $A$ belonging to the eigenvalue $\lambda = 0$ to be simple is that the subspaces $\mathfrak{R}$ and $\mathfrak{D}$ be disjoint (i.e., their only common vector is the zero vector). Indeed, we have just shown that the existence of multiple elementary divisors implies the existence of a vector $f_1 \ne 0$ such that $f_1 \in \mathfrak{R}$, $f_1 \in \mathfrak{D}$. Conversely, if the elementary divisors are simple, it follows from (1.59) that $\mathfrak{R}$ and $\mathfrak{D}$ are disjoint.

Lemma II. Let $\mathfrak{R}$, $\mathfrak{D}$ and $\mathfrak{R}_*$, $\mathfrak{D}_*$ be the null spaces and ranges of the matrices $A$ and $A^*$, respectively. Then $\mathfrak{R}_n$ is an orthogonal direct sum:
$$\mathfrak{R}_n = \mathfrak{R} \oplus \mathfrak{D}_*.$$

Proof. 1) We first prove that the subspaces $\mathfrak{R}$ and $\mathfrak{D}_*$ intersect only in the zero vector: $\mathfrak{R} \cap \mathfrak{D}_* = 0$. Let $f \in \mathfrak{R} \cap \mathfrak{D}_*$. We have $f \in \mathfrak{R}$, so that $Af = 0$, and $f \in \mathfrak{D}_*$, so that there is a vector $g$ such that $A^* g = f$. Therefore
$$(f, f) = (f, A^* g) = (Af, g) = 0,$$
whence it follows that $f = 0$.

2) We show that $\mathfrak{R} \perp \mathfrak{D}_*$. Consider arbitrary vectors $f \in \mathfrak{R}$, $g \in \mathfrak{D}_*$. Since $g \in \mathfrak{D}_*$, there exists a vector $h$ such that $A^* h = g$. Then
$$(f, g) = (f, A^* h) = (Af, h) = 0,$$
as required.

3) There is no vector $f \ne 0$ orthogonal to both $\mathfrak{R}$ and $\mathfrak{D}_*$. For if there were, it would follow from $f \perp \mathfrak{D}_*$ that $(f, A^* x) = 0$ for any $x \in \mathfrak{R}_n$, so that $(Af, x) = 0$ for all $x$. Hence $Af = 0$, i.e., $f \in \mathfrak{R}$. Since $f \perp \mathfrak{R}$, this means that $f = 0$.

4) Denote $\mathfrak{R}' = \mathfrak{R} \oplus \mathfrak{D}_*$. By Lemma 1 in §1.5, $\mathfrak{R}_n = \mathfrak{R}' \oplus (\mathfrak{R}')^\perp$, where $(\mathfrak{R}')^\perp$ is the set of all vectors orthogonal to $\mathfrak{R}'$. But by part 3 above, $(\mathfrak{R}')^\perp = 0$, so that $\mathfrak{R}_n = \mathfrak{R}' = \mathfrak{R} \oplus \mathfrak{D}_*$. Q.E.D.

Note that Lemma II is equivalent to the lemma of §1.6. Lemmas I and II easily yield the following test for simplicity of elementary divisors.

Lemma III. Let $\varphi_1, \dots, \varphi_d$ and $\psi_1, \dots, \psi_d$ be bases for the subspaces $\mathfrak{R}$ and $\mathfrak{R}_*$, i.e., complete sets of linearly independent solutions of the equations*
$$A\varphi = 0, \qquad A^*\psi = 0,$$
for a singular matrix $A$. Then all elementary divisors of $A$ belonging to the eigenvalue $\lambda = 0$ are simple if and only if
$$\det\|(\varphi_j, \psi_k)\|_1^d \ne 0. \tag{1.63}$$

* By §1.6, these two equations have the same number of linearly independent solutions.

Proof. Let $\det\|(\varphi_j, \psi_k)\| = 0$. Then there are numbers $\gamma_1, \dots, \gamma_d$, not all zero, such that
$$\gamma_1(\varphi_1, \psi_k) + \dots + \gamma_d(\varphi_d, \psi_k) = 0 \qquad (k = 1, \dots, d).$$
Setting $y = \gamma_1\varphi_1 + \dots + \gamma_d\varphi_d$, we have: (a) $y \ne 0$, since not all the $\gamma_j$ vanish and $\varphi_1, \dots, \varphi_d$ are linearly independent; (b) $y \in \mathfrak{R}$, since $\varphi_j \in \mathfrak{R}$; (c) $(y, \psi_k) = 0$ ($k = 1, \dots, d$), so that by Lemma II $y \in \mathfrak{D}$. By the corollary to Lemma I, not all the elementary divisors are simple.

Now let (1.63) be true, and let $y \in \mathfrak{D} \cap \mathfrak{R}$. Since $y \in \mathfrak{R}$, it follows that $y = \gamma_1\varphi_1 + \dots + \gamma_d\varphi_d$. Since $y \in \mathfrak{D}$, Lemma II implies that $(y, \psi_k) = 0$ ($k = 1, \dots, d$), so that
$$\gamma_1(\varphi_1, \psi_k) + \dots + \gamma_d(\varphi_d, \psi_k) = 0 \qquad (k = 1, \dots, d).$$
It follows from (1.63) that $\gamma_1 = \dots = \gamma_d = 0$, i.e., $y = 0$. By the corollary to Lemma I, all the elementary divisors of $A$ belonging to the eigenvalue $\lambda = 0$ are simple and therefore have the form $\lambda, \dots, \lambda$.
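To see the test at work in the simplest situation, compare the two singular matrices
$$A_1 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$
For $A_1$: $d = 1$, $\varphi_1 = e_1$, and since $A_1^* \psi = 0$ gives $\psi_1 = e_2$, we have $\det\|(\varphi_1, \psi_1)\| = (e_1, e_2) = 0$; accordingly, the elementary divisor of $A_1$ belonging to $\lambda = 0$ is $\lambda^2$, not simple. For $A_2$: $\varphi_1 = \psi_1 = e_1$, so $\det\|(\varphi_1, \psi_1)\| = 1 \ne 0$, and the elementary divisor belonging to $\lambda = 0$ is simple.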
1.17. Complex conjugate and real subspaces

A matrix or vector is said to be real if all its components are real. This is true if and only if $\bar A = A$ or $\bar x = x$, respectively. It is clear that for any complex numbers $\alpha$, $\beta$, any vectors $x$, $y$ and any matrix $A$
$$\overline{\alpha x + \beta y} = \bar\alpha\bar x + \bar\beta\bar y, \qquad \overline{Ax} = \bar A\bar x.$$

Lemma I. A matrix $A$ is real if and only if for any vector $x \in \mathfrak{R}_n$
$$\overline{Ax} = A\bar x. \tag{1.64}$$
Indeed, if $A$ is a real matrix, then $\overline{Ax} = \bar A\bar x = A\bar x$. Conversely, if (1.64) is true, we see by successively setting $x = e_1, \dots, x = e_n$ (where $e_1, \dots, e_n$ is the standard basis) that the vectors $Ae_1, \dots, Ae_n$, the columns of $A$, are real. This proves Lemma I.

If $\mathfrak{R}' = \{x'\}$ is some subspace of $\mathfrak{R}_n$, then the set $\bar{\mathfrak{R}}' = \{\bar x'\}$ of all complex conjugates of the vectors $x' \in \mathfrak{R}'$ is also a subspace. Indeed, if $x_1, x_2 \in \bar{\mathfrak{R}}'$, then $x_1 = \bar y_1$, $x_2 = \bar y_2$, where $y_1$ and $y_2$ are vectors of $\mathfrak{R}'$; consequently
$$\alpha x_1 + \beta x_2 = \overline{\bar\alpha y_1 + \bar\beta y_2} \in \bar{\mathfrak{R}}'$$
for all complex $\alpha$ and $\beta$, as required. The subspace $\bar{\mathfrak{R}}'$ is said to be complex conjugate to $\mathfrak{R}'$. A subspace $\mathfrak{R}'$ is said to be real if $\bar{\mathfrak{R}}' = \mathfrak{R}'$, i.e., if $\mathfrak{R}'$ is invariant under complex conjugation. A real subspace (like any other nontrivial subspace of a complex space) contains vectors with complex components.

If $\mathfrak{B}^{(j)}$ is the root subspace of a real matrix $A$ belonging to an eigenvalue $\lambda_j$, then $\bar{\mathfrak{B}}^{(j)}$ is the root subspace belonging to the eigenvalue $\bar\lambda_j$. Indeed, if $(A - \lambda_j I_n)^k x = 0$, then $\overline{(A - \lambda_j I_n)^k x} = (A - \bar\lambda_j I_n)^k\bar x = 0$, since $\bar A = A$.

Let $\mathfrak{B}^{(1)}, \dots, \mathfrak{B}^{(q)}$ be all the root subspaces of a real matrix $A$. Combining the appropriate terms in (1.39), we obtain a decomposition of $\mathfrak{R}_n$ into three $A$-invariant subspaces:
$$\mathfrak{R}_n = \mathfrak{R}^0 \dotplus \mathfrak{R}^+ \dotplus \mathfrak{R}^-, \tag{1.65}$$
where
$$\bar{\mathfrak{R}}^0 = \mathfrak{R}^0, \qquad \bar{\mathfrak{R}}^+ = \mathfrak{R}^-. \tag{1.66}$$
The converse is also valid:

Lemma II. Suppose we have a decomposition (1.65), where $\mathfrak{R}^0$, $\mathfrak{R}^+$, $\mathfrak{R}^-$ are $A$-invariant subspaces satisfying (1.66), and (1.64) is true for vectors $x' \in \mathfrak{R}^0$ and $x'' \in \mathfrak{R}^+$. Then $A$ is a real matrix.

Proof. Equality (1.64) is also true for vectors $x''' \in \mathfrak{R}^-$. Indeed, if $x''' \in \mathfrak{R}^-$, then $\bar x''' \in \mathfrak{R}^+$, and it follows from (1.64) with $x = \bar x'''$ that $\overline{A\bar x'''} = Ax'''$, i.e., $\overline{Ax'''} = A\bar x'''$. Thus (1.64) is true for each of the subspaces $\mathfrak{R}^0$, $\mathfrak{R}^+$, $\mathfrak{R}^-$. By (1.65) any vector $x$ is expressible as $x = x' + x'' + x'''$, where $x' \in \mathfrak{R}^0$, $x'' \in \mathfrak{R}^+$, $x''' \in \mathfrak{R}^-$; thus
$$\overline{Ax} = \overline{Ax'} + \overline{Ax''} + \overline{Ax'''} = A\bar x' + A\bar x'' + A\bar x''' = A\bar x.$$
It follows that (1.64) is true for all $x \in \mathfrak{R}_n$, and so, by Lemma I, $A$ is a real matrix.

Now let $A$ be an arbitrary matrix. It may be regarded as the matrix (relative to the standard basis) of some operator $\mathfrak{A}$. We shall say that the operator $\mathfrak{A}$ is real if its matrix relative to the standard basis is real.

Lemma III. Let $\mathfrak{A}$ be a real operator and $\mathfrak{M}$ an invariant subspace possessing a real basis $g_1, \dots, g_m$. Then the $m \times m$ matrix $\tilde A$ of $\mathfrak{A}$ in the subspace $\mathfrak{M}$, relative to this basis, is real.

Indeed, $\tilde A = \|a_{jk}\|$, where the numbers $a_{jk}$ are defined by (1.26):
$$\mathfrak{A} g_k = \sum_{j=1}^m a_{jk} g_j \qquad (k = 1, \dots, m).$$
We have $\overline{\mathfrak{A} g_k} = \mathfrak{A}\bar g_k = \mathfrak{A} g_k$, since $\mathfrak{A}$ is real and $\bar g_k = g_k$. Therefore
$$\sum_j \bar a_{jk} g_j = \sum_j a_{jk} g_j,$$
whence it follows that $a_{jk} = \bar a_{jk}$ ($j, k = 1, \dots, m$).

Lemma IV. Suppose we have a direct sum decomposition
$$\mathfrak{R}_n = \mathfrak{R}^{(1)} \dotplus \dots \dotplus \mathfrak{R}^{(p)} \dotplus \mathfrak{R}'^{(1)} \dotplus \bar{\mathfrak{R}}'^{(1)} \dotplus \dots \dotplus \mathfrak{R}'^{(r)} \dotplus \bar{\mathfrak{R}}'^{(r)},$$
where $\mathfrak{R}^{(1)}, \dots, \mathfrak{R}^{(p)}$ are real subspaces and $\bar{\mathfrak{R}}'^{(j)}$ is the complex conjugate of $\mathfrak{R}'^{(j)}$. Pick a basis for each of the subspaces: for each $\mathfrak{R}^{(j)}$ a real basis $\{g_k^{(j)}\}$, for each $\mathfrak{R}'^{(j)}$ an arbitrary basis $\{t_k^{(j)}\}$, and for each $\bar{\mathfrak{R}}'^{(j)}$ the complex conjugate basis $\{\bar t_k^{(j)}\}$. Let each of the subspaces $\mathfrak{R}^{(j)}$, $\mathfrak{R}'^{(j)}$, $\bar{\mathfrak{R}}'^{(j)}$ be invariant under $\mathfrak{A}$, and let $A_j$, $A_j'$, $A_j''$ be the matrices of the operator $\mathfrak{A}$ in these subspaces relative to the selected bases. Then $\mathfrak{A}$ is a real operator if and only if
$$\bar A_j = A_j, \qquad A_j'' = \bar A_j'.$$

For simplicity's sake we give the proof only for $p = r = 1$, omitting the indices of the subspaces, the basis vectors and the matrices $A$, $A'$, $A''$.

Necessity. Let
$$\mathfrak{A} g_k = \sum_j a_{jk} g_j, \qquad \mathfrak{A} t_k = \sum_j a_{jk}' t_j, \qquad \mathfrak{A}\bar t_k = \sum_j a_{jk}'' \bar t_j. \tag{1.67}$$
If $\mathfrak{A}$ is a real operator, then $\overline{\mathfrak{A} g_k} = \mathfrak{A} g_k$ and $\overline{\mathfrak{A} t_k} = \mathfrak{A}\bar t_k$. Substituting (1.67) into these equalities and comparing coefficients, we find that
$$\bar a_{jk} = a_{jk}, \qquad a_{jk}'' = \bar a_{jk}', \tag{1.68}$$
i.e., $\bar A = A$, $A'' = \bar A'$.

Sufficiency. If conditions (1.68) are satisfied, then any vector $x$ is expressible as
$$x = \sum_k \gamma_k g_k + \sum_k \delta_k t_k + \sum_k \varepsilon_k \bar t_k,$$
and, using (1.67) and (1.68), we obtain
$$\overline{\mathfrak{A} x} = \sum_k \bar\gamma_k\,\mathfrak{A} g_k + \sum_k \bar\delta_k\,\mathfrak{A}\bar t_k + \sum_k \bar\varepsilon_k\,\mathfrak{A} t_k = \mathfrak{A}\bar x.$$
By Lemma I, $\mathfrak{A}$ is a real operator.

Lemma V. A subspace $\mathfrak{M}$ is real if and only if it has a real basis.

Proof. If $f_1, \dots, f_m$ is a real basis in $\mathfrak{M}$, then for any vector $x = \alpha_1 f_1 + \dots + \alpha_m f_m \in \mathfrak{M}$ we have $\bar x = \bar\alpha_1 f_1 + \dots + \bar\alpha_m f_m \in \mathfrak{M}$, so that $\mathfrak{M}$ is real.

Conversely, let $\bar{\mathfrak{M}} = \mathfrak{M}$; we have to show that $\mathfrak{M}$ has a real basis. Let $g_1, \dots, g_m$ be an arbitrary basis. Then $\bar g_j \in \mathfrak{M}$ ($j = 1, \dots, m$):
$$\bar g_1 = \gamma_{11} g_1 + \dots + \gamma_{1m} g_m, \quad \dots, \quad \bar g_m = \gamma_{m1} g_1 + \dots + \gamma_{mm} g_m.$$
Taking complex conjugates, substituting the values of $\bar g_j$ and comparing coefficients, we see that the matrix $C = \|\gamma_{jk}\|_1^m$ has the property $\bar C C = I_m$. We seek a real basis $f_1, \dots, f_m$ in the form
$$f_j = \beta_{j1} g_1 + \dots + \beta_{jm} g_m + \overline{\beta_{j1} g_1 + \dots + \beta_{jm} g_m} \qquad (j = 1, \dots, m).$$
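For instance, let $\mathfrak{M} \subset \mathfrak{R}_3$ be spanned by $g_1 = (1, i, 0)'$ and $g_2 = \bar g_1 = (1, -i, 0)'$. Then $\bar{\mathfrak{M}} = \mathfrak{M}$, so $\mathfrak{M}$ is real, and combinations of the above form,
$$f_1 = \tfrac{1}{2} g_1 + \overline{\tfrac{1}{2} g_1} = (1, 0, 0)', \qquad f_2 = \tfrac{1}{2i} g_1 + \overline{\tfrac{1}{2i} g_1} = (0, 1, 0)',$$
constitute a real basis, as Lemma V guarantees. By contrast, the span of $g_1$ alone is not a real subspace, since $\bar g_1$ does not belong to it.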
