Mathematics Applied to Continuum Mechanics

Lee A. Segel
With additional material on elasticity by G. H. Handelman

Classics in Applied Mathematics 52
SIAM

SIAM's Classics in Applied Mathematics series consists of books that were previously allowed to go out of print. These books are republished by SIAM as a professional service because they continue to be important resources for mathematical scientists.

Editor-in-Chief
Robert E. O'Malley, Jr., University of Washington

Editorial Board
Richard A. Brualdi, University of Wisconsin, Madison
Leah Edelstein-Keshet, University of British Columbia
Nicholas J. Higham, University of Manchester
Herbert B. Keller, California Institute of Technology
Andrzej Z. Manitius, George Mason University
Hilary Ockendon, University of Oxford
Ingram Olkin, Stanford University
Peter Olver, University of Minnesota
Ferdinand Verhulst, Mathematisch Instituut, University of Utrecht

A beetle making waves on a water-air interface. From experience with ship waves one would expect the disturbance to be confined to a V-shaped region behind the object. Here the waves precede the object; perhaps beetles have a mysterious organ to reverse the natural order of cause and effect! Or perhaps there is more to wave propagation than meets the eye. See the concluding portion of Section 9.1. [The photograph appeared on the cover of Science 166 (Nov. 14, 1969; copyright 1969 by the American Association for the Advancement of Science), in connection with an article by V. A. Tucker, "Wave-Making by Whirligig Beetles (Gyrinidae)," pp. 897-99. Reproduction is by permission.]

Mathematics Applied to Continuum Mechanics

Lee A. Segel
With additional material on elasticity by G. H. Handelman

siam
Society for Industrial and Applied Mathematics
Philadelphia

Copyright © 2007 by the Society for Industrial and Applied Mathematics

This SIAM edition is an unabridged republication of the work first published by Macmillan Publishing Co., New York, 1977.

10 9 8 7 6 5 4 3 2 1

All rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 University City Science Center, Philadelphia, PA 19104-2688.

Library of Congress Cataloging-in-Publication Data

Segel, Lee A.
  Mathematics applied to continuum mechanics / Lee A. Segel; with additional material on elasticity by G. H. Handelman.
    p. cm. -- (Classics in applied mathematics; 52)
  Originally published: New York: Macmillan, c1977.
  Includes bibliographical references and index.
  ISBN-10: 0-89871-620-9
  ISBN-13: 978-0-89871-620-7
  1. Continuum mechanics. 2. Mathematics. I. Handelman, G.
II. Title.
  QA808.2.S43 2007        2006052201

siam is a registered trademark.

To Ruthie

Contents

FOREWORD TO THE CLASSICS EDITION xvii
PREFACE xix
CONVENTIONS xxiii

PART A  GEOMETRICAL PREREQUISITES FOR THREE-DIMENSIONAL CONTINUUM MECHANICS

CHAPTER 1  VECTORS, DETERMINANTS, AND MOTIVATION FOR TENSORS 3
1.1 Vectors in a Cartesian coordinate system, transformation of vector components, and the summation convention 3
    Projection 4 / Transformation of coordinates--Geometric approach 5 / Notation 6 / Vectors--Algebraic point of view 9 / Vectors--Geometric point of view 10 / Review of linear dependence 12
1.2 Determinants and the permutation symbol 14
    The permutation symbol 15 / Determinants 16 / The "εδ" rule 20
1.3 The consistency requirement 23
    The consistency of Newton's second law 23 / Physical laws: General statement versus particular numerical version 24 / Guessing the tensor transformation law from the consistency requirement 25
1.4 The tensor as a linear transformation 28
    Linear transformations 28 / The stress tensor induces a linear transformation of differential area into differential force 29 / A linear transformation of differential area to differential force can be identified with the stress tensor 31

CHAPTER 2  CARTESIAN TENSORS
2.1 Tensor algebra 33
    Definitions and elementary properties 34 / Special results for second order tensors 41 / Isotropic tensors 43 / The vector associated with an antisymmetric tensor 46
2.2 The eigenvalue problem 49
    Eigenvalues and eigenvectors of symmetric tensors 55 / Principal axes 57
2.3 The calculus of tensor functions 60
    Theorems for derivatives of tensor fields 63 / Integral theorems 64 / Representation theorems 67
APPENDIX 2.1  Some basic equations of continuum mechanics 72

PART B  PROBLEMS IN CONTINUUM MECHANICS

CHAPTER 3  VISCOUS FLUIDS
3.1 The Navier-Stokes equations 78
    Analysis of the local velocity field 78 / Assumptions that underlie the constitutive equation 80 / Derivation of the final equations 82 / Boundary conditions 85 / Incompressible viscous flow 86
3.2 Exact solutions 93
    Solution 1: Plane Couette flow 93 / Solution 2: Plane Poiseuille flow 95 / Solution 3: Rayleigh impulsive flow
3.3 On boundary layers 105
    Comparative magnitudes of viscous and inviscid terms 105 / Reynolds number 107 / Boundary layer equations for steady flow past a flat plate 107 / Boundary conditions
3.4 Boundary layer flow past a semi-infinite flat plate 115
    Formulation 116 / Blasius similarity solution 116 / Defects in the Blasius solution 118 / Boundary layer separation 120 / Slightly viscous uniform flow past streamlined bodies 121 / Slightly viscous uniform flow past bluff bodies 122
3.5 Vorticity changes in viscous fluid motion 129
    Vorticity convection and stretching 129 / Viscous vorticity diffusion and boundary generation 131 / Vorticity in two-dimensional flows 132 / Vorticity in particular viscous flows 132 / Summary of the role of vorticity 134
3.6 Slow viscous flow past a small sphere 135
    Formulation 136 / Solution 137
APPENDIX 3.1  Navier-Stokes equations in cylindrical coordinates 138
APPENDIX 3.2  Generation of confidence in the boundary layer equations by construction of a finite difference scheme for their solution 139

CHAPTER 4  FOUNDATIONS OF ELASTICITY
4.1 Analysis of local motion 144
    Strain tensor in material coordinates 144 / Geometrical interpretation of strain components 147 / Strain tensor in spatial coordinates 149 / The rotation tensor 150 / Principal axes of strain 151 / Compatibility equations 153 / Some examples of strain 155
4.2 Hooke's constitutive equation and some exact solutions 159
    Generalized Hooke's law 159 / Interpretation of the elastic coefficients via exact solutions 162 / Tension of a cylindrical bar 163 / Shear of a rectangular bar 164 / Compression of a rectangular parallelepiped 166
4.3 Final formulation of the problem of linear elasticity 168
    Summary of general equations, boundary conditions, and initial conditions 168 / Navier's equations 170 / Beltrami-Michell equations 171
4.4 Energy concepts and the principle of virtual work 174
    Energy balance 174 / Principle of virtual work 177 / Uniqueness theorems 178 / Potential energy minimization in equilibrium 179
4.5 Some effects of finite deformation 184
    Kinematics 184 / Comparison with linear theory 187 / A constitutive equation for nonlinear elasticity 188 / Simple shear 189

CHAPTER 5  SOME EXAMPLES OF STATIC PROBLEMS IN ELASTICITY
5.1 Bending of beams 194
    Bending by terminal couples--Formulation 194 / Solution 196 / Interpretation 199 / Introduction to the engineering theory of bending--Basic assumptions 200 / Equations of engineering bending theory 204 / Boundary conditions 208 / Traveling-wave solutions 210 / Buckling of a beam 212 / Variational methods in elasticity 214
5.2 St. Venant torsion problem 219
    Warping function 220 / Stress function 224 / Further properties of the stress function 226 / Modified stress function 229 / Rectangular cross section 230 / Elliptical cross section 233 / St. Venant principle 235
5.3 Some plane problems 240
    Equations for plane strain 241 / Airy's stress function 241 / Boundary conditions 242 / Polar coordinates 245 / Kirsch problem--Stress concentration 248 / Plane stress 250 / Generalized plane stress 252 / Concluding remarks 255

CHAPTER 6  INTRODUCTION TO DYNAMIC PROBLEMS IN ELASTICITY
6.1 Elastic waves in unbounded media 259
    Dilatational and rotational wave equations 259 / Waves via the Helmholtz representation 260 / Plane wave solutions 261
6.2 Propagation of discontinuity surfaces 264
    A condition on mild discontinuities 265 / Further jump conditions 265 / Solving for discontinuity velocities 269 / Orientations of discontinuity surfaces 270
6.3 Reflection of plane shear waves 271
    Formulation 272 / Attempted solution with a reflected wave 273 / Additional reflected wave 274 / Interpretation of results 276
6.4 Elastic surface waves 278
    Formulation 278 / Solution: Plane waves that decay with depth 279 / Analysis of the solution 281 / Occurrences of Rayleigh waves 283
6.5 Internal reflection 285
6.6 Love waves 289
    Formulation and solution 290 / Examination of the nature of dispersion 293 / Further characteristics of the solution 294 / Concluding remarks 295

PART C  WATER WAVES

CHAPTER 7  FORMULATION OF THE THEORY OF SURFACE WAVES IN AN INVISCID FLUID
7.1 Boundary conditions 301
    Kinematic boundary condition 301 / Basic facts about surface tension 303 / Quick derivation of a dynamic boundary condition 304 / Detailed derivation of the dynamic boundary condition for an inviscid fluid 305 / Forces on a surface element 308 / Consequences of local equilibrium 313
7.2 Formulation and simplification 319
    Equations for two-dimensional waves in an infinitely wide layer of inviscid fluid 320 / Static and dynamic pressure 322 / Nondimensionalization 324 / Linearization 326
7.3 Order-of-magnitude estimates, nondimensionalization, and scaling 327
    Estimating the size of terms in equations governing water waves 328 / Scaling 329 / Use of dimensionless scaled variables 331 / Pressure scale 332
CHAPTER 8  SOLUTION IN THE LINEAR THEORY
8.1 A solution of the linearized equations 335
    Assumption of a solution of exponential type 335 / Verification that all conditions are satisfied 336 / Return to dimensional variables 339 / Interpretation of the solution 340
8.2 Initial value problems: Periodic cases 346
    Superposing solutions of exponential type 346 / Another way to write the real part of complex waves 349 / Satisfying initial conditions 352
8.3 Aperiodic initial values 354
    Abandoning dimensionless variables 354 / Solution via Fourier integrals 355 / A qualitative feature of the solution 359 / Superposition and the delta function 360
APPENDIX 8.1  Bessel functions 365

CHAPTER 9  GROUP SPEED AND GROUP VELOCITY
9.1 Group speed via the method of stationary phase 369
    Need for an asymptotic approximation 369 / Stationary-phase approximation 369 / Application of the approximation 370 / Interpretation--Group speed 374 / Phase speed 375 / Special conditions near extrema of group speed 377 / Some applications to flow past obstacles 378
9.2 Experiments and practical applications 381
    Experiments on the collapse of a rectangular bump of water 382 / Comparison of theory and experiment 384 / Practical application of theory 388
9.3 A kinematic approach to group velocity 391
    Properties of slowly varying wave trains 392 / The phase function in regions with an unvarying number of waves 393 / Integral and differential expressions of wave conservation 395 / Group and phase velocity 396 / Energy propagation 397 / Asymptotic form of the surface--A terse derivation 398
9.4 Ship, duck, and beetle waves 401
    Consequences of steady motion 403 / Steady waves induced by a point source 404 / Partial differential equation for the phase function 408
APPENDIX 9.1  The method of stationary phase--A more formal discussion 412
    Motivation 412 / Development of a theorem 414 / Heuristic derivation of the key approximation 415 / Generalization 416

CHAPTER 10  NONLINEAR EFFECTS
10.1 Formation of perturbation equations for traveling waves 418
    Change of variables 419 / Consequence of traveling-wave assumption 420 / Series solution 421 / Determination of successive sets of equations 422 / Remarks 425
10.2 Traveling finite-amplitude waves 426
    Lowest order equations 426 / Second order equations 428 / Normalization 430 / Completion of second order calculations 431 / Third order calculations 432 / Discussion 434 / Resonant case 436 / Near resonance 438 / Special resonant solutions to compare with experiment 439 / Recapitulation 441 / Final remarks--Formalism and rigor 442

PART D  VARIATIONAL METHODS AND EXTREMUM PRINCIPLES

CHAPTER 11  CALCULUS OF VARIATIONS
11.1 Extrema of a function--Lagrange multipliers 447
    Unconstrained extremalization of a function 448 / Constrained extremalization of a function 450 / Inequality constraints 455
11.2 Introduction to the calculus of variations 461
    The brachistochrone 461 / A general extremalization problem 462 / The Euler equation 463 / Natural boundary conditions / Are solutions to the Euler equation extremals? 471
11.3 Calculus of variations--generalizations 476
    A. More derivatives 476 / B. More functions 477 / C. More independent variables 478 / D. Integral constraint 479 / E. Functional constraint 481 / F. End point free to move on a given curve 483
APPENDIX 11.1  Lemma A 496
APPENDIX 11.2  Variational notation 497

CHAPTER 12  CHARACTERIZATION OF EIGENVALUES AND EQUILIBRIUM STATES AS EXTREMA
12.1 Eigenvalues and stationary points 500
    Three stationary value problems 500 / A particular self-adjoint positive problem 502
12.2 Eigenvalues as minima and the Ritz method 504
    Motivation 504 / Specification of a linear operator 508 / The lowest eigenvalue 509 / Higher eigenvalues 510 / The Ritz method 513 / Validity and utility of the Ritz method 515 / Generalization 517 / Example: Transverse vibrations of a tapered hollow beam 518 / Natural boundary conditions 521
12.3 The Courant maximum-minimum principle 528
    A problem in vibration theory 528 / The max-min principle 529 / Application to the Ritz method 531
12.4 Minimal characterization of linear positive problems 533
    Particle equilibrium as a minimum of potential energy 534 / The loaded membrane 537 / Equivalence of an inhomogeneous equation and a minimization problem 539 / The Ritz method applied to the torsion problem 541
APPENDIX 12.1  Self-adjoint operators on vector spaces 546
    Real vector spaces 547 / Scalar products 549 / Linear self-adjoint operators 551 / Eigenvalue problems 552 / Positive operators 554 / Orthonormal elements 556 / Inhomogeneous problems 558 / The adjoint 560 / Banach and Hilbert spaces 562 / Completely continuous operators 563

BIBLIOGRAPHY
HINTS AND ANSWERS 579
INDEX 585

Foreword to the Classics Edition

In the 1960s, continuum mechanics was undergoing a revolution from a "feudal" science, with fiefs of fluid mechanics, elasticity, and esoteric combinations of those sciences with electromagnetics, to a "cosmopolitan" one consisting of a theory of everything continuum. Love's Treatise on the Mathematical Theory of Elasticity and Lamb's Hydrodynamics had brought rigor to these areas, and Truesdell's two Handbuch der Physik articles rigorized the more abstract "rational mechanics" approach. To further complicate matters, science was starting to become computerized, with codes to approximate solutions to increasingly complex problems.

Against this backdrop, Lee Segel set out to write a book for a course in applied mathematics at Rensselaer. The text for the first semester was Mathematics Applied to Deterministic Problems in the Natural Sciences, also published by SIAM. The present volume was the text for the second semester of this course. As texts for such a course, the first enjoyed more success than the second. This is partly because mathematics was starting to be applied to a wide spectrum of disciplines in the physical and life sciences, and partly because continuum mechanics took on less of a role as a source for interesting and challenging mathematical problems.

Still, in the decades since this book's first publication, continuum approaches were attempted for the description of a multitude of mechanics problems, including mixtures, reacting fluids, heterogeneous solids, multiphase flows, and structure-fluid interactions. Successful treatment of each added complications that required an understanding of the fundamentals of continuum mechanics that went beyond the manipulations required for describing Navier-Stokes fluids or linear elasticity.

This text possesses a quality that makes it an important scientific tool that can be recommended to anyone interested in understanding the vagaries of continuum mechanics. It is aimed at explaining the science of continuum mechanics, with clarity winning out over rigor, and with
explanation and motivation emphasized over manipulation.

This is a book from which it is possible to learn continuum mechanics. As such, it shares a niche with many others, both modern and classical. But the student who uses this book will be able to add understanding to an appreciation of the science of continuum mechanics.

Donald Drew
Rensselaer Polytechnic Institute

Preface

This text develops and uses mathematics to analyze continuum models of fluid flow and solid deformation. It is intended for upper level undergraduates and graduate students in applied mathematics, science, and engineering; all of the material has been tested in various courses for such students. There is an emphasis on the process of achieving understanding, even at the expense of limiting the topics covered.

Although applied mathematics has flowered far from its original roots in physics, continuum mechanics remains a core course. One reason is that the concepts that theoreticians use to organize experience are developed most fully in classical subjects such as continuum mechanics. Subjects of comparable depth, electromagnetism and quantum physics, for example, do not share with continuum mechanics a concentration on relatively familiar natural phenomena and therefore are not so well suited to the rapid development of physical intuition. Moreover, because it deals with problems that are at once deep and widely understandable, continuum mechanics has been for two centuries a major source of significant new applied mathematics. Boundary layer theory provides a paradigm for the development of a concept from one invented for a particular class of problems in continuum mechanics to one presently used in a wide variety of applications. Another example is provided by the study of water waves, which continues to stimulate the generation of new techniques for analyzing nonlinear partial differential equations.

This volume is a sequel to C. C. Lin and L. A. Segel's Mathematics Applied to Deterministic Problems in the Natural Sciences (hereafter referred to as I). It is permeated with the spirit discussed in the preface of that work. In particular, a case-study approach is used, heurism dominates rigor, and brevity is sometimes sacrificed to permit depth of exposition.

In I various ideas were introduced in extremely simple contexts, so that their essence could be clearly seen. The danger that these ideas would thus seem rather trivial is countered here by providing more realistic examples. To give one illustration, regular perturbation theory was introduced in I to approximate the solutions of simple algebraic and ordinary differential equations; in the present volume the theory is applied to the system of nonlinear partial differential equations with a "free" boundary that governs water waves.

Material in I is a prerequisite for the present work only to the extent that here we assume familiarity with the basic mass and momentum conservation laws of continuum mechanics. Remarks that refer to I can be skipped. Various themes that appear there are taken up again here, but in a self-contained fashion. There is no doubt, however, that background from I is useful here, for the totality of material in both volumes is designed to provide a thorough understanding of many topics. As in I, "we have assumed that the potential reader has had an introductory college course in physics and is familiar with calculus and differential equations. . . . We make considerable use of such topics as directional derivatives, change of variables in multiple integrals, line and surface integrals, and the divergence theorem."
Complex analysis is used in a few exercises and in Section 5.3.

This volume is divided into four parts:

A. Geometrical prerequisites for three-dimensional continuum mechanics.
B. Problems in continuum mechanics.
C. Water waves.
D. Extremum principles.

The table of contents gives a more detailed outline. Part A features a careful discussion of how the requirement that natural laws be "essentially the same" in different Cartesian coordinate systems leads to the introduction of Cartesian tensors. The bulk of Part A is devoted to a fairly formal development of tensor algebra and calculus. Tensors are applied in Part B to derive the constitutive equations for viscous fluids and for linearly elastic solids. (For those who desire it, paths are indicated that reach these equations without the use of tensor machinery.) Boundary layer theory plays the central role in the discussion of viscous fluid motion. A variety of static and dynamic problems are available once the equations of elasticity have been derived.

The treatment of water waves in Part C provides the most detailed discussion in the text. The concept of group velocity plays a central part, and the role of nonlinearity is considered. Experimental verification of the theory is also discussed. In this area, therefore, the text brings the reader to the frontiers of modern research. Moreover, the material on elastic waves in Part B, taken together with that on water waves in Part C, provides at the same time an illustration of the unifying power of a single concept (wave motion), and also of the refinements and modifications that are necessary when this concept is applied to different areas.

The text concludes with a discussion in Part D of extremum principles, considered both as natural frameworks for physical laws and as bases for direct methods of computation. As Part D proceeds, simple concepts of functional analysis are introduced, since they form a natural language for the discourse.

There is considerable independence among the various parts and chapters. Part D, for example, can be begun at once, although knowledge of mechanics will lend urgency to some of the problems posed there. The material on water waves (Part C) has as a prerequisite only the equations of incompressible inviscid fluid motion. If one is willing to accept the governing equations more or less as axioms, much of the material on elasticity and fluid mechanics in Part B can be made quickly accessible. Although the motivation for tensors in Chapter 1 requires knowledge of the stress tensor, if one is willing to forego extensive motivation it is possible to acquire the formal methods from Part A, Section 1.1, and Chapter 2.

This book grew out of the course "Foundations of Applied Mathematics" introduced by G. H. Handelman at Rensselaer Polytechnic Institute around 1957 and taught every year since then, many times by the author. (A precursor of this course was given by A. Schild and Handelman at Carnegie Institute of Technology, now part of Carnegie-Mellon University.)

Special thanks are due to Dr. Handelman, not only for his general influence, but also for his willingness to write most of the material in the chapters on elasticity. (The principal author is responsible for final editing.) Other writers, teachers, colleagues, and students have all been influential.
Joseph Blum, Paul Davis, Donald Drew, Eitan Klein, Simon Mochon, William Siegmann, and Marc Triebitz have been particularly helpful with the present volume. Many secretaries have given excellent service. As was the case with I, the publishers, particularly editors Everett Smethurst and Elaine Wetterau, have provided first-rate assistance. Finally, help with the index and other last-minute editorial matters was provided by Joel, Susan, Daniel, and Michael Segel.

The author was partially supported in 1968-1969 by a Leave of Absence Grant from Rensselaer. Further support was received during 1971-1972 from National Science Foundation Grant GP33679X to Rensselaer and from a John Simon Guggenheim Foundation Fellowship. That year was spent as a visitor to the Department of Applied Mathematics, The Weizmann Institute of Science, Rehovot, Israel; the author joined this department in September 1973, but he retains an association with Rensselaer Polytechnic Institute. He is thus formally indebted to both institutions for support during years of on-and-off writing; the support was always generously given and the acknowledgment is correspondingly warm.

L.A.S.

Conventions

Each chapter is divided into several sections (e.g., Section 5.2 is the second section of Chapter 5). Equations are numbered consecutively within each section. Figures and tables are numbered consecutively within each chapter. When an equation outside a given section is referred to, the section number precedes the equation number. Thus "Equation (6.3.2)" refers to the second numbered equation of Section 6.3. But if this equation were referred to within Section 1 of Chapter 6, then the chapter number would be assumed and the reference would be to "Equation (3.2)." The fourth numbered equation in Appendix 3.1 is denoted by (A3.1.4).

A double dagger (‡) preceding an exercise, or a part thereof, signifies that a hint or an answer will be found in the back of this volume. The symbol □ signifies that the proof of a theorem has concluded. The preceding volume is referred to as "I." A brief bibliography of useful references (some repeated from I) can be found at the end of this volume. When one of these books is cited, the style "Smith (1970)" is employed.

PART A
Geometrical Prerequisites for Three-Dimensional Continuum Mechanics

CHAPTER 1
Vectors, Determinants, and Motivation for Tensors

Tensors have both computational and conceptual value. We try to illustrate both aspects in Part A.

We begin Chapter 1 with a review of vectors which emphasizes that more than direction and magnitude characterize these entities. In a consideration of how vector components transform when different Cartesian coordinate systems are used, we introduce a standard summation convention that makes the required formulas more condensed and hence more manageable. Determinants are developed in a computational fashion with the aid of the alternating symbol. Most readers will be familiar with at least the principal results, but the alternating symbol is intrinsically worth studying, and it provides an approach to determinants that should be new and appealing.

We make contact with the central content of the chapter when we turn to a detailed discussion of the fact that physical laws cannot intrinsically depend on the coordinate system used. From this discussion we are able to provide strong motivation for the definition of a tensor. An optional section provides another motivation, based on the connection between tensors and linear transformations.
We consider only Cartesian tensors, which means that only Cartesian coordinate systems are allowed. Generalizations of our considerations are possible and useful, but they are beyond the scope of this book.

1.1 Vectors in a Cartesian Coordinate System, Transformation of Vector Components, and the Summation Convention

Recall that a Cartesian coordinate system is based on three mutually perpendicular coordinate planes. These intersect in three mutually perpendicular lines called the $x_1$-, $x_2$-, and $x_3$-axes. A positive direction is assigned to each axis. The coordinate system is called right- (left-)handed if the third finger of the right (left) hand points in the positive $x_3$-direction when the thumb and index finger are pointed in the positive $x_1$- and $x_2$-directions, respectively.

As shown in Figure 1.1, a point P can be reached by following three displacements OA, AB, and BP parallel to the three coordinate axes. Select a unit of length. If these displacements (positive or negative) have the measured values $x_1$, $x_2$, and $x_3$, respectively, then the coordinates of the point P are $(x_1, x_2, x_3)$. The distance OP is given by $(x_1^2 + x_2^2 + x_3^2)^{1/2}$. Unit vectors $\mathbf{e}^{(1)}$, $\mathbf{e}^{(2)}$, and $\mathbf{e}^{(3)}$ point along the respective axes.

FIGURE 1.1. Any point P can be reached by three displacements that are parallel to the coordinate axes.

The above material will be familiar to the reader, but we wish to emphasize that some fundamental assumptions are implied, i.e., that we can find three mutually perpendicular lines and that the law of Pythagoras holds. By contrast, on a spherical surface great circles play the role of lines. We can find mutually perpendicular great circles through any one point, but the law of Pythagoras does not hold for distances measured along great circles. Although we can always conceive of a Cartesian coordinate system, the fact that we can use it to describe the space of our common experience is an experimental fact; i.e., we must verify experimentally that Pythagoras's theorem holds. Thus, in Figure 1.2, since OB = OC and AB = AC, we should have $OA^2 + OB^2 = AB^2$ and $OC^2 + OA^2 = AC^2$. If these relationships can be verified, we speak of the space as Euclidean, otherwise as non-Euclidean.*

* The Euclidean nature of the space of common experience has primarily been established indirectly, by the agreement between observation and multitudes of theoretical predictions whose derivation assumes that space is Euclidean.

FIGURE 1.2. Measurements on a construction of this sort can in principle be used in an effort to find out whether or not "real" space is Euclidean.

PROJECTION

A fundamental relationship in the Cartesian coordinate system is the rule of projection. If PQ is a displacement with components (u, v, w) and RS is a line with direction at angles $(\alpha, \beta, \gamma)$ to the coordinate axes, then the projection of PQ on RS is $u\cos\alpha + v\cos\beta + w\cos\gamma$.* This is a familiar result, but it is not trivial when the lines PQ and RS are skew to each other. Readers should demonstrate this to their own satisfaction (Exercise 1).

* We remind the reader that the projection of a point P on RS is the intersection of RS with the perpendicular dropped from P onto RS. The projection of PQ is the union of the projections of its points.
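The projection rule is easy to spot-check numerically. The short script below is a minimal sketch of my own, not part of the text; the displacement and the direction of RS are arbitrary illustrative choices. It forms the direction cosines of RS as the components of a unit vector and evaluates $u\cos\alpha + v\cos\beta + w\cos\gamma$ as a dot product.

```python
import numpy as np

# Displacement PQ with components (u, v, w).
pq = np.array([1.0, 2.0, -0.5])

# A line RS specified by an arbitrary (nonzero) direction vector.
rs = np.array([3.0, -1.0, 2.0])

# Direction cosines (cos a, cos b, cos g) are the components of the unit vector along RS.
cosines = rs / np.linalg.norm(rs)

# Projection of PQ on RS: u*cos(a) + v*cos(b) + w*cos(g).
projection = pq @ cosines
print(projection)
```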
We shall now use the rule of projection to study how components change when new axes are selected. A change of axes may result in a simplification of some problems; this is sufficient reason to study such changes, but there are more fundamental reasons, too, as we shall see.

TRANSFORMATION OF COORDINATES--GEOMETRIC APPROACH

Consider two sets of Cartesian axes, primed and unprimed, with common origin O (Figure 1.3). Let a displacement OP have components $(x_1, x_2, x_3)$.

FIGURE 1.3. Two sets of Cartesian axes, rotated with respect to one another. Either can be used to describe the displacement OP.

Denote the projection of OP on the $x_1'$-axis by $x_1'$, and so on. By the above rule of projection,

$$x_1' = x_1\cos(x_1, x_1') + x_2\cos(x_2, x_1') + x_3\cos(x_3, x_1'),$$

where $(x_i, x_j') = (x_j', x_i)$ is the angle between the positive $x_i$-direction and the positive $x_j'$-direction. In general,

$$x_j' = \sum_{i=1}^{3} x_i\cos(x_i, x_j'), \qquad j = 1, 2, 3. \qquad (1)$$

For the inverse transformation, the rule of projection* yields

$$x_i = \sum_{j=1}^{3} x_j'\cos(x_i, x_j'), \qquad i = 1, 2, 3. \qquad (2)$$

* Equations (1) and (2) can easily be derived algebraically (Exercise 2).

This must be compatible with (1). To show that this is so, we substitute (1) into (2), obtaining

$$x_i = \sum_{j=1}^{3}\sum_{k=1}^{3} x_k\cos(x_k, x_j')\cos(x_i, x_j').$$

But

$$\sum_{j=1}^{3}\cos(x_i, x_j')\cos(x_k, x_j') = \begin{cases} 1 & \text{if } i = k, \\ 0 & \text{if } i \neq k, \end{cases} \qquad (3)$$

since the sum is merely the projection of a unit displacement along $x_k$ upon the $x_i$-axis. By (3), then, we have the identity $x_i = x_i$ verified for each i.

NOTATION

The preceding calculations are simple examples of a type of manipulation that is very often encountered. Experience indicates that the following new notational conventions, once they have been mastered, shorten and clarify the formulas. In a given equation, an index (subscript or superscript) is called a dummy if it is used in a summation. Other indices are called free. In (3), i and k are free subscripts and j is a dummy subscript.

Range convention. A free index takes on the values 1, 2, 3. The same free indices must appear in each term of an equation.

Summation convention. No index will be written more than twice in a single term.* Suppose that an index, say i, appears twice in a single term. Then the symbol $\sum_{i=1}^{3}$ will be understood to precede that term.

* Violation of this requirement is the biggest single cause of the errors that beginners make in manipulations involving the summation convention.

From now on we shall use the range and summation conventions unless an explicit statement is made to the contrary. For example, instead of

$$x_{mn} = \sum_{p=1}^{3}\sum_{q=1}^{3} A_{mnpq}Y_{pq}, \qquad m, n = 1, 2, 3,$$

we shall write

$$x_{mn} = A_{mnpq}Y_{pq}.$$

The same free subscripts, m and n, are used on both sides of the equation. The subscripts p and q are repeated, so that summation is implied.

Two special abbreviations occur repeatedly. The Kronecker delta is defined by

$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases} \qquad (4)$$

The transformation symbol is defined by

$$\ell_{ij} = \cos(x_i, x_j'). \qquad (5)$$

Using these definitions and the summation convention, we see that (1) to (3) can be written briefly as

$$x_j' = x_i\ell_{ij}, \qquad (6)$$
$$x_i = x_j'\ell_{ij}, \qquad (7)$$
$$\ell_{ij}\ell_{kj} = \delta_{ik}. \qquad (8)$$

By the same reasoning as that used to obtain (8), we also have

$$\ell_{ji}\ell_{jk} = \delta_{ik}. \qquad (9)$$

Readers should satisfy themselves before going further that, because of our various definitions and conventions, the following four equations convey precisely the same information, respectively, as the four equations just written:*

$$x_j' = \ell_{ij}x_i, \qquad x_p = x_j'\ell_{pj}, \qquad \ell_{iq}\ell_{kq} = \delta_{ik}, \qquad \ell_{pm}\ell_{pn} = \delta_{mn}.$$

* The changes that have been made are as follows: (i) the order of multiplying the real numbers $x_i$ and $\ell_{ij}$ has been reversed, compared with (6); (ii) the free subscript i of (7) has had its "name" changed to p (a permissible change, since it does not violate the rule against writing an index more than twice in a single term); (iii) the dummy subscript j of (8) has had its "name" changed to q (a permissible change); and (iv) in (9), both the dummy subscript j and the free subscripts i and k have been changed in a permissible way.
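The summation convention maps directly onto numerical index contractions. The sketch below is my own illustration, not part of the text; the matrix L is an arbitrarily chosen rotation standing in for $\ell_{ij} = \cos(x_i, x_j')$. It uses numpy.einsum, whose repeated-index rule mirrors the convention: an index that appears twice is summed.

```python
import numpy as np

# An arbitrary transformation matrix l_ij = cos(x_i, x_j'): a rotation by 30 degrees
# about the x3-axis, so its columns are the primed unit vectors in the unprimed frame.
th = np.pi / 6
L = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])

x = np.array([1.0, 2.0, 3.0])            # components in the unprimed system

# Equation (6), x'_j = x_i l_ij: the repeated index i is summed automatically.
x_prime = np.einsum('i,ij->j', x, L)

# Equation (7), x_i = x'_j l_ij, recovers the original components.
x_back = np.einsum('j,ij->i', x_prime, L)
print(np.allclose(x_back, x))            # True

# Equations (8) and (9): l_ij l_kj = delta_ik and l_ji l_jk = delta_ik.
print(np.allclose(np.einsum('ij,kj->ik', L, L), np.eye(3)))   # True
print(np.allclose(np.einsum('ji,jk->ik', L, L), np.eye(3)))   # True
```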
Readers will also find it helpful in rapidly acquiring familiarity with the conventions if they expand some of the compactly written formulas below.

Note that $\ell_{ij} \neq \ell_{ji}$ in general. To remember the definition (5), memorize: The second subscript goes with the prime. The same phrase makes it easy to remember the transformation laws (6) and (7) and to avoid errors in future formulas. In (6), for example, the dummy subscript i appears on the unprimed quantity $x_i$. Since the second subscript goes with the prime, the first subscript of $\ell$ must also be i. The free subscript j is associated with the primed components on the left side of (6) and with the second (primed) position in $\ell$.*

* There is a notation for the transformation symbol with which one does not have to remember that the second subscript goes with the prime, because the order of subscripts is immaterial. In this notation $\cos(x_i, x_j') = \ell_{ij'} = \ell_{j'i}$. Because the extra primes are a bit burdensome and because a departure from conventionality creates some confusion, we shall not adopt this notation here, but we do recommend it as intrinsically superior.

As our first new calculation using the summation convention, we shall verify that the distance between two points is invariant under a rotation of axes. The distance d between two points P and Q is given by

$$d^2 = [x_i(Q) - x_i(P)][x_i(Q) - x_i(P)].$$

In the new coordinate system, we have

$$(d')^2 = [x_j'(Q) - x_j'(P)][x_j'(Q) - x_j'(P)] = [x_i(Q) - x_i(P)]\ell_{ij}\,[x_k(Q) - x_k(P)]\ell_{kj}.$$

By use of (8) we see that

$$(d')^2 = [x_i(Q) - x_i(P)][x_k(Q) - x_k(P)]\,\delta_{ik}. \qquad (10)$$

Only terms with k = i need be retained in (10), since $\delta_{ik} = 0$ when $k \neq i$, so one can write either

$$(d')^2 = [x_i(Q) - x_i(P)][x_i(Q) - x_i(P)]$$

or

$$(d')^2 = [x_k(Q) - x_k(P)][x_k(Q) - x_k(P)],$$

so that $(d')^2 = d^2$.

VECTORS--ALGEBRAIC POINT OF VIEW

Consider the position coordinates $x_i$ of a particle in motion, so that $x_i = f_i(t)$, where t is the time. In another coordinate system the position will be given by $x_j' = g_j(t)$; i.e.,

$$x_j' = x_i\ell_{ij}, \qquad \text{or} \qquad g_j(t) = \ell_{ij}f_i(t). \qquad (11)$$

The velocity of the particle has components

$$v_i = \dot{f}_i(t)$$

in the $x_i$-system and

$$v_j' = \dot{g}_j(t)$$

in the $x_j'$-system. (Here $\dot{\ } = d/dt$.) Thus, from (11),

$$v_j' = v_i\ell_{ij}.$$

Multiply by $\ell_{kj}$ (and sum):

$$v_j'\ell_{kj} = v_i\ell_{ij}\ell_{kj} = v_i\delta_{ik} = v_k.$$

We see that the transformations linking the velocity components $v_i$ and $v_j'$ are the same as the transformations (6) and (7) for the displacement components. We thus designate as vectors quantities v whose components $v_i$ and $v_j'$ transform according to (6) and (7). We speak of the displacement vector, the velocity vector, and so on. Similarly, acceleration is a vector, since $a_i = \dot{v}_i(t)$, $a_j' = \dot{v}_j'(t)$.

Thus a vector v is an entity that is associated with three components in any Cartesian coordinate system. The components $v_i$ and $v_j'$ in two coordinate systems are linked by the relations

$$v_j' = v_i\ell_{ij}, \qquad v_i = v_j'\ell_{ij}. \qquad (12)$$

If v and w are vectors with components $v_i$ and $w_i$ in some coordinate system, their sum v + w is defined as the entity with components $v_i + w_i$. If c is a real number, the scalar multiple cv is the entity with components $cv_i$. It is easy to show that the sum of two vectors is a vector, as is the scalar multiple of a vector.
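Both the invariance of distance and the vector character of velocity can be checked numerically. The following sketch is an illustration of mine rather than part of the text; the trajectory f(t) and the rotation angle are arbitrary choices made only for the check.

```python
import numpy as np

th = 0.7  # arbitrary rotation angle about the x3-axis
L = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])   # l_ij = cos(x_i, x_j')

# Invariance of distance: (d')^2 = d^2 for two arbitrary points P and Q.
P, Q = np.array([1.0, 0.0, 2.0]), np.array([-1.0, 3.0, 0.5])
d2 = np.sum((Q - P) ** 2)
d2_prime = np.sum((Q @ L - P @ L) ** 2)          # x'_j = x_i l_ij
print(np.isclose(d2, d2_prime))                  # True

# Velocity transforms like displacement: v'_j = v_i l_ij.
f = lambda t: np.array([np.cos(t), np.sin(t), t ** 2])   # sample trajectory x_i = f_i(t)
t, h = 1.0, 1e-6
v = (f(t + h) - f(t - h)) / (2 * h)              # centered-difference velocity, unprimed
g = lambda t: f(t) @ L                           # primed coordinates g_j(t) = f_i(t) l_ij
v_prime = (g(t + h) - g(t - h)) / (2 * h)
print(np.allclose(v_prime, v @ L))               # True
```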
VECTORS--GEOMETRIC POINT OF VIEW

In showing above that $d^2 = (d')^2$, we have verified that if $v_i$ are the components of any vector, then its magnitude $v = (v_iv_i)^{1/2}$ is invariant under a rotation of axes. If $v \neq 0$, then $v_i/v$ are the components of a direction vector or unit vector, which has unit magnitude.

The geometric picture of a vector as an arrow (indicating magnitude and direction) implicitly assumes the validity of the operations used in an algebraic treatment, namely, (i) the decomposition into components along mutually perpendicular coordinate axes, and (ii) the addition of components along the axes to determine the resultant sum of two vectors. The correct geometrical description of a vector that makes these assumptions explicit is as follows: A vector is a quantity with direction and magnitude. The sum of two vectors is determined by the parallelogram law. It can easily be shown that this geometric description is equivalent to the algebraic definition given above. To show that a vector sum requires the addition of components, one simply projects the parallelogram along the three coordinate axes (Exercise 3). Note that quantities with equal direction and magnitude (which follow the parallelogram law) are regarded as the same vector. Also recall that from the geometric point of view, the scalar multiple cv is a vector with a magnitude |c| times the magnitude of v and a direction the same as (opposite to) that of v if c is positive (negative).

The geometric point of view is useful in furnishing a proof of the following important result.*

Theorem 1. Let |L| denote the determinant of the transformation matrix L with components $\ell_{ij}$. Then |L| = +1 if the primed and unprimed systems are both right-handed or both left-handed, and |L| = -1 if one system is right-handed and the other left-handed.

Proof.† Let us work in the primed coordinate system, which we assume for the moment to be right-handed. We can regard the kth row of the transformation matrix as formed of the components of the unit coordinate vector $\mathbf{e}^{(k)}$, for $\mathbf{e}^{(k)} = c_{kj}\mathbf{e}'^{(j)}$ implies that

$$\mathbf{e}^{(k)}\cdot\mathbf{e}'^{(i)} = c_{kj}\,\mathbf{e}'^{(j)}\cdot\mathbf{e}'^{(i)} = c_{kj}\delta_{ji} = c_{ki}.$$

Hence the components $c_{ki}$ are just the $\ell_{ki}$, for (5) certainly implies that

$$\mathbf{e}^{(i)}\cdot\mathbf{e}'^{(j)} = \ell_{ij}. \qquad (13)$$

Thus by a standard vector theorem§ $|L| = \mathbf{e}^{(1)}\cdot\mathbf{e}^{(2)}\wedge\mathbf{e}^{(3)}$, so that |L| has a magnitude equal to the volume of the rectangular parallelepiped spanned by $\mathbf{e}^{(1)}$, $\mathbf{e}^{(2)}$, and $\mathbf{e}^{(3)}$, and has a positive or negative sign depending on whether these vectors form a right- or left-handed system. In the present case this parallelepiped is a unit cube, so that the theorem is proved when the primed coordinate system is right-handed. Completion of the proof is left for Exercise 4. □

* Another proof is given in Example 1.2.4.
† An algebraic proof is provided in Example 4 of the next section.
§ We use the symbol ∧ to denote the vector product.

REMARK. If the transformation between two Cartesian coordinate systems does not involve a reflection, the transformation is called a proper rotation. Otherwise the rotation is improper. In this terminology, Theorem 1 states that

$$|L| = \begin{cases} +1 & \text{for proper rotations,} \\ -1 & \text{for improper rotations.} \end{cases} \qquad (14)$$
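The sign distinction in (14) is easy to observe numerically. The sketch below is my own illustration, not from the text; the angle and the particular reflection are arbitrary. Composing a rotation with a reflection keeps the matrix orthogonal but flips the determinant to -1.

```python
import numpy as np

th = 0.4
proper = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])   # rotation only

reflection = np.diag([1.0, 1.0, -1.0])                # flips the x3-axis
improper = proper @ reflection                        # rotation combined with a reflection

for name, M in [("proper", proper), ("improper", improper)]:
    orthogonal = np.allclose(M @ M.T, np.eye(3))      # equations (8) and (9) hold for both
    print(name, orthogonal, round(np.linalg.det(M), 6))
# proper   True  1.0
# improper True -1.0
```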
Note that every vector has a direction and a magnitude, but the converse is not true. For example, the finite rotation of a rigid body about a fixed axis can be specified by a direction and a magnitude. It is natural to represent such a rotation by an arrow with the specified direction and magnitude. But if we do this, we find that the decomposition into components has no meaning.

Consider, for example, the rotation of the unit square OABC through an angle of $(\pi/2)\sqrt{2}$ radians (about 127°) about the OB axis (see Figure 1.4). The "natural" representation of this rotation is an arrow R along OB with length $(\pi/2)\sqrt{2}$. If R is a vector, its components must be vectors along the x- and y-axes, each of length $\pi/2$. But these hypothetical component vectors correspond to rotating the square $\pi/2$ radians about OX and then $\pi/2$ radians about OY, leaving the square perpendicular to the plane. Obviously, the sum of these two components does not represent the same rotation as R, even though their vector sum is R. Thus the concept of component does not apply to a finite rotation; a rotation cannot be represented by a vector even though it has a direction and magnitude. See Exercises 5 and 2.6 for further information about finite rotations.

FIGURE 1.4. The arrow R provides a natural representation for rotation of the square OABC about the line OB, through an angle $(\pi/2)\sqrt{2}$. The direction of R is indicated, and its magnitude should obviously be $(\pi/2)\sqrt{2}$. But R is not a vector.

REVIEW OF LINEAR DEPENDENCE

Each vector V satisfies

$$\mathbf{V} = V_1\mathbf{e}^{(1)} + V_2\mathbf{e}^{(2)} + V_3\mathbf{e}^{(3)} = V_i\mathbf{e}^{(i)},$$

where $\mathbf{e}^{(i)}$ represents a unit vector in the direction of the $x_i$-axis. The question may be asked: If we choose three arbitrary vectors $\mathbf{A}_1$, $\mathbf{A}_2$, and $\mathbf{A}_3$, is it possible to find three constants $(c_1, c_2, c_3)$ such that an arbitrary vector V is expressible in the following form?

$$\mathbf{V} = c_i\mathbf{A}_i. \qquad (15)$$

Let the components of $\mathbf{A}_j$ be denoted by $A_{ij}$. Since two vectors are equal if and only if their components are equal, the answer to our question depends on the solution of the linear system of equations

$$c_1A_{11} + c_2A_{12} + c_3A_{13} = V_1,$$
$$c_1A_{21} + c_2A_{22} + c_3A_{23} = V_2, \qquad (16)$$
$$c_1A_{31} + c_2A_{32} + c_3A_{33} = V_3,$$

with the coefficient matrix

$$A = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix}.$$

The reader will doubtless recall that if the determinant |A| of the matrix A does not vanish, then (16) has exactly one solution for the unknowns $c_1, c_2, c_3$, and hence there is a unique representation. Otherwise, this is not the case. If |A| = 0, then there exists at least one set of constants $(c_1, c_2, c_3) \neq (0, 0, 0)$ such that

$$c_i\mathbf{A}_i = \mathbf{0}, \qquad (17)$$

where the zero vector 0 has components (0, 0, 0) in any coordinate system. If (17) holds, the vectors $\mathbf{A}_i$ are said to be linearly dependent. If $|A| \neq 0$, they are said to be linearly independent. Any vector can be represented as a linear combination of three linearly independent vectors.

When three vectors are linearly dependent, we may express one of them as a linear combination of the other two, e.g., $\mathbf{A}_1 = k_2\mathbf{A}_2 + k_3\mathbf{A}_3$. If one constructs the triangle of Figure 1.5 to represent the sum above, it is evident that the plane containing $\mathbf{A}_2$ and $\mathbf{A}_3$ must also contain $\mathbf{A}_1$. Three linearly dependent vectors are coplanar.

FIGURE 1.5. $\mathbf{A}_1$, $\mathbf{A}_2$, and $\mathbf{A}_3$ are linearly dependent. Therefore, they are coplanar.
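Numerically, the dichotomy just described is a determinant test. The sketch below is an illustrative check of my own (the vectors are arbitrary, not taken from the text): when |A| is nonzero the coefficients $c_i$ follow from solving the linear system (16); when the three vectors are coplanar the determinant vanishes.

```python
import numpy as np

# Columns of A hold the vectors A_1, A_2, A_3, so the system (16) reads A @ c = V.
A = np.column_stack([[1.0, 0.0, 1.0],
                     [0.0, 2.0, 1.0],
                     [1.0, 1.0, 0.0]])
V = np.array([2.0, 3.0, 4.0])

if not np.isclose(np.linalg.det(A), 0.0):
    c = np.linalg.solve(A, V)          # unique representation V = c_i A_i
    print(c, np.allclose(A @ c, V))    # coefficients, True

# A linearly dependent (coplanar) triple: the third vector is a combination of the first two.
B = np.column_stack([[1.0, 0.0, 1.0],
                     [0.0, 2.0, 1.0],
                     [2.0, 2.0, 3.0]])   # B_3 = 2*B_1 + B_2
print(np.isclose(np.linalg.det(B), 0.0))  # True
```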
EXERCISES

‡1. Demonstrate the projection rule discussed at the beginning of the section.
‡2. Derive (1) and (2) algebraically, starting with the relation $x_j'\mathbf{e}'^{(j)} = x_i\mathbf{e}^{(i)}$.
3. Show that the "parallelogram law" of vector addition implies componentwise addition, and conversely.
4. Complete the proof of Theorem 1.
5. Let x be transformed into y by a right-handed rotation through an angle θ. The axis of rotation is given by a unit vector n. Show that
   $$\mathbf{y} - \mathbf{x} = (1 - \cos\theta)[(\mathbf{x}\cdot\mathbf{n})\mathbf{n} - \mathbf{x}] + \sin\theta\,(\mathbf{n}\wedge\mathbf{x}).$$
6. (a) Simplify the result of Exercise 5 when θ is small, to obtain
       $$\mathbf{y} - \mathbf{x} = \boldsymbol{\theta}\wedge\mathbf{x}, \qquad \text{where } \boldsymbol{\theta} = \theta\mathbf{n}.$$
   (b) According to part (a), what is the result of two successive small rotations? What does this imply about the possibility of specifying small rotations by a vector?
   (c) How small does θ have to be for the formula of part (a) to give 10 per cent accuracy?
‡7. (a) Write down the matrix for the coordinate transformation obtained by rotating 180° about the $x_3$-axis. (See Figure 1.6.)
   (b) Calculate (and check by using common sense, which makes the answer obvious) the new components of a vector having components $p_i$ in the old system.
‡8. If $A_i$, $A_i'$, and $A_i''$ are the components of a vector with respect to three different Cartesian coordinate systems, prove that the values of $A_i''$ are the same whether (i) one transforms $A_i$ into $A_i''$ directly or (ii) one transforms $A_i$ into $A_i'$ and then $A_i'$ into $A_i''$.

FIGURE 1.6. The primed coordinate system is rotated 180° with respect to the unprimed, about the $x_3$-axis.

9. Consider the numbers
   $$\ell = \begin{pmatrix} \tfrac{12}{25} & -\tfrac{9}{25} & \tfrac{4}{5} \\ \tfrac{3}{5} & \tfrac{4}{5} & 0 \\ -\tfrac{16}{25} & \tfrac{12}{25} & \tfrac{3}{5} \end{pmatrix}.$$
   (a) By verifying (8) and (9), show that ℓ can be considered a matrix describing the transformation between Cartesian coordinate systems rotated with respect to each other.
   (b) Point P has coordinates (0, 1, -1) in the unprimed system. What are its coordinates in the primed system?
   (c) Show that $2x_1 - \cdots = 1$ and $\cdots = 5$ describe the same plane.
10. (a) Explain why the usual geometrical definition of a "vector product" of two vectors A and B yields a vector.
    (b) Explain why one does not get a vector if the magnitude AB sin θ is replaced by AB cos θ or AB sin 2θ. Substantiate your discussion with detailed calculations.

1.2 Determinants and the Permutation Symbol

As was illustrated twice in the previous section, a number of mathematical problems reduce to questions wherein determinants play an important role. Although readers will be familiar with at least the main results from the theory of determinants, it is appropriate here to sketch that theory starting with first principles. The mode of presentation, via the permutation symbol $\epsilon_{ijk}$ (to be defined below), will be new to many. Even leaving aside its value in determinant theory, the permutation symbol must become an object of familiarity, for it is used repeatedly in the remainder of the chapter. In particular, although the proofs of the main properties of determinants (Theorems 2 to 8) may be omitted, the "εδ" rule of Theorem 11 must be mastered.

A somewhat abstract approach is now commonly used in introducing determinants, particularly to mathematicians but increasingly to engineers and physicists as well. For example, the important theorem that the determinant of the product of two matrices equals the product of their determinants is arrived at by the use of elementary row and column operations, expressed with the aid of elementary matrices. Here we take a direct computational approach, as is perhaps fitting for applied mathematicians. The simplicity of notation allowed by the summation convention and the compact storage of information in the permutation symbol, however, give the direct treatment an unexpected elegance.
In the remainder of the book we shall only need results for 3 by 3 determinants and the corresponding three-subscripted alternating symbol. We shall restrict our presentation accordingly. Nevertheless, it should be clear that generalization to n by n determinants is almost immediate.

THE PERMUTATION SYMBOL

Definition 1. (a) A permutation of a list of integers is a listing of these integers in another order. Example: (312) is a permutation of (123).
(b) A transposition is a special permutation in which two adjacent integers are interchanged. Example: (132) is a transposition of (312) in which the first two integers are interchanged.
(c) A permutation is said to be even (odd) if it can be accomplished by an even (odd) number of transpositions. Example: (312) is an even permutation of (123) because it can be accomplished by two transpositions, as follows: (312) → (132) → (123). The trivial permutation (123) → (123) is even, because it can be accomplished by zero interchanges, or two [(123) → (213) → (123)].
(d) The cyclic permutations of (123) are (123), (231), and (312). Cyclic permutations of (123) can be obtained by picking three consecutive integers from Figure 1.7, proceeding in the clockwise direction. The anticyclic permutations of (123), namely (132), (213), and (321), can be obtained by proceeding in the counterclockwise direction. As the reader can easily show, all cyclic permutations are even, all anticyclic permutations are odd.

FIGURE 1.7. The cyclic and anticyclic permutations of the first three integers.

In Definition 1c we have implicitly assumed that any permutation can be accomplished by a succession of transpositions and that all such successions which accomplish a given permutation are either even or odd. For proofs that these reasonable assumptions are valid, see texts on algebra such as F. W. Ficken, Linear Transformations and Matrices (Englewood Cliffs, N.J.: Prentice-Hall, 1967), Chap. 4, Sec. E.

Definition 2. The permutation symbol or alternator $\epsilon_{ijk}$, i, j, k = 1, 2, 3, has 27 values, one for each of the 27 ordered sets of subscripts. These are defined as follows:

$$\epsilon_{ijk} = 0 \quad \text{if two of the integers } i, j, k \text{ are equal, or if all three are equal,} \qquad (1)$$
$$\epsilon_{ijk} = 1 \quad \text{if } (ijk) \text{ is an even permutation of } (123),$$
$$\epsilon_{ijk} = -1 \quad \text{if } (ijk) \text{ is an odd permutation of } (123).$$

The following relations are almost obvious. Formal proof is left to the reader:

$$\epsilon_{ijk} = -\epsilon_{jik}, \qquad \epsilon_{ijk} = -\epsilon_{ikj}, \qquad (2a, b)$$
$$\epsilon_{ijk} = \epsilon_{jki} = \epsilon_{kij}. \qquad (3)$$

DETERMINANTS

Throughout this section, A and B will denote the matrices

$$A = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix} \qquad \text{and} \qquad B = \begin{pmatrix} B_{11} & B_{12} & B_{13} \\ B_{21} & B_{22} & B_{23} \\ B_{31} & B_{32} & B_{33} \end{pmatrix}.$$

We wish to associate a certain number with a 3 by 3 matrix like A, called its determinant. We shall use the notations det (A) or |A| or sometimes det ($A_{ij}$).

Definition 3

$$|A| = \epsilon_{ijk}A_{1i}A_{2j}A_{3k}. \qquad (4)$$

To make sure that the reader has grasped the meaning of the permutation symbol, we shall write out the nonzero terms. There are six:

$$|A| = \epsilon_{123}A_{11}A_{22}A_{33} + \epsilon_{231}A_{12}A_{23}A_{31} + \epsilon_{312}A_{13}A_{21}A_{32} + \epsilon_{132}A_{11}A_{23}A_{32} + \epsilon_{213}A_{12}A_{21}A_{33} + \epsilon_{321}A_{13}A_{22}A_{31}$$
$$= A_{11}A_{22}A_{33} + A_{12}A_{23}A_{31} + A_{13}A_{21}A_{32} - A_{11}A_{23}A_{32} - A_{12}A_{21}A_{33} - A_{13}A_{22}A_{31}.$$
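To make Definition 3 concrete, here is a small numerical sketch of my own (not from the text; the matrix is an arbitrary illustration). It stores the 27 values of the alternator, evaluates (4) as a triple contraction, and compares the result with NumPy's determinant.

```python
import numpy as np

# Build the alternator: eps[i, j, k] = +1, -1, or 0 (indices are 0-based here).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even (cyclic) permutations of (1 2 3)
for i, j, k in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
    eps[i, j, k] = -1.0   # odd (anticyclic) permutations

A = np.array([[2.0, -1.0, 0.0],
              [1.0,  3.0, 4.0],
              [0.5,  0.0, 1.0]])

# Definition 3: |A| = eps_ijk A_1i A_2j A_3k (all three indices summed).
detA = np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2])
print(detA, np.isclose(detA, np.linalg.det(A)))   # True
```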
Theorem 1

$$|A| = \epsilon_{ijk}A_{i1}A_{j2}A_{k3}. \qquad (5)$$

An alternative statement of the theorem is $|A^T| = |A|$, where $A^T$ is the transpose of A.

Outline of proof. Corresponding, say, to the term $\epsilon_{231}A_{12}A_{23}A_{31}$ in (4) is the term $\epsilon_{312}A_{31}A_{12}A_{23}$ in (5). Both terms consist of a product of the same A's, but what of their sign? The sign depends on the parity (evenness or oddness) of the subscripts not in their natural order. That this is the same for both terms can be seen by noting that the number of interchanges required is the same as the number of interchanges necessary to change one order of the A's into the other. For example, in the present case, two interchanges are required:

$$A_{12}A_{23}A_{31} \to A_{12}A_{31}A_{23} \to A_{31}A_{12}A_{23}.$$

Reading from left to right, we see that these two interchanges put the second subscripts into natural order: (231) → (213) → (123). Reading from right to left, we see that two interchanges also put the first subscripts into natural order: (312) → (132) → (123). □

Theorem 2. If B is the same as A except for the interchange of a pair of rows or columns, then $|A| = -|B|$.

Proof. Suppose that B is obtained from A by the interchange of the first two rows, so that $B_{1j} = A_{2j}$, $B_{2j} = A_{1j}$, $B_{3j} = A_{3j}$. Then

$$|B| = \epsilon_{ijk}B_{1i}B_{2j}B_{3k} = \epsilon_{ijk}A_{2i}A_{1j}A_{3k} = \epsilon_{jik}A_{1i}A_{2j}A_{3k},$$

where in the last step we have interchanged the dummy subscripts i and j. (This is often useful.) The theorem follows by (2a). The same reasoning holds in the other circumstances. □

Theorem 3. If A has two identical rows or two identical columns, then |A| = 0.

Proof. Interchange the identical rows or columns. The resulting matrix is unchanged and therefore has the same determinant as A, but by Theorem 2 its determinant is also -|A|. □

Theorem 4. If B is the same as A except that a row or column of B is a multiple c of the corresponding row or column of A, then |B| = c|A|.

Proof. Left to the reader.

Theorem 5. If B is the same as A except that the jth row (column) of B equals the jth row (column) of A plus a multiple of the kth row (column) of A, k ≠ j, then |B| = |A|.

Proof. Left to the reader.

Definition 4. The minor $M_{ij}$ of an element $A_{ij}$ is the determinant of the matrix formed by deleting the ith row and jth column of A. The corresponding cofactor $C_{ij}$ is defined by $C_{ij} = (-1)^{i+j}M_{ij}$.

Theorem 6 (The Cofactor Expansion of a Determinant). In the statement of this theorem we do not use the summation convention.

$$|A| = \sum_{j=1}^{3}A_{ij}C_{ij}, \quad i = 1, 2, 3; \qquad |A| = \sum_{i=1}^{3}A_{ij}C_{ij}, \quad j = 1, 2, 3. \qquad (6a, b)$$

Outline of Proof. The idea is to write out one of the sums in (4) or (5). For example, writing out the i-summation in (5), we find that

$$|A| = A_{11}\epsilon_{1jk}A_{j2}A_{k3} + A_{21}\epsilon_{2jk}A_{j2}A_{k3} + A_{31}\epsilon_{3jk}A_{j2}A_{k3}.$$

The factors of $A_{11}$, $A_{21}$, and $A_{31}$ are readily recognized as the appropriate cofactors for an expansion along the first column. We leave the full proof to the reader. □

Theorem 7. (We again employ the summation convention.)

$$A_{pj}C_{ij} = |A|\delta_{pi}, \qquad A_{jp}C_{ji} = |A|\delta_{pi}. \qquad (7a, b)$$

Proof. For (7a), if p = i, we have a restatement of (6a). If p ≠ i, the left side of (7a) is zero, for it is the cofactor expansion of the determinant of a matrix whose pth and ith rows are identical. This follows from the fact that the elements of a given row of a matrix never appear in the cofactors of that row, since that row is always deleted. □
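The cofactor identities are also easy to check numerically. The sketch below is an illustration of my own, not part of the text (the matrix is arbitrary); it builds the cofactors directly from Definition 4 and then verifies Theorems 6 and 7.

```python
import numpy as np

A = np.array([[2.0, -1.0, 3.0],
              [0.0,  4.0, 1.0],
              [5.0,  2.0, 2.0]])

# Cofactors C_ij = (-1)^(i+j) * M_ij, the minor obtained by deleting row i and column j.
C = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

detA = np.linalg.det(A)

# Theorem 6: expansion along any row or column reproduces |A|.
print(np.allclose((A * C).sum(axis=1), detA))   # rows,    (6a)
print(np.allclose((A * C).sum(axis=0), detA))   # columns, (6b)

# Theorem 7: A_pj C_ij = |A| delta_pi  and  A_jp C_ji = |A| delta_pi.
print(np.allclose(A @ C.T, detA * np.eye(3)))   # (7a)
print(np.allclose(A.T @ C, detA * np.eye(3)))   # (7b)
```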
We multiply the ith equation of (8) by the cofactor Cy for some fixed k, and sum on i: Ca AyXy = bi Cu The result follows at once from (7b). D Theorem 9 ral Al = CipAeA Aue = im An As Aa- (10a, b) Proof. To prove (10a), we first note that unless r, s, and ¢ are all different, the right side is the determinant of a matrix with at least two identical columns and so is zero by Theorem 3. The left side is zero in this case by the definition of the alternating symbol. If (rst) = (123), then (10a) is just (5). If (rst)is an even (odd) permutation of (123), the right side is the determinant of a matrix which differs from A by an even (odd) number of column interchanges and so equals (minus) | A] by Theorem 2. So does the left side. (1 Corollary JAB] = JAIL 8} Proof. Left to the reader. Theorem 10 Ay Ag Aue Aig Aig Air| = Sinlpyel Al- ay Aip Arq Abr Proof. Suppose that at least two of the integers (ijk) arc equal, or that at Jeast two of the integers (pgr) are equal. Then the determinant on the left of (11) has at least two rows which are equal or two columns which are equal, In such a case, both sides of (1 1) equal zero. If (ijk) = (123), the determinant on the left of (11) differs from | A] because of an even (odd) number of column interchanges if (pgr) is an even (odd) permutation of (123). The validity of (11) in this case follows from Theorem 2, as it does when (ijk) is a permutation of (123). O 20 Vectors, Determinants, and Motivation for Tensors (Ch. 1 THE “ED” RULE Theorem 11 Fig lps = Fi She (12) Proof. From Theorem 10 with r = kand Ay = 34, Sip Sq Sa Sintra = |p Sy Sal. Sin aq Ome Expanding the determinant by its third row, we find that bp bu bp Su — +5, Faas Sip a 7 ip 54 3) ws |-|% Fa], |S Ful, by by 5p ‘a Interchanging the columns of the first determinant in (13) gives the desired result: Sip Sig aton = [5° S| = 5 da Sadie 0 As stated in (3), a cyclic permutation of subscripts does not change the sign of the alternator. Hence we can write (12) in various ways by performing a cyclic permutation of subscripts on cither or both alternators: Ein®ape = "ein ’per = ij€apa = Enikery = SipDpq ~ Sig Dyp- (4) We shall term this remarkably useful formula the £6 rule (pronounced “ed” rule). The €6 rule is well worth remembering, but it is better to remember it in a form that is independent of a particular choice of tettcrs for subscripts. To accomplish this, ignore the dummy subscript. [In (12), ignore k.] If “first,” “second,” “inner,” and “outer” subscripts are labeled as on the left side of Figure 1.8, then subscripts on 6 are (first) (second)-(outer) (inner), as on the right side of Figure 1.8. Readers should satisfy themselves that this scheme produces the correct formula for all cases of (14). ‘We shall now give some exampks that show the efficiency of expressing determinants by means of the alternator and the usefulness of the £6 rule. More examples will be given in the section on tensor fields and in the exercises. Sec 1.2) Determinants and the Permutation Symbol 2 — 5ig Sp Quiet tower (Second) = = === —L —— FiGure 1.8, Mnemonic diagram for the“ ed" rule. Example 1. Let a, b, and ¢ be vectors with components a,. b,, and ¢, relative to a triad of base vectors €". Express the vector and triple scalar products using the per- mutation symbol Solution bAc= KyeMbjer, (sy ar(b Ae) = baabyty 6 These formulas follow directly fom well-known connections between the left-hand sides of (15) and (16) and determinants. 
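The formulas of Example 1 and the eδ rule (12) lend themselves to direct numerical verification. The sketch below (Python/NumPy, an illustrative aside; the vectors are arbitrary choices, not taken from the text) builds the alternator, checks (15) and (16) against a library cross product, and confirms the eδ identity for all free subscripts.

```python
import numpy as np

# Alternator eps[i, j, k] from the sign of the permutation (0-based indices)
idx = np.arange(3)
i, j, k = np.meshgrid(idx, idx, idx, indexing='ij')
eps = 0.5 * (j - i) * (k - j) * (k - i)       # +1, -1, or 0, exactly as in Definition 2

a = np.array([1.0,  2.0, -1.0])
b = np.array([0.0,  3.0,  1.0])
c = np.array([2.0, -1.0,  4.0])

# (15): (b ^ c)_i = eps_ijk b_j c_k ;  (16): a . (b ^ c) = eps_ijk a_i b_j c_k
cross_bc = np.einsum('ijk,j,k->i', eps, b, c)
triple   = np.einsum('ijk,i,j,k->', eps, a, b, c)
assert np.allclose(cross_bc, np.cross(b, c))
assert np.isclose(triple, np.dot(a, np.cross(b, c)))

# The "eps-delta" rule (12): eps_ijk eps_pqk = delta_ip delta_jq - delta_iq delta_jp
delta = np.eye(3)
lhs = np.einsum('ijk,pqk->ijpq', eps, eps)
rhs = np.einsum('ip,jq->ijpq', delta, delta) - np.einsum('iq,jp->ijpq', delta, delta)
assert np.allclose(lhs, rhs)
```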
Iris often more convenient to use the notation ith component of v = [¥], 17) to write [bac], = Gabjce (18) Example 2, Prove that (a ab) Ac = (acyb — (b- ep. (yy Solution. Using (18) twice, we obtain [la Ab) A Cd = Smirk tls Pk p eo “The eB rue cannot yet be xpplied immediately because the dummy subscript appears second in f_;, but appears first in, [compare (14)]. Therefore, we transpose subscripts before applying the rule. [lar BY A Cha = —FimptintsPa cy = ~WnjSp ~ Ona Djphtbacy a Pol'y + PmteCp = —(b-ojfa], + fa ob]... Example 3. Let position x depend on the initial postion A at ume r, Find a compact formula for the time derivative of the Jacobian J = A(x,, Xp. <5). Aa. As) 2 Vectors, Determinants, and Motivation for Tensors |Ch. 1 ‘Solution. The Jacobian determinant can be written (2) so (using the rule for differentiating products) Sites "8 94,04,0A,) Here and in the following discussion, we shall noi write out two additional terms whose is obvious by symmetry. Introducing the velocity v, = ,, we have, by the chain (wewyls. (22) Example 3 provides an elegant proof of formula (13.4.11) of 1, which is essential in the study of kinematics. Example 4. Let || denote the determinant of the transformation matrix L (with ‘components /,)). Use the results of the present section to show that |L| = | for proper rotations, |L| = — 1 for improper rotations. Outline of Solution [An alternative proof of (1.14). Also sec Exercise 14]. Put Ajj = 6, in (10b) and multiply by 2,406 FrulL lett = fin leila sje al ali Using (10a) and (1.9), we deduce that fawlE? = Gnbubpbey = Swe SOULE = 1 But || is a continuous function of the elements of the transformation matrix L. Also, IL = 1 for the identity transformation (wherein ¢,, = 6,) by direct computation, and therefore for all proper rotations by continuity. Improper rotations have |L| = —1, for they can be obtained by continuous twansformation after the particular reflection wherein 10 L={0 1° of and [xj= o 0 * As required by the chain cule (compare Appendix 13.1 of 1), the vertical line indicates that the substitution x = x(A, 4) must be made after the partial differentiation Sec. 1.3] The Consistency Requirement 4 EXERCISES 1. Prove that cyclic (anticyclic) permutations are even (odd), 2. Prove (2) and (3). 3. (a) Prove Theorem | by writing out the terms. (b) Prove Theorem 4. 4 (a) Prove Theorem 5, (b) Complete the proof of Theorem 6, (c) Prove the Corollary of Theorem 9. 5. Formalize the continuity arguments that were used to complete the solution of Example 4. $6. Write the result of Exercise 1.5 in the form y, = Rj,x,. (This shows that a nine-component entity is needed to represent a finite rotation. A study ‘of how such components change when new coordinates are introduced will lead us to the concept ofa tensor.) 7, Let y= a 4b. Use the permutation symbol to show that (a) |¥|? = Ja/?|bp? sin? 8; (b) v Ove 8, Verify that the (first)(second)- domtiiomed scheme is correct for all cases of (14). 9. Use the e6 rule to determine an expansion of a a (b ac) 10. Show that [a 4 bl? = |a/?|b/? — (aby. 11. Show that (a A b)-(¢ A d) = (a ¢)(b- d) — (a- d)(b-¢). 12. Find two different expressions equivalent to (a a b) a (c ad) 13. Using the permutation symbol, work out a way to differentiate a determi- nant. 14, Use results about determinants to deduce |L|? = | from (1.8). 1.3 The Consistency Requirement We attempt here to provide motivation for the formal definition of a tensor that will be given in the next section. 
THE CONSISTENCY OF NEWTON'S SECOND LAW Versions of the same scientific law in different coordinate systems must be consistent. As an example of what we mean by this, consider Newton's second law. In a certain Cartesian coordinate system, let F, and a, denote the components of the force acting on a particle and the components of its acceleration, respectively. Let F; and ai denote corresponding quantities in another Cartesian coordinate system. In the first system, Newton's second law can be written Fy = may, ® In the second system the law must have exactly the same form, namely, F, = maj. 2 4 Vectors, Determinants, and Motivution for Tensors (Ch. | One reason for this similarity can be found in the definitions of the com- ponents of vectors like F: The components F, are the projections of F on the fixed mutually Perpendicular unit vectors e", The components F; are the projections of F on the fixed mutually perpendicular unit vectors e'” ‘These definitions show that components in the unprimed and primed systems are conceptually identical and only notationally different. If (1) is (rue, therefore, (2) must be true, because the correct notational change of adding primes to vector components has been made. Let us check directly that (2) holds if and only if(1) holds. Since the primed and unprimed components of a vector are linked by (1.6), (2) can be written 4, (3a, b) Ef Equations (3b) can be regarded as three homogencous equations for the three unknowns F,, ~ ma,. By Theorem |. the determinant of the coefficients is either +1 or —1,so that (3b) has only the trivial solution p= ld, or Fy(F, — ma) F,— ma, = 0, which is the same as (1), By reversing the above reasoning, one can show that (1) implies (2). PHY: PARTICULAR NUMERICAL VERSION To be more precise about the meaning of the consistency requirement, let us first make clear that we are not presently interested in how laws appear to different observers who arc moving in relation to cach other or to some fixed “inertial” frame of reference. Rather, we are concerned with how laws are expressed in the different coordinate systems available to a single ob- server. After selecting a coordinate system, an observer of a vector obtains a rule for assigning to each point in three-dimensional space at least one riple of real numbers, and a rule for assigning exactly one point in space to each such triple. Since these rules must be perfectly definite, there always exist transformation formulas that allow one to pass back and forth from triples obtained in one coordinate system to triples obtained in another. We distinguish between the general statement of a law and its particular numerical version. The general statement can be expressed independently of any coordinate system (F = ma) while the particular version relates numbers obtained using a particular coordinate system (F, = ma,). In general, different versions of a given law, i.c., relations between different sets of numbers, are obtained if different coordinate systems are used (F, = ma,, F; = maj). These different versions are found from the general See. 1.3) The Consistency Requirement 7 statement of a law by applying the coordinate rules appropriate to different coordinate systems. 
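The consistency argument just given for Newton's second law can also be checked numerically. In the sketch below (Python/NumPy, an illustration rather than anything from the text; the rotation angle, mass, and acceleration are arbitrary choices), the primed components are produced by the transformation law (1.6), and F' = ma' holds precisely because F = ma does.

```python
import numpy as np

th = 0.3                                        # arbitrary rotation angle about the x3-axis
L = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])  # transformation matrix l_ij of a proper rotation
assert np.isclose(np.linalg.det(L), 1.0)        # |L| = 1, as in Example 4 of Section 1.2

m = 2.5
a = np.array([0.4, -1.1, 2.0])                  # acceleration components, unprimed system
F = m * a                                       # Newton's second law in the unprimed system

F_p = L @ F                                     # F'_i = l_ij F_j, the transformation law (1.6)
a_p = L @ a
assert np.allclose(F_p, m * a_p)                # the law keeps the same form: F'_i = m a'_i
```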
Consistency* demands that the same result be obtained if a given version of the law is obtained directly from the general statement (F, = ma,) or is obtained indirectly by transformation of another version LE, = ma, follows from F; = ma; and the appropriate transformation law (.6)}, In the previous paragraph, we cited F = ma as an example of a general coordinate-independent statement of a physical law. This example used vectors, entities that have a coordinate-free intepretation which involves length, direction, and addition by means of the parallelogram law (Section 1.1) Combinations of vectors, sometimes involving derivative operators, also have coordinate-free interpretations and therefore can also be used in the general statement of a physical law. To mention the simplest example, the scalar product can be described in a coordinate-free manner, as the product of the lengths of the two vectors involved times the cosine of the angle between them.f But we cannot restrict ourselves to combinations of vectors in formulating physical laws. More complicated entities called “tensors” are necessary. We tur now to a discussion that will motivate the detailed definition of a tensor. GUESSING THE TENSOR TRANSFORMATION LAW FROM THE CONSISTENCY REQUIREMENT A fundamental result in continuum mechanics is that the stress vector (x, ( 2, is formed by setting two of its indices equal and performing the resulting sum. An example of how contraction leads to a new tensor whose order is 2 less than the original tensor is provided by the following theorem. 38 Cartesian Tenors \Ch. 2 Theorem 4. Let T be a fourth order tensor, where CT ijnn = Tin Define U by Un = Wa = Ti: as) Then U is a second order tensor. Proof. The rule for obtaining the components of U is given by (15). Since T is a fourth order tensor, Ty 0h ilicla The $0, setting m = j, Tain = Cig trmT But (16) as in (1.1.9), 50 Tiga ™ Cigna T pers By (15), the above equation is equivalent to Un = CiplraU yes which is the required transformation law. 0 Definition 7. Let A be an nth order tensor and B be an mth order tensor, where TA = Anes (Bin Then the tensor product AB is defined by (AB), 0.5 = Ae Bueno’ a7 Note that the right side of (17) is the product of two real numbers. Example 4. If v and w are first order tensors with components »; and w,, then Low], = Hy. “The ijth component of vw, the product of the ith component of v with the jth component of, ean be found in the ith row and jth column of the following array: by, OW Uw (em) =| vr, er v2 ms (1g) Daw, UyW2 UaWs, NOTE, Two vectors have a scalar product, a vector product, and a tensor product. These products are, respectively, a scalar, a vector, and a special second order tensor called a dyad (see Exercise 24). I] Tensor Algebra 9 Theorem 5. Let W = AB, where A is an nth order tensor and Bis an mth order tensor. Then W is an (n + rath order tensor. Proof for n= \, m= 2. Suppose that A, and By, and Ay, and By, are the components of A and B in two different coordinate systems. By definition of Wa = AB, Since A and B are tensors, AiBye = Com Ain ipl sg Big Therefore, from (19), we obtain the desired transformation law, Win = Cont inl ig Winn: Proof for the gencral case. Left to the reader. 0 ay Definition 8. Let A and B be second order tensors with components A,, and Bp,. Then the contraction product 4 - B is defined by [A- Bl, = 4,,By- The contraction product of tensors of arbitrary order is defined similarly, by summing over adjacent indices. Theorem 6. 
Let A be an mth order tensor and B be an nth order tensor, Then 4 -B is a tensor of order m + n — 2. Proof. A- Bis a contraction of the (mm + n)th order tensor AB. 0 Easily proved results (Exercise 16), and useful ones, are (ree = Ty [el Th = Ty (20a, b) TM = T,. (20c) These equations assign interpretations to each individual tensor component. In Section 1.3 we showed in essence that if t = n- T holds for arbitrary t and n, then T must satisfy the tensor transformation law. It is useful here to restate this result in a general context. To make this section's presentation more complete, we shall repeat the main points of the proof. ‘Theorem 7. Suppose that in any Cartesian coordinate system, there is a rule associating a unique ordered set of nine quantities with T. If, for an arbitrary vector a, a- T is a vector, then T is a second order tensor.* * ‘This theorem brings out a second order tensor’s characterization as a mapping of a vector ‘space into itself (compare Section 1.4). 40 Cartesian Tensors (Ch. 2 Proof. Let a-T = v. We must show the following. Suppose that aT =v, and aT =; (21a, b) and suppose further that a and v satisfy the vector transformation law, so a= ad and 0, =F pb, (22a, b) Then it must be true that Ta = Cin’ eT in! ia Nis But upon substituting (21a) and (2tb) into (22a) and then using (22b), we find that FG Ty = Cpe = Cpt To employ (16) for the purpose of moving all /’s to one side of the equation, we multiply by /,,: Cla Tis = Cpl spi Tip = Sup or AC pf aT — Te Since aj is arbitrary, an argument used in Section 1.3 shows that the terms within the square brackets must vanish, which is equivalent to the desired conclusion. 0 REMARK. If a and b are real numbers, 6 # 0, their quotient a/b is defined by ce pea = be By analogy, v = a- T could be interpreted as expressing T as the quotient of v and a. Theorem 7 is called a quotient rule because it can be regarded as asserting that the quotient of two first order tensors is a second order tensor. While suggestive, the “quotient” aspect of tensor equations need not be taken too seriously. The following is another quotient rule of a rather general character. Theorem 8. Suppose that in any coordinate system, there is a rule associ- ating a unique set of 3" quantities with T. Suppose that for an arbitrary mth order tensor A, TA is a tensor® of order m + n. Then T is an nth order tensor. The proof of this theorem and of other quotient rules will be given as exercises. © Since T is not known 10 be a tensor, strictly speaking 7'A ts not defined. tn any coordinate system, by TA we mean the analog of (17) See. 2.1 Tensor Algebra at SPECIAL RESULTS FOR SECOND ORDER TENSORS Definition 9. Let A be a second order tensor with components 4,,. Then the transpose of A is denoted by AT and is defined by (A™1y = Au 23 Theorem9. If A isa second order tensor, then A" is a second order tensor. Proof. Left to the reader. Definition 10. If A=A™ or Ay= Ay then A is said to be symmetric. If A=-A™ of Ay= Ay, then A is said to be antisymmetric (or skew-symmetric). For example, in a nonpolar material, the stress tensor is symmetric. A second order tensor may be neither symmetric nor antisymmetric, but it can be written as the sum of a symmetric and an antisymmetric tensor, as is shown in the following theorem. Theorem 10. Let T be a second order tensor. Then one can write T=S+A, (24) where S isa symmetric second order tensor and 4 is an antisymmetric second order tensor. This decomposition is unique. 
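Numerically the decomposition is immediate. The following sketch (Python/NumPy; an illustrative aside with an arbitrary array, not from the text) forms the two parts and confirms their properties before the formal argument is outlined.

```python
import numpy as np

T = np.array([[1.0, 4.0, -2.0],
              [0.0, 3.0,  5.0],
              [7.0, 1.0, -1.0]])   # components of an arbitrary second order tensor

S = 0.5 * (T + T.T)                # symmetric part
A = 0.5 * (T - T.T)                # antisymmetric part
assert np.allclose(S, S.T) and np.allclose(A, -A.T)
assert np.allclose(T, S + A)       # the decomposition (24): T = S + A
```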
Outline of proof THT + T+ HT-T, (25) On the right side of (25), the first term is symmetric and the second is anti- symmetric. The uniqueness proof is requested in Exercise 9. 0 Matrices are an aid in performing certain manipulations with the com- ponents of second order tensors. For example, in a given coordinate system [A], can be written in the ith row and jth column ofa 3 by 3 matrix, We shall denote this matrix by (4). Thus Ai An Avs (A)=[ Aor Azz Ars As As Ass Note that A has a meaning independent of any coordinate system, but the matrix (A) is defined with reference to a particular coordinate system. In (18) we displayed the matrix corresponding to the second order tensor vw. ae Cartesian Tensors (Ch. 2 ‘Theorem L1. For second order tensors A and B, (A: B) = (A)(B). (26) In words, the matrix of the contraction product of A and B is the product of the A and B matrices. Proof. The component in the pth row and qth column of the matrix (A+ B)is A,:Byy. A,B, is found by taking the “dot product” of the pth row of (A) and the gth column of (B). By definition of matrix multiplication, this is the element in the pth row and qth column of the matrix product (4)(B). 0 ‘Theorem 12, The vector transformation laws U1 = Fimtins U5 = Oa jo can be written, using matrix notation, as u eu iz Ga\ (i tap=[4or 422 4s ][r Pai fa1 Or aah \es Ay Oa 3) WL er P= (er Pe er far fas} var far fas and respectively. AS before, let L be the transformation matrix, and let (¥) be the column vector with components (Dj, v2, 03). If a prime indicates use of a primed coordinate system and if superscript (Tr) denotes matrix transpose, then we can write W= Loy, wT = (TL. (27a) Proof. Left to the reader. Theorem 13. ‘The transformation law for second order tensors, Ay Cw Aaa can be written (A) = L(AyL™, (27b) Proof. Left to the reader. REMARK. Since tensor equations are valid if they are demonstrated in any single set of components, there is a theory for second order tensors that completely parallels the corresponding matrix theory. We have seen this in Theorem 10; the decomposition into the sum of symmetric and antisymmetric See. 2.4) Tensor Algebra 9 tensors is the same as the corresponding matrix decomposition. To give another example, one can term a second order tensor nonsingular if any of its component matrices has a nonzero determinant. If R is a nonsingular tensor. then [Exercise 25(c)] there exists an inverse tensor, which we denote by R™ ', with the property ReR''=R''+R=1 (28) ISOTROPIC TENSORS There are certain materials whose structure singles out one or more directions. Examples are the direction of the grain in a piece of wood, and various symmetry axes in a crystal lattice. Other materials, like water, appear to have no internal preferred direction. The constitutive equations for mater ials of the latter kind must be expressed in terms of special tensors whose components are always the same. We shall now study this class of” isotropic” tensors. Definition 11. Tensors, such as I and E, whose components are given by the same set of numbers in all coordinate systems, are called isotropic It follows immediately from the definition of zero order tensors that they are isotropic. Arc there any isotropic tensors of order |? There is at least one, the zero tensor of order |. Excluding this case (which we shall term trivial), are there any others? There are not. as we shall now show. Theorem 14, There are no nontrivial isotropic tensors of order |. Proof. 
(We proceed ina Way that makes up in conceptual simplicity what it lacks in elegance. We rule out the possibility that there is a nontrivial vector which has the same components in all coordinate systems by showing that there is no such vector even if only three special coordinate systems are considered.) Suppose that v is an isotropic vector and that its components are 0, relative to a certain basis e"”, Consider another set of basis vectors satisfying et = et, or = ge = el) (29) It is clear from Figure 2.1 that the components vj satisfy Dy = 02, 12 = Py, y= (30) This cyclic change of subscripts, symbolized by +2 243, Fo BY) can also be verified by calculating the appropriate /,, and applying the vector transformation law (1.1.6). (The calculations are almost the same as those requested in Exercise 1.3.7.) 44 Cartesian Tensors Ch. 2 Piatt a gh Gi! FiGuRE 2.1. A rotation of axes equivalent to a cyclic exchange of subscripts. Since v is isotropic, =, the 0, th = v5. 2) Combining (30) and (32), we find that Vp = 02 = 02 = Oy = =O G3 Consider a third set of basis vectors e®” satisfying OF ag 2 gt, = gl), G4) From Figure 2.2, r= Py but, by isotropy, vy =v. Thus vp, = —p;. This is consistent with (33) if and only if», = v, — vs ~ 0. Consequently, the only isotropic vector is the zero vector. 0 Theorem 15, Any isotropic tensor of order 2 can be written Al for some scalar A. Proof. Suppose that T is an isotropic tensor of second order and suppose that its components are T;, relative to a certain basis e, As in the proof of the previous theorem, consider a new set of basis vectors satisfying (29). By (31), Tu = Ta T= Tas, Ts = Thy 5a) Ta-Tas, Tis=Tay Ti = Tas (5b) Ta = Ty T= Ts, Tis = Ta 5c) See. 24] Tensor Alyebra % Figure 2.2. The “double-primed™ system is rotated 90° abour the €'-axis. Since T is isotropic, Th.= From (35a) we deduce that Ta = Tar = Ts for some number 2. Similarly, we can deduce from (35b) and (35c) that Ta=Ta= Te Ta = Ta = Ts G8) Now consider the components of T relative to a third set of basis vectors €"" satisfying (34). Applying the transformation law, one finds, as in Exercise 1.38, that his Ti2 = Thre Tay = Ths. Ge G7) Ty = Ty But, by isotropy, T= Tie Tia Ta Thus T= -Ter Ty = Ts 39) For (38) and (39) to be consistent, we must have Tia = Tas = Ty = Tan = Tua = Ths =O. (40) The equations of (37) and (40) show that if T has the same set of components for the three sets of axes related by (29) and (34), then T necessarily has the a6 Cartesian Tensors Ch. 2 form ML. But AI is a tensor by Example 1 and part (c) of Theorem 1, and Mf certainly has the same set of components for all axes. 0) Theorem 16. Any isotropic tensor of order 3 can be written AE for some scalar A. (If reflections are allowed, £ is not a tensor, so the only isotropic tensor of order 3 is 0.) Proof. The argument is entirely analogous to that required for Theorem 15. Details are left to the reader. ‘Theorem 17. If T is an isotropic tensor of order 4, then [pars = A8pgFes + HO pda + Sp de) + M5 p-5qs — 5psdye) «ay for some scalars J, p, and x: Proof. Again the same ideas suffice, but the details are now somewhat lengthy. The reader is asked to supply a proof in Exercise 19. THE VECTOR ASSOCIATED WITH AN ANTISYMMETRIC TENSOR To conclude this section, we shall present some formulas involving the alternating tensor that will be seen to be useful in our discussion of kine- matics. 
To motivate our discussion, we start with the fact (see Exercise 21) that ifa rigid material is in uniform rotation about the origin, then the velocity Vat the point x is given by v=max, (42) where wis the angular velocity vector. In terms of the alternator, (42) can be written i (43) Since (43) holds for arbitrary x, the quotient rule implies that the quantities 6,» are the components of a second order tensor. Not surprisingly, it proves useful to regard v as given by the contraction product of this tensor and x. As will be seen in the formal development that starts in the next paragraph, it is conventional to deal with a tensor whose components are —€,; 10; With a vector @ we shall associate an antisymmetric tensor 2 defined by Qyg = py, OF R= Eve. (44) Thus 0 a, -o (= [-0, 0 a}. (45) o, -o, 0 Sec. 2.1] Tensor Algebra ar An explicit formula for in terms of © can be obtained by multiplying both sides of (44) by &;,, and using the £6 rule: igang = ipa lpr Dr ~ Egip Engr Dy + = Cede — Fed Oe = 30, — ,. ‘That is, 0 = Sing Dp, (46) (44) when @ is antisymmetric [Exercise 20(b)]- en Similarly, (46) im Sometimes (46) is w o=$9,. 47 What we have shown can be summed up as follows. ‘Theorem 18. Corresponding to any antisymmetric tensor @ there exists a vector w [given by (46)] such that 2 = E-w. Here w is called the vector of the antisymmetric tensor 2. Corresponding to any vector @, there exists an antisymmetric tensor Q [given by (44)] such that (46) holds. We motivated our present discussion with some remarks concerning rigid body rotation. In this connection we can now draw the following uscful conclusion. Theorem 19. If a velocity vector v satisfies v = x- 2 = —Q-x for some antisymmetric tensor Q that is independent of the position vector x, then the motion is a uniform rigid vee rotation about the origin with angular velocity ew given by 0; = Hing pe Proof. From Theorem 18, ite @ is defined as above, then = X4Q%yp = Kye gyeOe = Spry ldeXy = [A Xp. Comparing with na, we sce thal the velocity is that of a rigid body rotation with angular velocity @. 0 REMARK. Later [see (4.1.33)] we use a result analogous to Theorem 19 to show, in essence, that infinitesimal displacements of the formu = &2- x are rotations. EXERCISES 1. Prove Theorem I, parts (b) and (c). 2 Prove (9). 3. Verify (10), (11), (12), (13), and (14), 4. Prove from first principles that if T is a second order tensor, then Tis a scalar. 5. Prove Theorem 5: (a) form = m = 1;(b) for n = 1,m = 3;(c)in general. 6, Prove Theorem 8: (a) when m = 1, n = 2; (b) in general. 7. Prove Theorem9. 48 I 1 1 I u Cartesian Tensors (Ch. 2 8. Let T be a second order tensor with components 7,, and 7%, relative to two different bases, Prove that (a) Ty = Ty implies that T,, = T, (b) Tj = —T, implies that Tj, = —T},. Show that the decomposition (24) is unique. (0. Prove (a) Theorem 12; (b) Theorem 13. 1. Suppose that for any Cartesian coordinate system there is a rule for associating T with 81 components Zq,. Suppose that for arbitrary vectors v and w, v- T- wis.a second order tensor. Prove that Tis a fourth order tensor. 2, Ina certain Cartesian coordinate system, the components of the vectors v and ware (I, 2, 3) and (4, 5, 6). Find the components of vw. 3. Give a geometric interpretation of Example 3 in terms of the triple sealar product (see Example 1.2.1). In particular, state why reflections must be prohibited. 4, In any Cartesian coordinate system there is a rule for associating T components 7, such that 7; = Ty. For arbitrary vectors scalar. 
Prove that T is a symmetric tensor. 5. Let L be an entity whose components with respect to any unit vectors care given by (LJ, = ue”. Hercu!®,u), andu® arean orthonormal set of vectors that are regarded as fixed, while components of L are computed with respect to various coordinate systems. Is L a second order tensor? 2 16. (a) Demonstrate (20). (b) Specialize (20b) to the case where T is the stress tensor. Show that the original interpretation (Section 14.2 of I) of the components of T is recovered. 17, Prove Theorem 16. 18. What theorems and definitions show that the right side of (41) is an isotropic tensor? 19. Prove Theorem 17 in the following manner (Jeffreys and Jeffreys, 1962, Sec. 3.031). {a) By considering special rotations, as in the text, show that Mass = 4z222 = Ussay = Cs Yinaz = 42200 2233 = Mas22 = Magna = May: yaaa = Masia Us Maize = Uran2 = Ma232 = C3, Miza1 = Mainz = M2332 = 43223 = Mania = Wia31 = Cay so that one can write CT Voges = C2 8pera + €35p-Des + C4 8ys5qr F (ey — €2 = C3 — C4) gre where jg, = 1 ifall subscripts are equal and equals zero otherwise. Sec. 2.2) The Eigenvalue Problem 0 (b) From the fact that T,,,.x)x,X,x, must be a scalar if x is a vector with components x, (why?), deduce that ¢, — c, — c) — cy = 0. 20. (a) Verify (45). (b) Show that (46) implies (44) when 2 is antisymmetric. 21. Deduce (42) from Exercise 1.1.6(a). 422. Let Tj; and &,,, be components of second order tensors. Suppose that for arbitrary fim T; (a) Show that if Tj is symmetric, then Cyan = Cyam- (b) Suppose that the c,,, are the components of a symmetric tensor and that the Ca, are the components of an isotropic tensor. Show that it = Comm am Ty = Qty + eyd,, for certain coefficients pt and 2. 23, True or false: If ef” gives the components of e" in a primed coordinate system, then (20c) implies that Ty = 9 Tee? - Explain your answer. 24. The tensor product of two vectors is called a dyad. A sum of such products is called a dyadic {{a) Write the dyad ab as the sum of symmetric and asymmetric terms. t(b) Let e be a set of mutually orthogonal unit vectors. Show that T= ele (c) Try to state and prove some little theorems involving dyads. (@) Show by construction that every second order tensor is a dyadic. 28. (a) If U is a nonsingular symmetric tensor, show that U~! is sym- metric. (b) Show that (A- BY’ = BT-A7. (c) Prove that a nonsingular second order tensor R possesses an inverse so that (28) holds. (d) Show that (C+ D)~' = D-*+ C7". 2.2 The Eigenvalue Problem In this section we shall be concerned primarily with the problem of deter- mining the eigenvalues associated with a symmetric second order tensor.* We shall thereby discover a coordinate system in which the components of such a tensor assume an especially simple form. The eigenvalue problem turns ‘out to be one of the most fundamental in all mathematics, but it is at first not + Many readers will be familiar with the eigenvalue problem for square matrices and/or linear transformations. They will find little new in this section. so Cartesian Tensors (Ch. 2 clear why one ought to consider it at all. We can, however, find some motiva- tion in the sort of investigation that moved Cauchy to begin the study of eigenvalue problems, namely a deeper study of stress. The relation between the stress vector ¢ and the unit normal nis typically that shown in Figure 2.3(a). 
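A small numerical illustration of this point (Python/NumPy; the tensor components and the normal are arbitrary choices, not from the text): for a typical stress tensor the vector n · T is not parallel to n, that is, it has a nonzero shear component along the surface.

```python
import numpy as np

T = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 4.0]])    # sample (symmetric) stress tensor components
n = np.array([1.0, 0.0, 0.0])      # a typical unit normal

t = n @ T                          # stress vector: t_j = n_i T_ij
shear = t - (t @ n) * n            # part of t tangential to the surface element
assert not np.allclose(shear, 0.0) # generally nonzero: t does not point along n
```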
(We consider a particular point x and time t throughout, so we do not explicitly indicate the dependence of t on x and t.) It is not unreasonable to anticipate that there are directions n such that the corresponding stress vectors t(n) point in the n-direction, as in Figure 2.4(a). In such a case t(n) = λn for some scalar λ. Because t(−n) = −t(n), a small "flake" whose relatively large flat surfaces are perpendicular to n is subjected to a particularly simple set of normal stresses [Figure 2.4(b)]. The stresses simply act to extend the flake, or to compress it if λ is negative. Compare Figure 2.4(b) with Figure 2.3(b); in the latter there is a shear stress component of t perpendicular to n. This can perhaps be seen more clearly in Figure 2.3(c), in which the stresses have been resolved into components that are respectively normal and tangential to the flake.

[Figure 2.3. (a) Typical relationship between the unit exterior normal n and the stress vector t(n). (b) The stress vectors on opposite faces of a small thin "flake" are equal in magnitude and point in opposite directions. (c) Stress vectors (dashed arrows) are resolved into components of shear (along the surface of the flake) and extension (normal to the flake).]

Pure extension is a relatively simple state of stress. It is worth investigating whether such states generally do exist. We thus ask ourselves the question: "Are there any directions n such that t(n) = λn for some scalar λ?" Using the relation t(n) = n · T, we can pose the question in terms of the stress tensor T: Given T, find n such that n · T = λn. Indeed, if we can find any vector v satisfying

$\mathbf{v} \cdot \mathbf{T} = \lambda \mathbf{v},$   (1)

we shall have virtually achieved our objective. We merely need to divide both sides of (1) by |v| to see that v/|v| is the desired unit vector n.

[Figure 2.4. (a) The surface element with direction n suffers a purely normal stress. (b) A flake that suffers purely normal stress will be subject only to extension, not to shear.]

Let us select a particular set of base vectors $\mathbf{e}^{(i)}$. In terms of the associated components of v and T, (1) becomes

$v_i T_{ij} = \lambda v_j$, or $v_i T_{ij} = \lambda v_i \delta_{ij}$.   (2a, b)

Equations (1) and (2) are alternative statements of the mathematical problem that we wish to discuss. Note that throughout this section by "tensor" we shall understand "second order tensor." Writing (2b) as

$v_i (T_{ij} - \lambda \delta_{ij}) = 0,$   (3)

we see that it can be considered as a set of three equations for the three unknowns $v_i$. These equations clearly have the trivial solution $v_i = 0$. This solution is unique unless the determinant of coefficients vanishes. For a nontrivial solution to exist, therefore, it is necessary that the determinant of coefficients be zero, i.e.,

$$\det (T_{ij} - \lambda \delta_{ij}) = \begin{vmatrix} T_{11} - \lambda & T_{12} & T_{13} \\ T_{21} & T_{22} - \lambda & T_{23} \\ T_{31} & T_{32} & T_{33} - \lambda \end{vmatrix} = 0.$$   (4)

Expanding the determinant in (4), we find that

$-\lambda^3 + I_1 \lambda^2 - I_2 \lambda + I_3 = 0,$   (5)

where

$I_1 = T_{ii},$   (6a)

$$I_2 = \begin{vmatrix} T_{22} & T_{23} \\ T_{32} & T_{33} \end{vmatrix} + \begin{vmatrix} T_{11} & T_{13} \\ T_{31} & T_{33} \end{vmatrix} + \begin{vmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{vmatrix},$$   (6b)

$I_3 = \det (T_{ij}).$   (6c)

Equation (5) is called the characteristic equation. Its roots (the zeros of the characteristic polynomial) are the eigenvalues or characteristic values of T. We shall denote them by $\lambda^{(i)}$, $i = 1, 2, 3$. The eigenvalues satisfy (1), a coordinate-free equation, so that they must have the same value no matter what coordinate system is used. The coefficients in (5) must therefore be scalars. This can be verified directly. For example, consider the first principal invariant $I_1$, also termed the trace and denoted by tr T.
This is the contraction of a second order tensor, and hence is a scalar (Exercise 1.4).

The coefficients of the characteristic equation can be economically characterized. To do this, we need two definitions. The principal diagonal of $(T_{ij})$ consists of $T_{11}$, $T_{22}$, and $T_{33}$. An mth order principal minor of a third order matrix is an m by m determinant formed by deleting the row and column of 3 − m terms in the determinant's principal diagonal. Thus the first, second, and third order principal minors of the matrix

$$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & -9 \end{pmatrix}$$

are the determinants

$$|1|, \quad |5|, \quad |-9|; \qquad \begin{vmatrix} 5 & 6 \\ 8 & -9 \end{vmatrix}, \quad \begin{vmatrix} 1 & 3 \\ 7 & -9 \end{vmatrix}, \quad \begin{vmatrix} 1 & 2 \\ 4 & 5 \end{vmatrix}; \qquad \text{and} \qquad \begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & -9 \end{vmatrix},$$

respectively.

Theorem 1. The principal invariants $I_1$, $I_2$, and $I_3$ equal, respectively, the sum of all different principal minors of orders 1, 2, and 3.

Proof. We merely verify by inspection that the determinants in (6) are principal minors of the required order. □

Corresponding to each eigenvalue $\lambda^{(p)}$, we can determine a nonzero eigenvector or characteristic vector v that satisfies (2). We denote the eigenvector corresponding to $\lambda^{(p)}$ by $\mathbf{v}^{(p)}$. With this notation, (2) reads

$v_i^{(p)} T_{ij} = \lambda^{(p)} v_j^{(p)}$ or $v_i^{(p)} T_{ij} = \lambda^{(p)} v_i^{(p)} \delta_{ij}$ (no sum on p).   (7a, b)

REMARK. Because our discussion was motivated by means of the stress tensor, we began with the eigenvalue problem (1),

$\mathbf{v} \cdot \mathbf{T} = \lambda \mathbf{v},$

but frequently there arises the related problem

$\mathbf{T} \cdot \mathbf{w} = \lambda \mathbf{w}.$

It is easily seen that the characteristic equation corresponding to the latter problem is the same as (4), so one can speak of the eigenvalues of T. But the equations that give the components of the eigenvectors are not the same in the two cases, so one must speak of left and right eigenvectors of T. In the important special case wherein T is symmetric, one refers to the eigenvectors, for then the right and left eigenvectors are identical.

Example 1. Find the eigenvalues and eigenvectors for the symmetric second order tensor T whose components in a certain coordinate system are

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & \tfrac{11}{4} & -\tfrac{\sqrt{3}}{4} \\ 0 & -\tfrac{\sqrt{3}}{4} & \tfrac{9}{4} \end{pmatrix}.$$

Here the characteristic equation (4) becomes

$$\begin{vmatrix} 1 - \lambda & 0 & 0 \\ 0 & \tfrac{11}{4} - \lambda & -\tfrac{\sqrt{3}}{4} \\ 0 & -\tfrac{\sqrt{3}}{4} & \tfrac{9}{4} - \lambda \end{vmatrix} = 0, \qquad \text{or} \qquad (1 - \lambda)(\lambda - 2)(\lambda - 3) = 0,$$   (8)

so $\lambda^{(1)} = 1$, $\lambda^{(2)} = 2$, $\lambda^{(3)} = 3$. (Each individual can choose for himself which eigenvalue he designates $\lambda^{(1)}$, which $\lambda^{(2)}$, and which $\lambda^{(3)}$.)

When $\lambda = \lambda^{(1)} = 1$, the three equations of (7) become

$0 \cdot v_1^{(1)} = 0, \qquad \tfrac{7}{4} v_2^{(1)} - \tfrac{\sqrt{3}}{4} v_3^{(1)} = 0, \qquad -\tfrac{\sqrt{3}}{4} v_2^{(1)} + \tfrac{5}{4} v_3^{(1)} = 0.$   (9a, b, c)

Equations (9b) and (9c) have the unique solution $v_2^{(1)} = v_3^{(1)} = 0$, so

$\mathbf{v}^{(1)} = a\,\mathbf{e}^{(1)}$

for any nonzero constant a.

When $\lambda = \lambda^{(2)} = 2$, (7) becomes

$-v_1^{(2)} = 0, \qquad \tfrac{3}{4} v_2^{(2)} - \tfrac{\sqrt{3}}{4} v_3^{(2)} = 0, \qquad -\tfrac{\sqrt{3}}{4} v_2^{(2)} + \tfrac{1}{4} v_3^{(2)} = 0.$   (10a, b, c)

[The easiest way to obtain the matrix of coefficients in (10) is to make the substitution $\lambda = \lambda^{(2)}$ in the matrix whose determinant appears in (8).] Multiplication of (10c) by $-\sqrt{3}$ gives (10b), so (10b) and (10c) are dependent. We find that

$v_1^{(2)} = 0, \quad v_2^{(2)} = b, \quad v_3^{(2)} = \sqrt{3}\, b,$

for any nonzero constant b. Thus

$\mathbf{v}^{(2)} = b\,\mathbf{e}^{(2)} + \sqrt{3}\, b\,\mathbf{e}^{(3)}.$

A similar calculation with $\lambda = \lambda^{(3)} = 3$ yields

$\mathbf{v}^{(3)} = -\sqrt{3}\, c\,\mathbf{e}^{(2)} + c\,\mathbf{e}^{(3)}$

for any nonzero constant c. (By definition, eigenvectors are nonzero, so c = 0 is unacceptable.) If we want our eigenvectors to be of unit length, we can take

$\mathbf{v}^{(1)} = \mathbf{e}^{(1)}, \qquad \mathbf{v}^{(2)} = \tfrac{1}{2}\bigl(\mathbf{e}^{(2)} + \sqrt{3}\,\mathbf{e}^{(3)}\bigr), \qquad \mathbf{v}^{(3)} = \tfrac{1}{2}\bigl(-\sqrt{3}\,\mathbf{e}^{(2)} + \mathbf{e}^{(3)}\bigr).$   (11)

Observe that the unit vectors of (11) are mutually perpendicular: $\mathbf{v}^{(i)} \cdot \mathbf{v}^{(j)} = \delta_{ij}$. This is not an accident, but it is connected with the symmetric character of the particular matrix whose eigenvalues and eigenvectors we just determined. For comparison, here are the results of a second example.

Example 2. Consider the second order tensor whose components in a certain coordinate system are

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 4 \\ 0 & -4 & 0 \end{pmatrix}.$$

Determine a set of eigenvectors and discuss their orthogonality.

Partial solution. As the reader is asked to show [Exercise 3(a)], the characteristic equation is

$(1 - \lambda)(\lambda^2 + 16) = 0,$

so the eigenvalues are

$\lambda^{(1)} = 1, \qquad \lambda^{(2)} = 4i, \qquad \lambda^{(3)} = -4i,$

with corresponding right eigenvectors

$\mathbf{v}^{(1)} = a\,\mathbf{e}^{(1)}, \qquad \mathbf{v}^{(2)} = b\bigl(\mathbf{e}^{(2)} + i\,\mathbf{e}^{(3)}\bigr), \qquad \mathbf{v}^{(3)} = c\bigl(\mathbf{e}^{(2)} - i\,\mathbf{e}^{(3)}\bigr),$

where a, b, and c are any nonzero constants. Note that there are complex eigenvalues and that the corresponding eigenvectors have complex components. When complex components are possible, one usually takes the scalar product of vectors to be

$\mathbf{v} \cdot \mathbf{w} = v_i \bar{w}_i,$   (12)

where an overbar denotes complex conjugate. This definition preserves the property of nonnegative length: $\mathbf{v} \cdot \mathbf{v} = v_i \bar{v}_i = \sum_i |v_i|^2 \geq 0.$
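The calculations of this section are easy to reproduce numerically. The sketch below (Python/NumPy; the matrices are arbitrary stand-ins rather than those of Examples 1 and 2) computes the principal invariants of (6), checks that every eigenvalue is a root of the characteristic equation (5), and contrasts a symmetric tensor (real eigenvalues, mutually orthogonal eigenvectors) with a nonsymmetric one, for which a complex-conjugate pair appears.

```python
import numpy as np

def invariants(T):
    """Principal invariants I1, I2, I3 of a 3 x 3 array (sums of principal minors, eq. (6))."""
    I1 = np.trace(T)
    I2 = sum(np.linalg.det(np.delete(np.delete(T, i, 0), i, 1)) for i in range(3))
    I3 = np.linalg.det(T)
    return I1, I2, I3

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])                 # symmetric sample tensor
lam, V = np.linalg.eig(S)
I1, I2, I3 = invariants(S)
assert np.allclose(-lam**3 + I1 * lam**2 - I2 * lam + I3, 0.0)  # each root satisfies (5)
assert np.allclose(V.T @ V, np.eye(3))          # eigenvectors mutually perpendicular

N = np.array([[1.0,  0.0, 0.0],
              [0.0,  0.0, 2.0],
              [0.0, -2.0, 0.0]])                # not symmetric
assert np.iscomplexobj(np.linalg.eigvals(N))    # a complex-conjugate pair of eigenvalues
```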
