MATHEMATICAL PHYSICS

EUGENE BUTKOV
St. John's University, New York

ADDISON-WESLEY PUBLISHING COMPANY
Reading, Massachusetts · Menlo Park, California · London · Sydney · Manila

FIRST PRINTING 1973

A complete and unabridged reprint of the original American textbook, this World Student Series edition may be sold only in those countries to which it is consigned by Addison-Wesley or its authorized trade distributors. It may not be re-exported from the country to which it has been consigned, and it may not be sold in the United States of America or its possessions.

Copyright © 1968 by Addison-Wesley Publishing Company, Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher. Original edition published in the United States of America. Published simultaneously in Canada. Philippines copyright 1968.

Library of Congress Catalog Card Number: 68-11391

PREFACE

During the past decade we have witnessed a remarkable increase in the number of students seeking higher education as well as the development of many new colleges and universities. The inevitable nonuniformity of conditions present in different institutions necessitates considerable variety in purpose, general approach, and the level of instruction in any given discipline. This has naturally contributed to the proliferation of texts on almost any topic, and the subject of mathematical physics is no exception. There are a number of texts in this field, and some of them are undoubtedly of outstanding quality. Nevertheless, many teachers often feel that none of the existing texts is properly suited, for one reason or another, for their particular courses. More important, students sometimes complain that they have difficulties studying the subject from texts of unquestionable merit. This is not as surprising as it sounds: Some texts have an encyclopedic character, with the material arranged in a different order from the way it is usually taught; others become too much involved in complex mathematical analysis, preempting the available space from practical examples; still others cover a very wide variety of topics with utmost brevity, leaving the student to struggle with a number of difficult questions of theoretical nature.

True enough, a well-prepared and bright student should be able to find his way through most of such difficulties. A less-gifted student may, however, find it very difficult to grasp and absorb the multitude of new concepts strewn across an advanced text. Under these circumstances, it seems desirable to give more stress to the pedagogical side of a text to make it more readable to the student and more suitable for independent study. Hopefully, the present work represents a step in this direction. It has several features designed to conform to the path an average student may conceivably follow in acquiring the knowledge of the subject.

First, the inductive approach is used in each chapter throughout the book. Following the fundamentals of modern physics, the text is almost entirely devoted to linear problems, but the unifying concepts of linear space are fully developed rather late in the book, after the student is exposed to a number of practical mathematical techniques.
Also, almost every chapter starts with an example or discussion of elementary nature, with subject matter that is probably familiar to the reader. The introduction of new concepts is made against a familiar background and is later extended to more sophisticated situations. A typical example is Chapter 8, where the basic aspects of partial differential equations are illustrated using the "elementary functions" exclusively. Another facet of this trend is the repeated use of the harmonic oscillator and the stretched string as physical models: no attempt is made to solve as many problems for the student as possible, but rather to show how various methods can be used to the same end within a familiar physical context.

In the process of learning, students inevitably pose a number of questions necessary to clarify the material under scrutiny. While most of these questions naturally belong to classroom discussion, it is certainly beneficial to attempt to anticipate some of them in a text. The Remarks and many footnotes are designed to contribute to this goal. The author hopes they answer some questions in the mind of the student as well as suggest some new ones, stimulating an interest in further inquiry. A number of cross-references serve a similar purpose, inviting the reader to make multiple use of various sections of the book. The absence of numbered formulas is intentional: if the student bothers to look into the indicated section or page, he should not simply check that the quoted formula "is indeed there," but, rather, glance through the text and recall its origin and meaning.

The question of mathematical rigor is quite important in the subject treated here, although it is sometimes controversial. It is the author's opinion that a theoretical physicist should know where he stands, whether he is proving his own deductions, quoting somebody else's proof, or just offering a reasonable conjecture. Consequently, he should be trained in this direction, and the texts should be written in this spirit. On the other hand, it would be unwise to overload every student with mathematics for two main reasons: first, because of the limitations of time in the classroom and the space in a text, and second, because physicists are apt to change their mathematical postulates as soon as experimental physics lends support to such suggestions. The reader can find examples of the latter philosophy in Chapters 4 and 6 of the text. Whether the author was able to follow these principles is left to the judgment of the users of this book.

Each chapter is supplied with its share of problems proportional to the time presumed to be allotted to its study. The student may find some of the problems rather difficult since they require more comprehension of the material rather than sheer technique. To balance this, a variety of hints and explanations are often supplied. Answers are not given because many problems contain the answer in their formulation; the remaining ones may be used to test the ability of the student for independent work. The exercises within the text can be used as problems to test the students' manipulative skills.

For many of the methods of instruction of mathematical physics presented in this book, the author is indebted to his own teachers at the University of British Columbia and McGill University. The encouragement of his colleagues and students at St. John's University and Hunter College of the City University of New York is greatly appreciated. Also, the author wishes to thank Mrs.
Ludmilla Verenicin and Miss Anne Marie Nowom for their help in the preparation of the manuscript.

Palo Alto, Calif.
August 1966                                                        E.B.

CONTENTS

Chapter 1  Vectors, Matrices, and Coordinates
    1.1  Introduction
    1.2  Vectors in Cartesian Coordinate Systems
    1.3  Changes of Axes. Rotation Matrices
    1.4  Repeated Rotations. Matrix Multiplication
    1.5  Skew Cartesian Systems. Matrices in General
    1.6  Scalar and Vector Fields
    1.7  Vector Fields in Plane
    1.8  Vector Fields in Space
    1.9  Curvilinear Coordinates

Chapter 2  Functions of a Complex Variable
    2.1  Complex Numbers
    2.2  Basic Algebra and Geometry of Complex Numbers
    2.3  De Moivre Formula and the Calculation of Roots
    2.4  Complex Functions. Euler's Formula
    2.5  Applications of Euler's Formula
    2.6  Multivalued Functions and Riemann Surfaces
    2.7  Analytic Functions. Cauchy Theorem
    2.8  Other Integral Theorems. Cauchy Integral Formula
    2.9  Complex Sequences and Series
    2.10 Taylor and Laurent Series
    2.11 Zeros and Singularities
    2.12 The Residue Theorem and its Applications
    2.13 Conformal Mapping by Analytic Functions
    2.14 Complex Sphere and Point at Infinity
    2.15 Integral Representations

Chapter 3  Linear Differential Equations of Second Order
    3.1  General Introduction. The Wronskian
    3.2  General Solution of the Homogeneous Equation
    3.3  The Nonhomogeneous Equation. Variation of Constants
    3.4  Power Series Solutions
    3.5  The Frobenius Method
    3.6  Some Other Methods of Solution

Chapter 4  Fourier Series
    4.1  Trigonometric Series
    4.2  Definition of Fourier Series
    4.3  Examples of Fourier Series
    4.4  Parity Properties. Sine and Cosine Series
    4.5  Complex Form of Fourier Series
    4.6  Pointwise Convergence of Fourier Series
    4.7  Convergence in the Mean
    4.8  Applications of Fourier Series

Chapter 5  The Laplace Transformation
    5.1  Operational Calculus
    5.2  The Laplace Integral
    5.3  Basic Properties of Laplace Transform
    5.4  The Inversion Problem
    5.5  The Rational Fraction Decomposition
    5.6  The Convolution Theorem
    5.7  Additional Properties of Laplace Transform
    5.8  Periodic Functions. Rectification
    5.9  The Mellin Inversion Integral
    5.10 Applications of Laplace Transforms

Chapter 6  Concepts of the Theory of Distributions
    6.1  Strongly Peaked Functions and the Dirac Delta Function
    6.2  Delta Sequences
    6.3  The δ-Calculus
    6.4  Representations of Delta Functions
    6.5  Applications of the δ-Calculus
    6.6  Weak Convergence
    6.7  Correspondence of Functions and Distributions
    6.8  Properties of Distributions
    6.9  Sequences and Series of Distributions
    6.10 Distributions in N Dimensions

Chapter 7  Fourier Transforms
    7.1  Representations of a Function
    7.2  Examples of Fourier Transformations
    7.3  Properties of Fourier Transforms
    7.4  Fourier Integral Theorem
    7.5  Fourier Transforms of Distributions
    7.6  Fourier Sine and Cosine Transforms
    7.7  Applications of Fourier Transforms
    7.8  The Principle of Causality

Chapter 8  Partial Differential Equations
    8.1  The Stretched String. Wave Equation
    8.2  The Method of Separation of Variables
    8.3  Laplace and Poisson Equations
    8.4  The Diffusion Equation
    8.5  Use of Fourier and Laplace Transforms
    8.6  The Method of Eigenfunction Expansions and Finite Transforms
    8.7  Continuous Eigenvalue Spectrum
    8.8  Vibrations of a Membrane. Degeneracy
    8.9  Propagation of Sound. Helmholtz Equation

Chapter 9  Special Functions
    9.1  Cylindrical and Spherical Coordinates
    9.2  The Common Boundary-Value Problems
    9.3  The Sturm-Liouville Problem
    9.4  Self-Adjoint Operators
    9.5  Legendre Polynomials
    9.6  Fourier-Legendre Series
    9.7  Bessel Functions
    9.8  Associated Legendre Functions and Spherical Harmonics
    9.9  Spherical Bessel Functions
    9.10 Neumann Functions
    9.11 Modified Bessel Functions

Chapter 10  Finite-Dimensional Linear Spaces
    10.1  Oscillations of Systems with Two Degrees of Freedom
    10.2  Normal Coordinates and Linear Transformations
    10.3  Vector Spaces, Bases, Coordinates
    10.4  Linear Operators, Matrices, Inverses
    10.5  Changes of Basis
    10.6  Inner Product. Orthogonality. Unitary Operators
    10.7  The Metric. Generalized Orthogonality
    10.8  Eigenvalue Problems. Diagonalization
    10.9  Simultaneous Diagonalization

Chapter 11  Infinite-Dimensional Vector Spaces
    11.1  Spaces of Functions
    11.2  The Postulates of Quantum Mechanics
    11.3  The Harmonic Oscillator
    11.4  Matrix Representations of Linear Operators
    11.5  Algebraic Methods of Solution
    11.6  Bases with Generalized Orthogonality
    11.7  Stretched String with a Discrete Mass in the Middle
    11.8  Applications of Eigenfunctions

Chapter 12  Green's Functions
    12.1  Introduction
    12.2  Green's Function for the Sturm-Liouville Operator
    12.3  Series Expansions for G(x|ξ)
    12.4  Green's Functions in Two Dimensions
    12.5  Green's Functions for Initial Conditions
    12.6  Green's Functions with Reflection Properties
    12.7  Green's Functions for Boundary Conditions
    12.8  The Green's Function Method
    12.9  A Case of Continuous Spectrum

Chapter 13  Variational Methods
    13.1  The Brachistochrone Problem
    13.2  The Euler-Lagrange Equation
    13.3  Hamilton's Principle
    13.4  Problems Involving Sturm-Liouville Operators
    13.5  The Rayleigh-Ritz Method
    13.6  Variational Problems with Constraints
    13.7  Variational Formulation of Eigenvalue Problems
    13.8  Variational Problems in Many Dimensions
    13.9  Formulation of Eigenvalue Problems by the Ratio Method

Chapter 14  Traveling Waves, Radiation, Scattering
    14.1  Motion of Infinite Stretched String
    14.2  Propagation of Initial Conditions
    14.3  Semi-infinite String. Use of Symmetry Properties
    14.4  Energy and Power Flow in a Stretched String
    14.5  Generation of Waves in a Stretched String
    14.6  Radiation of Sound from a Pulsating Sphere
    14.7  The Retarded Potential
    14.8  Traveling Waves in Nonhomogeneous Media
    14.9  Scattering Amplitudes and Phase Shifts
    14.10 Scattering in Three Dimensions. Partial Wave Analysis

Chapter 15  Perturbation Methods
    15.1  Introduction
    15.2  The Born Approximation
    15.3  Perturbation of Eigenvalue Problems
    15.4  First-Order Rayleigh-Schrödinger Theory
    15.5  The Second-Order Nondegenerate Theory
    15.6  The Case of Degenerate Eigenvalues

Chapter 16  Tensors
    16.1  Introduction
    16.2  Two-Dimensional Stresses
    16.3  Cartesian Tensors
    16.4  Algebra of Cartesian Tensors
    16.5  Kronecker and Levi-Civita Tensors. Pseudotensors
    16.6  Derivatives of Tensors
    16.7  Strain Tensor and Hooke's Law
    16.8  Tensors in Skew Cartesian Frames. Covariant and Contravariant Representations
    16.9  General Tensors
    16.10 Algebra of General Tensors. Relative Tensors
    16.11 The Covariant Derivative. Calculus of General Tensors

Index

CHAPTER 1

VECTORS, MATRICES, AND COORDINATES

1.1 INTRODUCTION

To be able to follow this text without undue difficulties, the reader is expected to have adequate preparation in mathematics and physics. This involves a good working knowledge of advanced calculus, a basic course in differential equations, and a basic course in undergraduate algebra. Rudimentary knowledge of complex numbers, matrices, and Fourier series is very desirable but not indispensable. As for the subjects in physics, the reader should have completed the standard undergraduate training in mechanics, thermodynamics, electromagnetism, and atomic physics.

Despite these prerequisites, a need is often recognized for reviewing some of the preparatory material at the beginning of a text. Let us follow this custom and devote some time to the subject of vector analysis which has a bearing, in more than one way, on the material developed in this text. Of course, such a review must be brief and we must omit all the details, in particular those involving mathematical proofs. The reader is referred to standard textbooks in advanced calculus and vector analysis* for a full discussion. On the other hand, we hope to draw attention to some interesting points not always emphasized in commonly used texts.

1.2 VECTORS IN CARTESIAN COORDINATE SYSTEMS

In many elementary textbooks a vector is defined as a quantity characterized by magnitude and direction. We shall see in Chapter 10 that vectors are much more general than this, but it is fair to say that the concept of vectors was first introduced into mathematics (by physicists) to represent "quantities with direction," e.g., displacement, velocity, force, etc. Doubtless, they are the simplest and most familiar kinds of vectors.

As we well know, quantities with direction can be graphically represented by arrows and are subject to two basic operations: a) multiplication by a scalar,† b) addition. These operations are illustrated in Fig. 1.1.

* For example, A. E. Taylor, Advanced Calculus; T. M. Apostol, Mathematical Analysis; W. Kaplan, Advanced Calculus.
† Until we are ready to discuss complex vectors (Chapter 10) we shall assume that scalars are real numbers.

In many cases we can plot various vectors from a single point, the origin. Then each vector can be characterized by the coordinates of its "tip." Various coordinate systems are possible but the cartesian coordinate systems are the most convenient. The reason is very simple and very deep: The cartesian coordinates of a point can serve as the components of the corresponding vector at the same time. This is illustrated in Fig. 1.2, where orthogonal cartesian systems, in plane and in space, are selected.
Note that the three-dimensional system is "right-handed";* in general, we shall use right-handed systems in this book.

Figure 1.1    Figure 1.2

We can now associate with a vector u (in space) a set of three scalars (u_x, u_y, u_z), such that λu will correspond to (λu_x, λu_y, λu_z) and u + v will correspond to (u_x + v_x, u_y + v_y, u_z + v_z). Note that no such relations hold, in general, if a vector is characterized by other types of coordinates, e.g., spherical or cylindrical. In addition, orthogonal cartesian coordinates result in very simple formulas for other common quantities associated with vectors, such as

a) length (magnitude) of a vector:

    |u| = u = √(u_x² + u_y² + u_z²),

b) projections of a vector on coordinate axes:†

    u_x = u cos (u, i),    u_y = u cos (u, j),    u_z = u cos (u, k),

c) projection of a vector on an arbitrary direction defined by vector s (Fig. 1.3):

    OP = u_s = u cos ψ = u_x cos (s, i) + u_y cos (s, j) + u_z cos (s, k),

d) scalar product (dot product) of two vectors:

    (u · v) = uv cos (u, v) = u_x v_x + u_y v_y + u_z v_z,

e) vector product (cross product):

    [u × v] = (u_y v_z − u_z v_y)i + (u_z v_x − u_x v_z)j + (u_x v_y − u_y v_x)k.

* Rotation of the x-axis by 90° to coincide with the y-axis appears counterclockwise for all observers with z > 0.
† Standard notation is used: The symbols i, j, k are unit vectors in x-, y-, and z-directions, respectively. The symbol (u, v) stands for the angle between the directions given by u and v.

Figure 1.3

The important distinctive feature of the cross product is that [u × v] ≠ [v × u], namely, it is not commutative; rather, it is anticommutative:

    [u × v] = −[v × u].

Remark. Apart from its important physical applications, the cross product of two vectors leads us to the concept of "oriented area." The magnitude of [u × v], namely uv |sin (u, v)|, is equal to the area of the parallelogram formed by u and v. The direction of [u × v] can serve to distinguish the "positive side" of the parallelogram from its "negative side." Figure 1.4 shows two views of the same parallelogram illustrating this idea.

Figure 1.4

Closely related to this property is the concept of a right-handed triple of vectors. Any three vectors u, v, and w, taken in this order, are said to form a right-handed (or positive) triple if the so-called triple product ([u × v] · w) is positive.* This happens if w is on the same side of the plane defined by the vectors u and v, as illustrated in Fig. 1.5. It is not hard to verify that ([u × v] · w) represents, in this case, the volume V of the parallelepiped formed by the vectors u, v, and w.

* These vectors form a left-handed (negative) triple if ([u × v] · w) < 0.

Exercise. Show that V = |([u × v] · w)| under any circumstances. Show also that the sign of the triple product is unchanged under cyclic permutation of u, v, and w, that is, ([u × v] · w) = ([v × w] · u) = ([w × u] · v).

Figure 1.5
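These vector operations are easy to check numerically. The following short sketch (an illustrative addition to this reprint, not part of the original text; it uses Python with the numpy library and arbitrarily chosen vectors) verifies the anticommutativity of the cross product and the cyclic invariance of the triple product:

```python
import numpy as np

u = np.array([1.0, 2.0, 0.5])
v = np.array([-1.0, 0.0, 2.0])
w = np.array([0.0, 1.0, 1.0])

dot = np.dot(u, v)        # d) u_x v_x + u_y v_y + u_z v_z
cross = np.cross(u, v)    # e) [u x v]

# anticommutativity: [u x v] = -[v x u]
assert np.allclose(cross, -np.cross(v, u))

# triple product ([u x v] . w) and its cyclic invariance
triple = np.dot(cross, w)
assert np.isclose(triple, np.dot(np.cross(v, w), u))
assert np.isclose(triple, np.dot(np.cross(w, u), v))

# (u, v, w) form a right-handed triple if the triple product is positive;
# its absolute value is the volume of the parallelepiped
print(triple > 0, abs(triple))
```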
1.3 CHANGES OF AXES. ROTATION MATRICES

We have seen that a given vector u is associated with a set of three numbers, namely its components,* with respect to some orthogonal cartesian system. However, it is clear that if the system of axes is changed, the components change as well. Let us study these changes.

Consider, for vectors in a plane, a change in the system of axes which is produced by a rotation by the angle θ, as illustrated in Fig. 1.6. The old system is (x, y) and the new system is (x′, y′). Since u = u_x i + u_y j, the x′-component of u is the sum of projections of vectors u_x i and u_y j on the x′-axis, and similarly for the y′-component. From the diagram we see that this yields

    u_x′ = u_x cos θ + u_y sin θ,
    u_y′ = −u_x sin θ + u_y cos θ.

It is instructive to note that the angle between the x′- and y-axes is (π/2 − θ) while the angle between the y′- and x-axes is (π/2 + θ). In view of

    sin θ = cos (π/2 − θ)    and    −sin θ = cos (π/2 + θ),

we see that all four coefficients in the above equations represent cosines of the angles between the respective axes.

Let us now turn to the three-dimensional case. Figure 1.7 represents two orthogonal cartesian systems, both right-handed, centered at O. It is intuitively clear that the primed system can be obtained from the unprimed one by the motion of a "rigid body about a fixed point." In fact, it is shown in almost any textbook on mechanics† that such a motion can be reduced to a rotation about some axis (Euler's theorem).

* Instead of "components," the term "coordinates of a vector" is often used (see also Section 10.3).
† For example, Goldstein, Classical Mechanics, Section 4.6.

Write u = u_x i + u_y j + u_z k, collect contributions to u_x′ from the three vectors u_x i, u_y j, and u_z k, and obtain

    u_x′ = u_x cos (i′, i) + u_y cos (i′, j) + u_z cos (i′, k),

where i′ is, of course, the unit vector in the x′-direction. Note that the cosines involved are the directional cosines of the x′-direction with respect to the unprimed system or, for that matter, the dot products of i′ with i, j, and k.

It is clear that similar formulas can be written for u_y′ and u_z′. At this stage, however, it is very convenient to switch to a different notation: Instead of writing (u_x, u_y, u_z), let us write (u_1, u_2, u_3) and similarly (u_1′, u_2′, u_3′) for (u_x′, u_y′, u_z′). Moreover, denote by θ_mn the angle between the mth primed axis and the nth unprimed axis (three such angles are marked on Fig. 1.7) and by a_mn the corresponding cosine (that is, a_mn = cos θ_mn). This new notation permits us to write the transformation formulas in an easily memorized pattern:

    u_1′ = a_11 u_1 + a_12 u_2 + a_13 u_3,
    u_2′ = a_21 u_1 + a_22 u_2 + a_23 u_3,
    u_3′ = a_31 u_1 + a_32 u_2 + a_33 u_3,

or, if desired, in the compact form

    u_m′ = Σ_{n=1}^{3} a_mn u_n    (m = 1, 2, 3).

From this analysis we conclude that the new components (u_1′, u_2′, u_3′) can be obtained from the old components (u_1, u_2, u_3) with the help of nine coefficients.
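As a quick numerical check of the compact formula (a sketch added to this reprint, not from the original text; Python/numpy, with the nine direction cosines taken, for illustration, from a rotation of axes about the z-axis):

```python
import numpy as np

theta = np.deg2rad(30.0)
# direction cosines a_mn for a rotation of axes about the z-axis:
# a_mn = cosine of the angle between the m-th primed and n-th unprimed axis
a = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

u = np.array([2.0, 1.0, 3.0])   # old components (u_1, u_2, u_3)
u_prime = a @ u                 # u'_m = sum over n of a_mn u_n

# the length of the vector is unchanged by a rotation of axes
assert np.isclose(np.linalg.norm(u_prime), np.linalg.norm(u))
```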
These nine coefficients, arranged in the self-explanatory pattern below, are said to form a matrix.* We shall denote matrices by capital letters.

              Columns
           1st   2nd   3rd
         ( a_11  a_12  a_13 )  1st
    A =  ( a_21  a_22  a_23 )  2nd   Rows
         ( a_31  a_32  a_33 )  3rd

Matrix A has three rows and three columns; the individual coefficients a_mn are referred to as matrix elements, or entries. It is customary to use the first subscript (m in our case) to label the row and the second one to label the column; thus the matrix element a_kl should be located at the intersection of the kth row with the lth column.

The set of elements in a given row is very often called a row vector and the set of elements in a column, a column vector. This nomenclature is justified by the fact that any three numbers can be treated as components of some vector in space. However, at this stage it is worthwhile to make a digression and establish a geometric interpretation for the column vectors of A. The reader should pay particular attention to the argument because of its general significance.

Let us imagine a unit vector u. Suppose the unprimed system was oriented in such a way that u was along the x-axis. Then the components of u are (1, 0, 0) and u actually coincides with the vector i. If the coordinate system is now rotated, the new components of u are given by

    u_1′ = a_11 · 1 + a_12 · 0 + a_13 · 0 = a_11,
    u_2′ = a_21 · 1 + a_22 · 0 + a_23 · 0 = a_21,
    u_3′ = a_31 · 1 + a_32 · 0 + a_33 · 0 = a_31.

We see that the first column vector of matrix A is composed of the new components of vector u. In other words, we can say that the new components of i are (a_11, a_21, a_31) and we can write

    i = a_11 i′ + a_21 j′ + a_31 k′.

Similar statements relate j and k to the second and third column vectors of A. Note that in this discussion the unit vectors i, j, k assume a role independent of their respective coordinate axes. The axes are rotated but the vectors i, j, k stay in place and are then referred to the rotated system of axes.

* More precisely, a 3 × 3 matrix is formed. The reader can easily construct an analogous 2 × 2 matrix to account for two-dimensional rotations.

Exercise. Establish the geometrical meaning of the row vectors of matrix A representing a rotation.

The definitions introduced above allow the computation of (u_1′, u_2′, u_3′) from (u_1, u_2, u_3) by the following rule: To obtain u_k′, take the dot product of the kth row of matrix A with the vector u, as given by the triple (u_1, u_2, u_3). Since in this process each row of the matrix is "dotted" with (u_1, u_2, u_3), we may regard it as some kind of multiplication of a vector by a matrix. In fact, this operation is commonly known as vector-matrix multiplication and is visually exhibited as shown:

    ( a_11  a_12  a_13 ) ( u_1 )   ( u_1′ )
    ( a_21  a_22  a_23 ) ( u_2 ) = ( u_2′ )
    ( a_31  a_32  a_33 ) ( u_3 )   ( u_3′ )
        Matrix A       Column vector u   Column vector u′

As we see, the old components are arranged in a column which we shall denote by u. This column vector is multiplied by the matrix A and this results in another column vector, denoted by u′. The multiplication means, of course: Form the dot product of the first row of A with u for the first component of u′; then form the dot product of the second row of A with u to get u_2′, and similarly for u_3′. The entire procedure is symbolically written as

    u′ = Au.

Remark. Note that the set (u_1, u_2, u_3), arranged in a column, has not been denoted simply by u but rather by a new symbol u.* The point is that in the context of our problem, both u and u′ represent the same vector u, but with respect to different systems of axes. We must think of u and u′ as two different representations of u, and the matrix A shows us how to switch from one representation to another.

Before we discuss further topics involving matrices, let us record the fact that our matrix A is not just a collection of nine arbitrary scalars. Its matrix elements are interdependent and possess the following properties.

a) The columns of A are orthogonal to each other, namely,

    a_11 a_12 + a_21 a_22 + a_31 a_32 = 0,
    a_12 a_13 + a_22 a_23 + a_32 a_33 = 0,
    a_13 a_11 + a_23 a_21 + a_33 a_31 = 0.

This property follows from the fact that the columns of A are representations (in the new system) of the vectors i, j, and k and these vectors are mutually orthogonal.

* The symbol u should not be confused with |u|, the magnitude of vector u, which is also denoted by u (p. 2).

b) The columns of A have unit magnitude, namely,

    a_11² + a_21² + a_31² = 1,
    a_12² + a_22² + a_32² = 1,
    a_13² + a_23² + a_33² = 1,

because i, j, and k are unit vectors.

c) The rows of A are also mutually orthogonal and have unit magnitude. This is verified by establishing the geometrical meaning of the row vectors of A.*

Matrices satisfying these three properties are called orthogonal.

* It will be shown in Chapter 10 that any N × N matrix which satisfies (a) and (b) must also satisfy (c).
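These properties take one line each to verify numerically. The following sketch (illustrative only, not from the original text; Python/numpy, for a sample rotation of axes about the z-axis) checks that both the columns and the rows are orthonormal:

```python
import numpy as np

theta = np.deg2rad(30.0)
A = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

I = np.eye(3)
# (a) + (b): the columns of A are orthonormal, i.e. A^T A = I
assert np.allclose(A.T @ A, I)
# (c): the rows of A are orthonormal as well, i.e. A A^T = I
assert np.allclose(A @ A.T, I)
# a rotation matrix, moreover, has determinant +1
assert np.isclose(np.linalg.det(A), 1.0)
```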
We may then conclude that the matrices representing rotations of orthogonal cartesian systems in space are orthogonal matrices.

Remark. There are orthogonal matrices which do not represent rotations. Rotation matrices have an additional property: their determinant (i.e., the determinant of the equations on p. 5) is equal to +1. Otherwise, orthogonal matrices can have the determinant equal to −1. The point is that a rotation must yield a right-handed triple (i′, j′, k′) of unit vectors since (i, j, k) is a right-handed triple.*

* See Problem 4 at the end of this chapter.

1.4 REPEATED ROTATIONS. MATRIX MULTIPLICATION

The matrix notation introduced in the preceding sections is particularly useful when we are faced with repeated changes of coordinate axes. In addition to the primed and unprimed systems related by a matrix A, let there be a third double-primed system of axes (x″, y″, z″) and let it be related to the primed system by a matrix B:

         ( b_11  b_12  b_13 )
    B =  ( b_21  b_22  b_23 )
         ( b_31  b_32  b_33 )

Evidently, the system (x″, y″, z″) can be related directly to the system (x, y, z) through some matrix C and our task is to evaluate the matrix elements c_mn in terms of the matrix elements a_mn and b_mn. We have

    u_1″ = b_11 u_1′ + b_12 u_2′ + b_13 u_3′,
    u_2″ = b_21 u_1′ + b_22 u_2′ + b_23 u_3′,
    u_3″ = b_31 u_1′ + b_32 u_2′ + b_33 u_3′,

and

    u_1′ = a_11 u_1 + a_12 u_2 + a_13 u_3,
    u_2′ = a_21 u_1 + a_22 u_2 + a_23 u_3,
    u_3′ = a_31 u_1 + a_32 u_2 + a_33 u_3,

from which it follows that

    u_1″ = (b_11 a_11 + b_12 a_21 + b_13 a_31)u_1 + (b_11 a_12 + b_12 a_22 + b_13 a_32)u_2 + (b_11 a_13 + b_12 a_23 + b_13 a_33)u_3,
    u_2″ = (b_21 a_11 + b_22 a_21 + b_23 a_31)u_1 + (b_21 a_12 + b_22 a_22 + b_23 a_32)u_2 + (b_21 a_13 + b_22 a_23 + b_23 a_33)u_3,
    u_3″ = (b_31 a_11 + b_32 a_21 + b_33 a_31)u_1 + (b_31 a_12 + b_32 a_22 + b_33 a_32)u_2 + (b_31 a_13 + b_32 a_23 + b_33 a_33)u_3.

The maze of matrix elements above becomes quite manageable if we observe that every coefficient associated with u_n is a dot product of some row of matrix B and some column of matrix A. A closer look at these relationships leads us to the following statement: The element c_mn of matrix C is obtained by taking the dot product of the mth row of matrix B with the nth column of matrix A.

Now, if we record our relationships in the vector-matrix symbolic notation

    u′ = Au,    u″ = Bu′,    u″ = Cu,

then we are naturally led to the relation

    u″ = Cu = B(Au).

It seems reasonable now to define the product of two matrices, like B and A, to be equal to a third matrix, say C, so that the above relationship may also be written as*

    u″ = Cu = (BA)u.

In this sense we write

    ( c_11  c_12  c_13 )   ( b_11  b_12  b_13 ) ( a_11  a_12  a_13 )
    ( c_21  c_22  c_23 ) = ( b_21  b_22  b_23 ) ( a_21  a_22  a_23 )
    ( c_31  c_32  c_33 )   ( b_31  b_32  b_33 ) ( a_31  a_32  a_33 )

or, symbolically,

    C = BA,

given that the matrix elements of C are defined by the rule quoted above.

* The difference is, of course, that in B(Au) the column vector u is first multiplied by A, producing another column vector which is, in turn, multiplied by B. In (BA)u the matrices are being multiplied first, resulting in a new matrix which acts on u.

Having introduced the notion of matrix multiplication, we are naturally interested in determining whether it has the same properties as the multiplication of ordinary numbers (scalars). A simple check shows that the associative law holds: If we multiply three matrices A, B, and C in that order, then this can be done in two ways:

    ABC = (AB)C = A(BC)

(where it is understood that the operation in parentheses is performed first).
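A brief numerical sketch of composed rotations (an illustrative addition, not from the original text; Python/numpy with arbitrarily chosen rotation angles) confirms both the associative law and the fact, discussed next, that the order of the factors matters:

```python
import numpy as np

def rot_z(theta):
    # rotation of axes about the z-axis
    return np.array([[ np.cos(theta), np.sin(theta), 0.0],
                     [-np.sin(theta), np.cos(theta), 0.0],
                     [ 0.0,           0.0,           1.0]])

def rot_x(theta):
    # rotation of axes about the x-axis
    return np.array([[1.0,  0.0,           0.0          ],
                     [0.0,  np.cos(theta), np.sin(theta)],
                     [0.0, -np.sin(theta), np.cos(theta)]])

A = rot_z(np.deg2rad(30.0))
B = rot_x(np.deg2rad(45.0))
u = np.array([1.0, 2.0, 3.0])

C = B @ A                                 # C = BA
assert np.allclose(C @ u, B @ (A @ u))    # (BA)u = B(Au): associativity
print(np.allclose(B @ A, A @ B))          # False: BA != AB in general
```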
Exercise. Verify this statement. [Hint: If AB = D, then the elements of D are given by

    d_mn = Σ_{k=1}^{3} a_mk b_kn.

Develop the matrix elements of (AB)C and A(BC) in this fashion and verify that they are the same.]

On the other hand, matrix multiplication is not commutative:

    AB ≠ BA,

and, furthermore, there is no simple relation, in general, between AB and BA. This noncommutative feature precludes the possibility of defining "matrix division."* However, it is possible to talk about the inverse of a matrix and this concept arises naturally in our discussion of rotations. Indeed, if we rotate our orthogonal cartesian system of axes, the new coordinates of vector u are obtained from the vector-matrix equation

    u′ = Au.

Suppose now that we rotate the axes back to their original position. The new components of vector u are given by (u_1, u_2, u_3) and the old ones are given by (u_1′, u_2′, u_3′); these components must be related by some matrix B:

    u = Bu′.

Combining these relations we obtain

    u = B(Au) = (BA)u.

Therefore, the matrix BA must transform the components (u_1, u_2, u_3) into themselves. It is easy to see that this task is accomplished by the so-called unit matrix ("identity matrix")

        ( 1  0  0 )
    I = ( 0  1  0 )
        ( 0  0  1 )

Exercise. Show that if u = (BA)u is to hold for an arbitrary vector u, then BA must necessarily be of the above form. In other words, the identity matrix is unique.

It is now customary to call B the inverse of matrix A and to denote it by the symbol A⁻¹ so that we have A⁻¹A = I. Since we could have performed our rotations in reverse order, it is not hard to see that AB = I as well, that is,

    A⁻¹A = I = AA⁻¹    and    B = A⁻¹.

While two rotation matrices may not commute, a rotation matrix always commutes with its inverse.†

* If we write A/B = X the question would arise whether we imply A = BX or A = XB.
† See Section 10.4 for a general statement to that effect.

It may now be of interest to relate the elements of matrix B to those of matrix A. To obtain b_mn we should, in principle, solve the equations on p. 5 for u_1, u_2, u_3. However, in the case of rotations, we have a much simpler method at our disposal. Let us write the matrix equation BA = I in detail:

    ( b_11  b_12  b_13 ) ( a_11  a_12  a_13 )   ( 1  0  0 )
    ( b_21  b_22  b_23 ) ( a_21  a_22  a_23 ) = ( 0  1  0 )
    ( b_31  b_32  b_33 ) ( a_31  a_32  a_33 )   ( 0  0  1 )

To get the first row of I we must take the dot products of the first row vector of B with each of the column vectors of A. However, we know that the latter are just the vectors i, j, k in new representation. We see that the first row vector of B is orthogonal to j and k and its dot product with i is unity. Consequently, it could be nothing else but the vector i (in new representation, of course), and we conclude that b_11 = a_11, b_12 = a_21, and b_13 = a_31. Repeat this argument for the other rows of B and deduce that the rows of B are the columns of A and vice versa. This is also expressed by the formula

    b_mn = a_nm.

Any two matrices A and B satisfying these conditions are called transposes of each other and are denoted by B = Aᵀ and A = Bᵀ. While it is not, in general, true that the inverse and transpose of a matrix are identical, this rule holds for rotation matrices and is very useful.
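For rotation matrices this inverse-equals-transpose rule is a one-line check (an illustrative Python/numpy sketch, not from the original text):

```python
import numpy as np

theta = np.deg2rad(30.0)
A = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

B = np.linalg.inv(A)
# for a rotation matrix the inverse is the transpose: b_mn = a_nm
assert np.allclose(B, A.T)
# and A always commutes with its inverse
assert np.allclose(A @ B, B @ A)
```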
1.5 SKEW CARTESIAN SYSTEMS. MATRICES IN GENERAL

If the coordinate axes in a cartesian system form angles other than 90°, we have a skew cartesian system. Figure 1.8 shows two such systems, one in plane and one in space, along with the decomposition of a vector into its respective components.

Figure 1.8

Skew systems are specified by the angles between the axes (one in plane, three in space) which may vary between 0° and 180°. As before, i, j, and k will be used to denote unit vectors in the direction of axes. Note that we can still talk about right-handed systems in space, where the vectors i, j, k (in that order!) form a right-handed triple.

The vectors are added and multiplied by scalars according to the same rules that were stated before. However, the length of a vector is now given by a different formula. For instance, for a plane vector we have from Fig. 1.8(a), by the cosine theorem,

    |u|² = u_x² + u_y² − 2 u_x u_y cos (π − φ) = u_x² + u_y² + 2 u_x u_y cos φ,

so that

    |u| = √(u_x² + u_y² + 2 u_x u_y cos φ).

In general, the dot product is no longer given by the sum of the products of components, but by a more complicated formula. As a matter of fact we shall even introduce a new name for it and call it the inner product of two vectors, which is then defined by

    (u · v) = |u| · |v| · cos (u, v).

The reason is that we would like to retain the name dot product to mean the sum of the products of components of two vectors, regardless of whether the axes are orthogonal or skew.*

The derivation of a formula for inner product in a skew system is greatly facilitated by its distributive property,† namely,

    (u · (v + w)) = (u · v) + (u · w).

Indeed, from Fig. 1.9, no matter which coordinate system is used, we see that

    (u · (v + w)) = |u| · |v + w| · cos θ = |u| · (MN).

However, MN = MP + PN; since MP is the projection of v on u, we have |u| · (MP) = |u| · |v| cos (u, v), and similarly for PN, establishing the result. Note that this argument is also valid for vectors in space.

Now we can write for two vectors in a plane, u = u_x i + u_y j and v = v_x i + v_y j,

    (u · v) = ((u_x i + u_y j) · (v_x i + v_y j)) = u_x v_x (i · i) + u_x v_y (i · j) + u_y v_x (j · i) + u_y v_y (j · j)
            = u_x v_x + u_x v_y cos φ + u_y v_x cos φ + u_y v_y.

Note that this formula reduces to the usual dot product when φ = 90°.

* With this distinction, the dot product becomes an algebraic concept (referring to the components) while inner product is a geometrical concept, independent of the coordinate system. The two are identical provided the system is orthogonal.
† Since (u · v) = (v · u), the second distributive law ((u + v) · w) = (u · w) + (v · w) is trivial.

There is no difficulty now in establishing other formulas for skew systems, in plane or in space. We shall not go into these details but rather consider another important question: the transformation of coordinates from an orthogonal to a skew system of axes.

Consider, for instance, Fig. 1.10; here a skew x′y′-system with unit vectors i′ and j′ is superimposed on an orthogonal xy-system with unit vectors i and j. A given vector u can be represented either as u = u_x i + u_y j or as u = u_x′ i′ + u_y′ j′.

Figure 1.9    Figure 1.10

Although i′ = i, the components u_x′ and u_x are not equal; rather, we have

    u_x′ = OQ′ = OQ − Q′Q = u_x − u_y tan ψ.

Also,

    u_y′ = OP′ = u_y sec ψ.

We see that the new components (u_x′, u_y′) are linearly related to the old components (u_x, u_y) and this relationship can be represented by means of vector-matrix multiplication:

    ( u_x′ )   ( 1   −tan ψ ) ( u_x )
    ( u_y′ ) = ( 0    sec ψ ) ( u_y )

This can be written symbolically as

    u′ = Au,

where u′ stands for the column vector (u_x′, u_y′), and u stands for the column vector (u_x, u_y). The obvious difference from the previous cases is that the matrix A is no longer orthogonal. Its inverse A⁻¹ is readily calculated by solving for u_x, u_y in terms of u_x′, u_y′, and it reads

    A⁻¹ = ( 1   sin ψ )
          ( 0   cos ψ )

Note that it is no longer the transpose of A.
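A numerical confirmation (an illustrative Python/numpy sketch, not from the original text, with an arbitrary skew angle ψ):

```python
import numpy as np

psi = np.deg2rad(25.0)
A = np.array([[1.0, -np.tan(psi)],
              [0.0,  1.0 / np.cos(psi)]])   # sec(psi)

A_inv = np.linalg.inv(A)
# the inverse derived above: [[1, sin(psi)], [0, cos(psi)]]
assert np.allclose(A_inv, [[1.0, np.sin(psi)], [0.0, np.cos(psi)]])
# unlike a rotation matrix, A is not orthogonal:
# its inverse is not its transpose
assert not np.allclose(A_inv, A.T)
```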
It is still true that the columns of A are the old unit vectors in new representation (Fig. 1.11), that is,

    i = 1 · i′ + 0 · j′,    j = −tan ψ · i′ + sec ψ · j′.

The rows of matrix A do not have a simple geometrical interpretation, but the columns of matrix A⁻¹ do have one.*

From the above analysis we may conjecture that, in general, a change from one set of unit vectors to another in a plane or in space involves a linear relationship between the old and new components of a vector, expressible by a vector-matrix multiplication u′ = Au (A is a 2 × 2 or a 3 × 3 matrix). We shall consider this problem in detail in Chapter 10. For the time being we shall mention the fact that not every matrix A can represent such a relationship. Consider, for instance, the following hypothetical relation between the new and old coordinates of some vector u:

    u_x′ = 4u_x − 2u_y,
    u_y′ = 2u_x − u_y.

If we attempt to solve these two equations for u_x and u_y we find that they have no solution if u_x′ and u_y′ are arbitrary. We say that the matrix

    A = ( 4  −2 )
        ( 2  −1 )

does not possess an inverse; such matrices are called singular matrices. It is not difficult to see that in this example the pair u_x′, u_y′ cannot possibly represent an arbitrary vector in plane: Our equations imply u_x′ = 2u_y′, so that there is only one independent component instead of the two required for a plane.

Figure 1.11

Remark. It is of interest to note that if u_x′ = 2u_y′ is actually satisfied, our system of equations has an infinity of solutions for u_x and u_y (since the two equations reduce to a single one). Furthermore, if an additional requirement u_x = 2u_y is imposed, the system has a unique solution (u_x = u_x′/3, u_y = u_x′/6). All these features should be remembered since they are important in physical applications.

* If A were orthogonal, the rows of A would be identical with columns of A⁻¹ (see pp. 11 and 440).
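The collapse onto a single independent component is easy to see numerically (an illustrative Python/numpy sketch, not from the original text):

```python
import numpy as np

A = np.array([[4.0, -2.0],
              [2.0, -1.0]])

print(np.linalg.det(A))   # 0.0: A is singular and possesses no inverse

u = np.array([1.0, 0.5])
u_prime = A @ u
# every image of A satisfies u'_x = 2 u'_y,
# so A maps the whole plane onto a single line
assert np.isclose(u_prime[0], 2.0 * u_prime[1])
```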
In particular, Ict us concentrate our attention on a) the total differential = % Be gy 4 28 dp 3 ats dy +5 and b) the directional derivative de _ de dx , de dy , dy dz. ds ax ds ay ds ' az ds * These vectors may also depend on time, but we shall be mostly interested in instanta- neous relationships, where ¢ has some fixed value. t This is also called consercative field or potential field. 4 Rute of change of g per unit length in some particular direction characterized, ayy ty" the clement of arc ds of some curve, See, e.g., Apostol, p. 104. 16 VECTORS, MATRICES, AND COORDINATES 16 The expressions on the right-hand side of the equations in (a} and (b) have the appearance of a dot product. It is convenient to define the gradient of a scalar field g(x, y, 2) by the vector a ey 9 grad p = Si i+2 sit ak Then we can write do = (gradg-ds) and fe. = (grad ¢* So), where ds = dxi+ dy} + dzk represents infinitesimal displacement in some direction and = it V4 Gu is the unit vector in the specified directian.* Since every differentiable scalar field generates a gradient field, it is natural to ask whether any given vector field u = u(x, y, z) may not be the gradient of some scalar y. The answer is negative and this becomes clear as we examine the basic properties of gradient fields. In this survey we shall need certain assumptions regarding the differentiability of various functions and anslytic properties of curves and surfaces involved in vector analysis. We shall mention these assump- tions as we need them. In many cases they can be relaxed and the results can be generalized, but we shall confine ourselves to the common situations encountered in physics. A curve in space is called smooth if it can be represented by z= x, y= 7 = 20, where x(#), p(4), and 2(4) have continuous derivatives with respect to the parameter 1 (for a curve in a planc, simply sct z = 0). Smooth curves possess tangents at all points and a (vector) line element ds can be defined at any point. The smooth- ness also guarantees the existence of line integrals. This last property is trivially extended to piecewise smooth curves; i.c., those consisting of a firite number of smooth parts. We shall assume that all curves considered by us are piecewise smooth. Regarding the differentiability of various functions, we must remember the following definitions and statements: the interior of a sphere of arbitrary radius € (usually thought to be small) centered at some point M(x, y, z) is called a neigh- borhoodt of this point (in a plane, replace “sphere” by “‘circle"). If a set of points ——_— { * Observe that dx/ds = cos (so, i), etc., are the directional cosines of the idirection defined by ds or by 59. ’ + We shall assume that all integrals are Riemann integrals which are adequate for our purposes. For example, see Apostot p. 276, 1A more precise term is an e-neighborhood. 16 SCALAR AND VECTOR FIELDS 17 is such that it contains some neighborhood of every one of its points, then it is called an open set. For instance, the interior of a cube is an open set; we can always draw a small sphere about each interior point which will lie entirely within the cube. However, the cube with boundary points included is no longer an open set. ‘The reason these concepts are needed is that partial derivatives of a function in space are defined by a limiting process that is ticd to a neighborhood. We must make sure a region is an open set before we can say that f(x, y, z) is differentiable in this region. 
In addition, we shall be mostly interested in connected open sets, or domains. These are open sets any two points of which can be connected by a polygon, i.e. a curve which is formed by a finite number of connected straight-line segments. From now on we shall] assume that all our piecewise smooth curves lie in domains where the functions under consideration (scalar fields and components of vector fields) possess continuous first-order partial derivatives. Let us now return to the properties of gradient fields. Suppose that = grad v(x, y, z) and consider the following integrat between points M(Xo, Yo 26) and N(x1, Ya, 21), taken along some curve C: Nv N [cot - J, Gee + a+ jee): Using the parameter # as the variable of integration, we have [vate = "(Be 4 0 4 8 Bam [dear = oe) ato, where fo and #, are the values of the parameter ¢ corresponding to points M and N, We see that the integral is sirnply the difference of values of g(x, y, z) at points Nand M and, therefore, is independent of the choice of curve C. Conversely, if the integral f (u- ds) is independent of path,* then keeping Mf fixed and treating NV as a variable point,| ' we can define a function bes 2) = POPP ae de) = FPP Gas de + ayy + ued). * From now on, we shall occasionally use the term “path" to indicate a piecewise smooth curve, 18 VECTORS, MATRICES, AND COORDINATES 16 ” Path Cy Path Cy Path C, Path C) M @) {b) Figure 1.12 It is now simple to show that grad y = u. For instance, cepAzy.2) (x + 4x, 9, 2) — 0%, ¥ 2) = f ee _ pee ug dx Heese} and the statement uz = calculus. We have then established the following theorem: The necessary and sufficient condition that u = grad ¢ is the independence of path of the integral {(u- ds). An alternative way of stating this result follows from consideration of the integral § (us) over a simple closed path C, called the circulation of vector w around C. By simple closed path we mean a closed path which does not intersect itself.* The following theorem holds: The circulation of u vanishes for an arbitrary simpte closed path C (in a domain D) if and only if the integral [% (u- ds) is inde- pendent of path (in D). Indeed, let C (Fig. 1.12a) be a simple closed path. Choose two arbitrary points M and N on C and write dy/dz follows from the fundamental theorem of integral $8) = [pera fy ood [Poa - [fF was), Along Cr Monk Cy Along C2 If the integral fi (u - ds) is independent of path, the right-hand side vanishes and the cireulation is zero. Conversely, if two paths Cy and C2 connecting two points (Fig. 1.12b) do not intersect (in space), a simple closed path can be formed from them and the above equation holds. If the feft-hand side is zero, so is the right-hand side, yielding the independence of path. If C, and Cy intersect a finite number of times, the proof is obtained by splitting the closed path into a finite number of simple * This property permits us to assign the direction of integration around the curve, char- acterized by the vector ds, in a unique fashion. 16 SCALAR AND VECTOR FIELDS = 19. closed paths. 
Remark. Within the context of the above discussion, it is emphasized that by φ(x, y, z) we mean a well-defined single-valued function* over the entire domain and nothing short of this requirement will suffice. In many treatments† of the magnetostatic field H, one introduces the so-called scalar magnetic potential X so that H = grad X and yet the circulation of H does not vanish over some contours. However, in all such cases it is impossible to define X uniquely over the entire contour (for those contours for which it is possible, the circulation of H does indeed vanish).

* A multivalued function is not one function, but a collection of several different functions.
† For example, Reitz and Milford, Foundations of Electromagnetic Theory, Section 8.8.

It should now be clear that many vector fields do not fall into the category of gradient fields since it is easy to construct a vector u for which the integral ∫ (u · ds) will actually depend on path. It is perhaps even easier to sketch some such fields, a task facilitated by the introduction of the concept of field lines. These field lines are curves with tangents directed along the vector field u at every point. For instance, Fig. 1.13 shows the velocity field (in the plane) of a fluid rotating around a circular obstacle. In this case the field lines are the trajectories along which the particles of fluid actually move.

Figure 1.13    Figure 1.14

It is evident that the circulation of the velocity vector u around any one of the circles in Fig. 1.13 cannot be zero (the product u · ds has the same sign at each point of the circle). Consequently, the above field cannot be a gradient field.
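As a concrete check (an illustrative Python/numpy sketch, not from the original text; the rotating field u = (−y, x) is a simple stand-in for the flow of Fig. 1.13), the circulation around a circle about the origin is plainly nonzero:

```python
import numpy as np

# a rotating (vortex-like) velocity field in the plane: u = (-y, x)
R, n = 2.0, 100000
t = np.linspace(0.0, 2.0 * np.pi, n)
x, y = R * np.cos(t), R * np.sin(t)

# circulation = closed line integral of (u_x dx + u_y dy), counterclockwise
circ = np.sum(-y[:-1] * np.diff(x) + x[:-1] * np.diff(y))

print(circ)   # approximately 2*pi*R^2, clearly nonzero: not a gradient field
assert abs(circ - 2.0 * np.pi * R**2) < 1e-2
```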
The velocity field of a fluid is, perhaps, the best starting point for investigation of other types of vector fields because it naturally leads us to another fundamental concept: the flux of a vector field.

Consider the element dS of a surface S (Fig. 1.14). Just as in the case of curves, we shall deal only with piecewise smooth surfaces, i.e., those consisting of a finite number of smooth portions. By a smooth surface we mean a surface representable by

    x = x(p, q),    y = y(p, q),    z = z(p, q),

where p and q are independent parameters and the functions x, y, and z have continuous first partials with respect to p and q in the domain under consideration. Smooth surfaces possess tangential planes at all points and can be oriented; that is, we can distinguish between the positive side and the negative side of the surface. We shall also assume that our piecewise smooth surfaces are constructed in such a way that they are oriented.* It is customary to represent surface elements dS by vectors dS of magnitude dS that are directed along the positive normal to the surface, as illustrated in Fig. 1.14.

* For details, consult, e.g., Kaplan, p. 260 et seq.

Suppose that the vector field u represents the velocity of a moving fluid. It can be seen that the inner product (u · dS) represents the amount of fluid passing through dS per unit time. Indeed, the particles of fluid crossing dS at time t will occupy the face ABCD of the shown parallelepiped at time t + dt, and all particles of fluid which have crossed dS between t and t + dt will be located at t + dt inside the parallelepiped. Consequently, the amount of fluid passing through dS in the interval dt is given by the volume of the parallelepiped, equal to

    dS · |u| · dt · cos θ = (u · dS) dt.

Divide by dt and obtain the desired statement. By analogy with these observations, we define, in general, the flux of a vector field u through a surface S by the surface integral

    Φ = ∫∫_S (u · dS).

In this formula, S can be either an open or a closed surface. A very familiar case of the latter is found in Gauss' theorem in electrostatics.

1.7 VECTOR FIELDS IN PLANE

According to the material of the preceding section, integrals representing circulation and flux are important in the study of vector fields. For vectors in a plane, the circulation integral has the form†

    ∮ (u · ds) = ∮ (u_x dx + u_y dy).

Integrals of this type can be analyzed by means of Green's theorem: If C is a (piecewise smooth) simple closed curve in a simply connected domain D and if P(x, y) and Q(x, y) have continuous first partials in D, then

    ∮_C (P dx + Q dy) = ∫∫_S (∂Q/∂x − ∂P/∂y) dS,

where S is the area bounded by C.

† Unless stated otherwise, it is conventional to take the direction of integration in such integrals, i.e., the orientation of ds, as counterclockwise.

The importance of the requirement that C is a simple closed curve (see p. 18) lies in the fact that we can distinguish the interior of the curve from its exterior by the following rule: As we proceed along the curve in the direction of ds we designate the region appearing on our right as the exterior and that on our left as the interior. If the curve crosses itself such a formulation leads to contradiction, as should be obvious by considering a curve in the shape of a "figure eight."

A domain in a plane is said to be simply connected if every simple closed curve in it has its interior inside the domain as well. Figuratively speaking, a domain is simply connected if it has no "holes" (Fig. 1.15).

Figure 1.15: Simply connected domain; doubly connected domain.

Without going into mathematical details we shall sketch a possible method of proving Green's theorem which greatly facilitates its physical interpretation. First of all, note that P(x, y) and Q(x, y) can always be treated as the components u_x(x, y) and u_y(x, y) of some vector field, and we shall adopt, for convenience, this identification. Let us now divide the area S into a network of meshes, as illustrated in Fig. 1.16(a). Taking the integrals ∮ (u · ds) around each mesh in counterclockwise direction we can easily deduce that

    ∮_C (u · ds) = Σ_{meshes} ∮ (u · ds).

(The contribution from a common boundary between two meshes cancels out because of opposite orientations of vectors ds for each mesh; this leaves only the contributions from the pieces of C.) Furthermore, multiplying and dividing each term in the sum by the area ΔS of each mesh, we obtain

    ∮_C (u · ds) = Σ_{meshes} [∮ (u · ds)/ΔS] ΔS.

Figure 1.16

Suppose now that the number of meshes is increased to infinity and that each mesh "shrinks to a point" so that ΔS → 0. If the limit

    f(x, y) = lim_{ΔS→0} [∮ (u · ds)/ΔS]

exists and is independent of the shape of ΔS,* then the sum reduces to an integral and we have

    ∮_C (u · ds) = ∫∫_S f(x, y) dS.

Therefore, it remains for us to evaluate the function f(x, y).

* Except that the largest diameter of ΔS must approach zero: the mesh should not become infinitesimally thin while retaining finite length.

A typical mesh is shown in Fig. 1.16(b); it need not be rectangular since the arguments presented below are valid for an arbitrary shape. If u_x and u_y have continuous partials, then we can write
    u_x ≈ u_x(P) + (∂u_x/∂x)_P (x − ξ) + (∂u_x/∂y)_P (y − η)

and

    u_y ≈ u_y(P) + (∂u_y/∂x)_P (x − ξ) + (∂u_y/∂y)_P (y − η),

with the approximations being within the first order in |x − ξ| and |y − η|.† Here P(ξ, η) is the fixed point to which ΔS ultimately shrinks and M(x, y) is an arbitrary point on the boundary of the mesh. Writing now

    ∮ (u · ds) = ∮ u_x dx + ∮ u_y dy,

we see that the following six integrals will be needed:

    ∮ dx,    ∮ dy,    ∮ x dy,    ∮ y dx,    ∮ x dx,    ∮ y dy.

The first two are zero, the second two are +ΔS and −ΔS, respectively, and the last two are zero. As a result, we have

    ∮ (u · ds) = [(∂u_y/∂x)_P − (∂u_x/∂y)_P] ΔS,

so that

    f(ξ, η) = lim_{ΔS→0} [∮ (u · ds)/ΔS] = ∂u_y/∂x − ∂u_x/∂y,

where the stipulation that the partials are to be calculated at P(ξ, η) can be omitted since P is now an arbitrary point within C. We have then the result

    ∮ (u_x dx + u_y dy) = ∫∫_S (∂u_y/∂x − ∂u_x/∂y) dS,

which is simply Green's theorem in our notation.

† By the theorem on existence of total differential, guaranteed by the continuity of partial derivatives.

With regard to a vector field u = u_x i + u_y j, the function f(x, y) is known as the curl of u so that, by definition,

    curl u = lim_{ΔS→0} [∮ (u · ds)/ΔS].

We have then evaluated the expression for curl u in (orthogonal) cartesian coordinates in plane:

    curl u = ∂u_y/∂x − ∂u_x/∂y.

Remark. Attention is drawn to the fact that, for a vector field in a plane, curl u is essentially a scalar* and not a vector. The point is that by vectors we must mean quantities expressible as ai + bj and curl u is definitely not of this type, whether or not we introduce the third axis.

A vector field u which has zero curl at some point is called an irrotational field (at that point). If u is irrotational in a simply connected domain, then by Green's theorem, it is a conservative field (gradient field) in this domain, i.e., it has zero circulation. The converse has to be worded rather carefully: If u is a gradient field (namely, u = grad φ), then it is irrotational provided φ has continuous second-order partial derivatives.

* In a more elaborate nomenclature, curl u is called pseudoscalar due to its peculiar property of changing sign if the x- and y-axes are interchanged. See Section 16.5.
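Green's theorem can be checked numerically on a simple contour (an illustrative Python/numpy sketch, not from the original text; the field u = (xy, x²) and the unit-square contour are arbitrary choices made for the demonstration):

```python
import numpy as np

# Green's theorem check on the unit square for u = (u_x, u_y) = (x*y, x**2):
# curl u = d(u_y)/dx - d(u_x)/dy = 2x - x = x, and the double integral of x
# over the square [0,1] x [0,1] equals 1/2.
n = 100001
s = np.linspace(0.0, 1.0, n)

bottom = np.trapz(s * 0.0, s)             # u_x dx along y = 0 (x: 0 -> 1)
right  = np.trapz(np.ones_like(s), s)     # u_y dy along x = 1 (y: 0 -> 1)
top    = -np.trapz(s * 1.0, s)            # u_x dx along y = 1 (x: 1 -> 0)
left   = -np.trapz(np.zeros_like(s), s)   # u_y dy along x = 0 (y: 1 -> 0)

circulation = bottom + right + top + left
assert abs(circulation - 0.5) < 1e-9
print(circulation)   # 0.5 = double integral of curl u over the square
```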
