LECTURE NOTES ON
MATHEMATICAL METHODS
Mihir Sen
Joseph M. Powers
Department of Aerospace and Mechanical Engineering
University of Notre Dame
Notre Dame, Indiana 46556-5637
USA
updated
April 9, 2003
Contents
1 Multivariable calculus 11
1.1 Implicit functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2 Functional dependence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.3 Coordinate Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.3.1 Jacobians and Metric Tensors . . . . . . . . . . . . . . . . . . . . . . 19
1.3.2 Covariance and Contravariance . . . . . . . . . . . . . . . . . . . . . 25
1.4 Maxima and minima . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.4.1 Derivatives of integral expressions . . . . . . . . . . . . . . . . . . . . 31
1.4.2 Calculus of variations . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
1.5 Lagrange multipliers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2 First-order ordinary diﬀerential equations 43
2.1 Separation of variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.2 Homogeneous equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.3 Exact equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.4 Integrating factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.5 Bernoulli equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.6 Riccati equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.7 Reduction of order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.7.1 y absent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.7.2 x absent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.8 Uniqueness and singular solutions . . . . . . . . . . . . . . . . . . . . . . . . 55
2.9 Clairaut equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3 Linear ordinary diﬀerential equations 61
3.1 Linearity and linear independence . . . . . . . . . . . . . . . . . . . . . . . . 61
3.2 Complementary functions for equations with constant coeﬃcients . . . . . . 63
3.2.1 Arbitrary order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.2.2 First order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.2.3 Second order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.3 Complementary functions for equations with variable coeﬃcients . . . . . . . 66
3.3.1 One solution to ﬁnd another . . . . . . . . . . . . . . . . . . . . . . . 66
3.3.2 Euler equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.4 Particular solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.4.1 Method of undetermined coeﬃcients . . . . . . . . . . . . . . . . . . 68
3.4.2 Variation of parameters . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.4.3 Operator D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.4.4 Green’s functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4 Series solution methods 81
4.1 Power series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.1.1 First-order equation . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.1.2 Second-order equation . . . . . . . . . . . . . . . . . . . . . . . . 84
4.1.2.1 Ordinary point . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.1.2.2 Regular singular point . . . . . . . . . . . . . . . . . . . . . 86
4.1.2.3 Irregular singular point . . . . . . . . . . . . . . . . . . . . 89
4.1.3 Higher order equations . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.2 Perturbation methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.2.1 Algebraic and transcendental equations . . . . . . . . . . . . . . . . . 91
4.2.2 Regular perturbations . . . . . . . . . . . . . . . . . . . . . . . . . . 95
4.2.3 Strained coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.2.4 Multiple scales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.2.5 Harmonic approximation . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.2.6 Boundary layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
4.2.7 WKB method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.2.8 Solutions of the type e^(S(x)) . . . . . . . . . . . . . . . . . . . . 112
4.2.9 Repeated substitution . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5 Special functions 121
5.1 Sturm-Liouville equations . . . . . . . . . . . . . . . . . . . . . . . . . . 121
5.1.1 Linear oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
5.1.2 Legendre equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
5.1.3 Chebyshev equation . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
5.1.4 Hermite equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.1.5 Laguerre equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.1.6 Bessel equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.1.6.1 ﬁrst and second kind . . . . . . . . . . . . . . . . . . . . . . 132
5.1.6.2 third kind . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5.1.6.3 modiﬁed Bessel functions . . . . . . . . . . . . . . . . . . . 135
5.1.6.4 ber and bei functions . . . . . . . . . . . . . . . . . . . . . . 135
5.2 Representation of arbitrary functions . . . . . . . . . . . . . . . . . . . . . . 135
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
6 Vectors and tensors 143
6.1 Cartesian index notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
6.2 Cartesian tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
6.2.1 Direction cosines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
6.2.1.1 Scalars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
6.2.1.2 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6.2.1.3 Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6.2.2 Matrix representation . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
6.2.3 Transpose of a tensor, symmetric and antisymmetric tensors . . . . . 148
6.2.4 Dual vector of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . 149
6.2.5 Principal axes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
6.3 Algebra of vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
6.3.1 Deﬁnition and properties . . . . . . . . . . . . . . . . . . . . . . . . . 152
6.3.2 Scalar product (dot product, inner product) . . . . . . . . . . . . . . 152
6.3.3 Cross product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
6.3.4 Scalar triple product . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6.3.5 Identities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6.4 Calculus of vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6.4.1 Vector function of single scalar variable . . . . . . . . . . . . . . . . . 153
6.4.2 Diﬀerential geometry of curves . . . . . . . . . . . . . . . . . . . . . . 155
6.4.2.1 Curves on a plane . . . . . . . . . . . . . . . . . . . . . . . 156
6.4.2.2 Curves in three-dimensional space . . . . . . . . . . . . . . 157
6.5 Line and surface integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
6.5.1 Line integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
6.5.2 Surface integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
6.6 Diﬀerential operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
6.6.1 Gradient of a scalar . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
6.6.2 Divergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
6.6.2.1 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
6.6.2.2 Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
6.6.3 Curl of a vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
6.6.4 Laplacian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
6.6.4.1 Scalar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
6.6.4.2 Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
6.6.5 Identities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
6.7 Special theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
6.7.1 Path independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
6.7.2 Green’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
6.7.3 Gauss’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
6.7.4 Green’s identities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
6.7.5 Stokes’ theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
6.7.6 Leibniz’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
6.8 Orthogonal curvilinear coordinates . . . . . . . . . . . . . . . . . . . . . . . 171
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
7 Linear analysis 175
7.1 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
7.2 Diﬀerentiation and integration . . . . . . . . . . . . . . . . . . . . . . . . . . 176
7.2.1 Fr´echet derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
7.2.2 Riemann integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
7.2.3 Lebesgue integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
7.3 Vector spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
7.3.1 Normed spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
7.3.2 Inner product spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
7.3.2.1 Hilbert space . . . . . . . . . . . . . . . . . . . . . . . . . . 191
7.3.2.2 Non-commutation of the inner product . . . . . . . . . . . . 192
7.3.2.3 Minkowski space . . . . . . . . . . . . . . . . . . . . . . . . 193
7.3.2.4 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . 197
7.3.2.5 Gram-Schmidt procedure . . . . . . . . . . . . . . . . . . . 198
7.3.2.6 Representation of a vector . . . . . . . . . . . . . . . . . . . 199
7.3.2.7 Parseval’s equation, convergence, and completeness . . . . . 205
7.3.3 Reciprocal bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
7.4 Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
7.4.1 Linear operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
7.4.2 Adjoint operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
7.4.3 Inverse operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
7.4.4 Eigenvalues and eigenvectors . . . . . . . . . . . . . . . . . . . . . . . 216
7.5 Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
7.6 Method of weighted residuals . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
8 Linear algebra 245
8.1 Determinants and rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
8.2 Matrix algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
8.2.1 Column, row, left and right null spaces . . . . . . . . . . . . . . . . . 247
8.2.2 Matrix multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
8.2.3 Deﬁnitions and properties . . . . . . . . . . . . . . . . . . . . . . . . 250
8.2.3.1 Diagonal matrices . . . . . . . . . . . . . . . . . . . . . . . 250
8.2.3.2 Inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
8.2.3.3 Similar matrices . . . . . . . . . . . . . . . . . . . . . . . . 253
8.2.4 Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
8.2.4.1 Overconstrained Systems . . . . . . . . . . . . . . . . . . . 253
8.2.4.2 Underconstrained Systems . . . . . . . . . . . . . . . . . . . 256
8.2.4.3 Over and Underconstrained Systems . . . . . . . . . . . . . 257
8.2.4.4 Square Systems . . . . . . . . . . . . . . . . . . . . . . . . . 259
8.2.5 Eigenvalues and eigenvectors . . . . . . . . . . . . . . . . . . . . . . . 261
8.2.6 Complex matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
8.3 Orthogonal and unitary matrices . . . . . . . . . . . . . . . . . . . . . . . . 266
8.3.1 Orthogonal matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
8.3.2 Unitary matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
8.4 Matrix decompositions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
8.4.1 L · D · U decomposition . . . . . . . . . . . . . . . . . . . . . . . 268
8.4.2 Echelon form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
8.4.3 Q · R decomposition . . . . . . . . . . . . . . . . . . . . . . . . . 273
8.4.4 Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
8.4.5 Jordan canonical form . . . . . . . . . . . . . . . . . . . . . . . . . . 280
8.4.6 Schur decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
8.4.7 Singular value decomposition . . . . . . . . . . . . . . . . . . . . . . 282
8.4.8 Hessenberg form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
8.5 Projection matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
8.6 Method of least squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
8.6.1 Unweighted least squares . . . . . . . . . . . . . . . . . . . . . . . . . 286
8.6.2 Weighted least squares . . . . . . . . . . . . . . . . . . . . . . . . . . 287
8.7 Matrix exponential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
8.8 Quadratic form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
8.9 Moore-Penrose inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
9 Dynamical systems 301
9.1 Paradigm problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
9.1.1 Autonomous example . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
9.1.2 Non-autonomous example . . . . . . . . . . . . . . . . . . . . . . 305
9.2 General theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
9.3 Iterated maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
9.4 High-order scalar diﬀerential equations . . . . . . . . . . . . . . . . . . . 311
9.5 Linear systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
9.5.1 Homogeneous equations with constant A . . . . . . . . . . . . . . . . 313
9.5.1.1 n eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . 314
9.5.1.2 < n eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . 315
9.5.1.3 Summary of method . . . . . . . . . . . . . . . . . . . . . . 316
9.5.1.4 Alternative method . . . . . . . . . . . . . . . . . . . . . . . 316
9.5.1.5 Fundamental matrix . . . . . . . . . . . . . . . . . . . . . . 319
9.5.2 Inhomogeneous equations . . . . . . . . . . . . . . . . . . . . . . . . 320
9.5.2.1 Undetermined coeﬃcients . . . . . . . . . . . . . . . . . . . 321
9.5.2.2 Variation of parameters . . . . . . . . . . . . . . . . . . . . 321
9.6 Nonlinear equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
9.6.1 Deﬁnitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
9.6.2 Linear stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
9.6.3 Lyapunov functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
9.6.4 Hamiltonian systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
9.7 Fractals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
9.7.1 Cantor set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
9.7.2 Koch curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
9.7.3 Weierstrass function . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
9.7.4 Mandelbrot and Julia sets . . . . . . . . . . . . . . . . . . . . . . . . 331
9.8 Bifurcations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
9.8.1 Pitchfork bifurcation . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
9.8.2 Transcritical bifurcation . . . . . . . . . . . . . . . . . . . . . . . . . 335
9.8.3 Saddle-node bifurcation . . . . . . . . . . . . . . . . . . . . . . . 336
9.8.4 Hopf bifurcation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
9.9 Lorenz equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
9.9.1 Linear stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
9.9.2 Center manifold projection . . . . . . . . . . . . . . . . . . . . . . . . 341
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
10 Appendix 353
10.1 Trigonometric relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
10.2 Routh-Hurwitz criterion . . . . . . . . . . . . . . . . . . . . . . . . . . 354
10.3 Inﬁnite series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
10.4 Asymptotic expansions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.4.1 Expansion of integrals . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.4.2 Integration and diﬀerentiation of series . . . . . . . . . . . . . . . . . 356
10.5 Limits and continuity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.6 Special functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.6.1 Gamma function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.6.2 Beta function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.6.3 Riemann zeta function . . . . . . . . . . . . . . . . . . . . . . . . . . 357
10.6.4 Error function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
10.6.5 Fresnel integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
10.6.6 Sine- and cosine-integral functions . . . . . . . . . . . . . . . . . . 358
10.6.7 Elliptic integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
10.6.8 Gauss’s hypergeometric function . . . . . . . . . . . . . . . . . . . . . 360
10.6.9 δ distribution and Heaviside function . . . . . . . . . . . . . . . . . . 360
10.7 Singular integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
10.8 Chain rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
10.9 Complex numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
10.9.1 Euler’s formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
10.9.2 Polar and Cartesian representations . . . . . . . . . . . . . . . . . . . 363
10.9.3 Cauchy-Riemann equations . . . . . . . . . . . . . . . . . . . . . . 364
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
Bibliography 367
Preface
These are lecture notes for AME 561 Mathematical Methods I, the ﬁrst of a pair of courses
on applied mathematics taught at the Department of Aerospace and Mechanical Engineering
of the University of Notre Dame. Most of the students in this course are beginning graduate
students in engineering coming from a wide variety of backgrounds. The objective of the
course is to provide a survey of a variety of topics in applied mathematics, including
multidimensional calculus, ordinary diﬀerential equations, perturbation methods, vectors
and tensors, linear analysis, linear algebra, and dynamical systems. The companion course,
AME 562, covers complex variables, integral transforms, and partial diﬀerential equations.
These notes emphasize method and technique over rigor and completeness; the student
should call on textbooks and other reference materials. It should also be remembered that
practice is essential to the learning process; the student would do well to apply the techniques
presented here by working as many problems as possible.
The notes, along with much information on the course itself, can be found on the world
wide web at http://www.nd.edu/∼powers/ame.561. At this stage, anyone is free to duplicate
the notes on their own printers.
These notes have appeared in various forms for the past few years; minor changes and
additions have been made and will continue to be made. We would be happy to hear from
you about errors or suggestions for improvement.
Mihir Sen
Mihir.Sen.1@nd.edu
http://www.nd.edu/∼msen
Joseph M. Powers
powers@nd.edu
http://www.nd.edu/∼powers
Notre Dame, Indiana; USA
April 9, 2003
Copyright © 2003 by Mihir Sen and Joseph M. Powers.
All rights reserved.
Chapter 1
Multivariable calculus
see Kaplan, Chapter 2: 2.1–2.22, Chapter 3: 3.9,
see Riley, Hobson, Bence, Chapters 4, 19, 20,
see Lopez, Chapters 32, 46, 47, 48.
1.1 Implicit functions
We can think of a relation such as f(x₁, x₂, …, xₙ, y) = 0, also written as f(xᵢ, y) = 0, in
some region as an implicit function of y with respect to the other variables. We cannot have
∂f/∂y = 0, because then f would not depend on y in this region. In principle, we can write

    y = y(x₁, x₂, …, xₙ) or y = y(xᵢ)   (1.1)

if ∂f/∂y ≠ 0.

The derivative ∂y/∂xᵢ can be determined from f = 0 without explicitly solving for y. First,
from the chain rule, we have

    df = (∂f/∂x₁) dx₁ + (∂f/∂x₂) dx₂ + … + (∂f/∂xᵢ) dxᵢ + … + (∂f/∂xₙ) dxₙ + (∂f/∂y) dy = 0.   (1.2)

Diﬀerentiating with respect to xᵢ while holding all the other xⱼ, j ≠ i, constant, we get

    ∂f/∂xᵢ + (∂f/∂y)(∂y/∂xᵢ) = 0,   (1.3)

so that

    ∂y/∂xᵢ = − (∂f/∂xᵢ) / (∂f/∂y),   (1.4)

which can be found if ∂f/∂y ≠ 0. That is to say, y can be considered a function of xᵢ if ∂f/∂y ≠ 0.
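As a quick numerical check of Eq. (1.4), consider the hypothetical test function f(x, y) = x² + y² − 1 (not from the text), for which the explicit branch y = √(1 − x²) is available for comparison. A minimal sketch:

```python
# Check Eq. (1.4), dy/dx = -(df/dx)/(df/dy), on the circle
# f(x, y) = x**2 + y**2 - 1 = 0, whose explicit branch y = sqrt(1 - x**2)
# lets us compare against a direct finite-difference derivative.
import math

def f_x(x, y):  # partial of f with respect to x
    return 2.0 * x

def f_y(x, y):  # partial of f with respect to y
    return 2.0 * y

x = 0.6
y = math.sqrt(1.0 - x**2)                 # explicit branch with y > 0

dydx_implicit = -f_x(x, y) / f_y(x, y)    # Eq. (1.4)

h = 1e-7                                  # central finite difference on the explicit branch
dydx_fd = (math.sqrt(1.0 - (x + h)**2) - math.sqrt(1.0 - (x - h)**2)) / (2.0 * h)

print(abs(dydx_implicit - dydx_fd) < 1e-6)  # True
```

Here dy/dx = −x/y = −0.75 at (0.6, 0.8), obtained without ever solving f = 0 for y.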
Let us now consider the equations
f(x, y, u, v) = 0 (1.5)
g(x, y, u, v) = 0 (1.6)
Under certain circumstances, we can unravel these equations (either algebraically or
numerically) to form u = u(x, y), v = v(x, y). The conditions for the existence of such a
functional dependency can be found by diﬀerentiation of the original equations; for example,

    df = (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂u) du + (∂f/∂v) dv = 0.

Holding y constant and dividing by dx, we get

    ∂f/∂x + (∂f/∂u)(∂u/∂x) + (∂f/∂v)(∂v/∂x) = 0.

In the same manner, we get

    ∂g/∂x + (∂g/∂u)(∂u/∂x) + (∂g/∂v)(∂v/∂x) = 0,
    ∂f/∂y + (∂f/∂u)(∂u/∂y) + (∂f/∂v)(∂v/∂y) = 0,
    ∂g/∂y + (∂g/∂u)(∂u/∂y) + (∂g/∂v)(∂v/∂y) = 0.
Two of the above equations can be solved for ∂u/∂x and ∂v/∂x, and the other two for ∂u/∂y
and ∂v/∂y, by using Cramer's¹ rule. To solve for ∂u/∂x and ∂v/∂x, we ﬁrst write two of the
previous equations in matrix form:

    [ ∂f/∂u  ∂f/∂v ] [ ∂u/∂x ]   [ −∂f/∂x ]
    [ ∂g/∂u  ∂g/∂v ] [ ∂v/∂x ] = [ −∂g/∂x ]   (1.7)

Thus from Cramer's rule we have (writing det[a, b; c, d] ≡ ad − bc for a 2 × 2 determinant)

    ∂u/∂x = det[−∂f/∂x, ∂f/∂v; −∂g/∂x, ∂g/∂v] / det[∂f/∂u, ∂f/∂v; ∂g/∂u, ∂g/∂v] ≡ − [∂(f, g)/∂(x, v)] / [∂(f, g)/∂(u, v)],

    ∂v/∂x = det[∂f/∂u, −∂f/∂x; ∂g/∂u, −∂g/∂x] / det[∂f/∂u, ∂f/∂v; ∂g/∂u, ∂g/∂v] ≡ − [∂(f, g)/∂(u, x)] / [∂(f, g)/∂(u, v)].

In a similar fashion, we can form expressions for ∂u/∂y and ∂v/∂y:

    ∂u/∂y = det[−∂f/∂y, ∂f/∂v; −∂g/∂y, ∂g/∂v] / det[∂f/∂u, ∂f/∂v; ∂g/∂u, ∂g/∂v] ≡ − [∂(f, g)/∂(y, v)] / [∂(f, g)/∂(u, v)],

    ∂v/∂y = det[∂f/∂u, −∂f/∂y; ∂g/∂u, −∂g/∂y] / det[∂f/∂u, ∂f/∂v; ∂g/∂u, ∂g/∂v] ≡ − [∂(f, g)/∂(u, y)] / [∂(f, g)/∂(u, v)].

_______________
¹Gabriel Cramer, 1704–1752, well-travelled Swiss-born mathematician who did enunciate his well-known
rule, but was not the ﬁrst to do so.
If the Jacobian² determinant, deﬁned below, is non-zero, the derivatives exist, and we
indeed can form u(x, y) and v(x, y):

    ∂(f, g)/∂(u, v) = det[∂f/∂u, ∂f/∂v; ∂g/∂u, ∂g/∂v] ≠ 0.   (1.8)

This is the condition for the implicit-to-explicit function conversion. Similar conditions hold
for multiple implicit functions fᵢ(x₁, …, xₙ, y₁, …, yₘ) = 0, i = 1, …, m. The derivatives
∂fᵢ/∂xⱼ, i = 1, …, m, j = 1, …, n, exist in some region if the determinant of the matrix
∂fᵢ/∂yⱼ (i, j = 1, …, m) is non-zero in this region.
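The Jacobian-ratio formulas above can be exercised on a small hypothetical system chosen so the answer is known exactly (here f and g are linear, giving u = −x, v = −y); the partial derivatives are hand-coded, so this is a sketch rather than a general implementation:

```python
# Check of the Jacobian-ratio formula
#   du/dx = -[d(f,g)/d(x,v)] / [d(f,g)/d(u,v)]
# on a hypothetical linear system:
#   f = u + v + x + y = 0,  g = u - v + x - y = 0,
# which solves exactly to u = -x, v = -y, so du/dx should be -1.
f_x, f_y, f_u, f_v = 1.0, 1.0, 1.0, 1.0     # partials of f
g_x, g_y, g_u, g_v = 1.0, -1.0, 1.0, -1.0   # partials of g

def det2(a, b, c, d):
    # | a b |
    # | c d |
    return a * d - b * c

du_dx = -det2(f_x, f_v, g_x, g_v) / det2(f_u, f_v, g_u, g_v)
print(du_dx)  # -1.0, matching u = -x
```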
Example 1.1
If

    x + y + u⁶ + u + v = 0,
    xy + uv = 1,

ﬁnd ∂u/∂x.

Note that we have four unknowns in two equations. In principle we could solve for u(x, y) and
v(x, y) and then determine all partial derivatives, such as the one desired. In practice this is not always
possible; for example, there is no general closed-form solution for the roots of sixth-order polynomial
equations such as we have here.

The two equations are rewritten as

    f(x, y, u, v) = x + y + u⁶ + u + v = 0,
    g(x, y, u, v) = xy + uv − 1 = 0.

Using the formula developed above to solve for the desired derivative, we get

    ∂u/∂x = det[−∂f/∂x, ∂f/∂v; −∂g/∂x, ∂g/∂v] / det[∂f/∂u, ∂f/∂v; ∂g/∂u, ∂g/∂v].

Substituting, we get

    ∂u/∂x = det[−1, 1; −y, u] / det[6u⁵ + 1, 1; v, u] = (y − u) / (u(6u⁵ + 1) − v).

Note that when

    v = 6u⁶ + u,

the relevant Jacobian is zero; at such points we can determine neither ∂u/∂x nor ∂u/∂y; thus we cannot
form u(x, y).

At points where the relevant Jacobian ∂(f, g)/∂(u, v) ≠ 0 (which includes nearly all of the (x, y) plane), given
a local value of (x, y), we can use algebra to ﬁnd a corresponding u and v, which may be multi-valued,
and use the formula developed to ﬁnd the local value of the partial derivative.
_______________
²Carl Gustav Jacob Jacobi, 1804–1851, German/Prussian mathematician who used these determinants,
which were ﬁrst studied by Cauchy, in his work on partial diﬀerential equations.
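The closed-form result of Example 1.1 can be spot-checked numerically: solve the system for (u, v) at a test point by Newton iteration (the starting guess u₀ = 0.3 is an assumption that selects one particular solution branch), then compare the formula for ∂u/∂x with a finite-difference estimate:

```python
# Numerical check of Example 1.1. The second equation gives
# v = (1 - x*y)/u, so the system reduces to one equation in u,
# solved here by Newton iteration from an assumed starting guess.
def solve_u(x, y, u0=0.3):
    """Solve x + y + u**6 + u + v = 0, x*y + u*v = 1 for u,
    with v eliminated via v = (1 - x*y)/u."""
    u = u0
    for _ in range(60):
        F = x + y + u**6 + u + (1.0 - x * y) / u
        dF = 6.0 * u**5 + 1.0 - (1.0 - x * y) / u**2
        u -= F / dF
    return u

x, y = 1.0, 2.0
u = solve_u(x, y)
v = (1.0 - x * y) / u

# closed-form derivative from the example
du_dx_formula = (y - u) / (u * (6.0 * u**5 + 1.0) - v)

# finite-difference derivative: re-solve at perturbed x
h = 1e-6
du_dx_fd = (solve_u(x + h, y) - solve_u(x - h, y)) / (2.0 * h)

print(abs(du_dx_formula - du_dx_fd) < 1e-5)  # True
```

The two estimates agree to several digits, confirming the implicit-function formula on this branch.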
1.2 Functional dependence
Let u = u(x, y) and v = v(x, y). If we can write u = g(v) or v = h(u), then u and v are said
to be functionally dependent. If functional dependence between u and v exists, then we can
consider f(u, v) = 0. So

    (∂f/∂u)(∂u/∂x) + (∂f/∂v)(∂v/∂x) = 0,   (1.9)
    (∂f/∂u)(∂u/∂y) + (∂f/∂v)(∂v/∂y) = 0,   (1.10)

or, in matrix form,

    [ ∂u/∂x  ∂v/∂x ] [ ∂f/∂u ]   [ 0 ]
    [ ∂u/∂y  ∂v/∂y ] [ ∂f/∂v ] = [ 0 ]   (1.11)

Since the right-hand side is zero, and we desire a non-trivial solution, the determinant of the
coeﬃcient matrix must be zero for functional dependency, i.e.

    det[∂u/∂x, ∂v/∂x; ∂u/∂y, ∂v/∂y] = 0.   (1.12)

Note, since det A = det Aᵀ, that this is equivalent to

    det[∂u/∂x, ∂u/∂y; ∂v/∂x, ∂v/∂y] = ∂(u, v)/∂(x, y) = 0.   (1.13)

That is, the Jacobian must be zero.
Example 1.2
Determine if

    u = y + z,
    v = x + 2z²,
    w = x − 4yz − 2y²

are functionally dependent.

The determinant of the resulting coeﬃcient matrix, by extension to three functions of three
variables, is

    ∂(u, v, w)/∂(x, y, z) = det[∂u/∂x, ∂u/∂y, ∂u/∂z; ∂v/∂x, ∂v/∂y, ∂v/∂z; ∂w/∂x, ∂w/∂y, ∂w/∂z]
                          = det[∂u/∂x, ∂v/∂x, ∂w/∂x; ∂u/∂y, ∂v/∂y, ∂w/∂y; ∂u/∂z, ∂v/∂z, ∂w/∂z]
                          = det[0, 1, 1; 1, 0, −4(y + z); 1, 4z, −4y]
                          = (−1)(−4y − (−4)(y + z)) + (1)(4z)
                          = 4y − 4y − 4z + 4z
                          = 0.

So, u, v, w are functionally dependent. In fact w = v − 2u².
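A numerical spot-check of Example 1.2: the Jacobian ∂(u, v, w)/∂(x, y, z) should vanish at every point, and the dependency w = v − 2u² should hold identically. A minimal sketch (the test points are arbitrary):

```python
# Spot-check of Example 1.2: the 3x3 Jacobian of (u, v, w) with respect
# to (x, y, z) vanishes everywhere, and w = v - 2u**2 holds identically.
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

for (x, y, z) in [(1.0, 2.0, 3.0), (-0.5, 0.25, 4.0), (7.0, -1.0, 0.5)]:
    u = y + z
    v = x + 2.0 * z**2
    w = x - 4.0 * y * z - 2.0 * y**2
    J = [[0.0, 1.0, 1.0],                        # du/dx, du/dy, du/dz
         [1.0, 0.0, 4.0 * z],                    # dv/dx, dv/dy, dv/dz
         [1.0, -4.0 * z - 4.0 * y, -4.0 * y]]    # dw/dx, dw/dy, dw/dz
    assert abs(det3(J)) < 1e-12
    assert abs(w - (v - 2.0 * u**2)) < 1e-12
print("functional dependence confirmed at all test points")
```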
Example 1.3
Let

    x + y + z = 0,
    x² + y² + z² + 2xz = 1.

Can x and y be considered as functions of z?

If x = x(z) and y = y(z), then dx/dz and dy/dz must exist. If we take

    f(x, y, z) = x + y + z = 0,
    g(x, y, z) = x² + y² + z² + 2xz − 1 = 0,

then

    df = (∂f/∂z) dz + (∂f/∂x) dx + (∂f/∂y) dy = 0,
    dg = (∂g/∂z) dz + (∂g/∂x) dx + (∂g/∂y) dy = 0,

so that

    ∂f/∂z + (∂f/∂x)(dx/dz) + (∂f/∂y)(dy/dz) = 0,
    ∂g/∂z + (∂g/∂x)(dx/dz) + (∂g/∂y)(dy/dz) = 0,

or, in matrix form,

    [ ∂f/∂x  ∂f/∂y ] [ dx/dz ]   [ −∂f/∂z ]
    [ ∂g/∂x  ∂g/∂y ] [ dy/dz ] = [ −∂g/∂z ]

Then the solution vector (dx/dz, dy/dz)ᵀ can be obtained by Cramer's rule:

    dx/dz = det[−∂f/∂z, ∂f/∂y; −∂g/∂z, ∂g/∂y] / det[∂f/∂x, ∂f/∂y; ∂g/∂x, ∂g/∂y]
          = det[−1, 1; −(2z + 2x), 2y] / det[1, 1; 2x + 2z, 2y]
          = (−2y + 2z + 2x) / (2y − 2x − 2z) = −1,

    dy/dz = det[∂f/∂x, −∂f/∂z; ∂g/∂x, −∂g/∂z] / det[∂f/∂x, ∂f/∂y; ∂g/∂x, ∂g/∂y]
          = det[1, −1; 2x + 2z, −(2z + 2x)] / det[1, 1; 2x + 2z, 2y]
          = 0 / (2y − 2x − 2z).

Note here that in the expression for dx/dz the numerator and denominator cancel; there is no special
condition deﬁned by the Jacobian determinant of the denominator being zero. In the second expression,
dy/dz = 0/0 if y − x − z = 0, in which case this formula cannot give us the derivative.

Now in fact, it is easily shown by algebraic manipulations (which for more general functions are
not possible) that

    x(z) = −z ± √2/2,
    y(z) = ∓ √2/2.

Note that along these solutions y − x − z = y − (x + z) = 2y = ∓√2 ≠ 0, so the Jacobian determinant
∂(f, g)/∂(x, y) = 2y − 2x − 2z = ∓2√2 does not vanish, and the above expression is well deﬁned; it gives
dy/dz = 0/(∓2√2) = 0, in agreement with the explicit expression y = ∓√2/2.
Figure 1.1: Surfaces of x + y + z = 0 and x² + y² + z² + 2xz = 1, and their loci of intersection
The two original functions and their loci of intersection are plotted in Figure 1.1.
It is seen that the surface represented by the quadratic function is an open cylindrical tube, and that
represented by the linear function is a plane. Note that planes and cylinders may or may not intersect.
If they intersect, the intersection is typically a closed curve. However, when the plane
is aligned with the axis of the cylinder, the intersection consists of two non-intersecting parallel lines;
such is the case in this example.
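The conclusions of this example can be verified numerically along the explicit branch x = −z + √2/2, y = −√2/2 (the other branch behaves the same way):

```python
# Spot-check of Example 1.3 on the branch x = -z + sqrt(2)/2, y = -sqrt(2)/2:
# both defining equations hold, the denominator 2y - 2x - 2z stays away
# from zero, and the formulas give dx/dz = -1 and dy/dz = 0.
import math

r = math.sqrt(2.0) / 2.0
for z in [-0.8, 0.0, 1.3]:
    x = -z + r
    y = -r
    assert abs(x + y + z) < 1e-12                               # plane
    assert abs(x**2 + y**2 + z**2 + 2.0 * x * z - 1.0) < 1e-12  # cylinder
    denom = 2.0 * y - 2.0 * x - 2.0 * z
    assert abs(denom) > 1.0          # nonzero: Cramer's rule is well defined
    dx_dz = (-2.0 * y + 2.0 * z + 2.0 * x) / denom
    dy_dz = 0.0 / denom
    assert abs(dx_dz + 1.0) < 1e-12
    assert dy_dz == 0.0
print("dx/dz = -1 and dy/dz = 0 along the line of intersection")
```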
Let us see how slightly altering the equation for the plane removes the degeneracy. Take now

    5x + y + z = 0,
    x² + y² + z² + 2xz = 1.

Can x and y be considered as functions of z?

If x = x(z) and y = y(z), then dx/dz and dy/dz must exist. If we take

    f(x, y, z) = 5x + y + z = 0,
    g(x, y, z) = x² + y² + z² + 2xz − 1 = 0,

then the solution vector (dx/dz, dy/dz)ᵀ is found as before:

    dx/dz = det[−∂f/∂z, ∂f/∂y; −∂g/∂z, ∂g/∂y] / det[∂f/∂x, ∂f/∂y; ∂g/∂x, ∂g/∂y]
          = det[−1, 1; −(2z + 2x), 2y] / det[5, 1; 2x + 2z, 2y]
          = (−2y + 2z + 2x) / (10y − 2x − 2z),

    dy/dz = det[∂f/∂x, −∂f/∂z; ∂g/∂x, −∂g/∂z] / det[∂f/∂x, ∂f/∂y; ∂g/∂x, ∂g/∂y]
          = det[5, −1; 2x + 2z, −(2z + 2x)] / det[5, 1; 2x + 2z, 2y]
          = (−8x − 8z) / (10y − 2x − 2z).

The two original functions and their loci of intersection are plotted in Figure 1.2.
Straightforward algebra in this case shows that an explicit dependency exists:

    x(z) = (−6z ± √2 √(13 − 8z²)) / 26,
    y(z) = (4z ∓ 5√2 √(13 − 8z²)) / 26,

where the expression for y follows from y = −5x − z.
Figure 1.2: Surfaces of 5x + y + z = 0 and x² + y² + z² + 2xz = 1, and their loci of intersection
These curves represent the projection of the curve of intersection on the x − z and y − z planes,
respectively. In both cases, the projections are ellipses.
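The explicit branch formulas can be checked against both defining equations at a few sample values of z (the test points are arbitrary):

```python
# Check that the explicit branches for the altered plane,
#   x(z) = (-6z + s*sqrt(2)*sqrt(13 - 8z**2))/26,  s = +1 or -1,
#   y(z) = -5x - z,
# satisfy both 5x + y + z = 0 and x**2 + y**2 + z**2 + 2xz = 1.
import math

for z in [-1.0, 0.0, 0.7]:
    for sign in (+1.0, -1.0):
        root = math.sqrt(13.0 - 8.0 * z**2)
        x = (-6.0 * z + sign * math.sqrt(2.0) * root) / 26.0
        y = (4.0 * z - sign * 5.0 * math.sqrt(2.0) * root) / 26.0   # equals -5x - z
        assert abs(5.0 * x + y + z) < 1e-12
        assert abs(x**2 + y**2 + z**2 + 2.0 * x * z - 1.0) < 1e-12
print("both branches lie on the plane and the cylinder")
```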
1.3 Coordinate Transformations
Many problems are formulated in three-dimensional Cartesian³ space. However, many of
these problems, especially those involving curved geometrical bodies, are better posed in a
non-Cartesian, curvilinear coordinate system. As such, one needs techniques to transform
from one coordinate system to another.

For this section, we will take Cartesian coordinates to be represented by (ξ¹, ξ², ξ³). Here
the superscript is an index and does not represent a power of ξ. We will denote this point
by ξⁱ, where i = 1, 2, 3. Since the space is Cartesian, we have the usual Euclidean⁴ formula
for arc length s:

    (ds)² = (dξ¹)² + (dξ²)² + (dξ³)²,   (1.14)

    (ds)² = Σ_{i=1}^{3} dξⁱ dξⁱ ≡ dξⁱ dξⁱ.   (1.15)

Here we have adopted the summation convention that when an index appears twice, a
summation from 1 to 3 is understood.

_______________
³René Descartes, 1596–1650, French mathematician and philosopher.
⁴Euclid of Alexandria, ∼325 B.C.–∼265 B.C., Greek geometer.
Now let us map a point from a point in (ξ
1
, ξ
2
, ξ
3
) space to a point in a more convenient
(x
1
, x
2
, x
3
) space. This mapping is achieved by deﬁning the following functional dependen
cies:
x
1
= x
1
(ξ
1
, ξ
2
, ξ
3
) (1.16)
x
2
= x
2
(ξ
1
, ξ
2
, ξ
3
) (1.17)
x
3
= x
3
(ξ
1
, ξ
2
, ξ
3
) (1.18)
Taking derivatives can tell us whether the inverse exists.
\[
dx^1 = \frac{\partial x^1}{\partial \xi^1} d\xi^1 + \frac{\partial x^1}{\partial \xi^2} d\xi^2 + \frac{\partial x^1}{\partial \xi^3} d\xi^3 = \frac{\partial x^1}{\partial \xi^j} d\xi^j \qquad (1.19)
\]
\[
dx^2 = \frac{\partial x^2}{\partial \xi^1} d\xi^1 + \frac{\partial x^2}{\partial \xi^2} d\xi^2 + \frac{\partial x^2}{\partial \xi^3} d\xi^3 = \frac{\partial x^2}{\partial \xi^j} d\xi^j \qquad (1.20)
\]
\[
dx^3 = \frac{\partial x^3}{\partial \xi^1} d\xi^1 + \frac{\partial x^3}{\partial \xi^2} d\xi^2 + \frac{\partial x^3}{\partial \xi^3} d\xi^3 = \frac{\partial x^3}{\partial \xi^j} d\xi^j \qquad (1.21)
\]
\[
\begin{pmatrix} dx^1 \\ dx^2 \\ dx^3 \end{pmatrix}
=
\begin{pmatrix}
\frac{\partial x^1}{\partial \xi^1} & \frac{\partial x^1}{\partial \xi^2} & \frac{\partial x^1}{\partial \xi^3} \\
\frac{\partial x^2}{\partial \xi^1} & \frac{\partial x^2}{\partial \xi^2} & \frac{\partial x^2}{\partial \xi^3} \\
\frac{\partial x^3}{\partial \xi^1} & \frac{\partial x^3}{\partial \xi^2} & \frac{\partial x^3}{\partial \xi^3}
\end{pmatrix}
\begin{pmatrix} d\xi^1 \\ d\xi^2 \\ d\xi^3 \end{pmatrix} \qquad (1.22)
\]
\[
dx^i = \frac{\partial x^i}{\partial \xi^j} d\xi^j \qquad (1.23)
\]
In order for the inverse to exist we must have a nonzero Jacobian for the transformation,
i.e.
\[
\frac{\partial(x^1, x^2, x^3)}{\partial(\xi^1, \xi^2, \xi^3)} \neq 0 \qquad (1.24)
\]
It can then be inferred that the inverse transformation exists.
\[
\xi^1 = \xi^1(x^1, x^2, x^3) \qquad (1.25)
\]
\[
\xi^2 = \xi^2(x^1, x^2, x^3) \qquad (1.26)
\]
\[
\xi^3 = \xi^3(x^1, x^2, x^3) \qquad (1.27)
\]
Likewise then,
\[
d\xi^i = \frac{\partial \xi^i}{\partial x^j} dx^j \qquad (1.28)
\]
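The invertibility condition (1.24) can be tested mechanically. The sketch below, assuming the sympy library, computes the Jacobian of an inverse map $\xi^i(x^j)$ and its determinant for a cylindrical-type map (chosen purely for illustration; any smooth map could be substituted):

```python
import sympy as sp

# Symbols for the curvilinear coordinates x^1, x^2, x^3.
x1, x2, x3 = sp.symbols('x1 x2 x3')

# Hypothetical inverse map xi^i(x^j); a cylindrical-type map is used here
# only as an illustration.
xi = sp.Matrix([x1*sp.cos(x2), x1*sp.sin(x2), x3])
x = sp.Matrix([x1, x2, x3])

J = xi.jacobian(x)           # J_ij = d xi^i / d x^j, as in eq. (1.29) below
detJ = sp.simplify(J.det())  # must be nonzero for local invertibility

# detJ simplifies to x1: the map is invertible away from x1 = 0.
```

Wherever the determinant vanishes (here the axis $x^1 = 0$), no unique inverse exists.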
1.3.1 Jacobians and Metric Tensors
Defining^5 the Jacobian matrix $J$, which we associate with the inverse transformation, that is, the transformation from non-Cartesian to Cartesian coordinates, to be
\[
J = \frac{\partial \xi^i}{\partial x^j} =
\begin{pmatrix}
\frac{\partial \xi^1}{\partial x^1} & \frac{\partial \xi^1}{\partial x^2} & \frac{\partial \xi^1}{\partial x^3} \\
\frac{\partial \xi^2}{\partial x^1} & \frac{\partial \xi^2}{\partial x^2} & \frac{\partial \xi^2}{\partial x^3} \\
\frac{\partial \xi^3}{\partial x^1} & \frac{\partial \xi^3}{\partial x^2} & \frac{\partial \xi^3}{\partial x^3}
\end{pmatrix} \qquad (1.29)
\]
we can rewrite $d\xi^i$ in Gibbs'^6 vector notation as
\[
d\xi = J\, dx \qquad (1.30)
\]
Now for Euclidean spaces, distance must be independent of coordinate systems, so we require
\[
(ds)^2 = d\xi^i\, d\xi^i = \left(\frac{\partial \xi^i}{\partial x^k} dx^k\right)\left(\frac{\partial \xi^i}{\partial x^l} dx^l\right) = \frac{\partial \xi^i}{\partial x^k}\frac{\partial \xi^i}{\partial x^l}\, dx^k\, dx^l \qquad (1.31)
\]
In Gibbs' vector notation this becomes
\[
(ds)^2 = d\xi^T\, d\xi \qquad (1.32)
\]
\[
= (J\, dx)^T (J\, dx) \qquad (1.33)
\]
\[
= dx^T\, J^T J\, dx \qquad (1.34)
\]
If we define the metric tensor, $g_{kl}$ or $G$, as follows:
\[
g_{kl} = \frac{\partial \xi^i}{\partial x^k}\frac{\partial \xi^i}{\partial x^l} \qquad (1.35)
\]
\[
G = J^T J \qquad (1.36)
\]
then we have, equivalently in both index and Gibbs notation,
\[
(ds)^2 = g_{kl}\, dx^k\, dx^l \qquad (1.37)
\]
\[
(ds)^2 = dx^T\, G\, dx \qquad (1.38)
\]
Now $g_{kl}$ can be represented as a matrix. If we define
\[
g = \det(g_{kl}), \qquad (1.39)
\]
^5 The definition we adopt is that used in most texts, including Kaplan. A few, e.g. Aris, define the Jacobian determinant in terms of the transpose of the Jacobian matrix, which is not problematic since the two are the same. Extending this, an argument can be made that a better definition of the Jacobian matrix would be the transpose of the traditional Jacobian matrix. This is because when one considers that the differential operator acts first, the Jacobian matrix is really $\frac{\partial}{\partial x^j}\xi^i$, and the alternative definition is more consistent with traditional matrix notation, which would have the first row as $\frac{\partial}{\partial x^1}\xi^1, \frac{\partial}{\partial x^1}\xi^2, \frac{\partial}{\partial x^1}\xi^3$. As long as one realizes the implications of the notation, however, the convention adopted ultimately does not matter.
^6 Josiah Willard Gibbs, 1839-1903, prolific American physicist and mathematician with a lifetime affiliation with Yale University.
it can be shown that the ratio of volumes of differential elements in one space to that of the other is given by
\[
d\xi^1\, d\xi^2\, d\xi^3 = \sqrt{g}\; dx^1\, dx^2\, dx^3 \qquad (1.40)
\]
We also require dependent variables and all derivatives to take on the same values at corresponding points in each space, e.g. if $S$ [$S = f(\xi^1, \xi^2, \xi^3) = h(x^1, x^2, x^3)$] is a dependent variable defined at $(\hat\xi^1, \hat\xi^2, \hat\xi^3)$, and $(\hat\xi^1, \hat\xi^2, \hat\xi^3)$ maps into $(\hat x^1, \hat x^2, \hat x^3)$, we require $f(\hat\xi^1, \hat\xi^2, \hat\xi^3) = h(\hat x^1, \hat x^2, \hat x^3)$.
The chain rule lets us transform derivatives to other spaces:
\[
\left( \frac{\partial S}{\partial \xi^1}\;\; \frac{\partial S}{\partial \xi^2}\;\; \frac{\partial S}{\partial \xi^3} \right)
=
\left( \frac{\partial S}{\partial x^1}\;\; \frac{\partial S}{\partial x^2}\;\; \frac{\partial S}{\partial x^3} \right)
\begin{pmatrix}
\frac{\partial x^1}{\partial \xi^1} & \frac{\partial x^1}{\partial \xi^2} & \frac{\partial x^1}{\partial \xi^3} \\
\frac{\partial x^2}{\partial \xi^1} & \frac{\partial x^2}{\partial \xi^2} & \frac{\partial x^2}{\partial \xi^3} \\
\frac{\partial x^3}{\partial \xi^1} & \frac{\partial x^3}{\partial \xi^2} & \frac{\partial x^3}{\partial \xi^3}
\end{pmatrix} \qquad (1.41)
\]
\[
\frac{\partial S}{\partial \xi^i} = \frac{\partial S}{\partial x^j} \frac{\partial x^j}{\partial \xi^i} \qquad (1.42)
\]
This can also be inverted, given that $g \neq 0$, to find $\left(\frac{\partial S}{\partial x^1}, \frac{\partial S}{\partial x^2}, \frac{\partial S}{\partial x^3}\right)^T$. The fact that the gradient operator required the use of row vectors in conjunction with the Jacobian matrix, while the transformation of distance, earlier in this section, required the use of column vectors, is of fundamental importance, and will be examined further in an upcoming section where we distinguish between what are known as covariant and contravariant vectors.
Example 1.4
Transform the Cartesian equation
\[
\frac{\partial S}{\partial \xi^1} - S = \left(\xi^1\right)^2 + \left(\xi^2\right)^2
\]
under the following:
1. Cartesian to affine coordinates.
Consider the following linear non-orthogonal transformation (transformations of this type are known as affine):
\[
x^1 = 4\xi^1 + 2\xi^2
\]
\[
x^2 = 3\xi^1 + 2\xi^2
\]
\[
x^3 = \xi^3
\]
This is a linear system of three equations in three unknowns; using standard techniques of linear algebra allows us to solve for $\xi^1, \xi^2, \xi^3$ in terms of $x^1, x^2, x^3$; that is, we find the inverse transformation, which is
\[
\xi^1 = x^1 - x^2
\]
\[
\xi^2 = -\tfrac{3}{2} x^1 + 2 x^2
\]
\[
\xi^3 = x^3
\]
Lines of constant $x^1$ and $x^2$ in the $\xi^1, \xi^2$ plane are plotted in Figure 1.3.
[Figure 1.3 appears here in the original.]
Figure 1.3: Lines of constant $x^1$ and $x^2$ in the $\xi^1, \xi^2$ plane for the affine transformation of the example problem.
The appropriate Jacobian matrix for the inverse transformation is
\[
J = \frac{\partial \xi^i}{\partial x^j} = \frac{\partial(\xi^1, \xi^2, \xi^3)}{\partial(x^1, x^2, x^3)} =
\begin{pmatrix}
\frac{\partial \xi^1}{\partial x^1} & \frac{\partial \xi^1}{\partial x^2} & \frac{\partial \xi^1}{\partial x^3} \\
\frac{\partial \xi^2}{\partial x^1} & \frac{\partial \xi^2}{\partial x^2} & \frac{\partial \xi^2}{\partial x^3} \\
\frac{\partial \xi^3}{\partial x^1} & \frac{\partial \xi^3}{\partial x^2} & \frac{\partial \xi^3}{\partial x^3}
\end{pmatrix}
\]
\[
J = \begin{pmatrix} 1 & -1 & 0 \\ -\tfrac{3}{2} & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
The determinant of the Jacobian matrix is
\[
2 - \tfrac{3}{2} = \tfrac{1}{2}
\]
So a unique transformation always exists, since the Jacobian determinant is never zero.
The metric tensor is
\[
g_{kl} = \frac{\partial \xi^i}{\partial x^k}\frac{\partial \xi^i}{\partial x^l}
= \frac{\partial \xi^1}{\partial x^k}\frac{\partial \xi^1}{\partial x^l}
+ \frac{\partial \xi^2}{\partial x^k}\frac{\partial \xi^2}{\partial x^l}
+ \frac{\partial \xi^3}{\partial x^k}\frac{\partial \xi^3}{\partial x^l}
\]
For example, for $k = 1$, $l = 1$ we get
\[
g_{11} = \frac{\partial \xi^i}{\partial x^1}\frac{\partial \xi^i}{\partial x^1}
= \frac{\partial \xi^1}{\partial x^1}\frac{\partial \xi^1}{\partial x^1}
+ \frac{\partial \xi^2}{\partial x^1}\frac{\partial \xi^2}{\partial x^1}
+ \frac{\partial \xi^3}{\partial x^1}\frac{\partial \xi^3}{\partial x^1}
\]
\[
g_{11} = (1)(1) + \left(-\tfrac{3}{2}\right)\left(-\tfrac{3}{2}\right) + (0)(0) = \tfrac{13}{4}
\]
Repeating this operation, we find the complete metric tensor is
\[
g_{kl} = \begin{pmatrix} \tfrac{13}{4} & -4 & 0 \\ -4 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
\[
g = \det(g_{kl}) = \tfrac{65}{4} - 16 = \tfrac{1}{4}
\]
This is equivalent to the calculation in Gibbs notation:
\[
G = J^T J
\]
\[
G = \begin{pmatrix} 1 & -\tfrac{3}{2} & 0 \\ -1 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & -1 & 0 \\ -\tfrac{3}{2} & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
\[
G = \begin{pmatrix} \tfrac{13}{4} & -4 & 0 \\ -4 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
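This matrix arithmetic is easy to confirm by machine. A minimal check, assuming the sympy library, of $G = J^T J$ and $g = \det G$ for the affine example:

```python
import sympy as sp

# Jacobian of the inverse affine map xi(x) from the example above.
J = sp.Matrix([[1, -1, 0],
               [sp.Rational(-3, 2), 2, 0],
               [0, 0, 1]])

G = J.T * J   # metric tensor, eq. (1.36)
g = G.det()   # its determinant, eq. (1.39); expect 1/4
```

The computed G reproduces the matrix quoted in the text, and g = 1/4.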
Distance in the transformed system is given by
\[
(ds)^2 = g_{kl}\, dx^k\, dx^l
\]
\[
(ds)^2 = dx^T\, G\, dx
\]
\[
(ds)^2 = \left( dx^1\;\; dx^2\;\; dx^3 \right)
\begin{pmatrix} \tfrac{13}{4} & -4 & 0 \\ -4 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} dx^1 \\ dx^2 \\ dx^3 \end{pmatrix}
\]
\[
(ds)^2 = \left( dx^1\;\; dx^2\;\; dx^3 \right)
\begin{pmatrix} \tfrac{13}{4}\, dx^1 - 4\, dx^2 \\ -4\, dx^1 + 5\, dx^2 \\ dx^3 \end{pmatrix}
\]
\[
(ds)^2 = \tfrac{13}{4}\left(dx^1\right)^2 + 5\left(dx^2\right)^2 + \left(dx^3\right)^2 - 8\, dx^1\, dx^2
\]
Algebraic manipulation reveals that this can be rewritten as follows:
\[
(ds)^2 = 8.22\left(0.627\, dx^1 - 0.779\, dx^2\right)^2 + 0.0304\left(0.779\, dx^1 + 0.627\, dx^2\right)^2 + \left(dx^3\right)^2
\]
Note:
• The Jacobian matrix $J$ is not symmetric.
• The metric tensor $G = J^T J$ is symmetric.
• The fact that the metric tensor has nonzero off-diagonal elements is a consequence of the transformation being non-orthogonal.
• The distance is guaranteed to be positive. This will be true for all affine transformations in ordinary three-dimensional Euclidean space. In the generalized space-time continuum suggested by the theory of relativity, the generalized distance may in fact be negative; this generalized distance $ds$ for an infinitesimal change in space and time is given by $ds^2 = \left(d\xi^1\right)^2 + \left(d\xi^2\right)^2 + \left(d\xi^3\right)^2 - \left(d\xi^4\right)^2$, where the first three coordinates are the ordinary Cartesian space coordinates and the fourth is given by $\left(d\xi^4\right)^2 = (c\, dt)^2$, where $c$ is the speed of light.
Also we have the volume ratio of differential elements as
\[
d\xi^1\, d\xi^2\, d\xi^3 = \tfrac{1}{2}\, dx^1\, dx^2\, dx^3
\]
Now
\[
\frac{\partial S}{\partial \xi^1}
= \frac{\partial S}{\partial x^1}\frac{\partial x^1}{\partial \xi^1}
+ \frac{\partial S}{\partial x^2}\frac{\partial x^2}{\partial \xi^1}
+ \frac{\partial S}{\partial x^3}\frac{\partial x^3}{\partial \xi^1}
= 4\frac{\partial S}{\partial x^1} + 3\frac{\partial S}{\partial x^2}
\]
[Figure 1.4 appears here in the original.]
Figure 1.4: Lines of constant $x^1$ and $x^2$ in the $\xi^1, \xi^2$ plane for the cylindrical transformation of the example problem.
So the transformed equation becomes
\[
4\frac{\partial S}{\partial x^1} + 3\frac{\partial S}{\partial x^2} - S
= \left(x^1 - x^2\right)^2 + \left(-\tfrac{3}{2} x^1 + 2 x^2\right)^2
\]
\[
4\frac{\partial S}{\partial x^1} + 3\frac{\partial S}{\partial x^2} - S
= \tfrac{13}{4}\left(x^1\right)^2 - 8\, x^1 x^2 + 5\left(x^2\right)^2
\]
2. Cartesian to cylindrical coordinates.
The transformations are
\[
x^1 = +\sqrt{\left(\xi^1\right)^2 + \left(\xi^2\right)^2}
\]
\[
x^2 = \tan^{-1}\left(\frac{\xi^2}{\xi^1}\right)
\]
\[
x^3 = \xi^3
\]
Note this system of equations is nonlinear. For such systems, we cannot always find an explicit algebraic expression for the inverse transformation. In this case, some straightforward algebraic and trigonometric manipulation reveals that we can find an explicit representation of the inverse transformation, which is
\[
\xi^1 = x^1 \cos x^2
\]
\[
\xi^2 = x^1 \sin x^2
\]
\[
\xi^3 = x^3
\]
Lines of constant $x^1$ and $x^2$ in the $\xi^1, \xi^2$ plane are plotted in Figure 1.4. Notice that the lines of constant $x^1$ are orthogonal to lines of constant $x^2$ in the Cartesian $\xi^1, \xi^2$ plane. For general transformations, this will not be the case.
The appropriate Jacobian matrix for the inverse transformation is
\[
J = \frac{\partial \xi^i}{\partial x^j} = \frac{\partial(\xi^1, \xi^2, \xi^3)}{\partial(x^1, x^2, x^3)} =
\begin{pmatrix}
\frac{\partial \xi^1}{\partial x^1} & \frac{\partial \xi^1}{\partial x^2} & \frac{\partial \xi^1}{\partial x^3} \\
\frac{\partial \xi^2}{\partial x^1} & \frac{\partial \xi^2}{\partial x^2} & \frac{\partial \xi^2}{\partial x^3} \\
\frac{\partial \xi^3}{\partial x^1} & \frac{\partial \xi^3}{\partial x^2} & \frac{\partial \xi^3}{\partial x^3}
\end{pmatrix}
\]
\[
J = \begin{pmatrix}
\cos x^2 & -x^1 \sin x^2 & 0 \\
\sin x^2 & x^1 \cos x^2 & 0 \\
0 & 0 & 1
\end{pmatrix}
\]
The determinant of the Jacobian matrix is
\[
x^1 \cos^2 x^2 + x^1 \sin^2 x^2 = x^1.
\]
So a unique transformation fails to exist when $x^1 = 0$.
The metric tensor is
\[
g_{kl} = \frac{\partial \xi^i}{\partial x^k}\frac{\partial \xi^i}{\partial x^l}
= \frac{\partial \xi^1}{\partial x^k}\frac{\partial \xi^1}{\partial x^l}
+ \frac{\partial \xi^2}{\partial x^k}\frac{\partial \xi^2}{\partial x^l}
+ \frac{\partial \xi^3}{\partial x^k}\frac{\partial \xi^3}{\partial x^l}
\]
For example, for $k = 1$, $l = 1$ we get
\[
g_{11} = \frac{\partial \xi^i}{\partial x^1}\frac{\partial \xi^i}{\partial x^1}
= \frac{\partial \xi^1}{\partial x^1}\frac{\partial \xi^1}{\partial x^1}
+ \frac{\partial \xi^2}{\partial x^1}\frac{\partial \xi^2}{\partial x^1}
+ \frac{\partial \xi^3}{\partial x^1}\frac{\partial \xi^3}{\partial x^1}
\]
\[
g_{11} = \cos^2 x^2 + \sin^2 x^2 + 0 = 1
\]
Repeating this operation, we find the complete metric tensor is
\[
g_{kl} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \left(x^1\right)^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
\[
g = \det(g_{kl}) = \left(x^1\right)^2
\]
This is equivalent to the calculation in Gibbs notation:
\[
G = J^T J
\]
\[
G = \begin{pmatrix}
\cos x^2 & \sin x^2 & 0 \\
-x^1 \sin x^2 & x^1 \cos x^2 & 0 \\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
\cos x^2 & -x^1 \sin x^2 & 0 \\
\sin x^2 & x^1 \cos x^2 & 0 \\
0 & 0 & 1
\end{pmatrix}
\]
\[
G = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \left(x^1\right)^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
Distance in the transformed system is given by
\[
(ds)^2 = g_{kl}\, dx^k\, dx^l
\]
\[
(ds)^2 = dx^T\, G\, dx
\]
\[
(ds)^2 = \left( dx^1\;\; dx^2\;\; dx^3 \right)
\begin{pmatrix} 1 & 0 & 0 \\ 0 & \left(x^1\right)^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} dx^1 \\ dx^2 \\ dx^3 \end{pmatrix}
\]
\[
(ds)^2 = \left( dx^1\;\; dx^2\;\; dx^3 \right)
\begin{pmatrix} dx^1 \\ \left(x^1\right)^2 dx^2 \\ dx^3 \end{pmatrix}
\]
\[
(ds)^2 = \left(dx^1\right)^2 + \left(x^1\, dx^2\right)^2 + \left(dx^3\right)^2
\]
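The same symbolic check used for the affine case applies here. A sketch, assuming the sympy library, verifying that the cylindrical Jacobian yields the diagonal metric $G = \mathrm{diag}(1, (x^1)^2, 1)$:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)

# Jacobian of the inverse cylindrical map from the example above.
J = sp.Matrix([[sp.cos(x2), -x1*sp.sin(x2), 0],
               [sp.sin(x2),  x1*sp.cos(x2), 0],
               [0, 0, 1]])

G = sp.simplify(J.T * J)   # metric tensor; trig terms collapse via identities
g = sp.simplify(G.det())   # expect (x1)^2
```

The off-diagonal entries cancel exactly, reflecting the orthogonality of the transformation noted below.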
Note:
• The fact that the metric tensor is diagonal can be attributed to the transformation being orthogonal.
• Since the product of any matrix with its transpose is guaranteed to yield a symmetric matrix, the metric tensor is always symmetric.
Also we have the volume ratio of differential elements as
\[
d\xi^1\, d\xi^2\, d\xi^3 = x^1\, dx^1\, dx^2\, dx^3
\]
Now
\[
\frac{\partial S}{\partial \xi^1}
= \frac{\partial S}{\partial x^1}\frac{\partial x^1}{\partial \xi^1}
+ \frac{\partial S}{\partial x^2}\frac{\partial x^2}{\partial \xi^1}
+ \frac{\partial S}{\partial x^3}\frac{\partial x^3}{\partial \xi^1}
\]
\[
= \frac{\partial S}{\partial x^1}\frac{\xi^1}{\sqrt{\left(\xi^1\right)^2 + \left(\xi^2\right)^2}}
- \frac{\partial S}{\partial x^2}\frac{\xi^2}{\left(\xi^1\right)^2 + \left(\xi^2\right)^2}
\]
\[
= \cos x^2\, \frac{\partial S}{\partial x^1} - \frac{\sin x^2}{x^1}\, \frac{\partial S}{\partial x^2}
\]
So the transformed equation becomes
\[
\cos x^2\, \frac{\partial S}{\partial x^1} - \frac{\sin x^2}{x^1}\, \frac{\partial S}{\partial x^2} - S = \left(x^1\right)^2
\]
1.3.2 Covariance and Contravariance

Quantities known as contravariant vectors transform according to
\[
\bar u^i = \frac{\partial \bar x^i}{\partial x^j} u^j \qquad (1.43)
\]
Quantities known as covariant vectors transform according to
\[
\bar u_i = \frac{\partial x^j}{\partial \bar x^i} u_j \qquad (1.44)
\]
Here we have considered general transformations from one non-Cartesian coordinate system $(x^1, x^2, x^3)$ to another $(\bar x^1, \bar x^2, \bar x^3)$.
Example 1.5
Let's say $(x, y, z)$ is a normal Cartesian system and define the transformation
\[
\bar x = \lambda x \qquad \bar y = \lambda y \qquad \bar z = \lambda z
\]
Now we can assign velocities in both the unbarred and barred systems:
\[
u^x = \frac{dx}{dt} \qquad u^y = \frac{dy}{dt} \qquad u^z = \frac{dz}{dt}
\]
\[
\bar u^{\bar x} = \frac{d\bar x}{dt} \qquad \bar u^{\bar y} = \frac{d\bar y}{dt} \qquad \bar u^{\bar z} = \frac{d\bar z}{dt}
\]
\[
\bar u^{\bar x} = \frac{\partial \bar x}{\partial x}\frac{dx}{dt} \qquad
\bar u^{\bar y} = \frac{\partial \bar y}{\partial y}\frac{dy}{dt} \qquad
\bar u^{\bar z} = \frac{\partial \bar z}{\partial z}\frac{dz}{dt}
\]
\[
\bar u^{\bar x} = \lambda u^x \qquad \bar u^{\bar y} = \lambda u^y \qquad \bar u^{\bar z} = \lambda u^z
\]
\[
\bar u^{\bar x} = \frac{\partial \bar x}{\partial x} u^x \qquad
\bar u^{\bar y} = \frac{\partial \bar y}{\partial y} u^y \qquad
\bar u^{\bar z} = \frac{\partial \bar z}{\partial z} u^z
\]
This suggests the velocity vector is contravariant.
Now consider a vector which is the gradient of a function $f(x, y, z)$. For example, let
\[
f(x, y, z) = x + y^2 + z^3
\]
\[
u_x = \frac{\partial f}{\partial x} \qquad u_y = \frac{\partial f}{\partial y} \qquad u_z = \frac{\partial f}{\partial z}
\]
\[
u_x = 1 \qquad u_y = 2y \qquad u_z = 3z^2
\]
In the new coordinates
\[
f\left(\frac{\bar x}{\lambda}, \frac{\bar y}{\lambda}, \frac{\bar z}{\lambda}\right)
= \frac{\bar x}{\lambda} + \frac{\bar y^2}{\lambda^2} + \frac{\bar z^3}{\lambda^3}
\]
so
\[
\bar f(\bar x, \bar y, \bar z) = \frac{\bar x}{\lambda} + \frac{\bar y^2}{\lambda^2} + \frac{\bar z^3}{\lambda^3}
\]
Now
\[
\bar u_{\bar x} = \frac{\partial \bar f}{\partial \bar x} \qquad
\bar u_{\bar y} = \frac{\partial \bar f}{\partial \bar y} \qquad
\bar u_{\bar z} = \frac{\partial \bar f}{\partial \bar z}
\]
\[
\bar u_{\bar x} = \frac{1}{\lambda} \qquad
\bar u_{\bar y} = \frac{2\bar y}{\lambda^2} \qquad
\bar u_{\bar z} = \frac{3\bar z^2}{\lambda^3}
\]
In terms of $x, y, z$, we have
\[
\bar u_{\bar x} = \frac{1}{\lambda} \qquad
\bar u_{\bar y} = \frac{2y}{\lambda} \qquad
\bar u_{\bar z} = \frac{3z^2}{\lambda}
\]
So it is clear here that, in contrast to the velocity vector,
\[
\bar u_{\bar x} = \frac{1}{\lambda} u_x \qquad
\bar u_{\bar y} = \frac{1}{\lambda} u_y \qquad
\bar u_{\bar z} = \frac{1}{\lambda} u_z
\]
Somewhat more generally we find for this case that
\[
\bar u_{\bar x} = \frac{\partial x}{\partial \bar x} u_x \qquad
\bar u_{\bar y} = \frac{\partial y}{\partial \bar y} u_y \qquad
\bar u_{\bar z} = \frac{\partial z}{\partial \bar z} u_z,
\]
which suggests the gradient vector is covariant.
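The contrast in Example 1.5 is easy to see numerically. A small sketch (the sample values are arbitrary, chosen only to illustrate the two transformation rules) for the uniform stretching $\bar x = \lambda x$:

```python
# Contravariant rule (1.43): a velocity component scales by d(xbar)/dx = lam.
# Covariant rule (1.44): a gradient component scales by dx/d(xbar) = 1/lam.
lam = 3.0          # assumed stretching factor, for illustration only

ux_contra = 2.0                      # sample velocity component dx/dt
ubar_contra = lam * ux_contra        # transforms with lam

ux_co = 1.0                          # df/dx for f = x + y**2 + z**3
ubar_co = ux_co / lam                # transforms with 1/lam
```

The two components move in opposite directions under the same coordinate change, which is exactly the covariant/contravariant distinction.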
Contravariant tensors transform according to
\[
\bar v^{ij} = \frac{\partial \bar x^i}{\partial x^k}\frac{\partial \bar x^j}{\partial x^l} v^{kl}
\]
Covariant tensors transform according to
\[
\bar v_{ij} = \frac{\partial x^k}{\partial \bar x^i}\frac{\partial x^l}{\partial \bar x^j} v_{kl}
\]
Mixed tensors transform according to
\[
\bar v^i_j = \frac{\partial \bar x^i}{\partial x^k}\frac{\partial x^l}{\partial \bar x^j} v^k_l
\]
The idea of covariant and contravariant derivatives plays an important role in mathematical physics, namely in that the equations should be formulated such that they are invariant under coordinate transformations. This is not particularly difficult for Cartesian systems, but for non-orthogonal systems, one cannot use differentiation in the ordinary sense but must instead use the notion of covariant and contravariant derivatives, depending on the problem. The role of these terms was especially important in the development of the theory of relativity.
Consider a contravariant vector $u^i$ defined in $x^i$ which has corresponding components $U^i$ in the Cartesian $\xi^i$. Take $w^i_j$ and $W^i_j$ to represent the covariant spatial derivative of $u^i$ and $U^i$, respectively. Let's use the chain rule and definitions of tensorial quantities to arrive at a formula for covariant differentiation.
From the definition of contravariance
\[
U^i = \frac{\partial \xi^i}{\partial x^l} u^l \qquad (1.45)
\]
Take the derivative in Cartesian space and then use the chain rule:
\[
W^i_j = \frac{\partial U^i}{\partial \xi^j} = \frac{\partial U^i}{\partial x^k}\frac{\partial x^k}{\partial \xi^j} \qquad (1.46)
\]
\[
= \frac{\partial}{\partial x^k}\left(\frac{\partial \xi^i}{\partial x^l} u^l\right)\frac{\partial x^k}{\partial \xi^j} \qquad (1.47)
\]
\[
= \left(\frac{\partial^2 \xi^i}{\partial x^k \partial x^l} u^l + \frac{\partial \xi^i}{\partial x^l}\frac{\partial u^l}{\partial x^k}\right)\frac{\partial x^k}{\partial \xi^j} \qquad (1.48)
\]
\[
W^p_q = \left(\frac{\partial^2 \xi^p}{\partial x^k \partial x^l} u^l + \frac{\partial \xi^p}{\partial x^l}\frac{\partial u^l}{\partial x^k}\right)\frac{\partial x^k}{\partial \xi^q} \qquad (1.49)
\]
From the definition of a mixed tensor
\[
w^i_j = W^p_q \frac{\partial x^i}{\partial \xi^p}\frac{\partial \xi^q}{\partial x^j} \qquad (1.50)
\]
\[
= \left(\frac{\partial^2 \xi^p}{\partial x^k \partial x^l} u^l + \frac{\partial \xi^p}{\partial x^l}\frac{\partial u^l}{\partial x^k}\right)\frac{\partial x^k}{\partial \xi^q}\frac{\partial x^i}{\partial \xi^p}\frac{\partial \xi^q}{\partial x^j} \qquad (1.51)
\]
\[
= \frac{\partial^2 \xi^p}{\partial x^k \partial x^l}\frac{\partial x^k}{\partial \xi^q}\frac{\partial x^i}{\partial \xi^p}\frac{\partial \xi^q}{\partial x^j} u^l
+ \frac{\partial \xi^p}{\partial x^l}\frac{\partial x^k}{\partial \xi^q}\frac{\partial x^i}{\partial \xi^p}\frac{\partial \xi^q}{\partial x^j}\frac{\partial u^l}{\partial x^k} \qquad (1.52)
\]
\[
= \frac{\partial^2 \xi^p}{\partial x^k \partial x^l}\frac{\partial x^k}{\partial x^j}\frac{\partial x^i}{\partial \xi^p} u^l
+ \frac{\partial x^i}{\partial x^l}\frac{\partial x^k}{\partial x^j}\frac{\partial u^l}{\partial x^k} \qquad (1.53)
\]
\[
= \frac{\partial^2 \xi^p}{\partial x^k \partial x^l}\,\delta^k_j\,\frac{\partial x^i}{\partial \xi^p}\, u^l
+ \delta^i_l\,\delta^k_j\,\frac{\partial u^l}{\partial x^k} \qquad (1.54)
\]
\[
= \frac{\partial^2 \xi^p}{\partial x^j \partial x^l}\frac{\partial x^i}{\partial \xi^p}\, u^l + \frac{\partial u^i}{\partial x^j} \qquad (1.55)
\]
Here we have used the identity that
\[
\frac{\partial x^i}{\partial x^j} = \delta^i_j,
\]
where $\delta^i_j$ is the Kronecker^7 delta: $\delta^i_j = 1$ if $i = j$, and $\delta^i_j = 0$ if $i \neq j$. We define the Christoffel^8 symbols $\Gamma^i_{jl}$ as follows:
^7 Leopold Kronecker, 1823-1891, German/Prussian mathematician.
^8 Elwin Bruno Christoffel, 1829-1900, German mathematician.
\[
\Gamma^i_{jl} = \frac{\partial^2 \xi^p}{\partial x^j \partial x^l}\frac{\partial x^i}{\partial \xi^p} \qquad (1.56)
\]
and use the term $\Delta_j$ to represent the covariant derivative. Thus we have found the covariant derivative of a contravariant vector $u^i$ is as follows:
\[
\Delta_j u^i = w^i_j = \frac{\partial u^i}{\partial x^j} + \Gamma^i_{jl} u^l \qquad (1.57)
\]
Example 1.6
Find $\nabla \cdot u$ in cylindrical coordinates. The transformations are
\[
x^1 = +\sqrt{\left(\xi^1\right)^2 + \left(\xi^2\right)^2} \qquad
x^2 = \tan^{-1}\left(\frac{\xi^2}{\xi^1}\right) \qquad
x^3 = \xi^3
\]
The inverse transformation is
\[
\xi^1 = x^1 \cos x^2 \qquad \xi^2 = x^1 \sin x^2 \qquad \xi^3 = x^3
\]
This corresponds to finding
\[
\Delta_i u^i = w^i_i = \frac{\partial u^i}{\partial x^i} + \Gamma^i_{il} u^l
\]
Now for $i = j$
\[
\Gamma^i_{il} u^l = \frac{\partial^2 \xi^p}{\partial x^i \partial x^l}\frac{\partial x^i}{\partial \xi^p} u^l
= \frac{\partial^2 \xi^1}{\partial x^i \partial x^l}\frac{\partial x^i}{\partial \xi^1} u^l
+ \frac{\partial^2 \xi^2}{\partial x^i \partial x^l}\frac{\partial x^i}{\partial \xi^2} u^l
+ \frac{\partial^2 \xi^3}{\partial x^i \partial x^l}\frac{\partial x^i}{\partial \xi^3} u^l
\]
Noting that all second partials of $\xi^3$ are zero,
\[
= \frac{\partial^2 \xi^1}{\partial x^i \partial x^l}\frac{\partial x^i}{\partial \xi^1} u^l
+ \frac{\partial^2 \xi^2}{\partial x^i \partial x^l}\frac{\partial x^i}{\partial \xi^2} u^l
\]
Expanding the $i$ summation, and noting that partials of $x^3$ with respect to $\xi^1$ and $\xi^2$ are zero,
\[
= \frac{\partial^2 \xi^1}{\partial x^1 \partial x^l}\frac{\partial x^1}{\partial \xi^1} u^l
+ \frac{\partial^2 \xi^1}{\partial x^2 \partial x^l}\frac{\partial x^2}{\partial \xi^1} u^l
+ \frac{\partial^2 \xi^2}{\partial x^1 \partial x^l}\frac{\partial x^1}{\partial \xi^2} u^l
+ \frac{\partial^2 \xi^2}{\partial x^2 \partial x^l}\frac{\partial x^2}{\partial \xi^2} u^l
\]
Expanding the $l$ summation, and again removing terms with $x^3$ variation,
\[
= \frac{\partial^2 \xi^1}{\partial x^1 \partial x^1}\frac{\partial x^1}{\partial \xi^1} u^1
+ \frac{\partial^2 \xi^1}{\partial x^1 \partial x^2}\frac{\partial x^1}{\partial \xi^1} u^2
+ \frac{\partial^2 \xi^1}{\partial x^2 \partial x^1}\frac{\partial x^2}{\partial \xi^1} u^1
+ \frac{\partial^2 \xi^1}{\partial x^2 \partial x^2}\frac{\partial x^2}{\partial \xi^1} u^2
\]
\[
\quad + \frac{\partial^2 \xi^2}{\partial x^1 \partial x^1}\frac{\partial x^1}{\partial \xi^2} u^1
+ \frac{\partial^2 \xi^2}{\partial x^1 \partial x^2}\frac{\partial x^1}{\partial \xi^2} u^2
+ \frac{\partial^2 \xi^2}{\partial x^2 \partial x^1}\frac{\partial x^2}{\partial \xi^2} u^1
+ \frac{\partial^2 \xi^2}{\partial x^2 \partial x^2}\frac{\partial x^2}{\partial \xi^2} u^2
\]
Substituting for the partial derivatives,
\[
= (0) u^1 - \sin x^2 \cos x^2\, u^2
- \sin x^2 \left(\frac{-\sin x^2}{x^1}\right) u^1
- x^1 \cos x^2 \left(\frac{-\sin x^2}{x^1}\right) u^2
\]
\[
\quad + (0) u^1 + \cos x^2 \sin x^2\, u^2
+ \cos x^2 \left(\frac{\cos x^2}{x^1}\right) u^1
- x^1 \sin x^2 \left(\frac{\cos x^2}{x^1}\right) u^2
\]
\[
= \frac{u^1}{x^1}
\]
So in cylindrical coordinates
\[
\nabla \cdot u = \frac{\partial u^1}{\partial x^1} + \frac{\partial u^2}{\partial x^2} + \frac{\partial u^3}{\partial x^3} + \frac{u^1}{x^1}
\]
Note: In standard cylindrical notation, $x^1 = r$, $x^2 = \theta$, $x^3 = z$. Considering $u$ to be a velocity vector, we get
\[
\nabla \cdot u = \frac{\partial}{\partial r}\left(\frac{dr}{dt}\right) + \frac{\partial}{\partial \theta}\left(\frac{d\theta}{dt}\right) + \frac{\partial}{\partial z}\left(\frac{dz}{dt}\right) + \frac{1}{r}\frac{dr}{dt}
\]
\[
\nabla \cdot u = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{dr}{dt}\right) + \frac{1}{r}\frac{\partial}{\partial \theta}\left(r\frac{d\theta}{dt}\right) + \frac{\partial}{\partial z}\left(\frac{dz}{dt}\right)
\]
\[
\nabla \cdot u = \frac{1}{r}\frac{\partial}{\partial r}\left(r u_r\right) + \frac{1}{r}\frac{\partial u_\theta}{\partial \theta} + \frac{\partial u_z}{\partial z}
\]
Here we have also used the more traditional $u_\theta = r\frac{d\theta}{dt} = x^1 u^2$, along with $u_r = u^1$, $u_z = u^3$. For practical purposes, this ensures that $u_r$, $u_\theta$, $u_z$ all have the same dimensions.
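The divergence just derived also agrees with the compact identity $\nabla \cdot u = \frac{1}{\sqrt{g}}\frac{\partial}{\partial x^i}\left(\sqrt{g}\, u^i\right)$ listed below, with $\sqrt{g} = x^1 = r$ in cylindrical coordinates. A symbolic check, assuming the sympy library:

```python
import sympy as sp

r, th, z = sp.symbols('r theta z', positive=True)
u1 = sp.Function('u1')(r, th, z)   # generic contravariant components
u2 = sp.Function('u2')(r, th, z)
u3 = sp.Function('u3')(r, th, z)

# Identity form: (1/sqrt(g)) d/dx^i (sqrt(g) u^i), with sqrt(g) = r.
lhs = (sp.diff(r*u1, r) + sp.diff(r*u2, th) + sp.diff(r*u3, z)) / r

# Christoffel form derived in Example 1.6.
rhs = sp.diff(u1, r) + sp.diff(u2, th) + sp.diff(u3, z) + u1/r

residual = sp.simplify(lhs - rhs)  # expect 0
```

The residual vanishes identically, confirming the two forms agree.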
We summarize some useful identities, all of which can be proved, as well as some other common notation, as follows:
\[
g_{kl} = \frac{\partial \xi^i}{\partial x^k}\frac{\partial \xi^i}{\partial x^l} \qquad (1.58)
\]
\[
g = \det(g_{ij}) \qquad (1.59)
\]
\[
g^{ik} g_{kj} = \delta^i_j \qquad (1.60)
\]
\[
u_i = g_{ij} u^j \qquad (1.61)
\]
\[
u^i = g^{ij} u_j \qquad (1.62)
\]
\[
u \cdot v = u^i v_i = u_i v^i = g_{ij} u^j v^i = g^{ij} u_j v^i \qquad (1.63)
\]
\[
u \times v = \epsilon_{ijk}\, g^{jm} g^{kn} u_m v_n = \epsilon_{ijk}\, u^j v^k \qquad (1.64)
\]
\[
\Gamma^i_{jk} = \frac{\partial^2 \xi^p}{\partial x^j \partial x^k}\frac{\partial x^i}{\partial \xi^p}
= \frac{1}{2} g^{ip}\left(\frac{\partial g_{pj}}{\partial x^k} + \frac{\partial g_{pk}}{\partial x^j} - \frac{\partial g_{jk}}{\partial x^p}\right) \qquad (1.65)
\]
\[
\nabla u = \Delta_j u^i = u^i_{,j} = \frac{\partial u^i}{\partial x^j} + \Gamma^i_{jl} u^l \qquad (1.66)
\]
\[
\nabla \cdot u = \Delta_i u^i = u^i_{,i} = \frac{\partial u^i}{\partial x^i} + \Gamma^i_{il} u^l
= \frac{1}{\sqrt{g}}\frac{\partial}{\partial x^i}\left(\sqrt{g}\, u^i\right) \qquad (1.67)
\]
\[
\nabla \times u = \frac{\partial u_j}{\partial x^i} - \frac{\partial u_i}{\partial x^j} \qquad (1.68)
\]
\[
\nabla \phi = \phi_{,i} = \frac{\partial \phi}{\partial x^i} \qquad (1.69)
\]
\[
\nabla^2 \phi = \nabla \cdot \nabla \phi = g^{ij}\phi_{,ij}
= \frac{\partial}{\partial x^j}\left(g^{ij}\frac{\partial \phi}{\partial x^i}\right) + \Gamma^j_{jk}\, g^{ik}\frac{\partial \phi}{\partial x^i} \qquad (1.70)
\]
\[
= \frac{1}{\sqrt{g}}\frac{\partial}{\partial x^j}\left(\sqrt{g}\, g^{ij}\frac{\partial \phi}{\partial x^i}\right) \qquad (1.71)
\]
\[
\nabla T = T^{ij}_{,k} = \frac{\partial T^{ij}}{\partial x^k} + \Gamma^i_{lk} T^{lj} + \Gamma^j_{lk} T^{il} \qquad (1.72)
\]
\[
\nabla \cdot T = T^{ij}_{,j} = \frac{\partial T^{ij}}{\partial x^j} + \Gamma^i_{lj} T^{lj} + \Gamma^j_{lj} T^{il}
= \frac{1}{\sqrt{g}}\frac{\partial}{\partial x^j}\left(\sqrt{g}\, T^{ij}\right) + \Gamma^i_{jk} T^{jk} \qquad (1.73)
\]
\[
= \frac{1}{\sqrt{g}}\frac{\partial}{\partial x^j}\left(\sqrt{g}\, T^{kj}\frac{\partial \xi^i}{\partial x^k}\right) \qquad (1.74)
\]
1.4 Maxima and minima

Consider the real function $f(x)$, where $x \in [a, b]$. Extrema are at $x = x_m$, where $f'(x_m) = 0$, if $x_m \in [a, b]$. It is a local minimum, a local maximum, or an inflection point according to whether $f''(x_m)$ is positive, negative, or zero, respectively.
Now consider a function of two variables $f(x, y)$, with $x \in [a, b]$, $y \in [c, d]$. A necessary condition for an extremum is
\[
\frac{\partial f}{\partial x}(x_m, y_m) = \frac{\partial f}{\partial y}(x_m, y_m) = 0 \qquad (1.75)
\]
where $x_m \in [a, b]$, $y_m \in [c, d]$. Next we find the Hessian^9 matrix (Hildebrand, p. 356):
\[
H = \begin{pmatrix}
\frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\
\frac{\partial^2 f}{\partial x \partial y} & \frac{\partial^2 f}{\partial y^2}
\end{pmatrix} \qquad (1.76)
\]
and define $D = -\det H$. It can be shown that
$f$ is a maximum if $\frac{\partial^2 f}{\partial x^2} < 0$ and $D < 0$;
$f$ is a minimum if $\frac{\partial^2 f}{\partial x^2} > 0$ and $D < 0$;
$f$ is a saddle if $D > 0$.
Higher order derivatives must be considered if $D = 0$.
Example 1.7
\[
f = x^2 - y^2
\]
Equating partial derivatives with respect to $x$ and to $y$ to zero, we get
\[
2x = 0
\]
\[
-2y = 0
\]
This gives $x = 0$, $y = 0$. For these values we find that
\[
D = -\begin{vmatrix} 2 & 0 \\ 0 & -2 \end{vmatrix} = 4
\]
Since $D > 0$, the point $(0, 0)$ is a saddle point.
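The classification of Example 1.7 can be reproduced symbolically. A sketch, assuming the sympy library, which finds the critical point and evaluates the discriminant $D = -\det H$ defined above:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 - y**2

# Necessary condition (1.75): both first partials vanish.
crit = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y])   # {x: 0, y: 0}

# Hessian matrix (1.76) and the text's discriminant D = -det(H).
H = sp.hessian(f, (x, y))
D = -H.det()   # D = 4 > 0 here, so the critical point is a saddle
```

For a function whose Hessian depends on position, one would substitute the critical point into `H` before computing `D`.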
1.4.1 Derivatives of integral expressions

Often functions are expressed in terms of integrals. For example
\[
y(x) = \int_{a(x)}^{b(x)} f(x, t)\, dt
\]
Here $t$ is a dummy variable of integration. Leibniz's^10 rule tells us how to take derivatives of functions in integral form:
\[
y(x) = \int_{a(x)}^{b(x)} f(x, t)\, dt \qquad (1.77)
\]
\[
\frac{dy(x)}{dx} = f(x, b(x))\frac{db(x)}{dx} - f(x, a(x))\frac{da(x)}{dx} + \int_{a(x)}^{b(x)} \frac{\partial f(x, t)}{\partial x}\, dt \qquad (1.78)
\]
^9 Ludwig Otto Hesse, 1811-1874, German mathematician, studied under Jacobi.
^10 Gottfried Wilhelm von Leibniz, 1646-1716, German mathematician and philosopher of great influence; co-inventor with Sir Isaac Newton, 1643-1727, of the calculus.
Inverting this arrangement in a special case, we note if
\[
y(x) = y(x_0) + \int_{x_0}^{x} f(t)\, dt \qquad (1.79)
\]
then
\[
\frac{dy(x)}{dx} = f(x)\frac{dx}{dx} - f(x_0)\frac{dx_0}{dx} + \int_{x_0}^{x} \frac{\partial f(t)}{\partial x}\, dt \qquad (1.81)
\]
\[
\frac{dy(x)}{dx} = f(x) \qquad (1.82)
\]
Note that the integral expression naturally includes the initial condition that when $x = x_0$, $y = y(x_0)$. This needs to be expressed separately for the differential version of the equation.
Example 1.8
Find $\frac{dy}{dx}$ if
\[
y(x) = \int_{x}^{x^2} (x + 1)\, t^2\, dt \qquad (1.83)
\]
Using Leibniz's rule we get
\[
\frac{dy(x)}{dx} = \left[(x + 1)x^4\right](2x) - \left[(x + 1)x^2\right](1) + \int_{x}^{x^2} t^2\, dt \qquad (1.84)
\]
\[
= 2x^6 + 2x^5 - x^3 - x^2 + \left[\frac{t^3}{3}\right]_{x}^{x^2} \qquad (1.85)
\]
\[
= 2x^6 + 2x^5 - x^3 - x^2 + \frac{x^6}{3} - \frac{x^3}{3} \qquad (1.86)
\]
\[
= \frac{7x^6}{3} + 2x^5 - \frac{4x^3}{3} - x^2 \qquad (1.87)
\]
In this case, but not all, we can achieve the same result from explicit formulation of $y(x)$:
\[
y(x) = (x + 1)\int_{x}^{x^2} t^2\, dt \qquad (1.89)
\]
\[
= (x + 1)\left[\frac{t^3}{3}\right]_{x}^{x^2} \qquad (1.90)
\]
\[
= (x + 1)\left(\frac{x^6}{3} - \frac{x^3}{3}\right) \qquad (1.91)
\]
\[
y(x) = \frac{x^7}{3} + \frac{x^6}{3} - \frac{x^4}{3} - \frac{x^3}{3} \qquad (1.92)
\]
\[
\frac{dy(x)}{dx} = \frac{7x^6}{3} + 2x^5 - \frac{4x^3}{3} - x^2 \qquad (1.93)
\]
So the two methods give identical results.
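Both routes through Example 1.8 can be checked mechanically. A sketch, assuming the sympy library, that applies Leibniz's rule (1.78) term by term and compares against direct integration-then-differentiation:

```python
import sympy as sp

x, t = sp.symbols('x t')
f = (x + 1)*t**2      # integrand of Example 1.8
a, b = x, x**2        # variable limits of integration

# Leibniz's rule, eq. (1.78): boundary terms plus integral of df/dx.
leibniz = (f.subs(t, b)*sp.diff(b, x)
           - f.subs(t, a)*sp.diff(a, x)
           + sp.integrate(sp.diff(f, x), (t, a, b)))

# Direct route: integrate explicitly, then differentiate.
direct = sp.diff(sp.integrate(f, (t, a, b)), x)

residual = sp.expand(leibniz - direct)   # expect 0
```

Both expressions expand to $\frac{7x^6}{3} + 2x^5 - \frac{4x^3}{3} - x^2$, matching (1.87) and (1.93).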
1.4.2 Calculus of variations

(See Hildebrand, p. 360.)
The problem is to find the function $y(x)$, with $x \in [x_1, x_2]$, and boundary conditions $y(x_1) = y_1$, $y(x_2) = y_2$, such that the integral
\[
I = \int_{x_1}^{x_2} f(x, y, y')\, dx \qquad (1.94)
\]
is an extremum. If $y(x)$ is the desired solution, let $Y(x) = y(x) + \epsilon h(x)$, where $h(x_1) = h(x_2) = 0$. Thus $Y(x)$ also satisfies the boundary conditions; also $Y'(x) = y'(x) + \epsilon h'(x)$. We can write
\[
I(\epsilon) = \int_{x_1}^{x_2} f(x, Y, Y')\, dx
\]
Taking $\frac{dI}{d\epsilon}$, utilizing Leibniz's formula, we get
\[
\frac{dI}{d\epsilon} = \int_{x_1}^{x_2} \left(\frac{\partial f}{\partial x}\frac{\partial x}{\partial \epsilon} + \frac{\partial f}{\partial Y}\frac{\partial Y}{\partial \epsilon} + \frac{\partial f}{\partial Y'}\frac{\partial Y'}{\partial \epsilon}\right) dx
\]
Evaluating, we find
\[
\frac{dI}{d\epsilon} = \int_{x_1}^{x_2} \left(\frac{\partial f}{\partial x} \cdot 0 + \frac{\partial f}{\partial Y}\, h(x) + \frac{\partial f}{\partial Y'}\, h'(x)\right) dx
\]
Since $I$ is an extremum at $\epsilon = 0$, we have $dI/d\epsilon = 0$ at $\epsilon = 0$. This gives
\[
0 = \int_{x_1}^{x_2} \left.\left(\frac{\partial f}{\partial Y}\, h(x) + \frac{\partial f}{\partial Y'}\, h'(x)\right)\right|_{\epsilon = 0} dx
\]
Also when $\epsilon = 0$, we have $Y = y$, $Y' = y'$, so
\[
0 = \int_{x_1}^{x_2} \left(\frac{\partial f}{\partial y}\, h(x) + \frac{\partial f}{\partial y'}\, h'(x)\right) dx
\]
Look at the second term in this integral. From integration by parts we get
\[
\int_{x_1}^{x_2} \frac{\partial f}{\partial y'}\, h'(x)\, dx = \int_{x_1}^{x_2} \frac{\partial f}{\partial y'}\, dh
= \left.\frac{\partial f}{\partial y'}\, h(x)\right|_{x_1}^{x_2} - \int_{x_1}^{x_2} \frac{d}{dx}\left(\frac{\partial f}{\partial y'}\right) h(x)\, dx
\]
The first term above is zero because of our conditions on $h(x_1)$ and $h(x_2)$. Thus substituting into the original equation we have
\[
\int_{x_1}^{x_2} \left(\frac{\partial f}{\partial y} - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}\right)\right) h(x)\, dx = 0 \qquad (1.95)
\]
The equality holds for all $h(x)$, so that we must have
\[
\frac{\partial f}{\partial y} - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}\right) = 0 \qquad (1.96)
\]
called the Euler^11 equation.
While this is, in general, the preferred form of the Euler equation, its explicit dependency on the two end conditions is better displayed by considering a slightly different form. By expanding the total derivative term, that is
\[
\frac{d}{dx}\left(\frac{\partial f}{\partial y'}(x, y, y')\right)
= \frac{\partial^2 f}{\partial y' \partial x} + \frac{\partial^2 f}{\partial y' \partial y}\frac{dy}{dx} + \frac{\partial^2 f}{\partial y' \partial y'}\frac{dy'}{dx} \qquad (1.97)
\]
\[
= \frac{\partial^2 f}{\partial y' \partial x} + \frac{\partial^2 f}{\partial y' \partial y}\, y' + \frac{\partial^2 f}{\partial y' \partial y'}\, y'' \qquad (1.98)
\]
the Euler equation after slight rearrangement becomes
\[
\frac{\partial^2 f}{\partial y' \partial y'}\, y'' + \frac{\partial^2 f}{\partial y' \partial y}\, y' + \frac{\partial^2 f}{\partial y' \partial x} - \frac{\partial f}{\partial y} = 0 \qquad (1.99)
\]
\[
f_{y'y'}\, \frac{d^2 y}{dx^2} + f_{y'y}\, \frac{dy}{dx} + \left(f_{y'x} - f_y\right) = 0 \qquad (1.100)
\]
This is clearly a second order differential equation for $f_{y'y'} \neq 0$, and, in general, nonlinear. If $f_{y'y'}$ is always nonzero, the problem is said to be regular. If $f_{y'y'} = 0$ at any point, the equation is no longer second order, and the problem is said to be singular at such points. Note that satisfaction of two boundary conditions becomes problematic for equations less than second order.
There are several special cases of the function $f$.
1. $f = f(x, y)$
The Euler equation is
\[
\frac{\partial f}{\partial y} = 0 \qquad (1.101)
\]
which is easily solved:
\[
f(x, y) = A(x) \qquad (1.102)
\]
which, knowing $f$, is then solved for $y(x)$.
2. $f = f(x, y')$
The Euler equation is
\[
\frac{d}{dx}\left(\frac{\partial f}{\partial y'}\right) = 0 \qquad (1.103)
\]
which yields
\[
\frac{\partial f}{\partial y'} = A \qquad (1.104)
\]
\[
f(x, y') = Ay' + B(x) \qquad (1.105)
\]
Again, knowing $f$, the equation is solved for $y'$ and then integrated to find $y(x)$.
^11 Leonhard Euler, 1707-1783, prolific Swiss mathematician, born in Basel, died in St. Petersburg.
3. $f = f(y, y')$
The Euler equation is
\[
\frac{\partial f}{\partial y} - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}(y, y')\right) = 0 \qquad (1.106)
\]
\[
\frac{\partial f}{\partial y} - \left(\frac{\partial^2 f}{\partial y \partial y'}\frac{dy}{dx} + \frac{\partial^2 f}{\partial y' \partial y'}\frac{dy'}{dx}\right) = 0 \qquad (1.107)
\]
\[
\frac{\partial f}{\partial y} - \frac{\partial^2 f}{\partial y \partial y'}\frac{dy}{dx} - \frac{\partial^2 f}{\partial y' \partial y'}\frac{d^2 y}{dx^2} = 0 \qquad (1.108)
\]
Multiply by $y'$ to get
\[
y'\left(\frac{\partial f}{\partial y} - \frac{\partial^2 f}{\partial y \partial y'}\frac{dy}{dx} - \frac{\partial^2 f}{\partial y' \partial y'}\frac{d^2 y}{dx^2}\right) = 0 \qquad (1.109)
\]
Adding and subtracting $\frac{\partial f}{\partial y'}\, y''$ gives
\[
\frac{\partial f}{\partial y}\, y' + \frac{\partial f}{\partial y'}\, y''
- y'\frac{\partial^2 f}{\partial y \partial y'}\frac{dy}{dx}
- y'\frac{\partial^2 f}{\partial y' \partial y'}\frac{d^2 y}{dx^2}
- \frac{\partial f}{\partial y'}\, y'' = 0 \qquad (1.110)
\]
that is,
\[
\frac{d}{dx}\left(f - y'\frac{\partial f}{\partial y'}\right) = 0 \qquad (1.112)
\]
which can be integrated. Thus
\[
f(y, y') - y'\frac{\partial f}{\partial y'} = A \qquad (1.113)
\]
which is effectively a first order ordinary differential equation which is solved. Another integration constant arises. This along with $A$ are determined by the two end point conditions.
Example 1.9
Find the curve of minimum length between the points $(x_1, y_1)$ and $(x_2, y_2)$.
If $y(x)$ is the curve, then $y(x_1) = y_1$ and $y(x_2) = y_2$. The length of the curve is
\[
L = \int_{x_1}^{x_2} \sqrt{1 + (y')^2}\, dx
\]
The Euler equation is
\[
\frac{d}{dx}\left(\frac{y'}{\sqrt{1 + (y')^2}}\right) = 0
\]
which can be integrated to give
\[
\frac{y'}{\sqrt{1 + (y')^2}} = C
\]
Solving for $y'$ we get
\[
y' = A = \sqrt{\frac{C^2}{1 - C^2}}
\]
from which
\[
y = Ax + B
\]
The constants $A$ and $B$ are obtained from the boundary conditions $y(x_1) = y_1$ and $y(x_2) = y_2$.
[Figure 1.5 appears here in the original, showing the curve with endpoints at (−1, 3.09), (2, 2.26) which minimizes the surface area of a body of revolution, together with the corresponding surface of revolution.]
Figure 1.5: Body of revolution of minimum surface area for $(x_1, y_1) = (-1, 3.08616)$ and $(x_2, y_2) = (2, 2.25525)$.
Example 1.10
Find the curve through the points $(x_1, y_1)$ and $(x_2, y_2)$, such that the surface area of the body of revolution formed by rotating the curve around the x-axis is a minimum.
We wish to minimize
\[
I = \int_{x_1}^{x_2} y\sqrt{1 + (y')^2}\, dx
\]
Here $f(y, y') = y\sqrt{1 + (y')^2}$. So the Euler equation reduces to
\[
f(y, y') - y'\frac{\partial f}{\partial y'} = A
\]
\[
y\sqrt{1 + (y')^2} - y'\frac{y\, y'}{\sqrt{1 + (y')^2}} = A
\]
\[
y\left(1 + (y')^2\right) - y (y')^2 = A\sqrt{1 + (y')^2}
\]
\[
y = A\sqrt{1 + (y')^2}
\]
\[
y' = \sqrt{\left(\frac{y}{A}\right)^2 - 1}
\]
\[
y(x) = A\cosh\left(\frac{x - B}{A}\right)
\]
This is a catenary. The constants $A$ and $B$ are determined from the boundary conditions $y(x_1) = y_1$ and $y(x_2) = y_2$. In general this requires a trial and error solution of simultaneous algebraic equations. If $(x_1, y_1) = (-1, 3.08616)$ and $(x_2, y_2) = (2, 2.25525)$, one finds solution of the resulting algebraic equations gives $A = 2$, $B = 1$.
For these conditions, the curve $y(x)$ along with the resulting body of revolution of minimum surface area are plotted in Figure 1.5.
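The quoted constants are easy to verify numerically. A quick check (not part of the original derivation) that $A = 2$, $B = 1$ reproduce the stated endpoint values of Example 1.10:

```python
import math

# Catenary of Example 1.10 with the quoted constants.
A, B = 2.0, 1.0

def y(x):
    return A*math.cosh((x - B)/A)

# y(-1) = 2*cosh(-1) ~ 3.08616 and y(2) = 2*cosh(0.5) ~ 2.25525,
# matching the prescribed boundary values.
```

In practice such constants come from a root-finding iteration on the two boundary conditions, since the cosh relations cannot be inverted in closed form.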
1.5 Lagrange multipliers

Suppose we have to determine the extremum of $f(x_1, x_2, \ldots, x_m)$ subject to the $n$ constraints
\[
g_i(x_1, x_2, \ldots, x_m) = 0, \qquad i = 1, 2, \ldots, n \qquad (1.114)
\]
Define
\[
f^* = f - \lambda_1 g_1 - \lambda_2 g_2 - \ldots - \lambda_n g_n \qquad (1.115)
\]
where the $\lambda_i$ ($i = 1, 2, \ldots, n$) are unknown constants called Lagrange^12 multipliers. To get the extremum of $f^*$, we equate to zero its derivatives with respect to $x_1, x_2, \ldots, x_m$. Thus we have
\[
\frac{\partial f^*}{\partial x_i} = 0, \qquad i = 1, \ldots, m \qquad (1.116)
\]
\[
g_i = 0, \qquad i = 1, \ldots, n \qquad (1.117)
\]
which are $(m + n)$ equations that can be solved for $x_i$ ($i = 1, 2, \ldots, m$) and $\lambda_i$ ($i = 1, 2, \ldots, n$).
Example 1.11
Extremize $f = x^2 + y^2$ subject to the constraint $5x^2 - 6xy + 5y^2 = 8$.
Let
\[
f^* = x^2 + y^2 - \lambda\left(5x^2 - 6xy + 5y^2 - 8\right)
\]
from which
\[
2x - 10\lambda x + 6\lambda y = 0
\]
\[
2y + 6\lambda x - 10\lambda y = 0
\]
\[
5x^2 - 6xy + 5y^2 = 8
\]
From the first equation
\[
\lambda = \frac{2x}{10x - 6y}
\]
which, when substituted into the second, gives
\[
x = \pm y
\]
The last equation gives the extrema to be at $(x, y) = (\sqrt{2}, \sqrt{2})$, $(-\sqrt{2}, -\sqrt{2})$, $\left(\tfrac{1}{\sqrt{2}}, -\tfrac{1}{\sqrt{2}}\right)$, $\left(-\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right)$. The first two sets give $f = 4$ (maximum) and the last two $f = 1$ (minimum).
The function to be maximized along with the constraint function and its image are plotted in Figure 1.6.
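The full Lagrange system of Example 1.11 can also be handed to a computer algebra system. A sketch, assuming the sympy library:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# The three Lagrange equations from Example 1.11.
eqs = [2*x - 10*lam*x + 6*lam*y,
       2*y + 6*lam*x - 10*lam*y,
       5*x**2 - 6*x*y + 5*y**2 - 8]

sols = sp.solve(eqs, [x, y, lam], dict=True)

# Evaluate f = x^2 + y^2 at each solution; expect only the values 1 and 4.
fvals = {sp.simplify((x**2 + y**2).subs(s)) for s in sols}
```

The solver recovers the four extrema quoted in the text, with extreme values f = 4 (maxima) and f = 1 (minima).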
A similar technique can be used for the extremization of a functional with constraint. We wish to find the function $y(x)$, with $x \in [x_1, x_2]$, and $y(x_1) = y_1$, $y(x_2) = y_2$, such that the integral
\[
I = \int_{x_1}^{x_2} f(x, y, y')\, dx \qquad (1.118)
\]
^12 Joseph-Louis Lagrange, 1736-1813, Italian-born French mathematician.
[Figure 1.6 appears here in the original.]
Figure 1.6: Unconstrained function $f(x, y)$ along with constrained function and constraint function (image of constrained function).
is an extremum, and satisfies the constraint
\[
g = 0 \qquad (1.119)
\]
Define
\[
I^* = I - \lambda g \qquad (1.120)
\]
and continue as before.
Example 1.12
Extremize $I$, where
\[
I = \int_{0}^{a} y\sqrt{1 + (y')^2}\, dx
\]
with $y(0) = y(a) = 0$, and subject to the constraint
\[
\int_{0}^{a} \sqrt{1 + (y')^2}\, dx = \ell
\]
That is, find the maximum surface area of a body of revolution which has a constant length $\ell$. Let
\[
g = \int_{0}^{a} \sqrt{1 + (y')^2}\, dx - \ell = 0
\]
Then let
\[
I^* = I - \lambda g = \int_{0}^{a} y\sqrt{1 + (y')^2}\, dx - \lambda\int_{0}^{a} \sqrt{1 + (y')^2}\, dx + \lambda\ell
\]
\[
= \int_{0}^{a} (y - \lambda)\sqrt{1 + (y')^2}\, dx + \lambda\ell
\]
\[
= \int_{0}^{a} \left((y - \lambda)\sqrt{1 + (y')^2} + \frac{\lambda\ell}{a}\right) dx
\]
[Figure 1.7 appears here in the original.]
Figure 1.7: Curve of length $\ell = 5/4$ with $y(0) = y(1) = 0$ whose surface area of corresponding body of revolution (also shown) is maximum.
With $f^* = (y - \lambda)\sqrt{1 + (y')^2} + \frac{\lambda\ell}{a}$, we have the Euler equation
\[
\frac{\partial f^*}{\partial y} - \frac{d}{dx}\left(\frac{\partial f^*}{\partial y'}\right) = 0
\]
Integrating from an earlier developed relationship when $f = f(y, y')$, and absorbing $\frac{\lambda\ell}{a}$ into a constant $A$, we have
\[
(y - \lambda)\sqrt{1 + (y')^2} - y'\frac{(y - \lambda)\, y'}{\sqrt{1 + (y')^2}} = A
\]
from which
\[
(y - \lambda)\left(1 + (y')^2\right) - (y')^2(y - \lambda) = A\sqrt{1 + (y')^2}
\]
\[
(y - \lambda)\left(1 + (y')^2 - (y')^2\right) = A\sqrt{1 + (y')^2}
\]
\[
y - \lambda = A\sqrt{1 + (y')^2}
\]
\[
y' = \sqrt{\left(\frac{y - \lambda}{A}\right)^2 - 1}
\]
\[
y = \lambda + A\cosh\left(\frac{x - B}{A}\right)
\]
Here $A$, $B$, $\lambda$ have to be numerically determined from the three conditions $y(0) = y(a) = 0$, $g = 0$. If we take the case where $a = 1$, $\ell = 5/4$, we find that $A = 0.422752$, $B = \frac{1}{2}$, $\lambda = -0.754549$. For these values, the curve of interest, along with the surface of revolution, is plotted in Figure 1.7.
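The three quoted constants can be checked against all three conditions at once. For the catenary, the arc length integral has the closed form $\int_0^1 \sqrt{1 + (y')^2}\, dx = 2A\sinh\!\left(\tfrac{1/2}{A}\right)$ when $B = 1/2$ (by symmetry), so no quadrature is needed:

```python
import math

# Quoted constants for a = 1, ell = 5/4 in Example 1.12.
A, B, lam = 0.422752, 0.5, -0.754549

def y(x):
    return lam + A*math.cosh((x - B)/A)

# Closed-form arc length of the catenary over [0, 1] with B = 1/2.
length = 2*A*math.sinh(0.5/A)

# y(0), y(1), and length - 5/4 are all within about 1e-4 of zero.
```

All three residuals are at the level of the rounding in the printed constants, confirming the solution.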
Problems
1. If
z
3
+zx +x
4
y = 3y,
(a) ﬁnd a general expression for
∂z
∂x
y
,
∂z
∂y
x
,
39
(b) evaluate
∂z
∂x
y
,
∂z
∂y
x
,
at (x, y) = (1, 2),
(c) Give a computer generated plot of the surface z(x, y) for −2 < x < 2, −2 < y < 2. You may
wish to use an appropriate implicit plotting function in the xmaple software program.
2. Determine the general curve y(x), with x
1
≤ x ≤ x
2
, of total length L with endpoints y(x
1
) = y
1
and y(x
2
) = y
2
ﬁxed, for which the area under the curve,
x2
x1
y dx, is a maximum. Show that if
(x
1
, y
1
) = (0, 0); (x
2
, y
2
) = (1, 1); L = 3/2, that the curve which maximizes the area and satisﬁes all
constraints is the circle, (y + 0.254272)
2
+ (x − 1.2453)
2
= (1.26920)
2
. Plot this curve. What is the
area? Verify that each constraint is satisﬁed. What function y(x) minimizes the area and satisﬁes all
constraints? Plot this curve. What is the area? Verify that each constraint is satisﬁed.
3. Show that if a ray of light is reﬂected from a mirror, the shortest distance of travel is when the angle
of incidence on the mirror is equal to the angle of reﬂection.
4. The speed of light in diﬀerent media separated by a planar interface is c
1
and c
2
. Show that if the
time taken for light to go from a ﬁxed point in one medium to another in the second is a minimum,
the angle of incidence, α
i
, and the angle of refraction, α
r
, are related by
sin α
i
sin α
r
=
c
1
c
2
5. T is a quadrilateral with perimeter P. Find the form of T such that its area is a maximum. What is
this area?
6. A body slides due to gravity from point A to point B along the curve y = f(x). There is no friction
and the initial velocity is zero. If points A and B are ﬁxed, ﬁnd f(x) for which the time taken will
be the least. What is this time? If A : (x, y) = (1, 2), B : (x, y) = (0, 0), where distances are in
meters, plot the minimum time curve, and ﬁnd the minimum time if the gravitational acceleration is
g = −9.81
m
s
2
j.
7. Consider the integral I =
1
0
(y
− y + e
x
)
2
dx. What kind of extremum does this integral have
(maximum or minimum)? What should y(x) be for this extremum? What does the solution of the
Euler equation give, if y(0) = 0 and y(1) = −e? Find the value of the extremum. Plot y(x) for the
extremum. If y
0
(x) is the solution of the Euler equation, compute I for y
1
(x) = y
0
(x) + h(x), where
you can take any h(x) you like, but with h(0) = h(1) = 0.
8. Find the length of the shortest curve between two points with cylindrical coordinates (r, θ, z) = (a, 0, 0)
and (r, θ, z) = (a, Θ, Z) along the surface of the cylinder r = a.
9. Determine the shape of a parallelogram with a given area which has the least perimeter.
10. Find the extremum of the functional
1
0
(x
2
y
2
+ 40x
4
y) dx
with y(0) = 0 and y(1) = 1. Plot y(x) which renders the integral at an extreme point.
11. Find the point on the plane ax +by +cz = d which is nearest to the origin.
12. Extremize the integral
1
0
y
2
dx
subject to the end conditions y(0) = 0, y(1) = 0, and also the constraint
1
0
y dx = 1
Plot the function y(x) which extremizes the integral and satisﬁes all constraints.
40
13. Show that the functions
u =
x +y
x −y
v =
xy
(x −y)
2
are functionally dependent.
14. Find the point on the curve of intersection of z − xy = 10 and x + y + z = 1, that is closest to the
origin.
15. Find a function y(x) with y(0) = 1, y(1) = 0 that extremizes the integral

I = ∫_0^1 (1 + (dy/dx)²)/y dx.

Plot y(x) for this function.
16. For elliptic cylindrical coordinates

ξ¹ = cosh x¹ cos x²
ξ² = sinh x¹ sin x²
ξ³ = x³

Find the Jacobian matrix J and the metric tensor G. Find the inverse transformation. Plot lines of constant x¹ and x² in the ξ¹, ξ² plane.
17. For the elliptic coordinate system of the previous problem, find ∇u where u is an arbitrary vector.
18. Find the covariant derivative of the contravariant velocity vector in cylindrical coordinates.
Chapter 2

First-order ordinary differential equations

see Lopez, Chapters 1-3,
see Riley, Hobson, and Bence, Chapter 12,
see Bender and Orszag, 1.6.

A first-order ordinary differential equation is of the form

F(x, y, y′) = 0    (2.1)

where y′ = dy/dx.
2.1 Separation of variables
Equation (2.1) is separable if it can be written in the form
P(x)dx = Q(y)dy (2.2)
which can then be integrated.
Example 2.1
Solve

yy′ = (8x + 1)/y, with y(1) = −5.

Separating variables,

y² dy = (8x + 1) dx.

Integrating, we have

y³/3 = 4x² + x + C.

The initial condition gives C = −140/3, so that the solution is

y³ = 12x² + 3x − 140.

The solution is plotted in Figure 2.1.
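The separated solution can be checked symbolically; a minimal sketch using sympy's implicit differentiation (`idiff`), assuming only the implicit form y³ = 12x² + 3x − 140 derived above:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Implicit solution of Example 2.1: F(x, y) = 0 with F = y^3/3 - 4x^2 - x - C, C = -140/3
F = y**3/3 - 4*x**2 - x + sp.Rational(140, 3)
dydx = sp.idiff(F, y, x)                      # dy/dx implied by F(x, y) = 0
print(sp.simplify(y*dydx - (8*x + 1)/y))      # 0: satisfies y y' = (8x + 1)/y
print(F.subs({x: 1, y: -5}))                  # 0: initial condition y(1) = -5 holds
```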
Figure 2.1: y(x) which solves yy′ = (8x + 1)/y with y(1) = −5.
2.2 Homogeneous equations

An equation is homogeneous if it can be written in the form

y′ = f(y/x).    (2.3)

Defining

u = y/x    (2.4)

we get y = ux, from which

y′ = u + xu′.

Substituting in equation (2.3) and separating variables, we have

u + xu′ = f(u)    (2.5)
u + x du/dx = f(u)    (2.6)
x du/dx = f(u) − u    (2.7)
du/(f(u) − u) = dx/x    (2.8)

which can be integrated.
Equations of the form

y′ = f( (ax + by + c)/(dx + ey + h) )    (2.9)

can be similarly integrated.
Example 2.2
Solve

xy′ = 3y + y²/x, with y(1) = 4.

This can be written as

y′ = 3(y/x) + (y/x)².

Letting u = y/x, we get

du/(2u + u²) = dx/x.

Since

1/(2u + u²) = 1/(2u) − 1/(4 + 2u)

both sides can be integrated to give

(1/2)(ln |u| − ln |2 + u|) = ln |x| + C.

The initial condition gives C = (1/2) ln(2/3), so that the solution can be reduced to

|y/(2x + y)| = (2/3)x².

This can be solved explicitly for y(x) for each case of the absolute value. The first case,

y(x) = (4/3)x³ / (1 − (2/3)x²),

is seen to satisfy the condition at x = 1. The second case is discarded as it does not satisfy the condition at x = 1.
The solution is plotted in Figure 2.2.
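The first-case solution can be verified directly; clearing the fraction, it is y = 4x³/(3 − 2x²). A quick sympy check:

```python
import sympy as sp

x = sp.symbols('x')
# Closed-form solution of Example 2.2, rewritten with the fraction cleared
y = 4*x**3/(3 - 2*x**2)
print(sp.simplify(x*sp.diff(y, x) - 3*y - y**2/x))   # 0: satisfies x y' = 3y + y^2/x
print(y.subs(x, 1))                                  # 4: initial condition y(1) = 4
```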
2.3 Exact equations

A differential equation is exact if it can be written in the form

dF(x, y) = 0,    (2.10)

where F(x, y) = 0 is a solution to the differential equation. Using the chain rule to expand the derivative,

∂F/∂x dx + ∂F/∂y dy = 0.

So for an equation of the form

P(x, y)dx + Q(x, y)dy = 0    (2.11)
Figure 2.2: y(x) which solves xy′ = 3y + y²/x with y(1) = 4.
we have an exact differential if

∂F/∂x = P(x, y),  ∂F/∂y = Q(x, y)    (2.12)
∂²F/∂x∂y = ∂P/∂y,  ∂²F/∂y∂x = ∂Q/∂x    (2.13)

As long as F(x, y) is twice continuously differentiable, the mixed second partials are equal; thus

∂P/∂y = ∂Q/∂x    (2.14)

must hold if F(x, y) is to exist and render the original differential equation exact.
Example 2.3
Solve

dy/dx = e^(x−y) / (e^(x−y) − 1).

Rearranging, we get

e^(x−y) dx + (1 − e^(x−y)) dy = 0,

so that

∂P/∂y = −e^(x−y),  ∂Q/∂x = −e^(x−y).

Since ∂P/∂y = ∂Q/∂x, the equation is exact. Thus

∂F/∂x = P(x, y)
Figure 2.3: y(x) which solves y′ = exp(x − y)/(exp(x − y) − 1) for various values of C.
∂F/∂x = e^(x−y)
F(x, y) = e^(x−y) + A(y)
∂F/∂y = −e^(x−y) + dA/dy = Q(x, y) = 1 − e^(x−y)
dA/dy = 1
A(y) = y − C
F(x, y) = e^(x−y) + y − C = 0
e^(x−y) + y = C

The solution for various values of C is plotted in Figure 2.3.
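The exactness test (2.14) and the recovered potential F can be confirmed with a short sympy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
P = sp.exp(x - y)             # coefficient of dx in Example 2.3
Q = 1 - sp.exp(x - y)         # coefficient of dy
exactness = sp.simplify(sp.diff(P, y) - sp.diff(Q, x))   # 0 -> equation is exact
F = sp.exp(x - y) + y         # the potential found above (constant C absorbed)
checks = (sp.simplify(sp.diff(F, x) - P), sp.simplify(sp.diff(F, y) - Q))
print(exactness, checks)      # 0 (0, 0)
```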
2.4 Integrating factors
Sometimes, an equation of the form (2.11) is not exact, but can be made so by multiplication
by a function u(x, y), where u is called the integrating factor. It is not always obvious that
integrating factors exist; sometimes they do not.
Example 2.4
Solve

dy/dx = 2xy/(x² − y²).

Rearranging, we get

(x² − y²) dy = 2xy dx.
Figure 2.4: y(x) which solves y′ = 2xy/(x² − y²) for various values of C.
This is not exact according to criterion (2.14). It turns out that the integrating factor is y⁻², so that on multiplication, we get

(2x/y) dx − (x²/y² − 1) dy = 0.

This can be written as

d( x²/y + y ) = 0

which gives

x² + y² = Cy.

The solution for various values of C is plotted in Figure 2.4.
The general first-order linear equation

dy(x)/dx + P(x) y(x) = Q(x)    (2.15)

with

y(x_o) = y_o

can be solved using the integrating factor

e^(∫_a^x P(s)ds) = e^(F(x) − F(a)),

where F is an antiderivative of P, dF/dx = P(x). We choose a such that

F(a) = 0.

Multiply by the integrating factor and proceed:

e^(∫_a^x P(s)ds) dy(x)/dx + e^(∫_a^x P(s)ds) P(x) y(x) = e^(∫_a^x P(s)ds) Q(x)    (2.16)

product rule:

d/dx ( e^(∫_a^x P(s)ds) y(x) ) = e^(∫_a^x P(s)ds) Q(x)    (2.17)

replace x by t:

d/dt ( e^(∫_a^t P(s)ds) y(t) ) = e^(∫_a^t P(s)ds) Q(t)    (2.18)

integrate:

∫_(x_o)^x d/dt ( e^(∫_a^t P(s)ds) y(t) ) dt = ∫_(x_o)^x e^(∫_a^t P(s)ds) Q(t) dt    (2.19)

e^(∫_a^x P(s)ds) y(x) − e^(∫_a^(x_o) P(s)ds) y(x_o) = ∫_(x_o)^x e^(∫_a^t P(s)ds) Q(t) dt    (2.20)

which yields

y(x) = e^(−∫_a^x P(s)ds) ( e^(∫_a^(x_o) P(s)ds) y_o + ∫_(x_o)^x e^(∫_a^t P(s)ds) Q(t) dt ).    (2.21)
Example 2.5
Solve

y′ − y = e^(2x); y(0) = y_o.

Here

P(x) = −1, so P(s) = −1, and

∫_a^x P(s)ds = ∫_a^x (−1)ds = −s|_a^x = a − x.

So F(τ) = −τ. For F(a) = 0, take a = 0. So the integrating factor is

e^(∫_a^x P(s)ds) = e^(a−x) = e^(0−x) = e^(−x).

Multiplying and rearranging, we get

e^(−x) dy(x)/dx − e^(−x) y(x) = e^x
d/dx ( e^(−x) y(x) ) = e^x
d/dt ( e^(−t) y(t) ) = e^t
∫_(x_o=0)^x d/dt ( e^(−t) y(t) ) dt = ∫_(x_o=0)^x e^t dt
e^(−x) y(x) − e^(−0) y(0) = e^x − e^0
e^(−x) y(x) − y_o = e^x − 1
y(x) = e^x (y_o + e^x − 1)
y(x) = e^(2x) + (y_o − 1) e^x.

The solution for various values of y_o is plotted in Figure 2.5.
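Formula (2.21) can be exercised on this example; a sympy sketch with P = −1, Q = e^(2x), a = x_o = 0, so the integrating factor is e^(−x):

```python
import sympy as sp

x, t, yo = sp.symbols('x t y_o')
# Formula (2.21) with P = -1, Q = exp(2x), a = x_o = 0:
y = sp.exp(x) * (yo + sp.integrate(sp.exp(-t)*sp.exp(2*t), (t, 0, x)))
y = sp.expand(y)
print(y)                                              # exp(2x) + (y_o - 1)exp(x)
print(sp.simplify(sp.diff(y, x) - y - sp.exp(2*x)))   # 0: satisfies y' - y = exp(2x)
print(sp.simplify(y.subs(x, 0) - yo))                 # 0: satisfies y(0) = y_o
```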
Figure 2.5: y(x) which solves y′ − y = e^(2x) with y(0) = y_o, for y_o = −2, 0, 2.
2.5 Bernoulli equation

Some first-order nonlinear equations also have analytical solutions. An example is the Bernoulli¹ equation

y′ + P(x)y = Q(x)yⁿ,    (2.22)

where n ≠ 1. Let

u = y^(1−n),

so that

y = u^(1/(1−n)).

The derivative is

y′ = (1/(1 − n)) u^(n/(1−n)) u′.

Substituting in equation (2.22), we get

(1/(1 − n)) u^(n/(1−n)) u′ + P(x) u^(1/(1−n)) = Q(x) u^(n/(1−n)).

This can be written as

u′ + (1 − n)P(x)u = (1 − n)Q(x)    (2.23)

which is a first-order linear equation of the form (2.15) and can be solved.

¹after one of the members of the prolific Bernoulli family.
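As an illustration of the substitution (a hypothetical example, not from the notes), take the Bernoulli equation y′ + y = y², so n = 2 and P = Q = 1; the transformed linear equation (2.23) is u′ − u = −1, which sympy can solve and back-substitute:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')

# Hypothetical Bernoulli example: y' + y = y^2 (n = 2), so (2.23) reads u' - u = -1
u_sol = sp.dsolve(sp.Eq(u(x).diff(x) - u(x), -1), u(x)).rhs   # C1*exp(x) + 1
y_sol = 1/u_sol                                               # y = u^(1/(1-n)) = 1/u
residual = sp.simplify(y_sol.diff(x) + y_sol - y_sol**2)
print(residual)   # 0 -> the back-substituted y solves the original Bernoulli equation
```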
2.6 Riccati equation

A Riccati² equation is of the form

dy/dx = P(x)y² + Q(x)y + R(x).    (2.24)

If we know a specific solution y = S(x) of this equation, the general solution can then be found. Let

y = S(x) + 1/z(x),    (2.25)

thus

dy/dx = dS/dx − (1/z²) dz/dx.    (2.26)

Substituting into equation (2.24), we get

dS/dx − (1/z²) dz/dx = P ( S + 1/z )² + Q ( S + 1/z ) + R    (2.27)
dS/dx − (1/z²) dz/dx = P ( S² + 2S/z + 1/z² ) + Q ( S + 1/z ) + R    (2.28)

Since S(x) is itself a solution to equation (2.24), we subtract the corresponding terms to get

−(1/z²) dz/dx = P ( 2S/z + 1/z² ) + Q ( 1/z )    (2.29)
−dz/dx = P (2Sz + 1) + Qz    (2.30)
dz/dx + (2P(x)S(x) + Q(x)) z = −P(x).    (2.31)

Again this is a first-order linear equation in z and x of the form of equation (2.15) and can be solved.
Example 2.6
Solve

y′ = (e^(−3x)/x) y² − (1/x) y + 3e^(3x).

One solution is

y = S(x) = e^(3x).

Verify:

3e^(3x) = (e^(−3x)/x) e^(6x) − (1/x) e^(3x) + 3e^(3x)
3e^(3x) = e^(3x)/x − e^(3x)/x + 3e^(3x)
3e^(3x) = 3e^(3x)

²Jacopo Riccati, 1676-1754, Venetian mathematician.
so let

y = e^(3x) + 1/z.

Also we have

P(x) = e^(−3x)/x,  Q(x) = −1/x,  R(x) = 3e^(3x).

Substituting in equation (2.31), we get

dz/dx + ( 2 (e^(−3x)/x) e^(3x) − 1/x ) z = −e^(−3x)/x
dz/dx + z/x = −e^(−3x)/x.

The integrating factor here is

e^(∫ dx/x) = e^(ln x) = x.

Multiplying by the integrating factor x,

x dz/dx + z = −e^(−3x)
d(xz)/dx = −e^(−3x)

which can be integrated as

z = e^(−3x)/(3x) + C/x = (e^(−3x) + 3C)/(3x).

Since y = S(x) + 1/z, the solution is thus

y = e^(3x) + 3x/(e^(−3x) + 3C).

The solution for various values of C is plotted in Figure 2.6.
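A sympy sketch confirming that the closed form above satisfies the Riccati equation for every C:

```python
import sympy as sp

x, C = sp.symbols('x C')
# Closed-form solution of Example 2.6
y = sp.exp(3*x) + 3*x/(sp.exp(-3*x) + 3*C)
P = sp.exp(-3*x)/x
Q = -1/x
R = 3*sp.exp(3*x)
residual = sp.simplify(sp.together(sp.diff(y, x) - (P*y**2 + Q*y + R)))
print(residual)   # 0 for every C
```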
2.7 Reduction of order

There are higher-order equations that can be reduced to first-order equations and then solved.

2.7.1 y absent

If

f(x, y′, y′′) = 0    (2.32)

then let u(x) = y′. Thus u′(x) = y′′, and the equation reduces to

f( x, u, du/dx ) = 0    (2.33)
which is an equation of first order.

Figure 2.6: y(x) which solves y′ = (exp(−3x)/x) y² − y/x + 3 exp(3x) for various values of C.
Example 2.7
Solve

xy′′ + 2y′ = 4x³.

Let u = y′, so that

x du/dx + 2u = 4x³.

Multiplying by x,

x² du/dx + 2xu = 4x⁴
d/dx (x²u) = 4x⁴.

This can be integrated to give

u = (4/5)x³ + C₁/x²

from which

y = (1/5)x⁴ − C₁/x + C₂

for x ≠ 0.
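The general solution can be verified term by term with sympy:

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')
y = sp.Rational(1, 5)*x**4 - C1/x + C2     # general solution of Example 2.7
residual = sp.simplify(x*sp.diff(y, x, 2) + 2*sp.diff(y, x) - 4*x**3)
print(residual)   # 0 for all C1, C2
```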
2.7.2 x absent

If

f(y, y′, y′′) = 0    (2.34)

let u(x) = y′, so that

y′′ = du/dx = (du/dy)(dy/dx) = u du/dy.

The equation becomes

f( y, u, u du/dy ) = 0    (2.35)

which is also an equation of first order. Note however that the independent variable is now y while the dependent variable is u.
Example 2.8
Solve

y′′ − 2yy′ = 0; y(0) = y_o, y′(0) = y′_o.

Let u = y′, so that y′′ = du/dx = (dy/dx)(du/dy) = u du/dy. The equation becomes

u du/dy − 2yu = 0.

Now

u = 0

satisfies the equation. Thus

dy/dx = 0
y = C,

and applying one initial condition, y = y_o. This satisfies the initial conditions only under special circumstances, i.e. y′_o = 0. For u ≠ 0,

du/dy = 2y
u = y² + C₁.

Applying the initial conditions: y′_o = y_o² + C₁, so C₁ = y′_o − y_o². Then

dy/dx = y² + y′_o − y_o²
dy/(y² + y′_o − y_o²) = dx,

from which, for y′_o − y_o² > 0,

(1/√(y′_o − y_o²)) tan⁻¹( y/√(y′_o − y_o²) ) = x + C₂
C₂ = (1/√(y′_o − y_o²)) tan⁻¹( y_o/√(y′_o − y_o²) )
y(x) = √(y′_o − y_o²) tan( √(y′_o − y_o²) x + tan⁻¹( y_o/√(y′_o − y_o²) ) ).

The solution for y_o = 0, y′_o = 1 is plotted in Figure 2.7.
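For y_o = 0, y′_o = 1 the closed form reduces to y = tan x, which can be cross-checked against a numerical integration; a sketch using scipy's `solve_ivp` (a numerical check added here, not part of the notes):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example 2.8 with y(0) = 0, y'(0) = 1: the closed form reduces to y = tan(x)
def rhs(x, s):          # s = [y, y']; the equation is y'' = 2 y y'
    return [s[1], 2*s[0]*s[1]]

xs = np.linspace(0.0, 1.0, 11)
sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0], t_eval=xs, rtol=1e-10, atol=1e-12)
err = np.max(np.abs(sol.y[0] - np.tan(xs)))
print(err)   # small: numerical solution matches tan(x)
```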
Figure 2.7: y(x) which solves y′′ − 2yy′ = 0 with y(0) = 0, y′(0) = 1.
For y′_o − y_o² = 0,

dy/dx = y²
dy/y² = dx
−1/y = x + C₂
−1/y_o = C₂
−1/y = x − 1/y_o
y = 1/( 1/y_o − x ).

For y′_o − y_o² < 0, one would obtain solutions in terms of hyperbolic trigonometric functions.
2.8 Uniqueness and singular solutions

Not all differential equations have solutions, as can be seen by considering y′ = (y/x) ln y, with y(0) = 2. Here y = e^(Cx) is the general solution of the differential equation, but no finite value of C allows the initial condition to be satisfied.

Theorem
Let f(x, y) be continuous and satisfy |f(x, y)| ≤ m and the Lipschitz condition |f(x, y) − f(x, y₀)| ≤ k|y − y₀| in a bounded region R. Then the equation y′ = f(x, y) has one and only one solution containing the point (x₀, y₀).

A stronger condition is that if f(x, y) and ∂f/∂y are finite and continuous at (x₀, y₀), then a solution of y′ = f(x, y) exists and is unique in the neighborhood of this point.
Example 2.9
Analyze the uniqueness of the solution of

y′ = −K√y, y(T) = 0.

Taking

f(x, y) = −K√y,

we have

∂f/∂y = −K/(2√y),

which is not finite at y = 0. So the solution cannot be guaranteed to be unique. In fact, one solution is

y(t) = (1/4)K²(t − T)².

Another solution which satisfies the initial condition and differential equation is

y(t) = 0.

Obviously the solution is not unique.
Consider the equation

y′ = 3y^(2/3), with y(2) = 0.

On separating variables and integrating,

3y^(1/3) = 3x + 3C,

so that the general solution is

y = (x + C)³.

Applying the initial condition,

y = (x − 2)³.

However,

y = 0

and

y = { (x − 2)³ if x ≥ 2;  0 if x < 2 }

are also solutions. These singular solutions cannot be obtained from the general solution. However, values of y′ and y are the same at intersections. Both satisfy the differential equation.
The two solutions are plotted in Figure 2.8.
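The non-uniqueness can be checked at sample points; a small sketch (sampling only x ≥ 2, where both candidate solutions are non-negative):

```python
import sympy as sp

x = sp.symbols('x')
y1 = (x - 2)**3           # from the general solution with C = -2
y2 = sp.Integer(0)        # a singular solution
max_res = 0.0
for y in (y1, y2):
    # check y' = 3 y^(2/3) at sample points x >= 2
    for xv in (2, 3, 4):
        res = (sp.diff(y, x) - 3*y**sp.Rational(2, 3)).subs(x, xv)
        max_res = max(max_res, abs(float(res)))
print(max_res, y1.subs(x, 2), y2.subs(x, 2))   # 0.0 0 0: both solve the ODE and the IC
```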
Figure 2.8: Two solutions y(x) which satisfy y′ = 3y^(2/3) with y(2) = 0.
2.9 Clairaut equation

The solution of a Clairaut³ equation

y = xy′ + f(y′)    (2.36)

can be obtained by letting y′ = u(x), so that

y = xu + f(u).    (2.37)

Differentiating with respect to x, we get

y′ = xu′ + u + (df/du) u′    (2.38)
u = xu′ + u + (df/du) u′    (2.39)
u′ ( x + df/du ) = 0.    (2.40)

If we take

u′ = du/dx = 0,    (2.41)

we can integrate to get

u = C    (2.42)

where C is a constant. Then, from equation (2.37), we get the general solution

y = Cx + f(C).    (2.43)

Applying an initial condition y(x_o) = y_o gives what we will call the regular solution.
But if we take

x + df/du = 0,    (2.44)

then this equation along with equation (2.37),

y = −u df/du + f(u),    (2.45)

forms a set of parametric equations for what we call the singular solution.

³Alexis Claude Clairaut, 1713-1765, Parisian/French mathematician.
Example 2.10
Solve

y = xy′ + (y′)³, y(0) = y_o.

Take

u = y′;

then

f(u) = u³,  df/du = 3u²,

so

y = Cx + C³

is the general solution. Use the initial condition to evaluate C and get the regular solution:

y_o = C(0) + C³
C = y_o^(1/3)
y = y_o^(1/3) x + y_o.

The parametric form of a singular solution is

y = −2u³,  x = −3u².

Eliminating the parameter u, we obtain

y = ±2 (−x/3)^(3/2)

as the explicit form of the singular solution.
The regular solutions and singular solution are plotted in Figure 2.9.
Note
• In contrast to solutions for equations linear in y′, the trajectories y(x; x_o) cross at numerous locations in the x-y plane. This is a consequence of the differential equation's nonlinearity.
• While the singular solution satisfies the differential equation, it satisfies the initial condition only when y_o = 0.
• Because of nonlinearity, addition of the regular and singular solutions does not yield a solution to the differential equation.
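Both the regular family and the parametric singular solution can be verified with sympy:

```python
import sympy as sp

x, C, u = sp.symbols('x C u')

# Regular solution family y = Cx + C^3 of y = x y' + (y')^3
y_reg = C*x + C**3
res_reg = sp.simplify(y_reg - (x*sp.diff(y_reg, x) + sp.diff(y_reg, x)**3))

# Singular solution in parametric form: x = -3u^2, y = -2u^3
xs, ys = -3*u**2, -2*u**3
slope = sp.cancel(sp.diff(ys, u)/sp.diff(xs, u))     # dy/dx along the parametric curve
res_sing = sp.simplify(ys - (xs*slope + slope**3))
print(res_reg, slope, res_sing)   # 0 u 0: both satisfy y = x y' + (y')^3, with y' = u
```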
Figure 2.9: Solutions y(x) which satisfy y = xy′ + (y′)³ with y(0) = y_o, for various y_o, along with the singular solutions.
Problems

1. Find the general solution of the differential equation

y′ + x²y(1 + y) = 1 + x³(1 + x).

Plot solutions for y(0) = −2, 0, 2.

2. Solve

ẋ = 2tx + te^(−t²)x².

Plot a solution for x(0) = 1.

3. Solve

3x²y² dx + 2x³y dy = 0.

4. Solve

dy/dx = (x − y)/(x + y).

5. Solve the nonlinear equation (y′ − x)y′′ + 2y′ = 2x.

6. Solve xy′′ + 2y′ = x. Plot a solution for y(1) = 1, y′(1) = 1.

7. Solve y′′ − 2yy′ = 0. Plot a solution for y(0) = 0, y′(0) = 3.

8. Given that y₁ = x⁻¹ is one solution of y′′ + (3/x)y′ + (1/x²)y = 0, find the other solution.

9. Solve
(a) y′ tan y + 2 sin x sin(π/2 + x) + ln x = 0
(b) xy′ − 2y − x⁴ − y² = 0
(c) y′ cos y cos x + sin y sin x = 0
(d) y′ + y cot x = e^x
(e) x⁵y′ + y + e^(x²)(x⁶ − 1)y³ = 0, with y(1) = e^(−1/2)
(f) y′ + y² − xy − 1 = 0
(g) y′(x + y²) − y = 0
(h) y′ = (x + 2y − 5)/(−2x − y + 4)
(i) y′ + xy = y
Plot solutions for y(0) = −1, 0, 1 (except for part e).

10. Find all solutions of

(x + 1)(y′)² + (x − y)y′ − y = 0.

11. Find an a for which a unique solution of

(y′)⁴ + 8(y′)³ + (3a + 16)(y′)² + 12ay′ + 2a² = 0, with y(1) = −2

exists. Find the solution.

12. Solve

y′ − (1/x²)y² + (1/x)y = 1.
Chapter 3

Linear ordinary differential equations

see Kaplan, 9.1-9.4,
see Lopez, Chapter 5,
see Bender and Orszag, 1.1-1.5,
see Riley, Hobson, and Bence, Chapter 13, Chapter 15.6,
see Friedman, Chapter 3.
3.1 Linearity and linear independence

An ordinary differential equation can be written in the form

L(y) = f(x)    (3.1)

where y(x) is an unknown function. The equation is said to be homogeneous if f(x) = 0, giving then

L(y) = 0.    (3.2)

The operator L is composed of a combination of derivatives d/dx, d²/dx², etc. L is linear if

L(y₁ + y₂) = L(y₁) + L(y₂)    (3.3)

and

L(αy) = αL(y)    (3.4)

where α is a scalar. The general form of L is

L = P_n(x) dⁿ/dxⁿ + P_(n−1)(x) d^(n−1)/dx^(n−1) + ... + P₁(x) d/dx + P₀(x).    (3.5)

The ordinary differential equation (3.1) is then linear.

Definition: The functions y₁(x), y₂(x), ..., y_n(x) are said to be linearly independent when C₁y₁(x) + C₂y₂(x) + ... + C_ny_n(x) = 0 is true only when C₁ = C₂ = ... = C_n = 0.

A homogeneous equation of order n can be shown to have n linearly independent solutions. These are called complementary functions. If y_i (i = 1, ..., n) are the complementary functions of the equation, then

y(x) = Σ_(i=1)^n C_i y_i(x)    (3.6)

is the general solution of the homogeneous equation. If y_p(x) is a particular solution of equation (3.1), the general solution is then

y(x) = y_p(x) + Σ_(i=1)^n C_i y_i(x).    (3.7)
Now we would like to show that any solution φ(x) to the homogeneous equation L(y) = 0 can be written as a linear combination of the n complementary functions y_i(x):

C₁y₁(x) + C₂y₂(x) + ... + C_ny_n(x) = φ(x)    (3.8)

We can form additional equations by taking a series of derivatives up to n − 1:

C₁y₁′(x) + C₂y₂′(x) + ... + C_ny_n′(x) = φ′(x)    (3.9)
⋮    (3.10)
C₁y₁^(n−1)(x) + C₂y₂^(n−1)(x) + ... + C_ny_n^(n−1)(x) = φ^(n−1)(x)    (3.11)

This is a linear system of algebraic equations:

[ y₁        y₂        ...  y_n        ] [ C₁ ]   [ φ(x)        ]
[ y₁′       y₂′       ...  y_n′       ] [ C₂ ] = [ φ′(x)       ]
[ ⋮         ⋮              ⋮          ] [ ⋮  ]   [ ⋮           ]
[ y₁^(n−1)  y₂^(n−1)  ...  y_n^(n−1)  ] [ C_n]   [ φ^(n−1)(x)  ]    (3.12)

For a unique solution, we need the determinant of the coefficient matrix to be non-zero. This particular determinant is known as the Wronskian¹ W of y₁(x), y₂(x), ..., y_n(x) and is defined as

W = | y₁        y₂        ...  y_n       |
    | y₁′       y₂′       ...  y_n′      |
    | ⋮         ⋮              ⋮         |
    | y₁^(n−1)  y₂^(n−1)  ...  y_n^(n−1) |    (3.13)

W ≠ 0 indicates linear independence of the functions y₁(x), y₂(x), ..., y_n(x), since if φ(x) ≡ 0, the only solution is C_i = 0, i = 1, ..., n. Unfortunately, the converse is not always true; that is, if W = 0, the complementary functions may or may not be linearly dependent, though in most cases W = 0 indeed implies linear dependence.
Example 3.1
Determine the linear independence of (a) y₁ = x and y₂ = 2x, (b) y₁ = x and y₂ = x², and (c) y₁ = x² and y₂ = x|x| for x ∈ (−1, 1).

(a) W = | x 2x ; 1 2 | = 0, linearly dependent.

(b) W = | x x² ; 1 2x | = x² ≠ 0, linearly independent, except at x = 0.

(c) We can restate y₂ as

y₂(x) = −x²  for x ∈ (−1, 0]
y₂(x) = x²   for x ∈ (0, 1)

so that

W = | x² −x² ; 2x −2x | = −2x³ + 2x³ = 0  for x ∈ (−1, 0]
W = | x² x² ; 2x 2x | = 2x³ − 2x³ = 0  for x ∈ (0, 1)

Thus W = 0 for x ∈ (−1, 1), which suggests the functions may be linearly dependent. However, when we seek C₁ and C₂ such that C₁y₁ + C₂y₂ = 0, we find the only solution is C₁ = 0, C₂ = 0; therefore, the functions are in fact linearly independent, despite the fact that W = 0! Let's check this. For x ∈ (−1, 0],

C₁x² + C₂(−x²) = 0,

so we will need C₁ = C₂ at a minimum. For x ∈ (0, 1),

C₁x² + C₂x² = 0,

which gives the requirement that C₁ = −C₂. Substituting the first condition into the second gives C₂ = −C₂, which is only satisfied if C₂ = 0, thus requiring that C₁ = 0; hence the functions are indeed linearly independent.

¹Josef Hoëné de Wronski, 1778-1853, Polish-born French mathematician.
3.2 Complementary functions for equations with constant coefficients

This section will consider solutions to the homogeneous part of the differential equation.

3.2.1 Arbitrary order

Consider the homogeneous equation with constant coefficients

A_n y^(n) + A_(n−1) y^(n−1) + ... + A₁ y′ + A₀ y = 0    (3.14)

where A_i (i = 0, ..., n) are constants. To find the solution of this equation we let y = e^(rx). Substituting, we get

A_n rⁿ e^(rx) + A_(n−1) r^(n−1) e^(rx) + ... + A₁ r e^(rx) + A₀ e^(rx) = 0.    (3.15)

Canceling the common factor e^(rx), we get

A_n rⁿ + A_(n−1) r^(n−1) + ... + A₁ r + A₀ = 0    (3.16)
Σ_(j=0)^n A_j r^j = 0.    (3.17)

This is called the characteristic equation. It is an n-th order polynomial which has n roots (some of which could be repeated, some of which could be complex), r_i (i = 1, ..., n), from which n linearly independent complementary functions y_i(x) (i = 1, ..., n) have to be obtained. The general solution is then given by equation (3.6).

If all roots are real and distinct, then the complementary functions are simply e^(r_i x) (i = 1, ..., n). If, however, k of these roots are repeated, i.e. r₁ = r₂ = ... = r_k = r, then the linearly independent complementary functions are obtained by multiplying e^(rx) by 1, x, x², ..., x^(k−1). For a pair of complex conjugate roots p ± qi, one can use de Moivre's formula (see Appendix) to show that the complementary functions are e^(px) cos qx and e^(px) sin qx.
Example 3.2
Solve

d⁴y/dx⁴ − 2 d³y/dx³ + d²y/dx² + 2 dy/dx − 2y = 0.

Substituting y = e^(rx), we get the characteristic equation

r⁴ − 2r³ + r² + 2r − 2 = 0

which can be factorized as

(r + 1)(r − 1)(r² − 2r + 2) = 0

from which

r₁ = −1, r₂ = 1, r₃ = 1 + i, r₄ = 1 − i.

The general solution is

y(x) = C₁e^(−x) + C₂e^x + C₃e^((1+i)x) + C₄e^((1−i)x)
     = C₁e^(−x) + C₂e^x + C₃e^x e^(ix) + C₄e^x e^(−ix)
     = C₁e^(−x) + C₂e^x + e^x ( C₃e^(ix) + C₄e^(−ix) )
     = C₁e^(−x) + C₂e^x + e^x [ C₃(cos x + i sin x) + C₄(cos(−x) + i sin(−x)) ]
     = C₁e^(−x) + C₂e^x + e^x [ (C₃ + C₄) cos x + i(C₃ − C₄) sin x ]
y(x) = C₁e^(−x) + C₂e^x + e^x ( C₃′ cos x + C₄′ sin x )

where C₃′ = C₃ + C₄ and C₄′ = i(C₃ − C₄).
3.2.2 First order

The characteristic polynomial of the first-order equation

ay′ + by = 0    (3.18)

is

ar + b = 0    (3.19)

so

r = −b/a,    (3.20)

thus the complementary function for this equation is simply

y = Ce^(−bx/a).    (3.21)
3.2.3 Second order

The characteristic polynomial of the second-order equation

a d²y/dx² + b dy/dx + cy = 0    (3.22)

is

ar² + br + c = 0.    (3.23)

Depending on the coefficients of this quadratic equation, there are three cases to be considered.

• b² − 4ac > 0: two distinct real roots r₁ and r₂. The complementary functions are y₁ = e^(r₁x) and y₂ = e^(r₂x).
• b² − 4ac = 0: one real root. The complementary functions are y₁ = e^(rx) and y₂ = xe^(rx).
• b² − 4ac < 0: two complex conjugate roots p ± qi. The complementary functions are y₁ = e^(px) cos qx and y₂ = e^(px) sin qx.
Example 3.3
Solve

d²y/dx² − 3 dy/dx + 2y = 0.

The characteristic equation is

r² − 3r + 2 = 0

with solutions

r₁ = 1, r₂ = 2.

The general solution is then

y = C₁e^x + C₂e^(2x).

Example 3.4
Solve

d²y/dx² − 2 dy/dx + y = 0.

The characteristic equation is

r² − 2r + 1 = 0

with a repeated root

r = 1.

The general solution is then

y = C₁e^x + C₂xe^x.
Example 3.5
Solve

d²y/dx² − 2 dy/dx + 10y = 0.

The characteristic equation is

r² − 2r + 10 = 0

with solutions

r₁ = 1 + 3i, r₂ = 1 − 3i.

The general solution is then

y = e^x ( C₁ cos 3x + C₂ sin 3x ).
3.3 Complementary functions for equations with variable coefficients

3.3.1 One solution to find another

If y₁(x) is a known solution of

y′′ + P(x)y′ + Q(x)y = 0,    (3.24)

let the other solution be y₂(x) = u(x)y₁(x). We then form derivatives of y₂ and substitute into the original differential equation. First the derivatives:

y₂′ = uy₁′ + u′y₁    (3.25)
y₂′′ = uy₁′′ + u′y₁′ + u′y₁′ + u′′y₁    (3.26)
y₂′′ = uy₁′′ + 2u′y₁′ + u′′y₁.    (3.27)

Substituting into equation (3.24), we get

(uy₁′′ + 2u′y₁′ + u′′y₁) + P(x)(uy₁′ + u′y₁) + Q(x)uy₁ = 0    (3.29)
u′′y₁ + u′(2y₁′ + P(x)y₁) + u(y₁′′ + P(x)y₁′ + Q(x)y₁) = 0    (3.30)

cancel coefficient on u:

u′′y₁ + u′(2y₁′ + P(x)y₁) = 0.    (3.31)

This can be written as a first-order equation in v, where v = u′:

v′y₁ + v(2y₁′ + P(x)y₁) = 0    (3.32)

which is solved for v(x) using known methods for first-order equations.
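As a sketch of the procedure, take the equation of Example 3.6 in standard form, y′′ − (2/x)y′ + (2/x²)y = 0, with the known solution y₁ = x (this choice is for illustration); solving (3.32) for v = u′ and integrating recovers the second solution y₂ ∝ x²:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
v = sp.Function('v')

# y'' + P y' + Q y = 0 with P = -2/x, Q = 2/x^2, and known solution y1 = x
P, y1 = -2/x, x
# Equation (3.32): v' y1 + v (2 y1' + P y1) = 0, where v = u'
v_sol = sp.dsolve(sp.Eq(v(x).diff(x)*y1 + v(x)*(2*sp.diff(y1, x) + P*y1), 0), v(x)).rhs
y2 = sp.simplify(sp.integrate(v_sol, x)*y1)          # y2 = u y1
residual = sp.simplify(sp.diff(y2, x, 2) - (2/x)*sp.diff(y2, x) + (2/x**2)*y2)
print(y2, residual)   # y2 proportional to x**2; residual 0
```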
3.3.2 Euler equation

An equation of the type

x² d²y/dx² + Ax dy/dx + By = 0,    (3.33)

where A and B are constants, can be solved by a change of independent variables. Let

z = ln x,

so that

x = e^z.

Then

dz/dx = 1/x = e^(−z),
dy/dx = (dy/dz)(dz/dx) = e^(−z) dy/dz,

so

d/dx = e^(−z) d/dz,
d²y/dx² = d/dx (dy/dx) = e^(−z) d/dz ( e^(−z) dy/dz ) = e^(−2z) ( d²y/dz² − dy/dz ).

Substituting into the differential equation, we get

d²y/dz² + (A − 1) dy/dz + By = 0    (3.34)

which is an equation with constant coefficients.
Example 3.6
Solve

x²y′′ − 2xy′ + 2y = 0, for x > 0.

With x = e^z, we get

d²y/dz² − 3 dy/dz + 2y = 0.

The solution is

y = C₁e^z + C₂e^(2z) = C₁x + C₂x².

Note that this equation can also be solved by letting y = x^r. Substituting into the equation, we get r² − 3r + 2 = 0, so that r₁ = 1 and r₂ = 2. The solution is then obtained as a linear combination of x^(r₁) and x^(r₂).
3.4 Particular solutions
We will now consider particular solutions of the inhomogeneous equation (3.1).
3.4.1 Method of undetermined coefficients

Guess a solution with unknown coefficients and then substitute in the equation to determine these coefficients.

Example 3.7
Solve

y′′ + 4y′ + 4y = 169 sin 3x.

The characteristic equation is

r² + 4r + 4 = 0
(r + 2)(r + 2) = 0
r = −2.

Since the roots are repeated, the complementary functions are

y₁ = e^(−2x),  y₂ = xe^(−2x).

For the particular function, guess

y_p = a sin 3x + b cos 3x,

so

y_p′ = 3a cos 3x − 3b sin 3x
y_p′′ = −9a sin 3x − 9b cos 3x.

Substituting into the differential equation, we get

(−9a sin 3x − 9b cos 3x) + 4(3a cos 3x − 3b sin 3x) + 4(a sin 3x + b cos 3x) = 169 sin 3x
(−5a − 12b) sin 3x + (12a − 5b) cos 3x = 169 sin 3x.

Equating the coefficients of the sin and cos terms,

−5a − 12b = 169,  12a − 5b = 0,

we find that a = −5 and b = −12. The solution is then

y(x) = (C₁ + C₂x)e^(−2x) − 5 sin 3x − 12 cos 3x.
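The linear system for the undetermined coefficients can be generated and solved mechanically; a sympy sketch:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
yp = a*sp.sin(3*x) + b*sp.cos(3*x)                       # trial form from Example 3.7
residual = sp.expand(sp.diff(yp, x, 2) + 4*sp.diff(yp, x) + 4*yp - 169*sp.sin(3*x))
# Collect the coefficients of sin(3x) and cos(3x) and solve for a, b
coeffs = sp.solve([residual.coeff(sp.sin(3*x)), residual.coeff(sp.cos(3*x))], [a, b])
print(coeffs)   # {a: -5, b: -12}
```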
Example 3.8
Solve

y′′′′ − 2y′′′ + y′′ + 2y′ − 2y = x² + x + 1.

Let the particular solution be of the form y_p = ax² + bx + c. Substituting, we get

−(2a + 1)x² + (4a − 2b − 1)x + (2a + 2b − 2c − 1) = 0.

For this to hold for all values of x, the coefficients must be zero, from which a = −1/2, b = −3/2, and c = −5/2. Thus

y_p = −(1/2)(x² + 3x + 5).

The solution of the homogeneous equation was found in a previous example, so that the general solution is

y = C₁e^(−x) + C₂e^x + e^x ( C₃ cos x + C₄ sin x ) − (1/2)(x² + 3x + 5).
A variant must be attempted if any term of F(x) is a complementary function.
Example 3.9
Solve

y′′ + 4y = 6 sin 2x.

Since sin 2x is a complementary function, we will try

y_p = x(a sin 2x + b cos 2x),

from which

y_p′ = 2x(a cos 2x − b sin 2x) + (a sin 2x + b cos 2x)
y_p′′ = −4x(a sin 2x + b cos 2x) + 4(a cos 2x − b sin 2x).

Substituting into the equation, we compare coefficients and get a = 0, b = −3/2. The general solution is then

y = C₁ sin 2x + C₂ cos 2x − (3/2)x cos 2x.
Example 3.10
Solve

y′′ + 2y′ + y = xe^(−x).

The complementary functions are e^(−x) and xe^(−x). To get the particular solution we have to choose a function of the kind y_p = ax³e^(−x). On substitution we find that a = 1/6. Thus the general solution is

y = C₁e^(−x) + C₂xe^(−x) + (1/6)x³e^(−x).
3.4.2 Variation of parameters

For an equation of the kind

P_n(x)y^(n) + P_(n−1)(x)y^(n−1) + ... + P₁(x)y′ + P₀(x)y = F(x)    (3.35)

we propose

y_p = Σ_(i=1)^n u_i(x)y_i(x)    (3.36)

where y_i(x) (i = 1, ..., n) are complementary functions of the equation, and u_i(x) are n unknown functions. Differentiating, we have

y_p′ = Σ_(i=1)^n u_i′y_i + Σ_(i=1)^n u_iy_i′.

We set Σ_(i=1)^n u_i′y_i to zero as a first condition. Differentiating the rest,

y_p′′ = Σ_(i=1)^n u_i′y_i′ + Σ_(i=1)^n u_iy_i′′.

Again we set the first term on the right side to zero as a second condition. Following this procedure repeatedly we arrive at

y_p^(n−1) = Σ_(i=1)^n u_i′y_i^(n−2) + Σ_(i=1)^n u_iy_i^(n−1).

The vanishing of the first term on the right gives us the (n − 1)-th condition. Substituting these into the governing equation, the last condition

P_n(x) Σ_(i=1)^n u_i′y_i^(n−1) + Σ_(i=1)^n u_i [ P_ny_i^(n) + P_(n−1)y_i^(n−1) + ... + P₁y_i′ + P₀y_i ] = F(x)

is obtained. Since each of the functions y_i is a complementary function, the term within brackets is zero.
To summarize, we have the following n equations in the n unknowns u_i′ (i = 1, ..., n) that we have obtained:

Σ_(i=1)^n u_i′y_i = 0,
Σ_(i=1)^n u_i′y_i′ = 0,
⋮    (3.37)
Σ_(i=1)^n u_i′y_i^(n−2) = 0,
P_n(x) Σ_(i=1)^n u_i′y_i^(n−1) = F(x).

These can be solved for the u_i′, and then integrated to give the u_i's.
Example 3.11
Solve

y′′ + y = tan x.

The complementary functions are

y₁ = cos x,  y₂ = sin x.

The equations for u₁(x) and u₂(x) are

u₁′y₁ + u₂′y₂ = 0
u₁′y₁′ + u₂′y₂′ = tan x.

Solving this system, which is linear in u₁′ and u₂′, we get

u₁′ = −sin x tan x,  u₂′ = cos x tan x.

Integrating, we get

u₁ = ∫ −sin x tan x dx = sin x − ln |sec x + tan x|,
u₂ = ∫ cos x tan x dx = −cos x.

The particular solution is

y_p = u₁y₁ + u₂y₂ = (sin x − ln |sec x + tan x|) cos x − cos x sin x
    = −cos x ln |sec x + tan x|.

The complete solution, obtained by adding the complementary and particular, is

y = C₁ cos x + C₂ sin x − cos x ln |sec x + tan x|.
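A sympy check of the particular solution (writing ln|sec x + tan x| as ln((1 + sin x)/cos x) to keep the algebra rational in sin and cos):

```python
import sympy as sp

x = sp.symbols('x')
# Particular solution of Example 3.11: -cos(x) ln|sec x + tan x|
yp = -sp.cos(x)*sp.log((1 + sp.sin(x))/sp.cos(x))
residual = sp.simplify(sp.diff(yp, x, 2) + yp - sp.tan(x))
print(residual)   # 0: satisfies y'' + y = tan(x)
```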
3.4.3 Operator D

The linear operator D is defined by

D(y) = dy/dx,

or, in terms of the operator alone,

D = d/dx.

The operator can be repeatedly applied, so that

Dⁿ(y) = dⁿy/dxⁿ.

Another example of its use is

(D − a)(D − b)f(x) = (D − a)[(D − b)f(x)]
                   = (D − a)[ df/dx − bf ]
                   = d²f/dx² − (a + b) df/dx + abf.
Negative powers of D are related to integrals. This comes from

dy(x)/dx = f(x), y(x_o) = y_o
y(x) = y_o + ∫_(x_o)^x f(s) ds;

then

substituting: D[y(x)] = f(x)
apply inverse: D⁻¹[D[y(x)]] = D⁻¹[f(x)]
y(x) = D⁻¹[f(x)] = y_o + ∫_(x_o)^x f(s) ds,

so

D⁻¹ = y_o + ∫_(x_o)^x (...) ds.
We can evaluate h(x) where

h(x) = (1/(D − a)) f(x)    (3.38)

in the following way:

(D − a) h(x) = (D − a) [ (1/(D − a)) f(x) ]
(D − a) h(x) = f(x)
dh(x)/dx − ah(x) = f(x)
e^(−ax) dh(x)/dx − ae^(−ax) h(x) = f(x)e^(−ax)
d/dx ( e^(−ax) h(x) ) = f(x)e^(−ax)
d/ds ( e^(−as) h(s) ) = f(s)e^(−as)
∫_(x_o)^x d/ds ( e^(−as) h(s) ) ds = ∫_(x_o)^x f(s)e^(−as) ds
e^(−ax) h(x) − e^(−ax_o) h(x_o) = ∫_(x_o)^x f(s)e^(−as) ds
h(x) = e^(a(x−x_o)) h(x_o) + e^(ax) ∫_(x_o)^x f(s)e^(−as) ds
(1/(D − a)) f(x) = e^(a(x−x_o)) h(x_o) + e^(ax) ∫_(x_o)^x f(s)e^(−as) ds.

This gives us h(x) explicitly in terms of the known function f such that h satisfies D[h] − ah = f.
We can iterate to find the solution to higher-order equations such as

(D − a)(D − b)y(x) = f(x), y(x_o) = y_o, y′(x_o) = y′_o
(D − b)y(x) = (1/(D − a)) f(x)
(D − b)y(x) = h(x)
y(x) = y_o e^(b(x−x_o)) + e^(bx) ∫_(x_o)^x h(s)e^(−bs) ds.

Note that

dy/dx = y_o b e^(b(x−x_o)) + h(x) + b e^(bx) ∫_(x_o)^x h(s)e^(−bs) ds
dy/dx (x_o) = y′_o = y_o b + h(x_o),

which can be rewritten as

(D − b)[y(x_o)] = h(x_o),

which is what one would expect.
Returning to the problem at hand, we take our expression for h(x), evaluate it at x = s and substitute into the expression for y(x) to get

y(x) = y_o e^(b(x−x_o)) + e^(bx) ∫_(x_o)^x [ h(x_o)e^(a(s−x_o)) + e^(as) ∫_(x_o)^s f(t)e^(−at) dt ] e^(−bs) ds
     = y_o e^(b(x−x_o)) + e^(bx) ∫_(x_o)^x [ (y′_o − y_o b) e^(a(s−x_o)) + e^(as) ∫_(x_o)^s f(t)e^(−at) dt ] e^(−bs) ds
     = y_o e^(b(x−x_o)) + e^(bx) (y′_o − y_o b) ∫_(x_o)^x e^((a−b)s−ax_o) ds + e^(bx) ∫_(x_o)^x e^((a−b)s) ( ∫_(x_o)^s f(t)e^(−at) dt ) ds
     = y_o e^(b(x−x_o)) + (y′_o − y_o b) ( e^(a(x−x_o)) − e^(b(x−x_o)) )/(a − b) + e^(bx) ∫_(x_o)^x ∫_(x_o)^s e^((a−b)s) f(t)e^(−at) dt ds
Changing the order of integration and integrating on s,

y(x) = y_o e^(b(x−x_o)) + (y′_o − y_o b) ( e^(a(x−x_o)) − e^(b(x−x_o)) )/(a − b) + e^(bx) ∫_(x_o)^x ∫_t^x e^((a−b)s) f(t)e^(−at) ds dt
     = y_o e^(b(x−x_o)) + (y′_o − y_o b) ( e^(a(x−x_o)) − e^(b(x−x_o)) )/(a − b) + e^(bx) ∫_(x_o)^x f(t)e^(−at) ( ∫_t^x e^((a−b)s) ds ) dt
     = y_o e^(b(x−x_o)) + (y′_o − y_o b) ( e^(a(x−x_o)) − e^(b(x−x_o)) )/(a − b) + ∫_(x_o)^x ( f(t)/(a − b) ) ( e^(a(x−t)) − e^(b(x−t)) ) dt.

Thus we have a solution to the second-order linear differential equation with constant coefficients and arbitrary forcing expressed in integral form. A similar alternate expression can be developed when a = b.
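The integral formula can be spot-checked for concrete values; a sympy sketch with a = 1, b = 2, f(t) = t, x_o = 0 and homogeneous initial conditions (all choices are illustrative):

```python
import sympy as sp

x, t = sp.symbols('x t')
a, b = 1, 2            # concrete roots; f(t) = t, x_o = 0, y_o = y'_o = 0
y = sp.simplify(sp.integrate(t/(a - b)*(sp.exp(a*(x - t)) - sp.exp(b*(x - t))), (t, 0, x)))
# y should satisfy y'' - (a+b) y' + ab y = x with y(0) = y'(0) = 0
print(sp.simplify(sp.diff(y, x, 2) - 3*sp.diff(y, x) + 2*y - x))   # 0
print(sp.simplify(y.subs(x, 0)), sp.simplify(sp.diff(y, x).subs(x, 0)))  # 0 0
```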
3.4.4 Green's functions

A similar goal can be achieved for boundary value problems involving a more general linear operator L. If on the closed interval a ≤ x ≤ b we have a two-point boundary value problem for a general linear differential equation of the form

Ly = f(x),    (3.39)

where the highest derivative in L is of order n, and with general homogeneous boundary conditions at x = a and x = b on linear combinations of y and n − 1 of its derivatives,

A ( y(a), y′(a), ..., y^(n−1)(a) )ᵀ + B ( y(b), y′(b), ..., y^(n−1)(b) )ᵀ = 0,    (3.40)

where A and B are n × n constant coefficient matrices, then, knowing L, A and B, we can form a solution of the form

y(x) = ∫_a^b f(s)g(x, s) ds.    (3.41)

This is desirable as

• once g(x, s) is known, the solution is defined for all f, including
  – forms of f for which no simple explicit integrals can be written
  – piecewise continuous forms of f
• numerical solution of the quadrature problem is more robust than direct numerical solution of the original differential equation
• the solution will automatically satisfy all boundary conditions
• the solution is useful in experiments in which the system dynamics are well characterized (e.g. mass-spring-damper) but the forcing may be erratic (perhaps digitally specified)
We now define the Green's² function g(x, s) and proceed to show that, with this definition, we are guaranteed to achieve the solution to the differential equation in the desired form as shown at the beginning of the section. We take g(x, s) to be the Green's function for the linear differential operator L if it satisfies the following conditions:

1. Lg(x, s) = δ(x − s)
2. g(x, s) satisfies all boundary conditions given on x
3. g(x, s) is a solution of Lg = 0 on a ≤ x < s and on s < x ≤ b
4. g(x, s), g′(x, s), ..., g^(n−2)(x, s) are continuous for [a, b]
5. g^(n−1)(x, s) is continuous for [a, b] except at x = s, where it has a jump of 1/P_n(s)

Also for purposes of the above conditions, s is thought of as a constant parameter. In the actual Green's function representation of the solution, s is a dummy variable.
These conditions are not all independent; nor is the dependence obvious. Consider, for
example,

    L = P_2(x) d²/dx² + P_1(x) d/dx + P_0(x)

Then we have

    P_2(x) d²g/dx² + P_1(x) dg/dx + P_0(x) g = δ(x − s)

    d²g/dx² + [P_1(x)/P_2(x)] dg/dx + [P_0(x)/P_2(x)] g = δ(x − s)/P_2(x)
Now integrate both sides with respect to x in a small neighborhood s − ε ≤ x ≤ s + ε
enveloping x = s:

    ∫_{s−ε}^{s+ε} (d²g/dx²) dx + ∫_{s−ε}^{s+ε} [P_1(x)/P_2(x)] (dg/dx) dx + ∫_{s−ε}^{s+ε} [P_0(x)/P_2(x)] g dx
        = ∫_{s−ε}^{s+ε} [δ(x − s)/P_2(x)] dx

Since the P's are continuous, as we let ε → 0 we get

    ∫_{s−ε}^{s+ε} (d²g/dx²) dx + [P_1(s)/P_2(s)] ∫_{s−ε}^{s+ε} (dg/dx) dx + [P_0(s)/P_2(s)] ∫_{s−ε}^{s+ε} g dx
        = [1/P_2(s)] ∫_{s−ε}^{s+ε} δ(x − s) dx
Integrating,

    dg/dx|_{s+ε} − dg/dx|_{s−ε} + [P_1(s)/P_2(s)] (g|_{s+ε} − g|_{s−ε}) + [P_0(s)/P_2(s)] ∫_{s−ε}^{s+ε} g dx
        = [1/P_2(s)] H(x − s)|_{s−ε}^{s+ε}

Since g is continuous, this reduces to

    dg/dx|_{s+ε} − dg/dx|_{s−ε} = 1/P_2(s)
² George Green, 1793–1841, English corn miller and mathematician of humble origin and uncertain
education, though he generated modern mathematics of the first rank.
This is consistent with the final point, that the second highest derivative of g suffers a jump
at x = s.

Next, we show that applying this definition of g(x, s) to our desired result lets us recover
the original differential equation, rendering g(x, s) to be appropriately defined. This can be
easily shown by direct substitution:

    y(x) = ∫_a^b f(s) g(x, s) ds

    Ly = L ∫_a^b f(s) g(x, s) ds

Since L behaves as ∂^n/∂x^n, via Leibniz's rule,

    Ly = ∫_a^b f(s) Lg(x, s) ds = ∫_a^b f(s) δ(x − s) ds = f(x)
The analysis can be extended in a straightforward manner to more arbitrary systems with
inhomogeneous boundary conditions using matrix methods (cf. Wylie and Barrett, 1995).
Example 3.12
Find the Green's function and the corresponding solution integral of the differential equation

    d²y/dx² = f(x)

subject to boundary conditions

    y(0) = 0,  y(1) = 0

Verify the solution integral if f(x) = 6x.

Here

    L = d²/dx²

Now 1) break the problem up into two domains: a) x < s, b) x > s; 2) solve Lg = 0 in both
domains, which yields four constants; 3) use the boundary conditions to fix two constants;
4) use the conditions at x = s (continuity of g and a jump in dg/dx) to fix the other two.
a) x < s

    d²g/dx² = 0
    dg/dx = C_1
    g = C_1 x + C_2
    g(0) = 0 = C_1 (0) + C_2, so C_2 = 0

    g(x, s) = C_1 x,  x < s
b) x > s

    d²g/dx² = 0
    dg/dx = C_3
    g = C_3 x + C_4
    g(1) = 0 = C_3 (1) + C_4, so C_4 = −C_3

    g(x, s) = C_3 (x − 1),  x > s

Continuity of g(x, s) when x = s:

    C_1 s = C_3 (s − 1)
    C_1 = C_3 (s − 1)/s

    g(x, s) = C_3 [(s − 1)/s] x,  x < s
    g(x, s) = C_3 (x − 1),  x > s

Jump in dg/dx at x = s (note P_2(x) = 1):

    dg/dx|_{s+ε} − dg/dx|_{s−ε} = 1
    C_3 − C_3 (s − 1)/s = 1
    C_3 = s

    g(x, s) = x(s − 1),  x < s
    g(x, s) = s(x − 1),  x > s
Note some properties of g(x, s) which are common in such problems:

• it is broken into two domains
• it is continuous in and through both domains
• its (n − 1)th (here n = 2, so first) derivative is discontinuous at x = s
• it is symmetric in s and x across the two domains
• it is seen by inspection to satisfy both boundary conditions
The general solution in integral form can be written by breaking the integral into two pieces as

    y(x) = ∫_0^x f(s) s(x − 1) ds + ∫_x^1 f(s) x(s − 1) ds

    y(x) = (x − 1) ∫_0^x f(s) s ds + x ∫_x^1 f(s) (s − 1) ds

Now evaluate the integral if f(x) = 6x (thus f(s) = 6s).

    y(x) = (x − 1) ∫_0^x (6s) s ds + x ∫_x^1 (6s) (s − 1) ds

         = (x − 1) ∫_0^x 6s² ds + x ∫_x^1 (6s² − 6s) ds

         = (x − 1) [2s³]_0^x + x [2s³ − 3s²]_x^1

         = (x − 1)(2x³ − 0) + x [(2 − 3) − (2x³ − 3x²)]

         = 2x⁴ − 2x³ − x − 2x⁴ + 3x³

    y(x) = x³ − x
Note the original diﬀerential equation and both boundary conditions are automatically satisﬁed by the
solution.
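As a check that is not in the original notes, the quadrature form y(x) = ∫_0^1 f(s) g(x, s) ds can be evaluated numerically and compared against the closed form just derived. A minimal Python sketch; the function names and the grid size are arbitrary choices:

```python
# Numerical check of y(x) = integral_0^1 f(s) g(x,s) ds for y'' = f(x),
# y(0) = y(1) = 0, using the Green's function derived in the text:
# g(x,s) = x(s-1) for x < s and s(x-1) for x > s.

def g(x, s):
    """Green's function for L = d^2/dx^2 with y(0) = y(1) = 0."""
    return x * (s - 1.0) if x < s else s * (x - 1.0)

def green_solution(f, x, n=2000):
    """Midpoint-rule approximation of y(x) = integral_0^1 f(s) g(x,s) ds."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) * g(x, (i + 0.5) * h) for i in range(n))

# f(x) = 6x should reproduce y = x^3 - x from the text.
for xv in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(green_solution(lambda s: 6.0 * s, xv) - (xv**3 - xv)) < 1e-4
```

Note how the boundary conditions need no special handling: they are built into g.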
The solution is plotted in Figure 3.1.

Figure 3.1: Sketch of problem solution, y'' = 6x, y(0) = y(1) = 0: y(x) = x³ − x shown in
the domain of interest 0 < x < 1 and in the expanded domain −2 < x < 2.
Problems

1. Find the general solution of the differential equation

    y' + x² y (1 + y) = 1 + x³ (1 + x)

2. Show that the functions y_1 = sin x, y_2 = x cos x, and y_3 = x are linearly independent. Find the
   lowest order differential equation of which they are the complementary functions.

3. Solve the following initial value problem for (a) C = 6, (b) C = 4, and (c) C = 3 with y(0) = 1 and
   y'(0) = −3:

    d²y/dt² + C dy/dt + 4y = 0
Plot your results.
4. Solve

   (a) d³y/dx³ − 3 d²y/dx² + 4y = 0
   (b) d⁴y/dx⁴ − 5 d³y/dx³ + 11 d²y/dx² − 7 dy/dx = 12
   (c) y'' + 2y = 6e^x + cos 2x
   (d) x² y'' − 3x y' − 5y = x² log x
   (e) d²y/dx² + y = 2e^x cos x + (e^x − 2) sin x

5. Find a particular solution to the following ODE using (a) variation of parameters and (b) undetermined
   coefficients:

    d²y/dx² − 4y = cosh 2x
6. Solve the boundary value problem

    d²y/dx² + y dy/dx = 0

   with boundary conditions y(0) = 0 and y(π/2) = −1. Plot your result.
7. Solve

    2x² d³y/dx³ + 2x d²y/dx² − 8 dy/dx = 1

   with y(1) = 4, y'(1) = 8, y(2) = 11. Plot your result.
8. Solve

    x² y'' + x y' − 4y = 6x

9. Find the general solution of

    y'' + 2y' + y = x e^{−x}

10. Find the Green's function solution of

    y'' + y' − 2y = f(x)

   with y(0) = 0, y(1) = 0. Determine y(x) if f(x) = 3 sin x. Plot your result.

11. Find the Green's function solution of

    y'' + y = f(x)

   with y(0) = 0, y'(π) = 0. Verify this is the correct solution when f(x) = x³. Plot your result.

12. Solve y''' − 2y'' − y' + 2y = sin² x.

13. Solve y''' + 6y'' + 12y' + 8y = e^x − 3 sin x − 8e^{−2x}.

14. Solve x⁴ y''' + 7x³ y'' + 8x² y' = 4x^{−3}.

15. Show that x^{−1} and x⁵ are solutions of the equation

    x² y'' − 3x y' − 5y = 0

   Thus find the general solution of

    x² y'' − 3x y' − 5y = x²

16. Solve the equation

    2y'' − 4y' + 2y = e^x / x

   where x > 0.
Chapter 4
Series solution methods
see Kaplan, Chapter 6,
see Hinch, Chapters 1, 2, 5, 6, 7,
see Bender and Orszag,
see Lopez, Chapters 7–11, 14,
see Riley, Hobson, and Bence, Chapter 14.
This chapter will deal with series solution methods. Such methods are useful in solving both
algebraic and diﬀerential equations. The ﬁrst method is formally exact in that an inﬁnite
number of terms can often be shown to have absolute and uniform convergence properties.
The second method, asymptotic series solutions, is less rigorous in that convergence is not
always guaranteed; in fact convergence is rarely examined because the problems tend to
be intractable. Still, asymptotic methods will be seen to be quite useful in interpreting the
results of highly nonlinear equations in local domains.
4.1 Power series
Solutions to many diﬀerential equations cannot be found in a closed form solution expressed
for instance in terms of polynomials and transcendental functions such as sin and cos. Often,
instead, the solutions can be expressed as an inﬁnite series of polynomials. It is desirable
to get a complete expression for the nth term of the series so that one can make statements
regarding absolute and uniform convergence of the series. Such solutions are approximate
in that if one uses a ﬁnite number of terms to represent the solution, there is a truncation
error. Formally though, for series which converge, an inﬁnite number of terms gives a true
representation of the actual solution, and hence the method is exact.
4.1.1 Firstorder equation
An equation of the form

    dy/dx + P(x) y = Q(x)    (4.1)

where P(x) and Q(x) are analytic at x = a has a power series solution

    y(x) = Σ_{n=0}^∞ a_n (x − a)^n    (4.2)

around this point.
Example 4.1
Find the power series solution of

    dy/dx = y,  y(0) = y_o

around x = 0. Let

    y = a_0 + a_1 x + a_2 x² + a_3 x³ + ···

so that

    dy/dx = a_1 + 2a_2 x + 3a_3 x² + 4a_4 x³ + ···

Substituting in the equation, we have

    (a_1 − a_0) + (2a_2 − a_1) x + (3a_3 − a_2) x² + (4a_4 − a_3) x³ + ··· = 0

If this is valid for all x, the coefficients must be all zero. Thus

    a_1 = a_0
    a_2 = (1/2) a_1 = (1/2) a_0
    a_3 = (1/3) a_2 = (1/3!) a_0
    a_4 = (1/4) a_3 = (1/4!) a_0
    ...

so that

    y(x) = a_0 [1 + x + x²/2! + x³/3! + x⁴/4! + ···]

Applying the initial condition at x = 0 gives a_0 = y_o, so

    y(x) = y_o [1 + x + x²/2! + x³/3! + x⁴/4! + ···]

Of course this power series is the Taylor¹ series expansion of the closed form solution
y = y_o e^x. For y_o = 1 the exact solution and three approximations to the exact solution
are shown in Figure 4.1.
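The behavior of the truncated series can be reproduced numerically against the closed form y_o e^x; a small Python sketch that is not in the original notes (the truncation levels are arbitrary choices):

```python
import math

# Partial sums of the series solution y = y_o * sum_n x^n/n! of dy/dx = y,
# y(0) = y_o, compared with the closed form y_o * exp(x).

def series_y(x, yo=1.0, N=10):
    """Truncated power series solution of y' = y, y(0) = yo."""
    return yo * sum(x**n / math.factorial(n) for n in range(N + 1))

x = 1.5
for N in (1, 2, 10, 20):
    print(N, series_y(x, N=N), math.exp(x))
assert abs(series_y(x, N=20) - math.exp(x)) < 1e-12
```

As in Figure 4.1, the low-order truncations drift away from the exact curve as |x| grows, while the high-order sum is indistinguishable from e^x on a moderate interval.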
Alternatively, one can use a compact summation notation. Thus

    y = Σ_{n=0}^∞ a_n x^n

    dy/dx = Σ_{n=0}^∞ n a_n x^{n−1} = Σ_{n=1}^∞ n a_n x^{n−1}

With m = n − 1,

    dy/dx = Σ_{m=0}^∞ (m + 1) a_{m+1} x^m = Σ_{n=0}^∞ (n + 1) a_{n+1} x^n

so that

    (n + 1) a_{n+1} = a_n,  a_n = a_0/n!

    y = a_0 Σ_{n=0}^∞ x^n/n! = y_o Σ_{n=0}^∞ x^n/n!

¹ Brook Taylor, 1685–1731, English mathematician, musician, and painter.

Figure 4.1: Comparison of truncated series and exact solutions for y' = y, y(0) = 1:
exact y = exp(x) with the truncations y = 1, y = 1 + x, and y = 1 + x + x²/2.
The ratio test tells us that

    lim_{n→∞} |a_{n+1}/a_n| = 1/(n + 1) → 0,

so the series converges absolutely.
If a series is uniformly convergent in a domain, it converges at the same rate for all x in that
domain. We can use the Weierstrass² M-test for uniform convergence. That is, for a series

    Σ_{n=0}^∞ u_n(x)

to be uniformly convergent, we need a convergent series of constants Σ_{n=0}^∞ M_n to exist
such that

    |u_n(x)| ≤ M_n

for all x in the domain. For our problem, we take the domain to be −A ≤ x ≤ A, where A > 0.

² Karl Theodor Wilhelm Weierstrass, 1815–1897, Westphalia-born German mathematician.
So for uniform convergence we must have

    |x^n/n!| ≤ M_n

So take

    M_n = A^n/n!

(Note M_n is thus strictly positive.) So

    Σ_{n=0}^∞ M_n = Σ_{n=0}^∞ A^n/n!

By the ratio test, this is convergent if

    lim_{n→∞} | [A^{n+1}/(n + 1)!] / [A^n/n!] | ≤ 1

    lim_{n→∞} A/(n + 1) ≤ 1

This holds for all A, so in the domain −∞ < x < ∞ the series converges absolutely and
uniformly.
4.1.2 Second-order equation

We consider series solutions of

    P(x) d²y/dx² + Q(x) dy/dx + R(x) y = 0    (4.3)

around x = a. There are three different cases, depending on the behavior of P(a), Q(a) and
R(a), in which x = a is classified as an ordinary point, a regular singular point, or an
irregular singular point. These are described below.
4.1.2.1 Ordinary point

If P(a) ≠ 0 and Q/P, R/P are analytic at x = a, this point is called an ordinary point. The
general solution is y = C_1 y_1(x) + C_2 y_2(x) where y_1 and y_2 are of the form
Σ_{n=0}^∞ a_n (x − a)^n. The radius of convergence of the series is the distance to the
nearest complex singularity, i.e. the distance between x = a and the closest point on the
complex plane at which Q/P or R/P is not analytic.
Example 4.2
Find the series solution of

    y'' + x y' + y = 0,  y(0) = y_o,  y'(0) = y_o'

around x = 0.
x = 0 is an ordinary point, so that we have

    y = Σ_{n=0}^∞ a_n x^n

    y' = Σ_{n=1}^∞ n a_n x^{n−1}

    x y' = Σ_{n=1}^∞ n a_n x^n = Σ_{n=0}^∞ n a_n x^n

    y'' = Σ_{n=2}^∞ n(n − 1) a_n x^{n−2}
        = Σ_{m=0}^∞ (m + 1)(m + 2) a_{m+2} x^m    (m = n − 2)
        = Σ_{n=0}^∞ (n + 1)(n + 2) a_{n+2} x^n

Substituting in the equation we get

    Σ_{n=0}^∞ [(n + 1)(n + 2) a_{n+2} + n a_n + a_n] x^n = 0

Equating the coefficients to zero, we get

    a_{n+2} = −[1/(n + 2)] a_n
so that

    y = a_0 [1 − x²/2 + x⁴/(4·2) − x⁶/(6·4·2) + ···]
      + a_1 [x − x³/3 + x⁵/(5·3) − x⁷/(7·5·3) + ···]

    y = y_o [1 − x²/2 + x⁴/(4·2) − x⁶/(6·4·2) + ···]
      + y_o' [x − x³/3 + x⁵/(5·3) − x⁷/(7·5·3) + ···]

    y = y_o Σ_{n=0}^∞ [(−1)^n/(2^n n!)] x^{2n} + y_o' Σ_{n=1}^∞ [(−1)^{n−1} 2^n n!/(2n)!] x^{2n−1}

The series converges for all x. For y_o = 1, y_o' = 0 the exact solution, which can be shown
to be

    y = exp(−x²/2),

and two approximations to the exact solution are shown in Figure 4.2.
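The recursion can be checked numerically against the closed form; a Python sketch that is not part of the original notes (the truncation N is an arbitrary choice):

```python
import math

# Series solution of y'' + x y' + y = 0 built from the text's recursion
# a_{n+2} = -a_n/(n+2), with a_0 = y(0) and a_1 = y'(0), compared with the
# closed form exp(-x^2/2) for y(0) = 1, y'(0) = 0.

def series_solution(x, a0=1.0, a1=0.0, N=40):
    a = [a0, a1]
    for n in range(N - 1):
        a.append(-a[n] / (n + 2))        # the recursion from the text
    return sum(c * x**n for n, c in enumerate(a))

for xv in (0.5, 1.0, 2.0):
    assert abs(series_solution(xv) - math.exp(-xv**2 / 2)) < 1e-9
```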
Figure 4.2: Comparison of truncated series and exact solutions for y'' + x y' + y = 0,
y(0) = 1, y'(0) = 0: exact y = exp(−x²/2) with the truncations y = 1 − x²/2 and
y = 1 − x²/2 + x⁴/8.
4.1.2.2 Regular singular point

If P(a) = 0, then x = a is a singular point. Furthermore, if (x − a)Q/P and (x − a)²R/P
are both analytic at x = a, this point is called a regular singular point. Then there exists at
least one solution of the form

    (x − a)^r Σ_{n=0}^∞ a_n (x − a)^n

This is the method of Frobenius³. The radius of convergence of the series is again the
distance to the nearest complex singularity.

An equation for r is called the indicial equation. The following are the different kinds of
solutions of the indicial equation possible:
1. r_1 ≠ r_2, and r_1 − r_2 not an integer. Then

    y_1 = (x − a)^{r_1} Σ_{n=0}^∞ a_n (x − a)^n    (4.4)

    y_2 = (x − a)^{r_2} Σ_{n=0}^∞ b_n (x − a)^n    (4.5)

2. r_1 = r_2 = r. Then

    y_1 = (x − a)^r Σ_{n=0}^∞ a_n (x − a)^n    (4.6)

    y_2 = y_1 ln x + (x − a)^r Σ_{n=1}^∞ b_n (x − a)^n    (4.7)

3. r_1 ≠ r_2, and r_1 − r_2 a positive integer. Then

    y_1 = (x − a)^{r_1} Σ_{n=0}^∞ a_n (x − a)^n    (4.8)
³ Ferdinand Georg Frobenius, 1849–1917, Prussian/German mathematician.
    y_2 = k y_1 ln x + (x − a)^{r_2} Σ_{n=0}^∞ b_n (x − a)^n    (4.9)

The constants a_n, b_n, and k are determined by the differential equation. The general
solution is

    y(x) = C_1 y_1(x) + C_2 y_2(x)    (4.10)
Example 4.3
Find the series solution of

    4x y'' + 2y' + y = 0

around x = 0.
x = 0 is a regular singular point. So we take

    y = Σ_{n=0}^∞ a_n x^{n+r}

    y' = Σ_{n=0}^∞ a_n (n + r) x^{n+r−1}

    y'' = Σ_{n=0}^∞ a_n (n + r)(n + r − 1) x^{n+r−2}

    4 Σ_{n=0}^∞ a_n (n + r)(n + r − 1) x^{n+r−1} + 2 Σ_{n=0}^∞ a_n (n + r) x^{n+r−1}
        + Σ_{n=0}^∞ a_n x^{n+r} = 0

    2 Σ_{n=0}^∞ a_n (n + r)(2n + 2r − 1) x^{n+r−1} + Σ_{n=0}^∞ a_n x^{n+r} = 0

With m = n − 1,

    2 Σ_{m=−1}^∞ a_{m+1} (m + 1 + r)(2(m + 1) + 2r − 1) x^{m+r} + Σ_{n=0}^∞ a_n x^{n+r} = 0

    2 Σ_{n=−1}^∞ a_{n+1} (n + 1 + r)(2(n + 1) + 2r − 1) x^{n+r} + Σ_{n=0}^∞ a_n x^{n+r} = 0

The first term (n = −1) gives the indicial equation:

    r(2r − 1) = 0

from which r = 0, 1/2. We then have

    2 Σ_{n=0}^∞ a_{n+1} (n + r + 1)(2n + 2r + 1) x^{n+r} + Σ_{n=0}^∞ a_n x^{n+r} = 0

For r = 0:

    a_{n+1} = −a_n/[(2n + 2)(2n + 1)]

    y_1 = a_0 [1 − x/2! + x²/4! − x³/6! + ···]
Figure 4.3: Comparison of truncated series and exact solutions for 4x y'' + 2y' + y = 0,
y(0) = 1, y(π²/4) = 0: exact y = cos(x^{1/2}) with the truncation y = 1 − x/2.
For r = 1/2:

    a_{n+1} = −a_n/[2(2n + 3)(n + 1)]

    y_2 = a_0 x^{1/2} [1 − x/3! + x²/5! − x³/7! + ···]

The series converge for all x to y_1 = cos √x and y_2 = sin √x. The general solution is
y = C_1 y_1 + C_2 y_2, or

    y(x) = C_1 cos √x + C_2 sin √x

For y(0) = 1, y(π²/4) = 0 the exact solution and the linear approximation to the exact
solution are shown in Figure 4.3.
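The two Frobenius branches can be verified numerically against cos √x and sin √x; a Python sketch that is not in the original notes (the truncation is an arbitrary choice):

```python
import math

# Frobenius series of 4x y'' + 2y' + y = 0 from the text's recursions,
# compared with the closed forms cos(sqrt(x)) and sin(sqrt(x)).

def y1(x, N=30):
    """r = 0 branch: a_{n+1} = -a_n/((2n+2)(2n+1))."""
    a, total = 1.0, 0.0
    for n in range(N):
        total += a * x**n
        a = -a / ((2 * n + 2) * (2 * n + 1))
    return total

def y2(x, N=30):
    """r = 1/2 branch (without the x^{1/2} prefactor):
    a_{n+1} = -a_n/(2(2n+3)(n+1))."""
    a, total = 1.0, 0.0
    for n in range(N):
        total += a * x**n
        a = -a / (2 * (2 * n + 3) * (n + 1))
    return total

x = 2.0
assert abs(y1(x) - math.cos(math.sqrt(x))) < 1e-10
assert abs(math.sqrt(x) * y2(x) - math.sin(math.sqrt(x))) < 1e-10
```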
Example 4.4
Find the series solution of

    x y'' − y = 0

around x = 0.

Let y = Σ_{n=0}^∞ a_n x^{n+r}. Then, from the equation,

    r(r − 1) a_0 x^{r−1} + Σ_{n=1}^∞ [(n + r)(n + r − 1) a_n − a_{n−1}] x^{n+r−1} = 0

The indicial equation is r(r − 1) = 0, from which r = 0, 1.

Consider the larger of the two, i.e. r = 1. For this we get

    a_n = [1/(n(n + 1))] a_{n−1} = [1/(n!(n + 1)!)] a_0
Thus

    y_1(x) = x + (1/2) x² + (1/12) x³ + (1/144) x⁴ + ···

The second solution is

    y_2(x) = k y_1(x) ln x + Σ_{n=0}^∞ b_n x^n

Substituting into the differential equation we get

    −k y_1/x + 2k y_1' + Σ_{n=0}^∞ b_n n(n − 1) x^{n−1} − Σ_{n=0}^∞ b_n x^n = 0
Substituting the solution y_1(x) already obtained, we get

    0 = −k [1 + (1/2) x + (1/12) x² + ···] + 2k [1 + x + (1/4) x² + ···]
        + [2b_2 x + 6b_3 x² + ···] − [b_0 + b_1 x + b_2 x² + ···]

Collecting terms, we have

    k = b_0

    b_{n+1} = [1/(n(n + 1))] [b_n − k(2n + 1)/(n!(n + 1)!)]  for n = 1, 2, . . .

Thus

    y_2(x) = b_0 y_1 ln x + b_0 [1 − (3/4) x² − (7/36) x³ − (35/1728) x⁴ − ···]
           + b_1 [x + (1/2) x² + (1/12) x³ + (1/144) x⁴ + ···]
Since the last series is y_1(x), we choose b_0 = 1 and b_1 = 0. The general solution is

    y(x) = C_1 [x + (1/2) x² + (1/12) x³ + (1/144) x⁴ + ···]
         + C_2 { [x + (1/2) x² + (1/12) x³ + (1/144) x⁴ + ···] ln x
                 + [1 − (3/4) x² − (7/36) x³ − (35/1728) x⁴ − ···] }
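The coefficients a_n = a_0/(n!(n+1)!) of the first solution can be verified term by term against the equation x y'' − y = 0; a Python sketch that is not in the original notes (the truncation N is arbitrary):

```python
import math

# First Frobenius solution of x y'' - y = 0:
# y1 = sum_n a_n x^{n+1} with a_n = a_0/(n!(n+1)!), i.e.
# y1 = x + x^2/2 + x^3/12 + x^4/144 + ...
# Substituting gives x y1'' = sum_n a_n (n+1) n x^n, so the coefficient of
# x^{n+1} in the residual x y1'' - y1 is a_{n+1}(n+2)(n+1) - a_n, which
# should vanish for every n.

N = 12
a = [1.0 / (math.factorial(n) * math.factorial(n + 1)) for n in range(N)]
for n in range(N - 1):
    assert abs(a[n + 1] * (n + 2) * (n + 1) - a[n]) < 1e-15
```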
4.1.2.3 Irregular singular point

If P(a) = 0 and in addition either (x − a)Q/P or (x − a)²R/P is not analytic at x = a, this
point is an irregular singular point. In this case a series solution cannot be guaranteed.
4.1.3 Higher order equations
Similar techniques can sometimes be used for equations of higher order.
Example 4.5
Solve

    y''' − x y = 0
Figure 4.4: Comparison of truncated series and exact solutions for y''' − x y = 0,
y(0) = 1, y'(0) = 0, y''(0) = 0: exact solution with the truncation y = 1 + x⁴/24.
around x = 0.

Let

    y = Σ_{n=0}^∞ a_n x^n

from which

    x y = Σ_{n=1}^∞ a_{n−1} x^n

    y''' = 6a_3 + Σ_{n=1}^∞ (n + 1)(n + 2)(n + 3) a_{n+3} x^n

Substituting in the equation, we find that

    a_3 = 0

    a_{n+3} = [1/((n + 1)(n + 2)(n + 3))] a_{n−1}
which gives the general solution

    y(x) = a_0 [1 + (1/24) x⁴ + (1/8064) x⁸ + ···]
         + a_1 x [1 + (1/60) x⁴ + (1/30240) x⁸ + ···]
         + a_2 x² [1 + (1/120) x⁴ + (1/86400) x⁸ + ···]

For y(0) = 1, y'(0) = 0, y''(0) = 0 the exact solution and the linear approximation to the
exact solution are shown in Figure 4.4.
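The truncated series can be compared against a direct numerical integration of the third-order equation; a Python sketch that is not in the original notes, using a hand-rolled RK4 step (the step count is an arbitrary choice):

```python
# Check the truncated series solution of y''' - x y = 0, y(0)=1, y'(0)=0,
# y''(0)=0, i.e. y ~ 1 + x^4/24 + x^8/8064, against an RK4 integration.

def rk4_y(x_end, n=1000):
    h = x_end / n
    def f(x, s):                      # s = (y, y', y'')
        return (s[1], s[2], x * s[0])
    s = (1.0, 0.0, 0.0)
    for i in range(n):
        x = i * h
        k1 = f(x, s)
        k2 = f(x + h/2, tuple(si + h/2*ki for si, ki in zip(s, k1)))
        k3 = f(x + h/2, tuple(si + h/2*ki for si, ki in zip(s, k2)))
        k4 = f(x + h,   tuple(si + h*ki  for si, ki in zip(s, k3)))
        s = tuple(si + h/6*(p + 2*q + 2*r + w)
                  for si, p, q, r, w in zip(s, k1, k2, k3, k4))
    return s[0]

x = 1.0
series = 1 + x**4/24 + x**8/8064
assert abs(rk4_y(x) - series) < 1e-6
```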
4.2 Perturbation methods

Perturbation methods, also known as linearization or asymptotic techniques, are not as
rigorous as infinite series methods in that usually it is impossible to make a statement
regarding convergence. Nevertheless, the methods have proven to be powerful in many
regimes of applied mathematics, science, and engineering.

The method hinges on the identification of a small parameter ε, 0 < ε ≪ 1. Typically
there is an easily obtained solution when ε = 0. One then uses this solution as a seed to
construct a linear theory about it. The resulting set of linear equations are then solved,
giving a solution which is valid in a regime near ε = 0.
4.2.1 Algebraic and transcendental equations
To illustrate the method of solution, we begin with quadratic algebraic equations for which
exact solutions are available. We can then easily see the advantages and limitations of the
method.
Example 4.6
For 0 < ε ≪ 1 solve

    x² + εx − 1 = 0

Let

    x = x_0 + εx_1 + ε²x_2 + ···

Substituting in the equation,

    (x_0 + εx_1 + ε²x_2 + ···)² + ε(x_0 + εx_1 + ε²x_2 + ···) − 1 = 0

expanding the square by polynomial multiplication,

    x_0² + 2εx_1 x_0 + ε²(x_1² + 2x_2 x_0) + ··· + ε(x_0 + εx_1 + ···) − 1 = 0

and collecting different powers of ε, we get

    O(ε⁰): x_0² − 1 = 0,  x_0 = 1, −1
    O(ε¹): 2x_0 x_1 + x_0 = 0,  x_1 = −1/2, −1/2
    O(ε²): x_1² + 2x_0 x_2 + x_1 = 0,  x_2 = 1/8, −1/8
    ...

The solutions are

    x = 1 − ε/2 + ε²/8 + ···

and

    x = −1 − ε/2 − ε²/8 + ···

The exact solutions can also be expanded

    x = (1/2) [−ε ± (ε² + 4)^{1/2}] = ±1 − ε/2 ± ε²/8 + ···

to give the same results.
The exact solution and the linear approximation are shown in Figure 4.5.
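The agreement can be quantified numerically; a small Python sketch that is not in the original notes (ε and the tolerance are arbitrary choices):

```python
import math

# Compare the three-term perturbation roots of x^2 + eps*x - 1 = 0 with the
# exact roots x = (-eps ± sqrt(eps^2 + 4))/2.

eps = 0.1
exact_plus = (-eps + math.sqrt(eps**2 + 4)) / 2
exact_minus = (-eps - math.sqrt(eps**2 + 4)) / 2
asym_plus = 1 - eps/2 + eps**2/8
asym_minus = -1 - eps/2 - eps**2/8
# the neglected terms are O(eps^4), so eps^3 is a comfortable bound
assert abs(exact_plus - asym_plus) < eps**3
assert abs(exact_minus - asym_minus) < eps**3
```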
Figure 4.5: Comparison of asymptotic and exact solutions of x² + εx − 1 = 0: both exact
roots and their linear approximations, plotted against ε.
Example 4.7
For 0 < ε ≪ 1 solve

    εx² + x − 1 = 0

Let

    x = x_0 + εx_1 + ε²x_2 + ···

Substituting in the equation we get

    ε(x_0 + εx_1 + ε²x_2 + ···)² + (x_0 + εx_1 + ε²x_2 + ···) − 1 = 0

Expanding the quadratic term gives

    ε(x_0² + 2εx_0 x_1 + ···) + (x_0 + εx_1 + ε²x_2 + ···) − 1 = 0

Collecting different powers of ε, we get

    O(ε⁰): x_0 − 1 = 0,  x_0 = 1
    O(ε¹): x_0² + x_1 = 0,  x_1 = −1
    O(ε²): 2x_0 x_1 + x_2 = 0,  x_2 = 2
    ...

This gives one solution

    x = 1 − ε + 2ε² + ···

To get the other solution, let

    X = x/ε^α

The equation becomes

    ε^{2α+1} X² + ε^α X − 1 = 0

The first two terms are of the same order if α = −1. With this, X = εx and the equation
becomes

    X² + X − ε = 0

We expand

    X = X_0 + εX_1 + ε²X_2 + ···
Figure 4.6: Comparison of asymptotic and exact solutions of εx² + x − 1 = 0: both exact
roots and their asymptotic approximations, plotted against ε.
so

    (X_0 + εX_1 + ε²X_2 + ···)² + (X_0 + εX_1 + ε²X_2 + ···) − ε = 0

    X_0² + 2εX_0 X_1 + ε²(X_1² + 2X_0 X_2) + ··· + X_0 + εX_1 + ε²X_2 + ··· − ε = 0

Collecting terms of the same order,

    O(ε⁰): X_0² + X_0 = 0,  X_0 = −1, 0
    O(ε¹): 2X_0 X_1 + X_1 = 1,  X_1 = −1, 1
    O(ε²): X_1² + 2X_0 X_2 + X_2 = 0,  X_2 = 1, −1
    ...

to give the two solutions

    X = −1 − ε + ε² + ···
    X = ε − ε² + ···

or, with x = X/ε,

    x = (1/ε)(−1 − ε + ε² + ···)
    x = 1 − ε + ···

Expansion of the exact solutions

    x = (1/(2ε)) [−1 ± (1 + 4ε)^{1/2}] = (1/(2ε)) [−1 ± (1 + 2ε − 2ε² + 4ε³ + ···)]

gives the same results.
The exact solution and the linear approximation are shown in Figure 4.6.
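Both the regular and the singular root can be checked numerically; a Python sketch that is not in the original notes (ε and the tolerances are arbitrary choices):

```python
import math

# For eps*x^2 + x - 1 = 0 the regular root is x ~ 1 - eps + 2*eps^2 and the
# singular root is x ~ (1/eps)(-1 - eps + eps^2). Compare with the exact
# roots x = (-1 ± sqrt(1 + 4*eps))/(2*eps).

eps = 0.05
exact_reg = (-1 + math.sqrt(1 + 4 * eps)) / (2 * eps)
exact_sing = (-1 - math.sqrt(1 + 4 * eps)) / (2 * eps)
asym_reg = 1 - eps + 2 * eps**2
asym_sing = (-1 - eps + eps**2) / eps
assert abs(exact_reg - asym_reg) < 10 * eps**3
assert abs(exact_sing - asym_sing) < 10 * eps**2
```

Note that the singular root, of size O(1/ε), is invisible to the regular expansion; the rescaling X = εx is what recovers it.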
Figure 4.7: Location of roots of cos x = ε sin(x + ε) for ε = 0.1: the curves cos(x) and
ε sin(x + ε) intersect repeatedly near x = (n + 1/2)π.
Example 4.8
Solve

    cos x = ε sin(x + ε)

for x near π/2.

Figure 4.7 shows a plot of cos x and ε sin(x + ε) for ε = 0.1. It is seen that there are
multiple intersections near x = (n + 1/2)π. We seek only one of these. We substitute

    x = x_0 + εx_1 + ε²x_2 + ···

and have

    cos(x_0 + εx_1 + ε²x_2 + ···) = ε sin(x_0 + εx_1 + ε²x_2 + ··· + ε)

Now we expand both the left and right hand sides in a Taylor series in ε about ε = 0. We
note that a general function f(ε) has such a Taylor series of f(ε) ∼ f(0) + εf'(0) +
(ε²/2)f''(0) + ··· Expanding the left hand side, we get

    cos(x_0 + εx_1 + ···) = cos(x_0 + εx_1 + ···)|_{ε=0}
        + ε (−sin(x_0 + εx_1 + ···))(x_1 + 2εx_2 + ···)|_{ε=0} + ···

    cos(x_0 + εx_1 + ···) = cos x_0 − εx_1 sin x_0 + ···

The right hand side is similar. We then arrive at the original equation being expressed as

    cos x_0 − εx_1 sin x_0 + ··· = ε(sin x_0 + ···)

Collecting terms,

    O(ε⁰): cos x_0 = 0,  x_0 = π/2
    O(ε¹): −x_1 sin x_0 − sin x_0 = 0,  x_1 = −1
    ...

The solution is

    x = π/2 − ε + ···
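A root-finding iteration started from x_0 = π/2 confirms the asymptotic root; a Python sketch that is not in the original notes (ε, the iteration count, and tolerances are arbitrary choices):

```python
import math

# Newton iteration for cos x = eps*sin(x + eps) near x = pi/2, compared with
# the asymptotic root x ~ pi/2 - eps.

eps = 0.1
f = lambda x: math.cos(x) - eps * math.sin(x + eps)
fp = lambda x: -math.sin(x) - eps * math.cos(x + eps)

x = math.pi / 2                      # leading order guess
for _ in range(20):
    x -= f(x) / fp(x)

assert abs(f(x)) < 1e-12             # converged to a true root
assert abs(x - (math.pi / 2 - eps)) < eps**2
```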
4.2.2 Regular perturbations

Differential equations can also be solved using perturbation techniques.

Example 4.9
For 0 < ε ≪ 1 solve

    y'' + εy² = 0,  with y(0) = 1, y'(0) = 0

Let

    y(x) = y_0(x) + εy_1(x) + ε²y_2(x) + ···
    y'(x) = y_0'(x) + εy_1'(x) + ε²y_2'(x) + ···
    y''(x) = y_0''(x) + εy_1''(x) + ε²y_2''(x) + ···

Substituting in the equation,

    [y_0''(x) + εy_1''(x) + ε²y_2''(x) + ···] + ε [y_0(x) + εy_1(x) + ε²y_2(x) + ···]² = 0

    [y_0''(x) + εy_1''(x) + ε²y_2''(x) + ···] + ε [y_0²(x) + 2εy_1(x) y_0(x) + ···] = 0

Substituting into the boundary conditions:

    y_0(0) + εy_1(0) + ε²y_2(0) + ··· = 1
    y_0'(0) + εy_1'(0) + ε²y_2'(0) + ··· = 0

Collecting terms,

    O(ε⁰): y_0'' = 0,  y_0(0) = 1, y_0'(0) = 0,  y_0 = 1
    O(ε¹): y_1'' = −y_0²,  y_1(0) = 0, y_1'(0) = 0,  y_1 = −x²/2
    O(ε²): y_2'' = −2y_0 y_1,  y_2(0) = 0, y_2'(0) = 0,  y_2 = x⁴/12
    ...

The solution is

    y = 1 − ε x²/2 + ε² x⁴/12 + ···

Using the techniques of the previous chapter, it is seen that this equation has an exact
solution. With

    u = dy/dx,  d²y/dx² = (du/dy)(dy/dx) = u du/dy

the original equation becomes

    u du/dy + εy² = 0
    u du = −εy² dy
    u²/2 = −(ε/3) y³ + C

u = 0 when y = 1, so C = ε/3, and

    u = ± [(2ε/3)(1 − y³)]^{1/2}

    dy/dx = ± [(2ε/3)(1 − y³)]^{1/2}
Figure 4.8: Comparison of asymptotic and exact solutions for y'' + εy² = 0, y(0) = 1,
y'(0) = 0, ε = 0.1: far-field and close-up views.
    dx = ± dy / [(2ε/3)(1 − y³)]^{1/2}

    x = ± ∫_1^y ds / [(2ε/3)(1 − s³)]^{1/2}

It can be shown that this integral can be represented in terms of Gauss's⁴ hypergeometric
function, ₂F₁(a, b, c, z), as follows:

    x = ∓ [π/(6ε)]^{1/2} Γ(1/3)/Γ(5/6) ± [3/(2ε)]^{1/2} y ₂F₁(1/3, 1/2, 4/3, y³)

It is likely difficult to invert either of the above functions to get y(x) explicitly. For small ε,
the essence of the solution is better conveyed by the asymptotic solution. A portion of the
asymptotic and exact solutions for ε = 0.1 are shown in Figure 4.8.
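The two-term asymptotic solution can be compared against a direct numerical integration; a Python sketch that is not in the original notes, with a hand-rolled RK4 step (ε, the end point, and the step count are arbitrary choices):

```python
# RK4 integration of y'' + eps*y^2 = 0, y(0)=1, y'(0)=0, compared with the
# asymptotic solution y ~ 1 - eps*x^2/2 + eps^2*x^4/12.

def rk4(eps, x_end, n=2000):
    h = x_end / n
    def f(s):                         # s = (y, y'); the system is autonomous
        return (s[1], -eps * s[0]**2)
    s = (1.0, 0.0)
    for _ in range(n):
        k1 = f(s)
        k2 = f(tuple(si + h/2*ki for si, ki in zip(s, k1)))
        k3 = f(tuple(si + h/2*ki for si, ki in zip(s, k2)))
        k4 = f(tuple(si + h*ki  for si, ki in zip(s, k3)))
        s = tuple(si + h/6*(p + 2*q + 2*r + w)
                  for si, p, q, r, w in zip(s, k1, k2, k3, k4))
    return s[0]

eps, x = 0.1, 2.0
asym = 1 - eps * x**2 / 2 + eps**2 * x**4 / 12
# the neglected O(eps^3) term grows like x^6, hence the x-dependent bound
assert abs(rk4(eps, x) - asym) < eps**3 * x**6
```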
Example 4.10
Solve

    y'' + εy² = 0,  with y(0) = 1, y'(0) = ε

Let

    y(x) = y_0(x) + εy_1(x) + ε²y_2(x) + ···

Substituting in the equation and collecting terms,

    O(ε⁰): y_0'' = 0,  y_0(0) = 1, y_0'(0) = 0,  y_0 = 1
    O(ε¹): y_1'' = −y_0²,  y_1(0) = 0, y_1'(0) = 1,  y_1 = −x²/2 + x
    O(ε²): y_2'' = −2y_0 y_1,  y_2(0) = 0, y_2'(0) = 0,  y_2 = x⁴/12 − x³/3
    ...

The solution is

    y = 1 − ε (x²/2 − x) + ε² (x⁴/12 − x³/3) + ···

A portion of the asymptotic and exact solutions for ε = 0.1 are shown in Figure 4.9.
Compared to the previous example, there is a slight offset from the y axis.

⁴ Johann Carl Friedrich Gauss, 1777–1855, Brunswick-born German mathematician of tremendous
influence.
Figure 4.9: Comparison of asymptotic and exact solutions for y'' + εy² = 0, y(0) = 1,
y'(0) = ε, ε = 0.1.
4.2.3 Strained coordinates

The regular perturbation expansion may not be valid over the complete domain of interest.

Example 4.11
Find an approximate solution of the Duffing equation:

    ẍ + x + εx³ = 0,  with x(0) = 1 and ẋ(0) = 0

First let's give some physical motivation. One problem in which Duffing's equation arises is
the undamped motion of a mass subject to a nonlinear spring force. Consider a body of mass
m moving in the horizontal x plane. Initially the body is given a small positive displacement
x(0) = x_o. The body has zero initial velocity, dx/dt(0) = 0. The body is subjected to a
nonlinear spring force F_s oriented such that it will pull the body towards x = 0:

    F_s = (k_o + k_1 x²) x

Here k_o and k_1 are dimensional constants with SI units N/m and N/m³ respectively.
Newton's second law gives us

    m d²x/dt² = −(k_o + k_1 x²) x

    m d²x/dt² + (k_o + k_1 x²) x = 0,  x(0) = x_o,  dx/dt(0) = 0

Choose an as yet arbitrary length scale L and an as yet arbitrary time scale T with which to
scale the problem, and take:

    x̃ = x/L,  t̃ = t/T

Substitute:

    (mL/T²) d²x̃/dt̃² + k_o L x̃ + k_1 L³ x̃³ = 0,  L x̃(0) = x_o,  (L/T) dx̃/dt̃(0) = 0

Figure 4.10: Numerical solution to Duffing's equation ẍ + x + εx³ = 0, x(0) = 1,
ẋ(0) = 0, ε = 0.2.
Rearrange to make all terms dimensionless:

    d²x̃/dt̃² + (k_o T²/m) x̃ + (k_1 L² T²/m) x̃³ = 0,  x̃(0) = x_o/L,  dx̃/dt̃(0) = 0

Now we want to examine the effect of small nonlinearities. Choose the length and time scales
such that the leading order motion has an amplitude which is O(1) and a frequency which is
O(1). So take

    T ≡ (m/k_o)^{1/2},  L ≡ x_o

So

    d²x̃/dt̃² + x̃ + (k_1 x_o²/k_o) x̃³ = 0,  x̃(0) = 1,  dx̃/dt̃(0) = 0

Choosing

    ε ≡ k_1 x_o²/k_o

we get

    d²x̃/dt̃² + x̃ + ε x̃³ = 0,  x̃(0) = 1,  dx̃/dt̃(0) = 0

So our asymptotic theory will be valid for

    ε ≪ 1,  i.e.  k_1 x_o² ≪ k_o

Now, let's drop the superscripts and focus on the mathematics. First, the exact solution for
ε = 0.2 is shown in Figure 4.10.
Let’s use an asymptotic method to try to capture this solution. Using the expansion
x(t) = x
0
(t) +x
1
(t) +
2
x
2
(t) +
and collecting terms
O(
0
) : ¨ x
0
+x
0
= 0, x
0
(0) = 1, ˙ x
0
(0) = 0, x
0
= cos t
O(
1
) : ¨ x
1
+x
1
= −x
3
0
, x
1
(0) = 0, ˙ x
1
(0) = 0, x
1
=
1
32
(−cos t + cos 3t −12t sin t)
.
.
.
Figure 4.11: Difference between exact and leading order solution to Duffing's equation,
ε = 0.2.
The difference between the exact solution and the leading order solution, x_exact(t) − x_0(t),
is plotted in Figure 4.11. The error is the same order of magnitude as the solution itself for
moderate values of t. This is undesirable.

To O(ε) the solution is

    x = cos t + (ε/32)(−cos t + cos 3t − 12t sin t) + ···

The original differential equation can be integrated once to give

    (1/2) ẋ² + (1/2) x² + (ε/4) x⁴ = (1/4)(2 + ε)

indicating that the solution is bounded. However, the series has a secular term
−(3ε/8) t sin t that grows without bound. This solution is only valid for t ≪ ε^{−1}.
The difference between the exact solution and the solution corrected to O(ε),
x_exact(t) − (x_0(t) + εx_1(t)), is plotted in Figure 4.12. There is some improvement for
early time, but the solution is actually worse for later time. This is because of the secularity.
To have a solution valid for all time, we strain the time coordinate:

    t = (1 + c_1 ε + c_2 ε² + ···) τ

where τ is the new time variable. The c_i's should be chosen to avoid secular terms.
Differentiating,

    ẋ = (dx/dτ)(dτ/dt) = (dx/dτ)(dt/dτ)^{−1} = (dx/dτ)[1 + c_1 ε + c_2 ε² + ···]^{−1}

    ẍ = (d²x/dτ²)[1 + c_1 ε + c_2 ε² + ···]^{−2}
      = (d²x/dτ²)[1 − 2(c_1 ε + c_2 ε² + ···) + 3(c_1 ε + c_2 ε² + ···)² + ···]
      = (d²x/dτ²)[1 − 2c_1 ε + (3c_1² − 2c_2) ε² + ···]

Furthermore, we write

    x = x_0 + εx_1 + ε²x_2 + ···
Figure 4.12: Difference between exact and uncorrected solution to O(ε) for Duffing's
equation, ε = 0.2.
Substituting in the equation,

    [d²x_0/dτ² + ε d²x_1/dτ² + ε² d²x_2/dτ² + ···][1 − 2c_1 ε + (3c_1² − 2c_2) ε² + ···]
        + (x_0 + εx_1 + ε²x_2 + ···) + ε(x_0 + εx_1 + ε²x_2 + ···)³ = 0

Collecting terms,

    O(ε⁰): d²x_0/dτ² + x_0 = 0,  x_0(0) = 1, dx_0/dτ(0) = 0
           x_0(τ) = cos τ

    O(ε¹): d²x_1/dτ² + x_1 = 2c_1 d²x_0/dτ² − x_0³,  x_1(0) = 0, dx_1/dτ(0) = 0
                           = −2c_1 cos τ − cos³ τ
                           = −(2c_1 + 3/4) cos τ − (1/4) cos 3τ
           x_1(τ) = (1/32)(−cos τ + cos 3τ)  if we choose c_1 = −3/8

Thus

    x(τ) = cos τ + (ε/32)(−cos τ + cos 3τ) + ···

Since

    t = (1 − 3ε/8 + ···) τ,  τ = (1 + 3ε/8 + ···) t

so that

    x(t) = cos[(1 + 3ε/8 + ···) t]
         + (ε/32) { −cos[(1 + 3ε/8 + ···) t] + cos[3(1 + 3ε/8 + ···) t] } + ···

The difference between the exact solution and the corrected solution to O(ε),
x_exact(t) − (x_0(t) + εx_1(t)), is plotted in Figure 4.13. The error is much smaller relative
to the previous cases; there does appear to be a slight growth in the amplitude of the error
with time. This might not be expected, but in fact is a characteristic behavior of the
truncation error of the numerical method used to generate the exact solution.
Figure 4.13: Difference between exact and corrected solution to O(ε) for Duffing's
equation, ε = 0.2.
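The effect of straining the frequency can be spot-checked numerically; a Python sketch that is not in the original notes, comparing an RK4 integration with the strained solution at a moderately large time (ε, the time, and the step count are arbitrary choices):

```python
import math

# Strained-coordinates solution of the Duffing equation x'' + x + eps*x^3 = 0,
# x(0)=1, x'(0)=0, tested against an RK4 integration. The shifted frequency
# 1 + 3*eps/8 is what keeps the error small at large time.

def rk4_duffing(eps, t_end, n=20000):
    h = t_end / n
    def f(s):                          # s = (x, x')
        return (s[1], -s[0] - eps * s[0]**3)
    s = (1.0, 0.0)
    for _ in range(n):
        k1 = f(s)
        k2 = f(tuple(si + h/2*ki for si, ki in zip(s, k1)))
        k3 = f(tuple(si + h/2*ki for si, ki in zip(s, k2)))
        k4 = f(tuple(si + h*ki  for si, ki in zip(s, k3)))
        s = tuple(si + h/6*(p + 2*q + 2*r + w)
                  for si, p, q, r, w in zip(s, k1, k2, k3, k4))
    return s[0]

eps, t = 0.2, 10.0
tau = (1 + 3*eps/8) * t
strained = math.cos(tau) + eps/32 * (-math.cos(tau) + math.cos(3*tau))
num = rk4_duffing(eps, t)
assert abs(num - strained) < 0.15               # corrected solution stays close
assert abs(num - strained) < abs(num - math.cos(t))   # and beats cos(t)
```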
Example 4.12
Find the amplitude of the limit cycle oscillations of the van der Pol equation

    ẍ − ε(1 − x²) ẋ + x = 0,  with x(0) = A, ẋ(0) = 0,  ε ≪ 1

Here A is the amplitude and is considered to be an adjustable parameter in this problem.
Let

    t = (1 + c_1 ε + c_2 ε² + ···) τ,

so that the equation becomes

    (d²x/dτ²)[1 − 2c_1 ε + ···] − ε(1 − x²)(dx/dτ)[1 − c_1 ε + ···] + x = 0.

We also use

    x = x_0 + εx_1 + ε²x_2 + ···

Thus we get

    x_0 = A cos τ

to O(ε⁰). To O(ε), the equation is

    d²x_1/dτ² + x_1 = −2c_1 A cos τ − A(1 − A²/4) sin τ + (A³/4) sin 3τ

Choosing c_1 = 0 and A = 2 in order to suppress secular terms, we get

    x_1 = (3/4) sin τ − (1/4) sin 3τ.

The amplitude, to lowest order, is

    A = 2

so to O(ε) the solution is

    x(t) = 2 cos[t + O(ε²)] + ε { (3/4) sin[t + O(ε²)] − (1/4) sin[3(t + O(ε²))] } + O(ε²)

The exact solution x_exact(t), the difference between the exact solution and the asymptotic
leading order solution, x_exact(t) − x_0(t), and the difference between the exact solution and
the asymptotic solution corrected to O(ε), x_exact(t) − (x_0(t) + εx_1(t)), are plotted in
Figure 4.14.
Figure 4.14: Method of strained coordinates for the van der Pol equation
ẍ − ε(1 − x²)ẋ + x = 0, x(0) = 2, ẋ(0) = 0, ε = 0.1: the exact solution, the difference
between the exact and asymptotic leading order solutions, and the difference between the
exact and corrected asymptotic solutions to O(ε).
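The prediction A = 2 can be checked by brute force: start well inside the limit cycle and watch the peak amplitude settle. A Python sketch that is not in the original notes (ε, the run time, the initial condition, and the step count are arbitrary choices):

```python
# Integrate x'' - eps*(1 - x^2)*x' + x = 0 with RK4 from a small initial
# amplitude and measure the largest |x| over the second half of the run,
# by which time the trajectory should have settled onto the limit cycle.

def vdp_peak(eps=0.1, t_end=150.0, n=60000, x0=0.5):
    h = t_end / n
    def f(s):                          # s = (x, x')
        return (s[1], eps * (1 - s[0]**2) * s[1] - s[0])
    s = (x0, 0.0)
    peak = 0.0
    for i in range(n):
        k1 = f(s)
        k2 = f(tuple(si + h/2*ki for si, ki in zip(s, k1)))
        k3 = f(tuple(si + h/2*ki for si, ki in zip(s, k2)))
        k4 = f(tuple(si + h*ki  for si, ki in zip(s, k3)))
        s = tuple(si + h/6*(p + 2*q + 2*r + w)
                  for si, p, q, r, w in zip(s, k1, k2, k3, k4))
        if i > n // 2:
            peak = max(peak, abs(s[0]))
    return peak

assert abs(vdp_peak() - 2.0) < 0.05
```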
4.2.4 Multiple scales

Example 4.13
Solve

    d²x/dt² − ε(1 − x²) dx/dt + x = 0,  with x(0) = 0, dx/dt(0) = 1

Let x = x(τ, τ̃), where the fast time scale is

    τ = (1 + a_1 ε + a_2 ε² + ···) t

and the slow time scale is

    τ̃ = εt

Since

    x = x(τ, τ̃),

    dx/dt = (∂x/∂τ)(dτ/dt) + (∂x/∂τ̃)(dτ̃/dt)

The first derivative is

    dx/dt = (∂x/∂τ)(1 + a_1 ε + a_2 ε² + ···) + ε (∂x/∂τ̃)

so

    d/dt = (1 + a_1 ε + a_2 ε² + ···) ∂/∂τ + ε ∂/∂τ̃

Applying this operator to dx/dt, we get

    d²x/dt² = (1 + a_1 ε + a_2 ε² + ···)² ∂²x/∂τ²
            + 2ε(1 + a_1 ε + a_2 ε² + ···) ∂²x/∂τ∂τ̃ + ε² ∂²x/∂τ̃²
Introduce

    x = x_0 + εx_1 + ε²x_2 + ···

So to O(ε), the differential equation becomes

    (1 + 2a_1 ε + ···) ∂²(x_0 + εx_1)/∂τ² + 2ε ∂²(x_0 + ···)/∂τ∂τ̃
        − ε(1 − x_0² − ···) ∂(x_0 + ···)/∂τ + (x_0 + εx_1 + ···) = 0

Collecting terms of O(ε⁰), we have

    ∂²x_0/∂τ² + x_0 = 0,  with x_0(0, 0) = 0, ∂x_0/∂τ(0, 0) = 1

The solution is

    x_0 = A(τ̃) cos τ + B(τ̃) sin τ,  with A(0) = 0, B(0) = 1

The terms of O(ε¹) give

    ∂²x_1/∂τ² + x_1 = −2a_1 ∂²x_0/∂τ² − 2 ∂²x_0/∂τ∂τ̃ + (1 − x_0²) ∂x_0/∂τ
        = [2a_1 B + 2A' − A + (A/4)(A² + B²)] sin τ
        + [2a_1 A − 2B' + B − (B/4)(A² + B²)] cos τ
        + (A/4)(A² − 3B²) sin 3τ − (B/4)(3A² − B²) cos 3τ

with

    x_1(0, 0) = 0

    ∂x_1/∂τ(0, 0) = −a_1 ∂x_0/∂τ(0, 0) − ∂x_0/∂τ̃(0, 0) = −a_1 − ∂x_0/∂τ̃(0, 0)

Since t is already represented in τ̃, choose a_1 = 0. Then, to suppress the secular terms,

    2A' − A + (A/4)(A² + B²) = 0
    2B' − B + (B/4)(A² + B²) = 0

Since A(0) = 0, try A(τ̃) = 0. Then

    2B' − B + B³/4 = 0

Multiplying by B and simplifying, we get

    F' − F + F²/4 = 0

where F = B². Separating variables and integrating, we get

    B²/(1 − B²/4) = C e^{τ̃}
Figure 4.15: Method of multiple scales for the van der Pol equation
ẍ − ε(1 − x²)ẋ + x = 0, x(0) = 0, ẋ(0) = 1, ε = 0.1: the exact solution, the difference
between the exact and asymptotic leading order solutions, and the difference between the
exact and corrected asymptotic solutions to O(ε).
Since $B(0) = 1$, we get $C = 4/3$. From this
\[
B = \frac{2}{\sqrt{1 + 3e^{-\tilde{\tau}}}}
\]
so that
\[
x(\tau, \tilde{\tau}) = \frac{2}{\sqrt{1 + 3e^{-\tilde{\tau}}}}\sin\tau + O(\epsilon)
\]
\[
x(t) = \frac{2}{\sqrt{1 + 3e^{-\epsilon t}}}\sin\left([1 + O(\epsilon^2)]t\right) + O(\epsilon)
\]
The exact solution $x_{exact}(t)$, the difference between the exact solution and the asymptotic leading order solution, $x_{exact}(t) - x_0(t)$, and the difference between the exact solution and the asymptotic solution corrected to $O(\epsilon)$, $x_{exact}(t) - (x_0(t) + \epsilon x_1(t))$, are plotted in Figure 4.15.
Note that the amplitude, which is initially 1, grows to a value of 2, the same value which was obtained in the previous example. Here, we have additionally obtained the time scale for the growth of the amplitude change. Note also that the leading order approximation is quite poor for $t > 1/\epsilon$, while the corrected approximation is relatively quite good.
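The predicted amplitude envelope $2/\sqrt{1 + 3e^{-\epsilon t}}$ is easy to check numerically. The following sketch uses a hand-rolled RK4 integrator; the step size, time horizon, and $\epsilon = 0.1$ are illustrative choices, not prescribed by the text.

```python
import numpy as np

def rk4(f, u0, t):
    """Integrate u' = f(t, u) with classical RK4 over the grid t."""
    u = np.zeros((len(t), len(u0)))
    u[0] = u0
    for i in range(len(t) - 1):
        h = t[i+1] - t[i]
        k1 = f(t[i], u[i])
        k2 = f(t[i] + h/2, u[i] + h/2*k1)
        k3 = f(t[i] + h/2, u[i] + h/2*k2)
        k4 = f(t[i] + h, u[i] + h*k3)
        u[i+1] = u[i] + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return u

eps = 0.1
# van der Pol: x'' - eps*(1 - x^2)*x' + x = 0, x(0) = 0, x'(0) = 1
f = lambda t, u: np.array([u[1], eps*(1.0 - u[0]**2)*u[1] - u[0]])
t = np.linspace(0.0, 100.0, 20001)
x = rk4(f, np.array([0.0, 1.0]), t)[:, 0]

# multiple-scales envelope: amplitude grows from 1 and saturates at 2
envelope = 2.0/np.sqrt(1.0 + 3.0*np.exp(-eps*t))
late_amplitude = np.max(np.abs(x[t > 80]))
```

By $t = 100$ (so $\tilde{\tau} = 10$) the numerical amplitude has saturated near the predicted limiting value of 2.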
4.2.5 Harmonic approximation
This method is useful for determining whether a given differential equation has a limit cycle.
Example 4.14
Find if the Rayleigh$^5$ equation
\[
\ddot{y} + \epsilon(1 - k\dot{y}^2)\dot{y} + y = 0
\]
has an approximate solution of the form $y = A\cos\omega t$.
Substitute in the equation to get
\[
-A\omega^2\cos\omega t - \epsilon(1 - kA^2\omega^2\sin^2\omega t)A\omega\sin\omega t + A\cos\omega t = 0
\]
from which
\[
\epsilon A\omega\sin\omega t\left(-1 + \frac{3}{4}kA^2\omega^2\right) + A\cos\omega t(-\omega^2 + 1) - \frac{\epsilon}{4}kA^3\omega^3\sin 3\omega t = 0
\]
Disregarding the higher harmonic term $\sin 3\omega t$, we have $\omega = 1$ and $A = 2/\sqrt{3k}$. This is the frequency and amplitude of the limit cycle.
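The harmonic balance can be verified by projecting the residual of the ansatz onto Fourier modes: with $\omega = 1$ and $A = 2/\sqrt{3k}$ the fundamental components must vanish, leaving only the $\sin 3\omega t$ term. A minimal numerical check; the values of $k$ and $\epsilon$ are assumed for illustration.

```python
import numpy as np

k, eps, omega = 1.5, 0.1, 1.0     # assumed illustration values
A = 2.0/np.sqrt(3.0*k)            # predicted limit-cycle amplitude

t = np.linspace(0.0, 2.0*np.pi, 4001)[:-1]
dt = 2.0*np.pi/4000

y = A*np.cos(omega*t)
yd = -A*omega*np.sin(omega*t)
ydd = -A*omega**2*np.cos(omega*t)

# residual of y'' + eps*(1 - k y'^2) y' + y with the harmonic ansatz
res = ydd + eps*(1.0 - k*yd**2)*yd + y

# Fourier projections of the residual over one period
c1 = np.sum(res*np.cos(t))*dt/np.pi        # fundamental cosine component
s1 = np.sum(res*np.sin(t))*dt/np.pi        # fundamental sine component
s3 = np.sum(res*np.sin(3.0*t))*dt/np.pi    # only the third harmonic survives
```

Both fundamental components come out zero to machine precision, and the surviving third harmonic matches $-(\epsilon/4)kA^3\omega^3$.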
4.2.6 Boundary layers
The method of boundary layers, also known as the method of matched asymptotic expansions, can be used in some cases. It is most appropriate for cases in which a small parameter multiplies the highest order derivative. In such cases a regular perturbation scheme will fail, since we lose a boundary condition at leading order.
Example 4.15
Solve
\[
\epsilon y'' + y' + y = 0, \quad \text{with } y(0) = 0,\ y(1) = 1 \qquad (4.11)
\]
An exact solution to this equation exists, namely
\[
y(x) = \exp\left(\frac{1-x}{2\epsilon}\right)\frac{\sinh\left(\frac{x\sqrt{1-4\epsilon}}{2\epsilon}\right)}{\sinh\left(\frac{\sqrt{1-4\epsilon}}{2\epsilon}\right)}
\]
We could in principle simply expand this in a Taylor series in $\epsilon$. However, for more difficult problems, exact solutions are not available. So here we will just use the exact solution to verify the validity of the method.
We begin with a regular perturbation expansion
\[
y(x) = y_0(x) + \epsilon y_1(x) + \epsilon^2 y_2(x) + \cdots
\]
Substituting and collecting terms, we get
\[
O(\epsilon^0):\quad y_0' + y_0 = 0, \quad y_0(0) = 0,\ y_0(1) = 1
\]
the solution to which is
\[
y_0 = ae^{-x}
\]
$^5$John William Strutt (Lord Rayleigh), 1842-1919, English physicist and mathematician.
It is not possible for the solution to satisfy the two boundary conditions. So, we divide the region of interest $0 \le x \le 1$ into two parts, a thin inner region or boundary layer around $x = 0$, and an outer region elsewhere.
The solution obtained above is the solution in the outer region. To satisfy the boundary condition $y_0(1) = 1$, we find that $a = e$, so that
\[
y = e^{1-x} + \cdots
\]
In the inner region, we choose a new independent variable $X$ defined as $X = x/\epsilon$, so that the equation becomes
\[
\frac{d^2y}{dX^2} + \frac{dy}{dX} + \epsilon y = 0
\]
Using a perturbation expansion, the lowest order equation is
\[
\frac{d^2y_0}{dX^2} + \frac{dy_0}{dX} = 0
\]
with a solution
\[
y_0 = A + Be^{-X}
\]
Applying the boundary condition $y_0(0) = 0$, we get
\[
y_0 = A(1 - e^{-X})
\]
Matching of the inner and outer solutions is achieved by requiring (Prandtl's$^6$ method)
\[
y_{inner}(X \to \infty) = y_{outer}(x \to 0)
\]
which gives $A = e$. The solution is
\[
y(x) = e(1 - e^{-x/\epsilon}) + \cdots \text{ in the inner region}, \qquad \lim_{X\to\infty} y = e
\]
and
\[
y(x) = e^{1-x} + \cdots \text{ in the outer region}, \qquad \lim_{x\to 0} y = e
\]
A composite solution can also be written by adding the two solutions and subtracting the common part:
\[
y(x) = \left[e(1 - e^{-x/\epsilon}) + \cdots\right] + \left[e^{1-x} + \cdots\right] - e
\]
\[
y = e(e^{-x} - e^{-x/\epsilon}) + \cdots
\]
The exact solution, the inner layer solution, the outer layer solution, and the composite solution are plotted in Figure 4.16.
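Because the exact solution is available here, the quality of the leading-order composite can be quantified directly. A short sketch with $\epsilon = 0.1$ (the value used in the figure) confirming that the error is $O(\epsilon)$:

```python
import numpy as np

eps = 0.1
x = np.linspace(0.0, 1.0, 1001)

# exact solution of eps*y'' + y' + y = 0, y(0) = 0, y(1) = 1
s = np.sqrt(1.0 - 4.0*eps)
y_exact = (np.exp((1.0 - x)/(2.0*eps))
           * np.sinh(s*x/(2.0*eps))/np.sinh(s/(2.0*eps)))

# leading-order composite solution
y_comp = np.e*(np.exp(-x) - np.exp(-x/eps))

max_err = np.max(np.abs(y_exact - y_comp))
```

The maximum error is roughly 0.1, i.e. of the size of $\epsilon$, as expected for a leading-order matched expansion.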
Example 4.16
Obtain the solution of the previous problem,
\[
\epsilon y'' + y' + y = 0, \quad \text{with } y(0) = 0,\ y(1) = 1 \qquad (4.12)
\]
$^6$Ludwig Prandtl, 1875-1953, German engineer based in Göttingen.
Figure 4.16: Exact, inner layer, outer layer, and composite solutions for the boundary layer problem $\epsilon y'' + y' + y = 0$, $y(0) = 0$, $y(1) = 1$, with $\epsilon = 0.1$ (Prandtl's boundary layer method)
to the next order.
Keeping terms of the next order in $\epsilon$, we have
\[
y = e^{1-x} + \epsilon\left[(1-x)e^{1-x}\right] + \ldots
\]
for the outer solution, and
\[
y = A(1 - e^{-X}) + \epsilon\left[B - AX - (B + AX)e^{-X}\right] + \ldots
\]
for the inner solution.
Higher order matching (Van Dyke's$^7$ method) is obtained by expanding the outer solution in terms of the inner variable, the inner solution in terms of the outer variable, and comparing. Thus the outer solution is, as $\epsilon \to 0$,
\begin{align*}
y &= e^{1-\epsilon X} + \epsilon\left[(1 - \epsilon X)e^{1-\epsilon X}\right] + \ldots \\
&= e(1 - \epsilon X) + \epsilon e(1 - \epsilon X), \quad \text{ignoring terms of } O(\epsilon^2) \text{ and smaller} \\
&= e(1 - \epsilon X) + \epsilon e \\
&= e + \epsilon e(1 - X) \\
&= e + \epsilon e(1 - x/\epsilon) \\
&= e + \epsilon e - ex
\end{align*}
Similarly, the inner solution as $\epsilon \to 0$ is
\begin{align*}
y &= A(1 - e^{-x/\epsilon}) + \epsilon\left[B - A\frac{x}{\epsilon} - \left(B + A\frac{x}{\epsilon}\right)e^{-x/\epsilon}\right] + \ldots \\
&= A + \epsilon B - Ax
\end{align*}
$^7$Milton Van Dyke, 20th century American engineer and applied mathematician.
Figure 4.17: Difference between the exact and asymptotic solutions, Exact $-$ [$O(1) + O(\epsilon)$] and Exact $-$ [$O(1) + O(\epsilon) + O(\epsilon^2)$], for the boundary layer problem $\epsilon y'' + y' + y = 0$, $y(0) = 0$, $y(1) = 1$, with $\epsilon = 0.1$ (Prandtl's boundary layer method)
Comparing, we get $A = B = e$, so that
\[
y(x) = e(1 - e^{-x/\epsilon}) + e\left[\epsilon - x - (\epsilon + x)e^{-x/\epsilon}\right] + \cdots \text{ in the inner region}
\]
and
\[
y(x) = e^{1-x} + \epsilon(1-x)e^{1-x} + \cdots \text{ in the outer region}
\]
The composite solution is
\[
y = e^{1-x} - (1 + x)e^{1-x/\epsilon} + \epsilon\left[(1-x)e^{1-x} - e^{1-x/\epsilon}\right] + \cdots
\]
The difference between the exact solution and the approximation from the previous example, and the difference between the exact solution and the approximation from this example, are plotted in Figure 4.17.
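The improvement from carrying the expansion one order further can be checked numerically; a sketch with $\epsilon = 0.1$, comparing the maximum errors of the two composite solutions:

```python
import numpy as np

eps = 0.1
x = np.linspace(0.0, 1.0, 1001)

# exact solution of eps*y'' + y' + y = 0, y(0) = 0, y(1) = 1
s = np.sqrt(1.0 - 4.0*eps)
y_exact = (np.exp((1.0 - x)/(2.0*eps))
           * np.sinh(s*x/(2.0*eps))/np.sinh(s/(2.0*eps)))

# O(1) composite from the previous example
y1 = np.e*(np.exp(-x) - np.exp(-x/eps))
# composite corrected to O(eps), from this example
y2 = (np.exp(1.0 - x) - (1.0 + x)*np.exp(1.0 - x/eps)
      + eps*((1.0 - x)*np.exp(1.0 - x) - np.exp(1.0 - x/eps)))

err1 = np.max(np.abs(y_exact - y1))
err2 = np.max(np.abs(y_exact - y2))
```

The corrected composite reduces the maximum error from roughly $O(\epsilon)$ to roughly $O(\epsilon^2)$.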
Example 4.17
In the same problem, investigate the possibility of having the boundary layer at $x = 1$.
The outer solution now satisfies the condition $y(0) = 0$, giving $y = 0$. Let
\[
X = \frac{x - 1}{\epsilon}
\]
The lowest order inner solution satisfying $y(X = 0) = 1$ is
\[
y = A + (1 - A)e^{-X}
\]
However, as $X \to -\infty$, this becomes unbounded and cannot be matched with the outer solution. Thus, a boundary layer at $x = 1$ is not possible.
Example 4.18
Solve
\[
\epsilon y'' - y' + y = 0, \quad \text{with } y(0) = 0,\ y(1) = 1
\]
Figure 4.18: Exact and approximate solutions, and the difference between them, for the boundary layer problem $\epsilon y'' - y' + y = 0$, $y(0) = 0$, $y(1) = 1$, with $\epsilon = 0.1$
The boundary layer is at $x = 1$. The outer solution is $y = 0$. Taking
\[
X = \frac{x-1}{\epsilon}
\]
the inner solution is
\[
y = A + (1 - A)e^{X} + \ldots
\]
Matching, we get
\[
A = 0
\]
so that we have a composite solution
\[
y(x) = e^{(x-1)/\epsilon} + \ldots
\]
The exact solution, the approximate solution to $O(\epsilon)$, and the difference between the exact solution and the approximation, are plotted in Figure 4.18.
4.2.7 WKB method
Any equation of the form
\[
\frac{d^2v}{dx^2} + P(x)\frac{dv}{dx} + Q(x)v = 0 \qquad (4.13)
\]
can be written as
\[
\frac{d^2y}{dx^2} + R(x)y = 0 \qquad (4.14)
\]
where
\[
v(x) = y(x)\exp\left(-\frac{1}{2}\int_{x_0}^x P(s)\,ds\right) \qquad (4.15)
\]
\[
R(x) = Q(x) - \frac{1}{2}\frac{dP}{dx} - \frac{1}{4}(P(x))^2 \qquad (4.16)
\]
So it is sufficient to study equations of the form (4.14). The WKB method$^8$ is used for equations of the kind
\[
\epsilon^2\frac{d^2y}{dx^2} = f(x)y \qquad (4.17)
\]
$^8$Wentzel, Kramers and Brillouin.
where $\epsilon$ is a small parameter. This also includes an equation of the type
\[
\epsilon^2\frac{d^2y}{dx^2} = [\lambda^2 p(x) + q(x)]y \qquad (4.18)
\]
where $\lambda$ is a large parameter. Alternatively, by taking $x = \epsilon t$, equation (4.17) becomes
\[
\frac{d^2y}{dt^2} = f(\epsilon t)y \qquad (4.19)
\]
We can also write equation (4.17) as
\[
\frac{d^2y}{dx^2} = g(x)y \qquad (4.20)
\]
where $g(x)$ is slowly varying in the sense that $g'/g^{3/2} \sim O(\epsilon)$.
We seek solutions to equation (4.17) of the form
\[
y(x) = \exp\left[\frac{1}{\epsilon}\int_{x_o}^x \left[S_0(s) + \epsilon S_1(s) + \epsilon^2 S_2(s) + \cdots\right]ds\right] \qquad (4.21)
\]
The derivatives are
\[
\frac{dy}{dx} = \frac{1}{\epsilon}\left[S_0(x) + \epsilon S_1(x) + \epsilon^2 S_2(x) + \cdots\right]y(x)
\]
\[
\frac{d^2y}{dx^2} = \frac{1}{\epsilon^2}\left[S_0(x) + \epsilon S_1(x) + \epsilon^2 S_2(x) + \cdots\right]^2 y(x) + \frac{1}{\epsilon}\left[\frac{dS_0}{dx} + \epsilon\frac{dS_1}{dx} + \epsilon^2\frac{dS_2}{dx} + \cdots\right]y(x)
\]
Substituting in the equation we get
\[
\left[(S_0(x))^2 + 2\epsilon S_0(x)S_1(x) + \cdots\right]y(x) + \epsilon\left[\frac{dS_0}{dx} + \cdots\right]y(x) = f(x)y(x)
\]
and collecting terms of $O(\epsilon^0)$, we have
\[
S_0^2(x) = f(x)
\]
from which
\[
S_0(x) = \pm\sqrt{f(x)}
\]
To $O(\epsilon^1)$ we have
\[
2S_0(x)S_1(x) + \frac{dS_0}{dx} = 0
\]
from which
\[
S_1(x) = -\frac{\frac{dS_0}{dx}}{2S_0(x)} = -\frac{\pm\frac{1}{2\sqrt{f(x)}}\frac{df}{dx}}{\pm 2\sqrt{f(x)}}
\]
\[
S_1(x) = -\frac{\frac{df}{dx}}{4f(x)}
\]
Thus we get the general solution
\[
y(x) = C_1\exp\left[\frac{1}{\epsilon}\int_{x_o}^x \left[S_0(s) + \epsilon S_1(s) + \cdots\right]ds\right] \qquad (4.22)
\]
\[
\qquad + C_2\exp\left[\frac{1}{\epsilon}\int_{x_o}^x \left[S_0(s) + \epsilon S_1(s) + \cdots\right]ds\right] \qquad (4.23)
\]
where the two terms correspond to the two signs of $S_0$. Written out,
\[
y(x) = C_1\exp\left[\frac{1}{\epsilon}\int_{x_o}^x \left[\sqrt{f(s)} - \epsilon\frac{\frac{df}{ds}}{4f(s)} + \cdots\right]ds\right] \qquad (4.24)
\]
\[
\qquad + C_2\exp\left[\frac{1}{\epsilon}\int_{x_o}^x \left[-\sqrt{f(s)} - \epsilon\frac{\frac{df}{ds}}{4f(s)} + \cdots\right]ds\right] \qquad (4.25)
\]
\[
y(x) = C_1\exp\left[-\int_{f(x_o)}^{f(x)}\frac{df}{4f}\right]\exp\left[\frac{1}{\epsilon}\int_{x_o}^x \left[\sqrt{f(s)} + \cdots\right]ds\right] \qquad (4.26)
\]
\[
\qquad + C_2\exp\left[-\int_{f(x_o)}^{f(x)}\frac{df}{4f}\right]\exp\left[-\frac{1}{\epsilon}\int_{x_o}^x \left[\sqrt{f(s)} + \cdots\right]ds\right] \qquad (4.27)
\]
\[
y(x) = \frac{\hat{C}_1}{(f(x))^{1/4}}\exp\left[\frac{1}{\epsilon}\int_{x_o}^x \sqrt{f(s)}\,ds\right] + \frac{\hat{C}_2}{(f(x))^{1/4}}\exp\left[-\frac{1}{\epsilon}\int_{x_o}^x \sqrt{f(s)}\,ds\right] \qquad (4.28)
\]
This solution is not valid near $x = a$ for which $f(a) = 0$. Such points are called turning points.
Example 4.19
Find an approximate solution of the Airy$^9$ equation
\[
\epsilon^2 y'' + xy = 0, \quad \text{for } x > 0
\]
In this case
\[
f(x) = -x
\]
so that
\[
S_0(x) = \pm i\sqrt{x}
\]
and
\[
S_1(x) = -\frac{S_0'}{2S_0} = -\frac{1}{4x}
\]
The solutions are of the form
\[
y = \exp\left[\pm\frac{i}{\epsilon}\int\sqrt{x}\,dx - \int\frac{dx}{4x}\right] = \frac{1}{x^{1/4}}\exp\left[\pm\frac{2x^{3/2}i}{3\epsilon}\right]
\]
$^9$George Biddell Airy, 1801-1892, English applied mathematician, First Wrangler at Cambridge, holder of the Lucasian Chair (that held by Newton) at Cambridge, Astronomer Royal who had some role in delaying the identification of Neptune as predicted by John Couch Adams' perturbation theory in 1845.
The general solution is
\[
y = \frac{C_1}{x^{1/4}}\sin\left[\frac{2x^{3/2}}{3\epsilon}\right] + \frac{C_2}{x^{1/4}}\cos\left[\frac{2x^{3/2}}{3\epsilon}\right] + \cdots
\]
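The WKB approximation can be compared against direct numerical integration of the Airy-type equation. In this sketch the value $\epsilon = 0.05$ and the interval $[1, 3]$ are arbitrary choices kept away from the turning point at $x = 0$; the ODE is integrated with RK4 starting from the WKB values at $x = 1$.

```python
import numpy as np

eps = 0.05  # assumed value

def wkb(x):
    """Two-term WKB approximation (one branch) for eps^2 y'' + x y = 0."""
    return x**-0.25*np.sin(2.0*x**1.5/(3.0*eps))

def wkb_prime(x):
    theta = 2.0*x**1.5/(3.0*eps)
    return -0.25*x**-1.25*np.sin(theta) + x**0.25*np.cos(theta)/eps

def f(x, u):
    # first-order system for eps^2 y'' = -x y
    return np.array([u[1], -x*u[0]/eps**2])

x0, x1, n = 1.0, 3.0, 40000
xs = np.linspace(x0, x1, n + 1)
h = (x1 - x0)/n
u = np.array([wkb(x0), wkb_prime(x0)])
y_num = [u[0]]
for i in range(n):
    k1 = f(xs[i], u)
    k2 = f(xs[i] + h/2, u + h/2*k1)
    k3 = f(xs[i] + h/2, u + h/2*k2)
    k4 = f(xs[i] + h, u + h*k3)
    u = u + h/6*(k1 + 2*k2 + 2*k3 + k4)
    y_num.append(u[0])
y_num = np.array(y_num)

max_diff = np.max(np.abs(y_num - wkb(xs)))
```

Away from the turning point the WKB solution tracks the oscillatory numerical solution closely, consistent with the asymptotic error estimate.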
Example 4.20
Find a solution of $x^3y'' = y$, for small, positive $x$.
Let $\epsilon^2 X = x$, so that $X$ is of $O(1)$ when $x$ is small. Then the equation becomes
\[
\epsilon^2\frac{d^2y}{dX^2} = X^{-3}y
\]
The WKB method is applicable. We have $f = X^{-3}$. The general solution is
\[
y = C_1 X^{3/4}\exp\left(-\frac{2}{\epsilon\sqrt{X}}\right) + C_2 X^{3/4}\exp\left(\frac{2}{\epsilon\sqrt{X}}\right) + \cdots
\]
In terms of the original variables
\[
y = C_1 x^{3/4}\exp\left(-\frac{2}{\sqrt{x}}\right) + C_2 x^{3/4}\exp\left(\frac{2}{\sqrt{x}}\right) + \cdots
\]
4.2.8 Solutions of the type $e^{S(x)}$

Example 4.21
Solve
\[
x^3y'' = y
\]
for small, positive $x$. Let $y = e^{S(x)}$, so that $y' = S'e^S$, $y'' = (S')^2e^S + S''e^S$, from which
\[
S'' + (S')^2 = x^{-3}
\]
Assume that $S'' \ll (S')^2$ (to be checked later). Thus $S' = \pm x^{-3/2}$, and $S = \mp 2x^{-1/2}$. Checking we get $S''/(S')^2 \sim x^{1/2} \to 0$ as $x \to 0$, confirming the assumption. Now we add a correction term so that $S(x) = 2x^{-1/2} + C(x)$, where we have taken the positive sign for $S$. Assume that $C \ll 2x^{-1/2}$. Substituting in the equation, we have
\[
\frac{3}{2}x^{-5/2} + C'' - 2x^{-3/2}C' + (C')^2 = 0
\]
Since $C \ll 2x^{-1/2}$, we have $C' \ll x^{-3/2}$ and $C'' \ll \frac{3}{2}x^{-5/2}$. Thus
\[
\frac{3}{2}x^{-5/2} - 2x^{-3/2}C' = 0
\]
from which $C' = \frac{3}{4}x^{-1}$ and $C = \frac{3}{4}\ln x$. We can now check the assumption on $C$.
We have $S(x) = 2x^{-1/2} + \frac{3}{4}\ln x$, so that
\[
y = x^{3/4}\exp\left(\frac{2}{\sqrt{x}}\right)
\]
Another solution is obtained by taking $S(x) = -2x^{-1/2} + C(x)$.
This procedure is similar to that of the WKB method, and the solution is identical.
4.2.9 Repeated substitution
This technique sometimes works if the range of the independent variable is such that some term is small.

Example 4.22
Solve
\[
y' = e^{-xy}
\]
for $y > 0$ and large $x$.
As $x \to \infty$, $y' \to 0$, so that $y \to c$ (a positive constant). Substituting into the equation, we have
\[
y' = e^{-cx}
\]
which can be integrated to get
\[
y = c - \frac{1}{c}e^{-cx}
\]
Substituting again, we have
\[
y' = \exp\left[-x\left(c - \frac{1}{c}e^{-cx}\right)\right] = e^{-cx}\left(1 + \frac{x}{c}e^{-cx} + \ldots\right)
\]
which can be integrated to give
\[
y = c - \frac{1}{c}e^{-cx} - \frac{1}{2c^2}\left(x + \frac{1}{2c}\right)e^{-2cx} + \ldots
\]
The series converges for large $x$.
An accurate numerical solution along with the first approximation are plotted in Figure 4.19.
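Each substitution should shrink the residual $y' - e^{-xy}$. A quick sketch with $c = 1$ (the value suggested by the figure's condition $y(\infty) = 1$), comparing the residuals of the two-term and three-term iterates on a large-$x$ interval:

```python
import numpy as np

c = 1.0
x = np.linspace(3.0, 10.0, 701)

# two-term and three-term approximations from the repeated substitution
y2 = c - np.exp(-c*x)/c
y3 = y2 - (x + 1.0/(2.0*c))*np.exp(-2.0*c*x)/(2.0*c**2)

def residual(y):
    """Residual of y' = exp(-x*y), with y' from central differences."""
    return np.gradient(y, x) - np.exp(-x*y)

r2 = np.max(np.abs(residual(y2)))
r3 = np.max(np.abs(residual(y3)))
```

The three-term approximation reduces the residual by roughly a factor $xe^{-cx}$, as the structure of the series suggests.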
Problems
1. Solve as a series in $x$ for $x > 0$ about the point $x = 0$:
(a) $x^2y'' - 2xy' + (2x-3)y = 0$; $y(0) = 0$, $y(1) = 1$
(b) $xy'' + 2y' + x^2y = 0$; $y(0) = 1$, $y'(0) = 0$
In each case find the exact solution with a symbolic computation program, and compare graphically the first four terms of your series solution with the exact solution.
2. Find two term expansions for each of the roots of
\[
(x-1)(x+2)(x-3\lambda) + 2 = 0
\]
where $\lambda$ is large.
Figure 4.19: Numerical solution and first approximation, $y = 1 - e^{-x}$, for the repeated substitution problem $y' = e^{-xy}$, $y(\infty) = 1$
3. Find two terms of an approximate solution of
\[
y'' + \frac{\lambda}{\lambda + x}y = 0
\]
with $y(0) = 0$, $y(1) = 1$, where $\lambda$ is a large parameter. For $\lambda = 20$, plot $y(x)$ for the two term expansion. Also compute the exact solution by numerical integration. Plot the difference between the asymptotic and numerical solution versus $x$.
4. Find the leading order solution for
\[
\epsilon(x - y)\frac{dy}{dx} + xy = e^{-x}
\]
where $y(1) = 1$, and $x \in [0,1]$, $\epsilon \ll 1$. For $\epsilon = 0.2$, plot the asymptotic solution, the exact solution and the difference versus $x$.
5. The motion of a pendulum is governed by the equation
\[
\frac{d^2x}{dt^2} + \sin(x) = 0
\]
with $x(0) = \epsilon$, $\frac{dx}{dt}(0) = 0$. Using strained coordinates, find the approximate solution of $x(t)$ for small $\epsilon$ through $O(\epsilon^2)$. Plot your results for both your asymptotic results and those obtained by a numerical integration of the full equation.
6. Find an approximate solution for
\[
y'' - ye^{y/10} = 0
\]
with $y(0) = 1$, $y(1) = e$.
7. Find an approximate solution for the following problem:
\[
\ddot{y} - ye^{y/12} = 0 \quad \text{with } y(0) = 0.1,\ \dot{y}(0) = 1.2
\]
Compare with the numerical solution for $0 \le x \le 1$.
8. Find the lowest order solution for
\[
\epsilon^2 y'' + \epsilon y^2 - y + 1 = 0
\]
with $y(0) = 1$, $y(1) = 3$, where $\epsilon$ is small. For $\epsilon = 0.2$, plot the asymptotic and exact solutions.
9. Show that for small $\epsilon$ the solution of
\[
\frac{dy}{dt} - y = \epsilon e^{t}
\]
with $y(0) = 1$ can be approximated as an exponential on a slightly different time scale.
10. Obtain approximate general solutions of the following equations near $x = 0$.
(a) $xy'' + y' + xy = 0$, through $O(x^6)$,
(b) $xy'' + y = 0$, through $O(x^2)$.
11. Find all solutions through $O(\epsilon^2)$, where $\epsilon$ is a small parameter, and compare with the exact result for $\epsilon = 0.1$.
(a) $4x^4 + 4(2\epsilon + 1)x^3 + 3(3\epsilon - 5)x^2 + (\epsilon - 16)x - 4 = 0$
(b) $4x^4 + 4(2\epsilon + 1)x^3 + (13 - 2\epsilon)x^2 - 13x - 4 = 0$.
12. Find three terms of a solution of
\[
x + \epsilon\cos(x + 2\epsilon) = \frac{\pi}{2}
\]
where $\epsilon$ is a small parameter. For $\epsilon = 0.2$, compare the best asymptotic solution with the exact solution.
13. Find three terms of the solution of
\[
\dot{x} + 2x + \epsilon x^2 = 0, \quad \text{with } x(0) = \cosh\epsilon
\]
where $\epsilon$ is a small parameter. Compare graphically with the exact solution for $\epsilon = 0.3$ and $0 \le t \le 2$.
14. Write down an approximation for
\[
\int_0^{\pi/2}\sqrt{1 + \epsilon\cos^2 x}\,dx
\]
if $\epsilon = 0.1$, so that the absolute error is less than $2 \times 10^{-4}$.
15. Solve
\[
y'' + y = \epsilon e^{\sin x}, \quad \text{with } y(0) = y(1) = 0
\]
through $O(\epsilon)$, where $\epsilon$ is a small parameter. For $\epsilon = 0.25$ graphically compare the asymptotic solution with a numerically obtained solution.
16. The solution of the matrix equation $\mathbf{A}\cdot\mathbf{x} = \mathbf{y}$ can be written as $\mathbf{x} = \mathbf{A}^{-1}\cdot\mathbf{y}$. Find the perturbation solution of $(\mathbf{A} + \epsilon\mathbf{B})\cdot\mathbf{x} = \mathbf{y}$, where $\epsilon$ is a small parameter.
17. Find all solutions of $\epsilon x^5 + x - 1 = 0$ approximately, if $\epsilon$ is small and positive. If $\epsilon = 0.03$, compare the exact solution obtained by trial and error with the asymptotic solution.
18. Obtain the first two terms of an approximate solution to
\[
\ddot{x} + 3(1+\epsilon)\dot{x} + 2x = 0 \quad \text{with } x(0) = 2(1+\epsilon),\ \dot{x}(0) = -3(1+2\epsilon)
\]
for small $\epsilon$. Compare the approximate and exact solutions graphically in the range $0 \le x \le 1$ for (a) $\epsilon = 0.1$, (b) $\epsilon = 0.25$, and (c) $\epsilon = 0.5$.
19. Find an approximate solution to
\[
\ddot{x} + (1+\epsilon)x = 0 \quad \text{with } x(0) = A,\ \dot{x}(0) = B
\]
for small, positive $\epsilon$. Compare with the exact solution. Plot both the exact solution and the approximate solution on the same graph for $A = 1$, $B = 0$, $\epsilon = 0.3$.
20. Find an approximate solution to the following problem for small $\epsilon$:
\[
\epsilon^2\ddot{y} - y = -1 \quad \text{with } y(0) = 0,\ y(1) = 0
\]
Compare graphically with the exact solution for $\epsilon = 0.1$.
21. Solve to leading order
\[
\epsilon y'' + yy' - y = 0 \quad \text{with } y(0) = 0,\ y(1) = 3
\]
Compare graphically to the exact solution for $\epsilon = 0.2$.
22. If $\ddot{x} + x + \epsilon x^3 = 0$ with $x(0) = A$, $\dot{x}(0) = 0$ where $\epsilon$ is small, a regular expansion gives $x(t) \approx A\cos t + \epsilon\frac{A^3}{32}(-\cos t + \cos 3t - 12t\sin t)$. Explain why this is not valid for all time, and obtain a better solution by inserting $t = (1 + \epsilon a_1 + \ldots)\tau$ into this solution, expanding in terms of $\epsilon$, and choosing $a_1, a_2, \ldots$ properly (Pritulo's method).
23. Use perturbations to find an approximate solution to
\[
y'' + \lambda y' = \lambda \quad \text{with } y(0) = 0,\ y(1) = 0
\]
where $\lambda \gg 1$.
24. Find the complementary functions of
\[
y'' - xy = 0
\]
in terms of expansions near $x = 0$. Retain only two terms for each function.
25. Find, correct to $O(\epsilon)$, the solution of
\[
\ddot{x} + (1 + \epsilon\cos 2t)\,x = 0 \quad \text{with } x(0) = 1 \text{ and } \dot{x}(0) = 0
\]
that is bounded for all $t$, where $\epsilon \ll 1$.
26. Find the function $f$ to $O(\epsilon)$ where it satisfies the integral equation
\[
x = \int_0^{x + \epsilon\sin x} f(\xi)\,d\xi
\]
27. Find three terms of a perturbation solution of
\[
y'' + \epsilon y^2 = 0
\]
with $y(0) = 0$, $y(1) = 1$ for $\epsilon \ll 1$. For $\epsilon = 2.5$, compare the $O(1)$, $O(\epsilon)$, and $O(\epsilon^2)$ solutions to a numerically obtained solution in $x \in [0,1]$.
28. Obtain a power series solution (in summation form) for $y' + ky = 0$ about $x = 0$, where $k$ is an arbitrary, nonzero constant. Compare to a Taylor series expansion of the exact solution.
29. Obtain two terms of an approximate solution for $\epsilon e^{x} = \cos x$ when $\epsilon$ is small. Graphically compare to the actual values (obtained numerically) when $\epsilon = 0.2, 0.1, 0.01$.
30. Obtain three terms of a perturbation solution for the roots of the equation $(1-\epsilon)x^2 - 2x + 1 = 0$. (Hint: The expansion $x = x_0 + \epsilon x_1 + \epsilon^2 x_2 + \ldots$ will not work.)
31. The solution of the matrix equation $\mathbf{A}\cdot\mathbf{x} = \mathbf{y}$ can be written as $\mathbf{x} = \mathbf{A}^{-1}\cdot\mathbf{y}$. Find the $n^{th}$ term of the perturbation solution of $(\mathbf{A} + \epsilon\mathbf{B})\cdot\mathbf{x} = \mathbf{y}$, where $\epsilon$ is a small parameter. Obtain the first three terms of the solution for
\[
\mathbf{A} = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 2 & 1 \\ 1 & 2 & 3 \end{pmatrix}, \quad
\mathbf{B} = \begin{pmatrix} 1/10 & 1/2 & 1/10 \\ 0 & 1/5 & 0 \\ 1/2 & 1/10 & 1/2 \end{pmatrix}, \quad
\mathbf{y} = \begin{pmatrix} 1/2 \\ 1/5 \\ 1/10 \end{pmatrix}.
\]
32. Obtain leading and first order terms for $u$ and $v$, governed by the following set of coupled differential equations, for small $\epsilon$:
\[
\frac{d^2u}{dx^2} + \epsilon v\frac{du}{dx} = 1, \quad u(0) = 0,\ u(1) = \frac{1}{2} + \frac{\epsilon}{120}
\]
\[
\frac{d^2v}{dx^2} + \epsilon u\frac{dv}{dx} = x, \quad v(0) = 0,\ v(1) = \frac{1}{6} + \frac{\epsilon}{80}
\]
Compare asymptotic and numerically obtained results for $\epsilon = 0.2$.
33. Obtain two terms of a perturbation solution to
\[
\epsilon f_{xx} + f_x = -e^{-x}
\]
with boundary conditions $f(0) = 0$, $f(1) = 1$. Graph the solution for $\epsilon = 0.2, 0.1, 0.05, 0.025$ on $0 \le x \le 1$.
34. Find two uniformly valid approximate solutions of
\[
\ddot{u} + \frac{\omega^2 u}{1 + \epsilon u^2} = 0 \quad \text{with } u(0) = 0
\]
up to the first order. Note that $\omega$ is not small.
35. Using a two-variable expansion, find the lowest order solution of
(a) $\ddot{x} + \epsilon\dot{x} + x = 0$ with $x(0) = 0$, $\dot{x}(0) = 1$
(b) $\ddot{x} + \epsilon\dot{x}^3 + x = 0$ with $x(0) = 0$, $\dot{x}(0) = 1$
where $\epsilon \ll 1$. Compare asymptotic and numerically obtained results for $\epsilon = 0.01$.
36. Obtain a three-term solution of
\[
\epsilon\ddot{x} - \dot{x} = 1, \quad \text{with } x(0) = 0,\ x(1) = 2
\]
where $\epsilon \ll 1$.
37. Find an approximate solution to the following problem for small $\epsilon$:
\[
\epsilon^2\ddot{y} - y = -1 \quad \text{with } y(0) = 0,\ y(1) = 0
\]
Compare graphically with the exact solution for $\epsilon = 0.1$.
38. A projectile of mass $m$ is launched at an angle $\alpha$ with respect to the horizontal, and with an initial velocity $V$. Find the time it takes to reach its maximum height. Assume that the air resistance is small and can be written as $k$ times the square of the velocity of the projectile. Choosing appropriate values for the parameters, compare with the numerical result.
39. For small $\epsilon$, solve using WKB
\[
\epsilon^2 y'' = (1 + x^2)^2 y \quad \text{with } y(0) = 0,\ y(1) = 1.
\]
40. Obtain a general series solution of
\[
y'' + k^2 y = 0
\]
about $x = 0$.
41. Find a general solution of
\[
y'' + e^{x}y = 1
\]
near $x = 0$.
42. Solve
\[
x^2y'' + x\left(\frac{1}{2} + 2x\right)y' + \left(x - \frac{1}{2}\right)y = 0
\]
around $x = 0$.
43. Solve $y'' - \epsilon\sqrt{x}\,y = 0$, $x > 0$, where $\epsilon$ is small, in each one of the following ways:
(a) Substitute $x = \epsilon^{-4/5}X$, and then use WKB.
(b) Substitute $x = \epsilon^{2/5}X$, and then use regular perturbation.
(c) Find an approximate solution of the kind $y = e^{S(x)}$.
44. Find a solution of
\[
y'' - \sqrt{x}\,y = 0
\]
for small $x \ge 0$.
45. Find an approximate general solution of
\[
x\sin x\ y'' + (2x\cos x + x^2\sin x)\,y' + (x\sin x + \sin x + x^2\cos x)\,y = 0
\]
valid near $x = 0$.
46. A bead can slide along a circular hoop in a vertical plane. The bead is initially at the lowest position, $\theta = 0$, and given an initial velocity of $2\sqrt{gR}$, where $g$ is the acceleration due to gravity and $R$ is the radius of the hoop. If the friction coefficient is $\mu$, find the maximum angle $\theta_{max}$ reached by the bead. Compare perturbation and numerical results. Present results on a $\theta_{max}$ vs. $\mu$ plot, for $0 \le \mu \le 0.3$.
47. The initial velocity downwards of a body of mass $m$ immersed in a very viscous fluid is $V$. Find the velocity of the body as a function of time. Assume that the viscous force is proportional to the velocity. Assume that the inertia of the body is small, but not negligible, relative to viscous and gravity forces. Compare perturbation and exact solutions graphically.
48. For small $\epsilon$, solve to lowest order using the method of multiple scales
\[
\ddot{x} + \epsilon\dot{x} + x = 0 \quad \text{with } x(0) = 0,\ \dot{x}(0) = 1.
\]
Compare exact and asymptotic results for $\epsilon = 0.3$.
49. For small $\epsilon$, solve using WKB
\[
\epsilon^2 y'' = (1 + x^2)^2 y \quad \text{with } y(0) = 0,\ y(1) = 1.
\]
Plot asymptotic and numerical solutions for $\epsilon = 0.11$.
50. Find the lowest order approximate solution to
\[
\epsilon^2 y'' + \epsilon y^2 - y + 1 = 0 \quad \text{with } y(0) = 1,\ y(1) = 2
\]
where $\epsilon$ is small. Plot asymptotic and numerical solutions for $\epsilon = 0.23$.
51. A pendulum is used to measure the earth's gravity. The frequency of oscillation is measured, and the gravity calculated assuming a small amplitude of motion and knowing the length of the pendulum. What must the maximum initial angular displacement of the pendulum be if the error in gravity is to be less than 1%? Neglect air resistance.
52. Find two terms of an approximate solution of
\[
y'' + \frac{\lambda}{\lambda + x}y = 0
\]
with $y(0) = 0$, $y(1) = 1$, where $\lambda$ is a large parameter.
53. Find all solutions of $\epsilon e^{x} = x^2$ through $O(\epsilon^2)$, where $\epsilon$ is a small parameter.
54. Solve
\[
(1 + \epsilon)y'' + \epsilon y^2 = 1
\]
with $y(0) = 0$, $y(1) = 1$ through $O(\epsilon^2)$, where $\epsilon$ is a small parameter.
55. Solve to lowest order
\[
\epsilon y'' + y' + y^2 = 1
\]
with $y(0) = -1$, $y(1) = 1$, where $\epsilon$ is a small parameter. For $\epsilon = 0.2$, plot asymptotic and numerical solutions to the full equation.
56. Find the series solution of the differential equation
\[
y'' + xy = 0
\]
around $x = 0$ up to four terms.
57. Find the local solution of the equation
\[
y'' = \sqrt{x}\,y
\]
near $x \to 0^{+}$.
58. Find the solution of the transcendental equation
\[
\sin x = \epsilon\cos x
\]
near $x = \pi$ for small positive $\epsilon$.
59. Solve
\[
\epsilon y'' - y' = 1
\]
with $y(0) = 0$, $y(1) = 2$ for small $\epsilon$. Plot asymptotic and numerical solutions for $\epsilon = 0.04$.
60. Find two terms of the perturbation solution of
\[
(1 + \epsilon y)y'' + \epsilon (y')^2 - N^2 y = 0
\]
with $y'(0) = 0$, $y(1) = 1$, for small $\epsilon$. $N$ is a constant. Plot the asymptotic and numerical solution for $\epsilon = 0.12$, $N = 10$.
61. Solve
\[
\epsilon y'' + y' = \frac{1}{2}
\]
with $y(0) = 0$, $y(1) = 1$ for small $\epsilon$. Plot asymptotic and numerical solutions for $\epsilon = 0.12$.
62. Find if the van der Pol equation
\[
\ddot{y} - \epsilon(1 - y^2)\dot{y} + k^2 y = 0
\]
has a limit cycle of the form $y = A\cos\omega t$.
63. Solve
\[
y' = e^{-2xy}
\]
for large $x$ where $y$ is positive. Plot $y(x)$.
Chapter 5
Special functions

see Kaplan, Chapter 7,
see Lopez, Chapters 10, 16,
see Riley, Hobson, and Bence, Sections 15.4, 15.5.

Solutions of differential equations give rise to complementary functions. Some of these are well known, such as sin and cos. This chapter will consider these and other functions which arise from the solutions of a variety of second order differential equations with nonconstant coefficients.
5.1 Sturm-Liouville equations
Consider the following general linear homogeneous second order differential equation with general homogeneous boundary conditions:
\[
a(x)\frac{d^2y}{dx^2} + b(x)\frac{dy}{dx} + c(x)y + \lambda y = 0 \qquad (5.1)
\]
\[
\alpha_1 y(x_0) + \alpha_2 y'(x_0) = 0 \qquad (5.2)
\]
\[
\beta_1 y(x_1) + \beta_2 y'(x_1) = 0 \qquad (5.3)
\]
Define the following functions:
\[
p(x) = \exp\left(\int^x \frac{b(s)}{a(s)}\,ds\right) \qquad (5.4)
\]
\[
r(x) = \frac{1}{a(x)}\exp\left(\int^x \frac{b(s)}{a(s)}\,ds\right) \qquad (5.5)
\]
\[
q(x) = \frac{c(x)}{a(x)}\exp\left(\int^x \frac{b(s)}{a(s)}\,ds\right) \qquad (5.6)
\]
With these definitions, the original equation is transformed to the type known as a Sturm-Liouville$^1$ equation:
\[
\frac{d}{dx}\left[p(x)\frac{dy}{dx}\right] + [q(x) + \lambda r(x)]\,y(x) = 0 \qquad (5.7)
\]
\[
\frac{1}{r(x)}\left[\frac{d}{dx}\left(p(x)\frac{d}{dx}\right) + q(x)\right]y(x) = -\lambda\,y(x) \qquad (5.8)
\]
Here the Sturm-Liouville linear operator $L_s$ is
\[
L_s = \frac{1}{r(x)}\left[\frac{d}{dx}\left(p(x)\frac{d}{dx}\right) + q(x)\right]
\]
so we have
\[
L_s\,y(x) = -\lambda\,y(x)
\]
Now the trivial solution $y(x) = 0$ will satisfy the differential equation. In addition, for special values of $\lambda$, known as eigenvalues, there are certain functions, known as eigenfunctions, which also satisfy the differential equation.
Now it can be shown that if we have in $[x_0, x_1]$
\[
p(x) > 0 \qquad (5.9)
\]
\[
r(x) > 0 \qquad (5.10)
\]
\[
q(x) \ge 0 \qquad (5.11)
\]
then an infinite number of real positive eigenvalues $\lambda$ and corresponding eigenfunctions $y_i(x)$ exist for which the differential equation is satisfied. Moreover it can also be shown (Hildebrand, p. 204) that a consequence of the homogeneous boundary conditions is the orthogonality condition:
\[
\int_{x_0}^{x_1} r(x)y_i(x)y_j(x)\,dx = 0, \quad \text{for } i \ne j \qquad (5.12)
\]
\[
\int_{x_0}^{x_1} r(x)y_i(x)y_i(x)\,dx = K^2 \qquad (5.13)
\]
Here $K$ is a real constant. Sturm-Liouville theory shares many analogies with vector algebra. In the same sense that the dot product of a vector with itself is guaranteed positive, we wish to define a "product" for the eigenfunctions in which the "product" of a function and itself is guaranteed positive. Consequently, the eigenfunctions are said to be orthogonal to each other.
Based on the above result we can define functions $\varphi_i(x)$:
\[
\varphi_i(x) = \sqrt{\frac{r(x)}{K^2}}\,y_i(x) \qquad (5.14)
\]
so that
\[
\int_{x_0}^{x_1}\varphi_i^2(x)\,dx = 1 \qquad (5.15)
\]
$^1$Jacques Charles François Sturm, 1803-1855, Swiss-born French mathematician, and Joseph Liouville, 1809-1882, French mathematician.
Such functions are said to be orthonormal. While orthonormal functions have great utility, note that in the context of our Sturm-Liouville nomenclature, $\varphi_i(x)$ does not in general satisfy the Sturm-Liouville equation: $L_s\,\varphi_i(x) \ne -\lambda_i\,\varphi_i(x)$. If, however, $r(x) = C$, where $C$ is a scalar constant, then in fact $L_s\,\varphi_i(x) = -\lambda_i\,\varphi_i(x)$. Whatever the case, in all cases we are guaranteed $L_s\,y_i(x) = -\lambda_i\,y_i(x)$. The $y_i(x)$ functions are orthogonal under the influence of the weighting function $r(x)$, but not necessarily orthonormal.
5.1.1 Linear oscillator
A linear oscillator gives a simple example of a Sturm-Liouville problem.
\[
\frac{d^2y}{dx^2} + \lambda y = 0 \qquad (5.16)
\]
\[
\alpha y(a) + \beta\frac{dy}{dx}(a) = 0 \qquad (5.17)
\]
\[
\gamma y(b) + \delta\frac{dy}{dx}(b) = 0 \qquad (5.18)
\]
Here we have
\[
a(x) = 1, \qquad b(x) = 0, \qquad c(x) = 0 \qquad (5.19, 5.20, 5.21)
\]
so
\[
p(x) = 1, \qquad r(x) = 1, \qquad q(x) = 0 \qquad (5.22, 5.23, 5.24)
\]
So we can consider the domain $-\infty < x < \infty$. In practice it is more common to consider the finite domain in which $x_0 < x < x_1$. The Sturm-Liouville operator is
\[
L_s = \frac{d^2}{dx^2}
\]
The eigenvalue problem is
\[
\frac{d^2}{dx^2}\,y(x) = -\lambda\,y(x)
\]
The general solution is
\[
y(x) = A\cos(\sqrt{\lambda}\,x) + B\sin(\sqrt{\lambda}\,x)
\]
Example 5.1
Find the eigenvalues and eigenfunctions for
\[
\frac{d^2y}{dx^2} + \lambda y = 0
\]
with
\[
y(0) = y(\ell) = 0
\]
For $y(0) = 0$ we get
\[
y(0) = 0 = A\cos(\sqrt{\lambda}\,(0)) + B\sin(\sqrt{\lambda}\,(0)) = A(1) + B(0)
\]
so $A = 0$. So
\[
y(x) = B\sin(\sqrt{\lambda}\,x)
\]
At the other boundary we have
\[
y(\ell) = 0 = B\sin(\sqrt{\lambda}\,\ell)
\]
For nontrivial solutions we need $B \ne 0$, which then requires that
\[
\sqrt{\lambda}\,\ell = n\pi, \qquad n = \pm 1, \pm 2, \pm 3, \ldots
\]
so
\[
\lambda = \left(\frac{n\pi}{\ell}\right)^2
\]
The eigenvalues and eigenfunctions are
\[
\lambda_n = \frac{n^2\pi^2}{\ell^2}
\]
and
\[
y_n(x) = \sin\left(\frac{n\pi x}{\ell}\right)
\]
respectively.
Check orthogonality for $y_2(x)$ and $y_3(x)$:
\[
I = \int_0^{\ell}\sin\left(\frac{2\pi x}{\ell}\right)\sin\left(\frac{3\pi x}{\ell}\right)dx = \frac{\ell}{2\pi}\left[\sin\left(\frac{\pi x}{\ell}\right) - \frac{1}{5}\sin\left(\frac{5\pi x}{\ell}\right)\right]_0^{\ell} = 0 \qquad (5.25\text{-}5.27)
\]
Check orthogonality for $y_4(x)$ and $y_4(x)$:
\[
I = \int_0^{\ell}\sin\left(\frac{4\pi x}{\ell}\right)\sin\left(\frac{4\pi x}{\ell}\right)dx = \left[\frac{x}{2} - \frac{\ell}{16\pi}\sin\left(\frac{8\pi x}{\ell}\right)\right]_0^{\ell} = \frac{\ell}{2} \qquad (5.28\text{-}5.30)
\]
In fact
\[
\int_0^{\ell}\sin\left(\frac{n\pi x}{\ell}\right)\sin\left(\frac{n\pi x}{\ell}\right)dx = \frac{\ell}{2} \qquad (5.31)
\]
so the orthonormal functions $\varphi_n(x)$ for this problem are
\[
\varphi_n(x) = \sqrt{\frac{2}{\ell}}\,\sin\left(\frac{n\pi x}{\ell}\right) \qquad (5.32)
\]
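The normalization (5.32) and the orthogonality of distinct modes are easy to confirm by quadrature. A sketch with an arbitrary interval length $\ell = 2$:

```python
import numpy as np

ell = 2.0                      # assumed interval length
x = np.linspace(0.0, ell, 20001)
dx = x[1] - x[0]

def phi(n):
    return np.sqrt(2.0/ell)*np.sin(n*np.pi*x/ell)

def inner(f, g):
    # composite trapezoid rule on the uniform grid
    h = f*g
    return (h.sum() - 0.5*(h[0] + h[-1]))*dx

norm4 = inner(phi(4), phi(4))     # should be 1
cross23 = inner(phi(2), phi(3))   # should be 0
```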
5.1.2 Legendre equation
see Kaplan, p. 533
Legendre's$^2$ equation is below. Here, it is convenient to let the term $n(n+1)$ play the role of $\lambda$.
\[
(1 - x^2)\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + n(n+1)y = 0 \qquad (5.33)
\]
Here
\[
a(x) = 1 - x^2, \qquad b(x) = -2x, \qquad c(x) = 0 \qquad (5.34, 5.35, 5.36)
\]
Then
\[
p(x) = \exp\left(\int^x \frac{-2s}{1 - s^2}\,ds\right) = \exp\left(\ln(1 - x^2)\right) = 1 - x^2 \qquad (5.37\text{-}5.39)
\]
We find then that
\[
r(x) = 1, \qquad q(x) = 0 \qquad (5.40, 5.41)
\]
Thus we require $-1 < x < 1$. In Sturm-Liouville form the equation reduces to
\[
\frac{d}{dx}\left[(1 - x^2)\frac{dy}{dx}\right] + n(n+1)\,y = 0 \qquad (5.42)
\]
\[
\frac{d}{dx}\left[(1 - x^2)\frac{d}{dx}\right]y(x) = -n(n+1)\,y(x) \qquad (5.43)
\]
So
\[
L_s = \frac{d}{dx}\left[(1 - x^2)\frac{d}{dx}\right] \qquad (5.44)
\]
Now $x = 0$ is a regular point, so we can expand in a power series around this point. Let
\[
y = \sum_{m=0}^{\infty} a_m x^m \qquad (5.45)
\]
Substituting in the differential equation, we find that
\[
a_{m+2} = a_m\,\frac{(m + n + 1)(m - n)}{(m + 1)(m + 2)} \qquad (5.46)
\]
$^2$Adrien-Marie Legendre, 1752-1833, French/Parisian mathematician.
Figure 5.1: Legendre polynomials $P_0(x)$, $P_1(x)$, $P_2(x)$, $P_3(x)$, $P_4(x)$
Thus the general solution is
\[
y(x) = y_1(x) + y_2(x)
\]
\[
y_1(x) = a_0\left[1 - n(n+1)\frac{x^2}{2!} + n(n+1)(n-2)(n+3)\frac{x^4}{4!} - \ldots\right]
\]
\[
y_2(x) = a_1\left[x - (n-1)(n+2)\frac{x^3}{3!} + (n-1)(n+2)(n-3)(n+4)\frac{x^5}{5!} - \ldots\right]
\]
For $n = 0, 2, 4, \ldots$, $y_1(x)$ is a finite polynomial, while $y_2(x)$ is an infinite series which diverges at $|x| = 1$. For $n = 1, 3, 5, \ldots$ it is the other way around.
The polynomials can be normalized by dividing through by their values at $x = 1$ to give the Legendre polynomials:
\[
\begin{array}{ll}
n = 0 & P_0(x) = 1 \\
n = 1 & P_1(x) = x \\
n = 2 & P_2(x) = \frac{1}{2}(3x^2 - 1) \\
n = 3 & P_3(x) = \frac{1}{2}(5x^3 - 3x) \\
n = 4 & P_4(x) = \frac{1}{8}(35x^4 - 30x^2 + 3) \\
\vdots & \\
\text{general } n & P_n(x) = \frac{1}{2^n n!}\,\frac{d^n}{dx^n}(x^2 - 1)^n \quad \text{Rodrigues' formula}
\end{array} \qquad (5.47\text{-}5.53)
\]
The first five eigenfunctions of the Legendre equation are plotted in Figure 5.1.
The total solution can be expressed as the sum of the polynomials $P_n(x)$ (Legendre functions of the first kind and degree $n$) and the series $Q_n(x)$ (Legendre functions of the second kind and degree $n$):
\[
y(x) = A\,P_n(x) + B\,Q_n(x) \qquad (5.54)
\]
The orthogonality condition is
\[
\int_{-1}^1 P_i(x)P_j(x)\,dx = 0, \quad i \ne j \qquad (5.55)
\]
\[
\int_{-1}^1 P_n(x)P_n(x)\,dx = \frac{2}{2n + 1} \qquad (5.56)
\]
Direct substitution shows that $P_n(x)$ satisfies both the differential equation and the orthogonality condition. It is then easily shown that the following functions are orthonormal on the interval $[-1, 1]$:
\[
\varphi_n(x) = \sqrt{n + \frac{1}{2}}\,P_n(x) \qquad (5.57)
\]
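Conditions (5.55)-(5.57) can be confirmed with Gauss-Legendre quadrature, which integrates these polynomial products exactly:

```python
import numpy as np
from numpy.polynomial import legendre

# 20-point Gauss-Legendre rule: exact for polynomials of degree <= 39
nodes, weights = legendre.leggauss(20)

def P(n, x):
    """Legendre polynomial P_n evaluated at x."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return legendre.legval(x, c)

ortho = weights @ (P(3, nodes)*P(4, nodes))               # expect 0
norm3 = weights @ (P(3, nodes)*P(3, nodes))               # expect 2/(2*3+1)
phi_norm = weights @ ((np.sqrt(3 + 0.5)*P(3, nodes))**2)  # expect 1
```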
5.1.3 Chebyshev equation
The Chebyshev$^3$ equation is
\[
(1 - x^2)\frac{d^2y}{dx^2} - x\frac{dy}{dx} + \lambda y = 0 \qquad (5.58)
\]
Let's get this into Sturm-Liouville form.
\[
a(x) = 1 - x^2, \qquad b(x) = -x, \qquad c(x) = 0 \qquad (5.59, 5.60, 5.61)
\]
Now
\[
p(x) = \exp\left(\int^x \frac{b(s)}{a(s)}\,ds\right) = \exp\left(\int^x \frac{-s}{1 - s^2}\,ds\right) = \exp\left(\frac{1}{2}\ln(1 - x^2)\right) = \sqrt{1 - x^2} \qquad (5.62\text{-}5.65)
\]
\[
r(x) = \frac{\exp\left(\int^x \frac{b(s)}{a(s)}\,ds\right)}{a(x)} = \frac{1}{\sqrt{1 - x^2}} \qquad (5.66)
\]
\[
q(x) = 0 \qquad (5.67)
\]
$^3$Pafnuty Lvovich Chebyshev, 1821-1894, Russian mathematician.
Figure 5.2: Chebyshev polynomials $T_0(x)$, $T_1(x)$, $T_2(x)$, $T_3(x)$, $T_4(x)$
Thus we require $-1 < x < 1$.
The Chebyshev equation in Sturm-Liouville form is
\[
\frac{d}{dx}\left[\sqrt{1 - x^2}\,\frac{dy}{dx}\right] + \frac{\lambda}{\sqrt{1 - x^2}}\,y = 0 \qquad (5.68)
\]
\[
\sqrt{1 - x^2}\,\frac{d}{dx}\left[\sqrt{1 - x^2}\,\frac{d}{dx}\right]y(x) = -\lambda\,y(x) \qquad (5.69)
\]
Thus
\[
L_s = \sqrt{1 - x^2}\,\frac{d}{dx}\left[\sqrt{1 - x^2}\,\frac{d}{dx}\right] \qquad (5.70)
\]
That the two forms are equivalent can be easily checked by direct expansion of the above equation.
The first five eigenfunctions of the Chebyshev equation are plotted in Figure 5.2. They can be expressed in terms of polynomials known as the Chebyshev polynomials, $T_n(x)$. These polynomials can be obtained by a regular series expansion of the original differential equation.
Eigenvalues and eigenfunctions are listed below:
\[
\begin{array}{ll}
\lambda = 0 & T_0(x) = 1 \\
\lambda = 1 & T_1(x) = x \\
\lambda = 4 & T_2(x) = -1 + 2x^2 \\
\lambda = 9 & T_3(x) = -3x + 4x^3 \\
\lambda = 16 & T_4(x) = 1 - 8x^2 + 8x^4 \\
\vdots & \\
\lambda = n^2 & T_n(x) = \cos(n\cos^{-1}x) \quad \text{Rodrigues' formula}
\end{array} \qquad (5.71\text{-}5.77)
\]
The Rodrigues$^4$ formula gives a generating formula for general $n$. The orthogonality condition is
\[
\int_{-1}^1 \frac{T_i(x)T_j(x)}{\sqrt{1 - x^2}}\,dx = 0, \quad i \ne j \qquad (5.78)
\]
\[
\int_{-1}^1 \frac{T_i(x)T_i(x)}{\sqrt{1 - x^2}}\,dx = \begin{cases} \pi & \text{if } i = j = 0 \\ \frac{\pi}{2} & \text{if } i = j = 1, 2, \ldots \end{cases} \qquad (5.79)
\]
Direct substitution shows that $T_n(x)$ satisfies both the differential equation and the orthogonality condition. We can deduce then that the functions $\varphi_n(x)$
\[
\varphi_n(x) = \begin{cases} \dfrac{T_n(x)}{\sqrt{\pi\sqrt{1 - x^2}}} & \text{if } n = 0 \\[8pt] \dfrac{\sqrt{2}\,T_n(x)}{\sqrt{\pi\sqrt{1 - x^2}}} & \text{if } n = 1, 2, \ldots \end{cases} \qquad (5.80)
\]
are an orthonormal set of functions on the interval $[-1, 1]$. That is,
\[
\int_{-1}^1 \varphi_n^2(x)\,dx = 1 \qquad (5.81)
\]
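The weighted orthogonality (5.78)-(5.79) is conveniently checked with Gauss-Chebyshev quadrature, whose cosine-spaced nodes and uniform weights $\pi/m$ absorb the $1/\sqrt{1-x^2}$ factor exactly:

```python
import numpy as np

# Gauss-Chebyshev rule for int_{-1}^{1} f(x)/sqrt(1-x^2) dx:
#   (pi/m) * sum_k f(cos((2k-1)pi/(2m))), exact for f of degree <= 2m-1
m = 50
theta = (2.0*np.arange(1, m + 1) - 1.0)*np.pi/(2.0*m)
nodes = np.cos(theta)
w = np.pi/m

def T(n, x):
    """Chebyshev polynomial via T_n(x) = cos(n arccos x)."""
    return np.cos(n*np.arccos(x))

cross = w*np.sum(T(2, nodes)*T(5, nodes))   # expect 0
norm0 = w*np.sum(T(0, nodes)**2)            # expect pi
norm3 = w*np.sum(T(3, nodes)**2)            # expect pi/2
```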
5.1.4 Hermite equation
see Kaplan, p. 541
The Hermite$^5$ equation is given below.
\[
\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + \lambda y = 0 \qquad (5.82)
\]
We find that
\[
p(x) = e^{-x^2}, \qquad r(x) = e^{-x^2}, \qquad q(x) = 0 \qquad (5.83, 5.84, 5.85)
\]
Thus we allow $-\infty < x < \infty$. In Sturm-Liouville form it becomes
\[
\frac{d}{dx}\left[e^{-x^2}\frac{dy}{dx}\right] + \lambda e^{-x^2}y = 0 \qquad (5.86)
\]
\[
e^{x^2}\frac{d}{dx}\left[e^{-x^2}\frac{d}{dx}\right]y(x) = -\lambda\,y(x) \qquad (5.87)
\]
So
\[
L_s = e^{x^2}\frac{d}{dx}\left[e^{-x^2}\frac{d}{dx}\right] \qquad (5.88)
\]
The first five eigenfunctions of the Hermite equation are plotted in Figure 5.3. They can be expressed in terms of polynomials known as the Hermite polynomials, $H_n(x)$. These polynomials can be obtained by a regular series expansion of the original differential equation.
$^4$Benjamin Olinde Rodrigues, 1794-1851, obscure French mathematician, of Portuguese and perhaps Spanish roots.
$^5$Charles Hermite, 1822-1901, Lorraine-born French mathematician.
Figure 5.3: Hermite polynomials $H_0(x)$, $H_1(x)$, $H_2(x)$, $H_3(x)$, $H_4(x)$
Eigenvalues and eigenfunctions are listed below:

    λ = 0:  H_0(x) = 1                       (5.89)
    λ = 2:  H_1(x) = 2x                      (5.90)
    λ = 4:  H_2(x) = -2 + 4x^2               (5.91)
    λ = 6:  H_3(x) = -12x + 8x^3             (5.92)
    λ = 8:  H_4(x) = 12 - 48x^2 + 16x^4      (5.93)
    ...                                      (5.94)
    λ = 2n: H_n(x) = (-1)^n e^{x^2} d^n/dx^n (e^{-x^2})   Rodrigues' formula   (5.95)
The orthogonality condition is

    ∫_{-∞}^{∞} e^{-x^2} H_i(x) H_j(x) dx = 0,   i ≠ j     (5.96)

    ∫_{-∞}^{∞} e^{-x^2} H_n(x) H_n(x) dx = 2^n n! √π      (5.97)

Direct substitution shows that H_n(x) satisfies both the differential equation and the orthog-
onality condition. It is then easily shown that the following functions are orthonormal on
the interval (-∞, ∞):

    φ_n(x) = e^{-x^2/2} H_n(x) / √(2^n n! √π)     (5.98)
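As a quick check (a supplement, not part of the original notes), the listed polynomials (5.89)-(5.93) agree with the standard three-term recurrence H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x), and the normalization (5.97) can be reproduced by quadrature:

```python
import math

def hermite(n, x):
    # H_n(x) via the standard recurrence H_{k+1} = 2x H_k - 2k H_{k-1}
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

# Polynomials as listed in (5.89)-(5.93)
listed = [lambda x: 1.0,
          lambda x: 2.0 * x,
          lambda x: -2.0 + 4.0 * x**2,
          lambda x: -12.0 * x + 8.0 * x**3,
          lambda x: 12.0 - 48.0 * x**2 + 16.0 * x**4]
err = max(abs(hermite(n, x) - listed[n](x))
          for n in range(5) for x in (-1.5, 0.3, 2.0))

# Normalization (5.97) for n = 2: integral of exp(-x^2) H_2^2 = 2^2 2! sqrt(pi)
N, a, b = 16000, -8.0, 8.0
h = (b - a) / N
I22 = sum(math.exp(-x * x) * hermite(2, x) ** 2 * h
          for x in (a + (k + 0.5) * h for k in range(N)))
```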
5.1.5 Laguerre equation

see Kaplan, p. 541

The Laguerre^6 equation is

    x d^2 y/dx^2 + (1 - x) dy/dx + λ y = 0     (5.99)

We find that

    p(x) = x e^{-x}     (5.100)
    r(x) = e^{-x}       (5.101)
    q(x) = 0            (5.102)

Thus we require 0 < x < ∞. In Sturm-Liouville form it becomes

    d/dx [x e^{-x} dy/dx] + λ e^{-x} y = 0     (5.103)

    e^x d/dx [x e^{-x} d/dx] y(x) = -λ y(x)    (5.104)

So

    L_s = e^x d/dx [x e^{-x} d/dx]     (5.105)
The first five eigenfunctions of the Laguerre equation are plotted in Figure 5.4. They
can be expressed in terms of polynomials known as the Laguerre polynomials, L_n(x). These
polynomials can be obtained by a regular series expansion of the original differential equation.
Eigenvalues and eigenfunctions are listed below:

    λ = 0: L_0(x) = 1                                        (5.106)
    λ = 1: L_1(x) = 1 - x                                    (5.107)
    λ = 2: L_2(x) = 1 - 2x + (1/2) x^2                       (5.108)
    λ = 3: L_3(x) = 1 - 3x + (3/2) x^2 - (1/6) x^3           (5.109)
    λ = 4: L_4(x) = 1 - 4x + 3x^2 - (2/3) x^3 + (1/24) x^4   (5.110)
    ...                                                      (5.111)
    λ = n: L_n(x) = (1/n!) e^x d^n/dx^n (x^n e^{-x})   Rodrigues' formula   (5.112)
The orthogonality condition is

    ∫_0^∞ e^{-x} L_i(x) L_j(x) dx = 0,   i ≠ j     (5.113)

    ∫_0^∞ e^{-x} L_n(x) L_n(x) dx = 1              (5.114)
^6 Edmond Nicolas Laguerre, 1834-1886, French mathematician.
Figure 5.4: Laguerre polynomials L_0(x), L_1(x), L_2(x), L_3(x), L_4(x)
Direct substitution shows that L_n(x) satisfies both the differential equation and the orthog-
onality condition. It is then easily shown that the following functions are orthonormal on
the interval [0, ∞):

    φ_n(x) = e^{-x/2} L_n(x)     (5.115)
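A numerical sketch (not part of the original notes) confirming that the φ_n(x) = e^{-x/2} L_n(x) are orthonormal, with L_n built from the standard recurrence (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}:

```python
import math

def laguerre(n, x):
    # L_n(x) via the recurrence (k+1) L_{k+1} = (2k + 1 - x) L_k - k L_{k-1}
    l0, l1 = 1.0, 1.0 - x
    if n == 0:
        return l0
    for k in range(1, n):
        l0, l1 = l1, ((2.0 * k + 1.0 - x) * l1 - k * l0) / (k + 1.0)
    return l1

def inner(m, n, xmax=60.0, N=60000):
    # Midpoint-rule approximation of (5.113)-(5.114); the exp(-x) weight
    # makes truncating the integral at xmax = 60 harmless.
    h = xmax / N
    return sum(math.exp(-x) * laguerre(m, x) * laguerre(n, x) * h
               for x in ((k + 0.5) * h for k in range(N)))

off_diag = inner(2, 3)   # should be near 0, per (5.113)
diag = inner(3, 3)       # should be near 1, per (5.114)
```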
5.1.6 Bessel equation

see Kaplan, p. 537

5.1.6.1 first and second kind

Bessel's^7 differential equation is as follows, with it being convenient to define λ = -ν^2.

    x^2 d^2 y/dx^2 + x dy/dx + (µ^2 x^2 - ν^2) y = 0     (5.116)

We find that

    p(x) = x         (5.117)
    r(x) = 1/x       (5.118)
    q(x) = µ^2 x     (5.119)

^7 Friedrich Wilhelm Bessel, 1784-1846, Westphalia-born German mathematician.
We thus require 0 < x < ∞, though in practice it is more common to employ a finite domain
such as 0 < x < ℓ. In Sturm-Liouville form, we have

    d/dx [x dy/dx] + (µ^2 x - ν^2/x) y = 0     (5.120)

    [x d/dx (x d/dx) + µ^2 x^2] y(x) = ν^2 y(x)     (5.121)

The Sturm-Liouville operator is

    L_s = x d/dx (x d/dx) + µ^2 x^2     (5.122)
In some other cases it is more convenient to take λ = µ^2, in which case we get

    p(x) = x          (5.123)
    r(x) = x          (5.124)
    q(x) = -ν^2/x     (5.125)

and the Sturm-Liouville form and operator are:

    (1/x) [d/dx (x d/dx) - ν^2/x] y(x) = -µ^2 y(x)     (5.126)

    L_s = (1/x) [d/dx (x d/dx) - ν^2/x]                (5.127)
The general solution is

    y(x) = C_1 J_ν(µx) + C_2 Y_ν(µx)       if ν is an integer       (5.128)
    y(x) = C_1 J_ν(µx) + C_2 J_{-ν}(µx)    if ν is not an integer   (5.129)

where J_ν(µx) and Y_ν(µx) are called the Bessel and Neumann functions of order ν. Often
J_ν(µx) is known as a Bessel function of the first kind and Y_ν(µx) is known as a Bessel
function of the second kind. Both J_ν and Y_ν are represented by infinite series rather than
finite series such as the series for Legendre polynomials.

The Bessel function of the first kind of order ν, J_ν(µx), is represented by

    J_ν(µx) = ((1/2) µx)^ν Σ_{k=0}^{∞} (-(1/4) µ^2 x^2)^k / (k! Γ(ν + k + 1))     (5.130)

The Neumann function Y_ν(µx) has a complicated series representation (see Hildebrand).
The representations for J_0(µx) and Y_0(µx) are

    J_0(µx) = 1 - ((1/4) µ^2 x^2)/(1!)^2 + ((1/4) µ^2 x^2)^2/(2!)^2 + ... + (-(1/4) µ^2 x^2)^n/(n!)^2 + ...

    Y_0(µx) = (2/π) [ln((1/2) µx) + γ] J_0(µx)
              + (2/π) [((1/4) µ^2 x^2)/(1!)^2 - (1 + 1/2) ((1/4) µ^2 x^2)^2/(2!)^2 + ...]

where γ is Euler's constant.
Figure 5.5: Bessel functions J_0(µ_0 x), J_0(µ_1 x), J_0(µ_2 x), J_0(µ_3 x)
Figure 5.6: Bessel functions J_0(x), J_1(x), J_2(x), J_3(x), J_4(x) and Neumann functions Y_0(x),
Y_1(x), Y_2(x), Y_3(x), Y_4(x)
It can be shown using term by term differentiation that

    dJ_ν(µx)/dx = µ (J_{ν-1}(µx) - J_{ν+1}(µx))/2        dY_ν(µx)/dx = µ (Y_{ν-1}(µx) - Y_{ν+1}(µx))/2     (5.131)

    d/dx [x^ν J_ν(µx)] = µ x^ν J_{ν-1}(µx)               d/dx [x^ν Y_ν(µx)] = µ x^ν Y_{ν-1}(µx)            (5.132)
The Bessel functions J_0(µ_0 x), J_0(µ_1 x), J_0(µ_2 x), J_0(µ_3 x) are plotted in Figure 5.5. Here
the eigenvalues µ_i can be determined by trial and error. The first four are found to
be µ_0 = 2.40483, µ_1 = 5.52008, µ_2 = 8.65373, and µ_3 = 11.7915. The Bessel functions
J_0(x), J_1(x), J_2(x), J_3(x), and J_4(x), along with the Neumann functions Y_0(x), Y_1(x), Y_2(x), Y_3(x),
and Y_4(x), are plotted in Figure 5.6 (so here µ = 1).
The orthogonality condition for a domain x ∈ [0, 1], taken here for the case in which the
eigenvalue is µ_i, can be shown to be

    ∫_0^1 x J_ν(µ_i x) J_ν(µ_j x) dx = 0,   i ≠ j                        (5.133)

    ∫_0^1 x J_ν(µ_n x) J_ν(µ_n x) dx = (1/2) (J_{ν+1}(µ_n))^2,   i = j   (5.134)

Here we must choose µ_i such that J_ν(µ_i) = 0, which corresponds to a vanishing of the
function at the outer limit x = 1. See Hildebrand, p. 226.
So the orthonormal Bessel function is

    φ_n(x) = √(2x) J_ν(µ_n x) / |J_{ν+1}(µ_n)|     (5.135)
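The eigenvalues µ_n and the orthogonality condition (5.133) can be checked numerically. The sketch below (a supplement, not from the notes) builds J_0 from the series (5.130), finds the first two zeros by bisection in place of trial and error, and evaluates the weighted integral by midpoint rule:

```python
import math

def j0(x, terms=40):
    # J_0(x) from the power series (5.130) with nu = 0
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= -(x * x / 4.0) / ((k + 1) ** 2)
    return s

def bisect_zero(f, a, b, tol=1e-12):
    # plain bisection; assumes f(a) and f(b) bracket a root
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

mu0 = bisect_zero(j0, 2.0, 3.0)   # expect about 2.40483
mu1 = bisect_zero(j0, 5.0, 6.0)   # expect about 5.52008

# Orthogonality (5.133): integral of x J_0(mu0 x) J_0(mu1 x) over [0, 1]
N = 20000
h = 1.0 / N
I01 = sum(x * j0(mu0 * x) * j0(mu1 * x) * h
          for x in ((k + 0.5) * h for k in range(N)))
```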
5.1.6.2 third kind

Hankel functions, also known as Bessel functions of the third kind, are defined by

    H_ν^{(1)}(x) = J_ν(x) + i Y_ν(x)     (5.136)
    H_ν^{(2)}(x) = J_ν(x) - i Y_ν(x)     (5.137)
5.1.6.3 modified Bessel functions

The modified Bessel equation is

    x^2 d^2 y/dx^2 + x dy/dx - (x^2 + ν^2) y = 0     (5.138)

the solutions of which are the modified Bessel functions. The modified Bessel function of
the first kind of order ν is

    I_ν(x) = i^{-ν} J_ν(ix)     (5.139)
The modified Bessel function of the second kind of order ν is

    K_ν(x) = (π/2) i^{ν+1} H_ν^{(1)}(ix)     (5.140)
5.1.6.4 ber and bei functions

The real and imaginary parts of the solutions of

    x^2 d^2 y/dx^2 + x dy/dx - (p^2 + i x^2) y = 0     (5.141)

where p is a real constant, are called the ber and bei functions.
5.2 Representation of arbitrary functions

It is often useful, especially when solving partial differential equations, to be able to represent
an arbitrary function f(x) in the domain [x_0, x_1] with a sum of orthonormal functions φ_n(x):

    f(x) = Σ_{n=0}^{N} A_n φ_n(x)     (5.142)
The problem is to determine what the coefficients A_n must be. They can be found in the
following manner. We first assume the expansion exists and multiply both sides by φ_k(x):

    f(x) φ_k(x) = Σ_{n=0}^{N} A_n φ_n(x) φ_k(x)     (5.143)

    ∫_{x_0}^{x_1} f(x) φ_k(x) dx = ∫_{x_0}^{x_1} Σ_{n=0}^{N} A_n φ_n(x) φ_k(x) dx     (5.144)
                                 = Σ_{n=0}^{N} A_n ∫_{x_0}^{x_1} φ_n(x) φ_k(x) dx     (5.145)
                                 = A_k     (5.146)

So, trading k and n,

    A_n = ∫_{x_0}^{x_1} f(x) φ_n(x) dx     (5.147)

The series is known as a Fourier^8 series. Depending on the expansion functions, the series
is often specialized as Fourier-sine, Fourier-cosine, Fourier-Legendre, Fourier-Bessel, etc.
Example 5.2

Represent

    f(x) = x^2   on   0 < x < 3     (5.148)

with a series of

• trigonometric functions
• Legendre polynomials
• Chebyshev polynomials
• Bessel functions

Trigonometric Series

For the trigonometric series let's try a Fourier sine series. The orthonormal functions in this case
are

    √(2/3) sin(nπx/3)     (5.149)

The coefficients are thus

    A_n = √(2/3) ∫_0^3 x^2 sin(nπx/3) dx     (5.150)

so

    A_0 = 0            (5.151)
    A_1 = 4.17328      (5.152)
    A_2 = -3.50864     (5.153)
    A_3 = 2.23376      (5.154)
    A_4 = -1.75432     (5.155)
    A_5 = 1.3807       (5.156)
^8 Jean Baptiste Joseph Fourier, 1768-1830, French mathematician.
Figure 5.7: Five term Fourier-sine series approximation to f(x) = x^2
    f(x) = √(2/3) [4.17328 sin(πx/3) - 3.50864 sin(2πx/3) + 2.23376 sin(3πx/3)
                   - 1.75432 sin(4πx/3) + 1.3807 sin(5πx/3) + ...]

The function f(x) = x^2 and the five term series are plotted in Figure 5.7.
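The coefficients (5.151)-(5.156) can be reproduced by direct quadrature of (5.150). The following sketch (not part of the original notes) uses a midpoint rule:

```python
import math

def fourier_sine_coeff(n, N=20000):
    # A_n = sqrt(2/3) * integral_0^3 of x^2 sin(n pi x / 3) dx (midpoint rule)
    h = 3.0 / N
    total = 0.0
    for k in range(N):
        x = (k + 0.5) * h
        total += x * x * math.sin(n * math.pi * x / 3.0) * h
    return math.sqrt(2.0 / 3.0) * total

A = [fourier_sine_coeff(n) for n in range(1, 6)]
# expect values close to those listed in (5.152)-(5.156)
```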
Legendre polynomials

Next let's try the Legendre polynomials. The Legendre polynomials are orthogonal on [-1, 1], so
let's define

    x̃ = (2/3) x - 1       (5.157)
    x = (3/2) (x̃ + 1)     (5.158)

so that the domain x ∈ [0, 3] maps into x̃ ∈ [-1, 1]. So expanding x^2 on [0, 3] is equivalent to expanding

    (3/2)^2 (x̃ + 1)^2 = (9/4) (x̃ + 1)^2,   x̃ ∈ [-1, 1]     (5.159)

Now

    φ_n(x̃) = √(n + 1/2) P_n(x̃)     (5.160)

So

    A_n = (9/4) √(n + 1/2) ∫_{-1}^{1} (x̃ + 1)^2 P_n(x̃) dx̃     (5.161)

Evaluating we get

    A_0 = 4.24264       (5.162)
    A_1 = 3.67423       (5.163)
    A_2 = 0.948683      (5.164)
    A_3 = 0             (5.165)
    ...                 (5.166)
    A_n = 0,  n > 3     (5.167)
Carrying out the multiplication and returning to x space gives the finite series

    f(x) = x^2 = 3 P_0((2/3)x - 1) + (9/2) P_1((2/3)x - 1) + (3/2) P_2((2/3)x - 1)     (5.168)

Direct substitution shows that the representation is exact over the entire domain.
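Since x^2 maps to a quadratic in x̃, the three-term Legendre series (5.168) should be exact. A quick check (supplementary, not from the notes):

```python
def legendre_series(x):
    # Right side of (5.168), with P_0 = 1, P_1 = xt, P_2 = (3 xt^2 - 1)/2
    xt = 2.0 * x / 3.0 - 1.0
    p0 = 1.0
    p1 = xt
    p2 = 0.5 * (3.0 * xt * xt - 1.0)
    return 3.0 * p0 + 4.5 * p1 + 1.5 * p2

errs = [abs(legendre_series(x) - x * x) for x in (0.0, 0.5, 1.0, 2.0, 3.0)]
```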
Chebyshev polynomials

Let's now try the Chebyshev polynomials. These are orthogonal on the same domain as the Leg-
endre polynomials, so let's use the same transformation as before.

Now

    φ_n(x̃) = √(1/(π √(1 - x̃^2))) T_n(x̃),   n = 0     (5.169)
    φ_n(x̃) = √(2/(π √(1 - x̃^2))) T_n(x̃),   n > 0     (5.170)

So

    A_0 = (9/4) ∫_{-1}^{1} (x̃ + 1)^2 √(1/(π √(1 - x̃^2))) T_0(x̃) dx̃     (5.171)
    A_n = (9/4) ∫_{-1}^{1} (x̃ + 1)^2 √(2/(π √(1 - x̃^2))) T_n(x̃) dx̃     (5.172)

Evaluating we get

    A_0 = 4.2587       (5.173)
    A_1 = 3.4415       (5.174)
    A_2 = -0.28679     (5.175)
    A_3 = -1.1472      (5.176)
    ...                (5.177)
So

    f(x) = x^2 = √(2/(π √(1 - ((2/3)x - 1)^2))) [ (4.2587/√2) T_0((2/3)x - 1) + 3.4415 T_1((2/3)x - 1)     (5.178)
                 - 0.28679 T_2((2/3)x - 1) - 1.1472 T_3((2/3)x - 1) + ...]     (5.179)

The function f(x) = x^2 and the four term series are plotted in Figure 5.8.
Bessel functions

Now let's expand in terms of Bessel functions. The Bessel functions have been defined such that
they are orthogonal on a domain between zero and one when the eigenvalues are the zeros of the Bessel
function. To achieve this we adopt the transformation (and inverse):

    x̃ = x/3,     x = 3x̃.

With this transformation our domain transforms as follows:

    x : [0, 3]  →  x̃ : [0, 1]
Figure 5.8: Four term Fourier-Chebyshev series approximation to f(x) = x^2
So in the transformed space, we seek an expansion

    9x̃^2 = Σ_{n=0}^{∞} A_n J_ν(µ_n x̃)

Let's choose to expand on J_0, so we take

    9x̃^2 = Σ_{n=0}^{∞} A_n J_0(µ_n x̃)

Now, the eigenvalues µ_n are such that J_0(µ_n) = 0. We find using trial and error methods that solutions
for all the zeros can be found:

    µ_0 = 2.40483     (5.180)
    µ_1 = 5.52008     (5.181)
    µ_2 = 8.65373     (5.182)
    ...               (5.183)

Similar to the other functions, we could expand in terms of the orthonormalized Bessel functions, φ_n(x).
Instead, for variety, let's directly operate on the above expression to determine the values for A_n.
    9x̃^2 x̃ J_0(µ_k x̃) = Σ_{n=0}^{∞} A_n x̃ J_0(µ_n x̃) J_0(µ_k x̃)     (5.184)

    ∫_0^1 9x̃^3 J_0(µ_k x̃) dx̃ = ∫_0^1 Σ_{n=0}^{∞} A_n x̃ J_0(µ_n x̃) J_0(µ_k x̃) dx̃     (5.185)

    9 ∫_0^1 x̃^3 J_0(µ_k x̃) dx̃ = Σ_{n=0}^{∞} A_n ∫_0^1 x̃ J_0(µ_n x̃) J_0(µ_k x̃) dx̃     (5.186)
                                = A_k ∫_0^1 x̃ J_0(µ_k x̃) J_0(µ_k x̃) dx̃     (5.187)

So replacing k by n and dividing we get

    A_n = 9 ∫_0^1 x̃^3 J_0(µ_n x̃) dx̃ / ∫_0^1 x̃ J_0(µ_n x̃) J_0(µ_n x̃) dx̃     (5.188)
Figure 5.9: Ten term Fourier-Bessel series approximation to f(x) = x^2
Evaluating the first three terms we get

    A_0 = 4.44557     (5.189)
    A_1 = -8.3252     (5.190)
    A_2 = 7.2533      (5.191)
    ...               (5.192)

The function f(x) = x^2 and ten terms of the Fourier-Bessel series approximation are plotted in
Figure 5.9. The Fourier-Bessel approximation is

    f(x) = x^2 = 4.44557 J_0(2.40483 x/3)        (5.193)
                 - 8.3252 J_0(5.52008 x/3)       (5.194)
                 + 7.2533 J_0(8.65373 x/3) + ... (5.195)

Note that other Fourier-Bessel expansions exist. Also note that even though the Bessel function does
not match the function itself at either boundary point, the series still appears to be converging.
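The first coefficient in (5.189) can be reproduced from (5.188) by quadrature. The sketch below (supplementary, not from the notes) uses the series for J_0 and the listed zero µ_0 = 2.40483:

```python
import math

def j0(x, terms=40):
    # J_0(x) from its power series
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= -(x * x / 4.0) / ((k + 1) ** 2)
    return s

mu0 = 2.40483     # first zero of J_0, from (5.180)
N = 20000
h = 1.0 / N
num = den = 0.0
for k in range(N):
    xt = (k + 0.5) * h
    num += 9.0 * xt**3 * j0(mu0 * xt) * h     # numerator of (5.188)
    den += xt * j0(mu0 * xt) ** 2 * h         # denominator of (5.188)
A0 = num / den     # expect about 4.44557, matching (5.189)
```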
Problems

1. Show that oscillatory solutions of the delay equation

    dx/dt (t) + x(t) + b x(t - 1) = 0

are possible only when b = 2.2617. Find the frequency.

2. Show that x^a J_ν(b x^c) is a solution of

    y'' - ((2a - 1)/x) y' + (b^2 c^2 x^{2c-2} + (a^2 - ν^2 c^2)/x^2) y = 0

Hence solve in terms of Bessel functions:

(a) d^2 y/dx^2 + k^2 x y = 0

(b) d^2 y/dx^2 + x^4 y = 0

3. Laguerre's differential equation is

    x y'' + (1 - x) y' + λ y = 0

Show that when λ = n, a nonnegative integer, there is a polynomial solution L_n(x) (called a Laguerre
polynomial) of degree n with coefficient of x^n equal to 1. Determine L_0 through L_4.

4. Consider the function y(x) = x^2 - 2 defined for x ∈ [0, 3]. Find eight term expansions in terms of a)
Fourier-sine, b) Fourier-Legendre, c) Fourier-Hermite, d) Fourier-Bessel series and plot your results
on a single graph.

5. Consider the function y(x) = 0, x ∈ [0, 1); y(x) = x - 1, x ∈ [1, 2]. Find an eight term Fourier-
Chebyshev expansion of this function. Plot the function and the eight term expansion for x ∈ [0, 2].

6. Consider the function y(x) = 2x, x ∈ [0, 2]. Find an eight term a) Fourier-Chebyshev and b) Fourier-
sine expansion of this function. Plot the function and the eight term expansions for x ∈ [0, 2]. Which
expansion minimizes the error in representation of the function?
Chapter 6

Vectors and tensors

see Kaplan, Chapters 3, 4, 5,
see Lopez, Chapters 17-23,
see Riley, Hobson, and Bence, Chapters 6, 8, 19.
6.1 Cartesian index notation

Here we will consider what is known as Cartesian index notation as a way to represent vectors
and tensors. In contrast to Chapter 1, which considered general coordinate transformations,
when we restrict our transformations to rotations about the origin, many simplifications
result. It can be shown that for such transformations, the distinction between contravariance
and covariance disappears, as does the necessity for Christoffel symbols and the need
for an "upstairs-downstairs" index notation.

Many vector relations can be written in a compact form by using Cartesian index nota-
tion. Let x_1, x_2, x_3 represent the three coordinate directions and e_1, e_2, e_3 the unit vectors
in those directions. Then a vector u may be written as

    u = u_1 e_1 + u_2 e_2 + u_3 e_3 = Σ_{i=1}^{3} u_i e_i = u_i e_i = u_i     (6.1)
where u_1, u_2, and u_3 are the three Cartesian components of u. Note that we do not need to
use the summation sign every time if we use the Einstein^1 convention to sum from 1 to 3
whenever an index is repeated. The single free index on the right side indicates that an e_i is assumed.

Two additional symbols are needed for later use. They are the Kronecker delta

    δ_ij = 0 if i ≠ j
         = 1 if i = j     (6.2)

and the substitution symbol (or Levi-Civita^2 density)

    ε_ijk =  1 if the indices are in cyclical order 1,2,3,1,2,...
          = -1 if the indices are not in cyclical order
          =  0 if two or more indices are the same     (6.3)

^1 Albert Einstein, 1879-1955, German/American physicist and mathematician.

^2 Tullio Levi-Civita, 1883-1941, Italian mathematician.
The identity

    ε_ijk ε_lmn = δ_il δ_jm δ_kn + δ_im δ_jn δ_kl + δ_in δ_jl δ_km
                  - δ_il δ_jn δ_km - δ_im δ_jl δ_kn - δ_in δ_jm δ_kl     (6.4)

relates the two. The following identities are also easily shown:

    δ_ii = 3                                  (6.5)
    δ_ij = δ_ji                               (6.6)
    δ_ij δ_jk = δ_ik                          (6.7)
    ε_ijk ε_ilm = δ_jl δ_km - δ_jm δ_kl       (6.8)
    ε_ijk ε_ljk = 2 δ_il                      (6.9)
    ε_ijk ε_ijk = 6                           (6.10)
    ε_ijk = -ε_ikj                            (6.11)
    ε_ijk = -ε_jik                            (6.12)
    ε_ijk = -ε_kji                            (6.13)
    ε_ijk = ε_kij = ε_jki                     (6.14)
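These identities are finite statements over indices 1-3, so they can be verified by exhaustive enumeration. A small sketch (not part of the original notes):

```python
def eps(i, j, k):
    # Levi-Civita symbol for indices in {1, 2, 3}; the product
    # (j - i)(k - j)(k - i) is +-2 for permutations and 0 otherwise
    return (j - i) * (k - j) * (k - i) // 2

def delta(i, j):
    return 1 if i == j else 0

R = range(1, 4)
# identity (6.8): eps_ijk eps_ilm = delta_jl delta_km - delta_jm delta_kl
ok_68 = all(sum(eps(i, j, k) * eps(i, l, m) for i in R)
            == delta(j, l) * delta(k, m) - delta(j, m) * delta(k, l)
            for j in R for k in R for l in R for m in R)
# contraction (6.10): eps_ijk eps_ijk = 6
total = sum(eps(i, j, k) ** 2 for i in R for j in R for k in R)
```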
Regarding index notation:

• a repeated index indicates summation on that index
• a non-repeated index is known as a free index
• the number of free indices gives the order of the tensor
  – u, uv, u_i v_i w, u_ii, u_ij v_ij: zeroth order tensor (scalar)
  – u_i, u_i v_ij: first order tensor (vector)
  – u_ij, u_ij v_jk, u_i v_j: second order tensor
  – u_ijk, u_i v_j w_k, u_ij v_km w_m: third order tensor
  – u_ijkl, u_ij v_kl: fourth order tensor
• indices cannot be repeated more than once
  – u_iik, u_ij, u_iijj, v_i u_jk are proper.
  – u_i v_i w_i, u_iiij, u_ij v_ii are improper!
• Cartesian components commute: u_ij v_i w_klm = v_i w_klm u_ij
• Cartesian indices do not commute: u_ijkl ≠ u_jlik
Figure 6.1: Rotation of axes in a two-dimensional Cartesian system
6.2 Cartesian tensors

6.2.1 Direction cosines

Consider the transformation of the (x_1, x_2) Cartesian coordinate system by rotation of each
coordinate axis by angle α to the rotated Cartesian coordinate system (x'_1, x'_2) as sketched in
Figure 6.1.

We define the angle between the x_1 and x'_1 axes as α:

    α ≡ [x_1, x'_1]

With β = π/2 - α, the angle between the x_2 and x'_1 axes is

    β ≡ [x_2, x'_1]

The point P can be represented in both coordinate systems. In the unrotated system, P
is represented by the coordinates:

    P : (x*_1, x*_2)

In the rotated coordinate system P is represented by

    P : (x*'_1, x*'_2)

Trigonometry shows us that

    x*'_1 = x*_1 cos α + x*_2 cos β     (6.15)
    x*'_1 = x*_1 cos[x_1, x'_1] + x*_2 cos[x_2, x'_1]     (6.16)

Dropping the stars, and extending to three dimensions, we find that

    x'_1 = x_1 cos[x_1, x'_1] + x_2 cos[x_2, x'_1] + x_3 cos[x_3, x'_1]     (6.17)
Extending to expressions for x'_2 and x'_3 and writing in matrix form, we get

    (x'_1 x'_2 x'_3) = (x_1 x_2 x_3) ( cos[x_1, x'_1]  cos[x_1, x'_2]  cos[x_1, x'_3] )
                                     ( cos[x_2, x'_1]  cos[x_2, x'_2]  cos[x_2, x'_3] )     (6.18)
                                     ( cos[x_3, x'_1]  cos[x_3, x'_2]  cos[x_3, x'_3] )

Using the notation

    ℓ_ij = cos[x_i, x'_j],

we have

    (x'_1 x'_2 x'_3) = (x_1 x_2 x_3) ( ℓ_11  ℓ_12  ℓ_13 )
                                     ( ℓ_21  ℓ_22  ℓ_23 )     (6.19)
                                     ( ℓ_31  ℓ_32  ℓ_33 )

Expanding the first term we find

    x'_1 = x_1 ℓ_11 + x_2 ℓ_21 + x_3 ℓ_31

More generally we have

    x'_j = x_1 ℓ_1j + x_2 ℓ_2j + x_3 ℓ_3j,     (6.20)
    x'_j = x_i ℓ_ij.     (6.21)

What amounts to the law of cosines,

    ℓ_ij ℓ_kj = δ_ik     law of cosines,     (6.22)

can easily be proven by direct substitution. Using this, we can easily find the inverse trans-
formation back to the unprimed coordinates via the following operations:

    ℓ_kj x'_j = ℓ_kj x_i ℓ_ij,     (6.23)
              = ℓ_ij ℓ_kj x_i,     (6.24)
              = δ_ik x_i,          (6.25)
              = x_k,               (6.26)

    ℓ_ij x'_j = x_i,     (6.27)
    x_i = ℓ_ij x'_j.     (6.28)

Note that the Jacobian matrix of the transformation is J = ∂x_i/∂x'_j = ℓ_ij. Note it can be shown
that the metric tensor is G = J^T J = ℓ_ji ℓ_ki = δ_jk, so g = 1, and the transformation is
volume preserving.
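A two-dimensional numerical sketch of these relations (supplementary, not from the notes): for a plane rotation by α, the direction cosine matrix reduces to ℓ = ((cos α, -sin α), (sin α, cos α)), and the law of cosines (6.22) together with the inverse map (6.28) can be checked directly:

```python
import math

alpha = 0.3
# l_ij = cos[x_i, x'_j] for a plane rotation by alpha
L = [[math.cos(alpha), -math.sin(alpha)],
     [math.sin(alpha),  math.cos(alpha)]]

x = [1.0, 2.0]
# forward transformation (6.21): x'_j = x_i l_ij
xp = [sum(x[i] * L[i][j] for i in range(2)) for j in range(2)]
# inverse (6.28): x_i = l_ij x'_j
xb = [sum(L[i][j] * xp[j] for j in range(2)) for i in range(2)]
# law of cosines (6.22): l_ij l_kj = delta_ik
G = [[sum(L[i][j] * L[k][j] for j in range(2)) for k in range(2)] for i in range(2)]
```

Note that the rotation also preserves length, consistent with g = 1 and volume preservation.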
6.2.1.1 Scalars
A term φ is a scalar if it is invariant under a rotation of coordinate axes.
Figure 6.2: Tensor visualization
6.2.1.2 Vectors

A set of three scalars (v_1, v_2, v_3)^T is defined as a vector if, under a rotation of coordinate axes,
the triple also transforms according to

    v'_j = v_i ℓ_ij.

A vector associates a scalar with a chosen direction in space by an expression which is
linear in the direction cosines of the chosen direction.

6.2.1.3 Tensors

A set of nine scalars is defined as a second order tensor if, under a rotation of coordinate
axes, they transform as

    T'_ij = ℓ_ki ℓ_lj T_kl

A tensor associates a vector with each direction in space by an expression that is linear in
the direction cosines of the chosen transformation. It will be seen that

• the first subscript gives the associated direction (or face; hence first-face)
• the second subscript gives the vector components for that face

Graphically, one can use the sketch of Figure 6.2 to visualize a second order tensor.
6.2.2 Matrix representation

Tensors can be represented as matrices (but not all matrices are tensors!):

    T_ij = ( T_11  T_12  T_13 )   (vector associated with the 1 direction)
           ( T_21  T_22  T_23 )   (vector associated with the 2 direction)
           ( T_31  T_32  T_33 )   (vector associated with the 3 direction)

A simple way to choose a vector q_j associated with a plane of arbitrary orientation is to
form the inner product of the tensor T_ij and the unit normal associated with the plane, n_i:

    q_j = n_i T_ij        q = n · T     (6.29)

Here n_i has components which are the direction cosines of the chosen direction. For example,
to determine the vector associated with face 2, we choose

    n_i = (0, 1, 0)

Thus

    n · T = (0, 1, 0) ( T_11  T_12  T_13 )
                      ( T_21  T_22  T_23 )  = (T_21, T_22, T_23)     (6.30)
                      ( T_31  T_32  T_33 )

    n_i T_ij = n_1 T_1j + n_2 T_2j + n_3 T_3j     (6.31)
             = (0) T_1j + (1) T_2j + (0) T_3j     (6.32)
             = (T_21, T_22, T_23)                 (6.33)
6.2.3 Transpose of a tensor, symmetric and antisymmetric tensors

The transpose T^T_ij of a tensor T_ij is found by trading elements across the diagonal

    T^T_ij = T_ji

so

    T^T_ij = ( T_11  T_21  T_31 )
             ( T_12  T_22  T_32 )
             ( T_13  T_23  T_33 )

A tensor is symmetric if it is equal to its transpose, i.e.

    T_ij = T_ji   if symmetric

A tensor is antisymmetric if it is equal to the additive inverse of its transpose, i.e.

    T_ij = -T_ji   if antisymmetric

The tensor inner product of a symmetric tensor S_ij and antisymmetric tensor A_ij can be
shown to be 0:

    S_ij A_ij = 0
Example 6.1

Show this for a two-dimensional space. Take a general symmetric tensor to be

    S_ij = ( a  b )
           ( b  c )

Take a general antisymmetric tensor to be

    A_ij = ( 0   d )
           ( -d  0 )

So

    S_ij A_ij = S_11 A_11 + S_12 A_12 + S_21 A_21 + S_22 A_22     (6.34)
              = a(0) + bd - bd + c(0)     (6.35)
              = 0     (6.36)
An arbitrary tensor can be represented as the sum of a symmetric and antisymmetric
tensor:

    T_ij = (1/2) T_ij + (1/2) T_ij + (1/2) T_ji - (1/2) T_ji     (6.37)
         = (1/2) (T_ij + T_ji) + (1/2) (T_ij - T_ji)             (6.38)

so with     (6.39)

    T_(ij) ≡ (1/2) (T_ij + T_ji)     (6.40)
    T_[ij] ≡ (1/2) (T_ij - T_ji)     (6.41)

we have

    T_ij = T_(ij) + T_[ij]     (6.42)

The first term, T_(ij), is called the symmetric part of T_ij; the second term, T_[ij], is called the
antisymmetric part of T_ij.
6.2.4 Dual vector of a tensor

As the antisymmetric part of a three by three tensor has only three independent components,
we might expect a three-component vector can be associated with this. Let's define the dual
vector to be

    d_i = ε_ijk T_jk = ε_ijk T_(jk) + ε_ijk T_[jk]

For fixed i, ε_ijk is antisymmetric, so the first term is zero, so

    d_i = ε_ijk T_[jk]

Let's find the inverse.

    ε_ilm d_i = ε_ilm ε_ijk T_jk     (6.43)
              = (δ_lj δ_mk - δ_lk δ_mj) T_jk     (6.44)
              = T_lm - T_ml     (6.45)
              = 2 T_[lm]     (6.46)

    T_[lm] = (1/2) ε_ilm d_i     (6.47)
    T_[ij] = (1/2) ε_kij d_k     (6.48)

    T_ij = T_(ij) + (1/2) ε_kij d_k     (6.49)
6.2.5 Principal axes

Given a tensor T_ij, find the associated direction such that the vector components in this
associated direction are parallel to the direction. So we want

    n_i T_ij = λ n_j

This defines an eigenvalue problem. Linear algebra gives us the eigenvalues and associated
eigenvectors.

    n_i T_ij = λ n_i δ_ij
    n_i (T_ij - λ δ_ij) = 0

    (n_1, n_2, n_3) ( T_11 - λ   T_12       T_13     )
                    ( T_21       T_22 - λ   T_23     )  = (0, 0, 0)
                    ( T_31       T_32       T_33 - λ )

We get non-trivial solutions if

    | T_11 - λ   T_12       T_13     |
    | T_21       T_22 - λ   T_23     |  = 0
    | T_31       T_32       T_33 - λ |

We shall see in later chapters that we are actually finding the so-called left eigenvectors.
These arise with less frequency than the right eigenvectors, which are defined by T_ij u_j =
λ δ_ij u_j. If T_ij is real and symmetric, it can be shown that

• the eigenvalues are real,
• the left and right eigenvectors are identical, and
• the eigenvectors are real and orthogonal.

If the matrix is not symmetric, the eigenvalues and eigenvectors could be complex. It is often
most physically relevant to decompose a tensor into symmetric and antisymmetric parts and
find the orthogonal basis vectors and real eigenvalues associated with the symmetric part
and the dual vector associated with the antisymmetric part.

In continuum mechanics,

• the symmetric part of a tensor can be associated with deformation along principal axes
• the antisymmetric part of a tensor can be associated with rotation of an element
Example 6.2

Decompose the tensor given below into a combination of orthogonal basis vectors and a dual vector.

    T_ij = ( 1   1   -2 )
           ( 3   2   -3 )
           ( -4  1   1  )

First,

    T_(ij) = (1/2)(T_ij + T_ji) = ( 1   2   -3 )
                                  ( 2   2   -1 )
                                  ( -3  -1  1  )

    T_[ij] = (1/2)(T_ij - T_ji) = ( 0   -1  1  )
                                  ( 1   0   -2 )
                                  ( -1  2   0  )

First get the dual vector d_i:

    d_i = ε_ijk T_[jk]     (6.50)
    d_1 = ε_1jk T_[jk] = ε_123 T_[23] + ε_132 T_[32] = (1)(-2) + (-1)(2) = -4     (6.51)
    d_2 = ε_2jk T_[jk] = ε_213 T_[13] + ε_231 T_[31] = (-1)(1) + (1)(-1) = -2     (6.52)
    d_3 = ε_3jk T_[jk] = ε_312 T_[12] + ε_321 T_[21] = (1)(-1) + (-1)(1) = -2     (6.53)

    d_i = (-4, -2, -2)^T     (6.54)

Now find the eigenvalues and eigenvectors for the symmetric part.

    | 1 - λ   2       -3    |
    | 2       2 - λ   -1    |  = 0
    | -3      -1      1 - λ |

We get the characteristic polynomial,

    λ^3 - 4λ^2 - 9λ + 9 = 0

The eigenvalue and associated normalized eigenvector for each root is

    λ^(1) = 5.36488,    n_i^(1) = (-0.630537, -0.540358, 0.557168)^T     (6.55)
    λ^(2) = -2.14644,   n_i^(2) = (-0.740094, 0.202303, -0.641353)^T     (6.56)
    λ^(3) = 0.781562,   n_i^(3) = (-0.233844, 0.816754, 0.527476)^T      (6.57)

It is easily verified that the eigenvectors are mutually orthogonal. When the coordinates are transformed to be
aligned with the principal axes, the magnitude of the vector associated with each face is the eigenvalue;
this vector points in the same direction as the unit normal associated with the face.
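The arithmetic of this example can be verified mechanically (a supplementary sketch, not from the notes):

```python
T = [[1.0, 1.0, -2.0],
     [3.0, 2.0, -3.0],
     [-4.0, 1.0, 1.0]]

sym  = [[0.5 * (T[i][j] + T[j][i]) for j in range(3)] for i in range(3)]
anti = [[0.5 * (T[i][j] - T[j][i]) for j in range(3)] for i in range(3)]

def eps(i, j, k):
    # Levi-Civita symbol, here with 0-based indices 0..2
    return (j - i) * (k - j) * (k - i) // 2

# dual vector (6.50): d_i = eps_ijk T_[jk]
d = [sum(eps(i, j, k) * anti[j][k] for j in range(3) for k in range(3))
     for i in range(3)]

# the decomposition must reassemble T, and the quoted eigenvalue of the
# symmetric part must satisfy the characteristic polynomial
recomb_err = max(abs(sym[i][j] + anti[i][j] - T[i][j])
                 for i in range(3) for j in range(3))
lam = 5.36488
residual = lam**3 - 4.0 * lam**2 - 9.0 * lam + 9.0
```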
6.3 Algebra of vectors

Here we will primarily use bold letters for vectors, such as in u. At times we will use the
notation u_i to represent a vector.
6.3.1 Definition and properties

Null vector: a vector with zero components.

Multiplication by a scalar α:

    α u = α u_1 e_1 + α u_2 e_2 + α u_3 e_3 = α u_i

Sum of vectors:

    u + v = (u_1 + v_1) e_1 + (u_2 + v_2) e_2 + (u_3 + v_3) e_3 = (u_i + v_i)

Magnitude, length, or norm of a vector:

    ||u||_2 = √(u_1^2 + u_2^2 + u_3^2) = √(u_i u_i)

Triangle inequality: ||u + v||_2 ≤ ||u||_2 + ||v||_2.

Here the subscript 2 indicates we are considering a Euclidean space. In many sources
in the literature this subscript is omitted, and the norm is understood to be the Euclidean
norm. For a more general, non-Euclidean space, we can still retain the property of a norm
for a more general p norm for a three-dimensional vector:

    ||u||_p = (|u_1|^p + |u_2|^p + |u_3|^p)^{1/p},   1 ≤ p < ∞.

For example the 1 norm of a vector is the sum of the absolute values of its components:

    ||u||_1 = |u_1| + |u_2| + |u_3|.

The ∞ norm selects the largest component:

    ||u||_∞ = lim_{p→∞} (|u_1|^p + |u_2|^p + |u_3|^p)^{1/p} = max_{i=1,2,3} |u_i|.
6.3.2 Scalar product (dot product, inner product)

The scalar product of u and v is defined as

    u · v = u_1 v_1 + u_2 v_2 + u_3 v_3 = u_i v_i     (6.58)

The vectors u and v are said to be orthogonal if u · v = 0. Also

    u · u = (||u||_2)^2
6.3.3 Cross product

The cross product of u and v is defined as

    u × v = | e_1  e_2  e_3 |
            | u_1  u_2  u_3 |  = ε_ijk u_j v_k     (6.59)
            | v_1  v_2  v_3 |

Property: u × αu = 0. Let's use Cartesian index notation to prove this:

    u × αu = ε_ijk u_j α u_k
           = α ε_ijk u_j u_k
           = α (ε_i11 u_1 u_1 + ε_i12 u_1 u_2 + ε_i13 u_1 u_3
               + ε_i21 u_2 u_1 + ε_i22 u_2 u_2 + ε_i23 u_2 u_3
               + ε_i31 u_3 u_1 + ε_i32 u_3 u_2 + ε_i33 u_3 u_3)
           = 0

since ε_i11 = ε_i22 = ε_i33 = 0 and ε_i12 = -ε_i21, ε_i13 = -ε_i31, and ε_i23 = -ε_i32.
6.3.4 Scalar triple product

The scalar triple product of three vectors u, v, and w is defined by

    [u, v, w] = u · (v × w)          (6.60)
              = ε_ijk u_i v_j w_k    (6.61)

Physically it represents the volume of the parallelepiped with edges parallel to the three
vectors.
6.3.5 Identities

    [u, v, w] = -[u, w, v]                                 (6.62)
    u × (v × w) = (u · w) v - (u · v) w                    (6.63)
    (u × v) × (w × x) = [u, w, x] v - [v, w, x] u          (6.64)
    (u × v) · (w × x) = (u · w)(v · x) - (u · x)(v · w)    (6.65)
Example 6.3

Prove the second identity using Cartesian index notation.

    u × (v × w) = ε_ijk u_j (ε_klm v_l w_m)
                = ε_ijk ε_klm u_j v_l w_m
                = ε_kij ε_klm u_j v_l w_m
                = (δ_il δ_jm - δ_im δ_jl) u_j v_l w_m
                = u_j v_i w_j - u_j v_j w_i
                = u_j w_j v_i - u_j v_j w_i
                = (u · w) v - (u · v) w
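A numerical spot check of identity (6.63), and of the antisymmetry (6.62), on arbitrary vectors (supplementary, not from the notes):

```python
def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v, w = [1.0, -2.0, 3.0], [4.0, 0.0, -1.0], [2.0, 5.0, -3.0]
# (6.63): u x (v x w) = (u . w) v - (u . v) w
lhs = cross(u, cross(v, w))
rhs = [dot(u, w) * v[i] - dot(u, v) * w[i] for i in range(3)]
# (6.62): [u, v, w] = -[u, w, v]
t1 = dot(u, cross(v, w))
t2 = dot(u, cross(w, v))
```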
6.4 Calculus of vectors

6.4.1 Vector function of single scalar variable

If

    r(τ) = x_i(τ) e_i = x_i(τ)     (6.66)

then r(τ) describes a curve in three-dimensional space. Here τ is a general scalar parameter,
which may or may not have a simple physical interpretation. If we require that the basis
vectors be constants (this is not always the case!), the derivative is

    dr(τ)/dτ = r'(τ) = x'_i(τ) e_i = x'_i(τ)     (6.67)

Now r'(τ) is a vector that is tangent to the curve. A unit vector in this direction is

    t = r'(τ) / ||r'(τ)||_2     (6.68)

where

    ||r'(τ)||_2 = √(x'_i x'_i)     (6.69)

In the special case in which τ is time t, we denote the derivative by a dot (˙) notation
rather than a prime (') notation; ṙ is the velocity vector, ẋ_i its components, and ||ṙ||_2 the
magnitude. Note that the unit tangent vector t is not the scalar parameter for time, t. Also
we will occasionally use the scalar components of t: t_i, which again are not related to time
t.

If s(t) is the distance along the curve, then

    ds = ||dx_i||_2     (6.70)

    ds/dt = ||dx_i/dt||_2 = ||ṙ(t)||_2     (6.71)

so that

    t = (dr/dt)/(ds/dt) = dr/ds,     t_i = dr_i/ds     (6.72)

Also

    s = ∫_a^b ||ṙ(t)||_2 dt = ∫_a^b √((dx_i/dt)(dx_i/dt)) dt
      = ∫_a^b √((dx_1/dt)^2 + (dx_2/dt)^2 + (dx_3/dt)^2) dt     (6.73)

is the distance along the curve between t = a and t = b.
Identities:

    d/dt (φ u) = φ du/dt + (dφ/dt) u            d/dt (φ u_i) = φ du_i/dt + (dφ/dt) u_i

    d/dt (u · v) = u · dv/dt + du/dt · v        d/dt (u_i v_i) = u_i dv_i/dt + (du_i/dt) v_i

    d/dt (u × v) = u × dv/dt + du/dt × v        d/dt (ε_ijk u_j v_k) = ε_ijk u_j dv_k/dt + ε_ijk v_k du_j/dt
Figure 6.3: Sketch for determination of radius of curvature

Example 6.4

If

    r(t) = 2t^2 i + t^3 j
find the unit tangent at t = 1, and the length of the curve from t = 0 to t = 1.

The derivative is

    ṙ(t) = 4t i + 3t^2 j

At t = 1,

    ṙ(t) = 4i + 3j

so that the unit vector in this direction is

    t = (4/5) i + (3/5) j

The length of the curve from t = 0 to t = 1 is

    s = ∫_0^1 √(16t^2 + 9t^4) dt = (1/27) (16 + 9t^2)^{3/2} |_0^1 = 61/27
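The closed-form arc length 61/27 can be confirmed by quadrature (a supplementary sketch, not from the notes):

```python
import math

# arc length of r(t) = 2 t^2 i + t^3 j over [0, 1], midpoint rule
N = 20000
h = 1.0 / N
s = sum(math.sqrt(16.0 * t * t + 9.0 * t**4) * h
        for t in ((k + 0.5) * h for k in range(N)))

# unit tangent at t = 1: rdot = (4, 3), |rdot| = 5
tx, ty = 4.0 / 5.0, 3.0 / 5.0
```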
6.4.2 Differential geometry of curves

In Figure 6.3, r(t) describes a circle. Two unit tangents, t and t', are drawn at t and t + ∆t.
At t we have

    t = -sin θ i + cos θ j

At t + ∆t we have

    t' = -sin(θ + ∆θ) i + cos(θ + ∆θ) j

Expanding in a Taylor series about ∆θ = 0 we get

    t' = (-sin θ - ∆θ cos θ + O(∆θ)^2) i + (cos θ - ∆θ sin θ + O(∆θ)^2) j

so as ∆θ → 0

    t' - t = -∆θ cos θ i - ∆θ sin θ j
    ∆t = ∆θ (-cos θ i - sin θ j)

Note that ∆t · t = 0, so ∆t is normal to t. Furthermore,

    ||∆t||_2 = ∆θ

Now for ∆θ → 0,

    ∆s = ρ ∆θ

where ρ is the radius of curvature. So

    ||∆t||_2 = ∆s/ρ

Taking all limits to zero, we get

    ||dt/ds||_2 = 1/ρ     (6.74)

The term on the right side is often defined as the curvature, κ:

    κ = 1/ρ.
6.4.2.1 Curves on a plane

The plane curve y = f(x) in the x-y plane can be represented as

    r(t) = x(t) i + y(t) j     (6.75)

where x(t) = t and y(t) = f(t). Differentiating, we have

    ṙ(t) = ẋ(t) i + ẏ(t) j     (6.76)

The unit vector from equation (6.68) is

    t = (ẋ i + ẏ j)/(ẋ^2 + ẏ^2)^{1/2}     (6.77)
      = (i + y' j)/(1 + (y')^2)^{1/2}      (6.78)
where the primes are derivatives with respect to x. Since

    ds^2 = dx^2 + dy^2                          (6.79)
    ds = (dx^2 + dy^2)^{1/2}                    (6.80)
    ds/dx = (1/dx)(dx^2 + dy^2)^{1/2}           (6.81)
    ds/dx = (1 + (y')^2)^{1/2}                  (6.82)

we have

    dt/ds = (dt/dx)/(ds/dx)
          = { (1 + (y')^2)^{1/2} y'' j - (i + y' j)(1 + (y')^2)^{-1/2} y' y'' } / (1 + (y')^2) · 1/(1 + (y')^2)^{1/2}
          = ( y'' / (1 + (y')^2)^{3/2} ) · ( -y' i + j ) / (1 + (y')^2)^{1/2}

As the second factor of this expression is a unit vector, the preceding scalar is a magnitude.
Expanding our notion of curvature and radius of curvature, we define dt/ds such that

    ||dt/ds||_2 = κ = 1/ρ.

Thus

    κ = y'' / (1 + (y')^2)^{3/2}     (6.83)
    ρ = (1 + (y')^2)^{3/2} / y''     (6.84)

for curves on a plane.
6.4.2.2 Curves in 3dimensional space
A set of local, righthanded, orthogonal coordinates can be deﬁned at a point on a curve r(t).
The unit vectors at this point are the tangent t, the principal normal n, and the binormal
b, where
t =
dr
ds
(6.85)
n =
1
κ
dt
ds
(6.86)
b = t n (6.87)
We will ﬁrst show that t, n, and b form an orthogonal systems of unit vectors. We have
already seen that t is a unit vector tangent to the curve. Since t t = [[t[[
2
2
= 1, we have
t
dt
ds
=
1
2
d
ds
(t t)
= 0.
157
Thus t is orthogonal to
dt
ds
. Since n is parallel to
dt
ds
, it is orthogonal to t also. From equations
(6.74) and (6.86), we see that n is a unit vector. Furthermore b is a unit vector orthogonal
to both t and n.
Next we will derive some basic relations involving the unit vectors and the characteristics
of the curve.
db
ds
=
dt
ds
n +t
dn
ds
=
dt
ds
1
κ
dt
ds
+t
dn
ds
= t
dn
ds
so we see that db/ds is orthogonal to t. In addition, since ||b||₂ = 1,

    b · db/ds = (1/2) d/ds (b · b) = (1/2) d/ds (||b||₂²) = 0

So db/ds is orthogonal to b also. Thus db/ds must be in the n direction, so that we can write

    db/ds = τ n    (6.88)

where τ is called the torsion of the curve.
From the relation n = b × t, we get

    dn/ds = (db/ds) × t + b × (dt/ds)
          = τ n × t + b × κ n
          = −τ b − κ t
Summarizing,

    dt/ds = κ n    (6.89)
    dn/ds = −κ t − τ b    (6.90)
    db/ds = τ n    (6.91)

These are the Frenet-Serret³ relations. In matrix form, we can say that

    d/ds [ t ]   [  0   κ   0 ] [ t ]
         [ n ] = [ −κ   0  −τ ] [ n ]    (6.92)
         [ b ]   [  0   τ   0 ] [ b ]

³Jean Frédéric Frenet, 1816-1900, French mathematician, and Joseph Alfred Serret, 1819-1885, French mathematician.
Note the coeﬃcient matrix is antisymmetric.
Example 6.5

Find the local coordinates, the curvature, and the torsion for the helix

    r(t) = a cos t i + a sin t j + bt k

Taking the derivative and finding its magnitude, we get

    dr(t)/dt = −a sin t i + a cos t j + b k
    ||dr(t)/dt||₂ = [a² sin² t + a² cos² t + b²]^{1/2} = [a² + b²]^{1/2}

This gives us the unit tangent vector t:

    t = (dr/dt) / ||dr/dt||₂ = (−a sin t i + a cos t j + b k) / √(a² + b²)

We also have

    ds/dt = [(dx/dt)² + (dy/dt)² + (dz/dt)²]^{1/2}    (6.93)
          = [a² sin² t + a² cos² t + b²]^{1/2}    (6.94)
          = √(a² + b²)    (6.95)

Continuing, we have

    dt/ds = (dt/dt) / (ds/dt) = −a (cos t i + sin t j)/√(a² + b²) · 1/√(a² + b²) = κ n

Thus the unit principal normal is

    n = −(cos t i + sin t j)

The curvature is

    κ = a / (a² + b²)

The radius of curvature is

    ρ = (a² + b²) / a

We also find the unit binormal

    b = t × n = (1/√(a² + b²)) | i         j        k |
                               | −a sin t  a cos t  b |
                               | −cos t    −sin t   0 |
      = (b sin t i − b cos t j + a k) / √(a² + b²)

The torsion is determined from

    τ n = (db/dt) / (ds/dt) = b (cos t i + sin t j) / (a² + b²)

from which

    τ = −b / (a² + b²)
Further identities:

    (dr/dt) × (d²r/dt²) = κ v³ b    (6.96)
    [(dr/dt) × (d²r/dt²)] · (d³r/dt³) = −κ² v⁶ τ    (6.97)
    (1/||ṙ||₂³) [ ||r̈||₂² ||ṙ||₂² − (ṙ · r̈)² ]^{1/2} = κ    (6.98)

where v = ds/dt.
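Identities (6.97) and (6.98), together with the helix results of Example 6.5, can be verified numerically. Below is a minimal sketch (not part of the original notes), assuming NumPy, using the analytic derivatives of r(t) = a cos t i + a sin t j + bt k; note that with the sign convention of equation (6.88), τ from (6.97) comes out as −b/(a² + b²):

```python
import numpy as np

a, b, t = 2.0, 1.0, 0.7
# Analytic derivatives of r(t) = (a cos t, a sin t, b t)
r1 = np.array([-a*np.sin(t),  a*np.cos(t), b])    # dr/dt
r2 = np.array([-a*np.cos(t), -a*np.sin(t), 0.0])  # d2r/dt2
r3 = np.array([ a*np.sin(t), -a*np.cos(t), 0.0])  # d3r/dt3

v = np.linalg.norm(r1)                             # v = ds/dt
cross = np.cross(r1, r2)
# Equation (6.98): kappa from the first and second derivatives
kappa = np.sqrt(np.dot(r2, r2)*np.dot(r1, r1) - np.dot(r1, r2)**2) / v**3
# Equation (6.97): (r' x r'') . r''' = -kappa^2 v^6 tau
tau = -np.dot(cross, r3) / (kappa**2 * v**6)

print(kappa, a/(a**2 + b**2))   # both 0.4
print(tau, -b/(a**2 + b**2))    # both -0.2
```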
6.5 Line and surface integrals

If r is a position vector,

    r = xᵢ eᵢ    (6.99)

then φ(r) is a scalar field and u(r) is a vector field.

6.5.1 Line integrals

A line integral is of the form

    I = ∫_C u · dr    (6.100)

where u is a vector field, and dr is an element of curve C. If u = uᵢ, and dr = dxᵢ, then we can write

    I = ∫_C uᵢ dxᵢ    (6.101)
Example 6.6

Find

    I = ∫_C u · dr

if

    u = yz i + xy j + xz k

and C goes from (0, 0, 0) to (1, 1, 0) along
(a) the curve x = y², z = 0,
(b) the straight line x = y, z = 0.

We have

    ∫_C u · dr = ∫_C (yz dx + xy dy + xz (0))

(a) Substituting x = y², z = 0, we get

    I = ∫₀¹ y³ dy = 1/4

(b) Substituting x = y, z = 0, we get

    I = ∫₀¹ y² dy = 1/3
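The path dependence seen in Example 6.6 can be reproduced by discretizing the parameter along each path. A sketch (not from the notes), assuming NumPy; the helper `trap` is an illustrative hand-rolled trapezoidal rule:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule, written out explicitly."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def line_integral(path, n=200001):
    """Approximate I = int_C u . dr for u = (yz, xy, xz) along a parameterized path."""
    s = np.linspace(0.0, 1.0, n)
    x, y, z = path(s)
    u = np.stack([y*z, x*y, x*z])                     # u evaluated along the path
    r = np.stack([x, y, z])
    integrand = np.sum(u * np.gradient(r, s, axis=1), axis=0)  # u . dr/ds
    return trap(integrand, s)

I_a = line_integral(lambda s: (s**2, s, 0*s))  # (a) x = y^2, z = 0
I_b = line_integral(lambda s: (s, s, 0*s))     # (b) x = y,   z = 0
print(I_a, I_b)                                # ~1/4 and ~1/3
```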
6.5.2 Surface integrals

A surface integral is of the form

    I = ∫_S u · n dS = ∫_S uᵢ nᵢ dS    (6.102)

where u (or uᵢ) is a vector field, S is an open or closed surface, dS is an element of this surface, and n (or nᵢ) is a unit vector normal to the surface element.
6.6 Differential operators

Surface integrals can be used for coordinate-independent definitions of differential operators. Beginning with some well-known theorems — Gauss's theorem for a scalar, Gauss's theorem for a vector, and a little-known theorem, which can be demonstrated — we have

    ∫_V ∇φ dV = ∫_S n φ dS,    (6.103)
    ∫_V ∇ · u dV = ∫_S n · u dS,    (6.104)
    ∫_V (∇ × u) dV = ∫_S n × u dS.    (6.105)
Figure 6.4: Element of volume (a Cartesian box with sides dx₁, dx₂, dx₃, centered at O, with axes x₁, x₂, x₃)
Now we invoke the mean value theorem, which asserts that somewhere within the limits of integration the integrand takes on its mean value, which we denote with an overline, so that, for example, ∫_V α dV = ᾱ V. Thus we get

    (∇φ)‾ V = ∫_S n φ dS,    (6.106)
    (∇ · u)‾ V = ∫_S n · u dS,    (6.107)
    (∇ × u)‾ V = ∫_S n × u dS.    (6.108)

As we let V → 0, mean values approach local values, so we get

    ∇φ ≡ grad φ = lim_{V→0} (1/V) ∫_S n φ dS    (6.109)
    ∇ · u ≡ div u = lim_{V→0} (1/V) ∫_S n · u dS    (6.110)
    ∇ × u ≡ curl u = lim_{V→0} (1/V) ∫_S n × u dS    (6.111)

where φ(r) is a scalar field, and u(r) is a vector field. V is the region enclosed within a closed surface S, and n is the unit normal to an element of the surface dS. Here "grad" is the gradient operator, "div" is the divergence operator, and "curl" is the curl operator.
Consider the element of volume in Cartesian coordinates shown in Figure 6.4. The differential operations in this coordinate system can be deduced from the definitions and written in terms of the operator

    ∇ = e₁ ∂/∂x₁ + e₂ ∂/∂x₂ + e₃ ∂/∂x₃ = ∂/∂xᵢ    (6.112)
6.6.1 Gradient of a scalar

Let's evaluate the gradient of a scalar function of a vector,

    grad [φ(xᵢ)]

We take the reference value of φ to be at the center O. At the centers of the two faces at x₁ = ± dx₁/2, a distance ± dx₁/2 away from O in the x₁ direction, φ takes the value

    φ ± (∂φ/∂x₁)(dx₁/2)

Writing V = dx₁ dx₂ dx₃, equation (6.109) gives

    grad φ = lim_{V→0} (1/V) { [φ + (∂φ/∂x₁)(dx₁/2)] e₁ dx₂ dx₃ − [φ − (∂φ/∂x₁)(dx₁/2)] e₁ dx₂ dx₃
             + similar terms from the x₂ and x₃ faces }
           = (∂φ/∂x₁) e₁ + (∂φ/∂x₂) e₂ + (∂φ/∂x₃) e₃
           = (∂φ/∂xᵢ) eᵢ = ∂φ/∂xᵢ = ∇φ    (6.113)

The derivative of φ on a particular path is called the directional derivative. If the path has a unit tangent t, the derivative in this direction is

    ∇φ · t = tᵢ ∂φ/∂xᵢ    (6.114)
If φ(x, y, z) = constant is a surface, then dφ = 0 on this surface. Also

    dφ = (∂φ/∂xᵢ) dxᵢ = ∇φ · dr

Since dr is tangent to the surface, ∇φ must be normal to it. The tangent plane at r = r₀ is defined by the position vector r such that

    ∇φ · (r − r₀) = 0    (6.115)
Example 6.7

At the point (1,1,1), find the unit normal to the surface

    z³ + xz = x² + y²

Define

    φ(x, y, z) = z³ + xz − x² − y² = 0

A normal at (1,1,1) is

    ∇φ = (z − 2x) i − 2y j + (3z² + x) k = −1 i − 2 j + 4 k

The unit normal is

    n = ∇φ / ||∇φ||₂ = (1/√21)(−1 i − 2 j + 4 k)
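The normal of Example 6.7 can also be obtained with a numerical gradient. A minimal sketch (not from the notes), assuming NumPy and central differences:

```python
import numpy as np

def grad(phi, p, h=1e-6):
    """Central-difference gradient of a scalar field phi at point p."""
    p = np.asarray(p, dtype=float)
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (phi(p + e) - phi(p - e)) / (2 * h)
    return g

phi = lambda p: p[2]**3 + p[0]*p[2] - p[0]**2 - p[1]**2
g = grad(phi, [1.0, 1.0, 1.0])     # analytically (-1, -2, 4)
n = g / np.linalg.norm(g)          # unit normal, (-1, -2, 4)/sqrt(21)
print(n)
```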
6.6.2 Divergence

6.6.2.1 Vectors

Equation (6.110) becomes

    div u = lim_{V→0} (1/V) { [u₁ + (∂u₁/∂x₁)(dx₁/2)] dx₂ dx₃ − [u₁ − (∂u₁/∂x₁)(dx₁/2)] dx₂ dx₃
            + similar terms from the x₂ and x₃ faces }
          = ∂u₁/∂x₁ + ∂u₂/∂x₂ + ∂u₃/∂x₃
          = ∂uᵢ/∂xᵢ = ∇ · u    (6.116)
6.6.2.2 Tensors

The extension to tensors is straightforward:

    div T = ∇ · T    (6.117)
          = ∂Tᵢⱼ/∂xᵢ    (6.118)

Notice that this yields a vector quantity.
6.6.3 Curl of a vector

From equation (6.111), we have

    curl u = lim_{V→0} (1/V) { [u₂ + (∂u₂/∂x₁)(dx₁/2)] e₃ dx₂ dx₃ − [u₃ + (∂u₃/∂x₁)(dx₁/2)] e₂ dx₂ dx₃
             − [u₂ − (∂u₂/∂x₁)(dx₁/2)] e₃ dx₂ dx₃ + [u₃ − (∂u₃/∂x₁)(dx₁/2)] e₂ dx₂ dx₃
             + similar terms from the x₂ and x₃ faces }

           = | e₁      e₂      e₃     |
             | ∂/∂x₁   ∂/∂x₂   ∂/∂x₃  |
             | u₁      u₂      u₃     |

           = εᵢⱼₖ ∂uₖ/∂xⱼ = ∇ × u    (6.119)

The curl of a tensor does not arise often in practice.
6.6.4 Laplacian

6.6.4.1 Scalar

The Laplacian⁴ is simply div grad φ, and can be written as

    ∇ · (∇φ) = ∇² φ = ∂²φ/(∂xᵢ ∂xᵢ)    (6.120)

6.6.4.2 Vector

The identity

    ∇² u = ∇ · ∇u = ∇(∇ · u) − ∇ × (∇ × u)    (6.121)

is used to evaluate this.
6.6.5 Identities

    ∇ × (∇φ) = 0    (6.122)
    ∇ · (∇ × u) = 0    (6.123)
    ∇ · (φu) = φ ∇ · u + ∇φ · u    (6.124)
    ∇ × (φu) = φ ∇ × u + ∇φ × u    (6.125)
    ∇ · (u × v) = v · (∇ × u) − u · (∇ × v)    (6.126)
    ∇ × (u × v) = (v · ∇)u − (u · ∇)v + u(∇ · v) − v(∇ · u)    (6.127)
    ∇(u · v) = (u · ∇)v + (v · ∇)u + u × (∇ × v) + v × (∇ × u)    (6.128)
    ∇ × (∇ × u) = ∇(∇ · u) − ∇ · ∇u    (6.129)

⁴Pierre-Simon Laplace, 1749-1827, Norman-born French mathematician.
Example 6.8

Show that

    ∇ · ∇u = ∇(∇ · u) − ∇ × (∇ × u)

Going from right to left,

    ∇(∇ · u) − ∇ × (∇ × u) = ∂/∂xᵢ (∂uⱼ/∂xⱼ) − εᵢⱼₖ ∂/∂xⱼ (εₖₗₘ ∂uₘ/∂xₗ)
      = ∂/∂xᵢ (∂uⱼ/∂xⱼ) − εₖᵢⱼ εₖₗₘ ∂/∂xⱼ (∂uₘ/∂xₗ)
      = ∂²uⱼ/(∂xᵢ∂xⱼ) − (δᵢₗ δⱼₘ − δᵢₘ δⱼₗ) ∂²uₘ/(∂xⱼ∂xₗ)
      = ∂²uⱼ/(∂xᵢ∂xⱼ) − ∂²uⱼ/(∂xⱼ∂xᵢ) + ∂²uᵢ/(∂xⱼ∂xⱼ)
      = ∂/∂xⱼ (∂uᵢ/∂xⱼ)
      = ∇ · ∇u
6.7 Special theorems

6.7.1 Path independence

In general the value of a line integral depends on the path. If, however, we have the special case in which we can form u = ∇φ in equation (6.100), where φ is a scalar field, then

    I = ∫_C ∇φ · dr = ∫_C (∂φ/∂xᵢ) dxᵢ = ∫_C dφ = φ(b) − φ(a)

where a and b are the beginning and end of curve C. The integral I is then independent of path. u is then called a conservative field, and φ is its potential.
6.7.2 Green's theorem

Let u = uₓ i + u_y j be a vector field, C a closed curve, and D the region enclosed by C, all in the xy plane. Then

    ∮_C u · dr = ∫_D (∂u_y/∂x − ∂uₓ/∂y) dx dy    (6.130)
Example 6.9

Show that Green's theorem is valid if u = y i + 2xy j, and C consists of the straight lines (0,0) to (1,0) to (1,1) to (0,0).

    ∮_C u · dr = ∫_{C₁} u · dr + ∫_{C₂} u · dr + ∫_{C₃} u · dr

where C₁, C₂, and C₃ are the straight lines (0,0) to (1,0), (1,0) to (1,1), and (1,1) to (0,0), respectively. For this problem we have

    C₁: y = 0, dy = 0, x ∈ [0, 1], u = 0
    C₂: x = 1, dx = 0, y ∈ [0, 1], u = y i + 2y j
    C₃: x = y, dx = dy, x ∈ [1, 0], y ∈ [1, 0], u = x i + 2x² j

Thus

    ∮_C u · dr = ∫₀¹ (0 i + 0 j) · (dx i) + ∫₀¹ (y i + 2y j) · (dy j) + ∫₁⁰ (x i + 2x² j) · (dx i + dx j)
      = ∫₀¹ 2y dy + ∫₁⁰ (x + 2x²) dx
      = [y²]₀¹ + [x²/2 + 2x³/3]₁⁰
      = 1 − 1/2 − 2/3 = −1/6

On the other hand,

    ∫_D (∂u_y/∂x − ∂uₓ/∂y) dx dy = ∫₀¹ ∫₀ˣ (2y − 1) dy dx = ∫₀¹ (x² − x) dx = −1/6
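Both sides of Green's theorem in Example 6.9 can be checked numerically. A sketch (not from the notes), assuming NumPy; the helper `trap` and the sampling counts are illustrative choices:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule, written out explicitly."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Left side: circulation of u = (y, 2xy) around (0,0) -> (1,0) -> (1,1) -> (0,0)
def segment(p0, p1, n=20001):
    s = np.linspace(0.0, 1.0, n)
    x = p0[0] + s*(p1[0] - p0[0])
    y = p0[1] + s*(p1[1] - p0[1])
    ux, uy = y, 2*x*y
    return trap(ux*(p1[0] - p0[0]) + uy*(p1[1] - p0[1]), s)

circ = segment((0, 0), (1, 0)) + segment((1, 0), (1, 1)) + segment((1, 1), (0, 0))

# Right side: area integral of du_y/dx - du_x/dy = 2y - 1 over 0 <= y <= x <= 1
x = np.linspace(0.0, 1.0, 2001)
inner = np.array([trap(2*np.linspace(0.0, xi, 201) - 1.0,
                       np.linspace(0.0, xi, 201)) for xi in x])
area = trap(inner, x)
print(circ, area)    # both ~ -1/6
```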
6.7.3 Gauss's theorem

Let S be a closed surface, and V the region enclosed within it; then

    ∫_S u · n dS = ∫_V ∇ · u dV    (6.131)
    ∫_S uᵢ nᵢ dS = ∫_V (∂uᵢ/∂xᵢ) dV    (6.132)

where dV is an element of volume, dS is an element of the surface, and n (or nᵢ) is the outward unit normal to it. It extends to tensors of arbitrary order:

    ∫_S Tᵢⱼₖ... nᵢ dS = ∫_V (∂Tᵢⱼₖ.../∂xᵢ) dV

Note if Tᵢⱼₖ... = C, then we get

    ∫_S nᵢ dS = 0

Gauss's theorem can be thought of as an extension of the familiar one-dimensional scalar result:

    φ(b) − φ(a) = ∫_a^b (dφ/dx) dx    (6.133)

Here the end points play the role of the surface integral, and the integral on x plays the role of the volume integral.
Example 6.10

Show that Gauss's theorem is valid if

    u = x i + y j

and S is the closed surface which consists of a circular base and the hemisphere of unit radius with center at the origin and z ≥ 0, that is,

    x² + y² + z² = 1

In spherical coordinates, defined by

    x = r sin θ cos φ
    y = r sin θ sin φ
    z = r cos θ

the hemispherical surface is described by r = 1. We split the surface integral into two parts:

    ∫_S u · n dS = ∫_B u · n dS + ∫_H u · n dS

where B is the base and H the curved surface of the hemisphere. The first term on the right is zero since n = −k, and u · n = 0 on B. On H the unit normal is

    n = sin θ cos φ i + sin θ sin φ j + cos θ k

Thus

    u · n = sin² θ cos² φ + sin² θ sin² φ = sin² θ

    ∫_H u · n dS = ∫₀^{2π} ∫₀^{π/2} sin² θ (sin θ dθ dφ)
      = 2π ∫₀^{π/2} [ (3/4) sin θ − (1/4) sin 3θ ] dθ
      = 2π (3/4 − 1/12)
      = (4/3) π

On the other hand, if we use Gauss's theorem we find that ∇ · u = 2, so that

    ∫_V ∇ · u dV = (4/3) π

since the volume of the hemisphere is (2/3) π.
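The flux integral of Example 6.10 reduces to 2π ∫₀^{π/2} sin³θ dθ, which is easy to confirm by quadrature. A quick sketch (not from the notes), assuming NumPy and a hand-rolled trapezoidal rule:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule, written out explicitly."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

theta = np.linspace(0.0, np.pi/2, 100001)
flux_H = 2*np.pi * trap(np.sin(theta)**3, theta)   # int_H u.n dS = 2 pi int sin^3
flux_gauss = 2.0 * (2.0/3.0)*np.pi                 # div u = 2 times hemisphere volume
print(flux_H, flux_gauss)                          # both 4 pi / 3
```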
6.7.4 Green's identities

Applying Gauss's theorem to the vector u = φ∇ψ, we get

    ∫_S φ∇ψ · n dS = ∫_V ∇ · (φ∇ψ) dV    (6.134)
    ∫_S φ (∂ψ/∂xᵢ) nᵢ dS = ∫_V ∂/∂xᵢ (φ ∂ψ/∂xᵢ) dV    (6.135)

From this we get Green's first identity

    ∫_S φ∇ψ · n dS = ∫_V (φ∇²ψ + ∇φ · ∇ψ) dV    (6.136)
    ∫_S φ (∂ψ/∂xᵢ) nᵢ dS = ∫_V [ φ ∂²ψ/(∂xᵢ∂xᵢ) + (∂φ/∂xᵢ)(∂ψ/∂xᵢ) ] dV    (6.137)

Interchanging φ and ψ in the above and subtracting, we get Green's second identity

    ∫_S (φ∇ψ − ψ∇φ) · n dS = ∫_V (φ∇²ψ − ψ∇²φ) dV    (6.138)
    ∫_S [ φ ∂ψ/∂xᵢ − ψ ∂φ/∂xᵢ ] nᵢ dS = ∫_V [ φ ∂²ψ/(∂xᵢ∂xᵢ) − ψ ∂²φ/(∂xᵢ∂xᵢ) ] dV    (6.139)
6.7.5 Stokes' theorem

Consider Stokes'⁵ theorem. Let S be an open surface, and the curve C its boundary. Then

    ∫_S (∇ × u) · n dS = ∮_C u · dr    (6.140)
    ∫_S εᵢⱼₖ (∂uₖ/∂xⱼ) nᵢ dS = ∮_C uᵢ drᵢ    (6.141)

where n is the unit vector normal to the element dS, and dr an element of curve C.

⁵George Gabriel Stokes, 1819-1903, Irish-born English mathematician.
Example 6.11

Evaluate I = ∫_S (∇ × u) · n dS, using Stokes' theorem, where u = x³ j − (z + 1) k and S is the surface z = 4 − 4x² − y² for z ≥ 0.

Using Stokes' theorem, the surface integral can be converted to a line integral along the boundary C, which is the curve 4 − 4x² − y² = 0.

    I = ∮_C u · dr
      = ∮ [x³ j − (z + 1) k] · [dx i + dy j]
      = ∮_C x³ dy

C can be represented by the parametric equations x = cos t, y = 2 sin t. Thus dy = 2 cos t dt, so that

    I = 2 ∫₀^{2π} cos⁴ t dt
      = 2 ∫₀^{2π} [ (1/8) cos 4t + (1/2) cos 2t + 3/8 ] dt
      = 2 [ (1/32) sin 4t + (1/4) sin 2t + (3/8) t ]₀^{2π}
      = (3/2) π
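The line-integral value 3π/2 in Example 6.11 can be confirmed directly from the parameterization. A sketch (not from the notes), assuming NumPy:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule, written out explicitly."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

t = np.linspace(0.0, 2*np.pi, 100001)
x = np.cos(t)                    # boundary C: x = cos t, y = 2 sin t
dy_dt = 2*np.cos(t)
I = trap(x**3 * dy_dt, t)        # I = closed integral of x^3 dy
print(I, 1.5*np.pi)              # both ~ 4.712
```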
6.7.6 Leibniz's theorem

If we consider an arbitrary moving volume V(t) with a corresponding surface area S(t), with surface elements moving at velocity wₖ, Leibniz's theorem gives us a means to calculate the time derivatives of integrated quantities. For an arbitrary-order tensor, it is

    d/dt ∫_{V(t)} Tⱼₖ...(xᵢ, t) dV = ∫_{V(t)} ∂Tⱼₖ...(xᵢ, t)/∂t dV + ∫_{S(t)} nₘ wₘ Tⱼₖ...(xᵢ, t) dS    (6.142)

Note if Tⱼₖ...(xᵢ, t) = 1, we get

    d/dt ∫_{V(t)} (1) dV = ∫_{V(t)} ∂/∂t (1) dV + ∫_{S(t)} nₘ wₘ (1) dS    (6.143)
    dV/dt = ∫_{S(t)} nₘ wₘ dS    (6.144)

Here the volume changes due to the net surface motion. In one dimension, with Tⱼₖ...(xᵢ, t) = f(x, t), we get

    d/dt ∫_{x=a(t)}^{x=b(t)} f(x, t) dx = ∫_{x=a(t)}^{x=b(t)} (∂f/∂t) dx + (db/dt) f(b(t), t) − (da/dt) f(a(t), t)    (6.145)
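The one-dimensional form (6.145) is easy to test numerically. A sketch (not from the notes), assuming NumPy, with the hypothetical choices f(x, t) = x t, a(t) = 0, b(t) = t, for which both sides equal (3/2)t²:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule, written out explicitly."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

f = lambda x, t: x * t           # hypothetical integrand
F = lambda t: trap(f(np.linspace(0.0, t, 20001), t), np.linspace(0.0, t, 20001))

t0, h = 1.3, 1e-5
lhs = (F(t0 + h) - F(t0 - h)) / (2*h)               # d/dt of int_{a(t)}^{b(t)} f dx
x = np.linspace(0.0, t0, 20001)
rhs = trap(x, x) + 1.0*f(t0, t0) - 0.0*f(0.0, t0)   # int df/dt dx + b' f(b) - a' f(a)
print(lhs, rhs, 1.5*t0**2)       # all equal 3 t0^2 / 2
```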
6.8 Orthogonal curvilinear coordinates

For an orthogonal curvilinear coordinate system (q₁, q₂, q₃), we have

    ds² = (h₁ dq₁)² + (h₂ dq₂)² + (h₃ dq₃)²    (6.146)

where

    hᵢ = [ (∂x₁/∂qᵢ)² + (∂x₂/∂qᵢ)² + (∂x₃/∂qᵢ)² ]^{1/2}    (6.147)

We can show that

    grad φ = (1/h₁)(∂φ/∂q₁) e₁ + (1/h₂)(∂φ/∂q₂) e₂ + (1/h₃)(∂φ/∂q₃) e₃

    div u = 1/(h₁h₂h₃) [ ∂/∂q₁ (u₁ h₂ h₃) + ∂/∂q₂ (u₂ h₃ h₁) + ∂/∂q₃ (u₃ h₁ h₂) ]

    curl u = 1/(h₁h₂h₃) | h₁ e₁   h₂ e₂   h₃ e₃  |
                        | ∂/∂q₁   ∂/∂q₂   ∂/∂q₃  |
                        | u₁ h₁   u₂ h₂   u₃ h₃  |

    div grad φ = 1/(h₁h₂h₃) [ ∂/∂q₁ ( (h₂h₃/h₁) ∂φ/∂q₁ ) + ∂/∂q₂ ( (h₃h₁/h₂) ∂φ/∂q₂ ) + ∂/∂q₃ ( (h₁h₂/h₃) ∂φ/∂q₃ ) ]
Example 6.12

Find expressions for the gradient, divergence, and curl in cylindrical coordinates (r, θ, z), where

    x₁ = r cos θ
    x₂ = r sin θ
    x₃ = z

The 1, 2, and 3 directions are associated with r, θ, and z, respectively. From equation (6.147) the scale factors are

    h_r = [ (∂x₁/∂r)² + (∂x₂/∂r)² + (∂x₃/∂r)² ]^{1/2} = [cos² θ + sin² θ]^{1/2} = 1
    h_θ = [ (∂x₁/∂θ)² + (∂x₂/∂θ)² + (∂x₃/∂θ)² ]^{1/2} = [r² sin² θ + r² cos² θ]^{1/2} = r
    h_z = [ (∂x₁/∂z)² + (∂x₂/∂z)² + (∂x₃/∂z)² ]^{1/2} = 1

so that

    grad φ = (∂φ/∂r) e_r + (1/r)(∂φ/∂θ) e_θ + (∂φ/∂z) e_z

    div u = (1/r) [ ∂/∂r (u_r r) + ∂/∂θ (u_θ) + ∂/∂z (u_z r) ]

    curl u = (1/r) | e_r    r e_θ   e_z  |
                   | ∂/∂r   ∂/∂θ    ∂/∂z |
                   | u_r    u_θ r   u_z  |
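The scale factors of equation (6.147) can be computed numerically from the coordinate map, which makes a handy check for any orthogonal system. A sketch (not from the notes), assuming NumPy; `scale_factors` is an illustrative helper:

```python
import numpy as np

def scale_factors(mapping, q, h=1e-6):
    """h_i = |dx/dq_i| for a mapping q -> x, via central differences (eq. 6.147)."""
    q = np.asarray(q, dtype=float)
    out = []
    for i in range(3):
        e = np.zeros(3); e[i] = h
        dx = (np.asarray(mapping(q + e)) - np.asarray(mapping(q - e))) / (2*h)
        out.append(float(np.linalg.norm(dx)))
    return out

# Cylindrical coordinates (r, theta, z) -> (x1, x2, x3)
cyl = lambda q: (q[0]*np.cos(q[1]), q[0]*np.sin(q[1]), q[2])
hr, htheta, hz = scale_factors(cyl, [2.0, 0.6, 1.0])
print(hr, htheta, hz)   # 1, r = 2, 1
```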
Problems

1. Find the angle between the planes

    3x − y + 2z = 2
    x − 2y = 1

2. Find the curve of intersection of the cylinders x² + y² = 1 and y² + z² = 1. Determine also the radius of curvature of this curve at the points (0,1,0) and (1,0,1).

3. Show that for a curve r(t)

    t · (dt/ds × d²t/ds²) = κ² τ

    [dr/ds · (d²r/ds² × d³r/ds³)] / (d²r/ds² · d²r/ds²) = τ

where t is the unit tangent, s is the length along the curve, κ is the curvature, and τ is the torsion.

4. Find the equation for the tangent to the curve of intersection of x = 2 and y = 1 + xz sin y²z at the point (2, 1, π).

5. Find the curvature and torsion of the curve r(t) = 2t i + t² j + t³ k at the point (2, 1, 1).

6. Apply Stokes' theorem to the plane vector field u(x, y) = uₓ i + u_y j and a closed curve enclosing a plane region. What is the result called? Use this result to find ∮_C u · dr, where u = −y i + x j and the integration is counterclockwise along the sides C of the trapezoid with corners at (0,0), (2,0), (2,1), and (1,1).

7. Orthogonal bipolar coordinates (u, v, w) are defined by

    x = α sinh v / (cosh v − cos u)
    y = α sin u / (cosh v − cos u)
    z = w

For α = 1, plot some of the surfaces of constant x and y in the u-v plane.
8. Using Cartesian index notation, show that

    ∇ × (u × v) = (v · ∇)u − (u · ∇)v + u(∇ · v) − v(∇ · u)

where u and v are vector fields.

9. Consider two Cartesian coordinate systems: S with unit vectors (i, j, k), and S′ with (i′, j′, k′), where i′ = i, j′ = (j − k)/√2, k′ = (j + k)/√2. The tensor T has the following components in S:

    ( 1   0   0 )
    ( 0  −1   0 )
    ( 0   0   2 )

Find its components in S′.

10. Find the matrix A that operates on any vector of unit length in the x-y plane and turns it through an angle θ around the z-axis without changing its length. Show that A is orthogonal; that is, that all of its columns are mutually orthogonal vectors of unit magnitude.

11. What is the unit vector normal to the plane passing through the points (1,0,0), (0,1,0) and (0,0,2)?

12. Prove the following identities using Cartesian index notation:
(a) (a × b) · c = a · (b × c)
(b) a × (b × c) = b(a · c) − c(a · b)
(c) (a × b) · (c × d) = [(a × b) × c] · d

13. The position of a point is given by r = i a cos ωt + j b sin ωt. Show that the path of the point is an ellipse. Find its velocity v and show that r × v = constant. Show also that the acceleration of the point is directed towards the origin and its magnitude is proportional to the distance from the origin.

14. System S is defined by the unit vectors e₁, e₂, and e₃. Another Cartesian system S′ is defined by unit vectors e₁′, e₂′, and e₃′ in directions a, b, and c, where

    a = e₁
    b = e₂ − e₃

(a) Find e₁′, e₂′, e₃′, (b) find the transformation array Aᵢⱼ, (c) show that δᵢⱼ = Aₖᵢ Aₖⱼ is satisfied, and (d) find the components of the vector e₁ + e₂ + e₃ in S′.

15. Use Green's theorem to calculate ∮_C u · dr, where u = x² i + 2xy j, and C is the counterclockwise path around a rectangle with vertices at (0,0), (2,0), (0,4) and (2,4).

16. Derive an expression for the divergence of a vector in orthogonal paraboloidal coordinates

    x = uv cos θ
    y = uv sin θ
    z = (1/2)(u² − v²)

Determine the scale factors. Find ∇φ, ∇ · u, ∇ × u, and ∇²φ in this coordinate system.

17. Derive an expression for the gradient, divergence, curl and Laplacian operators in orthogonal parabolic cylindrical coordinates (u, v, w) where

    x = uv
    y = (1/2)(u² − v²)
    z = w

where u ∈ [0, ∞), v ∈ (−∞, ∞), and w ∈ (−∞, ∞).

18. Consider orthogonal elliptic cylindrical coordinates (u, v, z) which are related to Cartesian coordinates (x, y, z) by

    x = a cosh u cos v
    y = a sinh u sin v
    z = z

where u ∈ [0, ∞), v ∈ [0, 2π) and z ∈ (−∞, ∞). Determine ∇f, ∇ · u, ∇ × u and ∇²f in this system, where f is a scalar field and u is a vector field.

19. Determine a unit vector in the plane of the vectors i − j and j + k and perpendicular to the vector i − j + k.

20. Determine a unit vector perpendicular to the plane of the vectors a = i + 2j − k, b = 2i + j + 0k.

21. Find the curvature and the radius of curvature of y = a sin x at the peaks and valleys.

22. Determine the unit vector normal to the surface x³ − 2xyz + z³ = 0 at the point (1,1,1).

23. Show using indicial notation that

    ∇ × ∇φ = 0
    ∇ · (∇ × u) = 0
    ∇(u · v) = (u · ∇)v + (v · ∇)u + u × (∇ × v) + v × (∇ × u)
    (1/2) ∇(u · u) = (u · ∇)u + u × (∇ × u)
    ∇ · (u × v) = v · ∇ × u − u · ∇ × v
    ∇ × (∇ × u) = ∇(∇ · u) − ∇²u
    ∇ × (u × v) = (v · ∇)u − (u · ∇)v + u(∇ · v) − v(∇ · u)

24. Show that the Laplacian operator ∂²/∂xᵢ∂xᵢ has the same form in S and S′.
25. If

    Tᵢⱼ = ( x₁² x₂   2x₃   x₁ − 2x₂ )
          ( x₂ x₁    x₃    x₃³      )
          ( 0        x₁    2x₂ − x₃ )

a) Evaluate Tᵢⱼ at P : [−2, 3, −1]
b) find T_(ij) and T_[ij] at P
c) find the associated dual vector dᵢ
d) find the principal values and the orientations of each associated normal vector for the symmetric part of Tᵢⱼ evaluated at P
e) evaluate the divergence of Tᵢⱼ at P
f) evaluate the curl of the divergence of Tᵢⱼ at P

26. Consider the tensor

    Tᵢⱼ = ( 2  −1   2 )
          ( 3   1   0 )
          ( 0   1   3 )

defined in a Cartesian coordinate system. Consider the vector associated with the plane whose normal points in the direction (2, 3, −1). What is the magnitude of the component of the associated vector that is aligned with the normal to the plane?
Chapter 7
Linear analysis
see Kaplan, Chapter 1,
see Friedman, Chapters 1, 2,
see Riley, Hobson, and Bence, Chapters 7, 10, 15,
see Lopez, Chapters 15, 31,
see Greenberg, Chapters 17 and 18,
see Wylie and Barrett, Chapter 13,
see Zeidler,
see Debnath and Mikusinski.
7.1 Sets

Consider two sets A and B. We use the following notation:

    x ∈ A    x is an element of A
    x ∉ A    x is not an element of A
    A = B    A and B have the same elements
    A ⊂ B    the elements of A also belong to B
    A ∪ B    set of elements that belong to A or B
    A ∩ B    set of elements that belong to A and B
    A − B    set of elements that belong to A but not to B

If A ⊂ B, then B − A is the complement of A in B.

Some sets that are commonly used are:

    Z    set of all integers
    N    set of all positive integers
    Q    set of all rational numbers
    R    set of all real numbers
    R⁺   set of all non-negative real numbers
    C    set of all complex numbers

• An interval is a portion of the real line.
• An open interval (a, b) does not include the end points, so that if x ∈ (a, b), then a < x < b. In set notation this is {x ∈ R : a < x < b} if x is real.

• A closed interval [a, b] includes the end points. If x ∈ [a, b], then a ≤ x ≤ b. In set notation this is {x ∈ R : a ≤ x ≤ b} if x is real.

• The complement of any open subset of [a, b] is a closed set.

• A set A ⊂ R is bounded from above if there exists a real number, called an upper bound, such that every x ∈ A is less than or equal to that number.

• The least upper bound or supremum is the minimum of all upper bounds.

• In a similar fashion, a set A ⊂ R can be bounded from below, in which case it will have a greatest lower bound or infimum.

• A set which has no elements is the empty set {}, also known as the null set ∅. Note that the set with 0 as its only element, {0}, is not empty.

• A set that is either finite, or for which each element can be associated with a member of N, is said to be countable. Otherwise the set is uncountable.

• An ordered pair is P = (x, y), where x ∈ A, and y ∈ B. Then P ∈ A × B, where the symbol × represents a Cartesian product. If x ∈ A and y ∈ A also, then we write P = (x, y) ∈ A².

• A real function of a single variable can be written as f : X → Y or y = f(x), where f maps x ∈ X ⊂ R to y ∈ Y ⊂ R. For each x, there is only one y, though there may be more than one x that maps to a given y. The set X is called the domain of f, y the image of x, and the range is the set of all images.
7.2 Differentiation and integration

7.2.1 Fréchet derivative

An example of a Fréchet¹ derivative is the Jacobian derivative. It is a generalization of the ordinary derivative.
7.2.2 Riemann integral

Consider a function f(t) defined in the interval [a, b]. Choose t₁, t₂, …, t_{n−1} such that

    a = t₀ < t₁ < t₂ < … < t_{n−1} < tₙ = b

Let ξₖ ∈ [t_{k−1}, tₖ], and

    Iₙ = f(ξ₁)(t₁ − t₀) + f(ξ₂)(t₂ − t₁) + … + f(ξₙ)(tₙ − t_{n−1})

¹Maurice René Fréchet, 1878-1973, French mathematician.

Figure 7.1: Riemann integration process

Also let maxₖ |tₖ − t_{k−1}| → 0 as n → ∞. Then Iₙ → I, where

    I = ∫_a^b f(t) dt    (7.1)

If I exists and is independent of the manner of subdivision, then f(t) is Riemann² integrable in [a, b]. The Riemann integration process is sketched in Figure 7.1.
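The limiting process can be illustrated with Riemann sums built on random evaluation points ξₖ. A minimal sketch (not part of the original notes), assuming NumPy, for f(t) = t² on [0, 1], whose integral is 1/3:

```python
import numpy as np

def riemann_sum(f, a, b, n, rng):
    t = np.linspace(a, b, n + 1)           # partition a = t_0 < ... < t_n = b
    xi = rng.uniform(t[:-1], t[1:])        # xi_k chosen anywhere in [t_{k-1}, t_k]
    return float(np.sum(f(xi) * np.diff(t)))  # I_n = sum f(xi_k)(t_k - t_{k-1})

rng = np.random.default_rng(0)
I_n = riemann_sum(lambda t: t**2, 0.0, 1.0, 100000, rng)
print(I_n)   # ~1/3, regardless of how the xi_k are chosen
```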
Example 7.1

Determine if the function f(t) is Riemann integrable in [0, 1], where

    f(t) = { 0 if t is rational
           { 1 if t is irrational

On choosing ξₖ rational, I = 0, but if ξₖ is irrational, then I = 1. So f(t) is not Riemann integrable.
7.2.3 Lebesgue integral

Let us consider sets belonging to the interval [a, b], where a and b are real scalars. The covering of a set is an open set which contains the given set; the covering will have a certain length. The outer measure of a set is the length of the smallest covering possible. The inner measure of the set is (b − a) minus the outer measure of the complement of the set. If the two measures are the same, then the value is the measure and the set is measurable.

For the set I = (a, b), the measure is m(I) = |b − a|. If there are two disjoint intervals I₁ = (a, b) and I₂ = (c, d), then the measure of I = I₁ ∪ I₂ is m(I) = |b − a| + |c − d|.

²Georg Friedrich Bernhard Riemann, 1826-1866, Hanover-born German mathematician.
Figure 7.2: Lebesgue integration process
Consider again a function f(t) defined in the interval [a, b]. Let the set

    eₖ = {t : y_{k−1} ≤ f(t) ≤ yₖ}

(eₖ is the set of all t's for which f(t) is bounded between two values, y_{k−1} and yₖ). Also let the sum Iₙ be defined as

    Iₙ = y₁ m(e₁) + y₂ m(e₂) + … + yₙ m(eₙ)    (7.2)

Let maxₖ |yₖ − y_{k−1}| → 0 as n → ∞. Then Iₙ → I, where

    I = ∫_a^b f(t) dt    (7.3)

I is said to be the Lebesgue³ integral of f(t). The Lebesgue integration process is sketched in Figure 7.2.
Example 7.2

To integrate the function in the previous example, we observe first that the set of rational numbers in [0,1] has measure zero, while the set of irrational numbers has measure 1. The rationals are countable, and a countable set of points can be covered by intervals of arbitrarily small total length, so it has measure zero; the irrationals, their complement, then have measure 1 over the same interval. Thus from equation (7.2) the Lebesgue integral exists, and is

    Iₙ = y₁ m(e₁) + y₂ m(e₂) = 1(1) + 0(0) = 1.
The Riemann integral is based on the concept of the length of an interval, and the Lebesgue integral on the measure of a set. When both integrals exist, their values are the same. If the Riemann integral exists, the Lebesgue integral also exists. The converse is not necessarily true.

The importance of the distinction is subtle. It can be shown that certain integral operators which operate on Lebesgue integrable functions are guaranteed to generate a function which is also Lebesgue integrable. In contrast, certain operators operating on functions which are at most Riemann integrable can generate functions which are not Riemann integrable.

³Henri Léon Lebesgue, 1875-1941, French mathematician.
7.3 Vector spaces

A field F is typically a set of numbers which contains the sum, difference, product, and quotient (excluding division by zero) of any two numbers in the field.⁴ Examples are the set of rational numbers Q, real numbers R, or complex numbers C. We will usually use only R or C. Note the integers Z are not a field, as the quotient of two integers is not necessarily an integer.

Consider a set S with two operations defined: addition of two elements (denoted by +), both belonging to the set, and multiplication of a member of the set by a scalar belonging to a field F (indicated by juxtaposition). Let us also require the set to be closed under the operations of addition and multiplication by a scalar, i.e. if x ∈ S, y ∈ S, and α ∈ F, then x + y ∈ S and αx ∈ S. Furthermore:

1. ∀ x, y ∈ S : x + y = y + x. For all elements x and y in S, the addition operator on such elements is commutative.

2. ∀ x, y, z ∈ S : (x + y) + z = x + (y + z). For all elements x, y, and z in S, the addition operator on such elements is associative.

3. ∃ 0 ∈ S | ∀ x ∈ S, x + 0 = x. There exists a 0, which is an element of S, such that for all x in S, when the addition operator is applied to 0 and x, the original element x is yielded.

4. ∀ x ∈ S, ∃ −x ∈ S | x + (−x) = 0. For all x in S there exists an element −x, also in S, such that when added to x, it yields the 0 element.

5. ∃ 1 ∈ F | ∀ x ∈ S, 1x = x. There exists an element 1 in F such that for all x in S, 1 multiplying the element x yields the element x.

6. ∀ a, b ∈ F, ∀ x ∈ S, (a + b)x = ax + bx. For all a and b which are in F and for all x which are in S, the addition operator distributes onto multiplication.

7. ∀ a ∈ F, ∀ x, y ∈ S, a(x + y) = ax + ay.

8. ∀ a, b ∈ F, ∀ x ∈ S, a(bx) = (ab)x.

Such a set is called a linear space or vector space over the field F, and its elements are called vectors. We will see that our definition is inclusive enough to include elements which are traditionally thought of as vectors (in the sense of a directed line segment), and some which are outside of this tradition. Note that typical vector elements x and y are no longer indicated in bold. However, they are in general not scalars, though in special cases they can be.

⁴More formally, a field is what is known as a commutative ring with some special properties, not discussed here. What are known as function fields can also be defined.
The element 0 ∈ S is called the null vector. Examples of vector spaces S over the field of real numbers (i.e. F : R) are:

1. S : R¹. Set of real numbers, x = x₁, with addition and scalar multiplication defined as usual; also known as S : R.

2. S : R². Set of ordered pairs of real numbers, x = (x₁, x₂)ᵀ, with addition and scalar multiplication defined as:

    x + y = (x₁ + y₁, x₂ + y₂)ᵀ,
    αx = (αx₁, αx₂)ᵀ,

where x = (x₁, x₂)ᵀ ∈ R², y = (y₁, y₂)ᵀ ∈ R², α ∈ R¹.

3. S : Rⁿ. Set of n real numbers, x = (x₁, …, xₙ)ᵀ, with addition and scalar multiplication defined similar to the above.

4. S : R^∞. Set of an infinite number of real numbers, x = (x₁, x₂, …)ᵀ, with addition and scalar multiplication defined similar to the above. Functions, e.g. x = 3t² + t, t ∈ R¹, generate vectors x ∈ R^∞.

5. S : C. Set of all complex numbers z = z₁, with z₁ = a₁ + ib₁; a₁, b₁ ∈ R¹.

6. S : C². Set of all ordered pairs of complex numbers z = (z₁, z₂)ᵀ, with z₁ = a₁ + ib₁, z₂ = a₂ + ib₂; a₁, a₂, b₁, b₂ ∈ R¹.

7. S : Cⁿ. Set of n complex numbers, z = (z₁, …, zₙ)ᵀ.

8. S : C^∞. Set of an infinite number of complex numbers, z = (z₁, z₂, …)ᵀ. Scalar complex functions give rise to sets in C^∞.

9. S : M. Set of all m × n matrices with addition and multiplication by a scalar defined as usual, and m ∈ N, n ∈ N.

10. S : C[a, b]. Set of real-valued continuous functions x(t) for t ∈ [a, b] ∈ R¹, with addition and scalar multiplication defined as usual.

11. S : Cⁿ[a, b]. Set of real-valued functions x(t) for t ∈ [a, b] with continuous nth derivative, with addition and scalar multiplication defined as usual; n ∈ N.

12. S : L₂[a, b]. Set of real-valued functions x(t) such that x(t)² is Lebesgue integrable in t ∈ [a, b] ∈ R¹, a < b, with addition and multiplication by a scalar defined as usual. Note that the integral must be finite.

13. S : Lₚ[a, b]. Set of real-valued functions x(t) such that |x(t)|ᵖ, p ∈ [1, ∞), is Lebesgue integrable in t ∈ [a, b] ∈ R¹, a < b, with addition and multiplication by a scalar defined as usual. Note that the integral must be finite.

14. S : Lₚ[a, b]. Set of complex-valued functions x(t) such that |x(t)|ᵖ, p ∈ [1, ∞) ∈ R¹, is Lebesgue integrable in t ∈ [a, b] ∈ R¹, a < b, with addition and multiplication by a scalar defined as usual.

15. S : W₂¹(G). Set of real-valued functions u(x) such that u(x)² and Σᵢ₌₁ⁿ (∂u/∂xᵢ)² are Lebesgue integrable in G, where x ∈ G ∈ Rⁿ, n ∈ N. This is an example of a Sobolev⁵ space, which is useful in variational calculus and the finite element method. Sobolev space W₂¹(G) is to Lebesgue space L₂[a, b] as the real space R¹ is to the rational space Q¹. That is, Sobolev space allows a broader class of functions to be solutions to physical problems. See Zeidler.

16. S : Wₚᵐ(G). Set of real-valued functions u(x) such that |u(x)|ᵖ and Σᵢ₌₁ⁿ ( |∂u/∂xᵢ|ᵖ + |∂ᵐu/∂xᵢᵐ|ᵖ ) are Lebesgue integrable in G, where x ∈ G ∈ Rⁿ, n, m ∈ N (may be in error due to neglect of mixed partial derivatives!).

17. S : Pⁿ. Set of all polynomials of degree ≤ n with addition and multiplication by a scalar defined as usual; n ∈ N.
Some examples of sets that are not vector spaces are Z and N over the ﬁeld R for the same
reason that they do not form a ﬁeld, namely that they are not closed over the multiplication
operation.
• S
is a subspace of S if S
⊂ S, and S
is itself a vector space. For example R
2
is a
subspace of R
3
.
• If S
1
and S
2
are subspaces of S, then S
1
∩ S
2
is also a subspace. The set S
1
+S
2
of all
x
1
+ x
2
with x
1
∈ S
1
and x
2
∈ S
2
is also a subspace of S.
• If S
1
+S
2
= S, and S
1
∩ S
2
= ¦0¦ then S is the direct sum of S
1
and S
2
written as
S = S
1
⊕S
2
(7.4)
• If x
1
, x
2
, , x
n
are elements of a vector space S and α
1
, α
2
, , α
n
belong to the ﬁeld
F, then x = α
1
x
1
+ α
2
x
2
+ + α
n
x
n
∈ S is a linear combination.
• Vectors x
1
, x
2
, , x
n
for which it is possible to have α
1
x
1
+ α
2
x
2
+ + α
n
x
n
= 0
where the scalars α
i
are not all zero, are said to be linearly dependent. Otherwise they
are linearly independent.
5
Sergei Lvovich Sobolev, 19081989, St. Petersburgborn Russian physicist and mathematician.
181
• The set of all linear combination of k vectors ¦x
1
, x
2
, , x
k
¦ of a vector space constitute
a subspace of the vector space.
• A set of n linearly independent vectors in an ndimensional vector space is said to span
the space.
• If the vector space S contains a set of n linearly independent set of vectors, and any
set with (n + 1) elements is linearly dependent, then the space is said to be ﬁnite
dimensional, and n is the dimension of the space. If n does not exist, the space is
inﬁnite dimensional.
• A basis of a finite dimensional space of dimension n is a set of n linearly independent vectors {u_1, u_2, ..., u_n}. All elements of the vector space can be represented as linear combinations of the basis vectors.
• A set of vectors in a linear space S is convex iff ∀x, y ∈ S and α ∈ [0, 1] ∈ R^1, αx + (1 − α)y ∈ S. For example, if we consider S to be a subspace of R^2, that is, a region of the x, y plane, S is convex if for any two points in S, all points on the line segment between them also lie in S. Regions with lobes are not convex. A function f is convex iff the space on which it operates is convex and if f(αx + (1 − α)y) ≤ αf(x) + (1 − α)f(y) ∀ x, y ∈ S, α ∈ [0, 1] ∈ R^1.
7.3.1 Normed spaces
The norm ||x|| of a vector x ∈ S is a real number that satisfies the following properties:

1. ||x|| ≥ 0.
2. ||x|| = 0 if and only if x = 0.
3. ||αx|| = |α| ||x||; α ∈ C^1.
4. ||x + y|| ≤ ||x|| + ||y|| (triangle or Minkowski^6 inequality).
The norm is a natural generalization of the length of a vector. All properties of a norm can be cast in terms of ordinary finite-dimensional Euclidean vectors, and thus have geometrical interpretations. The first property says length is greater than or equal to zero. The second says the only vector with zero length is the zero vector. The third says the length of a scalar multiple of a vector is equal to the magnitude of the scalar times the length of the original vector. The Minkowski inequality is easily understood in terms of vector addition. If we add vectorially two vectors x and y, we will get a third vector whose length is less than or equal to the sum of the lengths of the original two vectors. We will get equality when x and y point in the same direction. The interesting generalization is that these properties hold for the norms of functions as well as ordinary geometric vectors.
Examples of norms are:
6 Hermann Minkowski, 1864-1909, Russian/Lithuanian-born German-based mathematician and physicist.
1. x ∈ R^1, ||x|| = |x|. This space is also written as ℓ_1(R^1) or in abbreviated form ℓ_1^1. The subscript on ℓ in either case denotes the type of norm; the superscript in the second form denotes the dimension of the space. Another way to denote this norm is ||x||_1.
2. x ∈ R^2, x = (x_1, x_2)^T, the Euclidean norm ||x|| = ||x||_2 = +√(x_1^2 + x_2^2) = +√(x^T x). We can call this normed space E^2, or ℓ_2(R^2), or ℓ_2^2.
3. x ∈ R^n, x = (x_1, x_2, ..., x_n)^T, ||x|| = ||x||_2 = +√(x_1^2 + x_2^2 + ... + x_n^2) = +√(x^T x). We can call this norm the Euclidean norm and the normed space Euclidean E^n, or ℓ_2(R^n), or ℓ_2^n.
4. x ∈ R^n, x = (x_1, x_2, ..., x_n)^T, ||x|| = ||x||_1 = |x_1| + |x_2| + ... + |x_n|. This is also ℓ_1(R^n) or ℓ_1^n.
5. x ∈ R^n, x = (x_1, x_2, ..., x_n)^T, ||x|| = ||x||_p = (|x_1|^p + |x_2|^p + ... + |x_n|^p)^{1/p}, where 1 ≤ p < ∞. This space is called ℓ_p(R^n) or ℓ_p^n.
6. x ∈ R^n, x = (x_1, x_2, ..., x_n)^T, ||x|| = ||x||_∞ = max_{1≤k≤n} |x_k|. This space is called ℓ_∞(R^n) or ℓ_∞^n.
7. x ∈ C^n, x = (x_1, x_2, ..., x_n)^T, ||x|| = ||x||_2 = +√(|x_1|^2 + |x_2|^2 + ... + |x_n|^2) = +√(x̄^T x). This space is described as ℓ_2(C^n).
8. x ∈ C[a, b], ||x|| = max_{a≤t≤b} |x(t)|; t ∈ [a, b] ∈ R^1.

9. x ∈ C^1[a, b], ||x|| = max_{a≤t≤b} |x(t)| + max_{a≤t≤b} |x'(t)|; t ∈ [a, b] ∈ R^1.
10. x ∈ L_2[a, b], ||x|| = ||x||_2 = +√( ∫_a^b x(t)^2 dt ); t ∈ [a, b] ∈ R^1.

11. x ∈ L_p[a, b], ||x|| = ||x||_p = +( ∫_a^b |x(t)|^p dt )^{1/p}; t ∈ [a, b] ∈ R^1.

12. x ∈ L_2[a, b], ||x|| = ||x||_2 = +√( ∫_a^b |x(t)|^2 dt ) = +√( ∫_a^b x̄(t)x(t) dt ); t ∈ [a, b] ∈ R^1.

13. x ∈ L_p[a, b], ||x|| = ||x||_p = +( ∫_a^b |x(t)|^p dt )^{1/p} = +( ∫_a^b ( x̄(t)x(t) )^{p/2} dt )^{1/p}; t ∈ [a, b] ∈ R^1.
14. u ∈ W_2^1(G), ||u|| = ||u||_{1,2} = +√( ∫_G ( u(x)u(x) + Σ_{i=1}^n (∂u/∂x_i)(∂u/∂x_i) ) dx ); x ∈ G ∈ R^n, u ∈ L_2(G), ∂u/∂x_i ∈ L_2(G). This is an example of a Sobolev space which is useful in variational calculus and the finite element method.
15. u ∈ W_p^m(G), ||u|| = ||u||_{m,p} = +( ∫_G ( |u(x)|^p + Σ_{i=1}^n ( |∂u/∂x_i|^p + ... + |∂^m u/∂x_i^m|^p ) ) dx )^{1/p}; x ∈ G ∈ R^n, ∂^α u/∂x_i^α ∈ L_p(G) ∀ α ≤ m ∈ N. (may be in error due to neglect of mixed partial derivatives!)
• A vector space in which a norm is deﬁned is called a normed vector space.
• The metric, or distance, between x and y is defined by d(x, y) = ||x − y||. This is a natural metric induced by the norm. Thus ||x|| is the distance between x and the null vector.
• The diameter of a set of vectors is the supremum (i.e. least upper bound) of the
distance between any two vectors of the set.
• Let S_1 and S_2 be subsets of a normed vector space S such that S_1 ⊂ S_2. Then S_1 is dense in S_2 if for every x^(2) ∈ S_2 and every ε > 0, there is an x^(1) ∈ S_1 for which ||x^(2) − x^(1)|| < ε.
• A sequence x^(1), x^(2), ... ∈ S, where S is a normed vector space, is a Cauchy^7 sequence if for every ε > 0 there exists a number N_ε such that ||x^(m) − x^(n)|| < ε for every m and n greater than N_ε.
• The sequence x^(1), x^(2), ... ∈ S, where S is a normed vector space, converges if there exists an x ∈ S such that lim_{n→∞} ||x^(n) − x|| = 0. Then x is the limit point of the sequence, and we write lim_{n→∞} x^(n) = x or x^(n) → x.
• Every convergent sequence is a Cauchy sequence, but the converse is not true.
• A normed vector space S is complete if every Cauchy sequence in S is convergent, i.e.
if S contains all the limit points.
• A complete normed vector space is also called a Banach^8 space.
• It can be shown that every finite-dimensional normed vector space is complete.
• Norms || · ||_i and || · ||_j in S are equivalent if there exist a, b > 0 such that, for any x ∈ S,

    a ||x||_j ≤ ||x||_i ≤ b ||x||_j.   (7.5)
• In a finite-dimensional vector space, any norm is equivalent to any other norm. So, the convergence of a sequence in such a space does not depend on the choice of norm.
We recall that if z ∈ C^1, then we can represent z as z = a + ib, where a ∈ R^1, b ∈ R^1; further, the complex conjugate of z is represented as z̄ = a − ib. It can be easily shown for z_1 ∈ C^1, z_2 ∈ C^1 that
• \overline{z_1 + z_2} = z̄_1 + z̄_2

• \overline{z_1 − z_2} = z̄_1 − z̄_2

• \overline{z_1 z_2} = z̄_1 z̄_2

• \overline{z_1/z_2} = z̄_1/z̄_2
7 Augustin Louis Cauchy, 1789-1857, French mathematician and physicist.
8 Stefan Banach, 1892-1945, Polish mathematician.
We also recall that the modulus of z, |z|, has the following property:

    |z|^2 = z z̄ = (a + ib)(a − ib) = a^2 + iab − iab − i^2 b^2 = a^2 + b^2 ≥ 0.
Example 7.3
Consider x ∈ R^3 and take

    x = (1, −4, 2)^T.

Find the norm if x ∈ ℓ_1^3 (absolute value norm, non-Euclidean), x ∈ ℓ_2^3 (Euclidean norm), if x ∈ ℓ_3^3 (another non-Euclidean norm), and if x ∈ ℓ_∞^3 (maximum norm).

By the definition of the absolute value norm for x ∈ ℓ_1^3,

    ||x|| = ||x||_1 = |x_1| + |x_2| + |x_3|,

we get

    ||x||_1 = |1| + |−4| + |2| = 1 + 4 + 2 = 7.
Now consider the Euclidean norm for x ∈ ℓ_2^3. By the definition of the Euclidean norm,

    ||x|| = ||x||_2 = +√(x_1^2 + x_2^2 + x_3^2),

we get

    ||x||_2 = +√(1^2 + (−4)^2 + 2^2) = √(1 + 16 + 4) = +√21 ∼ 4.583.

Since the norm is Euclidean, this is the ordinary length of the vector.
For the non-Euclidean norm, x ∈ ℓ_3^3, we have

    ||x|| = ||x||_3 = +( |x_1|^3 + |x_2|^3 + |x_3|^3 )^{1/3},

so

    ||x||_3 = +( |1|^3 + |−4|^3 + |2|^3 )^{1/3} = (1 + 64 + 8)^{1/3} ∼ 4.179.
For the maximum norm, x ∈ ℓ_∞^3, we have

    ||x|| = ||x||_∞ = lim_{p→∞} +( |x_1|^p + |x_2|^p + |x_3|^p )^{1/p},

so

    ||x||_∞ = lim_{p→∞} +( |1|^p + |−4|^p + |2|^p )^{1/p} = 4.

This picks out the magnitude of the component of x whose magnitude is maximum. Note that as p increases the norm of the vector decreases.
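These four norms can be verified numerically; the following is a quick sketch using NumPy (our choice of tool, not part of the notes):

```python
import numpy as np

x = np.array([1.0, -4.0, 2.0])

# p-norms of Example 7.3, computed with numpy.linalg.norm
print(np.linalg.norm(x, 1))       # 7.0
print(np.linalg.norm(x, 2))       # 4.582... = sqrt(21)
print(np.linalg.norm(x, 3))       # 4.179... = 73**(1/3)
print(np.linalg.norm(x, np.inf))  # 4.0
```

The monotone decrease of the norm with p is visible in the printed values.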
Example 7.4
For x ∈ ℓ_2(C^2), find the norm of

    x = (i, 1)^T = (0 + 1i, 1 + 0i)^T.

The definition of the space defines the norm as a 2 norm ("Euclidean"):

    ||x|| = ||x||_2 = +√(x̄^T x) = +√(x̄_1 x_1 + x̄_2 x_2) = +√(|x_1|^2 + |x_2|^2),

so

    ||x||_2 = +√( (0 − 1i)(0 + 1i) + (1 − 0i)(1 + 0i) ),

    ||x||_2 = +√(−i^2 + 1) = +√2.
Note that if we were negligent in the use of the conjugate and defined the norm as ||x||_2 = +√(x^T x), we would obtain

    ||x||_2 = +√(x^T x) = +√( (i, 1)(i, 1)^T ) = +√(i^2 + 1) = +√(−1 + 1) = 0!

This violates the property of the norm that ||x|| > 0 if x ≠ 0!
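The role of the conjugate can be seen numerically; a sketch using NumPy, where `np.vdot` conjugates its first argument (the library choice is ours):

```python
import numpy as np

x = np.array([1j, 1.0])

# Correct Hermitian norm: conjugate the first factor before the dot product.
good = np.sqrt(np.vdot(x, x).real)   # vdot conjugates its first argument
# Negligent "norm" without the conjugate:
bad = np.sqrt(complex(x @ x))

print(good)  # 1.4142... = sqrt(2)
print(bad)   # 0
```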
Example 7.5
Consider x ∈ L_2[0, 1] where x(t) = 2t; t ∈ [0, 1] ∈ R^1. Find ||x||.

By the definition of the norm for this space, we have

    ||x|| = ||x||_2 = +√( ∫_0^1 x^2(t) dt ),

    ||x||_2^2 = ∫_0^1 x(t)x(t) dt = ∫_0^1 (2t)(2t) dt = 4 ∫_0^1 t^2 dt = 4 [t^3/3]_0^1,

    ||x||_2^2 = 4 (1^3/3 − 0^3/3) = 4/3,

    ||x||_2 = 2√3/3 ∼ 1.1547.
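A numerical check of this norm, sketched with SciPy quadrature (our tooling choice):

```python
from scipy.integrate import quad
import numpy as np

# L2 norm of x(t) = 2t on [0, 1]: integrate x^2 and take the square root.
norm_sq, _ = quad(lambda t: (2*t)**2, 0.0, 1.0)
print(np.sqrt(norm_sq))  # 1.1547... = 2/sqrt(3)
```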
Example 7.6
Consider x ∈ L_3[−2, 3] where x(t) = 1 + 2it; t ∈ [−2, 3] ∈ R^1. Find ||x||.

By the definition of the norm we have

    ||x|| = ||x||_3 = +( ∫_{−2}^{3} |1 + 2it|^3 dt )^{1/3},

    ||x||_3 = +( ∫_{−2}^{3} ( \overline{(1 + 2it)}(1 + 2it) )^{3/2} dt )^{1/3},

    ||x||_3^3 = ∫_{−2}^{3} ( (1 − 2it)(1 + 2it) )^{3/2} dt,

    ||x||_3^3 = ∫_{−2}^{3} (1 + 4t^2)^{3/2} dt,

    ||x||_3^3 = [ √(1 + 4t^2) (5t/8 + t^3) + (3/16) sinh^{−1}(2t) ]_{−2}^{3},

    ||x||_3^3 = 37√17/4 + (3/16) sinh^{−1}(4) + (3/16)( 154√37 + sinh^{−1}(6) ) ∼ 214.638,

    ||x||_3 ∼ 5.98737.
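Since |1 + 2it|^3 = (1 + 4t^2)^{3/2}, the result can be checked by quadrature; a sketch using SciPy (our choice of tool):

```python
from scipy.integrate import quad

# Numerical check of ||x||_3 for x(t) = 1 + 2it on [-2, 3]:
# |x(t)|^3 = (1 + 4 t^2)^(3/2)
val, _ = quad(lambda t: (1.0 + 4.0*t*t)**1.5, -2.0, 3.0)
print(val)              # ≈ 214.638
print(val**(1.0/3.0))   # ≈ 5.98737
```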
Example 7.7
Consider x ∈ L_p[a, b] where x(t) = c; t ∈ [a, b] ∈ R^1, c ∈ C^1. Find ||x||. Let us take the complex constant c = α + iβ, α ∈ R^1, β ∈ R^1. Then

    |c| = (α^2 + β^2)^{1/2}.

Now

    ||x|| = ||x||_p = ( ∫_a^b |x(t)|^p dt )^{1/p},

    ||x||_p = ( ∫_a^b (α^2 + β^2)^{p/2} dt )^{1/p},

    ||x||_p = ( (α^2 + β^2)^{p/2} ∫_a^b dt )^{1/p},

    ||x||_p = ( (α^2 + β^2)^{p/2} (b − a) )^{1/p},

    ||x||_p = (α^2 + β^2)^{1/2} (b − a)^{1/p},

    ||x||_p = |c| (b − a)^{1/p}.

Note the norm is proportional to the magnitude of the complex constant c. For finite p, it also increases with the extent of the domain, b − a. For infinite p, it is independent of the length of the domain, and simply selects the value |c|. This is consistent with the norm in L_∞ selecting the maximum value of the function.
Example 7.8
Consider x ∈ L_p[0, b] where x(t) = 2t^2; t ∈ [0, b] ∈ R^1. Find ||x||. Now

    ||x|| = ||x||_p = ( ∫_0^b |x(t)|^p dt )^{1/p},

    ||x||_p = ( ∫_0^b |2t^2|^p dt )^{1/p},

    ||x||_p = ( ∫_0^b 2^p t^{2p} dt )^{1/p},

    ||x||_p = ( [ 2^p t^{2p+1} / (2p + 1) ]_0^b )^{1/p},

    ||x||_p = ( 2^p b^{2p+1} / (2p + 1) )^{1/p},

    ||x||_p = 2 b^{(2p+1)/p} / (2p + 1)^{1/p}.

Note as p → ∞ that (2p + 1)^{1/p} → 1, and (2p + 1)/p → 2, so

    lim_{p→∞} ||x|| = 2b^2.

This is the maximum value of x(t) = 2t^2 in t ∈ [0, b], as expected.
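The approach of ||x||_p to this maximum can be watched numerically; a sketch using the closed-form result above, with b = 1 assumed for illustration:

```python
# p-norm of x(t) = 2 t^2 on [0, b] with b = 1, approaching the
# sup-norm limit 2 b^2 = 2 as p grows.
b = 1.0
for p in (1, 2, 10, 100, 1000):
    norm_p = (2.0**p * b**(2*p + 1) / (2*p + 1))**(1.0/p)
    print(p, norm_p)
# norm_p climbs monotonically toward 2.0 as p -> infinity
```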
Example 7.9
Consider u ∈ W_2^1(G) with u(x) = 2x^4; x ∈ [0, 3] ∈ R^1. Find ||u||.

We note here that n = 1; the consequent one-dimensional domain G = [0, 3] is a closed interval on the real number line. For more general problems, it can be areas, volumes, or n-dimensional regions of space. Also here m = 1 and p = 2, so we require u ∈ L_2[0, 3] and ∂u/∂x ∈ L_2[0, 3], which for our choice of u is satisfied. The formula for the norm in W_2^1[0, 3] is

    ||u|| = ||u||_{1,2} = +√( ∫_0^3 ( u(x)u(x) + (du/dx)(du/dx) ) dx ),

    ||u||_{1,2} = +√( ∫_0^3 ( (2x^4)(2x^4) + (8x^3)(8x^3) ) dx ),

    ||u||_{1,2} = +√( ∫_0^3 (4x^8 + 64x^6) dx ),

    ||u||_{1,2} = +√( [ 4x^9/9 + 64x^7/7 ]_0^3 ) = 54√(69/7) ∼ 169.539.
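A quadrature check of this Sobolev norm, sketched with SciPy (our choice of tool):

```python
from scipy.integrate import quad
import numpy as np

# Sobolev W_2^1 norm of u(x) = 2 x^4 on [0, 3]: integrate u^2 + (u')^2.
integrand = lambda x: (2*x**4)**2 + (8*x**3)**2
val, _ = quad(integrand, 0.0, 3.0)
print(np.sqrt(val))  # ≈ 169.539
```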
Example 7.10
Consider the sequence of vectors x^(1), x^(2), ... ∈ Q^3, where Q^3 is the space of rational number triples over the field of rational numbers, and

    x^(1) = {1, 3, 0} = { x_(1)1, x_(1)2, x_(1)3 },   (7.6)

    x^(2) = { 1/(1 + 1), 3, 0 } = { 1/2, 3, 0 },   (7.7)

    x^(3) = { 1/(1 + 1/2), 3, 0 } = { 2/3, 3, 0 },   (7.8)

    x^(4) = { 1/(1 + 2/3), 3, 0 } = { 3/5, 3, 0 },   (7.9)

    ...

    x^(n) = { 1/(1 + x_(n−1)1), 3, 0 },   (7.11)

for n ≥ 2. Does this sequence have a limit point in Q^3? Is this a Cauchy sequence?

As n → ∞,

    x^(n) → { (√5 − 1)/2, 3, 0 }.

Since the norms of the differences between successive terms shrink to zero, the sequence is a Cauchy sequence, which is a necessary but not sufficient condition for convergence. However, the point to which the sequence tends is irrational and so does not lie in Q^3; the sequence has no limit point in Q^3 and is therefore not convergent in Q^3.
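The first component of the sequence can be iterated in exact rational arithmetic; a sketch using Python's `fractions` module (our tooling choice):

```python
from fractions import Fraction

# First component of the sequence: x_(n)1 = 1/(1 + x_(n-1)1), starting from 1.
# Every iterate is rational, yet the limit (sqrt(5)-1)/2 is irrational.
x = Fraction(1)
for n in range(2, 12):
    x = 1 / (1 + x)
    print(n, x, float(x))
# The iterates 1/2, 2/3, 3/5, 5/8, ... are ratios of consecutive Fibonacci
# numbers approaching 0.6180..., which lies outside Q.
```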
Example 7.11
Is the infinite sequence of linearly independent functions

    v = { v_1(t), v_2(t), ..., v_n(t), ... } = { t(t), t(t^2), t(t^3), ..., t(t^n), ... }

a basis in L_2[0, 1]?

The fundamental question is whether an arbitrary function x(t) can be approximated as

    x(t) = Σ_{i=1}^{n} a_i v_i(t).

First check if the sequence is a Cauchy sequence:

    lim_{n,m→∞} ||v_n(t) − v_m(t)||_2^2 = ∫_0^1 ( t^{n+1} − t^{m+1} )^2 dt = 1/(2n + 3) − 2/(m + n + 3) + 1/(2m + 3) → 0.

As this norm approaches zero, the sequence is a Cauchy sequence, so a necessary condition for convergence in L_2 is satisfied. We also have

    lim_{n→∞} v_n(t) = { 0, 0 ≤ t < 1;  1, t = 1. }

The above function, the one to which the sequence converges, is in L_2, which is another necessary condition for convergence in L_2. It is difficult to prove that approximations to all functions x(t) will converge. In general, one would choose x(t), get an expression for the coefficients a_i, and then examine the necessary and sufficient condition

    lim_{n→∞} || x(t) − Σ_{i=1}^{n} a_i v_i(t) ||_2 = 0.

The following calculation suggests the contribution of the basis function for large n is becoming negligible:

    lim_{n→∞} ||v_n(t)||_2^2 = ∫_0^1 ( t^{n+1} )^2 dt = 1/(2n + 3) → 0.

This does not prove convergence, as one needs to consider the norm of the difference between the actual function and the sum of the products of the coefficients and basis functions for such a proof.
Example 7.12
Is the infinite sequence of linearly independent functions

    v = { t(t − 2/5), t(t^2 − 1/5), t(t^3 − 0), t(t^4 + 1/5), ..., t(t^n + (n − 3)/5), ... }

a basis in L_2[0, 1]?

Following the logic of the previous example, we check the norm of an arbitrary function in the series for convergence:

    lim_{n→∞} ||v_n(t)||_2 = ( ∫_0^1 ( t( t^n + (n − 3)/5 ) )^2 dt )^{1/2} = √( (36 + 12n + 33n^2 − 3n^3 + 2n^4) / (75(3 + n)(3 + 2n)) ) → n/√75 → ∞.

Since the norm of the basis function grows without bound with n, we do not expect convergence. A similar, but lengthy, calculation shows ||v_n(t) − v_m(t)||_2 → ∞ as m, n → ∞.
7.3.2 Inner product spaces
The inner product <x, y> is, in general, a complex scalar (<x, y> ∈ C^1) associated with two elements x and y of a normed vector space satisfying the following rules. For x, y, z ∈ S and α, β ∈ C,

1. <x, x> > 0 if x ≠ 0.
2. <x, x> = 0 if and only if x = 0.
3. <x, αy + βz> = α<x, y> + β<x, z>; α ∈ C^1, β ∈ C^1.
4. <x, y> = \overline{<y, x>}, where the overline indicates the complex conjugate of the inner product.

Inner product spaces are subspaces of linear vector spaces and are sometimes called pre-Hilbert^9 spaces. A pre-Hilbert space is not necessarily complete, so it may or may not form a Banach space.

9 David Hilbert, 1862-1943, German mathematician of great influence.
Example 7.13
Show that

    <αx, y> = ᾱ<x, y>.

Using the properties of the inner product and the complex conjugate we have

    <αx, y> = \overline{<y, αx>}
            = \overline{α<y, x>}
            = ᾱ \overline{<y, x>}
            = ᾱ <x, y>.

Note that in a real vector space we have

    <x, αy> = <αx, y> = α<x, y>, and also that   (7.13)

    <x, y> = <y, x>,   (7.14)

since every scalar is equal to its complex conjugate.
7.3.2.1 Hilbert space
A Banach space (i.e. a complete normed vector space) on which an inner product is defined is also called a Hilbert space. While Banach spaces allow for the definition of several types of norms, Hilbert spaces are more restrictive: we must define the norm such that

    ||x|| = ||x||_2 = +√(<x, x>).   (7.15)

As a counterexample, if x ∈ R^2 and we take ||x|| = ||x||_3 = ( |x_1|^3 + |x_2|^3 )^{1/3} (thus x ∈ ℓ_3^2, which is a Banach space), we cannot find a definition of the inner product which satisfies all its properties. Thus the space ℓ_3^2 cannot be a Hilbert space!

Unless specified otherwise, the unsubscripted norm || · || can be taken to represent the Hilbert space norm || · ||_2. It is quite common for both subscripted and unsubscripted versions of the norm to appear in the literature.
Examples of spaces which are Hilbert spaces are

1. Finite dimensional vector spaces

• x ∈ R^3, y ∈ R^3 with <x, y> = x^T y = x_1 y_1 + x_2 y_2 + x_3 y_3, where x = (x_1, x_2, x_3)^T and y = (y_1, y_2, y_3)^T. This is the ordinary dot product for three-dimensional Cartesian vectors. With this definition of the inner product <x, x> = ||x||_2^2 = x_1^2 + x_2^2 + x_3^2, so the space is the Euclidean space, E^3. The space is also ℓ_2(R^3) or ℓ_2^3.

• x ∈ R^n, y ∈ R^n with <x, y> = x^T y = x_1 y_1 + x_2 y_2 + ... + x_n y_n, where x = (x_1, x_2, ..., x_n)^T and y = (y_1, y_2, ..., y_n)^T. This is the ordinary dot product for n-dimensional Cartesian vectors; the space is the Euclidean space, E^n, or ℓ_2(R^n), or ℓ_2^n.
• x ∈ C^n, y ∈ C^n with <x, y> = x̄^T y = x̄_1 y_1 + x̄_2 y_2 + ... + x̄_n y_n, where x = (x_1, x_2, ..., x_n)^T and y = (y_1, y_2, ..., y_n)^T. This space is also ℓ_2(C^n). Note that

  – <x, x> = x̄_1 x_1 + x̄_2 x_2 + ... + x̄_n x_n = |x_1|^2 + |x_2|^2 + ... + |x_n|^2 = ||x||_2^2.
  – <x, y> = x̄_1 y_1 + x̄_2 y_2 + ... + x̄_n y_n.
  – It is easily shown that this definition guarantees ||x||_2 ≥ 0 and <x, y> = \overline{<y, x>}.

2. Lebesgue spaces

• x ∈ L_2[a, b], y ∈ L_2[a, b], t ∈ [a, b] ∈ R^1 with <x, y> = ∫_a^b x(t)y(t) dt.

• x ∈ L_2[a, b], y ∈ L_2[a, b], t ∈ [a, b] ∈ R^1 with <x, y> = ∫_a^b x̄(t)y(t) dt.
3. Sobolev spaces

• u ∈ W_2^1(G), v ∈ W_2^1(G), x ∈ G ∈ R^n, n ∈ N, u ∈ L_2(G), ∂u/∂x_i ∈ L_2(G), v ∈ L_2(G), ∂v/∂x_i ∈ L_2(G) with

    <u, v> = ∫_G ( u(x)v(x) + Σ_{i=1}^{n} (∂u/∂x_i)(∂v/∂x_i) ) dx.

A Venn^10 diagram of some of the common spaces is shown in Figure 7.3.
7.3.2.2 Non-commutation of the inner product
By the fourth property of inner products, we see that the inner product operation is not commutative in general. Specifically, when the vectors are complex, <x, y> ≠ <y, x>. When the vectors x and y are real, the inner product is real, and the above-defined inner products commute, e.g. ∀x ∈ R^n, y ∈ R^n, <x, y> = <y, x>. At first glance one may wonder why one would define a non-commutative operation. It is done to preserve the positive definite character of the norm. If, for example, we had instead defined the inner product to commute for complex vectors, we might have taken <x, y> = x^T y. Then if we had taken x = (i, 1)^T and y = (1, 1)^T, we would have <x, y> = <y, x> = 1 + i. However, we would also have <x, x> = ||x||_2^2 = (i, 1)(i, 1)^T = 0! Obviously, this would violate the property of the norm since we must have ||x||_2^2 > 0 for x ≠ 0.
Interestingly, one can interpret the Heisenberg^11 uncertainty principle to be entirely consistent with our definition of an inner product which does not commute in a complex space. In quantum mechanics, the superposition of physical states of a system is defined by a complex-valued vector field. Position is determined by application of a position operator, and momentum is determined by application of a momentum operator. If one wants to know both position and momentum, both operators are applied. However, they do not commute,

10 John Venn, 1834-1923, English mathematician.
11 Werner Karl Heisenberg, 1901-1976, German physicist.
Figure 7.3: Venn diagram showing the relationship between various classes of spaces: linear spaces contain the Banach spaces (complete, normed), which contain the Hilbert spaces (complete, normed, inner product); marked examples include the complex scalars ℓ_2(C^1), the n-dimensional complex vectors ℓ_2(C^n), the Lebesgue integrable function space L_2, and the Sobolev space W_2^1; Minkowski space lies apart from the Banach spaces.
and application of them in different orders leads to a result which varies by a factor related to Planck's^12 constant.

Matrix multiplication is another example of an operation that, in general, does not commute. Such topics are considered in the more general group theory. Operators that commute are known as Abelian^13, and those that do not are known as non-Abelian.
7.3.2.3 Minkowski space
While non-relativistic quantum mechanics, as well as classical mechanics, works quite well in complex Hilbert spaces, the situation becomes more difficult when one considers Einstein's theories of special and general relativity. In those theories, which are developed to be consistent with experimental observations of 1) systems moving at velocities near the speed of light, 2) systems involving vast distances and gravitation, or 3) systems involving minute length scales, the relevant linear vector space is known as Minkowski space. The vectors have four components, describing the three space-like and one time-like location of an event in space-time, given for example by x = (x_0, x_1, x_2, x_3)^T, where x_0 = ct, with c as the speed of light. Unlike Hilbert or Banach spaces, however, a norm in the sense that we have defined does not exist! While inner products are defined in Minkowski space, they are defined in such

12 Max Karl Ernst Ludwig Planck, 1858-1947, German physicist.
13 Niels Henrik Abel, 1802-1829, Norwegian mathematician, considered solution of quintic equations by elliptic functions, proved impossibility of solving quintic equations with radicals, gave first solution of an integral equation, famously ignored by Gauss.
a fashion that the inner product of a space-time vector with itself can be negative. From the theory of special relativity, the inner product which renders the equations invariant under a Lorentz transformation (necessary so that the speed of light measures the same in all frames, unlike under the Galilean transformation of Newtonian theory) is

    <x, x> = x_0^2 − x_1^2 − x_2^2 − x_3^2.

Obviously, this inner product can take on negative values. The theory goes on to show that when relativistic effects are important, ordinary concepts of Euclidean geometry become meaningless, and a variety of non-intuitive results can be obtained. In the Venn diagram, we see that Minkowski spaces certainly are not Banach, but there are also linear spaces that are not Minkowski, so it occupies an island in the diagram.
Example 7.14
For x and y belonging to a Hilbert space, prove the parallelogram equality

    ||x + y||_2^2 + ||x − y||_2^2 = 2||x||_2^2 + 2||y||_2^2.

The left side is

    <x + y, x + y> + <x − y, x − y> = ( <x, x> + <x, y> + <y, x> + <y, y> ) + ( <x, x> − <x, y> − <y, x> + <y, y> )
                                    = 2<x, x> + 2<y, y>
                                    = 2||x||_2^2 + 2||y||_2^2.
Example 7.15
Prove the Schwarz^14 inequality

    ||x||_2 ||y||_2 ≥ |<x, y>|,

where x and y are elements of a Hilbert space.

If y = 0, both sides are zero and the equality holds. Let us take y ≠ 0. Then, for any scalar α, we have

    ||x − αy||_2^2 = <x − αy, x − αy>
                  = <x, x> − <x, αy> − <αy, x> + <αy, αy>
                  = <x, x> − α<x, y> − ᾱ<y, x> + ᾱα<y, y>.

On choosing

    α = <y, x>/<y, y> = \overline{<x, y>}/<y, y>,

we get

    ||x − αy||_2^2 = <x, x> − ( \overline{<x, y>}<x, y> )/<y, y> − [ ( <x, y><y, x> )/<y, y> − ( <y, x><x, y> )<y, y>/<y, y>^2 ]_{= 0}

                  = ||x||_2^2 − |<x, y>|^2 / ||y||_2^2,

    ||x − αy||_2^2 ||y||_2^2 = ||x||_2^2 ||y||_2^2 − |<x, y>|^2.
14 Hermann Amandus Schwarz, 1843-1921, Silesia-born German mathematician, deeply influenced by Weierstrass, on the faculty at Berlin, captain of the local volunteer fire brigade, and assistant to a railway stationmaster.
Since ||x − αy||_2^2 ||y||_2^2 ≥ 0,

    ||x||_2^2 ||y||_2^2 − |<x, y>|^2 ≥ 0,   (7.16)

    ||x||_2^2 ||y||_2^2 ≥ |<x, y>|^2,   (7.17)

    ||x||_2 ||y||_2 ≥ |<x, y>|.  QED   (7.18)

Note that this effectively defines the angle between two vectors. Because of the inequality, we have

    ||x||_2 ||y||_2 / |<x, y>| ≥ 1,

    |<x, y>| / ( ||x||_2 ||y||_2 ) ≤ 1.

Defining α to be the angle between the vectors x and y, we recover the familiar result from vector analysis

    cos α = <x, y> / ( ||x||_2 ||y||_2 ).   (7.19)

This reduces to the ordinary relationship we find in Euclidean geometry when x, y ∈ R^3.
Example 7.16
For x, y ∈ ℓ_2(R^2), find <x, y> if

    x = (1, 3)^T,  y = (2, −2)^T.

The solution is

    <x, y> = x^T y = (1)(2) + (3)(−2) = −4.

Note that the inner product yields a real scalar, but in contrast to the norm, it can be negative. Note also that the Schwarz inequality holds, as ||x||_2 ||y||_2 = √10 √8 ∼ 8.944 > |−4|. Also the Minkowski inequality holds, as ||x + y||_2 = ||(3, 1)^T||_2 = +√10 < ||x||_2 + ||y||_2 = √10 + √8.
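The inner product and both inequalities can be checked numerically; a NumPy sketch (our choice of tool):

```python
import numpy as np

x = np.array([1.0, 3.0])
y = np.array([2.0, -2.0])

ip = x @ y
print(ip)  # -4.0

# Schwarz: ||x||_2 ||y||_2 >= |<x, y>|
print(np.linalg.norm(x) * np.linalg.norm(y))  # 8.944... > 4
# Minkowski: ||x + y||_2 <= ||x||_2 + ||y||_2
print(np.linalg.norm(x + y))                  # 3.1622... = sqrt(10)
```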
Example 7.17
For x, y ∈ ℓ_2(C^2), find <x, y> if

    x = (−1 + i, 3 − 2i)^T,  y = (1 − 2i, −2)^T.

The solution is

    <x, y> = x̄^T y = (−1 − i)(1 − 2i) + (3 + 2i)(−2) = −9 − 3i.

Note that the inner product is a complex scalar which has negative components. It is easily shown that ||x||_2 = √15 ∼ 3.873, ||y||_2 = 3, and ||x + y||_2 = 2.4495. Also |<x, y>| = 9.4868. The Schwarz inequality holds, as (3.873)(3) = 11.62 > 9.4868. The Minkowski inequality holds, as 2.4495 < 3.873 + 3 = 6.873.
Example 7.18
For x, y ∈ L_2[0, 1], find <x, y> if

    x(t) = 3t + 4,  y(t) = −t − 1.

The solution is

    <x, y> = ∫_0^1 (3t + 4)(−t − 1) dt = [ −4t − 7t^2/2 − t^3 ]_0^1 = −17/2 = −8.5.

Once more the inner product is a negative scalar. It is easily shown that ||x||_2 = 5.56776, ||y||_2 = 1.52753, and ||x + y||_2 = 4.04145. Also |<x, y>| = 8.5. It is easily seen that the Schwarz inequality holds, as (5.56776)(1.52753) = 8.505 > 8.5. The Minkowski inequality holds, as 4.04145 < 5.56776 + 1.52753 = 7.095.
Example 7.19
For x, y ∈ L_2[0, 1], find <x, y> if

    x(t) = it,  y(t) = t + i.

We recall that

    <x, y> = ∫_0^1 x̄(t)y(t) dt.

The solution is

    <x, y> = ∫_0^1 (−it)(t + i) dt = [ t^2/2 − it^3/3 ]_0^1 = 1/2 − i/3.

The inner product is a complex scalar. It is easily shown that ||x||_2 = 0.57735, ||y||_2 = 1.1547, and ||x + y||_2 = 1.63299. Also |<x, y>| = 0.601. The Schwarz inequality holds, as (0.57735)(1.1547) = 0.6667 > 0.601. The Minkowski inequality holds, as 1.63299 < 0.57735 + 1.1547 = 1.7321.
Example 7.20
For u, v ∈ W_2^1(G), find <u, v> if

    u(x) = x_1 + x_2,  v(x) = −x_1 x_2,

and G is the square region in the x_1, x_2 plane with x_1 ∈ [0, 1], x_2 ∈ [0, 1]. We recall that

    <u, v> = ∫_G ( u(x)v(x) + (∂u/∂x_1)(∂v/∂x_1) + (∂u/∂x_2)(∂v/∂x_2) ) dx,

    <u, v> = ∫_0^1 ∫_0^1 [ (x_1 + x_2)(−x_1 x_2) + (1)(−x_2) + (1)(−x_1) ] dx_1 dx_2 = −4/3 = −1.33333.

The inner product here is a negative real scalar. It is easily shown that ||u||_{1,2} = 1.77951, ||v||_{1,2} = 0.881917, and ||u + v||_{1,2} = 1.13039. Also |<u, v>| = 1.33333. The Schwarz inequality holds, as (1.77951)(0.881917) = 1.56938 > 1.33333. The Minkowski inequality holds, as 1.13039 < 1.77951 + 0.881917 = 2.66143.
7.3.2.4 Orthogonality
One of the primary advantages of working in Hilbert spaces is that the inner product allows one to utilize the useful concept of orthogonality:

• x and y are said to be orthogonal to each other if

    <x, y> = 0.

• In an orthogonal set of vectors {v_1, v_2, ...} the elements of the set are all orthogonal to each other, so that <v_i, v_j> = 0 if i ≠ j.

• If a set {ϕ_1, ϕ_2, ...} exists such that <ϕ_i, ϕ_j> = δ_ij, then the elements of the set are orthonormal.

• A basis {v_1, v_2, ..., v_n} of a finite-dimensional space that is also orthogonal is an orthogonal basis. On dividing each vector by its norm we get

    ϕ_i = v_i / √(<v_i, v_i>)

to give us an orthonormal basis {ϕ_1, ϕ_2, ..., ϕ_n}.
Example 7.21
If elements x and y of an inner product space are orthogonal to each other, prove the Pythagorean theorem

    ||x||_2^2 + ||y||_2^2 = ||x + y||_2^2.

The right side is

    <x + y, x + y> = <x, x> + <x, y> + <y, x> + <y, y>
                   = <x, x> + <y, y>,  since <x, y> = <y, x> = 0 due to orthogonality,
                   = ||x||_2^2 + ||y||_2^2.
Example 7.22
Show that an orthogonal set of vectors in an inner product space is linearly independent.

Let {v_1, v_2, ...} be an orthogonal set of vectors. Then consider

    α_1 v_1 + α_2 v_2 + ... + α_j v_j + ... + α_n v_n = 0.

Taking the inner product with v_j, where j = 1, 2, ..., we get

    <v_j, ( α_1 v_1 + α_2 v_2 + ... + α_j v_j + ... + α_n v_n )> = <v_j, 0>,

    α_1 <v_j, v_1> + α_2 <v_j, v_2> + ... + α_j <v_j, v_j> + ... + α_n <v_j, v_n> = 0,

    α_j <v_j, v_j> = 0,

since all the other inner products are zero. Thus, α_j = 0, indicating that the set {v_1, v_2, ...} is linearly independent.
7.3.2.5 Gram-Schmidt procedure
In a given inner product space, the Gram-Schmidt^15 procedure can be used to find an orthonormal set using a linearly independent set of vectors.
Example 7.23
Find an orthonormal set of vectors {ϕ_1, ϕ_2, ...} in L_2[−1, 1] using linear combinations of the linearly independent set of vectors {1, t, t^2, t^3, ...} where −1 ≤ t ≤ 1.

Choose

    v_1 = 1.

Now choose the second vector linearly independent of v_1 as

    v_2 = a + bt.

This should be orthogonal to v_1, so that

    ∫_{−1}^{1} (a + bt) dt = 0,

    a(1 − (−1)) + (b/2)(1^2 − (−1)^2) = 0,

from which

    a = 0.

Taking b = 1 arbitrarily, we have

    v_2 = t.

Choose the third vector linearly independent of v_1 and v_2, i.e.

    v_3 = a + bt + ct^2.

For this to be orthogonal to v_1 and v_2, we get the conditions

    ∫_{−1}^{1} (a + bt + ct^2) dt = 0,

    ∫_{−1}^{1} (a + bt + ct^2) t dt = 0.

The first of these gives c = −3a. Taking a = 1 arbitrarily, we have c = −3. The second relation gives b = 0. Thus

    v_3 = 1 − 3t^2.

In this manner we can find as many orthogonal vectors as we want. We can make them orthonormal by dividing each by its norm, so that we have

    ϕ_1 = 1/√2,

    ϕ_2 = √(3/2) t,

    ϕ_3 = √(5/8) (1 − 3t^2),

    ...
15 Jorgen Pedersen Gram, 1850-1916, Danish mathematician, and Erhard Schmidt, 1876-1959, German/Estonian-born Berlin mathematician who studied under David Hilbert, a founder of modern functional analysis. The Gram-Schmidt procedure was actually first introduced by Laplace.
Scalar multiples of these functions, with the functions set to unity at t = 1, are the Legendre polynomials: P_0(t) = 1, P_1(t) = t, P_2(t) = (1/2)(3t^2 − 1), ...
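The procedure can be automated; a sketch using SymPy (our choice of tool) that orthogonalizes {1, t, t^2} on [−1, 1] and recovers vectors proportional to those above:

```python
import sympy as sp

t = sp.symbols('t')

def inner(f, g):
    # L2[-1, 1] inner product
    return sp.integrate(f * g, (t, -1, 1))

basis = [sp.Integer(1), t, t**2]
v = []
for f in basis:
    # Subtract projections onto the previously found orthogonal vectors.
    for u in v:
        f = f - inner(u, f) / inner(u, u) * u
    v.append(sp.expand(f))

print(v)  # [1, t, t**2 - 1/3], proportional to 1, t, 1 - 3t**2
```

Dividing each result by its norm then yields the orthonormal ϕ_i above.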
Some orthonormal sets in L_2[a, b] are:

1.
    ϕ_n(t) = √( (2n + 1)/2 ) P_n(t),

where P_n are the Legendre polynomials, in [−1, 1].

2.
    ϕ_n(t) = ( e^{−t^2/2} / (2^n n! √π)^{1/2} ) H_n(t),

where H_n are the Hermite polynomials, in (−∞, ∞).

3.
    ϕ_n(t) = e^{−t/2} L_n(t),

where L_n are the Laguerre polynomials, in [0, ∞).
7.3.2.6 Representation of a vector
A vector x in an n-dimensional inner product space can be represented in terms of a linear combination of n basis vectors {u_1, u_2, ..., u_n}. In general, the basis vectors do not have to be orthogonal; they just need to form a basis. If such a combination exists, we can write

    x = a_1 u_1 + a_2 u_2 + ... + a_n u_n.   (7.20)

The general task here is to find expressions for the coefficients a_k, k = 1, 2, ..., n. To get the coefficients, we begin by taking inner products with u_1, u_2, ..., u_n in turn, to get

    <u_1, x> = <u_1, a_1 u_1> + <u_1, a_2 u_2> + ... + <u_1, a_n u_n>.

Using the properties of an inner product and carrying out the procedure for all u_k:

    <u_1, x> = a_1 <u_1, u_1> + a_2 <u_1, u_2> + ... + a_n <u_1, u_n>,
    <u_2, x> = a_1 <u_2, u_1> + a_2 <u_2, u_2> + ... + a_n <u_2, u_n>,
        ...
    <u_n, x> = a_1 <u_n, u_1> + a_2 <u_n, u_2> + ... + a_n <u_n, u_n>.
Knowing x, u_1, u_2, ..., u_n, all the inner products can be determined, and the equations can be posed as a linear algebraic system:

    [ <u_1, u_1>  <u_1, u_2>  ...  <u_1, u_n> ] [ a_1 ]   [ <u_1, x> ]
    [ <u_2, u_1>  <u_2, u_2>  ...  <u_2, u_n> ] [ a_2 ] = [ <u_2, x> ]
    [     ...         ...     ...      ...    ] [ ... ]   [    ...   ]
    [ <u_n, u_1>  <u_n, u_2>  ...  <u_n, u_n> ] [ a_n ]   [ <u_n, x> ]   (7.21)

This can also be written as

    <u_i, u_j> a_j = <u_i, x>.

In either case, Cramer's rule can be used to solve for the unknown coefficients, a_j.
The process is simpler if the basis vectors are orthogonal. If orthogonal,

    <u_i, u_j> = 0,  i ≠ j,

and substituting into the equation for the coefficients, we get

    [ <u_1, u_1>      0       ...      0       ] [ a_1 ]   [ <u_1, x> ]
    [      0      <u_2, u_2>  ...      0       ] [ a_2 ] = [ <u_2, x> ]
    [     ...         ...     ...     ...      ] [ ... ]   [    ...   ]
    [      0          0       ...  <u_n, u_n>  ] [ a_n ]   [ <u_n, x> ]   (7.22)

This can be solved directly for the coefficients:

    a_i = <u_i, x> / <u_i, u_i>.   (7.23)
So if the basis vectors are orthogonal, we can write

    x = ( <u_1, x>/<u_1, u_1> ) u_1 + ( <u_2, x>/<u_2, u_2> ) u_2 + ... + ( <u_n, x>/<u_n, u_n> ) u_n,   (7.24)

    x = Σ_{i=1}^{n} ( <u_i, x>/<u_i, u_i> ) u_i.   (7.25)

If we use an orthonormal basis {ϕ_1, ϕ_2, ..., ϕ_n}, then the representation is even more efficient:

    x = Σ_{i=1}^{n} <ϕ_i, x> ϕ_i.   (7.26)
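Both routes to the coefficients can be sketched numerically; the basis vectors below are our own illustrative choices in R^2, not from the notes:

```python
import numpy as np

x = np.array([2.0, 5.0])

# Non-orthogonal basis: solve the Gram system <u_i, u_j> a_j = <u_i, x>.
u = [np.array([1.0, 1.0]), np.array([1.0, 2.0])]  # assumed example basis
G = np.array([[ui @ uj for uj in u] for ui in u])
b = np.array([ui @ x for ui in u])
a = np.linalg.solve(G, b)
print(a @ np.array(u))  # reconstructs [2. 5.]

# Orthogonal basis: the coefficients decouple, a_i = <u_i, x>/<u_i, u_i>.
w = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]
aw = [wi @ x / (wi @ wi) for wi in w]
print(sum(ai * wi for ai, wi in zip(aw, w)))  # reconstructs [2. 5.]
```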
Similar expansions apply to vectors in infinite-dimensional spaces, except that one must be careful that the orthonormal set is complete. Only then is there any guarantee that any vector can be represented as linear combinations of this orthonormal set. If {ϕ_1, ϕ_2, ...} is a complete orthonormal set of vectors in some domain Ω, then any vector x can be represented as

    x = Σ_{i=1}^{∞} a_i ϕ_i,   (7.27)

where

    a_i = <ϕ_i, x>.   (7.28)

This is a Fourier series representation, and the a_i are the Fourier coefficients. Though trigonometric functions are sometimes used, other orthogonal functions are also common. Thus we can have Fourier-Legendre series for Ω = [−1, 1], Fourier-Hermite series for Ω = (−∞, ∞), or Fourier-Laguerre series for Ω = [0, ∞).
Example 7.24
Show that the functions ϕ_1(t), ϕ_2(t), ..., ϕ_n(t) are orthonormal in L_2(0, 1], where

    ϕ_i(t) = { √n for (i − 1)/n < t ≤ i/n,
               0 otherwise. }

Expand x(t) = t^2 in terms of these functions, and find the error for a finite n.

We note that the basis functions are a set of "top hat" functions whose amplitude increases and whose width decreases with increasing n. For fixed n, the basis functions are a series of top hats that fills the domain [0, 1]. The area enclosed by a single basis function is 1/√n. If i ≠ j, the inner product

    <ϕ_i, ϕ_j> = ∫_0^1 ϕ_i(t)ϕ_j(t) dt = 0,

because the integrand is zero everywhere. If i = j, the inner product is

    ∫_0^1 ϕ_i(t)ϕ_i(t) dt = ∫_0^{(i−1)/n} (0)(0) dt + ∫_{(i−1)/n}^{i/n} √n √n dt + ∫_{i/n}^{1} (0)(0) dt
                          = n ( i/n − (i − 1)/n )
                          = 1.
So, φ_1, φ_2, …, φ_n is an orthonormal set. We can expand the function f(t) = t² in the form

$$t^2 = \sum_{i=1}^{n} \alpha_i \varphi_i.$$

Taking the inner product of both sides with φ_j, we get

$$\int_0^1 \varphi_j(t)\, t^2\, dt = \int_0^1 \varphi_j(t) \sum_{i=1}^{n} \alpha_i \varphi_i(t)\, dt,$$

$$\int_0^1 \varphi_j(t)\, t^2\, dt = \sum_{i=1}^{n} \alpha_i \underbrace{\int_0^1 \varphi_j(t)\, \varphi_i(t)\, dt}_{=\ \delta_{ij}},$$

$$\int_0^1 \varphi_j(t)\, t^2\, dt = \alpha_j.$$

Thus

$$\alpha_i = 0 + \int_{\frac{i-1}{n}}^{\frac{i}{n}} t^2 \sqrt{n}\, dt + 0,$$

so that

$$\alpha_i = \frac{1}{3 n^{5/2}}\left(3 i^2 - 3 i + 1\right).$$
Figure 7.4: Expansion of x(t) = t² in terms of "top hat" basis functions for two levels of approximation, n = 5, n = 10.
The functions t² and the partial sums f_n(t) = Σ_{i=1}^{n} α_i φ_i(t) for n = 5 and n = 10 are shown in Figure 7.4. The L² error for the partial sums can be calculated as Δ_n, where

$$\Delta_n^2 = \|f(t) - f_n(t)\|_2^2 = \int_0^1 \left(t^2 - \sum_{i=1}^{n} \alpha_i \varphi_i(t)\right)^2 dt = \frac{1}{9 n^2}\left(1 - \frac{1}{5 n^2}\right),$$

$$\Delta_n = \frac{1}{3n}\sqrt{1 - \frac{1}{5 n^2}},$$

which vanishes as n → ∞.
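The closed-form coefficients and error of Example 7.24 can be checked by direct quadrature. The sketch below is an independent numerical verification (the quadrature resolution is an arbitrary choice): it builds the partial sum from the α_i formula and compares the quadrature error against the closed-form Δ_n.

```python
# Numerical check of Example 7.24: compare the closed-form L2 error Delta_n
# against direct midpoint-rule quadrature of the "top hat" expansion of t^2.
import math

def alpha(i, n):                       # alpha_i = (3i^2 - 3i + 1)/(3 n^(5/2))
    return (3*i*i - 3*i + 1) / (3 * n**2.5)

def f_n(t, n):                         # partial sum: constant on ((i-1)/n, i/n]
    i = min(int(t * n) + 1, n)
    return alpha(i, n) * math.sqrt(n)

def l2_error(n, m=200000):             # midpoint-rule quadrature on [0, 1]
    s = 0.0
    for j in range(m):
        t = (j + 0.5) / m
        s += (t*t - f_n(t, n))**2
    return math.sqrt(s / m)

n = 5
delta = 1/(3*n) * math.sqrt(1 - 1/(5*n*n))   # closed-form Delta_n
print(delta, l2_error(n))                     # the two should agree closely
```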
Example 7.25

Show that the Fourier sine series for x(t) = 2t converges at a rate proportional to 1/√n, where n is the number of terms used to approximate x(t), in L²[0, 1].

Consider the sequence of functions

$$\varphi = \left\{ \sqrt{2}\sin(\pi t),\ \sqrt{2}\sin(2\pi t),\ \ldots,\ \sqrt{2}\sin(n\pi t),\ \ldots \right\}.$$

It is easy to show linear independence for these functions. They are orthonormal in the Hilbert space L²[0, 1], e.g.

$$\langle \varphi_2, \varphi_3 \rangle = \int_0^1 \left(\sqrt{2}\sin(2\pi t)\right)\left(\sqrt{2}\sin(3\pi t)\right) dt = 0,$$

$$\langle \varphi_3, \varphi_3 \rangle = \int_0^1 \left(\sqrt{2}\sin(3\pi t)\right)\left(\sqrt{2}\sin(3\pi t)\right) dt = 1.$$

Note that while the basis functions evaluate to 0 at both t = 0 and t = 1, the function itself vanishes only at t = 0. We must tolerate a large error at t = 1, but hope that this error is confined to an ever-collapsing neighborhood around t = 1 as more terms are included in the approximation.

The Fourier coefficients are

$$\alpha_k = \langle \varphi_k(t), 2t \rangle = \int_0^1 \sqrt{2}\sin(k\pi t)\,(2t)\, dt = \frac{2\sqrt{2}\,(-1)^{k+1}}{k\pi}.$$
Figure 7.5: Behavior in the error norm of the Fourier series approximation to x(t) = 2t with the number n of terms included in the series; the plotted fit is ||x(t) − x_a(t)||₂ ∼ 0.841 n^{−0.481}.
The approximation then is

$$x_a(t) = \sum_{k=1}^{n} \frac{4(-1)^{k+1}}{k\pi}\sin(k\pi t).$$

The norm of the error is then

$$\|x(t) - x_a(t)\|_2 = \left(\int_0^1 \left(2t - \sum_{k=1}^{n} \frac{4(-1)^{k+1}}{k\pi}\sin(k\pi t)\right)^2 dt\right)^{1/2}.$$

This is difficult to evaluate analytically. It is straightforward to examine it with symbolic computational software.

A plot of the norm of the error as a function of the number of terms in the approximation, n, is given in the log-log plot of Figure 7.5. A weighted least squares curve fit, with a weighting factor proportional to n² so that priority is given to data as n → ∞, shows that the function

$$\|x(t) - x_a(t)\|_2 \sim 0.841\, n^{-0.481}$$

approximates the convergence performance well. In the log-log plot the exponent on n is the slope. It appears from the graph that the slope may be approaching a limit, in which case it is likely that

$$\|x(t) - x_a(t)\|_2 \sim \frac{1}{\sqrt{n}}.$$

This indicates convergence of the series. Note that the series converges even though the norm of the nth basis function does not approach zero as n → ∞:

$$\lim_{n\to\infty} \|\varphi_n\|_2 = 1,$$

since the basis functions are orthonormal. Also note that the behavior of the norm of the final term in the series,

$$\|\alpha_n \varphi_n(t)\|_2 = \left(\int_0^1 \left(\frac{2\sqrt{2}(-1)^{n+1}}{n\pi}\sqrt{2}\sin(n\pi t)\right)^2 dt\right)^{1/2} = \frac{2\sqrt{2}}{n\pi},$$

does not tell us how the series actually converges.
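The convergence rate claimed here can be checked without symbolic software. The sketch below is an independent verification that leans on the orthonormality of the basis (the closed-form error sum it uses is the Parseval-type identity developed later in this section, applied with ||2t||₂² = 4/3):

```python
# Check of Example 7.25: with orthonormal basis functions, the squared
# truncation error is ||x||^2 minus the sum of squared Fourier coefficients,
# here 4/3 - sum_{k<=n} 8/(k^2 pi^2).  The error should decay like n^(-1/2).
import math

def err(n):
    s = sum(8.0 / (k*k*math.pi*math.pi) for k in range(1, n+1))
    return math.sqrt(4.0/3.0 - s)

# err(n)*sqrt(n) should approach the constant 2*sqrt(2)/pi ~ 0.9003.
for n in (10, 100, 1000):
    print(n, err(n), err(n) * math.sqrt(n))
```

The limiting constant 2√2/π is close to, though not identical with, the 0.841 prefactor obtained from the weighted curve fit of the text, consistent with the fit's exponent −0.481 not yet having reached its asymptotic value −1/2.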
Figure 7.6: Behavior in the error norm of the Fourier series approximation to x(t) = t(1 − t) with the number n of terms included in the series; the plotted fit is ||x(t) − x_a(t)||₂ ∼ 0.00994 n^{−2.492}.
Example 7.26

Show that the Fourier sine series for x(t) = t − t² converges at a rate proportional to 1/n^{5/2}, where n is the number of terms used to approximate x(t), in L²[0, 1].

Again, consider the sequence of functions

$$\varphi = \left\{ \sqrt{2}\sin(\pi t),\ \sqrt{2}\sin(2\pi t),\ \ldots,\ \sqrt{2}\sin(n\pi t),\ \ldots \right\},$$

which are, as before, linearly independent and moreover orthonormal. Note that in this case, as opposed to the previous example, both the basis functions and the function to be approximated vanish identically at both t = 0 and t = 1. Consequently, there will be no error in the approximation at either end point.

The Fourier coefficients are

$$\alpha_k = \frac{2\sqrt{2}\left(1 + (-1)^{k+1}\right)}{k^3 \pi^3}.$$

Note that α_k = 0 for even values of k. Taking this into account and retaining only the necessary basis functions, we can write the Fourier sine series as

$$x(t) = t(1-t) \sim x_a(t) = \sum_{m=1}^{n} \frac{4\sqrt{2}}{(2m-1)^3 \pi^3}\,\sqrt{2}\sin[(2m-1)\pi t] = \sum_{m=1}^{n} \frac{8}{(2m-1)^3 \pi^3}\sin[(2m-1)\pi t].$$
The norm of the error is then

$$\|x(t) - x_a(t)\|_2 = \left(\int_0^1 \left(t(1-t) - \sum_{m=1}^{n} \frac{8}{(2m-1)^3 \pi^3}\sin[(2m-1)\pi t]\right)^2 dt\right)^{1/2}.$$

Again this is difficult to address analytically, but symbolic computation allows evaluation of the error norm as a function of n.

A plot of the norm of the error as a function of the number of terms in the approximation, n, is given in the log-log plot of Figure 7.6. A weighted least squares curve fit, with a weighting factor proportional to n² so that priority is given to data as n → ∞, shows that the function

$$\|x(t) - x_a(t)\|_2 \sim 0.00995\, n^{-2.492}$$

approximates the convergence performance well. Thus we might suspect that

$$\lim_{n\to\infty} \|x(t) - x_a(t)\|_2 \sim \frac{1}{n^{5/2}}.$$

Note that the convergence is much more rapid than in the previous example! This can be critically important in numerical calculations and demonstrates that a judicious selection of basis functions can have fruitful consequences.
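The faster rate can be checked the same way as in the previous example. The following sketch uses ||t − t²||₂² = 1/30 and the squared odd-mode coefficients 32/(k⁶π⁶) (both straightforward to derive, though the closed-form error sum is again an assumption resting on the Parseval-type identity of the next subsection):

```python
# Parseval-style check for Example 7.26: the squared error after keeping the
# first n odd sine modes of t - t^2 is 1/30 minus the partial coefficient sum;
# the error should scale like n^(-5/2).
import math

def err(n):
    s = sum(32.0 / ((2*m - 1)**6 * math.pi**6) for m in range(1, n + 1))
    return math.sqrt(max(1.0/30.0 - s, 0.0))   # guard tiny negative rounding

# err(n) * n**2.5 should be roughly constant if the n^(-5/2) rate holds.
for n in (2, 4, 8, 16):
    print(n, err(n), err(n) * n**2.5)
```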
7.3.2.7 Parseval's equation, convergence, and completeness

We consider Parseval's¹⁶ equation and associated issues here. For a basis to be complete, we require that the norm of the difference between the series representation of each function and the function itself converge to zero in L² as the number of terms in the series approaches infinity. For an orthonormal basis φ_i(t), this is

$$\lim_{n\to\infty} \left\| x(t) - \sum_{k=1}^{n} \alpha_k \varphi_k(t) \right\|_2 = 0.$$

Now for the orthonormal basis, we can show this reduces to a particularly simple form. Consider for instance the error for a one-term Fourier expansion:

$$\begin{aligned}
\|x - \alpha\varphi\|_2^2 &= \langle x - \alpha\varphi,\, x - \alpha\varphi \rangle, & (7.29)\\
&= \langle x, x \rangle - \langle x, \alpha\varphi \rangle - \langle \alpha\varphi, x \rangle + \langle \alpha\varphi, \alpha\varphi \rangle, & (7.30)\\
&= \|x\|_2^2 - \alpha\langle x, \varphi \rangle - \bar{\alpha}\langle \varphi, x \rangle + \bar{\alpha}\alpha\langle \varphi, \varphi \rangle, & (7.31)\\
&= \|x\|_2^2 - \alpha\overline{\langle \varphi, x \rangle} - \bar{\alpha}\langle \varphi, x \rangle + \bar{\alpha}\alpha\langle \varphi, \varphi \rangle, & (7.32)\\
&= \|x\|_2^2 - \alpha\bar{\alpha} - \bar{\alpha}\alpha + \bar{\alpha}\alpha(1), & (7.33)\\
&= \|x\|_2^2 - \bar{\alpha}\alpha, & (7.34)\\
&= \|x\|_2^2 - |\alpha|^2. & (7.35)
\end{aligned}$$

Here we have used the definition of the Fourier coefficient ⟨φ, x⟩ = α, and orthonormality ⟨φ, φ⟩ = 1. This is easily extended to multi-term expansions to give

$$\left\| x(t) - \sum_{k=1}^{n} \alpha_k \varphi_k(t) \right\|_2^2 = \|x(t)\|_2^2 - \sum_{k=1}^{n} |\alpha_k|^2. \qquad (7.36)$$

So convergence, and thus completeness of the basis, is equivalent to requiring that

$$\|x(t)\|_2^2 = \lim_{n\to\infty} \sum_{k=1}^{n} |\alpha_k|^2, \qquad (7.37)$$

for all functions x(t). Note that this requirement is stronger than just requiring that the last Fourier coefficient vanish for large n; also note that it does not address the important question of the rate of convergence, which can be different for different functions x(t), for the same basis.
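Equation (7.36) can be checked directly in a finite-dimensional setting. The following sketch (with an illustrative orthonormal basis of R³ and an arbitrary x, neither from the text) compares the squared truncation error against ||x||² minus the partial coefficient sum:

```python
# Direct finite-dimensional check of equation (7.36): for an orthonormal
# basis phi_k of R^3, the squared truncation error after n terms equals
# ||x||^2 minus the sum of the first n squared Fourier coefficients.
import math

r = 1.0 / math.sqrt(2.0)
phi = [(r, r, 0.0), (r, -r, 0.0), (0.0, 0.0, 1.0)]   # orthonormal basis
x = (1.0, 2.0, 3.0)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

alpha = [dot(p, x) for p in phi]                      # Fourier coefficients
for n in range(4):                                    # truncate after n terms
    resid = [x[k] - sum(alpha[i]*phi[i][k] for i in range(n)) for k in range(3)]
    lhs = dot(resid, resid)
    rhs = dot(x, x) - sum(c*c for c in alpha[:n])
    print(n, lhs, rhs)                                # lhs == rhs for every n
```

With all three terms retained, the residual vanishes and (7.37) holds exactly, as it must for a complete basis of a finite-dimensional space.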
7.3.3 Reciprocal bases

Let {u_1, …, u_n} be a basis of a finite-dimensional inner product space. Also let {u_1^R, …, u_n^R} be elements of the same space such that

$$\langle u_i, u_j^R \rangle = \delta_{ij}.$$

Then {u_1^R, …, u_n^R} is called the reciprocal (or dual) basis of {u_1, …, u_n}. Of course an orthonormal basis is its own reciprocal.

¹⁶Marc-Antoine Parseval des Chênes, 1755-1835, French mathematician.
Since {u_1, …, u_n} is a basis, we can write any vector x as

$$x = \sum_{j=1}^{n} \alpha_j u_j. \qquad (7.38)$$

Taking the inner product of both sides with u_i^R, we get

$$\begin{aligned}
\langle u_i^R, x \rangle &= \langle u_i^R, \sum_{j=1}^{n} \alpha_j u_j \rangle & (7.39)\\
&= \sum_{j=1}^{n} \langle u_i^R, \alpha_j u_j \rangle & (7.40)\\
&= \sum_{j=1}^{n} \alpha_j \langle u_i^R, u_j \rangle & (7.41)\\
&= \sum_{j=1}^{n} \alpha_j \delta_{ij} & (7.42)\\
&= \alpha_i, & (7.43)
\end{aligned}$$

so that

$$x = \sum_{j=1}^{n} \langle u_j^R, x \rangle\, u_j. \qquad (7.44)$$
Example 7.27

Consider x ∈ R². The vectors u_1 = (2, 0)^T and u_2 = (1, 3)^T span the space R² and thus can be used as a basis.

Find the reciprocal basis u_1^R, u_2^R, and use the above relation to expand x = (3, 5)^T in terms of both the basis u_1, u_2 and then the reciprocal basis u_1^R, u_2^R.
We adopt the dot product as our inner product. Let's get α_1, α_2. To do this we first need the reciprocal basis vectors, which are defined by the inner product

$$\langle u_i, u_j^R \rangle = \delta_{ij}.$$

We take

$$u_1^R = \begin{pmatrix} a_{11}^R \\ a_{12}^R \end{pmatrix}, \qquad u_2^R = \begin{pmatrix} a_{21}^R \\ a_{22}^R \end{pmatrix}.$$

Expanding, we get

$$\langle u_1, u_1^R \rangle = u_1^T u_1^R = (2, 0)\begin{pmatrix} a_{11}^R \\ a_{12}^R \end{pmatrix} = 2\, a_{11}^R + 0\, a_{12}^R = 1,$$

$$\langle u_1, u_2^R \rangle = u_1^T u_2^R = (2, 0)\begin{pmatrix} a_{21}^R \\ a_{22}^R \end{pmatrix} = 2\, a_{21}^R + 0\, a_{22}^R = 0,$$

$$\langle u_2, u_1^R \rangle = u_2^T u_1^R = (1, 3)\begin{pmatrix} a_{11}^R \\ a_{12}^R \end{pmatrix} = a_{11}^R + 3\, a_{12}^R = 0,$$

$$\langle u_2, u_2^R \rangle = u_2^T u_2^R = (1, 3)\begin{pmatrix} a_{21}^R \\ a_{22}^R \end{pmatrix} = a_{21}^R + 3\, a_{22}^R = 1.$$

Solving, we get

$$a_{11}^R = \frac{1}{2}, \qquad a_{12}^R = -\frac{1}{6}, \qquad a_{21}^R = 0, \qquad a_{22}^R = \frac{1}{3},$$
so substituting, we get expressions for the reciprocal basis vectors:

$$u_1^R = \begin{pmatrix} \frac{1}{2} \\ -\frac{1}{6} \end{pmatrix}, \qquad u_2^R = \begin{pmatrix} 0 \\ \frac{1}{3} \end{pmatrix}.$$

We can now get the coefficients α_i:

$$\alpha_1 = \langle u_1^R, x \rangle = \left(\tfrac{1}{2}, -\tfrac{1}{6}\right)\begin{pmatrix} 3 \\ 5 \end{pmatrix} = \frac{3}{2} - \frac{5}{6} = \frac{2}{3},$$

$$\alpha_2 = \langle u_2^R, x \rangle = \left(0, \tfrac{1}{3}\right)\begin{pmatrix} 3 \\ 5 \end{pmatrix} = 0 + \frac{5}{3} = \frac{5}{3}.$$

So on the new basis, x can be represented as

$$x = \frac{2}{3}\, u_1 + \frac{5}{3}\, u_2.$$

The representation is shown in Figure 7.7.

Figure 7.7: Representation of a vector x on a nonorthogonal contravariant basis u_1, u_2 and its reciprocal covariant basis u_1^R, u_2^R.

Note that u_1^R is orthogonal to u_2 and that u_2^R is orthogonal to u_1. Further, since ||u_1||₂ > 1 and ||u_2||₂ > 1, we get ||u_1^R||₂ < 1 and ||u_2^R||₂ < 1 in order to have ⟨u_i, u_j^R⟩ = δ_ij.
In a similar manner it is easily shown that x can be represented in terms of the reciprocal basis as

$$x = \sum_{i=1}^{n} \beta_i u_i^R = \beta_1 u_1^R + \beta_2 u_2^R,$$

where

$$\beta_i = \langle u_i, x \rangle.$$

For this problem, this yields

$$x = 6\, u_1^R + 18\, u_2^R.$$

Thus we see that for a nonorthogonal basis, two natural representations of the same vector exist. One of these is actually a covariant representation; the other is contravariant.
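Both expansions of Example 7.27 can be reproduced numerically. The sketch below solves the 2 × 2 biorthogonality conditions in closed form (using the familiar adjugate formula for a 2 × 2 inverse) and verifies both the contravariant coefficients (2/3, 5/3) and the covariant coefficients (6, 18):

```python
# Numerical restatement of Example 7.27: build the reciprocal basis of
# u1 = (2,0), u2 = (1,3) from <u_i, u_j^R> = delta_ij, then verify both
# expansions of x = (3,5) found in the text.

u1, u2 = (2.0, 0.0), (1.0, 3.0)
x = (3.0, 5.0)

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1]

# For 2x2, the reciprocal vectors follow from the adjugate of [[u1],[u2]]:
# u1R is orthogonal to u2, and u2R is orthogonal to u1.
det = u1[0]*u2[1] - u1[1]*u2[0]                     # = 6
u1R = ( u2[1]/det, -u2[0]/det)                      # ( 1/2, -1/6)
u2R = (-u1[1]/det,  u1[0]/det)                      # ( 0,    1/3)

alpha = (dot(u1R, x), dot(u2R, x))                  # (2/3, 5/3)
beta  = (dot(u1, x),  dot(u2, x))                   # (6, 18)

x_from_u  = tuple(alpha[0]*u1[k] + alpha[1]*u2[k] for k in range(2))
x_from_uR = tuple(beta[0]*u1R[k] + beta[1]*u2R[k] for k in range(2))
print(alpha, beta, x_from_u, x_from_uR)             # both rebuild (3, 5)
```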
7.4 Operators

• For two sets X and Y, an operator (or mapping, or transformation) f is a rule that associates every x ∈ X with an image y ∈ Y. We can write f : X → Y or x ↦ y. X is the domain of the operator, and Y is the range.

• If every element of Y is not necessarily an image, then X is mapped into Y; this map is called an injection.

• If, on the other hand, every element of Y is an image of some element of X, then X is mapped onto Y and the map is a surjection.

• If, for every x ∈ X there is a unique y ∈ Y, and for every y ∈ Y there is a unique x ∈ X, the operator is one-to-one or invertible; it is a bijection.

• f and g are inverses of each other when f : X → Y and g : Y → X.

• f : X → Y is continuous at x_0 ∈ X if, for every ε > 0, there is a δ > 0 such that ||f(x) − f(x_0)|| < ε for all x satisfying ||x − x_0|| < δ.

A Venn diagram showing various classes of operators is given in Figure 7.8.
Examples of continuous operators are:

1. (x_1, x_2, …, x_n) → y, where y = f(x_1, x_2, …, x_n).

2. f → g, where g = df/dt.

3. f → g, where g(t) = ∫_a^b K(s, t) f(s) ds. K(s, t) is called the kernel of the integral transformation. If ∫_a^b ∫_a^b |K(s, t)|² ds dt is finite, then g belongs to L² if f does.

4. (x_1, x_2, …, x_m)^T → (y_1, y_2, …, y_n)^T, where y = Ax with y, A, and x being n × 1, n × m, and m × 1 matrices, respectively (y_{n×1} = A_{n×m} x_{m×1}), and the usual matrix multiplication is assumed. Here A is a left operator, and is the most common type of matrix operator.

5. (x_1, x_2, …, x_n) → (y_1, y_2, …, y_m), where y = xA with y, x, and A being 1 × m, 1 × n, and n × m matrices, respectively (y_{1×m} = x_{1×n} A_{n×m}), and the usual matrix multiplication is assumed. Here A is a right operator.
208
.
.
.
.
.
.
.
.
.
.
.
. .
.
X
Y
X
Y
X
Y
Domain Range
Injection: Inverse may not exist
f
f
f
f
f
f
f
Surjection: Inverse not always unique
.
.
.
f
Bijection (onetoone): Inverse always exists
Figure 7.8: Venn diagram showing classes of operators.
7.4.1 Linear operators

• A linear operator T is one that satisfies

$$T(x + y) = Tx + Ty, \qquad (7.45)$$
$$T(\alpha x) = \alpha\, Tx. \qquad (7.46)$$

• An operator T is bounded if, for all x ∈ X, there exists a constant c such that

$$\|Tx\| \leq c\|x\|. \qquad (7.47)$$

• A special operator is the identity I, which is defined by Ix = x.

• The null space or kernel of an operator T is the set of all x such that Tx = 0. The null space is a vector space.

• The norm of an operator T can be defined as

$$\|T\| = \sup_{x \neq 0} \frac{\|Tx\|}{\|x\|}. \qquad (7.48)$$

• An operator T is
  positive definite if ⟨Tx, x⟩ > 0,
  positive semidefinite if ⟨Tx, x⟩ ≥ 0,
  negative definite if ⟨Tx, x⟩ < 0,
  negative semidefinite if ⟨Tx, x⟩ ≤ 0,
  for all x ≠ 0.

• Theorem
  For a matrix A : Cᵐ → Cⁿ, ||A||₂ = √λ_max, where λ_max is the largest eigenvalue of the matrix \bar{A}^T A. It will soon be shown that because \bar{A}^T A is symmetric, all of its eigenvalues are guaranteed real. Moreover, it can be shown that they are also all greater than or equal to zero. Hence, the definition satisfies all properties of the norm.

• The above theorem holds only for Hilbert spaces and not for arbitrary Banach spaces.
7.4.2 Adjoint operators

The operator T* is the adjoint of the operator T if

$$\langle Tx, y \rangle = \langle x, T^* y \rangle. \qquad (7.49)$$

If T* = T, the operator is self-adjoint.
Example 7.28

Find the adjoint of the real matrix A : R² → R², where

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}.$$

We assume a_{11}, a_{12}, a_{21}, a_{22} are known constants.

Let the adjoint of A be

$$A^* = \begin{pmatrix} a_{11}^* & a_{12}^* \\ a_{21}^* & a_{22}^* \end{pmatrix}.$$

Here the starred quantities are to be determined. We also have for x and y:

$$x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \qquad y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.$$

We take the inner product equation (7.49) and expand:

$$\langle Ax, y \rangle = \langle x, A^* y \rangle,$$

$$(a_{11} x_1 + a_{12} x_2)\, y_1 + (a_{21} x_1 + a_{22} x_2)\, y_2 = x_1 (a_{11}^* y_1 + a_{12}^* y_2) + x_2 (a_{21}^* y_1 + a_{22}^* y_2).$$

Since this must hold for any x_1, x_2, y_1, y_2, we have

$$a_{11}^* = a_{11}, \qquad a_{12}^* = a_{21}, \qquad a_{21}^* = a_{12}, \qquad a_{22}^* = a_{22}.$$

Thus

$$A^* = \begin{pmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{pmatrix} = A^T.$$

Thus a symmetric matrix is self-adjoint. The above result is easily extended to complex matrices A : Cⁿ → Cᵐ to show that

$$A^* = \bar{A}^T.$$
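The conclusion of Example 7.28 is easy to sanity-check numerically. The sketch below (with illustrative matrix and vector values, not from the text) confirms that ⟨Ax, y⟩ = ⟨x, Aᵀy⟩ under the real dot product:

```python
# Check of Example 7.28's conclusion with illustrative values: under the real
# dot product, <A x, y> = <x, A^T y>, so the adjoint of a real matrix is its
# transpose.

A = [[1.0, 2.0], [3.0, 4.0]]
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]        # transpose of A

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1]

x, y = [0.5, -2.0], [3.0, 1.0]
print(dot(matvec(A, x), y), dot(x, matvec(At, y)))   # the two values agree
```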
Example 7.29

Find the adjoint of the differential operator L : X → X, where

$$L = \frac{d^2}{ds^2} + \frac{d}{ds},$$

and X is the subspace of L²[0, 1] with x(0) = x(1) = 0 if x ∈ X.

Using integration by parts on the inner product,

$$\begin{aligned}
\langle Lx, y \rangle &= \int_0^1 \left(x''(s) + x'(s)\right) y(s)\, ds \\
&= \int_0^1 x''(s)\, y(s)\, ds + \int_0^1 x'(s)\, y(s)\, ds \\
&= x'(1) y(1) - x'(0) y(0) - \int_0^1 x'(s)\, y'(s)\, ds + x(1) y(1) - x(0) y(0) - \int_0^1 x(s)\, y'(s)\, ds \\
&= x'(1) y(1) - x'(0) y(0) - \int_0^1 x'(s)\, y'(s)\, ds - \int_0^1 x(s)\, y'(s)\, ds \\
&= x'(1) y(1) - x'(0) y(0) - \left( x(1) y'(1) - x(0) y'(0) - \int_0^1 x(s)\, y''(s)\, ds \right) - \int_0^1 x(s)\, y'(s)\, ds \\
&= x'(1) y(1) - x'(0) y(0) + \int_0^1 x(s)\, y''(s)\, ds - \int_0^1 x(s)\, y'(s)\, ds \\
&= x'(1) y(1) - x'(0) y(0) + \int_0^1 x(s)\left(y''(s) - y'(s)\right) ds.
\end{aligned}$$

This maintains the form of an inner product in L²[0, 1] if we require y(0) = y(1) = 0; doing this, we get

$$\langle Lx, y \rangle = \int_0^1 x(s)\left(y''(s) - y'(s)\right) ds = \langle x, L^* y \rangle.$$

We see by inspection that the adjoint operator is

$$L^* = \frac{d^2}{ds^2} - \frac{d}{ds}.$$

Since the adjoint operator is not equal to the operator itself, the operator is not self-adjoint.
211
Example 7.30
Find the adjoint of the diﬀerential operator L : X → X, where L =
d
2
ds
2
, and X is the subspace of
L
2
[0, 1] with x(0) = x(1) = 0 if x ∈ X.
Using integration by parts on the inner product
<Lx, y> =
1
0
x
(s)y(s) ds
= x
(1)y(1) −x
(0)y(0) −
1
0
x
(s)y
(s) ds
= x
(1)y(1) −x
(0)y(0) −
x(1)y
(1) −x(0)y
(0) −
1
0
x(s)y
(s) ds
= x
(1)y(1) −x
(0)y(0) +
1
0
x(s)y
(s) ds
If we require y(0) = y(1) = 0, then
<Lx, y> =
1
0
x(s)y
(s) dt = <x, L
∗
y>
In this case, we see that L = L
∗
, so the operator is selfadjoint.
Example 7.31

Find the adjoint of the integral operator L : L²[a, b] → L²[a, b], where

$$Lx = \int_a^b K(s, t)\, x(s)\, ds.$$

The inner product is

$$\begin{aligned}
\langle Lx, y \rangle &= \int_a^b \left( \int_a^b K(s, t)\, x(s)\, ds \right) y(t)\, dt \\
&= \int_a^b \int_a^b K(s, t)\, x(s)\, y(t)\, ds\, dt \\
&= \int_a^b \int_a^b x(s)\, K(s, t)\, y(t)\, dt\, ds \\
&= \int_a^b x(s) \left( \int_a^b K(s, t)\, y(t)\, dt \right) ds \\
&= \langle x, L^* y \rangle,
\end{aligned}$$

where

$$L^* y = \int_a^b K(s, t)\, y(t)\, dt,$$

or equivalently

$$L^* y = \int_a^b K(t, s)\, y(s)\, ds.$$

Note that in the definition of Lx, the second argument of K is a free variable, while in the consequent definition of L*y, the first argument of K is free. So in general, the operator and its adjoint are different. Note, however, that if K(s, t) = K(t, s), then the operator is self-adjoint. That is, a symmetric kernel yields a self-adjoint operator.
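A discrete analogue makes the argument-swapping concrete. In the sketch below (the kernel, test functions, and quadrature are all illustrative choices, not from the text), the integral operator is approximated by a midpoint-rule sum, and ⟨Lx, y⟩ matches ⟨x, L*y⟩ even for a nonsymmetric kernel:

```python
# Discrete analogue of Example 7.31: approximate the integral operator by a
# midpoint-rule quadrature and check <Lx, y> = <x, L*y>, where L* uses the
# kernel with its arguments swapped (first argument free).
import math

a, b, m = 0.0, 1.0, 50
h = (b - a) / m
s = [a + (j + 0.5) * h for j in range(m)]          # midpoint nodes

def K(s_, t_):                                     # a nonsymmetric test kernel
    return s_ + 2.0 * t_

def Lx(x, t_):                                     # (Lx)(t) = int K(s,t) x(s) ds
    return sum(K(s[j], t_) * x(s[j]) * h for j in range(m))

def Lstar_y(y, s_):                                # (L*y)(s) = int K(s,t) y(t) dt
    return sum(K(s_, s[j]) * y(s[j]) * h for j in range(m))

x = lambda t_: t_ * t_
y = lambda t_: math.sin(t_)
inner_Lx_y  = sum(Lx(x, s[j]) * y(s[j]) * h for j in range(m))
inner_x_Lsy = sum(x(s[j]) * Lstar_y(y, s[j]) * h for j in range(m))
print(inner_Lx_y, inner_x_Lsy)                     # equal up to rounding
```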
Properties:

$$\|T^*\| = \|T\| \qquad (7.50)$$
$$(T_1 + T_2)^* = T_1^* + T_2^* \qquad (7.51)$$
$$(\alpha T)^* = \bar{\alpha}\, T^* \qquad (7.52)$$
$$(T_1 T_2)^* = T_2^*\, T_1^* \qquad (7.53)$$
$$(T^*)^* = T \qquad (7.54)$$
$$(T^{-1})^* = (T^*)^{-1} \quad \text{if } T^{-1} \text{ exists} \qquad (7.55)$$
7.4.3 Inverse operators

Let

$$Tx = y.$$

If an inverse of T exists, which we will call T⁻¹, then

$$x = T^{-1} y.$$

Substituting for x in favor of y, we get

$$T T^{-1} y = y,$$

so that

$$T T^{-1} = I.$$

Properties:

$$(T_a T_b)^{-1} = T_b^{-1} T_a^{-1}. \qquad (7.56)$$

Let's show this. Say

$$y = T_a T_b\, x. \qquad (7.57)$$

Then

$$T_a^{-1} y = T_b\, x, \qquad (7.58)$$
$$T_b^{-1} T_a^{-1} y = x. \qquad (7.59)$$

Consequently, we see that

$$(T_a T_b)^{-1} = T_b^{-1} T_a^{-1}. \qquad (7.60)$$
Example 7.32

Let L be the operator defined by

$$Lx = \left( \frac{d^2}{dt^2} + k^2 \right) x(t) = f(t),$$

where x belongs to the subspace of L² with x(0) = a and x(π) = b. Show that the inverse operator L⁻¹ is given by

$$x(t) = L^{-1} f(t) = b\, \frac{\partial g}{\partial \tau}(\pi, t) - a\, \frac{\partial g}{\partial \tau}(0, t) + \int_0^\pi g(\tau, t)\, f(\tau)\, d\tau,$$

where g(τ, t) is the Green's function.

From the definitions of L and L⁻¹ above,

$$L^{-1}(Lx) = b\, \frac{\partial g}{\partial \tau}(\pi, t) - a\, \frac{\partial g}{\partial \tau}(0, t) + \int_0^\pi g(\tau, t)\left( \frac{d^2 x(\tau)}{d\tau^2} + k^2 x(\tau) \right) d\tau.$$

Using integration by parts and the property that g(0, t) = g(π, t) = 0, the integral on the right can be simplified as

$$-x(\pi)\, \frac{\partial g}{\partial \tau}(\pi, t) + x(0)\, \frac{\partial g}{\partial \tau}(0, t) + \int_0^\pi x(\tau)\left( \frac{\partial^2 g}{\partial \tau^2} + k^2 g \right) d\tau.$$

Since x(0) = a, x(π) = b, and

$$\frac{\partial^2 g}{\partial \tau^2} + k^2 g = \delta(t - \tau),$$

we have

$$L^{-1}(Lx) = \int_0^\pi x(\tau)\, \delta(t - \tau)\, d\tau = x(t).$$

Thus, L⁻¹L = I, proving the proposition. Note that it is easily shown for this problem that the Green's function is

$$g(\tau, t) = -\frac{\sin[k(\pi - \tau)]\, \sin[kt]}{k\, \sin[k\pi]}, \qquad t < \tau,$$

$$g(\tau, t) = -\frac{\sin[k\tau]\, \sin[k(\pi - t)]}{k\, \sin[k\pi]}, \qquad \tau < t,$$

so that we can write x(t) explicitly in terms of the forcing function f(t), including the inhomogeneous boundary conditions, as follows:

$$x(t) = \frac{b\, \sin[kt]}{\sin[k\pi]} + \frac{a\, \sin[k(\pi - t)]}{\sin[k\pi]} - \frac{\sin[k(\pi - t)]}{k\, \sin[k\pi]} \int_0^t f(\tau)\, \sin[k\tau]\, d\tau - \frac{\sin[kt]}{k\, \sin[k\pi]} \int_t^\pi f(\tau)\, \sin[k(\pi - \tau)]\, d\tau.$$
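The explicit formula can be tested against a case solvable by elementary means. The sketch below uses the illustrative choices k = 1/2, f(τ) = 1, a = 1, b = 2 (none of which appear in the text); for these, the BVP x'' + x/4 = 1, x(0) = 1, x(π) = 2 has the elementary exact solution x(t) = 4 − 3 cos(t/2) − 2 sin(t/2):

```python
# Numerical verification of Example 7.32's explicit solution, with the
# illustrative choices k = 1/2, f = 1, a = 1, b = 2 (assumptions, not from
# the text).  Exact solution: x(t) = 4 - 3 cos(t/2) - 2 sin(t/2).
import math

k, a, b = 0.5, 1.0, 2.0
f = lambda tau: 1.0

def quad(g, lo, hi, m=2000):                 # midpoint-rule quadrature
    h = (hi - lo) / m
    return sum(g(lo + (j + 0.5) * h) for j in range(m)) * h

def x_green(t):                              # the formula from the example
    skp = math.sin(k * math.pi)
    I1 = quad(lambda tau: f(tau) * math.sin(k * tau), 0.0, t)
    I2 = quad(lambda tau: f(tau) * math.sin(k * (math.pi - tau)), t, math.pi)
    return (b * math.sin(k * t) / skp + a * math.sin(k * (math.pi - t)) / skp
            - math.sin(k * (math.pi - t)) / (k * skp) * I1
            - math.sin(k * t) / (k * skp) * I2)

x_exact = lambda t: 4.0 - 3.0 * math.cos(t / 2) - 2.0 * math.sin(t / 2)
for t in (0.0, 0.5, math.pi / 2, 2.5, math.pi):
    print(t, x_green(t), x_exact(t))          # should agree closely
```

Note that k must not be an integer here, since sin(kπ) appears in the denominator; integer k corresponds to a resonant (non-invertible) operator.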
For linear algebraic systems, the reciprocal or dual basis can be easily formulated in terms of operator notation and is closely related to the inverse operator. If we define U to be an n × n matrix whose columns are the n basis vectors u_i, each of length n, which span the n-dimensional space, we seek U^R, the n × n matrix whose columns are the vectors u_j^R which form the reciprocal or dual basis. The reciprocal basis is found by enforcing the equivalent of ⟨u_i, u_j^R⟩ = δ_ij:

$$\bar{U}^T U^R = I. \qquad (7.61)$$

Solving for U^R,

$$\begin{aligned}
\bar{U}^T U^R &= I, & (7.62)\\
U^T \bar{U}^R &= \bar{I}, & (7.63)\\
\left( U^T \bar{U}^R \right)^T &= I^T, & (7.64)\\
\bar{U}^{R\,T}\, U &= I, & (7.65)\\
\bar{U}^{R\,T}\, U\, U^{-1} &= I\, U^{-1}, & (7.66)\\
\bar{U}^{R\,T} &= U^{-1}, & (7.67)\\
U^R &= \overline{\left( U^{-1} \right)^T}, & (7.68)
\end{aligned}$$

we see that the set of reciprocal basis vectors is given by the conjugate transpose of the inverse of the original matrix of basis vectors. Then the expression for the amplitudes modulating the basis vectors, α_i = ⟨u_i^R, x⟩, is

$$\alpha = \bar{U}^{R\,T} x. \qquad (7.69)$$

Substituting for U^R in terms of its definition, we can also say

$$\alpha = \overline{\overline{\left( U^{-1} \right)^T}}^{\,T} x = U^{-1} x. \qquad (7.70)$$

Then the expansion for the vector x = Σ_j α_j u_j = Σ_j ⟨u_j^R, x⟩ u_j is written in the alternate notation as

$$x = U\,\alpha = U\, U^{-1} x = x. \qquad (7.71)$$
Example 7.33

Consider the problem of the previous example, with x ∈ R² and basis vectors u_1 = (2, 0)^T and u_2 = (1, 3)^T; find the reciprocal basis vectors and an expansion of x in terms of the basis vectors.

Using the alternate vector and matrix notation, we define the matrix of basis vectors as

$$U = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}.$$

Since this matrix is real, the complex conjugation process is not important, but it will be retained for completeness. Using standard techniques, we find that the inverse is

$$U^{-1} = \begin{pmatrix} \frac{1}{2} & -\frac{1}{6} \\ 0 & \frac{1}{3} \end{pmatrix}.$$

Thus the matrix with the reciprocal basis vectors in its columns is

$$U^R = \overline{\left( U^{-1} \right)^T} = \begin{pmatrix} \frac{1}{2} & 0 \\ -\frac{1}{6} & \frac{1}{3} \end{pmatrix}.$$

This agrees with the earlier analysis. For x = (3, 5)^T, we find the coefficients α to be

$$\alpha = \bar{U}^{R\,T} x = \begin{pmatrix} \frac{1}{2} & -\frac{1}{6} \\ 0 & \frac{1}{3} \end{pmatrix}\begin{pmatrix} 3 \\ 5 \end{pmatrix} = \begin{pmatrix} \frac{2}{3} \\ \frac{5}{3} \end{pmatrix}.$$

We see that we do indeed recover x upon taking the product

$$x = U\,\alpha = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}\begin{pmatrix} \frac{2}{3} \\ \frac{5}{3} \end{pmatrix} = \frac{2}{3}\begin{pmatrix} 2 \\ 0 \end{pmatrix} + \frac{5}{3}\begin{pmatrix} 1 \\ 3 \end{pmatrix} = \begin{pmatrix} 3 \\ 5 \end{pmatrix}.$$
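The matrix route of Example 7.33 can be sketched in a few lines. For the real 2 × 2 case conjugation is a no-op, so the reciprocal basis matrix is simply the transpose of the inverse, and α = U⁻¹x:

```python
# Sketch of the operator form of the reciprocal basis (equations (7.61)-(7.68))
# for the real 2x2 example of the text: U^R is the transpose of U^{-1}, and
# alpha = U^{-1} x gives the expansion coefficients.

U = [[2.0, 1.0],
     [0.0, 3.0]]                                 # columns are u1 and u2

det = U[0][0]*U[1][1] - U[0][1]*U[1][0]          # = 6
Uinv = [[ U[1][1]/det, -U[0][1]/det],
        [-U[1][0]/det,  U[0][0]/det]]            # [[1/2, -1/6], [0, 1/3]]
UR = [[Uinv[0][0], Uinv[1][0]],
      [Uinv[0][1], Uinv[1][1]]]                  # transpose: reciprocal basis

x = [3.0, 5.0]
alpha = [Uinv[0][0]*x[0] + Uinv[0][1]*x[1],
         Uinv[1][0]*x[0] + Uinv[1][1]*x[1]]      # [2/3, 5/3]
x_back = [U[0][0]*alpha[0] + U[0][1]*alpha[1],
          U[1][0]*alpha[0] + U[1][1]*alpha[1]]
print(alpha, x_back)                             # x_back recovers (3, 5)
```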
7.4.4 Eigenvalues and eigenvectors

If T is a linear operator, its eigenvalue problem consists of finding a nontrivial solution of the equation

$$Te = \lambda e, \qquad (7.72)$$

where e is called an eigenvector and λ an eigenvalue.

Theorem
The eigenvalues of an operator and its adjoint are complex conjugates of each other.

Proof: Let λ and λ* be the eigenvalues of T and T*, respectively, and let e and e* be the corresponding eigenvectors. Consider then

$$\begin{aligned}
\langle Te, e^* \rangle &= \langle e, T^* e^* \rangle, \\
\langle \lambda e, e^* \rangle &= \langle e, \lambda^* e^* \rangle, \\
\bar{\lambda}\langle e, e^* \rangle &= \lambda^* \langle e, e^* \rangle, \\
\bar{\lambda} &= \lambda^*.
\end{aligned}$$

This holds for ⟨e, e*⟩ ≠ 0, which will hold in general.

Theorem
The eigenvalues of a self-adjoint operator are real.

Proof: Since the operator is self-adjoint, we have

$$\begin{aligned}
\langle Te, e \rangle &= \langle e, Te \rangle, \\
\langle \lambda e, e \rangle &= \langle e, \lambda e \rangle, \\
\bar{\lambda}\langle e, e \rangle &= \lambda \langle e, e \rangle, \\
\bar{\lambda} &= \lambda, \\
\lambda_R - i\lambda_I &= \lambda_R + i\lambda_I, \qquad \lambda_R, \lambda_I \in \mathbb{R}, \\
-\lambda_I &= \lambda_I, \\
\lambda_I &= 0.
\end{aligned}$$
Here we note that for nontrivial eigenvectors ⟨e, e⟩ > 0, so the division can be performed. The only way a complex number can equal its conjugate is if its imaginary part is zero; consequently, the eigenvalue must be strictly real.

Theorem
The eigenvectors of a self-adjoint operator corresponding to distinct eigenvalues are orthogonal.

Proof: Let λ_i and λ_j be two distinct (λ_i ≠ λ_j), real (λ_i, λ_j ∈ R) eigenvalues of the self-adjoint operator T, and let e_i and e_j be the corresponding eigenvectors. Then

$$\begin{aligned}
\langle Te_i, e_j \rangle &= \langle e_i, Te_j \rangle, \\
\langle \lambda_i e_i, e_j \rangle &= \langle e_i, \lambda_j e_j \rangle, \\
\lambda_i \langle e_i, e_j \rangle &= \lambda_j \langle e_i, e_j \rangle, \\
\langle e_i, e_j \rangle\,(\lambda_i - \lambda_j) &= 0, \\
\langle e_i, e_j \rangle &= 0,
\end{aligned}$$

since λ_i ≠ λ_j.

Theorem
The eigenvectors of any self-adjoint operator on vectors of a finite-dimensional vector space constitute a basis for the space.

As discussed by Friedman, the following conditions are sufficient for the eigenvectors in an infinite-dimensional Hilbert space to form a complete basis:

• the operator must be self-adjoint,
• the operator is defined on a finite domain, and
• the operator has no singularities in its domain.

If the operator is not self-adjoint, Friedman (p. 204) discusses how the eigenfunctions of the adjoint operator can be used to obtain the coefficients α_k on the eigenfunctions of the operator.
Example 7.34

For x ∈ R², A : R² → R², find the eigenvalues and eigenvectors of

$$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}.$$

The eigenvalue problem is

$$Ax = \lambda x,$$

which can be written as

$$Ax = \lambda I x,$$
$$(A - \lambda I)\, x = 0,$$

where the identity matrix is

$$I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$

If we write

$$x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},$$

then

$$\begin{pmatrix} 2 - \lambda & 1 \\ 1 & 2 - \lambda \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \qquad (a)$$

By Cramer's rule we could say

$$x_1 = \frac{\det\begin{pmatrix} 0 & 1 \\ 0 & 2-\lambda \end{pmatrix}}{\det\begin{pmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{pmatrix}} = \frac{0}{\det\begin{pmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{pmatrix}}, \qquad
x_2 = \frac{\det\begin{pmatrix} 2-\lambda & 0 \\ 1 & 0 \end{pmatrix}}{\det\begin{pmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{pmatrix}} = \frac{0}{\det\begin{pmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{pmatrix}}.$$
An obvious but uninteresting solution is the trivial solution x_1 = 0, x_2 = 0. Nontrivial solutions of x_1 and x_2 can be obtained only if

$$\det\begin{pmatrix} 2 - \lambda & 1 \\ 1 & 2 - \lambda \end{pmatrix} = 0,$$

which gives the characteristic equation

$$(2 - \lambda)^2 - 1 = 0.$$

Solutions are λ_1 = 1 and λ_2 = 3. The eigenvector corresponding to each eigenvalue is found in the following manner. The eigenvalue is substituted in equation (a). A dependent set of equations in x_1 and x_2 is obtained. The eigenvector solution is thus not unique.

For λ = 1, equation (a) gives

$$\begin{pmatrix} 2-1 & 1 \\ 1 & 2-1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$

which gives the two identical equations

$$x_1 + x_2 = 0.$$

If we choose x_1 = α, then x_2 = −α. So the eigenvector corresponding to λ = 1 is

$$e_1 = \alpha\begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$

Since the magnitude of an eigenvector is arbitrary, we will take α = 1 and thus

$$e_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$

For λ = 3, the equations are

$$\begin{pmatrix} 2-3 & 1 \\ 1 & 2-3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$

which yield the two identical equations

$$-x_1 + x_2 = 0.$$

This yields an eigenvector

$$e_2 = \beta\begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$

We take β = 1, so that

$$e_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Comments:

1. Since the matrix is symmetric (thus self-adjoint), the eigenvalues are real, and the eigenvectors are orthogonal.

2. We have actually solved for the right eigenvectors. This is the usual set of eigenvectors. The left eigenvectors can be found from $\bar{x}^T A = \bar{x}^T I \lambda$. Since here A is equal to its conjugate transpose, the left eigenvectors are the same as the right eigenvectors. More generally, we can say the left eigenvectors of an operator are the right eigenvectors of the adjoint of that operator, $\bar{A}^T$.

3. Multiplication of an eigenvector by any scalar gives another eigenvector.

4. The normalized eigenvectors are

$$e_1 = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{pmatrix}, \qquad e_2 = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix}.$$

5. A natural way to express a vector is on the orthonormal basis, as given below:

$$x = c_1\begin{pmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{pmatrix} + c_2\begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix} = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.$$
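The eigensystem of Example 7.34 is simple enough to verify by hand-rolled arithmetic. The sketch below recovers the eigenvalues from the quadratic formula applied to the characteristic polynomial and checks both eigenvector residuals and their orthogonality:

```python
# Check of Example 7.34: eigenvalues of the symmetric matrix [[2,1],[1,2]]
# from (2 - lam)^2 - 1 = lam^2 - 4 lam + 3 = 0, plus eigenvector residuals
# and orthogonality.
import math

lam1 = (4 - math.sqrt(16 - 12)) / 2          # = 1
lam2 = (4 + math.sqrt(16 - 12)) / 2          # = 3

e1, e2 = (1.0, -1.0), (1.0, 1.0)

def residual(lam, e):                        # components of (A - lam I) e
    return (2*e[0] + e[1] - lam*e[0], e[0] + 2*e[1] - lam*e[1])

print(lam1, lam2)                            # 1.0 3.0
print(residual(lam1, e1), residual(lam2, e2))  # both (0.0, 0.0)
print(e1[0]*e2[0] + e1[1]*e2[1])             # 0.0: eigenvectors orthogonal
```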
Example 7.35

For x ∈ R², A : R² → R², find the eigenvalues and eigenvectors of

$$A = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}.$$

This matrix is not symmetric. We find the eigensystem by solving

$$(A - \lambda I)\, e = 0.$$

The characteristic equation which results is

$$(1 - \lambda)^2 = 0,$$

which has the repeated root λ = 1. For this eigenvalue, the components of the eigenvector satisfy the equation

$$x_2 = 0.$$

Thus only one ordinary eigenvector,

$$e_1 = \alpha\begin{pmatrix} 1 \\ 0 \end{pmatrix},$$

can be found. We arbitrarily take α = 1, so that

$$e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$

We can, however, find a generalized eigenvector g_1 such that

$$(A - \lambda I)\, g_1 = e_1.$$

Note then that

$$(A - \lambda I)(A - \lambda I)\, g_1 = (A - \lambda I)\, e_1,$$
$$(A - \lambda I)^2 g_1 = 0.$$

Take

$$\begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} \beta \\ \gamma \end{pmatrix} = 1\cdot\begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$

We get a solution if β ∈ R, γ = −1. That is,

$$g_1 = \begin{pmatrix} \beta \\ -1 \end{pmatrix}.$$

Take β = 0 to give an orthogonal generalized eigenvector, so that

$$g_1 = \begin{pmatrix} 0 \\ -1 \end{pmatrix}.$$

Note that the ordinary eigenvector and the generalized eigenvector combine to form a basis, in this case an orthonormal basis.
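The defining relations of the generalized eigenvector can be confirmed mechanically. The sketch below applies (A − λI) with λ = 1 and checks that one application of it maps g₁ to e₁ and two applications annihilate g₁:

```python
# Check of Example 7.35: for A = [[1,-1],[0,1]] with repeated eigenvalue
# lam = 1, the generalized eigenvector g1 obeys (A - I) g1 = e1 and
# (A - I)^2 g1 = 0.

def amli(v):                        # apply (A - lam I) with lam = 1
    return (-v[1], 0.0)             # i.e. [[0,-1],[0,0]] v

e1 = (1.0, 0.0)
g1 = (0.0, -1.0)

print(amli(g1))                     # (1.0, 0.0): equals e1
print(amli(amli(g1)))               # (0.0, 0.0): annihilated
```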
Example 7.36

For x ∈ C², A : C² → C², find the eigenvalues, right eigenvectors, and left eigenvectors if

$$A = \begin{pmatrix} 1 & 2 \\ -3 & 1 \end{pmatrix}.$$

The right eigenvector problem is the usual

$$A e_R = \lambda I e_R.$$

The characteristic polynomial is

$$(1 - \lambda)^2 + 6 = 0,$$

which has complex roots. The eigensystem is

$$\lambda_1 = 1 - \sqrt{6}\,i, \quad e_{1R} = \begin{pmatrix} \sqrt{\tfrac{2}{3}}\,i \\ 1 \end{pmatrix}; \qquad \lambda_2 = 1 + \sqrt{6}\,i, \quad e_{2R} = \begin{pmatrix} -\sqrt{\tfrac{2}{3}}\,i \\ 1 \end{pmatrix}.$$

Note that as the operator is not self-adjoint, we are not guaranteed real eigenvalues. The right eigenvectors are not orthogonal, as $\bar{e}_{1R}^T e_{2R} = \tfrac{1}{3}$.

For the left eigenvectors, we have

$$\bar{e}_L^T A = \bar{e}_L^T I \lambda.$$

We can put this in a slightly more standard form by taking the conjugate transpose of both sides:

$$\begin{aligned}
\overline{\left( \bar{e}_L^T A \right)}^{\,T} &= \overline{\left( \bar{e}_L^T I \lambda \right)}^{\,T}, \\
\bar{A}^T e_L &= I \bar{\lambda}\, e_L, \\
A^* e_L &= I \lambda^* e_L.
\end{aligned}$$

So the left eigenvectors of A are the right eigenvectors of the adjoint of A. Now we have

$$\bar{A}^T = \begin{pmatrix} 1 & -3 \\ 2 & 1 \end{pmatrix}.$$

The resulting eigensystem is

$$\lambda_1^* = 1 + \sqrt{6}\,i, \quad e_{1L} = \begin{pmatrix} \sqrt{\tfrac{3}{2}}\,i \\ 1 \end{pmatrix}; \qquad \lambda_2^* = 1 - \sqrt{6}\,i, \quad e_{2L} = \begin{pmatrix} -\sqrt{\tfrac{3}{2}}\,i \\ 1 \end{pmatrix}.$$

Note that the eigenvalues of the adjoint are complex conjugates of those of the original matrix, which does hold for general complex matrices; in addition, here they occur in a complex conjugate pair, which does not hold for general complex matrices. That is, λ* = λ̄. The left eigenvectors are not orthogonal, as $\bar{e}_{1L}^T e_{2L} = -\tfrac{1}{2}$. It is easily shown by taking the conjugate transpose of the adjoint eigenvalue problem, however, that

$$\bar{e}_L^T A = \bar{e}_L^T \lambda,$$

as desired. Note that the eigenvalues for both the left and right eigensystems are the same.
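Python's complex arithmetic makes checking the eigensystem of Example 7.36 straightforward. The sketch below verifies the right eigenpair residual and the left eigenvector relation ē_L^T A = λ ē_L^T:

```python
# Check of Example 7.36: complex right and left eigenvectors of
# A = [[1,2],[-3,1]] using Python complex numbers.
import cmath

A = [[1+0j, 2+0j], [-3+0j, 1+0j]]
lam1 = 1 - cmath.sqrt(6) * 1j
e1R = (cmath.sqrt(2.0/3.0) * 1j, 1+0j)

# Residual of the right eigenvalue problem A e = lam e.
r = (A[0][0]*e1R[0] + A[0][1]*e1R[1] - lam1*e1R[0],
     A[1][0]*e1R[0] + A[1][1]*e1R[1] - lam1*e1R[1])
print(abs(r[0]), abs(r[1]))                       # both ~ 0

# Left eigenvector relation: conj(e1L)^T A = lam1 * conj(e1L)^T,
# where e1L pairs with lam1 since lam1* = conj(lam1).
e1L = (cmath.sqrt(3.0/2.0) * 1j, 1+0j)
row = (e1L[0].conjugate()*A[0][0] + e1L[1].conjugate()*A[1][0],
       e1L[0].conjugate()*A[0][1] + e1L[1].conjugate()*A[1][1])
print(abs(row[0] - lam1*e1L[0].conjugate()),
      abs(row[1] - lam1*e1L[1].conjugate()))      # both ~ 0
```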
Example 7.37

Consider a small change from the previous example. For x ∈ C², A : C² → C², find the eigenvalues, right eigenvectors, and left eigenvectors if

$$A = \begin{pmatrix} 1 & 2 \\ -3 & 1 + i \end{pmatrix}.$$

The right eigenvector problem is the usual

$$A e_R = \lambda e_R.$$

The characteristic polynomial is

$$\lambda^2 - (2 + i)\lambda + (7 + i) = 0,$$

which has complex roots. The eigensystem is

$$\lambda_1 = 1 - 2i, \quad e_{1R} = \begin{pmatrix} i \\ 1 \end{pmatrix}; \qquad \lambda_2 = 1 + 3i, \quad e_{2R} = \begin{pmatrix} -2i \\ 3 \end{pmatrix}.$$

Note that as the operator is not self-adjoint, we are not guaranteed real eigenvalues. The right eigenvectors are not orthogonal, as $\bar{e}_{1R}^T e_{2R} = 1 \neq 0$.

For the left eigenvectors, we solve the corresponding right eigensystem for the adjoint of A, which is A* = \bar{A}^T:

$$\bar{A}^T = \begin{pmatrix} 1 & -3 \\ 2 & 1 - i \end{pmatrix}.$$

The eigenvalue problem is $\bar{A}^T e_L = \lambda^* e_L$. The eigensystem is

$$\lambda_1^* = 1 + 2i, \quad e_{1L} = \begin{pmatrix} 3i \\ 2 \end{pmatrix}; \qquad \lambda_2^* = 1 - 3i, \quad e_{2L} = \begin{pmatrix} -i \\ 1 \end{pmatrix}.$$

Note that here the eigenvalues λ_1*, λ_2* have no relation to each other, but they are complex conjugates of the eigenvalues λ_1, λ_2 of the right eigenvalue problem of the original matrix. The left eigenvectors are not orthogonal, as $\bar{e}_{1L}^T e_{2L} = -1$. It is easily shown, however, that

$$\bar{e}_L^T A = \bar{e}_L^T \lambda,$$

as desired.
Example 7.38

For x ∈ R³, A : R³ → R³, find the eigenvalues and eigenvectors of

$$A = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix}.$$

From

$$\det\begin{pmatrix} 2-\lambda & 0 & 0 \\ 0 & 1-\lambda & 1 \\ 0 & 1 & 1-\lambda \end{pmatrix} = 0,$$

the characteristic equation is

$$(2 - \lambda)\left( (1 - \lambda)^2 - 1 \right) = 0.$$

The solutions are λ = 0, 2, 2. The second eigenvalue is of multiplicity two. Next we find the eigenvectors

$$e = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}.$$

For λ = 0, the equations for the components of the eigenvectors are

$$2 x_1 = 0,$$
$$x_2 + x_3 = 0,$$

from which

$$e_1 = \alpha\begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}.$$

For λ = 2, we have

$$-x_2 + x_3 = 0.$$

We then see that the following eigenvector satisfies the equations:

$$e = \begin{pmatrix} \beta \\ \gamma \\ \gamma \end{pmatrix}.$$

Here we have two free parameters, β and γ; we can thus extract two independent eigenvectors from this. For e_2 we arbitrarily take β = 0 and γ = 1 to get

$$e_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}.$$

For e_3 we arbitrarily take β = 1 and γ = 0 to get

$$e_3 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}.$$

In this case e_1, e_2, e_3 are orthogonal, even though e_2 and e_3 correspond to the same eigenvalue.
Example 7.39

For y ∈ L²[0, 1], find the eigenvalues and eigenvectors of L = −d²/dt², operating on functions which vanish at 0 and 1.

The eigenvalue problem is

$$Ly = -\frac{d^2 y}{dt^2} = \lambda y, \qquad y(0) = y(1) = 0,$$

or

$$\frac{d^2 y}{dt^2} + \lambda y = 0, \qquad y(0) = y(1) = 0.$$

The solution of this differential equation is

$$y(t) = a\sin\left(\lambda^{1/2} t\right) + b\cos\left(\lambda^{1/2} t\right).$$

The boundary condition y(0) = 0 gives b = 0. The other condition, y(1) = 0, gives a sin λ^{1/2} = 0. A nontrivial solution can only be obtained if

$$\sin\lambda^{1/2} = 0.$$

There are an infinite but countable number of values of λ for which this can be satisfied. These are λ_n = n²π², n = 1, 2, …. The eigenvectors (also called eigenfunctions in this case) y_n(t), n = 1, 2, …, are

$$y_n(t) = \sin(n\pi t).$$

The differential operator is self-adjoint, so the eigenvalues are real and the eigenfunctions are orthogonal.
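The eigenpairs of Example 7.39 can be spot-checked numerically. The sketch below approximates y'' with a centered finite difference at an arbitrary interior point and confirms −y'' ≈ n²π² y for the first few modes (the sample point and step size are arbitrary choices):

```python
# Check of Example 7.39: y_n(t) = sin(n pi t) satisfies -y'' = lam y with
# lam_n = n^2 pi^2, verified with a centered finite difference at an
# interior sample point.
import math

def check(n, t, h=1e-4):
    y = lambda s: math.sin(n * math.pi * s)
    ypp = (y(t - h) - 2*y(t) + y(t + h)) / (h * h)   # y''(t), approximately
    return -ypp, (n * math.pi)**2 * y(t)             # lhs vs rhs of -y'' = lam y

for n in (1, 2, 3):
    lhs, rhs = check(n, 0.3)
    print(n, lhs, rhs)                               # agree to ~4-5 digits
```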
Example 7.40

For x ∈ L²[0, 1] and L = d²/ds² + d/ds with x(0) = x(1) = 0, find the Fourier expansion of an arbitrary function f(s) in terms of the eigenfunctions of L. That is, we seek expressions for α_n in

$$f(s) = \sum_{n=1}^{N} \alpha_n x_n(s),$$

where x_n(s) is an eigenfunction of L.

The eigenvalue problem is

$$Lx = \frac{d^2 x}{ds^2} + \frac{dx}{ds} = \lambda x, \qquad x(0) = x(1) = 0.$$

It is easily shown that the eigenvalues of L are given by

$$\lambda_n = -\frac{1}{4} - n^2\pi^2,$$

where n is a positive integer, and the unnormalized eigenfunctions of L are

$$x_n(s) = e^{-s/2}\sin(n\pi s).$$

Although the eigenvalues are real, the eigenfunctions are not orthogonal. We see this, for example, by forming ⟨x_1, x_2⟩:

$$\langle x_1, x_2 \rangle = \int_0^1 e^{-s/2}\sin(\pi s)\, e^{-s/2}\sin(2\pi s)\, ds = \frac{4(1 + e)\pi^2}{e(1 + \pi^2)(1 + 9\pi^2)} \neq 0.$$
By using integration by parts, we calculate the adjoint operator to be

$$L^* y = \frac{d^2 y}{ds^2} - \frac{dy}{ds} = \lambda^* y, \qquad y(0) = y(1) = 0.$$

We then find the eigenvalues of the adjoint operator to be the same as those of the operator (this is true because the eigenvalues are real; in general they are complex conjugates of one another):

$$\lambda_m^* = \lambda_m = -\frac{1}{4} - m^2\pi^2,$$

where m is a positive integer.

The unnormalized eigenfunctions of the adjoint are

$$y_m(s) = e^{s/2}\sin(m\pi s).$$

Now, since by definition ⟨y_m, Lx_n⟩ = ⟨L*y_m, x_n⟩, we have

$$\begin{aligned}
\langle y_m, Lx_n \rangle - \langle L^* y_m, x_n \rangle &= 0, \\
\langle y_m, \lambda_n x_n \rangle - \langle \lambda_m^* y_m, x_n \rangle &= 0, \\
\lambda_n \langle y_m, x_n \rangle - \lambda_m^* \langle y_m, x_n \rangle &= 0, \\
(\lambda_n - \lambda_m)\langle y_m, x_n \rangle &= 0.
\end{aligned}$$

So for m = n we can have ⟨y_n, x_n⟩ ≠ 0, while for m ≠ n we must have ⟨y_m, x_n⟩ = 0. Thus we have the so-called biorthogonality condition

$$\langle y_m, x_n \rangle = K\delta_{mn}.$$

Here K is a real number which can be set to 1 with proper normalization.
Now consider the following series of operations on the original form of the expansion we seek
f(s) =
N
¸
i=n
α
n
x
n
(s).
<y
j
(s), f(s)> = <y
j
(s),
N
¸
i=n
α
n
x
n
(s)>.
224
<y
j
(s), f(s)> = α
j
<y
j
(s), x
j
(s)>.
α
j
=
<y
j
(s), f(s)>
<y
j
(s), x
j
(s)>
,
α
n
=
<y
n
(s), f(s)>
<y
n
(s), x
n
(s)>
,
Now in the case at hand, it is easily shown that
<y
n
(s), x
n
(s)> =
1
2
,
so we have
α
n
= 2<y
n
(s), f(s)>.
The Nterm approximate representation of f(s) is thus given by
f(s) ∼
N
¸
n=1
2
1
0
e
t/2
sin (nπt) f(t) dt
e
−s/2
sin (nπs) .
In this exercise, the eigenfunctions of the adjoint are actually the reciprocal basis functions. We see that getting the Fourier coefficients for eigenfunctions of a non-self-adjoint operator requires consideration of the adjoint operator. We also note that in problems of practical significance it is often a difficult exercise to actually find the adjoint operator and its eigenfunctions.
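The biorthogonality relation and the coefficient formula α_n = 2<y_n, f> are easy to check numerically. A minimal sketch, assuming simple trapezoidal quadrature (the choice f(s) = s is an arbitrary illustration, not from the text):

```python
import numpy as np

s = np.linspace(0.0, 1.0, 4001)

def ip(u, v):
    # inner product <u, v> on [0, 1] via the trapezoidal rule
    w = u * v
    return float(np.sum((w[:-1] + w[1:]) * (s[1] - s[0]) / 2.0))

def x_n(n):  # eigenfunctions of L
    return np.exp(-s / 2.0) * np.sin(n * np.pi * s)

def y_n(n):  # eigenfunctions of the adjoint L*
    return np.exp(s / 2.0) * np.sin(n * np.pi * s)

# biorthogonality: <y_m, x_n> = (1/2) delta_mn
print(ip(y_n(1), x_n(2)))   # ~ 0
print(ip(y_n(2), x_n(2)))   # ~ 1/2

# N-term representation of an arbitrary f: alpha_n = 2 <y_n, f>
f = s                        # illustrative choice of f(s)
N = 40
fa = sum(2.0 * ip(y_n(n), f) * x_n(n) for n in range(1, N + 1))
i_mid = np.argmin(np.abs(s - 0.5))
print(abs(fa[i_mid] - f[i_mid]))   # small in the interior of [0, 1]
```

Note the representation converges slowly near s = 1, where every x_n vanishes but f need not.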
7.5 Equations
The existence and uniqueness of the solution x of the equation
Lx = y
for a given linear operator L and given y is governed by the following theorems.
Theorem

If the range of L is closed, Lx = y has a solution if and only if y is orthogonal to every solution of the adjoint homogeneous equation L*z = 0.
Theorem
The solution of Lx = y is nonunique if the solution of the homogeneous equation Lx = 0
is also nonunique, and conversely.
There are two basic ways in which the equation can be solved.
1. Inverse: If an inverse of L exists, then

x = L^{-1} y.  (7.73)
2. Eigenvector expansion: Assume that x, y belong to a vector space S and the eigenvectors (e_1, e_2, …) of L span S. Then we can write

y = Σ_n α_n e_n,  (7.74)

x = Σ_n β_n e_n,  (7.75)

where the α's are known and the β's are unknown. We get Lx = Σ_n β_n L e_n = Σ_n β_n λ_n e_n, where the λ's are the eigenvalues of L. Comparing the expressions for Lx and y, we find that

β_n λ_n = α_n.  (7.76)
If all λ_n ≠ 0, then β_n = α_n/λ_n, and we have the unique solution

x = Σ_n (α_n/λ_n) e_n.

If, however, one of the λ's, say λ_k, is zero, we still have β_n = α_n/λ_n for n ≠ k. For n = k, there are two possibilities:

(a) If α_k ≠ 0, no solution is possible since equation (7.76) is not satisfied for n = k.

(b) If α_k = 0, we have the nonunique solution

x = Σ_{n≠k} (α_n/λ_n) e_n + γ e_k,

where γ is an arbitrary scalar. Equation (7.76) is satisfied ∀ n.
Example 7.41

Solve for x in Lx = y if L = d²/dt², with side conditions x(0) = x(1) = 0, and y(t) = 2t, via an eigenfunction expansion.

This problem of course has an exact solution via straightforward integration:

d²x/dt² = 2t;  x(0) = x(1) = 0,

integrates to yield

x(t) = (t/3)(t² - 1).

However, let's use the series expansion technique. This can be more useful in other problems in which exact solutions do not exist. First, find the eigenvalues and eigenfunctions of the operator:

d²x/dt² = λx;  x(0) = x(1) = 0.

This has general solution

x(t) = A sin(√(-λ) t) + B cos(√(-λ) t).
Figure 7.9: Approximate and exact solution x(t); error in solution x_a(t) - x(t).
To satisfy the boundary conditions, we require that B = 0 and λ = -n²π², so

x(t) = A sin(nπt).
This suggests that we expand y(t) = 2t in a Fourier sine series. We know from an earlier problem that the Fourier sine series for y(t) = 2t is

2t = Σ_{n=1}^{∞} ( 4(-1)^{n+1}/(nπ) ) sin(nπt).
For x(t) then we have

x(t) = Σ_{n=1}^{∞} (α_n/λ_n) e_n = Σ_{n=1}^{∞} ( 4(-1)^{n+1}/((nπ) λ_n) ) sin(nπt).

Substituting in λ_n = -n²π², we get

x(t) = Σ_{n=1}^{∞} ( 4(-1)^{n+1}/(-nπ)³ ) sin(nπt).
Retaining only two terms in the expansion for x(t),

x(t) ≈ -(4/π³) sin(πt) + (1/(2π³)) sin(2πt),

gives a very good approximation for the solution, which, as shown in Figure 7.9, has a peak error of about 0.008.
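The quoted accuracy of the two-term estimate can be checked directly; a brief numerical sketch:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
exact = (t / 3.0) * (t**2 - 1.0)

def x_series(N):
    # N-term eigenfunction expansion: sum of 4(-1)^(n+1)/(-n*pi)^3 sin(n*pi*t)
    return sum(4.0 * (-1.0)**(n + 1) / (-n * np.pi)**3 * np.sin(n * np.pi * t)
               for n in range(1, N + 1))

x2 = x_series(2)
print(np.max(np.abs(x2 - exact)))   # peak error near 0.008, as quoted
```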
Example 7.42

Solve Ax = y using the eigenvector expansion technique when

A = [ 2 1; 1 2 ],  y = (3, 4)^T.

We already know from an earlier example that for A

λ_1 = 1, e_1 = (1, -1)^T,
λ_2 = 3, e_2 = (1, 1)^T.
We want to express y as

y = c_1 e_1 + c_2 e_2.

Since the eigenvectors are orthogonal, we have

c_1 = <e_1, y>/<e_1, e_1> = (3 - 4)/(1 + 1) = -1/2,

c_2 = <e_2, y>/<e_2, e_2> = (3 + 4)/(1 + 1) = 7/2,

so

y = -(1/2) e_1 + (7/2) e_2.
Then

x = -(1/2)(1/λ_1) e_1 + (7/2)(1/λ_2) e_2,  (7.77)

x = -(1/2)(1/1) e_1 + (7/2)(1/3) e_2,  (7.78)

x = -(1/2) (1, -1)^T + (7/6) (1, 1)^T,  (7.79)

x = (2/3, 5/3)^T.  (7.80)
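The computation above can be mirrored with a few lines of numpy; a sketch (numpy normalizes the eigenvectors differently, but the reconstructed x is identical):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
y = np.array([3.0, 4.0])

# symmetric matrix: eigh returns real eigenvalues and orthonormal eigenvectors
lam, E = np.linalg.eigh(A)

c = E.T @ y          # expansion coefficients c_i = <e_i, y>
x = E @ (c / lam)    # x = sum over i of (c_i / lambda_i) e_i
print(x)             # ~ [2/3, 5/3], matching (7.80)
```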
Example 7.43

Solve Ax = y using the eigenvector expansion technique when

A = [ 2 1; 4 2 ],  y = (3, 4)^T,  y = (3, 6)^T.

It is easily shown that for A

λ_1 = 4, e_1 = (1, 2)^T,
λ_2 = 0, e_2 = (-1, 2)^T.
First consider y = (3, 4)^T. We want to express y as

y = c_1 e_1 + c_2 e_2.

For this nonsymmetric matrix, the eigenvectors are linearly independent, so they form a basis. However, they are not orthogonal, so there is no direct way to compute c_1 and c_2. Matrix inversion shows that c_1 = 5/2 and c_2 = -1/2, so

y = (5/2) e_1 - (1/2) e_2.
Since the eigenvectors form a basis, y can be represented with an eigenvector expansion. However, no solution for x exists because λ_2 = 0 and c_2 ≠ 0; hence the coefficient β_2 = c_2/λ_2 does not exist.
However, for y = (3, 6)^T, we can say that

y = 3 e_1 + 0 e_2.

Consequently,

x = (c_1/λ_1) e_1 + (c_2/λ_2) e_2,  (7.81)

  = (c_1/λ_1) e_1 + (0/0) e_2,  (7.82)

  = (3/4) e_1 + γ e_2,  (7.83)

  = (3/4) (1, 2)^T + γ (-1, 2)^T,  (7.84)

  = (3/4 - γ, 3/2 + 2γ)^T,  (7.85)

where γ is an arbitrary constant.
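Every member of this one-parameter family indeed solves A x = y, which is quick to verify numerically; a sketch (the sample γ values are arbitrary):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 2.0]])
y = np.array([3.0, 6.0])

e2 = np.array([-1.0, 2.0])       # eigenvector for lambda_2 = 0: A e2 = 0
print(A @ e2)                    # ~ [0, 0]

for gamma in (-1.0, 0.0, 2.5):   # arbitrary sample values of gamma
    x = np.array([3.0 / 4.0 - gamma, 3.0 / 2.0 + 2.0 * gamma])
    print(A @ x)                 # ~ [3, 6] for every gamma
```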
7.6 Method of weighted residuals
The method of weighted residuals is a quite general technique to solve equations. Two
important methods which have widespread use in the engineering world, spectral methods
and the even more pervasive ﬁnite element method, are special types of weighted residual
methods.
Consider the differential equation

Ly = f(t),  t ∈ [a, b],  (7.86)

with certain boundary conditions. L is a differential operator that is not necessarily linear. We will work with functions and inner products in L²[a, b] space.
Approximate y(t) by

y(t) ≈ y_a(t) = Σ_{j=1}^{n} c_j φ_j(t),  (7.87)

where φ_j(t), (j = 1, …, n) are linearly independent functions (called trial functions) which satisfy the boundary conditions. Forcing the trial functions to satisfy the boundary conditions, in addition to having aesthetic appeal, makes it much more likely that if convergence is obtained, the convergence will be to a solution which satisfies the differential equation and boundary conditions. The trial functions can be orthogonal or non-orthogonal.^17 The constants c_j, (j = 1, …, n) are to be determined. Substituting into the equation, we get a residual error

e(t) = L y_a(t) - f(t).  (7.88)

17 It is occasionally advantageous, especially in the context of what are known as wavelet-based methods, to add extra functions which are linearly dependent into the set of trial functions. Such a basis is known as a frame. We will not consider these here; some background is given by Daubechies.
This error will almost always be nonzero for t ∈ [a, b]. We can, however, choose c_j such that an error, averaged over the domain, is zero. To achieve this, we select now a set of linearly independent weight functions ψ_j(t), (j = 1, …, n) and make them orthogonal to the residual error. Thus

<ψ_i(t), e(t)> = 0,  i = 1, …, n.  (7.89)

These are n equations for the constants c_j.
There are several special ways in which the weight functions can be selected.

1. Galerkin^18: ψ_i(t) = φ_i(t).

2. Collocation: ψ_i(t) = δ(t - t_i). Thus e(t_i) = 0.

3. Subdomain: ψ_i(t) = 1 for t_{i-1} ≤ t < t_i and zero everywhere else. Note that these functions are orthogonal to each other. Also, this method is easily shown to reduce to the well-known finite volume method.

4. Least squares: Minimize ||e(t)||. This gives

∂||e||²/∂c_j = ∂/∂c_j ∫_a^b e² dt = 2 ∫_a^b e (∂e/∂c_j) dt.

So this method corresponds to ψ_j = ∂e/∂c_j.

5. Moments: ψ_i(t) = t^i, i = 0, 1, ….
If the trial functions are orthogonal and the method is Galerkin, we will, following Fletcher, who builds on the work of Finlayson, define the method to be a spectral method. Other less restrictive definitions are in common usage in the present literature, and there is no single consensus on what precisely constitutes a spectral method.^19
18 Boris Grigorievich Galerkin, 1871-1945, Belarussian-born Russian-based engineer and mathematician.

19 An important school in spectral methods, exemplified in the work of Gottlieb and Orszag, Canuto, et al., and Fornberg, uses a looser nomenclature, which is not always precisely defined. In these works, spectral methods are distinguished from finite difference methods and finite element methods in that spectral methods employ basis functions which have global rather than local support; that is, spectral methods' basis functions have nonzero values throughout the entire domain. While orthogonality of the basis functions within a Galerkin framework is often employed, it is not demanded that this be the distinguishing feature by those authors. Within this school, less emphasis is placed on the framework of the method of weighted residuals, and the spectral method is divided into subclasses known as Galerkin, tau, and collocation. The collocation method this school defines is identical to that defined here, and is also called by this school the "pseudospectral" method. In nearly all understandings of the word "spectral," a convergence rate which is more rapid than those exhibited by finite difference or finite element methods exists. In fact, the accuracy of a spectral method should grow exponentially with the number of nodes, as opposed to that for a finite difference or finite element method, whose accuracy grows only with the number of nodes raised to some power.

Another concern which arises with methods of this type is how many terms are necessary to properly
Example 7.44

For x ∈ L²[0, 1], find a one-term approximate solution of the equation

d²x/dt² + x = t - 1,

with x(0) = -1, x(1) = 1.

It is easy to show that the exact solution is

x(t) = -1 + t + csc(1) sin(t).
Here we will see how well the method of weighted residuals can approximate this known solution. The real value of the method is for problems in which exact solutions are not known.

Let y = x - (2t - 1), so that y(0) = y(1) = 0. The transformed differential equation is

d²y/dt² + y = -t.
Let us consider a one-term approximation y ≈ y_a(t) = cφ(t). There are many choices of basis functions φ(t). Let's try finite-dimensional nontrivial polynomials which match the boundary conditions. If we choose φ(t) = a, a constant, we must take a = 0 to satisfy the boundary conditions, so this does not work. If we choose φ(t) = a + bt, we must take a = 0, b = 0 to satisfy both boundary conditions, so this also does not work. We can find a quadratic polynomial which is nontrivial and satisfies both boundary conditions: φ(t) = t(1 - t). Then

y_a(t) = ct(1 - t).
We have to determine c. Substituting into the equation, the residual error is found to be

e(t) = L y_a - f(t) = d²y_a/dt² + y_a - f(t),

e(t) = -2c + ct(1 - t) - (-t) = t - c(t² - t + 2).
Then we choose c such that

<ψ(t), e(t)> = <ψ(t), t - c(t² - t + 2)> = ∫_0^1 ψ(t) ( t - c(t² - t + 2) ) dt = 0.
The form of the weighting function ψ is dictated by the particular method we choose:

1. Galerkin: ψ(t) = φ(t) = t(1 - t). The inner product gives 1/12 - 3c/10 = 0, so that c = 5/18 = 0.277.

y_a(t) = 0.277 t(1 - t).
x_a(t) = 0.277 t(1 - t) + 2t - 1.

2. Collocation: Choose ψ(t) = δ(t - 1/2), which gives -(7/2)c + 1 = 0, from which c = 2/7 = 0.286.

y_a(t) = 0.286 t(1 - t).
x_a(t) = 0.286 t(1 - t) + 2t - 1.
(Footnote 19, continued:) model the desired frequency level. For example, take our equation to be d²u/dt² = 1 + u²; u(0) = u(π) = 0, and take u = Σ_{n=1}^{N} a_n sin(nt). If N = 1, we get e(t) = -a_1 sin t - 1 - a_1² sin² t. Expanding the square of the sin term, we see the error has higher-order frequency content: e(t) = -a_1 sin t - 1 - a_1²(1/2 - (1/2) cos(2t)). The result is that if we want to get things right at a given level, we may have to reach outside that level. How far outside we have to reach will be problem dependent.
Figure 7.10: One-term estimate x_a(t) and exact solution x(t) for x'' + x = t - 1, x(0) = -1, x(1) = 1; error in solution x_a(t) - x(t).
3. Subdomain: ψ(t) = 1, from which -(11/6)c + 1/2 = 0, and c = 3/11 = 0.273.

y_a(t) = 0.273 t(1 - t).
x_a(t) = 0.273 t(1 - t) + 2t - 1.

4. Least squares: ψ(t) = ∂e(t)/∂c = -t² + t - 2. Thus -11/12 + (101/30)c = 0, from which c = 55/202 = 0.272.

y_a(t) = 0.272 t(1 - t).
x_a(t) = 0.272 t(1 - t) + 2t - 1.

5. Moments: ψ(t) = 1, which, for this case, is the same as the subdomain method above.

y_a(t) = 0.273 t(1 - t).
x_a(t) = 0.273 t(1 - t) + 2t - 1.
The approximate solution determined by the Galerkin method is overlaid against the exact solution in
Figure 7.10. Also shown is the error in the approximation. The approximation is surprisingly accurate.
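Since the residual e(t) = t - c(t² - t + 2) is linear in c, each of the methods above reduces to a single quadrature ratio for c. A numerical sketch reproducing the values of c quoted in the example (simple trapezoidal quadrature on a fine grid):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100001)

def integral(g):
    # composite trapezoidal rule on [0, 1]
    return float(np.sum((g[:-1] + g[1:]) * (t[1] - t[0]) / 2.0))

# each weighted-residual method gives c = <psi, t> / <psi, t^2 - t + 2>
weights = {
    "Galerkin":      t * (1.0 - t),       # psi = phi
    "subdomain":     np.ones_like(t),     # psi = 1 (moments is identical here)
    "least squares": -(t**2 - t + 2.0),   # psi = de/dc
}
c = {name: integral(psi * t) / integral(psi * (t**2 - t + 2.0))
     for name, psi in weights.items()}
c["collocation"] = 0.5 / (0.25 - 0.5 + 2.0)   # e(1/2) = 0 solved directly
for name, val in c.items():
    print(name, round(val, 4))
```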
Some simpliﬁcation can arise through use of integration by parts. This has the result of
admitting basis functions which have less stringent requirements on the continuity of their
derivatives. It is also a commonly used strategy in the ﬁnite element technique.
Example 7.45

Consider a slight variant of the previous example problem, and employ integration by parts:

d²y/dt² + y = f(t),  y(0) = 0, y(1) = 0.

Again, take a one-term expansion

y_a(t) = cφ(t).

At this point, we will only require φ(t) to satisfy the boundary conditions, and will specify it later. The error in the approximation is

e(t) = d²y_a/dt² + y_a - f(t) = c d²φ/dt² + cφ - f(t).
Now set a weighted error to zero. We will also require the weighting function ψ(t) to vanish at the boundaries:

<ψ, e> = ∫_0^1 ψ(t) ( c d²φ/dt² + cφ(t) - f(t) ) dt = 0.

Rearranging, we get

c ∫_0^1 ( ψ(t) d²φ/dt² + ψ(t)φ(t) ) dt = ∫_0^1 ψ(t) f(t) dt.

Now integrate by parts to get

c ( [ ψ(t) dφ/dt ]_0^1 + ∫_0^1 ( ψ(t)φ(t) - (dψ/dt)(dφ/dt) ) dt ) = ∫_0^1 ψ(t) f(t) dt.

Since we have required ψ(0) = ψ(1) = 0, this simplifies to

c ∫_0^1 ( ψ(t)φ(t) - (dψ/dt)(dφ/dt) ) dt = ∫_0^1 ψ(t) f(t) dt.
So the basis function φ only needs an integrable first derivative rather than an integrable second derivative. As an aside, we note that the term on the left-hand side bears resemblance (but differs by a sign) to an inner product in the Sobolev space W_2^1[0, 1], in which the Sobolev inner product <·,·>_s (an extension of the inner product for Hilbert space) is <ψ(t), φ(t)>_s = ∫_0^1 ( ψ(t)φ(t) + (dψ/dt)(dφ/dt) ) dt.
Taking now, as before, φ = t(1 - t), and then choosing a Galerkin method so that ψ(t) = φ(t) = t(1 - t), and f(t) = -t, we get

c ∫_0^1 ( t²(1 - t)² - (1 - 2t)² ) dt = ∫_0^1 t(1 - t)(-t) dt,

which gives

c (-3/10) = -1/12,

so

c = 5/18,

as was found earlier. So

y_a = (5/18) t(1 - t)

with the Galerkin method.
Example 7.46

For y ∈ L²[0, 1], find a two-term spectral approximation (which by our definition of "spectral" mandates a Galerkin formulation) to the solution of

d²y/dt² + √t y = 1;  y(0) = 0, y(1) = 0.

Let's try polynomial basis functions. At a minimum, these basis functions must satisfy the boundary conditions. Assumption of the first basis function to be constant or linear gives rise to a trivial basis function when the boundary conditions are enforced. The first nontrivial basis function is a quadratic:

φ_1(t) = a_0 + a_1 t + a_2 t².
We need φ_1(0) = 0 and φ_1(1) = 0. The first condition gives a_0 = 0; the second gives a_1 = -a_2, so we have φ_1 = a_1(t - t²). Since the magnitude of a basis function is arbitrary, a_1 can be set to unity to give

φ_1(t) = t(1 - t).

Alternatively, we could have chosen the magnitude in such a fashion as to guarantee an orthonormal basis function, but that is a secondary concern for the purposes of this example.
We need a second linearly independent basis function for the two-term approximation. We try a third-order polynomial:

φ_2(t) = b_0 + b_1 t + b_2 t² + b_3 t³.

Enforcing the boundary conditions as before gives b_0 = 0 and b_1 = -(b_2 + b_3), so

φ_2(t) = -(b_2 + b_3) t + b_2 t² + b_3 t³.
To achieve a spectral method (which in general is not necessary to achieve an approximate solution!), we enforce <φ_1, φ_2> = 0:

∫_0^1 t(1 - t) ( -(b_2 + b_3) t + b_2 t² + b_3 t³ ) dt = 0,

-b_2/30 - b_3/20 = 0,

b_2 = -(3/2) b_3.

Substituting and factoring gives

φ_2(t) = (b_3/2) t(1 - t)(1 - 2t).

Again, because φ_2 is a basis function, the lead constant is arbitrary; we take for convenience b_3 = -2 to give

φ_2 = t(1 - t)(2t - 1).

Again, b_3 could alternatively have been chosen to yield an orthonormal basis function.
Now we want to choose c_1 and c_2 so that our approximate solution

y_a(t) = c_1 φ_1(t) + c_2 φ_2(t)

has a zero weighted error. With

L = d²/dt² + √t,

we have the error as

e(t) = L y_a(t) - f(t) = L(c_1 φ_1(t) + c_2 φ_2(t)) - 1 = c_1 Lφ_1(t) + c_2 Lφ_2(t) - 1.
To drive the weighted error to zero, take

<ψ_1, e> = c_1 <ψ_1, Lφ_1> + c_2 <ψ_1, Lφ_2> - <ψ_1, 1> = 0,

<ψ_2, e> = c_1 <ψ_2, Lφ_1> + c_2 <ψ_2, Lφ_2> - <ψ_2, 1> = 0.

This is easily cast in matrix form as a linear system of equations for the unknowns c_1 and c_2:

[ <ψ_1, Lφ_1>  <ψ_1, Lφ_2> ] [ c_1 ]   [ <ψ_1, 1> ]
[ <ψ_2, Lφ_1>  <ψ_2, Lφ_2> ] [ c_2 ] = [ <ψ_2, 1> ].
Figure 7.11: Two-term spectral (Galerkin) estimate y_a(t) overlaid on a highly accurate numerical solution y(t); error in approximation y_a(t) - y(t).
We choose the Galerkin method, and thus set ψ_1 = φ_1 and ψ_2 = φ_2, so

[ <φ_1, Lφ_1>  <φ_1, Lφ_2> ] [ c_1 ]   [ <φ_1, 1> ]
[ <φ_2, Lφ_1>  <φ_2, Lφ_2> ] [ c_2 ] = [ <φ_2, 1> ].
Each of the inner products represents a definite integral which is easily evaluated via computer algebra. For example,

<φ_1, Lφ_1> = ∫_0^1 t(1 - t) ( -2 + (1 - t) t^{3/2} ) dt = -215/693.
When each inner product is evaluated, the following system results:

[ -215/693    16/9009  ] [ c_1 ]   [ 1/6 ]
[  16/9009   -197/1001 ] [ c_2 ] = [  0  ].

Inverting the system, it is found that

c_1 = -760617/1415794 = -0.537,  c_2 = -3432/707897 = -0.00485.
Thus the estimate for the solution is

y_a(t) = -0.537 t(1 - t) - 0.00485 t(1 - t)(2t - 1).

The two-term approximate solution is overlaid against a more accurate solution obtained by numerical integration of the full equation in Figure 7.11. Also shown is the error in the approximation. The two-term solution is surprisingly accurate.
Example 7.47

For the equation of the previous example,

d²y/dt² + √t y = 1;  y(0) = 0, y(1) = 0,

examine the convergence rates for a collocation method as the number of modes becomes large.
Let us consider a set of trial functions which do not happen to be orthogonal, but are, of course, linearly independent. Take

φ_i(t) = t^i (t - 1),  i = 1, …, n.

So we seek to find a vector c = c_i, i = 1, …, n, such that for a given number of collocation points n the approximation

y_n(t) = c_1 φ_1(t) + … + c_i φ_i(t) + … + c_n φ_n(t)

drives a weighted error to zero. Obviously each of these trial functions satisfies both boundary conditions, and they have the advantage of being easy to program for an arbitrary number of modes, as no Gram-Schmidt orthogonalization process is necessary. The details of the analysis are similar to those of the previous example, except we perform it many times, varying the number of nodes in each calculation. For the collocation method, we take the weighting functions to be

ψ_j(t) = δ(t - t_j),  j = 1, …, n.

Here we choose t_j = j/(n + 1), j = 1, …, n, so that the collocation points are evenly distributed in t ∈ [0, 1]. We then form the matrix
A = [ <ψ_1, Lφ_1>  <ψ_1, Lφ_2>  …  <ψ_1, Lφ_n>
      <ψ_2, Lφ_1>  <ψ_2, Lφ_2>  …  <ψ_2, Lφ_n>
      …
      <ψ_n, Lφ_1>  <ψ_n, Lφ_2>  …  <ψ_n, Lφ_n> ],
and the vector

b = ( <ψ_1, 1>, …, <ψ_n, 1> )^T,

and then solve for c in

A c = b.
We then perform this calculation for n = 1, …, N. We consider n = N to give the most exact solution, and calculate an error by finding the norm of the difference of the solution for n < N and that at n = N:

e_n = ||y_n(t) - y_N(t)||_2 = ( ∫_0^1 ( y_n(t) - y_N(t) )² dt )^{1/2}.
A plot of the error e_n as a function of n is given in Figure 7.12. We notice, even on a logarithmic plot, that the error reduction accelerates as the number of nodes n increases. If the slope had relaxed to a constant, then the convergence would be power law convergence, which is characteristic of finite difference and finite element methods. For this example of the method of weighted residuals, we see that the rate of convergence increases as the number of nodes increases, which is characteristic of exponential convergence. For exponential convergence, we have e_n ~ exp(-αn), where α is some positive constant; for power law convergence, we have e_n ~ n^{-β}, where β is some positive constant. At the highest value of n, n = 10, we have a local convergence rate of O(n^{-21.9}), which is remarkably fast. In comparison, a second-order finite difference technique will converge at a rate of O(n^{-2}). In general, and if possible, one would choose a method with the fastest convergence rate, all else being equal.
Figure 7.12: Error ||y_n - y_N||_2 in solution y_n(t) - y_N(t) as a function of number of collocation points n; at the largest n the local behavior is ||y_n - y_N||_2 ~ n^{-21.9}.
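The convergence study of this example is straightforward to reproduce; a minimal numerical sketch (note the monomial trial functions become ill-conditioned for large n, which is one practical price of this basis):

```python
import numpy as np

def solve_collocation(n):
    """n-mode collocation for y'' + sqrt(t) y = 1, y(0) = y(1) = 0,
    with trial functions phi_i(t) = t**i (t - 1)."""
    tj = np.arange(1, n + 1) / (n + 1.0)   # evenly spaced collocation points
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        phi = tj**(i + 1) - tj**i
        d2phi = (i + 1) * i * tj**(i - 1) - i * (i - 1) * tj**(i - 2)
        A[:, i - 1] = d2phi + np.sqrt(tj) * phi   # (L phi_i)(t_j)
    return np.linalg.solve(A, np.ones(n))

def evaluate(c, t):
    return sum(c[i - 1] * (t**(i + 1) - t**i) for i in range(1, len(c) + 1))

t = np.linspace(0.0, 1.0, 20001)
dt = t[1] - t[0]
yN = evaluate(solve_collocation(10), t)    # n = N = 10 reference solution
errs = {}
for n in (2, 4, 8):
    w = (evaluate(solve_collocation(n), t) - yN)**2
    errs[n] = float(np.sqrt(np.sum((w[:-1] + w[1:]) * dt / 2.0)))
    print(n, errs[n])                      # error falls rapidly with n
```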
Problems
1. Use a one-term collocation method with a polynomial basis function to find an approximation for

y'''' + (1 + x)y = 1

with y(0) = y'(0) = y''(1) = y'''(1) = 0.
2. Use two-term spectral, collocation, subdomain, least squares and moments methods to solve the equation

y'''' + (1 + x)y = 1

with y(0) = y'(0) = y(1) = y'(1) = 0. Compare graphically with the exact solution.
3. If x_1, x_2, …, x_n and y_1, y_2, …, y_n are real numbers, show that

( Σ_{i=1}^{n} x_i y_i )² ≤ ( Σ_{i=1}^{n} x_i² ) ( Σ_{i=1}^{n} y_i² ).
4. If x, y ∈ X, an inner product space, and x is orthogonal to y, then show that ||x + αy|| = ||x - αy||, where α is a scalar.
5. For an inner product space, show that

<x, y + z> = <x, y> + <x, z>,
<αx, y> = α <x, y>,
<x, y> = <y, x> in a real vector space.
6. The linear operator A : X → Y, where X = R², Y = R². The norms in X and Y are defined by

x = (ξ_1, ξ_2)^T ∈ X,  ||x||_∞ = max(|ξ_1|, |ξ_2|),
y = (η_1, η_2)^T ∈ Y,  ||y||_1 = |η_1| + |η_2|.

Find ||A|| if A = [ 3 -1; 5 -2 ].
7. Let Q, C and R be the sets of all rational, complex and real numbers respectively. For the following, determine if A is a vector space over the field F. For finite-dimensional vector spaces, find also a set of basis vectors.

(a) A is the set of all polynomials which are all exactly of degree n, F = R.

(b) A is the set of all functions with continuous second derivatives over the interval [0, L] and satisfying the differential equation y'' + 2y' + y = 0, F = R.

(c) A = R, F = R.

(d) A = {(a_1, a_2, a_3) such that a_1, a_2 ∈ Q, 2a_1 + a_2 = 4a_3}, F = Q.

(e) A = C, F = Q.

(f) A = {a e^x + b e^{-2x} such that a, b ∈ R, x ∈ [0, 1]}, F = R.
8. Which of the following subsets of R³ constitute a subspace of R³, where x = (x_1, x_2, x_3) ∈ R³:

(a) All x with x_1 = x_2 and x_3 = 0.

(b) All x with x_1 = x_2 + 1.

(c) All x with positive x_1, x_2, x_3.

(d) All x with x_1 - x_2 + x_3 = constant k.
9. Given a set S of linearly independent vectors in a vector space V, show that any subset of S is also
linearly independent.
10. Do the following vectors, (2, 1, 3, -1)^T, (1, -4, 0, 7)^T, (-1, 2, 1, 1)^T, (-1, 9, 5, -6)^T, form a basis in R⁴?
11. Given x_1, the iterative procedure x_{n+1} = Tx_n generates x_2, x_3, x_4, …, where T is a linear operator and all the x's belong to a complete normed space. Show that {x_n, n = 1, 2, …} is a Cauchy sequence if ||T|| < 1. Does it converge? If so, find the limit.
12. If {e_k, k = 1, 2, …} is an orthonormal set in a Hilbert space H, show that for every x ∈ H, the vector y = Σ_{k=1}^{n} <x, e_k> e_k exists in H, and that x - y is orthogonal to every e_k.
13. Let the linear operator A : C² → C² be represented by the matrix A = [ 2 -4; 1 5 ]. Find ||A|| if all vectors in the domain and range are within a Hilbert space.
14. Let the linear operator A : C² → C² be represented by the matrix A = [ 2+i -4; 1 5 ]. Find ||A|| if all vectors in the domain and range are within a Hilbert space.
15. Using the inner product (x, y) = ∫_a^b w(t) x(t) y(t) dt, where w(t) > 0 for a ≤ t ≤ b, show that the Sturm-Liouville operator

L = (1/w(t)) ( d/dt ( p(t) d/dt ) + r(t) )

with αx(a) + βx'(a) = 0 and γx(b) + δx'(b) = 0 is self-adjoint.
16. For elements x, y and z of an inner product space, prove the Apollonius identity:

||z - x||_2² + ||z - y||_2² = (1/2) ||x - y||_2² + 2 || z - (1/2)(x + y) ||_2².
17. If x, y ∈ X, an inner product space, and x is orthogonal to y, then show that ||x + ay||² = ||x - ay||², where a is a scalar.
18. Using the Gram-Schmidt procedure, find the first three members of the orthonormal set belonging to L²(-∞, ∞), using the basis functions {exp(-t²/2), t exp(-t²/2), t² exp(-t²/2), …}. You may need the following definite integral:

∫_{-∞}^{∞} exp(-t²/2) dt = √(2π).
19. Let C(0,1) be the space of all continuous functions in (0,1) with the norm

||f||_2 = ( ∫_0^1 |f(t)|² dt )^{1/2}.

Show that

f_n(t) = 2^n t^{n+1} for 0 ≤ t < 1/2;  f_n(t) = 1 - 2^n (1 - t)^{n+1} for 1/2 ≤ t ≤ 1,

belongs to C(0,1). Show also that {f_n, n = 1, …} is a Cauchy sequence, and that C(0,1) is not complete.
20. Find the first three terms of the Fourier-Legendre series for f(x) = cos(πx/2) for x ∈ [-1, 1]. Compare graphically with the exact function.
21. Find the first three terms of the Fourier-Legendre series for

f(x) = -1 for -1 ≤ x < 0;  f(x) = 1 for 0 ≤ x ≤ 1.
22. Consider

d³y/dt³ + 2t³ y = 1 - t,

y(0) = 0,  y(2) = 0,  dy/dt(0) = 0.

Choosing polynomials as the basis functions, use Galerkin and moments methods to obtain a two-term estimate to y(t). Plot your approximations and the exact solution on a single graph. Plot the error in both methods for t ∈ [0, 2].
23. Solve

x'' + 2xx' + t = 0

with x(0) = 0, x(4) = 0, approximately using a two-term weighted residual method where the basis functions are of the type sin(λt). Do both a spectral (and as a consequence Galerkin) and a pseudospectral (and as a consequence collocation) method. Plot your approximations and the exact solution on a single graph. Plot the error in both methods for t ∈ [0, 4].
24. Show that the set of solutions of the linear equations

x_1 + 3x_2 + x_3 - x_4 = 0,
-2x_1 + 2x_2 - x_3 + x_4 = 0,

forms a vector space. Find the dimension and a set of basis vectors.
25. Let

A = [ 1 1 1; 0 1 1; 0 0 1 ].

For A : R³ → R³, find ||A|| if the norm of x = (x_1, x_2, x_3)^T ∈ R³ is given by

||x||_∞ = max(|x_1|, |x_2|, |x_3|).
26. For any complete orthonormal set {φ_i, i = 1, 2, …} in a Hilbert space H, show that

u = Σ_i <u, φ_i> φ_i,

<u, v> = Σ_i <u, φ_i> <v, φ_i>,

||u||_2² = Σ_i |<u, φ_i>|²,

where u and v belong to H.
27. Show that the set P⁴[0, 1] of all polynomials of degree 4 or less in the interval 0 < x < 1 is a vector space. What is the dimension of this space?
28. Show that

(x_1² + x_2² + … + x_n²)(y_1² + y_2² + … + y_n²) ≥ (x_1 y_1 + x_2 y_2 + … + x_n y_n)²,

where x_1, x_2, …, x_n, y_1, y_2, …, y_n are real numbers.
29. Show that the functions e_1(t), e_2(t), …, e_n(t) are orthogonal in L²(0, 1], where

e_i(t) = 1 for (i-1)/n < t ≤ i/n;  e_i(t) = 0 otherwise.

Expand t² in terms of these functions.
30. Find one-term collocation approximations for all solutions of

d²y/dx² + y⁴ = 1

with y(0) = 0, y(1) = 0.
31. Show that

( ∫_a^b ( f(x) + g(x) )² dx )^{1/2} ≤ ( ∫_a^b f(x)² dx )^{1/2} + ( ∫_a^b g(x)² dx )^{1/2},

where f(x) and g(x) belong to L²[a, b].
32. Find the eigenvalues and eigenfunctions of the operator

L = - ( d²/dx² + 2 d/dx + 1 )

which operates on functions x ∈ L²[0, 5] that vanish at x = 0 and x = 5.
33. Find the supremum and infimum of the set S = {1/n, where n = 1, 2, …}.
34. Find the L²[0, 1] norm of the function f(x) = x + 1.
35. Find the distance between the functions x and x³ under the L²[0, 1] norm.
36. Find the inner product of the functions x and x³ using the L²[0, 1] definition.
37. Find the Green's function for the problem

d²x/dt² + k²x = f(t), with x(0) = a, x(π) = b.

Write the solution of the differential equation in terms of this function.
38. Find the first three terms of the Fourier-Legendre series for

f(x) = -2 for -1 ≤ x < 0;  f(x) = 1 for 0 ≤ x ≤ 1.

Graph f(x) and its approximation.
39. Find the null space of

(a) the matrix operator

A = [ 1 1 1; 2 1 1; 2 1 1 ],

(b) the differential operator

L = d²/dt² + k².
40. Test the positive deﬁniteness of a diagonal matrix with positive real numbers on the diagonal.
41. Let S be a subspace of L²[0, 1] such that for every x ∈ S, x(0) = 0 and ẋ(0) = 1. Find the eigenvalues and eigenfunctions of L = -d²/dt² operating on elements of S.
42. Show that

lim_{ε→0} ∫_α^β f(x) Δ_ε(x - a) dx = f(a)

for α < a < β, where

Δ_ε(x - a) = 0 if x < a - ε/2;  1/ε if a - ε/2 ≤ x ≤ a + ε/2;  0 if x > a + ε/2.
43. Consider functions of two variables in a domain Ω with the inner product defined as

<u, v> = ∫∫_Ω u(x, y) v(x, y) dx dy.

Find the space of functions such that the Laplacian operator is self-adjoint.
44. Find the eigenvalues and eigenfunctions of the operator L where

Ly = (1 - t²) d²y/dt² - t dy/dt,

with t ∈ [-1, 1] and y(-1) = y(1) = 0. Show that there exists a weight function r(t) such that the eigenfunctions are orthogonal in [-1, 1] with respect to it.
45. Show that the eigenvalues of an operator and its adjoint are complex conjugates of each other.
46. Using an eigenvector expansion, find the general solution of A x = y where

A = [ 2 0 0; 0 1 1; 0 1 1 ],  y = (2, 3, 5)^T.
47. Show graphically that the Fourier trigonometric series representation of the function

f(t) = -1 if -π ≤ t < 0;  f(t) = 1 if 0 ≤ t ≤ π,

always has an overshoot near t = 0, however many terms one takes (the Gibbs phenomenon). Estimate the overshoot.
48. Let {e_1, …, e_n} be an orthonormal set in an inner product space S. Approximate x ∈ S by y = β_1 e_1 + … + β_n e_n, where the β's are to be selected. Show that ||x - y|| is a minimum if we choose β_i = <x, e_i>.
49. (a) Starting with a vector in the direction (1, 2, 0)^T, use the Gram-Schmidt procedure to find a set of orthonormal vectors in R³. Using these vectors, construct (b) an orthogonal matrix Q, and then find (c) the angles between x_i and Qx_i, where x_i is (1, 0, 0)^T, (0, 1, 0)^T and (0, 0, 1)^T, respectively. The orthogonal matrix Q is defined as a matrix having orthonormal vectors in its columns.
50. Find the null space of the operator L defined by Lx = (d²/dt²) x(t). Also find the eigenvalues and eigenfunctions (in terms of real functions) of L with x(0) = 1, dx/dt(0) = 0.
51. Find all approximate solutions of the boundary value problem

d²y/dx² + 5y + y³ = 1,

with y(0) = y(1) = 0, using a two-term collocation method. Compare graphically with the exact solution determined by numerical methods.
52. Find a one-term approximation for the boundary value problem

y'' - y = x,

with y(0) = y(1) = 0, using the collocation, Galerkin, least-squares, and moments methods. Compare graphically with the exact solution.
53. Consider the sequence {(1 + 1/n)/(2 + 1/n)} in R. Show that this is a Cauchy sequence. Does it converge?
54. Prove that (T_a T_b)* = T_b* T_a* when T_a and T_b are linear operators which operate on vectors in a Hilbert space.
55. If {x_i} is a sequence in an inner product space such that the series ||x_1|| + ||x_2|| + … converges, show that {s_n} is a Cauchy sequence, where s_n = x_1 + x_2 + … + x_n.
56. If L(x) = a_0(t) d²x/dt² + a_1(t) dx/dt + a_2(t) x, find the operator that is formally adjoint to it.
57. If

y(t) = A[x(t)] = ∫_0^t x(τ) dτ,

where y(t) and x(t) are real functions in some properly defined space, find the eigenvalues and eigenfunctions of the operator A.
58. Using a dual basis, expand the vector (1, 3, 2)^T in terms of the basis vectors (1, 1, 1)^T, (1, 0, -1)^T, and (1, 0, 1)^T in R³. The inner product is defined as usual.
59. With f_1(x) = 3 + i + 2x and f_2(x) = 1 + ix + 2x²:

(a) Find the L²[0, 1] norms of f_1(x) and f_2(x).

(b) Find the inner product of f_1(x) and f_2(x) under the L²[0, 1] norm.

(c) Find the "distance" between f_1(x) and f_2(x) under the L²[0, 1] norm.
60. Show that the vectors u_1 = (1, 0, 0, 1+i)^T, u_2 = (1, 2, i, 3)^T, u_3 = (0, 3-i, 2, -2)^T, u_4 = (2, 0, 0, 3)^T form a basis in C⁴. Find the set of reciprocal basis vectors. For x ∈ C⁴ with x = (i, 3-i, -2, 5)^T, express x as an expansion in the above defined basis vectors; that is, find c_i such that x = c_i u_i.
61. The following norms can be used in Rⁿ, where x = (ξ_1, …, ξ_n) ∈ Rⁿ:

(a) ||x||_∞ = max_{1≤j≤n} |ξ_j|,

(b) ||x||_1 = Σ_{j=1}^{n} |ξ_j|,

(c) ||x||_2 = ( Σ_{j=1}^{n} |ξ_j|² )^{1/2},

(d) ||x||_p = ( Σ_{j=1}^{n} |ξ_j|^p )^{1/p},  1 ≤ p < ∞.

Show by examples that these are all valid norms.
62. Show that the set of all matrices A : R^n → R^n is a vector space under the usual rules of matrix manipulation.
63. Show that if A is a linear operator such that

(a) A : (R^n, || · ||_∞) → (R^n, || · ||_1), then ||A|| = Σ_{i,j=1}^n |A_{ij}|.

(b) A : (R^n, || · ||_∞) → (R^n, || · ||_∞), then ||A|| = max_{1≤i≤n} Σ_{j=1}^n |A_{ij}|.
Chapter 8
Linear algebra
see Kaplan, Chapter 1,
see Lopez, Chapters 33, 34,
see Riley, Hobson, and Bence, Chapter 7,
Strang, Linear Algebra and its Applications,
Strang, Introduction to Applied Mathematics.
The key problem in linear algebra is the solution of the equation

    A · x = b,    (8.1)

where A is a known constant rectangular matrix, b is a known column vector, and x is an unknown column vector. To explicitly indicate the dimension of the matrices and vectors, we sometimes write this in expanded form:

    A_{n×m} · x_{m×1} = b_{n×1},    (8.2)

where n, m ∈ N are the positive integers which give the dimensions. If n = m, the matrix is square, and solution techniques are usually straightforward. For n ≠ m, which arises often in physical problems, the issues are not as straightforward. In some cases we find an infinite number of solutions; in others we find none. Relaxing our equality constraint, we can, however, always find a vector x* such that

    ||A · x* − b||_2 → min.    (8.3)
This vector x* is the best solution to the equation A · x = b for cases in which there is no exact solution. Depending on the problem, it may turn out that x* is not unique. It will always be the case, however, that of all the vectors x* which minimize ||A · x − b||_2, one of them, x̂, will itself have a minimum norm.
8.1 Determinants and rank
We can take the determinant of a square matrix A, written det A. Details of the computation of determinants are found in any standard reference and will not be repeated here. Properties of the determinant include:

• det A_{n×n} is equal to the volume of a parallelepiped in n-dimensional space whose edges are formed by the rows of A.

• If all elements of a row (or column) are multiplied by a scalar, the determinant is also similarly multiplied.

• The elementary operation of subtracting a multiple of one row from another leaves the determinant unchanged.

• If two rows (or columns) of a matrix are interchanged, the sign of the determinant changes.

A singular matrix is one whose determinant is zero. The rank of a matrix is the size r of the largest square non-singular matrix that can be formed by deleting rows and columns.

While the determinant is useful to some ends in linear algebra, most of the common problems are better solved without using the determinant at all; in fact it is probably a fair generalization to say that the determinant is less, rather than more, useful than many imagine. It is useful in solving linear systems of equations of small dimension, but becomes much too cumbersome relative to other methods for the commonly encountered large systems of linear algebraic equations. While it can be used to find the rank, there are other more efficient means to calculate this. Further, while a zero value for the determinant almost always has significance, other values do not. Some matrices which are particularly ill-conditioned for certain problems often have a determinant which gives no clue as to the difficulties which may arise.
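As a numerical sketch of this point, the rank can be found directly (via the singular value decomposition) without computing a determinant; the matrix below is an assumed example, not one from the text:

```python
import numpy as np

# Hypothetical singular matrix: the third row is the sum of
# the first two, so det A = 0 and the rank is only 2.
A = np.array([[1., 0., 1.],
              [5., 4., 9.],
              [6., 4., 10.]])

det_A = np.linalg.det(A)            # numerically zero
rank_A = np.linalg.matrix_rank(A)   # rank via SVD, no determinant needed

# Property check: scaling one row by alpha scales det by alpha.
B = np.array([[1., 2.], [3., 4.]])
B_scaled = B.copy()
B_scaled[0, :] *= 5.0
</antml>```

Here `matrix_rank` is robust for large systems, while `det` of a nearly singular matrix conveys little beyond its being near zero.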
8.2 Matrix algebra
We will denote a matrix of size n × m as

    A_{n×m} = ( a_11  a_12  · · ·  a_1m
                a_21  a_22  · · ·  a_2m
                 ⋮      ⋮    ⋱      ⋮
                a_n1  a_n2  · · ·  a_nm ).    (8.4)
Addition of matrices can be defined as

    A_{n×m} + B_{n×m} = C_{n×m},

where the elements of C are obtained by adding the corresponding elements of A and B. Multiplication of a matrix by a scalar α can be defined as

    α A_{n×m} = B_{n×m},

where the elements of B are the corresponding elements of A multiplied by α.

It can be shown that the set of all n × m matrices is a vector space. We will also refer to an n × 1 matrix as an n-dimensional column vector. Likewise a 1 × m matrix will be called an m-dimensional row vector. Unless otherwise stated, vectors are assumed to be column vectors. In this sense the inner product of two vectors x_{n×1} and y_{n×1} is <x, y> = x̄^T · y. In this chapter matrices will be represented by upper-case boldfaced letters, such as A, and vectors by lower-case boldfaced letters, such as x.
8.2.1 Column, row, left and right null spaces
The m column vectors c_i ∈ C^n, i = 1, 2, . . . , m, of the matrix A_{n×m} are each one of the columns of A. The column space is the subspace of C^n spanned by the column vectors. The n row vectors r_i ∈ C^m, i = 1, 2, . . . , n, of the same matrix are each one of the rows. The row space is the subspace of C^m spanned by the row vectors. The column space vectors and the row space vectors span spaces of the same dimension. Consequently, the column space and row space have the same dimension.
The right null space is the set of all vectors x_{m×1} ∈ C^m for which A_{n×m} · x_{m×1} = 0_{n×1}. The left null space is the set of all vectors y_{n×1} ∈ C^n for which y^T_{n×1} · A_{n×m} = y_{1×n} · A_{n×m} = 0_{1×m}.
If we have A_{n×m} : C^m → C^n, and recall that the rank of A is r, then we have the following important results:

• The column space of A_{n×m} has dimension r, (r ≤ m).

• The left null space of A_{n×m} has dimension n − r.

• The row space of A_{n×m} has dimension r, (r ≤ n).

• The right null space of A_{n×m} has dimension m − r.
We can also show

    C^n = column space ⊕ left null space,    (8.5)
    C^m = row space ⊕ right null space.    (8.6)
Also:

• Any vector x ∈ C^m can be written as a linear combination of vectors in the row space and the right null space.

• Any m-dimensional vector x which is in the right null space of A is orthogonal to any m-dimensional vector in the row space. This comes directly from the definition of the right null space, A · x = 0.

• Any vector y ∈ C^n can be written as the sum of vectors in the column space and the left null space.

• Any n-dimensional vector y which is in the left null space of A is orthogonal to any n-dimensional vector in the column space. This comes directly from the definition of the left null space, y^T · A = 0.
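The four dimension results above can be sketched numerically; assuming real matrices and numpy, the rank r fixes all four dimensions at once:

```python
import numpy as np

# Hypothetical n x m matrix with n = 2, m = 3, full rank r = 2.
A = np.array([[1., 0., 1.],
              [0., 1., 2.]])
n, m = A.shape
r = np.linalg.matrix_rank(A)

dim_column_space = r        # subspace of R^n
dim_row_space = r           # subspace of R^m
dim_left_null = n - r       # complements column space in R^n
dim_right_null = m - r      # complements row space in R^m
</antml>```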
Example 8.1
Find the column and row spaces of

    A = ( 1 0 1
          0 1 2 )

and their dimensions.

Restricting ourselves to real vectors, we note first that in the equation A · x = b, A is an operator which maps three-dimensional real vectors x into vectors b which are elements of a two-dimensional real space, i.e.

    A : R^3 → R^2.
The column vectors are

    c_1 = ( 1, 0 )^T,   c_2 = ( 0, 1 )^T,   c_3 = ( 1, 2 )^T.

The column space consists of the vectors α_1 c_1 + α_2 c_2 + α_3 c_3, where the α's are any scalars. Since only two of the c_i's are linearly independent, the dimension of the column space is also two. We can see this by looking at the sub-determinant

    det ( 1 0
          0 1 ) = 1,

which indicates the rank, r = 2.
Note that

• c_1 + 2c_2 = c_3.

• The three column vectors thus lie in a single two-dimensional plane.

• The three column vectors are thus said to span a two-dimensional subspace of R^3.
The two row vectors are

    r_1 = ( 1, 0, 1 ),   r_2 = ( 0, 1, 2 ).

The row space consists of the vectors β_1 r_1 + β_2 r_2, where the β's are any scalars. Since the two r_i's are linearly independent, the dimension of the row space is also two. That is, the two row vectors are both three-dimensional, but span a two-dimensional subspace.
We note, for instance, if x = (1, 2, 1)^T, that A · x = b gives

    ( 1 0 1 )   ( 1, 2, 1 )^T = ( 2, 4 )^T.
    ( 0 1 2 ) ·

So

    b = 1c_1 + 2c_2 + 1c_3.
That is, b is a linear combination of the column space vectors and thus lies in the column space of A. We note for this problem, since an arbitrary b is two-dimensional and the dimension of the column space is two, that we can represent an arbitrary b as some linear combination of the column space vectors. For example, we can also say that b = 2c_1 + 4c_2. We also note that x in general does not lie in the row space of A, since x is an arbitrary three-dimensional vector, and we only have enough row vectors to span a two-dimensional subspace (i.e. a plane embedded in a three-dimensional space). However, as will be seen, x does lie in the space defined by the combination of the row space of A and the right null space of A (the set of vectors x for which A · x = 0). In special cases, x will in fact lie in the row space of A.
8.2.2 Matrix multiplication
Multiplication of matrices A and B can be defined if they are of the proper sizes. Thus

    A_{n×k} · B_{k×m} = C_{n×m}.

It may be better to say here that A is a linear operator which operates on elements which are in a space of dimension k × m so as to generate elements which are in a space of dimension n × m.
Example 8.2
Consider the matrix operator

    A = (  1 2 1
          −3 3 1 ),

which operates on 3 × 4 matrices, i.e.

    A : R^{3×4} → R^{2×4}.

For example, we can use A to operate on a 3 × 4 matrix as follows:

    (  1 2 1 ) (  1  0 3 −2 )   ( 4 −4  5  6 )
    ( −3 3 1 ) (  2 −4 1  3 ) = ( 2 −8 −6 17 ).
               ( −1  4 0  2 )

Note the operation does not exist if the order is reversed.
A vector operating on a vector can yield a scalar or a matrix, depending on the order of
operation.
Example 8.3
Consider the vector operations A_{1×3} · B_{3×1} and B_{3×1} · A_{1×3}, where

    A_{1×3} = a^T = ( 2 3 1 ),   B_{3×1} = b = ( 3, −2, 5 )^T.

Then

    A_{1×3} · B_{3×1} = a^T · b = (2)(3) + (3)(−2) + (1)(5) = 5.

This is the ordinary inner product <a, b>. The commutation of this operation, however, yields a matrix:

    B_{3×1} · A_{1×3} = b a^T = (  3 )             (  (3)(2)  (3)(3)  (3)(1) )   (  6   9  3 )
                                ( −2 ) ( 2 3 1 ) = ( (−2)(2) (−2)(3) (−2)(1) ) = ( −4  −6 −2 )
                                (  5 )             (  (5)(2)  (5)(3)  (5)(1) )   ( 10  15  5 ).

This is the dyadic product of the two vectors. Note that for vectors (lower-case notation) the dyadic product is not usually characterized by the “dot” operator that we use for the vector inner product.
A special case is that of a square matrix A_{n×n} of size n. For square matrices of the same size, both A · B and B · A exist. While A · B and B · A both yield n × n matrices, the actual values of the two products are in general different. In what follows, we will often assume that we are dealing with square matrices.

Properties of matrices include

1. (A · B) · C = A · (B · C) (associative),

2. A · (B + C) = A · B + A · C (distributive),

3. (A + B) · C = A · C + B · C (distributive),

4. A · B ≠ B · A in general (not commutative),

5. det(A · B) = (det A)(det B).
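Properties 4 and 5 are easy to check numerically; a minimal sketch with two assumed 2 × 2 matrices:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])

AB = A @ B
BA = B @ A
commute = np.allclose(AB, BA)      # False: A B != B A in general

det_product = np.linalg.det(AB)    # det(A B)
det_factors = np.linalg.det(A) * np.linalg.det(B)
</antml>```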
8.2.3 Deﬁnitions and properties
8.2.3.1 Diagonal matrices
A square matrix A is called nilpotent if there exists a positive integer n for which A^n = 0.
A diagonal matrix D has nonzero terms only along its main diagonal. The sum and
product of diagonal matrices are also diagonal. The determinant of a diagonal matrix is the
product of all diagonal elements. The identity matrix I is a square diagonal matrix with 1
on the main diagonal. With this deﬁnition, we get
    A_{n×m} · I_{m×m} = A_{n×m},    (8.7)
    I_{n×n} · A_{n×m} = A_{n×m},   or, more compactly,    (8.8)
    A · I = I · A = A,    (8.9)

where the unsubscripted identity matrix is understood to be square with the correct dimension for matrix multiplication.
The transpose A^T of a matrix A is one in which the terms above and below the diagonal are interchanged. For any matrix A_{n×m}, we find that A · A^T and A^T · A are square matrices of size n and m, respectively.
Properties include

1. det A = det A^T,

2. (A_{n×m} · B_{m×n})^T = B^T · A^T,

3. (A_{n×n} · x_{n×1})^T · y_{n×1} = x^T · A^T · y = x^T · (A^T · y).
A symmetric matrix is one for which A^T = A. An antisymmetric or skew-symmetric matrix is one for which A^T = −A. Any matrix A can be written as

    A = (1/2)(A + A^T) + (1/2)(A − A^T),    (8.10)

where (1/2)(A + A^T) is symmetric and (1/2)(A − A^T) is antisymmetric.
A lower (or upper) triangular matrix is one in which all entries above (or below) the main
diagonal are zero. Lower triangular matrices are often denoted by L, and upper triangular
matrices by either U or R.
A positive definite matrix A is a symmetric matrix for which x^T · A · x > 0 for all nonzero vectors x. A positive definite matrix has real, positive eigenvalues, and there exists a non-singular W such that A = W^T · W. Every positive definite matrix A can also be written as A = L · L^T, where L is a lower triangular matrix (Cholesky decomposition).
A permutation matrix P is a square matrix composed of zeroes and a single one in each column. None of the ones occur in the same row. It effects a row exchange when it operates on a general matrix A. It is never singular; its inverse is its transpose, P^{−1} = P^T, so P · P^T = I. Also ||P||_2 = 1.
Example 8.4
Find P which effects the exchange of the first and second rows of A, where

    A = ( 1 3 5 7
          2 3 1 2
          3 1 3 2 ).

To construct P, we begin with the 3 × 3 identity matrix I. For a first and second row exchange, we replace the ones in the (1, 1) and (2, 2) slots with zeroes, then replace the zeroes in the (1, 2) and (2, 1) slots with ones. Thus

    P · A = ( 0 1 0 ) ( 1 3 5 7 )   ( 2 3 1 2 )
            ( 1 0 0 ) ( 2 3 1 2 ) = ( 1 3 5 7 )
            ( 0 0 1 ) ( 3 1 3 2 )   ( 3 1 3 2 ).
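The same exchange, and the fact that a permutation matrix's inverse is its transpose, can be verified in a short numpy sketch:

```python
import numpy as np

# Permutation matrix exchanging the first two rows of a 3-row matrix.
P = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 1.]])
A = np.array([[1., 3., 5., 7.],
              [2., 3., 1., 2.],
              [3., 1., 3., 2.]])

PA = P @ A                                      # rows 1 and 2 swapped
inverse_is_transpose = np.allclose(P @ P.T, np.eye(3))
</antml>```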
Example 8.5
Find the rank and right null space of
    A = ( 1 0 1
          5 4 9
          2 4 6 ).

The rank of A is not three since

    det A = 0.

Since

    det ( 1 0
          5 4 ) = 4 ≠ 0,
the rank of A is 2.
Let

    x = ( x_1, x_2, x_3 )^T

belong to the right null space of A. Then

    x_1 + x_3 = 0,
    5x_1 + 4x_2 + 9x_3 = 0,
    2x_1 + 4x_2 + 6x_3 = 0,

which gives

    x = ( x_1, x_2, x_3 )^T = ( t, t, −t )^T = t ( 1, 1, −1 )^T,   t ∈ R^1.

Therefore the right null space is the straight line in R^3 which passes through (0, 0, 0) and (1, 1, −1).
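This null space can be recovered numerically from the singular value decomposition: the right singular vectors associated with zero singular values span the right null space. A minimal sketch:

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [5., 4., 9.],
              [2., 4., 6.]])

# Rows of Vt are right singular vectors; the last corresponds to
# the smallest singular value, which here is (numerically) zero.
U, s, Vt = np.linalg.svd(A)
null_vec = Vt[-1]
residual = np.linalg.norm(A @ null_vec)

# Normalize the first component to compare with t (1, 1, -1)^T.
direction = null_vec / null_vec[0]
</antml>```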
8.2.3.2 Inverse
Definition: A matrix A has an inverse A^{−1} if A · A^{−1} = A^{−1} · A = I.

Theorem
A unique inverse exists if the matrix is non-singular.

Properties of the inverse include

1. (A · B)^{−1} = B^{−1} · A^{−1},

2. (A^{−1})^T = (A^T)^{−1},

3. det(A^{−1}) = (det A)^{−1}.

If a_ij and a^{−1}_ij are the elements of A and A^{−1}, then

    a^{−1}_ij = (−1)^{i+j} b_ji / det A,    (8.11)

where b_ji is the minor of a_ji, which is the determinant of the matrix obtained by canceling out the jth row and ith column. The inverse of a diagonal matrix is also diagonal, but with the reciprocals of the original diagonal elements.
Example 8.6
Find the inverse of

    A = (  1 1
          −1 1 ).

The inverse is

    A^{−1} = (1/2) ( 1 −1
                     1  1 ).

We can confirm that A · A^{−1} = A^{−1} · A = I.
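This confirmation, along with inverse properties 1–3 above, takes only a few lines of numpy:

```python
import numpy as np

A = np.array([[1., 1.],
              [-1., 1.]])
A_inv = np.linalg.inv(A)            # expect (1/2) [[1, -1], [1, 1]]

left = A @ A_inv                    # should be the identity
right = A_inv @ A                   # should be the identity
# Property 3: det(A^{-1}) = 1 / det(A).
det_check = np.linalg.det(A_inv) * np.linalg.det(A)
</antml>```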
8.2.3.3 Similar matrices
Matrices A and B are similar if there exists a non-singular matrix C such that B = C^{−1} · A · C. Similar matrices have the same characteristic equation, and hence the same determinant, eigenvalues, and multiplicities; their eigenvectors are related through the transformation C.
8.2.4 Equations
In general, for matrices that are not necessarily square, the equation A_{n×m} · x_{m×1} = b_{n×1} is solvable iff b can be expressed as a linear combination of the columns of A. Problems in which m < n are overconstrained; in special cases, those in which b is in the column space of A, a unique solution x exists. However, in general no solution x exists; nevertheless, one can find an x which will minimize ||A · x − b||_2. This is closely related to what is known as the method of least squares. Problems in which m > n are generally underconstrained, and have an infinite number of solutions x which will satisfy the original equation. Problems for which m = n (square matrices) have a unique solution x when the rank r of A is equal to n. If r < n, then the problem is underconstrained.
8.2.4.1 Overconstrained Systems
Example 8.7
For x ∈ R^2, b ∈ R^3, consider A : R^2 → R^3,

    ( 1 2 ) ( x_1 )   ( 5 )
    ( 1 0 ) ( x_2 ) = ( 1 )
    ( 1 1 )           ( 3 ).

The column space of A is spanned by the two column vectors

    c_1 = ( 1, 1, 1 )^T,   c_2 = ( 2, 0, 1 )^T.
Our equation can also be cast in the form which makes the contribution of the column vectors obvious:

    x_1 ( 1, 1, 1 )^T + x_2 ( 2, 0, 1 )^T = ( 5, 1, 3 )^T.
Here we have the unusual case that b = (5, 1, 3)^T is in the column space of A (in fact b = c_1 + 2c_2), and we have a unique solution of

    x = ( 1, 2 )^T.

Note that the solution vector x lies in the row space of A; here it is identically the first row vector, r_1 = (1, 2)^T. Note also that here the column space is a two-dimensional subspace, in this case a plane defined by the two column vectors, embedded within a three-dimensional space. The operator A maps arbitrary two-dimensional vectors x into the three-dimensional b; however, these b vectors are confined to a subspace within the greater three-dimensional space. Consequently, we cannot always expect to find a vector x for arbitrary b!

A sketch of this system is shown in Figure 8.1. Here we sketch what might represent this example, in which the column space of A does not span the entire space R^3, but for which b lies in the column
Figure 8.1: Plot for b which lies in the column space (the space spanned by c_1 and c_2) of A.
space of A. In such a case ||A · x − b||_2 = 0. We have A as a matrix which maps two-dimensional vectors x into three-dimensional vectors b. Our space is R^3, and embedded within that space are two column vectors c_1 and c_2 which span a column space C^2, which is represented by a plane within a three-dimensional volume. Since b in this example happens to lie in the column space, there exists a unique vector x for which A · x = b.
Example 8.8
Consider now

    ( 1 2 ) ( x_1 )   ( 0 )
    ( 1 0 ) ( x_2 ) = ( 1 )
    ( 1 1 )           ( 3 ).

Here b = (0, 1, 3)^T is not in the column space of A, and there is no solution x for which A · x = b! Again, the column space is a plane defined by two vectors; the vector b does not happen to lie in the plane defined by the column space. However, we can find a solution x_p which can be shown to minimize the least squares error, by the following procedure:
minimize the least squares error, by the following procedure.
A x
p
= b
A
T
A x
p
= A
T
b
x
p
= (A
T
A)
−1
A
T
b
1 1 1
2 0 1
¸
1 2
1 0
1 1
x
1
x
2
=
1 1 1
2 0 1
¸
0
1
3
3 3
3 5
x
1
x
2
=
4
3
x
1
x
2
=
11
6
−
1
2
Note the resulting x
p
will not satisfy the original equation! It is the best we can do in the least squares
error sense.
So for this example ||A · x_p − b||_2 = 2.0412. If we tried any nearby x, say x = (2, −3/5)^T, we would find ||A · x − b||_2 = 2.0494 > 2.0412. Since the problem is linear, this minimum is global; if we take x = (10, −24)^T, then ||A · x − b||_2 = 42.5911 > 2.0412.
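The normal-equations procedure of this example can be carried out numerically; a minimal numpy sketch of the same system:

```python
import numpy as np

A = np.array([[1., 2.],
              [1., 0.],
              [1., 1.]])
b = np.array([0., 1., 3.])

# Normal equations: A^T A x_p = A^T b.
xp = np.linalg.solve(A.T @ A, A.T @ b)     # expect (11/6, -1/2)
residual = np.linalg.norm(A @ xp - b)      # the minimum least squares error

# A nearby trial point does no better.
x_near = np.array([2., -3. / 5.])
residual_near = np.linalg.norm(A @ x_near - b)
</antml>```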
Figure 8.2: Plot for b which lies outside of the column space (the space spanned by c_1 and c_2) of A.
Further manipulation shows that we can write our solution as a combination of vectors in the row space of A. As the dimension of the right null space of A is zero, there is no possible contribution from the right null space vectors:

    ( 11/6, −1/2 )^T = α_1 ( 1, 2 )^T + α_2 ( 1, 0 )^T,

    ( 11/6 )   ( 1 1 ) ( α_1 )
    ( −1/2 ) = ( 2 0 ) ( α_2 ).

Solving, we find

    ( α_1, α_2 )^T = ( −1/4, 25/12 )^T.

So

    ( x_1, x_2 )^T = −(1/4) ( 1, 2 )^T + (25/12) ( 1, 0 )^T,

a linear combination of row space vectors. We could also have chosen to expand in terms of the other row space vector, (1, 1)^T, since any two of the three row space vectors span the space R^2.
The vector A · x_p actually represents the projection of b onto the subspace spanned by the column vectors (i.e. the column space). Call the projected vector b_p:

    b_p = A · x_p = A · (A^T · A)^{−1} · A^T · b.

For this example b_p = ( 5/6, 11/6, 4/3 )^T. We can think of b_p as the shadow cast by b onto the column space.
A sketch of this system is shown in Figure 8.2. Here we sketch what might represent this example, in which the column space of A does not span the entire space R^3, and for which b lies outside of the column space of A. In such a case ||A · x_p − b||_2 > 0. We have A as a matrix which maps two-dimensional vectors x into three-dimensional vectors b. Our space is R^3, and embedded within that space are two column vectors c_1 and c_2 which span a column space C^2, which is represented by a plane within a three-dimensional volume. Since b lies outside the column space, there exists no vector x for which A · x = b.
8.2.4.2 Underconstrained Systems
Example 8.9
Consider now A : R^3 → R^2 such that

    ( 1 1 1 ) ( x_1 )
    ( 2 0 1 ) ( x_2 ) = ( 1 )
              ( x_3 )   ( 3 ).

Certainly b = (1, 3)^T lies in the column space of A, since, for example, b = 0(1, 2)^T − 2(1, 0)^T + 3(1, 1)^T.
Setting x_1 = t, where t is an arbitrary number, lets us solve for x_2, x_3:

    ( 1 1 1 ) ( t   )
    ( 2 0 1 ) ( x_2 ) = ( 1 )
              ( x_3 )   ( 3 ),

    ( 1 1 ) ( x_2 )   ( 1 − t  )
    ( 0 1 ) ( x_3 ) = ( 3 − 2t ).

Inversion gives

    ( x_2, x_3 )^T = ( −2 + t, 3 − 2t )^T,

so

    ( x_1, x_2, x_3 )^T = ( t, −2 + t, 3 − 2t )^T = ( 0, −2, 3 )^T + t ( 1, 1, −2 )^T,   t ∈ R^1,

where ( 1, 1, −2 )^T lies in the right null space.
A useful way to think of problems such as this which are underdetermined is that the matrix A maps the additive combination of a unique vector from the row space of A plus an arbitrary vector from the right null space of A into the vector b. Here the vector (1, 1, −2)^T is in the right null space; however, the vector (0, −2, 3)^T has components in both the right null space and the row space. Let us extract the parts of (0, −2, 3)^T which are in each space. Since the row space vectors and the right null space vector are linearly independent, together they form a basis, and we can say

    ( 0, −2, 3 )^T = a_1 ( 1, 1, 1 )^T + a_2 ( 2, 0, 1 )^T + a_3 ( 1, 1, −2 )^T,

where the first two vectors are in the row space and the third is in the right null space.
In matrix form, we then get

    (  0 )   ( 1 2  1 ) ( a_1 )
    ( −2 ) = ( 1 0  1 ) ( a_2 )
    (  3 )   ( 1 1 −2 ) ( a_3 ).

The coefficient matrix is non-singular and thus invertible. Solving, we get

    ( a_1, a_2, a_3 )^T = ( −2/3, 1, −4/3 )^T.
So x can be rewritten as

    x = −(2/3) ( 1, 1, 1 )^T + ( 2, 0, 1 )^T + ( t − 4/3 ) ( 1, 1, −2 )^T,   t ∈ R^1,

where the first two terms lie in the row space and the third lies in the right null space.
The first two terms in the final expression above are the unique linear combination of the row space vectors, while the third term is from the right null space. As, by definition, A maps any vector from the right null space into the zero element, it makes no contribution to forming b; hence, one can allow an arbitrary constant. Note the analogy here with solutions to inhomogeneous differential equations. The right null space vector can be thought of as a solution to the homogeneous equation, and the terms with the row space vectors can be thought of as particular solutions.
x =
¸
1 2 1
1 0 1
1 1 −2
¸
−
2
3
1
t −
4
3
, t ∈ R
1
.
While the right null space vector is orthogonal to both row space vectors, the row space vectors are not orthogonal to each other, so this basis is not orthogonal. Leaving out the computational details, we can use the Gram-Schmidt procedure to cast the solution on an orthonormal basis:

    x = (1/√3) ( 1/√3, 1/√3, 1/√3 )^T − √2 ( −1/√2, 1/√2, 0 )^T + √6 ( t − 4/3 ) ( 1/√6, 1/√6, −2/√6 )^T,   t ∈ R^1,

where the first two terms lie in the row space and the third lies in the right null space.
The first two terms are in the row space, now represented on an orthonormal basis; the third is in the right null space. In matrix form, we can say that

    x = ( 1/√3  −1/√2   1/√6 ) (  1/√3         )
        ( 1/√3   1/√2   1/√6 ) ( −√2           )
        ( 1/√3   0     −2/√6 ) (  √6 (t − 4/3) ),   t ∈ R^1.
Of course, there are other orthonormal bases on which the system can be cast.

We see that the minimum length of the vector x occurs when t = 4/3, that is, when x is entirely in the row space. In such a case we have

    min ||x||_2 = √( (1/√3)² + (−√2)² ) = +√(7/3).
Lastly, note that here we achieved a reasonable answer by setting x_1 = t at the outset. We could have achieved an equivalent result by starting with x_2 = t or x_3 = t. This will not work in all problems, as will be discussed in the section on echelon form.
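The minimum-norm solution found above at t = 4/3 is exactly what the Moore-Penrose pseudoinverse returns for an underconstrained system; a short numpy sketch of this example:

```python
import numpy as np

A = np.array([[1., 1., 1.],
              [2., 0., 1.]])
b = np.array([1., 3.])

# pinv picks, among all solutions of A x = b, the one with no right
# null space component, i.e. the minimum-norm solution in the row space.
x_min = np.linalg.pinv(A) @ b
exact_residual = np.linalg.norm(A @ x_min - b)   # zero: b is in the column space
min_norm = np.linalg.norm(x_min)                 # expect sqrt(7/3)
</antml>```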
8.2.4.3 Over and Underconstrained Systems
Some systems of equations are simultaneously overconstrained and underconstrained. This often happens when the rank r of the matrix is less than both n and m, the matrix dimensions. Such matrices are known as less than full rank matrices.
Example 8.10
Consider A : R^4 → R^3 such that

    (  1 2  0 4 ) ( x_1 )   ( 1 )
    (  3 2 −1 3 ) ( x_2 ) = ( 3 )
    ( −1 2  1 5 ) ( x_3 )   ( 2 ).
                  ( x_4 )

Using elementary row operations to perform Gaussian elimination gives rise to the equivalent system:

    ( 1 0 −1/2 −1/2 ) ( x_1 )   ( 0 )
    ( 0 1  1/4  9/4 ) ( x_2 ) = ( 0 )
    ( 0 0  0    0   ) ( x_3 )   ( 1 ).
                      ( x_4 )
We immediately see that there is a problem in the last equation, which purports 0 = 1! What is actually happening is that A is not full rank, r = 3, but actually has r = 2, so vectors x ∈ R^4 are mapped into a two-dimensional subspace. So we do not expect to find any solution to this problem, since our vector b is an arbitrary three-dimensional vector which most likely does not lie in the two-dimensional subspace. We can, however, find an x which minimizes the least squares error. We return to the original equation and operate on both sides with A^T to form A^T · A · x = A^T · b. (It can easily be verified that if we chose to operate on the system which was reduced by Gaussian elimination, we would not recover a solution which minimized ||A · x − b||_2!)
    ( 1  3 −1 ) (  1 2  0 4 ) ( x_1 )   ( 1  3 −1 ) ( 1 )
    ( 2  2  2 ) (  3 2 −1 3 ) ( x_2 ) = ( 2  2  2 ) ( 3 )
    ( 0 −1  1 ) ( −1 2  1 5 ) ( x_3 )   ( 0 −1  1 ) ( 2 ),
    ( 4  3  5 )               ( x_4 )   ( 4  3  5 )

    ( 11  6 −4  8 ) ( x_1 )   (  8 )
    (  6 12  0 24 ) ( x_2 ) = ( 12 )
    ( −4  0  2  2 ) ( x_3 )   ( −1 )
    (  8 24  2 50 ) ( x_4 )   ( 23 ).

This operation has mapped both sides of the equation into the same space, namely the column space of A^T, which is also the row space of A. Since the rank of A is r = 2, the dimension of the row space is also two, and now the vectors on both sides of the equation have been mapped into the same plane.
¸
¸
1 0 −1/2 −1/2
0 1 1/4 9/4
0 0 0 0
0 0 0 0
¸
¸
x
1
x
2
x
3
x
4
=
¸
¸
1/4
7/8
0
0
.
This equation suggests that here x
3
and x
4
are arbitrary, so we set x
3
= s, x
4
= t and, treating s and
t as known quantities, reduce the system to the following
1 0
0 1
x
1
x
2
=
1/4 +s/2 +t/2
7/8 −s/4 −9t/4
,
so

    ( x_1 )   ( 1/4 )     (  1/2 )     (  1/2 )
    ( x_2 ) = ( 7/8 ) + s ( −1/4 ) + t ( −9/4 )
    ( x_3 )   ( 0   )     (  1   )     (  0   )
    ( x_4 )   ( 0   )     (  0   )     (  1   ).
The vectors which are multiplied by s and t are in the right null space of A. The vector (1/4, 7/8, 0, 0)^T is not entirely in the row space of A; it has components in both the row space and the right null space. We can thus decompose this vector into a linear combination of row space vectors and right null space vectors using the procedure in the previous section, solving the following equation for the coefficients a_1, . . . , a_4, which are the coefficients of the row and right null space vectors:

    ( 1/4 )   ( 1  3  1/2  1/2 ) ( a_1 )
    ( 7/8 ) = ( 2  2 −1/4 −9/4 ) ( a_2 )
    ( 0   )   ( 0 −1  1    0   ) ( a_3 )
    ( 0   )   ( 4  3  0    1   ) ( a_4 ).

Solving, we get

    ( a_1, a_2, a_3, a_4 )^T = ( −3/244, 29/244, 29/244, −75/244 )^T.
So we can recast the solution as

    x = −(3/244) ( 1, 2, 0, 4 )^T + (29/244) ( 3, 2, −1, 3 )^T
        + ( s + 29/244 ) ( 1/2, −1/4, 1, 0 )^T + ( t − 75/244 ) ( 1/2, −9/4, 0, 1 )^T,

where the first two terms lie in the row space and the last two lie in the right null space. This choice of x guarantees that we minimize ||A · x − b||_2, which in this case is 1.22474. So there are no vectors x which satisfy the original equation A · x = b, but there are a doubly infinite number of vectors x which can minimize the least squares error.
We can choose special values of s and t such that we minimize ||x||_2 while maintaining ||A · x − b||_2 at its global minimum. This is done simply by forcing the magnitude of the right null space vectors to zero, so we choose s = −29/244, t = 75/244, giving

    x = −(3/244) ( 1, 2, 0, 4 )^T + (29/244) ( 3, 2, −1, 3 )^T = ( 21/61, 13/61, −29/244, 75/244 )^T,

which lies entirely in the row space. This vector has ||x||_2 = 0.522055.
8.2.4.4 Square Systems
A set of n linear algebraic equations in n unknowns can be represented as

    A_{n×n} · x_{n×1} = b_{n×1}.    (8.12)

There is a unique solution if det A ≠ 0, and either no solution or an infinite number of solutions otherwise. In the case where there are no solutions, one can still find an x which minimizes the normed error ||A · x − b||_2.

Theorem
(Cramer's rule) The solution of the equation is

    x_i = det A_i / det A,    (8.13)

where A_i is the matrix obtained by replacing the ith column of A by b. While generally valid, Cramer's rule is most useful for low-dimension systems. For large systems, Gaussian elimination is a more efficient technique.
Example 8.11
For A : R^2 → R^2, solve for x in A · x = b:

    ( 1 2 ) ( x_1 )   ( 4 )
    ( 3 2 ) ( x_2 ) = ( 5 ).

By Cramer's rule,

    x_1 = | 4 2 ; 5 2 | / | 1 2 ; 3 2 | = −2 / −4 = 1/2,

    x_2 = | 1 4 ; 3 5 | / | 1 2 ; 3 2 | = −7 / −4 = 7/4.

So

    x = ( 1/2, 7/4 )^T.

We get the same result by Gaussian elimination. Subtracting three times the first row from the second yields

    ( 1  2 ) ( x_1 )   (  4 )
    ( 0 −4 ) ( x_2 ) = ( −7 ).

Thus x_2 = 7/4. Back substitution into the first equation then gives x_1 = 1/2.
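Cramer's rule is mechanical enough to code directly; a minimal sketch (the helper `cramer` is a hypothetical name, not from the notes):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule (practical only for small systems)."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                 # replace the i-th column of A by b
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = np.array([[1., 2.], [3., 2.]])
b = np.array([4., 5.])
x = cramer(A, b)                     # expect (1/2, 7/4)
</antml>```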
Example 8.12
With A : R^2 → R^2, find the most general x which best satisfies A · x = b for

    ( 1 2 ) ( x_1 )   ( 2 )
    ( 3 6 ) ( x_2 ) = ( 0 ).

Obviously, there is no unique solution to this system since the determinant of the coefficient matrix is zero. The rank of A is 1, so in actuality A maps vectors from R^2 into a one-dimensional subspace, R^1. For a general b, which does not lie in the one-dimensional subspace, we can find the best solution x by first multiplying both sides by A^T:
    ( 1 3 ) ( 1 2 ) ( x_1 )   ( 1 3 ) ( 2 )
    ( 2 6 ) ( 3 6 ) ( x_2 ) = ( 2 6 ) ( 0 ),

    ( 10 20 ) ( x_1 )   ( 2 )
    ( 20 40 ) ( x_2 ) = ( 4 ).

This operation maps both sides of the equation into the column space of A^T, which is the row space of A, which has dimension 1. Since the vectors are now in the same space, a solution can be found. Using row reductions to perform Gaussian elimination, we get

    ( 1 2 ) ( x_1 )   ( 1/5 )
    ( 0 0 ) ( x_2 ) = ( 0   ).

We set x_2 = t, where t is any arbitrary real number, and solve to get

    ( x_1 )   ( 1/5 )     ( −2 )
    ( x_2 ) = ( 0   ) + t (  1 ).
The vector which t multiplies, (−2, 1)^T, is in the right null space of A. We can recast the vector (1/5, 0)^T in terms of a linear combination of the row space vector (1, 2)^T and the right null space vector to get the final form of the solution:

    ( x_1, x_2 )^T = (1/25) ( 1, 2 )^T + ( t − 2/25 ) ( −2, 1 )^T,

where the first term lies in the row space and the second in the right null space. This choice of x guarantees that the least squares error ||A · x − b||_2 is minimized; in this case the least squares error is 1.89737. The vector x with the smallest norm that minimizes ||A · x − b||_2 is found by setting the magnitude of the right null space contribution to zero, so we can take t = 2/25, giving

    ( x_1, x_2 )^T = (1/25) ( 1, 2 )^T,

which lies entirely in the row space. This gives rise to ||x||_2 = 0.0894427.
8.2.5 Eigenvalues and eigenvectors
Much of the general discussion of eigenvectors and eigenvalues has been covered in the chapter
on linear analysis and will not be repeated here. A few new concepts are introduced, and
some old ones reinforced.
First, we recall that when one refers to eigenvectors, one typically means the right eigenvectors, which arise from A · e = λ I · e; if no distinction is made, it can be assumed that it is the right set that is being discussed. Though it does not often arise, there are occasions when one requires the left eigenvectors, which arise from e^T · A = e^T · I λ. If the matrix A is self-adjoint, it can be shown that it has the same left and right eigenvectors. If A is not self-adjoint, it in general has different left and right eigenvectors. The eigenvalues are the same for both left and right eigenvectors of the same operator, whether or not the system is self-adjoint.
Second, the polynomial equation that arises in the eigenvalue problem is the characteristic
equation of the matrix.
Theorem
A matrix satisfies its own characteristic equation (Cayley-Hamilton¹ theorem).

If a matrix is triangular, then the eigenvalues are the diagonal terms. Eigenvalues of A² are the squares of the eigenvalues of A. Every eigenvector of A is also an eigenvector of A · A = A². The spectral radius of a matrix is the largest of the absolute values of the eigenvalues.

¹ after Arthur Cayley, 1821-1895, English mathematician, and William Rowan Hamilton, 1805-1865, Anglo-Irish mathematician.
The trace of a matrix is the sum of the terms on the leading diagonal.

Theorem
The trace of an n × n matrix is the sum of its n eigenvalues.

Theorem
The product of the n eigenvalues is the determinant of the matrix.
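Both theorems can be checked numerically for any matrix; a sketch using the matrix of the example that follows:

```python
import numpy as np

A = np.array([[0., 1., -2.],
              [2., 1., 0.],
              [4., -2., 5.]])

eigvals = np.linalg.eigvals(A)          # 1, 2, 3 for this matrix
trace_A = np.trace(A)                   # should equal the eigenvalue sum
det_A = np.linalg.det(A)                # should equal the eigenvalue product
</antml>```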
Example 8.13
Demonstrate the above theorems for

    A = ( 0  1 −2
          2  1  0
          4 −2  5 ).

The characteristic equation is

    λ³ − 6λ² + 11λ − 6 = 0.
The Cayley-Hamilton theorem is easily verified by direct substitution:

    A³ − 6A² + 11A − 6I = 0.

Computing each term,

    A³ = ( −30  19 −38 )    −6A² = (  36 −30   60 )
         ( −10  13 −24 ),          ( −12 −18   24 ),
         (  52 −26  53 )           ( −96  48 −102 )

    11A = (  0  11 −22 )    −6I = ( −6  0  0 )
          ( 22  11   0 ),         (  0 −6  0 ),
          ( 44 −22  55 )          (  0  0 −6 )

and the four terms indeed sum to the zero matrix.
Considering the traditional right eigenvalue problem, A · e = λ I · e, it is easily shown that the eigenvalues and (right) eigenvectors for this system are

    λ_1 = 1,  e^(1) = ( 0, 2, 1 )^T,
    λ_2 = 2,  e^(2) = ( 1/2, 1, 0 )^T,
    λ_3 = 3,  e^(3) = ( −1, −1, 1 )^T.

One notes that while the eigenvectors do form a basis in R^3, they are not orthogonal; this is a consequence of the matrix not being self-adjoint (more specifically, of its not being symmetric). The spectral radius of A is 3. Now
    A² = A · A = ( 0  1 −2 ) ( 0  1 −2 )   ( −6  5 −10 )
                 ( 2  1  0 ) ( 2  1  0 ) = (  2  3  −4 )
                 ( 4 −2  5 ) ( 4 −2  5 )   ( 16 −8  17 ).
It is easily shown that the eigenvalues of A² are 1, 4, 9, precisely the squares of the eigenvalues of A.

The trace is

    tr A = 0 + 1 + 5 = 6.

Note this is equal to the sum of the eigenvalues,

    Σ_{i=1}^3 λ_i = 1 + 2 + 3 = 6.

Note also that

    det A = 6 = λ_1 λ_2 λ_3 = (1)(2)(3) = 6.
Note that since all the eigenvalues are positive, A is a positive matrix. It is not positive definite since it is not symmetric. Note for instance if x = (-1, 1, 1)^T, that x^T A x = -1. We might ask about the positive definiteness of the symmetric part of A, A_s = (1/2)(A + A^T):

A_s = \begin{pmatrix} 0 & 3/2 & 1 \\ 3/2 & 1 & -1 \\ 1 & -1 & 5 \end{pmatrix}.
In this case A_s has real eigenvalues, both positive and negative, λ_1 = 5.32, λ_2 = -1.39, λ_3 = 2.07. Because of the presence of a negative eigenvalue, the symmetric part of A is also not positive definite.
We also note that the antisymmetric part of a matrix can never be positive definite by the following argument. Recalling that the eigenvalues of an antisymmetric matrix A_a are purely imaginary, we have

x^T A_a x = x^T (λ) x = x^T (iλ_I) x = iλ_I x^T x = iλ_I ||x||_2^2,  where λ_I ∈ R^1.

Hence whenever the vector x is an eigenvector of A_a, the quantity x^T A_a x is a pure imaginary number, and hence there is no positive definiteness.
We can also easily solve the left eigenvalue problem, e_L^T A = λ e_L^T I:

λ_1 = 1, e_L^(1) = \begin{pmatrix} 2 \\ -1 \\ 1 \end{pmatrix},   λ_2 = 2, e_L^(2) = \begin{pmatrix} -3 \\ 1 \\ -2 \end{pmatrix},   λ_3 = 3, e_L^(3) = \begin{pmatrix} 2 \\ -1 \\ 2 \end{pmatrix}.

We see the eigenvalues are the same, but the left and right eigenvectors are different.
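These theorems are easily checked numerically. The following sketch (using numpy, which is assumed here and is not part of the notes' own tools) verifies the Cayley-Hamilton, trace, and determinant theorems for the matrix of this example:

```python
import numpy as np

# The matrix of Example 8.13.
A = np.array([[0., 1., -2.],
              [2., 1., 0.],
              [4., -2., 5.]])

# Cayley-Hamilton: A satisfies lambda^3 - 6 lambda^2 + 11 lambda - 6 = 0.
residual = A @ A @ A - 6 * (A @ A) + 11 * A - 6 * np.eye(3)
print(np.allclose(residual, 0))                   # True

lam = np.sort(np.linalg.eigvals(A).real)          # eigenvalues 1, 2, 3
lam2 = np.sort(np.linalg.eigvals(A @ A).real)     # eigenvalues of A^2: 1, 4, 9
print(np.allclose(lam2, lam**2))                  # True: squares of those of A
print(np.isclose(np.abs(lam).max(), 3.0))         # True: spectral radius is 3
print(np.isclose(np.trace(A), lam.sum()))         # True: trace 6 = 1 + 2 + 3
print(np.isclose(np.linalg.det(A), lam.prod()))   # True: det 6 = (1)(2)(3)
```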
8.2.6 Complex matrices
If x and y are complex vectors, we know that their inner product involves the conjugate transpose. The conjugate transpose operation occurs so often we give it a name, the Hermitian transpose, and denote it by a superscript H. Thus we define the inner product as

<x, y> = \bar{x}^T y = x^H y.

Then the norm is given by

||x||_2 = +\sqrt{x^H x}.
Example 8.14
If

x = \begin{pmatrix} 1+i \\ 3-2i \\ 2 \\ -3i \end{pmatrix},

find ||x||_2.

||x||_2 = +\sqrt{x^H x} = +\sqrt{(1-i, 3+2i, 2, +3i) \begin{pmatrix} 1+i \\ 3-2i \\ 2 \\ -3i \end{pmatrix}} = +\sqrt{2 + 13 + 4 + 9} = 2\sqrt{7}.
Example 8.15
If

x = \begin{pmatrix} 1+i \\ -2+3i \\ 2-i \end{pmatrix},   y = \begin{pmatrix} 3 \\ 4-2i \\ 3+3i \end{pmatrix},

find <x, y>.

<x, y> = x^H y,
       = (1-i, -2-3i, 2+i) \begin{pmatrix} 3 \\ 4-2i \\ 3+3i \end{pmatrix},
       = (3-3i) + (-14-8i) + (3+9i),
       = -8-2i.
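These two examples are easy to reproduce numerically. A minimal numpy sketch (numpy is an assumption here; note that np.vdot conjugates its first argument, exactly the Hermitian inner product above):

```python
import numpy as np

# Example 8.15: Hermitian inner product <x, y> = x^H y.
x = np.array([1 + 1j, -2 + 3j, 2 - 1j])
y = np.array([3 + 0j, 4 - 2j, 3 + 3j])
print(np.vdot(x, y))    # -8-2j, matching the hand computation

# Example 8.14: ||z||_2 = +sqrt(z^H z) = 2 sqrt(7).
z = np.array([1 + 1j, 3 - 2j, 2 + 0j, -3j])
print(np.isclose(np.linalg.norm(z), 2 * np.sqrt(7)))   # True
```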
Likewise, the conjugate or Hermitian transpose of a matrix A is A^H, given by the transpose of the matrix with each element replaced by its conjugate:

A^H = \bar{A}^T.

As the Hermitian transpose is the adjoint operator corresponding to a given complex matrix, we can apply an earlier proved theorem for linear operators to deduce that the eigenvalues of the Hermitian transpose of a complex matrix are the complex conjugates of the eigenvalues of that matrix.
The Hermitian transpose is distinguished from a matrix which is Hermitian as follows. A Hermitian matrix is one which is equal to its conjugate transpose. So a matrix which equals its Hermitian transpose is Hermitian. A matrix which does not equal its Hermitian transpose is non-Hermitian. A skew-Hermitian matrix is the negative of its Hermitian transpose. A Hermitian matrix is self-adjoint.
Properties:

• x^H A x is real if A is Hermitian.
• The eigenvalues of a Hermitian matrix are real.
• The eigenvectors of a Hermitian matrix that correspond to different eigenvalues are orthogonal to each other.
• The determinant of a Hermitian matrix is real.
• If A is skew-Hermitian, then iA is Hermitian, and vice versa.

Note the diagonal elements of a Hermitian matrix must be real, as they must be unchanged by conjugation.
Example 8.16
Consider A x = b, where A : C^3 → C^3, with A the Hermitian matrix and x the complex vector:

A = \begin{pmatrix} 1 & 2-i & 3 \\ 2+i & -3 & 2i \\ 3 & -2i & 4 \end{pmatrix},   x = \begin{pmatrix} 3+2i \\ -1 \\ 2-i \end{pmatrix}.

First, we have

b = A x = \begin{pmatrix} 1 & 2-i & 3 \\ 2+i & -3 & 2i \\ 3 & -2i & 4 \end{pmatrix} \begin{pmatrix} 3+2i \\ -1 \\ 2-i \end{pmatrix} = \begin{pmatrix} 7 \\ 9+11i \\ 17+4i \end{pmatrix}.

Now, demonstrate that the properties of Hermitian matrices hold for this case. First,

x^H A x = (3-2i, -1, 2+i) \begin{pmatrix} 1 & 2-i & 3 \\ 2+i & -3 & 2i \\ 3 & -2i & 4 \end{pmatrix} \begin{pmatrix} 3+2i \\ -1 \\ 2-i \end{pmatrix} = 42 ∈ R^1.
The eigenvalues and (right, same as left here) eigenvectors are

λ_1 = 6.51907, e^(1) = \begin{pmatrix} 0.525248 \\ 0.132451 + 0.223964i \\ 0.803339 - 0.105159i \end{pmatrix},

λ_2 = -0.104237, e^(2) = \begin{pmatrix} -0.745909 \\ -0.385446 + 0.0890195i \\ 0.501844 - 0.187828i \end{pmatrix},

λ_3 = -4.41484, e^(3) = \begin{pmatrix} 0.409554 \\ -0.871868 - 0.125103i \\ -0.116278 - 0.207222i \end{pmatrix}.

Check for orthogonality between two of the eigenvectors, e.g. e^(1), e^(2):

<e^(1), e^(2)> = e^(1)H e^(2),
  = (0.525248, 0.132451 - 0.223964i, 0.803339 + 0.105159i) \begin{pmatrix} -0.745909 \\ -0.385446 + 0.0890195i \\ 0.501844 - 0.187828i \end{pmatrix},
  = 0 + 0i.

The same holds for the other eigenvectors.
It can then be shown that

det A = 3,

which is also equal to the product of the eigenvalues. Lastly,

iA = \begin{pmatrix} i & 1+2i & 3i \\ -1+2i & -3i & -2 \\ 3i & 2 & 4i \end{pmatrix}

is skew-Hermitian. It is easily shown the eigenvalues of iA are

λ_1 = 6.51907i,   λ_2 = -0.104237i,   λ_3 = -4.41484i.

Note the eigenvalues of this matrix are just those of the previous one multiplied by i.
8.3 Orthogonal and unitary matrices
8.3.1 Orthogonal matrices
A set of n n-dimensional real orthonormal vectors {e_1, e_2, ..., e_n} can be formed into an orthogonal matrix

Q = ( e_1 | e_2 | ... | e_n ).   (8.14)

Properties of orthogonal matrices include

1. Q^T = Q^{-1}, and both are orthogonal.
2. Q^T Q = Q Q^T = I.
3. ||Q||_2 = 1, when the domain and range of Q are in Hilbert spaces.
4. ||Q x||_2 = ||x||_2, where x is a vector.
5. (Q x)^T (Q y) = x^T y, where x and y are real vectors.
6. Eigenvalues of Q have |λ_i| = 1, λ_i ∈ C^1.

Geometrically, an orthogonal matrix is an operator which rotates but does not stretch a vector. Rotation matrices, as well as their transposes, the matrices of direction cosines, are both orthogonal matrices.
Example 8.17
Find the orthogonal matrix corresponding to

A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}.

The normalized eigenvectors are

\frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}  and  \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}.

The orthogonal matrix is

Q = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}.
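The listed properties are cheap to verify for this Q. A numpy sketch (numpy is an assumption, not part of the notes):

```python
import numpy as np

# Q from Example 8.17, built from normalized eigenvectors of a symmetric matrix.
Q = (1 / np.sqrt(2)) * np.array([[1., 1.],
                                 [-1., 1.]])

print(np.allclose(Q.T @ Q, np.eye(2)))         # True: Q^T Q = I
print(np.allclose(Q.T, np.linalg.inv(Q)))      # True: Q^T = Q^-1

x = np.array([3., -4.])
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))   # True: no stretching

lam = np.linalg.eigvals(Q)
print(np.allclose(np.abs(lam), 1.0))           # True: |lambda_i| = 1
```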
8.3.2 Unitary matrices
A unitary matrix U is a complex matrix with orthonormal columns. It is the complex analog of an orthogonal matrix.
Properties of unitary matrices include

• U^H = U^{-1}, and both are unitary.
• U^H U = U U^H = I.
• ||U||_2 = 1, when the domain and range of U are in Hilbert spaces.
• ||U x||_2 = ||x||_2, where x is a vector.
• (U x)^H (U y) = x^H y, where x and y are vectors.
• Eigenvalues of U have |λ_i| = 1, λ_i ∈ C^1.
• Eigenvectors of U corresponding to different eigenvalues are orthogonal.

Unitary matrices represent pure rotations in a complex space.
Example 8.18
Consider the unitary matrix

U = \begin{pmatrix} (1+i)/\sqrt{3} & (1-2i)/\sqrt{15} \\ 1/\sqrt{3} & (1+3i)/\sqrt{15} \end{pmatrix}.

The column vectors are easily seen to be normal. They are also orthogonal:

\left( \frac{1-i}{\sqrt{3}}, \frac{1}{\sqrt{3}} \right) \begin{pmatrix} (1-2i)/\sqrt{15} \\ (1+3i)/\sqrt{15} \end{pmatrix} = 0 + 0i.

The matrix itself is not Hermitian. Still, its Hermitian transpose exists:

U^H = \begin{pmatrix} (1-i)/\sqrt{3} & 1/\sqrt{3} \\ (1+2i)/\sqrt{15} & (1-3i)/\sqrt{15} \end{pmatrix}.

It is then easily verified that

U^{-1} = U^H,
U U^H = U^H U = I.

The eigensystem is

λ_1 = -0.0986232 + 0.995125i, e^(1) = \begin{pmatrix} 0.688191 - 0.425325i \\ 0.587785 \end{pmatrix},

λ_2 = 0.934172 + 0.356822i, e^(2) = \begin{pmatrix} -0.306358 + 0.501633i \\ -0.721676 - 0.36564i \end{pmatrix}.

It is easily verified that the eigenvectors are orthogonal and the eigenvalues have magnitude of one.
8.4 Matrix decompositions
One of the most important tasks, especially in the numerical solution of algebraic and differential equations, is decomposing general matrices into simpler components. A brief discussion will be given here of some of the more important decompositions. Full discussions can be found in Strang's text. It is noted that many popular software programs, such as Matlab, Mathematica, and the IMSL libraries, have routines which calculate these decompositions.
8.4.1 L D U decomposition
Probably the most important technique in solving linear systems of algebraic equations of the form A x = b uses the decomposition

A = P^{-1} L D U,   (8.15)

where A is a square matrix,^2 P is a never-singular permutation matrix, L is a lower triangular matrix, D is a diagonal matrix, and U is an upper triangular matrix. The notation of U for the upper triangular matrix is common, and should not be confused with the identical notation for a unitary matrix. In other contexts R is sometimes used for an upper triangular matrix. All terms can be found by ordinary Gaussian elimination. The permutation matrix is necessary in case row exchanges are necessary in the Gaussian elimination.
A common numerical algorithm to solve for x in A x = b is as follows:

• Factor A into P^{-1} L D U so that A x = b becomes

P^{-1} L D U x = b.

• Operate on both sides as follows:

U x = (P^{-1} L D)^{-1} b.

• Solve next for the new variable c in the new equation

P^{-1} L D c = b,

so

c = (P^{-1} L D)^{-1} b.

The triangular form of L D renders the inversion of (P^{-1} L D) much more computationally efficient than inversion of an arbitrary square matrix.

• Substitute c into the modified original equation to get

U x = c,

so

x = U^{-1} c.

Again since U is triangular, the inversion is computationally efficient.

^2 If A is not square, there is an equivalent decomposition, known as echelon form, to be discussed later in this chapter.
Example 8.19
Find the L D U decomposition of the matrix below:

A = \begin{pmatrix} -5 & 4 & 9 \\ -22 & 14 & 18 \\ 16 & -8 & -6 \end{pmatrix}.

The process is essentially a series of row operations, which is the essence of Gaussian elimination. First we operate to transform the -22 and 16 in the first column into zeroes. Crucial in this step is the necessity of the term in the 1,1 slot, known as the pivot, to be nonzero. If it is zero, a row exchange will be necessary, mandating a permutation matrix which is not the identity matrix. In this case there are no such problems. We multiply the first row by 22/5 and subtract from the second row, then multiply the first row by -16/5 and subtract from the third row. The factors 22/5 and -16/5 will go in the 2,1 and 3,1 slots of the matrix L. The diagonal of L always is filled with ones. This row operation yields

A = \begin{pmatrix} -5 & 4 & 9 \\ -22 & 14 & 18 \\ 16 & -8 & -6 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 22/5 & 1 & 0 \\ -16/5 & 0 & 1 \end{pmatrix} \begin{pmatrix} -5 & 4 & 9 \\ 0 & -18/5 & -108/5 \\ 0 & 24/5 & 114/5 \end{pmatrix}.

Now multiplying the new second row by -4/3, subtracting this from the third row, and depositing the factor -4/3 into the 3,2 slot of the matrix L, we get

A = \begin{pmatrix} -5 & 4 & 9 \\ -22 & 14 & 18 \\ 16 & -8 & -6 \end{pmatrix} = \underbrace{\begin{pmatrix} 1 & 0 & 0 \\ 22/5 & 1 & 0 \\ -16/5 & -4/3 & 1 \end{pmatrix}}_{L} \underbrace{\begin{pmatrix} -5 & 4 & 9 \\ 0 & -18/5 & -108/5 \\ 0 & 0 & -6 \end{pmatrix}}_{U}.

The form above is often described as the L U decomposition of A. We can force the diagonal terms of the upper triangular matrix to unity by extracting a diagonal matrix D to form the L D U decomposition:

A = \begin{pmatrix} -5 & 4 & 9 \\ -22 & 14 & 18 \\ 16 & -8 & -6 \end{pmatrix} = \underbrace{\begin{pmatrix} 1 & 0 & 0 \\ 22/5 & 1 & 0 \\ -16/5 & -4/3 & 1 \end{pmatrix}}_{L} \underbrace{\begin{pmatrix} -5 & 0 & 0 \\ 0 & -18/5 & 0 \\ 0 & 0 & -6 \end{pmatrix}}_{D} \underbrace{\begin{pmatrix} 1 & -4/5 & -9/5 \\ 0 & 1 & 6 \\ 0 & 0 & 1 \end{pmatrix}}_{U}.

Note that D does not contain the eigenvalues of A. Also, since there were no row exchanges necessary, P = P^{-1} = I, and it has not been included.
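The row operations of this example can be sketched in code. The routine below (a minimal numpy sketch, assuming no row exchanges are needed, which is true here since no pivot vanishes) reproduces the L, D, and U factors:

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU without row exchanges: returns L, U with A = L @ U.
    Assumes every pivot is nonzero, as in Example 8.19."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]    # multiplier deposited in L
            U[i, :] -= L[i, k] * U[k, :]   # row operation zeroing U[i, k]
    return L, U

A = np.array([[-5., 4., 9.],
              [-22., 14., 18.],
              [16., -8., -6.]])
L, U = lu_no_pivot(A)
D = np.diag(np.diag(U))                    # extract the pivots into D
U_unit = np.linalg.inv(D) @ U              # unit upper triangular factor
print(np.allclose(L @ U, A))               # True: the L U form
print(np.allclose(L @ D @ U_unit, A))      # True: the L D U form
print(np.diag(D))                          # pivots -5, -18/5, -6
```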
Example 8.20
Find the L D U decomposition of the matrix A:

A = \begin{pmatrix} 0 & 1 & 2 \\ 1 & 1 & 1 \\ 1 & 0 & 0 \end{pmatrix}.

There is a zero in the pivot, so a row exchange is necessary:

P A = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 2 \\ 1 & 1 & 1 \\ 1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 2 \end{pmatrix}.

Performing Gaussian elimination by subtracting 1 times the first row from the second and depositing the 1 in the 2,1 slot of L, we get

P A = L U → \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 2 \\ 1 & 1 & 1 \\ 1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 2 \end{pmatrix}.

Now subtracting 1 times the second row from the third, and depositing the 1 in the 3,2 slot of L,

P A = L U → \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 2 \\ 1 & 1 & 1 \\ 1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.

Now U already has ones on the diagonal, so the diagonal matrix D is simply the identity matrix. Using this and inverting P, which is P itself(!), we get the final decomposition

A = P^{-1} L D U → \begin{pmatrix} 0 & 1 & 2 \\ 1 & 1 & 1 \\ 1 & 0 & 0 \end{pmatrix} = \underbrace{\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}}_{P^{-1}} \underbrace{\begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}}_{L} \underbrace{\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}}_{D} \underbrace{\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}}_{U}.
It can also be shown that if A is symmetric, the decomposition can be written as

A = P^{-1} L D L^T.

8.4.2 Echelon form
When A is not square, we can still use Gaussian elimination to cast the matrix in echelon form:

A = P^{-1} L D U.

Again P is a never-singular permutation matrix, L is lower triangular and square, D is diagonal and square, and U is upper triangular, rectangular, and of the same dimension as A. The strategy is to use row operations in such a fashion that ones or zeroes appear on the diagonal.
Example 8.21
Consider the nonsquare matrix studied earlier,

A = \begin{pmatrix} 1 & -3 & 2 \\ 2 & 0 & 3 \end{pmatrix}.

We take 2 times the first row and subtract the result from the second row. The scalar 2 is deposited in the 2,1 slot in the L matrix. So

A = \begin{pmatrix} 1 & -3 & 2 \\ 2 & 0 & 3 \end{pmatrix} = \underbrace{\begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}}_{L} \underbrace{\begin{pmatrix} 1 & -3 & 2 \\ 0 & 6 & -1 \end{pmatrix}}_{U}.

Again, the above is also known as an L U decomposition, and is often as useful as the L D U decomposition. There is no row exchange, so the permutation matrix and its inverse are the identity matrix. We extract a 1 and 6 to form the diagonal matrix D, so the final form is

A = P^{-1} L D U = \underbrace{\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}}_{P^{-1}} \underbrace{\begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}}_{L} \underbrace{\begin{pmatrix} 1 & 0 \\ 0 & 6 \end{pmatrix}}_{D} \underbrace{\begin{pmatrix} 1 & -3 & 2 \\ 0 & 1 & -1/6 \end{pmatrix}}_{U}.
Echelon form is an especially useful form for underconstrained systems as illustrated in
the following example.
Example 8.22
Consider solutions for the unknown x in the equation A x = b where A is known, A : R^5 → R^3, and b is left general, but considered to be known:

\begin{pmatrix} 2 & 1 & -1 & 1 & 2 \\ 4 & 2 & -2 & 1 & 0 \\ -2 & -1 & 1 & -2 & -6 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}.

We perform Gaussian elimination row operations on the second and third rows to get zeros in the first column:

\begin{pmatrix} 2 & 1 & -1 & 1 & 2 \\ 0 & 0 & 0 & -1 & -4 \\ 0 & 0 & 0 & -1 & -4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} b_1 \\ -2b_1 + b_2 \\ b_1 + b_3 \end{pmatrix}.
The next round of Gaussian elimination works on the third row and yields

\begin{pmatrix} 2 & 1 & -1 & 1 & 2 \\ 0 & 0 & 0 & -1 & -4 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} b_1 \\ -2b_1 + b_2 \\ 3b_1 - b_2 + b_3 \end{pmatrix}.

Note that the reduced third equation gives

0 = 3b_1 - b_2 + b_3.

This is the equation of a plane in R^3. Thus arbitrary b ∈ R^3 will not satisfy the original equation. Said another way, the operator A maps arbitrary five-dimensional vectors x into a two-dimensional subspace of a three-dimensional vector space. The rank of A is 2. Thus the dimension of both the row space and the column space is 2; the dimension of the right null space is 3, and the dimension of the left null space is 1.
We also note there are two nontrivial equations remaining. The first nonzero elements from the left of each row are known as the pivots. The number of pivots is equal to the rank of the matrix. Variables which correspond to each pivot are known as basic variables. Variables with no pivot are known as free variables. Here the basic variables are x_1 and x_4, while the free variables are x_2, x_3, and x_5.
Now enforcing the constraint 3b_1 - b_2 + b_3 = 0, without which there will be no solution, we can set each free variable to an arbitrary value, and then solve the resulting square system. Take x_2 = r, x_3 = s, x_5 = t, where here r, s, and t are arbitrary real scalar constants. So

\begin{pmatrix} 2 & 1 & -1 & 1 & 2 \\ 0 & 0 & 0 & -1 & -4 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ r \\ s \\ x_4 \\ t \end{pmatrix} = \begin{pmatrix} b_1 \\ -2b_1 + b_2 \\ 0 \end{pmatrix},

which gives

\begin{pmatrix} 2 & 1 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_4 \end{pmatrix} = \begin{pmatrix} b_1 - r + s - 2t \\ -2b_1 + b_2 + 4t \end{pmatrix},

which yields

x_4 = 2b_1 - b_2 - 4t,
x_1 = \frac{1}{2}(-b_1 + b_2 - r + s + 2t).
Thus

x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} \frac{1}{2}(-b_1 + b_2 - r + s + 2t) \\ r \\ s \\ 2b_1 - b_2 - 4t \\ t \end{pmatrix} = \begin{pmatrix} \frac{1}{2}(-b_1 + b_2) \\ 0 \\ 0 \\ 2b_1 - b_2 \\ 0 \end{pmatrix} + r \begin{pmatrix} -1/2 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} + s \begin{pmatrix} 1/2 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} + t \begin{pmatrix} 1 \\ 0 \\ 0 \\ -4 \\ 1 \end{pmatrix},   r, s, t ∈ R^1.
The coefficients r, s, and t multiply the three right null space vectors. These, in combination with two independent row space vectors, form a basis for any vector x. Thus, we can again cast the solution as a particular solution which is a unique combination of independent row space vectors and a nonunique combination of the right null space vectors (the homogeneous solution):

x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \underbrace{\frac{25b_1 - 13b_2}{106} \begin{pmatrix} 2 \\ 1 \\ -1 \\ 1 \\ 2 \end{pmatrix} + \frac{-13b_1 + 11b_2}{106} \begin{pmatrix} 4 \\ 2 \\ -2 \\ 1 \\ 0 \end{pmatrix}}_{row space} + \underbrace{\hat{r} \begin{pmatrix} -1/2 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} + \hat{s} \begin{pmatrix} 1/2 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} + \hat{t} \begin{pmatrix} 1 \\ 0 \\ 0 \\ -4 \\ 1 \end{pmatrix}}_{right null space}.
In matrix form, we can say that

x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} 2 & 4 & -1/2 & 1/2 & 1 \\ 1 & 2 & 1 & 0 & 0 \\ -1 & -2 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & -4 \\ 2 & 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} (25b_1 - 13b_2)/106 \\ (-13b_1 + 11b_2)/106 \\ \hat{r} \\ \hat{s} \\ \hat{t} \end{pmatrix}.

Here we have taken \hat{r} = r + (b_1 - 9b_2)/106, \hat{s} = s + (-b_1 + 9b_2)/106, and \hat{t} = t + (-50b_1 + 26b_2)/106; as they are arbitrary constants multiplying vectors in the right null space, the precise relationship to b_1 and b_2 is actually unimportant. As before, while the null space basis vectors are orthogonal to the row space basis vectors, the entire system is not orthogonal. The Gram-Schmidt procedure could be used to cast the solution on either an orthogonal or orthonormal basis.
It is also noted that we have effectively found the L U decomposition of A. The terms in L are from the Gaussian elimination, and we already have U:

A = L U → \underbrace{\begin{pmatrix} 2 & 1 & -1 & 1 & 2 \\ 4 & 2 & -2 & 1 & 0 \\ -2 & -1 & 1 & -2 & -6 \end{pmatrix}}_{A} = \underbrace{\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 1 & 1 \end{pmatrix}}_{L} \underbrace{\begin{pmatrix} 2 & 1 & -1 & 1 & 2 \\ 0 & 0 & 0 & -1 & -4 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}}_{U}.
The L D U decomposition is

\underbrace{\begin{pmatrix} 2 & 1 & -1 & 1 & 2 \\ 4 & 2 & -2 & 1 & 0 \\ -2 & -1 & 1 & -2 & -6 \end{pmatrix}}_{A} = \underbrace{\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 1 & 1 \end{pmatrix}}_{L} \underbrace{\begin{pmatrix} 2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix}}_{D} \underbrace{\begin{pmatrix} 1 & 1/2 & -1/2 & 1/2 & 1 \\ 0 & 0 & 0 & 1 & 4 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}}_{U}.

There were no row exchanges, so in effect the permutation matrix P is the identity matrix, and there is no need to include it.
Lastly, we note that a more robust alternative to the method shown here would be to first apply the A^T operator to both sides of the equation so as to map both sides into the column space of A. Then there would be no need to restrict b so that it lies in the column space. Our results are then interpreted as giving us only a projection of x. Taking A^T A x = A^T b and then casting the result into row echelon form gives

\begin{pmatrix} 1 & 1/2 & -1/2 & 0 & -1 \\ 0 & 0 & 0 & 1 & 4 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} (1/22)(b_1 + 7b_2 + 4b_3) \\ (1/11)(b_1 - 4b_2 - 7b_3) \\ 0 \\ 0 \\ 0 \end{pmatrix}.
This suggests we take x_2 = r, x_3 = s, and x_5 = t and solve so as to get

\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} (1/22)(b_1 + 7b_2 + 4b_3) \\ 0 \\ 0 \\ (1/11)(b_1 - 4b_2 - 7b_3) \\ 0 \end{pmatrix} + r \begin{pmatrix} -1/2 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} + s \begin{pmatrix} 1/2 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} + t \begin{pmatrix} 1 \\ 0 \\ 0 \\ -4 \\ 1 \end{pmatrix}.

We could go on to cast this in terms of combinations of row vectors and right null space vectors, but will not do so here. It is reiterated that this result is valid for arbitrary b, but that it only represents a solution which minimizes the error in ||A x - b||_2.
8.4.3 Q R decomposition
The Q R decomposition allows us to express any matrix as the product of an orthogonal (unitary if complex) matrix Q and an upper triangular matrix R, of the same dimension as A. That is, we seek Q and R such that

A = Q R.   (8.16)

The matrix A can be square or rectangular. See Strang for details of the algorithm.
Example 8.23
The Q R decomposition of the matrix we diagonalized in a previous example is as follows:

A = \underbrace{\begin{pmatrix} -5 & 4 & 9 \\ -22 & 14 & 18 \\ 16 & -8 & -6 \end{pmatrix}}_{A} = Q R = \underbrace{\begin{pmatrix} -0.1808 & -0.4982 & 0.8480 \\ -0.7954 & -0.4331 & -0.4240 \\ 0.5785 & -0.7512 & -0.3180 \end{pmatrix}}_{Q} \underbrace{\begin{pmatrix} 27.6586 & -16.4867 & -19.4153 \\ 0 & -2.0465 & -7.7722 \\ 0 & 0 & 1.9080 \end{pmatrix}}_{R}.
Noting that ||Q||_2 = 1, we deduce that ||R||_2 = ||A||_2. Also, recalling how matrices can be thought of as transformations, we see how to think of A as a pure rotation (Q) followed by stretching (R).
Example 8.24
Find the Q R decomposition for our nonsquare matrix

A = \begin{pmatrix} 1 & -3 & 2 \\ 2 & 0 & 3 \end{pmatrix}.

The decomposition is

A = \underbrace{\begin{pmatrix} -0.4472 & -0.8944 \\ -0.8944 & 0.4472 \end{pmatrix}}_{Q} \underbrace{\begin{pmatrix} -2.2361 & 1.3416 & -3.577 \\ 0 & 2.6833 & -0.4472 \end{pmatrix}}_{R}.
The Q R decomposition can be shown to be closely related to the Gram-Schmidt orthogonalization process. It is also useful in increasing the efficiency of estimating x for A x ≈ b when the system is overconstrained; that is, b is not in the column space of A, R(A). If we, as usual, operate on both sides as follows,

A x ≈ b,   b ∉ R(A),
A^T A x ≈ A^T b,   A = Q R,
(Q R)^T Q R x ≈ (Q R)^T b,
R^T Q^T Q R x ≈ R^T Q^T b,
R^T Q^{-1} Q R x ≈ R^T Q^T b,
R^T R x ≈ R^T Q^T b,
x ≈ (R^T R)^{-1} R^T Q^T b,
Q R x ≈ Q R (R^T R)^{-1} R^T Q^T b,
A x ≈ Q R (R^T R)^{-1} R^T Q^T b.

When rectangular R has no zeros on its diagonal, R (R^T R)^{-1} R^T has all zeroes, except for r ones on the diagonal, where r is the rank of R. This makes solution of overconstrained problems particularly simple.
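The chain of operations above can be sketched numerically. Below is a minimal numpy sketch; the tall matrix A and vector b are hypothetical illustrative data (not from the text), chosen so that b is not in the column space of A:

```python
import numpy as np

# Overconstrained system: four equations, two unknowns (hypothetical data).
A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.],
              [1., 3.]])
b = np.array([1., 2., 2., 4.])       # b does not lie in the column space of A

Q, R = np.linalg.qr(A)               # reduced QR: A = Q @ R, Q^T Q = I
x = np.linalg.solve(R, Q.T @ b)      # R x = Q^T b gives the least-squares x

# This agrees with the normal-equations / least-squares solution.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ls))          # True
```

Since Q^T Q = I, the normal equations A^T A x = A^T b collapse to the triangular system R x = Q^T b, which is cheap to solve by back substitution.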
8.4.4 Diagonalization
Casting a matrix into a form in which all (or sometimes most) of its off-diagonal elements have zero value has its most important application in solving systems of differential equations, but also arises in other scenarios. For many cases, we can decompose a square matrix A into the form

A = S Λ S^{-1},   (8.17)

where S is a nonsingular matrix and Λ is a diagonal matrix. To diagonalize a square matrix A, we must find S, a diagonalizing matrix, such that S^{-1} A S is diagonal. Not all matrices are diagonalizable.

Theorem
A matrix with distinct eigenvalues can be diagonalized, but the diagonalizing matrix is not unique.

Definition: The algebraic multiplicity of an eigenvalue is the number of times it occurs as a root of the characteristic equation. The geometric multiplicity of an eigenvalue is the number of linearly independent eigenvectors associated with it.

Theorem
Nonzero eigenvectors corresponding to different eigenvalues are linearly independent.
Theorem
If A is an n × n matrix with n linearly independent right eigenvectors {e_1, e_2, ..., e_n} corresponding to eigenvalues {λ_1, λ_2, ..., λ_n} (not necessarily distinct), then the n × n matrix S whose columns are populated by the eigenvectors of A,

S = ( e_1 | e_2 | ... | e_n ),   (8.18)

makes

S^{-1} A S = Λ,   (8.19)

where

Λ = \begin{pmatrix} λ_1 & 0 & \cdots & 0 \\ 0 & λ_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & λ_n \end{pmatrix}   (8.20)

is a diagonal matrix of eigenvalues. The matrices A and Λ are similar.
Let's see if this recipe works when we fill the columns of S with the eigenvectors:

S^{-1} A S = Λ,   (8.21)
A S = S Λ,   (8.22)

\begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} e_1^{(1)} & \cdots & e_1^{(n)} \\ \vdots & \ddots & \vdots \\ e_n^{(1)} & \cdots & e_n^{(n)} \end{pmatrix} = \begin{pmatrix} e_1^{(1)} & \cdots & e_1^{(n)} \\ \vdots & \ddots & \vdots \\ e_n^{(1)} & \cdots & e_n^{(n)} \end{pmatrix} \begin{pmatrix} λ_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & λ_n \end{pmatrix},   (8.23)

= \begin{pmatrix} λ_1 e_1^{(1)} & \cdots & λ_n e_1^{(n)} \\ \vdots & \ddots & \vdots \\ λ_1 e_n^{(1)} & \cdots & λ_n e_n^{(n)} \end{pmatrix},   (8.24)

A e^{(1)} = λ_1 e^{(1)},   (8.25)
A e^{(2)} = λ_2 e^{(2)},   (8.26)
  ⋮
A e^{(n)} = λ_n e^{(n)}.   (8.27)
Note also the effect of postmultiplication of both sides by S^{-1}:

A S S^{-1} = S Λ S^{-1},
A = S Λ S^{-1}.
Example 8.25
Diagonalize the matrix considered in a previous example:

A = \begin{pmatrix} -5 & 4 & 9 \\ -22 & 14 & 18 \\ 16 & -8 & -6 \end{pmatrix},

and check.

The eigenvalue-eigenvector pairs are

λ_1 = -6, e_1 = \begin{pmatrix} -1 \\ -2 \\ 1 \end{pmatrix},   λ_2 = 3, e_2 = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix},   λ_3 = 6, e_3 = \begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix}.

Then

S = ( e_1 | e_2 | e_3 ) = \begin{pmatrix} -1 & 1 & 2 \\ -2 & 2 & 1 \\ 1 & 0 & 2 \end{pmatrix}.

The inverse is

S^{-1} = \frac{1}{3} \begin{pmatrix} -4 & 2 & 3 \\ -5 & 4 & 3 \\ 2 & -1 & 0 \end{pmatrix}.

Thus

A S = \begin{pmatrix} 6 & 3 & 12 \\ 12 & 6 & 6 \\ -6 & 0 & 12 \end{pmatrix},

and

Λ = S^{-1} A S = \begin{pmatrix} -6 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 6 \end{pmatrix}.

Let us also note the complementary decomposition of A:

A = S Λ S^{-1} = \underbrace{\begin{pmatrix} -1 & 1 & 2 \\ -2 & 2 & 1 \\ 1 & 0 & 2 \end{pmatrix}}_{S} \underbrace{\begin{pmatrix} -6 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 6 \end{pmatrix}}_{Λ} \underbrace{\frac{1}{3} \begin{pmatrix} -4 & 2 & 3 \\ -5 & 4 & 3 \\ 2 & -1 & 0 \end{pmatrix}}_{S^{-1}} = \underbrace{\begin{pmatrix} -5 & 4 & 9 \\ -22 & 14 & 18 \\ 16 & -8 & -6 \end{pmatrix}}_{A}.

Note that because the matrix is not symmetric, the eigenvectors are not orthogonal, e.g. e_1^T e_2 = -5.
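The same diagonalization can be obtained numerically; np.linalg.eig normalizes its eigenvector columns differently from the hand computation above, but the decomposition A = S Λ S^{-1} is the same (numpy assumed, not part of the notes):

```python
import numpy as np

A = np.array([[-5., 4., 9.],
              [-22., 14., 18.],
              [16., -8., -6.]])

lam, S = np.linalg.eig(A)          # columns of S are right eigenvectors
Lambda = np.diag(lam)

# Equation (8.17): A = S Lambda S^{-1}.
print(np.allclose(S @ Lambda @ np.linalg.inv(S), A))   # True
print(np.allclose(np.sort(lam.real), [-6., 3., 6.]))   # True: eigenvalues -6, 3, 6
```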
Note that if A is symmetric (Hermitian), then its eigenvectors must be orthogonal; thus, it is possible to normalize the eigenvectors so that the matrix S is in fact orthogonal (unitary if complex). Thus for symmetric A we have

A = Q Λ Q^{-1}.

Since Q^{-1} = Q^T, we have

A = Q Λ Q^T.
Note also that with A S = S Λ, the column vectors of S (which are the right eigenvectors of A) form a basis in C^n. Consider now the right eigensystem of the adjoint of A, denoted by A^*:

A^* P = P Λ^*,   (8.28)

where Λ^* is the diagonal matrix containing the eigenvalues of A^*, and P is the matrix whose columns are populated by the (right) eigenvectors of A^*. Now we know from an earlier proof that the eigenvalues of the adjoint are the complex conjugates of those of the original operator; thus Λ^* = Λ^H. Also the adjoint operator for matrices is the Hermitian transpose. So we find that

A^H P = P Λ^H.   (8.29)

Taking the Hermitian transpose of both sides, we recover

P^H A = Λ P^H.   (8.30)

So we see clearly that the left eigenvectors of a linear operator are the right eigenvectors of the adjoint of that operator. Moreover the eigenvalues are the same for both the left and right eigenvectors.
It is also possible to show that, remarkably, when we take the inner product of the matrix of right eigenvectors of the operator with the matrix of right eigenvectors of its adjoint, we obtain a diagonal matrix, which we denote as D:

S^H P = D.   (8.31)
eigenvector matrix is diagonal. Let us see how this comes about. Let s
i
be a right eigenvector
of A with eigenvalue λ
i
and p
j
be a left eigenvector of A with eigenvalue λ
j
. Then
A s
i
= λ
i
s
i
, (8.32)
and
p
H
j
A = λ
j
p
H
j
. (8.33)
If we premultiply the ﬁrst eigenrelation by p
H
j
, we obtain
p
H
j
A s
i
= p
H
j
(λ
i
s
i
) . (8.34)
Substituting from the second eigenrelation and rearranging, we get
λ
j
p
H
j
s
i
= λ
i
p
H
j
s
i
. (8.35)
Rearranging,

(λ_j - λ_i)(p_j^H s_i) = 0.   (8.36)

Now if i ≠ j and λ_i ≠ λ_j, we must have

p_j^H s_i = 0,   (8.37)

or, taking the Hermitian transpose,

s_i^H p_j = 0.   (8.38)

If i = j, then all we can say is that s_i^H p_j is some arbitrary scalar. Hence we have shown the desired relation that S^H P = D.
Since eigenvectors have an arbitrary magnitude, it is a straightforward process to scale either P or S such that the diagonal matrix is actually the identity matrix. Here we choose to scale P, given that our task was to find the reciprocal basis vectors of S. We take then

S^H \hat{P} = I.   (8.39)

Here \hat{P} denotes the matrix in which each eigenvector (column) of the original P has been scaled such that the above identity is achieved. Hence \hat{P} is seen to give the set of reciprocal basis vectors for the basis defined by S:

S_R = \hat{P}.   (8.40)

It is also easy to see then that the inverse of the matrix S is given by

S^{-1} = \hat{P}^H.   (8.41)
Example 8.26
For a matrix A considered in an earlier example, consider the basis formed by its matrix of eigenvectors S, and use the properly scaled matrix of eigenvectors of A^* = A^H to determine the reciprocal basis S_R.

We will take

A = \begin{pmatrix} -5 & 4 & 9 \\ -22 & 14 & 18 \\ 16 & -8 & -6 \end{pmatrix}.

As found before, the eigenvalue-(right) eigenvector pairs are

λ_1 = -6, e_{1R} = \begin{pmatrix} -1 \\ -2 \\ 1 \end{pmatrix},   λ_2 = 3, e_{2R} = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix},   λ_3 = 6, e_{3R} = \begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix}.

Then we take the matrix of basis vectors to be

S = ( e_1 | e_2 | e_3 ) = \begin{pmatrix} -1 & 1 & 2 \\ -2 & 2 & 1 \\ 1 & 0 & 2 \end{pmatrix}.

The adjoint of A is

A^H = \begin{pmatrix} -5 & -22 & 16 \\ 4 & 14 & -8 \\ 9 & 18 & -6 \end{pmatrix}.

The eigenvalue-(right) eigenvector pairs of A^H, which are the left eigenvectors of A, are found to be

λ_1 = -6, e_{1L} = \begin{pmatrix} -4 \\ 2 \\ 3 \end{pmatrix},   λ_2 = 3, e_{2L} = \begin{pmatrix} -5 \\ 4 \\ 3 \end{pmatrix},   λ_3 = 6, e_{3L} = \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix}.

So the matrix of right eigenvectors of the adjoint, which contains the left eigenvectors of the original matrix, is

P = \begin{pmatrix} -4 & -5 & -2 \\ 2 & 4 & 1 \\ 3 & 3 & 0 \end{pmatrix}.

We indeed find that the inner product of S and P is a diagonal matrix D:

S^H P = \begin{pmatrix} -1 & -2 & 1 \\ 1 & 2 & 0 \\ 2 & 1 & 2 \end{pmatrix} \begin{pmatrix} -4 & -5 & -2 \\ 2 & 4 & 1 \\ 3 & 3 & 0 \end{pmatrix} = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & -3 \end{pmatrix}.

Using our knowledge of D, we individually scale each column of P to form the desired reciprocal basis

\hat{P} = \begin{pmatrix} -4/3 & -5/3 & 2/3 \\ 2/3 & 4/3 & -1/3 \\ 1 & 1 & 0 \end{pmatrix} = S_R.

Then we see that the inner product of S and the reciprocal basis \hat{P} = S_R is indeed the identity matrix:

S^H \hat{P} = \begin{pmatrix} -1 & -2 & 1 \\ 1 & 2 & 0 \\ 2 & 1 & 2 \end{pmatrix} \begin{pmatrix} -4/3 & -5/3 & 2/3 \\ 2/3 & 4/3 & -1/3 \\ 1 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
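The biorthogonality relation S^H P = D can also be checked numerically. The numpy sketch below (numpy assumed; the column-matching step is an implementation detail of the sketch, needed because np.linalg.eig returns eigenpairs in an arbitrary order) scales the adjoint's eigenvectors to build the reciprocal basis:

```python
import numpy as np

A = np.array([[-5., 4., 9.],
              [-22., 14., 18.],
              [16., -8., -6.]])

# Right eigenvectors of A, and of its adjoint A^H (the left eigenvectors of A).
lam, S = np.linalg.eig(A)
mu, P = np.linalg.eig(A.conj().T)

# Reorder the columns of P so that column j pairs with eigenvalue lam[j].
P = P[:, [np.argmin(np.abs(mu - l.conj())) for l in lam]]

D = S.conj().T @ P                          # S^H P, equation (8.31)
off = D - np.diag(np.diag(D))
print(np.allclose(off, 0))                  # True: S^H P is diagonal

# Scale each column of P by the corresponding diagonal entry of D.
P_hat = P / np.diag(D)
print(np.allclose(S.conj().T @ P_hat, np.eye(3)))   # True: S^H P_hat = I
```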
8.4.5 Jordan canonical form
A square matrix A without a sufficient number of linearly independent eigenvectors can still be decomposed into a near-diagonal form:

A = S J S^{-1}.   (8.42)

This form is known as the Jordan^3 (upper) canonical form, in which the near-diagonal matrix J,

J = S^{-1} A S,   (8.43)

has zeros everywhere except for eigenvalues along the principal diagonal and unity above the missing eigenvectors.
Consider the eigenvalue λ of multiplicity n of the matrix A_{n×n}. Then

(A - λI) e = 0   (8.44)

gives some linearly independent eigenvectors e_1, e_2, ..., e_k. If k = n, the matrix can be diagonalized. If, however, k < n, we need n - k more linearly independent vectors. These are the generalized eigenvectors. One can be obtained from

(A - λI) g_1 = e,   (8.45)

and others from

(A - λI) g_{j+1} = g_j  for j = 1, 2, 3, . . .   (8.46)

This procedure is continued until n linearly independent eigenvectors and generalized eigenvectors are obtained, which is the most that we can have in R^n. Then

S = ( e_1 | ... | e_k | g_1 | ... | g_{n-k} )   (8.47)

gives S^{-1} A S = J, where J is of the Jordan canonical form.
Notice that g_n also satisfies (A - λI)^n g_n = 0. For example, if

(A - λI) g = e,
(A - λI) (A - λI) g = (A - λI) e,
(A - λI) (A - λI) g = 0,
(A - λI)^2 g = 0.

However any solution of the final equation above is not necessarily a generalized eigenvector.

^3 Marie Ennemond Camille Jordan, 1838-1922, French mathematician.
Example 8.27
Find the Jordan canonical form of

A = \begin{pmatrix} 4 & 1 & 3 \\ 0 & 4 & 1 \\ 0 & 0 & 4 \end{pmatrix}.

The eigenvalues are λ = 4 with multiplicity three. For this value,

(A - λI) = \begin{pmatrix} 0 & 1 & 3 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.

The eigenvectors are obtained from (A - λI) e^(1) = 0, which gives x_2^(1) + 3x_3^(1) = 0, x_3^(1) = 0. The most general form of the eigenvector is

e^(1) = \begin{pmatrix} a \\ 0 \\ 0 \end{pmatrix}.

Only one eigenvector can be obtained from this eigenvalue. To get a generalized eigenvector, we take (A - λI) g^(1) = e^(1), which gives x_2^(2) + 3x_3^(2) = a, x_3^(2) = 0, so that

g^(1) = \begin{pmatrix} b \\ a \\ 0 \end{pmatrix}.

Another generalized eigenvector can be similarly obtained from (A - λI) g^(2) = g^(1), so that x_2^(3) + 3x_3^(3) = b, x_3^(3) = a. Thus we get

g^(2) = \begin{pmatrix} c \\ b - 3a \\ a \end{pmatrix}.

From the eigenvector and generalized eigenvectors,

S = ( e^(1) | g^(1) | g^(2) ) = \begin{pmatrix} a & b & c \\ 0 & a & b - 3a \\ 0 & 0 & a \end{pmatrix},

and

S^{-1} = \begin{pmatrix} 1/a & -b/a^2 & (b^2 - 3ab - ac)/a^3 \\ 0 & 1/a & (3a - b)/a^2 \\ 0 & 0 & 1/a \end{pmatrix}.

The Jordan canonical form is

J = S^{-1} A S = \begin{pmatrix} 4 & 1 & 0 \\ 0 & 4 & 1 \\ 0 & 0 & 4 \end{pmatrix}.

Note that, in the above, a and b are any constants. Choosing, for example, a = 1, b = c = 0 simplifies the algebra, giving

S = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -3 \\ 0 & 0 & 1 \end{pmatrix},

and

S^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{pmatrix}.
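With the concrete choice a = 1, b = c = 0, the construction is easy to check numerically (a numpy sketch, not part of the original notes):

```python
import numpy as np

A = np.array([[4., 1., 3.],
              [0., 4., 1.],
              [0., 0., 4.]])

# S from Example 8.27 with a = 1, b = c = 0:
# columns are the eigenvector e1 and generalized eigenvectors g1, g2.
S = np.array([[1., 0., 0.],
              [0., 1., -3.],
              [0., 0., 1.]])

# The generalized eigenvector chain: (A - 4I) g1 = e1, (A - 4I) g2 = g1.
lamI = 4 * np.eye(3)
e1, g1, g2 = S[:, 0], S[:, 1], S[:, 2]
print(np.allclose((A - lamI) @ g1, e1))   # True
print(np.allclose((A - lamI) @ g2, g1))   # True

# S^{-1} A S is the Jordan canonical form.
J = np.linalg.inv(S) @ A @ S
print(np.allclose(J, [[4., 1., 0.],
                      [0., 4., 1.],
                      [0., 0., 4.]]))     # True
```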
8.4.6 Schur decomposition
The Schur^4 decomposition is as follows:

A = Q R Q^T.   (8.48)

Here Q is an orthogonal (unitary if complex) matrix, and R is upper triangular, with the eigenvalues this time along the diagonal. The matrix A must be square.

Example 8.28
The Schur decomposition of the matrix we diagonalized in a previous example is as follows:

A = \underbrace{\begin{pmatrix} -5 & 4 & 9 \\ -22 & 14 & 18 \\ 16 & -8 & -6 \end{pmatrix}}_{A} = Q R Q^T = \underbrace{\begin{pmatrix} -0.4082 & 0.1826 & 0.8944 \\ -0.8165 & 0.3651 & -0.4472 \\ 0.4082 & 0.9129 & 0 \end{pmatrix}}_{Q} \underbrace{\begin{pmatrix} -6 & -20.1246 & 31.0376 \\ 0 & 3 & 5.7155 \\ 0 & 0 & 6 \end{pmatrix}}_{R} \underbrace{\begin{pmatrix} -0.4082 & -0.8165 & 0.4082 \\ 0.1826 & 0.3651 & 0.9129 \\ 0.8944 & -0.4472 & 0 \end{pmatrix}}_{Q^T}.

If A is symmetric, then the upper triangular matrix R reduces to the diagonal matrix with eigenvalues on the diagonal, Λ; the Schur decomposition is in this case simply A = Q Λ Q^T.
8.4.7 Singular value decomposition
The singular value decomposition is used for nonsquare matrices and is the most general form of diagonalization. Any complex matrix A_{n×m} can be factored into the form

A_{n×m} = Q_{n×n} B_{n×m} Q^H_{m×m},

where Q_{n×n} and Q^H_{m×m} are orthogonal (unitary, if complex) matrices, and B has positive numbers μ_i, (i = 1, 2, ..., r) in the first r positions on the main diagonal, and zero everywhere else. It turns out that r is the rank of A_{n×m}. The columns of Q_{n×n} are the eigenvectors of A_{n×m} A^H_{n×m}. The columns of Q_{m×m} are the eigenvectors of A^H_{n×m} A_{n×m}. The values μ_i, (i = 1, 2, ..., r) ∈ R^1 are called the singular values of A. They are analogous to eigenvalues and are in fact the positive square roots of the eigenvalues of A_{n×m} A^H_{n×m} or A^H_{n×m} A_{n×m}. Note that since the matrix from which the eigenvalues are drawn is Hermitian, the eigenvalues, and thus the singular values, are guaranteed real. Note also that if A itself is square and Hermitian, the absolute values of the eigenvalues of A will equal its singular values. If A is square and non-Hermitian, there is no simple relation between its eigenvalues and singular values. The factorization Q_{n×n} B_{n×m} Q^H_{m×m} is called the singular value decomposition (SVD).

^4 Issai Schur, 1875-1941, Belarusian-born, German-based mathematician.
As discussed by Strang, the column vectors of Q_{n×n} and Q_{m×m} are even more than orthonormal. They also must be chosen in such a way that A_{n×m} Q_{m×m} is a scalar multiple of Q_{n×n}. This comes directly from postmultiplying the general form of the singular value decomposition by Q_{m×m}: A_{n×m} Q_{m×m} = Q_{n×n} B_{n×m}. So in fact a more robust way of computing the singular value decomposition is to first compute one of the orthogonal matrices, and then compute the other orthogonal matrix with which the first one is consistent.
Example 8.29
Find the singular value decomposition of

A_{2×3} = \begin{pmatrix} 1 & -3 & 2 \\ 2 & 0 & 3 \end{pmatrix}.

The matrix is real so we do not need to consider the conjugate transpose; we will retain the notation for generality, though here the ordinary transpose would suffice. First consider A A^H:

A A^H = \underbrace{\begin{pmatrix} 1 & -3 & 2 \\ 2 & 0 & 3 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} 1 & 2 \\ -3 & 0 \\ 2 & 3 \end{pmatrix}}_{A^H} = \begin{pmatrix} 14 & 8 \\ 8 & 13 \end{pmatrix}.

The diagonal eigenvalue matrix and corresponding orthogonal matrix composed of the normalized eigenvectors in the columns are

Λ_{2×2} = \begin{pmatrix} 21.5156 & 0 \\ 0 & 5.48439 \end{pmatrix},   Q_{2×2} = \begin{pmatrix} 0.728827 & -0.684698 \\ 0.684698 & 0.728827 \end{pmatrix}.

Next we consider A^H A:

A^H A = \underbrace{\begin{pmatrix} 1 & 2 \\ -3 & 0 \\ 2 & 3 \end{pmatrix}}_{A^H} \underbrace{\begin{pmatrix} 1 & -3 & 2 \\ 2 & 0 & 3 \end{pmatrix}}_{A} = \begin{pmatrix} 5 & -3 & 8 \\ -3 & 9 & -6 \\ 8 & -6 & 13 \end{pmatrix}.

The diagonal eigenvalue matrix and corresponding orthogonal matrix composed of the normalized eigenvectors in the columns are

Λ_{3×3} = \begin{pmatrix} 21.52 & 0 & 0 \\ 0 & 5.484 & 0 \\ 0 & 0 & 0 \end{pmatrix},   Q_{3×3} = \begin{pmatrix} 0.4524 & 0.3301 & -0.8285 \\ -0.4714 & 0.8771 & 0.09206 \\ 0.7571 & 0.3489 & 0.5523 \end{pmatrix}.

We take

B_{2×3} = \begin{pmatrix} \sqrt{21.52} & 0 & 0 \\ 0 & \sqrt{5.484} & 0 \end{pmatrix} = \begin{pmatrix} 4.639 & 0 & 0 \\ 0 & 2.342 & 0 \end{pmatrix},

and can easily verify that

Q_{2×2} B_{2×3} Q^H_{3×3} = \underbrace{\begin{pmatrix} 0.7288 & -0.6847 \\ 0.6847 & 0.7288 \end{pmatrix}}_{Q_{2×2}} \underbrace{\begin{pmatrix} 4.639 & 0 & 0 \\ 0 & 2.342 & 0 \end{pmatrix}}_{B_{2×3}} \underbrace{\begin{pmatrix} 0.4524 & -0.4714 & 0.7571 \\ 0.3301 & 0.8771 & 0.3489 \\ -0.8285 & 0.09206 & 0.5523 \end{pmatrix}}_{Q^H_{3×3}} = \begin{pmatrix} 1 & -3 & 2 \\ 2 & 0 & 3 \end{pmatrix} = A_{2×3}.
283
The singular values here are µ
1
= 4.639, µ
2
= 2.342.
Let’s see how we can get another singular value decomposition of the same matrix. Here we will employ the more robust technique of computing the decomposition. The orthogonal matrices Q_{3×3} and Q_{2×2} are not unique, as one can multiply any row or column by −1 and still maintain orthonormality. For example, instead of the value found earlier, let us presume that we found

    Q_{3×3} = [ −0.4524  0.3301  −0.8285  ]
              [  0.4714  0.8771   0.09206 ]
              [ −0.7571  0.3489   0.5523  ].

Here, the first column of the original Q_{3×3} has been multiplied by −1. If we used this new Q_{3×3} in conjunction with the previously found matrices to form Q_{2×2} · B_{2×3} · Q^H_{3×3}, we would not recover A_{2×3}! The more robust way is to take

    A_{2×3} = Q_{2×2} · B_{2×3} · Q^H_{3×3},
    A_{2×3} · Q_{3×3} = Q_{2×2} · B_{2×3},

    [ 1  −3  2 ] [ −0.4524  0.3301  −0.8285  ]   [ q₁₁  q₁₂ ] [ 4.639  0      0 ]
    [ 2   0  3 ] [  0.4714  0.8771   0.09206 ] = [ q₂₁  q₂₂ ] [ 0      2.342  0 ],
                 [ −0.7571  0.3489   0.5523  ]

    [ −3.381  −1.603  0 ]   [ 4.639 q₁₁  2.342 q₁₂  0 ]
    [ −3.176   1.707  0 ] = [ 4.639 q₂₁  2.342 q₂₂  0 ].

Solving for q_{ij}, we find that

    Q_{2×2} = [ −0.7288  −0.6847 ]
              [ −0.6847   0.7288 ].

It is easily seen that this version of Q_{2×2} differs from the first version by a sign change in the first column. Direct substitution shows that the new decomposition also recovers A_{2×3}:

    Q_{2×2} · B_{2×3} · Q^H_{3×3}
      = [ −0.7288  −0.6847 ] [ 4.639  0      0 ] [ −0.4524  0.4714   −0.7571 ]
        [ −0.6847   0.7288 ] [ 0      2.342  0 ] [  0.3301  0.8771    0.3489 ]
                                                 [ −0.8285  0.09206   0.5523 ]
      = [ 1  −3  2 ] = A_{2×3}.
        [ 2   0  3 ]
It is also easily shown that the singular values of a square Hermitian matrix are identical to the absolute values of the eigenvalues of that matrix. The singular values of a square non-Hermitian matrix are not, in general, the eigenvalues of that matrix.
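As a quick numerical check of the decomposition above (not part of the original notes; this sketch assumes NumPy is available), the singular values of the example matrix can be compared against the square roots of the eigenvalues of A·Aᴴ:

```python
import numpy as np

# Example matrix from Example 8.29
A = np.array([[1., -3., 2.],
              [2., 0., 3.]])

# numpy factors A = Q2 @ B @ Q3h with singular values sorted descending
Q2, s, Q3h = np.linalg.svd(A)

# singular values are the positive square roots of the nonzero
# eigenvalues of A A^H (here A A^T, since A is real)
eigs = np.sort(np.linalg.eigvals(A @ A.T))[::-1]
assert np.allclose(s, np.sqrt(eigs.real))
assert np.allclose(s, [4.6385, 2.3419], atol=2e-3)

# reconstruction: pad diag(s) to the 2x3 matrix B of the notes
B = np.zeros((2, 3))
B[:2, :2] = np.diag(s)
assert np.allclose(Q2 @ B @ Q3h, A)
```

Note that the signs of the columns returned by a library routine may differ from those computed by hand, exactly as discussed above; the product always reconstructs A.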
8.4.8 Hessenberg form

A square matrix A can be decomposed into Hessenberg form

    A = Q · H · Q^T,

where Q is an orthogonal (or unitary) matrix and H has zeros below the first subdiagonal. When A is Hermitian, H is tridiagonal, which is very easy to invert numerically. Also, H has the same eigenvalues as A.

Example 8.30
The Hessenberg form of our example square matrix A is

    A = [  −5   4   9 ] = Q · H · Q^T
        [ −22  14  18 ]
        [  16  −8  −6 ]

      = [ 1   0       0      ] [ −5       2.0586   9.6313 ] [ 1   0       0      ]
        [ 0  −0.8087  0.5882 ] [ 27.2029  2.3243 −24.0451 ] [ 0  −0.8087  0.5882 ]
        [ 0   0.5882  0.8087 ] [ 0        1.9459   5.6757 ] [ 0   0.5882  0.8087 ].
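The Hessenberg reduction can be checked numerically; here is a short sketch (not part of the original notes) using SciPy's `scipy.linalg.hessenberg`, which may produce a Q differing in column signs from the one computed by hand:

```python
import numpy as np
from scipy.linalg import hessenberg

A = np.array([[-5., 4., 9.],
              [-22., 14., 18.],
              [16., -8., -6.]])

# Hessenberg reduction: A = Q @ H @ Q.T with H zero below the first subdiagonal
H, Q = hessenberg(A, calc_q=True)

# H is upper Hessenberg: the entry below the first subdiagonal vanishes
assert abs(H[2, 0]) < 1e-12

# similarity transform preserves the eigenvalues (here 6, 3, -6)
assert np.allclose(np.sort(np.linalg.eigvals(H).real), [-6., 3., 6.])

# and the factorization reconstructs A
assert np.allclose(Q @ H @ Q.T, A)
```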
8.5 Projection matrix

The vector A·x belongs to the column space of A. Here A is not necessarily square. Consider the equation A·x = b, where A and b are given. If the given vector b does not lie in the column space of A, the equation cannot be solved for x. Still, we would like to find x_p such that

    A · x_p = b_p,  (8.49)

which does lie in the column space of A, such that b_p is the projection of b onto the column space. The error vector is

    e = b_p − b.

For a projection, this error should be orthogonal to all vectors A·z which belong to the column space, where the components of z are arbitrary. Enforcing this condition, we get

    0 = (A · z)^T · e,
      = (A · z)^T · (b_p − b),
      = z^T · A^T · (A · x_p − b),
      = z^T · (A^T · A · x_p − A^T · b).

Since z is an arbitrary vector,

    A^T · A · x_p − A^T · b = 0,  (8.50)

from which

    A^T · A · x_p = A^T · b,
    x_p = (A^T · A)^{−1} · A^T · b,
    A · x_p = A · (A^T · A)^{−1} · A^T · b,
    b_p = A · (A^T · A)^{−1} · A^T · b.

The projection matrix R defined by b_p = R · b is

    R = A · (A^T · A)^{−1} · A^T.

The projection matrix for an operator A, when operating on an arbitrary vector b, yields the projection of b onto the column space of A. Note that many vectors b could have the same projection onto the column space of A.
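The defining properties of R can be verified numerically; the following sketch (not part of the original notes, with an arbitrarily chosen A and b) checks that R is idempotent and symmetric and that the error is orthogonal to the column space:

```python
import numpy as np

# projection onto the column space of a tall, full-column-rank A
A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
b = np.array([0., 0., 3.])

R = A @ np.linalg.inv(A.T @ A) @ A.T   # R = A (A^T A)^{-1} A^T

bp = R @ b          # projection of b onto the column space
e = bp - b          # error vector

# R is idempotent and symmetric, as any orthogonal projector must be
assert np.allclose(R @ R, R)
assert np.allclose(R, R.T)

# the error is orthogonal to every vector A z in the column space
assert np.allclose(A.T @ e, 0.0)
```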
8.6 Method of least squares

One important application of projection matrices is the method of least squares. This method is often used to fit data to a given functional form. The form is most often in terms of polynomials, but there is absolutely no restriction; trigonometric functions, logarithmic functions, and Bessel functions can all serve as well. Now if one has, say, ten data points, one can in principle find a ninth order polynomial which will pass through all the data points. Oftentimes, especially when there is much experimental error in the data, such a function may be subject to wild oscillations, which are unwarranted by the underlying physics, and thus is not useful as a predictive tool. In such cases, it may be more useful to choose a lower order curve which does not exactly pass through all experimental points, but which does minimize the error. In this method, one

• examines the data,
• makes a nonunique judgment of what the functional form might be,
• substitutes each data point into the assumed form so as to form an overconstrained system of linear equations,
• uses the technique associated with projection matrices to solve for the coefficients which best represent the given data.
8.6.1 Unweighted least squares

This is the most common method used when one has equal confidence in all the data.

Example 8.31
Find the best straight line to approximate the measured data relating x to t.

    t    x
    0    5
    1    7
    2    10
    3    12
    6    15

A straight line fit will have the form

    x = a₀ + a₁ t,

where a₀ and a₁ are the terms to be determined. Substituting each data point into the assumed form, we get five equations in two unknowns:

    5 = a₀ + 0 a₁,
    7 = a₀ + 1 a₁,
    10 = a₀ + 2 a₁,
    12 = a₀ + 3 a₁,
    15 = a₀ + 6 a₁.

Rearranging, we get

    [ 1  0 ]            [ 5  ]
    [ 1  1 ] [ a₀ ]     [ 7  ]
    [ 1  2 ] [ a₁ ]  =  [ 10 ]
    [ 1  3 ]            [ 12 ]
    [ 1  6 ]            [ 15 ].

This is of the form A · a = b. We then find that

    a = (A^T · A)^{−1} · A^T · b.

Substituting, we find that

             (                       [ 1  0 ] )⁻¹                       [ 5  ]
    [ a₀ ]   ( [ 1  1  1  1  1 ]     [ 1  1 ] )     [ 1  1  1  1  1 ]   [ 7  ]     [ 5.7925 ]
    [ a₁ ] = ( [ 0  1  2  3  6 ]  ·  [ 1  2 ] )  ·  [ 0  1  2  3  6 ] · [ 10 ]  =  [ 1.6698 ].
             (                       [ 1  3 ] )                         [ 12 ]
             (                       [ 1  6 ] )                         [ 15 ]

So the best fit estimate is

    x = 5.7925 + 1.6698 t.

The least squares error is ||A · a − b||₂ = 1.9206. This represents the ℓ₂ error of the prediction. A plot of the raw data and the best fit straight line is shown in Figure 8.3.
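The normal-equations computation of this example can be reproduced in a few lines; the following sketch (not part of the original notes) uses NumPy on the same data:

```python
import numpy as np

# data from Example 8.31
t = np.array([0., 1., 2., 3., 6.])
x = np.array([5., 7., 10., 12., 15.])

# overconstrained system A a = b for the straight-line fit x = a0 + a1 t
A = np.column_stack([np.ones_like(t), t])
b = x

# normal equations: (A^T A) a = A^T b
a = np.linalg.solve(A.T @ A, A.T @ b)
assert np.allclose(a, [5.7925, 1.6698], atol=1e-3)

# the least squares error ||A a - b||_2
err = np.linalg.norm(A @ a - b)
assert abs(err - 1.9206) < 1e-3
```

In practice `np.linalg.lstsq(A, b, rcond=None)` computes the same coefficients via an SVD, which is better conditioned than forming AᵀA explicitly.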
8.6.2 Weighted least squares

If one has more confidence in some data points than others, one can define a weighting function to give more priority to those particular data points.

Example 8.32
Find the best straight line fit for the data in the previous example. Now, however, assume that we have five times the confidence in the accuracy of the final two data points, relative to the other points.

Define a square weighting matrix W:

    W = [ 1  0  0  0  0 ]
        [ 0  1  0  0  0 ]
        [ 0  0  1  0  0 ]
        [ 0  0  0  5  0 ]
        [ 0  0  0  0  5 ].

[Figure 8.3: Plot of x − t data and best least squares straight line fit, x = 5.7925 + 1.6698 t.]

Now we perform the following operations:

    A · a = b,
    W · A · a = W · b,
    (W · A)^T · W · A · a = (W · A)^T · W · b,
    a = ((W · A)^T · W · A)^{−1} · (W · A)^T · W · b.

With the above values of W, direct substitution leads to

    a = [ a₀ ] = [ 8.0008 ]
        [ a₁ ]   [ 1.1972 ].

So the best weighted least squares fit is

    x = 8.0008 + 1.1972 t.

A plot of the raw data and the best fit straight line is shown in Figure 8.4.

When the measurements are independent and equally reliable, W is the identity matrix. If the measurements are independent but not equally reliable, W is at most diagonal. If the measurements are not independent, then nonzero terms can appear off the diagonal in W. It is often advantageous, for instance in problems in which one wants to control a process in real time, to give priority to recent data estimates over old data estimates and to continually employ a least squares technique to estimate future system behavior. The previous example does just that. A famous fast algorithm for such problems is known as a Kalman Filter.
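The weighted variant differs from the unweighted one only in the matrices entering the normal equations; a sketch (not part of the original notes) on the same data:

```python
import numpy as np

t = np.array([0., 1., 2., 3., 6.])
x = np.array([5., 7., 10., 12., 15.])
A = np.column_stack([np.ones_like(t), t])
b = x

# five times the confidence in the last two measurements
W = np.diag([1., 1., 1., 5., 5.])

# a = ((W A)^T W A)^{-1} (W A)^T W b
WA, Wb = W @ A, W @ b
a = np.linalg.solve(WA.T @ WA, WA.T @ Wb)
assert np.allclose(a, [8.0008, 1.1972], atol=1e-3)
```

Relative to the unweighted fit (5.7925, 1.6698), the line is pulled toward the heavily weighted final two points, as Figure 8.4 shows.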
[Figure 8.4: Plot of x − t data and best weighted least squares straight line fit, x = 8.0008 + 1.1972 t.]
8.7 Matrix exponential

Definition: The exponential matrix is defined as

    e^A = I + A + (1/2!) A² + (1/3!) A³ + · · ·  (8.51)

Thus

    e^{A t} = I + A t + (1/2!) A² t² + (1/3!) A³ t³ + · · · ,

    d/dt (e^{A t}) = A + A² t + (1/2!) A³ t² + · · · ,
                   = A · (I + A t + (1/2!) A² t² + (1/3!) A³ t³ + · · ·),
                   = A · e^{A t}.

Properties of the matrix exponential include

    e^{a I} = e^a I,  (8.52)
    (e^A)^{−1} = e^{−A},  (8.53)
    e^{A(t+s)} = e^{A t} · e^{A s}.  (8.54)

But e^{A+B} = e^A · e^B only if A · B = B · A. Thus, e^{t I + s A} = e^t e^{s A}.

Example 8.33
Find e^{A t} if

    A = [ a  1  0 ]
        [ 0  a  1 ]
        [ 0  0  a ].

We have

    A = a I + B,

where

    B = [ 0  1  0 ]
        [ 0  0  1 ]
        [ 0  0  0 ].

Thus

    B² = [ 0  0  1 ]        B³ = [ 0  0  0 ]
         [ 0  0  0 ],            [ 0  0  0 ],   . . . ,   Bⁿ = 0 for n ≥ 4.
         [ 0  0  0 ]             [ 0  0  0 ]

Furthermore

    I · B = B · I = B.

Thus

    e^{A t} = e^{(a I + B) t},
            = e^{a t I} · e^{B t},
            = (I + a t I + (1/2!) a² t² I² + (1/3!) a³ t³ I³ + · · ·) · (I + B t + (1/2!) B² t² + (1/3!) B³ t³ + · · ·),
            = e^{a t} I · (I + B t + B² t²/2),
            = e^{a t} [ 1  t  t²/2 ]
                      [ 0  1  t    ]
                      [ 0  0  1    ].

If A can be diagonalized, the calculation is simplified. Then

    e^{A t} = e^{(S·Λ·S^{−1}) t} = I + S·Λ·S^{−1} t + . . . + (1/n!) (S·Λ·S^{−1} t)ⁿ + · · · .

Noting that

    (S·Λ·S^{−1})² = S·Λ·S^{−1} · S·Λ·S^{−1} = S·Λ²·S^{−1},
    (S·Λ·S^{−1})ⁿ = S·Λ·S^{−1} · · · S·Λ·S^{−1} = S·Λⁿ·S^{−1},

the original equation reduces to

    e^{A t} = S · (I + Λ t + . . . + (1/n!) Λⁿ tⁿ + · · ·) · S^{−1} = S · e^{Λ t} · S^{−1}.
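The closed form of Example 8.33 can be checked against a general-purpose matrix exponential; this sketch (not part of the original notes, with a and t chosen arbitrarily) uses SciPy's `expm`:

```python
import numpy as np
from scipy.linalg import expm

# the nilpotent example of Example 8.33, with a and t chosen arbitrarily
a, t = 0.5, 2.0
A = np.array([[a, 1., 0.],
              [0., a, 1.],
              [0., 0., a]])

# closed form: e^{At} = e^{at} [[1, t, t^2/2], [0, 1, t], [0, 0, 1]]
closed = np.exp(a * t) * np.array([[1., t, t**2 / 2.],
                                   [0., 1., t],
                                   [0., 0., 1.]])
assert np.allclose(expm(A * t), closed)

# e^{A} = e^{aI} e^{B} holds here because aI and B commute
B = A - a * np.eye(3)
assert np.allclose(expm(A), expm(a * np.eye(3)) @ expm(B))
```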
8.8 Quadratic form

At times one may be given a polynomial equation for which one wants to determine conditions under which the expression is positive. For example, if we have

    f(x₁, x₂, x₃) = 18x₁² − 16x₁x₂ + 5x₂² + 12x₁x₃ − 4x₂x₃ + 6x₃²,

it is not obvious whether or not there exist (x₁, x₂, x₃) which will give positive or negative values of f. However, it is easily verified that f can be rewritten as

    f(x₁, x₂, x₃) = 2(x₁ − x₂ + x₃)² + 3(2x₁ − x₂)² + 4(x₁ + x₃)².

So in this case f ≥ 0 for all (x₁, x₂, x₃). How to demonstrate positivity (or nonpositivity) of such expressions is the topic of this section. A quadratic form is an expression

    f(x₁, · · · , xₙ) = Σⁿ_{j=1} Σⁿ_{i=1} a_{ij} x_i x_j,  (8.55)

where {a_{ij}} is a real, symmetric matrix which we will also call A. The surface represented by the equation Σⁿ_{j=1} Σⁿ_{i=1} a_{ij} x_i x_j = constant is a quadric surface. With the coefficient matrix defined, we can represent f as

    f = x^T · A · x.  (8.56)

Now A can be decomposed as Q · Λ · Q^{−1}, where Q is the orthogonal matrix corresponding to A and Λ is the corresponding diagonal matrix of eigenvalues. Thus

    f = x^T · Q · Λ · Q^{−1} · x.  (8.57)

Since Q is orthogonal, we then have

    f = x^T · Q · Λ · Q^T · x.  (8.58)

Now, define y so that y = Q^T · x = Q^{−1} · x. Consequently, x = Q · y. Thus our equation for f becomes

    f = (Q · y)^T · Q · Λ · y,  (8.59)
      = y^T · Q^T · Q · Λ · y,  (8.60)
      = y^T · Q^{−1} · Q · Λ · y,  (8.61)
      = y^T · Λ · y.  (8.62)

This standard form of a quadratic form is one in which the cross-product terms (i.e. x_i x_j, i ≠ j) do not appear.

Theorem
(Principal axis theorem) If Q is the orthogonal matrix and λ₁, · · · , λₙ the eigenvalues corresponding to {a_{ij}}, a change in coordinates

    [ x₁ ]       [ y₁ ]
    [ ⋮  ] = Q · [ ⋮  ],  (8.63)
    [ xₙ ]       [ yₙ ]

will reduce the quadratic form (8.55) to its standard form

    λ₁ y₁² + λ₂ y₂² + · · · + λₙ yₙ².  (8.64)
Example 8.34
Change

    f(x₁, x₂) = 2x₁² + 2x₁x₂ + 2x₂²,

to standard form.

For n = 2, equation (8.55) becomes

    f(x₁, x₂) = a₁₁x₁² + (a₁₂ + a₂₁)x₁x₂ + a₂₂x₂².

We choose {a_{ij}} such that the matrix is symmetric. This gives us

    a₁₁ = 2,  a₁₂ = 1,  a₂₁ = 1,  a₂₂ = 2.

So we get

    A = [ 2  1 ]
        [ 1  2 ].

The eigenvalues of A are λ = 1, λ = 3. The orthogonal matrix corresponding to A is

    Q = (1/√2) [  1  1 ],   Q^{−1} = Q^T = (1/√2) [ 1  −1 ]
               [ −1  1 ]                          [ 1   1 ].

The transformation x = Q · y is

    x₁ = (1/√2)(y₁ + y₂),
    x₂ = (1/√2)(−y₁ + y₂).

The inverse transformation y = Q^{−1} · x is

    y₁ = (1/√2)(x₁ − x₂),
    y₂ = (1/√2)(x₁ + x₂).

Substituting into f(x₁, x₂) we get

    f(y₁, y₂) = y₁² + 3y₂².

In terms of the original variables, we get

    f(x₁, x₂) = (1/2)(x₁ − x₂)² + (3/2)(x₁ + x₂)².
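The reduction of Example 8.34 can be verified numerically; this sketch (not part of the original notes) uses `numpy.linalg.eigh`, which returns the eigenvalues in ascending order and the orthonormal eigenvectors as columns of Q:

```python
import numpy as np

# coefficient matrix for f(x1, x2) = 2 x1^2 + 2 x1 x2 + 2 x2^2
A = np.array([[2., 1.],
              [1., 2.]])

# eigh: ascending eigenvalues, orthonormal eigenvectors in the columns of Q
lam, Q = np.linalg.eigh(A)
assert np.allclose(lam, [1., 3.])

# with y = Q^T x, f = lam1 y1^2 + lam2 y2^2; spot-check at a random point
rng = np.random.default_rng(0)
x = rng.standard_normal(2)
y = Q.T @ x
assert np.isclose(x @ A @ x, lam[0] * y[0]**2 + lam[1] * y[1]**2)
```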
Example 8.35
Change

    f(x₁, x₂, x₃) = 18x₁² − 16x₁x₂ + 5x₂² + 12x₁x₃ − 4x₂x₃ + 6x₃²,

to standard form.

For n = 3, equation (8.55) becomes

    f(x₁, x₂, x₃) = ( x₁  x₂  x₃ ) [ 18  −8   6 ] [ x₁ ]
                                   [ −8   5  −2 ] [ x₂ ] = x^T · A · x.
                                   [  6  −2   6 ] [ x₃ ]

The eigenvalues of A are λ₁ = 1, λ₂ = 4, λ₃ = 24. The orthogonal matrix corresponding to A, with the normalized eigenvectors in its columns, is

    Q = [ −4/√69  −1/√30  13/√230 ],   Q^{−1} = Q^T = [ −4/√69   −7/√69   2/√69  ]
        [ −7/√69   2/√30  −6/√230 ]                   [ −1/√30    2/√30   5/√30  ]
        [  2/√69   5/√30   5/√230 ]                   [ 13/√230  −6/√230  5/√230 ].

The inverse transformation y = Q^{−1} · x is

    y₁ = (−4x₁ − 7x₂ + 2x₃)/√69,
    y₂ = (−x₁ + 2x₂ + 5x₃)/√30,
    y₃ = (13x₁ − 6x₂ + 5x₃)/√230.

Substituting into f(x₁, x₂, x₃), we get

    f(y₁, y₂, y₃) = y₁² + 4y₂² + 24y₃².

In terms of the original variables, we get

    f(x₁, x₂, x₃) = ((−4x₁ − 7x₂ + 2x₃)/√69)² + 4((−x₁ + 2x₂ + 5x₃)/√30)² + 24((13x₁ − 6x₂ + 5x₃)/√230)².

It is clear that f(x₁, x₂, x₃) is positive definite. Moreover, by carrying out the multiplications, it is easily seen that the original form is recovered. Further manipulation would also show that

    f(x₁, x₂, x₃) = 2(x₁ − x₂ + x₃)² + 3(2x₁ − x₂)² + 4(x₁ + x₃)²
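Positive definiteness follows from the eigenvalues alone; a numerical sketch (not part of the original notes) confirming both the eigenvalues and the sign of the form:

```python
import numpy as np

# coefficient matrix of Example 8.35
A = np.array([[18., -8., 6.],
              [-8., 5., -2.],
              [6., -2., 6.]])

lam = np.linalg.eigvalsh(A)
assert np.allclose(lam, [1., 4., 24.])

# all eigenvalues positive => the quadratic form is positive definite,
# so x^T A x > 0 for every nonzero x
rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.standard_normal(3)
    assert x @ A @ x > 0.
```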
8.9 Moore-Penrose inverse

We seek the Moore-Penrose⁵ inverse, A⁺_{m×n}, such that

    A_{n×m} · A⁺_{m×n} · A_{n×m} = A_{n×m}.  (8.65)

This will be achieved if we define

    A⁺_{m×n} = Q_{m×m} · B⁺_{m×n} · Q^H_{n×n}.  (8.66)

The matrix B⁺ is m×n with µ_i^{−1}, (i = 1, 2, . . . , r) in the first r positions on the main diagonal. The Moore-Penrose inverse, A⁺_{m×n}, is also known as the pseudoinverse. This is because in the special case in which n ≤ m and n = r it can be shown that

    A_{n×m} · A⁺_{m×n} = I_{n×n}.  (8.67)

Let's check this with our definitions for the case when n ≤ m, n = r:

    A_{n×m} · A⁺_{m×n} = (Q_{n×n} · B_{n×m} · Q^H_{m×m}) · (Q_{m×m} · B⁺_{m×n} · Q^H_{n×n}),  (8.68)
                       = Q_{n×n} · B_{n×m} · Q^{−1}_{m×m} · Q_{m×m} · B⁺_{m×n} · Q^H_{n×n},  (8.69)
                       = Q_{n×n} · B_{n×m} · B⁺_{m×n} · Q^H_{n×n},  (8.70)
                       = Q_{n×n} · I_{n×n} · Q^H_{n×n},  (8.71)
                       = Q_{n×n} · Q^H_{n×n},  (8.72)
                       = Q_{n×n} · Q^{−1}_{n×n},  (8.73)
                       = I_{n×n}.  (8.74)

We note for this special case that, precisely because of the way we defined B⁺, we have B_{n×m} · B⁺_{m×n} = I_{n×n}. When n > m, B_{n×m} · B⁺_{m×n} yields a matrix with r ones on the diagonal and zeros elsewhere.
Example 8.36
Find the Moore-Penrose inverse, A⁺_{3×2}, of A_{2×3} in the previous example:

    A_{2×3} = [ 1  −3  2 ]
              [ 2   0  3 ].

    A⁺_{3×2} = Q_{3×3} · B⁺_{3×2} · Q^H_{2×2},

    A⁺_{3×2} = [  0.452350  0.330059  −0.828517  ] [ 1/4.6385  0        ] [  0.728827  0.684698 ]
               [ −0.471378  0.877114   0.0920575 ] [ 0         1/2.3419 ] [ −0.684698  0.728827 ],
               [  0.757088  0.348902   0.552345  ] [ 0         0        ]

    A⁺_{3×2} = [ −0.0254237  0.169492 ]
               [ −0.330508   0.20339  ]
               [  0.0169492  0.220339 ].

Note that

    A_{2×3} · A⁺_{3×2} = [ 1  −3  2 ] [ −0.0254237  0.169492 ]   [ 1  0 ]
                         [ 2   0  3 ] [ −0.330508   0.20339  ] = [ 0  1 ].
                                      [  0.0169492  0.220339 ]

⁵ Roger Penrose, 1931, English mathematician. No information is available on Moore.
Example 8.37
Use the Moore-Penrose inverse to solve the problem A · x = b studied in an earlier example:

    [ 1  2 ] [ x₁ ]   [ 2 ]
    [ 3  6 ] [ x₂ ] = [ 0 ].

We first seek the singular value decomposition of A, A = Q₂ · B · Q₁^H. Now

    A^H · A = [ 1  3 ] [ 1  2 ]   [ 10  20 ]
              [ 2  6 ] [ 3  6 ] = [ 20  40 ].

The eigensystem with normalized eigenvectors corresponding to A^H · A is

    λ₁ = 50,  e₁ = (1/√5) [ 1 ],
                          [ 2 ]

    λ₂ = 0,   e₂ = (1/√5) [ −2 ],
                          [  1 ]

so

    Q₁ = (1/√5) [ 1  −2 ],   B = [ √50  0 ]
                [ 2   1 ]        [ 0    0 ],

so taking A · Q₁ = Q₂ · B gives

    [ 1  2 ] (1/√5) [ 1  −2 ]   [ q₁₁  q₁₂ ] [ √50  0 ]
    [ 3  6 ]        [ 2   1 ] = [ q₂₁  q₂₂ ] [ 0    0 ],

    √5 [ 1  0 ]   [ 5√2 q₁₁  0 ]
       [ 3  0 ] = [ 5√2 q₂₁  0 ].

Solving, we get

    [ q₁₁ ]          [ 1 ]
    [ q₂₁ ] = (1/√10)[ 3 ].

Imposing orthonormality to find q₁₂ and q₂₂, we get

    [ q₁₂ ]          [  3 ]
    [ q₂₂ ] = (1/√10)[ −1 ],

so

    Q₂ = (1/√10) [ 1   3 ],
                 [ 3  −1 ]

and

    A = Q₂ · B · Q₁^H = (1/√10) [ 1   3 ] [ 5√2  0 ] (1/√5) [  1  2 ]   [ 1  2 ]
                                [ 3  −1 ] [ 0    0 ]        [ −2  1 ] = [ 3  6 ].

Now the Moore-Penrose inverse is

    A⁺ = Q₁ · B⁺ · Q₂^H = (1/√5) [ 1  −2 ] [ 1/(5√2)  0 ] (1/√10) [ 1   3 ]          [ 1  3 ]
                                 [ 2   1 ] [ 0        0 ]         [ 3  −1 ] = (1/50) [ 2  6 ].

Direct multiplication shows that A · A⁺ ≠ I, but that A · A⁺ · A = A. This is a consequence of A not being a full rank matrix.

Lastly, applying the Moore-Penrose inverse operator to the vector b to form x = A⁺ · b, we get

    x = A⁺ · b = (1/50) [ 1  3 ] [ 2 ]          [ 1 ]
                        [ 2  6 ] [ 0 ] = (1/25) [ 2 ].

We see that the Moore-Penrose operator acting on b has yielded an x vector which is in the row space of A. As there is no right null space component, it is the minimum length vector that minimizes the error ||A · x − b||₂. It is fully consistent with the solution we found using Gaussian elimination in an earlier example.
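The pseudoinverse of this rank-deficient matrix is available directly from `numpy.linalg.pinv`; this sketch (not part of the original notes) reproduces the hand computation:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 6.]])
b = np.array([2., 0.])

Ap = np.linalg.pinv(A)
# for this rank-1 matrix, A+ = A^T / 50, matching the hand computation
assert np.allclose(Ap, np.array([[1., 3.], [2., 6.]]) / 50.)

# A A+ != I (A is rank deficient), yet A A+ A = A always holds
assert not np.allclose(A @ Ap, np.eye(2))
assert np.allclose(A @ Ap @ A, A)

# x = A+ b is the minimum-norm least squares solution
x = Ap @ b
assert np.allclose(x, np.array([1., 2.]) / 25.)
```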
Problems

1. Find the x with smallest ||x||₂ which minimizes ||A · x − b||₂ for

    A = [ 1   0  3 ],   b = [ 1 ]
        [ 2  −1  3 ]        [ 0 ]
        [ 3  −1  5 ]        [ 1 ].

2. Find the most general x which minimizes ||A · x − b||₂ for

    A = [ 1   0 ],   b = [ 1 ]
        [ 2  −1 ]        [ 0 ]
        [ 3  −2 ]        [ 1 ].

3. Find x with the smallest ||x||₂ which minimizes ||A · x − b||₂ for

    A = [ 1  0  2   4 ],   b = [ 2 ]
        [ 1  0  2  −1 ]        [ 1 ]
        [ 2  1  3  −2 ]        [ 3 ].

4. Find e^A if

    A = [ 1  1  1 ]
        [ 0  3  2 ]
        [ 0  0  5 ].

5. Diagonalize or reduce to Jordan canonical form

    A = [ 5  2  −1 ]
        [ 0  5   1 ]
        [ 0  0   5 ].

6. Find the eigenvectors and generalized eigenvectors of

    A = [ 1  1  1  1 ]
        [ 0  1  1  1 ]
        [ 0  0  0  1 ]
        [ 0  0  0  0 ].

7. Decompose A into Jordan form S · J · S^{−1}, P^{−1} · L · D · U, Q · R, Schur form, and Hessenberg form

    A = [ 0  1  0  1 ]
        [ 1  0  1  0 ]
        [ 0  1  0  1 ]
        [ 1  0  1  0 ].

8. Find the matrix S that will convert the following to the Jordan canonical form

    (a) [  6  −1  −3   1 ],   (b) [  8  −2  −2   0 ],
        [ −1   6   1  −3 ]       [  0   6   2  −4 ]
        [ −3   1   6  −1 ]       [ −2   0   8  −2 ]
        [  1  −3  −1   6 ]       [  2  −4   0   6 ]

and show the Jordan canonical form.

9. Show that the eigenvectors and generalized eigenvectors of

    [ 1  1  2  0 ]
    [ 0  1  3  0 ]
    [ 0  0  2  2 ]
    [ 0  0  0  1 ]

span the space.

10. Find the projection matrix onto the space spanned by (1, 1, 1) and (1, 2, 3).

11. Reduce 4x² + 4y² + 2z² − 4xy + 4yz + 4zx to standard quadratic form.

12. Find the inverse of

    [ 1/4  1/2  3/4 ]
    [ 3/4  1/2  1/4 ]
    [ 1/4  1/2  1/2 ].

13. Find exp [ 0  0  i ]
             [ 0  1  0 ]
             [ 1  0  0 ].

14. Find the nth power of [ 1  3 ]
                          [ 3  1 ].

15. If

    A = [ 5  4 ],
        [ 1  2 ]

find a matrix S such that S^{−1} · A · S is a diagonal matrix. Show by multiplication that it is indeed diagonal.

16. Determine if A = [ 6  2 ; −2  1 ] and B = [ 8  6 ; −3  −1 ] are similar.

17. Find the eigenvalues, eigenvectors, and the matrix S such that S^{−1} · A · S is diagonal or of Jordan form, where A is

    (a) [ 5  0   0 ],   (b) [ −2  0    2 ],   (c) [  3  0  −1  ].
        [ 1  0   1 ]       [  2  1    0 ]        [ −1  2   2i ]
        [ 0  0  −2 ]       [  0  0  −2i ]        [  1  0  1+i ]

18. Put each of the matrices above in L · D · U form.

19. Put each of the matrices above in Q · R form.

20. Put each of the matrices above in Schur form.

21. Let

    A = [ 1  1  2 ].
        [ 0  1  1 ]
        [ 0  0  1 ]

Find S such that S^{−1} · A · S = J, where J is of the Jordan form. Show by multiplication that A · S = S · J.

22. Show that

    e^A = [  cos(1)  sin(1) ],
          [ −sin(1)  cos(1) ]

if

    A = [  0  1 ].
        [ −1  0 ]

23. Write A in echelon form

    A = [ 0   0  1  0 ].
        [ 2  −2  0  0 ]
        [ 1   0  1  2 ]

24. Show that the function

    f(x, y, z) = x² + y² + z² + yz − zx − xy,

is always nonnegative.

25. If A : ℓ₂² → ℓ₂², find ||A|| when

    A = [ 1  −1 ].
        [ 1   1 ]

Also find its inverse and adjoint.

26. Is the quadratic form

    f(x₁, x₂, x₃) = 4x₁² + 2x₁x₂ + 4x₁x₃,

positive definite?

27. Find the Schur decomposition and L · D · L^T of A:

    A = [ 0   0   0  0 ].
        [ 0   1  −3  0 ]
        [ 0  −3   1  0 ]
        [ 0   0   0  0 ]

28. Find the x with minimum ||x||₂ which minimizes ||A · x − b||₂ in the following problems:

    (a) A = [ −5  1  2 ],   b = [ 1 ],
            [  2  0  1 ]        [ 3 ]
            [ −9  1  0 ]        [ 2 ]

    (b) A = [ 4  3  2   5  6 ],   b = [ 0 ].
            [ 7  2  1  −3  5 ]        [ 2 ]
            [ 1  4  3  13  7 ]        [ 1 ]

29. In each of the problems above, find the right null space and show the most general solution vector can be represented as a linear combination of a unique vector in the row space of A plus an arbitrary scalar multiple of the right null space of A.

30. An experiment yields the following data:

    t     x
    0.00  1.001
    0.10  1.099
    0.24  1.240
    0.70  1.604
    0.90  1.781
    1.50  2.020
    2.60  1.512
    3.00  1.141

We have ten times as much confidence in the first three data points as we do in all the others. Find the least squares best fit coefficients a, b, and c if the assumed functional form is

    (a) x = a + bt + ct²,
    (b) x = a + b sin t + c sin 2t.

Plot on a single graph the data points and the two best fit estimates. Which best fit estimate has the smallest least squares error?

31. For

    A = [ 12  4  −2  −1 ],
        [  6  8  −2   9 ]
        [ −3  2   0   5 ]

(a) find the P^{−1} · L · D · U decomposition, (b) find the singular values and the singular value decomposition.

32. For the complex matrix A find the eigenvectors and eigenvalues, demonstrate whether or not the eigenvectors are orthogonal, and find (if possible) the matrix S such that S^{−1} · A · S is of Jordan form if

    A = [ 3+i  1 ],   A = [  3   3i  1+i ].
        [ 1    2 ]        [ −3i  3   3   ]
                          [ 1−i  3  −2   ]
Chapter 9
Dynamical systems
see Kaplan, Chapter 9,
see Drazin,
see Lopez, Chapter 12.
In this chapter we consider the evolution of systems, often called dynamic systems. Generally,
we will be concerned with systems which can be described by sets of ordinary diﬀerential
equations, both linear and nonlinear. Some other classes of systems will also be studied.
9.1 Paradigm problems
We ﬁrst consider some paradigm problems which will illustrate the techniques used to solve
nonlinear systems of ordinary diﬀerential equations. Systems of equations are typically more
complicated than scalar diﬀerential equations. The fundamental procedure for analyzing
systems of nonlinear ordinary diﬀerential equations is to
• Cast the system into a standard form.
• Identify the equilibria of the system.
• If possible, linearize the system about its equilibria.
• If linearizable, ascertain the stability of the linearized system to small disturbances.
• If not linearizable, attempt to ascertain the stability of the nonlinear system near its
equilibria.
• Solve the full nonlinear system.
9.1.1 Autonomous example
First consider a simple example of what is known as an autonomous system. An autonomous
system of ordinary diﬀerential equations can be written in the form
    dx/dt = f(x).  (9.1)
Notice that the independent variable t does not appear explicitly.

Example 9.1
For x ∈ R², t ∈ R¹, f : R² → R², consider

    dx₁/dt = x₂ − x₁² = f₁(x₁, x₂),
    dx₂/dt = x₂ − x₁ = f₂(x₁, x₂).

The curves defined in the (x₁, x₂) plane by f₁ = 0 and f₂ = 0 are very useful in determining both the fixed points (found at their intersection) and the behavior of the system of differential equations. In fact one can sketch trajectories in this phase space by inspection in many cases. The loci of points where f₁ = 0 and f₂ = 0 are plotted in Figure 9.1. The zeroes are found at (x₁, x₂) = (0, 0), (1, 1).

[Figure 9.1: Phase plane for dx₁/dt = x₂ − x₁², dx₂/dt = x₂ − x₁, along with equilibrium points (0, 0) (spiral source) and (1, 1) (saddle), separatrices x₂ − x₁² = 0, x₂ − x₁ = 0, solution trajectories, and corresponding vector field.]

Linearize about both points to find the local behavior of the solution near these points. Near (0, 0), the linearization is

    dx₁/dt = x₂,
    dx₂/dt = x₂ − x₁,

or

    d/dt [ x₁ ]   [  0  1 ] [ x₁ ]
         [ x₂ ] = [ −1  1 ] [ x₂ ].

This is of the form

    dx/dt = A · x.

And with

    P · z ≡ x,

where P is a constant matrix, we get

    d/dt (P · z) = P · dz/dt = A · P · z,
    dz/dt = P^{−1} · A · P · z.

At this point we assume that A has distinct eigenvalues and linearly independent eigenvectors; other cases are easily handled. If we choose P such that its columns contain the eigenvectors of A, we will get a diagonal matrix, which will lead to a set of uncoupled differential equations; each of these can be solved individually. So for our A, standard linear algebra gives

    P = [ 1/2 + (√3/2)i  1/2 − (√3/2)i ],   P^{−1} = [ −i/√3  1/2 + (√3/6)i ]
        [ 1              1             ]             [  i/√3  1/2 − (√3/6)i ].

With this choice we get the eigenvalue matrix

    P^{−1} · A · P = [ 1/2 − (√3/2)i  0             ]
                     [ 0              1/2 + (√3/2)i ].

So we get two uncoupled equations for z:

    dz₁/dt = (1/2 − (√3/2)i) z₁,
    dz₂/dt = (1/2 + (√3/2)i) z₂,

which have solutions

    z₁ = c₁ exp((1/2 − (√3/2)i) t),
    z₂ = c₂ exp((1/2 + (√3/2)i) t).

Then we form x by taking x = P · z, so that

    x₁ = c₁ (1/2 + (√3/2)i) exp((1/2 − (√3/2)i) t) + c₂ (1/2 − (√3/2)i) exp((1/2 + (√3/2)i) t),
    x₂ = c₁ exp((1/2 − (√3/2)i) t) + c₂ exp((1/2 + (√3/2)i) t).

Since there is a positive real coefficient in the exponential terms, both x₁ and x₂ grow exponentially. The imaginary component indicates that this is an oscillatory growth. Hence, there is no tendency for a solution which is initially close to (0, 0) to remain there. So the fixed point is unstable.

Consider the next fixed point near (1, 1). First define a new set of local variables:

    x̃₁ = x₁ − 1,
    x̃₂ = x₂ − 1.

Then

    dx₁/dt = dx̃₁/dt = (x̃₂ + 1) − (x̃₁ + 1)²,
    dx₂/dt = dx̃₂/dt = (x̃₂ + 1) − (x̃₁ + 1).

Expanding, we get

    dx̃₁/dt = (x̃₂ + 1) − x̃₁² − 2x̃₁ − 1,
    dx̃₂/dt = (x̃₂ + 1) − (x̃₁ + 1).

Linearizing about (x̃₁, x̃₂) = (0, 0), we find

    dx̃₁/dt = x̃₂ − 2x̃₁,
    dx̃₂/dt = x̃₂ − x̃₁,

or

    d/dt [ x̃₁ ]   [ −2  1 ] [ x̃₁ ]
         [ x̃₂ ] = [ −1  1 ] [ x̃₂ ].

Going through an essentially identical exercise gives the eigenvalues

    λ₁ = −1/2 + √5/2 > 0,
    λ₂ = −1/2 − √5/2 < 0,

which in itself shows the solution to be essentially unstable, since there is a positive eigenvalue. After the usual linear algebra and back transformations, one obtains the local solution:

    x₁ = 1 + c₁ ((3 − √5)/2) exp((−1/2 + √5/2) t) + c₂ ((3 + √5)/2) exp((−1/2 − √5/2) t),
    x₂ = 1 + c₁ exp((−1/2 + √5/2) t) + c₂ exp((−1/2 − √5/2) t).

Note that while this solution is generally unstable, if one has the special case in which c₁ = 0, the fixed point in fact is stable. Such is characteristic of a saddle node.
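The classification of the two fixed points comes entirely from the eigenvalues of the Jacobian of f; this sketch (not part of the original notes) evaluates the Jacobian of Example 9.1 at both equilibria:

```python
import numpy as np

# Jacobian of f = (x2 - x1^2, x2 - x1) from Example 9.1
def jacobian(x1, x2):
    return np.array([[-2. * x1, 1.],
                     [-1., 1.]])

# near (0, 0): complex eigenvalues (1 +- sqrt(3) i)/2 with positive
# real part -> oscillatory exponential growth, an unstable spiral
lam0 = np.linalg.eigvals(jacobian(0., 0.))
assert np.allclose(sorted(lam0.real), [0.5, 0.5])
assert np.all(lam0.real > 0)

# near (1, 1): real eigenvalues (-1 +- sqrt(5))/2 of opposite sign -> saddle
lam1 = np.sort(np.linalg.eigvals(jacobian(1., 1.)).real)
assert np.allclose(lam1, [(-1. - np.sqrt(5.)) / 2., (-1. + np.sqrt(5.)) / 2.])
assert lam1[0] < 0 < lam1[1]
```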
9.1.2 Nonautonomous example

Next consider a more complicated example. Among other things, the system as originally cast is nonautonomous in that the independent variable t appears explicitly. Additionally, it is coupled and contains hidden singularities. Some operations are necessary in order to cast the system in standard form.

Example 9.2
For x ∈ R², t ∈ R¹, f : R² × R¹ → R², analyze

    t dx₁/dt + x₂x₁ dx₂/dt = x₁ + t = f₁(x₁, x₂, t),
    x₁ dx₁/dt + x₂² dx₂/dt = x₁ t = f₂(x₁, x₂, t),
    x₁(0) = x₁₀, x₂(0) = x₂₀.

Let

    dt/ds = 1, t(0) = 0,

and further y₁ = x₁, y₂ = x₂, y₃ = t. Then with s ∈ R¹, y ∈ R³, g : R³ → R³,

    y₃ dy₁/ds + y₂y₁ dy₂/ds = y₁ + y₃ = g₁(y₁, y₂, y₃),
    y₁ dy₁/ds + y₂² dy₂/ds = y₁y₃ = g₂(y₁, y₂, y₃),
    dy₃/ds = 1 = g₃(y₁, y₂, y₃),
    y₁(0) = y₁₀, y₂(0) = y₂₀, y₃(0) = 0.

In matrix form, we have

    [ y₃  y₂y₁  0 ] [ dy₁/ds ]   [ y₁ + y₃ ]
    [ y₁  y₂²   0 ] [ dy₂/ds ] = [ y₁y₃    ]
    [ 0   0     1 ] [ dy₃/ds ]   [ 1       ].

Inverting the coefficient matrix, we obtain the following equation, which is in autonomous form:

    d/ds [ y₁ ]   [ (y₁y₂ − y₁²y₃ + y₂y₃) / (y₂y₃ − y₁²)     ]   [ h₁(y₁, y₂, y₃) ]
         [ y₂ ] = [ y₁(y₃² − y₁ − y₃) / (y₂(y₂y₃ − y₁²))     ] = [ h₂(y₁, y₂, y₃) ]
         [ y₃ ]   [ 1                                        ]   [ h₃(y₁, y₂, y₃) ].

There are potential singularities at y₂ = 0 and y₂y₃ = y₁². These can be addressed by defining a new independent variable u ∈ R¹ via the equation

    ds/du = y₂ (y₂y₃ − y₁²).

The system of equations then transforms to

    d/du [ y₁ ]   [ y₂ (y₁y₂ − y₁²y₃ + y₂y₃) ]   [ p₁(y₁, y₂, y₃) ]
         [ y₂ ] = [ y₁ (y₃² − y₁ − y₃)       ] = [ p₂(y₁, y₂, y₃) ]
         [ y₃ ]   [ y₂ (y₂y₃ − y₁²)          ]   [ p₃(y₁, y₂, y₃) ].

This equation actually has an infinite number of fixed points, all of which lie on a line in the three dimensional phase volume. The line is given parametrically by (y₁, y₂, y₃)^T = (0, 0, v)^T, v ∈ R¹. Here v is just a parameter used in describing the line of fixed points. However, it turns out in this case that the Taylor series expansions yield no linear contribution at any of the fixed points, so we don't get to use the standard linear analysis technique! The problem has an essential nonlinear character, even near fixed points. More potent methods would need to be employed, but the example demonstrates the principle. Figure 9.2 gives a numerically obtained solution for y₁(u), y₂(u), y₃(u) along with a trajectory in (y₁, y₂, y₃) space when y₁(0) = 1, y₂(0) = −1, y₃(0) = 0. This corresponds to x₁(t = 0) = 1, x₂(t = 0) = −1.

[Figure 9.2: Solutions for one set of initial conditions, y₁(0) = 1, y₂(0) = −1, y₃(0) = 0, for the second paradigm example: trajectory in phase volume (y₁, y₂, y₃); also y₁(u), y₂(u), y₃(u) and x₁(t), x₂(t). Here y₁ = x₁, y₂ = x₂, y₃ = t.]

We note that while the solutions are monotonic in the variable u, they are not monotonic in t after the transformation back to x₁(t), x₂(t) is effected. Also, while it appears there are points (u = 0.38, u = 0.84, u = 1.07) where the derivatives dy₁/du, dy₂/du, dy₃/du become unbounded, closer inspection reveals that they are simply points of steep, but bounded, derivatives. However, at points where the slope dy₃/du = dt/du changes sign, the derivatives dx₁/dt, dx₂/dt formally are infinite, as is reflected in the cyclic behavior exhibited in the plots of x₁ versus t or x₂ versus t.
9.2 General theory

Consider x ∈ Rⁿ, t ∈ R¹, A : Rⁿ × R¹ → Rⁿ × Rⁿ, f : Rⁿ × R¹ → Rⁿ. A quasilinear problem of the form

    A(x, t) dx/dt = f(x, t),  x(0) = x₀,  (9.2)

can be reduced to the following form, known as autonomous, in the following manner. With

    x = (x₁, . . . , xₙ)^T,

    A(x, t) = [ a₁₁(x, t)  . . .  a₁ₙ(x, t) ],   f(x, t) = [ f₁(x₁, . . . , xₙ, t) ]
              [ ⋮                 ⋮         ]              [ ⋮                     ]
              [ aₙ₁(x, t)  . . .  aₙₙ(x, t) ]              [ fₙ(x₁, . . . , xₙ, t) ],

define s ∈ R¹ such that

    dt/ds = 1,  t(0) = 0.  (9.3)

Then define y ∈ R^{n+1}, B : R^{n+1} → R^{n+1} × R^{n+1}, g : R^{n+1} → R^{n+1}, such that, along with s ∈ R¹,

    y = (y₁, . . . , yₙ, y_{n+1})^T = (x₁, . . . , xₙ, t)^T,  (9.4)

    B(y) = [ a₁₁(y)  . . .  a₁ₙ(y)  0 ],  (9.5)
           [ ⋮              ⋮       ⋮ ]
           [ aₙ₁(y)  . . .  aₙₙ(y)  0 ]
           [ 0       . . .  0       1 ]

    g(y) = [ g₁(y₁, . . . , y_{n+1})     ]   [ f₁(x₁, . . . , xₙ, t) ]
           [ ⋮                           ]   [ ⋮                     ]
           [ gₙ(y₁, . . . , y_{n+1})     ] = [ fₙ(x₁, . . . , xₙ, t) ]
           [ g_{n+1}(y₁, . . . , y_{n+1})]   [ 1                     ].  (9.6)

The original equation then is of the form

    B(y) dy/ds = g(y).  (9.7)

By forming B^{−1}, it can be written as

    dy/ds = B^{−1}(y) · g(y),  (9.8)

or, by taking

    B^{−1}(y) · g(y) ≡ h(y),  (9.9)

we get the form, commonly called autonomous form, with s ∈ R¹, y ∈ R^{n+1}, h : R^{n+1} → R^{n+1}:

    dy/ds = h(y).  (9.10)

Sometimes h has singularities. If the source of the singularity can be identified, a singularity-free autonomous set of equations can often be written. For example, suppose h can be rewritten as

    h(y) = p(y)/q(y),  (9.11)

where p and q have no singularities. Then we can remove the singularity by introducing the new independent variable u ∈ R¹ such that

    ds/du = q(y).  (9.12)

Using the chain rule, the system then becomes

    dy/ds = p(y)/q(y),  (9.13)
    (ds/du)(dy/ds) = q(y) · p(y)/q(y),  (9.14)
    dy/du = p(y),  (9.15)

which has no singularities.

Casting ordinary differential equation systems in autonomous form is the starting point for most problems and most theoretical development. The task from here generally proceeds as follows:

• Find all the zeroes of h. This is an algebra problem, which can be topologically difficult for nonlinear problems.
• If h has any singularities, redefine variables in the manner demonstrated to remove the singularity.
• If possible, linearize h (or its equivalent) about each of its zeroes.
• Perform a local analysis of the system of differential equations near zeroes.
• If the system is linear, an eigenvalue analysis is sufficient to reveal stability; for nonlinear systems, the situation is not always straightforward.
9.3 Iterated maps
A map f : R
n
→R
n
can be iterated to give a dynamical system of the form
x
k+1
i
= f
i
(x
k
1
, x
k
2
, , x
k
n
), i = 1, , n. (9.17)
Given an initial point x_i^0, (i = 1, …, n) in R^n, a series of images x_i^1, x_i^2, x_i^3, … can be found as k = 0, 1, 2, …. The map is dissipative or conservative according to whether the diameter of a set is larger than that of its image or the same, respectively, i.e. whether the determinant of the Jacobian matrix ∂f_i/∂x_j is < 1 or = 1.
The point x_i = x̄_i is a fixed point of the map if it maps to itself, i.e. if

x̄_i = f_i(x̄_1, x̄_2, …, x̄_n), i = 1, …, n. (9.18)

The fixed point x_i = 0 is linearly unstable if a small perturbation from it leads the images farther and farther away. Otherwise it is stable. A special case of this is asymptotic stability, wherein the image returns arbitrarily close to the fixed point.
A linear map can be written as x_i^{k+1} = Σ_{j=1}^n A_{ij} x_j^k, (i = 1, 2, …, n), or x^{k+1} = A x^k. The origin x = 0 is a fixed point of this map. If ||A|| > 1, then ||x^{k+1}|| > ||x^k|| and the map is unstable. Otherwise it is stable.
Example 9.3
Examine the linear stability of the fixed points of the logistic map

x^{k+1} = r x^k (1 − x^k).

We take r ∈ [0, 4] so that x^k ∈ [0, 1] maps onto x^{k+1} ∈ [0, 1]. That is, the mapping is onto itself. The fixed points are solutions of

x = r x (1 − x),

which are

x = 0, x = 1 − 1/r.

Consider the mapping itself. For an initial seed x^0, we generate a series of x^k. For example, if we take r = 0.4 and x^0 = 0.3, we get

x^0 = 0.3,
x^1 = 0.4(0.3)(1 − 0.3) = 0.084,
x^2 = 0.4(0.084)(1 − 0.084) = 0.0307776,
x^3 = 0.4(0.0307776)(1 − 0.0307776) = 0.0119321,
x^4 = 0.4(0.0119321)(1 − 0.0119321) = 0.0047159,
x^5 = 0.4(0.0047159)(1 − 0.0047159) = 0.00187747,
⋮
x^∞ = 0.
For this value of r, the solution approaches the fixed point of 0. Consider r = 4/3 and x^0 = 0.3:

x^0 = 0.3,
x^1 = (4/3)(0.3)(1 − 0.3) = 0.28,
x^2 = (4/3)(0.28)(1 − 0.28) = 0.2688,
x^3 = (4/3)(0.2688)(1 − 0.2688) = 0.262062,
x^4 = (4/3)(0.262062)(1 − 0.262062) = 0.257847,
x^5 = (4/3)(0.257847)(1 − 0.257847) = 0.255149,
⋮
x^∞ = 0.250 = 1 − 1/r.

In this case the solution was attracted to the alternate fixed point.
To analyze the stability of each fixed point, we give it a small perturbation x′. Thus x̄ + x′ is mapped to x̄ + x′′, where

x̄ + x′′ = r(x̄ + x′)(1 − x̄ − x′) = r(x̄ − x̄² + x′ − 2x̄x′ − x′²).

Neglecting small terms, we get

x̄ + x′′ = r(x̄ − x̄² + x′ − 2x̄x′) = rx̄(1 − x̄) + rx′(1 − 2x̄).

Simplifying, we get

x′′ = rx′(1 − 2x̄).

A fixed point is stable if |x′′/x′| ≤ 1. This indicates that the perturbation is decaying. Now consider each fixed point in turn.

x̄ = 0:

x′′ = rx′(1 − 2(0)),
x′′ = rx′,
x′′/x′ = r.

This is stable if r < 1.

x̄ = 1 − 1/r:

x′′ = rx′(1 − 2(1 − 1/r)),
x′′ = (2 − r)x′,
|x′′/x′| = |2 − r|.

This is unstable for r < 1, stable for 1 ≤ r ≤ 3, unstable for r > 3.
What happens to the map for r > 3? Consider r = 3.2 and x^0 = 0.3:

x^0 = 0.3,
x^1 = 3.2(0.3)(1 − 0.3) = 0.672,
x^2 = 3.2(0.672)(1 − 0.672) = 0.705331,
x^3 = 3.2(0.705331)(1 − 0.705331) = 0.665085,
x^4 = 3.2(0.665085)(1 − 0.665085) = 0.71279,
x^5 = 3.2(0.71279)(1 − 0.71279) = 0.655105,
x^6 = 3.2(0.655105)(1 − 0.655105) = 0.723016,
x^7 = 3.2(0.723016)(1 − 0.723016) = 0.640845,
x^8 = 3.2(0.640845)(1 − 0.640845) = 0.736521,
⋮
x^{∞−1} = 0.799455,
x^∞ = 0.513045.

This system has bifurcated. It oscillates between two points, never settling onto either fixed point. The two points about which it oscillates are quite constant for this value of r. For greater values of r, the system moves between 4, 8, 16, … points. Such is the essence of bifurcation phenomena. A coarse-grained plot of the equilibrium values of x as a function of r is given in Figure 9.3. Finer-grained plots reveal many interesting structures which are discussed in standard texts.
Figure 9.3: Coarse-grained plot of x^k as k → ∞ as a function of r for the logistic map, x^{k+1} = r x^k (1 − x^k), for r ∈ [0, 4].
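The iterations tabulated in Example 9.3 are easy to reproduce by machine; the following sketch (iteration counts are arbitrary choices) iterates the logistic map for the three values of r discussed and exhibits the decay to 0, the approach to 1 − 1/r, and the period-2 oscillation:

```python
def logistic_iterates(r, x0, nmax):
    # iterate x_{k+1} = r x_k (1 - x_k), returning the whole orbit
    x = x0
    xs = [x]
    for _ in range(nmax):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

xs_a = logistic_iterates(0.4, 0.3, 200)         # decays to the fixed point x = 0
xs_b = logistic_iterates(4.0 / 3.0, 0.3, 2000)  # approaches x = 1 - 1/r = 0.25
xs_c = logistic_iterates(3.2, 0.3, 2000)        # settles onto a period-2 cycle
```

For r = 3.2 the last iterates alternate between approximately 0.513 and 0.799, the two points of the cycle noted above.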
Other maps that have been studied are:

1. Hénon map:

x^{k+1} = y^k + 1 − a(x^k)²,
y^{k+1} = b x^k.

For a = 1.3, b = 0.34, the attractor is periodic, while for a = 1.4, b = 0.34, the map has a strange attractor.

2. Dissipative standard map:

x^{k+1} = x^k + y^{k+1} mod 2π,
y^{k+1} = λ y^k + k sin x^k.

If λ = 1 the map is area-preserving.
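The Hénon map can be iterated in the same way as the logistic map; this brief sketch (seed and step count are arbitrary choices) takes two steps from the origin and notes why the map is dissipative in the sense of Section 9.3:

```python
def henon(x, y, a, b):
    # one step of the Henon map
    return y + 1.0 - a * x * x, b * x

def iterate(a, b, n, x=0.0, y=0.0):
    for _ in range(n):
        x, y = henon(x, y, a, b)
    return x, y

# The Jacobian of the map is [[-2 a x, 1], [b, 0]], so its determinant is -b
# everywhere; |det| = b < 1 means areas contract, i.e. the map is dissipative.
x2, y2 = iterate(1.4, 0.34, 2)   # two steps from the origin: (1, 0) then (-0.4, 0.34)
```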
9.4 High order scalar differential equations

An equation with x ∈ R^1, t ∈ R^1, a : R^1 × R^1 → R^n, f : R^1 → R^1 of the form

d^n x/dt^n + a_n(x, t) d^{n−1}x/dt^{n−1} + ⋯ + a_2(x, t) dx/dt + a_1(x, t) x = f(t), (9.19)

can be expressed as a system of n + 1 first order autonomous equations. Let x = y_1, dx/dt = y_2, …, d^{n−1}x/dt^{n−1} = y_n, t = y_{n+1}. Then with y ∈ R^{n+1}, s = t ∈ R^1,

dy_1/ds = y_2, (9.20)
dy_2/ds = y_3, (9.21)
⋮ (9.22)
dy_{n−1}/ds = y_n, (9.23)
dy_n/ds = −a_n(y_1, y_{n+1}) y_n − a_{n−1}(y_1, y_{n+1}) y_{n−1} − ⋯ − a_1(y_1, y_{n+1}) y_1 + f(y_{n+1}), (9.24)
dy_{n+1}/ds = 1. (9.25)
Example 9.4
For x ∈ R^1, t ∈ R^1, consider the forced Duffing equation:

d²x/dt² + x + x³ = sin(2t), x(0) = 0, dx/dt|_{t=0} = 0.

Here a_2(x, t) = 0, a_1(x, t) = 1 + x², f(t) = sin(2t). Now this nonlinear differential equation with homogeneous boundary conditions and forcing has no analytic solution. It can be solved numerically; most solution techniques require a recasting as a system of first order equations. To recast this as an autonomous set of equations, with y ∈ R^3, s ∈ R^1, consider

x = y_1, dx/dt = y_2, t = s = y_3.

Then d/dt = d/ds, and the equations transform to

d/ds ( y_1, y_2, y_3 )^T = ( y_2, −y_1 − y_1³ + sin(2y_3), 1 )^T = ( h_1(y_1, y_2, y_3), h_2(y_1, y_2, y_3), h_3(y_1, y_2, y_3) )^T,

( y_1(0), y_2(0), y_3(0) )^T = ( 0, 0, 0 )^T.

Note that this system has no equilibrium point, as there exists no y for which h = 0 (the third component, h_3 = 1, never vanishes). Once the numerical solution is obtained, one transforms back to x, t space. Figure 9.4 gives the trajectory in (y_1, y_2, y_3) phase space, and a plot of the corresponding solution x(t) for t ∈ [0, 50].
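The numerical solution described in Example 9.4 can be sketched with a classical fourth-order Runge-Kutta march applied to the autonomous system; the step size and step count below are arbitrary choices, not from the text:

```python
import math

def h(y):
    # autonomous form of the forced Duffing system from Example 9.4
    y1, y2, y3 = y
    return [y2, -y1 - y1**3 + math.sin(2.0 * y3), 1.0]

def rk4(f, y, dt, nsteps):
    # classical fourth-order Runge-Kutta march
    for _ in range(nsteps):
        k1 = f(y)
        k2 = f([yi + 0.5 * dt * ki for yi, ki in zip(y, k1)])
        k3 = f([yi + 0.5 * dt * ki for yi, ki in zip(y, k2)])
        k4 = f([yi + dt * ki for yi, ki in zip(y, k3)])
        y = [yi + dt / 6.0 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
    return y

y_end = rk4(h, [0.0, 0.0, 0.0], 0.01, 5000)   # integrate to s = t = 50
```

Since dy_3/ds = 1 exactly, the third component simply tracks time, confirming the recasting; the first component y_1 recovers x(t).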
9.5 Linear systems

For a linear system the coefficients a_n, …, a_2, a_1 in equation (9.19) are independent of x. In general, for x ∈ R^n, t ∈ R^1, A : R^1 → R^n × R^n, f : R^1 → R^n, any linear system may be written in matrix form as

dx/dt = A(t) x + f(t), (9.26)

where

x = ( x_1(t), x_2(t), …, x_n(t) )^T, (9.27)
Figure 9.4: Phase space trajectory and solution x(t) for the forced Duffing equation.
A = [ a_{11}(t)  a_{12}(t)  ⋯  a_{1n}(t)
      a_{21}(t)  a_{22}(t)  ⋯  a_{2n}(t)
         ⋮          ⋮       ⋱      ⋮
      a_{n1}(t)  a_{n2}(t)  ⋯  a_{nn}(t) ], (9.28)

f = ( f_1(t), f_2(t), …, f_n(t) )^T. (9.29)

Here A and f are known. The solution can be written as x = x_H + x_P, where x_H is the solution to the homogeneous equation, and x_P is the particular solution.
9.5.1 Homogeneous equations with constant A

For x ∈ R^n, t ∈ R^1, A ∈ R^n × R^n, the solution of the homogeneous equation

dx/dt = A x, (9.30)

where A is a matrix of constants, is obtained by setting

x = c e^{λt}, (9.31)

with c ∈ R^n. Substituting into the equation, we get

λ c e^{λt} = A c e^{λt}, (9.32)
λ c = A c. (9.33)

This is an eigenvalue problem where λ is an eigenvalue and c is an eigenvector. In this case there is only one fixed point, namely the null vector: x = 0.
9.5.1.1 n eigenvectors

We will assume that there is a full set of eigenvectors even though not all the eigenvalues are distinct. If e_1, e_2, …, e_n are the eigenvectors corresponding to eigenvalues λ_1, λ_2, …, λ_n, then

x = Σ_{i=1}^n c_i e_i e^{λ_i t}, (9.34)

is the general solution, where c_1, c_2, …, c_n are arbitrary constants.
Example 9.5
For x ∈ R^3, t ∈ R^1, A ∈ R^3 × R^3, solve dx/dt = A x where

A = [ 1  −1   4
      3   2  −1
      2   1  −1 ].

The eigenvalues and eigenvectors are

λ_1 = 1,  e_1 = (−1, 4, 1)^T,
λ_2 = 3,  e_2 = (1, 2, 1)^T,
λ_3 = −2, e_3 = (−1, 1, 1)^T.

Thus the solution is

x = c_1 (−1, 4, 1)^T e^t + c_2 (1, 2, 1)^T e^{3t} + c_3 (−1, 1, 1)^T e^{−2t},

or, expanding,

x_1(t) = −c_1 e^t + c_2 e^{3t} − c_3 e^{−2t},
x_2(t) = 4c_1 e^t + 2c_2 e^{3t} + c_3 e^{−2t},
x_3(t) = c_1 e^t + c_2 e^{3t} + c_3 e^{−2t}.
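The eigenvalues and eigenvectors of Example 9.5 are easy to confirm numerically (note that numpy normalizes its eigenvectors, so they match the ones above only up to scaling):

```python
import numpy as np

A = np.array([[1.0, -1.0,  4.0],
              [3.0,  2.0, -1.0],
              [2.0,  1.0, -1.0]])

lam, V = np.linalg.eig(A)        # columns of V are eigenvectors
lam_sorted = np.sort(lam.real)   # eigenvalues in ascending order: -2, 1, 3
```

Checking A V = V diag(λ) column by column confirms each eigenpair.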
Example 9.6
For x ∈ R^3, t ∈ R^1, A ∈ R^3 × R^3, solve dx/dt = A x where

A = [ 2  −1  −1
      2   1  −1
      0  −1   1 ].

The eigenvalues and eigenvectors are

λ_1 = 2,     e_1 = (0, 1, −1)^T,
λ_2 = 1 + i, e_2 = (1, −i, 1)^T,
λ_3 = 1 − i, e_3 = (1, i, 1)^T.

Thus the solution is

x = c_1 (0, 1, −1)^T e^{2t} + c_2 (1, −i, 1)^T e^{(1+i)t} + c_3 (1, i, 1)^T e^{(1−i)t},
  = c_1 (0, 1, −1)^T e^{2t} + c′_2 (cos t, sin t, cos t)^T e^t + c′_3 (sin t, −cos t, sin t)^T e^t,

where c′_2 = c_2 + c_3, c′_3 = i(c_2 − c_3).
9.5.1.2 < n eigenvectors

One solution of dx/dt = A x is x = e^{At} c, where c is a constant vector. If c_1, c_2, …, c_n are linearly independent vectors, then x_i = e^{At} c_i, i = 1, …, n, are linearly independent solutions. We would like to choose c_i, i = 1, 2, …, n, such that each e^{At} c_i is a series with a finite number of terms. This can be done in the following manner. Since

e^{At} c = e^{λIt} e^{(A−λI)t} c,
         = e^{λt} e^{(A−λI)t} c,
         = e^{λt} ( I + (A−λI)t + (1/2!)(A−λI)² t² + ⋯ ) c,

the series will be finite if

(A−λI)^k c = 0,

for some positive integer k.
9.5.1.3 Summary of method

The procedure to find x_i, (i = 1, 2, …, n), the n linearly independent solutions of

dx/dt = A x, (9.35)

where A is constant, is the following. First find all eigenvalues λ_i, i = 1, …, n, and as many eigenvectors e_i, i = 1, 2, …, k, as possible.

1. If k = n, the n linearly independent solutions are x_i = e^{λ_i t} e_i.

2. If k < n, there are only k linearly independent solutions of the type x_i = e^{λ_i t} e_i. To find additional solutions corresponding to a multiple eigenvalue λ, find all linearly independent c such that (A−λI)² c = 0 but (A−λI) c ≠ 0. Notice that generalized eigenvectors will satisfy the requirement, though it has other solutions as well. For each such c, we have

e^{At} c = e^{λt} ( c + t(A−λI) c ), (9.36)

which is a solution.

3. If more solutions are needed, then find all linearly independent c for which (A−λI)³ c = 0 but (A−λI)² c ≠ 0. The corresponding solution is

e^{At} c = e^{λt} ( c + t(A−λI) c + (t²/2)(A−λI)² c ). (9.37)

4. Continue until n linearly independent solutions have been found.

A linear combination of the n linearly independent solutions,

x = Σ_{i=1}^n c_i x_i, (9.38)

is the general solution, where c_1, c_2, …, c_n are arbitrary constants.
9.5.1.4 Alternative method

As an alternative to the method just described, which is easily seen to be equivalent, we can use the Jordan canonical form in a straightforward way to arrive at the solution. Recall that the Jordan form exists for all matrices. We begin with

dx/dt = A x. (9.39)

Then we use the Jordan decomposition A = S J S^{−1} to write

dx/dt = S J S^{−1} x. (9.40)

If we apply the matrix operator S^{−1}, which is a constant, to both sides, we get

d/dt ( S^{−1} x ) = J S^{−1} x. (9.41)

Now taking z = S^{−1} x, we get

dz/dt = J z. (9.42)

We then solve each equation one by one, starting with the last equation dz_N/dt = λ_N z_N, and proceeding to the first. In the process of solving these equations sequentially, there will be feedback for each off-diagonal term which will give rise to a secular term in the solution. Once z is determined, we solve for x by taking x = S z.

It is also noted that this method works in the common case in which the matrix J is diagonal; that is, it applies for cases in which there are n differential equations and n ordinary eigenvectors.
Example 9.7
For x ∈ R^3, t ∈ R^1, A ∈ R^3 × R^3, find the general solution of

dx/dt = A x,

where

A = [ 4  1  3
      0  4  1
      0  0  4 ].

A has an eigenvalue λ = 4 with multiplicity three. The eigenvector is

e = (1, 0, 0)^T,

which gives a solution

e^{4t} (1, 0, 0)^T.

A generalized eigenvector is

g_1 = (0, 1, 0)^T,

which leads to the solution

e^{4t} [ g_1 + t(A − λI) g_1 ] = e^{4t} [ (0, 1, 0)^T + t [ 0 1 3; 0 0 1; 0 0 0 ] (0, 1, 0)^T ] = e^{4t} (t, 1, 0)^T.

Another generalized eigenvector,

g_2 = (0, −3, 1)^T,

gives the solution

e^{4t} [ g_2 + t(A − λI) g_2 + (t²/2)(A − λI)² g_2 ]
  = e^{4t} [ (0, −3, 1)^T + t [ 0 1 3; 0 0 1; 0 0 0 ] (0, −3, 1)^T + (t²/2) [ 0 0 1; 0 0 0; 0 0 0 ] (0, −3, 1)^T ],
  = e^{4t} ( t²/2, −3 + t, 1 )^T.

The general solution is

x = c_1 e^{4t} (1, 0, 0)^T + c_2 e^{4t} (t, 1, 0)^T + c_3 e^{4t} ( t²/2, −3 + t, 1 )^T,

where c_1, c_2, c_3 are arbitrary constants.
Alternative method

Alternatively, we can simply use the Jordan decomposition to form the solution. When we form the matrix S from the eigenvectors and generalized eigenvectors, we have

S = ( e | g_1 | g_2 ) = [ 1  0   0
                          0  1  −3
                          0  0   1 ].

We then get

S^{−1} = [ 1  0  0
           0  1  3
           0  0  1 ],

J = S^{−1} A S = [ 4  1  0
                   0  4  1
                   0  0  4 ].

Now with z = S^{−1} x, we solve dz/dt = J z,

( dz_1/dt, dz_2/dt, dz_3/dt )^T = [ 4 1 0; 0 4 1; 0 0 4 ] ( z_1, z_2, z_3 )^T.

The final equation is totally uncoupled; solving dz_3/dt = 4z_3, we get

z_3(t) = c_3 e^{4t}.

Now consider the second equation,

dz_2/dt = 4z_2 + z_3.

Using our solution for z_3, we get

dz_2/dt = 4z_2 + c_3 e^{4t}.

Solving, we get

z_2(t) = c_2 e^{4t} + c_3 t e^{4t}.

Now consider the first equation,

dz_1/dt = 4z_1 + z_2.

Using our solution for z_2, we get

dz_1/dt = 4z_1 + c_2 e^{4t} + c_3 t e^{4t}.

Solving, we get

z_1(t) = c_1 e^{4t} + (1/2) t e^{4t} (2c_2 + t c_3),

so we have

z(t) = ( c_1 e^{4t} + (1/2) t e^{4t} (2c_2 + t c_3), c_2 e^{4t} + c_3 t e^{4t}, c_3 e^{4t} )^T.

Then for x = S z, we recover

x = c_1 e^{4t} (1, 0, 0)^T + c_2 e^{4t} (t, 1, 0)^T + c_3 e^{4t} ( t²/2, −3 + t, 1 )^T,

which is identical to our earlier result.
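Because A − λI is nilpotent in Example 9.7, the matrix exponential truncates exactly: e^{At} = e^{4t}(I + Nt + N²t²/2) with N = A − 4I. The sketch below builds this exponential and verifies, by a centered finite difference, that x(t) = e^{At} x(0) satisfies dx/dt = A x (the sample time and initial vector are arbitrary test choices):

```python
import numpy as np

A = np.array([[4.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [0.0, 0.0, 4.0]])
N = A - 4.0 * np.eye(3)          # nilpotent part: N @ N @ N = 0

def expAt(t):
    # exact matrix exponential via the finite nilpotent series
    return np.exp(4.0 * t) * (np.eye(3) + N * t + N @ N * t**2 / 2.0)

t, dt = 0.5, 1e-6
x0 = np.array([1.0, 2.0, 3.0])
# centered-difference approximation to dx/dt at time t
dxdt = (expAt(t + dt) - expAt(t - dt)) @ x0 / (2.0 * dt)
```

The same nilpotent structure is what makes each bracketed series in the generalized-eigenvector solutions above terminate.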
9.5.1.5 Fundamental matrix

If x_i, i = 1, …, n, are linearly independent solutions of dx/dt = A x, then

Ω = ( x_1 | x_2 | ⋯ | x_n ) (9.43)

is called a fundamental matrix. The general solution is

x = Ω c, (9.44)

where

c = ( c_1, …, c_n )^T. (9.45)

The term e^{At} = Ω(t) Ω^{−1}(0) is a fundamental matrix.
Example 9.8
Find the fundamental matrix of the problem given above.

The fundamental matrix is

Ω = e^{4t} [ 1  t  t²/2
             0  1  −3 + t
             0  0  1 ],

so that

x = Ω c = e^{4t} [ 1  t  t²/2; 0  1  −3 + t; 0  0  1 ] ( c_1, c_2, c_3 )^T.
9.5.2 Inhomogeneous equations

If A is a constant matrix that is diagonalizable, the system of differential equations represented by dx/dt = A x + f(t) can be decoupled into a set of scalar equations, each of which is in terms of a single dependent variable. Thus let A be such that P^{−1} A P = Λ, where Λ is a diagonal matrix of eigenvalues. Taking x = P z, we get

d(P z)/dt = A P z + f(t), (9.46)
P dz/dt = A P z + f(t). (9.47)

Applying P^{−1} to both sides,

dz/dt = P^{−1} A P z + P^{−1} f(t), (9.48)
dz/dt = Λ z + g(t), (9.49)

where Λ = P^{−1} A P and g(t) = P^{−1} f(t). This is the decoupled form of the original equation.
Example 9.9
For x ∈ R^2, t ∈ R^1, solve

dx_1/dt = 2x_1 + x_2 + 1,
dx_2/dt = x_1 + 2x_2 + t.

This can be written as

d/dt ( x_1, x_2 )^T = [ 2 1; 1 2 ] ( x_1, x_2 )^T + ( 1, t )^T.

We have

P = [ 1  1
     −1  1 ],  P^{−1} = (1/2) [ 1 −1
                                1  1 ],  Λ = [ 1 0
                                               0 3 ],

so that

d/dt ( z_1, z_2 )^T = [ 1 0; 0 3 ] ( z_1, z_2 )^T + (1/2) ( 1 − t, 1 + t )^T.

The solution is

z_1 = a e^t + t/2,
z_2 = b e^{3t} − 2/9 − t/6,

which, using x_1 = z_1 + z_2 and x_2 = −z_1 + z_2, transforms to

x_1 = a e^t + b e^{3t} − 2/9 + t/3,
x_2 = −a e^t + b e^{3t} − 2/9 − 2t/3.
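The decoupling in Example 9.9 can be confirmed numerically. This sketch checks that P^{−1}AP = Λ and that the closed-form solution satisfies the original system at a sample time; the constants a, b and the time t are arbitrary test values, not from the text:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
P = np.array([[1.0, 1.0], [-1.0, 1.0]])
Lam = np.linalg.inv(P) @ A @ P       # should be diag(1, 3)

a, b, t = 1.0, 2.0, 0.7              # arbitrary constants and sample time
x1 = a * np.exp(t) + b * np.exp(3 * t) - 2.0 / 9.0 + t / 3.0
x2 = -a * np.exp(t) + b * np.exp(3 * t) - 2.0 / 9.0 - 2.0 * t / 3.0
# time derivatives computed from the closed form
dx1 = a * np.exp(t) + 3 * b * np.exp(3 * t) + 1.0 / 3.0
dx2 = -a * np.exp(t) + 3 * b * np.exp(3 * t) - 2.0 / 3.0
```

Substituting back, dx1 equals 2x_1 + x_2 + 1 and dx2 equals x_1 + 2x_2 + t, as required.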
9.5.2.1 Undetermined coefficients

This method is similar to that presented for scalar equations.

Example 9.10
For x ∈ R^3, t ∈ R^1, A ∈ R^3 × R^3, f : R^1 → R^3, solve dx/dt = A x + f(t) with

A = [ 4  1  3
      0  4  1
      0  0  4 ],  f = ( 3e^t, 0, 0 )^T.

The homogeneous part of this problem has been solved before. Let the particular solution be

x_P = c e^t.

Substituting into the equation, we get

c e^t = A c e^t + (3, 0, 0)^T e^t.

We can cancel the exponential to get

(I − A) c = (3, 0, 0)^T,

which can be solved to get

c = (−1, 0, 0)^T.

Therefore,

x = x_H + (−1, 0, 0)^T e^t.
The method must be modified if f = c e^{λt}, where λ is an eigenvalue of A. Then the particular solution must be of the form x_P = (c_0 + t c_1 + t² c_2 + ⋯) e^{λt}, where the series is finite, and we take as many terms as necessary.

9.5.2.2 Variation of parameters

This follows the general procedure explained in Section 3.4.2, page 70.
9.6 Nonlinear equations

Nonlinear equations are difficult to solve. Even for algebraic equations, general solutions do not exist for polynomial equations of arbitrary degree. Nonlinear differential equations, both ordinary and partial, admit analytical solutions only in special cases. Since these equations are quite common in engineering applications, many techniques for approximate numerical and analytical solutions have been developed. Our purpose here is more restricted: it is to analyze the long-time stability of the solutions as a function of a system parameter. We will first develop some of the basic ideas of stability, and then illustrate them through examples.
9.6.1 Definitions

With x ∈ R^n, t ∈ R^1, f : R^n → R^n, consider a system of n nonlinear first-order ordinary differential equations

dx_i/dt = f_i(x_1, x_2, …, x_n), (i = 1, …, n), (9.50)

where t is time, and f_i is a vector field. The system is autonomous since f_i is not a function of t. The coordinates x_1, x_2, …, x_n form a phase or state space. The divergence of the vector field, Σ_{i=1}^n ∂f_i/∂x_i, indicates the change of a given volume of initial conditions in phase space. If the divergence is zero, the volume remains constant, and the system is said to be conservative. If the divergence is negative, the volume shrinks with time, and the system is dissipative. The volume in a dissipative system eventually goes to zero. This final state to which some initial set of points in phase space goes is called an attractor. Attractors may be points, closed curves, tori, or fractals (strange attractors). A given dynamical system may have several attractors that coexist. Each attractor has its own basin of attraction in R^n; initial conditions that lie in this basin tend to that particular attractor.

The steady state solutions x_i = x̄_i of equation (9.50) are called critical (or fixed, singular, or stationary) points. Thus, by definition,

f_i(x̄_1, x̄_2, …, x̄_n) = 0, (i = 1, …, n), (9.51)

which is an algebraic or transcendental equation. The dynamics of the system are analyzed by studying the stability of the critical point. For this we perturb the system so that

x_i = x̄_i + x′_i, (9.52)

where the prime denotes a perturbation. If ||x′_i|| is bounded as t → ∞, the critical point is said to be stable; otherwise it is unstable. As a special case, if ||x′_i|| → 0 as t → ∞, the critical point is asymptotically stable.
9.6.2 Linear stability

The linear stability of the critical point is determined by restricting the analysis to a small neighborhood of the critical point, i.e. for small values of ||x′_i||. We substitute equation (9.52) into (9.50), and linearize by keeping only the terms that are linear in x′_i and neglecting all products of x′_i. Thus equation (9.50) takes a linearized local form

dx′_i/dt = Σ_{j=1}^n A_{ij} x′_j. (9.53)

Another way of obtaining the same result is to expand the vector field in a Taylor series around x_i = x̄_i, so that

f_i(x_i) = Σ_{j=1}^n ( ∂f_i/∂x_j )|_{x_i = x̄_i} x′_j + H.O.T., (9.54)

and then neglect the higher order terms (H.O.T.). Thus, in equation (9.53),

A_{ij} = ( ∂f_i/∂x_j )|_{x_i = x̄_i} (9.55)

is the Jacobian of f_i evaluated at the critical point. In matrix form the linearized equation for the perturbation x′ is

dx′/dt = A x′. (9.56)

The real parts of the eigenvalues of A determine the linear stability of the critical point x′ = 0, and the behavior of the solution near it:

1. If all eigenvalues have real parts < 0, the critical point is asymptotically stable.

2. If at least one eigenvalue has a real part > 0, the critical point is unstable.

3. If all eigenvalues have real parts ≤ 0, and some have zero real parts, then the critical point is stable if A has k linearly independent eigenvectors for each eigenvalue of multiplicity k. Otherwise it is unstable.

The following are some terms used in classifying critical points according to the real and imaginary parts of the eigenvalues of A.

Classification             Eigenvalues
Hyperbolic                 No zero real part
Saddle                     Some real parts negative, others positive
Stable node or sink        All real parts negative
  ordinary sink            All real parts negative, imaginary parts zero
  spiral sink              All real parts negative, imaginary parts nonzero
Unstable node or source    All real parts positive
  ordinary source          All real parts positive, imaginary parts zero
  spiral source            All real parts positive, imaginary parts nonzero
Center                     All purely imaginary and nonzero

Figures 9.5 and 9.6 give examples of phase planes for simple systems which describe an ordinary source node, a spiral sink node, an ordinary center node, and a saddle node. Figure 9.7 gives a phase plane, vector field, and trajectories for a complex system with many nodes present. Here the nodes are spiral and saddle nodes.
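The classification just tabulated can be automated once the eigenvalues of A are in hand. The following minimal sketch (category names follow the table; the tolerance is an arbitrary choice) labels a critical point from its linearization and applies it to the four linear systems plotted in Figures 9.5 and 9.6:

```python
import numpy as np

def classify(A, tol=1e-12):
    # classify a critical point from the eigenvalues of the linearization A
    lam = np.linalg.eigvals(A)
    re, im = lam.real, lam.imag
    if np.all(np.abs(re) < tol) and np.all(np.abs(im) > tol):
        return "center"
    if np.any(re > tol) and np.any(re < -tol):
        return "saddle"
    if np.all(re < -tol):
        return "spiral sink" if np.any(np.abs(im) > tol) else "ordinary sink"
    if np.all(re > tol):
        return "spiral source" if np.any(np.abs(im) > tol) else "ordinary source"
    return "marginal (needs nonlinear analysis)"

labels = [classify(np.array(A, dtype=float)) for A in
          ([[1, 0], [0, 1]],     # x' = x,        y' = y
           [[-1, -1], [1, 0]],   # x' = -(x + y), y' = x
           [[0, -1], [1, 0]],    # x' = -y,       y' = x
           [[-1, 1], [1, 0]])]   # x' = y - x,    y' = x
```

The four labels come out as ordinary source, spiral sink, center, and saddle, matching the figure captions.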
Figure 9.5: Phase planes for a system with an ordinary source node (x′ = x, y′ = y) and a spiral sink node (x′ = −(x + y), y′ = x).
9.6.3 Lyapunov functions

For x ∈ R^n, t ∈ R^1, f : R^n → R^n, consider the system of differential equations

dx_i/dt = f_i(x_1, x_2, …, x_n), i = 1, 2, …, n, (9.57)

with x_i = 0 as a critical point. If there exists a V(x_1, x_2, …, x_n) : R^n → R^1 such that

• V > 0 for x_i ≠ 0,

• V = 0 for x_i = 0,

• dV/dt < 0 for x_i ≠ 0, and

• dV/dt = 0 for x_i = 0,
Figure 9.6: Phase planes for a system with an ordinary center node (x′ = −y, y′ = x) and a saddle node (x′ = y − x, y′ = x).
then the equilibrium point of the differential equations, x_i = 0, is globally stable to all perturbations, large or small. The function V(x_1, x_2, …, x_n) is called a Lyapunov¹ function.

Although one cannot always find a Lyapunov function for a given system of differential equations, we can pose a method to seek a Lyapunov function given a set of autonomous ordinary differential equations. While the method lacks robustness, it is always straightforward to guess a functional form for a Lyapunov function and test whether or not the proposed function satisfies the criteria:

1. Choose a test function V(x_1, …, x_n). The function should be chosen to be strictly positive for x_i ≠ 0 and zero for x_i = 0.

2. Calculate

dV/dt = (∂V/∂x_1)(dx_1/dt) + (∂V/∂x_2)(dx_2/dt) + ⋯ + (∂V/∂x_n)(dx_n/dt), (9.58)

¹Alexandr Mikhailovich Lyapunov, 1857-1918, Russian mathematician.
Figure 9.7: Phase plane for a system with many spiral nodes and saddles (x′ = (y + x/4 − 1/2)(x − 2y² + 5/2), y′ = (y − x)(x − 2)(x + 2)).
dV/dt = (∂V/∂x_1) f_1(x_1, …, x_n) + (∂V/∂x_2) f_2(x_1, …, x_n) + ⋯ + (∂V/∂x_n) f_n(x_1, …, x_n). (9.59)

It is this step where the differential equations actually enter into the calculation.

3. Determine whether, for the proposed V(x_1, …, x_n), dV/dt < 0 for x_i ≠ 0 and dV/dt = 0 for x_i = 0. If so, then it is a Lyapunov function. If not, there may or may not be a Lyapunov function for the system; one can guess a new functional form and test again.
Example 9.11
Show that x = 0 is globally stable, if

m d²x/dt² + β dx/dt + k_1 x + k_2 x³ = 0, where m, β, k_1, k_2 > 0.

This system models the motion of a mass-spring-damper system when the spring is nonlinear. Breaking the original second order differential equation into two first order equations, we get

dx/dt = y,
dy/dt = −(β/m) y − (k_1/m) x − (k_2/m) x³.

Here x represents the position, and y represents the velocity. Let us guess that the Lyapunov function has the form

V(x, y) = a x² + b y² + c x⁴, where a, b, c > 0.

Note that V(x, y) ≥ 0 and that V(0, 0) = 0. Then

dV/dt = (∂V/∂x)(dx/dt) + (∂V/∂y)(dy/dt),
      = 2ax (dx/dt) + 4cx³ (dx/dt) + 2by (dy/dt),
      = (2ax + 4cx³) y + 2by ( −(β/m) y − (k_1/m) x − (k_2/m) x³ ),
      = 2( a − b k_1/m ) xy + 2( 2c − b k_2/m ) x³ y − (2b/m) β y².

If we choose b = m/2, a = k_1/2, c = k_2/4, then the coefficients on xy and x³y in the expression for dV/dt are identically zero, and we get

dV/dt = −β y²,

which for β > 0 is negative for all y ≠ 0 and zero for y = 0. Further, with these choices of a, b, c, the Lyapunov function itself is

V = (1/2) k_1 x² + (1/4) k_2 x⁴ + (1/2) m y² ≥ 0.

Checking, we see

dV/dt = k_1 x (dx/dt) + k_2 x³ (dx/dt) + m y (dy/dt),
      = k_1 xy + k_2 x³ y + m y ( −(β/m) y − (k_1/m) x − (k_2/m) x³ ),
      = k_1 xy + k_2 x³ y − β y² − k_1 xy − k_2 x³ y,
      = −β y² ≤ 0.

Thus V is a Lyapunov function, and x = y = 0 is globally stable. Actually, in this case, V = (kinetic energy + potential energy), where the kinetic energy is (1/2) m y², and the potential energy is (1/2) k_1 x² + (1/4) k_2 x⁴. Note that V(x, y) is just an algebraic function of the system's state variables. When we take the time derivative of V, we are forced to invoke our original system, which defines the differential equations. We note for this system that, precisely because V is strictly positive or zero for all x, y, and moreover is decaying for all time, it necessarily follows that V → 0, hence x, y → 0.
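The decay of V along trajectories can be checked numerically. This sketch (parameter values m = β = k₁ = k₂ = 1, the initial state, the step size, and the step count are all arbitrary test choices, not from the text) marches the mass-spring-damper system with forward Euler and samples V:

```python
m, beta, k1, k2 = 1.0, 1.0, 1.0, 1.0          # arbitrary positive parameters

def f(x, y):
    # the first-order system from Example 9.11
    return y, -(beta / m) * y - (k1 / m) * x - (k2 / m) * x**3

def V(x, y):
    # the Lyapunov function found above: potential plus kinetic energy
    return 0.5 * k1 * x**2 + 0.25 * k2 * x**4 + 0.5 * m * y**2

x, y, dt = 1.0, 0.0, 1e-3
vs = [V(x, y)]
for _ in range(20000):                         # march to t = 20 with forward Euler
    dx, dy = f(x, y)
    x, y = x + dt * dx, y + dt * dy
    vs.append(V(x, y))
```

Sampled on coarse windows, V decreases monotonically toward zero, mirroring the analytical result dV/dt = −βy² ≤ 0.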
9.6.4 Hamiltonian systems

Closely related to the Lyapunov function of a system is the Hamiltonian, which exists for systems which are non-dissipative, that is, those systems for which dV/dt = 0. In such a case we define the Hamiltonian H to be the Lyapunov function H = V with dH/dt ≡ 0. For such systems, we integrate once to find that H(x_i, y_i) must be a constant for all x_i, y_i. Such systems are said to be conservative.

With x ∈ R^n, y ∈ R^n, t ∈ R^1, f : R^{2n} → R^n, g : R^{2n} → R^n, we say a system of equations of the form

dx_i/dt = f_i(x_1, …, x_n, y_1, …, y_n),  dy_i/dt = g_i(x_1, …, x_n, y_1, …, y_n),  i = 1, …, n, (9.60)
is Hamiltonian if we can find a function H(x_i, y_i) : R^n × R^n → R^1 such that

dH/dt = (∂H/∂x_i)(dx_i/dt) + (∂H/∂y_i)(dy_i/dt) = 0, (9.61)

dH/dt = (∂H/∂x_i) f_i(x_1, …, x_n, y_1, …, y_n) + (∂H/∂y_i) g_i(x_1, …, x_n, y_1, …, y_n) = 0. (9.62)
This differential equation can at times be solved directly by the method of separation of variables, in which we assume a specific functional form for H(x_i, y_i).

Alternatively, we can also determine H by demanding that

∂H/∂y_i = dx_i/dt,  ∂H/∂x_i = −dy_i/dt. (9.63)

Substituting from the original differential equations, we are led to equations for H(x_i, y_i):

∂H/∂y_i = f_i(x_1, …, x_n, y_1, …, y_n),  ∂H/∂x_i = −g_i(x_1, …, x_n, y_1, …, y_n). (9.64)
Example 9.12
Find the Hamiltonian for a linear mass-spring system:

m d²x/dt² + k x = 0, x(0) = x_0, dx/dt|_{t=0} = ẋ_0.

Taking dx/dt = y to reduce this to a system of two first order equations, we have

dx/dt = y, x(0) = x_0,
dy/dt = −(k/m) x, y(0) = y_0.

For this system, n = 1. We seek H(x, y) such that dH/dt = 0. That is,

dH/dt = (∂H/∂x)(dx/dt) + (∂H/∂y)(dy/dt) = 0.

Substituting from the given system of differential equations, we have

(∂H/∂x) y + (∂H/∂y)( −(k/m) x ) = 0.

As with all partial differential equations, one has to transform to a system of ordinary equations in order to solve. Here we will take the approach of the method of separation of variables and assume a solution of the form

H(x, y) = A(x) + B(y),

where A and B are functions to be determined. With this assumption, we get

y (dA/dx) − (k/m) x (dB/dy) = 0.

Rearranging, we get

(1/x)(dA/dx) = (k/(m y))(dB/dy).

Now the term on the left is a function of x only, and the term on the right is a function of y only. The only way this can be generally valid is if both terms are equal to the same constant, which we take to be C. Hence,

(1/x)(dA/dx) = (k/(m y))(dB/dy) = C,
from which we get two ordinary differential equations:

dA/dx = C x,  dB/dy = (C m / k) y.

The solution is

A(x) = (1/2) C x² + K_1,  B(y) = (1/2)(C m / k) y² + K_2.

A general solution is

H(x, y) = (1/2) C ( x² + (m/k) y² ) + K_1 + K_2.

While this general solution is perfectly valid, we can obtain a common physical interpretation by taking C = k, K_1 + K_2 = 0. With these choices, the Hamiltonian becomes

H(x, y) = (1/2) k x² + (1/2) m y².

The first term represents the potential energy of the spring; the second term represents the kinetic energy. Since by definition dH/dt = 0, this system conserves its mechanical energy. Verifying the properties of a Hamiltonian, we see

dH/dt = (∂H/∂x)(dx/dt) + (∂H/∂y)(dy/dt),
      = k x y + m y ( −(k/m) x ),
      = 0.

Since this system has dH/dt = 0, H(x, y) must be constant for all time, including t = 0, when the initial conditions apply. So

H(x(t), y(t)) = H(x(0), y(0)) = (1/2)( k x_0² + m y_0² ).

Thus the system has the integral

(1/2)( k x² + m y² ) = (1/2)( k x_0² + m y_0² ).
9.7 Fractals

In the discussion on attractors in Section 9.6.1, we included geometrical shapes called fractals. These are objects that are not smooth, but occur frequently in the dynamical systems literature either as attractors or as boundaries of basins of attraction.

A fractal can be defined as a geometrical shape in which the parts are in some way similar to the whole. This self-similarity may be exact, i.e. a piece of the fractal, if magnified, may look exactly like the whole fractal. Before discussing examples we need to put forward a working definition of dimension. Though there are many definitions in current use, we present here the Hausdorff-Besicovitch dimension D. If N_ε is the number of 'boxes' of side length ε needed to cover an object, then

D = lim_{ε→0} ln N_ε / ln(1/ε). (9.65)
We can check that this definition corresponds to the common geometrical shapes.

1. Point: N_ε = 1, D = 0, since D = lim_{ε→0} ln 1 / (−ln ε) = 0.

2. Line of length l: N_ε = l/ε, D = 1, since D = lim_{ε→0} ln(l/ε) / (−ln ε) = (ln l − ln ε)/(−ln ε) → 1.

3. Surface of size l²: N_ε = (l/ε)², D = 2, since D = lim_{ε→0} ln(l²/ε²) / (−ln ε) = (2 ln l − 2 ln ε)/(−ln ε) → 2.

4. Volume of size l³: N_ε = (l/ε)³, D = 3, since D = lim_{ε→0} ln(l³/ε³) / (−ln ε) = (3 ln l − 3 ln ε)/(−ln ε) → 3.

A fractal has a dimension that is not an integer. Many physical objects are fractal-like, in that they are fractal within a range of length scales. Coastlines are among the geographical features that are of this shape. If there are N_ε units of a measuring stick of length ε, the measured length of the coastline will be of the power-law form ε N_ε = ε^{1−D}, where D is the dimension.
9.7.1 Cantor set

Consider the line corresponding to k = 0 in Figure 9.8. Take away the middle third to leave the two portions; this is shown as k = 1. Repeat the process to get k = 2, 3, …. If k → ∞, what is left is called the Cantor² set.

Figure 9.8: Cantor set construction for k = 0, 1, 2, 3, 4.

Let us take the length of the line segment to be unity when k = 0. Since N_ε = 2^k and ε = 1/3^k, the dimension of the Cantor set is

D = lim_{ε→0} ln N_ε / ln(1/ε) = lim_{k→∞} ln 2^k / ln 3^k = (k ln 2)/(k ln 3) = ln 2 / ln 3 = 0.6309…. (9.66)

It can be seen that the endpoints of the removed intervals are never removed; it can be shown the Cantor set contains an infinite number of points, and it is an uncountable set. It is totally disconnected and has Lebesgue measure zero.
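The box count behind equation (9.66) can be mimicked directly: generate the level-k intervals, count the boxes of side ε = 3^{−k}, and form ln N_ε / ln(1/ε). A minimal sketch (the level k = 12 is an arbitrary choice):

```python
import math

def cantor_intervals(k):
    # intervals remaining after k middle-third removals
    intervals = [(0.0, 1.0)]
    for _ in range(k):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3.0
            nxt += [(a, a + third), (b - third, b)]
        intervals = nxt
    return intervals

k = 12
n_boxes = len(cantor_intervals(k))            # N_eps = 2**k boxes of side 3**-k
dim = math.log(n_boxes) / math.log(3.0**k)    # ln N_eps / ln(1/eps)
```

At any finite level the ratio already equals ln 2 / ln 3 exactly, since the k's cancel as in (9.66).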
9.7.2 Koch curve

Here we start with an equilateral triangle, shown in Figure 9.9 as k = 0. Each side of the original triangle has unit length. The middle third of each side of the triangle is removed, and two sides of a triangle are drawn on that. This is shown as k = 1. The process is continued, and in the limit gives a continuous, closed curve that is nowhere smooth. Since N_ε = 3·4^k and ε = 1/3^k, the dimension of the Koch³ curve is

D = lim_{ε→0} ln N_ε / ln(1/ε) = lim_{k→∞} ln(3·4^k) / ln 3^k = lim_{k→∞} (ln 3 + k ln 4)/(k ln 3) = ln 4 / ln 3 = 1.261…. (9.67)

²Georg Ferdinand Ludwig Philipp Cantor, 1845-1918, Russian-born, German-based mathematician.
³Niels Fabian Helge von Koch, 1870-1924, Swedish mathematician.
Figure 9.9: Koch curve (k = 0, 1, 2).
The limit curve itself has infinite length, is nowhere differentiable, and surrounds a finite area.
9.7.3 Weierstrass function

For a, b, t ∈ R^1, W : R^1 → R^1, the Weierstrass⁴ function is

W(t) = Σ_{k=1}^∞ a^k cos(b^k t), (9.68)

where 0 < a < 1, b is odd, and ab > 1 + 3π/2. It is everywhere continuous, but nowhere differentiable! Both properties require some effort to prove. A Weierstrass function is plotted in Figure 9.10. Its fractal character can be seen when one recognizes that cosine waves of ever higher frequency are superposed onto low frequency cosine waves.
9.7.4 Mandelbrot and Julia sets
For z ∈ C
1
, c ∈ C
1
, the Mandelbrot
5
set is the set of all c for which
z
k+1
= z
2
k
+ c (9.69)
stays bounded as k →∞, when z
0
= 0. The boundaries of this set are fractal. A Mandelbrot
set is sketched in Figure 9.11.
Associated with each c of the Mandelbrot set is a Julia⁶ set. In this case, the Julia set is the set of complex initial seeds z₀ for which z_{k+1} = z_k² + c remains bounded for fixed complex c. A Julia set for c = −1.25 is sketched in Figure 9.12.
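Both sets rest on the same boundedness test, which can be sketched as follows (the escape radius 2 and the finite iteration cap are the standard practical conventions; the specific cap is an illustrative choice):

```python
def stays_bounded(c, z0=0j, max_iter=200, radius=2.0):
    """Iterate z <- z**2 + c from z0; True if |z| never exceeds radius."""
    z = z0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > radius:
            return False
    return True
```

For the Mandelbrot set one fixes z0 = 0 and varies c; for a Julia set one fixes c (e.g. c = −1.25) and varies z0.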
⁴Karl Theodor Wilhelm Weierstrass, 1815-1897, Westphalia-born German mathematician.
⁵Benoit Mandelbrot, 1924-, Polish-born mathematician based mainly in France.
⁶Gaston Maurice Julia, 1893-1978, Algerian-born French mathematician.
Figure 9.10: Four-term (k = 1, . . . , 4) approximation to Weierstrass function, W(t), for b = 13, a = 1/2.
9.8 Bifurcations
Dynamical systems representing some physical problem frequently have parameters associated with them. Thus, for x ∈ Rⁿ, t ∈ R¹, λ ∈ R¹, f : Rⁿ → Rⁿ, we can write

dx_i/dt = f_i(x₁, x₂, . . . , x_n; λ), i = 1, . . . , n, (9.70)

where λ is a parameter. The theory can easily be extended if there is more than one parameter.
We would like to consider the changes in the behavior of the t → ∞ solutions as the real number λ, called the bifurcation parameter, is varied. As λ changes, the nature of a critical point may change, its stability may change, or other critical points may appear or disappear. This is a bifurcation, and the λ at which it happens is the bifurcation point. The study of the solutions and bifurcations of the steady state falls under singularity theory.
Let us look at some of the bifurcations obtained for different vector fields. Some of the examples will be one-dimensional, i.e. x ∈ R¹, λ ∈ R¹, f : R¹ → R¹:

dx/dt = f(x; λ). (9.71)
Even though this can be solved exactly in most cases, we will assume that such a solution
is not available so that the techniques of analysis can be developed for more complicated
systems. For a coeﬃcient matrix that is a scalar, the eigenvalue is the coeﬃcient itself. The
eigenvalue will be real and will cross the imaginary axis of the complex plane through the
origin as λ is changed. This is called a simple bifurcation.
Figure 9.11: Mandelbrot set. Black regions stay bounded; gray regions become unbounded, with the shade of gray indicating how rapidly the system becomes unbounded.
9.8.1 Pitchfork bifurcation
For x ∈ R¹, t ∈ R¹, λ ∈ R¹, λ₀ ∈ R¹, consider

dx/dt = −x[x² − (λ − λ₀)]. (9.72)

The critical points are x = 0 and x = ±√(λ − λ₀). λ = λ₀ is a bifurcation point; for λ < λ₀ there is only one critical point, while for λ > λ₀ there are three.
Linearizing around the critical point x = 0, we get

dx′/dt = (λ − λ₀)x′.

This has solution

x′(t) = x′(0) exp((λ − λ₀)t).

For λ < λ₀, the critical point is asymptotically stable; for λ > λ₀ it is unstable.
Notice that the function V(x) = x² satisfies the following conditions: V > 0 for x ≠ 0, V = 0 for x = 0, and

dV/dt = (dV/dx)(dx/dt) = −2x²[x² − (λ − λ₀)] ≤ 0 for λ < λ₀.

Thus V(x) is a Lyapunov function, and x = 0 is globally stable for all perturbations, large or small, as long as λ < λ₀.
Now let us examine the critical point x = √(λ − λ₀), which exists only for λ > λ₀. Putting x = x̄ + x′, the right side of equation (9.72) becomes

f(x) = −(√(λ − λ₀) + x′)[(√(λ − λ₀) + x′)² − (λ − λ₀)].

Linearizing for small x′, we get

dx′/dt = −2(λ − λ₀)x′.
Figure 9.12: Julia set for c = −1.25. Black regions stay bounded; gray regions become unbounded, with the shade of gray indicating how rapidly the system becomes unbounded.
This has solution

x′(t) = x′(0) exp(−2(λ − λ₀)t).

For λ > λ₀, this critical point is stable. The other critical point, x = −√(λ − λ₀), is also found to be stable for λ > λ₀.
The results are summarized in the bifurcation diagram sketched in Figure 9.13.
Figure 9.13: The pitchfork bifurcation. Heavy lines are stable.
At the bifurcation point, λ = λ₀, we have

dx/dt = −x³. (9.73)
This equation has a critical point at x = 0 but has no linearization. We must do a nonlinear
analysis to determine the stability of the critical point. In this case it is straightforward.
Solving directly and applying an initial condition, we obtain

x(t) = ±x(0)/√(1 + 2x(0)²t), (9.74)

lim_{t→∞} x(t) = 0. (9.75)

Since the system approaches the critical point as t → ∞ for all values of x(0), the critical point x = 0 is unconditionally stable.
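This nonlinear stability result can be checked numerically; the sketch below integrates (9.73) with forward Euler (the step size and horizon are arbitrary illustrative choices) and compares against the closed form (9.74):

```python
def euler_cubic_decay(x0, dt=1e-4, n_steps=10000):
    """Forward-Euler integration of dx/dt = -x**3 up to t = n_steps*dt."""
    x = x0
    for _ in range(n_steps):
        x += dt * (-x**3)
    return x

x_num = euler_cubic_decay(1.0)                      # numerical x at t = 1
x_exact = 1.0 / (1.0 + 2.0 * 1.0**2 * 1.0) ** 0.5   # equation (9.74), x(0) = 1
```

The two values agree to the truncation error of the scheme, and both decay toward the critical point x = 0, albeit only algebraically in t rather than exponentially.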
9.8.2 Transcritical bifurcation
For x ∈ R¹, t ∈ R¹, λ ∈ R¹, λ₀ ∈ R¹, consider

dx/dt = −x[x − (λ − λ₀)]. (9.76)

The critical points are x = 0 and x = λ − λ₀. The bifurcation occurs at λ = λ₀. Once again the linear stability of the solutions can be determined. Near x = 0, the linearization is
dx′/dt = (λ − λ₀)x′, (9.77)

which has solution

x′(t) = x′(0) exp((λ − λ₀)t).

So this solution is stable for λ < λ₀. Near x = λ − λ₀, we take x′ = x − (λ − λ₀). The resulting linearization is

dx′/dt = −(λ − λ₀)x′, (9.78)

which has solution

x′(t) = x′(0) exp(−(λ − λ₀)t).

So this solution is stable for λ > λ₀.
At the bifurcation point, λ = λ₀, there is no linearization, and the system becomes

dx/dt = −x², (9.79)

which has solution

x(t) = x(0)/(1 + x(0)t). (9.80)

Here the asymptotic stability depends on the initial condition! For x(0) ≥ 0, the critical point at x = 0 is stable. For x(0) < 0, there is a blow-up phenomenon at t = −1/x(0).
The results are summarized in the bifurcation diagram sketched in Figure 9.14.
Figure 9.14: Transcritical bifurcation. Heavy lines are stable.
9.8.3 Saddlenode bifurcation
For x ∈ R¹, t ∈ R¹, λ ∈ R¹, λ₀ ∈ R¹, consider

dx/dt = −x² + (λ − λ₀). (9.81)

The critical points are x = ±√(λ − λ₀). Taking x′ = x ∓ √(λ − λ₀) and linearizing, we obtain

dx′/dt = ∓2√(λ − λ₀) x′, (9.82)
which has solution

x′(t) = x′(0) exp(∓2√(λ − λ₀) t). (9.83)

For λ > λ₀, the root x = +√(λ − λ₀) is asymptotically stable. The root x = −√(λ − λ₀) is asymptotically unstable.
At the bifurcation point, λ = λ₀, there is no linearization, and the system becomes

dx/dt = −x², (9.84)

which has solution

x(t) = x(0)/(1 + x(0)t). (9.85)

Here the asymptotic stability again depends on the initial condition. For x(0) ≥ 0, the critical point at x = 0 is stable. For x(0) < 0, there is a blow-up phenomenon at t = −1/x(0).
The results are summarized in the bifurcation diagram sketched in Figure 9.15.
Figure 9.15: Saddle-node bifurcation. Heavy lines are stable.
9.8.4 Hopf bifurcation
To give an example of complex eigenvalues, one must go to a two-dimensional vector field.
Example 9.13
With x, y, t, λ, λ₀ ∈ R¹, take

dx/dt = (λ − λ₀)x − y − x(x² + y²),
dy/dt = x + (λ − λ₀)y − y(x² + y²).
The origin (0,0) is a critical point. The linearized perturbation equations are

d/dt [ x ]   [ λ − λ₀    −1     ] [ x ]
     [ y ] = [   1      λ − λ₀ ] [ y ].
The eigenvalues µ of the coefficient matrix are µ = (λ − λ₀) ± i. For λ < λ₀ the real part is negative and the origin is stable. At λ = λ₀ there is a Hopf⁷ bifurcation as the eigenvalues cross the imaginary axis of the complex plane as λ is changed. For λ > λ₀ a periodic orbit in the (x, y) phase plane appears.
The linear analysis will not give the amplitude of the motion. Writing the given equations in polar coordinates (r, θ), we get

dr/dt = r(λ − λ₀) − r³,
dθ/dt = 1.

This is a pitchfork bifurcation in the amplitude of the oscillation r.
⁷Eberhard Frederich Ferdinand Hopf, 1902-1983, Austrian-born, German mathematician.
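The amplitude claim is easy to verify from the radial equation; a small numerical sketch (the value λ − λ₀ = 1/4 is an illustrative choice, for which the pitchfork analysis predicts an orbit radius √(1/4) = 1/2):

```python
def limit_cycle_radius(mu, r0=0.1, dt=1e-3, n_steps=50000):
    """Integrate dr/dt = r*mu - r**3 (mu = lambda - lambda_0 > 0)
    with forward Euler to its long-time amplitude."""
    r = r0
    for _ in range(n_steps):
        r += dt * (r * mu - r**3)
    return r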
337
9.9 Lorenz equations
For independent variable t ∈ R
1
, dependent variables x, y, z ∈ R
1
, and parameters σ, r, b ∈
R
1
, the Lorenz
8
equations are
dx
dt
= σ(y −x), (9.86)
dy
dt
= rx −y −xz, (9.87)
dz
dt
= −bz + xy, (9.88)
where σ and b are taken to be positive constants, with σ > b +1. The bifurcation parameter
will be either r or σ.
9.9.1 Linear stability
The critical points are obtained from
y −x = 0, (9.89)
rx −y −xz = 0, (9.90)
−bz + xy = 0, (9.91)
which give
¸
x
y
z
=
¸
0
0
0
,
¸
b(r −1)
b(r −1)
r −1
,
¸
−
b(r −1)
−
b(r −1)
r −1
. (9.92)
A linear stability analysis of each critical point follows.
• x = y = z = 0. Small perturbations around this point give
d
dt
¸
x
y
z
=
¸
−σ σ 0
r −1 0
0 0 −b
¸
x
y
z
. (9.93)
The characteristic equation is
(λ + b)[λ
2
+ λ(σ + 1) −σ(r −1)] = 0, (9.94)
from which we get the eigenvalues
λ = −b, λ =
1
2
−(1 + σ) ±
(1 + σ)
2
−4σ(1 −r)
. (9.95)
For 0 < r < 1, the eigenvalues are real and negative, since (1 + σ)
2
> 4σ(1 − r). At
r = 1, there is a pitchfork bifurcation with one zero eigenvalue. For r > 1, the origin
becomes unstable.
8
Edward Norton Lorenz, 1917, American meteorologist.
338
• x = y =
b(r −1), z = r −1. Small perturbations give
d
dt
¸
x
y
z
=
¸
−σ σ 0
1 −1 −
b(r −1)
b(r −1)
b(r −1) −b
¸
x
y
z
. (9.96)
The characteristic equation is
λ
3
+ (σ + b + 1)λ
2
+ (σ + r)bλ + 2σb(r −1) = 0. (9.97)
Using the Hurwitz criteria we can determine the sign of the real parts of the solutions
of this cubic equation without actually solving it. The Hurwitz determinants are
D
1
= σ + b + 1 (9.98)
D
2
=
σ + b + 1 2σb(r + 1)
1 (σ + r)b
, (9.99)
= σb(σ + b + 3) −rb(σ −b −1) (9.100)
D
3
=
σ + b + 1 2σb(r −1) 0
1 (σ + r)b 0
0 σ + b + 1 2σb(r −1)
, (9.101)
= 2σb(r −1)[σb(σ + b + 3) −rb(σ −b −1)]. (9.102)
Thus the real parts of the eigenvalues are negative if r < r
c
=
σ(σ+b+3)
σ−b−1
. At r = r
c
the
characteristic equation (9.97) can be factored to give the eigenvalues −(σ +b +1), and
±i
2σ(σ+1)
σ−b−1
, corresponding to a Hopf bifurcation. The periodic solution which is created
at this value of r can be shown to be unstable so that the bifurcation is subcritical.
Example 9.14
Consider the solution to the Lorenz equations for conditions: σ = 1, r = 28, b =
8
3
with initial
conditions x(0) = y(0) = z(0) = 1. The ﬁxed point is given by
x =
b(r −1) =
8
3
(28 −1) = 8.485, (9.103)
y =
b(r −1) =
8
3
(28 −1) = 8.485, (9.104)
z = r −1 = 28 −1 = 27. (9.105)
Consideration of the roots of the characteristic equation shows the ﬁxed point here is stable:
λ
3
+ (σ +b + 1)λ
2
+ (σ +r)bλ + 2σb(r −1) = 0, (9.106)
λ
3
+
14
3
λ
2
+
232
3
λ + 144 = 0, (9.107)
λ = −2, λ = −
4
3
±
√
2528
6
i. (9.108)
Figure 9.16 shows the phase space trajectories in x, y, z space and the behavior in the time domain,
x(t), y(t), z(t).
339
8.3
8.4
8.5
8.6
x
8
8.5
9
y
26
26.5
27
27.5
z
8.3
8.4
8.5
8.6
x
8
8.5
9
y
26
26.5
27
27.5
z
0 1 2 3 4 5 6
t
2
4
6
8
10
x
1 2 3 4 5 6
t
5
5
10
15
20
25
30
y
0 1 2 3 4 5 6
t
10
20
30
40
50
60
z
Figure 9.16: Solution to Lorenz equations, σ = 1, r = 28, b =
8
3
. Initial conditions are
x(0) = y(0) = z(0) = 1.
Example 9.15
Now consider the conditions: σ = 10, r = 28, b =
8
3
. Initial conditions are x(0) = y(0) = z(0) = 1.
The ﬁxed point is again given by
x =
b(r −1) =
8
3
(28 −1) = 8.485, (9.109)
y =
b(r −1) =
8
3
(28 −1) = 8.485, (9.110)
z = r −1 = 28 −1 = 27. (9.111)
Now, consideration of the roots of the characteristic equation shows the ﬁxed point here is unstable:
λ
3
+ (σ +b + 1)λ
2
+ (σ +r)bλ + 2σb(r −1) = 0, (9.112)
340
λ
3
+
41
3
λ
2
+
304
3
λ + 1440 = 0, (9.113)
λ = −13.8546, λ = 0.094 ±10.2 i. (9.114)
The consequence of this is that the solution is chaotic! Figure 9.17 shows the phase space trajectory
and behavior in the time domain
10
0
10
20
x
20
0
20
y
0
20
40
z
10
0
10
20
x
20
0
20
y
0
20
40
z
5 10 15 20 25
t
20
15
10
5
5
10
15
20
x
5 10 15 20 25
t
20
10
10
20
30
y
0 5 10 15 20 25
t
10
20
30
40
50
z
Figure 9.17: Phase space trajectory and time domain plots for solution to Lorenz equations,
σ = 10, r = 28, b =
8
3
. Initial conditions are x(0) = y(0) = z(0) = 1.
9.9.2 Center manifold projection
This is a procedure for obtaining the nonlinear behavior near an eigenvalue with zero real
part. As an example we will look at the Lorenz system at the bifurcation point r = 1.
Linearization of the Lorenz equations near the equilibrium point at (0, 0, 0) gives rise to a
matrix with eigenvalues and eigenvectors
λ
1
= 0, e
1
=
¸
1
1
0
, (9.115)
341
λ
2
= −(σ + 1), e
2
=
¸
σ
−1
0
, (9.116)
λ
3
= −b, e
3
¸
0
0
1
. (9.117)
We use the eigenvectors as a basis to deﬁne new coordinates (u, v, w) where
¸
x
y
z
=
¸
1 σ 0
1 −1 0
0 0 1
¸
u
v
w
. (9.118)
In terms of these new variables
dx
dt
=
du
dt
+ σ
dv
dt
, (9.119)
dy
dt
=
du
dt
−
dv
dt
, (9.120)
dz
dt
=
dw
dt
, (9.121)
so that original nonlinear Lorenz equations (9.869.88) become
du
dt
+ σ
dv
dt
= −σ(1 + σ)v, (9.122)
du
dt
−
dv
dt
= (1 + σ)v −(u + σv)w, (9.123)
dw
dt
= −bw + (u + σv)(u −v). (9.124)
Solving directly for the derivatives so as to place the equations in autonomous form, we get
du
dt
= 0u −
σ
1 + σ
(u + σv)w = λ
1
u + nonlinear terms, (9.125)
dv
dt
= −(1 + σ)v +
1
1 + σ
(u + σv)w = λ
2
v + nonlinear terms, (9.126)
dw
dt
= −bw + (u + σv)(u −v) = λ
3
w + nonlinear terms. (9.127)
The objective of using the eigenvectors as basis vectors is to change the original system to
diagonal form in the linear terms. Notice that the linear portion of the system is in diagonal
form with the coeﬃcients on each linear term as a distinct eigenvalue. Furthermore, the
eigenvalues λ
2
= −(1 + σ) and λ
3
= −b are negative ensuring that the linear behavior
v = e
−(1+σ)t
and w = e
−bt
takes the solution very quickly to zero in these variables.
It would appear then that we are only left with an equation in u(t) for large t. However,
if we put v = w = 0 in the right side, dv/dt and dw/dt would be zero if it were not for the
u
2
term in dw/dt, implying that the dynamics is conﬁned to v = w = 0 only if we ignore
342
this term. According to the center manifold theorem it is possible to ﬁnd a line (called the
center manifold) which is tangent to u = 0, but is not necessarily the tangent itself, to which
the dynamics is indeed conﬁned.
We can get as good an approximation to the center manifold as we want by choosing new
variables. Expanding the equation for
dw
dt
, which has the potential problem, we get
dw
dt
= −bw + u
2
+ (σ −1)uv −σv
2
. (9.128)
Letting
˜ w = w −
u
2
b
, (9.129)
so that −bw + u
2
= −b ˜ w, we can eliminate the potential problem with the derivative of w.
In the new variables (u, v, ˜ w), the full Lorenz equations are written as
du
dt
= −
σ
1 + σ
(u + σv)( ˜ w +
u
2
b
), (9.130)
dv
dt
= −(1 + σ)v +
1
1 + σ
(u + σv)( ˜ w +
u
2
b
), (9.131)
d ˜ w
dt
= −b ˜ w + (σ −1)uv −σv
2
+
2σ
b(1 + σ)
u(u + σv)( ˜ w +
u
2
b
). (9.132)
Once again the variables v and ˜ w go to zero very quickly. Formally setting them to zero,
and examining all equations we see that
du
dt
= −
σ
b(1 + σ)
u
3
, (9.133)
dv
dt
=
1
(1 + σ)b
u
3
, (9.134)
d ˜ w
dt
=
2σ
(1 + σ)b
2
u
4
. (9.135)
Here dv/dt and d ˜ w/dt approach zero if u approaches zero. Now the equation for the evolution
of u suggests that this is the case. Simply integrating this equation and applying and initial
condition we get
u(t) = ±[u(0)]
b(1 + σ)
b(1 + σ) + 2σ[u(0)]
2
t
(9.136)
which is asymptotically stable as t →∞. So to this approximation the dynamics is conﬁned
to the v = ˜ w = 0 line. The bifurcation at r = 1 is said to be supercritical. Higher order
terms can be included to obtain improved accuracy, if necessary.
Figure 9.18 gives the projection of the solution trajectory and the center manifold in
the u, w phase space. Here the initial conditions were u(0) = 1, v(0) = 1/2, w(0) = 1, and
r = 1, σ = 1, b = 8/3. It is seen ﬁrst that the actual solution trajectory indeed approaches
the equilibrium point; moreover, the solution trajectory is well approximated by the center
manifold in the neighborhood of the equilibrium point.
343
0.2 0.4 0.6 0.8 1
u
0.1
0.2
0.3
0.4
w
Lorenz EquationSolution at Bifurcation Point
r = 1, σ = 1, b = 8/3
x = u+v, y = uv, z = w,
u(0) = 1,v(0) = 1/2, w(0) = 1.
w  u / b = 0
2
center
manifold
stable
equilibrium
point
solution trajectory
.
Figure 9.18: Projection of solution trajectory and center manifold for forced Lorenz equations
at bifurcation point.
Problems
1. For the logistics equation: x
k+1
= rx
k
(1 − x
k
); 0 < x
k
< 1, 0 < r < 4, write a short program which
determines the value of x as k →∞. Plot the bifurcation diagram, that is the limiting value of x as
a function of r for 0 < r < 4. If r
i
is the i
th
bifurcation point, that is the value at which the number
of ﬁxed points changes, make an estimate of Feigenbaum’s constant,
δ = lim
i→∞
r
i−1
−r
i
r
i
−r
i+1
2. If
x
dx
dt
+xy
dy
dt
= x −1
(2x +y)
dx
dt
+x
dy
dt
= y
write the equation in autonomous form,
dx
dt
= f(x, y)
dy
dt
= g(x, y)
Plot the lines f = 0, g = 0 in the xy phase plane. Also plot in this plane the vector ﬁeld deﬁned by
the diﬀerential equations. With a combination of analysis and numerics, ﬁnd a path in phase space
from one critical point to the other. For this path, plot x(t), y(t) and include the path in the xy phase
plane.
344
3. Show that for all initial conditions the solutions of
dx
dt
= −x +x
2
y −y
2
dy
dt
= −x
3
+xy −6z
dz
dt
= 2y
tend to x = y = z = 0 as t →∞.
4. Draw the bifurcation diagram of
dx
dt
= x
3
+x
(λ −3)
2
−1
where λ is the bifurcation parameter, indicating stable and unstable branches.
5. A twodimensional dynamical system expressed in polar form is
dr
dt
= r(r −2)(r −3)
dθ
dt
= 2
Find the (a) critical point(s), (b) periodic solution(s), and (c) analyze their stability.
6. Find a critical point of the following system, and show its local and global stability.
dx
dt
= (x −2)
(y −1)
2
−1
dy
dt
= (2 −y)
(x −2)
2
+ 1
dz
dt
= (4 −z)
7. Find the general solution of dx/dt = A x where
A =
¸
1 −3 1
2 −1 −2
2 −3 0
8. Find the solution of dx/dt = A x where
A =
¸
3 0 −1
−1 2 1
1 0 1
and
x(0) =
¸
2
2
1
9. Find the solution of dx/dt = A x where
A =
¸
1 −3 2
0 −1 0
0 −1 −2
and
x(0) =
¸
1
2
1
345
10. Find the solution of dx/dt = A x where
A =
¸
1 0 0
0 1 −1
0 1 1
and
x(0) =
¸
1
1
1
11. Express
x
1
+x
1
+x
2
+ 3x
2
= 0
x
1
+ 3x
2
+x
2
= 0
in the form x
= A x and solve. Plot the x
1
, x
2
phase plane and the vector ﬁeld deﬁned by the
system of equations.
12. Classify the critical points of
dx
dt
= x −y −3
dy
dt
= y −x
2
+ 1
and analyze their stability. Plot the global xy phase plane including critical points and vector ﬁelds.
13. The following equations arise in a natural circulation loop problem
dx
dt
= y −x
dy
dt
= a −zx
dz
dt
= xy −b
where a and b are nonnegative parameters. Find the critical points and analyze their linear stability.
Find numerically the attractors for (i) a = 2, b = 1, (ii) a = 0.95, b = 1, and (iii) a = 0, b = 1.
14. Sketch the steady state bifurcation diagrams of the following equations. Determine and indicate the
linear stability of each branch.
dx
dt
= −
1
x
−λ
(2x −λ)
dx
dt
= −x
(x −2)
2
−(λ −1)
15. The motion of a freely spinning object in space is given by
dx
dt
= yz
dy
dt
= −2xz
dz
dt
= xy
where x, y, z represent the angular velocities about the three principal axes. Show that x
2
+y
2
+z
2
is a
constant. Find the critical points and analyze their linear stability. Check by throwing a nonspherical
object (a book?) in the air.
346
16. A bead moves along a smooth circular wire of radius a which is rotating about a vertical axis with
constant angular speed ω. Taking gravity and centrifugal forces into account, the motion of the bead
is given by
a
d
2
θ
dt
2
= −g sin θ +aω
2
cos θ sin θ
where θ is the angular position of the bead with respect to the downward vertical position. Find the
equilibrium positions and their stability as the parameter µ = aω
2
/g is varied.
17. Find a Lyapunov function of the form V = ax
2
+by
2
to investigate the global stability of the critical
point x = y = 0 of the system of equations
dx
dt
= −2x
3
+ 3xy
2
dy
dt
= −x
2
y −y
3
18. Let
A =
¸
1 1 2
0 1 1
0 0 1
Solve the equation
dx
dt
= A x.
Determine the critical points and their stability.
19. Draw the bifurcation diagram of
dx
dt
= (x
2
−2)
2
−2(x
2
+ 1)(λ −1) + (λ −1)
2
indicating the stability of each branch.
20. Show that for all initial conditions the solutions of
dx
dt
= −x +x
2
y −y
2
dy
dt
= −x
3
+xy −6z
dz
dt
= 2y
tend to x = y = z = 0 as t →∞.
21. Draw the bifurcation diagram of
dx
dt
= x
3
+x
(λ −2)
3
−1
where λ is the bifurcation parameter, indicating stable and unstable branches.
22. Solve the system of equations x
= A x where
A =
¸
¸
¸
−3 0 1 0
0 −2 0 0
0 0 1 2
0 0 0 0
347
23. Find a Lyapunov function for the system
dx
dt
= −x −2y
2
dy
dt
= xy −y
3
24. Analyze the local stability of the origin in the following system
dx
dt
= −2x +y + 3z + 8y
3
dy
dt
= −6y −5z + 2z
3
dz
dt
= z +x
2
+y
3
.
25. Show that the origin is linearly stable
dx
dt
= (x −by)(x
2
+y
2
−1)
dy
dt
= (ax +y)(x
2
+y
2
−1)
where a, b > 0. Show also that the origin is stable to large perturbations, as long as they satisfy
x
2
+y
2
< 1.
26. Draw the bifurcation diagram and analyze the stability of
dx
dt
= −x(x
3
−λ −1) −0.1
27. Find the dynamical system corresponding to the Hamiltonian H(x, y) = x
2
+2xy +y
2
and then solve
it.
28. Show that solutions of the system of diﬀerential equations
dx
dt
= −x +y
3
−z
3
dy
dt
= = −y +z
3
−x
3
dz
dt
= −z +x
3
−y
3
eventually approach the origin for all initial conditions.
29. Find and sketch all critical points (x, y) of
dx
dt
= (λ −1)x −3xy
2
−x
3
dy
dt
= (λ −1)y −3x
2
y −y
3
as functions of λ. Determine the stability of (x, y) = (0, 0), and of one postbifurcation branch.
30. Write in matrix form and solve
dx
dt
= y +z
dy
dt
= z +x
dz
dt
= x +y
348
31. Find the critical point (or points) of the Van der Pol equation
d
2
x
dt
2
−a(1 −x
2
)
dx
dt
+x = 0, a > 0
and determine its (or their) stability to small perturbations. Plot the
dx
dt
, x phase plane including
critical points and vector ﬁelds.
32. Consider a straight line between x = 0 and x = l. Remove the middle half (i.e. the portion between
x = l/4 and x = 3l/4). Repeat the process on the two pieces that are left. Find the dimension of
what is left after an inﬁnite number of iterations.
33. Classify the critical points of
dx
dt
= x +y −2
dy
dt
= 2y −x
2
+ 1
and analyze their stability.
34. Determine if the origin is stable if
dx
dt
= A x, where
A =
¸
3 −3 0
0 −5 −2
−6 0 −3
35. Find a Lyapunov function of the form V = ax
2
+by
2
to investigate the global stability of the critical
point x = y = 0 of the system of equations
dx
dt
= −2x
3
+ 3xy
2
dy
dt
= −x
2
y −y
3
36. Draw a bifurcation diagram for the diﬀerential equation
dx
dt
= (x −3)(x
2
−λ)
Analyze linear stability and indicate stable and unstable branches.
37. Solve the following system of diﬀerential equations using generalized eigenvectors
dx
dt
= −5x + 2y +z
dy
dt
= −5y + 3z
dz
dt
= = −5z
38. Analyze the linear stability of the critical point of
dx
dt
= 2y +y
2
dy
dt
= −λ + 2x
2
349
39. Show that the solutions of
dx
dt
= y −x
3
dy
dt
= −x −y
3
tend to (0,0) as t →∞.
40. Sketch the bifurcation diagram showing the stable and unstable steady states of
dx
dt
= λx(2 −x) −x
41. Show in parameter space the diﬀerent possible behaviors of
dx
dt
= a +x
2
y −2bx −x
dy
dt
= bx −x
2
y
where a, b > 0.
42. Show that the H´enonHeiles system
d
2
x
dt
2
= −x −2xy
d
2
y
dt
2
= −y +y
2
−x
2
is Hamiltonian. Find the Hamiltonian of the system, and determine the stability of the critical point
at the origin.
43. Solve x
= A x where
A =
2 1
0 2
using the exponential matrix.
44. Sketch the steady state bifurcation diagrams of
dx
dt
= (x −λ)(x +λ)[(x −3)
2
+ (λ −1)
2
−1]
Determine the linear stability of each branch; indicate the stable and unstable ones diﬀerently on the
diagram.
45. Classify the critical point of
d
2
x
dt
2
+ (λ −λ
0
)x = 0
46. Show that x = 0 is a stable critical point of the diﬀerential equation
dx
dt
= −
¸
i=0
na
i
x
2i+1
where a
i
≥ 0, i = 0, 1, , n.
47. Find the stability of the critical points of the Duﬃng equation
d
2
x
dt
2
= a
dx
dt
−bx +x
3
= 0
for positive and negative values of a and b. Sketch the ﬂow lines.
350
48. Find a Lyapunov function to investigate the critical point x = y = 0 of the system of equations
dx
dt
= −2x
3
+ 3xy
2
dx
dt
= −x
2
y −y
3
49. The populations x and y of two competing animal species are governed by
dx
dt
= x −2xy
dy
dt
= −y +xy
What are the steadystate populations? Is the situation stable?
50. For the Lorenz equations with b = 8/3, r = 28 and initial conditions x(0) = 4, y(0) = 1, z(0) = 3,
numerically integrate the Lorenz equations for two cases, σ = 1, σ = 10. For each case plot the
trajectory in xyz phase space and plot x(t), y(t), z(t) for 0 < t < 50. Change the initial condition on
x to x(0) = 4.0001 and plot the diﬀerence in the predictions of x versus time for both values of σ.
351
352
Chapter 10
Appendix
The material in this section is not covered in detail; some is review from undergraduate
classes.
10.1 Trigonometric relations
sin xsin y =
1
2
cos(x −y) −
1
2
cos(x + y)
sin xcos y =
1
2
sin(x + y) +
1
2
sin(x −y)
cos xcos y =
1
2
cos(x −y) +
1
2
cos(x + y)
sin
2
x =
1
2
−
1
2
cos 2x
sin xcos x =
1
2
sin 2x
cos
2
x =
1
2
+
1
2
cos 2x
sin
3
x =
3
4
sin x −
1
4
sin 3x
sin
2
xcos x =
1
4
cos x −
1
4
cos 3x
sin xcos
2
x =
1
4
sin x +
1
4
sin 3x
cos
3
x =
3
4
cos x +
1
4
cos 3x
sin
4
x =
3
8
−
1
2
cos 2x +
1
8
cos 4x
sin
3
xcos x =
1
4
sin 2x −
1
8
sin 4x
353
sin
2
xcos
2
x =
1
8
−
1
8
cos 4x
sin xcos
3
x =
1
4
sin 2x +
1
8
sin 4x
cos
4
x =
3
8
+
1
2
cos 2x +
1
8
cos 4x
sin
5
x =
5
8
sin x −
5
16
sin 3x +
1
16
sin 5x
sin
4
xcos x =
1
8
cos x −
3
16
cos 3x +
1
16
cos 5x
sin
3
xcos
2
x =
1
8
sin x +
1
16
sin 3x −
1
16
sin 5x
sin
2
xcos
3
x = −
1
8
cos x −
1
16
cos 3x −
1
16
cos 5x
sin xcos
4
x =
1
8
sin x +
3
16
sin 3x +
1
16
sin 5x
cos
5
x =
5
8
cos x +
5
16
cos 3x +
1
16
cos 5x
10.2 RouthHurwitz criterion
Here we consider the RouthHurwitz
1
criterion. The polynomial equation
a
0
s
n
+ a
1
s
n−1
+ . . . + a
n−1
s + a
n
= 0
has roots with negative real parts if and only if the following conditions are satisﬁed:
(i) a
1
/a
0
, a
2
/a
0
, . . . , a
n
/a
0
> 0
(ii) D
i
> 0, i = 1, . . . , n
The Hurwitz determinants D
i
are deﬁned by
D
1
= a
1
D
2
=
a
1
a
3
a
0
a
2
D
3
=
a
1
a
3
a
5
a
0
a
2
a
4
0 a
1
a
3
D
n
=
a
1
a
3
a
5
. . . a
2n−1
a
0
a
2
a
4
. . . a
2n−2
0 a
1
a
3
. . . a
2n−3
0 a
0
a
2
. . . a
2n−4
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 . . . a
n
with a
i
= 0, if i > n.
1
Edward John Routh, 18311907, Canadianborn English mathematician, and Adolf Hurwitz, 18591919,
German mathematician.
354
10.3 Inﬁnite series
Deﬁnition: A power series is of the form
∞
¸
n=0
a
n
(x −a)
n
. (10.1)
The series converges if [x −a[ < R, where R is the radius of convergence.
Deﬁnition: A function f(x) is said to be analytic at x = a if f and all its derivatives exist
at this point.
An analytic function can be expanded in a Taylor series:
f(x) = f(a) + f
(a)(x −a) +
1
2
f
(a)(x −a)
2
+ (10.2)
where the function and its derivatives on the right side are evaluated at x = a. This is a
power series for f(x). We have used primes to indicate derivatives.
Example 10.1
Expand (1 +x)
n
about x = 0.
f(x) = (1 +x)
n
f(0) = 1
f
(0) = n
f
(0) = n(n −1)
.
.
.
(1 +x)
n
= 1 +nx +
1
2
n(n −1)x
2
+
A function of two variables f(x, y) can be similarly expanded
f(x, y) = f
a,b
+
∂f
∂x
a,b
(x −a) +
∂f
∂y
a,b
(y −b)
+
1
2
∂
2
f
∂x
2
a,b
(x −a)
2
+
∂
2
f
∂x∂y
a,b
(x −a)(y −b) +
1
2
∂
2
f
∂y
2
a,b
(y −b)
2
+ (10.3)
if f and all its partial derivatives exist and are evaluated at x = a, y = b.
355
10.4 Asymptotic expansions
Deﬁnition: Consider two function f(x) and g(x). We write that
f(x) ∼ g(x), if lim
x→a
f(x)
g(x)
= 1.
f(x) = o[g(x)], if lim
x→a
f(x)
g(x)
= 0;
f(x) = O[g(x)], if lim
x→a
f(x)
g(x)
=constant;
10.4.1 Expansion of integrals
10.4.2 Integration and diﬀerentiation of series
10.5 Limits and continuity
10.6 Special functions
10.6.1 Gamma function
The gamma function is deﬁned by
Γ(x) =
∞
0
e
−t
t
x−1
dt (10.4)
Generally, we are interested in x > 0, but results are available for all x. Some properties are:
1. Γ(1) = 1.
2. Γ(x) = (x −1)Γ(x −1), x > 1.
3. Γ(x) = (x −1)(x −2) (x −r)Γ(x −r), x > r.
4. Γ(n) = (n −1)!, where n is a positive integer.
5. Γ(x) ∼
2π
x
x
x
e
−x
1 +
1
12x
+
1
288x
2
+ . . .
, (Stirling’s formula)
Bender and Orszag show that Stirling’s formula is a divergent series. It is an asymptotic
series, but as more terms are added, the solution can actually get worse. The Gamma
function and its amplitude are plotted in Figure 10.1.
10.6.2 Beta function
The beta function is deﬁned by
B(p, q) =
1
0
x
p−1
(1 −x)
q−1
dx (10.5)
Property:
B(p, q) =
Γ(p)Γ(q)
Γ(p + q)
(10.6)
356
4 2 2 4
x
15
10
5
5
10
15
Γ
15 10 5 0 5 10 15
x
1. · 10
8
0.0001
1
10000
1. · 10
8
Γ
Figure 10.1: Gamma function and amplitude of Gamma function.
10.6.3 Riemann zeta function
This is deﬁned as
ζ(x) =
∞
¸
n=1
n
−x
(10.7)
The function can be evaluated in closed form for even integer values of x. It can be shown
that ζ(2) = π
2
/6, ζ(4) = π
4
/90, and ζ(6) = π
6
/945. All negative even integer values of x
give ζ(x) = 0. Further lim
x→∞
ζ(x) = 1. For large negative values of x, the Riemann zeta
function oscillates with increasing amplitude. Plots of the Riemann zeta function for x > 1
and the amplitude of the Riemann zeta function over a broader domain on a logarithmic
scale as shown in Figure 10.2.
2 4 6 8 10
x
2
4
6
8
10
ζ
30 20 10 0 10
x
0.0001
0.1
100
100000.
ζ
Figure 10.2: Riemann zeta function and amplitude of Riemann zeta function.
10.6.4 Error function
The error function is deﬁned by
erf (x) =
2
√
π
x
0
e
−ξ
2
dξ (10.8)
and the complementary error function by
erfc (x) = 1 −erf x (10.9)
The error function and the error function complement are plotted in Figure 10.3.
357
4 2 2 4
0.5
1
1.5
2
4 2 2 4
1
0.5
0.5
1
x
erf (x)
x
erfc (x)
Figure 10.3: Error function and error function complement.
10.6.5 Fresnel integrals
These are deﬁned by
C(x) =
x
0
cos
πt
2
2
dt (10.10)
S(x) =
x
0
sin
πt
2
2
dt (10.11)
The Fresnel cosine and sine integrals are plotted in Figure 10.4.
7.5 5 2.5 2.5 5 7.5
0.75
0.5
0.25
0.25
0.5
0.75
7.5 5 2.5 2.5 5 7.5
0.6
0.4
0.2
0.2
0.4
0.6
x
C(x)
x
S(x)
Figure 10.4: Fresnel cosine, C(x), and sine, S(x), integrals.
10.6.6 Sine and cosineintegral functions
The sineintegral function is deﬁned by
Si (x) =
x
0
sin ξ
ξ
dξ (10.12)
and the cosineintegral function by
Ci (x) =
x
0
cos ξ
ξ
dξ (10.13)
358
The sine integral function is real valued for x ∈ (−∞, ∞). The cosine integral function is
real valued for x ∈ [0, ∞). We also have lim
x→0
+
Ci(x) →−∞. The cosine integral takes on
a value of zero at discrete positive real values, and has an amplitude which slowly decays as
x →∞. The sine integral and cosine integral functions are plotted in Figure 10.5.
20 10 10 20
1.5
1
0.5
0.5
1
1.5
5 10 15 20 25 30
0.2
0.2
0.4
x
x
Si (x) Ci (x)
Figure 10.5: Sine integral function, Si(x), and cosine integral function Ci(x).
10.6.7 Elliptic integrals
The Legendre elliptic integral of the ﬁrst kind is
F(y, k) =
y
0
dη
(1 −η
2
)(1 −k
2
η
2
)
(10.14)
Another common way of writing the elliptic integral is to take η = sin φ, so that
F(φ, k) =
φ
0
dφ
(1 −k
2
sin
2
φ)
(10.15)
The Legendre elliptic integral of the second kind is
E(y, k) =
y
0
(1 −k
2
η
2
)
(1 −η
2
)
dη (10.16)
which, on again using η = sin φ, becomes
E(φ, k) =
φ
0
1 −k
2
sin
2
φ dφ (10.17)
The Legendre elliptic integral of the third kind is
Π(y, n, k) =
φ
0
dφ
(1 + nsin
2
φ)
(1 −k
2
sin
2
φ)
(10.18)
which is equivalent to
Π(φ, n, k) =
φ
0
1 −k
2
sin
2
φ dφ (10.19)
359
For φ = π/2, we have the complete elliptic integrals:
F
π
2
, k
=
π/2
0
dφ
(1 −k
2
sin
2
φ)
(10.20)
E
π
2
, k
=
π/2
0
1 −k
2
sin
2
φ dφ (10.21)
Π
π
2
, n, k
=
π/2
0
1 −k
2
sin
2
φ dφ (10.22)
10.6.8 Gauss’s hypergeometric function
An integral representation of Gauss’s hypergeometric function is
2
F
1
(a, b, c, x) =
Γ(c)
Γ(b)Γ(c −b)
1
0
t
b−1
(1 −t)
c−b−1
(1 −tx)
−a
dt (10.23)
For special values of a, b, and c, this very general function reduces to other functions such
as tanh
−1
.
10.6.9 δ distribution and Heaviside function
Deﬁnition: The Dirac
2
δdistribution (or generalized function, or simply function), is deﬁned
by
β
α
f(x)δ(x −a)dx =
0 if a ∈ [α, β]
f(a) if a ∈ [α, β]
(10.24)
From this it follows that
δ(x −a) = 0 if x = a (10.25)
∞
−∞
δ(x −a)dx = 1. (10.26)
The δdistribution may be imagined in a limiting fashion as
δ(x −a) = lim
→0
+
∆
(x −a)
where ∆
(x −a) has one of the following forms:
1.
∆
(x −a) =
0 if x < a −
2
1
if a −
2
≤ x ≤ a +
2
0 if x > a +
2
(10.27)
2.
∆
(x −a) =
π[(x −a)
2
+
2
]
(10.28)
2
Paul Adrien Maurice Dirac, 19021984, English physicist.
360
3.
∆
(x −a) =
1
√
π
e
−(x−a)
2
/
(10.29)
The derivative of the function

h_\epsilon(x - a) = \begin{cases} 0 & \text{if } x < a - \frac{\epsilon}{2} \\ \frac{1}{\epsilon}(x - a) + \frac{1}{2} & \text{if } a - \frac{\epsilon}{2} \le x \le a + \frac{\epsilon}{2} \\ 1 & \text{if } x > a + \frac{\epsilon}{2} \end{cases}    (10.30)

is \Delta_\epsilon(x - a) in item (1) above. If we define the Heaviside^3 function, H(x - a), as

H(x - a) = \lim_{\epsilon \to 0^+} h_\epsilon(x - a),    (10.31)

then

\frac{d}{dx} H(x - a) = \delta(x - a).    (10.32)
The generator of the Dirac delta function, \Delta_\epsilon(x - a), and the generator of the Heaviside function, h_\epsilon(x - a), are plotted for a = 0 and \epsilon = 1/5 in Figure 10.6. As \epsilon \to 0^+, \Delta_\epsilon has its width decrease and its height increase in such a fashion that its area remains constant; simultaneously, h_\epsilon has its slope steepen in the region where it jumps from zero to unity.

Figure 10.6: Generators of the Dirac delta function and the Heaviside function, \Delta_\epsilon(x - a) and h_\epsilon(x - a), plotted for a = 0 and \epsilon = 1/5.
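The sifting property (10.24) can be observed numerically with the Gaussian generator: as ε shrinks, the integral of f(x)Δ_ε(x - a) approaches f(a). A minimal sketch (the helper names and tolerances are our own; trapezoidal quadrature):

```python
import math

def delta_gauss(x, a, eps):
    # Gaussian generator of the delta distribution, eq. (10.29).
    return math.exp(-((x - a) / eps)**2) / (eps * math.sqrt(math.pi))

def sift(f, a, eps, lo=-1.0, hi=1.0, n=40000):
    # Trapezoidal approximation of  int f(x) Delta_eps(x - a) dx,
    # which should approach f(a) as eps -> 0+.
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        g = f(x) * delta_gauss(x, a, eps)
        total += g if 0 < i < n else 0.5 * g
    return total * h

a = 0.3
for eps in (0.2, 0.1, 0.05):
    print(eps, sift(math.cos, a, eps))  # approaches cos(0.3)
```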
10.7 Singular integrals
10.8 Chain rule
A function of several variables, f(x_1, x_2, \ldots, x_n), may be differentiated using the chain rule:

df = \frac{\partial f}{\partial x_1} dx_1 + \frac{\partial f}{\partial x_2} dx_2 + \cdots + \frac{\partial f}{\partial x_n} dx_n.    (10.33)
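As a concrete illustration of (10.33), take the hypothetical function f(x_1, x_2) = x_1^2 sin x_2 and compare the actual change in f over a small step with the linear estimate built from the partial derivatives:

```python
import math

# Hypothetical example for eq. (10.33): f(x1, x2) = x1^2 sin(x2).
# For small increments, df ~ (df/dx1) dx1 + (df/dx2) dx2.
def f(x1, x2):
    return x1**2 * math.sin(x2)

x1, x2 = 1.5, 0.8
dx1, dx2 = 1e-6, -2e-6

exact_change = f(x1 + dx1, x2 + dx2) - f(x1, x2)
# Analytical partial derivatives:
df_dx1 = 2 * x1 * math.sin(x2)
df_dx2 = x1**2 * math.cos(x2)
linear_estimate = df_dx1 * dx1 + df_dx2 * dx2

print(exact_change, linear_estimate)  # agree to second order in the increments
```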
^3 Oliver Heaviside, 1850-1925, English mathematician.
Differentiation of an integral is done using the Leibniz rule: for

y(x) = \int_{a(x)}^{b(x)} f(x, t) \, dt,    (10.34)

we have

\frac{dy(x)}{dx} = \frac{d}{dx} \int_{a(x)}^{b(x)} f(x, t) \, dt = f(x, b(x)) \frac{db(x)}{dx} - f(x, a(x)) \frac{da(x)}{dx} + \int_{a(x)}^{b(x)} \frac{\partial f(x, t)}{\partial x} \, dt.    (10.35)
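The Leibniz rule can be checked on a concrete (hypothetical) example, y(x) = ∫_x^{x²} sin(xt) dt, by comparing the right side of (10.35) against a finite-difference estimate of dy/dx; the quadrature helper and step sizes below are our own choices.

```python
import math

def quad(f, a, b, n=4000):
    # Composite trapezoidal rule for  int_a^b f(t) dt.
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Example for eq. (10.35):  y(x) = int_{x}^{x^2} sin(x t) dt,
# with a(x) = x and b(x) = x^2.
def y(x):
    return quad(lambda t: math.sin(x * t), x, x**2)

x = 1.2
# Right side of the Leibniz rule:
leibniz = (math.sin(x * x**2) * 2 * x                       # f(x, b(x)) db/dx
           - math.sin(x * x) * 1.0                          # f(x, a(x)) da/dx
           + quad(lambda t: t * math.cos(x * t), x, x**2))  # int df/dx dt

# Central-difference approximation of dy/dx:
h = 1e-5
numeric = (y(x + h) - y(x - h)) / (2 * h)
print(leibniz, numeric)  # should agree closely
```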
10.9 Complex numbers
Here we brieﬂy introduce some basic elements of complex variable theory. Recall that the
imaginary number i is deﬁned such that
i^2 = -1, \qquad i = \sqrt{-1}.    (10.36)
10.9.1 Euler’s formula
We can obtain a very useful result, Euler's formula, by considering the following Taylor expansions of common functions about t = 0:

e^t = 1 + t + \frac{1}{2!} t^2 + \frac{1}{3!} t^3 + \frac{1}{4!} t^4 + \frac{1}{5!} t^5 + \ldots,    (10.37)

\sin t = 0 + t + 0 \frac{1}{2!} t^2 - \frac{1}{3!} t^3 + 0 \frac{1}{4!} t^4 + \frac{1}{5!} t^5 + \ldots,    (10.38)

\cos t = 1 + 0 t - \frac{1}{2!} t^2 + 0 \frac{1}{3!} t^3 + \frac{1}{4!} t^4 + 0 \frac{1}{5!} t^5 + \ldots.    (10.39)
With these expansions, now consider the following combinations: (\cos t + i \sin t)|_{t=\theta} and e^t|_{t=i\theta}:

\cos \theta + i \sin \theta = 1 + i\theta - \frac{1}{2!} \theta^2 - i \frac{1}{3!} \theta^3 + \frac{1}{4!} \theta^4 + i \frac{1}{5!} \theta^5 + \ldots,    (10.41)

e^{i\theta} = 1 + i\theta + \frac{1}{2!} (i\theta)^2 + \frac{1}{3!} (i\theta)^3 + \frac{1}{4!} (i\theta)^4 + \frac{1}{5!} (i\theta)^5 + \ldots,    (10.42)

= 1 + i\theta - \frac{1}{2!} \theta^2 - i \frac{1}{3!} \theta^3 + \frac{1}{4!} \theta^4 + i \frac{1}{5!} \theta^5 + \ldots    (10.43)
As the two series are identical, we have Euler's formula

e^{i\theta} = \cos \theta + i \sin \theta.    (10.44)

Powers of complex numbers can be easily obtained using de Moivre's^4 formula:

(\cos \theta + i \sin \theta)^n = e^{in\theta} = \cos n\theta + i \sin n\theta.    (10.45)
^4 Abraham de Moivre, 1667-1754, French mathematician.
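Both formulas are easy to confirm numerically with Python's built-in complex arithmetic; the values of θ and n below are arbitrary:

```python
import cmath
import math

theta = 0.7
n = 5

# Euler's formula, eq. (10.44): e^{i theta} = cos(theta) + i sin(theta).
euler_lhs = cmath.exp(1j * theta)
euler_rhs = complex(math.cos(theta), math.sin(theta))
print(abs(euler_lhs - euler_rhs))  # ~ 0

# de Moivre's formula, eq. (10.45):
# (cos t + i sin t)^n = cos(n t) + i sin(n t).
demoivre_lhs = euler_rhs**n
demoivre_rhs = complex(math.cos(n * theta), math.sin(n * theta))
print(abs(demoivre_lhs - demoivre_rhs))  # ~ 0
```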
Figure 10.7: Polar and Cartesian representation of a complex number z.
10.9.2 Polar and Cartesian representations
Now if we take x and y to be real numbers and define the complex number z to be

z = x + iy,    (10.46)

we can multiply and divide by \sqrt{x^2 + y^2} to obtain

z = \sqrt{x^2 + y^2} \left( \frac{x}{\sqrt{x^2 + y^2}} + i \frac{y}{\sqrt{x^2 + y^2}} \right).    (10.47)
Noting the similarities between this and the transformation between Cartesian and polar coordinates suggests we adopt

r = \sqrt{x^2 + y^2}, \qquad \cos \theta = \frac{x}{\sqrt{x^2 + y^2}}, \qquad \sin \theta = \frac{y}{\sqrt{x^2 + y^2}}.    (10.48)

Thus we have

z = r (\cos \theta + i \sin \theta),    (10.49)

z = r e^{i\theta}.    (10.50)
The polar and Cartesian representation of a complex number z is shown in Figure 10.7.
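The conversion in (10.48)-(10.50) is exactly what the standard library's cmath.polar computes; a small sketch for the arbitrary point z = 3 + 4i:

```python
import cmath
import math

z = 3.0 + 4.0j

# r and theta from eq. (10.48); cmath.polar performs the same conversion.
r = math.sqrt(z.real**2 + z.imag**2)
theta = math.atan2(z.imag, z.real)
print(r, theta)                   # 5.0 and atan2(4, 3)
print(cmath.polar(z))             # the same (r, theta) pair
print(r * cmath.exp(1j * theta))  # recovers z = r e^{i theta}, eq. (10.50)
```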
Now we can define the complex conjugate \bar{z} as

\bar{z} = x - iy,    (10.51)

\bar{z} = \sqrt{x^2 + y^2} \left( \frac{x}{\sqrt{x^2 + y^2}} - i \frac{y}{\sqrt{x^2 + y^2}} \right),    (10.52)

\bar{z} = r (\cos \theta - i \sin \theta),    (10.53)

\bar{z} = r (\cos(-\theta) + i \sin(-\theta)),    (10.54)

\bar{z} = r e^{-i\theta}.    (10.55)
Note now that

z \bar{z} = (x + iy)(x - iy) = x^2 + y^2 = |z|^2,    (10.56)

z \bar{z} = r e^{i\theta} \, r e^{-i\theta} = r^2 = |z|^2.    (10.57)

We also have

\sin \theta = \frac{e^{i\theta} - e^{-i\theta}}{2i},    (10.58)

\cos \theta = \frac{e^{i\theta} + e^{-i\theta}}{2}.    (10.59)
10.9.3 Cauchy-Riemann equations
Now it is possible to define complex functions of complex variables, W(z). For example, take a complex function to be defined as

W(z) = z^2 + z,    (10.60)

= (x + iy)^2 + (x + iy),    (10.61)

= x^2 + 2xyi - y^2 + x + iy,    (10.62)

= \left( x^2 + x - y^2 \right) + i (2xy + y).    (10.63)
In general, we can say

W(z) = \phi(x, y) + i \psi(x, y).    (10.64)

Here \phi and \psi are real functions of real variables.

Now W(z) is defined as analytic at z_o if dW/dz exists at z_o and is independent of the direction in which it was calculated. That is, using the definition of the derivative,

\frac{dW}{dz}\bigg|_{z_o} = \lim_{\Delta z \to 0} \frac{W(z_o + \Delta z) - W(z_o)}{\Delta z}.    (10.65)
Now there are many paths that we can choose to evaluate the derivative. Let us consider two distinct paths, y = C_1 and x = C_2. We will get a result which can be shown to be valid for arbitrary paths. For y = C_1, we have \Delta z = \Delta x, so

\frac{dW}{dz}\bigg|_{z_o} = \lim_{\Delta x \to 0} \frac{W(x_o + iy_o + \Delta x) - W(x_o + iy_o)}{\Delta x} = \frac{\partial W}{\partial x}\bigg|_y.    (10.66)
For x = C_2, we have \Delta z = i \Delta y, so

\frac{dW}{dz}\bigg|_{z_o} = \lim_{\Delta y \to 0} \frac{W(x_o + iy_o + i\Delta y) - W(x_o + iy_o)}{i \Delta y} = \frac{1}{i} \frac{\partial W}{\partial y}\bigg|_x = -i \frac{\partial W}{\partial y}\bigg|_x.    (10.67)
Now for an analytic function, we need

\frac{\partial W}{\partial x}\bigg|_y = -i \frac{\partial W}{\partial y}\bigg|_x,    (10.68)
or, expanding, we need

\frac{\partial \phi}{\partial x} + i \frac{\partial \psi}{\partial x} = -i \left( \frac{\partial \phi}{\partial y} + i \frac{\partial \psi}{\partial y} \right)    (10.69)

= \frac{\partial \psi}{\partial y} - i \frac{\partial \phi}{\partial y}.    (10.70)
For equality, and thus path independence of the derivative, we require

\frac{\partial \phi}{\partial x} = \frac{\partial \psi}{\partial y}, \qquad \frac{\partial \phi}{\partial y} = -\frac{\partial \psi}{\partial x}.    (10.71)

These are the well-known Cauchy-Riemann equations for analytic functions of complex variables.
Now most common functions are easily shown to be analytic. For example, for the function W(z) = z^2 + z, which can be expressed as W(z) = (x^2 + x - y^2) + i(2xy + y), we have

\phi(x, y) = x^2 + x - y^2, \qquad \psi(x, y) = 2xy + y,    (10.72)

\frac{\partial \phi}{\partial x} = 2x + 1, \qquad \frac{\partial \psi}{\partial x} = 2y,    (10.73)

\frac{\partial \phi}{\partial y} = -2y, \qquad \frac{\partial \psi}{\partial y} = 2x + 1.    (10.74)

Note that the Cauchy-Riemann equations are satisfied, since \partial \phi / \partial x = \partial \psi / \partial y and \partial \phi / \partial y = -\partial \psi / \partial x. So the derivative is independent of direction, and we can say

\frac{dW}{dz} = \frac{\partial W}{\partial x}\bigg|_y = (2x + 1) + i(2y) = 2(x + iy) + 1 = 2z + 1.    (10.75)

We could also get this result by the ordinary rules of derivatives for real functions.
For an example of a non-analytic function, consider W(z) = \bar{z}. Thus

W(z) = x - iy.    (10.76)

So \phi = x and \psi = -y, giving \partial \phi / \partial x = 1, \partial \phi / \partial y = 0, \partial \psi / \partial x = 0, and \partial \psi / \partial y = -1. Since \partial \phi / \partial x \neq \partial \psi / \partial y, the Cauchy-Riemann equations are not satisfied, and the derivative depends on direction.
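A finite-difference sketch makes the contrast concrete: for W(z) = z^2 + z the Cauchy-Riemann residuals vanish (to rounding), while for W(z) = \bar{z} the first equation fails because \phi_x = 1 while \psi_y = -1. The helper cr_residuals and its step size are our own:

```python
def cr_residuals(W, x, y, h=1e-6):
    # Finite-difference check of the Cauchy-Riemann equations (10.71)
    # for W(z) = phi(x, y) + i psi(x, y).
    dWdx = (W(complex(x + h, y)) - W(complex(x - h, y))) / (2 * h)
    dWdy = (W(complex(x, y + h)) - W(complex(x, y - h))) / (2 * h)
    # Residuals  phi_x - psi_y  and  phi_y + psi_x:
    return (dWdx.real - dWdy.imag, dWdy.real + dWdx.imag)

# Analytic example from the text: W(z) = z^2 + z.
print(cr_residuals(lambda z: z * z + z, 1.3, -0.4))  # both residuals ~ 0

# Non-analytic example: W(z) = conj(z); the first residual is
# phi_x - psi_y = 1 - (-1) = 2, so the equations fail.
print(cr_residuals(lambda z: z.conjugate(), 1.3, -0.4))
```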
Problems
1. Find the limit as x \to 0 of

\frac{4 \cosh x + \sinh(\arctan \ln \cos 2x) - 4}{e^{-x} + \arcsin x - \sqrt{1 + x^2}}.
2. Find \frac{d\phi}{dx} in two different ways, where

\phi = \int_{x^2}^{x^4} x \sqrt{y} \, dy.
3. Determine

(a) \sqrt[4]{i},

(b) i^{i^{i^{\sqrt{i}}}}.
4. Write three terms of a Taylor series expansion for the function f(x) = exp(tan x) about the point
x = π/4. For what range of x is the series convergent?
5. Find all complex numbers z = x + iy such that |z + 2i| = |1 + i|.
6. Determine \lim_{n \to \infty} z_n for z_n = \frac{3}{n} + \left( \frac{n+1}{n+2} \right) i.
7. A particle is constrained to a path which is defined by the function s(x, y) = x^2 + y - 5 = 0. The velocity component in the x-direction is dx/dt = 2y. What are the position and velocity components in the y-direction when x = 4?
8. The error function is defined as \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-u^2} \, du. Determine its derivative with respect to x.
9. Verify that

\lim_{n \to \infty} \int_\pi^{2\pi} \frac{\sin nx}{nx} \, dx = 0.
10. Write a Taylor series expansion for the function f(x, y) = x^2 \cos y about the point x = 2, y = \pi. Include the x^2, y^2, and xy terms.
11. Show that

\phi = \int_0^\infty e^{-t^2} \cos 2tx \, dt

satisfies the differential equation

\frac{d\phi}{dx} + 2\phi x = 0.
12. Evaluate the Dirichlet discontinuous integral

I = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\sin ax}{x} \, dx

for -\infty < a < \infty. You can use the results of example 3.11 of Greenberg.
13. Defining

u(x, y) = \frac{x^3 - y^3}{x^2 + y^2},

except at x = y = 0, where u = 0, show that u_x(x, y) exists at x = y = 0 but is not continuous there.
14. Using complex numbers show that

(a) \cos^3 x = \frac{1}{4} (\cos 3x + 3 \cos x),

(b) \sin^3 x = \frac{1}{4} (3 \sin x - \sin 3x).
Bibliography
M. Abramowitz and I. A. Stegun, eds., Handbook of Mathematical Functions, Dover, New York, 1964.
V. I. Arnold, Ordinary Differential Equations, MIT Press, Cambridge, MA, 1973.
R. Aris, Vectors, Tensors, and the Basic Equations of Fluid Mechanics, Dover, New York, 1962.
N. H. Asmar, Applied Complex Analysis with Partial Differential Equations, Prentice-Hall, Upper Saddle River, NJ, 2002.
G. I. Barenblatt, Scaling, Self-Similarity, and Intermediate Asymptotics, Cambridge University Press, Cambridge, U.K., 1996.
R. Bellman and K. L. Cooke, Differential-Difference Equations, Academic Press, New York, NY, 1963.
C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers, Springer-Verlag, New York, NY, 1999.
A. I. Borisenko and I. E. Tarapov, Vector and Tensor Analysis with Applications, Dover, New York, 1968.
M. Braun, Differential Equations and Their Applications, Springer-Verlag, New York, NY, 1983.
I. N. Bronshtein and K. A. Semendyayev, Handbook of Mathematics, Springer, Berlin, 1998.
C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang, Spectral Methods in Fluid Dynamics, Springer-Verlag, New York, NY, 1988.
G. F. Carrier and C. E. Pearson, Ordinary Differential Equations, SIAM, Philadelphia, 1991.
P. G. Ciarlet, Introduction to Numerical Linear Algebra and Optimisation, Cambridge University Press, Cambridge, U.K., 1989.
J. A. Cochran, H. C. Wiser and B. J. Rice, Advanced Engineering Mathematics, 2nd Ed., Brooks/Cole, Monterey, CA, 1987.
R. Courant and D. Hilbert, Methods of Mathematical Physics, Vols. 1 and 2, Wiley, New York, 1953.
I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, PA, 1992.
L. Debnath and P. Mikusinski, Introduction to Hilbert Spaces with Applications, Academic Press, 1998.
P. G. Drazin, Nonlinear Systems, Cambridge University Press, Cambridge, U.K., 1992.
R. D. Driver, Ordinary and Delay Differential Equations, Springer-Verlag, New York, NY, 1977.
J. Feder, Fractals, Plenum Press, New York, NY, 1988.
C. A. J. Fletcher, Computational Techniques for Fluid Dynamics, Springer-Verlag, New York, NY, 1991.
B. A. Finlayson, The Method of Weighted Residuals and Variational Principles, Academic Press, New York, NY, 1972.
B. Fornberg, A Practical Guide to Pseudospectral Methods, Cambridge, New York, NY, 1998.
B. Friedman, Principles and Techniques of Applied Mathematics, Dover Publications, New York, NY, 1956.
J. Gleick, Chaos, Viking, New York, 1987.
G. H. Golub and C. F. Van Loan, Matrix Computations, Third Edition, Johns Hopkins, Baltimore, 1996.
S. W. Goode, An Introduction to Differential Equations and Linear Algebra, Prentice-Hall, Englewood Cliffs, NJ, 1991.
D. Gottlieb and S. A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications, SIAM, Philadelphia, PA, 1977.
M. D. Greenberg, Foundations of Applied Mathematics, Prentice-Hall, Englewood Cliffs, NJ, 1978.
J. Guckenheimer and P. H. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer-Verlag, New York, NY, 1983.
J. Hale and H. Koçak, Dynamics and Bifurcations, Springer-Verlag, New York, NY, 1991.
F. B. Hildebrand, Advanced Calculus for Applications, 2nd Ed., Prentice-Hall, Englewood Cliffs, NJ, 1976.
M. H. Holmes, Introduction to Perturbation Methods, Springer-Verlag, New York, NY, 1995.
M. Humi and W. Miller, Second Course in Ordinary Differential Equations for Scientists and Engineers, Springer-Verlag, New York, NY, 1988.
E. J. Hinch, Perturbation Methods, Cambridge, Cambridge, UK, 1991.
D. W. Jordan and P. Smith, Nonlinear Ordinary Differential Equations, Clarendon Press, Oxford, U.K., 1987.
P. B. Kahn, Mathematical Methods for Engineers and Scientists, John Wiley and Sons, New York, NY, 1990.
W. Kaplan, Advanced Calculus, Fifth Edition, Addison-Wesley, Boston, MA, 2003.
J. Kevorkian and J. D. Cole, Perturbation Methods in Applied Mathematics, Springer-Verlag, New York, 1981.
J. Kevorkian and J. D. Cole, Multiple Scale and Singular Perturbation Methods, Springer-Verlag, New York, 1996.
L. D. Kovach, Advanced Engineering Mathematics, Addison-Wesley, Reading, MA, 1982.
R. Kress, Numerical Analysis, Springer-Verlag, New York, 1998.
E. Kreyszig, Advanced Engineering Mathematics, John Wiley and Sons, New York, NY, 1962.
E. Kreyszig, Introductory Functional Analysis with Applications, John Wiley and Sons, New York, NY, 1978.
C. C. Lin and L. A. Segel, Mathematics Applied to Deterministic Problems in the Natural Sciences, SIAM, Philadelphia, 1988.
J. R. Lee, Advanced Calculus with Linear Analysis, Academic Press, New York, NY, 1972.
R. J. Lopez, Advanced Engineering Mathematics, Addison Wesley Longman, Boston, MA, 2001.
J. Mathews and R. L. Walker, Mathematical Methods of Physics, Addison-Wesley, Redwood City, CA, 1970.
A. J. McConnell, Applications of Tensor Analysis, Dover, New York, 1957.
A. N. Michel and C. J. Herget, Applied Algebra and Functional Analysis, Dover, New York, 1981.
P. M. Morse and H. Feshbach, Methods of Theoretical Physics, Vols. 1 and 2, McGraw-Hill, New York, 1953.
J. A. Murdock, Perturbations, Theory and Methods, John Wiley and Sons, New York, NY, 1991.
P. V. O'Neil, Advanced Engineering Mathematics, Wadsworth, 1987.
J. N. Reddy, Applied Functional Analysis and Variational Methods in Engineering, McGraw-Hill, New York, NY, 1986.
J. N. Reddy and M. L. Rasmussen, Advanced Engineering Analysis, John Wiley and Sons, New York, NY, 1982.
F. Riesz and B. Sz.-Nagy, Functional Analysis, Dover, 1990.
K. F. Riley, M. P. Hobson, and S. J. Bence, Mathematical Methods for Physics and Engineering, Second Edition, Cambridge, Cambridge, UK, 2002.
M. Rosenlicht, Introduction to Analysis, Dover, New York, 1968.
H. M. Schey, Div, Grad, Curl, and All That, 2nd Ed., W. W. Norton, London, 1992.
M. J. Schramm, Introduction to Real Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1996.
I. S. Sokolnikoff and R. M. Redheffer, Mathematics of Physics and Modern Engineering, 2nd Ed., McGraw-Hill, New York, NY, 1966.
G. Stephenson and P. M. Radmore, Advanced Mathematical Methods for Engineering and Science Students, Cambridge University Press, Cambridge, U.K., 1990.
G. Strang, Linear Algebra and its Applications, 2nd Ed., Academic Press, New York, NY, 1980.
G. Strang, Introduction to Applied Mathematics, Wellesley-Cambridge, Wellesley, MA, 1986.
S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos, Springer-Verlag, New York, NY, 1990.
M. Van Dyke, Perturbation Methods in Fluid Mechanics, Parabolic Press, Stanford, CA, 1975.
A. Varma and M. Morbidelli, Mathematical Methods in Chemical Engineering, Oxford Univ. Press, 1997.
C. R. Wylie and L. C. Barrett, Advanced Engineering Mathematics, 6th Ed., McGraw-Hill, New York, NY, 1995.
E. Zeidler, Applied Functional Analysis, Springer-Verlag, New York, NY, 1995.
Preface

These are lecture notes for AME 561 Mathematical Methods I, the first of a pair of courses on applied mathematics taught at the Department of Aerospace and Mechanical Engineering of the University of Notre Dame. Most of the students in this course are beginning graduate students in engineering coming from a wide variety of backgrounds. The objective of the course is to provide a survey of a variety of topics in applied mathematics, including multidimensional calculus, ordinary differential equations, perturbation methods, vectors and tensors, linear analysis, linear algebra, and dynamic systems. The companion course, AME 562, covers complex variables, integral transforms, and partial differential equations.

These notes emphasize method and technique over rigor and completeness; the student should call on textbooks and other reference materials. It should also be remembered that practice is essential to the learning process; the student would do well to apply the techniques presented here by working as many problems as possible.

These notes have appeared in various forms for the past few years; minor changes and additions have been made and will continue to be made. We would be happy to hear from you about errors or suggestions for improvement. The notes, along with much information on the course itself, can be found on the world wide web at http://www.nd.edu/~powers/ame.561. At this stage, anyone is free to duplicate the notes on their own printers.

Mihir Sen
Mihir.Sen.1@nd.edu
http://www.nd.edu/~msen

Joseph M. Powers
powers@nd.edu
http://www.nd.edu/~powers

Notre Dame, Indiana, USA
April 9, 2003

Copyright © 2003 by Mihir Sen and Joseph M. Powers. All rights reserved.
Chapter 1

Multivariable calculus

see Kaplan, Chapter 2: 2.12-2.22, Chapter 3: 3.9;
see Lopez, Chapters 46-48;
see Riley, Hobson, and Bence, Chapters 4, 19, 20, 32.

1.1 Implicit functions

We can think of a relation such as f(x1, x2, . . . , xn, y) = 0, also written as f(xi, y) = 0, in some region as an implicit function of y with respect to the other variables. We cannot have ∂f/∂y = 0, because then f would not depend on y in this region. In principle, we can write

  y = y(x1, x2, . . . , xn)  or  y = y(xi),   (1.1)

if ∂f/∂y ≠ 0. The derivative ∂y/∂xi can be determined from f = 0 without explicitly solving for y. First, from the chain rule, we have

  df = (∂f/∂x1) dx1 + (∂f/∂x2) dx2 + . . . + (∂f/∂xi) dxi + . . . + (∂f/∂xn) dxn + (∂f/∂y) dy = 0.   (1.2)

Differentiating with respect to xi while holding all the other xj, j ≠ i, constant, we get

  (∂f/∂xi) + (∂f/∂y)(∂y/∂xi) = 0,   (1.3)

so that

  ∂y/∂xi = - (∂f/∂xi) / (∂f/∂y),   (1.4)

which can be found if ∂f/∂y ≠ 0. That is to say, y can be considered a function of xi if ∂f/∂y ≠ 0.

Let us now consider the equations

  f(x, y, u, v) = 0,   (1.5)
  g(x, y, u, v) = 0.   (1.6)
Under certain circumstances, we can unravel these equations (either algebraically or numerically) to form u = u(x, y), v = v(x, y). The conditions for the existence of such a functional dependency can be found by differentiation of the original equations; for example,

  df = (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂u) du + (∂f/∂v) dv = 0.

Holding y constant and dividing by dx, we get

  ∂f/∂x + (∂f/∂u)(∂u/∂x) + (∂f/∂v)(∂v/∂x) = 0.

In the same manner, we get

  ∂g/∂x + (∂g/∂u)(∂u/∂x) + (∂g/∂v)(∂v/∂x) = 0,
  ∂f/∂y + (∂f/∂u)(∂u/∂y) + (∂f/∂v)(∂v/∂y) = 0,
  ∂g/∂y + (∂g/∂u)(∂u/∂y) + (∂g/∂v)(∂v/∂y) = 0.

Two of the above equations can be solved for ∂u/∂x and ∂v/∂x, and two others for ∂u/∂y and ∂v/∂y, by using Cramer's¹ rule. To solve for ∂u/∂x and ∂v/∂x, we first write two of the previous equations in matrix form:

  [ ∂f/∂u , ∂f/∂v ; ∂g/∂u , ∂g/∂v ] [ ∂u/∂x ; ∂v/∂x ] = [ -∂f/∂x ; -∂g/∂x ],   (1.7)

thus from Cramer's rule we have

  ∂u/∂x = det[ -∂f/∂x , ∂f/∂v ; -∂g/∂x , ∂g/∂v ] / det[ ∂f/∂u , ∂f/∂v ; ∂g/∂u , ∂g/∂v ] ≡ - (∂(f,g)/∂(x,v)) / (∂(f,g)/∂(u,v)),

  ∂v/∂x = det[ ∂f/∂u , -∂f/∂x ; ∂g/∂u , -∂g/∂x ] / det[ ∂f/∂u , ∂f/∂v ; ∂g/∂u , ∂g/∂v ] ≡ - (∂(f,g)/∂(u,x)) / (∂(f,g)/∂(u,v)).

In a similar fashion, we can form expressions for ∂u/∂y and ∂v/∂y:

  ∂u/∂y = det[ -∂f/∂y , ∂f/∂v ; -∂g/∂y , ∂g/∂v ] / det[ ∂f/∂u , ∂f/∂v ; ∂g/∂u , ∂g/∂v ] ≡ - (∂(f,g)/∂(y,v)) / (∂(f,g)/∂(u,v)),

  ∂v/∂y = det[ ∂f/∂u , -∂f/∂y ; ∂g/∂u , -∂g/∂y ] / det[ ∂f/∂u , ∂f/∂v ; ∂g/∂u , ∂g/∂v ] ≡ - (∂(f,g)/∂(u,y)) / (∂(f,g)/∂(u,v)).

¹Gabriel Cramer, 1704-1752, well-travelled Swiss-born mathematician who did enunciate his well known rule, but was not the first to do so.
If the Jacobian² determinant, defined below, is non-zero, the derivatives exist, and we indeed can form u(x, y) and v(x, y):

  ∂(f, g)/∂(u, v) ≡ det[ ∂f/∂u , ∂f/∂v ; ∂g/∂u , ∂g/∂v ] ≠ 0.   (1.8)

This is the condition for the implicit to explicit function conversion. Similar conditions hold for multiple implicit functions fi(x1, . . . , xn, y1, . . . , ym) = 0, i = 1, . . . , m: the derivatives ∂yi/∂xj, i = 1, . . . , m, j = 1, . . . , n, exist in some region if the determinant of the matrix ∂fi/∂yj (i, j = 1, . . . , m) is non-zero in this region.

Example 1.1
If

  x + y + u^6 + u + v = 0,
  xy + uv = 1,

find ∂u/∂x.

Note that we have four unknowns in two equations. In principle we could solve for u(x, y) and v(x, y) and then determine all partial derivatives, such as the one desired. In practice this is not always possible; for example, there is no general solution to sixth order equations such as we have here. However, given a local value of (x, y) (which includes nearly all of the (x, y) plane), we can use algebra to find a corresponding u and v, which may be multivalued, and use the formula developed to find the local value of the partial derivative. The two equations are rewritten as

  f(x, y, u, v) = x + y + u^6 + u + v = 0,
  g(x, y, u, v) = xy + uv - 1 = 0.

Using the formula developed above to solve for the desired derivative, we get

  ∂u/∂x = det[ -1 , 1 ; -y , u ] / det[ 6u^5 + 1 , 1 ; v , u ] = (y - u) / (u(6u^5 + 1) - v).

Note when v = 6u^6 + u that the relevant Jacobian is zero; at such points we can determine neither ∂u/∂x nor ∂u/∂y; thus we cannot form u(x, y).

²Carl Gustav Jacob Jacobi, 1804-1851, German/Prussian mathematician who used these determinants, which were first studied by Cauchy, in his work on partial differential equations.
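The derivative formula of Example 1.1 can be spot-checked numerically. The sketch below (standard-library Python only; the Newton starting guess and the sample point (x, y) = (1, 2) are illustrative choices of ours, not from the text) solves the two equations for (u, v) and compares a finite-difference estimate of ∂u/∂x with the closed form (y - u)/(u(6u^5 + 1) - v).

```python
# Numerical check of Example 1.1:
#   f(x, y, u, v) = x + y + u^6 + u + v     = 0
#   g(x, y, u, v) = x*y + u*v - 1           = 0
# Solve for (u, v) at fixed (x, y) by Newton iteration, then compare a
# finite-difference du/dx against the closed-form result.

def solve_uv(x, y, u0=0.3, v0=-3.3):
    """Newton iteration for f = g = 0 at fixed (x, y). The starting guess
    is chosen near one root of the system (a sixth-order system has
    several roots, so the seed matters)."""
    u, v = u0, v0
    for _ in range(50):
        f = x + y + u**6 + u + v
        g = x*y + u*v - 1.0
        # Jacobian of (f, g) with respect to (u, v)
        fu, fv = 6.0*u**5 + 1.0, 1.0
        gu, gv = v, u
        det = fu*gv - fv*gu
        du = (-f*gv + g*fv) / det   # Cramer's rule for the Newton step
        dv = (-g*fu + f*gu) / det
        u, v = u + du, v + dv
    return u, v

x, y = 1.0, 2.0
u, v = solve_uv(x, y)

# closed-form derivative from the example
du_dx_exact = (y - u) / (u*(6.0*u**5 + 1.0) - v)

# finite-difference derivative, re-solving the system at x + h
h = 1.0e-7
u_h, _ = solve_uv(x + h, y, u0=u, v0=v)
du_dx_fd = (u_h - u) / h

print(du_dx_exact, du_dx_fd)
```

The two numbers agree to several digits, as the implicit function theorem promises away from the singular set v = 6u^6 + u.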
Example 1. y).e.11) = Since the right hand side is zero. v. ∂f ∂u ∂f ∂v + = 0. v) = 0.1. ∂u ∂x ∂v ∂x ∂f ∂u ∂f ∂v + = 0. ∂u ∂y ∂v ∂y ∂u ∂x ∂u ∂y ∂v ∂x ∂v ∂y ∂f ∂u ∂f ∂v (1.10) 0 0 (1. 14 . ∂u ∂x ∂u ∂y ∂v ∂x ∂v ∂y = 0. (1. by extension to three functions of three variables. w are functionally dependent. that this is equivalent to ∂u ∂x ∂v ∂x ∂u ∂y ∂v ∂y = ∂(u. must be zero for functional dependency. y) and v = v(x. v) = 0. v.9) (1. then u and v are said to be functionally dependent. since det A = det AT .2 Functional dependence Let u = u(x. y. If we can write u = g(v) or v = h(u).13) That is the Jacobian must be zero. So. y) (1. ∂(x. z) ∂u ∂x ∂v ∂x ∂w ∂x ∂u ∂y ∂v ∂y ∂w ∂y ∂u ∂z ∂v ∂z ∂w ∂z ∂u ∂x ∂u ∂y ∂u ∂z ∂v ∂x ∂v ∂y ∂v ∂z ∂w ∂x ∂w ∂y ∂w ∂z = = = 0 1 1 0 1 4z 1 −4(y + z) −4y (−1)(−4y − (−4)(y + z)) + (1)(4z) = 4y − 4y − 4z + 4z = 0 So.12) Note. In fact w = v − 2u2 . i. and we desire a nontrivial solution. w) = ∂(x. The determinant of the resulting coeﬃcient matrix.2 Determine if u = y+z v w = x + 2z 2 = x − 4yz − 2y 2 are functionally dependent. u. is ∂(u. If functional dependence between u and v exists. the determinant of the coeﬃcient matrix. then we can consider f (u.
thus. If we take dz dz f (x. dy = 0 if dz y − x − z = 0. dz ∂f ∂y ∂g ∂y ∂f ∂y ∂g ∂y can be obtained by Cramer’s rule.3 Let x+y+z x + y + z + 2xz 2 2 2 = = 0 1 Can x and y be considered as functions of z? If x = x(z) and y = y(z). the above expression √ 2 2 is indeterminant. then dx and dy must exist. z) ∂f ∂f ∂f dz + dx + dy df = ∂z ∂x ∂y ∂g ∂g ∂g dz + dx + dy dg = ∂z ∂x ∂y ∂f dx ∂f dy ∂f + + ∂z ∂x dz ∂y dz ∂g dx ∂g dy ∂g + + ∂z ∂x dz ∂y dz ∂f ∂x ∂g ∂x ∂f ∂y ∂g ∂y T dx dz dy dz = x+y+z =0 = x2 + y 2 + z 2 + 2xz − 1 = 0 = = = = = 0 0 0 0 − ∂f ∂z − ∂g ∂z then the solution matrix − ∂f ∂z − ∂g dx ∂z = ∂f dz ∂x dx dy dz .g) ∂(x. −1 1 −(2z + 2x) 2y −2y + 2z + 2x = = = −1 2y − 2x − 2z 1 1 2x + 2z 2y 1 2x + 2z −1 −(2z + 2x) 0 = 2y − 2x − 2z 1 1 2x + 2z 2y dy = dz ∂g ∂x ∂f ∂x ∂g ∂x ∂f ∂x ∂g ∂x − ∂f ∂z − ∂g ∂z ∂f ∂y ∂g ∂y = Note here that in the expression for dx that the numerator and denominator cancel. it is easily shown by algebraic manipulations (which for more general functions are not possible) that √ 2 x(z) = −z ± √ 2 2 y(z) = 2 Note that in fact y − x − z = 0. in which case this formula cannot give us the derivative. so the Jacobian determinant for dy dz ∂(f. there is no special dz condition deﬁned by the Jacobian determinant of the denominator being zero. However. y. In the second. 15 . Now in fact. dy dz = 0. we see from the explicit expression y = that in fact.Example 1. z) g(x. y.y) = 0.
5 1 0. when the plane is aligned with the axis of the cylinder.5 0. z) g(x. y.2.5 1 1 0.5 1 Figure 1. it is most likely that the intersection will be a closed arc.5 0 x 0. and that represented by the linear function is a plane. Let’s see how slightly altering the equation for the plane removes the degeneracy. dz T = 5x + y + z = 0 = x2 + y 2 + z 2 + 2xz − 1 = 0 then the solution matrix is found as before: ∂f ∂y ∂g ∂y ∂f ∂y ∂g ∂y − ∂f ∂z dx = dz ∂f ∂x ∂g ∂x ∂f ∂x ∂g ∂x − ∂g ∂z dy = dz ∂f ∂x ∂g ∂x − ∂f ∂z − ∂g ∂z ∂f ∂y ∂g ∂y −1 1 −(2z + 2x) 2y −2y + 2z + 2x = = 10y − 2x − 2z 5 1 2x + 2z 2y 5 2x + 2z −1 −(2z + 2x) −8x − 8z = 10y − 2x − 2z 5 1 2x + 2z 2y = The two original functions and their loci of intersection are plotted in Figure 1. Take now 5x + y + z x + y + z + 2xz 2 2 2 = = 0 1 Can x and y be considered as functions of z? If x = x(z) and y = y(z). y. However.1. z) dx dy dz . the intersection will be two nonintersecting lines. If we take dz dz f (x. It is seen that the surface represented by the quadratic function is a open cylindrical tube.5 1 2 1 0.5 0 y 0. Straightforward algebra in this case shows that an explicit dependency exists: √ √ −6z ± 2 13 − 8z 2 x(z) = 26 √ √ −4z 5 2 13 − 8z 2 y(z) = 26 16 .1: Surfaces of x + y + z = 0 and x2 + y 2 + z 2 + 2xz = 1. Note that planes and cylinders may or may not intersect. If they intersect.5 5 z 0 0. such is the case in this example. and their loci of intersection The two original functions and their loci of intersection are plotted in Figure 1.5 0 z 0 0. then dx and dy must exist.2 1 y 0 1 x 0 1 1 0 0.
Figure 1.2: Surfaces of 5x + y + z = 0 and x^2 + y^2 + z^2 + 2xz = 1, and their loci of intersection.

These curves represent the projection of the curve of intersection on the x-z and y-z planes, respectively. In both cases, the projections are ellipses.

1.3 Coordinate Transformations

Many problems are formulated in three-dimensional Cartesian³ space. However, many of these problems, especially those involving curved geometrical bodies, are better posed in a non-Cartesian, curvilinear coordinate system. As such, one needs techniques to transform from one coordinate system to another.

For this section, we will take Cartesian coordinates to be represented by (ξ^1, ξ^2, ξ^3). Here the superscript is an index and does not represent a power of ξ. We will denote this point by ξ^i, where i = 1, 2, 3. Since the space is Cartesian, we have the usual Euclidean⁴ formula for arc length s:

  (ds)^2 = (dξ^1)^2 + (dξ^2)^2 + (dξ^3)^2,   (1.14)
  (ds)^2 = Σ_{i=1}^{3} dξ^i dξ^i ≡ dξ^i dξ^i.   (1.15)

Here we have adopted the summation convention that when an index appears twice, a summation from 1 to 3 is understood.

³René Descartes, 1596-1650, French mathematician and philosopher.
⁴Euclid of Alexandria, ~325 B.C.-~265 B.C., Greek geometer.
Now let us map a point from a point in (ξ^1, ξ^2, ξ^3) space to a point in a more convenient (x^1, x^2, x^3) space. This mapping is achieved by defining the following functional dependencies:

  x^1 = x^1(ξ^1, ξ^2, ξ^3),   (1.16)
  x^2 = x^2(ξ^1, ξ^2, ξ^3),   (1.17)
  x^3 = x^3(ξ^1, ξ^2, ξ^3).   (1.18)

Taking derivatives can tell us whether the inverse exists:

  dx^1 = (∂x^1/∂ξ^1) dξ^1 + (∂x^1/∂ξ^2) dξ^2 + (∂x^1/∂ξ^3) dξ^3 = (∂x^1/∂ξ^j) dξ^j,   (1.19)
  dx^2 = (∂x^2/∂ξ^1) dξ^1 + (∂x^2/∂ξ^2) dξ^2 + (∂x^2/∂ξ^3) dξ^3 = (∂x^2/∂ξ^j) dξ^j,   (1.20)
  dx^3 = (∂x^3/∂ξ^1) dξ^1 + (∂x^3/∂ξ^2) dξ^2 + (∂x^3/∂ξ^3) dξ^3 = (∂x^3/∂ξ^j) dξ^j,   (1.21)

  [ dx^1 ; dx^2 ; dx^3 ] = [ ∂x^1/∂ξ^1 , ∂x^1/∂ξ^2 , ∂x^1/∂ξ^3 ; ∂x^2/∂ξ^1 , ∂x^2/∂ξ^2 , ∂x^2/∂ξ^3 ; ∂x^3/∂ξ^1 , ∂x^3/∂ξ^2 , ∂x^3/∂ξ^3 ] [ dξ^1 ; dξ^2 ; dξ^3 ],   (1.22)

  dx^i = (∂x^i/∂ξ^j) dξ^j.   (1.23)

In order for the inverse to exist we must have a non-zero Jacobian for the transformation, i.e.

  ∂(x^1, x^2, x^3)/∂(ξ^1, ξ^2, ξ^3) ≠ 0.   (1.24)

It can then be inferred that the inverse transformation exists:

  ξ^1 = ξ^1(x^1, x^2, x^3),   (1.25)
  ξ^2 = ξ^2(x^1, x^2, x^3),   (1.26)
  ξ^3 = ξ^3(x^1, x^2, x^3).   (1.27)

Likewise then,

  dξ^i = (∂ξ^i/∂x^j) dx^j.   (1.28)
1.3.1 Jacobians and Metric Tensors

Defining⁵ the Jacobian matrix J, which we associate with the inverse transformation, that is, the transformation from non-Cartesian to Cartesian coordinates, to be

  J = ∂ξ^i/∂x^j = [ ∂ξ^1/∂x^1 , ∂ξ^1/∂x^2 , ∂ξ^1/∂x^3 ; ∂ξ^2/∂x^1 , ∂ξ^2/∂x^2 , ∂ξ^2/∂x^3 ; ∂ξ^3/∂x^1 , ∂ξ^3/∂x^2 , ∂ξ^3/∂x^3 ],   (1.29)

we can rewrite dξ^i in Gibbs'⁶ vector notation as

  dξ = J · dx.   (1.30)

Now for Euclidean spaces, distance must be independent of coordinate systems, so we require

  (ds)^2 = dξ^i dξ^i = ((∂ξ^i/∂x^k) dx^k)((∂ξ^i/∂x^l) dx^l) = (∂ξ^i/∂x^k)(∂ξ^i/∂x^l) dx^k dx^l.   (1.31)

In Gibbs' vector notation this becomes

  (ds)^2 = dξ^T · dξ   (1.32)
         = (J · dx)^T · (J · dx)   (1.33)
         = dx^T · J^T · J · dx.   (1.34)

If we define the metric tensor, g_kl or G, as follows:

  g_kl = (∂ξ^i/∂x^k)(∂ξ^i/∂x^l),   (1.35)
  G = J^T · J,   (1.36)

then we have, equivalently in both index and Gibbs notation,

  (ds)^2 = g_kl dx^k dx^l,   (1.37)
  (ds)^2 = dx^T · G · dx.   (1.38)

Now g_kl can be represented as a matrix. If we define

  g = det(g_kl),   (1.39)

⁵The definition we adopt is that used in most texts, including Kaplan. A few, however, including Aris, define the Jacobian determinant in terms of the transpose of the Jacobian matrix, which is not problematic since the two are the same. Extending this, an argument can be made that a better definition of the Jacobian matrix would be the transpose of the traditional Jacobian matrix. This is because when one considers that the differential operator acts first, the alternative definition is more consistent with traditional matrix notation, which would have the first row as ∂ξ^1/∂x^1, ∂ξ^2/∂x^1, ∂ξ^3/∂x^1. As long as one realizes the implications of the notation, the convention adopted ultimately does not matter.
⁶Josiah Willard Gibbs, 1839-1903, prolific American physicist and mathematician with a lifetime affiliation with Yale University.
it can be shown that the ratio of volumes of differential elements in one space to that of the other is given by

  dξ^1 dξ^2 dξ^3 = √g dx^1 dx^2 dx^3.   (1.40)

We also require dependent variables and all derivatives to take on the same values at corresponding points in each space: if S [S = f(ξ^1, ξ^2, ξ^3) = h(x^1, x^2, x^3)] is a dependent variable defined at a given point in ξ-space, and that point maps into a point in x-space, then f and h must take the same value at the corresponding points.

The chain rule lets us transform derivatives to other spaces:

  ( ∂S/∂ξ^1 , ∂S/∂ξ^2 , ∂S/∂ξ^3 ) = ( ∂S/∂x^1 , ∂S/∂x^2 , ∂S/∂x^3 ) [ ∂x^1/∂ξ^1 , ∂x^1/∂ξ^2 , ∂x^1/∂ξ^3 ; ∂x^2/∂ξ^1 , ∂x^2/∂ξ^2 , ∂x^2/∂ξ^3 ; ∂x^3/∂ξ^1 , ∂x^3/∂ξ^2 , ∂x^3/∂ξ^3 ],   (1.41)

  ∂S/∂ξ^i = (∂S/∂x^j)(∂x^j/∂ξ^i).   (1.42)

The fact that the gradient operator required the use of row vectors in conjunction with the Jacobian matrix, while the transformation of distance, earlier in this section, required the use of column vectors, is of fundamental importance, and will be examined further in an upcoming section where we distinguish between what are known as covariant and contravariant vectors.

Example 1.4
Transform the Cartesian equation

  ∂S/∂ξ^1 - S = (ξ^1)^2 + (ξ^2)^2

under the following:

1. Cartesian to affine coordinates.

Consider the following linear non-orthogonal transformation (those of this type are known as affine):

  x^1 = 4ξ^1 + 2ξ^2,
  x^2 = 3ξ^1 + 2ξ^2,
  x^3 = ξ^3.

This is a linear system of three equations in three unknowns; using standard techniques of linear algebra allows us to solve for ξ^1, ξ^2, ξ^3 in terms of x^1, x^2, x^3, that is, to find the inverse transformation, which is

  ξ^1 = x^1 - x^2,
  ξ^2 = -(3/2) x^1 + 2 x^2,
  ξ^3 = x^3.

Lines of constant x^1 and x^2 in the ξ^1, ξ^2 plane are plotted in Figure 1.3.
Figure 1.3: Lines of constant x^1 and x^2 in the ξ^1, ξ^2 plane for affine transformation of example problem.

The appropriate Jacobian matrix for the inverse transformation is

  J = ∂ξ^i/∂x^j = ∂(ξ^1, ξ^2, ξ^3)/∂(x^1, x^2, x^3) = [ 1 , -1 , 0 ; -3/2 , 2 , 0 ; 0 , 0 , 1 ].

The determinant of the Jacobian matrix is

  det J = 2 - 3/2 = 1/2.

So a unique transformation always exists, since the Jacobian determinant is never zero. The metric tensor is

  g_kl = (∂ξ^i/∂x^k)(∂ξ^i/∂x^l) = (∂ξ^1/∂x^k)(∂ξ^1/∂x^l) + (∂ξ^2/∂x^k)(∂ξ^2/∂x^l) + (∂ξ^3/∂x^k)(∂ξ^3/∂x^l);

for example, for k = 1, l = 1 we get

  g_11 = (∂ξ^1/∂x^1)(∂ξ^1/∂x^1) + (∂ξ^2/∂x^1)(∂ξ^2/∂x^1) + (∂ξ^3/∂x^1)(∂ξ^3/∂x^1)
       = (1)(1) + (-3/2)(-3/2) + (0)(0) = 13/4.

Repeating this operation, we find the complete metric tensor is

  g_kl =
  [ 13/4 , -4 , 0 ; -4 , 5 , 0 ; 0 , 0 , 1 ],

  g = det(g_kl) = 65/4 - 16 = 1/4.

This is equivalent to the calculation in Gibbs notation:

  G = J^T · J = [ 1 , -3/2 , 0 ; -1 , 2 , 0 ; 0 , 0 , 1 ] · [ 1 , -1 , 0 ; -3/2 , 2 , 0 ; 0 , 0 , 1 ]
    = [ 13/4 , -4 , 0 ; -4 , 5 , 0 ; 0 , 0 , 1 ].

Distance in the transformed system is given by

  (ds)^2 = g_kl dx^k dx^l = dx^T · G · dx
         = ( dx^1 , dx^2 , dx^3 ) [ 13/4 , -4 , 0 ; -4 , 5 , 0 ; 0 , 0 , 1 ] ( dx^1 ; dx^2 ; dx^3 )
         = (13/4) (dx^1)^2 + 5 (dx^2)^2 + (dx^3)^2 - 8 dx^1 dx^2.

Algebraic manipulation reveals that this can be rewritten as follows:

  (ds)^2 = 8.22 (0.627 dx^1 - 0.779 dx^2)^2 + 0.0304 (0.779 dx^1 + 0.627 dx^2)^2 + (dx^3)^2.

Note:

• The Jacobian matrix J is not symmetric.
• The metric tensor G = J^T · J is symmetric. This will be true for all affine transformations in ordinary three-dimensional Euclidean space.
• The fact that the metric tensor has non-zero off-diagonal elements is a consequence of the transformation being non-orthogonal.
• The distance is guaranteed to be positive.²

Also we have the volume ratio of differential elements as

  dξ^1 dξ^2 dξ^3 = (1/2) dx^1 dx^2 dx^3.

Now

  ∂S/∂ξ^1 = (∂S/∂x^1)(∂x^1/∂ξ^1) + (∂S/∂x^2)(∂x^2/∂ξ^1) + (∂S/∂x^3)(∂x^3/∂ξ^1)
          = 4 ∂S/∂x^1 + 3 ∂S/∂x^2.

²In the generalized space-time continuum suggested by the theory of relativity, the generalized distance may in fact be negative; this generalized distance ds for an infinitesimal change in space and time is given by ds^2 = (dξ^1)^2 + (dξ^2)^2 + (dξ^3)^2 - (dξ^4)^2, where the first three coordinates are the ordinary Cartesian space coordinates and the fourth is (dξ^4)^2 = (c dt)^2, where c is the speed of light.
So the transformed equation becomes

  4 ∂S/∂x^1 + 3 ∂S/∂x^2 - S = (x^1 - x^2)^2 + (-(3/2) x^1 + 2 x^2)^2,
  4 ∂S/∂x^1 + 3 ∂S/∂x^2 - S = (13/4) (x^1)^2 - 8 x^1 x^2 + 5 (x^2)^2.

2. Cartesian to cylindrical coordinates.

The transformations are

  x^1 = +√( (ξ^1)^2 + (ξ^2)^2 ),
  x^2 = tan^(-1)( ξ^2 / ξ^1 ),
  x^3 = ξ^3.

Note this system of equations is non-linear; for general transformations we cannot always find an explicit algebraic expression for the inverse transformation. In this case, however, some straightforward algebraic and trigonometric manipulation reveals that we can find an explicit representation of the inverse transformation, which is

  ξ^1 = x^1 cos x^2,
  ξ^2 = x^1 sin x^2,
  ξ^3 = x^3.

Lines of constant x^1 and x^2 in the ξ^1, ξ^2 plane are plotted in Figure 1.4. Notice that the lines of constant x^1 are orthogonal to lines of constant x^2 in the Cartesian ξ^1, ξ^2 plane; for general transformations, this will not be the case.

Figure 1.4: Lines of constant x^1 and x^2 in the ξ^1, ξ^2 plane for cylindrical transformation of example problem.

The appropriate Jacobian matrix for the inverse transformation is
  J = ∂ξ^i/∂x^j = ∂(ξ^1, ξ^2, ξ^3)/∂(x^1, x^2, x^3) = [ cos x^2 , -x^1 sin x^2 , 0 ; sin x^2 , x^1 cos x^2 , 0 ; 0 , 0 , 1 ].

The determinant of the Jacobian matrix is

  det J = x^1 cos^2 x^2 + x^1 sin^2 x^2 = x^1.

So a unique transformation fails to exist when x^1 = 0. The metric tensor is

  g_kl = (∂ξ^i/∂x^k)(∂ξ^i/∂x^l) = (∂ξ^1/∂x^k)(∂ξ^1/∂x^l) + (∂ξ^2/∂x^k)(∂ξ^2/∂x^l) + (∂ξ^3/∂x^k)(∂ξ^3/∂x^l);

for example, for k = 1, l = 1 we get

  g_11 = (∂ξ^1/∂x^1)(∂ξ^1/∂x^1) + (∂ξ^2/∂x^1)(∂ξ^2/∂x^1) + (∂ξ^3/∂x^1)(∂ξ^3/∂x^1)
       = cos^2 x^2 + sin^2 x^2 + 0 = 1.

Repeating this operation, we find the complete metric tensor is

  g_kl = [ 1 , 0 , 0 ; 0 , (x^1)^2 , 0 ; 0 , 0 , 1 ],
  g = det(g_kl) = (x^1)^2.

This is equivalent to the calculation in Gibbs notation:

  G = J^T · J = [ cos x^2 , sin x^2 , 0 ; -x^1 sin x^2 , x^1 cos x^2 , 0 ; 0 , 0 , 1 ] · [ cos x^2 , -x^1 sin x^2 , 0 ; sin x^2 , x^1 cos x^2 , 0 ; 0 , 0 , 1 ]
    = [ 1 , 0 , 0 ; 0 , (x^1)^2 , 0 ; 0 , 0 , 1 ].

Distance in the transformed system is given by

  (ds)^2 = g_kl dx^k dx^l = dx^T · G · dx = (dx^1)^2 + (x^1)^2 (dx^2)^2 + (dx^3)^2.

Note:

• The fact that the metric tensor is diagonal can be attributed to the transformation being orthogonal.
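Both the diagonal metric just found and the invariance (ds)^2 = dx^T · G · dx = dξ^T · dξ are easy to confirm numerically. A minimal sketch, standard library only (the sample point and displacement are arbitrary choices of ours):

```python
# Check the cylindrical metric tensor G = J^T . J = diag(1, (x1)^2, 1)
# and the invariance of squared arc length under the transformation.
import math

def xi(x1, x2, x3):
    """Inverse transformation of the example: cylindrical -> Cartesian."""
    return (x1*math.cos(x2), x1*math.sin(x2), x3)

def metric(x1, x2):
    # exact Jacobian rows d(xi^i)/d(x^j), then G = J^T . J
    c, s = math.cos(x2), math.sin(x2)
    J = [[c, -x1*s, 0.0],
         [s,  x1*c, 0.0],
         [0.0, 0.0, 1.0]]
    return [[sum(J[i][k]*J[i][l] for i in range(3)) for l in range(3)]
            for k in range(3)]

x = (2.5, 0.8, -1.0)
G = metric(x[0], x[1])          # should be diag(1, (x1)^2, 1)

# squared arc length of a small displacement, computed both ways
dx = (3.0e-4, -2.0e-4, 1.0e-4)
ds2_metric = sum(G[k][l]*dx[k]*dx[l] for k in range(3) for l in range(3))
a = xi(*[x[i] + dx[i] for i in range(3)])
b = xi(*x)
ds2_direct = sum((p - q)**2 for p, q in zip(a, b))
print(G[1][1], ds2_metric, ds2_direct)   # G[1][1] is (x1)^2 = 6.25
```

The direct Cartesian distance agrees with the metric-tensor form up to terms that vanish as the displacement shrinks.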
x3 ). the metric tensor is always symmetric. x3 ) to another (¯1 . z) is a normal Cartesian system and deﬁne the transformation x = λx ¯ y = λy ¯ z = λz ¯ Now we can assign velocities in both the unbarred and barred systems: ux = ux = ¯¯ dx dt d¯ x dt uy = uy = ¯¯ dy dt d¯ y dt uz = uz = ¯¯ dz dt d¯ z dt 25 . x ¯ ¯ Example 1.3.43) Quantities known as covariant vectors transform according to ui = ¯ ∂xj uj ∂ xi ¯ (1.2 Covariance and Contravariance ∂ xi j ¯ u ∂xj Quantities known as contravariant vectors transform according to ui = ¯ (1. x2 . Also we have the volume ratio of diﬀerential elements as dξ 1 dξ 2 dξ 3 = x1 dx1 dx2 dx3 Now ∂S ∂ξ 1 = = ∂S ∂x1 ∂S ∂x2 ∂S ∂x3 + 2 1 + 3 1 ∂x1 ∂ξ 1 ∂x ∂ξ ∂x ∂ξ 1 ξ ξ2 ∂S ∂S − 2 ∂x1 ∂x2 (ξ 1 ) + (ξ 2 )2 2 2 (ξ 1 ) + (ξ 2 ) ∂S sin x2 ∂S − ∂x1 x1 ∂x2 = cos x2 So the transformed equation becomes cos x2 ∂S sin x2 ∂S − − S = x1 ∂x1 x1 ∂x2 2 1. x2 .• Since the product of any matrix with its transpose is guaranteed to yield a symmetric matrix.44) Here we have considered general transformations from one nonCartesian coordinate system (x1 .5 Let’s say (x. y.
. y. ux = ¯¯ ux = ¯¯ 1 ux λ uy = ¯¯ 1 uy λ uz = ¯¯ 1 uz λ Somewhat more generally we ﬁnd for this case that ux = ¯¯ ∂x ux ∂x ¯ uy = ¯¯ ∂y uy ∂y ¯ uz = ¯¯ ∂z uz . let f (x. in contrast to the velocity vector. y. ∂z ¯ which suggests the gradient vector is covariant. = + 2+ 3 λ λ λ λ λ λ ¯ z3 ¯ x y2 ¯ ¯ x ¯ ¯ f (¯. z). y . z. For example. z ) = + 2 + 3 λ λ λ Now ux = ¯¯ ux = ¯¯ In terms of x. Now consider a vector which is the gradient of a function f (x.ux = ¯¯ ∂ x dx ¯ ∂x dt uy = ¯¯ ∂ y dy ¯ ∂y dt uz = ¯¯ ∂ z dz ¯ ∂z dt ux = λux ¯¯ ∂x x ¯ u ux = ¯¯ ∂x uy = λuy ¯¯ ∂y y ¯ u uy = ¯¯ ∂y uz = λuz ¯¯ ∂z z ¯ u uz = ¯¯ ∂z This suggests the velocity vector is contravariant. z) = x + y 2 + z 3 ux = ∂f ∂x uy = ∂f ∂y uz = ∂f ∂z ux = 1 In the new coordinates f so uy = 2y uz = 3z 2 x y z ¯ ¯ ¯ ¯ z3 ¯ x y2 ¯ . we have ¯ ∂f ∂x ¯ 1 λ uy = ¯¯ uy = ¯¯ ¯ ∂f ∂y ¯ 2¯ y λ2 uz = ¯¯ uz = ¯¯ ¯ ∂f ∂z ¯ 3¯2 z λ3 1 2y 3z 2 uy = ¯¯ uz = ¯¯ λ λ λ So it is clear here that. y. Contravariant tensors transform according to v ij = ¯ ∂ xi ∂ xj kl ¯ ¯ v ∂xk ∂xl Covariant tensors transform according to vij = ¯ Mixed tensors transform according to vj = ¯i ∂ xi ∂xl k ¯ v ∂xk ∂ xj l ¯ 26 ∂xk ∂xl vkl ∂ xi ∂ xj ¯ ¯ .
respectively.50) (1.48) (1. where δj is the Kronecker i j. From the deﬁnition of contravariance ∂ξ i l u ∂xl Take the derivative in Cartesian space and then use the chain rule: Ui = Wji = ∂U i ∂U i ∂xk = ∂ξ j ∂xk ∂ξ j ∂ξ i l ∂xk ∂ u = ∂xk ∂xl ∂ξ j ∂ 2 ξ i l ∂ξ i ∂ul ∂xk = u + l k ∂xk ∂xl ∂x ∂x ∂ξ j ∂ 2 ξ p l ∂ξ p ∂ul ∂xk Wqp = u + l k ∂xk ∂xl ∂x ∂x ∂ξ q (1. German mathematician.The idea of covariant and contravariant derivatives play an important role in mathematical physics. i = j.51) (1. δj = 1.55) 7 i delta. Take wj and Wji to represent the covariant spatial derivative of ui and U i . but for nonorthogonal systems. Elwin Bruno Christoﬀel. The role of these terms was especially important in the development of the theory of relativity. This is not particularly diﬃcult for Cartesian systems. Consider a contravariant vector ui deﬁned in xi which has corresponding components U i i in the Cartesian ξ i . depending on the problem. Let’s use the chain rule and deﬁnitions of tensorial quantities to arrive at a formula for covariant diﬀerentiation.52) (1. one cannot use diﬀerentiation in the ordinary sense but must instead use the notion of covariant and contravariant derivatives. We deﬁne the Christoﬀel 8 symbols Γi as follows: jl 7 8 Leopold Kronecker. δj = 0. 18231891. i = ∂x i i Here we have used the identity that ∂xj = δj . namely in that the equations should be formulated such that they are invariant under coordinate transformations.45) (1.49) From the deﬁnition of a mixed tensor i wj = Wqp = = = = = ∂xi ∂ξ q ∂ξ p ∂xj ∂ 2 ξ p l ∂ξ p ∂ul ∂xk ∂xi ∂ξ q u + l k ∂xk ∂xl ∂x ∂x ∂ξ q ∂ξ p ∂xj ∂ 2 ξ p ∂xk ∂xi ∂ξ q l ∂ξ p ∂xk ∂xi ∂ξ q ∂ul u + l q p j k ∂xk ∂xl ∂ξ q ∂ξ p ∂xj ∂x ∂ξ ∂ξ ∂x ∂x 2 p k i i ∂ ξ ∂x ∂x l ∂x ∂xk ∂ul u + l j k ∂xk ∂xl ∂xj ∂ξ p ∂x ∂x ∂x 2 p i l ∂ ξ k ∂x k ∂u δj p ul + δli δj k ∂xk ∂xl ∂ξ ∂x ∂ 2 ξ p ∂xi l ∂ui u + j ∂xj ∂xl ∂ξ p ∂x i (1.46) (1.47) (1.53) (1. 27 . 18291900. German/Prussian mathematician.54) (1.
Γi = jl ∂ 2 ξ p ∂xi ∂xj ∂xl ∂ξ p (1. = ∂ 2 ξ 2 ∂xi l ∂ 2 ξ 1 ∂xi l u + u i ∂xl ∂ξ 1 ∂x ∂xi ∂xl ∂ξ 2 ∂ 2 ξ 1 ∂x1 l ∂ 2 ξ 1 ∂x2 l ∂ 2 ξ 1 ∂x3 l = u + u + u 1 ∂xl ∂ξ 1 2 ∂xl ∂ξ 1 ∂x ∂x ∂x3 ∂xl ∂ξ 1 ∂ 2 ξ 2 ∂x1 ∂ 2 ξ 2 ∂x2 l ∂ 2 ξ 2 ∂x3 l + 1 l 2 ul + u + u ∂x ∂x ∂ξ ∂x2 ∂xl ∂ξ 2 ∂x3 ∂xl ∂ξ 2 noting that partials of x3 with respect to ξ 1 and ξ 2 are zero. Thus we have found the covariant derivative of a contravariant vector ui is as follows: i ∆j u i = w j = ∂ui + Γi ul jl ∂xj (1.56) and use the term ∆j to represent the covariant derivative.57) Example 1. = ∂ 2 ξ 1 ∂x2 l ∂ 2 ξ 1 ∂x1 l u + u ∂x1 ∂xl ∂ξ 1 ∂x2 ∂xl ∂ξ 1 ∂ 2 ξ 2 ∂x1 ∂ 2 ξ 2 ∂x2 l + 1 l 2 ul + u ∂x ∂x ∂ξ ∂x2 ∂xl ∂ξ 2 28 . The transformations are x1 x2 x3 The inverse transformation is ξ1 ξ2 ξ3 This corresponds to ﬁnding i ∆i ui = wi = = = + (ξ 1 ) + (ξ 2 ) tan−1 ξ2 ξ1 2 2 = ξ3 = x1 cos x2 = x1 sin x2 = x3 ∂ui + Γi u l il ∂xi Now for i = j Γi ul il = = ∂ 2 ξ p ∂xi l u ∂xi ∂xl ∂ξ p ∂ 2 ξ 1 ∂xi l ∂ 2 ξ 2 ∂xi l ∂ 2 ξ 3 ∂xi l u + u + u i ∂xl ∂ξ 1 i ∂xl ∂ξ 2 ∂x ∂x ∂xi ∂xl ∂ξ 3 noting that all second partials of ξ 3 are zero.6 Find · u in cylindrical coordinates.
x1 = r. For dt practical purposes. we get ·u = ·u = ·u = ∂ dθ ∂ dz 1 ∂ dr + + + ∂r dt ∂θ dt ∂z dt r dr 1 ∂ dθ ∂ dz 1 ∂ r + r + r ∂r dt r ∂θ dt ∂z dt ∂uz 1 ∂uθ 1 ∂ (rur ) + + r ∂r r ∂θ ∂z dr dt Here we have also used the more traditional uθ = r dθ = x1 u2 . x2 = θ. along with ur = u1 . this insures that ur . 29 .= ∂ 2 ξ 1 ∂x1 1 ∂ 2 ξ 1 ∂x1 2 ∂ 2 ξ 1 ∂x1 3 u + u + u 1 ∂x1 ∂ξ 1 1 ∂x2 ∂ξ 1 ∂x ∂x ∂x1 ∂x3 ∂ξ 1 ∂ 2 ξ 1 ∂x2 ∂ 2 ξ 1 ∂x2 2 ∂ 2 ξ 1 ∂x2 3 + 2 1 1 u1 + u + u 2 ∂x2 ∂ξ 1 ∂x ∂x ∂ξ ∂x ∂x2 ∂x3 ∂ξ 1 ∂ 2 ξ 2 ∂x1 ∂ 2 ξ 2 ∂x1 2 ∂ 2 ξ 2 ∂x1 3 + 1 1 2 u1 + u + u ∂x ∂x ∂ξ ∂x1 ∂x2 ∂ξ 2 ∂x1 ∂x3 ∂ξ 2 ∂ 2 ξ 2 ∂x2 ∂ 2 ξ 2 ∂x2 2 ∂ 2 ξ 2 ∂x2 3 + 2 1 2 u1 + u + u ∂x ∂x ∂ξ ∂x2 ∂x2 ∂ξ 2 ∂x2 ∂x3 ∂ξ 2 again removing the x3 variation = ∂ 2 ξ 1 ∂x1 2 ∂ 2 ξ 1 ∂x1 1 u + u ∂x1 ∂x1 ∂ξ 1 ∂x1 ∂x2 ∂ξ 1 ∂ 2 ξ 1 ∂x2 ∂ 2 ξ 1 ∂x2 2 + 2 1 1 u1 + u ∂x ∂x ∂ξ ∂x2 ∂x2 ∂ξ 1 ∂ 2 ξ 2 ∂x1 ∂ 2 ξ 2 ∂x1 2 + 1 1 2 u1 + u ∂x ∂x ∂ξ ∂x1 ∂x2 ∂ξ 2 ∂ 2 ξ 2 ∂x2 ∂ 2 ξ 2 ∂x2 2 + 2 1 2 u1 + u ∂x ∂x ∂ξ ∂x2 ∂x2 ∂ξ 2 substituting for the partial derivatives = 0u1 − sin x2 cos x2 u2 − sin x2 − sin x2 u1 − x1 cos x2 x1 +0u1 + cos x2 sin x2 u2 cos x2 + cos x2 u1 − x1 sin x2 x1 u1 = x1 So in cylindrical coordinates ·u= − sin x2 x1 cos x2 x1 u2 u2 ∂u2 ∂u3 u1 ∂u1 + + + 1 1 2 3 ∂x ∂x ∂x x Note: In standard cylindrical notation. x3 = z. Considering u to be a velocity vector. uz all have the same dimensions. uθ . uz = u3 .
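The cylindrical divergence formula just obtained, div u = (1/r) ∂(r u_r)/∂r + (1/r) ∂u_θ/∂θ + ∂u_z/∂z, can be spot-checked against the ordinary Cartesian divergence of the same vector field. A standard-library sketch; the test field V and the sample point are made-up choices of ours:

```python
# Verify the cylindrical divergence formula against the Cartesian one.
import math

def V(x, y, z):                       # Cartesian components (Vx, Vy, Vz)
    return (x*x*y, y*z, x + z*z)

def div_cartesian(x, y, z):
    # exact: d(x^2 y)/dx + d(y z)/dy + d(x + z^2)/dz = 2xy + z + 2z
    return 2.0*x*y + z + 2.0*z

def u_cyl(r, th, z):
    """Physical cylindrical components (u_r, u_theta, u_z) of V."""
    x, y = r*math.cos(th), r*math.sin(th)
    vx, vy, vz = V(x, y, z)
    ur = vx*math.cos(th) + vy*math.sin(th)
    ut = -vx*math.sin(th) + vy*math.cos(th)
    return ur, ut, vz

def div_cylindrical(r, th, z, h=1.0e-5):
    # central differences of the three terms of the formula
    d_rur = ((r + h)*u_cyl(r + h, th, z)[0]
           - (r - h)*u_cyl(r - h, th, z)[0]) / (2*h)
    d_ut = (u_cyl(r, th + h, z)[1] - u_cyl(r, th - h, z)[1]) / (2*h)
    d_uz = (u_cyl(r, th, z + h)[2] - u_cyl(r, th, z - h)[2]) / (2*h)
    return d_rur/r + d_ut/r + d_uz

r, th, z = 1.7, 0.6, -0.8
x, y = r*math.cos(th), r*math.sin(th)
print(div_cylindrical(r, th, z), div_cartesian(x, y, z))
```

The two values agree to finite-difference accuracy, as they must, since the divergence of a vector field is a scalar independent of the coordinate system.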
We summarize here some useful identities, all of which can be proved, as well as some other common notation, as follows:

  g_kl = (∂ξ^i/∂x^k)(∂ξ^i/∂x^l),   (1.58)
  g = det(g_ij),   (1.59)
  g_ik g^kj = δ_i^j,   (1.60)
  u_i = g_ij u^j,   (1.61)
  u^i = g^ij u_j,   (1.62)
  u · v = u_i v^i = u^i v_i = g_ij u^j v^i = g^ij u_j v_i,   (1.63)
  u × v = ε^ijk g_jm g_kn u^m v^n = ε^ijk u_j v_k,   (1.64)
  Γ^i_jk = (∂²ξ^p/∂x^j ∂x^k)(∂x^i/∂ξ^p) = (1/2) g^ip ( ∂g_pj/∂x^k + ∂g_pk/∂x^j - ∂g_jk/∂x^p ),   (1.65)
  ∇u = Δ_j u^i = u^i_,j = ∂u^i/∂x^j + Γ^i_jl u^l,   (1.66)
  ∇ · u = Δ_i u^i = u^i_,i = ∂u^i/∂x^i + Γ^i_il u^l = (1/√g) ∂(√g u^i)/∂x^i,   (1.67)
  ∇ × u = ∂u_j/∂x^i - ∂u_i/∂x^j,   (1.68)
  ∇φ = φ_,i = ∂φ/∂x^i,   (1.69)
  ∇²φ = ∇ · ∇φ = g^ij φ_,ij = ∂/∂x^j ( g^ij ∂φ/∂x^i ) + Γ^j_jk g^ik ∂φ/∂x^i   (1.70)
      = (1/√g) ∂/∂x^j ( √g g^ij ∂φ/∂x^i ),   (1.71)
  ∇T = T^ij_,k = ∂T^ij/∂x^k + Γ^i_lk T^lj + Γ^j_lk T^il,   (1.72)
  ∇ · T = T^ij_,j = ∂T^ij/∂x^j + Γ^i_lj T^lj + Γ^j_lj T^il = (1/√g) ∂(√g T^ij)/∂x^j + Γ^i_jk T^jk   (1.73)
        = (1/√g) ∂/∂x^j ( √g T^kj ∂ξ^i/∂x^k ).   (1.74)

1.4 Maxima and minima

Consider the real function f(x), where x ∈ [a, b]. Extrema are at x = x_m, where f′(x_m) = 0, if x_m ∈ [a, b]. It is a local minimum, a local maximum, or an inflection point according to whether f″(x_m) is positive, negative or zero, respectively.

Now consider a function of two variables f(x, y), with x ∈ [a, b], y ∈ [c, d]. A necessary condition for an extremum is

  ∂f/∂x (x_m, y_m) = ∂f/∂y (x_m, y_m) = 0,   (1.75)
where x_m ∈ [a, b], y_m ∈ [c, d]. Next we find the Hessian⁹ matrix (Hildebrand, 356):

  H = [ ∂²f/∂x² , ∂²f/∂x∂y ; ∂²f/∂x∂y , ∂²f/∂y² ],   (1.76)

and its determinant D = - det H. It can be shown that:

  f is a maximum if ∂²f/∂x² < 0 and D < 0,
  f is a minimum if ∂²f/∂x² > 0 and D < 0,
  f is a saddle if D > 0.

Higher order derivatives must be considered if D = 0.

Example 1.7

  f = x² - y².

Equating partial derivatives with respect to x and to y to zero, we get

  2x = 0,
  -2y = 0.

This gives x = 0, y = 0. For these values we find that

  D = - det[ 2 , 0 ; 0 , -2 ] = -(-4) = 4.

Since D > 0, the point (0, 0) is a saddle point.

1.4.1 Derivatives of integral expressions

Often functions are expressed in terms of integrals. For example

  y(x) = ∫_{a(x)}^{b(x)} f(x, t) dt.   (1.77)

Here t is a dummy variable of integration. Leibniz's¹⁰ rule tells us how to take derivatives of functions in integral form:

  dy(x)/dx = f(x, b(x)) (db(x)/dx) - f(x, a(x)) (da(x)/dx) + ∫_{a(x)}^{b(x)} (∂f(x, t)/∂x) dt.   (1.78)

⁹Ludwig Otto Hesse, 1811-1874, German mathematician, studied under Jacobi.
¹⁰Gottfried Wilhelm von Leibniz, 1646-1716, German mathematician and philosopher of great influence, co-inventor with Sir Isaac Newton, 1643-1727, of the calculus.
Inverting this arrangement in a special case, we note if

  y(x) = y(x₀) + ∫_{x₀}^{x} f(t) dt,   (1.79)

then

  dy(x)/dx = f(x) (dx/dx) - f(x₀) (dx₀/dx) + ∫_{x₀}^{x} (∂f(t)/∂x) dt.   (1.80)

Since x₀ is a constant and f(t) does not depend on x,

  dy(x)/dx = f(x) (1) - f(x₀) (0) + 0,   (1.81)
  dy(x)/dx = f(x).   (1.82)

Note that the integral expression naturally includes the initial condition that when x = x₀, y = y(x₀). This needs to be expressed separately for the differential version of the equation.

Example 1.8
Find dy/dx if

y(x) = ∫_{x}^{x²} (x + 1) t² dt.    (1.83)

Using Leibniz's rule we get

dy(x)/dx = [(x + 1) x⁴](2x) − [(x + 1) x²](1) + ∫_{x}^{x²} t² dt    (1.84)
         = 2x⁶ + 2x⁵ − x³ − x² + [ t³/3 ]_{x}^{x²}    (1.85)
         = 2x⁶ + 2x⁵ − x³ − x² + x⁶/3 − x³/3    (1.86–1.87)
         = (7/3) x⁶ + 2x⁵ − (4/3) x³ − x².    (1.88)

In this case, but not all, we can achieve the same result from explicit formulation of y(x):

y(x) = (x + 1) ∫_{x}^{x²} t² dt    (1.89)
     = (x + 1) [ t³/3 ]_{x}^{x²}    (1.90)
     = (x + 1) ( x⁶/3 − x³/3 )    (1.91)
     = x⁷/3 + x⁶/3 − x⁴/3 − x³/3    (1.92)
dy(x)/dx = (7/3) x⁶ + 2x⁵ − (4/3) x³ − x².    (1.93)

So the two methods give identical results.
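As a quick check, Example 1.8 can be reproduced symbolically; the following is a sketch assuming sympy is available.

```python
# Verifying Example 1.8: differentiate the evaluated integral and
# compare with the Leibniz-rule expression.
import sympy as sp

x, t = sp.symbols('x t')

# y(x) = integral from x to x^2 of (x + 1) t^2 dt
y = sp.integrate((x + 1)*t**2, (t, x, x**2))
dy_dx = sp.diff(y, x)

leibniz = ((x + 1)*(x**2)**2 * 2*x            # f(x, b(x)) * b'(x)
           - (x + 1)*x**2 * 1                 # f(x, a(x)) * a'(x)
           + sp.integrate(t**2, (t, x, x**2)))  # integral of df/dx

assert sp.simplify(dy_dx - leibniz) == 0
assert sp.simplify(sp.expand(dy_dx)
                   - (sp.Rational(7, 3)*x**6 + 2*x**5
                      - sp.Rational(4, 3)*x**3 - x**2)) == 0
```

Both routes give (7/3)x⁶ + 2x⁵ − (4/3)x³ − x², as in the text.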
1.4.2 Calculus of variations
(See Hildebrand, p. 360.) The problem is to find the function y(x), with x ∈ [x₁, x₂], and boundary conditions y(x₁) = y₁, y(x₂) = y₂, such that the integral

I = ∫_{x₁}^{x₂} f(x, y, y′) dx    (1.94)

is an extremum. If y(x) is the desired solution, let Y(x) = y(x) + εh(x), where h(x₁) = h(x₂) = 0. Thus Y(x) also satisfies the boundary conditions; also Y′(x) = y′(x) + εh′(x). We can write

I(ε) = ∫_{x₁}^{x₂} f(x, Y, Y′) dx.

Taking dI/dε, utilizing Leibniz's formula, we get

dI/dε = ∫_{x₁}^{x₂} ( (∂f/∂x)(∂x/∂ε) + (∂f/∂Y)(∂Y/∂ε) + (∂f/∂Y′)(∂Y′/∂ε) ) dx.

Evaluating, we find

dI/dε = ∫_{x₁}^{x₂} ( 0 + (∂f/∂Y) h(x) + (∂f/∂Y′) h′(x) ) dx.

Since I is an extremum at ε = 0, we have dI/dε = 0 for ε = 0. This gives

0 = ∫_{x₁}^{x₂} ( (∂f/∂Y) h(x) + (∂f/∂Y′) h′(x) ) dx |_{ε=0}.

Also when ε = 0, we have Y = y, Y′ = y′, so

0 = ∫_{x₁}^{x₂} ( (∂f/∂y) h(x) + (∂f/∂y′) h′(x) ) dx.

Look at the second term in this integral. Since (∂f/∂y′) h′(x) dx = (∂f/∂y′) dh, from integration by parts we get

∫_{x₁}^{x₂} (∂f/∂y′) h′(x) dx = [ (∂f/∂y′) h(x) ]_{x₁}^{x₂} − ∫_{x₁}^{x₂} d/dx( ∂f/∂y′ ) h(x) dx.

The first term above is zero because of our conditions on h(x₁) and h(x₂). Thus substituting into the original equation we have

∫_{x₁}^{x₂} ( ∂f/∂y − d/dx( ∂f/∂y′ ) ) h(x) dx = 0.    (1.95)

The equality holds for all h(x), so that we must have

∂f/∂y − d/dx( ∂f/∂y′ ) = 0    (1.96)
called the Euler¹¹ equation. While this is, in general, the preferred form of the Euler equation, its explicit dependency on the two end conditions is better displayed by considering a slightly different form. Expanding the total derivative term, that is

d/dx( ∂f(x, y, y′)/∂y′ ) = ∂²f/∂y′∂x + (∂²f/∂y′∂y)(dy/dx) + (∂²f/∂y′∂y′)(dy′/dx)    (1.97)
                         = ∂²f/∂y′∂x + (∂²f/∂y′∂y) y′ + (∂²f/∂y′∂y′) y″    (1.98)

the Euler equation, after slight rearrangement, becomes

(∂²f/∂y′∂y′) y″ + (∂²f/∂y′∂y) y′ + ∂²f/∂y′∂x − ∂f/∂y = 0    (1.99)
f_y′y′ (d²y/dx²) + f_y′y (dy/dx) + ( f_y′x − f_y ) = 0.    (1.100)

This is clearly a second order differential equation for f_y′y′ ≠ 0, and in general, nonlinear. If f_y′y′ is always nonzero, the problem is said to be regular. If f_y′y′ = 0 at any point, the equation is no longer second order, and the problem is said to be singular at such points. Note that satisfaction of two boundary conditions becomes problematic for equations less than second order.

There are several special cases of the function f.

1. f = f(x, y). The Euler equation is

∂f/∂y = 0    (1.101)

which is easily solved:

f(x, y) = A(x)    (1.102)

which, knowing f, is then solved for y(x).

2. f = f(x, y′). The Euler equation is

d/dx( ∂f/∂y′ ) = 0    (1.103)

which yields

∂f/∂y′ = A    (1.104)
f(x, y′) = A y′ + B(x).    (1.105)

Again, knowing f, the equation is solved for y′ and then integrated to find y(x).

3. f = f(y, y′). The Euler equation is

∂f/∂y − d/dx( ∂f(y, y′)/∂y′ ) = 0    (1.106)
∂f/∂y − ( (∂²f/∂y∂y′)(dy/dx) + (∂²f/∂y′∂y′)(dy′/dx) ) = 0    (1.107)
∂f/∂y − (∂²f/∂y∂y′)(dy/dx) − (∂²f/∂y′∂y′)(d²y/dx²) = 0.    (1.108)

Multiply by y′ to get

y′ ( ∂f/∂y − (∂²f/∂y∂y′)(dy/dx) − (∂²f/∂y′∂y′)(d²y/dx²) ) = 0.    (1.109)

Adding and subtracting y″ ∂f/∂y′, this is

( y′ ∂f/∂y + y″ ∂f/∂y′ ) − ( y″ ∂f/∂y′ + y′ (∂²f/∂y∂y′) y′ + y′ (∂²f/∂y′∂y′) y″ ) = 0    (1.110–1.111)

which can be written as

d/dx ( f − y′ ∂f/∂y′ ) = 0    (1.112)

which can be integrated. Thus

f(y, y′) − y′ ∂f/∂y′ = A    (1.113)

which is effectively a first order ordinary differential equation, which is solved. Another integration constant arises. This along with A are determined by the two end point conditions.

¹¹ Leonhard Euler, 1707–1783, prolific Swiss mathematician, born in Basel, died in St. Petersburg.
Example 1.9
Find the curve of minimum length between the points (x₁, y₁) and (x₂, y₂). If y(x) is the curve, then y(x₁) = y₁ and y(x₂) = y₂. The length of the curve is

L = ∫_{x₁}^{x₂} √(1 + (y′)²) dx.

The Euler equation is

d/dx ( y′/√(1 + (y′)²) ) = 0

which can be integrated to give

y′/√(1 + (y′)²) = C.

Solving for y′ we get

y′ = A = √( C²/(1 − C²) )

from which

y = A x + B.

The constants A and B are obtained from the boundary conditions y(x₁) = y₁ and y(x₂) = y₂.
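The reduction just obtained can be checked mechanically; the sketch below (assuming sympy is available) forms the Euler equation for the arc-length integrand f = √(1 + (y′)²) by direct differentiation and confirms that every straight line satisfies it.

```python
# Euler equation df/dy - d/dx(df/dy') = 0 for f = sqrt(1 + y'^2),
# built symbolically and tested on y = C1*x + C2.
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')
y = sp.Function('y')(x)
yp = y.diff(x)

f = sp.sqrt(1 + yp**2)          # arc-length integrand of Example 1.9
euler = sp.diff(f, y) - sp.diff(sp.diff(f, yp), x)

# Any straight line must satisfy the Euler equation identically.
residual = euler.subs(y, C1*x + C2).doit()
assert sp.simplify(residual) == 0
```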
[Figure 1.5: Body of revolution of minimum surface area for (x₁, y₁) = (−1, 3.08616) and (x₂, y₂) = (2, 2.25525). Left: the curve with endpoints at (−1, 3.09), (2, 2.26) which minimizes the surface area of a body of revolution; right: the corresponding surface of revolution.]
Example 1.10

Find the curve through the points (x₁, y₁) and (x₂, y₂), such that the surface area of the body of revolution obtained by rotating the curve around the x-axis is a minimum. We wish to minimize

I = ∫_{x₁}^{x₂} y √(1 + (y′)²) dx.

Here f(y, y′) = y √(1 + (y′)²), so the Euler equation reduces to

f(y, y′) − y′ ∂f/∂y′ = A
y √(1 + (y′)²) − y (y′)²/√(1 + (y′)²) = A
y (1 + (y′)²) − y (y′)² = A √(1 + (y′)²)
y = A √(1 + (y′)²)
y′ = √( (y/A)² − 1 )
y(x) = A cosh( (x − B)/A ).

This is a catenary. The constants A and B are determined from the boundary conditions y(x₁) = y₁ and y(x₂) = y₂. In general this requires a trial and error solution of simultaneous algebraic equations. If (x₁, y₁) = (−1, 3.08616) and (x₂, y₂) = (2, 2.25525), one finds that solution of the resulting algebraic equations gives A = 2, B = 1. For these conditions, the curve y(x) along with the resulting body of revolution of minimum surface area are plotted in Figure 1.5.
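The "trial and error" solution of the two end-point equations can be delegated to a numerical root finder; here is a sketch with sympy's `nsolve`, where the starting guess (1.8, 0.9) is our own assumption.

```python
# Solve A*cosh((x - B)/A) = y at the two endpoints of Example 1.10.
import sympy as sp

A, B = sp.symbols('A B', real=True)

eqs = (A*sp.cosh((-1 - B)/A) - 3.08616,
       A*sp.cosh((2 - B)/A) - 2.25525)
sol = sp.nsolve(eqs, (A, B), (1.8, 0.9))   # Newton iteration from a rough guess
A_val, B_val = float(sol[0]), float(sol[1])

assert abs(A_val - 2) < 1e-3 and abs(B_val - 1) < 1e-3
```

The iteration recovers A = 2, B = 1, matching the text.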
1.5 Lagrange multipliers

Suppose we have to determine the extremum of f(x₁, x₂, …, x_m) subject to the n constraints

g_i(x₁, x₂, …, x_m) = 0,  i = 1, 2, …, n.    (1.114)

Define

f* = f − λ₁g₁ − λ₂g₂ − … − λ_n g_n    (1.115)

where the λ_i (i = 1, 2, …, n) are unknown constants called Lagrange¹² multipliers. To get the extremum of f*, we equate to zero its derivatives with respect to x₁, x₂, …, x_m. Thus we have

∂f*/∂x_i = 0,  i = 1, …, m    (1.116)
g_i = 0,  i = 1, …, n    (1.117)

which are (m + n) equations that can be solved for x_i (i = 1, 2, …, m) and λ_i (i = 1, 2, …, n).

Example 1.11
Extremize f = x² + y² subject to the constraint 5x² − 6xy + 5y² = 8. Let

f* = x² + y² − λ(5x² − 6xy + 5y² − 8)

from which

2x − 10λx + 6λy = 0
2y + 6λx − 10λy = 0
5x² − 6xy + 5y² = 8.

From the first equation

λ = 2x/(10x − 6y)

which, when substituted into the second, gives

x = ±y.

The last equation gives the extrema to be at (x, y) = (√2, √2), (−√2, −√2), (1/√2, −1/√2), (−1/√2, 1/√2). The first two sets give f = 4 (maximum) and the last two f = 1 (minimum). The function to be maximized along with the constraint function and its image are plotted in Figure 1.6.
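The (m + n)-equation system of Example 1.11 is small enough to hand to a symbolic solver; a sketch assuming sympy is available:

```python
# Lagrange-multiplier system for f = x^2 + y^2 on 5x^2 - 6xy + 5y^2 = 8.
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x**2 + y**2
g = 5*x**2 - 6*x*y + 5*y**2 - 8
fstar = f - lam*g

eqs = [sp.diff(fstar, x), sp.diff(fstar, y), g]
sols = sp.solve(eqs, [x, y, lam], dict=True)

# Values of f at the critical points: minimum 1 and maximum 4.
values = sorted({sp.simplify(f.subs(s)) for s in sols})
assert values == [1, 4]
```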
A similar technique can be used for the extremization of a functional with constraint. We wish to find the function y(x), with x ∈ [x₁, x₂], and y(x₁) = y₁, y(x₂) = y₂, such that the integral

I = ∫_{x₁}^{x₂} f(x, y, y′) dx    (1.118)

is an extremum, and satisfies the constraint

g = 0.    (1.119)

Define

I* = I − λg    (1.120)

and continue as before.

¹² Joseph-Louis Lagrange, 1736–1813, Italian-born French mathematician.

[Figure 1.6: Unconstrained function f(x, y) along with constrained function and constraint function (image of constrained function).]

Example 1.12
Extremize

I = ∫₀ᵃ y √(1 + (y′)²) dx

with y(0) = y(a) = 0, and subject to the constraint

∫₀ᵃ √(1 + (y′)²) dx = ℓ.

That is, find the maximum surface area of a body of revolution which has a constant length. Let

g = ∫₀ᵃ √(1 + (y′)²) dx − ℓ = 0.

Then let

I* = I − λg = ∫₀ᵃ y √(1 + (y′)²) dx − λ ( ∫₀ᵃ √(1 + (y′)²) dx − ℓ )
   = ∫₀ᵃ (y − λ) √(1 + (y′)²) dx + λℓ
   = ∫₀ᵃ [ (y − λ) √(1 + (y′)²) + λℓ/a ] dx.
With

f* = (y − λ) √(1 + (y′)²) + λℓ/a

we have the Euler equation

d/dx( ∂f*/∂y′ ) − ∂f*/∂y = 0.

Integrating with the earlier developed relationship (1.113) for f = f(y, y′), we have

(y − λ) √(1 + (y′)²) − y′ ( y′ (y − λ)/√(1 + (y′)²) ) = A

from which

(y − λ)(1 + (y′)²) − (y′)²(y − λ) = A √(1 + (y′)²)
y − λ = A √(1 + (y′)²)
y′ = √( ((y − λ)/A)² − 1 )

and, absorbing an integration constant into B,

y = λ + A cosh( (x − B)/A ).

Here A, B, λ have to be numerically determined from the three conditions y(0) = y(a) = 0, g = 0. If we take the case where a = 1, ℓ = 5/4, we find that A = 0.422752, B = 1/2, λ = −0.754549. For these values, the curve of interest, along with the surface of revolution, is plotted in Figure 1.7.

[Figure 1.7: Curve of length ℓ = 5/4 with y(0) = y(1) = 0 whose surface area of corresponding body of revolution (also shown) is maximum.]

Problems

1. If y z³ + z x + x⁴ y = 3y,
(a) find a general expression for ∂z/∂x|_y and ∂z/∂y|_x;
(b) evaluate ∂z/∂x|_y and ∂z/∂y|_x at (x, y) = (1, 1);
(c) give a computer generated plot of the surface z(x, y) for −2 < x < 2, −2 < y < 2. You may wish to use an appropriate implicit plotting function in the xmaple software program.

2. Determine the general curve y(x), with x ∈ [x₁, x₂], of total length L with endpoints y(x₁) = y₁ and y(x₂) = y₂ fixed, for which the area under the curve, ∫_{x₁}^{x₂} y dx, is a maximum. Show that if (x₁, y₁) = (0, 0); (x₂, y₂) = (1, 1); L = 3/2, that the curve which maximizes the area and satisfies all constraints is the circle (x − 1.2453)² + (y + 0.2453)² = (1.26920)². Plot this curve. What is the area? Verify that each constraint is satisfied.

3. A body slides due to gravity from point A to point B along the curve y = f(x). There is no friction and the initial velocity is zero. If points A and B are fixed, find f(x) for which the time taken will be the least. What is this time? If A: (x, y) = (1, 2), B: (x, y) = (0, 0), where distances are in meters, plot the minimum time curve, and find the minimum time if the gravitational acceleration is g = −9.81 m/s² j.

4. The speed of light in different media separated by a planar interface is c₁ and c₂. Show that if the time taken for light to go from a fixed point in one medium to another in the second is a minimum, the angle of incidence, α_i, and the angle of refraction, α_r, are related by

sin α_i / sin α_r = c₁/c₂.

5. F is a quadrilateral with perimeter P. Find the form of F such that its area is a maximum. What is this area?

6. Determine the shape of a parallelogram with a given area which has the least perimeter.

7. Find the length of the shortest curve between two points with cylindrical coordinates (r, θ, z) = (a, 0, 0) and (r, θ, z) = (a, Θ, Z) along the surface of the cylinder r = a.

8. Show that if a ray of light is reflected from a mirror, the shortest distance of travel is when the angle of incidence on the mirror is equal to the angle of reflection.

9. Find the point on the plane ax + by + cz = d which is nearest to the origin.

10. Extremize the integral ∫₀¹ (y′)² dx subject to the end conditions y(0) = 0, y(1) = 0, and also the constraint ∫₀¹ y dx = 1. Plot the function y(x) which extremizes the integral and satisfies all constraints.

11. Consider the integral I = ∫₀¹ (y′ − y + eˣ)² dx. What kind of extremum does this integral have (maximum or minimum)? What should y(x) be for this extremum? What does the solution of the Euler equation give, if y(0) = 0 and y(1) = −e? Find the value of the extremum. Plot y(x) for the extremum. If y₀(x) is the solution of the Euler equation, compute I for y₁(x) = y₀(x) + h(x), where you can take any h(x) you like, but with h(0) = h(1) = 0.

12. Find the extremum of the functional ∫₀¹ (x²(y′)² + 40x⁴y) dx with y(0) = 0 and y(1) = 1. Plot y(x) which renders the integral at an extreme point.

13. Find the point on the curve of intersection of z − xy = 10 and x + y + z = 1 that is closest to the origin.

14. Find a function y(x) with y(0) = 1, y(1) = 0 that extremizes the integral

I = ∫₀¹ (1/y) √(1 + (dy/dx)²) dx.

Plot y(x) for this function.

15. Show that the functions

u = (x + y)/(x − y),  v = xy/(x − y)²

are functionally dependent.

16. For elliptic cylindrical coordinates

ξ¹ = cosh x¹ cos x²
ξ² = sinh x¹ sin x²
ξ³ = x³

find the Jacobian matrix J and the metric tensor G. Find the inverse transformation. Plot lines of constant x¹ and x² in the ξ¹, ξ² plane.

17. For the elliptic coordinate system of the previous problem, find ∇ · u where u is an arbitrary vector.

18. Find the covariant derivative of the contravariant velocity vector in cylindrical coordinates.
Chapter 2

First-order ordinary differential equations

See Lopez, Chapters 1–3. See Riley, Hobson, and Bence, Chapter 12. See Bender and Orszag.

A first-order ordinary differential equation is of the form

F(x, y, y′) = 0    (2.1)

where y′ = dy/dx.

2.1 Separation of variables

Equation (2.1) is separable if it can be written in the form

P(x) dx = Q(y) dy    (2.2)

which can then be integrated.

Example 2.1
Solve

y y′ = (8x + 1)/y,  with y(1) = −5.

Separating variables,

y² dy = 8x dx + dx.

Integrating, we have

y³/3 = 4x² + x + C.

The initial condition gives C = −140/3, so that the solution is

y³ = 12x² + 3x − 140.

The solution is plotted in Figure 2.1.
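Since the solution of Example 2.1 is implicit, implicit differentiation is the natural way to check it; a sketch assuming sympy is available:

```python
# Check that y^3 = 12x^2 + 3x - 140 solves y y' = (8x + 1)/y with y(1) = -5.
import sympy as sp

x, y = sp.symbols('x y')

F = y**3 - 12*x**2 - 3*x + 140      # implicit solution F(x, y) = 0

yp = sp.idiff(F, y, x)              # dy/dx along F(x, y) = 0
assert sp.simplify(y*yp - (8*x + 1)/y) == 0

# The initial point (1, -5) lies on the solution curve.
assert F.subs({x: 1, y: -5}) == 0
```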
[Figure 2.1: y(x) which solves y y′ = (8x + 1)/y with y(1) = −5.]

2.2 Homogeneous equations

An equation is homogeneous if it can be written in the form

y′ = f(y/x).    (2.3)

Defining

u = y/x    (2.4)

we get

y = u x,    (2.5)

from which y′ = u + x u′. Substituting into equation (2.3) and separating variables, we have

u + x du/dx = f(u)    (2.6)
x du/dx = f(u) − u    (2.7)
du/(f(u) − u) = dx/x    (2.8)

which can be integrated. Equations of the form

y′ = f( (ax + by + c)/(dx + ey + h) )    (2.9)

can be similarly integrated.
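The u = y/x substitution is easy to exercise on a minimal case; the following sketch (assuming sympy is available) uses y′ = y/x + 1, for which f(u) − u = 1.

```python
# For y' = y/x + 1, du/(f(u) - u) = dx/x gives u = ln x + C,
# hence y = x*ln(x) + C*x; verify it satisfies the equation.
import sympy as sp

x, C = sp.symbols('x C', positive=True)

y = x*sp.log(x) + C*x
assert sp.simplify(sp.diff(y, x) - (y/x + 1)) == 0
```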
Example 2.2
Solve

x y′ = 3y + y²/x,  with y(1) = 4.

This can be written as

y′ = 3 (y/x) + (y/x)².

Letting u = y/x, we get

x du/dx = 2u + u².

Since

1/(2u + u²) = 1/(2u) − 1/(4 + 2u)

both sides can be integrated to give

(1/2) ( ln |u| − ln |2 + u| ) = ln |x| + C.

The initial condition gives C = (1/2) ln(2/3), so that the solution can be reduced to

| y/(2x + y) | = (2/3) x².

This can be solved explicitly for y(x) for each case of the absolute value. The first case,

y(x) = (4/3) x³ / ( 1 − (2/3) x² ),

is seen to satisfy the condition at x = 1. The second case is discarded as it does not satisfy the condition at x = 1. The solution is plotted in Figure 2.2.

2.3 Exact equations

A differential equation is exact if it can be written in the form

dF(x, y) = 0    (2.10)

where F(x, y) = 0 is a solution to the differential equation. Using the chain rule to expand the derivative,

∂F/∂x dx + ∂F/∂y dy = 0.

So for an equation of the form

P(x, y) dx + Q(x, y) dy = 0    (2.11)
[Figure 2.2: y(x) which solves x y′ = 3y + y²/x with y(1) = 4.]

we have an exact differential if

∂F/∂x = P(x, y),  ∂F/∂y = Q(x, y)    (2.12)
∂²F/∂x∂y = ∂P/∂y,  ∂²F/∂y∂x = ∂Q/∂x.    (2.13)

As long as F(x, y) is continuous and differentiable, the mixed second partials are equal; thus

∂P/∂y = ∂Q/∂x    (2.14)

must hold if F(x, y) is to exist and render the original differential equation to be exact.

Example 2.3
Solve

dy/dx = e^(x−y) / ( e^(x−y) − 1 ).

Rearranging, we get

e^(x−y) dx + ( 1 − e^(x−y) ) dy = 0.

Here

P = e^(x−y),  Q = 1 − e^(x−y)
∂P/∂y = −e^(x−y),  ∂Q/∂x = −e^(x−y).

Since ∂P/∂y = ∂Q/∂x, the equation is exact. Thus

∂F/∂x = P(x, y)
F(x, y) = e^(x−y) + A(y)
∂F/∂y = −e^(x−y) + dA/dy = Q(x, y) = 1 − e^(x−y)
dA/dy = 1
A(y) = y − C
F(x, y) = e^(x−y) + y − C = 0
e^(x−y) + y = C.

The solution for various values of C is plotted in Figure 2.3.

[Figure 2.3: y(x) which solves y′ = e^(x−y)/(e^(x−y) − 1).]

2.4 Integrating factors

Sometimes, an equation of the form (2.11) is not exact, but can be made so by multiplication by a function u(x, y), where u is called the integrating factor. It is not always obvious that integrating factors exist; sometimes they do not.

Example 2.4
Solve

dy/dx = 2xy/(x² − y²).

Separating variables, we get

(x² − y²) dy = 2xy dx.
This is not exact according to criterion (2.14). It turns out that the integrating factor is y⁻², so that on multiplication we get

(2x/y) dx − ( x²/y² − 1 ) dy = 0.

This can be written as

d( x²/y + y ) = 0

which gives

x²/y + y = C,  or  x² + y² = C y.

The solution for various values of C is plotted in Figure 2.4.

[Figure 2.4: y(x) which solves y′ = 2xy/(x² − y²).]

The general first-order linear equation

dy(x)/dx + P(x) y(x) = Q(x),  with y(x₀) = y₀    (2.15)

can be solved using the integrating factor

e^( ∫ₐˣ P(s) ds ) = e^( F(x) − F(a) ).

We choose a such that F(a) = 0. Multiply by the integrating factor and proceed:

e^( ∫ₐˣ P(s) ds ) dy(x)/dx + e^( ∫ₐˣ P(s) ds ) P(x) y(x) = e^( ∫ₐˣ P(s) ds ) Q(x)    (2.16)
product rule:  d/dx ( e^( ∫ₐˣ P(s) ds ) y(x) ) = e^( ∫ₐˣ P(s) ds ) Q(x)    (2.17)
replace x by t:  d/dt ( e^( ∫ₐᵗ P(s) ds ) y(t) ) = e^( ∫ₐᵗ P(s) ds ) Q(t)    (2.18)
integrate:  ∫_{x₀}^{x} d/dt ( e^( ∫ₐᵗ P(s) ds ) y(t) ) dt = ∫_{x₀}^{x} e^( ∫ₐᵗ P(s) ds ) Q(t) dt    (2.19)
e^( ∫ₐˣ P(s) ds ) y(x) − e^( ∫ₐ^{x₀} P(s) ds ) y(x₀) = ∫_{x₀}^{x} e^( ∫ₐᵗ P(s) ds ) Q(t) dt    (2.20)

which yields

y(x) = e^( −∫ₐˣ P(s) ds ) ( e^( ∫ₐ^{x₀} P(s) ds ) y₀ + ∫_{x₀}^{x} e^( ∫ₐᵗ P(s) ds ) Q(t) dt ).    (2.21)

Example 2.5
Solve

y′ − y = e^(2x),  y(0) = y₀.

Here P(x) = −1, or P(s) = −1, and

∫ₐˣ P(s) ds = ∫ₐˣ (−1) ds = a − x.

So F(τ) = −τ. For F(a) = 0, take a = 0.
So the integrating factor is

e^( ∫₀ˣ P(s) ds ) = e^(0−x) = e^(−x).

Multiplying and rearranging, we get

e^(−x) dy(x)/dx − e^(−x) y(x) = eˣ
d/dx ( e^(−x) y(x) ) = eˣ
∫₀ˣ d/dt ( e^(−t) y(t) ) dt = ∫₀ˣ eᵗ dt
e^(−x) y(x) − e⁰ y(0) = eˣ − e⁰
e^(−x) y(x) − y₀ = eˣ − 1
y(x) = eˣ ( y₀ + eˣ − 1 )
y(x) = e^(2x) + (y₀ − 1) eˣ.

The solution for various values of y₀ is plotted in Figure 2.5.

[Figure 2.5: y(x) which solves y′ − y = e^(2x) with y(0) = y₀.]
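Example 2.5 is a convenient test of the general formula (2.21); a sketch assuming sympy is available:

```python
# Solve y' - y = e^{2x}, y(0) = y0, and compare with the closed form.
import sympy as sp

x, y0 = sp.symbols('x y0')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x) - y(x), sp.exp(2*x))
sol = sp.dsolve(ode, y(x), ics={y(0): y0})

expected = sp.exp(2*x) + (y0 - 1)*sp.exp(x)
assert sp.simplify(sol.rhs - expected) == 0
```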
2.5 Bernoulli equation

Some first-order nonlinear equations also have analytical solutions. An example is the Bernoulli¹ equation

y′ + P(x) y = Q(x) yⁿ    (2.22)

where n ≠ 1. Let

u = y^(1−n)    (2.23)

so that

y = u^(1/(1−n)).

The derivative is

y′ = (1/(1−n)) u^(n/(1−n)) u′.

Substituting in equation (2.22), we get

(1/(1−n)) u^(n/(1−n)) u′ + P(x) u^(1/(1−n)) = Q(x) u^(n/(1−n)).

This can be written as

u′ + (1 − n) P(x) u = (1 − n) Q(x)

which is a first-order linear equation of the form (2.15) and can be solved.

¹ after one of the members of the prolific Bernoulli family.
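A minimal sketch of the Bernoulli reduction (assuming sympy is available), using the concrete case y′ + y = y², i.e. P = Q = 1 and n = 2, which we chose for illustration:

```python
# u = y^{1-n} = 1/y turns y' + y = y^2 into u' - u = -1, with
# solution u = 1 + C*exp(x), i.e. y = 1/(1 + C*exp(x)).
import sympy as sp

x, C1 = sp.symbols('x C1')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x) + y(x), y(x)**2)
candidate = 1/(1 + C1*sp.exp(x))

assert sp.checkodesol(ode, sp.Eq(y(x), candidate))[0] is True
```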
2.6 Riccati equation

A Riccati² equation is of the form

dy/dx = P(x) y² + Q(x) y + R(x).    (2.24)

If we know a specific solution y = S(x) of this equation, the general solution can then be found. Let

y = S(x) + 1/z(x);    (2.25)

thus

dy/dx = dS/dx − (1/z²) dz/dx.    (2.26)

Substituting into equation (2.24), we get

dS/dx − (1/z²) dz/dx = P ( S + 1/z )² + Q ( S + 1/z ) + R    (2.27)
dS/dx − (1/z²) dz/dx = P ( S² + 2S/z + 1/z² ) + Q ( S + 1/z ) + R.    (2.28)

Since S(x) is itself a solution to equation (2.24), we subtract appropriate terms to get

−(1/z²) dz/dx = P ( 2S/z + 1/z² ) + Q (1/z)    (2.29)
−dz/dx = P ( 2Sz + 1 ) + Q z    (2.30)
dz/dx + ( 2 P(x) S(x) + Q(x) ) z = −P(x).    (2.31)

Again this is a first order linear equation in z and x of the form of equation (2.15) and can be solved.

Example 2.6
Solve

y′ = (e^(−3x)/x) y² − (1/x) y + 3 e^(3x).

One solution is

y = S(x) = e^(3x).

Verify:

3 e^(3x) = (e^(−3x)/x) e^(6x) − (1/x) e^(3x) + 3 e^(3x)
3 e^(3x) = e^(3x)/x − e^(3x)/x + 3 e^(3x)
3 e^(3x) = 3 e^(3x).

So let

y = e^(3x) + 1/z.

Also we have

P(x) = e^(−3x)/x,  Q(x) = −1/x,  R(x) = 3 e^(3x).

Substituting into equation (2.31), we get

dz/dx + ( 2 (e^(−3x)/x) e^(3x) − 1/x ) z = −e^(−3x)/x
dz/dx + z/x = −e^(−3x)/x.

² Jacopo Riccati, 1676–1754, Venetian mathematician.
The integrating factor here is

e^( ∫ dx/x ) = e^(ln x) = x.

Multiplying by the integrating factor,

x dz/dx + z = −e^(−3x)
d(xz)/dx = −e^(−3x)

which can be integrated as

z = e^(−3x)/(3x) + C/x = ( e^(−3x) + 3C )/(3x).

Since y = S(x) + 1/z, the solution is thus

y = e^(3x) + 3x/( e^(−3x) + 3C ).

The solution for various values of C is plotted in Figure 2.6.

2.7 Reduction of order

There are higher order equations that can be reduced to first-order equations and then solved.

2.7.1 y absent

If

f(x, y′, y″) = 0    (2.32)

then let u(x) = y′. Thus u′(x) = y″, and the equation reduces to

f( x, u, du/dx ) = 0    (2.33)

which is an equation of first order.
[Figure 2.6: y(x) which solves y′ = e^(−3x) y²/x − y/x + 3 e^(3x).]

Example 2.7
Solve

x y″ + 2 y′ = 4x³.

Let u = y′, so that

x du/dx + 2u = 4x³.

Multiplying by x,

x² du/dx + 2xu = 4x⁴
d( x²u )/dx = 4x⁴

which is an equation of first order. This can be integrated to give

u = (4/5) x³ + C₁/x²

from which

y = (1/5) x⁴ − C₁/x + C₂

for x ≠ 0.

2.7.2 x absent

If

f(y, y′, y″) = 0    (2.34)
then let u(x) = y′, so that

y″ = du/dx = (du/dy)(dy/dx) = u du/dy.

The equation becomes

f( y, u, u du/dy ) = 0    (2.35)

which is also an equation of first order. Note however that the independent variable is now y while the dependent variable is u.

Example 2.8
Solve

y″ − 2 y y′ = 0,  with y(0) = y₀, y′(0) = y₀′.

Let u = y′, so that y″ = u du/dy. The equation becomes

u du/dy − 2yu = 0.

Now u = 0 satisfies the equation. Thus dy/dx = 0, so y = C; applying one initial condition, y = y₀. This satisfies the initial conditions only under special circumstances, i.e. y₀′ = 0. For u ≠ 0,

du/dy = 2y
u = y² + C₁.

Applying the initial conditions,

y₀′ = y₀² + C₁,  C₁ = y₀′ − y₀²
dy/dx = y² + y₀′ − y₀²

from which, for y₀′ − y₀² > 0,

dy/( y² + y₀′ − y₀² ) = dx
( 1/√(y₀′ − y₀²) ) tan⁻¹( y/√(y₀′ − y₀²) ) = x + C₂
C₂ = ( 1/√(y₀′ − y₀²) ) tan⁻¹( y₀/√(y₀′ − y₀²) )
y(x) = √(y₀′ − y₀²) tan( x √(y₀′ − y₀²) + tan⁻¹( y₀/√(y₀′ − y₀²) ) ).

The solution for y₀ = 0, y₀′ = 1 is plotted in Figure 2.7.
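For y₀ = 0, y₀′ = 1 the solution above reduces to y = tan x; a quick symbolic spot check (assuming sympy is available):

```python
# y = tan(x) should satisfy y'' = 2 y y' with y(0) = 0, y'(0) = 1.
import sympy as sp

x = sp.Symbol('x')
y = sp.tan(x)

assert sp.simplify(sp.diff(y, x, 2) - 2*y*sp.diff(y, x)) == 0
assert y.subs(x, 0) == 0
assert sp.diff(y, x).subs(x, 0) == 1
```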
[Figure 2.7: y(x) which solves y″ − 2yy′ = 0 with y(0) = 0, y′(0) = 1.]

For y₀′ − y₀² = 0,

dy/dx = y²
dy/y² = dx
−1/y = x + C₂
−1/y₀ = C₂
−1/y = x − 1/y₀
y = 1/( 1/y₀ − x ).

For y₀′ − y₀² < 0, one would obtain solutions in terms of hyperbolic trigonometric functions.

2.8 Uniqueness and singular solutions

Not all differential equations have solutions, as can be seen by considering

y′ = (y/x) ln y.

Here y = e^(Cx) is the general solution of the differential equation, but if an initial condition is imposed at x = 0 with y(0) ≠ 1, no finite value of C allows it to be satisfied.

Theorem
Let f(x, y) be continuous and satisfy |f(x, y)| ≤ m and the Lipschitz condition

|f(x, y) − f(x, y₀)| ≤ k |y − y₀|

in a bounded region R. Then the equation y′ = f(x, y) has one and only one solution containing the point (x₀, y₀).
A stronger condition is that if f(x, y) and ∂f/∂y are finite and continuous at (x₀, y₀), then a solution of y′ = f(x, y) exists and is unique in the neighborhood of this point.

Example 2.9
Analyze the uniqueness of the solution of

y′ = −K √y,  with y(T) = 0.

Taking

f(x, y) = −K √y
∂f/∂y = −K/(2√y)

which is not finite at y = 0. So the solution cannot be guaranteed to be unique. In fact, one solution is

y(t) = (1/4) K² (t − T)².

Another solution which satisfies the initial condition and differential equation is

y(t) = 0.

Obviously the solution is not unique.

Consider the equation y′ = 3y^(2/3), with y(2) = 0. On separating variables and integrating,

3 y^(1/3) = 3x + 3C

so that the general solution is

y = (x + C)³.

Applying the initial condition,

y = (x − 2)³.

However,

y = 0

and

y = { (x − 2)³ if x ≥ 2;  0 if x < 2 }

are also solutions. These singular solutions cannot be obtained from the general solution. Both satisfy the differential equation; values of y and y′ are the same at intersections. The two solutions are plotted in Figure 2.8.
[Figure 2.8: Two solutions y(x) which satisfy y′ = 3y^(2/3) with y(2) = 0.]

2.9 Clairaut equation

The solution of a Clairaut³ equation

y = x y′ + f(y′)    (2.36)

can be obtained by letting y′ = u(x), so that

y = x u + f(u).    (2.37)

Differentiating with respect to x, we get

y′ = u + x du/dx + (df/du)(du/dx)    (2.38)
u = u + x du/dx + (df/du)(du/dx)    (2.39–2.40)
( x + df/du ) du/dx = 0.    (2.41)

If we take

du/dx = 0    (2.42)

we can integrate to get

u = C

³ Alexis Claude Clairaut, 1713–1765, Parisian/French mathematician.
where C is a constant. Then, from equation (2.37), we get the general solution

y = Cx + f(C).    (2.43)

Applying an initial condition y(x₀) = y₀ gives what we will call the regular solution. But if we take

x + df/du = 0    (2.44)

then this equation along with equation (2.37),

y = −u df/du + f(u),    (2.45)

form a set of parametric equations for what we call the singular solution.

Example 2.10
Solve

y = x y′ + (y′)³,  with y(0) = y₀.

Take u = y′; then f(u) = u³ and df/du = 3u², so the general solution is

y = Cx + C³.

Use the initial condition to evaluate C and get the regular solution:

y₀ = C(0) + C³,  C = y₀^(1/3)
y = y₀^(1/3) x + y₀.

The parametric form of a singular solution is

y = −2u³,  x = −3u².

Eliminating the parameter u, we obtain

y = ±2 (−x/3)^(3/2)

as the explicit form of the singular solution. The regular solutions and singular solution are plotted in Figure 2.9.

Note:
• In contrast to solutions for equations linear in y′, the trajectories y(x; x₀) cross at numerous locations in the x–y plane. This is a consequence of the differential equation's nonlinearity.
• While the singular solution satisfies the differential equation, it satisfies this initial condition only when y₀ = 0.
• Because of nonlinearity, addition of the regular and singular solutions does not yield a solution to the differential equation.
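Both solution families of Example 2.10 can be checked directly; a sketch assuming sympy is available, with the singular branch spot-checked at exact rational points on x ≤ 0:

```python
# Residual of the Clairaut equation y = x y' + (y')^3 for a candidate y(x).
import sympy as sp

x, C = sp.symbols('x C')

def residual(y):
    yp = sp.diff(y, x)
    return y - (x*yp + yp**3)

# Regular solutions: straight lines y = Cx + C^3, identically.
assert sp.simplify(residual(C*x + C**3)) == 0

# Singular solution y = 2(-x/3)^(3/2), defined for x <= 0.
y_sing = 2*(-x/sp.Integer(3))**sp.Rational(3, 2)
for xv in (-3, sp.Rational(-3, 4)):
    assert sp.simplify(residual(y_sing).subs(x, xv)) == 0
```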
[Figure 2.9: Solutions y(x) which satisfy y = x y′ + (y′)³ with y(0) = y₀.]

Problems

1. Find the general solution of the differential equation

y′ + x² y (1 + y) = 1 + x³ (1 + x).

Plot solutions for y(0) = −2, 0, 2.

2. Solve

ẋ = 2tx + t e^(−t²) x².

Plot a solution for x(0) = 1.

3. Solve

dy/dx = (x − y)/(x + y).

Plot a solution for y(0) = 0.

4. Solve x y′ + 2y = x. Plot a solution for y(1) = 1.

5. Given that y₁ = x⁻¹ is one solution of y″ + (3/x) y′ + (1/x²) y = 0, find the other solution.

6. Solve 3x²y² dx + 2x³y dy = 0.

7. Solve y″ − 2yy′ = 0 with y(0) = 0, y′(0) = 3.

8. Solve the nonlinear equation (y′ − x) y″ + 2y′ = 2x.

9. Solve:
(a) y′ tan y + 2 sin x sin(π/2 + x) + ln x = 0
(b) x y′ − 2y − x⁴ − y² = 0
(c) y′ cos y cos x + sin y sin x = 0
(d) y′ + y cot x = eˣ
(e) x⁵ y′ + y + eˣ (x⁶ − 1) y³ = 0, with y(1) = e^(−1/2)
(f) y′ + y² − x y − 1 = 0
(g) y′ (x + y²) − y = 0
(h) y′ = (x + 2y − 5)/(−2x − y + 4)
(i) y′ + xy = y
Plot solutions for y(0) = −1, 0, 1 (except for part e).

10. Find all solutions of (x + 1)(y′)² + (x − y) y′ − y = 0.

11. Find an a for which a unique solution of

(y′)⁴ + 8(y′)³ + (3a + 16)(y′)² + 12a y′ + 2a² = 0,  with y(1) = −2,

exists. Find the solution.

12. Solve y′ − (1/x²) y² + (1/x) y = 1.
Chapter 3

Linear ordinary differential equations

See Kaplan; Riley, Hobson, and Bence, Chapter 13; Bender and Orszag; Lopez, Chapter 5; and Friedman, Chapter 3.

3.1 Linearity and linear independence

An ordinary differential equation can be written in the form

L(y) = f(x)    (3.1)

where y(x) is an unknown function. The operator L is composed of a combination of derivatives d/dx, d²/dx², etc.    (3.2)

L is linear if

L(y₁ + y₂) = L(y₁) + L(y₂)    (3.3)

and

L(αy) = α L(y)    (3.4)

where α is a scalar. The general form of L is

L = Pₙ(x) dⁿ/dxⁿ + Pₙ₋₁(x) dⁿ⁻¹/dxⁿ⁻¹ + … + P₁(x) d/dx + P₀(x).    (3.5)

The ordinary differential equation (3.1) is then linear. The equation is said to be homogeneous if f(x) = 0, giving

L(y) = 0.    (3.6)

A homogeneous equation of order n can be shown to have n linearly independent solutions; these are called complementary functions. If yᵢ (i = 1, …, n) are the complementary functions of the equation, then

y(x) = Σᵢ₌₁ⁿ Cᵢ yᵢ(x)

is the general solution of the homogeneous equation. If yₚ(x) is a particular solution of equation (3.1), the general solution is then

y(x) = yₚ(x) + Σᵢ₌₁ⁿ Cᵢ yᵢ(x).    (3.7)

Definition: The functions y₁(x), y₂(x), …, yₙ(x) are said to be linearly independent when C₁y₁(x) + C₂y₂(x) + … + Cₙyₙ(x) = 0 is true only when C₁ = C₂ = … = Cₙ = 0.
Now we would like to show that any solution φ(x) to the homogeneous equation L(y) = 0 can be written as a linear combination of the n complementary functions yᵢ(x):

C₁y₁(x) + C₂y₂(x) + … + Cₙyₙ(x) = φ(x).    (3.8)

We can form additional equations by taking a series of derivatives up to n − 1:

C₁y₁′(x) + C₂y₂′(x) + … + Cₙyₙ′(x) = φ′(x)    (3.9)
⋮
C₁y₁⁽ⁿ⁻¹⁾(x) + C₂y₂⁽ⁿ⁻¹⁾(x) + … + Cₙyₙ⁽ⁿ⁻¹⁾(x) = φ⁽ⁿ⁻¹⁾(x).    (3.10–3.11)

This is a linear system of algebraic equations:

[ y₁        y₂        …  yₙ        ] [ C₁ ]   [ φ(x)       ]
[ y₁′       y₂′       …  yₙ′       ] [ C₂ ] = [ φ′(x)      ]    (3.12)
[ ⋮                                ] [ ⋮  ]   [ ⋮          ]
[ y₁⁽ⁿ⁻¹⁾   y₂⁽ⁿ⁻¹⁾   …  yₙ⁽ⁿ⁻¹⁾   ] [ Cₙ ]   [ φ⁽ⁿ⁻¹⁾(x)  ]

For a unique solution, we need the determinant of the coefficient matrix to be nonzero. This particular determinant is known as the Wronskian¹ W of y₁(x), y₂(x), …, yₙ(x) and is defined as

W = det [ y₁ y₂ … yₙ ; y₁′ y₂′ … yₙ′ ; … ; y₁⁽ⁿ⁻¹⁾ y₂⁽ⁿ⁻¹⁾ … yₙ⁽ⁿ⁻¹⁾ ].    (3.13)

W ≠ 0 indicates linear independence of the functions y₁(x), y₂(x), …, yₙ(x). Unfortunately, the converse is not always true; when W = 0 the complementary functions may or may not be linearly dependent, though in most cases W = 0 indeed implies linear dependence.

Example 3.1
Determine the linear independence of (a) y₁ = x and y₂ = 2x; (b) y₁ = x and y₂ = x²; and (c) y₁ = x² and y₂ = x|x| for x ∈ (−1, 1).

(a) W = det [ x 2x ; 1 2 ] = 0, linearly dependent.

(b) W = det [ x x² ; 1 2x ] = x² ≠ 0, linearly independent, except at x = 0.

(c) We can restate y₂ as

y₂(x) = −x² for x ∈ (−1, 0],  y₂(x) = x² for x ∈ (0, 1)

so that

W = det [ x² −x² ; 2x −2x ] = −2x³ + 2x³ = 0 for x ∈ (−1, 0],
W = det [ x² x² ; 2x 2x ] = 2x³ − 2x³ = 0 for x ∈ (0, 1).

Thus W = 0 for x ∈ (−1, 1), which suggests the functions may be linearly dependent. However, when we seek C₁ and C₂ such that C₁y₁ + C₂y₂ = 0, we find for x ∈ (0, 1) that C₁x² + C₂x² = 0, which gives the requirement that C₁ = −C₂. For x ∈ (−1, 0], C₁x² + C₂(−x²) = 0, so that C₁ = C₂. Substituting the first condition into the second gives C₂ = −C₂, which is only satisfied if C₂ = 0, thus requiring that C₁ = 0. Hence the only solution is C₁ = C₂ = 0, and the functions are in fact linearly independent, despite the fact that W = 0!

¹ Josef Hoëné de Wronski, 1778–1853, Polish-born French mathematician.

3.2 Complementary functions for equations with constant coefficients

This section will consider solutions to the homogeneous part of the differential equation.

3.2.1 Arbitrary order

Consider the homogeneous equation with constant coefficients

Aₙ y⁽ⁿ⁾ + Aₙ₋₁ y⁽ⁿ⁻¹⁾ + … + A₁ y′ + A₀ y = 0    (3.14)

where Aᵢ (i = 0, …, n) are constants. To find the solution of this equation we let y = e^(rx). Substituting, we get

Aₙ rⁿ e^(rx) + Aₙ₋₁ r⁽ⁿ⁻¹⁾ e^(rx) + … + A₁ r¹ e^(rx) + A₀ e^(rx) = 0.    (3.15)

Canceling the common factor e^(rx), we get

Aₙ rⁿ + Aₙ₋₁ r⁽ⁿ⁻¹⁾ + … + A₁ r¹ + A₀ r⁰ = 0    (3.16)
Σⱼ₌₀ⁿ Aⱼ rʲ = 0.    (3.17)

This is called the characteristic equation.
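Returning to Example 3.1, parts (a) and (b) can be reproduced with sympy's `wronskian` helper (a sketch assuming sympy is available):

```python
# Wronskians of the pairs in Example 3.1 (a) and (b).
import sympy as sp
from sympy import wronskian

x = sp.Symbol('x')

assert wronskian([x, 2*x], x) == 0                        # (a): W = 0
assert sp.simplify(wronskian([x, x**2], x) - x**2) == 0   # (b): W = x^2
```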
It is an nth order polynomial which has n roots rᵢ (i = 1, …, n), some of which could be repeated, some of which could be complex, from which n linearly independent complementary functions yᵢ(x) (i = 1, …, n) have to be obtained. If all roots are real and distinct, then the complementary functions are simply e^(rᵢx), and the general solution is given by equation (3.6). If, however, k of these roots are repeated, i.e. r₁ = r₂ = … = r_k = r, then the linearly independent complementary functions are obtained by multiplying e^(rx) by 1, x, x², …, x^(k−1). For a pair of complex conjugate roots p ± qi, one can use de Moivre's formula (see Appendix) to show that the complementary functions are e^(px) cos qx and e^(px) sin qx.

Example 3.2
Solve

d⁴y/dx⁴ − 2 d³y/dx³ + d²y/dx² + 2 dy/dx − 2y = 0.

Substituting y = e^(rx), we get the characteristic equation

r⁴ − 2r³ + r² + 2r − 2 = 0

which can be factorized as

(r + 1)(r − 1)(r² − 2r + 2) = 0

from which

r₁ = −1,  r₂ = 1,  r₃ = 1 + i,  r₄ = 1 − i.

The general solution is

y(x) = C₁ e^(−x) + C₂ eˣ + C₃ e^((1+i)x) + C₄ e^((1−i)x)
     = C₁ e^(−x) + C₂ eˣ + eˣ ( C₃ e^(ix) + C₄ e^(−ix) )
     = C₁ e^(−x) + C₂ eˣ + eˣ [ C₃ (cos x + i sin x) + C₄ (cos x − i sin x) ]
     = C₁ e^(−x) + C₂ eˣ + eˣ [ (C₃ + C₄) cos x + i (C₃ − C₄) sin x ]

so that

y(x) = C₁ e^(−x) + C₂ eˣ + eˣ ( C₃′ cos x + C₄′ sin x )

where C₃′ = C₃ + C₄ and C₄′ = i(C₃ − C₄).
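The root-finding and the solution of Example 3.2 can both be delegated to sympy; a sketch assuming sympy is available:

```python
# Characteristic roots and full solution of Example 3.2.
import sympy as sp

x, r = sp.symbols('x r')
y = sp.Function('y')

# Roots of the characteristic polynomial match the hand factorization.
roots = set(sp.solve(r**4 - 2*r**3 + r**2 + 2*r - 2, r))
assert roots == {-1, 1, 1 - sp.I, 1 + sp.I}

ode = sp.Eq(y(x).diff(x, 4) - 2*y(x).diff(x, 3) + y(x).diff(x, 2)
            + 2*y(x).diff(x) - 2*y(x), 0)
sol = sp.dsolve(ode, y(x))
assert sp.checkodesol(ode, sol)[0] is True
```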
3.2.2 First order

The characteristic polynomial of the first order equation

a y′ + b y = 0    (3.18)

is

a r + b = 0    (3.19)

so

r = −b/a;    (3.20)

thus the complementary function for this equation is simply

y = C e^(−bx/a).    (3.21)
3.2.2.3 Second order

The characteristic polynomial of the second-order equation

$a\frac{d^2y}{dx^2} + b\frac{dy}{dx} + cy = 0$   (3.22)

is

$ar^2 + br + c = 0.$   (3.23)

Depending on the coefficients of this quadratic equation, there are three cases to be considered:

• $b^2 - 4ac > 0$: two distinct real roots $r_1$ and $r_2$. The complementary functions are $y_1 = e^{r_1 x}$ and $y_2 = e^{r_2 x}$.
• $b^2 - 4ac = 0$: one real repeated root $r$. The complementary functions are $y_1 = e^{rx}$ and $y_2 = x e^{rx}$.
• $b^2 - 4ac < 0$: two complex conjugate roots $p \pm qi$. The complementary functions are $y_1 = e^{px}\cos qx$ and $y_2 = e^{px}\sin qx$.

Example 3.3

Solve

$\frac{d^2y}{dx^2} - 3\frac{dy}{dx} + 2y = 0.$

The characteristic equation is

$r^2 - 3r + 2 = 0,$

with solutions $r_1 = 1$, $r_2 = 2$. The general solution is then

$y = C_1 e^x + C_2 e^{2x}.$

Example 3.4

Solve

$\frac{d^2y}{dx^2} - 2\frac{dy}{dx} + y = 0.$

The characteristic equation is

$r^2 - 2r + 1 = 0,$

with a repeated root $r = 1$. The general solution is then

$y = C_1 e^x + C_2 x e^x.$
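For the repeated-root case of Example 3.4, one can verify numerically that the second complementary function $y = x e^x$ really does satisfy the equation; a sketch using central finite differences (names and step size are our choices):

```python
import math

# Check that y = x*exp(x) satisfies y'' - 2y' + y = 0 at sample points,
# approximating y' and y'' by central differences.
def y(x):
    return x * math.exp(x)

h = 1e-4
for x in (-1.0, 0.3, 2.0):
    yp  = (y(x + h) - y(x - h)) / (2 * h)          # y'
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2  # y''
    assert abs(ypp - 2 * yp + y(x)) < 1e-5         # residual ~ 0
```

The residual is zero analytically; the tolerance only absorbs finite-difference truncation and rounding error.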
Example 3.5

Solve

$\frac{d^2y}{dx^2} - 2\frac{dy}{dx} + 10y = 0.$

The characteristic equation is

$r^2 - 2r + 10 = 0,$

with solutions $r_1 = 1 + 3i$, $r_2 = 1 - 3i$. The general solution is then

$y = e^x\left( C_1 \cos 3x + C_2 \sin 3x \right).$

3.3 Complementary functions for equations with variable coefficients

3.3.1 One solution to find another

If $y_1(x)$ is a known solution of

$y'' + P(x)\,y' + Q(x)\,y = 0,$   (3.24)

let the other solution be $y_2(x) = u(x)\,y_1(x)$. We then form derivatives of $y_2$ and substitute into the original differential equation. First the derivatives:

$y_2' = u y_1' + u' y_1,$
$y_2'' = u y_1'' + u' y_1' + u' y_1' + u'' y_1 = u y_1'' + 2u' y_1' + u'' y_1.$

Substituting into the equation, we get

$(u y_1'' + 2u' y_1' + u'' y_1) + P(x)(u y_1' + u' y_1) + Q(x)\,u y_1 = 0,$

$u'' y_1 + u'\left(2y_1' + P(x) y_1\right) + u\left(y_1'' + P(x) y_1' + Q(x) y_1\right) = 0.$

Since $y_1$ is itself a solution, the coefficient of $u$ vanishes, leaving

$u'' y_1 + u'\left(2y_1' + P(x) y_1\right) = 0.$

This can be written as a first-order equation in $v$, where $v = u'$:

$v' y_1 + v\left(2y_1' + P(x) y_1\right) = 0,$

which is solved for $v(x)$ using known methods for first-order equations.
3.3.2 Euler equation

An equation of the type

$x^2\frac{d^2y}{dx^2} + A x\frac{dy}{dx} + B y = 0,$   (3.33)

where $A$ and $B$ are constants, can be solved by a change of independent variable. Let $z = \ln x$, so that $x = e^z$, for $x > 0$. Then

$\frac{dz}{dx} = \frac{1}{x} = e^{-z},$
$\frac{dy}{dx} = \frac{dy}{dz}\frac{dz}{dx} = e^{-z}\frac{dy}{dz},$

so that

$\frac{d}{dx} = e^{-z}\frac{d}{dz},$

and

$\frac{d^2y}{dx^2} = \frac{d}{dx}\left(\frac{dy}{dx}\right) = e^{-z}\frac{d}{dz}\left( e^{-z}\frac{dy}{dz} \right) = e^{-2z}\left( \frac{d^2y}{dz^2} - \frac{dy}{dz} \right).$

Substituting into the differential equation, we get

$\frac{d^2y}{dz^2} + (A-1)\frac{dy}{dz} + By = 0,$   (3.34)

which is an equation with constant coefficients. Note that this equation can also be solved by letting $y = x^r$.

Example 3.6

Solve

$x^2 y'' - 2x y' + 2y = 0$, for $x > 0$.

With $x = e^z$, we get

$\frac{d^2y}{dz^2} - 3\frac{dy}{dz} + 2y = 0.$

The solution is

$y = C_1 e^z + C_2 e^{2z} = C_1 x + C_2 x^2.$

Alternatively, substituting $y = x^r$ into the equation, we get $r^2 - 3r + 2 = 0$, so that $r_1 = 1$ and $r_2 = 2$. The solution is then obtained as a linear combination of $x^{r_1}$ and $x^{r_2}$.

3.4 Particular solutions

We will now consider particular solutions of the inhomogeneous equation (3.1).
3.4.1 Method of undetermined coefficients

Guess a solution with unknown coefficients, and then substitute into the equation to determine these coefficients.

Example 3.7

Solve

$y'' + 4y' + 4y = 169 \sin 3x.$

The characteristic equation is

$r^2 + 4r + 4 = 0, \qquad (r+2)(r+2) = 0, \qquad r = -2.$

Since the roots are repeated, the complementary functions are

$y_1 = e^{-2x}, \qquad y_2 = x e^{-2x}.$

For the particular function, guess

$y_p = a \sin 3x + b \cos 3x,$

so

$y_p' = 3a \cos 3x - 3b \sin 3x,$
$y_p'' = -9a \sin 3x - 9b \cos 3x.$

Substituting into the differential equation, we get

$(-9a \sin 3x - 9b \cos 3x) + 4(3a \cos 3x - 3b \sin 3x) + 4(a \sin 3x + b \cos 3x) = 169 \sin 3x,$

$(-5a - 12b)\sin 3x + (12a - 5b)\cos 3x = 169 \sin 3x.$

Equating the coefficients of the sin and cos terms,

$\begin{pmatrix} -5 & -12 \\ 12 & -5 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 169 \\ 0 \end{pmatrix},$

we find that $a = -5$ and $b = -12$. The solution is then

$y(x) = (C_1 + C_2 x)e^{-2x} - 5\sin 3x - 12\cos 3x.$

Example 3.8

Solve

$y'''' - 2y''' + y'' + 2y' - 2y = x^2 + x + 1.$

Let the particular integral be of the form $y_p = ax^2 + bx + c$. Substituting, we get

$-(2a+1)x^2 + (4a - 2b - 1)x + (2a + 2b - 2c - 1) = 0.$
For this to hold for all values of $x$, the coefficients must be zero, from which $a = -\frac{1}{2}$, $b = -\frac{3}{2}$, and $c = -\frac{5}{2}$. Thus

$y_p = -\frac{1}{2}(x^2 + 3x + 5).$

The solution of the homogeneous equation was found in a previous example, so that the general solution is

$y = C_1 e^{-x} + C_2 e^{x} + e^x\left( C_3 \cos x + C_4 \sin x \right) - \frac{1}{2}(x^2 + 3x + 5).$

A variant must be attempted if any term of $F(x)$ is a complementary function.

Example 3.9

Solve

$y'' + 4y = 6 \sin 2x.$

Since $\sin 2x$ is a complementary function, we will try

$y_p = x(a \sin 2x + b \cos 2x),$

from which

$y_p' = 2x(a \cos 2x - b \sin 2x) + (a \sin 2x + b \cos 2x),$
$y_p'' = -4x(a \sin 2x + b \cos 2x) + 4(a \cos 2x - b \sin 2x).$

Substituting into the equation, we compare coefficients and get $a = 0$, $b = -\frac{3}{2}$. The general solution is then

$y = C_1 \sin 2x + C_2 \cos 2x - \frac{3}{2}x\cos 2x.$

Example 3.10

Solve

$y'' + 2y' + y = x e^{-x}.$

The complementary functions are $e^{-x}$ and $x e^{-x}$. To get the particular solution we have to choose a function of the kind $y_p = a x^3 e^{-x}$. On substitution we find that $a = 1/6$. Thus the general solution is

$y = C_1 e^{-x} + C_2 x e^{-x} + \frac{1}{6}x^3 e^{-x}.$
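The particular solution of Example 3.10 can be spot-checked numerically with finite differences; a minimal sketch (the name `yp` and the sample points are our choices):

```python
import math

# Verify that y_p = x^3 exp(-x)/6 satisfies y'' + 2y' + y = x exp(-x)
# at a few sample points, using central-difference derivatives.
def yp(x):
    return x**3 * math.exp(-x) / 6

h = 1e-4
for x in (0.5, 1.0, 3.0):
    d1 = (yp(x + h) - yp(x - h)) / (2 * h)          # y_p'
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2  # y_p''
    assert abs(d2 + 2 * d1 + yp(x) - x * math.exp(-x)) < 1e-6
```

The cubic factor $x^3$ is needed because both $e^{-x}$ and $x e^{-x}$ (and hence any $ax e^{-x} + bx^2 e^{-x}$ trial built on the forcing) are annihilated by the left side.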
3.4.2 Variation of parameters

For an equation of the kind

$P_n(x) y^{(n)} + P_{n-1}(x) y^{(n-1)} + \cdots + P_1(x) y' + P_0(x) y = F(x),$   (3.35)

we propose

$y_p = \sum_{i=1}^{n} u_i(x)\, y_i(x),$   (3.36)

where the $y_i(x)$ $(i = 1, \ldots, n)$ are complementary functions of the equation, and the $u_i(x)$ are $n$ unknown functions. Differentiating, we have

$y_p' = \sum_{i=1}^{n} u_i' y_i + \sum_{i=1}^{n} u_i y_i'.$

We set $\sum_{i=1}^{n} u_i' y_i$ to zero as a first condition. Differentiating the rest,

$y_p'' = \sum_{i=1}^{n} u_i' y_i' + \sum_{i=1}^{n} u_i y_i''.$

Again we set the first term on the right side to zero as a second condition. Following this procedure repeatedly we arrive at

$y_p^{(n-1)} = \sum_{i=1}^{n} u_i' y_i^{(n-2)} + \sum_{i=1}^{n} u_i y_i^{(n-1)}.$

The vanishing of the first term on the right gives us the $(n-1)$'th condition. Substituting these into the governing equation,

$P_n(x)\sum_{i=1}^{n} u_i' y_i^{(n-1)} + \sum_{i=1}^{n} u_i\left[ P_n y_i^{(n)} + P_{n-1} y_i^{(n-1)} + \cdots + P_1 y_i' + P_0 y_i \right] = F(x).$

Since each of the functions $y_i$ is a complementary function, the term within brackets is zero, and the last condition

$P_n(x)\sum_{i=1}^{n} u_i' y_i^{(n-1)} = F(x)$

is obtained. To summarize, we have the following $n$ equations in the $n$ unknowns $u_i'$ $(i = 1, \ldots, n)$ that we have obtained:

$\sum_{i=1}^{n} u_i' y_i = 0, \quad \sum_{i=1}^{n} u_i' y_i' = 0, \quad \ldots, \quad \sum_{i=1}^{n} u_i' y_i^{(n-2)} = 0, \quad P_n(x)\sum_{i=1}^{n} u_i' y_i^{(n-1)} = F(x).$   (3.37)
These can be solved for the $u_i'$, and then integrated to give the $u_i$'s.

Example 3.11

Solve

$y'' + y = \tan x.$

The complementary functions are

$y_1 = \cos x, \qquad y_2 = \sin x.$

The equations for $u_1(x)$ and $u_2(x)$ are

$u_1' y_1 + u_2' y_2 = 0,$
$u_1' y_1' + u_2' y_2' = \tan x.$

Solving this system, which is linear in $u_1'$ and $u_2'$, we get

$u_1' = -\sin x \tan x,$
$u_2' = \cos x \tan x.$

Integrating,

$u_1 = \int -\sin x \tan x\,dx = \sin x - \ln|\sec x + \tan x|,$
$u_2 = \int \cos x \tan x\,dx = -\cos x.$

The particular solution is

$y_p = u_1 y_1 + u_2 y_2 = (\sin x - \ln|\sec x + \tan x|)\cos x - \cos x \sin x = -\cos x \ln|\sec x + \tan x|.$

The complete solution, obtained by adding the complementary and particular, is

$y = C_1 \cos x + C_2 \sin x - \cos x \ln|\sec x + \tan x|.$

3.4.3 Operator D

The linear operator $D$ is defined by

$D(y) = \frac{dy}{dx},$

or, in terms of the operator alone,

$D = \frac{d}{dx}.$

The operator can be repeatedly applied, so that

$D^n(y) = \frac{d^n y}{dx^n}.$
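The variation-of-parameters result of Example 3.11 can also be checked numerically; a sketch with central differences (the name `yp` is ours):

```python
import math

# Verify that y_p = -cos(x) ln|sec x + tan x| satisfies y'' + y = tan x
# at sample points inside (0, pi/2), using a central second difference.
def yp(x):
    return -math.cos(x) * math.log(abs(1 / math.cos(x) + math.tan(x)))

h = 1e-4
for x in (0.3, 0.8, 1.2):
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2  # y_p''
    assert abs(d2 + yp(x) - math.tan(x)) < 1e-5
```

Analytically, $y_p'' = \cos x \ln|\sec x + \tan x| + \tan x$, so the residual is exactly zero; the tolerance covers only discretization error.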
Another example of its use is

$(D-a)(D-b)f(x) = (D-a)\left[(D-b)f(x)\right] = (D-a)\left[ \frac{df}{dx} - bf \right] = \frac{d^2f}{dx^2} - (a+b)\frac{df}{dx} + abf.$

Negative powers of $D$ are related to integrals. This comes from

$\frac{dy(x)}{dx} = f(x), \qquad y(x_o) = y_o,$

$y(x) = y_o + \int_{x_o}^{x} f(s)\,ds.$

Then, substituting:

$D[y(x)] = f(x),$

apply the inverse:

$D^{-1}\left[ D[y(x)] \right] = D^{-1}[f(x)],$

$y(x) = D^{-1}[f(x)] = y_o + \int_{x_o}^{x} f(s)\,ds.$   (3.38)

We can evaluate $h(x)$, where

$h(x) = \frac{1}{D-a}f(x),$

in the following way:

$(D-a)\,h(x) = (D-a)\frac{1}{D-a}f(x) = f(x),$
$\frac{dh(x)}{dx} - a h(x) = f(x),$
$e^{-ax}\frac{dh(x)}{dx} - a e^{-ax} h(x) = f(x)e^{-ax},$
$\frac{d}{dx}\left( e^{-ax} h(x) \right) = f(x)e^{-ax},$
$\frac{d}{ds}\left( e^{-as} h(s) \right) = f(s)e^{-as},$
$\int_{x_o}^{x} \frac{d}{ds}\left( e^{-as} h(s) \right) ds = \int_{x_o}^{x} f(s)e^{-as}\,ds,$
$e^{-ax} h(x) - e^{-ax_o} h(x_o) = \int_{x_o}^{x} f(s)e^{-as}\,ds.$
Thus

$h(x) = e^{a(x-x_o)} h(x_o) + e^{ax}\int_{x_o}^{x} f(s)e^{-as}\,ds,$

or

$\frac{1}{D-a}f(x) = e^{a(x-x_o)} h(x_o) + e^{ax}\int_{x_o}^{x} f(s)e^{-as}\,ds.$

This gives us $h(x)$ explicitly in terms of the known function $f$ such that $h$ satisfies $D[h] - ah = f$.

We can iterate to find the solution to higher-order equations such as

$(D-a)(D-b)\,y(x) = f(x), \qquad y(x_o) = y_o, \quad y'(x_o) = y_o'.$

Writing

$(D-b)\,y(x) = \frac{1}{D-a}f(x) = h(x),$

the first-order result gives

$y(x) = y_o e^{b(x-x_o)} + e^{bx}\int_{x_o}^{x} h(s)e^{-bs}\,ds.$

Note that

$\frac{dy}{dx} = y_o b\, e^{b(x-x_o)} + h(x) + b\,e^{bx}\int_{x_o}^{x} h(s)e^{-bs}\,ds,$

so that

$\frac{dy}{dx}(x_o) = y_o' = y_o b + h(x_o),$

which can be rewritten as

$(D-b)[y(x_o)] = h(x_o),$

which is what one would expect.

Returning to the problem at hand, we take our expression for $h(x)$, evaluate it at $x = s$, and substitute into the expression for $y(x)$ to get

$y(x) = y_o e^{b(x-x_o)} + e^{bx}\int_{x_o}^{x}\left[ h(x_o)e^{a(s-x_o)} + e^{as}\int_{x_o}^{s} f(t)e^{-at}\,dt \right]e^{-bs}\,ds$
$\phantom{y(x)} = y_o e^{b(x-x_o)} + e^{bx}\int_{x_o}^{x}\left[ (y_o' - y_o b)e^{a(s-x_o)} + e^{as}\int_{x_o}^{s} f(t)e^{-at}\,dt \right]e^{-bs}\,ds$
$\phantom{y(x)} = y_o e^{b(x-x_o)} + (y_o' - y_o b)\,e^{bx}\int_{x_o}^{x} e^{(a-b)s - ax_o}\,ds + e^{bx}\int_{x_o}^{x} e^{(a-b)s}\int_{x_o}^{s} f(t)e^{-at}\,dt\,ds$
$\phantom{y(x)} = y_o e^{b(x-x_o)} + (y_o' - y_o b)\,\frac{e^{a(x-x_o)} - e^{b(x-x_o)}}{a-b} + e^{bx}\int_{x_o}^{x} e^{(a-b)s}\int_{x_o}^{s} f(t)e^{-at}\,dt\,ds.$
Changing the order of integration and integrating on $s$:

$y(x) = y_o e^{b(x-x_o)} + (y_o' - y_o b)\,\frac{e^{a(x-x_o)} - e^{b(x-x_o)}}{a-b} + e^{bx}\int_{x_o}^{x}\int_{t}^{x} e^{(a-b)s} f(t)e^{-at}\,ds\,dt$
$\phantom{y(x)} = y_o e^{b(x-x_o)} + (y_o' - y_o b)\,\frac{e^{a(x-x_o)} - e^{b(x-x_o)}}{a-b} + e^{bx}\int_{x_o}^{x} f(t)e^{-at}\left( \int_{t}^{x} e^{(a-b)s}\,ds \right)dt$
$\phantom{y(x)} = y_o e^{b(x-x_o)} + (y_o' - y_o b)\,\frac{e^{a(x-x_o)} - e^{b(x-x_o)}}{a-b} + \int_{x_o}^{x} f(t)\,\frac{e^{a(x-t)} - e^{b(x-t)}}{a-b}\,dt.$

Thus we have a solution to the second-order linear differential equation with constant coefficients and arbitrary forcing expressed in integral form. A similar alternate expression can be developed when $a = b$.

3.4.4 Green's functions

A similar goal can be achieved for boundary value problems involving a more general linear operator $L$, where the highest derivative in $L$ is of order $n$. If on the closed interval $a \le x \le b$ we have a two-point boundary problem for a general linear differential equation of the form

$Ly = f(x),$   (3.39)

with general homogeneous boundary conditions at $x = a$ and $x = b$ on linear combinations of $y$ and $n-1$ of its derivatives,

$\mathbf{A}\left[ y(a), y'(a), \ldots, y^{(n-1)}(a) \right]^T + \mathbf{B}\left[ y(b), y'(b), \ldots, y^{(n-1)}(b) \right]^T = 0,$   (3.40)

where $\mathbf{A}$ and $\mathbf{B}$ are $n \times n$ constant coefficient matrices, then, knowing $L$, $\mathbf{A}$ and $\mathbf{B}$, we can form a solution of the form

$y(x) = \int_a^b f(s)\,g(x,s)\,ds.$   (3.41)

This is desirable because

• once $g(x,s)$ is known, the solution is defined for all $f$, including forms of $f$ for which no simple explicit integrals can be written and piecewise continuous forms of $f$;
• numerical solution of the quadrature problem is more robust than direct numerical solution of the original differential equation;
• the solution will automatically satisfy all boundary conditions;
• the solution is useful in experiments in which the system dynamics are well characterized (e.g. mass-spring-damper) but the forcing may be erratic (perhaps digitally specified).
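The integral-form solution derived above for $(D-a)(D-b)y = f$ can be sanity-checked in the homogeneous case $f = 0$, where the $f$-integral drops out and the classical solution $y = C_1 e^{ax} + C_2 e^{bx}$ with the same initial data must be recovered. A small sketch (all names and the chosen constants are ours):

```python
import math

# Parameters: (D-a)(D-b) y = 0, y(x0) = y0, y'(x0) = y0p.
a, b, x0, y0, y0p = 1.0, 2.0, 0.0, 1.0, 0.0

def y_integral_form(x):
    # Homogeneous part of the integral-form solution:
    # y0 e^{b(x-x0)} + (y0' - y0 b)(e^{a(x-x0)} - e^{b(x-x0)})/(a-b)
    return (y0 * math.exp(b * (x - x0))
            + (y0p - y0 * b)
              * (math.exp(a * (x - x0)) - math.exp(b * (x - x0))) / (a - b))

# Classical solution: C1 + C2 = y0 and a C1 + b C2 = y0' at x0 = 0.
C2 = (y0p - a * y0) / (b - a)
C1 = y0 - C2
for x in (0.0, 0.5, 1.0):
    classical = C1 * math.exp(a * x) + C2 * math.exp(b * x)
    assert abs(y_integral_form(x) - classical) < 1e-9
```

With $a=1$, $b=2$, $y_o=1$, $y_o'=0$ both routes give $y = 2e^x - e^{2x}$.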
We now define the Green's² function $g(x,s)$ and proceed to show that, with this definition, we are guaranteed to achieve the solution to the differential equation in the desired form as shown at the beginning of the section. We take $g(x,s)$ to be the Green's function for the linear differential operator $L$ if it satisfies the following conditions:

1. $Lg(x,s) = \delta(x-s)$,
2. $g(x,s)$ satisfies all boundary conditions given on $x$,
3. $g(x,s)$ is a solution of $Lg = 0$ on $a \le x < s$ and on $s < x \le b$,
4. $g(x,s), g'(x,s), \ldots, g^{(n-2)}(x,s)$ are continuous for $[a,b]$,
5. $g^{(n-1)}(x,s)$ is continuous for $[a,b]$ except at $x = s$, where it has a jump of $\frac{1}{P_n(s)}$.

These conditions are not all independent, nor is the dependence obvious. Also, for purposes of the above conditions, $s$ is thought of as a constant parameter; in the actual Green's function representation of the solution, $s$ is a dummy variable.

Consider, for example,

$L = P_2(x)\frac{d^2}{dx^2} + P_1(x)\frac{d}{dx} + P_o(x).$

Then we have

$P_2(x)\frac{d^2g}{dx^2} + P_1(x)\frac{dg}{dx} + P_o(x)\,g = \delta(x-s),$

$\frac{d^2g}{dx^2} + \frac{P_1(x)}{P_2(x)}\frac{dg}{dx} + \frac{P_o(x)}{P_2(x)}\,g = \frac{\delta(x-s)}{P_2(x)}.$

Now integrate both sides with respect to $x$ in a small neighborhood enveloping $x = s$:

$\int_{s-\epsilon}^{s+\epsilon}\frac{d^2g}{dx^2}\,dx + \int_{s-\epsilon}^{s+\epsilon}\frac{P_1(x)}{P_2(x)}\frac{dg}{dx}\,dx + \int_{s-\epsilon}^{s+\epsilon}\frac{P_o(x)}{P_2(x)}\,g\,dx = \int_{s-\epsilon}^{s+\epsilon}\frac{\delta(x-s)}{P_2(x)}\,dx.$

Since the $P$'s are continuous, as we let $\epsilon \to 0$ we get

$\int_{s-\epsilon}^{s+\epsilon}\frac{d^2g}{dx^2}\,dx + \frac{P_1(s)}{P_2(s)}\int_{s-\epsilon}^{s+\epsilon}\frac{dg}{dx}\,dx + \frac{P_o(s)}{P_2(s)}\int_{s-\epsilon}^{s+\epsilon}g\,dx = \frac{1}{P_2(s)}\int_{s-\epsilon}^{s+\epsilon}\delta(x-s)\,dx.$

Integrating,

$\left.\frac{dg}{dx}\right|_{s+\epsilon} - \left.\frac{dg}{dx}\right|_{s-\epsilon} + \frac{P_1(s)}{P_2(s)}\left( g|_{s+\epsilon} - g|_{s-\epsilon} \right) + \frac{P_o(s)}{P_2(s)}\int_{s-\epsilon}^{s+\epsilon}g\,dx = \frac{1}{P_2(s)}\left. H(x-s)\right|_{s-\epsilon}^{s+\epsilon}.$

Since $g$ is continuous, this reduces to

$\left.\frac{dg}{dx}\right|_{s+\epsilon} - \left.\frac{dg}{dx}\right|_{s-\epsilon} = \frac{1}{P_2(s)}.$

² George Green, 1793-1841, English cornmiller and mathematician of humble origin and uncertain education, though he generated modern mathematics of the first rank.
This is consistent with the final point, that the second-highest derivative of $g$ suffers a jump at $x = s$.

Next, we show that applying this definition of $g(x,s)$ to our desired result lets us recover the original differential equation, rendering $g(x,s)$ to be appropriately defined. This can be easily shown by direct substitution:

$y(x) = \int_a^b f(s)\,g(x,s)\,ds,$

$Ly = L\int_a^b f(s)\,g(x,s)\,ds.$

$L$ behaves as $\partial^n/\partial x^n$, so via Leibniz's rule

$Ly = \int_a^b f(s)\,Lg(x,s)\,ds = \int_a^b f(s)\,\delta(x-s)\,ds = f(x).$

The analysis can be extended in a straightforward manner to more arbitrary systems with inhomogeneous boundary conditions using matrix methods (c.f. Wylie and Barrett, 1995).

Example 3.12

Find the Green's function and the corresponding solution integral of the differential equation

$\frac{d^2y}{dx^2} = f(x),$

subject to boundary conditions $y(0) = 0$, $y(1) = 0$. Verify the solution integral if $f(x) = 6x$.

Here

$L = \frac{d^2}{dx^2}.$

Now: 1) break the problem up into two domains, (a) $x < s$ and (b) $x > s$; 2) solve $Lg = 0$ in both domains — four constants arise; 3) use the boundary conditions for two constants; 4) use the conditions at $x = s$ — continuity of $g$ and a jump of $dg/dx$ — for the other two constants.

(a) $x < s$:

$\frac{d^2g}{dx^2} = 0, \qquad \frac{dg}{dx} = C_1, \qquad g = C_1 x + C_2,$

$g(0) = 0 = C_1(0) + C_2, \qquad C_2 = 0,$

$g(x,s) = C_1 x, \qquad x < s.$
(b) $x > s$:

$\frac{d^2g}{dx^2} = 0, \qquad \frac{dg}{dx} = C_3, \qquad g = C_3 x + C_4,$

$g(1) = 0 = C_3(1) + C_4, \qquad C_4 = -C_3,$

$g(x,s) = C_3(x-1), \qquad x > s.$

Continuity of $g(x,s)$ when $x = s$:

$C_1 s = C_3(s-1), \qquad C_1 = C_3\,\frac{s-1}{s},$

so

$g(x,s) = C_3\,\frac{s-1}{s}\,x, \quad x < s; \qquad g(x,s) = C_3(x-1), \quad x > s.$

Jump in $dg/dx$ at $x = s$ (note $P_2(x) = 1$):

$\left.\frac{dg}{dx}\right|_{s^+} - \left.\frac{dg}{dx}\right|_{s^-} = 1, \qquad C_3 - C_3\,\frac{s-1}{s} = 1, \qquad C_3 = s,$

so

$g(x,s) = x(s-1), \quad x < s; \qquad g(x,s) = s(x-1), \quad x > s.$

Note some properties of $g(x,s)$ which are common in such problems:

• it is broken into two domains;
• it is continuous in and through both domains;
• its $n-1$ (here $n = 2$, so first) derivative is discontinuous at $x = s$;
• it is symmetric in $s$ and $x$ across the two domains;
• it is seen by inspection to satisfy both boundary conditions.

The general solution in integral form can be written by breaking the integral into two pieces:

$y(x) = \int_0^x f(s)\,s(x-1)\,ds + \int_x^1 f(s)\,x(s-1)\,ds$
$\phantom{y(x)} = (x-1)\int_0^x f(s)\,s\,ds + x\int_x^1 f(s)(s-1)\,ds.$

Now evaluate the integral if $f(x) = 6x$ (thus $f(s) = 6s$):

$y(x) = (x-1)\int_0^x 6s^2\,ds + x\int_x^1 (6s^2 - 6s)\,ds$
$\phantom{y(x)} = (x-1)\left[2s^3\right]_0^x + x\left[2s^3 - 3s^2\right]_x^1$
$\phantom{y(x)} = (x-1)(2x^3 - 0) + x\left[(2-3) - (2x^3 - 3x^2)\right]$
$\phantom{y(x)} = 2x^4 - 2x^3 - x - 2x^4 + 3x^3,$

$y(x) = x^3 - x.$
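The Green's-function solution integral of Example 3.12 can also be evaluated by simple quadrature and compared against the closed form $y = x^3 - x$; a sketch using a midpoint rule (the function name and resolution are our choices):

```python
# Evaluate y(x) = integral over s in [0,1] of f(s) g(x,s) ds for f = 6x,
# with g(x,s) = s(x-1) for s < x and g(x,s) = x(s-1) for s > x,
# using a composite midpoint rule.
def green_solution(x, n=2000):
    total = 0.0
    for k in range(n):
        s = (k + 0.5) / n
        g = s * (x - 1) if s < x else x * (s - 1)
        total += 6 * s * g / n
    return total

for x in (0.25, 0.5, 0.9):
    assert abs(green_solution(x) - (x**3 - x)) < 1e-4
```

Note how the quadrature knows nothing about the boundary conditions — they are built into $g$, as promised by the construction.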
Note that the original differential equation and both boundary conditions are automatically satisfied by the solution. The solution is plotted in Figure 3.1.

[Figure 3.1: Sketch of problem solution, $y'' = 6x$, $y(0) = y(1) = 0$: $y(x) = x^3 - x$ in the domain of interest $0 < x < 1$, and in the expanded domain $-2 < x < 2$.]

Problems

1. Find the general solution of the differential equation

$y' + x^2 y(1+y) = 1 + x^3(1+x).$

2. Show that the functions $y_1 = \sin x$, $y_2 = x\cos x$, and $y_3 = x$ are linearly independent. Find the lowest-order differential equation of which they are the complementary functions.

3. Solve the following initial value problem for (a) $C = 6$, (b) $C = 4$, and (c) $C = 3$ with $y(0) = 1$ and $y'(0) = -3$:

$\frac{d^2y}{dt^2} + C\frac{dy}{dt} + 4y = 0.$

Plot your results.

4. Solve

(a) $\frac{d^3y}{dx^3} - 3\frac{d^2y}{dx^2} + 4y = 0$

(b) $\frac{d^4y}{dx^4} - 5\frac{d^3y}{dx^3} + 11\frac{d^2y}{dx^2} - 7\frac{dy}{dx} = 12$

(c) $y'' + 2y' = 6e^x + \cos 2x$

(d) $x^2 y'' - 3x y' - 5y = x^2 \log x$

(e) $\frac{d^2y}{dx^2} + y = 2e^x\cos x + (e^x - 2)\sin x.$

5. Find a particular solution to the following ODE using (a) variation of parameters and (b) undetermined coefficients:

$\frac{d^2y}{dx^2} - 4y = \cosh 2x.$
6. Solve the boundary value problem

$\frac{d^2y}{dx^2} + y\frac{dy}{dx} = 0,$

with boundary conditions $y(0) = 0$ and $y(\pi/2) = -1$. Plot your result.

7. Solve the equation

$2y'' - 4y' + 2y = \cdots$ (right side involving $2x^2$; garbled in source), where $x > 0$.

8. Solve

$x^2 y'' + x y' - 4y = 6x.$

9. Solve

$y'' + 2y' + y = x e^{-x}.$

10. Find the general solution of

$y''' - 2y'' - y' + 2y = \sin^2 x.$

11. Solve

$y''' + 6y'' + 12y' + 8y = e^x - 3\sin x - 8e^{-2x}.$

12. Solve

$x^4 y'''' + 7x^3 y''' + 8x^2 y'' = 4x^{-3}.$

13. Solve

$\frac{d^3y}{dx^3} + 2x\frac{d^2y}{dx^2} - 8\frac{dy}{dx} = 1,$

with $y(1) = 4$, $y'(1) = 8$, $y(2) = 11$.

14. Find the Green's function solution of

$y'' + y' - 2y = f(x),$

with $y(0) = 0$, $y(1) = 0$. Verify this is the correct solution when $f(x) = x^3$. Plot your result.

15. Find the Green's function solution of

$y'' + y = f(x),$

with $y(0) = 0$, $y'(\pi) = 0$. Determine $y(x)$ if $f(x) = 3\sin x$. Plot your result.

16. Show that $x^{-1}$ and $x^5$ are solutions of the equation

$x^2 y'' - 3x y' - 5y = 0.$

Thus find the general solution of

$x^2 y'' - 3x y' - 5y = x^2 e^x.$
Chapter 4

Series solution methods

see Kaplan, Chapter 6,
see Hinch, Chapters 1, 2, 5, 6, 7,
see Bender and Orszag,
see Riley, Hobson, and Bence, Chapter 14,
see Lopez, Chapters 7-11.

This chapter will deal with series solution methods. Such methods are useful in solving both algebraic and differential equations. The first method is formally exact in that an infinite number of terms can often be shown to have absolute and uniform convergence properties. The second method, asymptotic series solutions, is less rigorous in that convergence is not always guaranteed; in fact convergence is rarely examined because the problems tend to be intractable. Still, asymptotic methods will be seen to be quite useful in interpreting the results of highly nonlinear equations in local domains.

4.1 Power series

Solutions to many differential equations cannot be found in a closed form expressed, for instance, in terms of polynomials and transcendental functions such as sin and cos. Often, instead, the solutions can be expressed as an infinite series of polynomials. It is desirable to get a complete expression for the $n$th term of the series so that one can make statements regarding absolute and uniform convergence of the series. Such solutions are approximate in that if one uses a finite number of terms to represent the solution, there is a truncation error. Formally though, for series which converge, an infinite number of terms gives a true representation of the actual solution, and hence the method is exact.

4.1.1 First-order equation

An equation of the form

$\frac{dy}{dx} + P(x)\,y = Q(x),$   (4.1)
where $P(x)$ and $Q(x)$ are analytic at $x = a$, has a power series solution

$y(x) = \sum_{n=0}^{\infty} a_n (x-a)^n$   (4.2)

around this point.

Example 4.1

Find the power series solution of

$\frac{dy}{dx} = y, \qquad y(0) = y_o,$

around $x = 0$. Let

$y = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots,$

so that

$\frac{dy}{dx} = a_1 + 2a_2 x + 3a_3 x^2 + 4a_4 x^3 + \cdots.$

Substituting into the equation, we have

$(a_1 - a_0) + (2a_2 - a_1)x + (3a_3 - a_2)x^2 + (4a_4 - a_3)x^3 + \cdots = 0.$

If this is valid for all $x$, the coefficients must all be zero. Thus

$a_1 = a_0, \qquad a_2 = \frac{1}{2}a_1 = \frac{1}{2!}a_0, \qquad a_3 = \frac{1}{3}a_2 = \frac{1}{3!}a_0, \qquad a_4 = \frac{1}{4}a_3 = \frac{1}{4!}a_0, \quad \ldots$

so that

$y(x) = a_0\left( 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots \right).$

Applying the initial condition at $x = 0$ gives $a_0 = y_o$, so

$y(x) = y_o\left( 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots \right).$

Of course this power series is the Taylor¹ series expansion of the closed-form solution $y = y_o e^x$. For $y_o = 1$ the exact solution and three approximations to the exact solution are shown in Figure 4.1.

Alternatively, one can use a compact summation notation. Thus

$y = \sum_{n=0}^{\infty} a_n x^n, \qquad \frac{dy}{dx} = \sum_{n=0}^{\infty} n a_n x^{n-1}.$

¹ Brook Taylor, 1685-1731, English mathematician, musician, and painter.
[Figure 4.1: Comparison of truncated series and exact solutions for $y' = y$, $y(0) = 1$: $y = e^x$ (exact), $y = 1 + x + x^2/2$, $y = 1 + x$, and $y = 1$.]

Continuing with the compact notation, and shifting the index with $m = n - 1$,

$\frac{dy}{dx} = \sum_{n=1}^{\infty} n a_n x^{n-1} = \sum_{m=0}^{\infty} (m+1) a_{m+1} x^m = \sum_{n=0}^{\infty} (n+1) a_{n+1} x^n.$

Substituting into $y' = y$ and comparing coefficients gives $(n+1)a_{n+1} = a_n$, so that

$a_n = \frac{a_0}{n!},$

and

$y = a_0 \sum_{n=0}^{\infty} \frac{x^n}{n!} = y_o \sum_{n=0}^{\infty} \frac{x^n}{n!}.$

The ratio test tells us that

$\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| = \frac{1}{n+1} \to 0,$

so the series converges absolutely.

If a series is uniformly convergent in a domain, it converges at the same rate for all $x$ in that domain. We can use the Weierstrass² M test for uniform convergence. That is, for a series

$\sum_{n=0}^{\infty} u_n(x)$

to be uniformly convergent, we need a convergent series of constants $\sum_{n=0}^{\infty} M_n$ to exist such that $|u_n(x)| \le M_n$ for all $x$ in the domain. For our problem, we take the domain to be $-A \le x \le A$, where $A > 0$.

² Karl Theodor Wilhelm Weierstrass, 1815-1897, Westphalia-born German mathematician.
So for uniform convergence we must have

$\left|\frac{x^n}{n!}\right| \le M_n,$

so take

$M_n = \frac{A^n}{n!}$

(note $M_n$ is thus strictly positive). Then

$\sum_{n=0}^{\infty} M_n = \sum_{n=0}^{\infty} \frac{A^n}{n!}.$

By the ratio test, this is convergent if

$\lim_{n\to\infty}\left|\frac{A^{n+1}/(n+1)!}{A^n/n!}\right| \le 1, \qquad \lim_{n\to\infty}\left|\frac{A}{n+1}\right| \le 1.$

This holds for all $A$, so in the domain $-\infty < x < \infty$ the series converges absolutely and uniformly.

4.1.2 Second-order equation

We consider series solutions of

$P(x)\frac{d^2y}{dx^2} + Q(x)\frac{dy}{dx} + R(x)\,y = 0$   (4.3)

around $x = a$. There are three different cases, depending on the behavior of $P(a)$, $Q(a)$ and $R(a)$, in which $x = a$ is classified as an ordinary point, a regular singular point, or an irregular singular point. These are described below.

4.1.2.1 Ordinary point

If $P(a) \ne 0$ and $Q/P$, $R/P$ are analytic at $x = a$, this point is called an ordinary point. The general solution is $y = C_1 y_1(x) + C_2 y_2(x)$, where $y_1$ and $y_2$ are of the form $\sum_{n=0}^{\infty} a_n (x-a)^n$. The radius of convergence of the series is the distance to the nearest complex singularity, i.e. the distance between $x = a$ and the closest point on the complex plane at which $Q/P$ or $R/P$ is not analytic.

Example 4.2

Find the series solution of

$y'' + x y' + y = 0, \qquad y(0) = y_o, \quad y'(0) = y_o',$

around $x = 0$.
$x = 0$ is an ordinary point, so we have

$y = \sum_{n=0}^{\infty} a_n x^n,$

$x y' = \sum_{n=1}^{\infty} n a_n x^n = \sum_{n=0}^{\infty} n a_n x^n,$

$y'' = \sum_{n=2}^{\infty} n(n-1) a_n x^{n-2} = \sum_{m=0}^{\infty} (m+1)(m+2) a_{m+2} x^m = \sum_{n=0}^{\infty} (n+1)(n+2) a_{n+2} x^n \quad (m = n-2).$

Substituting into the equation, we get

$\sum_{n=0}^{\infty}\left[ (n+1)(n+2) a_{n+2} + n a_n + a_n \right] x^n = 0.$

Equating the coefficients to zero, we get

$a_{n+2} = -\frac{1}{n+2}\,a_n,$

so that

$y = a_0\left( 1 - \frac{x^2}{2} + \frac{x^4}{4\cdot 2} - \frac{x^6}{6\cdot 4\cdot 2} + \cdots \right) + a_1\left( x - \frac{x^3}{3} + \frac{x^5}{5\cdot 3} - \frac{x^7}{7\cdot 5\cdot 3} + \cdots \right),$

$y = y_o\sum_{n=0}^{\infty}\frac{(-1)^n}{2^n n!}x^{2n} + y_o'\sum_{n=1}^{\infty}\frac{(-1)^{n-1}\,2^n n!}{(2n)!}x^{2n-1}.$

The series converges for all $x$. For $y_o = 1$, $y_o' = 0$ the solution reduces to the even series alone, which can be shown to be

$y = \exp\left(-\frac{x^2}{2}\right).$

For this case the exact solution and two approximations to the exact solution are shown in Figure 4.2.
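The recurrence of Example 4.2 can be checked numerically against the closed form $\exp(-x^2/2)$; a sketch (function name and truncation length are our choices):

```python
import math

# Build the series solution of y'' + x y' + y = 0 from the recurrence
# a_{n+2} = -a_n/(n+2), with y(0) = 1, y'(0) = 0, and compare the
# partial sum against exp(-x^2/2).
def series(x, nterms=40):
    a = [0.0] * nterms
    a[0], a[1] = 1.0, 0.0          # a0 = y(0), a1 = y'(0)
    for n in range(nterms - 2):
        a[n + 2] = -a[n] / (n + 2)
    return sum(a[n] * x**n for n in range(nterms))

for x in (0.0, 0.5, 1.5):
    assert abs(series(x) - math.exp(-x**2 / 2)) < 1e-9
```

With $y'(0) = 0$ the odd coefficients vanish identically, so only the even series contributes.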
[Figure 4.2: Comparison of truncated series and exact solutions for $y'' + xy' + y = 0$, $y(0) = 1$, $y'(0) = 0$: $y = \exp(-x^2/2)$ (exact), $y = 1 - x^2/2$, and $y = 1 - x^2/2 + x^4/8$.]

4.1.2.2 Regular singular point

If $P(a) = 0$, then $x = a$ is a singular point. Furthermore, if $(x-a)Q/P$ and $(x-a)^2 R/P$ are both analytic at $x = a$, this point is called a regular singular point. Then there exists at least one solution of the form

$y(x) = (x-a)^r \sum_{n=0}^{\infty} a_n (x-a)^n.$   (4.4)

This is the Frobenius³ method. The radius of convergence of the series is again the distance to the nearest complex singularity. An equation for $r$ is called the indicial equation. The following are the different kinds of solutions of the indicial equation possible:

1. $r_1 \ne r_2$, and $r_1 - r_2$ not an integer. Then

$y_1 = (x-a)^{r_1}\sum_{n=0}^{\infty} a_n (x-a)^n,$   (4.5)
$y_2 = (x-a)^{r_2}\sum_{n=0}^{\infty} b_n (x-a)^n.$   (4.6)

2. $r_1 = r_2 = r$. Then

$y_1 = (x-a)^{r}\sum_{n=0}^{\infty} a_n (x-a)^n,$   (4.7)
$y_2 = y_1 \ln x + (x-a)^{r}\sum_{n=0}^{\infty} b_n (x-a)^n.$   (4.8)

3. $r_1 \ne r_2$, and $r_1 - r_2$ a positive integer. Then

$y_1 = (x-a)^{r_1}\sum_{n=0}^{\infty} a_n (x-a)^n,$

³ Ferdinand Georg Frobenius, 1849-1917, Prussian/German mathematician.
$y_2 = k\,y_1 \ln x + (x-a)^{r_2}\sum_{n=0}^{\infty} b_n (x-a)^n.$   (4.9)

The constants $a_n$ and $k$ are determined by the differential equation. The general solution is

$y(x) = C_1 y_1(x) + C_2 y_2(x).$   (4.10)

Example 4.3

Find the series solution of

$4x y'' + 2y' + y = 0$

around $x = 0$. $x = 0$ is a regular singular point, so we take

$y = \sum_{n=0}^{\infty} a_n x^{n+r},$
$y' = \sum_{n=0}^{\infty} a_n (n+r) x^{n+r-1},$
$y'' = \sum_{n=0}^{\infty} a_n (n+r)(n+r-1) x^{n+r-2}.$

Substituting,

$4\sum_{n=0}^{\infty} a_n (n+r)(n+r-1) x^{n+r-1} + 2\sum_{n=0}^{\infty} a_n (n+r) x^{n+r-1} + \sum_{n=0}^{\infty} a_n x^{n+r} = 0,$

$2\sum_{n=0}^{\infty} a_n (n+r)(2n+2r-1) x^{n+r-1} + \sum_{n=0}^{\infty} a_n x^{n+r} = 0.$

Shifting the index in the first sum ($m = n - 1$),

$2\sum_{n=-1}^{\infty} a_{n+1}(n+1+r)(2(n+1)+2r-1) x^{n+r} + \sum_{n=0}^{\infty} a_n x^{n+r} = 0.$

The first term ($n = -1$) gives the indicial equation:

$r(2r-1) = 0,$

from which $r = 0, \frac{1}{2}$. We then have

$2\sum_{n=0}^{\infty} a_{n+1}(n+r+1)(2n+2r+1) x^{n+r} + \sum_{n=0}^{\infty} a_n x^{n+r} = 0.$

For $r = 0$,

$a_{n+1} = -a_n\,\frac{1}{(2n+2)(2n+1)},$

$y_1 = a_0\left( 1 - \frac{x}{2!} + \frac{x^2}{4!} - \frac{x^3}{6!} + \cdots \right).$
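The $r = 0$ Frobenius series of Example 4.3 — the notes identify it as $\cos\sqrt{x}$ — can be confirmed numerically; a sketch (names and term count ours):

```python
import math

# r = 0 branch of 4x y'' + 2y' + y = 0: a_{n+1} = -a_n/((2n+2)(2n+1)),
# i.e. y1 = sum over n of (-1)^n x^n / (2n)!  =  cos(sqrt(x)).
def y1(x, nterms=30):
    a, total = 1.0, 0.0
    for n in range(nterms):
        total += a * x**n
        a = -a / ((2 * n + 2) * (2 * n + 1))
    return total

for x in (0.1, 1.0, 4.0):
    assert abs(y1(x) - math.cos(math.sqrt(x))) < 1e-12
```

Replacing $x^n$ by $(\sqrt{x})^{2n}$ makes the identification transparent: the recurrence reproduces the factorials $(2n)!$ of the cosine series.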
3. Consider the larger of the two. Example 4.3: Comparison of truncated series and exact solutions For r = 1 2 1 2(2n + 3)(n + 1) x2 x3 x − + ··· y2 = a0 x1/2 1 − + 3! 5! 7! √ √ The series converge for all x to y1 = cos x and y2 = sin x.y 1 y = cos (x1/2 ) (exact) 20 1 2 3 40 60 80 100 x 4 x y’’ + 2 y’ + y = 0 y (0) = 1 y=1x/2 y ( π2/4) = 0 4 Figure 4. i. For this we get an = = 1 an−1 n(n + 1) 1 a0 n!(n + 1)! 88 . Let y = ∞ n=0 an xn+r . r = 1. from which r = 0.e. y(π 2 /4) = 0 the exact solution and the linear approximation to the exact solution are shown in Figure 4. Then. The general solution is an+1 = −an y = C1 y1 + C2 y2 or y(x) = C1 cos √ √ x + C2 sin x For yo = 1. from the equation ∞ r(r − 1)a0 xr−1 + [(n + r)(n + r − 1)an − an−1 ]xn+r−1 = 0 n=1 The indicial equation is r(r − 1) = 0.4 Find the series solution of xy − y = 0 around x = 0. 1.
. ... 4. In this case a series solution cannot be guaranteed. − b0 + b1 x + b2 x2 + .1. + 2k 1 + x + x2 + . .5 Solve y − xy = 0 89 . .. y1 (x) = x + x2 + x3 + 2 12 144 ∞ The second solution is y2 (x) = ky1 (x) ln x + bn xn n=0 Substituting into the diﬀerential equation we get − ky1 + 2ky1 + bn n(n − 1)xn−1 − bn xn = 0 x n=0 n=0 ∞ ∞ Substituting the solution y1 (x) already obtained. +C2 x + x2 + x3 + 2 12 144 4 36 1728 = b0 = 1 k(2n + 1) bn − for n = 1.. = C1 x + x2 + x3 + 2 12 144 1 1 4 3 7 35 4 1 x + . . + b1 x + x2 + x3 + x + . . y2 (x) = b0 y1 ln x + b0 1 − x2 − x3 − 4 36 1728 2 12 144 Since the last series is y1 (x)..3 Higher order equations Similar techniques can sometimes be used for equations of higher order. we have k bn+1 Thus 7 35 4 1 1 4 3 1 x − .. this point is an irregular singular point.Thus 1 1 1 4 x + . . we choose b0 = 1 and b1 = 0. .1. . .. 2. 2 12 2 + 2b2 x + 6b3 x2 + . The general solution is y(x) 1 1 4 1 x + .2. Example 4. .. n(n + 1) n!(n + 1)! 4. Collecting terms. ln x + 1 − x2 − x3 − x − . . . we get 0 1 1 1 = −k 1 + x + x2 + . .3 Irregular singular point If P (a) = 0 and in addition either (x − a)Q/P or (x − a)2 R/P is not analytic at x = a. .
around $x = 0$. Let

$y = \sum_{n=0}^{\infty} a_n x^n,$

from which

$xy = \sum_{n=1}^{\infty} a_{n-1} x^n,$
$y''' = 6a_3 + \sum_{n=1}^{\infty} (n+1)(n+2)(n+3)\,a_{n+3}\,x^n.$

Substituting into the equation, we find that

$a_3 = 0, \qquad a_{n+3} = \frac{1}{(n+1)(n+2)(n+3)}\,a_{n-1},$

which gives the general solution

$y(x) = a_0\left( 1 + \frac{x^4}{24} + \frac{x^8}{8064} + \cdots \right) + a_1 x\left( 1 + \frac{x^4}{60} + \frac{x^8}{30240} + \cdots \right) + a_2 x^2\left( 1 + \frac{x^4}{120} + \frac{x^8}{86400} + \cdots \right).$

For $y(0) = 1$, $y'(0) = 0$, $y''(0) = 0$ the exact solution and the linear approximation to the exact solution are shown in Figure 4.4.

[Figure 4.4: Comparison of truncated series ($y = 1 + x^4/24$) and exact solutions for $y''' - xy = 0$, $y(0) = 1$, $y'(0) = 0$, $y''(0) = 0$.]
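The four-term recurrence of Example 4.5 can be checked in exact arithmetic; a sketch using Python's `fractions` (the dictionary layout is our choice):

```python
from fractions import Fraction

# Coefficients of y''' - x y = 0 from a_3 = 0 and
# a_{n+3} = a_{n-1} / ((n+1)(n+2)(n+3)), with a0 = 1, a1 = a2 = 0;
# check against the printed series 1 + x^4/24 + x^8/8064 + ...
a = {0: Fraction(1), 1: Fraction(0), 2: Fraction(0), 3: Fraction(0)}
for n in range(1, 9):
    a[n + 3] = a[n - 1] / ((n + 1) * (n + 2) * (n + 3))

assert a[4] == Fraction(1, 24)     # n = 1: a0 / (2*3*4)
assert a[8] == Fraction(1, 8064)   # n = 5: a4 / (6*7*8)
```

The other two coefficient families ($x^4/60$, $x^8/30240$ and $x^4/120$, $x^8/86400$) follow the same pattern with $a_1 = 1$ or $a_2 = 1$ as the seed.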
4.2 Perturbation methods

Perturbation methods, also known as linearization or asymptotic techniques, are not as rigorous as infinite series methods in that usually it is impossible to make a statement regarding convergence. Nevertheless, the methods have proven to be powerful in many regimes of applied mathematics, science, and engineering.

The method hinges on the identification of a small parameter $\epsilon$, $0 < \epsilon \ll 1$. Typically there is an easily obtained solution when $\epsilon = 0$. One then uses this solution as a seed to construct a linear theory about it. The resulting set of linear equations are then solved, giving a solution which is valid in a regime near $\epsilon = 0$.

4.2.1 Algebraic and transcendental equations

To illustrate the method of solution, we begin with quadratic algebraic equations for which exact solutions are available. We can then easily see the advantages and limitations of the method.

Example 4.6

For $0 < \epsilon \ll 1$ solve

$x^2 + \epsilon x - 1 = 0.$

Let

$x = x_0 + \epsilon x_1 + \epsilon^2 x_2 + \cdots.$

Substituting into the equation,

$\left( x_0 + \epsilon x_1 + \epsilon^2 x_2 + \cdots \right)^2 + \epsilon\left( x_0 + \epsilon x_1 + \epsilon^2 x_2 + \cdots \right) - 1 = 0,$

expanding the square by polynomial multiplication,

$x_0^2 + 2x_0 x_1 \epsilon + (x_1^2 + 2x_0 x_2)\epsilon^2 + \cdots + \epsilon x_0 + \epsilon^2 x_1 + \cdots - 1 = 0,$

and collecting different powers of $\epsilon$, we get

$O(\epsilon^0)$: $x_0^2 - 1 = 0$, so $x_0 = 1, -1$;
$O(\epsilon^1)$: $2x_0 x_1 + x_0 = 0$, so $x_1 = -\frac{1}{2}, -\frac{1}{2}$;
$O(\epsilon^2)$: $x_1^2 + 2x_0 x_2 + x_1 = 0$, so $x_2 = \frac{1}{8}, -\frac{1}{8}$;
$\vdots$

The solutions are

$x = 1 - \frac{\epsilon}{2} + \frac{\epsilon^2}{8} + \cdots$

and

$x = -1 - \frac{\epsilon}{2} - \frac{\epsilon^2}{8} + \cdots.$

The exact solutions can also be expanded,

$x = \frac{1}{2}\left( -\epsilon \pm \sqrt{\epsilon^2 + 4} \right) = \pm 1 - \frac{\epsilon}{2} \pm \frac{\epsilon^2}{8} + \cdots,$

to give the same results. The exact solution and the linear approximation are shown in Figure 4.5.
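The order of accuracy of the expansions in Example 4.6 can be confirmed numerically; a sketch (tolerances reflect the expected error orders):

```python
import math

# Root of x^2 + eps*x - 1 = 0 near +1: exact vs. truncated expansions.
# The two-term expansion should be accurate to O(eps^2), the
# three-term expansion to O(eps^3).
for eps in (0.1, 0.01):
    exact = -eps / 2 + math.sqrt(1 + eps**2 / 4)
    two_term = 1 - eps / 2
    three_term = 1 - eps / 2 + eps**2 / 8
    assert abs(two_term - exact) < eps**2
    assert abs(three_term - exact) < eps**3
```

Halving $\epsilon$ shrinks the three-term error roughly eightfold, the signature of an $O(\epsilon^3)$ remainder.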
[Figure 4.5: Comparison of asymptotic and exact solutions of $x^2 + \epsilon x - 1 = 0$.]

Example 4.7

For $0 < \epsilon \ll 1$ solve

$\epsilon x^2 + x - 1 = 0.$

Let

$x = x_0 + \epsilon x_1 + \epsilon^2 x_2 + \cdots.$

Substituting into the equation, we get

$\epsilon\left( x_0 + \epsilon x_1 + \epsilon^2 x_2 + \cdots \right)^2 + \left( x_0 + \epsilon x_1 + \epsilon^2 x_2 + \cdots \right) - 1 = 0.$

Expanding the quadratic term gives

$\epsilon\left( x_0^2 + 2x_0 x_1 \epsilon + \cdots \right) + \left( x_0 + \epsilon x_1 + \epsilon^2 x_2 + \cdots \right) - 1 = 0.$

Collecting different powers of $\epsilon$, we get

$O(\epsilon^0)$: $x_0 - 1 = 0$, so $x_0 = 1$;
$O(\epsilon^1)$: $x_0^2 + x_1 = 0$, so $x_1 = -1$;
$O(\epsilon^2)$: $2x_0 x_1 + x_2 = 0$, so $x_2 = 2$;
$\vdots$

This gives one solution,

$x = 1 - \epsilon + 2\epsilon^2 + \cdots.$

To get the other solution, let

$X = \frac{x}{\epsilon^\alpha}.$

The equation becomes

$\epsilon^{2\alpha+1} X^2 + \epsilon^\alpha X - 1 = 0.$

The first two terms are of the same order if $2\alpha + 1 = \alpha$, i.e. $\alpha = -1$. With this, $X = \epsilon x$ and the equation becomes

$\epsilon^{-1} X^2 + \epsilon^{-1} X - 1 = 0, \qquad X^2 + X - \epsilon = 0.$

We expand

$X = X_0 + \epsilon X_1 + \epsilon^2 X_2 + \cdots,$
[Figure 4.6: Comparison of asymptotic and exact solutions of $\epsilon x^2 + x - 1 = 0$.]

so

$\left( X_0 + \epsilon X_1 + \epsilon^2 X_2 + \cdots \right)^2 + \left( X_0 + \epsilon X_1 + \epsilon^2 X_2 + \cdots \right) - \epsilon = 0,$

$X_0^2 + 2X_0 X_1 \epsilon + (X_1^2 + 2X_0 X_2)\epsilon^2 + \cdots + X_0 + \epsilon X_1 + \epsilon^2 X_2 + \cdots - \epsilon = 0.$

Collecting terms of the same order,

$O(\epsilon^0)$: $X_0^2 + X_0 = 0$, so $X_0 = -1, 0$;
$O(\epsilon^1)$: $2X_0 X_1 + X_1 - 1 = 0$, so $X_1 = -1, 1$;
$O(\epsilon^2)$: $X_1^2 + 2X_0 X_2 + X_2 = 0$, so $X_2 = 1, -1$;
$\vdots$

to give the two solutions

$X = -1 - \epsilon + \epsilon^2 + \cdots, \qquad X = \epsilon - \epsilon^2 + \cdots,$

or, since $x = X/\epsilon$,

$x = \frac{1}{\epsilon}\left( -1 - \epsilon + \epsilon^2 + \cdots \right) = -\frac{1}{\epsilon} - 1 + \epsilon + \cdots$

and

$x = 1 - \epsilon + \cdots.$

Expansion of the exact solutions,

$x = \frac{1}{2\epsilon}\left( -1 \pm \sqrt{1 + 4\epsilon} \right) = \frac{1}{2\epsilon}\left( -1 \pm \left( 1 + 2\epsilon - 2\epsilon^2 + 4\epsilon^3 + \cdots \right) \right),$

gives the same results. The exact solution and the asymptotic approximations are shown in Figure 4.6.
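Both branches of Example 4.7 — the regular root near $1$ and the singular root of size $1/\epsilon$ — can be checked against the exact quadratic roots; a sketch (the tolerance prefactors are our conservative choices):

```python
import math

# eps*x^2 + x - 1 = 0: compare exact roots with the asymptotic forms
# x ~ 1 - eps + 2 eps^2   (regular root)
# x ~ -1/eps - 1 + eps    (singular root)
for eps in (0.05, 0.01):
    root_plus = (-1 + math.sqrt(1 + 4 * eps)) / (2 * eps)
    root_minus = (-1 - math.sqrt(1 + 4 * eps)) / (2 * eps)
    assert abs(root_plus - (1 - eps + 2 * eps**2)) < 10 * eps**3
    assert abs(root_minus - (-1 / eps - 1 + eps)) < 10 * eps**2
```

Note that a naive regular expansion can never find the singular root, since it diverges as $\epsilon \to 0$; the rescaling $X = \epsilon x$ is what makes it visible.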
. 0.) =0 + (−(sin(x0 + x1 + .5 5 5 0. The right hand side is similar. Expanding the left hand side. . . . We then arrive at the original equation being expressed as cos xo − x1 sin x0 + . It is seen that there are multiple intersections near x = n + 1 π . . . We note that a general function f ( ) has such a Taylor series of f ( ) ∼ f (0) + f (0) + ( 2 /2)f (0) + . 2 We substitute x = x0 + x1 + 2 x2 + · · · we have cos(x0 + x1 + 2 x2 + · · ·) = sin(x0 + x1 + 2 x2 + · · · + ) Now we expand both the left and right hand sides in a Taylor series in about = 0. 2 Figure 4. . . x= = = 0. .1.1 1 cos (x) 10 . .. we get cos(x0 + x1 + . . x0 = π 2 x1 = −1 The solution is π − + ··· 2 94 .7: Location of roots Example 4. ..) Collecting terms cos x0 O( 0 ) : O( 1 ) : −x1 sin x0 − sin x0 .) =0 + . cos(x0 + x1 + .5 1 ε sin(x + ε) x 10 Figure 4.. . . . .f(x) ε = 0. . . = (sin x0 + .7 shows a plot of cos x and sin(x + ) for = 0. We seek only one of these.))(x1 + 2 x2 + .8 Solve cos x = sin(x + ) for x near π . ... .. .) = cos x0 − x1 sin x0 + .. .) = cos(x0 + x1 + . 0.
4.2.2 Regular perturbations

Differential equations can also be solved using perturbation techniques.

Example 4.9

For $0 < \epsilon \ll 1$ solve

$y'' + \epsilon y^2 = 0, \qquad y(0) = 1, \quad y'(0) = 0.$

Let

$y(x) = y_0(x) + \epsilon y_1(x) + \epsilon^2 y_2(x) + \cdots,$
$y'(x) = y_0'(x) + \epsilon y_1'(x) + \epsilon^2 y_2'(x) + \cdots,$
$y''(x) = y_0''(x) + \epsilon y_1''(x) + \epsilon^2 y_2''(x) + \cdots.$

Substituting into the equation,

$y_0'' + \epsilon y_1'' + \epsilon^2 y_2'' + \cdots + \epsilon\left( y_0 + \epsilon y_1 + \epsilon^2 y_2 + \cdots \right)^2 = 0,$

$y_0'' + \epsilon y_1'' + \epsilon^2 y_2'' + \cdots + \epsilon\,y_0^2 + 2\epsilon^2 y_1 y_0 + \cdots = 0.$

Substituting into the boundary conditions:

$y_0(0) + \epsilon y_1(0) + \epsilon^2 y_2(0) + \cdots = 1,$
$y_0'(0) + \epsilon y_1'(0) + \epsilon^2 y_2'(0) + \cdots = 0.$

Collecting terms,

$O(\epsilon^0)$: $y_0'' = 0$, $y_0(0) = 1$, $y_0'(0) = 0$, so $y_0 = 1$;
$O(\epsilon^1)$: $y_1'' = -y_0^2$, $y_1(0) = 0$, $y_1'(0) = 0$, so $y_1 = -\frac{x^2}{2}$;
$O(\epsilon^2)$: $y_2'' = -2y_0 y_1$, $y_2(0) = 0$, $y_2'(0) = 0$, so $y_2 = \frac{x^4}{12}$;
$\vdots$

The solution is

$y = 1 - \epsilon\frac{x^2}{2} + \epsilon^2\frac{x^4}{12} + \cdots.$

Using the techniques of the previous chapter, it is seen that this equation has an exact solution. With

$u = \frac{dy}{dx}, \qquad \frac{d^2y}{dx^2} = \frac{du}{dx} = \frac{du}{dy}\frac{dy}{dx} = u\frac{du}{dy},$

the original equation becomes

$u\frac{du}{dy} + \epsilon y^2 = 0,$
$u\,du = -\epsilon y^2\,dy,$
$\frac{u^2}{2} = -\epsilon\frac{y^3}{3} + C.$

Since $u = 0$ when $y = 1$, we have $C = \epsilon/3$, so

$u = \pm\sqrt{\frac{2\epsilon}{3}\left( 1 - y^3 \right)},$

$\frac{dy}{dx} = \pm\sqrt{\frac{2\epsilon}{3}\left( 1 - y^3 \right)}.$
Separating variables and integrating, $x$ can be expressed implicitly in terms of $y$ through the Gauss⁴ hypergeometric function, $y\,{}_2F_1\!\left( \frac{1}{2}, \frac{1}{3}; \frac{4}{3}; y^3 \right)$. A portion of the asymptotic and exact solutions is shown in Figure 4.8.

[Figure 4.8: Comparison of asymptotic and exact solutions of $y'' + \epsilon y^2 = 0$, $y(0) = 1$, $y'(0) = 0$.]

⁴ Johann Carl Friedrich Gauss, 1777-1855, Brunswick-born German mathematician of tremendous influence.
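The perturbation series of Example 4.9 can be compared against a direct numerical integration of the full nonlinear equation; a sketch using a hand-rolled RK4 step (all names and the step count are ours):

```python
import math

# Integrate y'' + eps*y^2 = 0, y(0) = 1, y'(0) = 0 with classical RK4
# on the first-order system (y, v), v = y', and compare with the
# asymptotic solution y ~ 1 - eps*x^2/2 + eps^2*x^4/12.
def rk4(eps, x_end, n=2000):
    y, v = 1.0, 0.0
    h = x_end / n
    def f(y, v):
        return v, -eps * y * y       # (y', v')
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + h/2 * k1[0], v + h/2 * k1[1])
        k3 = f(y + h/2 * k2[0], v + h/2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y

eps, x = 0.1, 1.0
series = 1 - eps * x**2 / 2 + eps**2 * x**4 / 12
assert abs(rk4(eps, x) - series) < eps**3   # agreement through O(eps^2)
```

The residual is $O(\epsilon^3)$ on this bounded interval, as the next term of the expansion ($y_3 = -x^6/72$) predicts.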