By ir. J.C.A. Wevers
© 1999, 2008 J.C.A. Wevers. Version: May 4, 2008
Dear reader,
This document contains 66 pages of mathematical equations for physicists and engineers. It is meant
as a short reference for anyone who often needs to look up mathematical equations.
This document can also be obtained from the author, Johan Wevers (johanw@vulcan.xs4all.nl).
It can also be found on the WWW on http://www.xs4all.nl/~johanw/index.html.
This document is copyright by J.C.A. Wevers. All rights reserved. Permission to use, copy and distribute this
unmodified document by any means and for any purpose except profit purposes is hereby granted. Reproducing this
document by any means, including, but not limited to, printing, copying existing prints, or publishing by electronic or
other means, implies full agreement to the above non-profit-use clause, unless prior written permission is explicitly
obtained from the author.
The C code for the root finding via Newton's method and the FFT in chapter 8 is from "Numerical Recipes in C",
2nd Edition, ISBN 0521431085.
The Mathematics Formulary is made with teTeX and LaTeX version 2.09.
If you prefer the notation in which vectors are typefaced in boldface, uncomment the redefinition of the \vec
command and recompile the file.
If you find any errors or have any comments, please let me know. I am always open to suggestions and possible
corrections to the mathematics formulary.
Johan Wevers
Contents
Contents I
1 Basics 1
1.1 Goniometric functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Hyperbolic functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Complex numbers and quaternions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5.1 Complex numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5.2 Quaternions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.6 Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6.1 Triangles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6.2 Curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.7 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.8 Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.8.1 Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.8.2 Convergence and divergence of series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.8.3 Convergence and divergence of functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.9 Products and quotients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.10 Logarithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.11 Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.12 Primes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Probability and statistics 9
2.1 Combinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Probability theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3.1 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3.2 Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.4 Regression analyses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3 Calculus 12
3.1 Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.1.1 Arithmetic rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.1.2 Arc lengths, surfaces and volumes . . . . . . . . . . . . . . . . . . . . . . . . 12
3.1.3 Separation of quotients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.4 Special functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.5 Goniometric integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Functions with more variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2.1 Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2.2 Taylor series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2.3 Extrema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2.4 The ∇operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2.5 Integral theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2.6 Multiple integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2.7 Coordinate transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3 Orthogonality of functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.4 Fourier series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4 Differential equations 20
4.1 Linear differential equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.1.1 First order linear DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.1.2 Second order linear DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.1.3 The Wronskian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.1.4 Power series substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2 Some special cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2.1 Frobenius’ method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2.2 Euler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.2.3 Legendre’s DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.2.4 The associated Legendre equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.2.5 Solutions for Bessel’s equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.2.6 Properties of Bessel functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.2.7 Laguerre’s equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.2.8 The associated Laguerre equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.2.9 Hermite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.2.10 Chebyshev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.2.11 Weber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.3 Nonlinear differential equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.4 Sturm-Liouville equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.5 Linear partial differential equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.5.1 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.5.2 Special cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.5.3 Potential theory and Green’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5 Linear algebra 29
5.1 Vector spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.2 Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.3 Matrix calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.3.1 Basic operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.3.2 Matrix equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.4 Linear transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.5 Plane and line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.6 Coordinate transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.7 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.8 Transformation types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.9 Homogeneous coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.10 Inner product spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.11 The Laplace transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.12 The convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.13 Systems of linear differential equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.14 Quadratic forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.14.1 Quadratic forms in ℝ² . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.14.2 Quadratic surfaces in ℝ³ . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6 Complex function theory 39
6.1 Functions of complex variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.2 Complex integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.2.1 Cauchy’s integral formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.2.2 Residue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6.3 Analytical functions defined by series . . . . . . . . . . . . . . . . . . . . . . 41
6.4 Laurent series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.5 Jordan’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
7 Tensor calculus 43
7.1 Vectors and covectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7.2 Tensor algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
7.3 Inner product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
7.4 Tensor product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.5 Symmetric and antisymmetric tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.6 Outer product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.7 The Hodge star operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.8 Differential operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.8.1 The directional derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.8.2 The Lie derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.8.3 Christoffel symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.8.4 The covariant derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.9 Differential operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.10 Differential geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.10.1 Space curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.10.2 Surfaces in ℝ³ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.10.3 The ﬁrst fundamental tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
7.10.4 The second fundamental tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
7.10.5 Geodetic curvature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
7.11 Riemannian geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
8 Numerical mathematics 51
8.1 Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
8.2 Floating point representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
8.3 Systems of equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
8.3.1 Triangular matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
8.3.2 Gauss elimination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
8.3.3 Pivot strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
8.4 Roots of functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
8.4.1 Successive substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
8.4.2 Local convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
8.4.3 Aitken extrapolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
8.4.4 Newton iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
8.4.5 The secant method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
8.5 Polynomial interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
8.6 Deﬁnite integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
8.7 Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
8.8 Differential equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
8.9 The fast Fourier transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Chapter 1
Basics
1.1 Goniometric functions
For a point $p$ on the unit circle the goniometric ratios are:
$$\cos(\varphi)=x_p~~,~~\sin(\varphi)=y_p~~,~~\tan(\varphi)=\frac{y_p}{x_p}$$
$$\sin^2(x)+\cos^2(x)=1~~\mbox{and}~~\cos^{-2}(x)=1+\tan^2(x)$$
$$\cos(a\pm b)=\cos(a)\cos(b)\mp\sin(a)\sin(b)~~,~~\sin(a\pm b)=\sin(a)\cos(b)\pm\cos(a)\sin(b)$$
$$\tan(a\pm b)=\frac{\tan(a)\pm\tan(b)}{1\mp\tan(a)\tan(b)}$$
The sum formulas are:
$$\sin(p)+\sin(q)=2\sin\left(\tfrac{1}{2}(p+q)\right)\cos\left(\tfrac{1}{2}(p-q)\right)$$
$$\sin(p)-\sin(q)=2\cos\left(\tfrac{1}{2}(p+q)\right)\sin\left(\tfrac{1}{2}(p-q)\right)$$
$$\cos(p)+\cos(q)=2\cos\left(\tfrac{1}{2}(p+q)\right)\cos\left(\tfrac{1}{2}(p-q)\right)$$
$$\cos(p)-\cos(q)=-2\sin\left(\tfrac{1}{2}(p+q)\right)\sin\left(\tfrac{1}{2}(p-q)\right)$$
From these equations it can be derived that
$$2\cos^2(x)=1+\cos(2x)~~,~~2\sin^2(x)=1-\cos(2x)$$
$$\sin(\pi-x)=\sin(x)~~,~~\cos(\pi-x)=-\cos(x)$$
$$\sin\left(\tfrac{1}{2}\pi-x\right)=\cos(x)~~,~~\cos\left(\tfrac{1}{2}\pi-x\right)=\sin(x)$$
Conclusions from equalities:
$$\sin(x)=\sin(a)~\Rightarrow~x=a\pm 2k\pi~\mbox{or}~x=(\pi-a)\pm 2k\pi,~k\in\mathbb{N}$$
$$\cos(x)=\cos(a)~\Rightarrow~x=a\pm 2k\pi~\mbox{or}~x=-a\pm 2k\pi$$
$$\tan(x)=\tan(a)~\Rightarrow~x=a\pm k\pi~\mbox{and}~x\neq\frac{\pi}{2}\pm k\pi$$
The following relations exist between the inverse goniometric functions:
$$\arctan(x)=\arcsin\left(\frac{x}{\sqrt{x^2+1}}\right)=\arccos\left(\frac{1}{\sqrt{x^2+1}}\right)~~,~~\sin(\arccos(x))=\sqrt{1-x^2}$$
1.2 Hyperbolic functions
The hyperbolic functions are defined by:
$$\sinh(x)=\frac{e^x-e^{-x}}{2}~~,~~\cosh(x)=\frac{e^x+e^{-x}}{2}~~,~~\tanh(x)=\frac{\sinh(x)}{\cosh(x)}$$
From this follows that $\cosh^2(x)-\sinh^2(x)=1$. Further holds:
$$\mbox{arsinh}(x)=\ln|x+\sqrt{x^2+1}\,|~~,~~\mbox{arcosh}(x)=\mbox{arsinh}(\sqrt{x^2-1})$$
1.3 Calculus
The derivative of a function is defined as:
$$\frac{df}{dx}=\lim_{h\rightarrow 0}\frac{f(x+h)-f(x)}{h}$$
Derivatives obey the following algebraic rules:
$$d(x\pm y)=dx\pm dy~~,~~d(xy)=x\,dy+y\,dx~~,~~d\left(\frac{x}{y}\right)=\frac{y\,dx-x\,dy}{y^2}$$
For the derivative of the inverse function $f^{\rm inv}(y)$, defined by $f^{\rm inv}(f(x))=x$, holds at point $P=(x,f(x))$:
$$\left(\frac{df^{\rm inv}(y)}{dy}\right)_P\cdot\left(\frac{df(x)}{dx}\right)_P=1$$
Chain rule: if $f=f(g(x))$, then holds
$$\frac{df}{dx}=\frac{df}{dg}\cdot\frac{dg}{dx}$$
Further, for the derivatives of products of functions holds:
$$(f\cdot g)^{(n)}=\sum_{k=0}^{n}\binom{n}{k}f^{(n-k)}\cdot g^{(k)}$$
For the primitive function $F(x)$ holds: $F'(x)=f(x)$. An overview of derivatives and primitives is:

$y=f(x)$              $dy/dx=f'(x)$              $\int f(x)\,dx$
$ax^n$                $anx^{n-1}$                $a(n+1)^{-1}x^{n+1}$
$1/x$                 $-x^{-2}$                  $\ln|x|$
$a$                   $0$                        $ax$
$a^x$                 $a^x\ln(a)$                $a^x/\ln(a)$
$e^x$                 $e^x$                      $e^x$
$^a\log(x)$           $(x\ln(a))^{-1}$           $(x\ln(x)-x)/\ln(a)$
$\ln(x)$              $1/x$                      $x\ln(x)-x$
$\sin(x)$             $\cos(x)$                  $-\cos(x)$
$\cos(x)$             $-\sin(x)$                 $\sin(x)$
$\tan(x)$             $\cos^{-2}(x)$             $-\ln|\cos(x)|$
$\sin^{-1}(x)$        $-\sin^{-2}(x)\cos(x)$     $\ln|\tan(\frac{1}{2}x)|$
$\sinh(x)$            $\cosh(x)$                 $\cosh(x)$
$\cosh(x)$            $\sinh(x)$                 $\sinh(x)$
$\arcsin(x)$          $1/\sqrt{1-x^2}$           $x\arcsin(x)+\sqrt{1-x^2}$
$\arccos(x)$          $-1/\sqrt{1-x^2}$          $x\arccos(x)-\sqrt{1-x^2}$
$\arctan(x)$          $(1+x^2)^{-1}$             $x\arctan(x)-\frac{1}{2}\ln(1+x^2)$
$(a+x^2)^{-1/2}$      $-x(a+x^2)^{-3/2}$         $\ln|x+\sqrt{a+x^2}\,|$
$(a^2-x^2)^{-1}$      $2x(a^2-x^2)^{-2}$         $\frac{1}{2a}\ln|(a+x)/(a-x)|$
The curvature $\rho$ of a curve is given by:
$$\rho=\frac{(1+(y')^2)^{3/2}}{|y''|}$$
The theorem of l'Hôpital: if $f(a)=0$ and $g(a)=0$, then
$$\lim_{x\rightarrow a}\frac{f(x)}{g(x)}=\lim_{x\rightarrow a}\frac{f'(x)}{g'(x)}$$
1.4 Limits
$$\lim_{x\rightarrow 0}\frac{\sin(x)}{x}=1~~,~~\lim_{x\rightarrow 0}\frac{e^x-1}{x}=1~~,~~\lim_{x\rightarrow 0}\frac{\tan(x)}{x}=1~~,~~\lim_{k\rightarrow 0}(1+k)^{1/k}=e~~,~~\lim_{x\rightarrow\infty}\left(1+\frac{n}{x}\right)^x=e^n$$
$$\lim_{x\downarrow 0}x^a\ln(x)=0~~,~~\lim_{x\rightarrow\infty}\frac{\ln^p(x)}{x^a}=0~~,~~\lim_{x\rightarrow 0}\frac{\ln(1+ax)}{x}=a~~,~~\lim_{x\rightarrow\infty}\frac{x^p}{a^x}=0~~\mbox{if}~|a|>1$$
$$\lim_{x\rightarrow 0}\frac{a^x-1}{x}=\ln(a)~~,~~\lim_{x\rightarrow 0}\frac{\arcsin(x)}{x}=1~~,~~\lim_{x\rightarrow\infty}\sqrt[x]{x}=1$$
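A few of these limits can be checked numerically by evaluating close to the limit point; the following is a minimal Python sketch (the small step sizes are arbitrary choices):

```python
import math

def limit_at_zero(f, h=1e-6):
    """Crude numerical approximation of lim_{x->0} f(x): evaluate near 0."""
    return f(h)

# Check a few of the limits above at a small argument.
sin_lim = limit_at_zero(lambda x: math.sin(x) / x)        # ~ 1
exp_lim = limit_at_zero(lambda x: (math.exp(x) - 1) / x)  # ~ 1
e_lim   = (1 + 1e-8) ** (1 / 1e-8)                        # ~ e
ln_lim  = limit_at_zero(lambda x: (2**x - 1) / x)         # ~ ln(2)
```

This only illustrates the statements; it is of course not a proof.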
1.5 Complex numbers and quaternions
1.5.1 Complex numbers
The complex number $z=a+bi$ with $a$ and $b\in\mathbb{R}$. $a$ is the real part, $b$ the imaginary part of $z$. $|z|=\sqrt{a^2+b^2}$.
By definition holds: $i^2=-1$. Every complex number can be written as $z=|z|\exp(i\varphi)$, with $\tan(\varphi)=b/a$. The
complex conjugate of $z$ is defined as $\bar{z}=z^*:=a-bi$. Further holds:
$$(a+bi)(c+di)=(ac-bd)+i(ad+bc)$$
$$(a+bi)+(c+di)=a+c+i(b+d)$$
$$\frac{a+bi}{c+di}=\frac{(ac+bd)+i(bc-ad)}{c^2+d^2}$$
Goniometric functions can be written as complex exponents:
$$\sin(x)=\frac{1}{2i}\left(e^{ix}-e^{-ix}\right)~~,~~\cos(x)=\frac{1}{2}\left(e^{ix}+e^{-ix}\right)$$
From this follows that $\cos(ix)=\cosh(x)$ and $\sin(ix)=i\sinh(x)$. Further follows from this that
$e^{\pm ix}=\cos(x)\pm i\sin(x)$, so $e^{iz}\neq 0~\forall z$. The theorem of De Moivre also follows from this:
$$(\cos(\varphi)+i\sin(\varphi))^n=\cos(n\varphi)+i\sin(n\varphi)$$
Products and quotients of complex numbers can be written as:
$$z_1\cdot z_2=|z_1|\cdot|z_2|\,(\cos(\varphi_1+\varphi_2)+i\sin(\varphi_1+\varphi_2))$$
$$\frac{z_1}{z_2}=\frac{|z_1|}{|z_2|}\,(\cos(\varphi_1-\varphi_2)+i\sin(\varphi_1-\varphi_2))$$
The following can be derived:
$$|z_1+z_2|\leq|z_1|+|z_2|~~,~~|z_1-z_2|\geq\big|\,|z_1|-|z_2|\,\big|$$
And from $z=r\exp(i\theta)$ follows: $\ln(z)=\ln(r)+i\theta$; the logarithm is determined only up to multiples of $2\pi i$: $\ln(z)=\ln(z)\pm 2n\pi i$.
1.5.2 Quaternions
Quaternions are defined as: $z=a+bi+cj+dk$, with $a,b,c,d\in\mathbb{R}$ and $i^2=j^2=k^2=-1$. The products of
$i,j,k$ with each other are given by $ij=-ji=k$, $jk=-kj=i$ and $ki=-ik=j$.
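The multiplication rules above fix the full (Hamilton) product of two quaternions. A minimal Python sketch (the tuple convention $(a,b,c,d)$ for $a+bi+cj+dk$ and the helper name `qmul` are our own, not part of the formulary):

```python
def qmul(p, q):
    """Hamilton product of quaternions, each given as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real part
            a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i component
            a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j component
            a1*d2 + b1*c2 - c1*b2 + d1*a2)   # k component

I, J, K = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
```

Note that `qmul(J, I)` gives $-k$, illustrating the non-commutativity $ij=-ji$.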
1.6 Geometry
1.6.1 Triangles
The sine rule is:
$$\frac{a}{\sin(\alpha)}=\frac{b}{\sin(\beta)}=\frac{c}{\sin(\gamma)}$$
Here, $\alpha$ is the angle opposite to $a$, $\beta$ is opposite to $b$ and $\gamma$ opposite to $c$. The cosine rule is:
$a^2=b^2+c^2-2bc\cos(\alpha)$. For each triangle holds: $\alpha+\beta+\gamma=180^\circ$.

Further holds:
$$\frac{\tan(\frac{1}{2}(\alpha+\beta))}{\tan(\frac{1}{2}(\alpha-\beta))}=\frac{a+b}{a-b}$$
The surface of a triangle is given by $\frac{1}{2}ab\sin(\gamma)=\frac{1}{2}ah_a=\sqrt{s(s-a)(s-b)(s-c)}$ with $h_a$ the perpendicular on
$a$ and $s=\frac{1}{2}(a+b+c)$.
1.6.2 Curves
Cycloid: if a circle with radius $a$ rolls along a straight line, the trajectory of a point on this circle has the following
parameter equation:
$$x=a(t+\sin(t))~~,~~y=a(1+\cos(t))$$
Epicycloid: if a small circle with radius $a$ rolls along a big circle with radius $R$, the trajectory of a point on the small
circle has the following parameter equation:
$$x=a\sin\left(\frac{R+a}{a}t\right)+(R+a)\sin(t)~~,~~y=a\cos\left(\frac{R+a}{a}t\right)+(R+a)\cos(t)$$
Hypocycloid: if a small circle with radius $a$ rolls inside a big circle with radius $R$, the trajectory of a point on the
small circle has the following parameter equation:
$$x=a\sin\left(\frac{R-a}{a}t\right)+(R-a)\sin(t)~~,~~y=-a\cos\left(\frac{R-a}{a}t\right)+(R-a)\cos(t)$$
An epicycloid with $a=R$ is called a cardioid. It has the following parameter equation in polar coordinates:
$r=2a[1-\cos(\varphi)]$.
1.7 Vectors
The inner product is defined by:
$$\vec{a}\cdot\vec{b}=\sum_i a_ib_i=|\vec{a}\,|\cdot|\vec{b}\,|\cos(\varphi)$$
where $\varphi$ is the angle between $\vec{a}$ and $\vec{b}$. The external product is in $\mathbb{R}^3$ defined by:
$$\vec{a}\times\vec{b}=\begin{pmatrix}a_yb_z-a_zb_y\\ a_zb_x-a_xb_z\\ a_xb_y-a_yb_x\end{pmatrix}=\begin{vmatrix}\vec{e}_x&\vec{e}_y&\vec{e}_z\\ a_x&a_y&a_z\\ b_x&b_y&b_z\end{vmatrix}$$
Further holds: $|\vec{a}\times\vec{b}\,|=|\vec{a}\,|\cdot|\vec{b}\,|\sin(\varphi)$, and $\vec{a}\times(\vec{b}\times\vec{c}\,)=(\vec{a}\cdot\vec{c}\,)\vec{b}-(\vec{a}\cdot\vec{b}\,)\vec{c}$.
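The triple product rule $\vec{a}\times(\vec{b}\times\vec{c})=(\vec{a}\cdot\vec{c})\vec{b}-(\vec{a}\cdot\vec{b})\vec{c}$ is easy to verify on concrete vectors; a minimal Python sketch (helper names are illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

a, b, c = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 10.0)

# a x (b x c) should equal (a.c) b - (a.b) c
lhs = cross(a, cross(b, c))
rhs = tuple(dot(a, c) * bi - dot(a, b) * ci for bi, ci in zip(b, c))
```

The cross product is also perpendicular to both factors, so $(\vec{a}\times\vec{b})\cdot\vec{a}=0$.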
1.8 Series
1.8.1 Expansion
The binomium of Newton is:
$$(a+b)^n=\sum_{k=0}^{n}\binom{n}{k}a^{n-k}b^k~~\mbox{where}~~\binom{n}{k}:=\frac{n!}{k!(n-k)!}$$
By subtracting the series $\sum\limits_{k=0}^{n}r^k$ and $r\sum\limits_{k=0}^{n}r^k$ one finds:
$$\sum_{k=0}^{n}r^k=\frac{1-r^{n+1}}{1-r}$$
and for $|r|<1$ this gives the geometric series: $\sum\limits_{k=0}^{\infty}r^k=\dfrac{1}{1-r}$.

The arithmetic series is given by: $\sum\limits_{n=0}^{N}(a+nV)=a(N+1)+\frac{1}{2}N(N+1)V$.
The expansion of a function around the point $a$ is given by the Taylor series:
$$f(x)=f(a)+(x-a)f'(a)+\frac{(x-a)^2}{2}f''(a)+\cdots+\frac{(x-a)^n}{n!}f^{(n)}(a)+R$$
where the remainder is given by:
$$R_n(h)=\frac{(1-\theta)^nh^{n+1}}{n!}f^{(n+1)}(\theta h)$$
and is subject to:
$$\frac{mh^{n+1}}{(n+1)!}\leq R_n(h)\leq\frac{Mh^{n+1}}{(n+1)!}$$
From this one can deduce that
$$(1-x)^\alpha=\sum_{n=0}^{\infty}\binom{\alpha}{n}(-x)^n$$
One can derive that:
$$\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}~~,~~\sum_{n=1}^{\infty}\frac{1}{n^4}=\frac{\pi^4}{90}~~,~~\sum_{n=1}^{\infty}\frac{1}{n^6}=\frac{\pi^6}{945}$$
$$\sum_{k=1}^{n}k^2=\frac{1}{6}n(n+1)(2n+1)~~,~~\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^2}=\frac{\pi^2}{12}~~,~~\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}=\ln(2)$$
$$\sum_{n=1}^{\infty}\frac{1}{4n^2-1}=\frac{1}{2}~~,~~\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}=\frac{\pi^2}{8}~~,~~\sum_{n=1}^{\infty}\frac{1}{(2n-1)^4}=\frac{\pi^4}{96}~~,~~\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{(2n-1)^3}=\frac{\pi^3}{32}$$
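Partial sums give a quick numerical sanity check of several of these values; a minimal Python sketch (the term counts and tolerances are ad-hoc choices reflecting how slowly each series converges):

```python
import math

def partial_sum(term, n_terms):
    """Sum term(n) for n = 1 .. n_terms."""
    return sum(term(n) for n in range(1, n_terms + 1))

basel = partial_sum(lambda n: 1 / n**2, 200000)            # ~ pi^2/6
quart = partial_sum(lambda n: 1 / n**4, 10000)             # ~ pi^4/90
alt   = partial_sum(lambda n: (-1)**(n + 1) / n, 200001)   # ~ ln 2
odd   = partial_sum(lambda n: 1 / (4 * n**2 - 1), 100000)  # ~ 1/2
```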
1.8.2 Convergence and divergence of series
If $\sum_n|u_n|$ converges, $\sum_n u_n$ also converges.

If $\lim\limits_{n\rightarrow\infty}u_n\neq 0$ then $\sum_n u_n$ is divergent.
An alternating series of which the absolute values of the terms decrease monotonically to 0 is convergent (Leibniz).
If $\int\limits_p^{\infty}f(x)dx<\infty$, then $\sum_n f_n$ is convergent.

If $u_n>0~\forall n$ then $\sum_n u_n$ is convergent if $\sum_n\ln(u_n+1)$ is convergent.

If $u_n=c_nx^n$ the radius of convergence $\rho$ of $\sum_n u_n$ is given by:
$$\frac{1}{\rho}=\lim_{n\rightarrow\infty}\sqrt[n]{|c_n|}=\lim_{n\rightarrow\infty}\left|\frac{c_{n+1}}{c_n}\right|$$
The series $\sum\limits_{n=1}^{\infty}\dfrac{1}{n^p}$ is convergent if $p>1$ and divergent if $p\leq 1$.

If $\lim\limits_{n\rightarrow\infty}\dfrac{u_n}{v_n}=p$, then the following is true: if $p>0$ then $\sum_n u_n$ and $\sum_n v_n$ are both divergent or both convergent; if $p=0$ holds: if $\sum_n v_n$ is convergent, then $\sum_n u_n$ is also convergent.

If $L$ is defined by: $L=\lim\limits_{n\rightarrow\infty}\sqrt[n]{|u_n|}$, or by: $L=\lim\limits_{n\rightarrow\infty}\left|\dfrac{u_{n+1}}{u_n}\right|$, then $\sum_n u_n$ is divergent if $L>1$ and convergent if $L<1$.
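The ratio formula for the radius of convergence can be illustrated numerically by evaluating the quotient $|c_n/c_{n+1}|$ at a large but finite $n$ (only an approximation of the limit; the helper name is our own):

```python
def radius_from_ratio(c, n=60):
    """Estimate rho from 1/rho = lim |c_{n+1}/c_n|, evaluated at a finite n."""
    return abs(c(n)) / abs(c(n + 1))

geo  = radius_from_ratio(lambda n: 2.0**n)          # c_n = 2^n   -> rho = 1/2 exactly
poly = radius_from_ratio(lambda n: float(n*n + 1))  # c_n ~ n^2   -> rho -> 1
```

For $c_n=1/n!$ the same quotient grows without bound, consistent with $\rho=\infty$ for the exponential series.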
1.8.3 Convergence and divergence of functions
$f(x)$ is continuous in $x=a$ only if the upper and lower limit are equal: $\lim\limits_{x\uparrow a}f(x)=\lim\limits_{x\downarrow a}f(x)$. This is written as: $f(a^-)=f(a^+)$.

If $f(x)$ is continuous in $a$ and $\lim\limits_{x\uparrow a}f'(x)=\lim\limits_{x\downarrow a}f'(x)$, then $f(x)$ is differentiable in $x=a$.

We define: $\|f\|_W:=\sup(|f(x)|\,:\,x\in W)$, and $\lim\limits_{n\rightarrow\infty}f_n(x)=f(x)$. Then holds: $\{f_n\}$ is uniformly convergent if
$\lim\limits_{n\rightarrow\infty}\|f_n-f\|=0$, or: $\forall(\varepsilon>0)\exists(N)\forall(n\geq N)\,\|f_n-f\|<\varepsilon$.

Weierstrass' test: if $\sum\|u_n\|_W$ is convergent, then $\sum u_n$ is uniformly convergent.

We define $S(x)=\sum\limits_{n=N}^{\infty}u_n(x)$ and $F(y)=\int\limits_a^b f(x,y)dx:=F$. Then it can be proved that:
Theorem   For        Demands on W                                                Then holds on W
C         sequences  $f_n$ continuous, $\{f_n\}$ uniformly convergent            $f$ is continuous
          series     $S(x)$ uniformly convergent, $u_n$ continuous               $S$ is continuous
          integral   $f$ is continuous                                           $F$ is continuous
I         sequences  $f_n$ integrable, $\{f_n\}$ uniformly convergent            $f$ integrable, $\int f(x)dx=\lim\limits_{n\rightarrow\infty}\int f_n\,dx$
          series     $S(x)$ uniformly convergent, $u_n$ integrable               $S$ integrable, $\int S\,dx=\sum\int u_n\,dx$
          integral   $f$ is continuous                                           $\int F\,dy=\iint f(x,y)\,dxdy$
D         sequences  $\{f_n\}\in C^1$; $\{f_n\}$ unif.conv. $\rightarrow\varphi$  $f'=\varphi(x)$
          series     $u_n\in C^1$; $\sum u_n$ conv.; $\sum u_n'$ unif.conv.      $S'(x)=\sum u_n'(x)$
          integral   $\partial f/\partial y$ continuous                          $F_y=\int f_y(x,y)dx$
1.9 Products and quotients
For $a,b,c,d\in\mathbb{R}$ holds:

The distributive property: $(a+b)(c+d)=ac+ad+bc+bd$ and $a(b+c)=ab+ac$

The associative property: $a(bc)=(ab)c$

The commutative property: $a+b=b+a$, $ab=ba$.

Further holds:
$$\frac{a^{2n}-b^{2n}}{a\pm b}=a^{2n-1}\mp a^{2n-2}b+a^{2n-3}b^2\mp\cdots\mp b^{2n-1}~~,~~\frac{a^{2n+1}-b^{2n+1}}{a-b}=\sum_{k=0}^{2n}a^{2n-k}b^{k}$$
$$(a\pm b)(a^2\mp ab+b^2)=a^3\pm b^3~~,~~(a+b)(a-b)=a^2-b^2~~,~~\frac{a^3\pm b^3}{a\pm b}=a^2\mp ab+b^2$$
1.10 Logarithms
Definition: $^a\log(x)=b~\Leftrightarrow~a^b=x$. For logarithms with base $e$ one writes $\ln(x)$.

Rules: $\log(x^n)=n\log(x)$, $\log(a)+\log(b)=\log(ab)$, $\log(a)-\log(b)=\log(a/b)$.
1.11 Polynomials
Equations of the type
$$\sum_{k=0}^{n}a_kx^k=0$$
have $n$ roots which may be equal to each other. Each polynomial $p(z)$ of order $n\geq 1$ has at least one root in $\mathbb{C}$. If
all $a_k\in\mathbb{R}$ holds: when $x=p$ with $p\in\mathbb{C}$ is a root, then $p^*$ is also a root. Polynomials up to and including order 4
have a general analytical solution; for polynomials of order $\geq 5$ no general analytical solution exists.

For $a,b,c\in\mathbb{R}$ and $a\neq 0$ holds: the 2nd order equation $ax^2+bx+c=0$ has the general solution:
$$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$
For $a,b,c,d\in\mathbb{R}$ and $a\neq 0$ holds: the 3rd order equation $ax^3+bx^2+cx+d=0$ has the general analytical
solution:
$$x_1=K-\frac{3ac-b^2}{9a^2K}-\frac{b}{3a}$$
$$x_2=x_3^*=-\frac{K}{2}+\frac{3ac-b^2}{18a^2K}-\frac{b}{3a}+\frac{i\sqrt{3}}{2}\left(K+\frac{3ac-b^2}{9a^2K}\right)$$
with
$$K=\left(\frac{9abc-27da^2-2b^3}{54a^3}+\frac{\sqrt{3}\sqrt{4ac^3-c^2b^2-18abcd+27a^2d^2+4db^3}}{18a^2}\right)^{1/3}$$
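The general solution of the 2nd order equation translates directly into code; a minimal Python sketch using `cmath` so that complex roots are handled as well (the helper name is illustrative):

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a x^2 + b x + c = 0 via the general solution above (a != 0)."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

r1, r2 = quadratic_roots(1, -3, 2)  # x^2 - 3x + 2: real roots 2 and 1
c1, c2 = quadratic_roots(1, 0, 1)   # x^2 + 1: complex conjugate roots +-i
```

The cubic formula can be coded the same way, but for numerical work a polynomial root finder is usually preferable to the closed form.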
1.12 Primes
A prime is a number $\in\mathbb{N}$ that can only be divided by itself and 1. There is an infinite number of primes. Proof:
suppose that the collection of primes $P$ would be finite, then construct the number $q=1+\prod\limits_{p\in P}p$. Then holds
$q\equiv 1\,(\mbox{mod}~p)$ for each $p\in P$, so $q$ cannot be written as a product of primes from $P$. This is a contradiction.

If $\pi(x)$ is the number of primes $\leq x$, then holds:
$$\lim_{x\rightarrow\infty}\frac{\pi(x)}{x/\ln(x)}=1~~\mbox{and}~~\lim_{x\rightarrow\infty}\frac{\pi(x)}{\int\limits_2^x\frac{dt}{\ln(t)}}=1$$
For each $N\geq 2$ there is a prime between $N$ and $2N$.

The numbers $F_k:=2^{2^k}+1$ with $k\in\mathbb{N}$ are called Fermat numbers. The first few Fermat numbers are prime.

The numbers $M_k:=2^k-1$ are called Mersenne numbers. They occur when one searches for perfect numbers,
which are numbers $n\in\mathbb{N}$ that are the sum of their proper divisors, for example $6=1+2+3$. There
are 23 Mersenne numbers for $k<12000$ which are prime: for $k\in\{2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521,
607, 1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213\}$.

To check if a given number $n$ is prime one can use a sieve method. The first known sieve method was developed by
Eratosthenes. A faster method for large numbers is the Fermat test with 4 bases, which does not prove that a number
is prime but gives a large probability that it is:

1. Take the first 4 primes: $b\in\{2, 3, 5, 7\}$,

2. Compute $w(b)=b^{n-1}\mbox{ mod }n$ for each $b$,

3. If $w=1$ for each $b$, then $n$ is probably prime. For any other value of $w$, $n$ is certainly not prime.
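The three steps above can be sketched in a few lines of Python; `pow(b, n - 1, n)` computes $b^{n-1}\bmod n$ efficiently (the restriction $n>7$ avoids spurious failures when a base divides $n$, and rare pseudoprimes coprime to all four bases can still fool the test):

```python
def fermat_test(n, bases=(2, 3, 5, 7)):
    """Fermat test with the first 4 primes as bases (intended for n > 7).

    True means n is *probably* prime; False means n is certainly composite."""
    return all(pow(b, n - 1, n) == 1 for b in bases)
```

For example, the Carmichael number 561 = 3 · 11 · 17 is correctly rejected here because the base 3 divides it.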
Chapter 2
Probability and statistics
2.1 Combinations
The number of possible combinations of $k$ elements from $n$ elements is given by
$$\binom{n}{k}=\frac{n!}{k!(n-k)!}$$
The number of permutations of $p$ from $n$ is given by
$$\frac{n!}{(n-p)!}=p!\binom{n}{p}$$
The number of different ways to classify $n_i$ elements in $i$ groups, when the total number of elements is $N$, is
$$\frac{N!}{\prod\limits_i n_i!}$$
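These three counts are available almost directly in Python's standard library; a minimal sketch (the `multinomial` helper is our own name for the last formula):

```python
from math import comb, perm, factorial, prod

n_choose_k = comb(5, 2)  # 5!/(2! 3!) = 10 combinations
n_perm_p   = perm(5, 2)  # 5!/3!      = 20 permutations

def multinomial(counts):
    """N!/(n_1! n_2! ... n_i!) for classifying N = sum(counts) elements into groups."""
    return factorial(sum(counts)) // prod(factorial(k) for k in counts)
```

The identity $n!/(n-p)! = p!\binom{n}{p}$ can be checked as `perm(5, 2) == factorial(2) * comb(5, 2)`.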
2.2 Probability theory
The probability $P(A)$ that an event $A$ occurs is defined by:
$$P(A)=\frac{n(A)}{n(U)}$$
where $n(A)$ is the number of events for which $A$ occurs and $n(U)$ the total number of events.

The probability $P(\overline{A})$ that $A$ does not occur is: $P(\overline{A})=1-P(A)$. The probability $P(A\cup B)$ that $A$ or
$B$ occurs is given by: $P(A\cup B)=P(A)+P(B)-P(A\cap B)$. If $A$ and $B$ are independent, then holds:
$P(A\cap B)=P(A)\cdot P(B)$.

The probability $P(A|B)$ that $A$ occurs, given the fact that $B$ occurs, is:
$$P(A|B)=\frac{P(A\cap B)}{P(B)}$$
2.3 Statistics
2.3.1 General
The average or mean value $\langle x\rangle$ of a collection of values is: $\langle x\rangle=\sum_i x_i/n$. The standard deviation $\sigma_x$ in the
distribution of $x$ is given by:
$$\sigma_x=\sqrt{\frac{\sum\limits_{i=1}^{n}(x_i-\langle x\rangle)^2}{n}}$$
When samples are being used the sample variance $s$ is given by $s^2=\dfrac{n}{n-1}\,\sigma^2$.
The covariance $\sigma_{xy}$ of $x$ and $y$ is given by:
$$\sigma_{xy}=\frac{\sum\limits_{i=1}^{n}(x_i-\langle x\rangle)(y_i-\langle y\rangle)}{n-1}$$
The correlation coefficient $r_{xy}$ of $x$ and $y$ then becomes: $r_{xy}=\sigma_{xy}/\sigma_x\sigma_y$.

The standard deviation in a variable $f(x,y)$ resulting from errors in $x$ and $y$ is:
$$\sigma_{f(x,y)}^2=\left(\frac{\partial f}{\partial x}\sigma_x\right)^2+\left(\frac{\partial f}{\partial y}\sigma_y\right)^2+2\frac{\partial f}{\partial x}\frac{\partial f}{\partial y}\sigma_{xy}$$
2.3.2 Distributions
1. The Binomial distribution is the distribution describing a sampling with replacement. The probability for
success is $p$. The probability $P$ for $k$ successes in $n$ trials is then given by:
$$P(x=k)=\binom{n}{k}p^k(1-p)^{n-k}$$
The standard deviation is given by $\sigma_x=\sqrt{np(1-p)}$ and the expectation value is $\varepsilon=np$.
2. The Hypergeometric distribution is the distribution describing a sampling without replacement in which the
order is irrelevant. The probability for $k$ successes in a trial with $A$ possible successes and $B$ possible failures
is then given by:
$$P(x=k)=\frac{\binom{A}{k}\binom{B}{n-k}}{\binom{A+B}{n}}$$
The expectation value is given by $\varepsilon=nA/(A+B)$.
3. The Poisson distribution is a limiting case of the binomial distribution when $p\rightarrow 0$ and $n\rightarrow\infty$ while
$np=\lambda$ remains constant:
$$P(x)=\frac{\lambda^x e^{-\lambda}}{x!}$$
This distribution is normalized to $\sum\limits_{x=0}^{\infty}P(x)=1$.
4. The Normal distribution is a limiting case of the binomial distribution for continuous variables:
$$P(x)=\frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{x-\langle x\rangle}{\sigma}\right)^2\right)$$
5. The Uniform distribution occurs when a random number $x$ is taken from the set $a\leq x\leq b$ and is given by:
$$P(x)=\frac{1}{b-a}~~\mbox{if}~a\leq x\leq b~~,~~P(x)=0~~\mbox{in all other cases}$$
$$\langle x\rangle=\frac{1}{2}(a+b)~~\mbox{and}~~\sigma^2=\frac{(b-a)^2}{12}$$
6. The Gamma distribution is given by:
$$P(x)=\frac{x^{\alpha-1}e^{-x/\beta}}{\beta^\alpha\Gamma(\alpha)}~~\mbox{if}~0\leq x<\infty$$
with $\alpha>0$ and $\beta>0$. The distribution has the following properties: $\langle x\rangle=\alpha\beta$, $\sigma^2=\alpha\beta^2$.
7. The Beta distribution is given by:
$$P(x)=\frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}~~\mbox{if}~0\leq x\leq 1~~,~~P(x)=0~~\mbox{everywhere else}$$
and has the following properties: $\langle x\rangle=\dfrac{\alpha}{\alpha+\beta}$, $\sigma^2=\dfrac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$.

For $P(\chi^2)$ holds: $\alpha=V/2$ and $\beta=2$.
8. The Weibull distribution is given by:
$$P(x)=\frac{\alpha}{\beta}x^{\alpha-1}e^{-x^\alpha/\beta}~~\mbox{if}~0\leq x<\infty~\mbox{and}~\alpha,\beta>0~~,~~P(x)=0~~\mbox{in all other cases}$$
The average is $\langle x\rangle=\beta^{1/\alpha}\,\Gamma((\alpha+1)/\alpha)$.
9. For a two-dimensional distribution holds:
$$P_1(x_1)=\int P(x_1,x_2)dx_2~~,~~P_2(x_2)=\int P(x_1,x_2)dx_1$$
with
$$\varepsilon(g(x_1,x_2))=\iint g(x_1,x_2)P(x_1,x_2)dx_1dx_2=\sum_{x_1}\sum_{x_2}g\cdot P$$
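The Poisson limit of the binomial distribution (item 3 above) can be observed numerically by comparing the two probability mass functions for large $n$, small $p$ and fixed $\lambda=np$; a minimal Python sketch (the parameter values are arbitrary):

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return lam**k * exp(-lam) / factorial(k)

n, lam = 100000, 3.0
p = lam / n

# Pointwise difference between Bin(n, lambda/n) and Poisson(lambda) for small k.
diffs = [abs(binom_pmf(k, n, p) - poisson_pmf(k, lam)) for k in range(10)]
```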
2.4 Regression analyses

When there exists a relation between the quantities x and y of the form y = ax + b and there is a measured set x_i with related y_i, the following relation holds for a and b with x = (x_1, x_2, ..., x_n) and e = (1, 1, ..., 1):

y − ax − be ∈ ⟨x, e⟩^⊥

From this follows that the inner products are 0:

(y, x) − a(x, x) − b(e, x) = 0
(y, e) − a(x, e) − b(e, e) = 0

with (x, x) = Σ_i x_i², (x, y) = Σ_i x_i y_i, (x, e) = Σ_i x_i and (e, e) = n; a and b follow from this.

A similar method works for higher order polynomial fits: for a second order fit holds:

y − ax² − bx − ce ∈ ⟨x², x, e⟩^⊥

with x² = (x_1², ..., x_n²).

The correlation coefficient r is a measure for the quality of a fit. In case of linear regression it is given by:

r = (n Σxy − Σx Σy) / √[ (n Σx² − (Σx)²)(n Σy² − (Σy)²) ]
Chapter 3
Calculus
3.1 Integrals
3.1.1 Arithmetic rules
The primitive function F(x) of f(x) obeys the rule F′(x) = f(x). With F(x) the primitive of f(x) holds for the definite integral

∫_a^b f(x)dx = F(b) − F(a)

If u = f(x) holds:

∫_a^b g(f(x))df(x) = ∫_{f(a)}^{f(b)} g(u)du

Partial integration: with F and G the primitives of f and g holds:

∫ f(x)·g(x)dx = f(x)G(x) − ∫ G(x) (df(x)/dx) dx

A derivative can be brought under the integral sign (see section 1.8.3 for the required conditions):

(d/dy) ∫_{x=g(y)}^{x=h(y)} f(x, y)dx = ∫_{x=g(y)}^{x=h(y)} (∂f(x, y)/∂y) dx − f(g(y), y) (dg(y)/dy) + f(h(y), y) (dh(y)/dy)
3.1.2 Arc lengths, surfaces and volumes

The arc length ℓ of a curve y(x) is given by:

ℓ = ∫ √( 1 + (dy(x)/dx)² ) dx

The line integral of a scalar F along a parameter curve x(t) is:

ℓ = ∫ F ds = ∫ F(x(t)) |ẋ(t)| dt

with the unit tangent vector t:

t = dx/ds = ẋ(t)/|ẋ(t)| ,  |t| = 1

∫ (v, t) ds = ∫ (v, ẋ(t)) dt = ∫ (v_1 dx + v_2 dy + v_3 dz)

The surface A of a solid of revolution is:

A = 2π ∫ y √( 1 + (dy(x)/dx)² ) dx

The volume V of a solid of revolution is:

V = π ∫ f²(x) dx
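A minimal numerical sketch of the arc-length and volume-of-revolution formulas, using the catenary y = cosh(x) on [0, 1] (the choice of curve and the midpoint-rule helper are assumptions for illustration):

```python
import math

def midpoint_integral(f, a, b, n=20000):
    """Composite midpoint rule for a smooth integrand."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Arc length of y = cosh(x) on [0, 1]: sqrt(1 + y'^2) = sqrt(1 + sinh^2) = cosh
arc = midpoint_integral(lambda x: math.sqrt(1.0 + math.sinh(x) ** 2), 0.0, 1.0)

# Volume of the solid of revolution of the same curve: V = pi * int f(x)^2 dx
vol = math.pi * midpoint_integral(lambda x: math.cosh(x) ** 2, 0.0, 1.0)

exact_arc = math.sinh(1.0)                          # since int cosh = sinh
exact_vol = math.pi * (0.5 + math.sinh(2.0) / 4.0)  # int cosh^2 = x/2 + sinh(2x)/4
```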
3.1.3 Separation of quotients

Every rational function P(x)/Q(x), where P and Q are polynomials, can be written as a linear combination of functions of the type (x − a)^k with k ∈ ℤ, and of functions of the type

(px + q) / ((x − a)² + b²)^n

with b > 0 and n ∈ ℕ. So:

p(x)/(x − a)^n = Σ_{k=1}^{n} A_k/(x − a)^k ,  p(x)/((x − b)² + c²)^n = Σ_{k=1}^{n} (A_k x + B_k)/((x − b)² + c²)^k

Recurrent relation: for n ≠ 0 holds:

∫ dx/(x² + 1)^{n+1} = (1/2n) · x/(x² + 1)^n + ((2n − 1)/2n) ∫ dx/(x² + 1)^n
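The recurrence can be checked numerically. For the definite integral I_n = ∫_0^1 dx/(x² + 1)^n the boundary term x/(2n(x² + 1)^n) is evaluated at the endpoints (the quadrature helper and the interval [0, 1] are illustrative choices):

```python
import math

def midpoint_integral(f, a, b, n=50000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def I(n):
    """I_n = integral_0^1 dx / (x^2 + 1)^n, evaluated by quadrature."""
    return midpoint_integral(lambda x: 1.0 / (x * x + 1.0) ** n, 0.0, 1.0)

n = 1
# definite-integral form of the recurrence; boundary term at x = 1 minus x = 0
rhs = (1.0 / (2 * n)) * (1.0 / 2.0 ** n) + (2 * n - 1) / (2 * n) * I(n)
lhs = I(n + 1)
```

Here I_1 = arctan(1) = π/4, and the recurrence reproduces I_2 = ¼ + π/8.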
3.1.4 Special functions

Elliptic functions

Elliptic functions can be written as a power series as follows:

√(1 − k² sin²(x)) = 1 − Σ_{n=1}^{∞} [ (2n − 1)!! / ((2n)!!(2n − 1)) ] k^{2n} sin^{2n}(x)

1/√(1 − k² sin²(x)) = 1 + Σ_{n=1}^{∞} [ (2n − 1)!!/(2n)!! ] k^{2n} sin^{2n}(x)

with n!! = n(n − 2)!!.
The Gamma function

The gamma function Γ(y) is defined by:

Γ(y) = ∫_0^∞ e^{−x} x^{y−1} dx

One can derive that Γ(y + 1) = yΓ(y) = y!. This is a way to define factorials for non-integers. Further one can derive that

Γ(n + ½) = (√π/2^n)(2n − 1)!!  and  Γ^{(n)}(y) = ∫_0^∞ e^{−x} x^{y−1} ln^n(x) dx
The Beta function

The beta function β(p, q) is defined by:

β(p, q) = ∫_0^1 x^{p−1} (1 − x)^{q−1} dx

with p and q > 0. The beta and gamma functions are related by the following equation:

β(p, q) = Γ(p)Γ(q)/Γ(p + q)
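The beta–gamma relation can be verified numerically with the standard library's `math.gamma` (the parameter values p = 3, q = 4 are chosen only for illustration; then β = 2!·3!/6! = 1/60):

```python
import math

def beta_quad(p, q, n=20000):
    """beta(p, q) = int_0^1 x^(p-1) (1-x)^(q-1) dx by the midpoint rule."""
    h = 1.0 / n
    return h * sum(((i + 0.5) * h) ** (p - 1) * (1 - (i + 0.5) * h) ** (q - 1)
                   for i in range(n))

p, q = 3.0, 4.0
lhs = beta_quad(p, q)
rhs = math.gamma(p) * math.gamma(q) / math.gamma(p + q)
```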
The Delta function

The delta function δ(x) is an infinitely thin peak function with surface 1. It can be defined by:

δ(x) = lim_{ε→0} P(ε, x)  with  P(ε, x) = 0 for |x| > ε and P(ε, x) = 1/(2ε) when |x| < ε

Some properties are:

∫_{−∞}^{∞} δ(x)dx = 1 ,  ∫_{−∞}^{∞} F(x)δ(x)dx = F(0)
3.1.5 Goniometric integrals

When solving goniometric integrals it can be useful to change variables. The following holds if one defines tan(½x) := t:

dx = 2dt/(1 + t²) ,  cos(x) = (1 − t²)/(1 + t²) ,  sin(x) = 2t/(1 + t²)

Each integral of the type ∫ R(x, √(ax² + bx + c)) dx can be converted into one of the types that were treated in section 3.1.3. After this conversion one can substitute in the integrals of the type:

∫ R(x, √(x² + 1)) dx :  x = tan(ϕ) , dx = dϕ/cos²(ϕ)  or  √(x² + 1) = t + x

∫ R(x, √(1 − x²)) dx :  x = sin(ϕ) , dx = cos(ϕ)dϕ  or  √(1 − x²) = 1 − tx

∫ R(x, √(x² − 1)) dx :  x = 1/cos(ϕ) , dx = (sin(ϕ)/cos²(ϕ))dϕ  or  √(x² − 1) = x − t

These definite integrals are easily solved:

∫_0^{π/2} cos^n(x) sin^m(x) dx = [ (n − 1)!!(m − 1)!!/(m + n)!! ] · π/2 when m and n are both even, and [ (n − 1)!!(m − 1)!!/(m + n)!! ] · 1 in all other cases

Some important integrals are:

∫_0^∞ x dx/(e^{ax} + 1) = π²/(12a²) ,  ∫_{−∞}^{∞} x² dx/(e^x + 1)² = π²/3 ,  ∫_0^∞ x³ dx/(e^x + 1) = π⁴/15
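A sketch that checks the double-factorial formula above against direct quadrature (helper names and the chosen exponents are illustrative):

```python
import math

def double_fact(n):
    """n!! with the convention (-1)!! = 0!! = 1."""
    r = 1
    while n > 1:
        r *= n
        n -= 2
    return r

def wallis(n, m):
    """Closed form of int_0^(pi/2) cos^n(x) sin^m(x) dx."""
    base = double_fact(n - 1) * double_fact(m - 1) / double_fact(m + n)
    return base * (math.pi / 2 if n % 2 == 0 and m % 2 == 0 else 1.0)

def quad(n, m, steps=20000):
    """Same integral by the midpoint rule."""
    h = (math.pi / 2) / steps
    return h * sum(math.cos(x) ** n * math.sin(x) ** m
                   for x in ((i + 0.5) * h for i in range(steps)))

even_case = wallis(4, 2)   # both even -> (3!!*1!!/6!!)*pi/2 = pi/32
odd_case  = wallis(3, 2)   # -> 2!!*1!!/5!! = 2/15
```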
3.2 Functions with more variables
3.2.1 Derivatives
The partial derivative with respect to x of a function f(x, y) is defined by:

(∂f/∂x)_{x_0} = lim_{h→0} [ f(x_0 + h, y_0) − f(x_0, y_0) ] / h

The directional derivative in the direction of α is defined by:

∂f/∂α = lim_{r↓0} [ f(x_0 + r cos(α), y_0 + r sin(α)) − f(x_0, y_0) ] / r = (∇f, (cos α, sin α)) = ∇f · v/|v|

When one changes to coordinates f(x(u, v), y(u, v)) holds:

∂f/∂u = (∂f/∂x)(∂x/∂u) + (∂f/∂y)(∂y/∂u)

If x(t) and y(t) depend on only one parameter t holds:

∂f/∂t = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt)

The total differential df of a function of 3 variables is given by:

df = (∂f/∂x)dx + (∂f/∂y)dy + (∂f/∂z)dz

So

df/dx = ∂f/∂x + (∂f/∂y)(dy/dx) + (∂f/∂z)(dz/dx)

The tangent in point x_0 at the curve f(x, y) = 0 is given by the equation f_x(x_0)(x − x_0) + f_y(x_0)(y − y_0) = 0.

The tangent plane in x_0 is given by: f_x(x_0)(x − x_0) + f_y(x_0)(y − y_0) = z − f(x_0).
3.2.2 Taylor series
A function of two variables can be expanded as follows in a Taylor series:
f(x_0 + h, y_0 + k) = Σ_{p=0}^{n} (1/p!) ( h ∂/∂x + k ∂/∂y )^p f(x_0, y_0) + R(n)

with R(n) the residual error and

( h ∂/∂x + k ∂/∂y )^p f(a, b) = Σ_{m=0}^{p} \binom{p}{m} h^m k^{p−m} ∂^p f(a, b) / (∂x^m ∂y^{p−m})
3.2.3 Extrema
When f is continuous on a compact region V it attains a global maximum and a global minimum on V. A region is called compact if it is bounded and closed.

Possible extrema of f(x, y) on a region V ⊂ ℝ² are:

1. Points on V where f(x, y) is not differentiable,

2. Points where ∇f = 0,

3. If the boundary of V is given by ϕ(x, y) = 0, then all points where ∇f(x, y) + λ∇ϕ(x, y) = 0 are possible extrema. This is the multiplicator method of Lagrange; λ is called a multiplicator.

The same as in ℝ² holds in ℝ³ when the region to be searched is constrained by a compact V, with V defined by ϕ_1(x, y, z) = 0 and ϕ_2(x, y, z) = 0, for extrema of f(x, y, z) for points (1) and (2). Point (3) is rewritten as follows: possible extrema are points where ∇f(x, y, z) + λ_1∇ϕ_1(x, y, z) + λ_2∇ϕ_2(x, y, z) = 0.
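A small worked example of Lagrange's multiplicator method, extremizing f(x, y) = x + y on the unit circle ϕ(x, y) = x² + y² − 1 = 0 (the specific f and ϕ are assumptions for illustration):

```python
import math

# grad f + lam * grad phi = 0 gives 1 + 2*lam*x = 0 and 1 + 2*lam*y = 0,
# so x = y; the constraint x^2 + y^2 = 1 then yields x = y = ±1/sqrt(2).
x = y = 1.0 / math.sqrt(2.0)
analytic = x + y   # maximum value sqrt(2)

# cross-check against a fine scan of the constraint circle
best = max(math.cos(t) + math.sin(t)
           for t in (2 * math.pi * i / 100000 for i in range(100000)))
```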
3.2.4 The ∇ operator

In cartesian coordinates (x, y, z) holds:

∇ = (∂/∂x) e_x + (∂/∂y) e_y + (∂/∂z) e_z

grad f = (∂f/∂x) e_x + (∂f/∂y) e_y + (∂f/∂z) e_z

div a = ∂a_x/∂x + ∂a_y/∂y + ∂a_z/∂z

curl a = (∂a_z/∂y − ∂a_y/∂z) e_x + (∂a_x/∂z − ∂a_z/∂x) e_y + (∂a_y/∂x − ∂a_x/∂y) e_z

∇²f = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z²
In cylindrical coordinates (r, ϕ, z) holds:

∇ = (∂/∂r) e_r + (1/r)(∂/∂ϕ) e_ϕ + (∂/∂z) e_z

grad f = (∂f/∂r) e_r + (1/r)(∂f/∂ϕ) e_ϕ + (∂f/∂z) e_z

div a = ∂a_r/∂r + a_r/r + (1/r) ∂a_ϕ/∂ϕ + ∂a_z/∂z

curl a = ( (1/r) ∂a_z/∂ϕ − ∂a_ϕ/∂z ) e_r + ( ∂a_r/∂z − ∂a_z/∂r ) e_ϕ + ( ∂a_ϕ/∂r + a_ϕ/r − (1/r) ∂a_r/∂ϕ ) e_z

∇²f = ∂²f/∂r² + (1/r) ∂f/∂r + (1/r²) ∂²f/∂ϕ² + ∂²f/∂z²
In spherical coordinates (r, θ, ϕ) holds:

∇ = (∂/∂r) e_r + (1/r)(∂/∂θ) e_θ + (1/(r sin θ))(∂/∂ϕ) e_ϕ

grad f = (∂f/∂r) e_r + (1/r)(∂f/∂θ) e_θ + (1/(r sin θ))(∂f/∂ϕ) e_ϕ

div a = ∂a_r/∂r + 2a_r/r + (1/r) ∂a_θ/∂θ + a_θ/(r tan θ) + (1/(r sin θ)) ∂a_ϕ/∂ϕ

curl a = ( (1/r) ∂a_ϕ/∂θ + a_ϕ/(r tan θ) − (1/(r sin θ)) ∂a_θ/∂ϕ ) e_r + ( (1/(r sin θ)) ∂a_r/∂ϕ − ∂a_ϕ/∂r − a_ϕ/r ) e_θ + ( ∂a_θ/∂r + a_θ/r − (1/r) ∂a_r/∂θ ) e_ϕ

∇²f = ∂²f/∂r² + (2/r) ∂f/∂r + (1/r²) ∂²f/∂θ² + (1/(r² tan θ)) ∂f/∂θ + (1/(r² sin²θ)) ∂²f/∂ϕ²
General orthonormal curvilinear coordinates (u, v, w) can be derived from cartesian coordinates by the transformation x = x(u, v, w). The unit vectors are given by:

e_u = (1/h_1) ∂x/∂u ,  e_v = (1/h_2) ∂x/∂v ,  e_w = (1/h_3) ∂x/∂w

where the factors h_i normalize to length 1. The differential operators are then given by:

grad f = (1/h_1)(∂f/∂u) e_u + (1/h_2)(∂f/∂v) e_v + (1/h_3)(∂f/∂w) e_w

div a = (1/(h_1h_2h_3)) [ ∂(h_2h_3a_u)/∂u + ∂(h_3h_1a_v)/∂v + ∂(h_1h_2a_w)/∂w ]

curl a = (1/(h_2h_3)) [ ∂(h_3a_w)/∂v − ∂(h_2a_v)/∂w ] e_u + (1/(h_3h_1)) [ ∂(h_1a_u)/∂w − ∂(h_3a_w)/∂u ] e_v + (1/(h_1h_2)) [ ∂(h_2a_v)/∂u − ∂(h_1a_u)/∂v ] e_w

∇²f = (1/(h_1h_2h_3)) [ ∂/∂u ( (h_2h_3/h_1) ∂f/∂u ) + ∂/∂v ( (h_3h_1/h_2) ∂f/∂v ) + ∂/∂w ( (h_1h_2/h_3) ∂f/∂w ) ]
Some properties of the ∇ operator are:

div(φv) = φ div v + (grad φ) · v        curl(φv) = φ curl v + (grad φ) × v        curl grad φ = 0
div(u × v) = v · (curl u) − u · (curl v)        curl curl v = grad div v − ∇²v        div curl v = 0
div grad φ = ∇²φ        ∇²v ≡ (∇²v_1, ∇²v_2, ∇²v_3)

Here, v is an arbitrary vector field and φ an arbitrary scalar field.
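The identity curl grad φ = 0 can be checked with nested central differences, which satisfy it exactly up to round-off because the two mixed-difference stencils coincide (the test field φ and the evaluation point are arbitrary):

```python
import math

def phi(x, y, z):
    return math.sin(x * y) + z * x ** 2    # arbitrary smooth scalar field

h = 1e-3

def grad_phi(x, y, z):
    """Central-difference approximation of grad phi."""
    return ((phi(x + h, y, z) - phi(x - h, y, z)) / (2 * h),
            (phi(x, y + h, z) - phi(x, y - h, z)) / (2 * h),
            (phi(x, y, z + h) - phi(x, y, z - h)) / (2 * h))

def curl_x_of_grad(x, y, z):
    """x-component of curl(grad phi): d(g_z)/dy - d(g_y)/dz."""
    dgz_dy = (grad_phi(x, y + h, z)[2] - grad_phi(x, y - h, z)[2]) / (2 * h)
    dgy_dz = (grad_phi(x, y, z + h)[1] - grad_phi(x, y, z - h)[1]) / (2 * h)
    return dgz_dy - dgy_dz

val = curl_x_of_grad(0.3, -0.7, 1.2)   # vanishes, since curl grad = 0
```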
3.2.5 Integral theorems

Some important integral theorems are:

Gauss:  ∯ (v · n) d²A = ∭ (div v) d³V

Stokes for a scalar field:  ∮ (φ · e_t) ds = ∬ (n × grad φ) d²A

Stokes for a vector field:  ∮ (v · e_t) ds = ∬ (curl v · n) d²A

This gives:  ∯ (curl v · n) d²A = 0

Ostrogradsky:  ∯ (n × v) d²A = ∭ (curl v) d³V  and  ∯ (φ n) d²A = ∭ (grad φ) d³V

Here the orientable surface ∬ d²A is bounded by the Jordan curve s(t).
3.2.6 Multiple integrals
Let A be a closed curve given by f(x, y) = 0; then the surface A inside the curve in ℝ² is given by

A = ∬ d²A = ∬ dx dy

Let the surface A be defined by the function z = f(x, y). The volume V bounded by A and the xy plane is then given by:

V = ∬ f(x, y) dx dy

The volume inside a closed surface defined by z = f(x, y) is given by:

V = ∭ d³V = ∬ f(x, y) dx dy = ∭ dx dy dz
3.2.7 Coordinate transformations

The expressions d²A and d³V transform as follows when one changes coordinates to u = (u, v, w) through the transformation x(u, v, w):

V = ∭ f(x, y, z) dx dy dz = ∭ f(x(u)) |∂x/∂u| du dv dw

In ℝ² holds:

∂x/∂u = | x_u  x_v ; y_u  y_v |

Let the surface A be defined by z = F(x, y) = X(u, v). Then the volume bounded by the xy plane and F is given by:

∬_S f(x) d²A = ∬_G f(x(u)) |∂X/∂u × ∂X/∂v| du dv = ∬_G f(x, y, F(x, y)) √( 1 + (∂_x F)² + (∂_y F)² ) dx dy
3.3 Orthogonality of functions
The inner product of two functions f(x) and g(x) on the interval [a, b] is given by:

(f, g) = ∫_a^b f(x)g(x) dx

or, when using a weight function p(x), by:

(f, g) = ∫_a^b p(x) f(x) g(x) dx

The norm ‖f‖ follows from: ‖f‖² = (f, f). A set of functions f_i is orthonormal if (f_i, f_j) = δ_ij.

Each function f(x) can be written as a sum of orthogonal functions:

f(x) = Σ_{i=0}^{∞} c_i g_i(x)

and Σ c_i² ≤ ‖f‖². Let the set g_i be orthogonal; then it follows:

c_i = (f, g_i) / (g_i, g_i)
3.4 Fourier series
Each function can be written as a sum of independent base functions. When one chooses the orthogonal basis
(cos(nx), sin(nx)) we have a Fourier series.
A periodical function f(x) with period 2L can be written as:

f(x) = a_0 + Σ_{n=1}^{∞} [ a_n cos(nπx/L) + b_n sin(nπx/L) ]

Due to the orthogonality follows for the coefficients:

a_0 = (1/2L) ∫_{−L}^{L} f(t)dt ,  a_n = (1/L) ∫_{−L}^{L} f(t) cos(nπt/L)dt ,  b_n = (1/L) ∫_{−L}^{L} f(t) sin(nπt/L)dt
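As a sketch, the coefficient integrals for the odd function f(t) = t on (−L, L), whose exact coefficients are a_n = 0 and b_n = 2L(−1)^{n+1}/(nπ) (the quadrature helper and the choice L = π are illustrative):

```python
import math

L = math.pi

def midpoint(g, a, b, n=20000):
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

f = lambda t: t   # sawtooth segment on (-L, L)

def b_n(n):
    """b_n = (1/L) * int_{-L}^{L} f(t) sin(n pi t / L) dt."""
    return midpoint(lambda t: f(t) * math.sin(n * math.pi * t / L), -L, L) / L

b1, b2 = b_n(1), b_n(2)   # exact values for L = pi: b_1 = 2, b_2 = -1
```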
A Fourier series can also be written as a sum of complex exponents:

f(x) = Σ_{n=−∞}^{∞} c_n e^{inx}

with

c_n = (1/2π) ∫_{−π}^{π} f(x) e^{−inx} dx

The Fourier transform of a function f(x) gives the transformed function f̂(ω):

f̂(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{−iωx} dx

The inverse transformation is given by:

½[ f(x⁺) + f(x⁻) ] = (1/√(2π)) ∫_{−∞}^{∞} f̂(ω) e^{iωx} dω

where f(x⁺) and f(x⁻) are defined by the upper and lower limit:

f(a⁻) = lim_{x↑a} f(x) ,  f(a⁺) = lim_{x↓a} f(x)

For continuous functions ½[f(x⁺) + f(x⁻)] = f(x).
Chapter 4
Differential equations
4.1 Linear differential equations
4.1.1 First order linear DE
The general solution of a linear differential equation is given by y_A = y_H + y_P, where y_H is the solution of the homogeneous equation and y_P is a particular solution.

A first order differential equation is given by: y′(x) + a(x)y(x) = b(x). Its homogeneous equation is y′(x) + a(x)y(x) = 0.

The solution of the homogeneous equation is given by

y_H = k exp( −∫ a(x)dx )

Suppose that a(x) = a = constant.

Substitution of exp(λx) in the homogeneous equation leads to the characteristic equation λ + a = 0 ⇒ λ = −a.

Suppose b(x) = α exp(µx). Then one can distinguish two cases:

1. λ ≠ µ: a particular solution is: y_P = exp(µx)

2. λ = µ: a particular solution is: y_P = x exp(µx)

When a DE is solved by variation of parameters one writes: y_P(x) = y_H(x)f(x), and then solves f(x) from this.
4.1.2 Second order linear DE
A differential equation of the second order with constant coefficients is given by: y″(x) + ay′(x) + by(x) = c(x).

If c(x) = c = constant there exists a particular solution y_P = c/b.

Substitution of y = exp(λx) leads to the characteristic equation λ² + aλ + b = 0.

There are now 2 possibilities:

1. λ_1 ≠ λ_2: then y_H = α exp(λ_1x) + β exp(λ_2x).

2. λ_1 = λ_2 = λ: then y_H = (α + βx) exp(λx).

If c(x) = p(x) exp(µx) where p(x) is a polynomial there are 3 possibilities:

1. λ_1, λ_2 ≠ µ: y_P = q(x) exp(µx).

2. λ_1 = µ, λ_2 ≠ µ: y_P = xq(x) exp(µx).

3. λ_1 = λ_2 = µ: y_P = x²q(x) exp(µx).

where q(x) is a polynomial of the same order as p(x).

When y″(x) + ω²y(x) = ωf(x) and y(0) = y′(0) = 0, it follows that: y(x) = ∫_0^x f(t) sin(ω(x − t))dt.
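A numerical check of the last formula for the simple choice f ≡ 1, where the integral has the closed form y(x) = (1 − cos(ωx))/ω, which indeed satisfies y″ + ω²y = ω and y(0) = y′(0) = 0 (the choice of f and ω is an assumption for illustration):

```python
import math

omega = 2.0

def y(x, n=20000):
    """y(x) = int_0^x sin(omega*(x - t)) dt for f ≡ 1, by the midpoint rule."""
    h = x / n
    return h * sum(math.sin(omega * (x - (i + 0.5) * h)) for i in range(n))

x = 1.3
closed = (1.0 - math.cos(omega * x)) / omega
numeric = y(x)
# residual of y'' + omega^2 y - omega for the closed form
residual = omega * math.cos(omega * x) + omega ** 2 * closed - omega
```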
4.1.3 The Wronskian
We start with the LDE y″(x) + p(x)y′(x) + q(x)y(x) = 0 and the two initial conditions y(x_0) = K_0 and y′(x_0) = K_1. When p(x) and q(x) are continuous on the open interval I there exists a unique solution y(x) on this interval. The general solution can then be written as y(x) = c_1y_1(x) + c_2y_2(x), where y_1 and y_2 are linearly independent. These are also all solutions of the LDE.

The Wronskian is defined by:

W(y_1, y_2) = | y_1  y_2 ; y_1′  y_2′ | = y_1y_2′ − y_2y_1′

Two solutions y_1 and y_2 are linearly dependent if and only if there is an x_0 ∈ I for which W(y_1(x_0), y_2(x_0)) = 0; they are linearly independent if and only if W vanishes nowhere on I.
4.1.4 Power series substitution
When a series y = Σ a_n x^n is substituted in the LDE with constant coefficients y″(x) + py′(x) + qy(x) = 0 this leads to:

Σ_n [ n(n − 1)a_n x^{n−2} + pna_n x^{n−1} + qa_n x^n ] = 0

Setting coefficients for equal powers of x equal gives:

(n + 2)(n + 1)a_{n+2} + p(n + 1)a_{n+1} + qa_n = 0

This gives a general relation between the coefficients. Special cases are n = 0, 1, 2.
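The coefficient recursion can be sketched for y″ + y = 0 (p = 0, q = 1) with y(0) = 1, y′(0) = 0, which should reproduce the power series of cos(x):

```python
import math

p, q = 0.0, 1.0           # y'' + y = 0; its even solution is cos(x)
N = 20
a = [0.0] * (N + 2)
a[0], a[1] = 1.0, 0.0     # initial conditions y(0) = 1, y'(0) = 0

for n in range(N):
    # (n+2)(n+1) a_{n+2} + p (n+1) a_{n+1} + q a_n = 0
    a[n + 2] = -(p * (n + 1) * a[n + 1] + q * a[n]) / ((n + 2) * (n + 1))

x = 0.8
series = sum(a[n] * x ** n for n in range(N + 2))   # partial sum of the series
```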
4.2 Some special cases
4.2.1 Frobenius’ method
Given the LDE

d²y(x)/dx² + (b(x)/x) dy(x)/dx + (c(x)/x²) y(x) = 0

with b(x) and c(x) analytic at x = 0. This LDE has at least one solution of the form

y_i(x) = x^{r_i} Σ_{n=0}^{∞} a_n x^n  with i = 1, 2

with r real or complex and chosen so that a_0 ≠ 0. When one expands b(x) and c(x) as b(x) = b_0 + b_1x + b_2x² + ... and c(x) = c_0 + c_1x + c_2x² + ..., it follows for r:

r² + (b_0 − 1)r + c_0 = 0

There are now 3 possibilities:

1. r_1 = r_2: then y(x) = y_1(x) ln|x| + y_2(x).

2. r_1 − r_2 ∈ ℕ: then y(x) = ky_1(x) ln|x| + y_2(x).

3. r_1 − r_2 ∉ ℤ: then y(x) = y_1(x) + y_2(x).
4.2.2 Euler

Given the LDE

x² d²y(x)/dx² + ax dy(x)/dx + by(x) = 0

Substitution of y(x) = x^r gives an equation for r: r² + (a − 1)r + b = 0. From this one gets two solutions r_1 and r_2. There are now 2 possibilities:

1. r_1 ≠ r_2: then y(x) = C_1 x^{r_1} + C_2 x^{r_2}.

2. r_1 = r_2 = r: then y(x) = (C_1 ln(x) + C_2) x^r.
4.2.3 Legendre’s DE
Given the LDE

(1 − x²) d²y(x)/dx² − 2x dy(x)/dx + n(n + 1)y(x) = 0

The solutions of this equation are given by y(x) = aP_n(x) + by_2(x), where the Legendre polynomials P_n(x) are defined by Rodrigues' formula:

P_n(x) = (1/(2^n n!)) d^n/dx^n [ (x² − 1)^n ]

For these holds: ‖P_n‖² = 2/(2n + 1).
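A sketch that evaluates P_n through Bonnet's recurrence (k+1)P_{k+1}(x) = (2k+1)xP_k(x) − kP_{k−1}(x) — a standard relation not stated in this section — and checks the norm ‖P_2‖² = 2/5 by quadrature:

```python
def legendre(n, x):
    """P_n(x) via the three-term recurrence, starting from P_0 = 1, P_1 = x."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

# midpoint-rule estimate of ||P_2||^2 = int_{-1}^{1} P_2(x)^2 dx
steps = 20000
h = 2.0 / steps
norm2 = h * sum(legendre(2, -1.0 + (i + 0.5) * h) ** 2 for i in range(steps))
```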
4.2.4 The associated Legendre equation
This equation follows from the θ-dependent part of the wave equation ∇²Ψ = 0 by the substitution ξ = cos(θ). Then follows:

(1 − ξ²) d/dξ [ (1 − ξ²) dP(ξ)/dξ ] + [ C(1 − ξ²) − m² ] P(ξ) = 0

Regular solutions exist only if C = l(l + 1). They are of the form:

P_l^m(ξ) = (1 − ξ²)^{m/2} d^m P^0(ξ)/dξ^m = ( (1 − ξ²)^{m/2} / (2^l l!) ) d^{m+l}/dξ^{m+l} (ξ² − 1)^l

For |m| > l is P_l^m(ξ) = 0. Some properties of P_l^0(ξ) are:

∫_{−1}^{1} P_l^0(ξ) P_{l′}^0(ξ) dξ = (2/(2l + 1)) δ_{ll′} ,  Σ_{l=0}^{∞} P_l^0(ξ) t^l = 1/√(1 − 2ξt + t²)

This polynomial can be written as:

P_l^0(ξ) = (1/π) ∫_0^π ( ξ + √(ξ² − 1) cos(θ) )^l dθ
4.2.5 Solutions for Bessel’s equation
Given the LDE

x² d²y(x)/dx² + x dy(x)/dx + (x² − ν²)y(x) = 0

also called Bessel's equation, and the Bessel functions of the first kind

J_ν(x) = x^ν Σ_{m=0}^{∞} (−1)^m x^{2m} / ( 2^{2m+ν} m! Γ(ν + m + 1) )

for ν := n ∈ ℕ this becomes:

J_n(x) = x^n Σ_{m=0}^{∞} (−1)^m x^{2m} / ( 2^{2m+n} m! (n + m)! )

When ν ∉ ℤ the solution is given by y(x) = aJ_ν(x) + bJ_{−ν}(x). But because for n ∈ ℤ holds J_{−n}(x) = (−1)^n J_n(x), this does not apply to integers. The general solution of Bessel's equation is given by y(x) = aJ_ν(x) + bY_ν(x), where Y_ν are the Bessel functions of the second kind:

Y_ν(x) = [ J_ν(x) cos(νπ) − J_{−ν}(x) ] / sin(νπ)  and  Y_n(x) = lim_{ν→n} Y_ν(x)

The equation x²y″(x) + xy′(x) − (x² + ν²)y(x) = 0 has the modified Bessel functions of the first kind I_ν(x) = i^{−ν}J_ν(ix) as solution, and also solutions K_ν = π[ I_{−ν}(x) − I_ν(x) ] / [ 2 sin(νπ) ].

Sometimes it can be convenient to write the solutions of Bessel's equation in terms of the Hankel functions

H_n^{(1)}(x) = J_n(x) + iY_n(x) ,  H_n^{(2)}(x) = J_n(x) − iY_n(x)
4.2.6 Properties of Bessel functions
Bessel functions are orthogonal with respect to the weight function p(x) = x.

J_{−n}(x) = (−1)^n J_n(x). The Neumann functions N_m(x) are defined as:

N_m(x) = (1/2π) J_m(x) ln(x) + (1/x^m) Σ_{n=0}^{∞} α_n x^{2n}

The following holds: lim_{x→0} J_m(x) = x^m, lim_{x→0} N_m(x) = x^{−m} for m ≠ 0, lim_{x→0} N_0(x) = ln(x).

lim_{r→∞} H(r) = e^{±ikr} e^{iωt}/√r ,  lim_{x→∞} J_n(x) = √(2/πx) cos(x − x_n) ,  lim_{x→∞} J_{−n}(x) = √(2/πx) sin(x − x_n)

with x_n = ½π(n + ½).

J_{n+1}(x) + J_{n−1}(x) = (2n/x) J_n(x) ,  J_{n+1}(x) − J_{n−1}(x) = −2 dJ_n(x)/dx

The following integral relations hold:

J_m(x) = (1/2π) ∫_0^{2π} exp[ i(x sin(θ) − mθ) ] dθ = (1/π) ∫_0^π cos(x sin(θ) − mθ) dθ
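The integral representation and the recurrence can be checked against each other numerically (the midpoint-rule evaluation and the chosen x, n are arbitrary):

```python
import math

def J(m, x, steps=20000):
    """J_m(x) = (1/pi) * int_0^pi cos(x sin(theta) - m*theta) dtheta."""
    h = math.pi / steps
    return (1.0 / math.pi) * h * sum(
        math.cos(x * math.sin((i + 0.5) * h) - m * (i + 0.5) * h)
        for i in range(steps))

x, n = 2.5, 1
lhs = J(n + 1, x) + J(n - 1, x)
rhs = (2 * n / x) * J(n, x)     # recurrence J_{n+1} + J_{n-1} = (2n/x) J_n
```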
4.2.7 Laguerre’s equation
Given the LDE

x d²y(x)/dx² + (1 − x) dy(x)/dx + ny(x) = 0

Solutions of this equation are the Laguerre polynomials L_n(x):

L_n(x) = (e^x/n!) d^n/dx^n ( x^n e^{−x} ) = Σ_{m=0}^{n} ( (−1)^m/m! ) \binom{n}{m} x^m
4.2.8 The associated Laguerre equation
Given the LDE

d²y(x)/dx² + ( (m + 1)/x − 1 ) dy(x)/dx + ( (n + ½(m + 1))/x ) y(x) = 0

Solutions of this equation are the associated Laguerre polynomials L_n^m(x):

L_n^m(x) = ( (−1)^m n! / (n − m)! ) e^x x^{−m} d^{n−m}/dx^{n−m} ( e^{−x} x^n )
4.2.9 Hermite
The differential equations of Hermite are:

d²H_n(x)/dx² − 2x dH_n(x)/dx + 2nH_n(x) = 0  and  d²He_n(x)/dx² − x dHe_n(x)/dx + nHe_n(x) = 0

Solutions of these equations are the Hermite polynomials, given by:

H_n(x) = (−1)^n exp(x²) d^n( exp(−x²) )/dx^n = 2^{n/2} He_n(x√2)

He_n(x) = (−1)^n exp(½x²) d^n( exp(−½x²) )/dx^n = 2^{−n/2} H_n(x/√2)
4.2.10 Chebyshev
The LDE

(1 − x²) d²U_n(x)/dx² − 3x dU_n(x)/dx + n(n + 2)U_n(x) = 0

has solutions of the form

U_n(x) = sin[ (n + 1) arccos(x) ] / √(1 − x²)

The LDE

(1 − x²) d²T_n(x)/dx² − x dT_n(x)/dx + n²T_n(x) = 0

has solutions T_n(x) = cos(n arccos(x)).
4.2.11 Weber
The LDE W_n″(x) + (n + ½ − ¼x²)W_n(x) = 0 has solutions: W_n(x) = He_n(x) exp(−¼x²).
4.3 Nonlinear differential equations
Some nonlinear differential equations and a solution are:

y′ = a√(y² + b²)     y = b sinh(a(x − x_0))
y′ = a√(y² − b²)     y = b cosh(a(x − x_0))
y′ = a√(b² − y²)     y = b cos(a(x − x_0))
y′ = a(y² + b²)      y = b tan(ab(x − x_0))
y′ = a(y² − b²)      y = b coth(ab(x_0 − x))
y′ = a(b² − y²)      y = b tanh(ab(x − x_0))
y′ = ay (b − y)/b    y = b / (1 + Cb exp(−ax))
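A derivative check of the logistic entry in the table above (the parameter values are arbitrary):

```python
import math

a, b, C = 1.5, 4.0, 0.25

def y(x):
    """Candidate solution of the logistic equation y' = a*y*(b - y)/b."""
    return b / (1.0 + C * b * math.exp(-a * x))

# compare y' (central difference) with the right-hand side a*y*(b - y)/b
x, h = 0.7, 1e-5
lhs = (y(x + h) - y(x - h)) / (2 * h)
rhs = a * y(x) * (b - y(x)) / b
```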
4.4 Sturm-Liouville equations

Sturm-Liouville equations are second order LDE's of the form:

−d/dx [ p(x) dy(x)/dx ] + q(x)y(x) = λm(x)y(x)

The boundary conditions are chosen so that the operator

L = −d/dx [ p(x) d/dx ] + q(x)

is Hermitian. The normalization function m(x) must satisfy

∫_a^b m(x) y_i(x) y_j(x) dx = δ_ij

When y_1(x) and y_2(x) are two linearly independent solutions one can write the Wronskian in this form:

W(y_1, y_2) = | y_1  y_2 ; y_1′  y_2′ | = C/p(x)

where C is constant. By changing to another dependent variable u(x), given by u(x) = y(x)√p(x), the LDE transforms into the normal form:

d²u(x)/dx² + I(x)u(x) = 0  with  I(x) = ¼ (p′(x)/p(x))² − ½ p″(x)/p(x) − (q(x) − λm(x))/p(x)

If I(x) > 0, then y″/y < 0 and the solution has an oscillatory behaviour; if I(x) < 0, then y″/y > 0 and the solution has an exponential behaviour.
4.5 Linear partial differential equations
4.5.1 General
The normal derivative is defined by:

∂u/∂n = (∇u, n)

A frequently used solution method for PDE's is separation of variables: one assumes that the solution can be written as u(x, t) = X(x)T(t). When this is substituted, two ordinary DE's for X(x) and T(t) are obtained.
4.5.2 Special cases
The wave equation
The wave equation in 1 dimension is given by

∂²u/∂t² = c² ∂²u/∂x²

When the initial conditions u(x, 0) = ϕ(x) and ∂u(x, 0)/∂t = Ψ(x) apply, the general solution is given by:

u(x, t) = ½[ ϕ(x + ct) + ϕ(x − ct) ] + (1/2c) ∫_{x−ct}^{x+ct} Ψ(ξ)dξ
The diffusion equation
The diffusion equation is:

∂u/∂t = D∇²u

Its solutions can be written in terms of the propagators P(x, x′, t). These have the property that P(x, x′, 0) = δ(x − x′). In 1 dimension it reads:

P(x, x′, t) = ( 1/(2√(πDt)) ) exp( −(x − x′)²/(4Dt) )

In 3 dimensions it reads:

P(x, x′, t) = ( 1/(8(πDt)^{3/2}) ) exp( −(x − x′)²/(4Dt) )

With initial condition u(x, 0) = f(x) the solution is:

u(x, t) = ∫_G f(x′) P(x, x′, t) dx′

The solution of the equation

∂u/∂t − D ∂²u/∂x² = g(x, t)

is given by

u(x, t) = ∫ dt′ ∫ dx′ g(x′, t′) P(x, x′, t − t′)
The equation of Helmholtz
The equation of Helmholtz is obtained by substitution of u(x, t) = v(x) exp(iωt) in the wave equation. This gives for v:

∇²v(x, ω) + k²v(x, ω) = 0

This gives as solutions for v:

1. In cartesian coordinates: substitution of v = A exp(ik · x) gives:

   v(x) = ∫∫ A(k) e^{ik·x} dk

   with the integrals over k² = k².

2. In polar coordinates:

   v(r, ϕ) = Σ_{m=0}^{∞} ( A_m J_m(kr) + B_m N_m(kr) ) e^{imϕ}

3. In spherical coordinates:

   v(r, θ, ϕ) = Σ_{l=0}^{∞} Σ_{m=−l}^{l} [ A_{lm} J_{l+½}(kr) + B_{lm} J_{−l−½}(kr) ] Y(θ, ϕ)/√r
4.5.3 Potential theory and Green’s theorem
Subject of the potential theory are the Poisson equation ∇²u = −f(x), where f is a given function, and the Laplace equation ∇²u = 0. The solutions of these can often be interpreted as a potential. The solutions of Laplace's equation are called harmonic functions.

When a vector field v is given by v = grad ϕ holds:

∫_a^b (v, t) ds = ϕ(b) − ϕ(a)

In this case there exist functions ϕ and w so that v = grad ϕ + curl w.

The field lines of the field v(x) follow from:

ẋ(t) = λv(x)

The first theorem of Green is:

∭_G [ u∇²v + (∇u, ∇v) ] d³V = ∯_S u (∂v/∂n) d²A

The second theorem of Green is:

∭_G [ u∇²v − v∇²u ] d³V = ∯_S ( u ∂v/∂n − v ∂u/∂n ) d²A

A harmonic function which is 0 on the boundary of an area is also 0 within that area. A harmonic function with a normal derivative of 0 on the boundary of an area is constant within that area.

The Dirichlet problem is:

∇²u(x) = −f(x) , x ∈ R ,  u(x) = g(x) for all x ∈ S.

It has a unique solution.

The Neumann problem is:

∇²u(x) = −f(x) , x ∈ R ,  ∂u(x)/∂n = h(x) for all x ∈ S.

The solution is unique except for a constant. The solution exists if:

−∭_R f(x) d³V = ∯_S h(x) d²A

A fundamental solution of the Laplace equation satisfies:

∇²u(x) = −δ(x)

In 2 dimensions, in polar coordinates, this has the solution:

u(r) = −ln(r)/2π

In 3 dimensions, in spherical coordinates, this has the solution:

u(r) = 1/(4πr)

The equation ∇²v = −δ(x − ξ) has the solution

v(x) = 1/(4π|x − ξ|)

After substituting this in Green's 2nd theorem and applying the sieve property of the δ function one can derive Green's 3rd theorem:

u(ξ) = −(1/4π) ∭_R (∇²u/r) d³V + (1/4π) ∯_S [ (1/r)(∂u/∂n) − u (∂/∂n)(1/r) ] d²A

The Green function G(x, ξ) is defined by: ∇²G = −δ(x − ξ), and on the boundary S holds G(x, ξ) = 0. Then G can be written as:

G(x, ξ) = 1/(4π|x − ξ|) + g(x, ξ)

Then g(x, ξ) is a solution of Dirichlet's problem. The solution of Poisson's equation ∇²u = −f(x), when on the boundary S holds u(x) = g(x), is:

u(ξ) = ∭_R G(x, ξ) f(x) d³V − ∯_S g(x) (∂G(x, ξ)/∂n) d²A
Chapter 5
Linear algebra
5.1 Vector spaces
G is a group for the operation ⊗ if:

1. ∀a, b ∈ G ⇒ a ⊗ b ∈ G: a group is closed.

2. (a ⊗ b) ⊗ c = a ⊗ (b ⊗ c): a group is associative.

3. ∃e ∈ G so that a ⊗ e = e ⊗ a = a: there exists a unit element.

4. ∀a ∈ G ∃ā ∈ G so that a ⊗ ā = e: each element has an inverse.

If also

5. a ⊗ b = b ⊗ a

the group is called Abelian or commutative. Vector spaces form an Abelian group for addition and multiplication: 1·a = a, λ(µa) = (λµ)a, (λ + µ)(a + b) = λa + λb + µa + µb.

W is a linear subspace if ∀w_1, w_2 ∈ W holds: λw_1 + µw_2 ∈ W.

W is an invariant subspace of V for the operator A if ∀w ∈ W holds: Aw ∈ W.
5.2 Basis
For an orthogonal basis holds: (e_i, e_j) = cδ_ij. For an orthonormal basis holds: (e_i, e_j) = δ_ij.

The set of vectors {a_n} is linearly independent if:

Σ_i λ_i a_i = 0 ⇔ ∀i: λ_i = 0

The set {a_n} is a basis if it is 1. independent and 2. V = ⟨a_1, a_2, ...⟩ = Σ λ_i a_i.
5.3 Matrix calculus
5.3.1 Basic operations
For the matrix multiplication of matrices A = a_ij and B = b_kl holds, with r the row index and k the column index:

A^{r_1×k_1} · B^{r_2×k_2} = C^{r_1×k_2} (with k_1 = r_2) ,  (AB)_ij = Σ_k a_ik b_kj

where r is the number of rows and k the number of columns.

The transpose of A is defined by: a^T_ij = a_ji. For this holds (AB)^T = B^T A^T and (A^T)^{−1} = (A^{−1})^T. For the inverse matrix holds: (A·B)^{−1} = B^{−1}A^{−1}. The inverse matrix A^{−1} has the property that A·A^{−1} = 𝟙 and can be found by diagonalization: (A_ij | 𝟙) ∼ (𝟙 | A^{−1}_ij).
The inverse of a 2 × 2 matrix is:

( a  b ; c  d )^{−1} = ( 1/(ad − bc) ) ( d  −b ; −c  a )

The determinant function D = det(A) is defined by:

det(A) = D(a_{*1}, a_{*2}, ..., a_{*n})

For the determinant det(A) of a matrix A holds: det(AB) = det(A)·det(B). A 2 × 2 matrix has determinant:

det( a  b ; c  d ) = ad − cb

The derivative of a matrix is a matrix with the derivatives of the coefficients:

dA/dt = da_ij/dt  and  d(AB)/dt = B dA/dt + A dB/dt

The derivative of the determinant is given by:

d det(A)/dt = D(da_1/dt, ..., a_n) + D(a_1, da_2/dt, ..., a_n) + ... + D(a_1, ..., da_n/dt)

When the rows of a matrix are considered as vectors the row rank of a matrix is the number of independent vectors in this set. Similarly for the column rank. The row rank equals the column rank for each matrix.

Let Ã: Ṽ → Ṽ be the complex extension of the real linear operator A: V → V in a finite dimensional V. Then A and Ã have the same characteristic equation.

When A_ij ∈ ℝ and v_1 + iv_2 is an eigenvector of A at eigenvalue λ = λ_1 + iλ_2, then holds:

1. Av_1 = λ_1v_1 − λ_2v_2 and Av_2 = λ_2v_1 + λ_1v_2.

2. v* = v_1 − iv_2 is an eigenvector at eigenvalue λ* = λ_1 − iλ_2.

3. The linear span ⟨v_1, v_2⟩ is an invariant subspace of A.

If k_n are the columns of A, then the transformed space of A is given by:

R(A) = ⟨Ae_1, ..., Ae_n⟩ = ⟨k_1, ..., k_n⟩

If the columns k_n of an n × m matrix A are independent, then the null space N(A) = {0}.
5.3.2 Matrix equations
We start with the equation

Ax = b

with b ≠ 0. If det(A) ≠ 0 there exists exactly one solution, x = A^{−1}b ≠ 0; if det(A) = 0 there is either no solution or there are infinitely many.

The equation

Ax = 0

has non-trivial solutions x ≠ 0 if and only if det(A) = 0; if det(A) ≠ 0 the only solution is 0.

Cramer's rule for the solution of systems of linear equations is: let the system be written as

Ax = b ≡ a_1x_1 + ... + a_nx_n = b

then x_j is given by:

x_j = D(a_1, ..., a_{j−1}, b, a_{j+1}, ..., a_n) / det(A)
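A minimal sketch of Cramer's rule for a 3 × 3 system (the helper names and the example system are illustrative):

```python
def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, b):
    """Solve A x = b by Cramer's rule: x_j = det(A with column j -> b)/det(A)."""
    D = det3(A)
    xs = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]           # replace column j by b
        xs.append(det3(Aj) / D)
    return xs

A = [[2.0, 1.0, -1.0],
     [1.0, 3.0, 2.0],
     [1.0, 0.0, 1.0]]
b = [1.0, 13.0, 4.0]      # constructed so that the solution is (1, 2, 3)
x = cramer3(A, b)
```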
5.4 Linear transformations
A transformation A is linear if: A(λx + βy) = λAx + βAy.

Some common linear transformations are:

Transformation type                          Equation
Projection on the line ⟨a⟩                   P(x) = (a, x)a/(a, a)
Projection on the plane (a, x) = 0           Q(x) = x − P(x)
Mirror image in the line ⟨a⟩                 S(x) = 2P(x) − x
Mirror image in the plane (a, x) = 0         T(x) = 2Q(x) − x = x − 2P(x)

For a projection holds: x − P_W(x) ⊥ P_W(x) and P_W(x) ∈ W.

If for a transformation A holds: (Ax, y) = (x, Ay) = (Ax, Ay), then A is a projection.

Let A: V → W define a linear transformation; we define:

• If S is a subset of V: A(S) := {Ax ∈ W | x ∈ S}

• If T is a subset of W: A←(T) := {x ∈ V | A(x) ∈ T}

Then A(S) is a linear subspace of W and the inverse transformation A←(T) is a linear subspace of V. From this follows that A(V) is the image space of A, notation: R(A). A←(0) = E_0 is a linear subspace of V, the null space of A, notation: N(A). Then the following holds:

dim(N(A)) + dim(R(A)) = dim(V)
5.5 Plane and line
The equation of a line that contains the points a and b is:

x = a + λ(b − a) = a + λr

The equation of a plane is:

x = a + λ(b − a) + µ(c − a) = a + λr_1 + µr_2

When this is a plane in ℝ³, the normal vector to this plane is given by:

n_V = (r_1 × r_2) / |r_1 × r_2|

A line can also be described by the points for which the line equation ℓ: (a, x) + b = 0 holds, and for a plane V: (a, x) + k = 0. The normal vector to V is then: a/|a|.

The distance d between 2 points p and q is given by d(p, q) = ‖p − q‖.

In ℝ² holds: the distance of a point p to the line (a, x) + b = 0 is

d(p, ℓ) = |(a, p) + b| / |a|

Similarly in ℝ³: the distance of a point p to the plane (a, x) + k = 0 is

d(p, V) = |(a, p) + k| / |a|

This can be generalized for ℝ^n and ℂ^n (theorem from Hesse).
5.6 Coordinate transformations

The linear transformation A from 𝕂^n → 𝕂^m is given by (𝕂 = ℝ or ℂ):

y = A^{m×n} x

where a column of A is the image of a base vector in the original.

The matrix A^β_α transforms a vector given w.r.t. a basis α into a vector w.r.t. a basis β. It is given by:

A^β_α = ( β(Aa_1), ..., β(Aa_n) )

where β(x) is the representation of the vector x w.r.t. basis β.

The transformation matrix S^β_α transforms vectors from coordinate system α into coordinate system β:

S^β_α := 𝟙^β_α = ( β(a_1), ..., β(a_n) )  and  S^β_α S^α_β = 𝟙

The matrix of a transformation A is then given by:

A^β_α = ( A^β_α e_1, ..., A^β_α e_n )

For the transformation of matrix operators to another coordinate system holds: A^δ_α = S^δ_λ A^λ_β S^β_α, A^α_α = S^α_β A^β_β S^β_α and (AB)^λ_α = A^λ_β B^β_α.

Further is A^β_α = S^β_α A^α_α and A^α_β = A^α_α S^α_β. A vector is transformed via X_α = S^β_α X_β.
5.7 Eigen values
The eigenvalue equation

Ax = λx

with eigenvalues λ can be solved with (A − λ𝟙)x = 0 ⇒ det(A − λ𝟙) = 0. The eigenvalues follow from this characteristic equation. The following is true: det(A) = Π_i λ_i and Tr(A) = Σ_i a_ii = Σ_i λ_i.

The eigenvalues λ_i are independent of the chosen basis. The matrix of A in a basis of eigenvectors, with S the transformation matrix to this basis, S = (E_{λ_1}, ..., E_{λ_n}), is given by:

Λ = S^{−1}AS = diag(λ_1, ..., λ_n)

When 0 is an eigenvalue of A then E_0(A) = N(A).

When λ is an eigenvalue of A holds: A^n x = λ^n x.
5.8 Transformation types
Isometric transformations
A transformation is isometric when ‖Ax‖ = ‖x‖. This implies that the eigenvalues of an isometric transformation are given by λ = exp(iϕ) ⇒ |λ| = 1. Then also holds: (Ax, Ay) = (x, y).

When W is an invariant subspace of the isometric transformation A with dim(A) < ∞, then W^⊥ is also an invariant subspace.
Orthogonal transformations
A transformation A is orthogonal if A is isometric and the inverse A
←
exists. For an orthogonal transformation O
holds O
T
O = II, so: O
T
= O
−1
. If A and B are orthogonal, than AB and A
−1
are also orthogonal.
Let A : V → V be orthogonal with dim(V ) < ∞. Than A is:
Direct orthogonal if det(A) = +1. A describes a rotation. A rotation in IR
2
through angle ϕ is given by:
R =
_
cos(ϕ) −sin(ϕ)
sin(ϕ) cos(ϕ)
_
So the rotation angle ϕ is determined by Tr(A) = 2 cos(ϕ) with 0 ≤ ϕ ≤ π. Let λ
1
and λ
2
be the roots of the
characteristic equation, than also holds: '(λ
1
) = '(λ
2
) = cos(ϕ), and λ
1
= exp(iϕ), λ
2
= exp(−iϕ).
In IR
3
holds: λ
1
= 1, λ
2
= λ
∗
3
= exp(iϕ). A rotation over E
λ
1
is given by the matrix
_
_
1 0 0
0 cos(ϕ) −sin(ϕ)
0 sin(ϕ) cos(ϕ)
_
_
Mirrored orthogonal if det(A) = −1. Vectors from E
−1
are mirrored by A w.r.t. the invariant subspace E
⊥
−1
. A
mirroring in IR
2
in < (cos(
1
2
ϕ), sin(
1
2
ϕ)) > is given by:
S =
_
cos(ϕ) sin(ϕ)
sin(ϕ) −cos(ϕ)
_
Mirrored orthogonal transformations in $IR^3$ are rotational mirrorings: rotations about an axis $\langle\vec{a}_1\rangle$ through angle φ combined with a mirroring in the plane $\langle\vec{a}_1\rangle^\perp$. The matrix of such a transformation is given by:

$$\begin{pmatrix} -1 & 0 & 0 \\ 0 & \cos\varphi & -\sin\varphi \\ 0 & \sin\varphi & \cos\varphi \end{pmatrix}$$

For all orthogonal transformations O in $IR^3$ it holds that $O(\vec{x}) \times O(\vec{y}) = O(\vec{x} \times \vec{y})$.

$IR^n$ ($n < \infty$) can be decomposed into invariant subspaces of dimension 1 or 2 for each orthogonal transformation.
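As a quick numerical illustration (a sketch using NumPy; the helper name is ours, not from the text), the rotation matrix above can be checked against the stated properties $\det(R) = +1$, $\mathrm{Tr}(R) = 2\cos\varphi$, $R^T R = II$ and $|\lambda| = 1$:

```python
import numpy as np

def rotation2(phi):
    """2x2 direct orthogonal (rotation) matrix through angle phi."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

phi = 0.7
R = rotation2(phi)

# Orthogonal: R^T R = II, direct: det(R) = +1
assert np.allclose(R.T @ R, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)

# The rotation angle is recovered from the trace: Tr(R) = 2 cos(phi)
assert np.isclose(np.trace(R), 2 * np.cos(phi))

# Eigenvalues exp(+i phi), exp(-i phi): isometric, so |lambda| = 1
lam = np.linalg.eigvals(R)
assert np.allclose(np.abs(lam), 1.0)
```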
Unitary transformations
Let V be a complex space on which an inner product is defined. Then a linear transformation U is unitary if U is isometric and its inverse transformation $U^{-1}$ exists. An $n \times n$ matrix is unitary if $U^H U = II$. It has determinant $|\det(U)| = 1$. Each isometric transformation in a finite-dimensional complex vector space is unitary.

Theorem: for an $n \times n$ matrix A the following statements are equivalent:
1. A is unitary,
2. The columns of A are an orthonormal set,
3. The rows of A are an orthonormal set.
Symmetric transformations
A transformation A on $IR^n$ is symmetric if $(A\vec{x}, \vec{y}) = (\vec{x}, A\vec{y})$. A matrix $A \in IM^{n \times n}$ is symmetric if $A = A^T$. A linear operator is symmetric if and only if its matrix w.r.t. an orthonormal basis is symmetric. All eigenvalues of a symmetric transformation belong to IR. Eigenvectors for different eigenvalues are mutually perpendicular. If A is symmetric, then $A^T = A = A^H$ on an orthonormal basis.

For each matrix $B \in IM^{m \times n}$ holds: $B^T B$ is symmetric.
Hermitian transformations
A transformation $H: V \to V$ with $V = C^n$ is Hermitian if $(H\vec{x}, \vec{y}) = (\vec{x}, H\vec{y})$. The Hermitian conjugate transformation $A^H$ of A is: $[a_{ij}]^H = [a_{ji}^*]$. An alternative notation is $A^H = A^\dagger$. The inner product of two vectors $\vec{x}$ and $\vec{y}$ can now be written in the form $(\vec{x}, \vec{y}) = \vec{x}^H\vec{y}$.

If the transformations A and B are Hermitian, then their product AB is Hermitian if $[A, B] = AB - BA = 0$. $[A, B]$ is called the commutator of A and B.

The eigenvalues of a Hermitian transformation belong to IR.

A matrix representation can be coupled with a Hermitian operator L. W.r.t. a basis $\vec{e}_i$ it is given by $L_{mn} = (\vec{e}_m, L\vec{e}_n)$.
Normal transformations
For each linear transformation A in a complex vector space V there exists exactly one linear transformation B so that $(A\vec{x}, \vec{y}) = (\vec{x}, B\vec{y})$. This B is called the adjoint transformation of A. Notation: $B = A^*$. The following holds: $(CD)^* = D^*C^*$. $A^* = A^{-1}$ if A is unitary and $A^* = A$ if A is Hermitian.

Definition: the linear transformation A is normal in a complex vector space V if $A^*A = AA^*$. This is only the case if for its matrix S w.r.t. an orthonormal basis holds: $A^\dagger A = AA^\dagger$.
If A is normal, the following holds:

1. For all vectors $\vec{x} \in V$ and a normal transformation A holds:
$$(A\vec{x}, A\vec{y}) = (A^*A\vec{x}, \vec{y}) = (AA^*\vec{x}, \vec{y}) = (A^*\vec{x}, A^*\vec{y})$$
2. $\vec{x}$ is an eigenvector of A if and only if $\vec{x}$ is an eigenvector of $A^*$.
3. Eigenvectors of A for different eigenvalues are mutually perpendicular.
4. If $E_\lambda$ is an eigenspace of A, then the orthogonal complement $E_\lambda^\perp$ is an invariant subspace of A.
Let the different roots of the characteristic equation of A be $\beta_i$ with multiplicities $n_i$. Then the dimension of each eigenspace $V_i$ equals $n_i$. These eigenspaces are mutually perpendicular and each vector $\vec{x} \in V$ can be written in exactly one way as

$$\vec{x} = \sum_i \vec{x}_i \quad \text{with} \quad \vec{x}_i \in V_i$$

This can also be written as: $\vec{x}_i = P_i\vec{x}$ where $P_i$ is a projection on $V_i$. This leads to the spectral mapping theorem: let A be a normal transformation in a complex vector space V with $\dim(V) = n$. Then:

1. There exist projection transformations $P_i$, $1 \le i \le p$, with the properties
   • $P_iP_j = 0$ for $i \neq j$,
   • $P_1 + ... + P_p = II$,
   • $\dim P_1(V) + ... + \dim P_p(V) = n$
   and complex numbers $\alpha_1, ..., \alpha_p$ so that $A = \alpha_1P_1 + ... + \alpha_pP_p$.
2. If A is unitary, then $|\alpha_i| = 1\ \forall i$.
3. If A is Hermitian, then $\alpha_i \in IR\ \forall i$.
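The spectral mapping theorem can be illustrated numerically (a sketch with NumPy; a Hermitian matrix is used so that the $\alpha_i$ are real):

```python
import numpy as np

# A Hermitian (hence normal) 2x2 matrix
A = np.array([[2.0, 1.0j], [-1.0j, 3.0]])

# eigh returns real eigenvalues alpha_i and an orthonormal eigenbasis U
alpha, U = np.linalg.eigh(A)

# Projections P_i = u_i u_i^H onto the (here 1-dimensional) eigenspaces
P = [np.outer(U[:, i], U[:, i].conj()) for i in range(2)]

# P_i P_j = 0 for i != j, sum P_i = II, and A = sum alpha_i P_i
assert np.allclose(P[0] @ P[1], 0)
assert np.allclose(P[0] + P[1], np.eye(2))
assert np.allclose(alpha[0] * P[0] + alpha[1] * P[1], A)

# Hermitian case: all alpha_i are real
assert np.isrealobj(alpha)
```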
Complete systems of commuting Hermitian transformations
Consider m Hermitian linear transformations $A_i$ in an n-dimensional complex inner product space V. Assume they mutually commute.

Lemma: if $E_\lambda$ is the eigenspace for eigenvalue λ of $A_1$, then $E_\lambda$ is an invariant subspace of all transformations $A_i$. This means that if $\vec{x} \in E_\lambda$, then $A_i\vec{x} \in E_\lambda$.

Theorem: consider m commuting Hermitian matrices $A_i$. Then there exists a unitary matrix U so that all matrices $U^\dagger A_iU$ are diagonal. The columns of U are the common eigenvectors of all matrices $A_j$.

If all eigenvalues of a Hermitian linear transformation in an n-dimensional complex vector space are different, then the normalized eigenvectors are known up to a phase factor $\exp(i\alpha)$.

Definition: a commuting set of Hermitian transformations is called complete if for each pair of common eigenvectors $\vec{v}_i$, $\vec{v}_j$ there exists a transformation $A_k$ so that $\vec{v}_i$ and $\vec{v}_j$ are eigenvectors with different eigenvalues of $A_k$.

Usually a commuting set is taken as small as possible. In quantum physics one speaks of commuting observables. The required number of commuting observables equals the number of quantum numbers required to characterize a state.
5.9 Homogeneous coordinates
Homogeneous coordinates are used if one wants to combine both rotations and translations in one matrix transformation. An extra coordinate is introduced to describe the non-linearities. Homogeneous coordinates are derived from cartesian coordinates as follows:

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix}_{\text{cart}} = \begin{pmatrix} wx \\ wy \\ wz \\ w \end{pmatrix}_{\text{hom}} = \begin{pmatrix} X \\ Y \\ Z \\ w \end{pmatrix}_{\text{hom}}$$
so x = X/w, y = Y/w and z = Z/w. Transformations in homogeneous coordinates are described by the following
matrices:
1. Translation along vector $(X_0, Y_0, Z_0, w_0)$:

$$T = \begin{pmatrix} w_0 & 0 & 0 & X_0 \\ 0 & w_0 & 0 & Y_0 \\ 0 & 0 & w_0 & Z_0 \\ 0 & 0 & 0 & w_0 \end{pmatrix}$$
2. Rotations about the x, y, z axes, resp. through angles α, β, γ:

$$R_x(\alpha) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha & 0 \\ 0 & \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \quad R_y(\beta) = \begin{pmatrix} \cos\beta & 0 & \sin\beta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\beta & 0 & \cos\beta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \quad R_z(\gamma) = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 & 0 \\ \sin\gamma & \cos\gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
3. A perspective projection on image plane $z = c$ with the center of projection in the origin. This transformation has no inverse.

$$P(z = c) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1/c & 0 \end{pmatrix}$$
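A small sketch (NumPy; $w = 1$ and the helper names are ours) showing how a rotation and a translation combine into a single 4×4 matrix product, with the final division by w to return to cartesian coordinates:

```python
import numpy as np

def translation(x0, y0, z0, w0=1.0):
    """Homogeneous translation matrix along (x0, y0, z0) with weight w0."""
    T = w0 * np.eye(4)
    T[:3, 3] = [x0, y0, z0]
    return T

def rot_z(gamma):
    """Homogeneous rotation about the z axis through angle gamma."""
    c, s = np.cos(gamma), np.sin(gamma)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

# Point (1, 0, 0) in homogeneous coordinates with w = 1
p = np.array([1.0, 0.0, 0.0, 1.0])

# Rotate 90 degrees about z, then translate by (1, 2, 3): one matrix
M = translation(1, 2, 3) @ rot_z(np.pi / 2)
q = M @ p

# Back to cartesian coordinates: x = X/w etc.
cart = q[:3] / q[3]
assert np.allclose(cart, [1.0, 3.0, 3.0])
```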
5.10 Inner product spaces
A complex inner product on a complex vector space is defined as follows:

1. $(\vec{a}, \vec{b}) = \overline{(\vec{b}, \vec{a})}$,
2. $(\vec{a}, \beta_1\vec{b}_1 + \beta_2\vec{b}_2) = \beta_1(\vec{a}, \vec{b}_1) + \beta_2(\vec{a}, \vec{b}_2)$ for all $\vec{a}, \vec{b}_1, \vec{b}_2 \in V$ and $\beta_1, \beta_2 \in C$,
3. $(\vec{a}, \vec{a}) \ge 0$ for all $\vec{a} \in V$, $(\vec{a}, \vec{a}) = 0$ if and only if $\vec{a} = \vec{0}$.
Due to (1) holds: $(\vec{a}, \vec{a}) \in IR$. The inner product space $C^n$ is the complex vector space on which a complex inner product is defined by:

$$(\vec{a}, \vec{b}) = \sum_{i=1}^n a_i^*b_i$$

For function spaces holds:

$$(f, g) = \int_a^b f^*(t)g(t)\,dt$$

For each $\vec{a}$ the length $\|\vec{a}\|$ is defined by $\|\vec{a}\| = \sqrt{(\vec{a}, \vec{a})}$. The following holds: $\|\vec{a}\| - \|\vec{b}\| \le \|\vec{a} + \vec{b}\| \le \|\vec{a}\| + \|\vec{b}\|$, and with φ the angle between $\vec{a}$ and $\vec{b}$ holds: $(\vec{a}, \vec{b}) = \|\vec{a}\|\,\|\vec{b}\|\cos\varphi$.
Let $\{\vec{a}_1, ..., \vec{a}_n\}$ be a set of vectors in an inner product space V. Then the Gramian G of this set is given by: $G_{ij} = (\vec{a}_i, \vec{a}_j)$. The set of vectors is independent if and only if $\det(G) \neq 0$.
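A sketch of the Gramian test for independence (NumPy; the vectors are the rows of a matrix, so $G = AA^T$):

```python
import numpy as np

def gramian(vectors):
    """G_ij = (a_i, a_j) for a list of real vectors."""
    A = np.array(vectors, dtype=float)
    return A @ A.T

# Independent set: det(G) != 0
G1 = gramian([[1, 0, 0], [1, 1, 0], [1, 1, 1]])
assert abs(np.linalg.det(G1)) > 1e-12

# Dependent set (third vector = sum of the first two): det(G) = 0
G2 = gramian([[1, 0, 0], [0, 1, 0], [1, 1, 0]])
assert abs(np.linalg.det(G2)) < 1e-12
```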
A set is orthonormal if $(\vec{a}_i, \vec{a}_j) = \delta_{ij}$. If $\vec{e}_1, \vec{e}_2, ...$ form an orthonormal row in an infinite-dimensional vector space, Bessel's inequality holds:

$$\|\vec{x}\|^2 \ge \sum_{i=1}^\infty |(\vec{e}_i, \vec{x})|^2$$

The equality sign holds if and only if $\lim\limits_{n\to\infty}\|\vec{x}_n - \vec{x}\| = 0$.
The inner product space $\ell^2$ is defined in $C^\infty$ by:

$$\ell^2 = \left\{\vec{a} = (a_1, a_2, ...)\ \Big|\ \sum_{n=1}^\infty |a_n|^2 < \infty\right\}$$

A space is called a Hilbert space if it is $\ell^2$ and if also holds: $\lim\limits_{n\to\infty}|a_{n+1} - a_n| = 0$.
5.11 The Laplace transformation
The class LT consists of functions for which holds:

1. On each interval $[0, A]$, $A > 0$, there are no more than a finite number of discontinuities and each discontinuity has an upper and lower limit,
2. $\exists t_0 \in [0, \infty)$ and $a, M \in IR$ so that for $t \ge t_0$ holds: $|f(t)|\exp(-at) < M$.

Then there exists a Laplace transform for f.

The Laplace transformation is a generalisation of the Fourier transformation. The Laplace transform of a function $f(t)$ is, with $s \in C$ and $t \ge 0$:

$$F(s) = \int_0^\infty f(t)e^{-st}\,dt$$
The Laplace transform of the derivative of a function is given by:

$$\mathcal{L}\left(f^{(n)}(t)\right) = -f^{(n-1)}(0) - sf^{(n-2)}(0) - ... - s^{n-1}f(0) + s^nF(s)$$

The operator $\mathcal{L}$ has the following properties:

1. Equal shapes: if $a > 0$ then
$$\mathcal{L}(f(at)) = \frac{1}{a}F\left(\frac{s}{a}\right)$$
2. Damping: $\mathcal{L}(e^{-at}f(t)) = F(s + a)$
3. Translation: if $a > 0$ and g is defined by $g(t) = f(t - a)$ if $t > a$ and $g(t) = 0$ for $t \le a$, then holds: $\mathcal{L}(g(t)) = e^{-sa}\mathcal{L}(f(t))$.

If $s \in IR$ then holds $\Re(\mathcal{L}f) = \mathcal{L}(\Re(f))$ and $\Im(\mathcal{L}f) = \mathcal{L}(\Im(f))$.
For some often occurring functions holds:

$$\begin{array}{ll}
f(t) & F(s) = \mathcal{L}(f(t)) \\
\hline
\dfrac{t^n}{n!}e^{at} & (s-a)^{-n-1} \\[2mm]
e^{at}\cos(\omega t) & \dfrac{s-a}{(s-a)^2 + \omega^2} \\[2mm]
e^{at}\sin(\omega t) & \dfrac{\omega}{(s-a)^2 + \omega^2} \\[2mm]
\delta(t-a) & \exp(-as)
\end{array}$$
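The table entry for $e^{at}\cos(\omega t)$ can be verified numerically (a sketch; a crude trapezoidal quadrature over a long truncated interval stands in for the integral to ∞, valid here since $\Re(s) > a$):

```python
import numpy as np

def laplace_numeric(f, s, T=200.0, n=2_000_001):
    """Approximate F(s) = int_0^inf f(t) e^{-st} dt, truncated at t = T."""
    t = np.linspace(0.0, T, n)
    y = f(t) * np.exp(-s * t)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))   # trapezoidal rule

a, omega, s = -0.5, 2.0, 1.0
F_num = laplace_numeric(lambda t: np.exp(a * t) * np.cos(omega * t), s)
F_exact = (s - a) / ((s - a)**2 + omega**2)
assert abs(F_num - F_exact) < 1e-5
```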
5.12 The convolution
The convolution integral is defined by:

$$(f * g)(t) = \int_0^t f(u)g(t-u)\,du$$

The convolution has the following properties:

1. $f * g \in$ LT
2. $\mathcal{L}(f * g) = \mathcal{L}(f) \cdot \mathcal{L}(g)$
3. Distributive: $f * (g + h) = f * g + f * h$
4. Commutative: $f * g = g * f$
5. Homogeneity: $f * (\lambda g) = \lambda f * g$

If $\mathcal{L}(f) = F_1 \cdot F_2$, then $f(t) = (f_1 * f_2)(t)$.
5.13 Systems of linear differential equations
We start with the equation $\dot{\vec{x}} = A\vec{x}$. Assume that $\vec{x} = \vec{v}\exp(\lambda t)$; then follows: $A\vec{v} = \lambda\vec{v}$. In the $2 \times 2$ case holds:

1. $\lambda_1 \neq \lambda_2$: then $\vec{x}(t) = \sum_i \vec{v}_i\exp(\lambda_it)$.
2. $\lambda_1 = \lambda_2$: then $\vec{x}(t) = (\vec{u}t + \vec{v})\exp(\lambda t)$.
Assume that $\lambda = \alpha + i\beta$ is an eigenvalue with eigenvector $\vec{v}$; then $\lambda^*$ is also an eigenvalue with eigenvector $\vec{v}^*$. Decompose $\vec{v} = \vec{u} + i\vec{w}$; then the real solutions are

$$c_1[\vec{u}\cos(\beta t) - \vec{w}\sin(\beta t)]e^{\alpha t} + c_2[\vec{w}\cos(\beta t) + \vec{u}\sin(\beta t)]e^{\alpha t}$$

There are two solution strategies for the equation $\ddot{\vec{x}} = A\vec{x}$:

1. Let $\vec{x} = \vec{v}\exp(\lambda t)$ ⇒ $\det(A - \lambda^2II) = 0$.
2. Introduce $\dot{x} = u$ and $\dot{y} = v$; this leads to $\ddot{x} = \dot{u}$ and $\ddot{y} = \dot{v}$. This transforms an n-dimensional set of second order equations into a 2n-dimensional set of first order equations.
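A sketch of the first case (distinct eigenvalues): the solution $\vec{x}(t) = \sum_i c_i\vec{v}_i\exp(\lambda_it)$ is built from the eigenpairs of A and checked against $\dot{\vec{x}} = A\vec{x}$ with a finite difference:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2 (distinct)
lam, V = np.linalg.eig(A)

# Match the initial condition x(0) = x0: solve V c = x0 for the coefficients
x0 = np.array([1.0, 0.0])
c = np.linalg.solve(V, x0)

def x(t):
    return (V * np.exp(lam * t)) @ c       # sum_i c_i v_i exp(lam_i t)

# Verify dx/dt = A x at t = 0.5 with a central difference
t, h = 0.5, 1e-6
dxdt = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(dxdt, A @ x(t), atol=1e-6)
assert np.allclose(x(0.0), x0)
```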
5.14 Quadratic forms
5.14.1 Quadratic forms in IR^2

The general equation of a quadratic form is: $\vec{x}^TA\vec{x} + 2\vec{x}^TP + S = 0$. Here, A is a symmetric matrix. If $\Lambda = S^{-1}AS = \mathrm{diag}(\lambda_1, ..., \lambda_n)$ holds: $\vec{u}^T\Lambda\vec{u} + 2\vec{u}^TP + S = 0$, so all cross terms are 0. $\vec{u} = (u, v, w)$ should be chosen so that $\det(S) = +1$, to maintain the same orientation as the system $(x, y, z)$.

Starting with the equation

$$ax^2 + 2bxy + cy^2 + dx + ey + f = 0$$

we have $|A| = ac - b^2$. An ellipse has $|A| > 0$, a parabola $|A| = 0$ and a hyperbola $|A| < 0$. In polar coordinates this can be written as:

$$r = \frac{ep}{1 - e\cos\theta}$$

An ellipse has $e < 1$, a parabola $e = 1$ and a hyperbola $e > 1$.
5.14.2 Quadratic surfaces in IR^3

Rank 3:
$$p\frac{x^2}{a^2} + q\frac{y^2}{b^2} + r\frac{z^2}{c^2} = d$$

• Ellipsoid: $p = q = r = d = 1$, $a, b, c$ are the lengths of the semi-axes.
• Single-bladed hyperboloid: $p = q = d = 1$, $r = -1$.
• Double-bladed hyperboloid: $r = d = 1$, $p = q = -1$.
• Cone: $p = q = 1$, $r = -1$, $d = 0$.

Rank 2:
$$p\frac{x^2}{a^2} + q\frac{y^2}{b^2} + r\frac{z}{c^2} = d$$

• Elliptic paraboloid: $p = q = 1$, $r = -1$, $d = 0$.
• Hyperbolic paraboloid: $p = r = -1$, $q = 1$, $d = 0$.
• Elliptic cylinder: $p = q = -1$, $r = d = 0$.
• Hyperbolic cylinder: $p = d = 1$, $q = -1$, $r = 0$.
• Pair of planes: $p = 1$, $q = -1$, $d = 0$.

Rank 1:
$$py^2 + qx = d$$

• Parabolic cylinder: $p, q > 0$.
• Parallel pair of planes: $d > 0$, $q = 0$, $p \neq 0$.
• Double plane: $p \neq 0$, $q = d = 0$.
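The discriminant test $|A| = ac - b^2$ for the conic $ax^2 + 2bxy + cy^2 + ... = 0$ can be wrapped in a small helper (a sketch; the function name is ours):

```python
def classify_conic(a, b, c):
    """Classify ax^2 + 2bxy + cy^2 + dx + ey + f = 0 via |A| = ac - b^2."""
    disc = a * c - b * b
    if disc > 0:
        return "ellipse"
    if disc == 0:
        return "parabola"
    return "hyperbola"

assert classify_conic(2, 0, 3) == "ellipse"     # 2x^2 + 3y^2: |A| = 6 > 0
assert classify_conic(1, 1, 1) == "parabola"    # |A| = 0
assert classify_conic(1, 2, 1) == "hyperbola"   # |A| = -3 < 0
```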
Chapter 6
Complex function theory
6.1 Functions of complex variables
Complex function theory deals with complex functions of a complex variable. Some definitions:

f is analytical on an area G if f is continuous and differentiable on G.

A Jordan curve is a curve that is closed and simple (non-self-intersecting).

If K is a curve in C with parameter equation $z = \phi(t) = x(t) + iy(t)$, $a \le t \le b$, then the length L of K is given by:

$$L = \int_a^b\sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2}\,dt = \int_a^b\left|\frac{dz}{dt}\right|dt = \int_a^b|\phi'(t)|\,dt$$
The derivative of f in point $z = a$ is:

$$f'(a) = \lim_{z\to a}\frac{f(z) - f(a)}{z - a}$$

If $f(z) = u(x, y) + iv(x, y)$ the derivative is:

$$f'(z) = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x} = -i\frac{\partial u}{\partial y} + \frac{\partial v}{\partial y}$$

Setting both results equal yields the equations of Cauchy-Riemann:

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}\ ,\quad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$$

These equations imply that $\nabla^2u = \nabla^2v = 0$. f is analytical if u and v satisfy these equations.
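A numerical sketch of the Cauchy-Riemann equations for $f(z) = z^2$, i.e. $u = x^2 - y^2$, $v = 2xy$, using central differences (helper names are ours):

```python
def partial(fun, x, y, wrt, h=1e-6):
    """Central-difference partial derivative of fun(x, y) w.r.t. 'x' or 'y'."""
    if wrt == "x":
        return (fun(x + h, y) - fun(x - h, y)) / (2 * h)
    return (fun(x, y + h) - fun(x, y - h)) / (2 * h)

# f(z) = z^2 = (x^2 - y^2) + i(2xy), analytic everywhere
u = lambda x, y: x * x - y * y
v = lambda x, y: 2 * x * y

x0, y0 = 1.3, -0.7
# u_x = v_y  and  u_y = -v_x
assert abs(partial(u, x0, y0, "x") - partial(v, x0, y0, "y")) < 1e-6
assert abs(partial(u, x0, y0, "y") + partial(v, x0, y0, "x")) < 1e-6
```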
6.2 Complex integration
6.2.1 Cauchy’s integral formula
Let K be a curve described by $z = \phi(t)$ on $a \le t \le b$ and let $f(z)$ be continuous on K. Then the integral of f over K is:

$$\int_K f(z)\,dz = \int_a^b f(\phi(t))\dot{\phi}(t)\,dt \overset{f\ \text{continuous}}{=} F(b) - F(a)$$

Lemma: let K be the circle with center a and radius r, taken in positive direction. Then holds for integer m:

$$\frac{1}{2\pi i}\oint_K\frac{dz}{(z-a)^m} = \begin{cases} 0 & \text{if } m \neq 1 \\ 1 & \text{if } m = 1 \end{cases}$$
Theorem: if L is the length of curve K and if $|f(z)| \le M$ for $z \in K$, then, if the integral exists, holds:

$$\left|\int_K f(z)\,dz\right| \le ML$$
Theorem: let f be continuous on an area G and let p be a fixed point of G. Let $F(z) = \int_p^z f(\xi)\,d\xi$ for all $z \in G$ depend only on z and not on the integration path. Then F(z) is analytical on G with $F'(z) = f(z)$.

This leads to two equivalent formulations of the main theorem of complex integration: let the function f be analytical on an area G. Let K and K' be two curves with the same starting and end points, which can be transformed into each other by continuous deformation within G. Let B be a Jordan curve. Then holds

$$\int_K f(z)\,dz = \int_{K'} f(z)\,dz \quad \Leftrightarrow \quad \oint_B f(z)\,dz = 0$$

By applying the main theorem to $e^{iz}/z$ one can derive that

$$\int_0^\infty\frac{\sin(x)}{x}\,dx = \frac{\pi}{2}$$
6.2.2 Residue
A point $a \in C$ is a regular point of a function $f(z)$ if f is analytical in a. Otherwise a is a singular point or pole of $f(z)$. The residue of f in a is defined by

$$\mathop{\mathrm{Res}}_{z=a}f(z) = \frac{1}{2\pi i}\oint_K f(z)\,dz$$

where K is a Jordan curve which encloses a in positive direction. The residue is 0 in regular points; in singular points it can be either 0 or $\neq 0$. Cauchy's residue theorem is: let f be analytical within and on a Jordan curve K except in a finite number of singular points $a_i$ within K. Then, if K is taken in positive direction, holds:

$$\frac{1}{2\pi i}\oint_K f(z)\,dz = \sum_{k=1}^n\mathop{\mathrm{Res}}_{z=a_k}f(z)$$

Lemma: let the function f be analytical in a; then holds:

$$\mathop{\mathrm{Res}}_{z=a}\frac{f(z)}{z-a} = f(a)$$

This leads to Cauchy's integral theorem: if f is analytical on the Jordan curve K, which is taken in positive direction, holds:

$$\frac{1}{2\pi i}\oint_K\frac{f(z)}{z-a}\,dz = \begin{cases} f(a) & \text{if } a \text{ inside } K \\ 0 & \text{if } a \text{ outside } K \end{cases}$$
Theorem: let K be a curve (K need not be closed) and let $\phi(\xi)$ be continuous on K. Then the function

$$f(z) = \int_K\frac{\phi(\xi)\,d\xi}{\xi - z}$$

is analytical with n-th derivative

$$f^{(n)}(z) = n!\int_K\frac{\phi(\xi)\,d\xi}{(\xi - z)^{n+1}}$$

Theorem: let K be a curve and G an area. Let $\phi(\xi, z)$ be defined for $\xi \in K$, $z \in G$, with the following properties:

1. $\phi(\xi, z)$ is bounded, this means $|\phi(\xi, z)| \le M$ for $\xi \in K$, $z \in G$,
2. For fixed $\xi \in K$, $\phi(\xi, z)$ is an analytical function of z on G,
3. For fixed $z \in G$ the functions $\phi(\xi, z)$ and $\partial\phi(\xi, z)/\partial z$ are continuous functions of ξ on K.

Then the function

$$f(z) = \int_K\phi(\xi, z)\,d\xi$$

is analytical with derivative

$$f'(z) = \int_K\frac{\partial\phi(\xi, z)}{\partial z}\,d\xi$$

Cauchy's inequality: let $f(z)$ be an analytical function within and on the circle $C: |z - a| = R$ and let $|f(z)| \le M$ for $z \in C$. Then holds

$$\left|f^{(n)}(a)\right| \le \frac{Mn!}{R^n}$$
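Cauchy's integral theorem lends itself to a numerical sketch (NumPy; the unit circle is discretised and the contour integral approximated by a Riemann sum over equidistant points, which converges very fast for periodic smooth integrands):

```python
import numpy as np

def cauchy_integral(f, a, center=0.0, radius=1.0, n=20000):
    """(1 / 2 pi i) * contour integral of f(z)/(z - a) over a circle K."""
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n)
    return np.sum(f(z) / (z - a) * dz) / (2j * np.pi)

f = np.exp   # an entire function

# a inside K: the integral recovers f(a); a outside K: it is 0
assert abs(cauchy_integral(f, 0.3 + 0.2j) - np.exp(0.3 + 0.2j)) < 1e-8
assert abs(cauchy_integral(f, 2.0)) < 1e-8
```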
6.3 Analytical functions defined by series
The series $\sum f_n(z)$ is called pointwise convergent on an area G with sum $F(z)$ if

$$\forall_{\varepsilon>0}\forall_{z\in G}\exists_{N_0\in IR}\forall_{N>N_0}\left[\left|f(z) - \sum_{n=1}^N f_n(z)\right| < \varepsilon\right]$$

The series is called uniformly convergent if

$$\forall_{\varepsilon>0}\exists_{N_0\in IR}\forall_{N>N_0}\forall_{z\in G}\left[\left|f(z) - \sum_{n=1}^N f_n(z)\right| < \varepsilon\right]$$

Uniform convergence implies pointwise convergence; the opposite is not necessarily true.
Theorem: let the power series $\sum\limits_{n=0}^\infty a_nz^n$ have a radius of convergence R. R is the distance to the first non-essential singularity.

• If $\lim\limits_{n\to\infty}\sqrt[n]{|a_n|} = L$ exists, then $R = 1/L$.
• If $\lim\limits_{n\to\infty}|a_{n+1}|/|a_n| = L$ exists, then $R = 1/L$.

If these limits both don't exist one can find R with the formula of Cauchy-Hadamard:

$$\frac{1}{R} = \limsup_{n\to\infty}\sqrt[n]{|a_n|}$$
6.4 Laurent series
Taylor’s theorem: let f be analytical in an area G and let point a ∈ G has distance r to the boundary of G. Than
f(z) can be expanded into the Taylor series near a:
f(z) =
∞
n=0
c
n
(z −a)
n
with c
n
=
f
(n)
(a)
n!
valid for [z − a[ < r. The radius of convergence of the Taylor series is ≥ r. If f has a pole of order k in a than
c
1
, ..., c
k−1
= 0, c
k
,= 0.
Theorem of Laurent: let f be analytical in the circular area $G: r < |z - a| < R$. Then $f(z)$ can be expanded into a Laurent series with center a:

$$f(z) = \sum_{n=-\infty}^\infty c_n(z-a)^n \quad \text{with} \quad c_n = \frac{1}{2\pi i}\oint_K\frac{f(w)\,dw}{(w-a)^{n+1}}\ ,\quad n \in ZZ$$
valid for $r < |z - a| < R$, where K is an arbitrary Jordan curve in G which encloses point a in positive direction.

The principal part of a Laurent series is $\sum\limits_{n=1}^\infty c_{-n}(z-a)^{-n}$. One can classify singular points with this. There are 3 cases:

1. There is no principal part. Then a is a non-essential singularity. Define $f(a) = c_0$; the series is then also valid for $|z - a| < R$ and f is analytical in a.
2. The principal part contains a finite number of terms. Then there exists a $k \in IN$ so that $\lim\limits_{z\to a}(z-a)^kf(z) = c_{-k} \neq 0$. The function $g(z) = (z-a)^kf(z)$ then has a non-essential singularity in a. One speaks of a pole of order k in $z = a$.
3. The principal part contains an infinite number of terms. Then a is an essential singular point of f, such as $\exp(1/z)$ for $z = 0$.

If f and g are analytical, $f(a) \neq 0$, $g(a) = 0$, $g'(a) \neq 0$, then $f(z)/g(z)$ has a simple pole (i.e. a pole of order 1) in $z = a$ with

$$\mathop{\mathrm{Res}}_{z=a}\frac{f(z)}{g(z)} = \frac{f(a)}{g'(a)}$$
6.5 Jordan’s theorem
Residues are often used when solving definite integrals. We define the notations $C_\rho^+ = \{z \mid |z| = \rho, \Im(z) \ge 0\}$ and $C_\rho^- = \{z \mid |z| = \rho, \Im(z) \le 0\}$ and $M^+(\rho, f) = \max\limits_{z\in C_\rho^+}|f(z)|$, $M^-(\rho, f) = \max\limits_{z\in C_\rho^-}|f(z)|$. We assume that $f(z)$ is analytical for $\Im(z) > 0$ with a possible exception of a finite number of singular points which do not lie on the real axis, that $\lim\limits_{\rho\to\infty}\rho M^+(\rho, f) = 0$ and that the integral exists; then

$$\int_{-\infty}^\infty f(x)\,dx = 2\pi i\sum\mathrm{Res}\,f(z) \text{ in } \Im(z) > 0$$

Replace $M^+$ by $M^-$ in the conditions above and it follows that:

$$\int_{-\infty}^\infty f(x)\,dx = -2\pi i\sum\mathrm{Res}\,f(z) \text{ in } \Im(z) < 0$$
Jordan’s lemma: let f be continuous for [z[ ≥ R, ·(z) ≥ 0 and lim
ρ→∞
M
+
(ρ, f) = 0. Than holds for α > 0
lim
ρ→∞
_
C
+
ρ
f(z)e
iαz
dz = 0
Let f be continuous for [z[ ≥ R, ·(z) ≤ 0 and lim
ρ→∞
M
−
(ρ, f) = 0. Than holds for α < 0
lim
ρ→∞
_
C
−
ρ
f(z)e
iαz
dz = 0
Let z = a be a simple pole of f(z) and let C
δ
be the half circle [z −a[ = δ, 0 ≤ arg(z −a) ≤ π, taken from a +δ
to a −δ. Than is
lim
δ↓0
1
2πi
_
C
δ
f(z)dz =
1
2
Res
z=a
f(z)
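As a sketch of the residue method: $f(x) = 1/(1 + x^2)$ has one pole $z = i$ in the upper half plane with residue $1/(2i)$, so the formula above gives $\int_{-\infty}^\infty f(x)\,dx = 2\pi i \cdot 1/(2i) = \pi$. A direct numerical integral agrees:

```python
import math
import numpy as np

# 2*pi*i * Res_{z=i} 1/(1+z^2) = 2*pi*i * 1/(2i) = pi
residue_result = math.pi

# Direct midpoint-rule integral over a large symmetric interval
n, L = 4_000_000, 4000.0
h = 2 * L / n
x = -L + (np.arange(n) + 0.5) * h
direct = float(np.sum(h / (1 + x**2)))

# Truncation tail is 2/L = 5e-4, so agreement to ~1e-3
assert abs(direct - residue_result) < 1e-3
```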
Chapter 7
Tensor calculus
7.1 Vectors and covectors
A finite dimensional vector space is denoted by $\mathcal{V}$, $\mathcal{W}$. The vector space of linear transformations from $\mathcal{V}$ to $\mathcal{W}$ is denoted by $L(\mathcal{V}, \mathcal{W})$. Consider $L(\mathcal{V}, IR) := \mathcal{V}^*$. We name $\mathcal{V}^*$ the dual space of $\mathcal{V}$. Now we can define vectors in $\mathcal{V}$ with basis $\vec{c}$ and covectors in $\mathcal{V}^*$ with basis $\hat{\vec{c}}$. Properties of both are:
1. Vectors: $\vec{x} = x^i\vec{c}_i$ with basis vectors $\vec{c}_i$:

$$\vec{c}_i = \frac{\partial}{\partial x^i}$$

Transformation from system i to i' is given by:

$$\vec{c}_{i'} = A_{i'}^i\vec{c}_i = \partial_{i'} \in \mathcal{V}\ ,\quad x^{i'} = A_i^{i'}x^i$$

2. Covectors: $\hat{\vec{x}} = x_i\hat{\vec{c}}^i$ with basis vectors $\hat{\vec{c}}^i$:

$$\hat{\vec{c}}^i = dx^i$$

Transformation from system i to i' is given by:

$$\hat{\vec{c}}^{i'} = A_i^{i'}\hat{\vec{c}}^i \in \mathcal{V}^*\ ,\quad x_{i'} = A_{i'}^ix_i$$
Here the Einstein convention is used:

$$a^ib_i := \sum_i a^ib_i$$

The coordinate transformation is given by:

$$A_{i'}^i = \frac{\partial x^i}{\partial x^{i'}}\ ,\quad A_i^{i'} = \frac{\partial x^{i'}}{\partial x^i}$$

From this follows that $A_k^iA_l^k = \delta_l^i$ and $A_{i'}^i = (A_i^{i'})^{-1}$.
In differential notation the coordinate transformations are given by:

$$dx^i = \frac{\partial x^i}{\partial x^{i'}}dx^{i'} \quad \text{and} \quad \frac{\partial}{\partial x^{i'}} = \frac{\partial x^i}{\partial x^{i'}}\frac{\partial}{\partial x^i}$$

The general transformation rule for a tensor T is:

$$T_{s_1...s_m}^{q_1...q_n} = \left|\frac{\partial x}{\partial u}\right|^\Delta\frac{\partial u^{q_1}}{\partial x^{p_1}}\cdots\frac{\partial u^{q_n}}{\partial x^{p_n}}\,\frac{\partial x^{r_1}}{\partial u^{s_1}}\cdots\frac{\partial x^{r_m}}{\partial u^{s_m}}\,T_{r_1...r_m}^{p_1...p_n}$$

where Δ is the weight of the tensor. For an absolute tensor $\Delta = 0$.
7.2 Tensor algebra
The following holds:

$$a_{ij}(x_i + y_i) \equiv a_{ij}x_i + a_{ij}y_i\ ,\quad \text{but:}\quad a_{ij}(x_i + y_j) \not\equiv a_{ij}x_i + a_{ij}y_j$$

and

$$(a_{ij} + a_{ji})x_ix_j \equiv 2a_{ij}x_ix_j\ ,\quad \text{but:}\quad (a_{ij} + a_{ji})x_iy_j \not\equiv 2a_{ij}x_iy_j\ ,\quad \text{and}\quad (a_{ij} - a_{ji})x_ix_j \equiv 0$$
The sum and difference of two tensors is a tensor of the same rank: $A_q^p \pm B_q^p$. The outer tensor product results in a tensor with a rank equal to the sum of the ranks of both tensors: $A_q^{pr}B_s^m = C_{qs}^{prm}$. The contraction equates two indices and sums over them. Suppose we take $r = s$ for a tensor $A_{qs}^{mpr}$; this results in: $\sum_r A_{qr}^{mpr} = B_q^{mp}$. The inner product of two tensors is defined by taking the outer product followed by a contraction.
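The outer product followed by a contraction maps directly onto `numpy.einsum` (a sketch; the index letters play the role of the tensor indices, and for two rank-2 tensors the result is just the matrix product):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # components A^p_q
B = rng.standard_normal((3, 3))   # components B^m_s

# Outer tensor product: C^{p m}_{q s} = A^p_q B^m_s, rank = 2 + 2 = 4
C = np.einsum('pq,ms->pqms', A, B)
assert C.shape == (3, 3, 3, 3)

# Contraction of q with m: sum_q A^p_q B^q_s = (A B)^p_s
D = np.einsum('pqqs->ps', C)
assert np.allclose(D, A @ B)      # inner product of the two tensors
```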
7.3 Inner product
Definition: the bilinear transformation $B: \mathcal{V} \times \mathcal{V}^* \to IR$, $B(\vec{x}, \hat{\vec{y}}) = \hat{\vec{y}}(\vec{x})$ is denoted by $\langle\vec{x}, \hat{\vec{y}}\rangle$. For this pairing operator $\langle\cdot, \cdot\rangle = \delta$ holds:

$$\hat{\vec{y}}(\vec{x}) = \langle\vec{x}, \hat{\vec{y}}\rangle = y_ix^i\ ,\quad \langle\hat{\vec{c}}^i, \vec{c}_j\rangle = \delta_j^i$$

Let $G: \mathcal{V} \to \mathcal{V}^*$ be a linear bijection. Define the bilinear forms

$$g: \mathcal{V} \times \mathcal{V} \to IR\ ,\quad g(\vec{x}, \vec{y}) = \langle\vec{x}, G\vec{y}\rangle$$
$$h: \mathcal{V}^* \times \mathcal{V}^* \to IR\ ,\quad h(\hat{\vec{x}}, \hat{\vec{y}}) = \langle G^{-1}\hat{\vec{x}}, \hat{\vec{y}}\rangle$$

Both are non-degenerate. The following holds: $h(G\vec{x}, G\vec{y}) = \langle\vec{x}, G\vec{y}\rangle = g(\vec{x}, \vec{y})$. If we identify $\mathcal{V}$ and $\mathcal{V}^*$ with G, then g (or h) gives an inner product on $\mathcal{V}$.
The inner product $(\,,)_\Lambda$ on $\Lambda^k(\mathcal{V})$ is defined by:

$$(\Phi, \Psi)_\Lambda = \frac{1}{k!}(\Phi, \Psi)_{T_k^0(\mathcal{V})}$$

The inner product of two vectors is then given by:

$$(\vec{x}, \vec{y}) = x^iy^j\langle\vec{c}_i, G\vec{c}_j\rangle = g_{ij}x^iy^j$$
The matrix $g_{ij}$ of G is given by

$$g_{ij}\hat{\vec{c}}^j = G\vec{c}_i$$

The matrix $g^{ij}$ of $G^{-1}$ is given by:

$$g^{kl}\vec{c}_l = G^{-1}\hat{\vec{c}}^k$$

For this metric tensor $g_{ij}$ holds: $g_{ij}g^{jk} = \delta_i^k$. This tensor can raise or lower indices:

$$x_j = g_{ij}x^i\ ,\quad x^i = g^{ij}x_j$$

and $du^i = \hat{\vec{c}}^i = g^{ij}\vec{c}_j$.
7.4 Tensor product
Definition: let $\mathcal{U}$ and $\mathcal{V}$ be two finite dimensional vector spaces with dimensions m and n. Let $\mathcal{U}^* \times \mathcal{V}^*$ be the cartesian product of $\mathcal{U}$ and $\mathcal{V}$. A function $t: \mathcal{U}^* \times \mathcal{V}^* \to IR$; $(\hat{\vec{u}}; \hat{\vec{v}}) \mapsto t(\hat{\vec{u}}; \hat{\vec{v}}) = t^{\alpha\beta}u_\alpha v_\beta \in IR$ is called a tensor if t is linear in $\hat{\vec{u}}$ and $\hat{\vec{v}}$. The tensors t form a vector space denoted by $\mathcal{U} \otimes \mathcal{V}$. The elements $T \in \mathcal{V} \otimes \mathcal{V}$ are called contravariant 2-tensors: $T = T^{ij}\vec{c}_i \otimes \vec{c}_j = T^{ij}\partial_i \otimes \partial_j$. The elements $T \in \mathcal{V}^* \otimes \mathcal{V}^*$ are called covariant 2-tensors: $T = T_{ij}\hat{\vec{c}}^i \otimes \hat{\vec{c}}^j = T_{ij}dx^i \otimes dx^j$. The elements $T \in \mathcal{V}^* \otimes \mathcal{V}$ are called mixed 2-tensors: $T = T_i^{.j}\hat{\vec{c}}^i \otimes \vec{c}_j = T_i^{.j}dx^i \otimes \partial_j$, and analogous for $T \in \mathcal{V} \otimes \mathcal{V}^*$.

The numbers given by

$$t^{\alpha\beta} = t(\hat{\vec{c}}^\alpha, \hat{\vec{c}}^\beta)$$

with $1 \le \alpha \le m$ and $1 \le \beta \le n$ are the components of t.
Take $\vec{x} \in \mathcal{U}$ and $\vec{y} \in \mathcal{V}$. Then the function $\vec{x} \otimes \vec{y}$, defined by

$$(\vec{x} \otimes \vec{y})(\hat{\vec{u}}, \hat{\vec{v}}) = \langle\vec{x}, \hat{\vec{u}}\rangle_{\mathcal{U}}\,\langle\vec{y}, \hat{\vec{v}}\rangle_{\mathcal{V}}$$

is a tensor. The components are derived from: $(\vec{u} \otimes \vec{v})^{ij} = u^iv^j$. The tensor product of 2 tensors is given by:

$$\binom{2}{0}\ \text{form:}\quad (\vec{v} \otimes \vec{w})(\hat{p}, \hat{q}) = v^ip_i\,w^kq_k = T^{ik}p_iq_k$$
$$\binom{0}{2}\ \text{form:}\quad (\hat{p} \otimes \hat{q})(\vec{v}, \vec{w}) = p_iv^i\,q_kw^k = T_{ik}v^iw^k$$
$$\binom{1}{1}\ \text{form:}\quad (\vec{v} \otimes \hat{p})(\hat{q}, \vec{w}) = v^iq_i\,p_kw^k = T_k^iq_iw^k$$
7.5 Symmetric and antisymmetric tensors
A tensor $t \in \mathcal{V} \otimes \mathcal{V}$ is called symmetric resp. antisymmetric if $\forall\hat{\vec{x}}, \hat{\vec{y}} \in \mathcal{V}^*$ holds: $t(\hat{\vec{x}}, \hat{\vec{y}}) = t(\hat{\vec{y}}, \hat{\vec{x}})$ resp. $t(\hat{\vec{x}}, \hat{\vec{y}}) = -t(\hat{\vec{y}}, \hat{\vec{x}})$.

A tensor $t \in \mathcal{V}^* \otimes \mathcal{V}^*$ is called symmetric resp. antisymmetric if $\forall\vec{x}, \vec{y} \in \mathcal{V}$ holds: $t(\vec{x}, \vec{y}) = t(\vec{y}, \vec{x})$ resp. $t(\vec{x}, \vec{y}) = -t(\vec{y}, \vec{x})$. The linear transformations $\mathcal{S}$ and $\mathcal{A}$ in $\mathcal{V} \otimes \mathcal{W}$ are defined by:

$$\mathcal{S}t(\hat{\vec{x}}, \hat{\vec{y}}) = \frac{1}{2}\left(t(\hat{\vec{x}}, \hat{\vec{y}}) + t(\hat{\vec{y}}, \hat{\vec{x}})\right)$$
$$\mathcal{A}t(\hat{\vec{x}}, \hat{\vec{y}}) = \frac{1}{2}\left(t(\hat{\vec{x}}, \hat{\vec{y}}) - t(\hat{\vec{y}}, \hat{\vec{x}})\right)$$

Analogous in $\mathcal{V}^* \otimes \mathcal{V}^*$. If t is symmetric resp. antisymmetric, then $\mathcal{S}t = t$ resp. $\mathcal{A}t = t$.
The tensors $\vec{e}_i \vee \vec{e}_j = \vec{e}_i\vec{e}_j = 2\mathcal{S}(\vec{e}_i \otimes \vec{e}_j)$, with $1 \le i \le j \le n$, are a basis in $\mathcal{S}(\mathcal{V} \otimes \mathcal{V})$ with dimension $\frac{1}{2}n(n+1)$.

The tensors $\vec{e}_i \wedge \vec{e}_j = 2\mathcal{A}(\vec{e}_i \otimes \vec{e}_j)$, with $1 \le i < j \le n$, are a basis in $\mathcal{A}(\mathcal{V} \otimes \mathcal{V})$ with dimension $\frac{1}{2}n(n-1)$.

The complete antisymmetric tensor ε satisfies: $\varepsilon_{ijk}\varepsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}$.

The permutation operators $e_{pqr}$ are defined by: $e_{123} = e_{231} = e_{312} = 1$, $e_{213} = e_{132} = e_{321} = -1$, for all other combinations $e_{pqr} = 0$. There is a connection with the ε tensor: $\varepsilon_{pqr} = g^{1/2}e_{pqr}$ and $\varepsilon^{pqr} = g^{-1/2}e^{pqr}$.
7.6 Outer product
Let $\alpha \in \Lambda^k(\mathcal{V})$ and $\beta \in \Lambda^l(\mathcal{V})$. Then $\alpha \wedge \beta \in \Lambda^{k+l}(\mathcal{V})$ is defined by:

$$\alpha \wedge \beta = \frac{(k+l)!}{k!\,l!}\mathcal{A}(\alpha \otimes \beta)$$

If α and $\beta \in \Lambda^1(\mathcal{V}) = \mathcal{V}^*$, then holds: $\alpha \wedge \beta = \alpha \otimes \beta - \beta \otimes \alpha$.
The outer product can be written as: $(\vec{a} \times \vec{b})_i = \varepsilon_{ijk}a^jb^k$, $\vec{a} \times \vec{b} = G^{-1}\cdot{*}(G\vec{a} \wedge G\vec{b})$.

Take $\vec{a}, \vec{b}, \vec{c}, \vec{d} \in IR^4$. Then $(dt \wedge dz)(\vec{a}, \vec{b}) = a_0b_4 - b_0a_4$ is the oriented surface of the projection on the tz-plane of the parallelogram spanned by $\vec{a}$ and $\vec{b}$.
b.
Further
(dt ∧ dy ∧ dz)(a,
b, c) = det
¸
¸
¸
¸
¸
¸
a
0
b
0
c
0
a
2
b
2
c
2
a
4
b
4
c
4
¸
¸
¸
¸
¸
¸
is the oriented 3dimensional volume of the projection on the tyzplane of the parallelepiped spanned by a,
b and c.
(dt ∧ dx ∧ dy ∧ dz)(a,
b, c,
d) = det(a,
b, c,
d) is the 4dimensional volume of the hyperparellelepiped spanned by
a,
b, c and
d.
7.7 The Hodge star operator
$\Lambda^k(\mathcal{V})$ and $\Lambda^{n-k}(\mathcal{V})$ have the same dimension because $\binom{n}{k} = \binom{n}{n-k}$ for $1 \le k \le n$. $\dim(\Lambda^n(\mathcal{V})) = 1$. The choice of a basis means the choice of an oriented measure of volume, a volume µ, in $\mathcal{V}$. We can gauge µ so that for an orthonormal basis $\vec{e}_i$ holds: $\mu(\vec{e}_i) = 1$. This basis is then by definition positively oriented if $\mu = \hat{\vec{e}}_1 \wedge \hat{\vec{e}}_2 \wedge ... \wedge \hat{\vec{e}}_n = 1$.

Because both spaces have the same dimension one can ask if there exists a bijection between them. If $\mathcal{V}$ has no extra structure this is not the case. However, such an operation does exist if there is an inner product defined on $\mathcal{V}$ and the corresponding volume µ. This is called the Hodge star operator and is denoted by ∗. The following holds:

$$\forall_{w\in\Lambda^k(\mathcal{V})}\ \exists_{*w\in\Lambda^{n-k}(\mathcal{V})}\ \forall_{\theta\in\Lambda^k(\mathcal{V})}\quad \theta \wedge *w = (\theta, w)_\Lambda\,\mu$$

For an orthonormal basis in $IR^3$ holds: the volume is $\mu = dx \wedge dy \wedge dz$, and $*(dx \wedge dy \wedge dz) = 1$, $*dx = dy \wedge dz$, $*dz = dx \wedge dy$, $*dy = -dx \wedge dz$, $*(dx \wedge dy) = dz$, $*(dy \wedge dz) = dx$, $*(dx \wedge dz) = -dy$.

For a Minkowski basis in $IR^4$ holds: $\mu = dt \wedge dx \wedge dy \wedge dz$, $G = dt \otimes dt - dx \otimes dx - dy \otimes dy - dz \otimes dz$, and $*(dt \wedge dx \wedge dy \wedge dz) = 1$ and $*1 = dt \wedge dx \wedge dy \wedge dz$. Further $*dt = dx \wedge dy \wedge dz$ and $*dx = dt \wedge dy \wedge dz$.
7.8 Differential operations
7.8.1 The directional derivative
The directional derivative in point $\vec{a}$ is given by:

$$L_{\vec{a}}f = \langle\vec{a}, df\rangle = a^i\frac{\partial f}{\partial x^i}$$
7.8.2 The Lie derivative

The Lie derivative is given by:

$$(\mathcal{L}_{\vec{v}}\vec{w})^j = w^i\partial_iv^j - v^i\partial_iw^j$$
7.8.3 Christoffel symbols
To each curvilinear coordinate system $u^i$ we add a system of $n^3$ functions $\Gamma_{jk}^i$ of u, defined by

$$\frac{\partial^2\vec{x}}{\partial u^j\partial u^k} = \Gamma_{jk}^i\frac{\partial\vec{x}}{\partial u^i}$$

These are the Christoffel symbols of the second kind. Christoffel symbols are no tensors. The Christoffel symbols of the second kind are given by:

$$\begin{Bmatrix} i \\ jk \end{Bmatrix} := \Gamma_{jk}^i = \left\langle\frac{\partial^2\vec{x}}{\partial u^k\partial u^j}, dx^i\right\rangle$$
with $\Gamma_{jk}^i = \Gamma_{kj}^i$. Their transformation to a different coordinate system is given by:

$$\Gamma_{j'k'}^{i'} = A_i^{i'}A_{j'}^jA_{k'}^k\Gamma_{jk}^i + A_i^{i'}(\partial_{j'}A_{k'}^i)$$

The first term in this expression is 0 if the primed coordinates are cartesian.
There is a relation between Christoffel symbols and the metric:

$$\Gamma_{jk}^i = \frac{1}{2}g^{ir}(\partial_jg_{kr} + \partial_kg_{rj} - \partial_rg_{jk})$$

and $\Gamma_{\beta\alpha}^\alpha = \partial_\beta(\ln(\sqrt{|g|}))$.

Lowering an index gives the Christoffel symbols of the first kind: $\Gamma_{jk}^i = g^{il}\Gamma_{jkl}$.
7.8.4 The covariant derivative
The covariant derivative $\nabla_j$ of a vector, covector and of rank-2 tensors is given by:

$$\nabla_ja^i = \partial_ja^i + \Gamma_{jk}^ia^k$$
$$\nabla_ja_i = \partial_ja_i - \Gamma_{ij}^ka_k$$
$$\nabla_\gamma a_\beta^\alpha = \partial_\gamma a_\beta^\alpha - \Gamma_{\gamma\beta}^\varepsilon a_\varepsilon^\alpha + \Gamma_{\gamma\varepsilon}^\alpha a_\beta^\varepsilon$$
$$\nabla_\gamma a_{\alpha\beta} = \partial_\gamma a_{\alpha\beta} - \Gamma_{\gamma\alpha}^\varepsilon a_{\varepsilon\beta} - \Gamma_{\gamma\beta}^\varepsilon a_{\alpha\varepsilon}$$
$$\nabla_\gamma a^{\alpha\beta} = \partial_\gamma a^{\alpha\beta} + \Gamma_{\gamma\varepsilon}^\alpha a^{\varepsilon\beta} + \Gamma_{\gamma\varepsilon}^\beta a^{\alpha\varepsilon}$$

Ricci's theorem:

$$\nabla_\gamma g_{\alpha\beta} = \nabla_\gamma g^{\alpha\beta} = 0$$
7.9 Differential operators
The gradient is given by:

$$\mathrm{grad}(f) = G^{-1}df = g^{ki}\frac{\partial f}{\partial x^i}\frac{\partial}{\partial x^k}$$

The divergence is given by:

$$\mathrm{div}(a^i) = \nabla_ia^i = \frac{1}{\sqrt{g}}\partial_k(\sqrt{g}\,a^k)$$

The curl is given by:

$$\mathrm{rot}(a) = G^{-1}\cdot{*}\,d\,Ga = -\varepsilon^{pqr}\nabla_qa_p = \nabla_qa_p - \nabla_pa_q$$

The Laplacian is given by:

$$\Delta(f) = \mathrm{div}\,\mathrm{grad}(f) = *d*df = \nabla_ig^{ij}\partial_jf = g^{ij}\nabla_i\nabla_jf = \frac{1}{\sqrt{g}}\frac{\partial}{\partial x^i}\left(\sqrt{g}\,g^{ij}\frac{\partial f}{\partial x^j}\right)$$
7.10 Differential geometry
7.10.1 Space curves
We limit ourselves to $IR^3$ with a fixed orthonormal basis. A point is represented by the vector $\vec{x} = (x_1, x_2, x_3)$. A space curve is a collection of points represented by $\vec{x} = \vec{x}(t)$. The arc length of a space curve is given by:

$$s(t) = \int_{t_0}^t\sqrt{\left(\frac{dx}{d\tau}\right)^2 + \left(\frac{dy}{d\tau}\right)^2 + \left(\frac{dz}{d\tau}\right)^2}\,d\tau$$
The derivative of s with respect to t is the length of the vector $d\vec{x}/dt$:

$$\left(\frac{ds}{dt}\right)^2 = \left(\frac{d\vec{x}}{dt}, \frac{d\vec{x}}{dt}\right)$$

The osculation plane in a point P of a space curve is the limiting position of the plane through the tangent of the curve in point P and a point Q when Q approaches P along the space curve. The osculation plane is parallel with $\dot{\vec{x}}(s)$. If $\ddot{\vec{x}} \neq 0$ the osculation plane is given by:

$$\vec{y} = \vec{x} + \lambda\dot{\vec{x}} + \mu\ddot{\vec{x}} \quad \text{so} \quad \det(\vec{y} - \vec{x}, \dot{\vec{x}}, \ddot{\vec{x}}) = 0$$

In a bending point holds, if $\dddot{\vec{x}} \neq 0$:

$$\vec{y} = \vec{x} + \lambda\dot{\vec{x}} + \mu\dddot{\vec{x}}$$

The tangent has unit vector $\vec{\ell} = \dot{\vec{x}}$, the main normal unit vector is $\vec{n} = \ddot{\vec{x}}/\|\ddot{\vec{x}}\|$ and the binormal is $\vec{b} = \vec{\ell} \times \vec{n}$. So the main normal lies in the osculation plane; the binormal is perpendicular to it.
Let P be a point and Q be a nearby point of a space curve $\vec{x}(s)$. Let Δφ be the angle between the tangents in P and Q and let Δψ be the angle between the osculation planes (binormals) in P and Q. Then the curvature ρ and the torsion τ in P are defined by:

$$\rho^2 = \left(\frac{d\varphi}{ds}\right)^2 = \lim_{\Delta s\to 0}\left(\frac{\Delta\varphi}{\Delta s}\right)^2\ ,\quad \tau^2 = \left(\frac{d\psi}{ds}\right)^2$$

and $\rho > 0$. For plane curves ρ is the ordinary curvature and $\tau = 0$. The following holds:

$$\rho^2 = (\dot{\vec{\ell}}, \dot{\vec{\ell}}) = (\ddot{\vec{x}}, \ddot{\vec{x}}) \quad \text{and} \quad \tau^2 = (\dot{\vec{b}}, \dot{\vec{b}})$$
Frenet’s equations express the derivatives as linear combinations of these vectors:
˙
= ρn ,
˙
n = −ρ
+τ
b ,
˙
b = −τn
From this follows that det(
˙
x,
¨
x,
...
x ) = ρ
2
τ.
Some curves and their properties are:
Screw line τ/ρ =constant
Circle screw line τ =constant, ρ =constant
Plane curves τ = 0
Circles ρ =constant, τ = 0
Lines ρ = τ = 0
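For the circular screw line these formulas can be checked numerically (a sketch; the helix $\vec{x}(t) = (a\cos t, a\sin t, bt)$ is reparametrized by arc length $s = t\sqrt{a^2+b^2}$, derivatives are taken by central differences, and $\rho^2 = (\ddot{\vec{x}}, \ddot{\vec{x}})$, $\det(\dot{\vec{x}}, \ddot{\vec{x}}, \dddot{\vec{x}}) = \rho^2\tau$ are used; the exact values are $\rho = a/(a^2+b^2)$, $\tau = b/(a^2+b^2)$):

```python
import numpy as np

a, b = 2.0, 1.0
c = np.hypot(a, b)                      # |dx/dt|, so s = c*t is arc length

def x(s):
    t = s / c
    return np.array([a * np.cos(t), a * np.sin(t), b * t])

def deriv(f, s, k, h=1e-3):
    """k-th derivative w.r.t. arc length by central differences."""
    if k == 1:
        return (f(s + h) - f(s - h)) / (2 * h)
    if k == 2:
        return (f(s + h) - 2 * f(s) + f(s - h)) / h**2
    return (f(s + 2*h) - 2*f(s + h) + 2*f(s - h) - f(s - 2*h)) / (2 * h**3)

s = 0.7
x1, x2, x3 = (deriv(x, s, k) for k in (1, 2, 3))

rho = np.sqrt(x2 @ x2)                               # rho^2 = (x'', x'')
tau = np.linalg.det(np.column_stack([x1, x2, x3])) / rho**2

assert abs(rho - a / (a**2 + b**2)) < 1e-5           # constant curvature
assert abs(tau - b / (a**2 + b**2)) < 1e-4           # constant torsion
```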
7.10.2 Surfaces in IR^3

A surface in $IR^3$ is the collection of end points of the vectors $\vec{x} = \vec{x}(u, v)$, so $x^h = x^h(u^\alpha)$. On the surface are 2 families of curves, one with u = constant and one with v = constant.

The tangent plane in a point P at the surface has basis:

$$\vec{c}_1 = \partial_1\vec{x} \quad \text{and} \quad \vec{c}_2 = \partial_2\vec{x}$$
7.10.3 The ﬁrst fundamental tensor
Let P be a point of the surface $\vec{x} = \vec{x}(u^\alpha)$. The following two curves through P, denoted by $u^\alpha = u^\alpha(t)$ and $u^\alpha = v^\alpha(\tau)$, have as tangent vectors in P

$$\frac{d\vec{x}}{dt} = \frac{du^\alpha}{dt}\partial_\alpha\vec{x}\ ,\quad \frac{d\vec{x}}{d\tau} = \frac{dv^\beta}{d\tau}\partial_\beta\vec{x}$$

The first fundamental tensor of the surface in P is the inner product of these tangent vectors:

$$\left(\frac{d\vec{x}}{dt}, \frac{d\vec{x}}{d\tau}\right) = (\vec{c}_\alpha, \vec{c}_\beta)\frac{du^\alpha}{dt}\frac{dv^\beta}{d\tau}$$

The covariant components w.r.t. the basis $\vec{c}_\alpha = \partial_\alpha\vec{x}$ are:

$$g_{\alpha\beta} = (\vec{c}_\alpha, \vec{c}_\beta)$$
For the angle φ between the parameter curves in P: u = t, v = constant and u = constant, v = τ holds:

$$\cos(\phi) = \frac{g_{12}}{\sqrt{g_{11}g_{22}}}$$

For the arc length s of P along the curve $u^\alpha(t)$ holds:

$$ds^2 = g_{\alpha\beta}\,du^\alpha du^\beta$$

This expression is called the line element.
7.10.4 The second fundamental tensor
The 4 derivatives of the tangent vectors $\partial_\alpha\partial_\beta\vec{x} = \partial_\alpha\vec{c}_\beta$ can each be written as a linear combination of the vectors $\vec{c}_1$, $\vec{c}_2$ and $\vec{N}$, with $\vec{N}$ perpendicular to $\vec{c}_1$ and $\vec{c}_2$. This is written as:

$$\partial_\alpha\vec{c}_\beta = \Gamma_{\alpha\beta}^\gamma\vec{c}_\gamma + h_{\alpha\beta}\vec{N}$$

This leads to:

$$\Gamma_{\alpha\beta}^\gamma = (\vec{c}^\gamma, \partial_\alpha\vec{c}_\beta)\ ,\quad h_{\alpha\beta} = (\vec{N}, \partial_\alpha\vec{c}_\beta) = \frac{1}{\sqrt{\det|g|}}\det(\vec{c}_1, \vec{c}_2, \partial_\alpha\vec{c}_\beta)$$
7.10.5 Geodetic curvature
A curve on the surface $\vec{x}(u^\alpha)$ is given by: $u^\alpha = u^\alpha(s)$; then $\vec{x} = \vec{x}(u^\alpha(s))$ with s the arc length of the curve. The length of $\ddot{\vec{x}}$ is the curvature ρ of the curve in P. The projection of $\ddot{\vec{x}}$ on the surface is a vector with components

$$p^\gamma = \ddot{u}^\gamma + \Gamma_{\alpha\beta}^\gamma\dot{u}^\alpha\dot{u}^\beta$$

of which the length is called the geodetic curvature of the curve in P. This remains the same if the surface is curved and the line element remains the same. The projection of $\ddot{\vec{x}}$ on $\vec{N}$ has length

$$p = h_{\alpha\beta}\dot{u}^\alpha\dot{u}^\beta$$

and is called the normal curvature of the curve in P. The theorem of Meusnier states that different curves on the surface with the same tangent vector in P have the same normal curvature.

A geodetic line of a surface is a curve on the surface for which in each point the main normal of the curve is the same as the normal on the surface. So for a geodetic line in each point $p^\gamma = 0$, so

$$\frac{d^2u^\gamma}{ds^2} + \Gamma_{\alpha\beta}^\gamma\frac{du^\alpha}{ds}\frac{du^\beta}{ds} = 0$$
The covariant derivative ∇/dt in P of a vector field of a surface along a curve is the projection on the tangent plane in P of the normal derivative in P.

For two vector fields $\vec{v}(t)$ and $\vec{w}(t)$ along the same curve of the surface Leibniz' rule follows:

$$\frac{d(\vec{v}, \vec{w})}{dt} = \left(\vec{v}, \frac{\nabla\vec{w}}{dt}\right) + \left(\vec{w}, \frac{\nabla\vec{v}}{dt}\right)$$

Along a curve holds:

$$\frac{\nabla}{dt}(v^\alpha\vec{c}_\alpha) = \left(\frac{dv^\gamma}{dt} + \Gamma_{\alpha\beta}^\gamma\frac{du^\alpha}{dt}v^\beta\right)\vec{c}_\gamma$$
7.11 Riemannian geometry
The Riemann tensor R is deﬁned by:
R
µ
ναβ
T
ν
= ∇
α
∇
β
T
µ
−∇
β
∇
α
T
µ
This is a (1,3)-tensor with n²(n² − 1)/12 independent components not identically equal to 0. This tensor is a measure for the curvature of the considered space. If it is 0, the space is a flat manifold. It has the following symmetry properties:
R_{αβμν} = R_{μναβ} = −R_{βαμν} = −R_{αβνμ}
The following relation holds:
[∇_α, ∇_β] T^μ_ν = R^μ_{σαβ} T^σ_ν + R^σ_{ναβ} T^μ_σ
The Riemann tensor depends on the Christoffel symbols through
R^α_{βμν} = ∂_μ Γ^α_{βν} − ∂_ν Γ^α_{βμ} + Γ^α_{σμ} Γ^σ_{βν} − Γ^α_{σν} Γ^σ_{βμ}
In a space and coordinate system where the Christoffel symbols are 0 this becomes:
R^α_{βμν} = ½ g^{ασ} (∂_β∂_μ g_{σν} − ∂_β∂_ν g_{σμ} + ∂_σ∂_ν g_{βμ} − ∂_σ∂_μ g_{βν})
The Bianchi identities are: ∇_λ R_{αβμν} + ∇_ν R_{αβλμ} + ∇_μ R_{αβνλ} = 0.
The Ricci tensor is obtained by contracting the Riemann tensor: R_{αβ} ≡ R^μ_{αμβ}, and is symmetric in its indices: R_{αβ} = R_{βα}. The Einstein tensor G is defined by: G_{αβ} ≡ R_{αβ} − ½ g_{αβ} R. It has the property that ∇_β G^{αβ} = 0. The Ricci scalar is R = g^{αβ} R_{αβ}.
Chapter 8
Numerical mathematics
8.1 Errors
There will be an error in the solution if a problem has a number of parameters which are not exactly known. The dependency between errors in input data and errors in the solution can be expressed in the condition number c. If the problem is given by x = φ(a) the first-order approximation for an error δa in a is:

δx/x = (a φ′(a)/φ(a)) · δa/a

The condition number is c(a) = |a φ′(a)|/|φ(a)|; the problem is well-conditioned if c is of order 1 or smaller.
8.2 Floating point representations
The floating point representation depends on 4 natural numbers:
1. The base of the number system β,
2. The length of the mantissa t,
3. The length of the exponent q,
4. The sign s.
Then the representation of machine numbers becomes: rd(x) = s · m · β^e where the mantissa m is a number with t β-based digits for which holds 1/β ≤ |m| < 1, and e is a number with q β-based digits for which holds |e| ≤ β^q − 1. The number 0 is added to this set, for example with m = e = 0. The largest machine number is

a_max = (1 − β^{−t}) β^{β^q − 1}

and the smallest positive machine number is

a_min = β^{−β^q}

The distance between two successive machine numbers in the interval [β^{p−1}, β^p] is β^{p−t}. If x is a real number and the closest machine number is rd(x), then holds:

rd(x) = x(1 + ε) with |ε| ≤ ½β^{1−t}
x = rd(x)(1 + ε′) with |ε′| ≤ ½β^{1−t}

The number η := ½β^{1−t} is called the machine accuracy, and

|ε|, |ε′| ≤ η ,   |x − rd(x)| / |x| ≤ η
An often used 32-bit float format is: 1 bit for s, 8 for the exponent and 23 for the mantissa. The base here is 2.
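For that format (β = 2, t = 24 counting the implicit leading bit) the machine accuracy is η = ½β^{1−t} = 2⁻²⁴. A small sketch (hypothetical helper; assumes round-to-nearest arithmetic) that finds η experimentally as the first power of two that no longer changes the sum 1 + η:

```c
/* Find the machine accuracy eta = (1/2) beta^(1-t) experimentally:
   the first power of two for which 1 + eta rounds back to 1.
   For IEEE single precision this returns 2^-24 (about 5.96e-8).
   volatile forces the sum through float precision. */
float MachineAccuracy(void)
{
    volatile float eta = 1.0f, sum = 2.0f;
    while (sum != 1.0f)
    {
        eta *= 0.5f;
        sum = 1.0f + eta;
    }
    return eta;
}
```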
8.3 Systems of equations
We want to solve the matrix equation Ax = b for a non-singular A, which is equivalent to finding the inverse matrix A^{−1}. Inverting an n × n matrix via Cramer's rule requires too many multiplications f(n) with n! ≤ f(n) ≤ (e − 1)n!, so other methods are preferable.
8.3.1 Triangular matrices
Consider the equation Ux = c where U is a right-upper triangular matrix, i.e. a matrix in which U_ij = 0 for all j < i, and all U_ii ≠ 0. Then:

x_n = c_n / U_nn
x_{n−1} = (c_{n−1} − U_{n−1,n} x_n) / U_{n−1,n−1}
⋮
x_1 = (c_1 − Σ_{j=2}^{n} U_{1j} x_j) / U_11
In code:
for (k = n; k > 0; k--)
{
    S = c[k];
    for (j = k + 1; j <= n; j++)
    {
        S -= U[k][j] * x[j];
    }
    x[k] = S / U[k][k];
}
This algorithm requires n(n + 1)/2 floating point calculations.
8.3.2 Gauss elimination
Consider a general set Ax = b. This can be reduced by Gauss elimination to a triangular form by multiplying the first equation with A_i1/A_11 and then subtracting it from all others; now the first column contains all 0's except A_11. Then the 2nd equation is subtracted in such a way from the others that all elements in the second column below A_22 are 0, etc. In code:
for (k = 1; k <= n; k++)
{
    for (j = k; j <= n; j++) U[k][j] = A[k][j];
    c[k] = b[k];
    for (i = k + 1; i <= n; i++)
    {
        L = A[i][k] / U[k][k];
        for (j = k + 1; j <= n; j++)
        {
            A[i][j] -= L * U[k][j];
        }
        b[i] -= L * c[k];
    }
}
This algorithm requires n(n² − 1)/3 floating point multiplications and divisions for operations on the coefficient matrix and n(n − 1)/2 multiplications for operations on the right-hand terms, whereafter the triangular set has to be solved with n(n + 1)/2 operations.
8.3.3 Pivot strategy
Some equations have to be interchanged if the corner elements A_11, A^{(1)}_22, ... are not all ≠ 0 to allow Gauss elimination to work. In the following, A^{(n)} is the element after the nth iteration. One method is: if A^{(k−1)}_kk = 0, then search for an element A^{(k−1)}_pk with p > k that is ≠ 0 and interchange the pth and the kth equation. This strategy fails only if the set is singular and has no solution at all.
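A sketch of this search (hypothetical helper, not from the formulary; arrays are 1-indexed as in the listings above, and the element of largest magnitude is taken as pivot, the common partial-pivoting refinement that also improves numerical stability):

```c
#include <math.h>

/* Search column k (rows k..n) for a usable pivot and interchange the
   pth and the kth equation. Returns 0 if the whole column is 0,
   i.e. the set is singular. Row pointers are swapped, so the
   interchange costs O(1). */
int PivotColumn(float **A, float *b, int n, int k)
{
    int i, p = k;
    for (i = k + 1; i <= n; i++)
        if (fabs(A[i][k]) > fabs(A[p][k])) p = i;
    if (A[p][k] == 0.0f) return 0;     /* singular */
    if (p != k)                        /* interchange equations p and k */
    {
        float *row = A[p], tmp = b[p];
        A[p] = A[k]; A[k] = row;
        b[p] = b[k]; b[k] = tmp;
    }
    return 1;
}
```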
8.4 Roots of functions
8.4.1 Successive substitution
We want to solve the equation F(x) = 0, so we want to ﬁnd the root α with F(α) = 0.
Many solutions are essentially the following:
1. Rewrite the equation in the form x = f(x) so that a solution of this equation is also a solution of F(x) = 0.
Further, f(x) may not vary too much with respect to x near α.
2. Assume an initial estimate x_0 for α and obtain the series x_n with x_n = f(x_{n−1}), in the hope that lim_{n→∞} x_n = α.
Example: choose

f(x) = β − ε h(x)/g(x) = x − F(x)/G(x)

then we can expect that the sequence x_n with

x_0 = β
x_n = x_{n−1} − ε h(x_{n−1})/g(x_{n−1})

converges to α.
8.4.2 Local convergence
Let α be a solution of x = f(x) and let x_n = f(x_{n−1}) for a given x_0. Let f′(x) be continuous in a neighbourhood of α. Let f′(α) = A with |A| < 1. Then there exists a δ > 0 so that for each x_0 with |x_0 − α| ≤ δ holds:

1. lim_{n→∞} x_n = α,

2. If for a particular k holds: x_k = α, then for each n ≥ k holds that x_n = α. If x_n ≠ α for all n then holds

lim_{n→∞} (α − x_n)/(α − x_{n−1}) = A ,  lim_{n→∞} (x_n − x_{n−1})/(x_{n−1} − x_{n−2}) = A ,  lim_{n→∞} (α − x_n)/(x_n − x_{n−1}) = A/(1 − A)
The quantity A is called the asymptotic convergence factor, the quantity B = −log₁₀ |A| is called the asymptotic convergence speed.
8.4.3 Aitken extrapolation
We define

A_n = (x_n − x_{n−1})/(x_{n−1} − x_{n−2})

which converges to f′(α) for n → ∞. Then the sequence

α_n = x_n + A_n/(1 − A_n) (x_n − x_{n−1})

will converge to α.
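One Aitken step can be sketched as follows (hypothetical helper): given three successive iterates it estimates A_n and returns the accelerated value α_n. For a purely geometric error x_n = α + c·rⁿ the step is exact:

```c
/* One step of Aitken extrapolation. Arguments are three successive
   iterates x_{n-2}, x_{n-1}, x_n. Estimates
   A_n = (x_n - x_{n-1})/(x_{n-1} - x_{n-2}) and returns
   alpha_n = x_n + A_n/(1 - A_n) * (x_n - x_{n-1}). */
double AitkenStep(double xnm2, double xnm1, double xn)
{
    double A = (xn - xnm1) / (xnm1 - xnm2);
    return xn + A / (1.0 - A) * (xn - xnm1);
}
```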
8.4.4 Newton iteration
There are more ways to transform F(x) = 0 into x = f(x). One essential condition for them all is that in a neighbourhood of a root α holds that |f′(x)| < 1, and the smaller |f′(x)|, the faster the series converges. A general method to construct f(x) is:

f(x) = x − Φ(x)F(x)

with Φ(x) ≠ 0 in a neighbourhood of α. If one chooses:

Φ(x) = 1/F′(x)

then this becomes Newton's method. The iteration formula then becomes:

x_n = x_{n−1} − F(x_{n−1})/F′(x_{n−1})
Some remarks:
• This same result can also be derived with Taylor series.
• Local convergence is often difficult to determine.
• If x_n is far from α the convergence can sometimes be very slow.
• The assumption F′(α) ≠ 0 means that α is a simple root.
For F(x) = x^k − a the series becomes:

x_n = (1/k) ((k − 1)x_{n−1} + a/x_{n−1}^{k−1})

This is a well-known way to compute roots.
The following code finds the root of a function by means of Newton's method. The root must lie within the interval [x1, x2]. The value is adapted until the accuracy is better than ±eps. The function funcd is a routine that returns both the function value and its first derivative at point x in the passed pointers.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

float SolveNewton(void (*funcd)(float, float*, float*), float x1, float x2, float eps)
{
    int j, max_iter = 25;
    float df, dx, f, root;

    root = 0.5 * (x1 + x2);
    for (j = 1; j <= max_iter; j++)
    {
        (*funcd)(root, &f, &df);
        dx = f/df;
        root -= dx;
        if ( (x1 - root)*(root - x2) < 0.0 )
        {
            perror("Jumped out of brackets in SolveNewton.");
            exit(1);
        }
        if ( fabs(dx) < eps ) return root; /* Convergence */
    }
    perror("Maximum number of iterations exceeded in SolveNewton.");
    exit(1);
    return 0.0;
}
8.4.5 The secant method
This is, in contrast to the two methods discussed previously, a two-step method. If two approximations x_n and x_{n−1} exist for a root, then one can find the next approximation with

x_{n+1} = x_n − F(x_n) (x_n − x_{n−1})/(F(x_n) − F(x_{n−1}))

If F(x_n) and F(x_{n−1}) have a different sign one is interpolating, otherwise extrapolating.
8.5 Polynomial interpolation
A base for polynomials of order n is given by Lagrange’s interpolation polynomials:
L_j(x) = ∏_{l=0, l≠j}^{n} (x − x_l)/(x_j − x_l)

The following holds:

1. Each L_j(x) has order n,

2. L_j(x_i) = δ_ij for i, j = 0, 1, ..., n,

3. Each polynomial p(x) can be written uniquely as

p(x) = Σ_{j=0}^{n} c_j L_j(x) with c_j = p(x_j)
This is not a suitable method to calculate the value of a polynomial at a given point x = a. For this the Horner algorithm is more usable: the value s = Σ_k c_k x^k at x = a can be calculated as follows:
float GetPolyValue(float c[], int n, float a)
{
    int i; float s = c[n];
    for (i = n - 1; i >= 0; i--)
    {
        s = s * a + c[i];
    }
    return s;
}

After it is finished s has the value p(a).
8.6 Deﬁnite integrals
Almost all numerical methods are based on a formula of the type:

∫_a^b f(x)dx = Σ_{i=0}^{n} c_i f(x_i) + R(f)

with n, c_i and x_i independent of f(x) and R(f) the error which has the form R(f) = C f^{(q)}(ξ) for all common methods. Here, ξ ∈ (a, b) and q ≥ n + 1. Often the points x_i are chosen equidistant. Some common formulas are:
• The trapezoid rule: n = 1, x_0 = a, x_1 = b, h = b − a:

∫_a^b f(x)dx = (h/2)[f(x_0) + f(x_1)] − (h³/12) f″(ξ)
• Simpson's rule: n = 2, x_0 = a, x_1 = ½(a + b), x_2 = b, h = ½(b − a):

∫_a^b f(x)dx = (h/3)[f(x_0) + 4f(x_1) + f(x_2)] − (h⁵/90) f^{(4)}(ξ)
• The midpoint rule: n = 0, x_0 = ½(a + b), h = b − a:

∫_a^b f(x)dx = h f(x_0) + (h³/24) f″(ξ)
If f varies much within the interval, it will usually be split up and the integration formulas applied to the partial intervals.
A Gaussian integration formula is obtained when one wants to get both the coefficients c_j and the points x_j in an integral formula so that the integral formula gives exact results for polynomials of an order as high as possible. Two examples are:
1. Gaussian formula with 2 points:

∫_{−h}^{h} f(x)dx = h [f(−h/√3) + f(h/√3)] + (h⁵/135) f^{(4)}(ξ)
2. Gaussian formula with 3 points:

∫_{−h}^{h} f(x)dx = (h/9) [5f(−h√(3/5)) + 8f(0) + 5f(h√(3/5))] + (h⁷/15750) f^{(6)}(ξ)
8.7 Derivatives
There are several formulas for the numerical calculation of f′(x):
• Forward differentiation:

f′(x) = [f(x + h) − f(x)]/h − ½h f″(ξ)

• Backward differentiation:

f′(x) = [f(x) − f(x − h)]/h + ½h f″(ξ)

• Central differentiation:

f′(x) = [f(x + h) − f(x − h)]/(2h) − (h²/6) f‴(ξ)

• The approximation is better if more function values are used:

f′(x) = [−f(x + 2h) + 8f(x + h) − 8f(x − h) + f(x − 2h)]/(12h) + (h⁴/30) f^{(5)}(ξ)

There are also formulas for higher derivatives:

f″(x) = [−f(x + 2h) + 16f(x + h) − 30f(x) + 16f(x − h) − f(x − 2h)]/(12h²) + (h⁴/90) f^{(6)}(ξ)
8.8 Differential equations
We start with the first order DE y′(x) = f(x, y) for x > x_0 and initial condition y(x_0) = y_0. Suppose we find approximations z_1, z_2, ..., z_n for y(x_1), y(x_2), ..., y(x_n). Then we can derive some formulas to obtain z_{n+1} as approximation for y(x_{n+1}):
• Euler (single step, explicit):

z_{n+1} = z_n + h f(x_n, z_n) + (h²/2) y″(ξ)

• Midpoint rule (two steps, explicit):

z_{n+1} = z_{n−1} + 2h f(x_n, z_n) + (h³/3) y‴(ξ)

• Trapezoid rule (single step, implicit):

z_{n+1} = z_n + ½h (f(x_n, z_n) + f(x_{n+1}, z_{n+1})) − (h³/12) y‴(ξ)
Runge-Kutta methods are an important class of single-step methods. They work so well because the solution y(x) can be written as:

y_{n+1} = y_n + h f(ξ_n, y(ξ_n)) with ξ_n ∈ (x_n, x_{n+1})

Because ξ_n is unknown some "measurements" are done on the increment function k = hf(x, y) in well chosen points near the solution. Then one takes for z_{n+1} − z_n a weighted average of the measured values. One of the possible 3rd order Runge-Kutta methods is given by:
k_1 = h f(x_n, z_n)
k_2 = h f(x_n + ½h, z_n + ½k_1)
k_3 = h f(x_n + ¾h, z_n + ¾k_2)
z_{n+1} = z_n + (1/9)(2k_1 + 3k_2 + 4k_3)
and the classical 4th order method is:

k_1 = h f(x_n, z_n)
k_2 = h f(x_n + ½h, z_n + ½k_1)
k_3 = h f(x_n + ½h, z_n + ½k_2)
k_4 = h f(x_n + h, z_n + k_3)
z_{n+1} = z_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)
Often the accuracy is increased by adjusting the step size for each step with the estimated error. Step doubling is most often used for 4th order Runge-Kutta.
8.9 The fast Fourier transform
The Fourier transform of a function can be approximated when some discrete points are known. Suppose we have N successive samples h_k = h(t_k) with t_k = k∆, k = 0, 1, 2, ..., N − 1. Then the discrete Fourier transform is given by:

H_n = Σ_{k=0}^{N−1} h_k e^{2πikn/N}

and the inverse Fourier transform by

h_k = (1/N) Σ_{n=0}^{N−1} H_n e^{−2πikn/N}

This operation is of order N². It can be faster, of order N log₂(N), with the fast Fourier transform. The basic idea is that a Fourier transform of length N can be rewritten as the sum of two discrete Fourier transforms, each of length N/2. One is formed from the even-numbered points of the original N, the other from the odd-numbered points.

This can be implemented as follows. The array data[1..2*nn] contains on the odd positions the real and on the even positions the imaginary parts of the input data: data[1] is the real part and data[2] the imaginary part of f_0, etc. The next routine replaces the values in data by their discrete Fourier transformed values if isign = 1, and by their inverse transformed values if isign = −1. nn must be a power of 2.
#include <math.h>
#define SWAP(a,b) tempr=(a);(a)=(b);(b)=tempr

void FourierTransform(float data[], unsigned long nn, int isign)
{
    unsigned long n, mmax, m, j, istep, i;
    double wtemp, wr, wpr, wpi, wi, theta;
    float tempr, tempi;

    n = nn << 1;
    j = 1;
    for (i = 1; i < n; i += 2)   /* Bit-reversal reordering */
    {
        if ( j > i )
        {
            SWAP(data[j], data[i]);
            SWAP(data[j+1], data[i+1]);
        }
        m = n >> 1;
        while ( m >= 2 && j > m )
        {
            j -= m;
            m >>= 1;
        }
        j += m;
    }
    mmax = 2;
    while ( n > mmax ) /* Outermost loop, is executed log2(nn) times */
    {
        istep = mmax << 1;
        theta = isign * (6.28318530717959/mmax);
        wtemp = sin(0.5 * theta);
        wpr = -2.0 * wtemp * wtemp;
        wpi = sin(theta);
        wr = 1.0;
        wi = 0.0;
        for (m = 1; m < mmax; m += 2)
        {
            for (i = m; i <= n; i += istep) /* Danielson-Lanczos equation */
            {
                j = i + mmax;
                tempr = wr * data[j] - wi * data[j+1];
                tempi = wr * data[j+1] + wi * data[j];
                data[j] = data[i] - tempr;
                data[j+1] = data[i+1] - tempi;
                data[i] += tempr;
                data[i+1] += tempi;
            }
            wr = (wtemp = wr) * wpr - wi * wpi + wr;
            wi = wi * wpr + wtemp * wpi + wi;
        }
        mmax = istep;
    }
}
sin(arccos(x)) = 1 − x2 ⇒ ⇒ . .2 Hyperbolic functions ex − e−x . 2 sinh(x) cosh(x) The hyperbolic functions are deﬁned by: sinh(x) = cosh(x) = tanh(x) = From this follows that cosh2 (x) − sinh2 (x) = 1. tan(φ) = sin2 (x) + cos2 (x) = 1 and cos−2 (x) = 1 + tan2 (x). cos(a ± b) = cos(a) cos(b) sin(a) sin(b) . arcosh(x) = arsinh( x2 − 1) 1 . sin(φ) = yp . 2 sin2 (x) = 1 − cos(2x) cos(π − x) = − cos(x) cos( 1 π − x) = sin(x) 2 tan(a) ± tan(b) 1 tan(a) tan(b) yp xp For the goniometric ratios for a point p on the unit circle holds: 1. sin(a ± b) = sin(a) cos(b) ± cos(a) sin(b) tan(a ± b) = The sum formulas are: sin(p) + sin(q) = 2 sin( 1 (p + q)) cos( 1 (p − q)) 2 2 sin(p) − sin(q) = 2 cos( 1 (p + q)) sin( 1 (p − q)) 2 2 1 cos(p) + cos(q) = 2 cos( 2 (p + q)) cos( 1 (p − q)) 2 cos(p) − cos(q) = −2 sin( 1 (p + q)) sin( 1 (p − q)) 2 2 From these equations can be derived that 2 cos2 (x) = 1 + cos(2x) sin(π − x) = sin(x) 1 sin( 2 π − x) = cos(x) Conclusions from equalities: x = a ± 2kπ or x = (π − a) ± 2kπ. . 2 ex + e−x .1 Goniometric functions cos(φ) = xp . Further holds: arsinh(x) = ln x + x2 + 1 . k ∈ I N x = a ± 2kπ or x = −a ± 2kπ π tan(x) = tan(a) ⇒ x = a ± kπ and x = ± kπ 2 The following relations exist between the inverse goniometric functions: sin(x) = sin(a) cos(x) = cos(a) arctan(x) = arcsin √ x x2 +1 = arccos √ 1 x2 +1 .Chapter 1 Basics 1.
holds at point P = (x.3 Calculus df f (x + h) − f (x) = lim dx h→0 h The derivative of a function is deﬁned as: Derivatives obey the following algebraic rules: d(x ± y) = dx ± dy .2 Mathematics Formulary by ir. d x y = ydx − xdy y2 For the derivative of the inverse function f inv (y). deﬁned by f inv (f (x)) = x. f (x)): df inv (y) dy Chain rule: if f = f (g(x)). J. then holds df dg df = dx dg dx Further. then is lim o f (x) f (x) = lim g(x) x→a g (x) . d(xy) = xdy + ydx .C. An overview of derivatives and primitives is: y = f (x) axn 1/x a ax ex a log(x) ln(x) sin(x) cos(x) tan(x) sin−1 (x) sinh(x) cosh(x) arcsin(x) arccos(x) arctan(x) (a + x2 )−1/2 (a2 − x2 )−1 dy/dx = f (x) anxn−1 −x−2 0 ax ln(a) ex (x ln(a))−1 1/x cos(x) − sin(x) cos−2 (x) − sin−2 (x) cos(x) cosh(x) sinh(x) √ 1/ √ − x2 1 −1/ 1 − x2 (1 + x2 )−1 −x(a + x2 )−3/2 2x(a2 + x2 )−2 (1 + (y )2 )3/2 y  x→a f (x)dx a(n + 1)−1 xn+1 ln x ax ax / ln(a) ex (x ln(x) − x)/ ln(a) x ln(x) − x − cos(x) sin(x) − ln  cos(x) ln  tan( 1 x) 2 cosh(x) sinh(x) √ x arcsin(x) + √1 − x2 x arccos(x) − 1 − x2 x arctan(x) − 1 ln(1 + x2 ) 2 √ ln x + a + x2  1 ln (a + x)/(a − x) 2a The curvature ρ of a curve is given by: ρ = The theorem of De ’l Hˆ pital: if f (a) = 0 and g(a) = 0. Wevers 1. for the derivatives of products of functions holds: n · P df (x) dx =1 P (f · g) (n) = k=0 n (n−k) (k) f ·g k For the primitive function F (x) holds: F (x) = f (x).A.
1.4 Limits

  lim_{x→0} sin(x)/x = 1 ,  lim_{x→0} tan(x)/x = 1 ,  lim_{x→0} (eˣ − 1)/x = 1 ,  lim_{x→0} arcsin(x)/x = 1
  lim_{k→0} (1 + k)^(1/k) = e ,  lim_{x→∞} (1 + n/x)ˣ = eⁿ
  lim_{x↓0} xᵃ ln(x) = 0 ,  lim_{x→∞} lnᵖ(x)/x = 0 ,  lim_{x→∞} xᵖ/aˣ = 0 if a > 1
  lim_{x→∞} x^(1/x) = 1 ,  lim_{x→0} (aˣ − 1)/x = ln(a)

1.5 Complex numbers and quaternions

1.5.1 Complex numbers

The complex number z = a + bi with a and b ∈ ℝ; a is the real part, b the imaginary part of z, and |z| = √(a² + b²). By definition holds: i² = −1. The complex conjugate of z is defined as z̄ = z* := a − bi. Further holds:

  (a + bi)(c + di) = (ac − bd) + i(ad + bc)
  (a + bi) + (c + di) = a + c + i(b + d)
  (a + bi)/(c + di) = [ (ac + bd) + i(bc − ad) ] / (c² + d²)

Goniometric functions can be written as complex exponents:

  sin(x) = (e^(ix) − e^(−ix)) / 2i ,  cos(x) = (e^(ix) + e^(−ix)) / 2

From this follows that cos(ix) = cosh(x) and sin(ix) = i sinh(x). Further follows from this that e^(±ix) = cos(x) ± i sin(x), so e^(iz) ≠ 0 ∀z. Also the theorem of De Moivre follows from this:

  (cos(φ) + i sin(φ))ⁿ = cos(nφ) + i sin(nφ)

Every complex number can be written as z = |z| exp(iφ), with tan(φ) = b/a. Products and quotients of complex numbers can be written as:

  z₁ · z₂ = |z₁| · |z₂| ( cos(φ₁ + φ₂) + i sin(φ₁ + φ₂) )
  z₁ / z₂ = (|z₁| / |z₂|) ( cos(φ₁ − φ₂) + i sin(φ₁ − φ₂) )

The following can be derived:

  |z₁ + z₂| ≤ |z₁| + |z₂| ,  |z₁ − z₂| ≥ | |z₁| − |z₂| |

And from z = r exp(iθ) follows: ln(z) = ln(r) + iθ, ln(z) = ln(z) ± 2nπi.

1.5.2 Quaternions

Quaternions are defined as: z = a + bi + cj + dk, with a, b, c, d ∈ ℝ. By definition holds: i² = j² = k² = −1. The products of i, j, k with each other are given by ij = −ji = k, jk = −kj = i and ki = −ik = j.
1.6 Geometry

1.6.1 Triangles

The sine rule is:

  a/sin(α) = b/sin(β) = c/sin(γ)

Here, α is the angle opposite to a, β is opposite to b and γ opposite to c. The cosine rule is: a² = b² + c² − 2bc cos(α). For each triangle holds: α + β + γ = 180°. Further holds:

  (a + b)/(a − b) = tan(½(α + β)) / tan(½(α − β))

The surface of a triangle is given by ½ab sin(γ) = ½a·h_a = √(s(s − a)(s − b)(s − c)) with h_a the perpendicular on a and s = ½(a + b + c).

1.6.2 Curves

Cycloid: if a circle with radius a rolls along a straight line, the trajectory of a point on this circle has the following parameter equation:

  x = a(t + sin(t)) ,  y = a(1 + cos(t))

Epicycloid: if a small circle with radius a rolls along a big circle with radius R, the trajectory of a point on the small circle has the following parameter equation:

  x = a sin( (R + a)t/a ) + (R + a) sin(t) ,  y = a cos( (R + a)t/a ) + (R + a) cos(t)

Hypocycloid: if a small circle with radius a rolls inside a big circle with radius R, the trajectory of a point on the small circle has the following parameter equation:

  x = a sin( (R − a)t/a ) + (R − a) sin(t) ,  y = −a cos( (R − a)t/a ) + (R − a) cos(t)

An epicycloid with a = R is called a cardioid. It has the following parameter equation in polar coordinates: r = 2a[1 − cos(φ)].

1.7 Vectors

The inner product is defined by:

  a · b = Σᵢ aᵢbᵢ = |a| · |b| cos(φ)

where φ is the angle between a and b. The external product is in ℝ³ defined by:

  a × b = ( a_y b_z − a_z b_y ,  a_z b_x − a_x b_z ,  a_x b_y − a_y b_x ) = det | e_x e_y e_z ; a_x a_y a_z ; b_x b_y b_z |

Further holds: |a × b| = |a| · |b| sin(φ), and a × (b × c) = (a · c)b − (a · b)c.
1.8 Series

1.8.1 Expansion

The Binomium of Newton is:

  (a + b)ⁿ = Σ_{k=0}^{n} C(n,k) a^(n−k) bᵏ  where  C(n,k) := n! / (k!(n − k)!)

By subtracting the series Σ_{k=0}^{n} rᵏ and r·Σ_{k=0}^{n} rᵏ one finds:

  Σ_{k=0}^{n} rᵏ = (1 − r^(n+1)) / (1 − r)

and for |r| < 1 this gives the geometric series:

  Σ_{k=0}^{∞} rᵏ = 1/(1 − r)

The arithmetic series is given by:

  Σ_{n=0}^{N} (a + nV) = a(N + 1) + ½N(N + 1)V

The expansion of a function around the point a is given by the Taylor series:

  f(x) = f(a) + (x − a)f'(a) + (x − a)² f''(a)/2 + ··· + (x − a)ⁿ f⁽ⁿ⁾(a)/n! + R

where the remainder is given by:

  R_n(h) = (1 − θ)ⁿ h^(n+1) f^(n+1)(θh) / n!

and is subject to:

  m h^(n+1)/(n + 1)! ≤ R_n(h) ≤ M h^(n+1)/(n + 1)!

From this one can deduce that

  (1 + x)^α = Σ_{n=0}^{∞} C(α,n) xⁿ

One can derive that (all sums over n from 1 to ∞ unless stated otherwise):

  Σ 1/n² = π²/6 ,  Σ 1/n⁴ = π⁴/90 ,  Σ 1/n⁶ = π⁶/945
  Σ (−1)^(n+1)/n² = π²/12 ,  Σ (−1)^(n+1)/n = ln(2)
  Σ 1/(2n − 1)² = π²/8 ,  Σ 1/(2n − 1)⁴ = π⁴/96 ,  Σ (−1)^(n+1)/(2n − 1)³ = π³/32
  Σ 1/(4n² − 1) = ½ ,  Σ_{k=1}^{n} k² = (1/6) n(n + 1)(2n + 1)

1.8.2 Convergence and divergence of series

If Σ|uₙ| converges, Σuₙ also converges. If lim_{n→∞} uₙ ≠ 0 then Σuₙ is divergent. An alternating series of which the absolute values of the terms drop monotonously to 0 is convergent (Leibniz).
J.c. Than holds: {fn } is uniform convergent if x→∞ lim fn − f = 0. if n un = p. This is written as: f (a− ) = f (a+ ). than un is also convergent. un continuous f is continuous fn can be integrated. Wevers If ∞ p f (x)dx < ∞. than f (x) is differentiable in x = a. {fn } uniform convergent S(x) uniform convergent. or: ∀(ε > 0)∃(N )∀(n ≥ N ) fn − f < ε.and lower limit are equal: lim f (x) = lim f (x). y)dx . and lim fn (x) = f (x). un (x) and F (y) = a f (x. y)dx := F . n If un > 0 ∀n then is n un convergent if If un = cn xn the radius of convergence ρ of n ∞ un is given by: 1 = lim ρ n→∞ n cn  = lim n→∞ cn+1 . {fn } uniform convergent S(x) is uniform convergent. then n fn is convergent. cn The series If: lim 1 is convergent if p > 1 and divergent if p ≤ 1. y)dxdy D series integral ∂f /∂y continuous fy (x.8. np n=1 vn are both divergent or both convergent. n→∞ nn . If f (x) is continuous in a and: lim f (x) = lim f (x). {fn } unif. Than holds on W f is continuous S is continuous F is continuous fn can be integrated. un u. than the following is true: if p > 0 than un and vn n p = 0 holds: if vn is convergent.6 Mathematics Formulary by ir.A.conv → φ un ∈C−1 .C. un can be integrated f is continuous {fn } ∈C−1 . Than it can be proved that: Theorem For rows series integral rows C Demands on W fn continuous. f (x)dx = lim fn dx n→∞ I series integral rows S can be integrated. x↑a x↓a We deﬁne: f n→∞ W := sup(f (x) x ∈ W ). than b un is uniform convergent. n→∞ n n n If L is deﬁned by: L = lim L < 1. F dy = f = φ(x) S (x) = Fy = un (x) Sdx = un dx f (x. then is un un divergent if L > 1 and convergent if n 1. un ∞ W Weierstrass’ test: if We deﬁne S(x) = n=N is convergent.3 Convergence and divergence of functions x↑a x↓a f (x) is continuous in x = a only if the upper . ln(un + 1) is convergent. un conv. or by: L = lim n→∞ un+1 .
1.9 Products and quotients

For a, b, c, d ∈ ℝ holds:

The distributive property: (a + b)(c + d) = ac + ad + bc + bd
The associative property: a(bc) = b(ac) = c(ab) and a(b + c) = ab + ac
The commutative property: a + b = b + a, ab = ba.

Further holds:

  (a + b)(a − b) = a² − b²

  (a²ⁿ − b²ⁿ)/(a ± b) = a^(2n−1) ∓ a^(2n−2)b + a^(2n−3)b² ∓ ··· ∓ b^(2n−1)

  (a^(2n+1) − b^(2n+1))/(a − b) = Σ_{k=0}^{2n} a^(2n−k) bᵏ

  (a ± b)(a² ∓ ab + b²) = a³ ± b³

1.10 Logarithms

Definition: ᵃlog(x) = b ⇔ aᵇ = x. For logarithms with base e one writes ln(x).
Rules: log(xⁿ) = n log(x), log(a) + log(b) = log(ab), log(a) − log(b) = log(a/b).

1.11 Polynomials

Equations of the type

  Σ_{k=0}^{n} aₖxᵏ = 0

have n roots which may be equal to each other. Each polynomial p(z) of order n ≥ 1 has at least one root in ℂ. If all aₖ ∈ ℝ holds: when x = p with p ∈ ℂ a root, then p* is also a root. Polynomials up to and including order 4 have a general analytical solution; for polynomials with order ≥ 5 there does not exist a general analytical solution.

For a, b, c ∈ ℝ and a ≠ 0 holds: the 2nd order equation ax² + bx + c = 0 has the general solution:

  x = ( −b ± √(b² − 4ac) ) / 2a

For a, b, c, d ∈ ℝ and a ≠ 0 holds: the 3rd order equation ax³ + bx² + cx + d = 0 has the general analytical solution:

  x₁ = K − (3ac − b²)/(9a²K) − b/3a

  x₂ = x₃* = −K/2 + (3ac − b²)/(18a²K) − b/3a + i(√3/2)( K + (3ac − b²)/(9a²K) )

with

  K = [ (9abc − 27da² − 2b³)/(54a³) + √3 · √(4ac³ − c²b² − 18abcd + 27a²d² + 4db³)/(18a²) ]^(1/3)

1.12 Primes

A prime is a number ∈ ℕ that can only be divided by itself and 1. There are an infinite number of primes. Proof: suppose that the collection of primes P would be finite, then construct the number q = 1 + Π_{p∈P} p. Then holds q = 1(p) for each p ∈ P, and so q cannot be written as a product of primes from P. This is a contradiction.
If π(x) is the number of primes ≤ x, then holds:

  lim_{x→∞} π(x)/(x/ln(x)) = 1  and  lim_{x→∞} π(x) / ∫₂ˣ dt/ln(t) = 1

For each N ≥ 2 there is a prime between N and 2N.

To check if a given number n is prime one can use a sieve method. The first known sieve method was developed by Eratosthenes. A faster method for large numbers are the 4 Fermat tests, which don't prove that a number is prime but give a large probability. Take the first 4 primes: b = {2, 3, 5, 7}, and take w(b) = b^(n−1) mod n, for each b. If w = 1 for each b, then n is probably prime. For each other value of w, n is certainly not prime.

The numbers Mₖ := 2ᵏ − 1 are called Mersenne numbers. They occur when one searches for perfect numbers, which are numbers n ∈ ℕ which are the sum of their different dividers, for example 6 = 1 + 2 + 3. There are 23 Mersenne numbers for k < 12000 which are prime: for k ∈ {2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213}.

The numbers Fₖ := 2^(2^k) + 1 with k ∈ ℕ are called Fermat numbers. The first few Fermat numbers are prime.
Chapter 2: Probability and statistics

2.1 Combinations

The number of permutations of p from n is given by

  n! / (n − p)!

The number of possible combinations of k elements from n elements is given by

  C(n,k) = n! / (k!(n − k)!)

The number of different ways to classify nᵢ elements in i groups, when the total number of elements is N, is

  N! / Πᵢ nᵢ!

2.2 Probability theory

The probability P(A) that an event A occurs is defined by:

  P(A) = n(A)/n(U)

where n(A) is the number of events when A occurs and n(U) the total number of events.

The probability P(¬A) that A does not occur is: P(¬A) = 1 − P(A). The probability P(A ∪ B) that A or B (or both) occur is given by: P(A ∪ B) = P(A) + P(B) − P(A ∩ B). If A and B are independent, then holds: P(A ∩ B) = P(A) · P(B). The probability P(A|B) that A occurs, given the fact that B occurs, is:

  P(A|B) = P(A ∩ B) / P(B)

2.3 Statistics

2.3.1 General

The average or mean value ⟨x⟩ of a collection of values is: ⟨x⟩ = Σᵢ xᵢ/n. The standard deviation σₓ in the distribution of x is given by:

  σₓ = √( Σᵢ (xᵢ − ⟨x⟩)² / n )

When samples are being used the sample variance s is given by s² = nσ²/(n − 1).
The covariance σ_xy of x and y is given by:

  σ_xy = Σᵢ (xᵢ − ⟨x⟩)(yᵢ − ⟨y⟩) / (n − 1)

The correlation coefficient r_xy of x and y then becomes: r_xy = σ_xy/(σₓσ_y).

The standard deviation in a variable f(x, y) resulting from errors in x and y is:

  σ²_f(x,y) = ( (∂f/∂x) σₓ )² + ( (∂f/∂y) σ_y )² + 2 (∂f/∂x)(∂f/∂y) σ_xy

2.3.2 Distributions

1. The Binomial distribution is the distribution describing a sampling with replacement. The probability for success is p. The probability P for k successes in n trials is then given by:

  P(x = k) = C(n,k) pᵏ (1 − p)^(n−k)

The standard deviation is given by σₓ = √(np(1 − p)) and the expectation value is ε = np.

2. The Hypergeometric distribution is the distribution describing a sampling without replacement in which the order is irrelevant. The probability for k successes in a trial with A possible successes and B possible failures is then given by:

  P(x = k) = C(A,k) · C(B,n−k) / C(A+B,n)

The expectation value is given by ε = nA/(A + B).

3. The Poisson distribution is a limiting case of the binomial distribution when p → 0, n → ∞ and np = λ is constant.

  P(x) = λˣ e^(−λ) / x!

This distribution is normalized to Σ_{x=0}^{∞} P(x) = 1.

4. The Normal distribution is a limiting case of the binomial distribution for continuous variables:

  P(x) = ( 1/(σ√(2π)) ) exp( −½ ((x − ⟨x⟩)/σ)² )

5. The Uniform distribution occurs when a random number x is taken from the set a ≤ x ≤ b and is given by:

  P(x) = 1/(b − a) if a ≤ x ≤ b ,  P(x) = 0 in all other cases

with ⟨x⟩ = ½(a + b) and σ² = (b − a)²/12.
6. The Gamma distribution is given by:

  P(x) = x^(α−1) e^(−x/β) / ( βᵅ Γ(α) )  if 0 ≤ x < ∞

with α > 0 and β > 0. The distribution has the following properties: ⟨x⟩ = αβ, σ² = αβ².

7. The Beta distribution is given by:

  P(x) = x^(α−1) (1 − x)^(β−1) / β(α, β)  if 0 ≤ x ≤ 1 ,  P(x) = 0 everywhere else

and has the following properties:

  ⟨x⟩ = α/(α + β) ,  σ² = αβ / ( (α + β)²(α + β + 1) )

For P(χ²) holds: α = V/2 and β = 2.

8. The Weibull distribution is given by:

  P(x) = (α/β) x^(α−1) e^(−xᵅ/β)  if 0 ≤ x < ∞ ∧ α ∧ β > 0 ,  P(x) = 0 in all other cases

The average is ⟨x⟩ = β^(1/α) Γ( (α + 1)/α ).

9. For a two-dimensional distribution holds:

  P₁(x₁) = ∫ P(x₁, x₂)dx₂ ,  P₂(x₂) = ∫ P(x₁, x₂)dx₁

with

  ε(g(x₁, x₂)) = ∬ g(x₁, x₂) P(x₁, x₂) dx₁dx₂ = Σ_{x₁} Σ_{x₂} g·P

2.4 Regression analyses

When there exists a relation between the quantities x and y of the form y = ax + b and there is a measured set xᵢ with related yᵢ, the following relation holds for a and b with x⃗ = (x₁, x₂, ..., xₙ) and e⃗ = (1, 1, ..., 1):

  y⃗ − ax⃗ − be⃗ ∈ ⟨x⃗, e⃗⟩⊥

From this follows that the inner products are 0:

  (y⃗, x⃗) − a(x⃗, x⃗) − b(e⃗, x⃗) = 0
  (y⃗, e⃗) − a(x⃗, e⃗) − b(e⃗, e⃗) = 0

with (x⃗, x⃗) = Σ xᵢ², (x⃗, y⃗) = Σ xᵢyᵢ, (x⃗, e⃗) = Σ xᵢ and (e⃗, e⃗) = n. a and b follow from this.

A similar method works for higher order polynomial fits: for a second order fit holds:

  y⃗ − ax⃗² − bx⃗ − ce⃗ ∈ ⟨x⃗², x⃗, e⃗⟩⊥

with x⃗² = (x₁², ..., xₙ²).

The correlation coefficient r is a measure for the quality of a fit. In case of linear regression it is given by:

  r = ( nΣxy − ΣxΣy ) / √( (nΣx² − (Σx)²)(nΣy² − (Σy)²) )
Chapter 3: Calculus

3.1 Integrals

3.1.1 Arithmetic rules

The primitive function F(x) of f(x) obeys the rule F'(x) = f(x). With F(x) the primitive of f(x) holds for the definite integral

  ∫_a^b f(x)dx = F(b) − F(a)

If u = f(x) holds:

  ∫_a^b g(f(x)) df(x) = ∫_{f(a)}^{f(b)} g(u)du

Partial integration: with F and G the primitives of f and g holds:

  ∫ f(x)·g(x)dx = f(x)G(x) − ∫ G(x) (df(x)/dx) dx

A derivative can be brought under the integral sign (see section 1.8.3 for the required conditions):

  d/dy [ ∫_{x=g(y)}^{x=h(y)} f(x, y)dx ] = ∫_{x=g(y)}^{x=h(y)} ∂f(x, y)/∂y dx − f(g(y), y) dg(y)/dy + f(h(y), y) dh(y)/dy

3.1.2 Arc lengths, surfaces and volumes

The arc length ℓ of a curve y(x) is given by:

  ℓ = ∫ √( 1 + (dy(x)/dx)² ) dx

The arc length of a parameter curve F(x⃗(t)) is:

  ℓ = ∫ F ds = ∫ F(x⃗(t)) |ẋ⃗(t)| dt

with

  t⃗ = ẋ⃗/|ẋ⃗| ,  |t⃗| = 1 ,  ∫ (v⃗, t⃗)ds = ∫ (v⃗, ẋ⃗(t))dt = ∫ (v₁dx + v₂dy + v₃dz)

The surface A of a solid of revolution is:

  A = 2π ∫ y √( 1 + (dy(x)/dx)² ) dx

The volume V of a solid of revolution is:

  V = π ∫ f²(x) dx
3.1.3 Separation of quotients

Every rational function P(x)/Q(x) where P and Q are polynomials can be written as a linear combination of functions of the type (x − a)ᵏ with k ∈ ℤ, and of functions of the type

  (px + q) / ( (x − a)² + b² )ⁿ  with b > 0 and n ∈ ℕ

So:

  p(x)/(x − a)ⁿ = Σ_{k=1}^{n} Aₖ/(x − a)ᵏ ,  p(x)/( (x − b)² + c² )ⁿ = Σ_{k=1}^{n} (Aₖx + Bₖ)/( (x − b)² + c² )ᵏ

Recurrent relation: for n ≠ 0 holds:

  ∫ dx/(x² + 1)^(n+1) = (1/2n) · x/(x² + 1)ⁿ + ( (2n − 1)/2n ) ∫ dx/(x² + 1)ⁿ

3.1.4 Special functions

Elliptic functions

Elliptic functions can be written as a power series as follows:

  √(1 − k² sin²(x)) = 1 − Σ_{n=1}^{∞} [ (2n − 1)!! / ( (2n)!!(2n − 1) ) ] k²ⁿ sin²ⁿ(x)

  1/√(1 − k² sin²(x)) = 1 + Σ_{n=1}^{∞} [ (2n − 1)!! / (2n)!! ] k²ⁿ sin²ⁿ(x)

with n!! = n(n − 2)!!.

The Gamma function

The gamma function Γ(y) is defined by:

  Γ(y) = ∫₀^∞ e⁻ˣ x^(y−1) dx

One can derive that Γ(y + 1) = yΓ(y) = y!. This is a way to define faculties for non-integers. Further one can derive that

  Γ(n + ½) = (√π/2ⁿ) (2n − 1)!!  and  Γ⁽ⁿ⁾(y) = ∫₀^∞ e⁻ˣ x^(y−1) lnⁿ(x) dx

The Beta function

The beta function β(p, q) is defined by:

  β(p, q) = ∫₀¹ x^(p−1) (1 − x)^(q−1) dx

with p and q > 0. The beta and gamma functions are related by the following equation:

  β(p, q) = Γ(p)Γ(q) / Γ(p + q)
The Delta function

The delta function δ(x) is an infinitely thin peak function with surface 1. It can be defined by:

  δ(x) = lim_{ε→0} P(ε, x)  with  P(ε, x) = 1/2ε when |x| < ε ,  0 when |x| > ε

Some properties are:

  ∫_{−∞}^{∞} δ(x)dx = 1 ,  ∫_{−∞}^{∞} F(x)δ(x)dx = F(0)

3.1.5 Goniometric integrals

When solving goniometric integrals it can be useful to change variables. The following holds if one defines tan(½x) := t:

  sin(x) = 2t/(1 + t²) ,  cos(x) = (1 − t²)/(1 + t²) ,  dx = 2dt/(1 + t²)

Each integral of the type ∫ R(x, √(ax² + bx + c))dx can be converted into one of the types that were treated in section 3.1.3. After this conversion one can substitute in the integrals of the type:

  R(x, √(x² + 1))dx :  x = tan(φ), dx = dφ/cos²(φ)  or  √(x² + 1) = t + x
  R(x, √(1 − x²))dx :  x = sin(φ), dx = cos(φ)dφ  or  √(1 − x²) = 1 − tx
  R(x, √(x² − 1))dx :  x = 1/cos(φ), dx = sin(φ)dφ/cos²(φ)  or  √(x² − 1) = x − t

These definite integrals are easily solved:

  ∫₀^{π/2} cosⁿ(x) sinᵐ(x) dx = [ (n − 1)!!(m − 1)!! / (m + n)!! ] · π/2 when m and n are both even, ·1 in all other cases

Some important integrals are:

  ∫₀^∞ x dx/(e^(ax) + 1) = π²/12a² ,  ∫_{−∞}^{∞} x² eˣ dx/(eˣ + 1)² = π²/3 ,  ∫₀^∞ x³ dx/(eˣ − 1) = π⁴/15

3.2 Functions with more variables

3.2.1 Derivatives

The partial derivative with respect to x of a function f(x, y) is defined by:

  (∂f/∂x)_{x₀} = lim_{h→0} [ f(x₀ + h, y₀) − f(x₀, y₀) ] / h

The directional derivative in the direction of α is defined by:

  ∂f/∂α = lim_{r↓0} [ f(x₀ + r cos(α), y₀ + r sin(α)) − f(x₀, y₀) ] / r = ( ∇f, (cos α, sin α) ) = ∇f · v⃗ / |v⃗|
When one changes to coordinates f(x(u, v), y(u, v)) holds:

  ∂f/∂u = (∂f/∂x)(∂x/∂u) + (∂f/∂y)(∂y/∂u)

If x(t) and y(t) depend only on one parameter t holds:

  ∂f/∂t = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt)

The total differential df of a function of 3 variables is given by:

  df = (∂f/∂x)dx + (∂f/∂y)dy + (∂f/∂z)dz

So

  df/dx = ∂f/∂x + (∂f/∂y)(dy/dx) + (∂f/∂z)(dz/dx)

The tangent in point x⃗₀ at the curve f(x, y) = 0 is given by the equation:

  f_x(x⃗₀)(x − x₀) + f_y(x⃗₀)(y − y₀) = 0

The tangent plane in x⃗₀ is given by:

  f_x(x⃗₀)(x − x₀) + f_y(x⃗₀)(y − y₀) = z − f(x⃗₀)

3.2.2 Taylor series

A function of two variables can be expanded as follows in a Taylor series:

  f(x₀ + h, y₀ + k) = Σ_{p=0}^{n} (1/p!) ( h ∂/∂x + k ∂/∂y )ᵖ f(x₀, y₀) + R(n)

with R(n) the residual error and

  ( h ∂/∂x + k ∂/∂y )ᵖ f(a, b) = Σ_{m=0}^{p} C(p,m) hᵐ k^(p−m) ∂ᵖf(a, b) / (∂xᵐ ∂y^(p−m))

3.2.3 Extrema

When f is continuous on a compact boundary V there exists a global maximum and a global minimum for f on this boundary. A boundary is called compact if it is limited and closed.

Possible extrema of f(x, y) on a boundary V ∈ ℝ² are:

1. Points where ∇f(x, y) = 0.
2. Points where f(x, y) is not differentiable.
3. Points on the boundary V.

Point (3) is rewritten as follows: if the boundary V is given by φ(x, y) = 0, then all points where

  ∇f(x, y) + λ∇φ(x, y) = 0

are possible for extrema. This is the multiplicator method of Lagrange; λ is called a multiplicator.

The same as in ℝ² holds in ℝ³ when the area to be searched is constrained by a compact V. When V is defined by φ₁(x, y, z) = 0 and φ₂(x, y, z) = 0, points (1) and (2) apply unchanged to f(x, y, z), and point (3) is rewritten as: possible extrema are points where ∇f + λ₁∇φ₁ + λ₂∇φ₂ = 0.
3.2.4 The ∇ operator

In cartesian coordinates (x, y, z) holds:

  ∇ = (∂/∂x) e⃗_x + (∂/∂y) e⃗_y + (∂/∂z) e⃗_z

  grad f = (∂f/∂x) e⃗_x + (∂f/∂y) e⃗_y + (∂f/∂z) e⃗_z

  div a⃗ = ∂a_x/∂x + ∂a_y/∂y + ∂a_z/∂z

  curl a⃗ = ( ∂a_z/∂y − ∂a_y/∂z ) e⃗_x + ( ∂a_x/∂z − ∂a_z/∂x ) e⃗_y + ( ∂a_y/∂x − ∂a_x/∂y ) e⃗_z

  ∇²f = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z²

In cylindrical coordinates (r, φ, z) holds:

  ∇ = (∂/∂r) e⃗_r + (1/r)(∂/∂φ) e⃗_φ + (∂/∂z) e⃗_z

  grad f = (∂f/∂r) e⃗_r + (1/r)(∂f/∂φ) e⃗_φ + (∂f/∂z) e⃗_z

  div a⃗ = ∂a_r/∂r + a_r/r + (1/r)(∂a_φ/∂φ) + ∂a_z/∂z

  curl a⃗ = ( (1/r)(∂a_z/∂φ) − ∂a_φ/∂z ) e⃗_r + ( ∂a_r/∂z − ∂a_z/∂r ) e⃗_φ + ( ∂a_φ/∂r + a_φ/r − (1/r)(∂a_r/∂φ) ) e⃗_z

  ∇²f = ∂²f/∂r² + (1/r)(∂f/∂r) + (1/r²)(∂²f/∂φ²) + ∂²f/∂z²

In spherical coordinates (r, θ, φ) holds:

  ∇ = (∂/∂r) e⃗_r + (1/r)(∂/∂θ) e⃗_θ + (1/(r sin θ))(∂/∂φ) e⃗_φ

  grad f = (∂f/∂r) e⃗_r + (1/r)(∂f/∂θ) e⃗_θ + (1/(r sin θ))(∂f/∂φ) e⃗_φ

  div a⃗ = ∂a_r/∂r + 2a_r/r + (1/r)(∂a_θ/∂θ) + a_θ/(r tan θ) + (1/(r sin θ))(∂a_φ/∂φ)

  curl a⃗ = ( (1/r)(∂a_φ/∂θ) + a_φ/(r tan θ) − (1/(r sin θ))(∂a_θ/∂φ) ) e⃗_r
          + ( (1/(r sin θ))(∂a_r/∂φ) − ∂a_φ/∂r − a_φ/r ) e⃗_θ
          + ( ∂a_θ/∂r + a_θ/r − (1/r)(∂a_r/∂θ) ) e⃗_φ

  ∇²f = ∂²f/∂r² + (2/r)(∂f/∂r) + (1/r²)(∂²f/∂θ²) + (1/(r² tan θ))(∂f/∂θ) + (1/(r² sin²θ))(∂²f/∂φ²)

General orthonormal curvilinear coordinates (u, v, w) can be derived from cartesian coordinates by the transformation x⃗ = x⃗(u, v, w). The unit vectors are given by:

  e⃗_u = (1/h₁)(∂x⃗/∂u) ,  e⃗_v = (1/h₂)(∂x⃗/∂v) ,  e⃗_w = (1/h₃)(∂x⃗/∂w)

where the terms hᵢ give normalization to length 1. The differential operators are then given by:

  grad f = (1/h₁)(∂f/∂u) e⃗_u + (1/h₂)(∂f/∂v) e⃗_v + (1/h₃)(∂f/∂w) e⃗_w
  div a⃗ = ( 1/(h₁h₂h₃) ) [ ∂(h₂h₃a_u)/∂u + ∂(h₃h₁a_v)/∂v + ∂(h₁h₂a_w)/∂w ]

  curl a⃗ = ( 1/(h₂h₃) ) [ ∂(h₃a_w)/∂v − ∂(h₂a_v)/∂w ] e⃗_u
          + ( 1/(h₃h₁) ) [ ∂(h₁a_u)/∂w − ∂(h₃a_w)/∂u ] e⃗_v
          + ( 1/(h₁h₂) ) [ ∂(h₂a_v)/∂u − ∂(h₁a_u)/∂v ] e⃗_w

  ∇²f = ( 1/(h₁h₂h₃) ) [ ∂/∂u( (h₂h₃/h₁)(∂f/∂u) ) + ∂/∂v( (h₃h₁/h₂)(∂f/∂v) ) + ∂/∂w( (h₁h₂/h₃)(∂f/∂w) ) ]

Some properties of the ∇ operator are:

  div(φv⃗) = φ div v⃗ + grad φ · v⃗            curl(φv⃗) = φ curl v⃗ + (grad φ) × v⃗
  div(u⃗ × v⃗) = v⃗ · (curl u⃗) − u⃗ · (curl v⃗)    curl curl v⃗ = grad div v⃗ − ∇²v⃗
  div grad φ = ∇²φ                           curl grad φ = 0⃗
  div curl v⃗ = 0                             ∇²v⃗ ≡ (∇²v₁, ∇²v₂, ∇²v₃)

Here, v⃗ is an arbitrary vector field and φ an arbitrary scalar field.

3.2.5 Integral theorems

Some important integral theorems are:

  Gauss:                     ∯ (v⃗ · n⃗) d²A = ∭ (div v⃗) d³V
  Stokes for a scalar field: ∮ (φ · e⃗_t) ds = ∬ (n⃗ × grad φ) d²A
  Stokes for a vector field: ∮ (v⃗ · e⃗_t) ds = ∬ (curl v⃗ · n⃗) d²A
  this gives:                ∯ (curl v⃗ · n⃗) d²A = 0
  Ostrogradsky:              ∯ (n⃗ × v⃗) d²A = ∭ (curl v⃗) d³V
                             ∯ (φ n⃗) d²A = ∭ (grad φ) d³V

Here the orientable surface ∬ d²A is bounded by the Jordan curve s(t).

3.2.6 Multiple integrals

Let A be a closed curve given by f(x, y) = 0; then the surface A inside the curve in ℝ² is given by

  A = ∬ d²A = ∬ dxdy

Let the surface A be defined by the function z = f(x, y). The volume V bounded by A and the xy plane is then given by:

  V = ∬ f(x, y) dxdy

The volume inside a closed surface defined by z = f(x, y) is given by:

  V = ∭ d³V = ∬ f(x, y) dxdy = ∭ dxdydz
3.2.7 Coordinate transformations

The expressions d²A and d³V transform as follows when one changes coordinates to u⃗ = (u, v, w) through the transformation x⃗(u, v, w):

  V = ∭ f(x, y, z) dxdydz = ∭ f(x⃗(u⃗)) | ∂x⃗/∂u⃗ | dudvdw

In ℝ² holds:

  ∂x⃗/∂u⃗ = det | x_u  x_v ; y_u  y_v |

Let the surface A be defined by z = F(x, y). Then the volume bounded by the xy plane and F is given by:

  ∬_S f(x⃗) d²A = ∬_G f(x⃗(u)) | (∂X⃗/∂u) × (∂X⃗/∂v) | dudv = ∬_G f(x, y, F(x, y)) √( 1 + (∂_xF)² + (∂_yF)² ) dxdy

3.3 Orthogonality of functions

The inner product of two functions f(x) and g(x) on the interval [a, b] is given by:

  (f, g) = ∫_a^b f(x)g(x)dx

or, when using a weight function p(x), by:

  (f, g) = ∫_a^b p(x)f(x)g(x)dx

The norm ||f|| follows from: ||f||² = (f, f). A set of functions fᵢ is orthonormal if (fᵢ, fⱼ) = δᵢⱼ.

Each function f(x) can be written as a sum of orthogonal functions:

  f(x) = Σ_{i=0}^{∞} cᵢgᵢ(x)

and Σcᵢ² ≤ ||f||². Let the set gᵢ be orthogonal; then it follows:

  cᵢ = (f, gᵢ) / (gᵢ, gᵢ)

3.4 Fourier series

Each function can be written as a sum of independent base functions. When one chooses the orthogonal basis (cos(nx), sin(nx)) we have a Fourier series. A periodical function f(x) with period 2L can be written as:

  f(x) = a₀ + Σ_{n=1}^{∞} [ aₙ cos(nπx/L) + bₙ sin(nπx/L) ]

Due to the orthogonality follows for the coefficients:

  a₀ = (1/2L) ∫_{−L}^{L} f(t)dt ,  aₙ = (1/L) ∫_{−L}^{L} f(t) cos(nπt/L)dt ,  bₙ = (1/L) ∫_{−L}^{L} f(t) sin(nπt/L)dt
A Fourier series can also be written as a sum of complex exponents:

  f(x) = Σ_{n=−∞}^{∞} cₙ e^(inx)

with

  cₙ = (1/2π) ∫_{−π}^{π} f(x) e^(−inx) dx

The Fourier transform of a function f(x) gives the transformed function f̂(ω):

  f̂(ω) = ( 1/√(2π) ) ∫_{−∞}^{∞} f(x) e^(−iωx) dx

The inverse transformation is given by:

  ½[ f(x⁺) + f(x⁻) ] = ( 1/√(2π) ) ∫_{−∞}^{∞} f̂(ω) e^(iωx) dω

where f(x⁺) and f(x⁻) are defined by the lower and upper limit:

  f(a⁻) = lim_{x↑a} f(x) ,  f(a⁺) = lim_{x↓a} f(x)

For continuous functions is ½[f(x⁺) + f(x⁻)] = f(x).
Chapter 4: Differential equations

4.1 Linear differential equations

4.1.1 First order linear DE

The general solution of a linear differential equation is given by y_A = y_H + y_P, where y_H is the solution of the homogeneous equation and y_P is a particular solution.

A first order differential equation is given by: y'(x) + a(x)y(x) = b(x). Its homogeneous equation is y'(x) + a(x)y(x) = 0. The solution of the homogeneous equation is given by

  y_H = k exp( −∫ a(x)dx )

Suppose that a(x) = a = constant. Substitution of exp(λx) in the homogeneous equation leads to the characteristic equation λ + a = 0 ⇒ λ = −a. Suppose b(x) = α exp(μx). Then one can distinguish two cases:

1. λ ≠ μ: a particular solution is: y_P ~ exp(μx)
2. λ = μ: a particular solution is: y_P ~ x exp(μx)

When a DE is solved by variation of parameters one writes: y_P(x) = y_H(x)f(x), and then one solves f(x) from this.

4.1.2 Second order linear DE

A differential equation of the second order with constant coefficients is given by: y''(x) + ay'(x) + by(x) = c(x). If c(x) = c = constant there exists a particular solution y_P = c/b.

Substitution of y = exp(λx) leads to the characteristic equation λ² + aλ + b = 0. There are now 2 possibilities:

1. λ₁ ≠ λ₂: then y_H = α exp(λ₁x) + β exp(λ₂x).
2. λ₁ = λ₂ = λ: then y_H = (α + βx) exp(λx).

If c(x) = p(x) exp(μx) where p(x) is a polynomial there are 3 possibilities:

1. λ₁, λ₂ ≠ μ: y_P = q(x) exp(μx), where q(x) is a polynomial of the same order as p(x).
2. λ₁ = μ, λ₂ ≠ μ: y_P = x q(x) exp(μx).
3. λ₁ = λ₂ = μ: y_P = x² q(x) exp(μx).

When: y''(x) + ω²y(x) = ωf(x) and y(0) = y'(0) = 0 follows:

  y(x) = ∫₀ˣ f(t) sin(ω(x − t))dt
4.1.3 The Wronskian

We start with the LDE y''(x) + p(x)y'(x) + q(x)y(x) = 0 and the two initial conditions y(x₀) = K₀ and y'(x₀) = K₁. When p(x) and q(x) are continuous on the open interval I there exists a unique solution y(x) on this interval.

The Wronskian is defined by:

  W(y₁, y₂) = det | y₁  y₂ ; y₁'  y₂' | = y₁y₂' − y₂y₁'

y₁ and y₂ are linear independent on the interval I if and only if ∃x₀ ∈ I so that holds: W(y₁(x₀), y₂(x₀)) ≠ 0. The general solution can then be written as y(x) = c₁y₁(x) + c₂y₂(x) with y₁ and y₂ linear independent. These are also all solutions of the LDE.

4.1.4 Power series substitution

When a series y = Σ aₙxⁿ is substituted in the LDE with constant coefficients y''(x) + py'(x) + qy(x) = 0 this leads to:

  Σₙ [ n(n − 1)aₙx^(n−2) + pnaₙx^(n−1) + qaₙxⁿ ] = 0

Setting coefficients for equal powers of x equal gives:

  (n + 2)(n + 1)a_(n+2) + p(n + 1)a_(n+1) + qaₙ = 0

This gives a general relation between the coefficients. Special cases are n = 0, 1, 2.

4.2 Some special cases

4.2.1 Frobenius' method

Given the LDE

  d²y(x)/dx² + ( b(x)/x ) dy(x)/dx + ( c(x)/x² ) y(x) = 0

with b(x) and c(x) analytical at x = 0. This LDE has at least one solution of the form

  yᵢ(x) = x^(rᵢ) Σ_{n=0}^{∞} aₙxⁿ  with i = 1, 2

with r real or complex and chosen so that a₀ ≠ 0. When one expands b(x) and c(x) as b(x) = b₀ + b₁x + b₂x² + ... and c(x) = c₀ + c₁x + c₂x² + ..., it follows for r:

  r² + (b₀ − 1)r + c₀ = 0

There are now 3 possibilities:

1. r₁ = r₂: then y(x) = y₁(x) ln|x| + y₂(x).
2. r₁ − r₂ ∈ ℕ: then y(x) = ky₁(x) ln|x| + y₂(x).
3. r₁ − r₂ ∉ ℤ: then y(x) = y₁(x) + y₂(x).
4.2.2 Euler

Given the LDE

  x² d²y(x)/dx² + ax dy(x)/dx + by(x) = 0

Substitution of y(x) = xʳ gives an equation for r: r² + (a − 1)r + b = 0. From this one gets two solutions r₁ and r₂. There are now 2 possibilities:

1. r₁ ≠ r₂: then y(x) = C₁x^(r₁) + C₂x^(r₂).
2. r₁ = r₂ = r: then y(x) = (C₁ ln(x) + C₂)xʳ.

4.2.3 Legendre's DE

Given the LDE

  (1 − x²) d²y(x)/dx² − 2x dy(x)/dx + n(n + 1)y(x) = 0

The solutions of this equation are given by y(x) = aPₙ(x) + by₂(x) where the Legendre polynomials Pₙ(x) are defined by:

  Pₙ(x) = ( 1/(2ⁿn!) ) dⁿ(x² − 1)ⁿ/dxⁿ

For these holds: ||Pₙ||² = 2/(2n + 1).

4.2.4 The associated Legendre equation

This equation follows from the θ-dependent part of the wave equation ∇²Ψ = 0 by substitution of ξ = cos(θ). Then follows:

  (1 − ξ²) d/dξ [ (1 − ξ²) dP(ξ)/dξ ] + [ C(1 − ξ²) − m² ]P(ξ) = 0

Regular solutions exist only if C = l(l + 1). They are of the form:

  Pₗᵐ(ξ) = (1 − ξ²)^(m/2) dᵐPₗ⁰(ξ)/dξᵐ = [ (1 − ξ²)^(m/2) / (2ˡ l!) ] d^(m+l)(ξ² − 1)ˡ / dξ^(m+l)

For m > l is Pₗᵐ(ξ) = 0. Some properties of Pₗ⁰(ξ) are:

  ∫_{−1}^{1} Pₗ⁰(ξ)Pₗ'⁰(ξ)dξ = 2δ_{ll'}/(2l + 1) ,  Σ_{l=0}^{∞} Pₗ⁰(ξ)tˡ = 1/√(1 − 2ξt + t²)

This polynomial can be written as:

  Pₗ⁰(ξ) = (1/π) ∫₀^π ( ξ + √(ξ² − 1) cos(θ) )ˡ dθ

4.2.5 Solutions for Bessel's equation

Given the LDE

  x² d²y(x)/dx² + x dy(x)/dx + (x² − ν²)y(x) = 0

also called Bessel's equation, and the Bessel functions of the first kind

  J_ν(x) = xᵛ Σ_{m=0}^{∞} (−1)ᵐ x²ᵐ / ( 2^(2m+ν) m! Γ(ν + m + 1) )
for ν := n ∈ ℕ this becomes:

  Jₙ(x) = xⁿ Σ_{m=0}^{∞} (−1)ᵐ x²ᵐ / ( 2^(2m+n) m! (n + m)! )

When ν ∉ ℤ the solution is given by y(x) = aJ_ν(x) + bJ_(−ν)(x). But because for n ∈ ℤ holds: J_(−n)(x) = (−1)ⁿJₙ(x), this does not apply to integers. The general solution of Bessel's equation is given by y(x) = aJ_ν(x) + bY_ν(x), where Y_ν are the Bessel functions of the second kind:

  Y_ν(x) = [ J_ν(x) cos(νπ) − J_(−ν)(x) ] / sin(νπ)  and  Yₙ(x) = lim_{ν→n} Y_ν(x)

The equation x²y''(x) + xy'(x) − (x² + ν²)y(x) = 0 has the modified Bessel functions of the first kind I_ν(x) = i^(−ν)J_ν(ix) as solution, and also solutions K_ν = π[ I_(−ν)(x) − I_ν(x) ] / [ 2 sin(νπ) ].

Sometimes it can be convenient to write the solutions of Bessel's equation in terms of the Hankel functions

  Hₙ⁽¹⁾(x) = Jₙ(x) + iYₙ(x) ,  Hₙ⁽²⁾(x) = Jₙ(x) − iYₙ(x)

4.2.6 Properties of Bessel functions

Bessel functions are orthogonal with respect to the weight function p(x) = x.

J_(−n)(x) = (−1)ⁿJₙ(x). The Neumann functions Nₘ(x) are defined as:

  Nₘ(x) = (1/2π) Jₘ(x) ln(x) + (1/xᵐ) Σ_{n=0}^{∞} αₙx²ⁿ

The following holds:

  lim_{x→0} Jₘ(x) = xᵐ ,  lim_{x→0} Nₘ(x) = x⁻ᵐ for m ≠ 0 ,  lim_{x→0} N₀(x) = ln(x)

  lim_{r→∞} H(r) = e^(±ikr) e^(iωt) / √r

  lim_{x→∞} Jₙ(x) = √(2/πx) cos(x − xₙ) ,  lim_{x→∞} J_(−n)(x) = √(2/πx) sin(x − xₙ)

with xₙ = ½π(n + ½). Further:

  Jₙ₊₁(x) + Jₙ₋₁(x) = (2n/x) Jₙ(x) ,  Jₙ₊₁(x) − Jₙ₋₁(x) = −2 dJₙ(x)/dx

The following integral relations hold:

  Jₘ(x) = (1/2π) ∫₀^{2π} exp[ i(x sin(θ) − mθ) ]dθ = (1/π) ∫₀^π cos( x sin(θ) − mθ )dθ

4.2.7 Laguerre's equation

Given the LDE

  x d²y(x)/dx² + (1 − x) dy(x)/dx + ny(x) = 0

Solutions of this equation are the Laguerre polynomials Lₙ(x):

  Lₙ(x) = ( eˣ/n! ) dⁿ( xⁿe⁻ˣ )/dxⁿ = Σ_{m=0}^{n} ( (−1)ᵐ/m! ) C(n,m) xᵐ
4.2.8 The associated Laguerre equation

Given the LDE

  d²y(x)/dx² + ( (m + 1)/x − 1 ) dy(x)/dx + ( (n + ½(m + 1))/x ) y(x) = 0

Solutions of this equation are the associated Laguerre polynomials Lₙᵐ(x):

  Lₙᵐ(x) = [ (−1)ᵐ n! / (n − m)! ] eˣ x⁻ᵐ d^(n−m)( e⁻ˣ xⁿ ) / dx^(n−m)

4.2.9 Hermite

The differential equations of Hermite are:

  d²Hₙ(x)/dx² − 2x dHₙ(x)/dx + 2nHₙ(x) = 0  and  d²Heₙ(x)/dx² − x dHeₙ(x)/dx + nHeₙ(x) = 0

Solutions of these equations are the Hermite polynomials, given by:

  Hₙ(x) = (−1)ⁿ exp(x²) dⁿ( exp(−x²) )/dxⁿ = 2^(n/2) Heₙ(x√2)

  Heₙ(x) = (−1)ⁿ exp(½x²) dⁿ( exp(−½x²) )/dxⁿ = 2^(−n/2) Hₙ(x/√2)

4.2.10 Chebyshev

The LDE

  (1 − x²) d²Uₙ(x)/dx² − 3x dUₙ(x)/dx + n(n + 2)Uₙ(x) = 0

has solutions of the form

  Uₙ(x) = sin[ (n + 1) arccos(x) ] / √(1 − x²)

The LDE

  (1 − x²) d²Tₙ(x)/dx² − x dTₙ(x)/dx + n²Tₙ(x) = 0

has solutions Tₙ(x) = cos(n arccos(x)).

4.2.11 Weber

The LDE Wₙ''(x) + (n + ½ − ¼x²)Wₙ(x) = 0 has solutions: Wₙ(x) = Heₙ(x) exp(−¼x²).

4.3 Non-linear differential equations

Some non-linear differential equations and a solution are:

  y' = a√(y² + b²)     y = b sinh(a(x − x₀))
  y' = a√(y² − b²)     y = b cosh(a(x − x₀))
  y' = a√(b² − y²)     y = b sin(a(x − x₀))
  y' = a(y² + b²)      y = b tan(ab(x − x₀))
  y' = a(y² − b²)      y = −b coth(ab(x − x₀))
  y' = a(b² − y²)      y = b tanh(ab(x − x₀))
  y' = ay(b − y)/b     y = b / (1 + Cb e^(−ax))
4.4 Sturm-Liouville equations

Sturm-Liouville equations are second order LDE's of the form:

  −d/dx [ p(x) dy(x)/dx ] + q(x)y(x) = λm(x)y(x)

The boundary conditions are chosen so that the operator

  L = −d/dx [ p(x) d/dx ] + q(x)

is Hermitean. The normalization function m(x) must satisfy

  ∫_a^b m(x)yᵢ(x)yⱼ(x)dx = δᵢⱼ

When y₁(x) and y₂(x) are two linear independent solutions one can write the Wronskian in this form:

  W(y₁, y₂) = det | y₁  y₂ ; y₁'  y₂' | = C / p(x)

where C is constant.

By changing to another dependent variable u(x), given by u(x) = y(x)√(p(x)), the LDE transforms into the normal form:

  d²u(x)/dx² + I(x)u(x) = 0  with  I(x) = ¼ ( p'(x)/p(x) )² − ½ p''(x)/p(x) − ( q(x) − λm(x) )/p(x)

If I(x) > 0, then y''/y < 0 and the solution has an oscillatory behaviour; if I(x) < 0, then y''/y > 0 and the solution has an exponential behaviour.

4.5 Linear partial differential equations

4.5.1 General

The normal derivative is defined by:

  ∂u/∂n = ( ∇u, n⃗ )

A frequently used solution method for PDE's is separation of variables: one assumes that the solution can be written as u(x, t) = X(x)T(t). When this is substituted two ordinary DE's for X(x) and T(t) are obtained.

4.5.2 Special cases

The wave equation

The wave equation in 1 dimension is given by

  ∂²u/∂t² = c² ∂²u/∂x²

When the initial conditions u(x, 0) = φ(x) and ∂u(x, 0)/∂t = Ψ(x) apply, the general solution is given by:

  u(x, t) = ½[ φ(x + ct) + φ(x − ct) ] + (1/2c) ∫_{x−ct}^{x+ct} Ψ(ξ)dξ
The diffusion equation

The diffusion equation is:
  ∂u/∂t = D∇²u
Its solutions can be written in terms of the propagators P(x, x', t). These have the property that P(x, x', 0) = δ(x − x'). In 1 dimension it reads:
  P(x, x', t) = (1/(2√(πDt))) exp(−(x − x')²/(4Dt))
In 3 dimensions it reads:
  P(x, x', t) = (1/(8(πDt)^(3/2))) exp(−(x − x')²/(4Dt))
With initial condition u(x, 0) = f(x) the solution is:
  u(x, t) = ∫_G f(x')P(x, x', t)dx'
The solution of the equation
  ∂u/∂t − D ∂²u/∂x² = g(x, t)
is given by:
  u(x, t) = ∫dt' ∫dx' g(x', t')P(x, x', t − t')

The equation of Helmholtz

The equation of Helmholtz is obtained by substitution of u(x, t) = v(x)exp(iωt) in the wave equation. This gives for v:
  ∇²v(x, ω) + k²v(x, ω) = 0
This gives as solutions for v:
1. In cartesian coordinates: substitution of v = A exp(ik·x) gives:
     v(x) = ∫···∫ A(k)e^{ik·x}dk
   with the integrals over |k|² = k².
2. In polar coordinates:
     v(r, ϕ) = Σ_{m=0}^∞ (A_m J_m(kr) + B_m N_m(kr))e^{imϕ}
3. In spherical coordinates:
     v(r, θ, ϕ) = Σ_{l=0}^∞ Σ_{m=−l}^{l} [A_{lm} J_{l+½}(kr) + B_{lm} J_{−l−½}(kr)] Y(θ, ϕ)/√r
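A basic sanity check on the 1-D propagator: it starts as δ(x − x') and must stay normalized, i.e. its integral over x is 1 for every t > 0. The grid width and trapezoid resolution below are illustrative choices.

```python
import math

# 1-D diffusion propagator P(x, x', t) = exp(-(x-x')^2/(4Dt)) / (2*sqrt(pi*D*t)).
def P(x, xp, t, D=0.5):
    return math.exp(-(x - xp) ** 2 / (4 * D * t)) / (2 * math.sqrt(math.pi * D * t))

# Simple composite trapezoid rule on [a, b].
def integrate(f, a, b, n=4000):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

total = integrate(lambda x: P(x, 0.3, 0.8), -30.0, 30.0)
assert abs(total - 1.0) < 1e-8   # the propagator stays normalized
```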
4.5.3 Potential theory and Green's theorem

The subject of potential theory is the Poisson equation ∇²u = −f(x), where f is a given function, and the Laplace equation ∇²u = 0. The solutions of these can often be interpreted as a potential. The solutions of Laplace's equation are called harmonic functions.
When a vector field v is given by v = grad ϕ holds:
  ∫_a^b (v, t)ds = ϕ(b) − ϕ(a)
In this case there exist functions ϕ and w so that v = grad ϕ + curl w.
The field lines of the field v(x) follow from:
  ẋ(t) = λv(x)
The first theorem of Green is:
  ∫∫∫_G [u∇²v + (∇u, ∇v)]d³V = ∮_S u (∂v/∂n) d²A
The second theorem of Green is:
  ∫∫∫_G [u∇²v − v∇²u]d³V = ∮_S (u ∂v/∂n − v ∂u/∂n) d²A
A harmonic function which is 0 on the boundary of an area is also 0 within that area. A harmonic function with a normal derivative of 0 on the boundary of an area is constant within that area.
The Dirichlet problem is:
  ∇²u(x) = −f(x), x ∈ R, u(x) = g(x) for all x ∈ S.
It has a unique solution.
The Neumann problem is:
  ∇²u(x) = −f(x), x ∈ R, ∂u(x)/∂n = h(x) for all x ∈ S.
The solution is unique except for a constant. The solution exists if:
  −∫∫∫_R f(x)d³V = ∮_S h(x)d²A
A fundamental solution of the Laplace equation satisfies:
  ∇²u(x) = −δ(x)
This has in 2 dimensions in polar coordinates the following solution:
  u(r) = ln(r)/2π
This has in 3 dimensions in spherical coordinates the following solution:
  u(r) = 1/(4πr)
The equation ∇²v = −δ(x − ξ) has the solution
  v(x) = 1/(4π|x − ξ|)
After substituting this in Green's 2nd theorem and applying the sieve property of the δ function one can derive Green's 3rd theorem:
  u(ξ) = −(1/4π) ∫∫∫_R (∇²u/r) d³V + (1/4π) ∮_S [ (1/r)(∂u/∂n) − u ∂/∂n(1/r) ] d²A
The Green function G(x, ξ) is defined by: ∇²G = −δ(x − ξ), and on boundary S holds G(x, ξ) = 0. Then G can be written as:
  G(x, ξ) = 1/(4π|x − ξ|) + g(x, ξ)
Then g(x, ξ) is a solution of Dirichlet's problem. The solution of Poisson's equation ∇²u = −f(x) when on the boundary S holds u(x) = g(x), is:
  u(ξ) = ∫∫∫_R G(x, ξ)f(x)d³V − ∮_S g(x) (∂G(x, ξ)/∂n) d²A
Chapter 5 Linear algebra

5.1 Vector spaces

G is a group for the operation ⊗ if:
1. ∀a, b ∈ G ⇒ a ⊗ b ∈ G: a group is closed.
2. (a ⊗ b) ⊗ c = a ⊗ (b ⊗ c): a group is associative.
3. ∃e ∈ G so that a ⊗ e = e ⊗ a = a: there exists a unit element.
4. ∀a ∈ G ∃a̅ ∈ G so that a ⊗ a̅ = e: each element has an inverse.
If
5. a ⊗ b = b ⊗ a
the group is called Abelian or commutative. Vector spaces form an Abelian group for addition and multiplication: 1 · a = a, λ(μa) = (λμ)a, (λ + μ)(a + b) = λa + λb + μa + μb.
W is a linear subspace if ∀w₁, w₂ ∈ W holds: λw₁ + μw₂ ∈ W.
W is an invariant subspace of V for the operator A if ∀w ∈ W holds: Aw ∈ W.

5.2 Basis

For an orthogonal basis holds: (e_i, e_j) = cδ_ij. For an orthonormal basis holds: (e_i, e_j) = δ_ij.
The set of vectors {a_n} is linearly independent if:
  Σ_i λ_i a_i = 0 ⇔ ∀i: λ_i = 0
The set {a_n} is a basis if it is 1. independent and 2. V = <a₁, a₂, ...> = Σ λ_i a_i.

5.3 Matrix calculus

5.3.1 Basic operations

For the matrix multiplication of matrices A = a_ij and B = b_kl holds with r the row index and k the column index:
  A^{r₁k₁} · B^{r₂k₂} = C^{r₁k₂},  (AB)_ij = Σ_k a_ik b_kj
where r is the number of rows and k the number of columns.
The transpose of A is defined by: a^T_ij = a_ji. For this holds (AB)^T = B^T A^T and (A^T)^{−1} = (A^{−1})^T. For the inverse matrix holds: (A · B)^{−1} = B^{−1} · A^{−1}. The inverse matrix A^{−1} has the property that A · A^{−1} = I and can be found by diagonalization: (A_ij | I) ∼ (I | A^{−1}_ij).
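The transpose and inverse identities (AB)^T = B^T A^T and (AB)^{−1} = B^{−1}A^{−1} can be checked directly for 2×2 matrices with the explicit inverse formula from the next page. The concrete matrices below are arbitrary examples.

```python
# 2x2 matrix helpers, using lists of lists.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[2.0, 0.0], [1.0, 5.0]]

lhs_inv = inv2(matmul(A, B))
rhs_inv = matmul(inv2(B), inv2(A))
lhs_tr = transpose(matmul(A, B))
rhs_tr = matmul(transpose(B), transpose(A))
for i in range(2):
    for j in range(2):
        assert abs(lhs_inv[i][j] - rhs_inv[i][j]) < 1e-12
        assert abs(lhs_tr[i][j] - rhs_tr[i][j]) < 1e-12
```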
The inverse of a 2 × 2 matrix is:
  [ a b ; c d ]^{−1} = (1/(ad − bc)) [ d −b ; −c a ]
The determinant function D = det(A) is defined by:
  det(A) = D(a_{*1}, a_{*2}, ..., a_{*n})
For the determinant det(A) of a matrix A holds: det(AB) = det(A) · det(B). A 2 × 2 matrix has determinant:
  det[ a b ; c d ] = ad − cb
The derivative of a matrix is a matrix with the derivatives of the coefficients:
  dA/dt = [da_ij/dt] and d(AB)/dt = B dA/dt + A dB/dt
The derivative of the determinant is given by:
  d det(A)/dt = D(da₁/dt, ..., a_n) + D(a₁, da₂/dt, ..., a_n) + ... + D(a₁, ..., da_n/dt)
When the rows of a matrix are considered as vectors the row rank of a matrix is the number of independent vectors in this set. Similar for the column rank. The row rank equals the column rank for each matrix.
Let Ã: Ṽ → Ṽ be the complex extension of the real linear operator A: V → V in a finite dimensional V. Then A and Ã have the same characteristic equation.
When A_ij ∈ ℝ and v₁ + iv₂ is an eigenvector of A at eigenvalue λ = λ₁ + iλ₂, then holds:
1. Av₁ = λ₁v₁ − λ₂v₂ and Av₂ = λ₂v₁ + λ₁v₂.
2. v* = v₁ − iv₂ is an eigenvector at λ* = λ₁ − iλ₂.
3. The linear span <v₁, v₂> is an invariant subspace of A.
If k_n are the columns of A, then the transformed space of A is given by:
  R(A) = <Ae₁, ..., Ae_n> = <k₁, ..., k_n>
If the columns k_n of an n × m matrix A are independent, then the nullspace N(A) = {0}.

5.3.2 Matrix equations

We start with the equation
  A · x = b
and b ≠ 0. If det(A) ≠ 0 there exists exactly one solution ≠ 0. The equation
  A · x = 0
has exactly one solution ≠ 0 if det(A) = 0, and if det(A) ≠ 0 the solution is 0.
Cramer's rule for the solution of systems of linear equations is: let the system be written as
  A · x = b ≡ a₁x₁ + ... + a_nx_n = b
then x_j is given by:
  x_j = D(a₁, ..., a_{j−1}, b, a_{j+1}, ..., a_n) / det(A)
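Cramer's rule is easy to exercise for a 3×3 system: each x_j is det(A with column j replaced by b) divided by det(A). The particular A and b below are an arbitrary worked example.

```python
# Determinant of a 3x3 matrix by cofactor expansion along the first row.
def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Cramer's rule: x_j = det(A with column j replaced by b) / det(A).
def cramer(A, b):
    d = det3(A)
    xs = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]
        xs.append(det3(Aj) / d)
    return xs

A = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [3.0, 5.0, 3.0]
x = cramer(A, b)
# The solution must satisfy A x = b (here x = (1, 1, 1)).
for i in range(3):
    assert abs(sum(A[i][j] * x[j] for j in range(3)) - b[i]) < 1e-12
```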
5.4 Linear transformations

A transformation A is linear if: A(λx + βy) = λAx + βAy.
Some common linear transformations are:

  Transformation type                     Equation
  Projection on the line <a>              P(x) = (a, x)a/(a, a)
  Projection on the plane (a, x) = 0      Q(x) = x − P(x)
  Mirror image in the line <a>            S(x) = 2P(x) − x
  Mirror image in the plane (a, x) = 0    T(x) = 2Q(x) − x = x − 2P(x)

For a projection holds: x − P_W(x) ⊥ P_W(x) and P_W(x) ∈ W.
If for a transformation A holds: (Ax, y) = (x, Ay) = (Ax, Ay), then A is a projection.
Let A: V → W define a linear transformation; we define:
• If S is a subset of V: A(S) := {Ax ∈ W | x ∈ S}
• If T is a subset of W: A←(T) := {x ∈ V | A(x) ∈ T}
Then A(S) is a linear subspace of W and the inverse transformation A←(T) is a linear subspace of V. From this follows that A(V) is the image space of A, notation: R(A). A←(0) = E₀ is a linear subspace of V, the null space of A, notation: N(A). Then the following holds:
  dim(N(A)) + dim(R(A)) = dim(V)

5.5 Plane and line

The equation of a line that contains the points a and b is:
  x = a + λ(b − a) = a + λr
The equation of a plane is:
  x = a + λ(b − a) + μ(c − a) = a + λr₁ + μr₂
When this is a plane in ℝ³, the normal vector to this plane is given by:
  n_V = (r₁ × r₂)/|r₁ × r₂|
A line can also be described by the points for which the line equation ℓ: (a, x) + b = 0 holds, and for a plane V: (a, x) + k = 0. The normal vector to V is then: a/|a|.
The distance d between 2 points p and q is given by d(p, q) = ‖p − q‖.
In ℝ² holds: the distance of a point p to the line (a, x) + b = 0 is
  d(p, ℓ) = |(a, p) + b| / |a|
Similarly in ℝ³: the distance of a point p to the plane (a, x) + k = 0 is
  d(p, V) = |(a, p) + k| / |a|
This can be generalized for ℝⁿ and ℂⁿ (theorem from Hesse).
5.6 Coordinate transformations

The linear transformation A from 𝕂ⁿ → 𝕂ᵐ is given by (𝕂 = ℝ or ℂ):
  y = A^{m×n} x
where a column of A is the image of a base vector in the original.
The matrix A^β_α transforms a vector given w.r.t. a basis α into a vector w.r.t. a basis β. It is given by:
  A^β_α = (β(Aa₁), ..., β(Aa_n))
where β(x) is the representation of the vector x w.r.t. basis β.
The transformation matrix S^β_α transforms vectors from coordinate system α into coordinate system β:
  S^β_α := I^β_α = (β(a₁), ..., β(a_n))
and S^β_α · S^α_β = I.
The matrix of a transformation A w.r.t. a basis β is then given by the columns A^β_β e_j:
  A^β_β = (A^β_β e₁, ..., A^β_β e_n)
For the transformation of matrix operators to another coordinate system holds: A^δ_λ = S^δ_α A^α_β S^β_λ, A^α_α = S^α_β A^β_β S^β_α and (AB)^λ_α = A^λ_β B^β_α. Further is A^β_α = S^β_α A^α_α, and a vector is transformed via X_α = S^α_β X_β.

5.7 Eigen values

The eigenvalue equation
  Ax = λx
with eigenvalues λ can be solved with (A − λI)v = 0 ⇒ det(A − λI) = 0. The eigenvalues follow from this characteristic equation. The following is true: det(A) = Π λ_i and Tr(A) = Σ a_ii = Σ λ_i.
The eigen values λ_i are independent of the chosen basis. When λ is an eigen value of A holds: Aⁿx = λⁿx.
The matrix of A in a basis of eigenvectors, with S the transformation matrix to this basis, S = (E_{λ₁}, ..., E_{λₙ}), is given by:
  Λ = S^{−1}AS = diag(λ₁, ..., λₙ)
When 0 is an eigen value of A then E₀(A) = N(A).

5.8 Transformation types

Isometric transformations

A transformation is isometric when: ‖Ax‖ = ‖x‖. This implies that the eigenvalues of an isometric transformation are given by λ = exp(iϕ) ⇒ |λ| = 1. Then also holds: (Ax, Ay) = (x, y).
When W is an invariant subspace of the isometric transformation A with dim(A) < ∞, then also W⊥ is an invariant subspace.
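The identities det(A) = Πλ_i and Tr(A) = Σλ_i can be checked by hand for a 2×2 matrix, where the characteristic equation is λ² − Tr(A)λ + det(A) = 0. The matrix below is an arbitrary example with real eigenvalues.

```python
import math

# For a 2x2 matrix: lambda^2 - Tr(A)*lambda + det(A) = 0, so the eigenvalues
# satisfy l1*l2 = det(A) and l1 + l2 = Tr(A).
A = [[4.0, 1.0], [2.0, 3.0]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4 * det)   # real here: this A has real eigenvalues
l1, l2 = (tr + disc) / 2, (tr - disc) / 2

assert abs(l1 * l2 - det) < 1e-12
assert abs(l1 + l2 - tr) < 1e-12
assert abs(l1 - 5.0) < 1e-12 and abs(l2 - 2.0) < 1e-12  # eigenvalues of this A
```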
Orthogonal transformations

A transformation A is orthogonal if A is isometric and the inverse A← exists. For an orthogonal transformation O holds O^T O = I, so: O^T = O^{−1}. If A and B are orthogonal, then AB and A^{−1} are also orthogonal.
Let A: V → V be orthogonal with dim(V) < ∞. Then A is:
Direct orthogonal if det(A) = +1. A describes a rotation. A rotation in ℝ² through angle ϕ is given by:
  R = [ cos(ϕ) −sin(ϕ) ; sin(ϕ) cos(ϕ) ]
So the rotation angle ϕ is determined by Tr(A) = 2cos(ϕ) with 0 ≤ ϕ ≤ π. Let λ₁ and λ₂ be the roots of the characteristic equation, then also holds: ℜ(λ₁) = ℜ(λ₂) = cos(ϕ), and λ₁ = exp(iϕ), λ₂ = λ₁* = exp(−iϕ).
In ℝ³ holds: λ₁ = 1, λ₂ = λ₃* = exp(iϕ). A rotation over E_{λ₁} is given by the matrix
  [ 1 0 0 ; 0 cos(ϕ) −sin(ϕ) ; 0 sin(ϕ) cos(ϕ) ]
Mirrored orthogonal if det(A) = −1. Vectors from E₋₁ are mirrored by A w.r.t. the invariant subspace E⊥₋₁. A mirroring in ℝ² in <(cos(½ϕ), sin(½ϕ))> is given by:
  S = [ cos(ϕ) sin(ϕ) ; sin(ϕ) −cos(ϕ) ]
Mirrored orthogonal transformations in ℝ³ are rotational mirrorings: rotations of axis <a₁> through angle ϕ and mirror plane <a₁>⊥. The matrix of such a transformation is given by:
  [ −1 0 0 ; 0 cos(ϕ) −sin(ϕ) ; 0 sin(ϕ) cos(ϕ) ]
For all orthogonal transformations O in ℝ³ holds that O(x) × O(y) = O(x × y).
ℝⁿ (n < ∞) can be decomposed in invariant subspaces with dimension 1 or 2 for each orthogonal transformation.

Unitary transformations

Let V be a complex space on which an inner product is defined. Then a linear transformation U is unitary if U is isometric and its inverse transformation U← exists. Each isometric transformation in a finite-dimensional complex vector space is unitary.
Theorem: for an n × n matrix A the following statements are equivalent:
1. A is unitary.
2. The columns of A are an orthonormal set.
3. The rows of A are an orthonormal set.
An n × n matrix is unitary if U^H U = I. It has determinant |det(U)| = 1.

Symmetric transformations

A transformation A on ℝⁿ is symmetric if (Ax, y) = (x, Ay). A matrix A ∈ 𝕄^{n×n} is symmetric if A = A^T. A linear operator is only symmetric if its matrix w.r.t. an arbitrary orthogonal basis is symmetric.
All eigenvalues of a symmetric transformation belong to ℝ. The different eigenvectors are mutually perpendicular. If A is symmetric, then A^T = A = A^H on an orthogonal basis.
For each matrix B ∈ 𝕄^{m×n} holds: B^T B is symmetric.
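Two facts about planar rotations stated above lend themselves to a quick numerical check: composition of rotations adds the angles, R(a)R(b) = R(a+b), and Tr(R(ϕ)) = 2cos(ϕ). The angles below are arbitrary test values.

```python
import math

# Rotation matrix in R^2 through angle phi.
def R(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

a, b = 0.6, 1.1
P, Q = matmul(R(a), R(b)), R(a + b)
for i in range(2):
    for j in range(2):
        assert abs(P[i][j] - Q[i][j]) < 1e-12   # R(a) R(b) = R(a + b)

assert abs(R(a)[0][0] + R(a)[1][1] - 2 * math.cos(a)) < 1e-12  # Tr = 2 cos(phi)
```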
Hermitian transformations

A transformation H: V → V with V = ℂⁿ is Hermitian if (Hx, y) = (x, Hy). The Hermitian conjugated transformation A^H of A is: [a_ij]^H = [a*_ji]. An alternative notation is: A^H = A†. The inner product of two vectors x and y can now be written in the form: (x, y) = x^H y.
If the transformations A and B are Hermitian, then their product AB is Hermitian if: [A, B] = AB − BA = 0. [A, B] is called the commutator of A and B.
The eigenvalues of a Hermitian transformation belong to ℝ.
A matrix representation can be coupled with a Hermitian operator L. W.r.t. a basis e_i it is given by L_mn = (e_m, Le_n).

Normal transformations

For each linear transformation A in a complex vector space V there exists exactly one linear transformation B so that (Ax, y) = (x, By). This B is called the adjungated transformation of A. Notation: B = A*. The following holds: (CD)* = D*C*. A* = A^{−1} if A is unitary and A* = A if A is Hermitian.
Definition: the linear transformation A is normal in a complex vector space V if A*A = AA*. This is only the case if for its matrix w.r.t. an orthonormal basis holds: A†A = AA†.
If A is normal holds:
1. For all vectors x, y ∈ V and a normal transformation A holds:
  (Ax, Ay) = (A*Ax, y) = (AA*x, y) = (A*x, A*y)
2. x is an eigenvector of A if and only if x is an eigenvector of A*.
3. Eigenvectors of A for different eigenvalues are mutually perpendicular.
4. If E_λ is an eigenspace from A then the orthogonal complement E⊥_λ is an invariant subspace of A.
Let the different roots of the characteristic equation of A be β_i with multiplicities n_i. Then the dimension of each eigenspace V_i equals n_i. These eigenspaces are mutually perpendicular and each vector x ∈ V can be written in exactly one way as
  x = Σ_i x_i with x_i ∈ V_i
This can also be written as: x_i = P_i x where P_i is a projection on V_i. This leads to the spectral mapping theorem: let A be a normal transformation in a complex vector space V with dim(V) = n. Then:
1. There exist projection transformations P_i, 1 ≤ i ≤ p, with the properties
  • P_i · P_j = 0 for i ≠ j,
  • P₁ + ... + P_p = I,
  • dim P₁(V) + ... + dim P_p(V) = n
and complex numbers α₁, ..., α_p so that A = α₁P₁ + ... + α_pP_p.
2. If A is unitary then holds |α_i| = 1 ∀i.
3. If A is Hermitian then α_i ∈ ℝ ∀i.
Complete systems of commuting Hermitian transformations

Consider m Hermitian linear transformations A_i in an n dimensional complex inner product space V. Assume they mutually commute.
Lemma: if E_λ is the eigenspace for eigenvalue λ from A₁, then E_λ is an invariant subspace of all transformations A_i. This means that if x ∈ E_λ, then A_i x ∈ E_λ.
Theorem. Consider m commuting Hermitian matrices A_i. Then there exists a unitary matrix U so that all matrices U†A_iU are diagonal. The columns of U are the common eigenvectors of all matrices A_j.
If all eigenvalues of a Hermitian linear transformation in an n-dimensional complex vector space differ, then the normalized eigenvector is known except for a phase factor exp(iα).
Definition: a commuting set of Hermitian transformations is called complete if for each set of two common eigenvectors v_i, v_j there exists a transformation A_k so that v_i and v_j are eigenvectors with different eigenvalues of A_k.
Usually a commuting set is taken as small as possible. In quantum physics one speaks of commuting observables. The required number of commuting observables equals the number of quantum numbers required to characterize a state.

5.9 Homogeneous coordinates

Homogeneous coordinates are used if one wants to combine both rotations and translations in one matrix transformation. An extra coordinate is introduced to describe the non-linearities. Homogeneous coordinates are derived from cartesian coordinates as follows:
  (x, y, z)_cart = (X, Y, Z, w)_hom = (wx, wy, wz, w)_hom
so x = X/w, y = Y/w and z = Z/w. Transformations in homogeneous coordinates are described by the following matrices:
1. Translation along vector (X₀, Y₀, Z₀, w₀):
  T = [ w₀ 0 0 X₀ ; 0 w₀ 0 Y₀ ; 0 0 w₀ Z₀ ; 0 0 0 w₀ ]
2. Rotations of the x, y, z axis, resp. through angles α, β, γ:
  R_x(α) = [ 1 0 0 0 ; 0 cos(α) −sin(α) 0 ; 0 sin(α) cos(α) 0 ; 0 0 0 1 ]
  R_y(β) = [ cos(β) 0 sin(β) 0 ; 0 1 0 0 ; −sin(β) 0 cos(β) 0 ; 0 0 0 1 ]
  R_z(γ) = [ cos(γ) −sin(γ) 0 0 ; sin(γ) cos(γ) 0 0 ; 0 0 1 0 ; 0 0 0 1 ]
3. A perspective projection on image plane z = c with the center of projection in the origin. This transformation has no inverse.
  P(z = c) = [ 1 0 0 0 ; 0 1 0 0 ; 0 0 1 0 ; 0 0 1/c 0 ]
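The translation matrix T (taking w₀ = 1 for simplicity) can be exercised directly: applied to a homogeneous point and followed by the division by w, it shifts the cartesian coordinates. The point and offsets below are arbitrary.

```python
# 4x4 homogeneous translation matrix, with w0 = 1.
def translation(X0, Y0, Z0):
    return [[1, 0, 0, X0], [0, 1, 0, Y0], [0, 0, 1, Z0], [0, 0, 0, 1]]

# Apply a 4x4 matrix to a homogeneous point and convert back to cartesian.
def apply(M, p):
    v = [sum(M[i][j] * p[j] for j in range(4)) for i in range(4)]
    w = v[3]
    return [v[0] / w, v[1] / w, v[2] / w]

p_hom = [2.0, 3.0, 4.0, 1.0]   # cartesian (2, 3, 4) with w = 1
assert apply(translation(1.0, -1.0, 5.0), p_hom) == [3.0, 2.0, 9.0]
```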
5.10 Inner product spaces

A complex inner product on a complex vector space is defined as follows:
1. (a, b) = (b, a)*,
2. (a, β₁b₁ + β₂b₂) = β₁(a, b₁) + β₂(a, b₂) for all a, b₁, b₂ ∈ V and β₁, β₂ ∈ ℂ,
3. (a, a) ≥ 0 for all a ∈ V, (a, a) = 0 if and only if a = 0.
Due to (1) holds: (a, a) ∈ ℝ. The inner product space ℂⁿ is the complex vector space on which a complex inner product is defined by:
  (a, b) = Σ_{i=1}^n a_i* b_i
For function spaces holds:
  (f, g) = ∫_a^b f*(t)g(t)dt
For each a the length ‖a‖ is defined by: ‖a‖ = √(a, a). The following holds: |‖a‖ − ‖b‖| ≤ ‖a + b‖ ≤ ‖a‖ + ‖b‖, and with ϕ the angle between a and b holds: (a, b) = ‖a‖ · ‖b‖ cos(ϕ).
Let {a₁, a₂, ..., a_n} be a set of vectors in an inner product space V. Then the Gramian G of this set is given by: G_ij = (a_i, a_j). The set of vectors is independent if and only if det(G) ≠ 0.
A set is orthonormal if (a_i, a_j) = δ_ij.
If e₁, e₂, ... form an orthonormal row in an infinite dimensional vector space Bessel's inequality holds:
  ‖x‖² ≥ Σ_{i=1}^∞ |(e_i, x)|²
The equal sign holds if and only if lim_{n→∞} ‖x_n − x‖ = 0.
The inner product space ℓ² is defined in ℂ^∞ by:
  ℓ² = { a = (a₁, a₂, ...) | Σ_{n=1}^∞ |a_n|² < ∞ }
A space is called a Hilbert space if it is ℓ² and if also holds: lim_{n→∞} ‖a_{n+1} − a_n‖ = 0.

5.11 The Laplace transformation

The class LT exists of functions for which holds:
1. On each interval [0, A], A > 0, there are no more than a finite number of discontinuities and each discontinuity has an upper- and lower limit,
2. ∃t₀ ∈ [0, ∞> and a, M ∈ ℝ so that for t ≥ t₀ holds: |f(t)| exp(−at) < M.
Then there exists a Laplace transform for f. The Laplace transformation is a generalisation of the Fourier transformation. The Laplace transform of a function f(t) is, with s ∈ ℂ and t ≥ 0:
  F(s) = ∫₀^∞ f(t)e^{−st}dt
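The defining integral F(s) = ∫₀^∞ f(t)e^{−st}dt can be checked numerically for a function whose transform follows from the definition by direct integration: for f(t) = e^{at}, F(s) = 1/(s − a) when s > a. The truncation point T and step count are illustrative choices, not part of the definition.

```python
import math

# Truncated trapezoid approximation of F(s) = integral_0^inf f(t) e^{-st} dt.
# The tail beyond T is of order e^{-(s-a)T} for f(t) = e^{at}, negligible here.
def laplace(f, s, T=60.0, n=200000):
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    total += sum(f(i * h) * math.exp(-s * i * h) for i in range(1, n))
    return total * h

a, s = 0.5, 2.0
approx = laplace(lambda t: math.exp(a * t), s)
assert abs(approx - 1.0 / (s - a)) < 1e-4   # L(e^{at}) = 1/(s - a) for s > a
```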
The Laplace transform of the derivative of a function is given by:
  L(f^(n)(t)) = −f^(n−1)(0) − sf^(n−2)(0) − ... − s^(n−1)f(0) + s^n F(s)
The operator L has the following properties:
1. Equal shapes: if a > 0 then L(f(at)) = (1/a)F(s/a)
2. Damping: L(e^{−at}f(t)) = F(s + a)
3. Translation: if a > 0 and g is defined by g(t) = f(t − a) if t > a and g(t) = 0 for t ≤ a, then holds: L(g(t)) = e^{−sa}L(f(t)).
If s ∈ ℝ then holds ℜ(Lf) = L(ℜ(f)) and ℑ(Lf) = L(ℑ(f)).
For some often occurring functions holds:

  f(t)                F(s) = L(f(t))
  (tⁿ/n!)e^{at}       (s − a)^{−n−1}
  e^{at}cos(ωt)       (s − a)/((s − a)² + ω²)
  e^{at}sin(ωt)       ω/((s − a)² + ω²)
  δ(t − a)            exp(−as)

5.12 The convolution

The convolution integral is defined by:
  (f ∗ g)(t) = ∫₀^t f(u)g(t − u)du
The convolution has the following properties:
1. f ∗ g ∈ LT
2. L(f ∗ g) = L(f) · L(g)
3. Distribution: f ∗ (g + h) = f ∗ g + f ∗ h
4. Commutative: f ∗ g = g ∗ f
5. Homogenity: f ∗ (λg) = λf ∗ g
If L(f) = F₁ · F₂, then f(t) = f₁ ∗ f₂.

5.13 Systems of linear differential equations

We start with the equation ẋ = Ax. Assume that x = v exp(λt), then follows: Av = λv. In the 2 × 2 case holds:
1. λ₁ ≠ λ₂: then x(t) = Σ v_i exp(λ_i t).
2. λ₁ = λ₂ = λ: then x(t) = (ut + v) exp(λt).
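The commutativity f ∗ g = g ∗ f can be seen numerically by evaluating the convolution integral both ways at a single point; for the pair f(t) = t, g(t) = e^{−t} (arbitrary test functions) the closed form t − 1 + e^{−t} is also available for comparison.

```python
import math

# Trapezoid approximation of the convolution (f*g)(t) = integral_0^t f(u) g(t-u) du.
def conv(f, g, t, n=2000):
    h = t / n
    s = 0.5 * (f(0.0) * g(t) + f(t) * g(0.0))
    s += sum(f(i * h) * g(t - i * h) for i in range(1, n))
    return s * h

f = lambda u: u
g = lambda u: math.exp(-u)
t = 1.5
assert abs(conv(f, g, t) - conv(g, f, t)) < 1e-9        # f*g = g*f
assert abs(conv(f, g, t) - (t - 1 + math.exp(-t))) < 1e-6  # closed form of t * e^{-t} convolution
```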
Assume that λ = α + iβ is an eigenvalue with eigenvector v, then λ* is also an eigenvalue for eigenvector v*. Decompose v = u + iw, then the real solutions are
  c₁[u cos(βt) − w sin(βt)]e^{αt} + c₂[w cos(βt) + u sin(βt)]e^{αt}
There are two solution strategies for the equation ẍ = Ax:
1. Let x = v exp(λt) ⇒ det(A − λ²I) = 0.
2. Introduce: ẋ = u and ẏ = v, this leads to ẍ = u̇ and ÿ = v̇. This transforms an n-dimensional set of second order equations into a 2n-dimensional set of first order equations.

5.14 Quadratic forms

5.14.1 Quadratic forms in ℝ²

The general equation of a quadratic form is: x^T Ax + 2x^T P + S = 0. Here, A is a symmetric matrix. If Λ = S^{−1}AS = diag(λ₁, ..., λₙ) holds: u^T Λu + 2u^T P + S = 0, so all cross terms are 0. u = (u, v, w) should be chosen so that det(S) = +1, to maintain the same orientation as the system (x, y, z).
Starting with the equation
  ax² + 2bxy + cy² + dx + ey + f = 0
we have |A| = ac − b². An ellipse has |A| > 0, a parabola |A| = 0 and a hyperbola |A| < 0. In polar coordinates this can be written as:
  r = ep/(1 − e cos(θ))
An ellipse has e < 1, a parabola e = 1 and a hyperbola e > 1.

5.14.2 Quadratic surfaces in ℝ³

Rank 3:
  p x²/a² + q y²/b² + r z²/c² = d
• Ellipsoid: p = q = r = d = 1, a, b, c are the lengths of the semi axes.
• Single-bladed hyperboloid: p = q = d = 1, r = −1.
• Double-bladed hyperboloid: r = d = 1, p = q = −1.
• Cone: p = q = 1, r = −1, d = 0.
Rank 2:
  p x²/a² + q y²/b² + r z/c² = d
• Elliptic paraboloid: p = q = 1, r = −1, d = 0.
• Hyperbolic paraboloid: p = r = −1, q = 1, d = 0.
• Elliptic cylinder: p = q = −1, r = 0, d = 0.
• Hyperbolic cylinder: p = d = 1, q = −1, r = 0.
• Pair of planes: p = 1, q = −1, d = 0.
Rank 1:
  py² + qx = d
• Parabolic cylinder: p, q > 0.
• Parallel pair of planes: d > 0, q = 0, p = 1.
• Double plane: p = 1, q = d = 0.
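The planar classification by the sign of |A| = ac − b² is a one-line decision rule; the sketch below packages it as a small function with hand-checked example conics.

```python
# Classify a conic a x^2 + 2 b x y + c y^2 + d x + e y + f = 0 by |A| = ac - b^2:
# positive -> ellipse, zero -> parabola, negative -> hyperbola.
def conic_type(a, b, c):
    disc = a * c - b * b
    if disc > 0:
        return "ellipse"
    if disc == 0:
        return "parabola"
    return "hyperbola"

assert conic_type(1, 0, 2) == "ellipse"      # x^2 + 2y^2 = 1
assert conic_type(0, 0, 1) == "parabola"     # y^2 = x
assert conic_type(1, 0, -1) == "hyperbola"   # x^2 - y^2 = 1
```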
Chapter 6 Complex function theory

6.1 Functions of complex variables

Complex function theory deals with complex functions of a complex variable. Some definitions:
f is analytical on G if f is continuous and differentiable on G.
A Jordan curve is a curve that is closed and does not intersect itself.
If K is a curve in ℂ with parameter equation z = φ(t) = x(t) + iy(t), a ≤ t ≤ b, then the length L of K is given by:
  L = ∫_a^b √((dx/dt)² + (dy/dt)²) dt = ∫_a^b |dz/dt| dt = ∫_a^b |φ'(t)| dt
The derivative of f in point z = a is:
  f'(a) = lim_{z→a} (f(z) − f(a))/(z − a)
If f(z) = u(x, y) + iv(x, y) the derivative is:
  f'(z) = ∂u/∂x + i ∂v/∂x = −i ∂u/∂y + ∂v/∂y
Setting both results equal yields the equations of Cauchy-Riemann:
  ∂u/∂x = ∂v/∂y, ∂u/∂y = −∂v/∂x
These equations imply that ∇²u = ∇²v = 0. f is analytical if u and v satisfy these equations.

6.2 Complex integration

6.2.1 Cauchy's integral formula

Let K be a curve described by z = φ(t) on a ≤ t ≤ b and f(z) is continuous on K. Then the integral of f over K is:
  ∫_K f(z)dz = ∫_a^b f(φ(t))φ̇(t)dt  (= F(b) − F(a) if f is continuous)
Lemma: let K be the circle with center a and radius r taken in a positive direction. Then holds for integer m:
  (1/2πi) ∮_K dz/(z − a)^m = 0 if m ≠ 1, 1 if m = 1
Theorem: if L is the length of curve K and if |f(z)| ≤ M for z ∈ K, then, if the integral exists, holds:
  |∫_K f(z)dz| ≤ ML
Theorem: let f be continuous on an area G and let p be a fixed point of G. Let F(z) = ∫_p^z f(ξ)dξ for all z ∈ G only depend on z and not on the integration path. Then F(z) is analytical on G with F'(z) = f(z).
This leads to two equivalent formulations of the main theorem of complex integration: let the function f be analytical on an area G. Let K and K' be two curves with the same starting- and end points, which can be transformed into each other by continuous deformation within G. Let B be a Jordan curve. Then holds
  ∫_K f(z)dz = ∫_{K'} f(z)dz ⇔ ∮_B f(z)dz = 0
By applying the main theorem on e^{iz}/z one can derive that
  ∫₀^∞ (sin(x)/x) dx = π/2

6.2.2 Residue

A point a ∈ ℂ is a regular point of a function f(z) if f is analytical in a. Otherwise a is a singular point or pole of f(z). The residue of f in a is defined by
  Res_{z=a} f(z) = (1/2πi) ∮_K f(z)dz
where K is a Jordan curve which encloses a in positive direction. The residue is 0 in regular points; in singular points it can be both 0 and ≠ 0. Cauchy's residue proposition is: let f be analytical within and on a Jordan curve K except in a finite number of singular points a_i within K. Then, if K is taken in a positive direction, holds:
  (1/2πi) ∮_K f(z)dz = Σ_{k=1}^n Res_{z=a_k} f(z)
Lemma: let the function f be analytical in a, then holds:
  Res_{z=a} f(z)/(z − a) = f(a)
This leads to Cauchy's integral theorem: if F is analytical on the Jordan curve K, which is taken in a positive direction, holds:
  (1/2πi) ∮_K f(z)/(z − a) dz = f(a) if a inside K, 0 if a outside K
Theorem: let K be a curve (K need not be closed) and let φ(ξ) be continuous on K. Then the function
  f(z) = ∫_K φ(ξ)dξ/(ξ − z)
is analytical with n-th derivative
  f^(n)(z) = n! ∫_K φ(ξ)dξ/(ξ − z)^{n+1}
Theorem: let K be a curve and G an area. Let φ(ξ, z) be defined for ξ ∈ K, z ∈ G, with the following properties:
1. φ(ξ, z) is limited, this means |φ(ξ, z)| ≤ M for ξ ∈ K, z ∈ G,
2. For fixed ξ ∈ K, φ(ξ, z) is an analytical function of z on G,
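Cauchy's integral theorem is well suited to a numerical check: discretizing (1/2πi)∮ f(z)/(z−a) dz over a circle should reproduce f(a) for a inside the contour. Here f(z) = e^z, the circle |z| = 2 and the point a are arbitrary test choices.

```python
import cmath, math

# Trapezoid discretization of a contour integral over the circle |z| = radius,
# taken in positive direction.
def contour_integral(g, radius, n=20000):
    total = 0.0 + 0.0j
    for k in range(n):
        t = 2 * math.pi * k / n
        z = radius * cmath.exp(1j * t)
        dz = 1j * z * (2 * math.pi / n)   # dz = i z dt for the parametrization z = r e^{it}
        total += g(z) * dz
    return total

a = 0.5 + 0.3j
val = contour_integral(lambda z: cmath.exp(z) / (z - a), 2.0) / (2j * math.pi)
assert abs(val - cmath.exp(a)) < 1e-8   # recovers f(a) = e^a
```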
3. For fixed z ∈ G the functions φ(ξ, z) and ∂φ(ξ, z)/∂z are continuous functions of ξ on K.
Then the function
  f(z) = ∫_K φ(ξ, z)dξ
is analytical with derivative
  f'(z) = ∫_K (∂φ(ξ, z)/∂z) dξ
Cauchy's inequality: let f(z) be an analytical function within and on the circle C: |z − a| = R and let |f(z)| ≤ M for z ∈ C. Then holds
  |f^(n)(a)| ≤ M n!/Rⁿ

6.3 Analytical functions defined by series

The series Σ f_n(z) is called pointwise convergent on an area G with sum F(z) if
  ∀ε>0 ∀z∈G ∃N₀ ∀N>N₀: |F(z) − Σ_{n=1}^N f_n(z)| < ε
The series is called uniform convergent if
  ∀ε>0 ∃N₀ ∀N>N₀ ∀z∈G: |F(z) − Σ_{n=1}^N f_n(z)| < ε
Uniform convergence implies pointwise convergence; the opposite is not necessary.
Theorem: let the power series Σ_{n=0}^∞ a_n zⁿ have a radius of convergence R. R is the distance to the first non-essential singularity.
• If lim_{n→∞} |a_{n+1}/a_n| = L exists, then R = 1/L.
• If lim_{n→∞} ⁿ√|a_n| = L exists, then R = 1/L.
If these limits both don't exist one can find R with the formula of Cauchy-Hadamard:
  1/R = lim sup_{n→∞} ⁿ√|a_n|

6.4 Laurent series

Taylor's theorem: let f be analytical in an area G and let point a ∈ G have distance r to the boundary of G. Then f(z) can be expanded into the Taylor series near a:
  f(z) = Σ_{n=0}^∞ c_n(z − a)ⁿ with c_n = f^(n)(a)/n!
valid for |z − a| < r. The radius of convergence of the Taylor series is ≥ r.
Theorem of Laurent: let f be analytical in the circular area G: r < |z − a| < R. Then f(z) can be expanded into a Laurent series with center a:
  f(z) = Σ_{n=−∞}^∞ c_n(z − a)ⁿ with c_n = (1/2πi) ∮_K f(w)dw/(w − a)^{n+1}, n ∈ ℤ
valid for r < |z − a| < R and K an arbitrary Jordan curve in G which encloses point a in positive direction.
The principal part of a Laurent series is: Σ_{n=1}^∞ c_{−n}(z − a)^{−n}. One can classify singular points with this. There are 3 cases:
1. There is no principal part. Then a is a non-essential singularity. Define f(a) = c₀; the series is then also valid for |z − a| < R and f is analytical in a.
2. The principal part contains a finite number of terms. Then there exists a k ∈ ℕ so that
  lim_{z→a} (z − a)^k f(z) = c_{−k} ≠ 0.
One speaks of a pole of order k in z = a. The function g(z) = (z − a)^k f(z) then has a non-essential singularity in a. If f and g are analytical, f(a) ≠ 0, g(a) = 0, g'(a) ≠ 0, then f(z)/g(z) has a simple pole (i.e. a pole of order 1) in z = a with
  Res_{z=a} f(z)/g(z) = f(a)/g'(a)
3. The principal part contains an infinite number of terms, such as exp(1/z) for z = 0. Then a is an essential singular point of f.

6.5 Jordan's theorem

Residues are often used when solving definite integrals. We define the notations C⁺_ρ = {z | |z| = ρ, ℑ(z) ≥ 0} and C⁻_ρ = {z | |z| = ρ, ℑ(z) ≤ 0} and M⁺(ρ, f) = max_{z∈C⁺_ρ} |f(z)|, M⁻(ρ, f) = max_{z∈C⁻_ρ} |f(z)|.
We assume that f(z) is analytical for ℑ(z) > 0 with a possible exception of a finite number of singular points which do not lie on the real axis, lim_{ρ→∞} ρM⁺(ρ, f) = 0 and that the integral exists; then
  ∫_{−∞}^∞ f(x)dx = 2πi Σ Res f(z) in ℑ(z) > 0
Replace M⁺ by M⁻ in the conditions above and it follows that:
  ∫_{−∞}^∞ f(x)dx = −2πi Σ Res f(z) in ℑ(z) < 0
Jordan's lemma: let f be continuous for |z| ≥ R, ℑ(z) ≥ 0 and lim_{ρ→∞} M⁺(ρ, f) = 0. Then holds for α > 0
  lim_{ρ→∞} ∫_{C⁺_ρ} f(z)e^{iαz}dz = 0
Let f be continuous for |z| ≥ R, ℑ(z) ≤ 0 and lim_{ρ→∞} M⁻(ρ, f) = 0. Then holds for α < 0
  lim_{ρ→∞} ∫_{C⁻_ρ} f(z)e^{iαz}dz = 0
Let z = a be a simple pole of f(z) and let C_δ be the half circle |z − a| = δ, 0 ≤ arg(z − a) ≤ π, taken from a + δ to a − δ. Then is
  lim_{δ↓0} (1/2πi) ∫_{C_δ} f(z)dz = ½ Res_{z=a} f(z)
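The residue formula ∫f(x)dx = 2πi Σ Res in ℑ(z) > 0 gives π for f(x) = 1/(1 + x²): the only pole in the upper half plane is z = i with residue 1/(2i). The sketch below compares this against a direct numerical integration; the truncation interval is an illustrative choice and contributes an error of about 1/1000.

```python
import math

# Composite trapezoid rule on [a, b].
def integrate(f, a, b, n=200000):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# f(x) = 1/(1+x^2): residue at z = i is 1/(2i), so the integral is 2*pi*i/(2i) = pi.
numeric = integrate(lambda x: 1.0 / (1.0 + x * x), -2000.0, 2000.0)
# The tail beyond |x| = 2000 contributes roughly 2/2000 = 1e-3.
assert abs(numeric - math.pi) < 2e-3
```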
Chapter 7 Tensor calculus

7.1 Vectors and covectors

A finite dimensional vector space is denoted by V, W. The vector space of linear transformations from V to W is denoted by L(V, W). Consider L(V, ℝ) := V*. We name V* the dual space of V. Now we can define vectors in V with basis c and covectors in V* with basis ĉ. Properties of both are:
1. Vectors: x = x^i c_i with basis vectors c_i:
  c_i = ∂/∂x^i
Transformation from system i to i' is given by:
  c_{i'} = A^i_{i'} c_i = ∂_{i'} ∈ V, x^{i'} = A^{i'}_i x^i
2. Covectors: x̂ = x_i ĉ^i with basis vectors ĉ^i:
  ĉ^i = dx^i
Transformation from system i to i' is given by:
  ĉ^{i'} = A^{i'}_i ĉ^i ∈ V*, x_{i'} = A^i_{i'} x_i
Here the Einstein convention is used:
  a^i b_i := Σ_i a^i b_i
The coordinate transformation is given by:
  A^i_{i'} = ∂x^i/∂x^{i'}, A^{i'}_i = ∂x^{i'}/∂x^i
From this follows that A^i_k · A^k_l = δ^i_l and A^i_{i'} = (A^{i'}_i)^{−1}.
In differential notation the coordinate transformations are given by:
  dx^i = (∂x^i/∂x^{i'}) dx^{i'} and ∂/∂x^{i'} = (∂x^i/∂x^{i'}) ∂/∂x^i
The general transformation rule for a tensor T is:
  T^{q₁...qₙ}_{s₁...sₘ} = |∂x/∂u|^ℓ (∂u^{q₁}/∂x^{p₁}) ··· (∂u^{qₙ}/∂x^{pₙ}) (∂x^{r₁}/∂u^{s₁}) ··· (∂x^{rₘ}/∂u^{sₘ}) T^{p₁...pₙ}_{r₁...rₘ}
where ℓ is the weight of the tensor. For an absolute tensor ℓ = 0.
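The inverse relation A^i_k A^k_l = δ^i_l between the two Jacobians can be verified at a point for a concrete coordinate change. The sketch below uses the change from cartesian (x, y) to polar (r, θ), an example not taken from the formulary, with both Jacobians written out analytically.

```python
import math

# Evaluation point (arbitrary, away from the origin where polar coordinates degenerate).
x0, y0 = 1.2, 0.7
r0 = math.hypot(x0, y0)
t0 = math.atan2(y0, x0)

# Jacobian of (r, theta) w.r.t. (x, y):  dr/dx = x/r, dtheta/dx = -y/r^2, etc.
J = [[x0 / r0, y0 / r0],
     [-y0 / (r0 * r0), x0 / (r0 * r0)]]
# Jacobian of (x, y) w.r.t. (r, theta):  dx/dr = cos t, dx/dtheta = -r sin t, etc.
Jinv = [[math.cos(t0), -r0 * math.sin(t0)],
        [math.sin(t0), r0 * math.cos(t0)]]

# J * Jinv must be the identity: A^i_k A^k_l = delta^i_l.
for i in range(2):
    for l in range(2):
        s = sum(J[i][k] * Jinv[k][l] for k in range(2))
        assert abs(s - (1.0 if i == l else 0.0)) < 1e-12
```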
7.2 Tensor algebra
The following holds: a_ij(x_i + y_i) ≡ a_ij x_i + a_ij y_i, but: a_ij(x_i + y_j) ≢ a_ij x_i + a_ij y_j, and (a_ij + a_ji)x_i x_j ≡ 2a_ij x_i x_j, but: (a_ij + a_ji)x_i y_j ≢ 2a_ij x_i y_j, and (a_ij − a_ji)x_i x_j ≡ 0.
The sum and difference of two tensors is a tensor of the same rank: A^p_q ± B^p_q. The outer tensor product results in a tensor with a rank equal to the sum of the ranks of both tensors: A^{pr}_q · B^m_s = C^{prm}_{qs}. The contraction equates two indices and sums over them. Suppose we take r = s for a tensor A^{mpr}_{qs}; this results in: Σ_r A^{mpr}_{qr} = B^{mp}_q. The inner product of two tensors is defined by taking the outer product followed by a contraction.
7.3 Inner product
Definition: the bilinear transformation B: V × V* → ℝ, B(x, ŷ) = ŷ(x) is denoted by <x, ŷ>. For this pairing operator <·, ·> = δ holds:
  ŷ(x) = <x, ŷ> = y_i x^i, <ĉ^i, c_j> = δ^i_j
Let G: V → V* be a linear bijection. Define the bilinear forms
  g: V × V → ℝ, g(x, y) = <x, Gy>
  h: V* × V* → ℝ, h(x̂, ŷ) = <G^{−1}x̂, ŷ>
Both are not degenerated. The following holds: h(Gx, Gy) = <x, Gy> = g(x, y). If we identify V and V* with G, then g (or h) gives an inner product on V.
The inner product ( , )_Λ on Λ^k(V) is defined by:
  (Φ, Ψ)_Λ = (1/k!) (Φ, Ψ)_{T⁰_k(V)}
The inner product of two vectors is then given by:
  (x, y) = x^i y^j <c_i, Gc_j> = g_ij x^i y^j
The matrix g_ij of G is given by:
  g_ij ĉ^j = Gc_i
The matrix g^ij of G^{−1} is given by:
  g^{kl} c_l = G^{−1}ĉ^k
For this metric tensor g_ij holds: g_ij g^{jk} = δ^k_i. This tensor can raise or lower indices:
  x_j = g_ij x^i, x^i = g^ij x_j and du^i = ĉ^i = g^ij c_j.
7.4 Tensor product
Definition: let U and V be two finite dimensional vector spaces with dimensions m and n. Let U* × V* be the cartesian product of U and V. A function t: U* × V* → ℝ; (û; v̂) → t(û; v̂) = t^{αβ} u_α v_β ∈ ℝ is called a tensor if t is linear in û and v̂. The tensors t form a vector space denoted by U ⊗ V. The elements T ∈ V ⊗ V are called contravariant 2-tensors: T = T^{ij} c_i ⊗ c_j = T^{ij} ∂_i ⊗ ∂_j. The elements T ∈ V* ⊗ V* are called covariant 2-tensors: T = T_{ij} ĉ^i ⊗ ĉ^j = T_{ij} dx^i ⊗ dx^j. The elements T ∈ V* ⊗ V are called mixed 2-tensors: T = T_i^{.j} ĉ^i ⊗ c_j = T_i^{.j} dx^i ⊗ ∂_j, and analogous for T ∈ V ⊗ V*. The numbers given by
  t^{αβ} = t(ĉ^α, ĉ^β)
with 1 ≤ α ≤ m and 1 ≤ β ≤ n are the components of t.
Take x ∈ U and y ∈ V. Then the function x ⊗ y, defined by
  (x ⊗ y)(û, v̂) = <x, û>_U <y, v̂>_V
is a tensor. The components are derived from: (u ⊗ v)^{ij} = u^i v^j. The tensor product of 2 tensors is given by:
  (2 0) form: (v ⊗ w)(p̂, q̂) = v^i p_i w^k q_k = T^{ik} p_i q_k
  (0 2) form: (p̂ ⊗ q̂)(v, w) = p_i v^i q_k w^k = T_{ik} v^i w^k
  (1 1) form: (v ⊗ p̂)(q̂, w) = v^i q_i p_k w^k = T^i_k q_i w^k
7.5 Symmetric and antisymmetric tensors

A tensor $t \in V \otimes V$ is called symmetric resp. antisymmetric if $\forall \hat{x}, \hat{y} \in V^*$ holds: $t(\hat{x}, \hat{y}) = t(\hat{y}, \hat{x})$ resp. $t(\hat{x}, \hat{y}) = -t(\hat{y}, \hat{x})$.

A tensor $t \in V^* \otimes V^*$ is called symmetric resp. antisymmetric if $\forall \vec{x}, \vec{y} \in V$ holds: $t(\vec{x}, \vec{y}) = t(\vec{y}, \vec{x})$ resp. $t(\vec{x}, \vec{y}) = -t(\vec{y}, \vec{x})$. The linear transformations $S$ and $A$ in $V \otimes W$ are defined by:

$$St(\hat{x}, \hat{y}) = \tfrac{1}{2}\left(t(\hat{x}, \hat{y}) + t(\hat{y}, \hat{x})\right)$$
$$At(\hat{x}, \hat{y}) = \tfrac{1}{2}\left(t(\hat{x}, \hat{y}) - t(\hat{y}, \hat{x})\right)$$

Analogous in $V^* \otimes V^*$. If $t$ is symmetric resp. antisymmetric, then $St = t$ resp. $At = t$.

The tensors $e_i \vee e_j = e_i e_j = 2S(e_i \otimes e_j)$, with $1 \le i \le j \le n$, are a basis in $S(V \otimes V)$ with dimension $\frac{1}{2}n(n+1)$. The tensors $e_i \wedge e_j = 2A(e_i \otimes e_j)$, with $1 \le i < j \le n$, are a basis in $A(V \otimes V)$ with dimension $\frac{1}{2}n(n-1)$.

The complete antisymmetric tensor $\varepsilon$ is given by: $\varepsilon_{ijk}\varepsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}$. The permutation operators $e_{pqr}$ are defined by: $e_{123} = e_{231} = e_{312} = 1$, $e_{213} = e_{132} = e_{321} = -1$, for all other combinations $e_{pqr} = 0$. There is a connection with the $\varepsilon$ tensor: $\varepsilon^{pqr} = g^{-1/2}e^{pqr}$ and $\varepsilon_{pqr} = g^{1/2}e_{pqr}$.
7.6 Outer product

Let $\alpha \in \Lambda^k(V)$ and $\beta \in \Lambda^l(V)$. Then $\alpha \wedge \beta \in \Lambda^{k+l}(V)$ is defined by:

$$\alpha \wedge \beta = \frac{(k+l)!}{k!\,l!}A(\alpha \otimes \beta)$$

If $\alpha$ and $\beta \in \Lambda^1(V) = V^*$ holds: $\alpha \wedge \beta = \alpha \otimes \beta - \beta \otimes \alpha$
Mathematics Formulary by ir. J.C.A. Wevers
The outer product can be written as: $(\vec{a} \times \vec{b})_i = \varepsilon_{ijk}a^j b^k$, $\vec{a} \times \vec{b} = G^{-1}\cdot{*}(G\vec{a} \wedge G\vec{b})$.

Take $\vec{a}, \vec{b}, \vec{c}, \vec{d} \in \mathbb{R}^4$. Then $(dt \wedge dz)(\vec{a}, \vec{b}) = a^0 b^4 - b^0 a^4$ is the oriented surface of the projection on the $tz$-plane of the parallelogram spanned by $\vec{a}$ and $\vec{b}$. Further

$$(dt \wedge dy \wedge dz)(\vec{a}, \vec{b}, \vec{c}) = \det\begin{pmatrix} a^0 & b^0 & c^0 \\ a^2 & b^2 & c^2 \\ a^4 & b^4 & c^4 \end{pmatrix}$$

is the oriented 3-dimensional volume of the projection on the $tyz$-plane of the parallelepiped spanned by $\vec{a}$, $\vec{b}$ and $\vec{c}$.

$(dt \wedge dx \wedge dy \wedge dz)(\vec{a}, \vec{b}, \vec{c}, \vec{d}) = \det(\vec{a}, \vec{b}, \vec{c}, \vec{d})$ is the 4-dimensional volume of the hyperparallelepiped spanned by $\vec{a}$, $\vec{b}$, $\vec{c}$ and $\vec{d}$.
7.7 The Hodge star operator

$\Lambda^k(V)$ and $\Lambda^{n-k}(V)$ have the same dimension because $\binom{n}{k} = \binom{n}{n-k}$ for $1 \le k \le n$. $\dim(\Lambda^n(V)) = 1$. The choice of a basis means the choice of an oriented measure of volume, a volume $\mu$, in $V$. We can gauge $\mu$ so that for an orthonormal basis $\vec{e}_i$ holds: $\mu(\vec{e}_i) = 1$. This basis is then by definition positively oriented if

$$\mu = \hat{e}^1 \wedge \hat{e}^2 \wedge \ldots \wedge \hat{e}^n = 1$$

Because both spaces have the same dimension one can ask whether there exists a bijection between them. If $V$ has no extra structure this is not the case. However, such an operation does exist if there is an inner product defined on $V$ and the corresponding volume $\mu$. This is called the Hodge star operator and is denoted by $*$. The following holds:

$$\forall w \in \Lambda^k(V)\ \exists\, {*w} \in \Lambda^{n-k}(V)\ \forall \theta \in \Lambda^k(V):\quad \theta \wedge {*w} = (\theta, w)_\Lambda\,\mu$$

For an orthonormal basis in $\mathbb{R}^3$ holds: the volume $\mu = dx \wedge dy \wedge dz$, ${*}(dx \wedge dy \wedge dz) = 1$, ${*}dx = dy \wedge dz$, ${*}dy = -dx \wedge dz$, ${*}dz = dx \wedge dy$, ${*}(dx \wedge dy) = dz$, ${*}(dy \wedge dz) = dx$, ${*}(dx \wedge dz) = -dy$.

For a Minkowski basis in $\mathbb{R}^4$ holds: $\mu = dt \wedge dx \wedge dy \wedge dz$, $G = dt \otimes dt - dx \otimes dx - dy \otimes dy - dz \otimes dz$, and ${*}(dt \wedge dx \wedge dy \wedge dz) = 1$ and ${*}1 = dt \wedge dx \wedge dy \wedge dz$. Further ${*}dt = dx \wedge dy \wedge dz$ and ${*}dx = dt \wedge dy \wedge dz$.
7.8 Differential operations

7.8.1 The directional derivative

The directional derivative in point $\vec{a}$ is given by:

$$L_{\vec{a}}f = \langle \vec{a}, df \rangle = a^i\frac{\partial f}{\partial x^i}$$
7.8.2 The Lie derivative

The Lie derivative is given by:

$$(\mathcal{L}_{\vec{v}}\vec{w})^j = w^i\partial_i v^j - v^i\partial_i w^j$$
7.8.3 Christoffel symbols

To each curvilinear coordinate system $u^i$ we add a system of $n^3$ functions $\Gamma^i_{jk}$ of $u$, defined by

$$\frac{\partial^2\vec{x}}{\partial u^k \partial u^j} = \Gamma^i_{jk}\frac{\partial\vec{x}}{\partial u^i}$$

These are Christoffel symbols of the second kind. Christoffel symbols are no tensors. The Christoffel symbols of the second kind are given by:

$$\Gamma^i_{jk} := \left\langle \frac{\partial^2\vec{x}}{\partial u^k \partial u^j}, dx^i \right\rangle$$
The Christoffel symbols are symmetric in their lower indices: $\Gamma^i_{jk} = \Gamma^i_{kj}$. Their transformation to a different coordinate system is given by:

$$\Gamma^{i'}_{j'k'} = A^{i'}_i A^j_{j'} A^k_{k'}\Gamma^i_{jk} + A^{i'}_i(\partial_{j'}A^i_{k'})$$

The first term in this expression is 0 if the primed coordinates are Cartesian. There is a relation between Christoffel symbols and the metric:

$$\Gamma^i_{jk} = \tfrac{1}{2}g^{ir}(\partial_j g_{kr} + \partial_k g_{rj} - \partial_r g_{jk})$$

and $\Gamma^\alpha_{\beta\alpha} = \partial_\beta\left(\ln(\sqrt{g})\right)$. Lowering an index gives the Christoffel symbols of the first kind: $\Gamma^i_{jk} = g^{il}\Gamma_{jkl}$.

7.8.4 The covariant derivative

The covariant derivative of a vector, a covector and of rank-2 tensors is given by:

$$\nabla_j a^i = \partial_j a^i + \Gamma^i_{jk}a^k$$
$$\nabla_j a_i = \partial_j a_i - \Gamma^k_{ij}a_k$$
$$\nabla_\gamma a^\alpha_\beta = \partial_\gamma a^\alpha_\beta - \Gamma^\varepsilon_{\gamma\beta}a^\alpha_\varepsilon + \Gamma^\alpha_{\gamma\varepsilon}a^\varepsilon_\beta$$
$$\nabla_\gamma a_{\alpha\beta} = \partial_\gamma a_{\alpha\beta} - \Gamma^\varepsilon_{\gamma\alpha}a_{\varepsilon\beta} - \Gamma^\varepsilon_{\gamma\beta}a_{\alpha\varepsilon}$$
$$\nabla_\gamma a^{\alpha\beta} = \partial_\gamma a^{\alpha\beta} + \Gamma^\alpha_{\gamma\varepsilon}a^{\varepsilon\beta} + \Gamma^\beta_{\gamma\varepsilon}a^{\alpha\varepsilon}$$

Ricci's theorem:

$$\nabla_\gamma g_{\alpha\beta} = \nabla_\gamma g^{\alpha\beta} = 0$$

7.9 Differential operators

The gradient is given by:

$$\mathrm{grad}(f) = G^{-1}df = g^{ki}\frac{\partial f}{\partial x^i}\frac{\partial}{\partial x^k}$$

The divergence is given by:

$$\mathrm{div}(a^i) = \nabla_i a^i = \frac{1}{\sqrt{g}}\partial_k(\sqrt{g}\,a^k)$$

The curl is given by:

$$\mathrm{rot}(a) = G^{-1}\cdot{*}\cdot d\cdot G\vec{a} = -\varepsilon^{pqr}\nabla_q a_p$$

with $\nabla_q a_p - \nabla_p a_q = \partial_q a_p - \partial_p a_q$ because the Christoffel symbols are symmetric.

The Laplacian is given by:

$$\Delta(f) = \mathrm{div}\,\mathrm{grad}(f) = {*}d{*}df = \nabla_i g^{ij}\partial_j f = g^{ij}\nabla_i\nabla_j f = \frac{1}{\sqrt{g}}\frac{\partial}{\partial x^i}\left(\sqrt{g}\,g^{ij}\frac{\partial f}{\partial x^j}\right)$$
7.10 Differential geometry

7.10.1 Space curves

We limit ourselves to $\mathbb{R}^3$ with a fixed orthonormal basis. A point is represented by the vector $\vec{x} = (x_1, x_2, x_3)$. A space curve is a collection of points represented by $\vec{x} = \vec{x}(t)$. The arc length of a space curve is given by:

$$s(t) = \int_{t_0}^{t}\sqrt{\left(\frac{dx}{d\tau}\right)^2 + \left(\frac{dy}{d\tau}\right)^2 + \left(\frac{dz}{d\tau}\right)^2}\,d\tau$$

The derivative of $s$ with respect to $t$ is the length of the vector $d\vec{x}/dt$:

$$\left(\frac{ds}{dt}\right)^2 = \left(\frac{d\vec{x}}{dt}, \frac{d\vec{x}}{dt}\right)$$

The osculation plane in a point $P$ of a space curve is the limiting position of the plane through the tangent of the curve in point $P$ and a point $Q$ when $Q$ approaches $P$ along the space curve. If $\ddot{\vec{x}} \ne 0$ the osculation plane is given by:

$$\vec{y} = \vec{x} + \lambda\dot{\vec{x}} + \mu\ddot{\vec{x}} \quad\text{so}\quad \det(\vec{y} - \vec{x}, \dot{\vec{x}}, \ddot{\vec{x}}) = 0$$

In a bending point holds, if $\dddot{\vec{x}} \ne 0$:

$$\vec{y} = \vec{x} + \lambda\dot{\vec{x}} + \mu\dddot{\vec{x}}$$

The tangent has unit vector $\vec{\ell} = \dot{\vec{x}}$, the main normal unit vector $\vec{n} = \ddot{\vec{x}}$ and the binormal $\vec{b} = \dot{\vec{x}} \times \ddot{\vec{x}}$. So the main normal lies in the osculation plane, the binormal is perpendicular to it.

Let $P$ be a point and $Q$ be a nearby point of a space curve $\vec{x}(s)$. Let $\Delta\varphi$ be the angle between the tangents in $P$ and $Q$ and let $\Delta\psi$ be the angle between the osculation planes (binormals) in $P$ and $Q$. Then the curvature $\rho$ and the torsion $\tau$ in $P$ are defined by:

$$\rho^2 = \left(\frac{d\varphi}{ds}\right)^2 = \lim_{\Delta s \to 0}\left(\frac{\Delta\varphi}{\Delta s}\right)^2, \qquad \tau^2 = \left(\frac{d\psi}{ds}\right)^2$$

with $\rho > 0$. For plane curves $\rho$ is the ordinary curvature and $\tau = 0$. The following holds: $\rho^2 = (\dot{\vec{\ell}}, \dot{\vec{\ell}})$ and $\tau^2 = (\dot{\vec{b}}, \dot{\vec{b}})$.

Frenet's equations express the derivatives as linear combinations of these vectors:

$$\dot{\vec{\ell}} = \rho\vec{n}, \qquad \dot{\vec{n}} = -\rho\vec{\ell} + \tau\vec{b}, \qquad \dot{\vec{b}} = -\tau\vec{n}$$

From this follows that $\det(\dot{\vec{x}}, \ddot{\vec{x}}, \dddot{\vec{x}}) = \rho^2\tau$.

Some curves and their properties are:

Screw line: $\tau/\rho$ = constant
Circle screw line: $\tau$ = constant, $\rho$ = constant
Plane curves: $\tau = 0$
Circles: $\rho$ = constant, $\tau = 0$
Lines: $\rho = \tau = 0$

7.10.2 Surfaces in $\mathbb{R}^3$

A surface in $\mathbb{R}^3$ is the collection of end points of the vectors $\vec{x} = \vec{x}(u, v)$, so $x^h = x^h(u^\alpha)$. On the surface are 2 families of curves, one with $u$ = constant and one with $v$ = constant. The tangent plane in a point $P$ at the surface has basis: $\vec{c}_1 = \partial_1\vec{x}$ and $\vec{c}_2 = \partial_2\vec{x}$.
7.10.3 The first fundamental tensor

Let $P$ be a point of the surface $\vec{x} = \vec{x}(u^\alpha)$. The following two curves through $P$, denoted by $u^\alpha = u^\alpha(t)$ and $u^\alpha = v^\alpha(\tau)$, have as tangent vectors in $P$

$$\frac{d\vec{x}}{dt} = \frac{du^\alpha}{dt}\partial_\alpha\vec{x}, \qquad \frac{d\vec{x}}{d\tau} = \frac{dv^\beta}{d\tau}\partial_\beta\vec{x}$$

The first fundamental tensor of the surface in $P$ is the inner product of these tangent vectors:

$$\left(\frac{d\vec{x}}{dt}, \frac{d\vec{x}}{d\tau}\right) = (\vec{c}_\alpha, \vec{c}_\beta)\frac{du^\alpha}{dt}\frac{dv^\beta}{d\tau}$$

The covariant components w.r.t. the basis $\vec{c}_\alpha = \partial_\alpha\vec{x}$ are: $g_{\alpha\beta} = (\vec{c}_\alpha, \vec{c}_\beta)$. For the angle $\varphi$ between the parameter curves in $P$: $u = t$, $v$ = constant and $u$ = constant, $v = \tau$, holds:

$$\cos(\varphi) = \frac{g_{12}}{\sqrt{g_{11}g_{22}}}$$

For the arc length $s$ of $P$ along the curve $u^\alpha(t)$ holds: $ds^2 = g_{\alpha\beta}\,du^\alpha du^\beta$. This expression is called the line element.

7.10.4 The second fundamental tensor

The 4 derivatives of the tangent vectors, $\partial_\alpha\partial_\beta\vec{x} = \partial_\alpha\vec{c}_\beta$, are each linear combinations of the vectors $\vec{c}_1$, $\vec{c}_2$ and $\vec{N}$, with $\vec{N}$ perpendicular to $\vec{c}_1$ and $\vec{c}_2$. This is written as:

$$\partial_\alpha\vec{c}_\beta = \Gamma^\gamma_{\alpha\beta}\vec{c}_\gamma + h_{\alpha\beta}\vec{N}$$

This leads to:

$$\Gamma^\gamma_{\alpha\beta} = (\vec{c}^{\,\gamma}, \partial_\alpha\vec{c}_\beta), \qquad h_{\alpha\beta} = (\vec{N}, \partial_\alpha\vec{c}_\beta) = \frac{1}{\sqrt{\det g}}\det(\vec{c}_1, \vec{c}_2, \partial_\alpha\vec{c}_\beta)$$

7.10.5 Geodetic curvature

A curve on the surface $\vec{x}(u^\alpha)$ is given by $u^\alpha = u^\alpha(s)$; then $\vec{x} = \vec{x}(u^\alpha(s))$ with $s$ the arc length of the curve. The length of $\ddot{\vec{x}}$ is the curvature $\rho$ of the curve in $P$. The projection of $\ddot{\vec{x}}$ on the surface is a vector with components

$$p^\gamma = \ddot{u}^\gamma + \Gamma^\gamma_{\alpha\beta}\dot{u}^\alpha\dot{u}^\beta$$

of which the length is called the geodetic curvature of the curve in $P$. This remains the same if the surface is curved and the line element remains the same. The projection of $\ddot{\vec{x}}$ on $\vec{N}$ has length

$$p = h_{\alpha\beta}\dot{u}^\alpha\dot{u}^\beta$$

and is called the normal curvature of the curve in $P$. The theorem of Meusnier states that different curves on the surface with the same tangent vector in $P$ have the same normal curvature.

A geodetic line of a surface is a curve on the surface for which in each point the main normal of the curve is the same as the normal on the surface. So for a geodetic line in each point holds $p^\gamma = 0$, so

$$\frac{d^2u^\gamma}{ds^2} + \Gamma^\gamma_{\alpha\beta}\frac{du^\alpha}{ds}\frac{du^\beta}{ds} = 0$$
The covariant derivative $\nabla/dt$ in $P$ of a vector field of a surface along a curve is the projection on the tangent plane in $P$ of the normal derivative in $P$. For two vector fields $\vec{v}(t)$ and $\vec{w}(t)$ along the same curve of the surface follows Leibniz' rule:

$$\frac{d(\vec{v}, \vec{w})}{dt} = \left(\vec{v}, \frac{\nabla\vec{w}}{dt}\right) + \left(\vec{w}, \frac{\nabla\vec{v}}{dt}\right)$$

Along a curve holds:

$$\frac{\nabla}{dt}(v^\alpha\vec{c}_\alpha) = \left(\frac{dv^\gamma}{dt} + \Gamma^\gamma_{\alpha\beta}\frac{du^\alpha}{dt}v^\beta\right)\vec{c}_\gamma$$

7.11 Riemannian geometry

The Riemann tensor $R$ is defined by:

$$R^\mu_{\nu\alpha\beta}T^\nu = \nabla_\alpha\nabla_\beta T^\mu - \nabla_\beta\nabla_\alpha T^\mu$$

This is a $\binom{1}{3}$ tensor with $n^2(n^2 - 1)/12$ independent components not identically equal to 0. This tensor is a measure for the curvature of the considered space. If it is 0, the space is a flat manifold. It has the following symmetry properties:

$$R_{\alpha\beta\mu\nu} = R_{\mu\nu\alpha\beta} = -R_{\beta\alpha\mu\nu} = -R_{\alpha\beta\nu\mu}$$

The following relation holds:

$$[\nabla_\alpha, \nabla_\beta]T^\mu_\nu = R^\mu_{\sigma\alpha\beta}T^\sigma_\nu + R^\sigma_{\nu\alpha\beta}T^\mu_\sigma$$

The Riemann tensor depends on the Christoffel symbols through

$$R^\alpha_{\beta\mu\nu} = \partial_\mu\Gamma^\alpha_{\beta\nu} - \partial_\nu\Gamma^\alpha_{\beta\mu} + \Gamma^\alpha_{\sigma\mu}\Gamma^\sigma_{\beta\nu} - \Gamma^\alpha_{\sigma\nu}\Gamma^\sigma_{\beta\mu}$$

In a space and coordinate system where the Christoffel symbols are 0 this becomes:

$$R^\alpha_{\beta\mu\nu} = \tfrac{1}{2}g^{\alpha\sigma}(\partial_\beta\partial_\mu g_{\sigma\nu} - \partial_\beta\partial_\nu g_{\sigma\mu} + \partial_\sigma\partial_\nu g_{\beta\mu} - \partial_\sigma\partial_\mu g_{\beta\nu})$$

The Bianchi identities are: $\nabla_\lambda R_{\alpha\beta\mu\nu} + \nabla_\nu R_{\alpha\beta\lambda\mu} + \nabla_\mu R_{\alpha\beta\nu\lambda} = 0$.

The Ricci tensor is obtained by contracting the Riemann tensor: $R_{\alpha\beta} \equiv R^\mu_{\alpha\mu\beta}$, and it is symmetric in its indices: $R_{\alpha\beta} = R_{\beta\alpha}$. The Ricci scalar is $R = g^{\alpha\beta}R_{\alpha\beta}$. The Einstein tensor $G$ is defined by: $G^{\alpha\beta} \equiv R^{\alpha\beta} - \tfrac{1}{2}g^{\alpha\beta}R$. It has the property that $\nabla_\beta G^{\alpha\beta} = 0$.
Chapter 8: Numerical mathematics

8.1 Errors

There will be an error in the solution if a problem has a number of parameters which are not exactly known. The dependency between errors in input data and errors in the solution can be expressed in the condition number $c$. If the problem is given by $x = \varphi(a)$, the first-order approximation for an error $\delta a$ in $a$ is:

$$\frac{\delta x}{x} = \frac{a\varphi'(a)}{\varphi(a)}\cdot\frac{\delta a}{a}$$

The number $c(a) = |a\varphi'(a)/\varphi(a)|$ is the condition number; $c \approx 1$ if the problem is well-conditioned.

8.2 Floating point representations

The floating point representation depends on 4 natural numbers:

1. The basis of the number system $\beta$,
2. The length of the mantissa $t$,
3. The length of the exponent $q$,
4. The sign $s$.

Then the representation of machine numbers becomes: $\mathrm{rd}(x) = s \cdot m \cdot \beta^e$, where the mantissa $m$ is a number with $t$ $\beta$-based digits for which holds $1/\beta \le |m| < 1$, and $e$ is a number with $q$ $\beta$-based digits for which holds $|e| \le \beta^q - 1$. The number 0 is added to this set, for example with $m = e = 0$. The largest machine number is

$$a_{\max} = (1 - \beta^{-t})\beta^{\beta^q - 1}$$

and the smallest positive machine number is

$$a_{\min} = \beta^{-\beta^q}$$

The distance between two successive machine numbers in the interval $[\beta^{p-1}, \beta^p]$ is $\beta^{p-t}$. If $x$ is a real number and the closest machine number is $\mathrm{rd}(x)$, then holds:

$$\mathrm{rd}(x) = x(1 + \varepsilon) \quad\text{with}\quad |\varepsilon| \le \tfrac{1}{2}\beta^{1-t}$$
$$x = \mathrm{rd}(x)(1 + \varepsilon') \quad\text{with}\quad |\varepsilon'| \le \tfrac{1}{2}\beta^{1-t}$$

The number $\eta := \tfrac{1}{2}\beta^{1-t}$ is called the machine accuracy, and

$$|\varepsilon| \le \eta, \qquad \left|\frac{x - \mathrm{rd}(x)}{x}\right| \le \eta$$

An often used 32-bit float format is: 1 bit for $s$, 8 for the exponent and 23 for the mantissa. The base here is 2.
8.3 Systems of equations

We want to solve the matrix equation $Ax = b$ for a non-singular $A$, which is equivalent to finding the inverse matrix $A^{-1}$. Inverting an $n \times n$ matrix via Cramer's rule requires too many multiplications $f(n)$ with $n! \le f(n) \le (e-1)n!$, so other methods are preferable.

8.3.1 Triangular matrices

Consider the equation $Ux = c$ where $U$ is a right-upper triangular matrix: a matrix in which $U_{ij} = 0$ for all $j < i$, and all $U_{ii} \ne 0$. Then:

$$x_n = c_n/U_{nn}$$
$$x_{n-1} = (c_{n-1} - U_{n-1,n}x_n)/U_{n-1,n-1}$$
$$\vdots$$
$$x_1 = \Big(c_1 - \sum_{j=2}^{n}U_{1j}x_j\Big)/U_{11}$$

In code:

    for (k = n; k > 0; k--)
    {
        S = c[k];
        for (j = k + 1; j <= n; j++)
        {
            S -= U[k][j] * x[j];
        }
        x[k] = S / U[k][k];
    }

This algorithm requires $\frac{1}{2}n(n+1)$ floating point calculations.

8.3.2 Gauss elimination

Consider a general set $Ax = b$. This can be reduced by Gauss elimination to a triangular form by multiplying the first equation by $A_{i1}/A_{11}$ and then subtracting it from all others; now the first column contains all 0's except $A_{11}$. Then the 2nd equation is subtracted in such a way from the others that all elements in the second column below $A_{22}$ are 0, etc. In code:

    for (k = 1; k <= n; k++)
    {
        for (j = k; j <= n; j++) U[k][j] = A[k][j];
        c[k] = b[k];
        for (i = k + 1; i <= n; i++)
        {
            L = A[i][k] / U[k][k];
            for (j = k + 1; j <= n; j++)
            {
                A[i][j] -= L * U[k][j];
            }
            b[i] -= L * c[k];
        }
    }

This algorithm requires $\frac{1}{3}n(n^2-1)$ floating point multiplications and divisions for operations on the coefficient matrix and $\frac{1}{2}n(n-1)$ multiplications for operations on the right-hand terms, whereafter the triangular set has to be solved with $\frac{1}{2}n(n+1)$ operations.
8.3.3 Pivot strategy

Some equations have to be interchanged if the corner elements $A_{11}, A^{(1)}_{22}, \ldots$ are not all $\ne 0$, to allow Gauss elimination to work. In the following, $A^{(n)}$ is the element after the $n$th iteration. One method is: if $A^{(k-1)}_{kk} = 0$, then search for an element $A^{(k-1)}_{pk}$ with $p > k$ that is $\ne 0$ and interchange the $p$th and the $k$th equation. This strategy fails only if the set is singular and has no solution at all.

8.4 Roots of functions

8.4.1 Successive substitution

We want to solve the equation $F(x) = 0$, so we want to find the root $\alpha$ with $F(\alpha) = 0$. Many solutions are essentially the following:

1. Rewrite the equation in the form $x = f(x)$ so that a solution of this equation is also a solution of $F(x) = 0$. Further, $f(x)$ may not vary too much with respect to $x$ near $\alpha$.
2. Assume an initial estimation $x_0$ for $\alpha$ and obtain the series $x_n$ with $x_n = f(x_{n-1})$, in the hope that $\lim\limits_{n\to\infty} x_n = \alpha$.

Example: choose $f(x) = x - \varepsilon\,h(x)/g(x)$; then we can expect that the row $x_n$ with

$$x_0 = \beta$$
$$x_n = x_{n-1} - \varepsilon\frac{h(x_{n-1})}{g(x_{n-1})}$$

converges to $\alpha$.

8.4.2 Local convergence

Let $\alpha$ be a solution of $x = f(x)$ and let $x_n = f(x_{n-1})$ for a given $x_0$. Let $f'(x)$ be continuous in a neighbourhood of $\alpha$. Let $f'(\alpha) = A$ with $|A| < 1$. Then there exists a $\delta > 0$ so that for each $x_0$ with $|x_0 - \alpha| \le \delta$ holds:

1. $\lim\limits_{n\to\infty} x_n = \alpha$,
2. If $x_n \ne \alpha$ for all $n$, then holds:

$$\lim_{n\to\infty}\frac{\alpha - x_n}{\alpha - x_{n-1}} = A, \qquad \lim_{n\to\infty}\frac{x_n - x_{n-1}}{x_{n-1} - x_{n-2}} = A, \qquad \lim_{n\to\infty}\frac{\alpha - x_n}{x_n - x_{n-1}} = \frac{A}{1 - A}$$

The quantity $A$ is called the asymptotic convergence factor, and the quantity $B = -{}^{10}\!\log|A|$ is called the asymptotic convergence speed. If for a particular $k$ holds: $x_k = \alpha$, then for each $n \ge k$ holds that $x_n = \alpha$.
8.4.3 Aitken extrapolation

We define

$$A = \lim_{n\to\infty}\frac{x_n - x_{n-1}}{x_{n-1} - x_{n-2}}$$

$A$ converges to $f'(\alpha)$. Then the row

$$\alpha_n = x_n + \frac{A_n(x_n - x_{n-1})}{1 - A_n}$$

will converge to $\alpha$.

8.4.4 Newton iteration

There are more ways to transform $F(x) = 0$ into $x = f(x)$. One essential condition for them all is that in a neighbourhood of a root $\alpha$ holds that $|f'(x)| < 1$, and the smaller $|f'(x)|$, the faster the series converges. A general method to construct $f(x)$ is:

$$f(x) = x - \Phi(x)F(x)$$

with $\Phi(x) \ne 0$ in a neighbourhood of $\alpha$. If one chooses:

$$\Phi(x) = \frac{1}{F'(x)}$$

then this becomes Newton's method. The iteration formula then becomes:

$$x_n = x_{n-1} - \frac{F(x_{n-1})}{F'(x_{n-1})}$$

Some remarks:

- This same result can also be derived with Taylor series.
- Local convergence is often difficult to determine.
- If $x_n$ is far apart from $\alpha$ the convergence can sometimes be very slow.
- The assumption $F'(\alpha) \ne 0$ means that $\alpha$ is a simple root.

For $F(x) = x^k - a$ the series becomes:

$$x_n = \frac{1}{k}\left((k-1)x_{n-1} + \frac{a}{x_{n-1}^{k-1}}\right)$$

This is a well-known way to compute roots.

The following code finds the root of a function by means of Newton's method. The root lies within the interval [x1, x2]. The value is adapted until the accuracy is better than ±eps. The function funcd is a routine that returns both the function and its first derivative in point x in the passed pointers.

    float SolveNewton(void (*funcd)(float, float *, float *),
                      float x1, float x2, float eps)
    {
        int j, max_iter = 25;
        float df, dx, f, root;
        root = 0.5 * (x1 + x2);
        for (j = 1; j <= max_iter; j++)
        {
            (*funcd)(root, &f, &df);
            dx = f/df;
            root -= dx;
            if ( (x1 - root)*(root - x2) < 0.0 )
            {
                perror("Jumped out of brackets in SolveNewton.");
                exit(1);
            }
            if ( fabs(dx) < eps ) return root; /* Convergence */
        }
        perror("Maximum number of iterations exceeded in SolveNewton.");
        exit(1);
        return 0.0;
    }

8.4.5 The secant method

This is, in contrast to the two methods discussed previously, a two-step method. If two approximations $x_n$ and $x_{n-1}$ exist for a root, then one can find the next approximation with

$$x_{n+1} = x_n - F(x_n)\frac{x_n - x_{n-1}}{F(x_n) - F(x_{n-1})}$$

If $F(x_n)$ and $F(x_{n-1})$ have a different sign one is interpolating, otherwise extrapolating.

8.5 Polynomial interpolation

A base for polynomials of order $n$ is given by Lagrange's interpolation polynomials:

$$L_j(x) = \prod_{\substack{l=0 \\ l \ne j}}^{n}\frac{x - x_l}{x_j - x_l}$$

The following holds:

1. Each $L_j(x)$ has order $n$.
2. $L_j(x_i) = \delta_{ij}$ for $i, j = 0, 1, \ldots, n$.
3. Each polynomial $p(x)$ can be written uniquely as

$$p(x) = \sum_{j=0}^{n}c_j L_j(x) \quad\text{with}\quad c_j = p(x_j)$$

This is not a suitable method to calculate the value of a polynomial in a given point $x = a$. To do this, the Horner algorithm is more usable: the value $s = \sum_k c_k x^k$ in $x = a$ can be calculated as follows (the evaluation point a is passed as a parameter):

    float GetPolyValue(float c[], int n, float a)
    {
        int i;
        float s = c[n];
        for (i = n - 1; i >= 0; i--)
        {
            s = s * a + c[i];
        }
        return s;
    }

After it is finished s has value p(a).
8.6 Definite integrals

Almost all numerical methods are based on a formula of the type:

$$\int_a^b f(x)\,dx = \sum_{i=0}^{n}c_i f(x_i) + R(f)$$

with $n$, $c_i$ and $x_i$ independent of $f(x)$ and $R(f)$ the error, which has the form $R(f) = Cf^{(q)}(\xi)$ for all common methods. Here, $\xi \in (a, b)$ and $q \ge n + 1$. Often the points $x_i$ are chosen equidistant. Some common formulas are:

- The trapezoid rule: $n = 1$, $x_0 = a$, $x_1 = b$, $h = b - a$:

$$\int_a^b f(x)\,dx = \frac{h}{2}[f(x_0) + f(x_1)] - \frac{h^3}{12}f''(\xi)$$

- Simpson's rule: $n = 2$, $x_0 = a$, $x_1 = \frac{1}{2}(a+b)$, $x_2 = b$, $h = \frac{1}{2}(b-a)$:

$$\int_a^b f(x)\,dx = \frac{h}{3}[f(x_0) + 4f(x_1) + f(x_2)] - \frac{h^5}{90}f^{(4)}(\xi)$$

- The midpoint rule: $n = 0$, $x_0 = \frac{1}{2}(a+b)$, $h = b - a$:

$$\int_a^b f(x)\,dx = hf(x_0) + \frac{h^3}{24}f''(\xi)$$

The interval will usually be split up and the integration formulas be applied to the partial intervals if $f$ varies much within the interval.

A Gaussian integration formula is obtained when one wants to get both the coefficients $c_j$ and the points $x_j$ in an integral formula so that the integral formula gives exact results for polynomials of an order as high as possible. Two examples are:

1. Gaussian formula with 2 points:

$$\int_{-h}^{h}f(x)\,dx = h\left[f\left(\frac{-h}{\sqrt{3}}\right) + f\left(\frac{h}{\sqrt{3}}\right)\right] + \frac{h^5}{135}f^{(4)}(\xi)$$

2. Gaussian formula with 3 points:

$$\int_{-h}^{h}f(x)\,dx = \frac{h}{9}\left[5f\left(-h\sqrt{\tfrac{3}{5}}\right) + 8f(0) + 5f\left(h\sqrt{\tfrac{3}{5}}\right)\right] + \frac{h^7}{15750}f^{(6)}(\xi)$$

8.7 Derivatives

There are several formulas for the numerical calculation of $f'(x)$:

- Forward differentiation:

$$f'(x) = \frac{f(x+h) - f(x)}{h} - \tfrac{1}{2}hf''(\xi)$$
. explicit): zn+1 = zn + hf (xn . zn ) 1 hf (xn + 2 h. . y(x2 ). One of the possible 3rd order RungeKutta methods is given by: k1 k2 k3 zn+1 and the classical 4th order method is: k1 k2 k3 k4 zn+1 = = = = = hf (xn .. y) for x > x0 and initial condition y(x0 ) = x0 . They work so well because the solution y(x) can be written as: yn+1 = yn + hf (ξn . zn + 2 k1 ) 2 3 3 hf (xn + 4 h. zn+1 )) − h2 y (ξ) 2 h3 y (ξ) 3 h3 y (ξ) 12 RungeKutta methods are an important class of singlestep methods. zn for y(x1 )..Chapter 8: Numerical mathematics 57 • Backward differentiation: f (x) = • Central differentiation: f (x) − f (x − h) 1 + 2 hf (ξ) h f (x + h) − f (x − h) h2 − f (ξ) 2h 6 • The approximation is better if more function values are used: f (x) = f (x) = −f (x + 2h) + 8f (x + h) − 8f (x − h) + f (x − 2h) h4 (5) + f (ξ) 12h 30 There are also formulas for higher derivatives: f (x) = −f (x + 2h) + 16f (x + h) − 30f (x) + 16f (x − h) − f (x − 2h) h4 (6) + f (ξ) 12h2 90 8. y(xn ). Than we can derive some formulas to obtain zn+1 as approximation for y(xn+1 ): • Euler (single step. zn + k3 ) zn + 1 (k1 + 2k2 + 2k3 + k4 ) 6 = = = = hf (xn . explicit): zn+1 = zn−1 + 2hf (xn .8 Differential equations We start with the ﬁrst order DE y (x) = f (x. y) in well chosen points near the solution... xn+1 ) Because ξn is unknown some “measurements” are done on the increment function k = hf (x. y(ξn )) with ξn ∈ (xn . zn ) 1 hf (xn + 1 h.. . zn + 1 k1 ) 2 hf (xn + 1 h. Suppose we ﬁnd approximations z1 . z2 . Step doubling is most often used for 4th order RungeKutta. zn ) + • Trapezoid rule (single step.. zn + 1 k2 ) 2 2 hf (xn + h. zn + 4 k2 ) zn + 1 (2k1 + 3k2 + 4k3 ) 9 Often the accuracy is increased by adjusting the stepsize for each step with the estimated error. Than one takes for zn+1 − zn a weighted average of the measured values. zn ) + • Midpoint rule (two steps. zn ) + f (xn+1 . implicit): 1 zn+1 = zn + 2 h(f (xn .
8.9 The fast Fourier transform

The Fourier transform of a function can be approximated when some discrete points are known. Suppose we have $N$ successive samples $h_k = h(t_k)$ with $t_k = k\Delta$, $k = 0, 1, 2, \ldots, N-1$. Then the discrete Fourier transform is given by:

$$H_n = \sum_{k=0}^{N-1}h_k e^{2\pi ikn/N}$$

and the inverse Fourier transform by

$$h_k = \frac{1}{N}\sum_{n=0}^{N-1}H_n e^{-2\pi ikn/N}$$

This operation is order $N^2$. It can be faster, order $N\log_2(N)$, with the fast Fourier transform. The basic idea is that a Fourier transform of length $N$ can be rewritten as the sum of two discrete Fourier transforms, each of length $N/2$. One is formed from the even-numbered points of the original $N$, the other from the odd-numbered points.

This can be implemented as follows. The next routine replaces the values in data by their discrete Fourier transformed values if isign = 1, and by their inverse transformed values if isign = -1. nn must be a power of 2. The array data[1..2*nn] contains on the odd positions the real and on the even positions the imaginary parts of the input data: data[1] is the real part and data[2] the imaginary part of $f_0$, etc.

    #include <math.h>
    #define SWAP(a,b) tempr=(a);(a)=(b);(b)=tempr

    void FourierTransform(float data[], unsigned long nn, int isign)
    {
        unsigned long n, mmax, m, j, istep, i;
        double wtemp, wr, wpr, wpi, wi, theta;
        float tempr, tempi;

        n = nn << 1;
        j = 1;
        for (i = 1; i < n; i += 2) /* bit-reversal reordering of the data */
        {
            if ( j > i )
            {
                SWAP(data[j], data[i]);
                SWAP(data[j+1], data[i+1]);
            }
            m = n >> 1;
            while ( m >= 2 && j > m )
            {
                j -= m;
                m >>= 1;
            }
            j += m;
        }
        mmax = 2;
        while ( n > mmax ) /* Outermost loop, is executed log2(nn) times */
        {
            istep = mmax << 1;
            theta = isign * (6.28318530717959/mmax);
            wtemp = sin(0.5 * theta);
            wpr = -2.0 * wtemp * wtemp;
            wpi = sin(theta);
            wr = 1.0;
            wi = 0.0;
            for (m = 1; m < mmax; m += 2)
            {
                for (i = m; i <= n; i += istep) /* Danielson-Lanczos equation */
                {
                    j = i + mmax;
                    tempr = wr * data[j] - wi * data[j+1];
                    tempi = wr * data[j+1] + wi * data[j];
                    data[j] = data[i] - tempr;
                    data[j+1] = data[i+1] - tempi;
                    data[i] += tempr;
                    data[i+1] += tempi;
                }
                wr = (wtemp = wr) * wpr - wi * wpi + wr;
                wi = wi * wpr + wtemp * wpi + wi;
            }
            mmax = istep;
        }
    }