
Table of Contents

Chapter 1 List of articles:.....................................................................................................................4


Chapter 2 The Binomial Theorem........................................................................................................6
Paragraph 2.1 The Binomial Theorem............................................................................................6
2.1.1 A. Binomial Theorem for Positive Integral Index.............................................................6
2.1.2 B. Properties of the Binomial Expansion.......................................................................7
2.1.3 C. The Relative Magnitudes of the Binomial Coefficients................................................8
2.1.4 D. The Greatest Term in Binomial Expansion...................................................................8
2.1.5 E. Other Properties of the Binomial Coefficients..............................................................9
2.1.6 F. Expansion of Multinomials..........................................................................................10
2.1.7 G. Binomial Theorem for a Rational Index.....................................................................11
Chapter 3 Legendre polynomials ......................................................................................................14
Paragraph 3.1 Legendre polynomials ..........................................................................................14
3.1.1 Legendre polynomials......................................................................................................14
3.1.2 The associated Legendre Polynomials:............................................................................14
3.1.3 Orthogonality - associated Legendre functions...............................................................15
3.1.4 Integral needed for Bessel ..............................................................................................16
3.1.5 Spherical harmonics.........................................................................................................17
Chapter 4 Hydrogen atom..................................................................................................................17
Paragraph 4.1 The Hydrogen Atom..............................................................................................18
4.1.1 The equation....................................................................................................................18
4.1.2 The Radial equation.........................................................................................................22
Chapter 5 Bessel.................................................................................................................................25
Paragraph 5.1 Schrödinger equation in three dimensions............................................................25
5.1.1 Separation of variables & spherical harmonics ................................................................25
5.1.2 Radial equation................................................................................................................28
5.1.3 Stationary States Expressed in spherical Bessel functions............................................29
5.1.4 Relationship with regular Bessel functions......................................................................32
5.1.5 Generating Function of Spherical Bessel Functions........................................................33
5.1.6 Integral Representation of spherical Bessel Functions....................................................35
5.1.7 Rayleigh equation............................................................................................................35
5.1.8 Generating function, integer order, Jn (x)........................................................................35
5.1.9 Recurrence relations.........................................................................................................37
5.1.10 Bessel's differential equation ........................................................................................38
5.1.11 Integral representation....................................................................................................39
5.1.12 Completeness ................................................................................................................52
Chapter 6 Fourier Transform full derivation .....................................................................................55
Paragraph 6.1 Fourier series and their coefficients .....................................................................55
Paragraph 6.2 Complex Form of the Fourier Series.....................................................................59
Paragraph 6.3 Fourier Transform (for Non-Periodic Functions)...................................................60
6.3.1 Most important.................................................................................................................62
Paragraph 6.4 Deduction II ..........................................................................................................62
6.4.1 Fourier integral exponential form ...................................................................................63
6.4.2 Dirac Delta Function Derivation......................................................................................63
6.4.3 Fourier transforms............................................................................................................65
6.4.4 Cosine transform .............................................................................................................66
6.4.5 Sine transform .................................................................................................................66
Paragraph 6.5 Dirac delta function Electrostatic Enigma ............................................................67
6.5.1 Savelyev and Griffiths explanation on Dirac delta function..............................................73
Chapter 7 The most important for which I write all about things......................................................76
Paragraph 7.1 Fourier transform of damped oscillator.................................................................76
Paragraph 7.2 The Fourier transform of the exponential decay function ....................................77
Paragraph 7.3 Fourier transform of delta function ......................................................................77
Paragraph 7.4 Delta Dirac formulas.............................................................................................77
Chapter 8 Vectors...............................................................................................................................78
Paragraph 8.1 Vector areas ..........................................................................................................78
Paragraph 8.2 The scalar product ................................................................................................79
Paragraph 8.3 The vector product ................................................................................................81
Paragraph 8.4 The scalar triple product ......................................................................................83
Paragraph 8.5 The vector triple product ......................................................................................84
Paragraph 8.6 Line integrals ........................................................................................................85
Paragraph 8.7 Vector line integrals ..............................................................................................87
Paragraph 8.8 Surface integrals ...................................................................................................88
Paragraph 8.9 Vector surface integrals ........................................................................................89
Paragraph 8.10 Volume integrals .................................................................................................90
Paragraph 8.11 Gradient ..............................................................................................................91
Paragraph 8.12 Divergence ..........................................................................................................93
Paragraph 8.13 Curl .....................................................................................................................95
Paragraph 8.14 Notations: Hamiltonian operator (Del) ...............................................................98
Chapter 9 Electromagnetism..............................................................................................................99
Paragraph 9.1 General trick for polarisation, magnetisation, and current density........................99
9.1.1 Energy flux.......................................................................................................................99
9.1.2 Polarization....................................................................................................................100
9.1.3 Magnetisation.................................................................................................................101
9.1.4 Current density:..............................................................................................................103
9.1.5 Continuity equation........................................................................................................104
Paragraph 9.2 Polarization .........................................................................................................105
Paragraph 9.3 Magnetization .....................................................................................................107
Paragraph 9.4 Other derivation for polarization and magnetisation...........................................109
Paragraph 9.5 Gauss theorem derivation II...............................................................................109
9.5.1 Poisson equation & Laplace equation ...........................................................................111
9.5.2 Poisson equation for magnetic vector potential A..........................................................112
9.5.3 Coulomb's law from gradient potential and Biot-Savart law derivation from magnetic
vector potential .......................................................................................................................113
9.5.4 Lorentz force & Ampère Law .........................................................................................116
9.5.5 Ampère's circuital law....................................................................................................117
9.5.6 The torque of magnetic field, magnetic dipole moment & work.....................................120
9.5.7 Law of electromagnetic induction (induce electromotive force) induced e.m.f. or
Faraday's law............................................................................................................................121
9.5.8 Maxwell's Equations......................................................................................................123
9.5.9 MULTIPOLE EXPANSION..........................................................................................125
9.5.10 MULTIPOLE EXPANSION........................................................................................125
9.5.11 MULTIPOLE EXPANSION OF THE POTENTIAL..................................................125
9.5.12 Legendre......................................................................................................................129
9.5.13 Mathematics of spherical harmonics...........................................................................129
9.5.14 The dipole potential ....................................................................................................129
9.5.15 Green formula .............................................................................................................131
9.5.16 Magnetic multipoles Savelyev proof ..........................................................................133
9.5.17 Magnetic multipoles Heald proof :..............................................................................137
9.5.18 Magnetic multipoles Jackson proof : ..........................................................................141
9.5.19 Spin-spin interactions ..................................................................................................141
9.5.20 Hyperfine interactions (spin-nuclear)...........................................................................142
9.5.21 Organic Triplet State Molecules and the Dipolar Interaction......................................143
9.5.22 Spin-orbit interaction ...................................................................................................146
9.5.23 The Zeeman interactions..............................................................................................148
9.5.24 Legendre:.....................................................................................................................149
Chapter 10 Radiation........................................................................................................................151
Paragraph 10.1 Novotny & Hecht derivation.............................................................................151
10.1.1 Macroscopic electrodynamics......................................................................................151
10.1.2 Wave equations............................................................................................................152
10.1.3 Constitutive relations ..................................................................................................152
10.1.4 Spectral representation of time-dependent fields.........................................................153
10.1.5 Time-harmonic fields...................................................................................................153
10.1.6 Complex dielectric constant ........................................................................................154
10.1.7 Dyadic Green's functions.............................................................................................155
10.1.8 Mathematical basis of Green's functions ....................................................................155
10.1.9 Derivation of the Green's function for the electric field .............................................156
10.1.10 Time dependent Green's Functions............................................................................159
10.1.11 The radiating electric dipole ......................................................................................159
10.1.12 Electric dipole fields in a homogeneous space .........................................................160
Paragraph 10.2 Savelyev & Tamm & Born derivation...............................................................162
10.2.1 Field Produced by a System of Charges at Great Distances .......................................162
10.2.2 Dipole radiation Savelyev proof..................................................................................166
10.2.3 Dipole radiation Savelyev like Born proof including all terms..................................168
10.2.4 Dipole radiation Born proof ........................................................................................173
Paragraph 10.3 Kuno derivation.................................................................................................175
Paragraph 10.4 Agarwal & Berne derivation.............................................................................175
Chapter 11 ORDINARY DIFFERENTIAL EQUATIONS .............................................................175
Paragraph 11.1 Linear equations ...............................................................................................176
11.1.1 Method of varying the arbitrary constant. Linear equations. ......................................176
11.1.2 Bernoulli's and Riccati's equations...............................................................................178
Paragraph 11.2 Second order equations......................................................................................178
11.2.1 Second order equations................................................................................................178
11.2.2 Constant coefficients ...................................................................................................179
11.2.3 Non-homogeneous equations.......................................................................................182
11.2.4 Non-homogeneous linear equations of the second order with constant coefficients . 184
Chapter 1 List of articles:
[1] www.physics.hku.hk/~phys2325/notes/chap1.doc
[2] , Curs de matematica superioara, Chisinau 1971
[3] Eyring H. and G. Kimball, Quantum Chemistry, Singapore, New-York, London (1944)
[4] Riley K., M. Hobson, Mathematical Methods for Physics and Engineering: A Comprehensive
Guide, Second Edition, Cambridge 2002
[5] D. Long, The Raman Effect: A Unified Treatment of the Theory of Raman Scattering by
Molecules, (2002)
[6] L. Barron, Molecular Light Scattering and Optical Activity(2004)
[7] Eric C. Le Ru, Pablo G. Etchegoin, Principles of Surface Enhanced Raman Spectroscopy,
(2009)(bond- polarizability model Wolkenstein)
[8] D. Griffiths, Introduction to electrodynamics, New Jersey (1999)
[9] G. Arfken, H. Weber, Mathematical Methods For Physicists, International Student
Edition(2005)
[10] R. Feynman, The Feynman Lectures on Physics, V I,II, III, California (1964) (Ref. pag.:162)
[11] C. Gerry, Introductory Quantum Optics, Cambridge 2005
[12] E. Kartheuser , Elements de MECANIQUE QUANTIQUE , Liege 1998?!
[13] M. Kuno, Quantum mechanics, Notre Dame, USA 2008
[14] M. Kuno, Quantum spectroscopy, Notre Dame,USA 2006
[15] I. Savelyev, Fundamental of theoretical Physics vol. I,II, Moscow (1982)(Ref. Pag.:162, 166)
[16] I. Savelyev, Physics: A general course vol. I,III,III, Moscow (1980) (Ref. Pag.:162)
[17] .. . [ 5, 1]
[18] . . , M 1991
[19] A. Matveev , , Moscow 1989
[20] . , , 2001
[21] A. Messiah, Quantum mechanics
[22] E.B. Manoukian, Quantum Theory:A Wide Spectrum , Springer (2006)
[23] Smirnov V., Cours de Mathématique Supérieure, tome III, deuxième partie, Édition MIR,
Moscou 1972
[24] Secrieru V., Optica Prelegeri, Chisinau 2000
[25] Tikhonov A., A. Samarskii, Equations of Mathematical Physics, Moscow 1977
[26] http://farside.ph.utexas.edu/teaching/316/lectures/lectures.html
[27] Condon, E., Shortley G., The Theory of Atomic Spectra (1935)
[28] Heald M.A., Marion J.B. Classical electromagnetic radiation (3ed., Saunders, 1995)(ISBN
0030972779)(T)(S)(586s)
[29] Dykstra C., Chem. Rev., 1993, 93 (7), pp 2339-2353
[30] Stewart James Calculus Early Transcendentals (Stewart's Calculus Series) Brooks Cole (2007)
[31] .. , 2. , , (2 ., ,
1982)()()(496) (Ref. Pag.:133 )
[32] Rieger P., Electron Spin Resonance Analysis and Interpretation Royal Society of
Chemistry(2007)
[33] Lenef A., S. Rand , Electronic structure of the N-V center in diamond: Theory, Phys. Rev. B
53 13441(1996)
[34] Maze J., A. Gali, E. Togan, Y. Chu, A. Trifonov, E. Kaxiras, and M. Lukin, Properties of
nitrogen-vacancy centers in diamond: the group theoretical approach, New J. Phys. 13
025025 (2011)
[35] Doherty M., N. Manson, P. Delaney and L. Hollenberg , The negatively charged N-V center in
diamond: the electronic solution, arXiv:1008.5224(2010)
[36] W. Nolting, Quantum theory of magnetism, Springer-Verlag Berlin Heidelberg 2009
[37] Di Bartolo B., Optical interactions in solids,ISBN: 978-981-4295-74-1 (2010)
[38] Eisberg R., Resnick R., Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles
(1985)
[39] Raab R., de Lange O.,Multipole theory in electromagnetism Oxford University Press, USA
(2005)
[40] Schulten K., Notes on quantum mechanics, University of Illinois at Urbana-Champaign (2010)
[41] Shankar R., Principles of quantum mechanics, Springer(1994)
[42] Schweiger A., Principles of pulse electron paramagnetic resonance
[43] Morse P. and Feshbach H., Methods Of Theoretical Physics vol I , New-York Toronto London
(1953)
[44] Novotny L., Hecht B., Principles of Nano-Optics, Cambridge(2006)(Ref. Pag.: 151, 127)
[45] Hirose A., PHYS 812.3 Electromagnetic Theory, http://physics.usask.ca/~hirose/hirose.htm
[46] Baker M., (ch 10, 11) Green's Functions in Physics, (2003)
[47] Agarwal G.S. Quantum Optics Quantum Theories of Spontaneous Emission,(Springer Tracts
in Modern Physics) Springer(1974)
[48] Berne B., Pecora R., Dynamic Light Scattering, Wiley(1990)
[49] Scully M., Zubairy M., Quantum Optics, Cambridge(1997)
[50] Born M., Wolf E., Principles of Optics, CUP(2005)(Ref. Pag.:162)
[51] Jackson J.D., Classical Electrodynamics, Wiley(1962) (Ref. Pag.:159 , 162)
[52] Landau L., Lifshits: Classical Theory of Fields (radiation of E&M and gravity waves)
[53] ., , 2003 (Ref. Pag.:162)
[54] Hitoshi M., http://hitoshi.ipmu.jp/ see http://hitoshi.ipmu.jp/221B-S02.html
[55] Slater J.C. and Frank N., Introduction to Theoretical Physics, McGraw-Hill (1933)(ISBN
0070580901)(Ref. Pag.:162)
[56] Applequist J., J. Am. Chem. Soc., 1972, 94 (9), pp 2952, (Ref. pag.:127)
[57] Thole B, Chemical Physics (1981), 59 341 Holland,(Ref. Pag.:127 )
[58] Smirnov V.I., Higher mathematics, Vol II., Addison Wesley (1964)
[59] Allen L., Eberly J., Optical Resonance and Two-level Atoms, (Interscience monographs &
texts in physics & astronomy)(1975)
Chapter 2 The Binomial Theorem
Paragraph 2.1 The Binomial Theorem
2.1.1 A. Binomial Theorem for Positive Integral Index
If n is a positive integer, then

$$(x+a)^n = x^n + C^n_1 x^{n-1}a + C^n_2 x^{n-2}a^2 + \dots + C^n_r x^{n-r}a^r + \dots + C^n_n a^n,$$

where $C^n_r = \dfrac{n!}{r!\,(n-r)!}$. Equivalently,

$$(a+b)^n = \sum_{r=0}^{n} C^n_r\, a^{n-r} b^r .$$

The general term of the expansion, i.e. the (r + 1)th term, is $C^n_r\, a^{n-r} b^r$.

Any power of a binomial expression may be reduced to the product of a single term and a binomial power of the form $(1+x)^n$:

$$(a+b)^n = \left[a\left(1+\frac{b}{a}\right)\right]^n = a^n\left(1+\frac{b}{a}\right)^n = a^n(1+x)^n \quad\text{where } x=\frac{b}{a}.$$

In particular,

$$(1+x)^n = 1 + C^n_1 x + C^n_2 x^2 + \dots + C^n_r x^r + \dots + x^n ,$$

with general term ((r + 1)th term) $C^n_r x^r$.

Replacing $b$ by $-b$,

$$(a-b)^n = a^n - C^n_1 a^{n-1}b + C^n_2 a^{n-2}b^2 - \dots + (-1)^r C^n_r a^{n-r}b^r + \dots + (-1)^n b^n ,$$

with general term ((r + 1)th term) $(-1)^r C^n_r\, a^{n-r} b^r$.

Adding the two expansions, the odd powers of b cancel:

$$(a+b)^n + (a-b)^n = 2\left[a^n + C^n_2 a^{n-2}b^2 + C^n_4 a^{n-4}b^4 + \dots + L\right],$$

where

$$L = \begin{cases} C^n_{n-1}\,a\,b^{n-1} & \text{if } n \text{ is odd},\\ b^n & \text{if } n \text{ is even}. \end{cases}$$
2.1.2 B. Properties of the Binomial Expansion
In the expansion of $(a+b)^n$, the numbers $C^n_0, C^n_1, C^n_2, \dots, C^n_n$ are the binomial coefficients.
- The expansion contains (n + 1) terms.
- The binomial coefficients $C^n_r$ are all integers (from the theory of combinations).
- The coefficients of terms equidistant from the beginning and end of the expansion are equal, i.e.

$$C^n_0 = C^n_n = 1, \qquad C^n_1 = C^n_{n-1} = n, \qquad C^n_r = C^n_{n-r} = \frac{n!}{r!\,(n-r)!}$$

(from the fact that $(a+b)^n = (b+a)^n$, or by the definition of $C^n_r$).

There is an alternative notation for the binomial coefficients:

$$\binom{n}{r} = \frac{n(n-1)(n-2)\cdots(n-r+1)}{r!}.$$

Thus

$$\binom{n}{1} = n, \qquad \binom{n}{2} = \frac{n(n-1)}{2!}, \qquad \binom{n}{3} = \frac{n(n-1)(n-2)}{3!}.$$

We observe that if n is an integer, then $C^n_r = \binom{n}{r}$. If n is not an integer, $C^n_r$ is meaningless, while $\binom{n}{r}$ retains the meaning defined above.
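The distinction between $C^n_r$ and $\binom{n}{r}$ can be made concrete in code. A small sketch (the helper `gen_binom` is mine) implementing the product formula $n(n-1)\cdots(n-r+1)/r!$, which stays defined for fractional n:

```python
from math import comb

def gen_binom(n, r):
    """Generalized binomial coefficient n(n-1)...(n-r+1)/r!, valid for any real n."""
    num = 1.0
    for k in range(r):
        num *= (n - k)          # numerator product n(n-1)...(n-r+1)
    for k in range(1, r + 1):
        num /= k                # divide by r!
    return num

# For integer n the two notations agree; for fractional n only gen_binom is defined.
assert gen_binom(5, 2) == comb(5, 2)  # 10
print(gen_binom(0.5, 2))  # (1/2)(-1/2)/2! = -0.125
```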
EXAMPLE 1 Write down the general term in the expansion of $\left(x^2+\dfrac{1}{x}\right)^n$ in descending powers of x, and show therefrom that the powers of x which occur are alternately even and odd. Show that if a term involving $x^9$ occurs, then n must be a multiple of 3.
EXAMPLE 2 Find the first three terms in the expansion of $(1+x)^{pa}(1-x)^{p(a-1)}$, where p and a are positive integers. If the coefficients of $x$ and $x^2$ are equal, find an expression for p in terms of a.
EXAMPLE 3 If $(1+x^2)^2(1+x)^n = C_0 + C_1x + C_2x^2 + C_3x^3 + \dots$, and $C_0$, $C_1$, $C_2$ are in arithmetic progression, show that there are two possible values of n and find them.

EXAMPLE 4 Prove that, if the coefficients of three consecutive powers of x in the expansion of $(1+x)^n$, where n denotes a positive integer, are in arithmetic progression, then (n + 2) must be the square of an integer. Find the coefficients when n = 7.
EXAMPLE 5 Find the term independent of x in the expansion of $\left(\dfrac{1}{2}x - \dfrac{1}{x^2}\right)^6$. Find also the rational part of $\left(1+\sqrt[3]{2}\right)^9$.

EXAMPLE 6 If $x+\dfrac{1}{x}=1$, prove that $x^7+\dfrac{1}{x^7}=1$.
EXAMPLE 7 (Vision Algebra P.156 Ex.5.2 Q.5)
EXAMPLE 8 Prove that the coefficients of $x^2$ and $x^3$ in the expansion of $(2+2x+x^2)^n$ are $2^{n-1}n^2$ and $\dfrac{2^{n-1}n(n^2-1)}{3}$ respectively, where $n\in\mathbb{Z}$.
2.1.3 C. The Relative Magnitudes of the Binomial Coefficients
For the expansion $(a+b)^n$, $C^n_r$ is the binomial coefficient of the (r + 1)th term.

1. If n is even, the greatest coefficient is
$$C^n_{n/2} = \frac{n!}{\left[\left(\tfrac{1}{2}n\right)!\right]^2} \qquad\text{(coefficient of the middle term).}$$
2. If n is odd, the greatest coefficients are
$$C^n_{(n-1)/2} = C^n_{(n+1)/2} = \frac{n!}{\left(\tfrac{1}{2}n+\tfrac{1}{2}\right)!\,\left(\tfrac{1}{2}n-\tfrac{1}{2}\right)!} \qquad\text{(coefficients of the two middle terms).}$$
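Both cases can be verified by brute force. A short sketch (function names are mine), comparing the maximum over all coefficients with the closed middle-term formulas:

```python
from math import comb, factorial

def greatest_coefficient(n):
    """Largest C(n, r), i.e. the middle coefficient(s) of (a+b)^n."""
    return max(comb(n, r) for r in range(n + 1))

# n even: n! / ((n/2)!)^2 ; n odd: n! / (((n+1)/2)! * ((n-1)/2)!)
assert greatest_coefficient(6) == factorial(6) // factorial(3)**2                 # 20
assert greatest_coefficient(7) == factorial(7) // (factorial(4) * factorial(3))   # 35
print(greatest_coefficient(6), greatest_coefficient(7))
```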
2.1.4 D. The Greatest Term in Binomial Expansion
For the expansion $(1+x)^n$, x > 0, let $u_r$ denote the rth term:

$$u_{r+1} = \frac{n(n-1)(n-2)\cdots(n-r+1)}{r!}\,x^r , \qquad u_r = \frac{n(n-1)(n-2)\cdots(n-r+2)}{(r-1)!}\,x^{r-1} .$$

Hence

$$\frac{u_{r+1}}{u_r} = \frac{(n-r+1)}{r}\,x ,$$

and

$$\frac{u_{r+1}}{u_r} \ge 1 \iff (n-r+1)x \ge r \iff r(1+x) \le (n+1)x \iff r \le \frac{(n+1)x}{1+x} .$$

If $\dfrac{(n+1)x}{1+x}$ is an integer, say p, then the maximum terms are $u_p$ and $u_{p+1}$ (since $u_1, \dots, u_p$ increase, $u_{p+1} = u_p$, and $u_{p+1}, \dots, u_n$ decrease thereafter).

If $\dfrac{(n+1)x}{1+x}$ is not an integer, denote the integral part of the expression by q; then the maximum term is $u_{q+1}$ (since $u_1, \dots, u_q, u_{q+1}$ increase and $u_{q+2}, \dots, u_n$ decrease).
EXAMPLE 1 Find the greatest term in the expansion of $(1+x)^n$ in ascending powers of x when n = 14, x = 0.2, and show that the eleventh term is one-tenth of the tenth term.

EXAMPLE 2 Find the term independent of x in the expansion of $\left(x+\dfrac{1}{x^3}\right)^{4n}$ in ascending powers of x, and show that this is the greatest term provided

$$\frac{3n}{n+1} < x^4 < \frac{3n+1}{n} .$$
EXAMPLE 3 (Vision Algebra P.168 Ex.5.4 Q.3)
2.1.5 E. Other Properties of the Binomial Coefficients
$$(1+x)^n = \sum_{r=0}^{n} C^n_r\, x^r .$$

5. The sum of all the coefficients is $2^n$, i.e. $\displaystyle\sum_{r=0}^{n} C^n_r = 2^n$.

PROOF In $(1+x)^n = \sum_{r=0}^{n} C^n_r x^r$, put x = 1; then $\sum_{r=0}^{n} C^n_r = 2^n$.

6. The sum of the even coefficients is equal to the sum of the odd coefficients, each being equal to $2^{n-1}$.

PROOF

$$(1-x)^n = C^n_0 - C^n_1 x + C^n_2 x^2 - \dots + (-1)^n C^n_n x^n .$$

Put x = 1:

$$0 = \left(C^n_0 + C^n_2 + C^n_4 + \dots\right) - \left(C^n_1 + C^n_3 + C^n_5 + \dots\right),$$

i.e. (sum of even coefficients) equals (sum of odd coefficients). Since the two sums together make $2^n$,

$$\text{sum of even coefficients} = \text{sum of odd coefficients} = \frac{2^n}{2} = 2^{n-1} .$$
7. Vandermonde's Theorem:

$$C^{m+n}_r = C^m_r + C^m_{r-1}C^n_1 + C^m_{r-2}C^n_2 + \dots + C^m_{r-s}C^n_s + \dots + C^n_r .$$

This may be written as

$$\binom{m+n}{r} = \binom{m}{r} + \binom{m}{r-1}\binom{n}{1} + \binom{m}{r-2}\binom{n}{2} + \dots + \binom{m}{r-s}\binom{n}{s} + \dots + \binom{n}{r} .$$

PROOF

$$(1+x)^{m+n} = 1 + C^{m+n}_1 x + C^{m+n}_2 x^2 + \dots + C^{m+n}_r x^r + \dots$$

On the other hand,

$$(1+x)^{m+n} = (1+x)^m(1+x)^n = \left(1 + C^m_1 x + C^m_2 x^2 + \dots + C^m_r x^r + \dots\right)\left(1 + C^n_1 x + C^n_2 x^2 + \dots + C^n_r x^r + \dots\right).$$

Equating the coefficients of $x^r$,

$$C^{m+n}_r = C^m_r + C^m_{r-1}C^n_1 + C^m_{r-2}C^n_2 + \dots + C^n_r .$$
EXAMPLE 1 Let $c_r$ denote the coefficient of $x^r$ in the binomial expansion of $(1+x)^n$, where n is a positive integer. Sum the following series.

(a) $c_0 + 2c_1 + 3c_2 + \dots + (n+1)c_n$,
(b) $c_0^2 + c_1^2 + c_2^2 + \dots + c_n^2$,
(c) $c_0c_1 + c_1c_2 + \dots + c_{n-1}c_n$,
(d) $c_1^2 + 2c_2^2 + 3c_3^2 + \dots + nc_n^2$ for n > 2,
(e) $c_0 + \dfrac{c_1}{2} + \dfrac{c_2}{3} + \dots + \dfrac{c_n}{n+1}$,
(f) $c_1 - \dfrac{c_2}{2} + \dfrac{c_3}{3} - \dots + (-1)^{n-1}\dfrac{c_n}{n}$,
(g) $c_0 + 3c_1 + 5c_2 + \dots + (2n+1)c_n$,
(h) $\dfrac{c_1}{c_0} + \dfrac{2c_2}{c_1} + \dfrac{3c_3}{c_2} + \dots + \dfrac{nc_n}{c_{n-1}}$.
EXAMPLE 2 Show that

$$c_0 + 3c_1x + 5c_2x^2 + \dots + (2n+1)c_nx^n = \left[1+(2n+1)x\right](1+x)^{n-1},$$

where $c_r$ denotes the coefficient of $x^r$ in the expansion of $(1+x)^n$ by the binomial theorem.
Hence show that $c_0^2 + 3c_1^2 + 5c_2^2 + \dots + (2n+1)c_n^2$ is equal to the coefficient of $x^n$ in the expansion of $\left[1+(2n+1)x\right](1+x)^{2n-1}$, and that its value is $\dfrac{2(n+1)(2n-1)!}{n!\,(n-1)!}$.
EXAMPLE 3 (Vision Algebra P.165 Ex.5.3 Q.15)
EXAMPLE 4 If $C^n_r$ denotes the number of combinations of n things taken r at a time, establish the following results by considering $(1+x)^{2n} = \left[(1+x)^2\right]^n = (1+2x+x^2)^n$:

$$C^{2n}_r = C^n_0C^n_r + C^n_1C^n_{r-1} + C^n_2C^n_{r-2} + \dots + C^n_rC^n_0$$

and

$$C^{2n}_r = 2^rC^n_r + 2^{r-2}C^n_{r-2}C^{n-r+2}_1 + 2^{r-4}C^n_{r-4}C^{n-r+4}_2 + \dots ,$$

the last term being $C^n_{r/2}$ or $2\,C^n_1\,C^{n-1}_{(r-1)/2}$ according as r is even or odd.
2.1.6 F. Expansion of Multinomials
An expression consisting of three or more terms may be raised to any positive integral power by
repeated application of the binomial theorem.
EXAMPLE Expand $(1+x+x^2)^n$.
10
THEOREM (Multinomial Theorem)
Let $a+b+c+\dots+K$ denote any polynomial and n be a positive integer. Then

$$(a+b+c+\dots+K)^n = \sum_{\alpha+\beta+\dots+\kappa=n} \frac{n!}{\alpha!\,\beta!\cdots\kappa!}\, a^{\alpha} b^{\beta} \cdots K^{\kappa},$$

where the sum on the right contains one term for each set of values $\alpha, \beta, \dots, \kappa$ that can be selected from 0, 1, 2, …, n such that $\alpha+\beta+\dots+\kappa = n$.
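The multinomial theorem can be tested by summing every admissible exponent tuple. A brute-force sketch (function name is mine; fine for small n and few terms):

```python
from math import factorial
from itertools import product

def multinomial_expand(values, n):
    """Sum n!/(k1!...km!) * a1^k1 * ... * am^km over all tuples with k1+...+km = n."""
    m = len(values)
    total = 0
    for ks in product(range(n + 1), repeat=m):
        if sum(ks) != n:
            continue
        coeff = factorial(n)
        for k in ks:
            coeff //= factorial(k)          # multinomial coefficient n!/(k1!...km!)
        term = coeff
        for v, k in zip(values, ks):
            term *= v**k
        total += term
    return total

assert multinomial_expand([1, 2, 3], 4) == (1 + 2 + 3)**4  # 1296
print(multinomial_expand([1, 2, 3], 4))
```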
2.1.7 G. Binomial Theorem for a Rational Index
Write

$$f(n) = \sum_{r=0}^{\infty}\binom{n}{r}x^r , \qquad f(m) = \sum_{r=0}^{\infty}\binom{m}{r}x^r .$$

The series converges absolutely when |x| < 1 and diverges for |x| > 1. Consider the product

$$f(m)\,f(n) = \left[\sum_{r=0}^{\infty}\binom{m}{r}x^r\right]\left[\sum_{r=0}^{\infty}\binom{n}{r}x^r\right].$$

The coefficient of $x^r$ in the product is

$$\binom{n}{r} + \binom{m}{1}\binom{n}{r-1} + \binom{m}{2}\binom{n}{r-2} + \dots + \binom{m}{r} = \sum_{p=0}^{r}\binom{m}{p}\binom{n}{r-p} = \binom{m+n}{r}$$

by Vandermonde's Theorem. Hence

$$f(m)\,f(n) = \sum_{r=0}^{\infty}\binom{m+n}{r}x^r = f(m+n) .$$

Similarly, $f(m)\,f(n)\,f(u)\cdots = f(m+n+u+\cdots)$.
THEOREM For |x| < 1 and any rational number n,

$$(1+x)^n = \sum_{r=0}^{\infty}\binom{n}{r}x^r .$$

PROOF
If n is a positive integer,

$$\binom{n}{r} = \frac{n!}{r!\,(n-r)!} = C^n_r ,$$

so

$$f(n) = \sum_{r=0}^{\infty}\binom{n}{r}x^r = 1 + C^n_1x + C^n_2x^2 + \dots + C^n_nx^n = (1+x)^n ,$$

the series terminating because $\binom{n}{r}=0$ for r > n.

Now suppose $n = \dfrac{p}{q}$ denotes any positive rational number, q and p being positive integers. Take $m = n = u = \dots = \dfrac{p}{q}$, q values in all being considered. Then

$$f\!\left(\frac{p}{q}\right)f\!\left(\frac{p}{q}\right)\cdots f\!\left(\frac{p}{q}\right) = \left[f\!\left(\frac{p}{q}\right)\right]^{q} = f\!\left(q\cdot\frac{p}{q}\right) = f(p) = (1+x)^p$$

(since p is a positive integer). Therefore

$$f\!\left(\frac{p}{q}\right) = (1+x)^{p/q} ,$$

i.e. $(1+x)^n = \displaystyle\sum_{r=0}^{\infty}\binom{n}{r}x^r$ for |x| < 1 and any positive rational n.

For n a negative rational number, put $m = -n$ into $f(m+n) = f(m)f(n)$; we have

$$f(n)\,f(-n) = f(0) = 1 \quad\Longrightarrow\quad f(-n) = \frac{1}{f(n)} .$$

Take $n = -\dfrac{p}{q}$; then, since $\dfrac{p}{q} > 0$,

$$f\!\left(-\frac{p}{q}\right) = \frac{1}{f\!\left(\frac{p}{q}\right)} = \frac{1}{(1+x)^{p/q}} = (1+x)^{-p/q} .$$

Hence $(1+x)^n = f(n)$, i.e.

$$(1+x)^n = \sum_{r=0}^{\infty}\binom{n}{r}x^r \qquad\text{for } |x|<1, \text{ where } n \text{ is any rational number.}$$
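The convergence claimed by the theorem can be seen numerically for a fractional index. A sketch (function name is mine) that builds the coefficients by the recursion $\binom{n}{r+1} = \binom{n}{r}\frac{n-r}{r+1}$ and compares the partial sum with the exact power:

```python
def binom_series(n, x, terms=60):
    """Partial sum of (1+x)^n = sum_r C(n, r) x^r, valid for |x| < 1 and rational n."""
    total, coeff = 0.0, 1.0
    for r in range(terms):
        total += coeff * x**r
        coeff *= (n - r) / (r + 1)   # C(n, r+1) = C(n, r) * (n-r)/(r+1)
    return total

# n = 1/2, x = 0.5: the series should approach sqrt(1.5).
approx = binom_series(0.5, 0.5)
assert abs(approx - 1.5**0.5) < 1e-9
print(approx)
```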
REMARK For $(a+b)^n$ we may write

$$(a+b)^n = a^n\left(1+\frac{b}{a}\right)^n = a^n\sum_{r=0}^{\infty}\binom{n}{r}\left(\frac{b}{a}\right)^r \quad\text{with } \left|\frac{b}{a}\right|<1, \text{ i.e. } |a|>|b|,$$

or

$$(a+b)^n = b^n\left(1+\frac{a}{b}\right)^n = b^n\sum_{r=0}^{\infty}\binom{n}{r}\left(\frac{a}{b}\right)^r \quad\text{with } \left|\frac{a}{b}\right|<1, \text{ i.e. } |b|>|a| .$$
Particular Values of n

$$(1-x)^{-n} = \sum_{r=0}^{\infty}\binom{-n}{r}(-x)^r , \qquad |x|<1, \ x \text{ and } n \text{ positive.}$$

The coefficient of $x^r$ is

$$(-1)^r\binom{-n}{r} = (-1)^r\,\frac{(-n)(-n-1)\cdots(-n-r+1)}{r!} = \frac{n(n+1)\cdots(n+r-1)}{r!} > 0 .$$

$$(1+x)^{\frac{1}{2}} = \sum_{r=0}^{\infty}\binom{\tfrac12}{r}x^r = 1 + \frac{1}{2}x + \sum_{r=2}^{\infty}\frac{\tfrac12\left(-\tfrac12\right)\left(-\tfrac32\right)\cdots\left(\tfrac12-r+1\right)}{r!}x^r = 1 + \frac{x}{2} + \sum_{r=2}^{\infty}(-1)^{r-1}\,\frac{1\cdot3\cdot5\cdots(2r-3)}{2^r\,r!}\,x^r \tag{2.1.1}$$
$$(1+x)^{-\frac{1}{2}} = \sum_{r=0}^{\infty}\binom{-\tfrac12}{r}x^r = 1 + \sum_{r=1}^{\infty}\frac{\left(-\tfrac12\right)\left(-\tfrac32\right)\left(-\tfrac52\right)\cdots\left(-\tfrac12-r+1\right)}{r!}x^r = 1 + \sum_{r=1}^{\infty}(-1)^r\,\frac{1\cdot3\cdot5\cdots(2r-1)}{2^r\,r!}\,x^r \tag{2.1.2}$$

$$(1-x)^{-\frac{1}{2}} = 1 + \sum_{r=1}^{\infty}\frac{\tfrac12\cdot\tfrac32\cdot\tfrac52\cdots\left(\tfrac12+r-1\right)}{r!}\,x^r = 1 + \sum_{r=1}^{\infty}\frac{1\cdot3\cdot5\cdots(2r-1)}{2^r\,r!}\,x^r \tag{2.1.3}$$
$$(1-x)^{-\frac{1}{3}} = 1 + \sum_{r=1}^{\infty}\frac{\tfrac13\cdot\tfrac43\cdot\tfrac73\cdots\left(\tfrac13+r-1\right)}{r!}\,x^r = 1 + \sum_{r=1}^{\infty}\frac{1\cdot4\cdot7\cdots(3r-2)}{3^r\,r!}\,x^r \tag{2.1.5}$$

$$(1-x)^{-1} = \sum_{r=0}^{\infty}x^r \tag{2.1.6}$$

$$(1-x)^{-2} = \sum_{r=0}^{\infty}\frac{2\cdot3\cdots(r+1)}{r!}\,x^r = \sum_{r=0}^{\infty}(r+1)\,x^r \tag{2.1.7}$$

$$(1-x)^{-3} = \sum_{r=0}^{\infty}\frac{3\cdot4\cdots(r+2)}{r!}\,x^r = \sum_{r=0}^{\infty}\frac{(r+1)(r+2)}{2}\,x^r \tag{2.1.8}$$

$$(1-x)^{-3} = \sum_{r=0}^{\infty}\frac{(r+1)(r+2)}{2}\,x^r = 1 + 3x + 6x^2 + 10x^3 + \dots \tag{2.1.9}$$
13
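These expansions are easy to sanity-check numerically. A minimal Python sketch (the helper names binom_coeff and binom_series are mine, not from the text): compute the generalized binomial coefficients C(n,r) and compare a truncated series for (1+x)^{1/2} with the exact value.

```python
def binom_coeff(n, r):
    """Generalized binomial coefficient n(n-1)...(n-r+1)/r! for rational n."""
    c = 1.0
    for k in range(r):
        c *= (n - k) / (k + 1)
    return c

def binom_series(n, x, terms=60):
    """Truncated binomial series for (1+x)^n, valid for |x| < 1."""
    return sum(binom_coeff(n, r) * x ** r for r in range(terms))

x = 0.3
approx = binom_series(0.5, x)        # series for (1+x)^(1/2)
exact = (1 + x) ** 0.5
# (2.1.1) gives the r = 3 coefficient as (-1)^2 * (1*3)/(2^3 * 3!) = 1/16
coeff3 = binom_coeff(0.5, 3)
```

For |x| ≥ 1 the same partial sums diverge, which illustrates why the condition |x| < 1 is part of the theorem.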
Chapter 3 Legendre polynomials
Paragraph 3.1 Legendre polynomials
3.1.1 Legendre polynomials
See Eyring [3], p. 52.
Consider the equation
(1 − x²) dy/dx + 2nxy = 0.   (3.1.1)
If we write this in the form
dy/y = −2nx dx/(1 − x²),   (3.1.2)
it may be immediately integrated to give
y = c (1 − x²)^n.   (3.1.3)
If we differentiate (3.1.1) (n+1) times, the result is
(1 − x²) d^{n+2}y/dx^{n+2} − 2x d^{n+1}y/dx^{n+1} + n(n+1) d^n y/dx^n = 0,   (3.1.4)
which may be written as
(1 − x²) d²z/dx² − 2x dz/dx + n(n+1) z = 0,   (3.1.5)
where
z = d^n y/dx^n = c d^n/dx^n (1 − x²)^n.   (3.1.6)
Equation (3.1.5) is known as Legendre's equation. The particular solution is
z = P_n(x) = [1/(2^n n!)] d^n/dx^n (x² − 1)^n.   (3.1.7)
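The Rodrigues formula (3.1.7) can be verified directly by differentiating the polynomial (x² − 1)^n term by term; a small sketch in Python (standard library only; legendre_rodrigues is my own name):

```python
from math import comb, factorial

def legendre_rodrigues(n, x):
    """P_n(x) via (3.1.7): expand (x^2-1)^n = sum_k C(n,k)(-1)^{n-k} x^{2k},
    differentiate n times, then divide by 2^n n!."""
    total = 0.0
    for k in range(n + 1):
        p = 2 * k                              # power of x before differentiating
        if p < n:
            continue                           # n-th derivative kills lower powers
        c = comb(n, k) * (-1) ** (n - k)
        c *= factorial(p) // factorial(p - n)  # d^n/dx^n x^p = p!/(p-n)! x^{p-n}
        total += c * x ** (p - n)
    return total / (2 ** n * factorial(n))

x = 0.4
p2 = legendre_rodrigues(2, x)   # should equal (3x^2 - 1)/2
p3 = legendre_rodrigues(3, x)   # should equal (5x^3 - 3x)/2
```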
3.1.2 The associated Legendre Polynomials:
If eq. (3.1.1) is differentiated (m+n+1) times we obtain
(1 − x²) d^{m+n+2}y/dx^{m+n+2} − 2(m+1)x d^{m+n+1}y/dx^{m+n+1} + (m+n+1)(n−m) d^{m+n}y/dx^{m+n} = 0,   (3.1.8)
where now
z = d^{m+n}y/dx^{m+n} = c d^{m+n}/dx^{m+n} (1 − x²)^n,  proportional to d^m/dx^m P_n(x).   (3.1.9)
Let us now put
z = u (1 − x²)^{−m/2}.   (3.1.10)
Then
dz/dx = [du/dx + mxu/(1 − x²)] (1 − x²)^{−m/2},
d²z/dx² = [d²u/dx² + (2mx/(1 − x²)) du/dx + (m/(1 − x²) + m(m+2)x²/(1 − x²)²) u] (1 − x²)^{−m/2},
so that the differential equation for u is
(1 − x²) d²u/dx² − 2x du/dx + [n(n+1) − m²/(1 − x²)] u = 0.   (3.1.11)
This equation is known as the associated Legendre equation, and the function u, denoted by u = P_n^m(x), is called the associated Legendre polynomial of degree n and order m. We have
P_n^m(x) = (1 − x²)^{m/2} z = (1 − x²)^{m/2} d^m/dx^m P_n(x),   (3.1.12)
or, using (3.1.7),
P_n^m(x) = [(1 − x²)^{m/2}/(2^n n!)] d^{n+m}/dx^{n+m} (x² − 1)^n.   (3.1.13)
3.1.3 Orthogonality - associated Legendre functions
Let us consider the integral
∫_{−1}^{+1} |P_n^m(x)|² dx = ∫_{−1}^{+1} (1 − x²)^m (d^m P_n/dx^m)(d^m P_n/dx^m) dx
= [(1 − x²)^m (d^m P_n/dx^m)(d^{m−1}P_n/dx^{m−1})]_{x=−1}^{x=+1} − ∫_{−1}^{+1} (d^{m−1}P_n/dx^{m−1}) (d/dx)[(1 − x²)^m d^m P_n/dx^m] dx.   (3.1.14)
If (3.1.1) is differentiated (m+n) times and the result multiplied by (1 − x²)^{m−1}, we obtain
(1 − x²)^m d^{m+1}P_n/dx^{m+1} − 2mx (1 − x²)^{m−1} d^m P_n/dx^m + (n+m)(n−m+1)(1 − x²)^{m−1} d^{m−1}P_n/dx^{m−1} = 0,   (3.1.15)
which is equivalent to
(d/dx)[(1 − x²)^m d^m P_n/dx^m] = −(n+m)(n−m+1)(1 − x²)^{m−1} d^{m−1}P_n/dx^{m−1}.   (3.1.16)
Substituting this result in (3.1.14) (the integrated term vanishes at x = ±1), we find
∫_{−1}^{+1} |P_n^m(x)|² dx = (n+m)(n−m+1) ∫_{−1}^{+1} (1 − x²)^{m−1} |d^{m−1}P_n/dx^{m−1}|² dx = (n+m)(n−m+1) ∫_{−1}^{+1} |P_n^{m−1}(x)|² dx.   (3.1.17)
If this process is continued we finally arrive at the result
∫_{−1}^{+1} |P_n^m(x)|² dx = [(n+m)!/(n−m)!] ∫_{−1}^{+1} |P_n(x)|² dx.   (3.1.18)
This last integral can be evaluated by means of the explicit expression (3.1.7) for P_n(x). We have
∫_{−1}^{+1} |P_n(x)|² dx = [1/(2^n n!)]² ∫_{−1}^{+1} [d^n/dx^n (x² − 1)^n][d^n/dx^n (x² − 1)^n] dx.   (3.1.19)
Integrating by parts n times, this reduces to
∫_{−1}^{+1} |P_n(x)|² dx = [(−1)^n/(2^n n!)²] ∫_{−1}^{+1} (x² − 1)^n d^{2n}/dx^{2n} (x² − 1)^n dx
= [(−1)^n/(2^n n!)²] ∫_{−1}^{+1} (x² − 1)^n (2n)! dx = [(2n)!/(2^n n!)²] ∫_{−1}^{+1} (1 − x)^n (1 + x)^n dx.   (3.1.20)
Since
∫_{−1}^{+1} (1 − x)^n (1 + x)^n dx = [n/(n+1)] ∫_{−1}^{+1} (1 − x)^{n−1}(1 + x)^{n+1} dx = [n(n−1)⋯1/((n+1)(n+2)⋯2n)] ∫_{−1}^{+1} (1 + x)^{2n} dx = [(n!)²/(2n)!] · 2^{2n+1}/(2n+1),   (3.1.21)
Eq. (3.1.18) may be written as
∫_{−1}^{+1} |P_n^m(x)|² dx = [(n+m)!/(n−m)!] · 2/(2n+1),   (3.1.22)
and for m = 0
∫_{−1}^{+1} |P_n(x)|² dx = 2/(2n+1).   (3.1.23)
So the normalised functions are
√[((2n+1)/2)((n−m)!/(n+m)!)] P_n^m(x).   (3.1.24)
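Assuming SciPy is available, the normalization (3.1.22) can be checked numerically (scipy.special.lpmv evaluates P_n^m; its Condon-Shortley sign drops out on squaring):

```python
from math import factorial
from scipy.integrate import quad
from scipy.special import lpmv

def norm_integral(n, m):
    """Numerical int_{-1}^{1} [P_n^m(x)]^2 dx."""
    val, _ = quad(lambda x: lpmv(m, n, x) ** 2, -1.0, 1.0)
    return val

def predicted(n, m):
    """(3.1.22): (n+m)!/(n-m)! * 2/(2n+1)."""
    return factorial(n + m) / factorial(n - m) * 2.0 / (2 * n + 1)

err = max(abs(norm_integral(n, m) - predicted(n, m))
          for n in range(5) for m in range(n + 1))
```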
3.1.4 Integral needed for Bessel
See [40], p. 112.
Using the new integration variable x = cos θ,
∫₀^π dθ sin^{2l+1}θ = ∫_{−1}^{+1} dx (1 − x²)^l = [x(1 − x²)^l]_{−1}^{+1} + 2l ∫_{−1}^{+1} dx x² (1 − x²)^{l−1}
= 2l [ (x³/3) x(1 − x²)^{l−1} ]_{−1}^{+1} + [2l(2l−2)/(1·3)] ∫_{−1}^{+1} dx x⁴ (1 − x²)^{l−2}
= ⋯ = [2l(2l−2)⋯2/(1·3·5⋯(2l−1))] ∫_{−1}^{+1} dx x^{2l}
= [2l(2l−2)⋯2/(1·3·5⋯(2l−1))] · 2/(2l+1) = (2l)!/[1·3·5⋯(2l−1)]² · 2/(2l+1).   (3.1.25)
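A quick numerical check of (3.1.25), writing the right-hand side in the equivalent form 2^{2l+1}(l!)²/(2l+1)! (sketch, assuming SciPy for the quadrature):

```python
from math import sin, pi, factorial
from scipy.integrate import quad

def numeric(l):
    """Numerical int_0^pi sin^{2l+1}(theta) dtheta."""
    val, _ = quad(lambda t: sin(t) ** (2 * l + 1), 0.0, pi)
    return val

def closed_form(l):
    """(3.1.25) rewritten: 2^{2l+1} (l!)^2 / (2l+1)!."""
    return 2.0 ** (2 * l + 1) * factorial(l) ** 2 / factorial(2 * l + 1)

err = max(abs(numeric(l) - closed_form(l)) for l in range(6))
```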
3.1.5 Spherical harmonics
Y_{lm}(θ, φ) = √[((2l+1)/4π)((l−m)!/(l+m)!)] · [(−1)^m/(2^l l!)] sin^m θ (d/d cos θ)^{l+m} (cos²θ − 1)^l e^{imφ}.   (3.1.26)
Chapter 4 Hydrogen atom
Paragraph 4.1 The Hydrogen Atom.
4.1.1 The Φ equation.
The first equation we want to solve is
d²Φ/dφ² = −m²Φ.   (4.1.1)
This equation is of familiar form; recall that for the free particle we had
d²ψ/dx² = −k²ψ,
for which the solution is
ψ(x) = a₀ cos kx + a₁ sin kx.
Since e^{±ix} = cos x ± i sin x, a more general solution to equations of this type is
Φ = A e^{imφ} + B e^{−imφ}.
In order that
Φ(φ) = Φ(φ + 2π)
(the value of Φ at some value of φ must be the same at φ + 2π, since Φ is periodic), it is necessary that
A e^{imφ} + B e^{−imφ} = A e^{im(φ+2π)} + B e^{−im(φ+2π)} = A e^{imφ} e^{im2π} + B e^{−imφ} e^{−im2π}.
Since e^{±im2π} = cos(2πm) ± i sin(2πm) = 1 only when m = 0, ±1, ±2, …, the equation has solutions
Φ = A e^{imφ},  m = 0, ±1, ±2, …
We can determine A by requiring that the wavefunctions be normalized:
1 = ∫₀^{2π} Φ*Φ dφ = A² ∫₀^{2π} e^{−imφ} e^{imφ} dφ = A² ∫₀^{2π} dφ = A² (2π − 0),  so A² = 1/2π, A = 1/√(2π).
Thus
Φ_m = (1/√(2π)) e^{imφ},  m = 0, ±1, ±2, …
are the final solutions to the Φ equation.
A postscript.
These wavefunctions are complex. Sometimes it is more useful to have real wavefunctions. These can be constructed by first defining
Φ₊ = (1/√(2π)) e^{+imφ} = (1/√(2π)) (cos mφ + i sin mφ),
Φ₋ = (1/√(2π)) e^{−imφ} = (1/√(2π)) (cos mφ − i sin mφ),
and then adding and subtracting Φ₊ and Φ₋, we say, forming linear combinations:
Φ_symm = (1/√2)(Φ₊ + Φ₋) = (1/√π) cos mφ,
Φ_antisymm = (1/(i√2))(Φ₊ − Φ₋) = (1/√π) sin mφ,
each of which is a real function. We cannot associate with these functions a particular m value, but only with |m|. The first three of these functions are
Φ₀ = 1/√(2π),  Φ₁ = (1/√π) cos φ,  Φ₋₁ = (1/√π) sin φ,  etc.
(These real functions are also solutions to the Φ equation. Try it!)
The Θ equation.
The Θ equation is
(1/sin θ) (d/dθ)(sin θ dΘ/dθ) − (m²/sin²θ)Θ + βΘ = 0.
Rearranging,
d²Θ/dθ² + (cos θ/sin θ) dΘ/dθ + (β − m²/sin²θ)Θ = 0.
Now make the substitutions
x = cos θ,  sin θ = √(1 − x²),
dΘ/dθ = (dΘ/dx)(dx/dθ) = −sin θ dΘ/dx,
d²Θ/dθ² = sin²θ d²Θ/dx² − cos θ dΘ/dx.
After some algebra, we get
(1 − x²) d²Θ/dx² − 2x dΘ/dx + (β − m²/(1 − x²))Θ = 0.
This equation is identical to the associated equation of Legendre,
(1 − x²) d²P/dx² − 2x dP/dx + (ℓ(ℓ+1) − m²/(1 − x²))P = 0,
if we identify P with Θ and β with ℓ(ℓ + 1).
The solutions P of the associated Legendre equation are called the associated Legendre functions; these may be expressed in closed form as (since x = cos θ)
P_ℓ^{|m|}(cos θ) = (1 − cos²θ)^{|m|/2} Σ_k (−1)^k (2ℓ−2k)! (cos θ)^{ℓ−|m|−2k} / [2^ℓ (ℓ−k)! k! (ℓ−|m|−2k)!].
Here P_ℓ^{|m|} is a polynomial of degree ℓ and order |m|, where ℓ and m are integers; k is an (integer) index, and the sum (Σ) runs from k = 0 to an upper limit of
k = (ℓ − |m|)/2  if (ℓ − |m|) is even,
k = (ℓ − |m| − 1)/2  if (ℓ − |m|) is odd.
Since m is an integer, and since the solutions to the associated Legendre equation are acceptable only if (ℓ − |m|) is a non-negative integer, it is necessary that
ℓ = integer,  with ℓ ≥ |m|.
The solutions P_ℓ^{|m|}(cos θ) must of course be normalized; the requirement that
1 = ∫₀^π Θ*_{ℓ,m} Θ_{ℓ,m} sin θ dθ = A² ∫_{−1}^{+1} P_ℓ^{|m|}(cos θ) P_ℓ^{|m|}(cos θ) d(cos θ)
leads to
A = [((2ℓ+1)/2)((ℓ−|m|)!/(ℓ+|m|)!)]^{1/2},
which gives
Θ_{ℓ,m}(θ) = [((2ℓ+1)/2)((ℓ−|m|)!/(ℓ+|m|)!)]^{1/2} P_ℓ^{|m|}(cos θ).
These wavefunctions, though they appear to be complicated, are not, at least for small ℓ. For example,
ℓ = 0, m = 0:  Θ_{0,0} = √2/2   (s)
ℓ = 1, m = 0:  Θ_{1,0} = (√6/2) cos θ   (p)
ℓ = 1, m = ±1:  Θ_{1,±1} = (√3/2) sin θ   (p)
ℓ = 2, m = 0:  Θ_{2,0} = (√10/4)(3 cos²θ − 1)   (d)
ℓ = 2, m = ±1:  Θ_{2,±1} = (√15/2) sin θ cos θ   (d)
ℓ = 2, m = ±2:  Θ_{2,±2} = (√15/4) sin²θ   (d)
You have already met these functions before, though possibly not in this form. These are the angular functions describing the probability amplitudes in s, p, d orbitals!
Some postscripts.
1. The associated Legendre functions are derivatives of the Legendre polynomials P_ℓ(cos θ):
P_ℓ^m(x) = (1 − x²)^{m/2} (d^m/dx^m) P_ℓ(x).
The Legendre polynomials
P_ℓ(x) = Σ_k (−1)^k (2ℓ−2k)! x^{ℓ−2k} / [2^ℓ (ℓ−k)! k! (ℓ−2k)!]
are, in turn, solutions of the Legendre equation
(1 − x²) d²z/dx² − 2x dz/dx + ℓ(ℓ+1) z = 0  (z ≡ P_ℓ).
2. The functions √[(2ℓ+1)/2] P_ℓ(cos θ) and, more generally, the normalized √[((2ℓ+1)/2)((ℓ−|m|)!/(ℓ+|m|)!)] P_ℓ^{|m|}(cos θ) form an orthonormal set in the interval −1 ≤ cos θ ≤ 1.
3. The Legendre functions are symmetric or antisymmetric as ℓ is even or odd:
P_ℓ(−cos θ) = (−1)^ℓ P_ℓ(cos θ),
P_ℓ^m(−cos θ) = (−1)^{ℓ+m} P_ℓ^m(cos θ).
4. The functions do not exceed 1 in absolute value:
|P_ℓ(cos θ)| ≤ 1;  P_ℓ(1) = 1,  P_ℓ(−1) = (−1)^ℓ,  e.g.
5. Since the P_ℓ(x) are polynomials, there exist roots, or values of cos θ, for which P_ℓ(x) changes sign. The sign of P_ℓ(x) is often indicated by a circular diagram. At the north pole in this diagram, θ = 0 and x = cos θ = +1; at the equator, x = cos(π/2) = 0; at the south pole, x = cos π = −1. We then use lines on the circle to indicate the values of θ at which the polynomial is zero.
[Diagram: circles divided into + and − regions showing the sign of P_ℓ.]
6. Recurrence relations exist for both the P_ℓ^m and the P_ℓ, e.g.
(2ℓ+1)(cos θ) P_ℓ^m = (ℓ+m) P_{ℓ−1}^m + (ℓ−m+1) P_{ℓ+1}^m.
7. The product functions
Y_ℓ^m(θ, φ) = Θ_{ℓ,m}(θ) Φ_m(φ)
are called spherical harmonics. These are given by the formula
Y_ℓ^m(θ, φ) = [((2ℓ+1)(ℓ−|m|)!)/(4π (ℓ+|m|)!)]^{1/2} P_ℓ^{|m|}(cos θ) e^{imφ}.
4.1.2 The Radial equation.
[Diagram: circular nodal diagrams for P₀ = 1 (no nodes), P₁ = cos θ (one nodal line), P₂ = ½(3 cos²θ − 1) (two nodal lines), between x = cos θ = +1 at θ = 0 and x = −1 at θ = π.]
The radial equation for the electron in orbit about the nucleus of the hydrogen atom is
d²R/dr² + (2/r) dR/dr + [ (2m/ℏ²)(E + Ze²/r) − ℓ(ℓ+1)/r² ] R = 0.
If we consider bound states (E < 0) only, and introduce the new variables n and ρ, where
E = −(m Z² e⁴)/(2 n² ℏ²) = −(Z² e²)/(2 a₀ n²),  a₀ = ℏ²/(m e²),  ρ = (2Z/(n a₀)) r = (2/n)(m Z e²/ℏ²) r,
the radial equation becomes
d²R/dρ² + (2/ρ) dR/dρ + [ −1/4 + n/ρ − ℓ(ℓ+1)/ρ² ] R = 0.   (A)
We seek solutions of the form
R = c ρ^ℓ u(ρ) e^{−ρ/2}.   (B)
If (B) is substituted into (A), we find that u(ρ) must satisfy the differential equation
ρ d²u/dρ² + (2ℓ + 2 − ρ) du/dρ + (n − ℓ − 1) u = 0.   (C)
Eq. (C) is of the same form as the associated equation of Laguerre,
x d²L/dx² + (1 + β − x) dL/dx + (α − β) L = 0.   (D)
(D) has solutions, known as the associated Laguerre polynomials, which are of the form
L_α^β(x) = Σ_{k=0}^{α−β} (−1)^{k+β} (α!)² x^k / [(α−β−k)! (β+k)! k!],
where α and β are integers, k is an index running from 0 to (α − β), and (α − β) is an integer greater than zero. Thus, the solutions u(ρ) of Eq. (C) are of the form L_α^β(x), providing one makes the identifications
x = ρ,  (β + 1) = (2ℓ + 2),  (α − β) = (n − ℓ − 1).
Combining these relations, one finds
β = 2ℓ + 1,  α = n + ℓ,
and
L_{n+ℓ}^{2ℓ+1}(ρ) = Σ_{k=0}^{n−ℓ−1} (−1)^{k+1} [(n+ℓ)!]² ρ^k / [(n−ℓ−1−k)! (2ℓ+1+k)! k!].
Eigenvalues.
Since the condition for solution is (α − β) = (n − ℓ − 1) ≥ 0, and since ℓ = 0, 1, 2, …, n may take the values
n = 1, 2, 3, …
with the restriction that
n ≥ ℓ + 1.
This gives the allowed (negative) values of the energy
E_n = −(m Z² e⁴)/(2 ℏ² n²),  n = 1, 2, 3, …  (independent of ℓ, m).
Eigenfunctions.
The radial wavefunctions for the hydrogen atom are of the form
R(ρ) = c e^{−ρ/2} ρ^ℓ L_{n+ℓ}^{2ℓ+1}(ρ).
To determine the normalizing constant c, we require that
∫₀^∞ R²(r) r² dr = c² ∫₀^∞ e^{−ρ} ρ^{2ℓ} [L_{n+ℓ}^{2ℓ+1}(ρ)]² r² dr = 1.
Substituting r = (n a₀/2Z) ρ, this becomes
1 = c² (n a₀/2Z)³ ∫₀^∞ e^{−ρ} ρ^{2ℓ+2} [L_{n+ℓ}^{2ℓ+1}(ρ)]² dρ = c² (n a₀/2Z)³ · 2n[(n+ℓ)!]³/(n−ℓ−1)!  (EWK, p. 66),
so that
c = ± [ (2Z/(n a₀))³ (n−ℓ−1)! / (2n[(n+ℓ)!]³) ]^{1/2}.
We choose c < 0 to make the (total) wavefunction positive, so
This result is identical with the values obtained by means of the Bohr theory. The resulting energy level diagram is shown on the right.
[Diagram: energy levels labelled nℓ = 10; 20, 21; 30, 31, 32, running from E = −me⁴/2ℏ² (n = 1, Z = 1) through −me⁴/8ℏ² (n = 2) up toward E = 0.]
R_{nℓ}(r) = −[ (2Z/(n a₀))³ (n−ℓ−1)! / (2n[(n+ℓ)!]³) ]^{1/2} (2Zr/(n a₀))^ℓ e^{−Zr/n a₀} L_{n+ℓ}^{2ℓ+1}(2Zr/(n a₀)).
The first few R_{nℓ}(r) are, expressed in terms of ρ = 2Zr/na₀:
R₁₀ = 2 (Z/a₀)^{3/2} e^{−ρ/2}
R₂₀ = [1/(2√2)] (Z/a₀)^{3/2} (2 − ρ) e^{−ρ/2}
R₂₁ = [1/(2√6)] (Z/a₀)^{3/2} ρ e^{−ρ/2}
R₃₀ = [1/(9√3)] (Z/a₀)^{3/2} (6 − 6ρ + ρ²) e^{−ρ/2}
R₃₁ = [1/(9√6)] (Z/a₀)^{3/2} (4 − ρ) ρ e^{−ρ/2}
R₃₂ = [1/(9√30)] (Z/a₀)^{3/2} ρ² e^{−ρ/2}
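Each of the listed radial functions should satisfy ∫₀^∞ R² r² dr = 1; a minimal check in Bohr units (a₀ = Z = 1), assuming SciPy for the quadrature (function names R10 etc. are mine):

```python
from math import exp, sqrt
from scipy.integrate import quad

a0, Z = 1.0, 1.0                      # Bohr units

def R10(r):
    rho = 2 * Z * r / (1 * a0)        # rho = 2Zr/(n a0) with n = 1
    return 2 * (Z / a0) ** 1.5 * exp(-rho / 2)

def R20(r):
    rho = 2 * Z * r / (2 * a0)        # n = 2
    return (Z / a0) ** 1.5 / (2 * sqrt(2)) * (2 - rho) * exp(-rho / 2)

def R21(r):
    rho = 2 * Z * r / (2 * a0)
    return (Z / a0) ** 1.5 / (2 * sqrt(6)) * rho * exp(-rho / 2)

norms = [quad(lambda r: R(r) ** 2 * r ** 2, 0.0, 60.0)[0] for R in (R10, R20, R21)]
```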
Note the very important structure of these wavefunctions. Each function consists of a constant, times a polynomial in ρ, times an exponential factor e^{−ρ/2}.
[Figure: sketches of R₁₀, R₂₀, and R₂₁ against ρ; R₂₀ crosses zero at ρ = 2.]
So R₁₀ is a simple exponential. But R₂₀, which contains, in addition, the factor (2 − ρ), has a node at ρ = 2, as shown above. And R₂₁, which contains the factor ρ, goes to zero at the origin, also as shown above.
Also note that, as n increases, the number of nodes increases as (n − 1), this structure being dictated by the highest power of ρ appearing in the polynomial!
Chapter 5 Bessel
Paragraph 5.1 Schrödinger equation in three dimensions
5.1.1 Separation of variables: spherical harmonics
http://physicspages.com/2011/03/25/schrodinger-equation-in-three-dimensions-spherical-harmonics/
Although some physical systems can be described in one or two dimensions, the most general problems require the solution of the Schrödinger equation in three dimensions. Recall that in one dimension the equation reads
(p²/2m) Ψ + VΨ = iℏ ∂Ψ/∂t,   (5.1.1)
where the momentum p is given by the differential operator
p_x = −iℏ ∂/∂x.   (5.1.2)
To generalize this to three dimensions, we can give the momentum its three components in each of the three spatial dimensions, so we get
p_x = −iℏ ∂/∂x,  p_y = −iℏ ∂/∂y,  p_z = −iℏ ∂/∂z,
p² = −ℏ² (∂²/∂x² + ∂²/∂y² + ∂²/∂z²) = −ℏ² ∇²,   (5.1.3)
where the differential operator ∇² is defined by this equation.
The Schrödinger equation in three dimensions can therefore be written as
−(ℏ²/2m) ∇²Ψ + VΨ = iℏ ∂Ψ/∂t.   (5.1.4)
If we assume that the potential V = V(x, y, z) is independent of time, we can use the same separation of variables method that we used in one dimension to split off the time part of the solution to get
Ψ(x, y, z, t) = ψ(x, y, z) e^{−iEt/ℏ}.   (5.1.5)
So far, the analysis is the same as that for one dimension.
Things get interesting when we consider the analysis relating to the three spatial dimensions. A common situation is that where the potential is spherically symmetric; it depends only on the radial distance r from the origin (the electrostatic potential is one such case). In this case it makes more sense to use spherical coordinates, so we need to rewrite ∇² in spherical coordinates. There is a general method for transforming differential operators such as ∇² into other coordinate systems which we'll consider in another post. For now we'll just quote the result in spherical coordinates (r, θ, φ), where r is the distance from the origin (r ≥ 0), θ is the angle from the positive z axis (0 ≤ θ ≤ π), and φ is the azimuthal angle measured from the x axis (0 ≤ φ ≤ 2π):
∇² = (1/r²) ∂/∂r (r² ∂/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂/∂θ) + (1/(r² sin²θ)) ∂²/∂φ².   (5.1.6)
Given this, the time-independent portion of the Schrödinger equation satisfies
−(ℏ²/2m) [ (1/r²) ∂/∂r (r² ∂ψ/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂ψ/∂θ) + (1/(r² sin²θ)) ∂²ψ/∂φ² ] + Vψ = Eψ.   (5.1.7)
At this point we try separation of variables again, initially by just peeling off the dependence on r. We propose ψ(r, θ, φ) = R(r) Y(θ, φ):
−(ℏ²/2m) [ (Y/r²) ∂/∂r (r² ∂R/∂r) + (R/(r² sin θ)) ∂/∂θ (sin θ ∂Y/∂θ) + (R/(r² sin²θ)) ∂²Y/∂φ² ] + V R Y = E R Y,   (5.1.8)
−(ℏ²/2m) [ (1/(R r²)) ∂/∂r (r² ∂R/∂r) + (1/(Y r² sin θ)) ∂/∂θ (sin θ ∂Y/∂θ) + (1/(Y r² sin²θ)) ∂²Y/∂φ² ] + (V − E) = 0,   (5.1.9)
[ (1/R) ∂/∂r (r² ∂R/∂r) − (2m r²/ℏ²)(V − E) ] + [ (1/(Y sin θ)) ∂/∂θ (sin θ ∂Y/∂θ) + (1/(Y sin²θ)) ∂²Y/∂φ² ] = 0.   (5.1.10)
In the second line we divided through by RY, and in the third line we multiplied through by −2m r²/ℏ² and regrouped the terms. We can see that in the last line the terms in the first square brackets depend only on r, while those in the second square brackets depend only on θ and φ. Thus each term must be equal to some constant, which is written in the curious form l(l+1). The reason for this will appear when we consider the angular equation in more detail in a minute. So we write:
(1/R) ∂/∂r (r² ∂R/∂r) − (2m r²/ℏ²)(V − E) = l(l+1),
(1/(Y sin θ)) ∂/∂θ (sin θ ∂Y/∂θ) + (1/(Y sin²θ)) ∂²Y/∂φ² = −l(l+1).   (5.1.11)
We found that the angular equation could be solved and that the solutions were the spherical harmonics:
Y_l^m(θ, φ) = [ ((2l+1)/4π)((l−m)!/(l+m)!) ]^{1/2} e^{imφ} P_l^m(cos θ).   (5.1.12)
They obey the normalization condition
∫₀^{2π} ∫₀^π (Y_l^m)* Y_{l'}^{m'} sin θ dθ dφ = δ_{ll'} δ_{mm'}.   (5.1.13)
5.1.2 Radial equation
Returning to the radial function, we find that we can actually make one further transformation of the equation that makes it a bit easier to solve in some cases. We can rewrite the equation using total derivatives, since R(r) depends only on r:
(d/dr)(r² dR/dr) − (2m r²/ℏ²)(V − E) R = l(l+1) R.   (5.1.14)
We can now make the substitution u(r) ≡ r R:
R = u/r,
dR/dr = −u/r² + u'/r = (1/r²)(r u' − u),
(d/dr)(r² dR/dr) = u' + r u'' − u' = r u''.
The radial equation then becomes
r d²u/dr² − (2m r/ℏ²)(V − E) u = l(l+1) u/r,   (5.1.15)
−(ℏ²/2m) d²u/dr² + ( V + (ℏ²/2m) l(l+1)/r² ) u = E u.   (5.1.16)
In this form the equation looks like the original one-dimensional Schrödinger equation, with the wave function given by u and the potential given by
V_rad = V + (ℏ²/2m) l(l+1)/r².   (5.1.17)
The extra term (ℏ²/2m) l(l+1)/r² is called the centrifugal term. Classically, the force due to this term is
F_cent = −(d/dr)[ (ℏ²/2m) l(l+1)/r² ] = (ℏ²/m) l(l+1)/r³,   (5.1.18)
which is a force that tends to repel the particle from the origin (the force gets larger the closer to the origin we are). Thus it is analogous to the pseudo-force known as the centrifugal force in classical physics.
Also see [40], p. 110: the kinetic energy of a classical particle (compare the centrifugal term in (5.1.17)):
p²/2m = p_r²/2m + J²_class/(2m r²).   (5.1.19)
5.1.3 Stationary States Expressed in spherical Bessel functions
See [40], p. 188.
We consider first the case of a particle moving in a force-free space described by the potential
V(r) ≡ 0.   (5.1.20)
The stationary Schrödinger equation for this potential reads
[ −(ℏ²/2m) ∇² − E ] ψ_E(r⃗) = 0.   (5.1.21)
One solution is the plane wave
ψ(k⃗|r⃗) = N e^{i k⃗·r⃗},   (5.1.22)
E = ℏ²k²/2m.   (5.1.23)
The corresponding stationary states, i.e. solutions of (5.1.21), are also given by wave functions of the form
ψ(k, l, m|r⃗) = v_{k,l}(r) Y_{lm}(θ, φ),   (5.1.24)
where the radial wave function obeys
[ −(ℏ²/2m)(1/r) ∂²/∂r² r + (ℏ²/2m) l(l+1)/r² − E ] v_{k,l}(r) = 0.   (5.1.25)
Taking into account that E = ℏ²k²/2m, and multiplying by −2mr/ℏ², yields the radial Schrödinger equation
[ ∂²/∂r² − l(l+1)/r² + k² ] r v_{k,l}(r) = 0.   (5.1.26)
One can write v_{k,l} = j_l(kr), and introducing the new variable z = kr,
[ d²/dz² − l(l+1)/z² + 1 ] z j_l(z) = 0.   (5.1.27)
At small r the regular solution behaves like
j_l(z) ∼ z^l  for r → 0.   (5.1.28)
There exists also a so-called irregular solution, denoted by n_l(z), which behaves like
n_l(z) ∼ z^{−l−1}  for r → 0.   (5.1.29)
For large z the solution is governed by
[ d²/dz² + 1 ] z j_l(z) = 0  for r → ∞,   (5.1.30)
the general solution of which is
j_l(z) ≈ (1/z) sin(z + α)  for r → ∞.   (5.1.31)
We attempt to express the solution of (5.1.27) for arbitrary z values in the form
j_l(z) = z^l f(z²),  f(z²) = Σ_n a_n z^{2n}.   (5.1.32)
The unknown expansion coefficients can be obtained by inserting this series into (5.1.27). This follows from
(d²/dz²) z^{l+1} f(z) = z^{l+1} d²f/dz² + 2(l+1) z^l df/dz + l(l+1) z^{l−1} f,   (5.1.33)
from which we can conclude
( d²/dz² + (2(l+1)/z) d/dz + 1 ) f(z) = 0.   (5.1.34)
Introducing the new variable v = z² yields, using
(1/z) d/dz = 2 d/dv,  d²/dz² = 4v d²/dv² + 2 d/dv,   (5.1.35)
the differential equation
( d²/dv² + ((2l+3)/2v) d/dv + 1/4v ) f(v) = 0.   (5.1.36)
The coefficients in the series expansion of f(z²) can be obtained by inserting Σ_{n=0}^{∞} a_n v^n into (5.1.36) (v = z²):
Σ_n ( a_n n(n−1) v^{n−2} + ½(2l+3) n a_n v^{n−2} + ¼ a_n v^{n−1} ) = 0.   (5.1.37)
Changing summation indices for the first two terms in the sum yields
Σ_n ( a_{n+1}(n+1)n + ½(2l+3)(n+1) a_{n+1} + ¼ a_n ) v^{n−1} = 0.   (5.1.38)
In this expression each power v^{n−1} must vanish individually, and hence
a_{n+1} = −(1/2) · 1/[(n+1)(2n+2l+3)] · a_n.   (5.1.39)
One can readily derive
a₁ = −(1/2) a₀/[1!(2l+3)],  a₂ = (1/4) a₀/[2!(2l+3)(2l+5)].   (5.1.40)
The common factor a₀ is arbitrary. Choosing
a₀ = 1/[1·3·5⋯(2l+1)],   (5.1.41)
the ensuing functions (l = 0, 1, 2, …)
j_l(z) = [z^l/(1·3·5⋯(2l+1))] [ 1 − (½z²)/(1!(2l+3)) + (½z²)²/(2!(2l+3)(2l+5)) − ⋯ ]   (5.1.42)
are called regular spherical Bessel functions.
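A numerical cross-check of the series (5.1.42) against SciPy's spherical Bessel function; successive terms of the bracket differ by the factor −(z²/2)/(n(2l+2n+1)), which follows directly from (5.1.39). (Sketch, assuming SciPy; jl_series is my own name.)

```python
from scipy.special import spherical_jn

def jl_series(l, z, terms=40):
    """Truncated series (5.1.42) for the regular spherical Bessel function."""
    dfact = 1.0
    for k in range(1, 2 * l + 2, 2):
        dfact *= k                      # 1*3*5*...*(2l+1)
    s = term = 1.0
    for n in range(1, terms):
        term *= -(z * z / 2.0) / (n * (2 * l + 2 * n + 1))
        s += term
    return z ** l / dfact * s

err = max(abs(jl_series(l, 1.7) - spherical_jn(l, 1.7)) for l in range(5))
```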
One can similarly derive the solution with the behaviour (5.1.29):
n_l(z) = −[1·3·5⋯(2l−1)/z^{l+1}] [ 1 − (½z²)/(1!(1−2l)) + (½z²)²/(2!(1−2l)(3−2l)) − ⋯ ].   (5.1.43)
These functions are called irregular spherical Bessel functions.
The Bessel functions can be expressed through an infinite sum which we want to specify now. For this purpose we write
j_l(z) = (z/2)^l · 1/[(3/2)(5/2)⋯(l+½)] · [ 1 + (iz/2)²/(1!(l+3/2)) + (iz/2)⁴/(2!(l+3/2)(l+5/2)) + ⋯ ].   (5.1.44)
The factorial-type products
(1/2)(3/2)(5/2)⋯(l+½)   (5.1.45)
can be expressed through the so-called Gamma function, defined through
Γ(z) = ∫₀^∞ dt t^{z−1} e^{−t}.   (5.1.46)
This function has the following properties:
Γ(z+1) = z Γ(z),   (5.1.47)
Γ(n+1) = n!  for n ∈ ℕ,   (5.1.48)
Γ(1/2) = √π,   (5.1.49)
Γ(z) Γ(1−z) = π/sin πz.   (5.1.50)
All of this is demonstrated in my conspectus "Fourier complex".
From this we can readily deduce
Γ(l+½) = √π (1/2)(3/2)(5/2)⋯(l−½).   (5.1.51)
One can then write
j_l(z) = (√π/2)(z/2)^l Σ_{n=0}^{∞} (iz/2)^{2n}/[n! Γ(n+1+l+½)].   (5.1.52)
Similarly one can express n_l(z) as given in (5.1.43):
n_l(z) = −(2^l/√π) Γ(l+½) z^{−(l+1)} [ 1 + (iz/2)²/(1!(½−l)) + (iz/2)⁴/(2!(½−l)(3/2−l)) + ⋯ ].   (5.1.53)
Using (5.1.50),
Γ(l+½) = (−1)^l π/Γ(½−l)   (5.1.54)
yields
n_l(z) = (−1)^{l+1} (√π 2^l/z^{l+1}) [ 1/Γ(½−l) + (iz/2)²/(1! Γ(½−l)(½−l)) + (iz/2)⁴/(2! Γ(½−l)(½−l)(3/2−l)) + ⋯ ],   (5.1.55)
or
n_l(z) = (−1)^{l+1} (√π/2)(2/z)^{l+1} Σ_{n=0}^{∞} (iz/2)^{2n}/[n! Γ(n+1−l−½)].   (5.1.56)
5.1.4 Relationship with regular Bessel functions
The regular Bessel equation is
[ d²/dz² + (1/z) d/dz − ν²/z² + 1 ] G_ν(z) = 0,   (5.1.57)
where ν = l + ½. The regular solution of this equation is called the regular Bessel function
J_ν(z) = (z/2)^ν Σ_{n=0}^{∞} (iz/2)^{2n}/[n! Γ(ν+n+1)].   (5.1.58)
One can relate J_{l+1/2} and J_{−l−1/2} to the regular and irregular spherical Bessel functions:
j_l(z) = √(π/2z) J_{l+1/2}(z),
n_l(z) = (−1)^{l+1} √(π/2z) J_{−l−1/2}(z).   (5.1.59)
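The first relation in (5.1.59) can be verified directly with SciPy:

```python
from math import pi, sqrt
from scipy.special import jv, spherical_jn

z = 2.3
# j_l(z) versus sqrt(pi/(2z)) * J_{l+1/2}(z) for the first few orders
err = max(abs(spherical_jn(l, z) - sqrt(pi / (2 * z)) * jv(l + 0.5, z))
          for l in range(6))
```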
5.1.5 Generating Function of Spherical Bessel Functions
The stationary Schrödinger equation of free particles (5.1.21) has two kinds of solutions, namely one given by ((5.1.22), (5.1.23)) and one given by (5.1.24). One can expand the former solution in terms of the solutions (5.1.24). For example, in the case of a free particle moving along the x₃-axis one expands
e^{i k₃ x₃} = Σ_{l,m} a_{lm} j_l(kr) Y_{lm}(θ, φ).   (5.1.60)
Since the spherical harmonics (3.1.26) are given in terms of Legendre polynomials, one can replace the expansion in (5.1.60) by
e^{i k₃ x₃} = Σ_l b_l j_l(kr) P_l(cos θ).   (5.1.61)
We want to determine the expansion coefficients b_l.
The orthogonality properties (3.1.23) yield from (5.1.61)
∫_{−1}^{+1} d cos θ e^{i k r cos θ} P_l(cos θ) = b_l j_l(kr) · 2/(2l+1).   (5.1.62)
Defining x = cos θ, z = kr, and using the Rodrigues formula for the Legendre polynomials, one obtains
∫_{−1}^{+1} dx e^{izx} P_l(x) = ∫_{−1}^{+1} dx e^{izx} [1/(2^l l!)] ∂^l/∂x^l (x² − 1)^l.   (5.1.63)
Integration by parts yields
∫_{−1}^{+1} dx e^{izx} P_l(x) = [1/(2^l l!)] [ e^{izx} d^{l−1}/dx^{l−1} (x² − 1)^l ]_{−1}^{+1} − [1/(2^l l!)] ∫_{−1}^{+1} dx ( (d/dx) e^{izx} ) d^{l−1}/dx^{l−1} (x² − 1)^l.   (5.1.64)
One can show
d^{l−1}/dx^{l−1} (x² − 1)^l = (x² − 1) × (polynomial in x),   (5.1.65)
and hence the surface term [ · ]_{−1}^{+1} vanishes. This holds for l consecutive integrations by parts, and one can conclude
∫_{−1}^{+1} dx e^{izx} P_l(x) = [(−1)^l/(2^l l!)] ∫_{−1}^{+1} dx (x² − 1)^l d^l/dx^l e^{izx} = [(iz)^l/(2^l l!)] ∫_{−1}^{+1} dx (1 − x²)^l e^{izx}.   (5.1.66)
Comparison with (5.1.62) gives
b_l j_l(kr) · 2/(2l+1) = [(iz)^l/(2^l l!)] ∫_{−1}^{+1} dx (1 − x²)^l e^{izx}.   (5.1.67)
This expression allows one to determine the expansion coefficients b_l: for small z one may replace j_l(z) on the left by its leading behaviour z^l/[1·3·5⋯(2l+1)], and e^{izx} on the right by 1,
b_l · z^l/[1·3·5⋯(2l+1)] · 2/(2l+1) = [(iz)^l/(2^l l!)] ∫_{−1}^{+1} dx (1 − x²)^l.   (5.1.68)
Employing (3.1.25),
∫₀^π dθ sin^{2l+1}θ = ∫_{−1}^{+1} dx (1 − x²)^l = (2l)!/[1·3·5⋯(2l−1)]² · 2/(2l+1),   (5.1.69)
we have
b_l · z^l/[1·3·5⋯(2l+1)] · 2/(2l+1) = [i^l z^l/(2^l l!)] · (2l)!/[1·3·5⋯(2l−1)]² · 2/(2l+1),   (5.1.70)
or
b_l = i^l (2l+1) · (2l)!/[2^l l! · 1·3·5⋯(2l−1)],   (5.1.71)
where the last factor is equal to unity, since (2l)! = [2·4⋯2l][1·3⋯(2l−1)] = 2^l l! [1·3⋯(2l−1)]. Comparison with the LHS of (5.1.68) yields finally
b_l = i^l (2l+1),   (5.1.72)
or, after insertion into (5.1.61),
e^{i k₃ x₃} = Σ_l i^l (2l+1) j_l(kr) P_l(cos θ).   (5.1.73)
One refers to the LHS of (5.1.73) as the generating function of the spherical Bessel functions.
See also [41], p. 350 and p. 538.
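The expansion (5.1.73) converges rapidly in l; a partial-sum check (sketch, assuming SciPy; plane_wave_partial is my own name):

```python
from cmath import exp
from scipy.special import spherical_jn, eval_legendre

def plane_wave_partial(kr, costheta, lmax=40):
    """Partial sum of (5.1.73): sum_l i^l (2l+1) j_l(kr) P_l(cos theta)."""
    s = 0j
    for l in range(lmax + 1):
        s += (1j ** l) * (2 * l + 1) * spherical_jn(l, kr) * eval_legendre(l, costheta)
    return s

kr, ct = 3.0, 0.42
err = abs(plane_wave_partial(kr, ct) - exp(1j * kr * ct))
```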
5.1.6 Integral Representation of spherical Bessel Functions
Combining (5.1.67) with (5.1.72) results in the integral representation of j_l(z):
j_l(z) = [z^l/(2^{l+1} l!)] ∫_{−1}^{+1} dx (1 − x²)^l e^{izx}.   (5.1.74)
Employing (5.1.59) one can express this using ν = l + ½:
J_ν(z) = [1/(√π Γ(ν+½))] (z/2)^ν ∫_{−1}^{+1} dx (1 − x²)^{ν−½} e^{izx}.   (5.1.75)
See [9], p. 770.
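The representation (5.1.74) can be confirmed by quadrature; only the cosine part of e^{izx} survives, because the sine part is odd in x (sketch, assuming SciPy):

```python
from math import cos, factorial
from scipy.integrate import quad
from scipy.special import spherical_jn

def jl_integral(l, z):
    """(5.1.74): j_l(z) = z^l/(2^{l+1} l!) * int_{-1}^{1} (1-x^2)^l e^{izx} dx,
    keeping the real (cosine) part of the integrand."""
    re, _ = quad(lambda x: (1 - x * x) ** l * cos(z * x), -1.0, 1.0)
    return z ** l / (2 ** (l + 1) * factorial(l)) * re

err = max(abs(jl_integral(l, 2.0) - spherical_jn(l, 2.0)) for l in range(5))
```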
5.1.7 Rayleigh equation
The Rayleigh equation is (5.1.73); see [9], p. 769:
e^{i k r cos θ} = Σ_l i^l (2l+1) j_l(kr) P_l(cos θ).   (5.1.76)
5.1.8 Generating function, integer order, J_n(x)
Although Bessel functions are of interest primarily as solutions of differential equations, it is instructive and convenient to develop them from a completely different approach, that of the generating function. Let us introduce a function of two variables,
g(x, t) = e^{(x/2)(t − 1/t)}.   (6.1)
Expanding it in a Laurent series, we obtain
e^{(x/2)(t − 1/t)} = Σ_{n=−∞}^{∞} J_n(x) t^n.   (5.1.77) (6.2)
The coefficient of t^n, J_n(x), is defined to be a Bessel function of the first kind of integer order n.
Expanding the exponentials, we have
e^{xt/2} e^{−x/2t} = Σ_{r=0}^{∞} (x/2)^r t^r/r! · Σ_{s=0}^{∞} (−1)^s (x/2)^s t^{−s}/s!.   (6.3)
Setting n = r − s yields
Σ_{s=0}^{∞} Σ_{n=−s}^{∞} (x/2)^{n+s} t^{n+s}/(n+s)! · (−1)^s (x/2)^s t^{−s}/s!.   (6.4)
Since
1/(n+s)! = 0 for n + s < 0  (note: 1/(−m)! = 0 for a positive integer m),
we have
Σ_{s=0}^{∞} Σ_{n=−s}^{∞} = Σ_{s=0}^{∞} Σ_{n=−∞}^{∞} = Σ_{n=−∞}^{∞} Σ_{s=0}^{∞}.
The coefficient of t^n is then
J_n(x) = Σ_{s=0}^{∞} [(−1)^s/(s!(n+s)!)] (x/2)^{n+2s} = x^n/(2^n n!) − x^{n+2}/(2^{n+2}(n+1)!) + ⋯.   (6.5)
This series form exhibits the behavior of the Bessel function J_n for small x. The results for J₀, J₁, and J₂ are shown in Fig. 6.1. The Bessel functions oscillate but are not periodic.
Figure 6.1 Bessel functions J₀(x), J₁(x), and J₂(x).
Eq. (6.5) actually holds for n < 0 also, giving
J_{−n}(x) = Σ_{s=0}^{∞} [(−1)^s/(s!(s−n)!)] (x/2)^{2s−n}.   (6.6)
Since the terms for s < n (corresponding to the negative integer (s−n)) vanish, the series can be considered to start with s = n. Replacing s by s + n, we obtain
J_{−n}(x) = Σ_{s=0}^{∞} [(−1)^{s+n}/(s!(s+n)!)] (x/2)^{2s+n} = (−1)^n J_n(x).   (6.7)
These series expressions may be used with n replaced by ν to define J_ν and J_{−ν} for non-integer ν.
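Identity (6.7) can be checked with SciPy's jv, which accepts negative orders:

```python
from scipy.special import jv

x = 2.7
# J_{-n}(x) = (-1)^n J_n(x) for integer n
err = max(abs(jv(-n, x) - (-1) ** n * jv(n, x)) for n in range(6))
```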
5.1.9 Recurrence relations
Differentiating Eq. (6.1) partially with respect to t, we find that
(∂/∂t) g(x, t) = ½ x (1 + 1/t²) e^{(x/2)(t − 1/t)} = Σ_{n=−∞}^{∞} n J_n(x) t^{n−1},   (6.9)
and substituting Eq. (6.2) for the exponential and equating the coefficients of t^{n−1}, we obtain
J_{n−1}(x) + J_{n+1}(x) = (2n/x) J_n(x).   (6.10)
This is a three-term recurrence relation.
On the other hand, differentiating Eq. (6.1) partially with respect to x, we have
(∂/∂x) g(x, t) = ½ (t − 1/t) e^{(x/2)(t − 1/t)} = Σ_{n=−∞}^{∞} J_n'(x) t^n.   (6.11)
Again substituting in Eq. (6.2) and equating the coefficients of t^n, we obtain the result
J_{n−1}(x) − J_{n+1}(x) = 2 J_n'(x).   (6.12)
As a special case,
J₀'(x) = −J₁(x).   (6.13)
Adding Eqs. (6.10) and (6.12) and dividing by 2, we have
J_{n−1}(x) = (n/x) J_n(x) + J_n'(x).   (6.14)
Multiplying by x^n and rearranging terms produces
(d/dx)[ x^n J_n(x) ] = x^n J_{n−1}(x).   (6.15)
Subtracting Eq. (6.12) from (6.10) and dividing by 2 yields
J_{n+1}(x) = (n/x) J_n(x) − J_n'(x).   (6.16)
Multiplying by x^{−n} and rearranging terms, we obtain
(d/dx)[ x^{−n} J_n(x) ] = −x^{−n} J_{n+1}(x).   (6.17)
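The recurrences (6.10), (6.12) and the special case (6.13) can be verified numerically (scipy.special.jvp gives the derivative J_n'):

```python
from scipy.special import jv, jvp

x = 1.9
# (6.10): J_{n-1} + J_{n+1} = (2n/x) J_n
err_sum = max(abs(jv(n - 1, x) + jv(n + 1, x) - (2 * n / x) * jv(n, x))
              for n in range(1, 6))
# (6.12): J_{n-1} - J_{n+1} = 2 J_n'
err_diff = max(abs(jv(n - 1, x) - jv(n + 1, x) - 2 * jvp(n, x))
               for n in range(1, 6))
# (6.13): J_0' = -J_1
err_zero = abs(jvp(0, x) + jv(1, x))
```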
5.1.10 Bessel's differential equation
Suppose we consider a set of functions Z_ν(x) which satisfies the basic recurrence relations (Eqs. (6.10) and (6.12)), but with ν not necessarily an integer and Z_ν(x) not necessarily given by the series (Eq. (6.5)). Equation (6.14) may be rewritten (n → ν) as
x Z_ν'(x) = x Z_{ν−1}(x) − ν Z_ν(x).   (6.18)
On differentiating with respect to x, we have
x Z_ν''(x) + (ν+1) Z_ν' − x Z_{ν−1}' − Z_{ν−1} = 0.   (6.19)
Multiplying by x and subtracting Eq. (6.18) multiplied by ν gives us
x² Z_ν'' + x Z_ν' − ν² Z_ν + (ν−1) x Z_{ν−1} − x² Z_{ν−1}' = 0.   (6.20)
Now we rewrite Eq. (6.16) and replace n by ν − 1:
x Z_{ν−1}' = (ν−1) Z_{ν−1} − x Z_ν.   (6.21)
Using this to eliminate Z_{ν−1} and Z_{ν−1}' from Eq. (6.20), we finally get
x² Z_ν'' + x Z_ν' + (x² − ν²) Z_ν = 0.   (6.22)
This is just Bessel's equation. Hence any functions Z_ν(x) that satisfy the recurrence relations (Eqs. (6.10) and (6.12), (6.14) and (6.16), or (6.15) and (6.17)) satisfy Bessel's equation; that is, the unknown Z_ν are Bessel functions. In particular, we have shown that the functions J_n(x), defined by our generating function, satisfy Bessel's equation. If the argument is kρ rather than x, Eq. (6.22) becomes
ρ² (d²/dρ²) Z_ν(kρ) + ρ (d/dρ) Z_ν(kρ) + (k²ρ² − ν²) Z_ν(kρ) = 0.
5.1.11 Integral representation
A particularly useful and powerful way of treating Bessel functions employs integral representations. If we return to the generating function (Eq. (5.1.77)) and substitute t = e^{iθ},
e^{i x sin θ} = J₀(x) + 2[J₂(x) cos 2θ + J₄(x) cos 4θ + ⋯] + 2i[J₁(x) sin θ + J₃(x) sin 3θ + ⋯]
= Σ_{m=−∞}^{∞} J_m(x) e^{imθ}.   (5.1.78) (6.23)
See [4], p. 574; also [25], p. 661, where (5.1.78) is called the decomposition in Fourier series.
In obtaining this we have used the relations
J₁(x) e^{iθ} + J₋₁(x) e^{−iθ} = J₁(x)(e^{iθ} − e^{−iθ}) = 2i J₁(x) sin θ,   (6.24)
J₂(x) e^{2iθ} + J₋₂(x) e^{−2iθ} = 2 J₂(x) cos 2θ,
and so on.
In summation notation,
cos(x sin θ) = J₀(x) + 2 Σ_{n=1}^{∞} J_{2n}(x) cos(2nθ),
sin(x sin θ) = 2 Σ_{n=1}^{∞} J_{2n−1}(x) sin[(2n−1)θ],   (6.25)
equating real and imaginary parts, respectively. It might be noted that the angle θ (in radians) has no dimensions; likewise sin θ has no dimensions, and the function cos(x sin θ) is perfectly proper from a dimensional point of view.
By employing the orthogonality properties of cosine and sine,
∫₀^π cos nθ cos mθ dθ = (π/2) δ_{nm},   (6.26a)
∫₀^π sin nθ sin mθ dθ = (π/2) δ_{nm},   (6.26b)
in which n and m are positive integers (zero is excluded), we obtain
(1/π) ∫₀^π cos(x sin θ) cos nθ dθ = { J_n(x), n even; 0, n odd },   (6.27)
(1/π) ∫₀^π sin(x sin θ) sin nθ dθ = { 0, n even; J_n(x), n odd }.   (6.28)
If these two equations are added together,
J_n(x) = (1/π) ∫₀^π [cos(x sin θ) cos nθ + sin(x sin θ) sin nθ] dθ
= (1/π) ∫₀^π cos(nθ − x sin θ) dθ,  n = 0, 1, 2, 3, …   (6.29)
As a special case,
J₀(x) = (1/π) ∫₀^π cos(x sin θ) dθ.   (6.30)
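The representation (6.29) can be confirmed by quadrature (sketch, assuming SciPy):

```python
from math import cos, sin, pi
from scipy.integrate import quad
from scipy.special import jv

def jn_integral(n, x):
    """(6.29): J_n(x) = (1/pi) int_0^pi cos(n*t - x*sin(t)) dt."""
    val, _ = quad(lambda t: cos(n * t - x * sin(t)), 0.0, pi)
    return val / pi

err = max(abs(jn_integral(n, 3.1) - jv(n, 3.1)) for n in range(5))
```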
Noting that cos(x sin θ) repeats itself in all four quadrants (θ₁ = θ, θ₂ = π − θ, θ₃ = π + θ, θ₄ = −θ), we may write Eq. (6.30) as
J₀(x) = (1/2π) ∫₀^{2π} cos(x sin θ) dθ.   (6.30a)
On the other hand, sin(x sin θ) reverses its sign in the third and fourth quadrants, so that
(1/2π) ∫₀^{2π} sin(x sin θ) dθ = 0.   (6.30b)
Adding Eq. (6.30a) and i times Eq. (6.30b), we obtain the complex exponential representation
J₀(x) = (1/2π) ∫₀^{2π} e^{i x sin θ} dθ = (1/2π) ∫₀^{2π} e^{i x cos θ} dθ.   (6.30c)
This integral representation (Eq. (6.30c)) may be obtained somewhat more directly by employing contour integration.
See [9], p. 688 and p. 712; see [42] Schweiger A., Principles of Pulse Electron Paramagnetic Resonance; see [9], p. 945 (= [25], p. 669); see [25], p. 669; see also pp. 734, 730, 723, 639; p. 566 (phys.-mat.); see [43], p. 765, vol. I.
If one puts in the asymptotic formulas (for example for half-integer order), then after integration one gets
sin a cos b − cos a sin b = sin(a − b).
5.1.12 Completeness
http://www.math.osu.edu/~gerlach/math/BVtypset/node127.html
The cylinder waves form a complete set. More precisely,
δ(r − r₀) δ(φ − φ₀)/r = Σ_{m=−∞}^{∞} (1/2π) ∫₀^∞ k dk J_m(kr) J_m(kr₀) e^{im(φ−φ₀)}.   (5.1.79)
This relation is the cylindrical analogue of the familiar completeness relation for plane waves,
δ(x − x₀) δ(y − y₀) = ∬ dk_x dk_y [e^{i(k_x x + k_y y)}/2π][e^{−i(k_x x₀ + k_y y₀)}/2π]
= ∫₀^∞ k dk ∫₀^{2π} dα [e^{i k r cos(α−φ)}/2π][e^{−i k r₀ cos(α−φ₀)}/2π].   (5.1.80)
In fact, the one for plane waves is equivalent to the one for cylinder waves. The connecting link between the two is the plane wave expansion, Eq. (5.24),
e^{i k r cos(φ−α)} = Σ_m i^m J_m(kr) e^{im(φ−α)}.   (5.1.81)
Introduce it into Eq. (5.41) and obtain
δ(x − x₀) δ(y − y₀) = ∫₀^∞ k dk ∫₀^{2π} dα Σ_m J_m(kr) i^m [e^{im(φ−α)}/2π] Σ_{m'} J_{m'}(kr₀) (−i)^{m'} [e^{−im'(φ₀−α)}/2π].   (5.1.82)
Using the orthogonality property

\int_0^{2\pi}d\varphi\,\frac{e^{i(m' - m)\varphi}}{2\pi} = \delta_{mm'} (5.1.83)
the definition

\delta(x - x_0)\,\delta(y - y_0)\,dx\,dy = \delta(r - r_0)\,\delta(\theta - \theta_0)\,dr\,d\theta (5.1.84)
and dx\,dy = r\,dr\,d\theta, one obtains

\frac{\delta(r - r_0)\,\delta(\theta - \theta_0)}{r} = \sum_{m=-\infty}^{\infty}\frac{1}{2\pi}\int_0^{\infty}k\,dk\,J_m(kr)\,J_m(kr_0)\,e^{im(\theta - \theta_0)} (5.1.85)

the completeness relation for the cylinder waves.
Property 22 (Fourier-Bessel transform)
The Bessel functions \{J_m(kr):\ 0 \le k < \infty\} of fixed integral order form a complete set:

\frac{\delta(r - r_0)}{r} = \int_0^{\infty} J_m(kr)\,J_m(kr_0)\,k\,dk (5.1.86)
This result is a direct consequence of Property 21. Indeed, multiply the cylinder wave completeness relation, Eq. (5.40), by e^{-im'\theta}, integrate over \theta from 0 to 2\pi, again use the orthogonality property, Eq. (5.42), and cancel the common exponential factor from both sides. The result is Eq. (5.44), the completeness relation for the Bessel functions on the positive r-axis.
Remark: By interchanging the roles of k and r one obtains from Eq. (5.44)

\frac{\delta(k - k_0)}{k} = \int_0^{\infty} J_m(kr)\,J_m(k_0 r)\,r\,dr (5.1.87)
Remark: The completeness relation, Eq. (5.44), yields

f(r) = \int_0^{\infty} F(k)\,J_m(kr)\,k\,dk (5.1.88)

where

F(k) = \int_0^{\infty} f(r)\,J_m(kr)\,r\,dr (5.1.89)
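The pair (5.1.88)-(5.1.89) can be illustrated numerically for m = 0. A known closed-form pair is f(r) = e^{-r} with F(k) = (1 + k^2)^{-3/2}; the Python sketch below (my own illustration with my own truncation choices, not part of the source) evaluates (5.1.89) by quadrature, using the integral representation of J_0 from Eq. (6.30):

```python
import math

def j0(x, steps=200):
    # Integral representation (Eq. (6.30)): J_0(x) = (1/pi) int_0^pi cos(x sin t) dt
    h = math.pi / steps
    s = 0.5 * (math.cos(0.0) + math.cos(x * math.sin(math.pi)))
    for i in range(1, steps):
        s += math.cos(x * math.sin(i * h))
    return s * h / math.pi

def hankel0(f, k, r_max=30.0, steps=3000):
    # Eq. (5.1.89) with m = 0: F(k) = int_0^inf f(r) J_0(kr) r dr, truncated at r_max
    h = r_max / steps
    s = 0.5 * (0.0 + f(r_max) * j0(k * r_max) * r_max)  # integrand vanishes at r = 0
    for i in range(1, steps):
        r = i * h
        s += f(r) * j0(k * r) * r
    return s * h

# Known pair: f(r) = e^{-r}  <->  F(k) = (1 + k^2)^{-3/2}
for k in (0.5, 1.0, 2.0):
    exact = (1.0 + k * k) ** -1.5
    assert abs(hankel0(lambda r: math.exp(-r), k) - exact) < 1e-3
print("Fourier-Bessel (Hankel) transform pair verified")
```

Truncating the r-integral at r = 30 is safe here only because e^{-r} decays fast; slowly decaying f(r) would need a larger range or a dedicated Hankel-transform routine.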
This is the Fourier-Bessel transform theorem.
It is interesting to note that the completeness relation, Eq. (5.44), is independent of the integral order m of J_m(kr). One therefore wonders whether Eq. (5.44) also holds true if one uses J_\nu(kr), Bessel functions of any complex order \nu. This is indeed the case.
Property 23 (Bessel transform)
The Bessel functions \{J_\nu(kr):\ 0 \le k < \infty\} of complex order \nu form a complete set:

\frac{\delta(r - r_0)}{r} = \int_0^{\infty} J_\nu(kr)\,J_\nu(kr_0)\,k\,dk (5.1.90)

This result gives rise to the transform pair

f(r) = \int_0^{\infty} F(k)\,J_\nu(kr)\,k\,dk, \qquad F(k) = \int_0^{\infty} f(r)\,J_\nu(kr)\,r\,dr (5.1.91)

and it is obvious that, mathematically, Property 22 is the special case of Property 23 for integral order.
Chapter 6 Fourier Transform full derivation
Paragraph 6.1 Fourier series and their coefficients
For a function with period T, a continuous Fourier series can be expressed as

f(t) = a_0 + \sum_{k=1}^{\infty}\big[a_k\cos(k w_0 t) + b_k\sin(k w_0 t)\big] (6.1.1)
The unknown Fourier coefficients a_0, a_k and b_k can be computed as

a_0 = \frac{1}{T}\int_0^T f(t)\,dt (6.1.2)

Thus, a_0 can be interpreted as the average value of the function over the period interval [0, T].

a_k = \frac{2}{T}\int_0^T f(t)\cos(k w_0 t)\,dt (6.1.3)

with a_{-k} = a_k (hence a_k is an even function of k), and

b_k = \frac{2}{T}\int_0^T f(t)\sin(k w_0 t)\,dt (6.1.4)

with b_{-k} = -b_k (hence b_k is an odd function of k),
where for the angular frequency we have

w_0 = 2\pi f (6.1.5)

and for the frequency

f = \frac{1}{T} (6.1.6)
Derivation of the formulas for a_0, a_k and b_k, where we will define and verify the needed orthogonality relations.
Integrating both sides of Equation (6.1.1) with respect to time, one gets

\int_0^T f(t)\,dt = \int_0^T a_0\,dt + \int_0^T\sum_{k=1}^{\infty}a_k\cos(k w_0 t)\,dt + \int_0^T\sum_{k=1}^{\infty}b_k\sin(k w_0 t)\,dt (6.1.7)

The second and third terms on the right-hand side of the above equation are both zero,

\int_0^T\sin(k w_0 t)\,dt = \int_0^T\cos(k w_0 t)\,dt = 0 (6.1.8)
Proof:
Let

A = \int_0^T\sin(k w_0 t)\,dt = -\frac{1}{k w_0}\Big[\cos(k w_0 t)\Big]_0^T

A = -\frac{1}{k w_0}\big[\cos(k w_0 T) - \cos(0)\big] = -\frac{1}{k w_0}\big[\cos(2\pi k) - 1\big] = 0

Thus,

\int_0^T f(t)\,dt = a_0\,t\Big|_0^T = a_0 T

Hence,

a_0 = \frac{1}{T}\int_0^T f(t)\,dt
Now, if both sides of Equation (6.1.1) are multiplied by \sin(m w_0 t) and then integrated with respect to time, one obtains

\int_0^T f(t)\sin(m w_0 t)\,dt = \int_0^T a_0\sin(m w_0 t)\,dt + \int_0^T\sum_{k=1}^{\infty}a_k\cos(k w_0 t)\sin(m w_0 t)\,dt + \int_0^T\sum_{k=1}^{\infty}b_k\sin(k w_0 t)\sin(m w_0 t)\,dt (6.1.9)

The first term on the RHS is zero by (6.1.8), and the second term on the right-hand side of Equation (6.1.9) is also zero:

\int_0^T\cos(k w_0 t)\sin(m w_0 t)\,dt = 0 (6.1.10)
Proof:
Let

C = \int_0^T\sin(m w_0 t)\cos(k w_0 t)\,dt (6.1.11)

Recall that

\sin(\alpha + \beta) = \sin(\alpha)\cos(\beta) + \sin(\beta)\cos(\alpha)

Hence,

C = \int_0^T\big[\sin\big((m + k) w_0 t\big) - \sin(k w_0 t)\cos(m w_0 t)\big]\,dt
= \int_0^T\sin\big((m + k) w_0 t\big)\,dt - \int_0^T\sin(k w_0 t)\cos(m w_0 t)\,dt

From Equation (6.1.8),

\int_0^T\sin\big((m + k) w_0 t\big)\,dt = 0

then

C = -\int_0^T\sin(k w_0 t)\cos(m w_0 t)\,dt (6.1.12)
Adding Equations (6.1.11) and (6.1.12),

2C = \int_0^T\sin(m w_0 t)\cos(k w_0 t)\,dt - \int_0^T\sin(k w_0 t)\cos(m w_0 t)\,dt
= \int_0^T\sin\big((m - k) w_0 t\big)\,dt

2C = 0, since the right side of the above equation is zero (see Equation (6.1.8)). Thus,

C = \int_0^T\sin(m w_0 t)\cos(k w_0 t)\,dt = 0

which was to be proved.
The third RHS term of Equation (6.1.9) is also zero, with the exception of the case m = k:

\int_0^T\sin(k w_0 t)\sin(m w_0 t)\,dt = 0, \qquad \int_0^T\cos(k w_0 t)\cos(m w_0 t)\,dt = 0 \qquad (m \neq k) (6.1.13)
Let

D = \int_0^T\sin(k w_0 t)\sin(m w_0 t)\,dt (6.1.14)

Since

\cos(\alpha + \beta) = \cos(\alpha)\cos(\beta) - \sin(\alpha)\sin(\beta)

or

\sin(\alpha)\sin(\beta) = \cos(\alpha)\cos(\beta) - \cos(\alpha + \beta)

we thus have

D = \int_0^T\cos(k w_0 t)\cos(m w_0 t)\,dt - \int_0^T\cos\big((k + m) w_0 t\big)\,dt (6.1.15)

From Equation (6.1.8),

\int_0^T\cos\big((k + m) w_0 t\big)\,dt = 0

then

D = \int_0^T\cos(k w_0 t)\cos(m w_0 t)\,dt (6.1.16)
Adding Equations (6.1.14) and (6.1.16),

2D = \int_0^T\sin(k w_0 t)\sin(m w_0 t)\,dt + \int_0^T\cos(k w_0 t)\cos(m w_0 t)\,dt
= \int_0^T\cos\big(k w_0 t - m w_0 t\big)\,dt = \int_0^T\cos\big((k - m) w_0 t\big)\,dt

2D = 0, since the right side of the above equation is zero (see Equation (6.1.8)). Thus,

D = \int_0^T\sin(k w_0 t)\sin(m w_0 t)\,dt = 0

when m \neq k.
Let

B = \int_0^T\sin^2(k w_0 t)\,dt (6.1.17)

Recall

\sin^2(\alpha) = \frac{1 - \cos(2\alpha)}{2}

Thus,

B = \int_0^T\Big[\frac{1}{2} - \frac{1}{2}\cos(2 k w_0 t)\Big]\,dt = \Big[\frac{1}{2}\,t - \frac{1}{4 k w_0}\sin(2 k w_0 t)\Big]_0^T

B = \Big[\frac{T}{2} - \frac{1}{4 k w_0}\sin(2 k w_0 T)\Big] - 0 = \frac{T}{2} - \frac{1}{4 k w_0}\sin(2 k\cdot 2\pi) = \frac{T}{2}
We can do the same for \cos^2 and we get:

\int_0^T\sin^2(k w_0 t)\,dt = \int_0^T\cos^2(k w_0 t)\,dt = \frac{T}{2} (6.1.18)
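These orthogonality relations are easy to confirm numerically. The Python sketch below (my own illustrative check, not from the source) evaluates the integrals (6.1.10), (6.1.13) and (6.1.18) by the trapezoidal rule:

```python
import math

T = 2.0
w0 = 2 * math.pi / T  # Eqs. (6.1.5)-(6.1.6)

def integrate(g, a, b, steps=4000):
    # Trapezoidal rule; very accurate here since the integrands are periodic in [a, b]
    h = (b - a) / steps
    s = 0.5 * (g(a) + g(b))
    for i in range(1, steps):
        s += g(a + i * h)
    return s * h

for k in (1, 2, 3):
    for m in (1, 2, 3):
        ss = integrate(lambda t: math.sin(k * w0 * t) * math.sin(m * w0 * t), 0, T)
        cc = integrate(lambda t: math.cos(k * w0 * t) * math.cos(m * w0 * t), 0, T)
        cs = integrate(lambda t: math.cos(k * w0 * t) * math.sin(m * w0 * t), 0, T)
        expected = T / 2 if k == m else 0.0
        assert abs(ss - expected) < 1e-9   # Eqs. (6.1.13), (6.1.18)
        assert abs(cc - expected) < 1e-9
        assert abs(cs) < 1e-9              # Eq. (6.1.10)
print("orthogonality relations verified")
```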
Finally, we have for (6.1.9)

\int_0^T f(t)\sin(k w_0 t)\,dt = 0 + 0 + \int_0^T b_k\sin^2(k w_0 t)\,dt = b_k\,\frac{T}{2}
Thus,

b_k = \frac{2}{T}\int_0^T f(t)\sin(k w_0 t)\,dt
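As a worked example (my own, with an arbitrarily chosen test signal): for a unit square wave of period T — f(t) = 1 on [0, T/2), f(t) = -1 on [T/2, T) — the coefficients should come out as b_k = 4/(k\pi) for odd k and 0 for even k:

```python
import math

T = 1.0
w0 = 2 * math.pi / T

def trap(g, a, b, steps=4000):
    # Trapezoidal rule on a smooth piece of the integrand
    h = (b - a) / steps
    s = 0.5 * (g(a) + g(b))
    for i in range(1, steps):
        s += g(a + i * h)
    return s * h

def b_coeff(k):
    # b_k = (2/T) int_0^T f(t) sin(k w0 t) dt, split at the jump t = T/2
    return (2 / T) * (trap(lambda t: math.sin(k * w0 * t), 0, T / 2)
                      - trap(lambda t: math.sin(k * w0 * t), T / 2, T))

for k in range(1, 8):
    exact = 4 / (k * math.pi) if k % 2 == 1 else 0.0
    assert abs(b_coeff(k) - exact) < 1e-6
print("square wave: b_k = 4/(k*pi) for odd k, 0 for even k")
```

Splitting the integral at the discontinuity keeps each piece smooth, so the simple trapezoidal rule remains accurate.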
A similar derivation can be used to obtain a_k, as given in Equation (6.1.3): multiply (6.1.1) by \cos(m w_0 t) instead and integrate.
Paragraph 6.2 Complex Form of the Fourier Series
Using Euler's identity, e^{ix} = \cos(x) + i\sin(x), and e^{-ix} = \cos(x) - i\sin(x), the sine and cosine can be expressed in exponential form as

\sin(x) = \frac{e^{ix} - e^{-ix}}{2i} = \text{odd function, since } \sin(x) = -\sin(-x)

\cos(x) = \frac{e^{ix} + e^{-ix}}{2} = \text{even function, since } \cos(x) = \cos(-x)
Thus, the Fourier series (expressed in Equation (6.1.1)) can be converted into the following form

f(t) = a_0 + \sum_{k=1}^{\infty}\Big[a_k\Big(\frac{e^{ik w_0 t} + e^{-ik w_0 t}}{2}\Big) + b_k\Big(\frac{e^{ik w_0 t} - e^{-ik w_0 t}}{2i}\Big)\Big] (6.2.1)
or

f(t) = a_0 + \sum_{k=1}^{\infty}\Big[e^{ik w_0 t}\Big(\frac{a_k}{2} + \frac{b_k}{2i}\cdot\frac{i}{i}\Big) + e^{-ik w_0 t}\Big(\frac{a_k}{2} - \frac{b_k}{2i}\cdot\frac{i}{i}\Big)\Big]

or, since i^2 = -1, one obtains

f(t) = a_0 + \sum_{k=1}^{\infty}\Big[e^{ik w_0 t}\Big(\frac{a_k - i b_k}{2}\Big) + e^{-ik w_0 t}\Big(\frac{a_k + i b_k}{2}\Big)\Big] (6.2.2)
Define the following constants

C_0 \equiv a_0 (6.2.3)

C_k \equiv \frac{a_k - i b_k}{2} (6.2.4)

Hence:

C_{-k} \equiv \frac{a_{-k} - i b_{-k}}{2} (6.2.5)

Using the even and odd properties shown in Equations (6.1.3) and (6.1.4) respectively, Equation (6.2.5) becomes

C_{-k} = \frac{a_k + i b_k}{2} (6.2.6)
Substituting Equations (6.2.3), (6.2.4) and (6.2.6) into Equation (6.2.2), one gets
f(t) = C_0 + \sum_{k=1}^{\infty} C_k e^{ik w_0 t} + \sum_{k=1}^{\infty} C_{-k} e^{-ik w_0 t}
= \sum_{k=0}^{\infty} C_k e^{ik w_0 t} + \sum_{k=1}^{\infty} C_{-k} e^{-ik w_0 t}
= \sum_{k=0}^{\infty} C_k e^{ik w_0 t} + \sum_{k=-\infty}^{-1} C_k e^{ik w_0 t}
= \sum_{k=-\infty}^{\infty} C_k e^{ik w_0 t} (6.2.7)
The coefficient C_k can be computed by substituting Equations (6.1.3) and (6.1.4) into Equation (6.2.4) to obtain

C_k = \frac{1}{2}\cdot\frac{2}{T}\Big[\int_0^T f(t)\cos(k w_0 t)\,dt - i\int_0^T f(t)\sin(k w_0 t)\,dt\Big]
= \frac{1}{T}\int_0^T f(t)\big[\cos(k w_0 t) - i\sin(k w_0 t)\big]\,dt
Substituting the exponential forms of sine and cosine from above into this equation, one gets

C_k = \frac{1}{T}\int_0^T f(t)\Big[\frac{e^{ik w_0 t} + e^{-ik w_0 t}}{2} - i\,\frac{e^{ik w_0 t} - e^{-ik w_0 t}}{2i}\Big]\,dt
= \frac{1}{T}\int_0^T f(t)\,e^{-ik w_0 t}\,dt (6.2.8)
Thus, Equations (6.2.7) and (6.2.8) are the equivalent complex version of Equations (6.1.1)-(6.1.4).
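A quick numerical check of (6.2.4) and (6.2.8) (my own sketch; the test signal is arbitrary): for f(t) = 1 + 2\cos(w_0 t) + 3\sin(2 w_0 t) we must get C_0 = 1, C_1 = 1, C_2 = -3i/2, and C_{-2} = +3i/2 from (6.2.6):

```python
import cmath, math

T = 1.0
w0 = 2 * math.pi / T
f = lambda t: 1 + 2 * math.cos(w0 * t) + 3 * math.sin(2 * w0 * t)

def C(k, steps=4000):
    # Eq. (6.2.8): C_k = (1/T) int_0^T f(t) e^{-i k w0 t} dt (trapezoidal rule)
    h = T / steps
    s = 0.5 * (f(0) + f(T) * cmath.exp(-1j * k * w0 * T))
    for i in range(1, steps):
        t = i * h
        s += f(t) * cmath.exp(-1j * k * w0 * t)
    return s * h / T

assert abs(C(0) - 1) < 1e-9           # C_0 = a_0
assert abs(C(1) - 1) < 1e-9           # C_1 = (a_1 - i b_1)/2 = 2/2
assert abs(C(2) - (-1.5j)) < 1e-9     # C_2 = (0 - 3i)/2
assert abs(C(-2) - 1.5j) < 1e-9       # C_{-2} = (a_2 + i b_2)/2, Eq. (6.2.6)
print("C_k matches (a_k - i b_k)/2")
```

For a real signal the coefficients come in conjugate pairs, C_{-k} = \overline{C_k}, which is exactly the content of (6.2.6).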
Paragraph 6.3 Fourier Transform (for Non-Periodic Functions)
This deduction is similar to pag. 439 [4]. Recall that a periodic function can be expressed in terms of the exponential form, according to Equations (6.2.7) and (6.2.8) of Paragraph 6.2, as

f(t) = \sum_{k=-\infty}^{\infty} C_k e^{ik w_0 t}, \qquad C_k = \frac{1}{T}\int_0^T f(t)\,e^{-ik w_0 t}\,dt
Define the following function

F(ik w_0) = \int_{-T/2}^{T/2} f(t)\,e^{-ik w_0 t}\,dt (6.3.1)

where F(ik w_0) is a function of i, k, and w_0.
Then, Equation (6.2.8) can be written as

C_k = \frac{1}{T}\,F(ik w_0) (6.3.2)

and Equation (6.2.7) becomes

f(t) = \sum_{k=-\infty}^{\infty}\frac{1}{T}\,F(ik w_0)\,e^{ik w_0 t} (6.3.3)
A non-periodic function f_{np} can be considered as a periodic function with the period T \to \infty, or f = \frac{1}{T} \to 0 (see Fig 1).
From Equations (6.1.5) and (6.1.6), one gets

w_0 = 2\pi f = \frac{2\pi}{T} = 2\pi\,\Delta f (6.3.4)
Fig 1 Discretization of frequency data
From Equation (6.3.3), one obtains

f_{np}(t) = \lim_{T\to\infty,\ \Delta f\to 0} f(t) = \lim_{\Delta f\to 0}\sum_{k=-\infty}^{\infty}(\Delta f)\,F(ik w_0)\,e^{ik w_0 t} (6.3.5)

In the above equation, the subscript np denotes the non-periodic function.

f_{np}(t) = \lim_{\Delta f\to 0}\sum_{k=-\infty}^{\infty}(\Delta f)\,F(ik\,2\pi\Delta f)\,e^{ik\,2\pi\Delta f\,t} (6.3.6)
Realizing that k\,\Delta f \to f (see Fig 1), the above equation becomes

f_{np}(t) = \int_{-\infty}^{\infty} F(i 2\pi f)\,e^{i 2\pi f t}\,df (6.3.7)
Multiplying and dividing the right-hand side of the equation by 2\pi, one obtains

f_{np}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(i 2\pi f)\,e^{i 2\pi f t}\,d(2\pi f) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(i w_0)\,e^{i w_0 t}\,d w_0 (6.3.8)

(6.3.8) is the inverse Fourier transform.
Using the definition stated in Equation (6.3.1), one has

F(i w_0) = \int_{-\infty}^{\infty} f_{np}(t)\,e^{-i w_0 t}\,dt (6.3.9)

(6.3.9) is the Fourier transform.
6.3.1 Most important
Thus, Equations (6.3.9) and (6.3.8) will transform a non-periodic function from time domain to
frequency domain, and from frequency domain to time domain, respectively.
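As a numerical illustration of the pair (6.3.8)-(6.3.9) (my own sketch, not from the source), take the Gaussian f(t) = e^{-t^2/2}, whose transform under (6.3.9) is known in closed form: F(i w) = \sqrt{2\pi}\,e^{-w^2/2}:

```python
import math

def F(w, t_max=10.0, steps=4000):
    # Eq. (6.3.9) for f(t) = exp(-t^2/2); the imaginary part vanishes since f is even
    h = 2 * t_max / steps
    re = 0.0
    for i in range(steps + 1):
        t = -t_max + i * h
        wgt = 0.5 if i in (0, steps) else 1.0   # trapezoid end weights
        re += wgt * math.exp(-t * t / 2) * math.cos(w * t)
    return re * h

for w in (0.0, 0.5, 1.0, 2.0):
    exact = math.sqrt(2 * math.pi) * math.exp(-w * w / 2)
    assert abs(F(w) - exact) < 1e-8
print("Gaussian transform pair verified")
```

The Gaussian is a convenient test precisely because it transforms into another Gaussian, so the truncation of the integration range at |t| = 10 costs only about e^{-50}.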
Paragraph 6.4 Deduction II
See pag. 936 [9]: we will write the coefficients (6.1.3) and (6.1.4) in the following form

a_n = \frac{1}{L}\int_{-L}^{L} f(t)\cos\frac{n\pi t}{L}\,dt (6.4.1)

b_n = \frac{1}{L}\int_{-L}^{L} f(t)\sin\frac{n\pi t}{L}\,dt (6.4.2)
By introducing these coefficients in (6.1.1) we have the resulting Fourier series

f(x) = \frac{1}{2L}\int_{-L}^{L} f(t)\,dt + \frac{1}{L}\sum_{n=1}^{\infty}\cos\frac{n\pi x}{L}\int_{-L}^{L} f(t)\cos\frac{n\pi t}{L}\,dt + \frac{1}{L}\sum_{n=1}^{\infty}\sin\frac{n\pi x}{L}\int_{-L}^{L} f(t)\sin\frac{n\pi t}{L}\,dt

or

f(x) = \frac{1}{2L}\int_{-L}^{L} f(t)\,dt + \frac{1}{L}\sum_{n=1}^{\infty}\int_{-L}^{L} f(t)\cos\frac{n\pi}{L}(t - x)\,dt (6.4.3)
We now let the parameter L approach infinity, transforming the finite interval [-L, L] into the infinite interval (-\infty, \infty). We set

\frac{n\pi}{L} = u, \qquad \frac{\pi}{L} = \Delta u, \qquad \text{with } L \to \infty

then we have

f(x) = \frac{1}{\pi}\sum_{n=1}^{\infty}\Delta u\int_{-\infty}^{\infty} f(t)\cos u(t - x)\,dt

or

f(x) = \frac{1}{\pi}\int_0^{\infty} d u\int_{-\infty}^{\infty} f(t)\cos u(t - x)\,dt (6.4.4)
replacing the infinite sum by the integral over u. The first term (corresponding to a_0) has vanished, assuming that \int_{-\infty}^{\infty} f(t)\,dt = 0. This is a formal demonstration; see pag. 753 [1].
We take Eq. (6.4.4) as the Fourier integral. It is subject to the conditions that f(x) is (1) piecewise continuous, (2) piecewise differentiable, and (3) absolutely integrable, that is, \int_{-\infty}^{\infty}|f(x)|\,dx is finite.
6.4.1 Fourier integral exponential form
This is the deduction for pag. 38 [24], which is taken from pag. 937 [9]:
Our Fourier integral (Eq. (6.4.4)) may be put into exponential form by noting that

f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} d u\int_{-\infty}^{\infty} f(t)\cos u(t - x)\,dt (6.4.5)

which differs from Eq. (6.4.4) in that the factor is 1/2\pi and the u-integration is extended over (-\infty, \infty), whereas

0 = \frac{1}{2\pi}\int_{-\infty}^{\infty} d u\int_{-\infty}^{\infty} f(t)\sin u(t - x)\,dt (6.4.6)

because \cos u(t - x) is an even function of u and \sin u(t - x) is an odd function of u. Adding Eqs. (6.4.5) and (6.4.6) (with a factor i), we obtain the Fourier integral theorem

f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i u x}\,d u\int_{-\infty}^{\infty} f(t)\,e^{i u t}\,dt (6.4.7)
6.4.2 Dirac Delta Function Derivation
Definition of the Dirac delta function; see [8] pag. 45: the problem of the divergence in the Gauss theorem!
The one-dimensional Dirac delta function, \delta(x), can be pictured as an infinitely high, infinitesimally narrow "spike" with area 1 (Fig 2). That is to say:

\delta(x) = \begin{cases} 0, & \text{if } x \neq 0\\ \infty, & \text{if } x = 0 \end{cases}

and

Fig 2

\int_{-\infty}^{\infty}\delta(x)\,dx = 1
Technically, \delta(x) is not a function at all, since its value is not finite at x = 0. In the mathematical literature it is known as a generalized function, or distribution. It is, if you like, the limit of a sequence of functions, such as rectangles R_n(x), of height n and width 1/n, or isosceles triangles T_n(x), of height n and base 2/n (Fig 3). If f(x) is some "ordinary" function (that is, not another delta function; in fact, just to be on the safe side, let's say that f(x) is continuous), then the product f(x)\,\delta(x) is zero everywhere except at x = 0. It follows that

f(x)\,\delta(x) = f(0)\,\delta(x)

(This is the most important fact about the delta function, so make sure you understand why it is true: since the product is zero anyway except at x = 0, we may as well replace f(x) by the value it assumes at the origin.) In particular,

\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx = f(0)\int_{-\infty}^{\infty}\delta(x)\,dx = f(0) (6.4.8)
Of course, we can shift the spike from x = 0 to some other point, x = a (Fig 4):

\delta(x - a) = \begin{cases} 0, & \text{if } x \neq a\\ \infty, & \text{if } x = a \end{cases}
\qquad\text{and}\qquad \int_{-\infty}^{\infty}\delta(x - a)\,dx = 1
If the order of integration of Eq. (6.4.7) is reversed, we may rewrite it as

f(x) = \int_{-\infty}^{\infty} f(t)\Big\{\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i u(t - x)}\,d u\Big\}\,dt (6.4.9)

Apparently the quantity in curly brackets behaves as a delta function \delta(t - x). We might take Eq. (6.4.9) as presenting us with a representation of the Dirac delta function. Alternatively, we take it as a clue to a new derivation of the Fourier integral theorem.

Fig 3
Fig 4
We know that the properties of the delta function can be represented by

f(x) = \lim_{n\to\infty}\int_{-\infty}^{\infty} f(t)\,\delta_n(t - x)\,dt (6.4.10)

where \delta_n(t - x) is a sequence defining the distribution \delta(t - x). Note that Eq. (6.4.10) assumes that f(t) is continuous at t = x. We take \delta_n(t - x) to be

\delta_n(t - x) = \frac{\sin n(t - x)}{\pi(t - x)} = \frac{1}{2\pi}\int_{-n}^{n} e^{i u(t - x)}\,d u (6.4.11)
because

\frac{1}{2\pi}\int_{-n}^{n} e^{i u(t - x)}\,d u = \frac{1}{2\pi}\,\frac{e^{i u(t - x)}}{i(t - x)}\Big|_{-n}^{n} = \frac{1}{2\pi}\Big[\frac{e^{i n(t - x)}}{i(t - x)} - \frac{e^{-i n(t - x)}}{i(t - x)}\Big] = \frac{\sin n(t - x)}{\pi(t - x)}
Now take the limit:

f(x) = \lim_{n\to\infty}\frac{1}{2\pi}\int_{-\infty}^{\infty} f(t)\int_{-n}^{n} e^{i u(t - x)}\,d u\,dt (6.4.12)
Interchanging the order of integration and then taking the limit as n \to \infty, we have Eq. (6.4.7), the Fourier integral theorem. With the understanding that it belongs under an integral sign, as in Eq. (6.4.10), we make the identification

\delta(t - x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i u(t - x)}\,d u (6.4.13)
We can see also pag 59 [12].
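The delta sequence (6.4.11) can be watched in action numerically. The sketch below (my own; the parameters are chosen for illustration only) applies \delta_n to a Gaussian and recovers f(x), as in Eq. (6.4.10):

```python
import math

def delta_n(s, n):
    # Eq. (6.4.11): delta_n(s) = sin(n s) / (pi s), with limiting value n/pi at s = 0
    if abs(s) < 1e-12:
        return n / math.pi
    return math.sin(n * s) / (math.pi * s)

def smear(f, x, n, t_max=8.0, steps=16000):
    # Eq. (6.4.10): int f(t) delta_n(t - x) dt, trapezoidal rule
    h = 2 * t_max / steps
    s = 0.0
    for i in range(steps + 1):
        t = -t_max + i * h
        wgt = 0.5 if i in (0, steps) else 1.0
        s += wgt * f(t) * delta_n(t - x, n)
    return s * h

f = lambda t: math.exp(-t * t)
for x in (0.0, 0.3, -1.0):
    assert abs(smear(f, x, n=12) - f(x)) < 1e-5
print("delta sequence reproduces f(x)")
```

For a band-limited or rapidly decaying f, applying \delta_n is the same as truncating the Fourier inversion at frequency n, which is why even modest n already reproduces f(x) accurately for a Gaussian.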
6.4.3 Fourier transforms
See pag. 938 [9]. Let us define g(u), the Fourier transform of the function f(t), by

g(u) \equiv \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\,e^{i u t}\,dt (6.4.14)

and its inverse by

f(t) \equiv \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} g(u)\,e^{-i u t}\,d u (6.4.15)

Both were derived from Eq. (6.4.7); see pag. 441 [4]. In the 2001 edition of Arfken [9] this is written

f(x) \equiv \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} g(u)\,e^{-i u x}\,d u (6.4.16)
Note that Eqs. (6.4.14) and (6.4.15) are almost but not quite symmetrical, differing in the sign of i. Here two points deserve comment. First, the 1/\sqrt{2\pi} symmetry is a matter of choice, not of necessity. Many authors will attach the entire 1/2\pi factor of Eq. (6.4.7) to one of the two equations: Eq. (6.4.14) or Eq. (6.4.15). Second, although the Fourier integral, Eq. (6.4.7), has received much attention in the mathematics literature, we shall be primarily interested in the Fourier transform and its inverse. They are the equations with physical significance.
6.4.4 Cosine transform
If f(x) is odd or even, these transforms may be expressed in a somewhat different form. Consider first an even function f_c with f_c(-x) = f_c(x). Writing the exponential of Eq. (6.4.14) in trigonometric form, we have

g_c(u) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f_c(t)\,(\cos u t + i\sin u t)\,dt = \sqrt{\frac{2}{\pi}}\int_0^{\infty} f_c(t)\cos u t\,dt (6.4.17)

the \sin u t dependence vanishing on integration over the symmetric interval (-\infty, \infty). Similarly, since \cos u t is even, Eq. (6.4.15) transforms to

f_c(t) = \sqrt{\frac{2}{\pi}}\int_0^{\infty} g_c(u)\cos u t\,d u (6.4.18)

or, as in (6.4.16),

f_c(x) = \sqrt{\frac{2}{\pi}}\int_0^{\infty} g_c(u)\cos u x\,d u (6.4.19)

Equations (6.4.17) and (6.4.18) are known as the Fourier cosine transforms.
6.4.5 Sine transform
The corresponding pair of Fourier sine transforms is obtained by assuming that f_s(-x) = -f_s(x), odd, and applying the same symmetry arguments. The equations are

g_s(u) = \sqrt{\frac{2}{\pi}}\int_0^{\infty} f_s(t)\sin u t\,dt (6.4.20)

and

f_s(t) = \sqrt{\frac{2}{\pi}}\int_0^{\infty} g_s(u)\sin u t\,d u (6.4.21)
Note that the Fourier cosine transforms and the Fourier sine transforms each involve only positive values (and zero) of the arguments. We use the parity of f(x) to establish the transforms; but once the transforms are established, the behaviour of the functions f and g for negative argument is irrelevant. In effect, the transform equations themselves impose a definite parity: even for the Fourier cosine transform and odd for the Fourier sine transform.
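Since e^{-t^2/2} is even, it makes a convenient test of the cosine-transform pair: by (6.4.17) it is its own Fourier cosine transform. A small Python check (my own sketch; the names are hypothetical):

```python
import math

def cosine_transform(f, u, t_max=10.0, steps=8000):
    # Eq. (6.4.17): g_c(u) = sqrt(2/pi) int_0^inf f(t) cos(u t) dt (truncated)
    h = t_max / steps
    s = 0.5 * (f(0.0) + f(t_max) * math.cos(u * t_max))
    for i in range(1, steps):
        t = i * h
        s += f(t) * math.cos(u * t)
    return math.sqrt(2 / math.pi) * s * h

gauss = lambda t: math.exp(-t * t / 2)
# exp(-t^2/2) is its own Fourier cosine transform
for u in (0.0, 0.7, 1.5, 3.0):
    assert abs(cosine_transform(gauss, u) - math.exp(-u * u / 2)) < 1e-8
print("Gaussian is its own cosine transform")
```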
Paragraph 6.5 Dirac delta function: an Electrostatic Enigma
Superposition principle
Consider N charges, q_1 through q_N, which are located at position vectors r_1 through r_N. Electrical forces obey what is known as the principle of superposition. The electrical force acting on a test charge q at position vector r is simply the vector sum of all of the Coulomb-law forces from each of the N charges taken in isolation. In other words, the electrical force exerted by the i-th charge (say) on the test charge is the same as if all the other charges were not there. Thus, the force acting on the test charge is given by

f(r) = q\sum_{i=1}^{N}\frac{q_i}{4\pi\varepsilon_0}\,\frac{r - r_i}{|r - r_i|^3} (6.5.1)
It is helpful to define a vector field E(r), called the electric field, which is the force exerted on a unit test charge located at position vector r. So, the force on a test charge is

f = q\,E (6.5.2)

and the electric field is given by

E(r) = \sum_{i=1}^{N}\frac{q_i}{4\pi\varepsilon_0}\,\frac{r - r_i}{|r - r_i|^3} (6.5.3)
At this point, we have no reason to believe that the electric field has any real physical existence. It is
just a useful device for calculating the force which acts on test charges placed at various locations.
The electric field from a single charge q located at the origin is purely radial, points outwards if the charge is positive, inwards if it is negative, and has magnitude

E_r(r) = \frac{q}{4\pi\varepsilon_0 r^2} (6.5.4)

where r = |r|.
We can represent an electric field by field-lines. The direction of the lines indicates the direction of
the local electric field, and the density of the lines perpendicular to this direction is proportional to the
magnitude of the local electric field. Thus, the field of a point positive charge is represented by a group
of equally spaced straight lines radiating from the charge (see Fig 5).
Fig 5
The electric field from a collection of charges is simply the vector sum of the fields from each of the charges taken in isolation. In other words, electric fields are completely superposable. Suppose that, instead of having discrete charges, we have a continuous distribution of charge represented by a charge density \rho(r). Thus, the charge at position vector r' is \rho(r')\,d^3 r', where d^3 r' is the volume element at r'. It follows from a simple extension of Eq. (6.5.3) that the electric field generated by this charge distribution is

E(r) = \frac{1}{4\pi\varepsilon_0}\int\rho(r')\,\frac{r - r'}{|r - r'|^3}\,d^3 r' (6.5.5)

where the volume integral is over all space, or, at least, over all space for which \rho(r') is non-zero.
The electric scalar potential
Suppose that r = (x, y, z) and r' = (x', y', z') in Cartesian coordinates. The x-component of \frac{r - r'}{|r - r'|^3} is written

\frac{x - x'}{\big[(x - x')^2 + (y - y')^2 + (z - z')^2\big]^{3/2}} = -\frac{\partial}{\partial x}\Big(\frac{1}{\big[(x - x')^2 + (y - y')^2 + (z - z')^2\big]^{1/2}}\Big) (6.5.6)

Since there is nothing special about the x-axis, we can write

\frac{r - r'}{|r - r'|^3} = -\nabla\Big(\frac{1}{|r - r'|}\Big) (6.5.7)
where \nabla \equiv (\partial/\partial x,\ \partial/\partial y,\ \partial/\partial z) is a differential operator which involves the components of r but not those of r'. It follows from Eq. (6.5.5) that

E = -\nabla\phi (6.5.8)

where

\phi(r) = \frac{1}{4\pi\varepsilon_0}\int\frac{\rho(r')}{|r - r'|}\,d^3 r' (6.5.9)
Thus, the electric field generated by a collection of fixed charges can be written as the gradient of a
scalar potential, and this potential can be expressed as a simple volume integral involving the charge
distribution.
The scalar potential generated by a charge q located at the origin is

\phi(r) = \frac{q}{4\pi\varepsilon_0 r}. (6.5.10)

According to Eq. (6.5.3), the scalar potential generated by a set of N discrete charges q_i, located at r_i, is
\phi(r) = \sum_{i=1}^{N}\phi_i(r) (6.5.11)

where

\phi_i(r) = \frac{q_i}{4\pi\varepsilon_0\,|r - r_i|} (6.5.12)
Thus, the scalar potential is just the sum of the potentials generated by each of the charges taken in
isolation.
Suppose that a particle of charge q is taken along some path from point P to point Q. The net work done on the particle by electrical forces is

W = \int_P^Q f\cdot d l (6.5.13)

where f is the electrical force, and d l is a line element along the path. Making use of Eqs. (6.5.2) and (6.5.8), we obtain

W = q\int_P^Q E\cdot d l = -q\int_P^Q\nabla\phi\cdot d l = -q\big[\phi(Q) - \phi(P)\big]. (6.5.14)
Thus, the work done on the particle is simply minus its charge times the difference in electric potential between the end point and the beginning point. This quantity is clearly independent of the path taken between P and Q. So, an electric field generated by stationary charges is an example of a conservative field. In fact, this result follows immediately from vector field theory once we are told, in Eq. (6.5.8), that the electric field is the gradient of a scalar potential. The work done on the particle when it is taken around a closed loop is zero, so

\oint_C E\cdot d l = 0 (6.5.15)

for any closed loop C. This implies from Stokes' theorem that

\nabla\times E = 0 (6.5.16)
Gauss' law
Consider a single charge located at the origin. The electric field generated by such a charge is given by Eq. (6.5.4). Suppose that we surround the charge by a concentric spherical surface S of radius r (see Fig 6). The flux of the electric field through this surface is given by

Fig 6

\oint_S E\cdot d S = \oint_S E_r\,dS_r = E_r(r)\,4\pi r^2 = \frac{q}{4\pi\varepsilon_0 r^2}\,4\pi r^2 = \frac{q}{\varepsilon_0} (6.5.17)
since the normal to the surface is always parallel to the local electric field. However, we also know from Gauss' theorem that

\oint_S E\cdot d S = \int_V\nabla\cdot E\,d^3 r (6.5.18)
where V is the volume enclosed by surface S. Let us evaluate \nabla\cdot E directly. In Cartesian coordinates, the field is written

E = \frac{q}{4\pi\varepsilon_0}\Big(\frac{x}{r^3},\ \frac{y}{r^3},\ \frac{z}{r^3}\Big) (6.5.19)

where r^2 = x^2 + y^2 + z^2. So,
\frac{\partial E_x}{\partial x} = \frac{q}{4\pi\varepsilon_0}\Big(\frac{\partial x}{\partial x}\,\frac{1}{r^3} + x\,\frac{\partial}{\partial r}\Big(\frac{1}{r^3}\Big)\frac{\partial r}{\partial x}\Big)
= \frac{q}{4\pi\varepsilon_0}\Big(\frac{1}{r^3} - \frac{3x}{r^4}\,\frac{x}{r}\Big) = \frac{q}{4\pi\varepsilon_0}\,\frac{r^2 - 3x^2}{r^5} (6.5.20)

where use has been made of

\frac{\partial r}{\partial x} = \frac{\partial}{\partial x}\sqrt{x^2 + y^2 + z^2} = \frac{2x}{2\sqrt{x^2 + y^2 + z^2}} = \frac{x}{r}
Formulas analogous to Eq. (6.5.20) can be obtained for \partial E_y/\partial y and \partial E_z/\partial z. The divergence of the field is thus given by

\nabla\cdot E = \frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z} = \frac{q}{4\pi\varepsilon_0}\,\frac{3r^2 - 3x^2 - 3y^2 - 3z^2}{r^5} = 0. (6.5.21)
This is a puzzling result! We have from Eqs. (6.5.17) and (6.5.18) that

\int_V\nabla\cdot E\,d^3 r = \frac{q}{\varepsilon_0}, (6.5.22)

and yet we have just proved that \nabla\cdot E = 0. This paradox can be resolved after a close examination of Eq. (6.5.21). At the origin (r = 0) we find that \nabla\cdot E = 0/0, which means that \nabla\cdot E can take any value at this point. Thus, Eqs. (6.5.21) and (6.5.22) can be reconciled if \nabla\cdot E is some sort of spike function: i.e., it is zero everywhere except arbitrarily close to the origin, where it becomes very large. This must occur in such a manner that the volume integral over the spike is finite.
Dirac delta function
This is zero everywhere except arbitrarily close to x = 0, yet it possesses a finite integral:

\int_{-\infty}^{\infty}\delta(x)\,dx = 1. (6.5.23)

Thus, \delta(x) has all of the required properties of a spike function. The one-dimensional spike function \delta(x) is called the Dirac delta function after the Cambridge physicist Paul Dirac, who invented it in 1927 while investigating quantum mechanics. The delta function is an example of what mathematicians call a generalized function: it is not well-defined at x = 0, but its integral is nevertheless well-defined. Consider the integral

\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx, (6.5.25)

where f(x) is a function which is well-behaved in the vicinity of x = 0. Since the delta function is zero everywhere apart from very close to x = 0, it is clear that

\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx = f(0)\int_{-\infty}^{\infty}\delta(x)\,dx = f(0), (6.5.26)
where use has been made of Eq. (6.5.23). The above equation, which is valid for any well-behaved function f(x), is effectively the definition of the delta function.
A simple change of variables allows us to define \delta(x - x_0), which is a spike function centred on x = x_0. Equation (6.5.26) gives

\int_{-\infty}^{\infty} f(x)\,\delta(x - x_0)\,dx = f(x_0). (6.5.27)
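Numerically one cannot represent \delta(x) itself, but a narrow normalized Gaussian approximates it, and as its width shrinks the sampling property (6.5.27) emerges. A small sketch (my own, with arbitrary test values; not from the source):

```python
import math

def delta_approx(s, eps=1e-3):
    # Narrow normalized Gaussian used as a delta sequence
    return math.exp(-s * s / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def pick(f, x0, eps=1e-3, steps=20000):
    # int f(x) delta_eps(x - x0) dx ~ f(x0), the sampling property (6.5.27)
    a, b = x0 - 10 * eps, x0 + 10 * eps   # 10-sigma window captures the spike
    h = (b - a) / steps
    s = 0.5 * (f(a) * delta_approx(a - x0, eps) + f(b) * delta_approx(b - x0, eps))
    for i in range(1, steps):
        x = a + i * h
        s += f(x) * delta_approx(x - x0, eps)
    return s * h

f = lambda x: math.cos(x) + x * x
for x0 in (0.0, 0.5, -1.2):
    assert abs(pick(f, x0) - f(x0)) < 1e-5
print("sampling property reproduced")
```

The residual error is of order eps^2 f''(x_0)/2, i.e. it vanishes as the spike narrows, which is the numerical counterpart of the eps -> 0 limit.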
We actually want a three-dimensional spike function: i.e., a function which is zero everywhere apart from arbitrarily close to the origin, and whose volume integral is unity. If we denote this function by \delta(r), then it is easily seen that the three-dimensional delta function is the product of three one-dimensional delta functions:

\delta(r) = \delta(x)\,\delta(y)\,\delta(z). (6.5.28)

This function is clearly zero everywhere except the origin. But is its volume integral unity? Let us integrate over a cube of dimensions 2a which is centred on the origin and aligned along the Cartesian axes. This volume integral is obviously separable, so that

\int\delta(r)\,d^3 r = \int_{-a}^{a}\delta(x)\,dx\int_{-a}^{a}\delta(y)\,dy\int_{-a}^{a}\delta(z)\,dz. (6.5.29)

The integral can be turned into an integral over all space by taking the limit a \to \infty. However, we know that for one-dimensional delta functions \int_{-\infty}^{\infty}\delta(x)\,dx = 1, so it follows from the above equation that

\int\delta(r)\,d^3 r = 1 (6.5.30)
which is the desired result. A simple generalization of previous arguments yields

\int f(r)\,\delta(r)\,d^3 r = f(0) (6.5.31)

where f(r) is any well-behaved scalar field. Finally, we can change variables and write

\delta(r - r') = \delta(x - x')\,\delta(y - y')\,\delta(z - z'), (6.5.32)

which is a three-dimensional spike function centred on r = r'. It is easily demonstrated that

\int f(r)\,\delta(r - r')\,d^3 r = f(r') (6.5.33)

Up to now, we have only considered volume integrals taken over all space. However, it should be obvious that the above result also holds for integrals over any finite volume V which contains the point r = r'. Likewise, the integral is zero if V does not contain r = r'.
Let us now return to the problem in hand. The electric field generated by a charge q located at the origin has \nabla\cdot E = 0 everywhere apart from the origin, and also satisfies

\int_V\nabla\cdot E\,d^3 r = \frac{q}{\varepsilon_0} (6.5.34)

for a spherical volume V centred on the origin. These two facts imply that

\nabla\cdot E = \frac{q}{\varepsilon_0}\,\delta(r), (6.5.35)

where use has been made of Eq. (6.5.30).
At this stage, vector field theory has yet to show its worth. After all, we have just spent an inordinately long time proving something using vector field theory which we previously proved [see Eq. (6.5.18)] using conventional analysis. It is time to demonstrate the power of vector field theory.
Consider, again, a charge q at the origin surrounded by a spherical surface S which is centered on
the origin. Suppose that we now displace the surface S , so that it is no longer centered on the
origin. What is the flux of the electric field out of S? This is not a simple problem for conventional
analysis, because the normal to the surface is no longer parallel to the local electric field. However,
using vector field theory this problem is no more difficult than the previous one. We have

Ed

S=

Ed
3
r
(6.5.36)
from Gauss theorem (6.5.29), plus Eq. (6.5.34). From these equations, it is clear that the flux of E
out of S is q t
0
for a spherical surface displaced from the origin. However, the flux becomes
zero when the displacement is sufficiently large that the origin is no longer enclosed by the sphere. It is
possible to prove this via conventional analysis, but it is certainly not easy. Suppose that the surface S
is not spherical but is instead highly distorted. What now is the flux of E out of S ? This is a
virtually impossible problem in conventional analysis, but it is still easy using vector field theory.
Gauss theorem and Eq. (6.5.34) tell us that the flux is q/ t
0
provided that the surface contains the
origin, and that the flux is zero otherwise. This result is completely independent of the shape of S.
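This enclosure argument is easy to verify numerically for a sphere. The sketch below (my own illustration; a unit charge with q/(4\pi\varepsilon_0) set to 1, so the expected flux is 4\pi when the charge is enclosed and 0 otherwise) integrates E\cdot dS over the unit sphere for three charge positions:

```python
import math

def flux_through_unit_sphere(charge_pos, n_theta=400, n_phi=400):
    # Numerically integrate E . dS over the unit sphere for a point charge
    # at charge_pos, with q/(4 pi eps0) = 1 (midpoint rule on a theta-phi grid).
    total = 0.0
    dth = math.pi / n_theta
    dph = 2 * math.pi / n_phi
    for i in range(n_theta):
        th = (i + 0.5) * dth
        for j in range(n_phi):
            ph = (j + 0.5) * dph
            nx = math.sin(th) * math.cos(ph)   # outward normal = surface point
            ny = math.sin(th) * math.sin(ph)
            nz = math.cos(th)
            dx = nx - charge_pos[0]
            dy = ny - charge_pos[1]
            dz = nz - charge_pos[2]
            d3 = (dx * dx + dy * dy + dz * dz) ** 1.5
            e_dot_n = (dx * nx + dy * ny + dz * nz) / d3
            total += e_dot_n * math.sin(th) * dth * dph
    return total

assert abs(flux_through_unit_sphere((0.0, 0.0, 0.0)) - 4 * math.pi) < 1e-3
assert abs(flux_through_unit_sphere((0.3, 0.0, 0.2)) - 4 * math.pi) < 1e-2  # displaced, still inside
assert abs(flux_through_unit_sphere((0.0, 0.0, 2.0))) < 1e-3                # charge outside
print("flux is q/eps0 when the charge is enclosed, zero otherwise")
```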
Consider N charges q_i located at r_i. A simple generalization of Eq. (6.5.35) gives

\nabla\cdot E = \sum_{i=1}^{N}\frac{q_i}{\varepsilon_0}\,\delta(r - r_i) (6.5.37)
Thus, Gauss' theorem (6.5.18) implies that

\oint_S E\cdot d S = \int_V\nabla\cdot E\,d^3 r = \frac{Q}{\varepsilon_0}, (6.5.38)

where Q is the total charge enclosed by the surface S. This result is called Gauss' law, and does not depend on the shape of the surface.
Suppose, finally, that instead of having a set of discrete charges, we have a continuous charge distribution described by a charge density \rho(r). The charge contained in a small rectangular volume of dimensions d x, d y, and d z located at position r is Q = \rho(r)\,d x\,d y\,d z. However, if we integrate \nabla\cdot E over this volume element we obtain

\nabla\cdot E\;d x\,d y\,d z = \frac{Q}{\varepsilon_0} = \frac{\rho\,d x\,d y\,d z}{\varepsilon_0}, (6.5.39)

where use has been made of Eq. (6.5.38). Here, the volume element is assumed to be sufficiently small that \nabla\cdot E does not vary significantly across it. Thus, we obtain

\nabla\cdot E = \frac{\rho}{\varepsilon_0}. (6.5.40)
This is the first of four field equations, called Maxwell's equations, which together form a complete description of electromagnetism. Of course, our derivation of Eq. (6.5.40) is only valid for electric fields generated by stationary charge distributions. In principle, additional terms might be required to describe fields generated by moving charge distributions. However, it turns out that this is not the case, and that Eq. (6.5.40) is universally valid. From (6.5.8) and (6.5.9) we have

E(r) = \frac{1}{4\pi\varepsilon_0}\int\rho(r')\,\frac{r - r'}{|r - r'|^3}\,d^3 r' (6.5.41)
(6.5.41)
Equations (6.5.31) and (6.5.33) can be reconciled provided
r r '
r r '
3
=
(
1
rr '
)
xx

( xx

)
2
+( yy

)
2
+( zz

)
2
|
3 2
=
=

x
(
1
( xx

)
2
+( yy

)
2
+( zz

)
2
|
1/ 2
)

(
r r '
r r '
3
)
=
2
(
1
rr '
)
=46(r r ' )
(6.5.42)
where use has been made of Eq. (6.5.7).
It follows that

\nabla\cdot E(r) = \frac{1}{4\pi\varepsilon_0}\int\rho(r')\,\nabla\cdot\Big(\frac{r - r'}{|r - r'|^3}\Big)\,d^3 r'
= \int\frac{\rho(r')}{\varepsilon_0}\,\delta(r - r')\,d^3 r' = \frac{\rho(r)}{\varepsilon_0} (6.5.43)
which is the desired result. The most general form of Gauss' law, Eq. (6.5.38), is obtained by integrating Eq. (6.5.40) over a volume V surrounded by a surface S, and making use of Gauss' theorem (6.5.18):

\oint_S E\cdot d S = \frac{1}{\varepsilon_0}\int_V\rho(r)\,d^3 r (6.5.44)
6.5.1 Savelyev and Griffiths explanation of the Dirac delta function
Savelyev's deduction, vector representation in Cartesian coordinates, pag. 161 [15]. Savelyev's theoretical-physics volume uses Gaussian units, so in SI we have

\phi(r) = \frac{1}{4\pi\varepsilon_0}\int\frac{\rho(r')}{|r - r'|}\,d^3 r'

while in Gaussian units

\phi(r) = \int\frac{\rho(r')}{|r - r'|}\,d^3 r'
so

\oint_S E_n\,dS = 4\pi\int_V\rho\,dV (6.5.45)

Applying Gauss' theorem to Eq. (6.5.45), we have

\int_V\nabla\cdot E\,dV = 4\pi\int_V\rho\,dV (6.5.46)

or, in differential form:

\nabla\cdot E = 4\pi\rho (6.5.47)
Taking into account that

E = -\operatorname{grad}\phi = -\nabla\phi (6.5.48)

we have for (6.5.47):

\nabla\cdot(-\nabla\phi) = 4\pi\rho \quad\text{or}\quad \Delta\phi = -4\pi\rho (6.5.50)

Eq. (6.5.50) is called the Poisson equation, and if \rho = 0 we obtain the following equation, the Laplace equation:

\Delta\phi = 0 (6.5.51)
The solution of Eq. (6.5.50) is

\phi = \int\frac{\rho(r')\,dV'}{|r - r'|} (6.5.52)

This comes from solving (6.5.50): representing, for example, \Delta\phi(r) in polar coordinates, we have

\Delta\phi = \frac{1}{r^2}\,\frac{\partial}{\partial r}\Big(r^2\,\frac{\partial\phi}{\partial r}\Big) = 0

and by integrating twice, \phi = \frac{C_1}{r} + C_2. Or, the simplest way is from the Coulomb law:

\phi = -\int E_r\,dr = \frac{q}{r} = \int\frac{\rho\,dV}{r}
See Equations of Mathematical Physics, A. Tikhonov, A. Samarskii, pag. 282 [25].
In [15] Savelyev takes

\frac{\partial r}{\partial x} = \frac{\partial}{\partial x}\sqrt{x^2 + y^2 + z^2} = \frac{2x}{2\sqrt{x^2 + y^2 + z^2}} = \frac{x}{r} (6.5.53)
On pag. 384 the partial derivative of \phi(r) with respect to x_k has the form:

\frac{\partial\phi}{\partial x_k} = \frac{\partial\phi}{\partial r}\,\frac{\partial r}{\partial x_k} = \frac{\partial\phi}{\partial r}\,\frac{x_k}{r} (6.5.54)

where \partial r/\partial x_k = x_k/r as in (6.5.53). Hence

\nabla\phi(r) = \sum_k e_k\,\frac{\partial\phi}{\partial r}\,\frac{x_k}{r} = \frac{\partial\phi}{\partial r}\,\frac{1}{r}\sum_k e_k x_k = \frac{\partial\phi}{\partial r}\,\frac{r}{r} (6.5.55)

or

\nabla\phi(r) = \frac{\partial\phi}{\partial r}\,\nabla r = \frac{\partial\phi}{\partial r}\,\frac{r}{r} = \frac{\partial\phi}{\partial r}\,e_r (6.5.56)
Now, if we consider, as on pag. 160 [15], that

\phi(r) = \frac{1}{r} (6.5.57)

and take the gradient, from (6.5.56) we have:

\nabla\phi(r) = -\frac{1}{r^2}\,\frac{r}{r} (6.5.58)
Applying the \nabla operator once more to (6.5.58), we have

\Delta\frac{1}{r} = \nabla\cdot\Big(\nabla\frac{1}{r}\Big) = -\nabla\cdot\Big(\frac{r}{r^3}\Big) = -\frac{1}{r^3}\,\nabla\cdot r - r\cdot\nabla\Big(\frac{1}{r^3}\Big) (6.5.59)
The divergence of r is three. Writing r = \big(\sum_i x_i^2\big)^{1/2} (6.5.60), the gradient is

\operatorname{grad} r = \nabla r = \sum_i e_i\,\frac{\partial r}{\partial x_i} = \sum_i e_i\,\frac{x_i}{r} = \frac{r}{r} = e_r (6.5.61)

and the divergence is

\nabla\cdot r = \sum_i\frac{\partial r_i}{\partial x_i} = \sum_i\frac{\partial x_i}{\partial x_i} = 3 (6.5.62)
Finally, we get for (6.5.59):

\Delta\frac{1}{r} = -\frac{3}{r^3} - r\cdot\Big(-\frac{3}{r^4}\,\frac{r}{r}\Big) = -\frac{3}{r^3} + \frac{3}{r^3} = 0 (6.5.63)
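The result \Delta(1/r) = 0 away from the origin can be spot-checked with finite differences (my own sketch; the step size and test points are arbitrary):

```python
import math

def one_over_r(x, y, z):
    return 1.0 / math.sqrt(x * x + y * y + z * z)

def laplacian(f, x, y, z, h=1e-3):
    # Second-order central differences for f_xx + f_yy + f_zz
    return (f(x + h, y, z) + f(x - h, y, z)
            + f(x, y + h, z) + f(x, y - h, z)
            + f(x, y, z + h) + f(x, y, z - h)
            - 6 * f(x, y, z)) / (h * h)

# Away from the origin the Laplacian of 1/r vanishes, Eq. (6.5.63)
for p in ((1.0, 0.0, 0.0), (0.5, -0.7, 1.2), (2.0, 2.0, 2.0)):
    assert abs(laplacian(one_over_r, *p)) < 1e-4
print("Laplacian of 1/r vanishes away from the origin")
```

Of course no finite-difference check can see the delta-function spike hiding at r = 0; that is exactly the point of the discussion that follows.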
Proof of Griffiths, [8] pag. 45.
Here, instead of (6.5.58), he puts (with attention to notation)

v = \frac{1}{r^2}\,e_r (6.5.64)

where I write e_r instead of \hat{r} in order to avoid confusion, because he considers |\hat{r}| = |e_r| = 1. He then takes the divergence of (6.5.64), but in spherical coordinates, which is easier and faster:

\nabla\cdot v = \frac{1}{r^2}\,\frac{\partial}{\partial r}\Big(r^2\cdot\frac{1}{r^2}\Big) = \frac{1}{r^2}\,\frac{\partial}{\partial r}(1) = 0 (6.5.65)
So you must understand that (6.5.65) expresses the same result as (6.5.63). He goes further by applying Gauss' theorem: he takes the surface integral

\oint v\cdot d S = \int\Big(\frac{1}{R^2}\,e_r\Big)\cdot\big(R^2\sin\theta\,d\theta\,d\varphi\,e_r\big) = \Big(\int_0^{\pi}\sin\theta\,d\theta\Big)\Big(\int_0^{2\pi}d\varphi\Big) = 4\pi (6.5.66)
So, he asks, does this mean that the divergence theorem (Gauss theorem) is false?
Griffiths [8], p. 46:
But the volume integral is zero, if we are really to believe Eq. (6.5.6). Does this mean that the
divergence theorem is false? What's going on here?
The source of the problem is the point r = 0, where v blows up (and where, in Eq. (6.5.6), we have
unwittingly divided by zero). It is quite true that ∇·v = 0 everywhere except the origin, but right at
the origin the situation is more complicated. Notice that the surface integral (6.5.1) is independent of R;
if the divergence theorem is right (and it is), we should get ∮ v·dS = 4π for any sphere centered at the
origin, no matter how small. Evidently the entire contribution must be coming from the point r = 0!
Thus, ∇·v has the bizarre property that it vanishes everywhere except at one point, and yet its
integral (over any volume containing that point) is 4π. No ordinary function behaves like that. (On
the other hand, a physical example does come to mind: the density (mass per unit volume) of a point
particle. It's zero except at the exact location of the particle, and yet its integral is finite - namely, the
mass of the particle.) What we have stumbled on is a mathematical object known to physicists as the
Dirac delta function. It arises in many branches of theoretical physics. Moreover, the specific problem
at hand (the divergence of the function v = (1/r²)(r/r) = (1/r²) p) is not just some arcane curiosity; it is, in
fact, central to the whole theory of electrodynamics. So it is worthwhile to pause here and study the
Dirac delta function with some care.
Chapter 7 The most important things, for which I write all of this
Paragraph 7.1 Fourier transform of damped oscillator
See p. 457 [4].
The displacement of a damped harmonic oscillator as a function of time is given by
f(t) = { 0 for t < 0;  e^{−t/τ} sin ω₀t for t ≥ 0 }    (7.1.1)
The Fourier transform is:
f̃(ω) = ∫_{−∞}^0 0 · e^{−iωt} dt + ∫_0^∞ e^{−t/τ} sin ω₀t · e^{−iωt} dt    (7.1.2)
Writing
sin ω₀t = (e^{iω₀t} − e^{−iω₀t}) / 2i    (7.1.3)
we obtain
f̃(ω) = 0 + (1/2i) ∫_0^∞ [e^{−it(ω−ω₀−i/τ)} − e^{−it(ω+ω₀−i/τ)}] dt = (1/2) [1/(ω+ω₀−i/τ) − 1/(ω−ω₀−i/τ)]    (7.1.4)
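The closed form (7.1.4) can be spot-checked numerically (my own sketch, assuming the e^{−iωt} forward-transform convention used above; the parameter values are arbitrary): integrate e^{−t/τ} sin ω₀t · e^{−iωt} over a window long enough that the envelope has died away.

```python
import cmath
import math

tau, w0, w = 0.8, 3.0, 2.0          # arbitrary test parameters
n, T = 200_000, 40 * tau            # midpoint rule on [0, T]; exp(-T/tau) is negligible
dt = T / n
num = sum(math.exp(-(k + 0.5) * dt / tau)
          * math.sin(w0 * (k + 0.5) * dt)
          * cmath.exp(-1j * w * (k + 0.5) * dt)
          for k in range(n)) * dt
closed = 0.5 * (1 / (w + w0 - 1j / tau) - 1 / (w - w0 - 1j / tau))
err = abs(num - closed)             # quadrature + truncation error only
```

Agreement to a few parts in 10⁶ confirms the partial-fraction result.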
Paragraph 7.2 The Fourier transform of the exponential decay function
See p. 441 [4].
f(t) = { 0 for t < 0;  A e^{−λt} for t ≥ 0 }    (7.2.1)
Using the definition (6.4.14) and separating the integral in two parts,
f̃(ω) = (1/√(2π)) ∫_{−∞}^0 0 · e^{−iωt} dt + (A/√(2π)) ∫_0^∞ e^{−λt} e^{−iωt} dt
     = 0 + (A/√(2π)) [−e^{−(λ+iω)t}/(λ+iω)]_0^∞ = A / (√(2π)(λ+iω))    (7.2.2)
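Formula (7.2.2) admits the same kind of numerical check (an added sketch, assuming the e^{−iωt} convention; A, λ, ω below are arbitrary):

```python
import cmath
import math

A, lam, w = 2.0, 1.5, 0.7           # arbitrary test parameters
n, T = 100_000, 30.0 / lam          # exp(-lam*T) is negligible at this T
dt = T / n
num = sum(A * math.exp(-lam * (k + 0.5) * dt)
          * cmath.exp(-1j * w * (k + 0.5) * dt)
          for k in range(n)) * dt / math.sqrt(2 * math.pi)
closed = A / (math.sqrt(2 * math.pi) * (lam + 1j * w))
err = abs(num - closed)
```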
Paragraph 7.3 Fourier transform of delta function
P. 449 [4].
We also note that the Fourier transform definition of the delta function, (6.4.13), shows that the latter
is real, since
δ*(t) = (1/2π) ∫_{−∞}^∞ e^{−iωt} dω = δ(−t) = δ(t)    (7.3.1)
The Fourier transform of the delta function is simply:
δ̃(ω) = (1/√(2π)) ∫_{−∞}^∞ δ(t) e^{−iωt} dt = 1/√(2π)    (7.3.2)
Paragraph 7.4 Delta Dirac formulas
See p. 73 [12] and section 6.4.2.
δ(ax) = (1/|a|) δ(x)    (7.4.1)
δ[(x−a)(x−b)] = (1/|a−b|) [δ(x−a) + δ(x−b)]    (7.4.2)
δ(x² − a²) = (1/2|a|) [δ(x−a) + δ(x+a)]    (7.4.3)
∫ δ(x−a) δ(x−b) dx = δ(a−b)    (7.4.4)
∫ f(x) δ′(x−a) dx = −f′(a)    (7.4.5)
lim_{ε→0} 1/(x ∓ iε) = P(1/x) ± iπ δ(x)    (7.4.6)
δ(x) = lim_{ε→0} (1/2π) ∫_{−∞}^∞ e^{ixt−ε|t|} dt = lim_{ε→0} (1/2πi) [1/(x−iε) − 1/(x+iε)] = (1/π) lim_{ε→0} ε/(x²+ε²)    (7.4.7)
δ(x) = lim_{a→∞} g_a(x), where g_a(x) = (a/√π) e^{−a²x²} and ∫_{−∞}^∞ g_a(x) dx = 1    (7.4.8)
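The Gaussian representation (7.4.8) can be illustrated numerically (my sketch; a = 50 and the test function cos x are arbitrary choices): g_a integrates to 1, and against a smooth function it sifts out the value at x = 0.

```python
import math

a = 50.0

def g(x):
    # g_a(x) = (a / sqrt(pi)) * exp(-a^2 x^2), the nascent delta of (7.4.8)
    return a / math.sqrt(math.pi) * math.exp(-(a * x) ** 2)

def integrate(h, lo, hi, n=200_000):
    # simple midpoint rule; the Gaussian tails beyond [lo, hi] are negligible
    dx = (hi - lo) / n
    return sum(h(lo + (k + 0.5) * dx) for k in range(n)) * dx

norm = integrate(g, -1.0, 1.0)                                # -> ~1
sifted = integrate(lambda x: g(x) * math.cos(x), -1.0, 1.0)   # -> ~cos(0) = 1
```

Increasing a sharpens the spike and pushes the sifted value closer to cos(0).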
Chapter 8 Vectors
See [26].
Paragraph 8.1 Vector areas
Suppose that we have a planar surface of scalar area S. We can define
a vector area S whose magnitude is S, and whose direction is
perpendicular to the plane, in the sense determined by the right-hand
grip rule on the rim (see Fig 7). This quantity clearly possesses both
magnitude and direction. But is it a true vector? We know that if the
normal to the surface makes an angle α_x with the x-axis, then
the area seen looking along the x-direction is S cos α_x. This is
the x-component of S. Similarly, if the normal makes an angle α_y
with the y-axis, then the area seen looking along the y-direction is
S cos α_y. This is the y-component of S. If we
limit ourselves to a surface whose normal is perpendicular to the z-direction,
then α_x = π/2 − α_y = α. It follows that S = S(cos α, sin α, 0).
If we rotate the basis about the z-axis by θ degrees, which is
equivalent to rotating the normal to the surface about the z-axis by
−θ degrees (Fig 7), then
S_{x′} = S cos(α − θ) = S cos α cos θ + S sin α sin θ = S_x cos θ + S_y sin θ    (8.1.1)
which is the correct transformation rule for the x-component of a vector. The other components
transform correctly as well. This proves that a vector area is a true vector.
According to the vector addition theorem, the projected area of two plane surfaces, joined together at a
line, looking along the x-direction (say) is the x-component of the resultant of the vector areas of the
two surfaces. Likewise, for many joined-up plane areas, the projected area in the x-direction, which is
the same as the projected area of the rim in the x-direction, is the x-component of the resultant of all the
vector areas:
S = Σ_i S_i    (8.1.2)
If we approach a limit, by letting the number of plane facets increase and their areas reduce, then we
obtain a continuous surface denoted by the resultant vector area
S = Σ_i δS_i    (8.1.3)
It is clear that the projected area of the rim in the x-direction is just S_x. Note that the rim of the
surface determines the vector area rather than the nature of the surface. So, two different surfaces
sharing the same rim both possess the same vector area.
In conclusion, a loop (not all in one plane) has a vector area S which is the resultant of the vector
areas of any surface ending on the loop. The components of S are the projected areas of the loop in
the directions of the basis vectors. As a corollary, a closed surface has S=0 , since it does not
possess a rim.
Paragraph 8.2 The scalar product
A scalar quantity is invariant under all possible rotational transformations. The individual components
of a vector are not scalars because they change under transformation. Can we form a scalar out of some
combination of the components of one, or more, vectors? Suppose that we were to define the
``ampersand'' product,
a & b = a_x b_y + a_y b_z + a_z b_x = scalar number    (8.2.1)
for general vectors a and b. Is a & b invariant under transformation, as must be the case if it
is a scalar number? Let us consider an example. Suppose that a = (1, 0, 0) and b = (0, 1, 0). It is
easily seen that a & b = 1. Let us now rotate the basis through 45° about the z-axis. In the new
basis, a = (1/√2, −1/√2, 0) and b = (1/√2, 1/√2, 0), giving a & b = 1/2. Clearly, a & b is
not invariant under rotational transformation, so the above definition is a bad one.
Consider, now, the dot product or scalar product:
a·b = a_x b_x + a_y b_y + a_z b_z = scalar number    (8.2.2)
Consider the special case where a = b. Clearly,
a·a = a_x² + a_y² + a_z² = Length(OP)²    (8.2.3)
if a is the position vector of P relative to the origin O. So, the invariance of a·a is equivalent
to the invariance of the length, or magnitude, of vector a under transformation. The length of vector
a is usually denoted |a| (``the modulus of a'') or sometimes just a, so
a·a = |a|² = a²    (8.2.4)
Let us now investigate the general case. The length squared of AB (see Fig 8) is
(b − a)·(b − a) = |a|² + |b|² − 2 a·b    (8.2.5)
However, according to the ``cosine rule'' of trigonometry,
(AB)² = (OA)² + (OB)² − 2(OA)(OB) cos θ    (8.2.6)
where (AB) denotes the length of side AB. It follows that
a·b = |a| |b| cos θ    (8.2.7)
Clearly, the invariance of a·b under transformation is equivalent to the invariance of the angle
subtended between the two vectors. Note that if a·b = 0 then either |a| = 0, |b| = 0, or the
vectors a and b are perpendicular. The angle subtended between two vectors can easily be
obtained from the dot product:
cos θ = a·b / (|a| |b|)    (8.2.8)
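The contrast between the ``ampersand'' product (8.2.1) and the dot product can be played through numerically (an illustrative sketch I add; the rotation helper and sample vectors are my own choices):

```python
import math

def rot_z(v, th):
    # components of v in a basis rotated by th about the z-axis
    x, y, z = v
    return (x * math.cos(th) + y * math.sin(th),
            -x * math.sin(th) + y * math.cos(th),
            z)

def amp(a, b):   # the "ampersand" product (8.2.1)
    return a[0] * b[1] + a[1] * b[2] + a[2] * b[0]

def dot(a, b):   # the scalar product (8.2.2)
    return sum(x * y for x, y in zip(a, b))

a, b = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
ar, br = rot_z(a, math.pi / 4), rot_z(b, math.pi / 4)
# amp changes under rotation (1 -> 1/2); dot does not (0 -> 0)
```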
The work W performed by a constant force F moving an object through a displacement r is
the product of the magnitude of F times the displacement in the direction of F. If the angle
subtended between F and r is θ, then
W = F (r cos θ) = F·r    (8.2.9)
Fig 8
The dot product is commutative:
a·b = b·a    (8.2.10)
Paragraph 8.3 The vector product
We have discovered how to construct a scalar from the components of two general vectors a and
b. Can we also construct a vector which is not just a linear combination of a and b? Consider
the following definition:
a ∗ b = (a_x b_x, a_y b_y, a_z b_z)    (8.3.1)
Is a ∗ b a proper vector? Suppose that a = (1, 0, 0) and b = (0, 1, 0). It is easily seen that
a ∗ b = 0. Let us now rotate the basis through 45° about the z-axis. In the new basis,
a = (1/√2, −1/√2, 0) and b = (1/√2, 1/√2, 0), giving a ∗ b = (1/2, −1/2, 0). Thus, a ∗ b
does not transform like a vector, because its magnitude depends on the choice of axes. So, the above
definition is a bad one.
Consider, now, the cross product or vector product:
a×b = (a_y b_z − a_z b_y, a_z b_x − a_x b_z, a_x b_y − a_y b_x) = c    (8.3.2)
Does this rather unlikely combination transform like a vector? Let us try rotating the basis through
θ degrees about the z-axis. In the new basis,
c_{x′} = (−a_x sin θ + a_y cos θ) b_z − a_z (−b_x sin θ + b_y cos θ)
       = (a_y b_z − a_z b_y) cos θ + (a_z b_x − a_x b_z) sin θ
       = c_x cos θ + c_y sin θ    (8.3.3)
Thus, the x-component of a×b transforms correctly. It can easily be shown that the other
components transform correctly as well, and that all components also transform correctly under rotation
about the x- and y-axes. Thus, a×b is a proper vector. Incidentally, a×b is the only simple
combination of the components of two vectors which transforms like a vector (and which is non-coplanar
with a and b). The cross product is anticommutative,
a×b = −b×a    (8.3.4)
distributive,
a×(b+c) = a×b + a×c    (8.3.5)
but is not associative:
a×(b×c) ≠ (a×b)×c    (8.3.6)
The cross product transforms like a vector, which means that it must have a well-defined direction and
magnitude. We can show that a×b is perpendicular to both a and b. Consider a·(a×b). If
this is zero then the cross product must be perpendicular to a. Now
a·(a×b) = a_x (a_y b_z − a_z b_y) + a_y (a_z b_x − a_x b_z) + a_z (a_x b_y − a_y b_x) = 0    (8.3.7)
Therefore, a×b is perpendicular to a. Likewise, it can be demonstrated that a×b is
perpendicular to b. The vectors a, b, and a×b form a right-handed set. This defines a
unique direction for a×b, which is obtained from the right-hand rule (see Fig 9).
Let us now evaluate the magnitude of a×b. We have
(a×b)² = (a_y b_z − a_z b_y)² + (a_z b_x − a_x b_z)² + (a_x b_y − a_y b_x)²
       = (a_x² + a_y² + a_z²)(b_x² + b_y² + b_z²) − (a_x b_x + a_y b_y + a_z b_z)²
       = |a|² |b|² − (a·b)² = |a|² |b|² − |a|² |b|² cos² θ = |a|² |b|² sin² θ    (8.3.8)
Thus,
|a×b| = |a| |b| sin θ    (8.3.9)
Clearly, a×a = 0 for any vector, since θ is always zero in this case. Also, if a×b = 0 then
either |a| = 0, |b| = 0, or b is parallel (or antiparallel) to a.
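Properties (8.3.7)-(8.3.9) are easy to confirm numerically (a sketch with arbitrary sample vectors of my own choosing):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):   # definition (8.3.2)
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(a):
    return math.sqrt(dot(a, a))

a, b = (1.0, 2.0, -1.0), (0.5, -1.0, 3.0)
c = cross(a, b)
theta = math.acos(dot(a, b) / (norm(a) * norm(b)))
perp_a, perp_b = dot(a, c), dot(b, c)                     # both ~0: c is perpendicular to a and b
mag_err = abs(norm(c) - norm(a) * norm(b) * math.sin(theta))  # ~0: |a x b| = |a||b| sin(theta)
```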
Consider the parallelogram defined by vectors a and b (see Fig 10). The scalar area is
|a| |b| sin θ. The vector area has the magnitude of the scalar area, and is normal to the plane of the
parallelogram, which means that it is perpendicular to both a and b. Clearly, the vector area is
given by
S = a×b    (8.3.10)
with the sense obtained from the right-hand grip rule by rotating a on to b.
Suppose that a force F is applied at position r (see Fig 11). The moment, or torque, about the
Fig 9
Fig 10
origin O is the product of the magnitude of the force and the length of the lever arm OQ. Thus, the
magnitude of the moment is |F| |r| sin θ. The direction of the moment is conventionally the
direction of the axis through O about which the force tries to rotate objects, in the sense determined by
the right-hand grip rule. It follows that the vector moment is given by
M = r×F    (8.3.11)
Paragraph 8.4 The scalar triple product
Consider three vectors a, b, and c. The scalar triple product is defined as a·b×c. Now,
b×c is the vector area of the parallelogram defined by b and c. So, a·b×c is the scalar
area of this parallelogram times the component of a in the direction of its normal. It follows that
a·b×c is the volume of the parallelepiped defined by vectors a, b, and c (see Fig 12). This
volume is independent of how the triple product is formed from a, b, and c, except that
a·b×c = −a·c×b    (8.4.1)
So, the ``volume'' is positive if a, b, and c form a right-handed set (i.e., if a lies above the
plane of b and c, in the sense determined from the right-hand grip rule by rotating b on to c)
and negative if they form a left-handed set. The triple product is unchanged if the dot and cross
product operators are interchanged:
a·b×c = a×b·c    (8.4.2)
The triple product is also invariant under any cyclic permutation of a, b, and c,
a·b×c = b·c×a = c·a×b    (8.4.3)
but any anti-cyclic permutation causes it to change sign,
a·b×c = −b·a×c    (8.4.4)
Fig 11
Fig 12
The scalar triple product is zero if any two of a, b, and c are parallel, or if a, b, and
c are co-planar.
If a, b, and c are non-coplanar, then any vector r can be written in terms of them:
r = α a + β b + γ c    (8.4.5)
Forming the dot product of this equation with b×c, we then obtain
r·b×c = α a·b×c    (8.4.6)
so
α = r·b×c / (a·b×c)    (8.4.7)
Analogous expressions can be written for β and γ.
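The decomposition (8.4.5)-(8.4.7) can be verified numerically (my own sketch; the vectors are arbitrary, and the β, γ formulas below are the cyclic analogues of (8.4.7), an assumption consistent with (8.4.3)):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def triple(a, b, c):   # a . (b x c)
    return dot(a, cross(b, c))

a, b, c = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 10.0)
r = (2.0, -1.0, 0.5)
t = triple(a, b, c)                  # non-zero, so a, b, c are non-coplanar
alpha = triple(r, b, c) / t          # (8.4.7)
beta = triple(r, c, a) / t           # cyclic analogue
gamma = triple(r, a, b) / t          # cyclic analogue
rec = tuple(alpha * ai + beta * bi + gamma * ci
            for ai, bi, ci in zip(a, b, c))   # reconstructs r
```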
Paragraph 8.5 The vector triple product
For three vectors a, b, and c, the vector triple product is defined as a×(b×c). The brackets
are important because a×(b×c) ≠ (a×b)×c. In fact, it can be demonstrated that
a×(b×c) = b (a·c) − c (a·b)    (8.5.1)
and
(a×b)×c = (a·c) b − (b·c) a    (8.5.2)
Let us try to prove the first of the above theorems. The left-hand side and the right-hand side are both
proper vectors, so if we can prove this result in one particular coordinate system then it must be true in
general. Let us take convenient axes such that the x-axis lies along b, and c lies in the x-y plane.
It follows that b = (b_x, 0, 0), c = (c_x, c_y, 0), and a = (a_x, a_y, a_z). The vector b×c is
directed along the z-axis: b×c = (0, 0, b_x c_y). It follows that a×(b×c) lies in the x-y plane:
a×(b×c) = (a_y b_x c_y, −a_x b_x c_y, 0). This is the left-hand side of (8.5.1) in our convenient axes. To
evaluate the right-hand side, we need a·c = a_x c_x + a_y c_y and a·b = a_x b_x. It follows that the
right-hand side is
RHS = (a_x c_x + a_y c_y)(b_x, 0, 0) − a_x b_x (c_x, c_y, 0)
    = (a_y c_y b_x, −a_x b_x c_y, 0) = LHS    (8.5.3)
which proves the theorem.
Pag 74 Lungu the same proof.
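Identity (8.5.1) can also be spot-checked numerically for arbitrary vectors (a small sketch of my own, not part of either proof):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

a, b, c = (0.3, -1.2, 2.0), (1.0, 0.5, -0.7), (2.2, 0.1, 0.9)
lhs = cross(a, cross(b, c))                                   # a x (b x c)
rhs = tuple(dot(a, c) * bi - dot(a, b) * ci                    # b(a.c) - c(a.b)
            for bi, ci in zip(b, c))
err = max(abs(x - y) for x, y in zip(lhs, rhs))
```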
Paragraph 8.6 Line integrals
Consider a two-dimensional function f(x, y) which is defined for all x and y. What is meant by the
integral of f along a given curve from P to Q in the x-y plane? We first draw out f as a function
of length l along the path (see Fig 13). The integral is then simply given by
∫_P^Q f(x, y) dl = Area under the curve    (8.6.1)
As an example of this, consider the integral of f(x, y) = x y between P and Q along the two routes
indicated in Fig 14. Along route 1 we have x = y, so dl = √2 dx. Thus,
∫_P^Q x y dl = ∫_0^1 x² √2 dx = √2/3    (8.6.2)
The integration along route 2 gives
∫_P^Q x y dl = ∫_0^1 x y dx |_{y=0} + ∫_0^1 x y dy |_{x=1} = 0 + ∫_0^1 y dy = 1/2    (8.6.3)
Note that the integral depends on the route taken between the initial and final points.
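The two route values, √2/3 and 1/2, can be reproduced with a simple midpoint rule (an added sketch; the grid size n is arbitrary):

```python
import math

n = 100_000
xs = [(k + 0.5) / n for k in range(n)]
# route 1: along x = y, dl = sqrt(2) dx, integrand x*y = x^2
route1 = sum(x * x for x in xs) * math.sqrt(2) / n
# route 2: along y = 0 (integrand vanishes), then up x = 1 (integrand = y)
route2 = 0.0 + sum(y for y in xs) / n
targets = (math.sqrt(2) / 3, 0.5)
```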
Fig 13
Fig 14
The most common type of line integral is that where the contributions from dx and dy are
evaluated separately, rather than through the path length dl:
∫_P^Q [f(x, y) dx + g(x, y) dy]    (8.6.4)
As an example of this, consider the integral
∫_P^Q [y³ dx + x dy]    (8.6.5)
along the two routes indicated in Fig 15. Along route 1 we have x = y + 1 and dx = dy, so
∫_P^Q = ∫_0^1 [y³ dy + (y+1) dy] = 7/4    (8.6.6)
Along route 2,
∫_P^Q = ∫_1^2 y³ dx |_{y=0} + ∫_0^1 x dy |_{x=2} = 2    (8.6.7)
Again, the integral depends on the path of integration.
Suppose that we have a line integral which does not depend on the path of integration. It follows that
∫_P^Q (f dx + g dy) = F(Q) − F(P)    (8.6.8)
for some function F. Given F(P) for one point P in the x-y plane, then
F(Q) = F(P) + ∫_P^Q (f dx + g dy)    (8.6.9)
defines F(Q) for all other points in the plane. We can then draw a contour map of F(x, y).
The line integral between points P and Q is simply the change in height in the contour map
between these two points:
∫_P^Q (f dx + g dy) = ∫_P^Q dF(x, y) = F(Q) − F(P)    (8.6.10)
Thus,
Fig 15
dF(x, y) = f(x, y) dx + g(x, y) dy    (8.6.11)
For instance, if F = x y³ then dF = y³ dx + 3 x y² dy and
∫_P^Q (y³ dx + 3 x y² dy) = [x y³]_P^Q    (8.6.12)
is independent of the path of integration.
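Path independence for F = x y³ can be confirmed numerically (a sketch; the two routes from (0,0) to (1,1) are my own choices, and both should give [x y³] = 1):

```python
n = 100_000
ts = [(k + 0.5) / n for k in range(n)]
# route A: the diagonal x = y = t, so y^3 dx + 3 x y^2 dy = (t^3 + 3 t * t^2) dt
route_a = sum(t ** 3 + 3 * t * t ** 2 for t in ts) / n
# route B: along y = 0 first (y^3 dx = 0), then up x = 1 (3 y^2 dy)
route_b = 0.0 + sum(3 * t ** 2 for t in ts) / n
```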
It is clear that there are two distinct types of line integral. Those which depend only on their endpoints
and not on the path of integration, and those which depend both on their endpoints and the integration
path. Later on, we shall learn how to distinguish between these two types.
Paragraph 8.7 Vector line integrals
A vector field is defined as a set of vectors associated with each point in space. For instance, the
velocity v( r) in a moving liquid (e.g., a whirlpool) constitutes a vector field. By analogy, a scalar
field is a set of scalars associated with each point in space. An example of a scalar field is the
temperature distribution T ( r) in a furnace.
Consider a general vector field A(r). Let dl = (dx, dy, dz) be the vector element of line length.
Vector line integrals often arise as (see p. 384 [4]):
∫_P^Q A·dl = ∫_P^Q (A_x i + A_y j + A_z k)·(dx i + dy j + dz k) = ∫_P^Q (A_x dx + A_y dy + A_z dz)    (8.7.1)
For instance, if A is a force then the line integral is the work done in going from P to Q.
As an example, consider the work done in a repulsive, inverse-square, central field, F = −r/r³.
The element of work done is dW = F·dl. Take P = (∞, 0, 0) and Q = (a, 0, 0). Route 1 is
along the x-axis, so
W = ∫_∞^a (−1/x²) dx = [1/x]_∞^a = 1/a    (8.7.2)
The second route is, firstly, around a large circle (r = constant) to the point (a, ∞, 0), and then parallel
to the y-axis. In the first part, no work is done, since F is perpendicular to dl. In the second
part,
W = −∫_∞^0 y dy / (a² + y²)^{3/2} = [−1/(y² + a²)^{1/2}]_0^∞ = 1/a    (8.7.3)
In this case, the integral is independent of the path. However, not all vector line integrals are path
independent.
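Both routes of this central-field example can be checked numerically (an added sketch; X stands in for infinity, so the agreement is only up to the ~1/X truncation error):

```python
a, X, n = 2.0, 1000.0, 200_000
# route 1: straight in along the x-axis; W = int_a^X dx / x^2 = 1/a - 1/X
dx = (X - a) / n
w1 = sum(1.0 / (a + (k + 0.5) * dx) ** 2 for k in range(n)) * dx
# route 2: big circle (no work), then in along x = a:
# W = int_0^X y dy / (a^2 + y^2)^(3/2)
dy = X / n
w2 = sum(((k + 0.5) * dy) / (a * a + ((k + 0.5) * dy) ** 2) ** 1.5
         for k in range(n)) * dy
```

Both values approach 1/a = 0.5 as X grows, independently of the route.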
Paragraph 8.8 Surface integrals
Let us take a surface S, which is not necessarily co-planar, and divide in up into (scalar) elements
6S
i
. Then

S
f ( x , y , z)dS=lim
6 S
i
-0
i
f ( x , y , z) 6S
i
(8.8.1)
is a surface integral. For instance, the volume of water in a lake of depth D( x , y) is
V =

D( x , y) dS (8.8.2)
To evaluate this integral we must split the calculation into two ordinary integrals. The volume in the
strip shown in Fig 16 is

x
1
x
2
D( x , y) dx
|
dy (8.8.3)
Note that the limits
x
1
and
x
2
depend on y. The total volume is the sum over all strips:
V =

y
1
y
2
dy

x
1
( y)
x
2
( y)
D( x , y)dx
|

S
D( x , y) dx dy (8.8.4)
Of course, the integral can be evaluated by taking the strips the other way around:
V =

x
1
x
2
dx

y
1
( x)
y
2
( x)
D( x , y) dy (8.8.5)
Interchanging the order of integration is a very powerful and useful trick. But great care must be taken
when evaluating the limits.
As an example, consider
Fig 16
∫_S x² y dx dy    (8.8.6)
where S is shown in Fig 17. Suppose that we evaluate the x integral first:
dy (∫_0^{1−y} x² y dx) = y dy [x³/3]_0^{1−y} = (y/3)(1−y)³ dy    (8.8.7)
Let us now evaluate the y integral:
∫_0^1 (y/3 − y² + y³ − y⁴/3) dy = 1/60    (8.8.8)
We can also evaluate the integral by interchanging the order of integration:
∫_0^1 x² dx ∫_0^{1−x} y dy = ∫_0^1 (x²/2)(1−x)² dx = 1/60    (8.8.9)
In some cases, a surface integral is just the product of two separate integrals. For instance,
∫_S x² y dx dy    (8.8.10)
where S is a unit square. This integral can be written
∫_0^1 dx ∫_0^1 x² y dy = (∫_0^1 x² dx)(∫_0^1 y dy) = (1/3)(1/2) = 1/6    (8.8.11)
since the limits are both independent of the other variable.
In general, when interchanging the order of integration, the most important part of the whole problem is
getting the limits of integration right. The only foolproof way of doing this is to draw a diagram.
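The triangle integral (8.8.6) and the unit-square case (8.8.10) can both be approximated on a grid (a rough midpoint sketch I add; the staircase treatment of the sloped boundary limits the accuracy):

```python
n = 1000
h = 1.0 / n
tri = 0.0      # over the triangle x >= 0, y >= 0, x + y <= 1  -> 1/60
square = 0.0   # over the unit square                          -> 1/6
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        square += x * x * y * h * h
        if x + y <= 1.0:
            tri += x * x * y * h * h
tri_target, sq_target = 1 / 60, 1 / 6
```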
Paragraph 8.9 Vector surface integrals
Surface integrals often occur during vector analysis. For instance, the rate of flow of a liquid of
Fig 17
velocity v through an infinitesimal surface of vector area dS is v·dS. The net rate of flow
through a surface S made up of lots of infinitesimal surfaces is
∫_S v·dS = lim_{dS→0} Σ [v cos θ dS]    (8.9.1)
where θ is the angle subtended between the normal to the surface and the flow velocity.
Analogously to line integrals, most surface integrals depend both on the surface and the rim. But some
(very important) integrals depend only on the rim, and not on the nature of the surface which spans it.
As an example of this, consider incompressible fluid flow between two surfaces S₁ and S₂
which end on the same rim. The volume between the surfaces is constant, so what goes in must come
out, and
∫_{S₁} v·dS = ∫_{S₂} v·dS    (8.9.2)
It follows that
∫ v·dS    (8.9.3)
depends only on the rim, and not on the form of surfaces S₁ and S₂.
Paragraph 8.10 Volume integrals
A volume integral takes the form
∫_V f(x, y, z) dV    (8.10.1)
where V is some volume, and dV = dx dy dz is a small volume element. The volume element is
sometimes written d³r, or even dτ. As an example of a volume integral, let us evaluate the
centre of gravity of a solid hemisphere of radius a (centred on the origin). The height of the centre
of gravity is given by
z̄ = ∫ z dV / ∫ dV    (8.10.2)
The bottom integral is simply the volume of the hemisphere, which is 2πa³/3. The top integral is
most easily evaluated in spherical polar coordinates, for which z = r cos θ
and dV = r² sin θ dr dθ dφ. Thus,
∫ z dV = ∫_0^a dr ∫_0^{π/2} dθ ∫_0^{2π} dφ (r cos θ) r² sin θ
       = ∫_0^a r³ dr ∫_0^{π/2} sin θ cos θ dθ ∫_0^{2π} dφ = πa⁴/4    (8.10.3)
giving
z̄ = (πa⁴/4)(3/(2πa³)) = 3a/8    (8.10.4)
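The 3a/8 result can be reproduced with a spherical-coordinate midpoint rule (added sketch; the grid sizes are arbitrary, and the trivial φ integral is taken analytically as 2π):

```python
import math

a, n = 1.0, 400
dr, dth = a / n, (math.pi / 2) / n
top = 0.0
for i in range(n):
    r = (i + 0.5) * dr
    for j in range(n):
        th = (j + 0.5) * dth
        # z dV = (r cos th) r^2 sin th dr dth dphi; the phi integral gives 2 pi
        top += r * math.cos(th) * r * r * math.sin(th) * dr * dth * 2 * math.pi
zbar = top / (2 * math.pi * a ** 3 / 3)
target = 3 * a / 8
```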
Paragraph 8.11 Gradient
A one-dimensional function f(x) has a gradient df/dx which is defined as the slope of the
tangent to the curve at x. We wish to extend this idea to cover scalar fields in two and three
dimensions.
Consider a two-dimensional scalar field h(x, y), which is (say) the height of a hill. Let
dl = (dx, dy) be an element of horizontal distance. Consider dh/dl, where dh is the change
in height after moving an infinitesimal distance dl. This quantity is somewhat like the one-dimensional
gradient, except that dh depends on the direction of dl, as well as its magnitude. In
the immediate vicinity of some point P, the slope reduces to an inclined plane (see Fig 18). The largest
value of dh/dl is straight up the slope. For any other direction
dh/dl = (dh/dl)_max cos θ    (8.11.1)
Let us define a two-dimensional vector, grad h, called the gradient of h, whose magnitude is
(dh/dl)_max, and whose direction is the direction up the steepest slope. Because of the cos θ
property, the component of grad h in any direction equals dh/dl for that direction.
The component of dh/dl in the x-direction can be obtained by plotting out the profile of h at
constant y, and then finding the slope of the tangent to the curve at given x. This quantity is
known as the partial derivative of h with respect to x at constant y, and is denoted
(∂h/∂x)_y. Likewise, the gradient of the profile at constant x is written (∂h/∂y)_x. Note that
Fig 18
the subscripts denoting constant-x and constant-y are usually omitted, unless there is any ambiguity. It
follows that in component form
grad h = (∂h/∂x, ∂h/∂y)    (8.11.2)
Now, the equation of the tangent plane at P = (x₀, y₀) is
h_T(x, y) = h(x₀, y₀) + α(x − x₀) + β(y − y₀)    (8.11.3)
This has the same local gradients as h(x, y), so
α = ∂h/∂x,  β = ∂h/∂y    (8.11.4)
by differentiation of the above. For small dx = x − x₀ and dy = y − y₀, the function h is
coincident with the tangent plane. We have
dh = (∂h/∂x) dx + (∂h/∂y) dy    (8.11.5)
But grad h = (∂h/∂x, ∂h/∂y) and dl = (dx, dy), so
dh = grad h · dl    (8.11.6)
Incidentally, the above equation demonstrates that grad h is a proper vector, since the left-hand side
is a scalar, and, according to the properties of the dot product, the right-hand side is also a scalar,
provided that d l and grad h are both proper vectors ( d l is an obvious vector, because it is
directly derived from displacements).
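Relation (8.11.6) can be checked for a concrete field (a sketch; h = x²y, the base point, and the displacement are my own choices):

```python
def h(x, y):
    return x * x * y

x0, y0 = 1.2, -0.7
dlx, dly = 3e-6, -2e-6                  # a small displacement dl
grad_h = (2 * x0 * y0, x0 * x0)         # (dh/dx, dh/dy) for h = x^2 y
dh_exact = h(x0 + dlx, y0 + dly) - h(x0, y0)
dh_linear = grad_h[0] * dlx + grad_h[1] * dly   # grad h . dl
err = abs(dh_exact - dh_linear)         # second order in |dl|
```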
In general, ∫_P^Q A·dl depends on the path, but for some special vector fields the integral is path
independent. Such fields are called conservative fields. It can be shown that if A is a conservative
field then A = grad φ for some scalar field φ. The proof of this is straightforward. Keeping P
fixed, we have
∫_P^Q A·dl = V(Q)    (8.11.7)
where V(Q) is a well-defined function, due to the path-independent nature of the line integral.
Consider moving the position of the end point by an infinitesimal amount dx in the x-direction. We
have
V(Q + dx) = V(Q) + ∫_Q^{Q+dx} A·dl = V(Q) + A_x dx    (8.11.8)
Hence,
∂V/∂x = A_x    (8.11.9)
with analogous relations for the other components of A. It follows that
A = grad V    (8.11.10)
In physics, the force due to gravity is a good example of a conservative field. If A is a force, then
∫ A·dl is the work done in traversing some path. If A is conservative then
∮ A·dl = 0    (8.11.11)
where ∮ corresponds to the line integral around some closed loop. The fact that zero net work is
done in going around a closed loop is equivalent to the conservation of energy (this is why conservative
fields are called ``conservative''). A good example of a non-conservative field is the force due to
friction. Clearly, a frictional system loses energy in going around a closed cycle, so ∮ A·dl ≠ 0. It
is useful to define the vector operator
∇ ≡ (∂/∂x, ∂/∂y, ∂/∂z)    (8.11.12)
which is usually called the grad or del operator. This operator acts on everything to its right in an
expression, until the end of the expression or a closing bracket is reached. For instance,
grad f = ∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z)    (8.11.13)
For two scalar fields φ and ψ,
grad(φψ) = φ grad ψ + ψ grad φ    (8.11.14)
can be written more succinctly as
∇(φψ) = φ∇ψ + ψ∇φ    (8.11.15)
Paragraph 8.12 Divergence
Let us start with a vector field A. Consider ∮_S A·dS over some closed surface S, where
dS denotes an outward pointing surface element. This surface integral is usually called the flux of
A out of S. If A is the velocity of some fluid, then ∮_S A·dS is the rate of flow of material
out of S.
If A is constant in space then it is easily demonstrated that the net flux out of S is zero,
∮ A·dS = A·∮ dS = A·S = 0    (8.12.1)
since the vector area S of a closed surface is zero.
Fig 19
Suppose, now, that A is not uniform in space. Consider a very small rectangular volume over
which A hardly varies. The contribution to ∮_S A·dS from the two faces normal to the x-axis is
A_x(x+dx) dy dz − A_x(x) dy dz = (∂A_x/∂x) dx dy dz = (∂A_x/∂x) dV    (8.12.2)
where dV = dx dy dz is the volume element (see Fig 19). There are analogous contributions from
the sides normal to the y- and z-axes, so the total of all the contributions is
∮ A·dS = (∂A_x/∂x + ∂A_y/∂y + ∂A_z/∂z) dV    (8.12.3)
The divergence of a vector field is defined
div A = ∇·A = ∂A_x/∂x + ∂A_y/∂y + ∂A_z/∂z    (8.12.4)
Divergence is a good scalar (i.e., it is coordinate independent), since it is the dot product of the vector
operator ∇ with A. The formal definition of div A is
div A = lim_{dV→0} ∮ A·dS / dV    (8.12.5)
This definition is independent of the shape of the infinitesimal volume element.
One of the most important results in vector field theory is the so-called divergence theorem or Gauss'
theorem. This states that for any volume V surrounded by a closed surface S,
∮_S A·dS = ∫_V div A dV    (8.12.6)
where dS is an outward pointing surface element. The proof is very straightforward. We divide up
the volume into lots of very small cubes, and sum ∮ A·dS over all of the surfaces. The
Fig 20
contributions from the interior surfaces cancel out, leaving just the contribution from the outer surface
(see Fig 20). We can use Eq. (8.12.3) for each cube individually. This tells us that the summation is
equivalent to ∫ div A dV over the whole volume. Thus, the integral of A·dS over the outer
surface is equal to the integral of div A over the whole volume, which proves the divergence
theorem.
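The divergence theorem can be illustrated on a unit cube with a sample field of my own choosing, A = (x², y², z²), for which div A = 2x + 2y + 2z and both sides equal 3:

```python
n = 100
h = 1.0 / n
pts = [(i + 0.5) * h for i in range(n)]

def A(x, y, z):
    # sample field A = (x^2, y^2, z^2)
    return (x * x, y * y, z * z)

# volume integral of div A = 2x + 2y + 2z over the unit cube
vol = sum(2 * (x + y + z) * h ** 3 for x in pts for y in pts for z in pts)

# outward flux through the six faces of the cube
flux = 0.0
for u in pts:
    for v in pts:
        flux += (A(1.0, u, v)[0] - A(0.0, u, v)[0]) * h * h   # x = 1 and x = 0 faces
        flux += (A(u, 1.0, v)[1] - A(u, 0.0, v)[1]) * h * h   # y faces
        flux += (A(u, v, 1.0)[2] - A(u, v, 0.0)[2]) * h * h   # z faces
```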
Now, for a vector field with div A = 0,
∮_S A·dS = 0    (8.12.7)
for any closed surface S. So, for two surfaces on the same rim (see Fig 21),
∫_{S₁} A·dS = ∫_{S₂} A·dS    (8.12.8)
Thus, if div A = 0 then the surface integral depends on the rim but not the nature of the surface
which spans it. On the other hand, if div A ≠ 0 then the integral depends on both the rim and the
surface.
Paragraph 8.13 Curl
Consider a vector field A, and a loop which lies in one plane. The integral of A around this
loop is written ∮ A·dl, where dl is a line element of the loop. If A is a conservative field
then A = grad φ and ∮ A·dl = 0 for all loops. In general, for a non-conservative field,
∮ A·dl ≠ 0.
Fig 21
For a small loop we expect ∮ A·dl to be proportional to the area of the loop. Moreover, for a
fixed-area loop we expect ∮ A·dl to depend on the orientation of the loop. One particular orientation
will give the maximum value: ∮ A·dl = I_max. If the loop subtends an angle θ with this optimum
orientation then we expect I = I_max cos θ. Let us introduce the vector field curl A whose
magnitude is
|curl A| = lim_{dS→0} ∮ A·dl / dS    (8.13.1)
for the orientation giving I_max. Here, dS is the area of the loop. The direction of curl A is
perpendicular to the plane of the loop, when it is in the orientation giving I_max, with the sense given
by the right-hand grip rule.
Let us now express curl A in terms of the components of A. First, we shall evaluate ∮ A·dl
around a small rectangle in the y-z plane (see Fig 22). The contribution from sides 1 and 3 is
A_z(y+dy) dz − A_z(y) dz = (∂A_z/∂y) dy dz    (8.13.2)
The contribution from sides 2 and 4 is
−A_y(z+dz) dy + A_y(z) dy = −(∂A_y/∂z) dy dz    (8.13.3)
So, the total of all contributions gives
∮ A·dl = (∂A_z/∂y − ∂A_y/∂z) dS    (8.13.4)
where dS = dy dz is the area of the loop.
Consider a non-rectangular (but still small) loop in the y-z plane. We can divide it into rectangular
elements, and form ∮ A·dl over all the resultant loops. The interior contributions cancel, so we are
just left with the contribution from the outer loop. Also, the area of the outer loop is the sum of all the
areas of the inner loops. We conclude that
Fig 22
∮ A·dl = (∂A_z/∂y − ∂A_y/∂z) dS_x    (8.13.5)
is valid for a small loop dS = (dS_x, 0, 0) of any shape in the y-z plane. Likewise, we can show that
if the loop is in the x-z plane then dS = (0, dS_y, 0) and
∮ A·dl = (∂A_x/∂z − ∂A_z/∂x) dS_y    (8.13.6)
Finally, if the loop is in the x-y plane then dS = (0, 0, dS_z) and
∮ A·dl = (∂A_y/∂x − ∂A_x/∂y) dS_z    (8.13.7)
Imagine an arbitrary loop of vector area dS = (dS_x, dS_y, dS_z). We can construct this out of three
loops in the x-, y-, and z-directions, as indicated in Fig 23. If we form the line integral around all three
loops then the interior contributions cancel, and we are left with the line integral around the original
loop. Thus,
∮ A·dl = ∮ A·dl₁ + ∮ A·dl₂ + ∮ A·dl₃    (8.13.8)
giving
∮ A·dl = curl A · dS = |curl A| dS cos θ    (8.13.9)


where
curl A = (∂A_z/∂y − ∂A_y/∂z, ∂A_x/∂z − ∂A_z/∂x, ∂A_y/∂x − ∂A_x/∂y)    (8.13.10)
Note that
curl A = ∇×A    (8.13.11)
Another important result of vector field theory is the curl theorem or Stokes' theorem,
Fig 23
∮_C A·dl = ∫_S curl A · dS    (8.13.12)
for some (non-planar) surface S bounded by a rim C. This theorem can easily be proved by splitting the
loop up into many small rectangular loops, and forming the integral around all of the resultant loops.
All of the contributions from the interior loops cancel, leaving just the contribution from the outer rim.
Making use of Eq. (8.13.9) for each of the small loops, we can see that the contribution from all of the
loops is also equal to the integral of curl A · dS across the whole surface. This proves the theorem.
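Stokes' theorem can be illustrated with a field of my own choosing, A = (−y, x, 0), on the unit circle: curl A = (0, 0, 2), so the circulation and the surface integral over the unit disc are both 2π.

```python
import math

n = 100_000
circ = 0.0
for k in range(n):
    t = (k + 0.5) * 2 * math.pi / n
    x, y = math.cos(t), math.sin(t)          # point on the unit circle
    dlx, dly = -math.sin(t), math.cos(t)     # dl/dt along the circle
    # A . dl with A = (-y, x, 0); the integrand works out to exactly 1
    circ += ((-y) * dlx + x * dly) * (2 * math.pi / n)
surface = 2 * math.pi * 1.0 ** 2             # curl_z = 2 times the disc area pi
```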
Finally, it can be shown that
curl(curl A) = grad(div A) − ∇²A    (8.13.13)
where
∇²A = (∇²A_x, ∇²A_y, ∇²A_z)    (8.13.14)
It should be emphasized, however, that the above result is only valid in Cartesian coordinates.
Paragraph 8.14 Notations: Hamiltonian operator (Del)
See p. 48 [16]:
It is useful to define the vector operator
∇ ≡ (∂/∂x, ∂/∂y, ∂/∂z)    (8.14.1)
which is usually called the grad or del or nabla or Hamiltonian operator. This operator acts on
everything to its right in an expression, until the end of the expression or a closing bracket is reached.
For instance,
grad f = ∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z) = e_x ∂f/∂x + e_y ∂f/∂y + e_z ∂f/∂z    (8.14.2)
For two scalar fields φ and ψ,
grad(φψ) = φ grad ψ + ψ grad φ    (8.14.3)
can be written more succinctly as
∇(φψ) = φ∇ψ + ψ∇φ    (8.14.4)
∇φ = grad φ    (8.14.5)
div A = ∇·A = ∂A_x/∂x + ∂A_y/∂y + ∂A_z/∂z    (8.14.6)
$\operatorname{curl}\mathbf{A} = \nabla\times\mathbf{A} = [\nabla,\mathbf{A}] = \begin{vmatrix} \mathbf{e}_x & \mathbf{e}_y & \mathbf{e}_z \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ A_x & A_y & A_z \end{vmatrix} = \left(\frac{\partial A_z}{\partial y}-\frac{\partial A_y}{\partial z},\ \frac{\partial A_x}{\partial z}-\frac{\partial A_z}{\partial x},\ \frac{\partial A_y}{\partial x}-\frac{\partial A_x}{\partial y}\right)$ (8.14.7)
$\operatorname{curl}(\operatorname{curl}\mathbf{A}) = [\nabla,[\nabla,\mathbf{A}]] = \nabla(\nabla\cdot\mathbf{A}) - (\nabla\cdot\nabla)\mathbf{A} = \operatorname{grad}(\operatorname{div}\mathbf{A}) - \nabla^2\mathbf{A}$ (8.14.8)
where we have used
$[\mathbf{a},[\mathbf{b},\mathbf{c}]] = \mathbf{b}(\mathbf{a}\cdot\mathbf{c}) - \mathbf{c}(\mathbf{a}\cdot\mathbf{b})$ (8.14.9)
Chapter 9 Electromagnetism
Paragraph 9.1 General trick for polarisation, magnetisation and current density
9.1.1 Energy flux
See [16] pag289 volume II:
For this purpose, let us divide the surface into elementary areas $dS$. During the time $dt$, the energy $dW$ confined in the oblique cylinder shown in Fig 24 will pass through area $dS$. The volume of this cylinder is
$dV = v\, dt\, dS \cos\alpha$ (9.1.1)
It contains the energy
Fig 24
$dW = w\, dV = w v\, dt\, dS \cos\alpha$ (9.1.2)
(here $w$ is the instantaneous value of the energy density where area $dS$ is). Introducing the energy flux density $\mathbf{j} = w\mathbf{v}$ and taking into account that
$w v\, dS \cos\alpha = j\, dS \cos\alpha = \mathbf{j}\cdot d\mathbf{S}$ (9.1.3)
($d\mathbf{S} = \mathbf{n}\, dS$; see Fig 24), we can write $dW = \mathbf{j}\cdot d\mathbf{S}\, dt$. Hence, we obtain the following equation for the energy flux:
$d\Phi = \frac{dW}{dt} = \mathbf{j}\cdot d\mathbf{S}$ (9.1.4)
The total energy flux through a surface equals the sum of the elementary fluxes:
$\Phi = \int_S \mathbf{j}\cdot d\mathbf{S}$ (9.1.5)
9.1.2 Polarization
The dipole moment of unit volume,
$\mathbf{P} = \frac{\sum_{\Delta V} \mathbf{p}_i}{\Delta V}$ (9.1.6)
is called the polarization of a dielectric.
See [16] pag 67 volume II:
Introducing the density of bound charges $\rho'$, we can write:
$q'_{surf} = \int_V \rho'\, dV$ (9.1.7)
Let us imagine a closed surface S inside the dielectric. When the field is switched on, a bound charge q'
will intersect this surface and emerge from it. This charge is:
Fig 25
$q_{em} = \oint_S dq_{em}$ (9.1.8)
Let us find $q_{em}$. The charge crossing an area element is
$\Delta q = e N = e n \Delta V$ (9.1.9)
where $e$ is the elementary charge, $\Delta V$ the volume, and $n$ the concentration, i.e. the number of charges in unit volume. The charges cross an oblique cylinder of volume (see Fig 25):
$\Delta V = (l_1 + l_2)\, \Delta S \cos\alpha$ (9.1.10)
Then we have
$\Delta q = e n (l_1 + l_2)\, \Delta S \cos\alpha$ (9.1.11)
As a result of this displacement each pair of charges acquires the dipole moment
$p = e l = e(l_1 + l_2)$ (9.1.12)
From (9.1.6) we have:
$e(l_1 + l_2)\, n = e l n = p n = P$ (9.1.13)
By substitution of (9.1.13) in (9.1.11) we get:
$\Delta q = P\, \Delta S \cos\alpha = \mathbf{P}\cdot \Delta\mathbf{S}$ (9.1.14)
By charge conservation, the bound charge remaining inside the surface is minus the charge that has emerged, so it follows that:
$q'_{surf} = -q_{em} = -\oint_S dq_{em} = -\oint_S \mathbf{P}\cdot d\mathbf{S} = -\Phi$ (9.1.15)
($\Phi$ is the flux of the vector $\mathbf{P}$ through surface S).
From (9.1.7) and (9.1.15) we have
$\int_V \rho'\, dV = -\oint_S \mathbf{P}\cdot d\mathbf{S}$ (9.1.16)
Using the Gauss theorem we get
$\int_V \rho'\, dV = -\int_V \nabla\cdot\mathbf{P}\, dV$ (9.1.17)
And finally, for the density of bound charges in differential form:
$\rho' = -\nabla\cdot\mathbf{P}$ (9.1.18)
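The step from (9.1.16) to (9.1.18) can be sanity-checked on a concrete region: for any smooth polarization field, the volume integral of $-\nabla\cdot\mathbf{P}$ over a unit cube equals minus the outward flux of $\mathbf{P}$. A minimal sketch (sympy assumed available; the field P is an arbitrary choice, not from the text):

```python
# Check: integral of -div P over the unit cube = minus the flux of P out of it.
import sympy as sp

x, y, z = sp.symbols('x y z')
P = sp.Matrix([x * y, y**2, x * z])  # hypothetical polarization field

div_P = sp.diff(P[0], x) + sp.diff(P[1], y) + sp.diff(P[2], z)
q_volume = sp.integrate(-div_P, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Outward flux of P through the six faces of the unit cube
flux = (
    sp.integrate(P[0].subs(x, 1) - P[0].subs(x, 0), (y, 0, 1), (z, 0, 1))
    + sp.integrate(P[1].subs(y, 1) - P[1].subs(y, 0), (x, 0, 1), (z, 0, 1))
    + sp.integrate(P[2].subs(z, 1) - P[2].subs(z, 0), (x, 0, 1), (y, 0, 1))
)
print(q_volume, -flux)  # the two values agree
```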
9.1.3 Magnetisation
If there are N such atoms or molecules per unit volume then the magnetization $\mathbf{M}$ (i.e., the magnetic dipole moment per unit volume) is given by $\mathbf{M} = N \langle \mathbf{m} \rangle$. More generally,
$\mathbf{M}(\mathbf{r}) = \sum_i N_i \langle \mathbf{m}_i \rangle$ (9.1.19)
where $\langle \mathbf{m}_i \rangle$ is the average magnetic dipole moment of the i-th type of molecule in the vicinity of point $\mathbf{r}$, and $N_i$ is the average number of such molecules per unit volume at $\mathbf{r}$.
Consider a general medium which is made up of molecules which are polarizable and possess a net
magnetic moment. It is easily demonstrated that any circulation in the magnetization field M(r)
gives rise to an effective current density $\mathbf{j}_m$ in the medium.
Consider (see Fig. 26) the molecular currents whose centres are inside an oblique cylinder of volume
$dV = S \cos\alpha\, dl$ (9.1.20)
(where S is the area enclosed by a separate molecular current). If n is the number of molecules in unit volume, then the total current enclosed by the element $d\mathbf{l}$ is
$dI = I_{mol}\, n\, S \cos\alpha\, dl$ (9.1.21)
The product
$m_{mol} = I_{mol}\, S$ (9.1.22)
equals the magnetic moment $m_{mol}$ of an individual molecular current. Hence, the expression
$M = I_{mol}\, S\, n$ (9.1.23)
is the magnetic moment of unit volume, i.e. it gives the magnitude of the vector $\mathbf{M}$, while
$M \cos\alpha = I_{mol}\, S\, n \cos\alpha$ (9.1.24)
gives the projection of the vector $\mathbf{M}$ onto the direction of the element $d\mathbf{l}$. Thus, the total molecular current enclosed by the element $d\mathbf{l}$ is $\mathbf{M}\cdot d\mathbf{l}$, while the sum of the molecular currents enclosed by the entire loop is
$I' = \int_S \mathbf{j}_{mol}\cdot d\mathbf{S} = \oint \mathbf{M}\cdot d\mathbf{l}$ (9.1.25)
Transforming the right-hand side according to Stokes' theorem we have:
Fig. 26
$\int_S \mathbf{j}_{mol}\cdot d\mathbf{S} = \int_S [\nabla,\mathbf{M}]\cdot d\mathbf{S}$ (9.1.26)
Or in differential form:
$\mathbf{j}_{mol} = \nabla\times\mathbf{M}$ (9.1.27)
9.1.4 Current density:
This vector numerically equals the current dI through the area dS:
$j = \frac{dI}{dS}$ (9.1.28)
See [16] pag 289 volume II:
For this purpose, let us divide the surface into elementary areas $dS$. During the time $dt$, the charge $dq$ confined in the oblique cylinder shown in Fig 27 will pass through area $dS$. The volume of this cylinder is
$dV = v\, dt\, dS \cos\alpha$ (9.1.29)
It contains the charge
$dq = \rho\, dV = \rho v\, dt\, dS \cos\alpha$ (9.1.30)
(here $\rho$ is the instantaneous value of the charge density where area $dS$ is). Taking into account that
$\rho v\, dS \cos\alpha = j\, dS \cos\alpha = \mathbf{j}\cdot d\mathbf{S}$ (9.1.31)
($d\mathbf{S} = \mathbf{n}\, dS$; see Fig 27), we can write $dq = \mathbf{j}\cdot d\mathbf{S}\, dt$. Hence, we obtain the following equation for the current:
Fig 27
$dI = \frac{dq}{dt} = \mathbf{j}\cdot d\mathbf{S}$ (9.1.32)
The total current through a surface equals the sum of the elementary currents:
$I = \int_S \mathbf{j}\cdot d\mathbf{S}$ (9.1.33)
9.1.5 Continuity equation
The expression $\oint_S \mathbf{j}\cdot d\mathbf{S}$ gives the charge emerging in unit time from the volume V enclosed by surface S. Owing to charge conservation, this quantity must equal the rate of diminishing of the charge q contained in the given volume:
$\oint_S \mathbf{j}\cdot d\mathbf{S} = -\frac{dq}{dt}$ (9.1.34)
Let us transform the left-hand side according to the Gauss theorem:
$\int_V \nabla\cdot\mathbf{j}\, dV = -\frac{dq}{dt} = -\frac{d}{dt}\int_V \rho\, dV = -\int_V \frac{\partial \rho}{\partial t}\, dV$ (9.1.35)
So the continuity equation in differential form is:
$\nabla\cdot\mathbf{j} = -\frac{\partial \rho}{\partial t}$ (9.1.36)
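The continuity equation (9.1.36) can be verified in one dimension for a convecting charge blob; a minimal sketch (sympy assumed available; the Gaussian density and constant drift speed v are arbitrary choices, not from the text):

```python
# 1D continuity check: for rho(x - v t) drifting at speed v with j = rho * v,
# d(j)/dx = -d(rho)/dt holds identically.
import sympy as sp

x, t, v = sp.symbols('x t v')
rho = sp.exp(-(x - v * t) ** 2)   # hypothetical drifting charge density
j = rho * v                       # convection current density

lhs = sp.diff(j, x)               # div j (one dimension)
rhs = -sp.diff(rho, t)            # -d rho / dt
print(sp.simplify(lhs - rhs))     # 0
```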
Paragraph 9.2 Polarization
The terrestrial environment is characterized by dielectric media (e.g., air, water) which are, for the most
part, electrically neutral, since they are made up of neutral atoms and molecules. However, if these
atoms and molecules are placed in an electric field then they tend to polarize. Suppose that when a
given neutral molecule is placed in an electric field E , the centre of charge of its constituent
electrons (whose total charge is q ) is displaced by a distance d with respect to the centre of charge
of its constituent atomic nucleus. The dipole moment of the molecule is defined as $\mathbf{p} = q\mathbf{d}$. If there are N such molecules per unit volume then the electric polarization $\mathbf{P}$ (i.e., the dipole moment per unit volume) is given by $\mathbf{P} = N \langle \mathbf{p} \rangle$. More generally,
$\mathbf{P}(\mathbf{r}) = \sum_i N_i \langle \mathbf{p}_i \rangle$ (9.2.1)
where $\langle \mathbf{p}_i \rangle$ is the average dipole moment of the i-th type of molecule in the vicinity of point $\mathbf{r}$, and $N_i$ is the average number of such molecules per unit volume at $\mathbf{r}$.
Consider an infinitesimal cube of dielectric material with x-coordinates between x and x+dx ,
y-coordinates between y and y+dy , and z-coordinates between z and z+dz . Suppose
that the dielectric consists of electrically neutral polar molecules, of varying number density N ( r) ,
whose electrons, charge q, displace a constant distance d from the nuclei, charge -q. Thus, the
dipole moment per unit volume is $\mathbf{P}(\mathbf{r}) = N(\mathbf{r})\, q\, \mathbf{d}$. Due to the polarization of the molecules, a net charge $N(x, y, z)\, q\, d_x\, dy\, dz$ enters the bottom face of the cube, perpendicular to the x-axis, whilst a net charge $N(x+dx, y, z)\, q\, d_x\, dy\, dz$ leaves the top face. Hence, the net charge acquired by the cube due to molecular polarization in the x-direction is
$dq = -N(x+dx, y, z)\, q\, d_x\, dy\, dz + N(x, y, z)\, q\, d_x\, dy\, dz = -\frac{\partial N(x,y,z)}{\partial x}\, q\, d_x\, dx\, dy\, dz = -\frac{\partial P_x(x,y,z)}{\partial x}\, dx\, dy\, dz$
There are analogous contributions due to polarization in the y- and z-directions. Hence, the net charge acquired by the cube due to molecular polarization is
$dq = -\left(\frac{\partial P_x(x,y,z)}{\partial x} + \frac{\partial P_y(x,y,z)}{\partial y} + \frac{\partial P_z(x,y,z)}{\partial z}\right) dx\, dy\, dz = -(\nabla\cdot\mathbf{P})\, dx\, dy\, dz$
Thus, it follows that the charge density acquired by the cube due to molecular polarization is simply $-\nabla\cdot\mathbf{P}$.
As explained above, it is easily demonstrated that any divergence of the polarization field $\mathbf{P}(\mathbf{r})$ of a dielectric medium gives rise to an effective charge density $\rho_b$ in the medium, where
$\rho_b = -\nabla\cdot\mathbf{P}$ (9.2.2)
This charge density is attributable to bound charges (i.e., charges which arise from the polarization of neutral atoms), and is usually distinguished from the charge density $\rho_f$ due to free charges, which represents a net surplus or deficit of electrons in the medium. Thus, the total charge density $\rho$ in the medium is
$\rho = \rho_f + \rho_b$ (9.2.3)
It must be emphasized that both terms in this equation represent real physical charge. Nevertheless, it is
useful to make the distinction between bound and free charges, especially when it comes to working
out the energy associated with electric fields in dielectric media.
Gauss' law takes the differential form
$\nabla\cdot\mathbf{E} = \frac{\rho}{\epsilon_0} = \frac{\rho_f + \rho_b}{\epsilon_0} = \frac{\rho_f - \nabla\cdot\mathbf{P}}{\epsilon_0}$ (9.2.4)
This expression can be rearranged to give
$\nabla\cdot(\epsilon_0 \mathbf{E} + \mathbf{P}) = \rho_f$ (9.2.5)
or
$\nabla\cdot\mathbf{D} = \rho_f$ (9.2.6)
where
$\mathbf{D} = \epsilon_0 \mathbf{E} + \mathbf{P}$ (9.2.7)
is termed the electric displacement, and has the same dimensions as P (dipole moment per unit
volume). Gauss' theorem tells us that
$\oint_S \mathbf{D}\cdot d\mathbf{S} = \int_V \rho_f\, dV$ (9.2.8)
In other words, the flux of D out of some closed surface S is equal to the total free charge enclosed
within that surface. Unlike the electric field E (which is the force acting on a unit charge), or the
polarization P (the dipole moment per unit volume), the electric displacement D has no clear
physical meaning. The only reason for introducing this quantity is that it enables us to calculate electric
fields in the presence of dielectric materials without first having to know the distribution of bound
charges. However, this is only possible if we have a constitutive relation connecting E and D . It
is conventional to assume that the induced polarization $\mathbf{P}$ is directly proportional to the electric field $\mathbf{E}$, so that
$\mathbf{P} = \epsilon_0 \chi_e \mathbf{E}$ (9.2.9)
where $\chi_e$ is termed the electric susceptibility of the medium. It follows that
$\mathbf{D} = \epsilon_0 \epsilon \mathbf{E}$ (9.2.10)
where
$\epsilon = 1 + \chi_e$ (9.2.11)
is termed the dielectric constant or relative permittivity of the medium. (Likewise, $\epsilon_0$ is termed the permittivity of free space.) Note that $\epsilon$ is dimensionless. It follows that
$\nabla\cdot\mathbf{E} = \frac{\rho_f}{\epsilon_0\, \epsilon}$ (9.2.12)
Thus, the electric fields produced by free charges in a uniform dielectric medium are analogous to those produced by the same charges in a vacuum, except that they are reduced by a factor $\epsilon$. This reduction can be understood in terms of a polarization of the atoms or molecules of the dielectric medium that produces electric fields in opposition to those generated by the free charges. One immediate consequence of this is that the capacitance of a capacitor is increased by a factor $\epsilon$ if the empty space between the electrodes is filled with a dielectric medium of dielectric constant $\epsilon$ (assuming that fringing fields can be neglected).
Paragraph 9.3 Magnetization
All matter is built up out of atoms, and each atom consists of electrons in motion. The currents
associated with this motion are termed atomic currents. Each atomic current is a tiny closed circuit of
atomic dimensions, and may therefore be appropriately described as a magnetic dipole. If the atomic
currents of a given atom all flow in the same plane then the atomic dipole moment is directed normal to
the plane (in the sense given by the right-hand rule), and its magnitude is the product of the total
circulating current and the area of the current loop. More generally, if $\mathbf{j}(\mathbf{r})$ is the atomic current density at the point $\mathbf{r}$ then the magnetic moment of the atom is
$\mathbf{m} = \frac{1}{2}\int \mathbf{r}\times\mathbf{j}\, d^3 r$ (9.3.1)
where the integral is over the volume of the atom. If there are N such atoms or molecules per unit volume then the magnetization $\mathbf{M}$ (i.e., the magnetic dipole moment per unit volume) is given by $\mathbf{M} = N \langle \mathbf{m} \rangle$. More generally,
$\mathbf{M}(\mathbf{r}) = \sum_i N_i \langle \mathbf{m}_i \rangle$ (9.3.2)
where $\langle \mathbf{m}_i \rangle$ is the average magnetic dipole moment of the i-th type of molecule in the vicinity of point $\mathbf{r}$, and $N_i$ is the average number of such molecules per unit volume at $\mathbf{r}$.
Consider a general medium which is made up of molecules which are polarizable and possess a net
magnetic moment. It is easily demonstrated that any circulation in the magnetization field M(r)
gives rise to an effective current density $\mathbf{j}_m$ in the medium. In fact,
$\mathbf{j}_m = \nabla\times\mathbf{M}$ (9.3.3)
This current density is called the magnetization current density, and is usually distinguished from the true current density, $\mathbf{j}_t$, which represents the convection of free charges in the medium. In fact, there is a third type of current called a polarization current, which is due to the apparent convection of bound charges. It is easily demonstrated that the polarization current density, $\mathbf{j}_p$, is given by
$\mathbf{j}_p = \frac{\partial \mathbf{P}}{\partial t}$ (9.3.4)
Thus, the total current density, $\mathbf{j}$, in the medium takes the form
$\mathbf{j} = \mathbf{j}_t + \nabla\times\mathbf{M} + \frac{\partial \mathbf{P}}{\partial t}$ (9.3.5)
It must be emphasized that all terms on the right-hand side of the above equation represent real
physical currents, although only the first term is due to the motion of real charges (over more than
atomic dimensions).
The differential form of Ampère's law is
$\nabla\times\mathbf{B} = \mu_0 \mathbf{j} + \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t}$ (9.3.6)
which can also be written
$\nabla\times\mathbf{B} = \mu_0 \mathbf{j}_t + \mu_0 \nabla\times\mathbf{M} + \mu_0 \frac{\partial \mathbf{D}}{\partial t}$ (9.3.7)
where use has been made of the definition $\mathbf{D} = \epsilon_0 \mathbf{E} + \mathbf{P}$. The above expression can be rearranged to give
$\nabla\times\mathbf{H} = \mathbf{j}_t + \frac{\partial \mathbf{D}}{\partial t}$ (9.3.8)
where
$\mathbf{H} = \frac{\mathbf{B}}{\mu_0} - \mathbf{M}$ (9.3.9)
is termed the magnetic intensity, and has the same dimensions as $\mathbf{M}$ (i.e., magnetic dipole moment per unit volume). In a steady-state situation, Stokes' theorem tells us that
$\oint_C \mathbf{H}\cdot d\mathbf{l} = \int_S \mathbf{j}_t\cdot d\mathbf{S}$ (9.3.10)
In other words, the line integral of $\mathbf{H}$ around some closed curve is equal to the flux of true current through any surface attached to that curve. Unlike the magnetic field $\mathbf{B}$ (which specifies the force $q\,\mathbf{v}\times\mathbf{B}$ acting on a charge q moving with velocity $\mathbf{v}$), or the magnetization $\mathbf{M}$ (the magnetic dipole moment per unit volume), the magnetic intensity $\mathbf{H}$ has no clear physical meaning.
The only reason for introducing it is that it enables us to calculate fields in the presence of magnetic
materials without first having to know the distribution of magnetization currents. However, this is only
possible if we possess a constitutive relation connecting B and H .
Paragraph 9.4 Other derivation for polarization and magnetisation
http://mysite.du.edu/~jcalvert/phys/emfields.htm
Paragraph 9.5 Gauss theorem derivation II
the field is written
Illustration 28
Illustration 29
$\mathbf{E} = \frac{q}{4\pi\epsilon_0}\left(\frac{x}{r^3},\ \frac{y}{r^3},\ \frac{z}{r^3}\right)$ (9.5.1)
where $r^2 = x^2 + y^2 + z^2$. So,
$\frac{\partial E_x}{\partial x} = \frac{q}{4\pi\epsilon_0}\left(\frac{\partial x}{\partial x}\,\frac{1}{r^3} + x\,\frac{\partial}{\partial r}\!\left(\frac{1}{r^3}\right)\frac{\partial r}{\partial x}\right) = \frac{q}{4\pi\epsilon_0}\left(\frac{1}{r^3} - \frac{3x}{r^4}\,\frac{x}{r}\right) = \frac{q}{4\pi\epsilon_0}\,\frac{r^2 - 3x^2}{r^5}$ (9.5.2)
with
$\frac{\partial r}{\partial x} = \frac{\partial \sqrt{x^2+y^2+z^2}}{\partial x} = \frac{2x}{2\sqrt{x^2+y^2+z^2}} = \frac{x}{r}$
So we can write in vector form
$\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\epsilon_0}\int \rho(\mathbf{r}')\,\frac{\mathbf{r}-\mathbf{r}'}{|\mathbf{r}-\mathbf{r}'|^3}\, d^3 r'$ (9.5.3)
If we take into account that
$\frac{\mathbf{r}-\mathbf{r}'}{|\mathbf{r}-\mathbf{r}'|^3} = -\nabla\left(\frac{1}{|\mathbf{r}-\mathbf{r}'|}\right)$
whose x-component reads
$\frac{x-x'}{\left[(x-x')^2+(y-y')^2+(z-z')^2\right]^{3/2}} = -\frac{\partial}{\partial x}\left(\frac{1}{\left[(x-x')^2+(y-y')^2+(z-z')^2\right]^{1/2}}\right)$
but it is better to simplify like this:
$\frac{\partial}{\partial x}\left(\frac{1}{r}\right) = \frac{\partial 1}{\partial x}\cdot\frac{1}{r} + 1\cdot\frac{\partial}{\partial r}\!\left(\frac{1}{r}\right)\frac{\partial r}{\partial x} = 0 - \frac{1}{r^2}\,\frac{x}{r} = -\frac{x}{r^3}$
and for the second derivatives:
$\frac{\partial}{\partial x}\left(\frac{x}{r^3}\right) = \frac{\partial x}{\partial x}\,\frac{1}{r^3} + x\,\frac{\partial}{\partial r}\!\left(\frac{1}{r^3}\right)\frac{\partial r}{\partial x} = \frac{1}{r^3} - \frac{3x}{r^4}\,\frac{x}{r} = \frac{r^2 - 3x^2}{r^5}$
so
$\nabla\cdot\left(\frac{\mathbf{r}-\mathbf{r}'}{|\mathbf{r}-\mathbf{r}'|^3}\right) = -\nabla^2\left(\frac{1}{|\mathbf{r}-\mathbf{r}'|}\right) = 4\pi\,\delta(\mathbf{r}-\mathbf{r}')$ (9.5.4)
It follows that
$\nabla\cdot\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\epsilon_0}\int \rho(\mathbf{r}')\,\nabla\cdot\left(\frac{\mathbf{r}-\mathbf{r}'}{|\mathbf{r}-\mathbf{r}'|^3}\right) d^3 r' = \int \frac{\rho(\mathbf{r}')}{\epsilon_0}\,\delta(\mathbf{r}-\mathbf{r}')\, d^3 r' = \frac{\rho(\mathbf{r})}{\epsilon_0}$  (SI units) (9.5.5)
which is the desired result. The most general form of Gauss' law:
$\oint \mathbf{E}\cdot d\mathbf{S} = \frac{1}{\epsilon_0}\int_V \rho(\mathbf{r})\, d^3 r$  (SI units) (9.5.6)
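Both steps above can be checked symbolically for the point-charge field (9.5.1): its divergence vanishes away from the origin, and its flux through any sphere is $q/\epsilon_0$. A minimal sketch (sympy assumed available):

```python
# Point-charge field: div E = 0 off the origin; flux over a sphere = q/eps0.
import sympy as sp

x, y, z, q, eps0 = sp.symbols('x y z q epsilon_0', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
E = [q / (4 * sp.pi * eps0) * c / r**3 for c in (x, y, z)]

# Sum of the three derivatives of Eq. (9.5.2) type: (r^2 - 3x^2)/r^5 + ... = 0
div_E = sum(sp.diff(E[i], v) for i, v in enumerate((x, y, z)))
print(sp.simplify(div_E))  # 0

# Flux through a sphere of radius R: integrate E_r * R^2 sin(theta) over angles
theta, phi_ang, R = sp.symbols('theta phi R', positive=True)
flux = sp.integrate(q / (4 * sp.pi * eps0 * R**2) * R**2 * sp.sin(theta),
                    (theta, 0, sp.pi), (phi_ang, 0, 2 * sp.pi))
print(sp.simplify(flux - q / eps0))  # 0
```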
9.5.1 Poisson equation & Laplace equation
Saveliev deduction, vector representation in Cartesian coordinates, pag 161 [15]. In the Saveliev theoretical volume we have Gauss units:
$\oint_S E_n\, dS = 4\pi \int_V \rho\, dV$ (9.5.7)
Applying the Gauss theorem, we have
$\int_V \nabla\cdot\mathbf{E}\, dV = 4\pi \int_V \rho\, dV$ (9.5.8)
Or in differential form:
$\nabla\cdot\mathbf{E} = 4\pi\rho$  (Gauss units), $\quad \nabla\cdot\mathbf{E} = \frac{\rho}{\epsilon_0}$  (SI units) (9.5.9)
Taking into account that
$\mathbf{E} = -\operatorname{grad}\varphi = -\nabla\varphi$ (9.5.10)
we have:
$\nabla\cdot(-\nabla\varphi) = 4\pi\rho$, or $\Delta\varphi = -4\pi\rho$ (9.5.11)
Eq. (9.5.11) is called the Poisson equation, and if $\rho = 0$ we obtain the following equation, the Laplace equation:
$\Delta\varphi = 0$ (9.5.12)
The solution is (see Paragraph 9.5):
$\varphi = \int \frac{\rho(\mathbf{r}')\, dV'}{|\mathbf{r}-\mathbf{r}'|}$ (9.5.13)
so in SI
$\varphi(\mathbf{r}) = \frac{1}{4\pi\epsilon_0}\int \frac{\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d^3 r'$
and in Gauss units
$\varphi(\mathbf{r}) = \int \frac{\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d^3 r'$
Another type of solution can be obtained by the procedure described in Equations of Mathematical Physics, A. Tikhonov, A. Samarskii, pag 282 [25].
This comes from solving by representing, for example, $\varphi(r)$ in polar (spherical) coordinates; we have
$\Delta\varphi = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 \frac{\partial\varphi}{\partial r}\right) = 0$
By integrating twice: $\varphi = \frac{C_1}{r} + C_2$. Or the simplest way is from the Coulomb law:
$\varphi = \int E\, dr = \frac{q}{r} = \int \frac{\rho\, dV}{r}$
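That $\varphi = C_1/r + C_2$ indeed solves the radial Laplace equation quoted above can be checked in one line; a minimal sketch (sympy assumed available):

```python
# Radial Laplacian of phi = C1/r + C2 vanishes away from the origin.
import sympy as sp

r, C1, C2 = sp.symbols('r C_1 C_2', positive=True)
phi = C1 / r + C2

# (1/r^2) d/dr ( r^2 d(phi)/dr )
lap = sp.diff(r**2 * sp.diff(phi, r), r) / r**2
print(sp.simplify(lap))  # 0
```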
9.5.2 Poisson equation for magnetic vector potential A
The usual gauge for the scalar potential $\varphi$ is such that $\varphi \to 0$ at infinity. The usual gauge for $\mathbf{A}$ is such that
$\nabla\cdot\mathbf{A} = 0$ (9.5.14)
This particular choice is known as the Coulomb gauge.
Let us take the curl:
$\nabla\times\mathbf{B} = \nabla\times(\nabla\times\mathbf{A}) = \nabla(\nabla\cdot\mathbf{A}) - \nabla^2\mathbf{A} = -\nabla^2\mathbf{A}$ (9.5.15)
where use has been made of the Coulomb gauge condition (9.5.14). We can combine the above relation with the field equation $\nabla\times\mathbf{B} = \mu_0 \mathbf{j}$ to give
$\nabla^2\mathbf{A} = -\mu_0 \mathbf{j}$ (9.5.16)
But this is just Poisson's equation. We can immediately write the unique solutions to the above equations:
$\mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi}\int \frac{\mathbf{j}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d^3 r'$ (9.5.17)
Of course, we have seen an equation like this before, see Paragraph 9.5:
$\varphi(\mathbf{r}) = \int \frac{\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d^3 r'$ (9.5.18)
9.5.3 Coulomb's law from gradient potential and Biot-Savart law derivation from
magnetic vector potential
See Savelyev [15] pag 385 vol. I, formula XI.56, for the curl.
Levi-Civita symbol
For future use it is convenient to introduce the three-dimensional Levi-Civita symbol $\epsilon_{ikl}$, defined by
$\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = 1$, $\quad \epsilon_{321} = \epsilon_{132} = \epsilon_{213} = -1$
(in order to memorize: the inverse of 123 is 321, then make the cycles); all other $\epsilon_{ikl} = 0$.
$\operatorname{curl}\mathbf{a} = \nabla\times\mathbf{a} = [\nabla,\mathbf{a}] = \begin{vmatrix} \mathbf{e}_x & \mathbf{e}_y & \mathbf{e}_z \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ a_x & a_y & a_z \end{vmatrix} = \left(\frac{\partial a_z}{\partial y}-\frac{\partial a_y}{\partial z},\ \frac{\partial a_x}{\partial z}-\frac{\partial a_z}{\partial x},\ \frac{\partial a_y}{\partial x}-\frac{\partial a_x}{\partial y}\right)$ (9.5.19)
This can be written in a more compact formula:
$[\nabla,\mathbf{a}] = \sum_{i,k,l} \epsilon_{ikl}\,\frac{\partial a_k}{\partial x_i}\,\mathbf{e}_l$ (9.5.20)
Let us write the vector product of the vectors $\mathbf{a} = \sum_i \mathbf{e}_i a_i$ and $\mathbf{b} = \sum_k \mathbf{e}_k b_k$:
$[\mathbf{a},\mathbf{b}] = \left[\left(\sum_i \mathbf{e}_i a_i\right),\ \left(\sum_k \mathbf{e}_k b_k\right)\right]$ (9.5.21)
Owing to distributivity we can write
$[\mathbf{a},\mathbf{b}] = \sum_{i,k} [\mathbf{e}_i,\mathbf{e}_k]\, a_i b_k$ (9.5.22)
Let us replace $[\mathbf{e}_i,\mathbf{e}_k]$ with Levi-Civita symbols:
$[\mathbf{a},\mathbf{b}] = \sum_{i,k} a_i b_k \sum_l \epsilon_{ikl}\,\mathbf{e}_l = \sum_{i,k,l} \epsilon_{ikl}\, a_i b_k\, \mathbf{e}_l$ (9.5.23)
Of the 27 addends of the sum only six are non-zero. Writing them out we obtain:
$[\mathbf{a},\mathbf{b}] = a_1 b_2 \mathbf{e}_3 + a_2 b_3 \mathbf{e}_1 + a_3 b_1 \mathbf{e}_2 - a_3 b_2 \mathbf{e}_1 - a_2 b_1 \mathbf{e}_3 - a_1 b_3 \mathbf{e}_2$
Combining the terms with identical unit vectors we have:
$[\mathbf{a},\mathbf{b}] = \mathbf{e}_1(a_2 b_3 - a_3 b_2) + \mathbf{e}_2(a_3 b_1 - a_1 b_3) + \mathbf{e}_3(a_1 b_2 - a_2 b_1)$ (9.5.24)
This can be written in the form of a determinant:
$[\mathbf{a},\mathbf{b}] = \begin{vmatrix} \mathbf{e}_x & \mathbf{e}_y & \mathbf{e}_z \\ a_x & a_y & a_z \\ b_x & b_y & b_z \end{vmatrix}$ (9.5.25)
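The contraction (9.5.23) can be exercised numerically: building the cross product from the Levi-Civita symbol reproduces the componentwise formula (9.5.24). A minimal sketch (numpy assumed available; the sample vectors are arbitrary choices):

```python
# Cross product via the Levi-Civita symbol, compared against numpy's cross.
import numpy as np

# Levi-Civita symbol eps[i, k, l]
eps = np.zeros((3, 3, 3))
for i, k, l in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, k, l] = 1.0   # even permutations (123, 231, 312)
    eps[l, k, i] = -1.0  # odd permutations (321, 132, 213)

a = np.array([1.0, 2.0, 3.0])
b = np.array([-4.0, 0.5, 2.0])

# [a x b]_l = sum_{i,k} eps_{ikl} a_i b_k, as in Eq. (9.5.23)
cross_lc = np.einsum('ikl,i,k->l', eps, a, b)
print(cross_lc, np.cross(a, b))  # identical results
```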
Now assume that we are given the vector function $\mathbf{a}(\varphi) = \sum_k a_k(\varphi)\,\mathbf{e}_k$, where $\varphi$ is a scalar function of the coordinates. Let us find the curl of this vector. According to the rule of differentiation of a composite function we have:
$\frac{\partial a_k}{\partial x_i} = \frac{d a_k}{d\varphi}\,\frac{\partial \varphi}{\partial x_i}$ (9.5.26)
Taking into account (9.5.20) and (9.5.26) we have
$[\nabla,\mathbf{a}] = \sum_{i,k,l} \epsilon_{ikl}\,\frac{d a_k}{d\varphi}\,\frac{\partial \varphi}{\partial x_i}\,\mathbf{e}_l$ (9.5.27)
Substitution of $\frac{\partial \varphi}{\partial x_i} = (\nabla\varphi)_i$ in (9.5.27) gives:
$[\nabla,\mathbf{a}] = \sum_{i,k,l} \epsilon_{ikl}\,(\nabla\varphi)_i \left(\frac{d\mathbf{a}}{d\varphi}\right)_k \mathbf{e}_l = \left[\nabla\varphi,\ \frac{d\mathbf{a}}{d\varphi}\right]$ (9.5.28)
Now if we put $\varphi = r$:
$[\nabla,\mathbf{a}(r)] = \left[\nabla r,\ \frac{d\mathbf{a}}{dr}\right] = \left[\mathbf{e}_r,\ \frac{d\mathbf{a}}{dr}\right]$ (9.5.29)
where
$\nabla r = \sum_k \mathbf{e}_k \frac{\partial r}{\partial x_k} = \sum_k \mathbf{e}_k \frac{x_k}{r} = \frac{\mathbf{r}}{r} = \mathbf{e}_r$ (9.5.30)
with
$\frac{\partial r}{\partial x} = \frac{\partial \sqrt{x^2+y^2+z^2}}{\partial x} = \frac{2x}{2\sqrt{x^2+y^2+z^2}} = \frac{x}{r}$
For the divergence:
$\operatorname{div}\mathbf{A} = \nabla\cdot\mathbf{A} = \frac{\partial A_x}{\partial x} + \frac{\partial A_y}{\partial y} + \frac{\partial A_z}{\partial z} = \sum_i \frac{\partial a_i}{\partial x_i}$ (9.5.31)
So we have:
$\nabla\cdot\mathbf{a} = \sum_k \frac{\partial a_k}{\partial x_k} = \sum_k \frac{d a_k}{d\varphi}\,\frac{\partial \varphi}{\partial x_k} = \sum_k \left(\frac{d\mathbf{a}}{d\varphi}\right)_k (\nabla\varphi)_k = \frac{d\mathbf{a}}{d\varphi}\cdot\nabla\varphi$ (9.5.32)
Now if we put $\varphi = r$:
$\nabla\cdot\mathbf{a}(r) = \frac{d\mathbf{a}}{dr}\cdot\nabla r = \frac{d\mathbf{a}}{dr}\cdot\mathbf{e}_r = \frac{d\mathbf{a}}{dr}\cdot\frac{\mathbf{r}}{r}$ (9.5.33)
For the gradient (pag 384): the partial derivative of $\varphi(r)$ with respect to $x_k$ has the form:
$\frac{\partial \varphi}{\partial x_k} = \frac{d\varphi}{dr}\,\frac{\partial r}{\partial x_k} = \frac{d\varphi}{dr}\,\frac{x_k}{r}$ (9.5.34)
where
$\frac{\partial r}{\partial x} = \frac{\partial \sqrt{x^2+y^2+z^2}}{\partial x} = \frac{2x}{2\sqrt{x^2+y^2+z^2}} = \frac{x}{r}$
$\nabla\varphi(r) = \sum_k \mathbf{e}_k \frac{d\varphi}{dr}\,\frac{x_k}{r} = \frac{d\varphi}{dr}\,\frac{1}{r}\sum_k \mathbf{e}_k x_k = \frac{d\varphi}{dr}\,\frac{\mathbf{r}}{r}$ (9.5.35)
or
$\nabla\varphi(r) = \frac{d\varphi}{dr}\,\nabla r = \frac{d\varphi}{dr}\,\frac{\mathbf{r}}{r} = \frac{d\varphi}{dr}\,\mathbf{e}_r$ (9.5.36)
Now if we consider (pag 160 [15]) that
$\varphi(r) = \frac{1}{r}$ (9.5.37)
and take the gradient, from (9.5.36) we have:
$\nabla\varphi(r) = -\frac{1}{r^2}\,\frac{\mathbf{r}}{r}$ (9.5.38)
From Poisson's equation we can immediately write the unique solutions
$\mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi}\int \frac{\mathbf{j}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d^3 r'$ (9.5.39)
Of course, we have seen an equation like this before, see Paragraph 9.5:
$\varphi(\mathbf{r}) = \int \frac{\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d^3 r'$ (9.5.40)
And taking into account that
$\mathbf{B} = \nabla\times\mathbf{A}, \qquad \mathbf{E} = -\operatorname{grad}\varphi$ (9.5.41)
and utilising (9.5.29) and (9.5.36), we obtain the fundamental force laws for electric and magnetic fields.
Coulomb's law,
$\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\epsilon_0}\int \rho(\mathbf{r}')\,\frac{\mathbf{r}-\mathbf{r}'}{|\mathbf{r}-\mathbf{r}'|^3}\, d^3 r'$ (9.5.42)
and the Biot-Savart law,
$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi}\int \frac{\mathbf{j}(\mathbf{r}')\times(\mathbf{r}-\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|^3}\, d^3 r'$ (9.5.43)
where
$\mathbf{j} = \mathbf{e}_{r'}\, j$ (9.5.44)
9.5.4 Lorentz force & Ampère's law
Ampère discovered that the force exerted on the test wire is directly proportional to its length:
$dF \propto dl$
If the test current is parallel to a magnetic loop then there is no force exerted on the test wire. If the test current is rotated in a single plane, so that it starts parallel to the central current and ends up pointing along a magnetic loop, then the magnitude of the force on the test wire attenuates like $\sin\alpha$ (where $\alpha$ is the angle the current is turned through; $\alpha = 0$ corresponds to the case where the test current is parallel to the central current), and its direction is again always at right angles to the test current. Thus
$dF \propto dl\, B \sin\alpha$, with $d\mathbf{F} \perp d\mathbf{l}$ and $d\mathbf{F} \perp \mathbf{B}$, so that $d\mathbf{F} \propto [d\mathbf{l}, \mathbf{B}]$ (9.5.45)
If a wire carrying a current is in a magnetic field, then each of the current carriers experiences the Lorentz force
$\mathbf{F} = e[\mathbf{v}, \mathbf{B}]$ (9.5.46)
Here $\mathbf{v}$ is the velocity of the ordered motion of a carrier. Summing this force over the $n\, dV$ carriers contained in a wire element of volume $dV$, and noting that $e n \mathbf{v}\, dV = I\, d\mathbf{l}$, we get
$d\mathbf{F} = I[d\mathbf{l}, \mathbf{B}]$, or $dF = I B\, dl \sin\alpha$ (9.5.47)
This equation determines the force exerted on a current element $d\mathbf{l}$ in a magnetic field. Equation (9.5.47) was established experimentally by Ampère and is called Ampère's law (see pag 126 [16] vol II).
He also made the following observations:
If the current in the test wire (i.e., the test current) flows parallel to the current in the central wire then
the two wires attract one another.
If the current in the test wire is reversed then the two wires repel one another.
If the test current points radially towards the central wire (and the current in the central wire flows upwards) then the test wire is subject to a downwards force. If the test current is reversed then the force is upwards. If the test current is rotated in a single plane, so that it starts parallel to the central current and ends up pointing radially towards it, then the force on the test wire is of constant magnitude, and is always at right angles to the test current.
Finally, Ampère was able to establish that the attractive force between two parallel current-carrying wires is proportional to the product of the two currents, and falls off like the inverse of the perpendicular distance between the wires.
If the distance between the currents is b (Fig. 30), then each element of the current $I_2$ will be in a magnetic field whose induction is
$B_1 = \frac{\mu_0}{4\pi}\,\frac{2 I_1}{b}$ (9.5.48)
So the force per unit length is
$F_{21} = I_2 B_1 = \frac{\mu_0}{4\pi}\,\frac{2 I_1 I_2}{b}$ (9.5.49)
9.5.5 Ampère's circuital law
We can modify (9.5.43) by writing
$\mathbf{j}(\mathbf{r})\, d^3 r = j\, S\, d\mathbf{l} = I\, d\mathbf{l}$ (9.5.50)
We can do this because the vector $\mathbf{j}$ has the same direction as $d\mathbf{l}$. So in differential form we have:
$d\mathbf{B} = \frac{\mu_0 I}{4\pi}\,\frac{[d\mathbf{l}, \mathbf{r}]}{r^3}$ (9.5.51)
Fig. 30
A glance at Fig. 31 shows that the vector $d\mathbf{B}$ is directed at right angles to the plane passing through $d\mathbf{l}$ and the point for which the field is being calculated, so that rotation about $d\mathbf{l}$ in the direction of $d\mathbf{B}$ is associated with $d\mathbf{l}$ by the right-hand screw rule. The magnitude of $d\mathbf{B}$ is determined by the expression
$dB = \frac{\mu_0 I}{4\pi}\,\frac{dl \sin\alpha}{r^2}$ (9.5.52)
With
$r = \frac{b}{\sin\alpha}, \qquad dl = \frac{r\, d\alpha}{\sin\alpha} = \frac{b\, d\alpha}{\sin^2\alpha}$
introducing these values in (9.5.52):
$dB = \frac{\mu_0}{4\pi}\,\frac{I\, b\, d\alpha \sin\alpha \sin^2\alpha}{b^2 \sin^2\alpha} = \frac{\mu_0}{4\pi}\,\frac{I}{b}\sin\alpha\, d\alpha$ (9.5.53)
The angle $\alpha$ varies within the limits 0 to $\pi$ for all the elements of an infinite line current. Hence
$B = \int dB = \frac{\mu_0}{4\pi}\,\frac{I}{b}\int_0^\pi \sin\alpha\, d\alpha = \frac{\mu_0}{4\pi}\,\frac{2I}{b}$ (9.5.54)
Thus the magnetic induction of the field of a line current is determined by the formula
$B = \frac{\mu_0}{4\pi}\,\frac{2I}{b}$ (9.5.55)

Let us now turn to the circulation of the vector $\mathbf{B}$. By definition, the circulation equals the integral
Fig. 31
$\oint \mathbf{B}\cdot d\mathbf{l}$ (9.5.56)
It is simplest to calculate this integral for the field of a line current. Assume that a closed loop is in a plane perpendicular to the current (Fig. 32; the current is perpendicular to the plane of the drawing and is directed beyond the drawing). At each point of the loop, the vector $\mathbf{B}$ is directed along a tangent to the circumference passing through this point. Let us substitute $B\, dl_B$ for $\mathbf{B}\cdot d\mathbf{l}$ in the expression for the circulation ($dl_B$ is the projection of a loop element onto the direction of the vector $\mathbf{B}$). Inspection of the figure shows that $dl_B$ equals $b\, d\alpha$, where b is the distance from the wire carrying the current to $d\mathbf{l}$, and $d\alpha$ is the angle through which a radial straight line turns when it moves along the loop over the element $d\mathbf{l}$. Thus, introducing (9.5.55) for B, we get (see (8.7.1))
$\mathbf{B}\cdot d\mathbf{l} = B\, dl_B = \frac{\mu_0}{4\pi}\,\frac{2I}{b}\, b\, d\alpha = \frac{\mu_0 I}{2\pi}\, d\alpha$ (9.5.57)
We have
$\oint \mathbf{B}\cdot d\mathbf{l} = \frac{\mu_0 I}{2\pi}\oint d\alpha$ (9.5.58)
Upon circumvention of the loop enclosing the current, the radial straight line constantly turns in one direction, therefore $\oint d\alpha = 2\pi$. Matters are different if the current is not enclosed by the loop (Fig. 32b). Here upon circumvention of the loop, the radial straight line first turns in one direction (segment 1-2), and then in the opposite one (2-1), owing to which $\oint d\alpha$ equals zero. With a view to this result we can write that
$\oint \mathbf{B}\cdot d\mathbf{l} = \mu_0 I$ (9.5.59)
Taking into account that $I = \int_S \mathbf{j}\cdot d\mathbf{S}$, we have
$\oint \mathbf{B}\cdot d\mathbf{l} = \mu_0 \int_S \mathbf{j}\cdot d\mathbf{S}$ (9.5.60)
This result is called Ampère's circuital law. (The magnetic field in space around an electric current is proportional to the electric current which serves as its source, just as the electric field in space is proportional to the charge which serves as its source. Ampère's circuital law states that for any closed loop path, the sum of the length elements times the magnetic field in the direction of the length element is equal to the permeability times the electric current enclosed in the loop.)
Fig. 32
Transforming the left side according to the Stokes theorem we get:
$\int_S [\nabla,\mathbf{B}]\cdot d\mathbf{S} = \mu_0 \int_S \mathbf{j}\cdot d\mathbf{S}$ (9.5.61)
We thus arrive at the conclusion that the curl of the magnetic induction vector is proportional to the current density vector at the given point:
$\nabla\times\mathbf{B} = \mu_0 \mathbf{j}$ (9.5.62)
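The shape-independence of the circulation (9.5.59) can be illustrated numerically: integrating the line-current field (9.5.55) around a square loop enclosing the wire still gives exactly $\mu_0 I$. A minimal sketch (numpy assumed available; the current and loop geometry are arbitrary choices):

```python
# Circulation of B around a square loop enclosing a line current on the z-axis.
import numpy as np

mu0 = 4e-7 * np.pi
I = 3.0  # A, hypothetical current

def B_field(x, y):
    """Field of a line current, magnitude mu0 I/(2 pi b), tangential direction."""
    b2 = x**2 + y**2
    pref = mu0 * I / (2 * np.pi * b2)
    return np.array([-pref * y, pref * x])

# Square loop of side 2 centred on the wire, traversed counter-clockwise
n = 20000
ts = np.linspace(0.0, 1.0, n, endpoint=False) + 0.5 / n  # segment midpoints
circ = 0.0
for start, end in [((1, -1), (1, 1)), ((1, 1), (-1, 1)),
                   ((-1, 1), (-1, -1)), ((-1, -1), (1, -1))]:
    p0, p1 = np.array(start, float), np.array(end, float)
    pts = p0 + np.outer(ts, p1 - p0)
    dl = (p1 - p0) / n
    Bx, By = B_field(pts[:, 0], pts[:, 1])
    circ += np.sum(Bx * dl[0] + By * dl[1])

print(circ, mu0 * I)  # numerically equal
```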
9.5.6 The torque of the magnetic field, magnetic dipole moment & work
Let us divide the area of the loop into narrow strips of width dy parallel to the direction of the vector $\mathbf{B}$ (see Fig. 33; Fig. 34 is an enlarged view of one of these strips). The force $d\mathbf{F}_1$ directed beyond the drawing is exerted on the loop element $d\mathbf{l}_1$ enclosing the strip at the left. The magnitude of this force is (see (9.5.47))
$dF_1 = I B\, dl_1 \sin\alpha_1 = I B\, dy$ (9.5.63)
(see Fig. 34b). The force $d\mathbf{F}_2$ directed toward us is exerted on the loop element $d\mathbf{l}_2$ enclosing the strip at the right. The magnitude of this force is
$dF_2 = I B\, dl_2 \sin\alpha_2 = I B\, dy$ (9.5.64)
The result we have obtained signifies that the forces applied to the opposite loop elements $d\mathbf{l}_1$ and $d\mathbf{l}_2$ form a couple whose torque is
$dT = I B\, x\, dy = I B\, dS$ (9.5.65)
Fig. 33
($dS$ is the area of a strip). A glance at Fig. 33 shows that the vector $d\mathbf{T}$ is perpendicular to the vectors $\mathbf{n}$ and $\mathbf{B}$ and, consequently, can be written in the form
$d\mathbf{T} = I[\mathbf{n}, \mathbf{B}]\, dS$ (9.5.66)
Summation of this equation over all the strips yields the torque acting on the loop:
$\mathbf{T} = \int I[\mathbf{n}, \mathbf{B}]\, dS = I[\mathbf{n}, \mathbf{B}]\, S = [(I S \mathbf{n}), \mathbf{B}]$ (9.5.67)
Introducing the magnetic dipole moment of a current loop,
$\mathbf{m} = I S \mathbf{n}$ (9.5.68)
we can write the torque as:
$\mathbf{T} = [\mathbf{m}, \mathbf{B}], \qquad T = m B \sin\alpha$ (9.5.69)
To increase the angle $\alpha$ between the vectors $\mathbf{m}$ and $\mathbf{B}$ by $d\alpha$, the following work must be done against the forces exerted on a loop in a magnetic field:
$dA = T\, d\alpha = m B \sin\alpha\, d\alpha$ (9.5.70)
Hence, the work (9.5.70) goes to increase the potential energy W:
$dW = m B \sin\alpha\, d\alpha$ (9.5.71)
Integration yields:
$W = -m B \cos\alpha + \text{const}$ (9.5.72)
which is the dot product
$W = -\mathbf{m}\cdot\mathbf{B}$ (9.5.73)
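As a quick consistency check, differentiating the potential energy (9.5.72) with respect to $\alpha$ reproduces the torque magnitude (9.5.69), as required by (9.5.70)-(9.5.71); a minimal sketch (sympy assumed available):

```python
# dW/d(alpha) for W = -m B cos(alpha) equals the torque T = m B sin(alpha).
import sympy as sp

m, B, alpha = sp.symbols('m B alpha', positive=True)
W = -m * B * sp.cos(alpha)   # Eq. (9.5.72), constant dropped
T = m * B * sp.sin(alpha)    # Eq. (9.5.69)

print(sp.simplify(T - sp.diff(W, alpha)))  # 0
```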
9.5.7 Law of electromagnetic induction (induced e.m.f.) or Faraday's law
See [16] pag 182 vol II:
In 1831, the British physicist and chemist Michael Faraday (1791-1867) discovered that an electric current is produced in a closed conducting loop when the flux of magnetic induction through the surface enclosed by this loop changes. This phenomenon is called electromagnetic induction, and the current produced an induced current.
Fig. 34
The phenomenon of electromagnetic induction shows that when the magnetic flux in a loop changes, an induced electromotive force is set up:
$\text{e.m.f.} = \mathcal{E} = -\frac{d\Phi}{dt}$ (9.5.74)
See [16] pag 140 vol II:
Let us consider a current loop formed by stationary wires and a movable rod of length $l$ sliding along them (Fig. 35). Let the loop be in an external magnetic field which we shall assume to be homogeneous and at right angles to the plane of the loop. With the directions of the current and field shown in the figure, the force F exerted on the rod will be directed to the right and will equal
$F = I B l$ (9.5.75)
When the rod moves to the right by dh, this force does the positive work
$dA = F\, dh = I B l\, dh = I B\, dS$ (9.5.76)
The flux through the area of a current loop is
$\Phi = \int B_n\, dS$ (9.5.77)
or
$d\Phi = B\, dS$ (9.5.78)
It follows that
$dA = I\, d\Phi$ (9.5.79)
With $dA = I\, d\Phi = \frac{dq}{dt}\, d\Phi$, we get
$\text{e.m.f.} = \mathcal{E} = \frac{dA}{dq} = \frac{d\Phi}{dt}$
When the field is directed toward us (Fig. 35b), the force exerted on the rod is directed to the left. Therefore when the rod moves to the right through the distance dh, the magnetic force does negative work, and
Fig. 35
$d\Phi = -B\, dS$ (9.5.80)
Lenz established a rule permitting us to find the direction of an induced current. Lenz's rule states that an induced current is always directed so as to oppose the cause producing it. It follows that we must use the opposite sign:
$\text{e.m.f.} = \mathcal{E} = \frac{dA}{dq} = -\frac{d\Phi}{dt}$ (9.5.81)
9.5.8 Maxwell's Equations
Let us consider electromagnetic induction when a wire loop in which a current is induced is stationary,
and the changes in the magnetic flux are due to changes in the magnetic field. The setting up of an
induced current signifies that the changes in the magnetic field produce extraneous forces in the loop
that are exerted on the current carriers. These extraneous forces are associated neither with chemical
nor with thermal processes in the wire. They also cannot be magnetic forces because such forces do no
work on charges. It remains to conclude that the induced current is due to the electric field set up in the
wire.
The e.m.f. equals the circulation of the vector $\mathbf{E}$ around the given loop:
$\mathcal{E} = \oint \mathbf{E}\cdot d\mathbf{l}$ (9.5.82)
On the other hand,
$\text{e.m.f.} = \mathcal{E} = \frac{dA}{dq} = -\frac{d\Phi}{dt} = -\frac{d}{dt}\int_S \mathbf{B}\cdot d\mathbf{S}$ (9.5.83)
Equating these two equations and taking into account that, since the loop and the surface are stationary, the operations of time differentiation and integration over the surface can have their places exchanged, we get
$\oint \mathbf{E}\cdot d\mathbf{l} = -\frac{d}{dt}\int_S \mathbf{B}\cdot d\mathbf{S} = -\int_S \frac{\partial \mathbf{B}}{\partial t}\cdot d\mathbf{S}$ (9.5.84)
This is the second Maxwell equation in integral form.
Let us transform the left-hand side of Eq. (9.5.84) in accordance with Stokes's theorem. The result is the Maxwell equation in differential form:
$\int_S [\nabla,\mathbf{E}]\cdot d\mathbf{S} = -\int_S \frac{\partial \mathbf{B}}{\partial t}\cdot d\mathbf{S}, \qquad \nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$ (9.5.85)
Displacement current
From the continuity equation (see (9.1.36)):
$\nabla\cdot\mathbf{j} = -\frac{\partial \rho}{\partial t}$ (9.5.86)
From Eq. (9.2.6):
$\nabla\cdot\mathbf{D} = \rho$ (9.5.87)
Eq. (9.5.86) can be rewritten as:
$\nabla\cdot\mathbf{j} = -\frac{\partial}{\partial t}(\nabla\cdot\mathbf{D}) = -\nabla\cdot\frac{\partial \mathbf{D}}{\partial t}$ (9.5.88)
It follows that
$\nabla\cdot\left(\mathbf{j} + \frac{\partial \mathbf{D}}{\partial t}\right) = 0$ (9.5.89)
From Eq. (9.3.8) and (9.5.62):
$\nabla\times\mathbf{H} = \mathbf{j} + \mathbf{j}_{displacement} = \mathbf{j} + \frac{\partial \mathbf{D}}{\partial t}$ (9.5.90)
The introduction of the displacement current has "given equal rights" to an electric field and a
magnetic field. It can be seen from the phenomenon of electromagnetic induction that a varying
magnetic field sets up an electric field. It follows that a varying electric field sets up a magnetic field.
Maxwell's equations, differential form (see [16] pag 206 vol II):
$\nabla\cdot\mathbf{B} = 0, \qquad \nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla\cdot\mathbf{D} = \rho, \qquad \nabla\times\mathbf{H} = \mathbf{j} + \frac{\partial \mathbf{D}}{\partial t}$ (9.5.91)
Maxwell's equations, integral form (see [16] pag 206 vol II):
$\oint_S \mathbf{B}\cdot d\mathbf{S} = 0, \qquad \oint \mathbf{E}\cdot d\mathbf{l} = -\frac{d}{dt}\int_S \mathbf{B}\cdot d\mathbf{S}, \qquad \oint_S \mathbf{D}\cdot d\mathbf{S} = \int_V \rho\, dV, \qquad \oint \mathbf{H}\cdot d\mathbf{l} = \int_S \mathbf{j}\cdot d\mathbf{S} + \frac{d}{dt}\int_S \mathbf{D}\cdot d\mathbf{S}$ (9.5.92)
See [27] pag 83, Condon.
These equations express respectively the fundamental laws:
(a) Non-existence of a magnetic stuff analogous to electric charge.
(b) Faraday's law of electromagnetic induction.
(c) The Coulomb law in electrostatics.
(d) The Ampère law for the magnetic field due to electric currents, together with Maxwell's displacement-current hypothesis.
9.5.9 MULTIPOLE EXPANSION
See[28] pag 48
9.5.11 MULTIPOLE EXPANSION OF THE POTENTIAL
Next, we consider the general situation in which we have a static collection of charges q_\alpha arbitrarily located (but in the vicinity of the origin). We let \mathbf{r}'_\alpha = \mathbf{r}'_\alpha(x'_{\alpha,i}) be the vector that designates the position of the \alpha-th charge at the point (x'_{\alpha,1}, x'_{\alpha,2}, x'_{\alpha,3}). The vectors to the field point P = P(x_i) from the charge q_\alpha and from the origin are denoted by \mathbf{R}_\alpha = \mathbf{r} - \mathbf{r}'_\alpha and \mathbf{r}, respectively (see Fig. 36). The field point P is considered fixed, so that the vector \mathbf{R}_\alpha is a function of the coordinates x'_{\alpha,i} of the charges q_\alpha.
Fig. 36
The potential at the field point P due to the source charge q_\alpha is

\phi_\alpha = \frac{q_\alpha}{R_\alpha}    (9.5.93)

where

R_\alpha = |\mathbf{r} - \mathbf{r}'_\alpha| = \sqrt{\sum_i (x_i - x'_{\alpha,i})^2}    (9.5.94)
For a fixed field position \mathbf{r}, we wish to expand 1/R_\alpha about the source origin, \mathbf{r}'_\alpha = 0. The general three-dimensional Taylor (Maclaurin) expansion with respect to the coordinates \mathbf{r}'_\alpha = (x'_{\alpha,1}, x'_{\alpha,2}, x'_{\alpha,3}) is

f(\mathbf{r}'_\alpha) = f(0) + \sum_i x'_{\alpha,i}\left.\frac{\partial f(\mathbf{r}'_\alpha)}{\partial x'_{\alpha,i}}\right|_{\mathbf{r}'_\alpha=0} + \frac{1}{2}\sum_{i,j} x'_{\alpha,i}x'_{\alpha,j}\left.\frac{\partial^2 f(\mathbf{r}'_\alpha)}{\partial x'_{\alpha,i}\,\partial x'_{\alpha,j}}\right|_{\mathbf{r}'_\alpha=0} + \ldots    (9.5.95)

Therefore, when we let

f(\mathbf{r}'_\alpha) = \frac{q_\alpha}{R_\alpha(\mathbf{r}'_\alpha)}    (9.5.96)
we have for the potential due to the charge q_\alpha

f(\mathbf{r}'_\alpha) = \frac{q_\alpha}{r} + q_\alpha\sum_i x'_{\alpha,i}\left.\frac{\partial}{\partial x'_{\alpha,i}}\left(\frac{1}{R_\alpha}\right)\right|_{R_\alpha=r} + \frac{1}{2}q_\alpha\sum_{i,j} x'_{\alpha,i}x'_{\alpha,j}\left.\frac{\partial^2}{\partial x'_{\alpha,i}\,\partial x'_{\alpha,j}}\left(\frac{1}{R_\alpha}\right)\right|_{R_\alpha=r} + \ldots    (9.5.97)
Now, from Eq. (9.5.94) it is clear that the spatial derivatives can be exchanged according to

\frac{\partial}{\partial x'_{\alpha,i}} f(R_\alpha) = -\frac{\partial}{\partial x_i} f(R_\alpha)    (9.5.98)

so that

\left.\frac{\partial}{\partial x'_{\alpha,i}}\left(\frac{1}{R_\alpha}\right)\right|_{R_\alpha=r} = -\left.\frac{\partial}{\partial x_i}\left(\frac{1}{R_\alpha}\right)\right|_{R_\alpha=r} = -\frac{\partial}{\partial x_i}\left(\frac{1}{r}\right)    (9.5.99)
Consequently, the potential may be written as

\phi_\alpha = \frac{q_\alpha}{r} - q_\alpha\sum_i x'_{\alpha,i}\frac{\partial}{\partial x_i}\left(\frac{1}{r}\right) + \frac{1}{2}q_\alpha\sum_{i,j} x'_{\alpha,i}x'_{\alpha,j}\frac{\partial^2}{\partial x_i\,\partial x_j}\left(\frac{1}{r}\right) - \ldots    (9.5.100)
The potential due to a collection of charges may then be written as

\phi = \sum_\alpha \phi_\alpha = \phi^{(1)} + \phi^{(2)} + \phi^{(4)} + \ldots + \phi^{(2^l)} + \ldots    (9.5.101)

where

\phi^{(1)} \equiv \sum_\alpha \frac{q_\alpha}{r} = \frac{q}{r}    (9.5.102)

\phi^{(2)} \equiv -\sum_\alpha q_\alpha \sum_i x'_{\alpha,i}\frac{\partial}{\partial x_i}\left(\frac{1}{r}\right)    (9.5.103)

\phi^{(4)} \equiv \frac{1}{2}\sum_\alpha q_\alpha \sum_{i,j} x'_{\alpha,i}x'_{\alpha,j}\frac{\partial^2}{\partial x_i\,\partial x_j}\left(\frac{1}{r}\right)    (9.5.104)

\phi^{(2^l)} \equiv \frac{(-1)^l}{l!}\sum_\alpha q_\alpha \sum_{i,j,\ldots,l} x'_{\alpha,i}x'_{\alpha,j}\cdots x'_{\alpha,l}\,\frac{\partial^l}{\partial x_i\,\partial x_j\cdots\partial x_l}\left(\frac{1}{r}\right)    (9.5.105)
The first term \phi^{(1)} is just the potential that would result if the total charge q = \sum_\alpha q_\alpha were located at the origin; it is called the monopole potential. The monopole moment is just the total charge q. The term \phi^{(2)} is called the dipole potential. The term \phi^{(4)} is called the quadrupole potential, and, in general, the term \phi^{(2^l)} is called the 2^l-pole potential.
This can be written as in Dykstra, Chem. Rev., 1993, 93 (7), pp 2339-2353:

\phi = \mathbf{T}\cdot\mathbf{M}    (9.5.106)

The tensor:

\mathbf{T} = \left(\frac{1}{r},\; -\frac{x}{r^3},\; -\frac{y}{r^3},\; -\frac{z}{r^3},\; \frac{3x^2-r^2}{r^5},\; \frac{3xy}{r^5},\; \frac{3xz}{r^5},\; \frac{3yx}{r^5},\; \frac{3y^2-r^2}{r^5},\; \frac{3yz}{r^5},\; \frac{3zx}{r^5},\; \frac{3zy}{r^5},\; \frac{3z^2-r^2}{r^5},\; \ldots\right)    (9.5.107)
Thus

\phi^{(2)} = \mathbf{T}^{(2)}\cdot\mathbf{M}^{(2)}    (9.5.108)

where (see [57],[56] or [44], pag 262 eq (8.59)):

\mathbf{T}^{(2)} = \frac{1}{r^3}\left[\frac{3}{r^2}\begin{pmatrix} x^2 & xy & xz \\ yx & y^2 & yz \\ zx & zy & z^2 \end{pmatrix} - \mathbf{I}\right] = \frac{1}{r^3}\left[\frac{3\,\mathbf{r}\mathbf{r}}{r^2} - \mathbf{I}\right]
\quad\text{or}\quad
T^{(2)}_{\alpha\beta} = \frac{1}{r^3}\left[\frac{3\,r_\alpha r_\beta}{r^2} - \delta_{\alpha\beta}\right]
\quad\text{or}\quad
T^{(2)}_{\alpha\beta} = \frac{1}{R^3}\left[\frac{3\,R_\alpha R_\beta}{R^2} - \delta_{\alpha\beta}\right]    (9.5.109)
Proof:

r = \sqrt{\sum_i x_i^2} = \sqrt{x^2+y^2+z^2}    (9.5.110)

\frac{\partial}{\partial x}\left(\frac{1}{r}\right) = \frac{\partial 1}{\partial x}\,\frac{1}{r} + 1\cdot\frac{\partial}{\partial r}\left(\frac{1}{r}\right)\frac{\partial r}{\partial x} = 0 - \frac{1}{r^2}\,\frac{x}{r} = -\frac{x}{r^3}    (9.5.111)
and for the second derivatives:

\frac{\partial}{\partial x}\left(\frac{x}{r^3}\right) = \frac{\partial x}{\partial x}\,\frac{1}{r^3} + x\,\frac{\partial}{\partial r}\left(\frac{1}{r^3}\right)\frac{\partial r}{\partial x} = \frac{1}{r^3} - \frac{3x}{r^4}\,\frac{x}{r} = \frac{r^2-3x^2}{r^5}    (9.5.112)

\frac{\partial}{\partial y}\left(\frac{x}{r^3}\right) = \frac{\partial x}{\partial y}\,\frac{1}{r^3} + x\,\frac{\partial}{\partial r}\left(\frac{1}{r^3}\right)\frac{\partial r}{\partial y} = 0 - \frac{3x}{r^4}\,\frac{y}{r} = -\frac{3xy}{r^5}    (9.5.113)

where

\frac{\partial r}{\partial x} = \frac{\partial}{\partial x}\sqrt{x^2+y^2+z^2} = \frac{2x}{2\sqrt{x^2+y^2+z^2}} = \frac{x}{r}    (9.5.114)
But the field is written

\mathbf{E} = \frac{q}{4\pi\varepsilon_0}\left(\frac{x}{r^3}, \frac{y}{r^3}, \frac{z}{r^3}\right),    (9.5.115)

where r^2 = x^2+y^2+z^2. So,

\frac{\partial E_x}{\partial x} = \frac{q}{4\pi\varepsilon_0}\left(\frac{\partial x}{\partial x}\,\frac{1}{r^3} + x\,\frac{\partial}{\partial r}\left(\frac{1}{r^3}\right)\frac{\partial r}{\partial x}\right) = \frac{q}{4\pi\varepsilon_0}\left(\frac{1}{r^3} - \frac{3x}{r^4}\,\frac{x}{r}\right) = \frac{q}{4\pi\varepsilon_0}\,\frac{r^2-3x^2}{r^5}    (9.5.116)

with \partial r/\partial x = x/r as in (9.5.114). The divergence of the field is thus given by

\nabla\cdot\mathbf{E} = \frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z} = \frac{q}{4\pi\varepsilon_0}\,\frac{3r^2 - 3x^2 - 3y^2 - 3z^2}{r^5} = 0.    (9.5.117)
\nabla\cdot\frac{\mathbf{r}}{r^3} = \mathbf{r}\cdot\nabla\frac{1}{r^3} + \frac{1}{r^3}\,\nabla\cdot\mathbf{r} = \mathbf{r}\cdot\left(-\frac{3}{r^4}\,\frac{\mathbf{r}}{r}\right) + \frac{1}{r^3}\cdot 3 = 0

Thus you must understand that only the trace is zero:

\sum_i \frac{\partial^2}{\partial x_i^2}\left(\frac{1}{r}\right) = 0    (9.5.118)

that is (see (9.5.117)):

\sum_{i,j}\frac{\partial^2}{\partial x_i\,\partial x_j}\left(\frac{1}{r}\right)\delta_{ij} = 0    (9.5.119)
9.5.12 Legendre
See [39] pag 2, also [9] pag 745
9.5.13 Mathematics of spherical harmonics
1. Laplace equation in spherical coordinates:

\nabla^2\psi = \frac{1}{r}\frac{\partial^2}{\partial r^2}(r\psi) + \frac{1}{r^2}\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial\psi}{\partial\theta}\right) + \frac{1}{r^2}\frac{1}{\sin^2\theta}\frac{\partial^2\psi}{\partial\varphi^2} = 0

2. Separation of variables, with Y(\theta,\varphi) as spherical harmonics (Kugelfunktion):

\psi(r,\theta,\varphi) = F(r)\,Y(\theta,\varphi), \qquad Y(\theta,\varphi) = P(\cos\theta)\,\Phi(\varphi)

3. Resulting differential equations:

\frac{d^2\Phi}{d\varphi^2} + m^2\Phi = 0

(1-s^2)\frac{d^2P}{ds^2} - 2s\frac{dP}{ds} + \left(l(l+1) - \frac{m^2}{1-s^2}\right)P = 0, \qquad s = \cos\theta

Solutions: sin/cos and the associated Legendre polynomials => spherical harmonics:

Y_l^m(\theta,\varphi) = (-1)^{(m+|m|)/2}\sqrt{\frac{(l-|m|)!}{(l+|m|)!}}\;e^{im\varphi}\,P_l^{|m|}(\cos\theta)

9.5.14 The dipole potential
We first direct our attention to the term \phi^{(2)} given by Eq. (9.5.103):

\phi^{(2)} \equiv -\sum_\alpha q_\alpha \sum_i x'_{\alpha,i}\frac{\partial}{\partial x_i}\left(\frac{1}{r}\right) = -\sum_\alpha q_\alpha\,\mathbf{r}'_\alpha\cdot\operatorname{grad}\left(\frac{1}{r}\right)    (9.5.120)

But the sum over the q_\alpha\mathbf{r}'_\alpha is just the dipole moment of the system:

\mathbf{p} = \sum_\alpha q_\alpha\,\mathbf{r}'_\alpha    (9.5.121)

Thus

\phi^{(2)} = -\mathbf{p}\cdot\operatorname{grad}\left(\frac{1}{r}\right) = -\mathbf{p}\cdot\nabla\frac{1}{r} = -\mathbf{p}\cdot\left(-\frac{1}{r^2}\,\frac{\mathbf{r}}{r}\right) = \mathbf{p}\cdot\frac{\mathbf{r}}{r^3}

\phi^{(2)} = \frac{\mathbf{p}\cdot\mathbf{e}_r}{r^2}    (9.5.122)
The electric dipole field vector \mathbf{E}^{(2)} may be calculated by taking the gradient of \phi^{(2)}:

\mathbf{E}^{(2)} = -\operatorname{grad}\phi^{(2)} = -\operatorname{grad}\left(\frac{\mathbf{p}\cdot\mathbf{r}}{r^3}\right)    (9.5.123)

Expanding the gradient of the product of two scalar functions,

\mathbf{E}^{(2)} = -\frac{1}{r^3}\operatorname{grad}(\mathbf{p}\cdot\mathbf{r}) - (\mathbf{p}\cdot\mathbf{r})\operatorname{grad}\left(\frac{1}{r^3}\right)    (9.5.124)

Now

\operatorname{grad}(\mathbf{p}\cdot\mathbf{r}) = \operatorname{grad}(p_x x + p_y y + p_z z) = \sum_i p_i\,\mathbf{e}_i = \mathbf{p}    (9.5.125)

and

\operatorname{grad}\left(\frac{1}{r^3}\right) = -\frac{3}{r^4}\nabla r = -\frac{3\mathbf{r}}{r^5}    (9.5.126)

Therefore

\mathbf{E}^{(2)} = -\frac{\mathbf{p}}{r^3} + (\mathbf{p}\cdot\mathbf{r})\frac{3\mathbf{r}}{r^5} = \frac{1}{r^5}\left[3(\mathbf{p}\cdot\mathbf{r})\mathbf{r} - \mathbf{p}\,r^2\right]    (9.5.127)
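The result (9.5.127) can be checked numerically by differentiating the dipole potential (9.5.122) with central finite differences (illustrative values of p and r; NumPy assumed):

```python
import numpy as np

# Numeric check of (9.5.122)/(9.5.127): E = -grad(p·r / r^3)
# equals [3(p·r)r - p r^2] / r^5, via central finite differences.
p = np.array([0.3, -1.2, 0.7])
r0 = np.array([1.0, 2.0, -1.5])

def phi(r):
    """Dipole potential (9.5.122), Gaussian units."""
    return p @ r / np.linalg.norm(r)**3

h = 1e-6
E_fd = np.array([-(phi(r0 + h*e) - phi(r0 - h*e)) / (2*h) for e in np.eye(3)])

rn = np.linalg.norm(r0)
E_formula = (3.0 * (p @ r0) * r0 - p * rn**2) / rn**5
assert np.allclose(E_fd, E_formula, atol=1e-6)
```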
For the quadrupole we have the following. Attention: from the Laplace equation, see (9.5.119):

\nabla\cdot\frac{\mathbf{r}}{r^3} = \mathbf{r}\cdot\nabla\frac{1}{r^3} + \frac{1}{r^3}\,\nabla\cdot\mathbf{r} = \mathbf{r}\cdot\left(-\frac{3}{r^4}\,\frac{\mathbf{r}}{r}\right) + \frac{1}{r^3}\cdot 3 = 0

\sum_{i,j}\frac{\partial^2}{\partial x_i\,\partial x_j}\left(\frac{1}{r}\right)\delta_{ij} = 0    (9.5.128)
Because this is a null quantity, any constant times this quantity may be added to \phi^{(4)} without altering the value. If we choose this constant to be -\frac{1}{6}\sum_\alpha q_\alpha r'^2_\alpha, then we have

\phi^{(4)} \equiv \frac{1}{6}\sum_\alpha q_\alpha \sum_{i,j}\left(3x'_{\alpha,i}x'_{\alpha,j} - r'^2_\alpha\,\delta_{ij}\right)\frac{\partial^2}{\partial x_i\,\partial x_j}\left(\frac{1}{r}\right)    (9.5.129)

We may write this equation as

\phi^{(4)} \equiv \frac{1}{6}\sum_{i,j} Q_{ij}\,\frac{\partial^2}{\partial x_i\,\partial x_j}\left(\frac{1}{r}\right) = \frac{1}{6}\sum_{i,j} Q_{ij}\left(\frac{3x_i x_j - r^2\delta_{ij}}{r^5}\right)    (9.5.130)
The nine quantities Q_{ij} form the quadrupole tensor:

Q_{ij} = \sum_\alpha q_\alpha\left(3x'_{\alpha,i}x'_{\alpha,j} - r'^2_\alpha\,\delta_{ij}\right)    (9.5.131)
Thus we have two factors. The quadrupole tensor, a sum over each charge in the system:

Q_{ij} = \begin{pmatrix}
\sum_\alpha q_\alpha(3x_\alpha^2 - r_\alpha^2) & 3\sum_\alpha q_\alpha x_\alpha y_\alpha & 3\sum_\alpha q_\alpha x_\alpha z_\alpha \\
3\sum_\alpha q_\alpha y_\alpha x_\alpha & \sum_\alpha q_\alpha(3y_\alpha^2 - r_\alpha^2) & 3\sum_\alpha q_\alpha y_\alpha z_\alpha \\
3\sum_\alpha q_\alpha z_\alpha x_\alpha & 3\sum_\alpha q_\alpha z_\alpha y_\alpha & \sum_\alpha q_\alpha(3z_\alpha^2 - r_\alpha^2)
\end{pmatrix}    (9.5.132)
and the other factor:

\begin{pmatrix}
\frac{3x^2-r^2}{r^5} & \frac{3xy}{r^5} & \frac{3xz}{r^5} \\
\frac{3yx}{r^5} & \frac{3y^2-r^2}{r^5} & \frac{3yz}{r^5} \\
\frac{3zx}{r^5} & \frac{3zy}{r^5} & \frac{3z^2-r^2}{r^5}
\end{pmatrix}    (9.5.133)
9.5.15 Green's formula
See [30] pag 1055 or [2] pag 597
GREEN'S THEOREM. Let C be a positively oriented, piecewise-smooth, simple closed curve in the plane and let D be the region bounded by C. If P and Q have continuous partial derivatives on an open region that contains D, then

\oint_C (P\,dx + Q\,dy) = \iint_D \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right)dx\,dy    (9.5.134)

Green's Theorem should be regarded as the counterpart of the Fundamental Theorem of Calculus for double integrals:

\int_a^b \frac{dF(x)}{dx}\,dx = F(b) - F(a)    (9.5.135)
To prove it we must show that

\oint_C P\,dx = -\iint_D \frac{\partial P}{\partial y}\,dx\,dy    (9.5.136)

and

\oint_C Q\,dy = \iint_D \frac{\partial Q}{\partial x}\,dx\,dy    (9.5.137)

Taking into account (9.5.135) we write (9.5.136) as

\iint_D \frac{\partial P}{\partial y}\,dx\,dy = \int_a^b\int_{g_1(x)}^{g_2(x)} \frac{\partial P}{\partial y}(x,y)\,dy\,dx = \int_a^b\left[P(x,g_2(x)) - P(x,g_1(x))\right]dx    (9.5.138)
Using the property

\int_{C_3} P(x,y)\,dx = -\int_{-C_3} P(x,y)\,dx = -\int_a^b P(x,g_2(x))\,dx = \int_b^a P(x,g_2(x))\,dx    (9.5.139)

let us compute the line integral breaking C up into C_1, C_2, C_3, C_4; we have

\oint_C P(x,y)\,dx = \int_{C_1} P\,dx + \int_{C_2} P\,dx + \int_{C_3} P\,dx + \int_{C_4} P\,dx = \int_a^b\left[P(x,g_1(x)) - P(x,g_2(x))\right]dx    (9.5.140)
(9.5.140) and (9.5.136) differ because we have computed \iint_D \frac{\partial P}{\partial y}\,dx\,dy and not -\iint_D \frac{\partial P}{\partial y}\,dx\,dy; taking the minus sign in (9.5.140) outside the brackets yields (9.5.136).
Fig. 37
Equation (9.5.137) can be proved in much the same way.
Applications: if in (9.5.134) we take

\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} = 1    (9.5.141)

we obtain the area

\oint_C (P\,dx + Q\,dy) = \iint_D dx\,dy = S    (9.5.142)

Thus for P(x,y) = -y and Q(x,y) = x:

\frac{\partial P}{\partial y} = -1, \qquad \frac{\partial Q}{\partial x} = 1

\oint_C (-y\,dx + x\,dy) = \iint_D\left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right)dx\,dy = \iint_D 2\,dx\,dy = 2S    (9.5.143)
Or for P(x,y) = 0 and Q(x,y) = x:

\frac{\partial P}{\partial y} = 0, \qquad \frac{\partial Q}{\partial x} = 1

or for P(x,y) = -y and Q(x,y) = 0. Altogether we have

S = \oint_C x\,dy = -\oint_C y\,dx = \frac{1}{2}\oint_C (x\,dy - y\,dx)    (9.5.144)
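The three area formulas in (9.5.144) can be verified numerically on any closed curve; a minimal sketch for an ellipse (NumPy assumed, curve discretized as a polygon so each line integral becomes a shoelace-type sum):

```python
import numpy as np

# Numeric check of the area formulas (9.5.144) on the ellipse
# x = a cos t, y = b sin t (counterclockwise); exact area S = pi*a*b.
a, b = 3.0, 2.0
t = np.linspace(0.0, 2.0*np.pi, 20001)
x, y = a*np.cos(t), b*np.sin(t)
dx, dy = np.diff(x), np.diff(y)
xm, ym = 0.5*(x[1:] + x[:-1]), 0.5*(y[1:] + y[:-1])   # segment midpoints

S = np.pi*a*b
S1 = np.sum(xm*dy)                    # S =  oint x dy
S2 = -np.sum(ym*dx)                   # S = -oint y dx
S3 = 0.5*np.sum(xm*dy - ym*dx)        # S = (1/2) oint (x dy - y dx)

assert abs(S1 - S) < 1e-5 and abs(S2 - S) < 1e-5 and abs(S3 - S) < 1e-5
```

All three line integrals agree with πab to the accuracy of the polygonal approximation of the curve.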

9.5.16 Magnetic multipoles: Savelyev proof
See the appendix of [31].
Important: the last integral in (9.5.173) can be solved in many ways; the easiest is to put \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} = 1 and integrate each term separately. Thus \frac{1}{2}\oint_C (x\,dy - y\,dx) can be evaluated in three ways:
either P(x,y) = 0 and Q(x,y) = x, so that \partial P/\partial y = 0, \partial Q/\partial x = 1;
or P(x,y) = -y and Q(x,y) = 0;
or P(x,y) = -y and Q(x,y) = x, so that \partial P/\partial y = -1, \partial Q/\partial x = 1.
Further, at pag 383-384 of [4]: the method of evaluating a line integral is to reduce it to a set of scalar integrals. It is usual to work in Cartesian coordinates, in which case d\mathbf{r} = dx\,\mathbf{i} + dy\,\mathbf{j} + dz\,\mathbf{k}. The line integral becomes simply

\int_C \phi\,d\mathbf{r} = \mathbf{i}\int_C \phi(x,y,z)\,dx + \mathbf{j}\int_C \phi(x,y,z)\,dy + \mathbf{k}\int_C \phi(x,y,z)\,dz    (9.5.145)

The three integrals on the RHS are ordinary scalar integrals that can be evaluated in the usual way once the path of integration C has been specified. Thus each integral can be calculated separately.
We now turn our attention to the representation by a multipole expansion of the magnetic effects of steady currents. We begin by writing the expression for the vector potential:

\mathbf{A}(\mathbf{r}) = \frac{1}{c}\int \frac{\mathbf{J}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^3r' = \frac{1}{c}\int_V \frac{\mathbf{J}}{r}\,dv'    (9.5.146)
As for the electric potential, we proceed in the same way; see (9.5.102), (9.5.103) and (9.5.120):

\mathbf{A}(\mathbf{r}) = \frac{1}{cr}\int_V \mathbf{J}(\mathbf{r}')\,dv' - \frac{1}{c}\int_V \mathbf{J}(\mathbf{r}')\left[\mathbf{r}'\cdot\operatorname{grad}\left(\frac{1}{r}\right)\right]dv' + \ldots    (9.5.147)

The quadrupole and higher-order terms may be treated in a manner analogous to that used in the electrostatic case; for simplicity, we omit these terms.
Let us first examine the monopole term:

\mathbf{A}^{(1)}(\mathbf{r}) = \frac{1}{cr}\int_V \mathbf{J}(\mathbf{r}')\,dv'    (9.5.148)
The current density in the system may be considered to arise from many closed filamentary current loops. Therefore, the volume integral of \mathbf{J} may be represented as the sum over all the line integrals of the filamentary currents around the individual loops:

\int_V \mathbf{J}(\mathbf{r}')\,dv' = \sum_\lambda I'_\lambda \oint d\mathbf{s}_\lambda    (9.5.149)

But because I'_\lambda is constant for any given loop, the right-hand side of this expression becomes

\sum_\lambda I'_\lambda \oint d\mathbf{s}_\lambda    (9.5.150)

which vanishes because the integrand is an exact differential. Thus

\mathbf{A}^{(1)}(\mathbf{r}) = 0    (9.5.151)

The second term may be transformed in a similar manner with the result

\mathbf{A}^{(2)}(\mathbf{r}) = -\frac{1}{c}\sum_\lambda I'_\lambda \oint \left(\mathbf{r}'\cdot\operatorname{grad}\left(\frac{1}{r}\right)\right)d\mathbf{s}_\lambda    (9.5.152)
In the last integral we have a dot product multiplied by another vector:

\oint (\mathbf{r}\cdot\mathbf{k})\,d\mathbf{s} = \oint (k_x x + k_y y + k_z z)(dx\,\mathbf{e}_x + dy\,\mathbf{e}_y + dz\,\mathbf{e}_z)
= \mathbf{e}_x\oint(k_x x\,dx + k_y y\,dx + k_z z\,dx) + \mathbf{e}_y\oint(k_x x\,dy + k_y y\,dy + k_z z\,dy) + \mathbf{e}_z\oint(k_x x\,dz + k_y y\,dz + k_z z\,dz)
Now taking into account Green's formula

\oint_C (P\,dx + Q\,dy) = \iint_D\left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right)dx\,dy \quad\text{with}\quad \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} = 1,

and choosing the right-hand orientation, we have:
for x and y: \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} = 1    (9.5.153)
for y and z: \frac{\partial Q}{\partial y} - \frac{\partial P}{\partial z} = 1    (9.5.154)
for z and x: \frac{\partial Q}{\partial z} - \frac{\partial P}{\partial x} = 1    (9.5.155)

Each surviving integral is therefore a projected area of the loop; the diagonal terms vanish (see (9.5.157)), and with all three projected areas equal to S for the loop of Fig. 38:

\oint(\mathbf{r}\cdot\mathbf{k})\,d\mathbf{s} = \mathbf{e}_x\left(0 + k_y\oint y\,dx + k_z\oint z\,dx\right) + \mathbf{e}_y\left(k_x\oint x\,dy + 0 + k_z\oint z\,dy\right) + \mathbf{e}_z\left(k_x\oint x\,dz + k_y\oint y\,dz + 0\right)
= -S\,\mathbf{e}_x(k_y-k_z) - S\,\mathbf{e}_y(k_z-k_x) - S\,\mathbf{e}_z(k_x-k_y)

while for the companion integral

\oint \mathbf{r}\,(\mathbf{k}\cdot d\mathbf{s}) = -\oint(\mathbf{r}\cdot\mathbf{k})\,d\mathbf{s} = S\,\mathbf{e}_x(k_y-k_z) + S\,\mathbf{e}_y(k_z-k_x) + S\,\mathbf{e}_z(k_x-k_y)    (9.5.156)
The term \mathbf{e}_x k_x\oint x\,dx = 0 because, from Green's formula (9.5.136):

\oint_C x\,dx = -\iint_D \frac{\partial P}{\partial y}\,dx\,dy = -\iint_D 0\,dx\,dy = 0    (9.5.157)

Fig. 38 (axes x, y, z; loop with unit normal \mathbf{n})

The proof for the second part of (9.5.177) is the same.
Also, because of the total differential,

\oint \mathbf{e}_x k_x\,x\,dx = \mathbf{e}_x k_x\oint x\,dx = \mathbf{e}_x k_x\oint d\left(\frac{x^2}{2}\right), \qquad \oint d\left(\frac{x^2}{2}\right) = \left.\frac{x^2}{2}\right|_1^2 + \left.\frac{x^2}{2}\right|_2^1 = 0    (9.5.158)

because the loop is closed!
See Savelyev [15] pag 384 Vol I:

\oint \mathbf{r}\cdot d\mathbf{l} = \oint (x\,dx + y\,dy + z\,dz) = \frac{1}{2}\oint d(x^2+y^2+z^2) = 0

Savelyev wrote that this is zero because the integrand is a total differential, but he does not explain why; the explanation is (9.5.158).
At pag 491 of [31]: with the projections of the unit vector \mathbf{n} = 1\,\mathbf{e}_x + 1\,\mathbf{e}_y + 1\,\mathbf{e}_z and the vector

\mathbf{k} = \operatorname{grad}\left(\frac{1}{r}\right) = k_x\mathbf{e}_x + k_y\mathbf{e}_y + k_z\mathbf{e}_z

we have:

[\mathbf{n}\times\mathbf{k}] = \begin{vmatrix}\mathbf{e}_x & \mathbf{e}_y & \mathbf{e}_z \\ 1 & 1 & 1 \\ k_x & k_y & k_z\end{vmatrix} = \mathbf{e}_x(k_z-k_y) + \mathbf{e}_y(k_x-k_z) + \mathbf{e}_z(k_y-k_x)

[\mathbf{k}\times\mathbf{n}] = \begin{vmatrix}\mathbf{e}_x & \mathbf{e}_y & \mathbf{e}_z \\ k_x & k_y & k_z \\ 1 & 1 & 1\end{vmatrix} = \mathbf{e}_x(k_y-k_z) + \mathbf{e}_y(k_z-k_x) + \mathbf{e}_z(k_x-k_y)    (9.5.159)
Substituting (9.5.159) in (9.5.156) we get

\oint (\mathbf{r}\cdot\mathbf{k})\,d\mathbf{s} = -[\mathbf{k}, S\mathbf{n}] = -\left[\operatorname{grad}\left(\frac{1}{r}\right), S\mathbf{n}\right]    (9.5.160)

But

\operatorname{grad}\left(\frac{1}{r}\right) = \nabla\frac{1}{r} = -\frac{1}{r^2}\,\frac{\mathbf{r}}{r} = -\frac{\mathbf{r}}{r^3}    (9.5.161)

It follows that

\mathbf{A}^{(2)}(\mathbf{r}) = -\frac{1}{c}\sum_\lambda I'_\lambda\oint\left[\mathbf{r}'\cdot\operatorname{grad}\left(\frac{1}{r}\right)\right]d\mathbf{s}_\lambda = \left[-\frac{\mathbf{r}}{r^3}\times\frac{IS}{c}\,\mathbf{n}\right] = \frac{[\mathbf{m}\times\mathbf{r}]}{r^3}    (9.5.162)

where

\mathbf{m} = \frac{IS}{c}\,\mathbf{n}    (9.5.163)
9.5.17 Magnetic multipoles: Heald proof
See pag 58 of [28].
We now turn our attention to the representation by a multipole expansion of the magnetic effects of steady currents. We begin by writing the expression for the vector potential:

\mathbf{A}(\mathbf{r}) = \frac{1}{c}\int \frac{\mathbf{J}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^3r' = \frac{1}{c}\int_V \frac{\mathbf{J}}{r}\,dv'    (9.5.164)

As for the electric potential, see (9.5.102), (9.5.103) and (9.5.120):

\mathbf{A}(\mathbf{r}) = \frac{1}{cr}\int_V \mathbf{J}(\mathbf{r}')\,dv' - \frac{1}{c}\int_V \mathbf{J}(\mathbf{r}')\left[\mathbf{r}'\cdot\operatorname{grad}\left(\frac{1}{r}\right)\right]dv' + \ldots    (9.5.165)

The quadrupole and higher-order terms may be treated in a manner analogous to that used in the electrostatic case; for simplicity, we omit these terms.
Let us first examine the monopole term:

\mathbf{A}^{(1)}(\mathbf{r}) = \frac{1}{cr}\int_V \mathbf{J}(\mathbf{r}')\,dv'    (9.5.166)

The current density in the system may be considered to arise from many closed filamentary current loops. Therefore, the volume integral of \mathbf{J} may be represented as the sum over all the line integrals of the filamentary currents around the individual loops:

\int_V \mathbf{J}(\mathbf{r}')\,dv' = \sum_\lambda I'_\lambda \oint d\mathbf{s}_\lambda    (9.5.167)

But because I'_\lambda is constant for any given loop, the right-hand side becomes

\sum_\lambda I'_\lambda \oint d\mathbf{s}_\lambda    (9.5.168)

which vanishes because the integrand is an exact differential. Thus

\mathbf{A}^{(1)}(\mathbf{r}) = 0    (9.5.169)

The second term may be transformed in a similar manner with the result

\mathbf{A}^{(2)}(\mathbf{r}) = -\frac{1}{c}\sum_\lambda I'_\lambda \oint\left(\mathbf{r}'\cdot\operatorname{grad}\left(\frac{1}{r}\right)\right)d\mathbf{s}_\lambda    (9.5.170)

If a surface S (with elements d\mathbf{a}) is bounded by a closed curve \Gamma (with elements d\mathbf{s}), then

\oint_\Gamma \mathbf{r}'\times d\mathbf{s} = 2\int_S d\mathbf{a} = 2\mathbf{S}    (9.5.171)
Proof (see [31] pag 490 and [15] pag 192 vol II):

\mathbf{r}\times d\mathbf{s} = \begin{vmatrix}\mathbf{e}_x & \mathbf{e}_y & \mathbf{e}_z \\ x & y & z \\ dx & dy & dz\end{vmatrix} = \mathbf{e}_x(y\,dz - z\,dy) + \mathbf{e}_y(z\,dx - x\,dz) + \mathbf{e}_z(x\,dy - y\,dx)

or in integral form

\oint_\Gamma \mathbf{r}\times d\mathbf{s} = \mathbf{e}_x\oint(y\,dz - z\,dy) + \mathbf{e}_y\oint(z\,dx - x\,dz) + \mathbf{e}_z\oint(x\,dy - y\,dx)    (9.5.172)

Now applying Green's formula (9.5.144) three times we have

\oint_\Gamma \mathbf{r}'\times d\mathbf{s} = \mathbf{e}_x\,2S + \mathbf{e}_y\,2S + \mathbf{e}_z\,2S = 2\mathbf{S}    (9.5.173)
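The identity (9.5.171)/(9.5.173) is easy to test numerically: discretize a planar loop into straight segments and accumulate r x ds (illustrative circle, NumPy assumed; the result is independent of where the origin sits, since shifting r by a constant adds r0 x ∮ds = 0):

```python
import numpy as np

# Numeric check of (9.5.171): for a closed planar loop, oint r x ds = 2 S n.
R = 1.5
t = np.linspace(0.0, 2.0*np.pi, 20001)
# circle of radius R in the plane z = 0.7, normal n = e_z, area S = pi R^2
pts = np.stack([R*np.cos(t), R*np.sin(t), np.full_like(t, 0.7)], axis=1)

d = np.diff(pts, axis=0)                 # segment vectors ds
mid = 0.5*(pts[1:] + pts[:-1])           # segment midpoints (exact for r x ds)
total = np.cross(mid, d).sum(axis=0)

expected = np.array([0.0, 0.0, 2.0*np.pi*R**2])   # 2 S n
assert np.allclose(total, expected, atol=1e-4)
```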
In fact one of the definitions of the magnetic dipole moment is (see [15] pag 192 vol II); it does not depend on \mathbf{r}:

\mathbf{m} = \frac{1}{2c}\int \mathbf{r}'\times\mathbf{j}\,d^3r = \frac{1}{2c}\,I\oint_\Gamma \mathbf{r}'\times d\mathbf{s}, \qquad \mathbf{m} = \frac{IS}{c}\,\mathbf{n} \quad\text{(see pag 60 [28])}    (9.5.174)
(9.5.174)
The cross-product of the integrand r ' d s with an arbitrary vector k can be expanded by the BAC-
CAB rule
(r ' d s)k=d s( r 'k)r ' (d sk) (9.5.175)
Using again formula (or for other demonstration see [15] pag 190 vol II)

k( r ' d s)=
e
x
( k
y

( xdyydx)k
z

( zdxxdz))+
e
y
(k
z

( y dzzdy)k
x

( xdyydx))+
e
z
( k
x

( zdxxdz)k
y

( y dzzdy))=
2Se
x
(k
y
k
z
)+2Se
y
(k
z
k
x
)+2Se
z
( k
x
k
y
)
(9.5.176)
So in fact we have
k(r ' d s)=r ' (kd s )d s( kr ' )=2r ' ( kd s)=2d s(kr ' ) (9.5.177)
Proof:

\oint \mathbf{r}\,(\mathbf{k}\cdot d\mathbf{s}) = \oint (x\,\mathbf{e}_x + y\,\mathbf{e}_y + z\,\mathbf{e}_z)(k_x\,dx + k_y\,dy + k_z\,dz)
= \mathbf{e}_x\oint(k_x x\,dx + k_y x\,dy + k_z x\,dz) + \mathbf{e}_y\oint(k_x y\,dx + k_y y\,dy + k_z y\,dz) + \mathbf{e}_z\oint(k_x z\,dx + k_y z\,dy + k_z z\,dz)

Taking into account formulas (9.5.153)-(9.5.155) and Fig. 38, the diagonal terms vanish and the projected areas give

\oint \mathbf{r}\,(\mathbf{k}\cdot d\mathbf{s}) = \mathbf{e}_x\left(0 + k_y\oint x\,dy + k_z\oint x\,dz\right) + \mathbf{e}_y\left(k_x\oint y\,dx + 0 + k_z\oint y\,dz\right) + \mathbf{e}_z\left(k_x\oint z\,dx + k_y\oint z\,dy + 0\right)
= S\,\mathbf{e}_x(k_y - k_z) + S\,\mathbf{e}_y(k_z - k_x) + S\,\mathbf{e}_z(k_x - k_y)    (9.5.178)
and \mathbf{e}_x k_x\oint x\,dx = 0 because, from Green's formula (9.5.136):

\oint_C x\,dx = -\iint_D \frac{\partial P}{\partial y}\,dx\,dy = -\iint_D 0\,dx\,dy = 0    (9.5.179)

The proof for the second part of (9.5.177) is (9.5.156), which carries the opposite sign.
With these ingredients, and identifying \mathbf{k} with \operatorname{grad}(1/r), we can write the integral as

\mathbf{A}^{(2)}(\mathbf{r}) = -\frac{1}{c}\sum_\lambda I'_\lambda\oint\left[\mathbf{r}'\cdot\operatorname{grad}\left(\frac{1}{r}\right)\right]d\mathbf{s}_\lambda = \frac{1}{2}\operatorname{grad}\left(\frac{1}{r}\right)\times\frac{I}{c}\oint \mathbf{r}'\times d\mathbf{s} = \left[-\frac{\mathbf{r}}{r^3}\times\frac{IS}{c}\,\mathbf{n}\right] = \left[\nabla\left(\frac{1}{r}\right)\times\mathbf{m}\right] = -\left[\mathbf{m}\times\nabla\left(\frac{1}{r}\right)\right] = \frac{[\mathbf{m}\times\mathbf{r}]}{r^3}    (9.5.180)
\mathbf{B}^{(2)} = \operatorname{curl}\mathbf{A}^{(2)} = [\nabla\times\mathbf{A}^{(2)}] = \operatorname{curl}\frac{[\mathbf{m}\times\mathbf{r}]}{r^3} = \left[\nabla\times\frac{[\mathbf{m}\times\mathbf{r}]}{r^3}\right]

See [28] pag 61: expanding the curl of the product of a scalar and a vector,

\mathbf{B}^{(2)} = \left(\frac{1}{r^3}\right)\operatorname{curl}(\mathbf{m}\times\mathbf{r}) - (\mathbf{m}\times\mathbf{r})\times\operatorname{grad}\left(\frac{1}{r^3}\right)

Then, expanding by the BAC-CAB rule and using

\operatorname{grad}\left(\frac{1}{r^3}\right) = \nabla r\,\frac{\partial}{\partial r}\left(\frac{1}{r^3}\right) = -\frac{3}{r^4}\,\frac{\mathbf{r}}{r} = -\frac{3\mathbf{r}}{r^5},

we have
\mathbf{B}^{(2)} = (\operatorname{div}\mathbf{r})\,\frac{\mathbf{m}}{r^3} - \left(\frac{\mathbf{m}}{r^3}\cdot\operatorname{grad}\right)\mathbf{r} + \left(\mathbf{m}\cdot\frac{3\mathbf{r}}{r^5}\right)\mathbf{r} - \left(\mathbf{r}\cdot\frac{3\mathbf{r}}{r^5}\right)\mathbf{m}

and finally, using

\operatorname{div}\mathbf{r} = \frac{\partial x}{\partial x} + \frac{\partial y}{\partial y} + \frac{\partial z}{\partial z} = 3

and

(\mathbf{m}\cdot\operatorname{grad})\,\mathbf{r} = \left(m_x\frac{\partial}{\partial x} + m_y\frac{\partial}{\partial y} + m_z\frac{\partial}{\partial z}\right)(x\,\mathbf{e}_x + y\,\mathbf{e}_y + z\,\mathbf{e}_z) = \sum_k \mathbf{e}_k\sum_i m_i\,\delta_{ik} = \sum_k \mathbf{e}_k m_k = \mathbf{m},

we obtain

\mathbf{B}^{(2)} = \frac{1}{r^5}\left[3(\mathbf{m}\cdot\mathbf{r})\mathbf{r} - \mathbf{m}\,r^2\right]    (9.5.181)
See [28] pag 61; but better is [15] pag 192 vol II:

[\nabla\times[\mathbf{a}\times\mathbf{b}]] = [\nabla_a\times[\mathbf{a}\times\mathbf{b}]] + [\nabla_b\times[\mathbf{a}\times\mathbf{b}]] = \mathbf{a}(\nabla_a\cdot\mathbf{b}) - \mathbf{b}(\nabla_a\cdot\mathbf{a}) + \mathbf{a}(\nabla_b\cdot\mathbf{b}) - \mathbf{b}(\nabla_b\cdot\mathbf{a})    (9.5.182)

Recall (8.3.4): the cross product is anticommutative,

\mathbf{a}\times\mathbf{b} = -\mathbf{b}\times\mathbf{a}    (9.5.183)

and (8.2.10): the dot product is commutative,

\mathbf{a}\cdot\mathbf{b} = \mathbf{b}\cdot\mathbf{a}    (9.5.184)

\nabla_a acts on the vector \mathbf{a} and \nabla_b on \mathbf{b}; thus (9.5.182) becomes

[\nabla\times[\mathbf{a}\times\mathbf{b}]] = (\mathbf{b}\cdot\nabla_a)\mathbf{a} - \mathbf{b}(\nabla_a\cdot\mathbf{a}) + \mathbf{a}(\nabla_b\cdot\mathbf{b}) - (\mathbf{a}\cdot\nabla_b)\mathbf{b} = (\mathbf{b}\cdot\nabla)\mathbf{a} - \mathbf{b}(\nabla\cdot\mathbf{a}) + \mathbf{a}(\nabla\cdot\mathbf{b}) - (\mathbf{a}\cdot\nabla)\mathbf{b}    (9.5.185)
Using (9.5.185):

\mathbf{B} = [\nabla\times\mathbf{A}] = \left[\nabla\times\left[\mathbf{m}\times\frac{\mathbf{r}}{r^3}\right]\right] = \left(\frac{\mathbf{r}}{r^3}\cdot\nabla\right)\mathbf{m} - \frac{\mathbf{r}}{r^3}(\nabla\cdot\mathbf{m}) + \mathbf{m}\left(\nabla\cdot\frac{\mathbf{r}}{r^3}\right) - (\mathbf{m}\cdot\nabla)\frac{\mathbf{r}}{r^3}    (9.5.186)

The vector \mathbf{m} does not depend on \mathbf{r} (see (9.5.174)), therefore the first and second terms vanish. We know that

\nabla\cdot\frac{\mathbf{r}}{r^3} = \mathbf{r}\cdot\nabla\frac{1}{r^3} + \frac{1}{r^3}\,\nabla\cdot\mathbf{r} = \mathbf{r}\cdot\left(-\frac{3}{r^4}\,\frac{\mathbf{r}}{r}\right) + \frac{1}{r^3}\cdot 3 = 0,

so the third term vanishes as well. Taking the product rule

(\mathbf{a}\cdot\nabla)(\varphi\,\mathbf{b}) = \mathbf{b}\,(\mathbf{a}\cdot\nabla\varphi) + \varphi\,(\mathbf{a}\cdot\nabla)\mathbf{b}    (9.5.187)
we get

(\mathbf{m}\cdot\nabla)\frac{\mathbf{r}}{r^3} = \mathbf{r}\left(\mathbf{m}\cdot\nabla\frac{1}{r^3}\right) + \frac{1}{r^3}(\mathbf{m}\cdot\nabla)\mathbf{r} = \mathbf{r}\left(-\mathbf{m}\cdot\frac{3}{r^4}\,\frac{\mathbf{r}}{r}\right) + \frac{1}{r^3}\,\mathbf{m} = -\frac{3(\mathbf{m}\cdot\mathbf{r})\mathbf{r}}{r^5} + \frac{\mathbf{m}}{r^3}    (9.5.188)

where we have used

(\mathbf{m}\cdot\operatorname{grad})\,\mathbf{r} = \left(m_x\frac{\partial}{\partial x} + m_y\frac{\partial}{\partial y} + m_z\frac{\partial}{\partial z}\right)(x\,\mathbf{e}_x + y\,\mathbf{e}_y + z\,\mathbf{e}_z) = \sum_k \mathbf{e}_k\sum_i m_i\,\delta_{ik} = \sum_k \mathbf{e}_k m_k = \mathbf{m}.

Finally, since \mathbf{B}^{(2)} = -(\mathbf{m}\cdot\nabla)\frac{\mathbf{r}}{r^3}, we obtain

\mathbf{B}^{(2)} = \frac{1}{r^5}\left[3(\mathbf{m}\cdot\mathbf{r})\mathbf{r} - \mathbf{m}\,r^2\right]    (9.5.189)
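As with the electric dipole, (9.5.189) can be verified by numerically taking the curl of A = (m x r)/r³ with central finite differences (illustrative values of m and r; NumPy assumed):

```python
import numpy as np

# Numeric check of (9.5.189): curl of A = (m x r)/r^3 gives
# B = [3(m·r)r - m r^2]/r^5, via central finite differences.
m = np.array([0.5, 1.0, -0.8])
r0 = np.array([1.2, -0.7, 2.0])

def A(r):
    """Dipole vector potential (9.5.180), Gaussian units."""
    return np.cross(m, r) / np.linalg.norm(r)**3

h = 1e-5
J = np.zeros((3, 3))                     # J[i, j] = dA_i/dx_j
for j, e in enumerate(np.eye(3)):
    J[:, j] = (A(r0 + h*e) - A(r0 - h*e)) / (2*h)
B_fd = np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

rn = np.linalg.norm(r0)
B_formula = (3.0 * (m @ r0) * r0 - m * rn**2) / rn**5
assert np.allclose(B_fd, B_formula, atol=1e-7)
```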
9.5.18 Magnetic multipoles: Jackson proof
In the preceding section I proved the formula

\mathbf{k}\times\oint(\mathbf{r}'\times d\mathbf{s}) = \oint\left[\mathbf{r}'(\mathbf{k}\cdot d\mathbf{s}) - d\mathbf{s}(\mathbf{k}\cdot\mathbf{r}')\right] = 2\oint \mathbf{r}'(\mathbf{k}\cdot d\mathbf{s}) = -2\oint d\mathbf{s}(\mathbf{k}\cdot\mathbf{r}')    (9.5.190)

(\mathbf{r}'\times d\mathbf{s})\times\mathbf{k} = d\mathbf{s}\,(\mathbf{r}'\cdot\mathbf{k}) - \mathbf{r}'\,(d\mathbf{s}\cdot\mathbf{k})

We can treat this in a more general way; see Savelyev pag 190 [15] and pag 274 Jackson [51]. Jackson says that the vector potential (9.5.152) can be written as the sum of two terms, one of which gives a transverse magnetic induction and the other of which gives a transverse electric field. Thus we have a symmetric and an antisymmetric part:

\frac{1}{c}(\mathbf{n}\cdot\mathbf{x})\mathbf{J} = \frac{1}{2c}\left[(\mathbf{n}\cdot\mathbf{x})\mathbf{J} + (\mathbf{n}\cdot\mathbf{J})\mathbf{x}\right] + \frac{1}{2c}(\mathbf{x}\times\mathbf{J})\times\mathbf{n}    (9.5.191)

where the second part is recognizable as involving the magnetization due to the current \mathbf{J}:

\mathbf{M} = \frac{1}{2c}(\mathbf{x}\times\mathbf{J})    (9.5.192)

The first term is related to the electric quadrupole moment density, which I have calculated by the electric dipole expansion and which can also be calculated as an expansion of the magnetization current \mathbf{J}. Jackson calculates the vector potential considering only the second term of (9.5.191).
9.5.19 Spin-spin interactions
The quantity

\mu_B = \frac{e\hbar}{2m} = 0.927\times10^{-23}\ \text{J/T} = 0.927\times10^{-20}\ \text{erg/Gs}    (9.5.193)

is called the Bohr magneton. In Gaussian units

\mu_B = \frac{e\hbar}{2m_e c}

The magnetic spin moment of the electron is (see [36] pag 68):

\mathbf{m}_s = -2\frac{\mu_B}{\hbar}\,\mathbf{s}    (9.5.194)
The Hamiltonian term for the electron-electron dipolar interaction is

H = -\mathbf{m}_s\cdot\mathbf{B}    (9.5.195)

Substituting (9.5.189) and (9.5.194):

H = 2\frac{\mu_B}{\hbar}\,\mathbf{s}\cdot\frac{1}{r^5}\left[3(\mathbf{m}\cdot\mathbf{r})\mathbf{r} - \mathbf{m}\,r^2\right]    (9.5.196)

\mathbf{s} must be dot-multiplied only with the vectors \mathbf{r} and \mathbf{m}:

H = 2\frac{\mu_B}{\hbar}\,\frac{1}{r^5}\left[3(\mathbf{m}\cdot\mathbf{r})(\mathbf{r}\cdot\mathbf{s}) - (\mathbf{m}\cdot\mathbf{s})\,r^2\right]    (9.5.197)

And for two spins (the moment \mathbf{m} being that of the partner spin) we have

H = 4\frac{\mu_B^2}{\hbar^2}\,\frac{1}{r^5}\left[(\mathbf{s}_1\cdot\mathbf{s}_2)r^2 - 3(\mathbf{s}_1\cdot\mathbf{r})(\mathbf{s}_2\cdot\mathbf{r})\right] = g^2\frac{\mu_B^2}{\hbar^2}\,\frac{1}{r^5}\left[(\mathbf{s}_1\cdot\mathbf{s}_2)r^2 - 3(\mathbf{s}_1\cdot\mathbf{r})(\mathbf{s}_2\cdot\mathbf{r})\right] \quad\text{(Gauss)}

H = g^2\frac{\mu_B^2}{\hbar^2}\,\frac{\mu_0}{4\pi}\,\frac{1}{r^5}\left[(\mathbf{s}_1\cdot\mathbf{s}_2)r^2 - 3(\mathbf{s}_1\cdot\mathbf{r})(\mathbf{s}_2\cdot\mathbf{r})\right] \quad\text{(SI)}    (9.5.198)

where g is the electron spin g-factor.
g is the spin g factor pin
9.5.20 Hyperfine interactions (spin -nuclear)
The kinetic energy of the electron is:
(see [36] pag68):
1
2m
( p+e A)
2
=
p
2
2m
+
e
m
pA+O( A
2
) (9.5.199)
Orbital hyperfine interaction:
H
HF
(1)
=
e
m
p A=

0
4
e
m
1
r
3
p(mr)=

0
4
e
m
1
r
3
=m(rp)
(9.5.200)
We made a cyclic permutation for mixt product see(8.4.3) :Orbital hyperfine interaction:
H
HF
(1)
=2

1
r
3

0
4
( ml )
(9.5.201)
142
Expressing the nuclear moment in terms of the nuclear spin \mathbf{I}:

\mathbf{m} = g_N\,\mu_N\,\mathbf{I}    (9.5.202)

where \mu_N is the nuclear magneton. Then

H^{(1)}_{HF} = 2 g_N\,\mu_N\,\mu_B\,\frac{1}{r^3}\,\frac{\mu_0}{4\pi}\,(\mathbf{I}\cdot\mathbf{l})    (9.5.203)

The dipolar hyperfine interaction is

H^{(2)}_{HF} = 2\frac{\mu_B}{\hbar}\,\frac{\mu_0}{4\pi}\,\frac{1}{r^5}\left[3(\mathbf{m}\cdot\mathbf{r})(\mathbf{r}\cdot\mathbf{s}) - (\mathbf{m}\cdot\mathbf{s})\,r^2\right]    (9.5.204)

Finally the hyperfine interaction can be written

H^{(1)}_{HF} + H^{(2)}_{HF} = 2 g_N\,\mu_N\,\mu_B\,\frac{1}{r^3}\,\frac{\mu_0}{4\pi}\left[(\mathbf{I}\cdot\mathbf{l}) + \frac{3(\mathbf{s}\cdot\mathbf{r})(\mathbf{I}\cdot\mathbf{r})}{r^2} - (\mathbf{s}\cdot\mathbf{I})\right]    (9.5.205)
9.5.21 Organic Triplet State Molecules and the Dipolar Interaction
See [32] pag 117, also [33],[34],[35]
The Hamiltonian term for the electron-electron dipolar interaction is

H = g^2\frac{\mu_B^2}{\hbar^2}\,\frac{1}{r^5}\left[(\mathbf{s}_1\cdot\mathbf{s}_2)r^2 - 3(\mathbf{s}_1\cdot\mathbf{r})(\mathbf{s}_2\cdot\mathbf{r})\right]    (9.5.206)

where \mathbf{r} is the vector pointing from electron 1 to electron 2. We have used a lower-case \mathbf{s} for the one-electron spin operators, reserving upper-case \mathbf{S} for the total electron spin operators. The dot products can be expanded to give:
9.5.22 Spin-orbit interaction
From electrodynamics, we know that the magnetic moment (in e.m. units) of a current in a single loop is equal to the area A of the loop, times the current (e.s. units), divided by c.
At this point, we ought to remember that the electron has a spin. The orbital angular momentum has associated with it a magnetic moment; in the same way a magnetic moment is associated with the spin angular momentum. For the orbital angular momentum, taking the sign of the electron charge into account:
9.5.23 The Zeeman interactions
See [37] pag 157 and [38] pag 278
9.5.24 Legendre:
Chapter 10 Radiation
Paragraph 10.1 Novotny & Hecht derivation
10.1.1 Macroscopic electrodynamics
See chapter 2 [44].
In macroscopic electrodynamics the singular character of charges and their associated currents is avoided by considering charge densities \rho and current densities \mathbf{j}. In differential form and in SI units the macroscopic Maxwell's equations have the form

\nabla\times\mathbf{E}(\mathbf{r},t) = -\frac{\partial\mathbf{B}(\mathbf{r},t)}{\partial t}    (10.1.1)

\nabla\times\mathbf{H}(\mathbf{r},t) = \frac{\partial\mathbf{D}(\mathbf{r},t)}{\partial t} + \mathbf{j}(\mathbf{r},t)    (10.1.2)

\nabla\cdot\mathbf{D}(\mathbf{r},t) = \rho(\mathbf{r},t)    (10.1.3)

\nabla\cdot\mathbf{B}(\mathbf{r},t) = 0    (10.1.4)
where \mathbf{E} denotes the electric field, \mathbf{D} the electric displacement, \mathbf{H} the magnetic field, \mathbf{B} the magnetic induction, \mathbf{j} the current density, and \rho the charge density. The components of these vector and scalar fields constitute a set of 16 unknowns. Depending on the considered medium, the number of unknowns can be reduced considerably. For example, in linear, isotropic, homogeneous and source-free media the electromagnetic field is entirely defined by two scalar fields. Maxwell's equations combine and complete the laws formerly established by Faraday, Ampère, Gauss, Poisson, and others. Since Maxwell's equations are differential equations they do not account for any fields that are constant in space and time. Any such field can therefore be added to the fields. It has to be emphasized that the concept of fields was introduced to explain the transmission of forces from a source to a receiver. The physical observables are therefore forces, whereas the fields are definitions introduced to explain the troublesome phenomenon of action at a distance. Notice that the macroscopic Maxwell's equations deal with fields that are local spatial averages over microscopic fields associated with discrete charges. Hence, the microscopic nature of matter is not included in the macroscopic fields. Charge and current densities are considered as continuous functions of space. In order to describe the fields on an atomic scale it is necessary to use the microscopic Maxwell's equations, which consider all matter to be made of charged and uncharged particles.
The conservation of charge is implicitly contained in Maxwell's equations. Taking the divergence of Eq. (10.1.2), noting that \nabla\cdot(\nabla\times\mathbf{H}) is identically zero, and substituting Eq. (10.1.3) for \nabla\cdot\mathbf{D}, one obtains the continuity equation

\nabla\cdot\mathbf{j}(\mathbf{r},t) + \frac{\partial\rho(\mathbf{r},t)}{\partial t} = 0    (10.1.5)
The electromagnetic properties of the medium are most commonly discussed in terms of the macroscopic polarization \mathbf{P} and magnetization \mathbf{M} according to

\mathbf{D}(\mathbf{r},t) = \varepsilon_0\,\mathbf{E}(\mathbf{r},t) + \mathbf{P}(\mathbf{r},t)    (10.1.6)

\mathbf{H}(\mathbf{r},t) = \mu_0^{-1}\,\mathbf{B}(\mathbf{r},t) - \mathbf{M}(\mathbf{r},t)    (10.1.7)

where \varepsilon_0 and \mu_0 are the permittivity and the permeability of vacuum, respectively. These equations do not impose any conditions on the medium and are therefore always valid.
10.1.2 Wave equations
After substituting the fields \mathbf{D} and \mathbf{B} in Maxwell's curl equations by the expressions (10.1.6) and (10.1.7) and combining the two resulting equations we obtain the inhomogeneous wave equations

\nabla\times\nabla\times\mathbf{E} + \frac{1}{c^2}\frac{\partial^2\mathbf{E}}{\partial t^2} = -\mu_0\frac{\partial}{\partial t}\left(\mathbf{j} + \frac{\partial\mathbf{P}}{\partial t} + \nabla\times\mathbf{M}\right)    (10.1.8)

\nabla\times\nabla\times\mathbf{H} + \frac{1}{c^2}\frac{\partial^2\mathbf{H}}{\partial t^2} = \nabla\times\mathbf{j} + \nabla\times\frac{\partial\mathbf{P}}{\partial t} - \frac{1}{c^2}\frac{\partial^2\mathbf{M}}{\partial t^2}    (10.1.9)

The constant c = (\varepsilon_0\mu_0)^{-1/2} was introduced and is known as the vacuum speed of light. The expression in the brackets of Eq. (10.1.8) can be associated with the total current density

\mathbf{j}_t = \mathbf{j}_s + \mathbf{j}_c + \frac{\partial\mathbf{P}}{\partial t} + \nabla\times\mathbf{M}    (10.1.10)

where \mathbf{j} has been split into a source current density \mathbf{j}_s and an induced conduction current density \mathbf{j}_c. The terms \partial\mathbf{P}/\partial t and \nabla\times\mathbf{M} are recognized as the polarization current density and the magnetization current density, respectively.
10.1.3 Constitutive relations
Maxwell's equations define the fields that are generated by currents and charges in matter. However, they do not describe how these currents and charges are generated. Thus, to find a self-consistent solution for the electromagnetic field, Maxwell's equations must be supplemented by relations that describe the behavior of matter under the influence of the fields. These material equations are known as constitutive relations. In a non-dispersive linear and isotropic medium they have the form

\mathbf{D} = \varepsilon_0\varepsilon\,\mathbf{E} \qquad (\mathbf{P} = \varepsilon_0\chi_e\,\mathbf{E})    (10.1.11)

\mathbf{B} = \mu_0\mu\,\mathbf{H} \qquad (\mathbf{M} = \chi_m\,\mathbf{H})    (10.1.12)

\mathbf{j}_c = \sigma\,\mathbf{E}    (10.1.13)

with \chi_e and \chi_m denoting the electric and magnetic susceptibility, respectively. For nonlinear media, the right-hand sides can be supplemented by terms of higher power. Anisotropic media can be considered using tensorial forms for \varepsilon and \mu. In order to account for general bianisotropic media, additional terms relating \mathbf{D} and \mathbf{E} to both \mathbf{B} and \mathbf{H} have to be introduced. For such complex media, solutions to the wave equations can be found for very special situations only. The constitutive relations given above account for inhomogeneous media if the material parameters \varepsilon, \mu and \sigma are functions of space. The medium is called temporally dispersive if the material parameters are functions of frequency, and spatially dispersive if the constitutive relations are convolutions over space. An electromagnetic field in a linear medium can be written as a superposition of monochromatic fields of the form

\mathbf{E}(\mathbf{r},t) = \mathbf{E}(\mathbf{k},\omega)\cos(\mathbf{k}\cdot\mathbf{r} - \omega t)    (10.1.14)

where \mathbf{k} and \omega are the wave vector and the angular frequency, respectively. In its most general form the amplitude of the induced displacement \mathbf{D}(\mathbf{r},t) can be written as (in an anisotropic medium the dielectric constant \varepsilon is a second-rank tensor)

\mathbf{D}(\mathbf{k},\omega) = \varepsilon_0\,\varepsilon(\mathbf{k},\omega)\,\mathbf{E}(\mathbf{k},\omega)    (10.1.15)

Since \mathbf{E}(\mathbf{k},\omega) is equivalent to the Fourier transform \hat{\mathbf{E}} of an arbitrary time-dependent \mathbf{E}(\mathbf{r},t), we can apply the inverse Fourier transform to Eq. (10.1.15) and obtain

\mathbf{D}(\mathbf{r},t) = \varepsilon_0\int \tilde{\varepsilon}(\mathbf{r}-\mathbf{r}', t-t')\,\mathbf{E}(\mathbf{r}',t')\,d\mathbf{r}'\,dt'    (10.1.16)

Here \tilde{\varepsilon} denotes the response function in space and time.
10.1.4 Spectral representation of time-dependent fields
The spectrum \hat{\mathbf{E}}(\mathbf{r},\omega) of an arbitrary time-dependent field \mathbf{E}(\mathbf{r},t) is defined by the Fourier transform

\hat{\mathbf{E}}(\mathbf{r},\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\mathbf{E}(\mathbf{r},t)\,e^{i\omega t}\,dt    (10.1.17)

In order that \mathbf{E}(\mathbf{r},t) is a real-valued field we have to require that

\hat{\mathbf{E}}(\mathbf{r},-\omega) = \hat{\mathbf{E}}^{*}(\mathbf{r},\omega)    (10.1.18)

Applying the Fourier transform to the time-dependent Maxwell's equations (10.1.1)-(10.1.4) gives

\nabla\times\hat{\mathbf{E}}(\mathbf{r},\omega) = i\omega\,\hat{\mathbf{B}}(\mathbf{r},\omega)    (10.1.19)

\nabla\times\hat{\mathbf{H}}(\mathbf{r},\omega) = -i\omega\,\hat{\mathbf{D}}(\mathbf{r},\omega) + \hat{\mathbf{j}}(\mathbf{r},\omega)    (10.1.20)

\nabla\cdot\hat{\mathbf{D}}(\mathbf{r},\omega) = \hat{\rho}(\mathbf{r},\omega)    (10.1.21)

\nabla\cdot\hat{\mathbf{B}}(\mathbf{r},\omega) = 0    (10.1.22)

Once the solution for \hat{\mathbf{E}}(\mathbf{r},\omega) has been determined, the time-dependent field is calculated by the inverse transform as

\mathbf{E}(\mathbf{r},t) = \int_{-\infty}^{\infty}\hat{\mathbf{E}}(\mathbf{r},\omega)\,e^{-i\omega t}\,d\omega    (10.1.23)

Thus, the time dependence of a non-harmonic electromagnetic field can be Fourier transformed and every spectral component can be treated separately as a monochromatic field. The general time dependence is obtained from the inverse transform.
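The reality condition (10.1.18) has a direct discrete analogue: for a real-valued sampled field, the DFT bin for frequency -ω is the complex conjugate of the bin for +ω. A minimal sketch (NumPy assumed; in the DFT, index N-k plays the role of -ω_k):

```python
import numpy as np

# Discrete analogue of (10.1.18): E^(-w) = conj(E^(w)) for real E(t).
rng = np.random.default_rng(1)
E_t = rng.normal(size=256)              # real samples of E(r, t) at a fixed r
E_w = np.fft.fft(E_t)

# For real input, X[N - k] = conj(X[k]); reversing bins 1..N-1 realizes k -> N-k.
assert np.allclose(E_w[1:][::-1], np.conj(E_w[1:]))
```

This is also why `np.fft.rfft` can store only the non-negative-frequency half of the spectrum for real fields.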
10.1.5 Time-harmonic fields
The time dependence in the wave equations can be easily separated to obtain a harmonic differential equation. A monochromatic field can then be written as

\mathbf{E}(\mathbf{r},t) = \operatorname{Re}\{\mathbf{E}(\mathbf{r})\,e^{-i\omega t}\} = \frac{1}{2}\left[\mathbf{E}(\mathbf{r})\,e^{-i\omega t} + \mathbf{E}^{*}(\mathbf{r})\,e^{i\omega t}\right]    (10.1.24)

with similar expressions for the other fields. This can also be written as

\mathbf{E}(\mathbf{r},t) = \operatorname{Re}\{\mathbf{E}(\mathbf{r})\}\cos\omega t + \operatorname{Im}\{\mathbf{E}(\mathbf{r})\}\sin\omega t = |\mathbf{E}(\mathbf{r})|\cos[\omega t - \varphi(\mathbf{r})]    (10.1.25)

where the phase is determined by \varphi(\mathbf{r}) = \arctan[\operatorname{Im}\{E(\mathbf{r})\}/\operatorname{Re}\{E(\mathbf{r})\}].
Notice that \mathbf{E}(\mathbf{r},t) is real, whereas the spatial part \mathbf{E}(\mathbf{r}) is complex. The symbol \mathbf{E} will be used for both the real, time-dependent field and the complex spatial part of the field. The introduction of a new symbol is avoided in order to keep the notation simple. It is convenient to represent the fields of a time-harmonic field by their complex amplitudes. Maxwell's equations can then be written as

\nabla\times\mathbf{E}(\mathbf{r},\omega) = i\omega\,\mathbf{B}(\mathbf{r},\omega)    (10.1.26)

\nabla\times\mathbf{H}(\mathbf{r},\omega) = -i\omega\,\mathbf{D}(\mathbf{r},\omega) + \mathbf{j}(\mathbf{r},\omega)    (10.1.27)

\nabla\cdot\mathbf{D}(\mathbf{r},\omega) = \rho(\mathbf{r},\omega)    (10.1.28)

\nabla\cdot\mathbf{B}(\mathbf{r},\omega) = 0    (10.1.29)

which is equivalent to Maxwell's equations (10.1.19)-(10.1.22) for the spectra of arbitrary time-dependent fields. Thus, the solution for \mathbf{E}(\mathbf{r}) is equivalent to the spectrum \hat{\mathbf{E}}(\mathbf{r},\omega) of an arbitrary time-dependent field. It is obvious that the complex field amplitudes depend on the angular frequency \omega, i.e. \mathbf{E}(\mathbf{r}) = \mathbf{E}(\mathbf{r},\omega). However, \omega is usually not included in the argument. Also the material parameters \varepsilon, \mu, and \sigma are functions of space and frequency, i.e. \varepsilon = \varepsilon(\mathbf{r},\omega), \mu = \mu(\mathbf{r},\omega), \sigma = \sigma(\mathbf{r},\omega). For simpler notation, we will often drop the arguments in the fields and material parameters. It is the context of the problem that determines which of the fields \mathbf{E}(\mathbf{r},t), \mathbf{E}(\mathbf{r}), or \hat{\mathbf{E}}(\mathbf{r},\omega) is being considered.
10.1.6 Complex dielectric constant
With the help of the linear constitutive relations we can express Maxwell's curl equations (10.1.26) and (10.1.27) in terms of \mathbf{E}(\mathbf{r}) and \mathbf{H}(\mathbf{r}). We then multiply both sides of the first equation by \mu^{-1} and apply the curl operator to both sides. After the expression \nabla\times\mathbf{H} is substituted by the second equation we obtain

\nabla\times\mu^{-1}\nabla\times\mathbf{E} - \frac{\omega^2}{c^2}\left[\varepsilon + \frac{i\sigma}{\omega\varepsilon_0}\right]\mathbf{E} = i\omega\mu_0\,\mathbf{j}_s    (10.1.30)

It is common practice to replace the expression in the brackets on the left-hand side by a complex dielectric constant, i.e.

\left[\varepsilon + \frac{i\sigma}{\omega\varepsilon_0}\right] \to \varepsilon    (10.1.31)

In this notation one does not distinguish between conduction currents and polarization currents. Energy dissipation is associated with the imaginary part of the dielectric constant. With the new definition of \varepsilon, the wave equations for the complex fields \mathbf{E}(\mathbf{r}) and \mathbf{H}(\mathbf{r}) in linear, isotropic, but inhomogeneous media are

\nabla\times\mu^{-1}\nabla\times\mathbf{E} - k_0^2\,\varepsilon\,\mathbf{E} = i\omega\mu_0\,\mathbf{j}_s    (10.1.32)

\nabla\times\varepsilon^{-1}\nabla\times\mathbf{H} - k_0^2\,\mu\,\mathbf{H} = \nabla\times\varepsilon^{-1}\mathbf{j}_s    (10.1.33)

where k_0 = \omega/c denotes the vacuum wavenumber. These equations are also valid for anisotropic media if the substitutions \varepsilon\to\overleftrightarrow{\varepsilon} and \mu\to\overleftrightarrow{\mu} are performed.
10.1.7 Dyadic Green's functions
An important concept in field theory is the Green's function: the fields due to a point source. In electromagnetic theory, the dyadic Green's function \overleftrightarrow{\mathbf{G}} is essentially defined by the electric field \mathbf{E} at the field point \mathbf{r} generated by a radiating electric dipole \mathbf{p} located at the source point \mathbf{r}'. In mathematical terms this reads as

\mathbf{E}(\mathbf{r}) = \omega^2\mu_0\mu\,\overleftrightarrow{\mathbf{G}}(\mathbf{r},\mathbf{r}')\,\mathbf{p}    (10.1.34)

To understand the basic idea of Green's functions we will first consider a general mathematical point of view.
10.1.8 Mathematical basis of Green's functions
Consider the following general, inhomogeneous equation:

\mathcal{L}\,\mathbf{A}(\mathbf{r}) = \mathbf{B}(\mathbf{r})    (10.1.35)

\mathcal{L} is a linear operator acting on the vector field \mathbf{A} representing the unknown response of the system. The vector field \mathbf{B} is a known source function and makes the differential equation inhomogeneous. A well-known theorem for linear differential equations states that the general solution is equal to the sum of the complete homogeneous solution (\mathbf{B} = 0) and a particular inhomogeneous solution. Here, we assume that the homogeneous solution (\mathbf{A}_0) is known. We thus need to solve for an arbitrary particular solution.
Usually it is difficult to find a solution of Eq. (10.1.35) and it is easier to consider the special inhomogeneity \delta(\mathbf{r}-\mathbf{r}'), which is zero everywhere except at the point \mathbf{r} = \mathbf{r}'. Then the linear equation reads as

\mathcal{L}\,\mathbf{G}_i(\mathbf{r},\mathbf{r}') = \mathbf{n}_i\,\delta(\mathbf{r}-\mathbf{r}') \qquad (i = x, y, z)    (10.1.36)
where \mathbf{n}_i denotes an arbitrary constant unit vector. In general, the vector field \mathbf{G}_i is dependent on the location \mathbf{r}' of the inhomogeneity \delta(\mathbf{r}-\mathbf{r}'). Therefore, the vector \mathbf{r}' has been included in the argument of \mathbf{G}_i. The three equations given by Eq. (10.1.36) can be written in closed form as

\mathcal{L}\,\overleftrightarrow{\mathbf{G}}(\mathbf{r},\mathbf{r}') = \overleftrightarrow{\mathbf{I}}\,\delta(\mathbf{r}-\mathbf{r}')    (10.1.37)

where the operator \mathcal{L} acts on each column of \overleftrightarrow{\mathbf{G}} separately and \overleftrightarrow{\mathbf{I}} is the unit dyad. The dyadic function \overleftrightarrow{\mathbf{G}} fulfilling Eq. (10.1.37) is known as the dyadic Green's function. In a next step, assume that Eq. (10.1.37) has been solved and that \overleftrightarrow{\mathbf{G}} is known. Postmultiplying Eq. (10.1.37) with \mathbf{B}(\mathbf{r}') on both sides and integrating over the volume V in which \mathbf{B}\neq 0 gives

\int_V \mathcal{L}\,\overleftrightarrow{\mathbf{G}}(\mathbf{r},\mathbf{r}')\,\mathbf{B}(\mathbf{r}')\,dV' = \int_V \mathbf{B}(\mathbf{r}')\,\delta(\mathbf{r}-\mathbf{r}')\,dV'    (10.1.38)
The right-hand side reduces to $\mathbf{B}(\mathbf{r})$, and with Eq. (10.1.35) it follows that
$$\mathcal{L}\,\mathbf{A}(\mathbf{r})=\int_{V}\mathcal{L}\,\overleftrightarrow{G}(\mathbf{r},\mathbf{r}')\,\mathbf{B}(\mathbf{r}')\,dV'\tag{10.1.39}$$
If on the right-hand side the operator $\mathcal{L}$ is taken out of the integral, the solution of Eq. (10.1.35) can be expressed as
$$\mathbf{A}(\mathbf{r})=\int_{V}\overleftrightarrow{G}(\mathbf{r},\mathbf{r}')\,\mathbf{B}(\mathbf{r}')\,dV'\tag{10.1.40}$$
Thus, the solution of the original equation can be found by integrating the product of the dyadic Green's function and the inhomogeneity $\mathbf{B}$ over the source volume $V$.
The assumption that the operators $\mathcal{L}$ and $\int dV'$ can be interchanged is not strictly valid, and special care must be applied if the integrand is not well behaved. Most often $\overleftrightarrow{G}$ is singular at $\mathbf{r}=\mathbf{r}'$, and an infinitesimal exclusion volume surrounding $\mathbf{r}=\mathbf{r}'$ has to be introduced [44]. Depolarization of the principal volume must be treated separately, resulting in a term $\overleftrightarrow{L}$ depending on the geometrical shape of the volume. Furthermore, in numerical schemes the principal volume has a finite size, giving rise to a second correction term commonly designated by $\overleftrightarrow{M}$. As long as we consider field points outside of the source volume $V$, i.e. $\mathbf{r}\notin V$, we do not need to consider these tricky issues.
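The recipe of Eq. (10.1.40) can be checked numerically in the simplest scalar case. Below is a minimal Python sketch (an illustration, not part of the original text) for the 1D operator $\mathcal{L}=d^{2}/dx^{2}$ on $[0,1]$ with boundary conditions $A(0)=A(1)=0$, whose Green's function is the standard textbook result $G(x,x')=x_{<}(x_{>}-1)$; integrating $G$ against a source $B$ reproduces the solution of $\mathcal{L}A=B$.

```python
import numpy as np

# Green's function of L = d^2/dx^2 on [0, 1] with A(0) = A(1) = 0
# (a standard textbook result; the names here are illustrative, not from the text)
def green(x, xp):
    return np.minimum(x, xp) * (np.maximum(x, xp) - 1.0)

n = 1001
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
w = np.full(n, h); w[0] = w[-1] = 0.5 * h   # trapezoid quadrature weights
B = np.ones_like(x)                         # constant source term

# A(x) = int_0^1 G(x, x') B(x') dx'  -- the analogue of Eq. (10.1.40)
A = green(x[:, None], x[None, :]) @ (B * w)

# for B = 1 the exact solution of A'' = B, A(0) = A(1) = 0 is x(x - 1)/2
err = np.max(np.abs(A - 0.5 * x * (x - 1.0)))
print(err < 1e-12)
```

Since the integrand is piecewise linear in $x'$ with its kink on a grid node, the trapezoid rule here is exact up to rounding, so the reconstruction matches the closed-form solution essentially to machine precision.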
10.1.9 Derivation of the Green's function for the electric field
The derivation of the Green's function for the electric field is most conveniently accomplished by considering the time-harmonic vector potential $\mathbf{A}$ and the scalar potential $\phi$ in an infinite and homogeneous space characterized by the constants $\varepsilon$ and $\mu$. In this case, $\mathbf{A}$ and $\phi$ are defined by the relationships
$$\mathbf{E}(\mathbf{r})=i\omega\,\mathbf{A}(\mathbf{r})-\nabla\phi(\mathbf{r})\tag{10.1.41}$$
$$\mathbf{H}(\mathbf{r})=\frac{1}{\mu\mu_{0}}\,\nabla\times\mathbf{A}(\mathbf{r})\tag{10.1.42}$$
We can insert these equations into Maxwell's second equation (10.1.27):
$$\nabla\times\mathbf{H}(\mathbf{r},\omega)=-i\omega\,\mathbf{D}(\mathbf{r},\omega)+\mathbf{j}(\mathbf{r},\omega)\tag{10.1.43}$$
Taking into account
$$\mathbf{D}(\mathbf{r},\omega)=\varepsilon_{0}\varepsilon\,\mathbf{E}(\mathbf{r},\omega)=\varepsilon_{0}\varepsilon\left[\,i\omega\,\mathbf{A}(\mathbf{r})-\nabla\phi(\mathbf{r})\,\right]\tag{10.1.44}$$
we obtain
$$\nabla\times\nabla\times\mathbf{A}(\mathbf{r})=\mu_{0}\mu\,\mathbf{j}(\mathbf{r})-i\omega\mu_{0}\mu\,\varepsilon_{0}\varepsilon\left[\,i\omega\,\mathbf{A}(\mathbf{r})-\nabla\phi(\mathbf{r})\,\right]\tag{10.1.45}$$
where we used $\mathbf{D}=\varepsilon_{0}\varepsilon\,\mathbf{E}$. The potentials $\mathbf{A}$, $\phi$ are not uniquely defined by Eqs. (10.1.41) and (10.1.42). We are still free to define the value of $\nabla\cdot\mathbf{A}$, which we choose as
$$\nabla\cdot\mathbf{A}=i\omega\mu_{0}\mu\,\varepsilon_{0}\varepsilon\,\phi(\mathbf{r})\qquad\text{(Lorentz gauge)}\tag{10.1.46}$$
A condition that fixes the redundancy of Eqs. (10.1.41) and (10.1.42) is called a gauge condition. The gauge chosen through Eq. (10.1.46) is the so-called Lorentz gauge.
Now if we take the gradient (nabla) of both sides of (10.1.46) we get
$$\nabla(\nabla\cdot\mathbf{A})=i\omega\mu_{0}\mu\,\varepsilon_{0}\varepsilon\,\nabla\phi(\mathbf{r})\tag{10.1.47}$$
Also
$$k=\frac{2\pi}{\lambda}\sqrt{\mu\varepsilon}\qquad\text{and}\qquad\frac{1}{c^{2}}=\mu_{0}\varepsilon_{0}\tag{10.1.48}$$
By the way, in vacuum we have $\sqrt{\mu\varepsilon}=1$ and
$$k=\frac{2\pi}{\lambda}=\frac{\omega}{c}\tag{10.1.49}$$
Now, using the mathematical identity $\nabla\times\nabla\times=-\nabla^{2}+\nabla\nabla\cdot$ together with the gradient of the Lorentz gauge (10.1.47) (whereby the $\nabla\nabla\cdot\mathbf{A}=\mathrm{grad\,div}\,\mathbf{A}$ term cancels against the $\nabla\phi$ term in (10.1.45)), we can rewrite Eq. (10.1.45) as
$$\left[\nabla^{2}+k^{2}\right]\mathbf{A}(\mathbf{r})=-\mu_{0}\mu\,\mathbf{j}(\mathbf{r})\tag{10.1.50}$$
which is the inhomogeneous Helmholtz equation. It holds independently for each component $A_{i}$ of $\mathbf{A}$. A similar equation can be derived for the scalar potential:
$$\left[\nabla^{2}+k^{2}\right]\phi(\mathbf{r})=-\frac{\rho(\mathbf{r})}{\varepsilon_{0}\varepsilon}\tag{10.1.51}$$
Thus, we obtain four scalar Helmholtz equations of the form
$$\left[\nabla^{2}+k^{2}\right]f(\mathbf{r})=-g(\mathbf{r})\tag{10.1.52}$$
To derive the scalar Green's function $G_{0}(\mathbf{r},\mathbf{r}')$ for the Helmholtz operator we replace the source term $g(\mathbf{r})$ by a single point source $\delta(\mathbf{r}-\mathbf{r}')$ and obtain
$$\left[\nabla^{2}+k^{2}\right]G_{0}(\mathbf{r},\mathbf{r}')=-\delta(\mathbf{r}-\mathbf{r}')\tag{10.1.53}$$
The coordinate $\mathbf{r}$ denotes the location of the field point, i.e. the point at which the fields are to be evaluated, whereas the coordinate $\mathbf{r}'$ designates the location of the point source. Once we have determined $G_{0}$ we can state the particular solution for the vector potential in Eq. (10.1.50) as
$$\mathbf{A}(\mathbf{r})=\mu_{0}\mu\int_{V}\mathbf{j}(\mathbf{r}')\,G_{0}(\mathbf{r},\mathbf{r}')\,dV'\tag{10.1.54}$$
A similar equation holds for the scalar potential. Both solutions require knowledge of the Green's function defined through Eq. (10.1.53). In free space, the only physical solution of this equation is [51]:
$$G_{0}(\mathbf{r},\mathbf{r}')=\frac{e^{\pm ik|\mathbf{r}-\mathbf{r}'|}}{4\pi|\mathbf{r}-\mathbf{r}'|}\tag{10.1.55}$$
The solution with the plus sign denotes a spherical wave that propagates out of the origin, whereas the solution with the minus sign is a wave that converges towards the origin. In the following we only retain the outward-propagating wave.
The scalar Green's function can be introduced into Eq. (10.1.54), and the vector potential can be calculated by integrating over the source volume $V$. Thus, we are in a position to calculate the vector potential and scalar potential for any given current distribution $\mathbf{j}$ and charge distribution $\rho$. Notice that the Green's function in Eq. (10.1.55) applies only to a homogeneous three-dimensional space. The Green's function of a two-dimensional space or a half-space will have a different form.
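That $G_{0}$ of Eq. (10.1.55) satisfies the homogeneous Helmholtz equation away from the origin is easy to check numerically. The sketch below (illustrative Python, not part of the original text) uses the radial form of the Laplacian for a spherically symmetric function, $\nabla^{2}G_{0}=(1/r)\,d^{2}(rG_{0})/dr^{2}$, and evaluates the residual of $[\nabla^{2}+k^{2}]G_{0}$ by finite differences.

```python
import numpy as np

k = 2.0 * np.pi                       # wavenumber (arbitrary test value)
r = np.linspace(0.5, 5.0, 20001)      # radii well away from the origin
G0 = np.exp(1j * k * r) / (4.0 * np.pi * r)

# radial Laplacian of a spherically symmetric function:
#   lap G0 = (1/r) d^2/dr^2 (r G0)
h = r[1] - r[0]
u = r * G0
upp = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
helmholtz = upp / r[1:-1] + k**2 * G0[1:-1]

residual = np.max(np.abs(helmholtz))
print(residual < 1e-3)
```

The residual is limited only by the finite-difference truncation error, confirming that $rG_{0}=e^{ikr}/4\pi$ solves $u''+k^{2}u=0$.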
So far we have reduced the treatment of Green's functions to the potentials $\mathbf{A}$ and $\phi$ because this allows us to work with scalar equations. The formalism becomes more involved when we consider the electric and magnetic fields. The reason for this is that a source current in the x-direction leads to an electric and magnetic field with x-, y-, and z-components. This is different for the vector potential: a source current in x only gives rise to a vector potential with an x-component. Thus, in the case of the electric and magnetic fields we need a Green's function that relates all components of the source with all components of the fields; in other words, the Green's function must be a tensor. This type of Green's function is denoted as a dyadic Green's function and has been introduced in the previous section. To determine the dyadic Green's function we start with the wave equation for the electric field, Eq. (10.1.32). In a homogeneous space it reads as
$$\nabla\times\nabla\times\mathbf{E}(\mathbf{r})-k^{2}\,\mathbf{E}(\mathbf{r})=i\omega\mu_{0}\mu\,\mathbf{j}(\mathbf{r})\tag{10.1.56}$$
We can define for each component of $\mathbf{j}$ a corresponding Green's function. For example, for $j_{x}$ we have
$$\nabla\times\nabla\times\mathbf{G}_{x}(\mathbf{r},\mathbf{r}')-k^{2}\,\mathbf{G}_{x}(\mathbf{r},\mathbf{r}')=\delta(\mathbf{r}-\mathbf{r}')\,\mathbf{n}_{x}\tag{10.1.57}$$
where $\mathbf{n}_{x}$ is the unit vector in the x-direction. A similar equation can be formulated for a point source in the y- and z-directions. In order to account for all orientations we write the general definition of the dyadic Green's function for the electric field:
$$\nabla\times\nabla\times\overleftrightarrow{G}(\mathbf{r},\mathbf{r}')-k^{2}\,\overleftrightarrow{G}(\mathbf{r},\mathbf{r}')=\overleftrightarrow{I}\,\delta(\mathbf{r}-\mathbf{r}')\tag{10.1.58}$$
with $\overleftrightarrow{I}$ being the unit dyad (unit tensor). The first column of the tensor $\overleftrightarrow{G}$ corresponds to the field due to a point source in the x-direction, the second column to the field due to a point source in the y-direction, and the third column to the field due to a point source in the z-direction. Thus a dyadic Green's function is just a compact notation for three vectorial Green's functions. As before, we can view the source current in Eq. (10.1.56) as a superposition of point currents. Thus, if we know the Green's function $\overleftrightarrow{G}$ we can state a particular solution of Eq. (10.1.56) as
$$\mathbf{E}(\mathbf{r})=i\omega\mu_{0}\mu\int_{V}\overleftrightarrow{G}(\mathbf{r},\mathbf{r}')\,\mathbf{j}(\mathbf{r}')\,dV'\tag{10.1.59}$$
In order to solve Eq. (10.1.59) for a given distribution of currents, we still need to determine the explicit form of $\overleftrightarrow{G}$. Introducing the gradient of the Lorentz gauge (10.1.47) into Eq. (10.1.41) leads to

Fig 39 Illustration of the dyadic Green's function $\overleftrightarrow{G}(\mathbf{r},\mathbf{r}')$. The Green's function renders the electric field at the field point $\mathbf{r}$ due to a single point source $\mathbf{j}$ at the source point $\mathbf{r}'$. Since the field at $\mathbf{r}$ depends on the orientation of $\mathbf{j}$, the Green's function must account for all possible orientations in the form of a tensor.
$$\mathbf{E}(\mathbf{r})=i\omega\left[1+\frac{1}{k^{2}}\,\nabla\nabla\cdot\right]\mathbf{A}(\mathbf{r})\qquad\text{or}\qquad\mathbf{E}(\mathbf{r})=i\omega\left[1+\frac{1}{k^{2}}\,\mathrm{grad\,div}\right]\mathbf{A}(\mathbf{r})\tag{10.1.60}$$
Inserting the vector potential (10.1.54) into (10.1.60) and comparing with (10.1.59), we find
$$\overleftrightarrow{G}(\mathbf{r},\mathbf{r}')=\left[\overleftrightarrow{I}+\frac{1}{k^{2}}\,\nabla\nabla\right]G_{0}(\mathbf{r},\mathbf{r}')\tag{10.1.61}$$
10.1.10 Time-dependent Green's functions
The time dependence in the wave equations can be separated, and the resulting harmonic differential equation for the time behaviour is easily solved. A monochromatic field can be represented in the form of Eq. (10.1.24), and any other time-dependent field can be generated by a Fourier transform (sum of monochromatic fields). However, for the study of ultra-fast phenomena it is of advantage to retain the explicit time behaviour. In this case we have to generalize the definitions of $\mathbf{A}$ and $\phi$ as
$$\mathbf{E}(\mathbf{r},t)=-\frac{\partial}{\partial t}\mathbf{A}(\mathbf{r},t)-\nabla\phi(\mathbf{r},t)\tag{10.1.62}$$
$$\mathbf{H}(\mathbf{r},t)=\frac{1}{\mu\mu_{0}}\,\nabla\times\mathbf{A}(\mathbf{r},t)\tag{10.1.63}$$
From these we find the time-dependent Helmholtz equation in the Lorentz gauge, cf. Eq. (10.1.50):
$$\left[\nabla^{2}-\frac{n^{2}}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}\right]\mathbf{A}(\mathbf{r},t)=-\mu_{0}\mu\,\mathbf{j}(\mathbf{r},t)\tag{10.1.64}$$
A similar equation holds for the scalar potential $\phi$. The definition of the scalar Green's function is now generalized to
$$\left[\nabla^{2}-\frac{n^{2}}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}\right]G_{0}(\mathbf{r},\mathbf{r}';t,t')=-\delta(\mathbf{r}-\mathbf{r}')\,\delta(t-t')\tag{10.1.65}$$
The point source is now defined with respect to space and time. The solution for $G_{0}$ is [51]:
$$G_{0}(\mathbf{r},\mathbf{r}';t,t')=\frac{\delta\!\left(t'-\left[t\mp\frac{n}{c}|\mathbf{r}-\mathbf{r}'|\right]\right)}{4\pi|\mathbf{r}-\mathbf{r}'|}\tag{10.1.66}$$
where the minus sign is associated with the response at a time $t$ later than $t'$.
10.1.11 The radiating electric dipole
In order to derive the potential energy for a microscopic system we have to give up the definitions of the electric displacement $\mathbf{D}$ and the magnetic field $\mathbf{H}$ and consider only the field vectors $\mathbf{E}$ and $\mathbf{B}$ in the empty space between a set of discrete charges $q_{n}$. We thus replace $\mathbf{D}=\varepsilon_{0}\mathbf{E}$ and $\mathbf{B}=\mu_{0}\mathbf{H}$ in Maxwell's equations (cf. Eqs. (10.1.1)-(10.1.4)) and set
$$\rho(\mathbf{r})=\sum_{n}q_{n}\,\delta(\mathbf{r}-\mathbf{r}_{n})\tag{10.1.67}$$
$$\mathbf{j}(\mathbf{r})=\sum_{n}q_{n}\,\dot{\mathbf{r}}_{n}\,\delta(\mathbf{r}-\mathbf{r}_{n})\tag{10.1.68}$$
where $\mathbf{r}_{n}$ denotes the position vector of the $n$th charge and $\dot{\mathbf{r}}_{n}$ its velocity. The total charge and current of the particle are obtained by a volume integration over $\rho$ and $\mathbf{j}$.
We can develop this current density in a Taylor series with origin $\mathbf{r}_{0}$, which is typically at the center of the charge distribution. If we keep only the lowest-order term we find
$$\mathbf{j}(\mathbf{r},t)=\frac{d}{dt}\,\boldsymbol{\mu}(t)\,\delta(\mathbf{r}-\mathbf{r}_{0})\tag{10.1.69}$$
with the dipole moment
$$\boldsymbol{\mu}(t)=\sum_{n}q_{n}\,\left[\mathbf{r}_{n}(t)-\mathbf{r}_{0}\right]\tag{10.1.70}$$
The dipole moment is identical with the definition in Eq. (8.11), for which we had $\mathbf{r}_{0}=0$. We assume a harmonic time dependence, which allows us to write the current density as $\mathbf{j}(\mathbf{r},t)=\mathrm{Re}\{\mathbf{j}(\mathbf{r})\exp(-i\omega t)\}$ and the dipole moment as $\boldsymbol{\mu}(t)=\mathrm{Re}\{\boldsymbol{\mu}\exp(-i\omega t)\}$. Equation (10.1.69) can then be written as
$$\mathbf{j}(\mathbf{r})=-i\omega\,\boldsymbol{\mu}\,\delta(\mathbf{r}-\mathbf{r}_{0})\tag{10.1.71}$$
Thus, to lowest order, any current density can be thought of as an oscillating dipole with origin at the center of the charge distribution.
10.1.12 Electric dipole fields in a homogeneous space
In this section we will derive the fields of a dipole representing the current density of a small charge distribution located in a homogeneous, linear and isotropic space. The fields of the dipole can be derived by considering two oscillating charges $q$ of opposite sign, separated by an infinitesimal vector $d\mathbf{s}$. In this physical picture the dipole moment is given by $\boldsymbol{\mu}=q\,d\mathbf{s}$. However, it is more elegant to derive the dipole fields using the Green's function formalism developed in Section 10.1.9. There, we have derived the so-called volume integral equations
$$\mathbf{E}(\mathbf{r})=\mathbf{E}_{0}+i\omega\mu_{0}\mu\int_{V}\overleftrightarrow{G}(\mathbf{r},\mathbf{r}')\,\mathbf{j}(\mathbf{r}')\,dV'\tag{10.1.72}$$
$$\mathbf{H}(\mathbf{r})=\mathbf{H}_{0}+\int_{V}\left[\nabla\times\overleftrightarrow{G}(\mathbf{r},\mathbf{r}')\right]\mathbf{j}(\mathbf{r}')\,dV'\tag{10.1.73}$$
The integration runs over the source volume specified by the coordinate $\mathbf{r}'$. If we introduce the current from Eq. (10.1.71) into the last two equations and assume that all fields are produced by the dipole, we find (recall the delta-function property $f(0)=\int f(\mathbf{r})\,\delta(\mathbf{r})\,d\mathbf{r}$)
$$\mathbf{E}(\mathbf{r})=\omega^{2}\mu\mu_{0}\,\overleftrightarrow{G}(\mathbf{r},\mathbf{r}_{0})\,\boldsymbol{\mu}\tag{10.1.74}$$
$$\mathbf{H}(\mathbf{r})=-i\omega\left[\nabla\times\overleftrightarrow{G}(\mathbf{r},\mathbf{r}_{0})\right]\boldsymbol{\mu}\tag{10.1.75}$$
Hence, the fields of an arbitrarily oriented electric dipole located at $\mathbf{r}=\mathbf{r}_{0}$ are determined by the Green's function $\overleftrightarrow{G}(\mathbf{r},\mathbf{r}_{0})$. As mentioned earlier, each column vector of $\overleftrightarrow{G}$ specifies the electric field of a dipole whose axis is aligned with one of the coordinate axes. For a homogeneous space, $\overleftrightarrow{G}$ has been derived as
$$\overleftrightarrow{G}(\mathbf{r},\mathbf{r}')=\left[\overleftrightarrow{I}+\frac{1}{k^{2}}\,\nabla\nabla\right]G_{0}(\mathbf{r},\mathbf{r}'),\qquad G_{0}(\mathbf{r},\mathbf{r}')=\frac{e^{ik|\mathbf{r}-\mathbf{r}'|}}{4\pi|\mathbf{r}-\mathbf{r}'|}\tag{10.1.76}$$
where $\overleftrightarrow{I}$ is the unit dyad and $G_{0}(\mathbf{r},\mathbf{r}')$ the scalar Green's function. It is straightforward to calculate $\overleftrightarrow{G}$ in the three major coordinate systems. In a Cartesian system $\overleftrightarrow{G}$ can be written as
$$\overleftrightarrow{G}(\mathbf{r},\mathbf{r}_{0})=\frac{\exp(ikR)}{4\pi R}\left[\left(1+\frac{ikR-1}{k^{2}R^{2}}\right)\overleftrightarrow{I}+\frac{3-3ikR-k^{2}R^{2}}{k^{2}R^{2}}\,\frac{\mathbf{R}\mathbf{R}}{R^{2}}\right]\tag{10.1.77}$$
where $R$ is the absolute value of the vector $\mathbf{R}=\mathbf{r}-\mathbf{r}_{0}$ and $\mathbf{R}\mathbf{R}$ denotes the outer product of $\mathbf{R}$ with itself. Equation (10.1.77) defines a symmetric $3\times 3$ matrix (see (9.5.109))
$$\overleftrightarrow{G}(\mathbf{r},\mathbf{r}_{0})=\begin{bmatrix}G_{xx}&G_{xy}&G_{xz}\\G_{yx}&G_{yy}&G_{yz}\\G_{zx}&G_{zy}&G_{zz}\end{bmatrix}\tag{10.1.78}$$
which, together with Eqs. (10.1.74) and (10.1.75), determines the electromagnetic field of an arbitrary electric dipole with Cartesian components $\mu_{x}$, $\mu_{y}$, $\mu_{z}$. The tensor $\nabla\times\overleftrightarrow{G}$ can be expressed as
$$\nabla\times\overleftrightarrow{G}(\mathbf{r},\mathbf{r}_{0})=\frac{\exp(ikR)}{4\pi R}\,k\,\frac{\mathbf{R}\times\overleftrightarrow{I}}{R}\left(i-\frac{1}{kR}\right)\tag{10.1.79}$$
where $\mathbf{R}\times\overleftrightarrow{I}$ denotes the matrix generated by the cross-product of $\mathbf{R}$ with each column vector of $\overleftrightarrow{I}$. The Green's function $\overleftrightarrow{G}$ has terms in $(kR)^{-1}$, $(kR)^{-2}$ and $(kR)^{-3}$. In the far field, for which $R\gg\lambda$, only the terms with $(kR)^{-1}$ survive. On the other hand, the dominant terms in the near field, for which $R\ll\lambda$, are the terms with $(kR)^{-3}$. The terms with $(kR)^{-2}$ dominate the intermediate field, at $R\approx\lambda$. To distinguish these three ranges it is convenient to write
$$\overleftrightarrow{G}=\overleftrightarrow{G}_{NF}+\overleftrightarrow{G}_{IF}+\overleftrightarrow{G}_{FF}\tag{10.1.80}$$
where the near-field ($\overleftrightarrow{G}_{NF}$), intermediate-field ($\overleftrightarrow{G}_{IF}$) and far-field ($\overleftrightarrow{G}_{FF}$) Green's functions are given by
$$\overleftrightarrow{G}_{NF}=\frac{\exp(ikR)}{4\pi R}\,\frac{1}{k^{2}R^{2}}\left[-\overleftrightarrow{I}+\frac{3\,\mathbf{R}\mathbf{R}}{R^{2}}\right]\tag{10.1.81}$$
$$\overleftrightarrow{G}_{IF}=\frac{\exp(ikR)}{4\pi R}\,\frac{i}{kR}\left[\overleftrightarrow{I}-\frac{3\,\mathbf{R}\mathbf{R}}{R^{2}}\right]\tag{10.1.82}$$
$$\overleftrightarrow{G}_{FF}=\frac{\exp(ikR)}{4\pi R}\left[\overleftrightarrow{I}-\frac{\mathbf{R}\mathbf{R}}{R^{2}}\right]\tag{10.1.83}$$
Notice that the intermediate field is 90° out of phase with respect to the near and far fields.
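The decomposition can be verified numerically. The sketch below (illustrative Python, not part of the original text) builds the full dyadic Green's function of Eq. (10.1.77) and checks that the three range contributions (10.1.81)-(10.1.83) sum back to it.

```python
import numpy as np

def G_full(k, R):
    # Cartesian dyadic Green's function, Eq. (10.1.77)
    Rn = np.linalg.norm(R)
    kR = k * Rn
    RR = np.outer(R, R) / Rn**2
    I = np.eye(3)
    pref = np.exp(1j * kR) / (4.0 * np.pi * Rn)
    return pref * ((1.0 + (1j * kR - 1.0) / kR**2) * I
                   + (3.0 - 3.0j * kR - kR**2) / kR**2 * RR)

def G_parts(k, R):
    # near-, intermediate- and far-field pieces, Eqs. (10.1.81)-(10.1.83)
    Rn = np.linalg.norm(R)
    kR = k * Rn
    RR = np.outer(R, R) / Rn**2
    I = np.eye(3)
    pref = np.exp(1j * kR) / (4.0 * np.pi * Rn)
    G_nf = pref / kR**2 * (-I + 3.0 * RR)
    G_if = pref * 1j / kR * (I - 3.0 * RR)
    G_ff = pref * (I - RR)
    return G_nf + G_if + G_ff

k = 2.0 * np.pi
R = np.array([0.3, -0.2, 0.5])
diff = np.max(np.abs(G_full(k, R) - G_parts(k, R)))
print(diff < 1e-12)
```

The same function also exhibits the symmetry of the $3\times 3$ matrix in Eq. (10.1.78): `G_full(k, R)` equals its own transpose.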
Mathematica .nb proof of the $\nabla\nabla$ (grad div) term. Note: the exponential must be entered with $+I$, i.e. Exp[I k r]:

    Remove["Global`*"]
    r = Sqrt[x^2 + y^2 + z^2];
    G = Exp[I k r]/(4 Pi r);
    Simplify[D[G, {{x, y, z}, 2}]]   (* Hessian: the grad div term *)

The resulting Hessian is symmetric, with diagonal entries
$$\frac{\partial^{2}G}{\partial x^{2}}=\frac{e^{ikr}\left[x^{2}\left(3-3ikr-k^{2}r^{2}\right)+r^{2}\left(ikr-1\right)\right]}{4\pi r^{5}}$$
(and analogously for $y$ and $z$), and off-diagonal entries
$$\frac{\partial^{2}G}{\partial x\,\partial y}=\frac{e^{ikr}\left(3-3ikr-k^{2}r^{2}\right)xy}{4\pi r^{5}}$$
(and analogously for $xz$ and $yz$), in agreement with the $\mathbf{R}\mathbf{R}$ term of Eq. (10.1.77). And for the gradient:

    Simplify[D[G, {{x, y, z}, 1}]]   (* gradient of the scalar Green's function *)

which gives
$$\nabla G=\frac{e^{ikr}\left(ikr-1\right)}{4\pi r^{3}}\,(x,\,y,\,z)$$
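The same check can be reproduced in Python with sympy (an independent cross-check of the Mathematica computation; a sketch, not part of the original text): apply $[\overleftrightarrow{I}+k^{-2}\nabla\nabla]$ to $G_{0}$ and compare entry by entry with the closed form of Eq. (10.1.77), evaluated at an arbitrary numeric point.

```python
import sympy as sp

x, y, z, k = sp.symbols('x y z k', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
G0 = sp.exp(sp.I * k * r) / (4 * sp.pi * r)
X = (x, y, z)

# dyadic [I + (1/k^2) grad grad] G0, entry by entry
G = sp.Matrix(3, 3, lambda i, j:
              (G0 if i == j else 0) + sp.diff(G0, X[i], X[j]) / k**2)

# closed form of Eq. (10.1.77), with R -> r and RR/R^2 -> X_i X_j / r^2
Gref = sp.Matrix(3, 3, lambda i, j:
                 G0 * (((1 + (sp.I*k*r - 1)/(k*r)**2) if i == j else 0)
                       + (3 - 3*sp.I*k*r - (k*r)**2)/(k*r)**2 * X[i]*X[j]/r**2))

# compare numerically at an arbitrary point
vals = {x: sp.Rational(1, 3), y: sp.Rational(1, 5), z: sp.Rational(1, 7), k: 2}
maxabs = max(abs(complex(((G - Gref)[i, j]).subs(vals).evalf(30)))
             for i in range(3) for j in range(3))
print(maxabs < 1e-12)
```

The numerical comparison avoids relying on symbolic simplification of the complex exponentials; any entry-wise mismatch in Eq. (10.1.77) would show up as a finite residual.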
Paragraph 10.2 Savelyev & Tamm & Born derivation
See Savelyev [15] vol. I pag. 287-291; [16] vol. II pag. 34; Born [50] pag. 75-88; Tamm [53] pag. 457-472; also Jackson [51] pag. 271 (eq. 9.18); see Feynman [10] pag. 260 (21.25); and finally Slater [55] pag. 286-293.
10.2.1 Field Produced by a System of Charges at Great Distances
Assume that we have a system of moving charges that do not leave the confines of a certain volume in their motion. We shall presume that the system as a whole is neutral. Let us consider the field produced by such a system at distances that are great in comparison with its dimensions. We shall place the origin of coordinates inside the system and characterize the distribution of the charge with the aid of the function $\rho=\rho(\mathbf{r}',t)$. Hence, the charge inside the volume $dV'$ at the point with the position vector $\mathbf{r}'$ will be $de(t)=\rho(\mathbf{r}',t)\,dV'$. Let $\mathbf{r}$ stand for the position vector of the observation point P. In addition, we shall introduce the notation $\mathbf{R}=\mathbf{r}-\mathbf{r}'$. It is obvious that $\mathbf{R}$ is the vector drawn from $de$ to the point P. The retarded potentials of the field produced by the system: see [16] vol. II pag. 34.
From Fig 40: the potential at the point determined by the position vector $\mathbf{r}$ is
$$\phi(\mathbf{r})=\frac{1}{4\pi\varepsilon_{0}}\sum_{i=1}^{N}\frac{q_{i}}{|\mathbf{r}-\mathbf{r}_{i}|}$$
Owing to the smallness of $\mathbf{r}_{i}$ in comparison with $r$, we can project $\mathbf{r}-\mathbf{r}_{i}$ on $\mathbf{e}_{r}$ and get
$$R=|\mathbf{r}-\mathbf{r}_{i}|\approx r-\mathbf{r}_{i}\cdot\mathbf{e}_{r}=r\left(1-\frac{\mathbf{r}_{i}\cdot\mathbf{e}_{r}}{r}\right)$$
Now the potential can be written as
$$\phi(\mathbf{r})=\frac{1}{4\pi\varepsilon_{0}}\sum_{i=1}^{N}\frac{q_{i}}{r}\,\frac{1}{1-\mathbf{r}_{i}\cdot\mathbf{e}_{r}/r}$$
Substituting $\mathbf{r}_{i}\cdot\mathbf{e}_{r}/r$ by $x$ and expanding in a Taylor series we get
$$\frac{1}{1-x}\approx 1+x,\qquad f(x)=f(0)+f'(0)\,\delta x+\ldots\tag{10.2.1}$$
since
$$f(0)=\frac{1}{1-0}=1,\qquad f'(0)=\left.\frac{1}{(1-x)^{2}}\right|_{x=0}=1$$
Now I will derive by analogy. The usual gauge for the scalar potential is such that $\phi\to 0$ at infinity. The usual gauge for $\mathbf{A}$ is such that
$$\nabla\cdot\mathbf{A}=0\tag{10.2.2}$$
This particular choice is known as the Coulomb gauge.
Let us take the curl:
$$\nabla\times\mathbf{B}=\nabla\times\nabla\times\mathbf{A}=\nabla(\nabla\cdot\mathbf{A})-\nabla^{2}\mathbf{A}=-\nabla^{2}\mathbf{A}\tag{10.2.3}$$
where use has been made of the Coulomb gauge condition (10.2.2). We can combine the above relation with the field equation $\nabla\times\mathbf{B}=\mu_{0}\,\mathbf{j}$ to give
$$\nabla^{2}\mathbf{A}=-\mu_{0}\,\mathbf{j}\tag{10.2.4}$$
But this is just Poisson's equation. We can immediately write the (retarded) solutions to the above equations:
$$\mathbf{A}(\mathbf{r},t)=\frac{1}{c}\int\frac{\mathbf{j}(\mathbf{r}',t-R/c)}{R}\,dV'\tag{10.2.5}$$
Of course, we have seen an equation like this before; see Paragraph 9.5:
$$\phi(\mathbf{r},t)=\int\frac{\rho(\mathbf{r}',t-R/c)}{R}\,dV'\tag{10.2.6}$$
Supposing that (see Fig 40)
$$R=|\mathbf{r}-\mathbf{r}'|\approx r\left(1-\frac{\mathbf{r}}{r}\cdot\frac{\mathbf{r}'}{r}\right)=r-\mathbf{n}\cdot\mathbf{r}'\tag{10.2.7}$$
Eqs. (10.2.5) and (10.2.6) become
$$\mathbf{A}(\mathbf{r},t)=\frac{1}{c}\int\frac{\mathbf{j}(\mathbf{r}',\,t-r/c+\mathbf{n}\cdot\mathbf{r}'/c)}{r-\mathbf{n}\cdot\mathbf{r}'}\,dV'\tag{10.2.8}$$
$$\phi(\mathbf{r},t)=\int\frac{\rho(\mathbf{r}',\,t-r/c+\mathbf{n}\cdot\mathbf{r}'/c)}{r-\mathbf{n}\cdot\mathbf{r}'}\,dV'\tag{10.2.9}$$
Let us expand the integrand in (10.2.9) into a series in the ratio $r'/r$. Considering the quantity $\mathbf{n}\cdot\mathbf{r}'$ as the small increment $\delta x$ as in (10.2.1) (note that $R=r-\mathbf{n}\cdot\mathbf{r}'$ enters both the retarded time and the denominator), we obtain
$$\frac{\rho\!\left(\mathbf{r}',\,t-\frac{r-\mathbf{n}\cdot\mathbf{r}'}{c}\right)}{r-\mathbf{n}\cdot\mathbf{r}'}=\frac{\rho\!\left(\mathbf{r}',\,t-\frac{r}{c}\right)}{r}-\frac{\partial}{\partial r}\!\left[\frac{\rho\!\left(\mathbf{r}',\,t-\frac{r}{c}\right)}{r}\right](\mathbf{n}\cdot\mathbf{r}')+\ldots\tag{10.2.10}$$
What has been said above shows that the subsequent terms in the expansion (10.2.10) may be disregarded if the condition
$$\frac{l}{cT}\ll 1\tag{10.2.11}$$
is satisfied. The ratio $l/cT$ determines the proper retardation time $\tau$. Consequently, condition (10.2.11) can be written as
$$\tau\ll T\tag{10.2.12}$$
It can be seen from (10.2.12) that we may limit ourselves to the first terms of expansion (10.2.10) when the time needed for the propagation of an electromagnetic perturbation within the limits of the system is much smaller than the time during which the distribution of the charges in the system changes appreciably.
Condition (10.2.11) can be written in two other ways. The product $cT$ gives the wavelength $\lambda$ of the radiation produced by the system. Therefore, inequality (10.2.11) can be written as
$$l\ll\lambda\tag{10.2.13}$$
i.e. the dimensions of the system must be much smaller than the wavelength. Finally, having in view that $l/T$ in order of magnitude equals the velocity $v$ of the charges in the system, instead of inequality (10.2.11) we can write
$$v\ll c\tag{10.2.14}$$
A glance at the last relation shows that by interrupting expansion (10.2.10) at the second term, we have limited ourselves to considering the radiation of a non-relativistic system of charges.
Substituting (10.2.10) into (10.2.9) and (10.2.8) yields
$$\mathbf{A}(\mathbf{r},t)=\frac{1}{cr}\int\mathbf{j}(\mathbf{r}',t-r/c)\,dV'-\frac{\partial}{\partial r}\left[\frac{1}{cr}\int\mathbf{j}(\mathbf{r}',t-r/c)\,(\mathbf{n}\cdot\mathbf{r}')\,dV'\right]\tag{10.2.15}$$
$$\phi(\mathbf{r},t)=\frac{1}{r}\int\rho(\mathbf{r}',t-r/c)\,dV'-\frac{\partial}{\partial r}\left[\frac{1}{r}\int\rho(\mathbf{r}',t-r/c)\,(\mathbf{n}\cdot\mathbf{r}')\,dV'\right]\tag{10.2.16}$$
The density of the charge at the instant $t-r/c$ is inside the first integral. Consequently, this integral gives the total charge of the system, which owing to the presumed electroneutrality of the system is zero. We must therefore retain only the second term in formula (10.2.16). Putting $\mathbf{n}$ outside the integral and the derivative, we arrive at the expression
$$\phi(\mathbf{r},t)=-\mathbf{n}\cdot\frac{\partial}{\partial r}\left[\frac{1}{r}\int\rho(\mathbf{r}',t-r/c)\,\mathbf{r}'\,dV'\right]\tag{10.2.17}$$
The integral in this expression is the dipole electric moment which the system had at the instant $t-r/c$:
$$\mathbf{p}=\sum e\,\mathbf{r}'=\int\rho\,\mathbf{r}'\,dV'=\int\rho(\mathbf{r}',t-r/c)\,\mathbf{r}'\,dV'\tag{10.2.18}$$
We can therefore write
$$\phi(\mathbf{r},t)=-\mathbf{n}\cdot\frac{\partial}{\partial r}\left[\frac{\mathbf{p}(t-r/c)}{r}\right]\tag{10.2.19}$$
Finally, having performed the differentiation and taken into consideration that, with $\tau=t-r/c$,
$$\frac{\partial\mathbf{p}}{\partial r}=\frac{\partial\mathbf{p}}{\partial\tau}\,\frac{\partial\tau}{\partial r}=\frac{\partial\mathbf{p}}{\partial\tau}\left(-\frac{1}{c}\right),\qquad\frac{\partial\mathbf{p}}{\partial t}=\frac{\partial\mathbf{p}}{\partial\tau}\,\frac{\partial\tau}{\partial t}$$
whence
$$\frac{\partial\mathbf{p}}{\partial r}=-\frac{1}{c}\,\frac{\partial\mathbf{p}}{\partial t}=-\frac{1}{c}\,\dot{\mathbf{p}}\tag{10.2.20}$$
we obtain
$$\phi(\mathbf{r},t)=\frac{\mathbf{n}\cdot\mathbf{p}(t-r/c)}{r^{2}}+\frac{\mathbf{n}\cdot\dot{\mathbf{p}}(t-r/c)}{cr}\tag{10.2.21}$$
The first term in this formula coincides with the potential (43.9) of a static dipole ($\mathbf{n}=\mathbf{r}/r$). We must note that the field corresponding to this term at the distance $r$ and the instant $t$ is determined by the value of the dipole moment at the instant $t-r/c$. The first term diminishes with increasing distance $r$ much more rapidly than the second term. Therefore, considering the field at great distances, we can assume that
$$\phi(\mathbf{r},t)=\frac{\mathbf{n}\cdot\dot{\mathbf{p}}(t-r/c)}{cr}\tag{10.2.22}$$
Concerning (10.2.15): if the currents were stationary, i.e. did not depend on $t$, the first integral would vanish. For non-stationary currents, however, this integral differs from zero. We may therefore retain only the first term in expansion (10.2.15). We can thus assume that
$$\mathbf{A}(\mathbf{r},t)=\frac{1}{cr}\int\mathbf{j}(\mathbf{r}',t-r/c)\,dV'\tag{10.2.23}$$
We shall prove that $\int\mathbf{j}(\mathbf{r}',t-r/c)\,dV'$ equals the time derivative of the dipole moment of the system taken for the instant $t-r/c$. It will be simplest to prove this by passing from the continuous distribution of the charges to a discrete one. Let us perform the substitution
$$\int\mathbf{j}\,dV'=\int\rho\,\mathbf{v}\,dV'\to\sum e_{a}\mathbf{v}_{a}\tag{10.2.24}$$
(the velocities of the charges, like the function $\rho\mathbf{v}$, must be taken for the instant $t-r/c$). However,
$$\sum e_{a}\mathbf{v}_{a}=\sum e_{a}\dot{\mathbf{r}}_{a}'=\frac{d}{dt}\sum e_{a}\mathbf{r}_{a}'=\dot{\mathbf{p}}(t-r/c)\tag{10.2.25}$$
Consequently,
$$\mathbf{A}(\mathbf{r},t)=\frac{\dot{\mathbf{p}}(t-r/c)}{cr}\tag{10.2.26}$$
A comparison with (10.2.22) allows us to write
$$\phi=\mathbf{A}\cdot\mathbf{n}\tag{10.2.27}$$
The potentials (10.2.22) and (10.2.26) are determined by the value of the time derivative of the dipole moment of the system. This is why they are called potentials calculated in the dipole approximation. The dipole approximation is allowable when the conditions (10.2.12)-(10.2.14) are observed.
10.2.2 Dipole radiation - Savelyev proof
See [15] pag. 288.
The region of the field that is at a distance $r$ from the radiating system much greater not only than the dimensions of the system $l$, but also than the radiated wavelength ($r\gg\lambda\gg l$) is known as the wave zone. In this zone, conditions are observed in which the dipole approximation treated in the preceding section holds. In this approximation
$$\phi(\mathbf{r},t)=\frac{\mathbf{n}\cdot\dot{\mathbf{p}}(t-r/c)}{cr},\qquad\mathbf{A}(\mathbf{r},t)=\frac{\dot{\mathbf{p}}(t-r/c)}{cr}\tag{10.2.28}$$
To calculate $\mathbf{E}$ we must find $\nabla\phi$ and $\partial\mathbf{A}/\partial t$. Using formula (9.5.36) (see proof 115):
$$\nabla\phi(r)=\frac{\partial\phi}{\partial r}\,\nabla r=\frac{\partial\phi}{\partial r}\,\frac{\mathbf{r}}{r}=\frac{\partial\phi}{\partial r}\,\mathbf{e}_{r}\tag{10.2.29}$$
Recall that $\mathbf{n}=\mathbf{e}_{r}=\mathbf{r}/r$. Hence,
$$\nabla\phi(\mathbf{r},t)=\frac{\partial}{\partial r}\!\left(\frac{\mathbf{n}\cdot\dot{\mathbf{p}}(t-r/c)}{cr}\right)\mathbf{n}=-\frac{\mathbf{n}\cdot\dot{\mathbf{p}}}{cr^{2}}\,\mathbf{n}-\frac{\mathbf{n}\cdot\ddot{\mathbf{p}}}{c^{2}r}\,\mathbf{n}\tag{10.2.30}$$
We have taken advantage of the circumstance that, by (10.2.20),
$$\frac{\partial\dot{\mathbf{p}}}{\partial r}=-\frac{1}{c}\,\frac{\partial\dot{\mathbf{p}}}{\partial t}\tag{10.2.31}$$
The first term in the expression we have obtained diminishes with increasing distance much more rapidly than the second one. It may therefore be ignored at great distances, and we may consider that
$$\nabla\phi(\mathbf{r},t)=-\frac{\mathbf{n}\cdot\ddot{\mathbf{p}}(t-r/c)}{c^{2}r}\,\mathbf{n}\tag{10.2.32}$$
The derivative
$$\frac{\partial\mathbf{A}}{\partial t}=\frac{\ddot{\mathbf{p}}}{cr}\tag{10.2.33}$$
Hence,
$$\mathbf{E}=-\nabla\phi(\mathbf{r},t)-\frac{1}{c}\,\frac{\partial\mathbf{A}}{\partial t}=\frac{\mathbf{n}\cdot\ddot{\mathbf{p}}}{c^{2}r}\,\mathbf{n}-\frac{\ddot{\mathbf{p}}}{c^{2}r}=\frac{(\mathbf{n}\cdot\ddot{\mathbf{p}})\,\mathbf{n}-\ddot{\mathbf{p}}}{c^{2}r}\tag{10.2.34}$$
The numerator of this expression can be written as $\mathbf{n}\times[\mathbf{n}\times\ddot{\mathbf{p}}]$. This can readily be verified by expanding the vector triple product using the formula $\mathbf{a}\times[\mathbf{b}\times\mathbf{c}]=\mathbf{b}(\mathbf{a}\cdot\mathbf{c})-\mathbf{c}(\mathbf{a}\cdot\mathbf{b})$ and taking into account that $\mathbf{n}\cdot\mathbf{n}=1$. The electric field is thus determined by the formula
$$\mathbf{E}=\frac{1}{c^{2}r}\,\mathbf{n}\times[\mathbf{n}\times\ddot{\mathbf{p}}]=\frac{1}{c^{2}r}\,[\ddot{\mathbf{p}}\times\mathbf{n}]\times\mathbf{n}\tag{10.2.35}$$
Let us go over to the calculation of the magnetic field. The vector potential is a function of $r$. Therefore, by (9.5.28), we have
$$\mathbf{B}=\nabla\times\mathbf{A}=\left[\nabla r\times\frac{\partial\mathbf{A}}{\partial r}\right]=\left[\mathbf{n}\times\frac{\partial\mathbf{A}}{\partial r}\right]\tag{10.2.36}$$
Differentiation of Eq. (10.2.28) for $\mathbf{A}$ yields
$$\frac{\partial}{\partial r}\,\mathbf{A}(\mathbf{r},t)=\frac{\partial}{\partial r}\!\left(\frac{\dot{\mathbf{p}}(t-r/c)}{cr}\right)=-\frac{\dot{\mathbf{p}}}{cr^{2}}-\frac{\ddot{\mathbf{p}}}{c^{2}r}\tag{10.2.37}$$
Discarding the term proportional to $1/r^{2}$ we find that $\dfrac{\partial\mathbf{A}}{\partial r}=-\dfrac{\ddot{\mathbf{p}}}{c^{2}r}$. Hence,
$$\mathbf{B}=-\frac{1}{c^{2}r}\,[\mathbf{n}\times\ddot{\mathbf{p}}]=\frac{1}{c^{2}r}\,[\ddot{\mathbf{p}}\times\mathbf{n}]\tag{10.2.38}$$
We shall write the final expressions for $\mathbf{E}$ and $\mathbf{B}$:
$$\mathbf{E}=\frac{1}{c^{2}r}\,[\ddot{\mathbf{p}}\times\mathbf{n}]\times\mathbf{n},\qquad\mathbf{B}=\frac{1}{c^{2}r}\,[\ddot{\mathbf{p}}\times\mathbf{n}]\tag{10.2.39}$$
(remember that the value of $\ddot{\mathbf{p}}$ must be taken for the instant $t-r/c$).
A comparison of these expressions leads to the conclusion that
$$\mathbf{E}=[\mathbf{B}\times\mathbf{n}]\tag{10.2.40}$$
whence it follows that the vector $\mathbf{E}$ is perpendicular to the vectors $\mathbf{B}$ and $\mathbf{n}$. Hence, as in a plane wave, these vectors are mutually perpendicular. In addition, the vectors $\mathbf{B}$ and $\mathbf{E}$, as in a plane wave, are identical in magnitude, and
$$E=B=\frac{\ddot{p}\,\sin\theta}{c^{2}r}\tag{10.2.41}$$
where $\theta$ is the angle between the directions of the vectors $\ddot{\mathbf{p}}$ and $\mathbf{n}$. In vacuum $\sqrt{\mu\varepsilon}=1$, $\mathbf{H}=\mathbf{B}$.
That the relations observed for a plane wave were found to hold for the field we are studying is not surprising. At distances that are great in comparison with the dimensions of the radiating system, a wave must be spherical. At the same time, provided that $r\gg\lambda$, small portions of the spherical wave virtually coincide with a plane wave. It can be seen from (10.2.39) that the fields $\mathbf{E}$ and $\mathbf{B}$ are determined by the second derivative of the dipole moment of the system. This is the reason why the radiation being considered is called dipole radiation. The dipole moment is determined by the expression $\mathbf{p}=\sum e\,\mathbf{r}'$. Consequently, $\ddot{\mathbf{p}}=\sum e\,\ddot{\mathbf{r}}'=\sum e\,\dot{\mathbf{v}}$. It thus follows that charges emit electromagnetic waves only when they move with acceleration.

Fig. 41

To comprehend the pattern of the field at great distances, let us introduce a spherical system of coordinates, measuring the polar angle $\theta$ from the direction of the vector $\ddot{\mathbf{p}}(t-r/c)$ (Fig. 41). By (10.2.39), the vector $\mathbf{B}$ is perpendicular to the plane determined by the vectors $\ddot{\mathbf{p}}$ and $\mathbf{n}$. Consequently, $\mathbf{B}$ is directed along a tangent to a "parallel", the vectors $\ddot{\mathbf{p}}$, $\mathbf{n}$ and $\mathbf{B}$ forming a right-handed system. Examination of (10.2.39) shows that the vectors $\mathbf{B}$, $\mathbf{n}$, and $\mathbf{E}$ also form a right-handed system. Hence, it follows that $\mathbf{E}$ is directed along a tangent to a "meridian", the directions of $\ddot{\mathbf{p}}$ and $\mathbf{E}$ on the equator being opposite. We stress once more that the vectors depicted in Fig. 41 relate to different instants: $\ddot{\mathbf{p}}$ to the instant $t-r/c$, and $\mathbf{B}$ and $\mathbf{E}$ to the instant $t$.
The magnitudes of the vectors $\mathbf{B}$ and $\mathbf{E}$ are proportional to $\sin\theta$. Therefore the fields are maximal at the equator and vanish at the poles.
To determine the intensity of radiation in different directions and the total radiated power, let us calculate the Poynting vector. With a view to (10.2.41), we obtain
$$\mathbf{S}=\frac{c}{4\pi}\,[\mathbf{E}\times\mathbf{B}]=\frac{c}{4\pi}\,E\,B\,\mathbf{n}=\frac{\ddot{p}^{2}\sin^{2}\theta}{4\pi c^{3}r^{2}}\,\mathbf{n}\tag{10.2.42}$$
Hence, the intensity of dipole radiation is proportional to $\sin^{2}\theta$.
To determine the radiated power $P$, let us find the energy flux through the entire spherical surface. The area of a spherical band of width $d\theta$ is $2\pi r^{2}\sin\theta\,d\theta$. Consequently,
$$P=\oint\mathbf{S}\cdot d\mathbf{f}=\int_{0}^{\pi}\frac{\ddot{p}^{2}\sin^{2}\theta}{4\pi c^{3}r^{2}}\,2\pi r^{2}\sin\theta\,d\theta=\frac{2\,\ddot{p}^{2}}{3c^{3}}\tag{10.2.43}$$
Assume that of all the charges of the system only one has acceleration. Hence $\ddot{\mathbf{p}}=\sum e_{a}\dot{\mathbf{v}}_{a}=e\,\dot{\mathbf{v}}$, and the radiated power is
$$P=\frac{2e^{2}\dot{v}^{2}}{3c^{3}}\tag{10.2.44}$$
10.2.3 Dipole radiation - Savelyev like Born proof, including all terms
See Savelyev [15] pag. 288.
The first step is taking the divergence. Thus formula (10.2.21) takes the form
$$\phi(\mathbf{r},t)=\frac{\mathbf{n}\cdot\mathbf{p}(t-r/c)}{r^{2}}+\frac{\mathbf{n}\cdot\dot{\mathbf{p}}(t-r/c)}{cr}\tag{10.2.45}$$
Now we are looking for the gradient of (10.2.45); take care, because we have a composite function!
For the first term in (10.2.45):
$$\mathrm{grad}\left(\frac{\mathbf{n}\cdot\mathbf{p}}{r^{2}}\right)=\frac{1}{r^{2}}\left[\mathrm{grad}\,(\mathbf{n}\cdot\mathbf{p})\right]+(\mathbf{n}\cdot\mathbf{p})\,\frac{\partial}{\partial r}\!\left(\frac{1}{r^{2}}\right)\mathbf{n}\tag{10.2.46}$$
Savelyev forgot the first term (because he was making an approximation)!
$$\frac{1}{r^{2}}\left[\mathrm{grad}\,(\mathbf{n}\cdot\mathbf{p})\right]=\frac{1}{r^{2}}\,\mathrm{grad}\!\left(\frac{x}{r}\,p_{x}+\frac{y}{r}\,p_{y}+\frac{z}{r}\,p_{z}\right)\tag{10.2.47}$$
$$\frac{\partial}{\partial x}\!\left(\frac{x}{r}\right)=\frac{1}{r}\,\frac{\partial x}{\partial x}+x\,\frac{\partial}{\partial r}\!\left(\frac{1}{r}\right)\frac{\partial r}{\partial x}=\frac{1}{r}-\frac{1}{r^{2}}\,\frac{x^{2}}{r}\tag{10.2.48}$$
where
$$\frac{\partial r}{\partial x}=\frac{\partial\sqrt{x^{2}+y^{2}+z^{2}}}{\partial x}=\frac{2x}{2\sqrt{x^{2}+y^{2}+z^{2}}}=\frac{x}{r}\tag{10.2.49}$$
The $1/r$ pieces collect into
$$\frac{p_{x}}{r}\,\mathbf{e}_{x}+\frac{p_{y}}{r}\,\mathbf{e}_{y}+\frac{p_{z}}{r}\,\mathbf{e}_{z}=\frac{\mathbf{p}}{r}\tag{10.2.50}$$
and the $1/r^{3}$ pieces into
$$\frac{(\mathbf{p}\cdot\mathbf{r})\,x}{r^{3}}\,\mathbf{e}_{x}+\frac{(\mathbf{p}\cdot\mathbf{r})\,y}{r^{3}}\,\mathbf{e}_{y}+\frac{(\mathbf{p}\cdot\mathbf{r})\,z}{r^{3}}\,\mathbf{e}_{z}=\frac{(\mathbf{n}\cdot\mathbf{p})}{r}\,\mathbf{n}\tag{10.2.51}$$
Substituting (10.2.50) and (10.2.51) in (10.2.47), the spatial part of the first term of (10.2.46) is
$$\frac{1}{r^{2}}\left[\mathrm{grad}\,(\mathbf{n}\cdot\mathbf{p})\right]_{\text{space}}=\frac{\mathbf{p}}{r^{3}}-\frac{(\mathbf{n}\cdot\mathbf{p})}{r^{3}}\,\mathbf{n}$$
The second term in (10.2.46):
$$(\mathbf{n}\cdot\mathbf{p})\,\frac{\partial}{\partial r}\!\left(\frac{1}{r^{2}}\right)\mathbf{n}=-\frac{2\,(\mathbf{n}\cdot\mathbf{p})}{r^{3}}\,\mathbf{n}\tag{10.2.52}$$
And the first term in (10.2.46) also contains a time (retardation) derivative:
$$\frac{1}{r^{2}}\left[\frac{\partial}{\partial r}\,(\mathbf{n}\cdot\mathbf{p})\right]_{\text{time}}\mathbf{n}=-\frac{\mathbf{n}\cdot\dot{\mathbf{p}}}{cr^{2}}\,\mathbf{n}\tag{10.2.53}$$
We have taken advantage of the circumstance that, by (10.2.20),
$$\frac{\partial\mathbf{p}}{\partial r}=-\frac{1}{c}\,\frac{\partial\mathbf{p}}{\partial t}=-\frac{1}{c}\,\dot{\mathbf{p}}\qquad\text{and}\qquad\frac{\partial\dot{\mathbf{p}}}{\partial r}=-\frac{1}{c}\,\frac{\partial\dot{\mathbf{p}}}{\partial t}\tag{10.2.54}$$
Thus (10.2.46) takes the form
$$\mathrm{grad}\left(\frac{\mathbf{n}\cdot\mathbf{p}}{r^{2}}\right)=\frac{\mathbf{p}}{r^{3}}-\frac{(\mathbf{n}\cdot\mathbf{p})}{r^{3}}\,\mathbf{n}-\frac{2\,(\mathbf{n}\cdot\mathbf{p})}{r^{3}}\,\mathbf{n}-\frac{\mathbf{n}\cdot\dot{\mathbf{p}}}{cr^{2}}\,\mathbf{n}=\frac{\mathbf{p}}{r^{3}}-\frac{3\,(\mathbf{n}\cdot\mathbf{p})}{r^{3}}\,\mathbf{n}-\frac{\mathbf{n}\cdot\dot{\mathbf{p}}}{cr^{2}}\,\mathbf{n}\tag{10.2.55}$$
The same for the second term of (10.2.45). Its radial (retarded-time) derivative, as in (10.2.30), is
$$\frac{\partial}{\partial r}\!\left(\frac{\mathbf{n}\cdot\dot{\mathbf{p}}(t-r/c)}{cr}\right)\mathbf{n}=-\frac{\mathbf{n}\cdot\dot{\mathbf{p}}}{cr^{2}}\,\mathbf{n}-\frac{\mathbf{n}\cdot\ddot{\mathbf{p}}}{c^{2}r}\,\mathbf{n}\tag{10.2.56}$$
while the spatial gradient (by the analogue of (10.2.46)-(10.2.52)) contributes
$$\mathrm{grad}\left(\frac{\mathbf{n}\cdot\dot{\mathbf{p}}}{cr}\right)_{\text{space}}=\frac{\dot{\mathbf{p}}}{cr^{2}}-\frac{2\,(\mathbf{n}\cdot\dot{\mathbf{p}})}{cr^{2}}\,\mathbf{n}\tag{10.2.57}$$
Adding the retardation term of (10.2.56) to (10.2.57):
$$\mathrm{grad}\left(\frac{\mathbf{n}\cdot\dot{\mathbf{p}}}{cr}\right)=\frac{\dot{\mathbf{p}}}{cr^{2}}-\frac{2\,(\mathbf{n}\cdot\dot{\mathbf{p}})}{cr^{2}}\,\mathbf{n}-\frac{\mathbf{n}\cdot\ddot{\mathbf{p}}}{c^{2}r}\,\mathbf{n}\tag{10.2.58}$$
And finally, for the derivative
$$\frac{\partial\mathbf{A}}{\partial t}=\frac{\ddot{\mathbf{p}}}{cr}\tag{10.2.59}$$
Hence, writing $p$, $\dot{p}$, $\ddot{p}$ for the magnitudes and $\mathbf{e}_{p}$ for the common unit vector of $\mathbf{p}$, $\dot{\mathbf{p}}$, $\ddot{\mathbf{p}}$ (the dipole direction is fixed),
$$\mathbf{E}=-\nabla\phi(\mathbf{r},t)-\frac{1}{c}\,\frac{\partial\mathbf{A}}{\partial t}=\left(\frac{3p}{r^{3}}+\frac{3\dot{p}}{cr^{2}}+\frac{\ddot{p}}{c^{2}r}\right)(\mathbf{n}\cdot\mathbf{e}_{p})\,\mathbf{n}-\left(\frac{p}{r^{3}}+\frac{\dot{p}}{cr^{2}}+\frac{\ddot{p}}{c^{2}r}\right)\mathbf{e}_{p}\tag{10.2.60}$$
Final formula:
$$\mathbf{E}=\mathbf{D}=\left(\frac{3p}{r^{3}}+\frac{3\dot{p}}{cr^{2}}+\frac{\ddot{p}}{c^{2}r}\right)(\mathbf{n}\cdot\mathbf{e}_{p})\,\mathbf{n}-\left(\frac{p}{r^{3}}+\frac{\dot{p}}{cr^{2}}+\frac{\ddot{p}}{c^{2}r}\right)\mathbf{e}_{p}\tag{10.2.61}$$
Now from Fig. 41 and Fig. 42 we see that
$$\mathbf{n}=\mathbf{e}_{r},\qquad\mathbf{e}_{p}=\cos\theta\,\mathbf{e}_{r}-\sin\theta\,\mathbf{e}_{\theta}$$
because $\mathbf{E}$, $\mathbf{p}$ and $\mathbf{n}$ are in the same plane! Thus we have
$$(\mathbf{n}\cdot\mathbf{e}_{p})=\mathbf{e}_{r}\cdot\left(\cos\theta\,\mathbf{e}_{r}-\sin\theta\,\mathbf{e}_{\theta}\right)=\cos\theta\tag{10.2.62}$$
Fig. 42: $\mathbf{e}_{r}$ and $\mathbf{e}_{\theta}$ are in the same plane, but $\mathbf{e}_{\varphi}$ enters into the plane of the figure.
$$\mathbf{E}=E_{r}\,\mathbf{e}_{r}+E_{\theta}\,\mathbf{e}_{\theta}+E_{\varphi}\,\mathbf{e}_{\varphi}\qquad\text{and}\qquad\mathbf{B}=B_{r}\,\mathbf{e}_{r}+B_{\theta}\,\mathbf{e}_{\theta}+B_{\varphi}\,\mathbf{e}_{\varphi}\tag{10.2.63}$$
Substituting (10.2.62) in (10.2.61) and collecting the terms with $\mathbf{e}_{r}$ we get $E_{r}$:
$$E_{r}=\left(\frac{2p}{r^{3}}+\frac{2\dot{p}}{cr^{2}}\right)\cos\theta\tag{10.2.64}$$
$$E_{\theta}=\left(\frac{p}{r^{3}}+\frac{\dot{p}}{cr^{2}}+\frac{\ddot{p}}{c^{2}r}\right)\sin\theta\tag{10.2.65}$$
$$B_{\varphi}=\left(\frac{\dot{p}}{cr^{2}}+\frac{\ddot{p}}{c^{2}r}\right)\sin\theta\tag{10.2.66}$$
The last equation follows because, by (9.5.28), we have
$$\mathbf{B}=\nabla\times\mathbf{A}=\left[\nabla r\times\frac{\partial\mathbf{A}}{\partial r}\right]=\left[\mathbf{n}\times\frac{\partial\mathbf{A}}{\partial r}\right]\tag{10.2.67}$$
Differentiation of Eq. (10.2.28) for $\mathbf{A}$ yields
$$\frac{\partial}{\partial r}\,\mathbf{A}(\mathbf{r},t)=\frac{\partial}{\partial r}\!\left(\frac{\dot{\mathbf{p}}(t-r/c)}{cr}\right)=-\frac{\dot{\mathbf{p}}}{cr^{2}}-\frac{\ddot{\mathbf{p}}}{c^{2}r}\tag{10.2.68}$$
Hence,
$$\mathbf{B}=\left(\frac{\dot{p}}{cr^{2}}+\frac{\ddot{p}}{c^{2}r}\right)(\mathbf{e}_{p}\times\mathbf{n})\tag{10.2.69}$$
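The component formulas can be cross-checked numerically (an illustrative Python sketch, not part of the original text): build $\mathbf{E}$ from the vector formula (10.2.61) at sample values of $p$, $\dot{p}$, $\ddot{p}$, $r$, $\theta$, then project onto $\mathbf{e}_{r}$ and $\mathbf{e}_{\theta}$; the projections reproduce (10.2.64) and (10.2.65).

```python
import numpy as np

c = 1.0
p, pd, pdd = 1.3, 0.7, -0.4        # sample magnitudes of p, dp/dt, d2p/dt2
r, th = 2.5, 0.8                   # sample distance and polar angle

e_r  = np.array([np.sin(th), 0.0, np.cos(th)])   # n, in the plane phi = 0
e_th = np.array([np.cos(th), 0.0, -np.sin(th)])
e_p  = np.array([0.0, 0.0, 1.0])   # dipole along z: e_p = cos(th) e_r - sin(th) e_th

S3 = 3*p/r**3 + 3*pd/(c*r**2) + pdd/(c**2*r)
S1 = p/r**3 + pd/(c*r**2) + pdd/(c**2*r)
E = S3 * np.dot(e_r, e_p) * e_r - S1 * e_p       # Eq. (10.2.61)

E_r  = np.dot(E, e_r)
E_th = np.dot(E, e_th)
print(np.isclose(E_r,  (2*p/r**3 + 2*pd/(c*r**2)) * np.cos(th)))   # Eq. (10.2.64)
print(np.isclose(E_th, S1 * np.sin(th)))                           # Eq. (10.2.65)
```

Note how the $3(\mathbf{n}\cdot\mathbf{e}_{p})\,\mathbf{n}$ and $\mathbf{e}_{p}$ pieces conspire: along $\mathbf{e}_{r}$ the $\ddot{p}$ terms cancel exactly, which is why $E_{r}$ contains no radiation ($1/r$) part.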
In case you don't believe Fig. 42 and do not agree with (10.2.62), or you want to prove it by means of spherical coordinates, then (see the proof of (10.2.84)) problem 1.37 [8] and pag. 354-355 [12]:
First, a possible confusion in Griffiths' notation: do not mistake the position vector for the unit vector in spherical coordinates. The unit vector is
$$\hat{\mathbf{r}}=\mathbf{n}=\mathbf{e}_{r}=\frac{\mathbf{r}}{r}\qquad\text{or}\qquad\hat{\mathbf{x}}=\mathbf{e}_{x}\tag{10.2.70}$$
Thus
$$\hat{\mathbf{r}}\,r=\mathbf{r}\tag{10.2.71}$$
The position vector in Cartesian and spherical coordinates:
$$\mathbf{r}=x\,\hat{\mathbf{x}}+y\,\hat{\mathbf{y}}+z\,\hat{\mathbf{z}}=r\sin\theta\cos\varphi\,\hat{\mathbf{x}}+r\sin\theta\sin\varphi\,\hat{\mathbf{y}}+r\cos\theta\,\hat{\mathbf{z}}\tag{10.2.72}$$
Varying $r$ slightly, then
$$d\mathbf{r}=\frac{\partial\mathbf{r}}{\partial r}\,dr\tag{10.2.73}$$
which is a short vector pointing in the direction of increase in $r$. To make it a unit vector, one must divide by its length. Thus
$$\hat{\mathbf{r}}=\frac{\partial\mathbf{r}/\partial r}{\left|\partial\mathbf{r}/\partial r\right|};\qquad\hat{\boldsymbol{\theta}}=\frac{\partial\mathbf{r}/\partial\theta}{\left|\partial\mathbf{r}/\partial\theta\right|};\qquad\hat{\boldsymbol{\varphi}}=\frac{\partial\mathbf{r}/\partial\varphi}{\left|\partial\mathbf{r}/\partial\varphi\right|}\tag{10.2.74}$$
$$\frac{\partial\mathbf{r}}{\partial r}=\sin\theta\cos\varphi\,\hat{\mathbf{x}}+\sin\theta\sin\varphi\,\hat{\mathbf{y}}+\cos\theta\,\hat{\mathbf{z}},\qquad\left|\frac{\partial\mathbf{r}}{\partial r}\right|^{2}=\sin^{2}\theta\cos^{2}\varphi+\sin^{2}\theta\sin^{2}\varphi+\cos^{2}\theta=1$$
$$\frac{\partial\mathbf{r}}{\partial\theta}=r\cos\theta\cos\varphi\,\hat{\mathbf{x}}+r\cos\theta\sin\varphi\,\hat{\mathbf{y}}-r\sin\theta\,\hat{\mathbf{z}},\qquad\left|\frac{\partial\mathbf{r}}{\partial\theta}\right|^{2}=r^{2}$$
$$\frac{\partial\mathbf{r}}{\partial\varphi}=-r\sin\theta\sin\varphi\,\hat{\mathbf{x}}+r\sin\theta\cos\varphi\,\hat{\mathbf{y}},\qquad\left|\frac{\partial\mathbf{r}}{\partial\varphi}\right|^{2}=r^{2}\sin^{2}\theta$$
It follows that
$$\hat{\mathbf{r}}=\sin\theta\cos\varphi\,\hat{\mathbf{x}}+\sin\theta\sin\varphi\,\hat{\mathbf{y}}+\cos\theta\,\hat{\mathbf{z}},\quad\hat{\boldsymbol{\theta}}=\cos\theta\cos\varphi\,\hat{\mathbf{x}}+\cos\theta\sin\varphi\,\hat{\mathbf{y}}-\sin\theta\,\hat{\mathbf{z}},\quad\hat{\boldsymbol{\varphi}}=-\sin\varphi\,\hat{\mathbf{x}}+\cos\varphi\,\hat{\mathbf{y}}\tag{10.2.75}$$
Check:
$$\hat{\mathbf{r}}\cdot\hat{\mathbf{r}}=\sin^{2}\theta\,(\cos^{2}\varphi+\sin^{2}\varphi)+\cos^{2}\theta=1,\qquad\hat{\mathbf{r}}\cdot\hat{\boldsymbol{\theta}}=\sin\theta\cos\theta-\cos\theta\sin\theta=0,\ \text{etc.}$$
$$\sin\theta\,\hat{\mathbf{r}}=\sin^{2}\theta\cos\varphi\,\hat{\mathbf{x}}+\sin^{2}\theta\sin\varphi\,\hat{\mathbf{y}}+\sin\theta\cos\theta\,\hat{\mathbf{z}}\tag{10.2.76}$$
$$\cos\theta\,\hat{\boldsymbol{\theta}}=\cos^{2}\theta\cos\varphi\,\hat{\mathbf{x}}+\cos^{2}\theta\sin\varphi\,\hat{\mathbf{y}}-\cos\theta\sin\theta\,\hat{\mathbf{z}}\tag{10.2.77}$$
Adding (10.2.76) and (10.2.77) together yields
$$\sin\theta\,\hat{\mathbf{r}}+\cos\theta\,\hat{\boldsymbol{\theta}}=\cos\varphi\,\hat{\mathbf{x}}+\sin\varphi\,\hat{\mathbf{y}}\tag{10.2.78}$$
And the last of (10.2.75):
$$\hat{\boldsymbol{\varphi}}=-\sin\varphi\,\hat{\mathbf{x}}+\cos\varphi\,\hat{\mathbf{y}}\tag{10.2.79}$$
Multiply (10.2.78) by $\cos\varphi$ and (10.2.79) by $\sin\varphi$ and subtract:
$$\hat{\mathbf{x}}=\sin\theta\cos\varphi\,\hat{\mathbf{r}}+\cos\theta\cos\varphi\,\hat{\boldsymbol{\theta}}-\sin\varphi\,\hat{\boldsymbol{\varphi}}\tag{10.2.80}$$
Multiply (10.2.78) by $\sin\varphi$ and (10.2.79) by $\cos\varphi$ and add:
$$\hat{\mathbf{y}}=\sin\theta\sin\varphi\,\hat{\mathbf{r}}+\cos\theta\sin\varphi\,\hat{\boldsymbol{\theta}}+\cos\varphi\,\hat{\boldsymbol{\varphi}}\tag{10.2.81}$$
Finally, for
$$\cos\theta\,\hat{\mathbf{r}}=\sin\theta\cos\theta\cos\varphi\,\hat{\mathbf{x}}+\sin\theta\cos\theta\sin\varphi\,\hat{\mathbf{y}}+\cos^{2}\theta\,\hat{\mathbf{z}}\tag{10.2.82}$$
$$\sin\theta\,\hat{\boldsymbol{\theta}}=\sin\theta\cos\theta\cos\varphi\,\hat{\mathbf{x}}+\sin\theta\cos\theta\sin\varphi\,\hat{\mathbf{y}}-\sin^{2}\theta\,\hat{\mathbf{z}}\tag{10.2.83}$$
Subtract (10.2.83) from (10.2.82):
$$\hat{\mathbf{z}}=\cos\theta\,\hat{\mathbf{r}}-\sin\theta\,\hat{\boldsymbol{\theta}}\tag{10.2.84}$$
And together:
$$\hat{\mathbf{x}}=\sin\theta\cos\varphi\,\hat{\mathbf{r}}+\cos\theta\cos\varphi\,\hat{\boldsymbol{\theta}}-\sin\varphi\,\hat{\boldsymbol{\varphi}},\quad\hat{\mathbf{y}}=\sin\theta\sin\varphi\,\hat{\mathbf{r}}+\cos\theta\sin\varphi\,\hat{\boldsymbol{\theta}}+\cos\varphi\,\hat{\boldsymbol{\varphi}},\quad\hat{\mathbf{z}}=\cos\theta\,\hat{\mathbf{r}}-\sin\theta\,\hat{\boldsymbol{\theta}}\tag{10.2.85}$$
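These relations are easy to verify numerically (an illustrative Python sketch, not part of the original text): build $\hat{\mathbf{r}}$, $\hat{\boldsymbol{\theta}}$, $\hat{\boldsymbol{\varphi}}$ at a sample $(\theta,\varphi)$, check that the triad is orthonormal, and reconstruct $\hat{\mathbf{x}}$, $\hat{\mathbf{y}}$, $\hat{\mathbf{z}}$ via (10.2.85).

```python
import numpy as np

th, ph = 0.7, 1.9   # sample angles
st, ct, sp_, cp = np.sin(th), np.cos(th), np.sin(ph), np.cos(ph)

r_hat  = np.array([st*cp, st*sp_, ct])          # Eq. (10.2.75)
th_hat = np.array([ct*cp, ct*sp_, -st])
ph_hat = np.array([-sp_, cp, 0.0])

M = np.vstack([r_hat, th_hat, ph_hat])
print(np.allclose(M @ M.T, np.eye(3)))          # orthonormal triad

x_hat = st*cp*r_hat + ct*cp*th_hat - sp_*ph_hat # Eq. (10.2.85)
y_hat = st*sp_*r_hat + ct*sp_*th_hat + cp*ph_hat
z_hat = ct*r_hat - st*th_hat
print(np.allclose([x_hat, y_hat, z_hat], np.eye(3)))
```

The second check is exactly the inversion used in (10.2.62): expressing a Cartesian dipole direction $\hat{\mathbf{z}}$ in the local spherical basis gives $\mathbf{e}_{p}=\cos\theta\,\mathbf{e}_{r}-\sin\theta\,\mathbf{e}_{\theta}$.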
10.2.4 Dipole radiation - Born proof
Born [50] pag. 86.
In this section we take the same approach as in the preceding sections, but including all terms.
Born introduced the Hertz vectors by means of the relations
$$\mathbf{A}=\frac{1}{c}\,\dot{\boldsymbol{\Pi}}_{e}+\mathrm{curl}\,\boldsymbol{\Pi}_{m},\qquad\phi=-\mathrm{div}\,\boldsymbol{\Pi}_{e}\tag{10.2.86}$$
for electric dipoles and magnetic dipoles. The Lorentz condition is now automatically satisfied, and $\boldsymbol{\Pi}_{e}$ and $\boldsymbol{\Pi}_{m}$ are also solutions of the inhomogeneous wave equations
$$\nabla^{2}\boldsymbol{\Pi}_{e}-\frac{1}{c^{2}}\,\ddot{\boldsymbol{\Pi}}_{e}=-4\pi\,\mathbf{p}\tag{10.2.87}$$
$$\nabla^{2}\boldsymbol{\Pi}_{m}-\frac{1}{c^{2}}\,\ddot{\boldsymbol{\Pi}}_{m}=-4\pi\,\mathbf{m}\tag{10.2.88}$$
For the electric part
$$\boldsymbol{\Pi}_{e}=\int\frac{p(t-R/c)}{R}\,\mathbf{n}\,dV'\tag{10.2.89}$$
$$\boldsymbol{\Pi}_{e}=\frac{p(t-R/c)}{R}\,\mathbf{n}\tag{10.2.90}$$
and for the magnetic part
$$\boldsymbol{\Pi}_{m}=\int\frac{m(t-R/c)}{R}\,\mathbf{n}\,dV'\tag{10.2.91}$$
which are vectors; in this section $\mathbf{n}$ denotes the fixed unit vector along the dipole moment.
Outside the source, in vacuum, the fields become
$$\mathbf{E}=\mathbf{D}=\mathrm{curl\,curl}\,\boldsymbol{\Pi}_{e}\tag{10.2.92}$$
$$\mathbf{B}=\mathbf{H}=\frac{1}{c}\,\mathrm{curl}\,\dot{\boldsymbol{\Pi}}_{e}\tag{10.2.93}$$
Using the identity
$$\mathrm{curl\,curl}\equiv\mathrm{grad\,div}-\nabla^{2}\tag{10.2.94}$$
(10.2.92) may be written in the form
$$\mathbf{E}=\mathbf{D}=\mathrm{grad\,div}\,\boldsymbol{\Pi}_{e}-\frac{1}{c^{2}}\,\ddot{\boldsymbol{\Pi}}_{e}\tag{10.2.95}$$
The first step is taking the divergence. Thus formula (10.2.21) takes the form
$$\phi(\mathbf{r},t)=\frac{\mathbf{n}\cdot\mathbf{p}(t-r/c)}{r^{2}}+\frac{\mathbf{n}\cdot\dot{\mathbf{p}}(t-r/c)}{cr}\tag{10.2.96}$$
or, the same in Born's notation,
$$\mathrm{div}\,\boldsymbol{\Pi}_{e}=-\left[\frac{p}{R^{3}}+\frac{\dot{p}}{cR^{2}}\right](\mathbf{n}\cdot\mathbf{R})\tag{10.2.97}$$
173
The second step is taking the gradient of (10.2.97). (Savelyev did not apply the full chain rule for the derivatives because he was looking only for the far fields.)

    E^(2) = -[ p/R³ + ṗ/(c R²) ] grad(n·R) - (n·R) grad[ p/R³ + ṗ/(c R²) ]    (10.2.98)

After differentiating we have

    grad(n·R) = grad( n_x X + n_y Y + n_z Z ) = Σ_i n_i e_i = n    (10.2.99)

    grad(1/R³) = ∂/∂R (1/R³) n = -(3/R⁴) n = -3R/R⁵    (10.2.100)

    grad(1/R²) = ∂/∂R (1/R²) n = -(2/R³) n = -2R/R⁴    (10.2.101)
and we have taken advantage of the circumstance (10.2.20) that

    ∂p/∂R = -(1/c) ∂p/∂t = -(1/c) ṗ   and   ∂ṗ/∂R = -(1/c) ∂ṗ/∂t = -(1/c) p̈    (10.2.102)
We get the contributions of the retarded time argument

    [ ∂ṗ/∂R ] n/(c R²) = -[ p̈/(c² R³) ] R   and   [ ∂p/∂R ] n/R³ = -[ ṗ/(c R⁴) ] R    (10.2.103)
Summing (10.2.100), (10.2.101) and (10.2.103) we get:

    -(n·R) grad[ p/R³ + ṗ/(c R²) ]
        = [ 3p/R⁵ + 2ṗ/(c R⁴) + ṗ/(c R⁴) + p̈/(c² R³) ] (n·R) R
        = [ 3p/R⁵ + 3ṗ/(c R⁴) + p̈/(c² R³) ] (n·R) R    (10.2.104)
And the second time derivative gives

    -(1/c²) ∂²Π_e/∂t² = -(1/c²) [ p̈(t - R/c)/R ] n = -[ p̈/(c² R) ] n    (10.2.105)
And finally, summing (10.2.98), (10.2.104) and (10.2.105) yields

    E = D = [ 3p/R⁵ + 3ṗ/(c R⁴) + p̈/(c² R³) ] (n·R) R - [ p/R³ + ṗ/(c R²) + p̈/(c² R) ] n    (10.2.106)
Let us go over to the calculation of the magnetic field. The vector potential is a function of r; therefore, by (9.5.28), we have, as in Savelyev,

    B = curl Π̇_e = [ ∇r × ∂Π̇_e/∂r ] = [ n × ∂Π̇_e/∂r ]    (10.2.107)
Differentiation of Eq. (10.2.53) for Π̇_e yields

    ∂Π̇_e/∂R = ∂/∂R [ ṗ(t - R/c)/(c R) ] n = [ -ṗ/(c R²) - p̈/(c² R) ] n    (10.2.108)
Substituting (10.2.108) in (10.2.107) yields

    B = H = [ ṗ/(c R³) + p̈/(c² R²) ] (n × R)    (10.2.109)
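As a quick sanity check on (10.2.106), the 1/R (far-zone) part of E must be transverse to R. A minimal numerical sketch, with illustrative values we assume here:

```python
import numpy as np

# Far-zone part of (10.2.106):
#   E_far = p''/(c^2 R^3) (n.R) R - p''/(c^2 R) n
# Its projection on R must vanish (transversality of the radiation field).
c = 3.0e10                              # cm/s, Gaussian units
pdd = 2.5                               # assumed magnitude of the second derivative of p
n = np.array([0.0, 0.0, 1.0])           # dipole axis
R_vec = np.array([3.0, 4.0, 12.0])      # field point
R = np.linalg.norm(R_vec)               # |R| = 13 here

E_far = pdd / (c**2 * R**3) * np.dot(n, R_vec) * R_vec - pdd / (c**2 * R) * n
assert abs(np.dot(E_far, R_vec)) < 1e-18
```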
Paragraph 10.3 Kuno derivation
Paragraph 10.4 Agarwal & Berne derivation
Chapter 11 ORDINARY DIFFERENTIAL EQUATIONS
These notes provide an introduction to the analytical solution of ordinary differential equations. The emphasis is on simple equations of first and second order, particularly those with constant coefficients. Brief treatment is given to non-homogeneous equations of second and higher orders.
A differential equation is an equation that contains a derivative. The simplest kind of differential equation is shown below:
Fig. 43: geometry of the dipole field (components E_R and E_θ, unit vector n, angle θ, radius R, axes x, y, z).
    dy/dx = f(x)   with   y = y₀ at x = x₀    (11.1)
In general, differential equations have an infinite number of solutions. In order to obtain a unique solution, one or more initial conditions (or boundary conditions) must be specified. In the above example, the statement that y = y₀ at x = x₀ is an initial condition. (The difference between initial and boundary conditions, which is really one of naming, is discussed below.) The differential equation in (11.1) can be solved as a definite integral:

    y - y₀ = ∫_{x₀}^{x} f(x) dx    (11.2)

The definite integral can either be found from a table of integrals or solved numerically, depending on f(x). The initial (or boundary) condition (y = y₀ at x = x₀) enters the solution directly. Changes in the values of y₀ or x₀ affect the ultimate solution for y.
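When the integral in (11.2) must be done numerically, any quadrature rule works. A minimal sketch with the composite trapezoidal rule (f, x₀, y₀ chosen here purely for illustration):

```python
import math

# Solve dy/dx = f(x), y(x0) = y0 as the definite integral (11.2),
# evaluated with the composite trapezoidal rule.
def solve_by_quadrature(f, x0, y0, x, n=10000):
    h = (x - x0) / n
    s = 0.5 * (f(x0) + f(x)) + sum(f(x0 + i * h) for i in range(1, n))
    return y0 + h * s

# Example: f(x) = cos(x), y(0) = 1  ->  exact solution y(x) = 1 + sin(x)
y = solve_by_quadrature(math.cos, 0.0, 1.0, 1.0)
assert abs(y - (1.0 + math.sin(1.0))) < 1e-6
```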
Paragraph 11.1 Linear equations
See Smirnov, page 9 [58] and Savelyev [16], vol. II, page 192.
11.1.1 Method of varying the arbitrary constant. Linear equations.
See Smirnov, page 9 [58]:
An equation of the form

    y' + P(x) y + Q(x) = 0    (11.1.1)

is called a linear equation of the first order.
We start by considering the equation without the term Q(x):

    z' + P(x) z = 0

The variables are separable here:

    dz/z + P(x) dx = 0

and we get

    z = C e^{-∫P(x) dx}    (11.1.2)

We integrate the given linear equation (11.1.1) by using the method of varying the arbitrary constant, i.e. we seek a solution of the equation in a form analogous to the form (11.1.2) for z:

    y = u e^{-∫P(x) dx}    (11.1.3)

where u is no longer a constant but a required function of x. We get by differentiation:

    y' = u' e^{-∫P(x) dx} - P(x) u e^{-∫P(x) dx}

Substitution in (11.1.1) gives:

    u' e^{-∫P(x) dx} + Q(x) = 0

    u' = -Q(x) e^{∫P(x) dx} ,  whence  u = C - ∫ Q(x) e^{∫P(x) dx} dx

and we finally get by equation (11.1.3) for y:
    y = e^{-∫P(x) dx} [ C - ∫ Q(x) e^{∫P(x) dx} dx ]    (11.1.4)
If we replace the primitives by definite integrals, we can rewrite (11.1.4) as

    y = e^{-∫_{x₀}^{x} P(x) dx} [ C - ∫_{x₀}^{x} Q(x) e^{∫_{x₀}^{x} P(x) dx} dx ]    (11.1.5)
where x₀ is a definite number, though chosen arbitrarily. On substituting the value x = x₀ for the variable upper limit, the right-hand side of the formula written is equal to C, since integrals with identical upper and lower limits are equal to zero; in other words, the constant C in formula (11.1.5) is the value of the function y at x = x₀. This value, which we denote by y₀, is called the initial value of the solution.
We denote this fact by writing:

    y|_{x=x₀} = y₀    (11.1.6)
If the initial value of the required solution is given for x = x₀, (11.1.5) yields a completely defined solution of the equation:

    y = e^{-∫_{x₀}^{x} P(x) dx} [ y₀ - ∫_{x₀}^{x} Q(x) e^{∫_{x₀}^{x} P(x) dx} dx ]    (11.1.7)
Condition (11.1.6) is called the initial condition.
Remarks:
If we take Q(x) ≡ 0, we obtain the solution of the homogeneous equation

    y' + P(x) y = 0

satisfying condition (11.1.6):

    y = y₀ e^{-∫_{x₀}^{x} P(x) dx}    (11.1.8)
It follows from (11.1.5) that solutions of a linear differential equation have the form

    y = φ₁(x) C + φ₂(x)    (11.1.9)

i.e. y is a linear function of the arbitrary constant.
Let y₁ be a solution of eq. (11.1.1). On setting

    y = y₁ + z

we get the equation for z:

    z' + P(x) z + [ y₁' + P(x) y₁ + Q(x) ] = 0

The sum appearing in square brackets is equal to zero, since y₁ is a solution of equation (11.1.1) by hypothesis. It follows that z is a solution of the equation when the term Q(x) is absent and is defined by (11.1.2), whence:

    y = y₁ + C e^{-∫P(x) dx}    (11.1.10)
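Formula (11.1.7) can be evaluated numerically for any P and Q. A hedged sketch on an assumed example, P(x) = 1 and Q(x) = -x, i.e. y' + y - x = 0 with y(0) = 0, whose exact solution is y = x - 1 + e^{-x}:

```python
import math

# Composite trapezoidal rule, used for all quadratures below.
def trapz(g, a, b, n=1000):
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * h) for i in range(1, n)))

# Direct evaluation of (11.1.7) for y' + P(x) y + Q(x) = 0, y(x0) = y0.
def linear_first_order(P, Q, x0, y0, x):
    F = lambda t: trapz(P, x0, t)                        # integral of P from x0 to t
    inner = trapz(lambda t: Q(t) * math.exp(F(t)), x0, x, n=400)
    return math.exp(-F(x)) * (y0 - inner)

y1 = linear_first_order(lambda t: 1.0, lambda t: -t, 0.0, 0.0, 1.0)
assert abs(y1 - math.exp(-1.0)) < 1e-4                   # exact y(1) = e^{-1}
```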
11.1.2 Bernoulli's and Riccati's equations
Bernoulli's equation is a generalisation of the linear eq. (11.1.1):

    y' + P(x) y + Q(x) y^m = 0    (11.1.11)

where the exponent m can be considered as differing from zero and unity, since the equation is linear in those cases. We divide both sides by y^m:

    y^{-m} y' + P(x) y^{1-m} + Q(x) = 0

and we introduce a new unknown function u instead of y:

    u = y^{1-m}   and   u' = (1-m) y^{-m} y'

The equation now reduces to the form:

    u' + P₁(x) u + Q₁(x) = 0    (11.1.12)

where P₁(x) = (1-m) P(x) and Q₁(x) = (1-m) Q(x), i.e. Bernoulli's equation reduces to a linear equation.
Riccati's equation

    y' + P(x) y + Q(x) y² + R(x) = 0    (11.1.13)

does not reduce to quadratures in the case of arbitrary coefficients. It can be reduced to a linear equation if any one particular solution is known. Let y₁(x) be in fact a solution of equation (11.1.13), i.e.

    y₁' + P(x) y₁ + Q(x) y₁² + R(x) = 0    (11.1.14)

By the substitution

    y = y₁ + 1/u

in (11.1.13), and taking into account (11.1.14), we obtain a linear equation of the form:

    u' - [ P(x) + 2 Q(x) y₁ ] u - Q(x) = 0    (11.1.15)
Paragraph 11.2 Second order equations
11.2.1 Second order equations
Second-order differential equations are among the most common in mechanical engineering applications. Many of these equations arise from Newton's second law of motion, F = ma, where the acceleration is the second derivative of the displacement. We start by considering linear, second-order differential equations. The most general such equation has the following form:

    d²y/dx² + p(x) dy/dx + q(x) y = r(x)    (11.2.1)

Here we assume that, if the physical model has a factor multiplying the second derivative, we can divide the entire equation by that factor. The resulting equation has no factor multiplying the second derivative. A simpler class of differential equations results if the right-hand-side term r(x) is zero. This is called the linear, second-order, homogeneous differential equation.
    d²y/dx² + p(x) dy/dx + q(x) y = 0    (11.2.2)

For this linear homogeneous differential equation we have the general result that any linear combination of linearly independent solutions to the equation is also a solution. For example, if y₁ and y₂ are solutions, then the linear combination y = c₁y₁ + c₂y₂ is also a solution. This is true for the linear homogeneous equation only.
For the solution of second-order, linear, homogeneous equations, we will generally have two linearly independent solutions that provide a basis for all solutions of the differential equation. These two independent solutions can be added in the form shown above, y = c₁y₁ + c₂y₂, to give a general solution to the differential equation. The values of c₁ and c₂ are then found by fitting initial conditions or boundary conditions for the problem. Two such conditions are required. Initial conditions typically specify the value of y and its first derivative at some (initial) value of x. Boundary conditions specify the value of y at two different x locations.
11.2.2 Constant coefficients
The easiest case to consider is the equation with constant coefficients shown below:

    d²y/dx² + a dy/dx + b y = 0    (11.2.3)

Two solutions to this equation are shown below:

    y₁ = e^{λ₁x}    (11.2.4)

    y₂ = e^{λ₂x}    (11.2.5)

where λ₁ and λ₂ are the roots of the characteristic quadratic λ² + aλ + b = 0:

    λ₁ = [ -a + √(a² - 4b) ]/2 = -a/2 + √(a²/4 - b)    (11.2.6)

    λ₂ = [ -a - √(a² - 4b) ]/2 = -a/2 - √(a²/4 - b)    (11.2.7)

The general solution is a linear combination of the two solutions:

    y = C₁y₁ + C₂y₂ = C₁ e^{λ₁x} + C₂ e^{λ₂x}    (11.2.8)
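The roots (11.2.6) and (11.2.7) are mechanical to compute; using complex square roots keeps the case a² < 4b uniform. A minimal sketch:

```python
import cmath

# Characteristic roots of y'' + a y' + b y = 0, as in (11.2.6)-(11.2.7).
def char_roots(a, b):
    disc = cmath.sqrt(a * a - 4.0 * b)
    return (-a + disc) / 2.0, (-a - disc) / 2.0

l1, l2 = char_roots(3.0, 2.0)      # distinct real roots
assert abs(l1 - (-1.0)) < 1e-12 and abs(l2 - (-2.0)) < 1e-12
l1, l2 = char_roots(2.0, 1.0)      # double root, a^2 = 4b
assert l1 == l2 == -1.0
l1, l2 = char_roots(0.0, 4.0)      # complex pair, a^2 < 4b
assert abs(l1 - 2j) < 1e-12 and abs(l2 + 2j) < 1e-12
```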
The two solutions in equation (11.2.8) will not be linearly independent if λ₁ = λ₂; this will occur if a² = 4b, so that λ₁ = λ₂ = -a/2 and y₁ = e^{-ax/2}. In this case we use a method known as reduction of order to find the second solution. We start by writing the second solution, y₂, in terms of the first solution, y₁, and an unknown function, u(x). The derivation shown below finds an expression for u(x) that gives us our second solution.

    y₂ = u y₁ = u e^{λ₁x} = u e^{-ax/2}    (11.2.9)
Substituting this equation into equation (11.2.3) gives:

    d²y₂/dx² + a dy₂/dx + b y₂ = u d²y₁/dx² + y₁ d²u/dx² + 2 (du/dx)(dy₁/dx) + a u dy₁/dx + a y₁ du/dx + b u y₁ = 0    (11.2.10)
We can combine the three terms multiplied by u to get a factor which is the same as the original
differential equation.
    u [ d²y₁/dx² + a dy₁/dx + b y₁ ] + y₁ d²u/dx² + (du/dx) ( 2 dy₁/dx + a y₁ ) = 0    (11.2.11)
Since y₁ is a solution of the differential equation in (11.2.3), the term in brackets is zero. Setting this term to zero and substituting y₁ = e^{-ax/2} and dy₁/dx = (-a/2) e^{-ax/2} gives the result, shown below, that

    e^{-ax/2} d²u/dx² + (du/dx) [ 2 (-a/2) e^{-ax/2} + a e^{-ax/2} ] = e^{-ax/2} d²u/dx² = 0    (11.2.12)

Equation (11.2.12) can only be satisfied if

    d²u/dx² = 0  ⇒  u = Ax + B    (11.2.13)
Since y₂ = u y₁, we have the following solution for y₂:

    y₂ = u y₁ = (Ax + B) e^{-ax/2}    (11.2.14)

The general solution when a² = 4b is given by a linear combination of the solution in equation (11.2.14) and y₁ = e^{-ax/2}. Since the solution for y₁ is contained as a linear factor in the solution for y₂, we can use the following pair of solutions in the double-root case, when a² = 4b. Both solutions are then used to give the general solution:

    y = C₁y₁ + C₂y₂ = C₁ e^{λ₁x} + C₂ x e^{λ₂x} = (C₁ + C₂x) e^{-ax/2}    (11.2.15)
When a² < 4b, the argument of the square root is negative and we will have complex values for λ₁ and λ₂. In this case we define μ² = b - a²/4 (the negative of the argument of the square root) and use this definition to write the values for λ₁ and λ₂ as shown below, where i² = -1:

    μ² = b - a²/4 ,   λ₁, λ₂ = -a/2 ± iμ    (11.2.16)

We can get a modified form of the solution in equation (11.2.8) in this case, one that gives a better indication of the actual behaviour. To do this we use the Euler relationship for complex exponentials:

    e^{iθ} = cosθ + i sinθ    (11.2.17)

Substituting equation (11.2.17) into equations (11.2.4) and (11.2.5) gives the following result.
    y₁ = e^{λ₁x} = e^{-ax/2} e^{iμx} = e^{-ax/2} [ cos μx + i sin μx ]    (11.2.18)

    y₂ = e^{λ₂x} = e^{-ax/2} e^{-iμx} = e^{-ax/2} [ cos(-μx) + i sin(-μx) ] = e^{-ax/2} [ cos μx - i sin μx ]¹    (11.2.19)
The general solution is a linear combination of the two solutions in equations (11.2.18) and (11.2.19):

    y = C₁y₁ + C₂y₂ = C₁ e^{-ax/2} [ cos μx + i sin μx ] + C₂ e^{-ax/2} [ cos μx - i sin μx ]

    y = e^{-ax/2} [ A cos μx + B sin μx ]    (11.2.20)

In the final step of this derivation, we have defined A = C₁ + C₂ and B = i(C₁ - C₂).
¹ In the final step we use the trigonometric relations cos(-x) = cos(x) and sin(-x) = -sin(x).
However, in practice, we can use the final form of equation (11.2.20) as the general solution when a² < 4b and use initial or boundary conditions to determine the constants A and B.
If we are given the initial values of y and dy/dx as y₀ and v₀, respectively, then we can find the constants in the general solution for each of the three cases considered above:
(1) two distinct real roots,
(2) the double root, and
(3) two complex roots.
For two distinct real roots, equation (11.2.8) gives the following equations for the initial conditions:

    y₀ = y(0) = C₁ e^{λ₁(0)} + C₂ e^{λ₂(0)} = C₁ + C₂

    v₀ = dy/dx|_{x=0} = λ₁C₁ e^{λ₁(0)} + λ₂C₂ e^{λ₂(0)} = λ₁C₁ + λ₂C₂    (11.2.21)
Solving for C₁ and C₂ gives:

    C₁ = ( λ₂y₀ - v₀ )/( λ₂ - λ₁ ) ,   C₂ = ( -λ₁y₀ + v₀ )/( λ₂ - λ₁ )    (11.2.22)
Substituting these results into equation (11.2.8) gives the general solution for two distinct real roots in terms of the initial conditions on y and dy/dx:

    y = C₁y₁ + C₂y₂ = [ ( λ₂y₀ - v₀ )/( λ₂ - λ₁ ) ] e^{λ₁x} + [ ( -λ₁y₀ + v₀ )/( λ₂ - λ₁ ) ] e^{λ₂x}    (11.2.23)
When there is a double root, the solution is given by equation (11.2.15). Using that equation for the initial conditions on y and dy/dx gives the following result:

    y₀ = y(0) = ( C₁ + C₂(0) ) e^{-a(0)/2} = C₁

    v₀ = dy/dx|_{x=0} = ( C₁ + C₂(0) ) ( -a/2 ) e^{-a(0)/2} + C₂ e^{-a(0)/2} = -(a/2) C₁ + C₂    (11.2.24)
Here C₁ = y₀ and C₂ = v₀ + a y₀/2, and the solution for y is:

    y = [ y₀ + ( v₀ + a y₀/2 ) x ] e^{-ax/2}    (11.2.25)
Finally, in the case of complex roots, we get the following equations for the initial conditions on y₀ and v₀:

    y₀ = e^{-a(0)/2} [ A cos μ(0) + B sin μ(0) ] = A    (11.2.26)

    v₀ = ( -a/2 ) e^{-a(0)/2} [ A cos μ(0) + B sin μ(0) ] + e^{-a(0)/2} μ [ B cos μ(0) - A sin μ(0) ] = -aA/2 + μB    (11.2.27)
This gives A = y₀ and B = ( v₀ + a y₀/2 )/μ, so that the general solution for the specified initial conditions is

    y = e^{-ax/2} [ y₀ cos μx + (1/μ) ( v₀ + a y₀/2 ) sin μx ]    (11.2.28)
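The three initial-value solutions (11.2.23), (11.2.25) and (11.2.28) can be gathered into one small routine. A sketch (the function name is ours):

```python
import math

# Initial-value solution of y'' + a y' + b y = 0, y(0) = y0, y'(0) = v0,
# selecting among (11.2.23), (11.2.25) and (11.2.28) by the discriminant.
def solve_ivp_2nd(a, b, y0, v0):
    disc = a * a - 4.0 * b
    if disc > 0:                                   # two distinct real roots
        rt = math.sqrt(disc)
        l1, l2 = (-a + rt) / 2.0, (-a - rt) / 2.0
        C1 = (l2 * y0 - v0) / (l2 - l1)
        C2 = (-l1 * y0 + v0) / (l2 - l1)
        return lambda x: C1 * math.exp(l1 * x) + C2 * math.exp(l2 * x)
    if disc == 0:                                  # double root
        return lambda x: (y0 + (v0 + a * y0 / 2.0) * x) * math.exp(-a * x / 2.0)
    mu = math.sqrt(b - a * a / 4.0)                # complex roots
    return lambda x: math.exp(-a * x / 2.0) * (
        y0 * math.cos(mu * x) + (v0 + a * y0 / 2.0) / mu * math.sin(mu * x))

# Undamped oscillator y'' + 4y = 0, y(0) = 1, y'(0) = 0  ->  y = cos(2x)
y = solve_ivp_2nd(0.0, 4.0, 1.0, 0.0)
assert abs(y(0.7) - math.cos(1.4)) < 1e-12
```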
Linear combinations of sine and cosine of μx can be written in terms of cos(μx - δ) by using the trigonometric formula for the cosine of the difference of two angles:

    C cos(μx - δ) = C cosδ cos μx + C sinδ sin μx = A cos μx + B sin μx    (11.2.29)

The two expressions for C cos(μx - δ) in the above equation are equivalent if the following two equations hold:

    A = C cosδ ,   B = C sinδ    (11.2.30)

The relationships between the constants A and B for the sine-and-cosine expression and the constants C and δ for the cos(μx - δ) expression are shown below:

    A² + B² = C² cos²δ + C² sin²δ = C²  ⇒  C² = A² + B²    (11.2.31)

    B/A = C sinδ / (C cosδ) = tanδ  ⇒  δ = tan⁻¹(B/A)    (11.2.32)

We can rewrite equation (11.2.28) as follows:

    y = C e^{-ax/2} cos(μx - δ)    (11.2.33)
where

    C = √[ y₀² + (1/μ²) ( v₀ + a y₀/2 )² ]    (11.2.34)

and

    δ = tan⁻¹[ ( v₀ + a y₀/2 ) / ( μ y₀ ) ]    (11.2.35)
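The amplitude-phase conversion (11.2.31)-(11.2.32) is a two-liner; using atan2 rather than tan⁻¹ of the ratio also handles A ≤ 0 correctly. A sketch with assumed values:

```python
import math

# Convert A cos(mu x) + B sin(mu x) into C cos(mu x - delta),
# per (11.2.31) and (11.2.32).
def amplitude_phase(A, B):
    return math.hypot(A, B), math.atan2(B, A)

A, B, mu = 3.0, 4.0, 2.0
C, delta = amplitude_phase(A, B)
x = 0.37
lhs = A * math.cos(mu * x) + B * math.sin(mu * x)
assert abs(C * math.cos(mu * x - delta) - lhs) < 1e-12
assert abs(C - 5.0) < 1e-12
```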
11.2.3 Non-homogeneous equations
The solution to a linear non-homogeneous equation such as the second-order equation shown below,

    d²y/dx² + p(x) dy/dx + q(x) y = r(x)    (11.2.36)

can be written in terms of the solution y_H of the corresponding homogeneous equation,

    d²y_H/dx² + p(x) dy_H/dx + q(x) y_H = 0    (11.2.37)

The total solution is the sum of the homogeneous solution and a particular solution, y_P, that satisfies equation (11.2.36):

    y = y_H + y_P    (11.2.38)
This can be seen by substituting equation (11.2.38) into equation (11.2.36), which gives

    d²(y_H + y_P)/dx² + p(x) d(y_H + y_P)/dx + q(x)(y_H + y_P)
        = [ d²y_H/dx² + p(x) dy_H/dx + q(x) y_H ] + [ d²y_P/dx² + p(x) dy_P/dx + q(x) y_P ] = r(x)

The first bracket vanishes by (11.2.37), so y_P alone must satisfy (11.2.36). The solution to the non-homogeneous equation, then, proceeds by first finding the solution to the homogeneous equation and then finding the particular solution. An important part of this process is that the arbitrary constants in the homogeneous solution should not be determined until the final solution, the sum of the homogeneous and particular solutions, is found.
One method for finding the particular solution is known as the method of undetermined coefficients. In this method, a trial solution for y_P is proposed using the trial solutions shown in the table below.

    For these r(x)                          Start with this y_P as a trial solution
    r(x) = A e^{ax}                         y_P = B e^{ax}
    r(x) = A x^n                            y_P = a₀ + a₁x + ... + a_n x^n
    r(x) = A sin μx  or  A cos μx           y_P = B sin μx + C cos μx
    r(x) = A e^{ax} sin μx  or
           A e^{ax} cos μx                  y_P = e^{ax} ( B sin μx + C cos μx )

If r(x) contains more than one of the terms shown on the left, include the corresponding y_P terms in the general solution for y_P. If r(x) contains an nth-order polynomial in x, y_P should include a polynomial with all possible powers of x from x⁰ to x^n.
If any term in r(x) is proportional to part of the solution for y_H, multiply the proposed y_P in the table above by x. E.g., if both r(x) and y_H have a term in e^{ax}, with the same value of a, y_P should contain a term in x e^{ax} instead of e^{ax}.
The solutions proposed for y_P in the table above have undetermined coefficients. These coefficients are found by substituting the proposed solution into the differential equation and equating coefficients of like terms on both sides of the resulting equation.
For example, we can apply the method of undetermined coefficients to the solution of the following equation:

    d²y/dx² + 3 dy/dx + 2y = x²

First we find the solution to the homogeneous equation:
    d²y_H/dx² + 3 dy_H/dx + 2 y_H = 0

We have previously shown that the solution to this equation is given by equations (11.2.6) and (11.2.7), where we have to find the roots of the characteristic equation. For this problem, those roots are found as follows:

    λ₁ = [ -3 + √(3² - 4(2)) ]/2 = -1

    λ₂ = [ -3 - √(3² - 4(2)) ]/2 = -2

Thus the solution to the homogeneous equation is given by the following equation:

    y_H = C₁ e^{-x} + C₂ e^{-2x}

Since r(x) is a second-order polynomial in x, we have to use the following equation for y_P(x):

    y_P = a₀ + a₁x + a₂x²
Substituting this particular solution into the original differential equation gives

    d²y_P/dx² + 3 dy_P/dx + 2 y_P = 2a₂ + 3( a₁ + 2a₂x ) + 2( a₀ + a₁x + a₂x² ) = x²
Setting terms in like powers of x on both sides of the equation equal to each other gives:

    x⁰ terms:  2a₂ + 3a₁ + 2a₀ = 0
    x¹ terms:  6a₂ + 2a₁ = 0
    x² terms:  2a₂ = 1

We can easily solve these equations, starting with the last one and working backwards, to find a₂ = 1/2, a₁ = -3/2, and a₀ = 7/4. This gives the particular solution shown below:

    y_P = 7/4 - (3/2)x + (1/2)x²
You can substitute this solution into the differential equation to verify that it is satisfied. The solution to the differential equation is the sum of the particular solution just found and the homogeneous solution:

    y = y_H + y_P = C₁ e^{-x} + C₂ e^{-2x} + 7/4 - (3/2)x + (1/2)x²
Only after we have the combined solution can we match the initial or boundary conditions. If we have initial conditions on both y and dy/dx as y₀ and v₀, we have to satisfy the following equations:

    y₀ = y(0) = C₁ e^{-(0)} + C₂ e^{-2(0)} + 7/4 - (3/2)(0) + (1/2)(0)² = C₁ + C₂ + 7/4

    v₀ = dy/dx|_{x=0} = -C₁ e^{-(0)} - 2C₂ e^{-2(0)} - 3/2 + (0) = -C₁ - 2C₂ - 3/2

Solving this pair of equations for the two unknowns gives C₁ = 2y₀ + v₀ - 2 and C₂ = 1/4 - y₀ - v₀.
Substituting these values (taking into account the initial conditions y(0) = y₀ and dy/dx|_{x=0} = v₀) we finally get

    y = ( 2y₀ + v₀ - 2 ) e^{-x} + ( 1/4 - y₀ - v₀ ) e^{-2x} + 7/4 - (3/2)x + (1/2)x²
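The final formula is easy to check by finite differences. A sketch with arbitrarily chosen initial values:

```python
import math

# Check that y = (2y0 + v0 - 2) e^{-x} + (1/4 - y0 - v0) e^{-2x}
#             + 7/4 - 3x/2 + x^2/2
# satisfies y'' + 3y' + 2y = x^2 with y(0) = y0.
y0, v0 = 0.3, -1.1

def y(x):
    return ((2*y0 + v0 - 2) * math.exp(-x)
            + (0.25 - y0 - v0) * math.exp(-2*x)
            + 1.75 - 1.5*x + 0.5*x*x)

h = 1e-5
for x in (0.0, 0.5, 2.0):
    d1 = (y(x + h) - y(x - h)) / (2*h)                 # central first derivative
    d2 = (y(x + h) - 2*y(x) + y(x - h)) / (h*h)        # central second derivative
    assert abs(d2 + 3*d1 + 2*y(x) - x*x) < 1e-4

assert abs(y(0.0) - y0) < 1e-12
```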
11.2.4 Non-homogeneous linear equations of the second order with constant coefficients
See Smirnov, page 86 [58].
We now take the non-homogeneous equation

    y'' + p y' + q y = f(x)    (11.2.39)

where p and q are given real numbers as before and f(x) is a given function of x. To find the general solution of this equation it is sufficient to find any particular solution and to add it to the general solution of the corresponding homogeneous equation. Since the general solution of the homogeneous equation is known, the particular solution can be found with the aid of a quadrature by using the method of variation of the arbitrary constants. Let us take as an example an equation of the form:
    y'' + k² y = f(x)    (11.2.40)

The general solution of the corresponding homogeneous equation is

    y = C₁ cos kx + C₂ sin kx    (11.2.41)

We seek the particular solution of (11.2.40) in the form

    u = v₁(x) cos kx + v₂(x) sin kx    (11.2.42)

where v₁(x) and v₂(x) are required functions of x; that is, with y₁ = cos kx and y₂ = sin kx,

    u = v₁(x) y₁ + v₂(x) y₂    (11.2.43)
Since we have two required functions and not just one, we can subject v₁(x) and v₂(x) to a further condition:

    v₁'(x) y₁ + v₂'(x) y₂ = 0    (11.2.44)
On differentiating (11.2.43) and using (11.2.44), we obtain (the left-hand column shows the multiplier applied to each row in the next step):

    q(x) |  u   = v₁(x) y₁  + v₂(x) y₂
    p(x) |  u'  = v₁(x) y₁' + v₂(x) y₂'
      1  |  u'' = v₁(x) y₁'' + v₂(x) y₂'' + v₁'(x) y₁' + v₂'(x) y₂'

Multiplying the rows by these factors, adding, and substituting into the left-hand side of (11.2.39), we get
    v₁(x) [ y₁'' + p(x) y₁' + q(x) y₁ ] + v₂(x) [ y₂'' + p(x) y₂' + q(x) y₂ ] + v₁'(x) y₁' + v₂'(x) y₂' = f(x)

Bearing in mind that y₁ and y₂ are solutions of the homogeneous equation, and recalling (11.2.44), we have the system of equations
    v₁'(x) y₁  + v₂'(x) y₂  = 0
    v₁'(x) y₁' + v₂'(x) y₂' = f(x)    (11.2.45)

for determining v₁'(x) and v₂'(x).
For our particular case (11.2.42) we have the system:

    v₁'(x) cos kx + v₂'(x) sin kx = 0
    -v₁'(x) sin kx + v₂'(x) cos kx = (1/k) f(x)

Solving these gives us:

    v₁'(x) = -(1/k) f(x) sin kx   and   v₂'(x) = (1/k) f(x) cos kx
We write the primitives as integrals with variable upper limits and with the variable of integration denoted by ξ:

    v₁(x) = -(1/k) ∫_{x₀}^{x} f(ξ) sin kξ dξ
    v₂(x) = (1/k) ∫_{x₀}^{x} f(ξ) cos kξ dξ

where x₀ is a fixed number. Substitution in (11.2.42) gives us the particular solution
    u = -(cos kx / k) ∫_{x₀}^{x} f(ξ) sin kξ dξ + (sin kx / k) ∫_{x₀}^{x} f(ξ) cos kξ dξ    (11.2.46)

or, on taking under the integral sign the factors independent of the variable of integration,

    u = (1/k) ∫_{x₀}^{x} f(ξ) sin k(x - ξ) dξ    (11.2.47)
And the general solution becomes

    y = C₁ cos kx + C₂ sin kx + (1/k) ∫_{x₀}^{x} f(ξ) sin k(x - ξ) dξ    (11.2.48)
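The convolution integral (11.2.47) is straightforward to evaluate by quadrature. A sketch on an assumed example, k = 1 and f(x) = x, for which the particular solution works out to u = x - sin x:

```python
import math

# Particular solution (11.2.47) of y'' + k^2 y = f(x), by the
# trapezoidal rule with x0 = 0.
def particular(f, k, x, x0=0.0, n=2000):
    h = (x - x0) / n
    g = lambda xi: f(xi) * math.sin(k * (x - xi))
    s = 0.5 * (g(x0) + g(x)) + sum(g(x0 + i * h) for i in range(1, n))
    return h * s / k

x = 1.3
u = particular(lambda xi: xi, 1.0, x)
assert abs(u - (x - math.sin(x))) < 1e-6    # exact: u = x - sin x
```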
11.2.5 The linear dipole oscillator
See Chapter 1, page 2 of [59].
According to Lorentz, the majority of optical phenomena can be accounted for by the interaction of
electric charges with the electromagnetic field . We begin by assuming that these charges are bound
into neutral atoms, and that they oscillate about their equilibrium positions with very small amplitudes.
That is, each electron-ion pair behaves as a simple harmonic oscillator, which couples to the
electromagnetic field through its electric dipole moment. The motion of a collection of such dipole
oscillators, comprising a gas or other dielectric system, is thus governed by the Hamiltonian:
    H = (1/2m) Σ_a ( p_a² + ω_a² m² r_a² ) - e Σ_a r_a · E(t, r_a)    (11.2.49)
where p_a and r_a are the canonical momentum and position of dipole a, which has a natural oscillation frequency ω_a, and where E(t, r_a) is the electric field strength at the position of atom a at time t. The specific equation of motion obeyed by a single-atom dipole oscillator is very simple. We may write it in its simplest form by recognizing that a given component of r_a couples only to the same component of E. Let the scalar quantities x_a and E represent a pair of coupled components. The Poisson bracket relations

    ẋ_a = { x_a , H }   and   ṗ_a = { p_a , H }    (11.2.50)
or Hamilton's equations

    ṗ_a = -∂H/∂q_a   and   q̇_a = ∂H/∂p_a    (11.2.51)
lead to the familiar result

    ẍ_a + ω_a² x_a = (e/m) E(t, r_a)    (11.2.52)

which is merely the electric part of the Lorentz force law for a nonrelativistic charge. In the relativistic limit the magnetic force term (e/c) v_a × B is not small and must be included in equation (11.2.52). In the nonrelativistic limit the magnetic force may be ignored.
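Equation (11.2.52) is a driven harmonic oscillator and integrates well with a symplectic scheme. A minimal leapfrog (velocity Verlet) sketch with the drive switched off (E = 0, an assumption made here so that the exact solution x = cos(ωt) is available for checking):

```python
import math

# Leapfrog integration of x'' + w^2 x = (e/m) E(t) with E = 0.
w = 2.0
x, v, t, dt = 1.0, 0.0, 0.0, 1e-4      # x(0) = 1, x'(0) = 0
for _ in range(int(1.0 / dt)):
    v += 0.5 * dt * (-w * w * x)       # half kick
    x += dt * v                        # drift
    v += 0.5 * dt * (-w * w * x)       # half kick
    t += dt
assert abs(x - math.cos(w * t)) < 1e-6
```

A nonzero drive E(t) would simply be added inside both half kicks.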
One of the most elementary properties of a dipole oscillator is that it radiates electromagnetic energy. Thus even if there were no other charges and currents in the universe to produce a field E at the position r_a, a field due to the dipole's own radiation would still exist. The realization that this is so posed an important self-consistency problem to Lorentz: the problem of accounting for the effect of a single oscillator's own field on its own motion. One very direct way to solve this problem is to make use of energy conservation. The energy radiated into the field must be consistent with the energy lost by the oscillator. The consequences of this radiation-reaction self-consistency are easily worked out. If the oscillator has a fixed centre of oscillation (because the neutral atom is very massive and slow-moving compared with the oscillating electron), then the existence of local energy conservation for the system of electromagnetic field plus oscillator is implied by the relation (Born and Wolf [50], Section 1.1.4):

    ∇·S + ∂U_em/∂t + ∂U_mat/∂t = 0    (11.2.53)