Chemists
Sture Nordholm
Department of Chemistry, The University of Gothenburg
August 2, 2011
Abstract
Chemists need the language and the tools of mathematics to express their accumulated knowledge and to solve problems. This course is intended to help students of chemistry to acquire this language and to use mathematical methods within a chemical context. It is assumed that the student has had exposure to university mathematics. The concept of the course is to briefly summarize a relevant area of mathematics with emphasis placed on understanding the origin of its tools of analysis. This is then followed by selected applications of these tools to chemical problems. In this way the two-step process of translating a chemical problem into mathematical form and then solving it by an appropriate mathematical method is amply illustrated.
Part I
Linear Algebra and
Applications
1 Chemical Algebra: A Preview
Mathematics starts with the integers ..., −2, −1, 0, 1, 2, ... and we can't do without them in chemistry. We need them in an interesting and nontrivial way when
we try to balance a chemical reaction,

i₁ C₁₄H₁₂ + i₂ O₂ = i₃ CO₂ + i₄ H₂O. (1)

Here we are looking for the set of smallest positive integers i₁, i₂, i₃, i₄ such that the equation is balanced with respect to all atomic species present.
As we all know this can be a frustrating task if the molecules involved are many and large. When we then try to calculate the corresponding masses in a corresponding laboratory experiment we enter the realm of real numbers, which in mathematics we often call x, particularly before we have managed to obtain them. The concept of a "mole" is the bridge between the stoichiometric equations in chemistry and the measurable masses of reactants and products in laboratory reactions. It is by a decision of IUPAC defined as the unit of "amount of substance" and takes stoichiometric calculations from integers to real numbers.
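The balancing problem of Eq. (1) can be made concrete as a linear-algebra computation. Below is a small Python sketch (my own illustration; the routine and names are not from the text) that finds the integer null vector of the atom-balance matrix using exact rational arithmetic:

```python
from fractions import Fraction
from math import lcm

def nullspace_vector(rows):
    """Return one rational solution of A i = 0, scaled to smallest integers,
    found by Gauss-Jordan elimination over exact fractions."""
    m = [[Fraction(v) for v in row] for row in rows]
    ncols = len(m[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [v / m[r][c] for v in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(ncols) if c not in pivots][0]
    x = [Fraction(0)] * ncols
    x[free] = Fraction(1)
    for row, c in zip(m, pivots):
        x[c] = -row[free]
    denom = lcm(*[v.denominator for v in x])  # clear denominators
    return [int(v * denom) for v in x]

# atom-balance matrix for i1*C14H12 + i2*O2 - i3*CO2 - i4*H2O = 0
# rows: C, H, O conservation
A = [[14, 0, -1,  0],
     [12, 0,  0, -2],
     [ 0, 2, -2, -1]]
print(nullspace_vector(A))  # -> [1, 17, 14, 6]
```

The rows are the C, H and O balance conditions; scaling the rational null vector by the common denominator gives the smallest positive integers, so the balanced reaction is C₁₄H₁₂ + 17 O₂ = 14 CO₂ + 6 H₂O.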
We know that real numbers can be added, subtracted, multiplied and divided unless the denominator is zero. In order to be able to take square roots of negative numbers and do other handy things we introduce complex numbers z = x + iy. Now we have an object which is specified by two real numbers. But the story goes on. When we want to specify the position of a particle we need three coordinates x, y, z, and we need them so often that we decide to call this set of numbers a vector r. This particular vector is three-dimensional, but if we have to specify the positions of two particles we need six real numbers x₁, y₁, z₁, x₂, y₂, z₂, which can be regarded as a six-dimensional
vector. In the same way we can go on and find need for vectors of very high dimension. If, for example, we were to try to discuss the instantaneous state of a gas by classical mechanics we might need vectors of Avogadro's number of dimensions. Now our ability to call these objects vectors and use vector notation which suppresses the delineation of the components is essential, because we could not find time in a lifetime to write down all these components. Nevertheless it is possible for us to work with such monstrous vectors if we know the rules which apply to them. Thus we are getting ready to study the rules applying to vector spaces. A powerful but simple set of operations involving vectors is described by what is called linear algebra. As it turns out, quantum mechanics is dominated by linear algebra and, through quantum chemistry, which is perhaps the fastest growing branch of chemistry, linear algebra has become an essential tool for chemists. For this reason we shall have a look at how quantum mechanics relies on linear algebra and how it gets into chemistry.
1.1 The Linear Algebra of Quantum Mechanics and
Quantum Chemistry
Life may seem to get horribly complicated when we move from classical to quantum mechanics. A classical particle moving in one dimension can be described by two real numbers x, v, the position and the velocity. Newton's equations of motion deal with only these two quantities. When we go to quantum mechanics we must describe the state of the same particle as a wavefunction ψ(x). A function of x is defined by its value at all x-values. If all such function values are compared with the components of a vector we see that a function is a vector in an infinite-dimensional vector space, so we would seem to have a problem worse than that of describing the classical states of a gas of a mole of particles. It is not quite as difficult as it may sound. The time development is described by a time-dependent wavefunction ψ(t, x) which satisfies the time-dependent Schrödinger equation,
∂ψ(t, x)/∂t = −(i/ℏ) Hψ(t, x), (2)

where H is the Hamiltonian operator,

H = −(ℏ²/2m) ∂²/∂x² + V(x). (3)
Here V(x) is the potential acting on the particle. Much interest is focused on the so-called energy eigenfunctions ψ_E(x) which satisfy the time-independent Schrödinger equation,

Hψ_E(x) = Eψ_E(x). (4)

Note that the spatial probability density associated with a wavefunction ψ(x) is

ρ(x) = |ψ(x)|². (5)

An energy eigenfunction satisfies the time-dependent Schrödinger equation if it is multiplied by the phase factor e^{−iEt/ℏ}. This means that the spatial probability density ρ(x) is time-independent. Thus the energy eigenfunctions are called stationary states. They also have well-defined energies equal to the energy eigenvalue E, while wavefunctions in general are neither stationary nor of well-defined energy. In general, a wavefunction ψ(x) can be expanded in the energy eigenfunctions as follows,

ψ(x) = Σ_E c_E ψ_E(x). (6)
In quantum chemistry one normally seeks the wavefunction of lowest energy, the so-called ground-state wavefunction ψ_{E₀}, which is an energy eigenfunction. It is generally found by the finite basis set method, which is approximate, assuming that the ground state can be found as a superposition of basis functions φ_i, i = 1, ..., N,

ψ_{E₀} = Σ_{i=1}^{N} c_i φ_i. (7)

The coefficients {c_i} form a vector c which is obtained by solving the time-independent Schrödinger equation in matrix form,

Hc = Ec. (8)

Here I have assumed that the basis functions are orthonormal, i.e., they satisfy

∫ dx φ_i*(x) φ_j(x) = δ_ij, (9)

where δ_ij is Kronecker's delta,

δ_ij = 1 if i = j, (10)
     = 0 if i ≠ j.

We shall discuss the finite basis set method in greater detail in the next chapter. Now we simply note that practical use of quantum mechanics leads to the eigenvalue problem in matrix form.
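As a concrete illustration of the finite basis set method (the model problem and all numbers are my own choice, not from the text), the sketch below expands the 1D harmonic oscillator H = −(1/2) d²/dx² + x²/2 (units ℏ = m = ω = 1) in orthonormal particle-in-a-box sine functions and solves the resulting matrix problem Hc = Ec, Eq. (8):

```python
import numpy as np

# N sine basis functions on [-L/2, L/2]; they vanish at the box walls and
# form an orthonormal set, so the overlap matrix is the unit matrix.
L, N = 10.0, 30
x = np.linspace(-L / 2, L / 2, 4001)
dx = x[1] - x[0]
n = np.arange(1, N + 1)
B = np.sqrt(2.0 / L) * np.sin(np.outer(n, np.pi * (x + L / 2) / L))

# kinetic energy is diagonal in this basis; potential matrix by quadrature
Hkin = np.diag(0.5 * (n * np.pi / L) ** 2)
Hpot = (B * (0.5 * x**2)) @ B.T * dx
H = Hkin + Hpot

E = np.linalg.eigvalsh(H)   # eigenvalues of the matrix form of H psi = E psi
print(E[:3])                # approaches the exact ladder 0.5, 1.5, 2.5
```

The computed levels converge toward the exact oscillator energies (n + 1/2) as the basis grows, which is the typical behaviour of a variational finite-basis calculation.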
1.1.1 The Self-Consistent Field (SCF) Approximation of Quantum Chemistry
Before we leave this introduction to the linear algebra of quantum mechanics I want to deal with one major problem of quantum chemistry, the many-electron problem, and point out the approach to it followed by chemists with great success. We started by considering one particle moving in one dimension x. A more realistic application has one particle moving in three dimensions x, y, z. The time-independent Schrödinger equation then becomes

−(ℏ²/2m) ∇²ψ(x, y, z) + V(x, y, z)ψ(x, y, z) = Eψ(x, y, z), (11)
where

∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z². (12)
This is not too difficult to deal with. We have three coordinates to deal with rather than one. At this point we would be able to find the energy eigenvalues and eigenfunctions of the hydrogen atom and the H₂⁺ ion. This is not a bad start, but we need to go on to H₂, H₂O, ..., with more than one electron. Now the number of coordinates grows linearly with the number of atoms and the quantum chemistry starts to look completely intractable for anything but the smallest molecules. The way to deal with this problem is to note that dealing with many electrons is not such a problem if we could assume that they were moving independently, i.e., without explicit coupling to each other. If this were the case then the wavefunctions could be taken to be products,
Ψ(r₁, r₂, ...) = ψ(r₁)ψ(r₂)··· (13)
of one-electron wavefunctions, each of which could be obtained from a one-electron Schrödinger equation with the same form of Hamiltonian operator. The corresponding many-electron energy eigenvalues would then be sums of the corresponding one-electron eigenvalues. This sounds brilliant but it does not work, because the electrons interact with each other by the Coulomb repulsion between like charges. Moreover, the Pauli principle enters and insists that the total electronic wavefunction be antisymmetric with respect to interchange of two electrons. These two mechanisms mean that the electrons are not moving independently. However, the scheme can be carried out approximately in the following way: First we do not use product wavefunctions directly but combine them into Slater determinants which satisfy the Pauli principle. This means that no more than two electrons of opposite spin can be assigned the same one-electron eigenfunction (orbital) in the Slater determinant. In order to obtain the lowest energy we stack the electrons in the lowest available orbitals. The electrons still move without explicit coupling to each other but the pattern of motion has been restricted by the Pauli principle. But how do we choose the Hamiltonian which describes the one-particle motion? We want this Hamiltonian, which is called the Fock operator in quantum chemistry, to represent the repulsion between the electrons in some average way; otherwise the resulting energy and electronic structure will be completely unrealistic. This can be done by an iterative procedure. The Fock operator can be found if the occupied orbitals are known. Thus we can start
with the so-called core Hamiltonian, which neglects electron-electron repulsion, and obtain the corresponding orbitals {ψ_j^(0)}, which yield a new Fock operator F^(0). We now use this Fock operator to obtain a new set of orbitals, which include the effect of electron-electron repulsion, {ψ_j^(1)}. These orbitals can, in turn, be used to find an improved Fock operator F^(1), and so on. The vital step in this iteration is the solution of the Fock eigenvalue problem,

F^(i) ψ^(i+1) = ε^(i+1) ψ^(i+1). (14)
Eventually, in almost all cases, this iterative procedure will converge so that the Fock operator eigenfunctions are the same as the orbitals used to construct it, to within tolerable accuracy. Then we have found the self-consistent field solution to the electronic structure of the atom or molecule. It is not precise. We have not accounted for the correlation of the electrons as they move. We have also generally used finite basis sets which cannot completely resolve even the uncorrelated motion, but the accuracy achieved in a good SCF calculation is often quite good, and the reduction of the problem to one-electron form is so attractive that nearly all quantum chemistry is done this way. The language of chemistry is dominated by atomic and molecular orbitals, which are SCF constructs. It is often not clear to the practising chemist that these concepts are approximate, but they are so close to the truth that almost all methods used to unravel the subtle correlation effects start from the SCF or, as it is more often called among quantum chemists, the Hartree-Fock theory.
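The iterative logic can be illustrated with a deliberately artificial two-orbital model (the matrix and the form of the mock repulsion term are invented for illustration; a real Fock operator is built from two-electron integrals, not a simple density shift):

```python
import numpy as np

# Mock closed-shell SCF: two basis functions, two electrons, and a toy Fock
# matrix F(P) = Hcore + g*P whose "repulsion" depends on the density matrix P.
Hcore = np.array([[-1.0, -0.4],
                  [-0.4, -0.6]])
g = 0.3                          # strength of the mock repulsion (invented)

P = np.zeros((2, 2))             # start from the core Hamiltonian (P = 0)
for iteration in range(100):
    F = Hcore + g * P            # build the Fock operator from current orbitals
    eps, C = np.linalg.eigh(F)   # solve F c = eps c
    c0 = C[:, 0]                 # put both electrons in the lowest orbital
    P_new = 2.0 * np.outer(c0, c0)
    if np.max(np.abs(P_new - P)) < 1e-10:   # self-consistency reached
        break
    P = P_new

print(iteration, eps[0])
```

At convergence the orbitals that build the Fock matrix are the same orbitals it returns, which is exactly the self-consistency criterion described above.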
1.2 The Vibrational Modes of Molecules
Even though we often talk about the geometries of molecules as if the atoms were stationary relative to each other, this is not the case. There are internal motions in molecules in the form of rotations and vibrations. If there are N atoms in the molecule then there are 3N − 3 rotations and vibrations in the molecule. For a linear molecule there are 2 rotations and 3N − 5 vibrations, while there are 3 rotations in a nonlinear molecule and 3N − 6 vibrations. This means that vibrations soon completely dominate the internal motion as the number of atoms increases. Again the exact treatment of the internal motion in a molecule of more than two atoms is very difficult due to the coupling between all the different motions. However, for small amplitude motion one can enormously simplify the problem by assuming separable rotations and harmonic vibrations. This means that the potential is approximated by a quadratic form in the coordinates,
V(x₁, x₂, x₃, ...) = V₁₁x₁² + V₁₂x₁x₂ + ... = (1/2) Σ_{i,j} V_ij x_i x_j, (15)
where V_ij is the second derivative of the potential at the minimum,

V_ij = ∂²V(x₁, x₂, ...)/∂x_i ∂x_j evaluated at x₁, x₂, ... = x₁^(min), x₂^(min), .... (16)
It turns out that in this harmonic approximation one can find a coordinate transformation such that the vibrations all separate and become independent so-called normal modes, each performing harmonic oscillatory motion at a well-defined frequency. This results in an enormous simplification of the treatment of internal molecular dynamics which is close enough to the truth at low energies to be of great practical value in chemistry. Thus we shall, after having learnt the necessary prerequisites of linear algebra, return to consider how to obtain the coordinate transformation which yields these normal modes.
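As a preview of that normal-mode analysis, the sketch below diagonalizes the mass-weighted matrix of second derivatives V_ij of Eq. (16) for an invented collinear triatomic model (masses m, M, m coupled by two equal springs k; all numbers are my own):

```python
import numpy as np

# 1D chain m - k - M - k - m: V = (k/2)(x2-x1)^2 + (k/2)(x3-x2)^2.
# The squared normal-mode frequencies are the eigenvalues of the
# mass-weighted Hessian D = M^(-1/2) V'' M^(-1/2).
k, m, M = 1.0, 1.0, 2.0
V2 = k * np.array([[ 1, -1,  0],      # second-derivative matrix V_ij
                   [-1,  2, -1],
                   [ 0, -1,  1]], dtype=float)
masses = np.array([m, M, m])
D = V2 / np.sqrt(np.outer(masses, masses))
omega = np.sqrt(np.abs(np.linalg.eigvalsh(D)))
print(omega)   # one zero mode (overall translation) plus two vibrations
```

For these parameters the frequencies come out as 0, √(k/m) and √(k/m + 2k/M), i.e. 0, 1 and √2: the zero mode is the free translation of the whole chain, and the other two are the symmetric and antisymmetric stretches.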
2 Vector Spaces and the Eigenvalue Problem
2.1 Vector Spaces
A set of elements x, y, z, ... forms a vector space S if addition of two vectors generates another vector in S,

x + y = z ∈ S, x + y = y + x, (x + y) + z = x + (y + z), (17)

and there exists a zero 0 such that x + 0 = x, and there exists for every vector x a vector −x such that x + (−x) = 0, and multiplication by scalar numbers λ, μ is possible according to the rules

λx ∈ S, λ(μx) = (λμ)x, (λ + μ)x = λx + μx, (18)
λ(x + y) = λx + λy, 1·x = x, 0·x = 0.
Definition 1: The vectors x₁, x₂, ..., x_n are said to be linearly independent if

Σ_{i=1}^{n} λ_i x_i = 0 ⟹ λ_i = 0 for all i. (19)
Definition 2: Dimension: If there is a set of N linearly independent nonzero vectors {e_i}_{i=1}^{N} but no set of N + 1 such vectors, then the vector space is N-dimensional and for any x ∈ S we have

x = Σ_{i=1}^{N} x_i e_i. (20)

The set of vectors {e_i}_{i=1}^{N} is called a basis in S. In a given basis the vector x can be represented by a column matrix of its expansion coefficients, e.g. (x₁, x₂, x₃, x₄)ᵀ.
Definition 3: Linear operator: A is a linear operator on S if for x ∈ S also Ax ∈ S and A satisfies the linearity condition

A(λx + μy) = λAx + μAy. (21)
Exercise 1 What is the dimension of our Euclidean space? Find a convenient basis for it.

Exercise 2 What are the vectors and the linear operators of quantum mechanics?

Exercise 3 Is the set of vectors (1, 1, 1)ᵀ, (1, 0, 1)ᵀ, (1, −3, 1)ᵀ a set of independent vectors?
2.1.1 Matrices
Suppose the linear operator A satisfies

Ae_i = Σ_{j=1}^{N} A_ji e_j, for i = 1, ..., N, (22)
then it follows that

Ax = A Σ_{i=1}^{N} x_i e_i = Σ_{i=1}^{N} Σ_{j=1}^{N} A_ji x_i e_j. (23)

Thus in a given basis a linear operator A can be represented by a matrix {A_ij}, where i and j are the row and column indices, respectively, and its operation can be described by matrix multiplication, e.g.,

Ax = [A₁₁ A₁₂; A₂₁ A₂₂][x₁; x₂] = [A₁₁x₁ + A₁₂x₂; A₂₁x₁ + A₂₂x₂]. (24)
Linear operators can be multiplied by scalars,

(λA)x = λ(Ax), (λA)_ij = λA_ij, (25)

added,

(A + B)x = Ax + Bx, (A + B)_ij = A_ij + B_ij, (26)

and multiplied,

(AB)x = A(Bx), (AB)_ij = Σ_{k=1}^{N} A_ik B_kj. (27)
There is a null operator 0, such that 0x = 0 for all x, and an identity operator
I such that Ix = x for all x.
Definition 4: Inverse: Certain operators A have inverses A⁻¹ such that

AA⁻¹ = A⁻¹A = I. (28)
Definition 5: Determinant: Square matrices such as we have discussed above possess a determinant det A defined as

det A = |A₁₁ ... A_1N; ... ; A_N1 ... A_NN| = Σ_p (−1)^a A_{1p₁} A_{2p₂} ··· A_{Np_N}, (29)

where the sum is over all permutations (p₁, ..., p_N) of the numbers 1, 2, ..., N, i.e., N! permutations, and the exponent a is 0 for even permutations and 1 for odd permutations. The even and odd permutations can be distinguished by the fact that the former are constructed by an even number of pairwise interchanges, the latter by an odd number of pairwise interchanges, starting from the original sequence, which is even.
Exercise 4 Find the determinants and, if possible, the inverses of the matrices

[−1 0; 0 1], [−1 1; −1 1], [1 2; 3 4].
Example 1: 1,2 is an even, 2,1 an odd permutation of 1,2. 1,2,3 is an
even permutation. So are 3,1,2 and 2,3,1 but 2,1,3; 3,2,1 and 1,3,2 are odd
permutations of 1,2,3.
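The permutation definition of Eq. (29) can be coded directly, though the N! sum makes it practical only for small matrices. A minimal sketch (my own), applied to the three matrices of Exercise 4:

```python
from itertools import permutations

def det_by_permutations(A):
    """Determinant straight from Eq. (29): a sum over all N! permutations,
    each term signed by (-1)^a with a the parity of the permutation."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        # count inversions to decide whether the permutation is even or odd
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1 if inversions % 2 else 1
        prod = 1
        for row, col in enumerate(perm):
            prod *= A[row][col]
        total += sign * prod
    return total

print(det_by_permutations([[-1, 0], [0, 1]]))   # -> -1
print(det_by_permutations([[-1, 1], [-1, 1]]))  # -> 0 (no inverse exists)
print(det_by_permutations([[1, 2], [3, 4]]))    # -> -2
```

Counting inversions is one standard way to determine the parity; it agrees with the pairwise-interchange construction described in the text.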
Matrix operations:

« Transpose of A: Ã is defined by (Ã)_ij = A_ji.

« Complex conjugate of A: A* is defined by [A*]_ij = (A_ij)*.

« Hermitian conjugate of A: A† is defined by [A†]_ij = (A_ji)*.
Note that the Hermitian conjugate is often called the adjoint, as in the Beta handbook. There it is indicated by a superscript * while the complex conjugate is denoted by a bar over the quantity. I am following the standard notation of physicists and chemists here.
Some terminology: The matrix A is called

« real if A* = A and symmetric if Ã = A,

« antisymmetric if Ã = −A and Hermitian if A† = A,

« orthogonal if A⁻¹ = Ã and unitary if A⁻¹ = A†,

« diagonal if A_ij = 0 for all i ≠ j,

« idempotent if A² = A and normal if AA† = A†A.
Definition 6: Trace: The trace of a matrix A is denoted Tr(A) and defined as

Tr(A) = Σ_{i=1}^{N} A_ii. (30)
Exercise 5 Consider the matrix [a bi; ci d], where a, b, c and d are real numbers. Determine its i) transpose, ii) complex conjugate and iii) Hermitian conjugate. Determine minimal conditions on a, b, c and d such that the matrix is iv) real, v) antisymmetric, vi) orthogonal and vii) idempotent.
2.2 Introduction of a Metric
Definition 7: Scalar product: The bilinear operation denoted by · which takes two vectors x and y into a scalar x·y is called a scalar product if

« x·y = (y·x)*,

« x·(λy + μz) = λ x·y + μ x·z,

« x·x ≥ 0, with x·x = 0 only if x = 0.

Definition 8: Norm: |x| = √(x·x) is called the norm (or length) of the vector x.

Definition 9: Orthogonality: x and y are said to be orthogonal if x·y = 0.

Definition 10: Orthonormality: The basis {e_i}_{i=1}^{N} is said to be orthonormal if e_i·e_j = δ_ij.
2.3 The Eigenvalue Problem
Definition 11: Eigenvector and eigenvalue: x is an eigenvector of A if Ax = λx, and λ is called the eigenvalue of A corresponding to the eigenvector x.

How does one find the eigenvalues of an operator A? We shall assume here that our operator can be represented as a matrix in an orthonormal basis.

Theorem 1: The eigenvalues of the matrix A can be found as the roots of the secular equation det(A − λI) = 0.

I will only sketch a proof here, leaving the details to be worked out as an exercise. First we note that the determinant is unaltered if a scalar times one of the columns of the matrix is added to another column in the matrix. This follows from the fact that matrices with two identical columns have a vanishing determinant. In turn, this follows from the antisymmetry of the determinant with respect to the interchange of two columns. From the eigenvalue equation Ax = λx it follows that the columns of the matrix A − λI are linearly dependent. This means that by adding scalar numbers times the other columns to one of them one can, with the proper choice of scalars, make this column consist of only zeros in a matrix which must have unchanged determinant. It is then trivial to see that the determinant must be zero.
Note that the secular equation is an Nth order algebraic equation. Thus there are N roots {λ_i}_{i=1}^{N}, some of which may be degenerate, i.e., the same. Note that for each eigenvalue one can find at least one corresponding eigenvector, and at most a number of linearly independent eigenvectors equal to the degeneracy (multiplicity) of the corresponding eigenvalue (root of the secular equation). Since it is trivial to see that if x is an eigenvector so is λx, eigenvectors are only determined up to an arbitrary scalar prefactor. A set of eigenvectors corresponding to different eigenvalues is linearly independent.
2.3.1 Properties of Hermitian matrices
« Using longhand notation for the scalar product, such that x·y is written as (x, y), we have in general (x, Ay) = (A†x, y), and for Hermitian matrices (x, Ay) = (Ax, y).

« The eigenvalues of a Hermitian operator are real and the eigenvectors corresponding to different eigenvalues are orthogonal. There are N linearly independent eigenvectors.

The proof that the eigenvalues are real and the eigenvectors orthogonal follows readily from consideration of the scalar product (x₁, Ax₂) and use of the above Hermiticity relation and the eigenvalue relation.
Exercise 6 Prove explicitly that (x, Ay) = (A†x, y) in the case of a 2×2 matrix.

Exercise 7 Prove explicitly that if A† = A then eigenvectors of different eigenvalues are orthogonal and the eigenvalues are real.
How does one find the eigenvector(s) corresponding to a given eigenvalue? Given the eigenvalue, the eigenvalue equation Ax = λx turns into a set of coupled linear equations which can be solved by stepwise elimination. Note that since the eigenvector is arbitrary up to a scalar prefactor, one component of the eigenvector will be undetermined or can be set to unity, unless this component turns out to be zero. If so, one simply picks another component to set to unity or some other convenient value.

Example 8 Find the eigenvalues and eigenvectors of the matrix [4 1; 1 4].
Solution: the secular equation is

(4 − λ)² − 1 = 0.

The eigenvalues are then λ₁ = 5 and λ₂ = 3, with corresponding eigenvectors obtained by solving the equations

4x₁ + x₂ = λx₁,
x₁ + 4x₂ = λx₂.

The first equation yields

x₂ = (λ − 4)x₁,

and the second yields

x₁ = (λ − 4)x₂.

Insisting that these two equations give the same eigenvectors leads to the secular equation and the two possible eigenvalues. A convenient way to proceed is to choose x₁ = 1 and then find the two eigenvectors (1, 1)ᵀ and (1, −1)ᵀ, belonging to λ₁ = 5 and λ₂ = 3 respectively. Note that these eigenvectors are orthogonal, as they must be for a Hermitian matrix. In order to make them orthonormal we multiply them by 1/√2. In some cases, namely when x₁ = 0, the "convenient way to proceed" above does not work. One can then instead choose x₂ = 1.
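A quick numerical cross-check of Example 8 (numpy's `eigh` returns the eigenvalues in ascending order and the orthonormal eigenvectors as columns, so the overall signs may differ from the hand calculation):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 4.0]])
vals, vecs = np.linalg.eigh(A)
print(vals)                      # eigenvalues 3 and 5
print(vecs[:, 0] @ vecs[:, 1])   # orthogonality: essentially zero
```

Every entry of the eigenvector matrix has magnitude 1/√2, matching the normalized vectors (1, ±1)ᵀ/√2 found above.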
2.4 The Generalized Eigenvalue Problem
The operator eigenvalue problem above could be written as a matrix equation. Such an equation can be generated from an operator equation by use of an orthonormal basis {e_j}_{j=1}^{n}. The operator equation

A_op x = λx (31)

can be turned into a matrix equation in the space spanned by the basis vectors by projecting both x and the operator equation onto the space of the basis vectors, i.e.

x = Σ_{j=1}^{n} x_j e_j, (32)

(e_i, A_op x) = (e_i, A_op Σ_{j=1}^{n} x_j e_j) = Σ_{j=1}^{n} x_j (e_i, A_op e_j) = Σ_{j=1}^{n} A_ij x_j
= λ(e_i, x) = λ Σ_{j=1}^{n} x_j (e_i, e_j) = λx_i. (33)
In the case of a general basis which is not orthonormal the matrix equation is

Ax = λSx, (34)

Σ_{j=1}^{n} A_ij x_j = λ Σ_{j=1}^{n} S_ij x_j, (35)

which is the generalized eigenvalue problem. This problem can be converted to normal form by multiplication with the inverse of S:

S⁻¹Ax = λS⁻¹Sx = λx, (36)

A′x = λx, where A′ = S⁻¹A. (37)

The projection of an operator eigenvalue problem into a reduced space spanned by a given basis and a corresponding matrix form is called the Galerkin method and is used very commonly in physics and chemistry.
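A small numerical sketch of Eqs. (34)-(37), with the matrices A and S invented for illustration: form A′ = S⁻¹A, take its eigenvalues, and verify that each satisfies the generalized secular equation det(A − λS) = 0:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
S = np.array([[1.0, 0.3],
              [0.3, 1.0]])    # overlap matrix of a non-orthonormal basis

Ap = np.linalg.inv(S) @ A     # A' = S^-1 A, Eq. (37)
lams = np.sort(np.linalg.eigvals(Ap).real)
residuals = [abs(np.linalg.det(A - lam * S)) for lam in lams]
print(lams, residuals)        # the determinant vanishes at each eigenvalue
```

In practice one usually avoids forming S⁻¹ explicitly (for instance by symmetric orthogonalization or a library routine that accepts both matrices), but the inversion route above follows Eq. (36) directly.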
2.5 Linear Algebra Exercises:
1. Try to convert the chemical problem of writing a stoichiometrically balanced equation for a reaction with given reactants and products into a mathematical problem. a) Can molecules be treated as basis functions in a space of all possible chemical species? b) What distinguishes the space of molecules from a vector space? c) Can you suggest a way to use vector space methods to solve our problem of balancing a stoichiometric equation?
2. Let the matrices A and B be defined by

A = [2 1 0; 1 0 1; 0 1 2] and B = [1 4 7; 2 5 8; 3 6 9].

a) Obtain the determinants of the two matrices. b) Obtain A², B², AB explicitly. c) Obtain the eigenvalues of A.
3. Verify Theorem 1 explicitly for 2×2 matrices.

4. Show that for any square matrices A and B we have a) (AB)˜ = B̃Ã and b) (AB)† = B†A†.

5. Show that if U is a unitary matrix then its eigenvalues {λ} satisfy |λ| = 1.
6. Find the eigenvalues and eigenvectors of the matrix A = [1 2; 3 7].

7. Find the eigenvalues and eigenvectors of the matrix B = [1 0 2; 0 5 0; 3 0 7].
8. Prove that eigenvectors of a linear operator corresponding to different eigenvalues must be linearly independent.

9. Under what conditions can the linear equation Ax = b be uniquely solved? Here A is an n × n matrix and x, b are n-dimensional vectors. Give proof of your conclusion.

10. Prove that (x, Ay) = (A†x, y).
11. The concept of "linearity" is important and has many applications in chemistry. It implies a dramatic simplification by comparison with the more general "nonlinear" case. We are, e.g., lucky that the Schrödinger equation is linear, so that nearly all of quantum chemistry reduces to linear algebra. Chemical phenomena are, however, generally nonlinear. An important illustration of this can be found in the current debate concerning chemicals in our bodies and our environment. Imagine that, as a person with knowledge of chemistry, you have been asked to pronounce on whether a certain chemical is poisonous or not. How would you explain the mathematical content of this question in terms of concepts of linearity and nonlinearity? Note that nonlinearity is the complement of linearity, i.e. any relation which is not linear is nonlinear.
3 Hückel Theory of Delocalization
The Hartree-Fock theory shows us how we can, to a good approximation, reduce the problem of finding the total energy E₀ of a molecule to one-electron form. We find the Fock operator F, which plays the role of a one-electron Hamiltonian operator, from which one-electron (canonical orbital) wave functions can be found by solving

Fψ = εψ. (38)

The Fock operator is obtained by a self-consistent iterative procedure and it depends on the occupied electronic orbitals {ψ_j}_{j=1}^{n} themselves. Once we have chosen a basis set {φ_μ}_{μ=1}^{N} in which to resolve the canonical orbitals, the Fock equation (38) turns into a matrix equation

Fc_j = ε_j Sc_j, (39)
where

F_μν = ∫ dr φ_μ*(r) F φ_ν(r), (40)

S_μν = ∫ dr φ_μ*(r) φ_ν(r). (41)

Here S is the overlap matrix, which becomes equal to the unit matrix if the basis functions are orthonormal. Note that (39) can be derived from (38) by taking scalar products with respect to the basis functions and expanding the orbital in the basis, i.e.,

ψ_j = Σ_{μ=1}^{N} c_μj φ_μ. (42)
In the 1930's Hückel developed a very simplified form of Hartree-Fock theory for planar aromatic molecules. He noted that if the molecule lies in the x,y-plane then the carbon 2p_π orbitals stick out orthogonally to the plane. These orbitals do not mix with the in-plane sigma orbitals. Thus we can develop a reduced Fock matrix in the space of C 2p_π orbitals. The corresponding molecular orbitals obtained by diagonalizing this reduced Fock matrix are called π orbitals and the electrons assigned to them are called π electrons. The electron assignment to orbitals follows the Aufbau principle: Fill the orbitals in order of increasing orbital energy but place no more than two electrons (of opposite spin direction) in each orbital, so as not to run afoul of the Pauli principle.
The calculation to obtain the Fock matrix, or the π part of it, rigorously by the Hartree-Fock method is relatively hard numerical work. In the 1930's it must have been impossible for most molecules. Hückel had the idea that it might be possible to construct the reduced Fock matrix for the π electrons, h_π or just h below, by empirical means. He proposed to use the C 2p_π orbitals on all double-bonded carbons as a minimal basis set. Thus the matrix h was a square matrix of the same order as the number of double-bonded carbon atoms in the planar molecule, N_c. Next Hückel assumed that the diagonal terms in this matrix, h_jj, were equal to the carbon 2p_π atomic orbital energy,

h_jj = ε_C2p = α. (43)
Then he assumed that coupling only existed between bonded carbon atoms (nearest-neighbour carbon atoms). The same approximation is justified for the overlap matrix also, and we end up with a reduced Fock equation for, as an example, butadiene (C₄H₆ in a linear chain geometry for the carbons), of the type

[α β 0 0; β α β 0; 0 β α β; 0 0 β α][c₁; c₂; c₃; c₄] = ε [1 s 0 0; s 1 s 0; 0 s 1 s; 0 0 s 1][c₁; c₂; c₃; c₄]. (44)
This is a generalized eigenvalue equation due to the presence of the overlap matrix S. It is not much more difficult to solve this generalized form of the eigenvalue problem, but Hückel simplified it further by setting s = 0. Thus, for butadiene, he obtained

[α β 0 0; β α β 0; 0 β α β; 0 0 β α][c₁; c₂; c₃; c₄] = ε [c₁; c₂; c₃; c₄]. (45)
Note that the coupling term β,

β = h_jm, with j bonded to m, (46)

is taken to be independent of which bond is referred to. In order to write down the Hückel matrices one needs to number the atoms in some reasonable way and keep track of all coupled and uncoupled carbon atom pairs. When the indices in h_jm refer to uncoupled carbon atoms the matrix element vanishes, so for large molecules there will be a lot of zeroes. The coupling term β is often called the resonance integral. It is responsible for the delocalization of π-electron motion in the molecule. Thus it is also responsible for the lowering of the energy which is the cause of the "resonance stabilization" of the aromatic or conjugated molecules. Note that both α and β are negative numbers, representing a negative atomic C2p energy and a coupling energy.
In order to solve the Hückel eigenvalue problem one usually works in energy units such that −β = 1. Then the secular equation becomes

det [x 1 0 0; 1 x 1 0; 0 1 x 1; 0 0 1 x] = 0. (47)

Here we have defined x as

x = (α − ε)/β = (ε − α)/(−β). (48)

The determinant can be worked out by noting that the only nonvanishing matrix element products are the (1, 2, 3, 4), (2, 1, 3, 4), (1, 2, 4, 3), (1, 3, 2, 4), (2, 1, 4, 3) permutations, leading to the secular equation

x⁴ − 3x² + 1 = 0. (49)
We solve first for x² and find

x² = (3 ± √5)/2. (50)

The corresponding solutions for x are obtained as

x = ±√((3 − √5)/2) = ±(√5 − 1)/2 = ±0.618, (51)
x = ±√((3 + √5)/2) = ±(√5 + 1)/2 = ±1.618.
The set of eigenvectors and eigenvalues is

ψ₁ = 0.3717(φ₁ + φ₄) + 0.6015(φ₂ + φ₃), x₁ = −1.618,
ψ₂ = 0.6015(φ₁ − φ₄) + 0.3717(φ₂ − φ₃), x₂ = −0.618,
ψ₃ = 0.6015(φ₁ + φ₄) − 0.3717(φ₂ + φ₃), x₃ = 0.618,
ψ₄ = 0.3717(φ₁ − φ₄) − 0.6015(φ₂ − φ₃), x₄ = 1.618.
The energy eigenvalues are displaced down and up from the C2p energy in a symmetric fashion. Note that the eigenfunctions are symmetric or antisymmetric with respect to reflection in the midpoint. Note also the linear growth in the number of nodes with excitation. The expansion coefficients change with carbon atom somewhat like the eigenfunctions of the one-dimensional particle-in-a-box problem. When we work out the π contribution to the total binding energy we first find the number of π electrons to be placed and then place them in the available π orbitals from the lowest and up, in accord with the Aufbau principle which restricts the number of electrons in an orbital to 0, 1 or 2. In the case of butadiene we have four π electrons which fill the two lowest orbitals. The binding energy contribution is 2(1.618 + 0.618) = 4.472 in |β| units. If we instead considered the butadiene molecule to contain pairwise localized and uncoupled π bonds, C=C−C=C, then the binding energy contribution would turn out to be 4, as will be clear below. Thus the extra binding energy due to the delocalization of the π electrons over more than two nuclei is 0.472 here. This we shall call the resonance stabilization of the butadiene molecule.
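The butadiene numbers above are easy to reproduce numerically. Writing the orbital energies as ε = α + λβ, the λ are the eigenvalues of the connectivity matrix below, and since β < 0 the two orbitals with positive λ are the bonding ones that hold the four π electrons:

```python
import numpy as np

# Hueckel connectivity matrix for the butadiene carbon chain, Eq. (45),
# with alpha set to 0 and energies measured in units of |beta|
h = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
lam = np.sort(np.linalg.eigvalsh(h))[::-1]   # 1.618, 0.618, -0.618, -1.618
E_pi = 2 * (lam[0] + lam[1])                 # pi binding energy, |beta| units
print(E_pi)            # about 4.472
print(E_pi - 4.0)      # resonance stabilization vs two isolated pi bonds
```

The same few lines, with the matrix edited, answer Hückel exercise 1 below (halve one off-diagonal coupling) or handle benzene (a 6×6 ring connectivity).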
How can we obtain estimates of the two parameters α and β? The first, α, is supposed to be the C2p_π orbital energy. It can be estimated as minus the ionization energy of the carbon atom, which is 1090 kJ/mol. This parameter does not play a very important role in the Hückel theory. The more important parameter is β, the coupling constant, which delocalizes the electrons and stabilizes favored structures. In order to estimate β we shall consider the simplest molecule containing a π bond, i.e., ethylene C₂H₄. From a table of average molecular bond energies (Table 13.5 in Zumdahl, Chemical Principles) we find that for a C=C bond, consisting of a π and a σ bond, the average bond energy is 614 kJ/mol, while for the C−C σ bond the energy is 347 kJ/mol. Thus it appears that the π-bond energy can be estimated for the ethylene molecule to be 267 kJ/mol. Let us now apply Hückel theory to ethylene to find the predicted π-bond energy in terms of β. The Hückel hamiltonian is

h = [α β; β α]. (52)
The reduced secular equation becomes 
det
_
1
1
_
= 0. (53)
The solution is = ±1. The lower orbital will be occupied by two electrons
each contributing an equal amount to the bond energy. Thus the :÷bond
energy is in reduced units of , equal to 2. Setting this equal to 267 kJ/mol
we …nd that , = ÷134 kJ/mol in SIunits. This should be considered a very
approximative estimate but it gives us a plausible magnitude and an example
of how , could be obtained.
3.1 Hückel exercises:
1. One might expect on the basis of standard bonding pictures that the middle carbon-carbon bond in butadiene is weaker and longer due to dominant single-bond character. Estimate the resonance stabilization energy of butadiene (i.e., that part of the binding energy due to the further delocalization of the π-electrons in the butadiene chain) in kJ/mol in the case when the coupling between the two middle carbon atoms, due to a longer bond, is half the normal value. Compare with the value for the standard model with all couplings the same.
2. Work out the Hückel orbitals for butadiene, i.e., verify the results stated in the text, showing the algebra explicitly.
3. Write down the Hückel matrix for benzene. Use symmetry to obtain the form and the energy of the lowest lying π-orbital.
4. Consider the square planar molecule cyclobutadiene C$_4$H$_4$ obtained by tying the ends of the butadiene molecule together with the loss of two hydrogen atoms. Write down its Hückel matrix and calculate the corresponding π-orbitals and orbital energies. What is the total resonance stabilization energy?
5. Obtain the π-electron contribution to the binding energy of the square planar N$_4$ molecule and estimate the corresponding resonance energy due to the further delocalization of the π-electrons. You may use the bond energies N-N 160 kJ/mol, N=N 418 kJ/mol, in your estimation.
4 Vibrational Modes of a Molecule
Our purpose here will be to describe the motion of the nuclei in a molecule, e.g., the water molecule H$_2$O. The exact solution to this problem is intractable due to coupling between the motion of electrons and nuclei and between nuclei due to anharmonic effects. However, on the basis of a number of simplifications we shall be able to get close to reality without losing too much in accuracy. The most important simplifications are
« the point particle model, i.e., the electrons are neglected and the atoms become point particles interacting by a potential $V(x_1, x_2, \ldots)$ (actually due to the electrons), where $x_1, x_2, \ldots$ are the spatial coordinates of the atoms;
« classical mechanics, while we really ought to use quantum mechanics;
« small amplitude vibrations: anharmonic effects and vibration-rotation coupling will be neglected.
4.1 The One Dimensional Oscillator
Let the potential acting on a particle moving in one dimension be $V(x)$ and the mass of the particle $m$. We assume that $V(x)$ has the general character of a bond potential in a diatomic molecule, i.e., a well defined minimum. As the temperature decreases the bondlength decreases to become equal to the so called equilibrium value $x_e$ at $T = 0$ K. The particle would then in our simple picture sit motionless at the bottom of the potential well. We can find $x_e$ by examining the stationary points satisfying
\[ \frac{\partial V}{\partial x} = 0 \quad \text{and} \quad \frac{\partial^2 V}{\partial x^2} > 0. \tag{54} \]
These conditions identify a potential minimum. If there are several minima we choose the point where the potential is the smallest.
We now apply the harmonic approximation by expanding the potential in a Taylor series around the point $x = x_e$ and retaining only the first three terms,
\begin{align}
V(x) &= V(x_e) + (x - x_e)V' + \frac{1}{2}(x - x_e)^2 V'' \tag{55} \\
&= V(x_e) + \frac{1}{2}(x - x_e)^2 V''.
\end{align}
Here we have used the fact that $V'$ vanishes at $x = x_e$. Without loss of generality, as the mathematicians like to put it, we will now choose an energy scale such that $V(x_e) = 0$ and the origin on the x-axis such that $x_e = 0$. Then the potential can be written as
\[ V(x) = \frac{1}{2}kx^2, \quad \text{with } k = V''(x_e). \tag{56} \]
Chemists call $k$ the force constant. A particle moving in such a potential is called a harmonic oscillator. We shall now review the solution for the harmonic oscillator motion. We start from Newton's equation of motion,
\[ m\frac{d^2}{dt^2}x(t) = -\frac{d}{dx}V(x), \tag{57} \]
which yields
\[ \frac{d^2}{dt^2}x(t) = -\frac{k}{m}x(t). \tag{58} \]
In order to solve this second order ordinary differential equation we note that it is linear and use a method to be discussed in greater detail in a later chapter. Shifting the RHS over to the left we rewrite the equation as
\[ \left(\frac{d}{dt} + i\sqrt{\frac{k}{m}}\right)\left(\frac{d}{dt} - i\sqrt{\frac{k}{m}}\right)x(t) = 0. \tag{59} \]
It is now clear that, since the factors can be written in any order, any sum of solutions to the two first order equations
\[ \left(\frac{d}{dt} + i\sqrt{\frac{k}{m}}\right)x(t) = 0, \quad \text{or} \quad \left(\frac{d}{dt} - i\sqrt{\frac{k}{m}}\right)x(t) = 0, \tag{60} \]
will be a solution of the second order equation 58. The solutions to the two first order equations are
\[ x(t) = a\,e^{-i\sqrt{k/m}\,t} \quad \text{and} \quad x(t) = b\,e^{i\sqrt{k/m}\,t}, \tag{61} \]
respectively. Thus the harmonic oscillator solution is
\[ x(t) = a\,e^{-i\sqrt{k/m}\,t} + b\,e^{i\sqrt{k/m}\,t}, \tag{62} \]
but since $x(t)$ must be real the solution can be reduced to the form
\[ x(t) = A\sin\!\left(\sqrt{k/m}\,t + \delta\right), \tag{63} \]
where $A$ is a real amplitude, i.e., the maximal excursion of the oscillation from the origin, and $\delta$ is an arbitrary initial phase, $0 \le \delta \le 2\pi$, of the oscillator.
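The solution 63 can be checked against a direct numerical integration of equation 58. The following is a minimal sketch (velocity-Verlet integration; the values of $k$, $m$, $A$ and $\delta$ are arbitrary illustrative choices):

```python
import numpy as np

# Integrate x'' = -(k/m) x by velocity-Verlet and compare with the
# analytic solution x(t) = A sin(sqrt(k/m) t + delta) of Eq. (63).
k, m = 4.0, 1.0                      # arbitrary illustrative values
w = np.sqrt(k / m)                   # angular frequency
A, delta = 0.5, 0.3                  # amplitude and initial phase
dt, nsteps = 1e-4, 50000

x = A * np.sin(delta)                # initial conditions consistent with (63)
v = A * w * np.cos(delta)
for _ in range(nsteps):
    acc = -(k / m) * x               # acceleration from the harmonic force
    x_new = x + v * dt + 0.5 * acc * dt**2
    acc_new = -(k / m) * x_new
    v += 0.5 * (acc + acc_new) * dt
    x = x_new

t = nsteps * dt
exact = A * np.sin(w * t + delta)
print(abs(x - exact))                # small discretization error
```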
4.2 Multidimensional vibrations in molecules
If the molecule is made up of N atoms there are 3N spatial coordinates, so the problem is now n-dimensional where n is an integer larger than one. The 3N spatial degrees of freedom can be divided into center of mass, rotational and vibrational subsets as shown below.

# of dof   linear   nonlinear
c of m       3          3
rot          2          3
vib        3N-5       3N-6

The potential now depends on the spatial coordinates, $V(x_1, x_2, \ldots, x_n)$. If we are describing a molecule in field free space then the potential must be invariant to translation (center of mass coordinates) and rotation (rotational coordinates). This makes it useful to transform our analysis into a set of internal bond coordinates explicitly accounting for the invariance of V to external motions. However, the gain in lower dimensionality is more than offset by loss of simplicity, so we shall stick with our initial representation in terms of cartesian atomic coordinates. The global minimum is found by solving for all extrema given by the set of equations
\[ \frac{\partial}{\partial x_i}V(x_1, x_2, \ldots, x_n) = 0, \quad i = 1, 2, \ldots, n. \tag{64} \]
It is now a little more difficult to determine whether we have found a maximum or minimum, but evaluating V at the point and choosing the extremum giving the lowest potential should work for all reasonable vibrational potentials. Once the global minimum is found we transform our coordinate system
to make this point the origin. Thus we shall take $x_{1e}, x_{2e}, \ldots$ all to vanish in our discussion below. We also take the potential energy to be zero at the minimum. From the multidimensional Taylor series expansion (see Appendix) it follows that in the harmonic approximation, where terms of order higher than quadratic in the coordinates vanish, we get
\[ V(x_1, x_2, \ldots, x_n) = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} V_{ij}\,x_i x_j, \tag{65} \]
where
\[ V_{ij} = \left[\frac{\partial^2}{\partial x_i\,\partial x_j}V(x_1, x_2, \ldots, x_n)\right]_{\mathbf{x}=0} = V_{ji}. \tag{66} \]
We are now ready to consider the multidimensional motion which is described by Newton's equation suitably generalized to the multidimensional case,
\[ \frac{d^2}{dt^2}x_k(t) = -\frac{1}{m_k}\frac{\partial}{\partial x_k}V(x_1, \ldots, x_n) = -\frac{1}{m_k}\sum_{l=1}^{n}V_{kl}\,x_l(t), \quad k = 1, 2, \ldots, n. \tag{67} \]
We have a set of n coupled linear second order differential equations. This sounds, perhaps, quite intractable for large n, but we shall see that it is possible to find a coordinate transformation $\mathbf{x} \to \mathbf{y}$ such that we end up with n uncoupled equations of motion of the type 58 which we have already solved, i.e.,
\[ \frac{d^2}{dt^2}y_k(t) = -U_{kk}\,y_k(t), \quad k = 1, 2, \ldots, n, \tag{68} \]
with solutions
\[ y_k(t) = A_k \sin\!\left(\sqrt{U_{kk}}\,t + \delta_k\right), \quad k = 1, 2, \ldots, n. \tag{69} \]
Here the factor $\sqrt{U_{kk}} = \omega_k$ is called the frequency of the kth vibrational mode and is normally given in radians per second. We often refer to the variables $\{y_k\}_{k=1}^{n}$ as the normal modes since they are decoupled from each other.
The coordinate transformation we shall use is linear and given by
\[ y_k = \sum_{l=1}^{n} S_{kl}\,x_l, \quad k = 1, 2, \ldots, n, \tag{70} \]
or in vector notation
\[ \mathbf{y} = \mathbf{S}\mathbf{x}. \tag{71} \]
The task that remains is to find the matrix S and the frequencies $\{\omega_k\}_{k=1}^{n}$.
Note first that Newton's equations in the x-coordinates can be rewritten as
\[ \frac{d^2}{dt^2}\mathbf{x} = -\mathbf{R}\mathbf{x}, \tag{72} \]
where
\[ R_{kl} = \frac{1}{m_k}V_{kl}. \tag{73} \]
In order to find the corresponding equation for y we multiply both sides by S,
\[ \frac{d^2}{dt^2}\mathbf{S}\mathbf{x} = -\mathbf{S}\mathbf{R}\mathbf{x} = -\mathbf{S}\mathbf{R}\mathbf{S}^{-1}\mathbf{S}\mathbf{x}, \tag{74} \]
which can be rewritten as
\[ \frac{d^2}{dt^2}\mathbf{y} = -\mathbf{U}\mathbf{y}. \tag{75} \]
Here we used the fact that $\mathbf{S}^{-1}\mathbf{S} = \mathbf{I}$, the identity operator, and denoted $\mathbf{S}\mathbf{R}\mathbf{S}^{-1}$ as U.
Theorem: Let $\{\mathbf{e}_i\}_{i=1}^{n}$ be the basis vectors of our vector space of positions so that any vector x can be expanded in this basis,
\[ \mathbf{x} = \sum_{i=1}^{n} x_i\,\mathbf{e}_i. \tag{76} \]
In our case $\{\mathbf{e}_i\}_{i=1}^{n}$ will be the set of cartesian basis vectors. Suppose that R has n linearly independent eigenvectors $\{\mathbf{x}_i\}_{i=1}^{n}$ such that
\[ \mathbf{x}_j = \sum_{i=1}^{n} [\mathbf{x}_j]_i\,\mathbf{e}_i. \tag{77} \]
Then, if S is chosen so that
\[ \mathbf{S}^{-1}\mathbf{e}_i = \mathbf{x}_i, \quad i = 1, 2, \ldots, n, \tag{78} \]
i.e., the eigenvectors of R become columns of $\mathbf{S}^{-1}$,
\[ \left[\mathbf{S}^{-1}\right]_{li} = [\mathbf{x}_i]_l, \tag{79} \]
then U becomes diagonal.
Proof:
\[ U_{ij} = \mathbf{e}_i^{T}\mathbf{S}\mathbf{R}\mathbf{S}^{-1}\mathbf{e}_j = \mathbf{e}_i^{T}\mathbf{S}\mathbf{R}\mathbf{x}_j = R_j\,\mathbf{e}_i^{T}\mathbf{S}\mathbf{x}_j = R_j\,\mathbf{e}_i^{T}\mathbf{S}\mathbf{S}^{-1}\mathbf{e}_j = R_j\,\delta_{ij}, \tag{80} \]
where
\[ \mathbf{R}\mathbf{x}_j = R_j\,\mathbf{x}_j, \tag{81} \]
and we have assumed the basis to be orthonormal, i.e., $\mathbf{e}_i^{T}\mathbf{e}_j = \delta_{ij}$.
Thus we conclude that when R has n linearly independent eigenvectors then we can construct $\mathbf{S}^{-1}$ from these eigenvectors and take the inverse to find S. Moreover, each such eigenvector corresponds to a normal mode with frequency equal to the square root of the corresponding eigenvalue $R_j$.
Suppose now that all the masses of the atoms in our molecule are the same as in, e.g., O$_3$; then
\[ R_{ij} = \frac{1}{m}V_{ij} = \frac{1}{m}V_{ji} = R_{ji}. \tag{82} \]
Thus R is symmetric and Hermitian. It follows that it has n linearly independent eigenvectors $\{\mathbf{x}_i\}_{i=1}^{n}$, so the theorem applies. Moreover, transformations between orthonormal basis sets correspond to orthogonal matrices, so
\[ \mathbf{S} = \left(\mathbf{S}^{-1}\right)^{T} \quad \text{and} \quad S_{kl} = [\mathbf{x}_k]_l, \tag{83} \]
i.e., the eigenvectors become the rows of S. This follows from
\[ \left[\mathbf{S}\mathbf{S}^{-1}\right]_{ij} = \sum_{l=1}^{n} S_{il}\left[\mathbf{S}^{-1}\right]_{lj} = \sum_{l=1}^{n} [\mathbf{x}_i]_l\,[\mathbf{x}_j]_l = \mathbf{x}_i^{T}\mathbf{x}_j = \delta_{ij}. \tag{84} \]
What to do when the masses are not equal? Here is a useful trick. We will simply insert a transformation to mass weighted coordinates to symmetrize R before applying the transformation above. Define M by
\[ M_{kl} = \sqrt{m_k}\,\delta_{kl} \tag{85} \]
and note that $\mathbf{M}^{-1}$ satisfies
\[ \left[\mathbf{M}^{-1}\right]_{kl} = \frac{1}{\sqrt{m_k}}\,\delta_{kl}. \tag{86} \]
Now note that
\[ \mathbf{S}\mathbf{R}\mathbf{S}^{-1} = \mathbf{S}\mathbf{M}^{-1}\left(\mathbf{M}\mathbf{R}\mathbf{M}^{-1}\right)\mathbf{M}\mathbf{S}^{-1} = \mathbf{A}\bar{\mathbf{R}}\mathbf{A}^{-1}, \tag{87} \]
where
\[ \bar{R}_{kl} = \frac{1}{\sqrt{m_k m_l}}\,V_{kl} = \bar{R}_{lk}. \tag{88} \]
Thus the transformation to mass weighted coordinates produces a coupling matrix $\bar{\mathbf{R}}$ which is Hermitian and has n orthonormal eigenvectors $\{\mathbf{x}_i\}_{i=1}^{n}$.
By our theorem we then find that $A_{kl} = [\mathbf{x}_k]_l$, and from $\mathbf{A} = \mathbf{S}\mathbf{M}^{-1}$ it follows that
\[ \mathbf{S} = \mathbf{A}\mathbf{M}, \quad \text{and} \quad S_{kl} = \sum_{i=1}^{n} A_{ki}M_{il} = \sqrt{m_l}\,[\mathbf{x}_k]_l. \tag{89} \]
The frequency corresponding to the kth mode is $\omega_k = \sqrt{\bar{R}_k}$, where
\[ \bar{\mathbf{R}}\mathbf{x}_k = \bar{R}_k\,\mathbf{x}_k. \tag{90} \]
Note that when the masses are different S is no longer orthogonal, i.e., the normal modes are no longer orthogonal to each other. In calculations for realistic molecular models one will find that the normal modes corresponding to center of mass translation and rotation correspond to zero frequency modes, $\omega = 0$.
Note that when working out normal mode coordinates we divide by some appropriate $\sqrt{mass}$ to get
\[ y_k = \sum_{l=1}^{n}\sqrt{\frac{m_l}{mass}}\,[\mathbf{x}_k]_l\,x_l. \tag{91} \]
4.2.1 Summary of steps in the normal mode analysis
1. Make sure you have masses $m_1, \ldots, m_n$ and a potential $V(x_1, \ldots, x_n)$.
2. Find the minimum of $V(x_1, \ldots, x_n)$ and the double derivatives $V_{ij} = \frac{\partial^2}{\partial x_i\,\partial x_j}V(x_1, \ldots, x_n)$ at the minimum.
3. If the masses are not the same then we create the Hermitian matrix $\bar{\mathbf{R}}$ defined by
\[ \bar{R}_{kl} = \frac{1}{\sqrt{m_k m_l}}\,V_{kl}. \]
If the masses are all the same then we can proceed as below noting that $\bar{\mathbf{R}} = \mathbf{R}$.
4. Solve the eigenvalue problem $\bar{\mathbf{R}}\mathbf{x}_j = \bar{R}_j\mathbf{x}_j$ for $n$ orthonormal eigenvectors with corresponding real eigenvalues.
5. Obtain the vibrational frequencies as $\omega_k = \sqrt{\bar{R}_k}$ and the corresponding normal mode coordinates as $y_k = \sum_{l=1}^{n}\sqrt{\frac{m_l}{mass}}\,[\mathbf{x}_k]_l\,x_l$, where mass can be taken to be any convenient mass, e.g., one of the atomic masses or the average atomic mass. Note that the mass-stretched components of the eigenvectors of the coupling matrix $\bar{\mathbf{R}}$ become the normal mode coordinates.
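The steps above translate directly into a few lines of NumPy. The sketch below assumes the second-derivative matrix has already been evaluated at the minimum (steps 1-2); the two-mass example at the end is my own illustration, not taken from the text:

```python
import numpy as np

def normal_modes(V2, masses):
    """Steps 3-5 of the summary: mass-weight, diagonalize, take square roots.

    V2     : (n, n) matrix of second derivatives V_ij at the minimum
    masses : length-n sequence of masses m_1..m_n
    Returns (frequencies omega_k, orthonormal eigenvectors as rows).
    """
    m = np.asarray(masses, dtype=float)
    Rbar = V2 / np.sqrt(np.outer(m, m))   # Rbar_kl = V_kl / sqrt(m_k m_l)
    evals, evecs = np.linalg.eigh(Rbar)   # real eigenvalues, ascending
    evals = np.clip(evals, 0.0, None)     # zero modes may come out as -1e-16
    return np.sqrt(evals), evecs.T

# Illustration: two coupled unit masses with V = x1^2/2 + x2^2/2 + x1*x2,
# so V2 = [[1,1],[1,1]]; the frequencies are 0 (translation) and sqrt(2).
w, modes = normal_modes(np.array([[1., 1.], [1., 1.]]), [1., 1.])
print(np.round(w, 3))   # 0 and 1.414
```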
4.3 Example 1 - A Two-Dimensional Vibration
Suppose that two particles of mass $m_1 = m$ and $m_2 = 3m$, respectively, are performing vibrations in a potential $V(s_1, s_2) = 10 + s_1 - 2s_2 + s_1^2/2 + 2s_2^2 + s_1 s_2$. Let us find the normal modes of this motion. We begin by finding the equilibrium geometry of the vibrational motion, i.e., the potential minimum.
\begin{align}
\frac{\partial}{\partial s_1}V &= 1 + s_1 + s_2 = 0; \tag{92} \\
\frac{\partial}{\partial s_2}V &= -2 + 4s_2 + s_1 = 0; \\
s_{2,eq} &= 1; \quad s_{1,eq} = -2.
\end{align}
Next we rewrite the potential in new coordinates $x_1, x_2$ defined as $x_1 = s_1 - s_{1,eq} = s_1 + 2$ and $x_2 = s_2 - s_{2,eq} = s_2 - 1$. We get
\[ V(x_1, x_2) = 8 + x_1^2/2 + 2x_2^2 + x_1 x_2. \tag{93} \]
The V-matrix is then
\[ \mathbf{V} = \begin{pmatrix} 1 & 1 \\ 1 & 4 \end{pmatrix}, \tag{94} \]
and the symmetrized R-matrix is
\[ \bar{\mathbf{R}} = \frac{1}{m}\begin{pmatrix} 1 & 1/\sqrt{3} \\ 1/\sqrt{3} & 4/3 \end{pmatrix}. \tag{95} \]
The matrix $m\bar{\mathbf{R}}$ has the eigenvalues and eigenvectors
\[ \begin{pmatrix} 1 \\ \sqrt{3}\left(1 + \sqrt{13}\right)/6 \end{pmatrix} \leftrightarrow \frac{7 + \sqrt{13}}{6}, \qquad \begin{pmatrix} 1 \\ \sqrt{3}\left(1 - \sqrt{13}\right)/6 \end{pmatrix} \leftrightarrow \frac{7 - \sqrt{13}}{6}. \]
Thus the vibrational frequencies are
\[ \omega_1 = \sqrt{\frac{7 + \sqrt{13}}{6m}} \quad \text{and} \quad \omega_2 = \sqrt{\frac{7 - \sqrt{13}}{6m}}. \]
The corresponding normal mode coordinates (unnormalized) are
\begin{align}
y_1 &= x_1 + \sqrt{3}\cdot\frac{\sqrt{3}\left(1 + \sqrt{13}\right)}{6}\,x_2 = x_1 + \frac{1 + \sqrt{13}}{2}\,x_2, \tag{96} \\
y_2 &= x_1 + \sqrt{3}\cdot\frac{\sqrt{3}\left(1 - \sqrt{13}\right)}{6}\,x_2 = x_1 + \frac{1 - \sqrt{13}}{2}\,x_2.
\end{align}
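The eigenvalues quoted for this example can be verified numerically. A minimal sketch, working in units with $m = 1$:

```python
import numpy as np

# Check Example 1: the symmetrized matrix m*Rbar of Eq. (95) should have
# eigenvalues (7 -/+ sqrt(13))/6, whose square roots are the frequencies.
Rbar = np.array([[1.0, 1 / np.sqrt(3)],
                 [1 / np.sqrt(3), 4 / 3]])
evals = np.linalg.eigvalsh(Rbar)                 # ascending order
expected = np.array([(7 - np.sqrt(13)) / 6, (7 + np.sqrt(13)) / 6])
print(np.allclose(evals, expected))              # True

# Frequencies in units of 1/sqrt(m):
print(np.round(np.sqrt(evals), 4))               # 0.7522 and 1.3295
```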
4.4 Example 2 - Normal Modes of a Linear Chain Molecule
Let us consider a molecule consisting of three atoms attached to each other by chemical bonds represented by Morse potentials, $\phi(r) = D\left(e^{-2\gamma(r - r_e)} - 2e^{-\gamma(r - r_e)}\right)$, and confined to move in one dimension. We shall take the masses to be the same. The coordinate $r$ is the bondlength. The Morse potential reaches its minimum potential energy of $-D$, where $D$ is the bond dissociation energy, at the equilibrium bondlength $r_e$. The full potential can then be written as
\[ V(r_1, r_2, r_3) = D\left(e^{-2\gamma(r_2 - r_1 - r_e)} - 2e^{-\gamma(r_2 - r_1 - r_e)} + e^{-2\gamma(r_3 - r_2 - r_e)} - 2e^{-\gamma(r_3 - r_2 - r_e)}\right). \tag{97} \]
The global potential minimum occurs when $r_2 - r_1 = r_3 - r_2 = r_e$, where the total potential energy is $-2D$. The minimum we shall choose to expand around is $r_1 = 0$, $r_2 = r_e$, $r_3 = 2r_e$. Our x-coordinates measuring deviation from the equilibrium positions are then $x_1 = r_1$, $x_2 = r_2 - r_e$, $x_3 = r_3 - 2r_e$. In these coordinates the potential becomes
\[ V(x_1, x_2, x_3) = D\left(e^{-2\gamma(x_2 - x_1)} - 2e^{-\gamma(x_2 - x_1)} + e^{-2\gamma(x_3 - x_2)} - 2e^{-\gamma(x_3 - x_2)}\right). \tag{98} \]
The first derivatives are
\begin{align}
\frac{\partial V}{\partial x_1} &= 2D\gamma\left(e^{-2\gamma(x_2 - x_1)} - e^{-\gamma(x_2 - x_1)}\right), \tag{99} \\
\frac{\partial V}{\partial x_2} &= 2D\gamma\left(-e^{-2\gamma(x_2 - x_1)} + e^{-\gamma(x_2 - x_1)} + e^{-2\gamma(x_3 - x_2)} - e^{-\gamma(x_3 - x_2)}\right), \\
\frac{\partial V}{\partial x_3} &= -2D\gamma\left(e^{-2\gamma(x_3 - x_2)} - e^{-\gamma(x_3 - x_2)}\right).
\end{align}
Note that the derivatives vanish at $x_1 = x_2 = x_3 = 0$. Using the definition
\[ V_{ij} = \frac{\partial^2}{\partial x_i\,\partial x_j}V, \tag{100} \]
where the double derivative is evaluated at the origin, we get
\begin{align}
V_{11} &= 2D\gamma^2 = V_{33}, \tag{101} \\
V_{22} &= 4D\gamma^2, \\
V_{12} &= -2D\gamma^2 = V_{21} = V_{23} = V_{32}, \\
V_{13} &= V_{31} = 0.
\end{align}
The corresponding R matrix is
\[ \mathbf{R} = \frac{2D\gamma^2}{m}\begin{pmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{pmatrix} = \frac{2D\gamma^2}{m}\hat{\mathbf{R}}. \tag{102} \]
Now we are ready to find the eigenvectors of $\hat{\mathbf{R}}$. We have
\[ \det\begin{pmatrix} 1-\lambda & -1 & 0 \\ -1 & 2-\lambda & -1 \\ 0 & -1 & 1-\lambda \end{pmatrix} = (1-\lambda)^2(2-\lambda) - 2(1-\lambda) = -\lambda^3 + 4\lambda^2 - 3\lambda. \tag{103} \]
The roots are $\lambda_1 = 0$, $\lambda_2 = 3$, $\lambda_3 = 1$. The corresponding orthonormal eigenvectors are
\[ \mathbf{x}_1 = \frac{1}{\sqrt{3}}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad \mathbf{x}_2 = \frac{1}{\sqrt{6}}\begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}, \quad \mathbf{x}_3 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}. \tag{104} \]
It follows that the normal modes are
\begin{align}
y_1 &= \frac{1}{\sqrt{3}}x_1 + \frac{1}{\sqrt{3}}x_2 + \frac{1}{\sqrt{3}}x_3, & \omega_1 &= 0, \ \text{translation}, \tag{105} \\
y_2 &= \frac{1}{\sqrt{6}}x_1 - \frac{2}{\sqrt{6}}x_2 + \frac{1}{\sqrt{6}}x_3, & \omega_2 &= \gamma\sqrt{\frac{6D}{m}}, \ \text{antisymmetric vibration}, \\
y_3 &= \frac{1}{\sqrt{2}}x_1 - \frac{1}{\sqrt{2}}x_3, & \omega_3 &= \gamma\sqrt{\frac{2D}{m}}, \ \text{symmetric vibration}.
\end{align}
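The chain-molecule analysis above can be cross-checked numerically; a minimal NumPy sketch:

```python
import numpy as np

# Verify Example 2: the eigenvalues of R-hat from Eq. (102) should be
# 0, 1 and 3, giving omega = 0, gamma*sqrt(2D/m) and gamma*sqrt(6D/m)
# after restoring the prefactor 2*D*gamma^2/m.
Rhat = np.array([[ 1., -1.,  0.],
                 [-1.,  2., -1.],
                 [ 0., -1.,  1.]])
evals, evecs = np.linalg.eigh(Rhat)
print(np.allclose(evals, [0., 1., 3.]))                  # True

# The zero-frequency mode is uniform translation, (1,1,1)/sqrt(3):
translation = evecs[:, 0]
print(np.allclose(np.abs(translation), 1 / np.sqrt(3)))  # True
```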
4.5 Exercises on Normal Mode Analysis
1. A particle of mass $m$ is hanging vertically from the ceiling in a harmonic spring described by the wall-particle potential $V(x_1) = kx_1^2/2$. The gravitational potential can be taken to be $V_G(x_1) = -gmx_1$. a) What is the equilibrium position and vibrational frequency of the vertical vibration? b) Obtain the equilibrium geometry, vibrational modes and corresponding frequencies for the case when a second particle of the same type is attached to the first particle by a harmonic spring corresponding to the potential $V(x_1, x_2) = k(x_1 - x_2)^2/2$.
2. Find the vibrational frequencies of the triatomic one-dimensional chain in the example above in the case when the mass of the central particle is four times larger (4m) but all other parameters of the system remain the same.
3. Obtain the normal modes and corresponding frequencies of a system consisting of two identical particles of mass $m$ moving in one dimension $x$ and interacting by the Lennard-Jones potential
\[ V(x_1, x_2) = 4\epsilon\left[\left(\frac{\sigma}{x_{12}}\right)^{12} - \left(\frac{\sigma}{x_{12}}\right)^{6}\right], \]
where $x_{12} = |x_1 - x_2|$.
4. Reconsider the system in Exercise 3 above generalized to the case when the particle masses are not equal, $m_1 \neq m_2$.
5. Find the harmonic vibrational frequency of the diatomic molecule A$_2$ with the bond potential $V(r) = D\left(\exp(-2\alpha r^2) - 2\exp(-\alpha r^2)\right)$.
4.6 Appendix - Multidimensional Taylor Series Expansion
Recall the form of the one-dimensional Taylor series expansion,
\begin{align}
V(r_0 + x) &= V(r_0) + x\left(\frac{\partial V}{\partial r}\right)_{r=r_0} + \frac{1}{2}x^2\left(\frac{\partial^2 V}{\partial r^2}\right)_{r=r_0} + \ldots \tag{106} \\
&= \sum_{n=0}^{\infty}\frac{1}{n!}\,x^n\left(\frac{\partial^n V}{\partial r^n}\right)_{r=r_0},
\end{align}
where $r = r_0 + x$. The corresponding multidimensional Taylor series expansion is
\begin{align}
V(\mathbf{r}_0 + \mathbf{x}) &= V(\mathbf{r}_0) + \sum_{i=1}^{N} x_i\left(\frac{\partial V}{\partial r_i}\right)_{\mathbf{r}=\mathbf{r}_0} + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} x_i x_j\left(\frac{\partial^2 V}{\partial r_i\,\partial r_j}\right)_{\mathbf{r}=\mathbf{r}_0} + \ldots \tag{107} \\
&= \sum_{n=0}^{\infty}\frac{1}{n!}\left[\left(\mathbf{x}\cdot\frac{\partial}{\partial \mathbf{r}}\right)^{n} V(\mathbf{r})\right]_{\mathbf{r}=\mathbf{r}_0}.
\end{align}
Probably the easiest way to prove this expansion is to convert it to a one-dimensional expansion by setting $\mathbf{x} = s\hat{\mathbf{x}}$, where $\hat{\mathbf{x}}$ is a directional vector and $s$ tells us how far in this direction we have gone. Then we can, of course, expand with respect to $s$ to get
\[ V(\mathbf{r}_0 + s\hat{\mathbf{x}}) = \sum_{n=0}^{\infty}\frac{1}{n!}\,s^n\left(\frac{\partial^n V}{\partial s^n}\right)_{s=0}. \tag{108} \]
But now we note that
\[ \frac{\partial}{\partial s}V(\mathbf{r}_0 + s\hat{\mathbf{x}}) = \sum_{i=1}^{N}\hat{x}_i\left(\frac{\partial V}{\partial r_i}\right)_{\mathbf{r}=\mathbf{r}_0}, \tag{109} \]
\[ \frac{\partial^2}{\partial s^2}V(\mathbf{r}_0 + s\hat{\mathbf{x}}) = \sum_{i=1}^{N}\sum_{j=1}^{N}\hat{x}_i\hat{x}_j\left(\frac{\partial^2 V}{\partial r_i\,\partial r_j}\right)_{\mathbf{r}=\mathbf{r}_0}, \tag{110} \]
and so on. Thus we can write
\[ s^n\frac{\partial^n}{\partial s^n}V(\mathbf{r}_0 + s\hat{\mathbf{x}}) = \left(\mathbf{x}\cdot\frac{\partial}{\partial \mathbf{r}}\right)^{n} V(\mathbf{r}_0 + \mathbf{x}). \tag{111} \]
Insertion of this result in 108 above yields the multidimensional form of the Taylor series expansion.
Part II
Series Expansions and Transforms with Applications
5 Fourier Series Expansion
We have already met the idea that function spaces can be turned into finite- or discrete infinite-dimensional vector spaces by the introduction of a basis set of functions. This is the key idea which has made quantum chemistry tractable for even reasonably large molecules. Long before the quantum chemists, mathematicians had the same idea. The best example is the Fourier expansion which we shall now discuss.
We can readily verify that functions on a domain D, the set of independent variable values over which the function will be defined, form a vector space. By introduction of a metric we obtain a metric vector space. The metric is defined by a scalar product satisfying, as we learnt in Chapter 1, a number of conditions. The most common definition of a scalar product in a vector space of functions is given by
\[ (g, h) = \int_D d\xi\, g^*(\xi)\,h(\xi), \tag{112} \]
where ξ represents the independent variables. This definition of scalar product, called an $L^2$ scalar product, means in our case that we have a Hilbert space. We have already used this scalar product in our discussion of Hückel theory.
5.1 Simple Form
Suppose now that we consider a function $f(x)$ on the interval $[-\pi, \pi]$. The Fourier basis functions are then the functions $\{\cos nx\}_{n=0}^{\infty} \cup \{\sin nx\}_{n=1}^{\infty}$. They are all orthogonal in our Hilbert space, i.e.,
\begin{align}
\int_{-\pi}^{\pi} dx\,\sin mx\,\cos nx &= 0 \quad \text{for all } m, n, \tag{113} \\
\int_{-\pi}^{\pi} dx\,\sin mx\,\sin nx &= \int_{-\pi}^{\pi} dx\,\cos mx\,\cos nx = \pi\delta_{m,n} \quad \text{for } m, n > 0, \\
\cos 0x = 1 \implies \int_{-\pi}^{\pi} dx\,\cos 0x\,\cos mx &= 2\pi\delta_{0,m}, \quad \int_{-\pi}^{\pi} dx\,\cos 0x\,\sin mx = 0 \quad \text{for } m > 0.
\end{align}
Using these functions as basis functions we can expand $f(x)$ as
\[ f(x) = a_0 + \sum_{n=1}^{\infty}\left(a_n\cos nx + b_n\sin nx\right), \quad -\pi < x < \pi, \tag{114} \]
where the expansion coefficients can be found by taking scalar products of both sides of the equality 114 with respect to each basis function in turn. We get
\begin{align}
a_0 &= \frac{1}{2\pi}\int_{-\pi}^{\pi} dx\,f(x), \tag{115} \\
a_n &= \frac{1}{\pi}\int_{-\pi}^{\pi} dx\,\cos nx\,f(x), \quad n = 1, \ldots, \infty, \\
b_n &= \frac{1}{\pi}\int_{-\pi}^{\pi} dx\,\sin nx\,f(x), \quad n = 1, \ldots, \infty.
\end{align}
This defines the Fourier expansion of $f(x)$ on $[-\pi, \pi]$. The identity of the original and the expanded function is not absolute but of a "weak" sense, meaning that
\[ (g, f) = \left(g,\ a_0 + \sum_{n=1}^{\infty}\left(a_n\cos nx + b_n\sin nx\right)\right) \tag{116} \]
for any function $g(x)$ in our vector space. Although the mathematicians call this sense of equality weak, we find in physics and chemistry that it is quite strong. We rarely need to distinguish between weak and absolute point-by-point equality.
Note that all the basis functions are periodic with the period $2\pi$, i.e., for an integer $m$ we get
\begin{align}
\cos(nx + 2\pi m) &= \cos nx, \quad n = 0, 1, \ldots, \infty, \tag{117} \\
\sin(nx + 2\pi m) &= \sin nx, \quad n = 1, \ldots, \infty.
\end{align}
It follows that the expanded function also is periodic with the period $2\pi$,
\[ f(x + 2\pi m) = f(x). \tag{118} \]
Thus, irrespective of what the original function was up to, the Fourier expansion outside the chosen interval $[-\pi, \pi]$ produces a periodic function by the relation 118.
5.2 Arbitrary Interval
The Fourier expansion above can easily be generalized to an arbitrary finite interval $[x_0, x_0 + L]$. This is done by recognizing that the variable transformation $y = (2\pi(x - x_0)/L) - \pi$ transforms the interval $[x_0, x_0 + L]$ back to the interval $[-\pi, \pi]$. Thus our new basis functions are $\{\cos(2\pi n(x - x_0)/L - \pi n)\}_{n=0}^{\infty} \cup \{\sin(2\pi n(x - x_0)/L - \pi n)\}_{n=1}^{\infty}$. However, using the trigonometric relations
\begin{align}
\cos(c \pm \pi) &= -\cos(c), \tag{119} \\
\sin(c \pm \pi) &= -\sin(c),
\end{align}
and the fact that the sign of a basis function can be changed without difficulty, we can simplify the basis to the form $\{\cos(2\pi n(x - x_0)/L)\}_{n=0}^{\infty} \cup \{\sin(2\pi n(x - x_0)/L)\}_{n=1}^{\infty}$. The Fourier expansion then becomes
\[ f(x) = a_0 + \sum_{n=1}^{\infty}\left(a_n\cos(2\pi n(x - x_0)/L) + b_n\sin(2\pi n(x - x_0)/L)\right), \quad x_0 < x < x_0 + L, \tag{120} \]
where the expansion coefficients are obtained as
\begin{align}
a_0 &= \frac{1}{L}\int_{x_0}^{x_0+L} dx\,f(x), \tag{121} \\
a_n &= \frac{2}{L}\int_{x_0}^{x_0+L} dx\,\cos(2\pi n(x - x_0)/L)\,f(x), \quad n = 1, \ldots, \infty, \\
b_n &= \frac{2}{L}\int_{x_0}^{x_0+L} dx\,\sin(2\pi n(x - x_0)/L)\,f(x), \quad n = 1, \ldots, \infty.
\end{align}
When can a function be expanded in a Fourier series on a finite interval? There are two answers to this question:
Theorem (weak formulation): The Fourier basis functions form a complete set on the finite interval for which they were chosen.
Theorem (strong formulation, Dirichlet's theorem): Suppose $f(x)$ is well-defined on $[x_0, x_0 + L]$, is bounded, and has only a finite number of maxima and minima and discontinuities. Let $f(x)$ for other values of $x$ be defined by periodicity, $f(x + nL) = f(x)$ for integer $n$; then the Fourier series 120 for $f(x)$ converges to $(f(x^+) + f(x^-))/2$ for all $x$.
The first theorem simply states that the Fourier expansion gives a unique mapping of $f(x)$ into a vector in our vector space of functions such that all scalar products with functions in our vector space are reproduced by the Fourier expansion. As noted above, this is nearly always sufficient for us. The second theorem is deeper and describes what happens at each point $x$ given some conditions on the function.
5.3 Complex form
At the minor cost of working with complex basis functions we can give the Fourier series expansion its most general form. Suppose that we want to consider a function $f(x)$ on the interval $[-L/2, L/2]$. We can then use the basis functions $\{\exp(i2\pi nx/L)\}_{n=-\infty}^{\infty}$, which satisfy the orthogonality relation
\[ \left(e^{i2\pi mx/L}, e^{i2\pi nx/L}\right) = \int_{-L/2}^{L/2} dx\, e^{-i2\pi mx/L}\,e^{i2\pi nx/L} = L\,\delta_{m,n} \quad \text{for integers } m, n. \tag{122} \]
The Fourier expansion then takes the form
\[ f(x) = \sum_{n=-\infty}^{\infty} d_n\,e^{i2\pi nx/L}, \tag{123} \]
with expansion coefficients defined by
\[ d_n = \frac{1}{L}\int_{-L/2}^{L/2} dx\, e^{-i2\pi nx/L}\,f(x), \quad n = -\infty, \ldots, -1, 0, 1, \ldots, \infty. \tag{124} \]
The relation between the complex and the real form of the Fourier expansion can be seen from the relation
\begin{align}
f(x) &= d_0 + \sum_{n=1}^{\infty}\left(d_n e^{i2\pi nx/L} + d_{-n}e^{-i2\pi nx/L}\right) \tag{125} \\
&= d_0 + \sum_{n=1}^{\infty}\left[(d_n + d_{-n})\cos(2\pi nx/L) + i(d_n - d_{-n})\sin(2\pi nx/L)\right] \\
&= a_0 + \sum_{n=1}^{\infty}\left[a_n\cos(2\pi nx/L) + b_n\sin(2\pi nx/L)\right].
\end{align}
It follows that
\begin{align}
a_0 &= d_0, \tag{126} \\
a_n &= d_n + d_{-n} \quad \text{and} \quad b_n = i(d_n - d_{-n}), \quad \text{for } n = 1, \ldots, \infty, \\
d_n &= \frac{1}{2}(a_n - ib_n) \quad \text{and} \quad d_{-n} = \frac{1}{2}(a_n + ib_n), \quad \text{for } n = 1, \ldots, \infty.
\end{align}
Example 1: Obtain the Fourier series expansion of the function $f(x) = x$ on the interval $-\pi < x < \pi$. Let us first use the original simple Fourier basis set. Note now that using the orthogonality relations and coefficient formulas above we get
\[ \int_{-\pi}^{\pi} dx\,x = \int_{-\pi}^{\pi} dx\,x\cos(nx) = 0, \]
\[ \int_{-\pi}^{\pi} dx\,x\sin(nx) = \left[-\frac{x}{n}\cos(nx)\right]_{-\pi}^{\pi} + \int_{-\pi}^{\pi} dx\,\frac{1}{n}\cos(nx) = -\frac{2\pi}{n}\cos(n\pi). \]
Thus the coefficients $a_0, a_1, \ldots$ all vanish. Noting that $\cos(n\pi) = +1$ for $n = 2, 4, \ldots$ and $= -1$ for $n = 1, 3, \ldots$, we get
\[ x = 2\left(\sin x - \frac{1}{2}\sin 2x + \frac{1}{3}\sin 3x - \ldots\right) = 2\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin(nx). \]
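The coefficients derived in Example 1 can be checked numerically by evaluating the integrals 115 on a grid; a minimal sketch:

```python
import numpy as np

# Numerical check of Example 1: for f(x) = x on (-pi, pi) the cosine
# coefficients a_n vanish and b_n = 2*(-1)**(n+1)/n.
N = 200000
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N
f = x

err_a = 0.0
err_b = 0.0
for n in range(1, 6):
    a_n = np.sum(f * np.cos(n * x)) * dx / np.pi
    b_n = np.sum(f * np.sin(n * x)) * dx / np.pi
    err_a = max(err_a, abs(a_n))
    err_b = max(err_b, abs(b_n - 2 * (-1) ** (n + 1) / n))
print(err_a < 1e-3, err_b < 1e-3)   # True True
```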
5.4 Exercises on Fourier Series:
1. Prove that if the function $f(x)$ on $[-L/2, L/2]$ is even then all its Fourier expansion coefficients corresponding to sin-functions vanish, while if $f(x)$ is odd the coefficients corresponding to cos-functions will vanish. Note that an even function satisfies $f(x) = f(-x)$ and an odd function satisfies $f(x) = -f(-x)$.
2. Obtain the Fourier expansion of $f(x) = x^2$ on the interval $[-\pi, \pi]$ using first the real form, then the complex form of the expansion.
3. Obtain the Fourier expansion of the function $f(x) = x$ on the interval $[-\pi, \pi]$. Draw $f(x)$ and its approximation $f_n(x) = a_0 + \ldots + a_n\cos nx + b_n\sin nx$ for $n = 0, 1$ and 2. Discuss the observed approach of $f_n$ to $f$ with increasing $n$. Why might this particular function be a harsh test of the convergence of a Fourier expansion?
6 Fourier Transforms
The Fourier series expansion is applicable on a finite interval and to periodic functions on an infinite interval. Naturally one would like to be able to handle functions on the entire line $-\infty < x < \infty$ whether or not they are periodic. The required generalization to accomplish this will lead us to the Fourier transform. We shall start from the complex form of the Fourier series expansion and write it in a suggestive form as follows,
\[ f(x) = \sum_{n=-\infty}^{\infty} \Delta k\,\bar{f}(k_n)\,e^{ik_n x}, \tag{127} \]
where $k_n = 2\pi n/L$ and $\Delta k = k_n - k_{n-1} = 2\pi/L$. The coefficient $d_n$ has been renamed $\Delta k\,\bar{f}(k_n)$, i.e.,
\[ \bar{f}(k_n) = \frac{1}{2\pi}\int_{-L/2}^{L/2} dx\, e^{-ik_n x}\,f(x). \tag{128} \]
Insertion into 127 yields
\[ f(x) = \sum_{n=-\infty}^{\infty} \Delta k\,\frac{1}{2\pi}\int_{-L/2}^{L/2} dx'\, e^{-ik_n x'}f(x')\,e^{ik_n x}. \tag{129} \]
Now we are ready to take the limit as $L \to \infty$. Instead of the discrete coefficients $k_n$ we now get a continuous parameter $k$ and the sum over $n$ becomes an integral over $k$. If all integrals converge properly we have
\begin{align}
f(x) &= \int_{-\infty}^{\infty} dk\,\bar{f}(k)\,e^{ikx}, \tag{130} \\
\bar{f}(k) &= \frac{1}{2\pi}\int_{-\infty}^{\infty} dx\,f(x)\,e^{-ikx}.
\end{align}
Note that we have found a pair of functions which are images in either x- or k-space of one function. If we know $f(x)$ we can generate $\bar{f}(k)$ and vice versa. We say that $\bar{f}(k)$ is the Fourier transform of $f(x)$. The expressions for $f(x)$ and $\bar{f}(k)$ look very similar and can be made to look even more similar if we symmetrize the transform. We define a new transform $\tilde{f}(k)$ by
\[ \tilde{f}(k) = \sqrt{2\pi}\,\bar{f}(k). \tag{131} \]
The new transform can then be seen to satisfy the relations
\begin{align}
f(x) &= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} dk\,\tilde{f}(k)\,e^{ikx}, \tag{132} \\
\tilde{f}(k) &= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} dx\,f(x)\,e^{-ikx}.
\end{align}
Example 2: Let us obtain the Fourier transform of the function
\begin{align*}
f(x) &= \exp(-ax), \quad x > 0, \\
&= 0, \quad x \le 0.
\end{align*}
If $a$ is real and greater than zero we have
\[ \tilde{f}(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} dx\, e^{-ikx}f(x) = \frac{1}{\sqrt{2\pi}}\int_{0}^{\infty} dx\, e^{-ikx - ax} = \frac{1}{\sqrt{2\pi}}\left[\frac{-1}{a + ik}e^{-(a+ik)x}\right]_{0}^{\infty} = \frac{1}{\sqrt{2\pi}}\frac{1}{a + ik}. \]
Finally, to more clearly expose the real and imaginary parts, we might write the result above in the form
\[ \tilde{f}(k) = \frac{1}{\sqrt{2\pi}}\,\frac{a - ik}{a^2 + k^2}. \]
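The closed-form transform of Example 2 can be compared with a direct quadrature of the defining integral 132; a minimal sketch (the values of $a$ and $k$ are arbitrary illustrative choices):

```python
import numpy as np

# Numerical check of Example 2: the symmetrized transform of
# f(x) = exp(-a x), x > 0, should equal 1/(sqrt(2*pi)*(a + i*k)).
a, k = 1.3, 0.7
x = np.linspace(0.0, 40.0, 400001)              # exp(-40*a) is negligible
dx = x[1] - x[0]
y = np.exp(-a * x) * np.exp(-1j * k * x)
integral = np.sum(y[1:] + y[:-1]) * dx / 2      # trapezoid rule
numeric = integral / np.sqrt(2 * np.pi)
analytic = 1 / (np.sqrt(2 * np.pi) * (a + 1j * k))
print(abs(numeric - analytic) < 1e-6)           # True
```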
6.1 The δ-Function
By inserting the expression for the Fourier transform in the expression for the function $f(x)$ we get
\[ f(x) = \int_{-\infty}^{\infty} dk\,\frac{1}{2\pi}\int_{-\infty}^{\infty} dx'\,f(x')\,e^{-ikx'}e^{ikx}. \tag{133} \]
By inverting the order of integration we find then that
\[ f(x) = \int_{-\infty}^{\infty} dx'\,f(x')\,\delta(x - x'), \tag{134} \]
where the so called δ-function is defined by
\[ \delta(x - x') = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\, e^{ik(x - x')}. \tag{135} \]
The definition of a δ-function requires that it satisfy the integral relation
\[ g(x) = \int_{a}^{b} dx'\,\delta(x - x')\,g(x'), \quad \text{if } a < x < b, \tag{136} \]
but there are many forms for it other than that in 135. The δ-function is a generalized sort of function, a limit of a sequence of functions, e.g.,
\[ \delta(x - x') = \lim_{\epsilon\to\infty}\frac{1}{2}\,\epsilon\,e^{-\epsilon|x - x'|}. \tag{137} \]
Next we shall consider the lengths of $f(x)$ and its Fourier transform $\tilde{f}(k)$ in their respective Hilbert spaces.
Parseval's Theorem: The $L^2$ norms of the function $f(x)$ and its Fourier transform $\tilde{f}(k)$ are the same, i.e.,
\[ \int_{-\infty}^{\infty} dx\,|f(x)|^2 = \int_{-\infty}^{\infty} dk\,|\tilde{f}(k)|^2. \tag{138} \]
Proof:
\begin{align}
\int_{-\infty}^{\infty} dx\,|f(x)|^2 &= \int_{-\infty}^{\infty} dx\,f(x)\,f^{*}(x) \tag{139} \\
&= \int_{-\infty}^{\infty} dx\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} dk\,\tilde{f}(k)\,e^{ikx}\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} dk'\,\tilde{f}^{*}(k')\,e^{-ik'x} \\
&= \int_{-\infty}^{\infty} dk\int_{-\infty}^{\infty} dk'\,\tilde{f}(k)\,\tilde{f}^{*}(k')\,\frac{1}{2\pi}\int_{-\infty}^{\infty} dx\,e^{i(k - k')x} = \int_{-\infty}^{\infty} dk\,|\tilde{f}(k)|^2,
\end{align}
since
\[ \frac{1}{2\pi}\int_{-\infty}^{\infty} dx\,e^{i(k - k')x} = \delta(k - k'). \tag{140} \]
Note that according to the $L^2$ metric $f(x)$ and $\tilde{f}(k)$ have the same length.
Existence Theorem: The Fourier transform of $f(x)$, i.e., either $\bar{f}(k)$ or $\tilde{f}(k)$, exists if $f(x)$ satisfies the Dirichlet theorem on any finite interval and moreover
\[ \int_{-\infty}^{\infty} dx\,|f(x)| < M < \infty. \]
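Parseval's theorem can be verified numerically for the function of Example 2, where both norms should equal $1/(2a)$. A sketch (the finite grids introduce a small, controlled truncation error):

```python
import numpy as np

# Check Parseval's theorem (138) for f(x) = exp(-a x), x > 0 (Example 2):
# both L2 norms should approach 1/(2a).
a = 1.5
x = np.linspace(0.0, 30.0, 300001)
dx = x[1] - x[0]
fx2 = np.exp(-2 * a * x)                        # |f(x)|^2
norm_x = np.sum(fx2[1:] + fx2[:-1]) * dx / 2    # trapezoid rule

k = np.linspace(-500.0, 500.0, 1000001)
dk = k[1] - k[0]
ft2 = 1 / (2 * np.pi * (a**2 + k**2))           # |f~(k)|^2 from Example 2
norm_k = np.sum(ft2[1:] + ft2[:-1]) * dk / 2

exact = 1 / (2 * a)
print(abs(norm_x - exact) < 1e-3, abs(norm_k - exact) < 1e-3)   # True True
```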
6.2 Properties of the Fourier Transform
1. Linearity: Let $FT(g)$ be either $\bar{g}$ or $\tilde{g}$ and $\lambda, \mu$ two scalar numbers; then
\[ FT(\lambda f + \mu g) = \lambda\,FT(f) + \mu\,FT(g). \tag{141} \]
2. Fourier transform of derivatives:
\[ FT\!\left(\frac{d^n}{dx^n}f(x)\right) = (ik)^n\,FT(f(x)). \tag{142} \]
3. Convolution theorem: If the function $g(x)$ is a convolution of $f$ and $h$,
\[ g(x) = \int_{-\infty}^{\infty} dy\,f(x - y)\,h(y), \tag{143} \]
then the Fourier transform of $g$ satisfies
\begin{align}
\bar{g}(k) &= 2\pi\,\bar{f}(k)\,\bar{h}(k), \tag{144} \\
\tilde{g}(k) &= \sqrt{2\pi}\,\tilde{f}(k)\,\tilde{h}(k).
\end{align}
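The convolution theorem can be checked numerically on a grid; a minimal sketch using two Gaussians (my own illustrative choice of functions), with all transforms evaluated by direct quadrature:

```python
import numpy as np

# Numerical check of Eqs. (143)-(144): FT of a convolution of Gaussians
# equals sqrt(2*pi) times the product of the individual transforms.
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)
h = np.exp(-(x - 1)**2)

g = np.convolve(f, h, mode="same") * dx          # g(x) = int f(x-y) h(y) dy

def ft(vals, k):
    # symmetrized transform: (1/sqrt(2 pi)) * int dx vals(x) exp(-i k x)
    return np.sum(vals * np.exp(-1j * k * x)) * dx / np.sqrt(2 * np.pi)

k = 0.8
lhs = ft(g, k)
rhs = np.sqrt(2 * np.pi) * ft(f, k) * ft(h, k)
print(abs(lhs - rhs) < 1e-6)                     # True
```

The `mode="same"` slice of the discrete convolution lines up with the original symmetric grid, so `g` samples the continuum convolution at the same points as `f` and `h`.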
6.3 Fourier Series and Transforms in Higher Dimension
Both the Fourier series expansion and the Fourier transform are readily generalized to higher dimension. This is accomplished by forming products of one-dimensional Fourier basis functions and using these products as basis functions in higher dimensional spaces. Suppose x is an n-dimensional vector and the domain $D$ of the function $g(\mathbf{x})$ is rectangular, $-L_i/2 < x_i < L_i/2$ for $i = 1, \ldots, n$; then if $g(\mathbf{x})$ is a properly behaved function on $D$ it can be expanded as
\[ g(\mathbf{x}) = \sum_{j_1=-\infty}^{\infty}\cdots\sum_{j_n=-\infty}^{\infty} d_{j_1,\ldots,j_n}\, e^{i2\pi(j_1 x_1/L_1 + \cdots + j_n x_n/L_n)}, \tag{145} \]
where the expansion coefficients are given by
\[ d_{j_1,\ldots,j_n} = \frac{1}{L_1\cdots L_n}\int_D d\mathbf{x}\,g(\mathbf{x})\,e^{-i2\pi(j_1 x_1/L_1 + \cdots + j_n x_n/L_n)}. \tag{146} \]
This is the higher dimensional form of the Fourier series expansion. Removing the restriction to a finite domain we can define the Fourier transform as
\[ \bar{g}(\mathbf{k}) = \frac{1}{(2\pi)^n}\int d\mathbf{x}\,g(\mathbf{x})\,e^{-i\mathbf{k}\cdot\mathbf{x}}, \tag{147} \]
where both k and x are n-dimensional vectors. The function $g(\mathbf{x})$ can then be expressed as
\[ g(\mathbf{x}) = \int d\mathbf{k}\,\bar{g}(\mathbf{k})\,e^{i\mathbf{k}\cdot\mathbf{x}}. \tag{148} \]
The symmetrized Fourier transform can be obtained by noting that
\[ \tilde{g}(\mathbf{k}) = (2\pi)^{n/2}\,\bar{g}(\mathbf{k}), \tag{149} \]
in which case we have
\[ g(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}}\int d\mathbf{k}\,\tilde{g}(\mathbf{k})\,e^{i\mathbf{k}\cdot\mathbf{x}}. \tag{150} \]
6.4 Exercises on Fourier Transforms:
1. What is the Fourier transform of a δ-function? Note how in 137 the δ-function is defined as a limit
\[ \delta(x - x') = \lim_{\epsilon\to\infty} f_{\epsilon}(x - x') \]
with a particular choice of function $f_{\epsilon}(x - x')$. Can you think of another choice of function $f_{\epsilon}$ which will still generate the δ-function in the same limit? Show that your chosen form of $f_{\epsilon}$ is valid.
2. Prove the convolution theorem, i.e., starting from 143 show 144.
3. Calculate the Fourier transform of the function $f(x) = \sin px$ for $-L \le x \le L$, $f(x) = 0$ for $|x| > L$. Draw simple figures to show how the transform $\bar{f}(k)$ varies with $L$.
4. Prove Plancherel's formula,
\[ \int_{-\infty}^{\infty} dx\,f^{*}(x)\,g(x) = \int_{-\infty}^{\infty} dk\,\tilde{f}^{*}(k)\,\tilde{g}(k), \]
i.e., show that the scalar product ($L^2$) is preserved as we go from function to Fourier transform (symmetric) space.
Applications of Fourier Series and Transforms

7 Diffraction

Natural areas of application of Fourier series and transforms are quantum dynamics and electrodynamics, since the plane wave,

   E(t; \mathbf{x}) = E_0\, e^{i(\mathbf{k}\cdot\mathbf{x} - \omega t)},   (151)

is a solution of both the Schrödinger and the Maxwell equations, respectively, in the absence of a variable external field. The phenomenon of diffraction arises when particles are placed in the path of propagating plane waves representing either particles or electromagnetic radiation. We shall assume that we are dealing with radiation, remembering that our results apply equally well to plane wave particle motion. It is important to note that both the wave function in the case of particle motion and the electromagnetic field we shall discuss below are amplitudes. Thus the probability density or the light intensity is obtained by taking the absolute magnitude squared,

   I(t; \mathbf{x}) = |E(t; \mathbf{x})|^2.   (152)

For the plane wave above the intensity turns out to be a constant. We shall see below, however, that scattered light is not characterized by a constant intensity. The particles in a sample scatter the radiation in all directions, and a detector placed at a large distance measures an intensity of radiation which is sensitive to the orientation of the sample if the particles in the irradiated sample are ordered as in a crystal. Even if the particles are disordered, as in a glass or a liquid, the intensity tells us something about the structure of the sample.
7.1 Single Particle Scattering

If a small spherically symmetric particle is placed in the path of a propagating plane wave it will scatter some of the radiation, i.e., it will absorb and re-emit the radiation, becoming itself a radiative source. The emitted radiation will have spherical symmetry but it retains the wavelength and frequency of the original plane wave. This follows from our assumption that the scattering process is elastic, i.e., energy conserving. Thus the amplitude of the electromagnetic field will have the form

   E_j(t; \mathbf{x}) = E_j\, e^{i(k|\mathbf{x} - \mathbf{x}_j| - \omega t)} / |\mathbf{x} - \mathbf{x}_j|,   (153)

where $\mathbf{x}_j$ is the location of the particle, $\mathbf{k}$ is the wave vector of the plane wave, its length $k$ is related to the wavelength $\lambda$ by the relation $k = 2\pi/\lambda$, and $\omega$ is the frequency of the radiation. Note that $\omega = 2\pi\nu = kc$. The corresponding intensity of the radiation from a single particle is

   I_j(t; \mathbf{x}) = |E_j|^2 / |\mathbf{x} - \mathbf{x}_j|^2.   (154)

Note that the denominator is the square of the distance from the particle. It accounts for the fact that the same flux of radiation intensity is passing through a spherical surface which grows like $4\pi|\mathbf{x} - \mathbf{x}_j|^2$ with distance. Flux conservation then dictates the form of the intensity. The amplitude factor $E_j$ is given by

   E_j = a_j\, e^{i\mathbf{k}\cdot\mathbf{x}_j},   (155)

where the exponential phase factor accounts for the phase of the plane wave at the particle, and $a_j$ is a constant independent of the particle location $\mathbf{x}_j$. It will reflect the size of the particle (its cross section) and the process of absorption and re-emission.
7.2 Many-Particle Scattering - Crystal Diffraction

We now consider the case of scattering from a crystal, i.e., from a three-dimensional array of particles located at the positions $\{\mathbf{x}_j\}$. The scattered field $E_s(t; \mathbf{x})$ is now a sum over the fields of all the particles,

   E_s(t; \mathbf{x}) = \sum_j E_j(t; \mathbf{x}) = \sum_j \frac{a_j}{|\mathbf{x} - \mathbf{x}_j|}\, e^{i\mathbf{k}\cdot\mathbf{x}_j}\, e^{i(k|\mathbf{x} - \mathbf{x}_j| - \omega t)}.   (156)

If the particles are identical, as they might be in a crystal, then $a_j$ is independent of $j$, $a_j = a$.

We now apply an approximation valid when the irradiated sample is of much smaller length scale than the distance from sample to detector. Note that $\mathbf{x}$ is the vector taking us from the origin, which we shall place in the sample, to the detector. The vector $\mathbf{x}_j$ takes us from the sample origin to the specific particle $j$. These two vectors lie in a plane and the angle between them we shall call $\theta_j$. It is easy to see that if $x \gg x_j$ then to a good approximation we have

   |\mathbf{x} - \mathbf{x}_j| \approx r - r_j \cos\theta_j.   (157)

We then find the much simplified form for the scattered field,

   E_s(t; \mathbf{x}) = \frac{a}{r}\, e^{i(kr - \omega t)} \sum_j e^{-i\Delta\mathbf{k}\cdot\mathbf{x}_j},   (158)

where $\Delta\mathbf{k} = (k\mathbf{x}/r) - \mathbf{k}$ and we have assumed identical scatterers as in a monatomic crystal. Here we have neglected all terms of order $r_j/r$ or smaller.
7.2.1 Constructive and destructive interference:

The observed intensity at $\mathbf{x}$ is given by the absolute magnitude squared of the field,

   I(\mathbf{x}) = |E_s(t; \mathbf{x})|^2 = \frac{|a|^2}{|\mathbf{x}|^2}\, |A|^2,   (159)

where all dependence on particle positions is collected in the amplitude factor

   A = \sum_j e^{-i\Delta\mathbf{k}\cdot\mathbf{x}_j}.   (160)

In a crystal with a monatomic unit cell, for simplicity, the particle positions can be given by

   \mathbf{x}_j = m_j \mathbf{a} + n_j \mathbf{b} + p_j \mathbf{c},   (161)

where $\mathbf{a}, \mathbf{b}, \mathbf{c}$ are primitive translational vectors and $m_j, n_j, p_j$ are integers. For simplicity we assume that the crystal sample is described by the integers $m = 0, \ldots, M$; $n = 0, \ldots, N$; $p = 0, \ldots, P$. We then get

   A = \sum_{m=0}^{M} e^{-im\Delta\mathbf{k}\cdot\mathbf{a}} \sum_{n=0}^{N} e^{-in\Delta\mathbf{k}\cdot\mathbf{b}} \sum_{p=0}^{P} e^{-ip\Delta\mathbf{k}\cdot\mathbf{c}}   (162)

     = \frac{1 - e^{-i(M+1)\Delta\mathbf{k}\cdot\mathbf{a}}}{1 - e^{-i\Delta\mathbf{k}\cdot\mathbf{a}}}\; \frac{1 - e^{-i(N+1)\Delta\mathbf{k}\cdot\mathbf{b}}}{1 - e^{-i\Delta\mathbf{k}\cdot\mathbf{b}}}\; \frac{1 - e^{-i(P+1)\Delta\mathbf{k}\cdot\mathbf{c}}}{1 - e^{-i\Delta\mathbf{k}\cdot\mathbf{c}}}.
But note now that

   \sum_{m=0}^{M} e^{-im\Delta\mathbf{k}\cdot\mathbf{a}} = O(M^{1/2}), generally,   (163)
     = M + 1, for \Delta\mathbf{k}\cdot\mathbf{a} = 2\pi q, q = integer.

Thus if $M, N, P$ are large integers, as they would be for a typical crystal sample in a plane wave beam of macroscopic width, then we would find strong intensity peaks when $\mathbf{x}$ and $\mathbf{k}$ are such that

   \mathbf{a}\cdot\Delta\mathbf{k} = 2\pi q, for q = integer,   (164)
   \mathbf{b}\cdot\Delta\mathbf{k} = 2\pi r, for r = integer,
   \mathbf{c}\cdot\Delta\mathbf{k} = 2\pi s, for s = integer.

These are called Laue's equations. When they are all satisfied we get a strong peak in the intensity. For which $\Delta\mathbf{k}$-vectors will the Laue conditions all be satisfied? They will be satisfied for $\Delta\mathbf{k}$ of the form

   \Delta\mathbf{k} = q\hat{\mathbf{A}} + r\hat{\mathbf{B}} + s\hat{\mathbf{C}},   (165)

where

   \hat{\mathbf{A}} = 2\pi\, (\mathbf{b}\times\mathbf{c}) / [\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})],   (166)
   \hat{\mathbf{B}} = 2\pi\, (\mathbf{c}\times\mathbf{a}) / [\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})],
   \hat{\mathbf{C}} = 2\pi\, (\mathbf{a}\times\mathbf{b}) / [\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})].

Note that $\hat{\mathbf{A}}, \hat{\mathbf{B}}, \hat{\mathbf{C}}$ serve as primitive translational vectors in k-space and they determine the primitive translational vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$ and thereby the crystal structure. The vectors $\hat{\mathbf{A}}, \hat{\mathbf{B}}, \hat{\mathbf{C}}$ are defined with the help of the vector product concept, e.g., $\mathbf{b}\times\mathbf{c}$, which is a vector orthogonal to $\mathbf{b}$ and $\mathbf{c}$ and of length $bc\sin\theta$, where $\theta$ is the angle between the vectors. The denominator is the volume of the parallelepiped formed by the vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$, possibly with a negative sign. Thus we can enter the vectors in any order without affecting the Laue conditions. See the Beta Handbook Section 3.4.
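The sharpness of the Laue peaks can be illustrated numerically. The sketch below (numpy is an assumption, and $M = 50$ is an arbitrary illustrative crystal size) evaluates one factor of the product in Eq. (162) and compares the peak height of Eq. (163) with the off-peak background.

```python
import numpy as np

# One factor of Eq. (162): S(theta) = sum_{m=0}^{M} exp(-i m theta),
# with theta = Delta k . a.  |S|^2 reaches (M+1)^2 at theta = 2*pi*q
# (the Laue condition, Eq. 163) and is far smaller in between.
M = 50
m = np.arange(M + 1)

theta_off = np.linspace(0.15, 2 * np.pi - 0.15, 2000)  # grid avoiding the peaks
I_off = np.abs(np.exp(-1j * np.outer(m, theta_off)).sum(axis=0)) ** 2
I_peak = np.abs(np.exp(-1j * m * 0.0).sum()) ** 2       # theta = 0 satisfies Laue

assert np.isclose(I_peak, (M + 1) ** 2)  # peak height (M+1)^2
assert I_off.max() < 0.1 * I_peak        # background much weaker
```

For a macroscopic crystal $M$ is of order $10^7$, so the peaks are sharper still.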
Example 1 - Simple cubic crystal: The simple cubic crystal has orthogonal primitive translational vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$ of equal length. Let us take this length, i.e., the separation between neighboring atoms, to be 4 Å. We can also let the directions of $\mathbf{a}, \mathbf{b}, \mathbf{c}$ be the three axial directions in our Cartesian coordinate system, i.e., $\hat{\mathbf{x}}, \hat{\mathbf{y}}, \hat{\mathbf{z}}$. We then recall that $\hat{\mathbf{x}}\times\hat{\mathbf{y}} = \hat{\mathbf{z}}$, $\hat{\mathbf{z}}\times\hat{\mathbf{x}} = \hat{\mathbf{y}}$, $\hat{\mathbf{y}}\times\hat{\mathbf{z}} = \hat{\mathbf{x}}$. It follows that

   \hat{\mathbf{A}} = \frac{2\pi}{4}\hat{\mathbf{x}}, \quad \hat{\mathbf{B}} = \frac{2\pi}{4}\hat{\mathbf{y}}, \quad \hat{\mathbf{C}} = \frac{2\pi}{4}\hat{\mathbf{z}} \quad [in Å^{-1}].

Peak intensities occur according to the Laue conditions when the wavevector shift $\Delta\mathbf{k}$ satisfies

   \Delta\mathbf{k} = q\frac{2\pi}{4}\hat{\mathbf{x}} + r\frac{2\pi}{4}\hat{\mathbf{y}} + s\frac{2\pi}{4}\hat{\mathbf{z}},

where $q, r, s$ are integers. Note that the length of $\Delta\mathbf{k}$ is

   \Delta k = \frac{2\pi}{4}(q^2 + r^2 + s^2)^{1/2}.

Thus the smallest $\Delta k$, not including the forward scattered light at $q = r = s = 0$ which is submerged in the unscattered light, is $2\pi/4$ Å$^{-1}$. Since $\Delta\mathbf{k}$ is obtained by a rotation of the wavevector $\mathbf{k}$ of the incident light, it follows that $\Delta k$ must be less than $2k$. If $2k < 2\pi/4$ Å$^{-1}$, or $k < \pi/4$ Å$^{-1}$, then no Laue peak can be observed. Recalling that $k = 2\pi/\lambda$, where $\lambda$ is the wavelength of the light, we see that we should have $2\pi/\lambda > \pi/4$, or $\lambda < 8$ Å, in order for Laue peaks to be observed. Thus the wavelength should be less than twice the particle spacing in the lattice. It might be convenient to take $\lambda$ in our case to be about 0.5 Å or so. This would ensure that a number of peaks would be observable but not so many as to crowd the resolution of the detector.
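The reciprocal vectors of Eq. (166) are easy to compute for a general cell. A minimal sketch (numpy assumed) checks the construction against the simple cubic result above:

```python
import numpy as np

# Reciprocal (k-space) primitive vectors from Eq. (166), verified for the
# simple cubic lattice of Example 1 (spacing d = 4 Angstrom).
def reciprocal_vectors(a, b, c):
    vol = np.dot(a, np.cross(b, c))   # parallelepiped volume a.(b x c)
    A = 2 * np.pi * np.cross(b, c) / vol
    B = 2 * np.pi * np.cross(c, a) / vol
    C = 2 * np.pi * np.cross(a, b) / vol
    return A, B, C

d = 4.0
a, b, c = d * np.eye(3)               # cubic cell along the Cartesian axes
A, B, C = reciprocal_vectors(a, b, c)

assert np.allclose(A, [2 * np.pi / d, 0, 0])   # A = (2*pi/4) x_hat
assert np.allclose(B, [0, 2 * np.pi / d, 0])
assert np.allclose(C, [0, 0, 2 * np.pi / d])
assert np.isclose(np.dot(a, A), 2 * np.pi)     # Laue condition a.A = 2*pi
```

The same function applies to non-orthogonal cells, where the reciprocal vectors are no longer parallel to the direct ones.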
7.2.2 Scattering from a continuous medium.

In the case of a fluid the particle positions are not nicely ordered on a lattice but more or less randomly distributed. The particle positions may then be described by a particle density $\rho(\mathbf{x})$. If the particle density is $\rho(\mathbf{x})$ we can obtain the scattered field as

   E_s(t; \mathbf{x}) = \frac{a}{|\mathbf{x}|}\, e^{i(k|\mathbf{x}| - \omega t)} \int d\mathbf{x}'\, \rho(\mathbf{x}')\, e^{-i\Delta\mathbf{k}\cdot\mathbf{x}'} = \frac{a}{|\mathbf{x}|}\, e^{i(k|\mathbf{x}| - \omega t)}\, A(\Delta\mathbf{k}),   (167)

where

   A(\Delta\mathbf{k}) = \int d\mathbf{x}'\, \rho(\mathbf{x}')\, e^{-i\Delta\mathbf{k}\cdot\mathbf{x}'} = (2\pi)^{3/2}\, \bar{\rho}(\Delta\mathbf{k}).   (168)

Note that the amplitude factor $A(\Delta\mathbf{k})$ is directly proportional to the Fourier transform of the particle density $\rho(\mathbf{x})$. We can generally only measure intensity directly, i.e.,

   I(\Delta\mathbf{k}) = |A(\Delta\mathbf{k})|^2\, |a|^2 / |\mathbf{x}|^2.   (169)

While $A(\Delta\mathbf{k})$, if it is known for all $\Delta\mathbf{k}$, determines $\rho(\mathbf{x})$, the intensity $I(\Delta\mathbf{k})$ contains less information. Note that

   I(\Delta\mathbf{k})\, |\mathbf{x}|^2 / |a|^2 = A(\Delta\mathbf{k})\, A^*(\Delta\mathbf{k})   (170)
     = \int d\mathbf{u}\, \rho(\mathbf{u})\, e^{-i\Delta\mathbf{k}\cdot\mathbf{u}} \int d\mathbf{u}'\, \rho(\mathbf{u}')\, e^{i\Delta\mathbf{k}\cdot\mathbf{u}'}
     = \int d\mathbf{x} \left[ \int d\mathbf{u}'\, \rho(\mathbf{u}')\, \rho(\mathbf{u}' + \mathbf{x}) \right] e^{-i\Delta\mathbf{k}\cdot\mathbf{x}}
     = (2\pi)^{3/2}\, \bar{D}(\Delta\mathbf{k}),

where the spatial correlation function $D$ is defined by

   D(\mathbf{x}) = \int d\mathbf{u}'\, \rho(\mathbf{u}')\, \rho(\mathbf{u}' + \mathbf{x}),   (171)

and we have used the variable transformation $\mathbf{x} = \mathbf{u} - \mathbf{u}'$. Thus the scattered intensity can tell us about the correlations in the particle positions of a disordered fluid.
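The relation (170)-(171) between intensity and density correlations has an exact discrete counterpart, which can be checked directly. The sketch below (numpy assumed) uses a one-dimensional density on a periodic grid:

```python
import numpy as np

# Discrete check of Eqs. (170)-(171): |A(k)|^2 equals the transform of the
# autocorrelation of the density.  Circular boundaries make the discrete
# Fourier transform apply exactly.
rng = np.random.default_rng(1)
rho = rng.random(32)
N = len(rho)

# D[x] = sum_u rho[u] rho[u + x]  (circular autocorrelation, cf. Eq. 171)
D = np.array([np.sum(rho * np.roll(rho, -x)) for x in range(N)])

A = np.fft.fft(rho)                      # discrete analogue of A(k), Eq. (168)
assert np.allclose(np.fft.fft(D), np.abs(A) ** 2)
```

This is the discrete form of the Wiener-Khinchin relation underlying the text's statement that scattered intensity measures density correlations.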
7.3 Exercises on Diffraction

1. Suppose you have a linear molecule consisting of a very large number of evenly spaced point particles on a straight line. Let the molecule be held fixed in a position orthogonal to an incoming plane wave field of wave vector $\mathbf{k}$. What sort of diffraction pattern would be observed by a detector? How would you propose to determine the particle spacing $d$ from this pattern? What $|\mathbf{k}|$-value would be most appropriate for the plane wave field?

2. Consider an adsorbed monolayer of atoms on a perfectly flat surface. Assume that the surface does not scatter light - only the adsorbed monolayer of atoms scatters light. a) Suppose that the monolayer forms a regular two-dimensional crystal lattice. What Laue conditions would apply to a diffraction experiment done to determine the crystal structure? b) If the adatoms were instead in a disordered fluid state, what could be learnt about its structure by the diffraction experiment?
8 Fourier Spectroscopy

Our purpose here is to determine the frequency spectrum of a light source, i.e., the intensity $I(\omega)$ as a function of the frequency $\omega$. The method is to use the interference pattern arising when a plane wave light beam is split into two parts traveling paths of length $L$ and $L + D$ before being recombined at the detector. The interference pattern as a function of $D$ is in Fourier spectroscopy used to reveal the frequency spectrum. Thus we shall consider how to determine $I(\omega)$ from the intensity as a function of $D$, $\hat{I}(D)$.
8.1 Monochromatic Light Source

Let us first note that a thin beam of light can be passed by a path determined by mirrors from light source to detector. If the path length is $L$ then the amplitude of the electromagnetic field at the detector is given by

   E_L(t; \mathbf{x}) = E_0\, e^{i(kL - \omega t)}.   (172)

Here we have assumed that we have a point source and an infinitely sharp beam. If the beam is not sharp there will be a distribution of path lengths, so that the amplitude at the detector is instead given by

   E_L(t; \mathbf{x}) = E_0 \int dL\, p(L)\, e^{i(kL - \omega t)},   (173)

where $p(L)$ is a probability density satisfying

   \int dL\, p(L) = 1.   (174)

We shall always assume that in the absence of a beam splitter our beam is sharp.

Suppose now that we introduce a beam splitter which splits the beam into two parts of equal amplitude but travelling different path lengths $L_1$ and $L_2$, respectively. The difference in path length is $D$, i.e., $L_1 = L$, $L_2 = L + D$. The amplitude at the detector will then be the sum of the contributions from the two paths,

   E_D(t) = \frac{1}{2}E_0\, e^{i(kL_1 - \omega t)} + \frac{1}{2}E_0\, e^{i(kL_2 - \omega t)}   (175)
     = \frac{1}{2}E_0\, e^{i(kL - \omega t)}\, (1 + e^{ikD}).
The corresponding intensity is

   I_D(t; D) = |E_D(t; D)|^2 = \frac{1}{4}|E_0|^2\, |1 + e^{ikD}|^2   (176)
     = \frac{1}{2}|E_0|^2 (1 + \cos kD).

Noting that for radiation propagating in vacuum we have $\omega = kc$, where $c$ is the velocity of light, we can write

   I_D(t; D) = \frac{1}{2}|E_0|^2 (1 + \cos\omega\tau),   (177)

where $\tau$ is the difference between the times of propagation along the two paths, $\tau = D/c$. Thus the intensity shows a sinusoidal variation with $D$ or $\tau$, as shown below in a plot of $y = 1 + \cos x$.

[Figure: plot of $y = 1 + \cos x$ for $-10 \le x \le 10$; $y$ oscillates between 0 and 2.]

If we identify $x$ in this plot with $\tau$ then this would be the shape of the intensity variation for $\omega = 1$. In general, the separation between neighbouring peaks would be $2\pi/\omega$. This allows the frequency of the radiation to be identified from the intensity as a function of $\tau$.
8.1.1 Several frequencies:

If the radiation is made up of intensity at a number of well-defined frequencies then the amplitude without the beam splitter becomes

   E_L(t) = \sum_j E_j\, e^{i(k_j L - \omega_j t)},   (178)

and the corresponding intensity is

   I_L(t) = \sum_m \sum_n E_m E_n^*\, e^{i((k_m - k_n)L - (\omega_m - \omega_n)t)}   (179)
     = \sum_m |E_m|^2 + \sum_m \sum_{n\ne m} E_m E_n^* \exp(i((k_m - k_n)L - (\omega_m - \omega_n)t)).

But note that the latter term above gives rise to a fluctuation in intensity in time. The long time average of this fluctuation vanishes, i.e.,

   \bar{I}_L = \lim_{T\to\infty} \frac{1}{T} \int_0^T dt\, I(t) = \sum_m |E_m|^2.   (180)
After introduction of the beam splitter we get

   E_D(t) = \frac{1}{2} \sum_j E_j\, e^{i(k_j L - \omega_j t)}\, (1 + e^{ik_j D}),   (181)

and the corresponding intensity is

   I(t) = \frac{1}{4} \sum_m \sum_n E_m E_n^*\, e^{i((k_m - k_n)L - (\omega_m - \omega_n)t)}\, (1 + e^{ik_m D})(1 + e^{-ik_n D}).   (182)

Taking the long time average as above, only the terms with $m = n$ survive, and with $k_m D = \omega_m\tau$ we find

   \bar{I}(\tau) = \frac{1}{2} \sum_m |E_m|^2 (1 + \cos\omega_m\tau).   (183)
We shall now use a method which has the character of a "mathematical form of filtering". Suppose now that $\bar{I}(\tau)$ has been measured over the interval $0 < \tau < R < \infty$. Let $B(R, \omega)$ be defined by

   B(R, \omega) = \int_0^R ds\, \cos(\omega s)\, \bar{I}(s),   (184)

where $s = \tau$; then we find that with $I_n = |E_n|^2$ we have

   B(R, \omega) = \frac{1}{2} \sum_n I_n \int_0^R ds\, \cos\omega s\, (1 + \cos\omega_n s)   (185)
     = \frac{1}{2} \sum_n I_n \left[ \frac{\sin\omega R}{\omega} + \frac{\sin(\omega + \omega_n)R}{2(\omega + \omega_n)} + \frac{\sin(\omega - \omega_n)R}{2(\omega - \omega_n)} \right].

In order to obtain this result it is convenient to use the identity

   \cos\omega s\, \cos\omega_n s = \frac{1}{2}\left( \cos(\omega + \omega_n)s + \cos(\omega - \omega_n)s \right).   (186)

Note now that for $\omega = \omega_n$ we have

   \frac{\sin(\omega - \omega_n)R}{\omega - \omega_n} = R,   (187)

which, if $R$ is sufficiently large, gives rise to a blip in $B(R, \omega)$ at $\omega = \omega_n$,

   B(R, \omega_n) \approx \frac{1}{4} I_n R.

This maximum in $B(R, \omega)$ can be used to identify the frequencies $\omega_n$ present and the corresponding intensities $I_n = |E_n|^2$.
Example 2: Suppose we have a light source of two spectral lines with frequencies $\omega = 1$ and $\omega = 3$ and unit intensities. Draw the cosine transform $B(R, \omega)$ for $R = 5$ and 10. Note that $B(R, \omega)$ for $R = 5$ takes the form

   B(R, \omega) = \frac{1}{2}\left( 2\frac{\sin 5\omega}{\omega} + \frac{\sin 5(\omega + 1)}{2(\omega + 1)} + \frac{\sin 5(\omega - 1)}{2(\omega - 1)} + \frac{\sin 5(\omega + 3)}{2(\omega + 3)} + \frac{\sin 5(\omega - 3)}{2(\omega - 3)} \right)

[Figure: $B(5, \omega)$ plotted for $-5 \le \omega \le 5$, showing broad peaks near $\omega = \pm 1$ and $\pm 3$.]

and the shape shown in the figure above. If $R = 10$ we get for $B(R, \omega)$

   B(R, \omega) = \frac{1}{2}\left( 2\frac{\sin 10\omega}{\omega} + \frac{\sin 10(\omega + 1)}{2(\omega + 1)} + \frac{\sin 10(\omega - 1)}{2(\omega - 1)} + \frac{\sin 10(\omega + 3)}{2(\omega + 3)} + \frac{\sin 10(\omega - 3)}{2(\omega - 3)} \right)

[Figure: $B(10, \omega)$ plotted for $-5 \le \omega \le 5$.]

and the shape as shown above. Note the sharper positive peaks at $\omega = 1$ and 3.
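The cosine transform of Eq. (184) for this two-line source can be evaluated numerically as well. A minimal sketch (numpy assumed; the trapezoidal quadrature is an implementation choice, not part of the text):

```python
import numpy as np

def B(R, w, lines=(1.0, 3.0)):
    """Cosine transform of Eq. (184) for unit-intensity lines (Example 2)."""
    s = np.linspace(0.0, R, 20001)
    I_bar = 0.5 * sum(1.0 + np.cos(wn * s) for wn in lines)  # Eq. (183)
    f = np.cos(w * s) * I_bar
    return np.sum((f[:-1] + f[1:]) / 2) * (s[1] - s[0])      # trapezoidal rule

R = 10.0
# at a line frequency the peak is roughly I_n * R / 4 = 2.5
assert B(R, 3.0) > 2.0
# between the lines (and away from w = 0) the transform is much smaller
assert abs(B(R, 2.0)) < 1.0
```

Increasing `R` sharpens the peaks at $\omega = 1$ and 3 relative to the background, in line with the figures.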
8.2 Exercises on Fourier Spectroscopy:

1. Consider the interpretation of the function $B(R, \omega)$ as derived above. One problem in determining the frequencies and intensities from the peaks of $B(R, \omega)$ is to make sure that a peak is not just a fluctuation in the background, i.e., not due to $\omega = \omega_n$ as suggested. How could one guard against this possibility? Propose a method that as far as possible eliminates background peaks from a set of chosen high peaks.

2. In the case of a continuous light source we have

   \bar{I}(s) = \frac{1}{2} \int_0^\infty d\omega\, p(\omega)(1 + \cos\omega s),   (188)

where $p(\omega)$ is the light intensity as a function of the frequency. Obtain the cosine transform $B(R, \omega)$ for this type of light. Supposing that we can obtain $\bar{I}(s)$ over the interval $0 < s < R < \infty$, suggest a way by which $p(\omega)$ could be obtained at least approximately from $B(R, \omega)$. You may use the fact that

   \lim_{R\to\infty} \frac{1}{\pi}\, \frac{\sin((x - x_0)R)}{x - x_0} = \delta(x - x_0).
9 Laplace Transforms and Applications

9.1 Derivation and Properties

With the help of Fourier transforms we can work with functions on the whole real axis, but there are still many functions that we cannot apply the Fourier transform to, due to the requirement of absolute integrability, i.e., $\int_{-\infty}^{\infty} dx\, |f(x)| < M < \infty$. Thus we cannot define the Fourier transform for a constant $\lambda$ or for the powers $x^n$, $n \ge -1$. For this and other reasons we shall continue to develop the Fourier transform into the Laplace transform.

Suppose we are interested in $f(x)$ on the interval $[0, \infty)$. In order to be able to apply the Fourier transform we shall first apply two operations to the function $f(x)$:

- Multiply it by the Heaviside step function $H(x)$ defined by

   H(x) = 0, for x < 0,   (189)
     = 1, for x \ge 0.

- Multiply it by an exponential function $\exp(-\alpha x)$.

Thus the function has now been changed according to

   f(x) \to H(x)\, e^{-\alpha x} f(x) = g(\alpha, x).   (190)

The new function $g(\alpha, x)$ can be Fourier transformed if $f(x)$ is of exponential order, i.e., there is an $\alpha > 0$ such that

   \lim_{x\to\infty} e^{-\alpha x} f(x) = 0,   (191)

and $\alpha$ is picked sufficiently large. The Fourier transform of $g(\alpha, x)$ is

   \tilde{g}(\alpha, k) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dx\, H(x)\, e^{-(\alpha + ik)x} f(x) = \frac{1}{2\pi} \int_0^\infty dx\, e^{-(\alpha + ik)x} f(x),   (192)

and the corresponding expression for the function $g(\alpha, x)$ is

   g(\alpha, x) = \int_{-\infty}^{\infty} dk\, \tilde{g}(\alpha, k)\, e^{ikx}.   (193)

By multiplication of this last equation by $e^{\alpha x}$ we can recover $f(x)$,

   f(x) = \int_{-\infty}^{\infty} dk\, \tilde{g}(\alpha, k)\, e^{(\alpha + ik)x}, for x > 0.   (194)

Since we are dealing with complex numbers anyway, we shall take the liberty to define the complex variable $s = \alpha + ik$. Moreover, we define the Laplace transform $\hat{f}(s)$ as $2\pi\tilde{g}(s)$, i.e.,

   \hat{f}(s) = \int_0^\infty dx\, e^{-sx} f(x), for Re(s) = \alpha large enough.   (195)

The corresponding expression for $f(x)$ can be obtained as

   f(x) = \frac{1}{2\pi i} \int_{\alpha - i\infty}^{\alpha + i\infty} ds\, e^{sx}\, \hat{f}(s), for x > 0.   (196)

Here we have changed the variable of integration from $k$ to $s$ and noted that if we integrate in the complex plane along a line parallel to the imaginary axis from $\alpha - i\infty$ to $\alpha + i\infty$ then $dk = ds/i$. The form of the inverse Laplace transform in (196) invites the use of the residue theorem (see the Beta Handbook, Section 14.2). However, it is more common to do the inversion directly from a table of Laplace transforms, perhaps with the aid of some of the many simplifying properties of the Laplace transform described below.
9.1.1 Properties of the Laplace transform (LP = Laplace transform):

1. Linearity: LP(\lambda f(x) + \mu h(x)) = \lambda\, LP(f(x)) + \mu\, LP(h(x)).

2. Derivative theorem: LP\left( \frac{d}{dx} f(x) \right) = s\, LP(f) - f(0).

3. Integral theorem: LP\left( \int_0^x dt\, f(t) \right) = \frac{1}{s}\, LP(f).

4. Convolution theorem: If $g(x) = \int_0^x dt\, f(t)\, h(x - t)$ then LP(g) = LP(f)\, LP(h).

5. Exponential shift theorem: LP(e^{-\mu x} f(x)) = \hat{f}(s + \mu).

The linearity follows directly from the linearity of the Fourier transform. The derivative and the integral theorems can be proven by use of partial integration, and the exponential shift theorem follows by inspection. Let us have a look at the convolution theorem.
Proof of the convolution theorem:

   \int_0^\infty dx\, e^{-sx} \int_0^x dt\, f(t)\, h(x - t) = \int_0^\infty dt \int_t^\infty dx\, e^{-sx} f(t)\, h(x - t)   (197)
     = \int_0^\infty dt \int_t^\infty dx\, e^{-st} f(t)\, e^{-s(x - t)} h(x - t)
     = \int_0^\infty dt\, e^{-st} f(t) \int_t^\infty dx\, e^{-s(x - t)} h(x - t)
     = \int_0^\infty dt\, e^{-st} f(t) \int_0^\infty dr\, e^{-sr} h(r) = \hat{f}(s)\, \hat{h}(s).

The important step is to change the order of integration and realize how the limits of integration change. The variable change $r = x - t$ then completes the proof.
9.1.2 Small table of Laplace transforms:

   Function of x               Laplace transform
   1                           1/s
   e^{-\mu x}                  1/(s + \mu)
   \cos\mu x                   s/(s^2 + \mu^2)
   \sin\mu x                   \mu/(s^2 + \mu^2)
   x^n, n = 0, 1, 2, ...       n!/s^{n+1}
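The table entries can be checked symbolically. The sketch below uses sympy (an assumption; it is not part of the text) to verify each row:

```python
import sympy as sp

# Symbolic verification of the small table of Laplace transforms.
x, s, mu = sp.symbols('x s mu', positive=True)

assert sp.laplace_transform(sp.S(1), x, s, noconds=True) == 1 / s
assert sp.simplify(sp.laplace_transform(sp.exp(-mu * x), x, s, noconds=True)
                   - 1 / (s + mu)) == 0
assert sp.simplify(sp.laplace_transform(sp.cos(mu * x), x, s, noconds=True)
                   - s / (s**2 + mu**2)) == 0
assert sp.simplify(sp.laplace_transform(sp.sin(mu * x), x, s, noconds=True)
                   - mu / (s**2 + mu**2)) == 0
# x^n entry, checked for n = 3: 3!/s^4
assert sp.simplify(sp.laplace_transform(x**3, x, s, noconds=True)
                   - sp.factorial(3) / s**4) == 0
```

The `noconds=True` flag suppresses sympy's convergence conditions, which here amount to Re(s) being large enough, as in Eq. (195).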
9.2 Applications of Laplace Transforms

The Laplace transform finds many applications in chemistry. The most common application is probably to linear differential or integro-differential equations, where one makes use of the derivative, integral and convolution theorems to obtain algebraic equations for the transforms themselves, without derivatives or integrals. Let us consider some examples.

Example 1 - Unimolecular decomposition: Suppose we have a chemical reaction A → products, which is irreversible and proceeds according to a unimolecular rate law, i.e., if the concentration of A at time $t$ is $c(t)$ then the time development satisfies

   \frac{d}{dt} c(t) = -k c(t),   (198)

where $k$ is the unimolecular rate coefficient. Find the time development of $c$ from the initial value $c(0)$. This is a linear first order differential equation. We want $c(t)$ for $t > 0$, so we apply the Laplace transform to both sides of the equation,

   s\hat{c}(s) - c(0) = -k\hat{c}(s).   (199)

Here we have used the derivative theorem. This equation for the Laplace transform can be solved to yield

   \hat{c}(s) = \frac{c(0)}{s + k}.   (200)

From our small table of Laplace transforms it follows that

   c(t) = c(0)\, e^{-kt}.   (201)
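The inversion from Eq. (200) to Eq. (201) can also be done symbolically. A minimal sketch, again assuming sympy:

```python
import sympy as sp

# Inverting the transform of Eq. (200) to recover Eq. (201).
t, s = sp.symbols('t s', positive=True)
k, c0 = sp.symbols('k c0', positive=True)

c_hat = c0 / (s + k)                               # Eq. (200)
c_t = sp.inverse_laplace_transform(c_hat, s, t)
assert sp.simplify(c_t - c0 * sp.exp(-k * t)) == 0  # Eq. (201)
```

Declaring `t` positive lets sympy drop the Heaviside factor that the inversion formally carries for $t < 0$.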
Example 2 - Coupled chemical reactions: Consider a set of coupled chemical reactions of the type A → B, B → C, C → product. The corresponding time dependent concentrations are $c_A(t)$, $c_B(t)$, $c_C(t)$ and the rate equations are

   \frac{d}{dt} c_A(t) = -k_1 c_A(t),   (202)
   \frac{d}{dt} c_B(t) = k_1 c_A(t) - k_2 c_B(t),
   \frac{d}{dt} c_C(t) = k_2 c_B(t) - k_3 c_C(t).

These are coupled linear first order equations. We apply the Laplace transform to both sides of all three equations to obtain

   s\hat{c}_A(s) - c_A(0) = -k_1 \hat{c}_A(s),   (203)
   s\hat{c}_B(s) - c_B(0) = k_1 \hat{c}_A(s) - k_2 \hat{c}_B(s),
   s\hat{c}_C(s) - c_C(0) = k_2 \hat{c}_B(s) - k_3 \hat{c}_C(s).

The first equation can be solved as in the example above. We get

   \hat{c}_A(s) = \frac{c_A(0)}{s + k_1}, and c_A(t) = c_A(0)\, e^{-k_1 t}.   (204)

Insertion in the second equation yields

   \hat{c}_B(s) = \frac{c_B(0)}{s + k_2} + \frac{k_1 c_A(0)}{(s + k_1)(s + k_2)}.   (205)
In order to find this transform by linear combinations of transforms in our small table we note that

   \frac{1}{(s + k_1)(s + k_2)} = \frac{1}{k_1 - k_2} \left( \frac{1}{s + k_2} - \frac{1}{s + k_1} \right).   (206)

Now it is straightforward to find the inverse Laplace transform. We get

   c_B(t) = c_B(0)\, e^{-k_2 t} + \frac{k_1 c_A(0)}{k_1 - k_2} \left( e^{-k_2 t} - e^{-k_1 t} \right).   (207)
Finally, we solve for $\hat{c}_C(s)$ and find

   \hat{c}_C(s) = \frac{c_C(0)}{s + k_3} + \frac{k_2 \hat{c}_B(s)}{s + k_3}   (208)

     = \frac{c_C(0)}{s + k_3} + \frac{k_2}{s + k_3} \left( \frac{c_B(0)}{s + k_2} + \frac{k_1 c_A(0)}{(s + k_1)(s + k_2)} \right)

     = \frac{c_C(0)}{s + k_3} + \frac{k_2}{s + k_3} \left( \frac{c_B(0)}{s + k_2} + \frac{k_1 c_A(0)}{k_1 - k_2} \left( \frac{1}{s + k_2} - \frac{1}{s + k_1} \right) \right)

     = \frac{c_C(0)}{s + k_3} + \frac{k_2 c_B(0)}{k_2 - k_3} \left( \frac{1}{s + k_3} - \frac{1}{s + k_2} \right) + \frac{k_1 k_2 c_A(0)}{k_1 - k_2} \left[ \frac{1}{k_2 - k_3} \left( \frac{1}{s + k_3} - \frac{1}{s + k_2} \right) - \frac{1}{k_1 - k_3} \left( \frac{1}{s + k_3} - \frac{1}{s + k_1} \right) \right].
At this point the transform is of a form such that we can immediately identify the terms in our table. We find that the concentration of species C decays by a triple exponential time dependence, i.e.,

   c_C(t) = a_1 e^{-k_1 t} + a_2 e^{-k_2 t} + a_3 e^{-k_3 t},   (209)

   a_1 = \frac{k_1 k_2 c_A(0)}{(k_1 - k_2)(k_1 - k_3)},

   a_2 = \frac{k_2 c_B(0)}{k_3 - k_2} + \frac{k_1 k_2 c_A(0)}{(k_1 - k_2)(k_3 - k_2)},

   a_3 = c_C(0) + \frac{k_2 c_B(0)}{k_2 - k_3} + \frac{k_1 k_2 c_A(0)}{k_1 - k_2} \left( \frac{1}{k_2 - k_3} - \frac{1}{k_1 - k_3} \right)
     = c_C(0) + \frac{k_2 c_B(0)}{k_2 - k_3} + \frac{k_1 k_2 c_A(0)}{(k_2 - k_3)(k_1 - k_3)}.   (210)
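The closed-form result (209)-(210) can be cross-checked against a direct numerical integration of the rate equations (202). The sketch below assumes scipy and numpy, and the rate constants and initial concentrations are arbitrary illustrative values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Check of Eqs. (209)-(210) against numerical integration of Eqs. (202).
k1, k2, k3 = 1.0, 0.5, 0.25          # illustrative rate coefficients
cA0, cB0, cC0 = 1.0, 0.2, 0.0        # illustrative initial concentrations

def rhs(t, c):
    cA, cB, cC = c
    return [-k1 * cA, k1 * cA - k2 * cB, k2 * cB - k3 * cC]

sol = solve_ivp(rhs, (0.0, 4.0), [cA0, cB0, cC0],
                rtol=1e-10, atol=1e-12, dense_output=True)

# coefficients of the triple exponential, Eq. (210)
a1 = k1 * k2 * cA0 / ((k1 - k2) * (k1 - k3))
a2 = k2 * cB0 / (k3 - k2) + k1 * k2 * cA0 / ((k1 - k2) * (k3 - k2))
a3 = cC0 + k2 * cB0 / (k2 - k3) + k1 * k2 * cA0 / ((k2 - k3) * (k1 - k3))

t = 3.0
cC_exact = a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) + a3 * np.exp(-k3 * t)
assert abs(sol.sol(t)[2] - cC_exact) < 1e-7
```

The formula assumes the three rate coefficients are distinct; when two coincide, the relevant partial fractions must be re-derived.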
Example 3 - Harmonic oscillator: We have already encountered the harmonic oscillator in our discussion of normal modes of molecules in Chapter 2. The equation of motion is

   m \frac{d^2}{dt^2} x(t) + k x(t) = 0,   (211)

where $m$ is the mass and $k$ the force constant. We shall now see that we can readily solve this equation by Laplace transformation, but first we must note that the derivative theorem can be iterated to apply to higher derivatives:

   LP\left( \frac{d^2}{dt^2} x(t) \right) = s\, LP\left( \frac{d}{dt} x(t) \right) - v(0)   (212)
     = s^2 \hat{x}(s) - s x(0) - v(0),

where $v$ is the velocity, i.e., the time derivative of $x$. Using this extension of the derivative theorem we can take the Laplace transform of the equation of motion for the harmonic oscillator and obtain

   m s^2 \hat{x}(s) - m s x(0) - m v(0) + k \hat{x}(s) = 0.   (213)

This equation yields

   \hat{x}(s) = \frac{m s x(0) + m v(0)}{m s^2 + k} = \frac{s x(0) + v(0)}{s^2 + k/m}   (214)
     = \frac{s x(0)}{s^2 + k/m} + \frac{v(0)}{s^2 + k/m}.

Now we can identify the terms in our small table of Laplace transforms and find

   x(t) = x(0) \cos\left( \sqrt{k/m}\, t \right) + v(0) \sqrt{m/k}\, \sin\left( \sqrt{k/m}\, t \right).   (215)
Example 4 - Debye-Hückel theory: Now we shall consider the screening of an ion in an electrolyte solution. Let $\phi(r)$ be the average electrostatic potential at the distance $r$ from the ion. It must be spherically symmetric, so it depends on the distance but not on the direction. In the absence of other ions the field would have been of Coulombic form, $\phi(r) \propto q/r$, where $q$ is the charge of the central ion. In the presence of the mobile ions in the solution the field is screened by the attraction of counterions and repulsion of co-ions. According to the Debye-Hückel analysis the concentration of an ion of species $i$ is altered by the field to the form

   c_i(r) = c_{i\infty}\, e^{-E_i/k_B T} = c_{i\infty}\, e^{-q_i \phi(r)/k_B T},   (216)

where $q_i$ is the charge of the ionic species in the screening atmosphere, $c_{i\infty}$ is its bulk concentration, $k_B$ is Boltzmann's constant and $T$ is the temperature in Kelvin. From electrostatic theory we know that there is a direct relationship between the field and the charge density $\rho$ expressed by Poisson's equation,

   \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right) \phi = \nabla^2\phi = -\rho/\epsilon_0,   (217)

which in the case of spherical symmetry becomes

   \frac{1}{r^2} \frac{d}{dr} \left( r^2 \frac{d}{dr} \phi(r) \right) = -\rho(r)/\epsilon_0.   (218)

The charge density $\rho$ can be expressed in terms of the concentrations of the charged species, i.e.,

   \rho(r) = \sum_i q_i c_i(r) = \sum_i q_i c_{i\infty}\, e^{-q_i \phi(r)/k_B T}.   (219)

If we now insert this expression for $\rho$ in Poisson's equation we get the so-called Poisson-Boltzmann equation, which in spherical symmetry takes the form

   \frac{1}{r^2} \frac{d}{dr} \left( r^2 \frac{d}{dr} \phi(r) \right) = -\sum_i \frac{q_i c_{i\infty}}{\epsilon_0}\, e^{-q_i \phi(r)/k_B T}.   (220)

At this point we note that the Poisson-Boltzmann equation is nonlinear in the field and therefore difficult to solve. Debye and Hückel investigated the weak coupling limit, when the interaction energy is small compared to $k_B T$, i.e.,

   |q_i \phi(r)/k_B T| \ll 1.   (221)

Then we can linearize the Boltzmann factor and get

   \rho(r) = \sum_i q_i c_{i\infty}\, e^{-q_i \phi(r)/k_B T} \approx \sum_i \left( q_i c_{i\infty} - q_i^2 c_{i\infty}\, \phi(r)/k_B T \right)   (222)
     = -\sum_i q_i^2 c_{i\infty}\, \phi(r)/k_B T,

since by electroneutrality in the bulk we have

   \sum_i q_i c_{i\infty} = 0.   (223)

The corresponding linearized Poisson-Boltzmann equation takes the form

   \frac{1}{r^2} \frac{d}{dr} \left( r^2 \frac{d}{dr} \phi(r) \right) = \kappa^2 \phi(r),   (224)

where

   \kappa^2 = \sum_i q_i^2 c_{i\infty} / (\epsilon_0 k_B T).   (225)

Now we have a second order linear differential equation to solve. Before we apply the Laplace transform method we change dependent variable to $u(r) = r\phi(r)$. The linearized Poisson-Boltzmann equation then becomes

   \frac{d^2}{dr^2} u(r) = \kappa^2 u(r).   (226)

By Laplace transformation of both sides we get

   s^2 \hat{u}(s) - s u(0) - u'(0) = \kappa^2 \hat{u}(s),   (227)

   \hat{u}(s) = \frac{s u(0) + u'(0)}{s^2 - \kappa^2}.

Recalling that

   \frac{1}{s^2 - \kappa^2} = \frac{1}{(s + \kappa)(s - \kappa)} = \frac{1}{2\kappa} \left( \frac{1}{s - \kappa} - \frac{1}{s + \kappa} \right),   (228)

   \frac{s}{s^2 - \kappa^2} = \frac{s - \kappa + \kappa}{(s + \kappa)(s - \kappa)} = \frac{1}{s + \kappa} + \frac{1}{2} \left( \frac{1}{s - \kappa} - \frac{1}{s + \kappa} \right)
     = \frac{1}{2} \left( \frac{1}{s - \kappa} + \frac{1}{s + \kappa} \right),

we find that $u(r)$ has the form

   u(r) = \frac{1}{2} \left( u(0) - \frac{u'(0)}{\kappa} \right) e^{-\kappa r} + \frac{1}{2} \left( u(0) + \frac{u'(0)}{\kappa} \right) e^{\kappa r}.   (229)

However, for physical reasons we cannot tolerate the exponentially growing term, so we must have $u'(0) = -\kappa u(0)$. Thus we get the physical solution

   u(r) = u(0)\, e^{-\kappa r},   (230)

and, finally, the field is given by

   \phi(r) = \frac{u(0)}{r}\, e^{-\kappa r}.   (231)

The parameter $u(0)$ is determined by the condition that the field approach the Coulomb potential of the bare charge as $r \to 0$, i.e., $u(0) = q/4\pi\epsilon_0$ in SI units. Thus we finally obtain the following screened Coulomb potential,

   \phi(r) = \frac{q}{4\pi\epsilon_0 r}\, e^{-\kappa r}.   (232)
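The screening parameter $\kappa$ of Eq. (225) is easy to evaluate numerically. The sketch below (numpy assumed) does so for a 1:1 electrolyte; the concentration 0.1 mol/L, T = 298 K, and the relative permittivity of water ($\epsilon_r = 78.4$, inserted in place of the bare $\epsilon_0$ when the solvent is treated as a dielectric continuum) are illustrative assumptions, not values from the text.

```python
import numpy as np

# Debye screening length 1/kappa, Eq. (225), for a 1:1 electrolyte.
e = 1.602176634e-19       # elementary charge, C
kB = 1.380649e-23         # Boltzmann constant, J/K
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
NA = 6.02214076e23        # Avogadro constant, 1/mol
eps_r, T = 78.4, 298.0    # water at room temperature (assumed)

n = 0.1 * 1e3 * NA        # number density of each ion species, 1/m^3 (0.1 mol/L)
kappa2 = 2 * e**2 * n / (eps_r * eps0 * kB * T)   # sum over the +/- species
debye_length_nm = 1e9 / np.sqrt(kappa2)

assert 0.9 < debye_length_nm < 1.0   # about 0.96 nm at 0.1 M
```

The screening length of roughly a nanometer shows why the bare Coulomb potential is a poor description of interionic forces at ordinary electrolyte concentrations.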
9.3 Exercises on Laplace Transforms:

1. Calculate the Laplace transforms of the functions $\cosh(x)$ and $\sinh(x)$.

2. Consider the isomerization reaction A → B and its reverse reaction B → A proceeding with the rate coefficients $k_f$ and $k_b$, the forward and backward rate coefficients, respectively. Write down the corresponding coupled rate equations for the concentrations $c_A$ and $c_B$ and solve them. Determine the equilibrium concentrations and the rate at which equilibrium is approached.

3. If a harmonic oscillator is placed in a dissipative medium (e.g., a gas or a liquid), a frictional force is expected to appear which is proportional to the velocity. The corresponding equation of motion is

   m \frac{d^2}{dt^2} x(t) = -k x(t) - \gamma \frac{d}{dt} x(t).   (233)

Solve this equation for $x(t)$, $t > 0$, and describe the qualitative change in the time dependence as $\gamma$ goes from $-\infty$ to $+\infty$. For what values of $\gamma$ is the effect on the motion consistent with a friction acting on an oscillator?

4. Consider the Debye-Hückel theory of electrolytes above. a) Show that the solution obtained for $\phi(r)$ leads to a charge density $\rho(r)$ which satisfies charge neutrality, i.e., the integrated charge density equals the central charge with reverse sign ($-q$). b) In the more realistic model where the ions are considered to be hard spheres of diameter $d$, such that $\rho(r) = 0$ for $r < d$, the solution for $u(r)$ is

   u(r) = u(d) \exp(-\kappa(r - d)), for r > d.

Determine $u(d)$ so that charge neutrality is again satisfied.
Part III

Differential Equations

10 Ordinary Differential Equations

Consider the equation

   \frac{d^3}{dx^3} y + \frac{d}{dx} y + x^2 y^n = g(x)   (234)

for the function $y(x)$. It is called an ordinary differential equation (ODE) because there appear only derivatives with respect to one unknown variable, called $x$ in this case. It is said to be of 3rd order because the highest order derivative to appear is of this order. If $n = 1$ then the equation is linear, and if $g(x) = 0$ then it is called homogeneous, while if $g(x) \ne 0$ then it is said to be inhomogeneous. If $n = 2, 3, \ldots$ then the equation is nonlinear.

10.1 First Order Equations

10.1.1 Simple Integration:

The simplest type of ordinary differential equation is of the form

   y' = f(x)   (235)

and can be solved by direct integration of both sides,

   y = \int^x ds\, f(s).   (236)

Note that a superscript prime indicates that a derivative with respect to $x$ has been taken, a double prime indicates a double derivative, and so on. The integral on the right hand side is any primitive function of $f(x)$, $F(x)$. We could then write

   y = \int_0^x ds\, f(s) + C,   (237)

to make explicit the fact that the solution requires specification of a constant $C$. The value of $y(x)$ at one point is sufficient to specify $C$, e.g.,

   y = \int_0^x ds\, f(s) + y(0).   (238)

Thus we see that solving an ODE involves finding a general solution including one or more undetermined parameters, which are then determined by some boundary condition or point values of the solution or its derivatives. For a first order ODE only one parameter is involved.
10.1.2 Generalized integration:

A more general form of first order ODE is

   y' = f(x, y).   (239)

If the function $f(x, y)$ is separable,

   f(x, y) = h(x)/g(y),   (240)

then an implicit solution can be obtained as follows:

   y'\, g(y) = h(x),   (241)

   G(y) = H(x) + C.   (242)

Here $G(y)$ and $H(x)$ are primitive functions of $g(y)$ and $h(x)$, respectively, and $C$ is an undetermined constant. This equation must now be solved for $y$ as a function of $x$. This can often, but not always, be done analytically.

Example 1: $y' = x$ is solved by simple integration, i.e., $y = \int^x ds\, s + C = x^2/2 + C$.

Example 2: $y' = -ky$ is solved by generalized integration, i.e.,

   y'\, \frac{1}{y} = -k,

   \ln y = -kx + C,

   y = e^{-kx + C} = D e^{-kx}.
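Both examples can be reproduced with a symbolic ODE solver. A minimal sketch, assuming sympy (not part of the text):

```python
import sympy as sp

# Example 2: y' = -k y, solved symbolically and checked against the ODE.
x = sp.symbols('x')
k = sp.symbols('k', positive=True)
y = sp.Function('y')

eq = sp.Eq(y(x).diff(x), -k * y(x))
sol = sp.dsolve(eq, y(x))        # sympy returns y(x) = C1*exp(-k*x)
assert sp.checkodesol(eq, sol)[0]
```

The integration constant `C1` returned by sympy plays the role of $D$ above and is fixed by a point value such as $y(0)$.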
10.1.3 Reduction by Variable Transformation:
An equivalent way of writing the general …rst order ODE 239 is 
d¸ ÷,(r. ¸)dr = 0. (243)
This equation can be multiplied by another function q(r. ¸) to produce 
q(r. ¸)d¸ ÷q(r. ¸),(r. ¸)dr = 0. (244)
64
Thus we see that another way to write a general …rst order ODE is 
1(r. ¸)d¸ +Q(r. ¸)dr = 0. (245)
Consider now the special case when 1 and Q are both homogeneous of degree
:, i.e.,
1(`r. `¸) = `
a
1(r. ¸). (246)
Q(`r. `¸) = `
a
Q(r. ¸).
If we now make the variable transformation · = ¸,r and substitute r· for ¸
in 245, then we get 
1(r. r·)d¸ +Q(r. r·)dr = r
a
1(1. ·)d¸ +r
a
Q(1. ·)dr = 0. (247)
which yields 
1(1. ·)d¸ +Q(1. ·)dr = 0. (248)
1(1. ·)(rd· +·dr) +Q(1. ·)dr = 0.
1(1. ·)
·1(1. ·) +Q(1. ·)
d· +
1
r
dr = 0.
This equation is now of the form 
·
0
q(·) = ÷1,r. (249)
which can be solved by generalized integration to yield 
G(·) = ÷ln r +C. (250)
Example 3: Consider the ordinary differential equation
$$\frac{dy}{dx} = \frac{y^2/2 - x^2}{x^2 + xy}.$$
It can be rewritten in the form
$$(x^2 + xy)\,dy + (x^2 - y^2/2)\,dx = 0.$$
Here we have
$$P(x, y) = x^2 + xy,$$
$$Q(x, y) = x^2 - y^2/2.$$
Note that both $P(x, y)$ and $Q(x, y)$ are homogeneous of degree two. Thus we introduce the new dependent variable $v = y/x$ and find $y = xv$, $dy = x\,dv + v\,dx$. Inserted in the ODE we get
$$x^2(1 + v)(x\,dv + v\,dx) + x^2(1 - v^2/2)\,dx = 0,$$
$$(1 + v)x\,dv + (1 + v + v^2/2)\,dx = 0,$$
$$\frac{1 + v}{1 + v + v^2/2}\,dv + \frac{1}{x}\,dx = 0.$$
Now we apply generalized integration to both sides and find
$$\ln(1 + v + v^2/2) = -\ln(x) + C.$$
By exponentiation we obtain an implicit solution
$$1 + v + v^2/2 = A/x,$$
and by multiplication by $x^2$
$$x^2 + xy + y^2/2 = Ax.$$
Since this is a second order equation in $y$ we can solve for $y$ explicitly as follows:
$$y^2 + 2xy + 2x^2 = 2Ax,$$
$$(y + x)^2 = 2Ax - x^2,$$
$$y = -x \pm \sqrt{2Ax - x^2}.$$
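The explicit branch found in Example 3 can be verified numerically; a small sketch ($A$ and the test points are arbitrary choices):

```python
import math

# The explicit branch y = -x + sqrt(2Ax - x^2) from Example 3 should
# satisfy dy/dx = (y^2/2 - x^2)/(x^2 + x*y) on 0 < x < 2A.
A = 3.0

def y(x):
    return -x + math.sqrt(2 * A * x - x * x)

def f(x, y):
    return (y * y / 2 - x * x) / (x * x + x * y)

h = 1e-6
for x in [0.5, 1.0, 2.0]:
    dy = (y(x + h) - y(x - h)) / (2 * h)   # central-difference derivative
    assert abs(dy - f(x, y(x))) < 1e-5
print("implicit solution of Example 3 verified")
```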
10.2 Method of Exact Differentials:
Suppose $y$ is given implicitly as a function of $x$ by the equation $F(x, y) = 0$. Differentiating with respect to $x$ we get
$$\frac{\partial F}{\partial x} + \frac{\partial F}{\partial y}\frac{dy}{dx} = 0, \qquad (251)$$
which can be written as
$$dF = \frac{\partial F}{\partial x}\,dx + \frac{\partial F}{\partial y}\,dy = P\,dy + Q\,dx = 0. \qquad (252)$$
Thus we know that when the ODE can be written as an exact differential equal to zero as above, then $F(x, y) = 0$ yields an implicit solution. It remains to learn how to recognize when the ODE has this form. Note that for all physical functions we have
$$\frac{\partial^2 F}{\partial x\,\partial y} = \frac{\partial^2 F}{\partial y\,\partial x}. \qquad (253)$$
Thus if the ODE (245) satisfies
$$\frac{\partial}{\partial x}P(x, y) = \frac{\partial}{\partial y}Q(x, y), \qquad (254)$$
then it is of exact differential form and we have the implicit solution $F(x, y) = 0$, where $F(x, y)$ can be found from
$$\frac{\partial F}{\partial y} = P(x, y), \qquad (255)$$
$$\frac{\partial F}{\partial x} = Q(x, y).$$
By integration we then find
$$F(x, y) = g(x) + \int^y ds\,P(x, s), \qquad (256)$$
$$F(x, y) = f(y) + \int^x ds\,Q(s, y).$$
These equations can then, with a bit of luck, be solved for $F(x, y)$, which in turn yields $y(x)$ if we can solve $F(x, y) = 0$.
Example 4: Consider the ODE $(x + y)\,dx + x\,dy = 0$. Noting that
$$\frac{\partial}{\partial y}(x + y) = \frac{\partial}{\partial x}x = 1,$$
we see that the equation is of exact differential form. Thus we can find a function $F(x, y)$ such that an implicit solution for $y(x)$ is obtained from $F(x, y) = 0$ by solving the equations
$$F(x, y) = g(x) + \int^y ds\,x = g(x) + yx,$$
$$F(x, y) = f(y) + \int^x ds\,(s + y) = f(y) + \frac{1}{2}x^2 + yx.$$
Noting that the right hand sides must be identical we get
$$g(x) + yx = f(y) + \frac{1}{2}x^2 + yx,$$
which yields
$$g(x) = \frac{1}{2}x^2 + C,$$
$$f(y) = C,$$
and
$$F(x, y) = yx + \frac{1}{2}x^2 + C = 0.$$
This equation can, finally, be solved for $y(x)$ as
$$y(x) = -\left(\frac{x}{2} + \frac{C}{x}\right),$$
where $C$ is a parameter to be determined from, e.g., a point value of $y(x)$. Suppose, for example, that we have $y(1) = 1$; then $C = -3/2$.
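The solution of Example 4 can be checked directly; a short sketch (the point value $y(1) = 1$ fixes $C = -3/2$):

```python
# Verify Example 4: y(x) = -(x/2 + C/x) should satisfy (x + y) + x*y' = 0.
# C is fixed by the point condition y(1) = 1.
C = -1.5

def y(x):
    return -(x / 2 + C / x)

assert abs(y(1.0) - 1.0) < 1e-12          # point condition

h = 1e-6
for x in [0.5, 1.0, 3.0]:
    dy = (y(x + h) - y(x - h)) / (2 * h)  # central-difference y'
    residual = (x + y(x)) + x * dy
    assert abs(residual) < 1e-5
print("Example 4 solution verified")
```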
10.3 Method of Integrating Factors:
Consider the linear first order ODE of the form
$$y' + m(x)y = n(x). \qquad (257)$$
Suppose that $M(x)$ is a primitive function of $m(x)$; then we note that
$$\frac{d}{dx}\left(y e^{M(x)}\right) = (y' + m(x)y)e^{M(x)} = n(x)e^{M(x)}. \qquad (258)$$
Integrating both sides with respect to $x$ we get
$$y(x)e^{M(x)} = \int^x ds\,n(s)e^{M(s)} + C, \qquad (259)$$
$$y(x) = e^{-M(x)}\left(\int^x ds\,n(s)e^{M(s)} + C\right).$$
Again $C$ is a parameter to be determined. Suppose that we know $y(0)$. It is then convenient to let the integration go from $0$ to $x$, i.e.,
$$y(x)e^{M(x)} - y(0)e^{M(0)} = \int_0^x ds\,n(s)e^{M(s)}, \qquad (260)$$
$$y(x) = e^{-M(x)}\left(\int_0^x ds\,n(s)e^{M(s)} + y(0)e^{M(0)}\right).$$
For simplicity we normally choose $M(x)$ so that $M(0) = 0$.
Example 5: Consider the equation $y' + ky = ax^2$. It is of the form appropriate for the method of integrating factors. A primitive function for $k$ is $kx$, so we get
$$\frac{d}{dx}\left(y e^{kx}\right) = (y' + ky)e^{kx} = ax^2 e^{kx},$$
$$y(x) = e^{-kx}\left(\int_0^x ds\,a s^2 e^{ks} + y(0)\right).$$
The integration can be carried out by partial integration or by differentiation as follows:
$$\int_0^x ds\,s^2 e^{ks} = \frac{\partial^2}{\partial k^2}\int_0^x ds\,e^{ks} = \frac{\partial^2}{\partial k^2}\left(\frac{1}{k}(e^{kx} - 1)\right) = e^{kx}\left(\frac{x^2}{k} - \frac{2x}{k^2} + \frac{2}{k^3}\right) - \frac{2}{k^3}.$$
Thus we get
$$y(x) = y(0)e^{-kx} + a\left(\frac{x^2}{k} - \frac{2x}{k^2} + \frac{2}{k^3}\right) - \frac{2a}{k^3}e^{-kx}.$$
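The closed-form result of Example 5 can be cross-checked against a direct numerical integration of the ODE; a small sketch ($a$, $k$, $y(0)$ are arbitrary choices):

```python
import math

# Check Example 5 by comparing the closed-form solution with a crude
# Euler integration of y' = a*x**2 - k*y.
a, k, y0 = 1.0, 2.0, 0.5

def exact(x):
    return (y0 * math.exp(-k * x)
            + a * (x * x / k - 2 * x / k**2 + 2 / k**3)
            - (2 * a / k**3) * math.exp(-k * x))

h, y = 1e-4, y0
for i in range(10000):                 # integrate from x = 0 to x = 1
    x = i * h
    y += h * (a * x * x - k * y)       # Euler step

assert abs(y - exact(1.0)) < 1e-3
print("closed form agrees with Euler integration at x = 1")
```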
10.4 Factorization Method:
Suppose we consider a differential equation which can be factorized, i.e.,
$$F_1(x, y, y', \ldots)F_2(x, y, y', \ldots) = 0; \qquad (261)$$
then we obtain the solutions from each factor,
$$F_1(x, y, y', \ldots) = 0, \qquad (262)$$
$$F_2(x, y, y', \ldots) = 0,$$
and add them to a set of solutions for the full equation.
Example 6: Consider the ODE $(y')^2 - (a + b)y' + ab = 0$. It can be factorized as
$$(y' - a)(y' - b) = 0.$$
The factor solutions are $y = ax + A$ and $y = bx + B$. Thus the solutions to the full equation are
$$y(x) = ax + A \quad \text{or} \quad y(x) = bx + B.$$
10.5 Linear Ordinary Differential Equations
A linear ODE can be either homogeneous or inhomogeneous, i.e., written in the form
$$Ly(x) = 0 \quad \text{(homogeneous)} \qquad (263)$$
or
$$Ly(x) = f(x) \quad \text{(inhomogeneous)} \qquad (264)$$
where the operator $L$ can be defined as
$$L = g_0(x) + g_1(x)\frac{d}{dx} + g_2(x)\frac{d^2}{dx^2} + \cdots. \qquad (265)$$
Suppose now that we have found two linearly independent solutions of a homogeneous linear ODE,
$$Ly_1(x) = 0, \qquad (266)$$
$$Ly_2(x) = 0;$$
then it follows from the linearity that
$$L(\lambda y_1(x) + \mu y_2(x)) = \lambda Ly_1(x) + \mu Ly_2(x) = 0. \qquad (267)$$
Thus linear combinations of solutions are also solutions. There is a vector space of solutions, and in order to describe it we must try to find the largest set of linearly independent solutions of the homogeneous ODE. If $\{y_i(x)\}_{i=1}^n$ is such a set then it can serve as a basis set in the space of solutions, which can be written as
$$y(x) = \sum_{i=1}^n c_i y_i(x), \qquad (268)$$
where $\{c_i\}_{i=1}^n$ is a set of scalar numbers which are free parameters to be determined by further information about the solution. The space of solutions of the corresponding inhomogeneous linear ODE is of the same dimension but shifted in function space by a function $u(x)$ which is a so-called particular solution of the inhomogeneous ODE, i.e.,
$$Lu(x) = f(x). \qquad (269)$$
The general solution of the inhomogeneous linear ODE can then be written as
$$y(x) = u(x) + \sum_{i=1}^n c_i y_i(x). \qquad (270)$$
10.5.1 Linear ODE's with Constant Coefficients:
In the special case when all the functions $g_i(x)$ in the definition of $L$ are scalar constants, the search for the space of solutions of the homogeneous ODE is much simplified by the factorization of $L$,
$$L = D^n + p_1 D^{n-1} + \cdots + p_{n-1}D + p_n \qquad (271)$$
$$= (D - r_1)(D - r_2)(D - r_3)\cdots(D - r_n),$$
where $D = d/dx$, $\{p_i\}$ is the set of constants which define $L$, and $\{r_i\}$ the set of $n$ roots of the $n$th order polynomial formed by $L$ if $D$ is treated as an ordinary scalar variable. Since the coefficients are constants we have
$$Dr_i = r_i D, \qquad (272)$$
i.e., the derivative operator commutes with all coefficients, and $D$ can be treated as an ordinary scalar in forming the factorized form of $L$ above. It is now easy to see that the solution of $(D - r_n)y = 0$ is also a solution of $Ly = 0$. The first factor in $L$ simply kills $y$ if it is a solution of $(D - r_n)y = 0$. However, we can write the factors in any order. Thus the solutions of all the equations
$$(D - r_i)y = 0, \quad i = 1, 2, \ldots, n, \qquad (273)$$
will also be solutions of $Ly = 0$. Recall now that
$$y' - r_i y = 0 \qquad (274)$$
has the general solution
$$y_i(x) = c_i e^{r_i x}. \qquad (275)$$
If the roots $\{r_i\}_{i=1}^n$ are all different then the corresponding solutions are all linearly independent and we get the general solution of $Ly = 0$ in the form
$$y(x) = \sum_{i=1}^n c_i e^{r_i x}. \qquad (276)$$
If we have a root of degeneracy $d$, i.e., $d$ roots are identical, then the corresponding terms are replaced as shown below:
$$\sum_{i=1}^d c_i e^{r_i x} \;\rightarrow\; \sum_{i=1}^d c_i x^{i-1} e^{r_1 x}. \qquad (277)$$
These last results are offered without proof but can readily be verified.

How can we obtain a particular solution? Since any form of particular solution will do, it is often possible to find one "by inspection", by a guess inspired by the form of the ordinary differential equation. More systematically, we can use the method of integrating factors iteratively in the following way:
$$Ly = (D - r_1)(D - r_2)\cdots(D - r_n)y = f(x). \qquad (278)$$
Defining a new function $v(x)$ by
$$v(x) = (D - r_2)(D - r_3)\cdots(D - r_n)y, \qquad (279)$$
we get a new first order ODE,
$$(D - r_1)v(x) = f(x), \qquad (280)$$
which we solve for $v(x)$ by the integrating factor method,
$$v(x) = e^{r_1 x}\left(C_1 + \int_0^x ds\,f(s)e^{-r_1 s}\right). \qquad (281)$$
Now we have obtained a new linear ODE of order $n - 1$,
$$(D - r_2)(D - r_3)\cdots(D - r_n)y = v(x), \qquad (282)$$
where $v(x)$ is a known function. Thus we can repeat the step and use the integrating factor method to peel off one factor at a time until $y(x)$ itself is found.
Example 7: Find the solutions of the differential equation
$$y'' - 3y' + 2y = 1.$$
First we note that this is a linear second order ODE which is inhomogeneous with constant coefficients. The corresponding homogeneous equation is
$$y'' - 3y' + 2y = 0.$$
In polynomial form it can be written
$$(D - 1)(D - 2)y = 0.$$
Thus we see that the general solution of the homogeneous equation is
$$y(x) = c_1 e^{x} + c_2 e^{2x}.$$
A particular solution can be found by inspection in the form $y(x) = 1/2$. It follows that the general solution of the inhomogeneous equation is
$$y(x) = \frac{1}{2} + c_1 e^{x} + c_2 e^{2x}.$$
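The general solution of Example 7 is easy to verify by substitution; a minimal sketch ($c_1$, $c_2$ and the test points are arbitrary, and the derivatives are written out by hand):

```python
import math

# Check Example 7: y = 1/2 + c1*e^x + c2*e^(2x) solves y'' - 3y' + 2y = 1.
c1, c2 = 0.3, -1.2

for x in [0.0, 0.7, 2.0]:
    y   = 0.5 + c1 * math.exp(x) + c2 * math.exp(2 * x)
    yp  = c1 * math.exp(x) + 2 * c2 * math.exp(2 * x)
    ypp = c1 * math.exp(x) + 4 * c2 * math.exp(2 * x)
    assert abs(ypp - 3 * yp + 2 * y - 1) < 1e-9
print("general solution of Example 7 verified")
```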
Example 8: Find the solution of the differential equation
$$y'' - 2y' + y = e^{-2x}.$$
Note that the polynomial form of the corresponding homogeneous equation is
$$(D - 1)^2 y = 0.$$
It has a doubly degenerate root $r = 1$. In order to test the statement above concerning the solution in the case of degenerate roots, let us solve this equation by the iterative method of integrating factors. We first set $v(x) = (D - 1)y$ and find then that
$$v' - v = 0,$$
which yields
$$v(x) = c_1 e^{x}.$$
Now we get from the definition of $v(x)$ the equation
$$y' - y = c_1 e^{x}.$$
By the method of integrating factors this yields
$$y(x) = e^{x}\left(c_2 + \int_0^x ds\,c_1\right) = (c_2 + c_1 x)e^{x},$$
in agreement with the statement above. Applying the iterative integrating factor method to the inhomogeneous equation we get
$$v' - v = e^{-2x},$$
$$v(x) = e^{x}\left(c_1 + \int_0^x ds\,e^{-3s}\right) = c_1 e^{x} + \frac{1}{3}\left(e^{x} - e^{-2x}\right),$$
$$y' - y = v(x) = \left(c_1 + \frac{1}{3}\right)e^{x} - \frac{1}{3}e^{-2x},$$
$$y(x) = e^{x}\left(c_2 + \left(c_1 + \frac{1}{3}\right)x - \frac{1}{9}\left(1 - e^{-3x}\right)\right)$$
$$= \left(c_2 - \frac{1}{9} + \left(c_1 + \frac{1}{3}\right)x\right)e^{x} + \frac{1}{9}e^{-2x}.$$
Noting that $c_1, c_2$ are undetermined scalar constants we can rewrite this result as
$$y(x) = c_2\exp(x) + c_1 x\exp(x) + \frac{1}{9}\exp(-2x).$$
There are often possibilities to shortcut the "brute force" type of solution by a "solution by inspection". In this case we could proceed as follows. We first note that the inhomogeneity in the form of $\exp(-2x)$ suggests that the solution will contain the same exponential. The simplest form of such a solution is $a\exp(-2x)$. Inserting this guess into the differential equation yields
$$4a\exp(-2x) + 4a\exp(-2x) + a\exp(-2x) = \exp(-2x).$$
It follows immediately that $a = 1/9$, and thus a particular solution has been found as
$$u(x) = \frac{1}{9}\exp(-2x).$$
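The inspection result of Example 8 can be confirmed by substitution; a short sketch with the derivatives written out by hand:

```python
import math

# Check the particular solution u(x) = exp(-2x)/9 of y'' - 2y' + y = exp(-2x).
for x in [0.0, 0.4, 1.5]:
    u   = math.exp(-2 * x) / 9
    up  = -2 * math.exp(-2 * x) / 9
    upp = 4 * math.exp(-2 * x) / 9
    assert abs((upp - 2 * up + u) - math.exp(-2 * x)) < 1e-12
print("particular solution of Example 8 verified")
```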
10.6 Known Second Order Differential Equations:
There are, of course, many ODE's which do not fall into any of the categories of solvable problems discussed above. It is good to know then that some such ODE's are well studied and documented in the literature. Here are some that you could look up in most texts on differential equations:

1. Legendre's equation:
$$(1 - x^2)y'' - 2xy' + n(n + 1)y = 0. \qquad (283)$$

2. Associated Legendre's equation:
$$(1 - x^2)y'' - 2xy' + \left(n(n + 1) - \frac{m^2}{1 - x^2}\right)y = 0. \qquad (284)$$

3. Bessel's equation:
$$x^2 y'' + xy' + (x^2 - n^2)y = 0. \qquad (285)$$

4. Hypergeometric equation:
$$x(1 - x)y'' + [c - (a + b + 1)x]y' - ab\,y = 0. \qquad (286)$$
10.7 Exercises on Ordinary Differential Equations:
Exercise 1: Obtain the general solution of the following ODE's, clearly indicating in each case the method you are using and what conditions on $y(x)$ at $x = 0$ would completely determine the solution. Note that general analytical solutions may sometimes have to be in implicit form. Take the solution as far as you can towards explicit form and then leave it implicit if necessary.

a)
$$y' = x e^{kx}.$$
b)
$$y' = -y/(2\sqrt{xy} - x). \quad \text{(implicit solution is sufficient)}$$
c)
$$y' = y^2(2 + \sin x).$$
d)
$$y' - (1 + x^2)y = x^3.$$
e)
$$(y')^3 + (a + b + c)(y')^2 y + (ab + bc + ca)y'y^2 + abc\,y^3 = 0.$$
f)
$$y''' + 2y'' + 4y' = 0.$$
11 Partial Differential Equations
Partial differential equations are differential equations on multidimensional domains, i.e., there are several independent variables. There are many important examples in chemistry, such as:

1. The one-dimensional wave equation for the displacement $u(t; x)$:
$$\frac{\partial^2 u}{\partial x^2} = \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2}. \qquad (287)$$

2. The three-dimensional wave equation for the displacement $u(t; x, y, z)$:
$$\nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} = \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2}. \qquad (288)$$

3. The Laplace equation for the electrostatic field $u(x, y, z)$:
$$\nabla^2 u = 0. \qquad (289)$$

4. Poisson's equation for the electrostatic field $u(x, y, z)$:
$$\nabla^2 u = g(x, y, z). \qquad (290)$$

5. The three-dimensional diffusion equation for the particle density $u(t; x, y, z)$:
$$\nabla^2 u = \frac{1}{D}\frac{\partial u}{\partial t}. \qquad (291)$$

6. The time-dependent Schrödinger equation for the wavefunction $u(t; x, y, z)$:
$$\frac{\partial u}{\partial t} = -\frac{i}{\hbar}\left(-\frac{\hbar^2}{2m}\nabla^2 u + V(x, y, z)u\right). \qquad (292)$$

7. The time-independent Schrödinger equation for the eigenfunction $u(x, y, z)$:
$$-\frac{\hbar^2}{2m}\nabla^2 u + V(x, y, z)u = Eu.$$

Note that, with the exception of Poisson's equation which is inhomogeneous, all these equations are linear, homogeneous equations of second order. As in the case of ordinary differential equations, the partial differential equations have many solutions which become unique by the application of boundary conditions. The following types of boundary conditions are common:

- Dirichlet conditions: $u$ is known on the boundary.
- Neumann conditions: $(\nabla u)_n$ (i.e., the normal gradient) is known on the boundary.
- Cauchy conditions: $u$ and $(\nabla u)_n$ are both known at the boundary.
11.1 Separation of Variables
We shall consider a few of the most commonly used methods of solving partial differential equations (PDE's). Perhaps the most commonly used method is to attempt to reduce the partial equation to ordinary form by separation of variables, i.e., we try to find solutions of product form. Thus if we are looking for a solution $u(x, y, z)$ then we propose the form
$$u(x, y, z) = X(x)Y(y)Z(z). \qquad (293)$$
Upon insertion into the PDE this will, if the PDE is separable in these coordinates, generate three ordinary differential equations which can be attacked by the methods of the preceding chapter.
Example 1 - The vibrating string:
Let us consider an elastic string, such as a guitar string, of length $L$. Its deformation from the straight line shape is resisted by a tension in the string. We shall limit our string to motion in one dimension only and let the deviation of the string from its resting (equilibrium) position be denoted by $u(t; x)$ as a function of the time $t$ and the position $x$ along the axis of the string at rest. The boundary conditions are
$$u(t; 0) = u(t; L) = 0 \qquad (294)$$
and
$$u(0; x) = f(x), \qquad (295)$$
$$\frac{\partial}{\partial t}u(t; x) = g(x) \quad \text{for } t = 0. \qquad (296)$$
The first condition reflects the fact that the string is tied at the two ends. The second and third conditions give the initial position and velocity of each point in the chain.

We now assume that the string can be described by the direct product
$$u(t; x) = X(x)T(t). \qquad (297)$$
By insertion in the applicable partial differential equation, i.e., the one-dimensional wave equation (287), we get
$$T(t)\frac{\partial^2}{\partial x^2}X(x) = \frac{1}{c^2}X(x)\frac{\partial^2}{\partial t^2}T(t). \qquad (298)$$
If we now divide by $X(x)T(t)$ we find
$$\frac{1}{X(x)}\frac{\partial^2}{\partial x^2}X(x) = \frac{1}{c^2}\frac{1}{T(t)}\frac{\partial^2}{\partial t^2}T(t) = -\lambda^2. \qquad (299)$$
Here $\lambda$ is a constant independent of both $x$ and $t$. We then have two ordinary differential equations to solve. The one in $x$ is a boundary value problem
$$\frac{\partial^2}{\partial x^2}X(x) = -\lambda^2 X(x), \qquad (300)$$
with the condition that $X(0) = X(L) = 0$. The one in $t$ is an initial value problem
$$\frac{\partial^2}{\partial t^2}T(t) = -\lambda^2 c^2 T(t), \qquad (301)$$
with the condition that $T(0)$ and $\partial T(t)/\partial t$ at $t = 0$ have predetermined values. Note that our expectation that there be sinusoidal variations suggests that $\lambda$ be a real number. These equations are then readily solved. They can both be identified with the harmonic oscillator problem dealt with in both Chapter 2 and Chapter 6. We find the solutions
$$X(x) = A\sin(\lambda x + \phi), \qquad (302)$$
$$T(t) = B\sin(\lambda c t + \alpha). \qquad (303)$$
The boundary conditions on $X(x)$ lead to $\lambda = n\pi/L$, $n = 1, 2, \ldots$, with $\phi = 0$, i.e.,
$$X_n(x) = A_n\sin(n\pi x/L), \quad n = 1, 2, \ldots. \qquad (304)$$
This is the same form of solution as for the 1D particle-in-the-box problem in quantum mechanics. The corresponding solution for $T(t)$ is
$$T_n(t) = B_n\sin(n\pi c t/L + \alpha_n). \qquad (305)$$
Now we can see that the set of $X_n$ functions form a set of normal modes of the chain equivalent to the normal vibrational modes of molecules considered in Chapter 2. Had our string consisted of a chain of atoms, the analogy would have been perfect. In our string model here we have simply taken the continuum limit, when the particles become infinitely many and at the same time infinitely small so as to preserve the mass per unit length in the chain. The general solution is obtained by superposing normal mode solutions so as to match the initial value conditions given. Thus we expand $f(x)$ in the box eigenfunction basis set,
$$f(x) = \sum_{n=1}^{\infty} c_n\sin(n\pi x/L), \qquad (306)$$
and the same type of expansion applies also to the time-derivative $g(x)$,
$$g(x) = \sum_{n=1}^{\infty} d_n\sin(n\pi x/L). \qquad (307)$$
The general form of the solution is
$$u(t; x) = \sum_{n=1}^{\infty} A_n\sin(n\pi c t/L + \alpha_n)\sin(n\pi x/L). \qquad (308)$$
Thus we can solve for $A_n$ and $\alpha_n$ from the relations
$$A_n\sin(\alpha_n) = c_n,$$
$$(A_n n\pi c/L)\cos(\alpha_n) = d_n. \qquad (309)$$
By dividing the first of these equations by the last we get
$$\alpha_n = \arctan(c_n n\pi c/d_n L), \qquad (310)$$
and then $A_n$ can be found as
$$A_n = c_n/\sin(\alpha_n). \qquad (311)$$
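The recovery of $A_n$ and $\alpha_n$ from the expansion coefficients in (309)-(311) can be sketched in code. The wave speed, string length, and coefficients below are arbitrary illustrative values; `atan2` and `hypot` are used instead of a plain arctangent so that the correct quadrant is obtained even for negative coefficients:

```python
import math

# Recover A_n and alpha_n from given expansion coefficients c_n, d_n
# via the relations (309)-(311).
c_wave, L = 340.0, 1.0
coeffs = {1: (0.02, 0.5), 2: (-0.01, 1.0)}   # n: (c_n, d_n)

for n, (cn, dn) in coeffs.items():
    w = n * math.pi * c_wave / L             # mode angular frequency
    alpha = math.atan2(cn * w, dn)           # quadrant-safe arctan
    A = math.hypot(cn, dn / w)
    # check the defining relations (309)
    assert abs(A * math.sin(alpha) - cn) < 1e-12
    assert abs(A * w * math.cos(alpha) - dn) < 1e-12
print("A_n, alpha_n reproduce c_n and d_n")
```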
11.2 Integral Transform Method
A very general method of solving partial differential equations is to introduce a basis set and convert the PDE into an algebraic equation for the expansion coefficients by projection onto the finite space spanned by the basis. We have already seen this method in use in the Hückel theory of $\pi$-electron structure in planar conjugated hydrocarbon molecules. As we noticed in introducing the Fourier series, all series expansion methods, and by extension also the transform methods, are basically the same. Thus they bring the possibility of reducing differential equations to algebraic form. We shall show by example how the Fourier transform can be used to solve a PDE in this way.
Example 2 - Diffusion in an infinite solid:
Consider an infinite solid in which we have a spatially varying temperature $T(t; x)$ at $t = 0$. The general relation for the time-development of the temperature is Fick's law, which is
$$\nabla^2 T(t; x, y, z) = \frac{1}{\kappa}\frac{\partial T}{\partial t}. \qquad (312)$$
Note that Fick's law is just a diffusion equation in three dimensions and $\kappa$ is the corresponding diffusion coefficient. This analogy is reasonable since the random motion of particles is one of the main mechanisms of energy transport. In a solid the particles only rarely leave their equilibrium lattice sites, but the vibrations also have a random character, and the transfer of energy between sites in a solid can be approximately described as a diffusional process. Since the temperature variation is initially confined to the $x$-direction it will remain so for all times. Thus we can look for a function $T(t; x)$ and use the reduced version of Fick's law
$$\frac{\partial^2}{\partial x^2}T(t; x) = \frac{1}{\kappa}\frac{\partial}{\partial t}T(t; x), \qquad (313)$$
under the boundary condition that $T(0; x)$ is known.

We now take the Fourier transform with respect to $x$ of both sides. On the condition that the transform exists we get
$$-k^2 T(t; k) = \frac{1}{\kappa}\frac{\partial}{\partial t}T(t; k). \qquad (314)$$
From this equation follows by our ODE solving methods
$$T(t; k) = T(0; k)\exp(-\kappa k^2 t). \qquad (315)$$
Note that this result implies fast damping of high $k$ components, i.e., non-smooth features of $T(t; x)$. Thus time evolution produces an increasingly smooth spatial distribution of temperature.

At this point we need to consider the initial temperature distribution $T(0; x)$. It seems unlikely that it vanishes for $x \to \pm\infty$. It would appear therefore that our clever idea to use the Fourier transform will fail, since
$$\int_{-\infty}^{\infty} dx\,|T(0; x)| = \infty. \qquad (316)$$
However, reality comes to the rescue. We can consider a temperature disturbance $\delta T(t; x)$ which is defined by
$$\delta T(t; x) = T(t; x) - T_{bg}, \qquad (317)$$
where $T_{bg}$ is a background temperature so defined that
$$\int_{-\infty}^{\infty} dx\,|\delta T(0; x)| < \infty. \qquad (318)$$
Now the temperature disturbance satisfies the same PDE as $T$ itself and it does have a Fourier transform. Thus we can proceed with our method. The general solution can be written as
$$\delta T(t; x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} dk\,\delta T(t; k)\exp(ikx)$$
$$= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} dk\,\delta T(0; k)\exp(-\kappa k^2 t)\exp(ikx). \qquad (319)$$
Green's function: Suppose now that the temperature disturbance is initially perfectly localised in $x$, i.e.,
$$\delta T(0; x) = \delta(x - x_0). \qquad (320)$$
Then the Fourier transform is
$$\delta(x_0; k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} dx\,\exp(-ikx)\delta(x - x_0) \qquad (321)$$
$$= \frac{1}{\sqrt{2\pi}}\exp(-ikx_0).$$
The corresponding temperature disturbance is
$$\delta T(t; x_0, x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\,\exp(ik(x - x_0) - \kappa k^2 t). \qquad (322)$$
This integral can be evaluated analytically. Note that
$$\int_{-\infty}^{\infty} dk\,\exp(-\alpha k^2 + bk) = \int_{-\infty}^{\infty} dk\,\exp\left(-\alpha\left(k - \frac{b}{2\alpha}\right)^2 + \frac{b^2}{4\alpha}\right)$$
$$= \sqrt{\frac{\pi}{\alpha}}\exp\left(\frac{b^2}{4\alpha}\right). \qquad (323)$$
This result holds even though $b$ is complex, as follows from the fact that the integrand is analytical and the path of integration in the complex plane can be moved to the real axis. (See Section 14.2 in the Beta Handbook.) Thus our temperature distribution becomes
$$\delta T(t; x_0, x) = \sqrt{\frac{1}{4\pi\kappa t}}\exp\left(-(x - x_0)^2/4\kappa t\right). \qquad (324)$$
Finally, we note that due to linearity and the fact that the initial temperature distribution can be written as an integral over such delta functions,
$$\delta T(0; x) = \int dx_0\,\delta T(0; x_0)\delta(x - x_0), \qquad (325)$$
we can obtain the general solution as
$$\delta T(t; x) = \int dx_0\,\delta T(0; x_0)\,\delta T(t; x_0, x)$$
$$= \int dx_0\,\delta T(0; x_0)\sqrt{\frac{1}{4\pi\kappa t}}\exp\left(-(x - x_0)^2/4\kappa t\right). \qquad (326)$$
This expression means that each point of excess temperature broadens into a Gaussian ball of excess, with its maximum excess decreasing like $1/\sqrt{t}$ and its width increasing like $\sqrt{t}$ with time. The solution in the case of a delta function disturbance is called a Green's function by the physicists. (See Section 9.4 of the Beta Handbook.)
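Two properties of the Green's function (324) can be checked numerically: it stays normalized to one for all $t > 0$, and its variance grows as $2\kappa t$, i.e., its width grows like $\sqrt{t}$. A small sketch using a plain Riemann sum ($\kappa$ and the grid are arbitrary choices):

```python
import math

# The heat kernel (324) should integrate to 1 for any t > 0, and its
# variance should equal 2*kappa*t; checked with a simple Riemann sum.
kappa = 0.5

def kernel(x, t):
    return math.exp(-x * x / (4 * kappa * t)) / math.sqrt(4 * math.pi * kappa * t)

for t in [0.1, 1.0]:
    xs = [-20 + 0.01 * i for i in range(4001)]     # grid covering the peak
    norm = sum(kernel(x, t) for x in xs) * 0.01
    var = sum(x * x * kernel(x, t) for x in xs) * 0.01
    assert abs(norm - 1.0) < 1e-6
    assert abs(var - 2 * kappa * t) < 1e-4
print("heat kernel stays normalized; variance grows like 2*kappa*t")
```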
Linear time-propagation: In order to understand the interest in Green's functions, consider the time-development of the linear Fick's law,
$$\frac{\partial}{\partial t}T(t; x) = \kappa\frac{\partial^2}{\partial x^2}T(t; x) = \mathcal{L}T(t; x). \qquad (327)$$
This equation has the formal solution
$$T(t; x) = \exp(\mathcal{L}t)T(0; x). \qquad (328)$$
The time-propagator $\exp(\mathcal{L}t)$ is also a linear operator. It follows that if
$$T(0; x) = \sum_n c_n\phi_n(0; x), \qquad (329)$$
then
$$T(t; x) = \sum_n c_n\exp(\mathcal{L}t)\phi_n(0; x) = \sum_n c_n\phi_n(t; x). \qquad (330)$$
Similarly, if the initial field can be written as an integral over $\delta$-functions,
$$T(0; x) = \int dx_0\,T(0; x_0)\delta(x - x_0), \qquad (331)$$
then we get
$$T(t; x) = \int dx_0\,T(0; x_0)\exp(\mathcal{L}t)\delta(x - x_0) = \int dx_0\,T(0; x_0)\,\delta T(t; x_0, x). \qquad (332)$$
Thus the time-dependent $\delta$-functions, just like the time-dependent Fourier basis functions, allow us to obtain the time-dependent amplitude $T(t; x)$ simply by integration.
11.3 Exercises on Partial Differential Equations:
1. Reduce the time-dependent Schrödinger equation for particle motion in three dimensions to the form of two differential equations - one for the time-dependence and one for the spatial dependence of the wavefunction.

2. Consider the quantum mechanical motion of a particle in two dimensions. Solve the time-independent Schrödinger equation to obtain the energy eigenfunctions of the two-dimensional particle-in-the-box problem where the potential vanishes when $0 < x < L_1$ and $0 < y < L_2$ but is infinite elsewhere.
Part IV
Numerical Methods

12 Numerical Solution of ODE's

So far we have studied mathematical methods of an analytical form. The result has been either explicit solutions of an exact or approximate nature, or mathematical relationships which require evaluation by numerical means, e.g., by numerical differentiation or integration. Now we shall proceed to study the numerical methods most commonly used by chemists. Perhaps the most commonly used method of all is the finite basis set method employed by all users of the standard quantum chemical methods. This method was discussed already in Chapters 1 and 2. The second most commonly used method could well be the molecular dynamics method of simulating both dynamical processes and equilibrium properties of chemical systems. Thus we shall focus on this, the so-called MD method, next. It is based on the numerical solution of ordinary differential equations. The third most commonly used numerical method might be the Monte Carlo method of numerical integration. It is used to obtain equilibrium properties of chemical systems through the evaluation of statistical mechanical averages. Thus we shall study the so-called MC method of numerical simulation.
12.1 Numerical Differentiation
Recall the definition of the derivative of the function $f(x)$ at $x$,
$$f'(x) = \lim_{h \to 0}\frac{1}{h}(f(x + h) - f(x)). \qquad (333)$$
This definition immediately suggests a numerical evaluation of the derivative by the relation
$$f'(x) \approx \frac{1}{h}(f(x + h) - f(x)) \qquad (334)$$
for a small value of $h$. How small should $h$ be? This is not so easy to determine in practice. It depends on the round-off error affecting the evaluation of $f$, the intrinsic error in $f'$, and our accuracy requirement. We shall leave aside the round-off error, which is machine dependent, i.e., dependent on the computer or other computational device you are using, and focus our attention on the intrinsic error of the approximation. This is the error which remains if we could evaluate the expression (334) exactly. In order to find this error we start from the Taylor series expansion of the function, which we assume to be analytical in the domain of interest, i.e.,
$$f(x + h) = f(x) + hf'(x) + \frac{1}{2}h^2 f''(x) + \frac{1}{6}h^3 f'''(x) + \frac{1}{24}h^4 f''''(x) + \cdots$$
$$= \sum_{n=0}^{\infty}\frac{1}{n!}h^n f^{(n)}(x). \qquad (335)$$
By subtraction of $f(x)$ and division by $h$ we readily find that
$$\frac{1}{h}(f(x + h) - f(x)) = f'(x) + \frac{1}{2}hf''(x) + \cdots = f'(x) + O(h). \qquad (336)$$
By this notation we mean that the leading term in the error is proportional to $h$, i.e., if $h$ is small enough the term proportional to $h$ will dominate the error. If $h$ is small then $h^2$ is smaller, $h^3$ is smaller still, etcetera. Thus we want numerical approximations with as high order as is needed to get sufficient accuracy. In the end the human algebraic labor tends to put an end to our ambitions for high accuracy, but it is certainly very important to know how to generate higher accuracy when needed. Fortunately, it is not difficult in this case to see how to evaluate $f'(x)$ to higher order in $h$. The first trick is to recognize the merit of a central rather than the forward difference method used above. Note that
$$f\left(x + \frac{h}{2}\right) - f\left(x - \frac{h}{2}\right) = hf'(x) + O(h^3). \qquad (337)$$
Thus we find
$$f'(x) = \frac{1}{h}\left(f\left(x + \frac{h}{2}\right) - f\left(x - \frac{h}{2}\right)\right) + O(h^2). \qquad (338)$$
We can get even higher order error by expressions for $f'(x)$ involving more function evaluations.

Example 1: High order derivative evaluation -
$$f'(x) = \frac{4}{3}\frac{1}{h}\left(f\left(x + \frac{h}{2}\right) - f\left(x - \frac{h}{2}\right)\right) - \frac{1}{6}\frac{1}{h}(f(x + h) - f(x - h)) + O(h^4). \qquad (339)$$
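The error orders in (336) and (338) are easy to observe numerically: halving $h$ should halve the forward-difference error but quarter the central-difference error. A minimal sketch with $f = \sin$ (the test function and step sizes are arbitrary choices):

```python
import math

# Compare the forward difference (334) with the central difference (338)
# for f = sin at x = 1, where the exact derivative is cos(1).
x, exact = 1.0, math.cos(1.0)

def forward(h):
    return (math.sin(x + h) - math.sin(x)) / h

def central(h):
    return (math.sin(x + h / 2) - math.sin(x - h / 2)) / h

e_f1, e_f2 = abs(forward(1e-2) - exact), abs(forward(5e-3) - exact)
e_c1, e_c2 = abs(central(1e-2) - exact), abs(central(5e-3) - exact)
assert 1.8 < e_f1 / e_f2 < 2.2    # O(h): halving h halves the error
assert 3.8 < e_c1 / e_c2 < 4.2    # O(h^2): halving h quarters the error
print("forward error ~ h, central error ~ h^2")
```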
Let us now consider higher order derivatives. The second derivative is most easily evaluated by a sequential application of the definition of a derivative,
$$f''(x) \approx \frac{1}{h}\left(f'\left(x + \frac{h}{2}\right) - f'\left(x - \frac{h}{2}\right)\right) \qquad (340)$$
$$\approx \frac{1}{h^2}(f(x + h) - 2f(x) + f(x - h)).$$
The order of the error follows from the substitution of the Taylor series expanded forms of $f(x + h)$ and $f(x - h)$, and we have
$$f''(x) = \frac{1}{h^2}(f(x + h) - 2f(x) + f(x - h)) + O(h^2). \qquad (341)$$
Note that again this is a central difference form of approximation, so we get a second order error where for a noncentral form we would expect a first order error. As before, we can get higher order accuracy by using more function evaluations. The third order derivative $f'''(x)$ can similarly be obtained by an iterative application of the definition of a derivative,
$$f'''(x) \approx \frac{1}{h}\left(f''\left(x + \frac{h}{2}\right) - f''\left(x - \frac{h}{2}\right)\right) \qquad (342)$$
$$\approx \frac{1}{h^2}\left(f'(x + h) - 2f'(x) + f'(x - h)\right)$$
$$\approx \frac{1}{h^3}\left(f\left(x + \frac{3h}{2}\right) - 3f\left(x + \frac{h}{2}\right) + 3f\left(x - \frac{h}{2}\right) - f\left(x - \frac{3h}{2}\right)\right).$$
Again the central difference form of this approximation ensures that the error is of second order, and we find
$$f'''(x) = \frac{1}{h^3}\left(f\left(x + \frac{3h}{2}\right) - 3f\left(x + \frac{h}{2}\right) + 3f\left(x - \frac{h}{2}\right) - f\left(x - \frac{3h}{2}\right)\right) + O(h^2). \qquad (343)$$
By the same method we can generate numerical derivatives of any order of derivation and any order of accuracy. The accuracy can be improved, e.g., by identifying the leading error term in the approximations above and subtracting the corresponding numerical derivative multiplied by the appropriate constant. Note, for example, that the central difference approximation for $f'(x)$ can be written as
$$f'(x) = \frac{1}{h}\left(f\left(x + \frac{h}{2}\right) - f\left(x - \frac{h}{2}\right)\right) - \frac{1}{24}h^2 f'''(x) + O(h^4). \qquad (344)$$
Inserting the expression for $f'''(x)$ from (343) we then get
$$f'(x) = \frac{1}{h}\left(-\frac{1}{24}f\left(x + \frac{3h}{2}\right) + \frac{9}{8}f\left(x + \frac{h}{2}\right) - \frac{9}{8}f\left(x - \frac{h}{2}\right) + \frac{1}{24}f\left(x - \frac{3h}{2}\right)\right) + O(h^4). \qquad (345)$$
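The fourth-order behaviour of (345) can be observed directly: halving $h$ should reduce the error by roughly a factor of 16. A small sketch with $f = \sin$ (test function and step sizes are arbitrary choices):

```python
import math

# Check that the four-point formula (345) is O(h^4) for f = sin at x = 1.
x, exact = 1.0, math.cos(1.0)

def d4(h):
    return (-math.sin(x + 1.5 * h) / 24 + 9 * math.sin(x + 0.5 * h) / 8
            - 9 * math.sin(x - 0.5 * h) / 8 + math.sin(x - 1.5 * h) / 24) / h

e1 = abs(d4(0.1) - exact)
e2 = abs(d4(0.05) - exact)
assert e1 / e2 > 12            # ~16 expected for a fourth order error
assert e2 < 1e-7
print("four-point formula converges at fourth order")
```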
12.2 Numerical Solution of ODE's

12.2.1 Direct Taylor Series Expansion Methods

Consider first the ordinary differential equation (ODE)
$$y' = f(x, y). \qquad (346)$$
Replacing $y'$ by the simplest form of numerical derivative in (334) we find that
$$\frac{1}{h}(y(x + h) - y(x)) = f(x, y(x)) + O(h), \qquad (347)$$
which yields
$$y(x + h) = y(x) + hf(x, y(x)) + O(h^2). \qquad (348)$$
This equation propagates the solution $y(x)$ from $x$ to $x + h$ at the cost of an error of order $h^2$. This propagation step can now be iterated to yield $y(x + 2h)$ as
$$y(x + 2h) = y(x + h) + hf(x + h, y(x + h)) + O(h^2). \qquad (349)$$
Note that in the second propagation step we have an error of order $h^2$ in $y(x + h)$ and in $f(x + h, y(x + h))$ if it is an analytical function of $x$ and $y$. The latter term is multiplied by $h$ so the additional error is of order $h^3$. We can now repeat the propagation step to generate the solution over a grid of points. Naturally we can vary the steplength as we go along. The error will grow in some way which depends on both our choice of method and on the function $f(x, y)$. Fortunately it will not always tend in the same direction. Error cancellation will happen to some extent. In the end the error growth remains a rather difficult aspect of numerical solutions of ODE's which needs to be checked in each application.

Let us now consider how we might improve the accuracy of the numerical solution of the first order ODE above. The obvious idea is to use the central difference definition of the derivative. Note that
$$y'(x) = \frac{1}{2h}(y(x + h) - y(x - h)) + O(h^2). \qquad (350)$$
Thus we can insert the relation for $y'$ from the ODE and obtain
$$y(x + h) = y(x - h) + 2hf(x, y(x)) + O(h^3). \qquad (351)$$
In order to use this type of propagation we need two values of $y$, $y(x - h)$ and $y(x)$, in order to generate the new value $y(x + h)$. This is no problem once the propagation is running, but will require a special starting procedure. A simple way to handle this is to use the lower order method above for the first step and then go over to the higher order central difference scheme.
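The two schemes (348) and (351), including the lower-order starting step, can be compared on a test problem with a known solution; a minimal sketch (the problem $y' = -y$, $y(0) = 1$ with solution $e^{-x}$ is an arbitrary choice):

```python
import math

# Euler propagation (348) versus the two-step central-difference scheme
# (351) for y' = -y, y(0) = 1; the central scheme is started with one
# Euler step, as suggested in the text.
def f(x, y):
    return -y

h, n = 0.01, 100                      # integrate from x = 0 to x = 1

# plain Euler, eq. (348)
y = 1.0
for i in range(n):
    y += h * f(i * h, y)
err_euler = abs(y - math.exp(-1.0))

# central difference, eq. (351), with an Euler starting step
y_prev, y_cur = 1.0, 1.0 + h * f(0.0, 1.0)
for i in range(1, n):
    y_prev, y_cur = y_cur, y_prev + 2 * h * f(i * h, y_cur)
err_central = abs(y_cur - math.exp(-1.0))

assert err_central < err_euler
print("central-difference error", err_central, "< Euler error", err_euler)
```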
Now we have seen the general character of the problem of solving ODE's by numerical means. We need to devise a propagation step with an error of as high order as needed and be prepared to construct a special start-up procedure to generate the information required for the propagation. We shall focus now on some general or particularly advantageous ways of propagating the solution. We begin with a general method based directly on the Taylor series expansion. Note that we can always write
$$y(x + h) = y(x) + hy'(x) + \frac{1}{2}h^2 y''(x) + \frac{1}{6}h^3 y'''(x) + \cdots. \qquad (352)$$
Thus if the ODE is of first order as discussed above, then it is natural to insert the equation for $y'(x)$ and obtain
$$y(x + h) = y(x) + hf(x, y(x)) + O(h^2) \qquad (353)$$
as above. If we want to increase the accuracy we need an expression for the next term in the Taylor series expansion. Such an expression can be obtained by differentiating the original ODE to get
$$y''(x) = \frac{d}{dx}f(x, y) = \frac{\partial}{\partial x}f(x, y) + y'(x)\frac{\partial}{\partial y}f(x, y)$$
$$= \frac{\partial}{\partial x}f(x, y) + f(x, y(x))\frac{\partial}{\partial y}f(x, y). \qquad (354)$$
Now we can write
$$y(x + h) = y(x) + hf(x, y(x)) + \frac{1}{2}h^2\left(\frac{\partial}{\partial x}f(x, y) + f(x, y(x))\frac{\partial}{\partial y}f(x, y)\right) + O(h^3). \qquad (355)$$
This method illustrates that one can start directly from the Taylor series expansion and increase the accuracy by generating expressions for higher order derivatives by differentiating the original ODE.
Example 2: Consider the ordinary differential equation
$$y' = x + y^2. \qquad (356)$$
In this case we obtain by differentiation
$$y'' = 1 + 2yy' = 1 + 2xy + 2y^3. \qquad (357)$$
Thus we can write
$$y(x + h) = y(x) + h(x + y^2(x)) + \frac{h^2}{2}(1 + 2xy(x) + 2y^3(x)) + O(h^3). \qquad (358)$$
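The benefit of the extra Taylor term in (358) over the plain first-order step can be seen numerically; a small sketch where a fine-step Euler run serves as the reference value (the initial value $y(0) = 0$ and the step sizes are arbitrary choices):

```python
# Second-order Taylor propagation (358) for y' = x + y**2, compared with
# plain Euler (348).
def step_taylor2(x, y, h):
    return y + h * (x + y * y) + 0.5 * h * h * (1 + 2 * x * y + 2 * y**3)

def step_euler(x, y, h):
    return y + h * (x + y * y)

def propagate(step, h, n):
    x, y = 0.0, 0.0                  # initial value y(0) = 0
    for _ in range(n):
        y = step(x, y, h)
        x += h
    return y

ref = propagate(step_euler, 1e-5, 100000)      # fine-step reference at x = 1
e_taylor = abs(propagate(step_taylor2, 0.01, 100) - ref)
e_euler = abs(propagate(step_euler, 0.01, 100) - ref)
assert e_taylor < e_euler
print("2nd order Taylor error", e_taylor, "< Euler error", e_euler)
```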
88
Consider now a second order ODE,
¸
00
= ,(r. ¸. ¸
0
). (359)
The corresponding natural propagating step is
¸(r +/) = ¸(r) +/¸
0
(r) +
1
2
/
2
,(r. ¸(r). ¸
0
(r)) +O(/
3
). (360)
This equation shows a dependence on both y(x) and y
0
(r). This means that
we should have initial information on these two functions and we must then
propagate both forward. Thus we complement the propagating equation for
y by the following propagating equation for y
0
.
¸
0
(r +/) = ¸
0
(r) +/,(r. ¸(r). ¸
0
(r)) +O(/
2
). (361)
Solving these two equations in tandem we can generate the numerical solution
of the second order ODE. Note that both y and its …rst derivative were needed
initially and had to be propagated forward.
We now go to a third order ODE,
y''' = f(x, y, y', y'').  (362)
The natural approximation based on the Taylor series expansion now becomes
y(x + h) = y(x) + h y'(x) + (1/2) h^2 y''(x) + (1/6) h^3 f(x, y(x), y'(x), y''(x)) + O(h^4).
Now we need to know initially and to propagate y, y' and y''. Thus we append
the propagating equations for y' and y'',
y'(x + h) = y'(x) + h y''(x) + (1/2) h^2 f(x, y(x), y'(x), y''(x)) + O(h^3).  (363)
y''(x + h) = y''(x) + h f(x, y(x), y'(x), y''(x)) + O(h^2).  (364)
The pattern is now clear. For an nth order ODE we need to know initially
and propagate all lower order derivatives including the function y itself. The
error in y(x) in the "natural approximation" is of order n + 1 in h and this
order decreases in unit steps as we proceed to the derivatives. Although we
shall not show this explicitly it is clear that the accuracy can be increased
by differentiating the ODE as shown in the first order case above.
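As a concrete check of this pattern, here is a minimal Python sketch (our own illustration, not one of the course programs) that propagates the second order ODE y'' = −y with the pair of equations (360) and (361). With y(0) = 1 and y'(0) = 0 the numerical solution should track cos x to the low order in h stated above.

```python
import math

def propagate(y, yp, h, nsteps):
    """March eqs. (360) and (361) for y'' = f(x, y, y') = -y."""
    for _ in range(nsteps):
        f = -y                          # the right-hand side f(x, y, y')
        # update y and y' together, both using values at the old point
        y, yp = y + h * yp + 0.5 * h * h * f, yp + h * f
    return y, yp
```

For example, 1000 steps of h = 0.001 carry the solution to x = 1, where y ≈ cos 1 and y' ≈ −sin 1.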
12.2.2 Runge-Kutta Methods
If you have understood the direct Taylor series expansion method described
above it may well seem as if the problem is solved in the sense that the algebra
required to produce a solution to desired order of accuracy is straightforward.
However, the direct Taylor methods are, with the exception of the central
difference method applied to the second order Newton's equations as we
shall see, rarely used. There is nothing wrong with these methods from the
theoretical point of view, but the necessary operations, e.g., the evaluation
of derivatives of the function f, are often inconvenient. Thus most of the
popular numerical methods for the solution of ODE's are what might be
called implicit Taylor series expansion methods. They are justified by and
reducible to the Taylor series expansion method but by various tricks the
necessary operations have been made more convenient. We shall now have
a look at one of the most popular such implicit methods, the Runge-Kutta
method.
The basic idea is to replace the evaluation of derivatives of f by additional
evaluations of f itself. Note that
f(x + Δx) = f(x) + Δx f'(x) + O((Δx)^2),  (365)
and it follows that
f'(x) = (1/Δx) (f(x + Δx) − f(x)) + O(Δx).  (366)
Thus the simplest first order equation
y' = f(x)  (367)
can be solved to an error of order h^3 by
y(x + h) = y(x) + h f(x) + (1/2) h^2 f'(x) + O(h^3)  (368)
         = y(x) + h f(x) + (1/2) h^2 (1/Δx) (f(x + Δx) − f(x)) + O(h^3).
Here we pick Δx to be a real number of the order of h. If we pick Δx to be
h then we find
y(x + h) = y(x) + (h/2) (f(x + h) + f(x)) + O(h^3).  (369)
We see that two evaluations of f have decreased the error by one order of h.
We can go on and improve the error by further function evaluations. Note
that the choice of Δx = h has given us an added advantage in that although
two values of f appear in the propagation equation one of them will be reused
in the next step. Thus the number of new function evaluations per step is
one except for the first step when it is two.
What happens if we have the more general case when f depends on both
x and y? Note that
f(x + Δx, y(x + Δx)) = f(x + Δx, y(x) + Δx y'(x)) + O((Δx)^2)  (370)
                     = f(x + Δx, y(x) + Δx f(x, y(x))) + O((Δx)^2).
Thus if we let a superscript prime indicate a total derivative with respect to x,
d/dx f(x, y(x)) = f'(x, y(x)),  (371)
then we get
f'(x, y(x)) = (1/Δx) (f(x + Δx, y(x) + k) − f(x, y(x))) + O(Δx),  (372)
with
k = Δx y'(x) = Δx f(x, y(x)).  (373)
The propagation equation can then be written
y(x + h) = y(x) + h f(x, y(x)) + (1/2) h^2 (1/Δx) (f(x + Δx, y(x) + k) − f(x, y(x))) + O(h^3).  (374)
Again we could pick Δx = h and get
y(x + h) = y(x) + (h/2) (f(x, y(x)) + f(x + h, y(x) + k)) + O(h^3).  (375)
Note that
y(x) + k = y(x) + h f(x, y(x)) = y(x + h) + O(h^2).  (376)
Thus we have inserted a lower order propagation solution for y(x + h) in
the higher order propagation equation. Thus the Runge-Kutta method is
essentially an iterative solution method.
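The iterative structure of eq. (375), predict y(x + h) with an Euler step and then average the slopes, is worth seeing in code. Below is a Python sketch (illustrative names, not from the course listings) of this second-order Runge-Kutta step, often called Heun's or the improved Euler method.

```python
def heun_step(f, x, y, h):
    """Eq. (375): trapezoidal slope average with the Euler predictor."""
    k = h * f(x, y)                      # lower order prediction, cf. (376)
    return y + 0.5 * h * (f(x, y) + f(x + h, y + k))
```

For y' = x + y with y(0) = 0 the exact solution is y = e^x − x − 1, and ten steps of h = 0.1 reproduce y(1) to a few parts in a thousand, consistent with the O(h^3) per-step error.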
There are many different Runge-Kutta methods all based on the same
idea of using multiple function evaluations to raise the accuracy (see the
Beta Handbook, Section 16.5). One commonly used scheme bringing the
error to order h^4 for first order ODE's is as follows:
y(x + h) = y(x) + (1/6) (k_1 + 2 k_2 + 2 k_3 + k_4),  (377)
k_1 = h f(x, y(x)),
k_2 = h f(x + h/2, y(x) + k_1/2),
k_3 = h f(x + h/2, y(x) + k_2/2),
k_4 = h f(x + h, y(x) + k_3).
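The classical fourth-order scheme (377) is short to implement. A Python sketch (the function names are our own):

```python
def rk4_step(f, x, y, h):
    """One step of the Runge-Kutta scheme (377)."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
```

Applied to y' = y with y(0) = 1 and h = 0.1, ten steps reproduce e to about six decimal places, a dramatic improvement over the lower order schemes at the cost of only four function evaluations per step.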
The Runge-Kutta methods can be applied to higher order ODE's and
to coupled sets of ODE's. There are also many other elegant and intricate
ways to solve ODE's. Normally one does not have to derive or program these
methods oneself. They can be found in program libraries and mathematical
programs such as Mathematica, Matlab, Mathcad and Maple. We shall
discuss such programs later in this course. Examples of programs solving
second order ODE's will be given in the next chapter on molecular dynamics
simulation.
12.3 Exercises:
1. Derive an expression for f''(x) in terms of function values such that the
error is of order h^4.
2. Derive propagating equations which yield a numerical solution of the
ODE y'' = b y + x^2 with the error of order h^4 per step of length h in x.
3. Verify explicitly for the ODE y' = x + y that the error in the Runge-Kutta
propagating equation (8.45) is of fifth order in h.
4. Obtain an estimate of f'(x) and f''(x) in terms of the function values
f(x − h/2), f(x) and f(x + h/2) to the highest possible accuracy. Show
your derivation and the order of the error in h.
13 Molecular Dynamics Simulation
One could say that Newton started the development of molecular dynamics
(MD) simulation when he proposed that systems and their dynamics, i.e.,
movement, were determined by potentials and corresponding forces according
to what we now call classical mechanics. His original insight has been
associated with the gravitational force acting on an apple falling from a tree
but his classical mechanics is now applied to objects of nearly all imaginable
types, from planets in motion around a star to atoms and molecules performing
their dynamics on microscopic time and length scales. The latter type
of dynamics is of particular interest to chemists. Unfortunately, classical
mechanics is not exactly but only approximately valid for atomic and molecular
motions and the accuracy of the "classical mechanical approximation"
reaches its practical boundary in the middle of the field of chemistry. Thus
we can understand most of the properties of the macroscopic phases on the
basis of classical mechanics but electronic structure and dynamics as well as
vibrational dynamics of molecules and solids must be described by quantum
mechanics. Given the development of fast computers the application of classical
mechanics in the form of molecular dynamics, as we shall soon describe,
has become an extremely powerful tool which is revolutionizing the field of
chemistry. In fact, the use of MD simulation is so pervasive that one could
argue that it exceeds the scope motivated by its validity and timescale limitations.
One reason for the great popularity of MD simulation is that it is
relatively easy to implement on our ever more powerful computers. Thus MD
simulation can be applied to problems of a complexity beyond all other methods
by chemists who do not require lengthy training to grasp the essential
facts and features of the method. The other major reason for its popularity
is that there are so many important unresolved problems seemingly out of
range for all other methods that MD simulation is applied whether or not it
is completely justified. The hope is that one will always learn something of
value. So far this hope seems well justified.
So what is molecular dynamics simulation? It is based on a number of
propositions which might be summarized as follows:
1. Potentials can be found which accurately describe the forces on the
particles which make up the relevant system.
2. Classical mechanics describes the equilibrium and dynamical properties
of the relevant system to sufficient accuracy.
3. Where the actual physical system is unmanageably large a small sample
of the system still retains the essential behaviour of the actual system.
The sample system can be chosen small enough to be tractable for MD
simulation.
4. Relaxation processes and relevant dynamical phenomena occur on a
time scale accessible to MD simulation.
These propositions are the subject of many ifs and buts, and clever tricks
have been and are being developed to extend their validity. More problems
are coming within range of the MD method every day, not least due
to the continual improvement of computer capacity. Our purpose here is
to illustrate the MD method in its simplest forms, leaving the large and
complicated MD programs for later.
13.1 Simplest Case - One-Dimensional Oscillation
We shall begin by simulating the motion of a one-dimensional (1D) oscillator.
The system is then defined by a mass m and a potential V(x). The motion
is completely described by the time development of the position and the
velocity, i.e., by x(t) and v(t). These quantities will form what we call the
trajectory described by the system as it moves with time. The main task
in MD simulation is to obtain this trajectory for some initial condition or
ensemble of initial conditions. The trajectory is found by solving Newton's
equation,
d^2 x(t)/dt^2 = −(1/m) dV(x)/dx = (1/m) F(x(t)).  (378)
When x(t) is known the velocity can be found by differentiation,
v(t) = d x(t)/dt.  (379)
Thus we have to solve a second order ODE to obtain x(t). Note that the
force is in this case, as almost always, only dependent on the dependent
variable x(t) itself. The simplest propagating equation is obtained by the
direct Taylor series expansion method as follows,
x(t + h) = x(t) + h v(t) + (h^2/2m) F(x(t)) + O(h^3).  (380)
Note that h is now a timestep. The simplest equation for the velocity is
v(t + h) = v(t) + (h/m) F(x(t)) + O(h^2).  (381)
However, recalling the simplest Runge-Kutta method as in equation (8.43)
we can improve the accuracy at essentially no additional cost in computation
by using the average force between time t and t + h, i.e.,
v(t + h) = v(t) + (h/2m) (F(x(t)) + F(x(t + h))) + O(h^3).  (382)
This form of the velocity equation is nicely symmetric as well as of the same
accuracy as the equation for the position. Together these two equations form
a very stable and accurate method of solving Newton's equations which goes
by the name of the "velocity Verlet method" and is very commonly used in
the field of MD simulation.
A program implementing the velocity Verlet method for a 1D oscillator
with m = 1 and a potential given by
V(x) = a x^2 + b x^4 + c x^6  (383)
is included in an appendix to this chapter. The program is written in Fortran
77. Let us consider what can be obtained from such a program. Most
obviously we can obtain dynamical information about the oscillation directly
from the trajectory. However, the MD method is often used to obtain
information about equilibrium properties. In this case we must use the so-called
ergodic hypothesis:
• Equilibrium properties in the microcanonical ensemble can be obtained
as long time trajectory averages of the corresponding property.
Suppose for example that we want to know the average potential energy
of the oscillator ⟨V⟩ as a function of the energy E. It is obtained as
⟨V⟩_E = lim_{t→∞} (1/t) ∫_0^t dt V(x(t)).  (384)
Even with a modern computer we can't run forever so we have to assume
that the integral converges reasonably rapidly. This is where the time scale
problem enters but for a simple 1D oscillator we should not have any difficulty
finding a converged value for ⟨V⟩_E. Note that since Newtonian dynamics
conserves the energy E,
E = (m/2) v^2 + V(x),  (385)
we will generate so-called microcanonical averages if the ergodic hypothesis
turns out to be valid.
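A compact Python analogue of the procedure just described (the Fortran 77 program in the appendix does the same thing; the harmonic choice V = x^2, F = −2x here is only for checking) propagates eqs. (380) and (382) and accumulates the trajectory average of V. For a harmonic oscillator the virial theorem gives ⟨V⟩_E = E/2, which the long time average should approach if the run covers many periods.

```python
def verlet_average(x, v, dt, nsteps):
    """Velocity Verlet for m = 1, V(x) = x**2 (so F(x) = -2x).
    Returns the final total energy and the time average of V."""
    def force(q):
        return -2.0 * q
    pe_sum = 0.0
    f = force(x)
    for _ in range(nsteps):
        pe_sum += x * x                        # accumulate V(x(t))
        xn = x + v * dt + 0.5 * dt * dt * f    # eq. (380)
        fn = force(xn)
        v = v + 0.5 * dt * (f + fn)            # eq. (382), averaged force
        x, f = xn, fn
    return 0.5 * v * v + x * x, pe_sum / nsteps
```

Starting from x, v = 0, 1 (so E = 0.5) the energy stays constant to high accuracy and ⟨V⟩ converges toward 0.25.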
13.2 Two-Dimensional Oscillation
Let us now consider the case of two coupled oscillators. The masses will
again be taken to be unity. The potential will be
V(x, y) = a x^2 + b x^4 + c y^2 + d y^4 + g x^2 y^2.  (386)
The equations of motion will be
dx(t)/dt = v_x(t),  (387)
dy(t)/dt = v_y(t),
dv_x(t)/dt = F_x(x(t), y(t)) = −∂V(x, y)/∂x,
dv_y(t)/dt = F_y(x(t), y(t)) = −∂V(x, y)/∂y.
In our special case we have
F_x(x, y) = −(2 a x + 4 b x^3 + 2 g x y^2),  (388)
F_y(x, y) = −(2 c y + 4 d y^3 + 2 g x^2 y).
If we apply the velocity Verlet method we get the following propagating
equations,
x(t + h) = x(t) + h v_x(t) + (1/2) h^2 F_x(x(t), y(t)),  (389)
y(t + h) = y(t) + h v_y(t) + (1/2) h^2 F_y(x(t), y(t)),
v_x(t + h) = v_x(t) + (h/2) (F_x(x(t), y(t)) + F_x(x(t + h), y(t + h))),
v_y(t + h) = v_y(t) + (h/2) (F_y(x(t), y(t)) + F_y(x(t + h), y(t + h))).
The error is of order h^3 in all the propagating equations. Note that the
velocity Verlet method is very straightforwardly extended to two- or higher-dimensional
systems, a very endearing trait since MD simulations are often
carried out for many-particle systems with 1000 dimensions or more.
We might still be interested in calculating the microcanonical potential
energy average for the two-dimensional oscillator. It is worth recalling that
the evaluation of such microcanonical averages as long time trajectory averages
is based on the ergodic hypothesis which may not hold for a given
system. If, e.g., we were to set g = 0 to break the coupling between the two
oscillators then the initial energy in each oscillator would be conserved and
our system would be nonergodic. The question is whether a coupling term of
the type used here, g x^2 y^2, is sufficient to make the system ergodic. A listing
of a program carrying out the MD simulation of our two coupled oscillator
system is included in an appendix.
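Before turning to that listing, here is a condensed Python version of the propagating equations (389); the parameter values a = b = c = d = g = 1 used in the test are just an example of our own. The main diagnostic is again conservation of the total energy.

```python
def md2osc(x, y, px, py, dt, nsteps, a=1.0, b=1.0, c=1.0, d=1.0, g=1.0):
    """Velocity Verlet, eqs. (389), for the potential (386); unit masses."""
    def fx(x, y):
        return -(2*a*x + 4*b*x**3 + 2*g*x*y*y)   # eq. (388)
    def fy(x, y):
        return -(2*c*y + 4*d*y**3 + 2*g*x*x*y)
    for _ in range(nsteps):
        f1, f2 = fx(x, y), fy(x, y)
        xn = x + px*dt + 0.5*dt*dt*f1
        yn = y + py*dt + 0.5*dt*dt*f2
        px += 0.5*dt*(f1 + fx(xn, yn))   # average of old and new force
        py += 0.5*dt*(f2 + fy(xn, yn))
        x, y = xn, yn
    pot = a*x*x + b*x**4 + c*y*y + d*y**4 + g*x*x*y*y
    return x, y, px, py, 0.5*(px*px + py*py) + pot
```

With x, y, px, py = 0, 0.2, 1, 0 the initial energy is 0.5 + 0.04 + 0.0016 = 0.5416, and a long run conserves it closely.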
13.2.1 A One-Dimensional Fluid
In order to illustrate the simulation of an infinite system without unduly
burdening our discussion with technical details we shall consider a one-dimensional
fluid. The particles will be of unit mass and the interaction
is described by a pair-potential, i.e., a potential acting between each set of
two particles in the fluid. For reasons of tradition and convenience we shall
let the pair potential be of Lennard-Jones form,
φ(r) = 4ε ((σ/r)^12 − (σ/r)^6).  (390)
Here r is the particle separation, ε is the well depth and σ is the separation
where the potential becomes positive. In reduced units such that ε and σ are
unity the potential becomes φ(x) = 4 (x^−12 − x^−6) and has the shape shown
in the figure. Note the very rapid rise in φ(x) as x decreases from unity.
This rise corresponds to the Pauli repulsion between closed shell atoms or
molecules. For larger separations the potential is negative, approaching zero
as r^−6.
The complete potential for a one-dimensional fluid is then
V(x_1, x_2, x_3, ...) = Σ_i Σ_{j<i} φ(x_ij) = (1/2) Σ_i Σ_{j≠i} φ(x_ij),  (391)
[Figure: the reduced Lennard-Jones pair potential φ(x) = 4(x^−12 − x^−6) plotted for x between 1.0 and 2.0.]
where x_ij is the separation |x_i − x_j| between particles i and j. The force on
particle i is given by
F_i(x_1, x_2, ...) = −∂V(x_1, ...)/∂x_i = Σ_{j≠i} (−∂φ(x_ij)/∂x_i)  (392)
 = Σ_{j<i} 24 (2 x_ij^−13 − x_ij^−7) − Σ_{j>i} 24 (2 x_ij^−13 − x_ij^−7).
The sum should, in principle, go over the infinite or nearly infinite number
of other particles in the fluid. This can, of course, not be managed so one
uses a trick called "periodic boundary conditions". One takes a large but
manageable number of particles N and assigns them to an interval [0, L] which
has a length L related to the fluid density n by
n = N/L.  (393)
Around this interval are placed replicas defined by periodicity, i.e., a particle
at x has replicas at x ± L, x ± 2L, x ± 3L, ... The effect of this is that the
forces on our N particles pick up contributions from these replicas which serve
to represent the rest of the infinite fluid. Thus the N particles experience
a more realistic environment, if we want to know about properties of an
infinite system, than a set of N particles constrained to the interval [0, L]
by hard walls. In fact, one normally truncates the interaction potential φ(x)
at x = R < L for two reasons: i) to avoid time consuming summations in
the force evaluations and ii) because one does not want a particle to interact
with its own image. I do not believe the second reason is significant but it
is nevertheless often referred to in the simulation literature. The truncation
of the potential, i.e., setting it to zero for x > R, causes problems because
the potential actually used is then not analytical. It has a step at x = R
which generates a δ-function in the force at that point. Accounting for it in
the solution of the equations of motion is possible but technically messy. For
short ranged potentials such as the Lennard-Jones potential the problem can
be dealt with in a very summary fashion while for long ranged potentials it
is much more of a problem.
Ignoring the technical problems of potential truncation, the propagating
equations of the velocity Verlet method can be written as
x_i(t + h) = x_i(t) + h v_i(t) + (1/2) h^2 F_i(x_1(t), x_2(t), ...) + O(h^3),  (394)
v_i(t + h) = v_i(t) + (1/2) h [F_i(x_1(t), ...) + F_i(x_1(t + h), ...)] + O(h^3),  (395)
for i = 1, 2, ..., N. Apart from the summations hidden behind our notation
for the force F_i the propagation equations are no more difficult to deal with
than in our smaller simulations above. And the computers are good at
repetitive summations. It is possible to simulate fluids of thousands of particles
even on a personal computer. If the aim is to evaluate simple properties
such as the average potential energy per particle, the pressure, specific heat
or other bulk thermodynamic property then the program may only be a few
hundred lines long. We shall see examples of such programs later although
using Monte Carlo rather than MD propagation.
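The force sum (392) together with the minimum-image form of periodic boundary conditions can be sketched in a few lines of Python (our own illustration; the course programs use Fortran 77). A cheap consistency check is Newton's third law: the total force on the N particles must vanish, and a pair at the reduced separation 2^(1/6), the minimum of φ, feels no force.

```python
def lj_forces(xs, L):
    """Force on each particle of a 1-D reduced Lennard-Jones fluid,
    eq. (392), using the nearest periodic image of each pair (no cutoff)."""
    n = len(xs)
    f = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            dx = xs[i] - xs[j]
            dx -= L * round(dx / L)               # minimum-image convention
            r = abs(dx)
            mag = 24.0 * (2.0 * r**-13 - r**-7)   # -dphi/dr in reduced units
            fij = mag if dx > 0 else -mag         # force on i from j
            f[i] += fij
            f[j] -= fij                           # Newton's third law
    return f
```

In a production code the double loop would be truncated at the cutoff R discussed above; the sketch keeps all pairs for clarity.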
13.3 Exercises:
1. Draw a box-diagram (boxes containing specification of tasks done, connected
with lines showing the order in which work is done) illustrating
how the one-dimensional oscillation is followed by the MD simulation
program. Write out below the explicit form of the propagating equations
for the case when the potential is V(x) = a x^2 + b x^4 + c x^6.
2. The original Verlet algorithm for the propagation in molecular dynamics
simulation is in one dimension of the form
x(t + h) = 2 x(t) − x(t − h) + F(t; x) h^2/m.
a) Derive this form of propagation from Newton's equation. b) Show
that the error is of order h^4. c) Show that this propagation is of time
reversible form. d) Point out any disadvantages of this method and
suggest remedies.
3. Another interaction potential often used in MD simulations to represent
pairwise interaction among particles is the Morse potential
φ(r) = D (exp(−2γ(r − r_e)) − 2 exp(−γ(r − r_e))).
Describe how to carry out an MD simulation of a one-dimensional chain
of particles interacting by Morse pair-potentials. The level of detail
should be as for the chain of Lennard-Jones interacting particles above.
13.4 Time development of a 1D oscillator - NUMSIM.FOR
Note that in the program listing below normal mathematical notation is used
rather than F77 notation whenever convenient.

C     This program calculates a trajectory for an oscillator in 1D.
C     The oscillator potential is a x^2 + b x^4 + c x^6. The mass is 1.
      program numsim
      implicit real*8 (a-h,o-z)
      pot(x) = a x^2 + b x^4 + c x^6
      force(x) = -2 a x - 4 b x^3 - 6 c x^5
      e(x, v) = v^2/2 + pot(x)
      write(*,*) 'The potential is a x^2 + b x^4 + c x^6. Enter a, b, c = '
      read(*,*) a, b, c
      write(*,*) 'Input initial position and velocity x, v = '
      read(*,*) x, v
      ee = e(x, v)
      write(*,*) 'Mass is set to 1. For a = 1, b = c = 0 the frequency is 1.'
      write(*,*) 'The timestep should be < 1. Set the timestep dt = '
      read(*,*) dt
      write(*,*) 'Set the number of timesteps to be taken, nt = '
      read(*,*) nt
C     Calculate the trajectory by the velocity Verlet method.
C     Evaluate the time average of the potential energy, peb, and the
C     average kinetic energy, ekb.
      peb = 0
      ekb = 0
      edev = 0
C     Time propagation according to the velocity Verlet method.
      do 10 i = 1, nt
        peb = peb + pot(x)/nt
        ekb = ekb + v^2/(2 nt)
        xnew = x + v dt + force(x) (dt)^2/2
        vnew = v + (force(x) + force(xnew)) dt/2
        x = xnew
        v = vnew
        edev = edev + dabs(e(x, v) - ee)/nt
 10   continue
      write(*,*) 'Total time, energy, average pe, average ke '
      write(*,20) nt*dt, ee, peb, ekb
 20   format(4d16.6)
      write(*,*) 'Final energy, average energy deviation'
      write(*,30) e(x, v), edev
 30   format(2d16.6)
      stop
      end
13.5 Some results for an anharmonic oscillator such that V(x) = x^2 + x^4 + x^6
Initial position and velocity are x, v = 0, 1 and total elapsed time is 100
seconds = nt*dt.

dt      nt      Final energy      Average energy deviation
0.1     1000    0.49978D+00       0.104767D-02
0.01    10000   0.500000D+00      0.104033D-04
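The factor-of-100 drop in the average energy deviation when dt goes from 0.1 to 0.01 is the expected dt^2 behaviour of the velocity Verlet scheme. A Python rendering of the same experiment (same potential, same initial condition x, v = 0, 1, unit mass) reproduces the scaling:

```python
def edev(dt, nt):
    """Average |E(t) - E(0)| for V(x) = x**2 + x**4 + x**6, m = 1,
    starting from x, v = 0, 1, propagated by velocity Verlet."""
    def pot(q):
        return q*q + q**4 + q**6
    def force(q):
        return -2*q - 4*q**3 - 6*q**5
    x, v = 0.0, 1.0
    e0 = 0.5 * v * v + pot(x)
    dev = 0.0
    for _ in range(nt):
        f = force(x)
        xn = x + v * dt + 0.5 * dt * dt * f
        v += 0.5 * dt * (f + force(xn))
        x = xn
        dev += abs(0.5 * v * v + pot(x) - e0) / nt
    return dev
```

Running edev(0.1, 1000) and edev(0.01, 10000) gives deviations whose ratio is close to (0.1/0.01)^2 = 100, as in the table.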
14 Time development of two anharmonically coupled oscillators - MD2OSC.FOR

      program md2osc
C     This program calculates a trajectory for two oscillators with
C     anharmonic coupling. The particle masses are 1.
C     The oscillator potential is a x^2 + b x^4 + c y^2 + d y^4 + g x^2 y^2.
      implicit real*8 (a-h,o-z)
      pot(x, y) = a x^2 + b x^4 + c y^2 + d y^4 + g x^2 y^2
      fx(x, y) = -2 a x - 4 b x^3 - 2 g x y^2
      fy(x, y) = -2 c y - 4 d y^3 - 2 g x^2 y
      e(x, px, y, py) = px^2/2 + py^2/2 + pot(x, y)
      write(*,*) 'The potential is a x^2 + b x^4 + c y^2 + d y^4 + g x^2 y^2'
      write(*,*) 'Enter a, b, c, d, g = '
      read(*,*) a, b, c, d, g
      write(*,*) 'Input initial positions and momenta x, px, y, py = '
      read(*,*) x, px, y, py
      ee = e(x, px, y, py)
      write(*,*) 'For V = x^2 + y^2 the frequencies are 1. The timestep should be < 1. Set dt = '
      read(*,*) dt
      write(*,*) 'Set the number of timesteps nt = '
      read(*,*) nt
C     Calculate the trajectory by velocity Verlet propagation. Evaluate
C     the time average of the potential energy peb and the average
C     kinetic energy ekb. Also evaluate the ratio average(px^2)/average(py^2).
      peb = 0.d0
      ekb = 0.d0
      edev = 0.d0
      px2 = 0.d0
      py2 = 0.d0
      do 10 i = 1, nt
        peb = peb + pot(x, y)/nt
        ekb = ekb + (px^2 + py^2)/(2 nt)
        px2 = px2 + px^2/nt
        py2 = py2 + py^2/nt
        xnew = x + px dt + fx(x, y) (dt)^2/2
        ynew = y + py dt + fy(x, y) (dt)^2/2
        px = px + dt (fx(x, y) + fx(xnew, ynew))/2
        py = py + dt (fy(x, y) + fy(xnew, ynew))/2
        x = xnew
        y = ynew
        edev = edev + dabs(e(x, px, y, py) - ee)/nt
 10   continue
      write(*,*) 'Total time, energy, average pe, average ke '
      write(*,20) nt*dt, ee, peb, ekb
 20   format(4d16.6)
      write(*,*) 'Final energy, average energy deviation'
      write(*,30) e(x, px, y, py), edev
 30   format(2d16.6)
      write(*,*) 'The ratio < px^2 > / < py^2 > is '
      write(*,40) px2/py2
 40   format(2x,'Average px^2 / Average py^2 = ',d16.6)
      stop
      end
15 Numerical Integration
Integration is the inverse of differentiation. Both of these operations are essential
tools of applied mathematics and of chemistry. We shall consider now
the basic theory of numerical integration and a sampling of the most popular
methods used. The discussion will be confined here to one dimensional integration.
In the subsequent chapter we will discuss the Monte Carlo method
of numerical integration which is particularly suited to high dimensional integrals.
Let us focus our attention on the integral
I(f; a, b) = ∫_a^b dx f(x).  (396)
We will assume that f(x) is an analytical function in the interval [a, b].
The exact value of the integral can be obtained if the primitive function
corresponding to f(x), i.e.,
F(x; c) = ∫_c^x dx f(x),  (397)
can be found. We then get
∫_a^b dx f(x) = F(b; a) = F(b; c) − F(a; c).  (398)
Here c is a real number which is undetermined. The latter form of the exact
integral is the one we generally use since it allows us to use any primitive
function F(x) satisfying
dF(x)/dx = f(x)  (399)
in the interval of integration [a, b]. Note now that F(x; a) satisfies a first
order ODE of the form above with the initial value F(a) = 0. Thus we
can use all our numerical methods of solving first order ODE's to obtain
a numerical estimate of a one dimensional integral of this type. Therefore
we have already quite a rich collection of methods available for numerical
integration.
Let us try to be systematic. Just as was the case for differentiation and
ODE's, the Taylor series expansion must be the starting point of our theory
of numerical integration. We have
f(x + h) = f(x) + h f'(x) + (1/2) h^2 f''(x) + (1/6) h^3 f'''(x) + ...,  (400)
where h is again a small increment in x. By direct integration we then find
that
F(x + h; c) = F(x; c) + ∫_0^h ds f(x + s)
            = F(x; c) + h f(x) + (1/2) h^2 f'(x) + (1/6) h^3 f''(x) + (1/24) h^4 f'''(x) + ...  (401)
Thus, at the cost of evaluating derivatives of f(x), we can generate a stepwise
evaluation of the integral to any order of accuracy desired. In this sense the
theory of numerical integration is extremely simple. However, the evaluation
of derivatives of f(x) of higher order may be difficult or tedious. Just as
in the case of the Runge-Kutta methods for ODE's we may want to replace
the derivatives by additional function evaluations. For greatest convenience
these function evaluations should occur at points inside the interval [x, x + h].
Otherwise the function evaluations will spill out of the full range [a, b] of the
integral. The simplest and lowest order integration step is
F(x + h; c) = F(x; c) + h f(x) + O(h^2).  (402)
Next we want to include the term to order h^2 in the Taylor series expansion
by function evaluation. Our experience with central difference schemes for
numerical differentiation suggests that this can be done as follows,
F(x + h; c) = F(x; c) + (1/2) h (f(x) + f(x + h)) + O(h^3).  (403)
Taylor series expansion of f(x + h) shows that this is correct as expected.
Interestingly, there is another way to accomplish the same thing. We can
stick with one function evaluation but place it at the step midpoint,
F(x + h; c) = F(x; c) + h f(x + h/2) + O(h^3).  (404)
This looks like a better idea since we need only one function evaluation
rather than two in (10.8). However, the two function values in (10.8) are each
reused once so there is only one new function evaluation for each step except
for the very first. At any rate it is worth remembering that additional function
evaluations can be replaced by clever placement of the points of evaluation.
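The trade-off between (10.8) and (10.9) is easy to probe numerically. The Python sketch below (our own illustration) builds the composite trapezoidal and midpoint rules from these steps; for a smooth integrand the midpoint rule's error is roughly half the trapezoidal error and of opposite sign.

```python
def trapezoid(f, a, b, n):
    """Composite form of the trapezoidal step (403) with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def midpoint(f, a, b, n):
    """Composite form of the midpoint step (404) with n subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))
```

For the convex integrand e^x on [0, 1] the trapezoidal rule overestimates and the midpoint rule underestimates, with about half the error magnitude.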
It is not difficult to go to higher order. Let us propose to use the three
function values f(x), f(x + h/2), f(x + h). By symmetry the coefficients in
front of the two end values must be the same, p. The middle function value
must then have coefficient 1 − 2p since the coefficients must add up to unity.
Thus we get
F(x + h; c) = F(x; c) + h [p f(x) + (1 − 2p) f(x + h/2) + p f(x + h)] + O(h^4).  (405)
Taylor series expansion of f(x + h/2) and f(x + h) shows that p should be 1/6,
i.e.,
F(x + h; c) = F(x; c) + (h/6) [f(x) + 4 f(x + h/2) + f(x + h)] + O(h^4).  (406)
This is a very good integration step which leads to the following expression
for the full integral
I(f; a, b) ≅ ∫_a^b dx f(x) = [(h/6) Σ_{n=0}^{2N} (3 + (−1)^{n+1}) f(a + n h/2)] − (h/6) [f(a) + f(b)],  (407)
where h = (b − a)/N.
The Beta Handbook (Section 16.4) gives a number of useful integration
methods for one-dimensional integrals. It also gives them names commonly
used in the mathematical literature. Thus (10.9) is called the midpoint rule,
(10.8) is called the trapezoidal rule and the scheme recommended above in
(10.12) is called Simpson's rule.
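In code, the composite formula (407) is a one-liner plus the end correction. The Python sketch below follows that formula literally; since Simpson's rule is exact for cubic polynomials, integrating x^3 gives a sharp test.

```python
def simpson(f, a, b, n):
    """Composite Simpson's rule in the form of eq. (407): h = (b - a)/n,
    the integrand is sampled at the 2n + 1 points a + k*h/2, and the two
    endpoint weights are corrected by -h/6 each."""
    h = (b - a) / n
    s = sum((3 + (-1) ** (k + 1)) * f(a + k * h / 2) for k in range(2 * n + 1))
    return h / 6 * s - h / 6 * (f(a) + f(b))
```

The alternating weight 3 + (−1)^(k+1) produces the familiar 1, 4, 2, 4, ..., 4, 1 pattern once the endpoint correction is applied.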
15.1 Exercises:
1. Verify explicitly the validity of the result (10.12), i.e., Simpson's rule.
Write out the terms in the expression on the right hand side explicitly
for the case N = 3 and show that the contribution from each interval
is treated as in (10.11).
2. Show that it is possible to evaluate the contribution
F(x + h; c) − F(x; c) = ∫_x^{x+h} ds f(s)
to 4th order in h with only two function evaluations in the interval
[x, x + h].
3. Derive a numerical integration method based on the four function values
at x, x + h/3, x + 2h/3, x + h such that the error in the integration from
x to x + h is of order h^5.
16 Monte Carlo Simulation
We have seen in the previous chapter how to evaluate one dimensional integrals.
The methods we developed required evaluation of the integrand on a
lattice of points followed by summation over the function values multiplied
by an integration step length and some weighting factor determined by the
numerical integration scheme selected. If the function varies substantially
over the interval of integration we must use a large number of points to get
good accuracy. Let's say that we need typically a hundred points. Although
we shall not go into the details it should be clear that these methods can
be extended to two dimensional and higher dimensional integrals. Unfortunately
the numerical problem becomes much harder in higher dimensions.
Suppose we have a two dimensional integral over a rectangular domain and
the integrand varies in both dimensions about as much as in a typical one
dimensional integral. Then we would need 100 × 100 = 10000 points in
the grid of x, y-values where the integrand is to be evaluated in order to
produce an integral of accuracy comparable to that in the one dimensional
case. However, in chemistry we often want to evaluate integrals of dimension
1000 or more, e.g., in the process of calculating thermodynamic properties
of fluids and larger molecules. The number of points in a grid which should
yield an accuracy of the integral comparable to that in the one dimensional
case would be of the order 10^2000 which is hopelessly out of range for any
computer now available or in sight. There is hardly any point to quibble
about the most effective of our normal grid methods. We need to think in
new directions.
16.1 The Global Monte Carlo Method
We shall seek a radically new approach to integration by using two new tools:
statistics and dynamics. We begin here by considering the statistical tool.
Consider the problem of holding an election in a country like Sweden. It
takes an enormous effort to conduct such an election which can be regarded
as a kind of gigantic integration. But we know that the pollsters can predict
the outcome of the election rather well. They use a random sample of some
3000 or so voters to predict the result of an election in an electorate of about
4 million voters. If it were not for the problem that a sampled voter may
respond differently than a voter in the real election the pollsters would be
much more accurate. The key to this method is that the sample really is
random. Any bias in the sample can significantly reduce the accuracy.
In the global Monte Carlo method we use the method of the random draw
of the pollster to determine the average value of the integrand over a domain
of known size. Let the integral be defined by the notation
∫ dx_1 ... ∫ dx_N f(x_1, ..., x_N) = ∫_D dΓ f(Γ),  (408)
where Γ summarizes all coordinates and D denotes a domain of integration.
The integral can now be written as
∫_D dΓ f(Γ) = ⟨f⟩_D ∫_D dΓ,  (409)
where
⟨f⟩_D = ∫_D dΓ f(Γ) / ∫_D dΓ.  (410)
We shall assume that we can evaluate the area of the domain, ∫_D dΓ. If the
natural definition of the domain is too difficult we can always extend the
domain so that the area can be evaluated. The function should then be
given the value zero in all of the added area. We are then left to estimate the
average of the integrand ⟨f⟩_D. This is what the pollster is good at. We can
do it by drawing a random sample of coordinate vectors {Γ_i}_{i=1}^N and setting
⟨f⟩_D ≅ ⟨f⟩_D^(N) = (1/N) Σ_{i=1}^N f(Γ_i).  (411)
How large should N be? This depends entirely on the nature of f(Γ)
and the accuracy requirement. A very practical approach is to calculate the
average for a sequence of increasing N-values and observe the convergence
of the average with N. This will make it possible to stop the calculation as
soon as the accuracy seems sufficient. Naturally the statistical estimate will
always entail a risk that the sample is unrepresentative in some way but by
running on for larger N confidence can be built up to any desired level. A
huge advantage of this method is that it will start to produce reasonable if
not accurate values for the integral even for quite small sample size N. If we
have 1000 dimensions we may try N = 1000000. If we go back to the grid
methods we must have at least two points in each dimension and 2^1000 is a
large number, too large already for our computational power.
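The estimator (411) takes only a few lines. In the Python sketch below (our own illustration) the domain D is taken to be the unit hypercube, so that ∫_D dΓ = 1 and ⟨f⟩_D is the integral itself. As a test, the integral of f(Γ) = x_1 + ... + x_10 over [0, 1]^10 is exactly 5.

```python
import random

def global_mc(f, dim, n, seed=0):
    """Global Monte Carlo average <f>_D, eq. (411), over D = [0,1]**dim."""
    rng = random.Random(seed)   # fixed seed keeps the example reproducible
    total = 0.0
    for _ in range(n):
        gamma = [rng.random() for _ in range(dim)]   # one random draw in D
        total += f(gamma)
    return total / n
```

With 20000 samples the statistical error of this ten-dimensional estimate is already below one percent, while a two-point grid in ten dimensions would need 2^10 = 1024 points just to get started.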
16.2 The Metropolis Monte Carlo Method
The weakness of the global Monte Carlo method shows up when we have a very ill-conditioned function f(\mathbf{x}). This is the case, e.g., when we consider the configuration integral part of the partition function for a dense fluid. What happens is that f(\mathbf{x}) is nearly always close to zero because random placement of particles produces overlap of hard cores and unphysically high potential energies. The integral is then dominated by contributions from a minuscule subdomain of D which we may not even find by a random sample. To surmount this problem we shall use a clever trick. We shall start from a reasonable point \mathbf{x}_1 and let a type of diffusional dynamics generate the subsequent vector coordinates in the chain \mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3, \ldots in such a way as to search out the important regions of the domain D.
Consider the problem of calculating the average potential energy of a fluid. According to statistical mechanics we have

\langle V \rangle_T = \frac{\int_D d\mathbf{x}\, e^{-\beta V(\mathbf{x})} V(\mathbf{x})}{\int_D d\mathbf{x}\, e^{-\beta V(\mathbf{x})}},   (412)

where V(\mathbf{x}) is the potential energy of the fluid in the configuration \mathbf{x}, \beta is 1/k_B T and T is the absolute temperature. The domain D is defined by the fact that any particle can be anywhere in the available volume V. The "area" of D is then V^N. The big problem is that the Boltzmann factor \exp(-\beta V(\mathbf{x})) is nearly zero at nearly all points in D for a dense fluid. In the Monte Carlo method we handle this problem by generating a Markov chain of \mathbf{x}-values starting from one point \mathbf{x}_1 which has a reasonably low energy V(\mathbf{x}_1). This chain searches out important regions in the domain D.
The Monte Carlo method is formulated in terms of the probability density \rho(\mathbf{x}) and is constructed primarily to allow averages like

\langle A \rangle = \int_D d\mathbf{x}\, \rho(\mathbf{x}) A(\mathbf{x})   (413)

to be efficiently evaluated. Here A(\mathbf{x}) is a property such as, e.g., the potential energy V(\mathbf{x}) above. The Markov chain is generated as follows:

1. Choose a "reasonable" initial point \mathbf{x}_1.

2. Make a random displacement in \mathbf{x}_1 to obtain a proposed \mathbf{x}_2.

3. If \rho(\mathbf{x}_2)/\rho(\mathbf{x}_1) \geq \eta, where \eta is a random number on [0,1], then \mathbf{x}_2 is accepted as the new point. Otherwise we accept \mathbf{x}_1 as the new point (i.e., \mathbf{x}_1 is repeated in the list of points in the Markov chain).

4. We then generate a new random displacement and thereby a new proposed value for \mathbf{x}_3. The two steps 2. and 3. are iterated with each new point taking the place of \mathbf{x}_1 until N points have been generated.

5. The average \langle A \rangle can now be obtained as

\langle A \rangle = \frac{1}{N} \sum_{i=1}^{N} A(\mathbf{x}_i).   (414)
In the case of the statistical mechanical application above the probability density can be written as

\rho(\mathbf{x}) = \frac{\exp(-\beta V(\mathbf{x}))}{\int_D d\mathbf{x}\, \exp(-\beta V(\mathbf{x}))}   (415)

and the thermal average potential energy is evaluated as

\langle V \rangle_T = \frac{1}{N} \sum_{i=1}^{N} V(\mathbf{x}_i).   (416)

Note that each \mathbf{x}_i in this average is of equal weight. The Metropolis Monte Carlo method generates a sample of \mathbf{x}-values such that sampling power is not wasted on unimportant points in space.
What do we mean by a random displacement above? There are many ways to generate a random displacement. The most commonly used one is to select a maximal coordinate displacement \Delta and then visit each coordinate sequentially setting

x_{i,2} = x_{i,1} + 2\Delta(\eta - 0.5),   (417)

where \eta is a random number on [0,1]. Note that a random displacement can mean a change in only one or a few or all of the coordinates according to this prescription. Usually one only moves one coordinate in going from \mathbf{x}_i to \mathbf{x}_{i+1} but as long as all coordinates are visited in an unbiased way it does not really matter. The value \Delta is chosen so that the probability of rejection of the new value is about a half. This gives a good balance between moving over lots of territory and sticking to the most relevant subdomain.
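The five steps above, together with the displacement rule (417), can be sketched in a few lines of Python. The harmonic potential below is my own illustrative choice (a single coordinate, not the Lennard-Jones fluid of the appendix); for it, equipartition gives the exact answer \langle V \rangle_T = k_B T/2 for comparison.

```python
import math
import random

def metropolis_average(potential, x0, beta, delta, n_steps, seed=3):
    """Metropolis Monte Carlo estimate of <V>_T for a single coordinate.

    Follows steps 1-5 above: propose x' = x + 2*delta*(eta - 0.5), accept
    when exp(-beta*(V(x') - V(x))) >= eta', and average V over the chain
    with equal weight for every (possibly repeated) point.
    """
    rng = random.Random(seed)
    x = x0
    v = potential(x)
    total_v = 0.0
    for _ in range(n_steps):
        x_new = x + 2.0 * delta * (rng.random() - 0.5)
        v_new = potential(x_new)
        # Acceptance test on the ratio rho(x')/rho(x) = exp(-beta*dV).
        if math.exp(-beta * (v_new - v)) >= rng.random():
            x, v = x_new, v_new
        total_v += v  # a rejected move repeats the old point in the chain
    return total_v / n_steps

# Harmonic well V(x) = x**2/2 at beta = 1; equipartition gives <V>_T = 1/2.
avg = metropolis_average(lambda x: 0.5 * x * x, x0=0.0, beta=1.0,
                         delta=1.0, n_steps=200000)
print(avg)
```

Note how delta plays the role of \Delta in (417); making it much larger or smaller degrades the balance between exploration and acceptance discussed above.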
A listing of a Fortran program simulating a one-dimensional Lennard-Jones fluid as discussed in Chapter 9 by the Metropolis Monte Carlo (or just MC) method is included as an appendix. Note that the MC method is very closely related to the MD method. The difference lies in the form of the dynamics used. The MC method uses a diffusional motion. Both methods are enormously efficient by searching out the relevant part of the domain of integration. They both allow equilibrium averages to be evaluated for properties of systems with a thousand particles or more where traditional methods of integration look completely hopeless.
16.3 Exercise:

1. Draw a box-diagram showing how the one-dimensional Lennard-Jones fluid can be simulated by the Monte Carlo method to produce the thermal average potential energy. You can use the program in the appendix as a guide if you wish.
16.4 Appendix - The MC1DT.FOR program

A listing of a Fortran 77 program created to simulate a 1D nearest-neighbor interacting chain of LJ(12-6) particles in the canonical ensemble follows below.
C     This program simulates a 1D LJ(12-6) fluid in the canonical ensemble
C     Interactions are pairwise between nearest-neighbors only
C     Meltdown of the initial configuration is included
C     General initial statements follow
C     Program is limited to at most 1000 particles by X(1000), EP(1000)
      PROGRAM MC1DT
      IMPLICIT REAL*4 (A-C,E-H,O-Z)
      IMPLICIT REAL*8 (D-D)
      COMMON/A005/X(1000),EP(1000),VP(1000),TX,ES,VS,RL,SCUT,N,IN,IR
      OPEN(2,FILE='OUTPUT')
C     Initialize the random number generator
      IR=137
C     Input information interactively
C     Spacing in uniform grid is SP. Number of active particles is N.
C     Reduced units (EPS=SIGMA=1) are used
C     Cutoff imposed on the range of the pair-potential is SCUT.
      WRITE(*,*) 'Canonical ensemble. Thermal energy is TKB.'
      WRITE(*,*) 'Length of active interval is RL=N*SP'
      WRITE(*,*) 'Enter spacing = SP, particle # = N (<1000),',
     &           ' TKB and potential cutoff = SCUT in reduced units'
      READ(*,*) SP, N, TKB, SCUT
      WRITE(*,*) 'Keep steplength SL less than active length RL = N*SP'
      WRITE(*,*) 'Enter steplength SL, number of steps in Markov',
     &           ' chain = NC and seed random number DNR'
      READ(*,*) SL, NC, DNR
      WRITE(*,*) 'Enter number of steps in meltdown MM ='
      READ(*,*) MM
      WRITE(*,*) 'Computing. Please wait ....'
C     Active length is RL
      RL=N*SP
C     Generate initial coordinates
      SX=SP
    5 CONTINUE
      DO 10 I=1,N
      X(I)=I*SX-SX/2
   10 CONTINUE
C     Evaluate initial potential energy in double precision = REAL*8
      NB=INT(SCUT/SP)
      DU=0.D0
      DV=0.D0
C     The removal energy for particle J is EP(J) below
C     The virial for particle J is VP(J) below
      DO 30 J=1,N
      IN=J
      TX=X(IN)
      CALL EPART
      EP(J)=ES
      VP(J)=VS
      DU=DU+EP(J)/2
      DV=DV+VP(J)/2
   30 CONTINUE
C     The total potential energy is DU. The total virial is DV.
C     The MC loop follows. NR is the number of rejections.
      II=1
      NCC=NC
      NC=MM
   40 CONTINUE
      IN=0
      NR=0
      SEC=0.D0
      SVC=0.D0
      NER=0
      AU=DU
      AV=DV
      AV2=DV*DV
      DO 100 K=1,NC
      IN=IN+1
      IF(IN.EQ.N+1) IN=IN-N
C     Calculate the removal energy and virial of particle IN before move.
      TX=X(IN)
      CALL EPART
      EP(IN)=ES
      VP(IN)=VS
   44 CALL RANDOM(DNR,IR)
      TX=X(IN)+SL*(DNR-0.5E0)
      IF(TX.LT.0.E0) TX=TX+RL
      IF(TX.GT.RL) TX=TX-RL
      CALL EPART
      EC=ES-EP(IN)
      VC=VS-VP(IN)
      IF(EC.LT.0.E0) GO TO 66
      CALL RANDOM(DNR,IR)
      BFR=-EC/TKB
      IF(DLOG(DNR).LT.BFR) GO TO 66
C     The move is rejected: restore the old configuration
      TX=X(IN)
      ES=EP(IN)
      NR=NR+1
      EC=0.E0
      VC=0.E0
C     New configuration is accepted
   66 EP(IN)=ES
      VP(IN)=VS
C     The sum of energy changes is SEC. The sum of virial changes is SVC.
      SEC=SEC+EC
      SVC=SVC+VC
      X(IN)=TX
      DU=DU+EC
      DV=DV+VC
C     The average potential energy is AU. The average virial is AV.
      AU=AU+SEC/NC
      AV=AV+SVC/NC
C     The average squared virial <virial**2> is calculated below.
      AV2=AV2+DV*DV/NC
  100 CONTINUE
      II=II+1
      IF(II.EQ.2) THEN
      NC=NCC
      GO TO 40
      ENDIF
      WRITE(*,*) 'Final configuration: particle #, location'
      WRITE(*,130) (L, X(L), L=1,N)
  130 FORMAT(20X,I10,F16.6)
      WRITE(*,*) 'Average potential energy, final potential energy'
      WRITE(*,135) AU,DU
      WRITE(*,*) 'Average virial energy, final virial energy'
      WRITE(*,135) AV,DV
  135 FORMAT(10X,D16.6,10X,D16.6)
      PID=TKB*N/RL
      PPOT=AV/RL
      PV=PID+PPOT
      VARPPOT=SQRT((AV2-AV*AV)/NC)/RL
      WRITE(*,*) 'Ideal pressure, potential pressure, total pressure'
      WRITE(*,137) PID, PPOT, PV
  137 FORMAT(2X,'PID=',D16.6,2X,'PPOT=',D16.6,2X,'PV=',D16.6)
      WRITE(*,138) VARPPOT
  138 FORMAT(2X,'Variance in PPOT is ',D16.6)
      WRITE(*,*) 'Variance estimate is based on independent events.',
     &           ' Correlations are neglected.'
      WRITE(2,140) (DU, L, X(L), L=1,N)
  140 FORMAT(D16.6,I10,F16.6)
      WRITE(*,150) NC,NR
  150 FORMAT('# of confs=',I10,'# of rejs=',I10)
      CLOSE(2)
      STOP
      END
C     XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      SUBROUTINE RANDOM(DNR,IR)
      REAL*8 DNR,D1,D2
      DNR=DABS(DNR)
      D1=DLOG(DNR*IR)
      D1=DABS(D1)
      D1=D1-DINT(D1)
      D2=1.D7*D1
      DNR=D2-DINT(D2)
      IR=IR+1
      RETURN
      END
C     XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
C     This subroutine calculates the energy contribution due to a
C     single particle, i.e., the sum of nearest neighbor bond energies.
C     It has been extended to also calculate the virial energy of a
C     single particle.
      SUBROUTINE EPART
      COMMON/A005/X(1000),EP(1000),VP(1000),TX,ES,VS,RL,SCUT,N,IN,IR
      REAL*4 UJ,VJ,DX,R,RM6,R1,R2
      ES=0.E0
      VS=0.E0
      R1=RL
      R2=RL
      DO 1 K=1,N
C     The particle does not interact with itself.
      IF(K.EQ.IN) GO TO 1
C     Let the particle interact with the nearest image of X(K).
      DX=TX-X(K)
      R=ABS(DX)
      IF(R.GT.RL/2) THEN
      R=RL-R
      IF(DX.GT.0.E0) THEN
      DX=-R
      ELSE
      DX=R
      ENDIF
      ENDIF
      R=ABS(DX)
C     Protect against overflow.
      IF(R.LT.1.E-1) R=1.E-1
C     Pick out smallest left and right distances R1 and R2 in the chain.
      IF(DX.LT.0.E0) THEN
      IF(R.LT.R1) R1=R
      GO TO 1
      ENDIF
      IF(R.LT.R2) R2=R
    1 CONTINUE
C     Add energies due to nearest neighbor bonds.
      RM6=1/R1**6
      UJ=4*RM6*(RM6-1.E0)
      VJ=24*RM6*(2*RM6-1.E0)
      ES=ES+UJ
      VS=VS+VJ
      RM6=1/R2**6
      UJ=4*RM6*(RM6-1.E0)
      VJ=24*RM6*(2*RM6-1.E0)
      ES=ES+UJ
      VS=VS+VJ
      RETURN
      END
We know that real numbers can be added, subtracted, multiplied and divided unless the denominator is zero. In order to be able to take square roots of negative numbers and do other handy things we introduce complex numbers z = x + iy. Now we have an object which is specified by two real numbers. But the story goes on. When we want to specify the position of a particle we need three coordinates x, y, z and we need them so often that we decide to call this set of numbers a vector r. This particular vector is three-dimensional but if we have to specify the positions of two particles we need six real numbers x_1, y_1, z_1, x_2, y_2, z_2 which can be regarded as a six-dimensional vector. In the same way we can go on and find need for vectors of very high dimension. If, for example, we would try to discuss the instantaneous state of a gas by classical mechanics we may need vectors of Avogadro's number of dimensions. Now our ability to call these objects vectors and use vector notation which suppresses the delineation of the components is essential because we could not find time in a lifetime to write down all these components.
Nevertheless it is possible for us to work with such monstrous vectors if we know the rules which apply to them. Thus we are getting ready to study the rules applying to vector spaces. A powerful but simple set of operations involving vectors are described by what is called linear algebra. As it turns out quantum mechanics is dominated by linear algebra and through quantum chemistry, which is, perhaps, the fastest growing branch of chemistry, linear algebra has become an essential tool for chemists. For this reason we shall have a look at how quantum mechanics relies on linear algebra and how it gets into chemistry.
1.1 The Linear Algebra of Quantum Mechanics and Quantum Chemistry
Life may seem to get horribly complicated when we move from classical to quantum mechanics. A classical particle moving in one dimension can be described by two real numbers x, v, the position and momentum. Newton's equations of motion deal with only these two quantities. When we go to quantum mechanics we must describe the state of the same particle as a wavefunction \psi(x). A function of x is defined by its value at all x-values. If all such function values are compared with the components of a vector we see that a function is a vector in an infinite-dimensional vector space so we would seem to have a problem worse than that of describing the classical states of a gas of a mole of particles. It is not quite so difficult as it may sound. The time-development is described by a time-dependent wavefunction \psi(t,x) which satisfies the time-dependent Schrödinger equation,

i\hbar \frac{\partial \psi(t,x)}{\partial t} = H \psi(t,x),   (2)

where H is the Hamiltonian operator,

H = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V(x).   (3)

Here V(x) is the potential acting on the particle. Much interest is focused on the so-called energy eigenfunctions \psi_E(x) which satisfy the time-independent Schrödinger equation,

H \psi_E(x) = E \psi_E(x).   (4)

Note that the spatial probability density associated with a wavefunction \psi(x) is

p(x) = |\psi(x)|^2.   (5)

An energy eigenfunction satisfies the time-dependent Schrödinger equation if it is multiplied by the phase factor e^{-iEt/\hbar}. This means that the spatial probability density p(x) is time-independent. Thus the energy eigenfunctions are called stationary states. They also have well-defined energies equal to the energy eigenvalue E while wavefunctions in general are neither stationary nor of well-defined energy. In general, a wavefunction \psi(x) can be expanded in the energy eigenfunctions as follows,

\psi(x) = \sum_E c_E \psi_E(x).   (6)
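The distinction between a stationary state and a general superposition can be checked numerically. The sketch below is an illustration of my own, using the first two particle-in-a-box eigenfunctions with \hbar = m = L = 1 (these functions are not taken from the text): a single eigenfunction gives a time-independent p(x), while a two-term expansion of the form (6), with each term carrying its phase factor, does not.

```python
import cmath
import math

def phi(n, x):
    """Particle-in-a-box eigenfunction on [0, 1] (hbar = m = L = 1)."""
    return math.sqrt(2.0) * math.sin(n * math.pi * x)

def energy(n):
    """Box eigenvalue E_n = (n*pi)**2 / 2 in the same reduced units."""
    return 0.5 * (n * math.pi) ** 2

def psi(t, x, coeffs):
    """Superposition sum_n c_n * phi_n(x) * exp(-i E_n t)."""
    return sum(c * phi(n, x) * cmath.exp(-1j * energy(n) * t)
               for n, c in coeffs.items())

x = 0.3
# Stationary state: p(x) = |psi|**2 does not change with t.
stat = [abs(psi(t, x, {1: 1.0})) ** 2 for t in (0.0, 0.5, 1.0)]
print(stat)
# Equal-weight superposition of n = 1 and n = 2: p(x) oscillates in time.
c = 1.0 / math.sqrt(2.0)
sup = [abs(psi(t, x, {1: c, 2: c})) ** 2 for t in (0.0, 0.5, 1.0)]
print(sup)
```

The first list is constant because the phase factor cancels in |psi|^2; the second varies because the cross term oscillates at the frequency (E_2 - E_1)/\hbar.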
In quantum chemistry one normally seeks the wavefunction of lowest energy, the so-called ground-state wavefunction \psi_{E_0}, which is an energy eigenfunction. It is generally found by the finite basis set method which is approximative, assuming that the ground state can be found as a superposition of basis functions \varphi_i, i = 1, \ldots, N,

\psi_{E_0} = \sum_{i=1}^{N} c_i \varphi_i.   (7)

The coefficients \{c_i\} form a vector c which is obtained by solving the time-independent Schrödinger equation in matrix form,

Hc = Ec.   (8)

Here I have assumed that the basis functions are orthonormal, i.e., they satisfy

\int dx\, \varphi_i^*(x) \varphi_j(x) = \delta_{ij},   (9)

where \delta_{ij} is Kronecker's delta,

\delta_{ij} = 1, if i = j; \quad \delta_{ij} = 0, if i \neq j.   (10)

We shall discuss the finite basis set method in greater detail in the next chapter. Now we simply note that practical use of quantum mechanics leads to the eigenvalue problem in matrix form.

1.1.1 The Self Consistent Field (SCF) Approximation of Quantum Chemistry

Before we leave this introduction to the linear algebra of quantum mechanics I want to deal with one major problem of quantum chemistry, the many-electron problem, and point out the approach to it followed by chemists with great success. We started by considering one particle moving in one dimension x. A more realistic application has one particle moving in three dimensions x, y, z. The time-independent Schrödinger equation then becomes

-\frac{\hbar^2}{2m} \nabla^2 \psi(x,y,z) + V(x,y,z)\psi(x,y,z) = E\psi(x,y,z),   (11)

where

\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}.   (12)

This is not too difficult to deal with. We have three coordinates to deal with rather than one. At this point we would be able to find the energy eigenvalues and eigenfunctions of the hydrogen atom and the H_2^+ ion. This is not a bad start but we need to go on to H_2, H_2O, ..., with more than one electron. Now the number of coordinates grows linearly with the number of atoms and the quantum chemistry starts to look completely intractable for anything but the smallest molecules. The way to deal with this problem is to note that dealing with many electrons is not such a problem if we could assume that they were moving independently. If this were the case then the wavefunctions could be taken to be products,

\Psi(\mathbf{r}_1, \mathbf{r}_2, \ldots) = \psi(\mathbf{r}_1)\psi(\mathbf{r}_2)\cdots,   (13)

of one-electron wavefunctions each of which could be obtained from a one-electron Schrödinger equation with the same form of Hamiltonian operator. The corresponding many-electron energy eigenvalues would then be sums of the corresponding one-electron eigenvalues. This sounds brilliant but it does not work because the electrons interact with each other by the Coulomb repulsion between like charges. Moreover, the Pauli principle enters and insists that the total electronic wavefunction be antisymmetric with respect to interchange of two electrons. These two mechanisms mean that the electrons are not moving independently. However, the scheme can be carried out approximately in the following way: First we do not use product wavefunctions directly but combine them into Slater determinants which satisfy the Pauli principle. This means that no more than two electrons of opposite spin can be assigned the same one-electron eigenfunction (orbital) in the Slater determinant. In order to obtain the lowest energy we stack the electrons in the lowest available orbitals. The electrons still move without explicit coupling to each other but the pattern of motion has been restricted by the Pauli principle.

But how do we choose the Hamiltonian which describes the one-particle motion? We want this Hamiltonian, which is called the Fock operator in quantum chemistry, to represent the repulsion between the electrons in some average way, otherwise the resulting energy and electronic structure will be completely unrealistic. The Fock operator can be found if the occupied orbitals are known. This can be done by an iterative procedure. Thus we can start with the so-called core Hamiltonian which neglects electron-electron repulsion and obtain the corresponding orbitals \{\psi_j^{(0)}\}, which yield a new Fock operator F^{(0)}. These orbitals which include the effect of electron-electron repulsion can, in turn, be used to find an improved Fock operator F^{(1)} and so on. The vital step in this iteration is the solution of the Fock eigenvalue problem,

F^{(i)}\psi^{(i+1)} = \varepsilon^{(i+1)}\psi^{(i+1)}.   (14)

Eventually, in almost all cases, this iterative procedure will converge so that the Fock operator eigenfunctions are the same as those orbitals used to construct it to within tolerable accuracy. Then we have found the self consistent field solution to the electronic structure of the atom or molecule. It is not precise. We have not accounted for the correlation of the electrons as they move. We have also generally used finite basis sets which cannot completely resolve even the uncorrelated motion but the accuracy achieved in a good SCF calculation is often quite good and the reduction of the problem to one-electron form is so attractive that nearly all quantum chemistry is done this way. The language of chemistry is dominated by atomic and molecular orbitals which are SCF constructs. It is often not clear to the practising chemist that these concepts are approximate but they are so close to the truth that almost all methods used to unravel the subtle correlation effects start from the SCF or, as it is more often called among quantum chemists, the Hartree-Fock theory.

1.2 The Vibrational Modes of Molecules

Even though we often talk about the geometries of molecules as if the atoms were stationary relative to each other this is not the case. There are internal motions in molecules in the form of rotations and vibrations. If there are N atoms in the molecule then there are 3N-3 rotations and vibrations in the molecule. For a linear molecule there are 2 rotations and 3N-5 vibrations while there are 3 rotations in a nonlinear molecule and 3N-6 vibrations. This means that vibrations soon completely dominate the internal motion as the number of atoms increases. Again the exact treatment of the internal motion in a molecule of more than two atoms is very difficult due to the coupling between all the different motions. However, for small amplitude motion one can enormously simplify the problem by assuming separable rotations and
harmonic vibrations. This means that the potential is approximated by a quadratic form in the coordinates,

V(x_1, x_2, x_3, \ldots) = \frac{1}{2} \sum_{i,j} V_{ij} x_i x_j,   (15)

where V_{ij} is the second derivative of the potential at the minimum,

V_{ij} = \frac{\partial^2 V}{\partial x_i \partial x_j}(x_1^{(min)}, x_2^{(min)}, \ldots).   (16)

It turns out that in this harmonic approximation one can find a coordinate transformation such that the vibrations all separate and become independent so-called normal modes each performing harmonic oscillatory motion at a well-defined frequency. This results in an enormous simplification of the treatment of internal molecular dynamics which is close enough to the truth for low energies to be of great practical value in chemistry. Thus we shall, after having learnt the necessary prerequisites of linear algebra, return to consider how to obtain the coordinate transformation which yields these normal modes.

2 Vector Spaces and the Eigenvalue Problem

2.1 Vector Spaces

A set of elements x, y, z, \ldots forms a vector space S if addition of two vectors generates another vector in S, and multiplication by scalar numbers \alpha, \beta is possible, according to the rules

x \in S, \quad x + y = z \in S, \quad x + y = y + x, \quad (x + y) + z = x + (y + z),   (17)

and there exists a zero 0 such that x + 0 = x, and there exists for every vector x a vector -x such that x + (-x) = 0, and

\alpha(x + y) = \alpha x + \alpha y, \quad (\alpha + \beta)x = \alpha x + \beta x, \quad \alpha(\beta x) = (\alpha\beta)x, \quad 1\,x = x, \quad 0\,x = 0.   (18)
Definition 1: The vectors x_1, x_2, \ldots, x_n are said to be linearly independent if

\sum_{i=1}^{n} \alpha_i x_i = 0   (19)

implies \alpha_i = 0 for i = 1, \ldots, n.

Definition 2: Dimension: If there is a set of N linearly independent nonzero vectors \{e_i\}_{i=1}^{N} but no set of N+1 such vectors then the vector space is N-dimensional and for any x \in S we have

x = \sum_{i=1}^{N} x_i e_i.   (20)

The set of vectors \{e_i\}_{i=1}^{N} is called a basis in S. In a given basis the vector x can be represented by a column matrix of its expansion coefficients, e.g.,

\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}.

Definition 3: Linear operator: A is a linear operator on S if for x \in S also Ax \in S and A satisfies the linearity condition

A(\alpha x + \beta y) = \alpha Ax + \beta Ay.   (21)

Exercise 1 What is the dimension of our Euclidean space? Find a convenient basis for it.

Exercise 2 What are the vectors and the linear operators of quantum mechanics?

Exercise 3 Is the set of vectors \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 3 \\ 1 \end{pmatrix} a set of independent vectors?

2.1.1 Matrices

Suppose the linear operator A satisfies

A e_i = \sum_{j=1}^{N} A_{ji} e_j;   (22)
then it follows that

Ax = A \sum_{i=1}^{N} x_i e_i = \sum_{i=1}^{N} \sum_{j=1}^{N} A_{ji} x_i e_j.   (23)

Thus in a given basis a linear operator A can be represented by a matrix \{A_{ij}\}, where i and j are the row and column indices, and its operation can be described by matrix multiplication, e.g.,

Ax = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} A_{11}x_1 + A_{12}x_2 \\ A_{21}x_1 + A_{22}x_2 \end{pmatrix}.   (24)

Linear operators can be multiplied by scalars, added, and multiplied,

(\alpha A)x = \alpha(Ax), \quad (\alpha A)_{ij} = \alpha A_{ij},   (25)

(A + B)x = Ax + Bx, \quad (A + B)_{ij} = A_{ij} + B_{ij},   (26)

(AB)x = A(Bx), \quad (AB)_{ij} = \sum_{k=1}^{N} A_{ik} B_{kj}.   (27)

There is a null operator 0, such that 0x = 0 for all x, and an identity operator I such that Ix = x for all x.

Definition 4: Inverse: Certain operators A have inverses A^{-1} such that

AA^{-1} = A^{-1}A = I.   (28)

Definition 5: Determinant: Square matrices such as we have discussed above possess a determinant det A defined as

\det A = \begin{vmatrix} A_{11} & \cdots & A_{1N} \\ \vdots & & \vdots \\ A_{N1} & \cdots & A_{NN} \end{vmatrix} = \sum_{p} (-1)^a A_{1p_1} A_{2p_2} \cdots A_{Np_N},   (29)

where the sum is over all N! permutations of the numbers 1, \ldots, N, and the exponent a is 0 for even permutations and 1 for odd permutations. The even and odd permutations can be distinguished by the fact that the former are constructed by an even number of pairwise interchanges, the latter from an odd number of pairwise interchanges, starting from the original sequence which is even.
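The permutation expansion (29) translates directly into code. The following is a small Python sketch of my own; practical determinant routines use elimination instead, since this definition costs N! operations, but the sketch follows the definition term by term.

```python
from itertools import permutations

def det(matrix):
    """Determinant by the permutation expansion of eq. (29).

    Sums (-1)**a * A[0][p0] * A[1][p1] * ... over all N! permutations p,
    where a counts the pairwise interchanges needed to build p from the
    original (even) sequence.
    """
    n = len(matrix)
    total = 0.0
    for perm in permutations(range(n)):
        # Parity via the inversion count: even count -> even permutation.
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        prod = 1.0
        for row, col in enumerate(perm):
            prod *= matrix[row][col]
        total += (-1) ** inversions * prod
    return total

print(det([[1.0, 2.0], [3.0, 4.0]]))   # 1*4 - 2*3 = -2.0
```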
Example 1: 1,3,2 and 2,1,3 are odd permutations of 1,2,3 but 2,3,1 and 3,1,2 are even permutations. 3,2,1 is an odd permutation of 1,2,3.

Matrix operations:
Transpose of A: \tilde{A} is defined by [\tilde{A}]_{ij} = A_{ji}.
Complex conjugate of A: A^* is defined by [A^*]_{ij} = A_{ij}^*.
Hermitian conjugate of A: A^\dagger is defined by [A^\dagger]_{ij} = A_{ji}^*.
Note that the Hermitian conjugate is often called the adjoint, as in the Beta Handbook. There it is indicated by superscript * while the complex conjugate is denoted by a bar over the quantity. I am following the standard notation of physicists and chemists here.

Some terminology: The matrix A is called real if A^* = A, symmetric if \tilde{A} = A, antisymmetric if \tilde{A} = -A, Hermitian if A^\dagger = A, orthogonal if A^{-1} = \tilde{A}, unitary if A^{-1} = A^\dagger, diagonal if A_{ij} = 0 for all i \neq j, idempotent if A^2 = A, and normal if AA^\dagger = A^\dagger A.

Definition 6: Trace: The trace of a matrix A is denoted Tr(A) and defined as

Tr(A) = \sum_{i=1}^{N} A_{ii}.   (30)

Exercise 4 Find the determinants and, if possible, the inverses of the matrices \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}.

Exercise 5 Consider the matrix \begin{pmatrix} a & bi \\ ci & d \end{pmatrix}, where a, b, c and d are real numbers. Determine its i) transpose, ii) complex conjugate and iii) Hermitian conjugate. Determine minimal conditions on a, b, c and d such that the matrix is iv) real, v) antisymmetric, vi) orthogonal and vii) idempotent.
2.2 Introduction of a Metric

Definition 7: Scalar product: The bilinear operation denoted by \cdot which takes two vectors x and y into a scalar x \cdot y is called a scalar product if

x \cdot y = (y \cdot x)^*, \quad x \cdot (\alpha y + \beta z) = \alpha\, x \cdot y + \beta\, x \cdot z, \quad x \cdot x = 0 \text{ only if } x = 0.

Definition 8: Norm: |x| = \sqrt{x \cdot x} is called the norm (or length) of the vector x.

Definition 9: Orthogonality: x and y are said to be orthogonal if x \cdot y = 0.

Definition 10: Orthonormality: The basis \{e_i\}_{i=1}^{N} is said to be orthonormal if e_i \cdot e_j = \delta_{ij}.

2.3 The Eigenvalue Problem

Definition 11: Eigenvector and eigenvalue: x is an eigenvector of A if Ax = \lambda x, and \lambda is called the eigenvalue of A corresponding to the eigenvector x.

How to find the eigenvalues of an operator A? We shall assume here that our operator can be represented as a matrix in an orthonormal basis.

Theorem 1: The eigenvalues of the matrix A can be found as the roots of the secular equation det(A - \lambda I) = 0. Note that the secular equation is an Nth order algebraic equation. Thus there are N roots \{\lambda_i\}_{i=1}^{N} some of which may be degenerate.

I will only sketch a proof here leaving the details to be worked out as an exercise. From the eigenvalue equation Ax = \lambda x it follows that the columns of the matrix A - \lambda I are linearly dependent. First we note that the determinant is unaltered if a scalar times one of the columns of the matrix is added to another column in the matrix. In turn, this follows from the antisymmetry of the determinant with respect to the interchange of two columns, i.e., from the fact that matrices with two identical columns have a vanishing determinant. This means that by adding scalar numbers times the other columns to one of them one can, with the proper choice of scalars, make this column consist of only zeros in a matrix which must have unchanged determinant. It is then trivial to see that the determinant must be zero.
How does one find the eigenvector(s) corresponding to a given eigenvalue? Given the eigenvalue \lambda, the eigenvalue equation Ax = \lambda x turns into a set of coupled linear equations which can be solved by stepwise elimination. Note that for each eigenvalue one can find at least one corresponding eigenvector and at most a number of linearly independent eigenvectors equal to the degeneracy (multiplicity) of the corresponding root of the secular equation. Since it is trivial to see that if x is an eigenvector so is \alpha x, eigenvectors are only determined up to an arbitrary scalar prefactor. Thus one component of the eigenvector will be undetermined or can be set to unity unless this component turns out to be zero. If so one simply picks another component to set to unity or some other convenient value. A set of eigenvectors corresponding to different eigenvalues are linearly independent.

2.3.1 Properties of Hermitian matrices

Using longhand notation for the scalar product such that x \cdot y is written as (x, y) we have in general (x, Ay) = (A^\dagger x, y) and for Hermitian matrices (x, Ay) = (Ax, y). The eigenvalues of a Hermitian operator are real and the eigenvectors corresponding to different eigenvalues are orthogonal. The proof that the eigenvalues are real and the eigenvectors orthogonal follows readily from consideration of the scalar product (x_1, Ax_2) and use of the above Hermiticity relation and the eigenvalue relation.

Exercise 6 Prove explicitly that (x, Ay) = (A^\dagger x, y).

Exercise 7 Prove explicitly that if A^\dagger = A then eigenvectors of different eigenvalues are orthogonal and the eigenvalues are real.

Example 8 Find the eigenvalues and eigenvectors of the matrix \begin{pmatrix} 4 & 1 \\ 1 & 4 \end{pmatrix}.
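Example 8 can also be checked numerically. The following pure-Python sketch (an illustration of my own, not part of the course programs) solves the secular equation of a real symmetric 2 x 2 matrix by the quadratic formula and recovers an eigenvector by the "set one component to unity" recipe discussed above; the solution by hand follows in the text.

```python
import math

def eig_2x2_symmetric(a11, a12, a22):
    """Eigenvalues of a real symmetric 2x2 matrix from its secular equation.

    det(A - t*I) = t**2 - (a11 + a22)*t + (a11*a22 - a12**2) = 0.
    """
    tr = a11 + a22
    disc = math.sqrt((a11 - a22) ** 2 + 4.0 * a12 ** 2)
    return (tr - disc) / 2.0, (tr + disc) / 2.0

def eigvec_2x2(a11, a12, a22, lam):
    """Normalized eigenvector: row one gives (a11 - lam)*x1 + a12*x2 = 0."""
    # Set x1 = 1 (possible whenever a12 != 0) and normalize.
    x1, x2 = 1.0, (lam - a11) / a12
    n = math.hypot(x1, x2)
    return x1 / n, x2 / n

lo, hi = eig_2x2_symmetric(4.0, 1.0, 4.0)
print(lo, hi)                          # 3.0 5.0
print(eigvec_2x2(4.0, 1.0, 4.0, hi))   # proportional to (1, 1)
print(eigvec_2x2(4.0, 1.0, 4.0, lo))   # proportional to (1, -1)
```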
Such an equation can be generated from an operator equation by use of an orthonormal basis fej gn . In order to make them orthonormal we multiply them by 1= 2.4 The Generalized Eigenvalue Problem The operator eigenvalue above could be written as a matrix equation. x2 : The eigenvalues are then 1 = 5 and obtained by solving the equations  4x1 + x2 = x1 + 4x2 = The …rst equation yields x2 = ( and the second yields x1 = ( 4)x1 . namely when x1 = 0. 4)x2 : Insisting that these two equations give the same eigenvectors leads to the secular equation and the two possible eigenvalues. : Note that these eigenvectors are orthogonal . 2. In some cases.Solution: the secular equation is (4 )2 2 1 = 0: = 3 with corresponding eigenvectors x1 . A convenient way to proceed is to choose x1 = 1 and then …nd the two eigenvectors x1 = x2 = 1 1 1 1 . The operator equation 1 Aop x = x (31) 13 . One can then instead choose x2 = 1.as they must be for a Hermitian p matrix. the ” convenient way to proceed”above does not work.
Aop ej ) = Aij xj (33) (ei . where As = S 1 A. x. a) Can molecules be treated as basis functions in a space of all possible chemical species? b) What distinguishes the space of molecules from a vector space? c) Can you suggest a way to use vector space methods to solve our problem of balancing a stochiometric equation? 14 .5 Linear Algebra Exercises: 1. x) = xj (ei . (35) which is the generalized eigenvalue problem. x = n X j=1 xj e j . ej ) = xi : In the case of a general basis which is not orthonormal the matrix equation is Ax = Sx.e.can be turned into a matrix equation in the space spanned by the basis vectors by projecting both x and the operator equation onto the space of the basis vectors. Try to convert the chemical problem of writing a stoichiometrically balanced equation for a reaction with given reactants and products into a mathematical problem. 2. Aop xj e j n X j=1 = xj (ei . Aop x) = = ei . n X j=1 (32) ! n X j=1 n X j=1 (ei . i. (36) (37) The projection of an operator eigenvalue problem into a reduced space spanned by a given basis and a corresponding matrix form is called the Galerkin method and used very commonly in physics and chemistry. n X i=1 (34) n X i=1 Aij xj = Sij xj . This problem can be converted to normal form by multiplication with the inverse of S S 1 Ax = As x = S 1 Sx = x.
2. Let the matrices A and B be defined by

A = ( 2 1 0 )      B = ( 1 4 7 )
    ( 1 0 1 )          ( 2 5 8 )
    ( 0 1 2 )          ( 3 6 9 )

a) Obtain the determinants of the two matrices. b) Obtain A², B², AB explicitly. c) Obtain the eigenvalues of A.

3. Under what conditions can the linear equation Ax = b be uniquely solved? Here A is an n × n matrix and x, b are n-dimensional vectors. Give proof of your conclusion.

4. Show that for any square matrices A and B we have a) (AB)ᵀ = BᵀAᵀ and b) (AB)† = B†A†.

5. Prove that (x, Ay) = (A†x, y).

6. Find the eigenvalues and eigenvectors of the matrix

A = ( 0 1 )
    ( 1 0 )

7. Find the eigenvalues and eigenvectors of the matrix

B = ( 2 0 3 )
    ( 0 5 0 )
    ( 3 0 7 )

8. Verify Theorem 1 explicitly for 2 × 2 matrices.

9. Prove that eigenvectors of a linear operator corresponding to different eigenvalues must be linearly independent.

10. Show that if U is a unitary matrix then its eigenvalues {λ} satisfy |λ| = 1.

11. The concept of "linearity" is important and has many applications in chemistry. It implies a dramatic simplification by comparison with the more general "nonlinear" case. Chemical phenomena are, however, generally nonlinear. We are lucky that the Schrödinger equation is linear, so that nearly all of quantum chemistry reduces to linear algebra. An important illustration of this can be found in the current debate concerning chemicals in our bodies and our environment. Imagine that
you have been asked, as a person with knowledge of chemistry, to pronounce whether a certain chemical is poisonous or not. How would you explain the mathematical content of this question in terms of concepts of linearity and nonlinearity? Note that nonlinearity is the complement of linearity, i.e., any relation which is not linear is nonlinear.

3 Hückel Theory of Delocalization

The Hartree-Fock theory shows us how we can, to a good approximation, reduce the problem of finding the total energy E₀ of a molecule to one-electron form. We find the Fock operator F which plays the role of a one-electron Hamiltonian operator from which one-electron (canonical orbital) wave functions can be found by solving

Fψ = εψ.  (38)

The Fock operator is obtained by a self-consistent iterative procedure and it depends on the occupied electronic orbitals {ψ_j}, j = 1, ..., n, themselves. Once we have chosen a basis set {φ_l}, l = 1, ..., N, in which to resolve the canonical orbitals, the Fock equation 38 turns into a matrix equation

Fc_j = ε_j Sc_j,  (39)

where

F_mj = ∫ dr φ_m*(r) F φ_j(r),  (40)

S_mj = ∫ dr φ_m*(r) φ_j(r).  (41)

Here S is the overlap matrix, which becomes equal to the unit matrix if the basis functions are orthonormal. Note that 39 can be derived from 38 by taking scalar products with respect to the basis functions and expanding the orbital in the basis,

ψ_j = Σ_{m=1}^{N} c_mj φ_m.  (42)

In the 1930's Hückel developed a very simplified form of Hartree-Fock theory for planar aromatic molecules. He noted that if the molecule lies in the x,y-plane then the carbon 2p_z orbitals stick out orthogonally to the
Thus. The calculation to obtain the Fock matrix. as an example butadiene (C4 H6 in a linear chain geometry for the carbons). for butadiene. Next Hückel assumed that the diagonal terms in this matrix. Thus we can develop a reduced Fock matrix in the space of C 2pz orbitals. Thus the matrix h was a square matrix of the same order as the number of double bonded carbon atoms in the planar molecule. or the part of it. by empirical means.plane. Hückel had the idea that it may be possible to construct the reduced Fock matrix for the electrons. It is not much more di¢ cult to solve this generalized form of the eigenvalue problem but Hückel simpli…ed it further by setting s = 0. These orbitals do not mix with the inplane sigma orbitals. of the type 0 10 1 0 10 1 0 0 c1 1 s 0 0 c1 B B CB C 0 C B c2 C B CB C = " B s 1 s 0 C B c2 C : (44) @ 0 A @ c3 A @ 0 s 1 s A @ c3 A 0 0 c4 0 0 s 1 c4 This is a generalized eigenvalue equation due to the presence of the overlap matrix S on the left. were equal to the carbon 2pz atomic orbital energy. he obtained 10 1 0 1 0 0 0 c1 c1 B B C 0 C B c2 C B CB C = " B c2 C : (45) @ 0 A @ c3 A @ c3 A 0 0 c4 c4 17 . He proposed to use the C 2pz orbitals on all double bonded carbons as a minimal basis set. Nc . The corresponding molecular orbitals obtained by diagonalizing this reduced Fock matrix are called orbitals and the electrons assigned to them are called electrons. h or just h below. The same approximation is justi…ed for the overlap matrix also and we end up with a reduced Fock equation for. hjj = "C2p = : (43) Then he assumed that coupling only existed between bonded carbon atoms (nearest neighbour carbon atoms). hjj . In the 30’ it s must have been impossible for most molecules. The electron assignment to orbitals follows the Aufbau principle: Fill the orbitals in order of increasing orbital energy but place no more than two electrons (of opposite spin direction) in each orbital not to run afoul of the Pauli principle. 
rigorously by the HartreeFock method is relatively hard numerical work.
Note that the coupling term β,

β = h_jm, with j bonded to m,  (46)

is taken to be independent of which bond is referred to. When the indices in h_jm refer to uncoupled carbon atoms the matrix element vanishes, so for large molecules there will be a lot of zeroes. In order to write down the Hückel matrices one needs to number the atoms in some reasonable way and keep track of all coupled and uncoupled carbon atom pairs. The coupling term β is often called the resonance integral. It is responsible for the delocalization of π electron motion in the molecule. Thus it is also responsible for the lowering of the energy which is the cause of the "resonance stabilization" of the aromatic or conjugated molecules. Note that both α and β are negative numbers representing a negative atomic C2p energy and a coupling energy.

In order to solve the Hückel eigenvalue problem one usually works in energy units such that |β| = 1. Then the secular equation becomes

det ( ε′ 1  0  0 )
    ( 1  ε′ 1  0 )  = 0.   (47)
    ( 0  1  ε′ 1 )
    ( 0  0  1  ε′ )

Here we have defined ε′ as

ε′ = (α - ε)/β = (ε - α)/(-β).  (48)

The determinant can be worked out by noting that the only nonvanishing matrix element products are those corresponding to the permutations (1,2,3,4), (2,1,3,4), (1,3,2,4), (1,2,4,3) and (2,1,4,3), leading to the secular equation

ε′⁴ - 3ε′² + 1 = 0.  (49)

We solve first for ε′² and find

ε′² = (3 ± √5)/2.

The corresponding solutions for ε′ are obtained as

ε′ = ±√((3 - √5)/2) = ±(√5 - 1)/2 = ±0.618,  (50)

ε′ = ±√((3 + √5)/2) = ±(√5 + 1)/2 = ±1.618.  (51)
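These roots are easy to verify numerically. The sketch below is an added illustration, not part of the original derivation; it assumes the reduced units α = 0, β = 1 so that the eigenvalues of the butadiene Hückel matrix come out directly as ε′:

```python
import numpy as np

# Butadiene Hückel matrix in reduced units: alpha = 0 on the diagonal,
# beta = 1 for nearest-neighbour couplings (eigenvalues are then epsilon').
h = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])

eps = np.linalg.eigvalsh(h)   # ascending: -1.618, -0.618, 0.618, 1.618

# every root satisfies the secular equation e^4 - 3e^2 + 1 = 0
assert np.allclose(eps**4 - 3*eps**2 + 1, 0.0, atol=1e-12)

# pi binding energy of butadiene in |beta| units: 2(1.618 + 0.618) = 4.472
E_pi = -2*(eps[0] + eps[1])
print(round(E_pi, 3))   # 4.472
```

The same few lines, with the matrix replaced, handle the benzene and cyclobutadiene exercises below.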
The set of eigenvectors and eigenvalues are

ψ₁ = 0.3717(φ₁ + φ₄) + 0.6015(φ₂ + φ₃),  ε′₁ = -1.618,
ψ₂ = 0.6015(φ₁ - φ₄) + 0.3717(φ₂ - φ₃),  ε′₂ = -0.618,
ψ₃ = 0.6015(φ₁ + φ₄) - 0.3717(φ₂ + φ₃),  ε′₃ = 0.618,
ψ₄ = 0.3717(φ₁ - φ₄) - 0.6015(φ₂ - φ₃),  ε′₄ = 1.618.

The energy eigenvalues are displaced down and up from the C2p energy α in a symmetric fashion. The expansion coefficients change with carbon atom somewhat like the eigenfunctions of the one-dimensional particle-in-a-box problem. Note that the eigenfunctions are symmetric or antisymmetric to reflection in the midpoint. Note also the linear growth in the number of nodes with excitation.

When we work out the π contribution to the total binding energy we first find the number of π electrons to be placed and then place them in the available orbitals from the lowest and up, in accord with the Aufbau principle which restricts the number of electrons in an orbital to 0, 1 or 2. In the case of butadiene we have four π electrons which fill the two lowest orbitals. The binding energy contribution is 2(1.618 + 0.618) = 4.472 in β-units. If we instead considered the butadiene molecule to contain pairwise localized and uncoupled π bonds, C=C-C=C, then the binding energy contribution would turn out to be 4β, as will be clear below. Thus the extra binding energy due to the delocalization of the π electrons over more than two nuclei is 0.472β. This we shall call the resonance stabilization of the butadiene molecule.

How can we obtain estimates of the two parameters α and β? The first, α, is supposed to be the C2p_z orbital energy. It can be estimated as minus the ionization energy of the carbon atom, which is 1090 kJ/mol. This parameter does not play a very important role in the Hückel theory. The more important parameter is β, the coupling constant, which delocalizes the π electrons and stabilizes favored structures. In order to estimate β we shall consider the simplest molecule containing a π bond, ethylene C₂H₄. From a table of average molecular bond energies (Table 13.5 in Zumdahl, Chemical Principles) we find that for a C=C bond, consisting of a σ and a π bond, the average bond energy is 614 kJ/mol, while for the C-C σ bond the energy is 347 kJ/mol. Thus it appears that the π bond energy can be estimated for
the ethylene molecule to be 614 - 347 = 267 kJ/mol. Let us now apply Hückel theory to ethylene to find the predicted π bond energy in terms of β. The Hückel hamiltonian is

h = ( α β )
    ( β α )   (52)

The reduced secular equation becomes

det ( ε′ 1  )  = 0.   (53)
    ( 1  ε′ )

The solution is ε′ = ±1. The lower orbital will be occupied by two π electrons, each contributing an equal amount to the π bond energy. Thus the π bond energy is, in reduced units of |β|, equal to 2. Setting this equal to 267 kJ/mol we find that β = -134 kJ/mol in SI units. This should be considered a very approximative estimate, but it gives us a plausible magnitude and an example of how β could be obtained.

3.1 Hückel exercises:

1. Work out the Hückel π orbitals for butadiene, i.e., verify the results stated in the text showing the algebra explicitly.

2. One might expect on the basis of standard bonding pictures that the middle carbon-carbon bond in butadiene is weaker and longer due to dominant single bond character. Estimate the resonance stabilization energy of butadiene (i.e., that part of the π binding energy due to the further delocalization of the π electrons in the butadiene chain) in kJ/mol in the case when the coupling between the two middle carbon atoms, due to a longer bond, is half the normal value. Compare with the value for the standard model with all couplings the same.

3. Write down the Hückel matrix for benzene. Use symmetry to obtain the form and the energy of the lowest lying π orbital.

4. Consider the square planar molecule cyclobutadiene C₄H₄ obtained by tying the ends of the butadiene molecule together with the loss of two hydrogen atoms. Write down its Hückel matrix and calculate the corresponding π orbitals and orbital energies. What is the total resonance stabilization energy?
5. Obtain the π electron contribution to the binding energy of the square planar N₄ molecule and estimate the corresponding resonance energy due to the further delocalization of the π electrons. You may use the bond energies N-N 160 kJ/mol and N=N 418 kJ/mol in your estimation.

4 Vibrational Modes of a Molecule

Our purpose here will be to describe the motion of the nuclei in a molecule, e.g., the water molecule H₂O. The exact solution to this problem is intractable due to coupling between the motion of electrons and nuclei and between nuclei due to anharmonic effects. However, on the basis of a number of simplifications we shall be able to get close to reality without losing too much in accuracy. The most important simplifications are: the point particle model, i.e., the electrons are neglected and the atoms become point particles interacting by a potential V(r₁, r₂, ...) (actually due to the electrons), where r₁, r₂, ... are the spatial coordinates of the atoms; classical mechanics, while we really ought to use quantum mechanics; and small amplitude vibrations, i.e., anharmonic effects and vibration-rotation coupling will be neglected.

4.1 The One Dimensional Oscillator

Let the potential acting on a particle moving in one dimension be V(x) and the mass of the particle m. We assume that V(x) has the general character of a bond potential in a diatomic molecule, i.e., a well defined minimum. As the temperature decreases the bondlength decreases to become equal to the so called equilibrium value x_e at T = 0 K. The particle would then in our simple picture sit motionless at the bottom of the potential well. We can find x_e by examining the stationary points satisfying

∂V/∂x = 0, and ∂²V/∂x² > 0.  (54)

These conditions identify a potential minimum. If there are several minima we choose the point where the potential is the smallest.
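The stationary-point conditions 54 are easy to apply numerically. Below is a minimal added sketch, not part of the original text, that assumes a Morse-type model potential with arbitrarily chosen parameters D = 1, a = 1, x_e = 1; it locates the minimum on a grid and then estimates the curvature at the minimum by a finite difference:

```python
import numpy as np

# Model bond potential: Morse form, with illustrative parameters
D, a, xe = 1.0, 1.0, 1.0
V = lambda x: D*(np.exp(-2*a*(x - xe)) - 2*np.exp(-a*(x - xe)))

# Locate the minimum on a fine grid (where V' = 0 and V'' > 0)
x = np.linspace(0.0, 5.0, 200001)
x_min = x[np.argmin(V(x))]

# Curvature at the minimum by a central finite difference
h = 1e-4
k = (V(x_min + h) - 2*V(x_min) + V(x_min - h)) / h**2

assert abs(x_min - xe) < 1e-3      # minimum sits at the equilibrium length
assert abs(k - 2*D*a**2) < 1e-2    # analytic curvature is 2*D*a**2
```

The quantity k computed here is exactly the force constant introduced in the harmonic approximation that follows.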
We now apply the harmonic approximation by expanding the potential in a Taylor series around the point x = x_e and retaining only the first three terms,

V(x) = V(x_e) + (x - x_e)V′ + (1/2)(x - x_e)²V″ = V(x_e) + (1/2)(x - x_e)²V″.  (55)

Here we have used the fact that V′ vanishes at x = x_e. Without loss of generality, we will now choose an energy scale such that V(x_e) = 0 and the origin on the x-axis such that x_e = 0. Then the potential can be written as

V(x) = (1/2)kx², with k = V″(x_e).  (56)

Chemists call k the force constant. A particle moving in such a potential is called a harmonic oscillator. We shall now review the solution for the harmonic oscillator motion. We start from Newton's equation of motion

m d²x/dt² = -∂V/∂x,  (57)

which yields

d²x/dt² = -(k/m) x(t).

In order to solve this second order ordinary differential equation we note that it is linear, and use a method to be discussed in greater detail in a later chapter. Shifting the RHS over to the left we rewrite the equation as

(d/dt + i√(k/m))(d/dt - i√(k/m)) x(t) = 0.  (58)

It is now clear that, since the factors can be written in any order, any sum of solutions of the two first order equations

(d/dt + i√(k/m)) x(t) = 0, or (d/dt - i√(k/m)) x(t) = 0,  (59), (60)

will be a solution of the second order equation 58. The solutions to the two first order equations are

x(t) = a e^{-i√(k/m) t}, and x(t) = b e^{+i√(k/m) t}.  (61)
Thus the harmonic oscillator solution is

x(t) = a e^{-i√(k/m) t} + b e^{+i√(k/m) t},  (62)

but since x(t) must be real the solution can be reduced to the form

x(t) = A sin(√(k/m) t + δ),  (63)

where A is a real amplitude, the maximal excursion of the oscillation from the origin, and δ is an arbitrary initial phase of the oscillator.

4.2 Multidimensional vibrations in molecules

If the molecule is made up of N atoms there are 3N spatial coordinates, so the problem is now n-dimensional where n is an integer larger than one. The 3N spatial degrees of freedom (# of dof) can be divided into center of mass, rotational and vibrational subsets as shown below.

            c of m   rot   vib
linear        3       2    3N-5
nonlinear     3       3    3N-6

The potential now depends on the spatial coordinates,

V(x₁, x₂, ..., x_n).

If we are describing a molecule in field free space then the potential must be invariant to translation (center of mass coordinates) and rotation (rotational coordinates). This makes it useful to transform our analysis into a set of internal bond coordinates explicitly accounting for the invariance of V to external motions. However, the gain in lower dimensionality is more than offset by loss of simplicity, so we shall stick with our initial representation in terms of cartesian atomic coordinates.

The global minimum is found by solving for all extrema given by the set of equations

∂V/∂x_i (x₁, x₂, ..., x_n) = 0, i = 1, 2, ..., n.  (64)

It is now a little more difficult to determine whether we have found a maximum or minimum, but evaluating V at each such point and choosing the extremum giving the lowest potential should work for all reasonable vibrational potentials. Once the global minimum is found we transform our coordinate system
to make this point the origin. Thus we shall take x_{1e}, x_{2e}, ... all to vanish in our discussion below. We also take the potential energy to be zero at the minimum. From the multidimensional Taylor series expansion (see Appendix) it follows that in the harmonic approximation, where terms of order higher than quadratic in the coordinates vanish, we get

V(x₁, x₂, ..., x_n) = (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} V_ij x_i x_j,  (65)

where

V_ij = ∂²V/∂x_i∂x_j |_{x=0} = V_ji.  (66)

We are now ready to consider the multidimensional motion, which is described by Newton's equation suitably generalized to the multidimensional case,

d²x_k/dt² = -(1/m_k) ∂V/∂x_k (x₁, ..., x_n) = -(1/m_k) Σ_{l=1}^{n} V_kl x_l(t), for k = 1, 2, ..., n.  (67)

We have a set of n coupled linear second order differential equations. This sounds, perhaps, quite intractable for large n, but we shall see that it is possible to find a coordinate transformation x → y such that we end up with n uncoupled equations of motion of the type 58 which we have already solved,

d²y_k/dt² = -U_kk y_k(t), for k = 1, 2, ..., n,  (68)

with solutions

y_k(t) = A_k sin(√(U_kk) t + δ_k), for k = 1, 2, ..., n.  (69)

Here the factor √(U_kk) = ω_k is called the frequency of the kth vibrational mode and is normally given in radians per second. We often refer to the variables {y_k}, k = 1, ..., n, as the normal modes, since they are decoupled from each other. The coordinate transformation we shall use is linear and given by

y_k = Σ_{l=1}^{n} S_kl x_l,  (70)
or in vector notation

y = Sx.  (71)

The task that remains is to find the matrix S and the frequencies {ω_k}, k = 1, ..., n. Note first that Newton's equations in the x-coordinates can be rewritten as

d²x/dt² = -Rx,  (72)

where

R_kl = (1/m_k) V_kl.  (73)

In order to find the corresponding equation for y we multiply both sides by S,

d²(Sx)/dt² = -SRx = -SRS⁻¹Sx,  (74)

which can be rewritten as

d²y/dt² = -Uy.  (75)

Here we used the fact that S⁻¹S = I, the identity operator, and denoted SRS⁻¹ as U.

Theorem: Let {e_i}, i = 1, ..., n, be the basis vectors of our vector space of positions, so that any vector x can be expanded in this basis,

x = Σ_{i=1}^{n} x_i e_i.  (76)

In our case {e_i} will be the set of cartesian basis vectors. Suppose that R has n linearly independent eigenvectors {x^i}, i = 1, ..., n, such that

x^l = Σ_{i=1}^{n} x_li e_i.  (77)

Then, if S is chosen so that

S⁻¹e_i = x^i, i = 1, 2, ..., n,  (78)

i.e., the eigenvectors of R become the columns of S⁻¹,

(S⁻¹)_li = x_il = [x^i]_l,  (79)

then U becomes diagonal.
Proof:

U_ij = (e_i, SRS⁻¹e_j) = (e_i, SRx^j) = R_j (e_i, Sx^j) = R_j (e_i, SS⁻¹e_j) = R_j δ_ij,  (80)

where

Rx^j = R_j x^j,  (81)

and we have assumed the basis to be orthonormal, (e_i, e_j) = δ_ij. Thus we conclude that when R has n linearly independent eigenvectors, then we can construct S⁻¹ from these eigenvectors and take the inverse to find S. Moreover, each such eigenvector corresponds to a normal mode with frequency equal to the square root of the corresponding eigenvalue R_j.

Suppose now that all the masses of the atoms in our molecule are the same, m, as in, e.g., O₃. Then

R_ij = (1/m) V_ij = (1/m) V_ji = R_ji.  (82)

Thus R is symmetric and Hermitian. It follows that it has n linearly independent eigenvectors {x^i}, i = 1, ..., n, so the theorem applies. Moreover, transformations between orthonormal basis sets correspond to orthogonal matrices, so

Sᵀ = S⁻¹ and S_li = x_li.  (83)

This follows from

(SS⁻¹)_ij = Σ_{l=1}^{n} S_il (S⁻¹)_lj = Σ_{l=1}^{n} x_il x_jl = x^i · x^j = δ_ij.  (84)

What to do when the masses are not equal? Here is a useful trick. We will simply insert a transformation to mass weighted coordinates to symmetrize R before applying the transformation above. Define M by

M_kl = √(m_k) δ_kl,  (85)

and note that M⁻¹ satisfies

(M⁻¹)_kl = (1/√(m_k)) δ_kl.  (86)
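A small numeric sketch, added here with arbitrarily chosen masses and V-matrix, shows what the trick buys us: R itself loses its symmetry when the masses differ, but the mass weighted combination M R M⁻¹ is symmetric again:

```python
import numpy as np

# Illustrative data: two unequal masses and a symmetric V-matrix
m = np.array([1.0, 3.0])
Vmat = np.array([[1.0, 1.0],
                 [1.0, 4.0]])

M     = np.diag(np.sqrt(m))       # M_kl = sqrt(m_k) delta_kl
M_inv = np.diag(1.0/np.sqrt(m))   # its inverse, equation 86

R    = Vmat / m[:, None]          # R_kl = V_kl/m_k: not symmetric here
Rbar = M @ R @ M_inv              # mass weighted coupling matrix

assert not np.allclose(R, R.T)    # symmetry is lost for unequal masses
assert np.allclose(Rbar, Rbar.T)  # ...but restored by mass weighting
assert np.allclose(Rbar, M_inv @ Vmat @ M_inv)
```

The last assertion checks the identity M R M⁻¹ = M⁻¹ V M⁻¹, which is the matrix that appears in the derivation that follows.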
Now note that

SRS⁻¹ = SM⁻¹(MRM⁻¹)MS⁻¹ = AR̄A⁻¹,  (87)

where A = SM⁻¹ and

R̄_kl = V_kl/√(m_k m_l) = R̄_lk.  (88)

Thus the transformation to mass weighted coordinates produces a coupling matrix R̄ which is Hermitian and has n orthonormal eigenvectors {x^i}, i = 1, ..., n. By our theorem we then find that A_li = x_li, and from A = SM⁻¹ it follows that

S = AM, i.e., S_kl = Σ_{i=1}^{n} A_ki M_il = √(m_l) x_kl.  (89)

The frequency corresponding to the kth mode is

ω_k = √(R̄_k), where R̄x^k = R̄_k x^k.  (90)

Note that when the masses are different, S is no longer orthogonal, i.e., the normal modes are no longer orthogonal to each other. In calculations for realistic molecular models one will find that the normal modes corresponding to center of mass translation and rotation correspond to zero frequency modes, ω = 0. Note also that when working out normal mode coordinates we divide by some appropriate mass to get

y_k = Σ_{l=1}^{n} √(m_l/m_mass) x_kl x_l.  (91)

4.2.1 Summary of steps in the normal mode analysis

1. Make sure you have masses m₁, ..., m_n and a potential V(x₁, ..., x_n).

2. Find the minimum of V(x₁, ..., x_n) and the double derivatives

V_ij = ∂²V/∂x_i∂x_j (x₁, ..., x_n)

at the minimum.

3. If the masses are not the same, then create the Hermitian matrix R̄ defined by

R̄_kl = V_kl/√(m_k m_l).

If the masses are all the same, then we can proceed as below noting that R̄ = R.
4. Solve the eigenvalue problem R̄x^j = R̄_j x^j for n orthonormal eigenvectors with corresponding real eigenvalues.

5. Obtain the vibrational frequencies as ω_k = √(R̄_k) and the corresponding normal mode coordinates as y_k = Σ_{l=1}^{n} √(m_l/m_mass) x_kl x_l, where m_mass can be taken to be any convenient mass, e.g., one of the atomic masses or the average atomic mass. Note that the mass-stretched components of the eigenvectors of the coupling matrix R̄ become the normal mode coordinates.

4.3 Example 1 - A Two-Dimensional Vibration

Suppose that two particles of mass m₁ = m and m₂ = 3m, respectively, are performing vibrations in a potential

V(r₁, r₂) = 10 + r₁ - 2r₂ + r₁²/2 + 2r₂² + r₁r₂.

Let us find the normal modes of this motion. We begin by finding the equilibrium geometry of the vibrational motion, i.e., the potential minimum,

∂V/∂r₁ = 1 + r₁ + r₂ = 0,
∂V/∂r₂ = -2 + 4r₂ + r₁ = 0,

which yield

r₁,eq = -2, r₂,eq = 1.  (92)

Next we rewrite the potential in new coordinates x₁, x₂ defined as x₁ = r₁ - r₁,eq = r₁ + 2 and x₂ = r₂ - r₂,eq = r₂ - 1. We get

V(x₁, x₂) = 8 + x₁²/2 + 2x₂² + x₁x₂.  (93)

The V-matrix is then

V = ( 1 1 )
    ( 1 4 )   (94)

and the symmetrized R̄-matrix is

R̄ = (1/m) ( 1     1/√3 )
           ( 1/√3  4/3  )   (95)

The matrix mR̄ has the eigenvalues and eigenvectors
(7 + √13)/6 and (7 - √13)/6, with corresponding (unnormalized) eigenvectors

(1, √3(1 + √13)/6) and (1, √3(1 - √13)/6).

Thus the vibrational frequencies are

ω₁ = √((7 + √13)/(6m)), ω₂ = √((7 - √13)/(6m)),

and the corresponding normal mode coordinates (unnormalized) are

y₁ = x₁ + √3 · (√3(1 + √13)/6) x₂ = x₁ + ((1 + √13)/2) x₂,  (96)
y₂ = x₁ + √3 · (√3(1 - √13)/6) x₂ = x₁ + ((1 - √13)/2) x₂.

4.4 Example 2 - Normal Modes of a Linear Chain Molecule

Let us consider a molecule consisting of three atoms attached to each other by chemical bonds represented by Morse potentials,

Φ(r) = D(e^{-2α(r - r_e)} - 2e^{-α(r - r_e)}).

The Morse potential reaches its minimum potential energy of -D, minus the bond dissociation energy, at the equilibrium bondlength r_e. We shall take the masses to be the same and the atoms to be confined to move in one dimension. The coordinate r is the bondlength. The full potential can then be written as

V(r₁, r₂, r₃) = D(e^{-2α(r₂-r₁-r_e)} - 2e^{-α(r₂-r₁-r_e)} + e^{-2α(r₃-r₂-r_e)} - 2e^{-α(r₃-r₂-r_e)}).  (97)

The global potential minimum occurs when r₂ - r₁ = r₃ - r₂ = r_e, where the total potential energy is -2D. The minimum we shall choose to expand around is r₁ = 0, r₂ = r_e, r₃ = 2r_e. Our x-coordinates measuring deviation from the equilibrium positions are then x₁ = r₁, x₂ = r₂ - r_e, x₃ = r₃ - 2r_e. In these coordinates the potential becomes

V(x₁, x₂, x₃) = D(e^{-2α(x₂-x₁)} - 2e^{-α(x₂-x₁)} + e^{-2α(x₃-x₂)} - 2e^{-α(x₃-x₂)}).  (98)

The first derivatives are

∂V/∂x₁ = 2Dα(e^{-2α(x₂-x₁)} - e^{-α(x₂-x₁)}),
∂V/∂x₂ = -2Dα(e^{-2α(x₂-x₁)} - e^{-α(x₂-x₁)}) + 2Dα(e^{-2α(x₃-x₂)} - e^{-α(x₃-x₂)}),
∂V/∂x₃ = -2Dα(e^{-2α(x₃-x₂)} - e^{-α(x₃-x₂)}).  (99)
m 2 30 . ! 1 = 0. @xi @xj (100) where the double derivative is evaluated at the origin. antisymmetric vibration. 2 = 3. x 3 = p @ 0 A : 3 6 2 1 1 1 0 (104) 1 1 p x2 + p x3 . ! 3 = . (105) 3 3 r 2 1 6D p x2 + p x3 . x2 = p @ 2 A . = V31 = 0: 1 0 2D 2 b 1 A= R: m 1 (101) The corresponding R matrix is 0 1 1 R = 2D 2 @ 1 m 0 1 2 1 (102) b Now we are ready to …nd the eigenvectors of R: We have 0 1 1 1 0 1 2 1 A = (1 det @ )2 (2 ) 2(1 0 1 1 = 3 ) (103) +4 2 3 : The roots are vectors are 1 = 0. ! 2 = . = 4D 2 .Note that the derivatives vanish at x1 = x2 = x3 = 0: Using the de…nition Vij = @2 V. we get V11 V22 V12 V13 = 2D 2 = V33 . symmetric vibration. = 2D 2 = V21 = V23 = V32 . translation. 3 = 1: The corresponding orthonormal eigen It follows that the normal modes are 1 y1 = p x1 + 3 1 y2 = p x1 6 1 y3 = p x1 2 1 0 1 0 1 1 1 1 1 1 1 x1 = p @ 1 A . m 6 6 r 1 2D p x3 .
4.5 Exercises on Normal Mode Analysis

1. Obtain the normal modes and corresponding frequencies of a system consisting of two identical particles of mass m moving in one dimension x and interacting by the Lennard-Jones potential

V(x₁, x₂) = 4ε[(σ/x₁₂)¹² - (σ/x₁₂)⁶], where x₁₂ = |x₁ - x₂|.

2. Find the vibrational frequencies of the triatomic one-dimensional chain in the example above in the case when the mass of the central particle is four times larger (4m) but all other parameters of the system remain the same.

3. Find the harmonic vibrational frequency of the diatomic molecule A₂ with the bond potential V(r) = D(exp(-2ar²) - 2 exp(-ar²)).

4. Reconsider the system in Exercise 3 above generalized to the case when the particle masses are not equal, m₁ ≠ m₂.

5. A particle of mass m is hanging vertically from the ceiling in a harmonic spring described by the wall-particle potential V(x₁) = kx₁²/2. The gravitational potential can be taken to be V_G(x₁) = gmx₁. a) What is the equilibrium position and vibrational frequency of the vertical vibration? b) Obtain the equilibrium geometry, vibrational modes and corresponding frequencies for the case when a second particle of the same type is attached to the first particle by a harmonic spring corresponding to the potential V(x₁, x₂) = k(x₁ - x₂)²/2.

4.6 Appendix - Multidimensional Taylor Series Expansion

Recall the form of the one-dimensional Taylor series expansion,

V(r₀ + x) = V(r₀) + x (∂V/∂r)|_{r=r₀} + (x²/2)(∂²V/∂r²)|_{r=r₀} + ...
          = Σ_{n=0}^{∞} (xⁿ/n!) (∂ⁿV/∂rⁿ)|_{r=r₀},   (106)
where r = r₀ + x. The corresponding multidimensional Taylor series expansion is

V(r₀ + x) = V(r₀) + Σ_{i=1}^{N} x_i (∂V/∂r_i)|_{r=r₀} + (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} x_i x_j (∂²V/∂r_i∂r_j)|_{r=r₀} + ...
          = Σ_{n=0}^{∞} (1/n!) [(x · ∂/∂r)ⁿ V(r)]|_{r=r₀}.   (107)

Probably the easiest way to prove this expansion is to convert it to a one-dimensional expansion by setting x = s x̂, where x̂ is a directional vector and s tells us how far in this direction we have gone. Then we can, of course, expand with respect to s to get

V(r₀ + s x̂) = Σ_{n=0}^{∞} (sⁿ/n!) (∂ⁿ/∂sⁿ) V(r₀ + s x̂)|_{s=0}.   (108)

But now we note that

(∂/∂s) V(r₀ + s x̂) = Σ_{i=1}^{N} x̂_i (∂V/∂r_i) = x̂ · (∂/∂r) V,   (109)

(∂²/∂s²) V(r₀ + s x̂) = Σ_{i=1}^{N} Σ_{j=1}^{N} x̂_i x̂_j (∂²V/∂r_i∂r_j),   (110)

and so on. Thus we can write

(∂ⁿ/∂sⁿ) V(r₀ + s x̂) = (x̂ · ∂/∂r)ⁿ V(r₀ + s x̂).   (111)

Insertion of this result in 108 above yields the multidimensional form of the Taylor series expansion.
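The expansion can also be checked numerically. The sketch below is an added illustration; the two-variable test function and the expansion point are arbitrary choices. It builds the gradient and Hessian by finite differences and confirms that the second-order form of 107 reproduces the function to third-order accuracy for a small step:

```python
import numpy as np

# A smooth test function of two variables and an expansion point r0
f  = lambda r: np.exp(-r[0]**2 - 0.5*r[1]**2) + r[0]*r[1]
r0 = np.array([0.3, -0.2])

# Gradient and Hessian at r0 by central finite differences
h = 1e-5
n = len(r0)
grad = np.zeros(n)
hess = np.zeros((n, n))
for i in range(n):
    ei = np.zeros(n); ei[i] = h
    grad[i] = (f(r0 + ei) - f(r0 - ei)) / (2*h)
    for j in range(n):
        ej = np.zeros(n); ej[j] = h
        hess[i, j] = (f(r0 + ei + ej) - f(r0 + ei - ej)
                      - f(r0 - ei + ej) + f(r0 - ei - ej)) / (4*h**2)

# Second-order Taylor estimate for a small step x
x = np.array([0.005, -0.01])
taylor2 = f(r0) + grad @ x + 0.5 * x @ hess @ x
assert abs(taylor2 - f(r0 + x)) < 1e-5   # remaining error is O(|x|^3)
```

The symmetry V_ij = V_ji used throughout the normal mode analysis shows up here as the numerical symmetry of the Hessian.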
Part II

Series Expansions and Transforms with Applications

5 Fourier Series Expansion

We have already met the idea that function spaces can be turned into finite- or discrete infinite-dimensional vector spaces by the introduction of a basis set of functions. This is the key idea which has made quantum chemistry tractable for even reasonably large molecules. Long before quantum chemists, mathematicians had the same idea. The best example is the Fourier expansion, which we shall now discuss.

By introduction of a metric we obtain a metric vector space. The metric is defined by a scalar product satisfying, as we learnt in Chapter 1, a number of conditions. The most common definition of a scalar product in a vector space of functions is given by

(g, h) = ∫_D dτ g*(τ) h(τ),  (112)

where τ represents the independent variables and D the domain, i.e., the set of independent variable values over which the function will be defined. We can readily verify that functions on a domain D form a vector space. This definition of scalar product, called an L2 scalar product, means in our case that we have a Hilbert space. We have already used this scalar product in our discussion of Hückel theory.

5.1 Simple Form

Suppose now that we consider a function f(x) on the interval [-π, π]. The Fourier basis functions are then the functions {cos nx}, n = 0, 1, ..., ∞, together with {sin nx}, n = 1, ..., ∞. They
are all orthogonal in our Hilbert space,

∫_{-π}^{π} dx sin mx sin nx = ∫_{-π}^{π} dx cos mx cos nx = π δ_{m,n}, for m, n > 0,

∫_{-π}^{π} dx sin mx cos nx = 0, for all m, n,

cos 0x = 1 → ∫_{-π}^{π} dx cos 0x cos mx = 2π δ_{0,m}, ∫_{-π}^{π} dx cos 0x sin nx = 0, for n > 0.   (113)

Using these functions as basis functions we can expand f(x) as

f(x) = a₀ + Σ_{n=1}^{∞} (a_n cos nx + b_n sin nx),   (114)

where the expansion coefficients can be found by taking scalar products of both sides of the equality 114 with respect to each basis function in turn. We get

a₀ = (1/2π) ∫_{-π}^{π} dx f(x),
a_n = (1/π) ∫_{-π}^{π} dx cos nx f(x), n = 1, ..., ∞,   (115)
b_n = (1/π) ∫_{-π}^{π} dx sin nx f(x), n = 1, ..., ∞.

This defines the Fourier expansion of f(x) on [-π, π], -π < x < π. The identity of the original and the expanded function is not absolute but holds in a "weak" sense, meaning that

(g, f) = (g, a₀ + Σ_{n=1}^{∞} (a_n cos nx + b_n sin nx))   (116)

for any function g(x) in our vector space. Although the mathematicians call this sense of equality weak, we find in physics and chemistry that it is quite strong. We rarely need to distinguish between weak and absolute point by point equality.
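The coefficient formulas 115 are easy to try out numerically. The sketch below is an added illustration; the test function f(x) = x² and the simple trapezoidal quadrature are choices made here, and the computed coefficients are compared with the known closed forms a₀ = π²/3, a_n = 4(-1)ⁿ/n², b_n = 0:

```python
import numpy as np

# Fourier coefficients of f(x) = x**2 on [-pi, pi] by numerical quadrature
x = np.linspace(-np.pi, np.pi, 20001)
f = x**2
dx = x[1] - x[0]

def integrate(y):                       # simple trapezoidal rule
    return np.sum((y[1:] + y[:-1])/2) * dx

a0 = integrate(f) / (2*np.pi)
a  = [integrate(f*np.cos(n*x)) / np.pi for n in range(1, 6)]
b  = [integrate(f*np.sin(n*x)) / np.pi for n in range(1, 6)]

assert abs(a0 - np.pi**2/3) < 1e-6
assert all(abs(a[n-1] - 4*(-1)**n/n**2) < 1e-5 for n in range(1, 6))
assert all(abs(bn) < 1e-9 for bn in b)  # x**2 is even, so all b_n vanish
```

The vanishing b_n anticipate the even/odd exercise at the end of this chapter.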
Note that all the basis functions are periodic with the period 2π, i.e., for an integer m we get

cos(n(x + 2πm)) = cos nx, sin(n(x + 2πm)) = sin nx, n = 1, ..., ∞.   (117)

It follows that the expanded function also is periodic with the period 2π,

f(x + 2πm) = f(x).   (118)

Thus the Fourier expansion outside the chosen interval [-π, π] produces a periodic function by the relation 118, irrespective of the behavior of the original function outside that interval.

5.2 Arbitrary Interval

The Fourier expansion above can easily be generalized to an arbitrary finite interval [x₀, x₀ + L]. This is done by recognizing that the variable transformation y = 2π(x - x₀)/L - π transforms the interval [x₀, x₀ + L] back to the interval [-π, π]. Thus our new basis functions are {cos(2πn(x - x₀)/L - nπ)} and {sin(2πn(x - x₀)/L - nπ)}. However, using the trigonometric relations

cos(θ - nπ) = (-1)ⁿ cos(θ), sin(θ - nπ) = (-1)ⁿ sin(θ),   (119)

and the fact that the sign of a basis function can be changed without difficulty, we can simplify the basis to the form {cos(2πn(x - x₀)/L)}, n = 0, 1, ..., ∞, together with {sin(2πn(x - x₀)/L)}, n = 1, ..., ∞. The Fourier expansion then becomes

f(x) = a₀ + Σ_{n=1}^{∞} (a_n cos(2πn(x - x₀)/L) + b_n sin(2πn(x - x₀)/L)), x₀ < x < x₀ + L,   (120)

where the expansion coefficients are obtained as

a₀ = (1/L) ∫_{x₀}^{x₀+L} dx f(x),
a_n = (2/L) ∫_{x₀}^{x₀+L} dx cos(2πn(x - x₀)/L) f(x), n = 1, ..., ∞,   (121)
b_n = (2/L) ∫_{x₀}^{x₀+L} dx sin(2πn(x - x₀)/L) f(x), n = 1, ..., ∞.
When can a function be expanded in a Fourier series on a finite interval? There are two answers to this question:

Theorem - Weak formulation: The Fourier basis functions form a complete set on the finite interval for which they were chosen.

Theorem - Strong formulation - Dirichlet's theorem: Suppose f(x) is well-defined on [x₀, x₀ + L], is bounded, and has only a finite number of maxima and minima and discontinuities. Let f(x) for other values of x be defined by periodicity, f(x + nL) = f(x) for n = integer. Then the Fourier series 120 for f(x) converges to (f(x+) + f(x-))/2 for all x.

The first theorem simply states that the Fourier expansion gives a unique mapping of f(x) into a vector in our vector space of functions, such that all scalar products with functions in our vector space are reproduced by the Fourier expansion. As noted above, this is nearly always sufficient for us. The second theorem is deeper and describes what happens at each point x, given some conditions on the function.

5.3 Complex form

At the minor cost of working with complex basis functions we can give the Fourier series expansion its most general form. Suppose that we want to consider a function f(x) on the interval [-L/2, L/2]. We can then use the basis functions {exp(i2πnx/L)}, n = -∞, ..., ∞, which satisfy the orthogonality relation

(e^{i2πmx/L}, e^{i2πnx/L}) = ∫_{-L/2}^{L/2} dx e^{-i2πmx/L} e^{i2πnx/L} = L δ_{m,n}, for m, n = integers.   (122)

The Fourier expansion then takes the form

f(x) = Σ_{n=-∞}^{∞} d_n e^{i2πnx/L},   (123)

with expansion coefficients defined by

d_n = (1/L) ∫_{-L/2}^{L/2} dx e^{-i2πnx/L} f(x), n = -∞, ..., ∞.   (124)
The relation between the complex and the real form of the Fourier expansion can be seen from the relation

f(x) = d_0 + Σ_{n=1}^∞ (d_n e^{i2πnx/L} + d_{−n} e^{−i2πnx/L})
     = d_0 + Σ_{n=1}^∞ [(d_n + d_{−n}) cos(2πnx/L) + i(d_n − d_{−n}) sin(2πnx/L)]
     = a_0 + Σ_{n=1}^∞ [a_n cos(2πnx/L) + b_n sin(2πnx/L)]. (125)

It follows that

a_0 = d_0, a_n = d_n + d_{−n}, b_n = i(d_n − d_{−n}),
d_n = (1/2)(a_n − i b_n), d_{−n} = (1/2)(a_n + i b_n), n = 1, 2, .... (126)

Example 1: Obtain the Fourier series expansion of the function f(x) = x on the interval −π < x < π.

Let us first use the original simple Fourier basis set. Using the relations (3.3) and (3.4) above we get

∫_{−π}^{π} dx x = ∫_{−π}^{π} dx x cos(nx) = 0,
∫_{−π}^{π} dx x sin(nx) = [−(x/n) cos(nx)]_{−π}^{π} + (1/n) ∫_{−π}^{π} dx cos(nx) = −(2π/n) cos(nπ).

Thus the coefficients a_0, a_1, a_2, ... all vanish. Noting that cos(nπ) = +1 for n = 0, 2, 4, ... and −1 for n = 1, 3, 5, ..., we get

x = 2(sin x − (1/2) sin 2x + (1/3) sin 3x − ...) = 2 Σ_{n=1}^∞ ((−1)^{n+1}/n) sin(nx).

5.4 Exercises on Fourier Series:

1. Prove that if the function f(x) on [−L/2, L/2] is even then all its Fourier expansion coefficients corresponding to sin-functions vanish, while if f(x) is odd the coefficients corresponding to cos-functions vanish. Note that an even function satisfies f(x) = f(−x) and an odd function satisfies f(x) = −f(−x).
2. Obtain the Fourier expansion of f(x) = x^2 on the interval [−r, r], using first the real form and then the complex form of the expansion.

3. Obtain the Fourier expansion of the function f(x) = x on the interval [−π, π]. Draw f(x) and its approximations f_n(x) = a_0 + ... + a_n cos nx + b_n sin nx for n = 0, 1 and 2. Discuss the observed approach of f_n to f with increasing n. Why might this particular function be a harsh test of the convergence of a Fourier expansion?

6 Fourier Transforms

The Fourier series expansion is applicable on a finite interval and to periodic functions on an infinite interval. Naturally one would like to be able to handle functions on the entire line −∞ < x < ∞, whether or not they are periodic. The required generalization to accomplish this will lead us to the Fourier transform. We shall start from the complex form of the Fourier series expansion and write it in a suggestive form as follows,

f(x) = Σ_{n=−∞}^∞ Δk_n f̃(k_n) e^{i k_n x}, (127)

where k_n = 2πn/L and Δk_n = 2π/L. The coefficient d_n has been renamed Δk_n f̃(k_n), so that

f̃(k_n) = (1/2π) ∫_{−L/2}^{L/2} dx e^{−i k_n x} f(x). (128)

Insertion into (127) yields

f(x) = Σ_{n=−∞}^∞ Δk_n (1/2π) ∫_{−L/2}^{L/2} dx′ e^{−i k_n x′} f(x′) e^{i k_n x}. (129)

Now we are ready to take the limit L → ∞. Instead of the discrete coefficients k_n we now get a continuous parameter k, and the sum over n becomes an integral over k. If all integrals converge properly we have

f(x) = ∫_{−∞}^∞ dk f̃(k) e^{ikx},
f̃(k) = (1/2π) ∫_{−∞}^∞ dx f(x) e^{−ikx}. (130)
We say that f̃(k) is the Fourier transform of f(x). Note that we have found a pair of functions which are images, in either x- or k-space, of one function. If we know f(x) we can generate f̃(k), and vice versa. The expressions for f(x) and f̃(k) look very similar, and can be made to look even more similar if we symmetrize the transform. We define a new transform f̄(k) by

f̄(k) = √(2π) f̃(k). (131)

The new transform can then be seen to satisfy the relations

f(x) = (1/√(2π)) ∫_{−∞}^∞ dk f̄(k) e^{ikx},
f̄(k) = (1/√(2π)) ∫_{−∞}^∞ dx f(x) e^{−ikx}. (132)

Example 2: Let us obtain the Fourier transform of the function f(x) = exp(−ax) for x > 0, f(x) = 0 for x ≤ 0. If a is real and greater than zero we have

f̄(k) = (1/√(2π)) ∫_{−∞}^∞ dx e^{−ikx} f(x) = (1/√(2π)) ∫_0^∞ dx e^{−(a+ik)x}
      = (1/√(2π)) [−e^{−(a+ik)x}/(a + ik)]_0^∞ = (1/√(2π)) · 1/(a + ik).

Finally, to more clearly expose the real and imaginary parts, we might write the result above in the form

f̄(k) = (1/√(2π)) (a − ik)/(a^2 + k^2).

6.1 The δ Function

By inserting the expression for the Fourier transform in the expression for the function f(x) we get

f(x) = (1/2π) ∫_{−∞}^∞ dk ∫_{−∞}^∞ dx′ f(x′) e^{−ikx′} e^{ikx}. (133)
By inverting the order of integration we find then that

f(x) = ∫_{−∞}^∞ dx′ f(x′) δ(x − x′), (134)

where the so-called δ function is defined by

δ(x − x′) = (1/2π) ∫_{−∞}^∞ dk e^{ik(x−x′)}. (135)

The definition of a δ function requires that it satisfy the integral relation

g(x) = ∫_a^b dx′ δ(x − x′) g(x′), if a < x < b. (136)

The δ function is a generalized sort of function, i.e., a limit of a sequence of functions, but there are many forms for it other than that in (135), e.g.,

δ(x − x′) = lim_{ε→∞} (ε/2) e^{−ε|x−x′|}. (137)

Next we shall consider the lengths of f(x) and its Fourier transform f̄(k) in their respective Hilbert spaces.

Parseval's Theorem: The L^2 norms of the function f(x) and its Fourier transform f̄(k) are the same, i.e.,

∫_{−∞}^∞ dx |f(x)|^2 = ∫_{−∞}^∞ dk |f̄(k)|^2. (138)

Proof:

∫_{−∞}^∞ dx |f(x)|^2 = ∫_{−∞}^∞ dx f(x)* f(x) (139)
 = ∫_{−∞}^∞ dx (1/√(2π)) ∫_{−∞}^∞ dk f̄(k)* e^{−ikx} (1/√(2π)) ∫_{−∞}^∞ dk′ f̄(k′) e^{ik′x}
 = ∫_{−∞}^∞ dk ∫_{−∞}^∞ dk′ f̄(k)* f̄(k′) (1/2π) ∫_{−∞}^∞ dx e^{i(k′−k)x}
 = ∫_{−∞}^∞ dk |f̄(k)|^2,

since

(1/2π) ∫_{−∞}^∞ dx e^{i(k−k′)x} = δ(k − k′). (140)
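Example 2 and Parseval's theorem both lend themselves to a numerical check. The sketch below (Python; the decay constant, grid sizes and cutoffs are arbitrary test values, not from the text) approximates the symmetrized transform (132) of f(x) = e^{−ax} (x > 0) by a midpoint sum, compares it with the closed form (a − ik)/(√(2π)(a^2 + k^2)), and then compares the two L^2 norms in (138).

```python
import cmath, math

A = 2.0                   # decay constant a > 0 (arbitrary test value)
N, XMAX = 20000, 10.0     # integration grid for the x integral
H = XMAX / N

def ft_numeric(k):
    """Midpoint-rule approximation of (1/sqrt(2 pi)) int_0^inf e^{-ikx} e^{-Ax} dx."""
    s = sum(cmath.exp(-(A + 1j * k) * (m + 0.5) * H) for m in range(N))
    return s * H / math.sqrt(2.0 * math.pi)

def ft_exact(k):
    """Closed form (a - ik)/(sqrt(2 pi)(a^2 + k^2)) from Example 2."""
    return (A - 1j * k) / (math.sqrt(2.0 * math.pi) * (A * A + k * k))

for k in (0.0, 1.0, 3.0):
    print(k, abs(ft_numeric(k) - ft_exact(k)))   # should be tiny

# Parseval (138): int |f|^2 dx = 1/(2A) should equal int |f_bar|^2 dk
norm_x = 1.0 / (2.0 * A)
K, M = 400.0, 400000                             # k-space cutoff and grid
HK = 2.0 * K / M
norm_k = sum(abs(ft_exact(-K + (m + 0.5) * HK)) ** 2 for m in range(M)) * HK
print(norm_x, norm_k)
```

The slowly decaying 1/k^2 tail of |f̄(k)|^2 is why the k-space cutoff must be taken large; the x-space norm, by contrast, converges almost immediately.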
Note that according to the L^2 metric, f(x) and f̄(k) have the same length, i.e., the integrals in (138) are the same two scalar numbers.

6.2 Properties of the Fourier Transform

Existence Theorem: The Fourier transform of f(x), either f̃(k) or f̄(k), exists if f(x) satisfies the Dirichlet theorem on any finite interval and moreover ∫_{−∞}^∞ dx |f(x)| < M < ∞.

1. Linearity: Let FT(g) denote either g̃ or ḡ, and let α and β be scalars. Then

FT(αf + βg) = α FT(f) + β FT(g). (141)

2. Fourier transform of derivatives:

FT(d^n f(x)/dx^n) = (ik)^n FT(f(x)). (142)

3. Convolution theorem: If the function g(x) is a convolution of f and h, i.e.,

g(x) = ∫_{−∞}^∞ dy f(x − y) h(y), (143)

then the Fourier transform of g satisfies

g̃(k) = 2π f̃(k) h̃(k), ḡ(k) = √(2π) f̄(k) h̄(k). (144)

6.3 Fourier Series and Transforms in Higher Dimension

Both the Fourier series expansion and the Fourier transform are readily generalized to higher dimension. This is accomplished by forming products of one-dimensional Fourier basis functions and using these products as basis functions in the higher dimensional spaces. Suppose x is an n-dimensional vector and the domain D of the function g(x) is rectangular, −L_i/2 < x_i < L_i/2 for i = 1, ..., n. Then, if g(x) is a properly behaved function on D, it can be expanded as

g(x) = Σ_{j_1=−∞}^∞ ... Σ_{j_n=−∞}^∞ d_{j_1...j_n} e^{i2π(j_1 x_1/L_1 + ... + j_n x_n/L_n)}, (145)
where the expansion coefficients are given by

d_{j_1...j_n} = (1/(L_1 ··· L_n)) ∫_D dx g(x) e^{−i2π(j_1 x_1/L_1 + ... + j_n x_n/L_n)}. (146)

Removing the restriction to a finite domain, we can define the Fourier transform as

g̃(k) = (1/(2π)^n) ∫ dx g(x) e^{−ik·x}, (147)

where both k and x are n-dimensional vectors. The function g(x) can then be expressed as

g(x) = ∫ dk g̃(k) e^{ik·x}. (148)

The symmetrized Fourier transform can be obtained by noting that

ḡ(k) = (2π)^{n/2} g̃(k), (149)

in which case we have

g(x) = (1/(2π)^{n/2}) ∫ dk ḡ(k) e^{ik·x}. (150)

This is the higher dimensional form of the Fourier transform.

6.4 Exercises on Fourier Transforms:

1. What is the Fourier transform of a δ function? Note how in (137) the δ function is defined as a limit δ(x − x′) = lim_{ε→∞} f_ε(x − x′) with a particular choice of function f_ε(x − x′). Can you think of another choice of function f_ε which will still generate the δ function in the same limit? Show that your chosen form of f_ε is valid.

2. Prove the convolution theorem, i.e., starting from (143) show (144).

3. Calculate the Fourier transform of the function f(x) = sin(px) for −L ≤ x ≤ L, f(x) = 0 for |x| > L. Draw simple figures to show how the transform f̄(k) varies with L.
4. Prove Plancherel's formula

∫_{−∞}^∞ dx f(x)* g(x) = ∫_{−∞}^∞ dk f̄(k)* ḡ(k),

i.e., show that the scalar product (L^2) is preserved as we go from function space to (symmetrized) Fourier transform space.

Applications of Fourier Series and Transforms

7 Diffraction

Natural areas of application of Fourier series and transforms are quantum dynamics and electrodynamics, since the plane wave

F(t, x) = F_0 e^{i(k·x − ωt)} (151)

is, in the absence of a variable external field, a solution of both the Schrödinger and the Maxwell equations. The phenomenon of diffraction arises when particles are placed in the path of propagating plane waves representing either particles or electromagnetic radiation. The particles in a sample scatter the radiation in all directions, and a detector placed at a large distance measures an intensity of radiation which is sensitive to the orientation of the particles if the irradiated sample is ordered, as in a crystal. Even if the particles are disordered, as in a glass or a liquid, the intensity tells us something about the structure of the sample.

It is important to note that both the wave function in the case of particle motion and the electromagnetic field we shall discuss below are amplitudes. The probability density and the light intensity, respectively, are obtained by taking the absolute magnitude squared,

I(t, x) = |F(t, x)|^2. (152)

For the plane wave above the intensity turns out to be a constant. We shall see below, however, that scattered light is not characterized by a constant intensity. We shall assume that we are dealing with radiation, remembering that our results could apply as well to plane wave particle motion.
7.1 Single Particle Scattering

If a small spherically symmetric particle is placed in the path of a propagating plane wave it will scatter some of the radiation, i.e., it will absorb and re-emit the radiation, becoming itself a radiative source. The emitted radiation will have spherical symmetry, but it retains the wavelength and frequency of the original plane wave. Thus the amplitude of the electromagnetic field will have the form

F_j(t, x) = F_j e^{i(k|x − x_j| − ωt)} / |x − x_j|, (153)

where x_j is the location of the particle, k is the wave vector of the plane wave, i.e., its length k is related to the wavelength λ by k = 2π/λ, and ω is the frequency of the radiation. Note that ω = 2πν = kc. The corresponding intensity of the radiation from a single particle is

I_j(t, x) = |F_j|^2 / |x − x_j|^2. (154)

Note that the denominator is the square of the distance from the particle. Flux conservation dictates this form of the intensity: it accounts for the fact that the same flux of radiation intensity is passing through a spherical surface which grows like 4π|x − x_j|^2 with distance. This follows from our assumption that the scattering process is elastic, i.e., energy conserving. The amplitude factor F_j is given by

F_j = α_j e^{ik·x_j}, (155)

where the exponential phase factor accounts for the phase of the plane wave at the particle and α_j is a constant independent of the particle location x_j. It will reflect the size of the particle (its cross section) and the process of absorption and re-emission.

7.2 Many-Particle Scattering - Crystal Diffraction

We now consider the case of scattering from a crystal, i.e., from a three-dimensional array of particles located at the positions {x_j}. The scattered field F_s(t, x) is now a sum over the fields of all the particles,

F_s(t, x) = Σ_j F_j(t, x) = Σ_j α_j e^{ik·x_j} e^{i(k|x − x_j| − ωt)} / |x − x_j|. (156)
Note that x is the vector taking us from the origin, which we shall place in the sample, to the detector. The vector x_j takes us from the sample origin to the specific particle j. These two vectors lie in a plane, and the angle between them we shall call θ_j. We now apply an approximation valid when the irradiated sample is of much smaller length scale than the distance from sample to detector, as it would be for a typical crystal sample in a plane wave beam of macroscopic width. It is easy to see that if x ≫ x_j then to a good approximation we have

|x − x_j| ≈ x − x_j cos θ_j. (157)

We then find the much simplified form for the scattered field

F_s(t, x) = α (e^{i(kx − ωt)}/x) Σ_j e^{−iΔk·x_j}, (158)

where Δk = (k x/x) − k and we have assumed identical scatterers, α_j = α, as in a monatomic crystal. Here we have neglected all terms of order x_j/x or smaller.

7.2.1 Constructive and destructive interference:

The observed intensity at x is given by the absolute magnitude squared of the field,

I(x) = |F_s(t, x)|^2 = |α|^2 |x|^{−2} |A|^2, (159)

where all dependence on particle positions is collected in the amplitude factor

A = Σ_j e^{−iΔk·x_j}. (160)

In a crystal with a monatomic unit cell the particle positions can be given by

x_j = m_j a + n_j b + p_j c, (161)

where a, b, c are primitive translational vectors and m_j, n_j, p_j are integers. For simplicity we assume that the crystal sample is described by the integers m = 0, ..., M, n = 0, ..., N, p = 0, ..., P. We then get

A = Σ_{m=0}^M e^{−imΔk·a} Σ_{n=0}^N e^{−inΔk·b} Σ_{p=0}^P e^{−ipΔk·c} (162)
  = [(e^{−i(M+1)Δk·a} − 1)/(e^{−iΔk·a} − 1)] · [(e^{−i(N+1)Δk·b} − 1)/(e^{−iΔk·b} − 1)] · [(e^{−i(P+1)Δk·c} − 1)/(e^{−iΔk·c} − 1)].
But note now that

|Σ_{m=0}^M e^{−imΔk·a}| is of order M^{1/2} generally, but = M + 1 for Δk·a = 2πq, q integer. (163)

Thus, if M, N, P are large integers, as they would be for a typical crystal sample in a plane wave beam of macroscopic width, we find strong intensity peaks when x and k are such that

Δk·a = 2πq, for q integer,
Δk·b = 2πr, for r integer,
Δk·c = 2πs, for s integer. (164)

These are called Laue's equations. When they are all satisfied we get a strong peak in the intensity. For which Δk-vectors will the Laue conditions all be satisfied? They will be satisfied for Δk of the form

Δk = q Â + r B̂ + s Ĉ, (165)

where

Â = 2π (b × c)/[a·(b × c)], B̂ = 2π (c × a)/[a·(b × c)], Ĉ = 2π (a × b)/[a·(b × c)]. (166)

Note that Â, B̂, Ĉ are defined with the help of the vector product concept; see the Beta Handbook, Section 3.4. The vector product b × c is a vector orthogonal to b and c and of length bc sin θ, where θ is the angle between the vectors. The denominator is the volume of the parallelepiped formed by the vectors a, b, c, possibly with a negative sign. Thus we can enter the vectors in any order without affecting the Laue conditions. The vectors Â, B̂, Ĉ serve as primitive translational vectors in k-space, and they determine the primitive translational vectors a, b, c and thereby the crystal structure.

Example 1 - Simple cubic crystal: The simple cubic crystal has orthogonal primitive translational vectors a, b, c of equal length. Let us take this length, i.e., the separation between neighboring atoms, to be 4 Å. Let a, b, c lie along the three axial directions in our Cartesian coordinate system, i.e.,

Â = (2π/4) x̂ Å⁻¹, B̂ = (2π/4) ŷ Å⁻¹, Ĉ = (2π/4) ẑ Å⁻¹,

where we recall that x̂ × ŷ = ẑ, ŷ × ẑ = x̂, ẑ × x̂ = ŷ. Peak intensities occur according to the Laue conditions when the wavevector shift Δk satisfies

Δk = (2π/4)(q x̂ + r ŷ + s ẑ) Å⁻¹, (167)

where q, r, s are integers. Note that the length of Δk is

Δk = (2π/4)(q^2 + r^2 + s^2)^{1/2} Å⁻¹.

Thus the smallest Δk, not including the forward scattered light at q = r = s = 0 which is submerged in the unscattered light, is 2π/4 Å⁻¹. Since Δk is obtained by a rotation of the wavevector k of the incident light, it follows that Δk must be less than 2k. If 2k < 2π/4 Å⁻¹, i.e., k < π/4 Å⁻¹, then no Laue peak can be observed. Recalling that k = 2π/λ, where λ is the wavelength of the light, we see that we should have 2π/λ > π/4 Å⁻¹, i.e., λ < 8 Å, in order for Laue peaks to be observed. Thus the wavelength should be less than twice the particle spacing in the lattice. It might be convenient to take λ in our case to be about 0.5 Å or so. This would ensure that a number of peaks would be observable, but not so many as to crowd the resolution of the detector.

7.2.2 Scattering from a continuous medium:

In the case of a fluid the particle positions are not nicely ordered on a lattice but more or less randomly distributed. The particle positions may then be described by a particle density ρ(x). If the particle density is ρ(x) we can obtain the scattered field as

F_s(t, x) = α (e^{i(k|x| − ωt)}/|x|) ∫ dx′ ρ(x′) e^{−iΔk·x′} = α (e^{i(k|x| − ωt)}/|x|) A(Δk),

where

A(Δk) = ∫ dx′ ρ(x′) e^{−iΔk·x′} = (2π)^{3/2} ρ̄(Δk). (168)
Note that the amplitude factor A(Δk) is directly proportional to the Fourier transform of the particle density ρ(x). We can generally only measure intensity directly,

I(Δk) = |A(Δk)|^2 |α|^2 / |x|^2. (169)

While A(Δk), if it is known for all Δk, determines ρ(x), the intensity I(Δk) contains less information. Note that

I(Δk) |x|^2 / |α|^2 = A(Δk) A*(Δk)
 = ∫ du ρ(u) e^{−iΔk·u} ∫ du′ ρ(u′) e^{iΔk·u′} (170)
 = ∫ dx [∫ du′ ρ(u′) ρ(u′ + x)] e^{−iΔk·x}
 = (2π)^{3/2} P̄(Δk), (171)

where the spatial correlation function P is defined by

P(x) = ∫ du′ ρ(u′) ρ(u′ + x),

and we have used the variable transformation x = u − u′. Thus the scattered intensity can tell us about the correlations in the particle positions of a disordered fluid.

7.3 Exercises on Diffraction:

1. Suppose you have a linear molecule consisting of a very large number of evenly spaced point particles on a straight line. Let the molecule be held fixed in a position orthogonal to an incoming plane wave field of wave vector k. What sort of diffraction pattern would be observed by a detector? How would you propose to determine the particle spacing d from this pattern? What |k|-value would be most appropriate for the plane wave field?

2. Consider an adsorbed monolayer of atoms on a perfectly flat surface. Assume that the surface does not scatter light - only the adsorbed monolayer of atoms scatters light. a) Suppose that the monolayer forms a regular two-dimensional crystal lattice. What Laue conditions would apply to a diffraction experiment done to determine the crystal structure? b) If the adatoms were instead in a disordered fluid state, what could be learnt about its structure by the diffraction experiment?
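The contrast between Laue peaks and the weak background in (163) can be seen in a one-dimensional toy model. The sketch below (Python; the chain length M is an arbitrary choice, while the 4 Å spacing is taken from Example 1) evaluates |A(Δk)|^2 for a row of identical scatterers: at Δk·a = 2πq the intensity is (M + 1)^2, while a generic Δk gives a value of order one to M.

```python
import cmath, math

M = 100          # highest site index, atoms at m = 0..M
A_LAT = 4.0      # lattice spacing a in angstrom (Example 1)

def amp(dk):
    """Amplitude factor A(dk) = sum_{m=0}^{M} exp(-i m dk a)."""
    return sum(cmath.exp(-1j * m * dk * A_LAT) for m in range(M + 1))

peak = abs(amp(2.0 * math.pi / A_LAT)) ** 2   # Laue condition: dk * a = 2 pi
off = abs(amp(1.0)) ** 2                      # generic dk, destructive interference
print(peak, off)
```

The ratio of peak to background grows like M, which is why macroscopic crystals give essentially sharp diffraction spots.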
8 Fourier Spectroscopy

Our purpose here is to determine the frequency spectrum of a light source, i.e., the intensity I(ω) as a function of the frequency ω. The method is to use the interference pattern arising when a plane wave light beam is split into two parts traveling paths of different length before being recombined at the detector. The interference pattern as a function of the path length difference ΔL is in Fourier spectroscopy used to reveal the frequency spectrum. Thus we shall consider how to determine I(ω) from the intensity as a function of ΔL, i.e., from I(ΔL).

8.1 Monochromatic Light Source

Let us first note that a thin beam of light can be passed by a path determined by mirrors from light source to detector. If the path length is L then the amplitude of the electromagnetic field at the detector is given by

F_D(t, x) = F_0 e^{i(kL − ωt)}. (172)

Here we have assumed that we have a point source and an infinitely sharp beam. If the beam is not sharp there will be a distribution of path lengths, so that the amplitude at the detector is instead given by

F_D(t, x) = F_0 ∫ dL ρ(L) e^{i(kL − ωt)}, (173)

where ρ(L) is a probability density satisfying

∫ dL ρ(L) = 1. (174)

We shall always assume that in the absence of a beam splitter our beam is sharp. Suppose now that we introduce a beam splitter which splits the beam into two parts of equal amplitude but travelling different path lengths L_1 and L_2, with L_1 = L and L_2 = L + ΔL. The difference in path length is ΔL. The amplitude at the detector will then be the sum of the contributions from the two paths,

F_D(t) = (1/2) F_0 e^{i(kL_1 − ωt)} + (1/2) F_0 e^{i(kL_2 − ωt)} = (1/2) F_0 e^{i(kL − ωt)} (1 + e^{ikΔL}). (175)
The corresponding intensity is

I_D(t, ΔL) = |F_D(t, ΔL)|^2 = (1/4)|F_0|^2 |1 + e^{ikΔL}|^2 = (1/2)|F_0|^2 (1 + cos kΔL). (176)

Noting that for radiation propagating in vacuum we have ω = kc, where c is the velocity of light, we can write

I_D(t, ΔL) = (1/2)|F_0|^2 (1 + cos ωΔt), (177)

where Δt = ΔL/c is the difference between the times of propagation along the two paths. Thus the intensity shows a sinusoidal variation with ΔL or Δt, as shown by a plot of y = 1 + cos x.

[Figure: plot of y = 1 + cos x for −10 ≤ x ≤ 10, oscillating between 0 and 2.]

If we identify x in this plot with Δt then this would be the shape of the intensity variation for ω = 1. In general, the separation between neighbouring peaks would be 2π/ω. This allows the frequency of the radiation to be identified from the intensity as a function of Δt.

8.1.1 Several frequencies:

If the radiation is made up of intensity at a number of well-defined frequencies then the amplitude without the beam splitter becomes

F_D(t) = Σ_j F_j e^{i(k_j L − ω_j t)}, (178)
and the corresponding intensity is

I_D(t) = Σ_m Σ_n F_m* F_n e^{i((k_n − k_m)L − (ω_n − ω_m)t)} (179)
       = Σ_m |F_m|^2 + Σ_m Σ_{n≠m} F_m* F_n exp(i((k_n − k_m)L − (ω_n − ω_m)t)).

But note that the latter term above gives rise to a fluctuation in intensity in time. The long time average of this fluctuation vanishes,

Ī_D = lim_{T→∞} (1/T) ∫_0^T dt I(t) = Σ_m |F_m|^2. (180)

After introduction of the beam splitter we get

F_D(t) = (1/2) Σ_j F_j e^{i(k_j L − ω_j t)} (1 + e^{i k_j ΔL}), (181)

I(t) = (1/4) Σ_m Σ_n F_m* F_n e^{i((k_n − k_m)L − (ω_n − ω_m)t)} (1 + e^{i k_n ΔL})(1 + e^{−i k_m ΔL}), (182)

and after time averaging

Ī(Δt) = (1/2) Σ_m |F_m|^2 (1 + cos ω_m Δt). (183)

We shall now use a method which has the character of a "mathematical form of filtering". Suppose now that Ī(Δt) has been measured over the interval 0 < Δt < R < ∞, and let B(R, ω) be defined by

B(R, ω) = ∫_0^R ds cos(ωs) Ī(s), (184)

where s = Δt. With I_m = |F_m|^2 we then find that

B(R, ω) = (1/2) Σ_m I_m ∫_0^R ds cos(ωs)(1 + cos ω_m s) (185)
        = (1/2) Σ_m I_m [sin(ωR)/ω + sin((ω + ω_m)R)/(2(ω + ω_m)) + sin((ω − ω_m)R)/(2(ω − ω_m))].

In order to obtain this result it is convenient to use the identity

cos(ωs) cos(ω_m s) = (1/2)[cos((ω + ω_m)s) + cos((ω − ω_m)s)]. (186)
Note now that for ω → ω_m we have

sin((ω − ω_m)R)/(ω − ω_m) → R, (187)

which, if R is sufficiently large, gives rise to a blip in B(R, ω) at ω = ω_m,

B(R, ω_m) ≈ (1/4) I_m R.

This maximum in B(R, ω) can be used to identify the frequencies ω_m present and the corresponding intensities I_m = |F_m|^2.

Example 2: Suppose we have a light source of two spectral lines with frequencies ω = 1 and ω = 3 and unit intensities. Draw the cosine transform B(R, ω) for R = 5 and 10. For R = 5, B(R, ω) takes the form

B(R, ω) = (1/2) [2 sin(5ω)/ω + sin(5(ω+1))/(2(ω+1)) + sin(5(ω−1))/(2(ω−1)) + sin(5(ω+3))/(2(ω+3)) + sin(5(ω−3))/(2(ω−3))],

[Figure: plot of B(5, ω) against ω, showing broad peaks near ω = 1 and 3.]

with the shape shown in the figure above. If R = 10 we get

B(R, ω) = (1/2) [2 sin(10ω)/ω + sin(10(ω+1))/(2(ω+1)) + sin(10(ω−1))/(2(ω−1)) + sin(10(ω+3))/(2(ω+3)) + sin(10(ω−3))/(2(ω−3))]
[Figure: plot of B(10, ω) against ω, showing sharper peaks at ω = 1 and 3.]

with the shape shown in the figure above. Note the sharper positive peaks at ω = 1 and 3.

In the case of a continuous light source we have

Ī(s) = (1/2) ∫_0^∞ dω ρ(ω)(1 + cos ωs), (188)

where ρ(ω) is the light intensity as a function of the frequency.

8.2 Exercises on Fourier Spectroscopy:

1. Obtain the cosine transform B(R, ω) for this type of light. Supposing that we can obtain Ī(s) over the interval 0 < s < R < ∞, suggest a way by which ρ(ω) could be obtained, at least approximately, from B(R, ω). You may use the fact that

lim_{R→∞} (1/π) sin((x − x_0)R)/(x − x_0) = δ(x − x_0).

2. One problem in determining the frequencies and intensities from the peaks of B(R, ω) is to make sure that a chosen peak really is due to some ω = ω_m, and not just a fluctuation in the background. How could one guard against this possibility? Propose a method that as far as possible eliminates background peaks from a set of chosen high peaks. Consider the interpretation of the function B(R, ω) as derived above.
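Example 2 can be reproduced numerically. The sketch below (Python; the step sizes are arbitrary choices, not from the text) builds Ī(s) for two unit-intensity lines at ω = 1 and 3, evaluates the cosine transform (184) by a midpoint sum, and confirms that B(R, ω) rises to roughly I_m R/4 at the line frequencies while staying small in between.

```python
import math

OMEGAS = (1.0, 3.0)   # line frequencies, unit intensities (Example 2)
R = 10.0              # measurement interval length
N = 20000             # integration steps (arbitrary)
H = R / N

def intensity(s):
    """Time-averaged interference signal (183) for unit intensities."""
    return 0.5 * sum(1.0 + math.cos(w * s) for w in OMEGAS)

def B(omega):
    """Cosine transform (184): integral_0^R cos(omega s) I(s) ds, midpoint rule."""
    return H * sum(math.cos(omega * (m + 0.5) * H) * intensity((m + 0.5) * H)
                   for m in range(N))

for w in (0.5, 1.0, 2.0, 3.0):
    print(w, B(w))    # peaks near w = 1 and 3, of height about R/4 = 2.5
```

The finite peak height and the sin(ωR)/ω background terms match the closed forms drawn above; increasing R sharpens the peaks.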
For this and other reasons we shall continue by developing the Fourier transform into the Laplace transform.

9 Laplace Transforms and Applications

9.1 Derivation and Properties

With the help of Fourier transforms we can work with functions on the whole real axis, but there are still many functions that we cannot apply the Fourier transform to, due to the requirement of absolute integrability, ∫_{−∞}^∞ dx |f(x)| < M < ∞. Thus we cannot define the Fourier transform for the functions x^n, n ≥ 1. Suppose we are interested in f(x) on the interval [0, ∞]. In order to be able to apply the Fourier transform we shall first apply two operations to the function f(x): multiply it by the Heaviside step function H(x), defined by

H(x) = 0 for x < 0, H(x) = 1 for x ≥ 0, (189)

and multiply it by an exponential function exp(−cx). Thus the function has now been changed according to

f(x) → H(x) e^{−cx} f(x) = g(c, x). (190)

The new function g(c, x) can be Fourier transformed if f(x) is of exponential order, i.e., if there is an η > 0 such that

lim_{x→∞} e^{−ηx} f(x) = 0, (191)

and c is picked sufficiently large. The Fourier transform of g(c, x) is

g̃(c, k) = (1/2π) ∫_{−∞}^∞ dx H(x) e^{−(c+ik)x} f(x) = (1/2π) ∫_0^∞ dx e^{−(c+ik)x} f(x), (192)

and the corresponding expression for the function g(c, x) is

g(c, x) = ∫_{−∞}^∞ dk g̃(c, k) e^{ikx}. (193)
By multiplication of this last equation by e^{cx} we can recover f(x),

f(x) = ∫_{−∞}^∞ dk g̃(c, k) e^{(c+ik)x}, for x > 0. (194)

Since we are dealing with complex numbers anyway, we shall take the liberty to define the complex variable s = c + ik, i.e., we define the Laplace transform f̂(s) as

f̂(s) = 2π g̃(s) = ∫_0^∞ dx e^{−sx} f(x), for Re(s) = c large enough. (195)

The corresponding expression for f(x) can be obtained as

f(x) = (1/2πi) ∫_{c−i∞}^{c+i∞} ds e^{sx} f̂(s), for x > 0. (196)

Here we have changed the variable of integration from k to s and noted that if we integrate in the complex plane along a line parallel to the imaginary axis, from c − i∞ to c + i∞, then dk = ds/i. The form of the inverse Laplace transform in (196) invites the use of the residue theorem (see the Beta Handbook, Section 14.2). However, it is more common to do the inversion directly from a table of Laplace transforms, perhaps with the aid of some of the many simplifying properties of the Laplace transform described below.

9.1.1 Properties of the Laplace transform (LP = Laplace transform):

1. Linearity: LP(αf(x) + βh(x)) = α LP(f(x)) + β LP(h(x)).

2. Derivative theorem: LP((d/dx) f(x)) = s LP(f) − f(0).

3. Integral theorem: LP(∫_0^x dt f(t)) = (1/s) LP(f).

4. Exponential shift theorem: LP(e^{−px} f(x)) = f̂(s + p).

5. Convolution theorem: If g(x) = ∫_0^x dt f(t) h(x − t) then LP(g) = LP(f) LP(h).

The linearity follows directly from the linearity of the Fourier transform. The derivative and the integral theorems can be proven by partial integration, and the exponential shift theorem follows by inspection. Let us have a look at the convolution theorem.

Proof of the convolution theorem:

∫_0^∞ dx e^{−sx} ∫_0^x dt f(t) h(x − t) = ∫_0^∞ dt ∫_t^∞ dx e^{−sx} f(t) h(x − t) (197)
 = ∫_0^∞ dt ∫_t^∞ dx e^{−st} f(t) e^{−s(x−t)} h(x − t)
 = ∫_0^∞ dt e^{−st} f(t) ∫_t^∞ dx e^{−s(x−t)} h(x − t)
 = ∫_0^∞ dt e^{−st} f(t) ∫_0^∞ dr e^{−sr} h(r) = f̂(s) ĥ(s).

The important step is to change the order of integration and realize how the limits of integration change. The variable change r = x − t then completes the proof.

9.1.2 Small table of Laplace transforms:

Laplace transform    Function of x
1/s                  1
1/(s − p)            e^{px}
s/(s^2 + p^2)        cos px
p/(s^2 + p^2)        sin px
n!/s^{n+1}           x^n, n = 0, 1, 2, ...

9.2 Applications of Laplace Transforms

The Laplace transform finds many applications in chemistry. The most common application is probably to linear differential or integro-differential equations, where one makes use of the derivative, integral and convolution theorems to obtain algebraic equations for the transforms themselves, without derivatives or integrals. Let us consider some examples.

Example 1 - Unimolecular decomposition: Suppose we have a chemical reaction A → products, which is irreversible and proceeds according to a unimolecular rate law, i.e., if the concentration of A at time t is c(t) then the time development satisfies

dc(t)/dt = −k c(t), (198)
where k is the unimolecular rate coefficient. Find the time development of c from the initial value c(0).

This is a linear first order differential equation. We want c(t) for t > 0, so we apply the Laplace transform to both sides of the equation,

s ĉ(s) − c(0) = −k ĉ(s). (199)

Here we have used the derivative theorem. This equation for the Laplace transform can be solved to yield

ĉ(s) = c(0)/(s + k). (200)

From our small table of Laplace transforms it follows that

c(t) = c(0) e^{−kt}. (201)
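The closed-form solution (201) can be checked against a direct numerical integration of the rate law (198). The sketch below (Python; the rate coefficient, initial concentration and step size are arbitrary test values) uses simple Euler stepping.

```python
import math

K, C0 = 0.7, 2.0          # rate coefficient and initial concentration (arbitrary)
DT, NSTEP = 1e-4, 50000   # Euler step and number of steps (final time 5.0)

def c_exact(t):
    """Laplace-transform solution (201): c(t) = c(0) exp(-k t)."""
    return C0 * math.exp(-K * t)

# Euler integration of dc/dt = -k c
c = C0
for _ in range(NSTEP):
    c += DT * (-K * c)

print(c, c_exact(DT * NSTEP))   # the two values should nearly coincide
```

The agreement improves linearly as DT is reduced, which is the expected first-order accuracy of the Euler scheme.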
Example 2 - Coupled chemical reactions: Consider a set of coupled chemical reactions of the type A → B, B → C, C → product. The corresponding time dependent concentrations are c_A(t), c_B(t), c_C(t), and the rate equations are

dc_A(t)/dt = −k_1 c_A(t), (202)
dc_B(t)/dt = k_1 c_A(t) − k_2 c_B(t),
dc_C(t)/dt = k_2 c_B(t) − k_3 c_C(t).

These are coupled linear first order equations. We apply the Laplace transform to both sides of all three equations to obtain

s ĉ_A(s) − c_A(0) = −k_1 ĉ_A(s), (203)
s ĉ_B(s) − c_B(0) = k_1 ĉ_A(s) − k_2 ĉ_B(s),
s ĉ_C(s) − c_C(0) = k_2 ĉ_B(s) − k_3 ĉ_C(s).

The first equation can be solved as in the example above. We get

ĉ_A(s) = c_A(0)/(s + k_1), and c_A(t) = c_A(0) e^{−k_1 t}. (204)

Insertion in the second equation yields

ĉ_B(s) = c_B(0)/(s + k_2) + k_1 c_A(0)/[(s + k_1)(s + k_2)]. (205)

In order to find this transform by linear combinations of transforms in our small table we note that

1/[(s + k_1)(s + k_2)] = (1/(k_1 − k_2)) [1/(s + k_2) − 1/(s + k_1)]. (206)

Now it is straightforward to find the inverse Laplace transform. We get

c_B(t) = c_B(0) e^{−k_2 t} + [k_1 c_A(0)/(k_1 − k_2)] (e^{−k_2 t} − e^{−k_1 t}). (207)
Finally, we solve for ĉ_C(s) and find

ĉ_C(s) = c_C(0)/(s + k_3) + k_2 ĉ_B(s)/(s + k_3) (208)
 = c_C(0)/(s + k_3) + (k_2/(s + k_3)) [c_B(0)/(s + k_2) + k_1 c_A(0)/((s + k_1)(s + k_2))]
 = c_C(0)/(s + k_3) + (k_2 c_B(0)/(s + k_3))·(1/(s + k_2)) + (k_1 k_2 c_A(0)/(s + k_3))·(1/(k_1 − k_2))[1/(s + k_2) − 1/(s + k_1)]
 = c_C(0)/(s + k_3) + (k_2 c_B(0)/(k_2 − k_3)) [1/(s + k_3) − 1/(s + k_2)]
   + (k_1 k_2 c_A(0)/(k_1 − k_2)) { (1/(k_2 − k_3)) [1/(s + k_3) − 1/(s + k_2)] − (1/(k_1 − k_3)) [1/(s + k_3) − 1/(s + k_1)] }.

At this point the transform is of a form such that we can immediately identify the terms in our table. We find that the concentration of species C decays by a triple exponential time dependence, i.e.,

c_C(t) = a_1 e^{−k_1 t} + a_2 e^{−k_2 t} + a_3 e^{−k_3 t}, (209)

with

a_1 = k_1 k_2 c_A(0) / [(k_1 − k_2)(k_1 − k_3)],
a_2 = k_2 c_B(0)/(k_3 − k_2) + k_1 k_2 c_A(0)/[(k_1 − k_2)(k_3 − k_2)],
a_3 = c_C(0) + k_2 c_B(0)/(k_2 − k_3) + k_1 k_2 c_A(0)/[(k_2 − k_3)(k_1 − k_3)]. (210)
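The triple-exponential result (209)-(210) can be checked against a direct numerical integration of the rate equations (202). The sketch below (Python; all rate coefficients and initial values are arbitrary test values, chosen distinct so that no denominator in (210) vanishes) compares the closed form for c_C(t) with Euler stepping of the coupled system.

```python
import math

K1, K2, K3 = 1.0, 0.5, 0.25      # distinct rate coefficients (arbitrary)
CA0, CB0, CC0 = 1.0, 0.2, 0.1    # initial concentrations (arbitrary)

# Coefficients of c_C(t) = a1 e^{-k1 t} + a2 e^{-k2 t} + a3 e^{-k3 t}, eq. (210)
a1 = K1 * K2 * CA0 / ((K1 - K2) * (K1 - K3))
a2 = K2 * CB0 / (K3 - K2) + K1 * K2 * CA0 / ((K1 - K2) * (K3 - K2))
a3 = CC0 + K2 * CB0 / (K2 - K3) + K1 * K2 * CA0 / ((K2 - K3) * (K1 - K3))

def cc_exact(t):
    return (a1 * math.exp(-K1 * t) + a2 * math.exp(-K2 * t)
            + a3 * math.exp(-K3 * t))

# Euler integration of the coupled rate equations (202)
ca, cb, cc = CA0, CB0, CC0
dt, nstep = 1e-4, 40000          # integrate to t = 4.0
for _ in range(nstep):
    ca, cb, cc = (ca + dt * (-K1 * ca),
                  cb + dt * (K1 * ca - K2 * cb),
                  cc + dt * (K2 * cb - K3 * cc))

print(cc, cc_exact(dt * nstep))  # should nearly coincide
```

A quick consistency check is that a1 + a2 + a3 = c_C(0), as it must be, since (209) evaluated at t = 0 has to reproduce the initial condition.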
Example 3 - Harmonic oscillator: We have already encountered the harmonic oscillator in our discussion of normal modes of molecules in Chapter 2. The equation of motion is

m d^2x(t)/dt^2 + k x(t) = 0, (211)

where m is the mass and k the force constant. We shall now see that we can readily solve this equation by Laplace transformation, but first we must note that the derivative theorem can be iterated to apply to higher derivatives:

LP(d^2x(t)/dt^2) = s LP(dx(t)/dt) − v(0) = s^2 x̂(s) − s x(0) − v(0), (212)

where v is the velocity, i.e., the time derivative of x. Using this extension of the derivative theorem we can take the Laplace transform of the equation of motion for the harmonic oscillator and obtain

m s^2 x̂(s) − m s x(0) − m v(0) + k x̂(s) = 0. (213)

This equation yields

x̂(s) = (m s x(0) + m v(0))/(m s^2 + k) = (s x(0) + v(0))/(s^2 + k/m)
     = x(0) s/(s^2 + k/m) + v(0)/(s^2 + k/m). (214)
Now we can identify the terms in our small table of Laplace transforms and find

x(t) = x(0) cos(√(k/m) t) + v(0) √(m/k) sin(√(k/m) t). (215)

Example 4 - Debye-Hückel theory: Now we shall consider the screening of an ion in an electrolyte solution. Let φ(r) be the average electrostatic potential at the distance r from the ion. It must be spherically symmetric, so it depends on the distance but not on the direction. In the absence of other ions the field would have been of Coulombic form, φ(r) ∝ q/r, where q is the charge of the central ion. In the presence of the mobile ions in the solution the field is screened by the attraction of counterions and repulsion of coions. According to the Debye-Hückel analysis the concentration of an ion of species i is altered by the field to the form

c_i(r) = c_iB e^{−E_i/k_B T} = c_iB e^{−q_i φ(r)/k_B T}, (216)
where q_i is the charge of the ionic species in the screening atmosphere, c_iB is its bulk concentration, k_B is Boltzmann's constant and T is the temperature in Kelvin. From electrostatic theory we know that there is a direct relationship between the field and the charge density, expressed by Poisson's equation,

∇^2 φ = (∂^2/∂x^2 + ∂^2/∂y^2 + ∂^2/∂z^2) φ = −ρ/(εε_0), (217)

which in the case of spherical symmetry becomes

(1/r^2) (d/dr)(r^2 dφ(r)/dr) = −ρ(r)/(εε_0). (218)

The charge density can be expressed in terms of the concentrations of the charged species,

ρ(r) = Σ_i q_i c_i(r) = Σ_i q_i c_iB e^{−q_i φ(r)/k_B T}. (219)

If we now insert this expression for ρ in Poisson's equation we get the so-called Poisson-Boltzmann equation, which in spherical symmetry takes the form

(1/r^2) (d/dr)(r^2 dφ(r)/dr) = −Σ_i (q_i c_iB/εε_0) e^{−q_i φ(r)/k_B T}. (220)

At this point we note that the Poisson-Boltzmann equation is nonlinear in the field φ and therefore difficult to solve. Debye and Hückel investigated the weak coupling limit, when the interaction energy is small compared to k_B T, i.e.,

|q_i φ(r)/k_B T| ≪ 1. (221)

Then we can linearize the Boltzmann factor and get

ρ(r) = Σ_i q_i c_iB e^{−q_i φ(r)/k_B T} ≈ Σ_i (q_i c_iB − q_i^2 c_iB φ(r)/k_B T) = −Σ_i q_i^2 c_iB φ(r)/k_B T, (222)

since by electroneutrality in the bulk we have

Σ_i q_i c_iB = 0. (223)
The corresponding linearized Poisson-Boltzmann equation takes the form

(1/r²) d/dr (r² dφ(r)/dr) = κ² φ(r), (224)

where

κ² = Σ_i q_i² c_iB/(ε ε₀ k_B T). (225)

Now we have a second order linear differential equation to solve. Before we apply the Laplace transform method we change dependent variable to u(r) = r φ(r). The linearized Poisson-Boltzmann equation then becomes

d²u(r)/dr² = κ² u(r). (226)

By Laplace transformation of both sides we get

s² û(s) - s u(0) - u′(0) = κ² û(s),   û(s) = (s u(0) + u′(0))/(s² - κ²). (227)

Recalling that

1/(s² - κ²) = (1/2κ)(1/(s - κ) - 1/(s + κ)),
s/(s² - κ²) = (1/2)(1/(s - κ) + 1/(s + κ)), (228)

we find that u(r) has the form

u(r) = (1/2)(u(0) + u′(0)/κ) e^{κr} + (1/2)(u(0) - u′(0)/κ) e^{-κr}. (229)

However, for physical reasons we cannot tolerate the exponentially growing term, so we must have

u′(0) = -κ u(0). (230)

Thus we get the physical solution u(r) = u(0) e^{-κr} and, finally, the field is given by

φ(r) = u(0) e^{-κr}/r. (231)
The parameter u(0) is determined by the condition that the field approach the Coulomb potential of the bare charge as r → 0, i.e., u(0) = q/(4π ε ε₀) in SI units. Thus we finally obtain the following screened Coulomb potential

φ(r) = (q/(4π ε ε₀ r)) e^{-κr}. (232)

9.3 Exercises on Laplace Transforms:

1. Calculate the Laplace transforms of the functions cosh(x) and sinh(x).

2. Consider the isomerization reaction A → B and its reverse reaction B → A proceeding with the rate coefficients k_f and k_b, respectively, the forward and backward rate coefficients. Write down the corresponding coupled rate equations for the concentrations c_A and c_B and solve them. Determine the equilibrium concentrations and the rate at which equilibrium is approached.

3. If a harmonic oscillator is placed in a dissipative medium (e.g. a gas or a liquid) a frictional force is expected to appear which is proportional to the velocity. The corresponding equation of motion is

m d²x(t)/dt² = -k x(t) - γ dx(t)/dt. (233)

Solve this equation for x(t), t > 0, and describe the qualitative change in the time dependence as γ goes from -∞ to +∞. For what values of γ is the effect on the motion consistent with a friction acting on an oscillator?

4. Consider the Debye-Hückel theory of electrolytes above. a) Show that the solution obtained for φ(r) leads to a charge density ρ(r) which satisfies charge neutrality, i.e., the integrated charge density equals the central charge with reverse sign (-q). b) In the more realistic model where the ions are considered to be hard spheres of diameter d, such that ρ(r) = 0 for r < d, the solution for u(r) is u(r) = u(d) exp(-κ(r - d)), r > d. Determine u(d) so that charge neutrality is again satisfied.
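The screening constant κ of (225) sets the characteristic decay length 1/κ of the potential (232), the Debye length. The course text contains no code, but a small Python sketch can put numbers to the formula; the choice of a 0.1 M 1:1 electrolyte at 298.15 K with relative permittivity 78.5 is an illustrative assumption, not taken from the text.

```python
import math

def debye_kappa(conc_molar, T=298.15, eps_r=78.5):
    """Inverse Debye length kappa (1/m) for a 1:1 electrolyte, from eq. (225):
    kappa^2 = sum_i q_i^2 c_iB / (eps eps0 kB T)."""
    e = 1.602176634e-19       # elementary charge, C
    kB = 1.380649e-23         # Boltzmann constant, J/K
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    NA = 6.02214076e23        # Avogadro constant, 1/mol
    c = conc_molar * 1000.0 * NA                     # number density, 1/m^3
    kappa2 = 2.0 * e**2 * c / (eps_r * eps0 * kB * T)  # two species, q = +-e
    return math.sqrt(kappa2)

kappa = debye_kappa(0.1)
debye_length_nm = 1e9 / kappa   # about 1 nm for 0.1 M at room temperature
```

The familiar rule of thumb that the Debye length of a 0.1 M aqueous 1:1 salt is roughly a nanometre drops out directly.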
Part III
Differential Equations
10 Ordinary Differential Equations
Consider the equation

d³y/dx³ + dy/dx + x² yⁿ = g(x) (234)

for the function y(x). It is called an ordinary differential equation (ODE) because only derivatives with respect to a single independent variable, called x in this case, appear. It is said to be of 3rd order because the highest order derivative to appear is of this order. If n = 1 then the equation is linear, and if g(x) = 0 then it is called homogeneous while if g(x) ≠ 0 then it is said to be inhomogeneous. If n = 2, 3, ... then the equation is nonlinear.
10.1 First Order Equations

10.1.1 Simple Integration: y′ = f(x)

The simplest type of ordinary differential equation is of the form

y′ = f(x) (235)

and can be solved by direct integration of both sides,

y = ∫^x ds f(s). (236)

Note that a superscript prime indicates that a derivative with respect to x has been taken, a double prime indicates a double derivative, and so on. The integral on the right hand side is any primitive function F(x) of f(x). We could then write

y = ∫₀^x ds f(s) + C, (237)

to make explicit the fact that the solution requires specification of a constant C. The value of y(x) at one point is sufficient to specify C, e.g.,

y = ∫₀^x ds f(s) + y(0). (238)
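Equation (238) also suggests a direct numerical recipe: approximate the integral by a quadrature rule. A minimal Python sketch follows; the test function f(x) = cos x with y(0) = 1, and the trapezoidal rule, are illustrative choices, not from the text.

```python
import math

def solve_by_integration(f, y0, x, n=1000):
    """Solve y' = f(x) as y(x) = integral_0^x f(s) ds + y(0), eq. (238),
    using the composite trapezoidal rule with n panels."""
    h = x / n
    total = 0.5 * (f(0.0) + f(x))
    for i in range(1, n):
        total += f(i * h)
    return y0 + h * total

# f(x) = cos x, y(0) = 1, so the exact solution is y(x) = sin x + 1.
y_val = solve_by_integration(math.cos, 1.0, 1.0)
```

With 1000 panels the trapezoidal error is of order h², far below the accuracy one normally needs for such point values.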
Thus we see that solving an ODE involves finding a general solution including one or more undetermined parameters which are then determined by some boundary condition or point values of the solution or its derivatives. For a first order ODE only one parameter is involved.

10.1.2 Generalized integration:
A more general form of first order ODE is

y′ = f(x, y). (239)

If the function f(x, y) is separable,

f(x, y) = h(x)/g(y), (240)

then an implicit solution can be obtained as follows:

y′ g(y) = h(x), (241)

G(y) = H(x) + C. (242)

Here G(y) and H(x) are primitive functions of g(y) and h(x), respectively, and C is an undetermined constant. This equation for y must now be solved for y as a function of x. This can often, but not always, be done analytically.

Example 1: y′ = x is solved by simple integration, i.e., y = ∫^x ds s + C = x²/2 + C.

Example 2: y′ = -ky is solved by generalized integration, i.e.,

y′/y = -k,   ln y = -kx + C,   y = e^{-kx+C} = D e^{-kx}.
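As a quick consistency check of Example 2, the closed form y = D e^{-kx} can be compared with a crude stepwise integration of y′ = -ky. This Python sketch uses arbitrary parameter values and a simple forward stepping scheme (the scheme itself is developed later, in Part IV).

```python
import math

# y' = -k y with y(0) = D; exact solution y = D e^{-kx} (Example 2).
k, D = 2.0, 3.0
n = 10000
h = 1.0 / n
y = D
for i in range(n):
    y += h * (-k * y)       # one forward step of size h for y' = -k y

analytic = D * math.exp(-k * 1.0)
rel_err = abs(y - analytic) / analytic   # shrinks as the step h is reduced
```

The stepwise result agrees with the separable-integration answer to a few parts in ten thousand at this step size, and the agreement improves as h is reduced.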
Reduction by Variable Transformation:
An equivalent way of writing the general first order ODE (239) is

dy - f(x, y) dx = 0. (243)

This equation can be multiplied by another function g(x, y) to produce

g(x, y) dy - g(x, y) f(x, y) dx = 0. (244)

Thus we see that another way to write a general first order ODE is

P(x, y) dy + Q(x, y) dx = 0. (245)

Consider now the special case when P and Q are both homogeneous of degree n, i.e.,

P(λx, λy) = λⁿ P(x, y),   Q(λx, λy) = λⁿ Q(x, y). (246)

If we now make the variable transformation v = y/x and substitute xv for y in (245), then we get

P(x, xv) dy + Q(x, xv) dx = xⁿ P(1, v) dy + xⁿ Q(1, v) dx = 0, (247)

which yields

P(1, v) dy + Q(1, v) dx = 0,
P(1, v)(x dv + v dx) + Q(1, v) dx = 0, (248)
(P(1, v)/(v P(1, v) + Q(1, v))) dv + (1/x) dx = 0.

This equation is now of the form

v′ g(v) = -1/x, (249)

which can be solved by generalized integration to yield

G(v) = -ln x + C. (250)

Example 3: Consider the ordinary differential equation

dy/dx = (y²/2 - x²)/(x² + xy).

It can be rewritten in the form

(x² + xy) dy + (x² - y²/2) dx = 0.

Here we have

P(x, y) = x² + xy,   Q(x, y) = x² - y²/2.
Note that both P(x, y) and Q(x, y) are homogeneous of second order. Thus we introduce the new dependent variable v = y/x and find

y = xv,   dy = x dv + v dx.

Inserted in the ODE we get

x²(1 + v)(x dv + v dx) + x²(1 - v²/2) dx = 0,
(1 + v) x dv + (1 + v + v²/2) dx = 0,
((1 + v)/(1 + v + v²/2)) dv + (1/x) dx = 0.

Now we apply generalized integration to both sides and find

ln(1 + v + v²/2) = -ln(x) + C.

By exponentiation we obtain an implicit solution

1 + v + v²/2 = A/x,

and by multiplication by x²

x² + xy + y²/2 = Ax.

Since this is a second order equation in y we can solve for y explicitly as follows:

y² + 2xy + 2x² = 2Ax,
(y + x)² = 2Ax - x²,
y = -x ± √(2Ax - x²).

10.2 Method of Exact Differentials:

Suppose y is given implicitly as a function of x by the equation F(x, y) = 0. Differentiating with respect to x we get

∂F/∂x + (∂F/∂y) dy/dx = 0, (251)

which can be written as

dF = (∂F/∂x) dx + (∂F/∂y) dy = P dy + Q dx = 0. (252)

Thus we know that when the ODE can be written as an exact differential equal to zero as above, then F(x, y) = 0 yields an implicit solution. It remains to learn how to recognize when the ODE has this form. Note that for all physical functions we have

∂²F/∂x∂y = ∂²F/∂y∂x. (253)

Thus if the ODE (245) satisfies

(∂/∂x) P(x, y) = (∂/∂y) Q(x, y), (254)

then it is of exact differential form and we have the implicit solution F(x, y) = 0, where F(x, y) can be found from

∂F/∂y = P(x, y),   ∂F/∂x = Q(x, y). (255)

By integration we then find

F(x, y) = g(x) + ∫^y ds P(x, s),   F(x, y) = f(y) + ∫^x ds Q(s, y). (256)

These equations can then, with a bit of luck, be solved for F(x, y), which in turn yields y(x) if we can solve F(x, y) = 0.

Example 4: Consider the ODE

(x + y) dx + x dy = 0.

Noting that

(∂/∂x) x = 1 = (∂/∂y)(x + y),

we see that the equation is of exact differential form. Thus we can find a function F(x, y) such that an implicit solution for y(x) is obtained from F(x, y) = 0 by solving the equations

F(x, y) = g(x) + ∫^y ds x = g(x) + yx,

F(x, y) = f(y) + ∫^x ds (s + y) = f(y) + x²/2 + yx.

Noting that the right hand sides must be identical we get

g(x) + yx = f(y) + x²/2 + yx,

which yields

f(y) = C,   g(x) = x²/2 + C,

i.e.,

F(x, y) = yx + x²/2 + C = 0.

This equation can, finally, be solved for y(x) as

y(x) = -(x/2 + C/x),

where C is a parameter to be determined from, e.g., a point value of y(x). Suppose, for example, that we have y(1) = 1; then C = -3/2.

10.3 Method of Integrating Factors:

Consider the linear first order ODE of the form

y′ + m(x) y = N(x). (257)

Suppose that M(x) is a primitive function of m(x); then we note that

d/dx (y e^{M(x)}) = (y′ + m(x) y) e^{M(x)} = N(x) e^{M(x)}. (258)

Integrating both sides with respect to x we get

y(x) e^{M(x)} = ∫^x ds N(s) e^{M(s)} + C,

y(x) = e^{-M(x)} (∫^x ds N(s) e^{M(s)} + C). (259)

Again C is a parameter to be determined. Suppose that we know y(0), i.e., a point value of y(x). It is then convenient to let the integration go from 0 to x,

y(x) e^{M(x)} - y(0) e^{M(0)} = ∫₀^x ds N(s) e^{M(s)}, (260)

y(x) = e^{-M(x)} (∫₀^x ds N(s) e^{M(s)} + y(0) e^{M(0)}).

For simplicity we normally choose M(x) so that M(0) = 0.

Example 5: Consider the equation

y′ + ky = ax².

It is of the form appropriate for the method of integrating factors. A primitive function for k is kx, so we get

d/dx (y e^{kx}) = (y′ + ky) e^{kx} = ax² e^{kx},

y(x) = e^{-kx} (∫₀^x ds as² e^{ks} + y(0)).

The integration can be carried out by partial integration or by differentiation as follows:

∫₀^x ds s² e^{ks} = (∂²/∂k²) ∫₀^x ds e^{ks} = (∂²/∂k²) ((e^{kx} - 1)/k)
= e^{kx} (x²/k - 2x/k² + 2/k³) - 2/k³.

Thus we get

y(x) = y(0) e^{-kx} + a(x²/k - 2x/k² + 2/k³) - (2a/k³) e^{-kx}.

10.4 Factorization Method:

Suppose we consider a differential equation which can be factorized, i.e., written in the form

P₁(x, y, y′, ...) P₂(x, y, y′, ...) = 0. (261)

Then we obtain the solutions from each factor,

P₁(x, y, y′, ...) = 0,   P₂(x, y, y′, ...) = 0, (262)

and add them to a set of solutions for the full equation.

Example 6: Consider the ODE

(y′)² - (a + b) y′ + ab = 0.

It can be factorized as

(y′ - a)(y′ - b) = 0.

The factor solutions are y = ax + A and y = bx + B. Thus the solutions to the full equation are

y(x) = ax + A or y(x) = bx + B.

10.5 Linear Ordinary Differential Equations

A linear ODE can be either homogeneous, written in the form

L y(x) = 0, (homogeneous) (263)

or inhomogeneous,

L y(x) = f(x), (inhomogeneous) (264)

where the operator L can be defined as

L = g₀(x) + g₁(x) d/dx + g₂(x) d²/dx² + ···. (265)

Suppose now that we have found two linearly independent solutions of a homogeneous linear ODE,

L y₁(x) = 0,   L y₂(x) = 0; (266)

then it follows from the linearity that

L(α y₁(x) + β y₂(x)) = α L y₁(x) + β L y₂(x) = 0. (267)

Thus linear combinations of solutions are also solutions. There is a vector space of solutions, and in order to describe it we must try to find the largest set of linearly independent solutions of the homogeneous ODE. If {y_i(x)}, i = 1, ..., n, is such a set then it can serve as a basis set in the space of solutions, which can be written as

y(x) = Σ_{i=1}^n c_i y_i(x), (268)

where {c_i} is a set of scalar numbers which are free parameters to be determined by further information about the solution. The space of solutions of the corresponding inhomogeneous linear ODE is of the same dimension but shifted in function space by a function u(x) which is a so-called particular solution of the inhomogeneous ODE, i.e.,

L u(x) = f(x). (269)

The general solution of the inhomogeneous linear ODE can then be written as

y(x) = u(x) + Σ_{i=1}^n c_i y_i(x). (270)
10.5.1 Linear ODE's with Constant Coefficients:

In the special case when all the functions g_i(x) in the definition of L are scalar constants, the search for the space of solutions of the homogeneous ODE is much simplified by the factorization of L,

L = Dⁿ + p₁ Dⁿ⁻¹ + ··· + p_{n-1} D + p_n = (D - m₁)(D - m₂)(D - m₃) ··· (D - mₙ), (271)

where D = d/dx, {p_i} is the set of constants which define L, and {m_i} is the set of n roots of the nth order polynomial formed by L if D is treated as an ordinary scalar variable. Since the coefficients are constants we have

D m_i = m_i D, (272)

i.e., the derivative operator commutes with all coefficients, and D can be treated as an ordinary scalar in forming the factorized form of L above. It is now easy to see that the solution of (D - mₙ)y = 0 is also a solution of L y = 0: the last factor in L simply kills y if y is a solution of (D - mₙ)y = 0. However, we can write the factors in any order. Thus the solutions of all the equations

(D - m_i) y = 0,   i = 1, ..., n, (273)

will also be solutions of L y = 0. Recall now that

y′ - m_i y = 0 (274)

has the general solution

y_i(x) = c_i e^{m_i x}. (275)

If the roots {m_i} are all different then the corresponding solutions are all linearly independent and we get the general solution of L y = 0 in the form

y(x) = Σ_{i=1}^n c_i e^{m_i x}. (276)

If we have a root of degeneracy d, i.e., d roots are identical, then the corresponding terms are replaced as shown below:

Σ_{i=1}^d c_i e^{m_i x} → Σ_{i=1}^d c_i x^{i-1} e^{m₁ x}. (277)
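For a doubly degenerate root the replacement (277) predicts, e.g. for m = 1, the extra solution y = x eˣ of (D - 1)² y = 0, i.e. of y″ - 2y′ + y = 0. A small Python sketch verifies this, with the derivatives entered analytically:

```python
import math

# For the doubly degenerate root m = 1 of (D - 1)^2 y = 0, eq. (277)
# predicts the extra basis solution y = x e^x.  Plug it into y'' - 2y' + y:
def residual(x):
    y = x * math.exp(x)                # y   = x e^x
    yp = (1.0 + x) * math.exp(x)       # y'  = (1 + x) e^x
    ypp = (2.0 + x) * math.exp(x)      # y'' = (2 + x) e^x
    return ypp - 2.0 * yp + y          # should vanish identically

res = max(abs(residual(x)) for x in (0.0, 0.5, 1.0, 2.0))
```

The residual vanishes to machine precision at every test point, so x eˣ is indeed a solution alongside eˣ.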
The corresponding homogeneous equation is y 00 3y 0 + 2y = 0: In polynomial form it can be written (D 1)(D 2)y = 0: 72 . (D m1 )v(x) = f (x). (282) mn )y = v(x). Example 7: . where v(x) is a known function. How can we obtain a particular solution? Since any form of particular solution will do it is often possible to …nd one ” inspection” by a guess inby . (280) which we solve for v(x) by the integrating factor method. Z x m1 x v(x) = e (C1 + dsf (s)e m1 s ): 0 (281) Now we have obtained a new linear ODE of order n (D m2 )(D m3 ) (D 1. spired by the form of the ordinary di¤erential equation. More systematically.Find the solutions of the di¤erential equation y 00 3y 0 + 2y = 1: First we note that this is a linear second order ODE which is inhomogeneous with constant coe¢ cients. we can use the method of integrating factors iteratively in the following way: Ly = (D m1 )(D m2 ) (D mn )y = f (x): (278) De…ning a new function v(x) by v(x) = (D m2 )(D m3 ) (D mn )y.These last results are o¤ered without proof but can readily be veri…ed. Thus we can repeat the step and use the integrating facor method to peal o¤ one factor at a time until y(x) itself is found. (279) we get a new …rst order ODE.
Thus we see that the general solution of the homogeneous equation is

y(x) = c₁ eˣ + c₂ e²ˣ.

A particular solution can be found by inspection in the form y(x) = 1/2. It follows that the general solution of the inhomogeneous equation is

y(x) = 1/2 + c₁ eˣ + c₂ e²ˣ.

Example 8: Find the solution of the differential equation

y″ - 2y′ + y = e^{-2x}.

Note that the polynomial form of the corresponding homogeneous equation is

(D - 1)² y = 0.

It has a doubly degenerate root m = 1. In order to test the statement above concerning the solution in the case of degenerate roots, let us solve this equation by the iterative method of integrating factors. We first set v(x) = (D - 1)y and find then that

v′ - v = 0,

which yields v(x) = c₁ eˣ. Now we get from the definition of v(x) the equation

y′ - y = c₁ eˣ.

By the method of integrating factors this yields

y(x) = eˣ (c₂ + ∫₀^x ds c₁) = (c₂ + c₁x) eˣ,

in agreement with the statement above. Applying the iterative integrating factor method to the inhomogeneous equation we get

v′ - v = e^{-2x},

v(x) = eˣ (c₁ + ∫₀^x ds e^{-3s}) = c₁ eˣ + (1/3)(eˣ - e^{-2x}),

y′ - y = v(x) = (c₁ + 1/3) eˣ - (1/3) e^{-2x},

y(x) = eˣ (c₂ + (c₁ + 1/3)x - (1/9)(1 - e^{-3x}))
= (c₂ - 1/9 + (c₁ + 1/3)x) eˣ + (1/9) e^{-2x}.
Noting that c₁, c₂ are undetermined scalar constants, we can rewrite this result as

y(x) = c₂ exp(x) + c₁ x exp(x) + (1/9) exp(-2x).

There are often possibilities to shortcut the "brute force" type of solution by a "solution by inspection". In this case we could proceed as follows. We first note that the inhomogeneity in the form of exp(-2x) suggests that the solution will contain the same exponential. The simplest form of such a solution is a exp(-2x). Inserting this guess into the differential equation yields

4a exp(-2x) + 4a exp(-2x) + a exp(-2x) = exp(-2x).

It follows immediately that a = 1/9, and thus a particular solution has been found as

u(x) = (1/9) exp(-2x).

10.6 Known Second Order Differential Equations:

There are, of course, many ODE's which do not fall in any of the categories of solvable problems discussed above. It is good to know then that some such ODE's are well studied and documented in the literature. Here are some that you could look up in most texts on differential equations:

1. Legendre's equation:

(1 - x²) y″ - 2x y′ + n(n + 1) y = 0. (283)

2. Associated Legendre's equation:

(1 - x²) y″ - 2x y′ + (n(n + 1) - m²/(1 - x²)) y = 0. (284)

3. Bessel's equation:

x² y″ + x y′ + (x² - m²) y = 0. (285)

4. Hypergeometric equation:

x(1 - x) y″ + [c - (a + b + 1)x] y′ - ab y = 0. (286)
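Returning to Example 8, the full general solution y(x) = (c₂ + c₁x) eˣ + e^{-2x}/9 can be checked against the ODE with simple finite differences; the constants c₁, c₂ in this Python sketch are arbitrary test values.

```python
import math

# General solution of y'' - 2y' + y = e^{-2x} from Example 8:
#   y(x) = (c2 + c1 x) e^x + e^{-2x}/9, for any constants c1, c2.
c1, c2 = 0.4, -1.3   # arbitrary test values

def y(x):
    return (c2 + c1 * x) * math.exp(x) + math.exp(-2.0 * x) / 9.0

# Approximate y' and y'' by central differences and form the ODE residual.
x, h = 0.5, 1e-4
yp = (y(x + h) - y(x - h)) / (2.0 * h)
ypp = (y(x + h) - 2.0 * y(x) + y(x - h)) / h**2
residual = ypp - 2.0 * yp + y(x) - math.exp(-2.0 * x)
```

The residual is limited only by the finite-difference and round-off errors, and stays tiny for any choice of c₁, c₂, as linearity demands.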
10.7 Exercises on Ordinary Differential Equations:

Exercise 1: Obtain the general solution of the following ODE's, clearly indicating in each case the method you are using and what conditions on y(x) at x = 0 would completely determine the solution. Note that general analytical solutions may sometimes have to be in implicit form. Take the solution as far as you can towards explicit form and then leave it implicit if necessary.

a) y′ = x e^{kx},

b) y′ = y/(2√(xy) - x), (implicit solution is sufficient)

c) y′ = y²(2 + sin x),

d) y′ - (1 + x²) y = x³,

e) (y′)³ + (a + b + c)(y′)² y + (ab + bc + ac) y′ y² + abc y³ = 0,

f) y‴ + 2y″ + 4y′ = 0.

11 Partial Differential Equations

Partial differential equations are differential equations on multidimensional domains, i.e., there are several independent variables. There are many important examples in chemistry such as:

1. The one-dimensional wave equation for the displacement ψ(t, x):

∂²ψ/∂x² = (1/c²) ∂²ψ/∂t². (287)

2. The three-dimensional wave equation for the displacement ψ(t, x, y, z):

∇²ψ = ∂²ψ/∂x² + ∂²ψ/∂y² + ∂²ψ/∂z² = (1/c²) ∂²ψ/∂t². (288)
3. The Laplace equation for the electrostatic field φ(x, y, z):

∇²φ = 0. (289)

4. Poisson's equation for the electrostatic field φ(x, y, z):

∇²φ = g(x, y, z). (290)

5. The three-dimensional diffusion equation for the particle density ρ(t, x, y, z):

∇²ρ = (1/D) ∂ρ/∂t. (291)

6. The time-dependent Schrödinger equation for the wavefunction Ψ(t, x, y, z):

∂Ψ/∂t = -(i/ℏ)(-(ℏ²/2m) ∇² + V(x, y, z)) Ψ. (292)

7. The time-independent Schrödinger equation for the eigenfunction ψ(x, y, z):

(-(ℏ²/2m) ∇² + V(x, y, z)) ψ = E ψ.

Note that, with the exception of Poisson's equation which is inhomogeneous, all these equations are linear, homogeneous equations of second order. As in the case of ordinary differential equations, the partial differential equations have many solutions which become unique by the application of boundary conditions. The following types of boundary conditions are common:

Dirichlet conditions: ψ is known on the boundary.

Neumann conditions: (∇ψ)_n (i.e., the normal gradient) is known on the boundary.

Cauchy conditions: ψ and (∇ψ)_n are both known at the boundary.
11.1 Separation of Variables

We shall consider a few of the most commonly used methods of solving partial differential equations (PDE's). Perhaps the most commonly used method is to attempt to reduce the partial equation to ordinary form by separation of variables, i.e., we try to find solutions of product form. Thus if we are looking for a solution ψ(x, y, z) then we propose the form

ψ(x, y, z) = X(x) Y(y) Z(z). (293)

Upon insertion into the PDE this will, if the PDE is separable in these coordinates, generate three ordinary differential equations which can be attacked by the methods of the preceding chapter.

Example 1 - The vibrating string: Let us consider an elastic string such as a guitar string of length L. Its deformation from the straight line shape is resisted by a tension in the string. We shall limit our string to motion in one dimension only and let the deviation of the string from its resting (equilibrium) position be denoted by ψ(t, x), as a function of the time t and the position x along the axis of the string at rest. The applicable partial differential equation is the one-dimensional wave equation (287). The boundary conditions are

ψ(t, 0) = ψ(t, L) = 0, (294)

ψ(0, x) = f(x), (295)

(∂/∂t) ψ(t, x) = g(x) for t = 0. (296)

The first condition reflects the fact that the string is tied at the two ends. The second and third conditions give the initial position and velocity of each point in the string. We now assume that the solution can be described by the direct product

ψ(t, x) = X(x) T(t). (297)

By insertion in the applicable partial differential equation we get

T(t) (d²/dx²) X(x) = (1/c²) X(x) (d²/dt²) T(t). (298)

If we now divide by X(x) T(t) we find

(1/X(x)) (d²/dx²) X(x) = (1/(c² T(t))) (d²/dt²) T(t) = -α². (299)

Here α is a constant independent of both x and t. Note that our expectation that there be sinusoidal variations suggests that α be a real number. We then have two ordinary differential equations to solve. They can both be identified with the harmonic oscillator problem dealt with in both Chapter 2 and Chapter 6. The one in x is a boundary value problem,

(d²/dx²) X(x) = -α² X(x), (300)

with the condition that X(0) = X(L) = 0. The one in t is an initial value problem,

(d²/dt²) T(t) = -c² α² T(t), (301)

with the condition that T(0) and dT(t)/dt at t = 0 have predetermined values. These equations are then readily solved. We find the solutions

X(x) = A sin(αx + β), (302)

T(t) = B sin(αct + γ). (303)

The boundary conditions on X(x) lead to β = 0 and α = nπ/L, i.e.,

X_n(x) = A_n sin(nπx/L),   n = 1, 2, .... (304)

This is the same form of solution as for the 1D particle-in-the-box problem in quantum mechanics. The corresponding solution for T(t) is

T_n(t) = B_n sin(nπct/L + γ_n). (305)

Now we can see that the set of X_n functions form a set of normal modes of the string equivalent to the normal vibrational modes of molecules considered in Chapter 2. Had our string consisted of a chain of atoms the analogy would have been perfect. In our string model here we have simply taken the continuum limit when the particles become infinitely many and at the same time infinitely small, so as to preserve the mass per unit length in the chain. The general solution is obtained by superposing normal mode solutions so as to match the initial value conditions given. Thus we expand f(x) in the box eigenfunction basis set,

f(x) = Σ_{n=1}^∞ c_n sin(nπx/L), (306)

and the same type of expansion applies also to the time derivative g(x),

g(x) = Σ_{n=1}^∞ d_n sin(nπx/L). (307)

The general form of the solution is

ψ(t, x) = Σ_{n=1}^∞ A_n sin(nπct/L + γ_n) sin(nπx/L). (308)

Thus we can solve for A_n and γ_n from the relations

A_n sin(γ_n) = c_n,   A_n (nπc/L) cos(γ_n) = d_n. (309)

By dividing the first of these equations by the last we get

γ_n = arctan(c_n nπc/(d_n L)), (310)

and then A_n can be found as

A_n = c_n / sin(γ_n). (311)

11.2 Integral Transform Method

A very general method of solving partial differential equations is to introduce a basis set and convert the PDE into an algebraic equation for the expansion coefficients by projection onto the finite space spanned by the basis. We have already seen this method in use in the Hückel theory of electron structure in planar conjugated hydrocarbon molecules. As we noticed in introducing the Fourier series, all series expansion methods, and by extension also the transform methods, are basically the same. Thus they bring the possibility of reducing differential equations to algebraic form. We shall show by example how the Fourier transform can be used to solve a PDE in this way.
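The mode-matching relations (309)-(311) are easy to verify numerically. In this Python sketch the string parameters and the coefficients c_n, d_n are illustrative values only:

```python
import math

# Given the Fourier coefficients c_n (initial shape) and d_n (initial
# velocity) of one string mode, recover amplitude A_n and phase gamma_n
# via eqs. (310)-(311), then confirm the matching conditions (309).
L, c_speed, n = 1.0, 2.0, 3      # illustrative string length, wave speed, mode
c_n, d_n = 0.5, 0.25             # illustrative expansion coefficients
w_n = n * math.pi * c_speed / L  # the factor n*pi*c/L in eq. (309)

gamma_n = math.atan2(c_n * w_n, d_n)   # eq. (310): tan(gamma_n) = c_n w_n / d_n
A_n = c_n / math.sin(gamma_n)          # eq. (311)

err1 = abs(A_n * math.sin(gamma_n) - c_n)          # first relation of (309)
err2 = abs(A_n * w_n * math.cos(gamma_n) - d_n)    # second relation of (309)
```

Both residuals vanish to machine precision, confirming that (310)-(311) invert the pair of conditions (309). (Using `atan2` rather than `arctan` keeps the phase in the correct quadrant when d_n is negative.)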
Example 2 - Diffusion in an infinite solid: Consider an infinite solid in which we have a spatially varying temperature T(t, x) at t = 0. In a solid the particles only rarely leave their equilibrium lattice sites, but the vibrations also have a random character, and the transfer of energy between sites in a solid can be approximately described as a diffusional process. This analogy is reasonable since the random motion of particles is one of the main mechanisms of energy transport. The general relation for the time development of the temperature is Fick's law, which is

∇²T(t, x, y, z) = (1/λ) ∂T/∂t. (312)

Note that Fick's law is just a diffusion equation in three dimensions and λ is the corresponding diffusion coefficient. Since the temperature variation is initially confined to the x-direction it will remain so for all times. Thus we can look for a function T(t, x) and use the reduced version of Fick's law,

(∂²/∂x²) T(t, x) = (1/λ) (∂/∂t) T(t, x), (313)

under the boundary condition that T(0, x) is known. We now take the Fourier transform with respect to x of both sides. On the condition that the transform exists we get

-k² T̃(t, k) = (1/λ) (∂/∂t) T̃(t, k). (314)

From this equation follows, by our ODE solving methods,

T̃(t, k) = T̃(0, k) exp(-λk²t). (315)

Note that this result implies fast damping of high k components, i.e., nonsmooth features of T(t, x). Thus time evolution produces an increasingly smooth spatial distribution of temperature. At this point we need to consider the initial temperature distribution T(0, x). It seems unlikely that it vanish for x → ±∞. It would appear therefore that our clever idea to use the Fourier transform will fail, since

∫_{-∞}^{∞} dx |T(0, x)| = ∞. (316)
However, reality comes to the rescue. We can consider a temperature disturbance ΔT(t, x) which is defined by

ΔT(t, x) = T(t, x) - T_bg, (317)

where T_bg is a background temperature so defined that

∫_{-∞}^{∞} dx |ΔT(0, x)| < ∞. (318)

Now the temperature disturbance satisfies the same PDE as T itself, and it does have a Fourier transform. Thus we can proceed with our method. The general solution can be written as

ΔT(t, x) = (1/√(2π)) ∫_{-∞}^{∞} dk ΔT̃(t, k) exp(ikx)
= (1/√(2π)) ∫_{-∞}^{∞} dk ΔT̃(0, k) exp(-λk²t) exp(ikx). (319)

Green's function: Suppose now that the temperature disturbance is initially perfectly localised in x, i.e.,

ΔT(0, x) = δ(x - x₀). (320)

Then the Fourier transform is

ΔT̃(0, k) = (1/√(2π)) ∫_{-∞}^{∞} dx exp(-ikx) δ(x - x₀) = (1/√(2π)) exp(-ikx₀). (321)

The corresponding temperature disturbance is

ΔT(t, x) = (1/2π) ∫_{-∞}^{∞} dk exp(ik(x - x₀)) exp(-λk²t). (322)

This integral can be evaluated analytically. Note that

∫_{-∞}^{∞} dk exp(-ak² + bk) = ∫_{-∞}^{∞} dk exp(-a(k - b/2a)² + b²/4a) = √(π/a) exp(b²/4a). (323)
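The Gaussian integral (323) is easy to confirm numerically for real a and b; this Python sketch uses a wide trapezoidal sum with arbitrary parameter values:

```python
import math

# Check eq. (323): int_-inf^inf dk exp(-a k^2 + b k) = sqrt(pi/a) exp(b^2/4a),
# here for real, illustrative a and b, by a trapezoidal sum on [-15, 15]
# (the integrand is utterly negligible beyond that range for these values).
a, b = 0.8, 0.6
lo, hi, n = -15.0, 15.0, 60000
h = (hi - lo) / n
s = 0.5 * (math.exp(-a * lo**2 + b * lo) + math.exp(-a * hi**2 + b * hi))
for i in range(1, n):
    k = lo + i * h
    s += math.exp(-a * k**2 + b * k)
numeric = h * s
exact = math.sqrt(math.pi / a) * math.exp(b**2 / (4.0 * a))
```

For a rapidly decaying smooth integrand like this, the trapezoidal rule converges extremely fast, so the agreement is essentially at machine precision. The complex-b case needed in (322) then follows by the contour argument given next in the text.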
x) = cn n (0. x): (328) (327) The timepropagator exp(Lt) is also a linear operator. x) = dx T (0. x) = LT (t. x0 ) T (t. Linear timepropagation: In order to understand the interest in Green’ s functions consider the timedevelopment of the linear Fick’ law. x0 . x0 . s @ @2 T (t. x0 ) (x x0 ). (329) n 82 . x) = dx0 T (0. Z (325) T (0. x) = exp(Lt)T (0. x): @t @x2 This equation has the formal solution T (t. x0 ) exp( (x 4 t x0 )2 =4 t): (326) This expression means that each point of excess temperature broadens into p a Gaussian ball of excess with its maximum excess decreasing like 1= t p and its width increasing like t with time. (See Section 14. It follows that if X T (0. x) = T (t.This result holds even though b is complex as follows from the fact that the integrand is analytical and the path of integration in the complex plane can be moved to the real axis. we note that due to linearity and the fact that the initial temperature distribution can be written as an integral over such deltafunctions.2 in the Beta Handbook) Thus our temperature distribution becomes r 1 exp( (x x0 )2 =4 t): (324) T (t. x) = 4 t Finally.4 of the Beta Handbook). we can obtain the general solution as Z T (t. The solution in the case of a delta function disturbance is called a Green’ function by the physicists (See s Section 9. x). x) r Z 1 = dx0 T (0.
(331) then we get Z T (t.3 Exercises on Partial Di¤erential Equations: 1. Reduce the timedependent Schrödinger equation for particle motion in three dimensions to the form of two di¤erential equations . x): (330) x0 ) = dx0 T (0.then T (t. if the initial …eld can be written as an integral over functions. x0 ) (x x0 ). Solve the timeindependent Schrödinger equation to obtain the energy eigenfunctions of the twodimensional particleinthebox problem where the potential vanishes when 0 < x < L1 and 0 < y < L2 but is in…nite elsewhere. x) = dx0 T (0. just like the time dependent Fourier basis functions. 11. x) = X n cn n (t. The result has been either explicit solutions of an exact or approximate nature. Z T (0. x0 .one for the timedependence and one for the spatial dependence of the wavefunction. Part IV Numerical Methods 12 Numerical Solution of ODE’ s So far we have studied mathematical methods of an analytical form. 2. 83 . Consider the quantum mechanical motion of a particle in two dimensions. allow us to obtain the timedependent amplitude T (t. x) = Similarly. x0 ) T (t. x): (332) Thus the timedependent functions. x) = dx0 T (0. x) simply by integration. x0 ) exp(Lt) (x Z X n cn exp(Lt) n (0.
Now we shall proceed to study the numerical methods most commonly used by chemists. Perhaps the most commonly used method of all is the finite basis set method employed by all users of the standard quantum chemical methods; this method was discussed already in Chapters 1 and 2. The second most commonly used method could well be the molecular dynamics method of simulating both dynamical processes and equilibrium properties of chemical systems, the so-called MD method. It is based on the numerical solution of ordinary differential equations, e.g., by numerical differentiation or integration. Thus we shall focus on this next. The third most commonly used numerical method might be the Monte Carlo method of numerical integration, the so-called MC method of numerical simulation. It is used to obtain equilibrium properties of chemical systems through the evaluation of statistical mechanical averages.

12.1 Numerical Differentiation

Recall the definition of the derivative of the function f(x) at x,

f′(x) = lim_{h→0} (1/h)(f(x + h) - f(x)). (333)

This definition immediately suggests a numerical evaluation of the derivative by the relation

f′(x) ≈ (1/h)(f(x + h) - f(x)), (334)

for a small value of h. How small should h be? This is not so easy to determine in practice. It depends on the round-off error affecting the evaluation of f, which is dependent on the computer or other computational device you are using, on the intrinsic error in f′, and on our accuracy requirement. We shall leave aside the round-off error, which is machine dependent, and focus our attention on the intrinsic error of the approximation. This is the error which remains if we could evaluate the expression (334) exactly. In order to find this error we start from the Taylor series expansion of the function, which we assume to be analytical in the domain of interest,

f(x + h) = f(x) + h f′(x) + (1/2) h² f″(x) + (1/6) h³ f‴(x) + (1/24) h⁴ f⁗(x) + ···
= Σ_{n=0}^∞ (hⁿ/n!) f⁽ⁿ⁾(x). (335)
By subtraction of f(x) and division by h we readily find that

(1/h)(f(x + h) - f(x)) = f′(x) + (1/2) h f″(x) + ··· = f′(x) + O(h). (336)

By this notation we mean that the leading term in the error is proportional to h; if h is small enough, the term proportional to h will dominate the error. If h is small then h² is smaller, h³ is smaller still, etcetera. Thus we want numerical approximations with as high order as is needed to get sufficient accuracy. Fortunately, it is not difficult in this case to see how to evaluate f′(x) to higher order in h. The first trick is to recognize the merit of a central rather than the forward difference method used above. Note that

f(x + h/2) - f(x - h/2) = h f′(x) + O(h³). (337)

Thus we find

f′(x) = (1/h)(f(x + h/2) - f(x - h/2)) + O(h²). (338)

We can get even higher order accuracy by expressions for f′(x) involving more function evaluations. In the end the human algebraic labor tends to put an end to our ambitions for high accuracy, but it is certainly very important to know how to generate higher accuracy when needed.

Example 1: High order derivative evaluation:

f′(x) = (4/3h)(f(x + h/2) - f(x - h/2)) - (1/6h)(f(x + h) - f(x - h)) + O(h⁴). (339)

Let us now consider higher order derivatives. The second derivative is most easily evaluated by a sequential application of the definition of a derivative,

f″(x) = (1/h)(f′(x + h/2) - f′(x - h/2)) = (1/h²)(f(x + h) - 2f(x) + f(x - h)). (340)

The order of the error follows from the substitution of the Taylor series expanded forms of f(x + h) and f(x - h), and we have

f″(x) = (1/h²)(f(x + h) - 2f(x) + f(x - h)) + O(h²). (341)

Note that again this is a central difference form of approximation, so we get a second order error where for a noncentral form we would expect a first order error. As before we can get higher order accuracy by using more function evaluations.
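The error orders in (336) and (338) can be seen directly in floating-point arithmetic: halving h should cut the forward-difference error roughly in half and the central-difference error roughly by a factor of four. A Python sketch, with f(x) = sin x as an illustrative test function:

```python
import math

# Compare the forward difference (336) with the central difference (338)
# on f(x) = sin x at x = 1, where the exact derivative is cos x.
f, x = math.sin, 1.0
exact = math.cos(x)

def forward(h):
    return (f(x + h) - f(x)) / h          # eq. (334)/(336), error O(h)

def central(h):
    return (f(x + h / 2) - f(x - h / 2)) / h   # eq. (338), error O(h^2)

h = 1e-3
err_f1, err_f2 = abs(forward(h) - exact), abs(forward(h / 2) - exact)
err_c1, err_c2 = abs(central(h) - exact), abs(central(h / 2) - exact)
ratio_f = err_f1 / err_f2   # should be close to 2 for an O(h) method
ratio_c = err_c1 / err_c2   # should be close to 4 for an O(h^2) method
```

The observed ratios confirm the predicted orders; at much smaller h the round-off error set aside above would eventually take over.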
The third order derivative f'''(x) can similarly be obtained by an iterative application of the definition of a derivative,

    f'''(x) = (1/h)(f''(x + h/2) - f''(x - h/2))
            = (1/h^2)(f'(x + h) - 2f'(x) + f'(x - h))
            = (1/h^3)(f(x + 3h/2) - 3f(x + h/2) + 3f(x - h/2) - f(x - 3h/2)).   (342)

Again the central difference form of this approximation ensures that the error is of second order, and we find

    f'''(x) = (1/h^3)(f(x + 3h/2) - 3f(x + h/2) + 3f(x - h/2) - f(x - 3h/2)) + O(h^2).   (343)

Note that again this is a central difference form of approximation, so we get a second order error where for a noncentral form we would expect a first order error. By the same method we can generate numerical derivatives of any order of derivation and any order of accuracy. As before we can get higher order accuracy by using more function evaluations. The accuracy can be improved, e.g., by identifying the leading error term in the approximations above and subtracting the corresponding numerical derivative multiplied by the appropriate constant. Note, for example, that the central difference approximation for f'(x) can be written as

    f'(x) = (1/h)(f(x + h/2) - f(x - h/2)) - (1/24)h^2 f'''(x) + O(h^4).   (344)

Inserting the expression for f'''(x) from (343) we then get

    f'(x) = (1/h)(-(1/24) f(x + 3h/2) + (9/8) f(x + h/2) - (9/8) f(x - h/2) + (1/24) f(x - 3h/2)) + O(h^4).   (345)

12.2 Numerical Solution of ODE's

12.2.1 Direct Taylor Series Expansion Methods

Consider first the ordinary differential equation (ODE)

    y' = f(x, y).   (346)
Replacing y' by the simplest form of numerical derivative in (334) we find that

    (1/h)(y(x+h) - y(x)) = f(x, y(x)) + O(h),   (347)

which yields

    y(x+h) = y(x) + h f(x, y(x)) + O(h^2).   (348)

This equation propagates the solution y(x) from x to x+h at the cost of an error of order h^2. This propagation step can now be iterated to yield y(x+2h) as

    y(x+2h) = y(x+h) + h f(x+h, y(x+h)) + O(h^2).   (349)

Note that in the second propagation step we have an error of order h^2 in y(x+h) and hence in f(x+h, y(x+h)), if it is an analytical function of x and y. The latter term is multiplied by h, so the additional error is of order h^3. We can now repeat the propagation step to generate the solution over a grid of points. The error will grow in some way which depends on both our choice of method and on the function f(x, y). Fortunately it will not always tend in the same direction; error cancellation will happen to some extent. In the end the error growth remains a rather difficult aspect of numerical solutions of ODE's which needs to be checked in each application.

Let us now consider how we might improve the accuracy of the numerical solution of the first order ODE above. The obvious idea is to use the central difference definition of the derivative. Note that

    y'(x) = (1/(2h))(y(x+h) - y(x-h)) + O(h^2).   (350)

Thus we can insert the relation for y' from the ODE and obtain

    y(x+h) = y(x-h) + 2h f(x, y(x)) + O(h^3).   (351)

In order to use this type of propagation we need two values of y, y(x-h) and y(x), in order to generate the new value y(x+h). This is no problem once the propagation is running but will require a special starting procedure. A simple way to handle this is to use the lower order method above for the first step and then go over to the higher order central difference scheme. Naturally we can vary the steplength as we go along. Now we have seen the general character of the problem of solving ODE's by numerical means. We need to devise a propagation step with an error of as high order as needed and be prepared to construct a special start-up procedure to generate the information required for the propagation. We shall focus now on some general or particularly advantageous ways of propagating the solution.

We begin with a general method based directly on the Taylor series expansion. Note that we can always write

    y(x+h) = y(x) + h y'(x) + (1/2)h^2 y''(x) + (1/6)h^3 y'''(x) + ...   (352)

Thus if the ODE is of first order as discussed above then it is natural to insert the equation for y'(x) and obtain

    y(x+h) = y(x) + h f(x, y(x)) + O(h^2)   (353)

as above. If we want to increase the accuracy we need an expression for the next term in the Taylor series expansion. Such an expression can be obtained by differentiating the original ODE to get

    y''(x) = (d/dx) f(x, y) = (∂/∂x) f(x, y) + y'(x) (∂/∂y) f(x, y)
           = (∂/∂x) f(x, y) + f(x, y) (∂/∂y) f(x, y).   (354)

Now we can write

    y(x+h) = y(x) + h f(x, y(x)) + (1/2)h^2 [(∂/∂x) f(x, y) + f(x, y) (∂/∂y) f(x, y)]_{y = y(x)} + O(h^3).   (355)

This method illustrates that one can start directly from the Taylor series expansion and increase the accuracy by generating expressions for higher order derivatives by differentiating the original ODE.

Example 2: Consider the ordinary differential equation

    y' = x + y^2.   (356)

In this case we obtain by differentiation

    y'' = 1 + 2y y' = 1 + 2xy + 2y^3.   (357)

Thus we can write

    y(x+h) = y(x) + h (x + y^2(x)) + (1/2)h^2 (1 + 2x y(x) + 2y^3(x)) + O(h^3).   (358)
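Example 2 can be turned directly into a propagation scheme. The Python sketch below (function names are ours, purely illustrative) implements the step (358) and checks that halving h reduces the global error by roughly a factor of four, as expected for a method with a local error of order h^3:

```python
def taylor2_step(x, y, h):
    # One step of (358) for y' = x + y^2, using y'' = 1 + 2*x*y + 2*y^3 from (357)
    yp = x + y * y
    ypp = 1.0 + 2.0 * x * y + 2.0 * y ** 3
    return y + h * yp + 0.5 * h * h * ypp

def integrate(y0, x_end, h):
    # Propagate y(0) = y0 to x = x_end with fixed step h
    x, y = 0.0, y0
    for _ in range(round(x_end / h)):
        y = taylor2_step(x, y, h)
        x += h
    return y

ref = integrate(0.0, 1.0, 1e-4)            # near-exact reference (very small step)
e1 = abs(integrate(0.0, 1.0, 0.1) - ref)
e2 = abs(integrate(0.0, 1.0, 0.05) - ref)
print(e1 / e2)  # close to 4: second order global accuracy
```

The reference value is generated with the same scheme at a much smaller step, an assumption that is harmless here since its own error is then negligible.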
Consider now a second order ODE,

    y'' = f(x, y, y').   (359)

The corresponding natural propagating step is

    y(x+h) = y(x) + h y'(x) + (1/2)h^2 f(x, y(x), y'(x)) + O(h^3).   (360)

This equation shows a dependence on both y(x) and y'(x). This means that we should have initial information on these two functions and we must then propagate both forward. Thus we complement the propagating equation for y by the following propagating equation for y',

    y'(x+h) = y'(x) + h f(x, y(x), y'(x)) + O(h^2).   (361)

Solving these two equations in tandem we can generate the numerical solution of the second order ODE. Note that both y and its first derivative were needed initially and had to be propagated forward.

We now go to a third order ODE,

    y''' = f(x, y, y', y'').   (362)

The natural approximation based on the Taylor series expansion now becomes

    y(x+h) = y(x) + h y'(x) + (1/2)h^2 y''(x) + (1/6)h^3 f(x, y(x), y'(x), y''(x)) + O(h^4).   (363)

Now we need to know initially, and to propagate, y, y' and y''. Thus we append the propagating equations for y' and y'',

    y'(x+h) = y'(x) + h y''(x) + (1/2)h^2 f(x, y(x), y'(x), y''(x)) + O(h^3),
    y''(x+h) = y''(x) + h f(x, y(x), y'(x), y''(x)) + O(h^2).   (364)

The pattern is now clear. For an nth order ODE we need to know initially, and propagate, all lower order derivatives including the function y itself. The error in y(x) in the "natural approximation" is of order n+1 in h, and this order decreases in unit steps as we proceed to the derivatives. Although we shall not show this explicitly, it is clear that the accuracy can be increased by differentiating the ODE as shown in the first order case above.
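The tandem propagation (360)-(361) is easily tried out on the harmonic oscillator y'' = -y with y(0) = 0, y'(0) = 1, whose exact solution is sin x. The sketch below (illustrative Python, not from the course material) also checks the global order: since the y' update (361) carries an O(h^2) error per step, the error at fixed x roughly halves when h is halved:

```python
import math

def propagate(f, y, yp, x_end, h):
    # (360)-(361): propagate y and y' in tandem for y'' = f(x, y, y')
    x = 0.0
    for _ in range(round(x_end / h)):
        fv = f(x, y, yp)
        y = y + h * yp + 0.5 * h * h * fv   # (360)
        yp = yp + h * fv                    # (361)
        x += h
    return y

f = lambda x, y, yp: -y                     # harmonic oscillator y'' = -y
e1 = abs(propagate(f, 0.0, 1.0, 1.0, 0.01) - math.sin(1.0))
e2 = abs(propagate(f, 0.0, 1.0, 1.0, 0.005) - math.sin(1.0))
print(e1, e2)  # ratio near 2: first order global accuracy
```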
12.2.2 Runge-Kutta Methods

If you have understood the direct Taylor series expansion method described above it may well seem as if the problem is solved, in the sense that the algebra required to produce a solution to desired order of accuracy is straightforward. There is nothing wrong with these methods from the theoretical point of view, but the necessary operations, e.g. the evaluation of derivatives of the function f, are often inconvenient. Thus the direct Taylor methods are rarely used, with the exception of the central difference method applied to the second order Newton's equations, as we shall see. Most of the popular numerical methods for the solution of ODE's are instead what might be called implicit Taylor series expansion methods. They are justified by, and reducible to, the Taylor series expansion method, but by various tricks the necessary operations have been made more convenient. We shall now have a look at one of the most popular such implicit methods, the Runge-Kutta method.

The basic idea is to replace the evaluation of derivatives of f by additional evaluations of f itself. Note that

    f(x + Δx) = f(x) + Δx f'(x) + O((Δx)^2),   (365)

and it follows that

    f'(x) = (1/Δx)(f(x + Δx) - f(x)) + O(Δx).   (366)

Thus the simplest first order equation y' = f(x) can be solved to an error of order h^3 by

    y(x+h) = y(x) + h f(x) + (1/2)h^2 f'(x) + O(h^3)   (367)
           = y(x) + h f(x) + (1/2)h^2 (1/Δx)(f(x + Δx) - f(x)) + O(h^3).   (368)

Here we pick Δx to be a real number of the order of h. If we pick Δx to be h then we find

    y(x+h) = y(x) + (h/2)(f(x+h) + f(x)) + O(h^3).   (369)
We see that two evaluations of f have decreased the error by one order of h. We can go on and improve the error by further function evaluations. Note that the choice Δx = h has given us an added advantage: although two values of f appear in the propagation equation, one of them will be reused in the next step. Thus the number of new function evaluations per step is one, except for the first step when it is two.

What happens if we have the more general case when f depends on both x and y? Note that

    f(x + Δx, y(x + Δx)) = f(x + Δx, y(x) + Δx y'(x)) + O((Δx)^2)
                         = f(x + Δx, y(x) + Δx f(x, y(x))) + O((Δx)^2).   (370)

Thus if we let a superscript prime indicate a total derivative with respect to x,

    f'(x, y(x)) = (d/dx) f(x, y(x)),   (371)

then we get

    f'(x, y(x)) = (1/Δx)(f(x + Δx, y(x) + k) - f(x, y(x))) + O(Δx),   (372)

with

    k = Δx y'(x) = Δx f(x, y(x)).   (373)

The propagation equation can then be written

    y(x+h) = y(x) + h f(x, y(x)) + (1/2)h^2 (1/Δx)(f(x + Δx, y(x) + k) - f(x, y(x))) + O(h^3).   (374)

Again we could pick Δx = h and get

    y(x+h) = y(x) + (h/2)(f(x, y(x)) + f(x + h, y(x) + k)) + O(h^3).   (375)

Note that

    y(x) + k = y(x) + h f(x, y(x)) = y(x+h) + O(h^2).   (376)

Thus we have inserted a lower order propagation solution for y(x+h) in the higher order propagation equation. The Runge-Kutta method is thus essentially an iterative solution method.
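Equation (375), with k taken from (373) at Δx = h, is the classical second order Runge-Kutta scheme, commonly known as Heun's method. A minimal Python sketch (illustrative only), tested on y' = y where the exact solution is e^x:

```python
import math

def heun_step(f, x, y, h):
    # (375)-(376): k = h f(x, y) is the low-order predictor for y(x+h)
    k = h * f(x, y)
    return y + 0.5 * h * (f(x, y) + f(x + h, y + k))

f = lambda x, y: y            # y' = y with y(0) = 1; exact solution exp(x)
x, y, h = 0.0, 1.0, 0.01
for _ in range(100):          # integrate from x = 0 to x = 1
    y = heun_step(f, x, y, h)
    x += h
print(abs(y - math.e))        # small: the global error is of order h^2
```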
There are many different Runge-Kutta methods, all based on the same idea of using multiple function evaluations to raise the accuracy (see the Beta Handbook, Section 16.5). One commonly used scheme bringing the error to order h^4 for first order ODE's is as follows:

    y(x+h) = y(x) + (1/6)(k1 + 2k2 + 2k3 + k4),
    k1 = h f(x, y(x)),
    k2 = h f(x + h/2, y(x) + k1/2),
    k3 = h f(x + h/2, y(x) + k2/2),
    k4 = h f(x + h, y(x) + k3).   (377)

The Runge-Kutta methods can be applied to higher order ODE's and to coupled sets of ODE's. There are also many other elegant and intricate ways to solve ODE's. Normally one does not have to derive or program these methods oneself. They can be found in program libraries and mathematical programs such as Mathematica, Matlab, Mathcad and Maple. We shall discuss such programs later in this course. Examples of programs solving second order ODE's will be given in the next chapter on molecular dynamics simulation.

12.3 Exercises:

1. Derive an expression for f''(x) in terms of function values such that the error is of order h^4.

2. Obtain an estimate of f'(x) and f''(x) in terms of the function values f(x - h/2), f(x) and f(x + h/2) to the highest possible accuracy. Show your derivation and the order of the error in h.

3. Derive propagating equations which yield a numerical solution of the ODE y'' = ky + x^2 with an error of order h^4 per step of length h in x.

4. Verify explicitly for the ODE y' = x + y that the error in the Runge-Kutta propagating equation (8.45) is of fifth order in h.
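The scheme (377) can be sketched in a few lines of Python (illustrative code, not part of the original course material), applied here to the ODE of Exercise 4, y' = x + y with y(0) = 1, whose exact solution is y = 2e^x - x - 1:

```python
import math

def rk4_step(f, x, y, h):
    # One step of the classical Runge-Kutta scheme (377)
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: x + y            # the ODE of Exercise 4
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):               # integrate from x = 0 to x = 1
    y = rk4_step(f, x, y, h)
    x += h
exact = 2 * math.exp(1.0) - 2.0   # exact solution y = 2e^x - x - 1 at x = 1
print(abs(y - exact))             # tiny, consistent with a global O(h^4) error
```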
13 Molecular Dynamics Simulation

One could say that Newton started the development of molecular dynamics (MD) simulation when he proposed that systems and their dynamics, i.e. movement, were determined by potentials and corresponding forces according to what we now call classical mechanics. His original insight has been associated with the gravitational force acting on an apple falling from a tree, but his classical mechanics is now applied to objects of nearly all imaginable types, from planets in motion around a star to atoms and molecules performing their dynamics on microscopic time and length scales. Unfortunately, classical mechanics is not exactly but only approximately valid for atomic and molecular motions, and the accuracy of the "classical mechanical approximation" reaches its practical boundary in the middle of the field of chemistry. Thus we can understand most of the properties of the macroscopic phases on the basis of classical mechanics, but electronic structure and dynamics as well as vibrational dynamics of molecules and solids must be described by quantum mechanics. The latter type of dynamics is of particular interest to chemists.

Given the development of fast computers, the application of classical mechanics in the form of molecular dynamics, as we shall soon describe, has become an extremely powerful tool which is revolutionizing the field of chemistry. One reason for the great popularity of MD simulation is that it is relatively easy to implement on our ever more powerful computers. Thus MD simulation can be applied to problems of a complexity beyond all other methods, by chemists who do not require lengthy training to grasp the essential facts and features of the method. The other major reason for its popularity is that there are so many important unresolved problems, seemingly out of range for all other methods, that MD simulation is applied whether or not it is completely justified. In fact, the use of MD simulation is so pervasive that one could argue that it exceeds the scope motivated by its validity and timescale limitations. The hope is that one will always learn something of value. So far this hope seems well justified.

So what is molecular dynamics simulation? It is based on a number of propositions which might be summarized as follows:

1. Classical mechanics describes the equilibrium and dynamical properties of the relevant system to sufficient accuracy.

2. Potentials can be found which accurately describe the forces on the particles which make up the relevant system.
3. Where the actual physical system is unmanageably large, a small sample of the system still retains the essential behaviour of the actual system. The sample system can be chosen small enough to be tractable for MD simulation.

4. Relaxation processes and relevant dynamical phenomena occur on a time scale accessible to MD simulation.

These propositions are the subject of many ifs and buts, and clever tricks have been and are being developed to extend their validity. More problems are coming within range of the MD method every day, not least due to the continual improvement of computer capacity. Our purpose here is to illustrate the MD method in its simplest forms, leaving the large and complicated MD programs for later.

13.1 Simplest Case - One-Dimensional Oscillation

We shall begin by simulating the motion of a one-dimensional (1D) oscillator. The system is then defined by a mass m and a potential V(x). The trajectory is found by solving Newton's equation,

    d^2x/dt^2 = -(1/m) ∂V(x)/∂x = (1/m) F(x(t)).   (378)

When x(t) is known the velocity can be found by differentiation,

    v(t) = dx/dt.   (379)

The motion is completely described by the time development of the position and the velocity, i.e. by x(t) and v(t). These quantities will form what we call the trajectory described by the system as it moves with time. The main task in MD simulation is to obtain this trajectory for some initial condition or ensemble of initial conditions. Note that the force is in this case, as almost always, only dependent on the dependent variable x(t) itself. Thus we have to solve a second order ODE to obtain x(t). The simplest propagating equation is obtained by the direct Taylor series expansion method as follows,

    x(t+h) = x(t) + h v(t) + (h^2/2m) F(x(t)) + O(h^3).   (380)
Note that h is now a timestep. The simplest equation for the velocity is

    v(t+h) = v(t) + (h/m) F(x(t)) + O(h^2).   (381)

However, recalling the simplest Runge-Kutta method as in equation (8.43), we can improve the accuracy at essentially no additional cost in computation by using the average force between time t and t+h,

    v(t+h) = v(t) + (h/2m)(F(x(t)) + F(x(t+h))) + O(h^3).   (382)

This form of the velocity equation is nicely symmetric as well as of the same accuracy as the equation for the position. Together these two equations form a very stable and accurate method of solving Newton's equations which goes by the name of the "velocity Verlet method" and is very commonly used in the field of MD simulation.

A program implementing the velocity Verlet method for a 1D oscillator with m = 1 and a potential given by

    V(x) = a x^2 + b x^4 + c x^6   (383)

is included in an appendix to this chapter. The program is written in Fortran 77. Let us consider what can be obtained from such a program. Most obviously we can obtain dynamical information about the oscillation directly from the trajectory. However, the MD method is often used to obtain information about equilibrium properties. Suppose for example that we want to know the average potential energy of the oscillator <V> as a function of the energy E. In this case we must use the so-called ergodic hypothesis: equilibrium properties in the microcanonical ensemble can be obtained as long time trajectory averages of the corresponding property. The average is obtained as

    <V>_E = lim_{τ -> ∞} (1/τ) ∫_0^τ dt V(x(t)).   (384)

Even with a modern computer we can't run forever, so we have to assume that the integral converges reasonably rapidly. This is where the time scale problem enters, but for a simple 1D oscillator we should not have any difficulty finding a converged value for <V>_E. Note that since Newtonian dynamics conserves the energy E,

    E = v^2/2 + V(x),   (385)

we will generate so-called microcanonical averages if the ergodic hypothesis turns out to be valid.

13.2 Two-Dimensional Oscillation

Let us now consider the case of two coupled oscillators. The masses will again be taken to be unity. The potential will be

    V(x, y) = a x^2 + b x^4 + c y^2 + d y^4 + g x^2 y^2.   (386)

The equations of motion will be

    dx/dt = vx(t),
    dy/dt = vy(t),
    dvx/dt = Fx(x(t), y(t)) = -∂V/∂x (x(t), y(t)),
    dvy/dt = Fy(x(t), y(t)) = -∂V/∂y (x(t), y(t)).   (387)

In our special case we have

    Fx(x, y) = -(2a x + 4b x^3 + 2g x y^2),
    Fy(x, y) = -(2c y + 4d y^3 + 2g x^2 y).   (388)

If we apply the velocity Verlet method we get the following propagating equations:

    x(t+h) = x(t) + h vx(t) + (1/2)h^2 Fx(x(t), y(t)),
    y(t+h) = y(t) + h vy(t) + (1/2)h^2 Fy(x(t), y(t)),
    vx(t+h) = vx(t) + (h/2)(Fx(x(t), y(t)) + Fx(x(t+h), y(t+h))),
    vy(t+h) = vy(t) + (h/2)(Fy(x(t), y(t)) + Fy(x(t+h), y(t+h))).   (389)
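The propagating equations (389) translate almost line by line into code. Here is a Python sketch (the appendix program for the coupled oscillators is the Fortran 77 counterpart; parameter values a = b = c = d = g = 1 are our illustrative choice) which checks that the velocity Verlet trajectory conserves the total energy to high accuracy:

```python
def fx(x, y, a=1.0, b=1.0, g=1.0):
    # Fx from (388)
    return -(2 * a * x + 4 * b * x ** 3 + 2 * g * x * y * y)

def fy(x, y, c=1.0, d=1.0, g=1.0):
    # Fy from (388)
    return -(2 * c * y + 4 * d * y ** 3 + 2 * g * x * x * y)

def V(x, y, a=1.0, b=1.0, c=1.0, d=1.0, g=1.0):
    # the coupled-oscillator potential (386)
    return a * x * x + b * x ** 4 + c * y * y + d * y ** 4 + g * x * x * y * y

def verlet2d(x, y, vx, vy, h, nt):
    # the propagating equations (389); unit masses
    e0 = 0.5 * (vx * vx + vy * vy) + V(x, y)
    edev = 0.0
    for _ in range(nt):
        xn = x + h * vx + 0.5 * h * h * fx(x, y)
        yn = y + h * vy + 0.5 * h * h * fy(x, y)
        vx += 0.5 * h * (fx(x, y) + fx(xn, yn))
        vy += 0.5 * h * (fy(x, y) + fy(xn, yn))
        x, y = xn, yn
        edev += abs(0.5 * (vx * vx + vy * vy) + V(x, y) - e0) / nt
    return x, y, vx, vy, edev

*state, edev = verlet2d(0.0, 0.0, 1.0, 0.5, 0.01, 10000)
print(edev)  # average deviation from the initial energy stays very small
```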
The error is of order h^3 in all the propagating equations (389). Note that the velocity Verlet method is very straightforwardly extended to two- or more-dimensional systems, a very endearing trait since MD simulations are often carried out for many-particle systems with 1000 dimensions or more.

We might still be interested in calculating the microcanonical potential energy average for the two-dimensional oscillator. It is worth recalling that the evaluation of such microcanonical averages as long time trajectory averages is based on the ergodic hypothesis, which may not hold for a given system. If, e.g., we were to set g = 0 to break the coupling between the two oscillators, then the initial energy in each oscillator would be conserved and our system would be nonergodic. The question is whether a coupling term of the type used here, g x^2 y^2, is sufficient to make the system ergodic. A listing of a program carrying out the MD simulation of our two coupled oscillator system is included in an appendix.

13.2.1 A One-Dimensional Fluid

In order to illustrate the simulation of an infinite system without unduly burdening our discussion with technical details, we shall consider a one-dimensional fluid. The particles will be of unit mass and the interaction is described by a pair potential, i.e. a potential acting between each set of two particles in the fluid. For reasons of tradition and convenience we shall let the pair potential be of Lennard-Jones form,

    φ(r) = 4ε((σ/r)^12 - (σ/r)^6).   (390)

Here r is the particle separation, ε is the well depth and σ is the separation below which the potential becomes positive. In reduced units such that ε and σ are unity the potential becomes φ(x) = 4(x^(-12) - x^(-6)) and has the shape shown in the figure. Note the very rapid rise in φ(x) as x decreases from unity. This rise corresponds to the Pauli repulsion between closed shell atoms or molecules. For larger separations the potential is negative, approaching zero as -r^(-6). The complete potential for a one-dimensional fluid is then

    V(x1, x2, x3, ...) = sum_i sum_{j>i} φ(x_ij) = (1/2) sum_i sum_{j≠i} φ(x_ij),   (391)
[Figure: the Lennard-Jones pair potential φ(x) = 4(x^(-12) - x^(-6)) in reduced units.]

where x_ij is the separation |x_i - x_j| between particles i and j. The force on particle i is given by

    F_i(x1, x2, ...) = -∂V/∂x_i = -(∂/∂x_i) sum_{j≠i} φ(x_ij)
                     = sum_{j<i} 24ε(2 x_ij^(-13) - x_ij^(-7)) - sum_{j>i} 24ε(2 x_ij^(-13) - x_ij^(-7)).   (392)

The sum should, in principle, go over the infinite or nearly infinite number of other particles in the fluid, if we want to know about properties of an infinite system. This can, of course, not be managed, so one uses a trick called "periodic boundary conditions". One takes a large but manageable number of particles N and assigns them to an interval [0, L] which has a length L related to the fluid density n by

    n = N/L.   (393)

Around this interval are placed replicas defined by periodicity, i.e. a particle at x has replicas at x ± L, x ± 2L, x ± 3L, ... The effect of this is that the forces on our N particles pick up contributions from these replicas, which serve to represent the rest of the infinite fluid. Thus the N particles experience a more realistic environment than a set of N particles constrained to the interval [0, L] by hard walls. In fact, one normally truncates the interaction potential φ(x) at x = R < L, for two reasons: i) to avoid time consuming summations in the force evaluations, and ii) because one does not want a particle to interact with its own image. I do not believe the second reason is significant, but it is nevertheless often referred to in the simulation literature. The truncation of the potential, i.e. setting it to zero for x > R, causes problems because the potential actually used is then not analytical. It has a step at x = R which generates a delta function in the force at that point. Accounting for it in the solution of the equations of motion is possible but technically messy. For short ranged potentials such as the Lennard-Jones potential the problem can be dealt with in a very summary fashion, while for long ranged potentials it is much more of a problem.

Ignoring the technical problems of potential truncation, the propagating equations of the velocity Verlet method can be written as

    x_i(t+h) = x_i(t) + h v_i(t) + (1/2)h^2 F_i(x1(t), x2(t), ...) + O(h^3),   (394)
    v_i(t+h) = v_i(t) + (1/2)h [F_i(x1(t), ...) + F_i(x1(t+h), ...)] + O(h^3),   (395)

for i = 1, 2, ..., N. Apart from the summations hidden behind our notation for the force F_i, the propagation equations are no more difficult to deal with than in our smaller simulations above. And computers are good at repetitive summations. It is possible to simulate fluids of thousands of particles even on a personal computer. If the aim is to evaluate simple properties such as the average potential energy per particle, the pressure, the specific heat or some other bulk thermodynamic property, then the program may only be a few hundred lines long. We shall see examples of such programs later, although using Monte Carlo rather than MD propagation.

13.3 Exercises:

1. Draw a box-diagram (boxes containing specifications of tasks done, connected with lines showing the order in which work is done) illustrating how the one-dimensional oscillation is followed by the MD simulation program. Write out below it the explicit form of the propagating equations for the case when the potential is V(x) = a x^2 + b x^4 + c x^6.

2. The original Verlet algorithm for the propagation in molecular dynamics simulation is in one dimension of the form

    x(t+h) = 2x(t) - x(t-h) + F(t, x)h^2/m.

a) Derive this form of propagation from Newton's equation. b) Show that the error is of order h^4. c) Show that this propagation is of time-reversible form. d) Point out any disadvantages of this method and suggest remedies.

3. Another interaction potential often used in MD simulations to represent pairwise interaction among particles is the Morse potential

    φ(r) = D (exp(-2α(r - re)) - 2 exp(-α(r - re))).

Describe how to carry out an MD simulation of a one-dimensional chain of particles interacting by Morse pair potentials. The level of detail should be as for the chain of Lennard-Jones interacting particles above.

13.4 Time development of a 1D oscillator - NUMSIM.FOR

Note that in the program listing below normal mathematical notation is used rather than F77 notation whenever convenient.

01 C This program calculates a trajectory for an oscillator in 1D.
02 C The oscillator potential is ax^2 + bx^4 + cx^6. The mass is 1.
03    program numsim
04    implicit real*8 (a-h,o-z)
05    pot(x) = ax^2 + bx^4 + cx^6
06    fce(x) = -2ax - 4bx^3 - 6cx^5
07    e(x,v) = v^2/2 + pot(x)
08    write(*,*) 'The potential is ax^2 + bx^4 + cx^6. Enter a, b, c = '
09    read(*,*) a, b, c
10    write(*,*) 'Input initial position and velocity x, v = '
11    read(*,*) x, v
12    ee = e(x,v)
13    write(*,*) 'Mass is set to 1. For a = 1, b = c = 0 the frequency is 1.'
14    write(*,*) 'The timestep should be < 1. Set the timestep dt = '
15    read(*,*) dt
16    write(*,*) 'Set the number of timesteps to be taken, nt = '
17    read(*,*) nt
18 C Calculate the trajectory by the velocity Verlet method.
19 C Evaluate the time average of the potential energy,
20 C and the average kinetic energy.
21    peb = 0
22    ekb = 0
23    edev = 0
24 C Time propagation according to the velocity Verlet method.
25    do 10 i = 1, nt
26    peb = peb + pot(x)/nt
27    ekb = ekb + v^2/(2nt)
28    xnew = x + v dt + fce(x)(dt)^2/2
29    vnew = v + (fce(x) + fce(xnew))dt/2
30    x = xnew
31    v = vnew
32    edev = edev + dabs(e(x,v) - ee)/nt
33 10 continue
34    write(*,*) 'Total time, energy, average pe, average ke'
35    write(*,20) nt dt, ee, peb, ekb
36 20 format(4d16.6)
37    write(*,*) 'Final energy, average energy deviation'
38    write(*,30) e(x,v), edev
39 30 format(2d16.6)
40    stop
41    end

13.5 Some results for an anharmonic oscillator such that V(x) = x^2 + x^4 + x^6

Initial position and velocity are x, v = 0, 1 and the total elapsed time is 100 seconds = nt dt.

    dt      nt       Final energy      Average energy deviation
    0.1     1000     0.49978D+00       0.104767D-02
    0.01    10000    0.500000D+00      0.104033D-04

13.6 Time development of two anharmonically coupled oscillators - MD2OSC.FOR

01    program md2osc
02 C This program calculates a trajectory for two oscillators with anharmonic coupling.
03 C The oscillator potential is ax^2 + bx^4 + cy^2 + dy^4 + gx^2y^2. The particle masses are 1.
04    implicit real*8 (a-h,o-z)
05    pot(x,y) = ax^2 + bx^4 + cy^2 + dy^4 + gx^2y^2
06    fx(x,y) = -2ax - 4bx^3 - 2gxy^2
07    fy(x,y) = -2cy - 4dy^3 - 2gx^2y
08    e(x,y,px,py) = px^2/2 + py^2/2 + pot(x,y)
09    write(*,*) 'The potential is ax^2 + bx^4 + cy^2 + dy^4 + gx^2y^2'
10    write(*,*) 'Enter a, b, c, d, g = '
11    read(*,*) a, b, c, d, g
12    write(*,*) 'Input initial positions and momenta x, y, px, py = '
13    read(*,*) x, y, px, py
14    ee = e(x,y,px,py)
15    write(*,*) 'For V(x) = x^2 + y^2 the frequencies are 1. The timestep should be < 1. Set dt = '
16    read(*,*) dt
17    write(*,*) 'Set the number of timesteps nt = '
18    read(*,*) nt
19 C Calculate the trajectory by velocity Verlet propagation. Evaluate the time average of the potential energy peb
20 C and the average kinetic energy ekb. Also evaluate the ratio average(px^2)/average(py^2).
21    peb = 0.d0
22    ekb = 0.d0
23    edev = 0.d0
24    px2 = 0.d0
25    py2 = 0.d0
26    do 10 i = 1, nt
27    peb = peb + pot(x,y)/nt
28    ekb = ekb + (px^2 + py^2)/(2nt)
29    px2 = px2 + px^2/nt
30    py2 = py2 + py^2/nt
31    xnew = x + px dt + fx(x,y)(dt)^2/2
32    ynew = y + py dt + fy(x,y)(dt)^2/2
33    px = px + dt(fx(x,y) + fx(xnew,ynew))/2
34    py = py + dt(fy(x,y) + fy(xnew,ynew))/2
35    x = xnew
36    y = ynew
37    edev = edev + dabs(e(x,y,px,py) - ee)/nt
38 10 continue
39    write(*,*) 'Total time, energy, average pe, average ke'
40    write(*,20) nt dt, ee, peb, ekb
41 20 format(4d16.6)
42    write(*,*) 'Final energy, average energy deviation'
43    write(*,30) e(x,y,px,py), edev
44 30 format(2d16.6)
45    write(*,*) 'The ratio <px^2>/<py^2> is'
46    write(*,40) px2/py2
47 40 format(2x,d16.6)
48    stop
49    end

15 Numerical Integration

Integration is the inverse of differentiation. Both of these operations are essential tools of applied mathematics and of chemistry. We shall consider now the basic theory of numerical integration and a sampling of the most popular methods used. The discussion will be confined here to one-dimensional integration. In the subsequent chapter we will discuss the Monte Carlo method of numerical integration, which is particularly suited to high dimensional integrals.

Let us focus our attention on the integral

    I(f, a, b) = ∫_a^b dx f(x).   (396)

We will assume that f(x) is an analytical function in the interval [a, b]. The exact value of the integral can be obtained if the primitive function corresponding to f(x),

    F(x, a) = ∫_a^x ds f(s),   (397)

can be found. We then get

    ∫_a^b dx f(x) = F(b, c) - F(a, c).   (398)

Here c is a real number which is undetermined. The latter form of the exact integral is the one we generally use, since it allows us to use any primitive
function F(x) satisfying

    dF/dx = f(x)   (399)

in the interval of integration [a, b]. Note now that F(x, a) satisfies a first order ODE of the form above, with the initial value F(a, a) = 0. Thus we can use all our numerical methods of solving first order ODE's to obtain a numerical estimate of a one-dimensional integral of this type. Therefore we already have quite a rich collection of methods available for numerical integration. In this sense the theory of numerical integration is extremely simple.

Let us try to be systematic. Just as was the case for differentiation and ODE's, the Taylor series expansion must be the starting point of our theory of numerical integration. We have

    f(x+h) = f(x) + h f'(x) + (1/2)h^2 f''(x) + (1/6)h^3 f'''(x) + ...,   (400)

where h is again a small increment in x. By direct integration we then find that

    F(x+h, a) = F(x, a) + ∫_0^h ds f(x+s)
              = F(x, a) + h f(x) + (1/2)h^2 f'(x) + (1/6)h^3 f''(x) + (1/24)h^4 f'''(x) + ...   (401)

Thus, at the cost of evaluating derivatives of f(x), we can generate a stepwise evaluation of the integral to any order of accuracy desired. However, the evaluation of derivatives of f(x) of higher order may be difficult or tedious. Just as in the case of the Runge-Kutta methods for ODE's, we may want to replace the derivatives by additional function evaluations. For greatest convenience these function evaluations should occur at points inside the interval [x, x+h]. Otherwise the function evaluations will spill out of the full range [a, b] of the integral.

The simplest and lowest order integration step is

    F(x+h, a) = F(x, a) + h f(x) + O(h^2).   (402)

Next we want to include the term to order h^2 in the Taylor series expansion by function evaluation. Our experience with central difference schemes for numerical differentiation suggests that this can be done as follows:

    F(x+h, a) = F(x, a) + (h/2)(f(x) + f(x+h)) + O(h^3).   (403)
Taylor series expansion of f(x+h) shows that this is correct, as expected. Interestingly, there is another way to accomplish the same thing. We can stick with one function evaluation but place it at the step midpoint,

    F(x+h, a) = F(x, a) + h f(x + h/2) + O(h^3).   (404)

This looks like a better idea, since we need only one function evaluation rather than two in (10.8). However, the two function values in (10.8) are each reused once, so there is only one new function evaluation for each step except for the very first. At any rate it is worth remembering that additional function evaluations can be replaced by clever placement of the points of evaluation.

It is not difficult to go to higher order. Let us propose to use the three function values f(x), f(x + h/2) and f(x + h),

    F(x+h, a) = F(x, a) + h [p f(x) + (1 - 2p) f(x + h/2) + p f(x + h)] + O(h^4).   (405)

By symmetry the coefficients in front of the two end values must be the same. The middle function value must then be multiplied by 1 - 2p, since the coefficients must add up to unity. Taylor series expansion of f(x + h/2) and f(x + h) shows that p should be 1/6. Thus we get

    F(x+h, a) = F(x, a) + (h/6)[f(x) + 4 f(x + h/2) + f(x + h)] + O(h^4).   (406)

This is a very good integration step which leads to the following expression for the full integral:

    F(b, a) = ∫_a^b dx f(x) = (h/6) sum_{n=0}^{2N} (3 + (-1)^(n+1)) f(a + nh/2) - (h/6)[f(a) + f(b)],   (407)

where h = (b - a)/N. The Beta Handbook (Section 16.4) gives a number of useful integration methods for one-dimensional integrals. It also gives them the names commonly used in the mathematical literature. Thus (10.8) is called the trapezoidal rule, the scheme recommended above in (10.9) is called the midpoint rule, and (10.12) is called Simpson's rule.
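The three rules can be compared on a test integral with a known answer, ∫_0^1 e^x dx = e - 1. A Python sketch of the composite forms of the trapezoidal, midpoint and Simpson steps above (illustrative only, using n panels of width h):

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule, cf. (403)
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def midpoint(f, a, b, n):
    # composite midpoint rule, cf. (404)
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def simpson(f, a, b, n):
    # composite Simpson's rule, cf. (406)-(407)
    h = (b - a) / n
    return (h / 6) * sum(f(a + i * h) + 4 * f(a + (i + 0.5) * h) + f(a + (i + 1) * h)
                         for i in range(n))

exact = math.e - 1.0
err_t = abs(trapezoid(math.exp, 0.0, 1.0, 10) - exact)
err_m = abs(midpoint(math.exp, 0.0, 1.0, 10) - exact)
err_s = abs(simpson(math.exp, 0.0, 1.0, 10) - exact)
print(err_t, err_m, err_s)  # Simpson wins by several orders of magnitude
```

With only ten panels the trapezoidal and midpoint errors are of order 10^-3 and 10^-4, while the Simpson error is of order 10^-8, reflecting the higher order of its step.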
15.1 Exercises:

1. Verify explicitly the validity of the result (10.11), i.e., Simpson's rule. Write out the terms in the expression on the right hand side explicitly for the case N = 3 and show that the contribution from each interval is treated as in (10.12).

2. Show that it is possible to evaluate the contribution

    ∫_x^{x+h} ds f(s),  i.e.,  F(x+h,a) - F(x,a),

to 4th order in h with only two function evaluations in the interval [x, x+h].

3. Derive a numerical integration method based on the four function values at x, x+h/3, x+2h/3 and x+h such that the error in the integration from x to x+h is of order h^5.

16 Monte Carlo Simulation

We have seen in the previous chapter how to evaluate one dimensional integrals. The methods we developed required evaluation of the integrand on a lattice of points, followed by summation over the function values multiplied by an integration step length and some weighting factor determined by the numerical integration scheme selected. If the function varies substantially over the interval of integration we must use a large number of points to get good accuracy. Let us say that we typically need a hundred points. Although we shall not go into the details, it should be clear that these methods can be extended to two dimensional and higher dimensional integrals. Unfortunately the numerical problem becomes much harder in higher dimensions. Suppose we have a two dimensional integral over a rectangular domain and the integrand varies in both dimensions about as much as in a typical one dimensional integral. Then we would need 100 x 100 = 10000 points in the grid of x, y values where the integrand is to be evaluated in order to produce an integral of accuracy comparable to that in the one dimensional case. However, in chemistry we often want to evaluate integrals of dimension 1000 or more, e.g., in the process of calculating thermodynamic properties of fluids and larger molecules. The number of points in a grid which should
yield an accuracy of the integral comparable to that in the one dimensional case would be of the order 10^2000, which is hopelessly out of range for any computer now available or in sight. There is hardly any point to quibble about the most effective of our normal grid methods. We need to think in new directions.

16.1 The Global Monte Carlo Method

We shall seek a radically new approach to integration by using two new tools: statistics and dynamics. We begin here by considering the statistical tool. Consider the problem of holding an election in a country like Sweden. It takes an enormous effort to conduct such an election, which can be regarded as a kind of gigantic integration. But we know that the pollsters can predict the outcome of the election rather well. They use a random sample of some 3000 or so voters to predict the result of an election in an electorate of about 4 million voters. The key to this method is that the sample really is random. Any bias in the sample can significantly reduce the accuracy. If it were not for the problem that a sampled voter may respond differently than a voter in the real election, the pollsters would be much more accurate.

In the global Monte Carlo method we use the method of the random draw of the pollster to determine the average value of the integrand over a domain of known size. Let the integral be defined by the notation

    ∫_D dΓ f(Γ) = ∫dx_1 ... ∫dx_N f(x_1, ..., x_N),                    (408)

where Γ summarizes all coordinates and D denotes a domain of integration. We shall assume that we can evaluate the area of the domain, ∫_D dΓ. If the natural definition of the domain is too difficult we can always extend the domain so that the area can be evaluated. The function should then be given the value zero in all of the added area. The integral can now be written as

    ∫_D dΓ f(Γ) = ⟨f⟩_D ∫_D dΓ,                                        (409)

where

    ⟨f⟩_D = ∫_D dΓ f(Γ) / ∫_D dΓ.                                      (410)

We are then left to estimate the
average of the integrand ⟨f⟩_D. This is what the pollster is good at. We can do it by drawing a random sample of coordinate vectors {Γ_i}_{i=1}^N and setting

    ⟨f⟩_D ≈ ⟨f⟩_D(N) = (1/N) Σ_{i=1}^N f(Γ_i).                         (411)

How large should N be? This depends entirely on the nature of f(Γ) and the accuracy requirement. If we have 1000 dimensions we may try N = 1000000. A huge advantage of this method is that it will start to produce reasonable if not accurate values for the integral even for quite small sample sizes N. A very practical approach is to calculate the average for a sequence of increasing N values and observe the convergence of the average with N. This will make it possible to stop the calculation as soon as the accuracy seems sufficient. Naturally the statistical estimate will always entail a risk that the sample is unrepresentative in some way, but by running on to larger N confidence can be built up to any desired level.

16.2 The Metropolis Monte Carlo Method

The weakness of the global Monte Carlo method shows up when we have a very ill-conditioned function f(Γ). The integral is then dominated by contributions from a minuscule subdomain of D which we may not even find by a random sample. This is the case, e.g., when we consider the configuration integral part of the partition function for a dense fluid. What happens is that f(Γ) is nearly always close to zero because random placement of particles produces overlap of hard cores and unphysically high potential energies. To surmount this problem we shall use a clever trick. We shall start from a reasonable point Γ_1 and let a type of diffusional dynamics generate the subsequent coordinate vectors in the chain Γ_1, Γ_2, Γ_3, ... in such a way as to search out the important regions of the domain D.

Consider the problem of calculating the average potential energy of a fluid. According to statistical mechanics we have

    ⟨V⟩_T = ∫_D dΓ e^{-βV(Γ)} V(Γ) / ∫_D dΓ e^{-βV(Γ)},                (412)
where V(Γ) is the potential energy of the fluid in the configuration Γ, β is 1/k_B T and T is the absolute temperature. The domain D is defined by the fact that any particle can be anywhere in the available volume V. The "area" of D is then V^N. The big problem is that the Boltzmann factor exp(-βV(Γ)) is nearly zero at nearly all points in D for a dense fluid. If we go back to the grid methods we must have at least two points in each dimension, and 2^1000 is a large number, too large already for our computational power. In the Monte Carlo method we handle this problem by generating a Markov chain of Γ values starting from one point Γ_1 which has a reasonably low energy V(Γ_1). This chain searches out important regions in the domain D.
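The collapse of the Boltzmann factor for randomly placed particles is easy to demonstrate. The sketch below draws random configurations of a one-dimensional chain with LJ(12-6) nearest-neighbor interactions, a deliberately simplified stand-in for the dense fluid discussed above, and records the largest factor exp(-V/k_B T) encountered. All parameter values and names here are our own illustrative choices, not taken from the course material.

```python
import math
import random

def best_boltzmann_factor(n=20, length=20.0, tkb=1.0, samples=1000, seed=2):
    """Place n particles at random on a segment (density about 1), sum the
    LJ(12-6) energies of neighboring pairs, and return the largest
    Boltzmann factor exp(-V/kT) seen over many random configurations."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(samples):
        x = sorted(rng.uniform(0.0, length) for _ in range(n))
        v = 0.0
        for i in range(n - 1):
            # Clamp small separations, like the overflow guard in MC1DT
            r = max(x[i + 1] - x[i], 0.1)
            rm6 = 1.0 / r ** 6
            v += 4.0 * rm6 * (rm6 - 1.0)
        best = max(best, math.exp(-v / tkb))
    return best

largest = best_boltzmann_factor()  # essentially zero for these parameters
```

Almost every random configuration contains at least one pair deep in the repulsive core, so even the best factor out of a thousand draws is negligibly small; a random sample simply never finds the important subdomain.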
The Monte Carlo method is formulated in terms of the probability density ρ(Γ) and is constructed primarily to allow averages like

    ⟨A⟩ = ∫ dΓ ρ(Γ) A(Γ)                                               (413)

to be efficiently evaluated. Here A(Γ) is a property such as, e.g., the potential energy V(Γ) above. The Markov chain is generated as follows:

1. Choose a "reasonable" initial point Γ_1.
2. Make a random displacement in Γ_1 to obtain a proposed Γ_2.
3. If ρ(Γ_2)/ρ(Γ_1) > η, where η is a random number on [0,1], then Γ_2 is accepted as the new point. Otherwise we accept Γ_1 as the new point (i.e., Γ_1 is repeated in the list of points in the Markov chain).
4. We then generate a new random displacement and thereby a new proposed value for Γ_3.
5. The two steps 2. and 3. are iterated, with each new point taking the place of Γ_1, until N points have been generated.

The average ⟨A⟩ can now be obtained as

    ⟨A⟩ = (1/N) Σ_{i=1}^N A(Γ_i).                                      (414)

In the case of the statistical mechanical application above the probability density can be written as

    ρ(Γ) = exp(-βV(Γ)) / ∫ dΓ exp(-βV(Γ))                              (415)
and the thermal average potential energy is evaluated as

    ⟨V⟩_T = (1/N) Σ_{i=1}^N V(Γ_i).                                    (416)

Note that each Γ_i in this average is of equal weight. The Metropolis Monte Carlo method generates a sample of Γ values such that sampling power is not wasted on unimportant points in Γ space.

What do we mean by a random displacement above? There are many ways to generate a random displacement. The most commonly used one is to select a maximal coordinate displacement Δ and then visit each coordinate sequentially, setting

    x_{i,2} = x_{i,1} + 2Δ(η - 0.5),                                   (417)

where η is a random number on [0,1]. Note that a random displacement can mean a change in only one or a few or all of the coordinates according to this prescription. Usually one only moves one coordinate in going from Γ_i to Γ_{i+1}, but as long as all coordinates are visited in an unbiased way it does not really matter. The value of Δ is chosen so that the probability of rejection of the new value is about a half. This gives a good balance between moving over lots of territory and sticking to the most relevant subdomain.

Note that the MC method is very closely related to the MD method. The difference lies in the form of the dynamics used. The MC method uses a diffusional motion. Both methods are enormously efficient because they search out the relevant part of the domain of integration. They both allow equilibrium averages to be evaluated for properties of systems with a thousand particles or more, where traditional methods of integration look completely hopeless.

A listing of a Fortran program simulating a one-dimensional Lennard-Jones fluid as discussed in Chapter 9 by the Metropolis Monte Carlo (or just MC) method is included as an appendix.

16.3 Exercise:

1. Draw a box-diagram showing how the one-dimensional Lennard-Jones fluid can be simulated by the Monte Carlo method to produce the thermal average potential energy. You can use the program in the appendix as a guide if you wish.
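The five-step Markov chain recipe of Section 16.2, together with the displacement rule (417) and the average (416), can be sketched compactly. The Python fragment below samples a single coordinate with V(x) = x^2/2 at β = 1, for which equipartition gives ⟨V⟩ = 1/2. It is an illustrative sketch only, not the appendix program, which implements the same scheme in Fortran for the LJ chain; the acceptance test is the standard Metropolis criterion, equivalent to the ratio test ρ(Γ_2)/ρ(Γ_1) > η in step 3.

```python
import math
import random

def metropolis_average(v, beta, x0, step, n, seed=3):
    """Metropolis Markov chain for one coordinate. Proposals follow
    eq. (417): x' = x + 2*step*(eta - 0.5). A proposal is accepted when
    exp(-beta*(v(x') - v(x))) exceeds a fresh random number on [0,1];
    otherwise the old point is repeated in the chain, as in step 3.
    Returns the chain average of v, the analogue of eq. (416)."""
    rng = random.Random(seed)
    x = x0
    vx = v(x)
    total = 0.0
    for _ in range(n):
        xp = x + 2.0 * step * (rng.random() - 0.5)
        vp = v(xp)
        # Downhill moves are always accepted (the ratio exceeds 1)
        if vp < vx or rng.random() < math.exp(-beta * (vp - vx)):
            x, vx = xp, vp
        total += vx
    return total / n

# Single harmonic coordinate: V(x) = x^2/2, beta = 1, so <V> = 1/2
avg = metropolis_average(lambda x: 0.5 * x * x, 1.0, 0.0, 2.0, 200000)
```

Each chain point contributes with equal weight, whether it is a fresh acceptance or a repetition of the old point; that repetition is what makes the simple average (416) correct.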
16.4 Appendix - The MC1DT.FOR program

A listing of a Fortran 77 program created to simulate a 1D nearest-neighbor interacting chain of LJ(12-6) particles in the canonical ensemble follows below.

C This program simulates a 1D LJ(12-6) fluid in the canonical ensemble
C Interactions are pairwise between nearest-neighbors only
C Meltdown of the initial configuration is included
C General initial statements follow
C Program is limited to at most 1000 particles by X(1000), EP(1000)
      PROGRAM MC1DT
      IMPLICIT REAL*4 (A-C,E-H,O-Z)
      IMPLICIT REAL*8 (D)
      COMMON /A005/X(1000),EP(1000),VP(1000),TX,ES,VS,SL,RL,
     &             SCUT,N,IN,IR
      OPEN(2,FILE='OUTPUT')
C Initialize the random number generator
      IR=137
C Input information interactively
C Spacing in uniform grid is SP. Number of active particles is N.
C Thermal energy is TKB.
C Reduced units (EPS=SIGMA=1) are used
C Cutoff imposed on the range of the pair-potential is SCUT.
      WRITE(*,*) ' Canonical ensemble.'
      WRITE(*,*) ' Length of active interval is RL=N*SP'
      WRITE(*,*) ' Enter spacing = SP, particle # = N (<1000),'
      WRITE(*,*) ' TKB and potential cutoff = SCUT in reduced units'
      READ(*,*) SP,N,TKB,SCUT
      WRITE(*,*) ' Keep steplength SL less than active length RL=N*SP'
      WRITE(*,*) ' Enter steplength SL, number of steps in Markov'
      WRITE(*,*) ' chain = NC and seed random number DNR'
      READ(*,*) SL,NC,DNR
      WRITE(*,*) ' Enter number of steps in meltdown MM ='
      READ(*,*) MM
      WRITE(*,*) ' Computing. Please wait ...'
C Active length is RL
      RL=N*SP
C Generate initial coordinates
      SX=SP
    5 CONTINUE
      DO 10 I=1,N
      X(I)=I*SX-SX/2
   10 CONTINUE
C Evaluate initial potential energy in double precision = REAL*8
      NB=INT(SCUT/SP)
      DU=0.D0
      DV=0.D0
C The removal energy for particle J is EP(J) below
C The virial for particle J is VP(J) below
      DO 30 J=1,N
      IN=J
      TX=X(IN)
      CALL EPART
      EP(J)=ES
      VP(J)=VS
      DU=DU+EP(J)/2
      DV=DV+VP(J)/2
   30 CONTINUE
C The total potential energy is DU. The total virial is DV.
C NR is the number of rejections. The MC loop follows.
      II=1
      NCC=NC
      NC=MM
   40 CONTINUE
      IN=0
      NR=0
      SEC=0.D0
      SVC=0.D0
      NER=0
      AU=DU
      AV=DV
      AV2=DV*DV
      DO 100 K=1,NC
      IN=IN+1
      IF(IN.EQ.N+1) IN=IN-N
C Calculate the removal energy and virial of particle IN before move.
      TX=X(IN)
      CALL EPART
      EP(IN)=ES
      VP(IN)=VS
   44 CALL RANDOM(DNR,IR)
      TX=X(IN)+SL*(DNR-0.5E0)
      IF(TX.LT.0.0) TX=TX+RL
      IF(TX.GT.RL) TX=TX-RL
      CALL EPART
      EC=ES-EP(IN)
      VC=VS-VP(IN)
      IF(EC.LT.0.D0) GO TO 66
      CALL RANDOM(DNR,IR)
      BFR=-EC/TKB
      IF(DLOG(DNR).LT.BFR) GO TO 66
C The move is rejected: restore the old values and count the rejection
      TX=X(IN)
      ES=EP(IN)
      VS=VP(IN)
      NR=NR+1
      EC=0.E0
      VC=0.E0
C New configuration is accepted
   66 EP(IN)=ES
      VP(IN)=VS
C The sum of energy changes is SEC. The sum of virial changes is SVC.
      SEC=SEC+EC
      SVC=SVC+VC
      X(IN)=TX
      DU=DU+EC
      DV=DV+VC
C The average potential energy is AU. The average virial is AV.
      AU=AU+SEC/NC
      AV=AV+SVC/NC
C The average squared virial <virial**2> is calculated below.
      AV2=AV2+DV*DV/NC
  100 CONTINUE
      II=II+1
C After the meltdown of MM steps, redo the loop with the full chain NCC
      IF(II.EQ.2) THEN
      NC=NCC
      GO TO 40
      ENDIF
      WRITE(*,*) ' Final configuration: particle #, location'
      WRITE(*,130) (L,X(L),L=1,N)
  130 FORMAT(20X,I10,10X,F16.6)
      WRITE(*,*) ' Average potential energy, final potential energy'
      WRITE(*,135) AU,DU
      WRITE(*,*) ' Average virial energy, final virial energy'
      WRITE(*,135) AV,DV
  135 FORMAT(10X,D16.6,10X,D16.6)
      PID=TKB*N/RL
      PPOT=AV/RL
      PV=PID+PPOT
      VARPPOT=SQRT((AV2-AV*AV)/NC)/RL
      WRITE(*,*) ' Ideal pressure, potential pressure, total pressure'
      WRITE(*,137) PID,PPOT,PV
  137 FORMAT(2X,'PID=',D16.6,2X,'PPOT=',D16.6,2X,'PV=',D16.6)
      WRITE(*,138) VARPPOT
  138 FORMAT(2X,'Variance in PPOT is ',D16.6)
      WRITE(*,*) ' Variance estimate is based on independent events.'
      WRITE(*,*) ' Correlations are neglected.'
      WRITE(2,130) (L,X(L),L=1,N)
      WRITE(2,140) DU,DV
  140 FORMAT(D16.6,2X,D16.6)
      WRITE(*,150) NC,NR
  150 FORMAT(' # of confs=',I10,2X,'# of rejs=',I10)
      CLOSE(2)
      STOP
      END
C XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      SUBROUTINE RANDOM(DNR,IR)
      REAL*8 DNR,D1,D2
      DNR=DABS(DNR)
      D1=DLOG(DNR*IR)
      D1=DABS(D1)
      D1=D1-DINT(D1)
      D2=1.D7*D1
      DNR=D2-DINT(D2)
      IR=IR+1
      RETURN
      END
C XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
C This subroutine calculates the energy contribution due to a single
C particle, i.e., the sum of nearest neighbor bond energies.
C It has been extended to also calculate the virial energy of a
C single particle.
      SUBROUTINE EPART
      COMMON /A005/X(1000),EP(1000),VP(1000),TX,ES,VS,SL,RL,
     &             SCUT,N,IN,IR
      REAL*4 UJ,VJ,DX,R,RM6,R1,R2
      ES=0.D0
      VS=0.D0
      R1=RL
      R2=RL
      DO 1 K=1,N
C The particle does not interact with itself.
      IF(K.EQ.IN) GO TO 1
C Let the particle interact with the nearest image of X(K).
      DX=TX-X(K)
      R=ABS(DX)
      IF(R.GT.RL/2) THEN
      IF(DX.GT.0.E0) THEN
      DX=R-RL
      ELSE
      DX=RL-R
      ENDIF
      ENDIF
      R=ABS(DX)
C Protect against overflow.
      IF(R.LT.1.E-1) R=1.E-1
C Pick out smallest left and right distances R1 and R2 in the chain.
      IF(DX.GT.0.E0) THEN
      IF(R.LT.R1) R1=R
      GO TO 1
      ENDIF
      IF(R.LT.R2) R2=R
    1 CONTINUE
C Add energies due to nearest neighbor bonds.
      RM6=1/R1**6
      UJ=4*RM6*(RM6-1.D0)
      VJ=24*RM6*(2*RM6-1.D0)
      ES=ES+UJ
      VS=VS+VJ
      RM6=1/R2**6
      UJ=4*RM6*(RM6-1.D0)
      VJ=24*RM6*(2*RM6-1.D0)
      ES=ES+UJ
      VS=VS+VJ
      RETURN
      END