Quantum Mechanics:
A graduate level course
Richard Fitzpatrick
Associate Professor of Physics
The University of Texas at Austin
Contents

1 Introduction
1.1 Major sources

2 Fundamental concepts
2.1 The breakdown of classical physics
2.2 The polarization of photons
2.3 The fundamental principles of quantum mechanics
2.4 Ket space
2.5 Bra space
2.6 Operators
2.7 The outer product
2.8 Eigenvalues and eigenvectors
2.9 Observables
2.10 Measurements
2.11 Expectation values
2.12 Degeneracy
2.13 Compatible observables
2.14 The uncertainty relation
2.15 Continuous spectra

3 Position and momentum
3.1 Introduction
3.2 Poisson brackets
3.3 Wave-functions
3.4 Schrödinger's representation - I
3.5 Schrödinger's representation - II
3.6 The momentum representation
3.7 The uncertainty relation
3.8 Displacement operators

4 Quantum dynamics
4.1 Schrödinger's equations of motion
4.2 Heisenberg's equations of motion
4.3 Ehrenfest's theorem
4.4 Schrödinger's wave-equation

5 Angular momentum
5.1 Orbital angular momentum
5.2 Eigenvalues of angular momentum
5.3 Rotation operators
5.4 Eigenfunctions of orbital angular momentum
5.5 Motion in a central field
5.6 Energy levels of the hydrogen atom
5.7 Spin angular momentum
5.8 Wave-function of a spin one-half particle
5.9 Rotation operators in spin space
5.10 Magnetic moments
5.11 Spin precession
5.12 Pauli two-component formalism
5.13 Spin greater than one-half systems
5.14 Addition of angular momentum

6 Approximation methods
6.1 Introduction
6.2 The two-state system
6.3 Non-degenerate perturbation theory
6.4 The quadratic Stark effect
6.5 Degenerate perturbation theory
6.6 The linear Stark effect
6.7 Fine structure
6.8 The Zeeman effect
6.9 Time-dependent perturbation theory
6.10 The two-state system
6.11 Spin magnetic resonance
6.12 The Dyson series
6.13 Constant perturbations
6.14 Harmonic perturbations
6.15 Absorption and stimulated emission of radiation
6.16 The electric dipole approximation
6.17 Energy-shifts and decay-widths

7 Scattering theory
7.1 Introduction
7.2 The Lippmann-Schwinger equation
7.3 The Born approximation
7.4 Partial waves
7.5 The optical theorem
7.6 Determination of phase-shifts
7.7 Hard sphere scattering
7.8 Low energy scattering
7.9 Resonances
1 Introduction
1.1 Major sources
The textbooks which I have consulted most frequently while developing course
material are:
The principles of quantum mechanics, P.A.M. Dirac, 4th Edition (revised) (Oxford University Press, Oxford UK, 1958).

The Feynman lectures on physics, R.P. Feynman, R.B. Leighton, and M. Sands, Volume III (Addison-Wesley, Reading MA, 1965).

Quantum mechanics, E. Merzbacher, 2nd Edition (John Wiley & Sons, New York NY, 1970).

Modern quantum mechanics, J.J. Sakurai (Benjamin/Cummings, Menlo Park CA, 1985).
2 Fundamental concepts
2.1 The breakdown of classical physics
The necessity for a departure from classical mechanics is clearly demonstrated
by:
1. The anomalous stability of atoms and molecules: According to classical physics,
an electron orbiting a nucleus should lose energy by emission of synchrotron
radiation, and gradually spiral in towards the nucleus. Experimentally, this
is not observed to happen.
2. The anomalously low specific heats of atoms and molecules: According to the
equipartition theorem of classical physics, each degree of freedom of an
atomic or molecular system should contribute R/2 to its molar specific heat,
where R is the ideal gas constant. In fact, only the translational and some
rotational degrees of freedom seem to contribute. The vibrational degrees
of freedom appear to make no contribution at all (except at high temper-
atures). Incidentally, this fundamental problem with classical physics was
known and appreciated in the middle of the nineteenth century. Stories that
physicists at the start of the twentieth century thought that classical physics
explained everything, and that there was nothing left to discover, are largely
apocryphal (see Feynman, Vol. I, Cha. 40).
3. The ultraviolet catastrophe: According to classical physics, the energy density
of an electromagnetic field in vacuum is infinite due to a divergence of en-
ergy carried by short wave-length modes. Experimentally, there is no such
divergence, and the total energy density is finite.
4. Wave-particle duality: Classical physics can deal with waves or particles. How-
ever, various experiments (e.g., light interference, the photo-electric effect,
electron diffraction) show quite clearly that waves sometimes act as if they
were streams of particles, and streams of particles sometimes act as if they
were waves. This is completely inexplicable within the framework of classi-
cal physics.
2.2 The polarization of photons
It is known experimentally that when plane polarized light is used to eject photo-
electrons there is a preferred direction of emission of the electrons. Clearly, the
polarization properties of light, which are more usually associated with its wave-
like behaviour, also extend to its particle-like behaviour. In particular, a polariza-
tion can be ascribed to each individual photon in a beam of light.
Consider the following well-known experiment. A beam of plane polarized
light is passed through a polaroid film, which has the property that it is only
transparent to light whose plane of polarization lies perpendicular to its optic
axis. Classical electromagnetic wave theory tells us that if the beam is polarized
perpendicular to the optic axis then all of the light is transmitted, if the beam is
polarized parallel to the optic axis then none of the light is transmitted, and if the
light is polarized at an angle α to the axis then a fraction sin²α of the beam is
transmitted. Let us try to account for these observations at the individual photon
level.
A beam of light which is plane polarized in a certain direction is made up of a
stream of photons which are each plane polarized in that direction. This picture
leads to no difficulty if the plane of polarization lies parallel or perpendicular
to the optic axis of the polaroid. In the former case, none of the photons are
transmitted, and, in the latter case, all of the photons are transmitted. But, what
happens in the case of an obliquely polarized incident beam?
The above question is not very precise. Let us reformulate it as a question
relating to the result of some experiment which we could perform. Suppose that
we were to fire a single photon at a polaroid film, and then look to see whether
or not it emerges from the other side. The possible results of the experiment are
that either a whole photon, whose energy is equal to the energy of the incident
photon, is observed, or no photon is observed. Any photon which is transmitted
through the film must be polarized perpendicular to the optic axis. Furthermore,
it is impossible to imagine (in physics) finding part of a photon on the other side
of the film. If we repeat the experiment a great number of times then, on average,
a fraction sin²α of the photons are transmitted through the film, and a fraction
cos²α are absorbed. Thus, we conclude that a photon has a probability sin²α of
being transmitted as a photon polarized in the plane perpendicular to the optic
axis, and a probability cos²α of being absorbed. These values for the probabilities
lead to the correct classical limit for a beam containing a large number of photons.
Note that we have only been able to preserve the individuality of photons,
in all cases, by abandoning the determinacy of classical theory, and adopting a
fundamentally probabilistic approach. We have no way of knowing whether an
individual obliquely polarized photon is going to be absorbed by or transmitted
through a polaroid film. We only know the probability of each event occurring.
This is a fairly sweeping statement, but recall that the state of a photon is fully
specified once its energy, direction of propagation, and polarization are known.
If we imagine performing experiments using monochromatic light, normally in-
cident on a polaroid film, with a particular oblique polarization, then the state of
each individual photon in the beam is completely specified, and there is nothing
left over to uniquely determine whether the photon is transmitted or absorbed by
the film.
The above discussion about the results of an experiment with a single obliquely
polarized photon incident on a polaroid film answers all that can be legitimately
asked about what happens to the photon when it reaches the film. Questions as
to what decides whether the photon is transmitted or not, or how it changes its
direction of polarization, are illegitimate, since they do not relate to the outcome
of a possible experiment. Nevertheless, some further description is needed in
order to allow the results of this experiment to be correlated with the results of
other experiments which can be performed using photons.
The further description provided by quantum mechanics is as follows. It is
supposed that a photon polarized obliquely to the optic axis can be regarded as
being partly in a state of polarization parallel to the axis, and partly in a state of
polarization perpendicular to the axis. In other words, the oblique polarization
state is some sort of superposition of two states of parallel and perpendicular
polarization. Since there is nothing special about the orientation of the optic
axis in our experiment, we must conclude that any state of polarization can be
regarded as a superposition of two mutually perpendicular states of polarization.
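The probabilistic rules described above are easy to check numerically. The following short sketch (Python with numpy is assumed; the variable names and the chosen angle are purely illustrative) represents each photon as undergoing a single random trial with transmission probability sin²α, and confirms that the transmitted fraction of a large beam approaches the classical result:

# A minimal sketch, assuming numpy: photons polarized at angle alpha to the
# optic axis, each transmitted with probability sin^2(alpha).
import numpy as np

alpha = np.deg2rad(30.0)            # oblique polarization angle (illustrative)
p_transmit = np.sin(alpha)**2       # quantum transmission probability
p_absorb = np.cos(alpha)**2         # quantum absorption probability

rng = np.random.default_rng(0)
n_photons = 100000
transmitted = rng.random(n_photons) < p_transmit   # one Bernoulli trial per photon

# The transmitted fraction approaches the classical (Malus-law) value sin^2(alpha).
print(transmitted.mean(), p_transmit, p_absorb)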
When we make the photon encounter a polaroid film, we are subjecting it
to an observation. In fact, we are observing whether it is polarized parallel or
perpendicular to the optic axis. The effect of making this observation is to force
the photon entirely into a state of parallel or perpendicular polarization. In other
words, the photon has to jump suddenly from being partly in each of these two
states to being entirely in one or the other of them. Which of the two states it will
jump into cannot be predicted, but is governed by probability laws. If the photon
jumps into a state of parallel polarization then it is absorbed. Otherwise, it is
transmitted. Note that, in this example, the introduction of indeterminacy into
the problem is clearly connected with the act of observation. In other words, the
indeterminacy is related to the inevitable disturbance of the system associated
with the act of observation.
2.3 The fundamental principles of quantum mechanics
There is nothing special about the transmission and absorption of photons through
a polaroid film. Exactly the same conclusions as those outlined above are ob-
tained by studying other simple experiments, such as the interference of photons
(see Dirac, Sect. I.3), and the Stern-Gerlach experiment (see Sakurai, Cha. 1;
Feynman, Cha. 5). The study of these simple experiments leads us to formulate
the following fundamental principles of quantum mechanics:
1. Dirac’s razor: Quantum mechanics can only answer questions regarding the
outcome of possible experiments. Any other questions lie beyond the realms
of physics.
2. The principle of superposition of states: Any microscopic system (i.e., an atom,
molecule, or particle) in a given state can be regarded as being partly in
each of two or more other states. In other words, any state can be regarded
as a superposition of two or more other states. Such superpositions can be
performed in an infinite number of different ways.
3. The principle of indeterminacy: An observation made on a microscopic system
causes it to jump into one or more particular states (which are related to
the type of observation). It is impossible to predict into which final state
a particular system will jump, however the probability of a given system
jumping into a given final state can be predicted.
The first of these principles was formulated by quantum physicists (such as Dirac)
in the 1920s to fend off awkward questions such as “How can a system suddenly
jump from one state into another?”, or “How does a system decide which state to
jump into?”. As we shall see, the second principle is the basis for the mathemat-
ical formulation of quantum mechanics. The final principle is still rather vague.
We need to extend it so that we can predict which possible states a system can
jump into after a particular type of observation, as well as the probability of the
system making a particular jump.
2.4 Ket space
Consider a microscopic system composed of particles or bodies with specific prop-
erties (mass, moment of inertia, etc.) interacting according to specific laws of
force. There will be various possible motions of the particles or bodies consistent
with the laws of force. Let us term each such motion a state of the system. Accord-
ing to the principle of superposition of states, any given state can be regarded as
a superposition of two or more other states. Thus, states must be related to math-
ematical quantities of a kind which can be added together to give other quantities
of the same kind. The most obvious examples of such quantities are vectors.
Let us consider a particular microscopic system in a particular state, which we
label A: e.g., a photon with a particular energy, momentum, and polarization.
We can represent this state as a particular vector, which we also label A, residing
in some vector space, where the other elements of the space represent all of the
other possible states of the system. Such a space is called a ket space (after Dirac).
The state vector A is conventionally written
|A⟩. (2.1)
Suppose that state A is, in fact, the superposition of two different states, B and
C. This interrelation is represented in ket space by writing

|A⟩ = |B⟩ + |C⟩, (2.2)

where |B⟩ is the vector relating to the state B, etc. For instance, state |B⟩ might
represent a photon propagating in the z-direction, and plane polarized in the x-
direction, and state |C⟩ might represent a similar photon plane polarized in the
y-direction. In this case, the sum of these two states represents a photon whose
plane of polarization makes an angle of 45° with both the x- and y-directions (by
analogy with classical physics). This latter state is represented by |B⟩ + |C⟩ in ket
space.
Suppose that we want to construct a state whose plane of polarization makes
an arbitrary angle α with the x-direction. We can do this via a suitably weighted
superposition of states B and C. By analogy with classical physics, we require
cos α of state B, and sin α of state C. This new state is represented by

cos α |B⟩ + sin α |C⟩ (2.3)

in ket space. Note that we cannot form a new state by superposing a state with
itself. For instance, a photon polarized in the y-direction superposed with another
photon polarized in the y-direction (with the same energy and momentum) gives
the same photon. This implies that the ket vector

c_1 |A⟩ + c_2 |A⟩ = (c_1 + c_2) |A⟩ (2.4)
corresponds to the same state that |A⟩ does. Thus, ket vectors differ from con-
ventional vectors in that their magnitudes, or lengths, are physically irrelevant.
All the states of the system are in one to one correspondence with all the possi-
ble directions of vectors in the ket space, no distinction being made between the
directions of the ket vectors |A⟩ and −|A⟩. There is, however, one caveat to the
above statements. If c_1 + c_2 = 0 then the superposition process yields nothing at
all: i.e., no state. The absence of a state is represented by the null vector |0⟩ in
ket space. The null vector has the fairly obvious property that

|A⟩ + |0⟩ = |A⟩, (2.5)

for any vector |A⟩. The fact that ket vectors pointing in the same direction repre-
sent the same state relates ultimately to the quantization of matter: i.e., the fact
that it comes in irreducible packets called photons, electrons, atoms, etc. If we ob-
serve a microscopic system then we either see a state (i.e., a photon, or an atom,
or a molecule, etc.) or we see nothing—we can never see a fraction or a multiple
of a state. In classical physics, if we observe a wave then the amplitude of the
wave can take any value between zero and infinity. Thus, if we were to represent
a classical wave by a vector, then the magnitude, or length, of the vector would
correspond to the amplitude of the wave, and the direction would correspond to
the frequency and wave-length, so that two vectors of different lengths pointing
in the same direction would represent different wave states.
We have seen, in Eq. (2.3), that any plane polarized state of a photon can
be represented as a linear superposition of two orthogonal polarization states
in which the weights are real numbers. Suppose that we want to construct a
circularly polarized photon state. Well, we know from classical physics that a cir-
cularly polarized wave is a superposition of two waves of equal amplitude, plane
polarized in orthogonal directions, which are in phase quadrature. This suggests
that a circularly polarized photon is the superposition of a photon polarized in
the x-direction (state B) and a photon polarized in the y-direction (state C), with
equal weights given to the two states, but with the proviso that state C is 90°
out of phase with state B. By analogy with classical physics, we can use complex
numbers to simultaneously represent the weighting and relative phase in a linear
superposition. Thus, a circularly polarized photon is represented by

|B⟩ + i |C⟩ (2.6)

in ket space. A general elliptically polarized photon is represented by

c_1 |B⟩ + c_2 |C⟩, (2.7)

where c_1 and c_2 are complex numbers. We conclude that a ket space must be
a complex vector space if it is to properly represent the mutual interrelations
between the possible states of a microscopic system.
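A small numerical illustration of why complex weights are needed follows. In this sketch (Python/numpy assumed; the array names are illustrative), the circular state |B⟩ + i|C⟩ is represented as the normalized column vector (1, i)/√2, and, unlike a real (plane polarized) superposition, it is transmitted with probability one half for every orientation of a linear polarizer:

# A minimal sketch, assuming numpy: the circular state (|B> + i|C>)/sqrt(2)
# versus a plane polarized state, projected onto rotated linear polarization axes.
import numpy as np

circular = np.array([1.0, 1.0j]) / np.sqrt(2)   # |B> + i|C>, normalized
plane_45 = np.array([1.0, 1.0]) / np.sqrt(2)    # plane polarized at 45 degrees

for theta in np.deg2rad([0, 30, 60, 90]):
    axis = np.array([np.cos(theta), np.sin(theta)])   # transmission direction
    # Transmission probability is the modulus squared of the inner product.
    print(np.round(abs(np.vdot(axis, circular))**2, 3),   # always 0.5
          np.round(abs(np.vdot(axis, plane_45))**2, 3))   # varies with theta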
Suppose that the ket |R⟩ is expressible linearly in terms of the kets |A⟩ and |B⟩,
so that

|R⟩ = c_1 |A⟩ + c_2 |B⟩. (2.8)

We say that |R⟩ is dependent on |A⟩ and |B⟩. It follows that the state R can be
regarded as a linear superposition of the states A and B. So, we can also say that
state R is dependent on states A and B. In fact, any ket vector (or state) which
is expressible linearly in terms of certain others is said to be dependent on them.
Likewise, a set of ket vectors (or states) are termed independent if none of them
are expressible linearly in terms of the others.
The dimensionality of a conventional vector space is defined as the number
of independent vectors contained in the space. Likewise, the dimensionality of
a ket space is equivalent to the number of independent ket vectors it contains.
Thus, the ket space which represents the possible polarization states of a photon
propagating in the z-direction is two-dimensional (the two independent vectors
correspond to photons plane polarized in the x- and y-directions, respectively).
Some microscopic systems have a finite number of independent states (e.g., the
spin states of an electron in a magnetic field). If there are N independent states,
then the possible states of the system are represented as an N-dimensional ket
space. Some microscopic systems have a denumerably infinite number of inde-
pendent states (e.g., a particle in an infinitely deep, one-dimensional potential
well). The possible states of such a system are represented as a ket space whose
dimensions are denumerably infinite. Such a space can be treated in more or less
the same manner as a finite-dimensional space. Unfortunately, some microscopic
systems have a nondenumerably infinite number of independent states (e.g., a
free particle). The possible states of such a system are represented as a ket space
whose dimensions are nondenumerably infinite. This type of space requires a
slightly different treatment to spaces of finite, or denumerably infinite, dimen-
sions.
In conclusion, the states of a general microscopic system can be represented as
a complex vector space of (possibly) infinite dimensions. Such a space is termed
a Hilbert space by mathematicians.
2.5 Bra space
A snack machine inputs coins plus some code entered on a key pad, and (hope-
fully) outputs a snack. It also does so in a deterministic manner: i.e., the same
money plus the same code produces the same snack (or the same error message)
time after time. Note that the input and output of the machine have completely
different natures. We can imagine building a rather abstract snack machine which
inputs ket vectors and outputs complex numbers in a deterministic fashion. Math-
ematicians call such a machine a functional. Imagine a general functional, labeled
F, acting on a general ket vector, labeled A, and spitting out a general complex
number φ_A. This process is represented mathematically by writing

⟨F|(|A⟩) = φ_A. (2.9)
Let us narrow our focus to those functionals which preserve the linear dependen-
cies of the ket vectors upon which they operate. Not surprisingly, such functionals
are termed linear functionals. A general linear functional, labeled F, satisfies
⟨F|(|A⟩ + |B⟩) = ⟨F|(|A⟩) + ⟨F|(|B⟩), (2.10)

where |A⟩ and |B⟩ are any two kets in a given ket space.
Consider an N-dimensional ket space [i.e., a finite-dimensional, or denumer-
ably infinite dimensional (i.e., N → ∞), space]. Let the |i⟩ (where i runs from 1
to N) represent N independent ket vectors in this space. A general ket vector can
be written¹

|A⟩ = Σ_{i=1}^{N} α_i |i⟩, (2.11)

where the α_i are an arbitrary set of complex numbers. The only way the func-
tional F can satisfy Eq. (2.10) for all vectors in the ket space is if

⟨F|(|A⟩) = Σ_{i=1}^{N} f_i α_i, (2.12)

where the f_i are a set of complex numbers relating to the functional.

¹ Actually, this is only strictly true for finite-dimensional spaces. Only a special subset of denumerably infinite
dimensional spaces have this property (i.e., they are complete), but since a ket space must be complete if it is to
represent the states of a microscopic system, we need only consider this special subset.
Let us define N basis functionals ⟨i| which satisfy

⟨i|(|j⟩) = δ_ij. (2.13)

It follows from the previous three equations that

⟨F| = Σ_{i=1}^{N} f_i ⟨i|. (2.14)
But, this implies that the set of all possible linear functionals acting on an N-
dimensional ket space is itself an N-dimensional vector space. This type of vector
space is called a bra space (after Dirac), and its constituent vectors (which are
actually functionals of the ket space) are called bra vectors. Note that bra vectors
are quite different in nature to ket vectors (hence, these vectors are written in
mirror image notation, ⟨ | and | ⟩, so that they can never be confused). Bra
space is an example of what mathematicians call a dual vector space (i.e., it is
dual to the original ket space). There is a one to one correspondence between
the elements of the ket space and those of the related bra space. So, for every
element A of the ket space, there is a corresponding element, which it is also
convenient to label A, in the bra space. That is,

|A⟩ ←→ ⟨A| (DC), (2.15)

where DC stands for dual correspondence.
There are an infinite number of ways of setting up the correspondence between
vectors in a ket space and those in the related bra space. However, only one
of these has any physical significance. For a general ket vector A, specified by
Eq. (2.11), the corresponding bra vector is written
⟨A| = Σ_{i=1}^{N} α_i* ⟨i|, (2.16)

where the α_i* are the complex conjugates of the α_i. ⟨A| is termed the dual vector
to |A⟩. It follows, from the above, that the dual to c⟨A| is c*|A⟩, where c is a
complex number. More generally,

c_1 |A⟩ + c_2 |B⟩ ←→ c_1* ⟨A| + c_2* ⟨B| (DC). (2.17)
Recall that a bra vector is a functional which acts on a general ket vector, and
spits out a complex number. Consider the functional which is dual to the ket
vector

|B⟩ = Σ_{i=1}^{N} β_i |i⟩ (2.18)

acting on the ket vector |A⟩. This operation is denoted ⟨B|(|A⟩). Note, however,
that we can omit the round brackets without causing any ambiguity, so the oper-
ation can also be written ⟨B||A⟩. This expression can be further simplified to give
⟨B|A⟩. According to Eqs. (2.11), (2.12), (2.16), and (2.18),

⟨B|A⟩ = Σ_{i=1}^{N} β_i* α_i. (2.19)

Mathematicians term ⟨B|A⟩ the inner product of a bra and a ket.² An inner prod-
uct is (almost) analogous to a scalar product between a covariant and contravari-
ant vector in some curvilinear space. It is easily demonstrated that

⟨B|A⟩ = ⟨A|B⟩*. (2.20)

² We can now appreciate the elegance of Dirac's notation. The combination of a bra and a ket yields a "bra(c)ket"
(which is just a number).
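In a finite-dimensional ket space, Eq. (2.19) is just the conjugated dot product of the expansion coefficients. A brief illustrative check (Python/numpy assumed; numpy's vdot conjugates its first argument, matching the β_i* in the sum):

# A minimal sketch, assuming numpy: <B|A> = sum_i beta_i^* alpha_i, and <B|A> = <A|B>^*.
import numpy as np

alpha = np.array([1.0 + 2.0j, 0.5j, -1.0])   # coefficients of |A> in the |i> basis
beta = np.array([0.3, 1.0 - 1.0j, 2.0])      # coefficients of |B>

bra_B_ket_A = np.vdot(beta, alpha)           # conjugates beta, as in Eq. (2.19)
bra_A_ket_B = np.vdot(alpha, beta)

print(bra_B_ket_A, np.conj(bra_A_ket_B))     # equal, verifying Eq. (2.20)
print(np.vdot(alpha, alpha).real >= 0)       # <A|A> is real and non-negative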
Consider the special case where |B⟩ → |A⟩. It follows from Eqs. (2.12) and (2.20)
that ⟨A|A⟩ is a real number, and that

⟨A|A⟩ ≥ 0. (2.21)

The equality sign only holds if |A⟩ is the null ket [i.e., if all of the α_i are zero in
Eq. (2.11)]. This property of bra and ket vectors is essential for the probabilistic
interpretation of quantum mechanics, as will become apparent later.

Two kets |A⟩ and |B⟩ are said to be orthogonal if

⟨A|B⟩ = 0, (2.22)

which also implies that ⟨B|A⟩ = 0.

Given a ket |A⟩ which is not the null ket, we can define a normalized ket |Ã⟩,
where

|Ã⟩ = (1/√⟨A|A⟩) |A⟩, (2.23)

with the property

⟨Ã|Ã⟩ = 1. (2.24)

Here, √⟨A|A⟩ is known as the norm or "length" of |A⟩, and is analogous to the
length, or magnitude, of a conventional vector. Since |A⟩ and c|A⟩ represent
the same physical state, it makes sense to require that all kets corresponding to
physical states have unit norms.
It is possible to define a dual bra space for a ket space of nondenumerably
infinite dimensions in much the same manner as that described above. The main
differences are that summations over discrete labels become integrations over
continuous labels, Kronecker delta-functions become Dirac delta-functions, com-
pleteness must be assumed (it cannot be proved), and the normalization conven-
tion is somewhat different. More of this later.
2.6 Operators
We have seen that a functional is a machine which inputs a ket vector and spits
out a complex number. Consider a somewhat different machine which inputs a
ket vector and spits out another ket vector in a deterministic fashion. Mathemati-
cians call such a machine an operator. We are only interested in operators which
preserve the linear dependencies of the ket vectors upon which they act. Such
operators are termed linear operators. Consider an operator labeled X. Suppose
that when this operator acts on a general ket vector |A⟩ it spits out a new ket
vector which is denoted X|A⟩. Operator X is linear provided that

X(|A⟩ + |B⟩) = X|A⟩ + X|B⟩, (2.25)

for all ket vectors |A⟩ and |B⟩, and

X(c|A⟩) = c X|A⟩, (2.26)

for all complex numbers c. Operators X and Y are said to be equal if

X|A⟩ = Y|A⟩ (2.27)
for all kets in the ket space in question. Operator X is termed the null operator if

X|A⟩ = |0⟩ (2.28)

for all ket vectors in the space. Operators can be added together. Such addition
is defined to obey a commutative and associative algebra:

X + Y = Y + X, (2.29)
X + (Y + Z) = (X + Y) + Z. (2.30)

Operators can also be multiplied. The multiplication is associative:

X(Y|A⟩) = (XY)|A⟩ = XY|A⟩, (2.31)
X(Y Z) = (XY)Z = XY Z. (2.32)

However, in general, it is noncommutative:

XY ≠ YX. (2.33)
So far, we have only considered linear operators acting on ket vectors. We can
also give a meaning to their operating on bra vectors. Consider the inner product
of a general bra ⟨B| with the ket X|A⟩. This product is a number which depends
linearly on |A⟩. Thus, it may be considered to be the inner product of |A⟩ with
some bra. This bra depends linearly on ⟨B|, so we may look on it as the result of
some linear operator applied to ⟨B|. This operator is uniquely determined by the
original operator X, so we might as well call it the same operator acting on ⟨B|. A
suitable notation to use for the resulting bra when X operates on ⟨B| is ⟨B|X. The
equation which defines this vector is

(⟨B|X)|A⟩ = ⟨B|(X|A⟩) (2.34)

for any |A⟩ and ⟨B|. The triple product of ⟨B|, X, and |A⟩ can be written ⟨B|X|A⟩
without ambiguity, provided we adopt the convention that the bra vector always
goes on the left, the operator in the middle, and the ket vector on the right.

Consider the dual bra to X|A⟩. This bra depends antilinearly on |A⟩ and must
therefore depend linearly on ⟨A|. Thus, it may be regarded as the result of some
linear operator applied to ⟨A|. This operator is termed the adjoint of X, and is
denoted X†. Thus,

X|A⟩ ←→ ⟨A|X† (DC). (2.35)

It is readily demonstrated that

⟨B|X†|A⟩ = ⟨A|X|B⟩*, (2.36)

plus

(XY)† = Y†X†. (2.37)

It is also easily seen that the adjoint of the adjoint of a linear operator is equiva-
lent to the original operator. A Hermitian operator ξ has the special property that
it is its own adjoint: i.e.,

ξ = ξ†. (2.38)
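In a finite-dimensional ket space a linear operator is just a complex matrix, and its adjoint is the conjugate transpose. A short illustrative sketch (Python/numpy assumed; the random matrices and vectors are arbitrary examples) checking Eqs. (2.36)-(2.38):

# A minimal sketch, assuming numpy: adjoint = conjugate transpose for matrix operators.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Y = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = rng.normal(size=3) + 1j * rng.normal(size=3)
B = rng.normal(size=3) + 1j * rng.normal(size=3)

Xdag = X.conj().T
print(np.allclose(np.vdot(B, Xdag @ A), np.conj(np.vdot(A, X @ B))))  # Eq. (2.36)
print(np.allclose((X @ Y).conj().T, Y.conj().T @ X.conj().T))         # Eq. (2.37)
H = X + Xdag            # a Hermitian operator: equal to its own adjoint, Eq. (2.38)
print(np.allclose(H, H.conj().T))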
2.7 The outer product
So far we have formed the following products: ⟨B|A⟩, X|A⟩, ⟨A|X, XY, ⟨B|X|A⟩.
Are there any other products we are allowed to form? How about

|B⟩⟨A| ? (2.39)

This clearly depends linearly on the ket |B⟩ and the bra ⟨A|. Suppose that we
right-multiply the above product by the general ket |C⟩. We obtain

|B⟩⟨A|C⟩ = ⟨A|C⟩ |B⟩, (2.40)

since ⟨A|C⟩ is just a number. Thus, |B⟩⟨A| acting on a general ket |C⟩ yields
another ket. Clearly, the product |B⟩⟨A| is a linear operator. This operator also
acts on bras, as is easily demonstrated by left-multiplying the expression (2.39)
by a general bra ⟨C|. It is also easily demonstrated that

(|B⟩⟨A|)† = |A⟩⟨B|. (2.41)

Mathematicians term the operator |B⟩⟨A| the outer product of |B⟩ and ⟨A|. The
outer product should not be confused with the inner product, ⟨A|B⟩, which is just
a number.
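In matrix language the outer product |B⟩⟨A| is the matrix B A†, i.e. numpy's np.outer(B, A.conj()). A brief illustrative check of Eqs. (2.40) and (2.41) (Python/numpy assumed; the vectors are arbitrary examples):

# A minimal sketch, assuming numpy: |B><A| as a matrix, acting on a ket |C>.
import numpy as np

A = np.array([1.0, 2.0j, -1.0])
B = np.array([0.5, 1.0, 1.0j])
C = np.array([2.0, -1.0j, 3.0])

op = np.outer(B, A.conj())                      # the operator |B><A|
print(np.allclose(op @ C, np.vdot(A, C) * B))   # |B><A|C> = <A|C>|B>, Eq. (2.40)
print(np.allclose(op.conj().T, np.outer(A, B.conj())))  # (|B><A|)^dagger = |A><B|, Eq. (2.41)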
2.8 Eigenvalues and eigenvectors
In general, the ket X|A⟩ is not a constant multiple of |A⟩. However, there are
some special kets known as the eigenkets of operator X. These are denoted

|x′⟩, |x′′⟩, |x′′′⟩ . . . , (2.42)

and have the property

X|x′⟩ = x′|x′⟩, X|x′′⟩ = x′′|x′′⟩ . . . , (2.43)

where x′, x′′, . . . are numbers called eigenvalues. Clearly, applying X to one of its
eigenkets yields the same eigenket multiplied by the associated eigenvalue.
Consider the eigenkets and eigenvalues of a Hermitian operator ξ. These are
denoted

ξ|ξ′⟩ = ξ′|ξ′⟩, (2.44)

where |ξ′⟩ is the eigenket associated with the eigenvalue ξ′. Three important
results are readily deduced:

(i) The eigenvalues are all real numbers, and the eigenkets corresponding to
different eigenvalues are orthogonal. Since ξ is Hermitian, the dual equation to
Eq. (2.44) (for the eigenvalue ξ′′) reads

⟨ξ′′|ξ = ξ′′* ⟨ξ′′|. (2.45)

If we left-multiply Eq. (2.44) by ⟨ξ′′|, right-multiply the above equation by |ξ′⟩,
and take the difference, we obtain

(ξ′ − ξ′′*) ⟨ξ′′|ξ′⟩ = 0. (2.46)

Suppose that the eigenvalues ξ′ and ξ′′ are the same. It follows from the above
that

ξ′ = ξ′*, (2.47)

where we have used the fact that |ξ′⟩ is not the null ket. This proves that the
eigenvalues are real numbers. Suppose that the eigenvalues ξ′ and ξ′′ are differ-
ent. It follows that

⟨ξ′′|ξ′⟩ = 0, (2.48)
which demonstrates that eigenkets corresponding to different eigenvalues are
orthogonal.
(ii) The eigenvalues associated with eigenkets are the same as the eigenvalues
associated with eigenbras. An eigenbra of ξ corresponding to an eigenvalue ξ′ is
defined

⟨ξ′|ξ = ξ′ ⟨ξ′|. (2.49)
(iii) The dual of any eigenket is an eigenbra belonging to the same eigenvalue,
and conversely.
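The three results above are easy to confirm numerically once ξ is represented as a matrix. A hedged sketch (Python/numpy assumed; the random Hermitian matrix is an arbitrary example) using numpy's eigh, which is specifically intended for Hermitian matrices:

# A minimal sketch, assuming numpy: a Hermitian matrix has real eigenvalues
# and orthogonal eigenkets belonging to different eigenvalues.
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
xi = M + M.conj().T                      # Hermitian by construction

vals, vecs = np.linalg.eigh(xi)          # columns of vecs are the eigenkets
print(np.allclose(vals.imag, 0.0))       # eigenvalues are real
overlaps = vecs.conj().T @ vecs          # matrix of inner products between eigenkets
print(np.allclose(overlaps, np.eye(4)))  # orthonormal eigenkets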
2.9 Observables
We have developed a mathematical formalism which comprises three types of
objects—bras, kets, and linear operators. We have already seen that kets can be
used to represent the possible states of a microscopic system. However, there is
a one to one correspondence between the elements of a ket space and its dual
bra space, so we must conclude that bras could just as well be used to repre-
sent the states of a microscopic system. What about the dynamical variables of
the system (e.g., its position, momentum, energy, spin, etc.)? How can these be
represented in our formalism? Well, the only objects we have left over are oper-
ators. We, therefore, assume that the dynamical variables of a microscopic system
are represented as linear operators acting on the bras and kets which correspond to
the various possible states of the system. Note that the operators have to be linear,
otherwise they would, in general, spit out bras/kets pointing in different direc-
tions when fed bras/kets pointing in the same direction but differing in length.
Since the lengths of bras and kets have no physical significance, it is reasonable
to suppose that non-linear operators are also without physical significance.
We have seen that if we observe the polarization state of a photon, by placing
a polaroid film in its path, the result is to cause the photon to jump into a state
of polarization parallel or perpendicular to the optic axis of the film. The former
state is absorbed, and the latter state is transmitted (which is how we tell them
apart). In general, we cannot predict into which state a given photon will jump
(except in a statistical sense). However, we do know that if the photon is initially
polarized parallel to the optic axis then it will definitely be absorbed, and if it is
initially polarized perpendicular to the axis then it will definitely be transmitted.
We also know that after passing through the film a photon must be in a state of
polarization perpendicular to the optic axis (otherwise it would not have been
transmitted). We can make a second observation of the polarization state of
such a photon by placing an identical polaroid film (with the same orientation of
the optic axis) immediately behind the first film. It is clear that the photon will
definitely be transmitted through the second film.
There is nothing special about the polarization states of a photon. So, more
generally, we can say that when a dynamical variable of a microscopic system
is measured the system is caused to jump into one of a number of independent
states (note that the perpendicular and parallel polarization states of our photon
are linearly independent). In general, each of these final states is associated with
a different result of the measurement: i.e., a different value of the dynamical
variable. Note that the result of the measurement must be a real number (there
are no measurement machines which output complex numbers). Finally, if an
observation is made, and the system is found to be in one particular final state,
with one particular value for the dynamical variable, then a second observation,
made immediately after the first one, will definitely find the system in the same
state, and yield the same value for the dynamical variable.
How can we represent all of these facts in our mathematical formalism? Well,
by a fairly non-obvious leap of intuition, we are going to assert that a measure-
ment of a dynamical variable corresponding to an operator X in ket space causes
the system to jump into a state corresponding to one of the eigenkets of X. Not
surprisingly, such a state is termed an eigenstate. Furthermore, the result of the
measurement is the eigenvalue associated with the eigenket into which the system
jumps. The fact that the result of the measurement must be a real number implies
that dynamical variables can only be represented by Hermitian operators (since only
Hermitian operators are guaranteed to have real eigenvalues). The fact that the
eigenkets of a Hermitian operator corresponding to different eigenvalues (i.e., dif-
ferent results of the measurement) are orthogonal is in accordance with our ear-
lier requirement that the states into which the system jumps should be mutually
independent. We can conclude that the result of a measurement of a dynamical
variable represented by a Hermitian operator ξ must be one of the eigenvalues of
ξ. Conversely, every eigenvalue of ξ is a possible result of a measurement made
on the corresponding dynamical variable. This gives us the physical significance
of the eigenvalues. (From now on, the distinction between a state and its rep-
resentative ket vector, and a dynamical variable and its representative operator,
will be dropped, for the sake of simplicity.)
It is reasonable to suppose that if a certain dynamical variable ξ is measured
with the system in a particular state, then the states into which the system may
jump on account of the measurement are such that the original state is dependent
on them. This fairly innocuous statement has two very important corollaries.
First, immediately after an observation whose result is a particular eigenvalue ξ′,
the system is left in the associated eigenstate. However, this eigenstate is orthog-
onal to (i.e., independent of) any other eigenstate corresponding to a different
eigenvalue. It follows that a second measurement made immediately after the
first one must leave the system in an eigenstate corresponding to the eigenvalue
ξ′. In other words, the second measurement is bound to give the same result as
the first. Furthermore, if the system is in an eigenstate of ξ, corresponding to an
eigenvalue ξ′, then a measurement of ξ is bound to give the result ξ′. This follows
because the system cannot jump into an eigenstate corresponding to a different
eigenvalue of ξ, since such a state is not dependent on the original state. Second,
it stands to reason that a measurement of ξ must always yield some result. It fol-
lows that no matter what the initial state of the system, it must always be able to
jump into one of the eigenstates of ξ. In other words, a general ket must always
be dependent on the eigenkets of ξ. This can only be the case if the eigenkets
form a complete set (i.e., they span ket space). Thus, in order for a Hermitian oper-
ator ξ to be observable its eigenkets must form a complete set. A Hermitian operator
which satisfies this condition is termed an observable. Conversely, any observable
quantity must be a Hermitian operator with a complete set of eigenstates.
2.10 Measurements
We have seen that a measurement of some observable ξ of a microscopic system
causes the system to jump into one of the eigenstates of ξ. The result of the
measurement is the associated eigenvalue (or some function of this quantity). It
is impossible to determine into which eigenstate a given system will jump, but it is
possible to predict the probability of such a transition. So, what is the probability
that a system in some initial state |A⟩ makes a transition to an eigenstate |ξ′⟩ of an
observable ξ, as a result of a measurement made on the system? Let us start with
the simplest case. If the system is initially in an eigenstate |ξ′⟩ then the transition
probability to an eigenstate |ξ′′⟩ corresponding to a different eigenvalue is zero,
and the transition probability to the same eigenstate |ξ′⟩ is unity. It is convenient
to normalize our eigenkets such that they all have unit norms. It follows from the
orthogonality property of the eigenkets that

⟨ξ′|ξ′′⟩ = δ_ξ′ξ′′, (2.50)

where δ_ξ′ξ′′ is unity if ξ′ = ξ′′, and zero otherwise. For the moment, we are
assuming that the eigenvalues of ξ are all different.
Note that the probability of a transition from an initial eigenstate |ξ′⟩ to a fi-
nal eigenstate |ξ′′⟩ is the same as the value of the inner product ⟨ξ′|ξ′′⟩. Can we
use this correspondence to obtain a general rule for calculating transition prob-
abilities? Well, suppose that the system is initially in a state |A⟩ which is not an
eigenstate of ξ. Can we identify the transition probability to a final eigenstate
|ξ′⟩ with the inner product ⟨A|ξ′⟩? The straight answer is "no", since ⟨A|ξ′⟩ is, in
general, a complex number, and complex probabilities do not make much sense.
Let us try again. How about if we identify the transition probability with the mod-
ulus squared of the inner product, |⟨A|ξ′⟩|²? This quantity is definitely a positive
number (so it could be a probability). This guess also gives the right answer for
the transition probabilities between eigenstates. In fact, it is the correct guess.

Since the eigenstates of an observable ξ form a complete set, we can express
any given state |A⟩ as a linear combination of them. It is easily demonstrated that

|A⟩ = Σ_ξ′ |ξ′⟩⟨ξ′|A⟩, (2.51)
⟨A| = Σ_ξ′ ⟨A|ξ′⟩⟨ξ′|, (2.52)

⟨A|A⟩ = Σ_ξ′ ⟨A|ξ′⟩⟨ξ′|A⟩ = Σ_ξ′ |⟨A|ξ′⟩|², (2.53)

where the summation is over all the different eigenvalues of ξ, and use has been
made of Eq. (2.20), and the fact that the eigenstates are mutually orthogonal.
Note that all of the above results follow from the extremely useful (and easily
proved) result

Σ_ξ′ |ξ′⟩⟨ξ′| = 1, (2.54)

where 1 denotes the identity operator. The relative probability of a transition to
an eigenstate |ξ′⟩, which is equivalent to the relative probability of a measure-
ment of ξ yielding the result ξ′, is

P(ξ′) ∝ |⟨A|ξ′⟩|². (2.55)

The absolute probability is clearly

P(ξ′) = |⟨A|ξ′⟩|² / Σ_ξ′′ |⟨A|ξ′′⟩|² = |⟨A|ξ′⟩|² / ⟨A|A⟩. (2.56)

If the ket |A⟩ is normalized such that its norm is unity, then this probability simply
reduces to

P(ξ′) = |⟨A|ξ′⟩|². (2.57)
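The rule (2.57) is straightforward to evaluate numerically once the eigenkets of ξ are known. A brief illustrative sketch (Python/numpy assumed; the Hermitian matrix and the state are arbitrary examples):

# A minimal sketch, assuming numpy: probabilities P(xi') = |<A|xi'>|^2 for a
# normalized state |A>.
import numpy as np

xi = np.array([[2.0, 1.0j, 0.0],
               [-1.0j, 1.0, 0.5],
               [0.0, 0.5, -1.0]])          # a Hermitian "observable"
A = np.array([1.0, 1.0j, 0.5])
A = A / np.linalg.norm(A)                  # unit norm, so Eq. (2.57) applies

vals, vecs = np.linalg.eigh(xi)            # eigenvalues xi' and eigenkets |xi'>
probs = np.abs(vecs.conj().T @ A)**2       # P(xi') = |<xi'|A>|^2 = |<A|xi'>|^2
print(vals, probs, probs.sum())            # the probabilities sum to unity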
2.11 Expectation values
Consider an ensemble of microscopic systems prepared in the same initial state
|A⟩. Suppose a measurement of the observable ξ is made on each system. We
know that each measurement yields the value ξ′ with probability P(ξ′). What is
the mean value of the measurement? This quantity, which is generally referred
to as the expectation value of ξ, is given by

⟨ξ⟩ = Σ_ξ′ ξ′ P(ξ′) = Σ_ξ′ ξ′ |⟨A|ξ′⟩|²
    = Σ_ξ′ ξ′ ⟨A|ξ′⟩⟨ξ′|A⟩ = Σ_ξ′ ⟨A|ξ|ξ′⟩⟨ξ′|A⟩, (2.58)

which reduces to

⟨ξ⟩ = ⟨A|ξ|A⟩ (2.59)

with the aid of Eq. (2.54).
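Equation (2.59) can be checked directly against the probability-weighted average (2.58). A short illustrative sketch (Python/numpy assumed; the matrix and state are arbitrary examples):

# A minimal sketch, assuming numpy: <A|xi|A> equals the sum over xi' of xi' P(xi').
import numpy as np

xi = np.array([[1.0, 0.5j], [-0.5j, -1.0]])   # a 2x2 Hermitian observable
A = np.array([0.6, 0.8j])                     # already of unit norm

vals, vecs = np.linalg.eigh(xi)
probs = np.abs(vecs.conj().T @ A)**2
print(np.vdot(A, xi @ A).real, np.sum(vals * probs))   # the two values agree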
Consider the identity operator, 1. All states are eigenstates of this operator
with the eigenvalue unity. Thus, the expectation value of this operator is always
unity: i.e.,
⟨A|1|A⟩ = ⟨A|A⟩ = 1, (2.60)

for all |A⟩. Note that it is only possible to normalize a given ket |A⟩ such that
Eq. (2.60) is satisfied because of the more general property (2.21) of the norm.
This property depends on the particular correspondence (2.16), that we adopted
earlier, between the elements of a ket space and those of its dual bra space.
2.12 Degeneracy
Suppose that two different eigenstates |ξ′_a⟩ and |ξ′_b⟩ of ξ correspond to the same
eigenvalue ξ′. These are termed degenerate eigenstates. Degenerate eigenstates
are necessarily orthogonal to any eigenstates corresponding to different eigen-
values, but, in general, they are not orthogonal to each other (i.e., the proof of
orthogonality given in Sect. 2.8 does not work in this case). This is unfortunate,
since much of the previous formalism depends crucially on the mutual orthogo-
nality of the different eigenstates of an observable. Note, however, that any linear
combination of |ξ′_a⟩ and |ξ′_b⟩ is also an eigenstate corresponding to the eigenvalue
ξ′. It follows that we can always construct two mutually orthogonal degenerate
eigenstates. For instance,

|ξ′_1⟩ = |ξ′_a⟩, (2.61)

|ξ′_2⟩ = (|ξ′_b⟩ − ⟨ξ′_a|ξ′_b⟩ |ξ′_a⟩) / √(1 − |⟨ξ′_a|ξ′_b⟩|²). (2.62)
This result is easily generalized to the case of more than two degenerate eigen-
states. We conclude that it is always possible to construct a complete set of mu-
tually orthogonal eigenstates for any given observable.
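Equations (2.61) and (2.62) are just the Gram-Schmidt procedure applied to two unit-norm degenerate eigenstates. A hedged sketch (Python/numpy assumed; the two input vectors are arbitrary illustrative examples, not taken from the text):

# A minimal sketch, assuming numpy: orthogonalizing two non-orthogonal
# degenerate eigenstates.
import numpy as np

xi_a = np.array([1.0, 0.0, 0.0])
xi_b = np.array([1.0, 1.0j, 0.0]) / np.sqrt(2)   # not orthogonal to xi_a

xi_1 = xi_a                                       # Eq. (2.61)
overlap = np.vdot(xi_a, xi_b)                     # <xi'_a|xi'_b>
xi_2 = (xi_b - overlap * xi_a) / np.sqrt(1.0 - abs(overlap)**2)   # Eq. (2.62)

print(np.vdot(xi_1, xi_2))        # zero: the new states are orthogonal
print(np.vdot(xi_2, xi_2).real)   # unity: xi_2 is normalized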
2.13 Compatible observables
Suppose that we wish to simultaneously measure two observables, ξ and η, of
a microscopic system. Let us assume that we possess an apparatus which is ca-
pable of measuring ξ, and another which can measure η. For instance, the two
observables in question might be the projection in the x- and z-directions of the
spin angular momentum of a spin one-half particle. These could be measured us-
ing appropriate Stern-Gerlach apparatuses (see Sakurai, Sect. 1.1). Suppose that
we make a measurement of ξ, and the system is consequently thrown into one
of the eigenstates of ξ, |ξ′⟩, with eigenvalue ξ′. What happens if we now make
a measurement of η? Well, suppose that the eigenstate |ξ′⟩ is also an eigenstate
of η, with eigenvalue η′. In this case, a measurement of η will definitely give the
result η′. A second measurement of ξ will definitely give the result ξ′, and so on.
In this sense, we can say that the observables ξ and η simultaneously have the
values ξ′ and η′, respectively. Clearly, if all eigenstates of ξ are also eigenstates
of η then it is always possible to make a simultaneous measurement of ξ and η.
Such observables are termed compatible.
Suppose, however, that the eigenstates of ξ are not eigenstates of η. Is it
still possible to measure both observables simultaneously? Let us again make an
observation of ξ which throws the system into an eigenstate |ξ′⟩, with eigenvalue
ξ′. We can now make a second observation to determine η. This will throw
the system into one of the (many) eigenstates of η which depend on |ξ′⟩. In
principle, each of these eigenstates is associated with a different result of the
measurement. Suppose that the system is thrown into an eigenstate |η′⟩, with
the eigenvalue η′. Another measurement of ξ will throw the system into one
of the (many) eigenstates of ξ which depend on |η′⟩. Each eigenstate is again
associated with a different possible result of the measurement. It is clear that if
the observables ξ and η do not possess simultaneous eigenstates then if the value
of ξ is known (i.e., the system is in an eigenstate of ξ) then the value of η is
uncertain (i.e., the system is not in an eigenstate of η), and vice versa. We say
that the two observables are incompatible.
We have seen that the condition for two observables ξ and η to be simultane-
ously measurable is that they should possess simultaneous eigenstates (i.e., every
eigenstate of ξ should also be an eigenstate of η). Suppose that this is the case.
Let a general eigenstate of ξ, with eigenvalue ξ′, also be an eigenstate of η, with
eigenvalue η′. It is convenient to denote this simultaneous eigenstate |ξ′η′⟩. We
have

ξ|ξ′η′⟩ = ξ′|ξ′η′⟩, (2.63)
η|ξ′η′⟩ = η′|ξ′η′⟩. (2.64)

We can left-multiply the first equation by η, and the second equation by ξ, and
then take the difference. The result is

(ξη − ηξ)|ξ′η′⟩ = |0⟩ (2.65)

for each simultaneous eigenstate. Recall that the eigenstates of an observable
must form a complete set. It follows that the simultaneous eigenstates of two
observables must also form a complete set. Thus, the above equation implies that

(ξη − ηξ)|A⟩ = |0⟩, (2.66)

where |A⟩ is a general ket. The only way that this can be true is if

ξη = ηξ. (2.67)
Thus, the condition for two observables ξ and η to be simultaneously measurable is
that they should commute.
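The commutation criterion is easy to test numerically: two Hermitian matrices commute precisely when they can be diagonalized in the same basis. A brief illustrative sketch (Python/numpy assumed; the matrices are standard Pauli-type examples chosen for illustration):

# A minimal sketch, assuming numpy: commuting versus noncommuting Hermitian observables.
import numpy as np

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
xi = sigma_z
eta = sigma_z + 2.0 * np.eye(2)      # a function of sigma_z: compatible with xi

print(np.allclose(xi @ eta - eta @ xi, 0.0))           # True: xi and eta commute
print(np.allclose(xi @ sigma_x - sigma_x @ xi, 0.0))   # False: an incompatible pair

# Commuting observables share eigenkets: the eigenvectors of xi also diagonalize eta.
vals, vecs = np.linalg.eigh(xi)
transformed = vecs.conj().T @ eta @ vecs
print(np.allclose(transformed, np.diag(np.diag(transformed))))   # eta is diagonal in this basis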
2.14 The uncertainty relation
We have seen that if ξ and η are two noncommuting observables, then a deter-
mination of the value of ξ leaves the value of η uncertain, and vice versa. It is
possible to quantify this uncertainty. For a general observable ξ, we can define a
Hermitian operator

∆ξ = ξ − ⟨ξ⟩, (2.68)

where the expectation value is taken over the particular physical state under con-
sideration. It is obvious that the expectation value of ∆ξ is zero. The expectation
value of (∆ξ)² ≡ ∆ξ ∆ξ is termed the variance of ξ, and is, in general, non-zero.
In fact, it is easily demonstrated that

⟨(∆ξ)²⟩ = ⟨ξ²⟩ − ⟨ξ⟩². (2.69)

The variance of ξ is a measure of the uncertainty in the value of ξ for the particu-
lar state in question (i.e., it is a measure of the width of the distribution of likely
values of ξ about the expectation value). If the variance is zero then there is no
uncertainty, and a measurement of ξ is bound to give the expectation value, ⟨ξ⟩.

Consider the Schwarz inequality

⟨A|A⟩⟨B|B⟩ ≥ |⟨A|B⟩|², (2.70)

which is analogous to

|a|² |b|² ≥ |a·b|² (2.71)

in Euclidian space. This inequality can be proved by noting that

(⟨A| + c* ⟨B|)(|A⟩ + c |B⟩) ≥ 0, (2.72)

where c is any complex number. If c takes the special value −⟨B|A⟩/⟨B|B⟩ then
the above inequality reduces to

⟨A|A⟩⟨B|B⟩ − |⟨A|B⟩|² ≥ 0, (2.73)

which is the same as the Schwarz inequality.

Let us substitute

|A⟩ = ∆ξ | ⟩, (2.74)
|B⟩ = ∆η | ⟩, (2.75)

into the Schwarz inequality, where the blank ket | ⟩ stands for any general ket.
We find

⟨(∆ξ)²⟩⟨(∆η)²⟩ ≥ |⟨∆ξ ∆η⟩|², (2.76)

where use has been made of the fact that ∆ξ and ∆η are Hermitian operators.
Note that

∆ξ ∆η = (1/2) [∆ξ, ∆η] + (1/2) {∆ξ, ∆η}, (2.77)

where the commutator, [∆ξ, ∆η], and the anti-commutator, {∆ξ, ∆η}, are defined

[∆ξ, ∆η] ≡ ∆ξ ∆η − ∆η ∆ξ, (2.78)
{∆ξ, ∆η} ≡ ∆ξ ∆η + ∆η ∆ξ. (2.79)

The commutator is clearly anti-Hermitian,

([∆ξ, ∆η])† = (∆ξ ∆η − ∆η ∆ξ)† = ∆η ∆ξ − ∆ξ ∆η = −[∆ξ, ∆η], (2.80)

whereas the anti-commutator is obviously Hermitian. Now, it is easily demon-
strated that the expectation value of a Hermitian operator is a real number,
whereas the expectation value of an anti-Hermitian operator is a pure imaginary
number. It is clear that the right hand side of

⟨∆ξ ∆η⟩ = (1/2) ⟨[∆ξ, ∆η]⟩ + (1/2) ⟨{∆ξ, ∆η}⟩, (2.81)

consists of the sum of a purely real and a purely imaginary number. Taking the
modulus squared of both sides gives

|⟨∆ξ ∆η⟩|² = (1/4) |⟨[ξ, η]⟩|² + (1/4) |⟨{∆ξ, ∆η}⟩|², (2.82)

where use has been made of ⟨∆ξ⟩ = 0, etc. The final term in the above expression
is positive definite, so we can write

⟨(∆ξ)²⟩⟨(∆η)²⟩ ≥ (1/4) |⟨[ξ, η]⟩|², (2.83)
where use has been made of Eq. (2.76). The above expression is termed the
uncertainty relation. According to this relation, an exact knowledge of the value
of ξ implies no knowledge whatsoever of the value of η, and vice versa. The one
exception to this rule is when ξ and η commute, in which case exact knowledge
of ξ does not necessarily imply no knowledge of η.
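The inequality (2.83) can be verified numerically for any pair of Hermitian matrices and any normalized state. A hedged sketch (Python/numpy assumed; the choice of Pauli matrices σ_x and σ_y and of a random state is purely illustrative):

# A minimal sketch, assuming numpy: check <(dxi)^2><(deta)^2> >= (1/4)|<[xi,eta]>|^2
# for xi = sigma_x, eta = sigma_y, and a random normalized state.
import numpy as np

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])

rng = np.random.default_rng(3)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = psi / np.linalg.norm(psi)

def expect(op):
    # Expectation value <psi|op|psi> (real for a Hermitian op).
    return np.vdot(psi, op @ psi).real

var_x = expect(sigma_x @ sigma_x) - expect(sigma_x)**2      # Eq. (2.69)
var_y = expect(sigma_y @ sigma_y) - expect(sigma_y)**2
commutator = sigma_x @ sigma_y - sigma_y @ sigma_x
lhs = var_x * var_y
rhs = 0.25 * abs(np.vdot(psi, commutator @ psi))**2
print(lhs, rhs, lhs >= rhs - 1e-12)                          # Eq. (2.83) holds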
2.15 Continuous spectra
Up to now, we have studiously avoided dealing with observables possessing eigen-
values which lie in a continuous range, rather than having discrete values. The
reason for this is because continuous eigenvalues imply a ket space of nonde-
numerably infinite dimension. Unfortunately, continuous eigenvalues are un-
avoidable in quantum mechanics. In fact, the most important observables of all,
namely position and momentum, generally have continuous eigenvalues. Fortu-
nately, many of the results we obtained previously for a finite-dimensional ket
space with discrete eigenvalues can be generalized to ket spaces of nondenumer-
ably infinite dimensions.
Suppose that ξ is an observable with continuous eigenvalues. We can still
write the eigenvalue equation as
ξ|ξ′⟩ = ξ′|ξ′⟩. (2.84)

But, ξ′ can now take a continuous range of values. Let us assume, for the sake of
simplicity, that ξ′ can take any value. The orthogonality condition (2.50) gener-
alizes to

⟨ξ′|ξ′′⟩ = δ(ξ′ − ξ′′), (2.85)

where δ(x) denotes the famous Dirac delta-function. Note that there are clearly a
nondenumerably infinite number of mutually orthogonal eigenstates of ξ. Hence,
the dimensionality of ket space is nondenumerably infinite. Note, also, that eigen-
states corresponding to a continuous range of eigenvalues cannot be normalized
so that they have unit norms. In fact, these eigenstates have infinite norms: i.e.,
they are infinitely long. This is the major difference between eigenstates in a
finite-dimensional and an infinite-dimensional ket space. The extremely useful
relation (2.54) generalizes to

∫ dξ′ |ξ′⟩⟨ξ′| = 1. (2.86)
Note that a summation over discrete eigenvalues goes over into an integral over
a continuous range of eigenvalues. The eigenstates |ξ′⟩ must form a complete set
if ξ is to be an observable. It follows that any general ket can be expanded in
terms of the |ξ′⟩. In fact, the expansions (2.51)–(2.53) generalize to

|A⟩ = ∫ dξ′ |ξ′⟩⟨ξ′|A⟩, (2.87)

⟨A| = ∫ dξ′ ⟨A|ξ′⟩⟨ξ′|, (2.88)

⟨A|A⟩ = ∫ dξ′ ⟨A|ξ′⟩⟨ξ′|A⟩ = ∫ dξ′ |⟨A|ξ′⟩|². (2.89)

These results also follow simply from Eq. (2.86). We have seen that it is not possi-
ble to normalize the eigenstates |ξ′⟩ such that they have unit norms. Fortunately,
this convenient normalization is still possible for a general state vector. In fact,
according to Eq. (2.89), the normalization condition can be written

⟨A|A⟩ = ∫ dξ′ |⟨A|ξ′⟩|² = 1. (2.90)
We have now studied observables whose eigenvalues can take a discrete num-
ber of values as well as those whose eigenvalues can take any value. There are
number of other cases we could look at. For instance, observables whose eigen-
values can only take a finite range of values, or observables whose eigenvalues
take on a finite range of values plus a set of discrete values. Both of these cases
can be dealt with using a fairly straight-forward generalization of the previous
analysis (see Dirac, Cha. II and III).
3 Position and momentum
3.1 Introduction
So far, we have considered general dynamical variables represented by general
linear operators acting in ket space. However, in classical mechanics the most
important dynamical variables are those involving position and momentum. Let
us investigate the role of such variables in quantum mechanics.
In classical mechanics, the position q and momentum p of some component
of a dynamical system are represented as real numbers which, by definition, com-
mute. In quantum mechanics, these quantities are represented as noncommuting
linear Hermitian operators acting in a ket space which represents all of the pos-
sible states of the system. Our first task is to discover a quantum mechanical
replacement for the classical result qp − pq = 0. Do the position and momen-
tum operators commute? If not, what is the value of qp −pq?
3.2 Poisson brackets
Consider a dynamical system whose state at a particular time t is fully specified
by N independent classical coordinates q_i (where i runs from 1 to N). Associ-
ated with each generalized coordinate q_i is a classical canonical momentum p_i.
For instance, a Cartesian coordinate has an associated linear momentum, an an-
gular coordinate has an associated angular momentum, etc. As is well-known,
the behaviour of a classical system can be specified in terms of Lagrangian or
Hamiltonian dynamics. For instance, in Hamiltonian dynamics,
dq_i/dt = ∂H/∂p_i,    (3.1)

dp_i/dt = − ∂H/∂q_i,    (3.2)

where the function H(q_i, p_i, t) is the energy of the system at time t expressed
in terms of the classical coordinates and canonical momenta. This function is
usually referred to as the Hamiltonian of the system.
We are interested in finding some construct of classical dynamics which con-
sists of products of dynamical variables. If such a construct exists we hope to
generalize it somehow to obtain a rule describing how dynamical variables com-
mute with one another in quantum mechanics. There is, indeed, one well-known
construct in classical dynamics which involves products of dynamical variables.
The Poisson bracket of two dynamical variables u and v is defined
[u, v] = Σ_{i=1}^{N} ( ∂u/∂q_i ∂v/∂p_i − ∂u/∂p_i ∂v/∂q_i ),    (3.3)

where u and v are regarded as functions of the coordinates and momenta q_i and
p_i. It is easily demonstrated that

[q_i, q_j] = 0,    (3.4)

[p_i, p_j] = 0,    (3.5)

[q_i, p_j] = δ_ij.    (3.6)
The time evolution of a dynamical variable can also be written in terms of a
Poisson bracket by noting that
du/dt = Σ_{i=1}^{N} ( ∂u/∂q_i dq_i/dt + ∂u/∂p_i dp_i/dt )
      = Σ_{i=1}^{N} ( ∂u/∂q_i ∂H/∂p_i − ∂u/∂p_i ∂H/∂q_i ) = [u, H],    (3.7)
where use has been made of Hamilton’s equations.
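The definition (3.3) and the relations (3.4)–(3.7) are easy to verify symbolically. The short Python sketch below (my own illustration, not from the original notes) uses SymPy for a single degree of freedom, with a harmonic-oscillator Hamiltonian chosen purely as an example.

import sympy as sp

q, p, m, k = sp.symbols('q p m k', real=True)

def poisson(u, v):
    # Classical Poisson bracket [u, v] for one degree of freedom, Eq. (3.3) with N = 1.
    return sp.diff(u, q) * sp.diff(v, p) - sp.diff(u, p) * sp.diff(v, q)

# Fundamental brackets, Eqs. (3.4)-(3.6):
print(poisson(q, q), poisson(p, p), poisson(q, p))    # 0 0 1

# Time evolution, Eq. (3.7), for the illustrative Hamiltonian H = p^2/(2m) + k q^2/2:
H = p**2 / (2 * m) + k * q**2 / 2
print(sp.simplify(poisson(q, H)))    # p/m   (= dq/dt)
print(sp.simplify(poisson(p, H)))    # -k*q  (= dp/dt)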
Can we construct a quantum mechanical Poisson bracket in which u and v are
noncommuting operators, instead of functions? Well, the main properties of the
classical Poisson bracket are as follows:
[u, v] = −[v, u], (3.8)
[u, c] = 0, (3.9)
[u_1 + u_2, v] = [u_1, v] + [u_2, v],    (3.10)

[u, v_1 + v_2] = [u, v_1] + [u, v_2],    (3.11)

[u_1 u_2, v] = [u_1, v] u_2 + u_1 [u_2, v],    (3.12)

[u, v_1 v_2] = [u, v_1] v_2 + v_1 [u, v_2],    (3.13)
and
[u, [v, w]] + [v, [w, u]] + [w, [u, v]] = 0. (3.14)
The last relation is known as the Jacobi identity. In the above, u, v, w, etc.,
represent dynamical variables, and c represents a number. Can we find some
combination of noncommuting operators u and v, etc., which satisfies all of the
above relations?
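It will turn out below that (up to a constant factor) the commutator u v − v u does the job. As a preliminary sanity check, the commutator of matrices already satisfies the Jacobi identity (3.14); the following spot-check with random matrices is my own illustration (NumPy assumed), not part of the original notes.

import numpy as np

rng = np.random.default_rng(0)

def rand_op(n=4):
    # A random complex n x n matrix standing in for a generic operator.
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def comm(a, b):
    return a @ b - b @ a

u, v, w = rand_op(), rand_op(), rand_op()

# Jacobi identity, Eq. (3.14), with the bracket taken to be the commutator:
jacobi = comm(u, comm(v, w)) + comm(v, comm(w, u)) + comm(w, comm(u, v))
print(np.allclose(jacobi, 0))    # True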
Well, we can evaluate the Poisson bracket [u_1 u_2, v_1 v_2] in two different ways,
since we can use either of the formulae (3.12) or (3.13) first. Thus,
[u_1 u_2, v_1 v_2] = [u_1, v_1 v_2] u_2 + u_1 [u_2, v_1 v_2]    (3.15)
    = {[u_1, v_1] v_2 + v_1 [u_1, v_2]} u_2 + u_1 {[u_2, v_1] v_2 + v_1 [u_2, v_2]}
    = [u_1, v_1] v_2 u_2 + v_1 [u_1, v_2] u_2 + u_1 [u_2, v_1] v_2 + u_1 v_1 [u_2, v_2],
and
[u_1 u_2, v_1 v_2] = [u_1 u_2, v_1] v_2 + v_1 [u_1 u_2, v_2]    (3.16)
    = [u_1, v_1] u_2 v_2 + u_1 [u_2, v_1] v_2 + v_1 [u_1, v_2] u_2 + v_1 u_1 [u_2, v_2].
Note that the order of the various factors has been preserved, since they now
represent noncommuting operators. Equating the above two results yields
[u_1, v_1] (u_2 v_2 − v_2 u_2) = (u_1 v_1 − v_1 u_1) [u_2, v_2].    (3.17)
Since this relation must hold for u_1 and v_1 quite independent of u_2 and v_2, it
follows that

u_1 v_1 − v_1 u_1 = i ¯h [u_1, v_1],    (3.18)

u_2 v_2 − v_2 u_2 = i ¯h [u_2, v_2],    (3.19)
where ¯h does not depend on u_1, v_1, u_2, v_2, and also commutes with (u_1 v_1 − v_1 u_1).
Since u_1, etc., are quite general operators, it follows that ¯h is just a number. We
want the quantum mechanical Poisson bracket of two Hermitian operators to
be an Hermitian operator itself, since the classical Poisson bracket of two real
dynamical variables is real. This requirement is satisfied if ¯h is a real number.
Thus, the quantum mechanical Poisson bracket of two dynamical variables u and
v is given by
[u, v] = (u v − v u) / (i ¯h),    (3.20)
where ¯h is a new universal constant of nature. Quantum mechanics agrees with
experiments provided that ¯h takes the value h/2π, where
h = 6.6261 × 10⁻³⁴ J s    (3.21)
is Planck's constant. Somewhat confusingly, the notation [u, v] is convention-
ally reserved for the commutator u v − v u in quantum mechanics. We will use
[u, v]_quantum to denote the quantum Poisson bracket. Thus,

[u, v]_quantum = [u, v] / (i ¯h).    (3.22)
It is easily demonstrated that the quantum mechanical Poisson bracket, as defined
above, satisfies all of the relations (3.8)–(3.14).
The strong analogy we have found between the classical Poisson bracket,
defined in Eq. (3.3), and the quantum mechanical Poisson bracket, defined in
Eq. (3.22), leads us to make the assumption that the quantum mechanical bracket
has the same value as the corresponding classical bracket, at least for the simplest
cases. In other words, we are assuming that Eqs. (3.4)–(3.6) hold for quantum
mechanical as well as classical Poisson brackets. This argument yields the funda-
mental commutation relations
[q_i, q_j] = 0,    (3.23)

[p_i, p_j] = 0,    (3.24)

[q_i, p_j] = i ¯h δ_ij.    (3.25)
These results provide us with the basis for calculating commutation relations be-
tween general dynamical variables. For instance, if two dynamical variables, ξ
and η, can both be written as a power series in the q_i and p_i, then repeated
application of Eqs. (3.8)–(3.13) allows [ξ, η] to be expressed in terms of the fun-
damental commutation relations (3.23)–(3.25).
Equations (3.23)–(3.25) provide the foundation for the analogy between quan-
tum mechanics and classical mechanics. Note that the classical result (that every-
thing commutes) is obtained in the limit ¯h → 0. Thus, classical mechanics can be
regarded as the limiting case of quantum mechanics when ¯h goes to zero. In classi-
cal mechanics, each pair of a generalized coordinate and its conjugate momentum,
q_i and p_i, corresponds to a different classical degree of freedom of the system. It is
clear from Eqs. (3.23)–(3.25) that in quantum mechanics the dynamical variables
corresponding to different degrees of freedom all commute. It is only those variables
corresponding to the same degree of freedom which may fail to commute.
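The canonical commutator (3.25) cannot be realized exactly by finite matrices (the trace of a commutator vanishes, whereas the trace of i ¯h times the identity does not), but it can be checked approximately with truncated harmonic-oscillator ladder operators. The sketch below is my own illustration (NumPy assumed, units with ¯h = 1); the deviation is confined to the last basis state and is purely a truncation artefact.

import numpy as np

hbar, N = 1.0, 40
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)                  # annihilation operator in the number basis

# q and p for unit mass and frequency (an arbitrary illustrative choice):
q = np.sqrt(hbar / 2) * (a + a.conj().T)
p = 1j * np.sqrt(hbar / 2) * (a.conj().T - a)

comm = q @ p - p @ q                          # should be i*hbar away from the truncation edge
print(np.allclose(comm[:-1, :-1], 1j * hbar * np.eye(N)[:-1, :-1]))    # True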
3.3 Wave-functions
Consider a simple system with one classical degree of freedom, which corre-
sponds to the Cartesian coordinate x. Suppose that x is free to take any value
(e.g., x could be the position of a free particle). The classical dynamical vari-
able x is represented in quantum mechanics as a linear Hermitian operator which
is also called x. Moreover, the operator x possesses eigenvalues x' lying in the
continuous range −∞ < x' < +∞ (since the eigenvalues correspond to all the
possible results of a measurement of x). We can span ket space using the suit-
ably normalized eigenkets of x. An eigenket corresponding to the eigenvalue x'
is denoted |x'⟩. Moreover, [see Eq. (2.85)]

⟨x'|x''⟩ = δ(x' − x'').    (3.26)
The eigenkets satisfy the extremely useful relation [see Eq. (2.86)]
∫_{−∞}^{+∞} dx' |x'⟩⟨x'| = 1.    (3.27)
This formula expresses the fact that the eigenkets are complete, mutually orthog-
onal, and suitably normalized.
A state ket |A) (which represents a general state A of the system) can be
expressed as a linear superposition of the eigenkets of the position operator using
Eq. (3.27). Thus,
|A⟩ = ∫_{−∞}^{+∞} dx' ⟨x'|A⟩ |x'⟩.    (3.28)

The quantity ⟨x'|A⟩ is a complex function of the position eigenvalue x'. We can
write

⟨x'|A⟩ = ψ_A(x').    (3.29)
Here, ψ_A(x') is the famous wave-function of quantum mechanics. Note that state
A is completely specified by its wave-function ψ_A(x') [since the wave-function
can be used to reconstruct the state ket |A⟩ using Eq. (3.28)]. It is clear that
the wave-function of state A is simply the collection of the weights of the cor-
responding state ket |A⟩, when it is expanded in terms of the eigenkets of the
position operator. Recall, from Sect. 2.10, that the probability of a measurement
of a dynamical variable ξ yielding the result ξ' when the system is in state A is
given by |⟨ξ'|A⟩|², assuming that the eigenvalues of ξ are discrete. This result is
easily generalized to dynamical variables possessing continuous eigenvalues. In
fact, the probability of a measurement of x yielding a result lying in the range
x' to x' + dx' when the system is in a state |A⟩ is |⟨x'|A⟩|² dx'. In other words,
the probability of a measurement of position yielding a result in the range x' to
x' + dx' when the wave-function of the system is ψ_A(x') is

P(x', dx') = |ψ_A(x')|² dx'.    (3.30)
This formula is only valid if the state ket |A⟩ is properly normalized: i.e., if
⟨A|A⟩ = 1. The corresponding normalization for the wave-function is

∫_{−∞}^{+∞} |ψ_A(x')|² dx' = 1.    (3.31)
Consider a second state B represented by a state ket |B⟩ and a wave-function
ψ_B(x'). The inner product ⟨B|A⟩ can be written

⟨B|A⟩ = ∫_{−∞}^{+∞} dx' ⟨B|x'⟩⟨x'|A⟩ = ∫_{−∞}^{+∞} ψ_B*(x') ψ_A(x') dx',    (3.32)
where use has been made of Eqs. (3.27) and (3.29). Thus, the inner product of
two states is related to the overlap integral of their wave-functions.
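As a purely illustrative numerical aside, Eq. (3.32) can be approximated by discretizing the x'-axis and replacing the overlap integral by a Riemann sum. The wave-functions below are my own arbitrary Gaussian choices (NumPy assumed).

import numpy as np

x = np.linspace(-20.0, 20.0, 4001)       # discretized position eigenvalues x'
dx = x[1] - x[0]

def gaussian(x, x0, sigma, k0):
    # A normalized Gaussian wave-packet centred on x0 with mean wave-number k0.
    psi = np.exp(-(x - x0)**2 / (4 * sigma**2) + 1j * k0 * x)
    return psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

psi_A = gaussian(x, x0=0.0, sigma=1.0, k0=2.0)
psi_B = gaussian(x, x0=1.5, sigma=1.0, k0=2.0)

print(np.sum(np.abs(psi_A)**2) * dx)           # ~1, cf. the normalization (3.31)
overlap = np.sum(psi_B.conj() * psi_A) * dx    # <B|A>, Eq. (3.32)
print(abs(overlap))                            # less than 1 for distinct states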
Consider a general function f(x) of the observable x [e.g., f(x) = x²]. If |B⟩ =
f(x) |A⟩ then it follows that

ψ_B(x') = ⟨x'| f(x) ∫_{−∞}^{+∞} dx'' ψ_A(x'') |x''⟩
        = ∫_{−∞}^{+∞} dx'' f(x'') ψ_A(x'') ⟨x'|x''⟩,    (3.33)
giving

ψ_B(x') = f(x') ψ_A(x'),    (3.34)

where use has been made of Eq. (3.26). Here, f(x') is the same function of the
position eigenvalue x' that f(x) is of the position operator x: i.e., if f(x) = x² then
f(x') = x'². It follows, from the above result, that a general state ket |A⟩ can be
written
|A⟩ = ψ_A(x)⟩,    (3.35)

where ψ_A(x) is the same function of the operator x that the wave-function ψ_A(x')
is of the position eigenvalue x', and the ket ⟩ has the wave-function ψ(x') = 1.
The ket ⟩ is termed the standard ket. The dual of the standard ket is termed the
standard bra, and is denoted ⟨. It is easily seen that

⟨ ψ_A*(x)  ←DC→  ψ_A(x) ⟩.    (3.36)

Note, finally, that ψ_A(x)⟩ is often shortened to ψ_A⟩, leaving the dependence on
the position operator x tacitly understood.
3.4 Schrödinger's representation - I
Consider the simple system described in the previous section. A general state ket
can be written ψ(x)⟩, where ψ(x) is a general function of the position operator x,
and ψ(x') is the associated wave-function. Consider the ket whose wave-function
is dψ(x')/dx'. This ket is denoted dψ/dx⟩. The new ket is clearly a linear func-
tion of the original ket, so we can think of it as the result of some linear operator
acting on ψ⟩. Let us denote this operator d/dx. It follows that

(d/dx) ψ⟩ = dψ/dx⟩.    (3.37)
Any linear operator which acts on ket vectors can also act on bra vectors.
Consider d/dx acting on a general bra ⟨φ(x). According to Eq. (2.34), the bra
⟨φ d/dx satisfies

(⟨φ d/dx) ψ⟩ = ⟨φ (d/dx ψ⟩).    (3.38)
Making use of Eqs. (3.27) and (3.29), we can write

∫_{−∞}^{+∞} ⟨φ d/dx |x'⟩ dx' ψ(x') = ∫_{−∞}^{+∞} φ(x') dx' dψ(x')/dx'.    (3.39)
The right-hand side can be transformed via integration by parts to give

∫_{−∞}^{+∞} ⟨φ d/dx |x'⟩ dx' ψ(x') = − ∫_{−∞}^{+∞} dφ(x')/dx' dx' ψ(x'),    (3.40)
assuming that the contributions from the limits of integration vanish. It follows
that

⟨φ d/dx |x'⟩ = − dφ(x')/dx',    (3.41)

which implies

⟨φ d/dx = −⟨ dφ/dx.    (3.42)
The neglect of contributions from the limits of integration in Eq. (3.40) is rea-
sonable because physical wave-functions are square-integrable [see Eq. (3.31)].
Note that
(d/dx) ψ⟩ = dψ/dx⟩  ←DC→  ⟨ dψ*/dx = −⟨ ψ* d/dx,    (3.43)

where use has been made of Eq. (3.42). It follows, by comparison with Eqs. (2.35)
and (3.36), that

(d/dx)† = − d/dx.    (3.44)
Thus, d/dx is an anti-Hermitian operator.
Let us evaluate the commutation relation between the operators x and d/dx.
We have
(d/dx) x ψ⟩ = d(x ψ)/dx⟩ = x (d/dx) ψ⟩ + ψ⟩.    (3.45)

Since this holds for any ket ψ⟩, it follows that

(d/dx) x − x (d/dx) = 1.    (3.46)
Let p be the momentum conjugate to x (for the simple system under consideration
p is a straight-forward linear momentum). According to Eq. (3.25), x and p
satisfy the commutation relation

x p − p x = i ¯h.    (3.47)
It can be seen, by comparison with Eq. (3.46), that the Hermitian operator
−i ¯hd/dx satisfies the same commutation relation with x that p does. The most
general conclusion which may be drawn from a comparison of Eqs. (3.46) and
(3.47) is that
p = −i ¯h d/dx + f(x),    (3.48)
since (as is easily demonstrated) a general function f(x) of the position operator
automatically commutes with x.
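Equation (3.46) is also easy to see at work numerically: represent d/dx by a central-difference matrix on a grid and apply both orderings to a smooth test function; away from the grid boundaries the difference is just the function itself. The sketch below is my own illustration (NumPy assumed), not part of the original notes.

import numpy as np

N = 2000
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

# Central-difference matrix standing in for the operator d/dx.
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)

psi = np.exp(-x**2)                        # a smooth, square-integrable test function

# [(d/dx) x - x (d/dx)] psi, cf. Eq. (3.46):
lhs = D @ (x * psi) - x * (D @ psi)
print(np.allclose(lhs[2:-2], psi[2:-2], atol=1e-3))    # True away from the grid edges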
We have chosen to normalize the eigenkets and eigenbras of the position oper-
ator so that they satisfy the normalization condition (3.26). However, this choice
of normalization does not uniquely determine the eigenkets and eigenbras. Sup-
pose that we transform to a new set of eigenbras which are related to the old set
via
⟨x'|_new = e^{i γ'} ⟨x'|_old,    (3.49)

where γ' ≡ γ(x') is a real function of x'. This transformation amounts to a
rearrangement of the relative phases of the eigenbras. The new normalization
condition is
⟨x'|x''⟩_new = ⟨x'| e^{i γ'} e^{−i γ''} |x''⟩_old = e^{i (γ' − γ'')} ⟨x'|x''⟩_old
            = e^{i (γ' − γ'')} δ(x' − x'') = δ(x' − x'').    (3.50)
Thus, the new eigenbras satisfy the same normalization condition as the old
eigenbras.
By definition, the standard ket ⟩ satisfies ⟨x'|⟩ = 1. It follows from Eq. (3.49)
that the new standard ket is related to the old standard ket via

⟩_new = e^{−i γ} ⟩_old,    (3.51)
where γ ≡ γ(x) is a real function of the position operator x. The dual of the
above equation yields the transformation rule for the standard bra,
⟨_new = ⟨_old e^{i γ}.    (3.52)
The transformation rule for a general operator A follows from Eqs. (3.51) and
(3.52), plus the requirement that the triple product ⟨A⟩ remain invariant (this
must be the case, otherwise the probability of a measurement yielding a certain
result would depend on the choice of eigenbras). Thus,
A_new = e^{−i γ} A_old e^{i γ}.    (3.53)
Of course, if A commutes with x then A is invariant under the transformation. In
fact, d/dx is the only operator (we know of) which does not commute with x, so
Eq. (3.53) yields
(d/dx)_new = e^{−i γ} (d/dx) e^{i γ} = d/dx + i dγ/dx,    (3.54)
where the subscript "old" is taken as read. It follows, from Eq. (3.48), that the
momentum operator p can be written

p = −i ¯h (d/dx)_new − ¯h dγ/dx + f(x).    (3.55)
Thus, the special choice

¯h γ(x) = ∫^x f(x) dx    (3.56)

yields

p = −i ¯h (d/dx)_new.    (3.57)
Equation (3.56) fixes γ to within an arbitrary additive constant: i.e., the special
eigenkets and eigenbras for which Eq. (3.57) is true are determined to within an
arbitrary common phase-factor.
In conclusion, it is possible to find a set of basis eigenkets and eigenbras of
the position operator x which satisfy the normalization condition (3.26), and for
which the momentum conjugate to x can be represented as the operator
p = −i ¯h d/dx.    (3.58)
A general state ket is written ψ(x)⟩, where the standard ket ⟩ satisfies ⟨x'|⟩ = 1,
and where ψ(x') = ⟨x'|ψ(x)⟩ is the wave-function. This scheme of things is
known as Schrödinger's representation, and is the basis of wave mechanics.
3.5 Schrödinger's representation - II
In the preceding sections, we have developed Schrödinger's representation for
the case of a single operator x corresponding to a classical Cartesian coordinate.
However, this scheme can easily be extended. Consider a system with N general-
ized coordinates, q_1 … q_N, which can all be simultaneously measured. These are
represented as N commuting operators, q_1 … q_N, each with a continuous range
of eigenvalues, q'_1 … q'_N. Ket space is conveniently spanned by the simultaneous
eigenkets of q_1 … q_N, which are denoted |q'_1 … q'_N⟩. These eigenkets must form
a complete set, otherwise the q_1 … q_N would not be simultaneously observable.
The orthogonality condition for the eigenkets [i.e., the generalization of Eq. (3.26)]
is

⟨q'_1 … q'_N |q''_1 … q''_N⟩ = δ(q'_1 − q''_1) δ(q'_2 − q''_2) … δ(q'_N − q''_N).    (3.59)
The completeness condition [i.e., the generalization of Eq. (3.27)] is

∫_{−∞}^{+∞} … ∫_{−∞}^{+∞} dq'_1 … dq'_N |q'_1 … q'_N⟩⟨q'_1 … q'_N| = 1.    (3.60)
The standard ket ⟩ is defined such that

⟨q'_1 … q'_N |⟩ = 1.    (3.61)
The standard bra ⟨ is the dual of the standard ket. A general state ket is written

ψ(q_1 … q_N)⟩.    (3.62)

The associated wave-function is

ψ(q'_1 … q'_N) = ⟨q'_1 … q'_N |ψ⟩.    (3.63)

Likewise, a general state bra is written

⟨φ(q_1 … q_N),    (3.64)

where

φ(q'_1 … q'_N) = ⟨φ|q'_1 … q'_N⟩.    (3.65)
The probability of an observation of the system finding the first coordinate in the
range q'_1 to q'_1 + dq'_1, the second coordinate in the range q'_2 to q'_2 + dq'_2, etc., is

P(q'_1 … q'_N; dq'_1 … dq'_N) = |ψ(q'_1 … q'_N)|² dq'_1 … dq'_N.    (3.66)
Finally, the normalization condition for a physical wave-function is

∫_{−∞}^{+∞} … ∫_{−∞}^{+∞} |ψ(q'_1 … q'_N)|² dq'_1 … dq'_N = 1.    (3.67)
The N linear operators ∂/∂q_i (where i runs from 1 to N) are defined

(∂/∂q_i) ψ⟩ = ∂ψ/∂q_i ⟩.    (3.68)

These linear operators can also act on bras (provided the associated wave-functions
are square integrable) in accordance with [see Eq. (3.42)]

⟨φ ∂/∂q_i = −⟨ ∂φ/∂q_i.    (3.69)
Corresponding to Eq. (3.46), we can derive the commutation relations

(∂/∂q_i) q_j − q_j (∂/∂q_i) = δ_ij.    (3.70)
It is also clear that

(∂/∂q_i)(∂/∂q_j) ψ⟩ = ∂²ψ/∂q_i ∂q_j ⟩ = (∂/∂q_j)(∂/∂q_i) ψ⟩,    (3.71)

showing that

(∂/∂q_i)(∂/∂q_j) = (∂/∂q_j)(∂/∂q_i).    (3.72)
It can be seen, by comparison with Eqs. (3.23)–(3.25), that the linear oper-
ators −i ¯h ∂/∂q_i satisfy the same commutation relations with the q's and with
each other that the p's do. The most general conclusion we can draw from this
coincidence of commutation relations is (see Dirac)

p_i = −i ¯h ∂/∂q_i + ∂F(q_1 … q_N)/∂q_i.    (3.73)
However, the function F can be transformed away via a suitable readjustment of
the phases of the basis eigenkets (see Sect. 3.4, and Dirac). Thus, we can always
construct a set of simultaneous eigenkets of q_1 … q_N for which

p_i = −i ¯h ∂/∂q_i.    (3.74)

This is the generalized Schrödinger representation.
It follows from Eqs. (3.61), (3.68), and (3.74) that

p_i ⟩ = 0.    (3.75)
Thus, the standard ket in Schrödinger's representation is a simultaneous eigenket
of all the momentum operators belonging to the eigenvalue zero. Note that

⟨q'_1 … q'_N| (∂/∂q_i) ψ⟩ = ⟨q'_1 … q'_N| ∂ψ/∂q_i ⟩ = ∂ψ(q'_1 … q'_N)/∂q'_i
                         = (∂/∂q'_i) ⟨q'_1 … q'_N |ψ⟩.    (3.76)
Hence,

⟨q'_1 … q'_N| ∂/∂q_i = (∂/∂q'_i) ⟨q'_1 … q'_N|,    (3.77)
so that

⟨q'_1 … q'_N| p_i = −i ¯h (∂/∂q'_i) ⟨q'_1 … q'_N|.    (3.78)
The dual of the above equation gives

p_i |q'_1 … q'_N⟩ = i ¯h (∂/∂q'_i) |q'_1 … q'_N⟩.    (3.79)
3.6 The momentum representation
Consider a system with one degree of freedom, describable in terms of a coordi-
nate x and its conjugate momentum p, both of which have a continuous range
of eigenvalues. We have seen that it is possible to represent the system in terms
of the eigenkets of x. This is termed Schrödinger's representation. However, it is
also possible to represent the system in terms of the eigenkets of p.
Consider the eigenkets of p which belong to the eigenvalues p'. These are
denoted |p'⟩. The orthogonality relation for the momentum eigenkets is

⟨p'|p''⟩ = δ(p' − p''),    (3.80)

and the corresponding completeness relation is

∫_{−∞}^{+∞} dp' |p'⟩⟨p'| = 1.    (3.81)
A general state ket can be written

φ(p)⟩,    (3.82)

where the standard ket ⟩ satisfies

⟨p'|⟩ = 1.    (3.83)

Note that the standard ket in this representation is quite different to that in
Schrödinger's representation. The momentum space wave-function φ(p') sat-
isfies

φ(p') = ⟨p'|φ⟩.    (3.84)
The probability that a measurement of the momentum yields a result lying in the
range p' to p' + dp' is given by

P(p', dp') = |φ(p')|² dp'.    (3.85)
Finally, the normalization condition for a physical momentum space wave-function
is

∫_{−∞}^{+∞} |φ(p')|² dp' = 1.    (3.86)
The fundamental commutation relations (3.23)–(3.25) exhibit a particular
symmetry between coordinates and their conjugate momenta. If all the coor-
dinates are transformed into their conjugate momenta, and vice versa, and i is
then replaced by −i, the commutation relations are unchanged. It follows from
this symmetry that we can always choose the eigenkets of p in such a manner
that the coordinate x can be represented as (see Sect. 3.4)
x = i ¯h d/dp.    (3.87)
This is termed the momentum representation.
The above result is easily generalized to a system with more than one degree
of freedom. Suppose the system is specified by N coordinates, q_1 … q_N, and
N conjugate momenta, p_1 … p_N. Then, in the momentum representation, the
coordinates can be written as

q_i = i ¯h ∂/∂p_i.    (3.88)

We also have

q_i ⟩ = 0,    (3.89)

and

⟨p'_1 … p'_N| q_i = i ¯h (∂/∂p'_i) ⟨p'_1 … p'_N|.    (3.90)
The momentum representation is less useful than Schrödinger's representa-
tion for a very simple reason. The energy operator (i.e., the Hamiltonian) of
most simple systems takes the form of a sum of quadratic terms in the momenta
(i.e., the kinetic energy) plus a complicated function of the coordinates (i.e., the
potential energy). In Schrödinger's representation, the eigenvalue problem for
the energy translates into a second-order differential equation in the coordinates,
with a complicated potential function. In the momentum representation, the
problem transforms into a high-order differential equation in the momenta, with
a quadratic potential. With the mathematical tools at our disposal, we are far bet-
ter able to solve the former type of problem than the latter. Hence, Schrödinger's
representation is generally more useful than the momentum representation.
3.7 The uncertainty relation
How is a momentum space wave-function related to the corresponding coordi-
nate space wave-function? To answer this question, let us consider the represen-
tative ⟨x'|p'⟩ of the momentum eigenkets |p'⟩ in Schrödinger's representation for
a system with a single degree of freedom. This representative satisfies
p' ⟨x'|p'⟩ = ⟨x'|p|p'⟩ = −i ¯h (d/dx') ⟨x'|p'⟩,    (3.91)
where use has been made of Eq. (3.78) (for the case of a system with one degree
of freedom). The solution of the above differential equation is

⟨x'|p'⟩ = c' exp(i p' x'/¯h),    (3.92)

where c' = c'(p'). It is easily demonstrated that
⟨p'|p''⟩ = ∫_{−∞}^{+∞} ⟨p'|x'⟩ dx' ⟨x'|p''⟩ = c'* c'' ∫_{−∞}^{+∞} exp[−i (p' − p'') x'/¯h] dx'.    (3.93)
The well-known mathematical result

∫_{−∞}^{+∞} exp(i a x) dx = 2π δ(a),    (3.94)
yields

⟨p'|p''⟩ = |c'|² h δ(p' − p'').    (3.95)

This is consistent with Eq. (3.80), provided that c' = h^{−1/2}. Thus,

⟨x'|p'⟩ = h^{−1/2} exp(i p' x'/¯h).    (3.96)
Consider a general state ket |A⟩ whose coordinate wave-function is ψ(x'), and
whose momentum wave-function is Ψ(p'). In other words,

ψ(x') = ⟨x'|A⟩,    (3.97)

Ψ(p') = ⟨p'|A⟩.    (3.98)
It is easily demonstrated that

ψ(x') = ∫_{−∞}^{+∞} dp' ⟨x'|p'⟩⟨p'|A⟩ = (1/h^{1/2}) ∫_{−∞}^{+∞} Ψ(p') exp(i p' x'/¯h) dp'    (3.99)

and

Ψ(p') = ∫_{−∞}^{+∞} dx' ⟨p'|x'⟩⟨x'|A⟩ = (1/h^{1/2}) ∫_{−∞}^{+∞} ψ(x') exp(−i p' x'/¯h) dx',    (3.100)
where use has been made of Eqs. (3.27), (3.81), (3.94), and (3.96). Clearly, the
momentum space wave-function is the Fourier transform of the coordinate space
wave-function.
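Since Eq. (3.100) is just a Fourier transform, it can be evaluated by direct quadrature. The sketch below (my own illustration, NumPy assumed, units with ¯h = 1 so that h = 2π) transforms a Gaussian wave-packet and checks that the resulting momentum space wave-function is normalized and centred on the packet's mean momentum.

import numpy as np

hbar = 1.0
h = 2 * np.pi * hbar

x = np.linspace(-30.0, 30.0, 1500)
dx = x[1] - x[0]
p = np.linspace(-10.0, 10.0, 600)
dp = p[1] - p[0]

# A normalized Gaussian wave-packet with mean momentum p0 (illustrative choice).
p0, sigma = 2.0, 1.0
psi = np.exp(-x**2 / (4 * sigma**2) + 1j * p0 * x / hbar)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Eq. (3.100): Psi(p') = h^(-1/2) * integral of psi(x') exp(-i p' x'/hbar) dx'
Psi = np.array([np.sum(psi * np.exp(-1j * pp * x / hbar)) * dx for pp in p]) / np.sqrt(h)

print(np.sum(np.abs(Psi)**2) * dp)     # ~1: the transform preserves the normalization
print(p[np.argmax(np.abs(Psi))])       # ~p0: the packet is centred on its mean momentum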
Consider a state whose coordinate space wave-function is a wave-packet. In
other words, the wave-function only has non-negligible amplitude in some spa-
tially localized region of extent ∆x. As is well-known, the Fourier transform of a
wave-packet fills up a wave-number band of approximate extent δk ∼ 1/∆x. Note
that in Eq. (3.99) the role of the wave-number k is played by the quantity p'/¯h. It
follows that the momentum space wave-function corresponding to a wave-packet
in coordinate space extends over a range of momenta ∆p ∼ ¯h/∆x. Clearly, a mea-
surement of x is almost certain to give a result lying in a range of width ∆x.
Likewise, measurement of p is almost certain to yield a result lying in a range of
width ∆p. The product of these two uncertainties is

∆x ∆p ∼ ¯h.    (3.101)
This result is called Heisenberg’s uncertainty principle.
Actually, it is possible to write Heisenberg’s uncertainty principle more exactly
by making use of Eq. (2.83) and the commutation relation (3.47). We obtain
⟨(∆x)²⟩ ⟨(∆p)²⟩ ≥ ¯h²/4    (3.102)
for any general state. It is easily demonstrated that the minimum uncertainty
states, for which the equality sign holds in the above relation, correspond to
Gaussian wave-packets in both coordinate and momentum space.
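The statement that Gaussian packets saturate Eq. (3.102) can be checked directly on a grid, using p = −i ¯h d/dx to evaluate the momentum moments. This is my own illustrative sketch (NumPy assumed, ¯h = 1).

import numpy as np

hbar = 1.0
x = np.linspace(-20.0, 20.0, 4000)
dx = x[1] - x[0]

sigma = 1.3
psi = np.exp(-x**2 / (4 * sigma**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

def expect(f):
    # <psi| f |psi>, where f is the array obtained by acting on psi.
    return np.real(np.sum(psi.conj() * f) * dx)

var_x = expect(x**2 * psi) - expect(x * psi)**2

dpsi = np.gradient(psi, dx)                    # d psi / dx'
d2psi = np.gradient(dpsi, dx)                  # d^2 psi / dx'^2
p_mean = expect(-1j * hbar * dpsi)
var_p = expect(-hbar**2 * d2psi) - p_mean**2

print(var_x * var_p, hbar**2 / 4)              # both ~ hbar^2/4 for a Gaussian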
3.8 Displacement operators
Consider a system with one degree of freedom corresponding to the Cartesian
coordinate x. Suppose that we displace this system some distance along the x-
axis. We could imagine that the system is on wheels, and we just give it a little
push. The final state of the system is completely determined by its initial state,
together with the direction and magnitude of the displacement. Note that the
type of displacement we are considering is one in which everything to do with the
system is displaced. So, if the system is subject to an external potential, then the
potential must be displaced.
The situation is not so clear with state kets. The final state of the system
only determines the direction of the displaced state ket. Even if we adopt the
convention that all state kets have unit norms, the final ket is still not completely
determined, since it can be multiplied by a constant phase-factor. However, we
know that the superposition relations between states remain invariant under the
displacement. This follows because the superposition relations have a physical
significance which is unaffected by a displacement of the system. Thus, if
|R) = |A) + |B) (3.103)
in the undisplaced system, and the displacement causes ket |R) to transform to
ket |Rd), etc., then in the displaced system we have
|Rd) = |Ad) + |Bd). (3.104)
Incidentally, this determines the displaced kets to within a single arbitrary phase-
factor to be multiplied into all of them. The displaced kets cannot be multiplied
by individual phase-factors, because this would wreck the superposition relations.
Since Eq. (3.104) holds in the displaced system whenever Eq. (3.103) holds in
the undisplaced system, it follows that the displaced ket |Rd) must be the result
of some linear operator acting on the undisplaced ket |R). In other words,
|Rd) = D|R), (3.105)
where D is an operator which depends only on the nature of the displacement. The
arbitrary phase-factor by which all displaced kets may be multiplied results in D
being undetermined to an arbitrary multiplicative constant of modulus unity.
We now adopt the ansatz that any combination of bras, kets, and dynamical
variables which possesses a physical significance is invariant under a displace-
ment of the system. The normalization condition
¸A|A) = 1 (3.106)
for a state ket |A) certainly has a physical significance. Thus, we must have
¸Ad|Ad) = 1. (3.107)
Now, |Ad⟩ = D|A⟩ and ⟨Ad| = ⟨A|D†, so

⟨A|D† D|A⟩ = 1.    (3.108)
Since this must hold for any state ket |A), it follows that
D† D = 1.    (3.109)

Hence, the displacement operator is unitary. Note that the above relation implies
that

|A⟩ = D† |Ad⟩.    (3.110)
The equation
v |A) = |B), (3.111)
where the operator v represents a dynamical variable, has some physical signifi-
cance. Thus, we require that
v_d |Ad⟩ = |Bd⟩,    (3.112)

where v_d is the displaced operator. It follows that

v_d |Ad⟩ = D|B⟩ = D v |A⟩ = D v D† |Ad⟩.    (3.113)

Since this is true for any ket |Ad⟩, we have

v_d = D v D†.    (3.114)
Note that the arbitrary numerical factor in D does not affect either of the results
(3.109) and (3.114).
Suppose, now, that the system is displaced an infinitesimal distance δx along
the x-axis. We expect that the displaced ket |Ad) should approach the undisplaced
ket |A) in the limit as δx →0. Thus, we expect the limit
lim_{δx→0} (|Ad⟩ − |A⟩)/δx = lim_{δx→0} (D − 1)/δx |A⟩    (3.115)

to exist. Let

d_x = lim_{δx→0} (D − 1)/δx,    (3.116)
where d_x is denoted the displacement operator along the x-axis. The fact that D
can be replaced by D exp(i γ), where γ is a real phase-angle, implies that d_x can
be replaced by

lim_{δx→0} [D exp(i γ) − 1]/δx = lim_{δx→0} (D − 1 + i γ)/δx = d_x + i a_x,    (3.117)
where a_x is the limit of γ/δx. We have assumed, as seems reasonable, that γ tends
to zero as δx → 0. It is clear that the displacement operator is undetermined to
an arbitrary imaginary additive constant.

For small δx, we have

D = 1 + δx d_x.    (3.118)
It follows from Eq. (3.109) that
(1 + δx d_x†)(1 + δx d_x) = 1.    (3.119)

Neglecting order (δx)², we obtain

d_x† + d_x = 0.    (3.120)
Thus, the displacement operator is anti-Hermitian. Substituting into Eq. (3.114),
and again neglecting order (δx)², we find that

v_d = (1 + δx d_x) v (1 − δx d_x) = v + δx (d_x v − v d_x),    (3.121)

which implies

lim_{δx→0} (v_d − v)/δx = d_x v − v d_x.    (3.122)
Let us consider a specific example. Suppose that a state has a wave-function
ψ(x'). If the system is displaced a distance δx along the x-axis then the new
wave-function is ψ(x' − δx) (i.e., the same shape shifted in the x-direction by a
distance δx). Actually, the new wave-function can be multiplied by an arbitrary
number of modulus unity. It can be seen that the new wave-function is obtained
from the old wave-function according to the prescription x' → x' − δx. Thus,

x_d = x − δx.    (3.123)

A comparison with Eq. (3.122), using x = v, yields

d_x x − x d_x = −1.    (3.124)
It follows that i ¯h d_x obeys the same commutation relation with x that p_x, the
momentum conjugate to x, does [see Eq. (3.25)]. The most general conclusion
we can draw from this observation is that

p_x = i ¯h d_x + f(x),    (3.125)

where f is Hermitian (since p_x is Hermitian). However, the fact that d_x is unde-
termined to an arbitrary additive imaginary constant (which could be a function
of x) enables us to transform the function f out of the above equation, leaving

p_x = i ¯h d_x.    (3.126)
Thus, the displacement operator in the x-direction is proportional to the momen-
tum conjugate to x. We say that p_x is the generator of translations along the
x-axis.

A finite translation along the x-axis can be constructed from a series of very
many infinitesimal translations. Thus, the operator D(∆x) which translates the
system a distance ∆x along the x-axis is written

D(∆x) = lim_{N→∞} (1 − i (∆x/N) p_x/¯h)^N,    (3.127)

where use has been made of Eqs. (3.118) and (3.126). It follows that

D(∆x) = exp(−i p_x ∆x/¯h).    (3.128)
The unitary nature of the operator is now clearly apparent.
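Because D(∆x) = exp(−i p_x ∆x/¯h) is diagonal in the momentum representation, a finite translation can be carried out numerically as a phase multiplication between Fourier transforms. The sketch below is my own illustration (NumPy assumed, ¯h = 1, periodic grid), not an algorithm from the text.

import numpy as np

hbar = 1.0
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])    # wave-numbers; p = hbar k

psi = np.exp(-(x + 5.0)**2)                         # a wave-packet centred on x = -5

# Apply D(dx_shift) = exp(-i p dx_shift / hbar) in the momentum representation:
dx_shift = 3.0
phase = np.exp(-1j * (hbar * k) * dx_shift / hbar)
psi_shifted = np.fft.ifft(phase * np.fft.fft(psi))

print(x[np.argmax(np.abs(psi))], x[np.argmax(np.abs(psi_shifted))])   # centre moves from ~-5 to ~-2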
We can also construct displacement operators which translate the system along
the y- and z-axes. Note that a displacement a distance ∆x along the x-axis com-
mutes with a displacement a distance ∆y along the y-axis. In other words, if the
system is moved ∆x along the x-axis, and then ∆y along the y-axis then it ends
up in the same state as if it were moved ∆y along the y-axis, and then ∆x along
the x-axis. The fact that translations in independent directions commute is clearly
associated with the fact that the conjugate momentum operators associated with
these directions also commute [see Eqs. (3.24) and (3.128)].
4 Quantum dynamics
4.1 Schrödinger's equations of motion
Up to now, we have only considered systems at one particular instant of time. Let
us now investigate how quantum mechanical systems evolve with time.
Consider a system in a state A which evolves in time. At time t the state of the
system is represented by the ket |At). The label A is needed to distinguish the
ket from any other ket (|Bt), say) which is evolving in time. The label t is needed
to distinguish the different states of the system at different times.
The final state of the system at time t is completely determined by its initial
state at time t_0 plus the time interval t − t_0 (assuming that the system is left
undisturbed during this time interval). However, the final state only determines
the direction of the final state ket. Even if we adopt the convention that all state
kets have unit norms, the final ket is still not completely determined, since it
can be multiplied by an arbitrary phase-factor. However, we expect that if a
superposition relation holds for certain states at time t_0 then the same relation
should hold between the corresponding time-evolved states at time t, assuming
that the system is left undisturbed between times t_0 and t. In other words, if

|Rt_0⟩ = |At_0⟩ + |Bt_0⟩    (4.1)
for any three kets, then we should have
|Rt) = |At) + |Bt). (4.2)
This rule determines the time-evolved kets to within a single arbitrary phase-
factor to be multiplied into all of them. The evolved kets cannot be multiplied by
individual phase-factors, since this would invalidate the superposition relation at
later times.
According to Eqs. (4.1) and (4.2), the final ket |Rt⟩ depends linearly on the
initial ket |Rt_0⟩. Thus, the final ket can be regarded as the result of some linear
operator acting on the initial ket: i.e.,

|Rt⟩ = T |Rt_0⟩,    (4.3)
where T is a linear operator which depends only on the times t and t_0. The
arbitrary phase-factor by which all time evolved kets may be multiplied results
in T(t, t_0) being undetermined to an arbitrary multiplicative constant of modulus
unity.
Since we have adopted a convention in which the norm of any state ket is
unity, it makes sense to define the time evolution operator T in such a manner
that it preserves the length of any ket upon which it acts (i.e., if a ket is prop-
erly normalized at time t_0 then it will remain normalized at all subsequent times
t > t_0). This is always possible, since the length of a ket possesses no physical
significance. Thus, we require that

⟨At_0|At_0⟩ = ⟨At|At⟩    (4.4)

for any ket A, which immediately yields

T† T = 1.    (4.5)
Hence, the time evolution operator T is a unitary operator.
Up to now, the time evolution operator T looks very much like the spatial
displacement operator D introduced in the previous section. However, there are
some important differences between time evolution and spatial displacement.
In general, we do expect the expectation value of some observable ξ to evolve
with time, even if the system is left in a state of undisturbed motion (after all,
time evolution has no meaning unless something observable changes with time).
The triple product ⟨A|ξ|A⟩ can evolve either because the ket |A⟩ evolves and the
operator ξ stays constant, the ket |A) stays constant and the operator ξ evolves,
or both the ket |A) and the operator ξ evolve. Since we are already committed to
evolving state kets, according to Eq. (4.3), let us assume that the time evolution
operator T can be chosen in such a manner that the operators representing the
dynamical variables of the system do not evolve in time (unless they contain some
specific time dependence).
We expect, from physical continuity, that as t → t_0 then |At⟩ → |At_0⟩ for any
ket A. Thus, the limit

lim_{t→t_0} (|At⟩ − |At_0⟩)/(t − t_0) = lim_{t→t_0} (T − 1)/(t − t_0) |At_0⟩    (4.6)
should exist. Note that this limit is simply the derivative of |At_0⟩ with respect to
t_0. Let

τ(t_0) = lim_{t→t_0} [T(t, t_0) − 1]/(t − t_0).    (4.7)
It is easily demonstrated from Eq. (4.5) that τ is anti-Hermitian: i.e.,

τ† + τ = 0.    (4.8)

The fact that T can be replaced by T exp(i γ) (where γ is real) implies that τ is
undetermined to an arbitrary imaginary additive constant (see previous section).
Let us define the Hermitian operator H(t_0) = i ¯h τ. This operator is undetermined
to an arbitrary real additive constant. It follows from Eqs. (4.6) and (4.7) that

i ¯h d|At_0⟩/dt_0 = i ¯h lim_{t→t_0} (|At⟩ − |At_0⟩)/(t − t_0) = i ¯h τ(t_0) |At_0⟩ = H(t_0) |At_0⟩.    (4.9)
When written for general t this equation becomes

i ¯h d|At⟩/dt = H(t) |At⟩.    (4.10)
Equation (4.10) gives the general law for the time evolution of a state ket in
a scheme in which the operators representing the dynamical variables remain
fixed. This equation is denoted Schrödinger's equation of motion. It involves a
Hermitian operator H(t) which is, presumably, a characteristic of the dynamical
system under investigation.
We saw, in the previous section, that if the operator D(x, x_0) displaces the
system along the x-axis from x_0 to x then

p_x = i ¯h lim_{x→x_0} [D(x, x_0) − 1]/(x − x_0),    (4.11)
where p_x is the operator representing the momentum conjugate to x. We now
have that if the operator T(t, t_0) evolves the system in time from t_0 to t then

H(t_0) = i ¯h lim_{t→t_0} [T(t, t_0) − 1]/(t − t_0).    (4.12)
Thus, the dynamical variable corresponding to the operator H stands to time t as
the momentum p_x stands to the coordinate x. By analogy with classical physics,
this suggests that H(t) is the operator representing the total energy of the system.
(Recall that, in classical physics, if the equations of motion of a system are in-
variant under an x-displacement of the system then this implies that the system
conserves momentum in the x-direction. Likewise, if the equations of motion are
invariant under a temporal displacement then this implies that the system con-
serves energy.) The operator H(t) is usually called the Hamiltonian of the system.
The fact that the Hamiltonian is undetermined to an arbitrary real additive con-
stant is related to the well-known phenomenon that energy is undetermined to
an arbitrary additive constant in physics (i.e., the zero of potential energy is not
well-defined).
Substituting |At⟩ = T |At_0⟩ into Eq. (4.10) yields

i ¯h (dT/dt) |At_0⟩ = H(t) T |At_0⟩.    (4.13)

Since this must hold for any initial state |At_0⟩ we conclude that

i ¯h dT/dt = H(t) T.    (4.14)
This equation can be integrated to give

T(t, t_0) = exp( −i ∫_{t_0}^{t} H(t') dt' /¯h ),    (4.15)
where use has been made of Eqs. (4.5) and (4.6). (Here, we assume that Hamil-
tonian operators evaluated at different times commute with one another). It is
now clear how the fact that H is undetermined to an arbitrary real additive con-
stant leaves T undetermined to a phase-factor. Note that, in the above analysis,
time is not an operator (we cannot observe time, as such), it is just a parameter
(or, more accurately, a continuous label). Since we are only dealing with non-
relativistic quantum mechanics, the fact that position is an operator, but time is
only a label, need not worry us unduly. In relativistic quantum mechanics, time
and space coordinates are treated on the same footing by relegating position from
being an operator to being just a label.
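For a time-independent Hamiltonian, Eq. (4.15) reduces to T = exp[−i H (t − t_0)/¯h], which for a finite-dimensional system is just a matrix exponential. The sketch below (my own illustration, NumPy/SciPy assumed, ¯h = 1, an arbitrary two-level Hamiltonian) checks that T is unitary and satisfies Eq. (4.14).

import numpy as np
from scipy.linalg import expm

hbar = 1.0
# An arbitrary 2x2 Hermitian matrix standing in for a time-independent Hamiltonian.
H = np.array([[1.0, 0.5 - 0.2j],
              [0.5 + 0.2j, -1.0]])

def T(t, t0=0.0):
    # Time evolution operator of Eq. (4.15) for time-independent H.
    return expm(-1j * H * (t - t0) / hbar)

t = 0.7
U = T(t)
print(np.allclose(U.conj().T @ U, np.eye(2)))    # unitarity, Eq. (4.5)

# Check Eq. (4.14), i hbar dT/dt = H T, by a central finite difference:
eps = 1e-6
dT_dt = (T(t + eps) - T(t - eps)) / (2 * eps)
print(np.allclose(1j * hbar * dT_dt, H @ U, atol=1e-6))    # True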
4.2 Heisenberg’s equations of motion
We have seen that in Schrödinger's scheme the dynamical variables of the system
remain fixed during a period of undisturbed motion, whereas the state kets evolve
according to Eq. (4.10). However, this is not the only way in which to represent
the time evolution of the system.
Suppose that a general state ket A is subject to the transformation

|A_t⟩ = T†(t, t_0) |A⟩.    (4.16)
This is a time-dependent transformation, since the operator T(t, t_0) obviously
depends on time. The subscript t is used to remind us that the transformation is
time-dependent. The time evolution of the transformed state ket is given by

|A_t t⟩ = T†(t, t_0) |At⟩ = T†(t, t_0) T(t, t_0) |At_0⟩ = |A_t t_0⟩,    (4.17)

where use has been made of Eqs. (4.3), (4.5), and the fact that T(t_0, t_0) = 1.
Clearly, the transformed state ket does not evolve in time. Thus, the transforma-
tion (4.16) has the effect of bringing all kets representing states of undisturbed
motion of the system to rest.
The transformation must also be applied to bras. The dual of Eq. (4.16) yields

⟨A_t| = ⟨A| T.    (4.18)
The transformation rule for a general observable v is obtained from the require-
ment that the expectation value ⟨A|v|A⟩ should remain invariant. It is easily seen
that

v_t = T† v T.    (4.19)
Thus, a dynamical variable, which corresponds to a fixed linear operator in Schrö-
dinger's scheme, corresponds to a moving linear operator in this new scheme. It
is clear that the transformation (4.16) leads us to a scenario in which the state
of the system is represented by a fixed vector, and the dynamical variables are
represented by moving linear operators. This is termed the Heisenberg picture, as
opposed to the Schrödinger picture, which is outlined in Sect. 4.1.
Consider a dynamical variable v corresponding to a fixed linear operator in the
Schrödinger picture. According to Eq. (4.19), we can write

T v_t = v T.    (4.20)
Differentiation with respect to time yields

(dT/dt) v_t + T (dv_t/dt) = v (dT/dt).    (4.21)
With the help of Eq. (4.14), this reduces to

H T v_t + i ¯h T (dv_t/dt) = v H T,    (4.22)

or

i ¯h dv_t/dt = T† v H T − T† H T v_t = v_t H_t − H_t v_t,    (4.23)

where

H_t = T† H T.    (4.24)
Equation (4.23) can be written

i ¯h dv_t/dt = [v_t, H_t].    (4.25)
Equation (4.25) shows how the dynamical variables of the system evolve in
the Heisenberg picture. It is denoted Heisenberg’s equation of motion. Note that
the time-varying dynamical variables in the Heisenberg picture are usually called
Heisenberg dynamical variables to distinguish them from Schrödinger dynamical
variables (i.e., the corresponding variables in the Schrödinger picture), which do
not evolve in time.
According to Eq. (3.22), the Heisenberg equation of motion can be written

dv_t/dt = [v_t, H_t]_quantum,    (4.26)

where [ ]_quantum denotes the quantum Poisson bracket. Let us compare this equa-
tion with the classical time evolution equation for a general dynamical variable
v, which can be written in the form [see Eq. (3.7)]

dv/dt = [v, H]_classical.    (4.27)
Here, [ ]_classical is the classical Poisson bracket, and H denotes the classical
Hamiltonian. The strong resemblance between Eqs. (4.26) and (4.27) provides
us with further justification for our identification of the linear operator H with
the energy of the system in quantum mechanics.
Note that if the Hamiltonian does not explicitly depend on time (i.e., the sys-
tem is not subject to some time-dependent external force) then Eq. (4.15) yields
T(t, t_0) = exp[−i H (t − t_0)/¯h].    (4.28)

This operator manifestly commutes with H, so

H_t = T† H T = H.    (4.29)
Furthermore, Eq. (4.25) gives
i ¯h dH/dt = [H, H] = 0.    (4.30)
Thus, if the energy of the system has no explicit time-dependence then it is rep-
resented by the same non-time-varying operator H in both the Schrödinger and
Heisenberg pictures.
Suppose that v is an observable which commutes with the Hamiltonian (and,
hence, with the time evolution operator T). It follows from Eq. (4.19) that v_t = v.
Heisenberg's equation of motion yields

i ¯h dv/dt = [v, H] = 0.    (4.31)
Thus, any observable which commutes with the Hamiltonian is a constant of the mo-
tion (hence, it is represented by the same fixed operator in both the Schrödinger
and Heisenberg pictures). Only those observables which do not commute with
the Hamiltonian evolve in time in the Heisenberg picture.
4.3 Ehrenfest’s theorem
We have now derived all of the basic elements of quantum mechanics. The only
thing which is lacking is some rule to determine the form of the quantum mechan-
ical Hamiltonian. For a physical system which possesses a classical analogue, we
generally assume that the Hamiltonian has the same form as in classical physics
(i.e., we replace the classical coordinates and conjugate momenta by the corre-
sponding quantum mechanical operators). This scheme guarantees that quantum
mechanics yields the correct classical equations of motion in the classical limit.
Whenever an ambiguity arises because of non-commuting observables, this can
usually be resolved by requiring the Hamiltonian H to be an Hermitian oper-
ator. For instance, we would write the quantum mechanical analogue of the
classical product x p, appearing in the Hamiltonian, as the Hermitian product
(1/2)(x p + px). When the system in question has no classical analogue then we
are reduced to guessing a form for H which reproduces the observed behaviour
of the system.
Consider a three-dimensional system characterized by three independent Carte-
sian position coordinates x_i (where i runs from 1 to 3), with three corresponding
conjugate momenta p_i. These are represented by three commuting position op-
erators x_i, and three commuting momentum operators p_i, respectively. The com-
mutation relations satisfied by the position and momentum operators are [see
Eq. (3.25)]

[x_i, p_j] = i ¯h δ_ij.    (4.32)
It is helpful to denote (x_1, x_2, x_3) as x and (p_1, p_2, p_3) as p. The following useful
formulae,

[x_i, F(p)] = i ¯h ∂F/∂p_i,    (4.33)

[p_i, G(x)] = −i ¯h ∂G/∂x_i,    (4.34)

where F and G are functions which can be expanded as power series, are easily
proved using the fundamental commutation relations Eq. (4.32).
Let us now consider the three-dimensional motion of a free particle of mass m
in the Heisenberg picture. The Hamiltonian is assumed to have the same form as
in classical physics:

H = p²/(2 m) = (1/2 m) Σ_{i=1}^{3} p_i².    (4.35)
In the following, all dynamical variables are assumed to be Heisenberg dynamical
variables, although we will omit the subscript t for the sake of clarity. The time
evolution of the momentum operator p_i follows from Heisenberg's equation of
motion (4.25). We find that

dp_i/dt = (1/i ¯h) [p_i, H] = 0,    (4.36)
since p_i automatically commutes with any function of the momentum operators.
Thus, for a free particle the momentum operators are constants of the motion,
which means that p_i(t) = p_i(0) at all times t (for i = 1 to 3). The time evolution
of the position operator x_i is given by

dx_i/dt = (1/i ¯h) [x_i, H] = (1/i ¯h) (1/2 m) i ¯h (∂/∂p_i) ( Σ_{j=1}^{3} p_j² ) = p_i/m = p_i(0)/m,    (4.37)
where use has been made of Eq. (4.33). It follows that

x_i(t) = x_i(0) + [p_i(0)/m] t,    (4.38)
which is analogous to the equation of motion of a classical free particle. Note
that even though

[x_i(0), x_j(0)] = 0,    (4.39)

where the position operators are evaluated at equal times, the x_i do not commute
when evaluated at different times. For instance,

[x_i(t), x_i(0)] = [ p_i(0) t/m , x_i(0) ] = −i ¯h t/m.    (4.40)
Combining the above commutation relation with the uncertainty relation (2.83)
yields

⟨(∆x_i)²⟩_t ⟨(∆x_i)²⟩_{t=0} ≥ ¯h² t² / (4 m²).    (4.41)
This result implies that even if a particle is well-localized at t = 0, its position
becomes progressively more uncertain with time. This conclusion can also be
obtained by studying the propagation of wave-packets in wave mechanics.
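In wave mechanics the spreading implied by Eq. (4.41) is the familiar growth of a free Gaussian packet, for which ⟨(∆x)²⟩_t = σ² + (¯h t/(2 m σ))² when the initial width is σ. The sketch below (my own illustration, ¯h = m = 1, NumPy assumed) evolves a free packet exactly in the momentum representation and compares the numerical width with that formula.

import numpy as np

hbar = m = 1.0
N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

sigma = 1.0
psi0 = np.exp(-x**2 / (4 * sigma**2)).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

def width_sq(t):
    # <(Delta x)^2> at time t for a free particle, evolved in the momentum representation.
    psi_t = np.fft.ifft(np.exp(-1j * hbar * k**2 * t / (2 * m)) * np.fft.fft(psi0))
    prob = np.abs(psi_t)**2
    mean = np.sum(x * prob) * dx
    return np.sum((x - mean)**2 * prob) * dx

for t in (0.0, 2.0, 4.0):
    print(t, width_sq(t), sigma**2 + (hbar * t / (2 * m * sigma))**2)   # numerical vs analytic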
Let us now add a potential V(x) to our free particle Hamiltonian:

H = p²/(2 m) + V(x).    (4.42)

Here, V is some function of the x_i operators. Heisenberg's equation of motion
gives

dp_i/dt = (1/i ¯h) [p_i, V(x)] = − ∂V(x)/∂x_i,    (4.43)
where use has been made of Eq. (4.34). On the other hand, the result

dx_i/dt = p_i/m    (4.44)

still holds, because the x_i all commute with the new term V(x) in the Hamilto-
nian. We can use the Heisenberg equation of motion a second time to deduce
that
d²x_i/dt² = (1/i ¯h) [dx_i/dt, H] = (1/i ¯h) [p_i/m, H] = (1/m) dp_i/dt = −(1/m) ∂V(x)/∂x_i.    (4.45)
In vectorial form, this equation becomes

m d²x/dt² = dp/dt = −∇V(x).    (4.46)
This is the quantum mechanical equivalent of Newton’s second law of motion.
Taking the expectation values of both sides with respect to a Heisenberg state ket
that does not move with time, we obtain
m d²⟨x⟩/dt² = d⟨p⟩/dt = −⟨∇V(x)⟩.    (4.47)
This is known as Ehrenfest’s theorem. When written in terms of expectation
values, this result is independent of whether we are using the Heisenberg or
Schrödinger picture. In contrast, the operator equation (4.46) only holds if x and
p are understood to be Heisenberg dynamical variables. Note that Eq. (4.47) has
no dependence on ¯h. In fact, it guarantees to us that the centre of a wave-packet
always moves like a classical particle.
4.4 Schrödinger's wave-equation
Let us now consider the motion of a particle in three dimensions in the Schrödinger
picture. The fixed dynamical variables of the system are the position operators
x ≡ (x_1, x_2, x_3), and the momentum operators p ≡ (p_1, p_2, p_3). The state of the
system is represented as some time evolving ket |At).
Let |x'⟩ represent a simultaneous eigenket of the position operators belonging
to the eigenvalues x' ≡ (x'_1, x'_2, x'_3). Note that, since the position operators are
fixed in the Schrödinger picture, we do not expect the |x'⟩ to evolve in time. The
wave-function of the system at time t is defined

ψ(x', t) = ⟨x'|At⟩.    (4.48)
The Hamiltonian of the system is taken to be

H = p²/(2 m) + V(x).    (4.49)
Schrödinger's equation of motion (4.10) yields

i ¯h ∂⟨x'|At⟩/∂t = ⟨x'|H|At⟩,    (4.50)

where use has been made of the time independence of the |x'⟩. We adopt Schröd-
inger's representation in which the momentum conjugate to the position operator
x_i is written [see Eq. (3.74)]

p_i = −i ¯h ∂/∂x_i.    (4.51)
Thus,

⟨x'| p²/(2 m) |At⟩ = −(¯h²/2 m) ∇'² ⟨x'|At⟩,    (4.52)

where use has been made of Eq. (3.78). Here, ∇' ≡ (∂/∂x', ∂/∂y', ∂/∂z') denotes
the gradient operator written in terms of the position eigenvalues. We can also
write

⟨x'| V(x) = V(x') ⟨x'|,    (4.53)
where V(x') is a scalar function of the position eigenvalues. Combining Eqs. (4.49),
(4.50), (4.52), and (4.53), we obtain

i ¯h ∂⟨x'|At⟩/∂t = −(¯h²/2 m) ∇'² ⟨x'|At⟩ + V(x') ⟨x'|At⟩,    (4.54)

which can also be written

i ¯h ∂ψ(x', t)/∂t = −(¯h²/2 m) ∇'² ψ(x', t) + V(x') ψ(x', t).    (4.55)
This is Schrödinger's famous wave-equation, and is the basis of wave mechanics.
Note, however, that the wave-equation is just one of many possible representa-
tions of quantum mechanics. It just happens to give a type of equation which we
know how to solve. In deriving the wave-equation, we have chosen to represent
the system in terms of the eigenkets of the position operators, instead of those
of the momentum operators. We have also fixed the relative phases of the |x'⟩
according to Schrödinger's representation, so that Eq. (4.51) is valid. Finally, we
have chosen to work in the Schrödinger picture, in which state kets evolve and
dynamical variables are fixed, instead of the Heisenberg picture, in which the
opposite is true.
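A standard way to integrate Eq. (4.55) numerically in one dimension is the split-step Fourier method, which alternates the kinetic term (diagonal in the momentum representation) with the potential term (diagonal in the position representation). The sketch below is my own illustration (NumPy assumed, ¯h = m = 1, an arbitrary harmonic potential), not a method discussed in the text.

import numpy as np

hbar = m = 1.0
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

V = 0.5 * x**2                                    # illustrative harmonic potential
psi = np.exp(-(x - 2.0)**2).astype(complex)       # displaced Gaussian initial state
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

dt, steps = 0.002, 2000
kin = np.exp(-1j * hbar * k**2 * dt / (2 * m))    # full kinetic step in momentum space
pot = np.exp(-1j * V * dt / (2 * hbar))           # half step in the potential

for _ in range(steps):                            # Strang splitting: V/2, T, V/2
    psi = pot * psi
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    psi = pot * psi

print(np.sum(np.abs(psi)**2) * dx)                # ~1: probability is conserved, cf. Eq. (4.67)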
Suppose that the ket |At⟩ is an eigenket of the Hamiltonian belonging to the
eigenvalue H':

H |At⟩ = H' |At⟩.    (4.56)
Schrödinger's equation of motion (4.10) yields

i ¯h d|At⟩/dt = H' |At⟩.    (4.57)
This can be integrated to give

|At⟩ = exp[−i H' (t − t_0)/¯h] |At_0⟩.    (4.58)

Note that |At⟩ only differs from |At_0⟩ by a phase-factor. The direction of the
vector remains fixed in ket space. This suggests that if the system is initially in an
eigenstate of the Hamiltonian then it remains in this state for ever, as long as the
system is undisturbed. Such a state is called a stationary state. The wave-function
of a stationary state satisfies

ψ(x', t) = ψ(x', t_0) exp[−i H' (t − t_0)/¯h].    (4.59)
Substituting the above relation into Schrödinger's wave equation (4.55), we
obtain

−(¯h²/2 m) ∇'² ψ_0(x') + (V(x') − E) ψ_0(x') = 0,    (4.60)

where ψ_0(x') ≡ ψ(x', t_0), and E = H' is the energy of the system. This is
Schrödinger's time-independent wave-equation. A bound state solution of the
above equation, in which the particle is confined within a finite region of space,
satisfies the boundary condition

ψ_0(x') → 0 as |x'| → ∞.    (4.61)
Such a solution is only possible if

E < lim_{|x'|→∞} V(x').    (4.62)
Since it is conventional to set the potential at infinity equal to zero, the above
relation implies that bound states are equivalent to negative energy states. The
boundary condition (4.61) is sufficient to uniquely specify the solution of Eq. (4.60).
The quantity ρ(x', t), defined by

ρ(x', t) = |ψ(x', t)|²,    (4.63)
is termed the probability density. Recall, from Eq. (3.30), that the probability of
observing the particle in some volume element d³x' around position x' is propor-
tional to ρ(x', t) d³x'. The probability is equal to ρ(x', t) d³x' if the wave-function
is properly normalized, so that

∫ ρ(x', t) d³x' = 1.    (4.64)
Schrödinger's time-dependent wave-equation, (4.55), can easily be written in
the form of a conservation equation for the probability density:

∂ρ/∂t + ∇'·j = 0.    (4.65)
The probability current j takes the form

j(x', t) = −(i ¯h/2 m) [ψ* ∇'ψ − (∇'ψ*) ψ] = (¯h/m) Im(ψ* ∇'ψ).    (4.66)
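In one dimension Eq. (4.66) reduces to j = (¯h/m) Im(ψ* ∂ψ/∂x'), which is straightforward to evaluate on a grid. The sketch below (my own illustration, ¯h = m = 1, NumPy assumed) checks that a Gaussian packet with mean momentum p_0 carries the current ρ p_0/m.

import numpy as np

hbar = m = 1.0
x = np.linspace(-20.0, 20.0, 4000)
dx = x[1] - x[0]

p0 = 1.5                                                       # mean momentum of the packet
psi = np.exp(-x**2 / 4 + 1j * p0 * x / hbar)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

rho = np.abs(psi)**2                                           # probability density, Eq. (4.63)
j = (hbar / m) * np.imag(psi.conj() * np.gradient(psi, dx))    # Eq. (4.66) in one dimension

print(np.allclose(j, rho * p0 / m, atol=1e-4))                 # True: j = rho p0/m for this state
print(np.sum(j) * dx)                                          # ~ <p>/m, cf. Eq. (4.68)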
We can integrate Eq. (4.65) over all space, using the divergence theorem, and the
boundary condition ρ → 0 as |x'| → ∞, to obtain
(∂/∂t) ∫ ρ(x', t) d³x' = 0.    (4.67)
Thus, Schrödinger's wave-equation conserves probability. In particular, if the
wave-function starts off properly normalized, according to Eq. (4.64), then it
remains properly normalized at all subsequent times. It is easily demonstrated
that

∫ j(x', t) d³x' = ⟨p⟩_t / m,    (4.68)

where ⟨p⟩_t denotes the expectation value of the momentum evaluated at time t.
Clearly, the probability current is indirectly related to the particle momentum.
In deriving Eq. (4.65) we have, naturally, assumed that the potential V(x') is
real. Suppose, however, that the potential has an imaginary component. In this
case, Eq. (4.65) generalizes to

∂ρ/∂t + ∇'·j = (2 Im(V)/¯h) ρ,    (4.69)
giving

(∂/∂t) ∫ ρ(x', t) d³x' = (2/¯h) Im ∫ V(x') ρ(x', t) d³x'.    (4.70)
Thus, if Im(V) < 0 then the total probability of observing the particle anywhere
in space decreases monotonically with time. Thus, an imaginary potential can
be used to account for the disappearance of a particle. Such a potential is often
employed to model nuclear reactions in which incident particles can be absorbed
by nuclei.
The wave-function can always be written in the form

ψ(x', t) = √ρ(x', t) exp[i S(x', t)/¯h],    (4.71)
where ρ and S are both real functions. The interpretation of ρ as a probability
density has already been given. What is the interpretation of S? Note that

ψ* ∇'ψ = √ρ ∇'(√ρ) + (i/¯h) ρ ∇'S.    (4.72)
It follows from Eq. (4.66) that

j = ρ ∇'S / m.    (4.73)
Thus, the gradient of the phase of the wave-function determines the direction of
the probability current. In particular, the probability current is locally normal to
the contours of the phase-function S.
Let us substitute Eq. (4.71) into Schrödinger's time-dependent wave-equation.
We obtain

−(1/2m) [ ħ² ∇²√ρ + 2 iħ ∇(√ρ)·∇S − √ρ |∇S|² + iħ √ρ ∇²S ] + √ρ V
      = iħ ∂√ρ/∂t − √ρ ∂S/∂t.                                             (4.74)

Let us treat ħ as a small quantity. To lowest order, Eq. (4.74) yields

−∂S(x′, t)/∂t = (1/2m) |∇S(x′, t)|² + V(x′, t) = H(x′, ∇S, t),            (4.75)

where H(x, p, t) is the Hamiltonian of the system. The above equation is known
as the Hamilton-Jacobi equation, and is one of the many forms in which we can
write the equations of classical mechanics. In classical mechanics, S is the action
(i.e., the path-integral of the Lagrangian). Thus, in the limit ħ → 0, wave mechanics
reduces to classical mechanics. It is a good approximation to neglect the terms
involving ħ in Eq. (4.74) provided that

ħ |∇²S| ≪ |∇S|².                                                          (4.76)

Note that, according to Eq. (4.71),

λ̄ = ħ / |∇S|,                                                             (4.77)

where λ̄ is the de Broglie wave-length divided by 2π. The inequality (4.76) is
equivalent to

|∇λ̄| ≪ 1.                                                                (4.78)

In other words, quantum mechanics reduces to classical mechanics whenever
the de Broglie wave-length is small compared to the characteristic distance over
which things (other than the quantum phase) vary. This distance is usually set by
the variation scale-length of the potential.
5 Angular momentum
5.1 Orbital angular momentum
Consider a particle described by the Cartesian coordinates (x, y, z) ≡ r and their
conjugate momenta (p_x, p_y, p_z) ≡ p. The classical definition of the orbital angular
momentum of such a particle about the origin is L = r × p, giving

L_x = y p_z − z p_y,                                                      (5.1)
L_y = z p_x − x p_z,                                                      (5.2)
L_z = x p_y − y p_x.                                                      (5.3)

Let us assume that the operators (L_x, L_y, L_z) ≡ L which represent the compo-
nents of orbital angular momentum in quantum mechanics can be defined in an
analogous manner to the corresponding components of classical angular momen-
tum. In other words, we are going to assume that the above equations specify
the angular momentum operators in terms of the position and linear momentum
operators. Note that L_x, L_y, and L_z are Hermitian, so they represent things which
can, in principle, be measured. Note, also, that there is no ambiguity regard-
ing the order in which operators appear in products on the right-hand sides of
Eqs. (5.1)–(5.3), since all of the products consist of operators which commute.
The fundamental commutation relations satisfied by the position and linear
momentum operators are [see Eqs. (3.23)–(3.25)]

[x_i, x_j] = 0,                                                           (5.4)
[p_i, p_j] = 0,                                                           (5.5)
[x_i, p_j] = iħ δ_ij,                                                     (5.6)

where i and j stand for either x, y, or z. Consider the commutator of the operators
L_x and L_y:

[L_x, L_y] = [(y p_z − z p_y), (z p_x − x p_z)] = y [p_z, z] p_x + x p_y [z, p_z]
           = iħ (−y p_x + x p_y) = iħ L_z.                                (5.7)

The cyclic permutations of the above result yield the fundamental commutation
relations satisfied by the components of an angular momentum:

[L_x, L_y] = iħ L_z,                                                      (5.8)
[L_y, L_z] = iħ L_x,                                                      (5.9)
[L_z, L_x] = iħ L_y.                                                      (5.10)

These can be summed up more succinctly by writing

L × L = iħ L.                                                             (5.11)

The three commutation relations (5.8)–(5.10) are the foundation for the whole
theory of angular momentum in quantum mechanics. Whenever we encounter
three operators having these commutation relations, we know that the dynamical
variables which they represent have identical properties to those of the compo-
nents of an angular momentum (which we are about to derive). In fact, we shall
assume that any three operators which satisfy the commutation relations (5.8)–
(5.10) represent the components of an angular momentum.
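These relations are easy to verify symbolically. The following Python/SymPy
sketch (illustrative only) applies the definitions (5.1)–(5.3), with p → −iħ∇ in
Schrödinger's representation, to an arbitrary test function f and confirms
Eq. (5.8):

    import sympy as sp

    # Symbolic check of Eq. (5.8), [Lx, Ly] = i hbar Lz, using Eqs. (5.1)-(5.3)
    # with p -> -i hbar grad acting on an arbitrary test function f(x, y, z).
    x, y, z, hbar = sp.symbols('x y z hbar', real=True)
    f = sp.Function('f')(x, y, z)

    Lx = lambda g: -sp.I*hbar*(y*sp.diff(g, z) - z*sp.diff(g, y))
    Ly = lambda g: -sp.I*hbar*(z*sp.diff(g, x) - x*sp.diff(g, z))
    Lz = lambda g: -sp.I*hbar*(x*sp.diff(g, y) - y*sp.diff(g, x))

    print(sp.simplify(Lx(Ly(f)) - Ly(Lx(f)) - sp.I*hbar*Lz(f)))   # -> 0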
Suppose that there are N particles in the system, with angular momentum
vectors L_i (where i runs from 1 to N). Each of these vectors satisfies Eq. (5.11),
so that

L_i × L_i = iħ L_i.                                                       (5.12)

However, we expect the angular momentum operators belonging to different par-
ticles to commute, since they represent different degrees of freedom of the sys-
tem. So, we can write

L_i × L_j + L_j × L_i = 0,                                                (5.13)

for i ≠ j. Consider the total angular momentum of the system, L = Σ_{i=1}^N L_i. It is
clear from Eqs. (5.12) and (5.13) that

L × L = Σ_{i=1}^N L_i × Σ_{j=1}^N L_j
      = Σ_{i=1}^N L_i × L_i + (1/2) Σ_{i≠j} (L_i × L_j + L_j × L_i)
      = iħ Σ_{i=1}^N L_i = iħ L.                                          (5.14)

Thus, the sum of two or more angular momentum vectors satisfies the same
commutation relation as a primitive angular momentum vector. In particular,
the total angular momentum of the system satisfies the commutation relation
(5.11).
The immediate conclusion which can be drawn from the commutation rela-
tions (5.8)–(5.10) is that the three components of an angular momentum vector
cannot be specified (or measured) simultaneously. In fact, once we have speci-
fied one component, the values of the other two components become uncertain.
It is conventional to specify the z-component, L_z.

Consider the magnitude squared of the angular momentum vector, L² ≡ L_x² +
L_y² + L_z². The commutator of L² and L_z is written

[L², L_z] = [L_x², L_z] + [L_y², L_z] + [L_z², L_z].                      (5.15)

It is easily demonstrated that

[L_x², L_z] = −iħ (L_x L_y + L_y L_x),                                    (5.16)
[L_y², L_z] = +iħ (L_x L_y + L_y L_x),                                    (5.17)
[L_z², L_z] = 0,                                                          (5.18)

so

[L², L_z] = 0.                                                            (5.19)

Since there is nothing special about the z-axis, we conclude that L² also commutes
with L_x and L_y. It is clear from Eqs. (5.8)–(5.10) and (5.19) that the best we can
do in quantum mechanics is to specify the magnitude of an angular momentum
vector along with one of its components (by convention, the z-component).

It is convenient to define the shift operators L_+ and L_−:

L_+ = L_x + i L_y,                                                        (5.20)
L_− = L_x − i L_y.                                                        (5.21)

Note that

[L_+, L_z] = −ħ L_+,                                                      (5.22)
[L_−, L_z] = +ħ L_−,                                                      (5.23)
[L_+, L_−] = 2 ħ L_z.                                                     (5.24)

Note, also, that both shift operators commute with L².
5.2 Eigenvalues of angular momentum
Suppose that the simultaneous eigenkets of L² and L_z are completely specified by
two quantum numbers, l and m. These kets are denoted |l, m⟩. The quantum
number m is defined by

L_z |l, m⟩ = m ħ |l, m⟩.                                                  (5.25)

Thus, m is the eigenvalue of L_z divided by ħ. It is possible to write such an
equation because ħ has the dimensions of angular momentum. Note that m is a
real number, since L_z is an Hermitian operator.

We can write

L² |l, m⟩ = f(l, m) ħ² |l, m⟩,                                            (5.26)

without loss of generality, where f(l, m) is some real dimensionless function of l
and m. Later on, we will show that f(l, m) = l (l + 1). Now,

⟨l, m| L² − L_z² |l, m⟩ = ⟨l, m| f(l, m) ħ² − m² ħ² |l, m⟩ = [f(l, m) − m²] ħ²,   (5.27)

assuming that the |l, m⟩ have unit norms. However,

⟨l, m| L² − L_z² |l, m⟩ = ⟨l, m| L_x² + L_y² |l, m⟩
                       = ⟨l, m| L_x² |l, m⟩ + ⟨l, m| L_y² |l, m⟩.         (5.28)

It is easily demonstrated that

⟨A| ξ² |A⟩ ≥ 0,                                                           (5.29)

where |A⟩ is a general ket, and ξ is an Hermitian operator. The proof follows
from the observation that

⟨A| ξ² |A⟩ = ⟨A| ξ† ξ |A⟩ = ⟨B|B⟩,                                         (5.30)

where |B⟩ = ξ |A⟩, plus the fact that ⟨B|B⟩ ≥ 0 for a general ket |B⟩ [see Eq. (2.21)].
It follows from Eqs. (5.27)–(5.29) that

m² ≤ f(l, m).                                                             (5.31)
Consider the effect of the shift operator L_+ on the eigenket |l, m⟩. It is easily
demonstrated that

L² (L_+ |l, m⟩) = ħ² f(l, m) (L_+ |l, m⟩),                                (5.32)

where use has been made of Eq. (5.26), plus the fact that L² and L_z commute. It
follows that the ket L_+ |l, m⟩ has the same eigenvalue of L² as the ket |l, m⟩. Thus,
the shift operator L_+ does not affect the magnitude of the angular momentum of
any eigenket it acts upon. Note that

L_z L_+ |l, m⟩ = (L_+ L_z + [L_z, L_+]) |l, m⟩ = (L_+ L_z + ħ L_+) |l, m⟩
             = (m + 1) ħ L_+ |l, m⟩,                                      (5.33)

where use has been made of Eq. (5.22). The above equation implies that L_+ |l, m⟩
is proportional to |l, m+1⟩. We can write

L_+ |l, m⟩ = c^+_{l,m} ħ |l, m+1⟩,                                        (5.34)

where c^+_{l,m} is a number. It is clear that when the operator L_+ acts on a simulta-
neous eigenstate of L² and L_z, the eigenvalue of L² remains unchanged, but the
eigenvalue of L_z is increased by ħ. For this reason, L_+ is called a raising operator.

Using similar arguments to those given above, it is possible to demonstrate
that

L_− |l, m⟩ = c^−_{l,m} ħ |l, m−1⟩.                                        (5.35)

Hence, L_− is called a lowering operator.
The shift operators step the value of m up and down by unity each time they
operate on one of the simultaneous eigenkets of L² and L_z. It would appear, at
first sight, that any value of m can be obtained by applying the shift operators a
sufficient number of times. However, according to Eq. (5.31), there is a definite
upper bound to the values that m² can take. This bound is determined by the
eigenvalue of L² [see Eq. (5.26)]. It follows that there is a maximum and a
minimum possible value which m can take. Suppose that we attempt to raise
the value of m above its maximum value m_max. Since there is no state with
m > m_max, we must have

L_+ |l, m_max⟩ = |0⟩.                                                     (5.36)

This implies that

L_− L_+ |l, m_max⟩ = |0⟩.                                                 (5.37)

However,

L_− L_+ = L_x² + L_y² + i [L_x, L_y] = L² − L_z² − ħ L_z,                 (5.38)

so Eq. (5.37) yields

(L² − L_z² − ħ L_z) |l, m_max⟩ = |0⟩.                                     (5.39)

The above equation can be rearranged to give

L² |l, m_max⟩ = (L_z² + ħ L_z) |l, m_max⟩ = m_max (m_max + 1) ħ² |l, m_max⟩.   (5.40)

Comparison of this equation with Eq. (5.26) yields the result

f(l, m_max) = m_max (m_max + 1).                                          (5.41)

But, when L_− operates on |l, m_max⟩ it generates |l, m_max − 1⟩, |l, m_max − 2⟩, etc.
Since the lowering operator does not change the eigenvalue of L², all of these
states must correspond to the same value of f, namely m_max (m_max + 1). Thus,

L² |l, m⟩ = m_max (m_max + 1) ħ² |l, m⟩.                                  (5.42)

At this stage, we can give the unknown quantum number l the value m_max, with-
out loss of generality. We can also write the above equation in the form

L² |l, m⟩ = l (l + 1) ħ² |l, m⟩.                                          (5.43)

It is easily seen that

L_− L_+ |l, m⟩ = (L² − L_z² − ħ L_z) |l, m⟩ = ħ² [l (l + 1) − m (m + 1)] |l, m⟩.   (5.44)

Thus,

⟨l, m| L_− L_+ |l, m⟩ = ħ² [l (l + 1) − m (m + 1)].                       (5.45)
However, we also know that

⟨l, m| L_− L_+ |l, m⟩ = ⟨l, m| L_− ħ c^+_{l,m} |l, m+1⟩ = ħ² c^+_{l,m} c^−_{l,m+1},   (5.46)

where use has been made of Eqs. (5.34) and (5.35). It follows that

c^+_{l,m} c^−_{l,m+1} = l (l + 1) − m (m + 1).                            (5.47)

Consider the following:

⟨l, m| L_− |l, m+1⟩ = ⟨l, m| L_x |l, m+1⟩ − i ⟨l, m| L_y |l, m+1⟩
                   = ⟨l, m+1| L_x |l, m⟩* − i ⟨l, m+1| L_y |l, m⟩*
                   = (⟨l, m+1| L_x |l, m⟩ + i ⟨l, m+1| L_y |l, m⟩)*
                   = ⟨l, m+1| L_+ |l, m⟩*,                                (5.48)

where use has been made of the fact that L_x and L_y are Hermitian. The above
equation reduces to

c^−_{l,m+1} = (c^+_{l,m})*                                                (5.49)

with the aid of Eqs. (5.34) and (5.35).

Equations (5.47) and (5.49) can be combined to give

|c^+_{l,m}|² = l (l + 1) − m (m + 1).                                     (5.50)

The solution of the above equation is

c^+_{l,m} = √[l (l + 1) − m (m + 1)].                                     (5.51)

Note that c^+_{l,m} is undetermined to an arbitrary phase-factor [i.e., we can replace
c^+_{l,m}, given above, by c^+_{l,m} exp(i γ), where γ is real, and we still satisfy
Eq. (5.50)]. We have made the arbitrary, but convenient, choice that c^+_{l,m} is real
and positive. This is equivalent to choosing the relative phases of the eigenkets
|l, m⟩. According to Eq. (5.49),

c^−_{l,m} = (c^+_{l,m−1})* = √[l (l + 1) − m (m − 1)].                    (5.52)

We have already seen that the inequality (5.31) implies that there is a maxi-
mum and a minimum possible value of m. The maximum value of m is denoted
l. What is the minimum value? Suppose that we try to lower the value of m
below its minimum value m_min. Since there is no state with m < m_min, we must
have

L_− |l, m_min⟩ = 0.                                                       (5.53)

According to Eq. (5.35), this implies that

c^−_{l,m_min} = 0.                                                        (5.54)
It can be seen from Eq. (5.52) that m_min = −l. We conclude that m can take a
“ladder” of discrete values, each rung differing from its immediate neighbours by
unity. The top rung is l, and the bottom rung is −l. There are only two possible
choices for l. Either it is an integer (e.g., l = 2, which allows m to take the values
−2, −1, 0, 1, 2), or it is a half-integer (e.g., l = 3/2, which allows m to take the
values −3/2, −1/2, 1/2, 3/2). We will prove in the next section that an orbital
angular momentum can only take integer values of l.

In summary, using just the fundamental commutation relations (5.8)–(5.10),
plus the fact that L_x, L_y, and L_z are Hermitian operators, we have shown that the
eigenvalues of L² ≡ L_x² + L_y² + L_z² can be written l (l + 1) ħ², where l is an integer,
or a half-integer. We have also demonstrated that the eigenvalues of L_z can only
take the values m ħ, where m lies in the range −l, −l + 1, …, l − 1, l. Let |l, m⟩
denote a properly normalized simultaneous eigenket of L² and L_z, belonging to
the eigenvalues l (l + 1) ħ² and m ħ, respectively. We have shown that

L_+ |l, m⟩ = √[l (l + 1) − m (m + 1)] ħ |l, m+1⟩,                         (5.55)
L_− |l, m⟩ = √[l (l + 1) − m (m − 1)] ħ |l, m−1⟩,                         (5.56)

where L_± = L_x ± i L_y are the so-called shift operators.
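Equations (5.25), (5.43), and (5.55)–(5.56) completely determine the matrix
elements of the angular momentum operators in the |l, m⟩ basis. The following
Python/NumPy sketch (illustrative only; ħ = 1, basis ordered m = l, …, −l)
builds these matrices for a given l and confirms Eqs. (5.8) and (5.43):

    import numpy as np

    def angular_momentum_matrices(l, hbar=1.0):
        """Matrices of Lz, L+, L- in the |l, m> basis, m = l, l-1, ..., -l,
        built directly from Eqs. (5.25) and (5.55)-(5.56)."""
        m = np.arange(l, -l - 1, -1)
        Lz = hbar * np.diag(m)
        cplus = np.sqrt(l*(l + 1) - m[1:]*(m[1:] + 1))   # <l,m+1|L+|l,m>/hbar
        Lp = hbar * np.diag(cplus, k=1)
        Lm = Lp.conj().T
        return Lz, Lp, Lm

    l = 2
    Lz, Lp, Lm = angular_momentum_matrices(l)
    Lx, Ly = (Lp + Lm)/2, (Lp - Lm)/(2j)

    print(np.allclose(Lx@Ly - Ly@Lx, 1j*Lz))               # Eq. (5.8), True
    L2 = Lx@Lx + Ly@Ly + Lz@Lz
    print(np.allclose(L2, l*(l + 1)*np.eye(2*l + 1)))      # Eq. (5.43), True

The same construction works unchanged for half-integer l, anticipating the spin
matrices of Sect. 5.13.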
5.3 Rotation operators
Consider a particle described by the spherical polar coordinates (r, θ, ϕ). The
classical momentum conjugate to the azimuthal angle ϕ is the z-component of
angular momentum, L_z. According to Sect. 3.5, in quantum mechanics we can
always adopt Schrödinger's representation, for which ket space is spanned by the
simultaneous eigenkets of the position operators r, θ, and ϕ, and L_z takes the
form

L_z = −iħ ∂/∂ϕ.                                                           (5.57)

We can do this because there is nothing in Sect. 3.5 which specifies that we
have to use Cartesian coordinates: the representation (3.74) works for any well-
defined set of coordinates.

Consider an operator R(∆ϕ) which rotates the system an angle ∆ϕ about the z-
axis. This operator is very similar to the operator D(∆x), introduced in Sect. 3.8,
which translates the system a distance ∆x along the x-axis. We were able to
demonstrate in Sect. 3.8 that

p_x = iħ lim_{δx→0} [D(δx) − 1] / δx,                                     (5.58)

where p_x is the linear momentum conjugate to x. There is nothing in our deriva-
tion of this result which specifies that x has to be a Cartesian coordinate. Thus,
the result should apply just as well to an angular coordinate. We conclude that

L_z = iħ lim_{δϕ→0} [R(δϕ) − 1] / δϕ.                                     (5.59)

According to Eq. (5.59), we can write

R(δϕ) = 1 − i L_z δϕ/ħ                                                    (5.60)

in the limit δϕ → 0. In other words, the angular momentum operator L_z can be
used to rotate the system about the z-axis by an infinitesimal amount. We say
that L_z is the generator of rotations about the z-axis. The above equation implies
that

R(∆ϕ) = lim_{N→∞} [1 − i (∆ϕ/N) L_z/ħ]^N,                                 (5.61)

which reduces to

R(∆ϕ) = exp(−i L_z ∆ϕ/ħ).                                                 (5.62)
Note that R(∆ϕ) has all of the properties we would expect of a rotation operator:

R(0) = 1,                                                                 (5.63)
R(∆ϕ) R(−∆ϕ) = 1,                                                         (5.64)
R(∆ϕ_1) R(∆ϕ_2) = R(∆ϕ_1 + ∆ϕ_2).                                         (5.65)

Suppose that the system is in a simultaneous eigenstate of L² and L_z. As before,
this state is represented by the eigenket |l, m⟩, where the eigenvalue of L² is
l (l + 1) ħ², and the eigenvalue of L_z is m ħ. We expect the wave-function to
remain unaltered if we rotate the system through 2π radians about the z-axis.
Thus,

R(2π) |l, m⟩ = exp(−i L_z 2π/ħ) |l, m⟩ = exp(−i 2π m) |l, m⟩ = |l, m⟩.      (5.66)

We conclude that m must be an integer. This implies, from the previous section,
that l must also be an integer. Thus, orbital angular momentum can only take on
integer values of the quantum numbers l and m.
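This distinction between integer and half-integer quantum numbers is easy to
see numerically. The short Python sketch below (illustrative; ħ = 1, using the
diagonal matrix representation of L_z from the previous sketch) exponentiates
L_z as in Eq. (5.62):

    import numpy as np
    from scipy.linalg import expm

    # Rotation operator R(phi) = exp(-i Lz phi / hbar), Eq. (5.62), hbar = 1.
    def Lz_matrix(l):
        return np.diag(np.arange(l, -l - 1, -1.0))

    R = lambda l, phi: expm(-1j * Lz_matrix(l) * phi)

    print(np.allclose(R(1, 2*np.pi), np.eye(3)))      # integer l: R(2 pi) = 1
    print(np.allclose(R(0.5, 2*np.pi), -np.eye(2)))   # half-integer l: R(2 pi) = -1
                                                      # (anticipating Sect. 5.9)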
Consider the action of the rotation operator R(∆ϕ) on an eigenstate possessing
zero angular momentum about the z-axis (i.e., an m = 0 state). We have

R(∆ϕ) |l, 0⟩ = exp(0) |l, 0⟩ = |l, 0⟩.                                    (5.67)

Thus, the eigenstate is invariant to rotations about the z-axis. Clearly, its wave-
function must be symmetric about the z-axis.

There is nothing special about the z-axis, so we can write

R_x(∆ϕ_x) = exp(−i L_x ∆ϕ_x/ħ),                                           (5.68)
R_y(∆ϕ_y) = exp(−i L_y ∆ϕ_y/ħ),                                           (5.69)
R_z(∆ϕ_z) = exp(−i L_z ∆ϕ_z/ħ),                                           (5.70)

by analogy with Eq. (5.62). Here, R_x(∆ϕ_x) denotes an operator which rotates the
system by an angle ∆ϕ_x about the x-axis, etc. Suppose that the system is in an
eigenstate of zero overall orbital angular momentum (i.e., an l = 0 state). We
know that the system is also in an eigenstate of zero orbital angular momentum
about any particular axis. This follows because l = 0 implies m = 0, according
to the previous section, and we can choose the z-axis to point in any direction.
Thus,

R_x(∆ϕ_x) |0, 0⟩ = exp(0) |0, 0⟩ = |0, 0⟩,                                 (5.71)
R_y(∆ϕ_y) |0, 0⟩ = exp(0) |0, 0⟩ = |0, 0⟩,                                 (5.72)
R_z(∆ϕ_z) |0, 0⟩ = exp(0) |0, 0⟩ = |0, 0⟩.                                 (5.73)

Clearly, a zero angular momentum state is invariant to rotations about any axis.
Such a state must possess a spherically symmetric wave-function.

Note that a rotation about the x-axis does not commute with a rotation about
the y-axis. In other words, if the system is rotated an angle ∆ϕ_x about the x-axis,
and then ∆ϕ_y about the y-axis, it ends up in a different state to that obtained
by rotating an angle ∆ϕ_y about the y-axis, and then ∆ϕ_x about the x-axis. In
quantum mechanics, this implies that R_y(∆ϕ_y) R_x(∆ϕ_x) ≠ R_x(∆ϕ_x) R_y(∆ϕ_y), or
L_y L_x ≠ L_x L_y [see Eqs. (5.68)–(5.70)]. Thus, the noncommuting nature of the
angular momentum operators is a direct consequence of the fact that rotations
do not commute.
5.4 Eigenfunctions of orbital angular momentum
In Cartesian coordinates, the three components of orbital angular momentum can
be written

L_x = −iħ (y ∂/∂z − z ∂/∂y),                                              (5.74)
L_y = −iħ (z ∂/∂x − x ∂/∂z),                                              (5.75)
L_z = −iħ (x ∂/∂y − y ∂/∂x),                                              (5.76)

using the Schrödinger representation. Transforming to standard spherical polar
coordinates,

x = r sin θ cos ϕ,                                                        (5.77)
y = r sin θ sin ϕ,                                                        (5.78)
z = r cos θ,                                                              (5.79)

we obtain

L_x = iħ (sin ϕ ∂/∂θ + cot θ cos ϕ ∂/∂ϕ),                                 (5.80)
L_y = −iħ (cos ϕ ∂/∂θ − cot θ sin ϕ ∂/∂ϕ),                                (5.81)
L_z = −iħ ∂/∂ϕ.                                                           (5.82)

Note that Eq. (5.82) accords with Eq. (5.57). The shift operators L_± = L_x ± i L_y
become

L_± = ±ħ exp(±i ϕ) (∂/∂θ ± i cot θ ∂/∂ϕ).                                  (5.83)
Now,

L² = L_x² + L_y² + L_z² = L_z² + (L_+ L_− + L_− L_+)/2,                   (5.84)

so

L² = −ħ² [ (1/sin θ) ∂/∂θ (sin θ ∂/∂θ) + (1/sin²θ) ∂²/∂ϕ² ].              (5.85)

The eigenvalue problem for L² takes the form

L² ψ = λ ħ² ψ,                                                            (5.86)

where ψ(r, θ, ϕ) is the wave-function, and λ is a number. Let us write

ψ(r, θ, ϕ) = R(r) Y(θ, ϕ).                                                (5.87)

Equation (5.86) reduces to

[ (1/sin θ) ∂/∂θ (sin θ ∂/∂θ) + (1/sin²θ) ∂²/∂ϕ² ] Y + λ Y = 0,           (5.88)

where use has been made of Eq. (5.85). As is well-known, square integrable
solutions to this equation only exist when λ takes the values l (l + 1), where l is
an integer. These solutions are known as spherical harmonics, and can be written

Y_l^m(θ, ϕ) = √[ (2l + 1)/(4π) × (l − m)!/(l + m)! ] (−1)^m e^{i m ϕ} P_l^m(cos θ),   (5.89)
where m is a positive integer lying in the range 0 ≤ m ≤ l. Here, P_l^m(ξ) is an
associated Legendre function satisfying the equation

d/dξ [ (1 − ξ²) dP_l^m/dξ ] − m²/(1 − ξ²) P_l^m + l (l + 1) P_l^m = 0.    (5.90)

We define

Y_l^{−m} = (−1)^m (Y_l^m)*,                                               (5.91)

which allows m to take the negative values −l ≤ m < 0. The spherical harmonics
are orthogonal functions, and are properly normalized with respect to integration
over the entire solid angle:

∫_0^π ∫_0^{2π} Y_l^{m*}(θ, ϕ) Y_{l′}^{m′}(θ, ϕ) sin θ dθ dϕ = δ_{ll′} δ_{mm′}.   (5.92)

The spherical harmonics also form a complete set for representing general func-
tions of θ and ϕ.
By definition,

L² Y_l^m = l (l + 1) ħ² Y_l^m,                                            (5.93)

where l is an integer. It follows from Eqs. (5.82) and (5.89) that

L_z Y_l^m = m ħ Y_l^m,                                                    (5.94)

where m is an integer lying in the range −l ≤ m ≤ l. Thus, the wave-function
ψ(r, θ, ϕ) = R(r) Y_l^m(θ, ϕ), where R is a general function, has all of the expected
features of the wave-function of a simultaneous eigenstate of L² and L_z belonging
to the quantum numbers l and m. The well-known formula

dP_l^m/dξ = [1/√(1 − ξ²)] P_l^{m+1} − [m ξ/(1 − ξ²)] P_l^m
          = −[(l + m)(l − m + 1)/√(1 − ξ²)] P_l^{m−1} + [m ξ/(1 − ξ²)] P_l^m   (5.95)

can be combined with Eqs. (5.83) and (5.89) to give

L_+ Y_l^m = √[l (l + 1) − m (m + 1)] ħ Y_l^{m+1},                         (5.96)
L_− Y_l^m = √[l (l + 1) − m (m − 1)] ħ Y_l^{m−1}.                         (5.97)

These equations are equivalent to Eqs. (5.55)–(5.56). Note that a spherical har-
monic wave-function is symmetric about the z-axis (i.e., independent of ϕ) when-
ever m = 0, and is spherically symmetric whenever l = 0 (since Y_0^0 = 1/√(4π)).
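Equation (5.93) can also be checked symbolically. The following Python/SymPy
sketch (illustrative only; ħ = 1, with l and m chosen arbitrarily) applies the
differential operator of Eq. (5.85) to an explicit spherical harmonic:

    import sympy as sp

    # Check Eq. (5.93), L^2 Y_l^m = l(l+1) hbar^2 Y_l^m, with hbar = 1.
    theta, phi = sp.symbols('theta phi')
    l, m = 2, 1
    Y = sp.Ynm(l, m, theta, phi).expand(func=True)    # explicit spherical harmonic

    L2Y = -(sp.diff(sp.sin(theta)*sp.diff(Y, theta), theta)/sp.sin(theta)
            + sp.diff(Y, phi, 2)/sp.sin(theta)**2)    # operator of Eq. (5.85)
    print(sp.simplify(L2Y - l*(l + 1)*Y))             # -> 0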
In summary, by solving directly for the eigenfunctions of L² and L_z in Schrödin-
ger's representation, we have been able to reproduce all of the results of Sect. 5.2.
Nevertheless, the results of Sect. 5.2 are more general than those obtained in this
section, because they still apply when the quantum number l takes on half-integer
values.
5.5 Motion in a central field
Consider a particle of mass M moving in a spherically symmetric potential. The
Hamiltonian takes the form

H = p²/(2M) + V(r).                                                       (5.98)

Adopting Schrödinger's representation, we can write p = −iħ ∇. Hence,

H = −(ħ²/2M) ∇² + V(r).                                                   (5.99)

When written in spherical polar coordinates, the above equation becomes

H = −(ħ²/2M) [ (1/r²) ∂/∂r (r² ∂/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂/∂θ)
      + (1/(r² sin²θ)) ∂²/∂ϕ² ] + V(r).                                   (5.100)

Comparing this equation with Eq. (5.85), we find that

H = (ħ²/2M) [ −(1/r²) ∂/∂r (r² ∂/∂r) + L²/(ħ² r²) ] + V(r).               (5.101)
Now, we know that the three components of angular momentum commute
with L² (see Sect. 5.1). We also know, from Eqs. (5.80)–(5.82), that L_x, L_y,
and L_z take the form of partial derivative operators of the angular coordinates,
when written in terms of spherical polar coordinates using Schrödinger's repre-
sentation. It follows from Eq. (5.101) that all three components of the angular
momentum commute with the Hamiltonian:

[L, H] = 0.                                                               (5.102)

It is also easily seen that L² commutes with the Hamiltonian:

[L², H] = 0.                                                              (5.103)

According to Sect. 4.2, the previous two equations ensure that the angular mo-
mentum L and its magnitude squared L² are both constants of the motion. This
is as expected for a spherically symmetric potential.

Consider the energy eigenvalue problem

H ψ = E ψ,                                                                (5.104)

where E is a number. Since L² and L_z commute with each other and the Hamil-
tonian, it is always possible to represent the state of the system in terms of the
simultaneous eigenstates of L², L_z, and H. But, we already know that the most
general form for the wave-function of a simultaneous eigenstate of L² and L_z is
(see previous section)

ψ(r, θ, ϕ) = R(r) Y_l^m(θ, ϕ).                                            (5.105)

Substituting Eq. (5.105) into Eq. (5.101), and making use of Eq. (5.93), we ob-
tain

[ (ħ²/2M) ( −(1/r²) d/dr (r² d/dr) + l (l + 1)/r² ) + V(r) − E ] R = 0.   (5.106)

This is a Sturm-Liouville equation for the function R(r). We know, from the gen-
eral properties of this type of equation, that if R(r) is required to be well-behaved
at r = 0 and as r → ∞ then solutions only exist for a discrete set of values of E.
These are the energy eigenvalues. In general, the energy eigenvalues depend on
the quantum number l, but are independent of the quantum number m.
5.6 Energy levels of the hydrogen atom
Consider a hydrogen atom, for which the potential takes the specific form

V(r) = −e²/(4π ε_0 r).                                                    (5.107)

The radial eigenfunction R(r) satisfies Eq. (5.106), which can be written

[ (ħ²/2µ) ( −(1/r²) d/dr (r² d/dr) + l (l + 1)/r² ) − e²/(4π ε_0 r) − E ] R = 0.   (5.108)

Here, µ = m_e m_p/(m_e + m_p) is the reduced mass, which takes into account the fact
that the electron (of mass m_e) and the proton (of mass m_p) both rotate about a
common centre, which is equivalent to a particle of mass µ rotating about a fixed
point. Let us write the product r R(r) as the function P(r). The above equation
transforms to

d²P/dr² − (2µ/ħ²) [ l (l + 1) ħ²/(2µ r²) − e²/(4π ε_0 r) − E ] P = 0,     (5.109)

which is the one-dimensional Schrödinger equation for a particle of mass µ mov-
ing in the effective potential

V_eff(r) = −e²/(4π ε_0 r) + l (l + 1) ħ²/(2µ r²).                          (5.110)

The effective potential has a simple physical interpretation. The first part is the
attractive Coulomb potential, and the second part corresponds to the repulsive
centrifugal force.
Let

a = √[ −ħ²/(2µ E) ],                                                      (5.111)

and y = r/a, with

P(r) = f(y) exp(−y).                                                      (5.112)

Here, it is assumed that the energy eigenvalue E is negative. Equation (5.109)
transforms to

[ d²/dy² − 2 d/dy − l (l + 1)/y² + 2µ e² a/(4π ε_0 ħ² y) ] f = 0.         (5.113)
Let us look for a power-law solution of the form

f(y) = Σ_n c_n y^n.                                                       (5.114)

Substituting this solution into Eq. (5.113), we obtain

Σ_n c_n [ n (n − 1) y^{n−2} − 2 n y^{n−1} − l (l + 1) y^{n−2}
      + (2µ e² a/(4π ε_0 ħ²)) y^{n−1} ] = 0.                              (5.115)

Equating the coefficients of y^{n−2} gives

c_n [ n (n − 1) − l (l + 1) ] = c_{n−1} [ 2 (n − 1) − 2µ e² a/(4π ε_0 ħ²) ].   (5.116)

Now, the power-law series (5.114) must terminate at small n, at some positive
value of n, otherwise f(y) behaves unphysically as y → 0. This is only possible if
[n_min (n_min − 1) − l (l + 1)] = 0, where the first term in the series is c_{n_min} y^{n_min}.
There are two possibilities: n_min = −l or n_min = l + 1. The former predicts unphysical
behaviour of the wave-function at y = 0. Thus, we conclude that n_min = l + 1.
Note that for an l = 0 state there is a finite probability of finding the electron at
the nucleus, whereas for an l > 0 state there is zero probability of finding the
electron at the nucleus (i.e., |ψ|² = 0 at r = 0, except when l = 0). Note, also,
that it is only possible to obtain sensible behaviour of the wave-function as r → 0
if l is an integer.
For large values of y, the ratio of successive terms in the series (5.114) is

c_n y / c_{n−1} = 2 y / n,                                                (5.117)

according to Eq. (5.116). This is the same as the ratio of successive terms in the
series

Σ_n (2 y)^n / n!,                                                         (5.118)

which converges to exp(2 y). We conclude that f(y) → exp(2 y) as y → ∞. It
follows from Eq. (5.112) that R(r) → exp(r/a)/r as r → ∞. This does not cor-
respond to physically acceptable behaviour of the wave-function, since ∫ |ψ|² dV
must be finite. The only way in which we can avoid this unphysical behaviour is
if the series (5.114) terminates at some maximum value of n. According to the
recursion relation (5.116), this is only possible if

µ e² a/(4π ε_0 ħ²) = n,                                                   (5.119)

where the last term in the series is c_n y^n. It follows from Eq. (5.111) that the
energy eigenvalues are quantized, and can only take the values

E = −µ e⁴/(32 π² ε_0² ħ² n²).                                             (5.120)

Here, n is a positive integer which must exceed the quantum number l, otherwise
there would be no terms in the series (5.114).

It is clear that the wave-function for a hydrogen atom can be written

ψ(r, θ, ϕ) = R(r/a) Y_l^m(θ, ϕ),                                          (5.121)

where

a = 4π ε_0 ħ² n/(µ e²) = 5.3 × 10⁻¹¹ n meters,                            (5.122)

and R(x) is a well-behaved solution of the differential equation

[ (1/x²) d/dx (x² d/dx) − l (l + 1)/x² + 2 n/x − 1 ] R = 0.               (5.123)

Finally, the Y_l^m are spherical harmonics. The restrictions on the quantum numbers
are |m| ≤ l < n. Here, n is a positive integer, l is a non-negative integer, and m
is an integer.

The ground state of hydrogen corresponds to n = 1. The only permissible
values of the other quantum numbers are l = 0 and m = 0. Thus, the ground
state is a spherically symmetric, zero angular momentum state. The energy of
the ground state is

E_0 = −µ e⁴/(32 π² ε_0² ħ²) = −13.6 electron volts.                       (5.124)
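As a rough numerical check (a Python sketch using the CODATA values supplied
by scipy.constants), Eqs. (5.122) and (5.124) give the familiar Bohr radius and
ground-state energy:

    from scipy.constants import e, hbar, epsilon_0, m_e, m_p, pi

    mu = m_e*m_p/(m_e + m_p)                        # reduced mass
    E1 = -mu*e**4/(32*pi**2*epsilon_0**2*hbar**2)   # Eq. (5.124), in joules
    a1 = 4*pi*epsilon_0*hbar**2/(mu*e**2)           # Eq. (5.122) with n = 1

    print(E1/e)      # approximately -13.6 (electron volts)
    print(a1)        # approximately 5.3e-11 (meters)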
The next energy level corresponds to n = 2. The other quantum numbers are
allowed to take the values l = 0, m = 0 or l = 1, m = −1, 0, 1. Thus, three of the
n = 2 states possess non-zero angular momentum. Note that the energy levels given
in Eq. (5.120) are independent of the quantum number l, despite the fact that l
appears in the radial eigenfunction equation (5.123). This is a special property
of a 1/r Coulomb potential.

In addition to the quantized negative energy states of the hydrogen atom, which
we have just found, there is also a continuum of unbound positive energy states.
5.7 Spin angular momentum
Up to now, we have tacitly assumed that the state of a particle in quantum me-
chanics can be completely specified by giving the wave-function ψ as a function
of the spatial coordinates x, y, and z. Unfortunately, there is a wealth of experi-
mental evidence which suggests that this simplistic approach is incomplete.

Consider an isolated system at rest, and let the eigenvalue of its total angular
momentum be j (j + 1) ħ². According to the theory of orbital angular momentum
outlined in Sects. 5.4 and 5.5, there are two possibilities. For a system consisting
of a single particle, j = 0. For a system consisting of two (or more) particles, j is
a non-negative integer. However, this does not agree with observations, because
we often find systems which appear to be structureless, and yet have j ≠ 0. Even
worse, systems where j has half-integer values abound in nature. In order to
explain this apparent discrepancy between theory and experiments, Goudsmit
and Uhlenbeck (in 1925) introduced the concept of an internal, purely quantum
mechanical, angular momentum called spin. For a particle with spin, the total
angular momentum in the rest frame is non-vanishing.

Let us denote the three components of the spin angular momentum of a par-
ticle by the Hermitian operators (S_x, S_y, S_z) ≡ S. We assume that these operators
obey the fundamental commutation relations (5.8)–(5.10) for the components of
an angular momentum. Thus, we can write

S × S = iħ S.                                                             (5.125)
We can also define the operator

S² = S_x² + S_y² + S_z².                                                  (5.126)

According to the quite general analysis of Sect. 5.1,

[S, S²] = 0.                                                              (5.127)

Thus, it is possible to find simultaneous eigenstates of S² and S_z. These are
denoted |s, s_z⟩, where

S_z |s, s_z⟩ = s_z ħ |s, s_z⟩,                                             (5.128)
S² |s, s_z⟩ = s (s + 1) ħ² |s, s_z⟩.                                       (5.129)

According to the equally general analysis of Sect. 5.2, the quantum number s can,
in principle, take integer or half-integer values, and the quantum number s_z can
only take the values s, s − 1, …, −s + 1, −s.

Spin angular momentum clearly has many properties in common with orbital
angular momentum. However, there is one vitally important difference. Spin
angular momentum operators cannot be expressed in terms of position and mo-
mentum operators, as in Eqs. (5.1)–(5.3), since this identification depends on
an analogy with classical mechanics, and the concept of spin is purely quantum
mechanical: i.e., it has no analogue in classical physics. Consequently, the re-
striction that the quantum number of the overall angular momentum must take
integer values is lifted for spin angular momentum, since this restriction (found
in Sects. 5.3 and 5.4) depends on Eqs. (5.1)–(5.3). In other words, the quantum
number s is allowed to take half-integer values.

Consider a spin one-half particle, for which

S_z |±⟩ = ±(ħ/2) |±⟩,                                                     (5.130)
S² |±⟩ = (3ħ²/4) |±⟩.                                                     (5.131)

Here, the |±⟩ denote eigenkets of the S_z operator corresponding to the eigen-
values ±ħ/2. These kets are orthonormal (since S_z is an Hermitian operator),
so

⟨+|−⟩ = 0.                                                                (5.132)

They are also properly normalized and complete, so that

⟨+|+⟩ = ⟨−|−⟩ = 1,                                                         (5.133)

and

|+⟩⟨+| + |−⟩⟨−| = 1.                                                       (5.134)

It is easily verified that the Hermitian operators defined by

S_x = (ħ/2) ( |+⟩⟨−| + |−⟩⟨+| ),                                           (5.135)
S_y = (iħ/2) ( −|+⟩⟨−| + |−⟩⟨+| ),                                         (5.136)
S_z = (ħ/2) ( |+⟩⟨+| − |−⟩⟨−| ),                                           (5.137)

satisfy the commutation relations (5.8)–(5.10) (with the L_j replaced by the S_j).
The operator S² takes the form

S² = 3ħ²/4.                                                               (5.138)

It is also easily demonstrated that S² and S_z, defined in this manner, satisfy the
eigenvalue relations (5.130)–(5.131). Equations (5.135)–(5.138) constitute a re-
alization of the spin operators S and S² (for a spin one-half particle) in spin space
(i.e., that Hilbert sub-space consisting of kets which correspond to the different
spin states of the particle).
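The realization (5.135)–(5.137) is easily checked numerically. In the following
Python/NumPy sketch (illustrative only; ħ = 1, with |+⟩ and |−⟩ represented by
the unit vectors (1, 0) and (0, 1)), the outer products reproduce the expected
algebra:

    import numpy as np

    # Spin-1/2 operators built from the outer products of Eqs. (5.135)-(5.137).
    plus, minus = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    ket_bra = lambda a, b: np.outer(a, b.conj())

    Sx = 0.5 * (ket_bra(plus, minus) + ket_bra(minus, plus))
    Sy = 0.5j * (-ket_bra(plus, minus) + ket_bra(minus, plus))
    Sz = 0.5 * (ket_bra(plus, plus) - ket_bra(minus, minus))
    S2 = Sx@Sx + Sy@Sy + Sz@Sz

    print(np.allclose(Sx@Sy - Sy@Sx, 1j*Sz))     # Eq. (5.8) with L -> S
    print(np.allclose(S2, 0.75*np.eye(2)))       # Eq. (5.138)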
5.8 Wave-function of a spin one-half particle
The state of a spin one-half particle is represented as a vector in ket space.
Let us suppose that this space is spanned by the basis kets |x′, y′, z′, ±⟩. Here,
|x′, y′, z′, ±⟩ denotes a simultaneous eigenstate of the position operators x, y, z,
and the spin operator S_z, corresponding to the eigenvalues x′, y′, z′, and ±ħ/2,
respectively. The basis kets are assumed to satisfy the completeness relation

∫∫∫ ( |x′, y′, z′, +⟩⟨x′, y′, z′, +| + |x′, y′, z′, −⟩⟨x′, y′, z′, −| ) dx′ dy′ dz′ = 1.   (5.139)
It is helpful to think of the ket |x′, y′, z′, +⟩ as the product of two kets: a
position space ket |x′, y′, z′⟩, and a spin space ket |+⟩. We assume that such a
product obeys the commutative and distributive axioms of multiplication:

|x′, y′, z′⟩ |+⟩ = |+⟩ |x′, y′, z′⟩,                                        (5.140)
(c′ |x′, y′, z′⟩ + c′′ |x′′, y′′, z′′⟩) |+⟩ = c′ |x′, y′, z′⟩ |+⟩
                                          + c′′ |x′′, y′′, z′′⟩ |+⟩,       (5.141)
|x′, y′, z′⟩ (c_+ |+⟩ + c_− |−⟩) = c_+ |x′, y′, z′⟩ |+⟩
                                + c_− |x′, y′, z′⟩ |−⟩,                    (5.142)

where the c's are numbers. We can give meaning to any position space operator
(such as L_z) acting on the product |x′, y′, z′⟩|+⟩ by assuming that it operates only
on the |x′, y′, z′⟩ factor, and commutes with the |+⟩ factor. Similarly, we can give
a meaning to any spin operator (such as S_z) acting on |x′, y′, z′⟩|+⟩ by assuming
that it operates only on |+⟩, and commutes with |x′, y′, z′⟩. This implies that every
position space operator commutes with every spin operator. In this manner, we
can give meaning to the equation

|x′, y′, z′, ±⟩ = |x′, y′, z′⟩ |±⟩ = |±⟩ |x′, y′, z′⟩.                       (5.143)

The multiplication in the above equation is of quite a different type to any
which we have encountered previously. The ket vectors |x′, y′, z′⟩ and |±⟩ are
in two quite separate vector spaces, and their product |x′, y′, z′⟩ |±⟩ is in a third
vector space. In mathematics, the latter space is termed the product space of the
former spaces, which are termed factor spaces. The number of dimensions of a
product space is equal to the product of the number of dimensions of each of the
factor spaces. A general ket of the product space is not of the form (5.143), but
is instead a sum or integral of kets of this form.

A general state A of a spin one-half particle is represented as a ket ||A⟩⟩ in the
product of the spin and position spaces. This state can be completely specified by
two wave-functions:

ψ_+(x′, y′, z′) = ⟨x′, y′, z′| ⟨+| |A⟩⟩,                                    (5.144)
ψ_−(x′, y′, z′) = ⟨x′, y′, z′| ⟨−| |A⟩⟩.                                    (5.145)
The probability of observing the particle in the region x′ to x′ + dx′, y′ to y′ + dy′,
and z′ to z′ + dz′, with s_z = +1/2 is |ψ_+(x′, y′, z′)|² dx′ dy′ dz′. Likewise, the
probability of observing the particle in the region x′ to x′ + dx′, y′ to y′ + dy′,
and z′ to z′ + dz′, with s_z = −1/2 is |ψ_−(x′, y′, z′)|² dx′ dy′ dz′. The normalization
condition for the wave-functions is

∫∫∫ ( |ψ_+|² + |ψ_−|² ) dx′ dy′ dz′ = 1.                                    (5.146)
5.9 Rotation operators in spin space
Let us, for the moment, forget about the spatial position of the particle, and
concentrate on its spin state. A general spin state A is represented by the ket

|A⟩ = ⟨+|A⟩ |+⟩ + ⟨−|A⟩ |−⟩                                                 (5.147)

in spin space. In Sect. 5.3, we were able to construct an operator R_z(∆ϕ) which
rotates the system by an angle ∆ϕ about the z-axis in position space. Can we also
construct an operator T_z(∆ϕ) which rotates the system by an angle ∆ϕ about
the z-axis in spin space? By analogy with Eq. (5.62), we would expect such an
operator to take the form

T_z(∆ϕ) = exp(−i S_z ∆ϕ/ħ).                                                (5.148)

Thus, after rotation, the ket |A⟩ becomes

|A_R⟩ = T_z(∆ϕ) |A⟩.                                                       (5.149)

To demonstrate that the operator (5.148) really does rotate the spin of the
system, let us consider its effect on ⟨S_x⟩. Under rotation, this expectation value
changes as follows:

⟨S_x⟩ → ⟨A_R| S_x |A_R⟩ = ⟨A| T_z† S_x T_z |A⟩.                              (5.150)

Thus, we need to compute

exp(i S_z ∆ϕ/ħ) S_x exp(−i S_z ∆ϕ/ħ).                                       (5.151)
This can be achieved in two different ways.

First, we can use the explicit formula for S_x given in Eq. (5.135). We find that
Eq. (5.151) becomes

(ħ/2) exp(i S_z ∆ϕ/ħ) ( |+⟩⟨−| + |−⟩⟨+| ) exp(−i S_z ∆ϕ/ħ),                 (5.152)

or

(ħ/2) [ e^{i ∆ϕ/2} |+⟩⟨−| e^{i ∆ϕ/2} + e^{−i ∆ϕ/2} |−⟩⟨+| e^{−i ∆ϕ/2} ],    (5.153)

which reduces to

S_x cos ∆ϕ − S_y sin ∆ϕ,                                                   (5.154)

where use has been made of Eqs. (5.135)–(5.137).

A second approach is to use the so-called Baker-Hausdorff lemma. This takes
the form

exp(i G λ) A exp(−i G λ) = A + i λ [G, A] + (i² λ²/2!) [G, [G, A]] + ⋯
      + (i^n λ^n/n!) [G, [G, [G, ⋯ [G, A]]] ⋯] + ⋯,                         (5.155)

where G is a Hermitian operator, and λ is a real parameter. The proof of this
lemma is left as an exercise. Applying the Baker-Hausdorff lemma to Eq. (5.151),
we obtain

S_x + (i ∆ϕ/ħ) [S_z, S_x] + (1/2!) (i ∆ϕ/ħ)² [S_z, [S_z, S_x]] + ⋯,          (5.156)

which reduces to

S_x [ 1 − ∆ϕ²/2! + ⋯ ] − S_y [ ∆ϕ − ∆ϕ³/3! + ⋯ ],                           (5.157)

or

S_x cos ∆ϕ − S_y sin ∆ϕ,                                                   (5.158)

where use has been made of Eq. (5.125). The second proof is more general than
the first, since it only uses the fundamental commutation relation (5.125), and is,
therefore, valid for systems with spin angular momentum higher than one-half.
For a spin one-half system, both methods imply that

⟨S_x⟩ → ⟨S_x⟩ cos ∆ϕ − ⟨S_y⟩ sin ∆ϕ                                          (5.159)

under the action of the rotation operator (5.148). It is straight-forward to show
that

⟨S_y⟩ → ⟨S_y⟩ cos ∆ϕ + ⟨S_x⟩ sin ∆ϕ.                                         (5.160)

Furthermore,

⟨S_z⟩ → ⟨S_z⟩,                                                              (5.161)

since S_z commutes with the rotation operator. Equations (5.159)–(5.161) demon-
strate that the operator (5.148) rotates the expectation value of S by an angle ∆ϕ
about the z-axis. In fact, the expectation value of the spin operator behaves like
a classical vector under rotation:

⟨S_k⟩ → Σ_l R_kl ⟨S_l⟩,                                                      (5.162)

where the R_kl are the elements of the conventional rotation matrix for the rotation
in question. It is clear, from our second derivation of the result (5.159), that this
property is not restricted to the spin operators of a spin one-half system. In fact,
we have effectively demonstrated that

⟨J_k⟩ → Σ_l R_kl ⟨J_l⟩,                                                      (5.163)

where the J_k are the generators of rotation, satisfying the fundamental commuta-
tion relation J × J = iħ J, and the rotation operator about the kth axis is written
R_k(∆ϕ) = exp(−i J_k ∆ϕ/ħ).
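For the spin one-half case, the transformation law (5.159) can be confirmed
numerically. The Python/NumPy sketch below (illustrative; ħ = 1, with an
arbitrarily chosen rotation angle) evaluates T_z† S_x T_z directly:

    import numpy as np
    from scipy.linalg import expm

    # Check Eq. (5.159) for spin 1/2: T_z^dagger Sx T_z = Sx cos(phi) - Sy sin(phi).
    Sx = 0.5*np.array([[0, 1], [1, 0]], dtype=complex)
    Sy = 0.5*np.array([[0, -1j], [1j, 0]])
    Sz = 0.5*np.array([[1, 0], [0, -1]], dtype=complex)

    phi = 0.73                                    # arbitrary rotation angle
    Tz = expm(-1j*Sz*phi)                         # Eq. (5.148) with hbar = 1
    rotated = Tz.conj().T @ Sx @ Tz               # as in Eq. (5.150)
    print(np.allclose(rotated, Sx*np.cos(phi) - Sy*np.sin(phi)))   # True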
Consider the effect of the rotation operator (5.148) on the state ket (5.147).
It is easily seen that

T_z(∆ϕ) |A⟩ = e^{−i ∆ϕ/2} ⟨+|A⟩ |+⟩ + e^{i ∆ϕ/2} ⟨−|A⟩ |−⟩.                 (5.164)

Consider a rotation by 2π radians. We find that

|A⟩ → T_z(2π) |A⟩ = −|A⟩.                                                   (5.165)

Note that a ket rotated by 2π radians differs from the original ket by a minus
sign. In fact, a rotation by 4π radians is needed to transform a ket into itself.
The minus sign does not affect the expectation value of S, since S is sandwiched
between ⟨A| and |A⟩, both of which change sign. Nevertheless, the minus sign
does give rise to observable consequences, as we shall see presently.
5.10 Magnetic moments
Consider a particle of charge q and velocity v performing a circular orbit of radius
r in the x-y plane. The charge is equivalent to a current loop of radius r in the
x-y plane carrying current I = q v/2π r. The magnetic moment µ of the loop is of
magnitude π r² I and is directed along the z-axis. Thus, we can write

µ = (q/2) r × v,                                                           (5.166)

where r and v are the vector position and velocity of the particle, respectively.
However, we know that p = m v, where p is the vector momentum of the parti-
cle, and m is its mass. We also know that L = r × p, where L is the orbital angular
momentum. It follows that

µ = (q/2m) L.                                                              (5.167)

Using the usual analogy between classical and quantum mechanics, we expect the
above relation to also hold between the quantum mechanical operators, µ and L,
which represent magnetic moment and orbital angular momentum, respectively.
This is indeed found to be the case.

Does spin angular momentum also give rise to a contribution to the magnetic
moment of a charged particle? The answer is “yes”. In fact, relativistic quantum
mechanics actually predicts that a charged particle possessing spin should also
possess a magnetic moment (this was first demonstrated by Dirac). We can write

µ = (q/2m) (L + g S),                                                      (5.168)

where g is called the gyromagnetic ratio. For an electron this ratio is found to be

g_e = 2 [ 1 + (1/2π) e²/(4π ε_0 ħ c) ].                                    (5.169)
The factor 2 is correctly predicted by Dirac's relativistic theory of the electron.
The small correction 1/(2π 137), derived originally by Schwinger, is due to quan-
tum field effects. We shall ignore this correction in the following, so

µ ≃ −(e/2m_e) (L + 2 S)                                                    (5.170)

for an electron (here, e > 0).
5.11 Spin precession
The Hamiltonian for an electron at rest in a z-directed magnetic field, B = B ẑ, is

H = −µ·B = (e/m_e) S·B = ω S_z,                                            (5.171)

where

ω = e B/m_e.                                                               (5.172)

According to Eq. (4.28), the time evolution operator for this system is

T(t, 0) = exp(−i H t/ħ) = exp(−i S_z ω t/ħ).                               (5.173)

It can be seen, by comparison with Eq. (5.148), that the time evolution operator
is precisely the same as the rotation operator for spin, with ∆ϕ set equal to ω t.
It is immediately clear that the Hamiltonian (5.171) causes the electron spin to
precess about the z-axis with angular frequency ω. In fact, Eqs. (5.159)–(5.161)
imply that

⟨S_x⟩_t = ⟨S_x⟩_{t=0} cos ωt − ⟨S_y⟩_{t=0} sin ωt,                           (5.174)
⟨S_y⟩_t = ⟨S_y⟩_{t=0} cos ωt + ⟨S_x⟩_{t=0} sin ωt,                           (5.175)
⟨S_z⟩_t = ⟨S_z⟩_{t=0}.                                                       (5.176)

The time evolution of the state ket is given by analogy with Eq. (5.164):

|A, t⟩ = e^{−i ωt/2} ⟨+|A, 0⟩ |+⟩ + e^{i ωt/2} ⟨−|A, 0⟩ |−⟩.                 (5.177)

Note that it takes time t = 4π/ω for the state ket to return to its original state. By
contrast, it only takes time t = 2π/ω for the spin vector to point in its original
direction.
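The precession formulae (5.174)–(5.176) are easily verified numerically. The
following Python sketch (illustrative only; ħ = 1, arbitrary ω, spin initially
polarized along the x-axis) evolves a spin one-half state with the operator
(5.173):

    import numpy as np
    from scipy.linalg import expm

    # Spin precession for H = omega * Sz (hbar = 1), checking Eqs. (5.174)-(5.175).
    Sx = 0.5*np.array([[0, 1], [1, 0]], dtype=complex)
    Sy = 0.5*np.array([[0, -1j], [1j, 0]])
    Sz = 0.5*np.array([[1, 0], [0, -1]], dtype=complex)

    omega = 2.0
    psi0 = np.array([1, 1], dtype=complex)/np.sqrt(2)    # spin along +x at t = 0
    expect = lambda op, psi: (psi.conj() @ op @ psi).real
    sx0, sy0 = expect(Sx, psi0), expect(Sy, psi0)

    for t in np.linspace(0.0, 2*np.pi/omega, 5):
        psi = expm(-1j*omega*Sz*t) @ psi0                # Eq. (5.173)
        assert np.isclose(expect(Sx, psi), sx0*np.cos(omega*t) - sy0*np.sin(omega*t))
        assert np.isclose(expect(Sy, psi), sy0*np.cos(omega*t) + sx0*np.sin(omega*t))
    print("precession of <S> matches Eqs. (5.174)-(5.175)")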
We now describe an experiment to detect the minus sign in Eq. (5.165). An
almost monoenergetic beam of neutrons is split in two, sent along two different
paths, A and B, and then recombined. Path A goes through a magnetic field free
region. However, path B enters a small region where a static magnetic field is
present. As a result, a neutron state ket going along path B acquires a phase-shift
exp(∓i ωT/2) (the ∓ signs correspond to s_z = ±1/2 states). Here, T is the time
spent in the magnetic field, and ω is the spin precession frequency

ω = g_n e B/m_p.                                                           (5.178)

This frequency is defined in an analogous manner to Eq. (5.172). The gyro-
magnetic ratio for a neutron is found experimentally to be g_n = −1.91. (The
magnetic moment of a neutron is entirely a quantum field effect.) When neu-
trons from path A and path B meet they undergo interference. We expect the
observed neutron intensity in the interference region to exhibit a cos(±ωT/2 + δ)
variation, where δ is the phase difference between paths A and B in the absence
of a magnetic field. In experiments, the time of flight T through the magnetic
field region is kept constant, while the field-strength B is varied. It follows that
the change in magnetic field required to produce successive maxima is

∆B = 4π ħ/(e g_n λ̄ l),                                                     (5.179)

where l is the path-length through the magnetic field region, and λ̄ is the de
Broglie wavelength over 2π of the neutrons. The above prediction has been ver-
ified experimentally to within a fraction of a percent. This prediction depends
crucially on the fact that it takes a 4π rotation to return a state ket to its original
state. If it only took a 2π rotation then ∆B would be half of the value given above,
which does not agree with the experimental data.
5.12 Pauli two-component formalism
We have seen, in Sect. 5.4, that the eigenstates of orbital angular momentum
can be conveniently represented as spherical harmonics. In this representation,
the orbital angular momentum operators take the form of differential operators
involving only angular coordinates. It is conventional to represent the eigenstates
of spin angular momentum as column (or row) matrices. In this representation,
the spin angular momentum operators take the form of matrices.

The matrix representation of a spin one-half system was introduced by Pauli in
1926. Recall, from Sect. 5.9, that a general spin ket can be expressed as a linear
combination of the two eigenkets of S_z belonging to the eigenvalues ±ħ/2. These
are denoted |±⟩. Let us represent these basis eigenkets as column matrices:

|+⟩ → (1, 0)^T ≡ χ_+,                                                      (5.180)
|−⟩ → (0, 1)^T ≡ χ_−.                                                      (5.181)

The corresponding eigenbras are represented as row matrices:

⟨+| → (1, 0) ≡ χ_+†,                                                       (5.182)
⟨−| → (0, 1) ≡ χ_−†.                                                       (5.183)

In this scheme, a general ket takes the form

|A⟩ = ⟨+|A⟩ |+⟩ + ⟨−|A⟩ |−⟩ → (⟨+|A⟩, ⟨−|A⟩)^T,                              (5.184)

and a general bra becomes

⟨A| = ⟨A|+⟩ ⟨+| + ⟨A|−⟩ ⟨−| → (⟨A|+⟩, ⟨A|−⟩).                                (5.185)

The column matrix (5.184) is called a two-component spinor, and can be written

χ ≡ (⟨+|A⟩, ⟨−|A⟩)^T = (c_+, c_−)^T = c_+ χ_+ + c_− χ_−,                     (5.186)
where the c_± are complex numbers. The row matrix (5.185) becomes

χ† ≡ (⟨A|+⟩, ⟨A|−⟩) = (c_+*, c_−*) = c_+* χ_+† + c_−* χ_−†.                  (5.187)

Consider the ket obtained by the action of a spin operator on ket A:

|A′⟩ = S_k |A⟩.                                                             (5.188)

This ket is represented as

|A′⟩ → (⟨+|A′⟩, ⟨−|A′⟩)^T ≡ χ′.                                              (5.189)

However,

⟨+|A′⟩ = ⟨+|S_k|+⟩ ⟨+|A⟩ + ⟨+|S_k|−⟩ ⟨−|A⟩,                                   (5.190)
⟨−|A′⟩ = ⟨−|S_k|+⟩ ⟨+|A⟩ + ⟨−|S_k|−⟩ ⟨−|A⟩,                                   (5.191)

or

( ⟨+|A′⟩ )   ( ⟨+|S_k|+⟩  ⟨+|S_k|−⟩ ) ( ⟨+|A⟩ )
( ⟨−|A′⟩ ) = ( ⟨−|S_k|+⟩  ⟨−|S_k|−⟩ ) ( ⟨−|A⟩ ).                              (5.192)

It follows that we can represent the operator/ket relation (5.188) as the matrix
relation

χ′ = (ħ/2) σ_k χ,                                                           (5.193)

where the σ_k are the matrices of the ⟨±|S_k|±⟩ values divided by ħ/2. These
matrices, which are called the Pauli matrices, can easily be evaluated using the
explicit forms for the spin operators given in Eqs. (5.135)–(5.137). We find that

σ_1 = ( 0  1 )
      ( 1  0 ),                                                             (5.194)

σ_2 = ( 0  −i )
      ( i   0 ),                                                            (5.195)

σ_3 = ( 1   0 )
      ( 0  −1 ).                                                            (5.196)

Here, 1, 2, and 3 refer to x, y, and z, respectively. Note that, in this scheme, we
are effectively representing the spin operators in terms of the Pauli matrices:

S_k → (ħ/2) σ_k.                                                            (5.197)

The expectation value of S_k can be written in terms of spinors and the Pauli
matrices:

⟨S_k⟩ = ⟨A|S_k|A⟩ = Σ_± ⟨A|±⟩ ⟨±|S_k|±⟩ ⟨±|A⟩ = (ħ/2) χ† σ_k χ.              (5.198)

The fundamental commutation relation for angular momentum, Eq. (5.125),
can be combined with (5.197) to give the following commutation relation for the
Pauli matrices:

σ × σ = 2 i σ.                                                              (5.199)

It is easily seen that the matrices (5.194)–(5.196) actually satisfy these relations
(i.e., σ_1 σ_2 − σ_2 σ_1 = 2 i σ_3, plus all cyclic permutations). It is also easily seen that
the Pauli matrices satisfy the anti-commutation relations

{σ_i, σ_j} = 2 δ_ij.                                                        (5.200)
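Both sets of relations can be confirmed by direct matrix multiplication, as in the
following Python/NumPy sketch (illustrative only):

    import numpy as np
    from itertools import product

    # Direct check of the commutation (5.199) and anti-commutation (5.200)
    # relations for the Pauli matrices.
    s = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

    eps = np.zeros((3, 3, 3))                     # Levi-Civita symbol
    for i, j, k in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
        eps[i, j, k], eps[j, i, k] = 1.0, -1.0

    for i, j in product(range(3), repeat=2):
        comm = s[i]@s[j] - s[j]@s[i]
        anti = s[i]@s[j] + s[j]@s[i]
        assert np.allclose(comm, 2j*sum(eps[i, j, k]*s[k] for k in range(3)))
        assert np.allclose(anti, 2*(i == j)*np.eye(2))
    print("Eqs. (5.199) and (5.200) verified")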
Let us examine how the Pauli scheme can be extended to take into account the
position of a spin one-half particle. Recall, from Sect. 5.8, that we can represent
a general basis ket as the product of basis kets in position space and spin space:

|x′, y′, z′, ±⟩ = |x′, y′, z′⟩ |±⟩ = |±⟩ |x′, y′, z′⟩.                        (5.201)

The ket corresponding to state A is denoted ||A⟩⟩, and resides in the product
space of the position and spin ket spaces. State A is completely specified by the
two wave-functions

ψ_+(x′, y′, z′) = ⟨x′, y′, z′| ⟨+| |A⟩⟩,                                     (5.202)
ψ_−(x′, y′, z′) = ⟨x′, y′, z′| ⟨−| |A⟩⟩.                                     (5.203)

Consider the operator relation

||A′⟩⟩ = S_k ||A⟩⟩.                                                          (5.204)

It is easily seen that

⟨x′, y′, z′| ⟨+| |A′⟩⟩ = ⟨+|S_k|+⟩ ⟨x′, y′, z′| ⟨+| |A⟩⟩
                      + ⟨+|S_k|−⟩ ⟨x′, y′, z′| ⟨−| |A⟩⟩,                     (5.205)
⟨x′, y′, z′| ⟨−| |A′⟩⟩ = ⟨−|S_k|+⟩ ⟨x′, y′, z′| ⟨+| |A⟩⟩
                      + ⟨−|S_k|−⟩ ⟨x′, y′, z′| ⟨−| |A⟩⟩,                     (5.206)

where use has been made of the fact that the spin operator S_k commutes with
the eigenbras ⟨x′, y′, z′|. It is fairly obvious that we can represent the operator
relation (5.204) as a matrix relation if we generalize our definition of a spinor by
writing

||A⟩⟩ → (ψ_+(r′), ψ_−(r′))^T ≡ χ,                                            (5.207)

and so on. The components of a spinor are now wave-functions, instead of com-
plex numbers. In this scheme, the operator equation (5.204) becomes simply

χ′ = (ħ/2) σ_k χ.                                                           (5.208)
Consider the operator relation

||A′⟩⟩ = p_k ||A⟩⟩.                                                          (5.209)

In the Schrödinger representation, we have

⟨x′, y′, z′| ⟨+| |A′⟩⟩ = ⟨x′, y′, z′| p_k ⟨+| |A⟩⟩
                      = −iħ ∂/∂x_k′ ⟨x′, y′, z′| ⟨+| |A⟩⟩,                   (5.210)
⟨x′, y′, z′| ⟨−| |A′⟩⟩ = ⟨x′, y′, z′| p_k ⟨−| |A⟩⟩
                      = −iħ ∂/∂x_k′ ⟨x′, y′, z′| ⟨−| |A⟩⟩,                   (5.211)

where use has been made of Eq. (3.78). The above equation reduces to

( ψ_+′(r′) )   ( −iħ ∂ψ_+(r′)/∂x_k′ )
( ψ_−′(r′) ) = ( −iħ ∂ψ_−(r′)/∂x_k′ ).                                       (5.212)

Thus, the operator equation (5.209) can be written

χ′ = p_k χ,                                                                 (5.213)

where

p_k → −iħ (∂/∂x_k′) I.                                                      (5.214)

Here, I is the 2 × 2 unit matrix. In fact, any position operator (e.g., p_k or L_k)
is represented in the Pauli scheme as some differential operator of the position
eigenvalues multiplied by the 2 × 2 unit matrix.
What about combinations of position and spin operators? The most commonly
occurring combination is a dot product: e.g., S·L = (ħ/2) σ·L. Consider the
hybrid operator σ·a, where a ≡ (a_x, a_y, a_z) is some vector position operator. This
quantity is represented as a 2 × 2 matrix:

σ·a ≡ Σ_k a_k σ_k = (   a_3        a_1 − i a_2 )
                    ( a_1 + i a_2     −a_3     ).                            (5.215)

Since, in the Schrödinger representation, a general position operator takes the
form of a differential operator in x′, y′, or z′, it is clear that the above quantity
must be regarded as a matrix differential operator which acts on spinors of the
general form (5.207). The important identity

(σ·a) (σ·b) = a·b + i σ·(a × b)                                              (5.216)

follows from the commutation and anti-commutation relations (5.199) and (5.200).
Thus,

Σ_j σ_j a_j Σ_k σ_k b_k = Σ_j Σ_k [ (1/2){σ_j, σ_k} + (1/2)[σ_j, σ_k] ] a_j b_k
                       = Σ_j Σ_k (δ_jk + i ε_jkl σ_l) a_j b_k
                       = a·b + i σ·(a × b).                                  (5.217)

A general rotation operator in spin space is written

T(∆ϕ) = exp(−i S·n ∆ϕ/ħ),                                                    (5.218)

by analogy with Eq. (5.148), where n is a unit vector pointing along the axis of
rotation, and ∆ϕ is the angle of rotation. Here, n can be regarded as a trivial
position operator. The rotation operator is represented

exp(−i S·n ∆ϕ/ħ) → exp(−i σ·n ∆ϕ/2)                                          (5.219)

in the Pauli scheme. The term on the right-hand side of the above expression is
the exponential of a matrix. This can easily be evaluated using the Taylor series
for an exponential, plus the rules

(σ·n)^n = 1      for n even,                                                 (5.220)
(σ·n)^n = (σ·n)  for n odd.                                                  (5.221)

These rules follow trivially from the identity (5.216). Thus, we can write

exp(−i σ·n ∆ϕ/2) = [ 1 − (σ·n)²/2! (∆ϕ/2)² + (σ·n)⁴/4! (∆ϕ/2)⁴ + ⋯ ]
      − i [ (σ·n) (∆ϕ/2) − (σ·n)³/3! (∆ϕ/2)³ + ⋯ ]
      = cos(∆ϕ/2) I − i sin(∆ϕ/2) σ·n.                                       (5.222)

The explicit 2 × 2 form of this matrix is

( cos(∆ϕ/2) − i n_z sin(∆ϕ/2)      (−i n_x − n_y) sin(∆ϕ/2)   )
( (−i n_x + n_y) sin(∆ϕ/2)          cos(∆ϕ/2) + i n_z sin(∆ϕ/2) ).           (5.223)
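The closed form (5.222) is easily checked against a brute-force matrix
exponential, as in the following Python sketch (illustrative only; arbitrary unit
vector n and rotation angle):

    import numpy as np
    from scipy.linalg import expm

    # Check Eq. (5.222): exp(-i sigma.n dphi/2) = cos(dphi/2) I - i sin(dphi/2) sigma.n
    s1 = np.array([[0, 1], [1, 0]], dtype=complex)
    s2 = np.array([[0, -1j], [1j, 0]])
    s3 = np.array([[1, 0], [0, -1]], dtype=complex)

    n = np.array([1.0, 2.0, 2.0]) / 3.0          # unit vector along the rotation axis
    sigma_n = n[0]*s1 + n[1]*s2 + n[2]*s3
    dphi = 1.1

    lhs = expm(-1j*sigma_n*dphi/2)
    rhs = np.cos(dphi/2)*np.eye(2) - 1j*np.sin(dphi/2)*sigma_n
    print(np.allclose(lhs, rhs))                 # True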
Rotation matrices act on spinors in much the same manner as the corresponding
rotation operators act on state kets. Thus,

χ′ = exp(−i σ·n ∆ϕ/2) χ,                                                     (5.224)

where χ′ denotes the spinor obtained after rotating the spinor χ an angle ∆ϕ
about the n-axis. The Pauli matrices remain unchanged under rotations. How-
ever, the quantity χ† σ_k χ is proportional to the expectation value of S_k [see
Eq. (5.198)], so we would expect it to transform like a vector under rotation
(see Sect. 5.9). In fact, we require

(χ† σ_k χ)′ ≡ (χ′)† σ_k χ′ = Σ_l R_kl (χ† σ_l χ),                             (5.225)

where the R_kl are the elements of a conventional rotation matrix. This is easily
demonstrated, since

exp(i σ_3 ∆ϕ/2) σ_1 exp(−i σ_3 ∆ϕ/2) = σ_1 cos ∆ϕ − σ_2 sin ∆ϕ,              (5.226)

plus all cyclic permutations. The above expression is the 2 × 2 matrix analogue
of (see Sect. 5.9)

exp(i S_z ∆ϕ/ħ) S_x exp(−i S_z ∆ϕ/ħ) = S_x cos ∆ϕ − S_y sin ∆ϕ.              (5.227)

The previous two formulae can both be validated using the Baker-Hausdorff
lemma, (5.155), which holds for Hermitian matrices, in addition to Hermitian
operators.
5.13 Spin greater than one-half systems
In the absence of spin, the Hamiltonian can be written as some function of the
position and momentum operators. Using the Schrödinger representation, in
which p → −iħ ∇, the energy eigenvalue problem,

H |E⟩ = E |E⟩,                                                              (5.228)

can be transformed into a partial differential equation for the wave-function
ψ(r′) ≡ ⟨r′|E⟩. This function specifies the probability density for observing the
particle at a given position, r′. In general, we find

H ψ = E ψ,                                                                  (5.229)

where H is now a partial differential operator. The boundary conditions (for a
bound state) are obtained from the normalization constraint

∫ |ψ|² dV = 1.                                                              (5.230)

This is all very familiar. However, we now know how to generalize this scheme
to deal with a spin one-half particle. Instead of representing the state of the par-
ticle by a single wave-function, we use two wave-functions. The first, ψ_+(r′),
specifies the probability density of observing the particle at position r′ with spin
angular momentum +ħ/2 in the z-direction. The second, ψ_−(r′), specifies the
probability density of observing the particle at position r′ with spin angular mo-
mentum −ħ/2 in the z-direction. In the Pauli scheme, these wave-functions are
combined into a spinor, χ, which is simply the column vector of ψ_+ and ψ_−. In gen-
eral, the Hamiltonian is a function of the position, momentum, and spin opera-
tors. Adopting the Schrödinger representation, and the Pauli scheme, the energy
eigenvalue problem reduces to

H χ = E χ,                                                                  (5.231)

where χ is a spinor (i.e., a 2 × 1 matrix of wave-functions) and H is a 2 × 2 matrix
partial differential operator [see Eq. (5.215)]. The above spinor equation can
always be written out explicitly as two coupled partial differential equations for
ψ_+ and ψ_−.
Suppose that the Hamiltonian has no dependence on the spin operators. In this
case, the Hamiltonian is represented as a diagonal 2 × 2 matrix partial differential
operator in the Schrödinger/Pauli scheme [see Eq. (5.214)]. In other words, the
partial differential equation for ψ_+ decouples from that for ψ_−. In fact, both
equations have the same form, so there is only really one differential equation.
In this situation, the most general solution to Eq. (5.231) can be written

χ = ψ(r′) (c_+, c_−)^T.                                                      (5.232)

Here, ψ(r′) is determined by the solution of the differential equation, and the
c_± are arbitrary complex numbers. The physical significance of the above ex-
pression is clear. The Hamiltonian determines the relative probabilities of finding
the particle at various different positions, but the direction of its spin angular
momentum remains undetermined.

Suppose that the Hamiltonian depends only on the spin operators. In this
case, the Hamiltonian is represented as a 2 × 2 matrix of complex numbers in
the Schrödinger/Pauli scheme [see Eq. (5.197)], and the spinor eigenvalue equa-
tion (5.231) reduces to a straight-forward matrix eigenvalue problem. The most
general solution can again be written

χ = ψ(r′) (c_+, c_−)^T.                                                      (5.233)

Here, the ratio c_+/c_− is determined by the matrix eigenvalue problem, and the
wave-function ψ(r′) is arbitrary. Clearly, the Hamiltonian determines the direc-
tion of the particle's spin angular momentum, but leaves its position undeter-
mined.

In general, of course, the Hamiltonian is a function of both position and
spin operators. In this case, it is not possible to decompose the spinor as in
Eqs. (5.232) and (5.233). In other words, a general Hamiltonian causes the di-
rection of the particle's spin angular momentum to vary with position in some
specified manner. This can only be represented as a spinor involving different
wave-functions, ψ_+ and ψ_−.
But, what happens if we have a spin one or a spin three-halves particle? It turns out that we can generalize the Pauli two-component scheme in a fairly straight-forward manner. Consider a spin-$s$ particle: i.e., a particle for which the eigenvalue of $S^2$ is $s\,(s+1)\,\hbar^2$. Here, $s$ is either an integer, or a half-integer. The eigenvalues of $S_z$ are written $s_z\,\hbar$, where $s_z$ is allowed to take the values $s, s-1, \cdots, -s+1, -s$. In fact, there are $2\,s+1$ distinct allowed values of $s_z$. Not surprisingly, we can represent the state of the particle by $2\,s+1$ different wave-functions, denoted $\psi_{s_z}({\bf r}')$. Here, $\psi_{s_z}({\bf r}')$ specifies the probability density for observing the particle at position ${\bf r}'$ with spin angular momentum $s_z\,\hbar$ in the $z$-direction. More exactly,
$$ \psi_{s_z}({\bf r}') = \langle {\bf r}'|\,\langle s, s_z\|A\rangle\rangle, \tag{5.234} $$
where $\|A\rangle\rangle$ denotes a state ket in the product space of the position and spin operators. The state of the particle can be represented more succinctly by a spinor, $\chi$, which is simply the $(2\,s+1)$-component column vector of the $\psi_{s_z}({\bf r}')$. Thus, a spin one-half particle is represented by a two-component spinor, a spin one particle by a three-component spinor, a spin three-halves particle by a four-component spinor, and so on.
In this extended Schrödinger/Pauli scheme, position space operators take the form of diagonal $(2\,s+1)\times(2\,s+1)$ matrix differential operators. Thus, we can represent the momentum operators as [see Eq. (5.214)]
$$ p_k \rightarrow -{\rm i}\,\hbar\,\frac{\partial}{\partial x_k}\,I, \tag{5.235} $$
where $I$ is the $(2\,s+1)\times(2\,s+1)$ unit matrix. We represent the spin operators as
$$ S_k \rightarrow s\,\hbar\,\sigma_k, \tag{5.236} $$
where the $(2\,s+1)\times(2\,s+1)$ extended Pauli matrix $\sigma_k$ has elements
$$ (\sigma_k)_{jl} = \frac{\langle s, j|S_k|s, l\rangle}{s\,\hbar}. \tag{5.237} $$
Here, $j, l$ are integers, or half-integers, lying in the range $-s$ to $+s$. But, how can we evaluate the brackets $\langle s, j|S_k|s, l\rangle$ and, thereby, construct the extended Pauli matrices? In fact, it is trivial to construct the $\sigma_z$ matrix. By definition,
$$ S_z\,|s, j\rangle = j\,\hbar\,|s, j\rangle. \tag{5.238} $$
Hence,
$$ (\sigma_z)_{jl} = \frac{\langle s, j|S_z|s, l\rangle}{s\,\hbar} = \frac{j}{s}\,\delta_{jl}, \tag{5.239} $$
where use has been made of the orthonormality property of the $|s, j\rangle$. Thus, $\sigma_z$ is the suitably normalized diagonal matrix of the eigenvalues of $S_z$. The matrix elements of $\sigma_x$ and $\sigma_y$ are most easily obtained by considering the shift operators,
$$ S^{\pm} = S_x \pm {\rm i}\,S_y. \tag{5.240} $$
We know, from Eqs. (5.55)–(5.56), that
$$ S^{+}\,|s, j\rangle = \sqrt{s\,(s+1) - j\,(j+1)}\;\hbar\,|s, j+1\rangle, \tag{5.241} $$
$$ S^{-}\,|s, j\rangle = \sqrt{s\,(s+1) - j\,(j-1)}\;\hbar\,|s, j-1\rangle. \tag{5.242} $$
It follows from Eqs. (5.237), and (5.240)–(5.242), that
$$ (\sigma_x)_{jl} = \frac{\sqrt{s\,(s+1) - j\,(j-1)}\;\delta_{j,l+1} + \sqrt{s\,(s+1) - j\,(j+1)}\;\delta_{j,l-1}}{2\,s}, \tag{5.243} $$
$$ (\sigma_y)_{jl} = \frac{\sqrt{s\,(s+1) - j\,(j-1)}\;\delta_{j,l+1} - \sqrt{s\,(s+1) - j\,(j+1)}\;\delta_{j,l-1}}{2\,{\rm i}\,s}. \tag{5.244} $$
According to Eqs. (5.239) and (5.243)–(5.244), the Pauli matrices for a spin one-half ($s = 1/2$) particle are
$$ \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \tag{5.245} $$
$$ \sigma_y = \begin{pmatrix} 0 & -{\rm i} \\ {\rm i} & 0 \end{pmatrix}, \tag{5.246} $$
$$ \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \tag{5.247} $$
as we have seen previously. For a spin one ($s = 1$) particle, we find that
$$ \sigma_x = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \tag{5.248} $$
$$ \sigma_y = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & -{\rm i} & 0 \\ {\rm i} & 0 & -{\rm i} \\ 0 & {\rm i} & 0 \end{pmatrix}, \tag{5.249} $$
$$ \sigma_z = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix}. \tag{5.250} $$
In fact, we can now construct the Pauli matrices for a particle of any spin. This means that we can convert the general energy eigenvalue problem for a spin-$s$ particle, where the Hamiltonian is some function of position and spin operators, into $2\,s+1$ coupled partial differential equations involving the $2\,s+1$ wave-functions $\psi_{s_z}({\bf r}')$. Unfortunately, such a system of equations is generally too complicated to solve exactly.
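Equations (5.239) and (5.243)–(5.244) translate directly into a short computer routine. The following is a minimal Python sketch (assuming numpy is available) that builds the extended Pauli matrices for an arbitrary spin $s$ and checks the spin one-half and spin one cases quoted above.

```python
import numpy as np

def extended_pauli(s):
    """Extended (2s+1)x(2s+1) Pauli matrices for spin s, built from
    Eqs. (5.239) and (5.243)-(5.244).  Rows and columns are labelled by
    the S_z quantum numbers j, l = s, s-1, ..., -s (largest first)."""
    n = int(round(2 * s)) + 1
    mvals = s - np.arange(n)                        # s, s-1, ..., -s
    sigma_z = np.diag(mvals / s).astype(complex)    # Eq. (5.239)
    sigma_x = np.zeros((n, n), dtype=complex)
    sigma_y = np.zeros((n, n), dtype=complex)
    for a, j in enumerate(mvals):
        for b, l in enumerate(mvals):
            up = np.sqrt(s*(s+1) - j*(j-1)) if np.isclose(j, l + 1) else 0.0
            dn = np.sqrt(s*(s+1) - j*(j+1)) if np.isclose(j, l - 1) else 0.0
            sigma_x[a, b] = (up + dn) / (2 * s)         # Eq. (5.243)
            sigma_y[a, b] = (up - dn) / (2 * 1j * s)    # Eq. (5.244)
    return sigma_x, sigma_y, sigma_z

# Spin one-half reproduces the familiar 2x2 Pauli matrices ...
sx, sy, sz = extended_pauli(0.5)
print(np.round(sx.real, 3))          # [[0, 1], [1, 0]]

# ... while spin one reproduces Eqs. (5.248)-(5.250).  With hbar = 1 the
# matrices S_k = s*sigma_k obey the angular momentum algebra [S_x, S_y] = i S_z.
spin = 1.0
sx, sy, sz = extended_pauli(spin)
Sx, Sy, Sz = spin * sx, spin * sy, spin * sz
print(np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz))   # True
```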
5.14 Addition of angular momentum
Consider a hydrogen atom in an $l = 1$ state. The electron possesses orbital angular momentum of magnitude $\hbar$, and spin angular momentum of magnitude $\hbar/2$. So, what is the total angular momentum of the system?

In order to answer this question, we are going to have to learn how to add angular momentum operators. Let us consider the most general case. Suppose that we have two sets of angular momentum operators, ${\bf J}_1$ and ${\bf J}_2$. By definition, these operators are Hermitian, and obey the fundamental commutation relations
$$ {\bf J}_1 \times {\bf J}_1 = {\rm i}\,\hbar\,{\bf J}_1, \tag{5.251} $$
$$ {\bf J}_2 \times {\bf J}_2 = {\rm i}\,\hbar\,{\bf J}_2. \tag{5.252} $$
We assume that the two groups of operators correspond to different degrees of freedom of the system, so that
$$ [J_{1i}, J_{2j}] = 0, \tag{5.253} $$
where $i, j$ stand for either $x$, $y$, or $z$. For instance, ${\bf J}_1$ could be an orbital angular momentum operator, and ${\bf J}_2$ a spin angular momentum operator. Alternatively, ${\bf J}_1$ and ${\bf J}_2$ could be the orbital angular momentum operators of two different particles in a multi-particle system. We know, from the general properties of angular momentum, that the eigenvalues of $J_1^{\,2}$ and $J_2^{\,2}$ can be written $j_1\,(j_1+1)\,\hbar^2$ and $j_2\,(j_2+1)\,\hbar^2$, respectively, where $j_1$ and $j_2$ are either integers, or half-integers. We also know that the eigenvalues of $J_{1z}$ and $J_{2z}$ take the form $m_1\,\hbar$ and $m_2\,\hbar$, respectively, where $m_1$ and $m_2$ are numbers lying in the ranges $j_1, j_1-1, \cdots, -j_1+1, -j_1$ and $j_2, j_2-1, \cdots, -j_2+1, -j_2$, respectively.
Let us define the total angular momentum operator
$$ {\bf J} = {\bf J}_1 + {\bf J}_2. \tag{5.254} $$
Now, ${\bf J}$ is an Hermitian operator, since it is the sum of Hermitian operators. According to Eqs. (5.11) and (5.14), ${\bf J}$ satisfies the fundamental commutation relation
$$ {\bf J} \times {\bf J} = {\rm i}\,\hbar\,{\bf J}. \tag{5.255} $$
Thus, ${\bf J}$ possesses all of the expected properties of an angular momentum operator. It follows that the eigenvalue of $J^2$ can be written $j\,(j+1)\,\hbar^2$, where $j$ is an integer, or a half-integer. The eigenvalue of $J_z$ takes the form $m\,\hbar$, where $m$ lies in the range $j, j-1, \cdots, -j+1, -j$. At this stage, we do not know the relationship between the quantum numbers of the total angular momentum, $j$ and $m$, and those of the individual angular momenta, $j_1$, $j_2$, $m_1$, and $m_2$.
Now,
$$ J^2 = J_1^{\,2} + J_2^{\,2} + 2\,{\bf J}_1\cdot{\bf J}_2. \tag{5.256} $$
We know that
$$ [J_1^{\,2}, J_{1i}] = 0, \tag{5.257} $$
$$ [J_2^{\,2}, J_{2i}] = 0, \tag{5.258} $$
and also that all of the $J_{1i}$ operators commute with the $J_{2i}$ operators. It follows from Eq. (5.256) that
$$ [J^2, J_1^{\,2}] = [J^2, J_2^{\,2}] = 0. \tag{5.259} $$
This implies that the quantum numbers $j_1$, $j_2$, and $j$ can all be measured simultaneously. In other words, we can know the magnitude of the total angular momentum together with the magnitudes of the component angular momenta. However, it is clear from Eq. (5.256) that
$$ [J^2, J_{1z}] \neq 0, \tag{5.260} $$
$$ [J^2, J_{2z}] \neq 0. \tag{5.261} $$
This suggests that it is not possible to measure the quantum numbers $m_1$ and $m_2$ simultaneously with the quantum number $j$. Thus, we cannot determine the projections of the individual angular momenta along the $z$-axis at the same time as the magnitude of the total angular momentum.
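The compatibility statements (5.259)–(5.261) are easy to confirm numerically for the simplest case of two spin one-half systems. A minimal Python sketch (assuming numpy; $\hbar = 1$, with ${\bf J}_1$ and ${\bf J}_2$ represented as Kronecker products on the four-dimensional product space):

```python
import numpy as np

# Spin-1/2 angular momentum matrices (hbar = 1).
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

# J1 acts on the first factor of the product space, J2 on the second.
J1 = [np.kron(s, I2) for s in (sx, sy, sz)]
J2 = [np.kron(I2, s) for s in (sx, sy, sz)]
J  = [a + b for a, b in zip(J1, J2)]

def sq(ops):
    return sum(o @ o for o in ops)

def commutes(A, B):
    return np.allclose(A @ B - B @ A, 0)

J_sq, J1_sq = sq(J), sq(J1)

print(commutes(J_sq, J1_sq))   # True  -> Eq. (5.259)
print(commutes(J_sq, J1[2]))   # False -> Eq. (5.260)
print(commutes(J_sq, J[2]))    # True  -> J^2 and J_z are compatible
```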
It is clear, from the preceding discussion, that we can form two alternate groups of mutually commuting operators. The first group is $J_1^{\,2}$, $J_2^{\,2}$, $J_{1z}$, and $J_{2z}$. The second group is $J_1^{\,2}$, $J_2^{\,2}$, $J^2$, and $J_z$. These two groups of operators are incompatible with one another. We can define simultaneous eigenkets of each operator group. The simultaneous eigenkets of $J_1^{\,2}$, $J_2^{\,2}$, $J_{1z}$, and $J_{2z}$ are denoted $|j_1, j_2; m_1, m_2\rangle$, where
$$ J_1^{\,2}\,|j_1, j_2; m_1, m_2\rangle = j_1\,(j_1+1)\,\hbar^2\,|j_1, j_2; m_1, m_2\rangle, \tag{5.262} $$
$$ J_2^{\,2}\,|j_1, j_2; m_1, m_2\rangle = j_2\,(j_2+1)\,\hbar^2\,|j_1, j_2; m_1, m_2\rangle, \tag{5.263} $$
$$ J_{1z}\,|j_1, j_2; m_1, m_2\rangle = m_1\,\hbar\,|j_1, j_2; m_1, m_2\rangle, \tag{5.264} $$
$$ J_{2z}\,|j_1, j_2; m_1, m_2\rangle = m_2\,\hbar\,|j_1, j_2; m_1, m_2\rangle. \tag{5.265} $$
The simultaneous eigenkets of $J_1^{\,2}$, $J_2^{\,2}$, $J^2$, and $J_z$ are denoted $|j_1, j_2; j, m\rangle$, where
$$ J_1^{\,2}\,|j_1, j_2; j, m\rangle = j_1\,(j_1+1)\,\hbar^2\,|j_1, j_2; j, m\rangle, \tag{5.266} $$
$$ J_2^{\,2}\,|j_1, j_2; j, m\rangle = j_2\,(j_2+1)\,\hbar^2\,|j_1, j_2; j, m\rangle, \tag{5.267} $$
$$ J^2\,|j_1, j_2; j, m\rangle = j\,(j+1)\,\hbar^2\,|j_1, j_2; j, m\rangle, \tag{5.268} $$
$$ J_z\,|j_1, j_2; j, m\rangle = m\,\hbar\,|j_1, j_2; j, m\rangle. \tag{5.269} $$
Each set of eigenkets is complete and mutually orthogonal (for eigenkets corresponding to different sets of eigenvalues), and each eigenket has unit norm. Since the operators $J_1^{\,2}$ and $J_2^{\,2}$ are common to both operator groups, we can assume that the quantum numbers $j_1$ and $j_2$ are known. In other words, we can always determine the magnitudes of the individual angular momenta. In addition, we can either know the quantum numbers $m_1$ and $m_2$, or the quantum numbers $j$ and $m$, but we cannot know both pairs of quantum numbers at the same time. We can write a conventional completeness relation for both sets of eigenkets:
$$ \sum_{m_1}\sum_{m_2} |j_1, j_2; m_1, m_2\rangle\langle j_1, j_2; m_1, m_2| = 1, \tag{5.270} $$
$$ \sum_{j}\sum_{m} |j_1, j_2; j, m\rangle\langle j_1, j_2; j, m| = 1, \tag{5.271} $$
where the right-hand sides denote the identity operator in the ket space corresponding to states of given $j_1$ and $j_2$. The summation is over all allowed values of $m_1$, $m_2$, $j$, and $m$.
The operator group $J_1^{\,2}$, $J_2^{\,2}$, $J^2$, and $J_z$ is incompatible with the group $J_1^{\,2}$, $J_2^{\,2}$, $J_{1z}$, and $J_{2z}$. This means that if the system is in a simultaneous eigenstate of the former group then, in general, it is not in an eigenstate of the latter. In other words, if the quantum numbers $j_1$, $j_2$, $j$, and $m$ are known with certainty, then a measurement of the quantum numbers $m_1$ and $m_2$ will give a range of possible values. We can use the completeness relation (5.270) to write
$$ |j_1, j_2; j, m\rangle = \sum_{m_1}\sum_{m_2} \langle j_1, j_2; m_1, m_2|j_1, j_2; j, m\rangle\,|j_1, j_2; m_1, m_2\rangle. \tag{5.272} $$
Thus, we can write the eigenkets of the first group of operators as a weighted sum of the eigenkets of the second set. The weights, $\langle j_1, j_2; m_1, m_2|j_1, j_2; j, m\rangle$, are called the Clebsch-Gordon coefficients. If the system is in a state where a measurement of $J_1^{\,2}$, $J_2^{\,2}$, $J^2$, and $J_z$ is bound to give the results $j_1\,(j_1+1)\,\hbar^2$, $j_2\,(j_2+1)\,\hbar^2$, $j\,(j+1)\,\hbar^2$, and $m\,\hbar$, respectively, then a measurement of $J_{1z}$ and $J_{2z}$ will give the results $m_1\,\hbar$ and $m_2\,\hbar$ with probability $|\langle j_1, j_2; m_1, m_2|j_1, j_2; j, m\rangle|^2$.
The Clebsch-Gordon coefficients possess a number of very important properties. First, the coefficients are zero unless
$$ m = m_1 + m_2. \tag{5.273} $$
To prove this, we note that
$$ (J_z - J_{1z} - J_{2z})\,|j_1, j_2; j, m\rangle = 0. \tag{5.274} $$
Forming the inner product with $\langle j_1, j_2; m_1, m_2|$, we obtain
$$ (m - m_1 - m_2)\,\langle j_1, j_2; m_1, m_2|j_1, j_2; j, m\rangle = 0, \tag{5.275} $$
which proves the assertion. Thus, the $z$-components of different angular momenta add algebraically. So, an electron in an $l = 1$ state, with orbital angular momentum $\hbar$, and spin angular momentum $\hbar/2$, projected along the $z$-axis, constitutes a state whose total angular momentum projected along the $z$-axis is $3\,\hbar/2$. What is uncertain is the magnitude of the total angular momentum.
Second, the coefficients vanish unless
$$ |j_1 - j_2| \leq j \leq j_1 + j_2. \tag{5.276} $$
We can assume, without loss of generality, that $j_1 \geq j_2$. We know, from Eq. (5.273), that for given $j_1$ and $j_2$ the largest possible value of $m$ is $j_1 + j_2$ (since $j_1$ is the largest possible value of $m_1$, etc.). This implies that the largest possible value of $j$ is $j_1 + j_2$ (since, by definition, the largest value of $m$ is equal to $j$). Now, there are $(2\,j_1+1)$ allowable values of $m_1$ and $(2\,j_2+1)$ allowable values of $m_2$. Thus, there are $(2\,j_1+1)\,(2\,j_2+1)$ independent eigenkets, $|j_1, j_2; m_1, m_2\rangle$, needed to span the ket space corresponding to fixed $j_1$ and $j_2$. Since the eigenkets $|j_1, j_2; j, m\rangle$ span the same space, they must also form a set of $(2\,j_1+1)\,(2\,j_2+1)$ independent kets. In other words, there can only be $(2\,j_1+1)\,(2\,j_2+1)$ distinct allowable values of the quantum numbers $j$ and $m$. For each allowed value of $j$, there are $2\,j+1$ allowed values of $m$. We have already seen that the maximum allowed value of $j$ is $j_1 + j_2$. It is easily seen that if the minimum allowed value of $j$ is $j_1 - j_2$ then the total number of allowed values of $j$ and $m$ is $(2\,j_1+1)\,(2\,j_2+1)$: i.e.,
$$ \sum_{j=j_1-j_2}^{j_1+j_2} (2\,j+1) \equiv (2\,j_1+1)\,(2\,j_2+1). \tag{5.277} $$
This proves our assertion.
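The counting argument can be checked by brute force for any particular $j_1$ and $j_2$. A minimal Python sketch (exact rational arithmetic is used to avoid half-integer rounding issues; the values $j_1 = 3/2$, $j_2 = 1$ are merely illustrative):

```python
from fractions import Fraction as F

def allowed_j(j1, j2):
    """Allowed total-j values |j1-j2|, ..., j1+j2 in integer steps."""
    j1, j2 = F(j1), F(j2)
    lo, hi = abs(j1 - j2), j1 + j2
    return [lo + k for k in range(int(hi - lo) + 1)]

# Check the counting identity (5.277), e.g. for j1 = 3/2, j2 = 1.
j1, j2 = F(3, 2), F(1)
lhs = sum(2 * j + 1 for j in allowed_j(j1, j2))
rhs = (2 * j1 + 1) * (2 * j2 + 1)
print([str(j) for j in allowed_j(j1, j2)], lhs == rhs)   # ['1/2', '3/2', '5/2'] True
```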
Third, the sum of the modulus squared of all of the Clebsch-Gordon coefficients is unity: i.e.,
$$ \sum_{m_1}\sum_{m_2} |\langle j_1, j_2; m_1, m_2|j_1, j_2; j, m\rangle|^2 = 1. \tag{5.278} $$
This assertion is proved as follows:
$$ \langle j_1, j_2; j, m|j_1, j_2; j, m\rangle = \sum_{m_1}\sum_{m_2} \langle j_1, j_2; j, m|j_1, j_2; m_1, m_2\rangle\langle j_1, j_2; m_1, m_2|j_1, j_2; j, m\rangle = \sum_{m_1}\sum_{m_2} |\langle j_1, j_2; m_1, m_2|j_1, j_2; j, m\rangle|^2 = 1, \tag{5.279} $$
where use has been made of the completeness relation (5.270).
Finally, the Clebsch-Gordon coefficients obey two recursion relations. To obtain these relations, we start from
$$ J^{\pm}\,|j_1, j_2; j, m\rangle = (J_1^{\,\pm} + J_2^{\,\pm}) \sum_{m_1'}\sum_{m_2'} \langle j_1, j_2; m_1', m_2'|j_1, j_2; j, m\rangle\,|j_1, j_2; m_1', m_2'\rangle. \tag{5.280} $$
Making use of the well-known properties of the shift operators, which are specified by Eqs. (5.55)–(5.56), we obtain
$$ \sqrt{j\,(j+1) - m\,(m\pm 1)}\;|j_1, j_2; j, m\pm 1\rangle = \sum_{m_1'}\sum_{m_2'} \Big[ \sqrt{j_1\,(j_1+1) - m_1'\,(m_1'\pm 1)}\;|j_1, j_2; m_1'\pm 1, m_2'\rangle + \sqrt{j_2\,(j_2+1) - m_2'\,(m_2'\pm 1)}\;|j_1, j_2; m_1', m_2'\pm 1\rangle \Big] \langle j_1, j_2; m_1', m_2'|j_1, j_2; j, m\rangle. \tag{5.281} $$
Taking the inner product with $\langle j_1, j_2; m_1, m_2|$, and making use of the orthonormality property of the basis eigenkets, we obtain the desired recursion relations:
$$ \sqrt{j\,(j+1) - m\,(m\pm 1)}\;\langle j_1, j_2; m_1, m_2|j_1, j_2; j, m\pm 1\rangle = \sqrt{j_1\,(j_1+1) - m_1\,(m_1\mp 1)}\;\langle j_1, j_2; m_1\mp 1, m_2|j_1, j_2; j, m\rangle + \sqrt{j_2\,(j_2+1) - m_2\,(m_2\mp 1)}\;\langle j_1, j_2; m_1, m_2\mp 1|j_1, j_2; j, m\rangle. \tag{5.282} $$
It is clear, from the absence of complex coupling coefficients in the above relations, that we can always choose the Clebsch-Gordon coefficients to be real numbers. This is a convenient choice, since it ensures that the inverse Clebsch-Gordon coefficients, $\langle j_1, j_2; j, m|j_1, j_2; m_1, m_2\rangle$, are identical to the Clebsch-Gordon coefficients. In other words,
$$ \langle j_1, j_2; j, m|j_1, j_2; m_1, m_2\rangle = \langle j_1, j_2; m_1, m_2|j_1, j_2; j, m\rangle. \tag{5.283} $$
The inverse Clebsch-Gordon coefficients are the weights in the expansion of the $|j_1, j_2; m_1, m_2\rangle$ in terms of the $|j_1, j_2; j, m\rangle$:
$$ |j_1, j_2; m_1, m_2\rangle = \sum_{j}\sum_{m} \langle j_1, j_2; j, m|j_1, j_2; m_1, m_2\rangle\,|j_1, j_2; j, m\rangle. \tag{5.284} $$
It turns out that the recursion relations (5.282), together with the normalization condition (5.278), are sufficient to completely determine the Clebsch-Gordon coefficients to within an arbitrary sign (multiplied into all of the coefficients). This sign is fixed by convention. The easiest way of demonstrating this assertion is by considering some specific examples.
Let us add the angular momentum of two spin one-half systems: e.g., two electrons at rest. So, $j_1 = j_2 = 1/2$. We know, from general principles, that $|m_1| \leq 1/2$ and $|m_2| \leq 1/2$. We also know, from Eq. (5.276), that $0 \leq j \leq 1$, where the allowed values of $j$ differ by integer amounts. It follows that either $j = 0$ or $j = 1$. Thus, two spin one-half systems can be combined to form either a spin zero system or a spin one system. It is helpful to arrange all of the possibly non-zero Clebsch-Gordon coefficients in a table:

                       j = 1    j = 1    j = 1    j = 0
   m_1     m_2         m = 1    m = 0    m = -1   m = 0
   1/2     1/2           ?        ?        ?        ?
   1/2    -1/2           ?        ?        ?        ?
  -1/2     1/2           ?        ?        ?        ?
  -1/2    -1/2           ?        ?        ?        ?
                                       (j_1 = 1/2, j_2 = 1/2)

The box in this table corresponding to $m_1 = 1/2$, $m_2 = 1/2$, $j = 1$, $m = 1$ gives the Clebsch-Gordon coefficient $\langle 1/2, 1/2; 1/2, 1/2|1/2, 1/2; 1, 1\rangle$, or the inverse Clebsch-Gordon coefficient $\langle 1/2, 1/2; 1, 1|1/2, 1/2; 1/2, 1/2\rangle$. All the boxes contain question marks because we do not know any Clebsch-Gordon coefficients at the moment.
A Clebsch-Gordon coefficient is automatically zero unless $m_1 + m_2 = m$. In other words, the $z$-components of angular momentum have to add algebraically. Many of the boxes in the above table correspond to $m_1 + m_2 \neq m$. We immediately conclude that these boxes must contain zeroes: i.e.,

                       j = 1    j = 1    j = 1    j = 0
   m_1     m_2         m = 1    m = 0    m = -1   m = 0
   1/2     1/2           ?        0        0        0
   1/2    -1/2           0        ?        0        ?
  -1/2     1/2           0        ?        0        ?
  -1/2    -1/2           0        0        ?        0
                                       (j_1 = 1/2, j_2 = 1/2)

The normalization condition (5.278) implies that the sum of the squares of each row and of each column of the above table must be unity. There are two rows and two columns which only contain a single non-zero entry. We conclude that these entries must be $\pm 1$, but we have no way of determining the signs at present. Thus,

                       j = 1    j = 1    j = 1    j = 0
   m_1     m_2         m = 1    m = 0    m = -1   m = 0
   1/2     1/2          ±1        0        0        0
   1/2    -1/2           0        ?        0        ?
  -1/2     1/2           0        ?        0        ?
  -1/2    -1/2           0        0       ±1        0
                                       (j_1 = 1/2, j_2 = 1/2)
Let us evaluate the recursion relation (5.282) for $j_1 = j_2 = 1/2$, with $j = 1$, $m = 0$, $m_1 = m_2 = \pm 1/2$, taking the upper/lower sign. We find that
$$ \langle 1/2, -1/2|1, 0\rangle + \langle -1/2, 1/2|1, 0\rangle = \sqrt{2}\,\langle 1/2, 1/2|1, 1\rangle = \pm\sqrt{2}, \tag{5.285} $$
and
$$ \langle 1/2, -1/2|1, 0\rangle + \langle -1/2, 1/2|1, 0\rangle = \sqrt{2}\,\langle -1/2, -1/2|1, -1\rangle = \pm\sqrt{2}. \tag{5.286} $$
Here, the $j_1$ and $j_2$ labels have been suppressed for ease of notation. We also know that
$$ \langle 1/2, -1/2|1, 0\rangle^2 + \langle -1/2, 1/2|1, 0\rangle^2 = 1, \tag{5.287} $$
from the normalization condition. The only real solutions to the above set of equations are
$$ \sqrt{2}\,\langle 1/2, -1/2|1, 0\rangle = \sqrt{2}\,\langle -1/2, 1/2|1, 0\rangle = \langle 1/2, 1/2|1, 1\rangle = \langle -1/2, -1/2|1, -1\rangle = \pm 1. \tag{5.288} $$
The choice of sign is arbitrary; the conventional choice is a positive sign. Thus,
our table now reads

                       j = 1    j = 1    j = 1    j = 0
   m_1     m_2         m = 1    m = 0    m = -1   m = 0
   1/2     1/2           1        0        0        0
   1/2    -1/2           0      1/√2       0        ?
  -1/2     1/2           0      1/√2       0        ?
  -1/2    -1/2           0        0        1        0
                                       (j_1 = 1/2, j_2 = 1/2)
We could fill in the remaining unknown entries of our table by using the recursion relation again. However, an easier method is to observe that the rows and columns of the table must all be mutually orthogonal. That is, the dot product of a row with any other row must be zero. Likewise, for the dot product of a column with any other column. This follows because the entries in the table give the expansion coefficients of one of our alternative sets of eigenkets in terms of the other set, and each set of eigenkets contains mutually orthogonal vectors with unit norms. The normalization condition tells us that the dot product of a row or column with itself must be unity. The only way that the dot product of the fourth column with the second column can be zero is if the unknown entries are equal and opposite. The requirement that the dot product of the fourth column with itself is unity tells us that the magnitudes of the unknown entries have to be $1/\sqrt{2}$. The unknown entries are thus determined to within an arbitrary sign multiplied into them both. The final form of our table (with the conventional choice of arbitrary signs) is

                       j = 1    j = 1    j = 1    j = 0
   m_1     m_2         m = 1    m = 0    m = -1   m = 0
   1/2     1/2           1        0        0        0
   1/2    -1/2           0      1/√2       0      1/√2
  -1/2     1/2           0      1/√2       0     -1/√2
  -1/2    -1/2           0        0        1        0
                                       (j_1 = 1/2, j_2 = 1/2)
The table can be read in one of two ways. The columns give the expansions of the eigenstates of overall angular momentum in terms of the eigenstates of the individual angular momenta of the two component systems. Thus, the second column tells us that
$$ |1, 0\rangle = \frac{1}{\sqrt{2}}\left(\,|1/2, -1/2\rangle + |-1/2, 1/2\rangle\,\right). \tag{5.289} $$
The ket on the left-hand side is a $|j, m\rangle$ ket, whereas those on the right-hand side are $|m_1, m_2\rangle$ kets. The rows give the expansions of the eigenstates of individual angular momentum in terms of those of overall angular momentum. Thus, the second row tells us that
$$ |1/2, -1/2\rangle = \frac{1}{\sqrt{2}}\left(\,|1, 0\rangle + |0, 0\rangle\,\right). \tag{5.290} $$
Here, the ket on the left-hand side is a $|m_1, m_2\rangle$ ket, whereas those on the right-hand side are $|j, m\rangle$ kets.
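For this simple case the whole table can also be recovered numerically, by simultaneously diagonalizing $J^2$ and $J_z$ on the four-dimensional product space. A minimal Python sketch (assuming numpy; the overall signs of the computed eigenvectors are arbitrary, so they agree with the table only up to the conventional sign choice):

```python
import numpy as np

# Spin-1/2 matrices (hbar = 1); single-particle basis ordered |+>, |->.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

# Total angular momentum on the product basis |m1, m2>, ordered
# |++>, |+->, |-+>, |-->.
J = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]
J_sq = sum(Jk @ Jk for Jk in J)

# J^2 and J_z commute; adding a tiny multiple of J_z to J^2 separates all
# four simultaneous eigenstates so that eigh returns them individually.
vals, vecs = np.linalg.eigh(J_sq + 1e-3 * J[2])

for k in range(4):
    v = vecs[:, k]
    j = 0.5 * (-1 + np.sqrt(1 + 4 * np.real(v.conj() @ J_sq @ v)))
    m = np.real(v.conj() @ J[2] @ v)
    print(f"j = {j:.1f}, m = {m:+.1f}:", np.round(v.real, 3))
# The j = 1, m = 0 and j = 0, m = 0 rows reproduce the 1/sqrt(2) entries
# of the second and fourth columns of the table (up to overall sign).
```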
Note that our table is really a combination of two sub-tables, one involving $j = 0$ states, and one involving $j = 1$ states. The Clebsch-Gordon coefficients corresponding to two different choices of $j$ are completely independent: i.e., there is no recursion relation linking Clebsch-Gordon coefficients corresponding to different values of $j$. Thus, for every choice of $j_1$, $j_2$, and $j$ we can construct a table of Clebsch-Gordon coefficients corresponding to the different allowed values of $m_1$, $m_2$, and $m$ (subject to the constraint that $m_1 + m_2 = m$). A complete knowledge of angular momentum addition is equivalent to knowing all possible tables of Clebsch-Gordon coefficients. These tables are listed (for moderate values of $j_1$, $j_2$, and $j$) in many standard reference books.
6 Approximation methods
6.1 Introduction
We have developed techniques by which the general energy eigenvalue problem
can be reduced to a set of coupled partial differential equations involving various
wave-functions. Unfortunately, the number of such problems which yield exactly
soluble equations is comparatively small. Clearly, we need to develop some tech-
niques for finding approximate solutions to otherwise intractable problems.
Consider the following problem, which is very common. The Hamiltonian of a system is written
$$ H = H_0 + H_1. \tag{6.1} $$
Here, $H_0$ is a simple Hamiltonian for which we know the exact eigenvalues and eigenstates. $H_1$ introduces some interesting additional physics into the problem, but it is sufficiently complicated that when we add it to $H_0$ we can no longer find the exact energy eigenvalues and eigenstates. However, $H_1$ can, in some sense (which we shall specify more exactly later on), be regarded as being small compared to $H_0$. Can we find the approximate eigenvalues and eigenstates of the modified Hamiltonian, $H_0 + H_1$, by performing some sort of perturbation analysis about the eigenvalues and eigenstates of the original Hamiltonian, $H_0$? Let us investigate.
6.2 The two-state system
Let us begin by considering time-independent perturbation theory, in which the modification to the Hamiltonian, $H_1$, has no explicit dependence on time. It is usually assumed that the unperturbed Hamiltonian, $H_0$, is also time-independent. Consider the simplest non-trivial system, in which there are only two independent eigenkets of the unperturbed Hamiltonian. These are denoted
$$ H_0\,|1\rangle = E_1\,|1\rangle, \tag{6.2} $$
$$ H_0\,|2\rangle = E_2\,|2\rangle. \tag{6.3} $$
It is assumed that these states, and their associated eigenvalues, are known. Since $H_0$ is, by definition, an Hermitian operator, its two eigenkets are orthonormal and form a complete set. The lengths of these eigenkets are both normalized to unity. Let us now try to solve the modified energy eigenvalue problem
$$ (H_0 + H_1)\,|E\rangle = E\,|E\rangle. \tag{6.4} $$
In fact, we can solve this problem exactly. Since the eigenkets of $H_0$ form a complete set, we can write
$$ |E\rangle = \langle 1|E\rangle\,|1\rangle + \langle 2|E\rangle\,|2\rangle. \tag{6.5} $$
Left-multiplication of Eq. (6.4) by $\langle 1|$ and $\langle 2|$ yields two coupled equations, which can be written in matrix form:
$$ \begin{pmatrix} E_1 - E + e_{11} & e_{12} \\ e_{12}^{\,*} & E_2 - E + e_{22} \end{pmatrix} \begin{pmatrix} \langle 1|E\rangle \\ \langle 2|E\rangle \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \tag{6.6} $$
Here,
$$ e_{11} = \langle 1|H_1|1\rangle, \tag{6.7} $$
$$ e_{22} = \langle 2|H_1|2\rangle, \tag{6.8} $$
$$ e_{12} = \langle 1|H_1|2\rangle. \tag{6.9} $$
In the special (but common) case of a perturbing Hamiltonian whose diagonal matrix elements (in the unperturbed eigenstates) are zero, so that
$$ e_{11} = e_{22} = 0, \tag{6.10} $$
the solution of Eq. (6.6) (obtained by setting the determinant of the matrix equal to zero) is
$$ E = \frac{(E_1 + E_2) \pm \sqrt{(E_1 - E_2)^2 + 4\,|e_{12}|^2}}{2}. \tag{6.11} $$
Let us expand in the supposedly small parameter
$$ \epsilon = \frac{|e_{12}|}{|E_1 - E_2|}. \tag{6.12} $$
We obtain
$$ E \simeq \frac{1}{2}\,(E_1 + E_2) \pm \frac{1}{2}\,(E_1 - E_2)\,(1 + 2\,\epsilon^2 + \cdots). \tag{6.13} $$
The above expression yields the modifications to the energy eigenvalues due to the perturbing Hamiltonian:
$$ E_1' = E_1 + \frac{|e_{12}|^2}{E_1 - E_2} + \cdots, \tag{6.14} $$
$$ E_2' = E_2 - \frac{|e_{12}|^2}{E_1 - E_2} + \cdots. \tag{6.15} $$
Note that $H_1$ causes the upper eigenvalue to rise, and the lower eigenvalue to fall. It is easily demonstrated that the modified eigenkets take the form
$$ |1\rangle' = |1\rangle + \frac{e_{12}^{\,*}}{E_1 - E_2}\,|2\rangle + \cdots, \tag{6.16} $$
$$ |2\rangle' = |2\rangle - \frac{e_{12}}{E_1 - E_2}\,|1\rangle + \cdots. \tag{6.17} $$
Thus, the modified energy eigenstates consist of one of the unperturbed eigenstates with a slight admixture of the other. Note that the series expansion in Eq. (6.13) only converges if $2\,|\epsilon| < 1$. This suggests that the condition for the validity of the perturbation expansion is
$$ |e_{12}| < \frac{|E_1 - E_2|}{2}. \tag{6.18} $$
In other words, when we say that $H_1$ needs to be small compared to $H_0$, what we really mean is that the above inequality needs to be satisfied.
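A quick numerical check of Eqs. (6.14)–(6.15) is straightforward. A minimal Python sketch (assuming numpy; the energies and coupling are illustrative numbers chosen so that $\epsilon = 0.1$) compares the exact eigenvalues of the matrix in Eq. (6.6), with $e_{11} = e_{22} = 0$, against the second-order formulas:

```python
import numpy as np

# Two-state system: unperturbed energies E1, E2 and an off-diagonal coupling e12.
E1, E2, e12 = 1.0, 2.0, 0.1

H = np.array([[E1, e12],
              [np.conj(e12), E2]])

exact = np.linalg.eigvalsh(H)                      # exact roots of Eq. (6.6)
pert = np.array([E1 + abs(e12)**2 / (E1 - E2),     # Eq. (6.14)
                 E2 - abs(e12)**2 / (E1 - E2)])    # Eq. (6.15)

print(np.sort(exact))   # approx [0.9901, 2.0099]
print(np.sort(pert))    # approx [0.99,   2.01  ] -- upper level rises, lower falls
```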
6.3 Non-degenerate perturbation theory
Let us now generalize our perturbation analysis to deal with systems possessing more than two energy eigenstates. The energy eigenstates of the unperturbed Hamiltonian, $H_0$, are denoted
$$ H_0\,|n\rangle = E_n\,|n\rangle, \tag{6.19} $$
where $n$ runs from 1 to $N$. The eigenkets $|n\rangle$ are orthonormal, form a complete set, and have their lengths normalized to unity. Let us now try to solve the energy eigenvalue problem for the perturbed Hamiltonian:
$$ (H_0 + H_1)\,|E\rangle = E\,|E\rangle. \tag{6.20} $$
We can express $|E\rangle$ as a linear superposition of the unperturbed energy eigenkets,
$$ |E\rangle = \sum_k \langle k|E\rangle\,|k\rangle, \tag{6.21} $$
where the summation is from $k = 1$ to $N$. Substituting the above equation into Eq. (6.20), and left-multiplying by $\langle m|$, we obtain
$$ (E_m + e_{mm} - E)\,\langle m|E\rangle + \sum_{k \neq m} e_{mk}\,\langle k|E\rangle = 0, \tag{6.22} $$
where
$$ e_{mk} = \langle m|H_1|k\rangle. \tag{6.23} $$
Let us now develop our perturbation expansion. We assume that
$$ \frac{|e_{mk}|}{E_m - E_k} \sim {\cal O}(\epsilon), \tag{6.24} $$
for all $m \neq k$, where $\epsilon \ll 1$ is our expansion parameter. We also assume that
$$ \frac{|e_{mm}|}{E_m} \sim {\cal O}(\epsilon), \tag{6.25} $$
for all $m$. Let us search for a modified version of the $n$th unperturbed energy eigenstate, for which
$$ E = E_n + {\cal O}(\epsilon), \tag{6.26} $$
and
$$ \langle n|E\rangle = 1, \tag{6.27} $$
$$ \langle m|E\rangle \sim {\cal O}(\epsilon), \tag{6.28} $$
for $m \neq n$. Suppose that we write out Eq. (6.22) for $m \neq n$, neglecting terms which are ${\cal O}(\epsilon^2)$ according to our expansion scheme. We find that
$$ (E_m - E_n)\,\langle m|E\rangle + e_{mn} \simeq 0, \tag{6.29} $$
giving
$$ \langle m|E\rangle \simeq -\frac{e_{mn}}{E_m - E_n}. \tag{6.30} $$
Substituting the above expression into Eq. (6.22), evaluated for $m = n$, and neglecting ${\cal O}(\epsilon^3)$ terms, we obtain
$$ (E_n + e_{nn} - E) - \sum_{k \neq n} \frac{|e_{nk}|^2}{E_k - E_n} = 0. \tag{6.31} $$
Thus, the modified $n$th energy eigenstate possesses an eigenvalue
$$ E_n' = E_n + e_{nn} + \sum_{k \neq n} \frac{|e_{nk}|^2}{E_n - E_k} + {\cal O}(\epsilon^3), \tag{6.32} $$
and an eigenket
$$ |n\rangle' = |n\rangle + \sum_{k \neq n} \frac{e_{kn}}{E_n - E_k}\,|k\rangle + {\cal O}(\epsilon^2). \tag{6.33} $$
Note that
$$ \langle m|'\,|n\rangle' = \delta_{mn} + \frac{e_{nm}^{\,*}}{E_m - E_n} + \frac{e_{mn}}{E_n - E_m} + {\cal O}(\epsilon^2) = \delta_{mn} + {\cal O}(\epsilon^2). \tag{6.34} $$
Thus, the modified eigenkets remain orthonormal and properly normalized to ${\cal O}(\epsilon^2)$.
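Equation (6.32) is easily checked against exact diagonalization. A minimal Python sketch (assuming numpy; a random Hermitian perturbation of strength $\sim 0.05$ acting on four well-separated levels, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Non-degenerate unperturbed energies and a small real-symmetric perturbation.
E = np.array([1.0, 2.0, 3.5, 5.0])
V = 0.05 * rng.standard_normal((4, 4))
V = (V + V.T) / 2                        # e_mk = <m|H1|k>, Hermitian (real here)

H = np.diag(E) + V

# Second-order energies, Eq. (6.32): E_n' = E_n + e_nn + sum_k |e_nk|^2/(E_n - E_k).
E2 = E + np.diag(V)
for n in range(len(E)):
    for k in range(len(E)):
        if k != n:
            E2[n] += abs(V[n, k])**2 / (E[n] - E[k])

print(np.linalg.eigvalsh(H))   # exact eigenvalues
print(np.sort(E2))             # agrees with the above to O(epsilon^3)
```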
6.4 The quadratic Stark effect
Suppose that a one-electron atom [i.e., either a hydrogen atom, or an alkali metal atom (which possesses one valence electron orbiting outside a closed, spherically symmetric shell)] is subjected to a uniform electric field in the positive $z$-direction. The Hamiltonian of the system can be split into two parts: the unperturbed Hamiltonian,
$$ H_0 = \frac{p^2}{2\,m_e} + V(r), \tag{6.35} $$
and the perturbing Hamiltonian,
$$ H_1 = e\,|{\bf E}|\,z. \tag{6.36} $$
It is assumed that the unperturbed energy eigenvalues and eigenstates are completely known. The electron spin is irrelevant in this problem (since the spin operators all commute with $H_1$), so we can ignore the spin degrees of freedom of the system. This implies that the system possesses no degenerate energy eigenvalues. This is not true for the $n \neq 1$ energy levels of the hydrogen atom, due to the special properties of a pure Coulomb potential. It is necessary to deal with this case separately, because the perturbation theory presented in Sect. 6.3 breaks down for degenerate unperturbed energy levels.

An energy eigenket of the unperturbed Hamiltonian is characterized by three quantum numbers: the radial quantum number $n$, and the two angular quantum numbers $l$ and $m$ (see Sect. 5.6). Let us denote such a ket $|n, l, m\rangle$, and let its energy level be $E_{nlm}$. According to Eq. (6.32), the change in this energy level induced by a small electric field is given by
$$ \Delta E_{nlm} = e\,|{\bf E}|\,\langle n, l, m|z|n, l, m\rangle + e^2\,|{\bf E}|^2 \sum_{n', l', m' \neq n, l, m} \frac{|\langle n, l, m|z|n', l', m'\rangle|^2}{E_{nlm} - E_{n'l'm'}}. \tag{6.37} $$
Now, since
$$ L_z = x\,p_y - y\,p_x, \tag{6.38} $$
it follows that
$$ [L_z, z] = 0. \tag{6.39} $$
Thus,
$$ \langle n, l, m|[L_z, z]|n', l', m'\rangle = 0, \tag{6.40} $$
giving
$$ (m - m')\,\langle n, l, m|z|n', l', m'\rangle = 0, \tag{6.41} $$
since $|n, l, m\rangle$ is, by definition, an eigenstate of $L_z$ with eigenvalue $m\,\hbar$. It is clear, from the above relation, that the matrix element $\langle n, l, m|z|n', l', m'\rangle$ is zero unless $m' = m$. This is termed the selection rule for the quantum number $m$.
Let us now determine the selection rule for $l$. We have
$$ [L^2, z] = [L_x^{\,2}, z] + [L_y^{\,2}, z]
            = L_x\,[L_x, z] + [L_x, z]\,L_x + L_y\,[L_y, z] + [L_y, z]\,L_y
            = {\rm i}\,\hbar\,(-L_x\,y - y\,L_x + L_y\,x + x\,L_y)
            = 2\,{\rm i}\,\hbar\,(L_y\,x - L_x\,y + {\rm i}\,\hbar\,z)
            = 2\,{\rm i}\,\hbar\,(L_y\,x - y\,L_x) = 2\,{\rm i}\,\hbar\,(x\,L_y - L_x\,y), \tag{6.42} $$
where use has been made of Eqs. (5.1)–(5.6). Similarly,
$$ [L^2, y] = 2\,{\rm i}\,\hbar\,(L_x\,z - x\,L_z), \tag{6.43} $$
$$ [L^2, x] = 2\,{\rm i}\,\hbar\,(y\,L_z - L_y\,z). \tag{6.44} $$
Thus,
$$ [L^2, [L^2, z]] = 2\,{\rm i}\,\hbar\,\big[ L^2,\; L_y\,x - L_x\,y + {\rm i}\,\hbar\,z \big]
                   = 2\,{\rm i}\,\hbar\,\big( L_y\,[L^2, x] - L_x\,[L^2, y] + {\rm i}\,\hbar\,[L^2, z] \big)
                   = -4\,\hbar^2\,L_y\,(y\,L_z - L_y\,z) + 4\,\hbar^2\,L_x\,(L_x\,z - x\,L_z) - 2\,\hbar^2\,(L^2\,z - z\,L^2). \tag{6.45} $$
This reduces to
$$ [L^2, [L^2, z]] = -\hbar^2\,\big[\, 4\,(L_x\,x + L_y\,y + L_z\,z)\,L_z - 4\,(L_x^{\,2} + L_y^{\,2} + L_z^{\,2})\,z + 2\,(L^2\,z - z\,L^2) \,\big]. \tag{6.46} $$
However, it is clear from Eqs. (5.1)–(5.3) that
$$ L_x\,x + L_y\,y + L_z\,z = 0. \tag{6.47} $$
Hence, we obtain
$$ [L^2, [L^2, z]] = 2\,\hbar^2\,(L^2\,z + z\,L^2). \tag{6.48} $$
Finally, the above expression expands to give
$$ L^4\,z - 2\,L^2\,z\,L^2 + z\,L^4 - 2\,\hbar^2\,(L^2\,z + z\,L^2) = 0. \tag{6.49} $$
Equation (6.49) implies that
$$ \langle n, l, m|\,L^4\,z - 2\,L^2\,z\,L^2 + z\,L^4 - 2\,\hbar^2\,(L^2\,z + z\,L^2)\,|n', l', m'\rangle = 0. \tag{6.50} $$
This expression yields
$$ \big[\, l^2\,(l+1)^2 - 2\,l\,(l+1)\,l'\,(l'+1) + l'^{\,2}\,(l'+1)^2 - 2\,l\,(l+1) - 2\,l'\,(l'+1) \,\big]\,\langle n, l, m|z|n', l', m'\rangle = 0, \tag{6.51} $$
which reduces to
$$ (l + l' + 2)\,(l + l')\,(l - l' + 1)\,(l - l' - 1)\,\langle n, l, m|z|n', l', m'\rangle = 0. \tag{6.52} $$
According to the above formula, the matrix element $\langle n, l, m|z|n', l', m'\rangle$ vanishes unless $l = l' = 0$ or $l' = l \pm 1$. This matrix element can be written
$$ \langle n, l, m|z|n', l', m'\rangle = \int\!\!\!\int\!\!\!\int \psi^{\,*}_{nlm}(r', \theta', \varphi')\;r'\cos\theta'\;\psi_{n'l'm'}(r', \theta', \varphi')\,dV', \tag{6.53} $$
where $\psi_{nlm}({\bf r}') = \langle {\bf r}'|n, l, m\rangle$. Recall, however, that the wave-function of an $l = 0$ state is spherically symmetric (see Sect. 5.3): i.e., $\psi_{n00}({\bf r}') = \psi_{n00}(r')$. It follows from Eq. (6.53) that the matrix element vanishes by symmetry when $l = l' = 0$. In conclusion, the matrix element $\langle n, l, m|z|n', l', m'\rangle$ is zero unless $l' = l \pm 1$. This is the selection rule for the quantum number $l$.
Application of the selection rules to Eq. (6.37) yields
$$ \Delta E_{nlm} = e^2\,|{\bf E}|^2 \sum_{n'}\sum_{l'=l\pm 1} \frac{|\langle n, l, m|z|n', l', m\rangle|^2}{E_{nlm} - E_{n'l'm}}. \tag{6.54} $$
Note that all of the terms in Eq. (6.37) which vary linearly with the electric field-strength vanish by symmetry, according to the selection rules. Only those terms which vary quadratically with the field-strength survive. The polarizability of an atom is defined in terms of the energy-shift of the atomic state as follows:
$$ \Delta E = -\frac{1}{2}\,\alpha\,|{\bf E}|^2. \tag{6.55} $$
Consider the ground state of a hydrogen atom. (Recall that we cannot address the $n > 1$ excited states because they are degenerate, and our theory cannot handle this at present.) The polarizability of this state is given by
$$ \alpha = 2\,e^2 \sum_{n>1} \frac{|\langle 1, 0, 0|z|n, 1, 0\rangle|^2}{E_{n00} - E_{100}}. \tag{6.56} $$
Here, we have made use of the fact that $E_{n10} = E_{n00}$ for a hydrogen atom.

The sum in the above expression can be evaluated approximately by noting that [see Eq. (5.120)]
$$ E_{n00} = -\frac{e^2}{8\pi\epsilon_0\,a_0\,n^2} \tag{6.57} $$
for a hydrogen atom, where
$$ a_0 = \frac{4\pi\epsilon_0\,\hbar^2}{\mu\,e^2} = 5.3\times 10^{-11}\ {\rm meters} \tag{6.58} $$
is the Bohr radius. We can write
$$ E_{n00} - E_{100} \geq E_{200} - E_{100} = \frac{3}{4}\,\frac{e^2}{8\pi\epsilon_0\,a_0}. \tag{6.59} $$
Thus,
$$ \alpha < \frac{16}{3}\,4\pi\epsilon_0\,a_0 \sum_{n>1} |\langle 1, 0, 0|z|n, 1, 0\rangle|^2. \tag{6.60} $$
However,
$$ \sum_{n>1} |\langle 1, 0, 0|z|n, 1, 0\rangle|^2 = \sum_{n', l', m'} \langle 1, 0, 0|z|n', l', m'\rangle\langle n', l', m'|z|1, 0, 0\rangle = \langle 1, 0, 0|z^2|1, 0, 0\rangle, \tag{6.61} $$
where we have made use of the fact that the wave-functions of a hydrogen atom form a complete set. It is easily demonstrated from the actual form of the ground state wave-function that
$$ \langle 1, 0, 0|z^2|1, 0, 0\rangle = a_0^{\,2}. \tag{6.62} $$
Thus, we conclude that
$$ \alpha < \frac{16}{3}\,4\pi\epsilon_0\,a_0^{\,3} \simeq 5.3\;(4\pi\epsilon_0\,a_0^{\,3}). \tag{6.63} $$
The true result is
$$ \alpha = \frac{9}{2}\,4\pi\epsilon_0\,a_0^{\,3} = 4.5\;(4\pi\epsilon_0\,a_0^{\,3}). \tag{6.64} $$
It is actually possible to obtain this answer, without recourse to perturbation theory, by solving Schrödinger's equation exactly in parabolic coordinates.
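The numbers quoted above are simple to verify. A minimal Python sketch (working in units in which $a_0 = 1$, and using a hand-written trapezoidal quadrature) checks Eq. (6.62) and then compares the coefficient in the bound (6.63) with the exact coefficient in Eq. (6.64):

```python
import numpy as np

a0 = 1.0                                     # work in units of the Bohr radius
r = np.linspace(0.0, 40.0 * a0, 20_001)

# Hydrogen ground-state radial function R_10(r) = 2 a0^(-3/2) exp(-r/a0).
R10 = 2.0 * a0**-1.5 * np.exp(-r / a0)

def integrate(f, x):
    """Plain trapezoidal rule (keeps the sketch independent of numpy's API)."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

norm = integrate(R10**2 * r**2, r)           # should be 1
r2   = integrate(R10**2 * r**4, r)           # <r^2> = 3 a0^2
z2   = r2 / 3.0                              # <z^2> = <r^2>/3 by spherical symmetry
print(norm, z2)                              # ~1.0  ~1.0, i.e. Eq. (6.62)

# Bound (6.63) versus exact polarizability (6.64), in units of 4*pi*eps0*a0^3.
print(16.0 / 3.0, 9.0 / 2.0)                 # 5.333...  4.5
```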
6.5 Degenerate perturbation theory
Let us now consider systems in which the eigenstates of the unperturbed Hamiltonian, $H_0$, possess degenerate energy levels. It is always possible to represent degenerate energy eigenstates as the simultaneous eigenstates of the Hamiltonian and some other Hermitian operator (or group of operators). Let us denote this operator (or group of operators) $L$. We can write
$$ H_0\,|n, l\rangle = E_n\,|n, l\rangle, \tag{6.65} $$
and
$$ L\,|n, l\rangle = L_{nl}\,|n, l\rangle, \tag{6.66} $$
where $[H_0, L] = 0$. Here, the $E_n$ and the $L_{nl}$ are real numbers which depend on the quantum numbers $n$, and $n$ and $l$, respectively. It is always possible to find a sufficient number of operators which commute with the Hamiltonian in order to ensure that the $L_{nl}$ are all different. In other words, we can choose $L$ such that the quantum numbers $n$ and $l$ uniquely specify each eigenstate. Suppose that for each value of $n$ there are $N_n$ different values of $l$: i.e., the $n$th energy eigenstate is $N_n$-fold degenerate.
In general, $L$ does not commute with the perturbing Hamiltonian, $H_1$. This implies that the modified energy eigenstates are not eigenstates of $L$. In this situation, we expect the perturbation to split the degeneracy of the energy levels, so that each modified eigenstate $|n, l\rangle'$ acquires a unique energy eigenvalue $E_{nl}'$. Let us naively attempt to use the standard perturbation theory of Sect. 6.3 to evaluate the modified energy eigenstates and energy levels. A direct generalization of Eqs. (6.32) and (6.33) yields
$$ E_{nl}' = E_n + e_{nlnl} + \sum_{n', l' \neq n, l} \frac{|e_{n'l'nl}|^2}{E_n - E_{n'}} + {\cal O}(\epsilon^3), \tag{6.67} $$
and
$$ |n, l\rangle' = |n, l\rangle + \sum_{n', l' \neq n, l} \frac{e_{n'l'nl}}{E_n - E_{n'}}\,|n', l'\rangle + {\cal O}(\epsilon^2), \tag{6.68} $$
where
$$ e_{n'l'nl} = \langle n', l'|H_1|n, l\rangle. \tag{6.69} $$
It is fairly obvious that the summations in Eqs. (6.67) and (6.68) are not well-behaved if the $n$th energy level is degenerate. The problem terms are those involving unperturbed eigenstates labeled by the same value of $n$, but different values of $l$: i.e., those states whose unperturbed energies are $E_n$. These terms give rise to singular factors $1/(E_n - E_n)$ in the summations. Note, however, that this problem would not exist if the matrix elements, $e_{nl'nl}$, of the perturbing Hamiltonian between distinct, degenerate, unperturbed energy eigenstates corresponding to the eigenvalue $E_n$ were zero. In other words, if
$$ \langle n, l'|H_1|n, l\rangle = \lambda_{nl}\,\delta_{ll'}, \tag{6.70} $$
then all of the singular terms in Eqs. (6.67) and (6.68) would vanish.

In general, Eq. (6.70) is not satisfied. Fortunately, we can always redefine the unperturbed energy eigenstates belonging to the eigenvalue $E_n$ in such a manner that Eq. (6.70) is satisfied. Let us define $N_n$ new states which are linear combinations of the $N_n$ original degenerate eigenstates corresponding to the eigenvalue $E_n$:
$$ |n, l^{(1)}\rangle = \sum_{k=1}^{N_n} \langle n, k|n, l^{(1)}\rangle\,|n, k\rangle. \tag{6.71} $$
Note that these new states are also degenerate energy eigenstates of the unperturbed Hamiltonian corresponding to the eigenvalue $E_n$. The $|n, l^{(1)}\rangle$ are chosen in such a manner that they are eigenstates of the perturbing Hamiltonian, $H_1$. Thus,
$$ H_1\,|n, l^{(1)}\rangle = \lambda_{nl}\,|n, l^{(1)}\rangle. \tag{6.72} $$
The $|n, l^{(1)}\rangle$ are also chosen so that they are orthonormal, and have unit lengths. It follows that
$$ \langle n, l'^{(1)}|H_1|n, l^{(1)}\rangle = \lambda_{nl}\,\delta_{ll'}. \tag{6.73} $$
Thus, if we use the new eigenstates, instead of the old ones, then we can employ Eqs. (6.67) and (6.68) directly, since all of the singular terms vanish. The only remaining difficulty is to determine the new eigenstates in terms of the original ones.
Now,
$$ \sum_{l=1}^{N_n} |n, l\rangle\langle n, l| = 1, \tag{6.74} $$
where 1 denotes the identity operator in the sub-space of all unperturbed energy eigenkets corresponding to the eigenvalue $E_n$. Using this completeness relation, the operator eigenvalue equation (6.72) can be transformed into a straightforward matrix eigenvalue equation:
$$ \sum_{l''=1}^{N_n} \langle n, l'|H_1|n, l''\rangle\langle n, l''|n, l^{(1)}\rangle = \lambda_{nl}\,\langle n, l'|n, l^{(1)}\rangle. \tag{6.75} $$
This can be written more transparently as
$$ U\,{\bf x} = \lambda\,{\bf x}, \tag{6.76} $$
where the elements of the $N_n \times N_n$ Hermitian matrix $U$ are
$$ U_{jk} = \langle n, j|H_1|n, k\rangle. \tag{6.77} $$
Since $U$ is an Hermitian matrix, Eq. (6.76) can always be solved to give $N_n$ real eigenvalues $\lambda_{nl}$ (for $l = 1$ to $N_n$), with $N_n$ corresponding eigenvectors ${\bf x}_{nl}$. The eigenvectors specify the weights of the new eigenstates in terms of the original eigenstates: i.e.,
$$ ({\bf x}_{nl})_k = \langle n, k|n, l^{(1)}\rangle, \tag{6.78} $$
for $k = 1$ to $N_n$. In our new scheme, Eqs. (6.67) and (6.68) yield
$$ E_{nl}' = E_n + \lambda_{nl} + \sum_{n' \neq n,\; l'} \frac{|e_{n'l'nl}|^2}{E_n - E_{n'}} + {\cal O}(\epsilon^3), \tag{6.79} $$
and
$$ |n, l^{(1)}\rangle' = |n, l^{(1)}\rangle + \sum_{n' \neq n,\; l'} \frac{e_{n'l'nl}}{E_n - E_{n'}}\,|n', l'\rangle + {\cal O}(\epsilon^2). \tag{6.80} $$
There are no singular terms in these expressions, since the summations are over $n' \neq n$: i.e., they specifically exclude the problematic, degenerate, unperturbed energy eigenstates corresponding to the eigenvalue $E_n$. Note that the first-order energy shifts are equivalent to the eigenvalues of the matrix equation (6.76).
6.6 The linear Stark effect
Let us examine the effect of an electric field on the excited energy levels of a hydrogen atom. For instance, consider the $n = 2$ states. There is a single $l = 0$ state, usually referred to as $2s$, and three $l = 1$ states (with $m = -1, 0, 1$), usually referred to as $2p$. All of these states possess the same energy, $E_{200} = -e^2/(32\pi\epsilon_0\,a_0)$. As in Sect. 6.4, the perturbing Hamiltonian is
$$ H_1 = e\,|{\bf E}|\,z. \tag{6.81} $$
In order to apply perturbation theory, we have to solve the matrix eigenvalue equation
$$ U\,{\bf x} = \lambda\,{\bf x}, \tag{6.82} $$
where $U$ is the array of the matrix elements of $H_1$ between the degenerate $2s$ and $2p$ states. Thus,
$$ U = e\,|{\bf E}| \begin{pmatrix} 0 & \langle 2, 0, 0|z|2, 1, 0\rangle & 0 & 0 \\ \langle 2, 1, 0|z|2, 0, 0\rangle & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \tag{6.83} $$
where the rows and columns correspond to the $|2, 0, 0\rangle$, $|2, 1, 0\rangle$, $|2, 1, 1\rangle$, and $|2, 1, -1\rangle$ states, respectively. Here, we have made use of the selection rules, which tell us that the matrix element of $z$ between two hydrogen atom states is zero unless the states possess the same $m$ quantum number, and $l$ quantum numbers which differ by unity. It is easily demonstrated, from the exact forms of the $2s$ and $2p$ wave-functions, that
$$ \langle 2, 0, 0|z|2, 1, 0\rangle = \langle 2, 1, 0|z|2, 0, 0\rangle = 3\,a_0. \tag{6.84} $$
It can be seen, by inspection, that the eigenvalues of $U$ are $\lambda_1 = 3\,e\,a_0\,|{\bf E}|$, $\lambda_2 = -3\,e\,a_0\,|{\bf E}|$, $\lambda_3 = 0$, and $\lambda_4 = 0$. The corresponding eigenvectors are
$$ {\bf x}_1 = \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \\ 0 \end{pmatrix}, \tag{6.85} $$
$$ {\bf x}_2 = \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \\ 0 \\ 0 \end{pmatrix}, \tag{6.86} $$
$$ {\bf x}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \tag{6.87} $$
$$ {\bf x}_4 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}. \tag{6.88} $$
It follows from Sect. 6.5 that the simultaneous eigenstates of the unperturbed Hamiltonian and the perturbing Hamiltonian take the form
$$ |1\rangle = \frac{|2, 0, 0\rangle + |2, 1, 0\rangle}{\sqrt{2}}, \tag{6.89} $$
$$ |2\rangle = \frac{|2, 0, 0\rangle - |2, 1, 0\rangle}{\sqrt{2}}, \tag{6.90} $$
$$ |3\rangle = |2, 1, 1\rangle, \tag{6.91} $$
$$ |4\rangle = |2, 1, -1\rangle. \tag{6.92} $$
In the absence of an electric field, all of these states possess the same energy, $E_{200}$. The first-order energy shifts induced by an electric field are given by
$$ \Delta E_1 = +3\,e\,a_0\,|{\bf E}|, \tag{6.93} $$
$$ \Delta E_2 = -3\,e\,a_0\,|{\bf E}|, \tag{6.94} $$
$$ \Delta E_3 = 0, \tag{6.95} $$
$$ \Delta E_4 = 0. \tag{6.96} $$
Thus, the energies of states 1 and 2 are shifted upwards and downwards, respectively, by an amount $3\,e\,a_0\,|{\bf E}|$ in the presence of an electric field. States 1 and 2 are orthogonal linear combinations of the original $2s$ and $2p(m = 0)$ states.
Note that the energy shifts are linear in the electric field-strength, so this is a much larger effect than the quadratic effect described in Sect. 6.4. The energies of states 3 and 4 (which are equivalent to the original $2p(m = 1)$ and $2p(m = -1)$ states, respectively) are not affected to first-order. Of course, to second-order the energies of these states are shifted by an amount which depends on the square of the electric field-strength.

Note that the linear Stark effect depends crucially on the degeneracy of the $2s$ and $2p$ states. This degeneracy is a special property of a pure Coulomb potential, and, therefore, only applies to a hydrogen atom. Thus, alkali metal atoms do not exhibit the linear Stark effect.
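In practice, the degenerate perturbation theory of Sect. 6.5 amounts, here, to diagonalizing the $4\times 4$ matrix (6.83). A minimal Python sketch (assuming numpy; energies measured in units of $e\,|{\bf E}|\,a_0$, and eigenvector signs arbitrary) reproduces the shifts (6.93)–(6.96):

```python
import numpy as np

# Linear Stark effect for the n = 2 level of hydrogen.
# Basis order: |2,0,0>, |2,1,0>, |2,1,1>, |2,1,-1>; units of e*|E|*a0.
z_2s2p = 3.0                               # <2,0,0|z|2,1,0> = 3 a0, Eq. (6.84)

U = np.zeros((4, 4))
U[0, 1] = U[1, 0] = z_2s2p                 # Eq. (6.83)

lam, x = np.linalg.eigh(U)
print(lam)                                 # [-3, 0, 0, 3] -> Eqs. (6.93)-(6.96)
print(np.round(x[:, lam.argmax()], 3))     # ~[0.707, 0.707, 0, 0] -> Eq. (6.85),
                                           # i.e. the (|2,0,0>+|2,1,0>)/sqrt(2) state
```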
6.7 Fine structure
Let us now consider the energy levels of hydrogen-like atoms (i.e., alkali metal atoms) in more detail. The outermost electron moves in a spherically symmetric potential $V(r)$ due to the nuclear charge and the charges of the other electrons (which occupy spherically symmetric closed shells). The shielding effect of the inner electrons causes $V(r)$ to depart from the pure Coulomb form. This splits the degeneracy of states characterized by the same value of $n$, but different values of $l$. In fact, higher $l$ states have higher energies.

Let us examine a phenomenon known as fine structure, which is due to interaction between the spin and orbital angular momenta of the outermost electron. This electron experiences an electric field
$$ {\bf E} = \frac{\nabla V}{e}. \tag{6.97} $$
However, a charge moving in an electric field also experiences an effective magnetic field
$$ {\bf B} = -{\bf v}\times{\bf E}. \tag{6.98} $$
Now, an electron possesses a spin magnetic moment [see Eq. (5.170)]
$$ \mu = -\frac{e\,{\bf S}}{m_e}. \tag{6.99} $$
We, therefore, expect a spin-orbit contribution to the Hamiltonian of the form
$$ H_{LS} = -\mu\cdot{\bf B} = -\frac{e\,{\bf S}}{m_e}\cdot{\bf v}\times\left(\frac{1}{e}\,\frac{{\bf r}}{r}\,\frac{dV}{dr}\right) = \frac{1}{m_e^{\,2}}\,\frac{1}{r}\,\frac{dV}{dr}\,{\bf L}\cdot{\bf S}, \tag{6.100} $$
where ${\bf L} = m_e\,{\bf r}\times{\bf v}$ is the orbital angular momentum. When the above expression is compared to the observed spin-orbit interaction, it is found to be too large by a factor of two. There is a classical explanation for this, due to spin precession, which we need not go into. The correct quantum mechanical explanation requires a relativistically covariant treatment of electron dynamics (this is achieved using the so-called Dirac equation).

Let us now apply perturbation theory to a hydrogen-like atom, using $H_{LS}$ as the perturbation (with $H_{LS}$ taking one half of the value given above), and
$$ H_0 = \frac{p^2}{2\,m_e} + V(r) \tag{6.101} $$
as the unperturbed Hamiltonian. We have two choices for the energy eigenstates of $H_0$. We can adopt the simultaneous eigenstates of $H_0$, $L^2$, $S^2$, $L_z$, and $S_z$, or the simultaneous eigenstates of $H_0$, $L^2$, $S^2$, $J^2$, and $J_z$, where ${\bf J} = {\bf L} + {\bf S}$ is the total angular momentum. Although the departure of $V(r)$ from a pure $1/r$ form splits the degeneracy of same $n$, different $l$, states, those states characterized by the same values of $n$ and $l$, but different values of $m_l$, are still degenerate. (Here, $m_l$, $m_s$, and $m_j$ are the quantum numbers corresponding to $L_z$, $S_z$, and $J_z$, respectively.) Moreover, with the addition of spin degrees of freedom, each state is doubly degenerate due to the two possible orientations of the electron spin (i.e., $m_s = \pm 1/2$). Thus, we are still dealing with a highly degenerate system. We know, from Sect. 6.6, that the application of perturbation theory to a degenerate system is greatly simplified if the basis eigenstates of the unperturbed Hamiltonian are also eigenstates of the perturbing Hamiltonian. Now, the perturbing Hamiltonian, $H_{LS}$, is proportional to ${\bf L}\cdot{\bf S}$, where
$$ {\bf L}\cdot{\bf S} = \frac{J^2 - L^2 - S^2}{2}. \tag{6.102} $$
It is fairly obvious that the first group of operators ($H_0$, $L^2$, $S^2$, $L_z$, and $S_z$) does not commute with $H_{LS}$, whereas the second group ($H_0$, $L^2$, $S^2$, $J^2$, and $J_z$) does. In fact, ${\bf L}\cdot{\bf S}$ is just a combination of operators appearing in the second group. Thus, it is advantageous to work in terms of the eigenstates of the second group of operators, rather than those of the first group.

We now need to find the simultaneous eigenstates of $H_0$, $L^2$, $S^2$, $J^2$, and $J_z$. This is equivalent to finding the eigenstates of the total angular momentum resulting from the addition of two angular momenta: $j_1 = l$, and $j_2 = s = 1/2$. According to Eq. (5.276), the allowed values of the total angular momentum are $j = l+1/2$ and $j = l-1/2$. We can write
$$ |l+1/2, m\rangle = \cos\alpha\,|m-1/2, 1/2\rangle + \sin\alpha\,|m+1/2, -1/2\rangle, \tag{6.103} $$
$$ |l-1/2, m\rangle = -\sin\alpha\,|m-1/2, 1/2\rangle + \cos\alpha\,|m+1/2, -1/2\rangle. \tag{6.104} $$
Here, the kets on the left-hand side are $|j, m_j\rangle$ kets, whereas those on the right-hand side are $|m_l, m_s\rangle$ kets (the $j_1$, $j_2$ labels have been dropped, for the sake of clarity). We have made use of the fact that the Clebsch-Gordon coefficients are automatically zero unless $m_j = m_l + m_s$. We have also made use of the fact that both the $|j, m_j\rangle$ and $|m_l, m_s\rangle$ kets are orthonormal, and have unit lengths. We now need to determine
$$ \cos\alpha = \langle m-1/2, 1/2|l+1/2, m\rangle, \tag{6.105} $$
where the Clebsch-Gordon coefficient is written in $\langle m_l, m_s|j, m_j\rangle$ form.

Let us now employ the recursion relation for Clebsch-Gordon coefficients, Eq. (5.282), with $j_1 = l$, $j_2 = 1/2$, $j = l+1/2$, $m_1 = m-1/2$, $m_2 = 1/2$ (lower sign). We obtain
$$ \sqrt{(l+1/2)\,(l+3/2) - m\,(m+1)}\;\langle m-1/2, 1/2|l+1/2, m\rangle = \sqrt{l\,(l+1) - (m-1/2)\,(m+1/2)}\;\langle m+1/2, 1/2|l+1/2, m+1\rangle, \tag{6.106} $$
which reduces to
$$ \langle m-1/2, 1/2|l+1/2, m\rangle = \sqrt{\frac{l+m+1/2}{l+m+3/2}}\;\langle m+1/2, 1/2|l+1/2, m+1\rangle. \tag{6.107} $$
We can use this formula to successively increase the value of $m_l$. For instance,
$$ \langle m-1/2, 1/2|l+1/2, m\rangle = \sqrt{\frac{l+m+1/2}{l+m+3/2}}\;\sqrt{\frac{l+m+3/2}{l+m+5/2}}\;\langle m+3/2, 1/2|l+1/2, m+2\rangle. \tag{6.108} $$
This procedure can be continued until $m_l$ attains its maximum possible value, $l$. Thus,
$$ \langle m-1/2, 1/2|l+1/2, m\rangle = \sqrt{\frac{l+m+1/2}{2\,l+1}}\;\langle l, 1/2|l+1/2, l+1/2\rangle. \tag{6.109} $$
Consider the situation in which $m_l$ and $m_s$ both take their maximum values, $l$ and $1/2$, respectively. The corresponding value of $m_j$ is $l+1/2$. This value is possible when $j = l+1/2$, but not when $j = l-1/2$. Thus, the $|m_l, m_s\rangle$ ket $|l, 1/2\rangle$ must be equal to the $|j, m_j\rangle$ ket $|l+1/2, l+1/2\rangle$, up to an arbitrary phase-factor. By convention, this factor is taken to be unity, giving
$$ \langle l, 1/2|l+1/2, l+1/2\rangle = 1. \tag{6.110} $$
It follows from Eq. (6.109) that
$$ \cos\alpha = \langle m-1/2, 1/2|l+1/2, m\rangle = \sqrt{\frac{l+m+1/2}{2\,l+1}}. \tag{6.111} $$
Now,
$$ \sin^2\alpha = 1 - \frac{l+m+1/2}{2\,l+1} = \frac{l-m+1/2}{2\,l+1}. \tag{6.112} $$
We now need to determine the sign of $\sin\alpha$. A careful examination of the recursion relation, Eq. (5.282), shows that the plus sign is appropriate. Thus,
$$ |l+1/2, m\rangle = \sqrt{\frac{l+m+1/2}{2\,l+1}}\;|m-1/2, 1/2\rangle + \sqrt{\frac{l-m+1/2}{2\,l+1}}\;|m+1/2, -1/2\rangle, \tag{6.113} $$
$$ |l-1/2, m\rangle = -\sqrt{\frac{l-m+1/2}{2\,l+1}}\;|m-1/2, 1/2\rangle + \sqrt{\frac{l+m+1/2}{2\,l+1}}\;|m+1/2, -1/2\rangle. \tag{6.114} $$
It is convenient to define so-called spin-angular functions using the Pauli two-component formalism:
$$ {\cal Y}^{\,l}_{j=l\pm 1/2,\,m} = \pm\sqrt{\frac{l\pm m+1/2}{2\,l+1}}\;Y_l^{\,m-1/2}(\theta, \varphi)\,\chi_+ + \sqrt{\frac{l\mp m+1/2}{2\,l+1}}\;Y_l^{\,m+1/2}(\theta, \varphi)\,\chi_- = \frac{1}{\sqrt{2\,l+1}} \begin{pmatrix} \pm\sqrt{l\pm m+1/2}\;Y_l^{\,m-1/2}(\theta, \varphi) \\ \sqrt{l\mp m+1/2}\;Y_l^{\,m+1/2}(\theta, \varphi) \end{pmatrix}. \tag{6.115} $$
These functions are eigenfunctions of the total angular momentum for spin one-half particles, just as the spherical harmonics are eigenfunctions of the orbital angular momentum. A general wave-function for an energy eigenstate in a hydrogen-like atom is written
$$ \psi_{nlm\pm} = R_{nl}(r)\,{\cal Y}^{\,l}_{j=l\pm 1/2,\,m}. \tag{6.116} $$
The radial part of the wave-function, $R_{nl}(r)$, depends on the radial quantum number $n$ and the angular quantum number $l$. The wave-function is also labeled by $m$, which is the quantum number associated with $J_z$. For a given choice of $l$, the quantum number $j$ (i.e., the quantum number associated with $J^2$) can take the values $l \pm 1/2$.

The $|l\pm 1/2, m\rangle$ kets are eigenstates of ${\bf L}\cdot{\bf S}$, according to Eq. (6.102). Thus,
$$ {\bf L}\cdot{\bf S}\,|j = l\pm 1/2, m_j = m\rangle = \frac{\hbar^2}{2}\,\big[ j\,(j+1) - l\,(l+1) - 3/4 \big]\,|j, m\rangle, \tag{6.117} $$
giving
$$ {\bf L}\cdot{\bf S}\,|l+1/2, m\rangle = \frac{l\,\hbar^2}{2}\,|l+1/2, m\rangle, \tag{6.118} $$
$$ {\bf L}\cdot{\bf S}\,|l-1/2, m\rangle = -\frac{(l+1)\,\hbar^2}{2}\,|l-1/2, m\rangle. \tag{6.119} $$
It follows that
$$ \int {\cal Y}^{\,\dagger}_{l+1/2,\,m}\;{\bf L}\cdot{\bf S}\;{\cal Y}_{l+1/2,\,m}\,d\Omega = \frac{l\,\hbar^2}{2}, \tag{6.120} $$
$$ \int {\cal Y}^{\,\dagger}_{l-1/2,\,m}\;{\bf L}\cdot{\bf S}\;{\cal Y}_{l-1/2,\,m}\,d\Omega = -\frac{(l+1)\,\hbar^2}{2}, \tag{6.121} $$
where the integrals are over all solid angle.
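The eigenvalues quoted in Eqs. (6.118)–(6.119) can be confirmed by brute force: represent ${\bf L}$ and ${\bf S}$ as matrices on the $(2\,l+1)\times 2$ dimensional product space and diagonalize ${\bf L}\cdot{\bf S}$. A minimal Python sketch (assuming numpy; $\hbar = 1$, with $l = 2$ chosen for illustration):

```python
import numpy as np

def angular_momentum(j):
    """Matrices (Jx, Jy, Jz) for angular momentum j, in units of hbar,
    on the basis |j, j>, |j, j-1>, ..., |j, -j>."""
    m = j - np.arange(int(round(2 * j)) + 1)
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), 1)   # J+ matrix
    jx = (jp + jp.T) / 2
    jy = (jp - jp.T) / (2 * 1j)
    jz = np.diag(m)
    return jx, jy, jz

l, s = 2, 0.5                                            # e.g. a d electron
L = [np.kron(M, np.eye(2)) for M in angular_momentum(l)]
S = [np.kron(np.eye(int(2 * l + 1)), M) for M in angular_momentum(s)]

LS = sum(Lk @ Sk for Lk, Sk in zip(L, S))
print(np.unique(np.round(np.linalg.eigvalsh(LS), 6)))
# [-1.5  1. ]  i.e. -(l+1)/2 and l/2 in units of hbar^2, as in Eqs. (6.118)-(6.119)
```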
Let us now apply degenerate perturbation theory to evaluate the shift in energy of a state whose wave-function is $\psi_{nlm\pm}$ due to the spin-orbit Hamiltonian $H_{LS}$. To first-order, the energy-shift is given by
$$ \Delta E_{nlm\pm} = \int \psi^{\,\dagger}_{nlm\pm}\,H_{LS}\,\psi_{nlm\pm}\,dV, \tag{6.122} $$
where the integral is over all space. Equations (6.100) (remember the factor of two), (6.116), and (6.120)–(6.121) yield
$$ \Delta E_{nlm+} = +\frac{1}{2\,m_e^{\,2}}\left\langle \frac{1}{r}\,\frac{dV}{dr} \right\rangle \frac{l\,\hbar^2}{2}, \tag{6.123} $$
$$ \Delta E_{nlm-} = -\frac{1}{2\,m_e^{\,2}}\left\langle \frac{1}{r}\,\frac{dV}{dr} \right\rangle \frac{(l+1)\,\hbar^2}{2}, \tag{6.124} $$
where
$$ \left\langle \frac{1}{r}\,\frac{dV}{dr} \right\rangle = \int (R_{nl})^{\,*}\,\frac{1}{r}\,\frac{dV}{dr}\,R_{nl}\,r^2\,dr. \tag{6.125} $$
Equations (6.123)–(6.124) are known as Lande's interval rule.

Let us now apply the above result to the case of a sodium atom. In chemist's notation, the ground state is written
$$ (1s)^2\,(2s)^2\,(2p)^6\,(3s). \tag{6.126} $$
The inner ten electrons effectively form a spherically symmetric electron cloud. We are interested in the excitation of the eleventh electron from $3s$ to some higher energy state. The closest (in energy) unoccupied state is $3p$. This state has a higher energy than $3s$ due to the deviations of the potential from the pure
Coulomb form. In the absence of spin-orbit interaction, there are six degenerate $3p$ states. The spin-orbit interaction breaks the degeneracy of these states. The modified states are labeled $(3p)_{1/2}$ and $(3p)_{3/2}$, where the subscript refers to the value of $j$. The four $(3p)_{3/2}$ states lie at a slightly higher energy level than the two $(3p)_{1/2}$ states, because the radial integral (6.125) is positive. The splitting of the $(3p)$ energy levels of the sodium atom can be observed using a spectroscope. The well-known sodium D line is associated with transitions between the $3p$ and $3s$ states. The fact that there are two slightly different $3p$ energy levels (note that spin-orbit coupling does not split the $3s$ energy levels) means that the sodium D line actually consists of two very closely spaced spectroscopic lines. It is easily demonstrated that the ratio of the typical spacing of Balmer lines to the splitting brought about by spin-orbit interaction is about $1 : \alpha^2$, where
$$ \alpha = \frac{e^2}{2\,\epsilon_0\,h\,c} = \frac{1}{137} \tag{6.127} $$
is the fine structure constant. Note that Eqs. (6.123)–(6.124) are not entirely correct, since we have neglected an effect (namely, the relativistic mass correction of the electron) which is the same order of magnitude as spin-orbit coupling.
6.8 The Zeeman effect
Consider a hydrogen-like atom placed in a uniform $z$-directed magnetic field. The change in energy of the outermost electron is
$$ H_B = -\mu\cdot{\bf B}, \tag{6.128} $$
where
$$ \mu = -\frac{e}{2\,m_e}\,({\bf L} + 2\,{\bf S}) \tag{6.129} $$
is its magnetic moment, including both the spin and orbital contributions. Thus,
$$ H_B = \frac{e\,B}{2\,m_e}\,(L_z + 2\,S_z). \tag{6.130} $$
Suppose that the energy-shifts induced by the magnetic field are much smaller than those induced by spin-orbit interaction. In this situation, we can treat $H_B$ as
a small perturbation acting on the eigenstates of $H_0 + H_{LS}$. Of course, these states are the simultaneous eigenstates of $J^2$ and $J_z$. Let us consider one of these states, labeled by the quantum numbers $j$ and $m$, where $j = l \pm 1/2$. From standard perturbation theory, the first-order energy-shift in the presence of a magnetic field is
$$ \Delta E_{nlm\pm} = \langle l\pm 1/2, m|H_B|l\pm 1/2, m\rangle. \tag{6.131} $$
Since
$$ L_z + 2\,S_z = J_z + S_z, \tag{6.132} $$
we find that
$$ \Delta E_{nlm\pm} = \frac{e\,B}{2\,m_e}\,\big( m\,\hbar + \langle l\pm 1/2, m|S_z|l\pm 1/2, m\rangle \big). \tag{6.133} $$
Now, from Eqs. (6.113)–(6.114),
$$ |l\pm 1/2, m\rangle = \pm\sqrt{\frac{l\pm m+1/2}{2\,l+1}}\;|m-1/2, 1/2\rangle + \sqrt{\frac{l\mp m+1/2}{2\,l+1}}\;|m+1/2, -1/2\rangle. \tag{6.134} $$
It follows that
$$ \langle l\pm 1/2, m|S_z|l\pm 1/2, m\rangle = \frac{\hbar}{2\,(2\,l+1)}\,\big[ (l\pm m+1/2) - (l\mp m+1/2) \big] = \pm\frac{m\,\hbar}{2\,l+1}. \tag{6.135} $$
Thus, we obtain Lande's formula for the energy-shift induced by a weak magnetic field:
$$ \Delta E_{nlm\pm} = \frac{e\,\hbar\,B}{2\,m_e}\,m\left(1 \pm \frac{1}{2\,l+1}\right). \tag{6.136} $$
Let us apply this theory to the sodium atom. We have already seen that the non-Coulomb potential splits the degeneracy of the $3s$ and $3p$ states, the latter states acquiring a higher energy. The spin-orbit interaction splits the six $3p$ states into two groups, with four $j = 3/2$ states lying at a slightly higher energy than two $j = 1/2$ states. According to Eq. (6.136), a magnetic field splits the $(3p)_{3/2}$ quadruplet of states, each state acquiring a different energy. In fact, the energy of each state becomes dependent on the quantum number $m$, which measures the projection of the total angular momentum along the $z$-axis. States with higher $m$ values have higher energies. A magnetic field also splits the $(3p)_{1/2}$ doublet of states. However, it is evident from Eq. (6.136) that these states are split by a lesser amount than the $j = 3/2$ states.
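Equation (6.136) is simple enough to tabulate directly. A minimal Python sketch (shifts quoted in units of $e\,\hbar\,B/2\,m_e$, with exact rationals used for clarity) lists the weak-field splittings of the sodium $(3p)_{3/2}$ and $(3p)_{1/2}$ levels:

```python
from fractions import Fraction as F

# Weak-field (anomalous Zeeman) shifts from Lande's formula, Eq. (6.136),
# in units of e*hbar*B/(2*m_e).
l = 1                                        # sodium 3p
for j, sign in ((F(3, 2), +1), (F(1, 2), -1)):
    factor = 1 + F(sign, 2 * l + 1)          # the factor 1 +/- 1/(2l+1)
    mvals = [j - k for k in range(int(2 * j) + 1)]
    shifts = [m * factor for m in mvals]
    print(f"j = {j}: factor {factor}, shifts", [str(x) for x in shifts])
# j = 3/2: factor 4/3, shifts ['2', '2/3', '-2/3', '-2']
# j = 1/2: factor 2/3, shifts ['1/3', '-1/3']
```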
Suppose that we increase the strength of the magnetic field, so that the energy-shift due to the magnetic field becomes comparable to the energy-shift induced by spin-orbit interaction. Clearly, in this situation, it does not make much sense to think of $H_B$ as a small interaction term operating on the eigenstates of $H_0 + H_{LS}$. In fact, this intermediate case is very difficult to analyze. Let us consider the extreme limit in which the energy-shift due to the magnetic field greatly exceeds that induced by spin-orbit effects. This is called the Paschen-Back limit.

In the Paschen-Back limit we can think of the spin-orbit Hamiltonian, $H_{LS}$, as a small interaction term operating on the eigenstates of $H_0 + H_B$. Note that the magnetic Hamiltonian, $H_B$, commutes with $L^2$, $S^2$, $L_z$, $S_z$, but does not commute with all of the operators $L^2$, $S^2$, $J^2$, $J_z$ (in particular, it fails to commute with $J^2$). Thus, in an intense magnetic field, the energy eigenstates of a hydrogen-like atom are approximate eigenstates of the spin and orbital angular momenta, but are not eigenstates of the total angular momentum. We can label each state by the quantum numbers $n$ (the energy quantum number), $l$, $m_l$, and $m_s$. Thus, our energy eigenkets are written $|n, l, m_l, m_s\rangle$. The unperturbed Hamiltonian, $H_0$, causes states with different values of the quantum numbers $n$ and $l$ to have different energies. However, states with the same value of $n$ and $l$, but different values of $m_l$ and $m_s$, are degenerate. The shift in energy due to the magnetic field is simply
$$ \Delta E_{nlm_lm_s} = \langle n, l, m_l, m_s|H_B|n, l, m_l, m_s\rangle = \frac{e\,\hbar\,B}{2\,m_e}\,(m_l + 2\,m_s). \tag{6.137} $$
Thus, states with different values of $m_l + 2\,m_s$ acquire different energies.

Let us apply this result to a sodium atom. In the absence of a magnetic field, the six $3p$ states form two groups of four and two states, depending on the values
of their total angular momentum. In the presence of an intense magnetic field the $3p$ states are split into five groups. There is a state with $m_l + 2\,m_s = 2$, a state with $m_l + 2\,m_s = 1$, two states with $m_l + 2\,m_s = 0$, a state with $m_l + 2\,m_s = -1$, and a state with $m_l + 2\,m_s = -2$. These groups are equally spaced in energy, the energy difference between adjacent groups being $e\,\hbar\,B/2\,m_e$.

The energy-shift induced by the spin-orbit Hamiltonian is given by
$$ \Delta E_{nlm_lm_s} = \langle n, l, m_l, m_s|H_{LS}|n, l, m_l, m_s\rangle, \tag{6.138} $$
where
$$ H_{LS} = \frac{1}{2\,m_e^{\,2}}\,\frac{1}{r}\,\frac{dV}{dr}\,{\bf L}\cdot{\bf S}. \tag{6.139} $$
Now,
$$ \langle{\bf L}\cdot{\bf S}\rangle = \langle\, L_z\,S_z + (L^+ S^- + L^- S^+)/2 \,\rangle = \hbar^2\,m_l\,m_s, \tag{6.140} $$
since
$$ \langle L^{\pm}\rangle = \langle S^{\pm}\rangle = 0 \tag{6.141} $$
for expectation values taken between the simultaneous eigenkets of $L_z$ and $S_z$. Thus,
$$ \Delta E_{nlm_lm_s} = \frac{\hbar^2\,m_l\,m_s}{2\,m_e^{\,2}} \left\langle \frac{1}{r}\,\frac{dV}{dr} \right\rangle. \tag{6.142} $$
Let us apply the above result to a sodium atom. In the presence of an intense magnetic field, the $3p$ states are split into five groups with $(m_l, m_s)$ quantum numbers $(1, 1/2)$, $(0, 1/2)$, $(1, -1/2)$ or $(-1, 1/2)$, $(0, -1/2)$, and $(-1, -1/2)$, respectively, in order of decreasing energy. The spin-orbit term increases the energy of the highest energy state, does not affect the next highest energy state, decreases, but does not split, the energy of the doublet, does not affect the next lowest energy state, and increases the energy of the lowest energy state. The net result is that the five groups of states are no longer equally spaced in energy.
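The level pattern just described is easy to tabulate. A minimal Python sketch (the magnitudes of $a$ and $b$ below are illustrative, chosen so that $b \gg a$, in arbitrary energy units) combines Eqs. (6.137) and (6.142) for the six $3p$ states:

```python
# Paschen-Back limit for the sodium 3p level (l = 1, s = 1/2):
# total first-order shift = b*(m_l + 2*m_s) + a*m_l*m_s, where
#   b = e*hbar*B/(2*m_e)                     [Eq. (6.137)]
#   a = (hbar^2/(2*m_e^2)) <(1/r) dV/dr>     [Eq. (6.142)]
b, a = 1.0, 0.1            # illustrative sizes; b >> a is the assumed regime

states = [(ml, ms) for ml in (1, 0, -1) for ms in (+0.5, -0.5)]
for ml, ms in sorted(states, key=lambda q: -(q[0] + 2 * q[1])):
    print(f"m_l = {ml:+d}, m_s = {ms:+.1f}:  shift = {b*(ml + 2*ms) + a*ml*ms:+.2f}")
# Output: +2.05, +1.00, -0.05, -0.05, -1.00, -1.95 -- the doublet stays
# degenerate, and the five groups are no longer exactly equally spaced.
```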
The sort of magnetic field-strength needed to get into the Paschen-Bach limit
is given by
B
PB
∼ α
2
e m
e

0
ha
0
· 25 tesla. (6.143)
Obviously, this is an extremely large field-strength.
6.9 Time-dependent perturbation theory
Suppose that the Hamiltonian of the system under consideration can be written
\[
H = H_0 + H_1(t), \tag{6.144}
\]
where $H_0$ does not contain time explicitly, and $H_1$ is a small time-dependent perturbation. It is assumed that we are able to calculate the eigenkets of the unperturbed Hamiltonian:
\[
H_0\, |n\rangle = E_n\, |n\rangle. \tag{6.145}
\]
We know that if the system is in one of the eigenstates of $H_0$ then, in the absence of the external perturbation, it remains in this state for ever. However, the presence of a small time-dependent perturbation can, in principle, give rise to a finite probability that a system initially in some eigenstate $|i\rangle$ of the unperturbed Hamiltonian is found in some other eigenstate at a subsequent time (since $|i\rangle$ is no longer an exact eigenstate of the total Hamiltonian). In other words, a time-dependent perturbation causes the system to make transitions between its unperturbed energy eigenstates. Let us investigate this effect.

Suppose that at $t = t_0$ the state of the system is represented by
\[
|A\rangle = \sum_n c_n\, |n\rangle, \tag{6.146}
\]
where the $c_n$ are complex numbers. Thus, the initial state is some linear superposition of the unperturbed energy eigenstates. In the absence of the time-dependent perturbation, the time evolution of the system is given by
\[
|A, t_0, t\rangle = \sum_n c_n\, \exp[-i\, E_n\, (t - t_0)/\hbar]\, |n\rangle. \tag{6.147}
\]
Now, the probability of finding the system in state $|n\rangle$ at time $t$ is
\[
P_n(t) = |c_n\, \exp[-i\, E_n\, (t - t_0)/\hbar]|^2 = |c_n|^2 = P_n(t_0). \tag{6.148}
\]
Clearly, with $H_1 = 0$, the probability of finding the system in state $|n\rangle$ at time $t$ is exactly the same as the probability of finding the system in this state at the initial time $t_0$. However, with $H_1 \neq 0$, we expect $P_n(t)$ to vary with time. Thus, we can write
\[
|A, t_0, t\rangle = \sum_n c_n(t)\, \exp[-i\, E_n\, (t - t_0)/\hbar]\, |n\rangle, \tag{6.149}
\]
where $P_n(t) = |c_n(t)|^2$. Here, we have carefully separated the fast phase oscillation of the eigenkets, which depends on the unperturbed Hamiltonian, from the slow variation of the amplitudes $c_n(t)$, which depends entirely on the perturbation (i.e., $c_n$ is constant if $H_1 = 0$). Note that in Eq. (6.149) the eigenkets $|n\rangle$ are time-independent (they are actually the eigenkets of $H_0$ evaluated at the time $t_0$).
Schrödinger's time evolution equation yields
\[
i\,\hbar\,\frac{\partial}{\partial t}\,|A, t_0, t\rangle = H\,|A, t_0, t\rangle = (H_0 + H_1)\,|A, t_0, t\rangle. \tag{6.150}
\]
It follows from Eq. (6.149) that
\[
(H_0 + H_1)\,|A, t_0, t\rangle = \sum_m c_m(t)\, \exp[-i\, E_m\, (t - t_0)/\hbar]\, (E_m + H_1)\, |m\rangle. \tag{6.151}
\]
We also have
\[
i\,\hbar\,\frac{\partial}{\partial t}\,|A, t_0, t\rangle = \sum_m \left( i\,\hbar\,\frac{dc_m}{dt} + c_m(t)\, E_m \right) \exp[-i\, E_m\, (t - t_0)/\hbar]\, |m\rangle, \tag{6.152}
\]
where use has been made of the time-independence of the kets $|m\rangle$. According to Eq. (6.150), we can equate the right-hand sides of the previous two equations to obtain
\[
\sum_m i\,\hbar\,\frac{dc_m}{dt}\, \exp[-i\, E_m\, (t - t_0)/\hbar]\, |m\rangle = \sum_m c_m(t)\, \exp[-i\, E_m\, (t - t_0)/\hbar]\, H_1\, |m\rangle. \tag{6.153}
\]
Left-multiplication by $\langle n|$ yields
\[
i\,\hbar\,\frac{dc_n}{dt} = \sum_m H_{nm}(t)\, \exp[\,i\,\omega_{nm}\,(t - t_0)]\, c_m(t), \tag{6.154}
\]
where
\[
H_{nm}(t) = \langle n|H_1(t)|m\rangle, \tag{6.155}
\]
and
\[
\omega_{nm} = \frac{E_n - E_m}{\hbar}. \tag{6.156}
\]
Here, we have made use of the standard orthonormality result, $\langle n|m\rangle = \delta_{nm}$.

Suppose that there are $N$ linearly independent eigenkets of the unperturbed Hamiltonian. According to Eq. (6.154), the time variation of the coefficients $c_n$, which specify the probability of finding the system in state $|n\rangle$ at time $t$, is determined by $N$ coupled first-order differential equations. Note that Eq. (6.154) is exact: we have made no approximations at this stage. Unfortunately, we cannot generally find exact solutions to this equation, so we have to obtain approximate solutions via suitable expansions in small quantities. However, for the particularly simple case of a two-state system (i.e., $N = 2$), it is actually possible to solve Eq. (6.154) without approximation. This solution is of enormous practical importance.
6.10 The two-state system
Consider a system in which the time-independent Hamiltonian possesses two eigenstates, denoted
\[
H_0\,|1\rangle = E_1\,|1\rangle, \tag{6.157}
\]
\[
H_0\,|2\rangle = E_2\,|2\rangle. \tag{6.158}
\]
Suppose, for the sake of simplicity, that the diagonal matrix elements of the interaction Hamiltonian, $H_1$, are zero:
\[
\langle 1|H_1|1\rangle = \langle 2|H_1|2\rangle = 0. \tag{6.159}
\]
The off-diagonal matrix elements are assumed to oscillate sinusoidally at some frequency $\omega$:
\[
\langle 1|H_1|2\rangle = \langle 2|H_1|1\rangle^{*} = \gamma\,\exp(\,i\,\omega\, t), \tag{6.160}
\]
where $\gamma$ and $\omega$ are real. Note that it is only the off-diagonal matrix elements which give rise to the effect which we are interested in: namely, transitions between states 1 and 2.
For a two-state system, Eq. (6.154) reduces to
\[
i\,\hbar\,\frac{dc_1}{dt} = \gamma\,\exp[+i\,(\omega - \omega_{21})\, t\,]\, c_2, \tag{6.161}
\]
\[
i\,\hbar\,\frac{dc_2}{dt} = \gamma\,\exp[-i\,(\omega - \omega_{21})\, t\,]\, c_1, \tag{6.162}
\]
where $\omega_{21} = (E_2 - E_1)/\hbar$, and assuming that $t_0 = 0$. Equations (6.161) and (6.162) can be combined to give a second-order differential equation for the time variation of the amplitude $c_2$:
\[
\frac{d^2 c_2}{dt^2} + i\,(\omega - \omega_{21})\,\frac{dc_2}{dt} + \frac{\gamma^2}{\hbar^2}\, c_2 = 0. \tag{6.163}
\]
Once we have solved for $c_2$, we can use Eq. (6.162) to obtain the amplitude $c_1$. Let us look for a solution in which the system is certain to be in state 1 at time $t = 0$. Thus, our boundary conditions are $c_1(0) = 1$ and $c_2(0) = 0$. It is easily demonstrated that the appropriate solutions are
\[
c_2(t) = \frac{-i\,\gamma/\hbar}{\sqrt{\gamma^2/\hbar^2 + (\omega - \omega_{21})^2/4}}\,\exp[-i\,(\omega - \omega_{21})\, t/2]\,\sin\!\left(\sqrt{\gamma^2/\hbar^2 + (\omega - \omega_{21})^2/4}\;\, t\right), \tag{6.164}
\]
\[
c_1(t) = \exp[\,i\,(\omega - \omega_{21})\, t/2]\,\cos\!\left(\sqrt{\gamma^2/\hbar^2 + (\omega - \omega_{21})^2/4}\;\, t\right)
- \frac{i\,(\omega - \omega_{21})/2}{\sqrt{\gamma^2/\hbar^2 + (\omega - \omega_{21})^2/4}}\,\exp[\,i\,(\omega - \omega_{21})\, t/2]\,\sin\!\left(\sqrt{\gamma^2/\hbar^2 + (\omega - \omega_{21})^2/4}\;\, t\right). \tag{6.165}
\]
Now, the probability of finding the system in state 1 at time $t$ is simply $P_1(t) = |c_1|^2$. Likewise, the probability of finding the system in state 2 at time $t$ is $P_2(t) = |c_2|^2$. It follows that
\[
P_2(t) = \frac{\gamma^2/\hbar^2}{\gamma^2/\hbar^2 + (\omega - \omega_{21})^2/4}\,\sin^2\!\left(\sqrt{\gamma^2/\hbar^2 + (\omega - \omega_{21})^2/4}\;\, t\right), \tag{6.166}
\]
\[
P_1(t) = 1 - P_2(t). \tag{6.167}
\]
This result is known as Rabi's formula.
Equation (6.166) exhibits all the features of a classic resonance. At resonance, when the oscillation frequency of the perturbation, $\omega$, matches the frequency $\omega_{21}$, we find that
\[
P_1(t) = \cos^2(\gamma\, t/\hbar), \tag{6.168}
\]
\[
P_2(t) = \sin^2(\gamma\, t/\hbar). \tag{6.169}
\]
According to the above result, the system starts off at $t = 0$ in state 1. After a time interval $\pi\,\hbar/2\,\gamma$ it is certain to be in state 2. After a further time interval $\pi\,\hbar/2\,\gamma$ it is certain to be in state 1, and so on. Thus, the system periodically flip-flops between states 1 and 2 under the influence of the time-dependent perturbation. This implies that the system alternately absorbs and emits energy from the source of the perturbation.
The absorption-emission cycle also takes place away from the resonance, when $\omega \neq \omega_{21}$. However, the amplitude of oscillation of the coefficient $c_2$ is reduced. This means that the maximum value of $P_2(t)$ is no longer unity, nor is the minimum value of $P_1(t)$ zero. In fact, if we plot the maximum value of $P_2(t)$ as a function of the applied frequency, $\omega$, we obtain a resonance curve whose maximum (unity) lies at the resonance, and whose full-width half-maximum (in frequency) is $4\,\gamma/\hbar$. Thus, if the applied frequency differs from the resonant frequency by substantially more than $2\,\gamma/\hbar$ then the probability of the system jumping from state 1 to state 2 is very small. In other words, the time-dependent perturbation is only effective at causing transitions between states 1 and 2 if its frequency of oscillation lies in the approximate range $\omega_{21} \pm 2\,\gamma/\hbar$. Clearly, the weaker the perturbation (i.e., the smaller $\gamma$ becomes), the narrower the resonance.
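The resonance behaviour just described is easy to verify numerically. The following minimal sketch (Python, in units with $\hbar = 1$; the values of $\gamma$ and $\omega_{21}$ are illustrative assumptions) evaluates the maximum of $P_2(t)$ from Eq. (6.166) as a function of the applied frequency.

```python
# Maximum of P_2(t) from Rabi's formula, Eq. (6.166), in units with hbar = 1.
# gamma and omega_21 are illustrative values, not taken from the text.
gamma = 0.1        # perturbation strength
omega_21 = 1.0     # level spacing frequency

def P2_max(omega):
    """Maximum over t of P_2(t) for applied frequency omega."""
    detune = omega - omega_21
    return gamma**2 / (gamma**2 + detune**2 / 4)

print(P2_max(omega_21))               # 1.0 at resonance
print(P2_max(omega_21 + 2 * gamma))   # 0.5, so the full-width half-maximum is 4*gamma
```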
6.11 Spin magnetic resonance
Consider a spin one-half system (e.g., a bound electron) placed in a uniform $z$-directed magnetic field, and then subjected to a small time-dependent magnetic field rotating in the $x$-$y$ plane. Thus,
\[
{\bf B} = B_0\,\hat{\bf z} + B_1\,(\cos\omega t\;\hat{\bf x} + \sin\omega t\;\hat{\bf y}), \tag{6.170}
\]
where $B_0$ and $B_1$ are constants, with $B_1 \ll B_0$. The rotating magnetic field usually represents the magnetic component of an electromagnetic wave propagating along the $z$-axis. In this system, the electric component of the wave has no effect. The Hamiltonian is written
\[
H = -\boldsymbol{\mu}\!\cdot\!{\bf B} = H_0 + H_1, \tag{6.171}
\]
where
\[
H_0 = \frac{e\, B_0}{m_e}\, S_z, \tag{6.172}
\]
and
\[
H_1 = \frac{e\, B_1}{m_e}\,(\cos\omega t\; S_x + \sin\omega t\; S_y). \tag{6.173}
\]

The eigenstates of the unperturbed Hamiltonian are the `spin up' and `spin down' states, denoted $|+\rangle$ and $|-\rangle$, respectively. Thus,
\[
H_0\, |\pm\rangle = \pm\,\frac{e\,\hbar\, B_0}{2\, m_e}\, |\pm\rangle. \tag{6.174}
\]
The time-dependent Hamiltonian can be written
\[
H_1 = \frac{e\, B_1}{2\, m_e}\left[\exp(\,i\,\omega t)\, S_- + \exp(-i\,\omega t)\, S_+\right], \tag{6.175}
\]
where $S_+$ and $S_-$ are the conventional raising and lowering operators for the spin angular momentum. It follows that
\[
\langle +|H_1|+\rangle = \langle -|H_1|-\rangle = 0, \tag{6.176}
\]
and
\[
\langle -|H_1|+\rangle = \langle +|H_1|-\rangle^{*} = \frac{e\,\hbar\, B_1}{2\, m_e}\,\exp(\,i\,\omega t). \tag{6.177}
\]
It can be seen that this system is exactly the same as the two-state system discussed in the previous section, provided that we make the identifications
\[
|1\rangle \rightarrow |-\rangle, \tag{6.178}
\]
\[
|2\rangle \rightarrow |+\rangle, \tag{6.179}
\]
\[
\omega_{21} \rightarrow \frac{e\, B_0}{m_e}, \tag{6.180}
\]
\[
\gamma \rightarrow \frac{e\,\hbar\, B_1}{2\, m_e}. \tag{6.181}
\]
The resonant frequency, $\omega_{21}$, is simply the spin precession frequency for an electron in a uniform magnetic field of strength $B_0$. In the absence of the perturbation, the expectation values of $S_x$ and $S_y$ oscillate because of the spin precession, but the expectation value of $S_z$ remains invariant. If we now apply a magnetic perturbation rotating at the resonant frequency then, according to the analysis of the previous section, the system undergoes a succession of spin-flops, $|+\rangle \leftrightarrow |-\rangle$, in addition to the spin precession. We also know that if the oscillation frequency of the applied field is very different from the resonant frequency then there is virtually zero probability of the field triggering a spin-flop. The width of the resonance (in frequency) is determined by the strength of the oscillating magnetic perturbation. Experimentalists are able to measure the magnetic moments of electrons, and other spin one-half particles, to a high degree of accuracy by placing the particles in a magnetic field, and subjecting them to an oscillating magnetic field whose frequency is gradually scanned. By determining the resonant frequency (i.e., the frequency at which the particles absorb energy from the oscillating field), it is possible to calculate the magnetic moment.
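As a rough numerical illustration of the resonance condition (6.180), the sketch below (Python; the field strength $B_0$ is an assumed laboratory value, not one quoted in the text) evaluates the electron spin resonance frequency.

```python
# Electron spin resonance frequency, omega_21 = e B_0 / m_e, from Eq. (6.180).
# B0 is an assumed illustrative field strength.
from scipy.constants import e, m_e
from math import pi

B0 = 1.0                           # tesla (illustrative)
omega_21 = e * B0 / m_e            # angular frequency, rad/s
print(omega_21 / (2 * pi) / 1e9, "GHz")   # roughly 28 GHz per tesla of applied field
```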
6.12 The Dyson series
Let us now try to find approximate solutions of Eq. (6.154) for a general system. It is convenient to work in terms of the time evolution operator, $U(t_0, t)$, which is defined
\[
|A, t_0, t\rangle = U(t_0, t)\, |A\rangle. \tag{6.182}
\]
Here, $|A, t_0, t\rangle$ is the state ket of the system at time $t$, given that the state ket at the initial time $t_0$ is $|A\rangle$. It is easily seen that the time evolution operator satisfies the differential equation
\[
i\,\hbar\,\frac{\partial U(t_0, t)}{\partial t} = (H_0 + H_1)\, U(t_0, t), \tag{6.183}
\]
subject to the boundary condition
\[
U(t_0, t_0) = 1. \tag{6.184}
\]
In the absence of the external perturbation, the time evolution operator reduces to
\[
U(t_0, t) = \exp[-i\, H_0\,(t - t_0)/\hbar]. \tag{6.185}
\]
Let us switch on the perturbation and look for a solution of the form
\[
U(t_0, t) = \exp[-i\, H_0\,(t - t_0)/\hbar]\; U_I(t_0, t). \tag{6.186}
\]
It is readily demonstrated that $U_I$ satisfies the differential equation
\[
i\,\hbar\,\frac{\partial U_I(t_0, t)}{\partial t} = H_I(t_0, t)\, U_I(t_0, t), \tag{6.187}
\]
where
\[
H_I(t_0, t) = \exp[+i\, H_0\,(t - t_0)/\hbar]\; H_1\;\exp[-i\, H_0\,(t - t_0)/\hbar], \tag{6.188}
\]
subject to the boundary condition
\[
U_I(t_0, t_0) = 1. \tag{6.189}
\]
Note that $U_I$ specifies that component of the time evolution operator which is due to the time-dependent perturbation. Thus, we would expect $U_I$ to contain all of the information regarding transitions between different eigenstates of $H_0$ caused by the perturbation.
Suppose that the system starts off at time $t_0$ in the eigenstate $|i\rangle$ of the unperturbed Hamiltonian. The subsequent evolution of the state ket is given by Eq. (6.149),
\[
|i, t_0, t\rangle = \sum_m c_m(t)\,\exp[-i\, E_m\,(t - t_0)/\hbar]\, |m\rangle. \tag{6.190}
\]
However, we also have
\[
|i, t_0, t\rangle = \exp[-i\, H_0\,(t - t_0)/\hbar]\; U_I(t_0, t)\, |i\rangle. \tag{6.191}
\]
It follows that
\[
c_n(t) = \langle n|U_I(t_0, t)|i\rangle, \tag{6.192}
\]
where use has been made of $\langle n|m\rangle = \delta_{nm}$. Thus, the probability that the system is found in state $|n\rangle$ at time $t$, given that it is definitely in state $|i\rangle$ at time $t_0$, is simply
\[
P_{i\rightarrow n}(t_0, t) = |\langle n|U_I(t_0, t)|i\rangle|^{2}. \tag{6.193}
\]
This quantity is usually termed the transition probability between states $|i\rangle$ and $|n\rangle$.
Note that the differential equation (6.187), plus the boundary condition (6.189), are equivalent to the following integral equation:
\[
U_I(t_0, t) = 1 - \frac{i}{\hbar}\int_{t_0}^{t} H_I(t_0, t')\; U_I(t_0, t')\, dt'. \tag{6.194}
\]
We can obtain an approximate solution to this equation by iteration:
\[
U_I(t_0, t) \simeq 1 - \frac{i}{\hbar}\int_{t_0}^{t} H_I(t_0, t')\left[1 - \frac{i}{\hbar}\int_{t_0}^{t'} H_I(t_0, t'')\; U_I(t_0, t'')\, dt''\right] dt'
\]
\[
\simeq 1 - \frac{i}{\hbar}\int_{t_0}^{t} H_I(t_0, t')\, dt' + \left(\frac{-i}{\hbar}\right)^{2}\int_{t_0}^{t} dt' \int_{t_0}^{t'} H_I(t_0, t')\, H_I(t_0, t'')\, dt'' + \cdots. \tag{6.195}
\]
This expansion is known as the Dyson series. Let
\[
c_n = c_n^{(0)} + c_n^{(1)} + c_n^{(2)} + \cdots, \tag{6.196}
\]
where the superscript $^{(1)}$ refers to a first-order term in the expansion, etc. It follows from Eqs. (6.192) and (6.195) that
\[
c_n^{(0)}(t) = \delta_{in}, \tag{6.197}
\]
\[
c_n^{(1)}(t) = -\frac{i}{\hbar}\int_{t_0}^{t} \langle n|H_I(t_0, t')|i\rangle\, dt', \tag{6.198}
\]
\[
c_n^{(2)}(t) = \left(\frac{-i}{\hbar}\right)^{2}\int_{t_0}^{t} dt' \int_{t_0}^{t'} \langle n|H_I(t_0, t')\, H_I(t_0, t'')|i\rangle\, dt''. \tag{6.199}
\]
These expressions simplify to
\[
c_n^{(0)}(t) = \delta_{in}, \tag{6.200}
\]
\[
c_n^{(1)}(t) = -\frac{i}{\hbar}\int_{t_0}^{t} \exp[\,i\,\omega_{ni}\,(t' - t_0)]\, H_{ni}(t')\, dt', \tag{6.201}
\]
\[
c_n^{(2)}(t) = \left(\frac{-i}{\hbar}\right)^{2}\sum_m \int_{t_0}^{t} dt' \int_{t_0}^{t'} dt''\,\exp[\,i\,\omega_{nm}\,(t' - t_0)]\, H_{nm}(t')\,\exp[\,i\,\omega_{mi}\,(t'' - t_0)]\, H_{mi}(t''), \tag{6.202}
\]
where
\[
\omega_{nm} = \frac{E_n - E_m}{\hbar}, \tag{6.203}
\]
and
\[
H_{nm}(t) = \langle n|H_1(t)|m\rangle. \tag{6.204}
\]
The transition probability between states $i$ and $n$ is simply
\[
P_{i\rightarrow n}(t_0, t) = |c_n^{(0)} + c_n^{(1)} + c_n^{(2)} + \cdots|^{2}. \tag{6.205}
\]
According to the above analysis, there is no chance of a transition between states $|i\rangle$ and $|n\rangle$ (where $i \neq n$) to zeroth-order (i.e., in the absence of the perturbation). To first-order, the transition probability is proportional to the time integral of the matrix element $\langle n|H_1|i\rangle$, weighted by some oscillatory phase-factor. Thus, if the matrix element is zero, then there is no chance of a first-order transition between states $|i\rangle$ and $|n\rangle$. However, to second-order, a transition between states $|i\rangle$ and $|n\rangle$ is possible even when the matrix element $\langle n|H_1|i\rangle$ is zero.
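The first-order amplitude (6.201) is straightforward to evaluate numerically. The sketch below (Python, $\hbar = 1$, $t_0 = 0$) assumes, purely for illustration, a constant matrix element $H_{ni}$, and checks the quadrature against the elementary closed form of the integral (which reappears as Eq. (6.208) in the next section).

```python
# First-order amplitude c_n^(1)(t) of Eq. (6.201) by numerical quadrature,
# in units with hbar = 1 and t_0 = 0.  A constant matrix element H_ni is
# assumed purely for illustration.
import numpy as np
from scipy.integrate import quad

hbar = 1.0
H_ni = 0.05          # illustrative constant matrix element
omega_ni = 2.0       # (E_n - E_i)/hbar, illustrative
t = 3.0

re, _ = quad(lambda tp: np.cos(omega_ni * tp), 0.0, t)
im, _ = quad(lambda tp: np.sin(omega_ni * tp), 0.0, t)
c1_numeric = -1j / hbar * H_ni * (re + 1j * im)

c1_exact = H_ni / (hbar * omega_ni) * (1.0 - np.exp(1j * omega_ni * t))
print(np.isclose(c1_numeric, c1_exact))   # True
```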
6.13 Constant perturbations
Consider a constant perturbation which is suddenly switched on at time $t = 0$:
\[
H_1(t) = 0 \hspace{1cm} {\rm for}\ t < 0,
\]
\[
H_1(t) = H_1 \hspace{1cm} {\rm for}\ t \geq 0, \tag{6.206}
\]
where $H_1$ is time-independent, but is generally a function of the position, momentum, and spin operators. Suppose that the system is definitely in state $|i\rangle$ at time $t = 0$. According to Eqs. (6.200)–(6.202) (with $t_0 = 0$),
\[
c_n^{(0)}(t) = \delta_{in}, \tag{6.207}
\]
\[
c_n^{(1)}(t) = -\frac{i}{\hbar}\, H_{ni}\int_0^{t} \exp(\,i\,\omega_{ni}\, t')\, dt' = \frac{H_{ni}}{E_n - E_i}\,[1 - \exp(\,i\,\omega_{ni}\, t)], \tag{6.208}
\]
giving
\[
P_{i\rightarrow n}(t) \simeq |c_n^{(1)}|^{2} = \frac{4\,|H_{ni}|^{2}}{|E_n - E_i|^{2}}\,\sin^2\!\left[\frac{(E_n - E_i)\, t}{2\,\hbar}\right], \tag{6.209}
\]
for $i \neq n$. The transition probability between states $|i\rangle$ and $|n\rangle$ can be written
\[
P_{i\rightarrow n}(t) = \frac{|H_{ni}|^{2}\, t^{2}}{\hbar^{2}}\,{\rm sinc}^{2}\!\left[\frac{(E_n - E_i)\, t}{2\,\hbar}\right], \tag{6.210}
\]
where
\[
{\rm sinc}(x) = \frac{\sin x}{x}. \tag{6.211}
\]
The sinc function is highly oscillatory, and decays like $1/|x|$ at large $|x|$. It is a good approximation to say that ${\rm sinc}(x)$ is small except when $|x| \lesssim \pi$. It follows that the transition probability, $P_{i\rightarrow n}$, is small except when
\[
|E_n - E_i| \lesssim \frac{2\pi\,\hbar}{t}. \tag{6.212}
\]
Note that in the limit $t \rightarrow \infty$ only those transitions which conserve energy (i.e., $E_n = E_i$) have an appreciable probability of occurrence. At finite $t$, it is possible to have transitions which do not exactly conserve energy, provided that
\[
\Delta E\,\Delta t \lesssim \hbar, \tag{6.213}
\]
where $\Delta E = |E_n - E_i|$ is the change in energy of the system associated with the transition, and $\Delta t = t$ is the time elapsed since the perturbation was switched on. Clearly, this result is just a manifestation of the well-known uncertainty relation for energy and time. This uncertainty relation is fundamentally different to the position-momentum uncertainty relation, since in non-relativistic quantum mechanics position and momentum are operators, whereas time is merely a parameter.
The probability of a transition which conserves energy (i.e., $E_n = E_i$) is
\[
P_{i\rightarrow n}(t) = \frac{|H_{in}|^{2}\, t^{2}}{\hbar^{2}}, \tag{6.214}
\]
where use has been made of ${\rm sinc}(0) = 1$. Note that this probability grows quadratically with time. This result is somewhat surprising, since it implies that the probability of a transition occurring in a fixed time interval, $t$ to $t + dt$, grows linearly with $t$, despite the fact that $H_1$ is constant for $t > 0$. In practice, there is usually a group of final states, all possessing nearly the same energy as the energy of the initial state $|i\rangle$. It is helpful to define the density of states, $\rho(E)$, where the number of final states lying in the energy range $E$ to $E + dE$ is given by $\rho(E)\, dE$. Thus, the probability of a transition from the initial state $i$ to any of the continuum of possible final states is
\[
P_{i\rightarrow}(t) = \int P_{i\rightarrow n}(t)\,\rho(E_n)\, dE_n, \tag{6.215}
\]
giving
\[
P_{i\rightarrow}(t) = \frac{2\, t}{\hbar}\int |H_{ni}|^{2}\,\rho(E_n)\,{\rm sinc}^{2}(x)\, dx, \tag{6.216}
\]
where
\[
x = (E_n - E_i)\, t/2\,\hbar, \tag{6.217}
\]
and use has been made of Eq. (6.210). We know that in the limit $t \rightarrow \infty$ the function ${\rm sinc}(x)$ is only non-zero in an infinitesimally narrow range of final energies centred on $E_n = E_i$. It follows that, in this limit, we can take $\rho(E_n)$ and $|H_{ni}|^{2}$ out of the integral in the above formula to obtain
\[
P_{i\rightarrow [n]}(t) = \frac{2\pi}{\hbar}\,\overline{|H_{ni}|^{2}}\,\rho(E_n)\, t\,\bigg|_{E_n \simeq E_i}, \tag{6.218}
\]
where $P_{i\rightarrow [n]}$ denotes the transition probability between the initial state $|i\rangle$ and all final states $|n\rangle$ which have approximately the same energy as the initial state. Here, $\overline{|H_{ni}|^{2}}$ is the average of $|H_{ni}|^{2}$ over all final states with approximately the same energy as the initial state. In deriving the above formula, we have made use of the result
\[
\int_{-\infty}^{\infty} {\rm sinc}^{2}(x)\, dx = \pi. \tag{6.219}
\]
Note that the transition probability, $P_{i\rightarrow [n]}$, is now proportional to $t$, instead of $t^{2}$.
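The integral (6.219), which is responsible for this linear growth in $t$, is easily checked numerically (a minimal sketch in Python; the finite integration range is the only approximation made):

```python
# Numerical check of Eq. (6.219): the integral of sinc^2(x) over the real line is pi.
# numpy's sinc is the normalized version, so sinc(x) = sin(x)/x = np.sinc(x/pi).
import numpy as np

x = np.linspace(-2000.0, 2000.0, 2_000_001)   # grid spacing 0.002
f = np.sinc(x / np.pi)**2
print(np.sum(f) * (x[1] - x[0]), np.pi)       # ~3.141 versus 3.14159...
```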
It is convenient to define the transition rate, which is simply the transition probability per unit time. Thus,
\[
w_{i\rightarrow [n]} = \frac{dP_{i\rightarrow [n]}}{dt}, \tag{6.220}
\]
giving
\[
w_{i\rightarrow [n]} = \frac{2\pi}{\hbar}\,\overline{|H_{ni}|^{2}}\,\rho(E_n)\,\bigg|_{E_n \simeq E_i}. \tag{6.221}
\]
This appealingly simple result is known as Fermi's golden rule. Note that the transition rate is constant in time (for $t > 0$): i.e., the probability of a transition occurring in the time interval $t$ to $t + dt$ is independent of $t$ for fixed $dt$. Fermi's golden rule is sometimes written
\[
w_{i\rightarrow n} = \frac{2\pi}{\hbar}\, |H_{ni}|^{2}\,\delta(E_n - E_i), \tag{6.222}
\]
where it is understood that this formula must be integrated with $\int \rho(E_n)\, dE_n$ to obtain the actual transition rate.
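As an order-of-magnitude illustration of how Eq. (6.221) is used in practice, the sketch below (Python, SI units) evaluates a golden-rule rate; both the matrix element and the density of final states are assumed numbers, not values from the text.

```python
# Order-of-magnitude evaluation of Fermi's golden rule, Eq. (6.221), in SI units.
# |H_ni|^2 and rho(E_n) below are assumed illustrative numbers.
from scipy.constants import hbar, e
from math import pi

H_ni_sq = (1.0e-7 * e)**2     # |H_ni|^2 with H_ni ~ 0.1 micro-eV (illustrative)
rho = 1.0e3 / e               # ~1000 final states per eV (illustrative)

w = 2 * pi * H_ni_sq * rho / hbar     # transition rate, s^-1
print(w, "s^-1;  corresponding lifetime ~", 1.0 / w, "s")
```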
Let us now calculate the second-order term in the Dyson series, using the constant perturbation (6.206). From Eq. (6.202) we find that
\[
c_n^{(2)}(t) = \left(\frac{-i}{\hbar}\right)^{2}\sum_m H_{nm}\, H_{mi}\int_0^{t} dt'\,\exp(\,i\,\omega_{nm}\, t') \int_0^{t'} dt''\,\exp(\,i\,\omega_{mi}\, t'')
\]
\[
= \frac{i}{\hbar}\sum_m \frac{H_{nm}\, H_{mi}}{E_m - E_i}\int_0^{t}\left[\exp(\,i\,\omega_{ni}\, t') - \exp(\,i\,\omega_{nm}\, t')\right] dt'
\]
\[
= \frac{i\, t}{\hbar}\sum_m \frac{H_{nm}\, H_{mi}}{E_m - E_i}\left[\exp(\,i\,\omega_{ni}\, t/2)\,{\rm sinc}(\omega_{ni}\, t/2) - \exp(\,i\,\omega_{nm}\, t/2)\,{\rm sinc}(\omega_{nm}\, t/2)\right]. \tag{6.223}
\]
Thus,
\[
c_n(t) = c_n^{(1)} + c_n^{(2)} = \frac{i\, t}{\hbar}\,\exp(\,i\,\omega_{ni}\, t/2)\left\{\left[H_{ni} + \sum_m \frac{H_{nm}\, H_{mi}}{E_m - E_i}\right]{\rm sinc}(\omega_{ni}\, t/2)
- \sum_m \frac{H_{nm}\, H_{mi}}{E_m - E_i}\,\exp(\,i\,\omega_{im}\, t/2)\,{\rm sinc}(\omega_{nm}\, t/2)\right\}, \tag{6.224}
\]
where use has been made of Eq. (6.208). It follows, by analogy with the previous analysis, that
\[
w_{i\rightarrow [n]} = \frac{2\pi}{\hbar}\,\overline{\left|H_{ni} + \sum_m \frac{H_{nm}\, H_{mi}}{E_m - E_i}\right|^{2}}\;\rho(E_n)\,\bigg|_{E_n \simeq E_i}, \tag{6.225}
\]
where the transition rate is calculated for all final states, $|n\rangle$, with approximately the same energy as the initial state, $|i\rangle$, and for intermediate states, $|m\rangle$, whose energies differ from that of the initial state. The fact that $E_m \neq E_i$ causes the last term on the right-hand side of Eq. (6.224) to average to zero (due to the oscillatory phase-factor) during the evaluation of the transition probability.

According to Eq. (6.225), a second-order transition takes place in two steps. First, the system makes a non-energy-conserving transition to some intermediate state $|m\rangle$. Subsequently, the system makes another non-energy-conserving transition to the final state $|n\rangle$. The net transition, from $|i\rangle$ to $|n\rangle$, conserves energy. The non-energy-conserving transitions are generally termed virtual transitions, whereas the energy conserving first-order transition is termed a real transition. The above formula clearly breaks down if $H_{nm}\, H_{mi} \neq 0$ when $E_m = E_i$. This problem can be avoided by gradually turning on the perturbation: i.e., $H_1 \rightarrow \exp(\eta\, t)\, H_1$ (where $\eta$ is very small). The net result is to change the energy denominator in Eq. (6.225) from $E_i - E_m$ to $E_i - E_m + i\,\hbar\,\eta$.
6.14 Harmonic perturbations
Consider a perturbation which oscillates sinusoidally in time. This is usually called a harmonic perturbation. Thus,
\[
H_1(t) = V\,\exp(\,i\,\omega t) + V^{\dagger}\,\exp(-i\,\omega t), \tag{6.226}
\]
where $V$ is, in general, a function of position, momentum, and spin operators. Let us initiate the system in the eigenstate $|i\rangle$ of the unperturbed Hamiltonian, $H_0$, and switch on the harmonic perturbation at $t = 0$. It follows from Eq. (6.201) that
\[
c_n^{(1)} = \frac{-i}{\hbar}\int_0^{t}\left[V_{ni}\,\exp(\,i\,\omega t') + V^{\dagger}_{ni}\,\exp(-i\,\omega t')\right]\exp(\,i\,\omega_{ni}\, t')\, dt'
\]
\[
= \frac{1}{\hbar}\left[\frac{1 - \exp[\,i\,(\omega_{ni} + \omega)\, t]}{\omega_{ni} + \omega}\, V_{ni} + \frac{1 - \exp[\,i\,(\omega_{ni} - \omega)\, t]}{\omega_{ni} - \omega}\, V^{\dagger}_{ni}\right], \tag{6.227}
\]
where
\[
V_{ni} = \langle n|V|i\rangle, \tag{6.228}
\]
\[
V^{\dagger}_{ni} = \langle n|V^{\dagger}|i\rangle = \langle i|V|n\rangle^{*}. \tag{6.229}
\]
This formula is analogous to Eq. (6.208), provided that
\[
\omega_{ni} = \frac{E_n - E_i}{\hbar} \rightarrow \omega_{ni} \pm \omega. \tag{6.230}
\]
Thus, it follows from the previous analysis that the transition probability $P_{i\rightarrow n}(t) = |c_n^{(1)}|^{2}$ is only appreciable in the limit $t \rightarrow \infty$ if
\[
\omega_{ni} + \omega \simeq 0 \hspace{1cm}{\rm or}\hspace{1cm} E_n \simeq E_i - \hbar\,\omega, \tag{6.231}
\]
\[
\omega_{ni} - \omega \simeq 0 \hspace{1cm}{\rm or}\hspace{1cm} E_n \simeq E_i + \hbar\,\omega. \tag{6.232}
\]
Clearly, (6.231) corresponds to the first term on the right-hand side of Eq. (6.227), and (6.232) corresponds to the second term. The former term describes a process by which the system gives up energy $\hbar\,\omega$ to the perturbing field, whilst making a transition to a final state whose energy level is less than that of the initial state
by $\hbar\,\omega$. This process is known as stimulated emission. The latter term describes a process by which the system gains energy $\hbar\,\omega$ from the perturbing field, whilst making a transition to a final state whose energy level exceeds that of the initial state by $\hbar\,\omega$. This process is known as absorption. In both cases, the total energy (i.e., that of the system plus the perturbing field) is conserved.
By analogy with Eq. (6.221),
\[
w_{i\rightarrow [n]} = \frac{2\pi}{\hbar}\,\overline{|V_{ni}|^{2}}\,\rho(E_n)\,\bigg|_{E_n = E_i - \hbar\omega}, \tag{6.233}
\]
\[
w_{i\rightarrow [n]} = \frac{2\pi}{\hbar}\,\overline{|V^{\dagger}_{ni}|^{2}}\,\rho(E_n)\,\bigg|_{E_n = E_i + \hbar\omega}. \tag{6.234}
\]
Equation (6.233) specifies the transition rate for stimulated emission, whereas Eq. (6.234) gives the transition rate for absorption. These equations are more usually written
\[
w_{i\rightarrow n} = \frac{2\pi}{\hbar}\, |V_{ni}|^{2}\,\delta(E_n - E_i + \hbar\,\omega), \tag{6.235}
\]
\[
w_{i\rightarrow n} = \frac{2\pi}{\hbar}\, |V^{\dagger}_{ni}|^{2}\,\delta(E_n - E_i - \hbar\,\omega). \tag{6.236}
\]
It is clear from Eqs. (6.228)–(6.229) that $|V^{\dagger}_{ni}|^{2} = |V_{in}|^{2}$. It follows from Eqs. (6.233)–(6.234) that
\[
\frac{w_{i\rightarrow [n]}}{\rho(E_n)} = \frac{w_{n\rightarrow [i]}}{\rho(E_i)}. \tag{6.237}
\]
In other words, the rate of stimulated emission, divided by the density of final states for stimulated emission, equals the rate of absorption, divided by the density of final states for absorption. This result, which expresses a fundamental symmetry between absorption and stimulated emission, is known as detailed balancing, and is very important in statistical mechanics.
6.15 Absorption and stimulated emission of radiation
Let us use some of the results of time-dependent perturbation theory to investigate the interaction of an atomic electron with classical (i.e., non-quantized) electromagnetic radiation.
The unperturbed Hamiltonian is
\[
H_0 = \frac{p^{2}}{2\, m_e} + V_0(r). \tag{6.238}
\]
The standard classical prescription for obtaining the Hamiltonian of a particle of charge $q$ in the presence of an electromagnetic field is
\[
{\bf p} \rightarrow {\bf p} + q\,{\bf A}, \tag{6.239}
\]
\[
H \rightarrow H - q\,\phi, \tag{6.240}
\]
where ${\bf A}({\bf r})$ is the vector potential and $\phi({\bf r})$ is the scalar potential. Note that
\[
{\bf E} = -\nabla\phi - \frac{\partial {\bf A}}{\partial t}, \tag{6.241}
\]
\[
{\bf B} = \nabla\times{\bf A}. \tag{6.242}
\]
This prescription also works in quantum mechanics. Thus, the Hamiltonian of an atomic electron placed in an electromagnetic field is
\[
H = \frac{({\bf p} - e\,{\bf A})^{2}}{2\, m_e} + e\,\phi + V_0(r), \tag{6.243}
\]
where ${\bf A}$ and $\phi$ are functions of the position operators. The above equation can be written
\[
H = \frac{p^{2} - e\,{\bf A}\!\cdot\!{\bf p} - e\,{\bf p}\!\cdot\!{\bf A} + e^{2} A^{2}}{2\, m_e} + e\,\phi + V_0(r). \tag{6.244}
\]
Now,
\[
{\bf p}\!\cdot\!{\bf A} = {\bf A}\!\cdot\!{\bf p}, \tag{6.245}
\]
provided that we adopt the gauge $\nabla\!\cdot\!{\bf A} = 0$. Hence,
\[
H = \frac{p^{2}}{2\, m_e} - \frac{e\,{\bf A}\!\cdot\!{\bf p}}{m_e} + \frac{e^{2} A^{2}}{2\, m_e} + e\,\phi + V_0(r). \tag{6.246}
\]

Suppose that the perturbation corresponds to a monochromatic plane-wave, for which
\[
\phi = 0, \tag{6.247}
\]
\[
{\bf A} = 2\, A_0\,\hat{\boldsymbol{\epsilon}}\,\cos\!\left(\frac{\omega}{c}\,\hat{\bf n}\!\cdot\!{\bf r} - \omega t\right), \tag{6.248}
\]
where $\hat{\boldsymbol{\epsilon}}$ and $\hat{\bf n}$ are unit vectors which specify the direction of polarization and the direction of propagation, respectively. Note that $\hat{\boldsymbol{\epsilon}}\!\cdot\!\hat{\bf n} = 0$. The Hamiltonian becomes
\[
H = H_0 + H_1(t), \tag{6.249}
\]
with
\[
H_0 = \frac{p^{2}}{2\, m_e} + V_0(r), \tag{6.250}
\]
and
\[
H_1 \simeq -\frac{e\,{\bf A}\!\cdot\!{\bf p}}{m_e}, \tag{6.251}
\]
where the $A^{2}$ term, which is second order in $A_0$, has been neglected.

The perturbing Hamiltonian can be written
\[
H_1 = -\frac{e\, A_0\,\hat{\boldsymbol{\epsilon}}\!\cdot\!{\bf p}}{m_e}\left(\exp[\,i\,(\omega/c)\,\hat{\bf n}\!\cdot\!{\bf r} - i\,\omega t] + \exp[-i\,(\omega/c)\,\hat{\bf n}\!\cdot\!{\bf r} + i\,\omega t]\right). \tag{6.252}
\]
This has the same form as Eq. (6.226), provided that
\[
V = -\frac{e\, A_0\,\hat{\boldsymbol{\epsilon}}\!\cdot\!{\bf p}}{m_e}\,\exp[-i\,(\omega/c)\,\hat{\bf n}\!\cdot\!{\bf r}\,]. \tag{6.253}
\]
It is clear, by analogy with the previous analysis, that the first term on the right-hand side of Eq. (6.252) describes the absorption of a photon of energy $\hbar\,\omega$, whereas the second term describes the stimulated emission of a photon of energy $\hbar\,\omega$. It follows from Eq. (6.236) that the rate of absorption is
\[
w_{i\rightarrow n} = \frac{2\pi}{\hbar}\,\frac{e^{2}}{m_e^{\,2}}\, |A_0|^{2}\, |\langle n|\exp[\,i\,(\omega/c)\,\hat{\bf n}\!\cdot\!{\bf r}\,]\;\hat{\boldsymbol{\epsilon}}\!\cdot\!{\bf p}\,|i\rangle|^{2}\,\delta(E_n - E_i - \hbar\,\omega). \tag{6.254}
\]
The absorption cross-section is defined as the ratio of the power absorbed by the atom to the incident power per unit area in the electromagnetic field. Now, the energy density of an electromagnetic field is
\[
U = \frac{1}{2}\left(\frac{\epsilon_0\, E_0^{\,2}}{2} + \frac{B_0^{\,2}}{2\,\mu_0}\right), \tag{6.255}
\]
where $E_0$ and $B_0 = E_0/c = 2\, A_0\,\omega/c$ are the peak electric and magnetic field-strengths, respectively. The incident power per unit area of the electromagnetic field is
\[
c\, U = 2\,\epsilon_0\, c\,\omega^{2}\, |A_0|^{2}. \tag{6.256}
\]
Now,
\[
\sigma_{\rm abs} = \frac{\hbar\,\omega\, w_{i\rightarrow n}}{c\, U}, \tag{6.257}
\]
so
\[
\sigma_{\rm abs} = \frac{\pi\, e^{2}}{\epsilon_0\, m_e^{\,2}\,\omega\, c}\, |\langle n|\exp[\,i\,(\omega/c)\,\hat{\bf n}\!\cdot\!{\bf r}\,]\;\hat{\boldsymbol{\epsilon}}\!\cdot\!{\bf p}\,|i\rangle|^{2}\,\delta(E_n - E_i - \hbar\,\omega). \tag{6.258}
\]
6.16 The electric dipole approximation
In general, the wave-length of the type of electromagnetic radiation which induces, or is emitted during, transitions between different atomic energy levels is much larger than the typical size of a light atom. Thus,
\[
\exp[\,i\,(\omega/c)\,\hat{\bf n}\!\cdot\!{\bf r}\,] = 1 + i\,\frac{\omega}{c}\,\hat{\bf n}\!\cdot\!{\bf r} + \cdots, \tag{6.259}
\]
can be approximated by its first term, unity (remember that $\omega/c = 2\pi/\lambda$). This approximation is known as the electric dipole approximation. It follows that
\[
\langle n|\exp[\,i\,(\omega/c)\,\hat{\bf n}\!\cdot\!{\bf r}\,]\;\hat{\boldsymbol{\epsilon}}\!\cdot\!{\bf p}\,|i\rangle \simeq \hat{\boldsymbol{\epsilon}}\!\cdot\!\langle n|{\bf p}|i\rangle. \tag{6.260}
\]
It is readily demonstrated that
\[
[{\bf r}, H_0] = \frac{i\,\hbar\,{\bf p}}{m_e}, \tag{6.261}
\]
so
\[
\langle n|{\bf p}|i\rangle = -i\,\frac{m_e}{\hbar}\,\langle n|[{\bf r}, H_0]|i\rangle = i\, m_e\,\omega_{ni}\,\langle n|{\bf r}|i\rangle. \tag{6.262}
\]
Using Eq. (6.258), we obtain
\[
\sigma_{\rm abs} = 4\pi^{2}\,\alpha\,\omega_{ni}\, |\langle n|\hat{\boldsymbol{\epsilon}}\!\cdot\!{\bf r}|i\rangle|^{2}\,\delta(\omega - \omega_{ni}), \tag{6.263}
\]
where $\alpha = e^{2}/(2\,\epsilon_0\, h\, c) = 1/137$ is the fine structure constant. It is clear that if the absorption cross-section is regarded as a function of the applied frequency, $\omega$, then it exhibits a sharp maximum at $\omega = \omega_{ni} = (E_n - E_i)/\hbar$.

Suppose that the radiation is polarized in the $z$-direction, so that $\hat{\boldsymbol{\epsilon}} = \hat{\bf z}$. We have already seen, from Sect. 6.4, that $\langle n|z|i\rangle = 0$ unless the initial and final states satisfy
\[
\Delta l = \pm 1, \tag{6.264}
\]
\[
\Delta m = 0. \tag{6.265}
\]
Here, $l$ is the quantum number describing the total orbital angular momentum of the electron, and $m$ is the quantum number describing the projection of the orbital angular momentum along the $z$-axis. It is easily demonstrated that $\langle n|x|i\rangle$ and $\langle n|y|i\rangle$ are only non-zero if
\[
\Delta l = \pm 1, \tag{6.266}
\]
\[
\Delta m = \pm 1. \tag{6.267}
\]
Thus, for generally directed radiation $\langle n|{\bf r}|i\rangle$ is only non-zero if
\[
\Delta l = \pm 1, \tag{6.268}
\]
\[
\Delta m = 0, \pm 1. \tag{6.269}
\]
These are termed the selection rules for electric dipole transitions. It is clear, for
instance, that the electric dipole approximation allows a transition from a 2p
state to a 1s state, but disallows a transition from a 2s to a 1s state. The latter
transition is called a forbidden transition.
Forbidden transitions are not strictly forbidden. Instead, they take place at a far lower rate than transitions which are allowed according to the electric dipole approximation. After electric dipole transitions, the next most likely type of transition is a magnetic dipole transition, which is due to the interaction between the electron spin and the oscillating magnetic field of the incident electromagnetic radiation. Magnetic dipole transitions are typically about $10^{5}$ times more unlikely than similar electric dipole transitions. The first-order term in Eq. (6.259) yields
so-called electric quadrupole transitions. These are typically about $10^{8}$ times more unlikely than electric dipole transitions. Magnetic dipole and electric quadrupole transitions satisfy different selection rules than electric dipole transitions: for instance, the selection rules for electric quadrupole transitions are $\Delta l = 0, \pm 2$. Thus, transitions which are forbidden as electric dipole transitions may well be allowed as magnetic dipole or electric quadrupole transitions.
Integrating Eq. (6.263) over all possible frequencies of the incident radiation yields
\[
\int \sigma_{\rm abs}(\omega)\, d\omega = \sum_n 4\pi^{2}\,\alpha\,\omega_{ni}\, |\langle n|\hat{\boldsymbol{\epsilon}}\!\cdot\!{\bf r}|i\rangle|^{2}. \tag{6.270}
\]
Suppose, for the sake of definiteness, that the incident radiation is polarized in the $x$-direction. It is easily demonstrated that
\[
[x, [x, H_0]\,] = -\frac{\hbar^{2}}{m_e}. \tag{6.271}
\]
Thus,
\[
\langle i|[x, [x, H_0]\,]|i\rangle = \langle i|x^{2} H_0 + H_0\, x^{2} - 2\, x\, H_0\, x|i\rangle = -\frac{\hbar^{2}}{m_e}, \tag{6.272}
\]
giving
\[
2\sum_n \left(\langle i|x|n\rangle E_i \langle n|x|i\rangle - \langle i|x|n\rangle E_n \langle n|x|i\rangle\right) = -\frac{\hbar^{2}}{m_e}. \tag{6.273}
\]
It follows that
\[
\frac{2\, m_e}{\hbar}\sum_n \omega_{ni}\, |\langle n|x|i\rangle|^{2} = 1. \tag{6.274}
\]
This is known as the Thomas-Reiche-Kuhn sum rule. According to this rule, Eq. (6.270) reduces to
\[
\int \sigma_{\rm abs}(\omega)\, d\omega = \frac{2\pi^{2}\,\alpha\,\hbar}{m_e} = \frac{\pi\, e^{2}}{2\,\epsilon_0\, m_e\, c}. \tag{6.275}
\]
Note that $\hbar$ has dropped out of the final result. In fact, the above formula is exactly the same as that obtained classically by treating the electron as an oscillator.
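For reference, the integrated cross-section of Eq. (6.275) is easily evaluated in SI units (a minimal Python sketch; the equivalence to $2\pi^2 r_e c$, with $r_e$ the classical electron radius, is noted in the comment).

```python
# Numerical value of the integrated absorption cross-section of Eq. (6.275),
# pi e^2 / (2 epsilon_0 m_e c), which equals 2 pi^2 r_e c (r_e = classical electron radius).
from scipy.constants import e, epsilon_0, m_e, c, pi

sigma_int = pi * e**2 / (2 * epsilon_0 * m_e * c)
print(sigma_int, "m^2 (rad/s)")    # ~1.7e-5 m^2 rad/s, integrated over angular frequency
```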
6.17 Energy-shifts and decay-widths
We have examined how a state $|n\rangle$, other than the initial state $|i\rangle$, becomes populated as a result of some time-dependent perturbation applied to the system. Let us now consider how the initial state becomes depopulated.

It is convenient to gradually turn on the perturbation from zero at $t = -\infty$. Thus,
\[
H_1(t) = \exp(\eta\, t)\, H_1, \tag{6.276}
\]
where $\eta$ is small and positive, and $H_1$ is a constant.

In the remote past, $t \rightarrow -\infty$, the system is assumed to be in the initial state $|i\rangle$. Thus, $c_i(t\rightarrow -\infty) = 1$, and $c_{n\neq i}(t\rightarrow -\infty) = 0$. Basically, we want to calculate the time evolution of the coefficient $c_i(t)$. First, however, let us check that our previous Fermi golden rule result still applies when the perturbing potential is turned on slowly, instead of very suddenly. For $c_{n\neq i}(t)$ we have from Eqs. (6.200)–(6.201) that
\[
c_n^{(0)}(t) = 0, \tag{6.277}
\]
\[
c_n^{(1)}(t) = -\frac{i}{\hbar}\, H_{ni}\int_{-\infty}^{t}\exp[\,(\eta + i\,\omega_{ni})\, t'\,]\, dt' = -\frac{i}{\hbar}\, H_{ni}\,\frac{\exp[\,(\eta + i\,\omega_{ni})\, t\,]}{\eta + i\,\omega_{ni}}, \tag{6.278}
\]
where $H_{ni} = \langle n|H_1|i\rangle$. It follows that, to first-order, the transition probability from state $|i\rangle$ to state $|n\rangle$ is
\[
P_{i\rightarrow n}(t) = |c_n^{(1)}|^{2} = \frac{|H_{ni}|^{2}}{\hbar^{2}}\,\frac{\exp(2\,\eta\, t)}{\eta^{2} + \omega_{ni}^{\,2}}. \tag{6.279}
\]
The transition rate is given by
\[
w_{i\rightarrow n}(t) = \frac{dP_{i\rightarrow n}}{dt} = \frac{2\, |H_{ni}|^{2}}{\hbar^{2}}\,\frac{\eta\,\exp(2\,\eta\, t)}{\eta^{2} + \omega_{ni}^{\,2}}. \tag{6.280}
\]
Consider the limit $\eta \rightarrow 0$. In this limit, $\exp(\eta\, t) \rightarrow 1$, but
\[
\lim_{\eta\rightarrow 0}\frac{\eta}{\eta^{2} + \omega_{ni}^{\,2}} = \pi\,\delta(\omega_{ni}) = \pi\,\hbar\,\delta(E_n - E_i). \tag{6.281}
\]
Thus, Eq. (6.280) yields the standard Fermi golden rule result
\[
w_{i\rightarrow n} = \frac{2\pi}{\hbar}\, |H_{ni}|^{2}\,\delta(E_n - E_i). \tag{6.282}
\]
It is clear that the delta-function in the above formula actually represents a function which is highly peaked at some particular energy. The width of the peak is determined by how fast the perturbation is switched on.
Let us now calculate $c_i(t)$ using Eqs. (6.200)–(6.202). We have
\[
c_i^{(0)}(t) = 1, \tag{6.283}
\]
\[
c_i^{(1)}(t) = -\frac{i}{\hbar}\, H_{ii}\int_{-\infty}^{t}\exp(\eta\, t')\, dt' = -\frac{i}{\hbar}\, H_{ii}\,\frac{\exp(\eta\, t)}{\eta}, \tag{6.284}
\]
\[
c_i^{(2)}(t) = \left(\frac{-i}{\hbar}\right)^{2}\sum_m |H_{mi}|^{2}\int_{-\infty}^{t} dt' \int_{-\infty}^{t'} dt''\,\exp[\,(\eta + i\,\omega_{im})\, t'\,]\,\exp[\,(\eta + i\,\omega_{mi})\, t''\,]
= \left(\frac{-i}{\hbar}\right)^{2}\sum_m |H_{mi}|^{2}\,\frac{\exp(2\,\eta\, t)}{2\,\eta\,(\eta + i\,\omega_{mi})}. \tag{6.285}
\]
Thus, to second-order we have
\[
c_i(t) \simeq 1 + \left(\frac{-i}{\hbar}\right) H_{ii}\,\frac{\exp(\eta\, t)}{\eta} + \left(\frac{-i}{\hbar}\right)^{2} |H_{ii}|^{2}\,\frac{\exp(2\,\eta\, t)}{2\,\eta^{2}} + \left(\frac{-i}{\hbar}\right)\sum_{m\neq i} |H_{mi}|^{2}\,\frac{\exp(2\,\eta\, t)}{2\,\eta\,(E_i - E_m + i\,\hbar\,\eta)}. \tag{6.286}
\]
Let us now consider the ratio $\dot{c}_i/c_i$, where $\dot{c}_i \equiv dc_i/dt$. Using Eq. (6.286), we can evaluate this ratio in the limit $\eta \rightarrow 0$. We obtain
\[
\frac{\dot{c}_i}{c_i} \simeq \left[\left(\frac{-i}{\hbar}\right) H_{ii} + \left(\frac{-i}{\hbar}\right)^{2}\frac{|H_{ii}|^{2}}{\eta} + \left(\frac{-i}{\hbar}\right)\sum_{m\neq i}\frac{|H_{mi}|^{2}}{E_i - E_m + i\,\hbar\,\eta}\right]\bigg/\left[1 - \frac{i}{\hbar}\,\frac{H_{ii}}{\eta}\right]
\simeq \left(\frac{-i}{\hbar}\right) H_{ii} + \lim_{\eta\rightarrow 0}\left(\frac{-i}{\hbar}\right)\sum_{m\neq i}\frac{|H_{mi}|^{2}}{E_i - E_m + i\,\hbar\,\eta}. \tag{6.287}
\]
This result is formally correct to second-order in perturbed quantities. Note that the right-hand side of Eq. (6.287) is independent of time. We can write
\[
\frac{\dot{c}_i}{c_i} = \left(\frac{-i}{\hbar}\right)\Delta_i, \tag{6.288}
\]
where
\[
\Delta_i = H_{ii} + \lim_{\eta\rightarrow 0}\sum_{m\neq i}\frac{|H_{mi}|^{2}}{E_i - E_m + i\,\hbar\,\eta} \tag{6.289}
\]
is a constant. According to a well-known result in pure mathematics,
\[
\lim_{\epsilon\rightarrow 0}\frac{1}{x + i\,\epsilon} = {\rm P}\,\frac{1}{x} - i\,\pi\,\delta(x), \tag{6.290}
\]
where $\epsilon > 0$, and ${\rm P}$ denotes the principal part. It follows that
\[
\Delta_i = H_{ii} + {\rm P}\sum_{m\neq i}\frac{|H_{mi}|^{2}}{E_i - E_m} - i\,\pi\sum_{m\neq i}|H_{mi}|^{2}\,\delta(E_i - E_m). \tag{6.291}
\]
It is convenient to normalize the solution of Eq. (6.288) so that $c_i(0) = 1$. Thus, we obtain
\[
c_i(t) = \exp\!\left(\frac{-i\,\Delta_i\, t}{\hbar}\right). \tag{6.292}
\]
According to Eq. (6.149), the time evolution of the initial state ket $|i\rangle$ is given by
\[
|i, t\rangle = \exp[-i\,(\Delta_i + E_i)\, t/\hbar]\, |i\rangle. \tag{6.293}
\]
We can rewrite this result as
\[
|i, t\rangle = \exp(-i\,[E_i + {\rm Re}(\Delta_i)\,]\, t/\hbar)\,\exp[\,{\rm Im}(\Delta_i)\, t/\hbar]\, |i\rangle. \tag{6.294}
\]
It is clear that the real part of $\Delta_i$ gives rise to a simple shift in energy of state $|i\rangle$, whereas the imaginary part of $\Delta_i$ governs the growth or decay of this state. Thus,
\[
|i, t\rangle = \exp[-i\,(E_i + \Delta E_i)\, t/\hbar]\,\exp(-\Gamma_i\, t/2\,\hbar)\, |i\rangle, \tag{6.295}
\]
where
\[
\Delta E_i = {\rm Re}(\Delta_i) = H_{ii} + {\rm P}\sum_{m\neq i}\frac{|H_{mi}|^{2}}{E_i - E_m}, \tag{6.296}
\]
and
\[
\frac{\Gamma_i}{\hbar} = -\frac{2\,{\rm Im}(\Delta_i)}{\hbar} = \frac{2\pi}{\hbar}\sum_{m\neq i}|H_{mi}|^{2}\,\delta(E_i - E_m). \tag{6.297}
\]
Note that the energy-shift $\Delta E_i$ is the same as that predicted by standard time-independent perturbation theory.
The probability of observing the system in state $|i\rangle$ at time $t > 0$, given that it is definitely in state $|i\rangle$ at time $t = 0$, is given by
\[
P_{i\rightarrow i}(t) = |c_i|^{2} = \exp(-\Gamma_i\, t/\hbar), \tag{6.298}
\]
where
\[
\frac{\Gamma_i}{\hbar} = \sum_{m\neq i} w_{i\rightarrow m}. \tag{6.299}
\]
Here, use has been made of Eq. (6.222). Clearly, the rate of decay of the initial state is a simple function of the transition rates to the other states. Note that the system conserves probability up to second-order in perturbed quantities, since
\[
|c_i|^{2} + \sum_{m\neq i}|c_m|^{2} \simeq (1 - \Gamma_i\, t/\hbar) + \sum_{m\neq i} w_{i\rightarrow m}\, t = 1. \tag{6.300}
\]
The quantity $\Gamma_i$ is called the decay-width of state $|i\rangle$. It is closely related to the mean lifetime of this state,
\[
\tau_i = \frac{\hbar}{\Gamma_i}, \tag{6.301}
\]
where
\[
P_{i\rightarrow i} = \exp(-t/\tau_i). \tag{6.302}
\]
According to Eq. (6.294), the amplitude of state $|i\rangle$ both oscillates and decays as time progresses. Clearly, state $|i\rangle$ is not a stationary state in the presence of the time-dependent perturbation. However, we can still represent it as a superposition of stationary states (whose amplitudes simply oscillate in time). Thus,
\[
\exp[-i\,(E_i + \Delta E_i)\, t/\hbar]\,\exp(-\Gamma_i\, t/2\,\hbar) = \int f(E)\,\exp(-i\, E\, t/\hbar)\, dE, \tag{6.303}
\]
where $f(E)$ is the weight of the stationary state with energy $E$ in the superposition. The Fourier inversion theorem yields
\[
|f(E)|^{2} \propto \frac{1}{(E - [E_i + {\rm Re}(\Delta_i)])^{2} + \Gamma_i^{\,2}/4}. \tag{6.304}
\]
In the absence of the perturbation, $|f(E)|^{2}$ is basically a delta-function centred on the unperturbed energy $E_i$ of state $|i\rangle$. In other words, state $|i\rangle$ is a stationary state whose energy is completely determined. In the presence of the perturbation, the energy of state $|i\rangle$ is shifted by ${\rm Re}(\Delta_i)$. The fact that the state is no longer stationary (i.e., it decays in time) implies that its energy cannot be exactly determined. Indeed, the energy of the state is smeared over some region of width (in energy) $\Gamma_i$ centred around the shifted energy $E_i + {\rm Re}(\Delta_i)$. The faster the decay of the state (i.e., the larger $\Gamma_i$), the more its energy is spread out. This effect is clearly a manifestation of the energy-time uncertainty relation $\Delta E\,\Delta t \sim \hbar$. One consequence of this effect is the existence of a natural width of spectral lines associated with the decay of some excited state to the ground state (or any other lower energy state). The uncertainty in energy of the excited state, due to its propensity to decay, gives rise to a slight smearing (in wave-length) of the spectral line associated with the transition. Strong lines, which correspond to fast transitions, are smeared out more than weak lines. For this reason, spectroscopists generally favour forbidden lines for Doppler shift measurements. Such lines are not as bright as those corresponding to allowed transitions, but they are a lot sharper.
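The connection between decay-width, lifetime, and natural linewidth is summarized numerically below (Python; the nanosecond-scale lifetime is an assumed value, typical of an allowed atomic transition, rather than a number from the text).

```python
# Lifetime / decay-width relation tau = hbar / Gamma, Eq. (6.301), and the
# corresponding natural (energy) width of a spectral line.
# The lifetime below is an assumed illustrative value.
from scipy.constants import hbar, e

tau = 1.0e-8                  # assumed lifetime, seconds
Gamma = hbar / tau            # decay-width, joules
print(Gamma / e, "eV")        # ~6.6e-8 eV of natural width
```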
7 Scattering theory
7.1 Introduction
Historically, data regarding quantum phenomena has been obtained from two
main sources—the study of spectroscopic lines, and scattering experiments. We
have already developed theories which account for some aspects of the spectra of
hydrogen-like atoms. Let us now examine the quantum theory of scattering.
7.2 The Lippmann-Schwinger equation
Consider time-independent scattering theory, for which the Hamiltonian of the system is written
\[
H = H_0 + H_1, \tag{7.1}
\]
where $H_0$ is the Hamiltonian of a free particle of mass $m$,
\[
H_0 = \frac{p^{2}}{2\, m}, \tag{7.2}
\]
and $H_1$ represents the non-time-varying source of the scattering. Let $|\phi\rangle$ be an energy eigenket of $H_0$,
\[
H_0\,|\phi\rangle = E\,|\phi\rangle, \tag{7.3}
\]
whose wave-function $\langle{\bf r}'|\phi\rangle$ is $\phi({\bf r}')$. This state is a plane-wave state or, possibly, a spherical-wave state. Schrödinger's equation for the scattering problem is
\[
(H_0 + H_1)\,|\psi\rangle = E\,|\psi\rangle, \tag{7.4}
\]
where $|\psi\rangle$ is an energy eigenstate of the total Hamiltonian whose wave-function $\langle{\bf r}'|\psi\rangle$ is $\psi({\bf r}')$. In general, both $H_0$ and $H_0 + H_1$ have continuous energy spectra: i.e., their energy eigenstates are unbound. We require a solution of Eq. (7.4) which satisfies the boundary condition $|\psi\rangle \rightarrow |\phi\rangle$ as $H_1 \rightarrow 0$. Here, $|\phi\rangle$ is a solution of the free particle Schrödinger equation, (7.3), corresponding to the same energy eigenvalue.
Formally, the desired solution can be written
\[
|\psi\rangle = |\phi\rangle + \frac{1}{E - H_0}\, H_1\, |\psi\rangle. \tag{7.5}
\]
Note that we can recover Eq. (7.4) by operating on the above equation with $E - H_0$, and making use of Eq. (7.3). Furthermore, the solution satisfies the boundary condition $|\psi\rangle \rightarrow |\phi\rangle$ as $H_1 \rightarrow 0$. Unfortunately, the operator $(E - H_0)^{-1}$ is singular: i.e., it produces infinities when it operates on an eigenstate of $H_0$ corresponding to the eigenvalue $E$. We need a prescription for dealing with these infinities, otherwise the above solution is useless. The standard prescription is to make the energy eigenvalue $E$ slightly complex. Thus,
\[
|\psi^{\pm}\rangle = |\phi\rangle + \frac{1}{E - H_0 \pm i\,\epsilon}\, H_1\, |\psi^{\pm}\rangle, \tag{7.6}
\]
where $\epsilon$ is real, positive, and small. Equation (7.6) is called the Lippmann-Schwinger equation, and is non-singular as long as $\epsilon > 0$. The physical significance of the $\pm$ signs will become apparent later on.

The Lippmann-Schwinger equation can be converted into an integral equation via left multiplication by $\langle{\bf r}|$. Thus,
\[
\psi^{\pm}({\bf r}) = \phi({\bf r}) + \int \left\langle {\bf r}\left|\frac{1}{E - H_0 \pm i\,\epsilon}\right|{\bf r}'\right\rangle \langle{\bf r}'|H_1|\psi^{\pm}\rangle\, d^{3}r'. \tag{7.7}
\]
Adopting the Schrödinger representation, we can write the scattering problem (7.4) in the form
\[
(\nabla^{2} + k^{2})\,\psi({\bf r}) = \frac{2\, m}{\hbar^{2}}\,\langle{\bf r}|H_1|\psi\rangle, \tag{7.8}
\]
where
\[
E = \frac{\hbar^{2} k^{2}}{2\, m}. \tag{7.9}
\]
This equation is called Helmholtz's equation, and can be inverted using standard Green's function techniques. Thus,
\[
\psi({\bf r}) = \phi({\bf r}) + \frac{2\, m}{\hbar^{2}}\int G({\bf r}, {\bf r}')\,\langle{\bf r}'|H_1|\psi\rangle\, d^{3}r', \tag{7.10}
\]
where
\[
(\nabla^{2} + k^{2})\, G({\bf r}, {\bf r}') = \delta({\bf r} - {\bf r}'). \tag{7.11}
\]
Note that the solution (7.10) satisfies the boundary condition $|\psi\rangle \rightarrow |\phi\rangle$ as $H_1 \rightarrow 0$. As is well-known, the Green's function for the Helmholtz problem is given by
\[
G({\bf r}, {\bf r}') = -\frac{\exp(\pm i\, k\, |{\bf r} - {\bf r}'|\,)}{4\pi\,|{\bf r} - {\bf r}'|}. \tag{7.12}
\]
Thus, Eq. (7.10) becomes
\[
\psi^{\pm}({\bf r}) = \phi({\bf r}) - \frac{2\, m}{\hbar^{2}}\int \frac{\exp(\pm i\, k\, |{\bf r} - {\bf r}'|\,)}{4\pi\,|{\bf r} - {\bf r}'|}\,\langle{\bf r}'|H_1|\psi\rangle\, d^{3}r'. \tag{7.13}
\]
A comparison of Eqs. (7.7) and (7.13) suggests that the kernel to Eq. (7.7) takes the form
\[
\left\langle {\bf r}\left|\frac{1}{E - H_0 \pm i\,\epsilon}\right|{\bf r}'\right\rangle = -\frac{2\, m}{\hbar^{2}}\,\frac{\exp(\pm i\, k\, |{\bf r} - {\bf r}'|\,)}{4\pi\,|{\bf r} - {\bf r}'|}. \tag{7.14}
\]
It is not entirely clear that the $\pm$ signs correspond on both sides of this equation. In fact, they do, as is easily proved by a more rigorous derivation of this result.

Let us suppose that the scattering Hamiltonian, $H_1$, is only a function of the position operators. This implies that
\[
\langle{\bf r}'|H_1|{\bf r}\rangle = V({\bf r})\,\delta({\bf r} - {\bf r}'). \tag{7.15}
\]
We can write
\[
\langle{\bf r}'|H_1|\psi^{\pm}\rangle = \int \langle{\bf r}'|H_1|{\bf r}''\rangle \langle{\bf r}''|\psi^{\pm}\rangle\, d^{3}r'' = V({\bf r}')\,\psi^{\pm}({\bf r}'). \tag{7.16}
\]
Thus, the integral equation (7.13) simplifies to
\[
\psi^{\pm}({\bf r}) = \phi({\bf r}) - \frac{2\, m}{\hbar^{2}}\int \frac{\exp(\pm i\, k\, |{\bf r} - {\bf r}'|)}{4\pi\,|{\bf r} - {\bf r}'|}\, V({\bf r}')\,\psi^{\pm}({\bf r}')\, d^{3}r'. \tag{7.17}
\]

Suppose that the initial state $|\phi\rangle$ is a plane-wave with wave-vector ${\bf k}$ (i.e., a stream of particles of definite momentum ${\bf p} = \hbar\,{\bf k}$). The ket corresponding to this state is denoted $|{\bf k}\rangle$. The associated wave-function takes the form
\[
\langle{\bf r}|{\bf k}\rangle = \frac{\exp(\,i\,{\bf k}\!\cdot\!{\bf r})}{(2\pi)^{3/2}}. \tag{7.18}
\]
The wave-function is normalized such that
\[
\langle{\bf k}|{\bf k}'\rangle = \int \langle{\bf k}|{\bf r}\rangle\langle{\bf r}|{\bf k}'\rangle\, d^{3}r = \int \frac{\exp[-i\,{\bf r}\!\cdot\!({\bf k} - {\bf k}')]}{(2\pi)^{3}}\, d^{3}r = \delta({\bf k} - {\bf k}'). \tag{7.19}
\]
Suppose that the scattering potential $V({\bf r})$ is only non-zero in some relatively localized region centred on the origin ($r = 0$). Let us calculate the wave-function $\psi({\bf r})$ a long way from the scattering region. In other words, let us adopt the ordering $r \gg r'$. It is easily demonstrated that
\[
|{\bf r} - {\bf r}'| \simeq r - \hat{\bf r}\!\cdot\!{\bf r}' \tag{7.20}
\]
to first-order in $r'/r$, where
\[
\hat{\bf r} = \frac{\bf r}{r} \tag{7.21}
\]
is a unit vector which points from the scattering region to the observation point. Let us define
\[
{\bf k}' = k\,\hat{\bf r}. \tag{7.22}
\]
Clearly, ${\bf k}'$ is the wave-vector for particles which possess the same energy as the incoming particles (i.e., $k' = k$), but propagate from the scattering region to the observation point. Note that
\[
\exp(\pm i\, k\, |{\bf r} - {\bf r}'|\,) \simeq \exp(\pm i\, k\, r)\,\exp(\mp i\,{\bf k}'\!\cdot\!{\bf r}'). \tag{7.23}
\]
In the large-$r$ limit, Eq. (7.17) reduces to
\[
\psi^{\pm}({\bf r}) \simeq \frac{\exp(\,i\,{\bf k}\!\cdot\!{\bf r})}{(2\pi)^{3/2}} - \frac{m}{2\pi\,\hbar^{2}}\,\frac{\exp(\pm i\, k\, r)}{r}\int \exp(\mp i\,{\bf k}'\!\cdot\!{\bf r}')\, V({\bf r}')\,\psi^{\pm}({\bf r}')\, d^{3}r'. \tag{7.24}
\]
The first term on the right-hand side is the incident wave. The second term represents a spherical wave centred on the scattering region. The plus sign (on $\psi^{\pm}$) corresponds to a wave propagating away from the scattering region, whereas the minus sign corresponds to a wave propagating towards the scattering region.
It is obvious that the former represents the physical solution. Thus, the wave-function a long way from the scattering region can be written
\[
\psi({\bf r}) = \frac{1}{(2\pi)^{3/2}}\left[\exp(\,i\,{\bf k}\!\cdot\!{\bf r}) + \frac{\exp(\,i\, k\, r)}{r}\, f({\bf k}', {\bf k})\right], \tag{7.25}
\]
where
\[
f({\bf k}', {\bf k}) = -\frac{(2\pi)^{2}\, m}{\hbar^{2}}\int \frac{\exp(-i\,{\bf k}'\!\cdot\!{\bf r}')}{(2\pi)^{3/2}}\, V({\bf r}')\,\psi({\bf r}')\, d^{3}r' = -\frac{(2\pi)^{2}\, m}{\hbar^{2}}\,\langle{\bf k}'|H_1|\psi\rangle. \tag{7.26}
\]
Let us define the differential cross-section $d\sigma/d\Omega$ as the number of particles per unit time scattered into an element of solid angle $d\Omega$, divided by the incident flux of particles. Recall, from Sect. 4, that the probability flux (i.e., the particle flux) associated with a wave-function $\psi$ is
\[
{\bf j} = \frac{\hbar}{m}\,{\rm Im}(\psi^{*}\,\nabla\psi). \tag{7.27}
\]
Thus, the probability flux associated with the incident wave-function,
\[
\frac{\exp(\,i\,{\bf k}\!\cdot\!{\bf r})}{(2\pi)^{3/2}}, \tag{7.28}
\]
is
\[
{\bf j}_{\rm inci} = \frac{\hbar}{(2\pi)^{3}\, m}\,{\bf k}. \tag{7.29}
\]
Likewise, the probability flux associated with the scattered wave-function,
\[
\frac{\exp(\,i\, k\, r)}{(2\pi)^{3/2}}\,\frac{f({\bf k}', {\bf k})}{r}, \tag{7.30}
\]
is
\[
{\bf j}_{\rm scat} = \frac{\hbar}{(2\pi)^{3}\, m}\,\frac{|f({\bf k}', {\bf k})|^{2}}{r^{2}}\, k\,\hat{\bf r}. \tag{7.31}
\]
Now,
\[
\frac{d\sigma}{d\Omega}\, d\Omega = \frac{r^{2}\, d\Omega\, |{\bf j}_{\rm scat}|}{|{\bf j}_{\rm inci}|}, \tag{7.32}
\]
giving
\[
\frac{d\sigma}{d\Omega} = |f({\bf k}', {\bf k})|^{2}. \tag{7.33}
\]
Thus, $|f({\bf k}', {\bf k})|^{2}$ gives the differential cross-section for particles with incident momentum $\hbar\,{\bf k}$ to be scattered into states whose momentum vectors are directed in a range of solid angles $d\Omega$ about $\hbar\,{\bf k}'$. Note that the scattered particles possess the same energy as the incoming particles (i.e., $k' = k$). This is always the case for scattering Hamiltonians of the form shown in Eq. (7.15).
7.3 The Born approximation
Equation (7.33) is not particularly useful, as it stands, because the quantity $f({\bf k}', {\bf k})$ depends on the unknown ket $|\psi\rangle$. Recall that $\psi({\bf r}) = \langle{\bf r}|\psi\rangle$ is the solution of the integral equation
\[
\psi({\bf r}) = \phi({\bf r}) - \frac{m}{2\pi\,\hbar^{2}}\,\frac{\exp(\,i\, k\, r)}{r}\int \exp(-i\,{\bf k}'\!\cdot\!{\bf r}')\, V({\bf r}')\,\psi({\bf r}')\, d^{3}r', \tag{7.34}
\]
where $\phi({\bf r})$ is the wave-function of the incident state. According to the above equation, the total wave-function is a superposition of the incident wave-function and lots of spherical-waves emitted from the scattering region. The strength of the spherical-wave emitted at a given point is proportional to the local value of the scattering potential, $V$, as well as the local value of the wave-function, $\psi$.

Suppose that the scattering is not particularly strong. In this case, it is reasonable to suppose that the total wave-function, $\psi({\bf r})$, does not differ substantially from the incident wave-function, $\phi({\bf r})$. Thus, we can obtain an expression for $f({\bf k}', {\bf k})$ by making the substitution
\[
\psi({\bf r}) \rightarrow \phi({\bf r}) = \frac{\exp(\,i\,{\bf k}\!\cdot\!{\bf r})}{(2\pi)^{3/2}}. \tag{7.35}
\]
This is called the Born approximation.

The Born approximation yields
\[
f({\bf k}', {\bf k}) \simeq -\frac{m}{2\pi\,\hbar^{2}}\int \exp[\,i\,({\bf k} - {\bf k}')\!\cdot\!{\bf r}'\,]\, V({\bf r}')\, d^{3}r'. \tag{7.36}
\]
Thus, $f({\bf k}', {\bf k})$ is proportional to the Fourier transform of the scattering potential $V({\bf r})$ with respect to the wave-vector ${\bf q} \equiv {\bf k} - {\bf k}'$.

For a spherically symmetric potential,
\[
f({\bf k}', {\bf k}) \simeq -\frac{m}{2\pi\,\hbar^{2}}\int\!\!\int\!\!\int \exp(\,i\, q\, r'\cos\theta')\, V(r')\, r'^{\,2}\, dr'\,\sin\theta'\, d\theta'\, d\varphi', \tag{7.37}
\]
giving
\[
f({\bf k}', {\bf k}) \simeq -\frac{2\, m}{\hbar^{2}\, q}\int_0^{\infty} r'\, V(r')\,\sin(q\, r')\, dr'. \tag{7.38}
\]
Note that $f({\bf k}', {\bf k})$ is just a function of $q$ for a spherically symmetric potential. It is easily demonstrated that
\[
q \equiv |{\bf k} - {\bf k}'| = 2\, k\,\sin(\theta/2), \tag{7.39}
\]
where $\theta$ is the angle subtended between the vectors ${\bf k}$ and ${\bf k}'$. In other words, $\theta$ is the angle of scattering. Recall that the vectors ${\bf k}$ and ${\bf k}'$ have the same length by energy conservation.
Consider scattering by a Yukawa potential
\[
V(r) = \frac{V_0\,\exp(-\mu\, r)}{\mu\, r}, \tag{7.40}
\]
where $V_0$ is a constant and $1/\mu$ measures the "range" of the potential. It follows from Eq. (7.38) that
\[
f(\theta) = -\frac{2\, m\, V_0}{\hbar^{2}\,\mu}\,\frac{1}{q^{2} + \mu^{2}}, \tag{7.41}
\]
since
\[
\int_0^{\infty} \exp(-\mu\, r')\,\sin(q\, r')\, dr' = \frac{q}{\mu^{2} + q^{2}}. \tag{7.42}
\]
Thus, in the Born approximation, the differential cross-section for scattering by a Yukawa potential is
\[
\frac{d\sigma}{d\Omega} \simeq \left(\frac{2\, m\, V_0}{\hbar^{2}\,\mu}\right)^{2} \frac{1}{[2\, k^{2}\,(1 - \cos\theta) + \mu^{2}]^{2}}, \tag{7.43}
\]
given that
\[
q^{2} = 4\, k^{2}\,\sin^{2}(\theta/2) = 2\, k^{2}\,(1 - \cos\theta). \tag{7.44}
\]
The Yukawa potential reduces to the familiar Coulomb potential as $\mu \rightarrow 0$, provided that $V_0/\mu \rightarrow Z\, Z'\, e^{2}/4\pi\epsilon_0$. In this limit the Born differential cross-section becomes
\[
\frac{d\sigma}{d\Omega} \simeq \left(\frac{2\, m\, Z\, Z'\, e^{2}}{4\pi\epsilon_0\,\hbar^{2}}\right)^{2} \frac{1}{16\, k^{4}\,\sin^{4}(\theta/2)}. \tag{7.45}
\]
Recall that $\hbar\, k$ is equivalent to $|{\bf p}|$, so the above equation can be rewritten
\[
\frac{d\sigma}{d\Omega} \simeq \left(\frac{Z\, Z'\, e^{2}}{16\pi\epsilon_0\, E}\right)^{2} \frac{1}{\sin^{4}(\theta/2)}, \tag{7.46}
\]
where $E = p^{2}/2\, m$ is the kinetic energy of the incident particles. Equation (7.46) is the classical Rutherford scattering cross-section formula.
The Born approximation is valid provided that $\psi({\bf r})$ is not too different from $\phi({\bf r})$ in the scattering region. It follows, from Eq. (7.17), that the condition for $\psi({\bf r}) \simeq \phi({\bf r})$ in the vicinity of ${\bf r} = 0$ is
\[
\left|\frac{m}{2\pi\,\hbar^{2}}\int \frac{\exp(\,i\, k\, r')}{r'}\, V({\bf r}')\, d^{3}r'\right| \ll 1. \tag{7.47}
\]
Consider the special case of the Yukawa potential. At low energies (i.e., $k \ll \mu$) we can replace $\exp(\,i\, k\, r')$ by unity, giving
\[
\frac{2\, m}{\hbar^{2}}\,\frac{|V_0|}{\mu^{2}} \ll 1 \tag{7.48}
\]
as the condition for the validity of the Born approximation. The condition for the Yukawa potential to develop a bound state is
\[
\frac{2\, m}{\hbar^{2}}\,\frac{|V_0|}{\mu^{2}} \geq 2.7, \tag{7.49}
\]
where $V_0$ is negative. Thus, if the potential is strong enough to form a bound state then the Born approximation is likely to break down. In the high-$k$ limit, Eq. (7.47) yields
\[
\frac{2\, m}{\hbar^{2}}\,\frac{|V_0|}{\mu\, k} \ll 1. \tag{7.50}
\]
This inequality becomes progressively easier to satisfy as $k$ increases, implying that the Born approximation is more accurate at high incident particle energies.
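A minimal numerical sketch of the Born results for the Yukawa potential is given below (Python, in units with $\hbar = m = 1$; all parameter values are illustrative assumptions). It evaluates Eq. (7.43) and its Coulomb-like limit, Eq. (7.45), at one scattering angle.

```python
# Born differential cross-section for a Yukawa potential, Eq. (7.43), and its
# mu -> 0 (Rutherford-like) limit, Eq. (7.45).  Units with hbar = m = 1;
# all parameter values are illustrative.
import numpy as np

hbar = m = 1.0
V0_over_mu = 0.5      # V_0 / mu, held fixed in the Coulomb limit
mu = 0.2              # inverse range of the potential
k = 2.0               # incident wave-number

def dsigma_yukawa(theta):
    q2 = 2.0 * k**2 * (1.0 - np.cos(theta))          # Eq. (7.44)
    return (2.0 * m * V0_over_mu / hbar**2)**2 / (q2 + mu**2)**2

def dsigma_coulomb(theta):
    return (2.0 * m * V0_over_mu / hbar**2)**2 / (16.0 * k**4 * np.sin(theta / 2)**4)

theta = np.pi / 3
print(dsigma_yukawa(theta), dsigma_coulomb(theta))   # nearly equal, since mu << q here
```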
7.4 Partial waves
We can assume, without loss of generality, that the incident wave-function is characterized by a wave-vector ${\bf k}$ which is aligned parallel to the $z$-axis. The scattered wave-function is characterized by a wave-vector ${\bf k}'$ which has the same magnitude as ${\bf k}$, but, in general, points in a different direction. The direction of ${\bf k}'$ is specified by the polar angle $\theta$ (i.e., the angle subtended between the two wave-vectors), and an azimuthal angle $\varphi$ about the $z$-axis. Equation (7.38) strongly suggests that for a spherically symmetric scattering potential [i.e., $V({\bf r}) = V(r)$] the scattering amplitude is a function of $\theta$ only:
\[
f(\theta, \varphi) = f(\theta). \tag{7.51}
\]
It follows that neither the incident wave-function,
\[
\phi({\bf r}) = \frac{\exp(\,i\, k\, z)}{(2\pi)^{3/2}} = \frac{\exp(\,i\, k\, r\cos\theta)}{(2\pi)^{3/2}}, \tag{7.52}
\]
nor the total wave-function,
\[
\psi({\bf r}) = \frac{1}{(2\pi)^{3/2}}\left[\exp(\,i\, k\, r\cos\theta) + \frac{\exp(\,i\, k\, r)\, f(\theta)}{r}\right], \tag{7.53}
\]
depend on the azimuthal angle $\varphi$.

Outside the range of the scattering potential, both $\phi({\bf r})$ and $\psi({\bf r})$ satisfy the free space Schrödinger equation
\[
(\nabla^{2} + k^{2})\,\psi = 0. \tag{7.54}
\]
What is the most general solution to this equation in spherical polar coordinates which does not depend on the azimuthal angle $\varphi$? Separation of variables yields
\[
\psi(r, \theta) = \sum_l R_l(r)\, P_l(\cos\theta), \tag{7.55}
\]
since the Legendre functions $P_l(\cos\theta)$ form a complete set in $\theta$-space. The Legendre functions are related to the spherical harmonics introduced in Sect. 5 via
\[
P_l(\cos\theta) = \sqrt{\frac{4\pi}{2\, l + 1}}\; Y_l^{\,0}(\theta, \varphi). \tag{7.56}
\]
Equations (7.54) and (7.55) can be combined to give
\[
r^{2}\,\frac{d^{2}R_l}{dr^{2}} + 2\, r\,\frac{dR_l}{dr} + [k^{2} r^{2} - l\,(l + 1)]\, R_l = 0. \tag{7.57}
\]
The two independent solutions to this equation are called a spherical Bessel function, $j_l(k\, r)$, and a Neumann function, $\eta_l(k\, r)$. It is easily demonstrated that
\[
j_l(y) = y^{\,l}\left(-\frac{1}{y}\frac{d}{dy}\right)^{l}\frac{\sin y}{y}, \tag{7.58}
\]
\[
\eta_l(y) = -y^{\,l}\left(-\frac{1}{y}\frac{d}{dy}\right)^{l}\frac{\cos y}{y}. \tag{7.59}
\]
Note that spherical Bessel functions are well-behaved in the limit $y \rightarrow 0$, whereas Neumann functions become singular. The asymptotic behaviour of these functions in the limit $y \rightarrow \infty$ is
\[
j_l(y) \rightarrow \frac{\sin(y - l\,\pi/2)}{y}, \tag{7.60}
\]
\[
\eta_l(y) \rightarrow -\frac{\cos(y - l\,\pi/2)}{y}. \tag{7.61}
\]
We can write
\[
\exp(\,i\, k\, r\cos\theta) = \sum_l a_l\, j_l(k\, r)\, P_l(\cos\theta), \tag{7.62}
\]
where the $a_l$ are constants. Note there are no Neumann functions in this expansion, because they are not well-behaved as $r \rightarrow 0$. The Legendre functions are orthonormal,
\[
\int_{-1}^{1} P_n(\mu)\, P_m(\mu)\, d\mu = \frac{\delta_{nm}}{n + 1/2}, \tag{7.63}
\]
so we can invert the above expansion to give
\[
a_l\, j_l(k\, r) = (l + 1/2)\int_{-1}^{1} \exp(\,i\, k\, r\,\mu)\, P_l(\mu)\, d\mu. \tag{7.64}
\]
It is well-known that
\[
j_l(y) = \frac{(-i)^{l}}{2}\int_{-1}^{1} \exp(\,i\, y\,\mu)\, P_l(\mu)\, d\mu, \tag{7.65}
\]
where $l = 0, 1, 2, \cdots$ [see Abramowitz and Stegun (Dover, New York NY, 1965), Eq. 10.1.14]. Thus,
\[
a_l = i^{\,l}\,(2\, l + 1), \tag{7.66}
\]
giving
\[
\exp(\,i\, k\, r\cos\theta) = \sum_l i^{\,l}\,(2\, l + 1)\, j_l(k\, r)\, P_l(\cos\theta). \tag{7.67}
\]
The above expression tells us how to decompose a plane-wave into a series of spherical-waves (or "partial waves").

The most general solution for the total wave-function outside the scattering region is
\[
\psi({\bf r}) = \frac{1}{(2\pi)^{3/2}}\sum_l [A_l\, j_l(k\, r) + B_l\,\eta_l(k\, r)]\, P_l(\cos\theta), \tag{7.68}
\]
where the $A_l$ and $B_l$ are constants. Note that the Neumann functions are allowed to appear in this expansion, because its region of validity does not include the origin. In the large-$r$ limit, the total wave-function reduces to
\[
\psi({\bf r}) \simeq \frac{1}{(2\pi)^{3/2}}\sum_l \left[A_l\,\frac{\sin(k\, r - l\,\pi/2)}{k\, r} - B_l\,\frac{\cos(k\, r - l\,\pi/2)}{k\, r}\right] P_l(\cos\theta), \tag{7.69}
\]
where use has been made of Eqs. (7.60)–(7.61). The above expression can also be written
\[
\psi({\bf r}) \simeq \frac{1}{(2\pi)^{3/2}}\sum_l C_l\,\frac{\sin(k\, r - l\,\pi/2 + \delta_l)}{k\, r}\, P_l(\cos\theta), \tag{7.70}
\]
where the sine and cosine functions have been combined to give a sine function which is phase-shifted by $\delta_l$.

Equation (7.70) yields
\[
\psi({\bf r}) \simeq \frac{1}{(2\pi)^{3/2}}\sum_l C_l\,\frac{\exp[\,i\,(k\, r - l\,\pi/2 + \delta_l)] - \exp[-i\,(k\, r - l\,\pi/2 + \delta_l)]}{2\, i\, k\, r}\, P_l(\cos\theta), \tag{7.71}
\]
which contains both incoming and outgoing spherical-waves. What is the source of the incoming waves? Obviously, they must be part of the large-$r$ asymptotic
expansion of the incident wave-function. In fact, it is easily seen that
\[
\phi({\bf r}) \simeq \frac{1}{(2\pi)^{3/2}}\sum_l i^{\,l}\,(2\, l + 1)\,\frac{\exp[\,i\,(k\, r - l\,\pi/2)] - \exp[-i\,(k\, r - l\,\pi/2)]}{2\, i\, k\, r}\, P_l(\cos\theta) \tag{7.72}
\]
in the large-$r$ limit. Now, Eqs. (7.52) and (7.53) give
\[
(2\pi)^{3/2}\,[\psi({\bf r}) - \phi({\bf r})] = \frac{\exp(\,i\, k\, r)}{r}\, f(\theta). \tag{7.73}
\]
Note that the right-hand side consists only of an outgoing spherical wave. This implies that the coefficients of the incoming spherical waves in the large-$r$ expansions of $\psi({\bf r})$ and $\phi({\bf r})$ must be equal. It follows from Eqs. (7.71) and (7.72) that
\[
C_l = (2\, l + 1)\,\exp[\,i\,(\delta_l + l\,\pi/2)]. \tag{7.74}
\]
Thus, Eqs. (7.71)–(7.73) yield
\[
f(\theta) = \sum_{l=0}^{\infty}(2\, l + 1)\,\frac{\exp(\,i\,\delta_l)}{k}\,\sin\delta_l\, P_l(\cos\theta). \tag{7.75}
\]
Clearly, determining the scattering amplitude $f(\theta)$ via a decomposition into partial waves (i.e., spherical-waves) is equivalent to determining the phase-shifts $\delta_l$.
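The sketch below (Python) evaluates Eq. (7.75) for an assumed, purely illustrative set of phase-shifts, and also checks the result against the optical theorem derived in the following section.

```python
# Partial-wave scattering amplitude, Eq. (7.75), the differential cross-section,
# and a check of the optical theorem.  The phase-shifts delta_l are assumed
# illustrative values, not results from the text.
import numpy as np
from scipy.special import eval_legendre

k = 1.0                                   # wave-number (illustrative units)
delta = [0.8, 0.3, 0.05]                  # assumed phase-shifts for l = 0, 1, 2

def f(theta):
    amp = 0.0j
    for l, d in enumerate(delta):
        amp = amp + (2 * l + 1) * np.exp(1j * d) * np.sin(d) / k \
              * eval_legendre(l, np.cos(theta))
    return amp

theta = np.linspace(0.0, np.pi, 5)
print(np.abs(f(theta))**2)                # dsigma/dOmega at a few angles

sigma_total = 4 * np.pi / k**2 * sum((2 * l + 1) * np.sin(d)**2
                                     for l, d in enumerate(delta))
print(np.isclose(sigma_total, 4 * np.pi / k * f(0.0).imag))   # optical theorem
```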
7.5 The optical theorem
The differential scattering cross-section $d\sigma/d\Omega$ is simply the modulus squared of the scattering amplitude $f(\theta)$. The total cross-section is given by
\[
\sigma_{\rm total} = \int |f(\theta)|^{2}\, d\Omega
= \frac{1}{k^{2}}\int d\varphi \int_{-1}^{1} d\mu \sum_l \sum_{l'} (2\, l + 1)\,(2\, l' + 1)\,\exp[\,i\,(\delta_l - \delta_{l'})]\,\sin\delta_l\,\sin\delta_{l'}\, P_l(\mu)\, P_{l'}(\mu), \tag{7.76}
\]
where $\mu = \cos\theta$. It follows that
\[
\sigma_{\rm total} = \frac{4\pi}{k^{2}}\sum_l (2\, l + 1)\,\sin^{2}\delta_l, \tag{7.77}
\]
where use has been made of Eq. (7.63). A comparison of this result with Eq. (7.75) yields
\[
\sigma_{\rm total} = \frac{4\pi}{k}\,{\rm Im}[f(0)], \tag{7.78}
\]
since $P_l(1) = 1$. This result is known as the optical theorem. It is a reflection of the fact that the very existence of scattering requires scattering in the forward ($\theta = 0$) direction in order to interfere with the incident wave, and thereby reduce the probability current in this direction.

It is usual to write
\[
\sigma_{\rm total} = \sum_{l=0}^{\infty}\sigma_l, \tag{7.79}
\]
where
\[
\sigma_l = \frac{4\pi}{k^{2}}\,(2\, l + 1)\,\sin^{2}\delta_l \tag{7.80}
\]
is the $l$th partial cross-section: i.e., the contribution to the total cross-section from the $l$th partial wave. Note that the maximum value for the $l$th partial cross-section occurs when the phase-shift $\delta_l$ takes the value $\pi/2$.
7.6 Determination of phase-shifts
Let us now consider how the phase-shifts $\delta_l$ can be evaluated. Consider a spherically symmetric potential $V(r)$ which vanishes for $r > a$, where $a$ is termed the range of the potential. In the region $r > a$, the wave-function $\psi({\bf r})$ satisfies the free-space Schrödinger equation (7.54). The most general solution which is consistent with no incoming spherical-waves is
\[
\psi({\bf r}) = \frac{1}{(2\pi)^{3/2}}\sum_{l=0}^{\infty} i^{\,l}\,(2\, l + 1)\, A_l(r)\, P_l(\cos\theta), \tag{7.81}
\]
where
\[
A_l(r) = \exp(\,i\,\delta_l)\,[\,\cos\delta_l\; j_l(k\, r) - \sin\delta_l\;\eta_l(k\, r)\,]. \tag{7.82}
\]
Note that Neumann functions are allowed to appear in the above expression, because its region of validity does not include the origin (where $V \neq 0$). The logarithmic derivative of the $l$th radial wave-function $A_l(r)$ just outside the range of the potential is given by
\[
\beta_{l+} = k\, a\left[\frac{\cos\delta_l\; j_l'(k\, a) - \sin\delta_l\;\eta_l'(k\, a)}{\cos\delta_l\; j_l(k\, a) - \sin\delta_l\;\eta_l(k\, a)}\right], \tag{7.83}
\]
where $j_l'(x)$ denotes $dj_l(x)/dx$, etc. The above equation can be inverted to give
\[
\tan\delta_l = \frac{k\, a\, j_l'(k\, a) - \beta_{l+}\, j_l(k\, a)}{k\, a\,\eta_l'(k\, a) - \beta_{l+}\,\eta_l(k\, a)}. \tag{7.84}
\]
Thus, the problem of determining the phase-shift $\delta_l$ is equivalent to that of obtaining $\beta_{l+}$.
The most general solution to Schr¨ odinger’s equation inside the range of the
potential (r < a) which does not depend on the azimuthal angle ϕ is
ψ(r) =
1
(2π)
3/2

l=0
i
l
(2 l +1) R
l
(r) P
l
(cos θ), (7.85)
where
R
l
(r) =
u
l
(r)
r
, (7.86)
and
\[
\frac{d^2 u_l}{dr^2} + \left[\, k^2 - \frac{2 m}{\hbar^2}\,V - \frac{l\,(l+1)}{r^2} \,\right] u_l = 0. \tag{7.87}
\]
The boundary condition
\[
u_l(0) = 0 \tag{7.88}
\]
ensures that the radial wave-function is well-behaved at the origin. We can
launch a well-behaved solution of the above equation from r = 0, integrate out
to r = a, and form the logarithmic derivative
\[
\beta_{l-} = \left. \frac{a}{(u_l/r)}\,\frac{d(u_l/r)}{dr}\,\right|_{r=a}. \tag{7.89}
\]
Since ψ(r) and its first derivatives are necessarily continuous for physically acceptable wave-functions, it follows that
\[
\beta_{l+} = \beta_{l-}. \tag{7.90}
\]
The phase-shift $\delta_l$ is obtainable from Eq. (7.84).
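As a minimal illustrative sketch (in Python, with purely illustrative parameter values and a hypothetical square-well helper V), the above recipe amounts to integrating Eq. (7.87) outwards from the origin, forming the dimensionless logarithmic derivative (7.89) at r = a, and then applying Eq. (7.84):

import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import spherical_jn, spherical_yn

hbar, m = 1.0, 1.0           # work in units in which hbar = m = 1
a, k, l = 1.0, 0.5, 0        # range of the potential, wave-number, partial wave
V0 = -10.0                   # illustrative well depth: V = V0 for r < a, 0 otherwise

def V(r):
    return V0 if r < a else 0.0

def rhs(r, y):
    # Eq. (7.87): u'' = [2 m V / hbar^2 + l(l+1)/r^2 - k^2] u
    u, up = y
    return [up, (2*m*V(r)/hbar**2 + l*(l + 1)/r**2 - k**2)*u]

# Launch a well-behaved solution (u_l ~ r^(l+1)) just outside the origin,
# and integrate out to r = a, as described above.
r0 = 1.0e-6
sol = solve_ivp(rhs, (r0, a), [r0**(l + 1), (l + 1)*r0**l], rtol=1.0e-10, atol=1.0e-12)
u, up = sol.y[0, -1], sol.y[1, -1]

# Dimensionless logarithmic derivative of u_l/r at r = a [cf. Eq. (7.89)].
beta = a*(up/u - 1.0/a)

# Eq. (7.84), with eta_l the spherical Neumann function (scipy's spherical_yn).
x = k*a
num = x*spherical_jn(l, x, derivative=True) - beta*spherical_jn(l, x)
den = x*spherical_yn(l, x, derivative=True) - beta*spherical_yn(l, x)
print("tan(delta_l) =", num/den)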
7.7 Hard sphere scattering
Let us test out this scheme using a particularly simple example. Consider scatter-
ing by a hard sphere, for which the potential is infinite for r < a, and zero for
r > a. It follows that ψ(r) is zero in the region r < a, which implies that $u_l = 0$
for all l. Thus,
\[
\beta_{l-} = \beta_{l+} = \infty, \tag{7.91}
\]
for all l. It follows from Eq. (7.84) that
\[
\tan\delta_l = \frac{j_l(k a)}{\eta_l(k a)}. \tag{7.92}
\]
Consider the l = 0 partial wave, which is usually referred to as the s-wave.
Equation (7.92) yields
\[
\tan\delta_0 = \frac{\sin(k a)/k a}{-\cos(k a)/k a} = -\tan k a, \tag{7.93}
\]
where use has been made of Eqs. (7.58)–(7.59). It follows that
\[
\delta_0 = -k\,a. \tag{7.94}
\]
The s-wave radial wave function is
\[
A_0(r) = \exp(-i\,k a)\,\frac{\cos k a\,\sin k r - \sin k a\,\cos k r}{k r}
= \exp(-i\,k a)\,\frac{\sin[k\,(r-a)]}{k r}. \tag{7.95}
\]
The corresponding radial wave-function for the incident wave takes the form
\[
\tilde{A}_0(r) = \frac{\sin k r}{k r}. \tag{7.96}
\]
It is clear that the actual l = 0 radial wave-function is similar to the incident l = 0
wave-function, except that it is phase-shifted by k a.
Let us consider the low and high energy asymptotic limits of $\tan\delta_l$. Low energy
means ka ≪ 1. In this regime, the spherical Bessel functions and Neumann
functions reduce to:
\[
j_l(k r) \simeq \frac{(k r)^l}{(2\,l+1)!!}, \tag{7.97}
\]
\[
\eta_l(k r) \simeq -\frac{(2\,l-1)!!}{(k r)^{l+1}}, \tag{7.98}
\]
where $n!! = n\,(n-2)\,(n-4)\cdots 1$. It follows that
\[
\tan\delta_l = \frac{-(k a)^{2l+1}}{(2\,l+1)\,[(2\,l-1)!!]^2}. \tag{7.99}
\]
It is clear that we can neglect $\delta_l$, with l > 0, with respect to $\delta_0$. In other words,
at low energy only s-wave scattering (i.e., spherically symmetric scattering) is
important. It follows from Eqs. (7.33), (7.75), and (7.94) that
\[
\frac{d\sigma}{d\Omega} = \frac{\sin^2 k a}{k^2} \simeq a^2 \tag{7.100}
\]
for ka ≪ 1. Note that the total cross-section
\[
\sigma_{\rm total} = \int \frac{d\sigma}{d\Omega}\,d\Omega = 4\pi\,a^2 \tag{7.101}
\]
(7.101)
is four times the geometric cross-section πa
2
(i.e., the cross-section for classical
particles bouncing off a hard sphere of radius a). However, low energy scattering
implies relatively long wave-lengths, so we do not expect to obtain the classical
result in this limit.
Consider the high energy limit ka ≫ 1. At high energies, all partial waves
up to $l_{\rm max} = k a$ contribute significantly to the scattering cross-section. It follows
from Eq. (7.77) that
\[
\sigma_{\rm total} = \frac{4\pi}{k^2} \sum_{l=0}^{l_{\rm max}} (2\,l+1)\,\sin^2\delta_l. \tag{7.102}
\]
With so many l values contributing, it is legitimate to replace $\sin^2\delta_l$ by its average
value 1/2. Thus,
\[
\sigma_{\rm total} = \sum_{l=0}^{k a} \frac{2\pi}{k^2}\,(2\,l+1) \simeq 2\pi\,a^2. \tag{7.103}
\]
This is twice the classical result, which is somewhat surprising, since we might expect to obtain the classical result in the short wave-length limit. For hard sphere
scattering, incident waves with impact parameters less than a must be deflected.
However, in order to produce a "shadow" behind the sphere, there must be scattering in the forward direction (recall the optical theorem) to produce destructive
interference with the incident plane-wave. In fact, the interference is not
completely destructive, and the shadow has a bright spot in the forward direction.
The effective cross-section associated with this bright spot is $\pi a^2$ which,
when combined with the cross-section for classical reflection, $\pi a^2$, gives the actual
cross-section of $2\pi a^2$.
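As a minimal numerical check (the cut-off lmax and the sample values of ka below are arbitrary, purely illustrative choices), the partial-wave sum (7.77), with the hard-sphere phase-shifts (7.92), reproduces both of the limits quoted above:

import numpy as np
from scipy.special import spherical_jn, spherical_yn

def sigma_hard_sphere(ka, lmax):
    # Eqs. (7.77) and (7.92); returns sigma_total in units of a^2.
    l = np.arange(lmax + 1)
    delta = np.arctan2(spherical_jn(l, ka), spherical_yn(l, ka))
    return (4*np.pi/ka**2)*np.sum((2*l + 1)*np.sin(delta)**2)

print(sigma_hard_sphere(0.01, 10)/np.pi)    # low energy:  close to 4
print(sigma_hard_sphere(100.0, 200)/np.pi)  # high energy: roughly 2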
7.8 Low energy scattering
At low energies (i.e., when 1/k is much larger than the range of the potential)
partial waves with l > 0, in general, make a negligible contribution to the scatter-
ing cross-section. It follows that, at these energies, with a finite range potential,
only s-wave scattering is important.
As a specific example, let us consider scattering by a finite potential well, characterized by $V = V_0$ for r < a, and V = 0 for r ≥ a. Here, $V_0$ is a constant.
The potential is repulsive for $V_0 > 0$, and attractive for $V_0 < 0$. The outside
wave-function is given by [see Eq. (7.82)]
\[
A_0(r) = \exp(\,i\,\delta_0)\,[\,j_0(k r)\cos\delta_0 - \eta_0(k r)\sin\delta_0\,] \tag{7.104}
\]
\[
= \frac{\exp(\,i\,\delta_0)\,\sin(k r + \delta_0)}{k r}, \tag{7.105}
\]
where use has been made of Eqs. (7.58)–(7.59). The inside wave-function follows
from Eq. (7.87). We obtain
\[
A_0(r) = B\,\frac{\sin k' r}{r}, \tag{7.106}
\]
where use has been made of the boundary condition (7.88). Here, B is a constant,
and
\[
E - V_0 = \frac{\hbar^2 k'^{\,2}}{2\,m}. \tag{7.107}
\]
Note that Eq. (7.106) only applies when $E > V_0$. For $E < V_0$, we have
\[
A_0(r) = B\,\frac{\sinh \kappa r}{r}, \tag{7.108}
\]
where
\[
V_0 - E = \frac{\hbar^2 \kappa^2}{2\,m}. \tag{7.109}
\]
Matching $A_0(r)$, and its radial derivative at r = a, yields
\[
\tan(k a + \delta_0) = \frac{k}{k'}\,\tan k' a \tag{7.110}
\]
for $E > V_0$, and
\[
\tan(k a + \delta_0) = \frac{k}{\kappa}\,\tanh \kappa a \tag{7.111}
\]
for $E < V_0$.
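To spell out the intermediate step, matching $A_0(r)$ and its radial derivative at r = a is equivalent to matching the logarithmic derivative $(1/A_0)\,dA_0/dr$ there. From Eqs. (7.105) and (7.106),
\[
\left.\frac{1}{A_0}\frac{dA_0}{dr}\right|_{r=a^+} = k\cot(k a+\delta_0) - \frac{1}{a},
\qquad
\left.\frac{1}{A_0}\frac{dA_0}{dr}\right|_{r=a^-} = k'\cot k' a - \frac{1}{a},
\]
and equating these two expressions gives $k\cot(k a+\delta_0) = k'\cot k' a$, which is just Eq. (7.110). Eq. (7.111) follows in the same way from Eq. (7.108).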
Consider an attractive potential, for which $E > V_0$. Suppose that $|V_0| \gg E$
(i.e., the depth of the potential well is much larger than the energy of the incident particles), so that k′ ≫ k. It follows from Eq. (7.110) that, unless $\tan k' a$
becomes extremely large, the right-hand side is much less than unity, so replacing
the tangent of a small quantity with the quantity itself, we obtain
\[
k a + \delta_0 \simeq \frac{k}{k'}\,\tan k' a. \tag{7.112}
\]
This yields
\[
\delta_0 \simeq k\,a \left( \frac{\tan k' a}{k' a} - 1 \right). \tag{7.113}
\]
According to Eq. (7.102), the scattering cross-section is given by
\[
\sigma_{\rm total} \simeq \frac{4\pi}{k^2}\,\sin^2\delta_0 = 4\pi\,a^2 \left( \frac{\tan k' a}{k' a} - 1 \right)^{\!2}. \tag{7.114}
\]
Now
\[
k' a = \sqrt{\,k^2 a^2 + \frac{2\,m\,|V_0|\,a^2}{\hbar^2}\,}, \tag{7.115}
\]
so for sufficiently small values of k a,
\[
k' a \simeq \sqrt{\frac{2\,m\,|V_0|\,a^2}{\hbar^2}}. \tag{7.116}
\]
It follows that the total (s-wave) scattering cross-section is independent of the
energy of the incident particles (provided that this energy is sufficiently small).
Note that there are values of $k' a$ (e.g., $k' a \simeq 4.49$) at which $\delta_0 \rightarrow \pi$, and the
scattering cross-section (7.114) vanishes, despite the very strong attraction of the
potential. In reality, the cross-section is not exactly zero, because of contributions
from l > 0 partial waves. But, at low incident energies, these contributions are
small. It follows that there are certain values of $V_0$ and k which give rise to almost
perfect transmission of the incident wave. This is called the Ramsauer-Townsend
effect, and has been observed experimentally.
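The quoted number is readily checked: according to Eq. (7.114), the cross-section vanishes when
\[
\tan k' a = k' a,
\]
and the smallest non-trivial positive root of tan x = x is x ≃ 4.4934, consistent with the value $k' a \simeq 4.49$ given above.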
7.9 Resonances
There is a significant exception to the independence of the cross-section on energy. Suppose that the quantity $\sqrt{2\,m\,|V_0|\,a^2/\hbar^2}$ is slightly less than π/2. As the
incident energy increases, $k' a$, which is given by Eq. (7.115), can reach the value
π/2. In this case, $\tan k' a$ becomes infinite, so we can no longer assume that the
right-hand side of Eq. (7.110) is small. In fact, at the value of the incident energy
when $k' a = \pi/2$, it follows from Eq. (7.110) that $k a + \delta_0 = \pi/2$, or $\delta_0 \simeq \pi/2$
(since we are assuming that ka ≪ 1). This implies that
\[
\sigma_{\rm total} = \frac{4\pi}{k^2}\,\sin^2\delta_0 = 4\pi\,a^2 \left( \frac{1}{k^2 a^2} \right). \tag{7.117}
\]
Note that the cross-section now depends on the energy. Furthermore, the mag-
nitude of the cross-section is much larger than that given in Eq. (7.114) for
$k' a \neq \pi/2$ (since ka ≪ 1).
The origin of this rather strange behaviour is quite simple. The condition
\[
\sqrt{\frac{2\,m\,|V_0|\,a^2}{\hbar^2}} = \frac{\pi}{2} \tag{7.118}
\]
is equivalent to the condition that a spherical well of depth $V_0$ possesses a bound
state at zero energy. Thus, for a potential well which satisfies the above equation,
the energy of the scattering system is essentially the same as the energy of the
bound state. In this situation, an incident particle would like to form a bound
state in the potential well. However, the bound state is not stable, since the
system has a small positive energy. Nevertheless, this sort of resonance scattering
is best understood as the capture of an incident particle to form a metastable
bound state, and the subsequent decay of the bound state and release of the
particle. The cross-section for resonance scattering is generally far higher than
that for non-resonance scattering.
We have seen that there is a resonant effect when the phase-shift of the s-wave
takes the value π/2. There is nothing special about the l = 0 partial wave, so it
is reasonable to assume that there is a similar resonance when the phase-shift of
the $l$th partial wave is π/2. Suppose that $\delta_l$ attains the value π/2 at the incident
energy $E_0$, so that
\[
\delta_l(E_0) = \frac{\pi}{2}. \tag{7.119}
\]
Let us expand $\cot\delta_l$ in the vicinity of the resonant energy:
\[
\cot\delta_l(E) = \cot\delta_l(E_0) + \left[ \frac{d\cot\delta_l}{dE} \right]_{E=E_0} (E - E_0) + \cdots \tag{7.120}
\]
\[
= -\left[ \frac{1}{\sin^2\delta_l}\,\frac{d\delta_l}{dE} \right]_{E=E_0} (E - E_0) + \cdots. \tag{7.121}
\]
Defining
\[
\left[ \frac{d\delta_l(E)}{dE} \right]_{E=E_0} = \frac{2}{\Gamma}, \tag{7.122}
\]
we obtain
\[
\cot\delta_l(E) = -\frac{2}{\Gamma}\,(E - E_0) + \cdots. \tag{7.123}
\]
Recall, from Eq. (7.80), that the contribution of the lth partial wave to the scat-
tering cross-section is
\[
\sigma_l = \frac{4\pi}{k^2}\,(2\,l+1)\,\sin^2\delta_l
= \frac{4\pi}{k^2}\,(2\,l+1)\,\frac{1}{1 + \cot^2\delta_l}. \tag{7.124}
\]
Thus,
\[
\sigma_l \simeq \frac{4\pi}{k^2}\,(2\,l+1)\,\frac{\Gamma^2/4}{(E - E_0)^2 + \Gamma^2/4}. \tag{7.125}
\]
This is the famous Breit-Wigner formula. The variation of the partial cross-section
$\sigma_l$ with the incident energy has the form of a classical resonance curve. The
quantity Γ is the width of the resonance (in energy). We can interpret the Breit-Wigner formula as describing the absorption of an incident particle to form a
metastable state, of energy $E_0$, and lifetime $\tau = \hbar/\Gamma$ (see Sect. 6.17).
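Note, in passing, that Eq. (7.125) makes the interpretation of Γ explicit: the partial cross-section attains its peak value $(4\pi/k^2)\,(2\,l+1)$ at $E = E_0$, and falls to one half of this value when
\[
(E - E_0)^2 = \frac{\Gamma^2}{4}, \qquad\mbox{i.e.,}\quad E = E_0 \pm \frac{\Gamma}{2},
\]
so Γ is the full width of the resonance curve at half-maximum.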
190

Contents
1 Introduction 1.1 Major sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Fundamental concepts 2.1 The breakdown of classical physics . . . . . . . . . 2.2 The polarization of photons . . . . . . . . . . . . . 2.3 The fundamental principles of quantum mechanics 2.4 Ket space . . . . . . . . . . . . . . . . . . . . . . . 2.5 Bra space . . . . . . . . . . . . . . . . . . . . . . . 2.6 Operators . . . . . . . . . . . . . . . . . . . . . . . 2.7 The outer product . . . . . . . . . . . . . . . . . . 2.8 Eigenvalues and eigenvectors . . . . . . . . . . . . 2.9 Observables . . . . . . . . . . . . . . . . . . . . . . 2.10 Measurements . . . . . . . . . . . . . . . . . . . . 2.11 Expectation values . . . . . . . . . . . . . . . . . . 2.12 Degeneracy . . . . . . . . . . . . . . . . . . . . . . 2.13 Compatible observables . . . . . . . . . . . . . . . 2.14 The uncertainty relation . . . . . . . . . . . . . . . 2.15 Continuous spectra . . . . . . . . . . . . . . . . . . 3 Position and momentum 3.1 Introduction . . . . . . . . . . . 3.2 Poisson brackets . . . . . . . . . 3.3 Wave-functions . . . . . . . . . . 3.4 Schr¨dinger’s representation - I . o 3.5 Schr¨dinger’s representation - II o 3.6 The momentum representation . 3.7 The uncertainty relation . . . . . 3.8 Displacement operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 5 6 6 7 9 10 14 17 19 20 21 24 25 26 27 28 31 33 33 33 37 39 43 46 48 50

4 Quantum dynamics 55 4.1 Schr¨dinger’s equations of motion . . . . . . . . . . . . . . . . . . 55 o 4.2 Heisenberg’s equations of motion . . . . . . . . . . . . . . . . . . . 59
2

4.3 4.4

Ehrenfest’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 61 Schr¨dinger’s wave-equation . . . . . . . . . . . . . . . . . . . . . 65 o . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 71 74 78 81 84 86 89 91 93 96 97 99 105 110 120 120 120 122 124 129 132 134 140 144 146 149 150 154 158 159

5 Angular momentum 5.1 Orbital angular momentum . . . . . . . . . . 5.2 Eigenvalues of angular momentum . . . . . . 5.3 Rotation operators . . . . . . . . . . . . . . . 5.4 Eigenfunctions of orbital angular momentum 5.5 Motion in a central field . . . . . . . . . . . . 5.6 Energy levels of the hydrogen atom . . . . . . 5.7 Spin angular momentum . . . . . . . . . . . 5.8 Wave-function of a spin one-half particle . . . 5.9 Rotation operators in spin space . . . . . . . 5.10 Magnetic moments . . . . . . . . . . . . . . . 5.11 Spin precession . . . . . . . . . . . . . . . . . 5.12 Pauli two-component formalism . . . . . . . . 5.13 Spin greater than one-half systems . . . . . . 5.14 Addition of angular momentum . . . . . . . .

6 Approximation methods 6.1 Introduction . . . . . . . . . . . . . . . . . . . . 6.2 The two-state system . . . . . . . . . . . . . . . . 6.3 Non-degenerate perturbation theory . . . . . . . 6.4 The quadratic Stark effect . . . . . . . . . . . . . 6.5 Degenerate perturbation theory . . . . . . . . . . 6.6 The linear Stark effect . . . . . . . . . . . . . . . 6.7 Fine structure . . . . . . . . . . . . . . . . . . . . 6.8 The Zeeman effect . . . . . . . . . . . . . . . . . 6.9 Time-dependent perturbation theory . . . . . . . 6.10 The two-state system . . . . . . . . . . . . . . . . 6.11 Spin magnetic resonance . . . . . . . . . . . . . 6.12 The Dyson series . . . . . . . . . . . . . . . . . . 6.13 Constant perturbations . . . . . . . . . . . . . . . 6.14 Harmonic perturbations . . . . . . . . . . . . . . 6.15 Absorption and stimulated emission of radiation
3

. . . . . . . . . . . . . . . . . . . . . . .6 Determination of phase-shifts . . . . . . . . . . 7. .2 The Lipmann-Schwinger equation 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . 7. . . . . .3 The Born approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 170 170 175 178 181 182 184 186 188 4 . . . . . . . .9 Resonances . . . . . . . .6. . . . . . . . . 7. . . . . . . . .7 Hard sphere scattering . . . . . . . . . . . . . . . . . . . . . . . . . .4 Partial waves . . . 7. . .16 The electric dipole approximation . . . . . . . . . 162 6. . . . .8 Low energy scattering . . . 7. . .1 Introduction . . . . . . . . . . . .5 The optical theorem . . . . . . . . . . . . . . . . . . . . . 165 7 Scattering theory 7. . . . . . . . . . . 7. . . . . . . . . . . . . . . . . 7. . . . . . . . . . . . . . . . . . .17 Energy-shifts and decay-widths . . . . . . . . . . .

2nd Edition (John Wiley & Sons.P Feynman. . Oxford. J. and M. R. 5 .A. Menlo Park CA. Merzbacher. Dirac.J. 1965). Sakurai. Sands. R.B. P ford University Press. The Feynman lectures on physics. 1958). E. (Benjamin/Cummings. UK. Quantum mechanics. 1970). (OxThe principles of quantum mechanics. Modern quantum mechanics. Volume III (Addison-Wesley. Reading MA. New York NY.1 Major sources The textbooks which I have consulted most frequently while developing course material are: .1 INTRODUCTION 1 Introduction 1. 4th Edition (revised). 1985).M. Leighton.

there is no such divergence. I. this fundamental problem with classical physics was known and appreciated in the middle of the nineteenth century. The anomalously low specific heats of atoms and molecules: According to the equipartition theorem of classical physics. 4. and gradually spiral in towards the nucleus. The vibrational degrees of freedom appear to make no contribution at all (except at high temperatures). electron diffraction) show quite clearly that waves sometimes act as if they were streams of particles. The ultraviolet catastrophe: According to classical physics. each degree of freedom of an atomic or molecular system should contribute R/2 to its molar specific heat. only the translational and some rotational degrees of freedom seem to contribute. and that there was nothing left to discover. Incidentally. various experiments (e.2 FUNDAMENTAL CONCEPTS 2 Fundamental concepts 2.1 The breakdown of classical physics The necessity for a departure from classical mechanics is clearly demonstrated by: 1. light interference. However. 40). Experimentally. where R is the ideal gas constant. and streams of particles sometimes act as if they were waves. the photo-electric effect. Wave-particle duality: Classical physics can deal with waves or particles. In fact. 3.. Cha. Stories that physicists at the start of the twentieth century thought that classical physics explained everything. 6 . The anomalous stability of atoms and molecules: According to classical physics. Vol. 2. are largely apocryphal (see Feynman. this is not observed to happen.g. and the total energy density is finite. an electron orbiting a nucleus should lose energy by emission of synchrotron radiation. This is completely inexplicable within the framework of classical physics. Experimentally. the energy density of an electromagnetic field in vacuum is infinite due to a divergence of energy carried by short wave-length modes.

Let us try to account for these observations at the individual photon level. also extend to its particle-like behaviour. the polarization properties of light. all of the photons are transmitted. which has the property that it is only transparent to light whose plane of polarization lies perpendicular to its optic axis. none of the photons are transmitted. In the former case. a fraction sin2 α of the photons are transmitted through the film.2 The polarization of photons It is known experimentally that when plane polarized light is used to eject photoelectrons there is a preferred direction of emission of the electrons. is observed. This picture leads to no difficulty if the plane of polarization lies parallel or perpendicular to the optic axis of the polaroid. whose energy is equal to the energy of the incident photon. Suppose that we were to fire a single photon at a polaroid film. which are more usually associated with its wavelike behaviour. But. Consider the following well-known experiment. a polarization can be ascribed to each individual photon in a beam of light.2. In particular. The possible results of the experiment are that either a whole photon. Any photon which is transmitted though the film must be polarized perpendicular to the optic axis. Furthermore. or no photon is observed. A beam of plane polarized light is passed through a polaroid film. and. and a fraction 7 . Clearly.2 The polarization of photons 2 FUNDAMENTAL CONCEPTS 2. and then look to see whether or not it emerges from the other side. it is impossible to imagine (in physics) finding part of a photon on the other side of the film. and if the light is polarized at an angle α to the axis then a fraction sin 2 α of the beam is transmitted. if the beam is polarized parallel to the optic axis then none of the light is transmitted. what happens in the case of an obliquely polarized incident beam? The above question is not very precise. A beam of light which is plane polarized in a certain direction is made up of a stream of photons which are each plane polarized in that direction. Classical electromagnetic wave theory tells us that if the beam is polarized perpendicular to the optic axis then all of the light is transmitted. Let us reformulate it as a question relating to the result of some experiment which we could perform. in the latter case. If we repeat the experiment a great number of times then. on average.

The above discussion about the results of an experiment with a single obliquely polarized photon incident on a polaroid film answers all that can be legitimately asked about what happens to the photon when it reaches the film. and there is nothing left over to uniquely determine whether the photon is transmitted or absorbed by the film. Thus. Questions as to what decides whether the photon is transmitted or not. the oblique polarization state is some sort of superposition of two states of parallel and perpendicular polarization. Since there is nothing special about the orientation of the optic axis in our experiment. If we imagine performing experiments using monochromatic light. direction of propagation. We have no way of knowing whether an individual obliquely polarized photon is going to be absorbed by or transmitted through a polaroid film. 8 . with a particular oblique polarization. In other words. or how it changes its direction of polarization. and partly in a state of polarization perpendicular to the axis. but recall that the state of a photon is fully specified once its energy. The further description provided by quantum mechanics is as follows. It is supposed that a photon polarized obliquely to the optic axis can be regarded as being partly in a state of polarization parallel to the axis. some further description is needed in order to allow the results of this experiment to be correlated with the results of other experiments which can be performed using photons. and a probability cos2 α of being absorbed. then the state of each individual photon in the beam is completely specified. normally incident on a polaroid film. and polarization are known. by abandoning the determinacy of classical theory. These values for the probabilities lead to the correct classical limit for a beam containing a large number of photons.2. Nevertheless. since they do not relate to the outcome of a possible experiment.2 The polarization of photons 2 FUNDAMENTAL CONCEPTS cos2 α are absorbed. we conclude that a photon has a probability sin 2 α of being transmitted as a photon polarized in the plane perpendicular to the optic axis. We only know the probability of each event occurring. This is a fairly sweeping statement. we must conclude that any state of polarization can be regarded as a superposition of two mutually perpendicular states of polarization. and adopting a fundamentally probabilistic approach. are illegitimate. Note that we have only been able to preserve the individuality of photons. in all cases.

5).3). Dirac’s razor: Quantum mechanics can only answer questions regarding the outcome of possible experiments. If the photon jumps into a state of parallel polarization then it is absorbed. the photon has to jump suddenly from being partly in each of these two states to being entirely in one or the other of them. Otherwise. Such superpositions can be performed in an infinite number of different ways. In other words. The principle of superposition of states: Any microscopic system (i. Any other questions lie beyond the realms of physics. we are subjecting it to an observation. 3. In fact. Cha. molecule.. in this example.3 The fundamental principles of quantum mechanics There is nothing special about the transmission and absorption of photons through a polaroid film.3 The fundamental principles of quantum mechanics 2 FUNDAMENTAL CONCEPTS When we make the photon encounter a polaroid film. 2. any state can be regarded as a superposition of two or more other states. The principle of indeterminacy: An observation made on a microscopic system causes it to jump into one or more particular states (which are related to 9 . or particle) in a given state can be regarded as being partly in each of two or more other states.2. I. The effect of making this observation is to force the photon entirely into a state of parallel or perpendicular polarization. 1. but is governed by probability laws. we are observing whether it is polarized parallel or perpendicular to the optic axis. Exactly the same conclusions as those outlined above are obtained by studying other simple experiments. In other words.e. Which of the two states it will jump into cannot be predicted. Sect. Feynman. the introduction of indeterminacy into the problem is clearly connected with the act of observation. it is transmitted. the indeterminacy is related to the inevitable disturbance of the system associated with the act of observation. Note that. an atom. The study of these simple experiments leads us to formulate the following fundamental principles of quantum mechanics: 1. and the Stern-Gerlach experiment (see Sakurai. such as the interference of photons (see Dirac. In other words. Cha. 2.

1) Suppose that state A is. The final principle is still rather vague. in fact. a photon with a particular energy. Let us consider a particular microscopic system in a particular state. where the other elements of the space represent all of the other possible states of the system.g. residing in some vector space.4 Ket space Consider a microscopic system composed of particles or bodies with specific properties (mass. Such a space is called a ket space (after Dirac). or “How does a system decide which state to jump into?”. As we shall see. It is impossible to predict into which final state a particular system will jump. 2. states must be related to mathematical quantities of a kind which can be added together to give other quantities of the same kind. the second principle is the basis for the mathematical formulation of quantum mechanics. According to the principle of superposition of states. as well as the probability of the system making a particular jump. B and 10 . etc. the superposition of two different states. We need to extend it so that we can predict which possible states a system can jump into after a particular type of observation. any given state can be regarded as a superposition of two or more other states. Let us term each such motion a state of the system. The most obvious examples of such quantities are vectors. Thus. momentum. and polarization. (2. We can represent this state as a particular vector. The state vector A is conventionally written |A . moment of inertia. which we also label A.4 Ket space 2 FUNDAMENTAL CONCEPTS the type of observation).2.. however the probability of a given system jumping into a given final state can be predicted. The first of these principles was formulated by quantum physicists (such as Dirac) in the 1920s to fend off awkward questions such as “How can a system suddenly jump from one state into another?”.) interacting according to specific laws of force. There will be various possible motions of the particles or bodies consistent with the laws of force. which we label A: e.

the fact 11 ..2) where |B is the vector relating to the state B. ket vectors differ from conventional vectors in that their magnitudes.4 Ket space 2 FUNDAMENTAL CONCEPTS C. and state |C might represent a similar photon plane polarized in the y-direction. and sin α of state C. By analogy with classical physics. and plane polarized in the xdirection. If c1 + c2 = 0 then the superposition process yields nothing at all: i. Suppose that we want to construct a state whose plane of polarization makes an arbitrary angle α with the x-direction. the sum of these two states represents a photon whose plane of polarization makes an angle of 45◦ with both the x. are physically irrelevant.4) corresponds to the same state that |A does.e. a photon polarized in the y-direction superposed with another photon polarized in the y-direction (with the same energy and momentum) gives the same photon.. one caveat to the above statements. no distinction being made between the directions of the ket vectors |A and −|A . This implies that the ket vector c1 |A + c2 |A = (c1 + c2 )|A (2. For instance. no state.3) in ket space. or lengths.5) for any vector |A . (2. This interrelation is represented in ket space by writing |A = |B + |C . All the states of the system are in one to one correspondence with all the possible directions of vectors in the ket space.and y-directions (by analogy with classical physics). however. Note that we cannot form a new state by superposing a state with itself. There is. For instance. state |B might represent a photon propagating in the z-direction. The null vector has the fairly obvious property that |A + |0 = |A . etc.e. This latter state is represented by |B + |C in ket space. This new state is represented by cos α |B + sin α |C (2. We can do this via a suitably weighted superposition of states B and C. The fact that ket vectors pointing in the same direction represent the same state relates ultimately to the quantization of matter: i. In this case. (2. we require cos α of state B. The absence of a state is represented by the null vector |0 in ket space.2. Thus.

with equal weights given to the two states. of the vector would correspond to the amplitude of the wave. which are in phase quadrature..6) where c1 and c2 are complex numbers. or an atom. Thus. If we observe a microscopic system then we either see a state (i. that any plane polarized state of a photon can be represented as a linear superposition of two orthogonal polarization states in which the weights are real numbers. Suppose that we want to construct a circularly polarized photon state. Suppose that the ket |R is expressible linearly in terms of the kets |A and |B . A general elliptically polarized photon is represented by c1 |B + c2 |C . or length. a circularly polarized photon is represented by |B + i |C in ket space. etc. This suggests that a circularly polarized photon is the superposition of a photon polarized in the x-direction (state B) and a photon polarized in the y-direction (state C). (2. so that two vectors of different lengths pointing in the same direction would represent different wave states.2. Thus. or a molecule.e. We conclude that a ket space must be a complex vector space if it is to properly represent the mutual interrelations between the possible states of a microscopic system. (2. etc. By analogy with classical physics. and the direction would correspond to the frequency and wave-length. then the magnitude. In classical physics. we know from classical physics that a circularly polarized wave is a superposition of two waves of equal amplitude.3).4 Ket space 2 FUNDAMENTAL CONCEPTS that it comes in irreducible packets called photons.) or we see nothing—we can never see a fraction or a multiple of a state.8) 12 . but with the proviso that state C is 90 ◦ out of phase with state B.7) (2. if we observe a wave then the amplitude of the wave can take any value between zero and infinity. plane polarized in orthogonal directions. Well. we can use complex numbers to simultaneously represent the weighting and relative phase in a linear superposition. electrons. We have seen. atoms. (2. in Eq. so that |R = c1 |A + c2 |B . if we were to represent a classical wave by a vector. a photon.

or denumerably infinite.g.and y-directions. So.4 Ket space 2 FUNDAMENTAL CONCEPTS We say that |R is dependent on |A and |B . then the possible states of the system are represented as an N-dimensional ket space. dimensions. respectively). Such a space can be treated in more or less the same manner as a finite-dimensional space. Likewise. a set of ket vectors (or states) are termed independent if none of them are expressible linearly in terms of the others. Such a space is termed a Hilbert space by mathematicians. This type of space requires a slightly different treatment to spaces of finite. one-dimensional potential well). Likewise..g.2. If there are N independent states. some microscopic systems have a nondenumerably infinite number of independent states (e. 13 . The possible states of such a system are represented as a ket space whose dimensions are nondenumerably infinite. In fact. Some microscopic systems have a finite number of independent states (e. the ket space which represents the possible polarization states of a photon propagating in the z-direction is two-dimensional (the two independent vectors correspond to photons plane polarized in the x.g. a free particle). The possible states of such a system are represented as a ket space whose dimensions are denumerably infinite. It follows that the state R can be regarded as a linear superposition of the states A and B. Some microscopic systems have a denumerably infinite number of independent states (e. Thus. the spin states of an electron in a magnetic field). Unfortunately. The dimensionality of a conventional vector space is defined as the number of independent vectors contained in the space.. the states of a general microscopic system can be represented as a complex vector space of (possibly) infinite dimensions.. In conclusion. a particle in an infinitely deep. we can also say that state R is dependent on states A and B. the dimensionality of a ket space is equivalent to the number of independent ket vectors it contains. any ket vector (or state) which is expressible linearly in terms of certain others is said to be dependent on them.

e. N → ∞). Imagine a general functional. where |A and |B are any two kets in a given ket space. It also does so in a deterministic manner: i. Not surprisingly. labeled F.. This process is represented mathematically by writing F|(|A ) = φA . (2. but since a ket space must be complete if it is to represent the states of a microscopic system. such functionals are termed linear functionals.5 Bra space 2 FUNDAMENTAL CONCEPTS 2.10) for all vectors in the ket space is if N F|(|A ) = i=1 1 fi αi . and spitting out a general complex number φA .5 Bra space A snack machine inputs coins plus some code entered on a key pad.e. the same money plus the same code produces the same snack (or the same error message) time after time. acting on a general ket vector. space].e.. they are complete).12) Actually. this is only strictly true for finite-dimensional spaces. Consider an N-dimensional ket space [i.10) |A = i=1 αi |i . a finite-dimensional. The only way the functional F can satisfy Eq. labeled A. and (hopefully) outputs a snack. we need only consider this special subset. A general linear functional. Mathematicians call such a machine a functional.e. (2.9) Let us narrow our focus to those functionals which preserve the linear dependencies of the ket vectors upon which they operate. labeled F. We can imagine building a rather abstract snack machine which inputs ket vectors and outputs complex numbers in a deterministic fashion. satisfies F|(|A + |B ) = F|(|A ) + F|(|B ).. or denumerably infinite dimensional (i.2. 14 . (2. A general ket vector can be written1 N (2. (2. Only a special subset of denumerably infinite dimensional spaces have this property (i. Note that the input and output of the machine have completely different natures. Let the |i (where i runs from 1 to N) represent N independent ket vectors in this space..11) where the αi are an arbitrary set of complex numbers.

where c is a complex number. and its constituent vectors (which are actually functionals of the ket space) are called bra vectors.14) But. these vectors are written in mirror image notation. More generally. This type of vector space is called a bra space (after Dirac). specified by Eq. There are an infinite number of ways of setting up the correspondence between vectors in a ket space and those in the related bra space.e. Bra space is an example of what mathematicians call a dual vector space (i. So. (2. Note that bra vectors are quite different in nature to ket vectors (hence. i (2. the corresponding bra vector is written N A| = i=1 α∗ i|. so that they can never be confused). this implies that the set of all possible linear functionals acting on an Ndimensional ket space is itself an N-dimensional vector space. A| is termed the dual vector to |A . There is a one to one correspondence between the elements of the ket space and those of the related bra space. there is a corresponding element. that the dual to c A| is c∗ |A .2.11). · · · | and | · · · .17) . it is dual to the original ket space). It follows. (2.5 Bra space 2 FUNDAMENTAL CONCEPTS where the fi are a set of complex numbers relating to the functional. only one of these has any physical significance. which it is also convenient to label A. It follows from the previous three equations that N (2. c1 |A + c2 |B ←→ c∗ A| + c∗ B|. in the bra space.13) F| = i=1 fi i|. That is. Let us define N basis functionals i| which satisfy i|(|j ) = δij ..16) where the are the complex conjugates of the αi . |A ←→ A|. from the above. DC (2. for every element A of the ket space. For a general ket vector A.15) where DC stands for dual correspondence. However. 1 2 15 DC α∗ i (2.

20) that A|A is a real number.20) Consider the special case where |B → |A . 2 A|A ≥ 0.18). (2.22) 16 . i (2.2.11). and spits out a complex number. The combination of a bra and a ket yields a “bra(c)ket” (which is just a number).16). This operation is denoted B|(|A ). however. 2 An inner product is (almost) analogous to a scalar product between a covariant and contravariant vector in some curvilinear space.11)].12) and (2. (2. Consider the functional which is dual to the ket vector N |B = i=1 βi |i (2.19) Mathematicians term B|A the inner product of a bra and a ket. if all of the α i are zero in Eq. as will become apparent later. (2.23) |A =  A|A We can now appreciate the elegance of Dirac’s notation. that we can omit the round brackets without causing any ambiguity. and that The equality sign only holds if |A is the null ket [i. This expression can be further simplified to give B|A . so the operation can also be written B||A . ˜ Given a ket |A which is not the null ket. which also implies that B|A = 0.. This property of bra and ket vectors is essential for the probabilistic interpretation of quantum mechanics. and (2. N B|A = i=1 β∗ α i . (2.21) (2. According to Eqs. (2. (2. (2.e.5 Bra space 2 FUNDAMENTAL CONCEPTS Recall that a bra vector is a functional which acts on a general ket vector. Two kets |A and |B are said to be orthogonal if A|B = 0.18) acting on the ket vector |A . It is easily demonstrated that B|A = A|B ∗ . we can define a normalized ket | A . It follows from Eqs. where   1  ˜ |A . Note. (2.12).

of a conventional vector. and X(c|A ) = cX|A . it makes sense to require that all kets corresponding to physical states have unit norms. The main differences are that summations over discrete labels become integrations over continuous labels. A|A is known as the norm or “length” of |A . Operator X is linear provided that X(|A + |B ) = X|A + X|B . We are only interested in operators which preserve the linear dependencies of the ket vectors upon which they act. completeness must be assumed (it cannot be proved).6 Operators We have seen that a functional is a machine which inputs a ket vector and spits out a complex number. and is analogous to the length. Kronecker delta-functions become Dirac delta-functions. Mathematicians call such a machine an operator. Suppose that when this operator acts on a general ket vector |A it spits out a new ket vector which is denoted X|A . More of this later. for all ket vectors |A and |B .26) (2.6 Operators 2 FUNDAMENTAL CONCEPTS with the property ˜ ˜ A|A = 1.27) (2.24) Here. Consider an operator labeled X. and the normalization convention is somewhat different.25) 17 .2. Such operators are termed linear operators. Since |A and c|A represent the same physical state. 2. Consider a somewhat different machine which inputs a ket vector and spits out another ket vector in a deterministic fashion. Operators X and Y are said to be equal if X|A = Y|A (2. (2. or magnitude. It is possible to define a dual bra space for a ket space of nondenumerably infinite dimensions in much the same manner as that described above. for all complex numbers c.

the operator in the middle.6 Operators 2 FUNDAMENTAL CONCEPTS for all kets in the ket space in question. The multiplication is associative: X(Y|A ) = (X Y)|A = X Y|A . However. it is noncommutative: X Y = Y X. and |A can be written B|X|A without ambiguity. A suitable notation to use for the resulting bra when X operates on B| is B|X. Consider the inner product of a general bra B| with the ket X|A . The triple product of B|. it may be considered to be the inner product of |A with some bra. X + (Y + Z) = (X + Y) + Z. so we might as well call it the same operator acting on |B . Operators can be added together. This bra depends antilinearly on |A and must therefore depend linearly on A|. we have only considered linear operators acting on ket vectors.33) (2. and the ket vector on the right.31) (2. This bra depends linearly on B|. This operator is uniquely determined by the original operator X. We can also give a meaning to their operating on bra vectors. Operator X is termed the null operator if X|A = |0 (2. so we may look on it as the result of some linear operator applied to B|. (2. Consider the dual bra to X|A . X(Y Z) = (X Y)Z = X Y Z.29) (2.2.32) (2.30) So far. Thus. provided we adopt the convention that the bra vector always goes on the left. X. Operators can also be multiplied. in general. This product is a number which depends linearly on |A . The equation which defines this vector is ( B|X)|A = B|(X|A ) (2. Thus.34) for any |A and B|.28) for all ket vectors in the space. Such addition is defined to obey a commutative and associate algebra: X + Y = Y + X. it may be regarded as the result of some 18 .

DC X|A ←→ A|X† .35) It is readily demonstrated that B|X† |A = A|X|B ∗ . This operator is termed the adjoint of X. Clearly.41) Mathematicians term the operator |B A| the outer product of |B and A|.39) This clearly depends linearly on the ket |A and the bra |B .37) It is also easily seen that the adjoint of the adjoint of a linear operator is equivalent to the original operator.. (2. ξ = ξ† . Suppose that we right-multiply the above product by the general ket |C . Thus.7 The outer product 2 FUNDAMENTAL CONCEPTS linear operator applied to A|. (2. The outer product should not be confused with the inner product.e. A Hermitian operator ξ has the special property that it is its own adjoint: i. as is easily demonstrated by left-multiplying the expression (2. (2. and is denoted X† .38) (2. A|B .2. We obtain |B A|C = A|C |B . B|X|A . (2.39) by a general bra C|. A|X. (2. Thus. which is just a number. Are there any other products we are allowed to form? How about |B A| ? (2. X|A . plus (X Y)† = Y † X† . It is also easily demonstrated that (|B A|)† = |A B|. This operator also acts on bras.40) since A|C is just a number.36) 2. X Y. |B A| acting on a general ket |C yields another ket.7 The outer product So far we have formed the following products: B|A . 19 . the product |B A| is a linear operator.

and the eigenkets corresponding to different eigenvalues are orthogonal. Clearly. applying X to one of its eigenkets yields the same eigenket multiplied by the associated eigenvalue. right-multiply the above equation by |ξ . are numbers called eigenvalues.44) where |ξ is the eigenket associated with the eigenvalue ξ .42) where x . |x and have the property X|x = x |x . (2. This proves that the eigenvalues are real numbers. X|x = x |x . . Three important results are readily deduced: (i) The eigenvalues are all real numbers. the dual equation to Eq. (2.43) . (2.44) (for the eigenvalue ξ ) reads ξ |ξ = ξ ∗ ξ |.. .. |x .48) 20 .8 Eigenvalues and eigenvectors 2 FUNDAMENTAL CONCEPTS 2. (2. (2. the ket X|A is not a constant multiple of |A ...45) If we left-multiply Eq. Suppose that the eigenvalues ξ and ξ are different.8 Eigenvalues and eigenvectors In general. (2.. (2.44) by ξ |.46) Suppose that the eigenvalues ξ and ξ are the same.2. However. Since ξ is Hermitian.47) where we have used the fact that |ξ is not the null ket. . and take the difference. It follows that ξ |ξ = 0. Consider the eigenkets and eigenvalues of a Hermitian operator ξ. we obtain (ξ − ξ ∗ ) ξ |ξ = 0. there are some special kets known as the eigenkets of operator X. These are denoted |x . It follows from the above that ξ = ξ ∗.. These are denoted ξ|ξ = ξ |ξ . (2. (2. x .

the result is to cause the photon to jump into a state of polarization parallel or perpendicular to the optic axis of the film.49) (iii) The dual of any eigenket is an eigenbra belonging to the same eigenvalue. momentum. In general. its position. by placing a polaroid film in its path.9 Observables We have developed a mathematical formalism which comprises three types of objects—bras.2. etc. We. Since the lengths of bras and kets have no physical significance. We have seen that if we observe the polarization state of a photon. energy. spin. Note that the operators have to be linear. An eigenbra of ξ corresponding to an eigenvalue ξ is defined ξ |ξ = ξ |ξ . in general. However. kets. and conversely. (ii) The eigenvalues associated with eigenkets are the same as the eigenvalues associated with eigenbras. the only objects we have left over are operators. (2. so we must conclude that bras could just as well be used to represent the states of a microscopic system. therefore. spit out bras/kets pointing in different directions when fed bras/kets pointing in the same direction but differing in length.)? How can these be represented in our formalism? Well. it is reasonable to suppose that non-linear operators are also without physical significance. and the latter state is transmitted (which is how we tell them apart). otherwise they would. The former state is absorbed.. What about the dynamical variables of the system (e. assume that the dynamical variables of a microscopic system are represented as linear operators acting on the bras and kets which correspond to the various possible states of the system.9 Observables 2 FUNDAMENTAL CONCEPTS which demonstrates that eigenkets corresponding to different eigenvalues are orthogonal. We have already seen that kets can be used to represent the possible states of a microscopic system. we cannot predict into which state a given photon will jump 21 . there is a one to one correspondence between the elements of a ket space and its dual bra space. 2.g. and linear operators.

. more generally.e. if an observation is made. It is clear that the photon will definitely be transmitted through the second film. such a state is termed an eigenstate. There is nothing special about the polarization states of a photon. we can say that when a dynamical variable of a microscopic system is measured the system is caused to jump into one of a number of independent states (note that the perpendicular and parallel polarization states of our photon are linearly independent).2.e. Furthermore. with one particular value for the dynamical variable. We can make a second observation of the polarization state of such a photon by placing an identical polaroid film (with the same orientation of the optic axis) immediately behind the first film. a different value of the dynamical variable. The fact that the result of the measurement must be a real number implies that dynamical variables can only be represented by Hermitian operators (since only Hermitian operators are guaranteed to have real eigenvalues). made immediately after the first one. the result of the measurement is the eigenvalue associated with the eigenket into which the system jumps. Finally. In general. each of these final states is associated with a different result of the measurement: i. However. We also known that after passing though the film a photon must be in a state of polarization perpendicular to the optic axis (otherwise it would not have been transmitted). The fact that the eigenkets of a Hermitian operator corresponding to different eigenvalues (i. and the system is found to be a one particular final state. by a fairly non-obvious leap of intuition. So. and yield the same value for the dynamical variable. then a second observation.. we are going to assert that a measurement of a dynamical variable corresponding to an operator X in ket space causes the system to jump into a state corresponding to one of the eigenkets of X. Note that the result of the measurement must be a real number (there are no measurement machines which output complex numbers). Not surprisingly. we do know that if the photon is initially polarized parallel to the optic axis then it will definitely be absorbed.9 Observables 2 FUNDAMENTAL CONCEPTS (except in a statistical sense). and if it is initially polarized perpendicular to the axis then it will definitely be transmitted. will definitely find the system in the same state. How can we represent all of these facts in our mathematical formalism? Well. different results of the measurement) are orthogonal is in accordance with our earlier requirement that the states into which the system jumps should be mutually 22 .

This gives us the physical significance of the eigenvalues. then the states into which the system may jump on account of the measurement are such that the original state is dependent on them. the system is left in the associated eigenstate. This fairly innocuous statement has two very important corollaries. Thus.2. We can conclude that the result of a measurement of a dynamical variable represented by a Hermitian operator ξ must be one of the eigenvalues of ξ. the second measurement is bound to give the same result as the first. any observable quantity must be a Hermitian operator with a complete set of eigenstates. then a measurement of ξ is bound to give the result ξ . This follows because the system cannot jump into an eigenstate corresponding to a different eigenvalue of ξ. 23 . if the system is in an eigenstate of ξ. Furthermore. corresponding to an eigenvalue ξ . in order for a Hermitian operator ξ to be observable its eigenkets must form a complete set.e..) It is reasonable to suppose that if a certain dynamical variable ξ is measured with the system in a particular state. It follows that a second measurement made immediately after the first one must leave the system in an eigenstate corresponding to the eigenvalue ξ . will be dropped.e. Conversely. First.. In other words. since such a state is not dependent on the original state. immediately after an observation whose result is a particular eigenvalue ξ . this eigenstate is orthogonal to (i. In other words.9 Observables 2 FUNDAMENTAL CONCEPTS independent. Second. and a dynamical variable and its representative operator. a general ket must always be dependent on the eigenkets of ξ. (From now on. independent of) any other eigenstate corresponding to a different eigenvalue. it must always be able to jump into one of the eigenstates of ξ. A Hermitian operator which satisfies this condition is termed an observable. it stands to reason that a measurement of ξ must always yield some result. Conversely. every eigenvalue of ξ is a possible result of a measurement made on the corresponding dynamical variable. This can only be the case if the eigenkets form a complete set (i. However. they span ket space). It follows that no matter what the initial state of the system. for the sake of simplicity. the distinction between a state and its representative ket vector.

2. It is easily demonstrated that |A = ξ |ξ ξ |A .50) where δξ ξ is unity if ξ = ξ . we can express any given state |A as a linear combination of them. a complex number. and the transition probability to the same eigenstate |ξ is unity. suppose that the system is initially in a state |A which is not an eigenstate of ξ. (2. It is convenient to normalize our eigenkets such that they all have unit norms. It is impossible to determine into which eigenstate a given system will jump.10 Measurements 2 FUNDAMENTAL CONCEPTS 2. The result of the measurement is the associated eigenvalue (or some function of this quantity). How about if we identify the transition probability with the modulus squared of the inner product. but it is possible to predict the probability of such a transition. In fact. If the system is initially in an eigenstate |ξ then the transition probability to a eigenstate |ξ corresponding to a different eigenvalue is zero.10 Measurements We have seen that a measurement of some observable ξ of a microscopic system causes the system to jump into one of the eigenstates of ξ. it is the correct guess. as a result of a measurement made on the system? Let us start with the simplest case. Since the eigenstates of an observable ξ form a complete set. Can we use this correspondence to obtain a general rule for calculating transition probabilities? Well. Let us try again. It follows from the orthogonality property of the eigenkets that ξ |ξ = δξ ξ . Can we identify the transition probability to a final eigenstate |ξ with the inner product A|ξ ? The straight answer is “no”. and complex probabilities do not make much sense. and zero otherwise. 24 (2. This guess also gives the right answer for the transition probabilities between eigenstates. what is the probability that a system in some initial state |A makes a transition to an eigenstate |ξ of an observable ξ. So. we are assuming that the eigenvalues of ξ are all different. since A|ξ is.51) . For the moment. Note that the probability of a transition from an initial eigenstate |ξ to a final eigenstate |ξ is the same as the value of the inner product ξ |ξ . | A|ξ |2 ? This quantity is definitely a positive number (so it could be a probability). in general.

is given by ξ = ξ ξ P(ξ ) = ξ ξ | A|ξ |2 25 . = 2 A|A ξ | A|ξ | (2. is P(ξ ) ∝ | A|ξ |2 . What is the mean value of the measurement? This quantity.11 Expectation values 2 FUNDAMENTAL CONCEPTS A| = ξ A|ξ A|ξ ξ ξ |. The relative probability of a transition to an eigenstate |ξ . We know that each measurement yields the value ξ with probability P(ξ ).52) | A|ξ |2 .56) (2.54) ξ where 1 denotes the identity operator.57) 2. (2.53) A|A = where the summation is over all the different eigenvalues of ξ.2. The absolute probability is clearly P(ξ ) = | A|ξ |2 | A|ξ |2 .20). which is equivalent to the relative probability of a measurement of ξ yielding the result ξ . which is generally referred to as the expectation value of ξ. (2. then this probability simply reduces to P(ξ ) = | A|ξ |2 . (2. Suppose a measurement of the observable ξ is made on each system. Note that all of the above results follow from the extremely useful (and easily proved) result |ξ ξ | = 1.11 Expectation values Consider an ensemble of microscopic systems prepared in the same initial state |A . ξ |A = ξ (2. (2. and the fact that the eigenstates are mutually orthogonal. and use has been made of Eq.55) If the ket |A is normalized such that its norm is unity.

62) 26 .2. however. Note.e. (2. 1 − | ξa |ξb |2 (2. This is unfortunate. This property depends on the particular correspondence (2. (2.58) which reduces to ξ = A|ξ|A with the aid of Eq.e. (2. Consider the identity operator.12 Degeneracy 2 FUNDAMENTAL CONCEPTS = ξ ξ A|ξ ξ |A = ξ A|ξ|ξ ξ |A . All states are eigenstates of this operator with the eigenvalue unity.60) for all |A . These are termed degenerate eigenstates.12 Degeneracy Suppose that two different eigenstates |ξa and |ξb of ξ correspond to the same eigenvalue ξ . = |ξb − ξa |ξb |ξa . in general.61) (2. 2.. they are not orthogonal to each other (i. It follows that we can always construct two mutually orthogonal degenerate eigenstates. (2. the expectation value of this operator is always unity: i. between the elements of a ket space and those of its dual bra space..60) is satisfied because of the more general property (2. A|1|A = A|A = 1.59) 2. Degenerate eigenstates are necessarily orthogonal to any eigenstates corresponding to different eigenvalues.21) of the norm. |ξ1 |ξ2 = |ξa . (2.16). Note that it is only possible to normalize a given ket |A such that Eq.54). that any linear combination of |ξa and |ξb is also an eigenstate corresponding to the eigenvalue ξ . 1. Thus. the proof of orthogonality given in Sect. since much of the previous formalism depends crucially on the mutual orthogonality of the different eigenstates of an observable.8 does not work in this case). but. For instance. that we adopted earlier.

and the system is consequently thrown into one of the eigenstates of ξ. respectively. we can say that the observables ξ and η simultaneously have the values ξ and η . if all eigenstates of ξ are also eigenstates of η then it is always possible to make a simultaneous measurement of ξ and η. Suppose that the system is thrown into an eigenstate |η . |ξ . with eigenvalue ξ . These could be measured using appropriate Stern-Gerlach apparatuses (see Sakurai. that the eigenstates of ξ are not eigenstates of η. a measurement of η will definitely give the result η . Is it still possible to measure both observables simultaneously? Let us again make an observation of ξ which throws the system into an eigenstate |ξ . Suppose. Suppose that we make a measurement of ξ.13 Compatible observables Suppose that we wish to simultaneously measure two observables. and so on. Another measurement of ξ will throw the system into one of the (many) eigenstates of ξ which depend on |η . For instance. ξ and η. We conclude that it is always possible to construct a complete set of mutually orthogonal eigenstates for any given observable. Each eigenstate is again associated with a different possible result of the measurement. with eigenvalue ξ . Sect. Clearly. It is clear that if the observables ξ and η do not possess simultaneous eigenstates then if the value 27 . A second measurement of ξ will definitely give the result ξ . each of these eigenstates is associated with a different result of the measurement. 2. This will throw the system into one of the (many) eigenstates of η which depend on |ξ . suppose that the eigenstate |ξ is also an eigenstate of η. In principle. In this sense. with the eigenvalue η .13 Compatible observables 2 FUNDAMENTAL CONCEPTS This result is easily generalized to the case of more than two degenerate eigenstates. We can now make a second observation to determine η. and another which can measure η. What happens if we now make a measurement of η? Well. the two observables in question might be the projection in the x. 1.2. Such observables are termed compatible.and z-directions of the spin angular momentum of a spin one-half particle. with eigenvalue η . In this case. however.1). of a microscopic system? Let us assume that we possess an apparatus which is capable of measuring ξ.

It is clear that if the observables ξ and η do not possess simultaneous eigenstates then if the value of ξ is known (i.e., the system is in an eigenstate of ξ) then the value of η is uncertain (i.e., the system is not in an eigenstate of η), and vice versa. We say that the two observables are incompatible.

We have seen that the condition for two observables ξ and η to be simultaneously measurable is that they should possess simultaneous eigenstates (i.e., every eigenstate of ξ should also be an eigenstate of η). Suppose that this is the case. Let a general eigenstate of ξ, with eigenvalue ξ′, also be an eigenstate of η, with eigenvalue η′. It is convenient to denote this simultaneous eigenstate |ξ′η′⟩. We have

ξ|ξ′η′⟩ = ξ′|ξ′η′⟩, (2.63)
η|ξ′η′⟩ = η′|ξ′η′⟩. (2.64)

We can left-multiply the first equation by η, and the second equation by ξ, and then take the difference. The result is

(ξ η − η ξ)|ξ′η′⟩ = |0⟩ (2.65)

for each simultaneous eigenstate. Recall that the eigenstates of an observable must form a complete set. It follows that the simultaneous eigenstates of two observables must also form a complete set. Thus, the above equation implies that

(ξ η − η ξ)|A⟩ = |0⟩, (2.66)

where |A⟩ is a general ket. The only way that this can be true is if

ξ η = η ξ. (2.67)

Thus, the condition for two observables ξ and η to be simultaneously measurable is that they should commute.
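The equivalence between ξη = ηξ and the existence of a complete set of simultaneous eigenstates can be made concrete with finite matrices. The sketch below is an illustration only (the unitary basis and the eigenvalues are arbitrary choices): it builds two Hermitian matrices that are diagonal in the same orthonormal basis, checks that they commute, and verifies that each basis column is a simultaneous eigenket |ξ′η′⟩.

import numpy as np

# Random unitary whose columns will serve as the common eigenkets.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(A)

# Two Hermitian operators diagonal in the same basis, hence compatible.
xi  = U @ np.diag([1.0, 2.0, 3.0]) @ U.conj().T
eta = U @ np.diag([5.0, 7.0, 9.0]) @ U.conj().T

print(np.allclose(xi @ eta, eta @ xi))                    # True: xi and eta commute

# Each column of U is a simultaneous eigenket of both operators.
ket = U[:, 0]
print(np.allclose(xi @ ket, 1.0 * ket), np.allclose(eta @ ket, 5.0 * ket))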

2.14 The uncertainty relation

We have seen that if ξ and η are two noncommuting observables, then a determination of the value of ξ leaves the value of η uncertain, and vice versa. It is possible to quantify this uncertainty. For a general observable ξ, we can define a Hermitian operator

∆ξ = ξ − ⟨ξ⟩, (2.68)

where the expectation value is taken over the particular physical state under consideration. It is obvious that the expectation value of ∆ξ is zero. The expectation value of (∆ξ)² ≡ ∆ξ ∆ξ is termed the variance of ξ, and is, in general, non-zero. In fact, it is easily demonstrated that

⟨(∆ξ)²⟩ = ⟨ξ²⟩ − ⟨ξ⟩². (2.69)

The variance of ξ is a measure of the uncertainty in the value of ξ for the particular state in question (i.e., it is a measure of the width of the distribution of likely values of ξ about the expectation value). If the variance is zero then there is no uncertainty, and a measurement of ξ is bound to give the expectation value, ⟨ξ⟩.

Consider the Schwarz inequality

⟨A|A⟩ ⟨B|B⟩ ≥ |⟨A|B⟩|², (2.70)

which is analogous to

|a|² |b|² ≥ |a·b|² (2.71)

in Euclidian space. This inequality can be proved by noting that

(⟨A| + c* ⟨B|)(|A⟩ + c |B⟩) ≥ 0, (2.72)

where c is any complex number. If c takes the special value −⟨B|A⟩/⟨B|B⟩ then the above inequality reduces to

⟨A|A⟩ ⟨B|B⟩ − |⟨A|B⟩|² ≥ 0, (2.73)

which is the same as the Schwarz inequality. Let us substitute

|A⟩ = ∆ξ |⟩, (2.74)
|B⟩ = ∆η |⟩, (2.75)

into the Schwarz inequality, where the blank ket |⟩ stands for any general ket. We find

⟨(∆ξ)²⟩ ⟨(∆η)²⟩ ≥ |⟨∆ξ ∆η⟩|², (2.76)

where use has been made of the fact that ∆ξ and ∆η are Hermitian operators. Note that

∆ξ ∆η = (1/2) [∆ξ, ∆η] + (1/2) {∆ξ, ∆η}, (2.77)

where the commutator, [∆ξ, ∆η], and the anti-commutator, {∆ξ, ∆η}, are defined

[∆ξ, ∆η] ≡ ∆ξ ∆η − ∆η ∆ξ, (2.78)
{∆ξ, ∆η} ≡ ∆ξ ∆η + ∆η ∆ξ. (2.79)

The commutator is clearly anti-Hermitian,

([∆ξ, ∆η])† = (∆ξ ∆η − ∆η ∆ξ)† = ∆η ∆ξ − ∆ξ ∆η = −[∆ξ, ∆η], (2.80)

whereas the anti-commutator is obviously Hermitian. Now, it is easily demonstrated that the expectation value of a Hermitian operator is a real number, whereas the expectation value of an anti-Hermitian operator is a pure imaginary number. It is clear that the right hand side of

⟨∆ξ ∆η⟩ = (1/2) ⟨[∆ξ, ∆η]⟩ + (1/2) ⟨{∆ξ, ∆η}⟩ (2.81)

consists of the sum of a purely real and a purely imaginary number. Taking the modulus squared of both sides gives

|⟨∆ξ ∆η⟩|² = (1/4) |⟨[ξ, η]⟩|² + (1/4) |⟨{∆ξ, ∆η}⟩|², (2.82)

where use has been made of ⟨∆ξ⟩ = 0, etc. The final term in the above expression is positive definite, so we can write

⟨(∆ξ)²⟩ ⟨(∆η)²⟩ ≥ (1/4) |⟨[ξ, η]⟩|², (2.83)

where use has been made of Eq. (2.76). The above expression is termed the uncertainty relation. According to this relation, an exact knowledge of the value of ξ implies no knowledge whatsoever of the value of η, and vice versa. The one exception to this rule is when ξ and η commute, in which case exact knowledge of ξ does not necessarily imply no knowledge of η.
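Equation (2.83) is easy to test numerically. In the hedged sketch below, ξ and η are taken to be the spin one-half operators Sx = (ħ/2)σx and Sy = (ħ/2)σy (in units with ħ = 1), and |A⟩ is an arbitrary normalized two-component ket; these specific choices are examples only, and both sides of the uncertainty relation are evaluated directly.

import numpy as np

hbar = 1.0
sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)

# An arbitrary normalized state |A>.
A = np.array([1.0, 0.5 + 0.3j], dtype=complex)
A = A / np.linalg.norm(A)

def expect(op, psi):
    return np.vdot(psi, op @ psi)

# Variances <(Delta xi)^2> = <xi^2> - <xi>^2 (real, since the operators are Hermitian).
var_x = (expect(sx @ sx, A) - expect(sx, A)**2).real
var_y = (expect(sy @ sy, A) - expect(sy, A)**2).real

comm = sx @ sy - sy @ sx                        # the commutator [Sx, Sy]
rhs = abs(expect(comm, A))**2 / 4               # right-hand side of Eq. (2.83)

print(var_x * var_y >= rhs - 1e-12)             # True: the uncertainty relation holds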

generally have continuous eigenvalues. Note. In fact. Suppose that ξ is an observable with continuous eigenvalues. This is the major difference between eigenstates in a finite-dimensional and an infinite-dimensional ket space. they are infinitely long. rather than having discrete values.50) generalizes to ξ |ξ = δ(ξ − ξ ). these eigenstates have infinite norms: i. We can still write the eigenvalue equation as ξ|ξ = ξ |ξ . The reason for this is because continuous eigenvalues imply a ket space of nondenumerably infinite dimension.15 Continuous spectra Up to now. Fortunately. (2. Let us assume.85) where δ(x) denotes the famous Dirac delta-function. The eigenstates |ξ must form a complete set if ξ is to be an observable.84) But. continuous eigenvalues are unavoidable in quantum mechanics. ξ can now take a continuous range of values. Note that there are clearly a nondenumerably infinite number of mutually orthogonal eigenstates of ξ. It follows that any general ket can be expanded in 31 . (2. many of the results we obtained previously for a finite-dimensional ket space with discrete eigenvalues can be generalized to ket spaces of nondenumerably infinite dimensions. In fact.15 Continuous spectra 2 FUNDAMENTAL CONCEPTS 2. (2. Hence. also. the dimensionality of ket space is nondenumerably infinite.e. we have studiously avoided dealing with observables possessing eigenvalues which lie in a continuous range. that eigenstates corresponding to a continuous range of eigenvalues cannot be normalized so that they have unit norms. The orthogonality condition (2.. for the sake of simplicity.86) Note that a summation over discrete eigenvalues goes over into an integral over a continuous range of eigenvalues. Unfortunately.54) generalizes to dξ |ξ ξ | = 1. that ξ can take any value. namely position and momentum. the most important observables of all. The extremely useful relation (2.2.

Fortunately. There are number of other cases we could look at.90) We have now studied observables whose eigenvalues can take a discrete number of values as well as those whose eigenvalues can take any value.88) (2. observables whose eigenvalues can only take a finite range of values. In fact. ξ |A = dξ | A|ξ |2 . For instance. In fact.86). 32 . Cha. or observables whose eigenvalues take on a finite range of values plus a set of discrete values.89). We have seen that it is not possible to normalize the eigenstates |ξ such that they have unit norms. (2.89) A| = A|A = dξ A|ξ dξ A|ξ These results also follow simply from Eq. this convenient normalization is still possible for a general state vector.51)–(2. the expansions (2. the normalization condition can be written A|A = dξ | A|ξ |2 = 1.87) (2. (2. (2. ξ |. according to Eq.15 Continuous spectra 2 FUNDAMENTAL CONCEPTS terms of the |ξ . (2. Both of these cases can be dealt with using a fairly straight-forward generalization of the previous analysis (see Dirac. II and III).53) generalize to |A = dξ |ξ ξ |A .2.

1 Introduction So far. by definition. the position q and momentum p of some component of a dynamical system are represented as real numbers which.3 POSITION AND MOMENTUM 3 Position and momentum 3. dt ∂pi dpi ∂H = − . However.1) (3. dqi ∂H = . the behaviour of a classical system can be specified in terms of Lagrangian or Hamiltonian dynamics. In classical mechanics. For instance.2) where the function H(qi . etc. In quantum mechanics. these quantities are represented as noncommuting linear Hermitian operators acting in a ket space which represents all of the possible states of the system. an angular coordinate has an associated angular momentum. in Hamiltonian dynamics. Our first task is to discover a quantum mechanical replacement for the classical result q p − p q = 0. a Cartesian coordinate has an associated linear momentum. dt ∂qi (3. t) is the energy of the system at time t expressed in terms of the classical coordinates and canonical momenta. commute.2 Poisson brackets Consider a dynamic system whose state at a particular time t is fully specified by N independent classical coordinates qi (where i runs from 1 to N). This function is 33 . Associated with each generalized coordinate qi is a classical canonical momentum pi . As is well-known. pi . we have considered general dynamical variables represented by general linear operators acting in ket space. Let us investigate the role of such variables in quantum mechanics. Do the position and momentum operators commute? If not. For instance. what is the value of q p − p q? 3. in classical mechanics the most important dynamical variables are those involving position and momentum.

34 (3. ∂qi ∂pi ∂pi ∂qi (3. [qi . v] = i=1 ∂u ∂v ∂u ∂v − . If such a construct exists we hope to generalize it somehow to obtain a rule describing how dynamical variables commute with one another in quantum mechanics. (3. v] = −[v. indeed.3. H]. pj ] = 0. u]. where use has been made of Hamilton’s equations.4) (3. The Poisson bracket of two dynamical variables u and v is defined N [u. There is.6) The time evolution of a dynamical variable can also be written in terms of a Poisson bracket by noting that du = dt = i=1 N i=1 N ∂u dpi ∂u dqi + ∂qi dt ∂pi dt ∂u ∂H ∂u ∂H − ∂qi ∂pi ∂pi ∂qi (3.8) .5) (3. one well-known construct in classical dynamics which involves products of dynamical variables.2 Poisson brackets 3 POSITION AND MOMENTUM usually referred to as the Hamiltonian of the system. instead of functions? Well.7) = [u. [pi . the main properties of the classical Poisson bracket are as follows: [u. qj ] = 0. pj ] = δij .3) where u and v are regarded as functions of the coordinates and momenta qi and pi . Can we construct a quantum mechanical Poisson bracket in which u and v are noncommuting operators. We are interested in finding some construct of classical dynamics which consists of products of dynamical variables. It is easily demonstrated that [qi .

v2 ]. v1 ]v2 + v1 [u1 .16) (3. v] = [u1 .19) . c] = 0.. v]u2 + u1 [u2 .18) (3. v]. [v. v2 ]u2 + u1 [u2 . v2 ]} u2 + u1 {[u2 . Equating the above two results yields [u1 . [u1 u2 .17) Since this relation must hold for u1 and v1 quite independent of u2 and v2 . etc. [w.9) (3. v2 ]} = [u1 . we can evaluate the Poisson bracket [u1 u2 . v1 ] + [u. [u.13) first. w]] + [v. (3. v2 ] (3. v1 ]v2 + v1 [u1 u2 . since we can use either of the formulae (3.12) or (3.14) The last relation is known as the Jacobi identity. v2 ] [u1 u2 . v2 ]. In the above. v1 ]v2 + v1 [u2 . and c represents a number. u]] + [w. h u2 v2 − v2 u2 = i ¯ [u2 .3. which satisfies all of the above relations? Well. v1 ](u2 v2 − v2 u2 ) = (u1 v1 − v1 u1 )[u2 . [u. v1 + v2 ] = [u. etc. v]] = 0.2 Poisson brackets 3 POSITION AND MOMENTUM [u.13) (3. v2 ]. v] + [u2 . v1 ].10) (3. it follows that u1 v1 − v1 u1 = i ¯ [u1 . and [u1 u2 . [u. v1 ]v2 + u1 v1 [u2 . and [u.11) (3. Note that the order of the various factors has been preserved. v1 v2 ]u2 + u1 [u2 . u. since they now represent noncommuting operators. v1 ]v2 + v1 [u1 .12) (3. v1 ]v2 + v1 [u. v1 v2 ] in two different ways..15) = [u1 . represent dynamical variables. v1 ]u2 v2 + u1 [u2 . Can we find some combination of noncommuting operators u and v. h 35 (3. w. Thus. v2 ]. (3. v2 ]u2 + v1 u1 [u2 . v1 ]v2 u2 + v1 [u1 . v2 ]. v1 v2 ] = [u. v]. v. v1 v2 ] = [u1 u2 . v1 v2 ] = {[u1 . v] = [u1 . v1 v2 ] = [u1 . [u1 + u2 .

21) is Planck’s constant. In other words. where h h = 6. [u. h Since u1 . We will use [u. Thus. the quantum mechanical Poisson bracket of two dynamical variables u and v is given by uv − vu [u. it follows that ¯ is just a number. and also commutes with (u1 v1 −v1 u1 ).22) i¯ h It is easily demonstrated that the quantum mechanical Poisson bracket. qj ] = 0. satisfies all of the relations (3.23) (3. the notation [u. This argument yields the fundamental commutation relations [qi . v]quantum to denote the quantum Poisson bracket.6261 × 10−34 J s (3. [qi . h (3.24) (3. are quite general operators. (3. since the classical Poisson bracket of two real dynamical variables is real.8)–(3.14). Quantum mechanics agrees with h experiments provided that ¯ takes the value h/2π. v] is conventionally reserved for the commutator u v − v u in quantum mechanics. at least for the simplest cases.4)–(3. This requirement is satisfied if ¯ is a real number.20) i¯ h where ¯ is a new universal constant of nature. v1 . [pi . h Thus.. (3. ξ 36 .2 Poisson brackets 3 POSITION AND MOMENTUM where ¯ does not depend on u1 . We h want the quantum mechanical Poisson bracket of two Hermitian operators to be an Hermitian operator itself.25) These results provide us with the basis for calculating commutation relations between general dynamical variables. defined in Eq. as defined above.3.22). Somewhat confusingly. u2 . we are assuming that Eqs.3). v]quantum = The strong analogy we have found between the classical Poisson bracket. v] = . v] . For instance. leads us to make the assumption that the quantum mechanical bracket has the same value as the corresponding classical bracket. and the quantum mechanical Poisson bracket. (3.6) hold for quantum mechanical as well as classical Poisson brackets. v2 . pj ] = 0. if two dynamical variables. etc. defined in Eq. (3. [u. (3. pj ] = i ¯ δij .
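Although the fundamental commutation relation [qi, pj] = iħ δij, Eq. (3.25), cannot be realized exactly by finite matrices, it can be illustrated approximately on a discrete grid. In the sketch below (illustrative only; the grid size and trial wave-function are arbitrary choices), x is the diagonal matrix of grid points, p is −iħ times a centred-difference derivative, and (x p − p x)ψ ≈ iħ ψ is checked on a smooth trial function away from the grid boundaries.

import numpy as np

hbar = 1.0
N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

X = np.diag(x)

# Centred-difference matrix for d/dx, so that p = -i hbar d/dx on the grid.
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
P = -1j * hbar * D

# Acting on a smooth, well-localized wave-function, (x p - p x) psi ~ i hbar psi.
psi = np.exp(-x**2)
lhs = (X @ P - P @ X) @ psi
print(np.allclose(lhs[5:-5], 1j * hbar * psi[5:-5], atol=1e-2))   # True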

3.26) The eigenkets satisfy the extremely useful relation [see Eq. mutually orthogonal. can both be written as a power series in the qi and pi . the operator x possesses eigenvalues x lying in the continuous range −∞ < x < +∞ (since the eigenvalues correspond to all the possible results of a measurement of x). 37 .13) allows [ξ. It is clear from Eqs. qi and pi . which corresponds to the Cartesian coordinate x. correspond to a different classical degree of freedom of the system.85)] x |x = δ(x − x ). each pair of generalized coordinate and its conjugate momentum.86)] +∞ dx |x −∞ x | = 1.8)–(3..23)–(3.27) This formula expresses the fact that the eigenkets are complete.25) provide the foundation for the analogy between quantum mechanics and classical mechanics. Note that the classical result (that everything commutes) is obtained in the limit ¯ → 0. Moreover. (2.25).3 Wave-functions 3 POSITION AND MOMENTUM and η. The classical dynamical variable x is represented in quantum mechanics as a linear Hermitian operator which is also called x.3 Wave-functions Consider a simple system with one classical degree of freedom. Suppose that x is free to take any value (e. and suitably normalized. then repeated application of Eqs. Thus.23)–(3. It is only those variables corresponding to the same degree of freedom which may fail to commute.25) that in quantum mechanics the dynamical variables corresponding to different degrees of freedom all commute.23)–(3. (3. (2. [see Eq. An eigenket corresponding to the eigenvalue x is denoted |x . We can span ket space using the suitably normalized eigenkets of x. Moreover. (3. x could be the position of a free particle). (3. In classih cal mechanics.g. 3. (3. Equations (3. η] to be expressed in terms of the fundamental commutation relations (3. classical mechanics can be h regarded as the limiting case of quantum mechanics when ¯ goes to zero.

(3.28)]. dx ) = |ψA (x )|2 dx . We can write x |A = ψA (x ). In other words. (3. This result is easily generalized to dynamical variables possessing continuous eigenvalues. The corresponding normalization for the wave-function is +∞ |ψA (x )|2 dx = 1.3.29) Here. from Sect.31) Consider a second state B represented by a state ket |B and a wave-function ψB (x ). assuming that the eigenvalues of ξ are discrete.27). It is clear that the wave-function of state A is simply the collection of the weights of the corresponding state ket |A . Recall. In fact. +∞ |A = −∞ dx x |A |x (3. The inner product B|A can be written +∞ +∞ B|A = −∞ dx B|x x |A = −∞ ψ∗ (x ) ψA (x ) dx .3 Wave-functions 3 POSITION AND MOMENTUM A state ket |A (which represents a general state A of the system) can be expressed as a linear superposition of the eigenkets of the position operator using Eq. the probability of a measurement of position yielding a result in the range x to x + dx when the wave-function of the system is ψA (x ) is P(x .. Note that state A is completely specified by its wave-function ψA (x ) [since the wave-function can be used to reconstruct the state ket |A using Eq. B (3. if A|A = 1. (3. Thus.10. that the probability of a measurement of a dynamical variable ξ yielding the result ξ when the system is in state A is given by | ξ |A |2 .32) 38 . 2. (3. −∞ (3.30) This formula is only valid if the state ket |A is properly normalized: i.e. ψA (x ) is the famous wave-function of quantum mechanics. the probability of a measurement of x yielding a result lying in the range x to x + dx when the system is in a state |A is | x |A |2 dx . when it is expanded in terms of the eigenkets of the position operator.28) The quantity x |A is a complex function of the position eigenvalue x .
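As a concrete illustration of the normalization condition (3.31) and the overlap-integral form (3.32) of the inner product (a sketch only; the two Gaussian wave-functions below are arbitrary examples, not taken from the notes), both quantities can be evaluated by simple quadrature on a grid.

import numpy as np

# Two normalized coordinate-space wave-functions (arbitrary Gaussian examples).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi_A = np.pi**-0.25 * np.exp(-x**2 / 2)
psi_B = np.pi**-0.25 * np.exp(-(x - 1.0)**2 / 2)

# <A|A> = 1 [Eq. (3.31)] and <B|A> = integral of psi_B* psi_A [Eq. (3.32)].
norm_A = np.sum(abs(psi_A)**2) * dx
overlap = np.sum(np.conj(psi_B) * psi_A) * dx

print(round(float(norm_A), 6))          # ~1.0
print(round(float(overlap.real), 6))    # ~exp(-1/4) ~ 0.7788 for these two packets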

finally. 3.34) where use has been made of Eq.26).35) where ψA (x) is the same function of the operator x that the wave-function ψ A (x ) is of the position eigenvalue x . from the above result.4 Schr¨dinger’s representation .I o Consider the simple system described in the previous section. (3. The ket is termed the standard ket.. DC (3. (3. leaving the dependence on the position operator x tacitly understood. f(x ) is the same function of the position eigenvalue x that f(x) is of the position operator x: i.. Thus. Here. It is easily seen that ∗ ψA (x) ←→ ψA (x) .e. the inner product of two states is related to the overlap integral of their wave-functions. and is denoted . It follows. −∞ giving ψB (x ) = f(x ) ψA (x ). A general state ket can be written ψ(x) .3.36) Note. that a general state ket |A can be written |A = ψA (x) . The dual of the standard ket is termed the standard bra.29). where ψ(x) is a general function of the position operator x.I o 3 POSITION AND MOMENTUM where use has been made of Eqs. If |B = f(x)|A then it follows that +∞ ψB (x ) = = x |f(x) −∞ +∞ dx ψA (x )|x (3.27) and (3.g.4 Schr¨ dinger’s representation . Consider a general function f(x) of the observable x [e. (3. that ψA (x) is often shortened to ψA .33) dx f(x ) ψA (x ) x |x . if f(x) = x 2 then f(x ) = x 2 . (3. f(x) = x 2 ]. and the ket has the wave-function ψ(x ) = 1. and ψ(x ) is the associated wave-function. Consider the ket whose wave-function 39 .

34). According to Eq. (3.43) dx dx dx dx where use has been made of Eq.40) is reasonable because physical wave-functions are square-integrable [see Eq.35) and (3. (3. (3. The new ket is clearly a linear function of the original ket. we can write +∞ −∞ d φ |x dx ψ(x ) = dx +∞ φ(x ) dx −∞ dψ(x ) .I o 3 POSITION AND MOMENTUM is dψ(x )/dx .31)]. the bra φ d/dx satisfies d d ψ = φ ψ . (3.40) assuming that the contributions from the limits of integration vanish.41) φ |x = − dx dx which implies d dφ φ =− .42). Note that d dψ DC dψ∗ d ψ = ←→ = − ψ∗ . Let us denote this operator d/dx. (3. (3.37) Any linear operator which acts on ket vectors can also act on bra vectors. dx (3. (3. so we can think of it as the result of some linear operator acting on ψ .29).3.38) φ dx dx Making use of Eqs.4 Schr¨dinger’s representation . that d † d =− .27) and (3. It follows that d dψ ψ = . This ket is denoted dψ/dx . (2. It follows that dφ(x ) d . dx (3. by comparison with Eqs.39) The right-hand side can be transformed via integration by parts to give +∞ −∞ d φ |x dx ψ(x ) = − dx +∞ −∞ dφ(x ) dx ψ(x ). It follows. Consider d/dx acting on a general bra φ(x).44) dx dx 40 . (3.36). (2. dx dx (3.42) dx dx The neglect of contributions from the limits of integration in Eq. (3.


Thus, d/dx is an anti-Hermitian operator. Let us evaluate the commutation relation between the operators x and d/dx. We have d d(x ψ) d xψ = =x ψ +ψ . (3.45) dx dx dx Since this holds for any ket ψ , it follows that d d x−x = 1. (3.46) dx dx Let p be the momentum conjugate to x (for the simple system under consideration p is a straight-forward linear momentum). According to Eq. (3.25), x and p satisfy the commutation relation xp − px = i¯. h (3.47)
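The anti-Hermitian character of d/dx, and hence the Hermiticity of −iħ d/dx, has a simple discrete analogue. In the illustrative sketch below (assumptions: a uniform grid with periodic boundary conditions, so that the boundary contributions neglected in Eq. (3.40) vanish identically; grid parameters are arbitrary), the centred-difference matrix representing d/dx is exactly anti-symmetric.

import numpy as np

hbar = 1.0
N, dx = 200, 0.1

# Centred-difference d/dx with periodic boundary conditions.
D = np.zeros((N, N))
for j in range(N):
    D[j, (j + 1) % N] = 1.0 / (2 * dx)
    D[j, (j - 1) % N] = -1.0 / (2 * dx)

print(np.allclose(D.T, -D))               # d/dx is anti-Hermitian (real matrix case)
P = -1j * hbar * D
print(np.allclose(P.conj().T, P))         # -i hbar d/dx is Hermitian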

It can be seen, by comparison with Eq. (3.46), that the Hermitian operator −iħ d/dx satisfies the same commutation relation with x that p does. The most general conclusion which may be drawn from a comparison of Eqs. (3.46) and (3.47) is that

p = −iħ d/dx + f(x), (3.48)

since (as is easily demonstrated) a general function f(x) of the position operator automatically commutes with x.

We have chosen to normalize the eigenkets and eigenbras of the position operator so that they satisfy the normalization condition (3.26). However, this choice of normalization does not uniquely determine the eigenkets and eigenbras. Suppose that we transform to a new set of eigenbras which are related to the old set via

⟨x′|_new = e^{iγ′} ⟨x′|_old, (3.49)

where γ′ ≡ γ(x′) is a real function of x′. This transformation amounts to a rearrangement of the relative phases of the eigenbras. The new normalization condition is

⟨x′|x′′⟩_new = ⟨x′| e^{iγ′} e^{−iγ′′} |x′′⟩_old = e^{i(γ′−γ′′)} ⟨x′|x′′⟩_old = e^{i(γ′−γ′′)} δ(x′ − x′′) = δ(x′ − x′′). (3.50)


Thus, the new eigenbras satisfy the same normalization condition as the old eigenbras. By definition, the standard ket ⟩ satisfies ⟨x′|⟩ = 1. It follows from Eq. (3.49) that the new standard ket is related to the old standard ket via

⟩_new = e^{−iγ} ⟩_old, (3.51)

where γ ≡ γ(x) is a real function of the position operator x. The dual of the above equation yields the transformation rule for the standard bra,

⟨_new = ⟨_old e^{iγ}. (3.52)

The transformation rule for a general operator A follows from Eqs. (3.51) and (3.52), plus the requirement that the triple product ⟨ A ⟩ remain invariant (this must be the case, otherwise the probability of a measurement yielding a certain result would depend on the choice of eigenbras). Thus,

A_new = e^{−iγ} A_old e^{iγ}. (3.53)

Of course, if A commutes with x then A is invariant under the transformation. In fact, d/dx is the only operator (we know of) which does not commute with x, so Eq. (3.53) yields

(d/dx)_new = e^{−iγ} (d/dx) e^{iγ} = d/dx + i dγ/dx, (3.54)

where the subscript "old" is taken as read. It follows, from Eq. (3.48), that the momentum operator p can be written

p = −iħ (d/dx)_new − ħ dγ/dx + f(x). (3.55)

Thus, the special choice

γ(x) = ħ^{−1} ∫^x f(x′) dx′ (3.56)

yields

p = −iħ (d/dx)_new. (3.57)


Equation (3.56) fixes γ to within an arbitrary additive constant: i.e., the special eigenkets and eigenbras for which Eq. (3.57) is true are determined to within an arbitrary common phase-factor.

In conclusion, it is possible to find a set of basis eigenkets and eigenbras of the position operator x which satisfy the normalization condition (3.26), and for which the momentum conjugate to x can be represented as the operator

p = −iħ d/dx. (3.58)

A general state ket is written ψ(x)⟩, where the standard ket ⟩ satisfies ⟨x′|⟩ = 1, and where ψ(x′) = ⟨x′|ψ(x)⟩ is the wave-function. This scheme of things is known as Schrödinger's representation, and is the basis of wave mechanics.

3.5 Schrödinger's representation - II

In the preceding sections, we have developed Schrödinger's representation for the case of a single operator x corresponding to a classical Cartesian coordinate. However, this scheme can easily be extended. Consider a system with N generalized coordinates, q1 ··· qN, which can all be simultaneously measured. These are represented as N commuting operators, q1 ··· qN, each with a continuous range of eigenvalues, q1′ ··· qN′. Ket space is conveniently spanned by the simultaneous eigenkets of q1 ··· qN, which are denoted |q1′ ··· qN′⟩. These eigenkets must form a complete set, otherwise the q1 ··· qN would not be simultaneously observable. The orthogonality condition for the eigenkets [i.e., the generalization of Eq. (3.26)] is

⟨q1′ ··· qN′|q1′′ ··· qN′′⟩ = δ(q1′ − q1′′) δ(q2′ − q2′′) ··· δ(qN′ − qN′′). (3.59)

The completeness condition [i.e., the generalization of Eq. (3.27)] is

∫_{−∞}^{+∞} ··· ∫_{−∞}^{+∞} dq1′ ··· dqN′ |q1′ ··· qN′⟩⟨q1′ ··· qN′| = 1. (3.60)

The standard ket ⟩ is defined such that

⟨q1′ ··· qN′|⟩ = 1. (3.61)

70) . we can derive the commutation relations ∂ ∂ qj − q j = δij . (3.63) (3. (3.64) (3. etc. the normalization condition for a physical wave-function is +∞ −∞ +∞ (3. A general state ket is written ψ(q1 · · · qN ) . a general state bra is written φ(q1 · · · qN ).42)] φ ∂ ∂φ =− .5 Schr¨dinger’s representation . (3.66) ··· −∞ |ψ(q1 · · · qN )|2 dq1 · · · dqN = 1..65) φ(q1 · · · qN ) = φ|q1 · · · qN .68) These linear operators can also act on bras (provided the associated wave-functions are square integrable) in accordance with [see Eq.II o 3 POSITION AND MOMENTUM The standard bra is the dual of the standard ket.69) Corresponding to Eq.46).3. is P(q1 · · · qN .62) (3. Finally. ∂qi ∂qi (3. where The probability of an observation of the system finding the first coordinate in the range q1 to q1 + dq1 . Likewise.67) The N linear operators ∂/∂qi (where i runs from 1 to N) are defined ∂ψ ∂ ψ = . ∂qi ∂qi 44 (3. The associated wave-function is ψ(q1 · · · qN ) = q1 · · · qN |ψ . dq1 · · · dqN ) = |ψ(q1 · · · qN )|2 dq1 · · · dqN . (3. ∂qi ∂qi (3. the second coordinate in the range q2 to q2 + dq2 .

76) ∂qi ∂qi ∂qi ∂qi 1 ∂ ∂ = q · · · qN |. o It follows from Eqs. ∂qi ∂qj ∂qj ∂qi (3. we can always construct a set of simultaneous eigenkets of q1 · · · qN for which pi = −i ¯ h ∂ . and (3. the standard ket in Schr¨dinger’s representation is a simultaneous eigenket o of all the momentum operators belonging to the eigenvalue zero.25). ∂qi (3. Note that q1 · · · q N | Hence.74) This is the generalized Schr¨dinger representation.77) (3. ∂qi ∂qi 1 ∂ q · · · qN |. (3.61).4. 3. ∂qi ∂qi (3.78) . (3. q1 · · · q N | so that h q1 · · · qN |pi = −i ¯ 45 ∂ψ ∂ψ(q1 · · · qN ) ∂ ∂ ψ = q1 · · · qN | = = q · · · qN |ψ .5 Schr¨dinger’s representation .II o 3 POSITION AND MOMENTUM It is also clear that ∂2 ψ ∂ ∂ ∂ ∂ ψ = = ψ.73) However. (3. (3.74) that pi = 0. that the linear operators −i ¯ ∂/∂qi satisfy the same commutation relations with the q’s and with h each other that the p’s do. (3.71) It can be seen.75) Thus.3. ∂qi ∂qj ∂qi ∂qj ∂qj ∂qi showing that ∂ ∂ ∂ ∂ = . The most general conclusion we can draw from this coincidence of commutation relations is (see Dirac) pi = −i ¯ h ∂F(q1 · · · qN ) ∂ + . by comparison with Eqs.72) (3.23)–(3. ∂qi 1 (3.68). and Dirac). the function F can be transformed away via a suitable readjustment of the phases of the basis eigenkets (see Sect. Thus.

The dual of the above equation gives

pi |q1′ ··· qN′⟩ = iħ ∂/∂qi′ |q1′ ··· qN′⟩. (3.79)

3.6 The momentum representation

Consider a system with one degree of freedom, describable in terms of a coordinate x and its conjugate momentum p, both of which have a continuous range of eigenvalues. We have seen that it is possible to represent the system in terms of the eigenkets of x. This is termed Schrödinger's representation. However, it is also possible to represent the system in terms of the eigenkets of p. Consider the eigenkets of p which belong to the eigenvalues p′. These are denoted |p′⟩. The orthogonality relation for the momentum eigenkets is

⟨p′|p′′⟩ = δ(p′ − p′′), (3.80)

and the corresponding completeness relation is

∫_{−∞}^{+∞} dp′ |p′⟩⟨p′| = 1. (3.81)

A general state ket can be written

φ(p)⟩, (3.82)

where the standard ket ⟩ satisfies

⟨p′|⟩ = 1. (3.83)

Note that the standard ket in this representation is quite different to that in Schrödinger's representation. The momentum space wave-function φ(p′) satisfies

φ(p′) = ⟨p′|φ⟩. (3.84)

The probability that a measurement of the momentum yields a result lying in the range p′ to p′ + dp′ is given by

P(p′; dp′) = |φ(p′)|² dp′. (3.85)


Finally, the normalization condition for a physical momentum space wave-function is

∫_{−∞}^{+∞} |φ(p′)|² dp′ = 1. (3.86)

The fundamental commutation relations (3.23)–(3.25) exhibit a particular symmetry between coordinates and their conjugate momenta. If all the coordinates are transformed into their conjugate momenta, and vice versa, and i is then replaced by −i, the commutation relations are unchanged. It follows from this symmetry that we can always choose the eigenkets of p in such a manner that the coordinate x can be represented as (see Sect. 3.4)

x = iħ d/dp. (3.87)

This is termed the momentum representation. The above result is easily generalized to a system with more than one degree of freedom. Suppose the system is specified by N coordinates, q1 ··· qN, and N conjugate momenta, p1 ··· pN. Then, in the momentum representation, the coordinates can be written as

qi = iħ ∂/∂pi. (3.88)

We also have

qi ⟩ = 0, (3.89)

and

⟨p1′ ··· pN′| qi = iħ ∂/∂pi′ ⟨p1′ ··· pN′|. (3.90)

The momentum representation is less useful than Schrödinger's representation for a very simple reason. The energy operator (i.e., the Hamiltonian) of most simple systems takes the form of a sum of quadratic terms in the momenta (i.e., the kinetic energy) plus a complicated function of the coordinates (i.e., the potential energy). In Schrödinger's representation, the eigenvalue problem for the energy translates into a second-order differential equation in the coordinates, with a complicated potential function. In the momentum representation, the


problem transforms into a high-order differential equation in the momenta, with a quadratic potential. With the mathematical tools at our disposal, we are far better able to solve the former type of problem than the latter. Hence, Schrödinger's representation is generally more useful than the momentum representation.

3.7 The uncertainty relation

How is a momentum space wave-function related to the corresponding coordinate space wave-function? To answer this question, let us consider the representative ⟨x′|p′⟩ of the momentum eigenkets |p′⟩ in Schrödinger's representation for a system with a single degree of freedom. This representative satisfies

p′ ⟨x′|p′⟩ = ⟨x′|p|p′⟩ = −iħ d/dx′ ⟨x′|p′⟩, (3.91)

where use has been made of Eq. (3.78) (for the case of a system with one degree of freedom). The solution of the above differential equation is

⟨x′|p′⟩ = c′ exp(i p′ x′/ħ), (3.92)

where c′ = c′(p′). It is easily demonstrated that

⟨p′|p′′⟩ = ∫_{−∞}^{+∞} ⟨p′|x′⟩ dx′ ⟨x′|p′′⟩ = c′* c′′ ∫_{−∞}^{+∞} exp[−i (p′ − p′′) x′/ħ] dx′. (3.93)

The well-known mathematical result

∫_{−∞}^{+∞} exp(i a x) dx = 2π δ(a) (3.94)

yields

⟨p′|p′′⟩ = |c′|² h δ(p′ − p′′). (3.95)

This is consistent with Eq. (3.80), provided that c′ = h^{−1/2}. Thus,

⟨x′|p′⟩ = h^{−1/2} exp(i p′ x′/ħ). (3.96)


Consider a general state ket |A⟩ whose coordinate wave-function is ψ(x′), and whose momentum wave-function is Ψ(p′). In other words,

ψ(x′) = ⟨x′|A⟩, (3.97)
Ψ(p′) = ⟨p′|A⟩. (3.98)

It is easily demonstrated that

ψ(x′) = ∫_{−∞}^{+∞} dp′ ⟨x′|p′⟩⟨p′|A⟩ = (1/h^{1/2}) ∫_{−∞}^{+∞} Ψ(p′) exp(i p′ x′/ħ) dp′ (3.99)

and

Ψ(p′) = ∫_{−∞}^{+∞} dx′ ⟨p′|x′⟩⟨x′|A⟩ = (1/h^{1/2}) ∫_{−∞}^{+∞} ψ(x′) exp(−i p′ x′/ħ) dx′, (3.100)

where use has been made of Eqs. (3.27), (3.81), (3.94), and (3.96). Clearly, the momentum space wave-function is the Fourier transform of the coordinate space wave-function.

Consider a state whose coordinate space wave-function is a wave-packet. In other words, the wave-function only has non-negligible amplitude in some spatially localized region of extent ∆x. As is well-known, the Fourier transform of a wave-packet fills up a wave-number band of approximate extent ∆k ∼ 1/∆x. Note that in Eq. (3.99) the role of the wave-number k is played by the quantity p′/ħ. It follows that the momentum space wave-function corresponding to a wave-packet in coordinate space extends over a range of momenta ∆p ∼ ħ/∆x. Clearly, a measurement of x is almost certain to give a result lying in a range of width ∆x. Likewise, a measurement of p is almost certain to yield a result lying in a range of width ∆p. The product of these two uncertainties is

∆x ∆p ∼ ħ. (3.101)

This result is called Heisenberg's uncertainty principle.
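The relations (3.99)-(3.101) can be checked numerically. The sketch below is illustrative only (the packet width, grids, and ħ = 1 are arbitrary choices): it evaluates Eq. (3.100) for a Gaussian wave-packet by direct quadrature and confirms that the root-mean-square widths satisfy ∆x ∆p = ħ/2, the smallest value a Gaussian packet can achieve.

import numpy as np

hbar = 1.0
h = 2 * np.pi * hbar
sigma = 0.7                                   # arbitrary packet width

x = np.linspace(-20, 20, 1201)
p = np.linspace(-10, 10, 1201)
dx = x[1] - x[0]
dp = p[1] - p[0]

# Normalized Gaussian wave-packet in coordinate space.
psi = (2 * np.pi * sigma**2)**-0.25 * np.exp(-x**2 / (4 * sigma**2))

# Momentum space wave-function from Eq. (3.100): Psi(p') = h^(-1/2) Int psi e^(-i p'x'/hbar) dx'.
Psi = np.array([np.sum(psi * np.exp(-1j * pp * x / hbar)) * dx for pp in p]) / np.sqrt(h)

dx_rms = np.sqrt(np.sum(x**2 * abs(psi)**2) * dx)      # <x> = 0 for this packet
dp_rms = np.sqrt(np.sum(p**2 * abs(Psi)**2) * dp)      # <p> = 0 for this packet
print(dx_rms * dp_rms)                                 # ~ hbar/2 = 0.5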

However. 50 (3. if |R = |A + |B (3. We could imagine that the system is on wheels.8 Displacement operators 3 POSITION AND MOMENTUM Actually. the final ket is still not completely determined. we know that the superposition relations between states remain invariant under the displacement. Suppose that we displace this system some distance along the xaxis. then in the displaced system we have |Rd = |Ad + |Bd .102) for any general state. The final state of the system is completely determined by its initial state. if the system is subject to an external potential. it is possible to write Heisenberg’s uncertainty principle more exactly by making use of Eq.104) . (2.8 Displacement operators Consider a system with one degree of freedom corresponding to the Cartesian coordinate x. Thus. for which the equality sign holds in the above relation.83) and the commutation relation (3. This follows because the superposition relations have a physical significance which is unaffected by a displacement of the system. etc. We obtain (∆x) 2 (∆p) 2 ¯2 h ≥ 4 (3. Even if we adopt the convention that all state kets have unit norms. since it can be multiplied by a constant phase-factor. and the displacement causes ket |R to transform to ket |Rd . The situation is not so clear with state kets. So.3.. 3. It is easily demonstrated that the minimum uncertainty states.103) in the undisplaced system. then the potential must be displaced. correspond to Gaussian wave-packets in both coordinate and momentum space. and we just give it a little push.47). Note that the type of displacement we are considering is one in which everything to do with the system is displaced. The final state of the system only determines the direction of the displaced state ket. together with the direction and magnitude of the displacement.

because this would wreck the superposition relations. Since this must hold for any state ket |A . it follows that the displaced ket |Rd must be the result of some linear operator acting on the undisplaced ket |R . (3. (3. |Rd = D|R .106) for a state ket |A certainly has a physical significance. 51 (3. |Ad = D|A and Ad| = A|D† .110) The equation v |A = |B . so A|D† D|A = 1. (3.103) holds in the undisplaced system. (3. The arbitrary phase-factor by which all displaced kets may be multiplied results in D being undetermined to an arbitrary multiplicative constant of modulus unity. Since Eq.107) Hence. (3. kets.104) holds in the displaced system whenever Eq.108) (3.3. In other words. Now. Thus.105) where D an operator which depends only on the nature of the displacement. We now adopt the ansatz that any combination of bras.8 Displacement operators 3 POSITION AND MOMENTUM Incidentally.111) .109) (3. we must have Ad|Ad = 1. and dynamical variables which possesses a physical significance is invariant under a displacement of the system. The displaced kets cannot be multiplied by individual phase-factors. this determines the displaced kets to within a single arbitrary phasefactor to be multiplied into all of them. Note that the above relation implies that |A = D† |Ad . it follows that D† D = 1. The normalization condition A|A = 1 (3. the displacement operator is unitary.

Thus.109) and (3.118) . We have assumed. 52 (3. Suppose. implies that d x can be replaced by D exp(i γ) − 1 D − 1 + iγ = lim = dx + i a x .8 Displacement operators 3 POSITION AND MOMENTUM where the operator v represents a dynamical variable.117) where ax is the limit of γ/δx. The fact that D can be replaced by D exp(i γ). as seems reasonable. (3. now.3. Since this is true for any ket |Ad . that the system is displaced an infinitesimal distance δx along the x-axis. that γ tends to zero as δx → 0. (3.113) (3. δx→0 δx→0 δx δx lim (3. we require that vd |Ad = |Bd . It is clear that the displacement operator is undetermined to an arbitrary imaginary additive constant.114) (3.114). we have D = 1 + δx dx . It follows that vd |Ad = D|B = D v |A = D v D† |Ad . For small δx.116) δx→0 δx where dx is denoted the displacement operator along the x-axis.115) D−1 . we have vd = D v D † . we expect the limit |Ad − |A D−1 = lim |A δx→0 δx→0 δx δx lim dx = lim to exist.112) Note that the arbitrary numerical factor in D does not affect either of the results (3. Thus. We expect that the displaced ket |Ad should approach the undisplaced ket |A in the limit as δx → 0. has some physical significance. where vd is the displaced operator. Let (3. where γ is a real phase-angle.

3. yields dx x − x dx = −1. we obtain dx† + dx = 0.122) Let us consider a specific example.123) (3. Actually. (3. the fact that dx is undetermined to an arbitrary additive imaginary constant (which could be a function of x) enables us to transform the function f out of the above equation.109) that (1 + δx dx† )(1 + δx dx ) = 1. A comparison with Eq. the h momentum conjugate to x. h (3. Substituting into Eq. we find that vd = (1 + δx dx ) v (1 − δx dx ) = v + δx (dx v − v dx ).8 Displacement operators 3 POSITION AND MOMENTUM It follows from Eq.119) Thus.126) . (3. Neglecting order (δx)2 . does [see Eq. the same shape shifted in the x-direction by a distance δx). It can be seen that the new wave-function is obtained from the old wave-function according to the prescription x → x − δx. using x = v. (3. the displacement operator is anti-Hermitian. The most general conclusion we can draw from this observation is that px = i ¯ dx + f(x). and again neglecting order (δx)2 . However. Thus. the new wave-function can be multiplied by an arbitrary number of modulus unity. which implies vd − v = dx v − v d x .122).. (3.121) (3.e. δx→0 δx lim (3. xd = x − δx. (3.114). If the system is displaced a distance δx along the x-axis then the new wave-function is ψ(x − δx) (i. Suppose that a state has a wave-function ψ(x ).120) (3. leaving px = i ¯ d x .25)]. (3. h 53 (3.125) where f is Hermitian (since px is Hermitian).124) It follows that i ¯ dx obeys the same commutation relation with x that px .

A finite translation along the x-axis can be constructed from a series of very many infinitesimal translations.24) and (3. (3.128) 54 .8 Displacement operators 3 POSITION AND MOMENTUM Thus. and then ∆y along the y-axis then it ends up in the same state as if it were moved ∆y along the y-axis.118) and (3.128)]. We can also construct displacement operators which translate the system along the y.3. (3. if the system is moved ∆x along the x-axis. Thus. It follows that D(∆x) = exp (−i px ∆x/¯ ) . The fact that translations in independent directions commute is clearly associated with the fact that the conjugate momentum operators associated with these directions also commute [see Eqs. In other words. (3. and then ∆x along the x-axis.127) where use has been made of Eqs. the operator D(∆x) which translates the system a distance ∆x along the x-axis is written D(∆x) = lim ∆x px 1−i N ¯ h N N→∞ . (3.and z-axes. the displacement operator in the x-direction is proportional to the momentum conjugate to x.126). We say that px is the generator of translations along the x-axis. Note that a displacement a distance ∆x along the x-axis commutes with a displacement a distance ∆y along the y-axis. h The unitary nature of the operator is now clearly apparent.
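The identification of px as the generator of translations, D(∆x) = exp(−i px ∆x/ħ), can be illustrated directly: in the momentum representation px acts multiplicatively, so D(∆x) simply multiplies Ψ(p′) by a phase, and transforming back must reproduce the original wave-function shifted by ∆x. The sketch below is illustrative only; it assumes periodic boundary conditions and a shift equal to a whole number of grid spacings (so the discrete result is exact), and the packet and grid parameters are arbitrary.

import numpy as np

hbar = 1.0
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=L / N)      # momentum grid conjugate to x

psi = np.exp(-(x + 3.0)**2)                            # packet centred on x = -3
dX = 5.0                                               # translate by Delta x = 5 (= 128 grid steps)

# Apply D(Delta x) = exp(-i p Delta x / hbar) in the momentum representation.
psi_shifted = np.fft.ifft(np.exp(-1j * p * dX / hbar) * np.fft.fft(psi))

# The result is the same packet centred on x = -3 + 5 = +2.
print(np.allclose(psi_shifted, np.exp(-(x - 2.0)**2), atol=1e-8))   # True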

2) (4. The final state of the system at time t is completely determined by its initial state at time t0 plus the time interval t − t0 (assuming that the system is left undisturbed during this time interval). Consider a system in a state A which evolves in time.1) and (4. (4. |Rt = T |Rt0 . the final ket can be regarded as the result of some linear operator acting on the initial ket: . since this would invalidate the superposition relation at later times. the final ket |Rt depends linearly on the initial ket |Rt0 . the final ket is still not completely determined. assuming that the system is left undisturbed between times t0 and t.. we expect that if a superposition relation holds for certain states at time t0 then the same relation should hold between the corresponding time-evolved states at time t. However. At time t the state of the system is represented by the ket |At . we have only considered systems at one particular instant of time. In other words.1) This rule determines the time-evolved kets to within a single arbitrary phasefactor to be multiplied into all of them.4 QUANTUM DYNAMICS 4 Quantum dynamics 4.e. (4. since it can be multiplied by an arbitrary phase-factor.3) . say) which is evolving in time. i. then we should have |Rt = |At + |Bt . Thus. the final state only determines the direction of the final state ket. if |Rt0 = |At0 + |Bt0 for any three kets. The label A is needed to distinguish the ket from any other ket (|Bt . The evolved kets cannot be multiplied by individual phase-factors. Even if we adopt the convention that all state kets have unit norms. 55 (4. However.2). Let us now investigate how quantum mechanical systems evolve with time. The label t is needed to distinguish the different states of the system at different times. According to Eqs.1 Schr¨ dinger’s equations of motion o Up to now.

or both the ket |A and the operator ξ evolve. We expect. even if the system is left in a state of undisturbed motion (after all. This is always possible. time evolution has no meaning unless something observable changes with time).4) (4. according to Eq. The triple product A|ξ|A can evolve either because the ket |A evolves and the operator ξ stays constant. Since we have adopted a convention in which the norm of any state ket is unity. In general. we do expect the expectation value of some observable ξ to evolve with time. that as t → t0 then |At → |At0 for any ket A. (4. if a ket is properly normalized at time t then it will remain normalized at all subsequent times t > t0 ). t0 ) being undetermined to an arbitrary multiplicative constant of modulus unity. the time evolution operator T is a unitary operator.3). The arbitrary phase-factor by which all time evolved kets may be multiplied results in T (t. which immediately yields T † T = 1.5) . Thus. from physical continuity.6) lim t→t0 t − t0 t→t0 t − t0 56 (4. there are some important differences between time evolution and spatial displacement. the ket |A stays constant and the operator ξ evolves. let us assume that the time evolution operator T can be chosen in such a manner that the operators representing the dynamical variables of the system do not evolve in time (unless they contain some specific time dependence). Up to now.. the limit T −1 |At − |At0 = lim |At0 (4. Hence. since the length of a ket possesses no physical significance. we require that At0 |At0 = At|At for any ket A. Thus. the time evolution operator T looks very much like the spatial displacement operator D introduced in the previous section.4. Since we are already committed to evolving state kets. However.1 Schr¨dinger’s equations of motion o 4 QUANTUM DYNAMICS where T is a linear operator which depends only on the times t and t0 . it make sense to define the time evolution operator T in such a manner that it preserves the length of any ket upon which it acts (i.e.

7) that i¯ h d|At0 |At − |At0 = i ¯ lim h = i ¯ τ(t0 )|At0 = H(t0 )|At0 . We saw. t0 ) evolves the system in time from t0 to t then H(t0 ) = i ¯ lim h 57 T (t. t0 ) − 1 .e. It follows from Eqs. We now have that if the operator T (t. a characteristic of the dynamical system under investigation. dt (4. This equation is denoted Schr¨dinger’s equation of motion. x0 ) displaces the system along the x-axis from x0 to x then px = i ¯ x→x h lim D(x. that if the operator D(x. (4.10) Equation (4. Note that this limit is simply the derivative of |At 0 with respect to t0 . h t→t0 dt0 t − t0 (4. Let T (t.8) The fact that T can be replaced by T exp(i γ) (where γ is real) implies that τ is undetermined to an arbitrary imaginary additive constant (see previous section).1 Schr¨dinger’s equations of motion o 4 QUANTUM DYNAMICS should exist. (4. τ† + τ = 0. presumably.6) and (4. (4. x0 ) − 1 . t→t0 t − t0 (4. (4.5) that τ is anti-Hermitian: i.11) where px is the operator representing the momentum conjugate to x.4. 0 x − x0 (4. It involves a o Hermitian operator H(t) which is.. Let us define the Hermitian operator H(t0 ) = i ¯ τ.9) When written for general t this equation becomes i¯ h d|At = H(t)|At . t0 ) − 1 .7) τ(t0 ) = lim t→t0 t − t0 It is easily demonstrated from Eq. in the previous section.10) gives the general law for the time evolution of a state ket in a scheme in which the operators representing the dynamical variables remain fixed. This operator is undetermined h to an arbitrary real additive constant.12) .

more accurately. (4. The fact that the Hamiltonian is undetermined to an arbitrary real additive constant is related to the well-known phenomenon that energy is undetermined to an arbitrary additive constant in physics (i. dt t t0 (4.e. need not worry us unduly. the zero of potential energy is not well-defined).15) where use has been made of Eqs.6). (4. dt (4.4. By analogy with classical physics.1 Schr¨dinger’s equations of motion o 4 QUANTUM DYNAMICS Thus. In relativistic quantum mechanics. the dynamical variable corresponding to the operator H stands to time t as the momentum px stands to the coordinate x. Likewise.10) yields i¯ h dT |At0 = H(t) T |At0 . Note that. h  (4. It is now clear how the fact that H is undetermined to an arbitrary real additive constant leaves T undetermined to a phase-factor. we assume that Hamiltonian operators evaluated at different times commute with one another). Substituting |At = T |At0 into Eq. Since we are only dealing with nonrelativistic quantum mechanics.14) This equation can be integrated to give T (t. the fact that position is an operator. if the equations of motion are invariant under a temporal displacement then this implies that the system conserves energy. (Recall that. t0 ) = exp −i  H(t ) dt /¯  . in classical physics. if the equations of motion of a system are invariant under an x-displacement of the system then this implies that the system conserves momentum in the x-direction. 58 . this suggests that H(t) is the operator representing the total energy of the system. it is just a parameter (or. time and space coordinates are treated on the same footing by relegating position from being an operator to being just a label. in the above analysis. (Here. time is not an operator (we cannot observe time.13) Since this must hold for any initial state |At0 we conclude that i¯ h dT = H(t) T. but time is only a label. a continuous label). as such).) The operator H(t) is usually called the Hamiltonian of the system.5) and (4..
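For a time-independent Hamiltonian, Eq. (4.15) reduces to T(t, t0) = exp[−iH(t − t0)/ħ]. A minimal numerical sketch (illustrative only; the two-level Hamiltonian below is an arbitrary example, with ħ = 1) constructs T from the eigen-decomposition of H, propagates a state ket, and checks the unitarity property T†T = 1 together with the preservation of the norm.

import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)   # arbitrary Hermitian Hamiltonian
t = 2.3

# T(t, 0) = exp(-i H t / hbar), built from the eigen-decomposition of H.
w, V = np.linalg.eigh(H)
T = V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T

A0 = np.array([1.0, 0.0], dtype=complex)                  # initial state ket
At = T @ A0                                               # time-evolved state ket

print(np.allclose(T.conj().T @ T, np.eye(2)))             # T is unitary
print(round(np.vdot(At, At).real, 12))                    # norm is preserved: 1.0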

The time evolution of the transformed state ket is given by |At t = T † (t. The dual of Eq. t0 ) T (t.10). (4.3). since the operator T (t. (4. (4.2 Heisenberg’s equations of motion 4 QUANTUM DYNAMICS 4. The transformation must also be applied to bras. and the fact that T (t0 .16) yields At | = A|T. and the dynamical variables are represented by moving linear operators. The subscript t is used to remind us that the transformation is time-dependent. Suppose that a general state ket A is subject to the transformation |At = T † (t. (4. corresponds to a moving linear operator in this new scheme.1. It is clear that the transformation (4.2 Heisenberg’s equations of motion We have seen that in Schr¨dinger’s scheme the dynamical variables of the system o remain fixed during a period of undisturbed motion.18) The transformation rule for a general observable v is obtained from the requirement that the expectation value A|v|A should remain invariant. the transformed state ket does not evolve in time. this is not the only way in which to represent the time evolution of the system. t0 )|At = T † (t. Thus. t0 )|A . o 59 . (4.16) leads us to a scenario in which the state of the system is represented by a fixed vector. Clearly.4.16) has the effect of bringing all kets representing states of undisturbed motion of the system to rest. It is easily seen that vt = T † v T. as opposed to the Schr¨dinger picture.19) Thus.16) This is a time-dependent transformation. t0 ) = 1. (4. whereas the state kets evolve according to Eq. (4.17) where use has been made of Eqs. 4. This is termed the Heisenberg picture. However.5). which is outlined in Sect. which corresponds to a fixed linear operator in Schr¨o dinger’s scheme. the transformation (4. t 0 ) obviously depends on time. a dynamical variable. (4. t0 )|At0 = |At t0 .

25) shows how the dynamical variables of the system evolve in the Heisenberg picture. Ht ]. dt dt dt With the help of Eq.19). dt (4.27) .22). which can be written in the form [see Eq.7)] dv = [v. Note that the time-varying dynamical variables in the Heisenberg picture are usually called Heisenberg dynamical variables to distinguish them from Schr¨dinger dynamical o variables (i. (4.14).26) dt where [· · ·]quantum denotes the quantum Poisson bracket.4.25) (4.. It is denoted Heisenberg’s equation of motion. (3. the corresponding variables in the Schr¨dinger picture). According to Eq. dt Equation (4. dt 60 (4. (3. Ht ]quantum .23) dvt = T † v H T − T † H T v t = v t H t − H t vt . this reduces to H T vt + i ¯ T h or i¯ h where Ht = T † H T. we can write o T vt = v T.23) can be written i¯ h dvt = [vt .22) (4. Equation (4. H]classical . Let us compare this equation with the classical time evolution equation for a general dynamical variable v. the Heisenberg equation of motion can be written dvt = [vt . dt (4. which do o not evolve in time. Differentiation with respect to time yields dvt dT dT vt + T =v . According to Eq. (4.24) dvt = v H T. (4.20) (4.e.21) (4.2 Heisenberg’s equations of motion 4 QUANTUM DYNAMICS Consider a dynamical variable v corresponding to a fixed linear operator in the Schr¨dinger picture.
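The fact that the Schrödinger and Heisenberg pictures yield identical expectation values, which is built into the transformation vt = T†vT, can be verified directly. The following sketch is illustrative only (it reuses an arbitrary two-level example with ħ = 1): it compares the expectation value computed with an evolving ket and a fixed operator against the one computed with a fixed ket and the moving operator vt.

import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)   # arbitrary Hamiltonian
v = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)    # arbitrary observable
A = np.array([0.6, 0.8], dtype=complex)                   # normalized state ket

w, V = np.linalg.eigh(H)
t = 1.7
T = V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T  # T = exp(-i H t / hbar)

# Schroedinger picture: the ket evolves, v is fixed.
schr = np.vdot(T @ A, v @ (T @ A))

# Heisenberg picture: the ket is fixed, v_t = T^dagger v T evolves.
vt = T.conj().T @ v @ T
heis = np.vdot(A, vt @ A)

print(np.allclose(schr, heis))        # True: identical expectation values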

the system is not subject to some time-dependent external force) then Eq.29) (4. The strong resemblance between Eqs.3 Ehrenfest’s theorem 4 QUANTUM DYNAMICS Here. (4. (4.27) provides us with further justification for our identification of the linear operator H with the energy of the system in quantum mechanics.4. [· · ·]classical is the classical Poisson bracket..31) dt Thus. (4. It follows from Eq. H] = 0. Note that if the Hamiltonian does not explicitly depend on time (i. (4. For a physical system which possess a classical analogue. it is represented by the same fixed operator in both the Schr¨dinger o and Heisenberg pictures).26) and (4. Furthermore. (4.25) gives dH = [H. The only thing which is lacking is some rule to determine the form of the quantum mechanical Hamiltonian. and H denotes the classical Hamiltonian. t0 ) = exp [−i H (t − t0 )/¯ ] . Heisenberg’s equation of motion yields dv i¯ h = [v. if the energy of the system has no explicit time-dependence then it is represented by the same non-time-varying operator H in both the Schr¨dinger and o Heisenberg pictures. Eq. (4. so Ht = T † H T = H. any observable which commutes with the Hamiltonian is a constant of the motion (hence. we 61 . (4.15) yields T (t. with the time evolution operator T ). Only those observables which do not commute with the Hamiltonian evolve in time in the Heisenberg picture.30) dt Thus.28) 4. hence. H] = 0.3 Ehrenfest’s theorem We have now derived all of the basic elements of quantum mechanics.e.19) that vt = v. i¯ h Suppose that v is an observable which commutes with the Hamiltonian (and. h This operator manifestly commutes with H.

respectively. The following useful formulae. F(p)] = i ¯ h (4.32) It is helpful to denote (x1 . this can usually be resolved by requiring the Hamiltonian H to be an Hermitian operator. The Hamiltonian is assumed to have the same form as in classical physics: 3 1 p2 = (4.4. we would write the quantum mechanical analogue of the classical product x p. G(x)] = −i ¯ h . Let us now consider the three-dimensional motion of a free particle of mass m in the Heisenberg picture. When the system in question has no classical analogue then we are reduced to guessing a form for H which reproduces the observed behaviour of the system. (4. as the Hermitian product (1/2)(x p + p x).34) where F and G are functions which can be expanded as power series. (3. For instance.. Whenever an ambiguity arises because of non-commuting observables. x2 . with three corresponding conjugate momenta pi .e.3 Ehrenfest’s theorem 4 QUANTUM DYNAMICS generally assume that the Hamiltonian has the same form as in classical physics (i. The commutation relations satisfied by the position and momentum operators are [see Eq.25)] [xi . 2 m 2 m i=1 62 . h (4. appearing in the Hamiltonian. ∂pi ∂G [pi . are easily proved using the fundamental commutation relations Eq.35) H= pi 2 .32). pj ] = i ¯ δij .33) (4. Consider a three-dimensional system characterized by three independent Cartesian position coordinates xi (where i runs from 1 to 3). and three commuting momentum operators pi . ∂F . ∂xi [xi . x3 ) as x and (p1 . These are represented by three commuting position operators xi . p2 . This scheme guarantees that quantum mechanics yields the correct classical equations of motion in the classical limit. p3 ) as p. we replace the classical coordinates and conjugate momenta by the corresponding quantum mechanical operators).

83) yields ¯ 2 t2 h 2 2 (∆xi ) t (∆xi ) t=0 ≥ . which means that pi (t) = pi (0) at all times t (for i is 1 to 3). xi (t) = xi (0) +  m    3  (4. H] = 0. For instance.25). all dynamical variables are assumed to be Heisenberg dynamical variables. pi (0) t −i ¯ t h [xi (t). (4.4. xi (0)] =  . Thus. although we will omit the subscript t for the sake of clarity. The time evolution of the momentum operator pi follows from Heisenberg’s equation of motion (4. xi (0) = . The time evolution of the position operator xi is given by 1 1 1 ∂  pi (0) pi dxi   = [xi . its position becomes progressively more uncertain with time. 63 .39) where the position operators are evaluated at equal times. (4.37) (4. It follows that pi (0)  t. xj (0)] = 0. for a free particle the momentum operators are constants of the motion. H] = i¯ h = . the xi do not commute when evaluated at different times. (4. Note that even though [xi (0).41) 4 m2 This result implies that even if a particle is well-localized at t = 0. This conclusion can also be obtained by studying the propagation of wave-packets in wave mechanics. dt i¯ h (4.40) Combining the above commutation relation with the uncertainty relation (2.36) since pi automatically commutes with any function of the momentum operators.33). m m   (4. pj 2  = dt i¯ h i¯ 2m h ∂pi j=1 m m where use has been made of Eq. We find that dpi 1 = [pi .38) which is analogous to the equation of motion of a classical free particle.3 Ehrenfest’s theorem 4 QUANTUM DYNAMICS In the following.

43) dt i¯ h ∂xi where use has been made of Eq.47) has no dependence on ¯ .3 Ehrenfest’s theorem 4 QUANTUM DYNAMICS Let us now add a potential V(x) to our free particle Hamiltonian: p2 + V(x). this equation becomes d2 x dp = − V(x). we obtain d2 x dp m = =− dt2 dt V(x) .47) This is known as Ehrenfest’s theorem. Heisenberg’s equation of motion gives dpi 1 ∂V(x) = [pi . 64 . (4. Note that Eq. (4. In contrast. this result is independent of whether we are using the Heisenberg or Schr¨dinger picture.45) dt2 i ¯ dt h i¯ m h m dt m ∂xi In vectorial form. because the xi all commute with the new term V(x) in the Hamiltonian. We can use the Heisenberg equation of motion a second time to deduce that 1 dxi 1 pi 1 dpi 1 ∂V(x) d2 x i = . V(x)] = − .H = =− . (4.H = .46) only holds if x and o p are understood to be Heisenberg dynamical variables. H= 2m (4.4. When written in terms of expectation values.44) still holds. the operator equation (4. V is some function of the xi operators. the result dxi pi = dt m (4.42) Here. On the other hand. (4.46) This is the quantum mechanical equivalent of Newton’s second law of motion. m 2 = dt dt (4. it guarantees to us that the centre of a wave-packet h always moves like a classical particle.34). (4. In fact. Taking the expectation values of both sides with respect to a Heisenberg state ket that does not move with time.
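Ehrenfest's theorem, Eq. (4.47), can also be checked numerically. The hedged sketch below is illustrative only: the grid, the displaced Gaussian packet, and the harmonic potential V = (1/2)mω²x² are arbitrary choices (with ħ = m = ω = 1). It evolves the packet with a discretized Hamiltonian and compares m d²⟨x⟩/dt², obtained by a centred difference in time, with −⟨dV/dx⟩ = −mω²⟨x⟩.

import numpy as np

hbar, m, omega = 1.0, 1.0, 1.0
N, L = 600, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Discretized H = p^2/2m + V(x), using a three-point finite-difference Laplacian.
lap = (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1) - 2 * np.eye(N)) / dx**2
V = 0.5 * m * omega**2 * x**2
H = -hbar**2 / (2 * m) * lap + np.diag(V)

w, U = np.linalg.eigh(H)

def evolve(psi, t):
    # exp(-i H t / hbar) applied via the eigen-decomposition of H.
    return U @ (np.exp(-1j * w * t / hbar) * (U.conj().T @ psi))

def mean_x(psi):
    return float(np.sum(x * abs(psi)**2) * dx)

# Displaced Gaussian initial state, so that <x> actually oscillates.
psi0 = np.exp(-(x - 2.0)**2 / 2).astype(complex)
psi0 /= np.sqrt(np.sum(abs(psi0)**2) * dx)

# Compare m d^2<x>/dt^2 (centred time difference) with -<dV/dx> = -m omega^2 <x>.
t, dt = 0.8, 1e-3
xm, x0, xp = (mean_x(evolve(psi0, s)) for s in (t - dt, t, t + dt))
lhs = m * (xp - 2 * x0 + xm) / dt**2
rhs = -m * omega**2 * x0
print(abs(lhs - rhs) < 1e-2)          # True: agreement to within the grid discretization error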

p3 ). t) = x |At .52) x 2m 2m where use has been made of Eq.10) yields o i¯ h ∂ x |At = x |H|At .74)] ∂ pi = −i ¯ h . p2 . 2m (4.49) (4. ∂/∂z ) denotes the gradient operator written in terms of the position eigenvalues. ∂t (4. ≡ (∂/∂x .51) ∂xi Thus.4 Schr¨dinger’s wave-equation o 4 QUANTUM DYNAMICS 4. We can also write x |V(x) = V(x ) x |. since the position operators are fixed in the Schr¨dinger picture. The Hamiltonian of the system is taken to be H= p2 + V(x). We adopt Schr¨do inger’s representation in which the momentum conjugate to the position operator xi is written [see Eq. (3.   ¯2  2 h p2 At = −  x |At . Let |x represent a simultaneous eigenket of the position operators belonging to the eigenvalues x ≡ (x1 . x3 ).4. x3 ).4 Schr¨ dinger’s wave-equation o Let us now consider the motion of a particle in three dimensions in the Schr¨dinger o picture. The fixed dynamical variables of the system are the position operators x ≡ (x1 . x2 . Note that.78). x2 . ∂/∂y . (4. and the momentum operators p ≡ (p1 . we do not expect the |x to evolve in time.53) 65 . The state of the system is represented as some time evolving ket |At . (3. (4.50) where use has been made of the time independence of the |x .48) Schr¨dinger’s equation of motion (4. The o wave-function of the system at time t is defined ψ(x . Here. (4.

where V(x′) is a scalar function of the position eigenvalues. Combining Eqs. (4.49), (4.50), (4.52), and (4.53), we obtain

    iħ ∂⟨x′|At⟩/∂t = −(ħ²/2m) ∇′² ⟨x′|At⟩ + V(x′) ⟨x′|At⟩,                    (4.54)

which can also be written

    iħ ∂ψ(x′, t)/∂t = −(ħ²/2m) ∇′² ψ(x′, t) + V(x′) ψ(x′, t).                 (4.55)

This is Schrödinger's famous wave-equation, and is the basis of wave mechanics. Note, however, that the wave-equation is just one of many possible representations of quantum mechanics. It just happens to give a type of equation which we know how to solve. In deriving the wave-equation, we have chosen to represent the system in terms of the eigenkets of the position operators, instead of those of the momentum operators. We have also fixed the relative phases of the |x′⟩ according to Schrödinger's representation, so that Eq. (4.51) is valid. Finally, we have chosen to work in the Schrödinger picture, in which state kets evolve and dynamical variables are fixed, instead of the Heisenberg picture, in which the opposite is true.

Suppose that the ket |At⟩ is an eigenket of the Hamiltonian belonging to the eigenvalue H′:

    H |At⟩ = H′ |At⟩.                                                         (4.56)

Schrödinger's equation of motion (4.10) yields

    iħ d|At⟩/dt = H′ |At⟩.                                                    (4.57)

This can be integrated to give

    |At⟩ = exp[−i H′ (t − t₀)/ħ] |At₀⟩.                                       (4.58)

Note that |At⟩ only differs from |At₀⟩ by a phase-factor. The direction of the vector remains fixed in ket space. This suggests that if the system is initially in an eigenstate of the Hamiltonian then it remains in this state for ever, as long as the

system is undisturbed. Such a state is called a stationary state. The wave-function of a stationary state satisfies

    ψ(x′, t) = ψ(x′, t₀) exp[−i H′ (t − t₀)/ħ].                               (4.59)

Substituting the above relation into Schrödinger's wave equation (4.55), we obtain

    −(ħ²/2m) ∇′² ψ₀(x′) + (V(x′) − E) ψ₀(x′) = 0,                             (4.60)

where ψ₀(x′) ≡ ψ(x′, t₀), and E = H′ is the energy of the system. This is Schrödinger's time-independent wave-equation. A bound state solution of the above equation, in which the particle is confined within a finite region of space, satisfies the boundary condition

    ψ₀(x′) → 0   as   |x′| → ∞.                                               (4.61)

Such a solution is only possible if

    E < lim_{|x′|→∞} V(x′).                                                   (4.62)

Since it is conventional to set the potential at infinity equal to zero, the above relation implies that bound states are equivalent to negative energy states. The boundary condition (4.61) is sufficient to uniquely specify the solution of Eq. (4.60).

The quantity

    ρ(x′, t) = |ψ(x′, t)|²                                                    (4.63)

is termed the probability density. Recall, from Eq. (3.30), that the probability of observing the particle in some volume element d³x′ around position x′ is proportional to ρ(x′, t) d³x′. The probability is equal to ρ(x′, t) d³x′ if the wave-function is properly normalized, so that

    ∫ ρ(x′, t) d³x′ = 1.                                                      (4.64)

Schrödinger's time-dependent wave-equation, (4.55), can easily be written in the form of a conservation equation for the probability density:

    ∂ρ/∂t + ∇·j = 0.                                                          (4.65)
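As an illustrative aside, the two statements above — that unitary evolution preserves the normalization, and that an energy eigenstate only acquires a phase-factor — are easy to verify numerically for a finite-dimensional stand-in Hamiltonian. The random 4×4 Hermitian matrix below is purely a toy choice (ħ = 1).

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))
H = 0.5*(A + A.conj().T)                 # a random Hermitian "Hamiltonian"

psi0 = rng.normal(size=4) + 1j*rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)
U = expm(-1j*H*2.7)                      # evolution operator for an arbitrary time
print(np.linalg.norm(U @ psi0))          # norm stays equal to 1

E, V = np.linalg.eigh(H)
phi = V[:, 0]                            # an energy eigenstate
print(np.allclose(U @ phi, np.exp(-1j*E[0]*2.7)*phi))   # evolves only by a phase-factor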

The probability current j takes the form

    j(x′, t) = −(iħ/2m) [ψ* ∇′ψ − (∇′ψ*) ψ] = (ħ/m) Im(ψ* ∇′ψ).               (4.66)

We can integrate Eq. (4.65) over all space, using the divergence theorem, and the boundary condition ρ → 0 as |x′| → ∞, to obtain

    ∂/∂t ∫ ρ(x′, t) d³x′ = 0.                                                 (4.67)

Thus, if the wave-function starts off properly normalized, then it remains properly normalized at all subsequent times. Clearly, Schrödinger's wave-equation conserves probability. It is easily demonstrated that

    ∫ j(x′, t) d³x′ = ⟨p⟩_t / m,                                              (4.68)

where ⟨p⟩_t denotes the expectation value of the momentum evaluated at time t. Thus, the probability current is indirectly related to the particle momentum.

In deriving Eq. (4.65) we have, naturally, assumed that the potential V(x′) is real. Suppose, however, that the potential has an imaginary component. In this case, Eq. (4.65) generalizes to

    ∂ρ/∂t + ∇·j = (2 Im(V)/ħ) ρ,                                              (4.69)

giving

    ∂/∂t ∫ ρ(x′, t) d³x′ = (2/ħ) ∫ Im(V(x′)) ρ(x′, t) d³x′.                   (4.70)

Thus, if Im(V) < 0 then the total probability of observing the particle anywhere in space decreases monotonically with time. Thus, an imaginary potential can be used to account for the disappearance of a particle. Such a potential is often employed to model nuclear reactions in which incident particles can be absorbed by nuclei.
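A minimal sketch of the absorbing effect of a negative imaginary potential, again using a finite-dimensional toy model (the particular matrices and absorption strengths are arbitrary): the total probability decays monotonically because the anti-Hermitian part of H removes norm.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
H0 = 0.5*(A + A.T)                          # Hermitian part of the toy Hamiltonian
Gamma = np.diag([0.0, 0.0, 0.3, 0.3])       # absorption acting on two basis states
H = H0 - 1j*Gamma                           # Im(V) < 0 on part of the space

psi = np.ones(4, dtype=complex)/2.0         # normalized initial state
for t in (0.0, 1.0, 2.0, 4.0):
    print(t, np.linalg.norm(expm(-1j*H*t) @ psi)**2)   # total probability decreases with t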

The wave-function can always be written in the form

    ψ(x′, t) = √ρ(x′, t) exp[ i S(x′, t)/ħ ],                                 (4.71)

where ρ and S are both real functions. The interpretation of ρ as a probability density has already been given. What is the interpretation of S? Note that

    ψ* ∇′ψ = √ρ ∇′(√ρ) + (i/ħ) ρ ∇′S.                                         (4.72)

It follows from Eq. (4.66) that

    j = ρ ∇′S / m.                                                            (4.73)

Thus, the gradient of the phase of the wave-function determines the direction of the probability current. In particular, the probability current is locally normal to the contours of the phase-function S.

Let us substitute Eq. (4.71) into Schrödinger's time-dependent wave-equation, (4.55). We obtain

    −(ħ²/2m) [ ∇′²√ρ + (2i/ħ) ∇′(√ρ)·∇′S − (√ρ/ħ²) |∇′S|² + (i/ħ) √ρ ∇′²S ] + √ρ V = iħ ∂√ρ/∂t − √ρ ∂S/∂t.    (4.74)

Let us treat ħ as a small quantity. To lowest order, Eq. (4.74) yields

    −∂S(x′, t)/∂t = |∇′S(x′, t)|²/2m + V(x′, t) = H(x′, ∇′S, t),              (4.75)

where H(x, p, t) is the Hamiltonian. The above equation is known as the Hamilton-Jacobi equation, and is one of the many forms in which we can write the equations of classical mechanics. In classical mechanics, S is the action (i.e., the path-integral of the Lagrangian). Thus, in the limit ħ → 0, wave mechanics reduces to classical mechanics. It is a good approximation to neglect the terms involving ħ in Eq. (4.74) provided that

    ħ |∇′²S| ≪ |∇′S|².                                                        (4.76)

Note that

    λ̄ = ħ / |∇′S|,                                                            (4.77)

where λ̄ is the de Broglie wave-length divided by 2π. The inequality (4.76) is equivalent to

    |∇′λ̄| ≪ 1.                                                                (4.78)

In other words, quantum mechanics reduces to classical mechanics whenever the de Broglie wave-length is small compared to the characteristic distance over which things (other than the quantum phase) vary. This distance is usually set by the variation scale-length of the potential.

5 Angular momentum

5.1 Orbital angular momentum

Consider a particle described by the Cartesian coordinates (x, y, z) ≡ r and their conjugate momenta (p_x, p_y, p_z) ≡ p. The classical definition of the orbital angular momentum of such a particle about the origin is L = r × p, giving

    L_x = y p_z − z p_y,                                                      (5.1)
    L_y = z p_x − x p_z,                                                      (5.2)
    L_z = x p_y − y p_x.                                                      (5.3)

Let us assume that the operators (L_x, L_y, L_z) ≡ L which represent the components of orbital angular momentum in quantum mechanics can be defined in an analogous manner to the corresponding components of classical angular momentum. In other words, we are going to assume that the above equations specify the angular momentum operators in terms of the position and linear momentum operators. Note that L_x, L_y, and L_z are Hermitian, so they represent things which can, in principle, be measured. Note, also, that there is no ambiguity regarding the order in which operators appear in products on the right-hand sides of Eqs. (5.1)–(5.3), since all of the products consist of operators which commute.

The fundamental commutation relations satisfied by the position and linear momentum operators are [see Eqs. (3.23)–(3.25)]

    [x_i, x_j] = 0,                                                           (5.4)
    [p_i, p_j] = 0,                                                           (5.5)
    [x_i, p_j] = iħ δ_ij,                                                     (5.6)

where i and j stand for either x, y, or z. Consider the commutator of the operators L_x and L_y:

    [L_x, L_y] = [(y p_z − z p_y), (z p_x − x p_z)] = y [p_z, z] p_x + x p_y [z, p_z] = iħ (−y p_x + x p_y) = iħ L_z.    (5.7)
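As an aside, the commutation relation (5.7) can be checked symbolically by letting the operators act on an arbitrary function in the differential-operator (Schrödinger) representation that is introduced later, in Sect. 5.4. The short SymPy sketch below is illustrative only; the function f and the symbol names are placeholders.

import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar')
f = sp.Function('f')(x, y, z)              # an arbitrary test function

Lx = lambda g: -sp.I*hbar*(y*sp.diff(g, z) - z*sp.diff(g, y))
Ly = lambda g: -sp.I*hbar*(z*sp.diff(g, x) - x*sp.diff(g, z))
Lz = lambda g: -sp.I*hbar*(x*sp.diff(g, y) - y*sp.diff(g, x))

# [Lx, Ly] f - i*hbar*Lz f should vanish identically.
print(sp.simplify(Lx(Ly(f)) - Ly(Lx(f)) - sp.I*hbar*Lz(f)))   # prints 0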

The cyclic permutations of the above result yield the fundamental commutation relations satisfied by the components of an angular momentum:

    [L_x, L_y] = iħ L_z,                                                      (5.8)
    [L_y, L_z] = iħ L_x,                                                      (5.9)
    [L_z, L_x] = iħ L_y.                                                      (5.10)

These can be summed up more succinctly by writing

    L × L = iħ L.                                                             (5.11)

The three commutation relations (5.8)–(5.10) are the foundation for the whole theory of angular momentum in quantum mechanics. Whenever we encounter three operators having these commutation relations, we know that the dynamical variables which they represent have identical properties to those of the components of an angular momentum (which we are about to derive). In fact, we shall assume that any three operators which satisfy the commutation relations (5.8)–(5.10) represent the components of an angular momentum.

Suppose that there are N particles in the system, with angular momentum vectors L_i (where i runs from 1 to N). Each of these vectors satisfies Eq. (5.11), so that

    L_i × L_i = iħ L_i.                                                       (5.12)

However, we expect the angular momentum operators belonging to different particles to commute, since they represent different degrees of freedom of the system. So, we can write

    L_i × L_j + L_j × L_i = 0,                                                (5.13)

for i ≠ j. Consider the total angular momentum of the system, L = Σ_{i=1}^{N} L_i. It is clear from Eqs. (5.12) and (5.13) that

    L × L = Σ_i L_i × Σ_j L_j = Σ_i L_i × L_i + (1/2) Σ_{i≠j} (L_i × L_j + L_j × L_i) = iħ Σ_i L_i = iħ L.    (5.14)

Thus, the total angular momentum of the system satisfies the commutation relation (5.11). In fact, the sum of two or more angular momentum vectors satisfies the same commutation relation as a primitive angular momentum vector.

The immediate conclusion which can be drawn from the commutation relations (5.8)–(5.10) is that the three components of an angular momentum vector cannot be specified (or measured) simultaneously. In fact, once we have specified one component, the values of the other two components become uncertain. It is conventional to specify the z-component, L_z.

Consider the magnitude squared of the angular momentum vector, L² ≡ L_x² + L_y² + L_z². The commutator of L² and L_z is written

    [L², L_z] = [L_x², L_z] + [L_y², L_z] + [L_z², L_z].                      (5.15)

It is easily demonstrated that

    [L_x², L_z] = −iħ (L_x L_y + L_y L_x),                                    (5.16)
    [L_y², L_z] = +iħ (L_x L_y + L_y L_x),                                    (5.17)
    [L_z², L_z] = 0,                                                          (5.18)

so

    [L², L_z] = 0.                                                            (5.19)

Since there is nothing special about the z-axis, we conclude that L² also commutes with L_x and L_y. It is clear from Eqs. (5.8)–(5.10) and (5.19) that the best we can do in quantum mechanics is to specify the magnitude of an angular momentum vector along with one of its components (by convention, the z-component).

It is convenient to define the shift operators L⁺ and L⁻:

    L⁺ = L_x + i L_y,                                                         (5.20)
    L⁻ = L_x − i L_y.                                                         (5.21)

Note that

    [L⁺, L_z] = −ħ L⁺,                                                        (5.22)

    [L⁻, L_z] = +ħ L⁻,                                                        (5.23)
    [L⁺, L⁻] = 2ħ L_z.                                                        (5.24)

Note, also, that both shift operators commute with L².

5.2 Eigenvalues of angular momentum

Suppose that the simultaneous eigenkets of L² and L_z are completely specified by two quantum numbers, l and m. These kets are denoted |l, m⟩. The quantum number m is defined by

    L_z |l, m⟩ = m ħ |l, m⟩.                                                  (5.25)

Thus, m is the eigenvalue of L_z divided by ħ. It is possible to write such an equation because ħ has the dimensions of angular momentum. Note that m is a real number, since L_z is an Hermitian operator.

We can write

    L² |l, m⟩ = f(l, m) ħ² |l, m⟩,                                            (5.26)

where f(l, m) is some real dimensionless function of l and m. Later on, we will show that f(l, m) = l (l + 1). Now,

    ⟨l, m| L² − L_z² |l, m⟩ = ⟨l, m| f(l, m) ħ² − m² ħ² |l, m⟩ = [f(l, m) − m²] ħ²,    (5.27)

assuming that the |l, m⟩ have unit norms. However,

    ⟨l, m| L² − L_z² |l, m⟩ = ⟨l, m| L_x² + L_y² |l, m⟩ = ⟨l, m| L_x² |l, m⟩ + ⟨l, m| L_y² |l, m⟩.    (5.28)

It is easily demonstrated that

    ⟨A| ξ² |A⟩ ≥ 0,                                                           (5.29)

where |A⟩ is a general ket, and ξ is an Hermitian operator. The proof follows from the observation that

    ⟨A| ξ² |A⟩ = ⟨A| ξ† ξ |A⟩ = ⟨B|B⟩,                                         (5.30)

where |B⟩ = ξ|A⟩, plus the fact that ⟨B|B⟩ ≥ 0 for a general ket |B⟩ [see Eq. (2.21)]. It follows from Eqs. (5.27)–(5.29) that

    m² ≤ f(l, m).                                                             (5.31)

Consider the effect of the shift operator L⁺ on the eigenket |l, m⟩. It is easily demonstrated that

    L² (L⁺ |l, m⟩) = ħ² f(l, m) (L⁺ |l, m⟩),                                  (5.32)

where use has been made of Eq. (5.26), plus the fact that L² and L_z commute. It follows that the ket L⁺|l, m⟩ has the same eigenvalue of L² as the ket |l, m⟩. Thus, the shift operator L⁺ does not affect the magnitude of the angular momentum of any eigenket it acts upon. Note that

    L_z L⁺ |l, m⟩ = (L⁺ L_z + [L_z, L⁺]) |l, m⟩ = (L⁺ L_z + ħ L⁺) |l, m⟩ = (m + 1) ħ L⁺ |l, m⟩,    (5.33)

where use has been made of Eq. (5.22). The above equation implies that L⁺|l, m⟩ is proportional to |l, m + 1⟩. We can write

    L⁺ |l, m⟩ = c⁺_{l m} ħ |l, m + 1⟩,                                        (5.34)

where c⁺_{l m} is a number. It is clear that when the operator L⁺ acts on a simultaneous eigenstate of L² and L_z, the eigenvalue of L² remains unchanged, but the eigenvalue of L_z is increased by ħ. For this reason, L⁺ is called a raising operator. Using similar arguments to those given above, it is possible to demonstrate that

    L⁻ |l, m⟩ = c⁻_{l m} ħ |l, m − 1⟩.                                        (5.35)

Hence, L⁻ is called a lowering operator. The shift operators step the value of m up and down by unity each time they operate on one of the simultaneous eigenkets of L² and L_z. It would appear, at first sight, that any value of m can be obtained by applying the shift operators a sufficient number of times. However, according to Eq. (5.31), there is a definite upper bound to the values that m² can take. This bound is determined by the

eigenvalue of L² [see Eq. (5.26)]. It follows that there is a maximum and a minimum possible value which m can take.

Suppose that we attempt to raise the value of m above its maximum value m_max. Since there is no state with m > m_max, we must have

    L⁺ |l, m_max⟩ = |0⟩.                                                      (5.36)

This implies that

    L⁻ L⁺ |l, m_max⟩ = |0⟩.                                                   (5.37)

However,

    L⁻ L⁺ = L_x² + L_y² + i [L_x, L_y] = L² − L_z² − ħ L_z,                   (5.38)

so Eq. (5.37) yields

    (L² − L_z² − ħ L_z) |l, m_max⟩ = |0⟩.                                     (5.39)

The above equation can be rearranged to give

    L² |l, m_max⟩ = (L_z² + ħ L_z) |l, m_max⟩ = m_max (m_max + 1) ħ² |l, m_max⟩.    (5.40)

Comparison of this equation with Eq. (5.26) yields the result

    f(l, m_max) = m_max (m_max + 1).                                          (5.41)

But, when L⁻ operates on |l, m_max⟩ it generates |l, m_max − 1⟩, |l, m_max − 2⟩, etc. Since the lowering operator does not change the eigenvalue of L², all of these states must correspond to the same value of f, namely m_max (m_max + 1). Thus,

    L² |l, m⟩ = m_max (m_max + 1) ħ² |l, m⟩.                                  (5.42)

At this stage, we can give the unknown quantum number l the value m_max, without loss of generality. We can also write the above equation in the form

    L² |l, m⟩ = l (l + 1) ħ² |l, m⟩.                                          (5.43)

It is easily seen that

    L⁻ L⁺ |l, m⟩ = (L² − L_z² − ħ L_z) |l, m⟩ = ħ² [l (l + 1) − m (m + 1)] |l, m⟩.    (5.44)

Thus,

    ⟨l, m| L⁻ L⁺ |l, m⟩ = ħ² [l (l + 1) − m (m + 1)].                          (5.45)

According to Eq. (5.34),

    ⟨l, m| L⁻ L⁺ |l, m⟩ = ⟨l, m| L⁻ ħ c⁺_{l m} |l, m + 1⟩ = ħ² c⁺_{l m} c⁻_{l, m+1},    (5.46)

where use has been made of Eqs. (5.34) and (5.35). However, we also know that

    ⟨l, m + 1| L⁺ |l, m⟩ = ⟨l, m + 1| L_x |l, m⟩ + i ⟨l, m + 1| L_y |l, m⟩    (5.47)
                        = ( ⟨l, m| L_x |l, m + 1⟩ − i ⟨l, m| L_y |l, m + 1⟩ )* = ⟨l, m| L⁻ |l, m + 1⟩*,    (5.48)

where use has been made of the fact that L_x and L_y are Hermitian. The above equation reduces to

    c⁻_{l, m+1} = (c⁺_{l m})*,                                                (5.49)

with the aid of Eqs. (5.34) and (5.35). Equations (5.45), (5.46), and (5.49) can be combined to give

    |c⁺_{l m}|² = l (l + 1) − m (m + 1).                                      (5.50)

Note that c⁺_{l m} is undetermined to an arbitrary phase-factor [i.e., we can replace c⁺_{l m}, given above, by c⁺_{l m} exp(i γ), where γ is real, and we still satisfy Eq. (5.50)]. We have made the arbitrary, but convenient, choice that c⁺_{l m} is real and positive. This is equivalent to choosing the relative phases of the eigenkets |l, m⟩. The solution of the above equation is then

    c⁺_{l m} = √[ l (l + 1) − m (m + 1) ],                                    (5.51)
    c⁻_{l m} = (c⁺_{l, m−1})* = √[ l (l + 1) − m (m − 1) ].                   (5.52)
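The coefficients (5.51)–(5.52) can be used to build explicit matrix representations of the angular momentum operators for any given l, which in turn lets the algebra be checked numerically. The sketch below is an illustration under the conventions of this section (ħ = 1, basis ordered from m = l down to m = −l); the function name is made up here.

import numpy as np

def angular_momentum_matrices(l, hbar=1.0):
    """Matrices of Lz, L+, L- in the basis |l, m>, m = l, l-1, ..., -l."""
    m = np.arange(l, -l - 1, -1)
    Lz = hbar*np.diag(m)
    # c+ of Eq. (5.51), evaluated for the lower m of each adjacent pair of states.
    cplus = np.sqrt(l*(l + 1) - m[1:]*(m[1:] + 1))
    Lp = hbar*np.diag(cplus, k=1)        # L+ |l, m> = c+ hbar |l, m+1>
    Lm = Lp.conj().T                     # L- is the Hermitian conjugate of L+
    return Lz, Lp, Lm

Lz, Lp, Lm = angular_momentum_matrices(l=2)
Lx, Ly = 0.5*(Lp + Lm), -0.5j*(Lp - Lm)
print(np.allclose(Lx@Ly - Ly@Lx, 1j*Lz))            # [Lx, Ly] = i hbar Lz
L2 = Lx@Lx + Ly@Ly + Lz@Lz
print(np.allclose(L2, 2*(2 + 1)*np.eye(5)))         # L^2 = l(l+1) hbar^2 for l = 2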

We have already seen that the inequality (5.31) implies that there is a maximum and a minimum possible value of m. The maximum value of m is denoted l. What is the minimum value? Suppose that we try to lower the value of m below its minimum value m_min. Since there is no state with m < m_min, we must have

    L⁻ |l, m_min⟩ = |0⟩.                                                      (5.53)

According to Eq. (5.35), this implies that

    c⁻_{l, m_min} = 0.                                                        (5.54)

It can be seen from Eq. (5.52) that m_min = −l. We conclude that m can take a "ladder" of discrete values, each rung differing from its immediate neighbours by unity. The top rung is l, and the bottom rung is −l. There are only two possible choices for l. Either it is an integer (e.g., l = 2, which allows m to take the values −2, −1, 0, 1, 2), or it is a half-integer (e.g., l = 3/2, which allows m to take the values −3/2, −1/2, 1/2, 3/2). We will prove in the next section that an orbital angular momentum can only take integer values of l.

In summary, using just the fundamental commutation relations (5.8)–(5.10), plus the fact that L_x, L_y, and L_z are Hermitian operators, we have shown that the eigenvalues of L² ≡ L_x² + L_y² + L_z² can be written l (l + 1) ħ², where l is an integer, or a half-integer. We have also demonstrated that the eigenvalues of L_z can only take the values m ħ, where m lies in the range −l, −l + 1, ··· l − 1, l. Let |l, m⟩ denote a properly normalized simultaneous eigenket of L² and L_z, belonging to the eigenvalues l (l + 1) ħ² and m ħ, respectively. We have shown that

    L⁺ |l, m⟩ = √[ l (l + 1) − m (m + 1) ] ħ |l, m + 1⟩,                      (5.55)
    L⁻ |l, m⟩ = √[ l (l + 1) − m (m − 1) ] ħ |l, m − 1⟩,                      (5.56)

where L± = L_x ± i L_y are the so-called shift operators.

5.3 Rotation operators

Consider a particle described by the spherical polar coordinates (r, θ, ϕ). The classical momentum conjugate to the azimuthal angle ϕ is the z-component of angular momentum, L_z. According to Sect. 3.5, in quantum mechanics we can

always adopt Schrödinger's representation, for which ket space is spanned by the simultaneous eigenkets of the position operators r, θ, and ϕ, and L_z takes the form

    L_z = −iħ ∂/∂ϕ.                                                           (5.57)

We can do this because there is nothing in Sect. 3.5 which specifies that we have to use Cartesian coordinates—the representation (3.74) works for any well-defined set of coordinates.

Consider an operator R(∆ϕ) which rotates the system an angle ∆ϕ about the z-axis. This operator is very similar to the operator D(∆x), introduced in Sect. 3.8, which translates the system a distance ∆x along the x-axis. We were able to demonstrate in Sect. 3.8 that

    p_x = iħ lim_{δx→0} [D(δx) − 1]/δx,                                       (5.58)

where p_x is the linear momentum conjugate to x. There is nothing in our derivation of this result which specifies that x has to be a Cartesian coordinate. Thus, the result should apply just as well to an angular coordinate. We conclude that

    L_z = iħ lim_{δϕ→0} [R(δϕ) − 1]/δϕ.                                       (5.59)

According to Eq. (5.59), we can write

    R(δϕ) = 1 − i L_z δϕ/ħ                                                    (5.60)

in the limit δϕ → 0. In other words, the angular momentum operator L_z can be used to rotate the system about the z-axis by an infinitesimal amount. We say that L_z is the generator of rotations about the z-axis. The above equation implies that

    R(∆ϕ) = lim_{N→∞} [ 1 − i (∆ϕ/N) L_z/ħ ]^N,                               (5.61)

which reduces to

    R(∆ϕ) = exp(−i L_z ∆ϕ/ħ).                                                 (5.62)

Note that R(∆ϕ) has all of the properties we would expect of a rotation operator:

    R(0) = 1,                                                                 (5.63)
    R(∆ϕ) R(−∆ϕ) = 1,                                                         (5.64)
    R(∆ϕ₁) R(∆ϕ₂) = R(∆ϕ₁ + ∆ϕ₂).                                             (5.65)

Suppose that the system is in a simultaneous eigenstate of L² and L_z. As before, this state is represented by the eigenket |l, m⟩, where the eigenvalue of L² is l (l + 1) ħ², and the eigenvalue of L_z is m ħ. We expect the wave-function to remain unaltered if we rotate the system by 2π radians about the z-axis. Thus,

    R(2π) |l, m⟩ = exp(−i L_z 2π/ħ) |l, m⟩ = exp(−i 2π m) |l, m⟩ = |l, m⟩.    (5.66)

We conclude that m must be an integer. This implies, from the previous section, that l must also be an integer. Thus, orbital angular momentum can only take on integer values of the quantum numbers l and m.

Consider the action of the rotation operator R(∆ϕ) on an eigenstate possessing zero angular momentum about the z-axis (i.e., an m = 0 state). We have

    R(∆ϕ) |l, 0⟩ = exp(0) |l, 0⟩ = |l, 0⟩.                                    (5.67)

Thus, the eigenstate is invariant to rotations about the z-axis. Clearly, its wave-function must be symmetric about the z-axis.

There is nothing special about the z-axis, so we can write

    R_x(∆ϕ_x) = exp(−i L_x ∆ϕ_x/ħ),                                           (5.68)
    R_y(∆ϕ_y) = exp(−i L_y ∆ϕ_y/ħ),                                           (5.69)
    R_z(∆ϕ_z) = exp(−i L_z ∆ϕ_z/ħ),                                           (5.70)

by analogy with Eq. (5.62). Here, R_x(∆ϕ_x) denotes an operator which rotates the system by an angle ∆ϕ_x about the x-axis, etc.

Suppose that the system is in an eigenstate of zero overall orbital angular momentum (i.e., an l = 0 state). We know that the system is also in an eigenstate of zero orbital angular momentum about any particular axis. This follows because l = 0 implies m = 0, according

to the previous section. Thus,

    R_x(∆ϕ_x) |0, 0⟩ = exp(0) |0, 0⟩ = |0, 0⟩,                                (5.71)
    R_y(∆ϕ_y) |0, 0⟩ = exp(0) |0, 0⟩ = |0, 0⟩,                                (5.72)
    R_z(∆ϕ_z) |0, 0⟩ = exp(0) |0, 0⟩ = |0, 0⟩.                                (5.73)

Thus, a zero angular momentum state is invariant to rotations about any axis. Such a state must possess a spherically symmetric wave-function.

Note that a rotation about the x-axis does not commute with a rotation about the y-axis. In other words, if the system is rotated an angle ∆ϕ_x about the x-axis, and then ∆ϕ_y about the y-axis, it ends up in a different state to that obtained by rotating an angle ∆ϕ_y about the y-axis, and then ∆ϕ_x about the x-axis. In quantum mechanics, this implies that R_y(∆ϕ_y) R_x(∆ϕ_x) ≠ R_x(∆ϕ_x) R_y(∆ϕ_y), or L_y L_x ≠ L_x L_y [see Eqs. (5.68)–(5.70)]. Thus, the noncommuting nature of the angular momentum operators is a direct consequence of the fact that rotations do not commute.

5.4 Eigenfunctions of orbital angular momentum

In Cartesian coordinates, the three components of orbital angular momentum can be written

    L_x = −iħ ( y ∂/∂z − z ∂/∂y ),                                            (5.74)
    L_y = −iħ ( z ∂/∂x − x ∂/∂z ),                                            (5.75)
    L_z = −iħ ( x ∂/∂y − y ∂/∂x ),                                            (5.76)

using the Schrödinger representation. Transforming to standard spherical polar coordinates,

    x = r sin θ cos ϕ,                                                        (5.77)

    y = r sin θ sin ϕ,                                                        (5.78)
    z = r cos θ,                                                              (5.79)

we obtain

    L_x = iħ ( sin ϕ ∂/∂θ + cot θ cos ϕ ∂/∂ϕ ),                               (5.80)
    L_y = −iħ ( cos ϕ ∂/∂θ − cot θ sin ϕ ∂/∂ϕ ),                              (5.81)
    L_z = −iħ ∂/∂ϕ.                                                           (5.82)

Note that Eq. (5.82) accords with Eq. (5.57). The shift operators L± = L_x ± i L_y become

    L± = ±ħ exp(±i ϕ) ( ∂/∂θ ± i cot θ ∂/∂ϕ ).                                (5.83)

Now,

    L² = L_x² + L_y² + L_z² = L_z² + (L⁺ L⁻ + L⁻ L⁺)/2,                       (5.84)

so

    L² = −ħ² [ (1/sin θ) ∂/∂θ ( sin θ ∂/∂θ ) + (1/sin²θ) ∂²/∂ϕ² ].            (5.85)

The eigenvalue problem for L² takes the form

    L² ψ = λ ħ² ψ,                                                            (5.86)

where ψ(r, θ, ϕ) is the wave-function, and λ is a number. Let us write

    ψ(r, θ, ϕ) = R(r) Y(θ, ϕ).                                                (5.87)

Equation (5.86) reduces to

    (1/sin θ) ∂/∂θ ( sin θ ∂Y/∂θ ) + (1/sin²θ) ∂²Y/∂ϕ² + λ Y = 0,             (5.88)

where use has been made of Eq. (5.85). As is well-known, square integrable solutions to this equation only exist when λ takes the values l (l + 1), where l is an integer. These solutions are known as spherical harmonics, and can be written

    Y_l^m(θ, ϕ) = √[ (2l + 1)/(4π) · (l − m)!/(l + m)! ] (−1)^m e^{i m ϕ} P_l^m(cos θ),    (5.89)

where m is a positive integer lying in the range 0 ≤ m ≤ l. Here, P_l^m(ξ) is an associated Legendre function satisfying the equation

    d/dξ [ (1 − ξ²) dP_l^m/dξ ] − m²/(1 − ξ²) P_l^m + l (l + 1) P_l^m = 0.    (5.90)

We define

    Y_l^{−m} = (−1)^m (Y_l^m)*,                                               (5.91)

which allows m to take the negative values −l ≤ m < 0. The spherical harmonics are orthogonal functions, and are properly normalized with respect to integration over the entire solid angle:

    ∫₀^π ∫₀^{2π} Y_l^{m *}(θ, ϕ) Y_{l′}^{m′}(θ, ϕ) sin θ dθ dϕ = δ_{l l′} δ_{m m′}.    (5.92)

The spherical harmonics also form a complete set for representing general functions of θ and ϕ.

By definition,

    L² Y_l^m = l (l + 1) ħ² Y_l^m,                                            (5.93)

where l is an integer. It follows from Eqs. (5.82) and (5.89) that

    L_z Y_l^m = m ħ Y_l^m,                                                    (5.94)

where m is an integer lying in the range −l ≤ m ≤ l. Thus, the wave-function ψ(r, θ, ϕ) = R(r) Y_l^m(θ, ϕ), where R is a general function, has all of the expected features of the wave-function of a simultaneous eigenstate of L² and L_z belonging to the quantum numbers l and m. The well-known formula

    dP_l^m/dξ = (1/√(1 − ξ²)) P_l^{m+1} − (m ξ/(1 − ξ²)) P_l^m
              = −((l + m)(l − m + 1)/√(1 − ξ²)) P_l^{m−1} + (m ξ/(1 − ξ²)) P_l^m    (5.95)

can be combined with Eqs. (5.83) and (5.89) to give

    L⁺ Y_l^m = √[ l (l + 1) − m (m + 1) ] ħ Y_l^{m+1},                        (5.96)
    L⁻ Y_l^m = √[ l (l + 1) − m (m − 1) ] ħ Y_l^{m−1}.                        (5.97)

These equations are equivalent to Eqs. (5.55)–(5.56).
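As a numerical aside, the orthonormality relation (5.92) can be checked with SciPy's spherical harmonics. Note that scipy.special.sph_harm takes its arguments in the order (m, l, azimuthal angle, polar angle), which is opposite to the physics convention used here, and that newer SciPy releases provide sph_harm_y as a replacement; the grid resolutions below are arbitrary.

import numpy as np
from scipy.special import sph_harm

# Gauss-Legendre nodes in cos(theta); uniform grid in phi.
xi, w = np.polynomial.legendre.leggauss(64)
theta = np.arccos(xi)
phi = np.linspace(0.0, 2.0*np.pi, 128, endpoint=False)
dphi = phi[1] - phi[0]
PH, TH = np.meshgrid(phi, theta)
W = w[:, None]*dphi                      # quadrature weights for the solid-angle integral

def Y(l, m):
    return sph_harm(m, l, PH, TH)        # SciPy argument order: (m, l, azimuth, polar)

for (l1, m1, l2, m2) in [(2, 1, 2, 1), (2, 1, 3, 1), (2, 1, 2, -1)]:
    overlap = np.sum(np.conj(Y(l1, m1))*Y(l2, m2)*W)
    print((l1, m1, l2, m2), np.round(overlap, 6))    # 1 for identical indices, 0 otherwise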

Note that a spherical harmonic wave-function is symmetric about the z-axis (i.e., independent of ϕ) whenever m = 0, and is spherically symmetric whenever l = 0 (since Y_0^0 = 1/√4π).

In summary, by solving directly for the eigenfunctions of L² and L_z in Schrödinger's representation, we have been able to reproduce all of the results of Sect. 5.2. Nevertheless, the results of Sect. 5.2 are more general than those obtained in this section, because they still apply when the quantum number l takes on half-integer values.

5.5 Motion in a central field

Consider a particle of mass M moving in a spherically symmetric potential. The Hamiltonian takes the form

    H = p²/2M + V(r).                                                         (5.98)

Adopting Schrödinger's representation, we can write p = −iħ ∇. Hence,

    H = −(ħ²/2M) ∇² + V(r).                                                   (5.99)

When written in spherical polar coordinates, the above equation becomes

    H = −(ħ²/2M) [ (1/r²) ∂/∂r ( r² ∂/∂r ) + (1/(r² sin θ)) ∂/∂θ ( sin θ ∂/∂θ ) + (1/(r² sin²θ)) ∂²/∂ϕ² ] + V(r).    (5.100)

Comparing this equation with Eq. (5.85), we find that

    H = (ħ²/2M) [ −(1/r²) ∂/∂r ( r² ∂/∂r ) + L²/(ħ² r²) ] + V(r).             (5.101)

Now, we know that the three components of angular momentum commute with L² (see Sect. 5.1). We also know, from Eqs. (5.80)–(5.82), that L_x, L_y, and L_z take the form of partial derivative operators of the angular coordinates,

when written in terms of spherical polar coordinates using Schrödinger's representation. It follows from Eq. (5.101) that all three components of the angular momentum commute with the Hamiltonian:

    [L, H] = 0.                                                               (5.102)

It is also easily seen that L² commutes with the Hamiltonian:

    [L², H] = 0.                                                              (5.103)

According to Sect. 4.2, the previous two equations ensure that the angular momentum L and its magnitude squared L² are both constants of the motion. This is as expected for a spherically symmetric potential.

Consider the energy eigenvalue problem

    H ψ = E ψ,                                                                (5.104)

where E is a number. Since L² and L_z commute with each other and the Hamiltonian, it is always possible to represent the state of the system in terms of the simultaneous eigenstates of L², L_z, and H. But, we already know that the most general form for the wave-function of a simultaneous eigenstate of L² and L_z is (see previous section)

    ψ(r, θ, ϕ) = R(r) Y_l^m(θ, ϕ).                                            (5.105)

Substituting Eq. (5.105) into Eq. (5.101), and making use of Eq. (5.93), we obtain

    [ −(ħ²/2M) ( (1/r²) d/dr ( r² d/dr ) − l (l + 1)/r² ) + V(r) − E ] R = 0.    (5.106)

This is a Sturm-Liouville equation for the function R(r). We know, from the general properties of this type of equation, that if R(r) is required to be well-behaved at r = 0 and as r → ∞ then solutions only exist for a discrete set of values of E. These are the energy eigenvalues. In general, the energy eigenvalues depend on the quantum number l, but are independent of the quantum number m.

5.6 Energy levels of the hydrogen atom

Consider a hydrogen atom, for which the potential takes the specific form

    V(r) = −e²/(4π ϵ₀ r).                                                     (5.107)

The radial eigenfunction R(r) satisfies Eq. (5.106), which can be written

    [ −(ħ²/2μ) ( (1/r²) d/dr ( r² d/dr ) − l (l + 1)/r² ) − e²/(4π ϵ₀ r) − E ] R = 0.    (5.108)

Here, μ = m_e m_p/(m_e + m_p) is the reduced mass, which takes into account the fact that the electron (of mass m_e) and the proton (of mass m_p) both rotate about a common centre, which is equivalent to a particle of mass μ rotating about a fixed point. Let us write the product r R(r) as the function P(r). The above equation transforms to

    d²P/dr² − (2μ/ħ²) [ l (l + 1) ħ²/(2μ r²) − e²/(4π ϵ₀ r) − E ] P = 0,      (5.109)

which is the one-dimensional Schrödinger equation for a particle of mass μ moving in the effective potential

    V_eff(r) = −e²/(4π ϵ₀ r) + l (l + 1) ħ²/(2μ r²).                          (5.110)

The effective potential has a simple physical interpretation. The first part is the attractive Coulomb potential, and the second part corresponds to the repulsive centrifugal force.

Let

    a = √[ −ħ²/(2μE) ],                                                       (5.111)

and

    y = r/a,                                                                  (5.112)

with P(r) = f(y) exp(−y). Here, it is assumed that the energy eigenvalue E is negative. Equation (5.109) transforms to

    [ d²/dy² − 2 d/dy − l (l + 1)/y² + (2μ e² a)/(4π ϵ₀ ħ² y) ] f = 0.        (5.113)

Let us look for a power-law solution of the form

    f(y) = Σ_n c_n y^n.                                                       (5.114)

Substituting this solution into Eq. (5.113), we obtain

    Σ_n c_n [ n (n − 1) y^{n−2} − 2n y^{n−1} − l (l + 1) y^{n−2} + (2μ e² a)/(4π ϵ₀ ħ²) y^{n−1} ] = 0.    (5.115)

Equating the coefficients of y^{n−2} gives

    c_n [ n (n − 1) − l (l + 1) ] = c_{n−1} [ 2 (n − 1) − (2μ e² a)/(4π ϵ₀ ħ²) ].    (5.116)

Now, the power law series (5.114) must terminate at small n, at some positive value of n, otherwise f(y) behaves unphysically as y → 0, since ∫|ψ|² dV must be finite. This is only possible if [n_min (n_min − 1) − l (l + 1)] = 0, where the first term in the series is c_{n_min} y^{n_min}. There are two possibilities: n_min = −l or n_min = l + 1. The former predicts unphysical behaviour of the wave-function at y = 0. Thus, we conclude that n_min = l + 1. Note that for an l = 0 state there is a finite probability of finding the electron at the nucleus, whereas for an l > 0 state there is zero probability of finding the electron at the nucleus (i.e., |ψ|² = 0 at r = 0, except when l = 0). Note, also, that it is only possible to obtain sensible behaviour of the wave-function as r → 0 if l is an integer.

For large values of y, the ratio of successive terms in the series (5.114) is

    c_n y / (c_{n−1}) = 2y/n,                                                 (5.117)

according to Eq. (5.116). This is the same as the ratio of successive terms in the series

    Σ_n (2y)^n / n!,                                                          (5.118)

which converges to exp(2y). We conclude that f(y) → exp(2y) as y → ∞. It follows from Eq. (5.112) that R(r) → exp(r/a)/r as r → ∞. This does not correspond to physically acceptable behaviour of the wave-function. The only way in which we can avoid this unphysical behaviour is

if the series (5.114) terminates at some maximum value of n. According to the recursion relation (5.116), this is only possible if

    (μ e² a)/(4π ϵ₀ ħ²) = n,                                                  (5.119)

where the last term in the series is c_n y^n. Here, n is a positive integer which must exceed the quantum number l, otherwise there would be no terms in the series (5.114). It follows from Eqs. (5.111) and (5.119) that the energy eigenvalues are quantized, and can only take the values

    E = −μ e⁴ / (32 π² ϵ₀² ħ² n²).                                            (5.120)

Here, n is a positive integer, l is a non-negative integer, and m is an integer. The restrictions on the quantum numbers are |m| ≤ l < n. It is clear that the wave-function for a hydrogen atom can be written

    ψ(r, θ, ϕ) = R(r/a) Y_l^m(θ, ϕ),                                          (5.121)

where

    a = 4π ϵ₀ ħ² n / (μ e²) = 5.3 × 10⁻¹¹ n meters,                           (5.122)

and R(x) is a well-behaved solution of the differential equation

    [ (1/x²) d/dx ( x² d/dx ) − l (l + 1)/x² + 2n/x − 1 ] R = 0.              (5.123)

Finally, the ground state of hydrogen corresponds to n = 1. The only permissible values of the other quantum numbers are l = 0 and m = 0. Thus, the ground state is a spherically symmetric, zero angular momentum state. The energy of the ground state is

    E₀ = −μ e⁴ / (32 π² ϵ₀² ħ²) = −13.6 electron volts.                       (5.124)
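The numerical values quoted in Eqs. (5.122) and (5.124) follow directly from the physical constants; as a quick check (illustrative only), the short script below evaluates them with SciPy's constants module.

import scipy.constants as const

me, mp = const.m_e, const.m_p
mu = me*mp/(me + mp)                                  # reduced mass of Eq. (5.108)
E1 = -mu*const.e**4/(32*const.pi**2*const.epsilon_0**2*const.hbar**2)
print(E1/const.e)                                     # ground-state energy, ~ -13.6 eV
a1 = 4*const.pi*const.epsilon_0*const.hbar**2/(mu*const.e**2)
print(a1)                                             # length scale for n = 1, ~ 5.3e-11 m
for n in (1, 2, 3):
    print(n, E1/(n*n*const.e))                        # E_n in eV: -13.6, -3.40, -1.51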

The next energy level corresponds to n = 2. The other quantum numbers are allowed to take the values l = 0, m = 0, or l = 1, m = −1, 0, 1. Thus, there are n = 2 states with non-zero angular momentum. Note that the energy levels given in Eq. (5.120) are independent of the quantum number l, despite the fact that l appears in the radial eigenfunction equation (5.123). This is a special property of a 1/r Coulomb potential.

In addition to the quantized negative energy states of the hydrogen atom, which we have just found, there is also a continuum of unbound positive energy states.

5.7 Spin angular momentum

Up to now, we have tacitly assumed that the state of a particle in quantum mechanics can be completely specified by giving the wave-function ψ as a function of the spatial coordinates x, y, and z. Unfortunately, there is a wealth of experimental evidence which suggests that this simplistic approach is incomplete.

Consider an isolated system at rest, and let the eigenvalue of its total angular momentum be j (j + 1) ħ². According to the theory of orbital angular momentum outlined in Sects. 5.4 and 5.5, we would conclude that there are two possibilities. For a system consisting of a single particle, j = 0. For a system consisting of two (or more) particles, j is a non-negative integer. However, this does not agree with observations, because we often find systems which appear to be structureless, and yet have j ≠ 0. Even worse, systems where j has half-integer values abound in nature. In order to explain this apparent discrepancy between theory and experiments, Goudsmit and Uhlenbeck (in 1925) introduced the concept of an internal, purely quantum mechanical, angular momentum called spin. For a particle with spin, the total angular momentum in the rest frame is non-vanishing.

Let us denote the three components of the spin angular momentum of a particle by the Hermitian operators (S_x, S_y, S_z) ≡ S. We assume that these operators obey the fundamental commutation relations (5.8)–(5.10) for the components of an angular momentum. Thus, we can write

    S × S = iħ S.                                                             (5.125)

We can also define the operator

    S² = S_x² + S_y² + S_z².                                                  (5.126)

According to the quite general analysis of Sect. 5.1,

    [S, S²] = 0.                                                              (5.127)

Thus, it is possible to find simultaneous eigenstates of S² and S_z. These are denoted |s, s_z⟩, where

    S_z |s, s_z⟩ = s_z ħ |s, s_z⟩,                                            (5.128)
    S² |s, s_z⟩ = s (s + 1) ħ² |s, s_z⟩.                                      (5.129)

According to the equally general analysis of Sect. 5.2, the quantum number s can, in principle, take integer or half-integer values, and the quantum number s_z can only take the values s, s − 1, ··· −s + 1, −s.

Spin angular momentum clearly has many properties in common with orbital angular momentum. However, there is one vitally important difference. Spin angular momentum operators cannot be expressed in terms of position and momentum operators, like in Eqs. (5.1)–(5.3), since this identification depends on an analogy with classical mechanics, and the concept of spin is purely quantum mechanical: i.e., it has no analogy in classical physics. Consequently, the restriction that the quantum number of the overall angular momentum must take integer values is lifted for spin angular momentum, since this restriction (found in Sects. 5.3 and 5.4) depends on Eqs. (5.1)–(5.3). In other words, the quantum number s is allowed to take half-integer values.

Consider a spin one-half particle, for which

    S_z |±⟩ = ±(ħ/2) |±⟩,                                                     (5.130)
    S² |±⟩ = (3ħ²/4) |±⟩.                                                     (5.131)

Here, the |±⟩ denote eigenkets of the S_z operator corresponding to the eigenvalues ±ħ/2. These kets are orthonormal (since S_z is an Hermitian operator), so

    ⟨+|−⟩ = 0.                                                                (5.132)

They are also properly normalized and complete, so that

    ⟨+|+⟩ = ⟨−|−⟩ = 1,                                                        (5.133)

and

    |+⟩⟨+| + |−⟩⟨−| = 1.                                                      (5.134)

It is easily verified that the Hermitian operators defined by

    S_x = (ħ/2) ( |+⟩⟨−| + |−⟩⟨+| ),                                          (5.135)
    S_y = (iħ/2) ( −|+⟩⟨−| + |−⟩⟨+| ),                                        (5.136)
    S_z = (ħ/2) ( |+⟩⟨+| − |−⟩⟨−| ),                                          (5.137)

satisfy the commutation relations (5.8)–(5.10) (with the L_j replaced by the S_j). The operator S² takes the form

    S² = (3ħ²/4).                                                             (5.138)

It is also easily demonstrated that S² and S_z, defined in this manner, satisfy the eigenvalue relations (5.130)–(5.131). Equations (5.135)–(5.138) constitute a realization of the spin operators S and S² (for a spin one-half particle) in spin space (i.e., that Hilbert sub-space consisting of kets which correspond to the different spin states of the particle).

5.8 Wave-function of a spin one-half particle

The state of a spin one-half particle is represented as a vector in ket space. Let us suppose that this space is spanned by the basis kets |x′, y′, z′, ±⟩. Here, |x′, y′, z′, ±⟩ denotes a simultaneous eigenstate of the position operators x, y, z, and the spin operator S_z, corresponding to the eigenvalues x′, y′, z′, and ±ħ/2, respectively. The basis kets are assumed to satisfy the completeness relation

    ∫ ( |x′, y′, z′, +⟩⟨x′, y′, z′, +| + |x′, y′, z′, −⟩⟨x′, y′, z′, −| ) dx′ dy′ dz′ = 1.    (5.139)
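The outer-product definitions (5.135)–(5.137) are easy to check numerically. The sketch below builds the 2×2 matrices directly from |+⟩ and |−⟩ (ħ = 1) and verifies the commutation relation and Eq. (5.138); it is purely illustrative.

import numpy as np

ket_p = np.array([[1.0], [0.0]])          # |+>
ket_m = np.array([[0.0], [1.0]])          # |->
hbar = 1.0

Sx = 0.5*hbar*(ket_p@ket_m.T + ket_m@ket_p.T)
Sy = 0.5j*hbar*(-ket_p@ket_m.T + ket_m@ket_p.T)
Sz = 0.5*hbar*(ket_p@ket_p.T - ket_m@ket_m.T)

print(np.allclose(Sx@Sy - Sy@Sx, 1j*hbar*Sz))          # [Sx, Sy] = i hbar Sz
S2 = Sx@Sx + Sy@Sy + Sz@Sz
print(np.allclose(S2, 0.75*hbar**2*np.eye(2)))         # S^2 = (3/4) hbar^2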

It is helpful to think of the ket |x′, y′, z′, +⟩ as the product of two kets—a position space ket |x′, y′, z′⟩, and a spin space ket |+⟩. We assume that such a product obeys the commutative and distributive axioms of multiplication:

    |x′, y′, z′⟩ |+⟩ = |+⟩ |x′, y′, z′⟩,                                      (5.140)
    ( c′ |x′, y′, z′⟩ + c″ |x″, y″, z″⟩ ) |+⟩ = c′ |x′, y′, z′⟩ |+⟩ + c″ |x″, y″, z″⟩ |+⟩,    (5.141)
    |x′, y′, z′⟩ ( c₊ |+⟩ + c₋ |−⟩ ) = c₊ |x′, y′, z′⟩ |+⟩ + c₋ |x′, y′, z′⟩ |−⟩,    (5.142)

where the c's are numbers. We can give meaning to any position space operator (such as L_z) acting on the product |x′, y′, z′⟩|+⟩ by assuming that it operates only on the |x′, y′, z′⟩ factor, and commutes with the |+⟩ factor. Similarly, we can give meaning to any spin operator (such as S_z) acting on |x′, y′, z′⟩|+⟩ by assuming that it operates only on |+⟩, and commutes with |x′, y′, z′⟩. This implies that every position space operator commutes with every spin operator. In this manner, we can give meaning to the equation

    |x′, y′, z′, ±⟩ = |x′, y′, z′⟩ |±⟩ = |±⟩ |x′, y′, z′⟩.                    (5.143)

The multiplication in the above equation is of quite a different type to any which we have encountered previously. The ket vectors |x′, y′, z′⟩ and |±⟩ are in two quite separate vector spaces, and their product |x′, y′, z′⟩|±⟩ is in a third vector space. In mathematics, the latter space is termed the product space of the former spaces, which are termed factor spaces. The number of dimensions of a product space is equal to the product of the number of dimensions of each of the factor spaces. A general ket of the product space is not of the form (5.143), but is instead a sum or integral of kets of this form.

A general state A of a spin one-half particle is represented as a ket ||A⟩ in the product of the spin and position spaces. This state can be completely specified by two wave-functions:

    ψ₊(x′, y′, z′) = ⟨x′, y′, z′| ⟨+||A⟩,                                     (5.144)
    ψ₋(x′, y′, z′) = ⟨x′, y′, z′| ⟨−||A⟩.                                     (5.145)

The probability of observing the particle in the region x′ to x′ + dx′, y′ to y′ + dy′, and z′ to z′ + dz′, with s_z = +1/2 is |ψ₊(x′, y′, z′)|² dx′ dy′ dz′. Likewise, the probability of observing the particle in the region x′ to x′ + dx′, y′ to y′ + dy′, and z′ to z′ + dz′, with s_z = −1/2 is |ψ₋(x′, y′, z′)|² dx′ dy′ dz′. The normalization condition for the wave-functions is

    ∫ ( |ψ₊|² + |ψ₋|² ) dx′ dy′ dz′ = 1.                                      (5.146)

5.9 Rotation operators in spin space

Let us, for the moment, forget about the spatial position of the particle, and concentrate on its spin state. A general spin state A is represented by the ket

    |A⟩ = ⟨+|A⟩ |+⟩ + ⟨−|A⟩ |−⟩                                               (5.147)

in spin space. In Sect. 5.3, we were able to construct an operator R_z(∆ϕ) which rotates the system by an angle ∆ϕ about the z-axis in position space. Can we also construct an operator T_z(∆ϕ) which rotates the system by an angle ∆ϕ about the z-axis in spin space? By analogy with Eq. (5.62), we would expect such an operator to take the form

    T_z(∆ϕ) = exp(−i S_z ∆ϕ/ħ).                                               (5.148)

Thus, after rotation, the ket |A⟩ becomes

    |A_R⟩ = T_z(∆ϕ) |A⟩.                                                      (5.149)

To demonstrate that the operator (5.148) really does rotate the spin of the system, let us consider its effect on ⟨S_x⟩. Under rotation, this expectation value changes as follows:

    ⟨S_x⟩ → ⟨A_R| S_x |A_R⟩ = ⟨A| T_z† S_x T_z |A⟩.                            (5.150)

Thus, we need to compute

    exp( i S_z ∆ϕ/ħ) S_x exp(−i S_z ∆ϕ/ħ).                                    (5.151)

This can be achieved in two different ways. First, we can use the explicit formula for S_x given in Eq. (5.135). We find that Eq. (5.151) becomes

    (ħ/2) exp( i S_z ∆ϕ/ħ) ( |+⟩⟨−| + |−⟩⟨+| ) exp(−i S_z ∆ϕ/ħ),              (5.152)

or

    (ħ/2) ( e^{ i ∆ϕ/2} |+⟩⟨−| e^{ i ∆ϕ/2} + e^{−i ∆ϕ/2} |−⟩⟨+| e^{−i ∆ϕ/2} ),    (5.153)

which reduces to

    S_x cos ∆ϕ − S_y sin ∆ϕ,                                                  (5.154)

where use has been made of Eqs. (5.135)–(5.137).

A second approach is to use the so called Baker-Hausdorff lemma. This takes the form

    exp( i G λ) A exp(−i G λ) = A + i λ [G, A] + (i² λ²/2!) [G, [G, A]] + ··· + (iⁿ λⁿ/n!) [G, [G, ··· [G, A]]] ··· ] + ···,    (5.155)

where G is a Hermitian operator, and λ is a real parameter. The proof of this lemma is left as an exercise. Applying the Baker-Hausdorff lemma to Eq. (5.151), we obtain

    S_x + (i ∆ϕ/ħ) [S_z, S_x] + (1/2!) (i ∆ϕ/ħ)² [S_z, [S_z, S_x]] + ···,     (5.156)

which reduces to

    S_x [ 1 − ∆ϕ²/2! + ··· ] − S_y [ ∆ϕ − ∆ϕ³/3! + ··· ],                     (5.157)

or

    S_x cos ∆ϕ − S_y sin ∆ϕ,                                                  (5.158)

where use has been made of Eq. (5.125). The second proof is more general than the first, since it only uses the fundamental commutation relation (5.125), and is, therefore, valid for systems with spin angular momentum higher than one-half.

For a spin one-half system, both methods imply that

    S_x → S_x cos ∆ϕ − S_y sin ∆ϕ                                             (5.159)

under the action of the rotation operator (5.148). It is straight-forward to show that

    S_y → S_y cos ∆ϕ + S_x sin ∆ϕ.                                            (5.160)

Furthermore,

    S_z → S_z,                                                                (5.161)

since S_z commutes with the rotation operator. Equations (5.159)–(5.161) demonstrate that the operator (5.148) rotates the expectation value of S by an angle ∆ϕ about the z-axis. In fact, the expectation value of the spin operator behaves like a classical vector under rotation:

    ⟨S_k⟩ → Σ_l R_kl ⟨S_l⟩,                                                   (5.162)

where the R_kl are the elements of the conventional rotation matrix for the rotation in question. It is clear, from our second derivation of the result (5.159), that this property is not restricted to the spin operators of a spin one-half system. In fact, we have effectively demonstrated that

    ⟨J_k⟩ → Σ_l R_kl ⟨J_l⟩,                                                   (5.163)

where the J_k are the generators of rotation, satisfying the fundamental commutation relation J × J = iħ J, and the rotation operator about the kth axis is written R_k(∆ϕ) = exp(−i J_k ∆ϕ/ħ).

Consider the effect of the rotation operator (5.148) on the state ket (5.147). It is easily seen that

    T_z(∆ϕ) |A⟩ = e^{−i ∆ϕ/2} ⟨+|A⟩ |+⟩ + e^{ i ∆ϕ/2} ⟨−|A⟩ |−⟩.              (5.164)

Consider a rotation by 2π radians. We find that

    |A⟩ → T_z(2π) |A⟩ = −|A⟩.                                                 (5.165)
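Both the rotation law (5.159) and the sign change (5.165) are easy to confirm with 2×2 matrices (ħ = 1); the rotation angle below is arbitrary, and the sketch is illustrative only.

import numpy as np
from scipy.linalg import expm

hbar = 1.0
Sx = 0.5*hbar*np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5*hbar*np.array([[0, -1j], [1j, 0]])
Sz = 0.5*hbar*np.array([[1, 0], [0, -1]], dtype=complex)

dphi = 0.7                                           # arbitrary rotation angle
Tz = expm(-1j*Sz*dphi/hbar)
rotated = Tz.conj().T @ Sx @ Tz                      # exp(+iSz dphi) Sx exp(-iSz dphi)
print(np.allclose(rotated, Sx*np.cos(dphi) - Sy*np.sin(dphi)))   # Eq. (5.159)

print(np.allclose(expm(-1j*Sz*2*np.pi/hbar), -np.eye(2)))        # a 2*pi rotation gives -1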

Note that a ket rotated by 2π radians differs from the original ket by a minus sign. In fact, a rotation by 4π radians is needed to transform a ket into itself. The minus sign does not affect the expectation value of S, since S is sandwiched between ⟨A| and |A⟩, both of which change sign. Nevertheless, the minus sign does give rise to observable consequences, as we shall see presently.

5.10 Magnetic moments

Consider a particle of charge q and velocity v performing a circular orbit of radius r in the x-y plane. The charge is equivalent to a current loop of radius r in the x-y plane carrying current I = q v/2π r. The magnetic moment µ of the loop is of magnitude π r² I and is directed along the z-axis. Thus, we can write

    µ = (q/2) r × v,                                                          (5.166)

where r and v are the vector position and velocity of the particle, respectively. However, we know that p = m v, where p is the vector momentum of the particle, and m is its mass. We also know that L = r × p, where L is the orbital angular momentum. It follows that

    µ = (q/2m) L.                                                             (5.167)

Using the usual analogy between classical and quantum mechanics, we expect the above relation to also hold between the quantum mechanical operators, µ and L, which represent magnetic moment and orbital angular momentum, respectively. This is indeed found to be the case.

Does spin angular momentum also give rise to a contribution to the magnetic moment of a charged particle? The answer is "yes". In fact, relativistic quantum mechanics actually predicts that a charged particle possessing spin should also possess a magnetic moment (this was first demonstrated by Dirac). We can write

    µ = (q/2m) (L + g S),                                                     (5.168)

where g is called the gyromagnetic ratio. For an electron this ratio is found to be

    g_e = 2 [ 1 + (1/2π) e²/(4π ϵ₀ ħ c) ].                                    (5.169)

The factor 2 is correctly predicted by Dirac's relativistic theory of the electron. The small correction 1/(2π 137), derived originally by Schwinger, is due to quantum field effects. We shall ignore this correction in the following. Thus,

    µ = −(e/2 m_e) (L + 2 S)                                                  (5.170)

for an electron (here, e > 0).

5.11 Spin precession

The Hamiltonian for an electron at rest in a z-directed magnetic field, B = B ẑ, is

    H = −µ·B = (e/m_e) S·B = ω S_z,                                           (5.171)

where

    ω = e B/m_e.                                                              (5.172)

According to Eq. (4.28), the time evolution operator for this system is

    T(t, 0) = exp(−i H t/ħ) = exp(−i S_z ω t/ħ).                              (5.173)

It can be seen, by comparison with Eq. (5.148), that the time evolution operator is precisely the same as the rotation operator for spin, with ∆ϕ set equal to ω t. It is immediately clear that the Hamiltonian (5.171) causes the electron spin to precess about the z-axis with angular frequency ω. In fact, Eqs. (5.159)–(5.161) imply that

    ⟨S_x⟩_t = ⟨S_x⟩_{t=0} cos ωt − ⟨S_y⟩_{t=0} sin ωt,                         (5.174)
    ⟨S_y⟩_t = ⟨S_y⟩_{t=0} cos ωt + ⟨S_x⟩_{t=0} sin ωt,                         (5.175)
    ⟨S_z⟩_t = ⟨S_z⟩_{t=0}.                                                     (5.176)

The time evolution of the state ket is given by analogy with Eq. (5.164):

    |A, t⟩ = e^{−i ω t/2} ⟨+|A, 0⟩ |+⟩ + e^{ i ω t/2} ⟨−|A, 0⟩ |−⟩.           (5.177)
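A minimal precession sketch (ħ = ω = 1, all values arbitrary): a spin initially pointing along +x is evolved under H = ω S_z, and the computed ⟨S_x⟩(t) is compared with the analytic result (5.174).

import numpy as np
from scipy.linalg import expm

hbar, omega = 1.0, 1.0
Sx = 0.5*hbar*np.array([[0, 1], [1, 0]], dtype=complex)
Sz = 0.5*hbar*np.array([[1, 0], [0, -1]], dtype=complex)
H = omega*Sz

psi0 = np.array([1.0, 1.0], dtype=complex)/np.sqrt(2)     # spin initially along +x
for t in np.linspace(0.0, 2*np.pi/omega, 5):
    psi = expm(-1j*H*t/hbar) @ psi0
    Sx_t = np.real(psi.conj() @ Sx @ psi)
    print(round(float(t), 3), round(float(Sx_t), 3), round(0.5*np.cos(omega*t), 3))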

Note that it takes time t = 4π/ω for the state ket to return to its original state. By contrast, it only takes time t = 2π/ω for the spin vector to point in its original direction.

We now describe an experiment to detect the minus sign in Eq. (5.165). An almost monoenergetic beam of neutrons is split in two, sent along two different paths, A and B, and then recombined. Path A goes through a magnetic field free region. However, path B enters a small region where a static magnetic field is present. As a result, a neutron state ket going along path B acquires a phase-shift exp(∓i ω T/2) (the two signs correspond to s_z = ±1/2 states), where T is the time spent in the magnetic field, and ω is the spin precession frequency

    ω = g_n e B / m_p.                                                        (5.178)

This frequency is defined in an analogous manner to Eq. (5.172). The gyromagnetic ratio for a neutron is found experimentally to be g_n = −1.91. (The magnetic moment of a neutron is entirely a quantum field effect). When neutrons from path A and path B meet they undergo interference. We expect the observed neutron intensity in the interference region to exhibit a cos(±ω T/2 + δ) variation, where δ is the phase difference between paths A and B in the absence of a magnetic field. In experiments, the time of flight T through the magnetic field region is kept constant, while the field-strength B is varied. It follows that the change in magnetic field required to produce successive maxima is

    ∆B = 4π ħ / (e g_n λ̄ l),                                                  (5.179)

where l is the path-length through the magnetic field region, and λ̄ is the de Broglie wavelength over 2π of the neutrons. The above prediction has been verified experimentally to within a fraction of a percent. This prediction depends crucially on the fact that it takes a 4π rotation to return a state ket to its original state. If it only took a 2π rotation then ∆B would be half of the value given above, which does not agree with the experimental data.

5.12 Pauli two-component formalism

We have seen, in Sect. 5.4, that the eigenstates of orbital angular momentum can be conveniently represented as spherical harmonics. In this representation, the orbital angular momentum operators take the form of differential operators involving only angular coordinates. It is conventional to represent the eigenstates of spin angular momentum as column (or row) matrices. In this representation, the spin angular momentum operators take the form of matrices.

The matrix representation of a spin one-half system was introduced by Pauli in 1926. Recall, from Sect. 5.9, that a general spin ket can be expressed as a linear combination of the two eigenkets of S_z belonging to the eigenvalues ±ħ/2. These are denoted |±⟩. Let us represent these basis eigenkets as column matrices:

    |+⟩ → (1, 0)ᵀ ≡ χ₊,                                                       (5.180)
    |−⟩ → (0, 1)ᵀ ≡ χ₋.                                                       (5.181)

The corresponding eigenbras are represented as row matrices:

    ⟨+| → (1, 0) ≡ χ₊†,                                                       (5.182)
    ⟨−| → (0, 1) ≡ χ₋†.                                                       (5.183)

In this scheme, a general ket becomes

    |A⟩ = ⟨+|A⟩ |+⟩ + ⟨−|A⟩ |−⟩ → ( ⟨+|A⟩, ⟨−|A⟩ )ᵀ,                          (5.184)

and a general bra takes the form

    ⟨A| = ⟨A|+⟩ ⟨+| + ⟨A|−⟩ ⟨−| → ( ⟨A|+⟩, ⟨A|−⟩ ).                           (5.185)

The column matrix (5.184) is called a two-component spinor, and can be written

    χ ≡ ( ⟨+|A⟩, ⟨−|A⟩ )ᵀ = ( c₊, c₋ )ᵀ = c₊ χ₊ + c₋ χ₋,                      (5.186)

where the c± are complex numbers. The row matrix (5.185) becomes

    χ† ≡ ( ⟨A|+⟩, ⟨A|−⟩ ) = ( c₊*, c₋* ) = c₊* χ₊† + c₋* χ₋†.                 (5.187)

Consider the ket obtained by the action of a spin operator on ket A:

    |A′⟩ = S_k |A⟩.                                                           (5.188)

This ket is represented as

    |A′⟩ → ( ⟨+|A′⟩, ⟨−|A′⟩ )ᵀ ≡ χ′.                                          (5.189)

However,

    ⟨+|A′⟩ = ⟨+|S_k|+⟩ ⟨+|A⟩ + ⟨+|S_k|−⟩ ⟨−|A⟩,                               (5.190)
    ⟨−|A′⟩ = ⟨−|S_k|+⟩ ⟨+|A⟩ + ⟨−|S_k|−⟩ ⟨−|A⟩,                               (5.191)

or, in matrix form,

    ( ⟨+|A′⟩ )   ( ⟨+|S_k|+⟩  ⟨+|S_k|−⟩ ) ( ⟨+|A⟩ )
    ( ⟨−|A′⟩ ) = ( ⟨−|S_k|+⟩  ⟨−|S_k|−⟩ ) ( ⟨−|A⟩ ).                          (5.192)

It follows that we can represent the operator/ket relation (5.188) as the matrix relation

    χ′ = (ħ/2) σ_k χ,                                                         (5.193)

where the σ_k are the matrices of the ⟨±|S_k|±⟩ values divided by ħ/2. These matrices, which are called the Pauli matrices, can easily be evaluated using the explicit forms for the spin operators given in Eqs. (5.135)–(5.137). We find that

    σ₁ = ( 0  1 ),    σ₂ = ( 0  −i ),    σ₃ = ( 1   0 ).                      (5.194)–(5.196)
         ( 1  0 )          ( i   0 )          ( 0  −1 )

Here, 1, 2, and 3 refer to x, y, and z, respectively. Note that, in this scheme, we are effectively representing the spin operators in terms of the Pauli matrices:

    S_k → (ħ/2) σ_k.                                                          (5.197)

The expectation value of S_k can be written in terms of spinors and the Pauli matrices:

    ⟨S_k⟩ = ⟨A|S_k|A⟩ = Σ_{±} ⟨A|±⟩ ⟨±|S_k|±′⟩ ⟨±′|A⟩ = (ħ/2) χ† σ_k χ.        (5.198)

The fundamental commutation relation for angular momentum, Eq. (5.125), can be combined with (5.197) to give the following commutation relation for the Pauli matrices:

    σ × σ = 2 i σ.                                                            (5.199)

It is easily seen that the matrices (5.194)–(5.196) actually satisfy these relations (i.e., σ₁σ₂ − σ₂σ₁ = 2 i σ₃, plus all cyclic permutations). It is also easily seen that the Pauli matrices satisfy the anti-commutation relations

    {σ_i, σ_j} = 2 δ_ij.                                                      (5.200)

Let us examine how the Pauli scheme can be extended to take into account the position of a spin one-half particle. Recall, from Sect. 5.8, that we can represent a general basis ket as the product of basis kets in position space and spin space:

    |x′, y′, z′, ±⟩ = |x′, y′, z′⟩ |±⟩ = |±⟩ |x′, y′, z′⟩.                    (5.201)

The ket corresponding to state A is denoted ||A⟩, and resides in the product space of the position and spin ket spaces. State A is completely specified by the two wave-functions

    ψ₊(x′, y′, z′) = ⟨x′, y′, z′| ⟨+||A⟩,                                     (5.202)
    ψ₋(x′, y′, z′) = ⟨x′, y′, z′| ⟨−||A⟩.                                     (5.203)

Consider the operator relation

    ||A′⟩ = S_k ||A⟩.                                                         (5.204)

It is easily seen that

    ⟨x′, y′, z′| ⟨+||A′⟩ = ⟨+|S_k|+⟩ ⟨x′, y′, z′| ⟨+||A⟩ + ⟨+|S_k|−⟩ ⟨x′, y′, z′| ⟨−||A⟩,    (5.205)
    ⟨x′, y′, z′| ⟨−||A′⟩ = ⟨−|S_k|+⟩ ⟨x′, y′, z′| ⟨+||A⟩ + ⟨−|S_k|−⟩ ⟨x′, y′, z′| ⟨−||A⟩,    (5.206)

where use has been made of the fact that the spin operator S_k commutes with the eigenbras ⟨x′, y′, z′|. It is fairly obvious that we can represent the operator relation (5.204) as a matrix relation if we generalize our definition of a spinor by writing

    ||A⟩ → ( ψ₊(r′), ψ₋(r′) )ᵀ ≡ χ.                                           (5.207)

The components of a spinor are now wave-functions, instead of complex numbers. In this scheme, the operator equation (5.204) becomes simply

    χ′ = (ħ/2) σ_k χ.                                                         (5.208)

Consider the operator relation

    ||A′⟩ = p_k ||A⟩.                                                         (5.209)

In the Schrödinger representation, we have

    ⟨x′, y′, z′| ⟨+||A′⟩ = ⟨x′, y′, z′| p_k ⟨+||A⟩ = −iħ ∂/∂x_k′ ⟨x′, y′, z′| ⟨+||A⟩,    (5.210)
    ⟨x′, y′, z′| ⟨−||A′⟩ = −iħ ∂/∂x_k′ ⟨x′, y′, z′| ⟨−||A⟩,                    (5.211)

where use has been made of Eq. (3.78). The above equations reduce to

    χ′ = ( −iħ ∂ψ₊(r′)/∂x_k′ , −iħ ∂ψ₋(r′)/∂x_k′ )ᵀ.                          (5.212)

Thus, in the Schrödinger representation, the operator equation (5.209) can be written

    χ′ = p_k χ,                                                               (5.213)

where

    p_k → −iħ (∂/∂x_k′) I.                                                    (5.214)

Here, I is the 2×2 unit matrix. In fact, any position operator (e.g., p_k or L_k) is represented in the Pauli scheme as some differential operator of the position eigenvalues multiplied by the 2×2 unit matrix.

What about combinations of position and spin operators? The most commonly occurring combination is a dot product: e.g., S·L = (ħ/2) σ·L. Consider the hybrid operator σ·a, where a ≡ (a_x, a_y, a_z) is some vector position operator. This quantity is represented as a 2×2 matrix:

    σ·a ≡ Σ_k a_k σ_k = (  +a₃       a₁ − i a₂ ).                             (5.215)
                        ( a₁ + i a₂    −a₃     )

Since, in the Schrödinger representation, a general position operator takes the form of a differential operator in x′, y′, or z′, it is clear that the above quantity must be regarded as a matrix differential operator which acts on spinors of the general form (5.207). The important identity

    (σ·a) (σ·b) = a·b + i σ·(a × b)                                           (5.216)

follows from the commutation and anti-commutation relations (5.199) and (5.200). Thus,

    Σ_j σ_j a_j Σ_k σ_k b_k = Σ_{j,k} ( (1/2){σ_j, σ_k} + (1/2)[σ_j, σ_k] ) a_j b_k
                            = Σ_{j,k} ( δ_jk + i ε_jkl σ_l ) a_j b_k = a·b + i σ·(a × b).    (5.217)

A general rotation operator in spin space is written

    T(∆ϕ) = exp(−i S·n ∆ϕ/ħ),                                                 (5.218)

by analogy with Eq. (5.148), where n is a unit vector pointing along the axis of rotation, and ∆ϕ is the angle of rotation. Here, n can be regarded as a trivial position operator. The rotation operator is represented

    exp(−i S·n ∆ϕ/ħ) → exp(−i σ·n ∆ϕ/2)                                       (5.219)

in the Pauli scheme. The term on the right-hand side of the above expression is the exponential of a matrix. This can easily be evaluated using the Taylor series for an exponential, plus the rules

    (σ·n)ⁿ = 1          for n even,                                           (5.220)
    (σ·n)ⁿ = (σ·n)      for n odd.                                            (5.221)

These rules follow trivially from the identity (5.216). Thus, we can write

    exp(−i σ·n ∆ϕ/2) = [ 1 − ((σ·n)²/2!) (∆ϕ/2)² + ((σ·n)⁴/4!) (∆ϕ/2)⁴ − ··· ]
                       − i [ (σ·n) (∆ϕ/2) − ((σ·n)³/3!) (∆ϕ/2)³ + ··· ]
                     = cos(∆ϕ/2) I − i sin(∆ϕ/2) σ·n.                         (5.222)

The explicit 2×2 form of this matrix is

    ( cos(∆ϕ/2) − i n_z sin(∆ϕ/2)      (−i n_x − n_y) sin(∆ϕ/2)   ).          (5.223)
    ( (−i n_x + n_y) sin(∆ϕ/2)         cos(∆ϕ/2) + i n_z sin(∆ϕ/2) )

Rotation matrices act on spinors in much the same manner as the corresponding rotation operators act on state kets. Thus,

    χ′ = exp(−i σ·n ∆ϕ/2) χ,                                                  (5.224)

where χ′ denotes the spinor obtained after rotating the spinor χ an angle ∆ϕ about the n-axis. The Pauli matrices remain unchanged under rotations. However, the quantity χ† σ_k χ is proportional to the expectation value of S_k [see Eq. (5.198)], so we would expect it to transform like a vector under rotation (see Sect. 5.9). In fact, we require

    (χ† σ_k χ)′ ≡ (χ†)′ σ_k χ′ = Σ_l R_kl (χ† σ_l χ),                         (5.225)

where the R_kl are the elements of a conventional rotation matrix.
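The identity (5.216) and the closed form (5.222) are both simple to verify numerically; the sketch below uses random vectors a and b and the z-axis as an arbitrary rotation axis, purely as an illustration.

import numpy as np
from scipy.linalg import expm

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

rng = np.random.default_rng(2)
a, b = rng.normal(size=3), rng.normal(size=3)          # ordinary numerical vectors
sa = np.einsum('i,ijk->jk', a, sigma)                  # sigma . a
sb = np.einsum('i,ijk->jk', b, sigma)
rhs = np.dot(a, b)*np.eye(2) + 1j*np.einsum('i,ijk->jk', np.cross(a, b), sigma)
print(np.allclose(sa @ sb, rhs))                       # verifies Eq. (5.216)

n, dphi = np.array([0.0, 0.0, 1.0]), 1.1               # arbitrary axis and angle
sn = np.einsum('i,ijk->jk', n, sigma)
print(np.allclose(expm(-0.5j*dphi*sn),
                  np.cos(dphi/2)*np.eye(2) - 1j*np.sin(dphi/2)*sn))   # verifies Eq. (5.222)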

The Pauli matrices remain unchanged under rotations. However, the quantity \chi^\dagger \sigma_k \chi is proportional to the expectation value of S_k [see Eq. (5.198)], so we would expect it to transform like a vector under rotation (see Sect. 5.9). In fact, we require

(\chi^\dagger \sigma_k \chi)' \equiv (\chi')^\dagger \sigma_k \chi' = \sum_l R_{kl}\,(\chi^\dagger \sigma_l \chi),   (5.225)

where the R_{kl} are the elements of a conventional rotation matrix. This is easily demonstrated, since

\exp(i\,\sigma_3\,\Delta\varphi/2)\,\sigma_1\,\exp(-i\,\sigma_3\,\Delta\varphi/2) = \sigma_1\cos\Delta\varphi - \sigma_2\sin\Delta\varphi,   (5.226)

plus all cyclic permutations. The above expression is the 2x2 matrix analogue of (see Sect. 5.9)

\exp(i\,S_z\,\Delta\varphi/\hbar)\,S_x\,\exp(-i\,S_z\,\Delta\varphi/\hbar) = S_x\cos\Delta\varphi - S_y\sin\Delta\varphi.   (5.227)

The previous two formulae can both be validated using the Baker-Hausdorff lemma, (5.156), which holds for Hermitian matrices, in addition to Hermitian operators.

5.13 Spin greater than one-half systems

In the absence of spin, the Hamiltonian can be written as some function of the position and momentum operators. Using the Schrödinger representation, in which p -> -i hbar nabla, the energy eigenvalue problem,

H\,|E\rangle = E\,|E\rangle,   (5.228)

can be transformed into a partial differential equation for the wave-function psi(r') = <r'|E>. In general, we find

H\,\psi = E\,\psi,   (5.229)

where H is now a partial differential operator. This function specifies the probability density for observing the particle at a given position. The boundary conditions (for a bound state) are obtained from the normalization constraint

\int |\psi|^2\,dV = 1.   (5.230)

This is all very familiar. However, we now know how to generalize this scheme to deal with a spin one-half particle. Instead of representing the state of the particle by a single wave-function, we use two wave-functions.

The first, psi_+(r'), specifies the probability density of observing the particle at position r' with spin angular momentum +hbar/2 in the z-direction. The second, psi_-(r'), specifies the probability density of observing the particle at position r' with spin angular momentum -hbar/2 in the z-direction. In the Pauli scheme, these wave-functions are combined into a spinor, chi, which is simply the column vector of psi_+ and psi_-.

In general, the Hamiltonian is a function of the position, momentum, and spin operators. Adopting the Schrödinger representation, and the Pauli scheme, the energy eigenvalue problem reduces to

H\,\chi = E\,\chi,   (5.231)

where chi is a spinor (i.e., a 2x1 matrix of wave-functions) and H is a 2x2 matrix partial differential operator [see Eq. (5.215)]. The above spinor equation can always be written out explicitly as two coupled partial differential equations for psi_+ and psi_-.

Suppose that the Hamiltonian has no dependence on the spin operators. In this case, the Hamiltonian is represented as a diagonal 2x2 matrix partial differential operator in the Schrödinger/Pauli scheme [see Eq. (5.214)]. In other words, the partial differential equation for psi_+ decouples from that for psi_-. In fact, both equations have the same form, so there is only really one differential equation. In this situation, the most general solution to Eq. (5.231) can be written

\chi = \psi(r')\begin{pmatrix} c_+ \\ c_- \end{pmatrix}.   (5.232)

Here, psi(r') is determined by the solution of the differential equation, and the c_{+/-} are arbitrary complex numbers. The physical significance of the above expression is clear. The Hamiltonian determines the relative probabilities of finding the particle at various different positions, but the direction of its spin angular momentum remains undetermined.

Suppose that the Hamiltonian depends only on the spin operators. In this case, the Hamiltonian is represented as a 2x2 matrix of complex numbers in the Schrödinger/Pauli scheme [see Eq. (5.197)], and the spinor eigenvalue equation (5.231) reduces to a straight-forward matrix eigenvalue problem.

The most general solution can again be written

\chi = \psi(r')\begin{pmatrix} c_+ \\ c_- \end{pmatrix}.   (5.233)

Here, the ratio c_+/c_- is determined by the matrix eigenvalue problem, and the wave-function psi(r') is arbitrary. Clearly, the Hamiltonian determines the direction of the particle's spin angular momentum, but leaves its position undetermined.

In general, of course, the Hamiltonian is a function of both position and spin operators. In this case, it is not possible to decompose the spinor as in Eqs. (5.232) and (5.233). In other words, a general Hamiltonian causes the direction of the particle's spin angular momentum to vary with position in some specified manner. This can only be represented as a spinor involving different wave-functions, psi_+ and psi_-.

But, what happens if we have a spin one or a spin three-halves particle? It turns out that we can generalize the Pauli two-component scheme in a fairly straight-forward manner. Consider a spin-s particle: i.e., a particle for which the eigenvalue of S^2 is s(s+1) hbar^2. Here, s is either an integer, or a half-integer. The eigenvalues of S_z are written s_z hbar, where s_z is allowed to take the values s, s-1, ..., -s+1, -s. In fact, there are 2s+1 distinct allowed values of s_z. Not surprisingly, we can represent the state of the particle by 2s+1 different wave-functions, denoted psi_{s_z}(r'). Here, psi_{s_z}(r') specifies the probability density for observing the particle at position r' with spin angular momentum s_z hbar in the z-direction. More exactly,

\psi_{s_z}(r') = \langle r'|\langle s, s_z\|A\rangle\rangle,   (5.234)

where ||A>> denotes a state ket in the product space of the position and spin operators. The state of the particle can be represented more succinctly by a spinor, chi, which is simply the (2s+1)-component column vector of the psi_{s_z}(r'). Thus, a spin one-half particle is represented by a two-component spinor, a spin one particle by a three-component spinor, a spin three-halves particle by a four-component spinor, and so on.

In this extended Schrödinger/Pauli scheme, position space operators take the form of diagonal (2s+1)x(2s+1) matrix differential operators. Thus, we can represent the momentum operators as [see Eq. (5.214)]

p_k \rightarrow -i\hbar\,\frac{\partial}{\partial x_k}\,I,   (5.235)

where I is the (2s+1)x(2s+1) unit matrix. We represent the spin operators as

S_k \rightarrow s\,\hbar\,\sigma_k,   (5.236)

where the (2s+1)x(2s+1) extended Pauli matrix sigma_k has elements

(\sigma_k)_{jl} = \frac{\langle s,j|S_k|s,l\rangle}{s\,\hbar}.   (5.237)

Here, j, l are integers, or half-integers, lying in the range -s to +s. But, how can we evaluate the brackets <s,j|S_k|s,l> and, thereby, construct the extended Pauli matrices? In fact, it is trivial to construct the sigma_z matrix. By definition,

S_z\,|s,j\rangle = j\,\hbar\,|s,j\rangle.   (5.238)

Hence,

(\sigma_z)_{jl} = \frac{\langle s,j|S_z|s,l\rangle}{s\,\hbar} = \frac{j}{s}\,\delta_{jl},   (5.239)

where use has been made of the orthonormality property of the |s,j>. Thus, sigma_z is the suitably normalized diagonal matrix of the eigenvalues of S_z. The matrix elements of sigma_x and sigma_y are most easily obtained by considering the shift operators,

S_\pm = S_x \pm i\,S_y.   (5.240)

We know, from Eqs. (5.55)-(5.56), that

S_+\,|s,j\rangle = \sqrt{s(s+1) - j(j+1)}\;\hbar\,|s,j+1\rangle,   (5.241)

S_-\,|s,j\rangle = \sqrt{s(s+1) - j(j-1)}\;\hbar\,|s,j-1\rangle.   (5.242)

It follows from Eqs. (5.237) and (5.240)-(5.242) that

(\sigma_x)_{jl} = \frac{\sqrt{s(s+1)-j(j-1)}\;\delta_{j,l+1} + \sqrt{s(s+1)-j(j+1)}\;\delta_{j,l-1}}{2s},   (5.243)

(\sigma_y)_{jl} = \frac{\sqrt{s(s+1)-j(j-1)}\;\delta_{j,l+1} - \sqrt{s(s+1)-j(j+1)}\;\delta_{j,l-1}}{2\,i\,s}.   (5.244)

According to Eqs. (5.239) and (5.243)-(5.244), we can now construct the extended Pauli matrices for a particle of any spin. For instance, the Pauli matrices for a spin one-half (s = 1/2) particle are

\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},   (5.245)

\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},   (5.246)

\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},   (5.247)

as we have seen previously. For a spin one (s = 1) particle, we find that

\sigma_x = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix},   (5.248)

\sigma_y = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 & -i & 0 \\ i & 0 & -i \\ 0 & i & 0 \end{pmatrix},   (5.249)

\sigma_z = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix}.   (5.250)

This means that we can convert the general energy eigenvalue problem for a spin-s particle, where the Hamiltonian is some function of position and spin operators, into 2s+1 coupled partial differential equations involving the 2s+1 wave-functions psi_{s_z}(r'). Unfortunately, such a system of equations is generally too complicated to solve exactly.
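The prescription (5.239) and (5.243)-(5.244) is straightforward to implement. The sketch below (assuming NumPy; hbar = 1 and the function name is illustrative) builds the extended Pauli matrices for arbitrary s from the shift operators and reproduces the spin-one matrices quoted above:

    import numpy as np

    def extended_pauli(s):
        # Extended Pauli matrices for spin s, normalized so that S_k = s*sigma_k
        # (hbar = 1), as in Eq. (5.236).  Basis order: j = s, s-1, ..., -s.
        jvals = np.arange(s, -s - 1, -1)
        dim = len(jvals)
        Sp = np.zeros((dim, dim), dtype=complex)          # shift operator S_+
        for col, m in enumerate(jvals):
            if m < s:                                     # S_+|s,m> = sqrt(...)|s,m+1>
                Sp[col - 1, col] = np.sqrt(s*(s + 1) - m*(m + 1))
        Sm = Sp.conj().T                                  # S_- is the adjoint of S_+
        Sx, Sy, Sz = (Sp + Sm)/2, (Sp - Sm)/(2*1j), np.diag(jvals)
        return Sx/s, Sy/s, Sz/s

    sx, sy, sz = extended_pauli(1)                 # the spin-one case
    print(np.round(sx*np.sqrt(2), 3).real)         # recovers the pattern of Eq. (5.248)
    print(np.allclose(sz, np.diag([1, 0, -1])))    # Eq. (5.250)

The same function with s = 1/2 returns the familiar 2x2 matrices (5.245)-(5.247).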

5.14 Addition of angular momentum

Consider a hydrogen atom in an l = 1 state. The electron possesses orbital angular momentum of magnitude hbar, and spin angular momentum of magnitude hbar/2. So, what is the total angular momentum of the system? In order to answer this question, we are going to have to learn how to add angular momentum operators.

Let us consider the most general case. Suppose that we have two sets of angular momentum operators, J_1 and J_2. By definition, these operators are Hermitian, and obey the fundamental commutation relations

J_1 \times J_1 = i\,\hbar\,J_1,   (5.251)

J_2 \times J_2 = i\,\hbar\,J_2.   (5.252)

We assume that the two groups of operators correspond to different degrees of freedom of the system, so that

[J_{1i}, J_{2j}] = 0,   (5.253)

where i, j stand for either x, y, or z. For instance, J_1 could be an orbital angular momentum operator, and J_2 a spin angular momentum operator. Alternatively, J_1 and J_2 could be the orbital angular momentum operators of two different particles in a multi-particle system. We know, from the general properties of angular momentum, that the eigenvalues of J_1^2 and J_2^2 can be written j_1(j_1+1) hbar^2 and j_2(j_2+1) hbar^2, respectively, where j_1 and j_2 are either integers, or half-integers. We also know that the eigenvalues of J_{1z} and J_{2z} take the form m_1 hbar and m_2 hbar, respectively, where m_1 and m_2 are numbers lying in the ranges j_1, j_1-1, ..., -j_1+1, -j_1 and j_2, j_2-1, ..., -j_2+1, -j_2, respectively.

Let us define the total angular momentum operator

J = J_1 + J_2.   (5.254)

Now J is an Hermitian operator, since it is the sum of Hermitian operators. Moreover, it follows from Eqs. (5.251)-(5.253) that J satisfies the fundamental commutation relation

J \times J = i\,\hbar\,J.   (5.255)

Thus, J possesses all of the expected properties of an angular momentum operator. It follows that the eigenvalue of J^2 can be written j(j+1) hbar^2, where j is an integer, or a half-integer, and that the eigenvalue of J_z takes the form m hbar, where m lies in the range j, j-1, ..., -j+1, -j. At this stage, we do not know the relationship between the quantum numbers of the total angular momentum, j and m, and those of the individual angular momenta, j_1, j_2, m_1, and m_2.

Now

J^2 = J_1^2 + J_2^2 + 2\,J_1\cdot J_2.   (5.256)

Furthermore, we know that

[J_1^2, J_{1i}] = 0,   (5.257)

[J_2^2, J_{2i}] = 0,   (5.258)

and also that all of the J_{1i} operators commute with the J_{2i} operators. It follows from Eq. (5.256) that

[J^2, J_1^2] = [J^2, J_2^2] = 0.   (5.259)

This implies that the quantum numbers j_1, j_2, and j can all be measured simultaneously. In other words, we can know the magnitude of the total angular momentum together with the magnitudes of the component angular momenta. However, it is also clear from Eq. (5.256) that

[J^2, J_{1z}] \neq 0,   (5.260)

[J^2, J_{2z}] \neq 0.   (5.261)

This suggests that it is not possible to measure the quantum numbers m_1 and m_2 simultaneously with the quantum number j. In other words, we cannot determine the projections of the individual angular momenta along the z-axis at the same time as the magnitude of the total angular momentum.

It is clear, from the preceding discussion, that we can form two alternate groups of mutually commuting operators. The first group is J_1^2, J_2^2, J_{1z}, and J_{2z}. The second group is J_1^2, J_2^2, J^2, and J_z. These two groups of operators are incompatible with one another. We can define simultaneous eigenkets of each operator group.

The simultaneous eigenkets of J_1^2, J_2^2, J_{1z}, and J_{2z} are denoted |j_1, j_2; m_1, m_2>, where

J_1^2\,|j_1,j_2;m_1,m_2\rangle = j_1(j_1+1)\,\hbar^2\,|j_1,j_2;m_1,m_2\rangle,   (5.262)

J_2^2\,|j_1,j_2;m_1,m_2\rangle = j_2(j_2+1)\,\hbar^2\,|j_1,j_2;m_1,m_2\rangle,   (5.263)

J_{1z}\,|j_1,j_2;m_1,m_2\rangle = m_1\,\hbar\,|j_1,j_2;m_1,m_2\rangle,   (5.264)

J_{2z}\,|j_1,j_2;m_1,m_2\rangle = m_2\,\hbar\,|j_1,j_2;m_1,m_2\rangle.   (5.265)

The simultaneous eigenkets of J_1^2, J_2^2, J^2, and J_z are denoted |j_1, j_2; j, m>, where

J_1^2\,|j_1,j_2;j,m\rangle = j_1(j_1+1)\,\hbar^2\,|j_1,j_2;j,m\rangle,   (5.266)

J_2^2\,|j_1,j_2;j,m\rangle = j_2(j_2+1)\,\hbar^2\,|j_1,j_2;j,m\rangle,   (5.267)

J^2\,|j_1,j_2;j,m\rangle = j(j+1)\,\hbar^2\,|j_1,j_2;j,m\rangle,   (5.268)

J_z\,|j_1,j_2;j,m\rangle = m\,\hbar\,|j_1,j_2;j,m\rangle.   (5.269)

Each set of eigenkets is complete, mutually orthogonal (for eigenkets corresponding to different sets of eigenvalues), and has unit norms. Since the operators J_1^2 and J_2^2 are common to both operator groups, we can assume that the quantum numbers j_1 and j_2 are known. In other words, we can always determine the magnitudes of the individual angular momenta. In addition, we can either know the quantum numbers m_1 and m_2, or the quantum numbers j and m, but we cannot know both pairs of quantum numbers at the same time. The operator group J_1^2, J_2^2, J^2, and J_z is incompatible with the group J_1^2, J_2^2, J_{1z}, and J_{2z}. This means that if the system is in a simultaneous eigenstate of the former group then, in general, it is not in an eigenstate of the latter. We can write a conventional completeness relation for both sets of eigenkets:

\sum_{m_1}\sum_{m_2} |j_1,j_2;m_1,m_2\rangle\langle j_1,j_2;m_1,m_2| = 1,   (5.270)

\sum_{j}\sum_{m} |j_1,j_2;j,m\rangle\langle j_1,j_2;j,m| = 1,   (5.271)

where the right-hand sides denote the identity operator in the ket space corresponding to states of given j_1 and j_2. The summation is over all allowed values of m_1, m_2, j, and m.

In other words, if the quantum numbers j_1, j_2, j, and m are known with certainty, then a measurement of the quantum numbers m_1 and m_2 will, in general, give a range of possible values. We can use the completeness relation (5.270) to write

|j_1,j_2;j,m\rangle = \sum_{m_1}\sum_{m_2} \langle j_1,j_2;m_1,m_2|j_1,j_2;j,m\rangle\,|j_1,j_2;m_1,m_2\rangle.   (5.272)

Thus, we can write the eigenkets of the first group of operators as a weighted sum of the eigenkets of the second set. The weights, <j_1,j_2;m_1,m_2|j_1,j_2;j,m>, are called the Clebsch-Gordon coefficients. If the system is in a state where a measurement of J_1^2, J_2^2, J^2, and J_z is bound to give the results j_1(j_1+1) hbar^2, j_2(j_2+1) hbar^2, j(j+1) hbar^2, and m hbar, respectively, then a measurement of J_{1z} and J_{2z} will give the results m_1 hbar and m_2 hbar with probability |<j_1,j_2;m_1,m_2|j_1,j_2;j,m>|^2.

The Clebsch-Gordon coefficients possess a number of very important properties. First, the coefficients are zero unless

m = m_1 + m_2.   (5.273)

To prove this, we note that

(J_z - J_{1z} - J_{2z})\,|j_1,j_2;j,m\rangle = 0.   (5.274)

Forming the inner product with <j_1,j_2;m_1,m_2|, we obtain

(m - m_1 - m_2)\,\langle j_1,j_2;m_1,m_2|j_1,j_2;j,m\rangle = 0,   (5.275)

which proves the assertion. Thus, the z-components of different angular momenta add algebraically. So, an electron in an l = 1 state, with orbital angular momentum hbar and spin angular momentum hbar/2 both projected along the z-axis, constitutes a state whose total angular momentum projected along the z-axis is 3 hbar/2. What is uncertain is the magnitude of the total angular momentum.

Second, the coefficients vanish unless

|j_1 - j_2| \leq j \leq j_1 + j_2.   (5.276)

We can assume, without loss of generality, that j_1 >= j_2.

We know that, for given j_1 and j_2, the largest possible value of m is j_1 + j_2 (since j_1 is the largest possible value of m_1, and j_2 is the largest possible value of m_2). This implies that the largest possible value of j is j_1 + j_2 (since, by definition, the largest value of m is equal to j). Now, there are (2 j_1 + 1) allowable values of m_1 and (2 j_2 + 1) allowable values of m_2. Thus, there are (2 j_1 + 1)(2 j_2 + 1) independent eigenkets, |j_1,j_2;m_1,m_2>, needed to span the ket space corresponding to fixed j_1 and j_2. Since the eigenkets |j_1,j_2;j,m> span the same space, they must also form a set of (2 j_1 + 1)(2 j_2 + 1) independent kets. In other words, there can only be (2 j_1 + 1)(2 j_2 + 1) distinct allowable values of the quantum numbers j and m. For each allowed value of j, there are 2 j + 1 allowed values of m. We have already seen that the maximum allowed value of j is j_1 + j_2. It is easily seen that if the minimum allowed value of j is j_1 - j_2 then the total number of allowed values of j and m is (2 j_1 + 1)(2 j_2 + 1): i.e.,

\sum_{j=j_1-j_2}^{j_1+j_2} (2j+1) \equiv (2 j_1 + 1)(2 j_2 + 1).   (5.277)

This proves our assertion.

Third, the sum of the modulus squared of all of the Clebsch-Gordon coefficients is unity: i.e.,

\sum_{m_1}\sum_{m_2} |\langle j_1,j_2;m_1,m_2|j_1,j_2;j,m\rangle|^2 = 1.   (5.278)

This assertion is proved as follows:

\langle j_1,j_2;j,m|j_1,j_2;j,m\rangle = \sum_{m_1}\sum_{m_2} \langle j_1,j_2;j,m|j_1,j_2;m_1,m_2\rangle\,\langle j_1,j_2;m_1,m_2|j_1,j_2;j,m\rangle = \sum_{m_1}\sum_{m_2} |\langle j_1,j_2;m_1,m_2|j_1,j_2;j,m\rangle|^2 = 1,   (5.279)

where use has been made of the completeness relation (5.270).

Finally, the Clebsch-Gordon coefficients obey two recursion relations. To obtain these relations, we start from

J_\pm\,|j_1,j_2;j,m\rangle = (J_{1\pm} + J_{2\pm}) \sum_{m_1'}\sum_{m_2'} \langle j_1,j_2;m_1',m_2'|j_1,j_2;j,m\rangle\,|j_1,j_2;m_1',m_2'\rangle.   (5.280)

Making use of the well-known properties of the shift operators, which are specified by Eqs. (5.55)-(5.56), we obtain

\sqrt{j(j+1)-m(m\pm1)}\;|j_1,j_2;j,m\pm1\rangle = \sum_{m_1'}\sum_{m_2'}\Big[\sqrt{j_1(j_1+1)-m_1'(m_1'\pm1)}\;|j_1,j_2;m_1'\pm1,m_2'\rangle + \sqrt{j_2(j_2+1)-m_2'(m_2'\pm1)}\;|j_1,j_2;m_1',m_2'\pm1\rangle\Big]\,\langle j_1,j_2;m_1',m_2'|j_1,j_2;j,m\rangle.   (5.281)

Taking the inner product with <j_1,j_2;m_1,m_2|, and making use of the orthonormality property of the basis eigenkets, we obtain the desired recursion relations:

\sqrt{j(j+1)-m(m\pm1)}\;\langle j_1,j_2;m_1,m_2|j_1,j_2;j,m\pm1\rangle = \sqrt{j_1(j_1+1)-m_1(m_1\mp1)}\;\langle j_1,j_2;m_1\mp1,m_2|j_1,j_2;j,m\rangle + \sqrt{j_2(j_2+1)-m_2(m_2\mp1)}\;\langle j_1,j_2;m_1,m_2\mp1|j_1,j_2;j,m\rangle.   (5.282)

It is clear, from the absence of complex coupling coefficients in the above relations, that we can always choose the Clebsch-Gordon coefficients to be real numbers. This is a convenient choice, since it ensures that the inverse Clebsch-Gordon coefficients, <j_1,j_2;j,m|j_1,j_2;m_1,m_2>, are identical to the Clebsch-Gordon coefficients. In other words,

\langle j_1,j_2;j,m|j_1,j_2;m_1,m_2\rangle = \langle j_1,j_2;m_1,m_2|j_1,j_2;j,m\rangle.   (5.283)

The inverse Clebsch-Gordon coefficients are the weights in the expansion of the |j_1,j_2;m_1,m_2> in terms of the |j_1,j_2;j,m>:

|j_1,j_2;m_1,m_2\rangle = \sum_{j}\sum_{m} \langle j_1,j_2;j,m|j_1,j_2;m_1,m_2\rangle\,|j_1,j_2;j,m\rangle.   (5.284)

It turns out that the recursion relations (5.282), together with the normalization condition (5.278), are sufficient to completely determine the Clebsch-Gordon coefficients to within an arbitrary sign (multiplied into all of the coefficients). This sign is fixed by convention. The easiest way of demonstrating this assertion is by considering some specific examples.

Let us add the angular momentum of two spin one-half systems: e.g., two electrons at rest. So, j_1 = j_2 = 1/2. We know, from general principles, that |m_1| <= 1/2 and |m_2| <= 1/2. We also know, from Eq. (5.276), that 0 <= j <= 1, where the allowed values of j differ by integer amounts. It follows that either j = 0 or j = 1. Thus, two spin one-half systems can be combined to form either a spin zero system or a spin one system. It is helpful to arrange all of the possibly non-zero Clebsch-Gordon coefficients in a table (with j_1 = j_2 = 1/2):

      m_1    m_2   |  j=1,m=1   j=1,m=0   j=1,m=-1   j=0,m=0
     --------------+------------------------------------------
      1/2    1/2   |     ?          ?          ?          ?
      1/2   -1/2   |     ?          ?          ?          ?
     -1/2    1/2   |     ?          ?          ?          ?
     -1/2   -1/2   |     ?          ?          ?          ?

The box in this table corresponding to m_1 = 1/2, m_2 = 1/2, j = 1, m = 1 gives the Clebsch-Gordon coefficient <1/2,1/2;1/2,1/2|1/2,1/2;1,1>, or the inverse Clebsch-Gordon coefficient <1/2,1/2;1,1|1/2,1/2;1/2,1/2>. All the boxes contain question marks because we do not know any Clebsch-Gordon coefficients at the moment.

A Clebsch-Gordon coefficient is automatically zero unless m_1 + m_2 = m. In other words, the z-components of angular momentum have to add algebraically. Many of the boxes in the above table correspond to m_1 + m_2 \neq m. We immediately conclude that these boxes must contain zeroes: i.e.,

      m_1    m_2   |  j=1,m=1   j=1,m=0   j=1,m=-1   j=0,m=0
     --------------+------------------------------------------
      1/2    1/2   |     ?          0          0          0
      1/2   -1/2   |     0          ?          0          ?
     -1/2    1/2   |     0          ?          0          ?
     -1/2   -1/2   |     0          0          ?          0

The normalization condition (5.278) implies that the sum of the squares of all the rows and columns of the above table must be unity. There are two rows and two columns which only contain a single non-zero entry. We conclude that these entries must be +/-1, but we have no way of determining the signs at present. Thus,

      m_1    m_2   |  j=1,m=1   j=1,m=0   j=1,m=-1   j=0,m=0
     --------------+------------------------------------------
      1/2    1/2   |    +/-1        0          0          0
      1/2   -1/2   |     0          ?          0          ?
     -1/2    1/2   |     0          ?          0          ?
     -1/2   -1/2   |     0          0        +/-1         0

Let us evaluate the recursion relation (5.282) for j_1 = j_2 = 1/2, with j = 1, m = 0, m_1 = m_2 = +/-1/2, taking the upper/lower sign. We find that

\langle 1/2,-1/2|1,0\rangle + \langle -1/2,1/2|1,0\rangle = \sqrt{2}\,\langle 1/2,1/2|1,1\rangle = \pm\sqrt{2},   (5.285)

\langle 1/2,-1/2|1,0\rangle + \langle -1/2,1/2|1,0\rangle = \sqrt{2}\,\langle -1/2,-1/2|1,-1\rangle = \pm\sqrt{2}.   (5.286)

Here, the j_1 and j_2 labels have been suppressed for ease of notation. We also know that

\langle 1/2,-1/2|1,0\rangle^2 + \langle -1/2,1/2|1,0\rangle^2 = 1,   (5.287)

from the normalization condition. The only real solutions to the above set of equations are

\sqrt{2}\,\langle 1/2,-1/2|1,0\rangle = \sqrt{2}\,\langle -1/2,1/2|1,0\rangle = \langle 1/2,1/2|1,1\rangle = \langle -1/2,-1/2|1,-1\rangle = \pm 1.   (5.288)

The choice of sign is arbitrary; the conventional choice is a positive sign.

Thus, our table now reads

      m_1    m_2   |  j=1,m=1   j=1,m=0   j=1,m=-1   j=0,m=0
     --------------+------------------------------------------
      1/2    1/2   |     1          0          0          0
      1/2   -1/2   |     0        1/sqrt2      0          ?
     -1/2    1/2   |     0        1/sqrt2      0          ?
     -1/2   -1/2   |     0          0          1          0

We could fill in the remaining unknown entries of our table by using the recursion relation again. However, an easier method is to observe that the rows and columns of the table must all be mutually orthogonal. That is, the dot product of a row with any other row must be zero, and likewise for the dot product of a column with any other column. This follows because the entries in the table give the expansion coefficients of one of our alternative sets of eigenkets in terms of the other set, and each set of eigenkets contains mutually orthogonal vectors with unit norms. The normalization condition tells us that the dot product of a row or column with itself must be unity. The only way that the dot product of the fourth column with the second column can be zero is if the unknown entries are equal and opposite. The requirement that the dot product of the fourth column with itself is unity tells us that the magnitudes of the unknown entries have to be 1/sqrt(2). The unknown entries are undetermined to an arbitrary sign multiplied into them both. Thus, the final form of our table (with the conventional choice of arbitrary signs) is

      m_1    m_2   |  j=1,m=1   j=1,m=0   j=1,m=-1   j=0,m=0
     --------------+------------------------------------------
      1/2    1/2   |     1          0          0          0
      1/2   -1/2   |     0        1/sqrt2      0        1/sqrt2
     -1/2    1/2   |     0        1/sqrt2      0       -1/sqrt2
     -1/2   -1/2   |     0          0          1          0

The table can be read in one of two ways.

The columns give the expansions of the eigenstates of overall angular momentum in terms of the eigenstates of the individual angular momenta of the two component systems. Thus, the second column tells us that

|1,0\rangle = \frac{1}{\sqrt{2}}\big(|1/2,-1/2\rangle + |-1/2,1/2\rangle\big).   (5.289)

The ket on the left-hand side is a |j,m> ket, whereas those on the right-hand side are |m_1,m_2> kets. The rows give the expansions of the eigenstates of individual angular momentum in terms of those of overall angular momentum. Thus, the second row tells us that

|1/2,-1/2\rangle = \frac{1}{\sqrt{2}}\big(|1,0\rangle + |0,0\rangle\big).   (5.290)

Here, the ket on the left-hand side is a |m_1,m_2> ket, whereas those on the right-hand side are |j,m> kets.

Note that our table is really a combination of two sub-tables, one involving j = 0 states, and one involving j = 1 states. The Clebsch-Gordon coefficients corresponding to two different choices of j are completely independent: i.e., there is no recursion relation linking Clebsch-Gordon coefficients corresponding to different values of j. Thus, for every choice of j_1, j_2, and j we can construct a table of Clebsch-Gordon coefficients corresponding to the different allowed values of m_1, m_2, and m (subject to the constraint that m_1 + m_2 = m). A complete knowledge of angular momentum addition is equivalent to a knowledge of all possible tables of Clebsch-Gordon coefficients. These tables are listed (for moderate values of j_1, j_2 and j) in many standard reference books.
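For reference, tables such as the one constructed above can also be generated with a computer-algebra package. A minimal sketch using SymPy's Clebsch-Gordan routine (assuming SymPy is available, and that its Condon-Shortley phase convention coincides with the sign convention adopted here):

    from sympy import S
    from sympy.physics.quantum.cg import CG

    half = S(1)/2
    rows = [(half, half), (half, -half), (-half, half), (-half, -half)]
    for (j, m) in [(1, 1), (1, 0), (1, -1), (0, 0)]:        # columns of the table
        col = [CG(half, m1, half, m2, j, m).doit() for (m1, m2) in rows]
        print(j, m, col)

The printed columns reproduce the entries of the final table, including the relative minus sign between the two j = 0 components.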

6 Approximation methods

6.1 Introduction

We have developed techniques by which the general energy eigenvalue problem can be reduced to a set of coupled partial differential equations involving various wave-functions. Unfortunately, the number of such problems which yield exactly soluble equations is comparatively small. Clearly, we need to develop some techniques for finding approximate solutions to otherwise intractable problems.

Consider the following problem, which is very common. The Hamiltonian of a system is written

H = H_0 + H_1.   (6.1)

Here, H_0 is a simple Hamiltonian for which we know the exact eigenvalues and eigenstates. H_1 introduces some interesting additional physics into the problem, but it is sufficiently complicated that when we add it to H_0 we can no longer find the exact energy eigenvalues and eigenstates. However, H_1 can, in some sense (which we shall specify more exactly later on), be regarded as being small compared to H_0. Can we find the approximate eigenvalues and eigenstates of the modified Hamiltonian, H_0 + H_1, by performing some sort of perturbation analysis about the eigenvalues and eigenstates of the original Hamiltonian, H_0? Let us investigate.

6.2 The two-state system

Let us begin by considering time-independent perturbation theory, in which the modification to the Hamiltonian, H_1, has no explicit dependence on time. It is usually assumed that the unperturbed Hamiltonian, H_0, is also time-independent.

Consider the simplest non-trivial system, in which there are only two independent eigenkets of the unperturbed Hamiltonian. These are denoted

H_0\,|1\rangle = E_1\,|1\rangle,   (6.2)

H_0\,|2\rangle = E_2\,|2\rangle.   (6.3)

It is assumed that these states, and their associated eigenvalues, are known. Since H_0 is, by definition, an Hermitian operator, its two eigenkets are orthonormal and form a complete set. The lengths of these eigenkets are both normalized to unity. Let us now try to solve the modified energy eigenvalue problem

(H_0 + H_1)\,|E\rangle = E\,|E\rangle.   (6.4)

In fact, we can solve this problem exactly. Since the eigenkets of H_0 form a complete set, we can write

|E\rangle = \langle 1|E\rangle\,|1\rangle + \langle 2|E\rangle\,|2\rangle.   (6.5)

Left-multiplication of Eq. (6.4) by <1| and <2| yields two coupled equations, which can be written in matrix form:

\begin{pmatrix} E_1 - E + e_{11} & e_{12} \\ e_{12}^{\,*} & E_2 - E + e_{22} \end{pmatrix}\begin{pmatrix} \langle 1|E\rangle \\ \langle 2|E\rangle \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.   (6.6)

Here,

e_{11} = \langle 1|H_1|1\rangle,   (6.7)

e_{22} = \langle 2|H_1|2\rangle,   (6.8)

e_{12} = \langle 1|H_1|2\rangle.   (6.9)

In the special (but common) case of a perturbing Hamiltonian whose diagonal matrix elements (in the unperturbed eigenstates) are zero, so that

e_{11} = e_{22} = 0,   (6.10)

the solution of Eq. (6.6) (obtained by setting the determinant of the matrix equal to zero) is

E = \frac{(E_1 + E_2) \pm \sqrt{(E_1 - E_2)^2 + 4\,|e_{12}|^2}}{2}.   (6.11)

Let us expand in the supposedly small parameter

\epsilon = \frac{|e_{12}|}{|E_1 - E_2|}.   (6.12)

We obtain

E = \frac{1}{2}(E_1 + E_2) \pm \frac{1}{2}(E_1 - E_2)(1 + 2\,\epsilon^2 + \cdots).   (6.13)

The above expression yields the modifications to the energy eigenvalues due to the perturbing Hamiltonian:

E_1' = E_1 + \frac{|e_{12}|^2}{E_1 - E_2} + \cdots,   (6.14)

E_2' = E_2 - \frac{|e_{12}|^2}{E_1 - E_2} + \cdots.   (6.15)

Note that H_1 causes the upper eigenvalue to rise, and the lower eigenvalue to fall. It is easily demonstrated that the modified eigenkets take the form

|1\rangle' = |1\rangle + \frac{e_{12}^{\,*}}{E_1 - E_2}\,|2\rangle + \cdots,   (6.16)

|2\rangle' = |2\rangle - \frac{e_{12}}{E_1 - E_2}\,|1\rangle + \cdots.   (6.17)

Thus, the modified energy eigenstates consist of one of the unperturbed eigenstates with a slight admixture of the other. Note that the series expansion in Eq. (6.13) only converges if 2|epsilon| < 1. This suggests that the condition for the validity of the perturbation expansion is

|e_{12}| < \frac{|E_1 - E_2|}{2}.   (6.18)

In other words, when we say that H_1 needs to be small compared to H_0, what we really mean is that the above inequality needs to be satisfied.
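The two-state formulas are easily tested numerically. A minimal sketch (assuming NumPy; the numerical values are arbitrary illustrative choices) compares the exact eigenvalues of the 2x2 Hamiltonian with the expansions (6.14)-(6.15):

    import numpy as np

    E1, E2, e12 = 1.0, 2.0, 0.05 + 0.02j             # unperturbed levels and coupling
    H = np.array([[E1, e12], [np.conj(e12), E2]])     # H0 + H1, with e11 = e22 = 0

    exact = np.linalg.eigvalsh(H)                     # exact eigenvalues (ascending)
    pert  = np.array([E1 + abs(e12)**2/(E1 - E2),     # Eq. (6.14)
                      E2 - abs(e12)**2/(E1 - E2)])    # Eq. (6.15)
    print(exact)
    print(np.sort(pert))   # agrees to higher order when |e12| << |E1 - E2|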

6.3 Non-degenerate perturbation theory

Let us now generalize our perturbation analysis to deal with systems possessing more than two energy eigenstates. The energy eigenstates of the unperturbed Hamiltonian, H_0, are denoted

H_0\,|n\rangle = E_n\,|n\rangle,   (6.19)

where n runs from 1 to N. The eigenkets |n> are orthonormal, form a complete set, and have their lengths normalized to unity. Let us now try to solve the energy eigenvalue problem for the perturbed Hamiltonian:

(H_0 + H_1)\,|E\rangle = E\,|E\rangle.   (6.20)

We can express |E> as a linear superposition of the unperturbed energy eigenkets,

|E\rangle = \sum_k \langle k|E\rangle\,|k\rangle,   (6.21)

where the summation is from k = 1 to N. Substituting the above equation into Eq. (6.20), and left-multiplying by <m|, we obtain

(E_m + e_{mm} - E)\,\langle m|E\rangle + \sum_{k\neq m} e_{mk}\,\langle k|E\rangle = 0,   (6.22)

where

e_{mk} = \langle m|H_1|k\rangle.   (6.23)

Let us now develop our perturbation expansion. We assume that

\frac{|e_{mk}|}{E_m - E_k} \sim O(\epsilon) \quad \mbox{for all } m \neq k,   (6.24)

where epsilon << 1 is our expansion parameter. We also assume that

\frac{|e_{mm}|}{E_m} \sim O(\epsilon)   (6.25)

for all m. Let us search for a modified version of the nth unperturbed energy eigenstate, for which

E = E_n + O(\epsilon),   (6.26)

and

\langle n|E\rangle = 1,   (6.27)

\langle m|E\rangle \sim O(\epsilon)   (6.28)

for m != n. Suppose that we write out Eq. (6.22) for m != n, neglecting terms which are O(epsilon^2) according to our expansion scheme. We find that

(E_m - E_n)\,\langle m|E\rangle + e_{mn} \simeq 0,   (6.29)

giving

\langle m|E\rangle = \frac{e_{mn}}{E_n - E_m} + O(\epsilon^2).   (6.30)

Substituting the above expression into Eq. (6.22), evaluated for m = n, and neglecting O(epsilon^3) terms, we obtain

(E_n + e_{nn} - E) - \sum_{k\neq n} \frac{|e_{nk}|^2}{E_k - E_n} = 0.   (6.31)

Thus, the modified nth energy eigenstate possesses an eigenvalue

E_n' = E_n + e_{nn} + \sum_{k\neq n} \frac{|e_{nk}|^2}{E_n - E_k} + O(\epsilon^3),   (6.32)

and an eigenket

|n\rangle' = |n\rangle + \sum_{k\neq n} \frac{e_{kn}}{E_n - E_k}\,|k\rangle + O(\epsilon^2).   (6.33)

Note that

\langle m|n\rangle' = \delta_{mn} + \frac{e_{mn}^{\,*}}{E_m - E_n} + \frac{e_{nm}}{E_n - E_m} + O(\epsilon^2) = \delta_{mn} + O(\epsilon^2).   (6.34)

Thus, the modified eigenkets remain orthonormal and properly normalized to O(epsilon^2).
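The same comparison as for the two-state system can be made for a larger system. The sketch below (assuming NumPy; the perturbation is a randomly generated small Hermitian matrix, purely for illustration) checks the second-order formula (6.32) against exact diagonalization:

    import numpy as np

    rng = np.random.default_rng(1)
    N = 6
    E0 = np.arange(1.0, N + 1)                         # non-degenerate unperturbed levels
    G = rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N))
    V = 0.01*(G + G.conj().T)                          # small Hermitian perturbation

    exact = np.linalg.eigvalsh(np.diag(E0) + V)
    second = np.array([E0[n] + V[n, n].real
                       + sum(abs(V[n, k])**2/(E0[n] - E0[k])
                             for k in range(N) if k != n)
                       for n in range(N)])             # Eq. (6.32)
    print(np.max(np.abs(exact - np.sort(second))))     # residual is of third order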

6.4 The quadratic Stark effect

Suppose that a one-electron atom [i.e., either a hydrogen atom, or an alkali metal atom (which possesses one valence electron orbiting outside a closed, spherically symmetric shell)] is subjected to a uniform electric field in the positive z-direction. The Hamiltonian of the system can be split into two parts: the unperturbed Hamiltonian,

H_0 = \frac{p^2}{2 m_e} + V(r),   (6.35)

and the perturbing Hamiltonian,

H_1 = e\,|E|\,z.   (6.36)

It is assumed that the unperturbed energy eigenvalues and eigenstates are completely known. The electron spin is irrelevant in this problem (since the spin operators all commute with H_1), so we can ignore the spin degrees of freedom of the system. This implies that the system possesses no degenerate energy eigenvalues. Actually, this is not true for the n > 1 energy levels of the hydrogen atom, due to the special properties of a pure Coulomb potential. It is necessary to deal with this case separately, because the perturbation theory presented in Sect. 6.3 breaks down for degenerate unperturbed energy levels.

An energy eigenket of the unperturbed Hamiltonian is characterized by three quantum numbers: the radial quantum number n, and the two angular quantum numbers l and m (see Sect. 5.6). Let us denote such a ket |n,l,m>, and let its energy level be E_{nlm}. According to Eq. (6.32), the change in this energy level induced by a small electric field is given by

\Delta E_{nlm} = e\,|E|\,\langle n,l,m|z|n,l,m\rangle + e^2|E|^2 \sum_{n',l',m'\neq n,l,m} \frac{|\langle n,l,m|z|n',l',m'\rangle|^2}{E_{nlm} - E_{n'l'm'}}.   (6.37)

Now, since

L_z = x\,p_y - y\,p_x,   (6.38)

it follows that

[L_z, z] = 0.   (6.39)

Thus,

\langle n,l,m|[L_z, z]|n',l',m'\rangle = 0,   (6.40)

giving

(m - m')\,\langle n,l,m|z|n',l',m'\rangle = 0,   (6.41)

since |n,l,m> is, by definition, an eigenstate of L_z with eigenvalue m hbar. It is clear, from the above relation, that the matrix element <n,l,m|z|n',l',m'> is zero unless m' = m. This is termed the selection rule for the quantum number m.

Let us now determine the selection rule for l.

We have

[L^2, z] = [L_x^2, z] + [L_y^2, z] = L_x[L_x,z] + [L_x,z]L_x + L_y[L_y,z] + [L_y,z]L_y = i\hbar\,(-L_x y - y L_x + L_y x + x L_y) = 2\,i\hbar\,(L_y x - L_x y + i\hbar\,z) = 2\,i\hbar\,(L_y x - y L_x) = 2\,i\hbar\,(x L_y - L_x y),   (6.42)

where use has been made of Eqs. (5.1)-(5.6). Similarly,

[L^2, x] = 2\,i\hbar\,(y L_z - L_y z),   (6.43)

[L^2, y] = 2\,i\hbar\,(L_x z - x L_z).   (6.44)

Hence, we obtain

[L^2, [L^2, z]] = 2\,i\hbar\,\big( L_y\,[L^2,x] - L_x\,[L^2,y] + i\hbar\,[L^2,z] \big) = -4\hbar^2 L_y(y L_z - L_y z) + 4\hbar^2 L_x(L_x z - x L_z) - 2\hbar^2(L^2 z - z L^2).   (6.45)

This reduces to

[L^2, [L^2, z]] = -\hbar^2\big[\,4\,(L_x x + L_y y + L_z z)\,L_z - 4\,(L_x^2 + L_y^2 + L_z^2)\,z + 2\,(L^2 z - z L^2)\,\big].   (6.46)

However, it is clear from Eqs. (5.1)-(5.3) that

L_x x + L_y y + L_z z = 0.   (6.47)

Hence, we obtain

[L^2, [L^2, z]] = 2\,\hbar^2\,(L^2 z + z L^2).   (6.48)

Finally, the above expression expands to give

L^4 z - 2\,L^2 z L^2 + z L^4 - 2\,\hbar^2\,(L^2 z + z L^2) = 0.   (6.49)

Equation (6.49) implies that

\langle n,l,m|\,L^4 z - 2\,L^2 z L^2 + z L^4 - 2\,\hbar^2(L^2 z + z L^2)\,|n',l',m'\rangle = 0.   (6.50)

This expression yields

\big[\,l^2(l+1)^2 - 2\,l(l+1)\,l'(l'+1) + l'^2(l'+1)^2 - 2\,l(l+1) - 2\,l'(l'+1)\,\big]\,\langle n,l,m|z|n',l',m'\rangle = 0,   (6.51)

which reduces to

(l + l' + 2)(l + l')(l - l' + 1)(l - l' - 1)\,\langle n,l,m|z|n',l',m'\rangle = 0.   (6.52)

According to the above formula, the matrix element <n,l,m|z|n',l',m'> vanishes unless l = l' = 0 or l' = l +/- 1. This matrix element can be written

\langle n,l,m|z|n',l',m'\rangle = \int \psi^{*}_{nlm}(r',\theta',\varphi')\; r'\cos\theta'\; \psi_{n'l'm'}(r',\theta',\varphi')\,dV',   (6.53)

where psi_{nlm}(r') = <r'|n,l,m>. Recall, however, that the wave-function of an l = 0 state is spherically symmetric (see Sect. 5.3): i.e., psi_{n00}(r') = psi_{n00}(r'). It follows from Eq. (6.53) that the matrix element vanishes by symmetry when l = l' = 0. In conclusion, the matrix element <n,l,m|z|n',l',m'> is zero unless l' = l +/- 1. This is the selection rule for the quantum number l.
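The selection rules can also be seen directly in the angular part of the integral (6.53). A rough numerical sketch (assuming SciPy; note that scipy.special.sph_harm takes the azimuthal angle before the polar angle, and the crude quadrature used here is only intended to distinguish vanishing from non-vanishing elements):

    import numpy as np
    from scipy.special import sph_harm

    def z_angular(l, m, lp, mp, n=400):
        # angular factor of <n,l,m|z|n',l',m'>: integral of Y_l^m* cos(theta) Y_l'^m'
        theta = np.linspace(0, np.pi, n)                    # polar angle
        phi = np.linspace(0, 2*np.pi, 2*n, endpoint=False)  # azimuthal angle
        T, P = np.meshgrid(theta, phi, indexing="ij")
        f = (np.conj(sph_harm(m, l, P, T)) * np.cos(T)
             * sph_harm(mp, lp, P, T) * np.sin(T))
        return f.sum() * (theta[1] - theta[0]) * (phi[1] - phi[0])

    for lp in range(0, 4):
        print(lp, abs(z_angular(1, 0, lp, 0)))   # appreciable only for lp = 0 and lp = 2

Only the l' = l +/- 1 entries (and the special l = l' = 0 case excluded above) survive; changing mp away from m makes every entry vanish, in accordance with the m selection rule.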

Application of the selection rules to Eq. (6.37) yields

\Delta E_{nlm} = e^2|E|^2 \sum_{n'}\sum_{l'=l\pm1} \frac{|\langle n,l,m|z|n',l',m\rangle|^2}{E_{nlm} - E_{n'l'm}}.   (6.54)

Note that all of the terms in Eq. (6.37) which vary linearly with the electric field-strength vanish by symmetry, according to the selection rules. Only those terms which vary quadratically with the field-strength survive. The polarizability of an atom is defined in terms of the energy-shift of the atomic state as follows:

\Delta E = -\frac{1}{2}\,\alpha\,|E|^2.   (6.55)

Consider the ground state of a hydrogen atom. (Recall that we cannot address the n > 1 excited states because they are degenerate, and our theory cannot handle this at present.) The polarizability of this state is given by

\alpha = 2\,e^2 \sum_{n>1} \frac{|\langle 1,0,0|z|n,1,0\rangle|^2}{E_{n00} - E_{100}}.   (6.56)

Here, we have made use of the fact that E_{n10} = E_{n00} for a hydrogen atom.

The sum in the above expression can be evaluated approximately by noting that [see Eq. (5.120)]

E_{n00} = -\frac{e^2}{8\pi\epsilon_0\,a_0\,n^2}   (6.57)

for a hydrogen atom, where

a_0 = \frac{4\pi\epsilon_0\,\hbar^2}{\mu\,e^2} = 5.3\times10^{-11}\ \mbox{meters}   (6.58)

is the Bohr radius. We can write

E_{n00} - E_{100} \geq E_{200} - E_{100} = \frac{3}{4}\,\frac{e^2}{8\pi\epsilon_0\,a_0}.   (6.59)

Thus,

\alpha < \frac{16}{3}\,4\pi\epsilon_0\,a_0 \sum_{n>1} |\langle 1,0,0|z|n,1,0\rangle|^2.   (6.60)

However,

\sum_{n>1} |\langle 1,0,0|z|n,1,0\rangle|^2 = \sum_{n',l',m'} \langle 1,0,0|z|n',l',m'\rangle\,\langle n',l',m'|z|1,0,0\rangle = \langle 1,0,0|z^2|1,0,0\rangle,   (6.61)

where we have made use of the fact that the wave-functions of a hydrogen atom form a complete set. It is easily demonstrated from the actual form of the ground state wave-function that

\langle 1,0,0|z^2|1,0,0\rangle = a_0^{\,2}.   (6.62)

Thus, we conclude that

\alpha < \frac{16}{3}\,4\pi\epsilon_0\,a_0^{\,3} \simeq 5.3\;(4\pi\epsilon_0\,a_0^{\,3}).   (6.63)

The true result is

\alpha = \frac{9}{2}\,4\pi\epsilon_0\,a_0^{\,3} = 4.5\;(4\pi\epsilon_0\,a_0^{\,3}).   (6.64)

It is actually possible to obtain this answer, without recourse to perturbation theory, by solving Schrödinger's equation exactly in parabolic coordinates.
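The intermediate result (6.62) can be confirmed symbolically. A minimal sketch, assuming SymPy is available:

    import sympy as sp

    r, theta, a0 = sp.symbols("r theta a0", positive=True)
    psi100 = sp.exp(-r/a0)/sp.sqrt(sp.pi*a0**3)            # hydrogen ground state
    z2 = (r*sp.cos(theta))**2
    expect = 2*sp.pi*sp.integrate(
        sp.integrate(psi100**2 * z2 * r**2 * sp.sin(theta), (theta, 0, sp.pi)),
        (r, 0, sp.oo))                                     # azimuthal integral = 2*pi
    print(sp.simplify(expect))                             # a0**2, as in Eq. (6.62)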

6.5 Degenerate perturbation theory

Let us now consider systems in which the eigenstates of the unperturbed Hamiltonian, H_0, possess degenerate energy levels. It is always possible to represent degenerate energy eigenstates as the simultaneous eigenstates of the Hamiltonian and some other Hermitian operator (or group of operators). Let us denote this operator (or group of operators) L. We can write

H_0\,|n,l\rangle = E_n\,|n,l\rangle,   (6.65)

and

L\,|n,l\rangle = L_{nl}\,|n,l\rangle,   (6.66)

where [H_0, L] = 0. Here, the E_n and the L_{nl} are real numbers which depend on the quantum numbers n, and n and l, respectively. It is always possible to find a sufficient number of operators which commute with the Hamiltonian in order to ensure that the L_{nl} are all different. In other words, we can choose L such that the quantum numbers n and l uniquely specify each eigenstate. Suppose that for each value of n there are N_n different values of l: i.e., the nth energy eigenstate is N_n-fold degenerate.

In general, L does not commute with the perturbing Hamiltonian, H_1. This implies that the modified energy eigenstates are not eigenstates of L. In this situation, we expect the perturbation to split the degeneracy of the energy levels, so that each modified eigenstate |n,l>' acquires a unique energy eigenvalue E_{nl}. Let us naively attempt to use the standard perturbation theory of Sect. 6.3 to evaluate the modified energy eigenstates and energy levels. A direct generalization of Eqs. (6.32) and (6.33) yields

E_{nl} = E_n + e_{nlnl} + \sum_{n'\neq n,\,l'} \frac{|e_{n'l'nl}|^2}{E_n - E_{n'}} + O(\epsilon^3),   (6.67)

and

|n,l\rangle' = |n,l\rangle + \sum_{n'\neq n,\,l'} \frac{e_{n'l'nl}}{E_n - E_{n'}}\,|n',l'\rangle + O(\epsilon^2),   (6.68)

where

e_{n'l'nl} = \langle n',l'|H_1|n,l\rangle.   (6.69)

It is fairly obvious that the summations in Eqs. (6.67) and (6.68) are not well-behaved if the nth energy level is degenerate. The problem terms are those involving unperturbed eigenstates labeled by the same value of n, but different values of l: i.e., those states whose unperturbed energies are E_n. These terms give rise to singular factors 1/(E_n - E_n) in the summations. Note, however, that this problem would not exist if the matrix elements, e_{nl'nl}, of the perturbing Hamiltonian between distinct, degenerate, unperturbed energy eigenstates corresponding to the eigenvalue E_n were zero. In other words, if

\langle n,l'|H_1|n,l\rangle = \lambda_{nl}\,\delta_{ll'},   (6.70)

then all of the singular terms in Eqs. (6.67) and (6.68) would vanish.

In general, Eq. (6.70) is not satisfied. Fortunately, we can always redefine the unperturbed energy eigenstates belonging to the eigenvalue E_n in such a manner that Eq. (6.70) is satisfied. Let us define N_n new states which are linear combinations of the N_n original degenerate eigenstates corresponding to the eigenvalue E_n:

|n,l^{(1)}\rangle = \sum_{k=1}^{N_n} \langle n,k|n,l^{(1)}\rangle\,|n,k\rangle.   (6.71)

Note that these new states are also degenerate energy eigenstates of the unperturbed Hamiltonian corresponding to the eigenvalue E_n. The |n,l^{(1)}> are chosen in such a manner that they are eigenstates of the perturbing Hamiltonian, H_1. Thus,

H_1\,|n,l^{(1)}\rangle = \lambda_{nl}\,|n,l^{(1)}\rangle.   (6.72)

The |n,l^{(1)}> are also chosen so that they are orthonormal, and have unit lengths. It follows that

\langle n,l'^{(1)}|H_1|n,l^{(1)}\rangle = \lambda_{nl}\,\delta_{ll'}.   (6.73)

Thus, if we use the new eigenstates, instead of the old ones, then we can employ Eqs. (6.67) and (6.68) directly, since all of the singular terms vanish. The only remaining difficulty is to determine the new eigenstates in terms of the original ones.

Now,

\sum_{l=1}^{N_n} |n,l\rangle\langle n,l| = 1,   (6.74)

where 1 denotes the identity operator in the sub-space of all unperturbed energy eigenkets corresponding to the eigenvalue E_n. Using this completeness relation, the operator eigenvalue equation (6.72) can be transformed into a straightforward matrix eigenvalue equation:

\sum_{l'=1}^{N_n} \langle n,l''|H_1|n,l'\rangle\,\langle n,l'|n,l^{(1)}\rangle = \lambda_{nl}\,\langle n,l''|n,l^{(1)}\rangle.   (6.75)

This can be written more transparently as

U\,x = \lambda\,x,   (6.76)

where the elements of the N_n x N_n Hermitian matrix U are

U_{jk} = \langle n,j|H_1|n,k\rangle.   (6.77)

Provided that the determinant of U is non-zero, Eq. (6.76) can always be solved to give N_n eigenvalues lambda_{nl} (for l = 1 to N_n), with N_n corresponding eigenvectors x_{nl}. The eigenvectors specify the weights of the new eigenstates in terms of the original eigenstates: i.e.,

(x_{nl})_k = \langle n,k|n,l^{(1)}\rangle,   (6.78)

for k = 1 to N_n. In our new scheme, Eqs. (6.67) and (6.68) yield

E_{nl} = E_n + \lambda_{nl} + \sum_{n'\neq n,\,l'} \frac{|e_{n'l'nl}|^2}{E_n - E_{n'}} + O(\epsilon^3),   (6.79)

and

|n,l^{(1)}\rangle' = |n,l^{(1)}\rangle + \sum_{n'\neq n,\,l'} \frac{e_{n'l'nl}}{E_n - E_{n'}}\,|n',l'\rangle + O(\epsilon^2).   (6.80)

There are no singular terms in these expressions, since the summations are over n' != n: i.e., they specifically exclude the problematic, degenerate, unperturbed energy eigenstates corresponding to the eigenvalue E_n. Note that the first-order energy shifts are equivalent to the eigenvalues of the matrix equation (6.76).

6.6 The linear Stark effect

Let us examine the effect of an electric field on the excited energy levels of a hydrogen atom. For instance, consider the n = 2 states. There is a single l = 0 state, usually referred to as 2s, and three l = 1 states (with m = -1, 0, 1), usually referred to as 2p. All of these states possess the same energy, E_{200} = -e^2/(32 pi epsilon_0 a_0). As in Sect. 6.4, the perturbing Hamiltonian is

H_1 = e\,|E|\,z.   (6.81)

In order to apply perturbation theory, we have to solve the matrix eigenvalue equation

U\,x = \lambda\,x,   (6.82)

where U is the array of the matrix elements of H_1 between the degenerate 2s and 2p states. Thus,

U = e\,|E|\begin{pmatrix} 0 & \langle 2,0,0|z|2,1,0\rangle & 0 & 0 \\ \langle 2,1,0|z|2,0,0\rangle & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},   (6.83)

where the rows and columns correspond to the |2,0,0>, |2,1,0>, |2,1,1>, and |2,1,-1> states, respectively. Here, we have made use of the selection rules, which tell us that the matrix element of z between two hydrogen atom states is zero unless the states possess the same m quantum number, and l quantum numbers which differ by unity. It is easily demonstrated, from the exact forms of the 2s and 2p wave-functions, that

\langle 2,0,0|z|2,1,0\rangle = \langle 2,1,0|z|2,0,0\rangle = 3\,a_0.   (6.84)

It can be seen, by inspection, that the eigenvalues of U are lambda_1 = 3 e a_0 |E|, lambda_2 = -3 e a_0 |E|, lambda_3 = 0, and lambda_4 = 0. The corresponding eigenvectors are

x_1 = \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \\ 0 \end{pmatrix},   (6.85)

x_2 = \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \\ 0 \\ 0 \end{pmatrix},   (6.86)

x_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix},   (6.87)

x_4 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}.   (6.88)

It follows from Sect. 6.5 that the simultaneous eigenstates of the unperturbed Hamiltonian and the perturbing Hamiltonian take the form

|1\rangle = \frac{|2,0,0\rangle + |2,1,0\rangle}{\sqrt{2}},   (6.89)

|2\rangle = \frac{|2,0,0\rangle - |2,1,0\rangle}{\sqrt{2}},   (6.90)

|3\rangle = |2,1,1\rangle,   (6.91)

|4\rangle = |2,1,-1\rangle.   (6.92)

In the absence of an electric field, all of these states possess the same energy, E_{200}. The first-order energy shifts induced by an electric field are given by

\Delta E_1 = +3\,e\,a_0\,|E|,   (6.93)

\Delta E_2 = -3\,e\,a_0\,|E|,   (6.94)

\Delta E_3 = 0,   (6.95)

\Delta E_4 = 0.   (6.96)

Thus, the energies of states 1 and 2 are shifted upwards and downwards, respectively, by an amount 3 e a_0 |E| in the presence of an electric field. States 1 and 2 are orthogonal linear combinations of the original 2s and 2p(m = 0) states.
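The 4x4 problem above is small enough to reproduce numerically, which also illustrates the general recipe of Sect. 6.5. A minimal sketch (assuming NumPy; energies are quoted in units of e a_0 |E|):

    import numpy as np

    # H1 in the basis |2,0,0>, |2,1,0>, |2,1,1>, |2,1,-1>, Eq. (6.83) with (6.84)
    U = np.zeros((4, 4))
    U[0, 1] = U[1, 0] = 3.0

    vals, vecs = np.linalg.eigh(U)
    print(vals)                # [-3, 0, 0, 3]: the first-order shifts (6.93)-(6.96)
    print(vecs[:, [0, 3]])     # ~ (1, -/+1, 0, 0)/sqrt(2): the 2s -/+ 2p(m=0) mixtures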

Note that the energy shifts are linear in the electric field-strength, so this is a much larger effect than the quadratic effect described in Sect. 6.4. The energies of states 3 and 4 (which are equivalent to the original 2p(m = 1) and 2p(m = -1) states, respectively) are not affected to first-order. Of course, to second-order the energies of these states are shifted by an amount which depends on the square of the electric field-strength.

Note that the linear Stark effect depends crucially on the degeneracy of the 2s and 2p states. This degeneracy is a special property of a pure Coulomb potential, and, therefore, only applies to a hydrogen atom. Thus, alkali metal atoms do not exhibit the linear Stark effect.

6.7 Fine structure

Let us now consider the energy levels of hydrogen-like atoms (i.e., alkali metal atoms) in more detail. The outermost electron moves in a spherically symmetric potential V(r) due to the nuclear charge and the charges of the other electrons (which occupy spherically symmetric closed shells). The shielding effect of the inner electrons causes V(r) to depart from the pure Coulomb form. This splits the degeneracy of states characterized by the same value of n, but different values of l. In fact, higher l states have higher energies.

Let us examine a phenomenon known as fine structure, which is due to interaction between the spin and orbital angular momenta of the outermost electron. This electron experiences an electric field

E = \frac{\nabla V}{e}.   (6.97)

However, a charge moving in an electric field also experiences an effective magnetic field

B = -v \times E.   (6.98)

Now, an electron possesses a spin magnetic moment [see Eq. (5.170)]

\mu = -\frac{e\,S}{m_e}.   (6.99)

We, therefore, expect a spin-orbit contribution to the Hamiltonian of the form

H_{LS} = -\mu\cdot B = -\frac{e\,S}{m_e}\cdot\Big(v\times\frac{r}{e\,r}\frac{dV}{dr}\Big) = \frac{1}{m_e^{\,2}}\,\frac{1}{r}\frac{dV}{dr}\;L\cdot S,   (6.100)

where L = m_e r x v is the orbital angular momentum. When the above expression is compared to the observed spin-orbit interaction, it is found to be too large by a factor of two. There is a classical explanation for this, due to spin precession, which we need not go into. The correct quantum mechanical explanation requires a relativistically covariant treatment of electron dynamics (this is achieved using the so-called Dirac equation).

Let us now apply perturbation theory to a hydrogen-like atom, using H_{LS} as the perturbation (with H_{LS} taking one half of the value given above), and

H_0 = \frac{p^2}{2 m_e} + V(r)   (6.101)

as the unperturbed Hamiltonian. We have two choices for the energy eigenstates of H_0. We can adopt the simultaneous eigenstates of H_0, L^2, S^2, L_z and S_z, or the simultaneous eigenstates of H_0, L^2, S^2, J^2, and J_z, where J = L + S is the total angular momentum. Although the departure of V(r) from a pure 1/r form splits the degeneracy of same n, different l, states, those states characterized by the same values of n and l, but different values of m_l, are still degenerate. (Here, m_l, m_s, and m_j are the quantum numbers corresponding to L_z, S_z, and J_z, respectively.) Moreover, each state is doubly degenerate due to the two possible orientations of the electron spin (i.e., m_s = +/-1/2). Thus, we are still dealing with a highly degenerate system. We know, from Sect. 6.5, that the application of perturbation theory to a degenerate system is greatly simplified if the basis eigenstates of the unperturbed Hamiltonian are also eigenstates of the perturbing Hamiltonian. Now, the perturbing Hamiltonian, H_{LS}, is proportional to L.S, where

L\cdot S = \frac{J^2 - L^2 - S^2}{2}.   (6.102)

It is fairly obvious that the first group of operators (H_0, L^2, S^2, L_z and S_z) does not commute with H_{LS}, whereas the second group (H_0, L^2, S^2, J^2, and J_z) does. In fact, L.S is just a combination of operators appearing in the second group. Thus, it is advantageous to work in terms of the eigenstates of the second group of operators, rather than those of the first group.

We now need to find the simultaneous eigenstates of H_0, L^2, S^2, J^2, and J_z. This is equivalent to finding the eigenstates of the total angular momentum resulting from the addition of two angular momenta: j_1 = l, and j_2 = s = 1/2. According to Eq. (5.276), the allowed values of the total angular momentum are j = l + 1/2 and j = l - 1/2. We can write

|l+1/2, m\rangle = \cos\alpha\,|m-1/2, 1/2\rangle + \sin\alpha\,|m+1/2, -1/2\rangle,   (6.103)

|l-1/2, m\rangle = -\sin\alpha\,|m-1/2, 1/2\rangle + \cos\alpha\,|m+1/2, -1/2\rangle.   (6.104)

Here, the kets on the left-hand side are |j, m_j> kets, whereas those on the right-hand side are |m_l, m_s> kets (the j_1, j_2 labels have been dropped, for the sake of clarity). We have made use of the fact that the Clebsch-Gordon coefficients are automatically zero unless m_j = m_l + m_s. We have also made use of the fact that both the |j, m_j> and the |m_l, m_s> kets are orthonormal, and have unit lengths. We now need to determine

\cos\alpha = \langle m-1/2, 1/2|l+1/2, m\rangle,   (6.105)

where the Clebsch-Gordon coefficient is written in <m_l, m_s|j, m_j> form.

Let us now employ the recursion relation for Clebsch-Gordon coefficients, Eq. (5.282), with j_1 = l, j_2 = 1/2, j = l + 1/2, m_1 = m - 1/2, m_2 = 1/2, taking the lower sign. We obtain

\sqrt{(l+1/2)(l+3/2) - m(m+1)}\;\langle m-1/2, 1/2|l+1/2, m\rangle = \sqrt{l(l+1) - (m-1/2)(m+1/2)}\;\langle m+1/2, 1/2|l+1/2, m+1\rangle,   (6.106)

which reduces to

\langle m-1/2, 1/2|l+1/2, m\rangle = \sqrt{\frac{l+m+1/2}{l+m+3/2}}\;\langle m+1/2, 1/2|l+1/2, m+1\rangle.   (6.107)

We can use this formula to successively increase the value of m_l. For instance,

\langle m-1/2, 1/2|l+1/2, m\rangle = \sqrt{\frac{l+m+1/2}{l+m+3/2}}\,\sqrt{\frac{l+m+3/2}{l+m+5/2}}\;\langle m+3/2, 1/2|l+1/2, m+2\rangle.   (6.108)

This procedure can be continued until m_l attains its maximum possible value, l. Thus,

\langle m-1/2, 1/2|l+1/2, m\rangle = \sqrt{\frac{l+m+1/2}{2l+1}}\;\langle l, 1/2|l+1/2, l+1/2\rangle.   (6.109)

Consider the situation in which m_l and m both take their maximum values, l and 1/2, respectively. The corresponding value of m_j is l + 1/2. This value is possible when j = l + 1/2, but not when j = l - 1/2. Thus, the |m_l, m_s> ket |l, 1/2> must be equal to the |j, m_j> ket |l+1/2, l+1/2>, up to an arbitrary phase-factor. By convention, this factor is taken to be unity, giving

\langle l, 1/2|l+1/2, l+1/2\rangle = 1.   (6.110)

It follows from Eq. (6.109) that

\cos\alpha = \langle m-1/2, 1/2|l+1/2, m\rangle = \sqrt{\frac{l+m+1/2}{2l+1}}.   (6.111)

Now,

\sin^2\alpha = 1 - \frac{l+m+1/2}{2l+1} = \frac{l-m+1/2}{2l+1}.   (6.112)

We now need to determine the sign of sin alpha. A careful examination of the recursion relation, Eq. (5.282), shows that the plus sign is appropriate. Thus,

|l+1/2, m\rangle = \sqrt{\frac{l+m+1/2}{2l+1}}\,|m-1/2, 1/2\rangle + \sqrt{\frac{l-m+1/2}{2l+1}}\,|m+1/2, -1/2\rangle,   (6.113)

e. (6. These functions are eigenfunctions of the total angular momentum for spin onehalf particles. m L·S |l − 1/2. which is the quantum number associated with Jz . For a given choice of l.m . (6. m . m l¯2 h |l + 1/2.102). −1/2 .118) (6.. Thus. m = − + l − m + 1/2 |m − 1/2. m kets are eigenstates of L·S. ϕ) χ+ 2l + 1 l m + 1/2 m+1/2 Yl (θ.116) The |l ± 1/2.115) ψnlm± = Rnl (r) Y j=l±1/2. the quantum number associated with J 2 ) can take the values l ± 1/2. A general wave-function for an energy eigenstate in a hydrogen-like atom is written The radial part of the wave-function.114) 2l + 1 It is convenient to define so called spin-angular functions using the Pauli twocomponent formalism: Yl j=l±1/2. just as the spherical harmonics are eigenfunctions of the orbital angular momentum. ϕ) 1  ± l ± m + 1/2 Yl  = √ m+1/2 2l + 1 l m + 1/2 Yl (θ.117) (6.6. the quantum number j (i. ϕ) (6. (6. = − 2 138 (θ. m .7 Fine structure 6 APPROXIMATION METHODS |l − 1/2. Rnl (r).119) . ¯2 h L·S |j = l ± 1/2. according to Eq. ϕ) χ− 2l + 1  m−1/2   . depends on the radial quantum number n and the angular quantum number l. = 2 (l + 1) ¯ 2 h |l − 1/2. m .m = ± + l ± m + 1/2 m−1/2 Yl (θ. 2 giving L·S |l + 1/2. mj = m = [j (j + 1) − l (l + 1) − 3/4] |j. 1/2 2l + 1 l + m + 1/2 |m + 1/2. (6. The wave-function is also labeled by m.

It follows that

\int \big(\mathcal{Y}_l^{\,l+1/2,\,m}\big)^\dagger\, L\cdot S\;\mathcal{Y}_l^{\,l+1/2,\,m}\,d\Omega = \frac{l\,\hbar^2}{2},   (6.120)

\int \big(\mathcal{Y}_l^{\,l-1/2,\,m}\big)^\dagger\, L\cdot S\;\mathcal{Y}_l^{\,l-1/2,\,m}\,d\Omega = -\frac{(l+1)\,\hbar^2}{2},   (6.121)

where the integrals are over all solid angle.

Let us now apply degenerate perturbation theory to evaluate the shift in energy of a state whose wave-function is psi_{nlm+/-} due to the spin-orbit Hamiltonian, H_{LS}. To first-order, the energy-shift is given by

\Delta E_{nlm\pm} = \int (\psi_{nlm\pm})^\dagger\, H_{LS}\,\psi_{nlm\pm}\,dV,   (6.122)

where the integral is over all space. Equations (6.100) (remember the factor of two), (6.116), and (6.120)-(6.121) yield

\Delta E_{nlm+} = +\frac{1}{2 m_e^{\,2}}\,\Big\langle \frac{1}{r}\frac{dV}{dr}\Big\rangle\,\frac{l\,\hbar^2}{2},   (6.123)

\Delta E_{nlm-} = -\frac{1}{2 m_e^{\,2}}\,\Big\langle \frac{1}{r}\frac{dV}{dr}\Big\rangle\,\frac{(l+1)\,\hbar^2}{2},   (6.124)

where

\Big\langle \frac{1}{r}\frac{dV}{dr}\Big\rangle = \int (R_{nl})^{*}\,\frac{1}{r}\frac{dV}{dr}\,R_{nl}\;r^2\,dr.   (6.125)

Equations (6.123)-(6.124) are known as Lande's interval rule.

Let us now apply the above result to the case of a sodium atom. In chemist's notation, the ground state is written

(1s)^2(2s)^2(2p)^6(3s).   (6.126)

The inner ten electrons effectively form a spherically symmetric electron cloud. We are interested in the excitation of the eleventh electron from 3s to some higher energy state. The closest (in energy) unoccupied state is 3p. This state has a higher energy than 3s due to the deviations of the potential from the pure Coulomb form.

In the absence of spin-orbit interaction, there are six degenerate 3p states. The spin-orbit interaction breaks the degeneracy of these states. The modified states are labeled (3p)_{1/2} and (3p)_{3/2}, where the subscript refers to the value of j. The four (3p)_{3/2} states lie at a slightly higher energy level than the two (3p)_{1/2} states, because the radial integral (6.125) is positive. The splitting of the (3p) energy levels of the sodium atom can be observed using a spectroscope. The well-known sodium D line is associated with transitions between the 3p and 3s states. The fact that there are two slightly different 3p energy levels (note that spin-orbit coupling does not split the 3s energy levels) means that the sodium D line actually consists of two very closely spaced spectroscopic lines. It is easily demonstrated that the ratio of the typical spacing of Balmer lines to the splitting brought about by spin-orbit interaction is about 1 : alpha^2, where

\alpha = \frac{e^2}{2\,\epsilon_0\,h\,c} = \frac{1}{137}   (6.127)

is the fine structure constant. Note that Eqs. (6.123)-(6.124) are not entirely correct, since we have neglected an effect (namely, the relativistic mass correction of the electron) which is the same order of magnitude as spin-orbit coupling.

6.8 The Zeeman effect

Consider a hydrogen-like atom placed in a uniform z-directed magnetic field. The change in energy of the outermost electron is

H_B = -\mu\cdot B,   (6.128)

where

\mu = -\frac{e\,(L + 2\,S)}{2\,m_e}   (6.129)

is its magnetic moment, including both the spin and orbital contributions. Thus,

H_B = \frac{e\,B}{2\,m_e}\,(L_z + 2\,S_z).   (6.130)

Suppose that the energy-shifts induced by the magnetic field are much smaller than those induced by spin-orbit interaction.

Of course. (6. 1/2 2l + 1 l m + 1/2 |m + 1/2. h 2 me (6. where j = l ± 1/2.136). we obtain Lande’s formula for the energy-shift induced by a weak magnetic field: e¯ B h 1 ∆Enlm± = m 1± . (6. from Eqs. m|HB |l ± 1/2. (6. m = ± + It follows that l ± 1/2. |l ± 1/2. (6.132) we find that ∆Enlm± = eB (m ¯ + l ± 1/2. m|Sz |l ± 1/2. with four j = 3/2 states lying at a slightly higher energy than two j = 1/2 states. a magnetic field splits the (3p)3/2 141 .114). m . m ) .8 The Zeeman effect 6 APPROXIMATION METHODS Since a small perturbation acting on the eigenstates of H0 + HLS . m = ¯ h [(l ± m + 1/2) − (l 2 (2 l + 1) m¯ h .133) Now. these states are the simultaneous eigenstates of J 2 and Jz .6. labeled by the quantum numbers j and m.135) l ± m + 1/2 |m − 1/2. Let us consider one of these states. −1/2 . 2l + 1 (6. From standard perturbation theory.136) 2 me 2l + 1 Let us apply this theory to the sodium atom. the first-order energy-shift in the presence of a magnetic field is ∆Enlm± = l ± 1/2. = ± 2l + 1 m + 1/2)] (6. According to Eq. m|Sz |l ± 1/2. The spin-orbit interaction splits the six 3p states into two groups.134) Thus. (6.131) Lz + 2 S z = Jz + Sz .113)–(6. the latter states acquiring a higher energy. We have already seen that the non-Coulomb potential splits the degeneracy of the 3s and 3p states.

According to Eq. (6.136), a magnetic field splits the (3p)_{3/2} quadruplet of states, each state acquiring a different energy. In fact, the energy of each state becomes dependent on the quantum number m, which measures the projection of the total angular momentum along the z-axis. States with higher m values have higher energies. A magnetic field also splits the (3p)_{1/2} doublet of states. However, it is evident from Eq. (6.136) that these states are split by a lesser amount than the j = 3/2 states.

Suppose that we increase the strength of the magnetic field, so that the energy-shift due to the magnetic field becomes comparable to the energy-shift induced by spin-orbit interaction. Clearly, in this situation, it does not make much sense to think of H_B as a small interaction term operating on the eigenstates of H_0 + H_{LS}. In fact, this intermediate case is very difficult to analyze. Let us, instead, consider the extreme limit in which the energy-shift due to the magnetic field greatly exceeds that induced by spin-orbit effects. This is called the Paschen-Back limit.

In the Paschen-Back limit, we can think of the spin-orbit Hamiltonian, H_{LS}, as a small interaction term operating on the eigenstates of H_0 + H_B. Note that the magnetic Hamiltonian, H_B, commutes with L^2, S^2, L_z, S_z, but does not commute with L^2, S^2, J^2, J_z. Thus, in an intense magnetic field, the energy eigenstates of a hydrogen-like atom are approximate eigenstates of the spin and orbital angular momenta, but are not eigenstates of the total angular momentum. We can label each state by the quantum numbers n (the energy quantum number), l, m_l, and m_s. Thus, our energy eigenkets are written |n, l, m_l, m_s>. The unperturbed Hamiltonian, H_0, causes states with different values of the quantum numbers n and l to have different energies. However, states with the same value of n and l, but different values of m_l and m_s, are degenerate. The shift in energy due to the magnetic field is simply

\Delta E_{n l m_l m_s} = \langle n, l, m_l, m_s|H_B|n, l, m_l, m_s\rangle = \frac{e\,\hbar\,B}{2\,m_e}\,(m_l + 2\,m_s).   (6.137)

Thus, states with different values of m_l + 2 m_s acquire different energies.

Let us apply this result to a sodium atom. In the absence of a magnetic field, the six 3p states form two groups of four and two states, depending on the value of their total angular momentum.

of their total angular momentum. In the presence of an intense magnetic field, the 3p states are split into five groups. There is a state with m_l + 2 m_s = 2, a state with m_l + 2 m_s = 1, two states with m_l + 2 m_s = 0, a state with m_l + 2 m_s = −1, and a state with m_l + 2 m_s = −2, in order of decreasing energy. These groups are equally spaced in energy, the energy difference between adjacent groups being e\hbar B/2 m_e.

The energy-shift induced by the spin-orbit Hamiltonian is given by

\Delta E_{n l m_l m_s} = \langle n, l, m_l, m_s|H_{LS}|n, l, m_l, m_s\rangle,    (6.138)

where

H_{LS} = \frac{1}{2\, m_e^{\,2} c^2}\,\frac{1}{r}\frac{dV}{dr}\; {\bf L}\cdot{\bf S}.    (6.139)

Now,

{\bf L}\cdot{\bf S} = L_z S_z + (L_+ S_- + L_- S_+)/2,    (6.140)

so, since ⟨L_±⟩ = ⟨S_±⟩ = 0 for expectation values taken between the simultaneous eigenkets of L_z and S_z,

\Delta E_{n l m_l m_s} = \frac{\hbar^2\, m_l\, m_s}{2\, m_e^{\,2} c^2}\,\left\langle \frac{1}{r}\frac{dV}{dr} \right\rangle.    (6.142)

Let us apply the above result to a sodium atom. In the presence of an intense magnetic field, the 3p states are split into five groups with (m_l, m_s) quantum numbers (1, 1/2), (0, 1/2), (1, −1/2) or (−1, 1/2), (0, −1/2), and (−1, −1/2), respectively, in order of decreasing energy. The spin-orbit term increases the energy of the highest energy state, does not affect the next highest energy state, decreases, but does not split, the energy of the doublet, does not affect the next lowest energy state, and increases the energy of the lowest energy state. The net result is that the five groups of states are no longer equally spaced in energy.

The sort of magnetic field-strength needed to get into the Paschen-Back limit is

B_{PB} \sim \frac{\alpha^2\,\hbar}{e\, a_0^{\,2}} \sim 25\ {\rm tesla}.    (6.143)

Obviously, this is an extremely large field-strength.
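For orientation, the field-strength estimate (6.143) can be evaluated directly from standard SI constants; this is only an order-of-magnitude sketch:

    # Order-of-magnitude evaluation of the Paschen-Back field-strength estimate
    # (6.143). The result only needs to be right to within a factor of order unity.
    hbar  = 1.054571817e-34     # J s
    e     = 1.602176634e-19     # C
    a_0   = 5.29177210903e-11   # Bohr radius, m
    alpha = 1.0 / 137.035999

    B_PB = alpha**2 * hbar / (e * a_0**2)
    print(f"B_PB ~ {B_PB:.0f} tesla")
    # Comes out at roughly 10 tesla, i.e. the same order of magnitude as the
    # 25 tesla quoted above.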

6.9 Time-dependent perturbation theory

Suppose that the Hamiltonian of the system under consideration can be written

H = H_0 + H_1(t),    (6.144)

where H_0 does not contain time explicitly, and H_1 is a small time-dependent perturbation. It is assumed that we are able to calculate the eigenkets of the unperturbed Hamiltonian:

H_0 |n\rangle = E_n |n\rangle.    (6.145)

We know that if the system is in one of the eigenstates of H_0 then, in the absence of the external perturbation, it remains in this state for ever. However, the presence of a small time-dependent perturbation can, in principle, give rise to a finite probability that a system initially in some eigenstate |i\rangle of the unperturbed Hamiltonian is found in some other eigenstate at a subsequent time (since |i\rangle is no longer an exact eigenstate of the total Hamiltonian). In other words, a time-dependent perturbation causes the system to make transitions between its unperturbed energy eigenstates. Let us investigate this effect.

Suppose that at t = t_0 the state of the system is represented by

|A\rangle = \sum_n c_n |n\rangle,    (6.146)

where the c_n are complex numbers. Thus, the initial state is some linear superposition of the unperturbed energy eigenstates. In the absence of the time-dependent perturbation, the time evolution of the system is given by

|A, t_0, t\rangle = \sum_n c_n \exp[-i E_n (t - t_0)/\hbar]\, |n\rangle.    (6.147)

Now, the probability of finding the system in state |n\rangle at time t is

P_n(t) = |c_n \exp[-i E_n (t - t_0)/\hbar]|^2 = |c_n|^2 = P_n(t_0).    (6.148)

Clearly, with H_1 = 0, the probability of finding the system in state |n\rangle at time t is exactly the same as the probability of finding the system in this state at the initial time t_0. However, with H_1 \neq 0, we expect P_n(t) to vary with time. Thus, we can write

|A, t_0, t\rangle = \sum_n c_n(t) \exp[-i E_n (t - t_0)/\hbar]\, |n\rangle,    (6.149)

where P_n(t) = |c_n(t)|^2. Here, we have carefully separated the fast phase oscillation of the eigenkets, which depends on the unperturbed Hamiltonian, from the slow variation of the amplitudes c_n(t), which depends entirely on the perturbation (i.e., c_n is constant if H_1 = 0). Note that in Eq. (6.149) the eigenkets |n\rangle are time-independent (they are actually the eigenkets of H_0 evaluated at the time t_0). Schrödinger's time evolution equation yields

i\hbar\,\frac{\partial}{\partial t} |A, t_0, t\rangle = H\, |A, t_0, t\rangle = (H_0 + H_1)\, |A, t_0, t\rangle.    (6.150)

It follows from Eq. (6.149) that

(H_0 + H_1)\,|A, t_0, t\rangle = \sum_m c_m(t) \exp[-i E_m (t - t_0)/\hbar]\, (E_m + H_1)\, |m\rangle.    (6.151)

We also have

i\hbar\,\frac{\partial}{\partial t} |A, t_0, t\rangle = \sum_m \left( i\hbar\,\frac{dc_m}{dt} + c_m(t)\, E_m \right) \exp[-i E_m (t - t_0)/\hbar]\, |m\rangle,    (6.152)

where use has been made of the time-independence of the kets |m\rangle. According to Eq. (6.150), we can equate the right-hand sides of the previous two equations to obtain

\sum_m i\hbar\,\frac{dc_m}{dt}\, \exp[-i E_m (t - t_0)/\hbar]\, |m\rangle = \sum_m c_m(t)\, \exp[-i E_m (t - t_0)/\hbar]\, H_1 |m\rangle.    (6.153)

Left-multiplication by \langle n| yields

i\hbar\,\frac{dc_n}{dt} = \sum_m H_{nm}(t)\, \exp[\,i\,\omega_{nm} (t - t_0)]\, c_m(t),    (6.154)

where

H_{nm}(t) = \langle n|H_1(t)|m\rangle,    (6.155)

and

\omega_{nm} = \frac{E_n - E_m}{\hbar}.    (6.156)

Here, we have made use of the standard orthonormality result, \langle n|m\rangle = \delta_{nm}. Suppose that there are N linearly independent eigenkets of the unperturbed Hamiltonian. According to Eq. (6.154), the time variation of the coefficients c_n, which specify the probability of finding the system in state |n\rangle at time t, is determined by N coupled first-order differential equations. Note that Eq. (6.154) is exact—we have made no approximations at this stage. Unfortunately, we cannot generally find exact solutions to this equation, so we have to obtain approximate solutions via suitable expansions in small quantities. However, for the particularly simple case of a two-state system (i.e., N = 2), it is actually possible to solve Eq. (6.154) without approximation. This solution is of enormous practical importance.
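Although exact analytic solutions are generally out of reach, Eq. (6.154) is straightforward to integrate numerically. The following is a minimal sketch for an illustrative three-level system; the Hamiltonian matrix, the driving frequency, and the units (ℏ = 1) are assumptions made purely for the example, not anything specified in the text:

    # Direct numerical integration of the exact amplitude equations (6.154):
    # i hbar dc_n/dt = sum_m H_nm(t) exp[i w_nm (t - t0)] c_m(t).
    import numpy as np

    hbar = 1.0
    E = np.array([0.0, 1.0, 2.5])                 # unperturbed energies E_n
    w = (E[:, None] - E[None, :]) / hbar          # w_nm = (E_n - E_m)/hbar

    def H1(t):
        # small Hermitian perturbation oscillating at an illustrative frequency
        V = 0.02 * np.array([[0.0, 1.0, 0.5],
                             [1.0, 0.0, 1.0],
                             [0.5, 1.0, 0.0]])
        return V * np.cos(1.0 * t)

    def dc_dt(t, c, t0=0.0):
        return (-1j / hbar) * (H1(t) * np.exp(1j * w * (t - t0))) @ c

    # fourth-order Runge-Kutta, starting with the system definitely in state |0>
    c = np.array([1.0, 0.0, 0.0], dtype=complex)
    t, dt = 0.0, 0.01
    for _ in range(20000):
        k1 = dc_dt(t, c)
        k2 = dc_dt(t + dt / 2, c + dt * k1 / 2)
        k3 = dc_dt(t + dt / 2, c + dt * k2 / 2)
        k4 = dc_dt(t + dt, c + dt * k3)
        c += (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt

    print("occupation probabilities |c_n|^2:", np.abs(c)**2)
    print("total probability:", np.sum(np.abs(c)**2))   # remains ~1 (unitarity)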

6.10 The two-state system

Consider a system in which the time-independent Hamiltonian possesses two eigenstates, denoted

H_0 |1\rangle = E_1 |1\rangle,    (6.157)

H_0 |2\rangle = E_2 |2\rangle.    (6.158)

Suppose, for the sake of simplicity, that the diagonal matrix elements of the interaction Hamiltonian, H_1, are zero:

\langle 1|H_1|1\rangle = \langle 2|H_1|2\rangle = 0.    (6.159)

The off-diagonal matrix elements are assumed to oscillate sinusoidally at some frequency \omega:

\langle 1|H_1|2\rangle = \langle 2|H_1|1\rangle^* = \gamma \exp(\,i\,\omega t),    (6.160)

where \gamma and \omega are real. Note that it is only the off-diagonal matrix elements which give rise to the effect which we are interested in—namely, transitions between states 1 and 2.

For a two-state system, Eq. (6.154) reduces to

i\hbar\,\frac{dc_1}{dt} = \gamma \exp[+i\,(\omega - \omega_{21})\, t\,]\, c_2,    (6.161)

i\hbar\,\frac{dc_2}{dt} = \gamma \exp[-i\,(\omega - \omega_{21})\, t\,]\, c_1,    (6.162)

where \omega_{21} = (E_2 - E_1)/\hbar, and it is assumed that t_0 = 0. Equations (6.161) and (6.162) can be combined to give a second-order differential equation for the time variation of the amplitude c_2:

\frac{d^2 c_2}{dt^2} + i\,(\omega - \omega_{21})\,\frac{dc_2}{dt} + \frac{\gamma^2}{\hbar^2}\, c_2 = 0.    (6.163)

Once we have solved for c_2, we can use Eq. (6.162) to obtain the amplitude c_1. Let us look for a solution in which the system is certain to be in state 1 at time t = 0. Thus, our boundary conditions are c_1(0) = 1 and c_2(0) = 0. It is easily demonstrated that the appropriate solutions are

c_2(t) = \frac{-i\,\gamma/\hbar}{\sqrt{\gamma^2/\hbar^2 + (\omega - \omega_{21})^2/4}}\; \exp[-i\,(\omega - \omega_{21})\, t/2\,]\; \sin\!\left[ \sqrt{\gamma^2/\hbar^2 + (\omega - \omega_{21})^2/4}\; t \right],    (6.164)

c_1(t) = \exp[\,i\,(\omega - \omega_{21})\, t/2\,]\, \cos\!\left[ \sqrt{\gamma^2/\hbar^2 + (\omega - \omega_{21})^2/4}\; t \right] - \frac{i\,(\omega - \omega_{21})/2}{\sqrt{\gamma^2/\hbar^2 + (\omega - \omega_{21})^2/4}}\; \exp[\,i\,(\omega - \omega_{21})\, t/2\,]\, \sin\!\left[ \sqrt{\gamma^2/\hbar^2 + (\omega - \omega_{21})^2/4}\; t \right].    (6.165)

Now, the probability of finding the system in state 1 at time t is simply P_1(t) = |c_1|^2. Likewise, the probability of finding the system in state 2 at time t is P_2(t) = |c_2|^2. It follows that

P_2(t) = \frac{\gamma^2/\hbar^2}{\gamma^2/\hbar^2 + (\omega - \omega_{21})^2/4}\; \sin^2\!\left[ \sqrt{\gamma^2/\hbar^2 + (\omega - \omega_{21})^2/4}\; t \right],    (6.166)

and

P_1(t) = 1 - P_2(t).    (6.167)

This result is known as Rabi's formula.

Equation (6.166) exhibits all the features of a classic resonance. At resonance, when the oscillation frequency of the perturbation, \omega, matches the frequency \omega_{21}, we find that

P_1(t) = \cos^2(\gamma\, t/\hbar),    (6.168)

P_2(t) = \sin^2(\gamma\, t/\hbar).    (6.169)

According to the above result, the system starts off at t = 0 in state 1. After a time interval \pi\hbar/2\gamma it is certain to be in state 2. After a further time interval \pi\hbar/2\gamma it is certain to be in state 1, and so on. Thus, the system periodically flip-flops between states 1 and 2 under the influence of the time-dependent perturbation. This implies that the system alternately absorbs and emits energy from the source of the perturbation.

The absorption-emission cycle also takes place away from the resonance, when \omega \neq \omega_{21}, but in this case the amplitude of oscillation of the coefficient c_2 is reduced. This means that the maximum value of P_2(t) is no longer unity, nor is the minimum value of P_1(t) zero. In fact, if we plot the maximum value of P_2(t) as a function of the applied frequency, \omega, we obtain a resonance curve whose maximum (unity) lies at the resonance, and whose full-width half-maximum (in frequency) is 4\gamma/\hbar. Thus, the weaker the perturbation (i.e., the smaller \gamma becomes), the narrower the resonance. In other words, the time-dependent perturbation is only effective at causing transitions between states 1 and 2 if its frequency of oscillation lies in the approximate range \omega_{21} \pm 2\gamma/\hbar. However, if the applied frequency differs from the resonant frequency by substantially more than 2\gamma/\hbar then the probability of the system jumping from state 1 to state 2 is very small.
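The resonance curve described above is easy to evaluate directly from Rabi's formula (6.166). In the following sketch the values of γ and ω21 are arbitrary choices in units with ℏ = 1:

    # Maximum of P2(t) from Rabi's formula (6.166) as a function of the applied
    # frequency. Units with hbar = 1; gamma and omega_21 are illustrative values.
    hbar, gamma, w21 = 1.0, 0.05, 1.0

    def P2_max(w):
        x = (gamma / hbar)**2
        return x / (x + (w - w21)**2 / 4)

    for w in [w21, w21 + 2 * gamma / hbar, w21 + 10 * gamma / hbar]:
        print(f"omega = {w:.3f}:  max P2 = {P2_max(w):.3f}")
    # The half-maximum points lie at omega = w21 +/- 2 gamma/hbar, so the
    # full-width half-maximum of the curve is 4 gamma/hbar, as stated above.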

6.11 Spin magnetic resonance

Consider a spin one-half system (e.g., a bound electron) placed in a uniform z-directed magnetic field, and then subjected to a small time-dependent magnetic field rotating in the x-y plane. Thus,

{\bf B} = B_0\, \hat{\bf z} + B_1 (\cos\omega t\; \hat{\bf x} + \sin\omega t\; \hat{\bf y}),    (6.170)

where B_0 and B_1 are constants, with B_1 \ll B_0. The rotating magnetic field usually represents the magnetic component of an electromagnetic wave propagating along the z-axis. In this system, the electric component of the wave has no effect. The Hamiltonian is written

H = -{\bf \mu}\cdot{\bf B} = H_0 + H_1,    (6.171)

where

H_0 = \frac{e B_0}{m_e}\, S_z,    (6.172)

and

H_1 = \frac{e B_1}{m_e}\, (\cos\omega t\; S_x + \sin\omega t\; S_y).    (6.173)

The eigenstates of the unperturbed Hamiltonian are the 'spin up' and 'spin down' states, denoted |+\rangle and |-\rangle, respectively. Thus,

H_0 |\pm\rangle = \pm\, \frac{e\hbar B_0}{2 m_e}\, |\pm\rangle.    (6.174)

The time-dependent Hamiltonian can be written

H_1 = \frac{e B_1}{2 m_e}\, [\exp(\,i\,\omega t)\, S_- + \exp(-i\,\omega t)\, S_+],    (6.175)

where S_+ and S_- are the conventional raising and lowering operators for the spin angular momentum. It follows that

\langle +|H_1|+\rangle = \langle -|H_1|-\rangle = 0,    (6.176)

and

\langle -|H_1|+\rangle = \langle +|H_1|-\rangle^* = \frac{e\hbar B_1}{2 m_e}\, \exp(\,i\,\omega t).    (6.177)

It can be seen that this system is exactly the same as the two-state system discussed in the previous section, provided that we make the identifications

|1\rangle \rightarrow |-\rangle,    (6.178)

|2\rangle \rightarrow |+\rangle,    (6.179)

\omega_{21} \rightarrow \frac{e B_0}{m_e},    (6.180)

\gamma \rightarrow \frac{e\hbar B_1}{2 m_e}.    (6.181)

The resonant frequency, \omega_{21} = e B_0/m_e, is simply the spin precession frequency for an electron in a uniform magnetic field of strength B_0. In the absence of the perturbation, the expectation values of S_x and S_y oscillate because of the spin precession, but the expectation value of S_z remains invariant. If we now apply a magnetic perturbation rotating at the resonant frequency then, according to the analysis of the previous section, the system undergoes a succession of spin-flops, in addition to the spin precession. We also know that if the oscillation frequency of the applied field is very different from the resonant frequency then there is virtually zero probability of the field triggering a spin-flop. The width of the resonance (in frequency) is determined by the strength of the oscillating magnetic perturbation. Experimentalists are able to measure the magnetic moments of electrons, and other spin one-half particles, to a high degree of accuracy by placing the particles in a magnetic field, and subjecting them to an oscillating magnetic field whose frequency is gradually scanned. By determining the resonant frequency (i.e., the frequency at which the particles absorb energy from the oscillating field), it is possible to calculate the magnetic moment.
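To put some illustrative numbers to this, the following sketch evaluates the resonant frequency e B0/m_e and the resonant spin-flip time πℏ/2γ = π m_e/(e B1). The field strengths are arbitrary choices satisfying B1 ≪ B0, not values taken from the text:

    # Illustrative numbers for the spin-resonance set-up above.
    import math

    e, m_e, hbar = 1.602176634e-19, 9.1093837015e-31, 1.054571817e-34
    B0, B1 = 1.0, 1.0e-4          # tesla (illustrative)

    w21 = e * B0 / m_e                        # resonant angular frequency
    gamma = e * hbar * B1 / (2 * m_e)
    t_flip = math.pi * hbar / (2 * gamma)     # time for a complete spin-flip

    print(f"resonant frequency: {w21 / (2 * math.pi) / 1e9:.1f} GHz")
    print(f"spin-flip time:     {t_flip * 1e6:.2f} microseconds")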

6.12 The Dyson series

Let us now try to find approximate solutions of Eq. (6.154) for a general system. It is convenient to work in terms of the time evolution operator, U(t_0, t), which is defined

|A, t_0, t\rangle = U(t_0, t)\, |A\rangle.    (6.182)

t0 ) = 1. ∂t (6. The subsequent evolution of the state ket is given by Eq. It is easily seen that the time evolution operator satisfies the differential equation ∂U(t0 . Thus. ∂t subject to the boundary condition i¯ h U(t0 . given that the state ket at the initial time t0 is |A . t0 . |i.189) (6.184) In the absence of the external perturbation.183) (6.6. h (6.190) m 151 . h It is readily demonstrated that UI satisfies the differential equation i¯ h where HI (t0 . t is the state ket of the system at time t. t) = exp[−i H0 (t − t0 )/¯ ].149). |A. t) = HI (t0 . (6.185) Let us switch on the perturbation and look for a solution of the form U(t0 . t).187) (6. h h subject to the boundary condition UI (t0 .188) ∂UI (t0 .12 The Dyson series 6 APPROXIMATION METHODS Here. h (6. t). t0 . t) = exp[−i H0 (t − t0 )/¯ ] UI (t0 . the time evolution operator reduces to U(t0 . Suppose that the system starts off at time t0 in the eigenstate |i of the unperturbed Hamiltonian. we would expect UI to contain all of the information regarding transitions between different eigenstates of H 0 caused by the perturbation. t) = (H0 + H1 ) U(t0 . (6. t) UI (t0 .186) Note that UI specifies that component of the time evolution operator which is due to the time-dependent perturbation. t0 ) = 1. t = cm (t) exp[−i Em (t − t0 )/¯ ] |m . t) = exp[+i H0 (t − t0 )/¯ ] H1 exp[−i H0 (t − t0 )/¯ ]. t). (6.

192) where use has been made of n|m = δnm .195) that c(0) (t) = δin .196) where the superscript (1) refers to a first-order term in the expansion. is simply Pi→n (t0 . t)|i . t ) HI (t0 . given that it is definitely in state |i at time t 0 . are equivalent to the following integral equation.189).192) and (6. t ) dt  HI (t0 . Thus.193) This quantity is usually termed the transition probability between states |i and |n . i UI (t0 . t ) dt . t0 . we also have |i. Note that the differential equation (6. (6.191) HI (t0 . plus the boundary condition (6. h It follows that cn (t) = n|UI (t0 . t) = | n|UI (t0 .197) . t ) dt + · · · . t) i 1− ¯ h i 1− ¯ h −i + ¯ h i HI (t0 . Let cn = c(0) + c(1) + c(2) + · · · . etc. t ) dt t0 2 t t dt t0 t0 HI (t0 . It follows from Eqs.187). t = exp[−i H0 (t − t0 )/¯ ] UI (t0 .194) We can obtain an approximate solution to this equation by iteration: UI (t0 . n 152 (6.12 The Dyson series 6 APPROXIMATION METHODS However. (6.195) This expansion is known as the Dyson series. t ) UI (t0 . t ) 1 − ¯ h t0 t t  t t0 HI (t0 . t)|i |2 . t) = 1 − ¯ h t (6.6. t) |i . (6. (6. n n n (6. t ) UI (t0 . t0 (6. the probability that the system is found in state |n at time t.

t0 2 t t (6.198) (6. t) = |c(0) + c(1) + c(2) + · · · |2 . To first-order. weighted by some oscillatory phase-factor. 153 . t0 2 m t t (6. if the matrix element is zero.. Thus. to second-order.205) According to the above analysis.200) t i = − ¯ h = −i ¯ h exp[ i ωni (t − t0 )] Hni (t ) dt . These expressions simplify to (0) cn (t) = δin . (1) cn (t) (6. ¯ h (6.199) dt t0 t0 n|HI (t0 .203) (6. a transition between states |i and |n is possible even when the matrix element n|H1 |i is zero. the transition probability is proportional to the time integral of the matrix element n|H1 |i . t ) HI (t0 .202) ×Hnm (t ) exp[ i ωmi (t − t0 )] Hmi (t ). t )|i dt . However.201) (2) cn (t) dt t0 t0 dt exp[ i ωnm (t − t0 )] (6. t )|i dt . in the absence of the perturbation). then there is no chance of a first-order transition between states |i and |n . there is no chance of a transition between states |i and |n (i = n) to zeroth-order (i. where ωnm = and Hnm (t) = n|H1 (t)|m .12 The Dyson series 6 APPROXIMATION METHODS c(1) (t) n c(2) (t) n i = − ¯ h = −i ¯ h t n|HI (t0 . n n n En − E m .6.e. The transition probability between states i and n is simply Pi→n (t0 .204) (6.

(6.6. At finite t. n c(1) (t) n t i = − Hni exp[ i ωni (t − t)] dt ¯ h 0 Hni = [1 − exp( i ωni t)].208) giving (En − Ei ) t  4 |Hni |2 (1) 2 sin2  .209) Pi→n (t) |cn | = |En − Ei |2 2¯ h for i = n.13 Constant perturbations 6 APPROXIMATION METHODS 6. En = Ei ) have an appreciable probability of occurrence. ∼ h 154 where (6.206) where H1 is time-independent. but is generally a function of the position. (6. c(0) (t) = δin . and spin operators.213) .212) |En − Ei | < ∼ t Note that in the limit t → ∞ only those transitions which conserve energy (i. (6.210) sin x . Suppose that the system is definitely in state |i at time t = 0. is is possible to have transitions which do not exactly conserve energy.202) (with t0 = 0)..13 Constant perturbations Consider a constant perturbation which is suddenly switched on at time t = 0: H1 (t) = 0 H1 (t) = H1 for t < 0 for t ≥ 0. It is a good approximation to say that sinc(x) is small except when |x| < π.e.207) (6. and decays like 1/|x| at large |x|.200)–(6. is small except when 2π ¯ h .211) x The sinc function is highly oscillatory. The transition probability between states |i and |n can be written |Hni |2 t2 (En − Ei ) t  Pi→n (t) = . According to Eqs. provided that sinc(x) = ∆E ∆t < ¯ . sinc2  2¯ h ¯2 h   (6. (6. Pi→n . momentum. It follows ∼ that the transition probability. (6. En − E i   (6.

Ei (6. (6.210). whereas time is merely a parameter. Clearly. It follows that. Thus. h (6. (6. ¯ h (6.216) .214) ¯2 h where use has been made of sinc(0) = 1. grows linearly with t. in this limit.13 Constant perturbations 6 APPROXIMATION METHODS where ∆E = |En − Ei | is change in energy of the system associated with the transition. this result is just a manifestation of the well-known uncertainty relation for energy and time. despite the fact that H1 is constant for t > 0.215) 2t |Hni |2 ρ(En ) sinc2 (x) dx. and ∆t = t is the time elapsed since the perturbation was switched on. Note that this probability grows quadratically with time.217) and use has been made of Eq.218) . t to t + dt. all possessing nearly the same energy as the energy of the initial state |i . It is helpful to define the density of states. This uncertainty relation is fundamentally different to the position-momentum uncertainty relation. The probability of a transition which conserves energy (i. where the number of final states lying in the energy range E to E + dE is given by ρ(E) dE. we can take ρ(En ) and |Hni |2 out of the integral in the above formula to obtain Pi→[n] (t) = 2π |Hni |2 ρ(En ) t ¯ h En 155 Pi→n (t) ρ(En ) dEn .e.6. We know that in the limit t → ∞ the function sinc(x) is only non-zero in an infinitesimally narrow range of final energies centred on En = Ei . In practice.. the probability of a transition from the initial state i to any of the continuum of possible final states is Pi→ (t) = giving Pi→ (t) = where x = (En − Ei ) t/2 ¯ . there is usually a group of final states. since in non-relativistic quantum mechanics position and momentum are operators. since it implies that the probability of a transition occurring in a fixed time interval. (6. ρ(E). This result is somewhat surprising. E n = Ei ) is |Hin |2 t2 Pi→n (t) = .
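To see the energy-time trade-off described above in numbers, the sketch below measures the width (in |E_n − E_i|) of the sinc² factor appearing in the constant-perturbation transition probability, for a few illustrative times; units with ℏ = 1 are assumed:

    # Width of the sinc^2[(E_n - E_i) t / 2 hbar] factor in the constant-
    # perturbation transition probability, for a few illustrative times.
    import numpy as np

    def sinc2(x):
        return np.sinc(x / np.pi)**2          # np.sinc(u) = sin(pi u)/(pi u)

    dE = np.linspace(1e-6, 50.0, 200000)      # |E_n - E_i|
    for t in [1.0, 10.0, 100.0]:
        profile = sinc2(dE * t / 2.0)
        fwhm = 2.0 * dE[np.argmax(profile < 0.5)]
        print(f"t = {t:6.1f}:  energy width (FWHM) = {fwhm:.4f}")
    # The width falls off as 1/t, consistent with the statement above that only
    # transitions with |E_n - E_i| of order 2 pi hbar / t (or less) are likely.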

(6. wi→n = (6. using the constant perturbation (6.220) Let us now calculate the second-order term in the Dyson series. Fermi’s golden rule is sometimes written wi→[n] = 2π |Hni |2 δ(En − E). Note that the transition rate is constant in time (for t > 0): i. which is simply the transition probability per unit time. dt (6. is now proportional to t. Thus.222) ρ(En ) dEn to dPi→[n] .6. Pi→[n] . we have made use of the result ∞ sinc2 (x) dx = π. the probability of a transition occurring in the time interval t to t + dt is independent of t for fixed dt.202) we find that (2) cn (t) = −i ¯ h 2 t t Hnm Hmi m 0 t dt exp( i ωnm t ) 0 dt exp( i ωmi t ) i = ¯ h = it ¯ h m Hnm Hmi Em − E i [exp( i ωni t ) − exp( i ωnm t ] ) dt 0 m Hnm Hmi [exp( i ωni t/2) sinc(ωni t/2) Em − E i 156 . In deriving the above formula. (6.206).219) −∞ Note that the transition probability.13 Constant perturbations 6 APPROXIMATION METHODS where Pi→[n] denotes the transition probability between the initial state |i and all final states |n which have approximately the same energy as the initial state. wi→[n] = giving 2π |Hni |2 ρ(En ) .e. ¯ h where it is understood that this formula must be integrated with obtain the actual transition rate. |Hni |2 is the average of |Hni |2 over all final states with approximately the same energy as the initial state. From Eq. It is convenient to define the transition rate.. instead of t2 . Here. (6.221) ¯ h En E i This appealingly simple result is known as Fermi’s golden rule.

|n . (6.225) from Ei − Em to Ei − Em + i ¯ η. cn (t) = c(1) n + c(2) n it exp( i ωni t/2) = ¯ h − m  H ni (6. Thus. The above formula clearly breaks down if Hnm Hmi = 0 when Em = Ei .. (6. |m whose energies differ from that of the initial state. The net transition. According to Eq. This problem can be avoided by gradually turning on the perturbation: i. by analogy with the previous analysis. It follows. that wi→[n] 2π = Hni + ¯ h Hnm Hmi ρ(En ) Em − E i 2 .224) where use has been made of Eq. Em − E i Hnm Hmi  sinc(ωni t/2) Em − E i   (6. conserves energy. the system makes a non-energy-conserving transition to some intermediate state |m . (6.224) to average to zero (due to the oscillatory phase-factor) during the evaluation of the transition probability.225) m where the transition rate is calculated for all final states. (6. with approximately the same energy as the initial state. The non-energy-conserving transitions are generally termed virtual transitions. the system makes another non-energy-conserving transition to the final state |n . from |i to |n . and for intermediate states. The net result is to change the energy denominator in Eq. a second-order transition takes place in two steps. The fact that Em = Ei causes the last term on the right-hand side of Eq.6.13 Constant perturbations 6 APPROXIMATION METHODS − exp( i ωnm t/2) sinc(ωnm t/2)] .225). h 157 . First.208). Subsequently. En E i (6. H1 → exp(η t) H1 (where η is very small).223) + m Hnm Hmi exp( i ωim t/2) sinc(ωnm t/2) . |i .e. whereas the energy conserving first-order transition is termed a real transition.

and (6. ¯ h ωni + ω ωni − ω where Vni = † Vni = n|V|i .227). h (6.14 Harmonic perturbations Consider a perturbation which oscillates sinusoidally in time. (6. and spin operators.208). (6. momentum. ¯ h 0 or En (6. h (6. (6. H1 (t) = V exp( i ωt) + V † exp(−i ωt). whilst making a h transition to a final state whose energy level is less than that of the initial state 158 .228) (6. (6.232) corresponds to the second term. (6. (6. (6. provided that ωni = En − E i → ωni ± ω. and switch on the harmonic perturbation at t = 0.231) ωni − ω 0 or En Ei + ¯ ω.230) Thus.14 Harmonic perturbations 6 APPROXIMATION METHODS 6.231) corresponds to the first term on the right-hand side of Eq.229) This formula is analogous to Eq. in general. Let us initiate the system in the eigenstate |i of the unperturbed Hamiltonian. n|V † |i = i|V|n ∗ .201) that c(1) n −i = ¯ h  t 0  † Vni exp(i ωt ) + Vni exp(−i ωt ) exp( i ωni t ) dt . It follows from Eq.226) where V is.227) 1  1 − exp[ i (ωni + ω) t] 1 − exp[ i (ωni − ω) t] †  = Vni + Vni . Thus. This is usually called a harmonic perturbation. H0 . a function of position.6. it follows from the previous analysis that the transition probability P i→n (t) = |c(1) |2 is only appreciable in the limit t → ∞ if n ωni + ω Ei − ¯ ω. The former term describes a process by which the system gives up energy ¯ ω to the perturbing field.232) Clearly.

and is very important in statistical mechanics. divided by the density of final states for stimulated emission. that of the system plus the perturbing field) is conserved.6. whereas Eq. the total energy h (i. which expresses a fundamental symmetry between absorption and stimulated emission. divided by the density of final states for absorption.233) specifies the transition rate for stimulated emission. It follows from Eqs. ¯ h En =Ei −¯ ω h 2π † 2 . equals the rate of absorption.236) † It is clear from Eqs. In both cases. is known as detailed balancing.233)–(6. The latter term describes h a process by which the system gains energy ¯ ω from the perturbing field.234) gives the transition rate for absorption.e.229) that |Vni |2 = |Vni |2 .234) that wi→[n] wn→[i] = .. wi→[n] = wi→[n] 2π |Vni |2 ρ(En ) . This result. whilst h making a transition to a final state whose energy level exceeds that of the initial state by ¯ ω. (6. By analogy with Eq. (6. These equations are more usually written wi→n = wi→n 2π |Vni |2 δ(En − Ei + ¯ ω).234) Equation (6.. This process is known as absorption.228)-(6. (6. the rate of stimulated emission.e.235) (6. non-quantized) 159 .15 Absorption and stimulated emission of radiation 6 APPROXIMATION METHODS by ¯ ω.221).237) ρ(En ) ρ(Ei ) In other words. |V | ρ(En ) = ¯ h ni En =Ei +¯ ω h (6. h ¯ h ni (6. h ¯ h 2π † 2 = |V | δ(En − Ei − ¯ ω). (6. This process is known as stimulated emission.233) (6. (6.15 Absorption and stimulated emission of radiation Let us use some of the results of time-dependent perturbation theory to investigate the interaction of an atomic electron with classical (i. 6.

(6.15 Absorption and stimulated emission of radiation 6 APPROXIMATION METHODS electromagnetic radiation.6.239) (6. p·A = A·p.247) ω n·r − ωt . (6.243) 2 me where A and φ are functions of the position operators. the Hamiltonian of an atomic electron placed in an electromagnetic field is where A(r) is the vector potential and φ(r) is the scalar potential. Hence. 2 me me 2 me ·A = 0. c (6.242) (p − e A)2 H= + e φ + V0 (r).248) . The above equation can be written p2 − e A·p − e p·A + e2 A2 H= + e φ + V0 (r). Thus. (6. (6.246) Suppose that the perturbation corresponds to a monochromatic plane-wave. (6.244) 2 me Now.238) 2 me The standard classical prescription for obtaining the Hamiltonian of a particle of charge q in the presence of an electromagnetic field is H0 = H → H − q φ. The unperturbed Hamiltonian is p2 + V0 (r). for which φ = 0.245) provided that we adopt the gauge p2 e A·p e2 A2 H= − + + e φ + V0 (r). (6. (6.241) E = − φ− ∂t B = × A. (6. A = 2 A0 cos 160 (6.240) This prescription also works in quantum mechanics. p → p + q A. Note that ∂A .

me (6. has been neglected. provided that V =− e A0 ·p exp[−i (ω/c) n·r ] me (6. that the first term on the righth hand side of Eq. me (6.236) that the rate of absorption is h wi→n 2π e2 = |A0 |2 | n| exp[ i (ω/c) n·r] ·p |i |2 δ(En − Ei − ¯ ω). whereas the second term describes the stimulated emission of a photon of energy ¯ ω.252) describes the absorption of a photon of energy ¯ ω. by analogy with the previous analysis.252) This has the same form as Eq. (6.249) with H0 = and H1 − p2 + V(r).255) 161 . 2 µ0  (6.253) It is clear.254) The absorption cross-section is defined as the ratio of the power absorbed by the atom to the incident power per unit area in the electromagnetic field.15 Absorption and stimulated emission of radiation 6 APPROXIMATION METHODS where and n are unit vectors which specify the direction of polarization and the direction of propagation.250) (6. Note that ·n = 0. (6. respectively. Now the energy density of an electromagnetic field is 1 U=  2  2 0 E0 2 B02  + . The Hamiltonian becomes H = H0 + H1 (t).251) where the A2 term. 2 me e A·p . h 2 ¯ me h (6. which is second order in A0 . The perturbing Hamiltonian can be written H1 = − e A0 ·p (exp[ i (ω/c) n·r − i ωt] + exp[−i (ω/c) n·r + i ωt]) .6. (6. It follows from Eq. (6.226).

(6. unity (remember that ω/c = 2π/λ). It follows that n| exp[ i (ω/c) n·r] ·p |i It is readily demonstrated that [r. H0 ] = so n|p|i = −i Using Eq. This approximation is known as the electric dipole approximation. (6.259) can be approximated by its first term.256) Now. Thus. 162 · n|p|i . me (6.262) (6. respectively. cU (6. ¯ h (6.263) . the wave-length of the type of electromagnetic radiation which induces. σabs = so σabs = π e2 | n| exp[ i (ω/c) n·r] ·p |i |2 δ(En − Ei − ¯ ω). c (6.258).16 The electric dipole approximation 6 APPROXIMATION METHODS where E0 and B0 = E0 /c = 2 A0 ω/c are the peak electric and magnetic fieldstrengths. we obtain σabs = 4π2 α ωni | n| ·r|i |2 δ(ω − ωni ).257) 6. transitions between different atomic energy levels is much larger than the typical size of a light atom. H0 ]|i = i me ωni n|r|i . (6.16 The electric dipole approximation In general. h 2ωc 0 me (6.261) me n|[r.6. exp[ i (ω/c) n·r] = 1 + i ω n·r + · · · .258) ¯ ω wi→n h . The incident power per unit area of the electromagnetic field is c U = 2 0 c ω2 |A0 |2 .260) i¯ p h . or is emitted during.

Instead. The latter transition is called a forbidden transition. We z have already seen. (6. Magnetic dipole transitions are typically about 10 5 times more unlikely than similar electric dipole transitions. The first-order term in Eq.4. It is easily demonstrated that n|x|i and n|y|i are only non-zero if ∆l = ±1. that n|z|i = 0 unless the initial and final states satisfy ∆l = ±1. Forbidden transitions are not strictly forbidden. for generally directed radiation n| ·r|i is only non-zero if (6. and m is the quantum number describing the projection of the orbital angular momentum along the z-axis. These are termed the selection rules for electric dipole transitions. (6. h Suppose that the radiation is polarized in the z-direction.268) (6. but disallows a transition from a 2s to a 1s state. that the electric dipole approximation allows a transition from a 2p state to a 1s state. It is clear that if the absorption cross-section is regarded as a function of the applied frequency. ω. from Sect.269) ∆m = 0.267) ∆m = ±1.6. Thus. for instance. the next most likely type of transition is a magnetic dipole transition. (6. ∆l = ±1.266) (6. so that = ^.259) yields 163 . 6. Here.265) ∆m = 0. After electric dipole transitions. which is due to the interaction between the electron spin and the oscillating magnetic field of the incident electromagnetic radiation. they take place at a far lower rate than transitions which are allowed according to the electric dipole approximation. It is clear. ±1. then it exhibits a sharp maximum at ω = ωni = (En − Ei )/¯ .16 The electric dipole approximation 6 APPROXIMATION METHODS where α = e2 /(2 0 h c) = 1/137 is the fine structure constant. l is the quantum number describing the total orbital angular momentum of the electron.264) (6.

= me 2 0 me c (6.263) over all possible frequencies of the incident radiation yields σabs (ω) dω = n 4π2 α ωni | n| ·r|i |2 . me Thus. It is easily demonstrated that ¯2 h [x. the above formula is exh actly the same as that obtained classically by treating the electron as an oscillator. H0 ] ]|i = i|x H0 + H0 x − 2 x H0 x|i = − .16 The electric dipole approximation 6 APPROXIMATION METHODS so-called electric quadrupole transitions. Eq. me 2 2 (6. According to this rule. for the sake of definiteness.271) (6. [x. (6. (6. the selection rules for electric quadrupole transitions are ∆l = 0. (6.275) Note that ¯ has dropped out of the final result. me 2 me ¯ h ωni | n|x|i |2 = 1. h ¯2 i|[x. transitions which are forbidden as electric dipole transitions may well be allowed as magnetic dipole or electric quadrupole transitions. 164 .274) This is known as the Thomas-Reiche-Kuhn sum rule. [x.6. ±2. Thus. H0 ] ] = − . In fact. These are typically about 10 8 times more unlikely than electric dipole transitions.270) reduces to σabs (ω) dω = 2π2 α ¯ h π e2 .270) Suppose.273) It follows that (6. Magnetic dipole and electric quadrupole transitions satisfy different selection rules than electric dipole transitions: for instance. that the incident radiation is polarized in the x-direction. n (6. Integrating Eq.272) giving 2 n ¯2 h ( i|x|n Ei n|x|i − i|x|n En n|x|i ) = − .
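The Thomas-Reiche-Kuhn sum rule holds for any one-particle Hamiltonian of the form p²/2mₑ + V, so it can be checked numerically in a system much simpler than hydrogen. The sketch below evaluates the oscillator-strength sum for a one-dimensional harmonic oscillator discretised on a grid; the oscillator, the grid parameters, and the units (ℏ = m = ω = 1) are all assumptions made for this example only:

    # Numerical check of the Thomas-Reiche-Kuhn sum rule,
    # sum_n (2 m w_ni / hbar) |<n|x|i>|^2 = 1, for a 1-D harmonic oscillator.
    import numpy as np

    N, L = 800, 20.0
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]

    # finite-difference Hamiltonian H = p^2/2 + x^2/2
    H = (np.diag(1.0 / dx**2 + 0.5 * x**2)
         + np.diag(-0.5 / dx**2 * np.ones(N - 1), 1)
         + np.diag(-0.5 / dx**2 * np.ones(N - 1), -1))

    E, psi = np.linalg.eigh(H)                 # psi[:, n] is the n-th eigenvector
    x_ni = psi.T @ (x[:, None] * psi[:, [0]])  # matrix elements <n|x|0>
    f = 2.0 * (E - E[0]) * x_ni.ravel()**2     # oscillator strengths, i = ground state
    print("sum of oscillator strengths =", f.sum())   # -> approximately 1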

278) where Hni = n|H1 |i . It is convenient to gradually turn on the perturbation from zero at t = −∞.280) (6.279) (6. In this limit.6. dt η2 + ωni ¯2 h Consider the limit η → 0. (6. but η h lim 2 2 = π δ(ωni ) = π ¯ δ(En − Ei ). Thus.281) . and H1 is a constant. In the remote past.200)– (6. It follows that. Thus. we want to calculate the time evolution of the coefficient ci (t). other than the initial state |i .276) where η is small and positive. ¯ h η + i ωni (6. Basically. (6. First. the system is assumed to be in the initial state |i .17 Energy-shifts and decay-widths We have examined how a state |n . to first-order.201) that (0) cn (t) = 0. ci (t → −∞) = 1. ¯ 2 η2 + ωni h (6. instead of very suddenly. exp(η t) → 1. (1) cn (t) (6. η→0 η + ωni 165 |Hni |2 exp(2 η t) 2 . however. H1 (t) = exp(η t) H1 . the transition probability from state |i to state |n is Pi→n (t) = |c(1) |2 = n The transition rate is given by 2 |Hni |2 η exp(2 η t) dPi→n = wi→n (t) = 2 . For cn=i (t) we have from Eqs. becomes populated as a result of some time-dependent perturbation applied to the system.277) t i = − Hni ¯ h exp[ (η + i ωni )t ] dt −∞ i exp[ (η + i ωni )t ] = − Hni . Let us now consider how the initial state becomes depopulated.17 Energy-shifts and decay-widths 6 APPROXIMATION METHODS 6. let us check that our previous Fermi golden rule result still applies when the perturbing potential is turned on slowly. and cn=i (t → −∞) = 0. t → −∞.

287) . Let us now calculate ci (t) using Eqs. The width of the peak is determined by how fast the perturbation is switched on. to second-order we have ci (t) exp(η t) −i −i Hii + 1+ ¯ h η ¯ h −i + ¯ h 2 |Hii |2 exp(2 η t) 2 η2 (6. (1) ci (t) (2) ci (t) (0) (6. (6.286). where ci ≡ dci /dt.286) |Hmi |2 exp(2 η t) . ¯ h η (6.280) yields the standard Fermi golden rule result 2π |Hni |2 δ(En − Ei ).200)–(6. Ei − E m + i ¯ η h m=i 166 (6.283) t i = − Hii ¯ h = −i ¯ h −i ¯ h 2 exp(η t ) dt = − −∞ t t i exp(η t) Hii .6. 2 η (η + i ωmi ) (6. We obtain ci ˙ ci    −i −i Hii + ¯ h ¯ h 1− 2 |Hii |2 −i + η ¯ h i Hii ¯ η h |Hmi |2   Ei − E m + i ¯ η h m=i  −i −i Hii + lim η→0 ¯ ¯ h h |Hmi |2 .284) |Hmi | m 2 −∞ dt −∞ dt 2 m × exp[ (η + i ωim )t ] exp[ (η + i ωmi )t ]. (6. we ˙ ˙ can evaluate this ratio in the limit η → 0.202). 2 η (Ei − Em + i ¯ η) h m=i Let us now consider the ratio ci /ci .282) wi→n = ¯ h It is clear that the delta-function in the above formula actually represents a function which is highly peaked at some particular energy. Using Eq. (6. We have ci (t) = 1. |Hmi |2 exp(2 η t) .285) = Thus. Eq.17 Energy-shifts and decay-widths 6 APPROXIMATION METHODS Thus. (6.

287) is independent of time. t = exp[−i (Ei + ∆Ei ) t/¯ ] exp(−Γi t/2¯ ) |i . |i. we obtain −i ∆i t ci (t) = exp . and P denotes the principle part.291) It is convenient to normalize the solution of Eq. whereas the imaginary part of ∆i governs the growth or decay of this state.149). lim where =P (6.294) (6.17 Energy-shifts and decay-widths 6 APPROXIMATION METHODS This result is formally correct to second-order in perturbed quantities. (6. = ci ¯ h where |Hmi |2 ∆i = Hii + lim η→0 E − Em + i ¯ η h m=i i 1 →0 x + i 1 − i π δ(x).293) It is clear that the real part of ∆i gives rise to a simple shift in energy of state |i . (6.6. h h (6.295) (6. x (6. Thus.296) . Thus.290) > 0. We can write ci ˙ −i ∆i . t = exp(−i [Ei + Re(∆i ) ] t/¯ ) exp[ Im(∆i ) t/¯ ] |i . It follows that |Hmi |2 ∆i = Hii + P − iπ |Hmi |2 δ(Ei − Em ).288) (6.292) ¯ h According to Eq. (6. t = exp[−i (∆i + Ei ) t/¯ ] |i . Ei − E m m=i 167 (6. Note that the right-hand side of Eq. h h where |Hmi |2 ∆Ei = Re(∆i ) = Hii + P . the time evolution of the initial state ket |i is given by |i. h We can rewrite this result as |i.288) so that ci (0) = 1. E − Em m=i i m=i (6.289) is a constant. According to a well-known result in pure mathematics. (6.

use has been made of Eq. given that it is definately in state |i at time t = 0. The probability of observing the system in state |i at time t > 0.6. h (6.303) 168 . (6. (6. state |i is not a stationary state in the presence of the time-dependent perturbation. It is closely related to the mean lifetime of this state. Clearly. we can still represent it as a superposition of stationary states (whose amplitudes simply oscillate in time).222). (6. However. ¯ h m=i (6. since |ci |2 + m=i |cm |2 (1 − Γi t/¯ ) + h m=i wi→m t = 1.302) According to Eq. Thus. ¯ h ¯ h ¯ m=i h (6. the amplitude of state |i both oscillates and decays as time progresses.301) Γi where Pi→i = exp(−t/τi ). exp[−i (Ei + ∆Ei ) t/¯ ] exp(−Γi t/2¯ ) = h h f(E) exp(−i E t/¯ ) dE.300) The quantity ∆i is called the decay-width of state |i . (6. the rate of decay of the initial state is a simple function of the transition rates to the other states.17 Energy-shifts and decay-widths 6 APPROXIMATION METHODS and Γi 2 Im(∆i ) 2π =− = |Hmi |2 δ(Ei − Em ).297) Note that the energy-shift ∆Ei is the same as that predicted by standard timeindependent perturbation theory.298) (6. Note that the system conserves probability up to second-order in perturbed quantities. Clearly. h where Γi = wi→m . ¯ h τi = .294). is given by Pi→i (t) = |ci |2 = exp(−Γi t/¯ ). (6.299) Here.

e.e.304) In the absence of the perturbation. The faster the decay of the state (i. are smeared out more that weak lines. the larger Γi ). due to its propensity to decay. This effect is clearly a manifestation of the energy-time uncertainty relation ∆E ∆t ∼ ¯ . which correspond to fast transitions.. Strong lines. the energy of state |i is shifted by Re(∆i ).17 Energy-shifts and decay-widths 6 APPROXIMATION METHODS where f(E) is the weight of the stationary state with energy E in the superposition. In other words.6. but they are a lot sharper. the more its energy is spread out. gives rise to a slight smearing (in wave-length) of the spectral line associated with the transition. The fact that the state is no longer stationary (i. Indeed. For this reason. |f(E)|2 is basically a delta-function centred on the unperturbed energy Ei of state |i . (E − [Ei + Re(∆i )])2 + Γi 2 /4 (6. In the presence of the perturbation. The Fourier inversion theorem yields |f(E)|2 ∝ 1 . spectroscopists generally favour forbidden lines for Doppler shift measurements.. The uncertainty in energy of the excited state. 169 . the energy of the state is smeared over some region of width (in energy) Γi centred around the shifted energy Ei + Re(∆i ). One consequence of h this effect is the existence of a natural width of spectral lines associated with the decay of some excited state to the ground state (or any other lower energy state). Such lines are not as bright as those corresponding to allowed transitions. state |i is a stationary state whose energy is completely determined. it decays in time) implies that its energy cannot be exactly determined.
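For a sense of scale, the natural width Γ = ℏ/τ discussed above can be evaluated for a rapidly decaying state. The lifetime used below (1.6 ns, roughly the textbook value for the hydrogen 2p state) and the transition energy (10.2 eV, Lyman-alpha) are quoted for orientation only:

    # Natural line-width Gamma = hbar/tau and the resulting fractional smearing.
    hbar_eVs = 6.582119569e-16      # hbar in eV s

    tau = 1.6e-9                    # s
    E_transition = 10.2             # eV
    Gamma = hbar_eVs / tau
    print(f"Gamma = {Gamma:.2e} eV,  Gamma/E = {Gamma / E_transition:.1e}")
    # A width of order 4e-7 eV, i.e. a fractional smearing of order 4e-8; a
    # longer-lived (forbidden) upper state gives an even sharper line.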

H0 |φ = E |φ . their energy eigenstates are unbound. a spherical-wave state.4) which satisfies the boundary condition |ψ → |φ as H1 → 0. (7. corresponding to the o same energy eigenvalue.3).2) and H1 represents the non-time-varying source of the scattering. 170 . (7. (7.1) where H0 is the Hamiltonian of a free particle of mass m. (7.e. Here. We have already developed theories which account for some aspects of the spectra of hydrogen-like atoms.. We require a solution of Eq. Let |φ be an energy eigenket of H0 . In general. This state is a plane-wave state or. Schr¨dinger’s equation for the scattering problem is o (H0 + H1 )|ψ = E |ψ .3) whose wave-function r |φ is φ(r ). for which the Hamiltonian of the system is written H = H 0 + H1 . data regarding quantum phenomena has been obtained from two main sources—the study of spectroscopic lines. p2 . H0 = 2m (7. Let us now examine the quantum theory of scattering. 7.7 SCATTERING THEORY 7 Scattering theory 7.2 The Lipmann-Schwinger equation Consider time-independent scattering theory. both H0 and H0 + H1 have continuous energy spectra: i.4) where |ψ is an energy eigenstate of the total Hamiltonian whose wave-function r |ψ is ψ(r ). (7. and scattering experiments. possibly.1 Introduction Historically. |φ is a solution of the free particle Schr¨dinger equation.

We need a prescription for dealing with these infinities. (7.8) ( 2 + k2 ) ψ(r) = 2 r|H1 |ψ .9) 2m This equation is called Helmholtz’s equation. The Lipmann-Schwinger equation can be converted into an integral equation via left multiplication by r|. Thus. and small. we can write the scattering problem o (7. it produces infinities when it operates on an eigenstate of H 0 corresponding to the eigenvalue E. E − H0 (7.e. the operator (E − H0 )−1 is singular: i. positive.3).5) Note that we can recover Eq. and making use of Eq. the desired solution can be written |ψ = |φ + 1 H1 |ψ .4) by operating on the above equation with E − H0 . Thus. and is non-singular as long as > 0.10) . 2 ¯ h 171 (7.7. The physical significance of the ± signs will become apparent later on. Furthermore. otherwise the above solution is useless. (7. |ψ± = |φ + 1 H1 |ψ± . the solution satisfies the boundary condition |ψ → |φ as H1 → 0. The standard prescription is to make the energy eigenvalue E slightly complex.7) Adopting the Schr¨dinger representation. Thus. Equation (7. ψ(r) = φ(r) + 2m G(r. Unfortunately. ¯ h where ¯ 2 k2 h E= . (7.2 The Lipmann-Schwinger equation 7 SCATTERING THEORY Formally. r ) r |H1 |ψ d3 r . E − H0 ± i (7.6) is called the Lipmann-Schwinger equation. ψ± (r) = φ(r) + r 1 E − H0 ± i r r |H1 |ψ± d3 r . and can be inverted using standard Green’s function techniques.6) where is real.4) in the form 2m (7.. (7.

a stream of particles of definite momentum p = ¯ k). the integral equation (7.e. (7.7) takes the form 2 m exp(±i k |r − r | ) 1 r =− 2 .14) r E − H0 ± i 4π |r − r | ¯ h It is not entirely clear that the ± signs correspond on both sides of this equation. ψ± (r) = φ(r) − Let us suppose that the scattering Hamiltonian. H1 .11) Note that the solution (7.13) suggests that the kernel to Eq. as is easily proved by a more rigorous derivation of this result. r ) = δ(r − r ).12) = V(r ) ψ± (r ).15) exp(±i k |r − r | ) . (7. (7.13) simplifies to ψ± (r) = φ(r) − 2m ¯2 h exp(±i k |r − r |) V(r ) ψ± (r ) d3 r .18) .7) and (7. This implies that r |H1 |r = V(r) δ(r − r ). Eq.7. (7. they do. As is well-known. 4π |r − r | (7. the Green’s function for the Helmholtz problem is given by G(r.2 The Lipmann-Schwinger equation 7 SCATTERING THEORY where ( 2 + k2 ) G(r.10) becomes 2 m exp(±i k |r − r | ) r |H1 |ψ d3 r .17) Suppose that the initial state |φ is a plane-wave with wave-vector k (i. We can write r |H1 |ψ± = r |H1 |r r |ψ± d3 r (7.16) (7. In fact. (7. (7. 4π |r − r | (7.13) 2 4π |r − r | ¯ h A comparison of Eqs.. is only a function of the position operators. Thus. The associated wave-function takes the form r|k = exp( i k·r) .10) satisfies the boundary condition |ψ → |φ as H1 → 0. r ) = − Thus. (2π)3/2 172 (7. The ket corresponding to h this state is denoted |k .

where r − ^·r r (7.24) − 2 3/2 (2π) r 2π ¯ h The first term on the right-hand side is the incident wave.17) reduces to ψ(r)± exp( i k·r) m exp(±i k r) exp( i k ·r ) V(r ) ψ± (r ) d3 r . but propagate from the scattering region to the observation point.20) r (7. let us adopt the ordering r r .22) ^= r Clearly..2 The Lipmann-Schwinger equation 7 SCATTERING THEORY The wave-function is normalized such that k|k = = k|r r|k d3 r exp[−i r·(k − k )] 3 d r = δ(k − k ).7. 173 . The second term represents a spherical wave centred on the scattering region.e. The plus sign (on ψ± ) corresponds to a wave propagating away from the scattering region. It is easily demonstrated that |r − r | to first-order in r /r. In other words. (7.21) r is a unit vector which points from the scattering region to the observation point. (7. Let us calculate the wave-function ψ(r) a long way from the scattering region. (2π)3 (7. Note that exp(±i k |r − r | ) exp(±i k r) exp( i k ·r ).19) Suppose that the scattering potential V(r) is only non-zero in some relatively localized region centred on the origin (r = 0). k is the wave-vector for particles which possess the same energy as the incoming particles (i. Let us define k = k ^. whereas the minus sign corresponds to a wave propagating towards the scattering region. Eq. r (7. k = k).23) In the large-r limit. (7.

Recall. k) = − ¯2 h exp(−i k ·r ) V(r ) ψ(r ) d3 r 3/2 (2π) (7. (7. ψ(r) = exp( i k·r) + (2π)3/2 r (2π)2 m f(k . ¯2 h Let us define the differential cross-section dσ/dΩ as the number of particles per unit time scattered into an element of solid angle dΩ.29) (2π)3 m Likewise. the probability flux associated with the scattered wave-function. k)|2 ¯ h k ^.32) .2 The Lipmann-Schwinger equation 7 SCATTERING THEORY It is obvious that the former represents the physical solution.7. j= exp( i k·r) . from Sect.25) where (2π)2 m = − k |H1 |ψ . k) . the probability flux associated with the incident wave-function. dΩ |jinci | 174 (7. that the probability flux (i.28) ¯ h k. 4.26)   (7.30) is jscat Now.e.27) (7.31) dσ r2 dΩ |jscat | dΩ = . |f(k . k) . (2π)3/2 is jinci = (7. the wavefunction a long way from the scattering region can be written 1  exp( i kr) f(k . m Thus. r = (2π)3 m r2 (7. divided by the incident flux of particles. the particle flux) associated with a wave-function ψ is ¯ h Im(ψ∗ ψ). (2π)3/2 r (7. Thus. exp( i k r) f(k ..

ψ(r) = φ(r) − Suppose that the scattering is not particularly strong. we can obtain an expression for f(k . φ(r). V. In this case. k = k). it is reasonable to suppose that the total wave-function. According to the above equation the total wave-function is a superposition of the incident wave-function and lots of spherical-waves emitted from the scattering region. |f(k . The Born approximation yields f(k . (7. Recall that ψ(r) = r|ψ is the solution of the integral equation m exp( i k r) exp(−i k ·r ) V(r ) ψ(r ) d3 r .36) . This is always the case for scattering Hamiltonians of the form shown in Eq. k) − m 2π ¯ 2 h exp [ i (k − k )·r ] V(r ) d3 r .. does not differ substantially from the incident wave-function.3 The Born approximation Equation (7. Thus.15). 7. 175 (7. (7. because the quantity f(k .33) dΩ Thus.33) is not particularly useful.7. (2π)3/2 (7.3 The Born approximation 7 SCATTERING THEORY giving dσ = |f(k . as it stands.34) 2 r 2π ¯ h where φ(r) is the wave-function of the incident state. as well as the local value of the wave-function.e. (7. The strength of the spherical-wave emitted at a given point is proportional to the local value of the scattering potential. k)|2 . k) depends on the unknown ket |ψ . Note that the scattered particles possess h the same energy as the incoming particles (i.35) This is called the Born approximation. k)|2 gives the differential cross-section for particles with incident momentum ¯ k to be scattered into states whose momentum vectors are directed in h a range of solid angles dΩ about ¯ k . ψ(r). k) by making the substitution ψ(r) → φ(r) = exp( i k·r) . ψ.

Consider scattering by a Yukawa potential V(r) = V0 exp(−µ r) .38) ¯ q 0 h Note that f(k . (7. (7. k) − 2 r V(r ) sin(q r ) dr . (7.40) where V0 is a constant and 1/µ measures the “range” of the potential. θ is the angle of scattering. in the Born approximation. For a spherically symmetric potential.42) µ + q2 0 Thus. the differential cross-section for scattering by a Yukawa potential is dσ dΩ given that  2 m V0  1  . (7.44) . It is easily demonstrated that q ≡ |k − k | = 2 k sin(θ/2). Recall that the vectors k and k have the same length by energy conservation. f(k . µr (7.3 The Born approximation 7 SCATTERING THEORY Thus.41) ¯ µ q2 + µ 2 h since ∞ q exp(−µ r ) sin(q r ) dr = 2 . [2 k2 (1 − cos θ) + µ2 ]2 ¯2µ h 2 (7.43) q2 = 4 k2 sin2 (θ/2) = 2 k2 (1 − cos θ). (7. k) is just a function of q for a spherically symmetric potential.38) that 2 m V0 1 f(θ) = − 2 .39) − m 2π ¯ 2 h exp( i q r cos θ ) V(r ) r 2 dr sin θ dθ dφ . (7. f(k . k) giving 2m ∞ f(k .7.37) where θ is the angle subtended between the vectors k and k . In other words. k) is proportional to the Fourier transform of the scattering potential V(r) with respect to the wave-vector q ≡ k − k . It follows from Eq. 176 (7.
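The Yukawa result above is easy to check by evaluating the Born radial integral numerically. In the sketch below the potential parameters are illustrative, and units with ℏ = m = 1 are assumed:

    # Check of the Born-approximation Yukawa result: the radial integral
    # f(q) = -(2 m / hbar^2 q) int_0^inf r V(r) sin(q r) dr should reproduce
    # -2 m V0 / [hbar^2 mu (q^2 + mu^2)].
    import numpy as np
    from scipy.integrate import quad

    V0, mu = -0.3, 1.0              # illustrative potential parameters

    def V(r):
        return V0 * np.exp(-mu * r) / (mu * r)

    def f_numeric(q):
        val, _ = quad(lambda r: r * V(r) * np.sin(q * r), 0.0, np.inf, limit=200)
        return -2.0 / q * val

    def f_analytic(q):
        return -2.0 * V0 / (mu * (q**2 + mu**2))

    for q in [0.5, 1.0, 3.0]:
        print(f"q = {q}:  numeric {f_numeric(q):+.5f}   analytic {f_analytic(q):+.5f}")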

(7. 4 16π 0 E sin (θ/2)  2 (7. In this limit the Born differential crosssection becomes 2  2 m Z Z e2  dσ 1  . (7. so the above equation can be rewritten h dσ dΩ 1 Z Z e2   . In the high-k limit.3 The Born approximation 7 SCATTERING THEORY The Yukawa potential reduces to the familiar Coulomb potential as µ → 0.47) yields 2 m |V0 | 1.45) dΩ 4π 0 ¯ 2 h 16 k4 sin4 (θ/2) Recall that ¯ k is equivalent to |p|. from Eq. (7. (7.50) ¯2 µk h This inequality becomes progressively easier to satisfy as k increases.46) where E = p2 /2 m is the kinetic energy of the incident particles.7.46) is the classical Rutherford scattering cross-section formula. giving 2 m |V0 | ¯ 2 µ2 h 1 (7. k we can replace exp( i k r ) by unity.47) µ) Consider the special case of the Yukawa potential. Equation (7. ¯ 2 µ2 h (7. (i. provided that V0 /µ → Z Z e2 /4π 0 .e. Thus. 177 .7. At low energies. It follows. if the potential is strong enough to form a bound state then the Born approximation is likely to break down. Eq.17).49) where V0 is negative. The Born approximation is valid provided that ψ(r) is not too different from φ(r) in the scattering region. that the condition for ψ(r) φ(r) in the vicinity of r = 0 is m 2π ¯ 2 h exp( i k r ) V(r ) d3 r r 1.. implying that the Born approximation is more accurate at high incident particle energies. (7.48) as the condition for the validity of the Born approximation. The condition for the Yukawa potential to develop a bound state is 2 m |V0 | ≥ 2.

points in a different direction. ϕ) = f(θ).56) ..4 Partial waves 7 SCATTERING THEORY 7. both φ(r) and ψ(r) satisfy the free space Schr¨dinger equation o ( 2 + k2 ) ψ = 0. The direction of k is specified by the polar angle θ (i. ϕ). and an azimuthal angle ϕ about the z-axis. 5 via Pl (cos θ) = 4π Yl0 (θ. (2π)3/2 r   (7. It follows that neither the incident wave-function.52) (7. 2l + 1 178 (7.51) exp( i k z) exp( i k r cos θ) = . without loss of generality. φ(r) = nor the total wave-function. 1  exp( i k r) f(θ)  ψ(r) = exp( i k r cos θ) + .7.e.54) What is the most general solution to this equation in spherical polar coordinates which does not depend on the azimuthal angle ϕ? Separation of variables yields ψ(r. but.e. (2π)3/2 (2π)3/2 (7. θ) = l Rl (r) Pl (cos θ).4 Partial waves We can assume. the angle subtended between the two wavevectors). V(r) = V(r)] the scattering amplitude is a function of θ only: f(θ. (7.53) depend on the azimuthal angle ϕ. (7. The Legendre functions are related to the spherical harmonics introduced in Sect. Outside the range of the scattering potential.55) since the Legendre functions Pl (cos θ) form a complete set in θ-space.38) strongly suggests that for a spherically symmetric scattering potential [i. that the incident wave-function is characterized by a wave-vector k which is aligned parallel to the z-axis.. in general. Equation (7. The scattered wave-function is characterized by a wave-vector k which has the same magnitude as k.

1 δnm . It is easily demonstrated that r2 jl (y) = y l 1 d − y dy l l sin y .64) It is well-known that (−i)l jl (y) = 2 1 exp( i y µ) Pl (µ) dµ. (7.63) Pn (µ) Pm (µ) dµ = n + 1/2 −1 so we can invert the above expansion to give 1 al jl (k r) = (l + 1/2) −1 exp( i k r µ) Pl (µ) dµ.61) We can write exp( i k r cos θ) = l al jl (k r) Pl (cos θ). because they are not well-behaved as r → 0.57) + 2r 2 dr dr The two independent solutions to this equation are called a spherical Bessel function.59) ηl (y) = −y 1 d − y dy cos y . y jl (y) → (7. jl (k r). The Legendre functions are orthonormal. y cos(y − l π/2) ηl (y) → − . (7. The asymptotic behaviour of these functions in the limit y → ∞ is sin(y − l π/2) . y l (7. Note there are no Neumann functions in this expansion.55) can be combined to give dRl d2 Rl + [k2 r2 − l (l + 1)]Rl = 0. (7. −1 (7. ηl (k r).62) where the al are constants. (7.60) (7.58) (7.54) and (7.7. and a Neumann function. y Note that spherical Bessel functions are well-behaved in the limit y → 0 .4 Partial waves 7 SCATTERING THEORY Equations (7.65) 179 . whereas Neumann functions become singular.

Eq. which contains both incoming and outgoing spherical-waves. l kr kr  (7. 10. 1965). The most general solution for the total wave-function outside the scattering region is 1 [Al jl (k r) + Bl ηl (k r)] Pl (cos θ).69) where use has been made of Eqs.70) where the sine and cosine functions have been combined to give a sine function which is phase-shifted by δl .71) × Pl (cos θ). Equation (7.4 Partial waves 7 SCATTERING THEORY where l = 0.68) ψ(r) = (2π)3/2 l where the Al and Bl are constants. al = il (2 l + 1). Thus. kr (7. 2.61).14].60)–(7.67) The above expression tells us how to decompose a plane-wave into a series of spherical-waves (or “partial waves”). (7. 1. the total wave-function reduces to ψ(r) 1 (2π)3/2  l cos(k r − l π/2)  sin(k r − l π/2) A − Bl Pl (cos θ).66) giving exp( i k r cos θ) = l il (2 l + 1) jl (k r) Pl (cos θ). What is the source of the incoming waves? Obviously.7. (7.1. New York NY. In the large-r limit.70) yields ψ(r) 1 (2π)3/2 Cl l exp[ i (k r − l π/2 + δl )] − exp[−i (k r − l π/2 + δl )] 2ikr (7. The above expression can also be written ψ(r) 1 (2π)3/2 Cl l sin(k r − l π/2 + δl ) Pl (cos θ). because its region of validity does not include the origin. Note that the Neumann functions are allowed to appear in this expansion. (7. · · · [see Abramowitz and Stegun (Dover. they must be part of the large-r asymptotic 180 . (7.

5 The optical theorem The differential scattering cross-section dσ/dΩ is simply the modulus squared of the scattering amplitude f(θ).75) Clearly. (7. 181 . It follows from Eqs.. 7.72) × Pl (cos θ) in the large-r limit. k (7. In fact.74) Thus.72) that Cl = (2 l + 1) exp[ i (δl + l π/2)].5 The optical theorem 7 SCATTERING THEORY expansion of the incident wave-function. it is easily seen that φ(r) 1 (2π)3/2 il (2l + 1) l exp[ i (k r − l π/2)] − exp[−i (k r − l π/2)] 2ikr (7. This implies that the coefficients of the incoming spherical waves in the large-r expansions of ψ(r) and φ(r) must be equal.53) give (2π)3/2 [ψ(r) − φ(r)] = exp( i k r) f(θ).71) and (7. spherical-waves) is equivalent to determining the phase-shifts δ l . The total cross-section is given by σtotal = |f(θ)|2 dΩ 1 1 = 2 k dϕ −1 dµ l l (2 l + 1) (2 l + 1) exp[ i (δl − δl ] (7.52) and (7. determining the scattering amplitude f(θ) via a decomposition into partial waves (i.73) Note that the right-hand side consists only of an outgoing spherical wave. (7.73) yield f(θ) = ∞ l=0 (2 l + 1) exp( i δl ) sin δl Pl (cos θ).7.76) × sin δl sin δl Pl (µ) Pl (µ). (7. Now. r (7.e. Eqs. Eqs.71)–(7. (7.

A comparison of this result with Eq.6 Determination of phase-shifts 7 SCATTERING THEORY where µ = cos θ. It is a reflection of the fact that the very existence of scattering requires scattering in the forward (θ = 0) direction in order to interfere with the incident wave.80) 2 k is the lth partial cross-section: i. l (7.6 Determination of phase-shifts Let us now consider how the phase-shifts δl can be evaluated. It is usual to write σtotal = where ∞ l=0 σl .63). (7.. σl = 7. Consider a spherically symmetric potential V(r) which vanishes for r > a. This result is known as the optical theorem.54). (7. (7. where a is termed the range of the potential.81) 182 . It follows that σtotal = 4π k2 (2 l + 1) sin2 δl . The most general solution which is o consistent with no incoming spherical-waves is 1 ψ(r) = (2π)3/2 ∞ l=0 il (2 l + 1) Al (r) Pl (cos θ).e. the wave-function ψ(r) satisfies the free-space Schr¨dinger equation (7.75) yields 4π σtotal = Im [f(0)] .7.79) 4π (2 l + 1) sin2 δl (7.77) where use has been made of Eq. Note that the maximum value for the lth partial cross-section occurs when the phase-shift δl takes the value π/2. the contribution to the total cross-section from the lth partial wave. In the region r > a. and thereby reduce the probability current in this direction.78) k since Pl (1) = 1. (7. (7.
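The optical theorem stated above can be checked numerically without reference to any particular potential, since it involves only the phase-shifts. The δ_l values in the sketch below are arbitrary illustrative numbers:

    # Direct check of the optical theorem, sigma_total = (4 pi / k) Im f(0),
    # for an arbitrary (illustrative) set of phase-shifts delta_l.
    import numpy as np

    k = 1.3                                           # wave-number (illustrative)
    delta = np.array([0.9, 0.45, 0.15, 0.04, 0.01])   # delta_l for l = 0..4
    l = np.arange(len(delta))

    f0 = np.sum((2 * l + 1) * np.exp(1j * delta) * np.sin(delta)) / k   # P_l(1) = 1
    sigma = 4 * np.pi / k**2 * np.sum((2 * l + 1) * np.sin(delta)**2)

    print("sigma_total        =", sigma)
    print("(4 pi/k) Im f(0)   =", 4 * np.pi / k * f0.imag)   # agrees, as required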

85) (7.88) ensures that the radial wave-function is well-behaved at the origin. k a ηl (k a) − βl+ ηl (k a)   (7. (7.6 Determination of phase-shifts 7 SCATTERING THEORY where Al (r) = exp( i δl ) [ cos δl jl (k r) − sin δl ηl (k r) ] . We can launch a well-behaved solution of the above equation from r = 0.84) Thus. the problem of determining the phase-shift δl is equivalent to that of obtaining βl+ . r  (7.83) where jl (x) denotes djl (x)/dx. and form the logarithmic derivative βl− = 1 d(ul /r) (ul /r) dr 183 .82) Note that Neumann functions are allowed to appear in the above expression. The logarithmic derivative of the lth radial wave-function Al (r) just outside the range of the potential is given by βl+ cos δl jl (k a) − sin δl ηl (k a)  . ul (r) . because its region of validity does not include the origin (where V = 0). = ka cos δl jl (k a) − sin δl ηl (k a) tan δl = k a jl (k a) − βl+ jl (k a) . dr2 r2 ¯ h The boundary condition ul (0) = 0 (7.87) (7. etc. r=a (7.89) . integrate out to r = a. The most general solution to Schr¨dinger’s equation inside the range of the o potential (r < a) which does not depend on the azimuthal angle ϕ is 1 ψ(r) = (2π)3/2 where Rl (r) = and  ∞ l=0 il (2 l + 1) Rl (r) Pl (cos θ).7.86) d2 ul  2 2m l (l + 1)  + k − 2 V− ul = 0. The above equation can be inverted to give (7.

7.7 Hard sphere scattering

Let us test out this scheme using a particularly simple example. Consider scattering by a hard sphere, for which the potential is infinite for r < a, and zero for r > a. It follows that ψ(r) is zero in the region r < a, which implies that u_l = 0 for all l. Thus,

\[ \beta_{l-} = \beta_{l+} = \infty, \tag{7.91} \]

for all l. It follows from Eq. (7.84) that

\[ \tan\delta_l = \frac{j_l(ka)}{\eta_l(ka)}. \tag{7.92} \]

Consider the l = 0 partial wave, which is usually referred to as the s-wave. Equation (7.92) yields

\[ \tan\delta_0 = \frac{\sin(ka)/ka}{-\cos(ka)/ka} = -\tan(ka), \tag{7.93} \]

where use has been made of Eqs. (7.58)–(7.59). It follows that

\[ \delta_0 = -ka. \tag{7.94} \]

The s-wave radial wave function is

\[ A_0(r) = \exp(-i\,ka)\, \frac{\cos(ka)\,\sin(kr) - \sin(ka)\,\cos(kr)}{kr} = \exp(-i\,ka)\, \frac{\sin[k\,(r-a)]}{kr}. \tag{7.95} \]

The corresponding radial wave-function for the incident wave takes the form

\[ \tilde{A}_0(r) = \exp(-i\,ka)\, \frac{\sin(kr)}{kr}. \tag{7.96} \]

It is clear that the actual l = 0 radial wave-function is similar to the incident l = 0 wave-function, except that it is phase-shifted by ka.

Let us consider the low and high energy asymptotic limits of tan δ_l. Low energy means ka ≪ 1. In this regime, the spherical Bessel functions and Neumann functions reduce to:

\[ j_l(kr) \simeq \frac{(kr)^l}{(2l+1)!!}, \tag{7.97} \]

\[ \eta_l(kr) \simeq -\frac{(2l-1)!!}{(kr)^{l+1}}, \tag{7.98} \]

where n!! = n (n − 2) (n − 4) · · · 1. It follows that

\[ \tan\delta_l = \frac{-(ka)^{2l+1}}{(2l+1)\,[(2l-1)!!]^2}. \tag{7.99} \]

It is clear that we can neglect δ_l, with l > 0, with respect to δ_0. In other words, at low energy only s-wave scattering (i.e., spherically symmetric scattering) is important.
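To see how quickly the higher partial waves die away, Eq. (7.99) gives, for example (a worked check rather than a statement from the notes),

\[ \tan\delta_1 \simeq -\frac{(ka)^3}{3}, \qquad \tan\delta_2 \simeq -\frac{(ka)^5}{45}, \]

so for ka = 0.1 the p-wave phase-shift is already smaller than the s-wave phase-shift δ_0 = −ka by a factor of about 300.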

It follows from Eqs. (7.33), (7.75), and (7.94) that

\[ \frac{d\sigma}{d\Omega} = \frac{\sin^2(ka)}{k^2} \simeq a^2 \tag{7.100} \]

for ka ≪ 1. Note that the total cross-section

\[ \sigma_{\rm total} = \int \frac{d\sigma}{d\Omega}\, d\Omega = 4\pi\, a^2 \tag{7.101} \]

is four times the geometric cross-section πa² (i.e., the cross-section for classical particles bouncing off a hard sphere of radius a). However, low energy scattering implies relatively long wave-lengths, so we do not expect to obtain the classical result in this limit.

Consider the high energy limit ka ≫ 1. At high energies, all partial waves up to l_max = ka contribute significantly to the scattering cross-section. It follows from Eq. (7.77) that

\[ \sigma_{\rm total} = \frac{4\pi}{k^2} \sum_{l=0}^{l_{\rm max}} (2l+1)\,\sin^2\delta_l. \tag{7.102} \]

With so many l values contributing, it is legitimate to replace sin²δ_l by its average value 1/2. Thus,

\[ \sigma_{\rm total} = \sum_{l=0}^{ka} \frac{2\pi}{k^2}\, (2l+1) \simeq 2\pi\, a^2. \tag{7.103} \]

This is twice the classical result, which is somewhat surprising, since we might expect to obtain the classical result in the short wave-length limit. In fact, incident waves with impact parameters less than a must be deflected. However, in order to produce a "shadow" behind the sphere, there must be scattering in the forward direction (recall the optical theorem) to produce destructive interference with the incident plane-wave. In fact, the interference is not completely destructive, and the shadow has a bright spot in the forward direction. The effective cross-section associated with this bright spot is πa² which, when combined with the cross-section for classical reflection, πa², gives the actual cross-section of 2πa².
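Both limits are easy to confirm numerically (an illustrative check of my own, not taken from the notes), using Eq. (7.92) for the phase-shifts and Eq. (7.77) for the partial-wave sum; lengths are measured in units of the sphere radius a.

    # Hard-sphere cross-section from tan(delta_l) = j_l(ka)/eta_l(ka), Eq. (7.92),
    # summed as in Eq. (7.77).  Lengths are in units of the sphere radius a.
    import numpy as np
    from scipy.special import spherical_jn, spherical_yn

    def hard_sphere_sigma(ka, lmax=100):
        l = np.arange(lmax + 1)
        delta = np.arctan2(spherical_jn(l, ka), spherical_yn(l, ka))
        return (4.0*np.pi/ka**2) * np.sum((2*l + 1)*np.sin(delta)**2)

    print(hard_sphere_sigma(0.01)/np.pi)   # close to 4: sigma -> 4 pi a^2 at low energy
    print(hard_sphere_sigma(50.0)/np.pi)   # approaches 2 slowly: sigma -> 2 pi a^2 at high energy

The slow approach to 2πa² at high energy reflects the diffractive bright-spot contribution discussed above.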

7.8 Low energy scattering

At low energies (i.e., when 1/k is much larger than the range of the potential) partial waves with l > 0, in general, make a negligible contribution to the scattering cross-section. It follows that, at these energies, with a finite range potential, only s-wave scattering is important.

As a specific example, let us consider scattering by a finite potential well, characterized by V = V_0 for r < a, and V = 0 for r ≥ a. Here, V_0 is a constant. The potential is repulsive for V_0 > 0, and attractive for V_0 < 0. The outside wave-function is given by [see Eq. (7.82)]

\[ A_0(r) = \exp(\,i\,\delta_0)\, [\,j_0(kr)\,\cos\delta_0 - \eta_0(kr)\,\sin\delta_0\,] \tag{7.104} \]

\[ \phantom{A_0(r)} = \exp(\,i\,\delta_0)\, \frac{\sin(kr+\delta_0)}{kr}, \tag{7.105} \]

where use has been made of Eqs. (7.58)–(7.59). The inside wave-function follows from Eq. (7.87). We obtain

\[ A_0(r) = B\, \frac{\sin(k' r)}{r}, \tag{7.106} \]

where use has been made of the boundary condition (7.88). Here, B is a constant, and

\[ E - V_0 = \frac{\hbar^2 k'^2}{2m}. \tag{7.107} \]

Note that Eq. (7.106) only applies when E > V_0. For E < V_0, we have

\[ A_0(r) = B\, \frac{\sinh(\kappa r)}{r}, \tag{7.108} \]

where

\[ V_0 - E = \frac{\hbar^2 \kappa^2}{2m}. \tag{7.109} \]

Matching A_0(r), and its radial derivative at r = a, yields

\[ \tan(ka+\delta_0) = \frac{k}{k'}\, \tan(k'a) \tag{7.110} \]

for E > V_0, and

\[ \tan(ka+\delta_0) = \frac{k}{\kappa}\, \tanh(\kappa a) \tag{7.111} \]

for E < V_0.

Consider an attractive potential, for which E > V_0. Suppose that |V_0| ≫ E (i.e., the depth of the potential well is much larger than the energy of the incident particles), so that k' ≫ k. It follows from Eq. (7.110) that, unless tan(k'a) becomes extremely large, the right-hand side is much less than unity, so, replacing the tangent of a small quantity with the quantity itself, we obtain

\[ ka + \delta_0 \simeq \frac{k}{k'}\, \tan(k'a). \tag{7.112} \]

This yields

\[ \delta_0 \simeq ka \left[\frac{\tan(k'a)}{k'a} - 1\right]. \tag{7.113} \]
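As a rough sanity check (my own, with assumed parameters in units where ℏ = m = 1), the approximate phase-shift (7.113) can be compared against the exact value implied by Eq. (7.110):

    # Compare the exact s-wave phase-shift from Eq. (7.110) with the
    # low-energy estimate of Eq. (7.113), for an assumed deep attractive well.
    import numpy as np

    hbar, m, a = 1.0, 1.0, 1.0
    V0, E = -8.0, 0.01                   # assumed well depth and (low) incident energy
    k = np.sqrt(2.0*m*E)/hbar
    kp = np.sqrt(2.0*m*(E - V0))/hbar    # interior wave-number k'

    delta0_exact = np.arctan((k/kp)*np.tan(kp*a)) - k*a    # Eq. (7.110), principal branch
    delta0_approx = k*a*(np.tan(kp*a)/(kp*a) - 1.0)        # Eq. (7.113)
    print(delta0_exact, delta0_approx)

For these parameters the two results agree to within a fraction of a percent, as expected when ka ≪ 1.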

According to Eq. (7.102), the scattering cross-section is given by

\[ \sigma_{\rm total} = \frac{4\pi}{k^2}\, \sin^2\delta_0 = 4\pi\, a^2 \left[\frac{\tan(k'a)}{k'a} - 1\right]^2. \tag{7.114} \]

Now

\[ k'a = \sqrt{k^2 a^2 + \frac{2\,m\,|V_0|\,a^2}{\hbar^2}}, \tag{7.115} \]

so for sufficiently small values of ka,

\[ k'a \simeq \sqrt{\frac{2\,m\,|V_0|\,a^2}{\hbar^2}}. \tag{7.116} \]

It follows that the total (s-wave) scattering cross-section is independent of the energy of the incident particles (provided that this energy is sufficiently small).

Note that there are values of k'a (e.g., k'a ≃ 4.49) at which δ_0 → π, and the scattering cross-section (7.114) vanishes, despite the very strong attraction of the potential. In reality, the cross-section is not exactly zero, because of contributions from l > 0 partial waves. But, at low incident energies, these contributions are small. It follows that there are certain values of V_0 and k which give rise to almost perfect transmission of the incident wave. This is called the Ramsauer-Townsend effect, and has been observed experimentally.

7.9 Resonances

There is a significant exception to the independence of the cross-section on energy. Suppose that the quantity √(2m|V_0|a²/ℏ²) is slightly less than π/2. As the incident energy increases, k'a, which is given by Eq. (7.115), can reach the value π/2. In this case, tan(k'a) becomes infinite, so we can no longer assume that the right-hand side of Eq. (7.110) is small. In fact, it follows from Eq. (7.110) that, at the value of the incident energy at which k'a = π/2, ka + δ_0 = π/2, or δ_0 ≃ π/2 (since we are assuming that ka ≪ 1). This implies that

\[ \sigma_{\rm total} = \frac{4\pi}{k^2}\, \sin^2\delta_0 = 4\pi\, a^2\, \frac{1}{k^2 a^2}. \tag{7.117} \]
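Spelling out the last step (a short check, not an addition to the argument): with δ_0 ≃ π/2 − ka and ka ≪ 1,

\[ \sin^2\delta_0 = \cos^2(ka) \simeq 1, \]

so that σ_total ≃ 4π/k², which is Eq. (7.117).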

Note that the cross-section now depends on the energy. Furthermore, the magnitude of the cross-section is much larger than that given in Eq. (7.114) for k'a ≠ π/2 (since ka ≪ 1).

The origin of this rather strange behaviour is quite simple. The condition

\[ \sqrt{\frac{2\,m\,|V_0|\,a^2}{\hbar^2}} = \frac{\pi}{2} \tag{7.118} \]

is equivalent to the condition that a spherical well of depth V_0 possesses a bound state at zero energy. Thus, for a potential well which satisfies the above equation, the energy of the scattering system is essentially the same as the energy of the bound state. In this situation, an incident particle would like to form a bound state in the potential well. However, the bound state is not stable, since the system has a small positive energy. Nevertheless, this sort of resonance scattering is best understood as the capture of an incident particle to form a metastable bound state, and the subsequent decay of the bound state and release of the particle. The cross-section for resonance scattering is generally far higher than that for non-resonance scattering.
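The connection with a zero-energy bound state can be made explicit (a standard argument, sketched here for completeness rather than quoted from the text). A bound s-state has interior wave-function u_0(r) ∝ sin(k'r), while in the limit of zero binding energy the exterior wave-function exp(−κr) reduces to a constant. Continuity of the logarithmic derivative at r = a then requires

\[ k'\cot(k'a) = 0 \quad\Longrightarrow\quad k'a = \frac{\pi}{2}, \qquad k' = \frac{\sqrt{2\,m\,|V_0|}}{\hbar} \;\;{\rm at}\;\; E = 0, \]

which is precisely the condition (7.118).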

We have seen that there is a resonant effect when the phase-shift of the s-wave takes the value π/2. There is nothing special about the l = 0 partial wave, so it is reasonable to assume that there is a similar resonance when the phase-shift of the lth partial wave is π/2. Suppose that δ_l attains the value π/2 at the incident energy E_0, so that

\[ \delta_l(E_0) = \frac{\pi}{2}. \tag{7.119} \]

Let us expand cot δ_l in the vicinity of the resonant energy:

\[ \cot\delta_l(E) = \cot\delta_l(E_0) + \left[\frac{d\cot\delta_l}{dE}\right]_{E=E_0} (E-E_0) + \cdots \tag{7.120} \]

\[ \phantom{\cot\delta_l(E)} = -\left[\frac{1}{\sin^2\delta_l}\,\frac{d\delta_l}{dE}\right]_{E=E_0} (E-E_0) + \cdots. \tag{7.121} \]

Defining

\[ \left[\frac{d\delta_l(E)}{dE}\right]_{E=E_0} = \frac{2}{\Gamma}, \tag{7.122} \]

we obtain

\[ \cot\delta_l(E) = -\frac{2}{\Gamma}\,(E-E_0) + \cdots. \tag{7.123} \]

Recall, from Eq. (7.80), that the contribution of the lth partial wave to the scattering cross-section is

\[ \sigma_l = \frac{4\pi}{k^2}\,(2l+1)\,\sin^2\delta_l = \frac{4\pi}{k^2}\,(2l+1)\,\frac{1}{1+\cot^2\delta_l}. \tag{7.124} \]

Thus,

\[ \sigma_l \simeq \frac{4\pi}{k^2}\,(2l+1)\,\frac{\Gamma^2/4}{(E-E_0)^2 + \Gamma^2/4}. \tag{7.125} \]

This is the famous Breit-Wigner formula. The variation of the partial cross-section σ_l with the incident energy has the form of a classical resonance curve. The quantity Γ is the width of the resonance (in energy). We can interpret the Breit-Wigner formula as describing the absorption of an incident particle to form a metastable state, of energy E_0, and lifetime τ = ℏ/Γ (see Sect. 6.17).
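One further property worth noting (not spelled out in the text): evaluating Eq. (7.125) at E = E_0 ± Γ/2 gives

\[ \sigma_l(E_0 \pm \Gamma/2) = \frac{4\pi}{k^2}\,(2l+1)\,\frac{\Gamma^2/4}{\Gamma^2/4 + \Gamma^2/4} = \tfrac{1}{2}\,\sigma_l(E_0), \]

so Γ is the full width of the resonance curve at half of its maximum height.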
