
Physics 611 & 612, 2008-2009

Advanced Quantum Mechanics


Instructors: Dr. Alakabha Datta & Dr. Roger Waxler
Notes Compiled by: Phil Blom
Texts Used:
Modern Quantum Mechanics by J.J. Sakurai,
Quantum Mechanics (Non-Relativistic Theory) by L.D. Landau and E.M. Lifshitz
Last updated: May 26, 2009
Contents
1 Review of Undergraduate Level Quantum Mechanics
1.1 The Schrodinger Equation
1.2 Infinite Square Well
1.3 Quantum Harmonic Oscillator
1.4 Free Particle
1.5 Delta Function Potential
1.6 Finite Square Well
1.7 The Hydrogen Atom

I Fundamental Concepts

2 Kets, Bras, and Operators
2.1 Kets and Bras
2.2 Operators
2.3 Hermitian Operators - Eigenvalues and Orthonormality
2.4 Base Kets
2.5 The Projection Operator
2.6 Matrix Representation of an Operator
2.7 Matrix Representation of Bras and Kets
3 Experiments and Measurement
3.1 Expectation Values of Operators
3.2 The Stern-Gerlach Experiment and the Failure of Classical Mechanics
4 Constructing Spin 1/2 Operators and Kets
4.1 Spin Summary
5 Compatibility and Operators
5.1 Incompatible Operators
6 Measurements of Compatible and Incompatible Operators
7 The Uncertainty Principle
8 Transformations of Base Kets
8.1 Diagonalization of Transform Operators
8.2 Unitary Equivalent Observables
9 Operators with Continuous Spectra
9.1 The Position Operator
9.2 The Momentum Operator - Translation Method
9.3 The Wave Function - Some New Ideas
9.4 Transformations in Position and Momentum Space
9.5 The Gaussian Wave Packet

II QUANTUM DYNAMICS

10 The Time Evolution Operator
10.1 Infinitesimal Evolutions
10.2 The Schrodinger Equation
10.3 Time Independent Hamiltonian
10.4 Expectation Values of Stationary and Non-Stationary States
10.5 Spin Interactions - Evolution of Spin States
11 Energy-Time Uncertainty - An Overview
12 Schrodinger and Heisenberg Interpretations of Time Evolution
12.1 Ehrenfest Theorem
13 Quantum Simple Harmonic Oscillator
13.1 Matrix Elements of Operators and Expectation Values
13.2 Deriving the Wave Function
13.3 Time Evolution
13.4 Review of Quantum Simple Harmonic Oscillator
14 Stationary States - Bound and Continuous Solutions
14.1 Dirac Delta Function Potential
15 The Feynman Path Integral & The Propagator
16 Potentials and Gauge Transformations in Quantum Mechanics
16.1 Gravitational Gauge Transform
16.2 Electromagnetic Gauge Transforms
16.3 Alternate Approach to Gauge Transforms
17 The Aharonov-Bohm Effect

III THE THEORY OF ANGULAR MOMENTUM

18 Classical Rotations
18.1 Finite Rotations
18.2 Infinitesimal Rotations
18.3 Overview of Group Theory and Group Representation
19 Evolution of Spin Operators
19.1 Spin 1/2 System
20 The Rotation Operator D(R)
20.1 Experiment - Neutron Interferometer
21 Spin 1/2 System - 2 Component Form
22 The Density Matrix
22.1 Density Matrix For Continuous Variables and Time Evolutions
23 General Representation of Angular Momentum
23.1 Matrix Representation of Angular Momentum
23.2 Rotation Operator for Generalized Angular Momentum
23.3 Orbital Angular Momentum
23.4 Quantum Mechanical Treatment of the Spherical Harmonics
24 Addition of Angular Momentum
24.1 Overview - New Notation
24.2 A System of 2 Spin 1/2 Particles
24.3 Clebsch-Gordan Coefficients
24.4 Interaction Energy - Splitting of Energy Levels
25 Tensors in Quantum Mechanics
25.1 Reducible and Irreducible Tensors
25.2 Cartesian Vectors in Classical Mechanics
25.3 Cartesian Vectors in Quantum Mechanics
25.4 Spherical Tensors
25.5 Transformation of Spherical Tensors
25.6 Definition of Tensor Operators
25.7 Constructing Higher Rank Tensors
26 Wigner-Eckart Theorem

IV MATHEMATICAL REVIEW OF QUANTUM MECHANICS

26.1 Review of the Concepts of Quantum Mechanics
26.2 The Propagator
26.3 Qualitative Results of the Propagator
26.4 Examples Using the Propagator
27 Gauge Transforms - An Electron in a Magnetic Field
27.1 Example - Electron in Uniform Magnetic Field
28 The Simplistic Hydrogen Atom
28.1 Background - Setting Up The Problem
28.2 The Hydrogen Atom Wavefunction
28.3 Continuous Spectrum of Hydrogen

V PERTURBATION THEORY IN QUANTUM MECHANICS

29 Bound State Perturbation Theory
29.1 Normalizing the Solution
30 Degenerate Eigenvalue Perturbation Theory
30.1 First Order Corrections For Degenerate Eigenvalues
30.2 Example - Particle in a Symmetric Box in 2 Dimensions
30.3 More on Degenerate State Perturbations
30.4 Time Evolution of Degenerate States
31 Time Dependent Perturbations
31.1 Time Dependent Perturbation Theory
31.2 Periodic Perturbation Theory

VI THE QUASI-CLASSICAL CASE

32 The WKB Approximation
32.1 Connecting The Quasi-Classical Solutions Around A Turning Point: Airy Function Asymptotics Analysis
32.2 Connecting The Quasi-Classical Solutions Around A Turning Point: Complex Analysis
32.3 Bohr-Sommerfeld Quantization
33 Penetration Through a Potential Barrier - Quantum Tunneling
33.1 Example - Rigid Wall With A Finite Barrier
34 WKB Approximation for a Finite Potential Barrier

VII SCATTERING THEORY

35 Potential Scattering
35.1 Constant Energy Wave Packet Scattering
36 Scattering Theory - A More General Approach
36.1 Scattering from a Localized Potential: The Born Formula
36.2 Probability Flux Density

VIII HOMEWORK PROBLEMS

37 Semester I
38 Semester II

IX SEMESTER II FINAL EXAM

39 WKB Problem
40 Scattering Problem
41 Perturbation Problem
1 Review of Undergraduate Level Quantum Mechanics
1.1 The Schrodinger Equation
iħ ∂Ψ(r⃗,t)/∂t = −(ħ²/2m) ∇²Ψ(r⃗,t) + V(r⃗) Ψ(r⃗,t)

What is the wave function Ψ? Statistically, the wave function defines the probability of finding the particle in a given region of space at time t. In the one-dimensional case that is:

∫_a^b |Ψ(x,t)|² dx = probability of finding the particle between a and b.
Expectation Values:

⟨x⟩ = ∫ Ψ* x Ψ dx

⟨p⟩ = ∫ Ψ* ( (ħ/i) ∂/∂x ) Ψ dx
The Time Independent Schrodinger Equation:

−(ħ²/2m) ∇²ψ(r⃗) = (E − V(r⃗)) ψ(r⃗)

Ψ(r⃗,t) = Σ_{n=1}^∞ c_n ψ_n(r⃗) e^{−(i/ħ) E_n t}

where each ψ_n(r⃗) is called a stationary state of the wave function with a corresponding energy eigenvalue E_n.

There are several important solutions for simple potentials. These are the infinite square well, the harmonic oscillator, the free particle, the delta function potential, and the finite square well.
1.2 Infinite Square Well

V(x) = 0 for 0 ≤ x ≤ a;   V(x) = ∞ otherwise

E_n = n²π²ħ² / (2ma²)

ψ_n(x) = √(2/a) sin(nπx/a)
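A quick numerical sanity check of these formulas (a sketch, not from the notes; the values of ħ, m, and a are arbitrary illustrative units): the stated ψ_n should be normalized, distinct stationary states should be orthogonal, and the energies should scale as n².

```python
import numpy as np

hbar, m, a = 1.0, 1.0, 1.0  # arbitrary units, chosen for illustration

def E(n):
    # E_n = n^2 pi^2 hbar^2 / (2 m a^2)
    return n**2 * np.pi**2 * hbar**2 / (2 * m * a**2)

def psi(n, x):
    # psi_n(x) = sqrt(2/a) sin(n pi x / a)
    return np.sqrt(2 / a) * np.sin(n * np.pi * x / a)

x = np.linspace(0, a, 200001)
dx = x[1] - x[0]

# Normalization: integral of |psi_n|^2 over the well should be 1
norm = np.sum(psi(3, x)**2) * dx

# Orthogonality of distinct stationary states
overlap = np.sum(psi(1, x) * psi(2, x)) * dx
```

Since ψ_n vanishes at both walls, a simple Riemann sum is already accurate here.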
1.3 Quantum Harmonic Oscillator

V(x) = (1/2) mω²x²

a_± = (1/√(2ħmω)) (∓ip + mωx)

E_n = (n + 1/2) ħω

ψ_0(x) = (mω/πħ)^{1/4} e^{−(mω/2ħ) x²}

ψ_n(x) = (1/√(n!)) (a₊)ⁿ ψ_0(x)
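The ladder-operator structure can be checked numerically by truncating the infinite number basis at N levels (my construction for illustration; the truncation spoils the commutator only in the last row and column):

```python
import numpy as np

hbar, omega = 1.0, 1.0  # illustrative units
N = 6                   # truncate the infinite ladder at N levels

# In the number basis, a|n> = sqrt(n)|n-1>, so the annihilation operator
# is a matrix with sqrt(1), ..., sqrt(N-1) on the first superdiagonal.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T  # creation operator

# H = hbar omega (a^dag a + 1/2) should be diagonal with E_n = (n + 1/2) hbar omega
H = hbar * omega * (adag @ a + 0.5 * np.eye(N))
energies = np.diag(H)
```

The commutator [a, a†] equals the identity except in the final truncated level, which is an artifact of cutting off the ladder.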
1.4 Free Particle

V(x) = 0

Ψ(x,t) = (1/√(2π)) ∫_{−∞}^{∞} φ(k) e^{i(kx − ħk²t/2m)} dk

φ(k) = (1/√(2π)) ∫_{−∞}^{∞} Ψ(x,0) e^{−ikx} dx
1.5 Delta Function Potential

V(x) = −α δ(x)

ψ(x) = A e^{ikx} + B e^{−ikx},   x < 0
ψ(x) = F e^{ikx} + G e^{−ikx},   x > 0

k = √(2mE)/ħ

Continuity:

A + B = F + G
F − G = A(1 + 2iβ) − B(1 − 2iβ);   β = mα/(ħ²k)
Transmission and Reflection:

R = |B|²/|A|² = β²/(1 + β²);   T = |F|²/|A|² = 1/(1 + β²)

R + T = 1
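These coefficients are easy to sanity check numerically (a sketch; the energy and the strength α are arbitrary illustrative choices):

```python
import numpy as np

hbar, m, alpha = 1.0, 1.0, 1.0  # illustrative units
E = 0.7                          # any positive scattering energy

k = np.sqrt(2 * m * E) / hbar
beta = m * alpha / (hbar**2 * k)

R = beta**2 / (1 + beta**2)   # reflection coefficient
T = 1 / (1 + beta**2)         # transmission coefficient
```

By construction R + T = 1 for any energy, as required by probability conservation.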
1.6 Finite Square Well

V(x) = −V₀ for −a ≤ x ≤ a;   V(x) = 0 for |x| > a

ψ(x) = A e^{ikx} + B e^{−ikx},   x < −a;   k = √(2mE)/ħ
ψ(x) = C sin(lx) + D cos(lx),   −a < x < a;   l = √(2m(E + V₀))/ħ
ψ(x) = F e^{ikx},   x > a

Boundary Conditions:

A e^{−ika} + B e^{ika} = −C sin(la) + D cos(la)
ik ( A e^{−ika} − B e^{ika} ) = l ( C cos(la) + D sin(la) )
C sin(la) + D cos(la) = F e^{ika}
l ( C cos(la) − D sin(la) ) = ik F e^{ika}
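With the incident amplitude set to A = 1, the four matching conditions are a linear system for (B, C, D, F). A minimal numerical sketch (my parameter choices, in arbitrary units) solves the system and checks flux conservation |B|² + |F|² = 1:

```python
import numpy as np

hbar, m, V0, a, E = 1.0, 1.0, 2.0, 1.0, 0.5  # illustrative parameters
k = np.sqrt(2 * m * E) / hbar
l = np.sqrt(2 * m * (E + V0)) / hbar

# Unknowns (B, C, D, F) with incident amplitude A = 1.
# Rows encode the four matching conditions at x = -a and x = +a.
M = np.array([
    [np.exp(1j*k*a),        np.sin(l*a),     -np.cos(l*a),     0],
    [-1j*k*np.exp(1j*k*a), -l*np.cos(l*a),   -l*np.sin(l*a),   0],
    [0,                     np.sin(l*a),      np.cos(l*a),    -np.exp(1j*k*a)],
    [0,                     l*np.cos(l*a),   -l*np.sin(l*a),  -1j*k*np.exp(1j*k*a)],
], dtype=complex)
rhs = np.array([-np.exp(-1j*k*a), -1j*k*np.exp(-1j*k*a), 0, 0], dtype=complex)

B, C, D, F = np.linalg.solve(M, rhs)
R, T = abs(B)**2, abs(F)**2

# Known closed-form transmission coefficient for the finite well (for comparison)
T_closed = 1 / (1 + (V0**2 / (4 * E * (E + V0))) * np.sin(2 * l * a)**2)
```

Since the wavenumber k is the same on both sides of the well, R + T reduces to |B|² + |F|² for unit incident amplitude.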
1.7 The Hydrogen Atom

V(r,θ,φ) = −e²/(4πε₀ r)

ψ_{nlm} = R(r) Y_{lm}(θ,φ)

E_n = −[ (m/2ħ²) (e²/(4πε₀))² ] (1/n²) = E₁/n²

Hψ = E_n ψ;   L²ψ = ħ² l(l+1) ψ;   L_z ψ = ħm ψ
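Plugging SI values into the energy formula recovers the familiar −13.6 eV ground state (a sketch; constants are rounded CODATA values, and the electron mass is used rather than the reduced mass):

```python
import numpy as np

# SI constants (rounded CODATA values)
hbar = 1.054571817e-34   # J s
m_e  = 9.1093837015e-31  # kg
e    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F/m

def E_n(n):
    # E_n = -(m/(2 hbar^2)) (e^2/(4 pi eps0))^2 / n^2, in joules
    return -(m_e / (2 * hbar**2)) * (e**2 / (4 * np.pi * eps0))**2 / n**2

E1_eV = E_n(1) / e   # ground state energy in eV, approximately -13.6
```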
Part I
Fundamental Concepts
2 Kets, Bras, and Operators
Wednesday - 8/27
2.1 Kets and Bras
Any physical state is completely represented by a ket (|α⟩) in a vector space. Some properties of kets include:

Addition and Subtraction: |α⟩ ± |β⟩ = |γ⟩
Scaling: c|α⟩ for some possibly complex number c
Null Ket: 0|α⟩ = 0
Any physical observable is represented by an operator in the vector space. Operation of an operator
on a ket produces a new ket.
A|α⟩ = |β⟩
One mathematical relation of interest is that of eigenkets and eigenvalues. Consider some operator A, ket |α⟩, and complex number a such that:

A|α⟩ = a|α⟩

In this case, |α⟩ is the eigenket and a is the eigenvalue. Generally, an operator will have sets of eigenkets and corresponding eigenvalues. These sets may be finite (such as spin states) or infinite (energy levels of an infinite square well or the Hydrogen atom). Any ket can be expanded in terms of the complete set of eigenkets of an operator, just as any function can be expanded in terms of a set of eigenfunctions (e.g. Fourier Series).
For every ket |α⟩ there exists a corresponding object called a bra ⟨α| in a dual vector space. The relation between bras and kets is called Dual Correspondence and is written as:

|α⟩ ↔ ⟨α|

An important rule of using bras involves taking complex conjugates when scaling them. For example:

c|α⟩ ↔ ⟨α| c*
A|α⟩ ↔ ⟨α| A†
With kets and bras, one can define the inner product and its properties:

⟨β|α⟩ = c (a complex number);   ⟨β|α⟩ = ⟨α|β⟩*;   ⟨α|α⟩ ≥ 0

Because the inner product of a bra and ket of dual correspondence produces a positive real number, this can be used to normalize kets:

⟨ᾱ|ᾱ⟩ = 1;   |ᾱ⟩ = |α⟩ / √⟨α|α⟩
Finally, we can make a few statements about operators and their behavior in the vector space. Operators satisfy the following:

If for every ket and bra: X|α⟩ = Y|α⟩ and ⟨α|X† = ⟨α|Y†, then X = Y.

If for every ket: X|α⟩ = 0, then X is the null operator.

Commutative and Associative Laws of Addition:

(X + Y)|α⟩ = (Y + X)|α⟩;   [X + (Y + Z)]|α⟩ = [(X + Y) + Z]|α⟩

Multiplication is NOT commutative (though in some cases operators do commute, in general assume they do not):

(XY)|α⟩ ≠ (YX)|α⟩

Dual Correspondence reverses the order of operators:

(XY)|α⟩ ↔ ⟨α|Y†X†

Most operators are linear:

X[c₁|α₁⟩ + c₂|α₂⟩] = c₁X|α₁⟩ + c₂X|α₂⟩
Friday - 8/29
Similarly to the defined inner product ⟨a|b⟩, which results in a number, we can also define an outer product |a⟩⟨b|, which results in an operator. With this new definition, it is important to note a few illegal structures in the vector space:

We can only have an operator acting on a ket by preceding it.
Incorrect: |a⟩X   Correct: X|a⟩

We can only have an operator acting on a bra by following it.
Incorrect: Y⟨a|   Correct: ⟨a|Y

We cannot have multiple kets (or bras) describing the state of a system unless there are separate parts of the system, and in such a case the kets are combined into a single new ket.
Incorrect: |x⟩|+⟩   Correct: |x, +⟩
2.2 Operators

Operators are made up of matrix elements which we can find using several properties of bras and kets. For instance, the Associative Axiom states that for an operator defined as X = |b⟩⟨a| we can state that:

X|α⟩ = ( |b⟩⟨a| ) |α⟩ = |b⟩ ⟨a|α⟩

This also gives a useful property of such an operator: if X = |b⟩⟨a|, then X† = |a⟩⟨b|.

The elements of an operator can be written as follows. Let X|a⟩ = |β⟩, so that ⟨b|X|a⟩ = ⟨b|β⟩. Taking the complex conjugate, ⟨b|β⟩* = ⟨β|b⟩ = ⟨a|X†|b⟩, giving our main result:

⟨b|X|a⟩* = ⟨a|X†|b⟩

Note: in the case that the operator is Hermitian, X = X†, the above becomes ⟨b|X|a⟩* = ⟨a|X|b⟩.
2.3 Hermitian Operators - Eigenvalues and Orthonormality

Consider a Hermitian operator A with eigenkets |a₁⟩ and |a₂⟩ and eigenvalues a′₁ and a′₂:

⟨a₂|A|a₁⟩ = a′₁ ⟨a₂|a₁⟩;   ⟨a₂|A|a₁⟩ = ⟨a₂|A†|a₁⟩ = a′₂* ⟨a₂|a₁⟩

Subtracting the two expressions,

( a′₁ − a′₂* ) ⟨a₂|a₁⟩ = 0

This relation actually proves two results. Firstly, if a′₁ ≠ a′₂, then the term (a′₁ − a′₂*) cannot be zero, therefore in this case the inner product has to be zero. Alternately, if a′₁ = a′₂, then the inner product is not zero and therefore a′₁ = a′₁*, which means the eigenvalues are real. This can be summarized by:

a′₁ = a′₁*;   ⟨aᵢ|aⱼ⟩ = δᵢⱼ
2.4 Base Kets

Any general ket can be expanded in terms of an orthonormal basis. That is:

|α⟩ = Σᵢ Cᵢ |aᵢ⟩   →   ∫ C_p |p⟩ dp  (continuous spectrum)

This is analogous to expanding a vector in components A_x, A_y, A_z: the coefficients Cᵢ play the role of the components and the base kets |aᵢ⟩ play the role of the unit vectors.

The coefficients can be found by:

⟨aⱼ|α⟩ = Σᵢ Cᵢ ⟨aⱼ|aᵢ⟩ = Σᵢ Cᵢ δᵢⱼ = Cⱼ   →   Cⱼ = ⟨aⱼ|α⟩

From this result, we can define an Identity operator as:

|α⟩ = Σⱼ ⟨aⱼ|α⟩ |aⱼ⟩ = Σⱼ |aⱼ⟩⟨aⱼ|α⟩   →   Σⱼ |aⱼ⟩⟨aⱼ| = I
Note: Normalizing a General Ket. Given |α⟩ = Σᵢ Cᵢ |aᵢ⟩ with Cᵢ = ⟨aᵢ|α⟩, find ⟨α|α⟩:

⟨α|I|α⟩ = ⟨α| ( Σⱼ |aⱼ⟩⟨aⱼ| ) |α⟩ = Σⱼ ⟨α|aⱼ⟩⟨aⱼ|α⟩ = Σⱼ |⟨aⱼ|α⟩|² = Σⱼ Cⱼ* Cⱼ = Σⱼ |Cⱼ|² = ⟨α|α⟩ = 1
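In a finite-dimensional space these manipulations are plain linear algebra, which makes them easy to verify. A minimal sketch (my construction; any orthonormal basis works, here obtained from a QR factorization of a random complex matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# An orthonormal basis {|a_i>}: the columns of a random unitary matrix
Araw = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
basis, _ = np.linalg.qr(Araw)   # QR gives orthonormal columns

# Completeness: sum_i |a_i><a_i| = I
I = sum(np.outer(basis[:, i], basis[:, i].conj()) for i in range(3))

# Expand an arbitrary normalized ket: C_i = <a_i|alpha>
alpha = rng.normal(size=3) + 1j * rng.normal(size=3)
alpha = alpha / np.linalg.norm(alpha)
C = np.array([basis[:, i].conj() @ alpha for i in range(3)])

total_prob = np.sum(np.abs(C)**2)   # <alpha|alpha> = sum_j |C_j|^2 = 1
```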
2.5 The Projection Operator

We can define the projection operator Λᵢ:

Λᵢ = |aᵢ⟩⟨aᵢ|;   Λᵢ|α⟩ = Cᵢ|aᵢ⟩

This operator returns the state's projection on the i-th component. This is useful if we are looking for the probability of measuring some result from a given state.
2.6 Matrix Representation of an Operator
We can look at the individual elements of a matrix by applying the identity operator:
IXI = Σᵢ Σⱼ |aᵢ⟩ [ ⟨aᵢ|X|aⱼ⟩ ] ⟨aⱼ|

The object in the bracket is the matrix element with matching indices:

Xᵢⱼ = ⟨aᵢ|X|aⱼ⟩

As an example, suppose that X is a 3×3 matrix; the matrix elements are then:

X = [ ⟨a₁|X|a₁⟩  ⟨a₁|X|a₂⟩  ⟨a₁|X|a₃⟩ ]
    [ ⟨a₂|X|a₁⟩  ⟨a₂|X|a₂⟩  ⟨a₂|X|a₃⟩ ]
    [ ⟨a₃|X|a₁⟩  ⟨a₃|X|a₂⟩  ⟨a₃|X|a₃⟩ ]

Monday - 9/1 - LABOR DAY - NO CLASS
Wednesday - 9/3

From the definition of a single operator's matrix element, one can define the i, j component of a product of operators Z = XY by inserting the identity:

Zᵢⱼ = ⟨aᵢ|XY|aⱼ⟩ = Σₖ ⟨aᵢ|X|aₖ⟩⟨aₖ|Y|aⱼ⟩

Zᵢⱼ = Σₖ Xᵢₖ Yₖⱼ

This is simply the matrix resulting from multiplying the two matrix representations together.
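The identity-insertion step can be checked directly: building Z element by element from Σₖ Xᵢₖ Yₖⱼ reproduces ordinary matrix multiplication (a sketch with random complex matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Y = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# In the standard basis, <a_i| X |a_j> is just X[i, j]; inserting the
# identity sum_k |a_k><a_k| between X and Y gives Z_ij = sum_k X_ik Y_kj.
Z = np.array([[sum(X[i, k] * Y[k, j] for k in range(n))
               for j in range(n)] for i in range(n)])
```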
2.7 Matrix Representation of Bras and Kets

Consider the statement X|α⟩ = |β⟩. We can write this out instead as ⟨aᵢ|β⟩ = Σⱼ Xᵢⱼ ⟨aⱼ|α⟩. This has the same form as x⃗ = M x⃗′, which is a simple matrix-vector equation. As the operators are matrices, the bras and kets should be representable by vectors. Consider writing the relation |β⟩ = X|α⟩ as:

[ ⟨a₁|β⟩ ]   [ X₁₁ ... X₁ₙ ] [ ⟨a₁|α⟩ ]
[   ⋮    ] = [  ⋮        ⋮  ] [   ⋮    ]
[ ⟨aₙ|β⟩ ]   [ Xₙ₁ ... Xₙₙ ] [ ⟨aₙ|α⟩ ]

Further, the bra relation ⟨β| = ⟨α|X can be written as:

[ ⟨a₁|β⟩* ... ⟨aₙ|β⟩* ] = [ ⟨a₁|α⟩* ... ⟨aₙ|α⟩* ] [ X₁₁ ... X₁ₙ ]
                                                   [  ⋮        ⋮  ]
                                                   [ Xₙ₁ ... Xₙₙ ]

So, we can express a ket as a column vector and a bra as a row vector:

|α⟩ = [ ⟨a₁|α⟩ ]        ⟨α| = [ ⟨a₁|α⟩* ... ⟨aₙ|α⟩* ]
      [   ⋮    ]
      [ ⟨aₙ|α⟩ ]

Further, the inner and outer products can be expressed in matrix form:

⟨β|α⟩ = Σᵢ ⟨β|aᵢ⟩⟨aᵢ|α⟩ = Σᵢ ⟨aᵢ|β⟩* ⟨aᵢ|α⟩

X = |β⟩⟨α|   →   Xᵢⱼ = ⟨aᵢ|β⟩ ⟨aⱼ|α⟩*
2.7.1 Example: 2 State System

Consider the two-state system describing the spin states of an electron. Using the states |+⟩ and |−⟩ and the spin operator S_z, we have the relations:

S_z|+⟩ = (ħ/2)|+⟩;   S_z|−⟩ = −(ħ/2)|−⟩

We can find the matrix elements of the operator as:

S_z = [ ⟨+|S_z|+⟩  ⟨+|S_z|−⟩ ]  =  [ ħ/2    0  ]
      [ ⟨−|S_z|+⟩  ⟨−|S_z|−⟩ ]     [  0   −ħ/2 ]

Solved in HW:

|+⟩ = [ 1 ]        |−⟩ = [ 0 ]
      [ 0 ]              [ 1 ]

S₊ = ħ|+⟩⟨−| = ħ [ 0  1 ]        S₋ = ħ|−⟩⟨+| = ħ [ 0  0 ]
                 [ 0  0 ]                         [ 1  0 ]
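These two-state matrices are small enough to verify in a few lines (a sketch in units where ħ is kept as a symbol-like constant):

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1

up   = np.array([1, 0], dtype=complex)   # |+>
down = np.array([0, 1], dtype=complex)   # |->

Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)
Sp = hbar * np.outer(up, down.conj())    # S+ = hbar |+><-|
Sm = hbar * np.outer(down, up.conj())    # S- = hbar |-><+|
```

S₊ raises |−⟩ to |+⟩ and annihilates |+⟩; S₋ does the reverse.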
Friday - 9/5
3 Experiments and Measurement

The act of measuring a system causes the state of the system to collapse into a single state. If the state |α⟩ describes the system before measurement, then we can write that:

|α⟩ = Σ_{a′} C_{a′} |a′⟩,   C_{a′} = ⟨a′|α⟩

If a measurement is taken and the result is aᵢ, then the wave function will collapse:

A|α⟩ → aᵢ|aᵢ⟩;   |α⟩ → |aᵢ⟩

A second measurement will yield the same result no matter what the case, as |α⟩ has collapsed to only include |aᵢ⟩. The probability associated with obtaining aᵢ is related to the coefficient of that eigenket in the expansion of |α⟩:

P(aᵢ) = |C_{aᵢ}|²

In the case that the state |α⟩ is normalized, this is simply the square of the coefficient. For an ensemble of identically prepared states |α⟩, |α⟩, ..., |α⟩, the probabilities can be written in terms of the number of times a measurement yields |aᵢ⟩ versus the total number of systems:

P(aᵢ) = N_{aᵢ}/N

3.0.2 Example: Spin States

Let |α⟩ represent a state for a spin system. In this case, we can write:

|α⟩ = C₊|+⟩ + C₋|−⟩

C₊ = ⟨+|α⟩;   |C₊|² = P(|+⟩)
C₋ = ⟨−|α⟩;   |C₋|² = P(|−⟩)
3.0.3 Example: Infinite Square Well

Let |α⟩ represent a state for the energy of an infinite square well of length a in 1 dimension. We can write the states as:
|α⟩ = C₁|E₁⟩ + C₂|E₂⟩;   E_n = n²π²ħ²/(2ma²)

If a measurement yields that the system is in state |E₂⟩, then a second measurement will yield the same result. (Note: this analysis involves only stationary states; for the purposes of this example, an immediate second measurement will certainly yield the same state.)
3.1 Expectation Values of Operators

Given an operator A, one can write the expected value of A in the state |α⟩:

⟨A⟩ = ⟨α|A|α⟩ / ⟨α|α⟩

⟨A⟩ = Σ_{a′,a″} ⟨α|a″⟩⟨a″|A|a′⟩⟨a′|α⟩ = Σ_{a′,a″} ⟨α|a″⟩ a′ δ_{a″,a′} ⟨a′|α⟩ = Σ_{a′} a′ |⟨a′|α⟩|²

⟨A⟩ = Σ_{a′} a′ |C_{a′}|²
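The spectral form of the expectation value can be checked against the direct matrix element for any Hermitian operator (a sketch using a random Hermitian matrix and a random normalized state):

```python
import numpy as np

rng = np.random.default_rng(2)

# A random Hermitian operator and a normalized state
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# Direct expectation value <psi|A|psi>
direct = (psi.conj() @ A @ psi).real

# Spectral form: sum_a' a' |<a'|psi>|^2
vals, vecs = np.linalg.eigh(A)
C = vecs.conj().T @ psi               # C_a' = <a'|psi>
spectral = np.sum(vals * np.abs(C)**2)
```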
3.2 The Stern-Gerlach Experiment and the Failure of Classical Mechanics

In this experiment, an oven is heated and emits a beam of silver (Ag) atoms. The outermost electron of Ag is a 5s electron, so the atom's magnetic moment comes entirely from that electron's spin. The atom stream is put through a small slit and directed through an inhomogeneous magnetic field. According to the laws of classical mechanics, the vertical component of the magnetic moment can have any value and the output from the field should be a continuous distribution along the various angles. Instead, the beam is split into two separate beams, one corresponding to the up spin and one to the down spin. The force on the particles is:

F_z = −∂U/∂z = μ_z ∂B_z/∂z;   μ_z = −(|e|/m_e c) S_z

Clearly the spin must only be able to take on quantized values. The experiment was then extended to different planes of spin. Three experiments were conducted:

3.2.1 Set Up 1

From the oven, the beam is sent into a single Stern-Gerlach ẑ device, separating out spin up and down. The spin down beam is blocked and the spin up beam is directed to a second SG ẑ device. The output of the second device is only spin up particles.
3.2.2 Set Up 2

From the oven, the beam is sent into a single Stern-Gerlach ẑ device, separating out spin up and down. The spin down beam is blocked and the spin up beam is directed to a second Stern-Gerlach x̂ device. The output of the second device is both spin up and spin down (in the x direction). Once measured, it was found that the probability of the spin being up or down is 1/2.

3.2.3 Set Up 3

From the oven, the beam is sent into a single Stern-Gerlach ẑ device, separating out spin up and down. The spin down beam is blocked and the spin up beam is directed to a second SG x̂ device. The (x) spin down beam is blocked and the spin up beam is sent into a third SG ẑ device. The output of this device is both spin up and spin down.

The output in Set Up 3 is expected to be only spin up (since it was filtered to be so in the first device), but instead both up and down are present. This is due to the fact that it is impossible to specify both the S_z and S_x values simultaneously.
Monday - 9/8
4 Constructing Spin 1/2 Operators and Kets

Consider the spin S_x. From the Stern-Gerlach experiment we know that a beam containing only |+⟩ entering an S_x device will result in half spin up and half spin down beams in the x direction. That is:

|⟨+|S_x,+⟩|² = |⟨+|S_x,−⟩|² = 1/2

From this we can derive, writing |S_x,+⟩ = a|+⟩ + b|−⟩:

⟨+|S_x,+⟩ = ⟨+| ( a|+⟩ + b|−⟩ ) = a ⟨+|+⟩   →   |a|² = 1/2   →   a = (1/√2) e^{iδ}
⟨−|S_x,+⟩ = ⟨−| ( a|+⟩ + b|−⟩ ) = b ⟨−|−⟩   →   |b|² = 1/2   →   b = (1/√2) e^{iδ′}

Only the relative phase δ₁ = δ′ − δ is physical, so up to an overall phase:

|S_x,+⟩ = (1/√2) ( |+⟩ + e^{iδ₁}|−⟩ )

From orthonormality and normalization conditions, we can write that:

⟨S_x,±|S_x,±⟩ = 1;   ⟨S_x,+|S_x,−⟩ = 0
Writing |S_x,−⟩ = c|+⟩ + d|−⟩ and imposing orthogonality:

(1/√2) ( ⟨+| + e^{−iδ₁}⟨−| ) [ c|+⟩ + d|−⟩ ] = 0

c + e^{−iδ₁} d = 0   →   c = −e^{−iδ₁} d

|c|² + |d|² = |d|² + |d|² = 1   →   |d|² = 1/2

d = (1/√2) e^{iδ};   c = −(1/√2) e^{iδ} e^{−iδ₁}

Factoring out the overall phase, we can write this as:

|S_x,−⟩ = (1/√2) ( |+⟩ − e^{iδ₁}|−⟩ )

Finally, choosing δ₁ = 0 yields the known forms:

|S_x,+⟩ = (1/√2) ( |+⟩ + |−⟩ );   |S_x,−⟩ = (1/√2) ( |+⟩ − |−⟩ )
The operator S_x can be written in matrix form from this information:

S_x = |S_x,+⟩⟨S_x,+| S_x |S_x,+⟩⟨S_x,+| + |S_x,+⟩⟨S_x,+| S_x |S_x,−⟩⟨S_x,−|
    + |S_x,−⟩⟨S_x,−| S_x |S_x,+⟩⟨S_x,+| + |S_x,−⟩⟨S_x,−| S_x |S_x,−⟩⟨S_x,−|

    = |S_x,+⟩ (ħ/2) ⟨S_x,+| − |S_x,−⟩ (ħ/2) ⟨S_x,−|

Inserting the definitions for |S_x,±⟩:

S_x = (ħ/2) ( |+⟩⟨−| + |−⟩⟨+| ) = (ħ/2) [ 0  1 ]  =  (ħ/2) σ_x
                                        [ 1  0 ]
And by similar analysis, the y-direction spin kets can be written as:

|S_y,+⟩ = (1/√2) ( |+⟩ + e^{iδ₂}|−⟩ );   |S_y,−⟩ = (1/√2) ( |+⟩ − e^{iδ₂}|−⟩ )

But we can't set δ₂ equal to zero this time, therefore we have to find out what values it can take. From the experiment we know that |⟨S_x,+|S_y,+⟩|² = 1/2. So,

⟨S_x,+|S_y,+⟩ = (1/2) ( ⟨+| + ⟨−| ) ( |+⟩ + e^{iδ₂}|−⟩ ) = (1/2) ( 1 + e^{iδ₂} )

|⟨S_x,+|S_y,+⟩|² = (1/4) ( 2 + 2 cos δ₂ ) = 1/2   →   δ₂ = ±π/2

|S_y,+⟩ = (1/√2) [ |+⟩ + i|−⟩ ];   |S_y,−⟩ = (1/√2) [ |+⟩ − i|−⟩ ]

So, again we can solve for the operator as well:

S_y = (ħ/2) ( −i|+⟩⟨−| + i|−⟩⟨+| ) = (ħ/2) [ 0  −i ]  =  (ħ/2) σ_y
                                           [ i   0 ]
4.1 Spin Summary

Given all of the above information, we can summarize the kets and operators for a spin 1/2 system:

|S_x,±⟩ = (1/√2) ( |+⟩ ± |−⟩ );   |S_y,±⟩ = (1/√2) ( |+⟩ ± i|−⟩ );   |S_z,±⟩ = |±⟩

S_x = (ħ/2) [ 0  1 ] = (ħ/2) σ_x;   S_y = (ħ/2) [ 0  −i ] = (ħ/2) σ_y;   S_z = (ħ/2) [ 1   0 ] = (ħ/2) σ_z
            [ 1  0 ]                            [ i   0 ]                            [ 0  −1 ]
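The summary table is easy to verify numerically: the stated kets should be eigenkets of their operators, and the three operators should satisfy the angular momentum algebra [S_x, S_y] = iħS_z (a sketch in units ħ = 1):

```python
import numpy as np

hbar = 1.0
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)

sx_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)    # |S_x,+>
sy_plus = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # |S_y,+>

comm = Sx @ Sy - Sy @ Sx   # should equal i hbar S_z
```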
Wednesday - 9/10
5 Compatibility and Operators

If two operators A and B are compatible, then the commutator of the two is zero: [A, B] ≡ AB − BA = 0. In this case, we can change the order, since AB = BA. If two operators C and D are incompatible, then the commutator of the two is nonzero: [C, D] = CD − DC ≠ 0, and therefore CD ≠ DC.

If two operators are compatible and one has degenerate states, then the second operator can be used to break the degeneracy. Consider the case that operating A on any of a set of kets |aₙ⟩ always returns the same eigenvalue a. In this case, the compatible operator B can break the degeneracy:

A|a₁⟩ = a|a₁⟩,  A|a₂⟩ = a|a₂⟩,  ...,  A|aₙ⟩ = a|aₙ⟩

B|a, b₁⟩ = b₁|a, b₁⟩,  B|a, b₂⟩ = b₂|a, b₂⟩,  ...,  B|a, bₙ⟩ = bₙ|a, bₙ⟩

An example of this is the set of eigenkets of the Hydrogen atom model. For a state |n, 1, m⟩, the operator L² will always give the same result (l = 1) for any allowed value of m (−1, 0, 1). This degeneracy can be broken by using the L_z operator to get the value of m.
THEOREM:
If [A, B] = 0 and the |a′⟩ are nondegenerate eigenkets of A, then: (1) ⟨a″|B|a′⟩ = B′ δ_{a″,a′}, and (2) |a′⟩ is also an eigenket of B.

PROOF:

⟨a″|[A, B]|a′⟩ = 0   →   ⟨a″|AB − BA|a′⟩ = 0

⟨a″|AB|a′⟩ − ⟨a″|BA|a′⟩ = a″ ⟨a″|B|a′⟩ − a′ ⟨a″|B|a′⟩

( a″ − a′ ) ⟨a″|B|a′⟩ = 0   →   ⟨a″|B|a′⟩ = B′ δ_{a″,a′}

This result proves the first statement above, and by simply expanding the result we can show the second:

B = Σ_{a′,a″} |a″⟩⟨a″|B|a′⟩⟨a′|

B|a⟩ = Σ_{a′,a″} |a″⟩⟨a″|B|a′⟩⟨a′|a⟩ = Σ_{a″} |a″⟩⟨a″|B|a⟩ = Σ_{a″} B_a δ_{a″,a} |a″⟩ = B_a|a⟩

Thus the second statement is true as well.
So for any system, we want to find the maximal set of compatible operators such that for operators A, B, and C the eigenket can be written |a′, b′, c′⟩ such that:

A|a′, b′, c′⟩ = a′|a′, b′, c′⟩
B|a′, b′, c′⟩ = b′|a′, b′, c′⟩
C|a′, b′, c′⟩ = c′|a′, b′, c′⟩

Again, an example of this is the set of operators H, L², and L_z for the state |ψ⟩ = |n, l, m⟩.
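Degeneracy breaking can be seen in a small toy example (my construction, not from the notes): A has a twofold degenerate eigenvalue, B commutes with A, and diagonalizing B picks out the simultaneous eigenkets |a′, b′⟩ inside the degenerate subspace.

```python
import numpy as np

# A has a twofold degenerate eigenvalue; B is block-diagonal, so [A, B] = 0.
A = np.diag([1.0, 1.0, 2.0])
B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 5.0]])

comm = A @ B - B @ A   # should vanish

# Each eigenvector of B is then also an eigenvector of A: check that A v
# is parallel to v by subtracting the projected eigenvalue <v|A|v> v.
b_vals, vecs = np.linalg.eigh(B)
resid = max(np.linalg.norm(A @ v - (v.conj() @ A @ v) * v) for v in vecs.T)
```

B has three distinct eigenvalues, so the degenerate a = 1 subspace of A is fully resolved by the labels (a′, b′).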
5.1 Incompatible Operators

Theorem: There does not exist a complete set of simultaneous eigenkets of both A and B if [A, B] ≠ 0.

Suppose |ξ⟩ were a simultaneous eigenket: A|ξ⟩ = a′|ξ⟩ and B|ξ⟩ = b′|ξ⟩. Then

(AB − BA)|ξ⟩ = AB|ξ⟩ − BA|ξ⟩ = a′b′|ξ⟩ − b′a′|ξ⟩ = (a′b′ − b′a′)|ξ⟩ = 0

However, [A, B] ≠ 0, so this cannot hold for a complete set of such kets.
6 Measurements of Compatible and Incompatible Operators

Consider an experiment which starts with a quantum state |ψ⟩ and then makes measurements in various orders via operators A, B, and C. If a measurement takes the sequence A, B, C (selecting |a′⟩ and then |b′⟩ along the way), then the probability of getting the result |c′⟩ is:

Pr₁ = |⟨b′|a′⟩|² |⟨c′|b′⟩|²

However, if the measurement went straight from measuring A to measuring C, then the resultant probability would be:

Pr₂ = |⟨c′|a′⟩|²

The results differ if A and B are compatible versus if they are not. If [A, B] = 0, then Pr₁ = Pr₂; however, if [A, B] ≠ 0, then Pr₁ ≠ Pr₂.

An example we've seen would be going from measuring S_z, taking the spin up beam and measuring S_x, then taking spin up from that and measuring S_z again, versus simply measuring S_z twice in a row, taking spin up both times. The first method returns a probability of 25% of getting spin up as the final measurement, while the second gives a probability of 100%. This is because S_z and S_x don't commute.
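The 25% versus 100% comparison follows directly from the spin-1/2 amplitudes (a sketch using the kets from Section 4):

```python
import numpy as np

up = np.array([1, 0], dtype=complex)                     # |S_z, +>
sx_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |S_x, +>

# Path 1: Sz up -> Sx up -> Sz up (selective measurements at each stage)
p1 = abs(sx_plus.conj() @ up)**2 * abs(up.conj() @ sx_plus)**2

# Path 2: Sz up -> Sz up (measure S_z twice in a row)
p2 = abs(up.conj() @ up)**2
```

Each S_z/S_x overlap contributes a factor of 1/2, giving p1 = 1/4 against p2 = 1.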
7 The Uncertainty Principle

It can be shown that for any two observables, the relation must hold that:

⟨(ΔA)²⟩ ⟨(ΔB)²⟩ ≥ (1/4) |⟨[A, B]⟩|²

where we've defined:

ΔA = A − ⟨A⟩;   (ΔA)² = A² + ⟨A⟩² − 2A⟨A⟩

⟨(ΔA)²⟩ = ⟨ A² + ⟨A⟩² − 2A⟨A⟩ ⟩ = ⟨A²⟩ + ⟨A⟩² − 2⟨A⟩²

⟨(ΔA)²⟩ = ⟨A²⟩ − ⟨A⟩²
Friday - 9/12
In order to prove the above relation between A and B, several other relations need to be derived. First, consider the Schwarz Inequality, given as ⟨α|α⟩⟨β|β⟩ ≥ |⟨α|β⟩|². If we define a new ket |γ⟩ as |α⟩ + λ|β⟩, then we can write the following relations:

|γ⟩ = |α⟩ + λ|β⟩   →   ⟨γ| = ⟨α| + λ*⟨β|

⟨γ|γ⟩ = ( ⟨α| + λ*⟨β| ) ( |α⟩ + λ|β⟩ ) = ⟨α|α⟩ + λ⟨α|β⟩ + λ*⟨β|α⟩ + |λ|²⟨β|β⟩

Then, choosing a form for the variable λ which will assist in simplifying the above, we can set λ = −⟨β|α⟩/⟨β|β⟩. After plugging in the value of λ, this becomes:

⟨γ|γ⟩ = ⟨α|α⟩ − |⟨α|β⟩|²/⟨β|β⟩

So, concluding since the above must be nonnegative (⟨γ|γ⟩ ≥ 0), and multiplying through by ⟨β|β⟩:

⟨α|α⟩⟨β|β⟩ ≥ |⟨α|β⟩|²

In addition to this statement, if A is a Hermitian operator, then ⟨A⟩ is real:

⟨A⟩ = ⟨ψ|A|ψ⟩   →   ⟨A⟩* = ⟨ψ|A†|ψ⟩ = ⟨ψ|A|ψ⟩ = ⟨A⟩

Further, if A is anti-Hermitian (that is, A† = −A), then ⟨A⟩ is pure imaginary:

⟨A⟩* = ⟨ψ|A†|ψ⟩ = −⟨ψ|A|ψ⟩
⟨A⟩* = −⟨A⟩

Returning to the Schwarz Inequality, we can define for two Hermitian operators A and B:

|α⟩ = ΔA|ψ⟩;   |β⟩ = ΔB|ψ⟩

⟨ψ| ΔA ΔA |ψ⟩ ⟨ψ| ΔB ΔB |ψ⟩ ≥ |⟨ψ| ΔA ΔB |ψ⟩|²

⟨(ΔA)²⟩ ⟨(ΔB)²⟩ ≥ |⟨ΔA ΔB⟩|²

We can decompose the right-hand side as:

ΔA ΔB = (ΔA ΔB − ΔB ΔA)/2 + (ΔA ΔB + ΔB ΔA)/2 = (1/2) ( [ΔA, ΔB] + {ΔA, ΔB} )

Now consider the two operators X = [ΔA, ΔB] = [A, B] and Y = {ΔA, ΔB}. It is easily seen that X is anti-Hermitian while Y is Hermitian, so ⟨X⟩ is pure imaginary and ⟨Y⟩ is real. Then the right-hand side of the above becomes:

|⟨ΔA ΔB⟩|² = | (1/2) ( ⟨[A, B]⟩ + ⟨{ΔA, ΔB}⟩ ) |² = (1/4) |⟨[A, B]⟩|² + (1/4) |⟨{ΔA, ΔB}⟩|² ≥ (1/4) |⟨[A, B]⟩|²

⟨(ΔA)²⟩ ⟨(ΔB)²⟩ ≥ (1/4) |⟨[A, B]⟩|²
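The inequality can be checked directly for the spin operators, where the state |+⟩ in fact saturates it (a sketch in units ħ = 1):

```python
import numpy as np

hbar = 1.0
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]], dtype=complex)

psi = np.array([1, 0], dtype=complex)   # |+>, an eigenket of S_z

def expval(Op, state):
    # expectation value of a Hermitian operator in a normalized state
    return (state.conj() @ Op @ state).real

varA = expval(Sx @ Sx, psi) - expval(Sx, psi)**2   # <(dSx)^2>
varB = expval(Sy @ Sy, psi) - expval(Sy, psi)**2   # <(dSy)^2>

comm = Sx @ Sy - Sy @ Sx
lhs = varA * varB
rhs = 0.25 * abs(psi.conj() @ comm @ psi)**2
```

Here both sides equal ħ⁴/16, so the anticommutator term of the derivation vanishes for this state.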
8 Transformations of Base Kets

Consider the geometric transformation from (x, y) to (x′, y′):

[ x′ ] = [ a  b ] [ x ]
[ y′ ]   [ c  d ] [ y ]

The same can be done to kets in a set of base kets. For example, we can expand some state |α⟩ as either of the two given forms below:

|α⟩ = Σ_{a′} C_{a′} |a′⟩;   |α⟩ = Σ_{b′} C_{b′} |b′⟩

For any such orthonormal and complete sets of eigenkets, an operator U can be derived for which U|aᵢ⟩ = |bᵢ⟩ and U†U = 1. We can immediately define this operator by looking at the first of these properties:

U = Σₖ |bₖ⟩⟨aₖ|

The matrix representation of such an operator can be found easily as well:

Uᵢⱼ = ⟨aᵢ|U|aⱼ⟩ = ⟨aᵢ| ( Σₖ |bₖ⟩⟨aₖ| ) |aⱼ⟩ = Σₖ ⟨aᵢ|bₖ⟩⟨aₖ|aⱼ⟩

Uᵢⱼ = ⟨aᵢ|bⱼ⟩
Monday - 9/15
As an example of a transformation operator, let's consider a transformation from S_z to S_x:

U_{zx} = |S_x,+⟩⟨+| + |S_x,−⟩⟨−| = (1/√2) [ |+⟩ + |−⟩ ] ⟨+| + (1/√2) [ |+⟩ − |−⟩ ] ⟨−|

       = (1/√2) ( |+⟩⟨+| + |−⟩⟨+| + |+⟩⟨−| − |−⟩⟨−| )

U_{zx} = (1/√2) [ 1   1 ]
                [ 1  −1 ]

It can be shown that this gives the expected results of:

U_{zx}|+⟩ = |S_x,+⟩;   U_{zx}|−⟩ = |S_x,−⟩
26
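As a quick numerical sanity check (a numpy sketch that is not part of the original notes; the variable names are mine), we can build $U_{z\to x}$ column by column from $U_{ij} = \langle a'_i|b'_j\rangle$ and confirm that it is unitary and maps the $S_z$ eigenkets onto the $S_x$ eigenkets:

```python
import numpy as np

# S_z eigenbasis: |+> = (1,0), |-> = (0,1)
plus = np.array([1.0, 0.0])
minus = np.array([0.0, 1.0])

# S_x eigenkets expressed in the S_z basis
sx_plus = (plus + minus) / np.sqrt(2)
sx_minus = (plus - minus) / np.sqrt(2)

# U_ij = <a_i|b_j>: the columns of U are the new basis kets in the old basis
U = np.column_stack([sx_plus, sx_minus])

# U is unitary: U^dagger U = 1
assert np.allclose(U.conj().T @ U, np.eye(2))

# U|+> = |S_x;+> and U|-> = |S_x;->
assert np.allclose(U @ plus, sx_plus)
assert np.allclose(U @ minus, sx_minus)
```

The resulting matrix is exactly $\frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}$ found above.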
8.1 Diagonalization of Transform Operators
Consider the operators $X$ and $X'$. We can write the matrix elements of $X$ in terms of some set of eigenkets $|a_i\rangle$ and the elements of $X'$ in terms of a second basis $|b_j\rangle$:

$$X_{ij} = \langle a_i|\,X\,|a_j\rangle; \qquad X'_{ij} = \langle b_i|\,X\,|b_j\rangle$$

And if we assume there exists some transformation between the two sets of base kets,

$$U|a_i\rangle = |b_i\rangle; \qquad \langle a_i|\,U^{\dagger} = \langle b_i|$$

we can immediately write the result that:

$$X'_{ij} = \langle b_i|\,X\,|b_j\rangle = \langle a_i|\,U^{\dagger}XU\,|a_j\rangle \quad\Rightarrow\quad X' = U^{\dagger}XU$$

We can further prove that the traces of $X$ and $X'$ are the same, since $\mathrm{Tr}(AB) = \mathrm{Tr}(BA)$. In terms of matrices:

$$\mathrm{Tr}(X') = \mathrm{Tr}(U^{\dagger}XU) = \mathrm{Tr}(U^{\dagger}UX) = \mathrm{Tr}(X)$$
Alternately, in terms of operators and bras and kets:

$$\mathrm{Tr}(X) = \sum_{a'}\langle a'|\,X\,|a'\rangle = \sum_{a', b', b''}\langle a'|b'\rangle\langle b'|\,X\,|b''\rangle\langle b''|a'\rangle = \sum_{a', b', b''}\langle b''|a'\rangle\langle a'|b'\rangle\langle b'|\,X\,|b''\rangle$$

$$= \sum_{b', b''}\langle b''|b'\rangle\langle b'|\,X\,|b''\rangle = \sum_{b'}\langle b'|\,X\,|b'\rangle$$

$$\Rightarrow\quad \sum_{a'}\langle a'|\,X\,|a'\rangle = \sum_{b'}\langle b'|\,X\,|b'\rangle$$
Finally, consider the case that operators $A$ and $B$ are known where $[A, B] \neq 0$. The eigenvalues and eigenkets of $A$ are known as well, and we want to solve for the eigenkets and eigenvalues of $B$. Expanding in the $A$ eigenbasis:

$$\sum_{a'}\langle a''|\,B\,|a'\rangle\langle a'|b'\rangle = b'\,\langle a''|b'\rangle$$

This is merely an eigenvalue equation, which is solvable as:

$$\sum_j B_{ij}\,C^{(k)}_j = b^{(k)}\,C^{(k)}_i$$

where $C^{(k)}$ is the $k^{th}$ eigenvector and $b^{(k)}$ the corresponding eigenvalue. From this we can write the transformation matrix as a combination of the eigenvectors of $B$:

$$U = \begin{pmatrix} C^{(1)}_1 & \dots & C^{(n)}_1 \\ \vdots & \ddots & \vdots \\ C^{(1)}_n & \dots & C^{(n)}_n \end{pmatrix}$$

From this result we can see that the transformation matrix can be used to diagonalize a matrix. That is:

$$U^{\dagger}BU = \begin{pmatrix} b^{(1)} & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & b^{(n)} \end{pmatrix}$$
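This construction — the columns of $U$ are the eigenvectors of $B$ — is exactly what a numerical eigensolver returns. A short numpy sketch (the random Hermitian matrix is my own example, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
# a random Hermitian matrix B written in the |a'> basis
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
B = (M + M.conj().T) / 2

# eigh returns the eigenvalues b^(k) and a unitary U whose
# k-th column is the k-th eigenvector C^(k)
b, U = np.linalg.eigh(B)

# U^dagger B U is the diagonal matrix of eigenvalues
D = U.conj().T @ B @ U
assert np.allclose(D, np.diag(b))

# the trace is basis independent: Tr(B) = sum of eigenvalues
assert np.allclose(np.trace(B).real, b.sum())
```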
8.2 Unitary Equivalent Observables
Lastly, we'll define a unitary equivalent observable. If $A$ is an operator, then the associated unitary equivalent operator is $UAU^{-1}$. This new object has the following property:

$$A|a'\rangle = a'|a'\rangle \quad\Rightarrow\quad UAU^{-1}\,U|a'\rangle = \,?$$

$$UA|a'\rangle = a'\,U|a'\rangle; \qquad UAU^{-1}U|a'\rangle = a'\,U|a'\rangle$$

$$\left(UAU^{-1}\right)U|a'\rangle = a'\,U|a'\rangle$$

So, $U|a'\rangle$ is an eigenket of $UAU^{-1}$ with eigenvalue $a'$. Recall that $U|a'\rangle = |b'\rangle$. This means that $|b'\rangle$ and $U|a'\rangle$ are simultaneous eigenkets of both $B$ and $UAU^{-1}$, which in many cases are the same operator.
Wednesday - 9/17
9 Operators with Continuous Spectra
Consider a continuous eigenket $|\xi'\rangle$. We can make the following changes from our relations for discrete eigenkets to continuous ones:

$$\langle a'|a''\rangle = \delta_{a'a''} \quad\longrightarrow\quad \langle\xi'|\xi''\rangle = \delta(\xi' - \xi'')$$

$$\sum_{a'} |a'\rangle\langle a'| = 1 \quad\longrightarrow\quad \int |\xi'\rangle\langle\xi'|\,d\xi' = 1$$

$$|\psi\rangle = \sum_{a'} C_{a'}\,|a'\rangle = \sum_{a'} \langle a'|\psi\rangle\,|a'\rangle \quad\longrightarrow\quad |\psi\rangle = \int C_{\xi'}\,|\xi'\rangle\,d\xi' = \int \langle\xi'|\psi\rangle\,|\xi'\rangle\,d\xi'$$

$$\sum_{a'} |\langle a'|\psi\rangle|^2 = 1 \quad\longrightarrow\quad \int |\langle\xi'|\psi\rangle|^2\,d\xi' = 1$$

$$|\psi\rangle = \sum_{a'} |a'\rangle\langle a'|\psi\rangle \quad\longrightarrow\quad |\psi\rangle = \int |\xi'\rangle\langle\xi'|\psi\rangle\,d\xi'$$

$$A|a'\rangle = a'|a'\rangle \quad\longrightarrow\quad \Xi|\xi'\rangle = \xi'|\xi'\rangle$$

$$\langle a''|\,A\,|a'\rangle = a'\,\delta_{a'a''} \quad\longrightarrow\quad \langle\xi''|\,\Xi\,|\xi'\rangle = \xi'\,\delta(\xi' - \xi'')$$
9.1 The Position Operator
Let the position operator $\hat{x}$ be defined in one dimension as:

$$\hat{x}\,|x'\rangle = x'\,|x'\rangle$$

We can expand the state $|\psi\rangle$ in terms of the position eigenkets and get:

$$|\psi\rangle = \int \langle x'|\psi\rangle\,|x'\rangle\,dx'$$

One might recognize the term $\langle x'|\psi\rangle$. It is the wavefunction defined in undergraduate quantum mechanics classes:

$$\langle x'|\psi\rangle = \psi(x')$$

When a measurement is made, theoretically the state $|\psi\rangle$ collapses to some position $|x'\rangle$, but in reality it collapses to some region between $x' - dx'$ and $x' + dx'$. Therefore, the probability of measuring a particle's location and getting a result between these values is:

$$\mathrm{Pr} = |\langle x'|\psi\rangle|^2\,dx' = |\psi(x')|^2\,dx'$$

These results can easily be generalized to three dimensions. The integral simply becomes three integrals and the eigenket $|\vec{x}\rangle$ now denotes a three dimensional position vector. It is easily shown that the different position operators commute:

$$[X_i, X_j] = 0$$
9.2 The Momentum Operator - Translation Method
We can define the translation operation as an operator that moves a given position ket $|\vec{x}'\rangle$ into some new position. That is:

$$T(d\vec{x}')\,|\vec{x}'\rangle = |\vec{x}' + d\vec{x}'\rangle$$

Then, defining the operator itself in terms of some Hermitian operator $\vec{k}$:

$$T(d\vec{x}') = 1 - i\,\vec{k}\cdot d\vec{x}' + \mathcal{O}(d\vec{x}'^2)$$

From this definition, the following properties can be demonstrated (it's homework to do so):

$$T^{\dagger}T = TT^{\dagger} = 1$$

$$T(d\vec{x}')\,T(d\vec{x}'') = T(d\vec{x}'')\,T(d\vec{x}') = T(d\vec{x}'' + d\vec{x}')$$

$$T(-d\vec{x}')\,T(d\vec{x}') = 1 \quad\Rightarrow\quad T^{-1}(d\vec{x}') = T(-d\vec{x}')$$

$$\lim_{d\vec{x}'\to 0} T(d\vec{x}') = 1$$

Consider the commutation relation $[\hat{X}, T(d\vec{x}')]$. We can expand each term:

$$\hat{X}\,T(d\vec{x}')\,|\vec{x}'\rangle = \hat{X}\,|\vec{x}' + d\vec{x}'\rangle = \left(\vec{x}' + d\vec{x}'\right)|\vec{x}' + d\vec{x}'\rangle$$

$$T(d\vec{x}')\,\hat{X}\,|\vec{x}'\rangle = T(d\vec{x}')\,\vec{x}'\,|\vec{x}'\rangle = \vec{x}'\,|\vec{x}' + d\vec{x}'\rangle$$

So, then,

$$[\hat{X}, T(d\vec{x}')]\,|\vec{x}'\rangle = d\vec{x}'\,|\vec{x}' + d\vec{x}'\rangle$$

Expanding this out, we can write:

$$d\vec{x}'\,|\vec{x}' + d\vec{x}'\rangle = d\vec{x}' \sum_a |a\rangle\langle a|\vec{x}' + d\vec{x}'\rangle = d\vec{x}' \sum_a |a\rangle\,\psi^{*}_a(\vec{x}' + d\vec{x}')$$

And expanding the wavefunction while keeping only terms of order $d\vec{x}'$:

$$\psi^{*}_a(\vec{x}' + d\vec{x}') \approx \psi^{*}_a(\vec{x}') + d\vec{x}'\cdot\nabla\psi^{*}_a(\vec{x}') + \dots$$

$$d\vec{x}' \sum_a |a\rangle\,\psi^{*}_a(\vec{x}' + d\vec{x}') \approx d\vec{x}' \sum_a |a\rangle\,\psi^{*}_a(\vec{x}') = d\vec{x}'\,|\vec{x}'\rangle$$

Plugging this into the above:

$$[\hat{X}_i, T(d\vec{x}')]\,|\vec{x}'\rangle \approx dx'_i\,|\vec{x}'\rangle \quad\Rightarrow\quad [\hat{X}_i,\, 1 - i\,\vec{k}\cdot d\vec{x}']\,|\vec{x}'\rangle = dx'_i\,|\vec{x}'\rangle$$

$$[\hat{X}_i,\, -i\,\vec{k}\cdot d\vec{x}']\,|\vec{x}'\rangle = \Big[\hat{X}_i,\, -i\sum_j k_j\,dx'_j\Big]\,|\vec{x}'\rangle = dx'_i\,|\vec{x}'\rangle$$

$$-i\sum_j [\hat{X}_i, k_j]\,dx'_j\,|\vec{x}'\rangle = dx'_i\,|\vec{x}'\rangle \quad\Rightarrow\quad [\hat{X}_i, k_j] = i\,\delta_{ij}$$

Consider the dimensions of the translation operator. In order for the operator to be unitless, we require the operator $\vec{k}$ to have dimensions of inverse length; we define $\vec{k} = \vec{p}/\hbar$. This defines the momentum operator $\vec{p} = \hbar\vec{k}$ and turns the above relation into the well known commutation relation:

$$[x_i, p_j] = i\hbar\,\delta_{ij}$$
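We can check the commutation relation numerically with a finite-difference representation of $p = -i\hbar\,d/dx$ on a grid (a sketch under my own discretization choices, not part of the original notes; on a finite grid the relation only holds approximately, when acting on smooth states away from the boundaries):

```python
import numpy as np

hbar = 1.0
N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

X = np.diag(x)                       # position operator on the grid
# central-difference representation of p = -i*hbar d/dx
P = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) * (-1j * hbar / (2 * dx))

# apply [x, p] to a smooth, localized test function
psi = np.exp(-x**2)
comm = (X @ P - P @ X) @ psi

# in the interior, [x, p] psi ~ i*hbar*psi up to O(dx^2)
err = np.abs(comm[5:-5] - 1j * hbar * psi[5:-5]).max()
assert err < 1e-2
```

Halving `dx` drops the residual `err` by roughly a factor of four, as expected for a second-order difference scheme.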
9.2.1 Finite Translations
Consider a finite translation along the $x$ axis defined by:

$$T(\Delta x'\,\hat{x})\,|\vec{x}'\rangle = |\vec{x}' + \Delta x'\,\hat{x}\rangle$$

Now, consider defining the finite translation in terms of the infinitesimal translations: $\Delta x' = N\,dx'$ where the value of $N$ is very large. Consider the limit of the translation operator as $N$ goes off to infinity:

$$\lim_{N\to\infty}\left(1 - \frac{i}{\hbar}\,p_x\,dx'\right)^N = e^{-\frac{i}{\hbar}p_x\Delta x'}$$

Translations along different directions will always commute, and a general translation operator can be written as:

$$T(\Delta\vec{x}) = e^{-\frac{i}{\hbar}\vec{p}\cdot\Delta\vec{x}} = e^{-\frac{i}{\hbar}\left(p_x\Delta x + p_y\Delta y + p_z\Delta z\right)}$$
Friday - 9/19
9.3 The Wave Function - Some New Ideas
Define a momentum eigenket $|\vec{p}'\rangle = |p'_x, p'_y, p'_z\rangle$ such that

$$P_i\,|\vec{p}'\rangle = p'_i\,|\vec{p}'\rangle$$

If we operate on this ket with the translation operator we get:

$$T(d\vec{x}')\,|\vec{p}'\rangle = e^{-\frac{i}{\hbar}d\vec{x}'\cdot\vec{P}}\,|\vec{p}'\rangle = \left[1 - \frac{i}{\hbar}\,d\vec{x}'\cdot\vec{P} + \frac{1}{2}\left(\frac{i}{\hbar}\,d\vec{x}'\cdot\vec{P}\right)^2 + \dots\right]|\vec{p}'\rangle$$

$$= \left[1 - \frac{i}{\hbar}\,d\vec{x}'\cdot\vec{p}' + \frac{1}{2}\left(\frac{i}{\hbar}\,d\vec{x}'\cdot\vec{p}'\right)^2 + \dots\right]|\vec{p}'\rangle$$

$$T(d\vec{x}')\,|\vec{p}'\rangle = e^{-\frac{i}{\hbar}d\vec{x}'\cdot\vec{p}'}\,|\vec{p}'\rangle$$

So, the eigenvalue is a pure phase. This is expected since $T$ is a unitary operator. This can be proven easily:

$$T|\psi\rangle = \lambda|\psi\rangle, \quad \langle\psi|\,T^{\dagger} = \lambda^{*}\langle\psi| \quad\Rightarrow\quad \langle\psi|\,T^{\dagger}T\,|\psi\rangle = |\lambda|^2\langle\psi|\psi\rangle \quad\Rightarrow\quad |\lambda|^2 = 1, \;\; \lambda = e^{i\theta}$$
Now, consider the wave function in the context of what we've discussed. We can expand any ket as:

$$|\psi\rangle = \int |x'\rangle\langle x'|\psi\rangle\,dx' = \int |x'\rangle\,\psi(x')\,dx'$$

$$\langle\phi|\psi\rangle = \int \phi^{*}(x')\,\psi(x')\,dx'$$

If the $U_{a'}$ are eigenfunctions of some operator, then we can expand any wave function in terms of them:

$$|\psi\rangle = \sum_{a'} |a'\rangle\langle a'|\psi\rangle \quad\Rightarrow\quad \langle x|\psi\rangle = \sum_{a'} \langle x|a'\rangle\langle a'|\psi\rangle$$

$$\psi(x) = \sum_{a'} U_{a'}(x)\,C_{a'}; \qquad U_{a'}(x) = \langle x|a'\rangle$$

Consider the term $\langle\phi|\,A\,|\psi\rangle$ in wave function notation:

$$\langle\phi|\,A\,|\psi\rangle = \int\!\!\int \langle\phi|x'\rangle\langle x'|\,A\,|x''\rangle\langle x''|\psi\rangle\,dx'\,dx''$$

Consider the case that $A$ is some function of the position operator; then we can write $\langle x'|\,f(\hat{x})\,|x''\rangle = f(x')\,\delta(x' - x'')$, and the above simply becomes:

$$\langle\phi|\,f(\hat{x})\,|\psi\rangle = \int \phi^{*}(x')\,f(x')\,\psi(x')\,dx'$$
Consider also the case that $A$ is the momentum operator. In order to see what this is, consider the translation operator acting on the ket $|\psi\rangle$:

$$T(dx')\,|\psi\rangle = T(dx')\int |x'\rangle\langle x'|\psi\rangle\,dx' = \int |x' + dx'\rangle\langle x'|\psi\rangle\,dx'$$

But we can redefine the integration variable and then expand the wave function term:

$$\int |x' + dx'\rangle\langle x'|\psi\rangle\,dx' = \int |x'\rangle\langle x' - dx'|\psi\rangle\,dx'$$

$$\langle x' - dx'|\psi\rangle = \psi(x' - dx') = \psi(x') - dx'\,\frac{\partial}{\partial x'}\psi(x') + \dots$$

Plugging this back into the expression above:

$$T(dx')\,|\psi\rangle = \int |x'\rangle\left[\psi(x') - dx'\,\frac{\partial}{\partial x'}\psi(x') + \dots\right]dx'$$

$$\left(1 - \frac{i}{\hbar}\,dx'\,p\right)|\psi\rangle = \int |x'\rangle\left[\psi(x') - dx'\,\frac{\partial}{\partial x'}\psi(x')\right]dx'$$

This can be written as:

$$|\psi\rangle - \frac{i}{\hbar}\,dx'\,p\,|\psi\rangle = |\psi\rangle - dx'\int |x'\rangle\,\frac{\partial}{\partial x'}\psi(x')\,dx'$$

$$p\,|\psi\rangle = \int |x'\rangle\left(-i\hbar\,\frac{\partial}{\partial x'}\right)\psi(x')\,dx'$$

Further, we have the original result we were interested in:

$$\langle\phi|\,p\,|\psi\rangle = \int \phi^{*}(x')\left(-i\hbar\,\frac{\partial}{\partial x'}\right)\psi(x')\,dx'$$

One last thing we can calculate is the term $\langle x|\,p\,|\psi\rangle$. This is going to give:

$$\langle x|\,p\,|\psi\rangle = \int \langle x|x'\rangle\left(-i\hbar\,\frac{\partial}{\partial x'}\right)\psi(x')\,dx' = \int \delta(x - x')\left(-i\hbar\,\frac{\partial}{\partial x'}\right)\psi(x')\,dx'$$

$$\langle x|\,p\,|\psi\rangle = -i\hbar\,\frac{\partial}{\partial x}\psi(x)$$
Monday - 9/22
From this result, we can consider what happens when the momentum operator repeatedly operates on a ket, $p^m|\psi\rangle$. It's easiest to consider $n = 2$ and extend that result:

$$p^2|\psi\rangle = p\left(p\,|\psi\rangle\right); \qquad |\beta\rangle = p\,|\psi\rangle$$

$$p\,|\beta\rangle = \int dx'\,|x'\rangle\left(-i\hbar\,\frac{\partial}{\partial x'}\right)\langle x'|\beta\rangle; \qquad \langle x'|\beta\rangle = \langle x'|\,p\,|\psi\rangle = -i\hbar\,\frac{\partial}{\partial x'}\psi(x')$$

$$p^2|\psi\rangle = \int dx'\,|x'\rangle\left((-i\hbar)^2\frac{\partial^2}{\partial x'^2}\right)\langle x'|\psi\rangle$$

From this, we can easily see that:

$$p^n|\psi\rangle = \int dx'\,|x'\rangle\left((-i\hbar)^n\frac{\partial^n}{\partial x'^n}\right)\psi(x'); \qquad \langle x|\,p^n\,|\psi\rangle = (-i\hbar)^n\frac{\partial^n}{\partial x^n}\psi(x)$$
9.4 Transformations in Position and Momentum Space
Just as we defined position eigenkets $\hat{x}|x'\rangle = x'|x'\rangle$, we can define momentum eigenkets $\hat{p}|p'\rangle = p'|p'\rangle$. These momentum eigenkets have properties similar to the position eigenkets:

$$\hat{p}\,|p'\rangle = p'\,|p'\rangle$$

$$\langle p'|p''\rangle = \delta(p' - p'')$$

$$|\psi\rangle = \int dp'\,|p'\rangle\langle p'|\psi\rangle = \int dp'\,|p'\rangle\,\phi(p')$$

Here we have defined the momentum space wavefunction $\langle p'|\psi\rangle = \phi(p')$. From all of this we can define a transformation $\phi(p') \leftrightarrow \psi(x')$. In general, a transform between bases $|a'\rangle$ and $|b'\rangle$ can be given as $U_{a'b'} = \langle b'|a'\rangle$. Therefore we can write:

$$U_{x'\to p'} = \langle x'|p'\rangle$$

We need to derive some kind of structure for this transformation. In the discrete case, a summation gave the elements of a matrix. In this case, an integral gives the infinite elements of an infinite matrix. That won't work; let's instead look at some other quantities:

$$\langle x'|\,\hat{p}\,|p'\rangle = -i\hbar\,\frac{\partial}{\partial x'}\langle x'|p'\rangle = p'\,\langle x'|p'\rangle \quad\Rightarrow\quad \langle x'|p'\rangle = N\,e^{\frac{ip'x'}{\hbar}}$$

We can find the coefficient by the completeness relation:

$$\langle x'|x''\rangle = \delta(x' - x'') \quad\Rightarrow\quad \int dp'\,\langle x'|p'\rangle\langle p'|x''\rangle = \delta(x' - x'')$$

$$\int dp'\,N\,e^{\frac{ip'x'}{\hbar}}\,N^{*}\,e^{-\frac{ip'x''}{\hbar}} = |N|^2\int dp'\,e^{\frac{ip'(x' - x'')}{\hbar}} = |N|^2\,2\pi\hbar\,\delta(x' - x'')$$

$$|N|^2\,2\pi\hbar\,\delta(x' - x'') = \delta(x' - x'') \quad\Rightarrow\quad N = \frac{1}{\sqrt{2\pi\hbar}}$$

$$\langle x'|p'\rangle = \frac{1}{\sqrt{2\pi\hbar}}\,e^{\frac{ip'x'}{\hbar}}$$

From this result we can see that the transform between momentum and position space is actually the form of a Fourier transform:

$$\psi(x') = \frac{1}{\sqrt{2\pi\hbar}}\int dp'\,e^{\frac{ip'x'}{\hbar}}\,\phi(p'); \qquad \phi(p') = \frac{1}{\sqrt{2\pi\hbar}}\int dx'\,e^{-\frac{ip'x'}{\hbar}}\,\psi(x')$$
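This Fourier-transform pair can be checked numerically (a numpy sketch; the Gaussian test packet and grid sizes are my own choices, anticipating the wave packet of the next section):

```python
import numpy as np

hbar = 1.0
k0, d = 2.0, 1.0                        # packet parameters (my choice)
x = np.linspace(-40, 40, 4096)
dx = x[1] - x[0]
psi = (np.pi**0.25 * np.sqrt(d))**-1 * np.exp(1j * k0 * x - x**2 / (2 * d**2))

# phi(p') = (2*pi*hbar)^(-1/2) * Int dx' e^{-i p' x'/hbar} psi(x')
p = np.linspace(-8, 8, 200)
phi = np.array([(np.exp(-1j * pp * x / hbar) * psi).sum() * dx
                for pp in p]) / np.sqrt(2 * np.pi * hbar)

# the transform of a Gaussian is a Gaussian centered at p' = hbar*k0
phi_exact = np.sqrt(d / (hbar * np.sqrt(np.pi))) * \
    np.exp(-(p - hbar * k0)**2 * d**2 / (2 * hbar**2))
assert np.abs(np.abs(phi) - phi_exact).max() < 1e-6
```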
9.5 The Gaussian Wave Packet
Consider the Gaussian wave packet built by superposing free-particle plane-wave solutions:

$$\psi(x') = \frac{1}{\pi^{1/4}\sqrt{d}}\,e^{ikx'}\,e^{-\frac{x'^2}{2d^2}}$$

As homework, it is shown that for the above:

$$\langle x'\rangle = \int dx'\,\psi^{*}(x')\,x'\,\psi(x') = 0$$

$$\langle x'^2\rangle = \int dx'\,\psi^{*}(x')\,x'^2\,\psi(x') = \frac{d^2}{2}$$

$$\langle(\Delta x')^2\rangle = \langle x'^2\rangle - \langle x'\rangle^2 = \frac{d^2}{2}$$

$$\langle p'\rangle = \int dx'\,\psi^{*}(x')\,p\,\psi(x') = \hbar k$$

$$\langle p'^2\rangle = \int dx'\,\psi^{*}(x')\,p^2\,\psi(x') = \frac{\hbar^2}{2d^2} + \hbar^2 k^2$$

$$\langle(\Delta p')^2\rangle = \langle p'^2\rangle - \langle p'\rangle^2 = \frac{\hbar^2}{2d^2}$$

$$\phi(p') = \sqrt{\frac{d}{\hbar\sqrt{\pi}}}\,e^{-\frac{(p' - \hbar k)^2 d^2}{2\hbar^2}}$$
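These homework results can be verified numerically (a sketch with my own parameter values; the momentum operator acts as a central-difference derivative):

```python
import numpy as np

hbar, k, d = 1.0, 1.5, 0.7             # my choice of parameters
x = np.linspace(-30, 30, 6001)
dx = x[1] - x[0]
psi = (np.pi**0.25 * np.sqrt(d))**-1 * np.exp(1j * k * x - x**2 / (2 * d**2))

rho = np.abs(psi)**2
norm = rho.sum() * dx
x2_mean = (rho * x**2).sum() * dx
# p acting in position space: -i*hbar d/dx
p_psi = -1j * hbar * np.gradient(psi, dx)
p_mean = ((np.conj(psi) * p_psi).sum() * dx).real

assert abs(norm - 1) < 1e-10
assert abs(x2_mean - d**2 / 2) < 1e-8   # <x'^2> = d^2/2
assert abs(p_mean - hbar * k) < 1e-3    # <p'>  = hbar*k
```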
Wednesday - 9/24 - CLASS CANCELED
Friday - 9/26 - NO CLASS (DEBATE DAY)
Part II
QUANTUM DYNAMICS
Monday - 9/29
10 The Time Evolution Operator
Consider some operator $U(t, t_0)$ which turns any ket $|\psi; t_0\rangle$ into its time evolved ket $|\psi; t_0; t\rangle = |\psi, t\rangle$:

$$U(t, t_0)\,|\psi, t_0\rangle = |\psi, t\rangle$$

We can require this operator to exhibit the properties of conservation of probability and compositionality. That is:

$$U(t, t_0)\,U^{\dagger}(t, t_0) = 1; \qquad \langle\psi, t|\psi, t\rangle = \langle\psi, t_0|\psi, t_0\rangle$$

This implies the result that the probabilities given by $\sum_{a'} |C_{a'}(t_0)|^2$ and $\sum_{a'} |C_{a'}(t)|^2$ are conserved in the time evolution:

$$\sum_{a'} |C_{a'}(t_0)|^2 = \sum_{a'} |C_{a'}(t)|^2$$

Secondly, any time evolution $U(t_2, t_0)$ can be written as a composition of two evolutions $U(t_2, t_1)\,U(t_1, t_0)$. This property is also of great importance:

$$U(t_2, t_0) = U(t_2, t_1)\,U(t_1, t_0)$$
10.1 Infinitesimal Evolutions
Consider the infinitesimal time evolution given by $U(t_0 + dt, t_0)$. We firstly require that in the limit of $dt \to 0$ this becomes the identity operator. We can define this operator as:

$$U(t_0 + dt, t_0) = 1 - i\Omega\,dt; \qquad \Omega^{\dagger} = \Omega$$

where the condition that $U^{\dagger}U = 1$ requires the operator $\Omega$ to be Hermitian. The units of $\Omega$ are inverse seconds since $\Omega\,dt$ has to be unitless. From this we can define the Hamiltonian operator as the generator of time evolutions:

$$H = \hbar\,\Omega$$
This definition leads us to the derivation of the Schrodinger Equation.
10.2 The Schrodinger Equation
Consider the evolution from $t_0$ to $t$ and then to $t + dt$. That is:

$$U(t + dt, t_0) = U(t + dt, t)\,U(t, t_0) = \left[1 - \frac{i}{\hbar}H\,dt\right]U(t, t_0)$$

Dividing this result by $dt$:

$$\frac{U(t + dt, t_0) - U(t, t_0)}{dt} = -\frac{i}{\hbar}\,H\,U(t, t_0) \quad\Rightarrow\quad i\hbar\,\frac{\partial}{\partial t}U(t, t_0) = H\,U(t, t_0)$$

This leads to the most general form of the Schrodinger equation and the foundation of all of quantum mechanics:

$$i\hbar\,\frac{\partial}{\partial t}U(t, t_0)\,|\psi, t_0\rangle = H\,U(t, t_0)\,|\psi, t_0\rangle \quad\Rightarrow\quad i\hbar\,\frac{\partial}{\partial t}|\psi, t\rangle = H\,|\psi, t\rangle$$

If we assume that the Hamiltonian has the form $H = \frac{p^2}{2m} + V(x)$ then we get the familiar form of the Schrodinger equation from our undergraduate course in quantum mechanics:

$$\langle x|\,i\hbar\frac{\partial}{\partial t}\,|\psi, t\rangle = \langle x|\,H\,|\psi, t\rangle = \langle x|\,\frac{p^2}{2m}\,|\psi, t\rangle + \langle x|\,V(x)\,|\psi, t\rangle$$

$$i\hbar\,\frac{\partial}{\partial t}\psi(x, t) = \frac{(-i\hbar)^2}{2m}\frac{\partial^2}{\partial x^2}\psi(x, t) + V(x)\,\psi(x, t)$$
10.3 Time Independent Hamiltonian
If we assume that the Hamiltonian operator is any function which is independent of time, we can solve the relation above as:

$$i\hbar\,\frac{\partial}{\partial t}U(t, t_0) = H\,U(t, t_0) \quad\Rightarrow\quad U(t, t_0) = e^{-\frac{i}{\hbar}H(t - t_0)}$$

This result can be checked by expanding both sides. (Check written notes for the expansion.) Next, assume that some state $|\psi, t\rangle$ can be expanded in terms of energy eigenkets as:

$$|\psi, t\rangle = \sum_{a'} C_{a'}(t)\,|a'\rangle; \qquad H|a'\rangle = E_{a'}|a'\rangle$$

Given this expansion form and some initial state $|\psi, t_0\rangle = \sum_{a'} C_{a'}(t_0)\,|a'\rangle$ where all values of $C_{a'}$ are known, the problem is then to find $|\psi, t\rangle$ at any time $t$. From the above result we can see that this is simply:

$$|\psi, t\rangle = \sum_{a'} C_{a'}(t)\,|a'\rangle; \qquad C_{a'}(t) = C_{a'}(t_0)\,e^{-\frac{i}{\hbar}E_{a'}(t - t_0)}$$

As an example, consider an infinite square well with initial state $\psi(x, 0)$ given. We can use the above to find the form at some later time $t$:

$$\psi(x, 0) = \sqrt{\tfrac{1}{3}}\,\phi_1(x) + \sqrt{\tfrac{2}{3}}\,\phi_2(x) \quad\Rightarrow\quad \psi(x, t) = \sqrt{\tfrac{1}{3}}\,e^{-\frac{i}{\hbar}E_1 t}\,\phi_1(x) + \sqrt{\tfrac{2}{3}}\,e^{-\frac{i}{\hbar}E_2 t}\,\phi_2(x)$$
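The square-well example can be checked with a few lines (a sketch; the well width and units are my own choices, with $E_n = n^2\pi^2\hbar^2/2ma^2$ for a well of width $a$):

```python
import numpy as np

hbar, m, a = 1.0, 1.0, 1.0              # units and well width (my choice)

def E(n):
    # infinite-square-well energy levels
    return (n * np.pi * hbar)**2 / (2 * m * a**2)

c0 = np.array([np.sqrt(1 / 3), np.sqrt(2 / 3)])   # C_1, C_2 at t = 0

def coeffs(t):
    # C_n(t) = C_n(0) * exp(-i E_n t / hbar)
    return c0 * np.exp(-1j * np.array([E(1), E(2)]) * t / hbar)

for t in (0.0, 0.3, 2.7):
    c = coeffs(t)
    # each coefficient only picks up a phase, so |C_n|^2 is constant
    assert np.allclose(np.abs(c)**2, np.abs(c0)**2)
# and the total norm is conserved
assert abs((np.abs(coeffs(1.0))**2).sum() - 1.0) < 1e-12
```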
10.4 Expectation Values of Stationary and Non-Stationary States
Consider two states described by $\psi(x, 0) = \phi_n(x)$ and $\psi(x, t) = \sum_j C_j(t)\,\phi_j(x)$. The expectation value of some operator $B$ for each of these states can be written as:

$$\langle\psi, t|\,B\,|\psi, t\rangle = \langle\psi, t_0|\,e^{\frac{i}{\hbar}E_n(t - t_0)}\,B\,e^{-\frac{i}{\hbar}E_n(t - t_0)}\,|\psi, t_0\rangle = \langle\psi, t_0|\,B\,|\psi, t_0\rangle$$

$$\langle\psi, t|\,B\,|\psi, t\rangle = \sum_{j,k} C^{*}_j(t_0)\,C_k(t_0)\,e^{\frac{i}{\hbar}(E_j - E_k)(t - t_0)}\,\langle a_j, t_0|\,B\,|a_k, t_0\rangle$$

Note the difference. In the case that a state is made up of a single stationary state, the expectation value of that state for some operator is independent of time. However, if the state is made up of a superposition of several states, then the expectation value will vary depending on the energies of the states. Summarizing these results:

$$\psi(x, 0) = \phi_n(x) \quad\Rightarrow\quad \langle\psi, t_0|\,B\,|\psi, t_0\rangle = \langle\psi, t|\,B\,|\psi, t\rangle$$

$$\psi(x, t) = \sum_j C_j(t)\,\phi_j(x) \quad\Rightarrow\quad \langle\psi, t|\,B\,|\psi, t\rangle = \sum_{j,k} C^{*}_j(t_0)\,C_k(t_0)\,e^{\frac{i}{\hbar}(E_j - E_k)(t - t_0)}\,\langle a_j, t_0|\,B\,|a_k, t_0\rangle$$
Wednesday - 10/1
10.5 Spin Interactions - Evolution of Spin States
Consider the Hamiltonian for an electron in a magnetic field oriented in the $z$ direction. For such a system:

$$H = \frac{p^2}{2m} + H_{spin} = \frac{p^2}{2m} - \frac{e}{mc}B\,S_z$$

The eigenkets of this operator can be written as $|p', \pm\rangle$. This gives:

$$H\,|p', \pm\rangle = \frac{p^2}{2m}\,|p', \pm\rangle - \frac{e}{mc}B\,S_z\,|p', \pm\rangle$$

$$H\,|p', \pm\rangle = \left[\frac{p'^2}{2m} \mp \left(\frac{e}{mc}\frac{\hbar}{2}B\right)\right]|p', \pm\rangle$$

Assuming that $p'$ is very small, we can neglect the momentum of the particle and simplify the above in terms of the new variables $\omega = \frac{|e|B}{mc}$ and $E_{\pm} = \pm\frac{\hbar\omega}{2}$:

$$H_s = \omega\,S_z \quad\Rightarrow\quad H_s\,|\pm\rangle = E_{\pm}\,|\pm\rangle$$

As seen previously, if some state $|\psi\rangle$ is a single stationary state, then time evolution simply adds a phase $e^{i\theta}$; however, if $|\psi\rangle$ is made up of a superposition of several states, then the probability of measuring any one of those states will vary in time. In our case:

$$|\psi, t = 0\rangle = c_+|+\rangle + c_-|-\rangle \quad\Rightarrow\quad |\psi, t\rangle = c_+\,e^{-\frac{i}{\hbar}E_+ t}\,|+\rangle + c_-\,e^{-\frac{i}{\hbar}E_- t}\,|-\rangle$$

If we let the initial state be $|\psi\rangle = |S_x, +\rangle$ then the initial and evolved states are:

$$|\psi, t = 0\rangle = |S_x, +\rangle = \frac{1}{\sqrt{2}}|+\rangle + \frac{1}{\sqrt{2}}|-\rangle$$

$$|\psi, t\rangle = \frac{1}{\sqrt{2}}\,e^{-\frac{i}{\hbar}E_+ t}\,|+\rangle + \frac{1}{\sqrt{2}}\,e^{-\frac{i}{\hbar}E_- t}\,|-\rangle$$

Now, we can calculate the probability of making a measurement of the system at some later time $t$ and finding it in the state $|S_x, \pm\rangle$:

$$|\langle S_x, \pm|\psi, t\rangle|^2 = \left|\left[\frac{1}{\sqrt{2}}\langle +| \pm \frac{1}{\sqrt{2}}\langle -|\right]\left[\frac{1}{\sqrt{2}}\,e^{-\frac{i}{\hbar}E_+ t}\,|+\rangle + \frac{1}{\sqrt{2}}\,e^{-\frac{i}{\hbar}E_- t}\,|-\rangle\right]\right|^2 = \frac{1}{4}\left|e^{-\frac{i}{\hbar}E_+ t} \pm e^{-\frac{i}{\hbar}E_- t}\right|^2$$

Plugging in our definition $E_{\pm} = \pm\frac{\hbar\omega}{2}$:

$$|\langle S_x, \pm|\psi, t\rangle|^2 = \frac{1}{4}\left|e^{-\frac{i\omega t}{2}} \pm e^{\frac{i\omega t}{2}}\right|^2$$

$$P_+(t) = \cos^2\left(\frac{\omega t}{2}\right); \qquad P_-(t) = \sin^2\left(\frac{\omega t}{2}\right)$$
Next, we can calculate the expectation values of $S_x$, $S_y$, and $S_z$. We'll do the first now; the other two are homework, but the results are given below:

$$\langle S_x\rangle = \langle\psi, t|\,S_x\,|\psi, t\rangle = \langle\psi, t|\,\frac{\hbar}{2}\left[|S_x, +\rangle\langle S_x, +| - |S_x, -\rangle\langle S_x, -|\right]|\psi, t\rangle$$

$$= \frac{\hbar}{2}\left[|\langle S_x, +|\psi, t\rangle|^2 - |\langle S_x, -|\psi, t\rangle|^2\right] = \frac{\hbar}{2}\left[\cos^2\left(\frac{\omega t}{2}\right) - \sin^2\left(\frac{\omega t}{2}\right)\right]$$

So, in total:

$$\langle S_x\rangle = \frac{\hbar}{2}\cos(\omega t); \qquad \langle S_y\rangle = \frac{\hbar}{2}\sin(\omega t); \qquad \langle S_z\rangle = 0$$
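These precession results can be reproduced numerically (a sketch; $\omega$ and the units are my own choices):

```python
import numpy as np

hbar, w = 1.0, 2.0                       # hbar and omega (my choice)
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
H = w * Sz

sx_plus = np.array([1, 1]) / np.sqrt(2)
sx_minus = np.array([1, -1]) / np.sqrt(2)

for t in np.linspace(0, 3, 7):
    # U(t) = exp(-i H t / hbar) is diagonal in the S_z basis
    U = np.diag(np.exp(-1j * np.diag(H) * t / hbar))
    psi = U @ sx_plus
    Pp = abs(sx_plus.conj() @ psi)**2
    Pm = abs(sx_minus.conj() @ psi)**2
    assert np.isclose(Pp, np.cos(w * t / 2)**2)
    assert np.isclose(Pm, np.sin(w * t / 2)**2)
    # <S_x>(t) = (hbar/2) cos(w t)
    assert np.isclose((psi.conj() @ Sx @ psi).real, hbar / 2 * np.cos(w * t))
```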
11 Energy-Time Uncertainty - An Overview
Consider some system in the initial state described as:

$$|\psi, t = 0\rangle = \sum_{E'} C_{E'}\,|E'\rangle \;\longrightarrow\; \int_{E'_0 - \Delta E}^{E'_0 + \Delta E} dE'\,C_{E'}\,|E'\rangle$$

Let $\Delta t$ be the lifetime of the state. That is, after some amount of time $\Delta t$, the state will no longer be described by $|\psi, t = 0\rangle$ given above. The state after some time $t$ can be written as:

$$|\psi, t\rangle = \int_{E'_0 - \Delta E}^{E'_0 + \Delta E} dE'\,C_{E'}\,e^{-\frac{i}{\hbar}E' t}\,|E'\rangle$$

Now consider the correlation probability amplitude in terms of the lifetime:

$$|\langle\psi, t = 0|\psi, t\rangle|^2 = \left|\int_{E'_0 - \Delta E}^{E'_0 + \Delta E} dE'\,|C_{E'}|^2\,e^{-\frac{i}{\hbar}E' t}\right|^2 = \left|\int_{E'_0 - \Delta E}^{E'_0 + \Delta E} dE'\,|C_{E'}|^2\,e^{-\frac{i}{\hbar}(E' - E'_0)t}\,e^{-\frac{i}{\hbar}E'_0 t}\right|^2$$
Friday - 10/3
From the above, if we define the change in energy as $E'' = E' - E'_0$, then we can write the relation as:

$$\langle\psi, 0|\psi, t\rangle = e^{-\frac{i}{\hbar}E'_0 t}\int dE''\,|C_{E''}|^2\,e^{-\frac{i}{\hbar}E'' t}$$

From this, an important result can be determined. Firstly though, consider the above for a stationary state; that is, $E'' = 0$:

$$\left|\langle\psi, 0|\psi, t\rangle\right| = \left|e^{-\frac{i}{\hbar}E'_0 t}\int dE''\,|C_{E''}|^2\,e^{-\frac{i}{\hbar}(0)t}\right| = 1$$

Thus if $|\psi\rangle$ is a single energy eigenket, then the energy is determined indefinitely and the probability of finding the system in the same state after some time $t$ is always 1. Secondly, consider the case that $E'' \to \infty$. The exponential is then behaving like:

$$\cos\left(\frac{E'' t}{\hbar}\right) - i\sin\left(\frac{E'' t}{\hbar}\right)$$

As the period of each oscillation goes to zero, $\Delta t$ also goes to zero. Thus for $E'' \to \infty$, the lifetime of the state $\Delta t \to 0$. From this, we can see the known result:

$$\Delta E\,\Delta t \gtrsim \hbar$$
12 Schrodinger and Heisenberg Interpretations of Time Evolution
Consider the evolution of the expectation value of an operator $X$. We can write this as:

$$\langle X\rangle(t) = \langle\psi, 0|\,U^{\dagger}(t, 0)\,X\,U(t, 0)\,|\psi, 0\rangle$$

There are two interpretations of this expectation value. According to the Schrodinger interpretation, the ket $|\psi\rangle$ evolves in time and the operator $X$ is time independent. In this case, the above can be written as:

$$\langle\psi, 0|\,U^{\dagger}(t, 0)\,X\,U(t, 0)\,|\psi, 0\rangle = \langle\psi, t|\,X^{(S)}\,|\psi, t\rangle; \qquad |\psi, t\rangle = U(t, 0)\,|\psi, 0\rangle$$

According to the Heisenberg interpretation, the state $|\psi\rangle$ is constant in time and the operator $X^{(H)}$ is what evolves in time. With these assumptions, one can write the above as:

$$\langle\psi, 0|\,U^{\dagger}(t, 0)\,X\,U(t, 0)\,|\psi, 0\rangle = \langle\psi|\,X^{(H)}(t)\,|\psi\rangle; \qquad X^{(H)}(t) = U^{\dagger}(t, 0)\,X\,U(t, 0)$$
We will do some analysis working with the Heisenberg interpretation. Consider the time evolution of some operator $A$. We can describe this as:

$$A^{(H)}(t) = U^{\dagger}(t, 0)\,A^{(H)}(0)\,U(t, 0)$$

$$\frac{d}{dt}A^{(H)}(t) = \left[\frac{d}{dt}U^{\dagger}(t, 0)\right]A^{(H)}(0)\,U(t, 0) + U^{\dagger}(t, 0)\,A^{(H)}(0)\left[\frac{d}{dt}U(t, 0)\right]$$

However, recall that the time derivative of the time evolution operator is known in terms of the Hamiltonian:

$$i\hbar\,\frac{d}{dt}U = HU; \qquad i\hbar\,\frac{d}{dt}U^{\dagger} = -U^{\dagger}H$$

And so we can write the above time evolution (for a time-independent $H$, which commutes with $U$) as:

$$i\hbar\,\frac{d}{dt}A^{(H)}(t) = -U^{\dagger}H\,A^{(H)}(0)\,U + U^{\dagger}A^{(H)}(0)\,H\,U = -H\,A^{(H)}(t) + A^{(H)}(t)\,H$$

$$i\hbar\,\frac{d}{dt}A^{(H)}(t) = [A^{(H)}(t), H]$$

Thus the time derivative of an operator is determined by its commutator with the Hamiltonian for the system. As an example, consider the Hamiltonian of a free particle, $H = \frac{p^2}{2m}$. Classically, we can find equations of motion from Hamilton's equations:

$$\dot{x} = \frac{\partial H}{\partial p} = \frac{p}{m}; \qquad \dot{p} = -\frac{\partial H}{\partial x} = 0$$

Alternately, we can find the same results quantum mechanically:

$$i\hbar\,\frac{d}{dt}x(t) = [x, H] = \left[x, \frac{p\,p}{2m}\right] = \left[x, \frac{p}{2m}\right]p + p\left[x, \frac{p}{2m}\right] = i\hbar\,\frac{2p}{2m} \quad\Rightarrow\quad \frac{dx}{dt} = \frac{p}{m}$$

$$i\hbar\,\frac{d}{dt}p(t) = [p, H] = \left[p, \frac{p\,p}{2m}\right] = 0$$
Monday - 10/6
12.1 Ehrenfest Theorem
According to the theorem proposed by Ehrenfest, the expectation values of physical observables should follow their classical counterparts' behavior. This is evident in the position and momentum operators. Consider a free particle:

$$H = \frac{p^2}{2m} \quad\Rightarrow\quad i\hbar\,\frac{dx}{dt} = \left[x, \frac{p^2}{2m}\right] \Rightarrow \frac{dx}{dt} = \frac{p}{m}; \qquad i\hbar\,\frac{dp}{dt} = \left[p, \frac{p^2}{2m}\right] = 0$$

$$\Rightarrow\quad x(t) = x_0 + \frac{p}{m}t; \qquad p(t) = p_0$$

Next, consider the commutator of the position with its initial value and the resulting uncertainty relation:

$$[x(t), x_0] = \left[x_0 + \frac{p}{m}t,\; x_0\right] = -\frac{i\hbar}{m}t$$

$$\langle(\Delta x(t))^2\rangle\,\langle(\Delta x_0)^2\rangle \geq \frac{\hbar^2}{4m^2}\,t^2$$

This result is important for understanding the behavior of a system. According to this, the dispersion of a wave packet expands in time as:

$$\langle(\Delta x(t))^2\rangle \approx \frac{c\,t^2}{\langle(\Delta x_0)^2\rangle}$$
Consider a particle moving in some potential described by $V(x)$. The Hamiltonian then becomes:

$$H = \frac{p^2}{2m} + V(x) \quad\Rightarrow\quad i\hbar\,\frac{dx_i}{dt} = \left[x_i, \frac{p^2}{2m} + V(\vec{x})\right] = i\hbar\,\frac{p_i}{m}$$

$$i\hbar\,\frac{dp_i}{dt} = \left[p_i, \frac{p^2}{2m} + V(\vec{x})\right] = 0 - i\hbar\,\frac{\partial V}{\partial x_i}$$

$$\Rightarrow\quad \frac{d\vec{x}}{dt} = \frac{1}{m}\vec{p}; \qquad \frac{d\vec{p}}{dt} = -\nabla V(x) \quad\Rightarrow\quad m\,\frac{d^2\vec{x}}{dt^2} = -\nabla V(x)$$

This result confirms the Ehrenfest theorem. However, this result only holds for certain systems, as complications to the Hamiltonian can cause differences to arise. This also leads to a useful result with respect to calculating how expectation values vary in time. Consider some operator $A$:

$$\frac{d}{dt}\langle\psi|\,A\,|\psi\rangle = \langle\psi|\,\frac{1}{i\hbar}[A, H]\,|\psi\rangle \quad\Rightarrow\quad \left\langle\frac{dA}{dt}\right\rangle = \langle\psi|\,\frac{1}{i\hbar}[A, H]\,|\psi\rangle$$
13 Quantum Simple Harmonic Oscillator
Consider the motion of a particle in a potential defined as $V(x) = \frac{m\omega^2}{2}x^2$. For such a system, the Hamiltonian is:

$$H = \frac{p^2}{2m} + \frac{m\omega^2}{2}x^2$$

In order to work through this problem, we'll define operators $a$ and $a^{\dagger}$, which are the lowering/annihilation and raising/creation operators respectively:

$$a = \sqrt{\frac{m\omega}{2\hbar}}\left(x + \frac{i}{m\omega}p\right); \qquad a^{\dagger} = \sqrt{\frac{m\omega}{2\hbar}}\left(x - \frac{i}{m\omega}p\right)$$

The coefficients are rigged such that $[a, a^{\dagger}] = 1$, which is an important result. Next, we need to write the above Hamiltonian in terms of these two operators instead of the position and momentum operators. First, we can note that the position and momentum operators can be written in terms of the raising and lowering operators as:

$$x = \sqrt{\frac{\hbar}{2m\omega}}\left(a + a^{\dagger}\right); \qquad p = i\sqrt{\frac{m\hbar\omega}{2}}\left(a^{\dagger} - a\right)$$

We could plug this into the above Hamiltonian and simplify it; however, we can also check what the value of $a^{\dagger}a$ is and find that it is quite useful:

$$a^{\dagger}a = \frac{m\omega}{2\hbar}\left(x^2 + \frac{p^2}{m^2\omega^2} + \frac{i}{m\omega}xp - \frac{i}{m\omega}px\right) = \frac{m\omega}{2\hbar}x^2 + \frac{p^2}{2m\hbar\omega} - \frac{1}{2}$$

Multiply this result by $\hbar\omega$ and it is clear that:

$$H = \hbar\omega\left(a^{\dagger}a + \frac{1}{2}\right)$$

If we define some new operator $N \equiv a^{\dagger}a$, which we refer to as the number operator, then we can note a few things about the system:

$$N^{\dagger} = N; \qquad [N, a] = -a; \qquad [N, a^{\dagger}] = a^{\dagger}; \qquad [H, N] = 0$$

The first of these results shows that $N$ is a Hermitian operator, meaning that its eigenvalues $n$ are real. Beyond that, we can show that for some ket $|\psi\rangle = a|n\rangle$ the inner product has to be non-negative. Thus:

$$\langle\psi|\psi\rangle \geq 0; \qquad \langle\psi|\psi\rangle = \langle n|\,a^{\dagger}a\,|n\rangle = \langle n|\,N\,|n\rangle = n \quad\Rightarrow\quad n \geq 0$$

The last result from above shows that because $N$ and the Hamiltonian commute, their eigenvalues are related. Let's check the relation:

$$N|n\rangle = n|n\rangle; \qquad H|n\rangle = \hbar\omega\left(N + \frac{1}{2}\right)|n\rangle$$

$$H|n\rangle = \hbar\omega\left(n + \frac{1}{2}\right)|n\rangle = E_n|n\rangle \quad\Rightarrow\quad E_n = \hbar\omega\left(n + \frac{1}{2}\right)$$

Finally, we can derive exactly what $a|n\rangle$ is. Firstly though, consider:

$$[a, N] = [a, a^{\dagger}a] = a^{\dagger}[a, a] + [a, a^{\dagger}]a = a$$

Thus, we can write that $N$ acting on $a|n\rangle$ is:

$$N\left(a|n\rangle\right) = aN|n\rangle - a|n\rangle = (n - 1)\,a|n\rangle \quad\Rightarrow\quad a|n\rangle = c_n\,|n - 1\rangle$$

$$\langle n|\,a^{\dagger}a\,|n\rangle = |c_n|^2\,\langle n - 1|n - 1\rangle \quad\Rightarrow\quad c_n = \sqrt{n}$$

The same analysis can be done for the second operator, and this gives the results:

$$a|n\rangle = \sqrt{n}\,|n - 1\rangle; \qquad a^{\dagger}|n\rangle = \sqrt{n + 1}\,|n + 1\rangle$$

With this result we can check the valid values of $n$. Consider starting with values of $n = 3$ and $n = \frac{3}{2}$ and operating with $a$ to lower the index:

$$a|3\rangle \propto |2\rangle; \quad a|2\rangle \propto |1\rangle; \quad a|1\rangle \propto |0\rangle; \quad a|0\rangle = 0$$

$$a\left|\tfrac{3}{2}\right\rangle \propto \left|\tfrac{1}{2}\right\rangle; \qquad a\left|\tfrac{1}{2}\right\rangle \propto \left|-\tfrac{1}{2}\right\rangle$$

The result that $a|0\rangle = 0$ means that the lowest value $n$ can have is $n = 0$. Further, because the operators can drive $|n\rangle$ to negative values if $n$ is not an integer, $n$ must be a whole number. Thus the known result is found: the index $n$ is an integer greater than or equal to 0. Now, consider that the ground state $|0\rangle$ is known. We can use the raising operator to find the higher order states. Consider the first few, and we will generalize the result:

$$a^{\dagger}|0\rangle = \sqrt{0 + 1}\,|1\rangle \quad\Rightarrow\quad |1\rangle = \frac{a^{\dagger}|0\rangle}{\sqrt{1}}$$

$$a^{\dagger}|1\rangle = \sqrt{1 + 1}\,|2\rangle \quad\Rightarrow\quad |2\rangle = \frac{a^{\dagger}|1\rangle}{\sqrt{2}} = \frac{(a^{\dagger})^2|0\rangle}{\sqrt{2}}$$

$$a^{\dagger}|2\rangle = \sqrt{2 + 1}\,|3\rangle \quad\Rightarrow\quad |3\rangle = \frac{a^{\dagger}|2\rangle}{\sqrt{3}} = \frac{(a^{\dagger})^3|0\rangle}{\sqrt{2\cdot 3}}$$

$$|n\rangle = \frac{(a^{\dagger})^n|0\rangle}{\sqrt{n!}}$$
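All of these ladder-operator results can be verified with truncated matrices (a numpy sketch; the truncation dimension is my own choice, and the commutation relation only fails in the truncation corner):

```python
import numpy as np

hbar, m, w = 1.0, 1.0, 1.0
N = 40                                    # truncation dimension (my choice)
n = np.arange(N)

a = np.diag(np.sqrt(n[1:]), 1)            # <n-1|a|n> = sqrt(n)
ad = a.T.conj()                           # raising operator a^dagger

# [a, a^dagger] = 1 everywhere except the truncation corner
comm = a @ ad - ad @ a
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))

# H = hbar*w*(a^dagger a + 1/2) has eigenvalues hbar*w*(n + 1/2)
H = hbar * w * (ad @ a + 0.5 * np.eye(N))
assert np.allclose(np.diag(H), hbar * w * (n + 0.5))

# uncertainty product in state |n>: <x^2><p^2> = (n + 1/2)^2 hbar^2
x = np.sqrt(hbar / (2 * m * w)) * (a + ad)
p = 1j * np.sqrt(m * hbar * w / 2) * (ad - a)
for k in range(5):
    x2 = (x @ x)[k, k].real
    p2 = (p @ p)[k, k].real
    assert np.isclose(x2 * p2, (k + 0.5)**2 * hbar**2)
```

The last loop anticipates the uncertainty-relation homework result quoted in the next section.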
Wednesday - 10/8
13.1 Matrix Elements of Operators and Expectation Values
Recall that the matrix elements of an operator are defined as $\langle a_j|\,A\,|a_i\rangle$ in terms of some base kets $|a_i\rangle$. If we take the energy eigenkets of the harmonic system as a set of base kets, then we can write the matrix components of the operators $a$ and $a^{\dagger}$ as:

$$\langle n'|\,a\,|n\rangle = \sqrt{n}\,\langle n'|n - 1\rangle = \sqrt{n}\,\delta_{n', n-1}$$

$$\langle n'|\,a^{\dagger}\,|n\rangle = \sqrt{n + 1}\,\langle n'|n + 1\rangle = \sqrt{n + 1}\,\delta_{n', n+1}$$

Note that in the case of $n' = n$ both matrix elements are zero. We can use these results to write the matrix elements of the position and momentum operators:

$$\langle n'|\,x\,|n\rangle = \sqrt{\frac{\hbar}{2m\omega}}\left(\langle n'|\,a\,|n\rangle + \langle n'|\,a^{\dagger}\,|n\rangle\right) = \sqrt{\frac{\hbar}{2m\omega}}\left(\sqrt{n}\,\delta_{n', n-1} + \sqrt{n + 1}\,\delta_{n', n+1}\right)$$

$$\langle n'|\,p\,|n\rangle = i\sqrt{\frac{m\hbar\omega}{2}}\left(\langle n'|\,a^{\dagger}\,|n\rangle - \langle n'|\,a\,|n\rangle\right) = i\sqrt{\frac{m\hbar\omega}{2}}\left(\sqrt{n + 1}\,\delta_{n', n+1} - \sqrt{n}\,\delta_{n', n-1}\right)$$

$$\langle n'|\,x^2\,|n\rangle = \frac{\hbar}{2m\omega}\left(\langle n'|\,a^2\,|n\rangle + \langle n'|\,a^{\dagger}a\,|n\rangle + \langle n'|\,aa^{\dagger}\,|n\rangle + \langle n'|\,a^{\dagger 2}\,|n\rangle\right)$$

Note that when $n = n'$ we obtain the expectation value of the operator for a system in the $E_n$ energy state. Consider a system in the ground state ($n = 0$). The expectation values of both the position and momentum are 0. Consider the last equation though:

$$\langle 0|\,x^2\,|0\rangle = \frac{\hbar}{2m\omega}\left(\langle 0|\,a^2\,|0\rangle + \langle 0|\,a^{\dagger}a\,|0\rangle + \langle 0|\,aa^{\dagger}\,|0\rangle + \langle 0|\,a^{\dagger 2}\,|0\rangle\right)$$

The first and last terms go to zero since we know that $a|0\rangle = 0$ and therefore also $\langle 0|\,a^{\dagger} = 0$. This leaves:

$$\langle 0|\,x^2\,|0\rangle = \frac{\hbar}{2m\omega}\left(\langle 0|\,a^{\dagger}a\,|0\rangle + \langle 0|\,aa^{\dagger}\,|0\rangle\right)$$

Now, in order to get a result for the remaining terms, consider the Hamiltonian for this system and the fact that $aa^{\dagger} - a^{\dagger}a = 1$:

$$H = \hbar\omega\left(a^{\dagger}a + \frac{1}{2}\right) = \hbar\omega\left(a^{\dagger}a + \frac{1}{2}\left(aa^{\dagger} - a^{\dagger}a\right)\right) = \frac{\hbar\omega}{2}\left(a^{\dagger}a + aa^{\dagger}\right)$$

And so, the expectation value of $x^2$ for the ground state is then:

$$\langle 0|\,x^2\,|0\rangle = \frac{\hbar}{2m\omega}\,\langle 0|\,\frac{2H}{\hbar\omega}\,|0\rangle = \frac{\hbar}{2m\omega}$$

Repeating this for the momentum yields a result of:

$$\langle 0|\,p^2\,|0\rangle = \frac{m\hbar\omega}{2}$$

Plugging this into the uncertainty relation gives, for the ground state of a quantum harmonic oscillator:

$$\left.\langle(\Delta x)^2\rangle\,\langle(\Delta p)^2\rangle\right|_{n=0} = \frac{\hbar^2}{4}$$

For homework, we are to show that for any energy state $E_n$ the uncertainty relation becomes:

$$\langle(\Delta x)^2\rangle\,\langle(\Delta p)^2\rangle = \left(n + \frac{1}{2}\right)^2\hbar^2$$
13.2 Deriving the Wave Function
Writing the wave function for the quantum harmonic oscillator usually involves building solutions out of polynomial expansions. Using operator theory we can build the solutions without solving a complicated differential equation. Consider firstly that the wave functions are defined as:

$$\psi_n(x) = \langle x|n\rangle \quad\Rightarrow\quad \psi_0(x) = \langle x|0\rangle$$

And since we know that $a|0\rangle = 0$ we can set up the differential equation:

$$\langle x|\,a\,|0\rangle = \langle x|\,\sqrt{\frac{m\omega}{2\hbar}}\left(x + i\frac{p}{m\omega}\right)|0\rangle = 0$$

Given the relations that:

$$\langle x|\,\hat{x}\,|0\rangle = x\,\psi_0(x); \qquad \langle x|\,p\,|0\rangle = -i\hbar\,\frac{d}{dx}\psi_0(x)$$

the above becomes a differential equation of the form:

$$\left(\frac{x}{x_0^2} + \frac{d}{dx}\right)\psi_0(x) = 0; \qquad x_0^2 = \frac{\hbar}{m\omega}$$

This has a solution of the form:

$$\psi_0(x) = N\,e^{-\frac{x^2}{2x_0^2}}$$

And noting that a normalized Gaussian distribution has the form:

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(x - \mu)^2}{2\sigma^2}}$$

we know that $|\psi_0(x)|^2$ should be normalized in the same way. Therefore the solution for the ground state is:

$$\psi_0(x) = \frac{1}{\pi^{1/4}\sqrt{x_0}}\,e^{-\frac{x^2}{2x_0^2}}$$

Given the relation derived at the end of last class, we should be able to write the wave function for any energy level in terms of this ground state. Consider:

$$\langle x|n\rangle = \langle x|\,\frac{(a^{\dagger})^n}{\sqrt{n!}}\,|0\rangle = \langle x|\,\frac{a^{\dagger}}{\sqrt{n}}\,\frac{(a^{\dagger})^{n-1}}{\sqrt{(n - 1)!}}\,|0\rangle = \langle x|\,\frac{a^{\dagger}}{\sqrt{n}}\,|n - 1\rangle$$

This can be written in terms of wave functions as:

$$\psi_n(x) = \frac{1}{\sqrt{2n}\,x_0}\left(x - x_0^2\,\frac{d}{dx}\right)\psi_{n-1}(x)$$

$$\psi_n(x) = \frac{1}{\sqrt{2^n n!}}\,\frac{1}{x_0^n}\left(x - x_0^2\,\frac{d}{dx}\right)^n\psi_0(x)$$
13.3 Time Evolution
From what we've derived so far about time evolution, we can quickly write that for some state $|\psi\rangle$ made up of a linear combination of stationary states, the time evolved state will look like:

$$|\psi, t\rangle = \sum_n C_n\,e^{-\frac{i}{\hbar}E_n t}\,|n\rangle = \sum_n C_n\,e^{-i\left(n + \frac{1}{2}\right)\omega t}\,|n\rangle$$

Alternately, we could look at the operators themselves (via the Heisenberg method) and analyze their time evolution. However, instead of plugging the operators $x$ and $p$ into the Heisenberg relation, let's use the raising and lowering operators:

$$i\hbar\,\frac{da}{dt} = [a, H] \;\Rightarrow\; a(t) = a_0\,e^{-i\omega t}; \qquad i\hbar\,\frac{da^{\dagger}}{dt} = [a^{\dagger}, H] \;\Rightarrow\; a^{\dagger}(t) = a^{\dagger}_0\,e^{i\omega t}$$

Plugging these into the above relations for position and momentum gives results of:

$$x(t) = x_0\cos(\omega t) + \frac{p_0}{m\omega}\sin(\omega t)$$

$$p(t) = p_0\cos(\omega t) - x_0\,m\omega\,\sin(\omega t)$$

With these operators defined we can check the expectation values. We find that both $\langle x\rangle$ and $\langle p\rangle$ return 0. However, it is worth noting that if a state is built correctly from the stationary states, then it is possible to construct a state $|\lambda\rangle$ such that the expectation values follow the classical values. Such states are called coherent states. They are of interest for quantum optics and laser physics.

Friday - 10/10 - NO CLASS

Monday - 10/13

The result of plugging the time evolution into the position and momentum leads to $\langle x\rangle = \langle n|\,x(t)\,|n\rangle = 0$. This appears inconsistent with Ehrenfest's theorem that expectation values will follow their classical counterparts' behavior. However, there is a second method to derive the above which we should note:

$$A(t) = U^{\dagger}A(0)\,U; \qquad U = e^{-\frac{i}{\hbar}Ht} \quad\Rightarrow\quad x(t) = e^{\frac{i}{\hbar}Ht}\,x(0)\,e^{-\frac{i}{\hbar}Ht}$$

After some algebra it is noted that this gives the same form for $x(t)$ as the previous derivation. However, we can make a state of energy levels we'll call $|\lambda\rangle$ for which the expectation values of the position and momentum follow the behavior of the classical position and momentum. Such a state turns out to be an eigenket of the lowering operator. That is:

$$\langle\lambda|\,x(t)\,|\lambda\rangle = A\cos(\omega t) + B\sin(\omega t); \qquad a|\lambda\rangle = \lambda'\,|\lambda\rangle$$

One interesting thing to note (which will be shown explicitly in the homework) is that there are no eigenkets for the operator $a^{\dagger}$.
13.4 Review of Quantum Simple Harmonic Oscillator
Definitions of operators:

Hamiltonian:

$$H = \frac{p^2}{2m} + \frac{m\omega^2}{2}x^2 = \hbar\omega\left(a^{\dagger}a + \frac{1}{2}\right) = \frac{\hbar\omega}{2}\left(a^{\dagger}a + aa^{\dagger}\right)$$

Lowering (annihilation), raising (creation), and $N$:

$$a = \sqrt{\frac{m\omega}{2\hbar}}\left(x + \frac{i}{m\omega}p\right); \qquad a^{\dagger} = \sqrt{\frac{m\omega}{2\hbar}}\left(x - \frac{i}{m\omega}p\right); \qquad N = a^{\dagger}a$$

Position and momentum:

$$x = \sqrt{\frac{\hbar}{2m\omega}}\left(a + a^{\dagger}\right); \qquad p = i\sqrt{\frac{m\hbar\omega}{2}}\left(a^{\dagger} - a\right)$$

States:

$$|n\rangle = \frac{(a^{\dagger})^n|0\rangle}{\sqrt{n!}}$$

$$\psi_0(x) = \frac{1}{\pi^{1/4}\sqrt{x_0}}\,e^{-\frac{x^2}{2x_0^2}}; \qquad \psi_n(x) = \frac{1}{\sqrt{2^n n!}}\,\frac{1}{x_0^n}\left(x - x_0^2\,\frac{d}{dx}\right)^n\psi_0(x)$$

Uncertainty relation:

$$\langle(\Delta x)^2\rangle\,\langle(\Delta p)^2\rangle = \left(n + \frac{1}{2}\right)^2\hbar^2$$

Commutation relations:

$$[a, a^{\dagger}] = 1; \qquad [a^{\dagger}, a] = -1$$

$$[N, a] = -a; \qquad [N, a^{\dagger}] = a^{\dagger}; \qquad [N, H] = 0$$

Time evolution of operators:

$$i\hbar\,\frac{da}{dt} = [a, H] \;\Rightarrow\; a(t) = a_0\,e^{-i\omega t}; \qquad i\hbar\,\frac{da^{\dagger}}{dt} = [a^{\dagger}, H] \;\Rightarrow\; a^{\dagger}(t) = a^{\dagger}_0\,e^{i\omega t}$$

$$x(t) = x_0\cos(\omega t) + \frac{p_0}{m\omega}\sin(\omega t); \qquad p(t) = p_0\cos(\omega t) - x_0\,m\omega\,\sin(\omega t)$$
14 Stationary States - Bound and Continuous Solutions
Consider the time dependent Schrodinger equation and how it simplifies for stationary states (as is done in undergraduate quantum mechanics):

$$\langle x'|\,i\hbar\frac{\partial}{\partial t}\,|\psi, t\rangle = \langle x'|\left(\frac{p^2}{2m} + V(x)\right)|\psi, t\rangle \quad\Rightarrow\quad i\hbar\,\frac{\partial}{\partial t}\psi(x', t) = \left(-\frac{\hbar^2}{2m}\nabla^2 + V(x')\right)\psi(x', t)$$

For stationary states this becomes the familiar starting point of many problems:

$$\left(-\frac{\hbar^2}{2m}\nabla^2 + V(x)\right)\psi(x) = E\,\psi(x); \qquad \psi(x, t) = e^{-\frac{i}{\hbar}Et}\,\psi(x)$$

Redefining the above wave function as $u(x)$, we can write this equation in the form:

$$\frac{d^2 u}{dx^2} = \kappa^2\,u(x); \qquad \kappa^2 = \frac{2m}{\hbar^2}\left[V(x) - E\right]$$

Consider a region in space where $V(x) > E$ and therefore $\kappa^2 > 0$. In this case, the solutions of the above take the form:

$$u(x) \sim e^{\pm\kappa x}$$

In order for our solution to be physical, it cannot blow up for $|x| \to \infty$. This means that of the above, only the exponentially decaying solution survives; which sign is picked is then decided by where along the $x$ axis the region exists. Such states are called bound states, and mathematically such eigenfunctions always have discrete eigenvalues. That means that some function $f(x)$ made of several of these states can be written in the form:

$$f(x) = \sum_n C_n\,e^{-\kappa_n x}$$

Alternately, we could consider a region of space where $V(x) < E$ and therefore $\kappa^2 < 0$. In this case, the solutions take the form:

$$u(x) \sim e^{\pm i|\kappa| x}$$

These solutions simply oscillate at a rate dependent on $\kappa$. The exact rate of oscillation is therefore dependent on the energy and potential energy in the region. Mathematically speaking, the eigenvalues of these eigenfunctions form a continuous spectrum of values, and instead of a summation of accepted states an integral transform is used. In this case, the combination would cover some region of energy values:

$$f(x) = \int dk\,C(k)\,e^{ik x}$$

Beginning next class, we'll be looking at the delta function potential, given as $V(x) = -V_0\,\delta(x)$. That means that we need to solve the equation:

$$\frac{d^2 u}{dx^2} = \frac{2m}{\hbar^2}\left[-V_0\,\delta(x) - E\right]u(x)$$

From this we can see that bound states ($\kappa^2 > 0$ away from the origin) occur only for $E < 0$. In other cases, the solutions are continuous states.
Wednesday - 10/15
As an aside, consider a particle free to move in three dimensions in a potential that is radially varying. For such a case,

\left[ -\frac{\hbar^2}{2m}\nabla^2 + V(r) \right] \psi(\vec{x}) = E\,\psi(\vec{x}); \qquad \psi_{l,m}(\vec{x}) = R(r)\, Y_{l,m}(\theta, \phi); \qquad u(r) = r R(r)

The remaining radial equation is:

\frac{d^2 u}{dr^2} = \frac{2m}{\hbar^2}\left( V_{\text{eff}}(r) - E \right) u(r); \qquad V_{\text{eff}}(r) = V(r) + \frac{l(l+1)\hbar^2}{2m r^2}

So in this case we return to the form:

\frac{d^2 u}{dx^2} = \kappa^2 u(x)
14.1 Dirac Delta Function Potential

Consider the one dimensional problem of a potential defined as V(x) = -V_0\,\delta(x). For values of energy less than 0, the resulting states are bound in the delta function's well. This can be seen when x \neq 0:

\frac{d^2 u}{dx^2} = \kappa^2 u(x); \qquad \kappa^2 = -\frac{2mE}{\hbar^2}

Assume that the energy is negative; the solutions are:

u(x) = \begin{cases} A e^{-\kappa x} & x > 0 \\ B e^{+\kappa x} & x < 0 \end{cases}
where the continuity condition at x = 0 means that A = B. To get a value of \kappa we can integrate the original equation across the point where the delta function contributes. That is:

\int_{0^-}^{0^+} \frac{d^2 u}{dx^2}\, dx = \frac{2m}{\hbar^2}\left[ \int_{0^-}^{0^+} \left( -V_0\,\delta(x) \right) u(x)\, dx - \int_{0^-}^{0^+} E\, u(x)\, dx \right]

2\kappa = \frac{2m}{\hbar^2} V_0 \quad\Rightarrow\quad E = -\frac{m V_0^2}{2\hbar^2}
Next, consider the continuous solutions with energy greater than zero. If we consider a wave initially coming in from the -x direction, then we can write the solution as:

u_I(x) = e^{ikx} + R\, e^{-ikx} \qquad x < 0

u_{II}(x) = T\, e^{ikx} \qquad x > 0

Continuity at x = 0 requires that T = 1 + R. Again integrating at x = 0 gives the condition that:

Tik - ik(1 - R) = -\frac{2mV_0}{\hbar^2} T \quad\Rightarrow\quad R = \frac{-\frac{2mV_0}{\hbar^2}}{2ik + \frac{2mV_0}{\hbar^2}}

Note that this reflection coefficient has a pole at the value 2ik = -\frac{2mV_0}{\hbar^2}. This corresponds to the bound energy state E_{\text{pole}} = -\frac{m V_0^2}{2\hbar^2}. In many experiments this is used as confirmation of a bound state for a system. As an example, refer to the bound state of a proton and neutron in a Rutherford scattering type experiment. The data near the energy of the Deuterium's formation energy shows a pole.
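As a quick numerical sketch of the results above (not from the notes; units with hbar = m = 1 and an arbitrary well strength V0 are assumed), the reflection and transmission amplitudes conserve flux, and the pole of R reproduces the bound-state energy:

```python
import numpy as np

# Sketch (assumed units hbar = m = 1): amplitudes for the attractive delta
# well V(x) = -V0*delta(x), from the matching conditions derived above.
hbar = m = 1.0
V0 = 2.0

def R_amp(k):
    g = 2.0 * m * V0 / hbar**2          # 2 m V0 / hbar^2
    return -g / (2j * k + g)

def T_amp(k):
    return 1.0 + R_amp(k)               # continuity at x = 0

# Probability flux is conserved: |R|^2 + |T|^2 = 1 for every k > 0.
for k in [0.1, 1.0, 5.0]:
    assert abs(abs(R_amp(k))**2 + abs(T_amp(k))**2 - 1.0) < 1e-12

# The pole of R sits at 2ik = -2mV0/hbar^2, i.e. k = i m V0 / hbar^2,
# whose energy is exactly the bound state E = -m V0^2 / (2 hbar^2).
k_pole = 1j * m * V0 / hbar**2
E_pole = (hbar * k_pole)**2 / (2 * m)
assert abs(E_pole - (-m * V0**2 / (2 * hbar**2))) < 1e-12
```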
15 The Feynman Path Integral & The Propagator

Consider two points A and B. To find the path a particle takes between those two points, classically we would find the Lagrangian of the system and then apply it to the action integral to find the path of least action. However, quantum mechanically it is found that many different paths are possible outcomes. According to Feynman, all paths are possible trajectories, but each has a probability associated with it. Consider the relation of an initial state to its final state if it is expressed in terms of energy eigenkets,

\langle x''|\alpha, t\rangle = \langle x''|\, e^{-\frac{i}{\hbar}H(t - t_0)}\, |\alpha, t_0\rangle
\psi_{\alpha}(x'', t) = \langle x''|\, e^{-\frac{i}{\hbar}H(t - t_0)} \sum_{a'} |a'\rangle\langle a'|\alpha, t_0\rangle

\psi_{\alpha}(x'', t) = \sum_{a'} \langle x''|a'\rangle\, e^{-\frac{i}{\hbar}E_{a'}(t - t_0)}\, \langle a'|\alpha, t_0\rangle

Expanding out the last term as:

\langle a'|\alpha, t_0\rangle = \int d^3x'\, \langle a'|x'\rangle\langle x'|\alpha, t_0\rangle = \int d^3x'\, U^*_{a'}(x')\, \psi_{\alpha}(x', t_0)

And plugging this into the above, we have

\psi_{\alpha}(x'', t) = \int d^3x' \left[ \sum_{a'} U_{a'}(x'')\, U^*_{a'}(x')\, e^{-\frac{i}{\hbar}E_{a'}(t - t_0)} \right] \psi_{\alpha}(x', t_0)
The quantity in brackets is known as the Propagator, since with it and some initial state \psi(x, t_0), the wave function at any other time and location can be derived by the equation:

\psi_{\alpha}(x'', t) = \int d^3x'\, K(x'', t; x', t_0)\, \psi_{\alpha}(x', t_0); \qquad K(x'', t; x', t_0) = \sum_{a'} U_{a'}(x'')\, U^*_{a'}(x')\, e^{-\frac{i}{\hbar}E_{a'}(t - t_0)}
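As an illustrative sketch (not part of the notes; hbar = m = L = 1 assumed), the eigenfunction sum above can be evaluated numerically for the infinite square well, and applying K to an energy eigenstate just multiplies it by the expected phase:

```python
import numpy as np

# Sketch: K(x'', t; x', t0) = sum_a U_a(x'') U_a*(x') exp(-i E_a (t-t0)/hbar)
# for the infinite square well on [0, L], with hbar = m = L = 1 assumed.
hbar = m = L = 1.0
N = 200                                  # number of eigenstates kept
x = np.linspace(0.0, L, 1001)
dx = x[1] - x[0]

n = np.arange(1, N + 1)
U = np.sqrt(2.0 / L) * np.sin(np.outer(n, x) * np.pi / L)   # u_n(x_i)
E = (n * np.pi * hbar)**2 / (2.0 * m * L**2)

def K(t):
    phase = np.exp(-1j * E * t / hbar)
    return (U.T * phase) @ np.conj(U)    # K[i,j] = sum_n u_n(x_i) e^{-iE_n t} u_n(x_j)

t = 0.7
psi0 = U[0]                              # ground state u_1
psi_t = K(t) @ psi0 * dx                 # integrate over x'
expected = psi0 * np.exp(-1j * E[0] * t / hbar)
assert np.max(np.abs(psi_t - expected)) < 1e-6
```

The truncation N and grid spacing are arbitrary choices; the check works because the discrete grid preserves the orthogonality of the sine modes.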
Friday - 10/17
It can be shown that for an initial wave function \psi(x, t_0) = \delta^3(\vec{x} - \vec{x}\,'), the wave function at any later time is simply the propagator: \psi(x, t) = K(\vec{x}, t; \vec{x}\,', t_0). Further, one finds that the above can be written alternately as:

K = \langle x''|\, e^{-\frac{i}{\hbar}H(t - t_0)}\, |x'\rangle

Expanding the above in terms of energy eigenkets |a'\rangle, one finds the exact definition of K stated above.
16 Potentials and Gauge Transformations in Quantum Mechanics

Consider a free particle moving in a potential V(\vec{x}). For such a particle, the force acting on the particle is defined to be \vec{F} = -\nabla V(\vec{x}). However, classically, one could add some constant value to the potential and not change the physics of the system. Let's consider doing the same for a quantum mechanical system and check the results. Consider two systems described by the Hamiltonians:

H = \frac{p^2}{2m} + V(x); \qquad \tilde{H} = \frac{p^2}{2m} + V(x) + V_0
If the systems start in the same initial state |\alpha, t_0\rangle, then after some time t, the systems will be in states described by new time evolved kets. Noting that \tilde{H} = H + V_0, the states can be written as:

|\alpha, t\rangle = e^{-\frac{i}{\hbar}H(t - t_0)} |\alpha, t_0\rangle; \qquad |\tilde{\alpha}, t\rangle = e^{-\frac{i}{\hbar}\tilde{H}(t - t_0)} |\alpha, t_0\rangle = |\alpha, t\rangle\, e^{-\frac{i}{\hbar}V_0(t - t_0)}

Thus the physics are unchanged except for a phase factor on the system with the higher overall potential. However, there are a number of experiments that can be done to confirm and show this is the case.
16.1 Gravitational Gauge Transform

Consider an interferometer which can be tilted up into the air at some angle \delta. The two paths (which we will denote as I and II) then differ only by having different gravitational potential. The incident beam of neutrons is split into two beams, one of which (I) travels horizontal to the ground to a reflector and then up an inclined plane to a second beam splitter at the other corner of the device. The second beam (II) travels first up the incline and then across a horizontal path of equal length to the first, but at a different height. The potentials of each beam along the paths are:

V_I = 0; \qquad V_{II} = m g\, l_2 \sin(\delta)

The difference in states at the screen is then given by:

\psi_I = e^{-\frac{i}{\hbar}V_I T}\,\psi_0; \qquad \psi_{II} = e^{-\frac{i}{\hbar}V_{II} T}\,\psi_0
Where we've defined T = t - t_0 as the travel time through that arm of the detector. Note that this can be written in bracket notation in terms of inner products between positions A (the initial beam splitter), B (the mirror on track II), C (the mirror on track I), and D (the recombining point).

\psi_I = \langle C, t_c | A, t_a \rangle \langle D, t_d | C, t_c \rangle; \qquad \psi_{II} = \langle B, t_b | A, t_a \rangle \langle D, t_d | B, t_b \rangle

\langle D, t_d | C, t_c \rangle = \langle B, t_b | A, t_a \rangle

Combining all of this we can see that the two resulting beams that recombine are related by:

\psi_{II} = e^{-\frac{i}{\hbar}(V_{II} - V_I) T}\,\psi_I
Further, recalling the definitions for V_I and V_{II} and the relations:

m v = p = \frac{h}{\lambda} \quad\Rightarrow\quad T = \frac{l_1}{v} = \frac{m\, l_1 \lambda}{h}

Then we can write the above in terms of a general phase factor:

\psi_{II} = e^{-i\phi}\,\psi_I; \qquad \phi(\delta) = \frac{m^2 g\, l_1 l_2 \lambda \sin(\delta)}{\hbar^2}

And finally, the magnitude of the resulting beam is:

|\psi_{tot}|^2 = |\psi_I|^2 \left| 1 + e^{-i\phi} \right|^2 = 4\cos^2\left( \frac{\phi}{2} \right) |\psi_I|^2
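The interference identity used in the last line can be checked directly (a trivial numerical sketch, not part of the notes):

```python
import numpy as np

# Check of the interference formula above:
# |1 + e^{i phi}|^2 = 2 + 2 cos(phi) = 4 cos^2(phi / 2) for any phase phi.
phi = np.linspace(-2 * np.pi, 2 * np.pi, 101)
lhs = np.abs(1.0 + np.exp(1j * phi))**2
rhs = 4.0 * np.cos(phi / 2.0)**2
assert np.allclose(lhs, rhs)
```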
16.2 Electromagnetic Gauge Transforms

Consider the equations of classical Electromagnetism. The physical fields \vec{E} and \vec{B} can be more easily derived and defined in terms of the vector potential \vec{A} and scalar potential \phi where:

\vec{E} = -\nabla\phi - \frac{1}{c}\frac{\partial \vec{A}}{\partial t}; \qquad \vec{B} = \nabla \times \vec{A}

From these definitions, it can be seen that the following Gauge Transformations are allowed without altering the resulting physical fields.

\vec{A} = \vec{A}_0 + \nabla\Lambda; \qquad \phi = \phi_0 - \frac{1}{c}\frac{\partial \Lambda}{\partial t}
Monday - 10/20
Consider the Hamiltonian for a charged particle moving in an electromagnetic field. Classically, one would write:

H_{cl} = \frac{1}{2m}\left( \vec{p} - \frac{e}{c}\vec{A} \right)^2 + e\phi(\vec{x})

This is found from the Hamiltonian equations which yield the Lorentz force on a particle:

\frac{\partial p_i}{\partial t} = -\frac{\partial H}{\partial x_i}; \qquad \frac{\partial x_i}{\partial t} = \frac{\partial H}{\partial p_i}

If we define the canonical momentum p_i = \frac{\partial L}{\partial \dot{x}_i} and the kinetic momentum \Pi_i = m\dot{x}_i, then we can write the Lorentz force as:
\frac{d\vec{\Pi}}{dt} = e\left[ -\nabla\phi - \frac{1}{c}\frac{\partial \vec{A}}{\partial t} + \frac{\vec{v}}{c} \times \left( \nabla \times \vec{A} \right) \right]

\frac{d\vec{\Pi}}{dt} = -e\nabla\left( \phi - \frac{\vec{v}}{c}\cdot\vec{A} \right) - \frac{e}{c}\left[ \frac{\partial \vec{A}}{\partial t} + \left( \vec{v}\cdot\nabla \right)\vec{A} \right]

Noting that the last term is actually the total derivative of \vec{A}, it can be shown that the relation between \vec{p} and \vec{\Pi} is:

\vec{p} = \vec{\Pi} + \frac{e}{c}\vec{A}

And therefore the Lorentz force is:

\frac{d\vec{p}}{dt} = -\nabla\left( e\phi - \frac{e}{c}\vec{v}\cdot\vec{A} \right)
From all of this we can finally continue with the Hamiltonian for a particle in an EM field,

H = \frac{1}{2m}\left( \vec{p} - \frac{e}{c}\vec{A} \right)^2 + e\phi(\vec{x})

Consider the commutator relations and how they are altered. Of particular interest is the commutator of different components of \vec{\Pi}:

[\Pi_i, \Pi_j] = \left[ p_i - \frac{e}{c}A_i,\; p_j - \frac{e}{c}A_j \right] = [p_i, p_j] - \frac{e}{c}[A_i, p_j] - \left[ p_i, \frac{e}{c}A_j \right] + \frac{e^2}{c^2}[A_i, A_j]

Noting that \vec{A} = \vec{A}(\vec{x}), and that the first and last terms are then necessarily zero, we can express this as:

[\Pi_i, \Pi_j] = -\frac{e}{c}(i\hbar)\frac{\partial A_i}{\partial x_j} + (i\hbar)\frac{e}{c}\frac{\partial A_j}{\partial x_i} = i\hbar\frac{e}{c}\left( \frac{\partial A_j}{\partial x_i} - \frac{\partial A_i}{\partial x_j} \right)

The derivative term is the k component of \nabla \times \vec{A}, which is the magnetic field, so the final result is that:

[\Pi_i, \Pi_j] = i\hbar\frac{e}{c}\,\epsilon_{ijk} B_k
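The antisymmetric-derivative identity behind this result can be verified symbolically. The sketch below (not from the notes) uses the symmetric gauge A = (1/2) B x r with a uniform field B = B0 z-hat as an assumed concrete example:

```python
import sympy as sp

# Sketch: verify dA_j/dx_i - dA_i/dx_j = eps_{ijk} B_k for the symmetric
# gauge A = (1/2) B x r with uniform B = B0 zhat (assumed example).
x, y, z, B0 = sp.symbols('x y z B0')
r = sp.Matrix([x, y, z])
B = sp.Matrix([0, 0, B0])
A = B.cross(r) / 2                      # A = (1/2) B x r

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

# This gauge really does produce the uniform field:
assert sp.simplify(curl(A) - B) == sp.zeros(3, 1)

# The antisymmetric derivative entering [Pi_i, Pi_j]:
coords = [x, y, z]
for i in range(3):
    for j in range(3):
        lhs = sp.diff(A[j], coords[i]) - sp.diff(A[i], coords[j])
        rhs = sum(sp.LeviCivita(i, j, k) * B[k] for k in range(3))
        assert sp.simplify(lhs - rhs) == 0
```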
Wednesday - 10/22
Note on the homework: In proving that [x_i, G(\vec{p})] = i\hbar \frac{\partial G}{\partial p_i}, the series is not truncated. The higher order terms give the series expansion of the first derivative term multiplied by the factor p_i. This gives the expected result exactly.
Recall that we derived the Hamiltonian for a particle in an electromagnetic field as:

H = \frac{1}{2m}\left( \vec{p} - \frac{e}{c}\vec{A} \right)^2 + e\phi

And the kinetic momentum as \vec{\Pi} = \vec{p} - \frac{e}{c}\vec{A}. These definitions allow us to compare the force on a particle in both classical and quantum models of Electromagnetism. Consider the classical definition:
\frac{d\vec{\Pi}}{dt} = e\left[ \vec{E} + \frac{\vec{v}}{c} \times \vec{B} \right]

And consider the quantum derivation:

i\hbar\frac{d\vec{\Pi}}{dt} = [\vec{\Pi}, H]

Consider only the i component of the kinetic momentum. We can then write this as:

i\hbar\frac{d\Pi_i}{dt} = \left[ \Pi_i,\; \frac{1}{2m}\Pi_j\Pi_j + e\phi \right]

The second term is easiest to deal with,

[\Pi_i, e\phi] = \left[ p_i - \frac{e}{c}A_i,\; e\phi \right] = [p_i, e\phi] = -i\hbar\, e\,\frac{\partial \phi}{\partial x_i}
The first term is more involved, but can be expanded easily,

\frac{1}{2m}\left[ \Pi_i,\; \Pi_j\Pi_j \right] = \frac{1}{2m}\left( [\Pi_i, \Pi_j]\Pi_j + \Pi_j[\Pi_i, \Pi_j] \right) = \frac{1}{2m}\left( i\hbar\frac{e}{c}\epsilon_{ijk}B_k\Pi_j + i\hbar\frac{e}{c}\epsilon_{ijk}\Pi_j B_k \right)

= \frac{i\hbar e}{2mc}\left[ \left( \vec{\Pi}\times\vec{B} \right)_i - \left( \vec{B}\times\vec{\Pi} \right)_i \right]

Noting that \vec{\Pi} = m\vec{v}, we can write the above in terms of velocities and get the final result:

i\hbar\frac{d\vec{\Pi}}{dt} = [\vec{\Pi}, H] = i\hbar\left[ \frac{e}{2c}\left( \vec{v}\times\vec{B} - \vec{B}\times\vec{v} \right) - e\nabla\phi \right]

\frac{d\vec{\Pi}}{dt} = -e\nabla\phi + \frac{e}{2c}\left( \vec{v}\times\vec{B} - \vec{B}\times\vec{v} \right)
Note that classically one could reduce the above since \vec{v}\times\vec{B} = -\vec{B}\times\vec{v}, but this relation does not hold in Quantum Mechanics since these quantities are represented by operators. Next, consider a case where the potential is zero and a gauge transformation of the form:

\vec{A}_0 \rightarrow \vec{A}_0 + \nabla\Lambda
Assume that the magnetic field is uniform and constant along some direction (\vec{B} = B_0\hat{z}). In this case it is simple to see that the magnetic fields resulting from both \vec{A} = \vec{A}_0 and \vec{A} = \vec{A}_0 + \nabla\Lambda are the same field. The question we need to answer is whether the additional component of \vec{A} changes the physics according to quantum mechanics. For this case, assume that changing the vector potential also changes the state from |\alpha\rangle to some related state |\tilde{\alpha}\rangle. We can then write the Schrödinger equation for each system as:

\frac{1}{2m}\left( \vec{p} - \frac{e}{c}\vec{A}_0 \right)^2 |\alpha\rangle = i\hbar\frac{d}{dt}|\alpha\rangle

\frac{1}{2m}\left( \vec{p} - \frac{e}{c}\left( \vec{A}_0 + \nabla\Lambda \right) \right)^2 |\tilde{\alpha}\rangle = i\hbar\frac{d}{dt}|\tilde{\alpha}\rangle
As a homework assignment to work with the Levi-Civita symbol \epsilon_{ijk}, we are to show that for \vec{B} = B_0\hat{z}, the appropriate vector potential is \vec{A} = \frac{1}{2}\vec{B}\times\vec{r}. Consider though, that although \vec{p} is not invariant under the gauge transform, we can show that \vec{\Pi} is invariant. We can show that for the states above, there exists some operator G which transforms |\alpha\rangle into |\tilde{\alpha}\rangle. The transform generates a relation of the form:

|\tilde{\alpha}\rangle = e^{i(\ldots)}|\alpha\rangle

And the following relations can be used to find properties of the transform.

\langle\alpha|\alpha\rangle = \langle\tilde{\alpha}|\tilde{\alpha}\rangle \quad\Rightarrow\quad G^{\dagger}G = 1

\langle\alpha|\,\vec{x}\,|\alpha\rangle = \langle\tilde{\alpha}|\,\vec{x}\,|\tilde{\alpha}\rangle \quad\Rightarrow\quad G^{\dagger}\vec{x}\,G = \vec{x} \quad\Rightarrow\quad [\vec{x}, G] = 0

\langle\alpha|\,\vec{\Pi}\,|\alpha\rangle = \langle\tilde{\alpha}|\,\tilde{\Pi}\,|\tilde{\alpha}\rangle; \qquad \vec{\Pi} = \vec{p} - \frac{e}{c}\vec{A}_0; \qquad \tilde{\Pi} = \vec{p} - \frac{e}{c}\left( \vec{A}_0 + \nabla\Lambda \right)
Friday - 10/24
NOTE: Relation needed to do homework. Baker-Hausdorff Formula: For operators A and B, the following relation holds

e^{A} e^{B} = e^{A + B + \frac{1}{2}[A,B]} \qquad \text{if} \qquad [A, [A, B]] = 0 \;\text{ and }\; [B, [A, B]] = 0

As an example, consider the quantum harmonic oscillator. The operators a and a^{\dagger} satisfy the relations

[a, a^{\dagger}] = 1; \quad [a, [a, a^{\dagger}]] = 0; \quad [a^{\dagger}, [a, a^{\dagger}]] = 0 \quad\Rightarrow\quad e^{a} e^{a^{\dagger}} = e^{a + a^{\dagger} + \frac{1}{2}}
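The Baker-Hausdorff formula can be checked on finite matrices whose commutator is central, mirroring the oscillator algebra. A small numerical sketch (not from the notes; the 3x3 Heisenberg-algebra matrices are an assumed stand-in for a and a-dagger):

```python
import numpy as np

# Sketch: check e^A e^B = e^{A + B + [A,B]/2} on 3x3 strictly upper
# triangular matrices, for which [A, B] commutes with both A and B.
A = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
B = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
C = A @ B - B @ A                       # [A, B]; central here

def expm(M):
    # These matrices are nilpotent (M^3 = 0), so the series terminates.
    return np.eye(3) + M + M @ M / 2.0

lhs = expm(A) @ expm(B)
rhs = expm(A + B + C / 2.0)
assert np.allclose(lhs, rhs)
```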
Recall that we defined the relation between |\alpha\rangle and |\tilde{\alpha}\rangle to be:

|\tilde{\alpha}\rangle = G|\alpha\rangle

Such that:

G^{\dagger}G = 1; \qquad G^{\dagger}\vec{x}\,G = \vec{x}; \qquad G^{\dagger}\tilde{\Pi}G = \vec{\Pi}

From the second condition (G commuting with the position operator) we can assume that this operator will be a function of the position. So, summarizing what we've found: G is a function of position of the form e^{i(\ldots)}. The third relation requires that:

G^{\dagger}\left( p_i - \frac{e}{c}A_i - \frac{e}{c}\frac{\partial\Lambda}{\partial x_i} \right) G = p_i - \frac{e}{c}A_i

The solution to the above is for the operator to be defined as:

G = e^{\frac{ie}{\hbar c}\Lambda(x)}
And thus we can further note,

G^{\dagger}\,\tilde{\Pi}_i^2\, G = G^{\dagger}\tilde{\Pi}_i G\; G^{\dagger}\tilde{\Pi}_i G = \Pi_i^2

Plugging this relation into the definition for the Hamiltonian shows that this solution does indeed make the appropriate transform:

\frac{1}{2m}\left( \vec{p} - \frac{e}{c}\vec{A}_0 \right)^2 |\alpha\rangle = i\hbar\frac{d}{dt}|\alpha\rangle \quad\Rightarrow\quad G\left[ \frac{1}{2m}\left( \vec{p} - \frac{e}{c}\vec{A}_0 \right)^2 \right] G^{\dagger}\, G|\alpha\rangle = i\hbar\frac{d}{dt}\, G|\alpha\rangle

\vec{A} \rightarrow \vec{A} + \nabla\Lambda; \qquad |\alpha, t\rangle \rightarrow e^{\frac{ie}{\hbar c}\Lambda(x)}|\alpha, t\rangle
16.3 Alternate Approach to Gauge Transforms

An alternate way to derive this result is to assume that some transform occurs which takes a ket |\alpha\rangle and performs the transform:

|\alpha, t\rangle \rightarrow |\tilde{\alpha}, t\rangle = e^{\frac{ie}{\hbar c}\Lambda(x)}|\alpha, t\rangle

It turns out that the only equation which keeps this transform invariant is:

\left[ \frac{1}{2m}\left( \vec{p} - \frac{e}{c}\vec{A}_0 \right)^2 + e\phi \right] |\alpha\rangle = i\hbar\frac{d}{dt}|\alpha\rangle
17 The Aharonov-Bohm Effect

An experiment performed by Aharonov and Bohm involves observing the behavior of a free electron on the exterior of a cylinder containing a magnetic field. As there is no magnetic field in the region, there should (classically) be no change to the behavior of the electron outside the cylinder. However, it can be shown that electrons moving around the cylinder interact with the vector potential in the region. Consider two parts of a complete path around the cylinder; the wave function for an electron along these paths can be written as:

\psi_i = e^{\frac{i}{\hbar}\int_i \left( \vec{p} - \frac{e}{c}\vec{A} \right)\cdot d\vec{x}}

Then, if we consider the relation between the two paths, there is an observed interference pattern dependent on the magnetic field.

\frac{\psi_1}{\psi_2} = e^{\frac{ie}{\hbar c}\left[ \int_1 \vec{A}\cdot d\vec{x} - \int_2 \vec{A}\cdot d\vec{x} \right]} = e^{\frac{ie}{\hbar c}\oint \vec{A}\cdot d\vec{x}}

Note that we can write this integral instead as:

\oint \vec{A}\cdot d\vec{x} = \int \vec{B}\cdot d\vec{S}

So then the above relation can be written in terms of a phase which will be dependent on the \vec{B} field, dimensions of the cylinder, and location of the electron. That is:

\psi_1 = \psi_2\, e^{i\phi(\vec{B}, a, \ldots)}
Part III
THE THEORY OF ANGULAR
MOMENTUM
18 Classical Rotations

Monday - 10/27

We are already aware of the spin operators S_x, S_y, S_z, and in previous classes we've seen the orbital angular momentum operators L_x, L_y, L_z. In this chapter we'll be working with the total angular momentum J. However, we'll begin by considering classical rotations. Consider some point initially at (x, y, z). If the system is rotated about the origin, then the point will be at a new location (x', y', z') which is given by some rotation matrix:

\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = R \begin{pmatrix} x \\ y \\ z \end{pmatrix}

By requiring that the distance to the point remain the same, we get the condition on R that:

\begin{pmatrix} x' & y' & z' \end{pmatrix}\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} x & y & z \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} \quad\Rightarrow\quad R R^T = 1

Consider this quantum mechanically. We have some state defined as |\alpha\rangle which undergoes a system rotation and transforms into some new ket described by |\alpha\rangle_R. It is necessary to find a method to compute the relation between these two states.
18.1 Finite Rotations

Consider a point undergoing the rotations R_z\left( \frac{\pi}{2} \right) and R_x\left( \frac{\pi}{2} \right), where the particle is initially at some point along the x axis a distance a from the origin. In this case, changing the order of rotations gives the results:

(a, 0, 0) \xrightarrow{R_z(\pi/2)} (0, a, 0) \xrightarrow{R_x(\pi/2)} (0, 0, a)

(a, 0, 0) \xrightarrow{R_x(\pi/2)} (a, 0, 0) \xrightarrow{R_z(\pi/2)} (0, a, 0)

Clearly, finite rotations in many cases don't commute. We can define rotation matrices for rotation by some angle \phi about each axis as:

R_x(\phi) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi & \cos\phi \end{pmatrix}; \quad R_y(\phi) = \begin{pmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{pmatrix}; \quad R_z(\phi) = \begin{pmatrix} \cos\phi & -\sin\phi & 0 \\ \sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{pmatrix}
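The orthogonality condition and the non-commutation example above can be confirmed numerically (an illustrative sketch, not part of the notes):

```python
import numpy as np

# Sketch: the finite rotation matrices above satisfy R R^T = 1, and
# rotations about different axes applied in different orders disagree.
def Rx(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Rz(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

half_pi = np.pi / 2
assert np.allclose(Rx(0.3) @ Rx(0.3).T, np.eye(3))

a = np.array([1.0, 0.0, 0.0])
# Rz then Rx sends (a,0,0) -> (0,a,0) -> (0,0,a); the reverse order differs.
assert np.allclose(Rx(half_pi) @ Rz(half_pi) @ a, [0, 0, 1])
assert np.allclose(Rz(half_pi) @ Rx(half_pi) @ a, [0, 1, 0])
```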
18.2 Infinitesimal Rotations

Consider if the angle is taken to be some small amount \epsilon. In this case we can use the expansion form of the above matrices and drop terms higher than O(\epsilon^2). This gives the infinitesimal rotation matrices:

R_x(\epsilon) \approx \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 - \frac{\epsilon^2}{2} & -\epsilon \\ 0 & \epsilon & 1 - \frac{\epsilon^2}{2} \end{pmatrix}; \quad R_y(\epsilon) \approx \begin{pmatrix} 1 - \frac{\epsilon^2}{2} & 0 & \epsilon \\ 0 & 1 & 0 \\ -\epsilon & 0 & 1 - \frac{\epsilon^2}{2} \end{pmatrix}; \quad R_z(\epsilon) \approx \begin{pmatrix} 1 - \frac{\epsilon^2}{2} & -\epsilon & 0 \\ \epsilon & 1 - \frac{\epsilon^2}{2} & 0 \\ 0 & 0 & 1 \end{pmatrix}
From these matrices, we can check whether infinitesimal rotations commute. Consider R_x(\epsilon)R_y(\epsilon) - R_y(\epsilon)R_x(\epsilon):

R_x(\epsilon)R_y(\epsilon) - R_y(\epsilon)R_x(\epsilon) = \begin{pmatrix} 0 & -\epsilon^2 & 0 \\ \epsilon^2 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = R_z(\epsilon^2) - 1

So, this leads to the important result that:

R_x(\epsilon)R_y(\epsilon) - R_y(\epsilon)R_x(\epsilon) = R_z(\epsilon^2) - I

Similar results exist for commutators of the other combinations of R_i and R_j.
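This relation can be checked numerically to the stated order. The sketch below (not from the notes) uses the exact finite matrices and confirms the residual is cubic in epsilon:

```python
import numpy as np

# Sketch: confirm Rx(e)Ry(e) - Ry(e)Rx(e) = Rz(e^2) - I up to O(e^3) terms.
def Rx(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

e = 1e-3
lhs = Rx(e) @ Ry(e) - Ry(e) @ Rx(e)
rhs = Rz(e**2) - np.eye(3)
# Agreement to O(e^2): the leftover terms are cubic and higher in e.
assert np.max(np.abs(lhs - rhs)) < 10 * e**3
```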
18.3 Overview of Group Theory and Group Representation

Consider some set of elements \{a, b, c, \ldots\} = G. This set is considered a group if it has the following four properties for some binary operator \circ:

Closure: a \circ b = c, \; c \in G

Associativity: (a \circ b) \circ c = a \circ (b \circ c)

Identity: a \circ e = e \circ a = a

Inverse: a \circ a^{-1} = a^{-1} \circ a = e

It can easily be seen that for the set of rotations these conditions hold; therefore the rotation matrices \{R_1, \ldots\} form a group. For the rotation group, some representation group can be defined as:

\{R_1, \ldots\} \rightarrow \{D(R_1), \ldots\}

This representation group can consist of operators, numbers, other matrices, or other mathematical objects. A few important qualities of the representation are:

R I = I R = R \quad\Rightarrow\quad D(R)D(I) = D(I)D(R) = D(R)

R R^{-1} = R^{-1} R = I \quad\Rightarrow\quad D(R)D(R^{-1}) = D(R^{-1})D(R) = D(I) \quad\Rightarrow\quad D(R^{-1}) = [D(R)]^{-1}

R_x(\epsilon)R_y(\epsilon) - R_y(\epsilon)R_x(\epsilon) = R_z(\epsilon^2) - I \quad\Rightarrow\quad D(R_x(\epsilon))D(R_y(\epsilon)) - D(R_y(\epsilon))D(R_x(\epsilon)) = D(R_z(\epsilon^2)) - D(I)

As we'll soon find, the solution to the earlier question relating two kets is |\alpha\rangle_R = D(R)|\alpha\rangle, where D(R) satisfies the last of the above conditions.
Wednesday - 10/29 - NO CLASS
Friday - 10/31
Last time we looked at finite rotations and infinitesimal rotations. If we think back to when we derived the momentum operator, we can use a similar form to define the rotation operator for an infinitesimal rotation d\phi:

D(R_{\hat{n}}(d\phi)) = I - \frac{i}{\hbar}\vec{J}\cdot\hat{n}\; d\phi

In this manner, the angular momentum \vec{J} is the generator of rotation. From this definition we can find the transform for finite rotations. Consider some rotation made up of an infinite number of infinitesimal rotations d\phi. We can write this as \phi = \lim_{N\to\infty} N\, d\phi. Plugging this into the above we can expand the series as:

D(R_{\hat{n}}(\phi)) = \lim_{N\to\infty}\left( I - \frac{i}{\hbar}\vec{J}\cdot\hat{n}\,\frac{\phi}{N} \right)^N = \exp\left( -\frac{i}{\hbar}\vec{J}\cdot\hat{n}\,\phi \right)
Using this definition, we can expand the relation derived in the previous class (keeping only terms of order up to \epsilon^2):

D(R_x(\epsilon))D(R_y(\epsilon)) - D(R_y(\epsilon))D(R_x(\epsilon)) = D(R_z(\epsilon^2)) - D(I)

\left( 1 - \frac{i}{\hbar}J_x\epsilon - \frac{J_x^2\epsilon^2}{2\hbar^2} \right)\left( 1 - \frac{i}{\hbar}J_y\epsilon - \frac{J_y^2\epsilon^2}{2\hbar^2} \right) - \left( 1 - \frac{i}{\hbar}J_y\epsilon - \frac{J_y^2\epsilon^2}{2\hbar^2} \right)\left( 1 - \frac{i}{\hbar}J_x\epsilon - \frac{J_x^2\epsilon^2}{2\hbar^2} \right) = -\frac{i}{\hbar}J_z\epsilon^2

-\frac{\epsilon^2}{\hbar^2}[J_x, J_y] = -\frac{i}{\hbar}J_z\epsilon^2 \quad\Rightarrow\quad [J_x, J_y] = i\hbar J_z

This can be generalized to the expected result of:

[J_i, J_j] = i\hbar\,\epsilon_{ijk} J_k
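The generalized commutation relation can be verified explicitly in the spin-1/2 representation, where J_i = (hbar/2) sigma_i. A quick sketch (not from the notes; hbar = 1 assumed):

```python
import numpy as np

# Sketch: verify [J_i, J_j] = i hbar eps_{ijk} J_k for J_i = sigma_i / 2
# (spin-1/2 representation, hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
J = [sx / 2, sy / 2, sz / 2]

def eps(i, j, k):
    # Levi-Civita symbol for indices 0, 1, 2.
    return (i - j) * (j - k) * (k - i) / 2

for i in range(3):
    for j in range(3):
        comm = J[i] @ J[j] - J[j] @ J[i]
        expected = sum(1j * eps(i, j, k) * J[k] for k in range(3))
        assert np.allclose(comm, expected)
```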
NOTE: There is an alternate way to derive this using the fact that \vec{L} = \vec{x}\times\vec{p} and the position-momentum commutator. Thus two different approaches result in the same relation.
19 Evolution of Spin Operators

It can be shown that the expectation values of S_i transform as vector components. This can be shown using the spin operators or more generally with the angular momentum operators. Consider the expectation value \langle J_x \rangle = \langle\alpha_R|\, J_x\, |\alpha_R\rangle. We can derive this from the relation:

e^{\lambda G} A\, e^{-\lambda G} = A + \lambda[G, A] + \frac{\lambda^2}{2!}[G, [G, A]] + \ldots

For the expectation value above, \lambda = \frac{i\phi}{\hbar}, G = J_z, and A = J_x. Therefore, we can write that:

\langle J_x \rangle \rightarrow \sum_{n=0}^{\infty} \frac{1}{n!}\left( \frac{i\phi}{\hbar} \right)^n \langle\, [J_z, [J_z, [J_z, \ldots, J_x] \ldots ]]\, \rangle

Here we've shortened the notation, but each term contains n commutators of J_z with itself and J_x. Considering just the first few terms of this expansion, one can see the pattern,

n = 0: \quad J_x

n = 1: \quad [J_z, J_x] = i\hbar J_y

n = 2: \quad [J_z, [J_z, J_x]] = \hbar^2 J_x

n = 3: \quad [J_z, [J_z, [J_z, J_x]]] = i\hbar^3 J_y

In general we can see that this is:

\langle J_x \rangle \rightarrow \langle J_x \rangle\left( 1 - \frac{\phi^2}{2} + \ldots \right) - \langle J_y \rangle\left( \phi - \frac{\phi^3}{3!} + \ldots \right) = \langle J_x \rangle\cos(\phi) - \langle J_y \rangle\sin(\phi)
19.1 Spin 1/2 System

Consider some system in the state:

|\alpha\rangle = C_+|+\rangle + C_-|-\rangle

We can write the rotated ket as:

|\alpha_{R(\phi)}\rangle = e^{-\frac{i}{\hbar}S_z\phi}\left[ C_+|+\rangle + C_-|-\rangle \right]

|\alpha_{R(\phi)}\rangle = C_+ e^{-i\frac{\phi}{2}}|+\rangle + C_- e^{i\frac{\phi}{2}}|-\rangle

Consider the case that a rotation of 2\pi is made. Classically we expect to get the same resulting system state. However, the above generates:

|\alpha_{R(2\pi)}\rangle = C_+ e^{-i\pi}|+\rangle + C_- e^{i\pi}|-\rangle = -|\alpha\rangle

This phenomenon has been demonstrated using interference patterns from wave packets initially constructively interfering; after one has been rotated by 2\pi the interference becomes destructive.
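The sign flip under a full rotation can be seen directly from the rotation matrix (a minimal numerical sketch, not part of the notes):

```python
import numpy as np

# Sketch: for spin 1/2, exp(-i sigma_z phi / 2) at phi = 2*pi is -I,
# not +I; only a 4*pi rotation returns the identity.
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def D(phi):
    # sigma_z is diagonal, so the exponential acts entrywise.
    return np.diag(np.exp(-1j * np.diag(sz) * phi / 2))

assert np.allclose(D(2 * np.pi), -np.eye(2))
assert np.allclose(D(4 * np.pi), np.eye(2))
```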
Monday - 11/3
20 The Rotation Operator D(R)

Last time we showed that for some ket |\alpha\rangle, the rotated ket can be written in terms of an operator D(R). The operator is written as:

D(R(\phi)) = e^{-\frac{i}{\hbar}\vec{J}\cdot\hat{n}\,\phi}

Further, it was seen that the expectation values of operators transform like vectors in that:

\langle\alpha_R|\, S_i\, |\alpha_R\rangle = R_{ij}\,\langle\alpha|\, S_j\, |\alpha\rangle

Consider the manner in which this operator alters some state |\alpha\rangle. In a spin 1/2 system, any state can be expressed as |\alpha\rangle = C_+|+\rangle + C_-|-\rangle. Under a time evolution in a magnetic field, this state would be:

|\alpha, t\rangle = U(t)|\alpha, t = 0\rangle = e^{-\frac{i}{\hbar}\omega S_z t}|\alpha, t = 0\rangle
However, this can also be thought of as a rotation about the z axis by some angle \phi = \omega t. Thus any spin precession evolution can also be thought of as a rotation problem instead. One merely has to switch between the above relations (that is, \phi \leftrightarrow \omega t).

As an aside, consider the magnetic moment for a charged particle. If the particle is orbiting some other particle(s), then the path it follows forms a circuit loop, and the magnetic moment of a loop of current is simply \vec{\mu} = \frac{i}{c}\vec{A}. With some analysis, it can be shown that this quantity is actually \frac{e}{2mc}\vec{L}. The addition of the intrinsic spin of the particle leads to

\vec{\mu} = \frac{e}{2mc}\left( \vec{L} + g\vec{S} \right)

The extra factor introduced is dependent on the particle involved and is known as the g factor. For an electron, it is roughly 2.
20.1 Experiment - Neutron Interferometer

Consider a neutron interferometer with one leg of the device passing through a uniform magnetic field. In this case, the state of the system passing through the field will differ by a factor e^{-i\frac{\omega T}{2}}, where T is the amount of time the particles spent in the field. If we assume that the initial state consisted of only spin up particles, when the two parts recombine they form an interference pattern since the new state is:

|\psi\rangle = |+\rangle + e^{-i\frac{\omega T}{2}}|+\rangle = \left( 1 + e^{-i\frac{\omega T}{2}} \right)|+\rangle

\left| |\psi\rangle \right|^2 = 2 + 2\cos\left( \frac{\omega T}{2} \right) = 4\cos^2\left( \frac{\omega T}{4} \right)
21 Spin 1/2 System - 2 Component Form

Consider the spin 1/2 system that we are so familiar with. If we express the states in vector form, then we make the transitions:

|+\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \chi_+; \qquad \langle+| = \begin{pmatrix} 1 & 0 \end{pmatrix} = \chi_+^{\dagger}

|-\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \chi_-; \qquad \langle-| = \begin{pmatrix} 0 & 1 \end{pmatrix} = \chi_-^{\dagger}

|\alpha\rangle = C_+|+\rangle + C_-|-\rangle = C_+\chi_+ + C_-\chi_-; \qquad \langle\alpha| = C_+^*\chi_+^{\dagger} + C_-^*\chi_-^{\dagger}
S_i \rightarrow \frac{\hbar}{2}\sigma_i; \qquad \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}; \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}; \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}

\langle\alpha|\, S_i\, |\alpha\rangle = \sum_{a,a'} \langle\alpha|a\rangle\langle a|\, S_i\, |a'\rangle\langle a'|\alpha\rangle \rightarrow \frac{\hbar}{2}\,\chi^{\dagger}\sigma_i\chi

And finally, some useful properties of the \sigma matrices are:

[\sigma_i, \sigma_j] = 2i\,\epsilon_{ijk}\sigma_k; \qquad \{\sigma_i, \sigma_j\} = 2\delta_{ij}; \qquad \sigma_i^2 = 1; \qquad \sigma_i^{\dagger} = \sigma_i; \qquad \text{Tr}(\sigma_i) = 0; \qquad \text{Det}(\sigma_i) = -1

\vec{\sigma}\cdot\vec{a} = \begin{pmatrix} a_3 & a_1 - ia_2 \\ a_1 + ia_2 & -a_3 \end{pmatrix}; \qquad \left( \vec{\sigma}\cdot\vec{a} \right)\left( \vec{\sigma}\cdot\vec{b} \right) = \vec{a}\cdot\vec{b} + i\,\vec{\sigma}\cdot\left( \vec{a}\times\vec{b} \right)
Wednesday - 11/5
Note that in the special case of \vec{a} = \vec{b}, the last relation simplifies to:

\left( \vec{\sigma}\cdot\vec{a} \right)\left( \vec{\sigma}\cdot\vec{a} \right) = \vec{a}\cdot\vec{a} + i\,\vec{\sigma}\cdot\left( \vec{a}\times\vec{a} \right) \quad\Rightarrow\quad \left( \vec{\sigma}\cdot\vec{a} \right)^2 = \vec{a}\cdot\vec{a}
If we are using the 2 component description for a spin 1/2 system, then we can write the state as:

\chi = C_+\begin{pmatrix} 1 \\ 0 \end{pmatrix} + C_-\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} C_+ \\ C_- \end{pmatrix}

However, we've seen that we can use vector and matrix relations for these rotations; therefore we expect that we can find some matrix operator such that:

\begin{pmatrix} C_+^R \\ C_-^R \end{pmatrix} = \begin{pmatrix} \; & \; \\ \; & \; \end{pmatrix}\begin{pmatrix} C_+ \\ C_- \end{pmatrix}

We can find the form of this matrix by expanding the operator D(R(\phi)) into matrix form. Note that for a spin 1/2 system, \vec{J} = \vec{S} = \frac{\hbar}{2}\vec{\sigma}.
e^{-\frac{i\phi}{2}\vec{\sigma}\cdot\hat{n}} = I - \frac{i\phi}{2}\,\vec{\sigma}\cdot\hat{n} + \frac{1}{2!}\left( -\frac{i\phi}{2} \right)^2 \left( \vec{\sigma}\cdot\hat{n} \right)^2 + \frac{1}{3!}\left( -\frac{i\phi}{2} \right)^3 \left( \vec{\sigma}\cdot\hat{n} \right)^3 + \ldots

We can immediately note from the relation above that:

\left( \vec{\sigma}\cdot\hat{n} \right)^2 = \hat{n}\cdot\hat{n} = 1; \qquad \left( \vec{\sigma}\cdot\hat{n} \right)^3 = \vec{\sigma}\cdot\hat{n}

And thus for any power \left( \vec{\sigma}\cdot\hat{n} \right)^n, the result is 1 if n is even and \vec{\sigma}\cdot\hat{n} if n is odd. From this, we can write the above as:

e^{-\frac{i\phi}{2}\vec{\sigma}\cdot\hat{n}} = \left[ 1 - \frac{1}{2!}\left( \frac{\phi}{2} \right)^2 + \ldots \right] I - i\,\vec{\sigma}\cdot\hat{n}\left[ \frac{\phi}{2} - \frac{1}{3!}\left( \frac{\phi}{2} \right)^3 + \ldots \right]

This is simply the expansion of sine and cosine:

e^{-\frac{i\phi}{2}\vec{\sigma}\cdot\hat{n}} = \cos\left( \frac{\phi}{2} \right) I - i\,\vec{\sigma}\cdot\hat{n}\,\sin\left( \frac{\phi}{2} \right)
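The closed form can be checked against a direct matrix exponential. The sketch below (not from the notes; the unit vector n is an arbitrary choice) exponentiates sigma.n through its eigendecomposition:

```python
import numpy as np

# Sketch: check exp(-i phi sigma.n / 2) = cos(phi/2) I - i (sigma.n) sin(phi/2)
# by exponentiating the Hermitian matrix sigma.n via diagonalization.
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

n = np.array([1.0, 2.0, 2.0]) / 3.0     # an arbitrary unit vector
sn = sum(n[k] * sig[k] for k in range(3))
phi = 0.8

w, V = np.linalg.eigh(sn)               # sigma.n is Hermitian
direct = V @ np.diag(np.exp(-1j * phi * w / 2)) @ V.conj().T

closed = np.cos(phi / 2) * np.eye(2) - 1j * sn * np.sin(phi / 2)
assert np.allclose(direct, closed)
```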
As a homework assignment, select some axis to rotate about by some angle and compute what D(R(\phi)), \chi_R, and \langle S_k \rangle are.
As an example, recall the spin ket |\vec{S}\cdot\hat{n}; +\rangle which we derived in a homework. This can be found by the rotation method by simply performing the following rotations: start from spin in the z direction (either up or down), rotate around the y axis by an angle \beta, and then about the z axis by some angle \alpha. This can be written as:

\chi = D(R_z(\alpha))\, D(R_y(\beta))\, \chi_{\pm}

\chi = \left[ \cos\left( \frac{\alpha}{2} \right) I - i\,\sigma_z\sin\left( \frac{\alpha}{2} \right) \right]\left[ \cos\left( \frac{\beta}{2} \right) I - i\,\sigma_y\sin\left( \frac{\beta}{2} \right) \right] \chi_{\pm}

\chi = \begin{pmatrix} \cos\left( \frac{\alpha}{2} \right) - i\sin\left( \frac{\alpha}{2} \right) & 0 \\ 0 & \cos\left( \frac{\alpha}{2} \right) + i\sin\left( \frac{\alpha}{2} \right) \end{pmatrix}\begin{pmatrix} \cos\left( \frac{\beta}{2} \right) & -\sin\left( \frac{\beta}{2} \right) \\ \sin\left( \frac{\beta}{2} \right) & \cos\left( \frac{\beta}{2} \right) \end{pmatrix} \chi_{\pm}

\chi = \begin{pmatrix} e^{-i\frac{\alpha}{2}} & 0 \\ 0 & e^{i\frac{\alpha}{2}} \end{pmatrix}\begin{pmatrix} \cos\left( \frac{\beta}{2} \right) & -\sin\left( \frac{\beta}{2} \right) \\ \sin\left( \frac{\beta}{2} \right) & \cos\left( \frac{\beta}{2} \right) \end{pmatrix} \chi_{\pm}

This gives the results (confirming the alternate method, up to an overall phase) of:

\chi_R^+ = \begin{pmatrix} e^{-i\frac{\alpha}{2}}\cos\left( \frac{\beta}{2} \right) \\ e^{i\frac{\alpha}{2}}\sin\left( \frac{\beta}{2} \right) \end{pmatrix}; \qquad \chi_R^- = \begin{pmatrix} -e^{-i\frac{\alpha}{2}}\sin\left( \frac{\beta}{2} \right) \\ e^{i\frac{\alpha}{2}}\cos\left( \frac{\beta}{2} \right) \end{pmatrix}
Next class we will start looking at the Euler angles and how they are used in Quantum Mechanics. As background, recall that for a 3 \times 3 matrix, there are a total of 9 components. However, if we require that such a matrix satisfy R R^T = 1, then the number of independent elements is reduced to 3. These three correspond to the Euler angles of rotation.

From the above, we can define two different types of rotations, since the determinant of R R^T can be expanded to give:

\text{Det}\left( R R^T \right) = \text{Det}\left( R^T \right)\text{Det}(R) = \left[ \text{Det}(R) \right]^2 = 1 \quad\Rightarrow\quad \text{Det}(R) = \pm 1

We define those rotations with positive determinant to be proper rotations, and they form a group known as the Special Orthogonal Group in 3 Dimensions, or SO(3). An example of an improper rotation would be:

\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix}
Friday - 11/7
We've seen how a rotation operator R_{\hat{n}}(\phi) is a member of the SO(3) group. It can further be shown that the set of operators D(R_{\hat{n}}(\phi)) forms a second group known as SU(2), since they form a group of special unitary 2 \times 2 matrix operators (in the spin 1/2 system). In general, one of these matrices will have the form:

U = \begin{pmatrix} a & b \\ c & d \end{pmatrix}

Since each element can be complex, this is a total of 8 real parameters of this matrix. However, when we require that U be unitary, we get the result that:

U = \begin{pmatrix} a & b \\ -b^* & a^* \end{pmatrix}

Which reduces the number of independent parameters to 4. This, combined with the condition of a proper rotation (|a|^2 + |b|^2 = 1), leaves only 3 independent parameters. This is exactly what we expected to get. To check this result with our previous work, recall the operators defined by D(R):
D(R_{\hat{n}}(\phi)) = e^{-\frac{i\phi}{2}\vec{\sigma}\cdot\hat{n}} = \cos\left( \frac{\phi}{2} \right) I - i\,\vec{\sigma}\cdot\hat{n}\,\sin\left( \frac{\phi}{2} \right)

D(R_{\hat{n}}(\phi)) = \begin{pmatrix} \cos\left( \frac{\phi}{2} \right) - i n_z\sin\left( \frac{\phi}{2} \right) & \left( -i n_x - n_y \right)\sin\left( \frac{\phi}{2} \right) \\ \left( -i n_x + n_y \right)\sin\left( \frac{\phi}{2} \right) & \cos\left( \frac{\phi}{2} \right) + i n_z\sin\left( \frac{\phi}{2} \right) \end{pmatrix}
This result agrees with both statements above: it is of the form U = \begin{pmatrix} a & b \\ -b^* & a^* \end{pmatrix} and satisfies |a|^2 + |b|^2 = 1. Consider next the rotations defined by the classical Euler angles. We can write these as:

1. Rotate about the z axis by an angle \alpha

2. Rotate about the y' axis by an angle \beta

3. Rotate about the new z' axis by an angle \gamma

Alternately, we can simply write this as:

D(R(\alpha, \beta, \gamma)) = D(R_{z'}(\gamma))\, D(R_{y'}(\beta))\, D(R_z(\alpha))
However, we need to write these rotations all in terms of the initial coordinate axes, since that is the orientation used in the \sigma matrices. A little geometric analysis yields the results:

D(R_{y'}(\beta)) = D(R_z(\alpha))\, D(R_y(\beta))\, D(R_z(\alpha))^{-1}; \qquad D(R_{z'}(\gamma)) = D(R_{y'}(\beta))\, D(R_z(\gamma))\, D(R_{y'}(\beta))^{-1}

So, combining these with the above definition, we can write that the Euler angle rotation can be made by:

D(R(\alpha, \beta, \gamma)) = \left[ D(R_{y'}(\beta))\, D(R_z(\gamma))\, D(R_{y'}(\beta))^{-1} \right] D(R_{y'}(\beta))\, D(R_z(\alpha)) = D(R_{y'}(\beta))\, D(R_z(\gamma))\, D(R_z(\alpha))

D(R(\alpha, \beta, \gamma)) = \left[ D(R_z(\alpha))\, D(R_y(\beta))\, D(R_z(\alpha))^{-1} \right] D(R_z(\gamma))\, D(R_z(\alpha))

D(R(\alpha, \beta, \gamma)) = D(R_z(\alpha))\, D(R_y(\beta))\, D(R_z(\gamma))
And plugging in our spin matrices:

D(R(\alpha, \beta, \gamma)) = \begin{pmatrix} e^{-i\frac{\alpha}{2}} & 0 \\ 0 & e^{i\frac{\alpha}{2}} \end{pmatrix}\begin{pmatrix} \cos\left( \frac{\beta}{2} \right) & -\sin\left( \frac{\beta}{2} \right) \\ \sin\left( \frac{\beta}{2} \right) & \cos\left( \frac{\beta}{2} \right) \end{pmatrix}\begin{pmatrix} e^{-i\frac{\gamma}{2}} & 0 \\ 0 & e^{i\frac{\gamma}{2}} \end{pmatrix}

D(R(\alpha, \beta, \gamma)) = \begin{pmatrix} e^{-i\frac{\alpha + \gamma}{2}}\cos\left( \frac{\beta}{2} \right) & -e^{-i\frac{\alpha - \gamma}{2}}\sin\left( \frac{\beta}{2} \right) \\ e^{i\frac{\alpha - \gamma}{2}}\sin\left( \frac{\beta}{2} \right) & e^{i\frac{\alpha + \gamma}{2}}\cos\left( \frac{\beta}{2} \right) \end{pmatrix}
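The matrix product above can be verified numerically for arbitrary Euler angles (an illustrative sketch, not part of the notes):

```python
import numpy as np

# Sketch: verify D(alpha,beta,gamma) = Dz(alpha) Dy(beta) Dz(gamma)
# against the closed 2x2 form given above.
def Dz(p):
    return np.diag([np.exp(-1j * p / 2), np.exp(1j * p / 2)])

def Dy(p):
    c, s = np.cos(p / 2), np.sin(p / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

alpha, beta, gamma = 0.4, 1.1, -0.7
product = Dz(alpha) @ Dy(beta) @ Dz(gamma)

c, s = np.cos(beta / 2), np.sin(beta / 2)
closed = np.array([
    [np.exp(-1j * (alpha + gamma) / 2) * c, -np.exp(-1j * (alpha - gamma) / 2) * s],
    [np.exp(1j * (alpha - gamma) / 2) * s,  np.exp(1j * (alpha + gamma) / 2) * c],
])
assert np.allclose(product, closed)
```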
Monday - 11/10
22 The Density Matrix

Consider a system which is perfectly unpolarized. The net spin of the entire system is zero, meaning that half the particles will be measured spin up or down for any direction of spin. How can this be written? We can't write this as |\psi\rangle = \frac{1}{\sqrt{2}}|+\rangle + \frac{1}{\sqrt{2}}|-\rangle, since this is not an unpolarized state; it is the spin up ket in the x direction. We have to introduce a new notation for a system in such a state. Consider a system with a fraction w_i in the state |\alpha_i\rangle. We can write the state of the system as:

\left\{ w_1 : |\alpha_1\rangle,\; w_2 : |\alpha_2\rangle,\; \ldots \right\}

This will accurately describe the system, but we must require that:

\sum_i w_i = 1
We can next define the Ensemble Average for an operator A. If we measure the system and get some result A_i corresponding to a state |\alpha_i\rangle, n_i times out of n_{tot} total measurements, then we can write the average value of the operator as:

[A] = \frac{1}{n_{tot}}\sum_i n_i A_i

We can write this in terms of the expectation values of A:

[A] = \sum_i w_i\, \langle\alpha_i|\, A\, |\alpha_i\rangle

We can write the above result in terms of elements of an operator if we insert kets |b\rangle and |b'\rangle,

[A] = \sum_{i,b,b'} w_i\, \langle\alpha_i|b\rangle\langle b|\, A\, |b'\rangle\langle b'|\alpha_i\rangle

Therefore, the average can be written as:

[A] = \sum_{b,b'} \rho_{b,b'}\, A_{b',b}; \qquad \rho_{b,b'} = \sum_i w_i\, \langle b|\alpha_i\rangle\langle\alpha_i|b'\rangle
Thus, the operator corresponding to the density matrix is:

\rho = \sum_i w_i\, |\alpha_i\rangle\langle\alpha_i|

and we can then write a few properties of the operator. Firstly, the trace of the density matrix is 1. This is easily shown:

\text{Tr}(\rho) = \sum_{i,b} w_i\, \langle b|\alpha_i\rangle\langle\alpha_i|b\rangle = \sum_{i,b} w_i\, \left| \langle b|\alpha_i\rangle \right|^2 = \sum_i w_i = 1
Further, because it corresponds to a physical observable, the operator must be Hermitian. Consider the example of a spin 1/2 system. For a pure state such as |+\rangle, we can write the above as simply:

\rho = |+\rangle\langle+| = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}

A second example would be |S_x; +\rangle,

\rho = |S_x; +\rangle\langle S_x; +| = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}
But consider a purely unpolarized state, 50\% |+\rangle, 50\% |-\rangle:

\rho = \frac{1}{2}|+\rangle\langle+| + \frac{1}{2}|-\rangle\langle-| = \frac{1}{2}I = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}

This can be shown for any spin direction since the terms involved would include summations over both spin up and spin down:

|S_n; +\rangle\langle S_n; +| + |S_n; -\rangle\langle S_n; -| = \sum_i |\alpha_i\rangle\langle\alpha_i| = 1
An alternate way to say this is that [S_i] = 0 for all i if the system is purely unpolarized. Lastly, let's consider how many free elements are in the density matrix for a spin 1/2 system. For a 2 \times 2 set up, any matrix can be constructed by:

\rho = a I + \sum_i b_i\sigma_i

Since the trace has to be equal to 1 for the density matrix, we can immediately see that a = \frac{1}{2}, since \text{Tr}(\sigma_i) = 0 and \text{Tr}(I) = 2 in this case. To determine the other allowed coefficients, consider the ensemble average [S_i],
[S_i] = \text{Tr}(\rho S_i) = \text{Tr}\left( \rho\,\frac{\hbar}{2}\sigma_i \right) = \frac{\hbar}{2}\,\text{Tr}\left[ \left( \frac{1}{2}I + \sum_j b_j\sigma_j \right)\sigma_i \right]

Consider each term separately:

\text{Tr}(\sigma_i) = 0; \qquad \text{Tr}(\sigma_j\sigma_i) = \frac{1}{2}\left( \text{Tr}(\sigma_j\sigma_i) + \text{Tr}(\sigma_i\sigma_j) \right) = \frac{1}{2}\text{Tr}(2\delta_{ij}) = 2\delta_{ij}

So the above gives [S_i] = \hbar b_i, and we can write that:

\rho = \frac{1}{2}I + \frac{1}{\hbar}\sum_i [S_i]\,\sigma_i
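The decomposition above can be checked on a concrete pure state. The sketch below (not from the notes; hbar = 1 and the state |S_x;+> are assumed choices) reconstructs rho from its measured spin averages:

```python
import numpy as np

# Sketch: build rho for the pure state |Sx;+> and check Tr(rho) = 1 and
# the decomposition rho = I/2 + (1/hbar) sum_i [S_i] sigma_i, with hbar = 1.
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

chi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # |Sx;+>
rho = np.outer(chi, chi.conj())
assert abs(np.trace(rho) - 1.0) < 1e-12

S_avg = [np.trace(rho @ (0.5 * s)).real for s in sig]    # [S_i] = Tr(rho S_i)
rebuilt = np.eye(2) / 2 + sum(S_avg[i] * sig[i] for i in range(3))
assert np.allclose(rho, rebuilt)
```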
Wednesday - 11/12
22.1 Density Matrix For Continuous Variables and Time Evolution

Recall that we wrote the ensemble average of an operator A as:

[A] = \text{Tr}(\rho A)

We can expand this in terms of a pair of kets |a'\rangle and |a''\rangle and then transform that to a continuous variable. This results in:

[A] = \sum_{a',a''} \langle a'|\,\rho\,|a''\rangle\langle a''|\, A\, |a'\rangle \rightarrow \int d^3x'\, d^3x''\, \langle x'|\,\rho\,|x''\rangle\langle x''|\, A\, |x'\rangle

\rho = \sum_i w_i\, |\alpha_i\rangle\langle\alpha_i| \quad\Rightarrow\quad \rho_{x',x''} = \sum_i w_i\, \langle x'|\alpha_i\rangle\langle\alpha_i|x''\rangle = \sum_i w_i\, \psi_i(\vec{x}\,')\,\psi_i^*(\vec{x}\,'')
Further, we can observe the time evolution by noting that:

\rho(t = 0) = \sum_i w_i\, |\alpha_i, t = 0\rangle\langle\alpha_i, t = 0| \quad\rightarrow\quad \rho(t) = \sum_i w_i\, |\alpha_i, t\rangle\langle\alpha_i, t|

i\hbar\frac{\partial}{\partial t}|\alpha_i, t\rangle = H|\alpha_i, t\rangle; \qquad -i\hbar\frac{\partial}{\partial t}\langle\alpha_i, t| = \langle\alpha_i, t|\, H

Thus, we can write:

i\hbar\frac{\partial}{\partial t}\rho(t) = \sum_i w_i\left[ \left( i\hbar\frac{\partial}{\partial t}|\alpha_i, t\rangle \right)\langle\alpha_i, t| + |\alpha_i, t\rangle\left( i\hbar\frac{\partial}{\partial t}\langle\alpha_i, t| \right) \right] = \sum_i w_i\left( H|\alpha_i, t\rangle\langle\alpha_i, t| - |\alpha_i, t\rangle\langle\alpha_i, t|\, H \right)

i\hbar\frac{\partial\rho}{\partial t} = [H, \rho]
23 General Representation of Angular Momentum

From this point on, we'll be working primarily with the angular momentum operators J_x, J_y, J_z, and J^2. For this set of operators, we need to find a pair that satisfy [A, B] = 0, much like we did for the spin operators with S^2 and S_i. We can show that for J^2 = J_x^2 + J_y^2 + J_z^2, the relation [J^2, J_l] = 0 holds for any J_l:

[J^2, J_l] = \sum_i [J_i J_i, J_l] = \sum_i J_i[J_i, J_l] + \sum_i [J_i, J_l]J_i = i\hbar\sum_{i,k} J_i\,\epsilon_{ilk}J_k + i\hbar\sum_{i,k}\epsilon_{ilk}J_k J_i

In order to combine these terms we change \epsilon_{ilk}J_k J_i to \epsilon_{kli}J_i J_k (relabeling the dummy indices), and then moving indices gives \epsilon_{kli} = \epsilon_{lik} = -\epsilon_{ilk}, which yields:

[J^2, J_l] = i\hbar\sum_{i,k} J_i J_k\left( \epsilon_{ilk} - \epsilon_{ilk} \right) = 0
Therefore we can define some base kets |a, b\rangle for which,

J^2|a, b\rangle = a\,|a, b\rangle; \qquad J_z|a, b\rangle = b\,|a, b\rangle

and then we can construct the ket itself. To do so, let's define a few general operators and quantities:

J_+ \equiv J_x + iJ_y; \qquad J_- \equiv J_x - iJ_y


J
+
J

= J
2
x
+J
2
y
+i[J
y
, J
x
] = J
2
x
+J
2
y
+ hJ
z
= J
2
J
2
z
+ hJ
z
J

J
+
= J
2
x
+J
2
y
hJ
z
= J
2
J
2
z
hJ
z
[J
+
, J

] = 2 hJ
z
[J
z
, J
+
] = hJ
+
[J
z
, J

] = hJ

Next, consider what operating on $|a,b\rangle$ with $J_\pm$ does. Consider first the combination $J_z J_+ |a,b\rangle$:

$$J_z J_+ |a,b\rangle = \left( \hbar J_+ + J_+ J_z \right) |a,b\rangle = \left( \hbar J_+ + J_+ b \right) |a,b\rangle$$

So this gives the result:

$$J_z\, J_+ |a,b\rangle = (b + \hbar)\, J_+ |a,b\rangle \qquad J_z\, J_- |a,b\rangle = (b - \hbar)\, J_- |a,b\rangle$$
$$\Rightarrow \quad J_+ |a,b\rangle = C_{a,b}\, |a, b+\hbar\rangle \qquad J_- |a,b\rangle = D_{a,b}\, |a, b-\hbar\rangle$$
So, these act like raising and lowering operators (much like the quantum mechanical harmonic oscillator problem), so we should be able to find maximum and minimum bounds for $b$. From the above definitions we can argue that:

$$\langle a,b|\, J^2\, |a,b\rangle = \langle a,b|\, \left( J_x^2 + J_y^2 \right) |a,b\rangle + \langle a,b|\, J_z^2\, |a,b\rangle$$

Since $\langle J_x^2 + J_y^2 \rangle \geq 0$, we can state that $b^2 \leq a$, meaning we have found bounds for $b$. Comparing with previous results yields $a = b_{max}(b_{max} + \hbar)$. Alternately, with the observations that $J_+$ acting on $|a, b_{max}\rangle$ and $J_-$ acting on $|a, b_{min}\rangle$ should both return 0, we can write that:
$$J_- J_+ |a, b_{max}\rangle = \left( J^2 - J_z^2 - \hbar J_z \right) |a, b_{max}\rangle = 0 \quad \Rightarrow \quad a - b_{max}^2 - \hbar\, b_{max} = 0$$

$$J_+ J_- |a, b_{min}\rangle = \left( J^2 - J_z^2 + \hbar J_z \right) |a, b_{min}\rangle = 0 \quad \Rightarrow \quad a - b_{min}^2 + \hbar\, b_{min} = 0$$

Solving both for $a$, we get the resulting condition

$$b_{max}(b_{max} + \hbar) = b_{min}(b_{min} - \hbar) \quad \Rightarrow \quad (b_{max} + b_{min})(b_{max} - b_{min}) = -\hbar\, (b_{max} + b_{min})$$
This leaves two possible solutions: $b_{max} = b_{min} - \hbar$ or $b_{min} = -b_{max}$. The first is impossible since it requires $b_{max} < b_{min}$, which is incorrect. Therefore we have the limits and values on $b$ as:

$$b = -b_{max},\ -b_{max} + \hbar,\ \ldots,\ b_{max}$$
Friday - 11/14

Summarizing what we've found so far: for some ket $|a,b\rangle$ we can raise and lower the value of $b$ by:

$$J_+ |a,b\rangle = C_{a,b}\, |a, b+\hbar\rangle \qquad J_- |a,b\rangle = D_{a,b}\, |a, b-\hbar\rangle$$

If we start with the minimum value of $b$ and use $J_+$ to raise the value, we'll eventually get to a point where we can no longer raise the value. This results in the equation $b_{min} + n\hbar = b_{max}$. Using the result that $b_{min} = -b_{max}$, we can immediately see that:

$$b_{max} = \frac{n}{2}\hbar = j\hbar$$

where we've defined $j = \frac{n}{2} = 0, \frac{1}{2}, 1, \frac{3}{2}, 2, \ldots$ as the index. Plugging this into our result for $a$ gives the value for a fixed $j$:

$$a = j(j+1)\hbar^2$$
So, with these new results, we can restate the eigenvalue equations in terms of some ket $|j,m\rangle$:

$$J^2 |j,m\rangle = j(j+1)\hbar^2\, |j,m\rangle \qquad J_z |j,m\rangle = m\hbar\, |j,m\rangle$$
23.1 Matrix Representation of Angular Momentum

Just as with the spin 1/2 case, we can write each of the operators as a matrix in as many dimensions as there are possible $m$ values. Consider this as:

$$J_i = \begin{pmatrix} \langle j, m_{max}|\, J_i\, |j, m_{max}\rangle & \cdots & \langle j, m_{max}|\, J_i\, |j, m_{min}\rangle \\ \vdots & \ddots & \vdots \\ \langle j, m_{min}|\, J_i\, |j, m_{max}\rangle & \cdots & \langle j, m_{min}|\, J_i\, |j, m_{min}\rangle \end{pmatrix}$$
Working out the components for $J^2$ and $J_z$ is straightforward:

$$\langle j,m'|\, J^2\, |j,m\rangle = j(j+1)\hbar^2\, \langle j,m'|j,m\rangle = j(j+1)\hbar^2\, \delta_{m,m'}$$

$$\langle j,m'|\, J_z\, |j,m\rangle = m\hbar\, \langle j,m'|j,m\rangle = m\hbar\, \delta_{m,m'}$$

Both matrices are diagonal since they are proportional to $\delta_{m,m'}$. Finding the $x$ and $y$ operators is done by using the operators $J_\pm$. It is straightforward to show that, since $J_+^\dagger = J_-$,
$$J_+ |j,m\rangle = C_+\, |j, m+1\rangle \quad \Rightarrow \quad \langle j,m|\, J_+^\dagger = \langle j, m+1|\, C_+^* \quad \Rightarrow \quad \langle j,m|\, J_- = \langle j, m+1|\, C_+^*$$

And if we assume that $C_+$ is real, then this gives the condition that:

$$\langle j,m|\, J_- J_+\, |j,m\rangle = \langle j,m| \left( J^2 - J_z^2 - \hbar J_z \right) |j,m\rangle = j(j+1)\hbar^2 - m^2\hbar^2 - m\hbar^2 = C_+^2$$

Solving for $C_+$ and noting the symmetry between $J_+ J_-$ and $J_- J_+$ gives:

$$C_\pm = \hbar\sqrt{(j \mp m)(j \pm m + 1)}$$

So, this gives that:

$$\langle j,m'|\, J_\pm\, |j,m\rangle = \hbar\sqrt{(j \mp m)(j \pm m + 1)}\; \delta_{m', m \pm 1}$$
Combining this with the definitions allows us to write the $x$ and $y$ components in terms of $J_\pm$. So, summarizing the matrix elements:

$$\langle j,m'|\, J^2\, |j,m\rangle = j(j+1)\hbar^2\, \delta_{m,m'} \qquad \langle j,m'|\, J_z\, |j,m\rangle = m\hbar\, \delta_{m,m'}$$

$$J_x = \frac{1}{2}\left( J_+ + J_- \right) \qquad J_y = \frac{1}{2i}\left( J_+ - J_- \right)$$

$$\langle j,m'|\, J_\pm\, |j,m\rangle = \hbar\sqrt{(j \mp m)(j \pm m + 1)}\; \delta_{m', m \pm 1}$$
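The summarized matrix elements are all that is needed to build the $J$ matrices numerically for any $j$. A minimal sketch (an illustration, not from the notes; $\hbar = 1$ and the basis ordering $m = j, j-1, \ldots, -j$ are choices) that constructs $J_x$, $J_y$, $J_z$ and checks $[J_x, J_y] = i\hbar J_z$ and $J^2 = j(j+1)\hbar^2 I$:

```python
import numpy as np

hbar = 1.0

def angular_momentum_matrices(j):
    """Jx, Jy, Jz in the |j,m> basis ordered m = j, j-1, ..., -j (hbar = 1)."""
    dim = int(round(2 * j + 1))
    m = j - np.arange(dim)
    Jz = hbar * np.diag(m).astype(complex)
    Jp = np.zeros((dim, dim), dtype=complex)   # raising operator J+
    for col in range(1, dim):
        mm = m[col]
        # <j, m+1| J+ |j, m> = hbar * sqrt((j - m)(j + m + 1))
        Jp[col - 1, col] = hbar * np.sqrt((j - mm) * (j + mm + 1))
    Jm = Jp.conj().T                           # J- = (J+)^dagger
    Jx = (Jp + Jm) / 2
    Jy = (Jp - Jm) / (2 * 1j)
    return Jx, Jy, Jz

j = 3 / 2
Jx, Jy, Jz = angular_momentum_matrices(j)
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
comm = Jx @ Jy - Jy @ Jx
print(np.allclose(comm, 1j * hbar * Jz))                    # True
print(np.allclose(J2, j * (j + 1) * hbar**2 * np.eye(4)))   # True
```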
23.2 Rotation Operator for Generalized Angular Momentum

Recall that we defined the rotation operator as:

$$\mathcal{D}(R) = e^{-\frac{i}{\hbar}\, \hat{n}\cdot\vec{J}\, \phi}$$

And from this we can change any ket $|\alpha\rangle$ into a rotated state $|\alpha\rangle_R = \mathcal{D}(R)\, |\alpha\rangle$. It can be shown that for some initial state $|j,m\rangle$, we can expand in terms of the eigenkets for all values of $m'$ and get the rotated ket as:

$$|j,m\rangle_R = \mathcal{D}(R)\, |j,m\rangle = \sum_{m'} |j,m'\rangle \langle j,m'|\, \mathcal{D}(R)\, |j,m\rangle$$
Defining the matrix elements of the rotation operator as the Wigner functions, we can express the rotated ket as:

$$|j,m\rangle_R = \sum_{m'} D^j_{m,m'}\, |j,m'\rangle \qquad D^j_{m,m'} = \langle j,m'|\, \mathcal{D}(R)\, |j,m\rangle$$
Referencing the spin 1/2 system in some orientation defined by the angles $\theta$ and $\phi$, as we have seen before, we can quickly see that:

$$|1/2, 1/2\rangle_R = \sum_{m'} D^{1/2}_{1/2, m'}\, |1/2, m'\rangle \qquad D^{1/2}_{1/2, m'} = \langle 1/2, m'|\, \mathcal{D}(R)\, |1/2, 1/2\rangle$$

But $m'$ can only be $\pm\frac{1}{2}$:

$$|1/2, 1/2\rangle_R = D^{1/2}_{1/2, 1/2}\, |+\rangle + D^{1/2}_{1/2, -1/2}\, |-\rangle$$

$$D^{1/2}_{1/2, 1/2} = \cos\frac{\theta}{2} \qquad D^{1/2}_{1/2, -1/2} = e^{i\phi}\, \sin\frac{\theta}{2}$$

We already know the values of two of the Wigner functions! Others can be looked up in reference books.
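The two known Wigner functions can be checked by exponentiating $J_y$ directly. A sketch (not from the notes; $\hbar = 1$, rotation by $\beta$ about $y$, and the convention $d^{1/2}(\beta) = e^{-i\beta J_y}$ are choices — overall signs of the off-diagonal entries depend on convention):

```python
import numpy as np

# d^{1/2}(beta) = exp(-i * beta * Jy) for spin 1/2, with Jy = sigma_y / 2 (hbar = 1)
beta = 0.7
Jy = np.array([[0, -1j], [1j, 0]]) / 2

# exact matrix exponential via eigendecomposition of the Hermitian matrix Jy
vals, vecs = np.linalg.eigh(Jy)
d = vecs @ np.diag(np.exp(-1j * beta * vals)) @ vecs.conj().T

expected = np.array([[np.cos(beta / 2), -np.sin(beta / 2)],
                     [np.sin(beta / 2),  np.cos(beta / 2)]])
print(np.allclose(d, expected))  # True: entries are cos(beta/2) and +/- sin(beta/2)
```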
23.3 Orbital Angular Momentum

We've been working in the case that $\vec{J} = \vec{S} + \vec{L}$. If we assume that either $S$ or $L$ is zero, then it is easy to show that any of these three operators satisfies the commutation relation:

$$[A_i, A_j] = i\hbar\, \epsilon_{ijk}\, A_k \qquad A = S, L, J$$

Suppose that the spin of a particle can be neglected. In this case $\vec{J} = \vec{L}$, and we can change notation for a state $|j,m\rangle$ to $|l,m\rangle$. We now have the relations:

$$L^2 |l,m\rangle = l(l+1)\hbar^2\, |l,m\rangle \qquad L_z |l,m\rangle = m\hbar\, |l,m\rangle$$

We can imagine some initial state $\psi(r,\theta,\phi) = \langle r,\theta,\phi|\psi\rangle$ which is rotated some infinitesimal amount $\delta\phi$ about the $z$ axis. In this case we can write:

$$\mathcal{D}(R) = 1 - \frac{i L_z\, \delta\phi}{\hbar} + \ldots \qquad |r,\theta,\phi\rangle_R = \mathcal{D}(R)\, |r,\theta,\phi\rangle = |r,\theta,\phi+\delta\phi\rangle$$
Monday - 11/17

We can find the position space representation of the rotation operators similarly to how we found the momentum operator representation. Consider an infinitesimal rotation by some amount $\delta\phi$ about the $z$ axis. Applying $\langle r,\theta,\phi|$ to the state, we can expand in terms of $\delta\phi$ and get the resulting equation:

$$\langle r,\theta,\phi+\delta\phi|\psi\rangle = \langle r,\theta,\phi|\psi\rangle + \frac{i}{\hbar}\,\delta\phi\, \langle r,\theta,\phi|\, L_z\, |\psi\rangle$$

$$\langle r,\theta,\phi|\psi\rangle + \delta\phi\, \frac{\partial}{\partial\phi} \langle r,\theta,\phi|\psi\rangle = \langle r,\theta,\phi|\psi\rangle + \frac{i}{\hbar}\,\delta\phi\, \langle r,\theta,\phi|\, L_z\, |\psi\rangle$$

Dropping the matching terms and writing the braket as the position wave function gives the expected result:

$$\langle r,\theta,\phi|\, L_z\, |\psi\rangle = -i\hbar\, \frac{\partial}{\partial\phi}\, \psi(r,\theta,\phi)$$
This can be repeated for $L_x$ and $L_y$, and the results are similar:

$$\langle r,\theta,\phi|\, L_x\, |\psi\rangle = -i\hbar \left( -\sin\phi\, \frac{\partial}{\partial\theta} - \cot\theta\, \cos\phi\, \frac{\partial}{\partial\phi} \right) \psi(r,\theta,\phi)$$

$$\langle r,\theta,\phi|\, L_y\, |\psi\rangle = -i\hbar \left( \cos\phi\, \frac{\partial}{\partial\theta} - \cot\theta\, \sin\phi\, \frac{\partial}{\partial\phi} \right) \psi(r,\theta,\phi)$$

Combining the above with the definition $L^2 = L_x^2 + L_y^2 + L_z^2 = L_z^2 + \frac{1}{2}\left( L_+ L_- + L_- L_+ \right)$ gives the result:
$$\langle r,\theta,\phi|\, L^2\, |\psi\rangle = -\hbar^2 \left[ \frac{1}{\sin^2\theta}\, \frac{\partial^2}{\partial\phi^2} + \frac{1}{\sin\theta}\, \frac{\partial}{\partial\theta} \left( \sin\theta\, \frac{\partial}{\partial\theta} \right) \right] \psi(r,\theta,\phi)$$

One immediately sees that this is the angular portion of the $\nabla^2$ operator. We can denote this as:

$$\nabla^2_{\theta,\phi} \equiv \frac{1}{\sin^2\theta}\, \frac{\partial^2}{\partial\phi^2} + \frac{1}{\sin\theta}\, \frac{\partial}{\partial\theta} \left( \sin\theta\, \frac{\partial}{\partial\theta} \right)$$
In the case that a potential depends only on the radial distance of a particle, we can write the Hamiltonian operator as:

$$H = \frac{p^2}{2m} + V(r)$$

where some examples of this type of potential would be the hydrogen problem $V(r) = -\frac{Ke^2}{r}$ or the 3-dimensional harmonic oscillator $V(r) = \frac{1}{2}Kr^2$. Because the Hamiltonian commutes with both $L_z$ and $L^2$, the three operators share eigenkets:

$$H |E_{n,l,m}\rangle = E_n |E_{n,l,m}\rangle; \quad L_z |E_{n,l,m}\rangle = m\hbar\, |E_{n,l,m}\rangle; \quad L^2 |E_{n,l,m}\rangle = l(l+1)\hbar^2\, |E_{n,l,m}\rangle$$

The solutions for such a potential can be expanded in the form:

$$\psi(r,\theta,\phi) = R(r)\, Y_{l,m}(\theta,\phi)$$

where the angular functions turn out to be the spherical harmonics.
23.4 Quantum Mechanical Treatment of the Spherical Harmonics

If we consider some position ket determining only the angular direction of a point, we can write the state as:

$$|\hat{n}\rangle = |\theta,\phi\rangle$$

and from this definition we can write the spherical harmonics as:

$$Y_{l,m}(\theta,\phi) \equiv \langle \theta,\phi|l,m\rangle$$

We can observe the behavior of the spherical harmonics from the above relations:

$$\left.\begin{array}{l} L_z |l,m\rangle = m\hbar\, |l,m\rangle \\[4pt] \langle r,\theta,\phi|\, L_z\, |\psi\rangle = -i\hbar\, \frac{\partial}{\partial\phi}\, \psi(r,\theta,\phi) \end{array}\right\} \quad \Rightarrow \quad \frac{\partial}{\partial\phi} Y_{l,m} = im\, Y_{l,m} \quad \Rightarrow \quad Y_{l,m}(\theta,\phi) \propto e^{im\phi}$$
From this result we can derive the allowed values of $l$: since $m$ can range from $-l$ to $+l$ and the function must be single-valued on $\phi \in [0, 2\pi]$, we require:

$$e^{im\phi} = e^{im(\phi + 2\pi)} \quad \Rightarrow \quad e^{im\, 2\pi} = 1$$

Therefore $m$, and hence $l$, must be an integer. Note that this is different from the total angular momentum index $j$, which could have half-integer values. Consider that any direction defined by $|\theta,\phi\rangle$ can be obtained by an Euler rotation with $\alpha = \phi$ and $\beta = \theta$ (no third rotation is necessary). We can then see that:
$$|\theta,\phi\rangle = \mathcal{D}\!\left( R(\alpha=\phi,\ \beta=\theta,\ \gamma=0) \right) |\hat{z}\rangle$$

Expanding this in terms of some other set of states $l'$ and $m'$, this becomes:

$$Y^*_{l,m}(\theta,\phi) = \langle l,m|\theta,\phi\rangle = \langle l,m|\, \mathcal{D}\!\left( R(\alpha=\phi,\ \beta=\theta,\ \gamma=0) \right) |\hat{z}\rangle = \sum_{l',m'} \langle l,m|\, \mathcal{D}(R)\, |l',m'\rangle \langle l',m'|\hat{z}\rangle$$

The last term picks out only the $m' = 0$ case, which leaves us with the result:

$$Y^*_{l,m}(\theta,\phi) = \sqrt{\frac{2l+1}{4\pi}}\; D^l_{m,0}$$

Thus the spherical harmonics can be thought of as representations of rotations of states.
Wednesday - 11/19

24 Addition of Angular Momentum

24.1 Overview - New Notation

In order to describe systems with both position and spin, we'll need to modify some of our notation. We can more explicitly write that:

$$|x\rangle \rightarrow |x,s\rangle; \qquad \vec{J} = \vec{L} + \vec{S} \quad \Rightarrow \quad \vec{J} = \vec{L} \otimes \vec{I} + \vec{I} \otimes \vec{S}$$

$$\vec{J}\, |x,s\rangle = \left( \vec{L}\, |x\rangle \right) \otimes |s\rangle + |x\rangle \otimes \left( \vec{S}\, |s\rangle \right)$$

Thus the angular momentum operator acts only on the position part of the ket and the spin operator only on the spin ket. We can denote this in position space as:

$$\psi(x)_{space} \otimes \chi_{spin}$$
24.2 A System of 2 Spin 1/2 Particles

Consider a simple system of two spin 1/2 particles with their spins coupled. In this case, the state of the system will be denoted as:

$$|s_1, s_2;\, m_1, m_2\rangle = \left| \tfrac{1}{2}, \tfrac{1}{2};\ \pm\tfrac{1}{2}, \pm\tfrac{1}{2} \right\rangle$$

Thus for this system, there are a total of 4 states.
Consider the overall spin $\vec{S}$ of this system. We can define this as $\vec{S} = \vec{S}_1 \otimes \vec{I} + \vec{I} \otimes \vec{S}_2$. Further, we can show by expanding $\vec{S} = \vec{S}_1 + \vec{S}_2$ that in this new system the overall spin operators behave exactly like their individual counterparts:

$$[S_i, S_j] = \left[ \vec{S}_{1i} \otimes \vec{I} + \vec{I} \otimes \vec{S}_{2i},\ \vec{S}_{1j} \otimes \vec{I} + \vec{I} \otimes \vec{S}_{2j} \right] = i\hbar \left( \epsilon_{ijk}\, S_{1k} + \epsilon_{ijk}\, S_{2k} \right) = i\hbar\, \epsilon_{ijk}\, S_k$$

In this new system, we still have the familiar eigenkets for the spin operators:

$$\vec{S}^2 |s,m\rangle = \hbar^2\, s(s+1)\, |s,m\rangle; \qquad S_z |s,m\rangle = m\hbar\, |s,m\rangle$$
24.3 Clebsch-Gordan Coefficients

In general, we can represent this system in one of two ways: $|S_1, S_2;\, s, m\rangle$ or, as stated above, $|S_1, S_2;\, m_1, m_2\rangle$. These two notations are related by the Clebsch-Gordan coefficients:

$$|S_1, S_2;\, S, m\rangle = \sum_{m_1, m_2} |S_1, S_2;\, m_1, m_2\rangle \langle S_1, S_2;\, m_1, m_2 | S_1, S_2;\, S, m\rangle$$

where the inner product is the C-G coefficient. We can shorten this notation by dropping the $S_i$ terms, and this becomes simply:

$$|S, m\rangle = \sum_{m_1, m_2} |m_1, m_2\rangle \langle m_1, m_2 | S, m\rangle$$
Returning to our example system of two spin 1/2 particles, we can see that the total spin of the system $S$ can be either 1 or 0. For the system in state $S = 1$ there are three possible $m$ values: $m = 1$, $m = 0$, $m = -1$. These account for 3 of the 4 possible states. This set of $S = 1$ states is a triplet state. The remaining state, $S = 0$, $m = 0$, is a singlet state. In order to find the Clebsch-Gordan coefficients we'll start with a state we already know the coefficient for (the maximal spin state), since we can show that:

$$S_z = S_{1z} + S_{2z} \quad \Rightarrow \quad \langle S,m|\, \left( S_z - S_{1z} - S_{2z} \right) |m_1, m_2\rangle = 0$$

$$\langle S,m|\, \left( m\hbar - m_1\hbar - m_2\hbar \right) |m_1, m_2\rangle = 0 \quad \Rightarrow \quad m = m_1 + m_2$$
From this result we can immediately see that $|S=1, m=1\rangle = \left| m_1 = \tfrac{1}{2},\ m_2 = \tfrac{1}{2} \right\rangle$. To get the other relations we'll apply the $S_-$ operator to the ket and recall that this results in a state exactly like that of the $J_-$ operator:

$$S_- |S,m\rangle = \hbar\sqrt{(S + m)(S - m + 1)}\; |S, m-1\rangle$$
So, applying this to the above results in:

$$S_- |1,1\rangle = S_- \left| \tfrac{1}{2}, \tfrac{1}{2} \right\rangle = \left( S_{1-} + S_{2-} \right) \left| \tfrac{1}{2}, \tfrac{1}{2} \right\rangle$$

$$\hbar\sqrt{(1+1)(1-1+1)}\; |1,0\rangle = \hbar\sqrt{\left( \tfrac{1}{2}+\tfrac{1}{2} \right)\left( \tfrac{1}{2}-\tfrac{1}{2}+1 \right)} \left| -\tfrac{1}{2}, \tfrac{1}{2} \right\rangle + \hbar\sqrt{\left( \tfrac{1}{2}+\tfrac{1}{2} \right)\left( \tfrac{1}{2}-\tfrac{1}{2}+1 \right)} \left| \tfrac{1}{2}, -\tfrac{1}{2} \right\rangle$$

$$|1,0\rangle = \frac{1}{\sqrt{2}} \left| \tfrac{1}{2}, -\tfrac{1}{2} \right\rangle + \frac{1}{\sqrt{2}} \left| -\tfrac{1}{2}, \tfrac{1}{2} \right\rangle$$
Repeating this process results in:

$$|1,-1\rangle = \left| -\tfrac{1}{2}, -\tfrac{1}{2} \right\rangle$$

And from this information, we know that the remaining state must be orthogonal to these three, and thus it has the form:

$$|0,0\rangle = b \left| \tfrac{1}{2}, -\tfrac{1}{2} \right\rangle + c \left| -\tfrac{1}{2}, \tfrac{1}{2} \right\rangle$$
Requiring that $\langle 0,0|1,0\rangle = 0$ (together with normalization) results in:

$$|0,0\rangle = \frac{1}{\sqrt{2}} \left| \tfrac{1}{2}, -\tfrac{1}{2} \right\rangle - \frac{1}{\sqrt{2}} \left| -\tfrac{1}{2}, \tfrac{1}{2} \right\rangle$$
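The ladder construction above can be reproduced numerically in the product basis $\{|{+}{+}\rangle, |{+}{-}\rangle, |{-}{+}\rangle, |{-}{-}\rangle\}$. A sketch (an illustration, not from the notes; $\hbar = 1$):

```python
import numpy as np

# product basis ordering: |++>, |+->, |-+>, |-->  (hbar = 1)
sm = np.array([[0.0, 0.0], [1.0, 0.0]])       # single spin-1/2 lowering operator S-
I2 = np.eye(2)
Sm = np.kron(sm, I2) + np.kron(I2, sm)        # total S- = S1- + S2-

top = np.array([1.0, 0.0, 0.0, 0.0])          # |1,1> = |++>
s10 = Sm @ top
s10 = s10 / np.linalg.norm(s10)               # normalized: this is |1,0>
print(s10)                                    # equal weights 1/sqrt(2) on |+-> and |-+>

# singlet: the remaining m = 0 combination, orthogonal to |1,0>
s00 = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
print(abs(s10 @ s00) < 1e-12)                 # True: orthogonal
```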
24.4 Interaction Energy - Splitting of Energy Levels

Lastly, in the case that there is an interaction energy of the form $H = A\, \vec{S}_1 \cdot \vec{S}_2$, we can immediately see that since

$$\vec{S}^2 = \vec{S}_1^2 + \vec{S}_2^2 + 2\, \vec{S}_1 \cdot \vec{S}_2 \quad \Rightarrow \quad \vec{S}_1 \cdot \vec{S}_2 = \frac{1}{2} \left( \vec{S}^2 - \vec{S}_1^2 - \vec{S}_2^2 \right)$$

the interaction energies for $S = 0, 1$ are:

$$\vec{S}_1 \cdot \vec{S}_2\, |1,m\rangle = \frac{\hbar^2}{4}\, |1,m\rangle \qquad \vec{S}_1 \cdot \vec{S}_2\, |0,0\rangle = -\frac{3\hbar^2}{4}\, |0,0\rangle$$

Thus there is an increase in energy for a triplet state and a decrease in energy for a singlet state. This splitting of energy is important in many areas of physics. That is, if the Hamiltonian is of the form $H = H_0 + A\, \vec{S}_1 \cdot \vec{S}_2$, then the energy levels will be shifted by:

$$E_{S=1} = E_0 + \frac{A\hbar^2}{4} \qquad E_{S=0} = E_0 - \frac{3A\hbar^2}{4}$$
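The $\hbar^2/4$ versus $-3\hbar^2/4$ splitting can be verified by diagonalizing $\vec{S}_1 \cdot \vec{S}_2$ built from Pauli matrices on the 4-dimensional product space (a sketch, not part of the notes; $\hbar = 1$):

```python
import numpy as np

hbar = 1.0
# spin-1/2 operators S = (hbar/2) * sigma
sx = np.array([[0, 1], [1, 0]], dtype=complex) * hbar / 2
sy = np.array([[0, -1j], [1j, 0]]) * hbar / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) * hbar / 2
I2 = np.eye(2)

# S1 . S2 acting on the two-particle product space
S1S2 = sum(np.kron(s, I2) @ np.kron(I2, s) for s in (sx, sy, sz))
eigs = np.sort(np.linalg.eigvalsh(S1S2))
print(eigs)  # one eigenvalue -3/4 (singlet), a triply degenerate +1/4 (triplet)
```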
Friday - 11/21 - No Class

Monday - 12/1

We've seen how the Clebsch-Gordan coefficients relate the base kets $|m_1, m_2\rangle$ and $|j,m\rangle$. Consider the most general case of two spins $j_1$ and $j_2$; the ranges of these indices are:

$$m_1 = -j_1, -j_1+1, \ldots, j_1; \qquad m_2 = -j_2, -j_2+1, \ldots, j_2$$

$$m = -j, -j+1, \ldots, j; \qquad j = j_1 - j_2,\ j_1 - j_2 + 1, \ldots,\ j_1 + j_2; \qquad j_1 > j_2$$

It can be shown that the total number of states in either notation is the same. In the first case there is a total of $(2j_1 + 1)(2j_2 + 1)$ states, and the sum of states for the second notation is:

$$\sum_{j_{min}}^{j_{max}} (2j+1) = \sum_{j_1 - j_2}^{j_1 + j_2} (2j+1) = (2j_1 + 1)(2j_2 + 1)$$
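The equality of the two state counts is easy to spot-check numerically (a quick sketch in pure Python; the sample $(j_1, j_2)$ pairs are arbitrary):

```python
def count_product(j1, j2):
    # number of |m1, m2> states
    return int(round((2 * j1 + 1) * (2 * j2 + 1)))

def count_coupled(j1, j2):
    # j runs from |j1 - j2| to j1 + j2 in integer steps
    total, j = 0, abs(j1 - j2)
    while j <= j1 + j2 + 1e-9:
        total += int(round(2 * j + 1))
        j += 1
    return total

for j1, j2 in [(2, 1), (1.5, 0.5), (3, 2.5), (0.5, 0.5)]:
    assert count_product(j1, j2) == count_coupled(j1, j2)
print("state counts agree")
```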
We can then note a few things about these base kets:

- For any given $m$ value, various combinations of $m_1$ and $m_2$ can generate $m_1 + m_2 = m$.
- For each value of $m$, multiple values of $j$ can exist.
- For any given $m$, the number of possible combinations of $m_1$ and $m_2$ is the same as the number of possible $j$ values. Thus the maximum possible number of states for a single $m$ is $(2j_2 + 1)$.
This can be seen by listing the values possible for a given $m$ value:

$$m_{max} = j_1 + j_2: \quad (m_1, m_2) = (j_1, j_2); \qquad j = j_1 + j_2$$

$$m = j_1 + j_2 - 1: \quad (m_1, m_2) = (j_1 - 1,\ j_2),\ (j_1,\ j_2 - 1); \qquad j = j_1 + j_2,\ j_1 + j_2 - 1$$

$$\vdots$$

$$m = j_1 - j_2: \quad (m_1, m_2) = (j_1,\ -j_2),\ (j_1 - 1,\ -j_2 + 1),\ \ldots,\ (j_1 - 2j_2,\ j_2); \qquad j = j_1 + j_2,\ j_1 + j_2 - 1,\ \ldots,\ j_1 - j_2$$
The case for $j_1 = 2$ and $j_2 = 1$ is worked out in Table 1.

Finally, we can find a general method to determine the C-G coefficients for a given set of states:
- Start from the maximum $m$ value, since the coefficient for this state will be one.
- Repeatedly apply $J_- = J_{1-} + J_{2-}$ to both sides of the equation to find the lower values of $m$'s states.
- For alternate values of $j$, simply expand in terms of all possible states that generate a given $m$ value and require orthogonality with other states with the same $m$ but different $j$ values.
Table 1: Possible m and j combinations for j1 = 2, j2 = 1

  m = 3:   (m1, m2) = (2, 1);                       j = 3
  m = 2:   (m1, m2) = (2, 0), (1, 1);               j = 3, 2
  m = 1:   (m1, m2) = (2, -1), (1, 0), (0, 1);      j = 3, 2, 1
  m = 0:   (m1, m2) = (1, -1), (0, 0), (-1, 1);     j = 3, 2, 1
  m = -1:  (m1, m2) = (0, -1), (-1, 0), (-2, 1);    j = 3, 2, 1
  m = -2:  (m1, m2) = (-1, -1), (-2, 0);            j = 3, 2
  m = -3:  (m1, m2) = (-2, -1);                     j = 3
Lastly, we can consider the case of an electron in an atom. In this case we have $\vec{J} = \vec{L} + \vec{S}$, and the possible values of the indices are:

$$m_l = -l, -l+1, \ldots, l; \qquad m_s = \pm\frac{1}{2}$$

Thus the possible $j$ values are $j = l \pm \frac{1}{2}$. That means we can write the C-G expansion for a given $l$ as:

$$\left| l + \tfrac{1}{2},\ m \right\rangle = A \left| m - \tfrac{1}{2},\ \tfrac{1}{2} \right\rangle + B \left| m + \tfrac{1}{2},\ -\tfrac{1}{2} \right\rangle$$

$$\left| l - \tfrac{1}{2},\ m \right\rangle = A' \left| m - \tfrac{1}{2},\ \tfrac{1}{2} \right\rangle + B' \left| m + \tfrac{1}{2},\ -\tfrac{1}{2} \right\rangle$$
And we require that $A^2 + B^2 = A'^2 + B'^2 = 1$ to normalize the states, and also that $AA' + BB' = 0$ so that the states are orthogonal. This results in the conditions:

$$A = \sqrt{\frac{l + m + \frac{1}{2}}{2l + 1}}; \qquad B = \sqrt{\frac{l - m + \frac{1}{2}}{2l + 1}}$$

$$A' = -\sqrt{\frac{l - m + \frac{1}{2}}{2l + 1}}; \qquad B' = \sqrt{\frac{l + m + \frac{1}{2}}{2l + 1}}$$
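The normalization and orthogonality conditions on these coefficients can be spot-checked for sample values (a sketch; the sign convention $A' = -B$, $B' = A$ is one common choice consistent with $AA' + BB' = 0$):

```python
import math

def coeffs(l, m):
    # A, B for |l+1/2, m>;  A', B' for |l-1/2, m>  (one standard sign convention)
    A = math.sqrt((l + m + 0.5) / (2 * l + 1))
    B = math.sqrt((l - m + 0.5) / (2 * l + 1))
    return A, B, -B, A

for l in (1, 2, 3):
    m = 0.5                     # sample m value
    A, B, Ap, Bp = coeffs(l, m)
    assert abs(A * A + B * B - 1) < 1e-12        # normalization
    assert abs(Ap * Ap + Bp * Bp - 1) < 1e-12
    assert abs(A * Ap + B * Bp) < 1e-12          # orthogonality
print("normalization and orthogonality hold")
```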
This gives another description using both spin states and orbital angular momentum states. The spherical harmonics are combined with the spin states to give:

$$\mathcal{Y}_{l + \frac{1}{2},\, m} = A\, Y_{l,\, m - \frac{1}{2}}\; \chi_+ + B\, Y_{l,\, m + \frac{1}{2}}\; \chi_-$$
Wednesday - 12/3

25 Tensors in Quantum Mechanics

Recall that in classical mechanics, a 3-vector $\vec{v}$ can be defined as an object with 3 elements that transforms like a vector. This vector is actually a tensor of rank 1. This can be seen as:

$$v'_i = \sum_j R_{ij}\, v_j$$

whereas tensors of rank 2 can be represented by a combination of two such vectors:

$$T_{ij} \equiv u_i v_j \quad \Rightarrow \quad T'_{ij} = \sum_{lm} R_{il} R_{jm}\, T_{lm}$$

And (just for reference) a rank 3 tensor has the definition that:

$$T'_{ijk} = \sum_{lmn} R_{il} R_{jm} R_{kn}\, T_{lmn}$$
25.1 Reducible and Irreducible Tensors

Tensors are reducible in the sense that we can write any rank 2 tensor as a sum of several other tensors:

$$T_{ij} = \left[ \frac{1}{2}\left( T_{ij} + T_{ji} \right) - \frac{1}{3}\delta_{ij}\, \mathrm{Tr}[T] \right] + \frac{T_{ij} - T_{ji}}{2} + \frac{1}{3}\delta_{ij}\, \mathrm{Tr}[T]$$

And then defining the other tensors as:

$$S_{ij} \equiv \frac{1}{2}\left( T_{ij} + T_{ji} \right) - \frac{1}{3}\delta_{ij}\, \mathrm{Tr}[T]; \qquad A_{ij} \equiv \frac{T_{ij} - T_{ji}}{2}; \qquad c \equiv \frac{1}{3}\mathrm{Tr}[T]$$

We can immediately note that $S_{ij}$ is a symmetric, traceless tensor with 5 independent elements (6 in the symmetric combination, less one from removing the trace), $A_{ij}$ is an antisymmetric tensor and therefore has 3 independent elements, and the final term is a constant times the identity. This gives $5 + 3 + 1 = 9$ elements in total, matching the 9 elements of $T_{ij}$. The point is that each of these three pieces transforms only among itself under rotations. This can be seen in the transform below:
$$\begin{pmatrix} T'_{11} \\ \vdots \end{pmatrix} = \begin{pmatrix} R_{ij} & \cdots \\ \vdots & \ddots \end{pmatrix} \begin{pmatrix} T_{11} \\ \vdots \end{pmatrix} \quad \longrightarrow \quad \begin{pmatrix} S'_{11} \\ \vdots \\ A'_{11} \\ \vdots \\ c' \end{pmatrix} = \begin{pmatrix} M_1^{(5\times5)} & 0 & 0 \\ 0 & M_2^{(3\times3)} & 0 \\ 0 & 0 & M_3^{(1\times1)} \end{pmatrix} \begin{pmatrix} S_{11} \\ \vdots \\ A_{11} \\ \vdots \\ c \end{pmatrix}$$

where $M_1^{(5\times5)}$ denotes a $5\times5$ block within the $9\times9$ matrix which is the entire transform, and the same is true for the other blocks. The tensors $S$, $A$, and $c$ are referred to as irreducible tensors since they transform independently.
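The decomposition can be illustrated numerically: split a random $3\times3$ tensor into its symmetric-traceless, antisymmetric, and trace pieces and check that each piece transforms into itself under a rotation (a sketch; the rotation about $z$ and the $1/3$ trace factor follow the decomposition above):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))

c = np.trace(T) / 3                              # scalar piece (1 component)
A = (T - T.T) / 2                                # antisymmetric piece (3 components)
S = (T + T.T) / 2 - c * np.eye(3)                # symmetric traceless (5 components)

assert np.allclose(T, S + A + c * np.eye(3))     # the pieces reconstruct T
assert abs(np.trace(S)) < 1e-12                  # S carries no trace
assert np.allclose(A, -A.T)

# sample rotation about z: each irreducible piece transforms into itself
th = 0.3
R = np.array([[np.cos(th), -np.sin(th), 0],
              [np.sin(th),  np.cos(th), 0],
              [0, 0, 1]])
Tp = R @ T @ R.T                                 # T'_ij = R_il R_jm T_lm
Sp = (Tp + Tp.T) / 2 - (np.trace(Tp) / 3) * np.eye(3)
assert np.allclose(Sp, R @ S @ R.T)              # the rotated S depends only on S
assert abs(np.trace(Tp) / 3 - c) < 1e-12         # the trace part is invariant
print("irreducible pieces transform independently")
```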
25.2 Cartesian Vectors in Classical Mechanics

Classically, any vector that behaves as $v'_i = \sum_j R_{ij} v_j$ under a rotation where $RR^T = 1$ is referred to as a Cartesian vector. We can consider then the infinitesimal transformation given by:

$$R^k_{ij} = \delta_{ij} - \epsilon_{ijk}\, \varepsilon$$

as a rotation by $\varepsilon$ about the axis denoted by $\hat{k}$. This can easily be checked, as it correctly (to first order) generates the rotation matrix $R_z$:

$$R_z = \begin{pmatrix} 1 & -\varepsilon & 0 \\ \varepsilon & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
25.3 Cartesian Vectors in Quantum Mechanics

In quantum mechanics, the state $|\alpha\rangle$ rotates into $|\alpha_R\rangle$ by the application of the rotation operator $\mathcal{D}(R)$. So, using the above relations, we should be able to define a quantum mechanical Cartesian vector as:

$$v'_i = \langle \alpha_R|\, v_i\, |\alpha_R\rangle; \qquad v_i = \langle \alpha|\, v_i\, |\alpha\rangle$$

And the above transformation gives the definition:

$$\langle \alpha_R|\, v_i\, |\alpha_R\rangle = \langle \alpha|\, \mathcal{D}(R)^\dagger\, v_i\, \mathcal{D}(R)\, |\alpha\rangle = \sum_j R_{ij}\, \langle \alpha|\, v_j\, |\alpha\rangle \quad \Rightarrow \quad \mathcal{D}(R)^\dagger\, v_i\, \mathcal{D}(R) = \sum_j R_{ij}\, v_j$$
Plugging in the definitions for $\mathcal{D}(R)$ and the above classical definition for an infinitesimal rotation, we get:

$$\left( 1 + \frac{i}{\hbar} J_k\, \varepsilon \right) v_i \left( 1 - \frac{i}{\hbar} J_k\, \varepsilon \right) = \sum_j \left( \delta_{ij} - \epsilon_{ijk}\, \varepsilon \right) v_j$$

which works out to be the formal definition of a quantum mechanical Cartesian vector:

$$[v_i, J_j] = i\hbar\, \epsilon_{ijk}\, v_k$$
25.4 Spherical Tensors

It can be shown that the spherical harmonics form a basis of spherical (irreducible) tensors. Consider the spherical harmonics of order $l = 1$ (up to overall normalization):

$$\left\{ Y_1^{\pm1},\ Y_1^0 \right\} \propto \left\{ \mp\frac{x \pm iy}{r},\ \frac{z}{r} \right\} \rightarrow \left\{ \mp\left( U_x \pm iU_y \right),\ U_z \right\}$$

Similarly for $l = 2$, using the notation $r_\pm = x \pm iy$ and $U_\pm = U_x \pm iU_y$:

$$\left\{ Y_2^{\pm2},\ Y_2^{\pm1},\ Y_2^0 \right\} \propto \left\{ \frac{r_\pm^2}{r^2},\ \frac{r_\pm\, z}{r^2},\ \frac{2z^2 - r_+ r_-}{r^2} \right\} \rightarrow \left\{ U_\pm^2,\ U_\pm U_z,\ 2U_z^2 - U_+ U_- \right\}$$
Friday - 12/5

25.5 Transformation of Spherical Tensors

Now that we know how the spherical harmonics form a group of spherical tensors, consider how these tensors transform among themselves. Consider the transition $|\hat{n}\rangle = |\theta,\phi\rangle \rightarrow |\theta',\phi'\rangle = |\hat{n}'\rangle$. The spherical harmonics are defined to be the inner product $\langle \hat{n}|l,m\rangle$, so the rotation is:

$$|\hat{n}\rangle \rightarrow |\hat{n}'\rangle = \mathcal{D}(R)\, |\hat{n}\rangle$$

The corresponding bra would be $\langle \hat{n}'| = \langle \hat{n}|\, \mathcal{D}(R)^\dagger$. So then we can write the spherical harmonics and expand them in terms of possible indices:

$$Y_l^m(\theta,\phi) = \langle \hat{n}|l,m\rangle \qquad Y_l^m(\theta',\phi') = \langle \hat{n}'|l,m\rangle$$
$$Y_l^m(\theta',\phi') = \langle \hat{n}'|l,m\rangle = \langle \hat{n}|\, \mathcal{D}(R)^\dagger\, |l,m\rangle = \sum_{l',m'} \langle \hat{n}|l',m'\rangle \langle l',m'|\, \mathcal{D}(R)^\dagger\, |l,m\rangle$$

Since the rotation doesn't change the index $l$, the resulting relation is:

$$Y_l^m(\theta',\phi') = \sum_{m'} Y_l^{m'}(\theta,\phi)\, \langle l,m'|\, \mathcal{D}(R)^\dagger\, |l,m\rangle$$

But the matrix element is simply a Wigner coefficient $D^l_{m,m'}$, so the above gives the transformation as:

$$Y_l^m(\theta',\phi') = \sum_{m'} Y_l^{m'}(\theta,\phi)\, D^l_{m,m'}$$
25.6 Definition of Tensor Operators

From the above relation we can define the behavior of general tensor operators. Consider some operator $T^q_k$ which we equate to a spherical tensor operator. From the above, it is clear that we require the operator to obey:

$$\langle T^q_k \rangle_R = {}_R\langle \alpha|\, T^q_k\, |\alpha\rangle_R = \sum_{q'} D^k_{q,q'}\, \langle T^{q'}_k \rangle$$

which must hold for any state ket; therefore, by definition:

$$\mathcal{D}(R)^\dagger\, T^q_k\, \mathcal{D}(R) = \sum_{q'} D^k_{q,q'}\, T^{q'}_k$$

Applying the definition for an infinitesimal rotation, $\mathcal{D}(R) = 1 - \frac{i}{\hbar}\vec{J}\cdot\hat{n}\,\varepsilon + O(\varepsilon^2)$, we can expand the above and find the resulting relation:

$$\left[ \vec{J}\cdot\hat{n},\ T^q_k \right] = \sum_{q'} T^{q'}_k\, \langle k,q'|\, \vec{J}\cdot\hat{n}\, |k,q\rangle$$
As an example, consider the result if $\hat{n} = \hat{z}$. The result is similar to the relation for an angular momentum state $|l,m\rangle$:

$$\left[ J_z,\ T^q_k \right] = \hbar q\, T^q_k \qquad \longleftrightarrow \qquad J_z |l,m\rangle = m\hbar\, |l,m\rangle$$

From this result, we can immediately infer that since

$$\langle j,m'|\, J_\pm\, |j,m\rangle = \hbar\sqrt{(j \mp m)(j \pm m + 1)}\; \delta_{m', m\pm1}$$

we must have (with $j, m \to k, q$):

$$\left[ J_\pm,\ T^q_k \right] = \hbar\sqrt{(k \mp q)(k \pm q + 1)}\; T^{q\pm1}_k$$
25.7 Constructing Higher Rank Tensors

Consider two tensors $X^{q_1}_{k_1}$ and $Z^{q_2}_{k_2}$. We can use these two to construct some other higher rank tensor $T^q_k$, with $q = q_1 + q_2$ and $k$ ranging over the values allowed by combining $k_1$ and $k_2$ (up to $k_1 + k_2$). The analysis above indicates that, just as we wrote

$$|l,m\rangle = \sum_{m_1, m_2} \langle m_1, m_2|l,m\rangle\, |m_1, m_2\rangle$$

we should be able to construct a higher rank tensor operator by similar methods. The resulting equation is:

$$T^q_k = \sum_{q_1, q_2} \langle q_1, q_2|k,q\rangle\, X^{q_1}_{k_1}\, Z^{q_2}_{k_2}$$
26 Wigner-Eckart Theorem

The Wigner-Eckart theorem concerns matrix elements of the form

$$X = \langle \alpha', j', m'|\, T^q_k\, |\alpha, j, m\rangle$$

The exact result is dependent on an integral of the form (here, for example, with $T = z$):

$$X = \int R_{n'}^*(r)\, Y^{m'\,*}_{j'}(\theta,\phi)\; z\; R_n(r)\, Y^m_j(\theta,\phi)\; d^3x$$

However, we can immediately state that if the condition $m + q = m'$ is not met, then the integral is zero. That is,

$$X \neq 0 \quad \text{only if} \quad m + q = m'$$

The second part of the Wigner-Eckart theorem states that the above matrix element can be expressed in terms of reduced matrix elements:

$$\langle \alpha', j', m'|\, T^q_k\, |\alpha, j, m\rangle = \langle j, k;\, m, q | j, k;\, j', m'\rangle\; \frac{\langle \alpha', j'\|\, T_k\, \|\alpha, j\rangle}{\sqrt{2j + 1}}$$
Part IV

MATHEMATICAL REVIEW OF QUANTUM MECHANICS

Thursday - 1/22
26.1 Review of the Concepts of Quantum Mechanics

Quantum mechanics (as all physics) is a mathematical representation of the time evolution of a system. The goal of quantum mechanics is to take information about the state of a system at some time $t_0$ and determine how the system evolves for all time. The equation that describes this evolution is the Schrodinger equation:

$$i\hbar \frac{\partial}{\partial t} \psi(t,\vec{x}) = [H\psi](t,\vec{x}); \qquad \psi(t_0, \vec{x}) = \psi_0(\vec{x})$$

where we've defined the quantity $[H\psi]$ as the resulting function of time and space when the Hamiltonian operator acts on the wave function. An alternate way to generate this evolution for a given Hamiltonian operator is:

$$\psi(t,\vec{x}) = \left[ e^{-\frac{i}{\hbar}H(t - t_0)}\, \psi(t_0, \cdot) \right](\vec{x})$$

The operator $e^{-\frac{i}{\hbar}Ht}$, which we can refer to as $U(t)$, is the time evolution operator discussed in Section 2.1 of Sakurai. It must obey the relations:

$$U(t_1 + t_2) = U(t_1)\, U(t_2) \qquad U(0) = I \qquad U^\dagger(t)\, U(t) = I$$

The exponential form above is one possible form for $U(t)$, valid for Hamiltonians which are self-adjoint and time-independent.
26.2 The Propagator

Taking the definition above for $\psi(t,\vec{x})$ and expanding it in terms of some second variable $\vec{y}$ generates the relation:

$$\psi(t,\vec{x}) = \int \langle x|\, e^{-\frac{i}{\hbar}Ht}\, |y\rangle \langle y|\psi\rangle\, d^3y = \int \langle x|\, e^{-\frac{i}{\hbar}Ht}\, |y\rangle\, \psi_0(\vec{y})\, d^3y$$

The term defined as $\langle x|\, e^{-\frac{i}{\hbar}Ht}\, |y\rangle$ is the propagator seen in the previous semester. Let's consider the case of a free particle and determine the exact form of the propagator.
26.2.1 Example - Propagator for a Free Particle

For a free particle moving in one dimension in a region with no potential, the Hamiltonian is $H = \frac{p^2}{2m} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2}$. There are a number of ways to solve this problem, but the most useful is to perform an eigenfunction expansion. This involves solving:

$$-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} \psi(x) = \epsilon\, \psi(x) \quad \Rightarrow \quad \psi(x) = a\, e^{i\sqrt{\frac{2m\epsilon}{\hbar^2}}\, x} + b\, e^{-i\sqrt{\frac{2m\epsilon}{\hbar^2}}\, x}$$

In order for the solutions to be finite as $x \to \pm\infty$, we require that $\epsilon$ be real and positive. These solutions oscillate like sine and cosine. If we redefine the variable $\epsilon = \frac{\hbar^2}{2m}k^2$, then the solution can be more easily written as:

$$\psi_k(x) = \frac{1}{\sqrt{2\pi}}\, e^{ikx}; \qquad -\infty < k < \infty$$
Because the eigenfunctions form a complete set, we can expand any function $\psi(x)$ in terms of them. This can be written as:

$$\psi(x) = \int \tilde{\psi}(k)\, \psi_k(x)\, dk; \qquad \tilde{\psi}(k) = \int \psi_k^*(x)\, \psi_0(x)\, dx$$

Because we are using the eigenfunctions of the operator $H$, when we expand the wave function in terms of them, it simply turns the operator into a number. That is:

$$H\psi_k = \frac{\hbar^2 k^2}{2m}\, \psi_k \quad \Rightarrow \quad e^{-\frac{i}{\hbar}Ht}\, \psi_k = e^{-\frac{i}{\hbar}\frac{\hbar^2 k^2}{2m}t}\, \psi_k$$
So, we can apply this to our evolution operator and find that:

$$\psi(t,x) = \left[ e^{-\frac{i}{\hbar}Ht}\, \psi \right](t,x) = e^{-\frac{i}{\hbar}Ht} \int \tilde{\psi}(k)\, \frac{1}{\sqrt{2\pi}}\, e^{ikx}\, dk$$

$$= \int \left[ \int \psi_k^*(y)\, \psi_0(y)\, dy \right] \frac{1}{\sqrt{2\pi}}\, e^{-\frac{i}{\hbar}\frac{\hbar^2 k^2}{2m}t}\, e^{ikx}\, dk = \int \left[ \frac{1}{\sqrt{2\pi}} \int e^{-iky}\, \psi_0(y)\, dy \right] \frac{1}{\sqrt{2\pi}}\, e^{-\frac{i}{\hbar}\frac{\hbar^2 k^2}{2m}t}\, e^{ikx}\, dk$$

$$= \frac{1}{2\pi} \int\!\!\int e^{ik(x-y)}\, e^{-\frac{i}{\hbar}\frac{\hbar^2 k^2}{2m}t}\, \psi_0(y)\, dk\, dy$$

From this we can complete the square, evaluate the integral in $k$, and find the exact form of the propagator:

$$\langle x|\, e^{-\frac{i}{\hbar}Ht}\, |y\rangle = \frac{1}{2\pi} \int e^{ik(x-y)}\, e^{-\frac{i}{\hbar}\frac{\hbar^2 k^2}{2m}t}\, dk$$
Then, defining

$$a^2 = \frac{\hbar t}{2m}; \qquad b = x - y$$

we have:

$$\langle x|\, e^{-\frac{i}{\hbar}Ht}\, |y\rangle = \frac{1}{2\pi} \int e^{-ia^2k^2 + ibk}\, dk = \frac{1}{2\pi} \int e^{-ia^2\left( k - \frac{b}{2a^2} \right)^2 + i\frac{b^2}{4a^2}}\, dk = \frac{1}{2\pi} \sqrt{\frac{\pi}{ia^2}}\; e^{i\frac{b^2}{4a^2}}$$

And plugging back in the values of $a$ and $b$ gives the final result of:

$$\langle x|\, e^{-\frac{i}{\hbar}Ht}\, |y\rangle = \sqrt{\frac{m}{2\pi i\hbar t}}\; e^{i\frac{m(x-y)^2}{2\hbar t}}$$
Tuesday - 1/27

What behavior can we expect as $t \to 0$? According to what we previously derived,

$$\psi(t,\vec{x}) = \int \langle x|\, e^{-\frac{i}{\hbar}Ht}\, |y\rangle\, \psi_0(\vec{y})\, d^3y; \qquad \psi_0(y) = \psi(t=0, y)$$

From this definition, it is obvious that in the limit of $t$ going to zero, the propagator should behave as:

$$\psi(t \to 0, \vec{x}) = \int K(x,y)\, \psi_0(\vec{y})\, d^3y$$

and immediately one can claim that $K(x,y) \to \delta(x-y)$, since the delta function is a solution of the above.
26.3 Qualitative Results of the Propagator

Consider a system where the initial state is given as $\psi_0(x)$, which we assume to be some localized, smooth distribution (let's say a Gaussian) which is zero outside some region $|x| > L$. The time evolution of this state is then given as:

$$\psi(t,x) = \int \sqrt{\frac{m}{2\pi i\hbar t}}\; e^{i\frac{m(x-y)^2}{2\hbar t}}\, \psi_0(y)\, dy$$

There are several things to note here. First, note that at $t = 0$ the result is simply $\psi(0,x) = \psi_0(x)$, as expected, since the integrand goes as $e^{iA(x-y)^2}\psi_0(y)$, and as $A \to \infty$ the exponential essentially becomes a delta function, pulling out only the contribution at $x = y$. Also, note that far from where the particle is initially located ($|x| \gg L$), the exponential can be approximated as:

$$e^{i\frac{m}{2\hbar t}\left( x^2 - 2xy + y^2 \right)} \approx e^{i\frac{m}{2\hbar t}\left( x^2 - 2xy \right)}$$
And evaluating the integral, we find:

$$\psi(t,x) \cong \int \sqrt{\frac{m}{2\pi i\hbar t}}\; e^{i\frac{m}{2\hbar t}\left( x^2 - 2xy \right)}\, \psi_0(y)\, dy = \sqrt{\frac{m}{2\pi i\hbar t}}\; e^{i\frac{m}{2\hbar t}x^2} \int e^{-i\frac{m}{\hbar t}xy}\, \psi_0(y)\, dy$$

where the remaining integral has been replaced by the Fourier transform of the initial state. This gives the result that, far from the initial distribution,

$$\psi(t,x) \cong \sqrt{\frac{m}{i\hbar t}}\; e^{i\frac{m}{2\hbar t}x^2}\; \tilde{\psi}_0\!\left( \frac{mx}{\hbar t} \right)$$
Note that for any point $x$, after some amount of time has passed, the probability of the particle being there is non-zero. However, as more time passes, the probability density decreases as the wave function fills all space (the wave function diffuses into all allowed states).
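This spreading can be checked with the same eigenfunction-expansion idea, implemented as an FFT (a sketch, not part of the notes; $\hbar = m = 1$, a Gaussian of width $\sigma_0 = 1$, and the grid parameters are arbitrary choices). For a Gaussian packet the variance should grow as $\sigma_0^2 + (\hbar t / 2m\sigma_0)^2$:

```python
import numpy as np

hbar = m = 1.0
sigma0, t = 1.0, 5.0
N, L = 4096, 200.0
dx = L / N
x = (np.arange(N) - N // 2) * dx

# normalized Gaussian initial state with <x> = 0
psi0 = (2 * np.pi * sigma0**2) ** -0.25 * np.exp(-x**2 / (4 * sigma0**2))

# evolve each momentum eigenfunction by its phase e^{-i hbar k^2 t / 2m}
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psit = np.fft.ifft(np.exp(-1j * hbar * k**2 * t / (2 * m)) * np.fft.fft(psi0))

prob = np.abs(psit) ** 2
var = np.sum(x**2 * prob) * dx                  # <x^2>, since <x> stays 0
expected = sigma0**2 + (hbar * t / (2 * m * sigma0)) ** 2
print(var, expected)                            # the two agree closely
```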
26.4 Examples Using the Propagator

26.4.1 Particle in an Infinite Well

Now that we've considered an unbound state, consider the basic quantum mechanics of a particle in an infinite well of width $L$. The problem is summarized as:

$$i\hbar \frac{\partial}{\partial t} \psi(t,x) = H\psi(t,x); \qquad \psi(t=0, x) = \psi_0(x); \qquad \psi(t,0) = \psi(t,L) = 0$$

To find the evolution, we need to determine the form of the propagator. We can again use an eigenfunction expansion to turn the operator $H$ into some number. The solution is the same as the free particle (exponentials, or sines and cosines), but we have the additional concern of boundary conditions. Choosing the sine and cosine form, the result is:

$$-\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \psi(x) = \epsilon\, \psi(x)$$

$$\psi(x) = A \sin\left( \frac{\sqrt{2m\epsilon}}{\hbar}\, x \right) + B \cos\left( \frac{\sqrt{2m\epsilon}}{\hbar}\, x \right)$$

The condition at $x = 0$ requires dropping the cosine term. The condition at $x = L$ gives the allowed values $\epsilon_n = \frac{1}{2m}\left( \frac{n\pi\hbar}{L} \right)^2$. Plugging these in gives:

$$\psi_n(x) = \sqrt{\frac{2}{L}}\, \sin\left( \frac{n\pi}{L}\, x \right), \qquad \epsilon_n = \frac{1}{2m}\left( \frac{n\pi\hbar}{L} \right)^2$$
Now we can evaluate the propagator as we did with the free particle:

$$\psi(t,x) = \sum_{n=1}^{\infty} e^{-\frac{i}{\hbar}Ht}\, c_n\, \psi_n(x); \qquad c_n = \int_0^L \psi_n^*(y)\, \psi_0(y)\, dy = \langle \psi_n|\psi_0\rangle$$

The eigenfunctions turn the operator in the exponential into a number, and the rest is straightforward:

$$\psi(t,x) = \sum_{n=1}^{\infty} e^{-\frac{i}{\hbar}\frac{n^2\pi^2\hbar^2}{2mL^2}t} \left[ \int \sqrt{\frac{2}{L}}\, \sin\left( \frac{n\pi}{L}\, y \right) \psi_0(y)\, dy \right] \sqrt{\frac{2}{L}}\, \sin\left( \frac{n\pi}{L}\, x \right)$$

$$\langle x|\, e^{-\frac{i}{\hbar}Ht}\, |y\rangle = \frac{2}{L} \sum_{n=1}^{\infty} e^{-\frac{i}{\hbar}\frac{n^2\pi^2\hbar^2}{2mL^2}t}\, \sin\left( \frac{n\pi}{L}\, y \right) \sin\left( \frac{n\pi}{L}\, x \right)$$
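The eigenfunction-expansion evolution can be implemented directly. A sketch (an illustration, not from the notes; $\hbar = m = L = 1$, the initial state $x(L - x)$, and truncation at 60 modes are arbitrary choices) that checks the expansion reproduces $\psi_0$ at $t = 0$ and conserves the norm:

```python
import numpy as np

hbar = m = L = 1.0
N = 2000                                  # spatial grid points
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
nmax = 60                                 # number of well eigenfunctions kept

# smooth initial state obeying psi(0) = psi(L) = 0, normalized on the grid
psi0 = x * (L - x)
psi0 = psi0 / np.sqrt(np.sum(psi0**2) * dx)

def evolve(t):
    psi = np.zeros(N, dtype=complex)
    for n in range(1, nmax + 1):
        phi_n = np.sqrt(2 / L) * np.sin(n * np.pi * x / L)
        c_n = np.sum(phi_n * psi0) * dx                     # <phi_n | psi0>
        eps_n = (n * np.pi * hbar / L) ** 2 / (2 * m)
        psi = psi + np.exp(-1j * eps_n * t / hbar) * c_n * phi_n
    return psi

# at t = 0 the expansion must reproduce psi0, and the norm is conserved
err0 = np.max(np.abs(evolve(0.0) - psi0))
norm_t = np.sum(np.abs(evolve(0.37)) ** 2) * dx
print(err0, norm_t)   # err0 small; norm stays 1
```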
Wednesday - 1/28

26.4.2 Particle in a Half Infinite Well

Consider as another example a particle in a potential defined as:

$$V(x) = \begin{cases} 0 & x > 0 \\ \infty & x \leq 0 \end{cases}$$

In this potential, we have the same Hamiltonian operator we've been working with, but with the added condition that $\psi|_{x=0} = 0$. So, using the eigenfunction expansion, we again know that the solutions are either exponentials or sines and cosines. Using the sine and cosine solutions, the boundary condition at $x = 0$ requires dropping the cosine term. This leaves the solution as a sine transform:

$$-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} \psi(x) = \epsilon\, \psi(x) \quad \text{with} \quad \psi(0) = 0$$

$$\psi_k(x) = \sqrt{\frac{2}{\pi}}\, \sin(kx); \qquad k^2 = \frac{2m\epsilon}{\hbar^2}$$

where $\epsilon$ ranges from zero to infinity. From this we can immediately construct:

$$\langle x|\, e^{-\frac{i}{\hbar}Ht}\, |y\rangle = \frac{2}{\pi} \int_0^\infty e^{-\frac{i}{\hbar}\frac{\hbar^2 k^2}{2m}t}\, \sin(kx)\, \sin(ky)\, dk$$

And with some initial state $\psi_0(x)$, the system will evolve into the state:

$$\psi(t,x) = \frac{2}{\pi} \int_0^\infty \int_0^\infty e^{-\frac{i}{\hbar}\frac{\hbar^2 k^2}{2m}t}\, \sin(kx)\, \sin(ky)\, \psi_0(y)\, dk\, dy$$
26.4.3 Impenetrable Wall With Finite Potential Well

Consider a particle in a potential of the form:

$$V(x) = \begin{cases} 0 & x > L \\ -D & 0 < x < L \\ \infty & x \leq 0 \end{cases}$$

This problem requires finding solutions in both regions and then matching up the solutions. For $x > L$, the solution has the familiar form of:

$$\psi^{(>)} = \alpha\, e^{i\frac{\sqrt{2m\epsilon}}{\hbar}x} + \beta\, e^{-i\frac{\sqrt{2m\epsilon}}{\hbar}x}$$

Because of the potential well, the allowed values of $\epsilon$ range from $-D$ to $\infty$. It is therefore necessary to note that in the case $\epsilon < 0$ one of the two exponentials blows up, so that solution requires its coefficient to be zero. If $\epsilon > 0$, then there are no constraints on the coefficients.

The solution inside the well is easily shown to be:

$$-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} \psi^{(<)}(x) = (\epsilon + D)\, \psi^{(<)}(x) \quad \text{with} \quad \psi^{(<)}(0) = 0$$

$$\psi^{(<)}(x) = \sin\left( \frac{\sqrt{2m(\epsilon + D)}}{\hbar}\, x \right)$$
Lastly, the solution and its first derivative must be continuous across the point $x = L$. Consider first $\epsilon > 0$; then the condition can be more easily written in terms of sine and cosine functions as:

$$\sin\left( \frac{\sqrt{2m(\epsilon+D)}}{\hbar} L \right) = A \sin\left( \frac{\sqrt{2m\epsilon}}{\hbar} L \right) + B \cos\left( \frac{\sqrt{2m\epsilon}}{\hbar} L \right)$$

$$\frac{\sqrt{2m(\epsilon+D)}}{\hbar} \cos\left( \frac{\sqrt{2m(\epsilon+D)}}{\hbar} L \right) = \frac{\sqrt{2m\epsilon}}{\hbar}\, A \cos\left( \frac{\sqrt{2m\epsilon}}{\hbar} L \right) - \frac{\sqrt{2m\epsilon}}{\hbar}\, B \sin\left( \frac{\sqrt{2m\epsilon}}{\hbar} L \right)$$

These conditions completely determine the values of $A$ and $B$, which leaves a continuum of values (an integral transform) for $\epsilon$. This is expected since these are the states which have escaped from the well.
Alternately, for $\epsilon < 0$ (where the exterior solution decays as $e^{-\sqrt{2m|\epsilon|}\,x/\hbar}$), the conditions can be written:

$$\sin\left( \frac{\sqrt{2m(\epsilon+D)}}{\hbar} L \right) = \beta\, e^{-\frac{\sqrt{2m|\epsilon|}}{\hbar} L}$$

$$\frac{\sqrt{2m(\epsilon+D)}}{\hbar} \cos\left( \frac{\sqrt{2m(\epsilon+D)}}{\hbar} L \right) = -\frac{\sqrt{2m|\epsilon|}}{\hbar}\, \beta\, e^{-\frac{\sqrt{2m|\epsilon|}}{\hbar} L}$$

These conditions determine the coefficient $\beta$ and the allowed values of $\epsilon$ in terms of the dimensions of the problem (that is, $(D, L)$). The exact condition on $\epsilon$ can be determined by dividing the two equations and replacing $\epsilon$ with a positive variable $\gamma = |\epsilon|$:
$$\tan\left( \frac{\sqrt{2m(D - \gamma)}}{\hbar}\, L \right) = -\sqrt{\frac{D - \gamma}{\gamma}}$$

This can be solved graphically (that's part of the homework). Solutions only exist in the region $0 < \gamma < D$, so it is possible to have no solution or multiple solutions.
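The bound-state energies can also be located numerically straight from the matching conditions, whose ratio gives $k\cot(kL) = -\kappa$ with $k = \sqrt{2m(D-\gamma)}/\hbar$ and $\kappa = \sqrt{2m\gamma}/\hbar$. A sketch (an illustration, not from the notes; $\hbar = m = 1$, $D = 10$, $L = 1$ are arbitrary sample values):

```python
import math

hbar = m = 1.0
D, L = 10.0, 1.0                 # well depth and width (arbitrary sample values)

def residual(gamma):
    # gamma = |epsilon|; matching psi'/psi at x = L gives k*cot(kL) = -kappa,
    # i.e. k*cos(kL) + kappa*sin(kL) = 0
    k = math.sqrt(2 * m * (D - gamma)) / hbar
    kappa = math.sqrt(2 * m * gamma) / hbar
    return k * math.cos(k * L) + kappa * math.sin(k * L)

# scan (0, D) for sign changes, then bisect each bracket
roots = []
n_scan = 2000
grid = [D * (i + 0.5) / n_scan for i in range(n_scan)]
for a, b in zip(grid, grid[1:]):
    if residual(a) * residual(b) < 0:
        lo, hi = a, b
        for _ in range(60):
            mid = (lo + hi) / 2
            if residual(lo) * residual(mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append((lo + hi) / 2)

print(len(roots), roots)   # each root gamma gives a bound state at epsilon = -gamma
```

For these sample values the scan finds a single bound state; deepening or widening the well adds more, and a sufficiently shallow well can have none.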
From all of this (assuming the eigenfunctions can be normalized), it is possible to write the propagator for the system in terms of the bound states $\psi_n(x)$ and the continuum solutions $\psi_\epsilon$:

$$\langle x|\, e^{-\frac{i}{\hbar}Ht}\, |y\rangle = \sum_{j=1}^{n} e^{-\frac{i}{\hbar}\epsilon_j t}\, \psi_j(x)\, \psi_j^*(y) + \int_0^\infty e^{-\frac{i}{\hbar}\epsilon t}\, \psi_\epsilon(x)\, \psi_\epsilon^*(y)\, d\epsilon$$
Monday - 2/2

27 Gauge Transforms - An Electron in a Magnetic Field

Consider the problem of an electron (or other charged particle) in a magnetic field. We will treat the magnetic field as a classical object, which means that our Hamiltonian is written in terms of the momentum operator and the vector potential:

$$H = \frac{1}{2m} \left( -i\hbar\nabla - \frac{e}{c}\vec{A} \right)^2 + V(x)$$

Expanding this out and using the product rule on the operator $\nabla\cdot\vec{A}$, we find the Hamiltonian can be written as:

$$H = -\frac{\hbar^2}{2m}\nabla^2 + \frac{i\hbar e}{2mc}\left[ \left( \nabla\cdot\vec{A} \right) + 2\vec{A}\cdot\nabla \right] + \frac{e^2}{2mc^2}\,|\vec{A}|^2 + V(x)$$
Gauge transforms arise from the fact that the physical quantity $\vec{B}$ is determined from $\vec{B} = \nabla\times\vec{A}$, and thus the vector potential can be altered by some quantity which will not contribute to the physical field: the general statement is that a change $\vec{A}' = \vec{A} + \nabla\Lambda(\vec{x})$ will not change the resulting magnetic field. Under such a gauge transformation, the Hamiltonian above is altered:

$$H' = \frac{1}{2m} \left( -i\hbar\nabla - \frac{e}{c}\vec{A} - \frac{e}{c}\nabla\Lambda \right)^2 + V(x)$$

However, some simple analysis reveals that:
$$\left( -i\hbar\nabla - \frac{e}{c}\vec{A} - \frac{e}{c}\nabla\Lambda \right) e^{\frac{i}{\hbar}\frac{e}{c}\Lambda} = e^{\frac{i}{\hbar}\frac{e}{c}\Lambda} \left( -i\hbar\nabla - \frac{e}{c}\vec{A} \right)$$

And thus,

$$H'\, e^{\frac{i}{\hbar}\frac{e}{c}\Lambda} = e^{\frac{i}{\hbar}\frac{e}{c}\Lambda}\, H$$

Adding in the time derivative term (for time-independent $\Lambda$),

$$i\hbar\frac{\partial}{\partial t} - H' = e^{\frac{i}{\hbar}\frac{e}{c}\Lambda} \left( i\hbar\frac{\partial}{\partial t} - H \right) e^{-\frac{i}{\hbar}\frac{e}{c}\Lambda}$$

so if $\psi$ solves the Schrodinger equation with $H$, then $e^{\frac{i}{\hbar}\frac{e}{c}\Lambda}\psi$ solves it with $H'$.
27.1 Example - Electron in a Uniform Magnetic Field

27.1.1 The Symmetry Gauge

Consider an electron in some region with a uniform magnetic field $\vec{B} = B_0\hat{z}$. Using what is known as the symmetry gauge, one can write the vector potential as:

$$\vec{A} = \frac{1}{2}\vec{B}\times\vec{x} = \frac{1}{2}B_0\, \hat{z}\times\vec{x} = \frac{B_0}{2}\left( -y\,\hat{x} + x\,\hat{y} \right)$$

With this choice of vector potential the term $\nabla\cdot\vec{A}$ is zero, and the Hamiltonian is written as:

$$H = -\frac{\hbar^2}{2m}\nabla^2 - i\hbar\frac{eB_0}{2mc}\left( y\frac{\partial}{\partial x} - x\frac{\partial}{\partial y} \right) + \frac{e^2B_0^2}{8mc^2}\left( x^2 + y^2 \right)$$
However, the middle term is proportional to the angular derivative $\frac{\partial}{\partial\phi} = x\frac{\partial}{\partial y} - y\frac{\partial}{\partial x}$ in cylindrical coordinates, and the last term contains the radial separation $r^2 = x^2 + y^2$. Using these simplifications and writing $\nabla^2$ in cylindrical coordinates makes this a bit simpler to solve:

$$H = -\frac{\hbar^2}{2m}\left( \frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\frac{\partial^2}{\partial\phi^2} + \frac{\partial^2}{\partial z^2} \right) + i\hbar\frac{eB_0}{2mc}\frac{\partial}{\partial\phi} + \frac{e^2B_0^2}{8mc^2}\, r^2$$

Finally, using the definition $\omega_B = \frac{|e|B_0}{mc}$ (and writing $e$ for $|e|$ from here on), this simplifies to:

$$H = -\frac{\hbar^2}{2m}\left( \frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\frac{\partial^2}{\partial\phi^2} + \frac{\partial^2}{\partial z^2} \right) + \frac{i\hbar}{2}\,\omega_B\,\frac{\partial}{\partial\phi} + \frac{1}{8}m\,\omega_B^2\, r^2$$
This can be solved using separation of variables and then eigenfunction expansions in some dimen-
sions to reduce the number of variables:
98
H = ; = (r)T()Z(z)
The z component is simply a Fourier transform while the angular component requires periodicity.
This gives:
Z(z) = e
ikz
; T() = e
ij
where k is a continuous spectra ranging from . . . and the index j takes integer values over
the same range. Plugging these denitions in reduces the problem to:
_

h
2
2m
_

2
r
2
+
1
r

r

j
2
r
2
k
2
_
+
ih
B
2
(ij) +
1
8
m
2
B
r
2
_
(r) = (r)
Gathering the constant terms with gives the exact dierential equation to solve:
_

h
2
2m
_

2
r
2
+
1
r

r

j
2
r
2
_
+
1
8
m
2
B
r
2
_
(r) =
_

h
2
k
2
2m
+
hj
2

B
_
(r)
the solutions of this radial equation are Conuent Hypergeometric Functions and deriving the
solution is left as a homework problem.
27.1.2 The Landau Gauge
Consider if, instead of the symmetry gauge, a choice were made to use the vector potential $\vec A = -B_0\,y\,\hat x$ for this problem (which gives the same field, $\vec\nabla\times\vec A = B_0\hat z$). This choice of vector potential is called the Landau gauge. In this case, the Hamiltonian becomes:

$$H = -\frac{\hbar^2}{2m}\nabla^2 - \frac{i\hbar eB_0}{2mc}\,2y\frac{\partial}{\partial x} + \frac{e^2B_0^2}{2mc^2}\,y^2$$

Expanding out the operator and again using $\omega_B = \frac{|e|B_0}{mc}$,

$$H = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} - \frac{\hbar^2}{2m}\frac{\partial^2}{\partial y^2} - \frac{\hbar^2}{2m}\frac{\partial^2}{\partial z^2} - i\hbar\omega_B\,y\frac{\partial}{\partial x} + \frac{1}{2}m\omega_B^2\,y^2$$

Again we can use separation of variables and then perform eigenfunction expansions to reduce the number of variables:

$$H\psi = \varepsilon\psi;\qquad \psi = X(x)Y(y)Z(z);\qquad Z(z) = e^{ik_z z},\quad X(x) = e^{ik_x x}$$

Plugging these in leaves us with the equation:

$$\left[\frac{\hbar^2k_x^2}{2m} - \frac{\hbar^2}{2m}\frac{\partial^2}{\partial y^2} + \frac{\hbar^2k_z^2}{2m} + \hbar\omega_B k_x\,y + \frac{1}{2}m\omega_B^2\,y^2\right]Y(y) = \varepsilon\,Y(y)$$

Again collecting terms,

$$\left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial y^2} + \hbar k_x\omega_B\,y + \frac{1}{2}m\omega_B^2\,y^2\right]Y(y) = \left[-\frac{\hbar^2}{2m}\left(k_x^2 + k_z^2\right) + \varepsilon\right]Y(y)$$

Completing the square on the two $y$ terms gives:

$$\frac{1}{2}m\omega_B^2\,y^2 + \hbar k_x\omega_B\,y = \frac{1}{2}m\omega_B^2\left(y + \frac{\hbar k_x}{m\omega_B}\right)^2 - \frac{\hbar^2k_x^2}{2m}$$

The additional kinetic-energy term in the $x$ direction cancels with the one on the right-hand side:

$$\left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial y^2} + \frac{1}{2}m\omega_B^2\left(y + \frac{\hbar k_x}{m\omega_B}\right)^2\right]Y(y) = \left(\varepsilon - \frac{\hbar^2k_z^2}{2m}\right)Y(y)$$

This result is strange in a way: the resulting equation for $y$ is a simple harmonic oscillator of frequency $\omega_B$, but with a displacement of the center along the axis proportional to $k_x/\omega_B$.

Wednesday - 2/4

Consider a change of variables in the above form. It is easily seen that this is the simple harmonic oscillator in the variable $y' = y + \frac{\hbar k_x}{m\omega_B}$, and therefore the solution for $Y(y)$ is:

$$Y(y) = A_n\,H_n\!\left(\sqrt{\tfrac{m\omega_B}{\hbar}}\left(y + \frac{\hbar k_x}{m\omega_B}\right)\right)e^{-\frac{m\omega_B}{2\hbar}\left(y + \frac{\hbar k_x}{m\omega_B}\right)^2}$$

where the coefficient $A_n$ is the normalizing factor and $H_n$ is the $n^{th}$ Hermite polynomial. So, combining this with our other results, the wave function and energy eigenvalues (often called the Landau levels) of the electron in the magnetic field can be written as:

$$\psi_n(k_x, k_z; x, y, z) = A_n\,e^{ik_x x}\,e^{ik_z z}\,H_n\!\left(\sqrt{\tfrac{m\omega_B}{\hbar}}\left(y + \frac{\hbar k_x}{m\omega_B}\right)\right)e^{-\frac{m\omega_B}{2\hbar}\left(y + \frac{\hbar k_x}{m\omega_B}\right)^2}$$

$$\varepsilon_n(k_z) = \frac{\hbar^2k_z^2}{2m} + \hbar\omega_B\left(n + \frac{1}{2}\right)$$

Note that the energies are independent of $k_x$, which only shifts the center of the oscillator.
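The Landau-level structure can be checked numerically by diagonalizing the displaced oscillator equation for $Y(y)$. A minimal finite-difference sketch (the grid parameters, $\omega_B$ value, and function name are our own choices; $\hbar = m = 1$); the levels should be $\omega_B(n + \tfrac12)$ and independent of $k_x$:

```python
import numpy as np

hbar = m = 1.0
omega_B = 1.3   # arbitrary test value

def landau_levels(kx, n_levels=4, L=30.0, N=800):
    """Diagonalize -1/2 Y'' + 1/2 w^2 (y + kx/w)^2 Y = eps Y on a grid."""
    y = np.linspace(-L / 2, L / 2, N)
    h = y[1] - y[0]
    # three-point finite-difference kinetic term -1/2 d^2/dy^2
    T = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / (2 * h * h)
    Vd = 0.5 * m * omega_B**2 * (y + hbar * kx / (m * omega_B))**2
    w = np.linalg.eigvalsh(T + np.diag(Vd))
    return w[:n_levels]

lv0 = landau_levels(0.0)
lv2 = landau_levels(2.0)
```

Both spectra match $\omega_B(n + \tfrac12)$ to discretization accuracy, showing that $k_x$ only displaces the oscillator without shifting the levels.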
28 The Simplistic Hydrogen Atom
28.1 Background - Setting Up The Problem
Now we can examine the simplest atom (a single proton and electron) in a simplistic way. By this we mean that we ignore the finite size of the proton and treat it as a point-like source. Also, we take the case of a stationary proton ($m_p \to \infty$) and ignore the spin of the particles. Ignoring the spin of the particles is the biggest simplification that we are making. The Hamiltonian for the system can be written using the Coulomb interaction between the particles as:

$$H = -\frac{\hbar^2}{2m}\nabla^2 - \frac{e^2}{|\vec x|}$$

Next, we need to determine which coordinate system will be best to solve the problem in, since this will determine the exact form of the operator $\nabla^2$. For this problem, we'll use spherical coordinates. In order to proceed we'll need to solve the eigenfunction problem:

$$H\psi(r,\theta,\varphi) = \left[-\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}L\right) - \frac{e^2}{r}\right]\psi(r,\theta,\varphi) = E\,\psi(r,\theta,\varphi)$$

where we've shortened notation by using $L$ for the angular differential operator. We can immediately simplify this by expanding the solution in spherical harmonics, since $L\,Y_{l,m}(\theta,\varphi) = -l(l+1)\,Y_{l,m}(\theta,\varphi)$. So, this leaves

$$\left[-\frac{\hbar^2}{2m}\left(\frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} - \frac{l(l+1)}{r^2}\right) - \frac{e^2}{r}\right]R(r) = E\,R(r);\qquad \psi(r,\theta,\varphi) = R(r)\,Y_{l,m}(\theta,\varphi)$$

Consider first the case $l = 0$. In this case there is no repulsive term and the potential is simply $V(r) = -\frac{e^2}{r}$. For energies less than zero we expect to find bound states held close to the proton, but there is no way of telling in advance how many bound states we will find. If the energy is positive, then we expect to find a continuum of free states.

Next, consider $l \ne 0$. In this case there is an additional term in the potential due to the angular momentum of the electron. The potential is then $V_{\mathrm{eff}}(r) = \frac{\hbar^2 l(l+1)}{2mr^2} - \frac{e^2}{r}$. We again expect to find bound states for energies less than zero, but they will likely be shifted away from the origin by the additional term in the potential. The energies above zero will again have a continuum of values, as there is nothing keeping those states bound.

Before we begin solving the problem, it's convenient to convert the scale of the problem to atomic units. If we introduce the variable $\rho = \kappa r$, where the factor $\kappa$ is to be determined, then the Hamiltonian becomes:

$$H = -\frac{\hbar^2\kappa^2}{2m}\left(\frac{d^2}{d\rho^2} + \frac{2}{\rho}\frac{d}{d\rho} - \frac{l(l+1)}{\rho^2}\right) - \frac{e^2\kappa}{\rho}$$

We can choose $\kappa$ such that the two terms are the same order (that is, we expect solutions where the Coulomb attraction is roughly the same scale as the quantum diffusion that causes the wave packet to spread through all available space). This gives:

$$\frac{\hbar^2\kappa^2}{m} = e^2\kappa\quad\Rightarrow\quad\kappa = \frac{me^2}{\hbar^2}$$

Transforming the radial solution $R_l(r) \to \tilde R_l(\rho)$ shows that the length scale of our solutions is $r = \frac{1}{\kappa} = \frac{\hbar^2}{me^2}$. This changes the problem into:

$$\left[-\frac{1}{2}\left(\frac{d^2}{d\rho^2} + \frac{2}{\rho}\frac{d}{d\rho} - \frac{l(l+1)}{\rho^2}\right) - \frac{1}{\rho}\right]\tilde R(\rho) = \frac{\hbar^2}{me^4}\,E\,\tilde R(\rho)$$

Thus we can also infer that the energy scale of our problem is given by $\frac{me^4}{\hbar^2}$. Our solutions will be in terms of the unitless variable $\rho$ with unitless eigenvalues $\lambda = \frac{\hbar^2}{me^4}E$; those are then scaled by the quantities derived in this process to get the physical values.
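The scaled radial problem can be checked numerically. Substituting $\chi = \rho\,\tilde R$ turns it into $\left[-\tfrac12\frac{d^2}{d\rho^2} - \tfrac1\rho + \tfrac{l(l+1)}{2\rho^2}\right]\chi = \lambda\chi$, whose bound eigenvalues are the hydrogen values $\lambda_n = -\tfrac{1}{2n^2}$ (derived in the next section). A minimal finite-difference sketch (the grid sizes and function name are our own discretization choices):

```python
import numpy as np

def hydrogen_lambdas(l=0, L=25.0, N=1000, n_levels=2):
    """Lowest eigenvalues of -1/2 chi'' + (-1/rho + l(l+1)/(2 rho^2)) chi."""
    rho = np.arange(1, N + 1) * (L / (N + 1))   # chi(0) = 0 is implicit
    h = rho[1] - rho[0]
    T = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / (2 * h * h)
    Veff = -1.0 / rho + l * (l + 1) / (2 * rho**2)
    w = np.linalg.eigvalsh(T + np.diag(Veff))
    return w[:n_levels]

lams = hydrogen_lambdas(l=0)   # expect approximately -1/2 and -1/8
```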
Monday - 2/9
28.2 The Hydrogen Atom Wavefunction
At this stage, we can make the additional simplification $\tilde R(\rho) = \frac{1}{\rho}\chi(\rho)$ to eliminate the single-derivative term,

$$\left[-\frac{1}{2}\frac{d^2}{d\rho^2} - \frac{1}{\rho} + \frac{l(l+1)}{2\rho^2}\right]\chi(\rho) = \lambda\,\chi(\rho)$$

From this equation, one finds solutions of the form:

$$\lambda_n = -\frac{1}{2n^2};\qquad n \ge l + 1$$

$$\tilde R(\rho) = \sqrt{\left(\frac{2}{n}\right)^3\frac{(n-l-1)!}{2n\left[(n+l)!\right]^3}}\;e^{-\frac{\rho}{n}}\left(\frac{2\rho}{n}\right)^l L^{2l+1}_{n+l}\!\left(\frac{2\rho}{n}\right)$$

where $L^j_i(x)$ is notation for the associated Laguerre polynomials. These polynomials can be derived by finding the large- and small-$\rho$ asymptotics of the solution and then determining the full solutions from them. Consider $\rho \ll 1$ and assume that for these values the solution is dominated by a factor $\rho^p$, where $p$ is some power which we will find from an indicial equation:

$$-\frac{1}{2}p(p-1) - p + \frac{l(l+1)}{2} = 0\quad\Rightarrow\quad p = -\frac{1}{2} \pm \sqrt{\frac{1}{4} + l(l+1)} = -\frac{1}{2} \pm \left(l + \frac{1}{2}\right)\quad\Rightarrow\quad p = l,\ -l-1$$

So, for small values of $\rho$, we expect solutions of the form $\tilde R \sim \rho^l$ and $\tilde R \sim \rho^{-(l+1)}$. Of these two results, only the first is a viable answer since we require square integrability. For large values, the resulting equation is:

$$\left(\frac{1}{2}\frac{d^2}{d\rho^2} + \lambda\right)\tilde R(\rho) = 0\quad\Rightarrow\quad\tilde R(\rho) \sim e^{-\sqrt{-2\lambda}\,\rho}$$

and given the result that only the values $\lambda = -\frac{1}{2n^2}$ generate normalizable solutions, we can immediately write that the solution should look like:

$$\tilde R(\rho) = N_{n,l}\,\rho^l\,e^{-\frac{\rho}{n}}\,f_{n,l}(\rho)$$

$$\left[-\frac{1}{2}\frac{d^2}{d\rho^2} - \frac{1}{\rho}\frac{d}{d\rho} - \frac{1}{\rho} + \frac{l(l+1)}{2\rho^2} + \frac{1}{2n^2}\right]\tilde R(\rho) = 0$$

where each allowed value of $n$ and $l$ generates a polynomial $f_{n,l}$, and the normalization can then be determined by integration.

With this result, the final solution can be written in terms of the atomic (Bohr) radius $a_0 = \frac{1}{\kappa} = \frac{\hbar^2}{me^2}$ and the Rydberg constant $R = \frac{me^4}{2\hbar^2}$,

$$\psi_{n,l,m}(r,\theta,\varphi) = Y_{l,m}(\theta,\varphi)\,\tilde R(r/a_0)\,a_0^{-\frac{3}{2}};\qquad E_{n,l,m} = -\frac{R}{n^2}$$
Further, we can make a few notes and comments on the result:

- There is a degeneracy of states, since the energy depends only on the principal quantum number $n$. Given the allowed values of each index:
$$n = 1, 2, \ldots\qquad l = 0, 1, \ldots, n-1\qquad m = -l, -l+1, \ldots, l-1, l$$
there is an $n^2$-fold degeneracy for each energy. This is seen since the $n^{th}$ level contains a total of $\sum_{j=0}^{n-1}(2j+1)$ states; Gauss' formula quickly shows that this is $n^2$.

- Physical chemists and atomic physicists refer to the various states in terms of orbital letters. A few examples are given below:
$$n, l = 1, 0 \to 1s\ \text{state}\qquad n, l = 2, 0 \to 2s\ \text{state}\qquad n, l = 2, 1 \to 2p\ \text{state}\qquad n, l = 3, 2 \to 3d\ \text{state}$$

- The behavior of the energies $E_n \sim -\frac{1}{n^2}$ leads to an accumulation of eigenvalues near $E = 0$ as $n \to \infty$. However, all bound states are contained in the region below $E = 0$. This result is important, as the region $E > 0$ contains the continuous spectrum.

- There is a ground state with lowest energy, which coincides with $n = 1$. Its radial part has the form:
$$R_{1,0}(r) = 2\,a_0^{-\frac{3}{2}}\,e^{-\frac{r}{a_0}}$$
One could think of this ground state as existing because any additional confinement of the wavefunction (deeper in the potential well) would result in increased kinetic energy.
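Two quick numerical checks of these remarks (a sketch with our own function names; $a_0 = 1$ and the radial grid are arbitrary choices): the $n^2$ degeneracy count, and the normalization $\int_0^\infty R_{1,0}^2\,r^2\,dr = 1$ of the ground-state radial function:

```python
import numpy as np

def degeneracy(n):
    """Number of (l, m) pairs for principal quantum number n."""
    return sum(2 * j + 1 for j in range(n))

a0 = 1.0
r = np.linspace(0.0, 40.0 * a0, 200001)
R10 = 2.0 * a0**-1.5 * np.exp(-r / a0)
integrand = R10**2 * r**2
h = r[1] - r[0]
norm = np.sum((integrand[1:] + integrand[:-1]) / 2) * h   # trapezoid rule
```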
Wednesday - 2/11
28.3 Continuous Spectrum of Hydrogen
For energies above zero, the resulting equation describing the simplistic hydrogen atom (in the same scaled units, writing the energy as $\lambda = \frac{k^2}{2}$) is:

$$\psi_{k,l,m} = Y_{l,m}(\theta,\varphi)\,R_{k,l}(r)$$

$$\left[-\frac{1}{2}\frac{d^2}{dr^2} - \frac{1}{r}\frac{d}{dr} + \frac{l(l+1)}{2r^2} - \frac{k^2}{2} - \frac{1}{r}\right]R_{k,l}(r) = 0$$

where the variable $k$ is real and positive since the energies are positive. The solutions of this problem are again confluent hypergeometric functions:

$$R_l(k,r) = c_l(k)\,(2kr)^l\,e^{ikr}\,F\!\left(\frac{i}{k} + l + 1,\ 2l + 2,\ -2ikr\right)$$

and for large values of $r$ this behaves as:

$$R_l(k,r) \cong \frac{2}{r}\sin\!\left(kr + \frac{1}{k}\ln(2kr) - \frac{l\pi}{2} + \delta_l\right);\qquad \delta_l = \arg\Gamma\!\left(l + 1 - \frac{i}{k}\right)$$

where $\arg(z)$ denotes the complex phase of the argument. The propagator for the hydrogen atom can then be built from these eigenfunctions by summing over the discrete (bound-state) solutions and integrating over $k = 0\ldots\infty$ for the continuous spectrum (free states).
Part V
PERTURBATION THEORY IN QUANTUM
MECHANICS
29 Bound State Perturbation Theory
Consider the case that for some Hamiltonian operator $H_0$ the eigenfunctions and eigenvalues are known. If a new operator is built by taking $H = H_0 + V$ for some suitably small addition $V$, then one can assume that the eigenfunctions and eigenvalues of this new operator can be approximated by adding small corrections to the solutions from $H_0$.

Let the original operator have discrete spectrum $\varepsilon^{(0)}_0, \varepsilon^{(0)}_1, \ldots, \varepsilon^{(0)}_n$ with $\varepsilon^{(0)}_i < \varepsilon^{(0)}_j$ for $i < j$ (that is, no degenerate states, and the eigenvalues increase away from the ground state $n = 0$). Assume that each $\varepsilon^{(0)}_i < 0$ and that each of these discrete eigenvalues corresponds to an eigenfunction $\psi^{(0)}_i(\vec x)$. Also, assume that there is a continuous spectrum $\varepsilon^{(0)} \in (0, \infty)$. The assumption that the discrete spectra are all less than zero simplifies the problem, as it assures that there are no discrete eigenvalues within the continuous spectrum. The continuous spectrum corresponds to the eigenfunction solutions denoted $\psi^{(0)}(\varepsilon, \vec x)$.

So, our task is now to solve the eigenvalue problem:

$$(H - \varepsilon)\psi = 0$$

but we can simplify this if we assume that the new spectra and eigenfunctions can be written as

$$\varepsilon_n = \varepsilon^{(0)}_n + \varepsilon^{(1)}_n + \varepsilon^{(2)}_n + \ldots$$
$$\psi_n(\vec x) = \psi^{(0)}_n(\vec x) + \psi^{(1)}_n(\vec x) + \psi^{(2)}_n(\vec x) + \ldots$$

where the $k^{th}$ corrections are of order $V^k$. From these assumptions we can write the above in terms of the correction terms and then consider the resulting equation for the different order corrections:

$$(H - \varepsilon)\psi = \left[H_0 + V - \varepsilon^{(0)}_n - \varepsilon^{(1)}_n - \varepsilon^{(2)}_n - \ldots\right]\left[\psi^{(0)}_n(\vec x) + \psi^{(1)}_n(\vec x) + \psi^{(2)}_n(\vec x) + \ldots\right] = 0$$

$$\left[\left(H_0 - \varepsilon^{(0)}_n\right)\psi^{(0)}_n\right] + \left[\left(H_0 - \varepsilon^{(0)}_n\right)\psi^{(1)}_n + \left(V - \varepsilon^{(1)}_n\right)\psi^{(0)}_n\right] + \left[\left(H_0 - \varepsilon^{(0)}_n\right)\psi^{(2)}_n + \left(V - \varepsilon^{(1)}_n\right)\psi^{(1)}_n - \varepsilon^{(2)}_n\psi^{(0)}_n\right] + \ldots = 0$$

Here we've separated the zeroth-, first-, and second-order terms into brackets and omitted higher-order corrections. If we consider only the part of the first-order bracket that is parallel to $\psi^{(0)}_n$, by taking the inner product of the term,

$$\langle\psi^{(0)}_n|\left(H_0 - \varepsilon^{(0)}_n\right)\psi^{(1)}_n\rangle + \langle\psi^{(0)}_n|\left(V - \varepsilon^{(1)}_n\right)\psi^{(0)}_n\rangle = 0$$

Operating with $H_0$ on $\langle\psi^{(0)}_n|$, the first term is zero since the resulting factor is $\left(\varepsilon^{(0)}_n - \varepsilon^{(0)}_n\right)$. The second term leaves the first-order correction to the eigenvalue:

$$\langle\psi^{(0)}_n|V\psi^{(0)}_n\rangle - \varepsilon^{(1)}_n\langle\psi^{(0)}_n|\psi^{(0)}_n\rangle = 0\quad\Rightarrow\quad\varepsilon^{(1)}_n = \langle\psi^{(0)}_n|V\psi^{(0)}_n\rangle$$

Consider now the part of the first-order correction which is perpendicular to $\psi^{(0)}_n$. The first-order correction can be written as:

$$\psi^{(1)}_n = c\,\psi^{(0)}_n + \phi_n;\qquad \phi_n \perp \psi^{(0)}_n$$

This gives $\psi_n$ as:

$$\psi_n = \psi^{(0)}_n(1 + c) + \phi_n + \psi^{(2)}_n + \ldots$$

However, the extra coefficient $c$ is redundant (it only rescales the zeroth-order piece), so it can be taken as zero. This means that the first-order correction is perpendicular to the zeroth-order solution:

$$\langle\psi^{(0)}_n|\psi^{(1)}_n\rangle = 0;\qquad \psi^{(1)}_n \perp \psi^{(0)}_n$$

With this result we can write the first-order equation with $\psi^{(1)}_n$ as an eigenfunction expansion in terms of the zeroth-order solutions:

$$\left(H_0 - \varepsilon^{(0)}_n\right)\psi^{(1)}_n = -\left(V - \varepsilon^{(1)}_n\right)\psi^{(0)}_n$$
$$\psi^{(1)}_n = \sum_{j\ne n} c^{(1)}_j\,\psi^{(0)}_j + \int c^{(1)}(\varepsilon)\,\psi^{(0)}(\varepsilon,\vec x)\,d\varepsilon$$

Then, taking the inner product with $\psi^{(0)}_j$, the resulting equation is:

$$c^{(1)}_j\left(\varepsilon^{(0)}_j - \varepsilon^{(0)}_n\right) = -\langle\psi^{(0)}_j|\left(V - \varepsilon^{(1)}_n\right)\psi^{(0)}_n\rangle$$

The inner product of $\psi^{(0)}_j$ and $\psi^{(0)}_n$ is zero for $j \ne n$, so we are left with:

$$c^{(1)}_j = \frac{\langle\psi^{(0)}_j|V\psi^{(0)}_n\rangle}{\varepsilon^{(0)}_n - \varepsilon^{(0)}_j}$$

Repeating for the integral term yields the relation for $c^{(1)}(\varepsilon)$:

$$c^{(1)}(\varepsilon) = \frac{\langle\psi^{(0)}(\varepsilon,\vec x)|V\psi^{(0)}_n\rangle}{\varepsilon^{(0)}_n - \varepsilon}$$

and so finally we have the form for the first-order correction:

$$\psi^{(1)}_n = \sum_{j\ne n}\frac{\langle\psi^{(0)}_j|V\psi^{(0)}_n\rangle}{\varepsilon^{(0)}_n - \varepsilon^{(0)}_j}\,\psi^{(0)}_j + \int_0^\infty\frac{\langle\psi^{(0)}(\varepsilon,\vec x)|V\psi^{(0)}_n\rangle}{\varepsilon^{(0)}_n - \varepsilon}\,\psi^{(0)}(\varepsilon,\vec x)\,d\varepsilon$$
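The first-order formulas can be sanity-checked on a finite-dimensional problem with a purely discrete spectrum, comparing against exact diagonalization (the 4x4 matrices, eigenvalues, and $\lambda$ below are arbitrary test choices of ours, and $\lambda V$ plays the role of the small perturbation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Unperturbed H0: diagonal with distinct eigenvalues, so psi^(0)_n are basis vectors.
eps0 = np.array([-5.0, -3.0, -2.0, -0.5])
H0 = np.diag(eps0)

# Small symmetric perturbation lam * V.
B = rng.normal(size=(4, 4))
V = (B + B.T) / 2
lam = 1e-3

exact = np.linalg.eigvalsh(H0 + lam * V)
first_order = eps0 + lam * np.diag(V)        # eps_n ~ eps0_n + lam <n|V|n>

# First-order state mixing for the ground state: c_j = V_j0 / (eps0_0 - eps0_j)
w, U = np.linalg.eigh(H0 + lam * V)
v = U[:, 0] * np.sign(U[0, 0])               # fix the arbitrary eigenvector sign
c1_pred = np.array([V[j, 0] / (eps0[0] - eps0[j]) for j in range(1, 4)])
```

The exact eigenvalues agree with the first-order formula up to $O(\lambda^2)$, and the exact eigenvector components divided by $\lambda$ reproduce the mixing coefficients.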
Tuesday - 2/17
29.1 Normalizing the Solution
It is possible to derive some useful information about the corrections to the solution by requiring that it be normalized:

$$\|\psi_j\|^2 = 1$$
$$\langle\psi^{(0)}_j + \psi^{(1)}_j + \psi^{(2)}_j + \ldots\,|\,\psi^{(0)}_j + \psi^{(1)}_j + \psi^{(2)}_j + \ldots\rangle = 1$$

The inner product can be expanded in powers of $V$. So, writing out the expansion:

$$\|\psi_j\|^2 = \langle\psi^{(0)}_j|\psi^{(0)}_j\rangle + \left[\langle\psi^{(0)}_j|\psi^{(1)}_j\rangle + \langle\psi^{(1)}_j|\psi^{(0)}_j\rangle\right] + \left[\langle\psi^{(0)}_j|\psi^{(2)}_j\rangle + \langle\psi^{(1)}_j|\psi^{(1)}_j\rangle + \langle\psi^{(2)}_j|\psi^{(0)}_j\rangle\right] + \ldots$$

The zeroth-order term is identically 1, so each remaining bracket must individually be zero.

- The first-order term is zero if $2\,\mathrm{Re}\,\langle\psi^{(0)}_j|\psi^{(1)}_j\rangle = 0$. Thus the first-order correction must be orthogonal to the zeroth-order eigenfunction: $\psi^{(1)}_j \perp \psi^{(0)}_j$.

- The second-order term can be made zero by requiring $2\,\mathrm{Re}\,\langle\psi^{(0)}_j|\psi^{(2)}_j\rangle = -\langle\psi^{(1)}_j|\psi^{(1)}_j\rangle$. Since the solutions are assumed to be real, this can be satisfied by choosing:
$$\langle\psi^{(0)}_j|\psi^{(2)}_j\rangle = -\frac{1}{2}\langle\psi^{(1)}_j|\psi^{(1)}_j\rangle$$

The higher-order terms can be determined in the same manner, though it becomes much more complicated.
30 Degenerate Eigenvalue Perturbation Theory
30.1 First Order Corrections For Degenerate Eigenvalues
The previous sections derived methods to determine first-order corrections if there were no discrete eigenvalues embedded in the continuous spectrum and if none of the discrete eigenvalues were degenerate. Consider now a set of indices $j_n = j, j+1, j+2, \ldots, j+n$ whose eigenvalues are all equal. In this case, the system has an $(n+1)$-fold degeneracy. Because of this degeneracy, the previous method of determining the corrections doesn't work, as it generates undefined coefficients. However, it is a good starting point for this analysis.

Recall that last time the first-order correction was expanded in terms of the zeroth-order solutions and orthogonality was used to find the coefficients. One can repeat that process, but separate the degenerate eigenvalues from the rest of the sum:

$$\psi^{(1)}_j(\vec x) = \sum_{k\ne j_n} c^{(1)}_k\,\psi^{(0)}_k(\vec x) + \int c^{(1)}(\varepsilon)\,\psi^{(0)}(\varepsilon,\vec x)\,d\varepsilon + \sum_{k = j_n} c^{(1)}_k\,\psi^{(0)}_k(\vec x)$$

If we plug this into the eigenvalue equation,

$$\sum_{k\ne j_n}\left(\varepsilon^{(0)}_k - \varepsilon^{(0)}_j\right)c^{(1)}_k\,\psi^{(0)}_k(\vec x) + \int\left(\varepsilon - \varepsilon^{(0)}_j\right)c^{(1)}(\varepsilon)\,\psi^{(0)}(\varepsilon,\vec x)\,d\varepsilon = -\left(V - \varepsilon^{(1)}_j\right)\sum_{k = j_n} c^{(1)}_k\,\psi^{(0)}_k$$

Then, taking the inner product with one of the degenerate eigenfunctions $\psi^{(0)}_{j_m}(\vec x)$ sets the first two terms to zero and leaves the relation:

$$\sum_{k = j_n} c^{(1)}_k\,\langle\psi^{(0)}_{j_m}|\left(V - \varepsilon^{(1)}_j\right)\psi^{(0)}_k\rangle = 0$$

This is actually a set of $n+1$ equations, one for each degenerate index $j_m$. Written in matrix form, the system of equations looks like:

$$\begin{pmatrix} \langle\psi_j|V\psi_j\rangle - \varepsilon^{(1)}_j & \langle\psi_j|V\psi_{j+1}\rangle & \cdots & \langle\psi_j|V\psi_{j+n}\rangle \\ \langle\psi_{j+1}|V\psi_j\rangle & \langle\psi_{j+1}|V\psi_{j+1}\rangle - \varepsilon^{(1)}_j & & \vdots \\ \vdots & & \ddots & \\ \langle\psi_{j+n}|V\psi_j\rangle & \cdots & & \langle\psi_{j+n}|V\psi_{j+n}\rangle - \varepsilon^{(1)}_j \end{pmatrix}\begin{pmatrix} c^{(1)}_j \\ c^{(1)}_{j+1} \\ \vdots \\ c^{(1)}_{j+n} \end{pmatrix} = 0$$

In order for there to be a non-trivial solution, the determinant of the matrix must be zero. This generates a degree-$(n+1)$ polynomial in $\varepsilon^{(1)}_j$, which can have at most $n+1$ roots. These solutions are the first-order corrections. One interesting thing to note is that, depending on the symmetries of the original problem and the applied perturbing potential, the degeneracy may be lifted, partially lifted, or not affected at all.
Wednesday - 2/18
30.2 Example - Particle in a Symmetric Box in 2 Dimensions
Consider the two-dimensional problem of a particle confined to the region $-1 \le x \le 1$, $-1 \le y \le 1$. One can scale the variables so that the zeroth-order Hamiltonian is

$$H_0 = -\frac{1}{2}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right)$$

Then, let there be a small perturbing potential of the form:

$$V = \lambda\,e^{-\left[(x-x_0)^2 + (y-y_0)^2\right]/\sigma}$$

Note that this potential can be approximated by delta functions if the energy is low and the value of $\sigma$ is small (as we are assuming it is). The potential is then approximated as $V \cong \lambda\,\delta(x-x_0)\,\delta(y-y_0)$, absorbing the constant from the Gaussian integral into $\lambda$.

The first step is to find the zeroth-order solutions and their corresponding eigenvalues. It can immediately be seen that these are:

$$\psi^{(0)}_{j,k}(x,y) = \sin\!\left(\frac{j\pi}{2}(x+1)\right)\sin\!\left(\frac{k\pi}{2}(y+1)\right);\qquad \varepsilon^{(0)}_{j,k} = \frac{\pi^2}{8}\left(j^2 + k^2\right),\quad j, k = 1, 2, \ldots$$

Note that the eigenvalues $\varepsilon^{(0)}_{n,n}$ are non-degenerate, while the eigenvalues $\varepsilon^{(0)}_{n,m}$ with $n \ne m$ are doubly degenerate, since (for example) $\varepsilon^{(0)}_{1,2} = \varepsilon^{(0)}_{2,1}$. Next, using the notation $V_{j,k;j',k'} = \langle\psi^{(0)}_{j,k}|V\psi^{(0)}_{j',k'}\rangle$ (with the overall factor $\lambda$ written separately), we can find the matrix elements of the perturbation:

$$V_{j,k;j',k'} = \iint\psi^{(0)}_{j,k}(x,y)\,\delta(x-x_0)\,\delta(y-y_0)\,\psi^{(0)}_{j',k'}(x,y)\,dx\,dy$$
$$= \sin\!\left(\frac{j\pi}{2}(x_0+1)\right)\sin\!\left(\frac{k\pi}{2}(y_0+1)\right)\sin\!\left(\frac{j'\pi}{2}(x_0+1)\right)\sin\!\left(\frac{k'\pi}{2}(y_0+1)\right)$$

With this information we can write the eigenfunction equation as a linear-algebra problem. Consider first $H_0$; we can order the eigenvalues by magnitude:

$$H_0 = \mathrm{diag}\left(\varepsilon^{(0)}_{1,1},\,\varepsilon^{(0)}_{1,2},\,\varepsilon^{(0)}_{2,1},\,\varepsilon^{(0)}_{2,2},\,\ldots\right) = \frac{\pi^2}{8}\,\mathrm{diag}\left(2,\,5,\,5,\,8,\,\ldots\right)$$

With this representation, we can write the relation $(H_0 + \lambda V - \varepsilon)\psi = 0$ as:

$$\begin{pmatrix} \frac{\pi^2}{4} + \lambda V_{1,1;1,1} - \varepsilon & \lambda V_{1,1;1,2} & \lambda V_{1,1;2,1} & \lambda V_{1,1;2,2} & \cdots \\ \lambda V_{1,2;1,1} & \frac{5\pi^2}{8} + \lambda V_{1,2;1,2} - \varepsilon & \lambda V_{1,2;2,1} & \lambda V_{1,2;2,2} & \cdots \\ \lambda V_{2,1;1,1} & \lambda V_{2,1;1,2} & \frac{5\pi^2}{8} + \lambda V_{2,1;2,1} - \varepsilon & \lambda V_{2,1;2,2} & \cdots \\ \lambda V_{2,2;1,1} & \lambda V_{2,2;1,2} & \lambda V_{2,2;2,1} & \pi^2 + \lambda V_{2,2;2,2} - \varepsilon & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}\begin{pmatrix} c_{1,1} \\ c_{1,2} \\ c_{2,1} \\ c_{2,2} \\ \vdots \end{pmatrix} = 0$$

where we've expanded the solution as:

$$\psi^{(n,m)}(x,y) = \sum_{j,k} c^{(n,m)}_{j,k}\,\psi^{(0)}_{j,k}(x,y)$$

In order to solve the problem, we've expanded the energies and coefficients as:

$$\varepsilon_{j,k} = \varepsilon^{(0)}_{j,k} + \lambda\,\varepsilon^{(1)}_{j,k} + \lambda^2\,\varepsilon^{(2)}_{j,k} + \ldots$$
$$c^{(n,m)}_{j,k} = c^{(0)(n,m)}_{j,k} + \lambda\,d^{(1)(n,m)}_{j,k} + \lambda^2\,d^{(2)(n,m)}_{j,k} + \ldots$$

Consider a first-order perturbation of the ground state. In this case $j = k = 1$ and the state is non-degenerate. We know that:

$$\varepsilon^{(0)}_{1,1} = \frac{\pi^2}{4};\qquad \varepsilon_{1,1} = \frac{\pi^2}{4} + \lambda\,\varepsilon^{(1)}_{1,1} + \ldots$$

and we expect to find $\varepsilon^{(1)}_{1,1} = \langle\psi^{(0)}_{1,1}|V\psi^{(0)}_{1,1}\rangle$. Plugging this in and dropping terms of order higher than $\lambda$ turns the above relation into:

$$\begin{pmatrix} \lambda\left(V_{1,1;1,1} - \varepsilon^{(1)}_{1,1}\right) & \lambda V_{1,1;1,2} & \cdots \\ \lambda V_{1,2;1,1} & \frac{3\pi^2}{8} + O(\lambda) & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}\begin{pmatrix} c_{1,1} \\ c_{1,2} \\ \vdots \end{pmatrix} = 0$$

Finally, we can note that to leading order the vector $c^{(0)(1,1)}$ is simply $(1, 0, 0, \ldots)^T$. So, plugging this into the above relation:

$$\left(H_0 - \varepsilon + \lambda V\right)c_{1,1} = \begin{pmatrix} \lambda\left(V_{1,1;1,1} - \varepsilon^{(1)}_{1,1}\right) \\ 0 \\ \vdots \end{pmatrix} + O\!\left(\lambda^2\right)$$

which means that, as expected,

$$\varepsilon^{(1)}_{1,1} = V_{1,1;1,1}$$

Consider next the case that the system is in one of the degenerate states $\psi_{1,2}$ and $\psi_{2,1}$. We can look for a solution of $(H_0 + \lambda V - \varepsilon)\psi = 0$ which, in the limit $\lambda \to 0$, is an eigenstate with energy $\varepsilon = \frac{5\pi^2}{8}$. To accomplish this, we can again expand the solution and energy as:

$$\psi(x,y) = \sum_{j,k} c_{j,k}\,\psi^{(0)}_{j,k}(x,y);\qquad \varepsilon = \frac{5\pi^2}{8} + \lambda\,\varepsilon^{(1)} + \ldots$$

However, unlike the non-degenerate case, the vector $c$ does not collapse onto a single basis vector as $\lambda \to 0$. Instead, we have:

$$\begin{pmatrix} c_{1,1} \\ c_{1,2} \\ c_{2,1} \\ c_{2,2} \\ \vdots \end{pmatrix} \longrightarrow \begin{pmatrix} 0 \\ \cos\theta \\ \sin\theta \\ 0 \\ \vdots \end{pmatrix} + O(\lambda)$$

where we've introduced the mixing angle $\theta$. Plugging all of this into the matrix equation yields, to first order:

$$\begin{pmatrix} -\frac{3\pi^2}{8} + \lambda\left(V_{1,1;1,1} - \varepsilon^{(1)}\right) & \lambda V_{1,1;1,2} & \lambda V_{1,1;2,1} & \lambda V_{1,1;2,2} & \cdots \\ \lambda V_{1,2;1,1} & \lambda\left(V_{1,2;1,2} - \varepsilon^{(1)}\right) & \lambda V_{1,2;2,1} & \lambda V_{1,2;2,2} & \cdots \\ \lambda V_{2,1;1,1} & \lambda V_{2,1;1,2} & \lambda\left(V_{2,1;2,1} - \varepsilon^{(1)}\right) & \lambda V_{2,1;2,2} & \cdots \\ \lambda V_{2,2;1,1} & \lambda V_{2,2;1,2} & \lambda V_{2,2;2,1} & \frac{3\pi^2}{8} + \lambda\left(V_{2,2;2,2} - \varepsilon^{(1)}\right) & \cdots \\ \vdots & & & & \ddots \end{pmatrix}\psi + O\!\left(\lambda^2\right) = 0$$

But noting the zero elements of the vector $c$, we can write this as a simpler problem:

$$\begin{pmatrix} V_{1,2;1,2} - \varepsilon^{(1)} & V_{1,2;2,1} \\ V_{2,1;1,2} & V_{2,1;2,1} - \varepsilon^{(1)} \end{pmatrix}\begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} + O(\lambda) = 0$$

This result is the secular equation for the degenerate states $\psi_{1,2}$ and $\psi_{2,1}$. Noting the symmetry $V_{1,2;2,1} = V_{2,1;1,2}$, this can be solved as:

$$\begin{pmatrix} a - \varepsilon^{(1)} & b \\ b & c - \varepsilon^{(1)} \end{pmatrix}\begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} = 0;\qquad a = V_{1,2;1,2},\quad b = V_{1,2;2,1},\quad c = V_{2,1;2,1}$$

In order for there to be a non-trivial solution to this problem, the determinant of the matrix must be zero. This condition determines the allowed values of $\varepsilon^{(1)}$. The resulting equation is:

$$\left(a - \varepsilon^{(1)}\right)\left(c - \varepsilon^{(1)}\right) - b^2 = 0\qquad \varepsilon^{(1)2} - (a + c)\,\varepsilon^{(1)} - b^2 + ac = 0$$

$$\varepsilon^{(1)}_\pm = \frac{1}{2}\left[a + c \pm \sqrt{(a - c)^2 + 4b^2}\right]$$

As long as the term under the square root is non-zero, the degeneracy of the two states is broken by the perturbing potential and the energy level splits. The mixing angle $\theta$ can be determined by plugging the above result back in.
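The splitting formula is just the eigenvalue problem of the symmetric 2x2 secular block, which is easy to verify numerically (the values of $a$, $b$, $c$ below are arbitrary test numbers, not from the notes):

```python
import numpy as np

a, b, c = 0.3, 0.15, -0.2
M = np.array([[a, b], [b, c]])

# Exact eigenvalues of the secular matrix, ascending
eps_exact = np.linalg.eigvalsh(M)

# Closed form: eps_pm = (a+c)/2 -+ sqrt((a-c)^2 + 4 b^2)/2
disc = np.sqrt((a - c)**2 + 4 * b**2)
eps_pm = np.array([(a + c) / 2 - disc / 2, (a + c) / 2 + disc / 2])
```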
Monday - 2/23
30.3 More on Degenerate State Perturbations
Consider some Hamiltonian $H = H_0 + \lambda V$ where $H_0$ has a 2-fold degenerate eigenvalue $\varepsilon^{(0)}$. In this case, the matrix representation of $H_0$ is:

$$H_0 = \mathrm{diag}\left(\varepsilon^{(0)}_1,\,\ldots,\,\varepsilon^{(0)},\,\varepsilon^{(0)},\,\ldots\right)$$

and the perturbation $V$ has general matrix elements $V_{i,j}$, with the $2\times 2$ block in the rows and columns of the two degenerate states written as

$$\begin{pmatrix} a & b \\ b & c \end{pmatrix}$$

The goal is to find the leading-order solution of $(H_0 + \lambda V - \varepsilon)\psi = 0$, where we require $\varepsilon \to \varepsilon^{(0)}$ in the limit $\lambda = 0$. In this case, it is sufficient to expand the eigenvalue as $\varepsilon = \varepsilon^{(0)} + \lambda\mu + \ldots$. Plugging this expansion into the original equation gives the new matrix:

$$H_0 - \varepsilon = \mathrm{diag}\left(\varepsilon^{(0)}_1 - \varepsilon^{(0)},\,\ldots,\,0,\,0,\,\ldots\right) + O(\lambda)$$

that is, the diagonal entries vanish on the degenerate block. Splitting the space into the two degenerate components $\tilde\psi = (\tilde\psi_1, \tilde\psi_2)^T$ and the orthogonal components $\psi_\perp$, the problem to leading order is block-structured:

$$(H_0 + \lambda V - \varepsilon)\psi = \begin{pmatrix} \lambda\begin{pmatrix} -\mu + a & b \\ b & -\mu + c \end{pmatrix} & \lambda\tilde V_0 \\ \lambda\tilde V_0^\dagger & (H_0 + \lambda V - \varepsilon)_\perp \end{pmatrix}\begin{pmatrix} \tilde\psi \\ \psi_\perp \end{pmatrix} = 0$$

where $\tilde V_0$ is the block of $V$ connecting the zeroth-order degenerate states to the rest of the states, and $(H_0 + \lambda V - \varepsilon)_\perp$ acts on the components orthogonal to the degenerate states. From this result, it is clear that $\psi_\perp$ is of order $\lambda$, since the orthogonal-component operator is invertible:

$$\psi_\perp = -\left[(H_0 + \lambda V - \varepsilon)_\perp\right]^{-1}\lambda\,\tilde V_0^\dagger\,\tilde\psi = O(\lambda)$$

Given this result, only the $2\times 2$ matrix is relevant if we are only looking for the first-order perturbation correction. So, the solution is as we saw last class:

$$\lambda\begin{pmatrix} -\mu + a & b \\ b & -\mu + c \end{pmatrix}\begin{pmatrix} \tilde\psi_1 \\ \tilde\psi_2 \end{pmatrix} = O\!\left(\lambda^2\right)\quad\Rightarrow\quad\begin{pmatrix} -\mu + a & b \\ b & -\mu + c \end{pmatrix}\begin{pmatrix} \tilde\psi_1 \\ \tilde\psi_2 \end{pmatrix} = 0$$

which has non-trivial solutions if the determinant is zero; thus we find two possible solutions:

$$\mu_\pm = \frac{1}{2}\left[a + c \pm \sqrt{(a - c)^2 + 4b^2}\right]$$
30.3.1 Simple Cases
Consider two simple cases of the above.

$b = 0$: In this case, the values $\mu_\pm$ are found to be

$$\mu_\pm = \frac{1}{2}\left[a + c \pm (a - c)\right]\qquad\mu_+ = a,\quad\mu_- = c$$

Plugging in, we find the possible solutions are:

$$\begin{pmatrix} \tilde\psi^{(+)}_1 \\ \tilde\psi^{(+)}_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\qquad\begin{pmatrix} \tilde\psi^{(-)}_1 \\ \tilde\psi^{(-)}_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

$b \ne 0,\ a = c$: In this case, $\mu_\pm$ takes on the values

$$\mu_\pm = \frac{1}{2}\left(2a \pm 2b\right)\qquad\mu_+ = a + b,\quad\mu_- = a - b$$

and again, plugging in gives the result:

$$\begin{pmatrix} \tilde\psi^{(+)}_1 \\ \tilde\psi^{(+)}_2 \end{pmatrix} = \frac{1}{\sqrt 2}\begin{pmatrix} 1 \\ 1 \end{pmatrix}\qquad\begin{pmatrix} \tilde\psi^{(-)}_1 \\ \tilde\psi^{(-)}_2 \end{pmatrix} = \frac{1}{\sqrt 2}\begin{pmatrix} 1 \\ -1 \end{pmatrix}$$
30.4 Time Evolution of Degenerate States
Consider the time evolution given by (with $\hbar = 1$):

$$i\frac{\partial\psi}{\partial t} = H\psi$$

and let $\psi|_{t=0} = \phi$, where we can define $\phi$ in terms of the eigenvectors $\tilde\psi^{(\pm)}$ found above as:

$$\phi = c_+\begin{pmatrix} \tilde\psi^{(+)}_1 \\ \tilde\psi^{(+)}_2 \end{pmatrix} + c_-\begin{pmatrix} \tilde\psi^{(-)}_1 \\ \tilde\psi^{(-)}_2 \end{pmatrix}$$

Then, restricting attention to the two degenerate components (corrections are $O(\lambda)$), the evolution can be written as:

$$e^{-iHt}\phi = e^{-i\varepsilon^{(0)}t}\,e^{-i\lambda\begin{pmatrix} a & b \\ b & c \end{pmatrix}t}\phi = e^{-i\varepsilon^{(0)}t}\left[c_+\,e^{-i\lambda\mu_+t}\begin{pmatrix} \tilde\psi^{(+)}_1 \\ \tilde\psi^{(+)}_2 \end{pmatrix} + c_-\,e^{-i\lambda\mu_-t}\begin{pmatrix} \tilde\psi^{(-)}_1 \\ \tilde\psi^{(-)}_2 \end{pmatrix}\right]$$
30.4.1 Simple Cases
Recall that we worked through two simple cases in the previous section. Let's consider a system initially in the state

$$\phi = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$

and find the time evolution of the system for these special cases.

$b = 0$: In this case the initial state can be written as

$$\phi = c_+\begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_-\begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

and clearly $c_+ = 1$ while $c_- = 0$. This then gives the time evolution as

$$\psi(t) = e^{-i\left(\varepsilon^{(0)} + \lambda a\right)t}\begin{pmatrix} 1 \\ 0 \end{pmatrix}$$

So there is no actual evolution; the state only acquires an additional phase.

$b \ne 0,\ a = c$:

$$\phi = c_+\frac{1}{\sqrt 2}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_-\frac{1}{\sqrt 2}\begin{pmatrix} 1 \\ -1 \end{pmatrix}$$

and again it is simple to find $c_+ = c_- = \frac{1}{\sqrt 2}$. This gives the time evolution as

$$\psi(t) = \frac{1}{2}\,e^{-i\varepsilon^{(0)}t}\left[e^{-i\lambda(a+b)t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + e^{-i\lambda(a-b)t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}\right]$$

In this case, the state evolves and oscillates between the two basis states in a manner dependent on the values of $a$ and $b$: the occupation of the first state is $\cos^2(\lambda b t)$ and of the second is $\sin^2(\lambda b t)$.
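The oscillation between the two degenerate states can be confirmed by evolving the 2x2 block numerically (the values of `eps0`, `lam`, `a`, `b`, and `t` are arbitrary test choices; $\hbar = 1$):

```python
import numpy as np

eps0, lam, a, b = 1.0, 0.1, 0.4, 0.25
H = eps0 * np.eye(2) + lam * np.array([[a, b], [b, a]])   # the a = c case

def evolve(t, phi):
    """e^{-iHt} phi via the eigendecomposition of the Hermitian H."""
    w, U = np.linalg.eigh(H)
    return U @ (np.exp(-1j * w * t) * (U.conj().T @ phi))

phi0 = np.array([1.0, 0.0])
t = 3.7
psi = evolve(t, phi0)
p1, p2 = np.abs(psi)**2
```

The occupations reproduce the $\cos^2(\lambda b t)$ and $\sin^2(\lambda b t)$ behavior, with total probability conserved.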
Wednesday - 2/25
31 Time Dependent Perturbations
Consider the time-dependent problem given by (with $\hbar = 1$):

$$i\frac{\partial}{\partial t}\psi(t,\vec x) = (H_0 + V)\,\psi(t,\vec x)$$

for the case that the eigenfunctions and eigenvalues of $H_0$ are well known and understood. The solution of the perturbed problem can then be built out of the zeroth-order solutions:

$$\psi(t,\vec x) = \sum_n c_n(t)\,\psi^{(0)}_n(\vec x)$$

Substituting this in and collecting the zeroth-order terms gives the equation:

$$\sum_n\left[i\frac{\partial}{\partial t} - \varepsilon^{(0)}_n\right]c_n(t)\,\psi^{(0)}_n = \sum_{n'} c_{n'}(t)\,V\psi^{(0)}_{n'}$$

At this point it is useful to rewrite the operator as:

$$\left[i\frac{\partial}{\partial t} - \varepsilon^{(0)}_n\right] = e^{-i\varepsilon^{(0)}_n t}\;i\frac{\partial}{\partial t}\;e^{i\varepsilon^{(0)}_n t}$$

Substituting this in, multiplying to eliminate the extra exponential, and taking the inner product with $\psi^{(0)}_n$ to eliminate the sum on the left gives:

$$i\frac{\partial}{\partial t}\left[e^{i\varepsilon^{(0)}_n t}\,c_n(t)\right] = \sum_{n'} e^{i\varepsilon^{(0)}_n t}\,c_{n'}(t)\,V_{nn'}$$

where we've defined $V_{nn'} = \langle\psi^{(0)}_n|V\psi^{(0)}_{n'}\rangle$. This equation for $c_n(t)$ can be solved more easily if we consider the function $a_n(t) \equiv e^{i\varepsilon^{(0)}_n t}\,c_n(t)$. Note that this function gives the time evolution as:

$$\sum_n a_n(t)\,\psi^{(0)}_n = e^{iH_0t}\sum_n c_n(t)\,\psi^{(0)}_n$$

The differential equation for $c_n(t)$ becomes an equation for $a_n(t)$:

$$i\frac{\partial}{\partial t}a_n(t) = \sum_{n'} e^{i\left(\varepsilon^{(0)}_n - \varepsilon^{(0)}_{n'}\right)t}\,a_{n'}(t)\,V_{nn'}$$

Consider the approximate solution for $c_n$ when the system starts in the stationary state $j$:

$$c_n(t) = \delta_{jn}\,e^{-i\varepsilon^{(0)}_j t} + d_n(t) + O\!\left(V^2\right)$$

where $d_n(t)$ is of order $V$. To leading order (that is, when $V = 0$), the system is in a stationary state described as:

$$\psi = e^{-i\varepsilon^{(0)}_j t}\,\psi^{(0)}_j$$

So, to find the first-order correction, consider the expansion of $a_n(t)$:

$$a_n(t) = \delta_{j,n} + \gamma_n(t) + O\!\left(V^2\right)$$

where $\gamma_n(t)$ is of order $V$ and related to the above coefficient $d_n(t)$ by an exponential factor. Substituting this in gives an equation for the first-order correction:

$$i\frac{\partial}{\partial t}\left[\delta_{j,n} + \gamma_n(t) + \ldots\right] = \sum_{n'} e^{i\left(\varepsilon^{(0)}_n - \varepsilon^{(0)}_{n'}\right)t}\left[\delta_{j,n'} + \gamma_{n'}(t) + \ldots\right]V_{nn'}$$

$$i\frac{\partial}{\partial t}\gamma_n(t) + \ldots = e^{i\left(\varepsilon^{(0)}_n - \varepsilon^{(0)}_j\right)t}\,V_{nj} + \sum_{n'} e^{i\left(\varepsilon^{(0)}_n - \varepsilon^{(0)}_{n'}\right)t}\,\gamma_{n'}(t)\,V_{nn'} + \ldots$$

The higher-order terms are negligible, and additionally $\gamma_{n'}(t)\,V_{nn'} \sim V^2$, so it can be neglected as well. Therefore, the first-order correction is given by:

$$i\frac{\partial}{\partial t}\gamma_n(t) = e^{i\left(\varepsilon^{(0)}_n - \varepsilon^{(0)}_j\right)t}\,V_{nj}$$

Consider what each term of this equation represents. The correction $\gamma_n(t)$ is the time-dependent correction to the unperturbed wavefunction's evolution; $\psi^{(0)}_j$ is the stationary unperturbed state that the wavefunction is initially known to be in, while $\psi^{(0)}_n$ is some other state that the wavefunction is leaking into. The term $V_{nj} = \langle\psi^{(0)}_n|V\psi^{(0)}_j\rangle$ represents the amount of the wavefunction which is leaked (which is dependent on the symmetries and characteristics of the perturbing potential $V$). The above result can be solved by integrating if $V_{nj}$ is known:

$$\gamma_n(t) = -i\int^{t} e^{i\left(\varepsilon^{(0)}_n - \varepsilon^{(0)}_j\right)t'}\,V_{nj}\,dt'$$
31.0.2 The Sudden Approximation
Consider some potential which is switched on at $t = 0$ and is constant for some amount of time $T$. The integral for $\gamma_n(t)$ is then given as:

$$\gamma_n(t) = -\frac{V_{nj}}{\varepsilon^{(0)}_n - \varepsilon^{(0)}_j}\left(e^{i\left(\varepsilon^{(0)}_n - \varepsilon^{(0)}_j\right)t} - 1\right)\qquad 0 < t < T$$

$$\gamma_n(t) = -\frac{V_{nj}}{\varepsilon^{(0)}_n - \varepsilon^{(0)}_j}\left(e^{i\left(\varepsilon^{(0)}_n - \varepsilon^{(0)}_j\right)T} - 1\right)\qquad t \ge T$$

While the perturbation is left on, the evolution coefficients change as energy is added to or removed from the system. After the perturbation is turned off, the system no longer evolves at this order. This is seen since for any value of $t > T$ the coefficient $\gamma_n(t)$ is independent of $t$. One example of this kind of process is a nuclear $\beta$ decay: the positive charge of the nucleus increases by one almost instantaneously, so a perturbing potential change is switched on (almost) instantaneously. In this case the perturbation is never turned off, so the first of the above solutions is the appropriate correction.
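The closed form for $\gamma_n(t)$ during the constant-perturbation window can be checked against direct numerical integration of $-i\int_0^t e^{i\Delta t'}V_{nj}\,dt'$ (the values of `V_nj`, `Delta`, and `t` are arbitrary test numbers; $\hbar = 1$):

```python
import numpy as np

V_nj = 0.2
Delta = 1.7          # Delta = eps_n^(0) - eps_j^(0)
t = 2.5

ts = np.linspace(0.0, t, 20001)
h = ts[1] - ts[0]
integrand = np.exp(1j * Delta * ts) * V_nj
gamma_numeric = -1j * np.sum((integrand[1:] + integrand[:-1]) / 2) * h  # trapezoid
gamma_closed = -(V_nj / Delta) * (np.exp(1j * Delta * t) - 1)
```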
31.0.3 The Adiabatic Approximation
Consider some perturbation which is turned on slowly from time $t_0 \to -\infty$ and reaches a constant value at $t = 0$. If the potential varies slowly, the correction can be found by integrating by parts:

$$\gamma_n(t) = -i\int_{-\infty}^{t} e^{i\left(\varepsilon^{(0)}_n - \varepsilon^{(0)}_j\right)t'}\,V_{nj}(t')\,dt'$$

$$= -\left[\frac{V_{nj}(t')}{\varepsilon^{(0)}_n - \varepsilon^{(0)}_j}\,e^{i\left(\varepsilon^{(0)}_n - \varepsilon^{(0)}_j\right)t'}\right]_{-\infty}^{t} + \frac{1}{\varepsilon^{(0)}_n - \varepsilon^{(0)}_j}\int_{-\infty}^{t} e^{i\left(\varepsilon^{(0)}_n - \varepsilon^{(0)}_j\right)t'}\,\frac{\partial V_{nj}}{\partial t'}\,dt'$$

The second term is dependent on the rate of change of $V_{nj}$, which we've defined to be slowly varying; therefore this term can be neglected. This leaves the solution as approximately:

$$\gamma_n(t) = -\frac{V_{nj}(t)}{\varepsilon^{(0)}_n - \varepsilon^{(0)}_j}\,e^{i\left(\varepsilon^{(0)}_n - \varepsilon^{(0)}_j\right)t} + O\!\left(\frac{\partial V_{nj}}{\partial t}\right)$$
Monday - 3/2
31.1 Time Dependent Perturbation Theory
Consider the case of a time-varying perturbation $V(t, \vec x)$. We can employ the same methods used previously to determine corrections to the energy and eigenfunctions. Let the problem consist of:

$$\left[i\frac{\partial}{\partial t} - H_0 - V\right]\psi(t,\vec x) = 0$$

where the operator $H_0$ is well known and understood (that is, its eigenvalues $\varepsilon^{(0)}_i$ and eigenfunctions $\psi^{(0)}_i$ are known). As before, let's consider building a corrected solution out of the zeroth-order solutions:

$$\psi(t,\vec x) = \sum_j d_j(t)\,e^{-i\varepsilon^{(0)}_j t}\,\psi^{(0)}_j$$

Substituting this into the Schrodinger equation and taking the inner product with $\psi^{(0)}_k$ gives an equation in terms of $d_k$:

$$i\frac{\partial}{\partial t}d_k(t) = \sum_j\langle\psi^{(0)}_k|V\psi^{(0)}_j\rangle\,d_j(t)\,e^{i\left(\varepsilon^{(0)}_k - \varepsilon^{(0)}_j\right)t}$$

This can be solved order by order to determine the terms of the approximation. Let the system start in state $i$, and consider each order separately.

Zeroth order: In the case of $V = 0$, the equation above generates

$$i\frac{\partial}{\partial t}d_k(t) = 0\quad\Rightarrow\quad d_k(t) = \text{constant}$$

and we want the expansion to recover the unperturbed solution; therefore the leading term needs to pick out only the initial state:

$$d_k(t) = \delta_{ik} + O(V)$$

First order: To get first-order corrections, we let $d_k(t) = \delta_{ik} + \gamma_k(t) + O\!\left(V^2\right)$, where we've assumed $\gamma$ is of order $V$. Substituting this in gives the equation:

$$i\frac{\partial}{\partial t}d_k(t) = \langle\psi^{(0)}_k|V\psi^{(0)}_i\rangle\,e^{i\left(\varepsilon^{(0)}_k - \varepsilon^{(0)}_i\right)t} + O\!\left(V^2\right)$$

This can be solved by the integral equation:

$$d_k(t) = \delta_{ik} - i\int_{t_0}^{t}\langle\psi^{(0)}_k|V(t')\,\psi^{(0)}_i\rangle\,e^{i\left(\varepsilon^{(0)}_k - \varepsilon^{(0)}_i\right)t'}\,dt' + O\!\left(V^2\right)$$

where $t_0$ is some initial time determined by the potential (usually $t_0$ will be either $0$ or $-\infty$). This first-order solution is the commonly used expression for approximating the behavior of a time-dependent perturbation.

Second order: Repeating the process can generate higher-order corrections, though the multiple integrals involved quickly make the process difficult. Consider the second-order equation:

$$i\frac{\partial}{\partial t}d_k(t) = V_{ki}(t)\,e^{i\left(\varepsilon^{(0)}_k - \varepsilon^{(0)}_i\right)t} - i\sum_j V_{kj}(t)\,e^{i\left(\varepsilon^{(0)}_k - \varepsilon^{(0)}_j\right)t}\int_{t_0}^{t} V_{ji}(t')\,e^{i\left(\varepsilon^{(0)}_j - \varepsilon^{(0)}_i\right)t'}\,dt' + O\!\left(V^3\right)$$

where we've used the notation $V_{ki}(t) = \langle\psi^{(0)}_k|V(t)\,\psi^{(0)}_i\rangle$.
31.1.1 Notation and Interpretations
In the physical sense, the squared magnitudes of the coefficients are the probabilities that a transition has occurred. This is simple to see at $t = t_0$, since the coefficients collapse to a Kronecker delta on the initial state. As time evolves, the probability of finding the system in some other state changes depending on various parameters. Put simply,

$$|d_k(t)|^2 = P_{i\to k}(t)$$
31.1.2 Example - Infinite Square Well With Time Dependent Perturbation

Consider the problem we've seen multiple times of a particle in a 1-dimensional infinite square well from x = -1 to x = 1. We've seen that for H_0 = -\frac{1}{2}\frac{d^2}{dx^2} the resulting eigenfunctions and eigenvalues are:

\psi_j^{(0)}(x) = \sin\left( \frac{j\pi}{2}(x+1) \right) \qquad \omega_j^{(0)} = \frac{1}{2}\left( \frac{j\pi}{2} \right)^2

If there is a time dependent perturbation V(x,t) = \lambda\,\delta(x)\, e^{-t^2/\tau^2}, then we can find the resulting state in terms of the unperturbed states. We can use the above result for the first order correction to find (assuming k \neq i, and working in units where \hbar = 1):

d_k(t) = -i\lambda \int_{-\infty}^{t} \left[ \int \sin\left( \frac{k\pi}{2}(x'+1) \right) \delta(x')\, e^{-t'^2/\tau^2} \sin\left( \frac{i\pi}{2}(x'+1) \right) dx' \right] e^{i\left(\omega_k^{(0)} - \omega_i^{(0)}\right) t'} \, dt' + O\!\left(V^2\right)

= -i\lambda \sin\left( \frac{k\pi}{2} \right) \sin\left( \frac{i\pi}{2} \right) \int_{-\infty}^{t} e^{-t'^2/\tau^2}\, e^{i\left(\omega_k^{(0)} - \omega_i^{(0)}\right) t'} \, dt' + O\!\left(V^2\right)
This can be solved by completing the square in the exponential:

-\frac{t'^2}{\tau^2} + i\left(\omega_k^{(0)} - \omega_i^{(0)}\right) t' = -\left( \frac{t'}{\tau} - \frac{i\tau}{2}\left(\omega_k^{(0)} - \omega_i^{(0)}\right) \right)^2 - \frac{\tau^2}{4}\left(\omega_k^{(0)} - \omega_i^{(0)}\right)^2
Then, carrying out the Gaussian integral (letting t \to \infty),

d_k = -i\lambda \sin\left(\frac{k\pi}{2}\right)\sin\left(\frac{i\pi}{2}\right) \sqrt{\pi}\,\tau\, e^{-\frac{\tau^2}{4}\left(\omega_k^{(0)} - \omega_i^{(0)}\right)^2} = -i\lambda \sqrt{\pi}\,\tau\, \psi_k^{(0)}(0)\, \psi_i^{(0)}(0)\, e^{-\frac{\tau^2}{4}\left(\omega_k^{(0)} - \omega_i^{(0)}\right)^2}

Note the behavior of this result. First, if either of the eigenfunctions \psi_k^{(0)} or \psi_i^{(0)} has a zero at x = 0, then the matrix element of the perturbation is zero and there is no transition. Further, depending on the value of \omega_k^{(0)} - \omega_i^{(0)}, certain final states are preferred in the transition (assuming the transition is possible at all): the Gaussian factor suppresses transitions with \left|\omega_k^{(0)} - \omega_i^{(0)}\right| \gg 1/\tau.
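The Gaussian integral that produced this result, \int e^{-t'^2/\tau^2} e^{i\omega t'} dt' = \sqrt{\pi}\,\tau\, e^{-\tau^2\omega^2/4}, can be checked numerically; the sketch below (pure Python, with arbitrary illustrative values of \tau and \omega) compares a brute-force sum against the closed form.

```python
import cmath, math

tau, omega = 1.3, 2.0          # arbitrary illustrative values
dt, L = 1e-3, 15.0             # step size and cutoff; the integrand is negligible beyond +-L

# Riemann sum for Integral of exp(-t^2/tau^2) * exp(i*omega*t) dt over (-L, L)
n = int(2 * L / dt)
s = sum(cmath.exp(-(-L + i*dt)**2 / tau**2 + 1j*omega*(-L + i*dt)) * dt
        for i in range(n))

exact = math.sqrt(math.pi) * tau * math.exp(-tau**2 * omega**2 / 4)
assert abs(s - exact) < 1e-5   # completing the square gives the right answer
```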
31.2 Periodic Perturbation Theory

Consider some potential V(t) = \hat{v}(\hat{x})\, e^{-i\omega t}. In this case, the first order correction can be separated as:

d_k(t) = \delta_{ik} - i \int_{t_0}^{t} \langle \psi_k^{(0)} | \hat{v}(\hat{x}) | \psi_i^{(0)} \rangle \, e^{i\left(\omega_k^{(0)} - \omega_i^{(0)} - \omega\right) t'} \, dt' + O\!\left(V^2\right)
The integral in time is now evaluated as:

d_k(t) = -i\, \langle \psi_k^{(0)} | \hat{v}(\hat{x}) | \psi_i^{(0)} \rangle \, \frac{1}{i\left(\omega_k^{(0)} - \omega_i^{(0)} - \omega\right)} \left[ e^{i\left(\omega_k^{(0)} - \omega_i^{(0)} - \omega\right) t'} \right]_{t_0}^{t}
And for finite times, this can be simplified to:

d_k(t) = -\langle \psi_k^{(0)} | \hat{v}(\hat{x}) | \psi_i^{(0)} \rangle \, \frac{e^{i\left(\omega_k^{(0)} - \omega_i^{(0)} - \omega\right) t_0}}{\omega_k^{(0)} - \omega_i^{(0)} - \omega} \left( e^{i\left(\omega_k^{(0)} - \omega_i^{(0)} - \omega\right)(t - t_0)} - 1 \right)
Consider the behavior of this result for t - t_0 small and \omega \approx \omega_k^{(0)} - \omega_i^{(0)}; then the above result can be approximated as:

d_k(t) \approx \langle \psi_k^{(0)} | \hat{v}(\hat{x}) | \psi_i^{(0)} \rangle \left( -i(t - t_0) + \ldots \right)
Alternately, if t_0 \to -\infty and t \to \infty, then the integral takes values over all time. The behavior of d_k(t) then becomes that of a delta function,

d_k = -2\pi i\, \langle \psi_k^{(0)} | \hat{v}(\hat{x}) | \psi_i^{(0)} \rangle \, \delta\!\left(\omega_k^{(0)} - \omega_i^{(0)} - \omega\right)

This means that if the perturbation is left on indefinitely, then after long times only transitions with energy difference equal to the driving frequency are probable. That is, nothing happens if \omega \neq \omega_k^{(0)} - \omega_i^{(0)}.
31.2.1 Transition Probability for Periodic Perturbations

For a given transition, we've seen that the probability of occurrence is P_{i\to k}(t) = |d_k(t)|^2. To simplify notation, we'll use the definition \omega_k^{(0)} - \omega_i^{(0)} - \omega = \omega_{ki}. Then we can write the probability as:

|d_k(t)|^2 = \left| \langle \psi_k^{(0)} | \hat{v}(\hat{x}) | \psi_i^{(0)} \rangle \right|^2 \frac{1}{\omega_{ki}^2} \left( e^{i\omega_{ki}(t-t_0)} - 1 \right)\left( e^{-i\omega_{ki}(t-t_0)} - 1 \right)

= \frac{\left| \langle \psi_k^{(0)} | \hat{v}(\hat{x}) | \psi_i^{(0)} \rangle \right|^2}{\omega_{ki}^2} \left[ 2 - \left( e^{i\omega_{ki}(t-t_0)} + e^{-i\omega_{ki}(t-t_0)} \right) \right]

= 2\, \frac{\left| \langle \psi_k^{(0)} | \hat{v}(\hat{x}) | \psi_i^{(0)} \rangle \right|^2}{\omega_{ki}^2} \left[ 1 - \cos\left( \omega_{ki}(t - t_0) \right) \right]

|d_k(t)|^2 = 4\, \frac{\left| \langle \psi_k^{(0)} | \hat{v}(\hat{x}) | \psi_i^{(0)} \rangle \right|^2}{\left(\omega_k^{(0)} - \omega_i^{(0)} - \omega\right)^2} \sin^2\left( \frac{1}{2}\left(\omega_k^{(0)} - \omega_i^{(0)} - \omega\right)(t - t_0) \right)
Wednesday - 3/4
Consider the behavior of the probability as the perturbation covers all time. In this case, it is useful to determine the behavior of the sine function for large arguments. Consider the definition:

\lim_{t\to\infty} \frac{\sin^2(\alpha t)}{\pi \alpha^2 t} = \delta(\alpha)

This can be shown by noting that for \alpha \neq 0 the above limit is zero, while for \alpha = 0 the value goes off to infinity. Stated more exactly,

\lim_{t\to\infty} \left[ \lim_{\alpha\to 0} \frac{\sin^2(\alpha t)}{\pi \alpha^2 t} \right] = \infty

Further, the integral over all values of \alpha can be determined to be:

\int_{-\infty}^{\infty} \frac{\sin^2(\alpha t)}{\pi \alpha^2 t} \, d\alpha = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\sin^2 x}{x^2} \, dx = 1
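Both defining properties of this nascent delta function can be checked numerically; the snippet below (pure Python; the step sizes and cutoffs are illustrative) verifies that the integral is 1 independent of t and that the peak height grows with t.

```python
import math

def nascent_delta_integral(t, da=1e-3, L=200.0):
    """Integrate sin^2(a t)/(pi a^2 t) over (-L, L) with a midpoint rule."""
    n = int(2 * L / da)
    total = 0.0
    for i in range(n):
        a = -L + (i + 0.5) * da      # midpoint grid avoids the removable point a = 0
        total += math.sin(a * t)**2 / (math.pi * a**2 * t) * da
    return total

for t in (2.0, 5.0, 10.0):
    assert abs(nascent_delta_integral(t) - 1.0) < 1e-2   # unit area for every t

# Peak height sin^2(at)/(pi a^2 t) -> t/pi as a -> 0: grows without bound with t.
peak = lambda t, a=1e-8: math.sin(a*t)**2 / (math.pi * a**2 * t)
assert peak(10.0) > peak(2.0)
```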
These qualities are sufficient to show that this limit is indeed a delta function. From this, we can determine that in the limit t - t_0 \to \infty, the sine function in the probability becomes:

\sin^2\left( \frac{1}{2}\omega_{ki}(t - t_0) \right) = \pi \left( \frac{\omega_{ki}}{2} \right)^2 (t - t_0) \cdot \frac{\sin^2\left( \frac{1}{2}\omega_{ki}(t - t_0) \right)}{\pi \left( \frac{\omega_{ki}}{2} \right)^2 (t - t_0)} \;\to\; \pi \left( \frac{\omega_{ki}}{2} \right)^2 (t - t_0)\, \delta\!\left( \frac{\omega_{ki}}{2} \right) = \frac{\pi}{2}\, \omega_{ki}^2\, (t - t_0)\, \delta(\omega_{ki})
Then, the probability asymptotically approaches behavior described as:

|d_k(t)|^2 = 4\, \frac{\left| \langle \psi_k^{(0)} | \hat{v}(\hat{x}) | \psi_i^{(0)} \rangle \right|^2}{\omega_{ki}^2} \sin^2\left( \frac{1}{2}\omega_{ki}(t - t_0) \right) \;\to\; 2\pi\, \left| \langle \psi_k^{(0)} | \hat{v}(\hat{x}) | \psi_i^{(0)} \rangle \right|^2 (t - t_0)\, \delta\!\left( \omega_k^{(0)} - \omega_i^{(0)} - \omega \right)

And then, the rate of change of the probability is written as:

\frac{dP_{i\to k}}{dt} \approx 2\pi\, \left| \langle \psi_k^{(0)} | \hat{v}(\hat{x}) | \psi_i^{(0)} \rangle \right|^2 \delta\!\left( \omega_k^{(0)} - \omega_i^{(0)} - \omega \right)

This result is known as Fermi's Golden Rule and describes the change in probability of a transition over time. Note that if the frequency of the perturbation does not exactly match the energy difference, the transition will never occur. Note also, however, that this assumes the perturbation is left on for an extended amount of time. For a perturbation applied on a short time scale, these approximations are not valid.
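A quick numerical illustration of the rule (pure Python; the matrix element and cutoffs are illustrative values, not from the notes): integrating the finite-time probability 4|v|^2 \sin^2(\omega_{ki} t/2)/\omega_{ki}^2 over a dense band of final-state frequencies gives a total probability that grows linearly in t with slope 2\pi|v|^2, which is exactly the delta-function weight.

```python
import math

v = 0.05                       # illustrative matrix element <k|v|i>

def total_probability(t, dw=1e-3, W=400.0):
    """Integrate P(w, t) = 4 v^2 sin^2(w t / 2) / w^2 over final-state frequencies w."""
    total = 0.0
    n = int(2 * W / dw)
    for i in range(n):
        w = -W + (i + 0.5) * dw
        total += 4 * v**2 * math.sin(w * t / 2)**2 / w**2 * dw
    return total

# The summed probability grows linearly: P_tot(t) ~ 2*pi*|v|^2 * t.
for t in (3.0, 6.0):
    rate = total_probability(t) / t
    assert abs(rate - 2 * math.pi * v**2) / (2 * math.pi * v**2) < 1e-2
```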
Part VI

THE QUASI-CLASSICAL CASE

32 The WKB Approximation

Consider the case that the energies and geometry of a problem allow the assumption that \hbar is relatively small. In this case, an expansion in orders of \hbar generates what is known as the Quasi-Classical solution to the Schrodinger Equation. Consider writing the wave function as:

\left[ -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x) - \varepsilon \right] \psi = 0 \qquad \psi = e^{\frac{i}{\hbar}\sigma(x)}

Substituting this into the Schrodinger equation, we obtain a non-linear equation for \sigma(x),

-\frac{\hbar^2}{2m} \left[ -\frac{1}{\hbar^2}\left( \frac{d\sigma}{dx} \right)^2 + \frac{i}{\hbar} \frac{d^2\sigma}{dx^2} \right] + V(x) - \varepsilon = 0

\left( \frac{d\sigma}{dx} \right)^2 - i\hbar \frac{d^2\sigma}{dx^2} + 2m\left(V(x) - \varepsilon\right) = 0
At this point, we can exploit the assumption that the scale of our problem causes \hbar to be small relative to the energy scale. We can expand \sigma as:

\sigma(x) = \sigma^{(0)}(x) + \hbar\, \sigma^{(1)}(x) + \hbar^2 \sigma^{(2)}(x) + \ldots

Plugging this in gives:

\left( \frac{d\sigma^{(0)}}{dx} \right)^2 + 2\hbar\, \frac{d\sigma^{(0)}}{dx}\frac{d\sigma^{(1)}}{dx} - i\hbar\, \frac{d^2\sigma^{(0)}}{dx^2} + 2m\left(V(x) - \varepsilon\right) + O\!\left(\hbar^2\right) = 0
We can then require that individual powers of \hbar individually go to zero; this gives the zeroth and first order terms of \sigma(x):

Zero-th Order:

Consider the terms with powers of \hbar^0:

\left( \frac{d\sigma^{(0)}}{dx} \right)^2 + 2m\left(V(x) - \varepsilon\right) = 0 \qquad \frac{d\sigma^{(0)}}{dx} = \pm\sqrt{2m\left(\varepsilon - V(x)\right)}

This can be solved easily by simply integrating against x. Note that the difference between the total and potential energy is the kinetic energy, and the factor of 2m combined with that gives the magnitude of the classical momentum. Therefore,

\sigma^{(0)}(x) = \pm\int \sqrt{2m\left(\varepsilon - V(x)\right)}\, dx = \pm\int |p(x)|\, dx
First Order:

The terms of order \hbar^1 require:

2\, \frac{d\sigma^{(0)}}{dx}\frac{d\sigma^{(1)}}{dx} - i\, \frac{d^2\sigma^{(0)}}{dx^2} = 0

\frac{d\sigma^{(1)}}{dx} = \frac{i}{2}\, \frac{d^2\sigma^{(0)}/dx^2}{d\sigma^{(0)}/dx} = \frac{i}{2}\, \frac{d}{dx} \ln\left( \frac{d\sigma^{(0)}}{dx} \right)

\sigma^{(1)} = \frac{i}{2} \ln\left( \frac{d\sigma^{(0)}}{dx} \right) = \frac{i}{2} \ln\left( \sqrt{2m\left(\varepsilon - V(x)\right)} \right) = \frac{i}{2} \ln |p(x)|
Plugging these back into our definition for \psi gives the approximation as:

\psi = \exp\left[ \pm\frac{i}{\hbar} \int \sqrt{2m\left(\varepsilon - V(x)\right)}\, dx \; - \; \frac{1}{2}\ln\sqrt{2m\left(\varepsilon - V(x)\right)} \; + \; O(\hbar) \right]

And simplifying the exponential,

\psi_\pm(x) = \frac{1}{\left[ 2m\left(\varepsilon - V(x)\right) \right]^{1/4}} \, e^{ \pm\frac{i}{\hbar} \int \sqrt{2m\left(\varepsilon - V(x)\right)}\, dx }

And in general, the solution will take the form of a linear combination of the two above approximations. That is, \psi(x) \approx c_1 \psi_+(x) + c_2 \psi_-(x).
Consider the behavior of these solutions for the possible relations of \varepsilon and V(x). In the case \varepsilon > V(x), the square root term is real and the solutions oscillate over the space. In the case \varepsilon < V(x), the square root term is pure imaginary and the solutions grow and decay exponentially in either direction. Depending on the region of interest, one of the solutions may be dropped since we require \psi(x) to be integrable as x \to \pm\infty. Lastly, the points where V(x) \approx \varepsilon are problematic for this approximation; therefore (as we'll see in the next section), additional work needs to be done to determine the behavior near these classical turning points.
Monday - 3/9
32.1 Connecting The Quasi-Classical Solutions Around A Turning Point -
Airy Function Asymptotics Analysis

The behavior of our solutions in the regions where (V(x) - \varepsilon) < 0 and (V(x) - \varepsilon) > 0 is drastically different. Consider some potential with a value V(a) = \varepsilon, where V(x > a) > \varepsilon and V(x < a) < \varepsilon. The solution in either region can be written as:

\psi(x) = \frac{B}{\left[ V(x) - \varepsilon \right]^{1/4}} \, e^{ -\frac{1}{\hbar} \int_a^x \sqrt{2m\left(V(x') - \varepsilon\right)}\, dx' } \qquad x \gg a

\psi(x) = \frac{A_1}{\left[ \varepsilon - V(x) \right]^{1/4}} \, e^{ \frac{i}{\hbar} \int_a^x \sqrt{2m\left(\varepsilon - V(x')\right)}\, dx' } + \frac{A_2}{\left[ \varepsilon - V(x) \right]^{1/4}} \, e^{ -\frac{i}{\hbar} \int_a^x \sqrt{2m\left(\varepsilon - V(x')\right)}\, dx' } \qquad x \ll a
But as noted above, these solutions are valid only for x far from the turning point a. So what happens at the turning point? Let's consider the relations among the coefficients B, A_1, and A_2. Near the turning point, the potential can be approximated by its Taylor Series expansion,

V(x) \approx V(a) + V'(a)(x - a) + \ldots = \varepsilon + V'(a)(x - a) + \ldots

Plugging this into the original differential equation,

\left[ -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x) - \varepsilon \right]\psi \approx \left[ -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \varepsilon + V'(a)(x - a) + \ldots - \varepsilon \right]\psi

\Rightarrow \qquad \left[ -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V'(a)(x - a) \right]\psi(x) = 0
This resulting equation can be scaled into the Airy DE, \left[ \frac{d^2}{dz^2} - z \right] f(z) = 0. By defining z = \alpha(x - a) and choosing \alpha such that the coefficients of the two terms are equivalent,

\left[ -\frac{\hbar^2}{2m}\alpha^2 \frac{d^2}{dz^2} + \frac{V'(a)}{\alpha}\, z \right] \psi\!\left( \frac{z}{\alpha} + a \right) = 0 \qquad \frac{\hbar^2}{2m}\alpha^2 = \frac{V'(a)}{\alpha} \;\Rightarrow\; \alpha = \left( \frac{2m}{\hbar^2} V'(a) \right)^{1/3}

Therefore the resulting equation to solve is:

\left[ -\frac{d^2}{dz^2} + z \right] \psi\!\left( \left( \frac{\hbar^2}{2m V'(a)} \right)^{1/3} z + a \right) = 0
The solutions of this differential equation can be written as a linear combination of the functions Ai(z) and Bi(z). Consider, though, the large z behavior of these solutions,

\mathrm{Ai}(x)\big|_{x\to+\infty} \approx \frac{1}{2\sqrt{\pi}}\, x^{-1/4}\, e^{-\frac{2}{3}x^{3/2}} \qquad \mathrm{Bi}(x)\big|_{x\to+\infty} \approx \frac{1}{\sqrt{\pi}}\, x^{-1/4}\, e^{\frac{2}{3}x^{3/2}}
As our solution cannot grow exponentially, the Bi(z) term must be neglected. Consider, in addition to the above, the large negative z behavior of Ai(z):

\mathrm{Ai}(x)\big|_{x\to-\infty} \approx \frac{1}{\sqrt{\pi}}\, |x|^{-1/4}\, \sin\left( \frac{2}{3}|x|^{3/2} + \frac{\pi}{4} \right)
From this, we can immediately see that our solution should have the asymptotic behaviors:

\psi(x) \propto \frac{1}{2\sqrt{\pi}} \left[ \left( \frac{2m V'(a)}{\hbar^2} \right)^{1/3} (x - a) \right]^{-1/4} e^{ -\frac{2}{3\hbar}\sqrt{2m V'(a)}\, (x - a)^{3/2} } \qquad x \gg a

\psi(x) \propto \frac{1}{\sqrt{\pi}} \left[ \left( \frac{2m V'(a)}{\hbar^2} \right)^{1/3} (a - x) \right]^{-1/4} \sin\left( \frac{2}{3\hbar}\sqrt{2m V'(a)}\, (a - x)^{3/2} + \frac{\pi}{4} \right) \qquad x \ll a
So, given these asymptotics, what is the connection with the WKB Approximation we've been dealing with? Consider using the same Taylor Expansion for the WKB solution. The integral in the exponent becomes:

\int_a^x \sqrt{2m\left(V(x') - \varepsilon\right)}\, dx' \approx \int_a^x \sqrt{2m V'(a)\, (x' - a)}\, dx' = \frac{2}{3}\sqrt{2m V'(a)}\, (x - a)^{3/2}

Therefore, for x just slightly larger than a, the WKB Approximation behaves as:

\psi(x) \approx \frac{B}{\left[ V'(a)(x - a) \right]^{1/4}} \, e^{ -\frac{2}{3\hbar}\sqrt{2m V'(a)}\, (x - a)^{3/2} }
But this matches the Airy asymptotic nearly exactly; it gives the relation:

\frac{B}{\left[ V'(a) \right]^{1/4}} = \frac{1}{2\sqrt{\pi}} \left( \frac{2m V'(a)}{\hbar^2} \right)^{-1/12}
For the asymptotics of x < a, the WKB Approximation behaves as:

\psi(x) \approx \frac{A_1}{\left[ V'(a)(a - x) \right]^{1/4}} \, e^{ i\frac{2}{3\hbar}\sqrt{2m V'(a)}\, (a - x)^{3/2} } + \frac{A_2}{\left[ V'(a)(a - x) \right]^{1/4}} \, e^{ -i\frac{2}{3\hbar}\sqrt{2m V'(a)}\, (a - x)^{3/2} }
Separating the coefficients and accounting for the added phase in the Airy function asymptotics, the comparison generates the condition:

\frac{1}{\left[ V'(a)(a - x) \right]^{1/4}} \left( A_1 e^{i\frac{\pi}{4}}\, i\sin\left( \frac{2}{3\hbar}\sqrt{2m V'(a)}\,(a - x)^{3/2} + \frac{\pi}{4} \right) + A_2 e^{-i\frac{\pi}{4}}\, i\sin\left( \frac{2}{3\hbar}\sqrt{2m V'(a)}\,(a - x)^{3/2} + \frac{\pi}{4} \right) \right)

= \frac{1}{\sqrt{\pi}} \left( \frac{2m V'(a)}{\hbar^2} \right)^{-1/12} \frac{1}{\left[ V'(a)(a - x) \right]^{1/4}}\, i\sin\left( \frac{2}{3\hbar}\sqrt{2m V'(a)}\,(a - x)^{3/2} + \frac{\pi}{4} \right)
A similar relation holds for the real (cosine) parts of the exponentials, and both of the resulting relations give:

A_1 e^{i\frac{\pi}{4}} + A_2 e^{-i\frac{\pi}{4}} = 2B
This gives the connection formulas from the Airy Function asymptotics as:

\frac{1}{2\sqrt{\pi}}\, (\ldots)^{-1/4}\, e^{-\frac{2}{3}(\ldots)^{3/2}} \;\longleftrightarrow\; \frac{1}{\sqrt{\pi}}\, (\ldots)^{-1/4}\, \sin\left( \frac{2}{3}(\ldots)^{3/2} + \frac{\pi}{4} \right) = \frac{1}{\sqrt{\pi}}\, (\ldots)^{-1/4}\, \cos\left( \frac{2}{3}(\ldots)^{3/2} - \frac{\pi}{4} \right)

\frac{1}{\sqrt{\pi}}\, (\ldots)^{-1/4}\, e^{\frac{2}{3}(\ldots)^{3/2}} \;\longleftrightarrow\; \frac{1}{\sqrt{\pi}}\, (\ldots)^{-1/4}\, \cos\left( \frac{2}{3}(\ldots)^{3/2} + \frac{\pi}{4} \right)
(For an example of how these formulas work in a problem, see the final exam.)
Wednesday - 3/11
32.2 Connecting The Quasi-Classical Solutions Around A Turning Point -
Complex Analysis

As an alternate method to what was done in the previous section, consider tracing the solutions through the complex plane. If x is sufficiently close to a, then we've seen that the solution can be approximated as:

\psi(x) \approx \frac{B}{\left[ V'(a)(x - a) \right]^{1/4}} \, e^{ -\frac{2}{3\hbar}\sqrt{2m V'(a)}\,(x - a)^{3/2} } \qquad x > a

\psi(x) \approx \frac{A_1}{\left[ V'(a)(a - x) \right]^{1/4}} \, e^{ i\frac{2}{3\hbar}\sqrt{2m V'(a)}\,(a - x)^{3/2} } + \frac{A_2}{\left[ V'(a)(a - x) \right]^{1/4}} \, e^{ -i\frac{2}{3\hbar}\sqrt{2m V'(a)}\,(a - x)^{3/2} } \qquad x < a

Consider tracing the solution for x in the complex plane: instead of going through the singular point at x = a, we trace a semicircle in the upper half plane. The path can be written as:

x - a \to \rho\, e^{i\phi} \qquad \rho = |x - a|
To go from the point (a + \rho) to (a - \rho) on the other side, it is necessary to take \phi = 0 \ldots \pi. Consider first the connection of the two x-dependent factors:

\left( \rho\, e^{i\phi} \right)^{3/2} \Big|_{\phi\to\pi} = \rho^{3/2}\, e^{\frac{3i\pi}{2}} = -i\, \rho^{3/2} \qquad \left( \rho\, e^{i\phi} \right)^{-1/4} \Big|_{\phi\to\pi} = \rho^{-1/4}\, e^{-\frac{i\pi}{4}}

From this analysis we find that the exponential switches from real to imaginary and the coefficient picks up a phase of -\frac{\pi}{4}. Repeating for the second term, if we consider moving in the lower half of the complex plane, the value of \phi changes from 0 \ldots -\pi. This gives the connection as:

\left( \rho\, e^{i\phi} \right)^{3/2} \Big|_{\phi\to-\pi} = \rho^{3/2}\, e^{-\frac{3i\pi}{2}} = i\, \rho^{3/2} \qquad \left( \rho\, e^{i\phi} \right)^{-1/4} \Big|_{\phi\to-\pi} = \rho^{-1/4}\, e^{\frac{i\pi}{4}}
This gives the final result (which matches the Airy function analysis) that:

A_1 = B\, e^{i\frac{\pi}{4}} \qquad A_2 = B\, e^{-i\frac{\pi}{4}}
Which gives the connection as:

\psi(x) \approx \frac{B}{\left[ V(x) - \varepsilon \right]^{1/4}} \, e^{ -\frac{1}{\hbar} \int_a^x \sqrt{2m\left(V(x') - \varepsilon\right)}\, dx' } \qquad x > a

\psi(x) \approx \frac{2B}{\left[ \varepsilon - V(x) \right]^{1/4}} \, \cos\left( \int_a^x \sqrt{\frac{2m}{\hbar^2}\left( \varepsilon - V(x') \right)}\, dx' + \frac{\pi}{4} \right) \qquad x < a
In the case that the turning point is on the reverse side (that is, V(x) > \varepsilon for x < a), the limits of integration must be reversed and the phase must be chosen appropriately. Denoting the result as above, this is:

\psi(x) \approx \frac{B}{\left[ V(x) - \varepsilon \right]^{1/4}} \, e^{ -\frac{1}{\hbar} \left| \int_a^x \sqrt{2m\left(V(x') - \varepsilon\right)}\, dx' \right| } \qquad \varepsilon < V(x)

\psi(x) \approx \frac{2B}{\left[ \varepsilon - V(x) \right]^{1/4}} \, \cos\left( -\int_a^x \sqrt{\frac{2m}{\hbar^2}\left( \varepsilon - V(x') \right)}\, dx' - \frac{\pi}{4} \right) \qquad \varepsilon > V(x)
This solution confirms the Airy function asymptotics results; however (at least in my opinion), the Airy function results are easier to use in working problems.
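The phase bookkeeping in the complex-plane trace — that (\rho e^{i\phi})^{3/2} \to \mp i\rho^{3/2} and (\rho e^{i\phi})^{-1/4} \to \rho^{-1/4} e^{\mp i\pi/4} as \phi \to \pm\pi — can be spot-checked with complex arithmetic. A minimal sketch (the continuous-branch evaluation below is the whole content of the trace):

```python
import cmath, math

rho = 2.0

def trace_power(power, phi_end):
    """Follow z = rho*e^{i phi} continuously from phi = 0 to phi_end and return z**power.
    On the continuous branch (no cut crossed) this is rho**power * e^{i*power*phi_end}."""
    return rho**power * cmath.exp(1j * power * phi_end)

# Upper half plane, phi: 0 -> pi
assert abs(trace_power(1.5, math.pi) - (-1j) * rho**1.5) < 1e-12
assert abs(trace_power(-0.25, math.pi) - rho**-0.25 * cmath.exp(-1j*math.pi/4)) < 1e-12

# Lower half plane, phi: 0 -> -pi
assert abs(trace_power(1.5, -math.pi) - (1j) * rho**1.5) < 1e-12
assert abs(trace_power(-0.25, -math.pi) - rho**-0.25 * cmath.exp(1j*math.pi/4)) < 1e-12
```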
32.3 Bohr-Sommerfeld Quantization

Consider some well in a region of a potential V(x) from x = b to x = a (a > b). What can be said about the bound state energies from the WKB Approximation? Since we require that the solutions match up on either side of the well, and the connection generates a phase of \frac{\pi}{2} at each turning point, it can be seen that the requirement

\frac{1}{\hbar} \int_b^a \sqrt{2m\left( \varepsilon - V(x') \right)}\, dx' + \frac{\pi}{2} = n\pi

will leave the wavefunction unchanged at the turning points. This gives a method of determining approximate energy levels inside the well.
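As a sanity check on this quantization rule (a numerical sketch, not from the notes; units \hbar = m = 1), applying it to the harmonic oscillator V(x) = \frac{1}{2}\omega^2 x^2 and solving for \varepsilon_n by bisection reproduces \varepsilon_n = (n - \frac{1}{2})\omega, which is the exact oscillator spectrum up to the relabeling n \to n + 1.

```python
import math

omega = 1.7                                 # illustrative frequency, hbar = m = 1
V = lambda x: 0.5 * omega**2 * x**2

def action(eps, n_pts=5000):
    """Integrate sqrt(2(eps - V(x))) between the classical turning points."""
    a = math.sqrt(2 * eps) / omega          # turning point of the oscillator
    h = 2 * a / n_pts
    s = 0.0
    for i in range(n_pts):
        x = -a + (i + 0.5) * h              # midpoint rule handles the endpoint square roots
        s += math.sqrt(max(2 * (eps - V(x)), 0.0)) * h
    return s

def wkb_energy(n):
    """Solve action(eps) + pi/2 = n*pi for eps by bisection on an assumed bracket."""
    lo, hi = 1e-6, 100.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if action(mid) + math.pi / 2 < n * math.pi:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for n in (1, 2, 3):
    assert abs(wkb_energy(n) - (n - 0.5) * omega) < 1e-3
```

For the oscillator the WKB levels are exact, a well-known coincidence; for a generic well they are only approximate.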
33 Penetration Through a Potential Barrier - Quantum Tunneling

Monday - 3/23

33.1 Example - Rigid Wall With A Finite Barrier

Consider a system with an infinite potential at x \leq 0 and a barrier of finite width and height centered at some location x = R. In the limit that the height of the barrier is very large, the behavior of the system becomes that of an infinite square well with an infinite series of bound states. In the limit of the width growing very large, the system becomes a finite well with several bound states and a continuum of free energies.

Consider another limit in which the barrier is very tall but the width approaches a very small value. In this system, the potential can be approximated by a delta function. The system is then described by:

\left[ -\frac{d^2}{dx^2} + \lambda\,\delta(x - R) - E \right] \psi(x) = 0 \qquad \text{with } \psi(0) = 0
The solutions away from the delta function are easily determined:

\psi(x) = c_1 \sin\left( \sqrt{E}\, x \right) \qquad 0 < x < R

\psi(x) = a\, e^{i\sqrt{E}\, x} + b\, e^{-i\sqrt{E}\, x} \qquad x > R
Here we've denoted the outgoing and incoming wave components beyond the potential barrier by a and b respectively. Fixing the coefficients is done by satisfying boundary conditions. The wavefunction must be continuous over all space, and the first derivative must be continuous except at locations which have an infinite potential (that is, x = 0 and x = R). Integrating Schrodinger's Equation over the delta function contribution generates this second condition:

\int_{R-\epsilon}^{R+\epsilon} \left[ -\frac{d^2}{dx^2} + \lambda\,\delta(x - R) - E \right] \psi(x)\, dx = 0

-\frac{d\psi}{dx}\Big|_{R-\epsilon}^{R+\epsilon} + \lambda\,\psi(R) - E \int_{R-\epsilon}^{R+\epsilon} \psi(x)\, dx = 0
Evaluating this result in the limit \epsilon \to 0 gives the relation for the discontinuity of the derivative. The two conditions are then:

\psi_<(R) = \psi_>(R) \qquad -\psi'\!\left( R + 0^+ \right) + \psi'\!\left( R - 0^+ \right) + \lambda\,\psi(R) = 0
Plugging in the above definitions for the solutions on either side of the barrier,

c_1 \sin\left( \sqrt{E}\, R \right) = a\, e^{i\sqrt{E}\, R} + b\, e^{-i\sqrt{E}\, R}

i\sqrt{E}\left( a\, e^{i\sqrt{E}\, R} - b\, e^{-i\sqrt{E}\, R} \right) - c_1 \sqrt{E} \cos\left( \sqrt{E}\, R \right) = \lambda\, c_1 \sin\left( \sqrt{E}\, R \right)
These equations can be written in matrix form as:

\begin{pmatrix} e^{i\sqrt{E} R} & e^{-i\sqrt{E} R} \\ i\sqrt{E}\, e^{i\sqrt{E} R} & -i\sqrt{E}\, e^{-i\sqrt{E} R} \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = c_1 \begin{pmatrix} \sin\left( \sqrt{E} R \right) \\ \sqrt{E} \cos\left( \sqrt{E} R \right) + \lambda \sin\left( \sqrt{E} R \right) \end{pmatrix}
The 2 \times 2 matrix is easily inverted to give the solution as:

\begin{pmatrix} a \\ b \end{pmatrix} = \frac{c_1}{2i\sqrt{E}} \begin{pmatrix} i\sqrt{E}\, e^{-i\sqrt{E} R} & e^{-i\sqrt{E} R} \\ i\sqrt{E}\, e^{i\sqrt{E} R} & -e^{i\sqrt{E} R} \end{pmatrix} \begin{pmatrix} \sin\left( \sqrt{E} R \right) \\ \sqrt{E} \cos\left( \sqrt{E} R \right) + \lambda \sin\left( \sqrt{E} R \right) \end{pmatrix}
With this solution, it is possible to graphically sweep through different energies and observe the behavior of the coefficients a and b. It is expected that at certain energies the values of the escaping and incoming components of the wavefunction will be very small compared to the magnitude between the rigid wall and the barrier. These energies correspond to semi-bound states or resonances of the potential. They exist as remnants of the behavior of the other limits of the square wells.

Consider if, instead of the above definition, one defined the solution in each region as:

\psi(x) = c_1 \sin(kx) \qquad 0 < x < R

\psi(x) = \beta \sin(kx + \delta) \qquad x > R

where we've denoted \sqrt{E} = k. With these definitions, the two conditions become:
c_1 \sin(kR) = \beta \sin(kR + \delta) \qquad c_1 k \cos(kR) + \lambda\, c_1 \sin(kR) = \beta\, k \cos(kR + \delta)

Dividing these two equations gives the phase of the solution outside the barrier:

\frac{\sin(kR)}{k\cos(kR) + \lambda\sin(kR)} = \frac{1}{k} \tan(kR + \delta) \qquad \Rightarrow \qquad \delta = \tan^{-1}\left( \frac{\sin(kR)}{\cos(kR) + \frac{\lambda}{k}\sin(kR)} \right) - kR
kR
Squaring both equations and dividing the second by k
2
gives the condition on the coecients c
1
and :
c
1
2
sin
2
(kR) +
c
1
2
k
2
(k cos (kR) +sin(kR))
2
=
2
_
sin
2
(kR +) + cos
2
(kR +)
_
131
c
1
2
sin
2
(kR) +
c
1
2
k
2
_
k
2
cos
2
(kR) + 2ksin(kR) cos (kR) +
2
sin
2
(kR)
_
=
2
c
1
2
sin
2
(kR) +c
1
2
cos
2
(kR) + 2c
1
2

k
sin(kR) cos (kR) +c
1
2

2
k
2
sin
2
(kR) =
2
Solving for the amplitude outside the barrier,
= c
1
_
1 + 2

k
sin(kR) cos (kR) +

2
k
2
sin
2
(kR)
This can easily be plotted for various values of k, and by varying \lambda and R it is possible to see how varying strengths and locations of the barrier change the number and strength of the semi-bound states. Consider the limiting case \lambda \to \infty; in this limit the above solution becomes:

\beta \approx c_1\, \frac{\lambda}{k} \sin(kR)

Thus for a nearly infinite barrier at x = R, the behavior of an infinite square well is nearly recovered.
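This limit is easy to see numerically. The sketch below (pure Python; the values of \lambda and R are illustrative) sweeps k through the amplitude ratio \beta/c_1: away from kR = n\pi the exterior amplitude dwarfs the interior one, while exactly at kR = n\pi the ratio drops to order one — these are the semi-bound states.

```python
import math

lam, R = 200.0, 1.0            # illustrative barrier strength and position

def ratio(k):
    """beta/c1: exterior amplitude relative to the amplitude inside the barrier."""
    s, c = math.sin(k * R), math.cos(k * R)
    return math.sqrt(1 + 2 * (lam / k) * s * c + (lam / k)**2 * s**2)

# On resonance (kR = n*pi) the interior amplitude is comparable to the exterior one.
assert abs(ratio(math.pi / R) - 1.0) < 1e-9

# Off resonance the interior amplitude is suppressed by ~ lam/k.
assert ratio(math.pi / (2 * R)) > 50.0
```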
Wednesday - 3/25
Continuing on with this same tunneling problem, consider the result if the potential barrier has infinite strength, that is, if \lambda \to \infty. In this case, the solution is singular and becomes a decoupled set of bound states and continuum states.

\psi_n(x) = \begin{cases} \sqrt{\frac{2}{R}} \sin\left( \frac{n\pi x}{R} \right) & 0 < x < R \\ 0 & R < x \end{cases} \qquad \psi_k(x) = \begin{cases} 0 & 0 < x < R \\ \sqrt{\frac{2}{\pi}} \sin\left( k(x - R) \right) & R < x \end{cases}

where we've determined the normalization of the continuous spectrum by referencing back to the Sine Transform's coefficient and noting the symmetry of this eigenfunction and the sine transform.

Note the behavior of this limiting case. If some initial state \psi_0(x) is non-zero only in one of the two regions, the wave function will never enter the second region. Further, whatever ratio of the state is initially in each region will remain the same. (That is, if the initial state has equal probability of being in either region, then after the system evolves the probability of the particle being in the region 0 < x < R is always \frac{1}{2}.)

Consider the limiting case of \lambda being very large, but not infinite. In this case the regions influence each other and overlap to a small extent. Because the wavefunction is able to penetrate the barrier for all energies, there is no longer any discrete spectrum; however, the behavior and physics of the system should still contain certain characteristics of the decoupled limit. Consider the solutions in either region:

\psi(x) = \begin{cases} A \sin(kx) & 0 < x < R \\ B \sin\left( k(x - R) + \delta \right) & R < x \end{cases}
As before, it is possible to determine the coefficients and phase by applying the two conditions:

\psi_<(R) = \psi_>(R) \qquad -\psi'\!\left( R + 0^+ \right) + \psi'\!\left( R - 0^+ \right) + \lambda\,\psi(R) = 0

B \sin\delta = A \sin(kR) \qquad B\, k \cos\delta = A\left( k\cos(kR) + \lambda\sin(kR) \right)

And again, we find the relations to be:

\tan\delta = \frac{\sin(kR)}{\cos(kR) + \frac{\lambda}{k}\sin(kR)} \qquad B^2 = A^2 \left( 1 + 2\frac{\lambda}{k}\sin(kR)\cos(kR) + \frac{\lambda^2}{k^2}\sin^2(kR) \right)
The above completely determines the solution except for an overall normalization. However, from the same arguments made about the coefficient in the limiting case \lambda \to \infty, we can make the claim that B = \sqrt{\frac{2}{\pi}} and therefore:

\delta = \arctan\left( \frac{\sin(kR)}{\cos(kR) + \frac{\lambda}{k}\sin(kR)} \right) \qquad A^2 = \frac{2}{\pi} \, \frac{1}{1 + 2\frac{\lambda}{k}\sin(kR)\cos(kR) + \frac{\lambda^2}{k^2}\sin^2(kR)}
From these definitions it is easy to see that in the limit \lambda \to \infty, the value of the phase approaches 0 unless \sin(kR) = 0. In that case the phase is still zero; however, some other interesting results occur. Consider first if \lambda is very large and k is such that \sin(kR) is not small (that is, kR is not close to any value of n\pi). In this case, we can expand the above results and find the small corrections:

\tan\delta \approx \frac{k}{\lambda} \left( 1 - \frac{k}{\lambda}\cot(kR) + O\!\left( \frac{k^2}{\lambda^2} \right) \right)

A^2 \approx \frac{2}{\pi} \, \frac{k^2}{\lambda^2 \sin^2(kR)} \left( 1 - 2\frac{k}{\lambda}\cot(kR) + O\!\left( \frac{k^2}{\lambda^2} \right) \right)
Alternately, in the case that \lambda is very large but k is such that \sin(kR) is small (that is, kR is close to some integer multiple of \pi), then it is possible to assume:

\sin(kR) \sim O\!\left( \frac{1}{\lambda} \right) \qquad \Rightarrow \qquad \delta \approx \frac{\sin(kR)}{\cos(kR) + \frac{\lambda}{k}\sin(kR)} \sim O\!\left( \frac{1}{\lambda} \right)

So the correction to the phase is still small; however, consider the correction to the coefficient A. To determine this, let kR = n\pi + \Delta where \Delta is some small correction term. Then, what are the roots of the equation 1 + 2\frac{\lambda}{k}\sin(kR)\cos(kR) + \frac{\lambda^2}{k^2}\sin^2(kR) = 0 when \lambda is very large? (Note that in the limit \lambda \to \infty the roots converge to kR = n\pi.)
If this function is plotted for finite values of \lambda, one finds that there are no real roots. This implies that the roots have moved into the complex plane. Consider if \Delta = \Delta_n \pm i\gamma_n are the actual roots in the complex plane. Then we can write that:

1 + 2\frac{\lambda}{k}\sin(kR)\cos(kR) + \frac{\lambda^2}{k^2}\sin^2(kR) = c\,(\Delta - \Delta_n - i\gamma_n)(\Delta - \Delta_n + i\gamma_n) = c\left( (\Delta - \Delta_n)^2 + \gamma_n^2 \right)

Using this information, the coefficient in this limit can be written in terms of k as:

A^2 \approx \frac{2}{\pi} \, \frac{1}{c} \, \frac{1}{(kR - n\pi - \Delta_n)^2 + \gamma_n^2} = \frac{2}{\pi} \, \frac{1}{c R^2} \, \frac{1}{\left( k - \kappa_n \right)^2 + \frac{\gamma_n^2}{R^2}} \qquad \kappa_n \equiv \frac{n\pi}{R} + \frac{\Delta_n}{R}
Monday - 3/30
33.1.1 Time Evolution

Given some initial form \psi(x, t = 0) = \psi_0, consider how the wavefunction will evolve. In general the evolution is given by:

\psi(x, t) = e^{-iHt}\, \psi_0

Consider the case that \psi_0 is one of the bound eigenfunctions of the \lambda \to \infty limit for the problem. In this case we can write the initial state:

\psi_0 = \psi_n = \begin{cases} \sqrt{\frac{2}{R}} \sin\left( \frac{n\pi}{R} x \right) & 0 < x < R \\ 0 & R < x \end{cases}
For reference, we'll denote the continuum eigenfunctions from this limiting case as

\psi^{(0)}(k, x) = \begin{cases} 0 & 0 < x < R \\ \sqrt{\frac{2}{\pi}} \sin\left( k(x - R) \right) & R < x \end{cases}
Consider expanding \psi(x, t) in terms of these eigenfunctions,

\psi(x, t) = \sum_{j=1}^{\infty} c_j(t)\, \psi_j(x) + \int_0^{\infty} c(k, t)\, \psi^{(0)}(k, x)\, dk

For the initial state considered here,

c_j(0) = \delta_{n,j} \qquad c(k, 0) = 0
From this form, we would expect the evolution to include an exponential decay of the bound state, with certain values of c(k, t) then beginning to become significantly larger than others. We do not expect any other c_j's to become significant (that is, other bound states won't show up), and we expect certain energies of the continuum to be dominant. Let's consider this evolution in the limit of \lambda very large but not infinite.

The evolution of the coefficient is described by c_j(t) = \langle \psi_j | e^{-iHt} | \psi_0 \rangle = \langle \psi_j | e^{-iHt} | \psi_n \rangle. This can be evaluated by expanding it in terms of the \psi_k(x) eigenstates of the actual potential,

c_j(t) = \int_0^{\infty} \langle \psi_j | \psi_k \rangle \langle \psi_k | \psi_n \rangle\, e^{-ik^2 t}\, dk
Noting that \psi_j is nonzero only on the segment 0 < x < R, the braket terms can be determined by plugging in the definition of \psi_k on that segment,

c_j(t) = \int_0^{\infty} A^2(k) \left( \int_0^R \psi_j(x) \sin(kx)\, dx \right) \left( \int_0^R \psi_n(y) \sin(ky)\, dy \right) e^{-ik^2 t}\, dk
At this point we have made no assumptions about the problem, and this relation is exact. Consider, though, the form of the coefficient A^2(k) when \lambda is large: we have shown that it is sharply peaked about the values k = \frac{n\pi}{R}. Because of this, we can approximate the integral of some function against A^2(k) as:

\int_0^{\infty} f(k)\, A^2(k)\, dk \approx \sum_{n'} f\!\left( \frac{n'\pi}{R} \right) \int_{\frac{n'\pi}{R} - \eta_{n'}}^{\frac{n'\pi}{R} + \eta_{n'}} A^2(k)\, dk

where \eta_{n'} is large enough to just cover the contribution of the peak at k = \frac{n'\pi}{R}. Using this definition, the above can be written as:
c_j(t) \approx \sum_{n'} \left( \int_0^R \psi_j(x) \sin\left( \frac{n'\pi x}{R} \right) dx \right)\left( \int_0^R \psi_n(y) \sin\left( \frac{n'\pi y}{R} \right) dy \right) \int_{\frac{n'\pi}{R} - \eta_{n'}}^{\frac{n'\pi}{R} + \eta_{n'}} A^2(k)\, e^{-ik^2 t}\, dk

Each of the integrals over x and y gives a factor of \sqrt{\frac{R}{2}} times a Kronecker delta, so this becomes:

c_j(t) \approx \frac{R}{2}\, \delta_{j,n} \int_{\frac{n\pi}{R} - \eta_n}^{\frac{n\pi}{R} + \eta_n} A^2(k)\, e^{-ik^2 t}\, dk
We saw last time that in this limit, A^2(k) can be approximated by a Lorentzian form (dropping overall constants):

A^2(k) \approx \frac{1}{(k - \kappa_n)^2 + \gamma_n^2}

So, the coefficient is then written as:

c_j(t) \approx \frac{R}{2}\, \delta_{j,n} \int_{\frac{n\pi}{R} - \eta_n}^{\frac{n\pi}{R} + \eta_n} \frac{e^{-ik^2 t}}{(k - \kappa_n)^2 + \gamma_n^2}\, dk
Due to the asymptotic behavior we've seen previously, it is possible to extend the limits of integration in this relation to cover all values of k; doing so and factoring the denominator gives:

c_j(t) \approx \frac{R}{2}\, \delta_{j,n} \int_{-\infty}^{\infty} \frac{e^{-ik^2 t}}{(k - \kappa_n - i\gamma_n)(k - \kappa_n + i\gamma_n)}\, dk
This integral can be evaluated using the definition of the Principal Value,

\frac{1}{k - \kappa_n \mp i\gamma_n} \;\to\; P\frac{1}{k - \kappa_n} \pm i\pi\,\delta(k - \kappa_n)

Alternately, one could deform the contour to avoid the poles and, in crossing over the pole, add the contribution there. Then, neglecting the contribution along the rest of the contour (since it is negligible compared to the value at the pole), the result is the same as the above method:

c_j(t) \approx \frac{R}{2}\,\delta_{j,n}\, \frac{\pi}{\gamma_n}\, e^{-i\left( \kappa_n^2 - \gamma_n^2 - 2i\kappa_n\gamma_n \right)t} = \frac{\pi R}{2\gamma_n}\, \delta_{j,n}\, e^{-i\left( \kappa_n^2 - \gamma_n^2 \right)t}\, e^{-2\kappa_n\gamma_n t}

Note that this result decays as we would expect, with a decay rate proportional to 2\kappa_n\gamma_n (that is, a lifetime proportional to \frac{1}{2\kappa_n\gamma_n}). This gives the lifetime of the bound state \psi_n in the limit of \lambda large but not infinite.
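The pole integral used here can be isolated and checked. For \omega > 0 the exact residue result \int e^{i\omega k}/\left((k-\kappa)^2+\gamma^2\right) dk = \frac{\pi}{\gamma} e^{i\omega\kappa} e^{-\gamma\omega} follows from closing the contour in the upper half plane; the sketch below (illustrative parameter values, not from the notes) confirms it against a brute-force sum.

```python
import cmath, math

kappa, gamma, omega = 5.0, 0.2, 1.5    # illustrative pole position, width, and phase rate

# Brute force: integrate e^{i omega k} / ((k - kappa)^2 + gamma^2) over a wide window.
L, dk = 300.0, 5e-3
n = int(2 * L / dk)
s = 0j
for i in range(n):
    k = kappa - L + (i + 0.5) * dk
    s += cmath.exp(1j * omega * k) / ((k - kappa)**2 + gamma**2) * dk

# Residue theorem: the pole at k = kappa + i*gamma gives (pi/gamma) e^{i omega kappa - gamma omega}.
exact = (math.pi / gamma) * cmath.exp(1j * omega * kappa - gamma * omega)
assert abs(s - exact) < 0.05 * abs(exact)
```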
Consider the rest of the wave function. What does the escaping wavefunction look like and how does it evolve? The above process can be repeated with several changes. Recall that we've expanded our wavefunction as

\psi(x, t) = \sum_j c_j(t)\, \psi_j(x) + \int_0^{\infty} c(k, t)\, \psi^{(0)}(k, x)\, dk

From this definition we can immediately write that the coefficients of the escaped wavefunction are:

c(k, t) = \int_R^{\infty} dx \int_0^R dy\; \psi^{(0)}(k, x)\, \langle x | e^{-iHt} | y \rangle\, \psi_n(y)

= \int_R^{\infty} dx \int_0^R dy \int_0^{\infty} dk'\; \psi^{(0)}(k, x)\, \psi_{k'}(x)\, e^{-ik'^2 t}\, \psi_{k'}(y)\, \psi_n(y)
Wednesday - 4/1
An aside on complex analysis - The Principal Value

Last class we looked at a method of analysis using the Principal Value of an integral. Let's clarify what this analysis involves. Consider the integral:

\int_{-\infty}^{\infty} \frac{f(x)}{x}\, dx

where we assume that f(x) is continuous and f(0) \neq 0. If we consider breaking the integral into pieces around the singularity at x = 0, we get:

\int_{-\infty}^{\infty} \frac{f(x)}{x}\, dx = \int_{-\infty}^{-\epsilon_1} \frac{f(x)}{x}\, dx + \int_{-\epsilon_1}^{\epsilon_2} \frac{f(x)}{x}\, dx + \int_{\epsilon_2}^{\infty} \frac{f(x)}{x}\, dx

The integrals over the non-singular regions can be computed since it is assumed that f(x) is analytic. The remaining integral can be examined by taking \epsilon_1, \epsilon_2 small and assuming they go to zero. This leaves:

\int_{-\infty}^{\infty} \frac{f(x)}{x}\, dx = \left( \int_{-\infty}^{-\epsilon_1} + \int_{\epsilon_2}^{\infty} \right) \frac{f(x)}{x}\, dx + f(0) \int_{-\epsilon_1}^{\epsilon_2} \frac{dx}{x}

The last term is then,

\int_{-\epsilon_1}^{\epsilon_2} \frac{dx}{x} = \ln(x) \Big|_{-\epsilon_1}^{\epsilon_2}

However, this limit is not well defined for \epsilon_1 or \epsilon_2 going to zero, since the natural logarithm blows up there. What can be done to evaluate this integral is to move the contour away from the singularity at x = 0 by instead integrating over a semicircle of radius \epsilon. On such a contour, x \to \epsilon\, e^{i\phi} and \phi ranges from \pi to 0. The integral then becomes:

\left( \int_{-\infty}^{-\epsilon} + \int_{\epsilon}^{\infty} \right) \frac{f(x)}{x}\, dx + \int_{\pi}^{0} \frac{f(\epsilon e^{i\phi})}{\epsilon e^{i\phi}}\, i\epsilon e^{i\phi}\, d\phi

Then, if we cancel the factors in the last integral and let the value of \epsilon go to zero,

\int_{-\infty}^{\infty} \frac{f(x)}{x}\, dx = \lim_{\epsilon\to 0} \left( \int_{-\infty}^{-\epsilon} + \int_{\epsilon}^{\infty} \right) \frac{f(x)}{x}\, dx + i f(0) \int_{\pi}^{0} d\phi

Changing notation, the integral over the non-singular regions can be called the principal value, and the above becomes:

\int_{-\infty}^{\infty} \frac{f(x)}{x}\, dx = P\!\int_{-\infty}^{\infty} \frac{f(x)}{x}\, dx - i\pi f(0)

But this must hold for any function f(x); therefore we've determined a method to write \frac{1}{x} as a distribution:

\frac{1}{x} = P\frac{1}{x} - i\pi\,\delta(x)
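This distributional identity can be tested numerically: for a smooth f, \int f(x)/(x + i\epsilon)\, dx should approach P\!\int f(x)/x\, dx - i\pi f(0) as \epsilon \to 0. A minimal sketch with the illustrative choice f(x) = e^{-x^2}, whose principal value integral vanishes by symmetry:

```python
import math

f = lambda x: math.exp(-x**2)

def integral(eps, L=10.0, dx=1e-3):
    """Brute-force integral of f(x)/(x + i*eps) over (-L, L), midpoint rule."""
    n = int(2 * L / dx)
    s = 0j
    for i in range(n):
        x = -L + (i + 0.5) * dx
        s += f(x) / (x + 1j * eps) * dx
    return s

# P Integral of f(x)/x dx = 0 by symmetry, so the limit should be -i*pi*f(0) = -i*pi.
for eps in (1e-2, 1e-3):
    val = integral(eps)
    assert abs(val.real) < 1e-6              # principal value part vanishes
    assert abs(val.imag + math.pi) < 0.05    # delta-function part appears

# Smaller eps gets closer to -i*pi:
assert abs(integral(1e-3).imag + math.pi) < abs(integral(1e-2).imag + math.pi)
```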
34 WKB Approximation for a Finite Potential Barrier

**See the final exam problem for this example**
Wednesday - 4/15
Part VII

SCATTERING THEORY

35 Potential Scattering

Consider a potential localized to a region in space such that:

V(\vec{x}) = \begin{cases} f(r, \theta, \varphi) & r \leq R \\ 0 & \text{else} \end{cases}

For simplicity, we'll consider only elastic scattering at low energies. The actual interaction of an incoming wave packet and the potential can be complicated and in some cases uninteresting. Consider instead using the time independent Schrodinger equation to determine eigenfunctions and eigenenergies of the system far from the potential. That is, we can write the eigenfunction expansion far from the potential source as a linear combination of incoming and scattered states,

\psi_k(\vec{x}) = e^{i\vec{k}\cdot\vec{x}} + \psi_{Sc}(\vec{k}, \vec{x}) \qquad \psi_{Sc}(\vec{k}, \vec{x}) = \sum_{l,m} a_{l,m}(\vec{k})\, Y_{l,m}(\hat{r})\, h_l^{(2)}(|k| r)

where \hat{r} denotes only the directionality of \vec{x} (that is, \theta and \varphi), and h_l^{(2)} is the spherically outgoing Hankel function. This solution follows because at large distances the Schrodinger equation can be written with the potential set to zero:

\left[ -\frac{\hbar^2}{2m}\nabla^2 + V(\vec{x}) - \varepsilon \right]\psi \quad \xrightarrow{\;|\vec{x}| \gg R\;} \quad \left[ -\frac{\hbar^2}{2m}\nabla^2 - \varepsilon \right]\psi
For a potential which is non-zero at large distances (such as a Coulomb potential), the above relations would have to be replaced with the incoming plane wave and outgoing spherical distribution from the Hydrogen atom solutions. Since we are considering the large radius asymptotics, we can replace h_l^{(2)} with its asymptotic form to give:

\psi_k(\vec{x}) \approx e^{i\vec{k}\cdot\vec{x}} + \sum_{l,m} a_{l,m}(\vec{k})\, Y_{l,m}(\hat{r})\, \frac{e^{ikr}}{r} = e^{i\vec{k}\cdot\vec{x}} + f_k(\vec{k}, \hat{r})\, \frac{e^{ikr}}{r}
As it is useful to know in this section, the expansion of a plane wave propagating along the z axis is:

e^{ikz} = e^{ikr\cos\theta} = \sum_{m=0}^{\infty} i^m (2m + 1)\, j_m(kr)\, P_m(\cos\theta)

In general, the propagation in any direction is given by:

e^{i\vec{k}\cdot\vec{x}} = 4\pi \sum_{l,m} i^l\, j_l(kr)\, Y_{l,m}^*(\hat{k})\, Y_{l,m}(\hat{r})
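The z-axis expansion can be verified directly. The sketch below (pure Python; kr and the truncation order are illustrative) builds j_l by upward recurrence from the closed forms of j_0 and j_1, builds P_l by its standard recurrence, and compares the truncated sum against the exponential.

```python
import cmath, math

def spherical_jl(lmax, x):
    """Spherical Bessel j_l(x) for l = 0..lmax, by upward recurrence (fine for lmax ~ 2x)."""
    j = [math.sin(x) / x, math.sin(x) / x**2 - math.cos(x) / x]
    for l in range(1, lmax):
        j.append((2 * l + 1) / x * j[l] - j[l - 1])
    return j

def legendre_pl(lmax, t):
    """Legendre P_l(t) for l = 0..lmax, by the standard three-term recurrence."""
    p = [1.0, t]
    for l in range(1, lmax):
        p.append(((2 * l + 1) * t * p[l] - l * p[l - 1]) / (l + 1))
    return p

# Check e^{i kr cos(theta)} = sum_l i^l (2l+1) j_l(kr) P_l(cos(theta))
kr, lmax = 6.0, 20
for ct in (-0.8, 0.3, 0.9):                 # a few values of cos(theta)
    j, p = spherical_jl(lmax, kr), legendre_pl(lmax, ct)
    series = sum(1j**l * (2 * l + 1) * j[l] * p[l] for l in range(lmax + 1))
    assert abs(series - cmath.exp(1j * kr * ct)) < 1e-6
```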
35.0.2 Example - Rigid Spherical Potential

Consider a potential of the form:

V(\vec{x}) = \begin{cases} \infty & |\vec{x}| \leq R \\ 0 & \text{else} \end{cases}

This potential requires that the wavefunction go to zero on the surface of the sphere. Plugging in the appropriate solution from above:

\left[ e^{i\vec{k}\cdot\vec{x}} + \psi_{Sc}(\vec{k}, \vec{x}) \right] \Big|_{|\vec{x}| = R} = 0

this requires that:

e^{i\vec{k}\cdot\vec{x}} \Big|_{|\vec{x}| = R} = -\psi_{Sc}(\vec{k}, \vec{x}) \Big|_{|\vec{x}| = R} = -\sum_{l,m} a_{l,m}(\vec{k})\, Y_{l,m}(\hat{r})\, h_l^{(2)}(kR)

Looking up the expansion for e^{i\vec{k}\cdot\vec{x}} in spherical harmonics,

e^{i\vec{k}\cdot\vec{x}} = 4\pi \sum_{l,m} i^l\, j_l(kr)\, Y_{l,m}^*(\hat{k})\, Y_{l,m}(\hat{r})

Combining these and solving for a_{l,m} gives the amplitude scattered in the direction Y_{l,m}(\hat{r}),

a_{l,m} = -4\pi\, i^l\, \frac{j_l(kR)}{h_l^{(2)}(kR)}\, Y_{l,m}^*(\hat{k})
35.1 Constant Energy Wave Packet Scattering

Consider a wave packet of constant energy \frac{\hbar^2}{2m}|k|^2. We can expand the incoming wavefunction as:

\psi_{in} = \int e^{i\vec{k}\cdot\vec{x}}\, F(\vec{k})\, d^2\hat{k}

Further, we can expand the entire wavefunction at large ranges in this manner to give:

\int \psi_k\, F(\vec{k})\, d^2\hat{k} = \int \left[ e^{i\vec{k}\cdot\vec{x}} + \sum_{l,m} a_{l,m}\, Y_{l,m}(\hat{r})\, \frac{e^{ikr}}{r} \right] F(\vec{k})\, d^2\hat{k}
Since we are working in the case r \to \infty, we can simplify the first term in the integral by noting that for large r it oscillates rapidly as \cos\theta_{kr} ranges over -1 \ldots +1. Writing this term in spherical coordinates about \hat{r},

\int e^{i\vec{k}\cdot\vec{x}}\, F(\vec{k})\, d^2\hat{k} = \int\!\!\int e^{ikr\cos\theta_{kr}}\, F(\vec{k})\, \sin\theta_{kr}\, d\theta_{kr}\, d\varphi_{kr} = \int_0^{2\pi} \int_{-1}^{+1} e^{ikr\mu}\, F(\vec{k})\, d\mu\, d\varphi_{kr}

We can integrate this by parts and keep only terms up to order \frac{1}{r}; this leaves:

\int e^{i\vec{k}\cdot\vec{x}}\, F(\vec{k})\, d^2\hat{k} = 2\pi\, \frac{1}{ikr}\, F(\vec{k})\, e^{ikr\mu} \Big|_{\mu = -1}^{+1} + O\!\left( \frac{1}{r^2} \right) = -\frac{2\pi i}{kr} \left( e^{ikr}\, F(\hat{r}) - e^{-ikr}\, F(-\hat{r}) \right) + O\!\left( \frac{1}{r^2} \right)
So finally, we can write the wave function in terms of the incoming wave packet and outgoing waves as:

\psi = -\frac{2\pi i}{kr} \left( e^{ikr}\, F(\hat{r}) - e^{-ikr}\, F(-\hat{r}) \right) + \frac{e^{ikr}}{r} \int F(\vec{k})\, f_k(\vec{k}, \hat{r})\, d^2\hat{k}

This result can be written alternately as:

\psi = \frac{2\pi i}{kr} \left( e^{-ikr}\, F(-\hat{r}) - e^{ikr}\, \left( \hat{S}F \right)(\hat{r}) \right) \qquad \left( \hat{S}F \right)(\hat{r}) = F(\hat{r}) + \frac{ik}{2\pi} \int f_k(\vec{k}, \hat{r})\, F(\vec{k})\, d^2\hat{k}

where \hat{S} is the scattering matrix for the potential.
Monday - 4/20
36 Scattering Theory - A More General Approach

Consider approaching the scattering problem as a perturbation approximation. If we can write the Hamiltonian of the system as:

H = H_0 + V(\vec{x})

where H_0 is known and understood (that is, its eigenfunctions and eigenvalues are determined), then we can use Green's Function methods to determine the overall wavefunction and then get corrections to the eigenfunctions of H_0 from the potential. Consider the problem given by:

[H - E]\,\psi = 0 \qquad \text{with} \qquad [H_0 - E]\,\psi_E^{(0)} = 0

Given the above definition for the Hamiltonian, this becomes:

[H_0 + V(\vec{x}) - E]\,\psi = 0 \qquad \Rightarrow \qquad [H_0 - E]\,\psi = -V(\vec{x})\,\psi

This is a more general formulation of the scattering problem. From this result, we can reformulate the solution as:

\psi = \psi_E^{(0)} - (H_0 - E)^{-1}\, V\, \psi

It is easily seen that this is the same as the above statement; one can act with (H_0 - E) on both sides of the equation to obtain the expected result. In integral form, this can then be written:

\psi(\vec{x}) = \psi_E^{(0)} - \int \langle \vec{x} | (H_0 - E)^{-1} | \vec{y} \rangle\, V(\vec{y})\, \psi(\vec{y})\, d^3y

This is known as the Integral Scattering Equation and is useful for determining the approximate solution. However, note that the integral depends on \psi(\vec{y}), which makes it difficult to solve completely.
One potential issue with this solution exists if E is in the continuous spectrum of H_0 (in the case that E is a discrete eigenvalue, this formulation is invalid entirely, so we consider only continuous spectra). Since it is possible (and likely) that E will be somewhere in the continuous spectrum, there will exist two Green's Functions for the operator. These can be written as:

G^{(\pm)}(\vec{x}, \vec{y}) = \langle \vec{x} | \left( H_0 - E \mp i0^+ \right)^{-1} | \vec{y} \rangle
From this one can determine approximations starting with the original Hamiltonian. Consider:

\psi \approx \psi^{(0)} \qquad \text{zeroth order}

\psi \approx \psi^{(0)} - \left( H_0 - E - i0^+ \right)^{-1} V\, \psi^{(0)} \qquad \text{first order}

\psi \approx \psi^{(0)} - \left( H_0 - E - i0^+ \right)^{-1} V\, \psi^{(0)} + \left( H_0 - E - i0^+ \right)^{-1} V \left( H_0 - E - i0^+ \right)^{-1} V\, \psi^{(0)} \qquad \text{second order}

The full extension of this approximation is known as the Born Series and can be written in the form:

\psi = \sum_{j=0}^{\infty} \left[ -\left( H_0 - E - i0^+ \right)^{-1} V \right]^j \psi^{(0)}
If this series converges for the given potential and H_0 eigenfunctions, then the wavefunction is known. In the case we are considering, H_0 = -\nabla^2, the Green's Functions are known:

\langle \vec{x} | \left( H_0 - E \mp i0^+ \right)^{-1} | \vec{y} \rangle = G^{(\pm)}(\vec{x}, \vec{y}) = \frac{1}{4\pi}\, \frac{e^{\pm ik|\vec{x} - \vec{y}|}}{|\vec{x} - \vec{y}|}

where we've replaced E = k^2. Using this Green's Function, let's consider an actual problem.
36.1 Scattering from a Localized Potential
The Born Formula
Consider some potential $V(\vec{x})$ which is localized in space. In such a case, we can assume that outside of some radius $R$, the potential is always zero. The exact solution can be written using the Integral Scattering Equation as:
\[ \psi(\vec{x}) = \psi_E^{(0)} + \frac{1}{4\pi}\int \frac{e^{ik|\vec{x}-\vec{y}|}}{|\vec{x}-\vec{y}|} V(\vec{y})\,\psi(\vec{y})\, d^3y \]
In order to get something useful out of this, consider the case that $|\vec{x}| \gg R$. This means that in the integral term, $|\vec{x}|$ is always much greater than $|\vec{y}|$. Because of this, we can expand the separation as:
\[ |\vec{x}-\vec{y}| = |\vec{x}|\sqrt{1 - 2\frac{\vec{x}\cdot\vec{y}}{|\vec{x}|^2} + \frac{|\vec{y}|^2}{|\vec{x}|^2}} \approx |\vec{x}|\left[ 1 - \frac{\vec{x}\cdot\vec{y}}{|\vec{x}|^2} + \frac{1}{2}\frac{|\vec{y}|^2}{|\vec{x}|^2} + \ldots \right] \]
\[ |\vec{x}-\vec{y}| \approx x - \hat{x}\cdot\vec{y} + \ldots \]
Plugging these in and keeping only the necessary terms,
\[ \psi(\vec{x}) = \psi_E^{(0)} + \frac{1}{4\pi}\int \frac{e^{ik(x - \hat{x}\cdot\vec{y} + \ldots)}}{x + \ldots} V(\vec{y})\,\psi(\vec{y})\, d^3y \]
Because we only require the outward going solution, we pick the solution with the positive exponential in $x$. Thus, for $|\vec{x}| \gg R$ we have the approximation:
\[ \psi(\vec{x}) \approx \psi_{\vec{k}}^{(0)}(\vec{x}) + \frac{e^{ikx}}{x}\left[ \frac{1}{4\pi}\int e^{-ik\hat{x}\cdot\vec{y}}\, V(\vec{y})\,\psi(\vec{y})\, d^3y \right] \]
The bracketed term can be represented by some function $f(k,\hat{x})$. Let's consider the first order solution,
\[ f^{(0)}(k,\hat{x}) = \frac{1}{4\pi}\int e^{-ik\hat{x}\cdot\vec{y}}\, V(\vec{y})\,\psi_{\vec{k}}^{(0)}(\vec{y})\, d^3y = \frac{1}{4\pi}\int e^{-ik\hat{x}\cdot\vec{y}}\, e^{i\vec{k}\cdot\vec{y}}\, V(\vec{y})\, d^3y \]
\[ f^{(0)}(k,\hat{x}) = \frac{1}{4\pi}\int e^{i[\vec{k} - k\hat{x}]\cdot\vec{y}}\, V(\vec{y})\, d^3y \]
This first order approximation of the scattered wavefunction for a given potential $V(\vec{x})$ is known as Born's Formula. Using the Born Series from above, the higher order approximations can be written as:
\[ f^{(n)}(k,\hat{x}) = \frac{1}{4\pi}\int e^{-ik\hat{x}\cdot\vec{y}}\, V(\vec{y}) \left[ \left( -\left[ H_0 - k^2 - i0^+ \right]^{-1} V \right)^n \psi^{(0)} \right](\vec{y})\, d^3y \]
Alternately, we can simplify this result for a special case. Consider if the potential is spherically symmetric (that is, $V(\vec{x}) = V(r)$). In this case, the above result will simplify to:
\[ f^{(0)}(k,\hat{x}) = \frac{1}{4\pi}\int\!\!\int\!\!\int e^{i|\vec{k} - k\hat{x}| r\cos\theta}\, V(r)\, r^2\, dr\, d(\cos\theta)\, d\varphi = \frac{1}{4\pi}(2\pi)\frac{2}{|\vec{k} - k\hat{x}|}\int_0^\infty r V(r)\sin\left( |\vec{k} - k\hat{x}|\, r \right) dr \]
\[ f^{(0)}(k,\hat{x}) = \frac{1}{|\vec{k} - k\hat{x}|}\int_0^\infty r V(r)\sin\left( |\vec{k} - k\hat{x}|\, r \right) dr \]
And since $\vec{k}$ and $k\hat{x}$ denote the incident and outgoing directions respectively, it is simple to show that in this case:
\[ |\vec{k} - k\hat{x}| = 2k\sin\left( \frac{\theta_{sc}}{2} \right) \]
where $\theta_{sc}$ is the scattering angle. Plugging this in, we find that:
\[ f^{(0)}(k,\theta_{sc}) = \frac{1}{2k\sin\left( \frac{\theta_{sc}}{2} \right)}\int_0^\infty r V(r)\sin\left( \left[ 2k\sin\left( \frac{\theta_{sc}}{2} \right) \right] r \right) dr \]
36.1.1 Example - Gaussian Potential
Consider a potential of the form:
\[ V(\vec{x}) = \lambda e^{-|\vec{x}|^2} \]
The zeroth order scattering is described by:
\[ f^{(0)}(k,\hat{x}) = \frac{\lambda}{4\pi}\int e^{i[\vec{k} - k\hat{x}]\cdot\vec{y}}\, e^{-|\vec{y}|^2}\, d^3y \]
Completing the square and integrating the Gaussian results in:
\[ f^{(0)}(k,\hat{x}) = \frac{\lambda}{4\pi}\left( \sqrt{\pi} \right)^3 e^{-\frac{1}{4}|\vec{k} - k\hat{x}|^2} \]
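The Gaussian integral above can be sanity-checked numerically (a sketch, not part of the notes): the three-dimensional transform reduces to $\int e^{i\vec{q}\cdot\vec{y}} e^{-|\vec{y}|^2} d^3y = \frac{4\pi}{q}\int_0^\infty r\sin(qr)e^{-r^2}dr = \pi^{3/2}e^{-q^2/4}$, which a simple Simpson rule confirms. The cutoff and grid size are arbitrary choices.

```python
import math

def gauss_ft_numeric(q, rmax=8.0, n=4000):
    """(4*pi/q) * integral_0^rmax of r*sin(q*r)*exp(-r^2) dr, via Simpson's rule."""
    h = rmax / n
    f = lambda r: r * math.sin(q * r) * math.exp(-r * r)
    s = f(0.0) + f(rmax)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return (4.0 * math.pi / q) * s * h / 3.0

def gauss_ft_closed(q):
    # pi^(3/2) * exp(-q^2/4), the completed-square result
    return math.pi ** 1.5 * math.exp(-q * q / 4.0)

for q in (0.5, 1.0, 2.0):
    assert abs(gauss_ft_numeric(q) - gauss_ft_closed(q)) < 1e-8
```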
Tuesday - 4/21
36.2 Probability Flux Density
As an aside, let's review the probability flux density and then consider its use in scattering theory. Consider some wave function $\psi$ satisfying the Schrodinger equation. That is:
\[ i\hbar\frac{\partial \psi}{\partial t} = H\psi \qquad -i\hbar\frac{\partial \psi^*}{\partial t} = H\psi^* \]
If we consider the time derivative of the probability, $\frac{\partial}{\partial t}[\psi^*\psi]$, and expand it using the above, we find:
\[ \frac{\partial}{\partial t}\int_V d^3x\, [\psi^*\psi] = \int_V d^3x\, \frac{\partial}{\partial t}\left( \psi^*\psi \right) \]
And using the Schrodinger Equation,
\[ \frac{\partial}{\partial t}\int_V d^3x\, [\psi^*\psi] = \int_V d^3x \left[ -\frac{i}{\hbar}\psi^* H\psi + \frac{i}{\hbar}\psi H\psi^* \right] = -\frac{i}{\hbar}\int_V d^3x\, [\psi^* H\psi - \psi H\psi^*] \]
Plugging in a generic Hamiltonian, $H = -\frac{\hbar^2}{2m}\nabla^2 + V(\vec{x})$, the terms $V\psi^*\psi$ and $V\psi\psi^*$ cancel, leaving only:
\[ \frac{\partial}{\partial t}\int_V d^3x\, [\psi^*\psi] = \frac{i}{\hbar}\frac{\hbar^2}{2m}\int_V d^3x \left[ \psi^*\nabla^2\psi - \psi\nabla^2\psi^* \right] = \frac{i\hbar}{2m}\int_V d^3x\, \nabla\cdot\left[ \psi^*\nabla\psi - \psi\nabla\psi^* \right] \]
Using the divergence theorem, this can be written as the integral over the boundary instead of the whole volume,
\[ \frac{\partial}{\partial t}\int_V d^3x\, [\psi^*\psi] = \frac{i\hbar}{2m}\oint_{\partial V} d^2x\, \hat{n}\cdot\left[ \psi^*\nabla\psi - \psi\nabla\psi^* \right] \]
This defines the probability flux density $\vec{j}$ as:
\[ \vec{j} \equiv -\frac{i\hbar}{2m}\left( \psi^*\nabla\psi - \psi\nabla\psi^* \right) \]
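As a minimal illustration (not from the notes): for a plane wave $\psi = Ae^{ikx}$ the definition evaluates to $j = \frac{\hbar k}{m}|A|^2$. The snippet checks this against the definition using finite-difference derivatives, with $\hbar = m = 1$ and arbitrary $k$, $A$.

```python
import cmath

hbar = m = 1.0

def flux(psi, x, h=1e-6):
    """j = -(i*hbar/2m)*(psi* dpsi - psi dpsi*), with a central-difference derivative."""
    dpsi = (psi(x + h) - psi(x - h)) / (2 * h)
    val = psi(x)
    return (-1j * hbar / (2 * m) * (val.conjugate() * dpsi - val * dpsi.conjugate())).real

k, A = 1.7, 0.5
plane = lambda x: A * cmath.exp(1j * k * x)
print(flux(plane, 0.3))   # ~ hbar*k*|A|^2/m = 0.425
```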
This can be applied to scattering theory in a simple manner. Recall that we were solving the problem:
\[ \left[ -\nabla^2 + V(\vec{x}) \right]\psi_{\vec{k}} = k^2\psi_{\vec{k}} \]
where in the limit of $|\vec{x}|$ getting very large, the solution can be written as:
\[ \psi_{\vec{k}}(\vec{x}) \approx e^{i\vec{k}\cdot\vec{x}} + f(k,\hat{x})\frac{e^{ikr}}{r} \]
We can compute the probability flux density of the incoming and scattered terms of the wavefunction (the constant $\frac{\hbar}{2m}$ prefactor is dropped here, consistent with writing $H_0 = -\nabla^2$):
\[ \vec{j}_{in} = -i\left[ e^{-i\vec{k}\cdot\vec{x}}\nabla e^{i\vec{k}\cdot\vec{x}} - e^{i\vec{k}\cdot\vec{x}}\nabla e^{-i\vec{k}\cdot\vec{x}} \right] = 2\vec{k} \]
\[ \vec{j}_{sc} = -i\left[ f^*\left( \vec{k},\hat{x} \right)\frac{e^{-ikr}}{r}\nabla\left( f\left( \vec{k},\hat{x} \right)\frac{e^{ikr}}{r} \right) - f\left( \vec{k},\hat{x} \right)\frac{e^{ikr}}{r}\nabla\left( f^*\left( \vec{k},\hat{x} \right)\frac{e^{-ikr}}{r} \right) \right] \]
Taking only the radial component of the scattered wave, $\hat{r}\cdot\nabla = \frac{\partial}{\partial r}$, so this leaves:
\[ \vec{j}_{sc}\cdot\hat{r} = -i\left| f\left( \vec{k},\hat{x} \right) \right|^2\left[ \frac{e^{-ikr}}{r}\frac{\partial}{\partial r}\frac{e^{ikr}}{r} - \frac{e^{ikr}}{r}\frac{\partial}{\partial r}\frac{e^{-ikr}}{r} \right] = -i\left| f\left( \vec{k},\hat{x} \right) \right|^2\left[ \left( \frac{ik}{r^2} - \frac{1}{r^3} \right) - \left( -\frac{ik}{r^2} - \frac{1}{r^3} \right) \right] \]
\[ \vec{j}_{sc}\cdot\hat{r} = \frac{2k}{r^2}\left| f\left( \vec{k},\hat{x} \right) \right|^2 \]
This result explains both the spherical spreading (the factor of $\frac{1}{r^2}$) and the angular dependence of the scattered wavefunction (the magnitude of $f\left( \vec{k},\hat{x} \right)$). The differential cross section of the scatterer can be determined by this result:
\[ \frac{d\sigma}{d\Omega} = \left| f\left( \vec{k},\hat{x} \right) \right|^2 \]
Integrating over all angles $d\Omega$ gives the total scattering cross section. It is possible (and usually quite useful) to use the Born approximation derived previously to determine this cross section; however, this is only valid in the case of weak scattering, where the scattered wave is nearly identical to the incident wave. In general, we can refer back to one of the first results of this section,
\[ \psi_{\vec{k}}(\vec{x}) \approx e^{i\vec{k}\cdot\vec{x}} + \sum_{l,m} a_{l,m}(\vec{k})\, Y_{l,m}(\hat{r})\frac{e^{ikr}}{r} = e^{i\vec{k}\cdot\vec{x}} + f_k(\vec{k},\hat{r})\frac{e^{ikr}}{r} \]
From this definition, we can obtain the exact differential cross section as:
\[ \frac{d\sigma}{d\Omega} = \left| f\left( \vec{k},\hat{x} \right) \right|^2 = \sum_{l,m}\sum_{l',m'} a_{l,m}(\vec{k})\, a_{l',m'}^*(\vec{k})\, Y_{l,m}(\theta,\varphi)\, Y_{l',m'}^*(\theta,\varphi) \]
If it is possible to exactly determine the coefficients $a_{l,m}$ of the spreading wave (only very simple potentials will allow this), then the exact differential and total cross sections can be exactly determined. Using the orthogonality of the Spherical Harmonics, it is even possible to simplify the total cross section as:
\[ \sigma = \int \frac{d\sigma}{d\Omega}\, d\Omega = \sum_{l,m}\left| a_{l,m}(\vec{k}) \right|^2 \int Y_{l,m}(\theta,\varphi)\, Y_{l,m}^*(\theta,\varphi)\, d\Omega = \sum_{l,m}\left| a_{l,m}(\vec{k}) \right|^2 \frac{4\pi}{2l+1} \]
An example of nding the cross section (both exact and approximated) is found in the nal.
Part VIII
HOMEWORK PROBLEMS
37 Semester I
1. Show that for Hermitian operators $X$ and $Y$, that $(XY)^\dagger = Y^\dagger X^\dagger$.

2. If $X = |\alpha\rangle\langle\beta|$, then show that $X^\dagger = |\beta\rangle\langle\alpha|$.

3. Given the definitions:
\[ S_z|\pm\rangle = \pm\frac{\hbar}{2}|\pm\rangle \qquad S_+ = \hbar|+\rangle\langle -| \qquad S_- = \hbar|-\rangle\langle +| \qquad S_z = \frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \]
Show that:
\[ |+\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \qquad |-\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \]
And determine matrix forms for $S_+$ and $S_-$.

4. Determine matrix representations of $S_x$, $S_y$, and $S_z$ from the bra-ket notation:
\[ |S_x, \pm\rangle = \frac{1}{\sqrt{2}}\left( |+\rangle \pm |-\rangle \right) \qquad S_x|S_x,\pm\rangle = \pm\frac{\hbar}{2}|S_x,\pm\rangle \]
\[ |S_y, \pm\rangle = \frac{1}{\sqrt{2}}\left( |+\rangle \pm i|-\rangle \right) \qquad S_y|S_y,\pm\rangle = \pm\frac{\hbar}{2}|S_y,\pm\rangle \]
\[ |S_z, \pm\rangle = |\pm\rangle \qquad S_z|S_z,\pm\rangle = \pm\frac{\hbar}{2}|S_z,\pm\rangle \]

5. Show that the following are true for several cases:
\[ [S_i, S_j] = i\varepsilon_{ijk}\hbar S_k \quad \text{where } [S_i, S_j] \equiv S_iS_j - S_jS_i \]
\[ \{S_i, S_j\} = \frac{\hbar^2}{2}\delta_{ij} \quad \text{where } \{S_i, S_j\} \equiv S_iS_j + S_jS_i \]
\[ S^2 \equiv S_x^2 + S_y^2 + S_z^2 = \frac{3\hbar^2}{4} \quad \text{and therefore} \quad \left[ S^2, S_i \right] = 0. \]
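These spin-$\frac{1}{2}$ identities can be checked directly with the $2\times 2$ matrices $S_i = \frac{\hbar}{2}\sigma_i$. A sketch with $\hbar = 1$, using plain nested lists rather than any particular library:

```python
# 2x2 complex matrix helpers
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B, s=1):
    return [[A[i][j] + s * B[i][j] for j in range(2)] for i in range(2)]

def scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

hbar = 1.0
Sx = scale(hbar / 2, [[0, 1], [1, 0]])
Sy = scale(hbar / 2, [[0, -1j], [1j, 0]])
Sz = scale(hbar / 2, [[1, 0], [0, -1]])

# [Sx, Sy] = i*hbar*Sz (one of the cyclic cases)
assert close(add(mul(Sx, Sy), mul(Sy, Sx), -1), scale(1j * hbar, Sz))
# {Sx, Sy} = 0 while {Sx, Sx} = (hbar^2/2) * identity
assert close(add(mul(Sx, Sy), mul(Sy, Sx)), [[0, 0], [0, 0]])
assert close(add(mul(Sx, Sx), mul(Sx, Sx)), scale(hbar**2 / 2, [[1, 0], [0, 1]]))
# S^2 = (3/4) hbar^2 * identity
S2 = add(add(mul(Sx, Sx), mul(Sy, Sy)), mul(Sz, Sz))
assert close(S2, scale(0.75 * hbar**2, [[1, 0], [0, 1]]))
```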
6. Use the definition of:
\[ T = I - i\vec{k}\cdot d\vec{x}' + \mathcal{O}\left( |d\vec{x}'|^2 \right) \]
where $\vec{k}$ is some Hermitian operator, to show that:
(a) $T^\dagger T = TT^\dagger = 1$
(b) $T(d\vec{x}')T(d\vec{x}'') = T(d\vec{x}'')T(d\vec{x}') = T(d\vec{x}' + d\vec{x}'')$
(c) $T(d\vec{x}')^{-1} = T(-d\vec{x}')$
(d) $\lim_{d\vec{x}' \to 0} T(d\vec{x}') = 1$

7. Use the definitions:
\[ T = I - i\vec{k}\cdot d\vec{x}' + \mathcal{O}\left( |d\vec{x}'|^2 \right) \qquad T\left( \vec{x}' \right) = e^{-\frac{i}{\hbar}(\vec{p}\cdot\vec{x})} \]
to show that
(a) $[T(x\hat{x}), T(y\hat{y})] = 0$
(b) $[p_x, p_y] = 0$

8. Given the Gaussian wavepacket:
\[ \psi = \frac{1}{\pi^{\frac{1}{4}}\sqrt{d}}\, e^{ikx}\, e^{-\frac{x^2}{2d^2}} \]
Determine:
(a) $\langle x \rangle$
(b) $\left\langle x^2 \right\rangle$
(c) $\left\langle (\Delta x)^2 \right\rangle = \left\langle x^2 \right\rangle - \langle x \rangle^2$
(d) $\langle p \rangle$
(e) $\left\langle p^2 \right\rangle$
(f) $\left\langle (\Delta p)^2 \right\rangle = \left\langle p^2 \right\rangle - \langle p \rangle^2$
(g) $\phi(p) \equiv \frac{1}{\sqrt{2\pi\hbar}}\int e^{-\frac{ipx}{\hbar}}\,\psi(x)\, dx$
9. Show that for $|\psi, t=0\rangle = |S_x, +\rangle$: $\langle S_y \rangle = \frac{\hbar}{2}\sin(\omega t)$ and $\langle S_z \rangle = 0$.

10. Show that there exists no state $|\lambda\rangle$ such that $a^\dagger|\lambda\rangle = \lambda'|\lambda\rangle$ where $|\lambda\rangle = \sum_n c_n|n\rangle$. Here $|n\rangle$ is an eigenstate of the quantum simple harmonic oscillator and $a^\dagger$ is the raising operator of the quantum simple harmonic oscillator.

11. Show that for any excited state of the quantum simple harmonic oscillator, the uncertainty relation is $\left\langle (\Delta x)^2 \right\rangle\left\langle (\Delta p)^2 \right\rangle = \left( n + \frac{1}{2} \right)^2\hbar^2$.

12. Sakurai 1.5
(a) Consider $|\alpha\rangle, |\beta\rangle$ and suppose that $\langle a'|\alpha\rangle, \langle a''|\alpha\rangle, \ldots$ and $\langle a'|\beta\rangle, \langle a''|\beta\rangle, \ldots$ are well known for given $\langle a'|, \langle a''|, \ldots$ which form a complete set of bras (or kets). Determine the matrix representation of $|\alpha\rangle\langle\beta|$ in terms of the $|a^{(n)}\rangle$s.
(b) Consider instead the spin $\frac{1}{2}$ system. Let $|\alpha\rangle \equiv |S_z, +\rangle$ and $|\beta\rangle \equiv |S_x, +\rangle$. Determine the matrix representation of $|\alpha\rangle\langle\beta|$ in terms of these new base kets.
13. Sakurai 1.6 - Suppose $|i\rangle$ and $|j\rangle$ are eigenkets of an Hermitian operator $A$. Under what conditions can we conclude that $|i\rangle + |j\rangle$ is also an eigenket of $A$?

14. Sakurai 1.9 - Construct the spin ket $|\vec{S}\cdot\hat{n}, +\rangle$ such that $\left( \vec{S}\cdot\hat{n} \right)|\vec{S}\cdot\hat{n}, +\rangle = \frac{\hbar}{2}|\vec{S}\cdot\hat{n}, +\rangle$. Express the result as a linear combination of $|\pm\rangle$. (The result should look like: $|\vec{S}\cdot\hat{n}, +\rangle = \cos\left( \frac{\alpha}{2} \right)|+\rangle + \sin\left( \frac{\alpha}{2} \right)e^{i\beta}|-\rangle$)

15. Sakurai 1.11 - Consider a 2 state system described by:
\[ H = H_{11}|1\rangle\langle 1| + H_{22}|2\rangle\langle 2| + H_{12}\left[ |1\rangle\langle 2| + |2\rangle\langle 1| \right] \]
where $H_{11}, H_{12}, H_{22}$ are real quantities with units of energy, and $|1\rangle$ and $|2\rangle$ are eigenvectors of some other observable. Determine the energy eigenkets and eigenvalues.

16. Sakurai 1.13 - A beam of spin $\frac{1}{2}$ particles passes through a series of Stern-Gerlach measurements as follows:
Only $S_z = \frac{\hbar}{2}$ pass, $S_z = -\frac{\hbar}{2}$ rejected.
Only $S_n = \frac{\hbar}{2}$ pass, $S_n = -\frac{\hbar}{2}$ rejected ($S_n$ is defined as in problem 1.9).
Only $S_z = -\frac{\hbar}{2}$ pass, $S_z = \frac{\hbar}{2}$ rejected.
What is the intensity of the final beam relative to the intensity leaving the first device? What angle maximizes the final intensity?

17. Sakurai 1.18 - Derive the Schwartz Inequality:
(a) Observe that $\left( \langle\alpha| + \lambda^*\langle\beta| \right)\left( |\alpha\rangle + \lambda|\beta\rangle \right) \geq 0$ is true for any $\lambda$.
(b) Show that the equality above holds if $\Delta A|\psi\rangle = \lambda\,\Delta B|\psi\rangle$ for $\lambda$ pure imaginary.
(c) Explicit calculations show that a Gaussian Wave Packet
\[ \langle x|\alpha\rangle = \left( 2\pi d^2 \right)^{-\frac{1}{4}} e^{\frac{i\langle p\rangle x}{\hbar} - \frac{(x - \langle x\rangle)^2}{4d^2}} \]
satisfies the relation $\left\langle (\Delta x)^2 \right\rangle\left\langle (\Delta p)^2 \right\rangle = \frac{\hbar^2}{4}$. Prove that $\langle x|\,\Delta x\,|\alpha\rangle = (\text{imaginary number})\,\langle x|\,\Delta p\,|\alpha\rangle$ is satisfied for such a Gaussian Wave Packet.
18. Sakurai 1.29
(a) Verify that $[x_i, G(\vec{p})] = i\hbar\frac{\partial G}{\partial p_i}$ and $[p_i, F(\vec{x})] = -i\hbar\frac{\partial F}{\partial x_i}$.
(b) Evaluate $\left[ x^2, p^2 \right]$ and compare with the classical Poisson Bracket quantity $\left[ x^2, p^2 \right]_{\text{Poisson}}$.

19. Sakurai 2.1 - Consider the spin-precession problem in the Heisenberg picture. Use the Hamiltonian defined as $H = -\left( \frac{eB}{mc} \right)S_z \equiv \omega S_z$ to determine the evolution of $S_x$, $S_y$, and $S_z$.

20. Sakurai 2.3 - Consider an electron subject to a uniform, time-independent magnetic field of the form $\vec{B} = B_0\hat{z}$. If at time $t = 0$ the electron is known to be in a state $|\vec{S}\cdot\hat{n}, +\rangle$ with $\hat{n}$ in the $xz$-plane at angle $\alpha$ from the $z$-axis, then:
(a) Obtain the probability of finding the electron in $|S_x, +\rangle$ as a function of time.
(b) Determine $\langle S_x \rangle$ as a function of time.
(c) Show that these results make physical sense for the case of $\alpha = 0$ and $\alpha = \frac{\pi}{2}$.
21. Sakurai 2.5 - Consider some particle in 1-d whose Hamiltonian is given by $H = \frac{1}{2m}p^2 + V(x)$. By calculating $[[H,x],x]$, prove that the relation:
\[ \sum_{a'}\left| \langle a''|\, x\, |a'\rangle \right|^2\left( E_{a'} - E_{a''} \right) = \frac{\hbar^2}{2m} \]
is true for $H|a'\rangle = E_{a'}|a'\rangle$.

22. Sakurai 2.6 - Consider a particle in 3-d with $H = \frac{1}{2m}\vec{p}^2 + V(\vec{x})$. Calculate $[\vec{x}\cdot\vec{p}, H]$ to obtain the relation:
\[ \frac{d}{dt}\langle \vec{x}\cdot\vec{p} \rangle = \left\langle \frac{p^2}{m} \right\rangle - \left\langle \vec{x}\cdot\nabla V(\vec{x}) \right\rangle \]
To identify the preceding relation with the quantum mechanical analogue of the virial theorem, it is essential that the left-hand-side be zero. Under what condition would this be true?

23. Sakurai 2.8 - Let $|a'\rangle$ and $|a''\rangle$ be eigenstates of an Hermitian operator $A$ with non-degenerate eigenvalues $a' \neq a''$. Consider a Hamiltonian given as:
\[ H = \delta\,|a'\rangle\langle a''| + \delta\,|a''\rangle\langle a'| \]
where $\delta$ is a real number.
(a) Write down the eigenstates of $H$ and find their energies.
(b) Suppose that initially the system is in a state $|\psi, t=0\rangle = |a'\rangle$. Determine $|\psi, t\rangle$.
(c) For the determined time evolution, find the probability that the system will be in the state $|a''\rangle$ as a function of time.
(d) Describe a physical system like the one in this problem.

24. Sakurai 2.18 - A coherent state of a 1-d quantum simple harmonic oscillator is given by:
\[ a|\lambda\rangle = \lambda|\lambda\rangle \]
where $\lambda$ is some complex number.
(a) Prove that
\[ |\lambda\rangle = e^{-\frac{|\lambda|^2}{2}}\, e^{\lambda a^\dagger}|0\rangle \]
is a normalized coherent state.
(b) Prove the minimal uncertainty relation for such a state.
(c) Write the ket as:
\[ |\lambda\rangle = \sum_{n=0}^{\infty} f(n)|n\rangle \]
and show that $|f(n)|^2$ is a Poisson distribution. Determine the most probable $n$ (and thus the most likely energy state).
(d) Show that a coherent state can be made by applying the translation operator $e^{-\frac{i}{\hbar}pl}$ to $|0\rangle$, where $p$ is the momentum operator and $l$ is some length of displacement.
25. Sakurai 2.20 - Consider a particle in a potential of the form
\[ V(x) = \begin{cases} \frac{1}{2}kx^2 & x > 0 \\ \infty & x < 0 \end{cases} \]
(a) Find the ground state energy.
(b) Find $\left\langle x^2 \right\rangle$ for the ground state.

26. Sakurai 2.25 - Consider an electron restricted to a cylindrical shell of radii $a$ and $b$ with a finite length $L$. If we require $\psi$ to vanish on the boundaries of such a cavity, then:
(a) Determine the energy eigenfunctions and show that the eigenvalues are:
\[ E_{l,m,n} = \left( \frac{\hbar^2}{2m_e} \right)\left[ k_{m,n}^2 + \left( \frac{l\pi}{L} \right)^2 \right] \]
where $k_{m,n}$ is the $n^{\text{th}}$ root of:
\[ J_m(k_{m,n}b)\, Y_m(k_{m,n}a) - Y_m(k_{m,n}b)\, J_m(k_{m,n}a) = 0 \]
(b) Repeat the above derivation if there exists a magnetic field $\vec{B} = B_0\hat{z}$ in the region $0 < \rho < a$ and $\vec{B} = 0$ for $\rho > a$.

27. Sakurai 2.35 - Consider the Hamiltonian for some spinless particle with charge $e$. For a static magnetic field, the interaction term looks like $\vec{p} \to \vec{p} - \frac{e}{c}\vec{A}$. Suppose that $\vec{B} = B_0\hat{z}$ and show that this leads to the correct interaction of orbital momentum, $-\frac{e}{2mc}\vec{L}\cdot\vec{B}$. Further, show that there is an extra term proportional to $B^2\left( x^2 + y^2 \right)$ and comment on its physical significance.
28. Sakurai 3.1 - Determine the eigenvalues and eigenvectors of $\sigma_y$. Suppose that an electron is in the state $\begin{pmatrix} \alpha \\ \beta \end{pmatrix}$. If $S_y$ is measured, what is the probability of measuring spin up?

29. Sakurai 3.8 - Consider the matrix:
\[ D^{\frac{1}{2}}(\alpha, \beta, \gamma) = e^{-\frac{i\alpha}{2}\sigma_3}\, e^{-\frac{i\beta}{2}\sigma_2}\, e^{-\frac{i\gamma}{2}\sigma_3} = \begin{pmatrix} e^{-i\frac{\alpha+\gamma}{2}}\cos\left( \frac{\beta}{2} \right) & -e^{-i\frac{\alpha-\gamma}{2}}\sin\left( \frac{\beta}{2} \right) \\ e^{i\frac{\alpha-\gamma}{2}}\sin\left( \frac{\beta}{2} \right) & e^{i\frac{\alpha+\gamma}{2}}\cos\left( \frac{\beta}{2} \right) \end{pmatrix} \]
This is equivalent to a single rotation about some axis $\hat{n}$ by an angle $\theta$. Determine $\theta$.

30. Sakurai 3.9
(a) Consider a pure ensemble of spin $\frac{1}{2}$ systems. Suppose that $\langle S_x \rangle$, $\langle S_z \rangle$, and the sign of $\langle S_y \rangle$ are known. Show how to determine the state vector. Why do we only need the sign of $\langle S_y \rangle$?
(b) Consider a mixed ensemble of spin $\frac{1}{2}$ systems with $[S_x]$, $[S_y]$, and $[S_z]$ known. Show how to construct the $2\times 2$ density matrix $\rho$.

31. Sakurai 3.15 - Consider some spherically symmetric potential $V(r)$. If the wavefunction for a particle moving in such a potential is written as:
\[ \psi(\vec{x}) = (x + y + 3z)f(r) \]
(a) Is $\psi$ an eigenfunction of $L^2$? If it is, what is the value of $l$? If it isn't, what possible values of $l$ might be measured?
(b) What are the probabilities of measuring the various $l$ values?
(c) Suppose it is known that $\psi(\vec{x})$ is an energy eigenfunction with energy $E$. Indicate how to determine what $V(r)$ is.

32. Sakurai 3.16 - A particle in a spherically symmetric potential is known to be in an eigenstate of $L^2$ and $L_z$ with eigenvalues of $\hbar^2 l(l+1)$ and $m\hbar$ respectively. Prove that for such a state $|l; m\rangle$,
\[ \langle L_x \rangle = \langle L_y \rangle = 0 \qquad \left\langle L_x^2 \right\rangle = \left\langle L_y^2 \right\rangle = \frac{1}{2}\left[ l(l+1)\hbar^2 - m^2\hbar^2 \right] \]
38 Semester II
33. Check the exact form for $\langle x|\, e^{-\frac{i}{\hbar}Ht}\, |y\rangle$ from the notes and generalize to 3 dimensions.

34. Prove that the statement $i\hbar\frac{\partial}{\partial t}e^{-\frac{i}{\hbar}Ht} = He^{-\frac{i}{\hbar}Ht}$ is true for some self adjoint operator $H$.

35. A particle is in a delta function potential described as:
\[ H = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} - \lambda\delta(x) \]
If a system starts in the state:
\[ \psi(x) = \frac{1}{\sqrt{2a}}\begin{cases} 1 & -a < x < a \\ 0 & \text{otherwise} \end{cases} \]
find the probability that the particle is bound as a function of time.

36. Scale the variables of the Impenetrable Wall with Finite Well potential and use a plotting program to find some bound states for chosen values of $D$ and $L$.

37. Derive the Gauge transform between the Symmetric Gauge ($\vec{A} = \frac{1}{2}\vec{B}\times\vec{x}$) and the Landau Gauge ($\vec{A} = -B_0 y\,\hat{x}$).

38. Complete the derivation of the solution for the electron in a uniform magnetic field derived in class from the step:
\[ \left[ -\frac{\hbar^2}{2m}\left( \frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} - \frac{j^2}{r^2} \right) + \frac{1}{8}m\omega_B^2 r^2 \right]\varphi(r) = \left[ \epsilon - \frac{\hbar^2 k^2}{2m} + \frac{\hbar j}{2}\omega_B \right]\varphi(r) \]
using Confluent Hypergeometric Functions.
39. Consider the one dimensional harmonic oscillator:
\[ H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}m\omega^2 x^2 \]
Determine the scale of the length and energy eigenvalues by changing the variable.

40. For the approximate form of the Hydrogen Atom states derived in class:
\[ \left[ -\frac{1}{2}\frac{d^2}{d\rho^2} - \frac{1}{\rho}\frac{d}{d\rho} - \frac{1}{\rho} + \frac{l(l+1)}{2\rho^2} + \frac{1}{2n^2} \right]R(\rho) = 0 \qquad R(\rho) = N_{n,l}\,\rho^l\, e^{-\frac{\rho}{n}}\, f_{n,l}(\rho) \]
determine $f_{1,0}$, $f_{2,0}$, and $f_{2,1}$ and the normalization factor $N$ for each case.

41. Derive the second order corrections $\lambda_j^{(2)}$ and $\psi_j^{(2)}$ for an operator $H = H_0 + V$ with no degenerate states and no continuous spectrum.

42. Derive an expression for the first order corrections to the eigenfunction and eigenvalue of a system with $n+1$ degenerate states.

43. Let $H_0 = -\frac{1}{2}\frac{d^2}{dx^2}$ inside an infinite square well with boundaries at $x = \pm 1$. Consider the perturbation $V = \epsilon x$ for some small value of $\epsilon$,
(a) Find the first order corrections $\lambda_j^{(1)}$ and $\psi_j^{(1)}(x)$ in terms of the parameter $\epsilon$.
(b) Solve the system explicitly using Airy Functions. Show that if the exact solution is expanded in orders of $\epsilon$, the result matches that found in (a). Hint: The Airy functions are solutions of:
\[ \left[ \frac{d^2}{dx^2} - x \right]\left\{ \begin{matrix} Ai(x) \\ Bi(x) \end{matrix} \right\} = 0 \]
44. Consider the Particle in a Symmetric Box in Two Dimensions example. Let $x_0 = y_0 = \frac{1}{2}$ and find expressions for $\lambda_{\pm}$ and $\psi_{\pm}(x,y)$ for the degenerate states $k,j = \{2,1\}, \{1,2\}$.

45. For the Particle in a Symmetric Box In Two Dimensions example, find an expression for the first order time evolution of a state initially in the state $\psi = \psi_{1,2}$.

46. Consider the Hamiltonian $H = -\frac{1}{2}\frac{d^2}{dx^2} + |x|$. Use the Bohr-Sommerfeld quantization to determine approximate energies of the bound states.

47. Consider the Hamiltonian $H = -\frac{1}{2}\frac{d^2}{dx^2} + V(x)$ where,
\[ V(x) = \begin{cases} 0 & x < 0 \\ x & x \geq 0 \end{cases} \]
(a) Determine the eigenfunctions in the WKB Approximation limit.
(b) Determine the exact expressions for the eigenfunctions. (The solution will involve Airy Functions)
(c) Show that the large energy asymptotics of the exact solution approach the WKB approximation results.
Part IX
SEMESTER II FINAL EXAM
39 WKB Problem
Consider the potential shown below and the Hamiltonian $H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x)$.
1. Find the eigenfunction expansion for the bound and free states in the WKB Approximation.
2. Obtain an expression (in the WKB Approximation) for the number of bound states.
3. Determine the large asymptotics of part (b).

[Figure: sketch of the potential]
Solution:
(a) We can separate this problem into three regions by defining a value $\epsilon_0$ that is the peak of the potential (as noted in the attached sketch).

Region 1: $\epsilon > \epsilon_0$

In this region, the energy is always greater than the potential and therefore the solution is a superposition of oscillating WKB Approximations:
\[ \psi_\epsilon^{(I)}(x) = \frac{1}{\left[ 2m(\epsilon - V(x)) \right]^{\frac{1}{4}}}\, e^{\pm\frac{i}{\hbar}\int \sqrt{2m(\epsilon - V(x'))}\, dx'} \]
There are no classical turning points for these energies, so the integral is indefinite and introduces a phase to each term. That phase can be absorbed into the coefficients to give:
\[ \psi_\epsilon^{(I)}(x) = \frac{c_+}{(\epsilon - V(x))^{\frac{1}{4}}}\, e^{\frac{i}{\hbar}\int \sqrt{2m(\epsilon - V(x'))}\, dx'} + \frac{c_-}{(\epsilon - V(x))^{\frac{1}{4}}}\, e^{-\frac{i}{\hbar}\int \sqrt{2m(\epsilon - V(x'))}\, dx'} \]
At large values of $x$ we expect this solution to appear like a Fourier transform, therefore the coefficients should reduce to $\frac{1}{\sqrt{2\pi}}$ in that limit. That is:
\[ \frac{c_\pm}{\epsilon^{\frac{1}{4}}} = \frac{1}{\sqrt{2\pi}} \quad \Rightarrow \quad c_\pm = \frac{\epsilon^{\frac{1}{4}}}{\sqrt{2\pi}} \]
\[ \psi_\epsilon^{(I)}(x) = \frac{\epsilon^{\frac{1}{4}}}{\sqrt{2\pi}\,(\epsilon - V(x))^{\frac{1}{4}}}\, e^{\frac{i}{\hbar}\int \sqrt{2m(\epsilon - V(x'))}\, dx'} + \frac{\epsilon^{\frac{1}{4}}}{\sqrt{2\pi}\,(\epsilon - V(x))^{\frac{1}{4}}}\, e^{-\frac{i}{\hbar}\int \sqrt{2m(\epsilon - V(x'))}\, dx'} \]
This gives the normalized approximation for $\epsilon > \epsilon_0$.
Region 2: $0 < \epsilon < \epsilon_0$

In this region, the classical turning points at $a(\epsilon)$ and $b(\epsilon)$ become important. We would expect on either side of the potential barrier to have a combination of sines and cosines, with exponentials decaying and growing inside the barrier where $V(x) > \epsilon$. However, sine and cosine connect to certain exponential forms in the WKB Approximation. Therefore we'll have to reference the asymptotics of the Airy Functions to determine the form:
\[ Ai(x)\big|_{x\to+\infty} \approx \frac{1}{2\sqrt{\pi}}\, x^{-\frac{1}{4}}\, e^{-\frac{2}{3}x^{\frac{3}{2}}} \qquad Bi(x)\big|_{x\to+\infty} \approx \frac{1}{\sqrt{\pi}}\, x^{-\frac{1}{4}}\, e^{\frac{2}{3}x^{\frac{3}{2}}} \]
\[ Ai(-x)\big|_{x\to+\infty} \approx \frac{1}{\sqrt{\pi}}\, x^{-\frac{1}{4}}\sin\left( \frac{2}{3}x^{\frac{3}{2}} + \frac{\pi}{4} \right) \qquad Bi(-x)\big|_{x\to+\infty} \approx \frac{1}{\sqrt{\pi}}\, x^{-\frac{1}{4}}\cos\left( \frac{2}{3}x^{\frac{3}{2}} + \frac{\pi}{4} \right) \]
Given this information, we can start from a known amplitude for sine or cosine on one side of the barrier, connect to the correct exponential term, then use the above forms to get the behavior across the second turning point. The difficulty in this is that only the exponentially growing solution can be kept as we work across the potential. We'll have to use two solutions. Consider the left-hand-side wavefunctions:
\[ \psi^{(IIa)}(x) = \frac{2A}{\left[ \epsilon - V(x) \right]^{\frac{1}{4}}}\sin\left( \int_x^{a(\epsilon)}\sqrt{\frac{2m}{\hbar^2}(\epsilon - V(x'))}\, dx' + \frac{\pi}{4} \right) \qquad x < a(\epsilon) \]
\[ \psi^{(IIb)}(x) = \frac{2B}{\left[ \epsilon - V(x) \right]^{\frac{1}{4}}}\cos\left( \int_x^{a(\epsilon)}\sqrt{\frac{2m}{\hbar^2}(\epsilon - V(x'))}\, dx' + \frac{\pi}{4} \right) \qquad x < a(\epsilon) \]
Using the Airy function asymptotics, these connect to the forms:
\[ \psi^{(IIa)}(x) = \frac{A}{\left| \epsilon - V(x) \right|^{\frac{1}{4}}}\, e^{-\frac{1}{\hbar}\int_{a(\epsilon)}^x \sqrt{2m|\epsilon - V(x')|}\, dx'} \qquad a(\epsilon) < x < b(\epsilon) \]
\[ \psi^{(IIb)}(x) = \frac{2B}{\left| \epsilon - V(x) \right|^{\frac{1}{4}}}\, e^{\frac{1}{\hbar}\int_{a(\epsilon)}^x \sqrt{2m|\epsilon - V(x')|}\, dx'} \qquad a(\epsilon) < x < b(\epsilon) \]
As we move from left to right through the well, one solution grows while the other decays off. If we look from the other side of the well, we see the opposite behavior as the above. Again a solution of sine and cosine should exist in the region $x > b(\epsilon)$, and they connect to the above at $x = b$. However, the above switch behavior in this region. This is explained better visually in the sketch. Regardless, passing through the barrier causes one term to grow and the other to decay. At the other end of the well, the solution that behaved like $Ai(y)$ at the first turning point now behaves like $Bi(y)$, and similarly for the other term. Therefore, we have:
\[ \psi^{(IIa)}(x) = \frac{A\, e^{-\frac{1}{\hbar}\int_{a(\epsilon)}^{b(\epsilon)}\sqrt{2m|\epsilon - V(x')|}\, dx'}}{\left[ \epsilon - V(x) \right]^{\frac{1}{4}}}\cos\left( \int_{b(\epsilon)}^x\sqrt{\frac{2m}{\hbar^2}(\epsilon - V(x'))}\, dx' + \frac{\pi}{4} \right) \qquad b(\epsilon) < x \]
\[ \psi^{(IIb)}(x) = \frac{B\, e^{\frac{1}{\hbar}\int_{a(\epsilon)}^{b(\epsilon)}\sqrt{2m|\epsilon - V(x')|}\, dx'}}{\left[ \epsilon - V(x) \right]^{\frac{1}{4}}}\sin\left( \int_{b(\epsilon)}^x\sqrt{\frac{2m}{\hbar^2}(\epsilon - V(x'))}\, dx' + \frac{\pi}{4} \right) \qquad b(\epsilon) < x \]
where the additional constants account for the growth or decay in the barrier.

At large values of $x$, the above should look like Sine and Cosine transforms:
\[ f(\epsilon) = \sqrt{\frac{2}{\pi}}\int_0^\infty f(t)\sin(\epsilon t)\, dt \qquad f(\epsilon) = \sqrt{\frac{2}{\pi}}\int_0^\infty f(t)\cos(\epsilon t)\, dt \]
Therefore (since $V(x)$ is zero for large $x$) we have the condition that:
\[ \sqrt{\frac{2}{\pi}} = \frac{2A}{\epsilon^{\frac{1}{4}}} = \frac{2B}{\epsilon^{\frac{1}{4}}} \quad \Rightarrow \quad A = B = \frac{\epsilon^{\frac{1}{4}}}{\sqrt{2\pi}} \]
This gives the two eigenfunctions in the region $0 < \epsilon < \epsilon_0$ as:
\[ \psi^{(IIa)}(x) = \begin{cases} \sqrt{\frac{2}{\pi}}\,\frac{\epsilon^{\frac{1}{4}}}{\left[ \epsilon - V(x) \right]^{\frac{1}{4}}}\sin\left( \int_x^{a(\epsilon)}\sqrt{\frac{2m}{\hbar^2}(\epsilon - V(x'))}\, dx' + \frac{\pi}{4} \right) & x < a(\epsilon) \\[2ex] \frac{\epsilon^{\frac{1}{4}}}{\sqrt{2\pi}\,\left| \epsilon - V(x) \right|^{\frac{1}{4}}}\, e^{-\frac{1}{\hbar}\int_{a(\epsilon)}^x\sqrt{2m|\epsilon - V(x')|}\, dx'} & a(\epsilon) < x < b(\epsilon) \\[2ex] \frac{\epsilon^{\frac{1}{4}}}{\sqrt{2\pi}}\, e^{-\frac{1}{\hbar}\int_{a(\epsilon)}^{b(\epsilon)}\sqrt{2m|\epsilon - V(x')|}\, dx'}\,\frac{1}{\left[ \epsilon - V(x) \right]^{\frac{1}{4}}}\cos\left( \int_{b(\epsilon)}^x\sqrt{\frac{2m}{\hbar^2}(\epsilon - V(x'))}\, dx' + \frac{\pi}{4} \right) & b(\epsilon) < x \end{cases} \]
\[ \psi^{(IIb)}(x) = \begin{cases} \sqrt{\frac{2}{\pi}}\,\frac{\epsilon^{\frac{1}{4}}}{\left[ \epsilon - V(x) \right]^{\frac{1}{4}}}\cos\left( \int_x^{a(\epsilon)}\sqrt{\frac{2m}{\hbar^2}(\epsilon - V(x'))}\, dx' + \frac{\pi}{4} \right) & x < a(\epsilon) \\[2ex] \sqrt{\frac{2}{\pi}}\,\frac{\epsilon^{\frac{1}{4}}}{\left| \epsilon - V(x) \right|^{\frac{1}{4}}}\, e^{\frac{1}{\hbar}\int_{a(\epsilon)}^x\sqrt{2m|\epsilon - V(x')|}\, dx'} & a(\epsilon) < x < b(\epsilon) \\[2ex] \frac{\epsilon^{\frac{1}{4}}}{\sqrt{2\pi}}\, e^{\frac{1}{\hbar}\int_{a(\epsilon)}^{b(\epsilon)}\sqrt{2m|\epsilon - V(x')|}\, dx'}\,\frac{1}{\left[ \epsilon - V(x) \right]^{\frac{1}{4}}}\sin\left( \int_{b(\epsilon)}^x\sqrt{\frac{2m}{\hbar^2}(\epsilon - V(x'))}\, dx' + \frac{\pi}{4} \right) & b(\epsilon) < x \end{cases} \]
Region 3: $\epsilon < 0$

In this region we expect an oscillating solution in the well region, and decaying solutions outside the well. So, we can again use the Airy function asymptotics to determine the solutions on either side of the turning points. Consider first the solutions outside the well:
\[ \psi^{(III)}(x) = \frac{C}{\left| \epsilon - V(x) \right|^{\frac{1}{4}}}\, e^{-\frac{1}{\hbar}\int_x^{c(\epsilon)}\sqrt{2m|\epsilon - V(x')|}\, dx'} \qquad x < c(\epsilon) \]
\[ \psi^{(III)}(x) = \frac{C}{\left| \epsilon - V(x) \right|^{\frac{1}{4}}}\, e^{-\frac{1}{\hbar}\int_{d(\epsilon)}^x\sqrt{2m|\epsilon - V(x')|}\, dx'} \qquad d(\epsilon) < x \]
The coefficient matches since we expect the solution to be symmetric about the well. These two exponential forms connect to different solutions on either turning point of the well. This gives:
\[ \psi^{(II)}_{\text{left}}(x) = \frac{2C}{\left[ \epsilon - V(x) \right]^{\frac{1}{4}}}\sin\left( \int_{c(\epsilon)}^x\sqrt{\frac{2m}{\hbar^2}(\epsilon - V(x'))}\, dx' + \frac{\pi}{4} \right) \]
\[ \psi^{(II)}_{\text{right}}(x) = \frac{2C}{\left[ \epsilon - V(x) \right]^{\frac{1}{4}}}\sin\left( \int_x^{d(\epsilon)}\sqrt{\frac{2m}{\hbar^2}(\epsilon - V(x'))}\, dx' + \frac{\pi}{4} \right) \]
This solution can be normalized by integrating over the entire region. Since these states are bound, the limit should converge, and by setting it to 1 we determine $C$. Though, without an explicit form for $V(x)$, this cannot be done analytically.
(b) For the solutions in Region III to match up requires quantizing the energy of these bound states. The Bohr-Sommerfeld quantization requires:
\[ \frac{1}{\hbar}\int_{c(\epsilon)}^{d(\epsilon)}\sqrt{2m(\epsilon - V(x'))}\, dx' = \left( n - \frac{1}{2} \right)\pi \qquad n = 1, 2, \ldots \]
To estimate the number of bound states one must determine from the above form how many values of $\epsilon$ exist between the bottom of the well (the lowest energy allowed) and $\epsilon = 0$ (above which the states are no longer bound). Without an exact form for $V(x)$ this integral cannot be done analytically. However, it is possible to approximate the maximum energy of the bound states by assuming that it is just slightly less than zero,
\[ \frac{1}{\hbar}\int_{c(\epsilon)}^{d(\epsilon)}\sqrt{2m(0 - V(x'))}\, dx' = \left( n_{max} - \frac{1}{2} \right)\pi \quad \Rightarrow \quad n_{max} = \frac{1}{2} + \frac{\sqrt{2m}}{\pi\hbar}\int_0^L\sqrt{-V(x)}\, dx \]
where we've assumed that the well is of width $L$ at the highest energy level. For the case that the energy near $\epsilon = 0$ does not generate an integer value on the right hand side of this equation, the nearest integer less than that value is the highest energy state. That is,
\[ n_{max} = \text{Rnd}\left[ \frac{1}{2} + \frac{\sqrt{2m}}{\pi\hbar}\, A_{well} \right] \]
where Rnd implies rounding down to the nearest integer value and $A_{well} = \int_0^L\sqrt{|V(x)|}\, dx$ is some value dependent on the area of the well.
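To make the counting formula concrete, here is a sketch for an assumed well shape $V(x) = -V_0\sin^2(\pi x/L)$ on $[0, L]$ (this specific potential is an illustrative choice, not one from the exam). With $\hbar = m = 1$, the estimate is $n_{max} = \lfloor \frac{1}{2} + \frac{\sqrt{2}}{\pi} A_{well} \rfloor$.

```python
import math

hbar = m = 1.0

def n_max(V, L, n=20000):
    """Estimate the number of WKB bound states: floor(1/2 + sqrt(2m)/(pi*hbar)*A_well)."""
    h = L / n
    # trapezoid rule for A_well = integral of sqrt(|V(x)|) over the well width
    A = sum(0.5 * (math.sqrt(abs(V(i * h))) + math.sqrt(abs(V((i + 1) * h)))) * h
            for i in range(n))
    return math.floor(0.5 + math.sqrt(2.0 * m) / (math.pi * hbar) * A)

V0, L = 50.0, 4.0
well = lambda x: -V0 * math.sin(math.pi * x / L) ** 2
print(n_max(well, L))  # prints 8 for this choice of V0 and L
```

For this well $A_{well} = \sqrt{V_0}\cdot 2L/\pi$ analytically, so the count can be checked by hand.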
(c) If we assume that the well becomes infinitely deep, then we can assume that the highest energy will be exactly at $\epsilon = 0$. This implies that the above result will not need to be rounded off and the exact number of bound states is given by:
\[ n_{max} = \frac{1}{2} + \frac{\sqrt{2m}}{\pi\hbar}\int_0^L\sqrt{|V(x)|}\, dx \]
40 Scattering Problem
Consider some spherically symmetric potential $V(r)$, defined as:
\[ V(r) = \begin{cases} -\varepsilon & r < R \\ 0 & r \geq R \end{cases} \]
Determine the differential scattering cross-section.
Solution:
We determined in class that the scattering problem can be done by expanding in terms of an incoming plane wave and spherically spreading scattered waves. This solution has the form:
\[ \psi_{\vec{k}}(\vec{x}) = e^{i\vec{k}\cdot\vec{x}} + \sum_{l,m} a_{l,m}\left( \vec{k} \right)Y_{l,m}(\theta,\varphi)\frac{e^{ikr}}{r} \qquad r = |\vec{x}| \gg R \]
In the case that we assume $\vec{k} = k\hat{x}_3$, the above simplifies to an azimuthally symmetric form. The angular dependence of the spreading term is the differential cross section, and integrating over the solid angle gives the total scattering cross section:
\[ \frac{d\sigma}{d\Omega} = \left| f\left( \vec{k},\hat{r} \right) \right|^2 = \sum_{l,l'}(2l+1)\left( 2l'+1 \right)a_l\, a_{l'}^*\, P_l(\cos\theta)\, P_{l'}(\cos\theta) \]
\[ \sigma = 4\pi\sum_{l=0}^\infty (2l+1)\,|a_l|^2 \]
where we've used the completeness and orthogonality of the Legendre polynomials,
\[ \int_{-1}^1 P_l(x)\, P_{l'}(x)\, dx = \frac{2}{2l+1}\delta_{l,l'} \]
Therefore, to determine the scattering cross section, we need to determine values for $a_l$.
We can solve this problem as:
\[ \left[ -\frac{\hbar^2}{2m}\nabla^2 + V(r) - \lambda \right]\psi = 0 \]
Plugging in the potential above,
\[ \left[ -\nabla^2 - \frac{2m}{\hbar^2}\lambda \right]\psi_> = 0 \qquad \left[ -\nabla^2 - \frac{2m}{\hbar^2}(\lambda + \varepsilon) \right]\psi_< = 0 \]
where $\psi_<$ and $\psi_>$ are the solutions inside and outside the potential respectively. Then, making the substitutions:
\[ k \equiv \sqrt{\frac{2m}{\hbar^2}\lambda} \qquad k' \equiv \sqrt{\frac{2m}{\hbar^2}(\lambda + \varepsilon)} \]
We can look up how to write $e^{ikz}$ in terms of Legendre polynomials and write the solutions as stated above (incoming plane wave, and outgoing spherical distribution). That is:
\[ \psi_> = \frac{1}{(2\pi)^{\frac{3}{2}}}\left[ \sum_{m=0}^\infty i^m(2m+1)\, j_m(kr)\, P_m(\cos\theta) + \sum_{l=0}^\infty k\, i^{l+1}(2l+1)\, a_l\, h_l^{(1)}(kr)\, P_l(\cos\theta) \right] \]
\[ \psi_< = \sum_{j=0}^\infty k'\, i^{j+1}(2j+1)\, b_j\, j_j\left( k'r \right)P_j(\cos\theta) \]
Note that inside the spherical potential the radial solution is a Spherical Bessel function instead of an outgoing Hankel Function. This is because the solution includes $r = 0$. The boundary on the edge of the sphere requires:
\[ \psi_>(r = R) = \psi_<(r = R) \qquad \psi_>'(r = R) = \psi_<'(r = R) \]
where $'$ denotes a radial derivative. The first condition gives:
\[ \frac{1}{(2\pi)^{\frac{3}{2}}}\left[ \sum_{m=0}^\infty i^m(2m+1)\, j_m(kR)\, P_m(\cos\theta) + \sum_{l=0}^\infty k\, i^{l+1}(2l+1)\, a_l\, h_l^{(1)}(kR)\, P_l(\cos\theta) \right] = \sum_{j=0}^\infty k'\, i^{j+1}(2j+1)\, b_j\, j_j\left( k'R \right)P_j(\cos\theta) \]
Multiplying by $P_l(\cos\theta)$ and integrating eliminates the angular dependence, leaving:
\[ \frac{1}{(2\pi)^{\frac{3}{2}}}\left[ i^l j_l(kR) + k\, i^{l+1}\, a_l\, h_l^{(1)}(kR) \right] = k'\, i^{l+1}\, b_l\, j_l\left( k'R \right) \]
Dropping the $i^l$ term that's common,
\[ \frac{1}{(2\pi)^{\frac{3}{2}}}\left[ j_l(kR) + ik\, a_l\, h_l^{(1)}(kR) \right] = ik'\, b_l\, j_l\left( k'R \right) \]
Repeating for the derivative condition yields the same result except for an additional $k$ or $k'$ being picked up, and the radial functions are now replaced by their derivatives. That is:
\[ \frac{1}{(2\pi)^{\frac{3}{2}}}\left[ k\, j_l'(kR) + ik^2\, a_l\, h_l^{(1)\prime}(kR) \right] = ik'^2\, b_l\, j_l'\left( k'R \right) \]
We can use one of these conditions to get $b_l$ in terms of $a_l$ and the second to determine $a_l$. That is:
\[ b_l = \frac{\frac{1}{(2\pi)^{\frac{3}{2}}}\left[ j_l(kR) + ik\, a_l\, h_l^{(1)}(kR) \right]}{ik'\, j_l(k'R)} = \frac{1}{(2\pi)^{\frac{3}{2}}}\left[ a_l\frac{k}{k'}\frac{h_l^{(1)}(kR)}{j_l(k'R)} - \frac{i}{k'}\frac{j_l(kR)}{j_l(k'R)} \right] \]
And then,
\[ \frac{1}{(2\pi)^{\frac{3}{2}}}\left[ k\, j_l'(kR) + ik^2\, a_l\, h_l^{(1)\prime}(kR) \right] = ik'^2\left\{ \frac{1}{(2\pi)^{\frac{3}{2}}}\left[ a_l\frac{k}{k'}\frac{h_l^{(1)}(kR)}{j_l(k'R)} - \frac{i}{k'}\frac{j_l(kR)}{j_l(k'R)} \right] \right\} j_l'\left( k'R \right) \]
\[ k\, j_l'(kR) + ik^2\, a_l\, h_l^{(1)\prime}(kR) = a_l\, ikk'\frac{h_l^{(1)}(kR)\, j_l'(k'R)}{j_l(k'R)} + k'\frac{j_l(kR)\, j_l'(k'R)}{j_l(k'R)} \]
\[ a_l\left[ ik^2\, h_l^{(1)\prime}(kR) - ikk'\frac{j_l'(k'R)}{j_l(k'R)}\, h_l^{(1)}(kR) \right] = k'\frac{j_l'(k'R)}{j_l(k'R)}\, j_l(kR) - k\, j_l'(kR) \]
\[ a_l = \frac{i}{k}\cdot\frac{k'\frac{j_l'(k'R)}{j_l(k'R)}\, j_l(kR) - k\, j_l'(kR)}{k'\frac{j_l'(k'R)}{j_l(k'R)}\, h_l^{(1)}(kR) - k\, h_l^{(1)\prime}(kR)} \]
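The $a_l$ formula can be verified numerically: build $a_0$ and $b_0$ from it and check that both matching conditions at $r = R$ hold. A sketch for the $l = 0$ channel only, using the closed forms $j_0(x) = \sin x/x$ and $h_0^{(1)}(x) = -ie^{ix}/x$ and arbitrary values of $k$, $k'$, $R$:

```python
import cmath, math

def j0(x):  return math.sin(x) / x
def dj0(x): return math.cos(x) / x - math.sin(x) / x**2
def h0(x):  return -1j * cmath.exp(1j * x) / x
def dh0(x): return cmath.exp(1j * x) * (x + 1j) / x**2

k, kp, R = 1.3, 2.1, 1.0           # outside / inside wavenumbers, well radius
N = (2 * math.pi) ** -1.5          # plane-wave normalization prefactor

beta = kp * dj0(kp * R) / j0(kp * R)
a0 = (1j / k) * (beta * j0(k * R) - k * dj0(k * R)) / (beta * h0(k * R) - k * dh0(k * R))
b0 = N * (a0 * (k / kp) * h0(k * R) / j0(kp * R) - (1j / kp) * j0(k * R) / j0(kp * R))

# continuity of psi and psi' at r = R (l = 0 terms)
match_val   = N * (j0(k * R) + 1j * k * a0 * h0(k * R)) - 1j * kp * b0 * j0(kp * R)
match_deriv = N * (k * dj0(k * R) + 1j * k**2 * a0 * dh0(k * R)) - 1j * kp**2 * b0 * dj0(kp * R)
assert abs(match_val) < 1e-12 and abs(match_deriv) < 1e-12
```

Both residuals vanish to machine precision, confirming the algebra that led to $a_l$.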
Plugging in the explicit values $k = \sqrt{\frac{2m}{\hbar^2}\lambda}$ and $k' = \sqrt{\frac{2m}{\hbar^2}(\lambda+\varepsilon)}$ and defining
\[ \gamma(\lambda,\varepsilon) \equiv \sqrt{\frac{2m}{\hbar^2}(\lambda+\varepsilon)}\,\frac{j_l'\left( k'R \right)}{j_l\left( k'R \right)} \]
this simplifies to:
\[ a_l = i\sqrt{\frac{\hbar^2}{2m\lambda}}\,\frac{\gamma(\lambda,\varepsilon)\, j_l(kR) - k\, j_l'(kR)}{\gamma(\lambda,\varepsilon)\, h_l^{(1)}(kR) - k\, h_l^{(1)\prime}(kR)} \]
Then, the cross-section can be written as:
\[ \sigma = 4\pi\sum_{l=0}^\infty (2l+1)\,|a_l|^2 = \frac{2\pi\hbar^2}{m\lambda}\sum_{l=0}^\infty (2l+1)\left| \frac{\gamma(\lambda,\varepsilon)\, j_l(kR) - k\, j_l'(kR)}{\gamma(\lambda,\varepsilon)\, h_l^{(1)}(kR) - k\, h_l^{(1)\prime}(kR)} \right|^2 \]
Alternately, in the limit that the scattering is not strong, it is possible to use the Born Approximation. As seen in class, this approximation gives the scattering amplitude as:
\[ f\left( \vec{k}',\vec{k} \right) = -\frac{m}{2\pi\hbar^2}\int e^{i\left( \vec{k} - \vec{k}' \right)\cdot\vec{r}}\, V(\vec{r})\, d^3x \]
In the case of a spherically symmetric potential, this can be reduced to:
\[ f\left( \vec{k}',\vec{k} \right) = -\frac{m}{2\pi\hbar^2}\int\!\!\int\!\!\int e^{i\left| \vec{k} - \vec{k}' \right| r\cos\theta}\, V(r)\, r^2\, dr\, d(\cos\theta)\, d\varphi = -\frac{2m}{\hbar^2}\frac{1}{\left| \vec{k} - \vec{k}' \right|}\int_0^\infty V(r)\sin\left( \left| \vec{k} - \vec{k}' \right| r \right) r\, dr \]
Plugging in our potential, this gives that in the limit of weak scattering,
\[ f\left( \vec{k}',\vec{k} \right) = \frac{2m\varepsilon}{\hbar^2}\frac{1}{\left| \vec{k} - \vec{k}' \right|}\int_0^R \sin\left( \left| \vec{k} - \vec{k}' \right| r \right) r\, dr = \frac{2m\varepsilon}{\hbar^2}\frac{1}{\left| \vec{k} - \vec{k}' \right|}\left[ \frac{1}{\left| \vec{k} - \vec{k}' \right|^2}\left( \sin\left( \left| \vec{k} - \vec{k}' \right| r \right) - \left| \vec{k} - \vec{k}' \right| r\cos\left( \left| \vec{k} - \vec{k}' \right| r \right) \right) \right]_0^R \]
\[ f\left( \vec{k}',\vec{k} \right) = \frac{2m\varepsilon}{\hbar^2}\frac{1}{\left| \vec{k} - \vec{k}' \right|^3}\left[ \sin\left( \left| \vec{k} - \vec{k}' \right| R \right) - \left| \vec{k} - \vec{k}' \right| R\cos\left( \left| \vec{k} - \vec{k}' \right| R \right) \right] \]
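The antiderivative step above is easy to check numerically. A sketch with $q \equiv |\vec{k}-\vec{k}'|$ and the prefactor $2m\varepsilon/\hbar^2$ set to 1:

```python
import math

def born_sq_well_closed(q, R):
    """(1/q^3)*[sin(qR) - qR*cos(qR)], the closed form derived above."""
    return (math.sin(q * R) - q * R * math.cos(q * R)) / q**3

def born_sq_well_numeric(q, R, n=200000):
    """(1/q) * integral_0^R of r*sin(q*r) dr, by the midpoint rule."""
    h = R / n
    s = sum((i + 0.5) * h * math.sin(q * (i + 0.5) * h) for i in range(n))
    return s * h / q

for q, R in ((0.5, 1.0), (2.0, 3.0)):
    assert abs(born_sq_well_closed(q, R) - born_sq_well_numeric(q, R)) < 1e-8
```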
We can determine $\left| \vec{k} - \vec{k}' \right|$ by recalling our initial arguments. We claimed that:
\[ k \equiv \sqrt{\frac{2m}{\hbar^2}\lambda} \qquad k' \equiv \sqrt{\frac{2m}{\hbar^2}(\lambda + \varepsilon)} \]
If the incident wavefunction was along $\hat{x}_3$, then:
\[ \vec{k} \equiv \sqrt{\frac{2m}{\hbar^2}\lambda}\,\hat{x}_3 \qquad \vec{k}' \equiv \sqrt{\frac{2m}{\hbar^2}(\lambda + \varepsilon)}\,\hat{r} \]
And expressing $|\vec{v} - \vec{u}| = \sqrt{|\vec{v}|^2 + |\vec{u}|^2 - 2\vec{u}\cdot\vec{v}}$, this can be written in terms of the scattering angle $\theta_{sc}$:
\[ \left| \vec{k} - \vec{k}' \right| = \sqrt{\frac{2m}{\hbar^2}}\sqrt{\lambda + (\lambda + \varepsilon) - 2\sqrt{\lambda(\lambda + \varepsilon)}\cos\theta_{sc}} \]
In the limit that the well is deep compared to the energy ($\varepsilon \gg \lambda$), the cross term is negligible and this is approximately
\[ \left| \vec{k} - \vec{k}' \right| \approx \sqrt{\frac{2m}{\hbar^2}\varepsilon} \]
Plugging this into the above result,
\[ f\left( \vec{k}',\vec{k} \right) \approx \frac{2m\varepsilon}{\hbar^2}\left( \frac{\hbar^2}{2m\varepsilon} \right)^{\frac{3}{2}}\left[ \sin\left( \sqrt{\frac{2m\varepsilon}{\hbar^2}}R \right) - \sqrt{\frac{2m\varepsilon}{\hbar^2}}\, R\cos\left( \sqrt{\frac{2m\varepsilon}{\hbar^2}}R \right) \right] \]
The coefficient can be simplified as $\frac{2m\varepsilon}{\hbar^2}\left( \frac{\hbar^2}{2m\varepsilon} \right)^{\frac{3}{2}} = \frac{\hbar}{\sqrt{2m\varepsilon}}$. This gives the differential cross section in the limit of weak scattering as:
\[ \frac{d\sigma}{d\Omega} \approx \left| f(\vec{k},\vec{k}') \right|^2 = \frac{\hbar^2}{2m\varepsilon}\left| \sin\left( \sqrt{\frac{2m\varepsilon}{\hbar^2}}R \right) - \sqrt{\frac{2m\varepsilon}{\hbar^2}}\, R\cos\left( \sqrt{\frac{2m\varepsilon}{\hbar^2}}R \right) \right|^2 \]
41 Perturbation Problem
Let $H_0$ be defined as $-\frac{1}{2}\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right)$ on the region $[0,1]\times[0,1]$ with Dirichlet boundary conditions on the box. Let there be some perturbing potential of the form:
\[ V(x,y) = \epsilon\, e^{-(x+y)} \]
Determine the leading order corrections to the first and second eigenvalues of $H_0$. Indicate the correct zeroth order states.
Solution:
The general zeroth order solution of this problem is:
\[ H_0\psi^{(0)} = \lambda^{(0)}\psi^{(0)} \quad \Rightarrow \quad \left[ -\frac{1}{2}\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) - \lambda^{(0)} \right]\psi^{(0)} = 0 \]
The solution of this is a summation of sines and cosines. Applying the boundary conditions on the edges eliminates all but the sine terms:
\[ \psi_{k,j}^{(0)}(x,y) = 2\sin(k\pi x)\sin(j\pi y) \qquad j,k = 1, 2, 3, \ldots \]
where we've used the known normalization $N = \sqrt{\frac{2}{L_x}}\sqrt{\frac{2}{L_y}}$ to determine the coefficient. Here we've determined:
\[ \lambda^{(0)} = \frac{\pi^2}{2}\left( k^2 + j^2 \right) \]
where the $\frac{1}{2}$ has come from the extra one half in front of the differential operator.
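A quick enumeration of these zeroth order levels (a sketch using $\lambda^{(0)} = \frac{\pi^2}{2}(k^2+j^2)$) confirms that the ground state $(1,1)$ is non-degenerate while the next level is the degenerate pair $(1,2)$, $(2,1)$, which is why the two cases are treated differently:

```python
import math

def level(k, j):
    # zeroth order eigenvalue of the 2D box with the 1/2 prefactor
    return 0.5 * math.pi**2 * (k**2 + j**2)

levels = sorted((level(k, j), (k, j)) for k in range(1, 4) for j in range(1, 4))
assert levels[0][1] == (1, 1)                    # unique ground state
assert math.isclose(levels[1][0], levels[2][0])  # first excited level is doubly degenerate
assert {levels[1][1], levels[2][1]} == {(1, 2), (2, 1)}
```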
In solving the perturbation problem:
\[ H\psi = \lambda\psi \qquad H = H_0 + V(x,y) \]
for the first two states, we have two cases to consider.
Ground State - Non-Degenerate Perturbation
For the ground state, we'll start with $\psi_{1,1}^{(0)}(x,y)$ and determine the first order correction to the wavefunction and the energy. From the notes, we have for non-degenerate states:
\[ \lambda_{j,k}^{(1)} = \langle \psi_{j,k}^{(0)}(x,y)|\, V(x,y)\,\psi_{j,k}^{(0)}(x,y)\rangle \]
\[ \psi_{j,k}^{(1)} = \sum_{n,m \neq j,k}\frac{\langle \psi_{n,m}^{(0)}(x,y)|\, V(x,y)\,\psi_{j,k}^{(0)}(x,y)\rangle}{\lambda_{j,k}^{(0)} - \lambda_{n,m}^{(0)}}\,\psi_{n,m}^{(0)} \]
Because we are contained in a finite region of space, there is no continuous spectrum in the correction (no integral term). So, to determine the corrections, we just have to calculate:
\[ \langle \psi_{j,k}^{(0)}(x,y)|\, V(x,y)\,\psi_{j,k}^{(0)}(x,y)\rangle = \int_0^1 dx\int_0^1 dy\,\psi_{j,k}^{(0)}(x,y)\, V(x,y)\,\psi_{j,k}^{(0)}(x,y) = 4\epsilon\int_0^1 dx\int_0^1 dy\,\sin^2(k\pi x)\sin^2(j\pi y)\, e^{-(x+y)} \]
\[ \langle \psi_{j,k}^{(0)}(x,y)|\, V(x,y)\,\psi_{j,k}^{(0)}(x,y)\rangle = 4\epsilon\left[ \int_0^1 dx\,\sin^2(k\pi x)\, e^{-x} \right]\left[ \int_0^1 dy\,\sin^2(j\pi y)\, e^{-y} \right] \]
And similarly for the other terms:
\[ \langle \psi_{n,m}^{(0)}(x,y)|\, V(x,y)\,\psi_{j,k}^{(0)}(x,y)\rangle = 4\epsilon\left[ \int_0^1 dx\,\sin(n\pi x)\sin(k\pi x)\, e^{-x} \right]\left[ \int_0^1 dy\,\sin(m\pi y)\sin(j\pi y)\, e^{-y} \right] \]
All of the integrals in this problem reduce to a single form:
_
1
0
sin(nx) sin(kx) e
x
dx
So, if we can determine this integral, then plug it into each part of the solution.
∫_0^1 sin(nπx) sin(kπx) e^{−x} dx = (1/(2i)²) ∫_0^1 ( e^{inπx} − e^{−inπx} )( e^{ikπx} − e^{−ikπx} ) e^{−x} dx

= −(1/4) ∫_0^1 [ e^{i(n+k)πx} − e^{i(n−k)πx} − e^{−i(n−k)πx} + e^{−i(n+k)πx} ] e^{−x} dx

= −(1/4) ∫_0^1 [ e^{[i(n+k)π−1]x} − e^{[i(n−k)π−1]x} − e^{[−i(n−k)π−1]x} + e^{[−i(n+k)π−1]x} ] dx
Each integral is then simple:

∫_0^1 sin(nπx) sin(kπx) e^{−x} dx = −(1/4) [ e^{[i(n+k)π−1]x}/[i(n+k)π − 1] − e^{[i(n−k)π−1]x}/[i(n−k)π − 1] + e^{[−i(n−k)π−1]x}/[i(n−k)π + 1] − e^{[−i(n+k)π−1]x}/[i(n+k)π + 1] ]_0^1

= −(1/4) [ ( e^{i(n+k)π−1} − 1 )/[i(n+k)π − 1] − ( e^{i(n−k)π−1} − 1 )/[i(n−k)π − 1] + ( e^{−i(n−k)π−1} − 1 )/[i(n−k)π + 1] − ( e^{−i(n+k)π−1} − 1 )/[i(n+k)π + 1] ]
The terms with matching exponential forms (n + k) can each be combined as:

( e^{i(n+k)π−1} − 1 )/[i(n+k)π − 1] − ( e^{−i(n+k)π−1} − 1 )/[i(n+k)π + 1]

= { ( e^{i(n+k)π−1} − 1 )[i(n+k)π + 1] − ( e^{−i(n+k)π−1} − 1 )[i(n+k)π − 1] } / { [i(n+k)π − 1][i(n+k)π + 1] }

= { i(n+k)π ( e^{i(n+k)π} − e^{−i(n+k)π} ) e^{−1} + ( e^{i(n+k)π} + e^{−i(n+k)π} ) e^{−1} − 2 } / { −π²(n+k)² − 1 }

= { −2π(n+k) sin(π(n+k)) e^{−1} + 2 cos(π(n+k)) e^{−1} − 2 } / { −π²(n+k)² − 1 }

= (2/e) [ e − cos(π(n+k)) + π(n+k) sin(π(n+k)) ] / [ 1 + π²(n+k)² ]

However, since n and k are integer values, n + k is also an integer value, so the sine term vanishes and cos(π(n+k)) = (−1)^{n+k}. Therefore,

( e^{i(n+k)π−1} − 1 )/[i(n+k)π − 1] − ( e^{−i(n+k)π−1} − 1 )/[i(n+k)π + 1] = (2/e) [ e − (−1)^{n+k} ] / [ 1 + π²(n+k)² ]
Repeating this for the other terms:

−( e^{i(n−k)π−1} − 1 )/[i(n−k)π − 1] + ( e^{−i(n−k)π−1} − 1 )/[i(n−k)π + 1]

= −{ ( e^{i(n−k)π−1} − 1 )[i(n−k)π + 1] − ( e^{−i(n−k)π−1} − 1 )[i(n−k)π − 1] } / { [i(n−k)π − 1][i(n−k)π + 1] }

= −{ i(n−k)π ( e^{i(n−k)π} − e^{−i(n−k)π} ) e^{−1} + ( e^{i(n−k)π} + e^{−i(n−k)π} ) e^{−1} − 2 } / { −π²(n−k)² − 1 }

= −{ −2π(n−k) sin(π(n−k)) e^{−1} + 2 cos(π(n−k)) e^{−1} − 2 } / { −π²(n−k)² − 1 }

Again noting that n − k is also an integer, the sine term is zero and the cosine is (−1)^{n−k}, so:

−( e^{i(n−k)π−1} − 1 )/[i(n−k)π − 1] + ( e^{−i(n−k)π−1} − 1 )/[i(n−k)π + 1] = −(2/e) [ e − (−1)^{n−k} ] / [ 1 + π²(n−k)² ]
Combining the two pairs with the overall factor of −1/4 gives the final result. While this could be simplified further, this form is usable:

∫_0^1 sin(nπx) sin(kπx) e^{−x} dx = (1/(2e)) [ ( e − (−1)^{n−k} )/( 1 + π²(n−k)² ) − ( e − (−1)^{n+k} )/( 1 + π²(n+k)² ) ]
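Since every matrix element in the problem is built from this single integral, the closed form is worth verifying numerically. A minimal sketch in plain Python (the helper names are mine, not from the notes), comparing it against Simpson's rule:

```python
import math

def closed_form(n, k):
    """Closed form of the integral of sin(n pi x) sin(k pi x) e^(-x) over [0, 1]."""
    e = math.e
    term = lambda m: (e - (-1) ** m) / (1 + (math.pi * m) ** 2)
    return (term(n - k) - term(n + k)) / (2 * e)

def numeric(n, k, steps=20000):
    """Composite Simpson's rule for the same integral (steps must be even)."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w * math.sin(n * math.pi * x) * math.sin(k * math.pi * x) * math.exp(-x)
    return total * h / 3

for n, k in [(1, 1), (1, 2), (2, 2), (3, 1)]:
    assert abs(closed_form(n, k) - numeric(n, k)) < 1e-8
```

For n = k the formula reduces to the diagonal case used below, and for |n − k| odd the (−1) factors flip sign, which is what produces the different e ± 1 combinations later in the problem.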
and we can also state the case of n = k:

∫_0^1 sin²(nπx) e^{−x} dx = (1/(2e)) [ ( e − 1 ) − ( e − 1 )/( 1 + π²(2n)² ) ]

= (1/(2e)) ( e − 1 ) [ ( 1 + 4π²n² ) − 1 ] / ( 1 + 4π²n² )

= [ 4π²n² / ( 1 + 4π²n² ) ] ( e − 1 )/(2e)

∫_0^1 sin²(nπx) e^{−x} dx = [ 2π²n² / ( 1 + 4π²n² ) ] ( e − 1 )/e
Using these results, we have the energy correction:

ε^(1)_{1,1} = ⟨ψ^(0)_{1,1}(x, y)| V(x, y) |ψ^(0)_{1,1}(x, y)⟩ = 4 [ ∫_0^1 sin²(πx) e^{−x} dx ]²

= 4 [ ( 2π²·1² / ( 1 + 4π²·1² ) ) ( e − 1 )/e ]²

ε^(1)_{1,1} = [ 16π⁴ / ( 1 + 4π² )² ] [ ( e − 1 )/e ]²
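As a cross-check on this result, ε^(1)_{1,1} can also be obtained by direct quadrature of ⟨ψ^(0)_{1,1}|V|ψ^(0)_{1,1}⟩. A short sketch (the Simpson helper is an assumption of this check, not from the notes):

```python
import math

def simpson(f, steps=4000):
    """Composite Simpson's rule on [0, 1] (steps must be even)."""
    h = 1.0 / steps
    s = f(0.0) + f(1.0)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

e, pi = math.e, math.pi

# <psi|V|psi> factorizes: 4 * (integral of sin^2(pi x) e^(-x) dx)^2
I11 = simpson(lambda x: math.sin(pi * x) ** 2 * math.exp(-x))
eps1_direct = 4 * I11 ** 2

# Closed form quoted in the text
eps1_closed = 16 * pi**4 / (1 + 4 * pi**2) ** 2 * ((e - 1) / e) ** 2

assert abs(eps1_direct - eps1_closed) < 1e-9
```

Both evaluate to roughly 0.38, so the first-order shift of the ground state is λε^(1)_{1,1} ≈ 0.38 λ.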
And the ground state wavefunction:

ψ^(1)_{1,1} = Σ_{(n,m)≠(1,1)} [ ⟨ψ^(0)_{n,m}(x, y)| V(x, y) |ψ^(0)_{1,1}(x, y)⟩ / ( ε^(0)_{1,1} − ε^(0)_{n,m} ) ] ψ^(0)_{n,m}

ψ^(1)_{1,1} = Σ_{(n,m)≠(1,1)} { −4 [ ∫_0^1 dx sin(nπx) sin(πx) e^{−x} ] [ ∫_0^1 dy sin(mπy) sin(πy) e^{−y} ] / [ (π²/2)( n² + m² ) − (π²/2)( 1² + 1² ) ] } 2 sin(nπx) sin(mπy)

ψ^(1)_{1,1} = Σ_{(n,m)≠(1,1)} { −4 (1/(2e)) [ ( e − (−1)^{n−1} )/( 1 + π²(n−1)² ) − ( e − (−1)^{n+1} )/( 1 + π²(n+1)² ) ] (1/(2e)) [ ( e − (−1)^{m−1} )/( 1 + π²(m−1)² ) − ( e − (−1)^{m+1} )/( 1 + π²(m+1)² ) ] / [ (π²/2)( n² + m² − 2 ) ] } 2 sin(nπx) sin(mπy)

so the first correction to the wave function can be expressed:

ψ^(1)_{1,1} = Σ_{(n,m)≠(1,1)} [ 4 / ( e²π²( 2 − n² − m² ) ) ] [ ( e − (−1)^{n−1} )/( 1 + π²(n−1)² ) − ( e − (−1)^{n+1} )/( 1 + π²(n+1)² ) ] × [ ( e − (−1)^{m−1} )/( 1 + π²(m−1)² ) − ( e − (−1)^{m+1} )/( 1 + π²(m+1)² ) ] sin(nπx) sin(mπy)
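Each coefficient in this series can be checked against the defining ratio ⟨ψ^(0)_{n,m}|V|ψ^(0)_{1,1}⟩ / (ε^(0)_{1,1} − ε^(0)_{n,m}); the sketch below (illustrative helper names, not from the notes) compares the two term by term:

```python
import math

e, pi = math.e, math.pi

def I(n, k):
    """Closed form of the integral of sin(n pi x) sin(k pi x) e^(-x) over [0, 1]."""
    t = lambda m: (e - (-1) ** m) / (1 + (pi * m) ** 2)
    return (t(n - k) - t(n + k)) / (2 * e)

def X(p):
    """Bracketed factor in the series for psi^(1)_{1,1}; algebraically 2e*I(p, 1)."""
    return ((e - (-1) ** (p - 1)) / (1 + pi**2 * (p - 1) ** 2)
            - (e - (-1) ** (p + 1)) / (1 + pi**2 * (p + 1) ** 2))

def coeff_series(n, m):
    """Coefficient of sin(n pi x) sin(m pi y) as written in the series above."""
    return 4 / (e**2 * pi**2 * (2 - n**2 - m**2)) * X(n) * X(m)

def coeff_direct(n, m):
    """<psi0_nm|V|psi0_11> / (eps0_11 - eps0_nm), times the normalization 2."""
    Vnm = 4 * I(n, 1) * I(m, 1)
    denom = (pi**2 / 2) * (2 - n**2 - m**2)
    return Vnm / denom * 2

for n, m in [(1, 2), (2, 1), (2, 2), (3, 3)]:
    assert abs(coeff_series(n, m) - coeff_direct(n, m)) < 1e-12
```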
First Excited State - Twice Degenerate Perturbation
This problem becomes more complicated in the first excited state since the states ψ^(0)_{1,2}(x, y) and ψ^(0)_{2,1}(x, y) have degenerate eigenvalues. The perturbative potential will (likely) break this degeneracy; however, we'll need to work through several steps to get the correct combination of these two states to perturb about. In this case, we can write the operator equation ( H_0 + λV − ε ) ψ = 0 in matrix form as:
[ π² − ε + λV_{1,1;1,1}    λV_{1,1;1,2}                λV_{1,1;2,1}                λV_{1,1;2,2}              . . . ] [ c_{1,1} ]
[ λV_{1,2;1,1}             5π²/2 − ε + λV_{1,2;1,2}    λV_{1,2;2,1}                λV_{1,2;2,2}              . . . ] [ c_{1,2} ]
[ λV_{2,1;1,1}             λV_{2,1;1,2}                5π²/2 − ε + λV_{2,1;2,1}    λV_{2,1;2,2}              . . . ] [ c_{2,1} ]  = 0
[ λV_{2,2;1,1}             λV_{2,2;1,2}                λV_{2,2;2,1}                4π² − ε + λV_{2,2;2,2}    . . . ] [ c_{2,2} ]
[ ⋮                        ⋮                           ⋮                           ⋮                         ⋱    ] [ ⋮      ]
Where we've defined V_{n,m;j,k} ≡ ⟨ψ^(0)_{n,m}(x, y)| V(x, y) |ψ^(0)_{j,k}(x, y)⟩ and expanded the solution as:

ψ_{n,m}(x, y) = Σ_{j,k} c^{(j,k)}_{n,m} ψ^(0)_{j,k}(x, y)
In order to solve the problem, we expand energies and coefficients as:

ε_{j,k} = ε^(0)_{j,k} + λε^(1)_{j,k} + λ²ε^(2)_{j,k} + . . .

c^{(j,k)}_{n,m} = c^{(0)(j,k)}_{n,m} + λd^{(1)(j,k)}_{n,m} + λ²d^{(2)(j,k)}_{n,m} + . . .

However, since we're only looking for corrections to the first excited state, we can assume each coefficient is zero except for c_{1,2} and c_{2,1}. This leaves us with the matrix equation:
[ V_{1,2;1,2} − ε^(1)    V_{1,2;2,1}          ] [ c_{1,2} ]
[ V_{2,1;1,2}            V_{2,1;2,1} − ε^(1)  ] [ c_{2,1} ]  + O(λ) = 0
So, we just need:

V_{1,2;1,2} = 4 [ ∫_0^1 dx sin(πx) sin(πx) e^{−x} ] [ ∫_0^1 dy sin(2πy) sin(2πy) e^{−y} ]

V_{1,2;2,1} = V_{2,1;1,2} = 4 [ ∫_0^1 dx sin(πx) sin(2πx) e^{−x} ] [ ∫_0^1 dy sin(2πy) sin(πy) e^{−y} ]

V_{2,1;2,1} = 4 [ ∫_0^1 dx sin(2πx) sin(2πx) e^{−x} ] [ ∫_0^1 dy sin(πy) sin(πy) e^{−y} ]
Plugging in the integral results from the previous part,

V_{1,2;1,2} = V_{2,1;2,1} = 4 [ ( 2π²·1² / ( 1 + 4π²·1² ) ) ( e − 1 )/e ] [ ( 2π²·2² / ( 1 + 4π²·2² ) ) ( e − 1 )/e ]

= [ 64π⁴ / ( ( 1 + 4π² )( 1 + 16π² ) ) ] [ ( e − 1 )/e ]² ≡ α
V_{1,2;2,1} = V_{2,1;1,2} = 4 (1/(2e)) [ ( e − (−1)^{1−2} )/( 1 + π²(1−2)² ) − ( e − (−1)^{1+2} )/( 1 + π²(1+2)² ) ] (1/(2e)) [ ( e − (−1)^{2−1} )/( 1 + π²(2−1)² ) − ( e − (−1)^{2+1} )/( 1 + π²(2+1)² ) ]

= 4 { (1/(2e)) [ ( e + 1 )/( 1 + π² ) − ( e + 1 )/( 1 + 9π² ) ] }²

= (1/e²) [ ( e + 1 )/( 1 + π² ) − ( e + 1 )/( 1 + 9π² ) ]²

= (1/e²) [ ( ( e + 1 )( 1 + 9π² ) − ( e + 1 )( 1 + π² ) ) / ( ( 1 + π² )( 1 + 9π² ) ) ]²

= (1/e²) [ 8π²( e + 1 ) / ( ( 1 + π² )( 1 + 9π² ) ) ]²

V_{1,2;2,1} = V_{2,1;1,2} = [ 64π⁴ / ( ( 1 + π² )( 1 + 9π² ) )² ] [ ( e + 1 )/e ]² ≡ β
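Because V(x, y) factorizes into x and y integrals, α and β can be verified by direct quadrature. A short check (the Simpson helper is an assumed utility, not from the notes):

```python
import math

def simpson(f, steps=4000):
    """Composite Simpson's rule on [0, 1] (steps must be even)."""
    h = 1.0 / steps
    s = f(0.0) + f(1.0)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

e, pi = math.e, math.pi
sin, exp = math.sin, math.exp

# Matrix elements by direct quadrature
V1212 = 4 * simpson(lambda x: sin(pi * x) ** 2 * exp(-x)) \
          * simpson(lambda y: sin(2 * pi * y) ** 2 * exp(-y))
V1221 = 4 * simpson(lambda x: sin(pi * x) * sin(2 * pi * x) * exp(-x)) ** 2

# Closed forms alpha and beta quoted in the text
alpha = 64 * pi**4 / ((1 + 4 * pi**2) * (1 + 16 * pi**2)) * ((e - 1) / e) ** 2
beta = 64 * pi**4 / ((1 + pi**2) * (1 + 9 * pi**2)) ** 2 * ((e + 1) / e) ** 2

assert abs(V1212 - alpha) < 1e-9
assert abs(V1221 - beta) < 1e-9
```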
Using these results, the problem reduces to:

[ α − ε^(1)    β          ] [ c_{1,2} ]
[ β            α − ε^(1)  ] [ c_{2,1} ]  = 0

We require the determinant to be zero,

( α − ε^(1) )² − β² = 0   ⟹   ε^(1)_± = α ± β
This gives the correct coefficients as:

[ α − ε^(1)_±    β            ] [ c^(±)_{1,2} ]
[ β              α − ε^(1)_±  ] [ c^(±)_{2,1} ]  = 0

Which has solutions:

( c^(+)_{1,2}, c^(+)_{2,1} ) = (1/√2)( 1, 1 ),    ( c^(−)_{1,2}, c^(−)_{2,1} ) = (1/√2)( 1, −1 )
We can denote these first excited states as ψ^(±)_{1,2}. The above results give that the combinations of wavefunctions to expand about are:

ψ^(+)_{1,2} ≡ (1/√2)( ψ^(0)_{1,2} + ψ^(0)_{2,1} ) = √2 ( sin(πx) sin(2πy) + sin(2πx) sin(πy) )

ψ^(−)_{1,2} ≡ (1/√2)( ψ^(0)_{1,2} − ψ^(0)_{2,1} ) = √2 ( sin(πx) sin(2πy) − sin(2πx) sin(πy) )
And the corrected energies of these wavefunctions are given as ε^(±)_{1,2} ≈ ε^(0)_{1,2} + λε^(1)_±, so:

ε^(+)_{1,2} ≈ (π²/2)( 1² + 2² ) + λ( α + β ) = 5π²/2 + λ [ 64π⁴ / ( ( 1 + 4π² )( 1 + 16π² ) ) ] [ ( e − 1 )/e ]² + λ [ 64π⁴ / ( ( 1 + π² )( 1 + 9π² ) )² ] [ ( e + 1 )/e ]²

ε^(−)_{1,2} ≈ (π²/2)( 1² + 2² ) + λ( α − β ) = 5π²/2 + λ [ 64π⁴ / ( ( 1 + 4π² )( 1 + 16π² ) ) ] [ ( e − 1 )/e ]² − λ [ 64π⁴ / ( ( 1 + π² )( 1 + 9π² ) )² ] [ ( e + 1 )/e ]²
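Numerically α ≈ 0.387 and β ≈ 0.012, so both levels shift upward and the degeneracy is split by 2λβ. A small illustrative script (the value λ = 0.1 is an arbitrary choice, not from the notes):

```python
import math

e, pi = math.e, math.pi
alpha = 64 * pi**4 / ((1 + 4 * pi**2) * (1 + 16 * pi**2)) * ((e - 1) / e) ** 2
beta = 64 * pi**4 / ((1 + pi**2) * (1 + 9 * pi**2)) ** 2 * ((e + 1) / e) ** 2

eps0 = 5 * pi**2 / 2                     # unperturbed first excited energy, about 24.67
lam = 0.1                                # illustrative coupling, not from the notes
eps_plus = eps0 + lam * (alpha + beta)   # symmetric combination
eps_minus = eps0 + lam * (alpha - beta)  # antisymmetric combination

assert eps_plus > eps_minus                                  # degeneracy is broken
assert abs((eps_plus - eps_minus) - 2 * lam * beta) < 1e-12  # splitting is 2*lambda*beta
```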