
Kvantemekanik II - Eksamen

Sebastian Yde Madsen


December 2022

1 Describe the concept of Hilbert space and dual space, operators acting on states, and operators with a continuous spectrum, including the momentum operator and its relation to translations.
The first postulate of quantum mechanics
is typically formulated in one of two ways, one of which is that the state of a physical system is specified by a state
vector, belonging to a complex vector space V, which we refer to as the state space of the system. As the state space
is a vector space, it is fundamentally characterized by two operations on its elements:

Vector addition: ∀ |α⟩, |β⟩ ∈ V, |α⟩ + |β⟩ ∈ V (1)

Scalar multiplication: ∀ |α⟩ ∈ V, c ∈ C, c|α⟩ ∈ V. (2)

The state space is closed under both operations, so any linear combination of states is again a state:

∀ |α⟩, |β⟩ ∈ V, a, b ∈ C: a|α⟩ + b|β⟩ ∈ V (3)

Now, as the state space is an inner product space, it is equipped with an inner product, which gives a notion of length and overlap between states.
Specifically, for each ket |α⟩ in state space there exists a corresponding bra ⟨α| in the dual space, with the (antilinear) correspondence:

c|α⟩ ⇔ c*⟨α|, (bijection) (4)

and the inner product is the bra-ket pairing, a complex number:

⟨α|β⟩ ∈ C (5)

In layman's terms, a Hilbert space can be thought of as a complex inner product space, and in quantum
mechanics we often refer to the state space as a Hilbert space, even though the term is strictly reserved for
complex inner product spaces that are also complete (and often, in physics, infinite-dimensional).

The second postulate of quantum mechanics

states that every observable physical quantity of a system is associated with some linear operator Â, that acts on the kets
of the state space:
∀ |α⟩ ∈ V, Â|α⟩ = |β⟩ ∈ V. (6)
noting that the corresponding dual bra is given as:
Â|α⟩ ⇔ ⟨α|Â† ∈ V* (7)
These operators are associative under both multiplication and addition, but commutative only under addition. As such, two operators generally do not commute:

[Â, B̂] = ÂB̂ − B̂ Â ̸= 0, e.g. : [r̂k , p̂l ] = iℏδk,l 1 (8)

and as we expect any observable physical quantity to be associated with a real scalar, most of these operators are furthermore
restricted to the subclass of Hermitian operators, which are characterized by being self-adjoint:

Â = Â† (9)

with real eigenvalues {a_i}, and a complete set of eigenstates, Σ_i |a_i⟩⟨a_i| = 1 (the orthonormal eigenstates form a basis),
such that:
Â|a_i⟩ = a_i|a_i⟩. (10)

This also means that we can expand a general ket |α⟩ in state-space in the basis of an operator:
|α⟩ = Σ_i |a_i⟩⟨a_i|α⟩ = Σ_i ⟨a_i|α⟩|a_i⟩ = Σ_i c_i|a_i⟩, (insertion of identity) (11)

Generally, the state space of a physical system may be either finite-dimensional or infinite-dimensional. This is a
consequence of the fact that we associate a physical quantity capable of assuming values in some continuous range with a
hermitian operator that has a continuous spectrum, and a physical quantity only capable of assuming a discrete set of
values, with a Hermitian operator that has a discrete spectrum.
Two typical exemplifications of continuous quantities are position and momentum x̂ and p̂, with:
x̂|x⟩ = x|x⟩, p̂|p⟩ = p|p⟩ (12)
both of which have a complete set of eigenstates, with orthonormality defined as:
⟨x|x′ ⟩ = δ(x − x′ ), ⟨p|p′ ⟩ = δ(p − p′ ) (13)
allowing a general state ket to be expanded as:
|α⟩ = ∫ dx ⟨x|α⟩|x⟩ (position basis) = ∫ dp ⟨p|α⟩|p⟩ (momentum basis) (14)

where the expansion-coefficients are commonly dubbed position-space wave-function and momentum-space wave-
function, respectively:
ψ(x) ≡ ⟨x|α⟩, ϕ(p) ≡ ⟨p|α⟩ (15)
To demonstrate the action of the momentum operator, one first needs to define the translation operator (unitary1 , but
not hermitian). Now, lets consider the infinitesimal translation operator which we define as an operator that ’displaces’ a
state ket in the position space representation by some infinitesimal amount:
T̂ (dx)|x⟩ = |x + dx⟩ (16)
As an ansatz for the form we take:
T̂(dx) ≡ 1 − iK̂·dx (17)
which is reasonable as T̂(0) = 1, and from dimensional analysis (K̂ must carry dimensions of inverse length) we end up with:

T̂(dx) ≡ 1 − i p̂·dx/ℏ (18)
and naturally we should expect a finite translation to correspond to a sequence of infinitesimal ones, T̂(Δx) = lim_{N→∞} (1 − i p̂·Δx/(ℏN))^N = exp(−i p̂·Δx/ℏ). Thus the momentum operator is the Hermitian generator of translations, in direct analogy with classical mechanics, where momentum generates infinitesimal translations via canonical transformations. This also illustrates a general method for constructing operators: identify the classical generator of a transformation and promote it to the Hermitian generator of the corresponding unitary operator. Note furthermore that [x̂, T̂(dx)] = dx to first order, which reproduces the canonical commutation relation [x̂, p̂] = iℏ.
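As a quick numerical illustration of momentum generating translations (a minimal sketch assuming ℏ = 1 and a Gaussian wave packet; all grid parameters and numbers are illustrative, not from the notes):

```python
import numpy as np

# Apply T(a) = exp(-i p a / hbar) in the momentum representation and verify
# that it shifts psi(x) to psi(x - a), i.e. moves the packet by +a.
hbar = 1.0
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=x[1] - x[0])   # momentum grid

psi = np.exp(-(x + 5.0) ** 2)        # Gaussian centered at x = -5
a = 3.0                               # translation distance

psi_p = np.fft.fft(psi)                                        # momentum space
psi_shifted = np.fft.ifft(np.exp(-1j * p * a / hbar) * psi_p)  # apply T(a)

x_mean = np.sum(x * np.abs(psi_shifted) ** 2) / np.sum(np.abs(psi_shifted) ** 2)
print(f"<x> after translation: {x_mean:.3f} (expected -2.0)")
```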

2 Outline the concept of unitary time evolution. Describe the different pictures of quantum mechanics.
• Time evolution is fundamentally governed by The Schrödinger Equation. Since this is a linear equation, we expect that
the time evolution of a general state ket |α, t0 ⟩ → |α, t0 ; t⟩, can be described by the action of some linear operator
Û.
• Goal is now to determine some operator Û(t, t0 ) that evolves a general state ket in time, such that:
|α, t0 ; t⟩ = Û(t, t0 )|α, t0 ⟩. (19)
We do this by considering the fundamental requirements for this operator:
1. Has to yield probability conservation in the Born interpretation (at least for closed systems), such that:
1 = Σ_a |c_a(t0)|² = Σ_a |c_a(t)|² (20)

from which it follows:


1 = ⟨α, t0 |α, t0 ⟩ = ⟨α, t0 ; t|α, t0 ; t⟩ = ⟨α, t0 |Û(t, t0 )† Û(t, t0 )|α, t0 ⟩ (21)
which in turn is only guaranteed for Û(t, t0)†Û(t, t0) = 1, s.t. Û(t, t0) must be Unitary.

1 Norm preserving operator characterized by either U U † = U † U = 1 or U −1 = U † .

2. Has to fulfill the composition property, meaning that we can describe the time-evolution as a product of
multiple smaller time-evolutions:
Û(t2 , t0 ) = Û(t2 , t1 )Û(t1 , t0 ), t2 > t1 > t0 (22)
3. Finally, as time is commonly assumed to be a continuous parameter, we expect that in the limit:
lim_{dt→0} Û(t0 + dt, t0) = 1, s.t. lim_{t→t0} |α, t0; t⟩ = |α, t0⟩ (23)

Now, an operator of the form Û(t0 + dt, t0) = 1 − iÂdt (for any Hermitian Â) fulfills 3) exactly, and 1) and 2) to first order in dt.
By considering the physical units, and by inspiration from the classical idea of Ĥ as the generator of time evolution, we define the
infinitesimal time-evolution operator to be:

Û(t0 + dt, t0) ≡ 1 − i (Ĥ/ℏ) dt (24)

and from composition, this yields:
Û(t + dt, t0) = Û(t + dt, t)Û(t, t0) = [1 − i(Ĥ/ℏ)dt] Û(t, t0) ⇒ Û(t + dt, t0) − Û(t, t0) = −i(Ĥ/ℏ)dt Û(t, t0). (25)
Now dividing this by dt and taking the limit dt → 0, we obtain the linear differential equation known as the
Schrödinger equation for the time-evolution operator:

iℏ ∂/∂t Û(t, t0) = Ĥ Û(t, t0) (26)

(N.B.: acting with this on a state ket from the right gives the usual SE for the state.)
and finally, the solution to this equation yields an expression for the time-evolution operator, depending on
the type of Ĥ:

Ĥ ≠ Ĥ(t): Û(t, t0) = e^{−iĤ(t−t0)/ℏ}
Ĥ = Ĥ(t) and [Ĥ(t0), Ĥ(t)] = 0: Û(t, t0) = exp(−(i/ℏ) ∫_{t0}^{t} Ĥ(t′)dt′) (27)
Ĥ = Ĥ(t) and [Ĥ(t0), Ĥ(t)] ≠ 0: Dyson series
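A minimal numerical sketch of the first case in (27) (assuming ℏ = 1 and a generic Hermitian 2×2 matrix as a stand-in for Ĥ; the numbers are arbitrary), checking that the resulting Û is unitary and norm-preserving:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, -1.0]])     # Hermitian two-level Hamiltonian
t = 2.5

U = expm(-1j * H * t / hbar)                # U(t, 0) = exp(-i H t / hbar)

print(np.allclose(U.conj().T @ U, np.eye(2)))   # unitarity: True
psi0 = np.array([1.0, 0.0])
psi_t = U @ psi0                                 # evolved state
print(abs(np.vdot(psi_t, psi_t)))                # norm preserved: 1.0
```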

• Until now, time-evolution = something happening to the state ket = Schrödinger picture.
However, in some instances it might be advantageous to consider the time-evolution from a conceptually different
perspective, inspired by classical mechanics, where it is the observable property that changes with time = the Heisen-
berg picture.
The two pictures are mathematically equivalent, which can be seen as a consequence of the associative axiom of multiplication.
• Expectation values in both pictures have to give the same value:
_S⟨α, t0; t|Â_S|α, t0; t⟩_S = ⟨α, t0| Û(t, t0)† Â_S Û(t, t0) |α, t0⟩ = _H⟨α, t0|Â_H(t)|α, t0⟩_H (28)

From which we generally state:

Schrödinger picture: state kets evolve in time, |α, t0⟩ → Û(t, t0)|α, t0⟩.
Heisenberg picture: observables evolve in time, Â → Û(t, t0)† Â Û(t, t0).


N.B.: the base kets are also time-dependent in the Heisenberg picture; they evolve "backwards" as |a′, t⟩_H = Û(t, t0)†|a′⟩, so that the expansion coefficients agree in the two pictures - see Eqs. (2.112)-(2.115) in Sakurai (3rd, corrected ed.).

• Now, as a general operator in the Heisenberg Picture is time-dependent (TD), it makes sense to investigate its TD,
and we find that it is generally governed by the following Equation of Motion (EOM):

d/dt Â_H(t) = d/dt [Û(t, t0)† Â_S Û(t, t0)]
= [d/dt Û(t, t0)†] Â_S Û(t, t0) + Û(t, t0)† [∂Â_S/∂t] Û(t, t0) + Û(t, t0)† Â_S [d/dt Û(t, t0)]
= (1/iℏ)[Â_H(t), Û(t, t0)† Ĥ_S Û(t, t0)] + Û(t, t0)† [∂Â_S/∂t] Û(t, t0)

where Û†Ĥ_SÛ = Ĥ_S if [Ĥ_S, Û] = 0, and the last term vanishes if Â_S ≠ Â_S(t) (no explicit time dependence).

As hinted in the last line, for systems with time-independent Hamiltonians, where Û(t, t0) = e^{−iĤ_S(t−t0)/ℏ}, we get
[Ĥ_S, Û(t, t0)] = 0, which yields:
Ĥ_H(t) = Û(t, t0)† Ĥ_S Û(t, t0) = Û(t, t0)† Û(t, t0) Ĥ_S = Ĥ_S. (29)
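A minimal numerical sketch of a Heisenberg-picture operator (assuming ℏ = 1 and H = ωS_z for a spin-1/2; the parameters are arbitrary), comparing Û†S_xÛ with the expected precession S_x cos(ωt) − S_y sin(ωt):

```python
import numpy as np
from scipy.linalg import expm

hbar, omega, t = 1.0, 2.0, 0.7
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]])
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

U = expm(-1j * omega * Sz * t / hbar)    # time-evolution operator for H = omega*Sz
Sx_H = U.conj().T @ Sx @ U               # Heisenberg-picture Sx(t)

print(np.allclose(Sx_H, Sx * np.cos(omega * t) - Sy * np.sin(omega * t)))  # True
```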

• The third and final picture, that is oftentimes utilized in QM, is the Interaction picture (sometimes referred to as
Dirac picture). In this picture, one is concerned with situations of interaction with some time-dependent potential.
Specifically, we consider situations of the form:
H = H0 + V(t) (30)
Now, in this picture, we define the state-kets in such a way, that the contribution to the time-evolution from H0 , is
eliminated. Specifically:
|α, t⟩I = eiH0 t/ℏ |α, t⟩S (31)
and the operators as:
AI = eiH0 t/ℏ AS e−iH0 t/ℏ (32)
and for some A_I, where A_S is not explicitly time-dependent, one can also find an equation of motion:
dA_I/dt = (1/iℏ)[A_I, H_0] (33)
• One of the things that truly justifies the interaction-picture formalism is its use in perturbation theory. Specifically,
in time-dependent perturbation theory we learned that performing an approximate expansion of the time-evolution
operator in the interaction picture, rather than of a state ket, allowed us to determine the transition amplitudes, and hence the evolved state, for any initial state (this is elaborated in topic 6).

3 Discuss the propagator and Feynman path integrals formalism.


• Let's consider some state ket |α, t0⟩ expanded in a basis of eigenkets {|a⟩} of some observable Â:
|α, t0⟩ = Σ_a c_a(t0)|a⟩ = Σ_a ⟨a|α, t0⟩|a⟩ = Σ_a |a⟩⟨a|α, t0⟩ (34)

with [Â, Ĥ] = 0, for a time-independent Hamiltonian Ĥ. Let's consider the time evolution of this state:
|α, t0; t⟩ = U(t, t0)|α, t0⟩ = Σ_a e^{−iĤ(t−t0)/ℏ}|a⟩⟨a|α, t0⟩ (35)
= Σ_a e^{−iE_a(t−t0)/ℏ}|a⟩⟨a|α, t0⟩ (36)

and its corresponding wave function:

ψ(x′, t) ≡ ⟨x′|α, t0; t⟩ = Σ_a e^{−iE_a(t−t0)/ℏ}⟨x′|a⟩⟨a|α, t0⟩ (37)

Now, note that by expanding |α, t0⟩ in the continuous position eigenbasis {|x′⟩}:
|α, t0⟩ = ∫ d³x′ c_{x′}(t0)|x′⟩ = ∫ d³x′ ⟨x′|α, t0⟩|x′⟩ = ∫ d³x′ |x′⟩⟨x′|α, t0⟩ (38)
the expansion coefficient c_a(t0) can be expressed in terms of the position eigenkets as:
c_a(t0) = ⟨a|α, t0⟩ = ∫ d³x′ ⟨a|x′⟩⟨x′|α, t0⟩ (can also be seen as using completeness of {|x′⟩}). (39)

• If we now consider the wave function at a different position x′′ ≠ x′, whilst utilizing the above:
ψ(x′′, t) ≡ ⟨x′′|α, t0; t⟩ = Σ_a e^{−iE_a(t−t0)/ℏ}⟨x′′|a⟩⟨a|α, t0⟩ (40)
= Σ_a e^{−iE_a(t−t0)/ℏ}⟨x′′|a⟩ [∫ d³x′ ⟨a|x′⟩⟨x′|α, t0⟩] (41)
= Σ_a e^{−iE_a(t−t0)/ℏ}⟨x′′|a⟩ [∫ d³x′ ⟨a|x′⟩ψ(x′, t0)] (42)
= ∫ d³x′ [Σ_a e^{−iE_a(t−t0)/ℏ}⟨x′′|a⟩⟨a|x′⟩] ψ(x′, t0), (43)
where the bracketed kernel is K(x′′, t; x′, t0).

it becomes evident that the wave function at x′′ can be described as the integral of some kernel times the original wave
function.

We call the kernel of the integral operator K(x′′ , t; x′ , t0 ) the propagator of wave mechanics. This effectively
means that if the initial wave function ψ(x′ , t0 ) is given, and one can work out the propagator, the time evolution of
the wave function is completely determined!
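A minimal numerical sketch of this statement (assuming ℏ = m = 1 and a harmonic-oscillator Hamiltonian discretized on a finite grid; the grid parameters are arbitrary): the kernel built from the eigenstate sum in (43) coincides with the matrix elements of e^{−iĤ(t−t0)/ℏ}:

```python
import numpy as np
from scipy.linalg import expm, eigh

N, L = 200, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Finite-difference Hamiltonian H = p^2/2 + x^2/2
lap = (np.diag(np.full(N - 1, 1.0), -1) - 2 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

E, V = eigh(H)                         # eigenvalues E_a, eigenvectors <x|a>
t = 1.3
K_spectral = V @ np.diag(np.exp(-1j * E * t)) @ V.T    # sum over eigenstates (43)
K_exact = expm(-1j * H * t)                             # <x''| exp(-iHt) |x'>

print(np.allclose(K_spectral, K_exact))   # True: the kernel is the propagator
```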
• The most common interpretation of the propagator is as a Transition Amplitude. This becomes evident by
considering the following rewriting:
K(x′′, t; x′, t0) = Σ_a e^{−iE_a(t−t0)/ℏ}⟨x′′|a⟩⟨a|x′⟩ (44)
= Σ_a (⟨x′′| e^{−iĤt/ℏ}|a⟩)(⟨a| e^{iĤt0/ℏ}|x′⟩) (45)
= ⟨x′′| e^{−iĤt/ℏ} [Σ_a |a⟩⟨a|] e^{iĤt0/ℏ}|x′⟩ (46)
= _H⟨x′′, t|x′, t0⟩_H (47)
where the subscripts denote that base ket and base bra are to be understood as eigenkets of the position operator in
the Heisenberg Picture. As such it becomes evident that we may regard the propagator as the transition amplitude:
the probability amplitude for a particle initialized at x′ at time t0 to be observed at x′′ at a later time t.
• Now, from the fact that the position operator eigenkets in the Heisenberg picture {|x′′, t′′⟩_H} form a complete set at
all times we know:
∫ d³x′′ |x′′, t′′⟩⟨x′′, t′′| = 1 (48)

which can be used to demonstrate the Composition Property of the transition amplitude. Specifically, dividing
the interval [t′; t′′′] into [t′; t′′] and [t′′; t′′′], we observe:
⟨x′′′, t′′′|x′, t′⟩ = ⟨x′′′, t′′′| [∫ d³x′′ |x′′, t′′⟩⟨x′′, t′′|] |x′, t′⟩ (49)
= ∫ d³x′′ ⟨x′′′, t′′′|x′′, t′′⟩⟨x′′, t′′|x′, t′⟩, (t′′′ > t′′ > t′) (50)

which we could repeat over again - dividing into smaller time intervals. Apparently, if we could somehow determine
the form of the transition amplitude over an infinitesimal time interval, we should be able to determine it for a finite
time interval by successively compounding them.
• Considering 1D and setting:
x_N ≡ x′′′⋯′ (N primes), t_N ≡ t′′′⋯′ (N primes) (51)
to ease notation. We could divide the time interval into N − 1 equal intervals with Δt ≡ t_j − t_{j−1} = (t_N − t_1)/(N − 1)
and then use the Composition Property to write the Transition Amplitude as:
⟨x_N, t_N|x_1, t_1⟩ = ∫ dx_{N−1} ∫ dx_{N−2} … ∫ dx_2 ⟨x_N, t_N|x_{N−1}, t_{N−1}⟩⟨x_{N−1}, t_{N−1}|x_{N−2}, t_{N−2}⟩ … ⟨x_2, t_2|x_1, t_1⟩ (52)

This means that we must sum over all possible paths in the space-time plane with the end points fixed - see fig. (1)

Figure 1: Two out of the infinitely many paths that can be chosen.

• Now, the fact that multiple paths have to be taken into consideration is somewhat in contradiction to the classical-
mechanics intuition, where, given the Lagrangian L_classical(x, ẋ) describing some particle/system, there exists a single
and definite path connecting the two points (x_1, t_1) and (x_N, t_N) in space-time. Specifically, Hamilton's principle
tells us that it is exactly the path that extremizes the action:
δS(N, 1) = δ ∫_{t_1}^{t_N} L_classical(x, ẋ)dt = 0 (53)

• Going from the following equivalence remark in a book by Dirac:

exp[iS(n, n − 1)/ℏ] ∼ ⟨x_n, t_n|x_{n−1}, t_{n−1}⟩ (54)

Feynman reached his famous path-integral formalism. Specifically, from the above equivalence, and the Com-
position property of the transition amplitude, it is evident that going from t_1 to t_N along a single spatial path
corresponds to:
Π_{n=2}^{N} exp[iS(n, n − 1)/ℏ] = exp[(i/ℏ) Σ_{n=2}^{N} S(n, n − 1)] = exp[iS(N, 1)/ℏ] (55)
such that the total transition amplitude, corresponding to the Superposition Principle, would be similar to the
(innumerably big) sum over all paths:

⟨x_N, t_N|x_1, t_1⟩ ∼ Σ_{'all paths'} exp[iS(N, 1)/ℏ] (56)

and now, considering the classical limit ℏ → 0, as prescribed by the correspondence principle, one can observe
a tendency for cancellation among various contributions from neighboring paths (slightly different paths have very
different phases because of the smallness of ℏ), with the exception of the spatial paths in close proximity to
the path x_min(t) that minimizes the action (as they do not differ to first order in the deformation) - i.e.: as long as we stay near
the classical path, constructive interference between neighboring paths is possible - see fig. (2)

Figure 2: Narrow tube of paths w. positive interference.

From this, and based on a dimensional analysis, Feynman reached the following expression for the transition amplitude:
⟨x_N, t_N|x_1, t_1⟩ = ∫_{x_1}^{x_N} D[x(t)] exp[(i/ℏ) ∫_{t_1}^{t_N} L_classical(x, ẋ) dt] (57)
with the integral measure defined as:
∫ D[x(t)] ≡ lim_{N→∞} (m/(2πiℏΔt))^{(N−1)/2} ∫ dx_{N−1} ∫ dx_{N−2} … ∫ dx_2 (58)
Feynman’s space-time approach based on path integrals is not too convenient for attacking practical problems in non-
relativistic quantum mechanics. Even for the simple harmonic oscillator it is rather cumbersome to explicitly evaluate
the relevant path integral, however, his approach is extremely gratifying from a conceptual point of view. By imposing
a certain set of sensible requirements on a physical theory, we are inevitably led to a formalism equivalent to the usual
formulation of quantum mechanics.
One may also mention the relevance for the Aharonov-Bohm effect: the vector potential enters the action along each path, so two paths enclosing a magnetic flux acquire a relative phase that is observable even though the particle never passes through a region of non-zero field.

4 Outline how the rotation group works as an operator on quantum me-
chanical states, and discuss the angular momentum algebra including the
spectrum of angular momentum states.
• Finite rotations of classical vectors in 3D are done through the application of 3 × 3 orthogonal matrices². E.g.,
a counter-clockwise rotation in the xy-plane (rotation about z) is defined through:

R_z(ϕ) = [[cos ϕ, −sin ϕ, 0], [sin ϕ, cos ϕ, 0], [0, 0, 1]]  →(Taylor expansion; infinitesimal rotation, O(εⁿ) = 0 for n ≥ 3)→  R_z(ε) = [[1 − ε²/2, −ε, 0], [ε, 1 − ε²/2, 0], [0, 0, 1]] (59)

from which the well-known fact also becomes somewhat evident: classical 3D rotations about different axes do not commute,
e.g.:
R_x(ε)R_y(ε) − R_y(ε)R_x(ε) = R_z(ε²) − 1, (ignoring O(εⁿ), n ≥ 3) (60)

These matrices (together with the operation of matrix multiplication) form a group, as ∀ R:
– Identity: R · 1 = R.
– Closure: R1 R2 = R3 .
– Associativity: R1 (R2 R3 ) = (R1 R2 )R3 .
– Inverse: RR−1 = R−1 R = 1.
which, as these orthogonal 3 × 3 matrices carry three independent parameters, is known as the special orthogonal
group in three dimensions, SO(3).
• The rotation of a general quantum state is performed with some operator that is associated with the classical
rotation operator R, and which has dimensionality appropriate to the ket space.
• Inspired by the infinitesimal translation and infinitesimal time-evolution operators we expect the form 1 − iĜε,
where we define the angular momentum operator such that the Hermitian generator for a rotation about the k'th axis
becomes Ĝ → J_k/ℏ (it makes good sense that angular momentum is physically the "thing" that generates rotations), s.t.
the infinitesimal rotation operator is:

D(dϕ) ≡ 1 − i(J_k/ℏ)dϕ (about the k'th axis) → D(n̂, dϕ) = 1 − i(J·n̂/ℏ)dϕ (about the axis specified by n̂) (61)

where we note that D → 1 as dϕ → 0, as usual. And as usual we obtain finite rotations by successively
compounding infinitesimal rotations, e.g.:

D_z(ϕ) = lim_{N→∞} [1 − i(J_zϕ)/(ℏN)]^N = 1 − iJ_zϕ/ℏ − J_z²ϕ²/(2ℏ²) + … = exp(−iJ_zϕ/ℏ) (62)

(the middle expression being the Taylor expansion of the exponential).

• These rotation operators are also postulated to fulfill the group properties.
• Classical rotations about different axes do not commute → we should not expect the rotation operators to commute,
and specifically it can be shown that:
[J_i, J_j] = iℏε_ijk J_k (63)
Note that ε_ijk equals +1 for even (cyclic) permutations of (1, 2, 3), −1 for odd permutations, and 0 otherwise.
• Finally, through utilization of this canonical commutation rule, it becomes evident that, in expectation, performing
some rotation, e.g. about z as |α⟩R = DZ (ϕ)|α⟩:

R ⟨α|Jx |α⟩R = ⟨α|Dz (ϕ)† Jx Dz (ϕ)|α⟩ = . . . = ⟨α|Jx cos(ϕ) − Jy sin(ϕ)|α⟩ = ⟨Jx ⟩cos(ϕ) − ⟨Jy ⟩sin(ϕ) (64)

2 A real square matrix whose columns and rows are orthonormal vectors, s.t. AAᵀ = 1. All unitary matrices with only real entries are
orthogonal.

is equivalent to just using the SO(3) rotation matrix. Specifically, for an angular momentum, we have:
⟨J_k⟩ → Σ_l R_kl ⟨J_l⟩ (65)

In general, this also means that if the Hamiltonian of some system contains an angular momentum, the system will have
a tendency to precess (rotate about some axis), as the time-evolution operator will then oftentimes be similar to
a rotation operator. Consider example 3.2.2 in Sakurai, where a spin-1/2 particle is subject to an external uniform
magnetic field that couples to the particle's spin and induces precession.

• A typical example of the application is that of spin-1/2 systems, here written in the basis of Pauli-z eigenstates:
|↑⟩ ≡ (1, 0)ᵀ, |↓⟩ ≡ (0, 1)ᵀ, so a general state ket is |α⟩ = ⟨↑|α⟩|↑⟩ + ⟨↓|α⟩|↓⟩ (66)

with the Pauli operators defined as:

σ_x ≡ [[0, 1], [1, 0]], σ_y ≡ [[0, −i], [i, 0]], σ_z ≡ [[1, 0], [0, −1]], ⟨S_k⟩ = (ℏ/2)⟨α|σ_k|α⟩. (67)
These obey the angular-momentum commutation rules, and we can in fact provide a 2 × 2 matrix representation of
the rotation operator D. Specifically we see:

exp(−iσ·n̂ϕ/2) = … = 1 cos(ϕ/2) − iσ·n̂ sin(ϕ/2) (68)

such that a general spin-1/2 state rotates as:

|α⟩_R = [1 cos(ϕ/2) − iσ·n̂ sin(ϕ/2)] |α⟩ (69)
Together with matrix multiplication, the 2 × 2 unitary rotation matrices exp(−iσ·n̂ϕ/2) form the special unitary group SU(2), and from the above
it becomes clear that the immediate difference between SO(3) and SU(2) is that:

exp(−iσ·n̂ϕ/2)|_{ϕ=2π} = −1, whereas R_z(2π) = 1 ⟹ SU(2) is not isomorphic to SO(3). (70)

It might furthermore be noted that generally the action of SU(2) corresponds to rotations of points on the Bloch sphere.
• Now, to study the angular momentum algebra and the eigenvalue spectrum that follows from the funda-
mental commutation relation, we specifically choose to consider two quantities that we can simultaneously
observe, namely the total angular momentum squared:
J² ≡ J_xJ_x + J_yJ_y + J_zJ_z (71)
for which we have [J², J_i] = 0 (seen from the commutation relation), and one of its components, conventionally J_z. Now, the derivation of
the exact eigenvalue equations is based on the ladder operators:
the exact eigenvalue equations are based on the ladder operators:

J± ≡ Jx ± iJy (72)

and specifically one ends up having:

J2 |j, m⟩ = ℏ2 j(j + 1)|j, m⟩, and Jz |j, m⟩ = mℏ|j, m⟩ (73)

with j being a non-negative integer or half-integer:

j = n/2, n = 0, 1, 2, … (74)

and m assuming the 2j + 1 values:
m = −j, −j + 1, …, j − 1, j (75)

which just corresponds to projections onto the corresponding axis (here z.)

check that nothing is missing here
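As a small numerical check of this spectrum (a minimal sketch assuming ℏ = 1; the helper function is illustrative), one can build the (2j+1)-dimensional matrices of J_z and J_± from the ladder-operator matrix elements and verify both [J_x, J_y] = iJ_z and the eigenvalues of J²:

```python
import numpy as np

def angular_momentum(j):
    m = np.arange(j, -j - 1, -1)          # m = j, j-1, ..., -j (basis ordering)
    Jz = np.diag(m).astype(complex)
    # <j, m+1 | J+ | j, m> = sqrt(j(j+1) - m(m+1))
    Jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), 1).astype(complex)
    Jm = Jp.conj().T
    Jx = (Jp + Jm) / 2
    Jy = (Jp - Jm) / (2 * 1j)
    return Jx, Jy, Jz

jq = 3 / 2
Jx, Jy, Jz = angular_momentum(jq)
print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz))                    # True
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
print(np.allclose(J2, jq * (jq + 1) * np.eye(int(2 * jq + 1))))   # True
```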

5 Describe how the coupling of angular momenta works in quantum me-
chanics, and discuss the Clebsch–Gordan coefficients.
• In QM, the coupling of angular momenta refers to the mathematical procedure used to determine the total angular
momentum of a system composed of multiple particles. This is of practical interest whenever we have a system with
multiple contributions to the angular momentum.
• Consider the case with one subspace having angular momentum operator J1 , and another having J2 . In this case we
define the total angular momentum operator as:

J = J1 ⊗ 1 + 1 ⊗ J2 ≡ J1 + J2 . (76)

Now, as J1 and J2 are angular momenta, they obey the well-known canonical commutation relation:

[J1i , J1j ] = iℏϵijk J1k , [J2i , J2j ] = iℏϵijk J2k , and [J1i , J2j ] = 0 (77)

( which holds in general for any pair of angular momentum operators from different subspaces.). As a direct consequence
of this, the total angular momentum operator also obeys the same commutation relation:

[Ji , Jj ] = iℏϵijk Jk (78)

and is therefore an angular momentum itself, in the technical sense. As such, it can be seen as the generator of
rotations for the entire system, and it also has a spectrum similar to that of the two angular momenta, from which it
is constructed.

• From the set of these three operators, and their projections, we may construct two different (but equivalent) sets of
basis kets, whilst noting that [J 2 , Ji ] = 0:

Uncoupled tensor-product basis eigenkets: |j1 j2 ; m1 m2 ⟩ , as simultaneous eigenkets for {J12 , J22 , J1z , J2z } (79)
Coupled tensor-product basis eigenkets: |j1 j2 ; jm⟩ , as simultaneous eigenkets for {J12 , J22 , J 2 , Jz } (80)
Eigenstates of the first are simply product states, |j1 j2; m1 m2⟩ = |j1 m1⟩ ⊗ |j2 m2⟩.³ The other states are harder to
construct. For simple systems, like a coupled spin- 12 system, the coupled basis set may be found using the ladder
operators on the highest or lowest projection (i.e. the one with highest or lowest m = m1 + m2 ). For 2 fermions, this
is |↑, ↑⟩ and |↓, ↓⟩ respectively. But for large values of j1 and j2 this becomes an extremely tedious process ! Instead,
let us consider the unitary transformation connecting the 2 bases:
|j1 j2; jm⟩ = Σ_{m1=−j1}^{j1} Σ_{m2=−j2}^{j2} |j1 j2; m1 m2⟩ ⟨j1 j2; m1 m2|j1 j2; jm⟩ (81)

with the expansion coefficients (or transformation matrix elements) commonly known as the Clebsch-Gordan Co-
efficients. As there is a total of (2j1 + 1)(2j2 + 1) basis states (and hence a lot of coefficients), we are lucky that many of them are 0.
Specifically, the only non-vanishing coefficients are the ones for which:

m = m1 + m2 , |j1 − j2 | ≤ j ≤ j1 + j2 . (82)

It should be noted that one doesn’t really ever directly calculate these matrix elements as inner products, but rather
starts from the highest or lowest projection and utilizes the recursion relation. Specifically - by utilizing the total
ladder operator:
J± = J1,± ⊗ 1 + 1 ⊗ J2,± ≡ J1,± + J2,± , (83)
on the coupled tensor product basis (LHS of (81)), and the individual ladder operators on the uncoupled tensor product
basis (RHS of (81)), one finds a way, for a system with fixed j1 , j2 and j, to recursively calculate the coefficients for
different m1 , m2 , through:
√[(j ∓ m)(j ± m + 1)] ⟨j1 j2; m1 m2|j1 j2; j, m ± 1⟩ (84)
= √[(j1 ∓ m1 + 1)(j1 ± m1)] ⟨j1 j2; m1 ∓ 1, m2|j1 j2; jm⟩ + √[(j2 ∓ m2 + 1)(j2 ± m2)] ⟨j1 j2; m1, m2 ∓ 1|j1 j2; jm⟩ (85)

3 Given 2 hermitian operators A, B. Then [A, B] = 0 if and only if there exists an orthonormal basis such that both A and B are diagonal

with respect to that basis - they share eigenstates, or equivalently, have ”simultaneous eigenkets”.

• Example: Consider the spin state of two electrons (fermions with spin 1/2); we define the total spin
angular momentum as:
S = S1 + S2 ≡ S1 ⊗ 1 + 1 ⊗ S2 (86)
which has the following 4 possible states:

Singlet state: |s = 0, m = 0⟩ = (1/√2)(|↑↓⟩ − |↓↑⟩)
Triplet states: |s = 1, m = 1⟩ = |↑↑⟩, |s = 1, m = 0⟩ = (1/√2)(|↑↓⟩ + |↓↑⟩), |s = 1, m = −1⟩ = |↓↓⟩ (87)
Now let's consider how one might reach these states. Either one might start from the basis states spanning
the total tensor-product space and use, respectively, the symmetrizer (symmetric triplet states) or anti-symmetrizer
(anti-symmetric singlet state), and normalize accordingly. Or we might utilize the approach used for deriving the
recursion relation - i.e. apply the total ladder operator to the maximum-projection state (LHS) and the sum of the single-particle
ladder operators to the uncoupled basis states (RHS). Consider:

S₋|s = 1, m = 1⟩ = (S₁₋ + S₂₋)|↑↑⟩ (88)
√[(1 + 1)(1 − 1 + 1)] |s = 1, m = 0⟩ = √[(1/2 + 1/2)(1/2 − 1/2 + 1)] |↓↑⟩ + √[(1/2 + 1/2)(1/2 − 1/2 + 1)] |↑↓⟩ (89)
√2 |s = 1, m = 0⟩ = |↓↑⟩ + |↑↓⟩ (90)
|s = 1, m = 0⟩ = (1/√2)(|↓↑⟩ + |↑↓⟩) (91)
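For completeness, the coefficients of eq. (87) can be checked against a library implementation; a minimal sketch using sympy's Clebsch-Gordan class (argument order CG(j1, m1, j2, m2, j, m) in the Condon-Shortley convention):

```python
from sympy import S
from sympy.physics.quantum.cg import CG

half = S(1) / 2

# Triplet |1,0> components: both 1/sqrt(2)
print(CG(half,  half, half, -half, 1, 0).doit())   # sqrt(2)/2
print(CG(half, -half, half,  half, 1, 0).doit())   # sqrt(2)/2

# Singlet |0,0> components: +1/sqrt(2) and -1/sqrt(2)
print(CG(half,  half, half, -half, 0, 0).doit())   # sqrt(2)/2
print(CG(half, -half, half,  half, 0, 0).doit())   # -sqrt(2)/2
```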

Check that nothing is missing here

6 Outline how time-dependent perturbation theory works and discuss Fermi's golden rule.
• Fundamentally, the goal of TDPT is to be able to describe the time evolution of a system governed by a time-dependent
Hamiltonian of the form:
H(t) = H0 + V(t), which with the SE gives iℏ ∂/∂t |α, t⟩_S = (H0 + V(t))|α, t⟩_S (92)
where the solution to the system described by H0 is known a priori, in the sense that we can write H0|n⟩ = E_n|n⟩.
• Now, denoting the time-evolution generated by H0 , as U0 (t), in the Interaction picture we have:

|α, t⟩I = U0 (t)† |α, t⟩S = eiH0 t/ℏ |α, t⟩S , (Removes H0 time-dependence.) (93)

Now, considering the LHS of the SE for a state in the interaction picture:

iℏ ∂/∂t [U0(t)†|α, t⟩_S] = −H0 U0(t)†|α, t⟩_S + U0(t)† iℏ ∂/∂t |α, t⟩_S
= −H0|α, t⟩_I + U0(t)†(H0 + V(t))U0(t)|α, t⟩_I
= −H0|α, t⟩_I + H0|α, t⟩_I + U0(t)†V(t)U0(t)|α, t⟩_I (using [H0, U0] = 0)
= U0(t)†V(t)U0(t)|α, t⟩_I = V_I(t)|α, t⟩_I

we recover an SE-type equation in the interaction picture:

iℏ ∂/∂t |α, t⟩_I = V_I(t)|α, t⟩_I (94)

• Here we chose to consider the perturbation expansion of the time-development operator in the interaction
picture. The advantage of using perturbation theory to calculate the time-development operator, as opposed to the
state-vector, is that once it is computed, the state at time t is readily determined for any initial state.

• From the SE for the state ket (eq. 94), an SE-type equation for the time-development operator is given as:

iℏ ∂/∂t U_I(t) = V_I(t)U_I(t), (this is just eq. (94) without acting on the initial state |α, t0⟩_I) (95)

which, through integration, and by using the initial condition U_I(0) = 1, has a solution of the form:
U_I(t) = 1 − (i/ℏ) ∫₀ᵗ V_I(t′)U_I(t′)dt′ (96)

that can be approximated through a form of recursive integration:

U_I(t) = 1 − (i/ℏ) ∫₀ᵗ dt′ V_I(t′) [1 − (i/ℏ) ∫₀^{t′} dt′′ V_I(t′′)U_I(t′′)]
= 1 − (i/ℏ) ∫₀ᵗ dt′ V_I(t′) + (−i/ℏ)² ∫₀ᵗ dt′ ∫₀^{t′} dt′′ V_I(t′)V_I(t′′)
⋮
= 1 + Σ_{n=1}^{∞} (−i/ℏ)ⁿ ∫₀ᵗ dt₁ ∫₀^{t₁} dt₂ … ∫₀^{t_{n−1}} dtₙ V_I(t₁)V_I(t₂)…V_I(tₙ)

known as a Dyson Series.


• Now, the smart thing is that even though we are approximating the time-development operator in the interaction picture,
the Transition Probability is the same as it is in the Schrödinger picture. Specifically, consider that even though⁴:

U_I(t, t0) = e^{iH0t/ℏ} U(t, t0) e^{−iH0t0/ℏ}, s.t. ⟨n|U_I(t, t0)|i⟩ = e^{i(E_n t − E_i t0)/ℏ}⟨n|U(t, t0)|i⟩ (97)

we have⁵:
P[i → n] = |c_n(t)|² = |⟨n|U(t, t0)|i⟩|² = |⟨n|U_I(t, t0)|i⟩|² (98)
As such, we move forward and consider the transition amplitudes ⟨n|U_I(t)|i⟩, and we observe that the expansion
coefficients are equivalently given, order by order, as:

c_n^(0) = ⟨n|1|i⟩ = δ_ni (99)
c_n^(1) = (−i/ℏ) ⟨n| ∫₀ᵗ V_I(t′)dt′ |i⟩ = (−i/ℏ) ∫₀ᵗ e^{iω_ni t′} V_ni(t′) dt′ (100)
⋮ (101)
. (101)

• Now, let's consider the example of having a constant potential turned on at t = 0. In this case the first-order element
yields:
c_n^(1) = (−i/ℏ) V_ni ∫₀ᵗ e^{iω_ni t′} dt′ (102)
corresponding to a Transition probability of:

P[i → n] = |c_n(t)|² ≈ |c_n^(1)|² = (4|V_ni|²/|E_n − E_i|²) sin²[(E_n − E_i)t/2ℏ] (103)

s.t. for states with no change in energy:

lim_{E_n→E_i} |c_n^(1)|² = |V_ni|² t²/ℏ² (104)
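A minimal numerical sketch of (103) (assuming ℏ = 1 and a two-level system with a weak constant coupling V_ni; the numbers are arbitrary), comparing the first-order result with the numerically exact evolution:

```python
import numpy as np
from scipy.linalg import expm

E_i, E_n, Vni = 0.0, 1.0, 0.05          # weak coupling so first order is good
H0 = np.diag([E_i, E_n])
V = np.array([[0.0, Vni], [Vni, 0.0]])   # constant perturbation switched on at t=0

for t in (1.0, 3.0, 6.0):
    exact = abs(expm(-1j * (H0 + V) * t)[1, 0]) ** 2            # |<n|U(t)|i>|^2
    first_order = 4 * Vni**2 / (E_n - E_i) ** 2 * np.sin((E_n - E_i) * t / 2) ** 2
    print(f"t={t}: exact={exact:.5f}, 1st order={first_order:.5f}")
```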
In a realistic situation where our formalism is applicable, there is usually a group of final states, all with nearly the
same energy as the energy of the initial state |i⟩. In other words, a final state forms a continuous energy spectrum in
the neighborhood of Ei . In this case we instead consider:

P[i → [n]] = Σ_{n: E_n≈E_i} |c_n^(1)|² (105)

4 Note that these basis kets are energy eigenkets of H0


5 The modulus squared 'kills' the extra exponential factor.

and then one usually defines the number of states in the "quasi-continuum" of final states via the density of states
ρ(E_n) in the interval (E, E + dE), s.t.:

Σ_{n: E_n≈E_i} |c_n^(1)|² ⇒ ∫ dE_n ρ(E_n) |c_n^(1)|² (106)

which, for large values of t, is linear in t (we consider the long-time limit):

lim_{t→∞} P[i → [n]] = (2π/ℏ) |V_ni|² ρ(E_n)|_{E_n≈E_i} t (107)

which yields a transition rate:

d/dt [lim_{t→∞} P[i → [n]]] = (2π/ℏ) |V_ni|² ρ(E_n)|_{E_n≈E_i} (108)

or equivalently (for a single final state):

d/dt [lim_{t→∞} P[i → n]] = (2π/ℏ) |V_ni|² δ(E_n − E_i) (109)

known as Fermi's Golden Rule.


What does this rule tell us? It tells us that, to first order, the transition rate into a (quasi-)continuum of final states is constant in time and is set by just two quantities: the modulus squared of the matrix element of the perturbation between initial and final state, and the density of final states; the delta function expresses energy conservation for a constant perturbation (for a harmonic perturbation ∝ e^{∓iωt} the final-state energy is instead shifted by ±ℏω).

7 Describe the concept and formalism of discrete symmetries in quantum mechanics, in particular, time reversal symmetry.
• From classical mechanics we are well familiar with the idea that a symmetry is oftentimes accompanied by a con-
servation law, as is the case, for instance, when the Lagrangian L of the system has no explicit dependence on the
position coordinate (invariance):

L|_x = L|_{x+δx} ⇒ ∂L/∂x = 0, (x is a cyclic coordinate) (110)

and we say that the system is symmetric under variation of the position. In this case the Euler-Lagrange equation tells
us that the corresponding conjugate momentum is conserved, i.e.:
d/dt (∂L/∂ẋ) = ∂L/∂x ⇒ d/dt (∂L/∂ẋ) = 0 ⇒ (integration) ∂L/∂ẋ = const. (111)
This is commonly referred to as Noether’s theorem. The same is observed in QM. In general, symmetries in QM
are associated via a unitary operator; like with translation or rotation.
• Consider a general unitary symmetry operator:
S = exp(−iαG) (112)
with G being the hermitian generator of the symmetry - even though the system may not have this particular symmetry.
Now suppose our Hamiltonian H is invariant under this particular unitary symmetry transformation:
S † HS = H ⇔ [S, H] = 0 (113)
which entails that it also commutes with the generator G of S⁶. From the Heisenberg EOM:
dG_H/dt = (1/iℏ)[G_H, H] (114)
we see that if [G_H, H] = 0 then dG_H/dt = 0. Now, as we have the general definition:

G_H = U(t, t0)† G U(t, t0) (115)

we see that if [G, U(t, t0)] = 0 then [G, H] = 0 entails [G_H, H] = 0. Fortunately we know that because
[H, U(t, t0)] = 0⁷ and [G, H] = 0, we also have [G, U(t, t0)] = 0, s.t. [G_H, H] = 0, which in turn (from the EOM) tells us that
G is a constant of the motion:
dG_H/dt = 0 (116)
6 Consider the Taylor Expansion of S to see this.
7 Consider Taylor expansion of U (t, t0 )

• In QM we generally distinguish between continuous symmetries, which are operations that can be obtained by
applying successively infinitesimal symmetry operations (and always yields a corresponding conserved property), and
discrete symmetries.
• Typical examples of continuous symmetries:
– if the system is invariant under translation, i.e. [T̂(dx), H] = 0 (equivalently [p, H] = 0), then dp/dt = 0, as momentum is the generator of
translation in the same sense as G is for S.
– if the system is invariant under rotation, the angular momentum is conserved (same argument as above).
• A example of a discrete symmetry in QM is the Time reversal operator. Initially it might be worth noting that
the term time reversal is a bit of a misnomer, and that a more appropriate characterization might be reversal of
motion.

• Now, by considering the SE:

iℏ ∂/∂t ψ(x, t) = Hψ(x, t) (117)

for some energy-eigenstate wave function ψ(x, t) = c(x)e^{−iEt/ℏ} (with H real), it is easily seen that ψ(x, −t) is not a solution, but
that ψ*(x, −t) is, hinting that the operator corresponding to "time reversal" might somehow have something
to do with complex conjugation.

Now, usually we consider unitary operators, with the characteristic ⟨α|U†U|β⟩ = ⟨α|β⟩, but this is too restrictive.
Instead, we propose the weaker requirement |⟨α̃|β̃⟩| = |⟨α|β⟩|, which a complex-conjugated state fulfils.
A type of operator fulfilling this is the anti-unitary operator, defined by:

θ = UK, θ(a|α⟩ + b|β⟩) = a*U|α⟩ + b*U|β⟩ (118)

with U unitary and K the complex-conjugation operator.

• At this point, we choose to introduce the time-reversal operator, denoted Θ, and we let |α̃⟩ = Θ|α⟩ be a "time-reversed"
state. The most fundamental expectations we have of this operator are that:

Θ|p⟩ = |−p⟩, Θ|x⟩ = |x⟩ (119)

such that:
⟨α|x|α⟩ = ⟨α̃|x|α̃⟩, ⟨α|p|α⟩ = −⟨α̃|p|α̃⟩ (120)
which means that:
ΘpΘ−1 = −p, ΘxΘ−1 = x (121)
Now, by utilizing the infinitesimal time-evolution operator, we see that if motion obeys time-reversal symmetry, we
must have:    
iHdt iH(−dt)
Θ 1− |α⟩ = 1 − Θ |α⟩ (122)
ℏ ℏ
or equivalently that:
−iHΘ |α⟩ = ΘiH |α⟩ (123)
From which we are in a position to emphasize the fact Θ cannot be a unitary operator, as that would make it possible
to cancel the imaginary unit in the above, leaving one with −HΘ = ΘH ⇔ {H, Θ} = 0, which would in turn mean,
that for some energy eigen-ket |n⟩, one would have:

HΘ |n⟩ = −ΘH |n⟩ = − En Θ |n⟩ (124)

a negative value for the eigen-energy, which is obviously nonsensical even in the very elementary case of a free particle,
for which there is no state lower than a particle at rest (momentum eigenstate with momentum eigenvalue zero). By
the anti-linearity of our ”Time reversal operator”, we therefore conclude: [H, Θ] = 0.
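A minimal numerical sketch for the spin-1/2 case (assuming the standard convention Θ = −iσ_y K, with K complex conjugation), checking ΘSΘ⁻¹ = −S for all spin components and Θ² = −1:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
U = -1j * sy                                   # unitary part of Theta = U K

# Theta S Theta^{-1} = U S* U^dagger (the K's conjugate whatever sits in between)
print(all(np.allclose(U @ S.conj() @ U.conj().T, -S) for S in (sx, sy, sz)))  # True

# Theta^2 = U K U K = U U* = -1 for half-integer spin (relevant for Kramers degeneracy)
print(np.allclose(U @ U.conj(), -np.eye(2)))   # True
```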

One might also mention Kramers degeneracy here: for systems with half-integer total spin one has Θ² = −1, so |n⟩ and Θ|n⟩ are necessarily orthogonal, and every energy level of a time-reversal-invariant Hamiltonian (e.g. with static electric but no magnetic fields) is at least doubly degenerate.

8 Describe the basic idea of scattering theory and the treatment of scatter-
ing problems within the Born approximation.
• In the following treatment, we assume an Elastic Scattering process, in that the internal states of the system, such
as its energy and angular momentum, are conserved.

• We will treat scattering as the process in which some initial state |i⟩ is subjected to a time-independent potential
V̂ (r), for a finite period of time, causing it to transition into some final state. |n⟩.

• Specifically, we will be considering a localised potential, and view it as a time-dependent perturbation of some
a priori well-known time-independent hamiltonian Ĥ 0 , and we therefore write:

Ĥ ≡ Ĥ 0 + V̂ (r). (125)

The fact that we assume a localized potential means that the particle will not experience the potential as x → ±∞,
and can thus be described, both initially (before scattering) and finally (after scattering), as an unbound state⁸.
We will be modeling the initial state as a free particle, s.t. |i⟩ ≡ |k⟩, and therefore set Ĥ₀ ≡ p̂²/2m as the kinetic
energy operator, with plane-wave eigenstates |k⟩ and eigenvalues E_k = ℏ²k²/2m.
• To enable the use of time-dependent perturbation, which assumes discrete |i⟩ , |n⟩, we can temporarily discretize our
state to a box (of side-lengths = L) of discrete k points, with periodic boundary conditions, such that our wave
function (in position repr.) becomes:
ψ_k(x) = ⟨x|k⟩ = (1/√L³) e^{ik·x} = (1/L^{3/2}) e^{ik·x}, ⟨k′|k⟩ = δ_{k′,k} (126)

and then take the limit L → ∞ at the end to mimic a continuous state.
• Now, as usual, we will be dealing with the time-dependent perturbation by considering the expansion of the time-
development operator in the interaction picture. Specifically we choose to only consider the contribution from the
terms c_n^(0) + c_n^(1), corresponding to:

⟨n|U_I(t, t0)|i⟩ = δ_ni − (i/ℏ) ⟨n|V(r)|i⟩ ∫_{t0}^{t} dt′ e^{iω_ni t′} (127)

now, to accommodate the asymptotic case t0 → −∞, where the initial plane wave is far away from the scattering
potential, we define the T matrix by rewriting the above, s.t.:

⟨n|U_I(t, t0)|i⟩ = δ_ni − (i/ℏ) T_ni ∫_{t0}^{t} dt′ e^{iω_ni t′ + εt′} (128)

and define the S matrix as:

S_ni = lim_{t→∞} lim_{ε→0} ⟨n|U_I(t, −∞)|i⟩ = δ_ni − 2πi δ(E_n − E_i) T_ni (129)

Clearly, the S matrix consists of two parts. One part is that in which the final state is the same as the initial state.
The second part, governed by the T matrix, is one in which some sort of scattering occurs.
From this, utilizing the usual definition of the scattering rate, we reach:

w(i → n) = d/dt P[i → n] = d/dt |⟨n|U_I(t, −∞)|i⟩|² = (2π/ℏ) |T_ni|² δ(E_n − E_i) (130)

Now, by initially determining the density of final states ρ(E_n), by utilizing the assumption of elastic scattering and
the "box" setting, one is enabled to integrate the above single-state transition rate over all the possible final states
in the vicinity of E_n, which together with the probability flux yields an expression for the differential cross-section⁹ (per
solid angle):

dσ/dΩ ∝ |T_ni|² (131)

i.e. the differential cross-section is proportional to the modulus squared of the T-matrix element.
• Now, in the following we will consider both what the state that solves the scattering problem looks like and, in
doing so, how we can define the T operator.
A fundamental and reasonable requirement is that the action of this operator on some initial state, should somehow
correspond to the effect of the potential on the system state, i.e.:

T |i⟩ = V |ψ⟩ (132)

8 This effectively means that the Hamiltonian has a continuous spectrum - an infinite-dimensional Hilbert space.
9 N.B. the cross section itself dσ is defined as the transition rate divided by the prob. flux.

Now, based on the fundamental requirement that we must have |ψ⟩ → |i⟩ when V → 0, we make the following ansatz:

|ψ⟩ = |i⟩ + [1/(E − H0)] V|ψ⟩ (133)

which unfortunately is singular and therefore doesn't have a well-defined inverse (consider a matrix representation of E − H0).
Without delving deep into the mathematical reasoning behind it, we fix this by introducing a complex part:

|ψ^(±)⟩ = |i⟩ + [1/(E − H0 ± iε)] V|ψ^(±)⟩ (134)
This is also known as the Lippmann-Schwinger Equation (LSE). Now, let's consider how the corresponding wave
function looks (in the position basis):

⟨x|ψ^(±)⟩ = ⟨x|i⟩ + ∫_{R³} d³x′ ⟨x| 1/(E − H0 ± iε) |x′⟩ ⟨x′|V|ψ^(±)⟩ (135)
The quest is then to evaluate the matrix element in the integral, which can be done; it is a Green's function for the
Helmholtz equation. Utilizing this in the above and assuming |x| ≫ |x′| (which makes good sense experimentally, as the detector is far from the potential), we
are left with:

ψ(x) = ⟨x|ψ^(±)⟩ = (1/L^{3/2}) [e^{ik·x} + f(k′, k) e^{ikr}/r] (136)
where we define the scattering amplitude as:
f(k′, k) = −(mL³/2πℏ²) ⟨k′|V|ψ⟩ (137)
From the definition of the differential cross-section and (132), we therefore know: dσ/dΩ = |f(k′, k)|².
• Now the hard part is to calculate the matrix element. Operating on the LSE from the left with V, and utilizing (132),
gives the following relation¹⁰:

T = V + V [1/(E − H0 ± iε)] T = V + VGV + VGVGV + … (138)

Taking this to lowest order, s.t. T = V, or equivalently |ψ^(±)⟩ ≈ |k⟩, is called the first-order Born approximation,
and yields:

f^(1)(k′, k) = −(m/2πℏ²) ∫ d³x′ e^{i(k−k′)·x′} V(x′) (139)
In general, if the potential is strong enough to develop a bound state, the Born approximation will probably give a
misleading result; furthermore, the Born approximation tends to get better at higher energies. Qualitatively, this is because the approximation replaces the true wave function inside the range of the potential by the undistorted plane wave |k⟩: a potential strong enough to bind distorts the wave function significantly, while at high incident energies the distortion accumulated over the finite range of the potential is relatively small, so the replacement improves.
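A minimal numerical sketch of (139) for a central potential (assuming ℏ = m = 1 and a Yukawa potential V(r) = V₀e^{−μr}/(μr), for which the closed form is known), comparing the numerically evaluated radial integral with the analytic result:

```python
import numpy as np
from scipy.integrate import quad

# First-order Born amplitude for a central potential:
#   f(q) = -(2 / q) * Integral_0^inf r V(r) sin(q r) dr,  q = |k - k'| = 2 k sin(theta/2)
# Analytic Yukawa result: f(q) = -2 V0 / (mu (mu^2 + q^2)).
V0, mu = 0.3, 1.0
V = lambda r: V0 * np.exp(-mu * r) / (mu * r)

def f_born(q):
    integral, _ = quad(lambda r: r * V(r) * np.sin(q * r), 0, np.inf)
    return -2.0 / q * integral

for q in (0.5, 1.0, 2.0):
    print(f"q={q}: numeric {f_born(q):.5f}, analytic {-2*V0/(mu*(mu**2+q**2)):.5f}")
```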

9 Outline the theoretical description of low-energy scattering and describe the physical meaning of the scattering length.
• Now, this time we are specifically interested in considering scattering off spherically symmetric potentials V(r) with
some finite range a. From the representation of the Lippmann-Schwinger Equation in the position basis we already
know the general form of the solution in the asymptotic limit |r| ≫ a¹¹:

⟨x|ψ^(+)⟩ = ψ(r) ∝ e^{ik·r} (plane wave) + f(θ) e^{ikr}/r (spherical wave, weighted by the scattering amplitude) (140)

with the usual definition:

Differential cross-section: dσ/dΩ = |scattering amplitude|² = |f(θ)|² (141)
In this asymptotic limit, we picture the setup as having some incoming plane-wave, which is scattered off the potential
like a spherical wave, but also has a plane-wave component ”going through” the potential.

10 The last equality comes from an assumption of a weak potential V , and is simply an order-by-order approx.
11 Realistic in practice - detector far away from potential relative to range of potential

• In considering scattering by a spherically symmetric potential V (r), we often examine how states with definite angular
momenta l, are affected by the scatterer, which lead to the method of partial waves.

Now, lets quickly establish some facts about the free-particle states. Specifically, as these are just eigenstates of
the kinetic energy operator12 , which also commutes with L2 and Lz , they can indeed be represented in the mutual
eigenbasis of |E, l, m⟩, and as it turns out that the transformation function between the plane-wave basis |k⟩ and this
new basis, is proportional to the spherical harmonics for the corresponding l, m:
⟨k|E, l, m⟩ ∝ Yml (k̂) (142)
we note how this means that that the plane wave state |k⟩ can be expressed as a superposition of free spherical wave
states with all possible l-values. Specifically, utilizing this, alongside the position-space wave-function ⟨x|E, l, m⟩, one
can find the following expression for the scattering amplitude:

f(k′, k) = f(θ) = Σ_{l=0}^{∞} (2l + 1) f_l(k) P_l(cos θ), with partial-wave amplitude f_l(k) = −πT_l(E)/k (143)

• By utilizing this in (140), we can find that the only major change (in the asymptotic limit) when V ≠ 0 (the scattering
potential is turned on) is that the outgoing spherical wave is attributed a phase shift δ_l, which corresponds to the
scattering amplitude taking the form:

f(θ) = (1/k) Σ_{l=0}^{∞} (2l + 1) e^{iδ_l} sin(δ_l) P_l(cos θ), i.e. f_l(k) = e^{iδ_l} sin(δ_l)/k (144)

• At low energies – or, more precisely, when 1/k is comparable to or larger than the range a – partial waves of
higher l are, in general, unimportant. This point may be obvious classically because the particle cannot penetrate
the centrifugal barrier; as a result the potential inside has no effect. In terms of quantum mechanics, the effective
potential for the l-th partial wave is given by:
V_eff = V_scattering + V_barrier = V(r) + ℏ²l(l + 1)/(2mr²) (145)
In low-energy scattering, the only l we need to consider is therefore l = 0 (s-wave scattering), as the higher partial waves simply
cannot overcome the centrifugal barrier and actually probe the potential. If the energy is very low, the solution outside the range of the potential must resemble
that of a free particle, i.e. the radial wave function A_l(r) must look like j_l(ρ) (a spherical Bessel function). For s-waves one finds k cot δ₀ → −1/a as k → 0, which defines the scattering length a: the exterior radial function u₀(r) ∝ sin(kr + δ₀) then approaches a straight line whose extrapolated zero lies at r = a, so the scattering length is where the zero-energy wave function crosses the r-axis, and the low-energy cross section becomes σ = 4πa², independent of the details of the potential.
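A minimal numerical sketch (assuming ℏ = m = 1 and an attractive spherical square well of depth V₀ and range R; depth and range are arbitrary) of the s-wave phase shift and the scattering length extracted from it:

```python
import numpy as np

# Matching u'/u at r = R gives  k cot(kR + delta0) = kp cot(kp R),  kp = sqrt(k^2 + 2 V0),
# and hence delta0(k); the scattering length is a = -lim_{k->0} delta0/k,
# with closed form a = R (1 - tan(k0 R)/(k0 R)), k0 = sqrt(2 V0).
V0, R = 0.5, 1.0

def delta0(k):
    kp = np.sqrt(k**2 + 2 * V0)
    return np.arctan(k / kp * np.tan(kp * R)) - k * R

k_small = 1e-4
a_numeric = -delta0(k_small) / k_small
k0 = np.sqrt(2 * V0)
a_exact = R * (1 - np.tan(k0 * R) / (k0 * R))
print(a_numeric, a_exact)            # both ~ -0.557: negative, well too shallow to bind
print(4 * np.pi * a_exact**2)        # low-energy cross section sigma -> 4 pi a^2
```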

10 Describe the concept of identical particles in quantum mechanics, its implication for multi-particle quantum states, and its relation to the formalism of second quantization.
(N.B.: Weeks: 11 and 12. corresponding to chapters 7.1, 7.2, 7.3, 7.5, and 7.7 + lecture notes week 12)
• In Classical mechanics, all identical particles are distinguishable, meaning that we are able to label any set
of ’identical’ particles (e.g. two extremely similar billiard balls), and track their position in time, enabling us to
continuously differentiate between them.
• In Quantum mechanics, all identical particles13 are indistinguishable (e.g. we cannot differentiate between two
electrons) - consider the two equivalent situations on fig. (3)

Figure 3: Two identical particles taking two different paths - we cannot tell which of the paths taken and as such, we cannot
distinguish between the two particles.

12 I.e. for a free particle: H0 = p²/2m.
13 We define identical particles as those that have the same intrinsic properties, such as the same mass, charge, spin, ...

• A profound consequence of identical particles being indistinguishable in QM, is Exchange Degeneracy, which
basically states that for any given system of identical particles, there exists an infinite number of different state-kets
which mathematically can describe the system. E.g. consider a system of two identical spin- 12 particles, described by
the following tensor-product state:
 
V = V1 ⊗ V2 = span {|↓, ↓⟩z , |↑, ↑⟩z , |↓, ↑⟩z , |↑, ↓⟩z } (146)

and consider the case where the particles spin (in z-direction), is measured with the outcomes of having one particle
w. spin up and one particle w. spin down. The obvious question is now what state the 2-particles system is in.
Imposing the usual normalization constraint, we see that we could in fact have picked an infinite number of different
values for α, β s.t. |α|2 + |β|2 = 1, translating to being able to describe the physical system by any one of the infinitely
many states on the form:
α |↓, ↑⟩z + β |↑, ↓⟩z (147)
However, as opposed to the case of just multiplying by some global factor¹⁴, it actually turns out that two different
choices of α, β do not lead to the same physical observations. Building on the same example, consider the
case of measuring S₁ₓ, S₂ₓ (the spin in the x-direction)¹⁵ after just having measured the spin in the z-direction
and having established the state to be α|↓, ↑⟩_z + β|↑, ↓⟩_z. Specifically, the probability of measuring
the state to be in |↑, ↑⟩ₓ,

|ₓ⟨↑, ↑| (α|↓, ↑⟩_z + β|↑, ↓⟩_z)|² (148)
= |(1/2)(⟨↑, ↑|_z + ⟨↑, ↓|_z + ⟨↓, ↑|_z + ⟨↓, ↓|_z)(α|↓, ↑⟩_z + β|↑, ↓⟩_z)|² (149)
= (1/4)|α + β|², (150)
actually depends on the choice of the coefficients α, β, and as such we are faced with an apparent dilemma: not all
the mathematically allowed states correspond to physically equivalent descriptions (we need states that are
mathematically valid and do not predict different physical observations).
• As a way to resolve the problem of Exchange degeneracy, and recover predictive power, we will introduce the
symmetrization postulate. The symmetrization postulate explicitly states that the only allowed states in systems
of multiple identical particles are either Totally symmetric or Totally anti-symmetric. Now, let's consider how
exactly these states are defined. Totally symmetric states are characterized by the fact that exchanging any two
particles leaves the state unchanged, while totally anti-symmetric states are characterized by the fact that
exchanging any two particles changes the overall sign of the state. Specifically, we say that:

Totally symmetric state: P_α|ψ⟩₊ = |ψ⟩₊ (151)
Totally anti-symmetric state: P_α|ψ⟩₋ = c_α|ψ⟩₋, with c_α = +1 if P_α is an even permutation and c_α = −1 if P_α is an odd permutation (152)

where it is noted that the parity of the permutation operator is determined by the parity of its corresponding
transposition operators. I.e. if the permutation corresponds to an even number of transpositions, it is an even
permutation operator, and vice versa.

For a general N -particle tensor-product state-space:

V = V1 ⊗ V2 ⊗ . . . ⊗ VN (153)

we define the two following projection operators¹⁶:

Symmetrizer: S₊ ≡ (1/N!) Σ_α P_α, projects onto V₊ ⊂ V (154)
Anti-symmetrizer: S₋ ≡ (1/N!) Σ_α c_α P_α, projects onto V₋ ⊂ V (155)

14 Given any state |ψ⟩, the act of multiplying with a global phase factor - i.e. |ψ⟩ → eiϕ |ψ⟩, doesn’t influences any physically measurable quality

of the system, as evident by Born’s rule.


15 Remember that the eigenstates of the spin-x operator can be written in the basis of the spin-z eigenstates. Specifically, for spin-up, we have:
|↑⟩ₓ = (1/√2)(|↑⟩_z + |↓⟩_z).
16 Any projection operator A is defined by being idempotent, s.t. A2 = A.

such that:
∀ |ψ⟩ ∈ V: S₊|ψ⟩ ∈ V₊ and S₋|ψ⟩ ∈ V₋ (156)
the two projection operators projects an arbitrary state-ket to, respectively, its corresponding totally symmetric, and
corresponding totally-anti-symmetric state (possibly up to a sign).

Note that we define a particle whose state-ket is totally symmetric, as being a Boson17 , and a particle whose state-ket
is totally anti-symmetric as being a Fermion18 . Also note how this actually leads to the Pauli exclusion principle
- again consider the two-particle system:
 
V = V1 ⊗ V2 = span {|↓, ↓⟩z , |↑, ↑⟩z , |↓, ↑⟩z , |↑, ↓⟩z } (157)
and consider what happens when the anti-symmetrizer acts on |↑, ↑⟩_z or |↓, ↓⟩_z:
S₋|↑, ↑⟩_z = (1/2)(1 − P₁₂)|↑, ↑⟩_z = (1/2)(|↑, ↑⟩_z − |↑, ↑⟩_z) = 0, (normalization is disregarded here) (158)
demonstrating how no two identical fermions can be in the same single-particle state (in the same system).
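A minimal numerical sketch of this for two spin-1/2 particles (the basis ordering |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩ is an arbitrary choice), building the exchange operator P₁₂ and the (anti-)symmetrizer as 4×4 matrices:

```python
import numpy as np

P12 = np.zeros((4, 4))
# P12 |m1, m2> = |m2, m1>: swaps the middle two basis states, fixes |uu>, |dd>
P12[0, 0] = P12[3, 3] = 1.0
P12[1, 2] = P12[2, 1] = 1.0

S_plus = 0.5 * (np.eye(4) + P12)    # symmetrizer
S_minus = 0.5 * (np.eye(4) - P12)   # antisymmetrizer

up_up = np.array([1.0, 0, 0, 0])
up_dn = np.array([0, 1.0, 0, 0])

print(S_minus @ up_up)              # [0 0 0 0]: Pauli exclusion
print(S_minus @ up_dn)              # 0.5*(|ud> - |du>): (unnormalized) singlet
```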
• Now, when we are dealing with systems of N identical particles:

V = V₁ ⊗ V₂ ⊗ … ⊗ V_N, with V₁ = span{|u_i⟩} (dim N₁), V₂ = span{|v_j⟩} (dim N₂), …, V_N = span{|l_k⟩} (dim N_N) (159)

s.t.:

V = span{|u_i⟩ ⊗ |v_j⟩ ⊗ … ⊗ |l_k⟩}, dim(V) = Π_{d=1}^{N} N_d (160)

we can, as usual, expand an arbitrary state in V in terms of its basis states:

∀ |ψ⟩ ∈ V: |ψ⟩ = Σ_{i,j,…,k} c_{i,j,…,k} |u_i⟩ ⊗ |v_j⟩ ⊗ … ⊗ |l_k⟩ (161)
i,j,...,k

and as such, by considering the action of the (symmetrizer) anti-symmetrizer, on this expansion, it becomes evident
that the basis of the sub-spaces of the totally symmetric and the totally anti-symmetric states, are formed by applying
the (symmetrizer) anti-symmetrizer on the basis-states of V:
   
V+ = span {S+ |ui ⟩ ⊗ |vj ⟩ ⊗ . . . ⊗ |lk ⟩} , V− = span {S− |ui ⟩ ⊗ |vj ⟩ ⊗ . . . ⊗ |lk ⟩} (162)
Now, the apparent "issue" is the fact that V₊ and V₋ are proper subspaces of the full state space V, so their dimensions
are smaller than that of the full state space, and as such, simply applying the (anti-)symmetrizer to the basis states
of V produces a basis with a lot of redundancy - i.e. there are repeated states in (162).

Specifically, the redundancy comes from the fact that different permutations of an arbitrary state in V correspond
to the same state in V₊, and the same state, up to a sign change, in V₋. This can be seen from the fact that
S₊P_α = P_αS₊ = S₊, which becomes evident by utilizing the fact that {P_α} forms a group, together with the
rearrangement theorem (multiplying all elements of a group by a fixed group element merely reorders the list of group elements), in:

P_α S₊ = P_α (1/N!) Σ_β P_β = (1/N!) Σ_β P_αP_β = (1/N!) Σ_γ P_γ = S₊ (163)

which demonstrates that if one takes any state in V, applies a permutation, and then the symmetrizer, one gets the
same state in V+ as one would have, without first acting with the permutation operator - hence the redundancy (one
could do a similar argument for the anti-symmetrizer).

17 Particles with integer spin values - particles such as photons and pions
18 Particles with half-integer spin, such as the electron.

• Now, to eliminate this redundancy let's introduce the Occupation number representation, which describes a state
by one of the characteristics that remains constant under any permutation, namely the number of times each of
the single-particle basis states is present in the state. E.g., omitting the tensor symbol, consider writing an
arbitrary state in V arranged as:

|u_i⟩₁|u_i⟩₂ … |u_i⟩_{n_i} |v_j⟩_{n_i+1}|v_j⟩_{n_i+2} … |v_j⟩_{n_i+n_j} … (164)

to make it clear that it contains single-particle basis state |u_i⟩ n_i times, single-particle basis state |v_j⟩ n_j times,
and so on. Then we could simply label a state by its occupation numbers, as:

|n_i, n_j, …, n_l⟩ (165)

As such, we define the occupation number representation as the normalized action of the (anti-)symmetrizer
on any permutation of the basis states in V:

|n_i, n_j, …, n_l⟩ ≡ c_± S_± |u_i⟩₁|u_i⟩₂ … |u_i⟩_{n_i} |v_j⟩_{n_i+1} … |v_j⟩_{n_i+n_j} … (166)

where:

Bosons: |n_i, n_j, …, n_l⟩ ≡ √[(Σ_m n_m)! / (n_i! n_j! …)] S₊ |u_i⟩₁ … |u_i⟩_{n_i} |v_j⟩_{n_i+1} … |v_j⟩_{n_i+n_j} … (167)
Fermions: |n_i, n_j, …, n_l⟩ ≡ √[(Σ_m n_m)!] S₋ |u_i⟩₁ … |u_i⟩_{n_i} |v_j⟩_{n_i+1} … |v_j⟩_{n_i+n_j} …, and ≡ 0 if any n_m > 1 (Pauli exclusion principle) (168)

Note that, for bosons, the order of the occupation numbers isn't important, as they are represented by totally
symmetric states; for fermions, however, it is (it fixes the overall sign). Finally, we conclude that:

V₊ = span{ √[(Σ_m n_m)!/(n_i! n_j! …)] S₊ |u_i⟩₁ … |u_i⟩_{n_i} |v_j⟩_{n_i+1} … |v_j⟩_{n_i+n_j} … } (169)
V₋ = span{ √[(Σ_m n_m)!] S₋ |u_i⟩₁ … |u_i⟩_{n_i} |v_j⟩_{n_i+1} … |v_j⟩_{n_i+n_j} …, with the state ≡ 0 if any n_m > 1 } (170)

• Now in the Second Quantization the basic idea is that we aren’t restricting ourselves to state-spaces of some fixed
number of (identical) particles, but rather, we are concerned with describing instances where the number of particles
may vary, as particles are created or annihilated.
Specifically, we denote our states using the occupation number representation, but in the second quantization
it is to be understood that the number of particles may vary. To deal with this, we define a new state-space for our
variable sized occupation number states, which we refer to as the Fock Space. Denoting Fi as the tensor-product
state space of i identical particles, we define the Fock Space as direct sum:

F = F0 ⊕ F 1 ⊕ . . . ⊕ F N ⊕ . . . (171)

and as such, one might have states like:


|0⟩ + |1⟩ + |1, 1⟩ (172)

• Now, as a way of formalizing the creation or annihilation of particles in a Fock state, the annihilation and creation
operators are utilized, which means that one might equivalently formulate any occupation number state as:

Fermions: |n₁, n₂, …⟩ = (a₁†)^{n₁}(a₂†)^{n₂} … |0⟩ (173)
Bosons: |n₁, n₂, …⟩ = (1/√(Π_i n_i!)) (a₁†)^{n₁}(a₂†)^{n₂} … |0⟩ (174)

where it is of course understood that we are dealing with two different sets of (creation) annihilation operators,
depending on whether we are creating totally symmetric bosonic states or totally anti-symmetric fermionic states.
Specifically, recall that the fermionic ones obey the anti-commutation relations, whilst the bosonic ones obey the
”normal” commutation relations.
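A minimal numerical sketch of fermionic operators for two modes (a Jordan-Wigner-type matrix representation on the 4-dimensional Fock space with basis |n₁n₂⟩ = |00⟩, |01⟩, |10⟩, |11⟩; this construction is one standard choice, not the only one), verifying the anticommutation relations:

```python
import numpy as np

c = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilation: c|1> = |0>
Z = np.diag([1.0, -1.0])                  # (-1)^n sign string
I = np.eye(2)

a1 = np.kron(c, I)        # annihilate mode 1
a2 = np.kron(Z, c)        # annihilate mode 2 (string on mode 1)

anti = lambda A, B: A @ B + B @ A
print(np.allclose(anti(a1, a1.T), np.eye(4)))                          # {a1,a1+}=1
print(np.allclose(anti(a2, a2.T), np.eye(4)))                          # {a2,a2+}=1
print(np.allclose(anti(a1, a2), 0), np.allclose(anti(a1, a2.T), 0))    # both True
print(np.allclose(a1.T @ a1.T, 0))   # (a1+)^2 = 0: Pauli exclusion built in
```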

11 Discuss the quantization of electromagnetic waves and the properties of
simple quantum states of the electromagnetic field.
sections 7.7 and 7.8 in book (mostly 7.8)
• In this instance the goal is to quantize the EM field. Specifically, the aim is to show that the energy of the EM
field can be quantized into small discrete packets of energy, namely photons, allowing for a quantum-mechanical
formulation. Starting from Maxwell's equations, in vacuum:

∇·E = 0, ∇·B = 0, ∇×E = −(1/c)∂B/∂t, ∇×B = (1/c)∂E/∂t (175)
and utilizing the vector potential, for which B = ∇ × A, in the Coulomb Gauge (∇·A = 0), one obtains:
E = −(1/c) ∂A/∂t (176)
which in turn means that determining A corresponds to determining E and B. Now, performing the appropriate
substitutions in Maxwell's equations, one obtains the following differential equation for the vector potential:
∇²A = (1/c²) ∂²A/∂t², (177)
which is just the Wave equation, with plane-wave solutions of the form:

A(r, t) = (Ak e−i(k·r−ωt) + A∗k ei(k·r−ωt) )êk (178)

where Ak is a complex number (the mode amplitude for mode w. wave-number k) that specifies the magnitude and
phase of the plane wave, and êk is a real unit vector (the polarization vector) that specifies the direction of the vector
potential (for mode k). Furthermore, by insertion of the the above, in the wave-equation, it becomes evident that
ω = c||k||.

Moreover, due to working in the Coulomb Gauge, we know that the vector potential must be perpendicular to
the direction of propagation:
k · êk = 0 (179)
and as such, for each mode, there exist two distinct directions to choose from.
• Now suppose we put the electromagnetic field in a box of volume V = L³ with periodic boundary conditions, such that
the k vectors of all the modes of the box form a discrete set of the form:

(k_x, k_y, k_z) = (2π/L)(n_x, n_y, n_z), n_i ∈ Z (180)

We can then utilize the superposition principle to decompose the vector potential into all the modes of the box:

A(r, t) = Σ_{k,λ} (A_{k,λ} e^{−i(k·r−ω_k t)} + A*_{k,λ} e^{i(k·r−ω_k t)}) ê_{k,λ} (181)

where λ accounts for the extra degree of freedom introduced by the Coulomb Gauge (the 2 polarizations).
• Now, considering the total energy of the box, we find that:

E = (1/8π) ∫_V d³r [|E(r, t)|² + |B(r, t)|²] = (V/4π) Σ_{k,λ} (ω_k²/c²) [A*_{k,λ}A_{k,λ} + A_{k,λ}A*_{k,λ}] (182)

• Our goal now is to associate (182) with the eigenvalues of a Hamiltonian operator. We will do this by hypothesizing
that the quantized electromagnetic field is made up of a collection of identical particles called photons. Assuming
no potential interaction between different modes, the Hamiltonian must take the form of a one-body operator:

H = Σ_i k_i a_i† a_i (183)

and more specifically, one that is very reminiscent of the Hamiltonian of the harmonic oscillator in second-quantized
form, so we assert a Hamiltonian of the form:

H = Σ_{k,λ} ℏω_k a†_{k,λ} a_{k,λ} + E₀ (184)

which is a fair starting point, as we should at least expect a Hamiltonian with equally spaced eigen-energies
in the case of identical photons.
Now, from a fully relativistic treatment of the photon field, one can demonstrate that the photon has integer spin and
is therefore a boson, and as such, utilizing the bosonic commutation relations, one reaches:

H = (1/2) Σ_{k,λ} ℏω_k (a†_{k,λ}a_{k,λ} + a_{k,λ}a†_{k,λ}) = Σ_{k,λ} ℏω_k a†_{k,λ}a_{k,λ} + E₀ (185)

from which it becomes evident that the process of quantizing the EM-field actually reduces to associating the
complex mode amplitudes of the vector-potential with annihilation/creation-operators:

Ak,λ ∝ ak,λ (186)

along with the realization of a "zero point" energy:

E₀ = Σ_{k,λ} (1/2) ℏω_k = Σ_k ℏω_k (187)

which is popularly termed the Vacuum Energy. Strangely, even though the sum is formally divergent, it is just a constant offset, and
it actually has several observable consequences, one of which is its ability to exert a macroscopic, attractive force
between conducting surfaces - the Casimir Effect, with force per unit area ∝ distance⁻⁴.
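A minimal numerical sketch of a single field mode as a harmonic oscillator (assuming ℏ = ω = 1 and a finite number-basis cutoff, which is an approximation):

```python
import numpy as np

N = 20
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # a|n> = sqrt(n)|n-1>
ad = a.T
hbar, omega = 1.0, 1.0

H = hbar * omega * (ad @ a + 0.5 * np.eye(N))
print(np.diag(H)[:5])                       # [0.5 1.5 2.5 3.5 4.5] = hw(n + 1/2)

# [a, a†] = 1 holds everywhere except at the truncation edge
comm = a @ ad - ad @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))   # True
```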

12 Discuss the solutions of the Dirac equation for free particles and their
physical interpretation.
• As the Hamiltonian is traditionally a central quantity (it is the Hermitian generator of time evolution), this quantity might also
serve as a natural starting point in a relativistic treatment of quantum mechanics. Specifically, working in natural units,
recall the energy of a (free) particle from Einstein's energy-momentum relation:

E = √(m₀² + p²) (188)

As such, one might simply take the Hamiltonian to be:

H = 1√(m₀² + p²) = 1m₀ + p²/2m₀ − p⁴/8m₀³ + … (Taylor expansion) (189)

but this unfortunately entails arbitrarily high orders of derivatives of the momentum when considering the position-
space wave function (as needed in the SE)¹⁹, which renders this wave equation non-local, since the increasing order of
derivatives requires consideration of an increasingly larger region around x′ (RHS of SE) in order to evaluate the time
derivative at a single t (LHS of SE) → eventually, causality will be violated for any spatially localized wave function.
• To avoid this problem (induced by the square root), we instead consider:

E² = m₀² + p² ⇒ H² = 1m₀² + p² (190)

Now, applying iℏ∂/∂t twice in the SE, and using the position representation of the momentum operator, yields the Klein-
Gordon equation:

(∂²/∂t² − ∇² + m₀²)⟨x|ψ(t)⟩ = 0, or equivalently (∂_μ∂^μ + m₀²)⟨x|ψ(t)⟩ = 0 (191)

which is indeed covariant²⁰. However, the appearance of the second-order time derivative complicates the formalism and
makes it difficult to identify a proper probability density from the wave function.
• A solution that is both covariant and contains only first-order time derivatives is the Dirac equation:

[iγ^μ∂_μ − m₀]⟨x|ψ(t)⟩ = 0 (192)

This admits a positive-definite probability density that satisfies a continuity equation.

19 Recall that the SE is iℏ ∂/∂t ⟨x|ψ(t)⟩ = ⟨x|H|ψ(t)⟩, and the fact that ⟨x′|p|α⟩ = −iℏ ∂/∂x′ ⟨x′|α⟩.
20 The principle of covariance requires a change of inertial coordinates not to affect physical laws - i.e. ⟨x′|ψ(t′)⟩ solves the same equation as ⟨x|ψ(t)⟩.

(Elaborating: the Dirac equation has a conserved current j^μ = ψ̄γ^μψ whose time component j⁰ = ψ†ψ is non-negative, so it can be interpreted as a probability density, unlike the density associated with the Klein-Gordon equation.)
In this, the γ^μ are 4 × 4 matrices, which in the Dirac representation have the block form:

γ⁰ = [[1, 0], [0, −1]], γ^i = [[0, σ_i], [−σ_i, 0]] (193)

where the σ_i are the Pauli spin matrices. Equivalently, defining the relativistic Hamiltonian as:

H = α·p + βm₀, with α = σ_x ⊗ σ = [[0, σ], [σ, 0]], β = σ_z ⊗ 1 = [[1, 0], [0, −1]] (194)

the Dirac equation can be obtained by insertion in the normal SE.
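A minimal numerical sketch (natural units; the momentum vector is an arbitrary choice) checking that H = α·p + βm₀ squares to p² + m₀², and that its eigenvalues come in ± pairs:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

alpha = [np.kron(np.array([[0, 1], [1, 0]]), si) for si in s]   # alpha_i = sx (x) sigma_i
beta = np.kron(np.diag([1.0, -1.0]), I2)                         # beta = sz (x) 1

m0, p = 1.0, np.array([0.3, -0.2, 0.5])
H = sum(pi * ai for pi, ai in zip(p, alpha)) + beta * m0

print(np.allclose(H @ H, (p @ p + m0**2) * np.eye(4)))   # True: H^2 = p^2 + m0^2
# Eigenvalues come in +-sqrt(p^2 + m0^2) pairs: positive- and negative-energy solutions
print(np.round(np.linalg.eigvalsh(H), 5))
```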


• Now, the Dirac equation, like the Klein-Gordon equation, has solutions with the expected kind of time dependence
and spatial dependence like a plane wave:

⟨x|ψ(t)⟩ ∝ e^{−i(Et−p·x)} = e^{−ip^μx_μ} (195)

except that the solutions are four-component "spinors".


• In the case of p = 0 (free particle at rest):

ψ(t)₁^(+) = e^{−im₀t}(1, 0, 0, 0)ᵀ, ψ(t)₂^(+) = e^{−im₀t}(0, 1, 0, 0)ᵀ, ψ(t)₃^(−) = e^{+im₀t}(0, 0, 1, 0)ᵀ, ψ(t)₄^(−) = e^{+im₀t}(0, 0, 0, 1)ᵀ (196)

and in the case of having a constant non-zero momentum, e.g. p = ẑp, we can determine the solutions as²¹:

u_R^(+) = (1, 0, p/(E+m₀), 0)ᵀ, u_L^(+) = (0, 1, 0, −p/(E+m₀))ᵀ, u_R^(−) = (−p/(E+m₀), 0, 1, 0)ᵀ, u_L^(−) = (0, p/(E+m₀), 0, 1)ᵀ (197)
Notice that states with a + (−) superscript correspond to positive- (negative-) energy solutions. The standard interpretation of the negative-energy solutions (Dirac's hole theory) is that all negative-energy states are filled - the "Dirac sea" - so that positive-energy electrons cannot fall into them, by the Pauli exclusion principle; a hole in the sea then behaves as a particle with the same mass but opposite charge, the positron, i.e. the negative-energy solutions are reinterpreted in terms of antiparticles.

21 Here the normalization and the factor e^{−ip^μx_μ} are omitted.
