
The Time-Dependent Schrödinger Equation

with applications to
The Interaction of Light and Matter
and
The Selection Rules in Spectroscopy.

Lecture Notes for Chemistry 452/746


by
Marcel Nooijen
Department of Chemistry
University of Waterloo

1. The Time-Dependent Schrödinger equation.

An important postulate in quantum mechanics concerns the time-dependence of the wave


function. This is governed by the time-dependent Schrödinger equation
iℏ ∂Ψ(x,t)/∂t = Ĥ Ψ(x,t)    (1.1)
where Ĥ is the Hamiltonian operator of the system (the operator corresponding to the
classical expression for the energy). This is a first-order differential equation in t, which
means that if we specify the wave function at an initial time t0 , the wave function is
determined at all later times. Let me emphasize that this means the wavefunction has to
be specified for all x at initial time t0 . These initial conditions are familiar from wave
equations as discussed in MS Chapter 2. In classical physics we often deal with second
order differential equations and in addition the time derivative ∂Ψ( x , t ) / ∂t would then
need to be specified for all x. Let me emphasize here that although the experimental
results that can be predicted from QM are statistical in nature, the Schrödinger equation
that determines the wave function as a function of time is completely deterministic.

Special solutions: Stationary States (only if Ĥ is time-independent).

If we assume that the wave function can be written as a product, Ψ(x,t) = φ(x)γ(t), we


can separate the time dependence from the spatial dependence of the wave function in the
usual way. The separation constant is called E and will turn out to be the energy of the
system for such solutions
→ ...   iℏ dγ(t)/dt = Eγ(t)   →   γ(t) = e^{−iE(t−t0)/ℏ}    (1.2)
Ĥ φ(x) = E φ(x)   →   Ĥ φn(x) = En φn(x)    (1.3)
Equation (1.3) is called the time-independent Schrödinger equation and plays a central
role in all of chemistry. Since the operator Ĥ is Hermitian, the eigenfunctions form a
complete and (can be chosen to be an) orthonormal set of functions. Using these
eigenfunctions of Ĥ, special solutions to the time-dependent Schrödinger equation can be
expressed as
Ψ(x,t) = φn(x) e^{−iEn(t−t0)/ℏ} ;   Ψ(x,t0) = φn(x)    (1.4)
For these special solutions of the Schrödinger equation, all measurable properties are
independent of time. For this reason they are called stationary states. For example the
probability distribution
|Ψ(x,t)|² = |Ψ(x,t0)|² = |φn(x)|² ,    (1.5)
but also
⟨Â⟩_t = ⟨Â⟩_t0   for all Â ,    (1.6)

as is easily verified by substituting the product form of the wave function. Also the
probabilities to measure an eigenvalue ak are independent of time, as seen below

Â ϕk(x) = ak ϕk(x)   →   Pk(t) = |∫_{−∞}^{∞} ϕk*(x) Ψ(x,t) dx|² = Pk(t0)    (1.7)

The common element in each of these proofs is that the time-dependent phase factor
cancels because we have both Ψ( x , t ) and Ψ* ( x , t ) in each expression. Let me note that
the stationary solutions are determined by the initial condition. If you start off with a
stationary state at t0 , the wave function is a stationary state for all time.
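The time-independence in eq. (1.5) is easy to check with a few lines of code. The following is a minimal sketch in atomic units (ℏ = 1); the particle-in-a-box system, box length L = 1, mass m = 1, and quantum number n = 1 are arbitrary illustrative choices, not anything specific to these notes:

```python
import cmath
import math

# Minimal sketch (atomic units, hbar = 1): a particle-in-a-box stationary state.
# Box length L = 1, mass m = 1, quantum number n = 1 are illustrative choices.
n = 1
E_n = n**2 * math.pi**2 / 2.0   # box eigenvalue for L = m = 1

def phi(x):
    """Spatial eigenfunction phi_n(x) of the box Hamiltonian."""
    return math.sqrt(2.0) * math.sin(n * math.pi * x)

def psi(x, t):
    """Stationary-state solution of eq. (1.4): Psi(x,t) = phi_n(x) e^{-i E_n t}, t0 = 0."""
    return phi(x) * cmath.exp(-1j * E_n * t)

# Eq. (1.5): the probability density is the same at every time.
x = 0.3
densities = [abs(psi(x, t))**2 for t in (0.0, 1.7, 42.0)]
print(densities)  # three identical values, equal to phi(x)**2
```

The time-dependent phase factor drops out of |Ψ|², exactly as in the proofs above.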

The general solution of the time-dependent Schrödinger equation (TDSE) for time-
independent Hamiltonians can be written as a time-dependent linear combination of
stationary states. If we assume that the initial state is given by
Ψ(x,t0) = c1 φ1(x) + c2 φ2(x) + ...    (1.8)
(it can always be written in this fashion as the eigenfunctions of Ĥ form a complete set),
then it is easily verified that
Ψ(x,t) = c1 e^{−iE1(t−t0)/ℏ} φ1(x) + c2 e^{−iE2(t−t0)/ℏ} φ2(x) + ...    (1.9)
satisfies the TDSE and the initial condition. For this general linear combination of
eigenstates of Ĥ (the general case), properties do depend on time. This is true for
expectation values and probabilities, and is due to the fact that different 'components' in
the wave function oscillate with different time factors. In calculating expectation values
we get cross terms and the time-dependent phase factors do not cancel out.

Independent of the initial wave function, energy is always conserved (as would be
expected from classical physics), and also probabilities to measure a particular energy
Ek :

|∫_{−∞}^{∞} φk*(x) Ψ(x,t) dx|² = |ck e^{−iEk(t−t0)/ℏ}|² = |ck|²    (1.10)
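The contrast with the stationary case can be made concrete by continuing the box example. In the sketch below the coefficients c1 = c2 = 1/√2 are an arbitrary illustrative choice: the energy probabilities of eq. (1.10) stay fixed, while the density at a point oscillates through the cross term:

```python
import cmath
import math

# Sketch (atomic units): equal superposition of the two lowest box states.
# The coefficients c1 = c2 = 1/sqrt(2) are an arbitrary illustrative choice.
E1, E2 = math.pi**2 / 2.0, 4.0 * math.pi**2 / 2.0   # box eigenvalues, L = m = 1
c1 = c2 = 1.0 / math.sqrt(2.0)

def phi(n, x):
    return math.sqrt(2.0) * math.sin(n * math.pi * x)

def psi(x, t):
    """Eq. (1.9): each component carries its own time-dependent phase factor."""
    return (c1 * cmath.exp(-1j * E1 * t) * phi(1, x)
            + c2 * cmath.exp(-1j * E2 * t) * phi(2, x))

times = (0.0, 0.1, 0.2, 0.3)
# Energy probabilities |c_k e^{-i E_k t}|^2 are constant, eq. (1.10) ...
p1 = [abs(c1 * cmath.exp(-1j * E1 * t))**2 for t in times]
# ... but the density at a fixed point oscillates through the cross term.
rho = [abs(psi(0.25, t))**2 for t in times]
print(p1)
print(rho)
```

The density oscillates at the difference frequency E2 − E1, which is the 'components oscillate with different time factors' statement above in numerical form.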

Further Remarks:

- Everything depends on the initial wave function, which is arbitrary in principle. The most
common way to specify it is by means of a measurement!
- Stationary states: the initial state is defined to be an eigenfunction of Ĥ. In this case
nothing moves except the phase of the wave function. In our current version of QM
we would find infinite lifetimes of excited states! (This is because the e.m. field is
missing from our treatment and we have assumed a time-independent Ĥ)
- In general properties oscillate in time (e.g. electron density) → radiation!? Again
there is a need to include the e.m. field. In the real world, systems do not satisfy our time-
dependent Schrödinger equation indefinitely. The system interacts with the

electromagnetic field, and in this way makes a transition to the ground state
(eventually). This occurs even if no radiation field is present (spontaneous emission).
These are the reasons that stationary states, and in particular the ground state, are so
important.

2. Interaction of Light and Matter.

The total Hamiltonian for a molecule in the presence of monochromatic radiation is given by
H = H0 + H'(t), where H0 is the usual molecular Hamiltonian and H'(t) represents the
interaction with radiation:
H'(t) = −μ⃗·E⃗(ω) cos(ωt) = −(1/2) μ⃗·E⃗(ω) [e^{iωt} + e^{−iωt}]    (2.1)
Very importantly, this Hamiltonian is explicitly time-dependent and this completely changes
the picture from the previous section. Later on we will use the fact that the field is not
completely sharp in frequency, and we will therefore assume that the field strength can depend
on ω and have a distribution of frequency components. In eqn. (2.1) μ⃗ is the dipole moment
operator, while E⃗(ω) is the electric field (a vector) oscillating at angular frequency ω. (For a
derivation of this result see MS problem 13.49.) We will want to use the time-dependent
Schrödinger Equation (TDSE). This is a little complicated and we will make the discussion as simple as
possible, without losing anything of the essential physics involved. Let us first review the
time-dependent SE without radiation, also to establish the notation used in this section. We
will be using atomic units throughout.

- Case I, no radiation in a 2-level system (review).


i ∂Ψ/∂t = H0 Ψ(t)    (2.2)
Let us assume a 2-level system for simplicity. The time-independent Schrödinger Eqn. has
solutions
H0 Φ1(x) = E1 Φ1(x)
H0 Φ2(x) = E2 Φ2(x)    (2.3)

where we assume known the energies and wave functions. Let us assume that the wave
function at time t = 0 is given by
Ψ(t=0) = Φ1(x) c1 + Φ2(x) c2    (2.4)
where c1 and c2 are arbitrary coefficients. Then the wave function at time t is given by
Ψ(t) = Φ1(x) c1 e^{−iE1 t} + Φ2(x) c2 e^{−iE2 t}    (2.5)
Please verify for yourself that this satisfies the TDSE Eqn. (2.2), assuming (2.3). Note that the
energy has units radians/s here because we have suppressed ℏ. In this case of no radiation we
find that if we would measure the energy of the system we would find E1 with probability
|c1 e^{−iE1 t}|² = |c1|², or E2 with probability |c2|², independent of time. In particular excited states

would not decay and have infinite lifetimes! Other properties would depend on time however.
For long times this picture is clearly deficient (we have not included spontaneous emission in
this description which results in finite lifetimes for excited states), but it follows from the QM
we have taught you so far.

- Case II. General treatment of radiation in 2-level system.


Without loss of generality we can write
Ψ(t) = Φ1(x) c1(t) e^{−iE1 t} + Φ2(x) c2(t) e^{−iE2 t} ,    (2.6)
where the coefficients c1 , c2 depend on time now. Substituting in the TDSE and using the full
Hamiltonian, we should satisfy the eqn:
i ∂Ψ/∂t − HΨ = 0    (2.7)
or
0 = i ∂c1/∂t Φ1(x) e^{−iE1 t} + [E1 c1(t) Φ1(x) e^{−iE1 t} − H0 c1(t) Φ1(x) e^{−iE1 t}] − H'(t) c1(t) Φ1(x) e^{−iE1 t}
  + i ∂c2/∂t Φ2(x) e^{−iE2 t} + [E2 c2(t) Φ2(x) e^{−iE2 t} − H0 c2(t) Φ2(x) e^{−iE2 t}] − H'(t) c2(t) Φ2(x) e^{−iE2 t}    (2.8)
We note that the terms between square brackets cancel (the reason to write Ψ(t) as in eqn.
2.6). We will now integrate this Eqn. against Φ1(x)* and Φ2(x)* respectively. Furthermore we
assume (for the sake of simplicity) that

∫ Φ1(x)* H'(t) Φ1(x) dx = ∫ Φ2(x)* H'(t) Φ2(x) dx = 0    (2.9)

and

∫ Φ1(x)* H'(t) Φ2(x) dx = ∫ Φ2(x)* H'(t) Φ1(x) dx =
  −(1/2) [e^{iωt} + e^{−iωt}] E⃗(ω) · ∫ Φ1(x)* μ⃗ Φ2(x) dx ≡ V(ω) [e^{iωt} + e^{−iωt}]    (2.10)

Performing the integration, we get two eqns that are fully equivalent to Eqn. 2.8
i ∂c1/∂t e^{−iE1 t} − V(ω) [e^{iωt} + e^{−iωt}] c2(t) e^{−iE2 t} = 0
−V(ω) [e^{iωt} + e^{−iωt}] c1(t) e^{−iE1 t} + i ∂c2/∂t e^{−iE2 t} = 0    (2.11)
Multiplying the first equation by e^{iE1 t} and the second by e^{iE2 t} we get
i ∂c1/∂t − V(ω) [e^{iωt} + e^{−iωt}] c2(t) e^{−i(E2−E1)t} = 0
i ∂c2/∂t − V(ω) [e^{iωt} + e^{−iωt}] c1(t) e^{i(E2−E1)t} = 0    (2.12)
and defining E21 = E2 − E1 > 0 we obtain
i ∂c1/∂t = V(ω) [e^{i(ω−E21)t} + e^{−i(ω+E21)t}] c2(t)
i ∂c2/∂t = V(ω) [e^{i(ω+E21)t} + e^{−i(ω−E21)t}] c1(t)    (2.13)
In principle these equations can be solved fairly easily on a computer. To get some further
insight we assume that we are always close to resonance (the Bohr condition) ω ≈ E21 . Only
the slowly oscillating terms matter, as the fast oscillations only provide some fine structure on
top of the slow oscillations. This is true in particular if we have a distribution of frequencies
around ω ≈ E21 (why?). In this case the equations can be approximated as
i ∂c1/∂t = V(ω) e^{i(ω−E21)t} c2(t)
i ∂c2/∂t = V(ω) e^{−i(ω−E21)t} c1(t)    (2.14)
It is to be noted that in the first equation we have the e^{iωt} component of H'(t), while in the
second equation we keep the e^{−iωt} term. This is the origin of the equality of the Einstein
coefficients for absorption and stimulated emission of radiation: if ω ≈ E21 then, of course,
E12 = −E21 ≈ −ω. In the following we will look at two important approximations.
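The statement that these equations are easy to solve on a computer can be made concrete. The sketch below integrates eqs. (2.14) with a classical Runge-Kutta scheme and compares against the known closed-form (Rabi) solution of the rotating-wave equations; V = 0.2 and the detuning ω − E21 = 0.1 are illustrative values in atomic units:

```python
import cmath
import math

# Sketch (atomic units, hbar = 1): numerical integration of eqs. (2.14).
# V = 0.2 and Delta = omega - E21 = 0.1 are illustrative, not physical, values.
V, Delta = 0.2, 0.1

def deriv(t, c1, c2):
    """Right-hand sides of eqs. (2.14): dc1/dt = -i V e^{+i Delta t} c2, etc."""
    return (-1j * V * cmath.exp(1j * Delta * t) * c2,
            -1j * V * cmath.exp(-1j * Delta * t) * c1)

def rk4(t_end, steps=4000):
    """Classical 4th-order Runge-Kutta from the initial state c1 = 1, c2 = 0."""
    h, t = t_end / steps, 0.0
    c1, c2 = 1.0 + 0j, 0.0 + 0j
    for _ in range(steps):
        a1, a2 = deriv(t, c1, c2)
        b1, b2 = deriv(t + h/2, c1 + h/2*a1, c2 + h/2*a2)
        g1, g2 = deriv(t + h/2, c1 + h/2*b1, c2 + h/2*b2)
        d1, d2 = deriv(t + h, c1 + h*g1, c2 + h*g2)
        c1 += h/6 * (a1 + 2*b1 + 2*g1 + d1)
        c2 += h/6 * (a2 + 2*b2 + 2*g2 + d2)
        t += h
    return c1, c2

t_end = 5.0
c1, c2 = rk4(t_end)

# Closed-form (Rabi) solution of eqs. (2.14) for comparison.
Omega = math.sqrt(V**2 + (Delta / 2)**2)
p2_exact = (V / Omega)**2 * math.sin(Omega * t_end)**2
print(abs(c2)**2, p2_exact)
```

The numerical |c2|² agrees with the closed form, and |c1|² + |c2|² stays equal to one, as it must.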

II-A. Precise resonance ω = E21


In this case the phase factors are precisely unity, leading to

i ∂c1/∂t = V(ω) c2(t) ≡ V c2(t)
i ∂c2/∂t = V(ω) c1(t) ≡ V c1(t)    (2.15)
and hence we get second order equations
∂²c1/∂t² = −V² c1(t)
∂²c2/∂t² = −V² c2(t)    (2.16)
with known solutions
c1(t) = cos(Vt + ϕ) ;   c2(t) = −i sin(Vt + ϕ)    (2.17)
where ϕ is determined by the initial values of c1, c2. Under these conditions the probabilities
to measure the energy E1 or E2 hence oscillate in time as cos²(Vt) and sin²(Vt) respectively.
On average there is equal probability to find E1 or E2, independent of the initial conditions.
The oscillation frequency is V, i.e. it is proportional to the strength of the field and to the
transition dipole! It does not depend on the energy difference between the two states or the
frequency of the applied field.
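A small numerical sketch of the resonant case: take ϕ = 0 in eq. (2.17), so the system starts in state 1, and use V = 0.05 as an illustrative coupling strength (atomic units):

```python
import math

# Sketch of the resonant populations implied by eq. (2.17), with phi = 0.
# V = 0.05 is an illustrative coupling strength (atomic units).
V = 0.05

def populations(t):
    """Probabilities to measure E1 or E2 at time t: cos^2(Vt), sin^2(Vt)."""
    return math.cos(V * t)**2, math.sin(V * t)**2

# The populations always sum to one, oscillate with period pi/V
# (independent of E21 or the field frequency), and each averages to 1/2.
ts = [i * 0.1 for i in range(20000)]
avg2 = sum(populations(t)[1] for t in ts) / len(ts)
p1_check, p2_check = populations(10.0)
print(p1_check + p2_check, avg2)
```

The long-time average of the upper-state population comes out at 1/2, independent of the initial phase, as stated above.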

II-B. Near resonance, short times, weak fields.


If we assume that at t = 0, Ψ(t=0) = Φ1(x), i.e. c1 = 1, c2 = 0, we can assume that c1 remains
more or less unity and integrate the equation for c2(t). This yields

c2(τ) = −i ∫_0^τ V(ω) e^{−i(ω−E21)t} dt = [−iV(ω) / (−i(ω−E21))] (e^{−i(ω−E21)τ} − 1)    (2.18)

The probability to find the energy E2 upon measurement at time τ would be


|c2(τ)|² = [V(ω)²/(ω−E21)²] (e^{−i(ω−E21)τ} − 1)(e^{i(ω−E21)τ} − 1)
         = [V(ω)²/(ω−E21)²] (2 − 2 cos((ω−E21)τ)) = [4V(ω)²/(ω−E21)²] sin²[(1/2)(ω−E21)τ]    (2.19)
         = V(ω)² τ² { sin²[(1/2)(ω−E21)τ] / [(1/2)(ω−E21)τ]² } ≡ V(ω)² τ² F(ω)

The function F(ω ) defined above as the fraction is famous (meaning you should know what it
looks like), and you can find a sketch of it on page 531 of MS. The function is sharply peaked
near resonance and it is the explanation for Bohr's rule that energy is only absorbed if the
frequency of the radiation matches the energy difference between two quantum states.
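Since the shape of F(ω) is worth knowing, here is a small numerical probe of it; E21 = 1 and τ = 50 are illustrative values (atomic units):

```python
import math

# Sketch of the line-shape function F(omega) defined in eq. (2.19).
# E21 = 1 and tau = 50 are illustrative values (atomic units).
E21, tau = 1.0, 50.0

def F(omega):
    """F = sin^2[(omega-E21) tau/2] / [(omega-E21) tau/2]^2, with F(E21) = 1."""
    x = 0.5 * (omega - E21) * tau
    return 1.0 if x == 0 else (math.sin(x) / x)**2

# Sharply peaked at resonance: maximum 1 at omega = E21, first zero at
# omega = E21 + 2*pi/tau, and tiny further out; the peak narrows as tau grows.
print(F(E21), F(E21 + 2*math.pi/tau), F(E21 + 0.5))
```

The width of the central peak is of order 2π/τ, so for longer pulses only frequencies very close to the Bohr condition contribute.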

In practice we do not have one precise frequency, but a range or distribution. This can be
incorporated by taking an integral over the frequency
|c2(τ)|² = P21(τ) = τ² ∫_{−∞}^{∞} V(ω)² { sin²[(1/2)(ω−E21)τ] / [(1/2)(ω−E21)τ]² } dω    (2.20)
If we use that V(ω)² will vary slowly over the region near resonance (where F(ω) is large),
and shift variables by calling x = (1/2)(ω−E21)τ → dω = (2/τ) dx, the integral reduces to

P21(τ) = 2τ V(E21)² ∫_{−∞}^{∞} [sin²(x)/x²] dx = 2πτ V(E21)²    (2.21)

The rate of transition is obtained as
∂P21/∂τ = 2π V(E21)²    (2.22)
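The value ∫_{−∞}^{∞} sin²(x)/x² dx = π used in going from (2.20) to (2.21) can be confirmed by brute-force quadrature; the finite integration window below is an arbitrary numerical choice:

```python
import math

# Brute-force midpoint quadrature of the integral used in eq. (2.21):
# int_{-inf}^{inf} sin^2(x)/x^2 dx = pi. The window [-a, a] is an arbitrary
# numerical cutoff; the neglected tails contribute roughly 1/a.
def integrand(x):
    return 1.0 if x == 0 else (math.sin(x) / x)**2

a, n = 1500.0, 1_500_000
h = 2.0 * a / n
total = h * sum(integrand(-a + (i + 0.5) * h) for i in range(n))
print(total, math.pi)  # total approaches pi as the window grows
```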
The analysis would be completely the same if we started from state Φ2(x) at t = 0, and
we would find that the rate of the transition to state Φ1(x) is given by
∂P12/∂τ = 2π V(E21)²    (2.23)
hence the rate of absorption equals the rate of (stimulated) emission in the presence of
radiation. Moreover this rate is proportional to

V(E21)² = [ (1/2) ∫ Φ1(x)* μ⃗ Φ2(x) dx · E⃗(E21) ]²    (2.24)

i.e. proportional to the square of the transition dipole moment and the intensity of the radiation
(E²) at the resonance frequency. The precise formulation would involve a spherical averaging
because molecules, and hence the transition dipoles, are oriented at random.

Connection with LASERs (see chapter 15 of MS)

A LASER is a macroscopic system and requires both quantum and statistical mechanics for a
proper description. We have only discussed QM thus far. In statistical mechanics we assume
we have a (very) large number of individual quantum systems. For short times we use
quantum mechanics on individual systems, but due to still poorly understood decoherence
one assumes that on larger time scales each microsystem is in a definite eigenstate of H0 (no
superpositions!). This mystical phenomenon (even for the quantum aficionado) is closely
related to the ill-understood problem of measurement in quantum mechanics. The problem is
the smooth connection of the microscopic world (QM) to the macroscopic world (statistical
mechanics). Each of these disciplines is well defined mathematically but their
interconnection is cumbersome. Collisions between molecules play a vital role to move from
superpositions of eigenstates to the thermal equilibrium of statistical mechanics that assumes
systems are in eigenstates of the Hamiltonian. Anyway, with some handwaving we can move
ahead.
In the above treatment of quantum mechanics in the presence of a radiation field we have
precisely developed the short-time behavior. It shows that if we have a system in state Φ1 the
transition rate to a state Φ2 that is in resonance with the radiation source is given by
∂P21/∂τ = B ρ(E21)    (2.25)
where B is a constant, and ρ(E21) indicates the density of the radiation at the resonant
frequency (proportional to E²). This is for a single quantum system. If, at time t, we have
N1(t) molecules in state Φ1 then the change in population of state Φ2 due to absorption of
radiation would be
dN2/dt = B ρ(E21) N1(t)    (2.26)
Molecules in state Φ2 emit under the influence of this radiation source at precisely the same
rate, and this diminishes the population. Hence in total we would get (for a two-level system)
dN2/dt = B ρ(E21) [N1(t) − N2(t)]    (2.27)

The stationary state is reached if dN 2 / dt = 0 or N1 ( t ) = N 2 ( t ) . This is not correct. We are
missing something, namely spontaneous emission of radiation that would occur from states
Φ2 even in the absence of resonant radiation. Curiously, this natural-looking phenomenon
(finite lifetimes of excited states, even in the absence of radiation) is very hard to derive using
QM. The theory that is needed to accomplish a rigorous derivation requires a quantization of
the electromagnetic field (i.e. the introduction of photons). It is called quantum field theory. A
practical way to account for many of the observed phenomena is to define the process of
spontaneous emission. It is an approximation though. We get for the rate of change
dN2/dt = B ρ(E21) [N1(t) − N2(t)] − A N2(t)    (2.28)
At the steady state (equilibrium) we can solve for the intensity of the radiation
ρ(E21) = A / [B (N1/N2 − 1)]    (2.29)
For a black body at temperature T that hypothetically would consist of a two-level system in
equilibrium with the radiation it generates and absorbs, we know the population ratio from the
Maxwell-Boltzmann distribution law
N1/N2 = e^{hν21/kT} ,   hν21 = E2 − E1    (2.30)
The above equation for the radiation density then agrees with the black-body radiation law if
A = (8πh ν21³ / c³) B    (2.31)
Einstein essentially knew about statistical mechanics, discrete energy levels, and black body
radiation and from this he deduced the concepts of spontaneous and stimulated emission and
the idea of lasing. He was remarkable, even for a genius.
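Einstein's consistency argument can be replayed with numbers: with the ratio A/B of eq. (2.31) and the population ratio of eq. (2.30), the steady-state density of eq. (2.29) reproduces the Planck black-body formula. The temperature and frequency below are illustrative choices; SI constants are used:

```python
import math

# Sketch: eq. (2.29) with N1/N2 from eq. (2.30) and A/B from eq. (2.31)
# reproduces Planck's black-body law. T and nu are illustrative values.
h_pl, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23   # SI constants
T, nu = 300.0, 1e13

B = 1.0                                        # arbitrary; only the ratio A/B matters
A = (8 * math.pi * h_pl * nu**3 / c**3) * B    # eq. (2.31)

N1_over_N2 = math.exp(h_pl * nu / (k * T))     # eq. (2.30)
rho_steady = A / (B * (N1_over_N2 - 1.0))      # eq. (2.29)
rho_planck = (8 * math.pi * h_pl * nu**3 / c**3) / (math.exp(h_pl*nu/(k*T)) - 1.0)
print(rho_steady, rho_planck)  # identical, by construction of A/B
```

The agreement is of course built in once A/B is chosen as in (2.31); the point is that no other choice of A/B would match Planck's law at all temperatures.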

3. Selection rules in spectroscopy.

A rigorous treatment of electromagnetic radiation (oscillating field) involves the time-dependent
Schrödinger equation, usually in the form of time-dependent perturbation theory
(see MS and lecture notes in previous section). At a given instant in time the electric field is
more or less constant over the region of a (small) molecule, as the wavelength of the
radiation (> ~200 nm) is so large compared to molecular dimensions. The relevant quantity
that determines intensities in the spectrum is the transition dipole moment between initial and
final states

∫ Ψf^int* μ̂ Ψi^int dτ    (3.1)

Here Ψ^int indicates the total internal wave function involving both nuclear and electronic
coordinates, but not the rotational part of the wave function. Similarly
μ̂ = Σα qα r⃗α ,    (3.2)

the total dipole moment operator involves both nuclei and electrons (indicated through the
summation over α). In order to make the problem manageable we distinguish
electrons: r⃗i
normal modes: qi    (3.3)
molecular rotation: R⃗
and the overall internal wavefunction (disregarding rotations) can be written (approximately)
as
Ψav^int = Ψa(r⃗; q) Φv(q) ,    (3.4)
The electronic wave function Ψa(r⃗; q) depends on all of the electrons, and in addition there is
a parametric dependence of the electronic wavefunction on the normal modes (internal
coordinates) q. The vibrational part Φv(q) = φv1(q1) φv2(q2) ... is assumed to be a product of
harmonic oscillator functions for each normal mode qi. If we do not use a subscript we
indicate the whole set of coordinates, e.g. q → {qi} = q1, q2, .... The rotational wave function
ΩJ,MJ(θ,ϕ) determines the probability distribution of the orientation of the molecule in
space. This rotational part is treated differently from the rest. Let us first discuss the problem
at a fixed orientation. We can write the overall transition moment in (3.1) as

μ⃗fi = ∫ Ψbw* μ̂ Ψav dτ = ∫ Φw(q)* Φv(q) [ ∫ Ψb^el(r⃗; q)* μ̂ Ψa^el(r⃗; q) dr ] dq    (3.5)

In order for transitions to occur this dipole moment should be non-zero. Denoting the most
complicated integral, over the electronic coordinates, as μ⃗ab(q), this can be written as

∫ Ψbw* μ̂ Ψav dτ = ∫ Φw(q)* Φv(q) μ⃗ab(q) dq    (3.6)

Moreover we can assume a Taylor series expansion of the q-dependent transition dipole by
writing
μ⃗ab(q) ≈ μ⃗ab(qe) + Σi (∂μ⃗ab/∂qi)|q=qe qi + ... ,    (3.7)

where the first term indicates the electronic transition dipole moment at the equilibrium
geometry, which we will denote below as μ⃗ab⁰ = μ⃗ab(qe). This is all we need to analyse the
various cases.

A. Pure rotational transitions: a = b, w = v .

μ⃗fi = ∫ Φv(q)* Φv(q) (μ⃗aa⁰ + Σi (∂μ⃗aa/∂qi) qi) dq = μ⃗aa⁰    (3.8)
Here μ⃗aa⁰ is the permanent dipole moment in the electronic state a (the terms linear in qi
vanish because ⟨qi⟩ = 0 for harmonic oscillator functions), and μ⃗fi ≠ 0 only if the
molecule has a so-called permanent dipole moment. Further treatment of the interaction with
the field will lead to the selection rule for diatomics ∆J = ±1, ∆M = 0 (see below).

B. Vibrational transitions: a = b , w ≠ v .

μ⃗fi = ∫ Φw(q)* Φv(q) (μ⃗aa⁰ + Σi (∂μ⃗aa/∂qi) qi) dq    (3.9)
Due to the orthogonality of Φw and Φv the integral is non-zero only if just one normal
mode is excited (say qj). This means that all factors in the product function
Φw(q) = φw1(q1) φw2(q2) ... stay the same except for the one involving the normal mode qj.
The transition moment then reduces to

μ⃗fi = (∂μ⃗aa/∂qj) ∫ φw(qj)* qj φv(qj) dqj    (3.10)

As shown below this will lead to the selection rule ∆v = ±1, and only excitations of normal
modes that lead to a change in dipole moment in the electronic state are allowed (non-zero
transition moment)! For example in CO2 the symmetric stretch cannot be excited (within this
approximation), because the dipole moment remains zero. However the two bending modes
and the asymmetric stretch can be excited.
To conclude this analysis we should prove the selection rules ∆v = ±1 for pure vibrational
transitions in the present (lowest order, harmonic) approximation. In order to prove ∆v = ±1
let us consider the one-dimensional integral

|∫_{−∞}^{∞} φw(q) q φv(q) dq|²    (3.11)

The form of the vibrational wave functions is φv(q) = Pv(q) e^{−αq²/2}, where Pv(q) is a
polynomial in q of order v. From the orthogonality of the wave functions we then deduce that

∫_{−∞}^{∞} q^n e^{−αq²/2} φv(q) dq = 0 ,   n = 0, 1, ..., v−1    (3.12)

Therefore
∫_{−∞}^{∞} φw(q) q φv(q) dq = { 0,  w < v−1 or w > v+1
                              { 0,  w = v (integrand is odd)
                              { finite,  w = v−1 or w = v+1    (3.13)

This proves the selection rule ∆v = ±1 for purely (ro)vibrational transitions.
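The same rule can also be checked by brute force. The sketch below builds harmonic-oscillator functions from the Hermite recursion (with α = 1, an arbitrary choice of units) and evaluates ⟨w|q|v⟩ by midpoint quadrature; only w = v ± 1 survives:

```python
import math

# Numerical check of eq. (3.13): harmonic-oscillator matrix elements <w| q |v>.
# alpha = 1 (an arbitrary choice of units), so phi_v(q) ~ H_v(q) exp(-q^2/2).
def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) by the standard recursion."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def phi(v, q):
    norm = math.pi**-0.25 / math.sqrt(2.0**v * math.factorial(v))
    return norm * hermite(v, q) * math.exp(-q * q / 2.0)

def q_element(w, v, a=10.0, n=40000):
    """Midpoint quadrature of <w| q |v> over [-a, a]."""
    h = 2.0 * a / n
    total = 0.0
    for i in range(n):
        q = -a + (i + 0.5) * h
        total += phi(w, q) * q * phi(v, q)
    return h * total

v = 2
row = [q_element(w, v) for w in range(6)]
print(row)  # nonzero only at w = 1 and w = 3, i.e. Delta v = +/-1
```

For v = 2 the two surviving elements come out as √(v/2) = 1 and √((v+1)/2) = √(3/2), the standard harmonic-oscillator values.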

C. Electronic transitions a ≠ b. We only keep the leading term, and have to realize that Φw
corresponds to the normal modes and equilibrium geometry of the final (excited) state
while Φv corresponds to the corresponding modes in the ground state. Keeping only the first
term in Equation (3.7) we find
μ⃗fi = μ⃗ab⁰ ∫ Φw^final(q) Φv^initial(q) dq    (3.14)

and see that the transition moment depends on the electronic transition dipole at the initial
geometry μ⃗ab⁰ and the overlap of the vibrational wave functions in initial and final states. If
this transition moment is zero (often by symmetry) the transition cannot be observed, or only
very weakly through higher order effects. The vibrational overlaps are the Franck-Condon
factors that we discussed qualitatively in class, and which are discussed in MS (but you
should include the phase information in their figures!). The actual calculation is quite
involved because the integral does not factorize if the normal modes are different in the two
electronic states.
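The simplest case can still be done in closed form and makes a useful sanity check: two v = 0 oscillator functions (α = 1) whose minima are displaced by d have overlap ⟨0'|0⟩ = e^{−d²/4}; d = 1.5 below is an illustrative value:

```python
import math

# Sketch of a Franck-Condon factor for the simplest model: v = 0 oscillator
# functions (alpha = 1) in two electronic states whose minima differ by d.
d = 1.5   # illustrative displacement of the excited-state minimum

def phi0(q):
    return math.pi**-0.25 * math.exp(-q * q / 2.0)

def overlap(d, a=12.0, n=100000):
    """Midpoint quadrature of <0'|0> = int phi0(q - d) phi0(q) dq."""
    h = 2.0 * a / n
    total = 0.0
    for i in range(n):
        q = -a + (i + 0.5) * h
        total += phi0(q - d) * phi0(q)
    return h * total

S = overlap(d)
# Closed form for this model: <0'|0> = exp(-d^2/4), FC factor = exp(-d^2/2).
print(S**2, math.exp(-d * d / 2.0))
```

Larger displacements spread the intensity over many vibrational levels; here the 0-0 factor already decays as a Gaussian in d.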

Discussion of rotational selection rules.

The orientation of the transition dipole μ⃗fi depends on the orientation of the molecule in
space, in particular with respect to the electric field. The probability distribution for the
orientation is determined by the rotational wave function ΩJ,MJ(θ,ϕ). These functions are
precisely the spherical harmonics, which we have denoted before as Yl^m(θ,ϕ). To facilitate


the discussion let us consider purely rotational transitions, so that μ⃗fi is just the permanent
dipole of the molecule. A classical dipole would start rotating under the influence of an
oscillating electric field. The energy is given by −μE cos θ, where θ is the angle between
the transition dipole moment and the electric field. In the quantum world this interaction
leads to transitions between rotational eigenstates, ΩJ,MJ(θ,ϕ) → ΩI,MI(θ,ϕ), that are
governed by the transition moment
μfi E ∫ ΩI,MI(θ,ϕ)* cos θ ΩJ,MJ(θ,ϕ) sin θ dθ dϕ    (3.15)

As discussed in MS, for diatomics these matrix elements are non-vanishing only for ∆J = ±1,
∆M = 0 . These are properties of the spherical harmonics, nothing mysterious.
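For M = 0 states these properties are easy to verify numerically, since ΩJ,0 ∝ PJ(cos θ). The sketch below evaluates the angular integral of eq. (3.15) (without the constant prefactor μfi E) by quadrature in x = cos θ:

```python
import math

# Numerical check of the Delta J = +/-1 rule for M = 0 rotational states:
# Omega_{J,0}(theta) = sqrt((2J+1)/4pi) P_J(cos theta).
def legendre(J, x):
    """Legendre polynomial P_J(x) by the standard recursion."""
    p0, p1 = 1.0, x
    if J == 0:
        return p0
    for k in range(1, J):
        p0, p1 = p1, ((2*k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def Y(J, x):
    return math.sqrt((2*J + 1) / (4.0 * math.pi)) * legendre(J, x)

def cos_element(I, J, n=20000):
    """2 pi int_{-1}^{1} Y_I(x) x Y_J(x) dx  (x = cos theta absorbs sin theta dtheta)."""
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * h
        total += Y(I, x) * x * Y(J, x)
    return 2.0 * math.pi * h * total

row = [cos_element(I, 1) for I in range(4)]
print(row)  # nonzero only for I = 0 and I = 2, i.e. Delta J = +/-1
```

Starting from J = 1, only the elements to I = 0 and I = 2 survive, with the values 1/√3 and 2/√15 expected from the Legendre recursion; all ∆J = 0 and |∆J| > 1 elements vanish.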
