
# Cramer's Rule[^1]
Suppose you have a system of equations that can be represented in matrix form as

$$
\begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix} =
\begin{pmatrix} A \\ B \\ C \end{pmatrix}
$$

Cramer's rule says

$$
x = \frac{\begin{vmatrix} A & a_{1,2} & a_{1,3} \\ B & a_{2,2} & a_{2,3} \\ C & a_{3,2} & a_{3,3} \end{vmatrix}}{\begin{vmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{vmatrix}}
\qquad
y = \frac{\begin{vmatrix} a_{1,1} & A & a_{1,3} \\ a_{2,1} & B & a_{2,3} \\ a_{3,1} & C & a_{3,3} \end{vmatrix}}{\begin{vmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{vmatrix}}
\qquad
z = \frac{\begin{vmatrix} a_{1,1} & a_{1,2} & A \\ a_{2,1} & a_{2,2} & B \\ a_{3,1} & a_{3,2} & C \end{vmatrix}}{\begin{vmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{vmatrix}}
$$

There's nothing special about three equations and three unknowns; the technique can be extended to solve any system of equations.

## Solving for Nothing

Suppose the system of equations looks like
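As a quick numerical check, Cramer's rule can be sketched in a few lines (assuming NumPy; the notes themselves use no code). The helper `cramer_solve` and the example system are illustrative, not from the notes:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_k = det(A_k) / det(A),
    where A_k is A with column k replaced by b."""
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("coefficient determinant is zero")
    x = np.empty(len(b))
    for k in range(len(b)):
        Ak = A.copy()
        Ak[:, k] = b                      # replace column k with the right-hand side
        x[k] = np.linalg.det(Ak) / d
    return x

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([3.0, 5.0, 3.0])
print(cramer_solve(A, b))                 # [1. 1. 1.]
```

For large systems this is far slower than Gaussian elimination, but it makes the column-replacement structure of the rule explicit.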

$$
\begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix} =
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
$$
$$
x = \frac{\begin{vmatrix} 0 & a_{1,2} & a_{1,3} \\ 0 & a_{2,2} & a_{2,3} \\ 0 & a_{3,2} & a_{3,3} \end{vmatrix}}{\begin{vmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{vmatrix}}
\qquad
y = \frac{\begin{vmatrix} a_{1,1} & 0 & a_{1,3} \\ a_{2,1} & 0 & a_{2,3} \\ a_{3,1} & 0 & a_{3,3} \end{vmatrix}}{\begin{vmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{vmatrix}}
\qquad
z = \frac{\begin{vmatrix} a_{1,1} & a_{1,2} & 0 \\ a_{2,1} & a_{2,2} & 0 \\ a_{3,1} & a_{3,2} & 0 \end{vmatrix}}{\begin{vmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{vmatrix}}
$$

(Notice that the numerators are all zero.)

[^1]: This installment of Corbin's notes is supposed to provide a little perspective on common features that tie some relatively disparate topics together.

$$
\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
\quad\text{unless}\quad
\begin{vmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{vmatrix} = 0
$$

This is important.
An equation in the form

$$
\begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix} =
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
$$

has a non-trivial solution

$$
\begin{pmatrix} x \\ y \\ z \end{pmatrix} \neq \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
$$

if and only if...

$$
\begin{vmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{vmatrix} = 0
$$
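This claim is easy to check numerically; the sketch below (assuming NumPy, with a made-up matrix whose third row is the sum of the first two) confirms that a zero determinant comes with a non-trivial null vector:

```python
import numpy as np

# A singular matrix (row 3 = row 1 + row 2), so its determinant is zero.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])
print(np.isclose(np.linalg.det(A), 0.0))   # True

# The SVD exposes the null space: the right-singular vector belonging to a
# zero singular value satisfies A v = 0 with v != 0.
_, s, Vt = np.linalg.svd(A)
v = Vt[-1]                                  # vector for the smallest singular value
print(np.allclose(A @ v, 0.0))              # True: a non-trivial solution
```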
## Eigen-Math

Suppose you have a system of equations that can be represented in matrix form as

$$
\begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix} =
\lambda \begin{pmatrix} x \\ y \\ z \end{pmatrix}
$$

Insert the identity matrix on the right-hand side:

$$
\begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix} =
\lambda \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
$$
and collect terms on the left-hand side

$$
\begin{pmatrix} a_{1,1}-\lambda & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2}-\lambda & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3}-\lambda \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix} =
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
$$

This will only have a non-trivial solution if

$$
\begin{vmatrix} a_{1,1}-\lambda & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2}-\lambda & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3}-\lambda \end{vmatrix} = 0
$$
## Eigen-trivia

The set of values of $\lambda$ that solve this determinant relationship, $\{\lambda_i\}$, are known as eigenvalues. For each eigenvalue $\lambda_i$, there is an eigenvector

$$
\begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix}
$$

that satisfies

$$
\begin{pmatrix} a_{1,1}-\lambda_i & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2}-\lambda_i & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3}-\lambda_i \end{pmatrix}
\begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix} =
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
$$
If the eigenvalues are all different (non-degenerate), the eigenvectors will all be mutually orthogonal (this holds for the symmetric matrices that appear in the applications below). If there is any degeneracy, the eigenvectors may be chosen to be orthogonal.

Eigenvectors span the vector space they occupy, which is to say, any vector in that space may be formed from a superposition of the eigenvectors. Normalized, the set of eigenvectors forms an orthonormal basis.
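These properties can be verified numerically for a real symmetric matrix; a minimal sketch assuming NumPy (the matrix is made up for illustration):

```python
import numpy as np

# A real symmetric matrix: its eigenvalues are real and its
# eigenvectors can be chosen mutually orthogonal.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
w, V = np.linalg.eigh(A)        # columns of V are unit eigenvectors

# Each pair (w[i], V[:, i]) satisfies det(A - w_i * 1) = 0 and A v = w_i v.
for i in range(3):
    assert np.isclose(np.linalg.det(A - w[i] * np.eye(3)), 0.0)
    assert np.allclose(A @ V[:, i], w[i] * V[:, i])

# Orthonormal basis: V^T V = 1.
print(np.allclose(V.T @ V, np.eye(3)))   # True
```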
## Applications

### Rotational Inertia
If one finds the rotational inertia for a body about some arbitrary point in some arbitrary frame of reference, the result is not, in general, a simple scalar (as was usually the case in introductory physics), but rather, a 3×3 tensor. This means the angular momentum of a body doesn't have to point in the same direction as its angular velocity:

$$
\vec{L} = I\,\vec{\omega}
$$

The angular momentum doesn't have to point in the same direction as the angular velocity. But it could. Are there axes about which we could spin an object and find that $\vec{L}$ and $\vec{\omega}$ are parallel?

Along these hypothetical axes where $\vec{L}$ is parallel to $\vec{\omega}$, life is simple, and the rotational inertia can be represented with a scalar:

$$
\vec{L} = I_\lambda\,\vec{\omega}
$$

So we're looking for $\vec{\omega}$'s that satisfy

$$
I\,\vec{\omega} = I_\lambda\,\vec{\omega}
$$

Insert the identity matrix $\mathbb{1}$ on the right-hand side

$$
I\,\vec{\omega} = I_\lambda\,\mathbb{1}\,\vec{\omega}
$$

Collect terms on the left-hand side

$$
\left( I - I_\lambda\,\mathbb{1} \right)\vec{\omega} = 0
$$

and observe that this is one of those systems of equations that equate to zero. Non-trivial solutions for the axes require

$$
\mathrm{Det}\left[\, I - I_\lambda\,\mathbb{1} \,\right] = 0
$$

so the (scalar) moments of inertia $I_\lambda$ about these special axes are just eigenvalues of the rotational inertia tensor. The axis of rotation about which each of these moments is relevant lies along the eigenvector $\vec{\omega}$ associated with that moment.

We call those special axes about which $\vec{L}$ is parallel to $\vec{\omega}$ principal axes, and the rotational inertias about each of these axes principal moments of inertia.
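A minimal sketch, assuming NumPy and a made-up symmetric inertia tensor, showing that the principal moments and principal axes fall out of the eigen-decomposition, and that $\vec{L}$ is parallel to $\vec{\omega}$ along each principal axis:

```python
import numpy as np

# Illustrative (made-up) inertia tensor for some body in some frame;
# rotational inertia tensors are real and symmetric.
I = np.array([[ 3.0, -1.0, 0.0],
              [-1.0,  3.0, 0.0],
              [ 0.0,  0.0, 5.0]])

moments, axes = np.linalg.eigh(I)   # principal moments, principal axes (columns)

# Spinning about a principal axis, L = I w is parallel to w:
for k in range(3):
    w = axes[:, k]
    L = I @ w
    assert np.allclose(L, moments[k] * w)

print(moments)   # the principal moments of inertia, ascending: [2. 4. 5.]
```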
### Coupled Oscillations
Start with a system of coupled oscillators. Write out the kinetic and potential energies in coordinates tied to some inertial frame of reference, then convert those coordinates to a more convenient set of generalized coordinates. At this point, the kinetic and potential energies are likely to look like

$$
T = \frac{1}{2}\sum_{i,j} m_{i,j}\,\dot{q}_i \dot{q}_j
\qquad
V = \frac{1}{2}\sum_{i,j} k_{i,j}\,q_i q_j
$$
and the Lagrangian like

$$
L = \frac{1}{2}\sum_{i,j} \left( m_{i,j}\,\dot{q}_i \dot{q}_j - k_{i,j}\,q_i q_j \right)
$$

Evaluate the Euler-Lagrange equation for $q_r$...

$$
\sum_i \left( m_{i,r}\,\ddot{q}_i + k_{i,r}\,q_i \right) = 0
$$
Now, in general, the components of the system will oscillate and the specific solution for each $q_r$ will depend on the initial conditions for the system. Let's put that aside for the moment and go for something a little more interesting. Is it possible to find solutions (normal modes) in which each of the system's components is oscillating with the same characteristic frequency?

To find out, let's write the equation of motion for each component in the form

$$
q_i = A_i\, e^{i(\omega t + \phi)} \quad \text{(where } A_i \text{ is real)}
$$

Plug these into the Euler-Lagrange equation we wrote for $q_r$ and you get

$$
\sum_i \left( -m_{i,r}\,\omega^2 + k_{i,r} \right) A_i = 0
$$
a single linear equation in the $A_i$'s. But, there's one such equation for each value of $r$, so we really have a system of linear equations that can be written in matrix form!

$$
\left( K - \omega^2 M \right) \vec{A} = 0
$$

Non-trivial solutions for the coefficients $\{A_i\}$ require

$$
\mathrm{Det}\left[\, K - \omega^2 M \,\right] = 0
$$

The characteristic frequencies of the system are obtained from the values of $\omega^2$ that satisfy this determinant condition. The eigenvectors associated with each characteristic
frequency will, ultimately, determine the relative motions of the components in these special normal modes.

But wait - there's more! The eigenvectors form a basis set. We can, in principle, go back and construct the solution specific to our given initial conditions using a superposition of these normal modes. If you're not so interested in a specific solution, studying the (unnormalized) normal modes is usually sufficient to get an idea of what the system is capable of.
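For a concrete case, take two equal masses joined to each other and to two walls by three identical springs (an illustrative choice, not from the notes). The determinant condition $\mathrm{Det}[K - \omega^2 M] = 0$ is equivalent to $\omega^2$ being an eigenvalue of $M^{-1}K$, which a NumPy sketch can solve directly:

```python
import numpy as np

# Two equal masses m coupled by three identical springs k (wall-mass-mass-wall):
m, k = 1.0, 1.0
M = np.array([[m, 0.0], [0.0, m]])
K = np.array([[2*k, -k], [-k, 2*k]])

# det(K - w^2 M) = 0  <=>  w^2 is an eigenvalue of M^{-1} K.
w2, modes = np.linalg.eig(np.linalg.inv(M) @ K)
order = np.argsort(w2)
w2, modes = w2[order], modes[:, order]

print(np.sqrt(w2))   # characteristic frequencies: sqrt(k/m) and sqrt(3k/m)
print(modes)         # columns: in-phase (1,1) and out-of-phase (1,-1) modes,
                     # up to sign and normalization
```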
### Quantum Mechanics
In a quantum mechanical system, every measurement you can make on a system is represented by a mathematical operator ($M$) that acts on the function representing the state of the system ($\psi$). Now, in general, operating on a function will change that function; in quantum mechanics, it's no different; making a measurement on a system can/will/does change the state of the system! Usually.

Meaningful measurements must be repeatable. If the very act of measuring something's length changes that length, you can't know the length and it's really not observable. Meaningful measurements must leave the system in a state where an immediate repeat of the measurement would obtain the same result.

Suppose a system is in some general state $\psi$, and we measure some quantity represented by the operator $M$. If $\psi$ has no special relationship to $M$, measuring this quantity is likely to change the state of the system:

$$
M\psi = \psi'
$$

But it needs to put the system in a state in which repeated measurements of the quantity associated with $M$ yield consistent results, so $\psi$ must satisfy

$$
M\psi = m\psi
$$
For convenience, let's assume that $M$ is a matrix operator, and the state $\psi$ is a vector (what follows can be generalized to non-matrix operators, too). Insert an identity operator and collect terms on the left:

$$
\left( M - m\,\mathbb{1} \right)\psi = 0
$$

This will have non-trivial solutions if

$$
\mathrm{Det}\left[\, M - m\,\mathbb{1} \,\right] = 0
$$
Though it won't be so obvious from our simple treatment, the eigenvalues of $M$, $\{m_i\}$, represent the values one may obtain when one measures the quantity associated with the operator $M$. The eigenvectors (or, more generally, eigenfunctions) of $M$, namely $\{\psi_i\}$, represent the set of states a measurement of the quantity associated with $M$ can leave you in.

Like any set of eigenvectors, the set $\{\psi_i\}$ forms a basis in the space of quantum mechanical solutions for a system. Any state may be represented as a sum over eigenfunctions of the operator $M$:

$$
\psi = \sum_i a_i\,\psi_i
\qquad
\psi^*\psi = \sum_{i,j} a_i\, a_j^*\, \psi_i\, \psi_j^*
$$
Assume the states have all been normalized...

$$
1 = \sum_i a_i\, a_i^*
$$

The quantity $a_i a_i^*$ represents the probability with which a measurement of the quantity associated with $M$ on a system in state $\psi$ will yield the value $m_i$ and leave the system in the $\psi_i$ eigenstate of $M$.
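As a toy illustration (assuming NumPy; the two-state "observable" below is made up), the probabilities $a_i a_i^*$ fall out of expanding a normalized state in the eigenbasis of $M$:

```python
import numpy as np

# A Hermitian matrix standing in for an observable on a two-state system.
M = np.array([[1.0,  1.0],
              [1.0, -1.0]])
m_vals, states = np.linalg.eigh(M)   # possible results m_i, eigenstates psi_i (columns)

# A normalized general state psi, expanded in the eigenbasis of M:
psi = np.array([1.0, 0.0])
a = states.T @ psi                   # expansion coefficients a_i

probs = a * a.conj()                 # a_i a_i^* : probability of measuring m_i
print(probs.real)                    # the two outcome probabilities
print(probs.real.sum())              # 1.0 (probabilities sum to one)
```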