
LECTURE NOTES 3

Formalism

I. HILBERT SPACE

A. LINEAR VECTOR SPACE

A linear vector space over the field C of complex numbers α, β, ... is an abstract set of elements (vectors), denoted by a ket |a >, with the following properties (in what follows ∀ means "for all", ∃ "there exists", ∈ "in, or belongs to"):
1. ∀ |a > and |b > ∈ H there exists a rule for forming the sum (|a > + |b >) ∈ H.

2. ∀ |a > and |b > ∈ H:

(|a > + |b >) = (|b > + |a >) (commutative law).

3. (|a > + |b >) + |c > = |a > + (|b > + |c >) (associative law).

4. ∃ a null vector |0 > ∈ H with the property |a > + |0 > = |a >.

5. ∀ |a > ∈ H ∃ a negative vector (also called the inverse vector) |−a > ∈ H such that

|a > + |−a > = |0 >.

6. ∀ α and β ∈ C the following takes place:

α(|a > + |b >) = α|a > + α|b >
(α + β)|a > = α|a > + β|a >
(α·β)|a > = α(β|a >)
0|a > = |0 >
1|a > = |a >
(−1)|a > = |−a >

Example of a finite vector space

All polynomials of degree < N on the interval −1 ≤ x ≤ 1, i.e. all functions of the following type:

f_N(x) = a_0 + a_1 x + a_2 x² + ... + a_{N−1} x^{N−1} ↔ |f_N(x) >
These polynomials form a linear vector space

B. HILBERT SPACE
A Hilbert space H is a linear vector space over the field C of complex numbers α, β, ... with an inner (scalar) product for every ordered pair of vectors. In other words, it is an abstract set of elements (vectors), denoted by a ket |a >, with the properties (1)-(6) plus a definition of an inner (scalar) product for every ordered pair of vectors.
7. A scalar (inner) product {|a >, |b >} ∈ C is defined in H with the following properties:

{|a >, |b >} = ({|b >, |a >})*

{|a >, |b > + |c >} = {|a >, |b >} + {|a >, |c >}

and, for a sum in the first argument,

{|a > + |b >, |c >} = {|a >, |c >} + {|b >, |c >}

Example of a finite vector space with an inner product

All polynomials of degree < N on the interval −1 ≤ x ≤ 1, i.e. all functions of the following type:

f_N(x) = a_0 + a_1 x + a_2 x² + ... + a_{N−1} x^{N−1} ↔ |f_N(x) >

These polynomials form a linear vector space with inner product defined as follows:

{|f_N(x) >, |g_N(x) >} = ∫_{−1}^{1} f_N*(x) g_N(x) dx

If, for example, f_N(x) = 2 + x² and g_N(x) = 5x^N, then

{|f_N(x) >, |g_N(x) >} = ∫_{−1}^{1} (2 + x²)(5x^N) dx = 10 ∫_{−1}^{1} x^N dx + 5 ∫_{−1}^{1} x^{N+2} dx

If N is even, we have

{|f_N(x) >, |g_N(x) >} = 20/(N+1) + 10/(N+3).

If N is odd, we have

{|f_N(x) >, |g_N(x) >} = 0.
8. Two vectors, |a > and |b >, are orthogonal if {|a >, |b >} = 0.
Example of orthogonal vectors: If N is odd, the above two vectors, f_N(x) = 2 + x² and g_N(x) = 5x^N, are orthogonal.
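As a quick check of the even/odd result above, the following Python sketch computes the inner product exactly with rational arithmetic. (The coefficient-list representation of polynomials is an assumption of the sketch, not notation from the notes.)

```python
from fractions import Fraction

def poly_inner(p, q):
    """Inner product  int_{-1}^{1} p(x) q(x) dx  of real polynomials given as
    coefficient lists [c0, c1, ...] (c_k multiplies x^k)."""
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += Fraction(a) * Fraction(b)
    # int_{-1}^{1} x^k dx = 2/(k+1) for even k, and 0 for odd k
    return sum(c * Fraction(2, k + 1) for k, c in enumerate(prod) if k % 2 == 0)

f = [2, 0, 1]                        # f(x) = 2 + x^2
for N in (4, 8):                     # even N: 20/(N+1) + 10/(N+3)
    g = [0] * N + [5]                # g(x) = 5 x^N
    assert poly_inner(f, g) == Fraction(20, N + 1) + Fraction(10, N + 3)
for N in (3, 7):                     # odd N: the integrand is odd, so 0
    assert poly_inner(f, [0] * N + [5]) == 0
```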
9a. A norm ||a|| ∈ R (R is the field of real numbers) of a vector |a > is defined by

||a|| = √{|a >, |a >}

Example: the norm of the vector f_N(x) = 2 + x² is

||f_N(x)|| = √( ∫_{−1}^{1} (2 + x²)² dx ) = √(166/15)

9b. The norm satisfies:

- the Schwarz inequality

| {|a >, |b >} | ≤ ||a|| · ||b||

Example: Let f_N(x) = 2 + x² and g_N(x) = 5x⁸. The norm of f_N(x) is

||f_N(x)|| = √( ∫_{−1}^{1} (2 + x²)² dx ) = √(166/15) ≈ 3.3

The norm of g_N(x) is

||g_N(x)|| = √( ∫_{−1}^{1} (5x⁸)² dx ) = 5√(2/17) ≈ 1.7

The absolute value of {|f_N(x) >, |g_N(x) >} is

| ∫_{−1}^{1} f_N*(x) g_N(x) dx | = 310/99 ≈ 3.1

and indeed 3.1 < ||f_N(x)|| · ||g_N(x)|| ≈ 5.7, in agreement with the Schwarz inequality.

- the triangle inequality:

|| |a > + |b > || ≤ ||a|| + ||b||

with the equality only being valid when |a > = λ|b >, and λ > 0.
For our above two vectors we calculate:

||f_N(x) + g_N(x)|| ≈ 4.5 ≤ ||f_N(x)|| + ||g_N(x)|| ≈ 5.0
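The numbers quoted in the Schwarz and triangle inequality examples can be verified with the same exact polynomial inner product (a sketch for illustration; `inner` and `norm` are helper names introduced here):

```python
from fractions import Fraction
from math import isclose, sqrt

def inner(p, q):
    """Exact int_{-1}^{1} p(x) q(x) dx for coefficient lists."""
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += Fraction(a) * Fraction(b)
    return sum(c * Fraction(2, k + 1) for k, c in enumerate(prod) if k % 2 == 0)

def norm(p):
    return sqrt(inner(p, p))

f = [2, 0, 1]                  # f(x) = 2 + x^2
g = [0] * 8 + [5]              # g(x) = 5 x^8

assert inner(f, f) == Fraction(166, 15)            # ||f||^2 = 166/15
assert inner(g, g) == Fraction(50, 17)             # ||g||^2 = 25 * 2/17
assert inner(f, g) == Fraction(310, 99)            # {f, g} = 310/99, about 3.1
assert abs(inner(f, g)) <= norm(f) * norm(g)       # Schwarz: 3.1 <= 5.7

h = [a + b for a, b in zip(f + [0] * 6, g)]        # h = f + g
assert norm(h) <= norm(f) + norm(g)                # triangle: 4.5 <= 5.0
assert isclose(norm(h), 4.50, abs_tol=0.01)
```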

C. DISCRETE SPACE OF DIMENSION N

The term Hilbert space H usually is reserved for an infinite-dimensional space. In this Section, we discuss an N-dimensional (finite-dimensional) linear space V_N with a scalar product (which can also be called an N-dimensional Hilbert space).
Any set of N linearly independent vectors |e_1 >, |e_2 >, ..., |e_N > is called a basis in V_N. Recall that |e_1 >, |e_2 >, ..., |e_N > are linearly independent vectors if

Σ_{i=1}^{N} λ_i |e_i > = |0 >    (1)

is possible only when all λ_i = 0. The basis is called an orthonormal basis if

{|e_i >, |e_j >} = δ_ij    (2)

Any basis can be transformed into an orthonormal one by means of the Gram-Schmidt procedure. Let |E_1 >, |E_2 >, ..., |E_N > be a basis, but not an orthonormal one. The new orthonormal basis will be denoted by |e_1 >, |e_2 >, ..., |e_N >. The first vector from the new orthonormal basis is

|e_1 > = (1/||E_1||) |E_1 >

To obtain |e_2 >, we first construct the vector

|e'_2 > = |E_2 > − {|e_1 >, |E_2 >} |e_1 >

This vector satisfies the condition:

{|e_1 >, |e'_2 >} = 0

Thus, the second orthonormal vector is:

|e_2 > = (1/||e'_2||) |e'_2 >

Similarly,

|e'_3 > = |E_3 > − {|e_1 >, |E_3 >} |e_1 > − {|e_2 >, |E_3 >} |e_2 >,

|e_3 > = (1/||e'_3||) |e'_3 >

The same procedure can be used to find |e_4 >, |e_5 >, ..., and |e_N >.
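The recursion above can be sketched in Python for the polynomial space used in these notes, with the inner product ∫_{−1}^{1} p(x) q(x) dx. Starting from the monomials 1, x, x², x³ it reproduces the normalized Legendre polynomials (the coefficient-list representation is an assumption of the sketch):

```python
from fractions import Fraction
from math import sqrt, isclose

def inner(p, q):
    """int_{-1}^{1} p(x) q(x) dx for polynomials as coefficient lists."""
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += a * b
    return sum(c * Fraction(2, k + 1) for k, c in enumerate(prod) if k % 2 == 0)

def gram_schmidt(basis):
    """|e'_k> = |E_k> - sum_i {|e_i>, |E_k>} |e_i>, then normalize, as in the notes."""
    ortho = []
    for E in basis:
        e = [Fraction(c) for c in E]
        for prev in ortho:
            c = inner(prev, e)
            pad = prev + [0.0] * (len(e) - len(prev))
            e = [a - c * b for a, b in zip(e, pad)]
        nrm = sqrt(inner(e, e))
        ortho.append([float(c) / nrm for c in e])
    return ortho

# monomials 1, x, x^2, x^3  ->  normalized Legendre polynomials
y = gram_schmidt([[1], [0, 1], [0, 0, 1], [0, 0, 0, 1]])
assert isclose(y[0][0], sqrt(1 / 2))                    # y_0 = sqrt(1/2)
assert isclose(y[1][1], sqrt(3 / 2))                    # y_1 = sqrt(3/2) x
assert isclose(y[2][2], 3 * sqrt(5 / 8))                # y_2 = sqrt(5/8)(3x^2 - 1)
assert isclose(inner(y[2], y[3]), 0, abs_tol=1e-12)     # orthogonality
assert isclose(inner(y[3], y[3]), 1)                    # normalization
```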
Suppose that we have chosen a given orthonormal basis |e_1 >, |e_2 >, ..., |e_N >. Then, any vector |a > from V_N is uniquely defined by the equation:

|a > = Σ_{i=1}^{N} a_i |e_i >,    (3)

where the complex numbers a_1, a_2, ..., a_N, called the components of the vector |a > in the given basis |e_1 >, |e_2 >, ..., |e_N >, are defined by the equation:

a_i = {|e_i >, |a >}    (4)

We now postulate that to each ket-vector |a > in the ket space, there exists a corresponding quantity called a bra-vector, denoted by < a|, in a dual space V'. There is a one-to-one correspondence between bra- and ket-vectors:

|a > ↔ < a|    (5)

λ|a > ↔ λ* < a|    (6)

Assuming a one-to-one correspondence between the bra- and ket-basis,

|e_i > ↔ < e_i|,    i = 1, ..., N

the components of < a| in the bra-basis < e_1|, < e_2|, ..., < e_N| are a_1*, a_2*, ..., a_N*, i.e.

< a| = Σ_{i=1}^{N} a_i* < e_i|    (7)

For |a > = Σ_{i=1}^{N} a_i |e_i > and |b > = Σ_{i=1}^{N} b_i |e_i > the inner product is

{|a >, |b >} = Σ_{i,j=1}^{N} a_i* b_j δ_ij = Σ_{i=1}^{N} a_i* b_i    (8)

By means of the bra-vectors we redefine the inner product {|a >, |b >} between two ket-vectors |a > = Σ_{i=1}^{N} a_i |e_i > and |b > = Σ_{i=1}^{N} b_i |e_i > in terms of a product between a bra-vector < a| and a ket-vector |b >:

< a|b > = Σ_{i,j=1}^{N} a_i* b_j < e_i|e_j > = Σ_{i,j=1}^{N} a_i* b_j δ_ij = Σ_{i=1}^{N} a_i* b_i    (9)

We can also introduce a one-to-one correspondence between matrices and ket- and bra-vectors in a given orthonormal basis. The correspondence is as follows: any ket-vector |a > from the discrete space V_N can be represented by an N × 1 (one-column) matrix

|a > ↔ ( a_1 )
       ( a_2 )
       ( ... )
       ( a_N )

any bra-vector from the dual space V'_N can be represented by a 1 × N matrix (one-row matrix):

< a| ↔ ( a_1*  a_2*  ...  a_N* )

There is a one-to-one correspondence between the inner product {|a >, |b >} and the product between the two matrices:

{|a >, |b >} = ( a_1*  a_2*  ...  a_N* ) ( b_1 )  = Σ_{i=1}^{N} a_i* b_i
                                         ( b_2 )
                                         ( ... )
                                         ( b_N )
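The matrix picture can be illustrated directly: a ket is a column of components, the bra is the conjugated row, and the inner product is the row-times-column product (a minimal sketch with made-up components):

```python
a = [1 + 2j, 3j, 2]      # components a_i of |a> in an orthonormal basis (made up)
b = [2, 1 - 1j, 4j]      # components b_i of |b>

bra_a = [z.conjugate() for z in map(complex, a)]       # <a| is the row (a_1*, ..., a_N*)
inner_ab = sum(x * y for x, y in zip(bra_a, b))        # {|a>, |b>} = sum_i a_i* b_i

# property 7: {|a>, |b>} = ({|b>, |a>})*
inner_ba = sum(complex(y).conjugate() * x for x, y in zip(a, b))
assert inner_ab == inner_ba.conjugate()

# {|a>, |a>} = ||a||^2 is real and non-negative
norm_sq = sum(x * x.conjugate() for x in map(complex, a))
assert norm_sq.imag == 0 and norm_sq.real >= 0
```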
D. EXAMPLE FOR A FINITE-DIMENSIONAL VECTOR SPACE

The vectors in this example are defined as all polynomials of degree < N on the interval −1 ≤ x ≤ 1, i.e. all functions of the following type:

f_N(x) = a_0 + a_1 x + a_2 x² + ... + a_{N−1} x^{N−1} ↔ |f_N(x) >

These polynomials form a linear vector space (one can check that the requirements (1) to (6) are fulfilled). The inner product of two vectors, |f_N(x) > and |g_N(x) >, is defined as follows:

{|f_N(x) >, |g_N(x) >} = ∫_{−1}^{1} f_N*(x) g_N(x) dx

The vectors (polynomials)

Y_0(x) = x⁰ = 1,    Y_1(x) = x,    Y_2(x) = x², ...,    Y_{N−1}(x) = x^{N−1}

from this linear space form a linearly independent set of N vectors, and therefore, they can be used as a basis in this vector space. Since ∫_{−1}^{1} Y_i(x) Y_j(x) dx ≠ δ_ij, the linearly independent basis functions Y_i(x) do not form an orthonormal basis.
Applying the Gram-Schmidt procedure, we can obtain the following orthonormal basis y_i(x) (normalized Legendre polynomials):

y_0(x) = √(1/2),    y_1(x) = √(3/2) x,    y_2(x) = √(5/8) (3x² − 1),    y_3(x) = √(7/8) (5x³ − 3x),

y_4(x) = √(9/128) (35x⁴ − 30x² + 3), ...,

∫_{−1}^{1} y_i(x) y_j(x) dx = δ_ij

In this orthonormal basis, any ket vector of the space is defined by providing N coefficients f_0, f_1, ..., f_{N−1}, i.e. f_N(x) = Σ_{i=0}^{N−1} f_i y_i(x). We establish a one-to-one correspondence between the ket vectors of the linear space and the N × 1 matrices:

|f_N(x) > ↔ ( f_0 )
            ( f_1 )
            ( ... )
            ( f_{N−1} )

If g_N(x) = Σ_{i=0}^{N−1} g_i y_i(x) is another ket vector from the same space,

|g_N(x) > ↔ ( g_0 )
            ( g_1 )
            ( ... )
            ( g_{N−1} )

then:
(i) |f_N(x) > + |g_N(x) > = |h_N(x) > is a vector

|h_N(x) > ↔ ( f_0 + g_0 )
            ( f_1 + g_1 )
            ( ... )
            ( f_{N−1} + g_{N−1} )

from the same space;
(ii) the scalar product of two kets f_N(x) and g_N(x) is defined by

{|f_N(x) >, |g_N(x) >} = Σ_{i=0}^{N−1} f_i* g_i

As we postulated above, there must be a one-to-one correspondence between bra- and ket-vectors. This requirement can be fulfilled using the following correspondence: (i) since the Legendre polynomials are real functions, the bra basis < y_i(x)| is formed by the same Legendre polynomials; (ii) to each ket-vector |f_N(x) > in the ket space, there exists a corresponding bra-vector, denoted by < f_N(x)|, in a dual space V':

< f_N(x)| ↔ ( f_0*  f_1*  ...  f_{N−1}* )

We establish a one-to-one correspondence between the bra vectors and the 1 × N matrices. Thus, we can rewrite the inner product between two kets as the product between bra and ket vectors:

{|f_N(x) >, |g_N(x) >} = < f_N(x)|g_N(x) > = ( f_0*  f_1*  ...  f_{N−1}* ) ( g_0 )  = Σ_{i=0}^{N−1} f_i* g_i
                                                                           ( g_1 )
                                                                           ( ... )
                                                                           ( g_{N−1} )
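A sketch of the component formula f_i = {|y_i >, |f_N >} for the vector f_N(x) = 2 + x² used earlier, together with the Parseval-type identity {f, f} = Σ_i |f_i|² = 166/15. The midpoint-rule integrator is a numerical assumption of the sketch:

```python
from math import sqrt, isclose

# the first three orthonormal basis functions from the notes
y0 = lambda x: sqrt(1 / 2)
y1 = lambda x: sqrt(3 / 2) * x
y2 = lambda x: sqrt(5 / 8) * (3 * x ** 2 - 1)

def integrate(h, n=20000):
    """Midpoint rule on [-1, 1] (the numerical integrator is an assumption)."""
    dx = 2 / n
    return sum(h(-1 + (k + 0.5) * dx) for k in range(n)) * dx

f = lambda x: 2 + x ** 2
coeffs = [integrate(lambda x, y=y: y(x) * f(x)) for y in (y0, y1, y2)]

assert isclose(coeffs[0], 14 / (3 * sqrt(2)), rel_tol=1e-6)   # f_0
assert isclose(coeffs[1], 0, abs_tol=1e-9)                    # f_1 = 0 by parity
# Parseval: {f, f} = sum_i |f_i|^2 = 166/15 (f has no components beyond y_2)
assert isclose(sum(c * c for c in coeffs), 166 / 15, rel_tol=1e-6)
```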

E. INFINITE-DIMENSIONAL SPACE
The above equations can be generalized for the case when the space is not a finite-dimensional space, but an infinitedimensional space. As an example, we consider the infinite-dimensional space of all functions f (x), g(x), h(x), ... on the
interval a x b. These functions are vectors from a linear vector space (statements (1) -(6) take place). Instead of a
basis of N linearly independent vectors, one has to use a complete set of vectors |Y1 (x) >, |Y2 (x) >, ..., |Yi (x) >, ...,
and by definition, if the set is complete, then an arbitrary vector |f (x) > can be expanded in terms of it:
|f (x) >=

ai |Yi (x) >

or f (x) =

i=0

ai Yi (x)

(10)

i=0

Applying the Gram-Schmidt orthogonalisation method, we can obtain a complete set of orthonormal vectors |y_1(x) >, |y_2(x) >, ..., |y_i(x) >, ... such that

∫_a^b y_i*(x) y_j(x) dx = δ_ij

In this orthonormal set,

|f(x) > = Σ_{i=0}^{∞} f_i |y_i(x) >    or    f(x) = Σ_{i=0}^{∞} f_i y_i(x),    f_i = ∫_a^b y_i*(x) f(x) dx    (11)

and the inner product with another vector |g(x) > = Σ_{i=0}^{∞} g_i |y_i(x) > is given by the equation:

{|f(x) >, |g(x) >} = Σ_{i=0}^{∞} f_i* g_i

The dual infinite-dimensional space is a space of all functions f*(x), g*(x), h*(x), ...:

< f(x)| = Σ_{i=0}^{∞} f_i* < y_i(x)|    or    f*(x) = Σ_{i=0}^{∞} f_i* y_i*(x),    f_i* = ∫_a^b y_i(x) f*(x) dx    (12)

Note that we introduced a complete vector space and a complete set of vectors. A complete vector space means that linear combinations like (10), which involve an infinite number of terms, belong to the vector space. The latter requires that < f(x)|f(x) > is finite:

{|f(x) >, |f(x) >} = < f(x)|f(x) > = Σ_{i=0}^{∞} |f_i|² < ∞    (13)

II. LINEAR OPERATORS ON HILBERT SPACE

An operator Â on Hilbert space H is an instruction for transforming |a > ∈ H into another vector |b > ∈ H:

Â|a > = |b >

In other words, a linear operator induces a mapping of H onto itself (or onto a subspace of H). For linear operators:

Â(α|a > + β|b >) = αÂ|a > + βÂ|b >
The following definitions are valid ∀ |a > ∈ H:

1. Two operators, Â and B̂, are equal, Â = B̂, if

Â|a > = B̂|a >

2. Sum of two operators:

(Â + B̂)|a > = Â|a > + B̂|a >

3. Product of two operators:

(Â·B̂)|a > = Â(B̂|a >)

4. Zero operator, and unit operator:

Ô|a > = |0 >,    1̂|a > = |a >

5. If f(x) = Σ_n a_n x^n then:

f(Â) = Σ_n a_n Â^n
6. If

Â|a > = λ|a >

then |a > is an eigenvector of the operator Â with eigenvalue λ. The set of all eigenvalues of Â is called the spectrum, which can consist of discrete values λ_n or continuous values λ (or both).
7. We wrote earlier that an operator Â acts on ket-vectors from the left. Now, we introduce an operator Â†, named the adjoint of Â, which acts in the dual space of the bra-vectors, and acts on any bra-vector from the right:

Â|a > ↔ (< a|)Â†

8. The last equation states that if |a > and < a| are the corresponding ket- and bra-vector, then Â|a > and < a|Â† also should be corresponding ket- and bra-vectors.
It can be proved that:

9a. (Â†)† = Â;

9b. (Â + B̂)† = Â† + B̂†;

9c. (ÂB̂)† = B̂†Â†;

9d. (λÂ)† = λ*Â†

10. If Â† = Â then the operator Â is called a Hermitian operator.
For any Hermitian operator Â† = Â we have:

< f|Â|g > = (< g|Â|f >)*    (14)

Indeed, introducing |G > = Â|g >, we obtain

< f|Â|g > = < f|G > = Σ_i f_i* G_i

and

(< g|Â|f >)* = (< g|Â†|f >)* = (< G|f >)* = Σ_i f_i* G_i

11a. If Â = Â†, then all eigenvalues λ_n are real.

11b. If Â = Â†, the eigenvectors, |a > and |b >, corresponding to the different eigenvalues λ and μ, are orthogonal, i.e. if Â|a > = λ|a >, Â|b > = μ|b >, and λ ≠ μ, then < a|b > = 0.
12. The operator Û is called a unitary operator if

Û†Û = ÛÛ† = 1̂
13. Commutator:

[Â, B̂] = ÂB̂ − B̂Â
14. Properties of commutators:

14a. [Â, B̂] = −[B̂, Â]

14b. [Â, B̂ + Ĉ] = [Â, B̂] + [Â, Ĉ]

14c. [Â, B̂Ĉ] = [Â, B̂]Ĉ + B̂[Â, Ĉ]

14d. [Â, [B̂, Ĉ]] + [B̂, [Ĉ, Â]] + [Ĉ, [Â, B̂]] = Ô

14e. [Â, B̂]† = [B̂†, Â†]
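Properties 14c and 14d can be checked mechanically on small matrices; the identities hold for any operators, and the particular matrices below are arbitrary (a sketch with hand-rolled matrix helpers):

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def comm(A, B):                 # [A, B] = AB - BA
    return sub(mul(A, B), mul(B, A))

A = [[1, 2j], [0, 3]]           # arbitrary 2x2 matrices (illustrative)
B = [[0, 1], [1, 0]]
C = [[2, 0], [1j, 1]]

# 14c: [A, BC] = [A, B]C + B[A, C]
assert comm(A, mul(B, C)) == add(mul(comm(A, B), C), mul(B, comm(A, C)))

# 14d (Jacobi identity): [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = O
jacobi = add(comm(A, comm(B, C)), add(comm(B, comm(C, A)), comm(C, comm(A, B))))
assert jacobi == [[0, 0], [0, 0]]
```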
15. Outer product:
We define the so-called outer product |f >< g| between ket |f > and bra < g| vectors. The result of the outer product is not a complex number, but an operator. By definition, the outer product |f >< g| is an operator: when acting on a ket-vector |h >, it gives another ket-vector |f > times some number < g|h >:

(|f >< g|)(|h >) = < g|h > |f >

16. The adjoint of the outer product is

(|f >< g|)† = |g >< f|

which follows from the correspondence between the ket (|f >< g|)|h > and the bra < h|(|g >< f|).
17. The projection operator F̂ = |f >< f| projects any vector |g > onto its component along |f >:

F̂|g > = |f >< f| Σ_i g_i |e_i > = Σ_i g_i < f|e_i > |f > = Σ_i g_i f_i* |f >

18. The unit operator can be written as

1̂ = Σ_i |e_i >< e_i|

This very important and useful property is called the completeness relation or closure. The proof is simple:

1̂|f > = (Σ_i |e_i >< e_i|)|f > = Σ_i < e_i|f > |e_i > = Σ_i f_i |e_i > = |f >
19. Let Â be a Hermitian operator, |n > and λ_n the corresponding eigenvectors and eigenvalues, i.e. Â|n > = λ_n |n >. It can be proved that:
(i) the eigenvalues are real and form a discrete spectrum, and
(ii) its eigenvectors form a complete orthonormal system. In other words, its eigenvectors can be used as an orthonormal basis in the Hilbert space.
Thus, any |f > ∈ H can be written in terms of the eigenvectors |n > of a Hermitian operator:

|f > = 1̂|f > = (Σ_n |n >< n|)|f > = Σ_n < n|f > |n >    (15)

The set of complex numbers f_n = < n|f > is called the A-representation of the vector |f >.
21. A-representation of the operator B̂:
Suppose we have some operator B̂. We can write:

B̂ = 1̂B̂1̂ = (Σ_n |n >< n|) B̂ (Σ_{n'} |n' >< n'|) = Σ_{n,n'} < n|B̂|n' > |n >< n'|    (16)

If there are N eigenvectors of the operator Â, there are N² terms in the sum, and hence N² numbers < n|B̂|n' >. We can place these numbers in an N × N matrix, where the matrix elements are arranged by columns and rows according to < n|B̂|n' > = B_nn'. Thus, we can write for any operator B̂:

B̂ = Σ_{n,n'} B_nn' |n >< n'|;    B_nn' = < n|B̂|n' >    (17)

The matrix B_nn' formed in this way is said to be a representation of the operator B̂ in the basis |n >. Note that the matrix representation depends on which operator Â has been chosen to determine the basis for that representation.

Similarly, the operator B̂† can be represented by the matrix (B_nn')†:

B̂† = Σ_{n,n'} B*_n'n |n >< n'|    (18)

We can prove that

(B̂Ĉ)_nn' = Σ_{n''} B_nn'' C_n''n'    (19)

22. Trace of B̂:

Tr(B̂) = Σ_n B_nn

The trace is independent of the particular orthonormal basis |n > that is chosen for its evaluation.
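Basis independence of the trace can be checked numerically: below the trace of an (arbitrary, illustrative) Hermitian matrix is evaluated both in the standard basis and in a rotated orthonormal basis:

```python
from math import sqrt, isclose

B = [[2, 1, 0], [1, 3, 1j], [0, -1j, 1]]     # an arbitrary Hermitian matrix (illustrative)

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def inner(u, v):
    return sum(complex(a).conjugate() * b for a, b in zip(u, v))

# a second orthonormal basis, obtained by a rotation (rows are the basis kets)
X = [[0.5, sqrt(2) / 2, 0.5],
     [0.5, -sqrt(2) / 2, 0.5],
     [sqrt(2) / 2, 0.0, -sqrt(2) / 2]]

trace_standard = B[0][0] + B[1][1] + B[2][2]          # sum of diagonal elements
trace_X = sum(inner(x, matvec(B, x)) for x in X)      # sum_n <n|B|n> in the new basis

assert isclose(trace_X.real, trace_standard.real) and abs(trace_X.imag) < 1e-12
```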
23. If Â and B̂ are Hermitian operators, and if [Â, B̂] = 0, then there exists a complete set of eigenvectors which are eigenvectors for both Â and B̂.
24. Suppose we have two Hermitian operators, F̂ and Ĝ. If the system is in state |ψ >, then the corresponding expectation values are f̄ = < ψ|F̂|ψ > and ḡ = < ψ|Ĝ|ψ >, respectively. We introduce the following vectors:

|f > = (F̂ − f̄)|ψ >,    |g > = (Ĝ − ḡ)|ψ >

Let us calculate < f|f >:

< f|f > = < ψ| (F̂ − f̄)† (F̂ − f̄) |ψ >

The state |ψ > in the F-representation is:

|ψ > = Σ_n c_n |n >

where F̂|n > = f_n |n >. Thus, we calculate:

< f|f > = < ψ| (F̂ − f̄)† (F̂ − f̄) |ψ > = Σ_n |c_n|² (f_n − f̄)²

The last result shows that √(< f|f >) is the standard deviation Δf of the observable f:

(Δf)² = < f|f > = Σ_n |c_n|² (f_n − f̄)²

Similarly, the standard deviation of the observable g is:

(Δg)² = < g|g > = Σ_n |d_n|² (g_n − ḡ)²

where Ĝ|n > = g_n |n > and the d_n are the components of |ψ > in the G-representation.
According to the Schwarz inequality:

(Δf)² (Δg)² = < f|f >< g|g > ≥ | < f|g > |²

Since for any z = a + ib we have |z|² ≥ (Im z)², and Im z = (z − z*)/2i, one can write (note < g|f > = < f|g >*):

(Δf)² (Δg)² ≥ | (< f|g > − < g|f >)/2i |²

But

< f|g > = < ψ|(F̂ − f̄)(Ĝ − ḡ)|ψ > = < ψ|F̂Ĝ|ψ > − f̄ ḡ < ψ|ψ >

< g|f > = < ψ|(Ĝ − ḡ)(F̂ − f̄)|ψ > = < ψ|ĜF̂|ψ > − f̄ ḡ < ψ|ψ >

and therefore,

Δf Δg ≥ (1/2) | < ψ|[F̂, Ĝ]|ψ > |

The last inequality is called the uncertainty principle in its most general form.
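The general uncertainty relation can be verified numerically. The sketch below uses two Hermitian 3 × 3 matrices of the spin-1 type and one particular normalized state, all chosen here for illustration:

```python
from math import sqrt, isclose

s = 1 / sqrt(2)
Lz = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]
Lx = [[0, s, 0], [s, 0, s], [0, s, 0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def inner(u, v):
    return sum(complex(a).conjugate() * b for a, b in zip(u, v))

psi = [s, s * 1j, 0]                                   # a normalized state (illustrative)

def mean(M):
    return inner(psi, matvec(M, psi)).real

def variance(M):
    return inner(psi, matvec(M, matvec(M, psi))).real - mean(M) ** 2

lhs = sqrt(variance(Lz)) * sqrt(variance(Lx))          # ΔF ΔG
# <psi|[Lz, Lx]|psi>
comm_exp = (inner(psi, matvec(Lz, matvec(Lx, psi)))
            - inner(psi, matvec(Lx, matvec(Lz, psi))))
rhs = abs(comm_exp) / 2                                # (1/2)|<[F, G]>|

assert lhs >= rhs - 1e-12          # the uncertainty principle holds
assert isclose(lhs, sqrt(3) / 4)   # here ΔLz ΔLx = sqrt(3)/4
assert isclose(rhs, 1 / (2 * sqrt(2)))
```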

III. Example A. Space of functions, position and momentum operators

1. The differentiable functions F(x, y, z), G(x, y, z), ..., which are square integrable:

∫_{−∞}^{+∞} |F(x, y, z)|² dx dy dz < ∞

behave as vectors, i.e. they are vectors from a linear vector space, in which the vectors are functions of x, y and z. We define the inner product by the equation:

< F|G > = ∫_{−∞}^{+∞} F*(x, y, z) G(x, y, z) dx dy dz = ∫_{−∞}^{+∞} d³r F*(r) G(r)    (20)

2. The position operator r̂ is one of the operators used in quantum mechanics. By definition, the position operator acts on a given function F(x, y, z) ≡ F(r) by simply multiplying that function by the corresponding coordinate:

r̂_j F(x, y, z) = x_j F(x, y, z);    j = 1, 2, 3,    x_1 = x, x_2 = y, x_3 = z    (21)

The position operator r̂ possesses eigenvalues −∞ < x_j < +∞ lying in the continuous range. The ket-vector corresponding to the eigenvalue x_j will be denoted by |x, y, z > (or shortly |r >):

r̂_j |x, y, z > = x_j |x, y, z >,    x_j = x, y, z    (22)

These eigenvectors are orthogonal, normalized to the delta function, and complete:

< r|r' > = δ(r − r'),    1̂ = ∫ d³r |r >< r|    (23)

where δ(r) = δ(x)δ(y)δ(z), and δ(x) is the Dirac delta function:

∫_{−∞}^{+∞} d³r F(r) δ(r − r') = F(r')

Here are some useful equations related to the Dirac δ-function:

δ(x) = { 0, x ≠ 0;  ∞, x = 0 },    δ(ax) = (1/|a|) δ(x)

∫_a^b F(x) δ(x − x_0) dx = { F(x_0), x_0 ∈ [a, b];  0, x_0 < a or x_0 > b }

δ(x² − x_0²) = (1/|2x_0|) [δ(x − x_0) + δ(x + x_0)]

∫ G(x) δ(F(x)) dx = Σ_{x_0} G(x_0) / |dF(x)/dx|_{x=x_0}, where the sum runs over the roots x_0 of F(x) = 0

The position representation of any vector |F > from the vector space of functions is:

|F > = ∫ d³r |r >< r|F > = ∫ d³r F(r) |r >    (24)

By means of the equations (22) and (23), one can prove that the matrix elements of the position operator are:

< r'|r̂_j|r > = x_j δ(r − r')    (25)

3. By definition, the momentum operator p̂ acts on a given function F(x, y, z) ≡ F(r) according to the equation:

p̂_j F(x, y, z) = −iħ ∂F(x, y, z)/∂x_j;    j = 1, 2, 3,    x_1 = x, x_2 = y, x_3 = z    (26)

In the one-dimensional case the momentum operator is

p̂_x = −iħ d/dx

defined on the space of differentiable functions F(x) for a ≤ x ≤ b. If the momentum operator is a Hermitian operator, then according to eq. (14) we must have < F|p̂_x|G > = (< G|p̂_x|F >)*. Integrating by parts:

< F|p̂_x|G > = ∫_a^b F*(x) (−iħ dG(x)/dx) dx = −iħ F*(x)G(x)|_a^b + iħ ∫_a^b G(x) dF*(x)/dx dx

            = −iħ F*(x)G(x)|_a^b + (< G|p̂_x|F >)*    (27)

Thus, the momentum operator is a Hermitian operator only if boundary conditions are imposed such that the boundary term vanishes, i.e.

F*(x)G(x)|_a^b = 0    (28)

Performing exactly the same considerations as in the case of the position operator, we can introduce the ket-eigenvectors |p > of the momentum operator p̂:

p̂_j |p > = p_j |p >,    j = x, y, z    (29)

Those ket-eigenvectors are orthogonal and normalized:

< p|p' > = δ(p − p'),    1̂ = ∫ d³p |p >< p|    (30)

The momentum representation of any ket-vector |F > is:

|F > = (∫ d³p |p >< p|) |F > = ∫ d³p F(p) |p >    (31)

where F(p) = < p|F > is the Fourier transform of F(r) = < r|F >, defined as follows:

F(r) = (1/(2πħ)^{3/2}) ∫ d³p F(p) exp(i p·r/ħ),    F(p) = (1/(2πħ)^{3/2}) ∫ d³r F(r) exp(−i p·r/ħ)    (32)

The fundamental assumption in physics is that there exists a particular symmetry between coordinates and the corresponding momenta: all the coordinates can be transformed into their momenta, and vice versa. In terms of Dirac notation, this can be expressed by the following relationships:

< r|p > = (1/(2πħ)^{3/2}) exp(i p·r/ħ),    < p|r > = (1/(2πħ)^{3/2}) exp(−i p·r/ħ)    (33)
By means of the equations (33), one can prove that the matrix elements of the position and momentum operators are:

< r'|r̂_j|r > = x_j δ(r − r'),    < p'|r̂_j|p > = iħ ∂δ(p − p')/∂p'_j    (34)

< p'|p̂_j|p > = p_j δ(p − p'),    < r'|p̂_j|r > = −iħ ∂δ(r − r')/∂x'_j    (35)

IV. THE POSTULATES OF THE NON-RELATIVISTIC QUANTUM MECHANICS

Postulate I: The state of the system is represented by a state vector |Ψ(t) > in a Hilbert space. The state vector contains all needed information about the system. Any superposition of state vectors is also a state vector.
According to this postulate, the state of the system is described by a ket |Ψ(t) > in a Hilbert space. If we choose a position representation, the state of the system at any instant of time may be represented by a state (or wave) function Ψ(r, t) = < r|Ψ(t) >. All information regarding the state of the system is contained in the wave function.
Example. Suppose we have three operators in the Hilbert space, defined by their matrices:

L_x = (1/√2) ( 0 1 0 )
             ( 1 0 1 )
             ( 0 1 0 )

L_y = (1/√2) ( 0 −i  0 )
             ( i  0 −i )
             ( 0  i  0 )

L_z = ( 1 0  0 )
      ( 0 0  0 )    (36)
      ( 0 0 −1 )

By calculating the corresponding conjugate and transposed matrices we find that all of the above matrices are Hermitian. The corresponding eigenvalues and eigenvectors are as follows:
For L_x: the characteristic equation

det ( −λ    1/√2   0   )
    ( 1/√2   −λ   1/√2 ) = 0
    ( 0     1/√2   −λ  )

gives λ³ − λ = 0, so the eigenvalues and normalized eigenvectors are:

λ_1 = 1:    |X_1 > = (1/2) ( 1, √2, 1 )^T,
λ_2 = −1:   |X_2 > = (1/2) ( 1, −√2, 1 )^T,
λ_3 = 0:    |X_3 > = (1/√2) ( 1, 0, −1 )^T    (37)

For L_y: the characteristic equation det(L_y − λ1̂) = 0 again gives λ³ − λ = 0, and

λ_1 = 1:    |Y_1 > = (1/2) ( 1, i√2, −1 )^T,
λ_2 = −1:   |Y_2 > = (1/2) ( 1, −i√2, −1 )^T,
λ_3 = 0:    |Y_3 > = (1/√2) ( 1, 0, 1 )^T    (38)

For L_z:

det ( 1−λ   0    0   )
    ( 0    −λ    0   ) = 0
    ( 0     0  −1−λ  )

λ_1 = 1:    |Z_1 > = ( 1, 0, 0 )^T,
λ_2 = −1:   |Z_2 > = ( 0, 0, 1 )^T,
λ_3 = 0:    |Z_3 > = ( 0, 1, 0 )^T    (39)

To find the corresponding bra vectors, one has to conjugate and transpose the above ket vectors. For example, the bra < Y_1| is

< Y_1| = (1/2) ( 1  −i√2  −1 )
As we mentioned, the eigenvectors of a Hermitian operator form a basis. This means that one can use the eigenvectors |Z_n >, for example, as a basis. Let us find the normalized eigenvectors |X_i > and eigenvalues of L̂_x in the basis |Z_n >. The eigenvector |X_i > can be expanded in the basis |Z_n >:

|X_i > = 1̂|X_i > = (Σ_{n=1}^{3} |Z_n >< Z_n|) |X_i > = Σ_{n=1}^{3} X_in |Z_n >,    (40)
where

X_in = < Z_n|X_i >,    i, n = 1, 2, 3,    X̂ = (X_in)

i.e., collecting the components of the eigenvectors |X_i >, the matrix X̂ is

X̂ = (1/2) ( 1    √2    1  )
           ( 1   −√2    1  )    (41)
           ( √2   0   −√2 )

The matrix X̂ = (X_in) is a unitary matrix (X̂†X̂ = X̂X̂† = 1̂) and it transforms the eigenvectors |Z_n > into the eigenvectors |X_i >. The X̂† matrix transforms |X_i > into |Z_n >.
Postulate II: To any observable f (such as position, linear momentum, energy, angular momentum, or number of particles) in physics, there exists a corresponding Hermitian operator F̂ such that:
(i) the measurement of f yields values (call these measured values of f) which are the eigenvalues f_1, f_2, ..., f_n, ... of F̂, i.e. F̂|n > = f_n |n >, where |n > is the eigenvector which corresponds to the eigenvalue f_n;
(ii) if the system is in a state |ψ >, a measurement of the observable f will yield the value f_n with probability | < n|ψ > |², and therefore, the average (or the expectation value) of f is f̄ = < ψ|F̂|ψ >.
Example. Suppose that the system is in the state in which the outcome of the measurement of the observable L_z is λ_2 = −1. This means that the state vector is:

|Ψ(t) > = |Z_2 > = ( 0, 0, 1 )^T    (42)

Let us calculate L̄_x, L̄²_x and ΔL_x.
First, we have to expand the state vector |Z_2 > in terms of the eigenvectors of L̂_x:

|Z_2 > = Σ_{n=1}^{3} < X_n|Z_2 > |X_n > = (1/2)|X_1 > + (1/2)|X_2 > − (1/√2)|X_3 >    (43)
The possible outcomes of a measurement of L_x and their probabilities are:

L_x = λ_1 = 1    with probability P_1 = | < X_1|Z_2 > |² = 1/4

L_x = λ_2 = −1   with probability P_2 = | < X_2|Z_2 > |² = 1/4

L_x = λ_3 = 0    with probability P_3 = | < X_3|Z_2 > |² = 1/2

Thus, the average of the observable L_x is:

L̄_x = Σ_{n=1}^{3} P_n λ_n = (1/4)(1) + (1/4)(−1) + (1/2)(0) = 0

Let us calculate L̄²_x. Note that [L̂_x, L̂²_x] = 0, and therefore, L̂_x and L̂²_x must have common eigenvectors |X_n >, where n = 1, 2, 3. The eigenvalues of L̂²_x are λ²_n. Thus, L̄²_x is calculated to be:

L̄²_x = Σ_{n=1}^{3} P_n λ²_n = (1/4)(1)² + (1/4)(−1)² + (1/2)(0)² = 1/2

Finally, we find for the standard deviation:

ΔL_x = √( Σ_{n=1}^{3} P_n (λ_n − L̄_x)² ) = √( (1/4)(1 − 0)² + (1/4)(−1 − 0)² + (1/2)(0 − 0)² ) = 1/√2
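The same numbers follow directly from the matrix L_x and the column vector |Z_2 >, without expanding in eigenvectors first (a sketch; the helper names are introduced here for illustration):

```python
from math import sqrt, isclose

s = 1 / sqrt(2)
Lx = [[0, s, 0], [s, 0, s], [0, s, 0]]   # the L_x matrix from eq. (36)
Z2 = [0, 0, 1]                           # |Z_2>, eigenvector of L_z with eigenvalue -1

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def inner(u, v):                         # real components here, so no conjugation needed
    return sum(a * b for a, b in zip(u, v))

mean = inner(Z2, matvec(Lx, Z2))                   # <L_x> = 0
mean2 = inner(Z2, matvec(Lx, matvec(Lx, Z2)))      # <L_x^2> = 1/2
assert isclose(mean, 0, abs_tol=1e-12)
assert isclose(mean2, 0.5)
assert isclose(sqrt(mean2 - mean ** 2), s)         # ΔL_x = 1/sqrt(2)
```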
Example. Suppose the system is in a state with L_z = 1, which means the state vector is |Z_1 > instead of |Z_2 > in the previous example. Now, we expand the state vector |Z_1 > in terms of the eigenvectors of L̂_x:

|Z_1 > = Σ_{n=1}^{3} < X_n|Z_1 > |X_n > = (1/2)|X_1 > + (1/2)|X_2 > + (1/√2)|X_3 >    (44)

The possible outcomes and probabilities are the same as before:

L_x = λ_1 = 1    with probability P_1 = | < X_1|Z_1 > |² = 1/4

L_x = λ_2 = −1   with probability P_2 = | < X_2|Z_1 > |² = 1/4

L_x = λ_3 = 0    with probability P_3 = | < X_3|Z_1 > |² = 1/2

Postulate III: A measurement of the observable f that yields the value f_n leaves the system in the state |n >, i.e. the measurement changes the state of the system. In other words, if the state of the system is |Ψ(t) > and the result of the measurement on state |Ψ(t) > is f_n, the state of the system immediately after the measurement is given by the projection of |Ψ(t) > onto the eigenvector |n > corresponding to f_n:

|Ψ >_after = P̂_|n> |Ψ(t) > = < n|Ψ(t) > |n >

Here, P̂_|n> = |n >< n| is the projection operator that projects any vector onto its component along |n >.
Example. Suppose that the state of the system in the |Z_n > basis is:

|Ψ > = (1/√2)|Z_1 > + (1/2)|Z_2 > + (1/2)|Z_3 > ↔ ( 1/√2, 1/2, 1/2 )^T    (45)
Then, if L²_z is measured and the result is +1, what is the state immediately after the measurement?
First, L̂_z and L̂²_z have common eigenvectors |Z_n >, where n = 1, 2, 3. The eigenvalues a_n of L̂²_z are a_n = λ²_n:

a_1 = 1,    a_2 = 1,    a_3 = 0

In the case of degeneracy, we have a doublet of orthonormal vectors |Z_1 > and |Z_2 > with the same eigenvalue a_1 = a_2 = 1. The probability to measure a_1 = 1 via |Z_1 > is:

| < Z_1|Ψ > |² = 1/2

The probability to measure a_2 = 1 via |Z_2 > is:

| < Z_2|Ψ > |² = 1/4

After the measurement, the system could be in the state |Z_1 > with a probability (1/2)/(3/4) = 2/3 and in the state |Z_2 > with a probability (1/4)/(3/4) = 1/3.
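The conditional probabilities 2/3 and 1/3 can be reproduced with a few lines (a sketch of this example's arithmetic):

```python
from math import sqrt, isclose

# state in the |Z_n> basis: |psi> = (1/sqrt(2))|Z1> + (1/2)|Z2> + (1/2)|Z3>
psi = [1 / sqrt(2), 0.5, 0.5]
lz = [1, -1, 0]                      # L_z eigenvalues of |Z1>, |Z2>, |Z3>

p = [abs(c) ** 2 for c in psi]       # Born-rule probabilities |<Z_n|psi>|^2
p_plus1 = sum(pn for pn, l in zip(p, lz) if l ** 2 == 1)   # P(L_z^2 = +1) = 3/4
assert isclose(p_plus1, 0.75)

# conditional probabilities after measuring L_z^2 = +1
assert isclose(p[0] / p_plus1, 2 / 3)    # system left in |Z1>
assert isclose(p[1] / p_plus1, 1 / 3)    # system left in |Z2>
```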
Postulate IV: The state vector |Ψ(t) > obeys the Schrödinger equation:

iħ (d/dt)|Ψ(t) > = Ĥ|Ψ(t) >,

where Ĥ is the Hamiltonian operator.