
Uploaded by Hassan N. Al-Obaidi

Ph.D. Course in Quantum Mechanics, Semester-I



2013-2014

Chapter Two: Matrix Formulation of Quantum Mechanics

Paul Dirac introduced a special abbreviated notation to rephrase the rules and ideas of quantum mechanics. This notation, together with the concept of Hilbert space, provides a complete and successful abstract formulation of quantum mechanics. Hence, the usual analytical manipulations can be replaced by simpler algebraic analogues. In fact, this approach spares much of the difficulty usually associated with the conventional procedure.

2-1 Abstract View of Quantum Mechanics

According to the ideas mentioned above, the complete orthonormal eigen wave functions of any quantum mechanical system serve as BASES vectors in a Hilbert space (linear vector space). In Dirac notation each of these functions, say ψ_n, is represented by a KET state vector |ψ_n⟩. So the expansion (completeness) principle in this notation becomes;

Ψ = Σ_n c_n ψ_n   →   |Ψ⟩ = Σ_n c_n |ψ_n⟩   . . . (2-1)

From equation (2-1) we may get;

c_n = ⟨ψ_n|Ψ⟩   . . . (2-2)

The substitution of the projection c_n back into the completeness principle leads to;

|Ψ⟩ = Σ_n |ψ_n⟩⟨ψ_n|Ψ⟩   . . . (2-3)

Since the last equation must be valid for any ket vector |Ψ⟩, it follows that;

Σ_n |ψ_n⟩⟨ψ_n| = 1   . . . (2-4)

where 1 is a unit operator, which has the property that when it acts on any state it leads to no change in that state. Equation (2-4) is usually called the completeness relation.
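The completeness relation (2-4) can be checked numerically in a finite-dimensional space. The following is a minimal sketch of my own (not part of the notes), assuming an orthonormal basis of C⁴ built from the columns of a random unitary matrix:

```python
import numpy as np

# Build an orthonormal basis of C^4 from the columns of a random unitary
# (the QR decomposition of a random complex matrix yields a unitary Q).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

# Sum of projectors |psi_n><psi_n| over the whole basis.
completeness = sum(np.outer(Q[:, n], Q[:, n].conj()) for n in range(4))

# The sum reproduces the unit operator, eq. (2-4).
assert np.allclose(completeness, np.eye(4))
```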


Some remarks are in order;

i- The dual of the space of kets is described by BRAS, and the relation between them is;

⟨Ψ| = (|Ψ⟩)†   . . . (2-5)

Thus from equation (2-1) one may get;

⟨Ψ| = Σ_n c_n* ⟨ψ_n|   . . . (2-6)

ii- The overlap integral of the two state vectors |Ψ⟩ and |Φ⟩ may be formulated as follows;

⟨Φ|Ψ⟩ = ∫ Φ* Ψ dτ = ⟨Ψ|Φ⟩*   . . . (2-7)

iii- An operator Â acts upon ket vectors from the left, whereas it acts upon bra vectors from the right, and transforms them into other ket and bra vectors respectively. For instance, if;

Â|ψ⟩ = |φ⟩   . . . (2-9)

then taking the hermitian conjugate we find;

(Â|ψ⟩)† = ⟨ψ|Â†   . . . (2-10)

⟨ψ|Â† = ⟨φ|   . . . (2-11)

H.W:
1- Show that by taking the hermitian conjugate twice one gets back the same operator, i.e. (Â†)† = Â.
2- Prove that for a Hermitian operator Â† = Â.


3- Use the definition of the hermitian conjugate (adjoint) operator to derive the general formula for the Hermitian (self-adjoint) operator, ⟨Âψ|φ⟩ = ⟨ψ|Âφ⟩. Hint; assume that |χ⟩ = |ψ⟩ + λ|φ⟩.
4- Prove that: (ÂB̂)† = B̂†Â†.
5- Prove that: (Â + B̂)† = Â† + B̂†.
6- Prove that: (λÂ)† = λ*Â†.

2-2 The Projection Operator


The operator |ψ_n⟩⟨ψ_n| is called the projection operator, and it is defined by;

P̂_n = |ψ_n⟩⟨ψ_n|   . . . (2-12)

Acting on an arbitrary ket |Ψ⟩ it gives;

P̂_n|Ψ⟩ = |ψ_n⟩⟨ψ_n|Ψ⟩ = c_n|ψ_n⟩   . . . (2-13)

Keeping in mind ⟨ψ_n|ψ_n⟩ = 1, it follows that;

P̂_n² = |ψ_n⟩⟨ψ_n|ψ_n⟩⟨ψ_n| = |ψ_n⟩⟨ψ_n| = P̂_n

Another result could be read from equation (2-13): once a ket |Ψ⟩ is projected onto a particular eigen ket |ψ_n⟩, no further change can happen if a further projection is done. Actually, such a property of P̂_n completely matches the behaviour of a measurement process. For example, let us assume we want to measure the energy of a quantum mechanical system which is described by the ket vector |Ψ⟩ {also one may say a collection of systems all of which are described by the same vector |Ψ⟩}.


The name projection operator comes from the property that when such an operator acts on an arbitrary ket |Ψ⟩, it projects it onto the eigen ket |ψ_n⟩.


The measuring process means that we are picking one eigen ket {one member of this collection}, like |ψ_n⟩, and measuring its energy. The measurement must change the vector |Ψ⟩ so that it ends up at the eigen vector |ψ_n⟩; the reason is that subsequent measurements can only give the same result over and over again. Having found a particular result E_n with probability |⟨ψ_n|Ψ⟩|², the operator Ĥ could be written in terms of its eigenvalues and projection operators;

Ĥ = Σ_n E_n |ψ_n⟩⟨ψ_n| = Σ_n E_n P̂_n   . . . (2-14)

so that the energy expectation value becomes;

⟨E⟩ = ⟨Ψ|Ĥ|Ψ⟩ = Σ_n E_n |⟨ψ_n|Ψ⟩|²   . . . (2-15)

Equation (2-14) states that an operator could be written in terms of its eigenvalues and projection operators.
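The idempotence property P̂_n² = P̂_n and the spectral decomposition (2-14) can be illustrated numerically. A small sketch of my own, assuming a sample 3×3 Hermitian "Hamiltonian" (the matrix is an arbitrary choice for illustration):

```python
import numpy as np

# A sample 3x3 Hermitian "Hamiltonian" (illustrative assumption).
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
E, V = np.linalg.eigh(H)          # eigenvalues E_n, eigenvectors as columns of V

# Projector onto the lowest state: P = |psi_0><psi_0|.
P0 = np.outer(V[:, 0], V[:, 0].conj())

# P^2 = P: repeating a projection changes nothing (the measurement analogy).
assert np.allclose(P0 @ P0, P0)

# Spectral decomposition, eq. (2-14): H = sum_n E_n |psi_n><psi_n|.
H_rebuilt = sum(E[n] * np.outer(V[:, n], V[:, n].conj()) for n in range(3))
assert np.allclose(H_rebuilt, H)
```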

2-3 Matrix Representation of Operators

In the previous course we learned a simple approach to formulating operators, and so wave functions, in terms of matrices, depending on the analogy with mathematical vectors. Now we explore a more advanced one, in accordance with the ideas mentioned in the previous section. Anyway, let us start, as an example, with the following relation;

â|n⟩ = √n |n − 1⟩   . . . (2-16)

where |n⟩ is any of the complete orthonormal eigen ket vectors of the quantum harmonic oscillator system. Now, by taking the scalar product of this equation with |m⟩, we get;

⟨m|â|n⟩ = √n ⟨m|n − 1⟩ = √n δ_{m,n−1}   . . . (2-17)

Obviously, regarding all admissible values of n and m leads to arranging equation (2-17) in the form of an array or matrix. The conventional notation for a matrix M_ij has the first index labeling the row and the second labeling the column of the array. Accordingly, if we write ⟨m|â|n⟩ as a_mn, we find;

        | 0  √1  0   0   ⋯ |
a_mn =  | 0  0   √2  0   ⋯ |   . . . (2-18)
        | 0  0   0   √3  ⋯ |
        | ⋮  ⋮   ⋮   ⋮   ⋱ |

With Â any operator and |n⟩ any complete set of states, ⟨m|Â|n⟩ is called a matrix representation of Â in terms of the basis provided by the complete set of states |n⟩. The above appellation needs some justification, so we try the following;

1) Conventionally, the multiplication of two matrices is defined as follows;

(AB)_mn = Σ_k (A)_mk (B)_kn   . . . (2-19)

So it is necessary to verify that this relation holds good for the matrix representations of the operators Â and B̂. To do so, let us expand the state B̂|ψ⟩ in terms of the complete eigen kets |k⟩ as follows;

B̂|ψ⟩ = Σ_k |k⟩⟨k|B̂|ψ⟩   . . . (2-20)

Hence;

⟨m|ÂB̂|ψ⟩ = Σ_k ⟨m|Â|k⟩⟨k|B̂|ψ⟩   . . . (2-21)

where |ψ⟩ may be taken to be a basis ket |n⟩, so equation (2-21) becomes;

⟨m|ÂB̂|n⟩ = Σ_k ⟨m|Â|k⟩⟨k|B̂|n⟩   . . . (2-22)

It is seen that equation (2-22) is the same as equation (2-19) provided that ⟨m|Â|k⟩ = (A)_mk and ⟨k|B̂|n⟩ = (B)_kn.
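The correspondence (2-22) between operator products and matrix products can be tested with the oscillator matrix (2-18). A sketch assuming a truncated 5-dimensional basis (truncation is my assumption; the true oscillator basis is infinite):

```python
import numpy as np

N = 5  # truncated oscillator basis |0>, ..., |N-1> (illustrative assumption)

# Annihilation operator matrix, eq. (2-18): <m|a|n> = sqrt(n) delta_{m,n-1}.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

# Eq. (2-22): the matrix of an operator product is the product of the matrices,
# (AB)_mn = sum_k A_mk B_kn.  Check it for A = a^dagger, B = a.
n_op = np.array([[sum(adag[m, k] * a[k, n] for k in range(N))
                  for n in range(N)] for m in range(N)])

# a^dagger a is the number operator: diagonal entries 0, 1, ..., N-1.
assert np.allclose(n_op, np.diag(np.arange(N)))
assert np.allclose(n_op, adag @ a)
```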

2) Further justification for the matrix representation comes from the definition of the hermitian conjugate operator, given by;

⟨m|Â†|n⟩ = ⟨n|Â|m⟩*   . . . (2-23)

which shows that if the operator Â is represented by a matrix, then the hermitian conjugate operator Â† will be represented by the hermitian conjugate matrix, since the latter is defined by;

(A†)_mn = (A)*_nm   . . . (2-24)

3) Consider that the complete set of kets |n⟩ are eigen kets of the operator Â itself, i.e.;

Â|n⟩ = a_n|n⟩   . . . (2-25)

An example of such operators is the Hamiltonian of the harmonic oscillator, which satisfies the following eigen value equation;

Ĥ|n⟩ = (n + ½)ħω|n⟩   . . . (2-26)

However, the adoption of equation (2-25) leads to writing equation (2-26) in the following matrix form;

              | 1/2  0    0    0    ⋯ |
⟨m|Ĥ|n⟩ = ħω  | 0    3/2  0    0    ⋯ |   . . . (2-27)
              | 0    0    5/2  0    ⋯ |
              | 0    0    0    7/2  ⋯ |
              | ⋮    ⋮    ⋮    ⋮    ⋱ |
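A numerical sketch of equation (2-27), assuming a truncated basis and units with ħ = ω = 1 (both assumptions of the illustration, not of the notes):

```python
import numpy as np

N = 6                      # basis truncation (illustrative assumption)
hbar = omega = 1.0         # units chosen so that hbar*omega = 1 (assumption)

a = np.diag(np.sqrt(np.arange(1, N)), k=1)          # eq. (2-18)
H = hbar * omega * (a.conj().T @ a + 0.5 * np.eye(N))

# Eq. (2-27): in its own eigenbasis H is diagonal, entries (n + 1/2)*hbar*omega.
assert np.allclose(H, np.diag((np.arange(N) + 0.5) * hbar * omega))
```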

It is quite obvious that determining the eigen values of a hermitian operator is equivalent to diagonalizing its corresponding matrix. In other words, the problem of finding the eigen values of an operator expressed in terms of its complete set of orthonormal eigenkets is tantamount to diagonalizing its corresponding matrix. The question now is how to carry out the diagonalization process when such an operator is given in terms of a set of orthonormal kets that are not eigen kets of this operator.

Let us consider the operator Â whose eigen kets are |v_n⟩. Further, suppose we know the matrix elements of Â in a basis |u_n⟩, that is, we know each entry in the square matrix;


⟨u_m|Â|u_n⟩ = A_mn   . . . (2-28)

According to the completeness principle, an arbitrary eigen ket |v_n⟩ may be expanded in terms of the basis |u_k⟩. So,

|v_n⟩ = Σ_k |u_k⟩⟨u_k|v_n⟩ = Σ_k U_kn |u_k⟩   . . . (2-29)

And so, the eigen value equation Â|v_n⟩ = a_n|v_n⟩ gives, in the new basis;

⟨v_m|Â|v_n⟩ = a_n δ_mn   . . . (2-30)

Expanding both bra and ket by means of the completeness relation;

⟨v_m|Â|v_n⟩ = Σ_{k,l} ⟨v_m|u_k⟩⟨u_k|Â|u_l⟩⟨u_l|v_n⟩   . . . (2-31)

⟨v_m|u_k⟩ = ⟨u_k|v_m⟩* = (U†)_mk   . . . (2-32)

Hence;

⟨v_m|Â|v_n⟩ = Σ_{k,l} (U†)_mk A_kl U_ln,   i.e.   A(v) = U† A(u) U   . . . (2-33)

Absolutely, equation (2-33) emphasizes the fact that we are making a transformation of the matrix elements of the operator Â from the space |u_n⟩ to the space |v_n⟩. For this reason the matrix U is called the transformation matrix;

U_kn = ⟨u_k|v_n⟩   . . . (2-34)

This matrix is called a unitary transformation matrix, and we can easily prove that it is unitary. It should be mentioned that when the matrix A_mn is hermitian, the hermitian property of the matrix is conserved, without change, under the transformation process.
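The diagonalization (2-33) by a unitary matrix can be sketched with numpy's `eigh`, whose eigenvector columns play the role of U_kn = ⟨u_k|v_n⟩ (the 2×2 Hermitian matrix below is my own example, not from the notes):

```python
import numpy as np

# A Hermitian matrix written in a basis that is not its eigenbasis.
A = np.array([[1.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 2.0]])

eigvals, U = np.linalg.eigh(A)   # columns of U are the eigenvectors <u_k|v_n>

# U is unitary: U^dagger U = 1.
assert np.allclose(U.conj().T @ U, np.eye(2))

# Eq. (2-33): U^dagger A U is diagonal, with the eigen values on the diagonal.
A_diag = U.conj().T @ A @ U
assert np.allclose(A_diag, np.diag(eigvals))
```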


H.W:

1) Prove that U†U = UU† = 1.

It is often useful to define the sum of the diagonal elements of a matrix to be the trace of that matrix, i.e.;

Tr A = Σ_n A_nn   . . . (2-35)

Accordingly, one may say the trace of an operator Â is;

Tr Â = Σ_n ⟨n|Â|n⟩   . . . (2-36)

which leads directly to the form given by equation (2-35). Anyway, the trace of a product of two matrices has a useful characteristic, that is;

Tr(AB) = Tr(BA)   . . . (2-37)

since

Tr(AB) = Σ_n (AB)_nn = Σ_{n,m} A_nm B_mn = Σ_{m,n} B_mn A_nm = Σ_m (BA)_mm = Tr(BA)

From another point of view, we can justify the identity (2-37) by another approach, inserting the unit operator of equation (2-4) as follows;

Tr(ÂB̂) = Σ_n ⟨n|ÂB̂|n⟩ = Σ_{n,m} ⟨n|Â|m⟩⟨m|B̂|n⟩ = Σ_{m,n} ⟨m|B̂|n⟩⟨n|Â|m⟩ = Σ_m ⟨m|B̂Â|m⟩ = Tr(B̂Â)

H.W: Prove that Tr(ABC) = Tr(BCA).
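A quick numerical check of (2-37) and of the H.W. identity, using random 4×4 matrices (the matrices are arbitrary; illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C = (rng.normal(size=(4, 4)) for _ in range(3))

# Eq. (2-37): Tr(AB) = Tr(BA), and the H.W. generalization Tr(ABC) = Tr(BCA).
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A))
```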


Keeping in mind, the sum in equation (2-35) must be convergent; otherwise the following identity is not valid and no useful consequence can be mentioned;

Tr[Â, B̂] = Tr(AB) − Tr(BA) = 0   . . . (2-38)

Thereby, for finite dimensional matrices the trace of a commutator must vanish, while for infinite dimensional matrices it need not. Examples of the first kind are the matrices of angular momentum, which are deduced from the definition J = r × p, namely;

[Ĵ_x, Ĵ_y] = iħĴ_z   ..(a)
[Ĵ_y, Ĵ_z] = iħĴ_x   ..(b)   (Prove)   . . . (2-39)
[Ĵ_z, Ĵ_x] = iħĴ_y   ..(c)

An example of the second kind is;

[x̂, p̂_x] = iħ   (Prove)   . . . (2-40)
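The commutation relations (2-39) and the vanishing trace of a finite-dimensional commutator can be verified with the spin-1/2 matrices (a concrete case chosen for illustration; ħ = 1 is an assumption of the sketch):

```python
import numpy as np

hbar = 1.0  # units with hbar = 1 (assumption)

# Spin-1/2 angular momentum matrices: J = (hbar/2) * Pauli matrices.
Jx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Jy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Jz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

# Eq. (2-39a): [Jx, Jy] = i*hbar*Jz.
comm = Jx @ Jy - Jy @ Jx
assert np.allclose(comm, 1j * hbar * Jz)

# Finite matrices: the trace of any commutator vanishes (eq. 2-38) --
# which is why [x, p] = i*hbar can only hold for infinite-dimensional matrices.
assert np.isclose(np.trace(comm), 0.0)
```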

2-5 Matrix Representation of Angular Momentum

2-5-1 Review of Some Basics

Before going into the details of representing the angular momentum operators in matrix form, it is better to review some important facts that were covered previously in the M.Sc. course. Anyway, one should really know that incompatible observables belong to operators which do not commute, so they obey the Heisenberg uncertainty principle. Evidently, incompatible observables do not share the same eigenvectors or, at least, they cannot have a complete set of common eigenvectors. The matrices that represent incompatible observables cannot be simultaneously diagonalized, that is, they cannot all be brought to diagonal form by the same similarity transformation. On the other hand, compatible observables, whose operators do commute, share a complete


set of eigenvectors, and hence the corresponding matrices can be simultaneously diagonalized. The question now is whether the operators Ĵ², Ĵ_x, Ĵ_y, and Ĵ_z share

a common complete set of eigenvectors or not. According to the last paragraph, this is possible if and only if these operators commute with each other. So, one has to review the following;

i) It has been seen that angular momentum is a conserved observable for systems having central potentials; in other words, it is a constant of the motion. i.e.;

dĴ/dt = 0,   or equivalently;   [Ĥ, Ĵ] = 0   . . . (2-40a)

Or more precisely;

[Ĥ, Ĵ_x] = [Ĥ, Ĵ_y] = [Ĥ, Ĵ_z] = 0   (Prove)   . . . (2-40b)

ii) The operator Ĵ² commutes with each component of Ĵ. i.e.;

[Ĵ², Ĵ_x] = [Ĵ², Ĵ_y] = [Ĵ², Ĵ_z] = 0   (Prove)   . . . (2-41)

iii) The components of angular momentum do not commute with each other, as indicated by equations (2-39).

It is seen, as a consequence of these remarks, that only one component of Ĵ can be chosen together with Ĵ² so as to form a simultaneously commuting set. The next question, however, is which component may meet such a goal. An objective answer to this question is that nothing prevents picking any one of them as a part of the commuting set. For conventional purposes the component Ĵ_z is usually picked, keeping in mind there is nothing special about this choice, and hence any of the remaining ones could be chosen instead. However, if the aspired eigenvectors are denoted by |j, m⟩, one may put forward the following eigen value equations;

Ĵ²|j, m⟩ = j(j + 1)ħ²|j, m⟩   . . . (2-42a)


Ĵ_z|j, m⟩ = mħ|j, m⟩   . . . (2-42b)

where j and m are just real numbers, since Ĵ² and Ĵ_z are hermitian.

H.W: Prove that only the case for which Ĵ = 0 could lead to simultaneous eigenfunctions for all three components of the angular momentum. Hint: Assume that the set of functions |ψ⟩ forms a complete set for the three components, and make use of equation (2-39a).

Solution: Assume that the functions |ψ⟩ form a complete simultaneous set, so:

Ĵ_x|ψ⟩ = a|ψ⟩   and   Ĵ_y|ψ⟩ = b|ψ⟩

where a and b are just real numbers, since Ĵ_x and Ĵ_y are hermitian. Thus it follows that;

Ĵ_xĴ_y|ψ⟩ = ab|ψ⟩ = ba|ψ⟩ = Ĵ_yĴ_x|ψ⟩

The substitution of the last equation into equation (2-39a) leads to the result;

[Ĵ_x, Ĵ_y]|ψ⟩ = (ab − ba)|ψ⟩ = (0)|ψ⟩ = iħĴ_z|ψ⟩

So, Ĵ_z|ψ⟩ = 0. Similarly one may show that Ĵ_x|ψ⟩ = Ĵ_y|ψ⟩ = 0 also, and thus it means that only Ĵ = 0 could have simultaneous eigenfunctions for all three components of the angular momentum.


2-5-2 Creation and Destruction (Ladder) Operators

The problem of which eigenvectors form a complete simultaneous commuting set for the angular momentum operators (Ĵ², Ĵ_x, Ĵ_y, Ĵ_z) has been solved by choosing Ĵ_z as a part of the commuting set, where the desired eigenvector is named |j, m⟩ as indicated in equation (2-42). The next problem, however, is to explore what will happen when the residual components (Ĵ_x, Ĵ_y) come to act upon |j, m⟩, which is the task of this context. It is customary to define the following two operators;

Ĵ₊ = Ĵ_x + iĴ_y   . . . (2-43a)
Ĵ₋ = Ĵ_x − iĴ_y   . . . (2-43b)

With the aid of equations (2-39) one finds;

[Ĵ_z, Ĵ±] = ±ħĴ±   . . . (2-44)

And so;

[Ĵ₊, Ĵ₋] = 2ħĴ_z   . . . (2-45)

[Ĵ², Ĵ±] = 0   . . . (2-46)

Furthermore;

Ĵ₊Ĵ₋ = Ĵ_x² + Ĵ_y² − i[Ĵ_x, Ĵ_y] = Ĵ² − Ĵ_z² + ħĴ_z   . . . (2-47a)

And similarly;

Ĵ₋Ĵ₊ = Ĵ² − Ĵ_z² − ħĴ_z   (Prove)   . . . (2-47b)
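The ladder-operator identities (2-44), (2-45), and (2-47a) can be checked on the spin-1/2 matrices (a minimal concrete case of my own choosing; ħ = 1 by assumption):

```python
import numpy as np

hbar = 1.0  # assumption: units with hbar = 1

# Spin-1/2 matrices as a concrete test case.
Jx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Jy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Jz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz

Jp = Jx + 1j * Jy   # eq. (2-43a)
Jm = Jx - 1j * Jy   # eq. (2-43b)

# Eq. (2-44): [Jz, J+/-] = +/- hbar J+/-.
assert np.allclose(Jz @ Jp - Jp @ Jz, hbar * Jp)
assert np.allclose(Jz @ Jm - Jm @ Jz, -hbar * Jm)

# Eq. (2-45): [J+, J-] = 2*hbar*Jz.
assert np.allclose(Jp @ Jm - Jm @ Jp, 2 * hbar * Jz)

# Eq. (2-47a): J+ J- = J^2 - Jz^2 + hbar*Jz.
assert np.allclose(Jp @ Jm, J2 - Jz @ Jz + hbar * Jz)
```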


In accordance with equations (2-42a) and (2-46) one may write;

Ĵ²(Ĵ±|j, m⟩) = Ĵ±Ĵ²|j, m⟩ = j(j + 1)ħ²(Ĵ±|j, m⟩)   . . . (2-48)

On the other hand, consideration of equation (2-44) reads;

Ĵ_z(Ĵ±|j, m⟩) = (Ĵ±Ĵ_z ± ħĴ±)|j, m⟩ = (m ± 1)ħ(Ĵ±|j, m⟩)   . . . (2-49)

Obviously, equation (2-48) states that the vector states Ĵ±|j, m⟩ are eigen states for the operator Ĵ² with eigen value j(j + 1)ħ²   . . . (2-50a)

It is also clear that Ĵ₊|j, m⟩ and Ĵ₋|j, m⟩ are eigen state vectors for the operator Ĵ_z with an eigen value raised and lowered by unity respectively   . . . (2-50b)

For this reason, in fact, the operators Ĵ₊ and Ĵ₋ are called creation and destruction (raising and lowering) operators alternatively. Therefore, one may write;

Ĵ₊|j, m⟩ = C₊ħ|j, m + 1⟩   . . . (2-51a)
Ĵ₋|j, m⟩ = C₋ħ|j, m − 1⟩   . . . (2-51b)

where C₊ and C₋ are real numbers to be evaluated as follows. By taking the conjugate of equation (2-51a) and multiplying the result by equation (2-51a) itself, with the aid of equation (2-47b), one finds;

⟨j, m|Ĵ₋Ĵ₊|j, m⟩ = |C₊|²ħ²⟨j, m + 1|j, m + 1⟩ = {j(j + 1) − m(m + 1)}ħ²

C₊ = √( j(j + 1) − m(m + 1) )   . . . (2-52a)

and similarly;

C₋ = √( j(j + 1) − m(m − 1) )   . . . (2-52b)

Now one is able to precisely express the x and y components of angular momentum. However, because;

Ĵ_x = (Ĵ₊ + Ĵ₋)/2   and   Ĵ_y = (Ĵ₊ − Ĵ₋)/2i

Thus;

Ĵ_x|j, m⟩ = (ħ/2){C₊|j, m + 1⟩ + C₋|j, m − 1⟩}   . . . (2-53a)

Ĵ_y|j, m⟩ = (ħ/2i){C₊|j, m + 1⟩ − C₋|j, m − 1⟩}   . . . (2-53b)

Therefore, one may conclude that when Ĵ_x and Ĵ_y act upon the state |j, m⟩, the result contains |j, m + 1⟩ and |j, m − 1⟩, both with equal probability.

Remarks:

1) Since Ĵ² is a hermitian operator, it is reliable, according to equation (2-42a), to say that j(j + 1) ≥ 0. As long as the limiting form j(j + 1) = 0 is concerned, one may find that either j = 0 or j = −1. But, apart from j = 0, the case j = −1 belongs to the negative values of j, and a negative j can never be accepted. So it is desirable to disregard all negative values of j and keep the positive ones.

2) Consider the norm of the state Ĵ₊|j, m⟩;

‖Ĵ₊|j, m⟩‖² = ⟨j, m|Ĵ₋Ĵ₊|j, m⟩ = ⟨j, m|(Ĵ² − Ĵ_z² − ħĴ_z)|j, m⟩ = {j(j + 1) − m(m + 1)}ħ² ≥ 0   . . . (2-54a)

Similarly;

‖Ĵ₋|j, m⟩‖² = {j(j + 1) − m(m − 1)}ħ² ≥ 0   . . . (2-54b)

It follows respectively that;

j(j + 1) ≥ m(m + 1)   . . . (2-55a)
j(j + 1) ≥ m(m − 1)   . . . (2-55b)

3) Another approach could be adopted to find the highest and lowest values of m. At the extreme states one must have;

Ĵ₊|j, m_max⟩ = 0   and   Ĵ₋|j, m_min⟩ = 0

It follows respectively;

(j − m_max)(j + m_max + 1) = 0

And

(j + m_min)(j − m_min + 1) = 0

The solutions m_max = −(j + 1) and m_min = j + 1 should be rejected, since they do not satisfy equations (2-55 b and a) respectively, while the solutions m_max = j and m_min = −j must be accepted. The transition from −j to +j, and vice versa, can be done through unit steps, repeated by applying Ĵ₊ and Ĵ₋ respectively. This means, however, that a state is degenerate with degree 2j + 1; there are 2j + 1 states as one graduates from −j to +j, passing through zero.

2-5-3 Matrices of the Angular Momentum Operators

It should be mentioned that in some cases the representation of operators by matrices becomes a necessary mission. So let us try to treat the angular momentum operators, starting from the following commutation relation, see equations (2-41);

[Ĵ², Ĵ_z] = 0

So;

⟨j′, m′|[Ĵ², Ĵ_z]|j, m⟩ = 0

{j′(j′ + 1) − j(j + 1)}ħ² ⟨j′, m′|Ĵ_z|j, m⟩ = 0

and hence Ĵ_z will only have matrix elements between states that have the same angular momentum quantum number j. In other words, states of different j are independent of each other. Strictly speaking, for a state of specific j, the corresponding angular momentum vector is projected onto 2j + 1 different orientations, depending on m and not on any other quantum number. Indeed, this conclusion is valid as well for any operator that commutes with Ĵ². Anyway, regarding states of fixed j, with the aid of equation (2-42b);

⟨j, m′|Ĵ_z|j, m⟩ = mħ δ_{m′m}   . . . (2-56)

Thus the matrix elements of Ĵ_z that differ from zero are those for which m′ = m. For j = 1 the matrix representation of Ĵ_z is;

        | 1  0   0 |
Ĵ_z = ħ | 0  0   0 |   . . . (2-57)
        | 0  0  −1 |

Furthermore, with the aid of equations (2-51) one may set up the following two equations;

⟨j, m′|Ĵ₊|j, m⟩ = C₊(j, m)ħ δ_{m′,m+1}   . . . (2-58a)
⟨j, m′|Ĵ₋|j, m⟩ = C₋(j, m)ħ δ_{m′,m−1}   . . . (2-58b)

It is seen that the non-zero matrix elements of Ĵ₊ and Ĵ₋ are those for which m′ = m + 1 and m′ = m − 1 respectively. Keep in mind the values of C₊(j, m) and C₋(j, m) are as expressed in equations (2-52). So for j = 1 the matrices of Ĵ₊ and Ĵ₋ in matrix notation are respectively as follows;

        | 0  √2  0  |
Ĵ₊ = ħ  | 0  0   √2 |   . . . (2-59a)
        | 0  0   0  |

        | 0   0   0 |
Ĵ₋ = ħ  | √2  0   0 |   . . . (2-59b)
        | 0   √2  0 |

So one has a method by means of which the matrix of any operator could be constructed.

H.W:
1) By using equations (2-53) find the matrix of each of Ĵ_x and Ĵ_y;

                          | 0  1  0 |
Ĵ_x = ½(Ĵ₊ + Ĵ₋) = (ħ/√2) | 1  0  1 |
                          | 0  1  0 |

And;

                             | 0  −i   0 |
Ĵ_y = (1/2i)(Ĵ₊ − Ĵ₋) = (ħ/√2) | i   0  −i |
                             | 0   i   0 |

3) Making use of the matrix notation, verify the validity of relations (2-39).
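The j = 1 matrices (2-57) and (2-59), together with H.W. items 1 and 3, can be verified numerically (ħ = 1 is an assumption of the sketch):

```python
import numpy as np

hbar = 1.0  # assumption: hbar = 1
s2 = np.sqrt(2)

# Eqs. (2-57), (2-59): the j = 1 matrices in the |1, m> basis (m = 1, 0, -1).
Jz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)
Jp = hbar * np.array([[0, s2, 0], [0, 0, s2], [0, 0, 0]], dtype=complex)
Jm = Jp.conj().T

# H.W. item 1: Jx and Jy from eqs. (2-53).
Jx = (Jp + Jm) / 2
Jy = (Jp - Jm) / (2j)

# H.W. item 3: the matrices satisfy [Jx, Jy] = i*hbar*Jz (eq. 2-39a).
assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * hbar * Jz)

# And J^2 = j(j+1) hbar^2 * 1 with j = 1.
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
assert np.allclose(J2, 2 * hbar**2 * np.eye(3))
```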

2-6 Matrix Representation of State Vectors

Let us consider, for example, the relation that defines an operator Â, i.e.;

Â|ψ⟩ = |φ⟩   . . . (2-60)

Insertion of the unit operator appearing in equation (2-4) between Â and |ψ⟩ leads to the form;

Σ_n Â|n⟩⟨n|ψ⟩ = |φ⟩   . . . (2-61)

When the scalar product of this relation with any member of the complete set |n⟩, say |m⟩, is regarded, one has;

Σ_n ⟨m|Â|n⟩⟨n|ψ⟩ = ⟨m|φ⟩   . . . (2-62)

By writing |ψ⟩ as a column matrix (vector), and so |φ⟩ as well, one may set up the following two equations;

       | ⟨1|ψ⟩ |
|ψ⟩ →  | ⟨2|ψ⟩ |   . . . (2-63)
       | ⟨3|ψ⟩ |
       |   ⋮   |

       | ⟨1|φ⟩ |
|φ⟩ →  | ⟨2|φ⟩ |   . . . (2-64)
       | ⟨3|φ⟩ |
       |   ⋮   |

Or equivalently;

Σ_n A_mn ψ_n = φ_m   . . . (2-65a)

The last equation proves that matrices can be used to represent both operators and kets (bras). However, since;

⟨ψ| = Σ_n ⟨ψ|n⟩⟨n|   . . . (2-65b)

the bra ⟨ψ| is represented by a row matrix;

⟨ψ| → ( ⟨ψ|1⟩  ⟨ψ|2⟩  ⟨ψ|3⟩  ⋯ )   . . . (2-66)

Accordingly, the scalar product ⟨φ|ψ⟩ can be written as a row vector times a column vector;

⟨φ|ψ⟩ = Σ_n ⟨φ|n⟩⟨n|ψ⟩   . . . (2-67)
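Equations (2-65a) and (2-67) amount to ordinary matrix-vector algebra, which can be sketched as follows (the random matrix and vector are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # matrix of an operator
psi = rng.normal(size=3) + 1j * rng.normal(size=3)           # column of <n|psi>

# Eq. (2-65a): phi_m = sum_n A_mn psi_n, i.e. |phi> = A |psi> as matrices.
phi = A @ psi
phi_explicit = np.array([sum(A[m, n] * psi[n] for n in range(3)) for m in range(3)])
assert np.allclose(phi, phi_explicit)

# Eq. (2-67): the scalar product <phi|psi> is a row vector times a column vector.
overlap = phi.conj() @ psi
assert np.isclose(overlap, np.vdot(phi, psi))
```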


2-7 Transformation Between Different Representations

In section (2-4) we talked about the possibility of representing an operator, like Â, in terms of two independent sets of bases, namely |u_n⟩ and |v_n⟩, where the latter set consists of eigenkets of Â while the former does not. The relation between these two representations was established through the unitary transformation matrix U_kn = ⟨u_k|v_n⟩. It was seen (section 2-4) how one could transform a representation of Â in terms of |u_n⟩ to its correspondent in terms of |v_n⟩. Actually, one may revise this situation for another operator, like B̂, and easily follow the same procedure to transform a representation of B̂ in terms of |u_n⟩. Any state vector like |ψ⟩ can be expanded, and so any operator like B̂ can be represented, in terms of these two different sets of bases. This section investigates the transformation between these two representations for both state vectors and operators.

2-7-1 Transformation of State Vectors

The expression of |ψ⟩ in terms of the set of bases |u_n⟩ is given as;

|ψ⟩ = Σ_n |u_n⟩⟨u_n|ψ⟩ = Σ_n a_n |u_n⟩   . . . (2-68a)

In matrix form;

|ψ⟩ → a = (a₁ a₂ a₃ ⋯)ᵀ   . . . (2-68b)

In the representation of |v_n⟩ the same state vector (|ψ⟩) is depicted as;

|ψ⟩ = Σ_n |v_n⟩⟨v_n|ψ⟩ = Σ_n b_n |v_n⟩   . . . (2-69a)

|ψ⟩ → b = (b₁ b₂ b₃ ⋯)ᵀ   . . . (2-69b)

Now, starting from the representation in terms of |u_n⟩, we want to know the expression of the state vector |ψ⟩ in terms of the second representation (|v_n⟩). Since an eigenket |u_k⟩ can be expanded in terms of the bases |v_n⟩, equation (2-68a) becomes;

|ψ⟩ = Σ_k a_k Σ_n |v_n⟩⟨v_n|u_k⟩ = Σ_n { Σ_k (U†)_nk a_k } |v_n⟩   . . . (2-70)

where U_kn = ⟨u_k|v_n⟩ is the unitary transformation matrix. The comparison between equations (2-70) and (2-69a) reveals that;

b_n = Σ_k (U†)_nk a_k   . . . (2-71a)

H.W: Verify the validity of equation (2-71a).

Anyway, equation (2-71a) gives the relation between the two different representations of the same state vector. In matrix form this equation can be written as;

b = U† a   . . . (2-71b)

H.W: Show that, by starting from the representation in terms of |v_n⟩ for the state vector |ψ⟩, one can deduce its representation in terms of |u_n⟩ to be;

a_k = Σ_n U_kn b_n   . . . (2-72a)

Or equivalently, in matrix form;

a = U b   . . . (2-72b)
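The component transformations (2-71b) and (2-72b) can be sketched by taking |u_n⟩ as the standard basis of C³ and |v_n⟩ as the columns of a random unitary (both choices are assumptions of the illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two orthonormal bases of C^3: the standard one (|u_n>) and the columns of a
# random unitary V (|v_n>).  Then U_kn = <u_k|v_n> is simply V itself.
V, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
U = V

psi = rng.normal(size=3) + 1j * rng.normal(size=3)
a = psi                              # components a_k = <u_k|psi>

# Eq. (2-71b): components in the v-basis are b = U^dagger a.
b = U.conj().T @ a
# Direct evaluation: b_n = <v_n|psi>.
b_direct = np.array([np.vdot(V[:, n], psi) for n in range(3)])
assert np.allclose(b, b_direct)

# Eq. (2-72b): transforming back, a = U b.
assert np.allclose(U @ b, a)
```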


2-7-2 Transformation of Operators

According to equation (2-68), the matrix element of an operator B̂, by means of two arbitrary state vectors (|ψ⟩ and |φ⟩), can be written as follows;

⟨φ|B̂|ψ⟩ = Σ_{m,n} ⟨φ|u_m⟩⟨u_m|B̂|u_n⟩⟨u_n|ψ⟩   . . . (2-73a)

Or equivalently, in matrix form;

⟨φ|B̂|ψ⟩ = φ† B ψ   . . . (2-73b)

It is well known that the expectation value is a special case of the last equation, so it may be written as;

⟨ψ|B̂|ψ⟩ = Σ_{m,n} ⟨ψ|u_m⟩⟨u_m|B̂|u_n⟩⟨u_n|ψ⟩ = Σ_{m,n} a_m* B_mn a_n   . . . (2-74a)

⟨ψ|B̂|ψ⟩ = a† B a   . . . (2-74b)

Equations (2-73 and 2-74) give evidence of the fact that the diagonal elements of any operator's matrix correspond to expectation values of that operator over the ket vectors (bases) by means of which a given state vector is expanded. Obviously, both the operator B̂ and the state vector |ψ⟩ are represented in terms of the orthonormal set (bases) |u_n⟩. Anyway, there are only two cases of the arbitrary state vector (|ψ⟩) that can be taken into account: either it is expanded in terms of the eigenkets |v_n⟩ of the operator B̂ (recall equation (2-69)), or conversely in terms of the non-eigenkets |u_n⟩ of this operator (recall equation (2-68)).


Case-I: V-Representation

Concerning such a case, the state vector |ψ⟩ is expanded in terms of the set |v_n⟩, which are the eigenket vectors of the operator B̂, i.e. B̂|v_n⟩ = b_n|v_n⟩. So equation (2-74a) becomes;

⟨ψ|B̂|ψ⟩ = Σ_{m,n} ⟨ψ|v_m⟩⟨v_m|B̂|v_n⟩⟨v_n|ψ⟩ = Σ_n b_n |⟨v_n|ψ⟩|²   . . . (2-75a)

In matrix form;

                              | b₁  0   0  ⋯ | | ⟨v₁|ψ⟩ |
⟨ψ|B̂|ψ⟩ = ( ⟨ψ|v₁⟩ ⟨ψ|v₂⟩ ⋯ ) | 0   b₂  0  ⋯ | | ⟨v₂|ψ⟩ |   . . . (2-75b)
                              | 0   0   b₃ ⋯ | |   ⋮    |

It is worth mentioning that for such a case the projection of the state vector |ψ⟩ onto any eigenket vector |v_n⟩ has a probability |⟨v_n|ψ⟩|², since an expectation value governs all of the allowed states of a quantum mechanical system. Actually, equation (2-75) states that a measurement of the observable B has been carried out, and hence the probable results are distributed among all of the admissible states (kets) of the system (space). Since the probability is conserved in a system of stationary states, it follows that Σ_n |⟨v_n|ψ⟩|² = 1.

H.W:
i- Derive, with explanation, why the scalar product ⟨ψ|ψ⟩ ends at the unit value, i.e. Σ_n |⟨v_n|ψ⟩|² = 1.
ii- Set the expression in (i) in matrix form.


Case-II: U-Representation

The state vector |ψ⟩ will now be expanded in terms of the orthonormal set (bases) |u_n⟩, which are not eigenkets of the operator B̂. So equation (2-74a) can be written as;

⟨ψ|B̂|ψ⟩ = Σ_{m,n} ⟨ψ|u_m⟩⟨u_m|B̂|u_n⟩⟨u_n|ψ⟩ = Σ_{m,n} a_m* B̃_mn a_n   . . . (2-76a)

In matrix form;

⟨ψ|B̂|ψ⟩ = a† B̃ a   . . . (2-76b)

where the matrix elements in this representation are denoted by B̃_mn = ⟨u_m|B̂|u_n⟩ in order to make a clear distinction from their counterparts in the V-representation. Let us now make a transformation from the U-representation to the V-representation, starting from equation (2-76a). This equation, with the aid of equation (2-72a), can be re-written in the form;

⟨ψ|B̂|ψ⟩ = Σ_{m,n} Σ_{k,l} b_k* (U†)_km B̃_mn U_nl b_l = Σ_{k,l} b_k* (U† B̃ U)_kl b_l   . . . (2-77)

The result of the comparison between equations (2-77 and 2-75a) leads to the following formula;

(U† B̃ U)_kl = b_k δ_kl   . . . (2-77a)

Or;

B(v) = U† B(u) U   . . . (2-77b)
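Equation (2-77b) and the basis independence of the expectation value can be checked numerically (the random Hermitian matrix below is my own illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(5)

# A Hermitian operator B given by its matrix in the u-basis (B_tilde).
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B_tilde = (M + M.conj().T) / 2

b_vals, U = np.linalg.eigh(B_tilde)       # U_kn = <u_k|v_n>

# Eq. (2-77b): in the v-basis the matrix U^dagger B_tilde U is diagonal.
B_v = U.conj().T @ B_tilde @ U
assert np.allclose(B_v, np.diag(b_vals))

# The expectation value is basis independent: a† B_tilde a = b† B_v b.
a = rng.normal(size=3) + 1j * rng.normal(size=3)
b = U.conj().T @ a                         # eq. (2-71b)
assert np.isclose(a.conj() @ B_tilde @ a, b.conj() @ B_v @ b)
```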


Equation (2-77) is exactly similar to equation (2-33), and so a transformation of the matrix elements of the operator B̂ has been carried out from the space |u_n⟩ to the space |v_n⟩. In other words, a diagonalization process for the operator B̂ is implemented by means of the unitary transformation matrix U. Starting from equation (2-75a), with the aid of equation (2-71a), one may use a similar approach to make the reverse transformation, from the V-representation to the U-representation, as follows;

⟨ψ|B̂|ψ⟩ = Σ_k b_k |⟨v_k|ψ⟩|² = Σ_k Σ_{m,n} a_m* U_mk b_k (U†)_kn a_n = Σ_{m,n} a_m* (U B(v) U†)_mn a_n   . . . (2-78)

The comparison between equations (2-78 and 2-76a) reveals that;

B̃_mn = Σ_k U_mk b_k (U†)_kn   . . . (2-79a)

Or, in general;

B(u) = U B(v) U†   . . . (2-79b)

H.W:
i- Starting from equation (2-76a), obtain the expression in equation (2-75a).
ii- Prove that; [ , ] = 0.
iii- Verify under which condition the matrix of the operator B̂ becomes diagonal.
iv- Show that the probability distribution identity, Σ_n |⟨v_n|ψ⟩|² = 1, is preserved under the transformation between the two representations.


v- Show that the probability distribution formula in (iv) may imply the aspect of degeneracy.
vi- Show that the probability distribution formula in (iv) involves that the eigen vectors of each eigen value are normalized.

2-8 Eigen Values (Vectors) Determination

Obviously, an eigen value equation is a special case of equation (2-60), which in turn can be written as;

Â|ψ⟩ = a|ψ⟩   . . . (2-80)

Indeed, this equation describes a definite measurement process for the observable A in the state vector |ψ⟩, where the result is an eigen value a. Equation (2-80) implies that either the state vector |ψ⟩ is one of the complete eigenket vectors |ψ_n⟩, so a must tend to be one of the a_n, or the eigen value belongs to the projection of |ψ⟩ over all of the eigenket vectors |ψ_n⟩. The latter case involves degeneracy for the state vector |ψ⟩ that has an eigen value a. Let us ignore the degeneracy and focus attention on the former case. Accordingly, equation (2-80) can be written as;

Σ_n A_mn c_n = a c_m   . . . (2-81)

Σ_n (A_mn − a δ_mn) c_n = 0   . . . (2-82a)

In matrix form equation (2-82a) can be written as;

(A − a·1) c = 0   . . . (2-82b)

According to the theory of matrices, there will be a non-trivial solution for this system of equations if and only if the determinant of the matrix (A − a·1) is null, i.e.;

det(A − a·1) = 0   . . . (2-83)

The solution of equation (2-83) gives an algebraic equation of power i when the matrix of Â is an i × i square matrix. By solving this algebraic equation, one can find the eigen values of Â to be its roots a₁, a₂, a₃, …, a_i. Once the eigen values are well defined, the corresponding eigen vectors can be found from equation (2-82a). Actually, this is an efficient approach for finding the eigen values and their corresponding eigen vectors of operators represented by finite matrices. However, this is not so simple for infinite matrices, which is equivalent to solving the Schrödinger equation. Indeed, what we have done so far is to set up the following system of equations;

(A − a_n·1) c(n) = 0,   n = 1, 2, …, i   . . . (2-84)

In the basis of its own eigen vectors the matrix of Â is diagonal;

    | a₁  0   0  ⋯ |
A = | 0   a₂  0  ⋯ |   . . . (2-85)
    | 0   0   a₃ ⋯ |
    | ⋮   ⋮   ⋮  ⋱ |

Thus one obviously sees that the eigen values are a₁, a₂, …, a_i, and the corresponding orthonormalized eigen vectors are;

       | 1 |          | 0 |               | 0 |
c(1) = | 0 | , c(2) = | 1 | , … , c(i) =  | ⋮ |
       | ⋮ |          | ⋮ |               | 1 |
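The determinant route (2-83) can be sketched for a small matrix (my own 2×2 example), using `np.poly` to obtain the characteristic polynomial from the matrix:

```python
import numpy as np

# A small Hermitian matrix to diagonalize "by hand" via eq. (2-83).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# det(A - a*1) = 0: the characteristic polynomial (here a^2 - 4a + 3).
coeffs = np.poly(A)                 # polynomial coefficients from the matrix
roots = np.sort(np.roots(coeffs))   # its roots are the eigen values
assert np.allclose(roots, [1.0, 3.0])

# Each root gives an eigen vector from (A - a*1) c = 0, eq. (2-82a).
for a in roots:
    vals, vecs = np.linalg.eigh(A - a * np.eye(2))
    c = vecs[:, np.argmin(np.abs(vals))]   # null-space direction
    assert np.allclose(A @ c, a * c)
```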


Example: Determine the eigen values and the normalized eigen vectors of the matrix shown below, and then find the unitary matrix that diagonalizes it;

    | 0  −i  0 |
A = | i   0  0 |
    | 0   0  0 |

Solution: According to the relation det(A − λ·1) = 0 we have;

    | −λ  −i   0 |
det | i   −λ   0 | = −λ(λ² − 1) = 0
    | 0    0  −λ |

which leads to λ being either (0) or (1) or (−1). So the eigen vectors corresponding to each of these eigen values may be deduced as follows;

i) When λ = 0: the equations give c₁ = c₂ = 0, and hence c₃ = 1 due to |c₁|² + |c₂|² + |c₃|² = 1, i.e. (0 0 1)ᵀ.

ii) When λ = 1: c₂ = i c₁ and c₃ = 0 due to the same reason, i.e. (1/√2)(1 i 0)ᵀ.

iii) When λ = −1: c₂ = −i c₁ and c₃ = 0, i.e. (1/√2)(1 −i 0)ᵀ.

Therefore, the eigen vectors (1/√2)(1 i 0)ᵀ, (1/√2)(1 −i 0)ᵀ, and (0 0 1)ᵀ correspond respectively to the eigen values λ = 1, −1, 0. So the matrix that diagonalizes A is;

           | 1   1   0  |
U = (1/√2) | i  −i   0  |
           | 0   0   √2 |
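Assuming the example matrix reads A = [[0, −i, 0], [i, 0, 0], [0, 0, 0]] (a reconstruction; the original entries are partly illegible), the result can be confirmed numerically:

```python
import numpy as np

# Example matrix (assumed reconstruction of the one in the notes).
A = np.array([[0, -1j, 0],
              [1j,  0, 0],
              [0,   0, 0]])

vals, vecs = np.linalg.eigh(A)
assert np.allclose(np.sort(vals), [-1.0, 0.0, 1.0])

# eigh returns the eigen vectors as columns of a unitary U, and U^dagger A U
# is the diagonal matrix of eigen values -- the matrix that diagonalizes A.
U = vecs
assert np.allclose(U.conj().T @ A @ U, np.diag(vals))
assert np.allclose(U.conj().T @ U, np.eye(3))
```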

H.W: For the above example find U†, and then verify the relation U†U = 1.

Exercises:
1) The y-component of the angular momentum may be given by the following expression; Ĵ_y = (ħ/2i)(Ĵ₊ − Ĵ₋). By assuming the spherical harmonics

Y₁¹ = −√(3/8π) sinθ e^{iφ},   Y₁⁰ = √(3/4π) cosθ,   Y₁⁻¹ = √(3/8π) sinθ e^{−iφ}

as ket vectors (bases) for the representation requirements, find;
i- The matrix form of this operator.
ii- The eigen values and their corresponding eigen vectors of this operator.

2) Concerning the harmonic oscillator system, determine the matrices of x̂, p̂, and Ĥ.

3) Given the matrix;

A = | 19/4   …   |
    |  …    19/6 |

i- Determine the eigen values and eigen vectors.
ii- Determine the matrix U that diagonalizes A.


2-9 Selection Rules

A transition of a certain dynamical variable from a specific state to a different one is not always possible, so it is important to know the procedure by which these transitions are allowed or forbidden. For example, one may start with a molecule modeled as a harmonic oscillator, since it is a simple one; this case corresponds to absorption or emission of infrared radiation. The question now is what restrictions apply to the transition of the dipole moment for the case under consideration. Initially, let us neglect the rotational transitions and consider vibrational ones, and start by expanding the dipole moment μ in a Maclaurin series around the equilibrium nuclear separation as follows;

μ(x) = μ(x₀) + (dμ/dx)₀ Δx + ⋯

What physical meaning can be realized from this expansion?

Then the dipole moment transition element, considering the first two terms, is;

μ_nm = ⟨n|μ|m⟩

μ_nm = μ₀⟨n|m⟩ + (dμ/dx)₀⟨n|Δx|m⟩

μ_nm = μ₀ δ_nm + C ⟨n|(â† + â)|m⟩

where C = (dμ/dx)₀ (ħ/2mω)^{1/2}. For n ≠ m;

μ_nm = C { ⟨n|â†|m⟩ + ⟨n|â|m⟩ }

μ_nm = C { √(m + 1) ⟨n|m + 1⟩ + √m ⟨n|m − 1⟩ }

μ_nm = C { √(m + 1) δ_{n,m+1} + √m δ_{n,m−1} }

So the only non-zero matrix elements are those for which n = m ± 1. Hence the selection rule for these transitions is;

Δn = ±1
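The Δn = ±1 rule follows from the band structure of the matrix of (â + â†). A sketch in a truncated oscillator basis (the truncation is an assumption of the illustration):

```python
import numpy as np

N = 8  # truncated oscillator basis (assumption; edge effects at n = N-1 ignored)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator, eq. (2-18)
x_like = a + a.conj().T                      # proportional to the Delta-x operator

# <n|(a + a^dagger)|m> vanishes unless n = m +/- 1: the Delta n = +/-1 rule.
for n in range(N):
    for m in range(N):
        if abs(n - m) != 1:
            assert np.isclose(x_like[n, m], 0.0)
        else:
            assert x_like[n, m] > 0
```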

Concerning the rotational transitions, a similar approach can be followed if a diatomic molecule is modeled as a rigid rotator. Typically, this situation governs the transitions and absorption in the range of far infrared or microwave radiation. For simplicity, one may assume that the field is polarized in one direction, say z, for example. Then the dipole moment takes the form;

μ_z = μ₀ cos θ

Which direction is this?

So,

μ_{l′m′,lm} = ⟨l′m′|μ_z|lm⟩

μ_{l′m′,lm} = μ₀ ⟨l′m′|cos θ|lm⟩

Using the recurrence relation of the spherical harmonics;

cos θ |l, m⟩ = a_l |l − 1, m⟩ + b_l |l + 1, m⟩

one finds;

μ_{l′m′,lm} = μ₀ a_l ⟨l′m′|l − 1, m⟩ + μ₀ b_l ⟨l′m′|l + 1, m⟩

μ_{l′m′,lm} = μ₀ a_l δ_{m′m} δ_{l′,l−1} + μ₀ b_l δ_{m′m} δ_{l′,l+1}

It can be seen that the non-zero matrix elements of μ_z are those for which;

Δm = 0   and   Δl = ±1

H.W: Deduce the dipole moment selection rules for a molecule considering its rotational and vibrational motion together. Hint; use the assumption μ(x) = (a + b q) cos θ, where a and b are constants.

