Mathematical Methods for Physicists, Weber and Arfken, Ch. 3 Selected Solutions

Ch. 3: 3.3.1, 3.3.12, 3.3.13, 3.5.4, 3.5.6, 3.5.9, 3.5.30


Published by: Josh Brewer on Dec 10, 2010
Copyright: Attribution Non-commercial


Physics 451 Homework Assignment #2 — Solutions
Fall 2004
Textbook problems: Ch. 3: 3.3.1, 3.3.12, 3.3.13, 3.5.4, 3.5.6, 3.5.9, 3.5.30

Chapter 3

3.3.1 Show that the product of two orthogonal matrices is orthogonal.

Suppose the matrices A and B are orthogonal. This means that A A^T = I and B B^T = I. We denote the product of A and B by C = AB. To show that C is orthogonal, we compute C C^T and see what happens. Recalling that the transpose of a product is the reversed product of the transposes, we have

    C C^T = (AB)(AB)^T = A B B^T A^T = A A^T = I

so C is indeed orthogonal.

This is a key step in showing that the orthogonal matrices form a group, because one of the requirements of being a group is that the product of any two elements (i.e. A and B) in the group yields a result (i.e. C) that is also in the group. This property is known as closure. Along with closure, we also need to show associativity (automatic for matrix multiplication), the existence of an identity element (the unit matrix I, which is orthogonal), and the existence of an inverse (A^{-1} = A^T, which is orthogonal whenever A is). Since all four conditions are satisfied, the set of n × n orthogonal matrices forms the orthogonal group, denoted O(n). While general orthogonal matrices have determinants ±1, the subgroup of matrices with determinant +1 forms the "special orthogonal" group SO(n).

3.3.12 A is 2 × 2 and orthogonal. Find the most general form of

    A = ( a  b )
        ( c  d )

Compare with two-dimensional rotation.

Since A is orthogonal, it must satisfy the condition A A^T = I, or

    ( a  b ) ( a  c )   ( a² + b²   ac + bd )   ( 1  0 )
    ( c  d ) ( b  d ) = ( ac + bd   c² + d² ) = ( 0  1 )

This gives three conditions:

    i) a² + b² = 1,    ii) c² + d² = 1,    iii) ac + bd = 0

These are three equations for four unknowns, so there will be a free parameter left over. There are many ways to solve the equations. However, one nice way is

to notice that a² + b² = 1 is the equation for a unit circle in the a–b plane. This means we can write a and b in terms of an angle θ:

    a = cos θ,    b = sin θ

Similarly, c² + d² = 1 can be solved by setting

    c = cos φ,    d = sin φ

Of course, we have one more equation to solve, ac + bd = 0, which becomes

    cos θ cos φ + sin θ sin φ = cos(θ − φ) = 0

This means that θ − φ = π/2 or θ − φ = 3π/2. We must consider both cases separately.

φ = θ − π/2: This gives

    c = cos(θ − π/2) = sin θ,    d = sin(θ − π/2) = − cos θ

or

    A1 = ( cos θ    sin θ )
         ( sin θ   − cos θ )        (1)

This looks almost like a rotation, but not quite (the minus sign is in the wrong place).

φ = θ − 3π/2: This gives

    c = cos(θ − 3π/2) = − sin θ,    d = sin(θ − 3π/2) = cos θ

or

    A2 = ( cos θ   − sin θ )
         ( sin θ     cos θ )        (2)

which is exactly a rotation. Note that we can tell the difference between matrices of type (1) and (2) by computing the determinant. We see that det A1 = −1 while det A2 = +1. In fact, the A2 type of matrices form the SO(2) group, which is exactly the group of rotations in the plane. On the other hand, the A1 type of matrices represent rotations followed by a mirror reflection y → −y. This can be seen by writing

    A1 = ( 1   0 ) (  cos θ   sin θ )
         ( 0  −1 ) ( − sin θ  cos θ )

where the second factor is a rotation (by −θ) and the first is the reflection y → −y.
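The determinant distinction and the reflection-times-rotation factorization are easy to check numerically. The following is an illustrative pure-Python sketch, not part of the original solution; the sample angle is arbitrary:

```python
import math

def matmul2(X, Y):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(X):
    """Determinant of a 2x2 matrix."""
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

theta = 0.7  # arbitrary sample angle
c, s = math.cos(theta), math.sin(theta)

A1 = [[c, s], [s, -c]]   # type (1): rotation combined with a reflection
A2 = [[c, -s], [s, c]]   # type (2): pure rotation

print(det2(A1), det2(A2))  # approximately -1 and +1

# A1 factors as (mirror y -> -y) times (rotation by -theta)
mirror = [[1.0, 0.0], [0.0, -1.0]]
rot_minus = [[c, s], [-s, c]]
product = matmul2(mirror, rot_minus)
print(all(abs(product[i][j] - A1[i][j]) < 1e-12
          for i in range(2) for j in range(2)))  # True
```

Changing `theta` changes the entries of A1 and A2 but never the determinants, which is exactly the group-theoretic point: the two families are separated by det = ±1.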

Note that the set of A1 matrices by itself does not form a group (it contains neither the identity nor closes under multiplication). However, the set of all orthogonal matrices {A1, A2} forms the O(2) group, which is the group of rotations and mirror reflections in two dimensions.

3.3.13 Here |x⟩ and |y⟩ are column vectors. Under an orthogonal transformation S, |x′⟩ = S|x⟩, |y′⟩ = S|y⟩. Show that the scalar product ⟨x|y⟩ is invariant under this orthogonal transformation.

To prove the invariance of the scalar product, we compute

    ⟨x′|y′⟩ = ⟨x|S^T S|y⟩ = ⟨x|y⟩

where we used S^T S = I for an orthogonal matrix S. This demonstrates that the scalar product is invariant (the same in the primed and unprimed frames).

3.5.4 Show that a real matrix that is not symmetric cannot be diagonalized by an orthogonal similarity transformation.

We take the hint and start by denoting the real non-symmetric matrix by A. Assuming that A can be diagonalized by an orthogonal similarity transformation, there exists an orthogonal matrix S such that

    Λ = S A S^T

where Λ is diagonal.

We can 'invert' this relation by multiplying both sides on the left by S^T and on the right by S. This yields

    A = S^T Λ S

Taking the transpose of A, we find

    A^T = (S^T Λ S)^T = S^T Λ^T (S^T)^T

However, the transpose of a transpose is the original matrix, (S^T)^T = S, and the transpose of a diagonal matrix is the original matrix, Λ^T = Λ. Hence

    A^T = S^T Λ S = A

Since the matrix A is equal to its transpose, A has to be a symmetric matrix. However, recall that A is supposed to be non-symmetric. Hence we run into a contradiction. As a result, we must conclude that A cannot be diagonalized by an orthogonal similarity transformation.
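The contradiction can be made concrete from the other direction: for any orthogonal S and diagonal Λ, the matrix S^T Λ S is automatically symmetric. A minimal pure-Python sketch (illustrative only; the angle and diagonal entries are arbitrary sample values):

```python
import math

def matmul2(X, Y):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta = 0.3                       # arbitrary sample angle
c, s = math.cos(theta), math.sin(theta)
S  = [[c, -s], [s, c]]            # an orthogonal matrix (a rotation)
St = [[c, s], [-s, c]]            # its transpose
Lam = [[2.0, 0.0], [0.0, 5.0]]    # an arbitrary diagonal matrix

# A = S^T Lam S comes out symmetric, so only a symmetric matrix
# can arise from an orthogonal similarity transformation of Lam.
A = matmul2(St, matmul2(Lam, S))
print(abs(A[0][1] - A[1][0]) < 1e-12)  # True: off-diagonal entries agree
```

The trace is also preserved (here 2 + 5 = 7), as expected for any similarity transformation.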

3.5.6 A has eigenvalues λ_i and corresponding eigenvectors |x_i⟩. Show that A⁻¹ has the same eigenvectors but with eigenvalues λ_i⁻¹.

If A has eigenvalues λ_i and eigenvectors |x_i⟩, that means

    A|x_i⟩ = λ_i|x_i⟩

Multiplying both sides by A⁻¹ on the left, we find

    A⁻¹A|x_i⟩ = λ_i A⁻¹|x_i⟩    or    |x_i⟩ = λ_i A⁻¹|x_i⟩

Rewriting this as

    A⁻¹|x_i⟩ = λ_i⁻¹|x_i⟩

it is now obvious that A⁻¹ has the same eigenvectors, but eigenvalues λ_i⁻¹.

3.5.9 Two Hermitian matrices A and B have the same eigenvalues. Show that A and B are related by a unitary similarity transformation.

Since both A and B have the same eigenvalues, they can both be diagonalized according to

    Λ = U A U†,    Λ = V B V†

where Λ is the same diagonal matrix of eigenvalues. This means

    U A U† = V B V†    ⇒    B = V† U A U† V

If we let W = V†U, its Hermitian conjugate is W† = (V†U)† = U†V. This means that

    B = W A W†    where W = V†U

and W W† = V†U U†V = I. Hence A and B are related by a unitary similarity transformation.

3.5.30 a) Determine the eigenvalues and eigenvectors of

    ( 1  ε )
    ( ε  1 )

Note that the eigenvalues are degenerate for ε = 0 but the eigenvectors are orthogonal for all ε ≠ 0 and ε → 0.

We first find the eigenvalues through the secular equation

    | 1 − λ     ε   |
    |   ε     1 − λ | = (1 − λ)² − ε² = 0

This is easily solved:

    (λ − 1)² = ε²    ⇒    λ − 1 = ±ε        (3)

Hence the two eigenvalues are λ+ = 1 + ε and λ− = 1 − ε.

For the eigenvectors, we start with λ+ = 1 + ε. Substituting this into the eigenvalue problem (A − λI)|x⟩ = 0, we find

    ( −ε   ε ) ( a )
    (  ε  −ε ) ( b ) = 0    ⇒    ε(a − b) = 0    ⇒    a = b

Since the problem did not ask us to normalize the eigenvectors, we can simply take

    λ+ = 1 + ε :    |x+⟩ = ( 1 )
                           ( 1 )

For λ− = 1 − ε, we obtain instead

    ( ε  ε ) ( a )
    ( ε  ε ) ( b ) = 0    ⇒    ε(a + b) = 0    ⇒    a = −b

This gives

    λ− = 1 − ε :    |x−⟩ = (  1 )
                           ( −1 )
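These eigenpairs are easy to verify numerically. The following pure-Python sketch (illustrative; the value of ε is an arbitrary sample) checks that A|x±⟩ = λ±|x±⟩:

```python
eps = 0.25  # arbitrary sample value of the parameter ε

A = [[1.0, eps], [eps, 1.0]]

def matvec2(M, v):
    """Apply a 2x2 matrix to a 2-component vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

pairs = [(1.0 + eps, [1.0, 1.0]),     # (λ+, |x+>)
         (1.0 - eps, [1.0, -1.0])]    # (λ-, |x->)

for lam, x in pairs:
    Ax = matvec2(A, x)
    print(Ax == [lam * xi for xi in x])  # True for both eigenpairs
```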

Note that the eigenvectors |x+⟩ and |x−⟩ are orthogonal and independent of ε. In a way, we are just lucky that they are independent of ε (they did not have to turn out that way). However, orthogonality is guaranteed so long as the eigenvalues are distinct (i.e. ε ≠ 0). This was something we proved in class.

b) Determine the eigenvalues and eigenvectors of

    ( 1   1 )
    ( ε²  1 )

Note that the eigenvalues are degenerate for ε = 0 and for this (nonsymmetric) matrix the eigenvectors (ε → 0) do not span the space.

In this nonsymmetric case, the secular equation is

    | 1 − λ     1   |
    |   ε²    1 − λ | = (1 − λ)² − ε² = 0

Interestingly enough, this equation is the same as (3), even though the matrix is different. Hence this matrix has the same eigenvalues λ+ = 1 + ε and λ− = 1 − ε.

For λ+ = 1 + ε, the eigenvector equation is

    ( −ε   1 ) ( a )
    ( ε²  −ε ) ( b ) = 0    ⇒    −εa + b = 0    ⇒    b = εa

Up to normalization, this gives

    λ+ = 1 + ε :    |x+⟩ = ( 1 )
                           ( ε )        (4)

For the other eigenvalue, λ− = 1 − ε, we find

    ( ε   1 ) ( a )
    ( ε²  ε ) ( b ) = 0    ⇒    εa + b = 0    ⇒    b = −εa

Hence, we obtain

    λ− = 1 − ε :    |x−⟩ = (  1 )
                           ( −ε )        (5)

In this nonsymmetric case, the eigenvectors do depend on ε. Furthermore, when ε = 0 it is easy to see that both eigenvectors degenerate into the same vector

    ( 1 )
    ( 0 )

c) Find the cosine of the angle between the two eigenvectors as a function of ε for 0 ≤ ε ≤ 1.

For the eigenvectors of part a), the angle is always 90° since they are orthogonal. Thus this part really refers to the eigenvectors of part b). Recalling that the angle can be defined through the inner product

    ⟨x+|x−⟩ = |x+| |x−| cos θ

we have

    cos θ = ⟨x+|x−⟩ / ( ⟨x+|x+⟩^{1/2} ⟨x−|x−⟩^{1/2} )

Using the eigenvectors of (4) and (5), we find

    cos θ = (1 − ε²) / ( √(1 + ε²) √(1 + ε²) ) = (1 − ε²)/(1 + ε²)

Recall that the Cauchy-Schwarz inequality guarantees that cos θ lies between −1 and +1. When ε = 0 we find cos θ = 1, so the eigenvectors are collinear (and degenerate), while for ε = 1 we find instead cos θ = 0, so the eigenvectors are orthogonal.
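As a closing check, the closed-form result can be compared against a direct inner-product computation. This is an illustrative pure-Python sketch (the sample ε values are arbitrary):

```python
import math

def cos_angle(u, v):
    """Cosine of the angle between two 2-component vectors."""
    dot = u[0]*v[0] + u[1]*v[1]
    return dot / (math.hypot(u[0], u[1]) * math.hypot(v[0], v[1]))

for eps in (0.0, 0.5, 1.0):
    x_plus, x_minus = [1.0, eps], [1.0, -eps]   # eigenvectors (4) and (5)
    closed_form = (1 - eps**2) / (1 + eps**2)   # (1 - ε²)/(1 + ε²)
    print(eps, cos_angle(x_plus, x_minus), closed_form)
```

The endpoints reproduce the discussion above: ε = 0 gives cos θ = 1 (collinear, degenerate), while ε = 1 gives cos θ = 0 (orthogonal).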
