Similarity Transform: A Typical Matrix
Suppose that we have our favorite matrix aa. It is real and has distinct real eigenvalues. From the linear ODE problem x'(t) = A x(t), we see that it would be nice to transform this problem into a simpler one whose matrix has only diagonal elements. Can we do this? What values will be on the diagonal of the simpler matrix?
‡ A typical matrix
aa = {{2., -1., 3.}, {2., -4., -3.}, {1., 2., 7.}}
( 2.  -1.   3. )
( 2.  -4.  -3. )
( 1.   2.   7. )
evals = Eigenvalues[aa]
evecs = Eigenvectors[aa]
We should verify that these are linearly independent and check to see if they happen to be orthogonal.
Det[evecs]
0.470973
2 similarity.transforms.nb
evecs[[2]].evecs[[2]]
1.
evecs[[1]].evecs[[2]]
-0.135743
evecs[[3]].evecs[[1]]
-0.0764779
evecs[[2]].evecs[[3]]
-0.85796
Now we understand that this transformation can be accomplished using a similarity transform of the form P⁻¹ A P, where the matrix P has the eigenvectors of A as its columns.
p = Transpose[evecs]
pinv = Inverse[p]
Chop[pinv . aa . p]
( 7.2789    0         0       )
( 0        -2.44731   0       )
( 0         0         0.16841 )
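The whole computation so far can be reproduced with a NumPy sketch outside Mathematica (eig's ordering and signs may differ from the notebook's):

```python
import numpy as np

aa = np.array([[2., -1., 3.],
               [2., -4., -3.],
               [1., 2., 7.]])

# Columns of p are eigenvectors, so p already plays the role of P in P^-1 A P.
evals, p = np.linalg.eig(aa)
d = np.linalg.inv(p) @ aa @ p

# The similar matrix is diagonal, with the eigenvalues on the diagonal.
assert np.allclose(d, np.diag(evals))
print(np.round(np.diag(d), 5))   # 7.2789, -2.44731, 0.16841 in some order
```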
Next, try a symmetric matrix.
aa = {{1., 2., 3.}, {2., 0, -3.}, {3., -3., 1.}}
( 1.   2.   3. )
( 2.   0   -3. )
( 3.  -3.   1. )
evals = Eigenvalues[aa]
evecs = Eigenvectors[aa]
evecs[[1]].evecs[[1]]
1.
evecs[[1]].evecs[[2]]
-4.44089 × 10^-16
evecs[[1]].evecs[[3]]
9.71445 × 10^-17
evecs[[2]].evecs[[3]]
-2.77556 × 10^-17
p = Transpose[evecs]
Inverse[p]
Chop[Inverse[p] . aa . p]
( -4.69527   0         0      )
(  0         4.22547   0      )
(  0         0         2.4698 )
Since the eigenvectors of a real symmetric matrix are orthonormal, Inverse[p] equals Transpose[p], so the transpose alone accomplishes the transform:
Chop[Transpose[p] . aa . p]
( -4.69527   0         0      )
(  0         4.22547   0      )
(  0         0         2.4698 )
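In NumPy the symmetric case has a dedicated routine; a sketch (eigh returns orthonormal eigenvector columns, in ascending eigenvalue order):

```python
import numpy as np

aa = np.array([[1., 2., 3.],
               [2., 0., -3.],
               [3., -3., 1.]])

# eigh exploits symmetry and returns ORTHONORMAL eigenvector columns,
# so p is an orthogonal matrix and its transpose works in place of its inverse.
evals, p = np.linalg.eigh(aa)

assert np.allclose(p.T @ p, np.eye(3))            # p^T p = identity
assert np.allclose(p.T @ aa @ p, np.diag(evals))  # P^T A P is diagonal
print(np.round(evals, 5))   # ascending: -4.69527, 2.4698, 4.22547
```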
Next, a matrix whose eigenvalues include a complex-conjugate pair.
aa = {{1., -1., 3.}, {2., 0, -3.}, {1., 1., 1.}}
( 1.  -1.   3. )
( 2.   0   -3. )
( 1.   1.   1. )
evals = Eigenvalues[aa]
evecs = Eigenvectors[aa]
(  0.517975                  0.853586          0.055619                )
( -0.143161 + 0.448655 I     0.825076 + 0. I   0.00842373 - 0.312078 I )
( -0.143161 - 0.448655 I     0.825076 + 0. I   0.00842373 + 0.312078 I )
p = Transpose[evecs]
pinv = Inverse[p]
Chop[pinv . aa . p]
( 2.75531   0                       0                       )
( 0        -0.377654 + 2.22227 I    0                       )
( 0         0                      -0.377654 - 2.22227 I    )
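A NumPy sketch of the complex case; the transform still diagonalizes, just over the complex numbers:

```python
import numpy as np

aa = np.array([[1., -1., 3.],
               [2., 0., -3.],
               [1., 1., 1.]])

# A real matrix, but eig returns complex eigenvalues and eigenvectors here.
evals, p = np.linalg.eig(aa)
d = np.linalg.inv(p) @ aa @ p

# Still diagonal: one real eigenvalue (about 2.75531) plus the
# conjugate pair -0.377654 +/- 2.22227 I.
assert np.allclose(d, np.diag(evals))
assert np.iscomplexobj(evals)
```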
This matrix has the eigenvalue 2 repeated three times.
aa = {{2, -1, 2, 0}, {0, 3, -1, 0}, {0, 1, 1, 0}, {0, 1, -3, 5}}
( 2  -1   2  0 )
( 0   3  -1  0 )
( 0   1   1  0 )
( 0   1  -3  5 )
R = {{1, 0, 0, 0}, {0, 1, 1, 2/3}, {0, 2, 1, 5/9}, {0, 0, 0, 1}}
( 1  0  0  0   )
( 0  1  1  2/3 )
( 0  2  1  5/9 )
( 0  0  0  1   )
p = Transpose[R]
( 1  0    0    0 )
( 0  1    2    0 )
( 0  1    1    0 )
( 0  2/3  5/9  1 )
pinv = Inverse[p]
( 1   0     0     0 )
( 0  -1     2     0 )
( 0   1    -1     0 )
( 0   1/9  -7/9   1 )
simans = pinv . aa . p
( 2  1  0  0 )
( 0  2  1  0 )
( 0  0  2  0 )
( 0  0  0  5 )
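The same block-diagonal result can be reproduced in a NumPy sketch using the transformation matrix R from above (its fractional entries are 2/3 and 5/9):

```python
import numpy as np

aa = np.array([[2., -1., 2., 0.],
               [0., 3., -1., 0.],
               [0., 1., 1., 0.],
               [0., 1., -3., 5.]])

# Rows of R are the generalized-eigenvector chain for the triple
# eigenvalue 2, plus the eigenvector for the eigenvalue 5.
R = np.array([[1., 0., 0., 0.],
              [0., 1., 1., 2. / 3.],
              [0., 2., 1., 5. / 9.],
              [0., 0., 0., 1.]])
p = R.T

# P^-1 A P gives a Jordan block for the eigenvalue 2, not a pure diagonal.
simans = np.linalg.inv(p) @ aa @ p
jordan = np.array([[2., 1., 0., 0.],
                   [0., 2., 1., 0.],
                   [0., 0., 2., 0.],
                   [0., 0., 0., 5.]])
assert np.allclose(simans, jordan)
```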
We see that the new matrix is much simpler than the original aa, but not exactly diagonal: the repeated eigenvalue 2 produces a Jordan block with ones on the superdiagonal. This is the expected result for a matrix with repeated eigenvalues. However, there is a caveat.
mat = {{2, 2, 1}, {1, 3, 1}, {1, 2, 2}}
( 2  2  1 )
( 1  3  1 )
( 1  2  2 )
Eigenvalues[mat]
{1, 1, 5}
evs = Eigenvectors[mat]
( -1  0  1 )
( -2  1  0 )
(  1  1  1 )
Even though the eigenvalue 1 is repeated, it has two linearly independent eigenvectors, so there was no need to construct a generalized eigenvector. Now try a similarity transform.
p = Transpose[%]
( -1  -2  1 )
(  0   1  1 )
(  1   0  1 )
Inverse[p] . mat . p
( 1  0  0 )
( 0  1  0 )
( 0  0  5 )
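This caveat can be sketched in NumPy: the eigenvalue 1 is repeated, yet a full set of independent eigenvectors exists, so mat still diagonalizes:

```python
import numpy as np

mat = np.array([[2., 2., 1.],
                [1., 3., 1.],
                [1., 2., 2.]])

# The eigenvalue 1 is repeated, but its eigenspace is two-dimensional,
# so the eigenvector matrix is still invertible (full rank).
evals, p = np.linalg.eig(mat)
assert np.linalg.matrix_rank(p) == 3

# Hence the similarity transform produces a genuinely diagonal matrix.
d = np.linalg.inv(p) @ mat @ p
assert np.allclose(d, np.diag(evals))
```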
Finally, consider a matrix with symbolic entries s and b.
aa = {{-s, s, 0}, {1, -1, 0}, {0, 0, -b}}
( -s   s   0  )
(  1  -1   0  )
(  0   0  -b  )
MatrixForm[aa]
( -s   s   0  )
(  1  -1   0  )
(  0   0  -b  )
evals = Eigenvalues[aa]
evecs = Eigenvectors[aa]
(  1  1  0 )
(  0  0  1 )
( -s  1  0 )
After reordering the rows so that they pair with the eigenvalues {0, -1 - s, -b}, the eigenvector matrix is
(  1  1  0 )
( -s  1  0 )
(  0  0  1 )
evecs[[2]].evecs[[1]]
1 - s
p = Transpose[evecs]
( 1  -s  0 )
( 1   1  0 )
( 0   0  1 )
pinv = Inverse[p]
(  1/(s+1)   s/(s+1)  0 )
( -1/(s+1)   1/(s+1)  0 )
(  0         0        1 )
pinv . aa . p
( 0   0                                             0  )
( 0  -(s/(s+1) + 1/(s+1)) s - s/(s+1) - 1/(s+1)     0  )
( 0   0                                            -b  )
Simplify[%]
( 0   0       0  )
( 0  -1 - s   0  )
( 0   0      -b  )
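The symbolic result can be spot-checked numerically; this NumPy sketch uses arbitrary sample values s = 10 and b = 8/3 (my choice, not from the notebook) together with the same eigenvector matrix P:

```python
import numpy as np

# Arbitrary sample parameter values for a numerical spot-check.
s, b = 10.0, 8.0 / 3.0
aa = np.array([[-s, s, 0.],
               [1., -1., 0.],
               [0., 0., -b]])

# Same eigenvector matrix as the notebook:
# columns {1, 1, 0}, {-s, 1, 0}, {0, 0, 1}.
p = np.array([[1., -s, 0.],
              [1., 1., 0.],
              [0., 0., 1.]])
d = np.linalg.inv(p) @ aa @ p

# P^-1 A P = diag(0, -1 - s, -b), matching the simplified symbolic result.
assert np.allclose(d, np.diag([0., -1. - s, -b]))
```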