Example of Hessenberg Reduction

Uploaded by Mohammad Umar Rehman

In addition to solving linear equations, another important task of linear algebra is finding eigenvalues.

Let F be some operator and x a vector. If F does not change the direction of the vector x, then x is an eigenvector of the operator, satisfying the equation

    F(x) = λx,                    (1)

where λ is a real or complex number, the eigenvalue corresponding to the eigenvector. Thus the operator will only change the length of the vector by a factor given by the eigenvalue.

If F is a linear operator, F(ax) = aF(x) = aλx, and hence ax is an eigenvector, too. Eigenvectors are not uniquely determined, since they can be multiplied by any constant.

In the following, only eigenvalues of matrices are discussed. A matrix can be considered an operator mapping a vector to another one. For eigenvectors this mapping is a mere scaling.

Let A be a square matrix. If there is a real or complex number λ and a vector x such that

    Ax = λx,                      (2)

λ is an eigenvalue of the matrix and x an eigenvector. Equation (2) can also be written as

    (A − λI)x = 0.

If the equation is to have a nontrivial solution, we must have

    det(A − λI) = 0.              (3)

When the determinant is expanded, we get the characteristic polynomial, the zeros of which are the eigenvalues.

If the matrix is symmetric and real valued, the eigenvalues are real. Otherwise at least some of them may be complex. Complex eigenvalues always appear as pairs of complex conjugates.

For example, the characteristic polynomial of the matrix

    [ 1  4 ]
    [ 1  1 ]

is

    det [ 1 − λ     4    ]  =  λ² − 2λ − 3.
        [   1     1 − λ  ]

The zeros of this are the eigenvalues λ₁ = 3 and λ₂ = −1.
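As a check, the eigenvalues above can be computed numerically from the characteristic polynomial; a minimal sketch in Python (only for the 2 × 2 case, where the polynomial is quadratic):

```python
import math

# Eigenvalues of the 2x2 example matrix [[1, 4], [1, 1]] from its
# characteristic polynomial lam^2 - tr*lam + det.
a, b, c, d = 1, 4, 1, 1
tr = a + d                    # trace: coefficient of -lam
det = a * d - b * c           # determinant: constant term
disc = math.sqrt(tr * tr - 4 * det)
lam1 = (tr + disc) / 2
lam2 = (tr - disc) / 2
print(lam1, lam2)             # -> 3.0 -1.0
```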

Finding eigenvalues using the characteristic polynomial is very laborious if the matrix is big. Even determining the coefficients of the polynomial is difficult. The method is not suitable for numerical calculations.

To find eigenvalues, the matrix must be transformed to a more suitable form. Gaussian elimination would transform it into a triangular matrix, but unfortunately the eigenvalues are not conserved in that transform.

Another kind of transform, called a similarity transform, will not affect the eigenvalues. Similarity transforms have the form

    A → A′ = S⁻¹AS,

where S is any nonsingular matrix. The matrices A and A′ are said to be similar, and they have the same eigenvalues.
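The invariance can be checked numerically; a minimal 2 × 2 sketch, where the matrix S is an arbitrary nonsingular matrix chosen only for the illustration:

```python
def matmul2(X, Y):
    # Product of two 2x2 matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(S):
    # Inverse of a 2x2 matrix (S must be nonsingular).
    d = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return [[S[1][1] / d, -S[0][1] / d],
            [-S[1][0] / d, S[0][0] / d]]

A = [[1, 4], [1, 1]]
S = [[2, 1], [1, 1]]                    # det S = 1, so nonsingular
Ap = matmul2(inv2(S), matmul2(A, S))    # A' = S^-1 A S
# Trace and determinant fix the 2x2 characteristic polynomial
# lam^2 - tr*lam + det, so equal trace and det means equal eigenvalues.
trace = Ap[0][0] + Ap[1][1]
det = Ap[0][0] * Ap[1][1] - Ap[0][1] * Ap[1][0]
print(trace, det)                       # same as for A: 2 and -3
```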

QR decomposition

A commonly used method for finding eigenvalues is known as the QR method. The method is an iteration that repeatedly computes a decomposition of the matrix known as its QR decomposition. The decomposition is obtained in a finite number of steps, and it has some other uses, too. We'll first see how to compute this decomposition.

The QR decomposition of a matrix A is

    A = QR,

where Q is an orthogonal matrix and R an upper triangular matrix. This decomposition is possible for all matrices.

There are several methods for finding the decomposition:

1) Householder transform
2) Givens rotations
3) Gram–Schmidt orthogonalisation

In the following we discuss the first two methods with examples. They will probably be easier to understand than the formal algorithm.

Householder transform

The idea of the Householder transform is to find a set of transforms that will make all elements in one column below the diagonal vanish.

Assume that we have to decompose the matrix

    A = [ 3  2  1 ]
        [ 1  1  2 ]
        [ 2  1  3 ]

Take the first column of the matrix,

    x₁ = a(:, 1) = (3, 1, 2)ᵀ,

and compute the vector

    u₁ = x₁ − ‖x₁‖ (1, 0, 0)ᵀ = (−0.7416574, 1, 2)ᵀ,

where ‖x₁‖ = √14 = 3.7416574.

This is used to create a Householder transformation matrix

    P₁ = I − 2 u₁u₁ᵀ / ‖u₁‖²

       = [ 0.8017837   0.2672612   0.5345225 ]
         [ 0.2672612   0.6396433  −0.7207135 ]
         [ 0.5345225  −0.7207135  −0.4414270 ]

It can be shown that this is an orthogonal matrix. It is easy to see this by calculating the scalar product of any two columns. The products are zero, and thus the column vectors of the matrix are mutually orthogonal.
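The construction of u₁ and P₁ above can be sketched in plain Python (no libraries; a sketch for this 3 × 3 example only):

```python
import math

# Build the Householder matrix P1 = I - 2*u1*u1^T/||u1||^2 for the
# example matrix and apply it; the first column of P1*A should then
# have zeros below the diagonal.
A = [[3, 2, 1],
     [1, 1, 2],
     [2, 1, 3]]
n = 3
x = [A[i][0] for i in range(n)]             # x1 = a(:, 1)
norm_x = math.sqrt(sum(v * v for v in x))   # ||x1|| = sqrt(14)
u = x[:]
u[0] -= norm_x                              # u1 = x1 - ||x1||*e1
nu2 = sum(v * v for v in u)                 # ||u1||^2
P1 = [[(1.0 if i == j else 0.0) - 2 * u[i] * u[j] / nu2
       for j in range(n)] for i in range(n)]
A1 = [[sum(P1[i][k] * A[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
print(round(A1[0][0], 7))                   # 3.7416574 = ||x1||
print(abs(A1[1][0]) < 1e-12, abs(A1[2][0]) < 1e-12)  # zeros below
```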

When the original matrix is multiplied by this transform, the result is a matrix with zeros in the first column below the diagonal:

    A₁ = P₁A = [ 3.7416574   2.4053512   2.9398737 ]
               [ 0           0.4534522  −0.6155927 ]
               [ 0          −0.0930955  −2.2311854 ]

Next take the part of the second column below the diagonal,

    x₂ = a₁(2:3, 2) = (0.4534522, −0.0930955)ᵀ,

from which

    u₂ = x₂ − ‖x₂‖ (1, 0)ᵀ = (−0.0094578, −0.0930955)ᵀ.

This will give the second transformation matrix

    P₂ = I − 2 u₂u₂ᵀ / ‖u₂‖² = [ 1   0           0         ]
                               [ 0   0.9795688  −0.2011093 ]
                               [ 0  −0.2011093  −0.9795688 ]

The product of A₁ and the transformation matrix will be a matrix with zeros in the second column below the diagonal:

    A₂ = P₂A₁ = [ 3.7416574   2.4053512   2.9398737 ]
                [ 0           0.4629100  −0.1543033 ]
                [ 0           0           2.3094011 ]

Thus the matrix has been transformed to an upper triangular matrix. If the matrix is bigger, the same procedure is repeated for each column until all the elements below the diagonal vanish.

The matrices of the decomposition are now obtained as

    Q = P₁P₂ = [ 0.8017837   0.1543033  −0.5773503 ]
               [ 0.2672612   0.7715167   0.5773503 ]
               [ 0.5345225  −0.6172134   0.5773503 ]

    R = A₂ = P₂P₁A = [ 3.7416574   2.4053512   2.9398737 ]
                     [ 0           0.4629100  −0.1543033 ]
                     [ 0           0           2.3094011 ]

The matrix R is in fact the Aₖ calculated in the last transformation; thus the original matrix A is not needed. If memory must be saved, each of the matrices Aᵢ can be stored in the area of the previous one. Also, there is no need to keep the earlier matrices Pᵢ; instead, P₁ is used as the initial value of Q, and at each step Q is multiplied by the new transformation matrix Pᵢ.

As a check, we can calculate the product of the factors of the decomposition to see that we recover the original matrix:

    QR = [ 3  2  1 ]
         [ 1  1  2 ]
         [ 2  1  3 ]

Orthogonality of the matrix Q can be seen e.g. by calculating the product QQᵀ:

    QQᵀ = [ 1  0  0 ]
          [ 0  1  0 ]
          [ 0  0  1 ]

In general, the factors are accumulated as

    Q = P₁P₂ ⋯ Pₙ,
    R = Pₙ ⋯ P₂P₁A.
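The whole decomposition can be sketched as a loop over the columns; a plain-Python sketch for small dense matrices, accumulating Q and R as in the formulas above:

```python
import math

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def qr_householder(A):
    # QR decomposition by Householder reflections:
    # Q = P1 P2 ... Pn-1 and R = Pn-1 ... P2 P1 A.
    n = len(A)
    R = [[float(v) for v in row] for row in A]
    Q = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        x = [R[i][k] for i in range(k, n)]
        u = x[:]
        u[0] -= math.sqrt(sum(v * v for v in x))   # u = x - ||x||*e1
        nu2 = sum(v * v for v in u)
        if nu2 == 0.0:
            continue                               # column already zeroed
        # P = I - 2 u u^T / ||u||^2, embedded in rows/cols k..n-1
        P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
        for i in range(k, n):
            for j in range(k, n):
                P[i][j] -= 2 * u[i - k] * u[j - k] / nu2
        R = matmul(P, R)
        Q = matmul(Q, P)
    return Q, R

A = [[3, 2, 1], [1, 1, 2], [2, 1, 3]]
Q, R = qr_householder(A)
print(round(R[0][0], 7), round(R[1][1], 7), round(R[2][2], 7))
```

For the 3 × 3 example the loop reproduces the diagonal of R computed above (3.7416574, 0.4629100, 2.3094011) and QR restores A.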

Givens rotations

Another commonly used method is based on Givens rotation matrices:

    P_kl(φ) = [ 1                                ]
              [    ⋱                             ]
              [       cos φ   ⋯    sin φ         ]
              [         ⋮     1      ⋮           ]
              [      −sin φ   ⋯    cos φ         ]
              [                          ⋱       ]
              [                             1    ]

where the cosine and sine elements sit at the intersections of rows and columns k and l, and all other elements equal those of the identity matrix. This is an orthogonal matrix.
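In practice the angle φ itself is never needed: cos φ and sin φ are chosen directly so that the rotation zeroes one target element. A minimal sketch with assumed values a and b:

```python
import math

# Choose cos(phi) and sin(phi) so that the Givens rotation applied to
# the pair (a, b) (the entries in rows k and l) zeroes the second one.
a, b = 3.0, 1.0
r = math.hypot(a, b)            # sqrt(a^2 + b^2)
c, s = a / r, b / r             # cos(phi), sin(phi)
rotated = (c * a + s * b,       # becomes r
           -s * a + c * b)      # becomes 0
print(round(rotated[0], 7), abs(rotated[1]) < 1e-12)
```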

Finding the eigenvalues

Eigenvalues can be found by iterating the QR algorithm, which uses the QR decomposition described above. If we started with the original matrix, the task would be computationally very time consuming. Therefore we begin by transforming the matrix to a more suitable form.

A square matrix is in block triangular form if it is

    [ T₁₁  T₁₂  T₁₃  ⋯  T₁ₙ ]
    [ 0    T₂₂  T₂₃  ⋯  T₂ₙ ]
    [ 0    0    ⋱       ⋮   ]
    [ 0    0    0    ⋯  Tₙₙ ]

where the submatrices Tᵢⱼ are square matrices. It can be shown that the eigenvalues of such a matrix are the eigenvalues of the diagonal blocks Tᵢᵢ.
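The claim can be illustrated numerically; a sketch with an assumed 3 × 3 block triangular matrix, whose diagonal blocks are [[1, 4], [1, 1]] (eigenvalues 3 and −1) and [7]:

```python
def det3(M):
    # Determinant of a 3x3 matrix by cofactor expansion.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Block triangular matrix; the off-diagonal coupling (5, 6) does not
# affect the eigenvalues, which come from the diagonal blocks.
T = [[1, 4, 5],
     [1, 1, 6],
     [0, 0, 7]]
for lam in (3, -1, 7):
    TmL = [[T[i][j] - (lam if i == j else 0) for j in range(3)]
           for i in range(3)]
    print(lam, det3(TmL))       # det(T - lam*I) = 0 at each block eigenvalue
```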

If the matrix is a diagonal or triangular matrix, the eigenvalues are the diagonal elements. If such a form can be found, the problem is solved. Usually such a form cannot be obtained by a finite number of similarity transformations.

If the original matrix is symmetric, it can be transformed to a tridiagonal form without affecting its eigenvalues. In the case of a general matrix the result is a Hessenberg matrix, which has the form

    H = [ x  x  x  x  x ]
        [ x  x  x  x  x ]
        [ 0  x  x  x  x ]
        [ 0  0  x  x  x ]
        [ 0  0  0  x  x ]

The transformation can be carried out with Householder transforms or Givens rotations. The method is now slightly modified so that the elements immediately below the diagonal are not zeroed.
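The modified procedure can be sketched as a general reduction routine; the 4 × 4 test matrix below is an arbitrary symmetric example (an assumption for illustration), so the result should come out tridiagonal with an unchanged trace:

```python
import math

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def hessenberg(A):
    # Householder reduction to Hessenberg form: each reflection zeroes
    # the elements below the subdiagonal of one column and is applied
    # from both sides (P is symmetric and orthogonal), so this is a
    # similarity transform that preserves the eigenvalues.
    n = len(A)
    H = [[float(v) for v in row] for row in A]
    for k in range(n - 2):
        x = [H[i][k] for i in range(k + 1, n)]   # below the subdiagonal
        u = x[:]
        u[0] -= math.sqrt(sum(v * v for v in x))
        nu2 = sum(v * v for v in u)
        if nu2 == 0.0:
            continue
        P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                P[i][j] -= 2 * u[i - k - 1] * u[j - k - 1] / nu2
        H = matmul(P, matmul(H, P))              # P^T = P
    return H

A = [[4, 1, 2, 3], [1, 3, 1, 2], [2, 1, 5, 1], [3, 2, 1, 4]]
H = hessenberg(A)
print(abs(H[2][0]) < 1e-12, abs(H[3][0]) < 1e-12, abs(H[3][1]) < 1e-12)
```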

Transformation using the Householder transforms

As a first example, consider a symmetric matrix

    A = [ 4   3   2   1 ]
        [ 3   4  −1   2 ]
        [ 2  −1   1  −2 ]
        [ 1   2  −2   2 ]

We form the vector x₁ by taking only the elements of the first column that are below the diagonal:

    x₁ = (3, 2, 1)ᵀ,
    u₁ = x₁ − ‖x₁‖ (1, 0, 0)ᵀ = (−0.7416574, 2, 1)ᵀ.

From this,

    p₁ = I − 2 u₁u₁ᵀ / ‖u₁‖²

       = [ 0.8017837   0.5345225   0.2672612 ]
         [ 0.5345225  −0.4414270  −0.7207135 ]
         [ 0.2672612  −0.7207135   0.6396433 ]

which is embedded in an identity matrix to get the full transformation matrix

    P₁ = [ 1   0           0           0         ]
         [ 0   0.8017837   0.5345225   0.2672612 ]
         [ 0   0.5345225  −0.4414270  −0.7207135 ]
         [ 0   0.2672612  −0.7207135   0.6396433 ]

Now we can make the similarity transform of the matrix A. The transformation matrix is symmetric, so there is no need for transposing it:

    A₁ = P₁AP₁ = [ 4          3.7416574  0          0         ]
                 [ 3.7416574  2.4285714  1.2977396  2.1188066 ]
                 [ 0          1.2977396  0.0349563  0.2952113 ]
                 [ 0          2.1188066  0.2952113  4.5364723 ]

The second column is handled in the same way. First we form the vector x₂ from the elements below the subdiagonal:

    x₂ = (1.2977396, 2.1188066)ᵀ,
    u₂ = x₂ − ‖x₂‖ (1, 0)ᵀ = (−1.1869072, 2.1188066)ᵀ,

and

    p₂ = [ 0.5223034   0.8527597 ]
         [ 0.8527597  −0.5223034 ]

    P₂ = [ 1  0  0           0          ]
         [ 0  1  0           0          ]
         [ 0  0  0.5223034   0.8527597  ]
         [ 0  0  0.8527597  −0.5223034  ]

    A₂ = P₂A₁P₂ = [ 4          3.7416574  0           0          ]
                  [ 3.7416574  2.4285714  2.4846467   0          ]
                  [ 0          2.4846467  3.5714286  −1.8708287  ]
                  [ 0          0         −1.8708287   1          ]

The matrix is now tridiagonal.

As another example, we take an asymmetric matrix:

    A = [ 4  2  3  1 ]
        [ 3  4  2  1 ]
        [ 2  1  1  2 ]
        [ 1  2  2  2 ]

Proceeding in the same way, we get a matrix whose last three rows are

    A₂ = [ 3.7416574  1.7857143  2.9123481  1.1216168 ]
         [ 0          2.0934787  4.2422252  1.0307759 ]
         [ 0          0          1.2980371  0.9720605 ]

which is of the Hessenberg form.

QR-algorithm

We now have a simpler tridiagonal or Hessenberg matrix, which still has the same eigenvalues as the original matrix. Let this transformed matrix be H. Then we can begin to search for the eigenvalues. This is done by iteration.

As an initial value, take A₁ = H. Then repeat the following steps:

1) Find the QR decomposition QᵢRᵢ = Aᵢ.
2) Calculate a new matrix Aᵢ₊₁ = RᵢQᵢ.

The matrix Q of the QR decomposition is orthogonal and A = QR, so R = QᵀA. Therefore

    RQ = QᵀAQ

is a similarity transform that will conserve the eigenvalues.

The sequence of matrices Aᵢ converges towards an upper triangular or block triangular matrix, from which the eigenvalues can be picked out.
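The iteration can be sketched for a 2 × 2 matrix, using one Givens rotation per QR step; applied to the earlier example matrix [[1, 4], [1, 1]], the diagonal converges to the eigenvalues 3 and −1:

```python
import math

# One step of the QR iteration: factor A_i = Q_i R_i with a single
# Givens rotation, then form A_{i+1} = R_i Q_i = Q_i^T A_i Q_i, which
# preserves the eigenvalues.
def qr_step(A):
    a, c = A[0][0], A[1][0]
    r = math.hypot(a, c)
    cs, sn = (a / r, c / r) if r else (1.0, 0.0)
    # R = Q^T A, with Q = [[cs, -sn], [sn, cs]]
    R = [[cs * A[0][0] + sn * A[1][0], cs * A[0][1] + sn * A[1][1]],
         [-sn * A[0][0] + cs * A[1][0], -sn * A[0][1] + cs * A[1][1]]]
    # A_next = R Q
    return [[R[0][0] * cs + R[0][1] * sn, -R[0][0] * sn + R[0][1] * cs],
            [R[1][0] * cs + R[1][1] * sn, -R[1][0] * sn + R[1][1] * cs]]

A = [[1.0, 4.0], [1.0, 1.0]]
for _ in range(50):
    A = qr_step(A)
print(round(A[0][0], 6), round(A[1][1], 6))   # -> 3.0 -1.0
```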

In the last example the limiting matrix after 50 iterations has the last three rows

    [ 4.265·10⁻⁸  4.9650342  0.8892339  0.4538061 ]
    [ 0           0          1.9732687  1.3234202 ]
    [ 0           0          0          0.9718955 ]

where the remaining subdiagonal element 4.265·10⁻⁸ is effectively zero. The diagonal elements are now the eigenvalues. If some eigenvalues are complex numbers, the matrix is a block matrix, and those eigenvalues are the eigenvalues of the diagonal 2 × 2 submatrices.
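For a 2 × 2 diagonal block the conjugate pair follows from the quadratic characteristic polynomial; a sketch with an assumed block [[0, 1], [−1, 0]]:

```python
import cmath

# Eigenvalues of a 2x2 block from lam^2 - tr*lam + det; cmath.sqrt
# handles a negative discriminant, producing the conjugate pair.
a, b, c, d = 0.0, 1.0, -1.0, 0.0
tr, det = a + d, a * d - b * c
disc = cmath.sqrt(tr * tr - 4 * det)
lam1 = (tr + disc) / 2
lam2 = (tr - disc) / 2
print(lam1, lam2)             # a complex-conjugate pair: +i and -i
```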
