

Eigenvalue, eigenvector and eigenspace


From Wikipedia, the free encyclopedia

In mathematics, given a linear transformation, an eigenvector of that linear transformation is a nonzero vector which, when that transformation is applied to it, may change in length, but not direction.

For each eigenvector of a linear transformation, there is a corresponding scalar value called an eigenvalue for that vector, which determines the amount the eigenvector is scaled under the linear transformation. For example, an eigenvalue of +2 means that the eigenvector is doubled in length and points in the same direction. An eigenvalue of +1 means that the eigenvector is unchanged, while an eigenvalue of −1 means that the eigenvector is reversed in direction. An eigenspace of a given transformation for a particular eigenvalue is the set (linear span) of the eigenvectors associated to this eigenvalue, together with the zero vector (which has no direction).

Fig. 1. In this shear mapping of the Mona Lisa, the picture was deformed in such a way that its central vertical axis (red vector) has not changed direction, but the diagonal vector (blue) has changed direction. Hence the red vector is an eigenvector of the transformation and the blue vector is not. Since the red vector was neither stretched nor compressed, its eigenvalue is 1. All vectors with the same vertical direction, i.e., parallel to this vector, are also eigenvectors, with the same eigenvalue. Together with the zero vector, they form the eigenspace for this eigenvalue.

In linear algebra, every linear transformation between finite-dimensional vector spaces can be expressed as a matrix, which is a rectangular array of numbers arranged in rows and columns. Standard methods for finding eigenvalues, eigenvectors, and eigenspaces of a given matrix are discussed below.

These concepts play a major role in several branches of both pure and applied mathematics, appearing prominently in linear algebra, functional analysis, and to a lesser extent in nonlinear mathematics.

Many kinds of mathematical objects can be treated as vectors: functions, harmonic modes, quantum states, and
frequencies, for example. In these cases, the concept of direction loses its ordinary meaning, and is given an
abstract definition. Even so, if this abstract direction is unchanged by a given linear transformation, the prefix
"eigen" is used, as in eigenfunction, eigenmode, eigenstate, and eigenfrequency.

Contents
1 History
2 Definitions
2.1 Left and right eigenvectors
3 Characteristic equation
3.1 Example
4 Existence and multiplicity of eigenvalues
4.1 Shear
4.2 Uniform scaling and reflection
4.3 Unequal scaling
4.4 Rotation
4.4.1 Rotation matrices on complex vector spaces
5 Infinite-dimensional spaces and spectral theory
5.1 Eigenfunctions
5.1.1 Waves on a string
6 Eigendecomposition
7 Applications
7.1 Schrödinger equation
7.2 Molecular orbitals
7.3 Geology and glaciology
7.4 Factor analysis
7.5 Vibration analysis
7.6 Eigenfaces
7.7 Tensor of inertia
7.8 Stress tensor
7.9 Eigenvalues of a graph
8 See also
9 Notes
10 References
11 External links

History
Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they
arose in the study of quadratic forms and differential equations.

In the 18th century, Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.
As Lagrange realized, the principal axes are the eigenvectors of the inertia matrix.[1] In the early 19th century,
Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary
dimensions.[2] Cauchy also coined the term racine caractéristique (characteristic root) for what is now called
eigenvalue; his term survives in characteristic equation.[3]

Fourier used the work of Laplace and Lagrange to solve the heat equation by separation of variables in his
famous 1822 book Théorie analytique de la chaleur.[4] Sturm developed Fourier's ideas further and brought
them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that symmetric
matrices have real eigenvalues.[2] This was extended by Hermite in 1855 to what are now called Hermitian
matrices.[3] Around the same time, Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit
circle,[2] and Clebsch found the corresponding result for skew-symmetric matrices.[3] Finally, Weierstrass
clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can
cause instability.[2]

In the meantime, Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of
their work is now called Sturm-Liouville theory.[5] Schwarz studied the first eigenvalue of Laplace's equation on
general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.[6]

At the start of the 20th century, Hilbert studied the eigenvalues of integral operators by viewing the operators as
infinite matrices.[7] He was the first to use the German word eigen to denote eigenvalues and eigenvectors in
1904, though he may have been following a related usage by Helmholtz. "Eigen" can be translated as "own",
"peculiar to", "characteristic", or "individual"—emphasizing how important eigenvalues are to defining the
unique nature of a specific transformation. For some time, the standard term in English was "proper value", but
the more distinctive term "eigenvalue" is standard today.[8]

The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when von Mises
published the power method. One of the most popular methods today, the QR algorithm, was proposed
independently by John G.F. Francis[9][10] and Vera Kublanovskaya[11] in 1961.[12]

Definitions
See also: Eigenplane

Linear transformations of a vector space, such as rotation, reflection, stretching, compression, shear or any
combination of these, may be visualized by the effect they produce on vectors. In other words, they are vector
functions. More formally, in a vector space L, a vector function A is defined if for each vector x of L there
corresponds a unique vector y = A(x) of L. For the sake of brevity, the parentheses around the vector on which
the transformation is acting are often omitted. A vector function A is linear if it has the following two properties:

Additivity: A(x + y) = Ax + Ay
Homogeneity: A(αx) = αAx

where x and y are any two vectors of the vector space L and α is any scalar.[13] Such a function is variously
called a linear transformation, linear operator, or linear endomorphism on the space L.

Given a linear transformation A, a non-zero vector x is defined to be an eigenvector of the transformation if it satisfies the eigenvalue equation Ax = λx for some scalar λ. In this situation, the scalar λ is called an eigenvalue of A corresponding to the eigenvector x.[14]

The key equation in this definition is the eigenvalue equation, Ax = λx. That is to say that the vector x has the
property that its direction is not changed by the transformation A, but that it is only scaled by a factor of λ. Most
vectors x will not satisfy such an equation: a typical vector x changes direction when acted on by A, so that Ax is
not a multiple of x. This means that only certain special vectors x are eigenvectors, and only certain special
numbers λ are eigenvalues. Of course, if A is a multiple of the identity matrix, then no vector changes direction,
and all non-zero vectors are eigenvectors.

The requirement that the eigenvector be non-zero is imposed because the equation A0 = λ0 holds for every A
and every λ. Since the equation is always trivially true, it is not an interesting case. In contrast, an eigenvalue
can be zero in a nontrivial way. Each eigenvector is associated with a specific eigenvalue. One eigenvalue can
be associated with several or even with an infinite number of eigenvectors.

Geometrically (Fig. 2), the eigenvalue equation means that under the transformation A eigenvectors experience only changes in magnitude and sign: the direction of Ax is the same as that of x. The eigenvalue λ is simply the amount of "stretch" or "shrink" to which a vector is subjected when transformed by A. If λ = 1, the vector remains unchanged (unaffected by the transformation). A transformation I under which a vector x remains unchanged, Ix = x, is defined as the identity transformation. If λ = −1, the vector flips to the opposite direction; this is defined as reflection.

Fig. 2. A acts to stretch the vector x, not change its direction, so x is an eigenvector of A.

If x is an eigenvector of the linear transformation A with eigenvalue λ, then any scalar multiple αx is also an eigenvector of A with the same eigenvalue. Similarly, if more than one eigenvector shares the same eigenvalue λ, any linear combination of these eigenvectors will itself be an eigenvector with eigenvalue λ.[15] Together with the zero vector, the eigenvectors of A with the same eigenvalue form a linear subspace of the vector space called an eigenspace.

The eigenvectors corresponding to different eigenvalues are linearly independent,[16] meaning, in particular, that in an n-dimensional space the linear transformation A cannot have more than n eigenvectors with different eigenvalues.[17]

If a basis is defined in the vector space, all vectors can be expressed in terms of components. For finite-dimensional vector spaces of dimension n, linear transformations can be represented by n × n square matrices. Conversely, every such square matrix corresponds to a linear transformation for a given basis. Thus, in a two-dimensional vector space R² fitted with the standard basis, the eigenvector equation for a linear transformation A can be written in the following matrix representation:

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \lambda\begin{bmatrix} x \\ y \end{bmatrix},$$

where the juxtaposition of matrices denotes matrix multiplication.

Left and right eigenvectors

The word eigenvector formally refers to the right eigenvector xR. It is defined by the above eigenvalue
equation AxR = λRxR, and is the most commonly used eigenvector. However, the left eigenvector xL exists as
well, and is defined by xLA = λLxL.
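As a concrete illustration (a minimal sketch, assuming NumPy and SciPy are available; the matrix below is an arbitrary non-symmetric example, not taken from the article), scipy.linalg.eig can return both families of eigenvectors at once:

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])   # arbitrary non-symmetric example

# scipy.linalg.eig can return left (vl) and right (vr) eigenvectors together.
w, vl, vr = eig(A, left=True, right=True)

for i in range(len(w)):
    # Right eigenvectors satisfy A x = λ x ...
    assert np.allclose(A @ vr[:, i], w[i] * vr[:, i])
    # ... and left eigenvectors satisfy xᴴ A = λ xᴴ (row vectors).
    assert np.allclose(vl[:, i].conj() @ A, w[i] * vl[:, i].conj())
```

Equivalently, a left eigenvector of A is an ordinary (right) eigenvector of the transpose of A.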

Characteristic equation
When a transformation is represented by a square matrix A, the eigenvalue equation can be expressed as

$$A\mathbf{x} = \lambda\mathbf{x}.$$

This can be rearranged to

$$(A - \lambda I)\mathbf{x} = \mathbf{0}.$$

If there exists an inverse

$$(A - \lambda I)^{-1},$$

then both sides can be multiplied by the inverse to obtain the trivial solution x = 0. Thus we require there to be no inverse, which by linear algebra means that the determinant equals zero:

$$\det(A - \lambda I) = 0.$$

The determinant requirement is called the characteristic equation (less often, secular equation) of A, and the left-hand side is called the characteristic polynomial. When expanded, this gives a polynomial equation for λ. The eigenvector x or its components are not present in the characteristic equation.

Example

The matrix

$$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$$

defines a linear transformation of the real plane. The eigenvalues of this transformation are given by the characteristic equation

$$\det\begin{bmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{bmatrix} = (2-\lambda)^2 - 1 = 0.$$

The roots of this equation (i.e. the values of λ for which the equation holds) are λ = 1 and λ = 3. Having found the eigenvalues, it is possible to find the eigenvectors. Considering first the eigenvalue λ = 3, we have

$$\begin{bmatrix} 2-3 & 1 \\ 1 & 2-3 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

Both rows of this matrix equation reduce to the single linear equation x = y. To find an eigenvector, we are free to choose any value for x, so by picking x = 1 and setting y = x, we find the eigenvector to be

$$v = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$

We can check this is an eigenvector by checking that Av = (3, 3)ᵀ = 3v. For the eigenvalue λ = 1, a similar process leads to the equation x = −y, and hence the eigenvector is given by

$$v = \begin{bmatrix} 1 \\ -1 \end{bmatrix}.$$
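A quick numerical check of this worked example (a sketch, assuming NumPy is available; numpy.linalg.eig is one standard way to compute eigenpairs):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eig returns the eigenvalues and a matrix whose columns are unit eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # 3 and 1 (the order is not guaranteed)

# The computed eigenvectors are normalized multiples of (1, 1) and (1, -1).
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)   # the eigenvalue equation Av = λv
```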

The difficulty of finding the roots/eigenvalues of the characteristic polynomial increases rapidly with the degree of the polynomial (the dimension of the vector space). There are exact solutions for dimensions below 5, but for dimensions greater than or equal to 5 there are generally no exact solutions and one has to resort to numerical methods to find them approximately. For large symmetric sparse matrices, the Lanczos algorithm is used to compute eigenvalues and eigenvectors.
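For instance, SciPy's sparse eigensolver eigsh (which wraps ARPACK's implicitly restarted Lanczos iteration) can extract a few eigenpairs of a large sparse symmetric matrix without ever forming a dense array. A minimal sketch, assuming SciPy is installed; the tridiagonal test matrix is an illustrative choice:

```python
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# A large sparse symmetric matrix: the 1-D discrete Laplacian (tridiagonal).
n = 10000
A = sp.diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format="csr")

# Compute only the 5 smallest-magnitude eigenvalues and their eigenvectors.
vals, vecs = eigsh(A, k=5, which="SM")
print(vals)
```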

Existence and multiplicity of eigenvalues


For transformations on real vector spaces, the coefficients of the characteristic polynomial are all real. However,
the roots are not necessarily real; they may include complex numbers with a non-zero imaginary component. For
example, a matrix representing a planar rotation of 45 degrees will not leave any non-zero vector pointing in the
same direction. Over a complex vector space, the fundamental theorem of algebra guarantees that the
characteristic polynomial has at least one root, and thus the linear transformation has at least one eigenvalue.

As well as distinct roots, the characteristic equation may also have repeated roots. However, having repeated
roots does not imply there are multiple distinct (i.e. linearly independent) eigenvectors with that eigenvalue. The
algebraic multiplicity of an eigenvalue is defined as the multiplicity of the corresponding root of the
characteristic polynomial. The geometric multiplicity of an eigenvalue is defined as the dimension of the
associated eigenspace, i.e. number of linearly independent eigenvectors with that eigenvalue.

Over a complex space, the sum of the algebraic multiplicities will equal the dimension of the vector space, but the sum of the geometric multiplicities may be smaller. It is therefore possible that there are not enough eigenvectors to span the entire space. This is intimately related to the question of whether a given matrix may be diagonalized by a suitable choice of coordinates.

Shear

Shear in the plane is a transformation in which all points along a given line remain fixed while other points are shifted parallel to that line by a distance proportional to their perpendicular distance from the line.[18] Shearing a plane figure does not change its area. Shear can be horizontal (along the x axis) or vertical (along the y axis). In horizontal shear (see figure), a point P of the plane moves parallel to the x axis to the place P' so that its y coordinate does not change while its x coordinate increments to become x' = x + k y, where k is called the shear factor.

Fig.: Horizontal shear. The shear angle φ is given by k = cot φ.

The matrix of a horizontal shear transformation is

$$A = \begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}.$$

The characteristic equation is λ² − 2λ + 1 = (1 − λ)² = 0, which has a single, repeated root λ = 1. Therefore, the eigenvalue λ = 1 has algebraic multiplicity 2. The eigenvector(s) are found as solutions of

$$\begin{bmatrix} 0 & k \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

The last equation is equivalent to y = 0, which is a straight line along the x axis. This line represents the one-dimensional eigenspace. In the case of shear the algebraic multiplicity of the eigenvalue (2) is greater than its geometric multiplicity (1, the dimension of the eigenspace). The eigenvector is a vector along the x axis. The case of vertical shear, with transformation matrix

$$A = \begin{bmatrix} 1 & 0 \\ k & 1 \end{bmatrix},$$

is dealt with in a similar way; the eigenvector in vertical shear is along the y axis. Repeatedly applying the shear transformation turns the direction of any vector in the plane closer and closer to the direction of the eigenvector.
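The gap between the two multiplicities can be verified numerically: the characteristic polynomial has a double root at 1, while the null space of A − I is only one-dimensional. A minimal sketch, assuming NumPy and SciPy are available and taking k = 2 for illustration:

```python
import numpy as np
from scipy.linalg import null_space

k = 2.0
A = np.array([[1.0, k],
              [0.0, 1.0]])

# Characteristic polynomial λ² − 2λ + 1 has the double root λ = 1:
print(np.roots([1.0, -2.0, 1.0]))     # [1. 1.] → algebraic multiplicity 2

# The eigenspace is the null space of (A − 1·I); it is one-dimensional:
basis = null_space(A - np.eye(2))
print(basis.shape[1])                  # 1 → geometric multiplicity 1
print(basis.ravel())                   # a multiple of (1, 0): the x axis
```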

Uniform scaling and reflection

As a one-dimensional vector space, consider a rubber string tied to an unmoving support at one end, such as that on a child's sling. Pulling the string away from the point of attachment stretches it and elongates it by some scaling factor λ, which is a real number. Each vector on the string is stretched equally, with the same scaling factor λ, and, although elongated, it preserves its original direction. For a two-dimensional vector space, consider a rubber sheet stretched equally in all directions, such as a small area of the surface of an inflating balloon (Fig. 3). All vectors originating at the fixed point on the balloon surface (the origin) are stretched equally with the same scaling factor λ. This transformation in two dimensions is described by the 2×2 square matrix:

$$A = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix}.$$

Fig. 3. When a surface is stretched equally in all directions (a homothety), any one of the radial vectors can be the eigenvector.

Expressed in words, the transformation is equivalent to multiplying the length of any vector by λ while preserving its original direction. Since the vector taken was arbitrary, every non-zero vector in the vector space is an eigenvector. Whether the transformation is stretching (elongation, extension, inflation) or shrinking (compression, deflation) depends on the scaling factor: if λ > 1, it is stretching; if λ < 1, it is shrinking. Negative values of λ correspond to a reversal of direction, followed by a stretch or a shrink, depending on the absolute value of λ.

Unequal scaling

For a slightly more complicated example, consider a sheet that is stretched unequally in two perpendicular directions along the coordinate axes, or, similarly, stretched in one direction and shrunk in the other. In this case, there are two different scaling factors: k₁ for the scaling in direction x, and k₂ for the scaling in direction y. The transformation matrix is

$$A = \begin{bmatrix} k_1 & 0 \\ 0 & k_2 \end{bmatrix},$$

and the characteristic equation is (k₁ − λ)(k₂ − λ) = 0. The eigenvalues, obtained as roots of this equation, are λ₁ = k₁ and λ₂ = k₂, which means, as expected, that the two eigenvalues are the scaling factors in the two directions. Plugging k₁ back into the eigenvalue equation gives one of the eigenvectors:

$$\begin{bmatrix} 0 & 0 \\ 0 & k_2 - k_1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$

or, more simply, y = 0.

Fig.: Vertical shrink (k₂ < 1) and horizontal stretch (k₁ > 1) of a unit square. Eigenvectors are u₁ and u₂ and eigenvalues are λ₁ = k₁ and λ₂ = k₂. This transformation orients all vectors towards the principal eigenvector u₁.

Thus, the eigenspace is the x-axis. Similarly, substituting λ = k₂ shows that the corresponding eigenspace is the y-axis. In this case, both eigenvalues have algebraic and geometric multiplicities equal to 1. If a given eigenvalue is greater than 1, the vectors are stretched in the direction of the corresponding eigenvector; if less than 1, they are shrunken in that direction. Negative eigenvalues correspond to reflections followed by a stretch or shrink. In general, matrices that are diagonalizable over the real numbers represent scalings and reflections: the eigenvalues represent the scaling factors (and appear as the diagonal terms), and the eigenvectors are the directions of the scalings.

The figure shows the case where k₁ > 1 and 1 > k₂ > 0. The rubber sheet is stretched along the x axis and simultaneously shrunk along the y axis. After this transformation is applied many times, almost any vector on the surface of the rubber sheet will be oriented closer and closer to the direction of the x axis (the direction of stretching). The exceptions are vectors along the y-axis, which will gradually shrink away to nothing.
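This convergence of almost every vector toward the dominant eigenvector is easy to observe numerically, and it is the idea behind the power method. A sketch, assuming NumPy and the illustrative values k₁ = 1.5 and k₂ = 0.5:

```python
import numpy as np

A = np.diag([1.5, 0.5])      # stretch along x, shrink along y
v = np.array([1.0, 1.0])     # a generic starting vector

for step in range(10):
    v = A @ v
    v = v / np.linalg.norm(v)  # renormalize so only the direction matters

print(v)  # approaches (1, 0), the direction of the principal eigenvector
```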

Rotation

For more details on this topic, see Rotation matrix.

A rotation in a plane is a transformation that describes the motion of a vector, plane, coordinates, etc., around a fixed point. Clearly, for rotations other than through 0° and 180°, every vector in the real plane will have its
direction changed, and thus there cannot be any eigenvectors. But this is not necessarily true if we consider the
same matrix over a complex vector space.

A counterclockwise rotation in the horizontal plane about the origin through an angle φ is represented by the matrix

$$R = \begin{bmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{bmatrix}.$$

The characteristic equation of R is λ² − 2λ cos φ + 1 = 0. This quadratic equation has discriminant D = 4(cos²φ − 1) = −4 sin²φ, which is negative whenever φ is not a multiple of 180°. A rotation of 0°, 360°, … is just the identity transformation (a uniform scaling by +1), while a rotation of 180°, 540°, … is a reflection (uniform scaling by −1). Otherwise, as expected, there are no real eigenvalues or eigenvectors for rotation in the plane.

Rotation matrices on complex vector spaces

The characteristic equation has two complex roots λ₁ and λ₂. If we choose to think of the rotation matrix as a linear operator on the complex two-dimensional space, we can consider these complex eigenvalues. The roots are complex conjugates of each other: λ₁,₂ = cos φ ± i sin φ = e^(±iφ), each with an algebraic multiplicity equal to 1, where i is the imaginary unit.

The first eigenvector is found by substituting the first eigenvalue, λ₁, back into the eigenvalue equation:

$$\begin{bmatrix} -i\sin\varphi & -\sin\varphi \\ \sin\varphi & -i\sin\varphi \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

The last equation is equivalent to the single equation x = iy, and again we are free to set x = 1 to give the eigenvector

$$v_1 = \begin{bmatrix} 1 \\ -i \end{bmatrix}.$$

Similarly, substituting in the second eigenvalue gives the single equation x = −iy, and so the eigenvector is given by

$$v_2 = \begin{bmatrix} 1 \\ i \end{bmatrix}.$$
Although not diagonalizable over the reals, the rotation matrix is diagonalizable over the complex numbers, and
again the eigenvalues appear on the diagonal. Thus rotation matrices acting on complex spaces can be thought
of as scaling matrices, with complex scaling factors.
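NumPy carries this computation out over the complex numbers automatically. A sketch with the illustrative angle φ = 45°:

```python
import numpy as np

phi = np.pi / 4
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

w, v = np.linalg.eig(R)   # complex eigenpairs of a real matrix
print(w)                  # approximately exp(±iφ) = cos φ ± i sin φ
assert np.allclose(np.sort_complex(w),
                   np.sort_complex([np.exp(1j * phi), np.exp(-1j * phi)]))
```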

Infinite-dimensional spaces and spectral theory


For more details on this topic, see Spectral theorem.

If the vector space is an infinite-dimensional Banach space, the notion of eigenvalues can be generalized to the concept of spectrum. The spectrum is the set of scalars λ for which (T − λ)⁻¹ is not defined; that is, such that T − λ has no bounded inverse.

Clearly if λ is an eigenvalue of T, λ is in the spectrum of T. In general, the converse is not true. There are operators on Hilbert or Banach spaces which have no eigenvectors at all. This can be seen in the following example. The bilateral shift on the Hilbert space ℓ²(Z) (that is, the space of all two-sided sequences of scalars …, a₋₁, a₀, a₁, a₂, … such that

$$\sum_{n=-\infty}^{+\infty} |a_n|^2$$

converges) has no eigenvalue but does have spectral values.

In infinite-dimensional spaces, the spectrum of a bounded operator is always nonempty. This is also true for an unbounded self-adjoint operator. Via its spectral measures, the spectrum of any self-adjoint operator, bounded or otherwise, can be decomposed into absolutely continuous, pure point, and singular parts. (See Decomposition of spectrum.)

The hydrogen atom is an example where both types of spectra appear. The eigenfunctions of the hydrogen atom Hamiltonian are called eigenstates and are grouped into two categories. The bound states of the hydrogen atom correspond to the discrete part of the spectrum (they have a discrete set of eigenvalues which can be computed by the Rydberg formula) while the ionization processes are described by the continuous part (the energy of the collision/ionization is not quantized).

Eigenfunctions

A common example of such maps on infinite-dimensional spaces is the action of differential operators on function spaces. As an example, on the space of infinitely differentiable functions, the process of differentiation defines a linear operator since

$$\frac{d}{dx}\bigl(af + bg\bigr) = a\frac{df}{dx} + b\frac{dg}{dx},$$

where f and g are differentiable functions, and a and b are constants.


The eigenvalue equation for linear differential operators is then a set of one or more differential equations. The eigenvectors are commonly called eigenfunctions. The simplest case is the eigenvalue equation for differentiation of a real-valued function of a single real variable. In this case, the eigenvalue equation becomes the linear differential equation

$$\frac{df(x)}{dx} = \lambda f(x).$$

Here λ is the eigenvalue associated with the function f(x). This eigenvalue equation has a solution for all values of λ. If λ is zero, the solution is the constant function f(x) = A, where A is any constant; if λ is non-zero, the solution is the exponential function

$$f(x) = Ae^{\lambda x}.$$

If we expand our horizons to complex-valued functions, the value of λ can be any complex number. The spectrum of d/dx is therefore the whole complex plane. This is an example of a continuous spectrum.
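A small numerical check of this statement (a sketch, assuming NumPy; the step size h and the values λ = 0.7 and A = 2 are arbitrary illustrative choices):

```python
import numpy as np

lam, A, h = 0.7, 2.0, 1e-6
f = lambda x: A * np.exp(lam * x)

x = np.linspace(0.0, 1.0, 11)
derivative = (f(x + h) - f(x - h)) / (2 * h)   # central finite difference

# df/dx ≈ λ f(x): f is an eigenfunction of d/dx with eigenvalue λ.
assert np.allclose(derivative, lam * f(x), rtol=1e-6)
```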

Waves on a string

The displacement, h(x,t), of a stretched rope fixed at both ends, like the vibrating strings of a string instrument, satisfies the wave equation

$$\frac{\partial^2 h}{\partial t^2} = c^2\frac{\partial^2 h}{\partial x^2},$$

which is a linear partial differential equation, where c is the constant wave speed. The normal method of solving such an equation is separation of variables. If we assume that h can be written as a product of the form X(x)T(t), we can form a pair of ordinary differential equations:

$$X'' = -\frac{\omega^2}{c^2}X \qquad\text{and}\qquad T'' = -\omega^2 T.$$

Each of these is an eigenvalue equation (the unfamiliar form of the eigenvalue is chosen merely for convenience). For any values of the eigenvalues, the eigenfunctions are given by

$$X = \sin\!\left(\frac{\omega x}{c} + \varphi\right) \qquad\text{and}\qquad T = \sin(\omega t + \psi).$$

Fig.: The shape of a standing wave in a string fixed at its boundaries is an example of an eigenfunction of a differential operator. The admissible eigenvalues are governed by the length of the string and determine the frequency of oscillation.

If we impose boundary conditions (for example, that the ends of the string are fixed with X(x) = 0 at x = 0 and x = L), we can constrain the eigenvalues. For those boundary conditions, we find

$$\sin(\varphi) = 0, \text{ and so the phase angle } \varphi = 0$$

and

$$\sin\!\left(\frac{\omega L}{c}\right) = 0.$$

Thus, the constant ω is constrained to take one of the values ωₙ = nπc/L, where n is any integer. Thus the clamped string supports a family of standing waves of the form

$$h(x,t) = \sin(n\pi x/L)\sin(\omega_n t).$$

From the point of view of our musical instrument, the frequency ωₙ is the frequency of the n-th harmonic overtone.
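These eigenvalues can also be recovered numerically by discretizing the second-derivative operator on [0, L] with fixed ends; the matrix eigenvalues then approximate (nπ/L)². A sketch, assuming NumPy and the illustrative values L = 1 and N = 1000 grid points:

```python
import numpy as np

L, N = 1.0, 1000
dx = L / (N + 1)

# Finite-difference matrix for -d²/dx² with X(0) = X(L) = 0.
D2 = (np.diag(2.0 * np.ones(N)) -
      np.diag(np.ones(N - 1), 1) -
      np.diag(np.ones(N - 1), -1)) / dx**2

vals = np.sort(np.linalg.eigvalsh(D2))      # symmetric matrix → eigvalsh
exact = (np.arange(1, 6) * np.pi / L) ** 2  # (nπ/L)² for n = 1..5
print(vals[:5] / exact)                     # ratios very close to 1
```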

Eigendecomposition
The spectral theorem for matrices can be stated as follows. Let A be a square n × n matrix. Let q₁ … qₖ be an eigenvector basis, i.e. an indexed set of k linearly independent eigenvectors, where k is the dimension of the space spanned by the eigenvectors of A. If k = n, then A can be written

$$A = Q\Lambda Q^{-1}$$

where Q is the square n × n matrix whose i-th column is the basis eigenvector qᵢ of A, and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e. Λᵢᵢ = λᵢ.
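A numerical sketch of this factorization (assuming NumPy, and reusing the 2×2 matrix from the example above):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, Q = np.linalg.eig(A)   # columns of Q are the eigenvectors
Lam = np.diag(eigenvalues)          # Λ: eigenvalues on the diagonal

# Reconstruct A = Q Λ Q⁻¹ from its eigendecomposition.
assert np.allclose(A, Q @ Lam @ np.linalg.inv(Q))
```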

Applications
Schrödinger equation


An example of an eigenvalue equation where the transformation T is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:

$$H\psi_E = E\psi_E$$

where H, the Hamiltonian, is a second-order differential operator and ψ_E, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue E, interpreted as its energy.

However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for ψ_E within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which ψ_E and H can be represented as a one-dimensional array and a matrix, respectively. This allows one to represent the Schrödinger equation in a matrix form. (Fig. 8 presents the lowest eigenfunctions of the hydrogen atom Hamiltonian.)

Fig. 8. The wavefunctions associated with the bound states of an electron in a hydrogen atom can be seen as the eigenvectors of the hydrogen atom Hamiltonian as well as of the angular momentum operator. They are associated with eigenvalues interpreted as their energies (increasing downward: n = 1, 2, 3, …) and angular momentum (increasing across: s, p, d, …). The illustration shows the square of the absolute value of the wavefunctions. Brighter areas correspond to higher probability density for a position measurement. The center of each figure is the atomic nucleus, a proton.

The Dirac notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by $|\Psi_E\rangle$. In this notation, the Schrödinger equation is:

$$H|\Psi_E\rangle = E|\Psi_E\rangle$$

where $|\Psi_E\rangle$ is an eigenstate of H. H is a self-adjoint operator, the infinite-dimensional analog of Hermitian matrices (see Observable). As in the matrix case, $H|\Psi_E\rangle$ in the equation above is understood to be the vector obtained by application of the transformation H to $|\Psi_E\rangle$.
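The matrix form described above can be made concrete with a finite-difference discretization. The sketch below is an illustration of that idea, not the article's method: it builds the Hamiltonian H = −d²/dx² + x² (a harmonic oscillator in units where ħ = 1 and m = 1/2, so ω = 2 and the exact bound-state energies are Eₙ = 2n + 1), assuming NumPy:

```python
import numpy as np

# Hamiltonian H = -d²/dx² + x² on a grid; exact eigenvalues are 1, 3, 5, ...
N = 800
x = np.linspace(-8.0, 8.0, N)
dx = x[1] - x[0]

kinetic = (np.diag(2.0 * np.ones(N)) -
           np.diag(np.ones(N - 1), 1) -
           np.diag(np.ones(N - 1), -1)) / dx**2
H = kinetic + np.diag(x**2)           # kinetic + potential energy

energies = np.linalg.eigvalsh(H)[:4]  # lowest eigenvalues of the matrix H
print(energies)                       # ≈ [1, 3, 5, 7]: a discrete spectrum
```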

Molecular orbitals

In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree-Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. If one wants to underline this aspect, one speaks of a nonlinear eigenvalue problem. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree-Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations.

Geology and glaciology

In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a
mass of information of a clast fabric's constituents' orientation and dip can be summarized in a 3-D space by six
numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample,
which can only be compared graphically such as in a Tri-Plot (Sneed and Folk) diagram [19], [20], or as a
Stereonet on a Wulff Net [21]. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. Eigenvectors output from programs such as Stereo32 [22] are in the order E1 ≥ E2 ≥ E3, with E1
being the primary orientation of clast orientation/dip, E2 being the secondary and E3 being the tertiary, in terms
of strength. The clast orientation is defined as the eigenvector, on a compass rose of 360°. Dip is measured as
the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of
E1, E2, and E3 are dictated by the nature of the sediment's fabric. If E1 = E2 = E3, the fabric is said to be
isotropic. If E1 = E2 > E3 the fabric is planar. If E1 > E2 > E3 the fabric is linear. See 'A Practical Guide to the
Study of Glacial Sediments' by Benn & Evans, 2004 [23].
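A rough sketch of the underlying computation (assuming NumPy; the clast direction data are fabricated, and the normalized sum-of-outer-products tensor shown is one standard form of the orientation tensor, whose eigenvalues give E1 ≥ E2 ≥ E3):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated unit vectors for clast long-axis directions (n × 3),
# clustered loosely around the x direction → a roughly linear fabric.
v = rng.normal([1.0, 0.0, 0.0], 0.3, size=(500, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)

T = v.T @ v / len(v)                    # orientation tensor (symmetric 3×3)
E = np.sort(np.linalg.eigvalsh(T))[::-1]
print(E)        # E1 ≥ E2 ≥ E3 (sum = 1); E1 >> E2 ≈ E3 indicates linear fabric
```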

Factor analysis

In factor analysis, the eigenvectors of a covariance matrix or correlation matrix correspond to factors, and
eigenvalues to the variance explained by these factors. Factor analysis is a statistical technique used in the social
sciences and in marketing, product management, operations research, and other applied sciences that deal with
large quantities of data. The objective is to explain most of the covariability among a number of observable
random variables in terms of a smaller number of unobservable latent variables called factors. The observable
random variables are modeled as linear combinations of the factors, plus unique variance terms. Eigenvalues are used in the analysis performed by Q-methodology software; factors with eigenvalues greater than 1.00 are considered significant, explaining an important amount of the variability in the data, while eigenvalues less than 1.00 are considered too weak, not explaining a significant portion of the data variability.
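A sketch of this screening rule (assuming NumPy; the data matrix is fabricated): compute the eigenvalues of the correlation matrix and keep those above 1.00.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))        # fabricated data: 200 cases, 6 variables
X[:, 1] = X[:, 0] + 0.3 * rng.normal(size=200)   # make two variables correlate

R = np.corrcoef(X, rowvar=False)     # 6 × 6 correlation matrix
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
retained = eigenvalues[eigenvalues > 1.0]
print(eigenvalues, len(retained))    # factors with eigenvalue > 1.00 are kept
```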

Vibration analysis

Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are used to determine the natural frequencies of vibration, and the eigenvectors determine the shapes of these vibrational modes. The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis.

Figure: The mode shapes of a cantilevered I-beam (1st and 2nd lateral bending, 1st and 2nd torsional, and 1st and 2nd vertical bending).
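In matrix form, the undamped free-vibration problem is the generalized eigenproblem Kφ = ω²Mφ, with stiffness matrix K and mass matrix M. A minimal sketch (assuming SciPy; the 3-DOF spring-mass chain is an illustrative model, not a beam):

```python
import numpy as np
from scipy.linalg import eigh

# 3-DOF chain of unit masses and unit springs, fixed at one end.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])   # stiffness matrix
M = np.eye(3)                        # mass matrix

# Generalized symmetric eigenproblem K φ = ω² M φ.
w2, modes = eigh(K, M)
print(np.sqrt(w2))   # natural frequencies ω
print(modes)         # columns: the corresponding mode shapes
```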


Eigenfaces

In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel.[24] The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated to a large set of normalized pictures of faces are called eigenfaces; this is an example of principal components analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research has also been conducted on eigen vision systems for determining hand gestures.

Fig. 9. Eigenfaces as examples of eigenvectors.

Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation. A sketch of the eigenface computation appears below.
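The sketch below (assuming NumPy; the "images" are fabricated random vectors, and a real implementation would typically use the SVD or the small-covariance trick on mean-centered photographs) illustrates computing eigenfaces and expressing a face as a linear combination of them:

```python
import numpy as np

rng = np.random.default_rng(2)
faces = rng.random((20, 64 * 64))          # 20 fabricated 64×64 "images"

mean_face = faces.mean(axis=0)
centered = faces - mean_face               # normalize by removing the mean

# Eigenfaces: eigenvectors of the pixel covariance, computed via the SVD
# of the centered data matrix (equivalent and numerically preferable).
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt                            # rows: principal directions

# Any training face equals the mean plus a combination of eigenfaces.
weights = centered[0] @ eigenfaces.T
reconstruction = mean_face + weights @ eigenfaces
assert np.allclose(reconstruction, faces[0])
```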

Tensor of inertia

In mechanics, the eigenvectors of the inertia tensor define the principal axes of a rigid body. The tensor of
inertia is a key quantity required in order to determine the rotation of a rigid body around its center of mass.

Stress tensor

In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the
eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress
tensor has no shear components; the components it does have are the principal components.
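A sketch of this decomposition (assuming NumPy; the stress values are made-up illustrative numbers):

```python
import numpy as np

# A symmetric Cauchy stress tensor (units of MPa, fabricated values).
sigma = np.array([[ 50.0,  30.0,  20.0],
                  [ 30.0, -20.0, -10.0],
                  [ 20.0, -10.0,  10.0]])

principal_stresses, directions = np.linalg.eigh(sigma)

# In the eigenvector basis the tensor is diagonal: no shear components.
D = directions.T @ sigma @ directions
assert np.allclose(D, np.diag(principal_stresses))
```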

Eigenvalues of a graph

In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix, which is either T − A or I − T^(−1/2) A T^(−1/2), where T is a diagonal matrix holding the degree of each vertex, and in T^(−1/2), 0 is substituted for 0^(−1/2). The kth principal eigenvector of a graph is defined as either the eigenvector corresponding to the kth largest eigenvalue of A, or the eigenvector corresponding to the kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.

The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank
algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the
page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain
represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to
ensure a stationary distribution exists. The second principal eigenvector can be used to partition the graph into
clusters, via spectral clustering. Other methods are also available for clustering.
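A sketch of the power-iteration idea behind such rankings (assuming NumPy; the 4-page link graph and the damping factor 0.85 are illustrative, not Google's actual data or implementation):

```python
import numpy as np

# Column-stochastic link matrix of a tiny 4-page web: L[i, j] is the
# probability of following a link from page j to page i.
L = np.array([[0.0, 0.5, 0.0, 0.0],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.5, 1.0, 0.0]])

d = 0.85                                   # damping: the modification that
G = d * L + (1 - d) / 4 * np.ones((4, 4))  # guarantees a stationary distribution

r = np.full(4, 0.25)
for _ in range(100):    # power iteration converges to the principal
    r = G @ r           # eigenvector of G (eigenvalue 1)
print(r)                # PageRank-style scores; components sum to 1
```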

See also


Nonlinear eigenproblem
Quadratic eigenvalue problem
Eigenspectrum

Notes
1. ^ See Hawkins 1975, §2
2. ^ a b c d See Hawkins 1975, §3
3. ^ a b c See Kline 1972, pp. 807-808
4. ^ See Kline 1972, p. 673
5. ^ See Kline 1972, pp. 715-716
6. ^ See Kline 1972, pp. 706-707
7. ^ See Kline 1972, p. 1063
8. ^ See Aldrich 2006
9. ^ J.G.F. Francis, "The QR Transformation, I" (part 1), The Computer Journal, vol. 4, no. 3, pages 265-271 (1961); "The QR Transformation, II" (part 2), The Computer Journal, vol. 4, no. 4, pages 332-345 (1962).
10. ^ John G.F. Francis (1934 - ) devised the "QR transformation" for computing the eigenvalues of matrices. Born in London in 1934, he presently (2007) resides in Hove, England (near Brighton). In 1954 he worked for the National Research Development Corporation (NRDC). In 1955-1956 he attended Cambridge University. He then returned to the NRDC, where he served as assistant to Christopher Strachey. At this time he devised the QR transformation. In 1961 he left the NRDC to work at Ferranti Corporation, Ltd. and then at the University of Sussex. Subsequently, he had positions with various industrial organizations and consultancies. His interests encompassed artificial intelligence, computer languages, and systems engineering. He is currently retired. (See: http://www-sbras.nsc.ru/mathpub/na-net/db/showfile.phtml?v07n34.html#1 .)
11. ^ Vera N. Kublanovskaya, "On some algorithms for the solution of the complete eigenvalue problem", USSR Computational Mathematics and Mathematical Physics, vol. 3, pages 637–657 (1961). Also published in: Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, vol. 1, no. 4, pages 555–570 (1961).
12. ^ See Golub & van Loan 1996, §7.3; Meyer 2000, §7.3
13. ^ See Beezer 2006, Definition LT on p. 507; Strang 2006, p. 117; Kuttler 2007, Definition 5.3.1 on p. 71; Shilov 1977, Section 4.21 on p. 77; Rowland, Todd and Weisstein, Eric W. Linear transformation (http://mathworld.wolfram.com/LinearTransformation.html) From MathWorld − A Wolfram Web Resource
14. ^ See Korn & Korn 2000, Section 14.3.5a; Friedberg, Insel & Spence 1989, p. 217
15. ^ For a proof of this lemma, see Shilov 1977, p. 109, and Lemma for the eigenspace
16. ^ For a proof of this lemma, see Roman 2008, Theorem 8.2 on p. 186; Shilov 1977, p. 109; Hefferon 2001, p. 364; Beezer 2006, Theorem EDELI on p. 469; and Lemma for linear independence of eigenvectors
17. ^ See Shilov 1977, p. 109
18. ^ Definition according to Weisstein, Eric W. Shear (http://mathworld.wolfram.com/Shear.html) From MathWorld − A Wolfram Web Resource
19. ^ Graham, D., and Midgley, N., 2000. Earth Surface Processes and Landforms (25) pp 1473-1477
20. ^ Sneed ED, Folk RL. 1958. Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis. Journal of Geology 66(2): 114–150
21. ^ GIS-stereoplot: an interactive stereonet plotting module for ArcView 3.0 geographic information system (http://dx.doi.org/10.1016/S0098-3004(97)00122-2)
22. ^ Stereo32 (http://www.ruhr-uni-bochum.de/hardrock/downloads.htm)
23. ^ Benn, D., Evans, D., 2004. A Practical Guide to the Study of Glacial Sediments. London: Arnold. pp 103-107
24. ^ Xirouhakis, A.; Votsis, G.; Delopoulus, A. (2004) (PDF), Estimation of 3D motion and structure of human faces, Online paper in PDF format, National Technical University of Athens, http://www.image.ece.ntua.gr/papers/43.pdf

References
Korn, Granino A.; Korn, Theresa M. (2000), Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review, 1152 p., Dover Publications, 2 Revised edition, ISBN 0-486-41147-8.
Lipschutz, Seymour (1991), Schaum's outline of theory and problems of linear algebra, Schaum's outline series (2nd ed.), New York, NY: McGraw-Hill Companies, ISBN 0-07-038007-4.
Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (1989), Linear algebra (2nd ed.), Englewood Cliffs, NJ 07632: Prentice Hall, ISBN 0-13-537102-3.
Aldrich, John (2006), "Eigenvalue, eigenfunction, eigenvector, and related terms", in Jeff Miller (Editor), Earliest Known Uses of Some of the Words of Mathematics, http://jeff560.tripod.com/e.html, retrieved on 22 August 2006.
Strang, Gilbert (1993), Introduction to linear algebra, Wellesley-Cambridge Press, Wellesley, MA, ISBN 0-961-40885-5.
Strang, Gilbert (2006), Linear algebra and its applications, Thomson, Brooks/Cole, Belmont, CA, ISBN 0-030-10567-6.
Bowen, Ray M.; Wang, Chao-Cheng (1980), Linear and multilinear algebra, Plenum Press, New York, NY, ISBN 0-306-37508-7.
Cohen-Tannoudji, Claude (1977), "Chapter II. The mathematical tools of quantum mechanics", Quantum mechanics, John Wiley & Sons, ISBN 0-471-16432-1.
Fraleigh, John B.; Beauregard, Raymond A. (1995), Linear algebra (3rd ed.), Addison-Wesley Publishing Company, ISBN 0-201-83999-7 (international edition).
Golub, Gene H.; van Loan, Charles F. (1996), Matrix computations (3rd ed.), Johns Hopkins University Press, Baltimore, MD, ISBN 978-0-8018-5414-9.
Hawkins, T. (1975), "Cauchy and the spectral theory of matrices", Historia Mathematica 2: 1–29, doi:10.1016/0315-0860(75)90032-4.
Horn, Roger A.; Johnson, Charles F. (1985), Matrix analysis, Cambridge University Press, ISBN 0-521-30586-1 (hardback), ISBN 0-521-38632-2 (paperback).
Kline, Morris (1972), Mathematical thought from ancient to modern times, Oxford University Press, ISBN 0-195-01496-0.
Meyer, Carl D. (2000), Matrix analysis and applied linear algebra, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, ISBN 978-0-89871-454-8.
Brown, Maureen (October 2004), Illuminating Patterns of Perception: An Overview of Q Methodology.
Golub, Gene F.; van der Vorst, Henk A. (2000), "Eigenvalue computation in the 20th century", Journal of Computational and Applied Mathematics 123: 35–65, doi:10.1016/S0377-0427(00)00413-1.
Akivis, Max A.; Goldberg, Vladislav V. (1969), Tensor calculus (in Russian), Science Publishers, Moscow.
Gelfand, I. M. (1971), Lecture notes in linear algebra (in Russian), Science Publishers, Moscow.
Alexandrov, Pavel S. (1968), Lecture notes in analytical geometry (in Russian), Science Publishers, Moscow.
Carter, Tamara A.; Tapia, Richard A.; Papaconstantinou, Anne, Linear Algebra: An Introduction to Linear Algebra for Pre-Calculus Students, Rice University, Online Edition, http://ceee.rice.edu/Books/LA/index.html, retrieved on 19 February 2008.
Roman, Steven (2008), Advanced linear algebra (3rd ed.), New York, NY: Springer Science + Business Media, LLC, ISBN 978-0-387-72828-5.
Shilov, Georgi E. (1977), Linear algebra (translated and edited by Richard A. Silverman), New York: Dover Publications, ISBN 0-486-63518-X.
Hefferon, Jim (2001), Linear Algebra, Online book, St Michael's College, Colchester, Vermont, USA, http://joshua.smcvt.edu/linearalgebra/.
Kuttler, Kenneth (2007), An introduction to linear algebra, Online e-book in PDF format, Brigham Young University, http://www.math.byu.edu/~klkuttle/Linearalgebra.pdf.
Demmel, James W. (1997), Applied numerical linear algebra, SIAM, ISBN 0-89871-389-7.
Beezer, Robert A. (2006), A first course in linear algebra, Free online book under GNU licence, University of Puget Sound, http://linear.ups.edu/.
Lancaster, P. (1973), Matrix theory (in Russian), Moscow, Russia: Science Publishers.
Halmos, Paul R. (1987), Finite-dimensional vector spaces (8th ed.), New York, NY: Springer-Verlag, ISBN 0387900934.
Pigolkina, T. S. and Shulman, V. S., Eigenvalue (in Russian), in: Vinogradov, I. M. (Ed.), Mathematical Encyclopedia, Vol. 5, Soviet Encyclopedia, Moscow, 1977.
Pigolkina, T. S. and Shulman, V. S., Eigenvector (in Russian), in: Vinogradov, I. M. (Ed.), Mathematical Encyclopedia, Vol. 5, Soviet Encyclopedia, Moscow, 1977.
Greub, Werner H. (1975), Linear Algebra (4th ed.), Springer-Verlag, New York, NY, ISBN 0-387-90110-8.
Larson, Ron; Edwards, Bruce H. (2003), Elementary linear algebra (5th ed.), Houghton Mifflin Company, ISBN 0-618-33567-6.
Curtis, Charles W., Linear Algebra: An Introductory Approach, 347 p., Springer; 4th ed. 1984, Corr. 7th printing edition (August 19, 1999), ISBN 0387909923.
Shores, Thomas S. (2007), Applied linear algebra and matrix analysis, Springer Science+Business Media, LLC, ISBN 0-387-33194-8.
Sharipov, Ruslan A. (1996), Course of Linear Algebra and Multidimensional Geometry: the textbook, Online e-book in various formats on arxiv.org, Bashkir State University, Ufa, arXiv:math/0405323v1, ISBN 5-7477-0099-5, http://www.geocities.com/r-sharipov.
Gohberg, Israel; Lancaster, Peter; Rodman, Leiba (2005), Indefinite linear algebra and applications, Basel-Boston-Berlin: Birkhäuser Verlag, ISBN 3-7643-7349-0.

External links
Theory

Eigenvalue (of a matrix) (http://planetmath.org/?op=getobj&from=objects&id=4397) on PlanetMath


MathWorld: Eigenvector (http://mathworld.wolfram.com/Eigenvector.html)
Eigen Vector Examination (Demo) with Sound (http://web.mit.edu/18.06/www/Demos/eigen-applet-all/eigen_sound_all.html)
MIT Video Lecture on Eigenvalues and Eigenvectors (http://academicearth.org/lectures/eigenvalues-and-eigenvectors), from MIT OpenCourseWare
What are Eigen Values? (http://www.physlink.com/education/AskExperts/ae520.cfm) from
PhysLink.com's "Ask the Experts"
Numerical solution of eigenvalue problems (http://www.cs.utk.edu/~dongarra/etemplates/index.html)
Edited by Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst

Algorithms

ARPACK (http://www.caam.rice.edu/software/ARPACK/) is a collection of FORTRAN subroutines for solving large scale (sparse) eigenproblems.
IRBLEIGS (http://www.math.uri.edu/~jbaglama/) , has MATLAB code with similar capabilities to
ARPACK. (See this paper (http://www.math.uri.edu/~jbaglama/papers/paper10.pdf) for a comparison
between IRBLEIGS and ARPACK.)
LAPACK (http://netlib.org/lapack/) is a collection of FORTRAN subroutines for solving dense linear
algebra problems
ALGLIB (http://www.alglib.net/eigen/) includes a partial port of the LAPACK to C++, C#, Delphi, etc.
Vanderplaats Research and Development (http://www.vrand.com) provides the SMS eigenvalue solver for structural finite element problems. The solver is included in the GENESIS (http://www.vrand.com/Genesis.html) program as well as other commercial programs. SMS can easily be used with MSC.Nastran or NX/Nastran via DMAPs.

Online calculators

arndt-bruenner.de (http://www.arndt-bruenner.de/mathe/scripts/engl_eigenwert.htm)
bluebit.gr (http://www.bluebit.gr/matrix-calculator/)
wims.unice.fr (http://wims.unice.fr/wims/wims.cgi?session=6S051ABAFA.2&+lang=en&+module=tool%2Flinear%2Fmatrix.en)
