For more specific information regarding the eigenvalues and eigenvectors of matrices, see Eigendecomposition of a matrix.
In this shear mapping of the Mona Lisa, the picture is deformed in such a way that its central vertical axis (red vector) does not change direction, but the diagonal vector (blue) does. Hence the red vector is an eigenvector of the transformation and the blue vector is not. Since the red vector is neither stretched nor compressed, its eigenvalue is 1. All vectors with the same or opposite direction (vertical upward or downward) are also eigenvectors, and have the same eigenvalue. Together with the zero vector, they form the eigenspace for this eigenvalue.

The eigenvectors of a square matrix are the non-zero vectors which, after being multiplied by the matrix, remain proportional to the original vector. For each eigenvector, the corresponding eigenvalue is the factor by which the eigenvector is scaled when multiplied by the matrix. The prefix eigen- is adopted from the German word eigen, meaning "innate" or "own". The eigenvectors are sometimes also called proper vectors, or characteristic vectors; similarly, the eigenvalues are also known as proper values, or characteristic values. The mathematical expression of this idea is as follows: if A is a square matrix, a non-zero vector v is an eigenvector of A if there is a scalar λ such that

    Av = λv.
The scalar λ is said to be the eigenvalue of A corresponding to v. An eigenspace of A is the set of all eigenvectors with the same eigenvalue, together with the zero vector (the zero vector itself, however, is not an eigenvector). These ideas are often extended to more general situations, where scalars are elements of any field, vectors are elements of any vector space, and linear transformations may or may not be represented by matrix multiplication. For example, instead of real numbers, scalars may be complex numbers; instead of arrows, vectors may be functions or frequencies; instead of matrix
multiplication, linear transformations may be operators such as the derivative from calculus. These are only a few of countless examples where eigenvectors and eigenvalues are important. In these cases, the concept of direction loses its ordinary meaning, and is given an abstract definition. Even so, if this abstract direction is unchanged by a given linear transformation, the prefix "eigen" is used, as in eigenfunction, eigenmode, eigenface, eigenstate, and eigenfrequency. Eigenvalues and eigenvectors have many applications in both pure and applied mathematics. They are used in matrix factorization, in quantum mechanics, and in many other areas.
1 Definition
  1.1 Prerequisites and motivation
  1.2 Example
  1.3 Formal definition
2 Eigenvalues and eigenvectors of matrices
  2.1 Characteristic polynomial
  2.2 Eigenspace
  2.3 Algebraic and geometric multiplicities
  2.4 Worked example
  2.5 Eigendecomposition
  2.6 Further properties
  2.7 Examples in the plane
    2.7.1 Shear
    2.7.2 Uniform scaling and reflection
    2.7.3 Unequal scaling
    2.7.4 Rotation
3 Calculation
4 Left and right eigenvectors
5 History
6 Infinite-dimensional spaces and spectral theory
  6.1 Eigenfunctions
    6.1.1 Waves on a string
7 Applications
  7.1 Schrödinger equation
  7.2 Molecular orbitals
  7.3 Geology and glaciology
  7.4 Principal components analysis
  7.5 Vibration analysis
  7.6 Eigenfaces
  7.7 Tensor of inertia
  7.8 Stress tensor
  7.9 Eigenvalues of a graph
8 See also
9 Notes
10 References
11 External links

Definition

Prerequisites and motivation

Matrix A acts by stretching the vector x, not changing its direction, so x is an eigenvector of A.

Eigenvectors and eigenvalues depend on the concepts of vectors and linear transformations. In the most elementary case, vectors can be thought of as arrows that have both length (or magnitude) and direction. Once a set of Cartesian coordinates is established, a vector can be described relative to that set of coordinates by a sequence of numbers, and a linear transformation can be described by a square matrix. For example, in the standard coordinates of n-dimensional space, a vector can be written as a column of n numbers, and a matrix as a square array of n×n numbers.
Here n is some fixed natural number.

Usually, the multiplication of a vector x by a square matrix A changes both the magnitude and the direction of the vector upon which it acts; but in the special case where it changes only the scale of the vector and leaves the direction unchanged, or switches the vector to the opposite direction, that vector is called an eigenvector of that matrix. (The term "eigenvector" is meaningless except in relation to some particular matrix.) When multiplied by the matrix, each eigenvector of that matrix changes its magnitude by a factor, called the eigenvalue corresponding to that eigenvector.

The vector x is an eigenvector of the matrix A with eigenvalue λ if the following equation holds:

    Ax = λx.

Geometrically, the eigenvalue equation can be interpreted as follows: a vector x is an eigenvector if multiplication by A stretches, shrinks, leaves unchanged, flips, flips and stretches, or flips and shrinks x. If the eigenvalue λ > 1, x is stretched by this factor. If 0 < λ < 1, x is shrunk (or compressed). If λ = 1, the vector x is not affected at all by multiplication by A. The case λ = 0 means that x shrinks to a point (represented by the origin). If λ is negative, then x flips and points in the opposite direction, as well as being scaled by a factor equal to the absolute value of λ.

As a special case, the identity matrix I is the matrix that leaves all vectors unchanged:

    Ix = x.

Every non-zero vector x is an eigenvector of the identity matrix with eigenvalue 1.

Example

For the matrix

    A = [[2, 1], [1, 2]],

the vector

    x = (1, −1)

is an eigenvector with eigenvalue 1.
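The different eigenvalue regimes described above (stretching, shrinking, flipping) can be illustrated with a small numerical sketch (NumPy is an assumption here, not part of the article):

```python
import numpy as np

# The diagonal entries of a diagonal matrix are its eigenvalues,
# and the standard basis vectors are its eigenvectors.
A = np.diag([2.0, 0.5, -1.0])
e1, e2, e3 = np.eye(3)

print(A @ e1)  # (2, 0, 0): stretched, since lambda = 2 > 1
print(A @ e2)  # (0, 0.5, 0): shrunk, since 0 < lambda < 1
print(A @ e3)  # (0, 0, -1): flipped, since lambda = -1 < 0
```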
On the other hand, the vector (0, 1) is not an eigenvector, since

    A(0, 1)ᵀ = (1, 2)ᵀ,

and this vector is not a multiple of the original vector x.

Formal definition

In abstract mathematics, a more general definition is given: Let V be any vector space, let x be a vector in that vector space, and let T be a linear transformation mapping V into V. Then x is an eigenvector of T with eigenvalue λ if the following equation holds:

    Tx = λx.

This equation is called the eigenvalue equation. Note that Tx means the action of the transformation T on x, while λx means the product of the number λ times the vector x. Most, but not all, authors also require x to be non-zero. The set of eigenvalues of T is sometimes called the spectrum of T.

Eigenvalues and eigenvectors of matrices

Characteristic polynomial

Main article: Characteristic polynomial

The eigenvalues of A are precisely the solutions λ to the equation

    det(A − λI) = 0.

Here det denotes the determinant of a matrix and I is the n×n identity matrix. This equation is called the characteristic equation (or, less often, the secular equation) of A. For example, if A is the following matrix (a so-called diagonal matrix):

    A = diag(a1,1, a2,2, …, an,n),
then the characteristic equation reads

    (a1,1 − λ)(a2,2 − λ) ⋯ (an,n − λ) = 0.

The solutions to this equation are the eigenvalues λi = ai,i (i = 1, …, n).

Proving the afore-mentioned relation between eigenvalues and solutions of the characteristic equation requires some linear algebra, specifically the notion of linearly independent vectors. Briefly, the eigenvalue equation for a matrix A can be expressed as

    Ax = λx,

which can be rearranged to

    (A − λI)x = 0.

If there existed an inverse (A − λI)⁻¹, then both sides could be left-multiplied by it to obtain x = 0. Therefore, if λ is such that A − λI is invertible, λ cannot be an eigenvalue. It can be shown that the converse holds, too: if A − λI is not invertible, λ is an eigenvalue. A criterion from linear algebra states that a matrix (here: A − λI) is non-invertible if and only if its determinant is zero, thus leading to the characteristic equation. The left-hand side of this equation can be seen (using Leibniz's formula for the determinant) to be a polynomial function in λ, whose coefficients depend on the entries of A. This polynomial is called the characteristic polynomial.
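The characteristic polynomial can be formed and solved numerically. As a sketch (NumPy assumed) for the symmetric matrix [[2, 1], [1, 2]]:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly returns the coefficients of det(lambda*I - A):
# here lambda^2 - 4*lambda + 3.
coeffs = np.poly(A)
print(coeffs)                      # coefficients 1, -4, 3

# The roots of the characteristic polynomial are the eigenvalues.
print(np.sort(np.roots(coeffs)))   # [1. 3.]
```

In practice eigenvalues are not computed this way (root-finding is numerically sensitive, as the Calculation section notes); the sketch only illustrates the definition.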
Its degree is n: the highest power of λ occurring in this polynomial is λⁿ. At least for small matrices, the solutions of the characteristic equation (hence, the eigenvalues of A) can be found directly. Moreover, the characteristic polynomial is important for theoretical purposes, such as the Cayley–Hamilton theorem.

The characteristic equation need not have n distinct solutions: there may be strictly fewer than n distinct eigenvalues. However, there is at least one complex number λ solving the characteristic equation, even if the entries of the matrix A are complex numbers to begin with. (The existence of such a solution is known as the fundamental theorem of algebra.) This also shows that any n×n matrix has at most n eigenvalues. If the matrix has real entries, the coefficients of the characteristic polynomial are all real. However, the roots are not necessarily real; they may include complex numbers with a non-zero imaginary component. For example, a 2×2 matrix describing a 45° rotation will not leave any non-zero vector pointing in the same direction. For a complex eigenvalue, the corresponding eigenvectors also have complex components.

Eigenspace

If x is an eigenvector of the matrix A with eigenvalue λ, then any scalar multiple αx is also an eigenvector of A with the same eigenvalue, since A(αx) = αAx = αλx = λ(αx). More generally, any non-zero linear combination of eigenvectors that share the same eigenvalue λ will itself be an eigenvector with eigenvalue λ. Together with the zero vector, the eigenvectors of A with the same eigenvalue form a linear subspace of the vector space, called an eigenspace, Eλ. In case of dim(Eλ) = 1, it is called an eigenline and λ is called a scaling factor.

Algebraic and geometric multiplicities

Given an n×n matrix A and an eigenvalue λi of this matrix, there are two numbers measuring, roughly speaking, the number of eigenvectors belonging to λi. They are called multiplicities: the algebraic multiplicity ni of an eigenvalue is defined as the multiplicity of the corresponding root of the characteristic polynomial, while the geometric multiplicity mi is defined as the dimension of the associated eigenspace, i.e. the number of linearly independent eigenvectors with that eigenvalue. Both multiplicities are integers between 1 and n (inclusive), and we always have mi ≤ ni; they may or may not be equal. The simplest case is of course mi = ni = 1.

The total number of linearly independent eigenvectors is given by summing the geometric multiplicities. Over a complex vector space, the sum of the algebraic multiplicities will equal the dimension of the vector space, but the sum of the geometric multiplicities may be smaller. In a sense, then, it is possible that there may not be sufficient eigenvectors to span the entire space; this is intimately related to the question of whether a given matrix may be diagonalized by a suitable choice of coordinates. This happens for the matrix describing the shear mapping discussed below.
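The gap between algebraic and geometric multiplicity can be seen in the shear matrix mentioned above. A sketch (NumPy assumed): the eigenvalue 1 has algebraic multiplicity 2 but only a one-dimensional eigenspace.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # horizontal shear

w = np.linalg.eigvals(A)
print(w)                      # both eigenvalues are 1: algebraic multiplicity 2

# Geometric multiplicity = n - rank(A - lambda*I).
lam = 1.0
geo = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))
print(geo)                    # 1: only one independent eigenvector
```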
A matrix is said to be defective if it fails to have n linearly independent eigenvectors. All defective matrices have fewer than n distinct eigenvalues, but not all matrices with fewer than n distinct eigenvalues are defective. The eigenvectors corresponding to different eigenvalues are linearly independent, meaning, in particular, that in an n-dimensional space the linear transformation A cannot have more than n eigenvalues (or eigenspaces). The eigenvectors can be indexed by eigenvalues, using a double index, with xi,j being the jth eigenvector for the ith eigenvalue. The eigenvectors can also be indexed using the simpler notation of a single index xk, with k = 1, 2, ….

Worked example

These concepts are explained for the matrix

    A = [[2, 1], [1, 2]].

The characteristic equation of this matrix reads

    det(A − λI) = det [[2 − λ, 1], [1, 2 − λ]] = 0.

Calculating the determinant, this yields the quadratic equation

    λ² − 4λ + 3 = 0,

whose solutions (also called roots) are λ = 1 and λ = 3.

The eigenvectors for the eigenvalue λ = 3 are determined by using the eigenvalue equation, which in this case reads

    [[2, 1], [1, 2]] (x, y)ᵀ = 3 (x, y)ᵀ.

The juxtaposition at the left-hand side denotes matrix multiplication. Spelling this out, this equation comparing two vectors is tantamount to the system of two linear equations

    2x + y = 3x,
    x + 2y = 3y.

Both equations reduce to the single linear equation x = y. That is to say, any vector of the form (x, y) with y = x is an eigenvector for the eigenvalue λ = 3; in particular, the vector (0, 0) is excluded. A similar calculation shows that the eigenvectors corresponding to the eigenvalue λ = 1 are given by the non-zero vectors (x, y) such that y = −x.
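The worked example (eigenvalues 1 and 3, eigenvectors along y = −x and y = x) can be cross-checked numerically. This sketch assumes the matrix A = [[2, 1], [1, 2]], consistent with those results, and uses NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

w, V = np.linalg.eigh(A)      # eigh: suitable since A is symmetric
print(w)                      # [1. 3.]

# Columns of V are unit eigenvectors: (1, -1)/sqrt(2) for lambda = 1
# and (1, 1)/sqrt(2) for lambda = 3 (possibly up to sign).
for lam, v in zip(w, V.T):
    assert np.allclose(A @ v, lam * v)
```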
Eigendecomposition

Main article: Eigendecomposition of a matrix

The spectral theorem for matrices can be stated as follows. Let A be a square n × n matrix, and let q1, …, qk be an eigenvector basis, i.e. an indexed set of k linearly independent eigenvectors, where k is the dimension of the space spanned by the eigenvectors of A. If k = n, then A can be written

    A = QΛQ⁻¹,

where Q is the square n × n matrix whose i-th column is the basis eigenvector qi of A, and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e. Λii = λi.

Further properties

Let A be an n×n matrix with eigenvalues λ1, …, λn. Then:

• Trace of A: tr(A) = λ1 + λ2 + ⋯ + λn.
• Determinant of A: det(A) = λ1 λ2 ⋯ λn.
• The eigenvalues of A^k are λ1^k, …, λn^k.
• If A = Aᴴ, i.e. A is Hermitian, every eigenvalue is real.
• Every eigenvalue of a unitary matrix has absolute value |λ| = 1.

Examples in the plane

The following table presents some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors.
Transformation: horizontal shear
  Matrix: [[1, k], [0, 1]]
  Characteristic equation: λ² − 2λ + 1 = (1 − λ)² = 0
  Eigenvalues: λ1 = 1
  Multiplicities: n1 = 2, m1 = 1
  Eigenvectors: u1 = (1, 0)

Transformation: uniform scaling
  Matrix: [[k, 0], [0, k]]
  Characteristic equation: λ² − 2λk + k² = (λ − k)² = 0
  Eigenvalues: λ1 = k
  Multiplicities: n1 = m1 = 2
  Eigenvectors: every non-zero vector

Transformation: unequal scaling
  Matrix: [[k1, 0], [0, k2]]
  Characteristic equation: (λ − k1)(λ − k2) = 0
  Eigenvalues: λ1 = k1, λ2 = k2
  Multiplicities: n1 = m1 = 1, n2 = m2 = 1
  Eigenvectors: u1 = (1, 0), u2 = (0, 1)

Transformation: counterclockwise rotation by φ
  Matrix: [[cos φ, −sin φ], [sin φ, cos φ]]
  Characteristic equation: λ² − 2λ cos φ + 1 = 0
  Eigenvalues: λ1,2 = cos φ ± i sin φ = e^(±iφ)
  Multiplicities: n1 = m1 = 1, n2 = m2 = 1
  Eigenvectors: u1 = (1, −i), u2 = (1, i)

Shear

Shear in the plane is a transformation in which all points along a given line remain fixed while other points are shifted parallel to that line by a distance proportional to their perpendicular distance from the line. In the horizontal shear depicted above, a point P of the plane moves parallel to the x-axis to the place P′, so that its coordinate y does not change while the x coordinate increments to become x′ = x + k y, where k is called the shear factor. The shear angle φ is determined by k = cot φ.
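Repeated application of a shear pushes almost every vector toward the eigenvector direction (1, 0). A numerical sketch (NumPy assumed):

```python
import numpy as np

S = np.array([[1.0, 0.5],    # horizontal shear with shear factor k = 0.5
              [0.0, 1.0]])

x = np.array([0.0, 1.0])     # a vector not on the x-axis
for _ in range(100):
    x = S @ x
x = x / np.linalg.norm(x)

# The direction approaches the eigenvector (1, 0) of the shear.
print(np.round(x, 3))        # close to (1, 0)
```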
Applying repeatedly the shear transformation changes the direction of any vector in the plane closer and closer to the direction of the eigenvector.

Uniform scaling and reflection

Multiplying every vector with a constant real number k is represented by the diagonal matrix whose entries on the diagonal are all equal to k. Mechanically, this corresponds to stretching a rubber sheet equally in all directions, such as a small area of the surface of an inflating balloon. All vectors originating at the origin (i.e., the fixed point on the balloon surface) are stretched equally with the same scaling factor k while preserving their original direction. Thus, every non-zero vector is an eigenvector with eigenvalue k. Whether the transformation is stretching (elongation, extension, inflation) or shrinking (compression, deflation) depends on the scaling factor: if k > 1, it is stretching; if 0 < k < 1, it is shrinking. Negative values of k correspond to a reversal of direction, followed by a stretch or a shrink, depending on the absolute value of k.

Unequal scaling

For a slightly more complicated example, consider a sheet that is stretched unequally in two perpendicular directions along the coordinate axes, or, similarly, stretched in one direction and shrunk in the other direction. In this case, there are two different scaling factors: k1 for the scaling in direction x, and k2 for the scaling in direction y. If a given eigenvalue is greater than 1, the vectors are stretched in the direction of the corresponding eigenvector; if less than 1, they are shrunken in that direction. Negative eigenvalues correspond to reflections followed by a stretch or shrink.

The figure shows the case where k1 > 1 and 1 > k2 > 0: the rubber sheet is stretched along the x axis and simultaneously shrunk along the y axis. After repeatedly applying this transformation of stretching/shrinking many times, almost any vector on the surface of the rubber sheet will be oriented closer and closer to the direction of the x axis (the direction of stretching). The exceptions are vectors along the y-axis, which will gradually shrink away to nothing.

In general, matrices that are diagonalizable over the real numbers represent scalings and reflections: the eigenvalues represent the scaling factors (and appear as the diagonal terms), and the eigenvectors are the directions of the scalings.

Rotation

For more details on this topic, see Rotation matrix.

A rotation in a plane is a transformation that describes motion of a vector, plane, coordinates, etc., around a fixed point. Clearly, for rotations other than through 0° and 180°, every vector in the real plane will have its direction changed, and thus there cannot be any eigenvectors. A rotation of 0°, 360°, … is just the identity transformation (a uniform scaling by +1), while a rotation of 180°, 540°, … is a reflection (uniform scaling by −1).

But this is not necessarily true if we consider the same matrix over a complex vector space. The characteristic equation is a quadratic equation with discriminant D = 4 (cos² φ − 1) = −4 sin² φ, which is a negative number whenever φ is not equal to a multiple of 180°. Therefore, apart from these special cases, there are no real eigenvalues or eigenvectors for rotation in the plane, and the eigenvalues are complex numbers in general.
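The complex eigenvalues e^(±iφ) of a rotation matrix can be verified numerically (a sketch, NumPy assumed):

```python
import numpy as np

phi = np.pi / 4               # 45 degree rotation
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

w = np.linalg.eigvals(R)
print(np.sort_complex(w))     # cos(phi) -/+ i*sin(phi), i.e. e^(-/+ i*phi)

# Both eigenvalues lie on the unit circle, as for any rotation.
print(np.allclose(np.abs(w), 1.0))  # True
```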
Although not diagonalizable over the reals, the rotation matrix is diagonalizable over the complex numbers, and again the eigenvalues appear on the diagonal. Thus rotation matrices acting on complex spaces can be thought of as scaling matrices, with complex scaling factors.

Calculation

The complexity of the problem of finding the roots/eigenvalues of the characteristic polynomial increases rapidly with the degree of the polynomial (the dimension of the vector space). There are exact solutions for dimensions below 5, but for dimensions greater than or equal to 5 there are generally no exact solutions and one has to resort to numerical methods to find them approximately. Worse, this computational procedure can be very inaccurate in the presence of round-off error, because the roots of a polynomial are an extremely sensitive function of the coefficients (see Wilkinson's polynomial). Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the advent of the QR algorithm in 1961. For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.

Left and right eigenvectors

The word eigenvector formally refers to the right eigenvector xR. It is defined by the above eigenvalue equation, AxR = λxR, and is the most commonly used eigenvector. However, the left eigenvector xL exists as well, and is defined by xLA = λxL (with xL a row vector).

History

Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations. Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes; Lagrange realized that the principal axes are the eigenvectors of the inertia matrix. In the early 19th century, Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions. Cauchy also coined the term racine caractéristique (characteristic root) for what is now called eigenvalue; his term survives in characteristic equation. Fourier used the work of Laplace and Lagrange to solve the heat equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur. Sturm developed
Fourier's ideas further and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues. This was extended by Hermite in 1855 to what are now called Hermitian matrices. Around the same time, Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle, and Clebsch found the corresponding result for skew-symmetric matrices. Finally, Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability. In the meantime, Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory. Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.

At the start of the 20th century, Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices. He was the first to use the German word eigen to denote eigenvalues and eigenvectors, in 1904, though he may have been following a related usage by Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.

The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis and Vera Kublanovskaya in 1961.

Infinite-dimensional spaces and spectral theory

For more details on this topic, see Spectral theorem.

If the vector space is an infinite-dimensional Banach space, the notion of eigenvalues can be generalized to the concept of spectrum. The spectrum is the set of scalars λ for which (T − λI)⁻¹ is not defined, that is, such that T − λI has no bounded inverse. Clearly, if λ is an eigenvalue of T, then λ is in the spectrum of T. In general, the converse is not true: there are operators on Hilbert or Banach spaces which have no eigenvectors at all. This can be seen in the following example. The bilateral shift on the Hilbert space ℓ2(Z) (that is, the space of all sequences of scalars …, a−1, a0, a1, a2, … such that Σ |an|² converges) has no eigenvalue but does have spectral values. In general, the spectrum of a bounded operator is always nonempty.

Via its spectral measures, the spectrum of any self-adjoint operator, bounded or otherwise, can be decomposed into absolutely continuous, pure point, and singular parts. (See Decomposition of spectrum.) This is also true for an unbounded self-adjoint operator.
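The power method of Von Mises, mentioned in the History section, iterates x ← Ax/‖Ax‖ and converges (for a generic starting vector) to the eigenvector of the dominant eigenvalue. A minimal sketch (NumPy assumed):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

x = np.array([1.0, 0.0])      # generic starting vector
for _ in range(50):
    x = A @ x
    x = x / np.linalg.norm(x)

# The Rayleigh quotient estimates the dominant eigenvalue (here 3).
lam = x @ A @ x
print(round(lam, 6))          # 3.0
```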
The hydrogen atom is an example where both types of spectra appear. The bound states of the hydrogen atom correspond to the discrete part of the spectrum (they have a discrete set of eigenvalues, which can be computed by the Rydberg formula), while the ionization processes are described by the continuous part (the energy of the collision/ionization is not quantized).

Eigenfunctions

Main article: Eigenfunction

A common example of such maps on infinite-dimensional spaces is the action of differential operators on function spaces; the eigenvectors are commonly called eigenfunctions. The simplest case is the eigenvalue equation for differentiation of a real-valued function f(t) of a single real variable. We seek a function (equivalent to an infinite-dimensional vector) which, when differentiated, yields a constant times the original function. In this case, the eigenvalue equation becomes the linear differential equation

    d/dt f(t) = λ f(t).

Here λ is the eigenvalue associated with the function f. The process of differentiation defines a linear operator, since

    d/dt (a f(t) + b g(t)) = a d/dt f(t) + b d/dt g(t),

where f(t) and g(t) are differentiable functions, and a and b are constants. If λ is zero, the solution is f(t) = A, where A is any constant; if λ is non-zero, the solution is the exponential function

    f(t) = A e^(λt).

If we expand our horizons to complex-valued functions, the value of λ can be any complex number. This eigenvalue equation has a solution for any value of λ; the spectrum of d/dt is therefore the whole complex plane. This is an example of a continuous spectrum.
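That f(t) = e^(λt) solves the eigenvalue equation df/dt = λf can be spot-checked with a finite-difference derivative (a sketch in plain Python):

```python
import math

lam = 0.7                        # any real eigenvalue
f = lambda t: math.exp(lam * t)  # candidate eigenfunction

t0, h = 1.3, 1e-6
# Central difference approximates f'(t0).
deriv = (f(t0 + h) - f(t0 - h)) / (2 * h)

# f'(t0) should equal lam * f(t0) up to discretization error.
print(abs(deriv - lam * f(t0)) < 1e-5)  # True
```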
Waves on a string

The shape of a standing wave in a string fixed at its boundaries is an example of an eigenfunction of a differential operator. The displacement, h(x,t), of a stressed rope fixed at both ends, like the vibrating strings of a string instrument, satisfies the wave equation

    ∂²h/∂x² = (1/c²) ∂²h/∂t²,

which is a linear partial differential equation, where c is the constant wave speed. The normal method of solving such an equation is separation of variables. If we assume that h can be written as a product of the form X(x)T(t), we can form a pair of ordinary differential equations:

    X″ = −(ω²/c²) X   and   T″ = −ω² T.

Each of these is an eigenvalue equation (the unfamiliar form of the eigenvalue is chosen merely for convenience). For any values of the eigenvalues, the eigenfunctions are given by

    X(x) = sin(ωx/c + φ)   and   T(t) = sin(ωt + ψ),

where φ and ψ are phase angles. If we impose boundary conditions (that the ends of the string are fixed, with X(x) = 0 at x = 0 and x = L, for example), we can constrain the eigenvalues. For those boundary conditions, we find sin(φ) = 0, so the phase angle φ = 0, and

    sin(ωL/c) = 0.

Thus, the constant ω is constrained to take one of the values ωn = nπc/L, where n is any integer. The clamped string therefore supports a family of standing waves of the form

    h(x,t) = sin(nπx/L) sin(ωn t).

The admissible eigenvalues are governed by the length of the string and determine the frequency of oscillation.
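The standing-wave analysis has a finite-dimensional analogue: discretizing −d²/dx² on [0, L] with fixed ends gives a tridiagonal matrix whose smallest eigenvalues approximate (nπ/L)². A sketch (NumPy assumed):

```python
import numpy as np

L, N = 1.0, 200
h = L / N

# Tridiagonal second-difference matrix for -d^2/dx^2,
# with X = 0 enforced at both ends of the string.
D2 = (2.0 * np.eye(N - 1)
      - np.eye(N - 1, k=1)
      - np.eye(N - 1, k=-1)) / h**2

w = np.sort(np.linalg.eigvalsh(D2))

# Square roots of the lowest eigenvalues approach n*pi/L for n = 1, 2, 3.
print(np.round(np.sqrt(w[:3]) / np.pi, 3))  # [1. 2. 3.]
```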
From the point of view of our musical instrument, the frequency ωn is the frequency of the nth harmonic, which is called the (n − 1)st overtone.

Applications

Schrödinger equation

An example of an eigenvalue equation where the transformation T is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:

    H ψE = E ψE,

where H, the Hamiltonian, is a second-order differential operator and ψE, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue E, interpreted as its energy.

The wavefunctions associated with the bound states of an electron in a hydrogen atom can be seen as the eigenvectors of the hydrogen atom Hamiltonian as well as of the angular momentum operator. They are associated with eigenvalues interpreted as their energies (increasing downward: n = 1, 2, 3, …) and angular momentum (increasing across: s, p, d, …). The illustration shows the square of the absolute value of the wavefunctions. Brighter areas correspond to higher probability density for a position measurement. The center of each figure is the atomic nucleus, a proton.

However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for ψE within the space of square-integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which ψE and H can be represented as a one-dimensional array and a matrix, respectively. This allows one to represent the Schrödinger equation in a matrix form.
Bra–ket notation is often used in this context: a vector, which represents a state of the system, in the Hilbert space of square-integrable functions is represented by |ψE⟩. In this notation, the Schrödinger equation is

    H |ψE⟩ = E |ψE⟩,

where |ψE⟩ is an eigenstate of H. H is a self-adjoint operator, the infinite-dimensional analog of Hermitian matrices (see Observable). As in the matrix case, H |ψE⟩ in the equation above is understood to be the vector obtained by application of the transformation H to |ψE⟩.

Molecular orbitals

In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. If one wants to underline this aspect, one speaks of a nonlinear eigenvalue problem. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations.

Geology and glaciology

In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about a clast fabric's constituents' orientation and dip can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can only be compared graphically, such as in a Tri-Plot (Sneed and Folk) diagram or as a Stereonet on a Wulff Net. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. Eigenvectors output from programs such as Stereo32 are in the order E1 ≥ E2 ≥ E3, with E1 being the primary orientation of clast orientation/dip, E2 being the secondary, and E3 being the tertiary, in terms of strength. The clast orientation is defined as the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E1, E2, and E3 are dictated by the nature of the sediment's fabric. If E1 = E2 = E3, the fabric is said to be isotropic. If E1 = E2 > E3, the fabric is planar. If E1 > E2 > E3, the fabric is linear. See 'A Practical Guide to the Study of Glacial Sediments' by Benn & Evans, 2004.

Principal components analysis
PCA of the multivariate Gaussian distribution centered at (1, 3), with a standard deviation of 3 in roughly the (0.878, 0.478) direction and of 1 in the orthogonal direction.

Main article: Principal components analysis
See also: Positive semidefinite matrix and Factor analysis

The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal components analysis (PCA) in statistics, which studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthonormal eigen-basis for the space of the observed data: in this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.

Principal component analysis is used to study large data sets, such as those encountered in data mining, chemical research, psychology, and marketing. PCA is popular especially in psychology, in the field of psychometrics. In Q-methodology,
the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing): The factors with eigenvalues greater than 1. The orthogonal decomposition of a PSD matrix is used in multivariate analysis.00 are considered to be practically significant. More generally.
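The correspondence between eigenvalues of the covariance matrix and explained variance can be sketched with NumPy. This is an illustrative sketch: the data below are synthetic, generated only for this example.

```python
import numpy as np

# Synthetic data with deliberately unequal variance along three directions.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3)) @ np.array([[3.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.0, 0.0, 0.5]])

# Center the variables and form the sample covariance matrix (PSD).
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (len(X) - 1)

# A covariance matrix is symmetric PSD: eigh returns real, nonnegative
# eigenvalues (up to rounding) and an orthonormal eigenvector basis.
eigenvalues, eigenvectors = np.linalg.eigh(C)
order = np.argsort(eigenvalues)[::-1]

# Each eigenvalue is the variance explained by its principal component.
var_explained = eigenvalues[order] / eigenvalues.sum()

# Project the centered data onto the principal components.
scores = Xc @ eigenvectors[:, order]
print(var_explained)
```

The first entry of `var_explained` is the fraction of total variance captured by the first principal component; the same decomposition applied to the correlation matrix gives the scaled variant described above.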
Vibration analysis

Main article: Vibration
[Figure: 1st lateral bending mode. (See vibration for more types of vibration.)]

Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are used to determine the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors determine the shapes of these vibrational modes. The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis.

Eigenfaces

Main article: Eigenfaces
[Figure: Eigenfaces as examples of eigenvectors.]

In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal components analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research related to eigen vision systems determining hand gestures has also been made.
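The natural-frequency computation described under vibration analysis can be sketched for a hypothetical two-degree-of-freedom spring-mass system. All values here are made up for illustration; real analyses use mass and stiffness matrices assembled by finite element software.

```python
import numpy as np

# Hypothetical system M x'' + K x = 0: two equal masses coupled by three
# equal springs. Natural frequencies solve the generalized eigenproblem
# K v = omega^2 M v.
m, k = 1.0, 100.0
M = np.diag([m, m])
K = np.array([[2 * k, -k],
              [-k, 2 * k]])

# Reduce to a standard eigenproblem for M^{-1} K (M is diagonal here).
eigenvalues, modes = np.linalg.eig(np.linalg.solve(M, K))
order = np.argsort(eigenvalues)
omega = np.sqrt(eigenvalues[order])   # natural frequencies in rad/s
mode_shapes = modes[:, order]         # columns are the vibrational mode shapes

# For this symmetric system the exact values are sqrt(k/m) and sqrt(3k/m):
# the in-phase mode and the out-of-phase mode.
print(omega)
```

The eigenvectors (mode shapes) show the masses moving in phase for the lower frequency and in opposition for the higher one, matching the decoupling property described above.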
Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.

Tensor of inertia

In mechanics, the eigenvectors of the inertia tensor define the principal axes of a rigid body. The tensor of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.

Stress tensor

In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and the eigenvectors as a basis. Because it is diagonal in this orientation, the stress tensor has no shear components; the components it does have are the principal components.

Eigenvalues of a graph

In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix, which is either T − A (sometimes called the combinatorial Laplacian) or I − T^(−1/2) A T^(−1/2) (sometimes called the normalized Laplacian), where T is a diagonal matrix with T_{v,v} equal to the degree of vertex v, and in T^(−1/2), the vth diagonal entry is deg(v)^(−1/2). The kth principal eigenvector of a graph is defined as either the eigenvector corresponding to the kth largest or kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.

The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.

See also

• Nonlinear eigenproblem
• Quadratic eigenvalue problem
• Introduction to eigenstates
• Eigenplane
• Jordan normal form
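The PageRank-style computation described above can be sketched with power iteration on a tiny made-up link graph. The damping factor 0.85 is the value commonly cited for PageRank; the graph itself is an assumption for illustration, and every page here has at least one outgoing link (dangling nodes would need extra handling).

```python
import numpy as np

# Hypothetical link graph: A[i, j] = 1 means page i links to page j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Row-normalize so each row is a probability distribution, then mix in a
# damping term: this is the "modification" that guarantees the Markov
# chain has a unique stationary distribution.
d = 0.85
n = len(A)
P = A / A.sum(axis=1, keepdims=True)
G = d * P + (1 - d) / n

# Power iteration converges to the principal left eigenvector of G
# (eigenvalue 1), i.e. the stationary distribution = page ranks.
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = rank @ G
rank /= rank.sum()
print(rank)
```

After convergence, `rank` is unchanged by a further step (`rank @ G ≈ rank`), which is exactly the eigenvector equation for eigenvalue 1.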
Notes

• See Hawkins 1975, §2
• See Hawkins 1975, §3
• See Kline 1972, pp. 807–808
• See Kline 1972, p. 673
• See Kline 1972, pp. 715–716
• See Kline 1972, pp. 706–707
• See Kline 1972, p. 1063
• See Aldrich 2006
• J. G. F. Francis, "The QR Transformation, I" (part 1), The Computer Journal, vol. 4, no. 3, pages 265–271 (1961); "The QR Transformation, II" (part 2), The Computer Journal, vol. 4, no. 4, pages 332–345 (1962)
• Vera N. Kublanovskaya, "On some algorithms for the solution of the complete eigenvalue problem", USSR Computational Mathematics and Mathematical Physics, vol. 3, pages 637–657 (1961). Also published in: Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, vol. 1, no. 4, pages 555–570 (1961)
• See Golub & van Loan 1996, §7.3; Meyer 2000, §7.3
• See also: eigen or eigenvalue at Wiktionary
• "Eigenvector". Wolfram Research, Inc. http://mathworld.wolfram.com/Eigenvector.html. Retrieved 29 January 2010
• For a proof of this lemma, see Shilov 1977, p. 109, and Lemma for the eigenspace
• For a proof of this lemma, see Roman 2008, Theorem 8.2 on p. 186; Shilov 1977, p. 109; Hefferon 2001, p. 364; Beezer 2006, Theorem EDELI on p. 469; and Lemma for linear independence of eigenvectors
• See Korn & Korn 2000, Section 14.3.5a
• See Friedberg, Insel & Spence 1989, p. 217
• Axler, Sheldon, Linear Algebra Done Right, 2nd edition, page 77
• Definition according to Weisstein, Eric W., Eigenvector, From MathWorld − A Wolfram Web Resource
• Weisstein, Eric W., Shear, From MathWorld − A Wolfram Web Resource
• Gilbert Strang, Linear Algebra and Its Applications, 3rd ed. (Harcourt: San Diego, 1988)
• Lloyd N. Trefethen and David Bau, Numerical Linear Algebra (SIAM, 1997)
• Sneed, E. D.; Folk, R. L., 1958. Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis. Journal of Geology 66(2): 114–150
• Graham, D.; Midgley, N., 2000. Earth Surface Processes and Landforms (25), pp. 1473–1477
• GIS-stereoplot: an interactive stereonet plotting module for ArcView 3.0 geographic information system
• Stereo32
• Benn, D.; Evans, D., 2004. A Practical Guide to the Study of Glacial Sediments. London: Arnold, pp. 103–107
• Xirouhakis, A.; Votsis, G.; Delopoulus, A. (2004) (PDF). Estimation of 3D motion and structure of human faces. National Technical University of Athens. Online paper in PDF format. http://www.image.ece.ntua.gr/papers/43.pdf
References

• Akivis, Max A.; Goldberg, Vladislav V. (1969), Tensor Calculus, Russian, Science Publishers, Moscow
• Aldrich, John (2006), "Eigenvalue, eigenfunction, eigenvector, and related terms", in Jeff Miller (ed.), Earliest Known Uses of Some of the Words of Mathematics, http://jeff560.tripod.com/e.html, retrieved 2006-08-22
• Alexandrov, Pavel S. (1968), Lecture Notes in Analytical Geometry, Russian, Science Publishers, Moscow
• Beezer, Robert A. (2006), A First Course in Linear Algebra, free online book under GNU licence, University of Puget Sound, http://linear.ups.edu/
• Bowen, Ray M.; Wang, Chao-Cheng (1980), Linear and Multilinear Algebra, Plenum Press, New York, NY, ISBN 0-306-37508-7
• Brown, Maureen (October 2004), Illuminating Patterns of Perception: An Overview of Q Methodology
• Carter, Tamara A.; Tapia, Richard A.; Papaconstantinou, Anne, Linear Algebra: An Introduction to Linear Algebra for Pre-Calculus Students, Rice University, Online Edition, http://ceee.rice.edu/Books/LA/index.html, retrieved 2008-02-19
• Cohen-Tannoudji, Claude (1977), "Chapter II. The mathematical tools of quantum mechanics", Quantum Mechanics, John Wiley & Sons, ISBN 0-471-16432-1
• Curtis, Charles W., Linear Algebra: An Introductory Approach, 347 p., Springer; 4th ed., corr. 7th printing edition (August 19, 1999), ISBN 0-387-90992-3
• Demmel, James W. (1997), Applied Numerical Linear Algebra, SIAM, ISBN 0-89871-389-7
• Fraleigh, John B.; Beauregard, Raymond A. (1995), Linear Algebra (3rd ed.), Addison-Wesley Publishing Company, ISBN 0-201-83999-7 (international edition)
• Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (1989), Linear Algebra (2nd ed.), Englewood Cliffs, NJ 07632: Prentice Hall, ISBN 0-13-537102-3
• Gelfand, I. M. (1971), Lecture Notes in Linear Algebra, Russian, Science Publishers, Moscow
• Gohberg, Israel; Lancaster, Peter; Rodman, Leiba (2005), Indefinite Linear Algebra and Applications, Basel-Boston-Berlin: Birkhäuser Verlag, ISBN 3-7643-7349-0
• Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Johns Hopkins University Press, Baltimore, MD, ISBN 978-0-8018-5414-9
• Golub, Gene H.; van der Vorst, Henk A. (2000), "Eigenvalue computation in the 20th century", Journal of Computational and Applied Mathematics 123: 35–65, doi:10.1016/S0377-0427(00)00413-1
• Greub, Werner H. (1975), Linear Algebra (4th ed.), Springer-Verlag, New York, NY, ISBN 0-387-90110-8
• Halmos, Paul R. (1987), Finite-Dimensional Vector Spaces (8th ed.), Springer-Verlag, New York, NY, ISBN 0-387-90093-4
• Hawkins, T. (1975), "Cauchy and the spectral theory of matrices", Historia Mathematica 2: 1–29, doi:10.1016/0315-0860(75)90032-4
• Hefferon, Jim (2001), Linear Algebra, online book, St Michael's College, Colchester, Vermont, http://joshua.smcvt.edu/linearalgebra/
• Horn, Roger A.; Johnson, Charles F. (1985), Matrix Analysis, Cambridge University Press, ISBN 0-521-30586-1 (hardback), ISBN 0-521-38632-2 (paperback)
• Kline, Morris (1972), Mathematical Thought from Ancient to Modern Times, Oxford University Press, ISBN 0-195-01496-0
• Korn, Granino A.; Korn, Theresa M. (2000), Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review, 1152 p., Dover Publications, 2 Revised edition, ISBN 0-486-41147-8
• Kuttler, Kenneth (2007) (PDF), An Introduction to Linear Algebra, online e-book, Brigham Young University, http://www.math.byu.edu/~klkuttle/Linearalgebra.pdf
• Lancaster, P. (1973), Matrix Theory, Russian, Science Publishers, Moscow
• Larson, Ron; Edwards, Bruce H. (2003), Elementary Linear Algebra (5th ed.), Houghton Mifflin Company, ISBN 0-618-33567-6
• Lipschutz, Seymour (1991), Schaum's Outline of Theory and Problems of Linear Algebra, Schaum's outline series (2nd ed.), New York, NY: McGraw-Hill Companies, ISBN 0-07-038007-4
• Meyer, Carl D. (2000), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, ISBN 978-0-89871-454-8
• Pigolkina, T. S.; Shulman, V. S. (1977), Eigenvalue (in Russian), in Vinogradov, I. M. (ed.), Mathematical Encyclopedia, Vol. 5, Soviet Encyclopedia, Moscow
• Roman, Steven (2008), Advanced Linear Algebra (3rd ed.), New York, NY: Springer Science + Business Media, LLC, ISBN 978-0-387-72828-5
• Sharipov, Ruslan A. (1996), Course of Linear Algebra and Multidimensional Geometry: the textbook, Bashkir State University, Ufa, arXiv:math/0405323v1, http://arxiv.org/pdf/math/0405323v1.pdf, ISBN 5-7477-0099-5 (online e-book in various formats on arxiv.org)
• Shilov, Georgi E. (1977), Linear Algebra (translated and edited by Richard A. Silverman), New York: Dover Publications, ISBN 0-486-63518-X
• Shores, Thomas S. (2007), Applied Linear Algebra and Matrix Analysis, Springer Science+Business Media, LLC, ISBN 0-387-33194-8
• Strang, Gilbert (1993), Introduction to Linear Algebra, Wellesley-Cambridge Press, Wellesley, MA, ISBN 0-961-40885-5
• Strang, Gilbert (2006), Linear Algebra and Its Applications, Belmont, CA: Thomson, Brooks/Cole, ISBN 0-030-10567-6

External links

The Wikibook Linear Algebra has a page on the topic of Eigenvalues and Eigenvectors.
The Wikibook The Book of Mathematical Proofs has a page on the topic of Algebra/Linear Transformations.

• What are Eigen Values? — non-technical introduction from PhysLink.com's "Ask the Experts"
• Introduction to Eigen Vectors and Eigen Values — lecture from Khan Academy

Theory

• Eigenvalue (of a matrix) on PlanetMath
• Eigenvector — Wolfram MathWorld
• Eigen Vector Examination working applet
• Same Eigen Vector Examination as above in a Flash demo with sound
• Computation of Eigenvalues
• Numerical solution of eigenvalue problems, edited by Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst
• Eigenvalues and Eigenvectors on the Ask Dr. Math forums

Online calculators

• arndt-bruenner.de
• bluebit.gr
• wims.unice.fr

Algorithms

• ARPACK is a collection of FORTRAN subroutines for solving large scale (sparse) eigenproblems.
• IRBLEIGS has MATLAB code with similar capabilities to ARPACK. (See this paper for a comparison between IRBLEIGS and ARPACK.)
• LAPACK is a collection of FORTRAN subroutines for solving dense linear algebra problems.
• ALGLIB includes a partial port of the LAPACK to C++, C#, Delphi, etc.
• Vanderplaats Research and Development provides the SMS eigenvalue solver for structural finite element analysis. SMS can be easily used with MSC.Nastran or NX/Nastran via DMAPs; the solver is in the GENESIS program as well as other commercial programs.

http://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors