
CHAPTER ONE

1.0 Introduction

In mathematics, an eigenvector of a linear transformation is a nonzero vector which, when that transformation is applied to it, may change in length but not in direction.

For each eigenvector of a linear transformation, there is a corresponding scalar value called

an eigenvalue for that vector, which determines the amount the eigenvector is scaled under the

linear transformation. For example, an eigenvalue of +2 means that the eigenvector is doubled in

length and points in the same direction. An eigenvalue of +1 means that the eigenvector is

unchanged, while an eigenvalue of −1 means that the eigenvector is reversed in direction.

An eigenspace of a given transformation for a particular eigenvalue is the set (linear span) of the

eigenvectors associated to this eigenvalue, together with the zero vector (which has no direction).

In linear algebra, every linear transformation between finite-dimensional vector spaces can be

expressed as a matrix, which is a rectangular array of numbers arranged in rows and columns.

Standard methods for finding eigenvalues, eigenvectors, and eigenspaces of a given matrix are

discussed below.

These concepts play a major role in several branches of both pure and applied mathematics –

appearing prominently in linear algebra, functional analysis, and to a lesser extent

in nonlinear mathematics.

Many kinds of mathematical objects can be treated as vectors: functions, harmonic

modes, quantum states, and frequencies, for example. In these cases, the concept

of direction loses its ordinary meaning, and is given an abstract definition. Even so, if this

abstract direction is unchanged by a given linear transformation, the prefix “eigen” is used, as

in eigenfunction, eigenmode, eigenstate, and eigenfrequency.

A particularly important operation on matrices is matrix multiplication, which differs from ordinary arithmetic multiplication. Two matrices can be multiplied only if their dimensions are compatible, meaning that the number of columns in the first matrix equals the number of rows in the second matrix. An important property of matrix multiplication is that, in general, A·B is not equal to B·A; matrix multiplication is not commutative, although particular pairs of matrices do commute (for example, when A or B is the identity matrix). Similarly, for the multiplication of a matrix and a vector, the number of columns in the matrix must equal the number of rows of the vector. For a matrix A with dimensions m × n and a matrix B with dimensions n × p, the product C = A·B has dimensions m × p. For the special case of square matrices A and B with dimensions n × n, the product is always n × n, while the product of an n × n matrix and an n × 1 vector is n × 1. The general formula of matrix multiplication is as follows:


C = A·B,  where A = [aij] is m × n, B = [bij] is n × p, and C = [cij] is m × p, with

cij = ai1·b1j + ai2·b2j + … + ain·bnj = Σ (k = 1 to n) aik·bkj
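
This summation formula translates directly into code. The following is a minimal Python sketch, assuming nothing beyond the standard library; the example matrices a and b are arbitrary illustrative values:

# A minimal sketch of the formula cij = sum over k of aik * bkj.
def matmul(a, b):
    m, n = len(a), len(a[0])          # a is m x n
    p = len(b[0])                     # b is n x p
    assert len(b) == n, "columns of a must equal rows of b"
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

a = [[1, 2], [3, 4], [5, 6]]          # 3 x 2 example matrix
b = [[7, 8, 9], [10, 11, 12]]         # 2 x 3 example matrix
print(matmul(a, b))                   # 3 x 3 product, as m x p predicts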

Every square matrix has its own eigenvalues (which are scalars, or "numbers") and eigenvectors. It is easiest to understand what these special quantities are through the following example of multiplying a matrix by a vector:


[ 1  2  1 ] [1]   [ 1·1 + 2·0 + 1·1 ]   [2]     [1]
[ 2  0 −2 ] [0] = [ 2·1 + 0·0 − 2·1 ] = [0] = 2 [0]
[−1  2  3 ] [1]   [−1·1 + 2·0 + 3·1 ]   [2]     [1]

The result of the previous multiplication is the initial vector multiplied by a number (or scalar). Vectors with this property are the eigenvectors of the matrix, and the scalar in front of them is the corresponding eigenvalue. A special case is the unit eigenvector, which is an eigenvector with a length (or magnitude) of 1.
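
This worked example can be checked numerically. The sketch below assumes NumPy is available and confirms that (1, 0, 1) is an eigenvector of the matrix above with eigenvalue 2:

import numpy as np

A = np.array([[ 1, 2,  1],
              [ 2, 0, -2],
              [-1, 2,  3]])
v = np.array([1, 0, 1])

print(A @ v)                      # [2 0 2] = 2 * v, so v is an eigenvector
print(np.linalg.eigvals(A))       # all eigenvalues of A, including 2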

Mathematically, eigenvectors are the vectors that, under the linear transformation (here, multiplication by the matrix), change only by a scalar factor; that scalar is the eigenvalue, and it represents the change in magnitude of the initial vector. Eigenvalues can take any real or complex value, and an eigenvalue can even be zero: a zero eigenvalue means that the matrix sends the corresponding eigenvectors to the zero vector, which happens exactly when the matrix is singular. An interesting relation between eigenvalues and eigenvectors can be derived. Consider the vector x = c·y, where c is a constant and y is a unit vector that is an eigenvector of a matrix A:

A·x = λ·x ⇒ A·(c·y) = λ·(c·y) ⇒ c·(A·y) = c·(λ·y) ⇒ A·y = λ·y

An eigenvector multiplied by a nonzero scalar is again an eigenvector, and since this scalar can be an arbitrary real or complex number, each eigenvalue has infinitely many eigenvectors. Each eigenvalue thus leads to its own set of infinitely many eigenvectors, from which a unit eigenvector can be chosen as a representative; in this sense, the number of distinct eigenvalues of a matrix matches the number of such representative unit eigenvectors.
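
As a quick numerical check of the scaling argument above, here is a sketch assuming NumPy, reusing the matrix from the earlier example:

import numpy as np

A = np.array([[ 1, 2,  1],
              [ 2, 0, -2],
              [-1, 2,  3]])
y = np.array([1, 0, 1]) / np.sqrt(2)   # unit eigenvector for eigenvalue 2
for c in (1.0, -3.5, 2 + 1j):          # arbitrary nonzero real/complex scalars
    x = c * y
    print(np.allclose(A @ x, 2 * x))   # True: c*y is still an eigenvector
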
1.1 Background of the Study

Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations. A matrix is a two-dimensional array of expressions or numbers, which can define a system of linear equations. The roots of the characteristic equation of such a matrix are termed eigenvalues; they are also known as characteristic values or characteristic roots. In branches such as physics and engineering, knowledge of eigenvalues and their calculation is extremely important.

1.1.1 Eigenvalue Equation

The equation for finding the eigenvalues of a matrix is known as the eigenvalue equation (or characteristic equation). The eigenvalue equation is shown below:

|A – λI| = 0

where A is an n × n square matrix, λ denotes an eigenvalue of A, I is the identity matrix of the same order as A, and the two parallel lines | | denote the determinant of the expression written within them.
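
As a sketch of the eigenvalue equation in practice (assuming NumPy; the 2 × 2 matrix is an arbitrary example), np.poly returns the coefficients of the characteristic polynomial det(A – λI), and its roots agree with the eigenvalues computed directly:

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

coeffs = np.poly(A)             # coefficients of det(A - lambda*I): [1, -7, 10]
print(np.roots(coeffs))         # roots of the characteristic polynomial: 5 and 2
print(np.linalg.eigvals(A))     # the same eigenvalues computed directly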

1.1.2 Eigenvalue Properties

A few important properties of eigenvalues are as follows:

1) A matrix possesses an inverse if and only if all of its eigenvalues are nonzero.

2) Let us consider an n × n matrix A whose eigenvalues are λ1, λ2, …, λn. Then:


i) The trace of matrix A is equal to the sum of its eigenvalues:

tr(A) = λ1 + λ2 + … + λn

ii) The determinant of matrix A is equal to the product of the eigenvalues of A:

det(A) = λ1 · λ2 · … · λn

iii) The eigenvalues of the k-th power of matrix A, i.e. of A^k, are:

λ1^k, λ2^k, …, λn^k

iv) If the matrix A is invertible, then its inverse A^-1 has the eigenvalues:

1/λ1, 1/λ2, …, 1/λn
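
Properties i), ii), and iv) are easy to verify numerically. A small sketch assuming NumPy, with an arbitrary invertible example matrix:

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lams = np.linalg.eigvals(A)

print(np.isclose(np.trace(A), lams.sum()))           # i): trace = sum of eigenvalues
print(np.isclose(np.linalg.det(A), lams.prod()))     # ii): det = product of eigenvalues
print(np.sort(np.linalg.eigvals(np.linalg.inv(A))))  # iv): eigenvalues of the inverse...
print(np.sort(1.0 / lams))                           # ...equal the reciprocals 1/lambda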

(3) An Eigenvalue Can Be Zero

Situations arise in which zero is one of the eigenvalues of a matrix. In this case one of the solutions of the eigenvalue equation of the given matrix is zero, which means the matrix is singular. In the setting of linear systems of differential equations, a zero eigenvalue corresponds to infinitely many equilibrium points lying on a line through the origin (0, 0), rather than a single equilibrium at the origin.

(4) If A is an n × n triangular matrix (upper triangular, lower triangular, or diagonal), then the eigenvalues of A are the entries of the main diagonal of A.

(5) If μ ≠ 0 is a complex number, λ is an eigenvalue of a matrix A, and x ≠ 0 is a corresponding eigenvector, then μx is also an eigenvector corresponding to λ.

(6) If A is an n × n matrix, then the following statements are equivalent:

• A is invertible.

• λ = 0 is not an eigenvalue of A.

• det(A) ≠ 0.

• Ax = 0 has only the trivial solution.

• Ax = b has exactly one solution for every n × 1 matrix b.

• A^T A is invertible.

• The reduced row-echelon form of A is In.

• A is expressible as a product of elementary matrices.

• The column vectors of A are linearly independent.

• The row vectors of A are linearly independent.

• The column vectors of A span Rn.

• The row vectors of A span Rn.

• The column vectors of A form a basis for Rn.

• The row vectors of A form a basis for Rn.

• A has rank n.

• A has nullity 0.

• The orthogonal complement of the null space of A is Rn.

• The orthogonal complement of the row space of A is {0}.

• The range of TA is Rn.

• TA is one-to-one.

In particular, if λ is an eigenvalue of an invertible matrix A and x ≠ 0 is a corresponding eigenvector, then 1/λ is an eigenvalue of A^-1 and x is a corresponding eigenvector (compare property 2(iv) above). Diagonalizability, by contrast, is independent of invertibility: A is diagonalizable if and only if A has n linearly independent eigenvectors (see properties (7) and (8) below).
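
The first two items of the list can be seen together in a small numerical sketch (assuming NumPy; the singular matrix is an arbitrary rank-1 example):

import numpy as np

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # rank 1, hence not invertible
print(np.linalg.det(S))          # 0.0: det(S) = 0
print(np.linalg.eigvals(S))      # eigenvalues 0 and 5: lambda = 0 appears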

(7) If an n × n matrix A has n distinct eigenvalues, then A is diagonalizable.

(8) If A is a square matrix, then:

• For every eigenvalue of A, the geometric multiplicity is less than or equal to the algebraic multiplicity.

• A is diagonalizable if and only if the geometric multiplicity is equal to the algebraic multiplicity for every eigenvalue (see the sketch below).
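
The two multiplicities in property (8) can be computed directly: the algebraic multiplicity is the number of times λ appears among the eigenvalues, while the geometric multiplicity is n – rank(A – λI). A sketch assuming NumPy, using a standard non-diagonalizable 2 × 2 example (a Jordan block):

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])       # Jordan block: eigenvalue 2 repeated twice
lam, n = 2.0, A.shape[0]

alg_mult = int(np.sum(np.isclose(np.linalg.eigvals(A), lam)))
geo_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n))
print(alg_mult, geo_mult)        # 2 1: multiplicities differ, A is not diagonalizable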

(9) Let A and B be similar matrices. If the similarity transformation is performed by an orthogonal matrix Q or a unitary matrix U, i.e. if

B = Q^T A Q or B = U^H A U,

then we say that the matrices A and B are unitarily similar. Since unitarily similar matrices are a special case of similar matrices, the eigenvalues of unitarily similar matrices are the same.

(10) If A is a Hermitian (or real symmetric) matrix, then:

• The eigenvalues of A are all real numbers.

• Eigenvectors from different eigenspaces are orthogonal.

For example, consider the characteristic equation λ^2 – 4λ = 0; its roots give the eigenvalues λ = 0 and λ = 4.
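
Property (10) can be illustrated with a real symmetric matrix. NumPy's eigh routine is specialized for Hermitian/symmetric matrices; in this sketch (an arbitrary 2 × 2 symmetric example) it returns real eigenvalues and orthonormal eigenvectors:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                    # real symmetric matrix
vals, vecs = np.linalg.eigh(A)
print(vals)                                   # real eigenvalues: [1. 3.]
print(np.allclose(vecs.T @ vecs, np.eye(2)))  # True: eigenvectors are orthonormal
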
1.1.3 Dominant and Complex Eigenvalues

Dominant Eigenvalue

The dominant eigenvalue of a matrix is the eigenvalue that is largest in absolute value. Let us suppose that A is a square matrix of order n and λ1, λ2, …, λn are its eigenvalues, ordered such that:

|λ1| > |λ2| ≥ … ≥ |λn|

Then λ1, the eigenvalue of greatest magnitude among all eigenvalues of matrix A, is known as the dominant eigenvalue.
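
The dominant eigenvalue is precisely what the classical power-iteration method approximates: repeated multiplication by A aligns a vector with the dominant eigenvector. A minimal sketch assuming NumPy (the iteration count and example matrix are arbitrary choices):

import numpy as np

def power_iteration(A, iters=200):
    """Estimate the dominant eigenvalue of A and a unit eigenvector for it."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x = x / np.linalg.norm(x)   # renormalize to avoid overflow
    return x @ A @ x, x             # Rayleigh quotient estimates the eigenvalue

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, x = power_iteration(A)
print(lam)                          # about 3.618, the dominant eigenvalue of A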

Complex Eigenvalue

So far, we know that all the values of λ computed from the eigenvalue equation |A – λI| = 0 are eigenvalues. When the characteristic equation, thus solved, has roots that are complex, the matrix is said to have complex eigenvalues. For a real matrix, such eigenvalues occur in conjugate pairs of the form:

λ1 = a + ib and λ2 = a – ib

where a and b are the real and imaginary parts, respectively.
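
A 2 × 2 rotation matrix is the standard example of a real matrix with complex eigenvalues; the conjugate pair a + ib, a – ib appears directly in this NumPy sketch:

import numpy as np

theta = np.pi / 4                                  # rotate by 45 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.linalg.eigvals(R))   # cos(theta) +/- i*sin(theta): a conjugate pair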

1.2 Statement of Problem


The eigenvalue problem is a problem of considerable theoretical interest and wide-ranging

application. For example, this problem is crucial in solving systems of differential equations,

analyzing population growth models, and calculating powers of matrices (in order to define the

exponential matrix). Other areas such as physics, sociology, biology, economics and statistics

have focused considerable attention on eigenvalues and eigenvectors, their applications, and their computation.

When eigenvalues and eigenvectors are introduced to students, the formal-world concept definition may be given in words, but since it has an embedded symbolic form the student is soon drawn into symbolic-world manipulations of algebraic and matrix representations, e.g. transforming Ax = λx to |A – λI| = 0. In this way the strong visual, or embodied metaphorical, image of eigenvectors can be obscured by the strength of the formal and symbolic thrust. However, using an enactive, embodied approach first could give a feeling for what eigenvalues and their associated eigenvectors are, and how they relate to the algebraic representation.

1.3 Objective of the Study
