A Short Course on Linear Vector Spaces
Lecture I
Basic Concepts:
A vector has components. Example: the position vector r = (x, y, z).
A scalar has no components. Example: mass m.
The components of a vector are scalars; these scalars are numbers that can be real or complex. Real numbers are drawn from the real field ℝ and complex numbers are drawn from the complex field ℂ.
Therefore, a vector is always defined over a field. The number of components (or tuples) of a vector corresponds to the dimension of the space, which we define later as a vector space! In the beginning, as we wrote them, the components x, y, z and the mass m are all real, and so they belong to the real field. We write x, y, z, m ∈ ℝ.
To construct a vector space, there are some axioms that have to be followed which we describe
later.
Why the study of Vector Space?
Many mathematical systems satisfy the vector space axioms. The set of all complex numbers and suitable sets of vectors, matrices, polynomials and functions all satisfy the structure of a vector space. We should therefore study the properties of the abstract vector space in general; then we can apply them to more specific mathematical systems representing specific physical problems.
Examples from Physics: Fourier components, Special functions etc.
What is a Field?
A field is a set in which the mathematical operations of addition, subtraction, multiplication and division are well defined among its members.
Examples: the set of rational numbers ℚ, the set of real numbers ℝ, the set of complex numbers ℂ.
Note: the set of integers ℤ is not a field (division fails within the set). In general, we denote a field by the symbol F. A field can be infinite or finite according to its number of members.
To define a field:
A field is a nonempty set F on which two operations, addition (+) and multiplication (·), are defined such that the following axioms hold for all members a, b, c ∈ F.
Vector Space: Lecture Notes compiled by Dr. Abhijit Kar Gupta, kg.abhi@gmail.com
Addition:
i. a + b ∈ F (Closure property)
ii. a + b = b + a (Commutativity)
iii. (a + b) + c = a + (b + c) (Associativity)
iv. There exists 0 ∈ F such that a + 0 = a (Identity)
v. There exists an element −a ∈ F so that a + (−a) = 0 (Inverse)
Multiplication:
i. a·b ∈ F (Closure property)
ii. a·b = b·a (Commutativity)
iii. (a·b)·c = a·(b·c) (Associativity)
iv. There exists 1 ∈ F such that a·1 = a (Identity)
v. If a ≠ 0, there exists an element a⁻¹ ∈ F such that a·a⁻¹ = 1 (Inverse)
In addition, multiplication distributes over addition: a·(b + c) = a·b + a·c.
Next we define a vector space in terms of axioms.
Vector Space:
A nonempty set V is said to be a vector space over a field F if for all u, v, w ∈ V and α, β ∈ F the following axioms hold.
Addition:
i. u + v ∈ V (Closure)
ii. u + v = v + u (Commutativity)
iii. (u + v) + w = u + (v + w) (Associativity)
iv. There exists an element 0 ∈ V such that v + 0 = v (Identity)
v. There exists an element −v ∈ V so that v + (−v) = 0 (Inverse)
Scalar Multiplication:
i. αv ∈ V (Closure)
ii. α(u + v) = αu + αv (Distributive over vector addition)
iii. (α + β)v = αv + βv (Distributive over scalar addition)
iv. α(βv) = (αβ)v (Associativity)
v. 1·v = v (Identity)
Example:
ℝ² = {(x, y) : x, y ∈ ℝ}, ℝ³ = {(x, y, z) : x, y, z ∈ ℝ}, and so on.
Similarly, Fⁿ is a vector space over the field F where the vectors are n-tuples and the elements are drawn from the field F. We symbolically write
Fⁿ = {(a₁, a₂, …, aₙ) : aᵢ ∈ F}.
Any set that is isomorphic to the vector space Fⁿ is itself an n-dimensional vector space over F.
[We may study linear mappings between vector spaces and their properties in general; then there are the concepts of image and kernel, and we may look up any textbook for the definitions.]
Some Examples of Vector Spaces
# MATRICES:
An m×n matrix over a field F is an array A = (aᵢⱼ) with entries aᵢⱼ ∈ F.
Each row in the above can be regarded as a vector (an n-tuple) in Fⁿ.
Any operation defined for the vector space Fⁿ carries over to the rows of A.
The operations of addition and scalar multiplication:
(A + B)ᵢⱼ = aᵢⱼ + bᵢⱼ, (αA)ᵢⱼ = α·aᵢⱼ,
where i = 1, …, m (rows), j = 1, …, n (columns), and α ∈ F.
Note: the vector space of m×n matrices over F is isomorphic to F^{mn}. For n = 1 we obtain the vector space of column vectors.
We can similarly construct the vector spaces with functions or polynomials etc.
(More discussions on this, later.)
On Linear Dependence of Vectors:
Linear combination: let v = α₁v₁ + α₂v₂ + ⋯ + αₙvₙ, where v₁, …, vₙ ∈ V and α₁, …, αₙ ∈ F.
Linear dependence: the vectors v₁, …, vₙ are linearly dependent if the linear combination
α₁v₁ + α₂v₂ + ⋯ + αₙvₙ = 0
holds with the coefficients αᵢ not all zero.
Linear independence: the set of vectors {v₁, …, vₙ} is linearly independent if α₁v₁ + ⋯ + αₙvₙ = 0 yields α₁ = α₂ = ⋯ = αₙ = 0.
Examples:
#1. Is the given set of vectors linearly independent?
Ans. Setting a linear combination of the vectors equal to zero forces every coefficient to vanish. So the vectors are linearly independent.
#2. The following set of vectors can be tested in the same way.
#3. What about a set of vectors for which nonzero coefficients can satisfy the above? In that case we can say the vectors are linearly dependent.
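The dependence test above is easy to carry out numerically: a set of vectors is linearly independent exactly when the matrix having them as rows has full rank. A small NumPy sketch (the vectors here are made up for illustration, not the ones from the examples):

```python
import numpy as np

# Hypothetical vectors, chosen only to illustrate the rank test.
v1, v2, v3 = np.array([1.0, 0, 0]), np.array([1.0, 1, 0]), np.array([1.0, 1, 1])

A = np.vstack([v1, v2, v3])               # rows are the vectors
independent = np.linalg.matrix_rank(A) == 3
print(independent)                        # full rank: linearly independent

# A dependent set: w3 is a combination of w1 and w2.
w1, w2 = np.array([1.0, 2, 3]), np.array([4.0, 5, 6])
w3 = 2 * w1 - w2
B = np.vstack([w1, w2, w3])
dependent = np.linalg.matrix_rank(B) < 3
print(dependent)                          # rank deficient: linearly dependent
```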
Lecture II
Vector Subspace/ Spanning Set/ Inner Products
[In Lecture Notes I we provided a formal introduction to vector spaces with some examples. Here we continue with the structure and provide more practical examples.]
Subspaces:
A vector subspace is a vector space that is embedded in a larger vector space. In other words, a subspace is a subset of a vector space that is itself a vector space!
Note: being a subset, the vectors in it automatically satisfy most of the vector space axioms. The additional condition that is required is the closure property. This in turn implies the inclusion of the zero element.
Symbolically, we write W ⊆ V: W is a subspace of V, where V is a vector space defined over a field F.
Conditions:
Let V be a vector space over a field F. Now if W is a subspace of V, then the following conditions have to be satisfied:
(i) 0 ∈ W, where 0 is the zero vector (zero-vector inclusion);
(ii) u + v ∈ W for all u, v ∈ W (closure under addition);
(iii) αu ∈ W for α ∈ F and u ∈ W (closure under scalar multiplication).
Examples:
#1. Straight lines or planes through the origin constitute subspaces of three-dimensional Euclidean space ℝ³.
The xy-plane {(x, y, 0) : x, y ∈ ℝ} is a 2D vector subspace of ℝ³, which is isomorphic to ℝ².
In the same way, the x-axis {(x, 0, 0) : x ∈ ℝ} is a 1D vector subspace of ℝ³, which is then isomorphic to ℝ.
#2. To prove that the set W of vectors in ℝ³ satisfying the given linear constraint is a subspace of ℝ³.
Proof:
(i) Zero-vector inclusion:
0 = (0, 0, 0), which satisfies the constraint, so 0 ∈ W.
(ii) Closure property under addition:
Consider two vectors u, v ∈ W. Their sum again satisfies the constraint. Thus u + v ∈ W.
(iii) Closure under scalar multiplication:
Let u ∈ W; then αu satisfies the constraint for all α ∈ ℝ and all u ∈ W.
Therefore, we can say W is a subspace of ℝ³.
Spanning Set:
Let {v₁, v₂, …, vₙ} be a set of elements from a vector space V. The vector subspace W consisting of all linear combinations of v₁, …, vₙ is written as W = span{v₁, …, vₙ}. The set of vectors {v₁, …, vₙ} is then called a spanning set for W.
Note the following:
Every element of W is a linear combination of {v₁, …, vₙ}.
W is the smallest subspace containing {v₁, …, vₙ}; we say W is spanned by v₁, v₂, …, vₙ, and these vectors make a spanning set for W.
Now we can check: every w ∈ W is a linear combination w = α₁v₁ + α₂v₂ + ⋯ + αₙvₙ of the above vectors. Thus {v₁, …, vₙ} is a spanning set for W.
Basis Set:
A basis set is a set of vectors {v₁, …, vₙ} that spans the space and whose members are linearly independent, so that every vector has unique coordinates with respect to {v₁, …, vₙ}.
In the example above, the constraint lets us write the general element in terms of free parameters. Making simple choices of the parameters (setting one parameter to 1 and the rest to 0, in turn) gives vectors which we can check to be linearly independent and spanning, and they can be chosen as a basis set.
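That a basis gives unique coordinates can be seen concretely: with the basis vectors as columns of a matrix B, solving B·c = w gives the (unique) coefficient vector c. A sketch with an assumed basis of ℝ³ (any invertible B works):

```python
import numpy as np

# Assumed basis of R^3: the columns of B (invertible, so a genuine basis).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
w = np.array([2.0, 3.0, 1.0])

c = np.linalg.solve(B, w)        # unique coordinates of w in this basis
reconstructed = B @ c            # w as a linear combination of the basis
assert np.allclose(reconstructed, w)
print(c)
```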
H.W. Problems:
#1. Prove the above claim (in the example)
#2. Prove that if
and
, then
{
{ }
DIRECT SUM:
V = U ⊕ W is a direct sum when every vector v in V can be written in a unique way (i.e., one and only one way) as v = u + w with u ∈ U and w ∈ W.
Example:
U = {(x, y, 0)}, the xy-plane, and W = {(0, 0, z)}, the line along the z-axis.
Any vector (x, y, z) = (x, y, 0) + (0, 0, z).
Note the difference between the following two cases:
Ordinary SUM (1st case): with U = {(x, y, 0)} and W′ = {(0, y, z)} we can write, e.g., (x, y, z) = (x, y, 0) + (0, 0, z) or (x, 0, 0) + (0, y, z), so we can write the sum in more than one way and it is not unique.
Direct SUM (2nd case): with W = {(0, 0, z)}, the decomposition (x, y, z) = (x, y, 0) + (0, 0, z) is unique!
THEOREM:
The vector space V is the direct sum of its subspaces U and W if and only if (i) V = U + W and (ii) U ∩ W = {0}, i.e., the intersection consists only of the zero element.
Proof:
Consider V = U + W and U ∩ W = {0}.
Let v ∈ V. There are vectors u ∈ U and w ∈ W for which v = u + w.
To show uniqueness, also let v = u′ + w′ with u′ ∈ U and w′ ∈ W.
Thus we have u + w = u′ + w′, so u − u′ = w′ − w.
But u − u′ ∈ U and w′ − w ∈ W, so u − u′ = w′ − w ∈ U ∩ W = {0}.
Hence u = u′ and w = w′.
A useful idea applied in Physics:
Consider U to be the vector space of symmetric matrices and W to be the vector space of antisymmetric matrices.
Any square matrix A can be written as the sum of a symmetric and an antisymmetric matrix:
A = ½(A + Aᵀ) + ½(A − Aᵀ),
where ½(A + Aᵀ) is a symmetric matrix and ½(A − Aᵀ) is an antisymmetric matrix.
We can say V = U + W, with ½(A + Aᵀ) ∈ U and ½(A − Aᵀ) ∈ W.
Next, we show that U ∩ W = {0}.
Suppose some element B belongs to the intersection of the two sets, B ∈ U ∩ W.
Thus we will have Bᵀ = B and also Bᵀ = −B, which means B = 0.
Hence U ∩ W = {0}.
Therefore, V = U ⊕ W.
NOTE:
The general forms of 2×2 real symmetric and antisymmetric matrices are
(a b; b c) and (0 b; −b 0).
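The decomposition A = ½(A + Aᵀ) + ½(A − Aᵀ) is easy to check numerically; a sketch with an arbitrary matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])   # an arbitrary square matrix

S = 0.5 * (A + A.T)                # symmetric part
K = 0.5 * (A - A.T)                # antisymmetric part

assert np.allclose(S, S.T)         # S is symmetric
assert np.allclose(K, -K.T)        # K is antisymmetric
assert np.allclose(S + K, A)       # the two parts sum back to A
```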
Inner Products:
The inner product of two vectors is a generalization of the dot product that we already know.
Inner Product:
[The real inner product is a function (mapping) from V × V → ℝ.]
For the real inner product ⟨u, v⟩, the following axioms hold:
(i) ⟨αu + βv, w⟩ = α⟨u, w⟩ + β⟨v, w⟩ for all u, v, w ∈ V and α, β ∈ ℝ [Linearity]
(ii) ⟨u, v⟩ = ⟨v, u⟩ for all u, v ∈ V [Symmetry]
(iii) ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 if and only if u = 0 [Positive definiteness]
Note:
Linearity and symmetry together are called bilinearity.
Let u = (u₁, u₂, …, uₙ) and v = (v₁, v₂, …, vₙ) in ℝⁿ.
We define the inner product
⟨u, v⟩ = u₁v₁ + u₂v₂ + ⋯ + uₙvₙ [Euclidean inner product].
If the vectors are represented by column matrices, then ⟨u, v⟩ = uᵀv [uᵀ is the transpose of u].
Now suppose u and v are two vectors in ℝⁿ and A is an n×n matrix. Av is a column and is another vector in ℝⁿ.
Now consider the inner product ⟨u, Av⟩ = uᵀ(Av) = (Aᵀu)ᵀv = ⟨Aᵀu, v⟩.
Also, we can have ⟨Au, v⟩ = ⟨u, Aᵀv⟩.
Example:
For the given pair of vectors u and v, ⟨u, v⟩ = 0, so u and v are orthogonal.
The NORM of a vector is defined as ‖u‖ = √⟨u, u⟩ (inner product with itself).
This is the Euclidean norm when ⟨u, v⟩ = Σᵢ uᵢvᵢ.
The following rules hold for a NORM:
(i) ‖u‖ ≥ 0 for all u, and ‖u‖ = 0 if and only if u = 0.
(ii) ‖αu‖ = |α|·‖u‖ for all u ∈ V and all α ∈ ℝ.
(iii) ‖u + v‖ ≤ ‖u‖ + ‖v‖ (triangle inequality).
Note:
The Euclidean norm ‖u‖ = (Σᵢ uᵢ²)^{1/2} is our special case. It may be called the 2-norm.
Two important theorems on the inner product and the sum of two vectors
CAUCHY-SCHWARZ Inequality:
For two vectors u, v: |⟨u, v⟩| ≤ ‖u‖·‖v‖.
Proof:
According to the definition, ⟨u, v⟩ = Σᵢ uᵢvᵢ [Euclidean, defined on the real field].
Consider any two real numbers a and b. Since (a − b)² ≥ 0, we have ab ≤ ½(a² + b²).
Now, considering a = |uᵢ|/‖u‖ and b = |vᵢ|/‖v‖, we set
|uᵢvᵢ|/(‖u‖‖v‖) ≤ ½(uᵢ²/‖u‖² + vᵢ²/‖v‖²). …(1)
Here we considered the norms ‖u‖² = Σᵢ uᵢ² and also ‖v‖² = Σᵢ vᵢ².
From (1), summing over i on both sides,
Σᵢ |uᵢvᵢ|/(‖u‖‖v‖) ≤ ½(1 + 1) = 1.
In the above, we have considered the following identity: |Σᵢ uᵢvᵢ| ≤ Σᵢ |uᵢvᵢ|.
Hence |⟨u, v⟩| ≤ ‖u‖·‖v‖ (proved).
[Note that equality holds when u and v are proportional.]
Triangle inequality: ‖u + v‖ ≤ ‖u‖ + ‖v‖.
Proof:
‖u + v‖² = ⟨u + v, u + v⟩ = ‖u‖² + 2⟨u, v⟩ + ‖v‖². …(2)
Now we apply the Cauchy-Schwarz inequality, ⟨u, v⟩ ≤ ‖u‖‖v‖.
From (2), ‖u + v‖² ≤ ‖u‖² + 2‖u‖‖v‖ + ‖v‖² = (‖u‖ + ‖v‖)², so ‖u + v‖ ≤ ‖u‖ + ‖v‖ (proved).
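Both inequalities are easy to verify numerically for any pair of vectors; a NumPy sketch with random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(5)
v = rng.standard_normal(5)

# Cauchy-Schwarz: |<u, v>| <= ||u|| ||v||
lhs = abs(u @ v)
rhs = np.linalg.norm(u) * np.linalg.norm(v)
assert lhs <= rhs + 1e-12

# Triangle inequality: ||u + v|| <= ||u|| + ||v||
tri_lhs = np.linalg.norm(u + v)
tri_rhs = np.linalg.norm(u) + np.linalg.norm(v)
assert tri_lhs <= tri_rhs + 1e-12
```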
Note:
Now if the inner product ⟨u, v⟩ = 0, i.e., the vectors are orthogonal, we get back the Pythagoras theorem: ‖u + v‖² = ‖u‖² + ‖v‖²!
Also, under the new symbols, we rediscover: if ‖u‖ = 1, i.e., if ⟨u, u⟩ = 1, then u is called a unit vector. For any vector u, û = u/‖u‖ (normalized).
The nonnegative real number d(u, v) = ‖u − v‖ is called the distance between u and v.
Inner product defined on a complex vector space:
⟨u, v⟩ = Σᵢ uᵢ* vᵢ,
where * denotes complex conjugation.
Note that ⟨u, v⟩ = ⟨v, u⟩*, and ⟨αu, v⟩ = α*⟨u, v⟩ while ⟨u, αv⟩ = α⟨u, v⟩.
The bilinearity is lost!
NOTE: A real inner product space is sometimes called a Euclidean space, and a complex inner product space is called a unitary space.
Lecture III
Eigenvalues and Eigenvectors
[In Lecture Notes I & II, we provided a formal introduction to vector spaces, subspaces, inner products etc. with examples.]
Consider a linear map T: V → V over a field F.
If there exists a nonzero vector v and a scalar λ such that
T(v) = λv,
then v is an eigenvector of T and λ is an eigenvalue.
If the mapping is done by a real n×n matrix A and v ∈ ℝⁿ, we write
(A − λI)v = 0, …(1)
where I is the n×n identity matrix. The above matrix equation (1) has a nontrivial solution if and only if
det(A − λI) = 0.
Note:
Otherwise, if det(A − λI) ≠ 0, then (A − λI)⁻¹ exists and v = 0 is the only, trivial solution!
NOTE:
det(A − λI) = 0 is the characteristic equation of A; it is a polynomial equation of degree n in λ. Its solution gives n values of λ, which are the eigenvalues. However, the eigenvalues may not all be distinct.
For each distinct eigenvalue, there is an eigenvector.
Example #1: Consider a 2×2 matrix A.
Solving the characteristic polynomial det(A − λI) = 0 gives the eigenvalues, and for each eigenvalue we solve (A − λI)v = 0 to find an eigenvector v.
As one parameter can be arbitrary, we choose it for convenience, for example setting one component equal to 1. Similarly, the other eigenvector can be found easily.
[Note that as the overall scale of an eigenvector can be arbitrarily chosen, the length of the eigenvector is arbitrary.]
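In practice the eigenvalue problem is solved with a library routine; a sketch using NumPy with a made-up 2×2 matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # a sample 2x2 matrix

eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are eigenvectors

# Verify A v = lambda v for each pair.
for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)

print(sorted(eigvals.real))          # this matrix has eigenvalues 1 and 3
```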
Similarity to a Diagonal Matrix
Suppose the n×n matrix A has n linearly independent eigenvectors. Each eigenvector can be thought of as a column, and we arrange the columns side by side to construct a matrix P.
Now the matrix P is such that its columns are linearly independent!
[Note: the matrix P has maximal rank and is nonsingular. Thus P⁻¹ exists.]
Consider
AP = (Av₁, Av₂, …, Avₙ) = (λ₁v₁, λ₂v₂, …, λₙvₙ) = PD,
[each of the Avᵢ is a column matrix]
where D = diag(λ₁, λ₂, …, λₙ).
Therefore P⁻¹AP = D, a diagonal matrix whose diagonal entries are the eigenvalues.
A reverse thinking:
Imagine we have a similarity transformation B = P⁻¹AP; then A and B share the same eigenvalues.
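The construction P⁻¹AP = D can be checked directly; a sketch with an assumed matrix having distinct eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])           # eigenvalues 5 and 2 (distinct)

eigvals, P = np.linalg.eig(A)        # P has the eigenvectors as columns

D = np.linalg.inv(P) @ A @ P         # similarity transformation
assert np.allclose(D, np.diag(eigvals))   # D is diagonal with the eigenvalues
```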
An Important Theorem:
Two matrices and represent the same linear operator if and only if they are similar to each
other. [In fact, all the matrix representations of a linear operator form an equivalence class of similar
matrices.]
Vector Space: Lecture Notes compiled by Dr. Abhijit Kar Gupta, kg.abhi@gmail.com
16
Home Work
Consider the given 3×3 matrix. Find the eigenvalues. Find the eigenvectors. Prove that the eigenvectors are distinct (independent). Now diagonalize to show that the diagonal elements are themselves the eigenvalues.
Some important propositions:
#1. An n×n matrix with n distinct eigenvalues is diagonalizable!
#2. Eigenvalues are invariant under a similarity transformation.
[Note: a similarity transformation corresponds to a change of basis.]
Proof:
Av = λv, …(1)
where λ is an eigenvalue of the square matrix A.
Let B = P⁻¹AP, where P is some nonsingular matrix.
Thus from (1), P⁻¹Av = λP⁻¹v, i.e., (P⁻¹AP)(P⁻¹v) = λ(P⁻¹v).
Now we can say Bv′ = λv′ with v′ = P⁻¹v.
Thus we see the eigenvalue λ of A is also an eigenvalue of the matrix B = P⁻¹AP, corresponding to a different eigenvector v′ = P⁻¹v.
What about the trace? Does it remain invariant under a similarity transformation?
Trace and determinant both remain invariant under a similarity transformation.
Trace: Tr(B) = Tr(P⁻¹AP) = Tr(APP⁻¹) = Tr(A).
[Note: we used the cyclic property Tr(XY) = Tr(YX).]
Determinant: det(B) = det(P⁻¹AP) = det(P⁻¹)·det(A)·det(P) = det(A).
[Since det(P⁻¹) = 1/det(P).]
NOTE: Trace and determinant remain invariant in any case, even for matrices that are not diagonalizable!
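The invariance of trace and determinant under similarity can be demonstrated numerically (a sketch with random matrices; a random P is nonsingular with probability one):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))       # generically nonsingular
B = np.linalg.inv(P) @ A @ P          # a matrix similar to A

assert np.isclose(np.trace(B), np.trace(A))
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
```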
EIGENSPACE:
If the eigenvalue λ is a multiple root of the characteristic equation, then there can be a number of eigenvectors corresponding to the particular eigenvalue λ. The set of eigenvectors corresponding to λ, together with the zero vector, forms the eigenspace E_λ.
CAYLEY-HAMILTON (CH) THEOREM: every square matrix satisfies its own characteristic equation, Δ(A) = 0.
For a diagonalizable matrix A = PDP⁻¹ we can write A² = PD²P⁻¹, and similarly Aᵏ = PDᵏP⁻¹ for any k.
In this way, we can arrive at the following: for the characteristic polynomial Δ(λ),
Δ(A) = P·Δ(D)·P⁻¹.
Now Δ(D) = diag(Δ(λ₁), …, Δ(λₙ)) = 0, the zero matrix, since each λᵢ is a root of Δ.
Therefore, we can say Δ(A) = 0.
[NOTE: this is a special proof for the case when the matrix is diagonalizable. For the general proof, consult any book, for example the book by Lipschutz (Schaum Series).]
Applications of the CH Theorem:
Consider a 2×2 matrix A with characteristic equation Δ(λ) = λ² + c₁λ + c₀ = 0.
From the CH theorem, we can write A² = −c₁A − c₀I.
So a higher power of A can be expressed in terms of lower powers of A. This is indeed a nice trick!
Now we can easily calculate even higher powers of the matrix: A³ = A·A², and so on.
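For a 2×2 matrix the characteristic polynomial is λ² − (Tr A)λ + det A, so the CH theorem gives A² = (Tr A)A − (det A)I. A sketch with a hypothetical matrix (not the one from the lecture example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])            # hypothetical 2x2 matrix
t = np.trace(A)                        # characteristic poly: l^2 - t*l + d
d = np.linalg.det(A)

# Cayley-Hamilton: A^2 = t*A - d*I
assert np.allclose(A @ A, t * A - d * np.eye(2))

# Higher powers reduce too: A^3 = t*A^2 - d*A = (t^2 - d)*A - t*d*I
A3 = (t * t - d) * A - t * d * np.eye(2)
assert np.allclose(A3, A @ A @ A)
```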
Eigenvalues and Eigenvectors of a REAL SYMMETRIC MATRIX
Eigenvalues are real:
Consider Av = λv, where A is a real matrix. Taking the conjugate transpose, v†A† = λ*v†.
If A is a real symmetric matrix, A† = A, which means
v†Av = λ·v†v and also v†Av = λ*·v†v.
But v†v = ‖v‖² ≠ 0 for any v ≠ 0, so λ = λ*: λ is real.
[Note: we used the inner product definition ⟨v, v⟩ = v†v = ‖v‖², the norm squared.]
Eigenvectors are orthogonal:
Let A be a real symmetric matrix with eigenvectors u, v corresponding to two distinct eigenvalues λ₁, λ₂:
Au = λ₁u and Av = λ₂v.
Now,
vᵀAu = λ₁·vᵀu …(1)
uᵀAv = λ₂·uᵀv …(2)
Take the transpose of (2): vᵀAᵀu = λ₂·vᵀu, i.e., vᵀAu = λ₂·vᵀu …(3) [since Aᵀ = A].
Subtracting (3) from (1): (λ₁ − λ₂)·vᵀu = 0. As the eigenvalues are real and distinct, vᵀu = 0.
Note:
The rows and columns of an orthogonal matrix are orthonormal.
[QᵀQ = I says the columns are orthonormal; QQᵀ = I says the rows are orthonormal.]
A real symmetric matrix has real eigenvalues.
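NumPy provides `eigh`, specialized for symmetric/Hermitian matrices; it returns real eigenvalues and orthonormal eigenvectors, illustrating both results above (the matrix here is a made-up example):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])      # real symmetric

eigvals, Q = np.linalg.eigh(A)       # eigh: for symmetric/Hermitian matrices

assert np.all(np.isreal(eigvals))               # eigenvalues are real
assert np.allclose(Q.T @ Q, np.eye(3))          # eigenvectors are orthonormal
assert np.allclose(Q.T @ A @ Q, np.diag(eigvals))   # orthogonally diagonalized
```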
Operation of an Orthogonal Matrix:
Let Q be an orthogonal matrix, QᵀQ = I. When Q is applied to a vector v, we get a new vector v′ = Qv.
‖v′‖² = (Qv)ᵀ(Qv) = vᵀQᵀQv = vᵀv = ‖v‖².
So the norm is unchanged. This represents a rotation!
#Example:
Consider the 2×2 rotation matrix with columns (cos θ, sin θ) and (−sin θ, cos θ). The matrix is not necessarily symmetric here, but we may check QᵀQ = I (orthogonal).
Check the columns: each column has unit norm, and the inner product of the two columns vanishes. The columns are orthonormal!
NOTE:
An n×n matrix is diagonalizable if it has n distinct eigenvalues.
A general square matrix cannot necessarily be diagonalized by a similarity transformation.
Consider a real symmetric matrix. This is very special: it is always diagonalizable, whether or not its eigenvalues are distinct.
#Example:
For the given 2×2 real symmetric matrix A, the characteristic equation gives two real eigenvalues, and the corresponding eigenvectors can be checked to be mutually orthogonal.
Similarity transformation:
Constructing P from the normalized eigenvectors (so that P is orthogonal), P⁻¹AP is diagonal, with the eigenvalues on the diagonal. Diagonalized!
Important conclusion:
A real symmetric matrix is diagonalizable by a similarity transformation through a real orthogonal matrix!
Some important points:
1. A linear operator U is said to be unitary if its adjoint operator equals its inverse, U† = U⁻¹, i.e., U†U = I.
For an eigenvalue, Uv = λv:
⟨Uv, Uv⟩ = |λ|²⟨v, v⟩ = ⟨v, v⟩, which means the eigenvalue satisfies |λ| = 1.
Lecture IV
[In this lecture set, we discuss inner products, orthogonality, unitary and Hermitian operators, applications and all that.]
Some important ideas:
Orthogonal diagonalizability of a real symmetric matrix
Consider a real matrix A that is diagonalized by an orthogonal matrix Q: D = QᵀAQ, so A = QDQᵀ.
Then Aᵀ = (QDQᵀ)ᵀ = QDᵀQᵀ = QDQᵀ = A. Thus the matrix is symmetric!
Note: if a real matrix is diagonalizable by an orthogonal matrix, it is symmetric.
Similarity transform and characteristic equation:
Consider two similar matrices, A and B = P⁻¹AP:
det(B − λI) = det(P⁻¹AP − λI) = det(P⁻¹(A − λI)P) = det(P⁻¹)·det(A − λI)·det(P) = det(A − λI).
Thus similar matrices have the same characteristic polynomial.
Operations with a UNITARY Matrix
Preliminaries:
In our previous lecture, we defined the inner product on a complex vector space: ⟨u, v⟩ = u†v.
# Examples: for each of the given 2×2 matrices U, one may verify U†U = I, i.e., that U is unitary.
Note: The product of two unitary matrices is a unitary matrix. The inverse of a unitary matrix is another unitary matrix. The identity matrix is unitary. The n×n unitary matrices form a group, called the unitary group.
Consider a special unitary matrix, i.e., a unitary matrix with determinant equal to 1.
These special 2×2 unitary matrices form a special unitary group: SU(2).
The eigenvalues of a unitary matrix have unit modulus:
Let us consider v to be an eigenvector corresponding to the eigenvalue λ of the unitary matrix U: Uv = λv.
Therefore ⟨Uv, Uv⟩ = ⟨v, U†Uv⟩ = ⟨v, v⟩, and also ⟨Uv, Uv⟩ = |λ|²⟨v, v⟩, so |λ|² = 1, i.e., |λ| = 1.
The inner product is preserved under the operation of a unitary matrix:
⟨Uu, Uv⟩ = ⟨u, U†Uv⟩ = ⟨u, v⟩.
Take any vector and expand it in terms of the orthonormal basis formed by the eigenvectors:
u = Σᵢ aᵢeᵢ, v = Σᵢ bᵢeᵢ.
[Assume ⟨eᵢ, eⱼ⟩ = δᵢⱼ.]
Then ⟨u, v⟩ = Σᵢ aᵢ*bᵢ, and also ⟨Uu, Uv⟩ = Σᵢ aᵢ*bᵢ, term by term for each i.
Thus the inner product is preserved under the operation of a unitary matrix. In other words, we can say that the linear operator carries an orthonormal basis into another orthonormal basis, and such an operator is a unitary operator.
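Both properties (unit-modulus eigenvalues and preservation of the inner product) can be checked numerically; a sketch with an assumed unitary matrix built from a rotation times a phase:

```python
import numpy as np

theta = 0.7
# An assumed 2x2 unitary matrix: a real rotation multiplied by a phase.
U = np.exp(0.3j) * np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]])

assert np.allclose(U.conj().T @ U, np.eye(2))     # U is unitary

rng = np.random.default_rng(2)
u = rng.standard_normal(2) + 1j * rng.standard_normal(2)
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# <Uu, Uv> = <u, v>  (np.vdot conjugates its first argument)
assert np.isclose(np.vdot(U @ u, U @ v), np.vdot(u, v))

lam = np.linalg.eigvals(U)
assert np.allclose(np.abs(lam), 1.0)              # eigenvalues with |lambda| = 1
```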
Operations with a Unitary Matrix:
NOTE:
The operation of U on the vectors of a vector space is an isomorphism of the space onto itself.
Operations with a HERMITIAN Matrix
A Hermitian matrix is self-adjoint: H† = H.
All the eigenvalues of a Hermitian operator are real numbers.
Take any eigenvalue λ corresponding to the eigenvector v: Hv = λv.
Consider ⟨v, Hv⟩ = λ⟨v, v⟩. Again, ⟨v, Hv⟩ = ⟨H†v, v⟩ = ⟨Hv, v⟩ = λ*⟨v, v⟩, since H is Hermitian. Hence λ = λ*: λ is real.
If H is Hermitian, any power Hᵏ is also Hermitian.
Suppose H₁ and H₂ are Hermitian. Then (H₁H₂)† = H₂†H₁† = H₂H₁.
Thus we can write: the product H₁H₂ is Hermitian if and only if H₁H₂ = H₂H₁, which means H₁ and H₂ commute!
NOTE: The reverse is also true.
Any matrix which is not Hermitian can be expressed as the sum of a Hermitian matrix and a skew-Hermitian matrix: A = ½(A + A†) + ½(A − A†).
Suppose U is a unitary matrix and H is a Hermitian matrix. Consider the similarity transformation H′ = U†HU.
Now take the adjoint of the above: (H′)† = (U†HU)† = U†H†U = U†HU = H′.
So the similarity-transformed matrix is self-adjoint!
# EXAMPLES of Hermitian matrices that we often encounter in Physics:
Pauli spin matrices:
σ₁ = (0 1; 1 0), σ₂ = (0 −i; i 0), σ₃ = (1 0; 0 −1).
If we now construct a general linear combination of these with real coefficients, that is also a Hermitian matrix.
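The Hermiticity of the Pauli matrices, and of any real linear combination of them, is quick to verify (the coefficients below are arbitrary):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

for s in (s1, s2, s3):
    assert np.allclose(s, s.conj().T)      # each Pauli matrix is Hermitian
    assert np.allclose(s @ s, np.eye(2))   # and each squares to the identity

# A real linear combination (arbitrary coefficients) is again Hermitian.
H = 0.3 * s1 + 0.5 * s2 - 1.2 * s3
assert np.allclose(H, H.conj().T)
```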
Gram-Schmidt Orthonormalization Process
This is a method to construct an orthonormal basis from an arbitrary basis:
{v₁, v₂, …, vₙ} → {u₁, u₂, …, uₙ}.
Consider w₁ = v₁ and normalize: u₁ = w₁/‖w₁‖, so that u₁ is normal (unit norm).
Next, w₂ = v₂ − ⟨v₂, u₁⟩u₁, and u₂ = w₂/‖w₂‖ is normal, with ⟨u₁, u₂⟩ = 0.
Then w₃ = v₃ − ⟨v₃, u₁⟩u₁ − ⟨v₃, u₂⟩u₂, and u₃ = w₃/‖w₃‖ is normal.
In general, wₖ = vₖ − Σ_{i<k} ⟨vₖ, uᵢ⟩uᵢ and uₖ = wₖ/‖wₖ‖.
Notice the following transition: expressing {v₁, v₂, v₃} in terms of {u₁, u₂, u₃}, the transition matrix is clearly in triangular form.
To use the method:
For a vector v, if we construct a new vector w in terms of the orthonormal set {u₁, …, uₖ} in the following way,
w = v − ⟨v, u₁⟩u₁ − ⋯ − ⟨v, uₖ⟩uₖ,
then the vector w will be orthogonal to each of the uᵢ's.
To check:
⟨uⱼ, w⟩ = ⟨uⱼ, v⟩ − ⟨v, uⱼ⟩ = 0 [{u₁, …, uₖ} is an orthonormal set].
This is true for all uⱼ, j = 1, …, k.
Example #1:
Consider the given set of vectors in ℝ³. Applying the process, we can check that the resulting vectors are mutually orthogonal and of unit norm.
Example #2: (Ref: Hoffman-Kunze)
Consider the given vectors in ℝ³ and apply the Gram-Schmidt process to find an orthogonal set of vectors.
Soln.
w₁ = v₁;
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁;
w₃ = v₃ − (⟨v₃, w₁⟩/⟨w₁, w₁⟩)w₁ − (⟨v₃, w₂⟩/⟨w₂, w₂⟩)w₂.
Now one can check that ⟨w₁, w₂⟩ = ⟨w₁, w₃⟩ = ⟨w₂, w₃⟩ = 0.
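The process above translates into a few lines of code; a minimal sketch (classical Gram-Schmidt, with made-up input vectors):

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal set spanning the same space (classical G-S)."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in basis:
            w -= (w @ u) * u           # subtract projections on earlier u's
        norm = np.linalg.norm(w)
        if norm > 1e-12:               # skip (near-)dependent vectors
            basis.append(w / norm)
    return np.array(basis)

V = [np.array([3.0, 0.0, 4.0]),
     np.array([1.0, 1.0, 1.0]),
     np.array([0.0, 2.0, 1.0])]       # an arbitrary basis of R^3
U = gram_schmidt(V)
assert np.allclose(U @ U.T, np.eye(3))   # rows of U are orthonormal
```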
Cauchy-Schwarz Inequality
From the Gram-Schmidt orthogonalization process we can go over to the Cauchy-Schwarz inequality.
Consider two vectors u, v. From w = v − (⟨v, u⟩/⟨u, u⟩)u [from Gram-Schmidt],
we have ⟨w, w⟩ ≥ 0.
Then ⟨v, v⟩ − |⟨u, v⟩|²/⟨u, u⟩ ≥ 0, i.e., |⟨u, v⟩|² ≤ ⟨u, u⟩⟨v, v⟩.
This is the Cauchy-Schwarz inequality.
Inner Product in a Complex Vector Space
For u, v ∈ V, ⟨u, v⟩ is a complex number. So we can write ⟨u, v⟩ = Re⟨u, v⟩ + i·Im⟨u, v⟩.
Inner product space:
An inner product space is a real or complex vector space together with a specified inner product defined on that space.
A finite-dimensional real inner product space is called a Euclidean space.
A complex inner product space is called a unitary space.
[‖u ± v‖² = ‖u‖² ± 2Re⟨u, v⟩ + ‖v‖²]
Consider ‖u + v‖² − ‖u − v‖² = 4Re⟨u, v⟩ …(1)
Similarly, ‖u + iv‖² − ‖u − iv‖² = −4Im⟨u, v⟩ …(2)
(with the convention that the inner product is antilinear in its first argument).
Therefore, for a
Real space: ⟨u, v⟩ = ¼(‖u + v‖² − ‖u − v‖²) …(3)
Complex space: ⟨u, v⟩ = ¼(‖u + v‖² − ‖u − v‖²) − (i/4)(‖u + iv‖² − ‖u − iv‖²) …(4)
[H.W.: check the above two identities.]
(3) & (4) are called polarization identities.
Condition for Orthogonality:
Consider a 2×2 matrix whose entries are complex numbers in general. Set its two columns as two vectors and apply the Gram-Schmidt orthogonalization scheme: computing the inner product of the columns tells us whether they are orthogonal, and for the given matrix the inner product vanishes, so the columns are orthogonal.
On the contrary:
Take a matrix for which the inner product of the two column vectors does not vanish. Thus the two vectors are not orthogonal.
H.W.
Check the orthogonality of the columns of the given 3×3 rotation matrix.
More about Inner Product Spaces:
For a vector space of matrices, the inner product may be taken as ⟨A, B⟩ = Tr(A†B).
For the vector space of all continuous complex-valued functions on the unit interval [0, 1],
⟨f, g⟩ = ∫₀¹ f(t)* g(t) dt.
Lecture V
[Discussed: minimum and characteristic polynomials, triangular matrices etc.]
Suppose the matrix corresponding to an operator is of upper triangular form, with diagonal entries a₁₁, a₂₂, …, aₙₙ and zeros below the diagonal.
The characteristic polynomial can then be written as a product of factors:
Δ(λ) = (a₁₁ − λ)(a₂₂ − λ)⋯(aₙₙ − λ).
[Check this with a 3×3 matrix.]
Theorem:
For a linear operator T: V → V whose characteristic polynomial can be written as a product of linear factors, there exists a basis of V in which the operator can be represented as a triangular matrix.
NOTE: The eigenvalues in this case are the entries appearing on the diagonal.
Two similar matrices, A and B = P⁻¹AP, have the same characteristic polynomial.
Characteristic polynomial for the transposed matrix:
det(Aᵀ − λI) = det((A − λI)ᵀ) = det(A − λI),
since the determinant of a matrix and its transpose are the same.
BLOCK MATRIX:
Consider a block triangular matrix M = (A B; 0 C), where A, B, C are themselves matrices (A and C square).
We can show det(M) = det(A)·det(C).
In the same way, for a block triangular square matrix with diagonal blocks A₁, …, A_k, det(M) = det(A₁)⋯det(A_k).
For characteristic polynomials:
det(M − λI) = det(A − λI)·det(C − λI).
Here also, the above can be generalized in the same way as we have done for the determinant.
Minimum Polynomial:
A square matrix A satisfies its characteristic polynomial: Δ(A) = 0. [CH theorem]
Now, there can be some other polynomials f for which we may write f(A) = 0.
The minimum polynomial m(λ) of A is the nonzero polynomial of lowest degree with leading coefficient 1 (monic) for which m(A) = 0; it divides the characteristic polynomial Δ(λ).
NOTE:
The minimum and characteristic polynomials of a matrix have the same irreducible factors.
EXAMPLE:
Consider the given triangular matrix with repeated diagonal entries. The characteristic polynomial is a product of powers of the factors (λ − λᵢ). The candidate minimum polynomials are products of lower powers of the same factors; among the candidates which are satisfied by A (i.e., give the zero matrix), the one of lowest degree is the minimum polynomial.
How to actually find the minimum polynomial?
Example #1: [Example from Lipschutz]
Compute the characteristic polynomial of the given matrix by expanding det(A − λI); the 2nd term in the expansion vanishes, and collecting terms gives Δ(λ) as a product of powers of two irreducible factors, (λ − a) and (λ − b).
The minimum polynomial will have the same irreducible factors (λ − a) and (λ − b) and will divide Δ(λ). Thus we verify the candidate monic polynomials of increasing degree: for each candidate f, compute f(A) directly and check whether f(A) = 0. The first option gives f(A) ≠ 0; for the 2nd option it can be checked that f(A) = 0.
Therefore, the candidate of lowest degree with f(A) = 0 is the minimum polynomial of A.
NOTE:
Consider a triangular n×n matrix where all the diagonal elements are the same, say a.
The characteristic equation is (a − λ)ⁿ = 0, and the characteristic polynomial is Δ(λ) = (a − λ)ⁿ.
Invariant Subspace:
Formal definition:
For a linear operator T: V → V, a subspace W is said to be invariant if for each element w ∈ W we have T(w) ∈ W, i.e., T maps W into itself.
The subspace W is called T-invariant.
THEOREM:
If W is an invariant subspace of V under the mapping by a linear operator T, then T has a block matrix representation of the form (A B; 0 C).
Example:
Consider the following rotation matrix in the xy-plane around the z-axis:
R = (cos θ  −sin θ  0; sin θ  cos θ  0; 0  0  1).
The xy-plane is an invariant subspace of xyz-space under R.
Any vector in the xy-plane remains in the xy-plane under the operation of R:
R(x, y, 0)ᵀ = (x cos θ − y sin θ, x sin θ + y cos θ, 0)ᵀ.
If U and W are subspaces of V, then U + W = {u + w : u ∈ U, w ∈ W}.
For the direct sum, i.e., V = U ⊕ W, we also have U ∩ W = {0}.
When we have two subspaces for which V = U + W and U ∩ W = {0}, they are called complementary.
Let v = u + w; then define the projector P by Pv = u.
For v = u ∈ U we have Pu = u [since u = u + 0], so P²v = P(Pv) = Pu = u = Pv: P² = P.
Further, suppose some v satisfies Pv = v; then v ∈ U, and conversely. So U is exactly the set of vectors fixed by P, W is its kernel, and U ∩ W = {0}.
Thus U and W are complementary subspaces.
It follows that for any v, (I − P)v = w.
Also, note that if P is a projector, I − P is also a projector.
The projectors P and I − P are called complementary projectors.
Consider the following direct sum: V = W₁ ⊕ W₂ ⊕ ⋯ ⊕ W_k.
So any vector v in V can be written as v = w₁ + w₂ + ⋯ + w_k, where wᵢ ∈ Wᵢ for each i.
Now consider a linear operator T, and let V be the direct sum of invariant subspaces: T(Wᵢ) ⊆ Wᵢ.
Then the operator T can also be written as the direct sum of operators:
T = T₁ ⊕ T₂ ⊕ ⋯ ⊕ T_k, where Tᵢ = T|_{Wᵢ}, the restriction of T to Wᵢ.
Example:
Consider bases in the two subspaces, {e₁, …} and {f₁, …}. Expressing T in the combined basis, the matrix of T takes block-diagonal form, one block acting on each subspace.
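The projector identities above can be checked with the xy-plane/z-axis example from earlier in the notes:

```python
import numpy as np

# Projector onto the xy-plane inside R^3 = U (+) W, with W the z-axis.
P = np.diag([1.0, 1.0, 0.0])
Q = np.eye(3) - P                  # complementary projector onto W

assert np.allclose(P @ P, P)       # P^2 = P
assert np.allclose(Q @ Q, Q)       # (I - P)^2 = I - P
assert np.allclose(P @ Q, np.zeros((3, 3)))   # P Q = 0

v = np.array([2.0, -1.0, 5.0])
u, w = P @ v, Q @ v                # the unique decomposition v = u + w
assert np.allclose(u + w, v)
```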
Primary Decomposition:
Consider the linear operator T: V → V with minimum polynomial
m(λ) = f₁(λ)^{n₁} f₂(λ)^{n₂} ⋯ f_r(λ)^{n_r},
where the fᵢ(λ) are distinct monic irreducible polynomials. Then V is the direct sum of invariant subspaces Wᵢ = Ker fᵢ(T)^{nᵢ}, and the minimum polynomial of T restricted to Wᵢ is fᵢ(λ)^{nᵢ}.
NILPOTENT:
An operator N is nilpotent if Nᵏ = 0 for some k; the smallest such k is the index of nilpotency of N.
Example: N = (0 1; 0 0), N² = 0, so the index is 2.
Theorem:
If N is a nilpotent operator of some index k, then N has a block diagonal matrix representation where the diagonal blocks have 1's on the superdiagonal and 0's elsewhere.
There is at least one block of order k, and all other blocks are of orders ≤ k.
Jordan Canonical Form:
Let T be a linear operator whose characteristic and minimum polynomials factor into linear factors:
Δ(λ) = Π (λ − λᵢ)^{nᵢ}, m(λ) = Π (λ − λᵢ)^{mᵢ}.
Then T has a block diagonal representation J whose diagonal blocks Jᵢⱼ (Jordan blocks) have λᵢ on the diagonal and 1's on the superdiagonal, with the following properties:
1. There is at least one Jᵢⱼ of order mᵢ; all other blocks for λᵢ are of order ≤ mᵢ.
2. The sum of the orders of the Jᵢⱼ for λᵢ is nᵢ.
3. The number of Jᵢⱼ for λᵢ equals the number of independent eigenvectors belonging to λᵢ.
[Note: nᵢ is the algebraic multiplicity of λᵢ, and so on.]
Example:
Here, with Δ(λ) = (λ − 2)⁴(λ − 3)³ and m(λ) = (λ − 2)²(λ − 3)², one possible Jordan form is
diag[ (2 1; 0 2), (2 1; 0 2), (3 1; 0 3), (3) ]:
a 7×7 matrix, for two independent eigenvectors belonging to 2.
Another possibility is
diag[ (2 1; 0 2), (2), (2), (3 1; 0 3), (3) ]:
a 7×7 matrix, for three independent eigenvectors belonging to 2.
More on the Jordan Canonical Form:
An n×n matrix is diagonalizable if it has n linearly independent eigenvectors.
Consider the given 4×4 matrix. Investigation shows that we have only 3 linearly independent eigenvectors, but the matrix is 4×4. Therefore, the matrix is not diagonalizable!
But the matrix in this case can be made nearly diagonal: P⁻¹AP = J, the Jordan form.
Theorem:
For every matrix A there exists an invertible matrix P such that P⁻¹AP = J, where J is a Jordan canonical matrix.
The matrix P consists of columns of eigenvectors or generalized eigenvectors.
For an eigenvector, (A − λI)v = 0.
For a generalized eigenvector, (A − λI)v ≠ 0 but (A − λI)ᵏv = 0 for some k > 1.
In the above example, three columns of P are independent eigenvectors, and the fourth satisfies (A − λI)v ≠ 0 but (A − λI)²v = 0: a generalized eigenvector.
Decomposition of a Square Matrix: (LU Decomposition)
Consider A to be an n×n matrix.
A = LU, with L a lower triangular matrix and U an upper triangular matrix.
For example, consider a 3×3 matrix: writing A = LU entrywise gives 9 equations in 12 unknowns, so the factorization is not determined by a naive count and cannot be solved by the ordinary method (conventionally one fixes the diagonal of L to be all 1's).
Instead, the matrix equation Ax = b is solved by a trick!
First solve Ly = b [considering Ux = y].
This is done by forward substitution:
y₁ = b₁/l₁₁, yᵢ = (bᵢ − Σ_{j<i} lᵢⱼyⱼ)/lᵢᵢ for i = 2, …, n. …(1)
Next solve Ux = y for x.
This is called back substitution:
xₙ = yₙ/uₙₙ, xᵢ = (yᵢ − Σ_{j>i} uᵢⱼxⱼ)/uᵢᵢ for i = n−1, …, 1. …(2)
These algorithms (1) and (2) are implemented in computational methods.
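SciPy implements the factorization and both substitution steps; a sketch with a made-up system (SciPy's `lu` also applies a row permutation P for numerical stability, so it returns A = P L U):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])
b = np.array([5.0, -2.0, 9.0])

P, L, U = lu(A)                    # A = P @ L @ U
assert np.allclose(P @ L @ U, A)

# Solve A x = b: forward substitution, then back substitution.
y = solve_triangular(L, P.T @ b, lower=True)   # L y = P^T b
x = solve_triangular(U, y)                     # U x = y (upper by default)
assert np.allclose(A @ x, b)
```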
Matrix Diagonalization: Eigen Decomposition
Consider a decomposition of a square matrix A.
If A has nondegenerate eigenvalues λ₁, λ₂, …, λₙ with eigenvectors v₁, v₂, …, vₙ (Avᵢ = λᵢvᵢ), then we construct P = (v₁ v₂ … vₙ) and find
AP = PD, i.e., A = PDP⁻¹, where D = diag(λ₁, λ₂, …, λₙ).
A and D are similar.
This is called eigenvalue decomposition. [Nothing but a similarity transformation.]
We write A² = PDP⁻¹PDP⁻¹ = PD²P⁻¹.
Similarly, Aᵏ = PDᵏP⁻¹.
Also, A⁻¹ = PD⁻¹P⁻¹ [putting k = −1].
Exponential of a matrix:
e^A = Σₖ Aᵏ/k! = P e^D P⁻¹, with e^D = diag(e^{λ₁}, …, e^{λₙ}).
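The eigen-decomposition formula for e^A can be checked against a partial Taylor sum; a sketch with an assumed diagonalizable matrix:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])           # distinct eigenvalues 1 and 2
lam, P = np.linalg.eig(A)

# e^A = P e^D P^{-1}, with e^D diagonal in the eigenvalues
expA = P @ np.diag(np.exp(lam)) @ np.linalg.inv(P)

# Compare with the partial Taylor sum sum_k A^k / k!
T = np.zeros_like(A)
term = np.eye(2)                     # k = 0 term
for k in range(1, 30):
    T = T + term
    term = term @ A / k              # next term A^k / k!
assert np.allclose(expA, T)
```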
Lecture VII
Let V be a vector space over a field F. Consider a mapping φ: V → F.
This is a linear functional if for every u, v ∈ V and for every α, β ∈ F we have the following linearity relation:
φ(αu + βv) = αφ(u) + βφ(v).
A linear functional on V is a linear mapping from V to F.
Now we have to understand that the set of linear functionals on a vector space V over a field F is also a vector space over F, with addition and scalar multiplication defined by
(φ + σ)(v) = φ(v) + σ(v), (αφ)(v) = αφ(v).
This space is called the dual space V*, where each functional φ ∈ V* is regarded as a vector.
We write φ(v) ∈ F, a scalar.
Dual Basis:
Consider a vector space V of dimension n defined over a field F. The dimension of the dual space V* is also n. Each basis of V determines a basis of V*.
Now think of {e₁, e₂, …, eₙ} to be a basis of V over F.
Let us also consider a set of functionals {φ₁, φ₂, …, φₙ} so that φᵢ(eⱼ) = δᵢⱼ.
We thus can say {φ₁, …, φₙ} is a basis of V*: the dual basis.
Explicitly, φ₁(e₁) = 1, φ₁(e₂) = 0, and so on.
Example:
Given the basis {v₁, v₂} in ℝ², find the dual basis {φ₁, φ₂}.
Ans. Write each φᵢ as a general linear form and impose φᵢ(vⱼ) = δᵢⱼ; solving the resulting linear equations gives the dual basis.
To work out:
#1. Find the dual basis for the standard basis of ℝ³.
#2. Consider the given basis of ℝ³. Find the dual basis {φ₁, φ₂, φ₃}.
Let {e₁, …, eₙ} be a basis of V with dual basis {φ₁, …, φₙ}.
Then for any v ∈ V,
v = φ₁(v)e₁ + φ₂(v)e₂ + ⋯ + φₙ(v)eₙ,
and for any functional σ ∈ V*,
σ = σ(e₁)φ₁ + σ(e₂)φ₂ + ⋯ + σ(eₙ)φₙ.
Proof: Suppose v = a₁e₁ + ⋯ + aₙeₙ. Applying φ₁, we get φ₁(v) = a₁.
Similarly, we will have φ₂(v) = a₂, and so on.
Thus v = φ₁(v)e₁ + ⋯ + φₙ(v)eₙ. …(1)
Now applying σ over (1), σ(v) = σ(e₁)φ₁(v) + ⋯ + σ(eₙ)φₙ(v).
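In coordinates, if the basis vectors are the columns of a matrix B, the dual-basis functionals are the rows of B⁻¹, since (row i of B⁻¹)·(column j of B) = δᵢⱼ. A sketch with an assumed basis of ℝ³:

```python
import numpy as np

# Assumed basis of R^3: the columns of B.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

Phi = np.linalg.inv(B)                   # row i of Phi is the functional phi_i
assert np.allclose(Phi @ B, np.eye(3))   # phi_i(e_j) = delta_ij

# The coordinates of any v are phi_i(v):
v = np.array([2.0, 3.0, 4.0])
coords = Phi @ v
assert np.allclose(B @ coords, v)        # v = sum_i phi_i(v) e_i
```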
One theorem:
Let {e₁, …, eₙ} and {f₁, …, fₙ} be bases of V, and let {φ₁, …, φₙ} and {σ₁, …, σₙ} be the corresponding dual bases of V*. If P is the transition matrix from {eᵢ} to {fᵢ}, then (P⁻¹)ᵀ is the transition matrix from {φᵢ} to {σᵢ}.
[We skip the proof here for this set of elementary lecture notes.]
In Quantum Mechanics:
Each quantum state of a particle is represented by a state vector belonging to some state space ε, which is a subspace of the Hilbert space.
We call the elements of ε ket vectors, written |ψ⟩ according to Dirac notation.
In function space, a wave function ψ(x) corresponds to a ket vector |ψ⟩.
[The advantage of Dirac notation is that it does not have to refer to a coordinate system.]
For each pair of kets |φ⟩ and |ψ⟩, we associate a complex number:
⟨φ|ψ⟩, the inner product,
which satisfies the following:
(i) ⟨φ|ψ⟩ = ⟨ψ|φ⟩*
(ii) ⟨φ|αψ₁ + βψ₂⟩ = α⟨φ|ψ₁⟩ + β⟨φ|ψ₂⟩
(iii) ⟨ψ|ψ⟩ ≥ 0, with equality only for the zero ket.
Dual Space:
Consider a linear functional χ: ε → ℂ (the complex field).
So χ(|ψ⟩) is a complex number.
The linear functional is defined on a ket vector and turns it into a complex number in ℂ.
The linear functionals defined on the kets constitute a vector space, which is the dual space of ε; we will call it ε*.
Now according to Dirac, any element of the dual space ε* is a bra vector, ⟨φ|.
For example, any bra ⟨φ| designates a linear functional which, when applied on a ket, gives us a number which we could write as χ_φ(|ψ⟩), but in Dirac notation we write ⟨φ|ψ⟩.
So the inner product of the earlier notation, (φ, ψ), is now ⟨φ|ψ⟩.
Because of the complex inner product, there is the antilinear relation:
α|ψ⟩ + β|φ⟩ ↔ α*⟨ψ| + β*⟨φ|.
Also, if |ψ⟩ is a ket, α|ψ⟩ is also a ket, and its bra is α*⟨ψ|.
To every ket there is a bra.
Then we can define an operator |ψ⟩⟨φ|.
Apply the above on a ket |χ⟩ and we get |ψ⟩⟨φ|χ⟩ = c|ψ⟩, where c = ⟨φ|χ⟩ is a complex number.
If we have a normalized |ψ⟩, we write ⟨ψ|ψ⟩ = 1.
So we can define a projection operator by P = |ψ⟩⟨ψ|, because P² = |ψ⟩⟨ψ|ψ⟩⟨ψ| = |ψ⟩⟨ψ| = P.
For more details of the operator algebra and quantum mechanical techniques, follow a standard quantum mechanics book that uses Dirac notation.
[Ref: Quantum Mechanics: Concepts and Applications by N. Zettili (Wiley)]
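In a finite-dimensional space, kets are column vectors and bras are their conjugate transposes, so |ψ⟩⟨ψ| is an outer product. A sketch with a made-up normalized state in ℂ²:

```python
import numpy as np

# An assumed normalized ket |psi> in C^2.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
assert np.isclose(np.vdot(psi, psi).real, 1.0)   # <psi|psi> = 1

P = np.outer(psi, psi.conj())      # P = |psi><psi|
assert np.allclose(P @ P, P)       # P^2 = P: a projection operator

chi = np.array([0.3, 0.8 - 0.1j])  # another arbitrary ket |chi>
c = np.vdot(psi, chi)              # <psi|chi>, a complex number
assert np.allclose(P @ chi, c * psi)   # (|psi><psi|)|chi> = <psi|chi> |psi>
```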
APPENDIX
Matrix representation of an operator T in a basis {e₁, …, eₙ}:
T(eⱼ) = a₁ⱼe₁ + a₂ⱼe₂ + ⋯ + aₙⱼeₙ.
The matrix representing T collects these coefficients: the coordinates of T(eⱼ) form the j-th column, i.e., the matrix of T is the transpose of the coefficient array written row by row.
Example:
For the given operator and basis, compute T(e₁), T(e₂), …, express each in the basis, and collect the coefficients column by column.
For W an invariant subspace of T: choose a basis {e₁, …, e_r} of W and extend it to a basis of V. Then T(e₁), …, T(e_r) involve only e₁, …, e_r, and the matrix of T takes the block form with a zero lower-left block.
References:
1) Linear Algebra, Seymour Lipschutz (Schaum's Outline Series)
2) Linear Algebra, K. Hoffman and R. Kunze (Prentice Hall)
3) Linear Algebra, Francis J. Wright (Lecture Notes) [centaur.maths.qmw.ac.uk]
4) Linear Algebra and Matrices, Martin Fluch (Dept. of Mathematics, University of Helsinki webpage)
5) Linear Algebra, Paul Dawkins (Lecture Notes, www.scribd.com)
6) Berkeley Lecture Notes, Edward Carter
7) Linear Algebra, V. V. Voyevodin (Mir Publishers, Moscow)