Linear Algebra

This document discusses key concepts in linear algebra, including characteristic values (eigenvalues), characteristic polynomials, diagonalizable transformations, invariant subspaces, triangular forms, simultaneous triangularization, direct sum decompositions, and invariant direct sums. It provides definitions, criteria, and examples for each concept, emphasizing their importance in understanding linear transformations and matrix structures. The document also highlights the relationships between these concepts and their applications in simplifying problems in linear algebra.


UNIT 2:

Q. Characteristic values and characteristic polynomials:

In linear algebra, characteristic values (often called eigenvalues) and characteristic polynomials are key concepts related to matrices.

Characteristic Polynomial

Given a square matrix \( A \) of size \( n \times n \), the characteristic polynomial \( p_A(\lambda) \) is defined as:


\[ p_A(\lambda) = \det(A - \lambda I) \]

where \( \lambda \) is a scalar, \( I \) is the identity matrix of the same size as \( A \), and \( \det \) denotes the determinant.

Eigenvalues (Characteristic Values)

The eigenvalues (or characteristic values) of the matrix \( A \) are the values of \( \lambda \) for which the characteristic polynomial \( p_A(\lambda) \) equals zero:

\[ p_A(\lambda) = 0 \]

In other words, the eigenvalues are the roots of the characteristic polynomial.
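As a quick numerical cross-check of these definitions, here is a minimal NumPy sketch (the matrix is just an illustrative example; `np.poly` returns the characteristic-polynomial coefficients of a square matrix):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # example matrix

coeffs = np.poly(A)                  # det(A - lambda I) = lambda^2 - 7 lambda + 10
print(coeffs)                        # [ 1. -7. 10.]

print(np.roots(coeffs))              # roots of p_A: [5. 2.]
print(np.linalg.eigvals(A))          # eigenvalues computed directly: [5. 2.]
```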

2. diagonalizable transformations, annihilating polynomials


Diagonalizable Transformations

A linear transformation \( T \) on a vector space \( V \) is said to be diagonalizable if there exists a basis for \( V \) consisting of eigenvectors of \( T \). In other words, \( T \) can be represented by a diagonal matrix in some basis.

Key Points:

Diagonalizable Matrix: A matrix \( A \) is diagonalizable if there exists an invertible matrix \( P \) and a diagonal matrix \( D \) such that \( A = PDP^{-1} \).

Eigenvectors and Eigenvalues: The columns of \( P \) are the eigenvectors of \( A \), and the entries of \( D \) are the corresponding eigenvalues.

Diagonalization Criterion: A matrix \( A \) is diagonalizable if and only if it has enough linearly independent eigenvectors to form a basis for the vector space. Equivalently, \( A \) is diagonalizable if and only if the algebraic multiplicity and the geometric multiplicity of each eigenvalue are equal.
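The criterion is easy to test computationally; below is a minimal sketch using SymPy (assumed available), comparing a diagonalizable matrix with one whose eigenspace is deficient:

```python
from sympy import Matrix

A = Matrix([[4, 1], [2, 3]])   # distinct eigenvalues 5 and 2
B = Matrix([[5, 1], [0, 5]])   # repeated eigenvalue 5 with a one-dimensional eigenspace

for M in (A, B):
    for eigval, alg_mult, basis in M.eigenvects():
        geo_mult = len(basis)                 # dimension of the eigenspace
        print(eigval, alg_mult, geo_mult)
    print(M.is_diagonalizable())              # True for A, False for B
```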

3. Cayley-Hamilton theorem
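The Cayley-Hamilton theorem states that every square matrix satisfies its own characteristic polynomial: if \( p_A(\lambda) = \det(A - \lambda I) \), then \( p_A(A) = 0 \) (the zero matrix). A minimal numerical check with NumPy (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

c = np.poly(A)                                  # [1, -7, 10]: lambda^2 - 7 lambda + 10

# Evaluate p_A(A) = A^2 - 7A + 10I; Cayley-Hamilton says this is the zero matrix
p_of_A = c[0] * (A @ A) + c[1] * A + c[2] * np.eye(2)
print(np.allclose(p_of_A, 0))                   # True
```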

4. invariant subspaces and triangular form


In linear algebra, the concepts of invariant subspaces and triangular form are
fundamental when analyzing the structure of linear transformations and
matrices. Here’s a detailed explanation of each:

### Invariant Subspaces

An **invariant subspace** of a vector space \( V \) with respect to a linear


transformation \( T \) is a subspace \( W \) of \( V \) such that:
\[ T(W) \subseteq W \]

In other words, if \( W \) is an invariant subspace, then applying the linear


transformation \( T \) to any vector in \( W \) results in another vector that
is still in \( W \).

**Key Points:**

1. **Definition:** A subspace \( W \) of \( V \) is invariant under \( T \) if


for every vector \( w \in W \), \( T(w) \) is also in \( W \).

2. **Importance:** Invariant subspaces are useful in simplifying problems


related to linear transformations and matrices. They are often used in the
process of diagonalization and finding the Jordan canonical form.

3. **Examples:**
- If \( T \) is a linear transformation on \( \mathbb{R}^3 \) and \( W \) is the subspace spanned by the vector \( \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \), then \( W \) is invariant under the transformation \( T \) if \( T \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \) is also in \( W \).

### Triangular Form

The **triangular form** of a matrix is a simplified form of the matrix


obtained through row operations. There are two main types of triangular
forms:
1. **Upper Triangular Form:**

A matrix \( A \) is in upper triangular form if all entries below the main


diagonal are zero. That is:

\[
A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
0 & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & a_{nn}
\end{pmatrix}
\]

2. **Lower Triangular Form:**

A matrix \( A \) is in lower triangular form if all entries above the main


diagonal are zero. That is:

\[
A = \begin{pmatrix}
a_{11} & 0 & \cdots & 0 \\
a_{21} & a_{22} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
\]

**Triangular Form and Invariant Subspaces:**

1. **Triangularization Theorem:**
Any square matrix \( A \) can be transformed into an upper triangular
form via similarity transformations. That is, there exists an invertible matrix \
( P \) such that:

\[
P^{-1} A P = U
\]

where \( U \) is an upper triangular matrix.

2. **Invariant Subspaces and Triangular Form:**


The invariant subspaces of a matrix \( A \) can be analyzed using its
triangular form. For instance:

- **Eigenvalues:** The eigenvalues of a matrix are the diagonal entries of


its triangular form.
- **Jordan Form:** The Jordan canonical form of a matrix, which
generalizes the concept of triangular form, reveals the structure of invariant
subspaces associated with each eigenvalue.
### Example of Triangular Form

Consider the matrix:

\[ A = \begin{pmatrix}
1 & 2 & 3 \\
4 & 5 & 6 \\
7&8&9
\end{pmatrix} \]

We can perform row operations to convert \( A \) into an upper triangular


form. For instance:

1. **Subtract 4 times Row 1 from Row 2:**

\[
\begin{pmatrix}
1 & 2 & 3 \\
0 & -3 & -6 \\
7&8&9
\end{pmatrix}
\]
2. **Subtract 7 times Row 1 from Row 3:**

\[
\begin{pmatrix}
1 & 2 & 3 \\
0 & -3 & -6 \\
0 & -6 & -12
\end{pmatrix}
\]

3. **Add 2 times Row 2 to Row 3:**

\[
\begin{pmatrix}
1 & 2 & 3 \\
0 & -3 & -6 \\
0&0&0
\end{pmatrix}
\]

The matrix is now in upper triangular form.
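The row reduction above produces an echelon form, whereas the Triangularization Theorem stated earlier concerns a similarity transformation \( P^{-1}AP = U \). One standard way to compute such a triangular form numerically is the Schur decomposition; a minimal sketch with SciPy (assumed available) on the same example matrix:

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Schur decomposition: A = Q T Q^T with Q orthogonal and T upper triangular
# (for real eigenvalues; complex pairs give 2x2 blocks in the real Schur form)
T, Q = schur(A)

print(np.allclose(Q @ T @ Q.T, A))       # True: the similarity transformation recovers A
print(np.allclose(np.tril(T, -1), 0))    # True: T is upper triangular
print(np.diag(T))                        # eigenvalues of A appear on the diagonal
```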

### Summary
- **Invariant Subspaces:** Subspaces that remain unchanged under a linear
transformation. They play a crucial role in understanding the structure of
linear transformations.
- **Triangular Form:** A simplified matrix form where all entries below (or
above) the main diagonal are zero. Triangular forms are useful for solving
systems of equations, finding eigenvalues, and understanding matrix
properties.

5. simultaneous triangularization and diagonalization

Simultaneous triangularization and diagonalization are advanced concepts in linear algebra related to the simplification of multiple matrices. Here's a detailed look at each:

Simultaneous Triangularization

Simultaneous triangularization involves finding a common triangular form for two or more matrices. Specifically, if \( A \) and \( B \) are two matrices, simultaneous triangularization seeks an invertible matrix \( P \) such that both \( A \) and \( B \) are triangular in the same basis.

Conditions:

Commutativity: A standard sufficient condition for matrices \( A \) and \( B \) to be simultaneously triangularizable (over an algebraically closed field) is that they commute, that is, \( AB = BA \).

Triangularization: If \( A \) and \( B \) commute, there exists an invertible matrix \( P \) such that both \( P^{-1}AP \) and \( P^{-1}BP \) are upper triangular.
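A minimal numerical sketch of the commuting case with NumPy (the matrices are illustrative; \( B \) is chosen as a polynomial in \( A \), which guarantees \( AB = BA \), and \( A \) has distinct eigenvalues so its eigenvector matrix serves as a common \( P \)):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = A @ A + np.eye(2)                      # a polynomial in A, hence AB = BA
print(np.allclose(A @ B, B @ A))           # True

eigvals, P = np.linalg.eig(A)              # A has distinct eigenvalues 2 and 3
P_inv = np.linalg.inv(P)

TA = P_inv @ A @ P
TB = P_inv @ B @ P
print(np.allclose(np.tril(TA, -1), 0))     # True: P^{-1} A P is upper triangular
print(np.allclose(np.tril(TB, -1), 0))     # True: P^{-1} B P is upper triangular
```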

6. direct sum decompositions


**Direct sum decompositions** in linear algebra involve
expressing a vector space as the direct sum of two or more
subspaces. This concept is fundamental in understanding the
structure of vector spaces and linear transformations. Here’s a
detailed look at direct sum decompositions:

### Definition of Direct Sum

A vector space \( V \) is said to be the **direct sum** of two


subspaces \( U \) and \( W \), denoted \( V = U \oplus W \), if
every vector \( v \in V \) can be uniquely written as:

\[ v = u + w \]

where \( u \in U \) and \( w \in W \).

### Key Properties

1. **Uniqueness of Representation:**
- Each vector \( v \in V \) has a unique representation as \( u +
w \) where \( u \in U \) and \( w \in W \).

2. **Subspace Conditions:**
- The subspaces \( U \) and \( W \) must intersect trivially,
i.e., \( U \cap W = \{0\} \). This ensures that the representation
is unique.

### Examples

1. **Example in \( \mathbb{R}^3 \):**

Consider the vector space \( \mathbb{R}^3 \) and the


subspaces \( U \) and \( W \) defined as:

\[
U = \text{span}\left\{\begin{pmatrix}1 \\ 0 \\ 0\end{pmatrix}, \begin{pmatrix}0 \\ 1 \\ 0\end{pmatrix}\right\}
\]
\[
W = \text{span}\left\{\begin{pmatrix}0 \\ 0 \\ 1\end{pmatrix}\right\}
\]

The vector space \( \mathbb{R}^3 \) can be decomposed as the


direct sum of \( U \) and \( W \):

\[
\mathbb{R}^3 = U \oplus W
\]

This means every vector in \( \mathbb{R}^3 \) can be uniquely


written as a sum of a vector in \( U \) and a vector in \( W \).

2. **Example with Polynomial Spaces:**

Consider the vector space of polynomials \( P_2 \) with degree


at most 2:

\[
P_2 = \text{span}\{1, x, x^2\}
\]
Decompose \( P_2 \) into the direct sum of subspaces \( U \)
and \( W \) where:

\[
U = \text{span}\{1\}
\]
\[
W = \text{span}\{x, x^2\}
\]

Any polynomial \( p(x) = a + bx + cx^2 \) in \( P_2 \) can be


uniquely written as:

\[
p(x) = (a) + (bx + cx^2)
\]

Here, \( a \in U \) and \( bx + cx^2 \in W \).

### Direct Sum of More Than Two Subspaces

A vector space \( V \) can be decomposed into the direct sum of


more than two subspaces. For example:

\[ V = U_1 \oplus U_2 \oplus \cdots \oplus U_k \]


where each \( U_i \) is a subspace of \( V \), every vector in \( V \) has a unique expression as a sum of vectors from the \( U_i \), and each subspace meets the sum of the others only in the zero vector. That is:

\[ U_i \cap \Big( \sum_{j \neq i} U_j \Big) = \{0\} \quad \text{for each } i \]

### Finding Direct Sum Decompositions

1. **Identify Subspaces:**
- Determine appropriate subspaces \( U \) and \( W \) that sum
to the vector space \( V \) and ensure \( U \cap W = \{0\} \).

2. **Verify Decomposition:**
- Check that every vector in \( V \) can be uniquely expressed
as the sum of a vector in \( U \) and a vector in \( W \).

3. **Generalize:**
- For decompositions involving more than two subspaces,
ensure that every vector in \( V \) can be uniquely written as a
sum of vectors from each subspace and that the subspaces
intersect trivially.
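Returning to the \( \mathbb{R}^3 \) example above, the decomposition and the uniqueness of the representation can be verified numerically; a minimal NumPy sketch:

```python
import numpy as np

U_basis = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [0.0, 0.0]])           # columns span U
W_basis = np.array([[0.0],
                    [0.0],
                    [1.0]])                # column spans W
M = np.hstack([U_basis, W_basis])

# Rank 3 means U + W = R^3 and U ∩ W = {0}, so v = u + w is unique
print(np.linalg.matrix_rank(M))            # 3

v = np.array([2.0, -1.0, 5.0])
coeffs = np.linalg.solve(M, v)             # unique coordinates in the combined basis
u = U_basis @ coeffs[:2]
w = W_basis @ coeffs[2:]
print(u, w)                                # [ 2. -1.  0.] [0. 0. 5.]
```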

7. invariant direct sums

**Invariant direct sums** refer to a specific decomposition of a


vector space \( V \) or a module into subspaces that are invariant
under a given linear transformation or endomorphism. This
concept is crucial in understanding the structure of linear
operators and their effects on vector spaces.

### Definition

Given a vector space \( V \) and a linear operator \( T \) on \


( V \), a direct sum decomposition \( V = V_1 \oplus V_2 \oplus \
cdots \oplus V_k \) is said to be an **invariant direct sum** if
each subspace \( V_i \) is invariant under \( T \). This means:

\[ T(V_i) \subseteq V_i \]

for each \( i = 1, 2, \ldots, k \).

### Properties

1. **Invariant Subspaces:** Each subspace \( V_i \) in the


decomposition is invariant under the linear operator \( T \). This
ensures that the action of \( T \) on \( V_i \) remains within \( V_i
\).

2. **Direct Sum Decomposition:** The vector space \( V \) can be


uniquely expressed as the direct sum of the invariant subspaces.
This implies that each vector \( v \in V \) can be written uniquely
as:
\[ v = v_1 + v_2 + \cdots + v_k \]

where \( v_i \in V_i \) for each \( i \).

3. **Block Diagonal Form:** If \( V = V_1 \oplus V_2 \oplus \


cdots \oplus V_k \) is an invariant direct sum, then \( T \) can be
represented in a block diagonal form with respect to this
decomposition. This means that the matrix representation of \
( T \) with respect to a basis that aligns with the invariant
subspaces will be block diagonal, where each block corresponds
to the action of \( T \) restricted to an invariant subspace.

### Example

Consider the vector space \( \mathbb{R}^4 \) with the linear


operator \( T \) defined by:

\[ T = \begin{pmatrix}

1 & 0 & 0 & 0 \\

0 & 2 & 0 & 0 \\

0 & 0 & 1 & 0 \\

0&0&0&2

\end{pmatrix} \]
Let's find an invariant direct sum decomposition for \( \mathbb{R}^4 \).

1. **Identify Invariant Subspaces:**

Notice that the matrix \( T \) is already in block diagonal form


with respect to the standard basis:

\[

T = \begin{pmatrix}

1 & 0 \\

0&2

\end{pmatrix} \oplus \begin{pmatrix}

1 & 0 \\

0&2

\end{pmatrix}

\]

The subspaces corresponding to the first \( 2 \times 2 \) block


and the second \( 2 \times 2 \) block are invariant under \( T \).
Specifically:
- \( V_1 = \text{span}\left\{\begin{pmatrix}1 \\ 0 \\ 0 \\ 0\end{pmatrix}, \begin{pmatrix}0 \\ 1 \\ 0 \\ 0\end{pmatrix}\right\} \)

- \( V_2 = \text{span}\left\{\begin{pmatrix}0 \\ 0 \\ 1 \\ 0\end{pmatrix}, \begin{pmatrix}0 \\ 0 \\ 0 \\ 1\end{pmatrix}\right\} \)

These subspaces are invariant under \( T \) because applying \(


T \) to any vector in \( V_1 \) or \( V_2 \) results in another
vector in the same subspace.

2. **Verify Direct Sum Decomposition:**

Check that:

\[

\mathbb{R}^4 = V_1 \oplus V_2

\]

Any vector in \( \mathbb{R}^4 \) can be uniquely written as


the sum of a vector in \( V_1 \) and a vector in \( V_2 \). For
instance, the vector \( \begin{pmatrix}a \\ b \\ c \\ d\end{pmatrix} \) can be written as:

\[
\begin{pmatrix}a \\ b \\ c \\ d\end{pmatrix} = \begin{pmatrix}a \\ b \\ 0 \\ 0\end{pmatrix} + \begin{pmatrix}0 \\ 0 \\ c \\ d\end{pmatrix}
\]

where \( \begin{pmatrix}a \\ b \\ 0 \\ 0\end{pmatrix} \in V_1 \)


and \( \begin{pmatrix}0 \\ 0 \\ c \\ d\end{pmatrix} \in V_2 \).
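A quick numerical confirmation of the invariance with NumPy:

```python
import numpy as np

T = np.diag([1.0, 2.0, 1.0, 2.0])       # the block-diagonal operator from the example

v1 = np.array([3.0, -2.0, 0.0, 0.0])    # an arbitrary vector in V1 = span{e1, e2}
v2 = np.array([0.0, 0.0, 4.0, 7.0])     # an arbitrary vector in V2 = span{e3, e4}

print(T @ v1)   # [ 3. -4.  0.  0.]  -> still in V1 (last two entries stay 0)
print(T @ v2)   # [ 0.  0.  4. 14.]  -> still in V2 (first two entries stay 0)
```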

8. primary decomposition theorem
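In brief, the primary decomposition theorem states: if \( T \) is a linear operator on a finite-dimensional vector space \( V \) whose minimal polynomial factors as \( m(x) = p_1(x)^{r_1} \cdots p_k(x)^{r_k} \) with the \( p_i \) distinct irreducible polynomials, then \( V \) is the direct sum of the \( T \)-invariant primary components:

\[
V = \ker\big(p_1(T)^{r_1}\big) \oplus \cdots \oplus \ker\big(p_k(T)^{r_k}\big), \qquad T\big(\ker p_i(T)^{r_i}\big) \subseteq \ker p_i(T)^{r_i}.
\]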

UNIT 3 .

1. Adjoint of a linear transformation

The **adjoint** (or **adjoint operator**) of a linear


transformation is a concept that arises in the context of inner
product spaces. The adjoint of a linear transformation is defined
in a way that generalizes the notion of transpose for matrices in
the context of inner products. Here's a detailed explanation:

### Adjoint of a Linear Transformation

#### Definition

Let \( V \) and \( W \) be inner product spaces, and let \( T: V \to


W \) be a linear transformation. The adjoint \( T^*: W \to V \)
of \( T \) is a linear transformation that satisfies:
\[ \langle T(v), w \rangle_W = \langle v, T^*(w) \rangle_V \]

for all \( v \in V \) and \( w \in W \), where \( \langle \cdot, \


cdot \rangle_V \) and \( \langle \cdot, \cdot \rangle_W \) denote
the inner products on \( V \) and \( W \), respectively.

In other words, the adjoint \( T^* \) is the unique linear transformation \( W \to V \) satisfying this identity for all \( v \in V \) and \( w \in W \). Under the natural identifications of \( V \) and \( W \) with their dual spaces \( V^* \) and \( W^* \) given by the inner products, \( T^* \) corresponds to the transpose (dual) map \( W^* \to V^* \).

#### Adjoint in Matrix Representation


In the case where \( V \) and \( W \) are finite-dimensional inner
product spaces and \( T \) is represented by a matrix \( A \), the
adjoint \( T^* \) is represented by the conjugate transpose (or
Hermitian transpose) of \( A \), often denoted \( A^* \) or \( A^H
\).

For example, if \( A \) is an \( m \times n \) matrix with complex


entries, the adjoint matrix \( A^* \) is defined as:

\[

A^* = \overline{A}^T

\]

where \( \overline{A} \) is the matrix of complex conjugates of \( A \), and \( \overline{A}^T \) is the transpose of \( \overline{A} \).

#### Properties

1. **Self-Adjoint (or Hermitian) Operators:**

A linear operator \( T \) is self-adjoint if \( T = T^* \). In matrix


terms, this means the matrix \( A \) is Hermitian, i.e., \( A =
A^* \). Self-adjoint operators have real eigenvalues and
orthogonal eigenvectors.
2. **Adjoint of Adjoint:**

For the adjoint operator, the adjoint of \( T^* \) is \( T \), i.e., \


( (T^*)^* = T \).

3. **Linear Properties:**

The adjoint operator has the following properties:

- \( (T + S)^* = T^* + S^* \)

- \( (\alpha T)^* = \overline{\alpha} T^* \) for a scalar \( \alpha \)

- \( (TS)^* = S^* T^* \) for linear transformations \( T \) and \( S \)

### Example

Consider the linear transformation \( T \) on \( \mathbb{R}^2 \)


defined by the matrix:

\[

A = \begin{pmatrix}

1 & 2 \\

3&4

\end{pmatrix}

\]
To find the adjoint of \( T \), compute the conjugate transpose
(Hermitian transpose) of \( A \). Since \( A \) has real entries,
this is simply the transpose:

\[

A^* = A^T = \begin{pmatrix}

1 & 3 \\

2&4

\end{pmatrix}

\]

Here, \( A^* \) is the matrix representation of the adjoint


operator \( T^* \).

### Summary

The **adjoint** of a linear transformation \( T \) in the context of


inner product spaces is a linear transformation \( T^* \) such
that:

\[ \langle T(v), w \rangle_W = \langle v, T^*(w) \rangle_V \]

In finite-dimensional spaces, if \( T \) is represented by a matrix \


( A \), then \( T^* \) is represented by the conjugate transpose \(
A^* \) of \( A \). The adjoint is a key concept in functional
analysis and has applications in various areas such as quantum
mechanics and signal processing.
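A minimal numerical check of the defining identity \( \langle T(v), w \rangle = \langle v, T^*(w) \rangle \) with NumPy, using the standard complex inner product \( \langle x, y \rangle = \sum_i x_i \overline{y_i} \) (random vectors and an arbitrary complex matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A_star = A.conj().T                        # adjoint = conjugate transpose

v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

inner = lambda x, y: np.vdot(y, x)         # <x, y> = sum_i x_i * conj(y_i)

print(np.isclose(inner(A @ v, w), inner(v, A_star @ w)))   # True
```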

2 . Inner product spaces

**Inner product spaces** are a fundamental concept in linear


algebra and functional analysis. They provide a way to define
geometric notions such as angles, lengths, and orthogonality in a
vector space. Here’s a comprehensive overview:

### Definition

An **inner product space** is a vector space \( V \) equipped


with an inner product, which is a function that assigns a real or
complex number to each pair of vectors in \( V \) and satisfies
specific properties.

### Inner Product

The inner product \( \langle \cdot, \cdot \rangle \) on a vector


space \( V \) is a function:

\[ \langle \cdot, \cdot \rangle: V \times V \to \mathbb{F} \]


where \( \mathbb{F} \) is the field of scalars (either \( \
mathbb{R} \) for real numbers or \( \mathbb{C} \) for complex
numbers), that satisfies the following properties:

1. **Linearity in the First Argument:**

For all \( u, v, w \in V \) and scalar \( \alpha \in \mathbb{F} \):

\[

\langle \alpha u + v, w \rangle = \alpha \langle u, w \rangle


+ \langle v, w \rangle

\]

2. **Conjugate Symmetry:**

For all \( u, v \in V \):

\[

\langle u, v \rangle = \overline{\langle v, u \rangle}

\]

where \( \overline{\langle v, u \rangle} \) denotes the complex


conjugate of \( \langle v, u \rangle \). For real inner product
spaces, this reduces to symmetry:

\[

\langle u, v \rangle = \langle v, u \rangle

\]

3. **Positive-Definiteness:**

For all \( v \in V \):

\[

\langle v, v \rangle \geq 0

\]

with equality if and only if \( v = 0 \).

### Examples of Inner Product Spaces

1. **Euclidean Space \( \mathbb{R}^n \):**

In \( \mathbb{R}^n \), the inner product is usually defined as:


\[

\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{i=1}^n x_i y_i

\]

where \( \mathbf{x} = (x_1, x_2, \ldots, x_n) \) and \( \


mathbf{y} = (y_1, y_2, \ldots, y_n) \) are vectors in \( \
mathbb{R}^n \).

2. **Complex Space \( \mathbb{C}^n \):**

In \( \mathbb{C}^n \), the inner product is defined as:

\[

\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{i=1}^n x_i \


overline{y_i}

\]

where \( \overline{y_i} \) denotes the complex conjugate of \


( y_i \).

3. **Function Spaces:**
For spaces of functions, such as \( L^2 \) spaces, the inner
product is defined as:

\[

\langle f, g \rangle = \int_a^b f(x) \overline{g(x)} \, dx

\]

where \( f \) and \( g \) are functions defined on an interval \


([a, b]\).

### Norm and Orthogonality

1. **Norm:**

The norm induced by the inner product is defined as:

\[

\| v \| = \sqrt{\langle v, v \rangle}

\]

It measures the "length" or "magnitude" of a vector \( v \).

2. **Orthogonality:**
Two vectors \( u \) and \( v \) are orthogonal if:

\[

\langle u, v \rangle = 0

\]

Orthogonality generalizes the concept of perpendicular vectors


to more abstract vector spaces.

### Properties and Theorems

1. **Cauchy-Schwarz Inequality:**

For all \( u, v \in V \):

\[

|\langle u, v \rangle| \leq \| u \| \| v \|

\]

2. **Triangle Inequality:**
For all \( u, v \in V \):

\[

\| u + v \| \leq \| u \| + \| v \|

\]

3. **Parallelogram Law:**

For all \( u, v \in V \):

\[

\| u + v \|^2 + \| u - v \|^2 = 2 (\| u \|^2 + \| v \|^2)

\]
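These inequalities are easy to illustrate numerically; a minimal NumPy sketch with random vectors and the standard dot product:

```python
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.standard_normal(5), rng.standard_normal(5)

norm = lambda x: np.sqrt(np.dot(x, x))

print(abs(np.dot(u, v)) <= norm(u) * norm(v))      # Cauchy-Schwarz
print(norm(u + v) <= norm(u) + norm(v))            # triangle inequality

lhs = norm(u + v)**2 + norm(u - v)**2
rhs = 2 * (norm(u)**2 + norm(v)**2)
print(np.isclose(lhs, rhs))                        # parallelogram law
```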

3. Eigenvalues and eigenvectors of a linear transformation

**Eigenvalues and eigenvectors** are fundamental concepts in


linear algebra that provide insight into the structure of linear
transformations and matrices. They are particularly useful in
understanding the behavior of linear systems, differential
equations, and many other applications.

### Definitions
#### Eigenvalues and Eigenvectors

For a linear transformation \( T: V \to V \) on a vector space \


( V \), an **eigenvector** is a non-zero vector \( \mathbf{v} \in
V \) such that when \( T \) is applied to \( \mathbf{v} \), the
result is a scalar multiple of \( \mathbf{v} \). The corresponding
**eigenvalue** is the scalar by which the eigenvector is scaled.

Formally, if \( T \) is represented by a matrix \( A \), then \( \


mathbf{v} \) is an eigenvector of \( A \) with eigenvalue \( \
lambda \) if:

\[ A \mathbf{v} = \lambda \mathbf{v} \]

### Finding Eigenvalues and Eigenvectors

1. **Eigenvalues:**

- To find the eigenvalues of a matrix \( A \), solve the


characteristic polynomial equation:

\[

\text{det}(A - \lambda I) = 0

\]
where \( I \) is the identity matrix of the same dimension as \(
A \), and \( \text{det} \) denotes the determinant.

2. **Eigenvectors:**

- Once the eigenvalues \( \lambda \) are found, substitute


each \( \lambda \) into the equation:

\[

(A - \lambda I) \mathbf{v} = 0

\]

to solve for the eigenvectors \( \mathbf{v} \). This involves


solving a system of linear equations.

### Example

Consider the matrix:

\[

A = \begin{pmatrix}

4 & 1 \\

2&3

\end{pmatrix}
\]

**Step 1: Find the Eigenvalues**

Compute the characteristic polynomial:

\[

\text{det}(A - \lambda I) = \text{det} \begin{pmatrix}

4 - \lambda & 1 \\

2 & 3 - \lambda

\end{pmatrix}

\]

\[

= (4 - \lambda)(3 - \lambda) - (1 \cdot 2)

\]

\[

= \lambda^2 - 7\lambda + 10

\]

Set the characteristic polynomial to zero:


\[

\lambda^2 - 7\lambda + 10 = 0

\]

Solve for \( \lambda \) using the quadratic formula:

\[

\lambda = \frac{7 \pm \sqrt{49 - 40}}{2} = \frac{7 \pm 3}{2}

\]

\[

\lambda_1 = 5 \quad \text{and} \quad \lambda_2 = 2

\]

**Step 2: Find the Eigenvectors**

For \( \lambda_1 = 5 \):

\[

(A - 5I) \mathbf{v} = 0

\]
\[

\begin{pmatrix}

4 - 5 & 1 \\

2&3-5

\end{pmatrix} \mathbf{v} = \begin{pmatrix}

-1 & 1 \\

2 & -2

\end{pmatrix} \mathbf{v} = \mathbf{0}

\]

Solve:

\[

- v_1 + v_2 = 0 \quad \text{or} \quad v_2 = v_1

\]

Thus, an eigenvector corresponding to \( \lambda_1 = 5 \) is:

\[

\mathbf{v}_1 = \begin{pmatrix}1 \\ 1\end{pmatrix}

\]
For \( \lambda_2 = 2 \):

\[

(A - 2I) \mathbf{v} = 0

\]

\[

\begin{pmatrix}

4 - 2 & 1 \\

2&3-2

\end{pmatrix} \mathbf{v} = \begin{pmatrix}

2 & 1 \\

2&1

\end{pmatrix} \mathbf{v} = \mathbf{0}

\]

Solve:

\[

2 v_1 + v_2 = 0 \quad \text{or} \quad v_2 = -2 v_1

\]
Thus, an eigenvector corresponding to \( \lambda_2 = 2 \) is:

\[

\mathbf{v}_2 = \begin{pmatrix}1 \\ -2\end{pmatrix}

\]
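The hand computation above can be reproduced numerically; a minimal NumPy sketch:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                               # [5. 2.]

# Columns of eigvecs are (normalized) eigenvectors; check A v = lambda v
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))       # True, True
```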

### Geometric Interpretation

- **Eigenvalues** represent the factors by which eigenvectors


are scaled during the transformation.

- **Eigenvectors** indicate directions in which the


transformation acts by scaling, rather than changing direction.

### Properties

1. **Linearly Independent Eigenvectors:**

- If a matrix \( A \) has \( n \) linearly independent


eigenvectors, it is diagonalizable.

2. **Diagonalizable Matrices:**

- A matrix \( A \) is diagonalizable if there exists an invertible


matrix \( P \) such that:
\[

P^{-1}AP = D

\]

where \( D \) is a diagonal matrix whose diagonal entries are


the eigenvalues of \( A \), and the columns of \( P \) are the
corresponding eigenvectors.

3. **Jordan Form:**

- If a matrix is not diagonalizable, it can often be put into


Jordan canonical form, which is a block diagonal form that
generalizes the notion of diagonalization.

4. Diagonalization

**Diagonalization** of a matrix involves finding a diagonal matrix


that is similar to a given matrix. This process simplifies the
matrix by transforming it into a diagonal form, which is easier to
work with, especially when computing powers of the matrix or
solving systems of linear differential equations.

### Definition

A square matrix \( A \) is said to be **diagonalizable** if there


exists an invertible matrix \( P \) and a diagonal matrix \( D \)
such that:

\[ A = PDP^{-1} \]

Here:

- \( P \) is the matrix whose columns are the eigenvectors of \


( A \).

- \( D \) is the diagonal matrix whose diagonal entries are the


eigenvalues of \( A \).

### Diagonalization Process

1. **Find the Eigenvalues:**

- Compute the characteristic polynomial of \( A \), which is


given by \( \text{det}(A - \lambda I) = 0 \).

- Solve this polynomial to find the eigenvalues \( \lambda \).

2. **Find the Eigenvectors:**

- For each eigenvalue \( \lambda \), solve the system \( (A - \


lambda I) \mathbf{v} = 0 \) to find the corresponding
eigenvectors \( \mathbf{v} \).
3. **Form the Matrix \( P \):**

- Construct the matrix \( P \) using the eigenvectors as its


columns.

4. **Form the Diagonal Matrix \( D \):**

- Construct the diagonal matrix \( D \) with the eigenvalues on


the diagonal in the same order as the eigenvectors in \( P \).

5. **Verify the Diagonalization:**

- Check if \( A = PDP^{-1} \).

### Example

Consider the matrix:

\[ A = \begin{pmatrix}

4 & 1 \\

2&3

\end{pmatrix} \]

**Step 1: Find the Eigenvalues**

Compute the characteristic polynomial:


\[ \text{det}(A - \lambda I) = \text{det} \begin{pmatrix}

4 - \lambda & 1 \\

2 & 3 - \lambda

\end{pmatrix} \]

\[

= (4 - \lambda)(3 - \lambda) - (1 \cdot 2)

\]

\[

= \lambda^2 - 7\lambda + 10

\]

Set the characteristic polynomial to zero:

\[

\lambda^2 - 7\lambda + 10 = 0

\]

Solve for \( \lambda \):


\[

\lambda = \frac{7 \pm \sqrt{49 - 40}}{2} = \frac{7 \pm 3}{2}

\]

\[

\lambda_1 = 5 \quad \text{and} \quad \lambda_2 = 2

\]

**Step 2: Find the Eigenvectors**

For \( \lambda_1 = 5 \):

Solve \( (A - 5I) \mathbf{v} = 0 \):

\[

(A - 5I) = \begin{pmatrix}

4 - 5 & 1 \\

2&3-5

\end{pmatrix} = \begin{pmatrix}

-1 & 1 \\

2 & -2

\end{pmatrix}
\]

Solving \( \begin{pmatrix}

-1 & 1 \\

2 & -2

\end{pmatrix} \mathbf{v} = \mathbf{0} \):

\[

- v_1 + v_2 = 0 \quad \text{or} \quad v_2 = v_1

\]

Thus, an eigenvector for \( \lambda_1 = 5 \) is:

\[

\mathbf{v}_1 = \begin{pmatrix}1 \\ 1\end{pmatrix}

\]

For \( \lambda_2 = 2 \):

Solve \( (A - 2I) \mathbf{v} = 0 \):

\[
(A - 2I) = \begin{pmatrix}

4 - 2 & 1 \\

2&3-2

\end{pmatrix} = \begin{pmatrix}

2 & 1 \\

2&1

\end{pmatrix}

\]

Solving \( \begin{pmatrix}

2 & 1 \\

2&1

\end{pmatrix} \mathbf{v} = \mathbf{0} \):

\[

2 v_1 + v_2 = 0 \quad \text{or} \quad v_2 = -2 v_1

\]

Thus, an eigenvector for \( \lambda_2 = 2 \) is:

\[

\mathbf{v}_2 = \begin{pmatrix}1 \\ -2\end{pmatrix}


\]

**Step 3: Form the Matrix \( P \)**

Construct matrix \( P \) with eigenvectors as columns:

\[

P = \begin{pmatrix}

1 & 1 \\

1 & -2

\end{pmatrix}

\]

**Step 4: Form the Diagonal Matrix \( D \)**

Construct matrix \( D \) with eigenvalues on the diagonal:

\[

D = \begin{pmatrix}

5 & 0 \\

0&2

\end{pmatrix}
\]

**Step 5: Verify the Diagonalization**

Compute \( P^{-1} \):

\[

P^{-1} = \frac{1}{\text{det}(P)} \text{adj}(P)

\]

where \(\text{det}(P) = -3\) and \(\text{adj}(P)\) is the adjugate


of \( P \):

\[

P^{-1} = \frac{1}{-3} \begin{pmatrix}

-2 & -1 \\

-1 & 1

\end{pmatrix} = \begin{pmatrix}

\frac{2}{3} & \frac{1}{3} \\

\frac{1}{3} & -\frac{1}{3}

\end{pmatrix}

\]
Verify \( A = PDP^{-1} \):

\[

PDP^{-1} = \begin{pmatrix}

1 & 1 \\

1 & -2

\end{pmatrix}

\begin{pmatrix}

5 & 0 \\

0&2

\end{pmatrix}

\begin{pmatrix}

\frac{2}{3} & \frac{1}{3} \\

\frac{1}{3} & -\frac{1}{3}

\end{pmatrix}

\]

Carrying out this multiplication returns \( A \), confirming that the diagonalization is correct.
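The same verification in NumPy, using the eigenvectors and eigenvalues found above:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
P = np.array([[1.0,  1.0],
              [1.0, -2.0]])                  # eigenvectors as columns
D = np.diag([5.0, 2.0])                      # eigenvalues in matching order

print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True: A = P D P^{-1}
```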

5. Invariant subspaces
**Invariant subspaces** are an important concept in linear
algebra and functional analysis, especially when studying linear
transformations and their effects on vector spaces. Here’s a
detailed explanation along with an example:

### Definition

An **invariant subspace** \( W \) of a vector space \( V \) under


a linear transformation \( T: V \to V \) is a subspace of \( V \)
such that:

\[ T(W) \subseteq W \]

This means that if you apply the linear transformation \( T \) to


any vector in \( W \), the result will still be within \( W \).

### Key Points

1. **Subspace**: \( W \) must be a subspace of \( V \), meaning it


must be closed under vector addition and scalar multiplication,
and it must contain the zero vector.

2. **Invariant**: The subspace \( W \) is invariant under \( T \)


if \( T \) maps every vector in \( W \) to another vector in \( W \).
### Example

Consider the vector space \( \mathbb{R}^3 \) with the following


linear transformation \( T \) represented by the matrix:

\[

T = \begin{pmatrix}

1 & 2 & 3 \\

0 & 1 & 4 \\

0&0&1

\end{pmatrix}

\]

We want to find a subspace of \( \mathbb{R}^3 \) that is


invariant under \( T \).

**Step 1: Check for Invariant Subspaces**

To determine if a subspace is invariant, we need to check if \


( T \) maps vectors in that subspace to vectors within the
subspace.
Let’s explore possible invariant subspaces of \( \mathbb{R}^3 \).
We will start with some standard subspaces:

1. **Subspace \( W_1 \) spanned by \(\{(1, 0, 0)\}\)**:

Let’s apply \( T \) to the vector \((1, 0, 0)\):

\[
T \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
\]

Since \(\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\) is in the


subspace spanned by \(\{(1, 0, 0)\}\), \( W_1 \) is **invariant**
under \( T \).
2. **Subspace \( W_2 \) spanned by \(\{(1, 1, 0)\}\)**:

Apply \( T \) to \((1, 1, 0)\):

\[
T \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 + 2 \cdot 1 + 3 \cdot 0 \\ 0 + 1 \cdot 1 + 4 \cdot 0 \\ 0 + 0 + 1 \cdot 0 \end{pmatrix} = \begin{pmatrix} 3 \\ 1 \\ 0 \end{pmatrix}
\]

The result \(\begin{pmatrix} 3 \\ 1 \\ 0 \end{pmatrix}\) is not a


scalar multiple of \(\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}\),
so \( W_2 \) is **not invariant** under \( T \).
3. **Subspace \( W_3 \) spanned by \(\{(1, 0, 0), (0, 1, 0)\}\)**:

Apply \( T \) to \((1, 0, 0)\) and \((0, 1, 0)\):

\[
T \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
\]

\[
T \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix}
\]

The results are \(\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\)


and \(\begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix}\), which are both
in the subspace spanned by \(\{(1, 0, 0), (0, 1, 0)\}\). Hence, \
( W_3 \) is **invariant** under \( T \).

4. **Subspace \( W_4 \) spanned by \(\{(1, 1, 1)\}\)**:

Apply \( T \) to \((1, 1, 1)\):

\[
T \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 + 2 + 3 \\ 0 + 1 + 4 \\ 0 + 0 + 1 \end{pmatrix} = \begin{pmatrix} 6 \\ 5 \\ 1 \end{pmatrix}
\]

The result \(\begin{pmatrix} 6 \\ 5 \\ 1 \end{pmatrix}\) is not a


scalar multiple of \(\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}\),
so \( W_4 \) is **not invariant** under \( T \).
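The four checks above can be automated with a rank-based invariance test; a minimal NumPy sketch (the helper `is_invariant` is an illustrative function, not a library routine):

```python
import numpy as np

T = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [0.0, 0.0, 1.0]])

def is_invariant(T, W):
    """True if T maps the column span of W into itself."""
    return np.linalg.matrix_rank(np.hstack([W, T @ W])) == np.linalg.matrix_rank(W)

W1 = np.array([[1.0, 0.0, 0.0]]).T                       # span{(1,0,0)}
W2 = np.array([[1.0, 1.0, 0.0]]).T                       # span{(1,1,0)}
W3 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]).T      # span{(1,0,0),(0,1,0)}
W4 = np.array([[1.0, 1.0, 1.0]]).T                       # span{(1,1,1)}

for name, W in [("W1", W1), ("W2", W2), ("W3", W3), ("W4", W4)]:
    print(name, is_invariant(T, W))    # W1 True, W2 False, W3 True, W4 False
```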

Q. 6 . Inner product spaces

### Inner Product Spaces

An **inner product space** is a fundamental concept in linear


algebra and functional analysis. It extends the idea of vector
spaces by introducing an additional structure—a way to measure
angles and lengths, and to define orthogonality. This structure is
given by an inner product, which provides geometric and
analytical insights into the space.
### Definition

An **inner product space** is a vector space \( V \) over a field \(


\mathbb{F} \) (where \( \mathbb{F} \) is typically \( \
mathbb{R} \) or \( \mathbb{C} \)) equipped with an inner
product. The inner product is a function:

\[

\langle \cdot, \cdot \rangle: V \times V \to \mathbb{F}

\]

that satisfies the following properties for all vectors \( \


mathbf{u}, \mathbf{v}, \mathbf{w} \in V \) and scalars \( a \in \
mathbb{F} \):

1. **Linearity in the First Argument**:

\[

\langle a\mathbf{u} + b\mathbf{v}, \mathbf{w} \rangle = a\


langle \mathbf{u}, \mathbf{w} \rangle + b\langle \mathbf{v}, \
mathbf{w} \rangle

\]

2. **Conjugate Symmetry** (for complex spaces) or


**Symmetry** (for real spaces):

\[
\langle \mathbf{u}, \mathbf{v} \rangle = \overline{\langle \
mathbf{v}, \mathbf{u} \rangle}

\]

In real spaces, this simplifies to:

\[

\langle \mathbf{u}, \mathbf{v} \rangle = \langle \mathbf{v}, \


mathbf{u} \rangle

\]

3. **Positive-Definiteness**:

\[

\langle \mathbf{u}, \mathbf{u} \rangle \geq 0

\]

with equality if and only if \( \mathbf{u} = 0 \).

### Properties

1. **Norm Induced by the Inner Product**:

- The inner product induces a norm (or length) on \( V \),


defined by:

\[

\| \mathbf{u} \| = \sqrt{\langle \mathbf{u}, \mathbf{u} \


rangle}
\]

2. **Orthogonality**:

- Two vectors \( \mathbf{u} \) and \( \mathbf{v} \) are said to


be orthogonal if:

\[

\langle \mathbf{u}, \mathbf{v} \rangle = 0

\]

3. **Pythagorean Theorem**:

- For orthogonal vectors \( \mathbf{u} \) and \( \mathbf{v} \):

\[

\| \mathbf{u} + \mathbf{v} \|^2 = \| \mathbf{u} \|^2 + \| \


mathbf{v} \|^2

\]

4. **Cauchy-Schwarz Inequality**:

- For any vectors \( \mathbf{u} \) and \( \mathbf{v} \):

\[

|\langle \mathbf{u}, \mathbf{v} \rangle| \leq \| \


mathbf{u} \| \| \mathbf{v} \|

\]
5. **Triangle Inequality**:

- For any vectors \( \mathbf{u} \) and \( \mathbf{v} \):

\[

\| \mathbf{u} + \mathbf{v} \| \leq \| \mathbf{u} \| + \| \


mathbf{v} \|

\]

### Examples

1. **Euclidean Space**:

- In \( \mathbb{R}^n \), the inner product is the dot product:

\[

\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{i=1}^n x_i y_i

\]

- The corresponding norm is:

\[

\| \mathbf{x} \| = \sqrt{\langle \mathbf{x}, \mathbf{x} \rangle}

\]

2. **Complex Vector Space**:

- In \( \mathbb{C}^n \), the inner product is:

\[
\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{i=1}^n x_i \
overline{y_i}

\]

- The norm is:

\[

\| \mathbf{x} \| = \sqrt{\langle \mathbf{x}, \mathbf{x} \rangle}

\]

3. **Function Spaces**:

- Consider the space of square-integrable functions \( L^2([a,


b]) \) with inner product:

\[

\langle f, g \rangle = \int_a^b f(x) \overline{g(x)} \, dx

\]

- The norm is:

\[

\| f \| = \sqrt{\langle f, f \rangle} = \left( \int_a^b |f(x)|^2 \, dx


\right)^{1/2}

\]

4. **Polynomial Spaces**:

- For the space of polynomials of degree at most \( n \), \( \


mathbb{P}_n \), a common inner product is:
\[

\langle p, q \rangle = \int_{a}^{b} p(x) q(x) \, dx

\]

### Orthogonal Projections and Orthogonal Complements

1. **Orthogonal Projection**:

- The orthogonal projection of \( \mathbf{u} \) onto a


subspace \( W \) is the unique vector \( \mathbf{p} \in W \) such
that \( \mathbf{u} - \mathbf{p} \) is orthogonal to \( W \).

2. **Orthogonal Complement**:

- The orthogonal complement \( W^\perp \) of a subspace \


( W \) consists of all vectors in \( V \) that are orthogonal to
every vector in \( W \).
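A minimal NumPy sketch of an orthogonal projection (the subspace \( W \) is spanned by the columns of an arbitrary example matrix `B`):

```python
import numpy as np

B = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])                 # columns span W in R^3
u = np.array([3.0, -1.0, 4.0])

c, *_ = np.linalg.lstsq(B, u, rcond=None)  # least-squares coefficients
p = B @ c                                  # orthogonal projection of u onto W

print(np.allclose(B.T @ (u - p), 0))       # True: u - p lies in the orthogonal complement of W
```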

Q. 7. adjoints

### Adjoint of a Linear Transformation

In linear algebra, the concept of an adjoint (or Hermitian adjoint)


is a generalization of the transpose for linear transformations on
inner product spaces. It is closely tied to the idea of symmetry
and the preservation of inner products.
### Definition

Given a linear transformation \( T: V \to V \) on an inner product


space \( V \), the **adjoint** \( T^* \) of \( T \) is a linear
transformation \( T^*: V \to V \) such that:

\[

\langle T\mathbf{u}, \mathbf{v} \rangle = \langle \mathbf{u},


T^*\mathbf{v} \rangle

\]

for all vectors \( \mathbf{u}, \mathbf{v} \in V \). This definition


ensures that the inner product is preserved under the
transformation \( T \) and its adjoint \( T^* \).

### Properties

1. **Existence and Uniqueness**:

- For any linear transformation \( T \) on a finite-dimensional


inner product space \( V \), there exists a unique adjoint \( T^* \)
that satisfies the defining property. This adjoint is also a linear
transformation.

2. **Matrix Representation**:

- If \( T \) is represented by a matrix \( A \) with respect to


some orthonormal basis, then the matrix of the adjoint \( T^* \)
is the conjugate transpose \( A^* \) (also called the Hermitian
transpose) of \( A \). In other words:

\[

[T^*] = \overline{[T]}^T

\]

where \( \overline{[T]} \) denotes the matrix obtained by


taking the complex conjugate of each entry of \( [T] \), and \
( T^T \) denotes the transpose.

3. **Self-Adjoint (Hermitian) Operators**:

- An operator \( T \) is called **self-adjoint** or **Hermitian** if


\( T = T^* \). For such operators, the matrix representation is
equal to its own conjugate transpose:

\[

[T] = [T]^*

\]

4. **Adjoint of a Composition**:

- If \( T: V \to W \) and \( S: W \to V \) are linear


transformations between inner product spaces \( V \) and \( W \),
then:

\[

(ST)^* = T^* S^*

\]
5. **Adjoint of the Identity Transformation**:

- The identity transformation \( I \) on \( V \) has itself as the


adjoint:

\[

I^* = I

\]

6. **Adjoint of a Sum**:

- If \( T \) and \( S \) are linear transformations on \( V \), then:

\[

(T + S)^* = T^* + S^*

\]

7. **Adjoint of a Scalar Multiple**:

- If \( \alpha \) is a scalar and \( T \) is a linear transformation,


then:

\[

(\alpha T)^* = \overline{\alpha} T^*

\]
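These algebraic rules are straightforward to confirm numerically for matrices; a minimal sketch with random complex matrices in NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
adj = lambda M: M.conj().T
alpha = 2.0 - 1.5j

print(np.allclose(adj(S @ T), adj(T) @ adj(S)))              # (ST)* = T* S*
print(np.allclose(adj(T + S), adj(T) + adj(S)))              # (T + S)* = T* + S*
print(np.allclose(adj(alpha * T), np.conj(alpha) * adj(T)))  # (alpha T)* = conj(alpha) T*
print(np.allclose(adj(adj(T)), T))                           # (T*)* = T
```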

### Example
Let's consider an example using the standard basis for \( \
mathbb{R}^2 \).

**Example 1:**

Let \( V = \mathbb{R}^2 \) and let \( T \) be the linear


transformation represented by the matrix:

\[

A = \begin{pmatrix}

1 & 2 \\

3&4

\end{pmatrix}

\]

To find the adjoint \( T^* \), we compute the conjugate transpose


of \( A \):

1. **Conjugate of Matrix \( A \)**:

- Since \( A \) has real entries, the conjugate is just \( A \):

\[

\overline{A} = \begin{pmatrix}

1 & 2 \\
3&4

\end{pmatrix}

\]

2. **Transpose of \( \overline{A} \)**:

- The transpose of \( A \) is:

\[

A^T = \begin{pmatrix}

1 & 3 \\

2&4

\end{pmatrix}

\]

Since \( \overline{A} = A \) in this case, the adjoint is:

\[

A^* = \begin{pmatrix}

1 & 3 \\

2&4

\end{pmatrix}

\]

**Example 2:**
Consider the linear transformation \( T \) on \( \mathbb{C}^2 \)
given by:

\[

T(\mathbf{x}) = \begin{pmatrix}

1 & i \\

-i & 2

\end{pmatrix} \mathbf{x}

\]

where \( \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \).

To find the adjoint \( T^* \):

1. **Conjugate of Matrix \( T \)**:

- The conjugate of \( T \) is:

\[

\overline{T} = \begin{pmatrix}

1 & -i \\

i&2

\end{pmatrix}
\]

2. **Transpose of \( \overline{T} \)**:

- The transpose of \( \overline{T} \) is:

\[

(\overline{T})^T = \begin{pmatrix}

1 & i \\

-i & 2

\end{pmatrix}

\]

Thus, the adjoint \( T^* \) is:

\[

T^* = \begin{pmatrix}

1 & i \\

-i & 2

\end{pmatrix}

\]

Q. 8 . unitary and normal transformations

### Unitary and Normal Transformations


**Unitary** and **normal** transformations are specific types of
linear transformations on inner product spaces that have
significant implications in linear algebra and functional analysis.
They are related to concepts of orthogonality, eigenvalues, and
diagonalizability.

### Unitary Transformations

A linear transformation \( T: V \to V \) on an inner product


space \( V \) is called **unitary** if it preserves the inner
product.

#### Definition

A linear transformation \( T \) is unitary if:

\[

\langle T\mathbf{u}, T\mathbf{v} \rangle = \langle \mathbf{u}, \


mathbf{v} \rangle

\]

for all vectors \( \mathbf{u}, \mathbf{v} \in V \). In matrix terms,


if \( T \) is represented by a matrix \( U \), then \( U \) is unitary
if:
\[

U^* U = U U^* = I

\]

where \( U^* \) is the conjugate transpose (Hermitian transpose)


of \( U \), and \( I \) is the identity matrix.

#### Properties

1. **Preservation of Norms**:

- For any vector \( \mathbf{u} \in V \):

\[

\| T\mathbf{u} \| = \| \mathbf{u} \|

\]

2. **Orthogonality of Columns**:

- The columns of a unitary matrix are orthonormal. This means


that if \( U \) is a unitary matrix, its columns are orthogonal and
have unit length.

3. **Eigenvalues**:

- The eigenvalues of a unitary matrix \( U \) lie on the unit


circle in the complex plane (i.e., they have absolute value 1).
4. **Diagonalization**:

- Any unitary matrix \( U \) can be diagonalized by a unitary


matrix \( P \):

\[

U = P D P^*

\]

where \( D \) is a diagonal matrix with the eigenvalues of \


( U \) on the diagonal.

5. **Inverse**:

- The inverse of a unitary matrix \( U \) is its conjugate


transpose:

\[

U^{-1} = U^*

\]

#### Example

Consider the matrix:

\[

U = \frac{1}{\sqrt{2}} \begin{pmatrix}
1 & 1 \\

1 & -1

\end{pmatrix}

\]

To verify \( U \) is unitary, compute:

\[

U^* = \frac{1}{\sqrt{2}} \begin{pmatrix}

1 & 1 \\

1 & -1

\end{pmatrix}^* = \frac{1}{\sqrt{2}} \begin{pmatrix}

1 & 1 \\

1 & -1

\end{pmatrix}

\]

Then:

\[

U^* U = \frac{1}{\sqrt{2}} \begin{pmatrix}

1 & 1 \\
1 & -1

\end{pmatrix} \frac{1}{\sqrt{2}} \begin{pmatrix}

1 & 1 \\

1 & -1

\end{pmatrix} = \begin{pmatrix}

1 & 0 \\

0&1

\end{pmatrix} = I

\]

Thus, \( U \) is unitary.
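The same verification in NumPy:

```python
import numpy as np

U = (1 / np.sqrt(2)) * np.array([[1.0,  1.0],
                                 [1.0, -1.0]])

print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: U* U = I, so U is unitary
print(np.abs(np.linalg.eigvals(U)))             # [1. 1.]: eigenvalues lie on the unit circle
```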

### Normal Transformations

A linear transformation \( T: V \to V \) on an inner product


space \( V \) is called **normal** if it commutes with its adjoint.

#### Definition

A linear transformation \( T \) is normal if:

\[
T T^* = T^* T

\]

where \( T^* \) is the adjoint of \( T \).

#### Properties

1. **Spectral Theorem**:

- A normal operator on a finite-dimensional inner product space


can be diagonalized by a unitary matrix. Specifically, there exists
a unitary matrix \( U \) and a diagonal matrix \( D \) such that:

\[

T = U D U^*

\]

2. **Commutativity**:

- Normal operators commute with their adjoints, but they do


not necessarily commute with other normal operators. However,
if \( T \) and \( S \) are normal and they commute, then \( T +
S \) and \( TS \) are also normal.

3. **Eigenvalues**:

- The eigenvalues of a normal matrix are not necessarily on the


unit circle, unlike those of a unitary matrix. However, normal
matrices can always be diagonalized.

4. **Norm Identity**:

   - A normal operator and its adjoint have the same effect on the length of every vector:

   \[
   \| T \mathbf{u} \| = \| T^* \mathbf{u} \|
   \]

   In fact, this identity characterizes normal operators.

#### Example

Consider the matrix:

\[

T = \begin{pmatrix}

2 & 1 \\

1&2

\end{pmatrix}

\]

To check if \( T \) is normal:
1. Compute \( T^* \):

- Since \( T \) has real entries, \( T^* = T \):

\[

T^* = \begin{pmatrix}

2 & 1 \\

1&2

\end{pmatrix}

\]

2. Compute \( T T^* \) and \( T^* T \):

\[

T T^* = \begin{pmatrix}

2 & 1 \\

1&2

\end{pmatrix} \begin{pmatrix}

2 & 1 \\

1&2

\end{pmatrix} = \begin{pmatrix}

5 & 4 \\

4&5

\end{pmatrix}

\]
\[

T^* T = \begin{pmatrix}

2 & 1 \\

1&2

\end{pmatrix} \begin{pmatrix}

2 & 1 \\

1&2

\end{pmatrix} = \begin{pmatrix}

5 & 4 \\

4&5

\end{pmatrix}

\]

Since \( T T^* = T^* T \), \( T \) is normal.
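The same check in NumPy:

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 2.0]])
T_star = T.conj().T                          # equals T here, since T is real and symmetric

print(np.allclose(T @ T_star, T_star @ T))   # True: T is normal
```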

Q. 9 . spectral Theorem
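In brief, the spectral theorem for finite-dimensional inner product spaces states that a normal operator (in particular a self-adjoint or unitary one) can be unitarily diagonalized: \( T = U D U^* \) with \( U \) unitary and \( D \) diagonal; for self-adjoint operators the eigenvalues are real and there is an orthonormal basis of eigenvectors. A minimal NumPy illustration for a real symmetric (hence self-adjoint) matrix:

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 2.0]])                     # symmetric, therefore self-adjoint and normal

eigvals, U = np.linalg.eigh(T)                 # eigh: eigendecomposition for Hermitian matrices
D = np.diag(eigvals)

print(eigvals)                                 # [1. 3.]  real eigenvalues
print(np.allclose(U @ D @ U.conj().T, T))      # True: T = U D U*
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: columns of U are orthonormal
```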

Q. 10 . Jordan canonical form

### Jordan Canonical Form


The Jordan canonical form (also known as Jordan normal
form) is a canonical representation of a linear
transformation or matrix that simplifies the analysis of
its structure, especially in the context of eigenvalues and
generalized eigenspaces. It is particularly useful when
working with matrices that are not diagonalizable.

### Definition

The **Jordan canonical form** of a matrix \( A \) is a


block diagonal matrix \( J \) such that:

\[

A \text{ is similar to } J

\]

where \( J \) is composed of Jordan blocks. A matrix \


( A \) is similar to \( J \) if there exists an invertible
matrix \( P \) such that:

\[

A = P J P^{-1}

\]
### Jordan Block

A **Jordan block** \( J_k(\lambda) \) of size \( k \)


associated with the eigenvalue \( \lambda \) is a \( k \
times k \) matrix of the form:

\[

J_k(\lambda) = \begin{pmatrix}

\lambda & 1 & 0 & \cdots & 0 \\

0 & \lambda & 1 & \cdots & 0 \\

0 & 0 & \lambda & \cdots & 0 \\

\vdots & \vdots & \vdots & \ddots & \vdots \\

0 & 0 & 0 & \cdots & \lambda

\end{pmatrix}

\]

The Jordan block is an upper triangular matrix with \( \


lambda \) on the diagonal, 1s on the superdiagonal (just
above the main diagonal), and 0s elsewhere.
### Jordan Canonical Form

The Jordan canonical form \( J \) of a matrix \( A \) is a


block diagonal matrix composed of Jordan blocks \( J_k(\
lambda) \) corresponding to each eigenvalue \( \
lambda \) of \( A \). The blocks are arranged along the
diagonal of \( J \), and \( J \) looks like:

\[

J = \begin{pmatrix}

J_{k_1}(\lambda_1) & 0 & \cdots & 0 \\

0 & J_{k_2}(\lambda_2) & \cdots & 0 \\

\vdots & \vdots & \ddots & \vdots \\

0 & 0 & \cdots & J_{k_m}(\lambda_m)

\end{pmatrix}

\]

where \( \lambda_1, \lambda_2, \ldots, \lambda_m \) are


the distinct eigenvalues of \( A \), and \( J_{k_i}(\
lambda_i) \) are Jordan blocks corresponding to each \( \
lambda_i \).
### Process to Find Jordan Canonical Form

1. **Find the Eigenvalues**:

- Determine the eigenvalues \( \lambda \) of \( A \) by


solving the characteristic polynomial \( \text{det}(A - \
lambda I) = 0 \).

2. **Find the Jordan Blocks**:

- For each eigenvalue \( \lambda \), find the sizes of


the Jordan blocks by analyzing the geometric multiplicity
(dimension of eigenspaces) and algebraic multiplicity
(multiplicity of eigenvalue in characteristic polynomial).

3. **Construct the Jordan Blocks**:

- For each eigenvalue, construct Jordan blocks of


appropriate sizes. The size of the Jordan block
corresponds to the size of the largest generalized
eigenspace associated with that eigenvalue.

4. **Form the Jordan Canonical Form**:

- Arrange the Jordan blocks in a block diagonal matrix.


This matrix \( J \) is the Jordan canonical form of \( A \).
5. **Find the Similarity Transformation Matrix \( P \)**:

- To express \( A \) in Jordan canonical form, find an


invertible matrix \( P \) such that \( A = P J P^{-1} \).
This involves finding generalized eigenvectors that form
the columns of \( P \).

### Example

Consider the matrix:

\[

A = \begin{pmatrix}

4 & 1 & 0 \\

0 & 4 & 1 \\

0&0&4

\end{pmatrix}

\]

1. **Find Eigenvalues**:

- The characteristic polynomial is \( (4 - \lambda)^3 \),


so the eigenvalue \( \lambda = 4 \) has algebraic
multiplicity 3.
2. **Find Jordan Blocks**:

- Compute the eigenspace and generalized eigenspaces


for \( \lambda = 4 \). The matrix \( A \) has only one
Jordan block because the geometric multiplicity is 1
(dimension of the eigenspace corresponding to \( \
lambda = 4 \)).

3. **Construct Jordan Blocks**:

- The Jordan block corresponding to \( \lambda = 4 \)


is:

\[

J_3(4) = \begin{pmatrix}

4 & 1 & 0 \\

0 & 4 & 1 \\

0&0&4

\end{pmatrix}

\]

4. **Form the Jordan Canonical Form**:

- Since there is only one Jordan block, the Jordan


canonical form of \( A \) is:

\[

J = \begin{pmatrix}

4 & 1 & 0 \\

0 & 4 & 1 \\

0&0&4

\end{pmatrix}

\]

5. **Find the Similarity Transformation Matrix \( P \)**:

- The matrix \( P \) is composed of generalized


eigenvectors of \( A \). To find \( P \), compute the
generalized eigenspace vectors for \( \lambda = 4 \).
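A minimal computational sketch with SymPy, which returns both the transformation matrix and the Jordan form:

```python
from sympy import Matrix

A = Matrix([[4, 1, 0],
            [0, 4, 1],
            [0, 0, 4]])

P, J = A.jordan_form()            # A = P J P^{-1}
print(J)                          # Matrix([[4, 1, 0], [0, 4, 1], [0, 0, 4]])
print(P * J * P.inv() == A)       # True
```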

### Key Points

- **Jordan Canonical Form**: Provides a simplified form


of a matrix that reveals the structure of its eigenvalues
and generalized eigenspaces.

- **Jordan Blocks**: Block matrices associated with each


eigenvalue, which can be used to construct the Jordan
canonical form.

- **Similarity Transformation**: The matrix \( A \) is


similar to its Jordan canonical form \( J \), meaning they
represent the same linear transformation but in different
bases.

Understanding Jordan canonical form is essential for


analyzing the structure of linear operators and matrices,
especially when diagonalization is not possible.

UNIT 4 .

1. Canonical forms

Canonical forms are a way to simplify matrices and linear


transformations to make them easier to analyze and work with.
The goal of using canonical forms is to represent matrices in a
standard, simplified way that highlights their fundamental
properties. There are several important canonical forms in linear
algebra, including the Jordan canonical form and the diagonal
form. Here’s a detailed explanation of these forms:

### 1. **Diagonal Form**

**Diagonalization** is the process of converting a matrix into a


diagonal matrix via a similarity transformation. A matrix \( A \) is
diagonalizable if it can be written as:

\[ A = PDP^{-1} \]

where:

- \( D \) is a diagonal matrix containing the eigenvalues of \( A \)


on its diagonal.

- \( P \) is an invertible matrix whose columns are the


eigenvectors of \( A \).

**Conditions for Diagonalization:**

- The matrix \( A \) must have \( n \) linearly independent


eigenvectors (for an \( n \times n \) matrix).

**Example:**

Consider the matrix:

\[ A = \begin{pmatrix}

4 & 1 \\

2&3
\end{pmatrix} \]

**Eigenvalues:**

Solve the characteristic polynomial:

\[

\text{det}(A - \lambda I) = \lambda^2 - 7\lambda + 10 = 0

\]

The eigenvalues are \( \lambda_1 = 5 \) and \( \lambda_2 = 2 \).

**Eigenvectors:**

For \( \lambda_1 = 5 \):

\[

(A - 5I) = \begin{pmatrix}

-1 & 1 \\

2 & -2

\end{pmatrix} \Rightarrow \text{Eigenvector } \mathbf{v}_1 = \


begin{pmatrix}1 \\ 1\end{pmatrix}
\]

For \( \lambda_2 = 2 \):

\[

(A - 2I) = \begin{pmatrix}

2 & 1 \\

2&1

\end{pmatrix} \Rightarrow \text{Eigenvector } \mathbf{v}_2 = \


begin{pmatrix}1 \\ -2\end{pmatrix}

\]

**Diagonal Matrix:**

\[

D = \begin{pmatrix}

5 & 0 \\

0&2

\end{pmatrix}

\]

**Matrix \( P \):**
\[

P = \begin{pmatrix}

1 & 1 \\

1 & -2

\end{pmatrix}

\]

**Verification:**

\[

A = PDP^{-1}

\]

where \( P^{-1} \) can be computed as shown earlier. This


confirms that \( A \) can be diagonalized.

### 2. **Jordan Canonical Form**

If a matrix cannot be diagonalized, it can often be put into


**Jordan canonical form**. This form is a block diagonal matrix
where each block is a Jordan block.
A **Jordan block** for an eigenvalue \( \lambda \) looks like:

\[

J_k(\lambda) = \begin{pmatrix}

\lambda & 1 & 0 & \cdots & 0 \\

0 & \lambda & 1 & \cdots & 0 \\

0 & 0 & \lambda & \cdots & 0 \\

\vdots & \vdots & \vdots & \ddots & 1 \\

0 & 0 & 0 & \cdots & \lambda

\end{pmatrix}

\]

where \( k \) is the size of the Jordan block.

**Conditions for Jordan Form:**

- The matrix \( A \) can be put into Jordan form if it has a


complete set of generalized eigenvectors.

**Example:**

Consider the matrix:


\[

A = \begin{pmatrix}

5 & 1 \\

0&5

\end{pmatrix}

\]

**Eigenvalue:**

The eigenvalue is \( \lambda = 5 \).

**Eigenvector:**

For \( \lambda = 5 \):

\[

A - 5I = \begin{pmatrix}

0 & 1 \\

0&0

\end{pmatrix}

\]
The eigenvector corresponding to \( \lambda = 5 \) is:

\[

\mathbf{v} = \begin{pmatrix}1 \\ 0\end{pmatrix}

\]

**Generalized Eigenvector:**

Solve \( (A - 5I)^2 \mathbf{v} = 0 \):

\[

(A - 5I)^2 = \begin{pmatrix}

0 & 0 \\

0&0

\end{pmatrix}

\]

Generalized eigenvector is:

\[

\mathbf{v}_g = \begin{pmatrix}0 \\ 1\end{pmatrix}


\]

**Jordan Canonical Form:**

The Jordan form is:

\[

J = \begin{pmatrix}

5 & 1 \\

0&5

\end{pmatrix}

\]

**Matrix \( P \):**

Construct \( P \) using the eigenvector and generalized


eigenvector:

\[

P = \begin{pmatrix}

1 & 0 \\

0&1
\end{pmatrix}

\]

**Verification:**

\[

A = PJP^{-1}

\]

### 3. **Rational Canonical Form**

The **rational canonical form** is a canonical form used when


diagonalization and Jordan canonical forms are not applicable. It
is based on the invariant factors of the matrix and provides a way
to describe matrices up to similarity transformations using only
the structure of the minimal polynomial.

2. Similarity of linear transformations

**Similarity of linear transformations** is a key concept in linear


algebra that deals with how different linear transformations or
matrices can essentially represent the same transformation but
in different coordinate systems or bases. Here's a detailed
explanation:
### Definition

Two linear transformations \( T \) and \( S \) on a vector space \(


V \) are said to be **similar** if there exists an invertible linear
transformation \( P: V \to V \) such that:

\[ S = P^{-1}TP \]

In terms of matrices, if \( A \) and \( B \) are the matrices


representing the linear transformations \( T \) and \( S \) with
respect to some basis, then \( A \) and \( B \) are similar if there
exists an invertible matrix \( P \) such that:

\[ B = P^{-1}AP \]

### Intuition

- **Geometric Interpretation**: Similarity of linear


transformations indicates that \( T \) and \( S \) essentially
perform the same operation, but possibly in different coordinate
systems. If \( T \) maps vectors in one basis, \( S \) maps vectors
in another basis that is related to the first by the matrix \( P \).

- **Matrix Representation**: Similar matrices represent the same


linear transformation in different bases. The similarity
transformation \( P^{-1}AP \) changes the basis from the one
used for \( A \) to the one used for \( B \).

### Properties of Similarity

1. **Similar Matrices Have the Same Characteristic


Polynomial**: This implies that similar matrices have the same
eigenvalues.

2. **Similar Matrices Have the Same Jordan Canonical Form**:


Similar matrices share the same Jordan form if \( A \) and \( B \)
are similar.

3. **Similarity Preserves Rank**: The rank of a matrix is


invariant under similarity transformations.

4. **Similarity Preserves Determinant**: The determinant of


similar matrices is the same.

5. **Similarity Preserves Trace**: The trace (sum of diagonal


elements) of similar matrices is the same.

### Example

Consider the matrices:


\[ A = \begin{pmatrix}

4 & 1 \\

2&3

\end{pmatrix} \]

and

\[ B = \begin{pmatrix}

5 & 0 \\

0&2

\end{pmatrix} \]

We want to check if \( A \) and \( B \) are similar.

**Step 1: Find the Eigenvalues**

- For \( A \):

\[

\text{det}(A - \lambda I) = \text{det} \begin{pmatrix}

4 - \lambda & 1 \\
2 & 3 - \lambda

\end{pmatrix} = \lambda^2 - 7\lambda + 10

\]

The eigenvalues are \( \lambda_1 = 5 \) and \( \lambda_2 =


2 \).

- For \( B \):

The eigenvalues are directly \( \lambda_1 = 5 \) and \( \


lambda_2 = 2 \), since \( B \) is diagonal.

Since \( A \) and \( B \) have the same eigenvalues, they may be


similar.

**Step 2: Find Eigenvectors of \( A \)**

For \( \lambda_1 = 5 \):

\[

(A - 5I) = \begin{pmatrix}

-1 & 1 \\

2 & -2
\end{pmatrix} \Rightarrow \text{Eigenvector } \mathbf{v}_1 = \
begin{pmatrix}1 \\ 1\end{pmatrix}

\]

For \( \lambda_2 = 2 \):

\[

(A - 2I) = \begin{pmatrix}

2 & 1 \\

2&1

\end{pmatrix} \Rightarrow \text{Eigenvector } \mathbf{v}_2 = \


begin{pmatrix}1 \\ -2\end{pmatrix}

\]

**Step 3: Construct Matrix \( P \) and Verify**

Matrix \( P \) is formed by placing the eigenvectors as columns:

\[

P = \begin{pmatrix}

1 & 1 \\

1 & -2

\end{pmatrix}
\]

Compute \( P^{-1} \):

\[

P^{-1} = \frac{1}{-3} \begin{pmatrix}

-2 & -1 \\

-1 & 1

\end{pmatrix} = \begin{pmatrix}

\frac{2}{3} & \frac{1}{3} \\

\frac{1}{3} & -\frac{1}{3}

\end{pmatrix}

\]

Check if \( B = P^{-1}AP \):

\[

P^{-1}AP = \begin{pmatrix}

\frac{2}{3} & \frac{1}{3} \\

\frac{1}{3} & -\frac{1}{3}

\end{pmatrix}

\begin{pmatrix}
4 & 1 \\

2&3

\end{pmatrix}

\begin{pmatrix}

1 & 1 \\

1 & -2

\end{pmatrix} = \begin{pmatrix}

5 & 0 \\

0&2

\end{pmatrix}

\]

This confirms that \( A \) and \( B \) are indeed similar.
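A quick numerical check of the similarity and of the invariants it preserves, with NumPy:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
B = np.array([[5.0, 0.0],
              [0.0, 2.0]])
P = np.array([[1.0,  1.0],
              [1.0, -2.0]])

print(np.allclose(np.linalg.inv(P) @ A @ P, B))     # True: B = P^{-1} A P
print(np.trace(A), np.trace(B))                     # 7.0 7.0   (same trace)
print(np.linalg.det(A), np.linalg.det(B))           # ~10.0 for both (same determinant)
print(np.sort(np.linalg.eigvals(A)),
      np.sort(np.linalg.eigvals(B)))                # same eigenvalues: [2. 5.] [2. 5.]
```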

3. Reduction to triangular forms

**Similarity of linear transformations** is a fundamental concept in linear algebra that helps in understanding how different linear transformations can be related to each other. Two linear transformations are considered similar if they represent the same linear operation in different bases of the vector space.

### Definition
Two linear transformations \( T \) and \( S \) from a vector space
\( V \) to itself are said to be **similar** if there exists an
invertible linear transformation \( P \) (or equivalently, an
invertible matrix \( P \) if \( T \) and \( S \) are represented by
matrices) such that:

\[ T = P^{-1} S P \]

Here’s a step-by-step explanation of the concept:

1. **Matrix Representation**:

- A linear transformation \( T \) can be represented by a


matrix \( A \) in a given basis. Similarly, another linear
transformation \( S \) can be represented by a matrix \( B \) in
the same or different basis.

2. **Similarity Transformation**:

- If \( T \) and \( S \) are similar, then \( A \) and \( B \) are


related by a similarity transformation:

\[

A = P^{-1} B P

\]

where \( P \) is an invertible matrix that represents a change


of basis.
3. **Implications of Similarity**:

- Similar matrices represent the same linear transformation but


in different bases. They have the same eigenvalues,
characteristic polynomial, and minimal polynomial.

- They share many important properties and behaviors, such as


the determinant, trace, and rank.

### Example

Let’s consider an example to illustrate similarity of linear


transformations.

**Example:**

Let’s take two matrices \( A \) and \( B \) and check if they are


similar.

\[

A = \begin{pmatrix}

4 & 1 \\

0&2

\end{pmatrix}

\]
\[

B = \begin{pmatrix}

2 & 1 \\

0&4

\end{pmatrix}

\]

We want to find if there is an invertible matrix \( P \) such that:

\[

A = P^{-1} B P

\]

**Step 1: Compute the Eigenvalues**

First, find the eigenvalues of \( A \) and \( B \).

- **Eigenvalues of \( A \)**:

Compute the characteristic polynomial of \( A \):

\[
\text{det}(A - \lambda I) = \text{det} \begin{pmatrix}

4 - \lambda & 1 \\

0 & 2 - \lambda

\end{pmatrix} = (4 - \lambda)(2 - \lambda)

\]

Eigenvalues are \( \lambda_1 = 4 \) and \( \lambda_2 = 2 \).

- **Eigenvalues of \( B \)**:

Compute the characteristic polynomial of \( B \):

\[

\text{det}(B - \lambda I) = \text{det} \begin{pmatrix}

2 - \lambda & 1 \\

0 & 4 - \lambda

\end{pmatrix} = (2 - \lambda)(4 - \lambda)

\]

Eigenvalues are \( \lambda_1 = 2 \) and \( \lambda_2 = 4 \).

The eigenvalues of \( A \) and \( B \) are the same, so \( A \)


and \( B \) could potentially be similar.

**Step 2: Find the Eigenvectors**

For \( A \), find the eigenvectors corresponding to eigenvalues \( 4 \) and \( 2 \).

- For \( \lambda = 4 \):

\[

(A - 4I) = \begin{pmatrix}

0 & 1 \\

0 & -2

\end{pmatrix}

\]

Eigenvector is \( \mathbf{v}_1 = \begin{pmatrix}1 \\ 0\end{pmatrix} \).

- For \( \lambda = 2 \):

\[

(A - 2I) = \begin{pmatrix}
2 & 1 \\

0&0

\end{pmatrix}

\]

Eigenvector is \( \mathbf{v}_2 = \begin{pmatrix}-\frac{1}{2} \\ 1\end{pmatrix} \).

For \( B \), find the eigenvectors corresponding to eigenvalues \( 2 \) and \( 4 \).

- For \( \lambda = 2 \):

\[

(B - 2I) = \begin{pmatrix}

0 & 1 \\

0&2

\end{pmatrix}

\]

Eigenvector is \( \mathbf{v}_1 = \begin{pmatrix}1 \\ 0\end{pmatrix} \).
- For \( \lambda = 4 \):

\[

(B - 4I) = \begin{pmatrix}

-2 & 1 \\

0&0

\end{pmatrix}

\]

Eigenvector is \( \mathbf{v}_2 = \begin{pmatrix}1 \\ 2\end{pmatrix} \), since \( -2x + y = 0 \) gives \( y = 2x \).

**Step 3: Construct Matrix \( P \) and Verify**

Since \( A = P^{-1} B P \) is equivalent to \( BP = PA \), the matrix \( P \) must send each eigenvector of \( A \) to an eigenvector of \( B \) for the same eigenvalue: \( P \begin{pmatrix}1 \\ 0\end{pmatrix} \) must be a multiple of \( \begin{pmatrix}1 \\ 2\end{pmatrix} \) (eigenvalue \( 4 \)), and \( P \begin{pmatrix}-\frac{1}{2} \\ 1\end{pmatrix} \) must be a multiple of \( \begin{pmatrix}1 \\ 0\end{pmatrix} \) (eigenvalue \( 2 \)). One such matrix is:

\[
P = \begin{pmatrix}
1 & 0 \\
2 & 1
\end{pmatrix}, \qquad
P^{-1} = \begin{pmatrix}
1 & 0 \\
-2 & 1
\end{pmatrix}
\]

Verify that \( A = P^{-1} B P \):

\[
P^{-1} B P = \begin{pmatrix}
1 & 0 \\
-2 & 1
\end{pmatrix}
\begin{pmatrix}
2 & 1 \\
0 & 4
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\
2 & 1
\end{pmatrix} = \begin{pmatrix}
4 & 1 \\
0 & 2
\end{pmatrix} = A
\]

Thus, \( A \) and \( B \) are similar.
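
The corrected construction can be checked numerically with the same kind of NumPy sketch as before (names are illustrative, not part of the original derivation):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [0.0, 2.0]])
B = np.array([[2.0, 1.0],
              [0.0, 4.0]])

# P sends each eigenvector of A to an eigenvector of B
# with the same eigenvalue
P = np.array([[1.0, 0.0],
              [2.0, 1.0]])

check = np.linalg.inv(P) @ B @ P
print(check)                     # [[4. 1.] [0. 2.]]
print(np.allclose(check, A))     # True
```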

4. Nilpotent transformations

**Nilpotent transformations** are a special class of linear


transformations that play an important role in linear algebra,
especially in the study of the structure of linear operators and
matrices. Here's a detailed explanation of nilpotent
transformations:

### Definition

A linear transformation \( T \) from a vector space \( V \) to itself


is called **nilpotent** if there exists a positive integer \( k \)
such that:

\[ T^k = 0 \]

where \( T^k \) denotes the \( k \)-th power of \( T \) (i.e.,


applying \( T \) \( k \) times in succession), and \( 0 \) is the zero
transformation (the transformation that maps every vector to the
zero vector).
In matrix terms, if \( A \) is a matrix representing the linear
transformation \( T \), then \( A \) is nilpotent if there exists a
positive integer \( k \) such that:

\[ A^k = 0 \]

### Properties of Nilpotent Transformations

1. **Nilpotent Matrix**:

- A matrix is nilpotent if some power of it results in the zero


matrix.

2. **Minimal Polynomial**:

- The minimal polynomial of a nilpotent matrix \( A \) is \( x^m \) for some positive integer \( m \); here \( m \) is the smallest integer such that \( A^m = 0 \).

3. **Jordan Canonical Form**:

- Every nilpotent matrix can be brought to a Jordan canonical


form consisting entirely of Jordan blocks with zero eigenvalue.
Each Jordan block is a matrix with zeros on the diagonal and
ones on the superdiagonal.

4. **Rank and Nullity**:


- For a nilpotent matrix \( A \), the rank of \( A \) is less than its
size, and the nullity (dimension of the null space) increases as
the power of \( A \) increases.

### Example

Let’s consider an example to illustrate nilpotent matrices.

**Example:**

Consider the matrix:

\[

A = \begin{pmatrix}

0 & 1 \\

0&0

\end{pmatrix}

\]

**Step 1: Compute Powers of \( A \)**

Calculate \( A^2 \):


\[

A^2 = \begin{pmatrix}

0 & 1 \\

0&0

\end{pmatrix} \cdot \begin{pmatrix}

0 & 1 \\

0&0

\end{pmatrix} = \begin{pmatrix}

0 & 0 \\

0&0

\end{pmatrix}

\]

Here, \( A^2 = 0 \), which shows that \( A \) is nilpotent. The smallest such \( k \) is 2, so \( A \) is nilpotent of index 2.

**Step 2: Verify Minimal Polynomial**

The minimal polynomial of \( A \) is \( x^2 \), as \( A^2 = 0 \) and \( A \neq 0 \).

**Step 3: Jordan Canonical Form**


The Jordan canonical form of \( A \) is:

\[

J = \begin{pmatrix}

0 & 1 \\

0&0

\end{pmatrix}

\]

This is already in Jordan form: a single \( 2 \times 2 \) Jordan block with eigenvalue zero.
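
The index of nilpotency can be found mechanically by computing successive powers; the following small Python sketch (assuming NumPy) mirrors Step 1:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# Find the smallest k with A^k = 0 (the index of nilpotency)
power = np.eye(2)
for k in range(1, A.shape[0] + 1):
    power = power @ A
    if np.allclose(power, 0):
        print(f"A is nilpotent of index {k}")   # prints index 2
        break
```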

### Applications and Importance

1. **Matrix Decomposition**: Nilpotent matrices help in


understanding the structure of matrices and linear
transformations, especially in matrix decomposition techniques.

2. **Spectral Theory**: Nilpotent transformations have


eigenvalues all equal to zero, which simplifies the spectral
analysis.

3. **Jordan Canonical Form**: Nilpotent matrices are crucial in


the Jordan canonical form decomposition, where they appear as
Jordan blocks corresponding to the eigenvalue zero.

4. **Differential Equations**: Nilpotent operators often arise in


the context of differential equations and dynamical systems,
particularly in linearized systems around equilibrium points.

Q.5. Primary decomposition theorem

Q.6
**Jordan Blocks** and **Jordan Forms** are important
concepts in linear algebra that provide a way to
understand the structure of matrices, particularly those
that are not diagonalizable. They offer a standardized
form that simplifies many problems involving linear
transformations and matrices.

### Jordan Blocks

A **Jordan block** is a special kind of matrix that


appears in the Jordan canonical form of a matrix. Jordan
blocks are used to simplify matrices to a form where the
structure of their eigenvalues and generalized
eigenvectors is more apparent.
#### Definition

A Jordan block \( J_k(\lambda) \) for an eigenvalue \( \lambda \) and of size \( k \) is a \( k \times k \) matrix of the following form:

\[

J_k(\lambda) = \begin{pmatrix}

\lambda & 1 & 0 & \cdots & 0 \\

0 & \lambda & 1 & \cdots & 0 \\

0 & 0 & \lambda & \cdots & 0 \\

\vdots & \vdots & \vdots & \ddots & 1 \\

0 & 0 & 0 & \cdots & \lambda

\end{pmatrix}

\]

where:

- \(\lambda\) is an eigenvalue.

- The superdiagonal entries (the entries immediately above the diagonal) are all \(1\).

- All other entries are \(0\).
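
To make the shape of \( J_k(\lambda) \) concrete, here is a small helper (a sketch in Python/NumPy; the function name `jordan_block` is my own, not a standard library API) that builds a Jordan block from the definition above:

```python
import numpy as np

def jordan_block(lam: float, k: int) -> np.ndarray:
    """Return the k x k Jordan block J_k(lam): lam on the diagonal,
    ones on the superdiagonal, zeros elsewhere."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

print(jordan_block(5.0, 3))
# [[5. 1. 0.]
#  [0. 5. 1.]
#  [0. 0. 5.]]
```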

### Jordan Canonical Form

The **Jordan canonical form** (or Jordan normal form)


of a matrix is a canonical form of a matrix that
generalizes the concept of diagonalization. It is
particularly useful when a matrix cannot be diagonalized
but can still be simplified to a block diagonal form.

#### Definition

A matrix \( A \) is said to be in Jordan canonical form if it


is similar to a matrix that is block diagonal, where each
block is a Jordan block.

More precisely, if \( A \) is a square matrix of size \( n \times n \), then \( A \) can be put into Jordan canonical form \( J \) such that:

\[ A = P J P^{-1} \]
where \( P \) is an invertible matrix and \( J \) is a block
diagonal matrix consisting of Jordan blocks.

### Example of Jordan Blocks and Jordan Form

Let's work through an example.

**Matrix Example:**

Consider the matrix:

\[

A = \begin{pmatrix}

5 & 1 & 0 \\

0 & 5 & 1 \\

0&0&5

\end{pmatrix}

\]

**Step 1: Find Eigenvalues**


The eigenvalue \( \lambda \) can be found by solving the
characteristic polynomial:

\[

\text{det}(A - \lambda I) = \text{det} \begin{pmatrix}

5 - \lambda & 1 & 0 \\

0 & 5 - \lambda & 1 \\

0 & 0 & 5 - \lambda

\end{pmatrix}

\]

\[

= (5 - \lambda)^3

\]

The eigenvalue is \( \lambda = 5 \) with algebraic multiplicity 3.

**Step 2: Find Generalized Eigenvectors**


To construct the Jordan form, we need to determine the
number of Jordan blocks and their sizes. We do this by
finding the dimensions of the eigenspaces and
generalized eigenspaces.

- **Eigenvectors for \( \lambda = 5 \)**:

Solve \( (A - 5I) \mathbf{v} = 0 \):

\[

(A - 5I) = \begin{pmatrix}

0 & 1 & 0 \\

0 & 0 & 1 \\

0&0&0

\end{pmatrix}

\]

The eigenspace for \( \lambda = 5 \) is spanned by:

\[

\mathbf{v}_1 = \begin{pmatrix}1 \\ 0 \\ 0\end{pmatrix}


\]

- **Generalized Eigenvectors**:

Solve \( (A - 5I)^2 \mathbf{v} = 0 \) to find the


generalized eigenvectors:

\[

(A - 5I)^2 = \begin{pmatrix}

0 & 0 & 1 \\

0 & 0 & 0 \\

0&0&0

\end{pmatrix}

\]

The null space of \( (A - 5I)^2 \) is spanned by \( \mathbf{v}_1 \) together with:

\[
\mathbf{v}_2 = \begin{pmatrix}0 \\ 1 \\ 0\end{pmatrix}
\]

so \( \mathbf{v}_2 \) satisfies \( (A - 5I)^2 \mathbf{v} = 0 \) but not \( (A - 5I) \mathbf{v} = 0 \). Finally, the generalized eigenvector that satisfies \( (A - 5I)^3 \mathbf{v} = 0 \) but not \( (A - 5I)^2 \mathbf{v} = 0 \) is:

\[
\mathbf{v}_3 = \begin{pmatrix}0 \\ 0 \\ 1\end{pmatrix}
\]

These vectors form a single Jordan chain: \( (A - 5I)\mathbf{v}_3 = \mathbf{v}_2 \) and \( (A - 5I)\mathbf{v}_2 = \mathbf{v}_1 \).

**Step 3: Jordan Canonical Form**

The Jordan canonical form of \( A \) is a single \( 3 \times 3 \) Jordan block corresponding to the eigenvalue 5:

\[

J = \begin{pmatrix}

5 & 1 & 0 \\

0 & 5 & 1 \\

0&0&5

\end{pmatrix}

\]
**Step 4: Find Matrix \( P \)**

The matrix \( P \) that transforms \( A \) into \( J \) has the Jordan chain \( \mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3 \) as its columns. Since this chain is just the standard basis (the matrix \( A \) is already in Jordan form), we have \( A = PJP^{-1} \) with:

\[

P = \begin{pmatrix}

1 & 0 & 0 \\

0 & 1 & 0 \\

0&0&1

\end{pmatrix}

\]
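
The Jordan chain found above can be verified numerically; this sketch (assuming NumPy) checks that \( (A - 5I) \) maps \( \mathbf{v}_3 \to \mathbf{v}_2 \to \mathbf{v}_1 \to 0 \) and that \( (A - 5I)^3 = 0 \):

```python
import numpy as np

A = np.array([[5.0, 1.0, 0.0],
              [0.0, 5.0, 1.0],
              [0.0, 0.0, 5.0]])
N = A - 5 * np.eye(3)            # nilpotent part A - 5I

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([0.0, 0.0, 1.0])

# Jordan chain: N v3 = v2, N v2 = v1, N v1 = 0
print(np.allclose(N @ v3, v2),
      np.allclose(N @ v2, v1),
      np.allclose(N @ v1, 0))                        # True True True

# One chain of length 3 means a single 3x3 Jordan block
print(np.allclose(np.linalg.matrix_power(N, 3), 0))  # True
```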

Q . Invariants of linear transformations

In linear algebra, the **invariants** of a linear


transformation are properties or characteristics that
remain unchanged under the transformation. These
invariants provide important insights into the structure
of the linear transformation and help in understanding
how it behaves.

Here’s a detailed explanation of the key invariants of


linear transformations:

### 1. **Eigenvalues and Eigenvectors**

**Eigenvalues** and **eigenvectors** are fundamental


invariants of a linear transformation \( T \).

- **Eigenvalue**: A scalar \( \lambda \) is called an eigenvalue of \( T \) if there exists a non-zero vector \( \mathbf{v} \) (called an eigenvector) such that:

\[

T(\mathbf{v}) = \lambda \mathbf{v}

\]

This means the action of \( T \) on \( \mathbf{v} \) is


simply to scale \( \mathbf{v} \) by \( \lambda \), without
changing its direction.
- **Eigenvector**: The vector \( \mathbf{v} \)
corresponding to the eigenvalue \( \lambda \) satisfies
the equation above.

### 2. **Characteristic Polynomial**

The **characteristic polynomial** of a linear transformation \( T \) (or its matrix representation \( A \)) is given by:

\[

p(\lambda) = \text{det}(T - \lambda I)

\]

where \( I \) is the identity matrix. The roots of this


polynomial are the eigenvalues of \( T \). The
characteristic polynomial is an invariant because it
remains unchanged under similarity transformations.

### 3. **Minimal Polynomial**


The **minimal polynomial** of a linear transformation \( T \) (or its matrix representation \( A \)) is the monic polynomial \( m(\lambda) \) of least degree such that:

\[

m(T) = 0

\]

The minimal polynomial is the smallest polynomial that annihilates \( T \) and is also an invariant under similarity transformations. For a matrix \( A \), it is the monic polynomial of least degree such that \( m(A) = 0 \).

### 4. **Rank and Nullity**

- **Rank**: The rank of a linear transformation \( T \) is


the dimension of its image (or range). It tells us how
many dimensions the transformation maps into. The rank
is invariant under similarity transformations.

- **Nullity**: The nullity of \( T \) is the dimension of its


kernel (null space). It indicates the number of linearly
independent vectors that are mapped to zero. Nullity is
also invariant under similarity transformations.

### 5. **Trace**

The **trace** of a linear transformation \( T \) (or its


matrix representation \( A \)) is the sum of its
eigenvalues, counting multiplicities. It is also the sum of
the diagonal elements of \( A \) and is invariant under
similarity transformations.

### 6. **Determinant**

The **determinant** of a linear transformation \( T \) (or


its matrix representation \( A \)) is a scalar value that
can be computed from the matrix and reflects whether
the transformation preserves volume (if the determinant
is non-zero) or scales volume by a factor (the absolute
value of the determinant). The determinant is invariant
under similarity transformations.

### 7. **Jordan Canonical Form**

The **Jordan canonical form** is a special form of a


matrix that is block diagonal, with each block being a
Jordan block. The Jordan canonical form is unique up to
the order of the Jordan blocks and provides a way to
classify linear transformations up to similarity. The
Jordan blocks reflect the structure of the eigenvalues
and generalized eigenvectors of the transformation.

### Example of Invariants

Consider the matrix:

\[

A = \begin{pmatrix}

2 & 1 \\

0&2

\end{pmatrix}

\]

**Eigenvalues:**

Solve the characteristic polynomial:


\[

\text{det}(A - \lambda I) = \text{det} \begin{pmatrix}

2 - \lambda & 1 \\

0 & 2 - \lambda

\end{pmatrix} = (2 - \lambda)^2

\]

The eigenvalue is \( \lambda = 2 \), with algebraic


multiplicity 2.

**Minimal Polynomial:**

The minimal polynomial is \( (x - 2)^2 \) because:

\[

(A - 2I)^2 = \begin{pmatrix}

0 & 1 \\

0&0

\end{pmatrix}^2 = \begin{pmatrix}

0 & 0 \\
0&0

\end{pmatrix}

\]

**Rank and Nullity:**

- **Rank**: The matrix \( A \) has rank 2, since \( \det(A) = 4 \neq 0 \), so its image is all of \( \mathbb{R}^2 \).

- **Nullity**: The nullity of \( A \) is 0, because \( A \) is invertible and its kernel contains only the zero vector. (In contrast, \( A - 2I \) has rank 1 and nullity 1.)

**Trace:**

The trace of \( A \) is:

\[

\text{trace}(A) = 2 + 2 = 4

\]

**Determinant:**
The determinant of \( A \) is:

\[

\text{det}(A) = (2)(2) - (0)(1) = 4

\]

**Jordan Canonical Form:**

The Jordan canonical form of \( A \) is:

\[

J = \begin{pmatrix}

2 & 1 \\

0&2

\end{pmatrix}

\]
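
The invariants listed in this example can all be computed directly; the following NumPy sketch (illustrative only) reproduces the trace, determinant, rank, eigenvalues, and the minimal-polynomial check:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

print(np.trace(A))                  # 4.0
print(np.linalg.det(A))             # 4.0 (up to rounding)
print(np.linalg.matrix_rank(A))     # 2
print(np.linalg.eigvals(A))         # [2. 2.]

# Minimal polynomial check: A - 2I is nonzero but (A - 2I)^2 = 0
N = A - 2 * np.eye(2)
print(np.allclose(N, 0), np.allclose(N @ N, 0))   # False True
```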

UNIT 5

Q . 1 Hermitian
In linear algebra, a matrix is called **Hermitian** if it is equal to
its own conjugate transpose. Hermitian matrices are an
important class of matrices, especially in the context of complex
vector spaces and quantum mechanics.

### Definition

A square matrix \( A \) with complex entries is called


**Hermitian** if:

\[ A = A^H \]

where \( A^H \) denotes the **Hermitian transpose** (or


**conjugate transpose**) of \( A \). The Hermitian transpose of \(
A \) is obtained by taking the transpose of \( A \) and then taking
the complex conjugate of each entry.

Formally, if \( A = [a_{ij}] \), then:

\[ A^H = [\overline{a_{ji}}] \]

where \( \overline{a_{ji}} \) denotes the complex conjugate of the


entry \( a_{ji} \).
### Properties of Hermitian Matrices

1. **Real Eigenvalues**:

- All eigenvalues of a Hermitian matrix are real numbers.

2. **Orthogonal Eigenvectors**:

- The eigenvectors of a Hermitian matrix corresponding to distinct eigenvalues are orthogonal. This means if \( \mathbf{v}_1 \) and \( \mathbf{v}_2 \) are eigenvectors corresponding to different eigenvalues \( \lambda_1 \) and \( \lambda_2 \), then \( \mathbf{v}_1 \cdot \mathbf{v}_2 = 0 \).

3. **Diagonalizable**:

- Hermitian matrices are always diagonalizable. This means


there exists a unitary matrix \( U \) such that:

\[

A = U \Lambda U^H

\]

where \( \Lambda \) is a diagonal matrix containing the real


eigenvalues of \( A \).

4. **Positive Definite Matrices**:


- A Hermitian matrix \( A \) is positive definite if and only if all
its eigenvalues are positive.

5. **Symmetric Case**:

- In the real case (when all entries are real), Hermitian


matrices are simply symmetric matrices, which means \( A =
A^T \).

### Example

Consider the matrix:

\[

A = \begin{pmatrix}

2 & i \\

-i & 3

\end{pmatrix}

\]

**Step 1: Compute the Hermitian Transpose**

To find \( A^H \), transpose \( A \) and then take the complex conjugate of each entry:

\[
A^T = \begin{pmatrix}
2 & -i \\
i & 3
\end{pmatrix}, \qquad
A^H = \overline{A^T} = \begin{pmatrix}
2 & i \\
-i & 3
\end{pmatrix}
\]

**Step 2: Check if \( A \) is Hermitian**

Compare \( A \) with \( A^H \):

\[
A = \begin{pmatrix}
2 & i \\
-i & 3
\end{pmatrix} \quad \text{and} \quad A^H = \begin{pmatrix}
2 & i \\
-i & 3
\end{pmatrix}
\]


**Step 3: Find Eigenvalues and Eigenvectors**

To find the eigenvalues, solve the characteristic polynomial:

\[
\text{det}(A - \lambda I) = \text{det} \begin{pmatrix}
2 - \lambda & i \\
-i & 3 - \lambda
\end{pmatrix}
= (2 - \lambda)(3 - \lambda) - (i)(-i)
= (2 - \lambda)(3 - \lambda) - 1
= \lambda^2 - 5\lambda + 5
\]

Solving \( \lambda^2 - 5\lambda + 5 = 0 \) gives:

\[
\lambda = \frac{5 \pm \sqrt{25 - 20}}{2} = \frac{5 \pm \sqrt{5}}{2}
\]

So the eigenvalues are:

\[
\lambda_1 = \frac{5 + \sqrt{5}}{2} \quad \text{and} \quad \lambda_2 = \frac{5 - \sqrt{5}}{2}
\]

Both eigenvalues are real, as expected for a Hermitian matrix. Eigenvectors corresponding to these eigenvalues can be found using standard methods.
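
A short numerical check (a sketch assuming NumPy; `eigvalsh` is NumPy's eigenvalue routine for Hermitian matrices) confirms both the Hermitian property and the real eigenvalues computed above:

```python
import numpy as np

A = np.array([[2, 1j],
              [-1j, 3]], dtype=complex)

# Hermitian check: A equals its conjugate transpose
print(np.allclose(A, A.conj().T))     # True

# Real eigenvalues (5 -/+ sqrt(5))/2, approximately 1.382 and 3.618
print(np.linalg.eigvalsh(A))
```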

### Applications of Hermitian Matrices

1. **Quantum Mechanics**:

- Hermitian matrices represent observable physical quantities


in quantum mechanics, such as position, momentum, and energy.
The real eigenvalues correspond to measurable quantities.

2. **Optimization**:

- Positive definite Hermitian matrices are used in optimization


problems, particularly in quadratic forms.

3. **Signal Processing**:

- Hermitian matrices are used in signal processing and


communications, especially in the context of covariance matrices.

4. **Computer Graphics**:

- In computer graphics and computer vision, Hermitian


matrices are used in various algorithms and techniques,
including those involving transformations and filtering.

Q.2 Self adjoint,

In the context of linear algebra, the term **self-adjoint** is often


used interchangeably with **Hermitian**. Specifically, a linear
operator or matrix is termed **self-adjoint** if it is equal to its
adjoint or Hermitian transpose. Here's a detailed explanation of
self-adjoint operators and matrices:

### Definition
A linear operator \( T \) on a Hilbert space \( \mathcal{H} \) is
called **self-adjoint** if:

\[ T = T^* \]

where \( T^* \) denotes the adjoint (or Hermitian adjoint) of \( T \). For a matrix \( A \), the adjoint \( A^* \) is obtained by taking the transpose and then taking the complex conjugate of each entry:

\[ A^* = \overline{A^T} \]

### Properties of Self-Adjoint Operators and Matrices

1. **Real Eigenvalues**:

- The eigenvalues of a self-adjoint operator or matrix are always real. If \( T\mathbf{v} = \lambda \mathbf{v} \) with \( \mathbf{v} \neq 0 \), then \( \lambda \langle \mathbf{v}, \mathbf{v} \rangle = \langle T\mathbf{v}, \mathbf{v} \rangle = \langle \mathbf{v}, T\mathbf{v} \rangle = \overline{\lambda} \langle \mathbf{v}, \mathbf{v} \rangle \), so \( \lambda = \overline{\lambda} \), i.e. \( \lambda \) is real.

2. **Orthogonal (or Unitary) Diagonalization**:

- Self-adjoint matrices (in finite-dimensional spaces) are


diagonalizable by an orthogonal (or unitary) matrix. This means
that there exists an orthogonal (or unitary) matrix \( U \) such
that:

\[

A = U \Lambda U^T

\]

where \( \Lambda \) is a diagonal matrix containing the real


eigenvalues of \( A \). For complex spaces, the matrix \( U \) is
unitary, i.e., \( U^* U = I \).

3. **Positive Semi-Definiteness**:

- A self-adjoint matrix \( A \) is positive semi-definite if and only if all its eigenvalues are non-negative. That is, for any vector \( \mathbf{x} \):

\[

\mathbf{x}^* A \mathbf{x} \geq 0

\]

4. **Symmetry (in Real Spaces)**:

- In real vector spaces, a matrix is self-adjoint if and only if it is


symmetric, meaning \( A = A^T \).
### Example of Self-Adjoint Matrices

Let's consider a matrix \( A \):

\[

A = \begin{pmatrix}

1 & 2 + i \\

2-i&3

\end{pmatrix}

\]

**Step 1: Compute the Adjoint**

To find \( A^* \):

- Take the transpose of \( A \):

\[

A^T = \begin{pmatrix}

1 & 2 - i \\

2+i&3

\end{pmatrix}
\]

- Take the complex conjugate of each entry of \( A^T \):

\[
A^* = \overline{A^T} = \begin{pmatrix}
1 & 2 + i \\
2 - i & 3
\end{pmatrix}
\]

**Step 2: Check if \( A \) is Self-Adjoint**

Compare \( A \) with \( A^* \):

\[
A = \begin{pmatrix}
1 & 2 + i \\
2 - i & 3
\end{pmatrix} \quad \text{and} \quad A^* = \begin{pmatrix}
1 & 2 + i \\
2 - i & 3
\end{pmatrix}
\]

Since \( A = A^* \), the matrix \( A \) is self-adjoint.
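
The same check can be done numerically (a minimal sketch assuming NumPy; variable names are illustrative):

```python
import numpy as np

A = np.array([[1, 2 + 1j],
              [2 - 1j, 3]], dtype=complex)

A_star = A.conj().T                   # adjoint: transpose, then conjugate
print(np.allclose(A, A_star))         # True, so A is self-adjoint
print(np.linalg.eigvalsh(A))          # eigenvalues are real, as expected
```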

### Self-Adjoint Operators in Hilbert Spaces

In infinite-dimensional Hilbert spaces, a linear operator \( T \) is


self-adjoint if:

1. \( T \) is symmetric: \( \langle T \mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{x}, T \mathbf{y} \rangle \) for all \( \mathbf{x}, \mathbf{y} \in \mathcal{H} \).

2. The domain of \( T \) is equal to the domain of its adjoint \( T^* \).

Self-adjoint operators are crucial in functional analysis and


quantum mechanics, where they represent observable quantities
and ensure real eigenvalues.

Q. 3. Unitary and normal linear transformation

### Unitary and Normal Linear Transformations


In linear algebra, unitary and normal linear transformations are
two important classes of linear operators that have special
properties and applications. They both relate to the structure of
linear transformations and matrices, especially in complex vector
spaces.

### Unitary Linear Transformations

**Definition**:

A linear transformation \( T: \mathcal{H} \to \mathcal{H} \) on a


complex Hilbert space \( \mathcal{H} \) is called **unitary** if it
preserves the inner product, which means:

\[ \langle T\mathbf{x}, T\mathbf{y} \rangle = \langle \mathbf{x}, \mathbf{y} \rangle \]

for all vectors \( \mathbf{x}, \mathbf{y} \in \mathcal{H} \).

In terms of matrices, if \( T \) is represented by a matrix \( U \),


then \( U \) is unitary if:

\[ U^* U = U U^* = I \]

where \( U^* \) denotes the Hermitian adjoint (or conjugate


transpose) of \( U \), and \( I \) is the identity matrix.
**Properties**:

1. **Preserves Norms**:

- Unitary transformations preserve the norm of vectors. If \( \mathbf{x} \) is a vector, then \( \|T\mathbf{x}\| = \|\mathbf{x}\| \).

2. **Orthogonality**:

- The columns (and rows) of a unitary matrix are orthonormal.


This means they are orthogonal to each other and have unit
norm.

3. **Eigenvalues**:

- The eigenvalues of a unitary matrix lie on the unit circle in the


complex plane. They have an absolute value of 1.

4. **Invertibility**:

- A unitary matrix is always invertible, and its inverse is its


adjoint, i.e., \( U^{-1} = U^* \).

5. **Diagonalization**:

- Unitary matrices can be diagonalized by a unitary matrix.


That is, if \( U \) is unitary, then there exists a unitary matrix \( P
\) and a diagonal matrix \( D \) such that:
\[

U = P D P^*

\]

**Example**:

Consider the matrix:

\[

U = \frac{1}{\sqrt{2}} \begin{pmatrix}

1 & 1 \\

-1 & 1

\end{pmatrix}

\]

To check if \( U \) is unitary, compute \( U^* U \):

\[

U^* = \frac{1}{\sqrt{2}} \begin{pmatrix}

1 & -1 \\

1&1

\end{pmatrix}

\]
\[

U^* U = \frac{1}{2} \begin{pmatrix}

1 & -1 \\

1&1

\end{pmatrix} \begin{pmatrix}

1 & 1 \\

-1 & 1

\end{pmatrix}

= \frac{1}{2} \begin{pmatrix}

1 + 1 & 0 \\

0&1+1

\end{pmatrix}

= \begin{pmatrix}

1 & 0 \\

0&1

\end{pmatrix}

=I

\]

Since \( U^* U = I \), the matrix \( U \) is unitary.
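
A brief NumPy check (illustrative sketch) of the two defining facts, \( U^*U = I \) and eigenvalues of absolute value 1:

```python
import numpy as np

U = (1 / np.sqrt(2)) * np.array([[1.0, 1.0],
                                 [-1.0, 1.0]])

print(np.allclose(U.conj().T @ U, np.eye(2)))   # True
print(np.abs(np.linalg.eigvals(U)))             # [1. 1.]
```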


### Normal Linear Transformations

**Definition**:

A linear transformation \( T: \mathcal{H} \to \mathcal{H} \) on a


Hilbert space \( \mathcal{H} \) is called **normal** if it
commutes with its adjoint:

\[ T T^* = T^* T \]

In matrix terms, if \( T \) is represented by a matrix \( N \),


then \( N \) is normal if:

\[ N N^* = N^* N \]

**Properties**:

1. **Diagonalizability**:

- Normal matrices can be diagonalized by a unitary matrix. This


means there exists a unitary matrix \( U \) and a diagonal
matrix \( D \) such that:

\[

N = U D U^*

\]
2. **Spectral Theorem**:

- The spectral theorem states that any normal operator (or


matrix) can be expressed in terms of its eigenvalues and
eigenvectors, and can be diagonalized by a unitary matrix.

3. **Preservation of Eigenvalues**:

- The eigenvalues of a normal matrix are preserved under


similarity transformations.

4. **Orthogonality of Eigenvectors**:

- For a normal matrix, eigenvectors corresponding to distinct


eigenvalues are orthogonal.

**Example**:

Consider the matrix:

\[

N = \begin{pmatrix}

2 & 1 \\

1&3

\end{pmatrix}

\]
To check if \( N \) is normal, compute \( N N^* \) and \( N^*
N \):

\[

N^* = \begin{pmatrix}

2 & 1 \\

1&3

\end{pmatrix}

\]

\[

N N^* = \begin{pmatrix}

2 & 1 \\

1&3

\end{pmatrix} \begin{pmatrix}

2 & 1 \\

1&3

\end{pmatrix} = \begin{pmatrix}

5 & 5 \\

5 & 10

\end{pmatrix}

\]
\[

N^* N = \begin{pmatrix}

2 & 1 \\

1&3

\end{pmatrix} \begin{pmatrix}

2 & 1 \\

1&3

\end{pmatrix} = \begin{pmatrix}

5 & 5 \\

5 & 10

\end{pmatrix}

\]

Since \( N N^* = N^* N \), the matrix \( N \) is normal.
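
Normality is just a commutation check, so it is easy to verify numerically (a sketch assuming NumPy):

```python
import numpy as np

N = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# N is real symmetric, so N* = N^T; normality means N N* = N* N
print(np.allclose(N @ N.conj().T, N.conj().T @ N))   # True
```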


Q . 4 . Symmetric bilinear forms

### Symmetric Bilinear Forms

A **bilinear form** is a function that takes two vectors and maps


them to a scalar in a way that is linear in each argument. In the
context of symmetric bilinear forms, we impose the additional
requirement that the form be symmetric. Here’s a detailed
explanation:

### Definition

Let \( V \) be a vector space over a field (usually \( \mathbb{R} \) or \( \mathbb{C} \)). A bilinear form \( B: V \times V \to \mathbb{F} \) (where \( \mathbb{F} \) is the field of scalars) is a function that satisfies the following properties:

1. **Linearity in each argument**: For all vectors \( \mathbf{u}, \mathbf{v}, \mathbf{w} \in V \) and scalars \( a, b \in \mathbb{F} \):

\[
B(a\mathbf{u} + b\mathbf{v}, \mathbf{w}) = aB(\mathbf{u}, \mathbf{w}) + bB(\mathbf{v}, \mathbf{w})
\]

\[
B(\mathbf{u}, a\mathbf{v} + b\mathbf{w}) = aB(\mathbf{u}, \mathbf{v}) + bB(\mathbf{u}, \mathbf{w})
\]
A bilinear form \( B \) is called **symmetric** if:

\[

B(\mathbf{u}, \mathbf{v}) = B(\mathbf{v}, \mathbf{u})

\]

for all \( \mathbf{u}, \mathbf{v} \in V \).

### Matrix Representation

If \( B \) is a symmetric bilinear form, it can be represented by a


symmetric matrix. To see this, consider the following:

1. **Choosing a Basis**:

- Let \( \{ \mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n \} \)


be a basis for \( V \).

2. **Defining the Matrix**:

- Define the matrix \( A \) associated with the bilinear form \( B \) by:

\[
A_{ij} = B(\mathbf{e}_i, \mathbf{e}_j)

\]

3. **Symmetry of the Matrix**:

- Since \( B \) is symmetric, we have:

\[

A_{ij} = B(\mathbf{e}_i, \mathbf{e}_j) = B(\mathbf{e}_j, \mathbf{e}_i) = A_{ji}

\]

Thus, the matrix \( A \) is symmetric.

### Properties of Symmetric Bilinear Forms

1. **Symmetric Matrix**:

- The matrix representing a symmetric bilinear form is


symmetric, meaning \( A_{ij} = A_{ji} \).

2. **Quadratic Form**:

- A symmetric bilinear form \( B \) can be associated with a


quadratic form. For a vector \( \mathbf{x} \) in the space, the
quadratic form \( Q(\mathbf{x}) \) is given by:
\[

Q(\mathbf{x}) = B(\mathbf{x}, \mathbf{x})

\]

3. **Diagonalization**:

- In a real vector space, any symmetric bilinear form can be


diagonalized. This means that there exists an orthogonal basis in
which the matrix representing the bilinear form is diagonal. This
is a consequence of the spectral theorem for symmetric matrices.

4. **Positive Definiteness**:

- A symmetric bilinear form \( B \) is positive definite if \( B(\mathbf{x}, \mathbf{x}) > 0 \) for all non-zero vectors \( \mathbf{x} \). It is positive semi-definite if \( B(\mathbf{x}, \mathbf{x}) \geq 0 \) for all vectors \( \mathbf{x} \).

### Example

Consider a bilinear form \( B \) defined on \( \mathbb{R}^2 \)


by:

\[

B(\mathbf{x}, \mathbf{y}) = 2x_1y_1 + 3x_2y_2


\]

where \( \mathbf{x} = (x_1, x_2) \) and \( \mathbf{y} = (y_1, y_2) \).

**Matrix Representation**:

1. **Choose a Basis**:

- The standard basis for \( \mathbb{R}^2 \) is \( \{ \mathbf{e}_1, \mathbf{e}_2 \} \), where \( \mathbf{e}_1 = (1, 0) \) and \( \mathbf{e}_2 = (0, 1) \).

2. **Compute the Matrix**:

- \( A_{11} = B(\mathbf{e}_1, \mathbf{e}_1) = 2 \)

- \( A_{12} = B(\mathbf{e}_1, \mathbf{e}_2) = 0 \)

- \( A_{21} = B(\mathbf{e}_2, \mathbf{e}_1) = 0 \)

- \( A_{22} = B(\mathbf{e}_2, \mathbf{e}_2) = 3 \)

Therefore, the matrix \( A \) is:

\[

A = \begin{pmatrix}

2 & 0 \\
0&3

\end{pmatrix}

\]

Since \( A \) is symmetric, the bilinear form \( B \) is symmetric.
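
The Gram-matrix construction \( A_{ij} = B(\mathbf{e}_i, \mathbf{e}_j) \) and the associated quadratic form can be reproduced with a short Python sketch (assuming NumPy; the function name `B` simply encodes the form from this example):

```python
import numpy as np

def B(x, y):
    # Bilinear form from the example: B(x, y) = 2*x1*y1 + 3*x2*y2
    return 2 * x[0] * y[0] + 3 * x[1] * y[1]

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
A = np.array([[B(e_i, e_j) for e_j in basis] for e_i in basis])

print(A)                        # [[2. 0.] [0. 3.]]
print(np.allclose(A, A.T))      # symmetric: True

# Quadratic form Q(x) = B(x, x) = x^T A x
x = np.array([1.0, 2.0])
print(B(x, x), x @ A @ x)       # 14.0 14.0
```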

Q. 4. skew symmetric bilinear forms

### Skew-Symmetric Bilinear Forms

A **skew-symmetric bilinear form** is a specific type of bilinear


form with the property that it changes sign when the arguments
are swapped. This type of bilinear form is especially important in
linear algebra and geometry, particularly in the study of
determinants and differential forms.

### Definition

Let \( V \) be a vector space over a field \( \mathbb{F} \). A


bilinear form \( B: V \times V \to \mathbb{F} \) is called **skew-
symmetric** if:

\[

B(\mathbf{u}, \mathbf{v}) = -B(\mathbf{v}, \mathbf{u})


\]

for all vectors \( \mathbf{u}, \mathbf{v} \in V \).

### Properties of Skew-Symmetric Bilinear Forms

1. **Matrix Representation**:

- If \( B \) is a skew-symmetric bilinear form, the matrix \( A \)


representing \( B \) with respect to any basis is skew-symmetric.
This means:

\[

A_{ij} = -A_{ji}

\]

2. **Diagonal Elements**:

- The diagonal elements of a skew-symmetric matrix are always


zero. This is because:

\[

A_{ii} = -A_{ii}

\]
implies:

\[

A_{ii} = 0

\]

3. **Determinant**:

- The determinant of a real skew-symmetric matrix is always zero when the matrix dimension is odd. This follows from \( \det(A) = \det(A^T) = \det(-A) = (-1)^n \det(A) \); for odd \( n \) this gives \( \det(A) = -\det(A) \), so \( \det(A) = 0 \).

4. **Eigenvalues**:

- The eigenvalues of a real skew-symmetric matrix are purely imaginary or zero; the non-zero eigenvalues come in conjugate pairs \( \pm i\lambda \) with \( \lambda \) real.

### Matrix Representation

To represent a skew-symmetric bilinear form with a matrix, consider a vector space \( V \) with a basis \( \{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n\} \). The matrix \( A \) of the bilinear form \( B \) is defined by:
\[

A_{ij} = B(\mathbf{e}_i, \mathbf{e}_j)

\]

Since \( B \) is skew-symmetric:

\[

A_{ij} = -A_{ji}

\]

### Example

Consider the bilinear form \( B \) on \( \mathbb{R}^3 \) defined


by:

\[

B(\mathbf{x}, \mathbf{y}) = x_1 y_2 - x_2 y_1

\]

where \( \mathbf{x} = (x_1, x_2, x_3) \) and \( \mathbf{y} = (y_1, y_2, y_3) \).
**Matrix Representation**:

1. **Choose Basis**:

- Let the basis for \( \mathbb{R}^3 \) be \( \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\} \), where \( \mathbf{e}_1 = (1, 0, 0) \), \( \mathbf{e}_2 = (0, 1, 0) \), and \( \mathbf{e}_3 = (0, 0, 1) \).

2. **Compute Matrix Elements**:

- \( B(\mathbf{e}_1, \mathbf{e}_1) = 0 \)

- \( B(\mathbf{e}_1, \mathbf{e}_2) = 1 \) (since \( x_1 = 1, x_2 = 0 \) and \( y_1 = 0, y_2 = 1 \))

- \( B(\mathbf{e}_2, \mathbf{e}_1) = -1 \)

- \( B(\mathbf{e}_2, \mathbf{e}_2) = 0 \)

- \( B(\mathbf{e}_1, \mathbf{e}_3) = 0 \)

- \( B(\mathbf{e}_2, \mathbf{e}_3) = 0 \)

- \( B(\mathbf{e}_3, \mathbf{e}_1) = 0 \)

- \( B(\mathbf{e}_3, \mathbf{e}_2) = 0 \)

- \( B(\mathbf{e}_3, \mathbf{e}_3) = 0 \)

Therefore, the matrix \( A \) is:

\[
A = \begin{pmatrix}

0 & 1 & 0 \\

-1 & 0 & 0 \\

0&0&0

\end{pmatrix}

\]

This matrix is skew-symmetric because \( A_{ij} = -A_{ji} \).
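
The same Gram-matrix computation, done in a Python sketch (assuming NumPy), also illustrates the zero diagonal, the zero determinant in odd dimension, and the purely imaginary eigenvalues:

```python
import numpy as np

def B(x, y):
    # Skew-symmetric form from the example: B(x, y) = x1*y2 - x2*y1
    return x[0] * y[1] - x[1] * y[0]

basis = np.eye(3)
A = np.array([[B(e_i, e_j) for e_j in basis] for e_i in basis])

print(A)                        # [[0. 1. 0.] [-1. 0. 0.] [0. 0. 0.]]
print(np.allclose(A, -A.T))     # skew-symmetric: True
print(np.linalg.det(A))         # 0.0 (odd dimension)
print(np.linalg.eigvals(A))     # i, -i, 0 (up to ordering/rounding)
```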

### Applications

1. **Determinants**:

- Skew-symmetric matrices are used in the study of


determinants, particularly in understanding the properties of
determinants of skew-symmetric matrices.

2. **Differential Forms**:

- In differential geometry, skew-symmetric bilinear forms are


related to the concept of differential 2-forms, which are used in
defining volume forms and other geometric constructs.

3. **Physics**:

- Skew-symmetric matrices appear in physics, particularly in


the study of angular momentum and in the representation of
rotations in 3-dimensional space.

4. **Optimization**:

- Skew-symmetric matrices can be used in optimization


problems, particularly those involving certain types of
constraints or in algorithms requiring matrix decompositions.

Q. 5 . Group preserving bilinear forms

### Group-Preserving Bilinear Forms

In the context of group theory and linear algebra, a **group-


preserving bilinear form** refers to a bilinear form that is
invariant under the action of a group on the vector space. This
concept is useful in various mathematical fields, including
representation theory, symmetry analysis, and geometry.

### Definition

Let \( V \) be a vector space over a field \( \mathbb{F} \), and let \( G \) be a group that acts on \( V \) (meaning \( G \) has a group action on \( V \)). A bilinear form \( B: V \times V \to \mathbb{F} \) is said to be **group-preserving** if it is invariant under the action of \( G \). This means:
\[
B(g \cdot \mathbf{u}, g \cdot \mathbf{v}) = B(\mathbf{u}, \mathbf{v})
\]

for all vectors \( \mathbf{u}, \mathbf{v} \in V \) and for all \( g \in G \).

### Properties

1. **Invariant Under Group Action**:

- The defining property of a group-preserving bilinear form is


its invariance under the group action. For every \( g \in G \) and
for all \( \mathbf{u}, \mathbf{v} \in V \):

\[

B(g \cdot \mathbf{u}, g \cdot \mathbf{v}) = B(\mathbf{u}, \mathbf{v})

\]

2. **Matrix Representation**:

- If \( B \) is represented by a matrix \( A \) with respect to


some basis, then the matrix \( A \) must be such that the bilinear
form is invariant under the group action. This often involves
analyzing how the group elements transform the matrix
representation of \( B \).

3. **Symmetry and Group Actions**:

- For specific groups, like orthogonal groups or unitary groups,


the bilinear form preserving these groups often relates to well-
known forms such as inner products or metrics. For example, in
the context of the orthogonal group \( O(n) \), an inner product
is preserved.

4. **Applications in Representation Theory**:

- In representation theory, group-preserving bilinear forms


often appear as invariants associated with representations of a
group. The form helps in understanding how different
representations relate to each other and to the group's
symmetry.

### Examples

1. **Orthogonal Group**:

- Consider \( V = \mathbb{R}^n \) with the standard dot product \( B(\mathbf{u}, \mathbf{v}) = \mathbf{u} \cdot \mathbf{v} \). This bilinear form is preserved under the action of the orthogonal group \( O(n) \) because:

\[
B(O\mathbf{u}, O\mathbf{v}) = (O\mathbf{u}) \cdot (O\mathbf{v}) = \mathbf{u} \cdot \mathbf{v} = B(\mathbf{u}, \mathbf{v})
\]

for any orthogonal matrix \( O \in O(n) \).

2. **Symplectic Group**:

- Consider \( \mathbb{R}^{2n} \) with the symplectic form \( B(\mathbf{u}, \mathbf{v}) = \mathbf{u}^T J \mathbf{v} \), where \( J \) is the standard symplectic matrix. This form is preserved under the action of the symplectic group \( Sp(2n, \mathbb{R}) \):

\[
B(g \cdot \mathbf{u}, g \cdot \mathbf{v}) = (g \mathbf{u})^T J (g \mathbf{v}) = \mathbf{u}^T J \mathbf{v} = B(\mathbf{u}, \mathbf{v})
\]

for any symplectic matrix \( g \in Sp(2n, \mathbb{R}) \).

3. **Unitary Group**:

- In complex vector spaces, the standard Hermitian inner product \( B(\mathbf{u}, \mathbf{v}) = \langle \mathbf{u}, \mathbf{v} \rangle \) is preserved under the action of the unitary group \( U(n) \):

\[
B(U \mathbf{u}, U \mathbf{v}) = \langle U \mathbf{u}, U \mathbf{v} \rangle = \langle \mathbf{u}, \mathbf{v} \rangle = B(\mathbf{u}, \mathbf{v})
\]

for any unitary matrix \( U \in U(n) \).
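
Invariance under a group action is also easy to test numerically. The sketch below (assuming NumPy; the orthogonal matrix is generated via a QR factorization purely for illustration) checks the \( O(n) \) case:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random orthogonal matrix Q, taken from a QR factorization
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

u = rng.standard_normal(3)
v = rng.standard_normal(3)

# The dot product is invariant under O(n): (Q u) . (Q v) = u . v
print(np.allclose((Q @ u) @ (Q @ v), u @ v))   # True
```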
