
Vikas Bist

Department of Mathematics

Panjab University, Chandigarh-160014

email: bistvikas@gmail.com

Last revised on March 5, 2018

This text is based on the lectures delivered for the B.Tech students of IIT Bhilai.

These lecture notes cover the basics of Linear Algebra and can be treated as a first course in Linear Algebra. Students are advised to go through the examples carefully and to attempt the exercises appearing in the text.

CONTENTS

1 Similarity of matrices
2 Eigenvalues and eigenvectors
3 Inner product and orthogonality
4 Adjoint of a linear transformation
5 Unitary similarity
6 Bilinear forms
7 Symmetric bilinear forms

1 SIMILARITY OF MATRICES

If B is a fixed basis of V then we have seen that [T]B is an n × n matrix. Further, T is injective (hence bijective) if and only if [T]B is invertible. Here we see how the matrix of a linear operator¹ changes when we change the basis.

First we see how matrices behave when we compose linear transformations. Let V, W and X be vector spaces over K, and let T : V → W and S : W → X be linear transformations, so that S ◦ T is defined. Let B, B′ and B′′ be ordered bases of V, W and X respectively. Write B = {v1, . . . , vn}. Then the j-th column of the matrix B′′[S ◦ T]B is

[(S ◦ T)vj]B′′ = [S(T vj)]B′′ = B′′[S]B′ [T vj]B′ = B′′[S]B′ B′[T]B [vj]B.

Also

[(S ◦ T)vj]B′′ = B′′[S ◦ T]B [vj]B.

Hence, equating these last two expressions, we have the following:

B′′[S ◦ T]B = B′′[S]B′ B′[T]B.    (1)

When both bases coincide we write [T]B for B[T]B. If I is the identity operator on V, then using equation (1):

In = [I]B′ = B′[I]B B[I]B′.

REMARK 1.1. Note that B′[I]B is the matrix whose j-th column consists of the scalars in K obtained when the j-th element of B is expressed as a linear combination of the elements of B′.

THEOREM 1.2. Let T be a linear operator on V and let B and B′ be ordered bases of V. Then there is an invertible matrix P such that [T]B′ = P⁻¹[T]B P; in fact P = B[I]B′.

Square matrices A and B are called similar if B = P⁻¹AP for some invertible matrix P. The above theorem means that a change of basis induces a similarity transformation on the matrix of a linear operator.

PROPOSITION 1.3. Similar matrices have the same trace and the same determinant.

¹ Recall that a linear operator on V is a linear transformation from V to V.

2 EIGENVALUES AND EIGENVECTORS

Thus we can define the determinant and the trace of a linear operator T on V by:

det T := det[T]B and tr T := tr[T]B,

where B is any ordered basis of V.
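As a quick check that these are well defined, one can verify Proposition 1.3 numerically. The following is a plain-Python sketch (not part of the original notes; the helper names are mine):

```python
# Sketch: similar matrices share trace and determinant.
# Matrices are plain lists of rows; no external libraries used.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def det3(A):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = A[0]
    d, e, f = A[1]
    g, h, i = A[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[2, 1, 1], [0, 3, 1], [0, 1, 3]]
P = [[1, 0, 1], [1, 1, 0], [0, 1, 1]]            # invertible, det P = 2
Pinv = [[0.5, 0.5, -0.5], [-0.5, 0.5, 0.5], [0.5, -0.5, 0.5]]

B = mat_mul(Pinv, mat_mul(A, P))                 # B = P^{-1} A P
print(trace(A), trace(B))                        # both equal 8
print(det3(A), det3(B))                          # both equal 16
```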

EXAMPLE. Let T be a linear operator whose matrix with respect to a basis {u1, u2, u3} is

A =
[2 1 1]
[0 3 1]
[0 1 3]

We find the matrix of T with respect to the basis B′ = {v1 = u1 + u2, v2 = u2 + u3, v3 = u3 + u1}. Thus

T(v1) = T(u1 + u2) = T(u1) + T(u2) = 3u1 + 3u2 + u3 = (5/2)v1 + (1/2)v2 + (1/2)v3
T(v2) = T(u2 + u3) = T(u2) + T(u3) = 2u1 + 4u2 + 4u3 = v1 + 3v2 + 3v3
T(v3) = T(u3 + u1) = T(u3) + T(u1) = 3u1 + u2 + 3u3 = (1/2)v1 + (1/2)v2 + (5/2)v3

Hence

[T]B′ =
[5/2 1 1/2]
[1/2 3 1/2]
[1/2 1 5/2]

Now

B[I]B′ =
[1 0 1]
[1 1 0]
[0 1 1]

If we call this matrix P, then P⁻¹[T]B P = [T]B′. Verify that [T]B′ and [T]B have the same determinant and the same rank.
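The example above can be checked with exact rational arithmetic. A sketch (mine, not from the notes; P⁻¹ is entered by hand, using det P = 2):

```python
# Check the worked example: [T]_{B'} = P^{-1} [T]_B P with P = B[I]_{B'}.
from fractions import Fraction as F

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

TB = [[2, 1, 1], [0, 3, 1], [0, 1, 3]]        # [T]_B
P = [[1, 0, 1], [1, 1, 0], [0, 1, 1]]         # columns are v1, v2, v3 in the u-basis
Pinv = [[F(1, 2), F(1, 2), F(-1, 2)],          # P^{-1}, computed by hand
        [F(-1, 2), F(1, 2), F(1, 2)],
        [F(1, 2), F(-1, 2), F(1, 2)]]

TBprime = mat_mul(Pinv, mat_mul(TB, P))
expected = [[F(5, 2), F(1), F(1, 2)],
            [F(1, 2), F(3), F(1, 2)],
            [F(1, 2), F(1), F(5, 2)]]
assert TBprime == expected                     # agrees with [T]_{B'} above
```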

DEFINITION. Let T be a linear operator on V. A scalar λ ∈ K is called an eigenvalue of T if there is a non-zero v ∈ V such that T v = λv. The vector v is called an eigenvector corresponding to λ.

To keep things simple, we deal mainly with square matrices. Restating the above definition for matrices:

Let A be an n × n matrix. A number λ is called an eigenvalue of A if there is a non-zero column vector v such that Av = λv. The vector v is called an eigenvector of A corresponding to λ.

Thus an eigenvalue is a number λ such that the homogeneous system

(A − λIn)x = 0

has a non-zero solution. This happens if and only if rank(A − λIn) < n, that is, A − λIn is not invertible, which is equivalent to det(A − λIn) = 0. This means that the eigenvalues of A are precisely those values λ for which the matrix A − λIn has zero determinant. In other words, the eigenvalues are the roots of the following monic polynomial:

cA(x) := det(xIn − A).

This polynomial is called the characteristic polynomial of A.

Note that eigenvectors corresponding to an eigenvalue are not unique. In fact, if u is an eigenvector corresponding to λ, then αu is also an eigenvector corresponding to λ for any non-zero α ∈ K. Indeed, λ is an eigenvalue if and only if ker(A − λIn) ≠ {0}, and any non-zero vector of ker(A − λIn) is an eigenvector corresponding to λ.

EXAMPLE 2.3. Find the eigenvalues and the corresponding eigenvectors for the matrix

A =
[3 1 1]
[1 3 1]
[1 1 3]

The characteristic polynomial of A is:

det(xI3 − A) = det
[x−3  −1   −1 ]
[−1   x−3  −1 ]
[−1   −1   x−3]
= (x − 5)(x − 2)².

To find the eigenvectors corresponding to the eigenvalue 5, we need to find the solutions of the system (A − 5I3)x = 0. Now

A − 5I3 =
[−2  1  1]
[ 1 −2  1]
[ 1  1 −2]

To solve this we reduce the matrix to row echelon form. Interchanging the first and the third rows and then adding the first and the second rows to the third row, we have:

[1  1 −2]
[1 −2  1]
[0  0  0]

and subtracting the first row from the second:

[1  1 −2]
[0 −3  3]
[0  0  0]

This is the row echelon form, so the rank of A − 5I3 is 2 and the nullity is 1. Thus there is one fundamental solution, given by a solution of

x1 + x2 − 2x3 = 0,  x2 − x3 = 0.

Hence the solution is x1 = x2 = x3, that is, u(1, 1, 1)ᵗ where u ∈ K. Thus we can take the eigenvector corresponding to 5 as (1, 1, 1)ᵗ.

An eigenvector corresponding to 2 is a solution of the system (A − 2I3)x = 0. Now

A − 2I3 =
[1 1 1]
[1 1 1]
[1 1 1]

and the row echelon form of this matrix is

[1 1 1]
[0 0 0]
[0 0 0]

a matrix of rank 1 and so nullity 2. This means that there are 2 fundamental solutions, that is, 2 linearly independent eigenvectors corresponding to the eigenvalue 2.

Thus an eigenvector corresponding to 2 satisfies the equation x1 + x2 + x3 = 0, or x3 = −x1 − x2. Thus any solution of the system is

(x1, x2, −x1 − x2)ᵗ = x1 (1, 0, −1)ᵗ + x2 (0, 1, −1)ᵗ.

The two linearly independent eigenvectors can be taken as (1, 0, −1)ᵗ and (0, 1, −1)ᵗ.
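The eigenpairs found in Example 2.3 can be verified directly from the definition Av = λv. A plain-Python sketch (not from the notes; helper names are mine):

```python
# Verify Av = lambda*v for the eigenpairs of Example 2.3.

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[3, 1, 1], [1, 3, 1], [1, 1, 3]]
pairs = [(5, [1, 1, 1]), (2, [1, 0, -1]), (2, [0, 1, -1])]
for lam, v in pairs:
    assert mat_vec(A, v) == [lam * x for x in v]
```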

Thus an n × n matrix cannot have more than n distinct eigenvalues. A matrix may have no eigenvalues in K. For example, the matrix

A =
[0 2]
[1 0]

has characteristic polynomial x² − 2. This has no roots in Q, so A has no eigenvalues when considered as a matrix over Q. But A has eigenvalues when considered as a matrix over R.

To find the eigenvalues and eigenvectors of an n × n matrix A:

(i). Find the characteristic polynomial of A.
(ii). Find all its roots; these are the eigenvalues.
(iii). For each eigenvalue λ, solve the system (A − λIn)x = 0; its non-zero solutions are the eigenvectors.
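Step (i) can be mechanized. The following sketch (not from the notes; it uses the Faddeev–LeVerrier recurrence, a standard technique not discussed in the text, and the helper names are mine) computes the coefficients of cA(x) = det(xIn − A) using only matrix products and traces:

```python
# Characteristic polynomial coefficients via the Faddeev-LeVerrier recurrence.
from fractions import Fraction as F

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def char_poly(A):
    """Return [1, c_{n-1}, ..., c_0] with c_A(x) = x^n + c_{n-1}x^{n-1} + ... + c_0."""
    n = len(A)
    M = [[F(0)] * n for _ in range(n)]
    coeffs = [F(1)]
    for k in range(1, n + 1):
        for i in range(n):
            M[i][i] += coeffs[-1]          # M + c*I
        M = mat_mul(A, M)                  # A*(M + c*I)
        coeffs.append(-sum(M[i][i] for i in range(n)) / k)
    return coeffs

A = [[3, 1, 1], [1, 3, 1], [1, 1, 3]]      # matrix of Example 2.3
print([int(c) for c in char_poly(A)])      # [1, -9, 24, -20] = (x-5)(x-2)^2 expanded
```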

EXAMPLE 2.4. Consider the matrix

A =
[2 1 −1]
[0 2  1]
[0 1  2]

The characteristic polynomial is (x − 1)(x − 2)(x − 3), so the eigenvalues are 1, 2, 3.

Eigenvector corresponding to 1: obtained by solving the system (A − I3)x = 0, that is,

[1 1 −1] [x1]   [0]
[0 1  1] [x2] = [0]
[0 1  1] [x3]   [0]

Thus x1 + x2 − x3 = 0 and x2 + x3 = 0, so x2 = −x3 and x1 = 2x3. An eigenvector corresponding to 1 is (2, −1, 1)ᵗ.

Eigenvector corresponding to 2: obtained by solving the system (A − 2I3)x = 0, that is,

[0 1 −1] [x1]   [0]
[0 0  1] [x2] = [0]
[0 1  0] [x3]   [0]

Thus we have x2 = x3 = 0. Hence an eigenvector corresponding to 2 is (1, 0, 0)ᵗ.

Eigenvector corresponding to 3: obtained by solving the system (A − 3I3)x = 0, that is,

[−1  1 −1] [x1]   [0]
[ 0 −1  1] [x2] = [0]
[ 0  1 −1] [x3]   [0]

Thus we have x2 = x3 and x1 = 0. Hence an eigenvector corresponding to 3 is (0, 1, 1)ᵗ. •

THEOREM 2.5. If λ1, . . . , λk are distinct eigenvalues of a matrix A, then eigenvectors corresponding to these eigenvalues are linearly independent.

Proof. Let u1, . . . , uk be corresponding eigenvectors, so Aui = λiui for i = 1, . . . , k. Let α1u1 + · · · + αkuk = 0 and suppose, to the contrary, that not all αi are zero; let r be the largest index such that αr ≠ 0. Multiplying both sides by (A − λ1In)(A − λ2In) · · · (A − λr−1In) kills every term except the r-th and gives

αr(λr − λ1) · · · (λr − λr−1)ur = 0.

Since the λi are distinct and ur, being an eigenvector, is non-zero, we get αr = 0, a contradiction. Hence u1, . . . , uk are linearly independent.

The reader can verify that the eigenvectors in Example 2.4 are actually linearly independent.

Let A ∈ Kn×n and let λ be an eigenvalue of A. Assume that u1, . . . , uk ∈ Kn are linearly independent eigenvectors corresponding to λ. Extend {u1, . . . , uk} to a basis {u1, u2, . . . , uk, uk+1, . . . , un} of Kn. Let P ∈ Kn×n have j-th column uj. Then P is invertible. Consider the matrix P⁻¹AP. For j = 1, . . . , k, its j-th column is

(P⁻¹AP)ej = (P⁻¹A)P ej = P⁻¹Auj = P⁻¹(λuj) = λP⁻¹uj = λP⁻¹P ej = λej.

Thus

P⁻¹AP =
[λIk X]
[0   Y]

for some matrices X and Y. It follows that if Kn has a basis consisting of eigenvectors of A, then P⁻¹AP is a diagonal matrix.

A matrix A that is similar to a diagonal matrix is called a diagonalizable matrix. In this case the columns of the matrix P are eigenvectors of A.

PROPOSITION 2.6. If A ∈ Kn×n has n distinct eigenvalues, then A is diagonalizable.

Recall that

A adj(A) = det(A) In.

Replacing the matrix A by the matrix xIn − A we have

(xIn − A) adj(xIn − A) = det(xIn − A) In = cA(x)In,

where cA(x) = a0 + a1x + · · · + an−1xⁿ⁻¹ + xⁿ is the characteristic polynomial of A. Now adj(xIn − A) is a matrix whose entries are cofactors of xIn − A, and so are polynomials of degree at most n − 1. Thus we can write:

adj(xIn − A) = B0 + B1x + · · · + Bn−1xⁿ⁻¹,

where each Bj is an n × n matrix. Thus:

(xIn − A)(B0 + B1x + · · · + Bn−1xⁿ⁻¹) = cA(x)In.

Equating the coefficients of the powers of x:

−AB0 = a0In
B0 − AB1 = a1In
...
Bn−2 − ABn−1 = an−1In
Bn−1 = In.


3 INNER PRODUCT AND ORTHOGONALITY

Multiply the first of these equations by In, the second by A, the third by A², . . . , and the last by Aⁿ, and add all the resulting equations. The left-hand side telescopes to zero and we have²

a0In + a1A + · · · + an−1Aⁿ⁻¹ + Aⁿ = 0.

Hence cA(A) = 0. This proves the following important result, called the Cayley–Hamilton Theorem.

THEOREM. If A ∈ Kn×n and cA(x) is its characteristic polynomial, then cA(A) = 0.

PROPOSITION. A ∈ Kn×n is invertible if and only if the constant term of its characteristic polynomial is non-zero.

Proof. Let cA(x) = c0 + c1x + · · · + cn−1xⁿ⁻¹ + xⁿ. Then cA(0) = 0 means that c0 = 0. Now c0 = 0 if and only if 0 is a root of the characteristic polynomial, that is, one of the eigenvalues of A is 0. This holds precisely when there is a non-zero column vector u such that Au = 0, that is, rank(A) < n, which is equivalent to A not being invertible.

The Cayley–Hamilton Theorem can be used for finding the inverse of a matrix. The following example illustrates this.

EXAMPLE 2.9. Let

A =
[ 4  2  2]
[ 3  3  2]
[−3 −1  0]

Then cA(x) = x³ − 7x² + 14x − 8. Since the constant term of cA(x) is non-zero, A is invertible. By the Cayley–Hamilton Theorem, A³ − 7A² + 14A − 8I3 = 0. Multiplying by A⁻¹, we have A² − 7A + 14I3 − 8A⁻¹ = 0. Hence

A⁻¹ = (1/8)(A² − 7A + 14I3).
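Example 2.9 can be checked with exact arithmetic. A plain-Python sketch (not from the notes; helper names are mine):

```python
# Check A^{-1} = (A^2 - 7A + 14 I_3)/8 from the Cayley-Hamilton Theorem.
from fractions import Fraction as F

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[4, 2, 2], [3, 3, 2], [-3, -1, 0]]
A2 = mat_mul(A, A)
I3 = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

Ainv = [[F(A2[i][j] - 7 * A[i][j] + 14 * I3[i][j], 8) for j in range(3)]
        for i in range(3)]
assert mat_mul(A, Ainv) == I3              # A * A^{-1} = I_3
```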

DEFINITION. Let V be a vector space over F, where F = R or C. An inner product on V is a map assigning to each ordered pair x, y ∈ V a scalar (x, y) ∈ F such that the following properties hold.

(i). (x, x) ≥ 0, and (x, x) = 0 if and only if x = 0.
(ii). (αx + βy, z) = α(x, z) + β(y, z) for all x, y, z ∈ V and α, β ∈ F.
(iii). (y, x) is the complex conjugate³ of (x, y), for all x, y ∈ V.

A vector space V over F equipped with an inner product is called an inner product space.

² For a polynomial p(x) = p0 + p1x + · · · + pkxᵏ, we write p(A) = p0In + p1A + · · · + pkAᵏ.
³ z̄ denotes the complex conjugate of z.


It follows that:

(1) (x, αy + βz) = ᾱ(x, y) + β̄(x, z) for all x, y, z ∈ V and α, β ∈ F, since (x, αy + βz) is the conjugate of (αy + βz, x) = α(y, x) + β(z, x).

(2) (αx, y) = α(x, y) and (x, αy) = ᾱ(x, y) for all x, y ∈ V and α ∈ F.

EXERCISE 3.2. Show that in an inner product space V: (i) (x, 0) = 0 for all x ∈ V. (ii) If x ∈ V is such that (x, y) = 0 for all y ∈ V, then x = 0.

EXERCISE 3.3. Show that for fixed w ∈ V, the mapping f : V → F given by f(x) = (x, w) is a linear form.

EXAMPLE 3.4. 1. On Cⁿ the inner product (x, y) := y∗x is called the standard inner product. The corresponding standard inner product on Rⁿ is (x, y) := yᵗx.

2. Let A ∈ Cn×n be a positive definite matrix. Then (x, y) := y∗Ax is also an inner product on Cⁿ.

3. On the space of matrices of a fixed size the following is an inner product:

(A, B) = tr(B∗A).

If n = 1 this reduces to the standard inner product of vectors. It is for this reason that it is referred to as the standard inner product of matrices.

4. Let V be the space of polynomials of degree at most n. Then there is an inner product on V given by:

(p, q) = ∫₀¹ p(t)q(t) dt.

REMARK 3.5. We normally deal here with the standard inner product space Rⁿ. Whenever we say the inner product space Fⁿ without mentioning the inner product, we mean the standard inner product space.

The norm of a vector x is

‖x‖ := √(x, x).

A vector x such that ‖x‖ = 1 is called a unit vector. An important inequality here is the Cauchy–Schwarz inequality: |(x, y)| ≤ ‖x‖ ‖y‖. Vectors x, y in an inner product space are called orthogonal if (x, y) = 0.

EXERCISE 3.6. In R³ find all vectors orthogonal to (1, 1, 1)ᵗ.

A set of vectors {u1, . . . , uk} in an inner product space is called orthogonal if (ui, uj) = 0 whenever i ≠ j; it is called an orthonormal set if it is orthogonal and every vector in it is a unit vector, that is,

(ui, uj) = δij = 1 if i = j, and 0 if i ≠ j.

PROPOSITION. An orthonormal set is linearly independent.

Proof. Let {u1, . . . , uk} be an orthonormal set. Suppose that α1u1 + · · · + αkuk = 0. Then αl = (Σᵢ₌₁ᵏ αiui, ul) = 0 for each l.

Our next result shows that every finite dimensional inner product space has an orthonormal basis.

THEOREM (Gram–Schmidt). Let {x1, . . . , xk} be an ordered linearly independent set. Then there is an orthonormal set {u1, . . . , uk} such that ⟨x1, . . . , xl⟩ = ⟨u1, . . . , ul⟩ for all l = 1, . . . , k.

Proof. Let u1 = (1/‖x1‖) x1. Suppose that an orthonormal set {u1, . . . , ul} has been constructed such that ⟨x1, . . . , xl⟩ = ⟨u1, . . . , ul⟩. Define

yl+1 = xl+1 − Σᵢ₌₁ˡ (xl+1, ui)ui.

Note that yl+1 ≠ 0, since xl+1 ∉ ⟨x1, . . . , xl⟩ = ⟨u1, . . . , ul⟩. Also (yl+1, ui) = 0 for all i = 1, . . . , l. Now let ul+1 = (1/‖yl+1‖) yl+1. Then clearly {u1, . . . , ul+1} is an orthonormal set.

From the equation above it follows that xl+1 ∈ ⟨yl+1, u1, . . . , ul⟩ = ⟨ul+1, u1, . . . , ul⟩, and so ⟨x1, . . . , xl+1⟩ ⊆ ⟨u1, . . . , ul+1⟩. Also,

ul+1 = (1/‖yl+1‖) (xl+1 − Σᵢ₌₁ˡ (xl+1, ui)ui),

so ul+1 ∈ ⟨x1, . . . , xl+1⟩, and hence ⟨u1, . . . , ul+1⟩ ⊆ ⟨x1, . . . , xl+1⟩.

To summarize the Gram–Schmidt process, given an ordered linearly independent set {x1, . . . , xk}:

(i). Write u1 = x1/‖x1‖ (normalize the vector).
(ii). If u1, . . . , ui are done, then define

yi+1 = xi+1 − ((xi+1, u1)u1 + · · · + (xi+1, ui)ui),

and write ui+1 = yi+1/‖yi+1‖.

Then {u1, . . . , uk} is an orthonormal set.
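The two steps above can be sketched directly in plain Python for Rⁿ with the standard inner product (not part of the original notes; helper names are mine):

```python
# Gram-Schmidt process as summarized above.
from math import sqrt

def gram_schmidt(xs):
    us = []
    for x in xs:
        y = list(x)
        for u in us:
            c = sum(a * b for a, b in zip(x, u))      # (x, u)
            y = [yi - c * ui for yi, ui in zip(y, u)]
        norm = sqrt(sum(yi * yi for yi in y))
        us.append([yi / norm for yi in y])
    return us

# the basis of Example 3.10 below
us = gram_schmidt([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
for i, u in enumerate(us):                            # check orthonormality
    for j, v in enumerate(us):
        ip = sum(a * b for a, b in zip(u, v))
        assert abs(ip - (1 if i == j else 0)) < 1e-12
```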

EXAMPLE 3.10. Find an orthonormal basis for R³ from the given basis (0, 1, 1)ᵗ, (1, 0, 1)ᵗ, (1, 1, 0)ᵗ.

Let x1 = (0, 1, 1)ᵗ, x2 = (1, 0, 1)ᵗ, x3 = (1, 1, 0)ᵗ. Then u1 = (1/√2)(0, 1, 1)ᵗ.

y2 = x2 − (x2, u1)u1 = (1, 0, 1)ᵗ − (1/2)(0, 1, 1)ᵗ = (1, −1/2, 1/2)ᵗ, so u2 = √(2/3) (1, −1/2, 1/2)ᵗ.

y3 = x3 − (x3, u1)u1 − (x3, u2)u2 = (1, 1, 0)ᵗ − (1/2)(0, 1, 1)ᵗ − (1/3)(1, −1/2, 1/2)ᵗ = (2/3)(1, 1, −1)ᵗ, so u3 = (1/√3)(1, 1, −1)ᵗ.

Hence an orthonormal basis is { (1/√2)(0, 1, 1)ᵗ, √(2/3)(1, −1/2, 1/2)ᵗ, (1/√3)(1, 1, −1)ᵗ }.

Note that if {u1, . . . , un} is an orthonormal basis of V and v ∈ V, then v = Σᵢ₌₁ⁿ (v, ui)ui.

For S ⊆ V define S⊥ = {u ∈ V : (u, x) = 0 for all x ∈ S}. Then S⊥ is a subspace of V. Indeed, if u, w ∈ S⊥ and α, β ∈ F, then (αu + βw, x) = α(u, x) + β(w, x) = 0 for all x ∈ S. Hence αu + βw ∈ S⊥, and so S⊥ is a subspace of V.

EXERCISE 3.13. Let V be an inner product space and let X, Y ⊆ V. Prove that:
(i) If X ⊆ Y, then Y⊥ ⊆ X⊥.
(ii) X⊥ = ⟨X⟩⊥.
(iii) If X is a basis of a subspace W of V, then X⊥ = W⊥.
(iv) If W is a subspace of a finite dimensional inner product space V, then (W⊥)⊥ = W.

Caution: (iv) is not true if W is replaced by an arbitrary subset X of V, since (X⊥)⊥ is a subspace while X need not be.

PROPOSITION 3.14. Let w1, . . . , wk be linearly independent vectors in an inner product space V. Then dim{w1, . . . , wk}⊥ = dim(V) − k.

Proof. Let dim V = n. Let W = ⟨w1, . . . , wk⟩ and let {u1, . . . , uk} be an orthonormal basis of W. Then {w1, . . . , wk}⊥ = W⊥ = ⟨u1, . . . , uk⟩⊥. Extend u1, . . . , uk to a basis u1, . . . , uk, xk+1, . . . , xn of V, and by the Gram–Schmidt process take the corresponding orthonormal basis u1, . . . , uk, uk+1, . . . , un. Now if v = Σᵢ₌₁ⁿ αiui ∈ W⊥, then αj = (v, uj) = 0 for j = 1, . . . , k. Hence v ∈ ⟨uk+1, . . . , un⟩; conversely each of uk+1, . . . , un lies in W⊥. Therefore dim W⊥ = n − k.

Let W be a subspace of V and let {w1, . . . , wk} be an orthonormal basis for W. Then for any v ∈ V:

v − Σⱼ₌₁ᵏ (v, wj)wj ∈ W⊥.

The vector Σⱼ₌₁ᵏ (v, wj)wj is called the orthogonal projection of v onto W (and along W⊥).

EXAMPLE 3.15. Let

W = { (x1, x2, x3, x4)ᵗ ∈ R⁴ : x1 + x2 + x3 + x4 = 0, 2x1 − x3 − x4 = 0 }

be a subspace of R⁴. Find W⊥. Also find the orthogonal projection of e1 on W⊥.

Observe that W = ⟨(1, 1, 1, 1)ᵗ, (2, 0, −1, −1)ᵗ⟩⊥. Thus W⊥ = ⟨y1 = (1, 1, 1, 1)ᵗ, y2 = (2, 0, −1, −1)ᵗ⟩. These two vectors are already orthogonal, so an orthonormal basis for W⊥ is {(1/2)y1, (1/√6)y2}. Now the orthogonal projection of e1 on W⊥ is (e1, (1/2)y1)(1/2)y1 + (e1, (1/√6)y2)(1/√6)y2 = (1/4)y1 + (1/3)y2.
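Example 3.15 can be checked numerically. A sketch (mine, not from the notes; helper names are mine):

```python
# Check the projection of e1 on W-perp from Example 3.15.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

y1 = [1, 1, 1, 1]
y2 = [2, 0, -1, -1]
e1 = [1, 0, 0, 0]
assert dot(y1, y2) == 0                        # already orthogonal

proj = [dot(e1, y1) / dot(y1, y1) * a + dot(e1, y2) / dot(y2, y2) * b
        for a, b in zip(y1, y2)]               # (1/4)y1 + (1/3)y2
print(proj)                                    # approximately (11/12, 1/4, -1/12, -1/12)

# e1 - proj must lie in W, i.e. satisfy both defining equations of W
w = [a - b for a, b in zip(e1, proj)]
assert abs(sum(w)) < 1e-12                     # x1 + x2 + x3 + x4 = 0
assert abs(2 * w[0] - w[2] - w[3]) < 1e-12     # 2x1 - x3 - x4 = 0
```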

PROPOSITION 3.17. Let W be a subspace of V. Then to each v ∈ V there are unique vectors v1 and v2 such that v = v1 + v2, v1 ∈ W and v2 ∈ W⊥.

Proof. Let {w1, . . . , wk} be an orthonormal basis of W. Then v2 = v − Σᵢ₌₁ᵏ (v, wi)wi ∈ W⊥, and v = v1 + v2 where v1 = Σᵢ₌₁ᵏ (v, wi)wi ∈ W. For uniqueness, if v = v1 + v2 = v1′ + v2′ with v1, v1′ ∈ W and v2, v2′ ∈ W⊥, then v1 − v1′ = v2′ − v2 ∈ W ∩ W⊥ = {0}, so v1 = v1′ and v2 = v2′.

4 ADJOINT OF A LINEAR TRANSFORMATION

The above proposition shows that for a given v ∈ V there are unique vectors v1 ∈ W and v2 ∈ W⊥ such that v = v1 + v2. Thus we have a mapping PrW : V → V given by PrW(v) = v1, called the orthogonal projection onto W.

(i) PrW(v) ∈ W and so v − PrW(v) ∈ W⊥. Also if {w1, . . . , wk} is an orthonormal basis of W, then by uniqueness it follows that

PrW(v) = Σᵢ₌₁ᵏ (v, wi)wi.

(ii) Observe that PrW(w) = w for all w ∈ W. Thus PrW(PrW(v)) = PrW(v) for all v ∈ V, that is, PrW² = PrW.

(iii) Let X = {w1, . . . , wn} be an orthonormal basis of V such that {w1, . . . , wk} is an orthonormal basis of W. Show that

[PrW]X =
[Ik 0]
[0  0]

Exercise 3.3 shows that if V is an inner product space, then for fixed y ∈ V the map v ↦ (v, y) is a linear functional. The next result states that the converse also holds.

PROPOSITION 4.1. Let V be a finite dimensional inner product space and let f be a linear functional on V. Then there is a unique y ∈ V such that f(x) = (x, y) for all x ∈ V.

Proof. If f = 0, take y = 0. Otherwise ker f has dimension n − 1, so dim(ker f)⊥ = 1 (Proposition 3.14). Let u be a unit vector in (ker f)⊥.

We show that f(x) = (x, αu) for a suitable α ∈ F. If this is the case, then f(u) = (u, αu) = ᾱ, so we try the vector y = αu with ᾱ = f(u). Each v ∈ V can be written as v = v1 + γu with v1 ∈ ker f, and then f(v) = γf(u). Also (v, y) = (v, αu) = ᾱ(v, u) = f(u)(v1 + γu, u) = f(u)γ. Hence f(v) = (v, y) for all v ∈ V.

Finally, if w ∈ V is another such vector, then f(v) = (v, y) = (v, w) for all v ∈ V. But then (v, y − w) = 0 for all v ∈ V, and so y = w.

Let T : V → W be a linear transformation between inner product spaces. Let y ∈ W be fixed. Then fT : V → F given by fT(v) = (T v, y) is a linear form. By Proposition 4.1 there is a unique vector x ∈ V such that fT(v) = (v, x) for all v ∈ V. Thus for each y ∈ W there is a unique vector, denoted T∗(y), such that (T v, y) = (v, T∗(y)) for all v ∈ V. This defines a map T∗ : W → V, called the adjoint of T.


PROPOSITION 4.2. T∗ : W → V is linear.

Proof. For α, β ∈ F and y, z ∈ W:

(v, T∗(αy + βz)) = (T v, αy + βz) = ᾱ(T v, y) + β̄(T v, z) = ᾱ(v, T∗(y)) + β̄(v, T∗(z)) = (v, αT∗(y) + βT∗(z)), for all v ∈ V.

Hence T∗(αy + βz) = αT∗(y) + βT∗(z).

Let V and W be inner product spaces of dimensions n and m respectively. Let V = {v1, . . . , vn} and W = {w1, . . . , wm} be orthonormal bases for V and W. Then T(vj) = Σᵢ₌₁ᵐ (T(vj), wi)wi, so the (i, j)-th entry of W[T]V is (T(vj), wi). Now (T∗(wj), vi) is the conjugate of (vi, T∗(wj)) = (T(vi), wj). This proves the following.

PROPOSITION. The (i, j)-th entry of V[T∗]W is the conjugate of the (j, i)-th entry of W[T]V.

Thus if A is the matrix of T with respect to orthonormal bases, then the matrix of T∗ is the conjugate transpose A∗ of A.

(i) If S, T ∈ L(V, W), then (αS + T)∗ = ᾱS∗ + T∗ for α ∈ F.
(ii) (ST)∗ = T∗S∗.
(iii) If T ∈ L(V) and T is invertible, then (T∗)⁻¹ = (T⁻¹)∗.

EXAMPLE 4.5. Let T : R³ → R² be given by

T (x1, x2, x3)ᵗ = (x1 + 2x2 + 3x3, 4x1 + 5x2 + 6x3)ᵗ.

Then the matrix of T with respect to the standard bases is

[1 2 3]
[4 5 6]

Therefore the matrix of T∗ with respect to the same bases is

[1 4]
[2 5]
[3 6]

Hence T∗ : R² → R³ is given by

T∗ (y1, y2)ᵗ = (y1 + 4y2, 2y1 + 5y2, 3y1 + 6y2)ᵗ.
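The defining identity (T v, y) = (v, T∗y) can be checked for Example 4.5 with random integer vectors. A sketch (mine, not from the notes; helper names are mine):

```python
# Check (Tv, y) = (v, T*y) for Example 4.5 with the standard inner products.
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[1, 2, 3], [4, 5, 6]]                   # matrix of T
At = [[1, 4], [2, 5], [3, 6]]                # matrix of T*

def apply(M, v):
    return [dot(row, v) for row in M]

random.seed(0)
for _ in range(100):
    v = [random.randint(-9, 9) for _ in range(3)]
    y = [random.randint(-9, 9) for _ in range(2)]
    assert dot(apply(A, v), y) == dot(v, apply(At, y))
```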


EXERCISE 4.6. Let T : C³ → C² be given by

T (x, y, z)ᵗ = (x + ιy, (1 + ι)y + 3z)ᵗ.

Find T∗ (1 + ι, 1 − ι)ᵗ.

DEFINITION 4.7. A linear operator T on an inner product space V is called self-adjoint if T∗ = T. Thus T is self-adjoint if and only if the matrix of T with respect to an orthonormal basis is Hermitian.⁴

5 UNITARY SIMILARITY

Let us assume from here onwards that all matrices have entries in C and Cⁿ has the standard inner product.

A matrix A ∈ Cn×n with orthonormal columns is called a unitary matrix. Thus the columns of a unitary matrix form an orthonormal basis of Cⁿ.

Since the (i, j)-th entry of the matrix A∗A is the standard inner product of the j-th and i-th columns of A, it follows that A is unitary if and only if A∗A = In.

A matrix A ∈ Cn×n such that AᵗA = In is called an orthogonal matrix. An orthogonal matrix need not be unitary. For example,

A = (1/√3)
[2  ι]
[ι −2]

is orthogonal but not unitary. However, if A ∈ Rn×n, then A is orthogonal if and only if it is unitary.

EXERCISE 5.2. Show that the inverse of a unitary matrix is unitary, and that the product of two unitary matrices is unitary. Show by example that the sum of two unitary matrices need not be unitary.

Matrices A, B ∈ Cn×n are unitarily similar if there is a unitary matrix U such that U∗AU = B. A matrix A ∈ Cn×n is unitarily diagonalizable if A is unitarily similar to a diagonal matrix.

THEOREM (Schur). Every A ∈ Cn×n is unitarily similar to an upper triangular matrix.

Proof. The proof is by induction on n; the result is trivial for n = 1. Assume the statement is true for all matrices of order less than n. Let λ be an eigenvalue of A and let u be a corresponding eigenvector of norm 1. Let U1 be a unitary matrix

⁴ A matrix A is Hermitian if it is equal to its conjugate transpose, that is, A = A∗.


whose first column is u. Then the first column of U1∗AU1 is λe1, and so

U1∗AU1 =
[λ ∗ ]
[0 A1]

Here ∗ denotes entries which are of no interest to us. By the induction hypothesis there is a unitary matrix V such that V∗A1V is upper triangular. Now if

U2 =
[1 0]
[0 V]

then U = U1U2 is unitary and

U∗AU =
[λ ∗     ]
[0 V∗A1V]

is an upper triangular matrix.

Note that the diagonal entries of an upper triangular matrix are its eigenvalues. Hence we have the following statement.

Let A ∈ Cn×n. Then det(A) is the product of the eigenvalues of A, and the trace of A is the sum of its eigenvalues.

A matrix A ∈ Cn×n is called normal if AA∗ = A∗A.

THEOREM. A ∈ Cn×n is unitarily diagonalizable if and only if A is normal.

Proof. Let A be normal. By Schur's theorem there is a unitary matrix U such that U∗AU = T, an upper triangular matrix. Now

TT∗ = (U∗AU)(U∗AU)∗ = U∗(AA∗)U = U∗(A∗A)U = (U∗AU)∗(U∗AU) = T∗T.

Comparing the (i, i)-th entries of TT∗ and T∗T row by row shows that every off-diagonal entry of T vanishes. Hence T is a diagonal matrix.

Conversely, if U∗AU = D, a diagonal matrix, then A = UDU∗. Since any two diagonal matrices commute, DD∗ = D∗D. Thus AA∗ = (UDU∗)(UD∗U∗) = U(DD∗)U∗ = U(D∗D)U∗ = A∗A, and A is normal.

PROPOSITION 5.9. Let A ∈ Cn×n be normal. Then:

(i) ‖Au‖ = ‖A∗u‖ for every u ∈ Cⁿ.
(ii) If Au = λu, then A∗u = λ̄u, where λ ∈ C.
(iii) Eigenvectors corresponding to distinct eigenvalues are orthogonal.

Proof. (i) ‖Au‖² = (Au)∗(Au) = u∗(A∗A)u = u∗(AA∗)u = (A∗u)∗(A∗u) = ‖A∗u‖². Hence ‖Au‖ = ‖A∗u‖.

(ii) A − λIn is also normal. Thus using (i): 0 = ‖(A − λIn)u‖ = ‖(A − λIn)∗u‖ = ‖(A∗ − λ̄In)u‖. Hence A∗u = λ̄u.

(iii) Let Au = λu and Av = µv with λ ≠ µ. Then λv∗u = v∗(Au) = (A∗v)∗u = (µ̄v)∗u = µv∗u. Hence v∗u = 0.


EXAMPLE 5.10. The matrix

A =
[2 1 1]
[1 2 1]
[1 1 2]

is Hermitian, and so unitarily diagonalizable. The eigenvalues of this matrix are 1 (twice) and 4. Eigenvectors corresponding to 1 are (1, −1, 0)ᵗ and (1, 0, −1)ᵗ, and an eigenvector corresponding to 4 is (1, 1, 1)ᵗ. Now by Proposition 5.9 we need orthonormal eigenvectors corresponding to 1. These can be taken as (1/√2)(1, −1, 0)ᵗ and (1/√6)(1, 1, −2)ᵗ. Hence

U =
[ 1/√2   1/√6  1/√3]
[−1/√2   1/√6  1/√3]
[ 0     −2/√6  1/√3]

and U∗AU = diag(1, 1, 4).
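Example 5.10 can be checked numerically; since U is real, U∗ = Uᵗ. A sketch (mine, not from the notes; helper names are mine):

```python
# Check that U^t A U = diag(1, 1, 4) for Example 5.10.
from math import sqrt

A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
U = [[1 / sqrt(2), 1 / sqrt(6), 1 / sqrt(3)],
     [-1 / sqrt(2), 1 / sqrt(6), 1 / sqrt(3)],
     [0, -2 / sqrt(6), 1 / sqrt(3)]]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(col) for col in zip(*X)]

D = mat_mul(transpose(U), mat_mul(A, U))
expected = [[1, 0, 0], [0, 1, 0], [0, 0, 4]]
assert all(abs(D[i][j] - expected[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```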

6 BILINEAR FORMS

DEFINITION. Let V be a vector space over K. A map f : V × V → K is called a bilinear form if:

(i) f(αx + βy, z) = αf(x, z) + βf(y, z) for all α, β ∈ K and x, y, z ∈ V.
(ii) f(x, αy + βz) = αf(x, y) + βf(x, z) for all α, β ∈ K and x, y, z ∈ V.

It follows that:

(i) f(x, 0) = 0 for all x ∈ V and f(0, y) = 0 for all y ∈ V.
(ii) f(αx, y) = αf(x, y) = f(x, αy) for α ∈ K and x, y ∈ V.

For a fixed A ∈ Kn×n, the map f(x, y) = yᵗAx on Kⁿ is a bilinear form.

Consider a bilinear form f on V and let B = {u1, . . . , un} be an ordered basis of V. For x, y ∈ V we have x = Σᵢ₌₁ⁿ αiui and y = Σᵢ₌₁ⁿ γiui, and thus

f(x, y) = Σᵢ,ⱼ₌₁ⁿ αiγj f(ui, uj) = [x]ᵗB A [y]B,

where A is the n × n matrix whose (i, j)-th entry is f(ui, uj). The matrix A is called the matrix of the bilinear form f with respect to the ordered basis B, denoted by [f]B.

If f and g are bilinear forms on V and B is an ordered basis of V, then [f + g]B = [f]B + [g]B and [αf]B = α[f]B for all α ∈ K.


EXERCISE. Let {u1, . . . , un} be a basis of V and let φ : {u1, . . . , un} × {u1, . . . , un} → K be any map. Then for any x, y ∈ V we have unique representations x = Σᵢ₌₁ⁿ αiui and y = Σᵢ₌₁ⁿ βiui. Verify that

f(x, y) = Σᵢ,ⱼ₌₁ⁿ αiβj φ(ui, uj)

defines a bilinear form on V with f(ui, uj) = φ(ui, uj).

EXERCISE. Let T : V → V∗ be a linear transformation⁵ and define f : V × V → K by f(x, y) = T(y)(x). Show that f is a bilinear form.

PROPOSITION. Let B and C be ordered bases of V and let f : V × V → K be a bilinear form on V. Then there is an invertible matrix P such that

[f]B = Pᵗ [f]C P.

Proof. There is an invertible matrix P such that [u]C = P[u]B for all u ∈ V. Thus for any x, y ∈ V:

f(x, y) = [x]ᵗC [f]C [y]C = (P[x]B)ᵗ [f]C (P[y]B) = [x]ᵗB (Pᵗ[f]C P) [y]B.

Hence [f]B = Pᵗ[f]C P.

A bilinear form f is called symmetric if f(x, y) = f(y, x) for all x, y ∈ V, and skew-symmetric if f(x, y) = −f(y, x) for all x, y ∈ V.

If B is an ordered basis of V, then f : V × V → K is a symmetric (skew-symmetric) bilinear form if and only if [f]B is a symmetric (skew-symmetric) matrix.

PROPOSITION. Every bilinear form on V over K is the sum of a symmetric and a skew-symmetric bilinear form.

⁵ Recall that V∗ is the space of all linear forms on V.

7 SYMMETRIC BILINEAR FORMS

Proof. Define g(x, y) = (1/2)(f(x, y) + f(y, x)) and h(x, y) = (1/2)(f(x, y) − f(y, x)). Then verify that g is a symmetric bilinear form on V and h is a skew-symmetric bilinear form on V such that f = g + h.

Note that if f : V × V → K is a skew-symmetric bilinear form, then f(x, x) = 0 for all x ∈ V.
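In matrix terms the decomposition above reads A = (A + Aᵗ)/2 + (A − Aᵗ)/2. A sketch (mine, not from the notes; the matrix A is an arbitrary example):

```python
# Symmetric plus skew-symmetric decomposition of a matrix.
from fractions import Fraction as F

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
n = len(A)
G = [[F(A[i][j] + A[j][i], 2) for j in range(n)] for i in range(n)]   # symmetric part
H = [[F(A[i][j] - A[j][i], 2) for j in range(n)] for i in range(n)]   # skew part

assert all(G[i][j] == G[j][i] for i in range(n) for j in range(n))
assert all(H[i][j] == -H[j][i] for i in range(n) for j in range(n))
assert all(G[i][j] + H[i][j] == A[i][j] for i in range(n) for j in range(n))
```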

Let V be a finite dimensional vector space over R and let f : V × V → R be a symmetric bilinear form. Let [f] be the matrix of f with respect to the standard basis, so that f(x, y) = [x]ᵗ[f][y]. Since f is a symmetric bilinear form, [f] is a symmetric matrix, and hence the eigenvalues of [f] are all real numbers. Let p be the number of positive eigenvalues, q the number of negative eigenvalues of [f], and let r be the rank of the matrix [f]; then r = p + q. Write s = n − r. There is an orthogonal matrix U such that

Uᵗ[f]U = diag(λ1, . . . , λp+q, 0, . . . , 0),

where λ1, . . . , λp+q are the non-zero eigenvalues, ordered so that the positive ones come first. Let Q = diag(√|λ1|, . . . , √|λp+q|, 1, . . . , 1). Then with P = UQ⁻¹ we have

Pᵗ[f]P = diag(Ip, −Iq, 0s).

The triplet (p, q, s) is called the index of f, the number p + q is the rank of f, and the number p − q is called the signature of f.

(i). Every symmetric bilinear form f on V can be represented by a block diagonal matrix diag(Ip, −Iq, 0s), where p, q, s are uniquely determined by f.

(ii). Two symmetric bilinear forms are equivalent if and only if they have the same index.

(iii). Two symmetric bilinear forms are equivalent if and only if they have the same rank and the same signature.

For a symmetric matrix A, the function q(x) = xᵗAx is called the quadratic form corresponding to A. A symmetric matrix A ∈ Rn×n is positive semi-definite if xᵗAx ≥ 0 for all x ∈ Rⁿ.

The following conditions are equivalent for a symmetric matrix A:

(i) A is positive semi-definite.
(ii) All eigenvalues of A are non-negative.
(iii) All principal submatrices of A have non-negative determinants.
(iv) There is a matrix B such that A = BᵗB.

EXAMPLE 7.1. Let

A =
[1 1 2]
[1 2 1]
[2 1 3]

We find an invertible matrix Q so that QᵗAQ is diagonal. The method: start with A and I3; whenever a column operation is applied to A, the same row operation is also applied to A, while the column operation alone is applied to the identity matrix. When A has been transformed to diagonal form, the identity matrix has been transformed to Q.

Write A and I3 side by side:

[1 1 2 | 1 0 0]
[1 2 1 | 0 1 0]
[2 1 3 | 0 0 1]

Add −1 times the first column to the second column, and do the same on rows:

[1  0  2 | 1 −1 0]
[0  1 −1 | 0  1 0]
[2 −1  3 | 0  0 1]

Add −2 times the first column to the third column, and do the same on rows:

[1  0  0 | 1 −1 −2]
[0  1 −1 | 0  1  0]
[0 −1 −1 | 0  0  1]

Add the second column to the third column, and do the same on rows:

[1 0  0 | 1 −1 −3]
[0 1  0 | 0  1  1]
[0 0 −2 | 0  0  1]

Hence

Q =
[1 −1 −3]
[0  1  1]
[0  0  1]

and QᵗAQ = diag(1, 1, −2).
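Example 7.1 can be checked with exact integer arithmetic. A sketch (mine, not from the notes; helper names are mine):

```python
# Check that Q^t A Q = diag(1, 1, -2) for Example 7.1.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(col) for col in zip(*X)]

A = [[1, 1, 2], [1, 2, 1], [2, 1, 3]]
Q = [[1, -1, -3], [0, 1, 1], [0, 0, 1]]
D = mat_mul(transpose(Q), mat_mul(A, Q))
assert D == [[1, 0, 0], [0, 1, 0], [0, 0, -2]]
```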
