
Outline of the week

1 Rank-nullity theorem (Lec 7)
2 Determinants, properties (Lec 7)
3 Invertibility by rank/determinants (Lec 8)
4 Rank by determinants (Lec 8)
5 Cramer's rule (Lec 8)
6 Expansion in a basis (Lec 9)
7 Orthogonal sets and orthonormal bases (Lec 9)
8 Gram-Schmidt process, examples in R3 (Lec 9)



Nullity and the rank-nullity theorem

Definition 1 (Nullity)
If A is any m × n real matrix, the dimension of the null-space N (A) of A is
called the nullity of A and is denoted null(A).

Theorem 2 (Rank-nullity theorem)


Let A be any m × n real matrix. Let null(A) and rank(A) be respectively
the nullity and rank of A. Then

rank(A) + null(A) = n = no. of columns of A.

Caution: The title may suggest rank − nullity. It is rank + nullity.



Proof of the rank-nullity theorem

Let Â be a row-echelon form of A. Then there are r = rank(A) pivots
and n − r pivot-free columns. The corresponding n − r free variables
x_{j_t}, t = 1, 2, ..., n − r describe the solutions of Ax = 0, i.e. the null-space of A.
Setting successively x_{j_t} = 1 and the rest to 0, we get a basis of N (A).
Hence
null(A) = n − r = n − rank(A).
Rearranging, we get
rank(A) + null(A) = n.



An example of the rank-nullity equation

Example 3
Consider the augmented matrix of the homogeneous system Ax = 0:

[ 1  3   1  −2  −3 | 0 ]
[ 1  3   3  −1  −4 | 0 ]
[ 2  6  −4  −7  −3 | 0 ]
[ 3  9   1  −7  −8 | 0 ]

Performing the EROs E21(−1), E31(−2), E41(−3) yields:

[ 1  3   1  −2  −3 | 0 ]
[ 0  0   2   1  −1 | 0 ]
[ 0  0  −6  −3   3 | 0 ]
[ 0  0  −2  −1   1 | 0 ]



Example contd.

 
Performing E32(3), E42(1) gives the row-echelon form:

[ 1  3  1  −2  −3 | 0 ]
[ 0  0  2   1  −1 | 0 ]
[ 0  0  0   0   0 | 0 ]      (REF)
[ 0  0  0   0   0 | 0 ]

The solution set is

N (A) = { (1/2) [ −6x2 + 5x4 + 5x5,  2x2,  −x4 + x5,  2x4,  2x5 ]T } ⊂ R5.

Here x2, x4, x5 take arbitrary values, since the second, fourth and fifth columns are pivot-free.

Finally, null(A) = 3, rank(A) = 2 and null(A) + rank(A) = 5, which is the number of variables, i.e. the number of columns of A.



Example extended

 
Given

A = [ 1  3   1  −2  −3 ]
    [ 1  3   3  −1  −4 ]
    [ 2  6  −4  −7  −3 ]
    [ 3  9   1  −7  −8 ],

find a basis of the null space N (A) of A. The null space is described on the previous slide. Since the real triple (x2, x4, x5) can take any value in R3, we assign the values (1, 0, 0), (0, 1, 0) and (0, 0, 1) successively, to find

v1 = [−6, 2, 0, 0, 0]T,   v2 = [5, 0, −1, 2, 0]T,   v3 = [5, 0, 1, 0, 2]T

as three linearly independent elements of N (A), which form a basis.

We have parametrized N (A) by R3 and also used its standard basis to find a basis of N (A).
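As a quick numerical cross-check of this example, here is a minimal sketch (assuming numpy is available; not part of the lecture) that recovers the rank and a null-space basis from the SVD and verifies the rank-nullity equation:

import numpy as np

# Matrix A of Example 3.
A = np.array([[1, 3,  1, -2, -3],
              [1, 3,  3, -1, -4],
              [2, 6, -4, -7, -3],
              [3, 9,  1, -7, -8]], dtype=float)

# Right-singular vectors with (numerically) zero singular values span N(A).
_, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
rank = int(np.sum(s > tol))
null_basis = Vt[rank:].T                 # columns form a basis of N(A)

print(rank, null_basis.shape[1])         # 2 3, and 2 + 3 = 5 = no. of columns
assert np.allclose(A @ null_basis, 0)    # each basis vector solves Ax = 0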



A geometric viewpoint

Rank-Nullity theorem: Let A be an m × n matrix. As a map A : Rn −→ Rm, A 'transmits' Rn linearly into Rm. Then null(A) is the number of dimensions which vanish (transmission losses) and rank(A) = dim ARn is the number of dimensions received. Naturally the sum rank(A) + null(A) = n = dim Rn is the total number of dimensions transmitted. (A conservation law?)

Solvability of Ax = b: Given the linear map A : Rn −→ Rm and b ∈ Rm, we can solve Ax = b for the unknown x ∈ Rn if and only if b ∈ ARn = C(A).
This geometrically obvious statement is equivalent to the condition rank(A) = rank(A+) for the solvability of the linear system.
Adjoining b to the columns of A does not increase the linear span, viz.
LS{A1, A2, ..., An} = LS{A1, A2, ..., An; b} ⇐⇒ b ∈ C(A).

Solution set: If p ∈ Rn is any particular solution, i.e. Ap = b, then the set

S = p + N (A) = { p + v | v ∈ N (A) }

is the full set of solutions of Ax = b.

The solution set is the null space N (A), displaced parallel to itself.



Invertibility via rank

Theorem 4
An n × n matrix A is invertible if and only if its rank is n. Equivalently, if
and only if the null space of A is trivial.

Proof: The Gauss-Jordan method of finding A−1 shows that the inverse exists if and only if the reduced row echelon form of the n × n matrix A is the identity matrix In, which has n pivots.
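A one-line numerical restatement of Theorem 4 (a sketch assuming numpy; the function name is illustrative, not from the lecture):

import numpy as np

def invertible_via_rank(A: np.ndarray) -> bool:
    # An n x n matrix is invertible exactly when rank(A) = n,
    # i.e. when its null space is trivial.
    n = A.shape[0]
    return np.linalg.matrix_rank(A) == n

A = np.array([[1., 2.], [3., 4.]])   # rank 2 -> invertible
B = np.array([[1., 2.], [2., 4.]])   # rank 1 -> singular
print(invertible_via_rank(A), invertible_via_rank(B))   # True False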



Determinants

Let A be a square n × n real matrix. We define |A|, the determinant of A (also written det A), in an inductive manner.

(a) If A is 1 × 1, say A = [a], we define |A| = a.

To define n × n determinants inductively (induction on n), we suppose that (n − 1) × (n − 1) determinants have already been defined.

(b) To carry out the induction we first define the minors of A.

Definition 5 (Minors of A)
The (jk)th minor Mjk = Mjk (A) of A is defined to be the determinant of
the (n − 1) × (n − 1) sub-matrix of A obtained by deleting its j th row and
the k th column.



Determinants contd.

(c) Finally, the definition of |A| is in terms of its minors:

Definition 6 (Determinant of A)
The determinant |A| is defined as the sum

|A| = Σ_{k=1}^{n} (−1)^{j+k} ajk Mjk      (expansion by the j-th row)

    = Σ_{j=1}^{n} (−1)^{j+k} ajk Mjk      (expansion by the k-th column)

Remark: We will consistently use row expansions and finally prove that column expansion is also valid. We will NOT prove that expansions by any two rows are equal; this is given in the handout.
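The inductive definition translates directly into a recursive computation. A sketch (assuming numpy; exponentially slow, for illustration only) expanding along the first row:

import numpy as np

def det(A: np.ndarray) -> float:
    n = A.shape[0]
    if n == 1:                                      # case (a): |[a]| = a
        return float(A[0, 0])
    total = 0.0
    for k in range(n):
        # Minor M_{1k}: delete the first row and the k-th column.
        minor = np.delete(np.delete(A, 0, axis=0), k, axis=1)
        total += (-1) ** k * A[0, k] * det(minor)   # cofactor expansion
    return total

# A singular 3 x 3 matrix (it reappears in Example 19).
A = np.array([[0., 2., 3.], [2., 0., 5.], [-3., 5., 0.]])
print(det(A), np.linalg.det(A))                     # both are ~0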



Some useful notations

Let A be a square matrix of size n. We introduce the following associated matrices:

M: The matrix of minors of A is denoted

M = M(A) = [Mjk] = [ |A_ĵk̂| ].

C: The matrix of co-factors of A is denoted

C = C(A) = [Cjk] = [ (−1)^{j+k} Mjk ].

Cofactors are just ±minors, i.e. signed minors.

Adj: The transpose of the co-factor matrix C(A) is called the adjoint of A and denoted

Adj(A) = C(A)T.
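These associated matrices are easy to compute mechanically. A sketch (assuming numpy; function names are illustrative), which also checks the identity A(AdjA) = |A|I proved later:

import numpy as np

def minors(A: np.ndarray) -> np.ndarray:
    n = A.shape[0]
    M = np.empty((n, n))
    for j in range(n):
        for k in range(n):
            # delete row j and column k, then take the determinant
            sub = np.delete(np.delete(A, j, axis=0), k, axis=1)
            M[j, k] = np.linalg.det(sub)
    return M

def cofactors(A: np.ndarray) -> np.ndarray:
    n = A.shape[0]
    signs = (-1.0) ** np.add.outer(np.arange(n), np.arange(n))  # (-1)^(j+k)
    return signs * minors(A)

def adjoint(A: np.ndarray) -> np.ndarray:
    return cofactors(A).T

A = np.array([[1., 2., 0.], [3., 1., 4.], [0., 5., 2.]])
print(np.allclose(A @ adjoint(A), np.linalg.det(A) * np.eye(3)))   # True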



Properties of the determinant

Example 7

A = [ a  b ],   M = [ d  c ],   C = [  d  −c ],   AdjA = [  d  −b ]
    [ c  d ]        [ b  a ]        [ −b   a ]           [ −c   a ]

The determinant function enjoys three crucial properties:


D-1 Skew-symmetry
D-2 Row-wise linearity
D-3 Normalization
These properties characterize the determinant function.

We state and prove(?) each of these in more detail.



Property D-1: Skew-symmetry

Theorem 8
Let A be any n × n matrix, n ≥ 2, and let B be obtained from A by interchanging two of its rows. Then |B| = −|A|.

Proof:
Direct check if n = 2. For n ≥ 3, expand by a row not involved in the interchange. By the induction hypothesis each involved cofactor changes sign.

Corollary 9
If two of the rows are the same, then the determinant vanishes.

Proof:
On interchanging the said rows, the determinant changes sign, while the matrix, and therefore the determinant, remains the same. Hence |A| = −|A|, i.e. |A| = 0.



Property D-2: Row-wise linearity

Theorem 10
Let A be an n × n matrix. If its j-th row is a linear combination of two rows, say Aj = c′A′j + c″A″j, then |A| = c′|A′| + c″|A″|. The notations are as follows:

    [ A1 ]         [ A1  ]         [ A1  ]
    [ ⋮  ]         [ ⋮   ]         [ ⋮   ]
A = [ Aj ],   A′ = [ A′j ],   A″ = [ A″j ],   where Aj = c′A′j + c″A″j.
    [ ⋮  ]         [ ⋮   ]         [ ⋮   ]
    [ An ]         [ An  ]         [ An  ]

Proof:
Obvious. Just expand by the j th row. Cofactors remain unchanged.

Exercise: Break this property into two parts.



Property D-3: Normalization, and additional properties

Theorem 11
Let In be the n × n identity matrix. Then |In | = 1.

Proof:
Obvious for n = 1. Assuming the result for n − 1 and expanding |In| by the first row, we get |In| = 1 · |In−1| + 0 + · · · + 0 = 1.

Additional useful properties:


In addition to the crucial 3 properties, the determinant satisfies two more
useful properties:

Invariance under transposition: |AT| = |A|.
This validates expansion of any determinant by the columns.
Multiplicative property: |AB| = |A||B| .
Proof: Handout.
Applications of determinant: invertibility

Theorem 12 (Invertibility via determinant)

Let A be an n × n matrix. A is invertible if and only if |A| ≠ 0.

Proof: A is invertible =⇒ ∃ B s.t. AB = I =⇒ |A||B| = 1 =⇒ |A| ≠ 0.

For the converse, we actually construct the inverse using the adjoint.
Lemma 13
Let A be a square matrix, then A(AdjA) = (AdjA)A = |A|I.

Proof: Since AdjA is C(A)T, the diagonal entries of A(AdjA) are

[A(AdjA)]jj = Σ_ℓ ajℓ Cjℓ = |A|,   j = 1, 2, ..., n

(expansion by the j-th row).



Proof of the lemma, contd.

For j ≠ k, say for definiteness j < k, the (j, k) entry of A(AdjA) is

Σ_ℓ ajℓ Ckℓ = Σ_ℓ (−1)^{k+ℓ} ajℓ Mkℓ,

where each minor Mkℓ is the determinant of A with its k-th row and ℓ-th column deleted. (In the lecture display the deleted entries are shown in gray: the gray column moves from left to right while the gray row remains steady.)



Explanation of Σ_ℓ (−1)^{k+ℓ} ajℓ Mkℓ

Written out term by term, the sum reads

(−1)^{k+1} aj1 Mk1 + (−1)^{k+2} aj2 Mk2 + · · · + (−1)^{k+n} ajn Mkn,

where Mkℓ is the determinant of A with its k-th row and ℓ-th column deleted. (Gray entries are absent; the gray column moves from left to right while the gray row remains steady.)



Lemma contd.


The sum Σ_ℓ (−1)^{k+ℓ} ajℓ Mkℓ is precisely the expansion, by the k-th row, of the determinant of the matrix obtained from A by writing a copy of the j-th row in place of the k-th row:

| a11 ··· a1n |
|      ⋮      |
| aj1 ··· ajn |    (the j-th row)
|      ⋮      |
| aj1 ··· ajn |    (the k-th row, replaced by the j-th)
|      ⋮      |
| an1 ··· ann |

= 0,   ∵ the j-th row = the k-th row.

Thus A(AdjA) = |A|I is proved.

The case of (AdjA)A = |A|I is left as an exercise.
We used a strange idea of absence by proxy.



Illustration
  
          [ a  b  c ] [ C11  C21  C31 ]
A(AdjA) = [ p  q  r ] [ C12  C22  C32 ]
          [ x  y  z ] [ C13  C23  C33 ]

Let us look at the (1, 2) entry of the product:

aC21 + bC22 + cC23 = −a | â  b  c |  + b | a  b̂  c |  − c | a  b  ĉ |
                        | p̂  q̂  r̂ |      | p̂  q̂  r̂ |      | p̂  q̂  r̂ |
                        | x̂  y  z |      | x  ŷ  z |      | x  y  ẑ |

Replacing the absent second row by the proxies â, b̂, ĉ, this equals

| a  b  c |
| a  b  c |    (expansion by the second row)
| x  y  z |

= 0.

The hat symbol means that the entry is absent.


Proof of the converse part of the theorem

From the lemma, |A| ≠ 0 =⇒ (1/|A|) AdjA is the inverse of A.

We summarize ALL the conditions for invertibility of a square matrix:


Theorem 14
Let A be an n × n matrix. The following statements about A are
equivalent:
A is invertible.
A is a product of elementary row matrices.
A is of full rank i.e. rank(A) = n.
The determinant |A| ≠ 0.



Proofs explained. D1: Skew-symmetry
   
Let A = [ a  b ]  and  A′ = [ c  d ].  Then |A| = ad − bc = −|A′|.
        [ c  d ]            [ a  b ]

For order greater than 2, say order 3, let

A = [ a  b  p ]        A′ = [ 1  2  r ]
    [ c  d  q ]  and        [ c  d  q ].
    [ 1  2  r ]             [ a  b  p ]

Expand |A′| by the unaffected row, which is the second row:

|A′| = −c | 2  r | + d | 1  r | − q | 1  2 |
          | b  p |     | a  p |     | a  b |

     =  c | b  p | − d | a  p | + q | a  b |  = −|A|.
          | 2  r |     | 1  r |     | 1  2 |



Proofs explained. D2: Row-wise linearity

Let the third row of a 3 × 3 matrix A be written as [1 2 r] + [k ℓ m]:

A = [ a     b     p   ]
    [ c     d     q   ]
    [ 1+k   2+ℓ   r+m ]

Expand |A| by the third row:

|A| = (1+k) | b  p | − (2+ℓ) | a  p | + (r+m) | a  b |
            | d  q |         | c  q |         | c  d |

    = ( 1·| b  p | − 2·| a  p | + r·| a  b | ) + ( k·| b  p | − ℓ·| a  p | + m·| a  b | )
          | d  q |     | c  q |     | c  d |         | d  q |     | c  q |     | c  d |

    = | a  b  p |   | a  b  p |
      | c  d  q | + | c  d  q |  = |A′| + |A″|.
      | 1  2  r |   | k  ℓ  m |

Similarly for multiplying a row by a scalar.



Proofs explained. D3: |In| = 1

Expand by the first row and assume by induction that |In−1| = 1. For example,

| 1  0  0 |
| 0  1  0 | = 1 · | 1  0 | + 0·(∗) + 0·(∗) = 1.
| 0  0  1 |       | 0  1 |


Example

Example 15
Find the matrices of minors, cofactors and the adjoint of the matrix

A = [  2   √3   √2 ]
    [ −1    0    1 ]
    [ √2   √3    2 ]

Verify that (AdjA)A = A(AdjA) = |A|I. Hence compute the inverse if it exists.

Minors:

M(A) = [ −√3         −(2+√2)    −√3        ]
       [ (2−√2)√3     2         (2−√2)√3   ]
       [ √3           2+√2       √3        ]

Cofactors:

C(A) = [ −√3          2+√2      −√3        ]
       [ (√2−2)√3     2         (√2−2)√3   ]
       [ √3          −(2+√2)     √3        ]
Example contd.
 √ √ √ √ 
− √3 ( 2 − 2) 3 3√
AdjA = C(A)T = (2 +√ 2) √ 2 √ −(2√
+ 2) .
− 3 ( 2 − 2) 3 3

 √ √  √ √ √ √ 
2 3 2 − √3 ( 2 − 2) 3 3√
A(AdjA) = −1 0 1  (2 +√ 2) √ 2 √ −(2√
+ 2)
√ √
2 3 2 − 3 ( 2 − 2) 3 3
 
0 0 0
= 0 0 0
0 0 0

Further,
expanding
√ √by the second row
2 3 2
√ √ √ √
|A| = √−1 √0 1 = −(−1)(2 3 − 6) − 1(2 3 − 6) = 0 =⇒ A(AdjA) =
2 3 2
O = |A|I and the inverse does not exist. [1.5]



Applications of determinant: rank

Notation:
For any natural number n we denote by n the (ordered) set
{1, 2, ..., n}.
Let A be any real (or complex) m × n matrix and let J ⊂ m and
K ⊂ n be ordered subsets of indices. We denote by AJK the
submatrix of A obtained by selecting the rows corresponding to
indices in J and the columns corresponding to the indices in K .

Thus, for example, the (j, k)-minor Mjk = |Aĵk̂| for ĵ = n \ {j}, k̂ = n \ {k}.

Example 16
Let A = [ajk] be a 5 × 10 matrix. Let J = {3, 5}, K = {2, 4, 6}.
Determine the submatrix AJK.

Answer:  AJK = [ a32  a34  a36 ]
               [ a52  a54  a56 ]



Applications of determinant: rank (contd.)

Definition 17 (Determinantal rank)

For any m × n matrix A, the determinantal rank is defined as

rankd(A) = max{ r : ∃ an r × r submatrix AJK with |AJK| ≠ 0 }.

Theorem 18 (Rank via determinants)


Let A be any matrix. Then rankd (A) = rank(A) = rankc (A).

Proof:
If r = rankd (A), there is an r × r submatrix AJK of nonzero determinant.
Then the rows of AJK are L.I.

This implies that the r rows of A corresponding to J are L.I. Hence

rank(A) ≥ r = rankd (A).



Applications of determinant: rank (contd.)

Conversely, let r′ = rank(A) and suppose the rows corresponding to a subset J′ ⊂ m are L.I.

Then we can choose r′ L.I. columns from AJ′n

(∵ rankc(AJ′n) = rank(AJ′n)).

The corresponding r′ × r′ submatrix AJ′K′, say, is of full rank and hence of nonzero determinant. Thus

r′ = rank(A) ≤ rankd(A).

In fact this theorem extends the theorem about invertibility via determinants.
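A brute-force sketch of Definition 17 (assuming numpy; exponential in the matrix size, so only for small examples):

import numpy as np
from itertools import combinations

def rank_d(A: np.ndarray, tol: float = 1e-10) -> int:
    # Largest r such that some r x r submatrix has nonzero determinant.
    m, n = A.shape
    for r in range(min(m, n), 0, -1):
        for J in combinations(range(m), r):
            for K in combinations(range(n), r):
                if abs(np.linalg.det(A[np.ix_(J, K)])) > tol:
                    return r
    return 0

A = np.array([[0., 2., 3.], [2., 0., 5.], [-3., 5., 0.]])
print(rank_d(A), np.linalg.matrix_rank(A))   # 2 2, as in Example 19(i)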



Example

Example 19
Find the rank by determinants. Verify by row reduction.

(i)  [  0  2  3 ]      (ii)  [  4    3 ]
     [  2  0  5 ]            [ −8   −6 ]
     [ −3  5  0 ]            [ 16   12 ]

(i)  |  0  2  3 |
     |  2  0  5 |  = 0 + (−2) × 15 + 3 × 10 = 0
     | −3  5  0 |

=⇒ rank(A) ≤ 2  (∵ ∄ 3 × 3 subdeterminant ≠ 0).

OTOH | 0  2 | = −4 ≠ 0 for the 2 × 2 submatrix A{1,2}{1,2}, ∴ rank(A) = 2.
     | 2  0 |

Verification by row reduction:
Example contd.

     
[  0  2  3 ]    [ −3  5  0 ]    [ −3  5     0 ]
[  2  0  5 ] ↦  [  2  0  5 ] ↦  [  0  10/3  5 ]
[ −3  5  0 ]    [  0  2  3 ]    [  0  2     3 ]

  [ −3  5  0 ]    [ −3  5  0 ]
↦ [  0  2  3 ] ↦  [  0  2  3 ]      (2 pivots)
  [  0  2  3 ]    [  0  0  0 ]

(ii) A = [  4    3 ]
         [ −8   −6 ]
         [ 16   12 ]

|A{1,2}{1,2}| = |A{2,3}{1,2}| = |A{1,3}{1,2}| = 0 =⇒ rank(A) < 2 =⇒ rank(A) = 1, ∵ A ≠ O.

Verification by row reduction:

[  4    3 ]    [ 4  3 ]
[ −8   −6 ] ↦  [ 0  0 ]
[ 16   12 ]    [ 0  0 ]
Applications of determinant: Cramer’s Rule
Let Ax = b be a system of n equations in the same number n of variables. Thus A is a square matrix in this case. We may (temporarily) call such a system a square system.

Theorem 20 (Cramer's Rule)

Let Ax = b be a square system with |A| ≠ 0. Then the solution vector x exists and is unique. Further, the components of x are expressible as

xk = |Ak̂[b]| / |A|,

where Ak̂[b] = [A1, ..., b, ..., An] is the n × n matrix obtained from A by replacing its k-th column by b.

Proof: Omitted.
Decode the column x = A−1 b = (AdjA)b / |A| entry-wise.
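Decoding the formula entry-wise gives a direct implementation. A sketch (assuming numpy; the function name is illustrative), tried on the system of Example 21 (next slide) with β = 0:

import numpy as np

def cramer(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    detA = np.linalg.det(A)
    if abs(detA) < 1e-12:
        raise ValueError("Cramer's rule needs |A| != 0")
    x = np.empty(len(b))
    for k in range(len(b)):
        Ak = A.copy()
        Ak[:, k] = b                         # replace the k-th column by b
        x[k] = np.linalg.det(Ak) / detA
    return x

A = np.array([[1., 2., 3.], [1., 3., 1.], [1., 6., 0.]])   # beta = 0, |A| = 5
b = np.array([20., 13., 0.])
print(np.allclose(cramer(A, b), np.linalg.solve(A, b)))    # True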
Example

Example 21
Find the values of β for which Cramer's rule is applicable. For the remaining value(s) of β, find the number of solutions.

x + 2y + 3z = 20
x + 3y + z = 13
x + 6y + βz = β.

The coefficient matrix has determinant β + 5, hence for β ≠ −5 Cramer's Rule is applicable.

For β = −5, the augmented matrix is

A+ = [ 1  2   3 |  20 ]
     [ 1  3   1 |  13 ]
     [ 1  6  −5 |  −5 ]



Example contd.

   
[ 1  2   3 |  20 ]    [ 1  2   3 |  20  ]    [ 1  2   3 |  20 ]
[ 1  3   1 |  13 ] ↦  [ 0  1  −2 |  −7  ] ↦  [ 0  1  −2 |  −7 ]
[ 1  6  −5 |  −5 ]    [ 0  4  −8 | −25  ]    [ 0  0   0 |   3 ]

The pivot in the last column means that rank(A) = 2 < 3 = rank(A+).
Hence the system is not solvable for β = −5.



Applications of determinants: Extracting bases from rows
and columns
Theorem 22
For an m × n matrix A of rank r, let AJK be any r × r submatrix of nonzero determinant. Then the rows of A indexed by J and the columns of A indexed by K form bases of the row and the column spaces of A respectively.

Example 23

Let A = [  0  2  3 ]
        [  2  0  5 ]
        [ −3  5  0 ]

be the matrix considered earlier. Find bases for the row and column spaces.

|A| = 0 =⇒ rank(A) < 3, and |A{1,2}{1,2}| = −4 ≠ 0. Hence the row and column spaces are 2-dimensional; the first two rows of A can be chosen as a basis for the row space, and the first two columns of A as a basis for the column space of A.
Expansion in a basis

Recall that V ⊆ Rn is a vector space, i.e. V = L(S) for some finite set S of vectors.
Assume S is l.i., so that S is also a basis of V.
Theorem 24
Let B = {v1 , v2 , ..., vk } be a basis of a vector space V ⊆ Rn and v ∈ V .
There exist unique scalars c1 , c2 , ..., ck s.t. v = c1 v1 + c2 v2 + · · · + ck vk .

Proof:
Existence is obvious since V = L(B). For uniqueness,
v = c1v1 + c2v2 + · · · + ckvk = c′1v1 + c′2v2 + · · · + c′kvk
=⇒ (c1 − c′1)v1 + (c2 − c′2)v2 + · · · + (ck − c′k)vk = 0
=⇒ cr − c′r = 0 for 1 ≤ r ≤ k (by linear independence) =⇒ uniqueness.

Remark: Unique representation as a linear combination is equivalent to no


redundancy in B.



Orthonormal basis

Though unique scalars exist to write any vector as a linear combination of basis vectors, it is hard to actually calculate the scalars.

Exercise: Let {p, q, r} be a basis of R3 and v ∈ R3. Show that v = ap + bq + cr, where

a = [v, q, r]/[p, q, r],   b = [p, v, r]/[p, q, r],   c = [p, q, v]/[p, q, r],

and [x, y, z] denotes the determinant of the matrix with columns x, y, z.

Thus computing the scalars involves computing 4 determinants of order 3, a tedious job.
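The exercise is exactly Cramer's rule applied to the system [p q r] x = v. A sketch (assuming numpy; the function name is illustrative):

import numpy as np

def coords_by_determinants(p, q, r, v):
    D = np.linalg.det(np.column_stack([p, q, r]))          # [p, q, r]
    a = np.linalg.det(np.column_stack([v, q, r])) / D
    b = np.linalg.det(np.column_stack([p, v, r])) / D
    c = np.linalg.det(np.column_stack([p, q, v])) / D
    return a, b, c

p = np.array([1., 0., 0.])
q = np.array([1., 1., 0.])
r = np.array([1., 1., 1.])
v = np.array([2., 3., 4.])
a, b, c = coords_by_determinants(p, q, r, v)
print(np.allclose(a * p + b * q + c * r, v))               # True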

This situation can be remedied if we take certain special types of bases


called orthonormal bases.



Inner product or dot product in Rn , orthogonality

Recall that for vectors v, w ∈ Rn, their dot product or standard inner product is defined to be the scalar

v · w = vT w = Σ_{j=1}^{n} vj wj.

It is also denoted ⟨v, w⟩. Evidently, ⟨v, v⟩ ≥ 0, with equality if and only if v = 0.

Definition 25 (Norm or length)

The norm or the length of a vector v is defined to be ‖v‖ = √⟨v, v⟩.

Definition 26 (Orthogonality)
We say v, w ∈ Rn are mutually orthogonal if ⟨v, w⟩ = 0.

We write v ⊥ w in such a case. How original!!



Cauchy-Schwarz, Triangle inequality, Pythagoras theorem

Theorem 27 (Cauchy-Schwarz, Triangle inequality, Pythagoras)

For v, w ∈ Rn, we have
1 Cauchy-Schwarz inequality:

|⟨v, w⟩| ≤ ‖v‖‖w‖.

2 Triangle inequality:

‖v + w‖ ≤ ‖v‖ + ‖w‖.

3 Pythagoras theorem: if v ⊥ w, then

‖v + w‖² = ‖v‖² + ‖w‖².

Short proof: Self-study.
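For self-study, a quick numerical illustration (assuming numpy; random vectors, not a proof) of all three statements:

import numpy as np

rng = np.random.default_rng(0)
v, w = rng.standard_normal(5), rng.standard_normal(5)

# Cauchy-Schwarz and the triangle inequality
assert abs(v @ w) <= np.linalg.norm(v) * np.linalg.norm(w) + 1e-12
assert np.linalg.norm(v + w) <= np.linalg.norm(v) + np.linalg.norm(w) + 1e-12

# Pythagoras: first force w_perp to be orthogonal to v
w_perp = w - (v @ w) / (v @ v) * v
assert np.isclose(np.linalg.norm(v + w_perp) ** 2,
                  np.linalg.norm(v) ** 2 + np.linalg.norm(w_perp) ** 2)
print("all three verified")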



Orthogonality and linear independence

Theorem 28
Let S = {v1 , v2 , ..., vk } be a set of non-zero vectors in Rn . If the vectors
are mutually orthogonal, then S is a l.i. set.

Proof:
Σ_j cj vj = 0 =⇒ ⟨Σ_j cj vj, vℓ⟩ = 0
=⇒ cℓ ‖vℓ‖² = 0   (1 ≤ ℓ ≤ k)
=⇒ cℓ = 0, since vℓ ≠ 0.



Orthogonality/Orthonormality

Definition 29 (Orthogonality/Orthonormality)
Let S = {v1, v2, ..., vk} be a set of vectors in Rn.
1 We say that S is an orthogonal set if vj ⊥ vℓ for all j ≠ ℓ in {1, 2, ..., k}. (One or more vectors in S may be zero.)
2 We say that S is an orthonormal set if vj ⊥ vℓ for all j ≠ ℓ in {1, 2, ..., k} and ‖vj‖ = 1 for each j (normalization).

Definition 30 (Orthonormal bases)

Let B = {v1, v2, ..., vk} be a basis of a vector space V ⊆ Rn. We say B is an orthonormal basis of V if B is an orthonormal set.



Expansion in an o.n. basis

Theorem 31
Let B = {u1, u2, ..., uk} be an orthonormal basis of a vector space V and x ∈ V. Then

x = ⟨x, u1⟩u1 + ⟨x, u2⟩u2 + · · · + ⟨x, uk⟩uk.

Proof: Direct check.
Start with x = c1u1 + · · · + ckuk.
Take the inner product successively with u1, u2, ..., uk.

This yields cj = ⟨x, uj⟩, which is much easier to find.
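In matrix form: if the orthonormal basis vectors are the columns of U, the coefficient vector is just UT x. A sketch (assuming numpy), using the basis of Example 32 on the next slide:

import numpy as np

def expand_in_onb(x: np.ndarray, U: np.ndarray) -> np.ndarray:
    # Columns of U are an o.n. basis; returns c with x = U c, c_j = <x, u_j>.
    return U.T @ x

U = np.array([[0.6, -0.8],
              [0.8,  0.6]])
x = np.array([1.0, 1.0])
c = expand_in_onb(x, U)
print(c)                        # [ 1.4 -0.2]
print(np.allclose(U @ c, x))    # True: x = 1.4 u1 - 0.2 u2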



Example

Example 32
Verify that u1 = [0.6, 0.8]T, u2 = [−0.8, 0.6]T form an orthonormal basis of R2.
Expand the vector x = [1, 1]T in this basis and verify equality.

Solution: (0.6)² + (0.8)² = 1 shows that ‖uj‖² = 1, j = 1, 2. Further, u1 · u2 = 0 is easy, AND x · u1 = 1.4, x · u2 = −0.2 is also easy. Verification:

(x · u1)u1 + (x · u2)u2 = 1.4 [0.6, 0.8]T − 0.2 [−0.8, 0.6]T
                        = [0.84 + 0.16, 1.12 − 0.12]T = [1, 1]T = x.



Example

Example 33
Verify that u1 = [2/3, −2/3, 1/3]T, u2 = [1/3, 2/3, 2/3]T, u3 = [2/3, 1/3, −2/3]T form an orthonormal basis of R3.
Expand the vector x = [1, 1, 1]T in this basis and verify equality.

Solution: 1/9 + 4/9 + 4/9 = 1 shows that ‖uj‖² = 1, j = 1, 2, 3. Further, uj · uk = 0 for j ≠ k is easy, AND x · u1 = 1/3, x · u2 = 5/3, x · u3 = 1/3 is also easy. Verification:

(x · u1)u1 + (x · u2)u2 + (x · u3)u3
= (1/3)[2/3, −2/3, 1/3]T + (5/3)[1/3, 2/3, 2/3]T + (1/3)[2/3, 1/3, −2/3]T
= [2/9 + 5/9 + 2/9,  −2/9 + 10/9 + 1/9,  1/9 + 10/9 − 2/9]T = [1, 1, 1]T = x.



Product of matrices, a stray topic

Theorem 34
If A, B are m × n and n × p respectively, giving linear maps A : Rn −→ Rm and B : Rp −→ Rn, then the product AB represents the composite A ◦ B : Rp −→ Rn −→ Rm.

Proof: We should check that for each z ∈ Rp, ABz = (A ◦ B)(z). For A = [ajk], B = [brs] and z = [z1 z2 ... zp]T, the j-th entry of ABz, obtained by matrix multiplication, is

(ABz)j = Σ_{ℓ=1}^{p} [ Σ_{k=1}^{n} ajk bkℓ ] zℓ,

the inner sum being the (jℓ) entry of AB.

OTOH (Bz)k = Σ_{ℓ=1}^{p} bkℓ zℓ, so that, as a composite of maps,

(A(Bz))j = Σ_{k=1}^{n} ajk (Bz)k = Σ_{k=1}^{n} ajk [ Σ_{ℓ=1}^{p} bkℓ zℓ ].
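A tiny numerical check (assuming numpy) that the matrix product realizes composition of the maps:

import numpy as np

rng = np.random.default_rng(1)
m, n, p = 3, 4, 2
A = rng.standard_normal((m, n))    # A : R^n -> R^m
B = rng.standard_normal((n, p))    # B : R^p -> R^n
z = rng.standard_normal(p)

print(np.allclose((A @ B) @ z, A @ (B @ z)))   # True: ABz = (A o B)(z)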
Gram-Schmidt process

Suppose B = {v1, v2, ..., vk} is a basis of V or, more generally, a spanning set.
We produce in a recursive manner an orthogonal set of vectors {w1, w2, ..., wk} in V, s.t. it also spans V.
By dropping zeroes, if any, from the new set, we get an orthogonal basis of V, which can be normalized.
Note: If B is a basis to start with, then the resulting o.g. set will be without zeroes and will have k = dim V elements.

Gram-Schmidt process:

w1 = v1.

w2 = v2 − ( ⟨v2, w1⟩ / ‖w1‖² ) w1,   if w1 ≠ 0;
w2 = v2,                              if w1 = 0.


Gram-Schmidt process contd.

· · · having constructed {w1, ..., wj−1}, put

wj = vj − Σ_{ℓ=1, wℓ≠0}^{j−1} ( ⟨vj, wℓ⟩ / ‖wℓ‖² ) wℓ,
⋮
wk = vk − Σ_{ℓ=1, wℓ≠0}^{k−1} ( ⟨vk, wℓ⟩ / ‖wℓ‖² ) wℓ.

Proof:
Prove inductively that ⟨wj, wℓ⟩ = 0 for ℓ = 1, 2, ..., j − 1. In other words, check at each stage, which is routine.

Remark: If {v1, ..., vk} is a l.i. set, then at no stage do we get wj = 0. If not, we get w_{j_t} = 0 for a few indices 1 ≤ j1 < · · · < jr ≤ k. We drop these to get an o.g. basis. Next we normalize to get an o.n. basis of V.
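A sketch of the process as stated above (assuming numpy; the function name is illustrative): zero vectors are skipped as projection directions and dropped before normalizing.

import numpy as np

def gram_schmidt(vs, tol=1e-12):
    ws = []
    for v in vs:
        w = v.astype(float)
        for u in ws:
            if np.linalg.norm(u) > tol:           # skip w_l = 0
                w = w - (v @ u) / (u @ u) * u     # subtract (<v_j, w_l>/||w_l||^2) w_l
        ws.append(w)
    # drop zeroes, then normalize to obtain an o.n. set
    return [w / np.linalg.norm(w) for w in ws if np.linalg.norm(w) > tol]

vs = [np.array([1., 1., 1.]), np.array([1., 1., 0.]), np.array([1., 0., 0.])]
for u in gram_schmidt(vs):
    print(np.round(u, 4))   # multiples of i+j+k, i+j-2k, i-j, as in Example 35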
Examples

Example 35
Apply the G-S process to {i + j + k, i + j, i} in R3.
Solution:
w1 = i + j + k.

w2 = (i + j) − ( ⟨i + j, i + j + k⟩ / ‖i + j + k‖² ) (i + j + k) = (i + j − 2k)/3.

w3 = i − ( ⟨i, i + j + k⟩ / 3 ) (i + j + k) − ( ⟨i, (i + j − 2k)/3⟩ / (2/3) ) (i + j − 2k)/3
   = (i − j)/2.

Thus {w1, w2, w3} = {i + j + k, i + j − 2k, i − j} is an o.g. set, on dropping the denominators for convenience. The corresponding o.n. set is

{u1, u2, u3} = { (i + j + k)/√3,  (i + j − 2k)/√6,  (i − j)/√2 }.


Examples

Example 36
Apply the G-S process to {i, i + j, i − j, i + j + k} in R3.
Solution:
w1 = i.

w2 = (i + j) − ( ⟨i + j, i⟩ / ‖i‖² ) i = j.

w3 = (i − j) − ( ⟨i − j, i⟩ / 1 ) i − ( ⟨i − j, j⟩ / 1 ) j = 0.

w4 = (i + j + k) − ( ⟨i + j + k, i⟩ / 1 ) i − ( ⟨i + j + k, j⟩ / 1 ) j = k.

Thus {w1, w2, w3, w4} = {i, j, 0, k} is an o.g. set. After dropping 0, the corresponding o.n. set is
{u1, u2, u3} = {i, j, k}, the standard o.n. basis of R3.



Examples
Example 37
Let {u1, u2, ..., uk} be an o.n. set in V and v ∈ V. Show that ⟨v, v⟩ ≥ Σ_ℓ ⟨v, uℓ⟩² (Bessel's inequality).
When does equality hold in Bessel's inequality?

Solution: Let v0 = Σ_ℓ ⟨v, uℓ⟩ uℓ. Then ‖v0‖² = Σ_ℓ ⟨v, uℓ⟩².

⟨v − v0, v0⟩ = ⟨v, Σ_ℓ ⟨v, uℓ⟩ uℓ⟩ − ‖v0‖² = Σ_ℓ ⟨v, uℓ⟩² − Σ_ℓ ⟨v, uℓ⟩² = 0
=⇒ ‖v‖² = ‖v − v0‖² + ‖v0‖² ≥ ‖v0‖².

Equality holds if and only if v − v0 = 0, in other words if and only if v ∈ L{u1, u2, ..., uk}.

In particular, if the given o.n. set is a basis of V, Bessel's inequality becomes an equality for each vector v ∈ V.
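A numerical companion to this example (assuming numpy; it uses the o.n. set of Example 38 on the next slide, where equality holds because v lies in the span):

import numpy as np

U = np.column_stack([
    np.array([ 1.,  1.,  0., 0.]) / np.sqrt(2),
    np.array([ 1., -1.,  2., 0.]) / np.sqrt(6),
    np.array([ 1., -1., -1., 3.]) / np.sqrt(12),
    np.array([-1.,  1.,  1., 1.]) / 2.0,
])                                   # columns: an o.n. set u1, ..., u4 in R4

v = np.array([-2., -1., 1., 2.])
coeffs = U.T @ v                     # <v, u_l>
print(np.sum(coeffs ** 2), v @ v)    # 10.0 10.0 -> Bessel holds with equality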
Examples
Example 38
Orthogonalize the following ordered set of row-vectors in R4:

{[1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0], [0, 1, 0, 1], [0, 0, 1, 1]}

Do you get an orthogonal basis? Does v = [−2, −1, 1, 2] belong to the linear span? Use Bessel's inequality.

Solution: After the Gram-Schmidt process we get

{ [1, 1, 0, 0],  (1/2)[1, −1, 2, 0],  (1/3)[1, −1, −1, 3],  (1/2)[−1, 1, 1, 1],  0,  0 }.

The 4 nonzero elements give an orthogonal basis, so [−2, −1, 1, 2] must be in the linear span. To verify Bessel's equality, we first normalize:

u1 = [1, 1, 0, 0]/√2,  u2 = [1, −1, 2, 0]/√6,  u3 = [1, −1, −1, 3]/(2√3),  u4 = [−1, 1, 1, 1]/2.

Now Σ_j (v · uj)² = 9/2 + 1/6 + 4/3 + 4 = 10 = ‖v‖². (Verified!)
