
Gram-Schmidt Orthogonalization Process and

Orthogonal Complements

Dr. Radha Mohan

St. Stephen’s College

March 20, 2022


THIS MATERIAL IS COPYRIGHT GOVERNED AND IS NOT TO BE SHARED WITH OTHERS OUTSIDE ST. STEPHEN'S COLLEGE. UNAUTHORISED USE OF THIS MATERIAL CAN LEAD TO PENALTIES AND LEGAL PROCEEDINGS, IF WARRANTED.
Orthonormal Basis

▶ This material is from Section 6.2 of Friedberg, Insel and Spence.
▶ Bases that are also orthonormal sets are the building blocks of inner product spaces.
▶ Definition: Let V be an inner product space over R or C (the field F). An ordered basis for V that is also an orthonormal set is an orthonormal basis for V. That is, it is a set of unit vectors (vectors of norm 1) in which every pair of distinct vectors is orthogonal: ⟨x, y⟩ = 0 for x, y in this basis with x ≠ y.
Example

▶ Example 1: The standard ordered basis β = {e₁, e₂, ..., eₙ} for Fⁿ is an orthonormal basis.
For i ≠ j,
⟨eᵢ, eⱼ⟩ = ⟨(0, ..., 1, ..., 0), (0, ..., 0, ..., 1, ..., 0)⟩ = 0·0 + ... + 1·0 + ... + 0·1 + ... + 0·0 = 0.
⟨eᵢ, eᵢ⟩ = ⟨(0, ..., 1, ..., 0), (0, ..., 1, ..., 0)⟩ = 0·0 + ... + 1·1 + ... + 0·0 = 1.
So ||eᵢ|| = 1 for all i.
Example

▶ Example 2: The set {(1/√5, 2/√5), (2/√5, −1/√5)} is an orthonormal basis for R².
If the vectors are e₁, e₂, then
⟨e₁, e₂⟩ = (1/√5)(2/√5) + (2/√5)(−1/√5) = 2/5 − 2/5 = 0,
⟨e₁, e₁⟩ = 1/5 + 4/5 = 1,
⟨e₂, e₂⟩ = 4/5 + 1/5 = 1.
So the set is orthonormal. To see it is a basis, it is enough to see that it is a linearly independent set, since it has size 2. Suppose
c₁(1/√5, 2/√5) + c₂(2/√5, −1/√5) = (0, 0).
Then c₁ + 2c₂ = 0 and 2c₁ − c₂ = 0. So c₂ = 2c₁ and c₁ + 4c₁ = 0, hence c₁ = c₂ = 0.
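As a quick numerical check (a sketch of mine using NumPy; the slides do this by hand), the pairwise inner products of Example 2 can be verified directly:

```python
import numpy as np

# The two vectors of Example 2.
s = np.sqrt(5.0)
e1 = np.array([1 / s, 2 / s])
e2 = np.array([2 / s, -1 / s])

print(np.dot(e1, e2))  # 0.0 -- orthogonal
print(np.dot(e1, e1))  # 1.0 -- unit norm
print(np.dot(e2, e2))  # 1.0 -- unit norm
```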
Theorem

▶ The next theorem illustrates the importance of an orthonormal set.
▶ Theorem 6.3: Let V be an inner product space and let S = {v₁, v₂, ..., vₙ} be a finite orthogonal subset of non-zero vectors. If y ∈ span(S), then

y = Σ_{i=1}^n (⟨y, vᵢ⟩ / ||vᵢ||²) vᵢ.

▶ Proof: Let y = Σ_{i=1}^n aᵢvᵢ. Then for each j with 1 ≤ j ≤ n,
⟨y, vⱼ⟩ = ⟨Σ_{i=1}^n aᵢvᵢ, vⱼ⟩ = Σ_{i=1}^n aᵢ⟨vᵢ, vⱼ⟩ = aⱼ||vⱼ||²,
since S is an orthogonal set. Hence aⱼ = ⟨y, vⱼ⟩/||vⱼ||², and the result follows.
Corollary

▶ Corollary 1: Let V be an inner product space and let S = {v₁, v₂, ..., vₙ} be a finite orthonormal set. Then if y ∈ span(S),

y = Σ_{i=1}^n ⟨y, vᵢ⟩ vᵢ.

▶ Proof: By Theorem 6.3,
y = Σ_{i=1}^n (⟨y, vᵢ⟩ / ||vᵢ||²) vᵢ.
Since the set is orthonormal, ||vᵢ||² = 1 for every i, so y = Σ_{i=1}^n ⟨y, vᵢ⟩ vᵢ.
An orthogonal subset of non-zero vectors is linearly independent

▶ Corollary 2: Let V be an inner product space and S = {v₁, ..., vₙ} be an orthogonal subset of non-zero vectors. Then S is a linearly independent set.
▶ Proof: Suppose y = Σ_{i=1}^n aᵢvᵢ = 0. By Theorem 6.3,
aᵢ = ⟨y, vᵢ⟩/||vᵢ||² = ⟨0, vᵢ⟩/||vᵢ||² = 0 for all i.
Hence S is linearly independent.
Example

▶ Example 3: Let S = {(1/√2)(1, 1, 0), (1/√3)(1, −1, 1), (1/√6)(−1, 1, 2)}.
We had seen that this set is orthonormal. Since it is an orthogonal set of non-zero vectors, by Corollary 2 it is linearly independent, and thus a basis for R³.
Let x = (2, 1, 3). Writing v₁, v₂, v₃ for the vectors of S,
⟨x, v₁⟩ = (2·1 + 1·1 + 3·0)/√2 = 3/√2,
⟨x, v₂⟩ = (2·1 + 1·(−1) + 3·1)/√3 = 4/√3,
⟨x, v₃⟩ = (2·(−1) + 1·1 + 3·2)/√6 = 5/√6.
By Corollary 1, (2, 1, 3) = (3/2)(1, 1, 0) + (4/3)(1, −1, 1) + (5/6)(−1, 1, 2).
▶ Remark: By Corollary 2, the space H of continuous complex-valued functions on [0, 2π] (see Example 7 below) contains an infinite linearly independent set, namely the orthonormal set {e^{int} | n ∈ Z}; hence H is not finite dimensional.
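The expansion in Example 3 is easy to confirm numerically. A small sketch of mine, assuming the standard dot product on R³:

```python
import numpy as np

x = np.array([2.0, 1.0, 3.0])
# The orthonormal basis of Example 3.
basis = [np.array([1, 1, 0]) / np.sqrt(2),
         np.array([1, -1, 1]) / np.sqrt(3),
         np.array([-1, 1, 2]) / np.sqrt(6)]

# Corollary 1: x equals the sum of <x, v_i> v_i over the basis.
recon = sum(np.dot(x, v) * v for v in basis)
print(np.allclose(recon, x))  # True
```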
Orthonormal Basis

▶ Remark: Every finite dimensional inner product space has an orthonormal basis. The next theorem shows how to construct, from a given set of linearly independent vectors, an orthogonal set that generates the same subspace.
Motivation

▶ Let w₁ and w₂ be linearly independent vectors in R².
▶ [Figure: w₁ and w₂ in R², with cw₁ the component of w₂ along w₁.]
Motivation

▶ We see that w₂ − cw₁ is perpendicular, in other words orthogonal, to w₁.
▶ Thus ⟨w₂ − cw₁, w₁⟩ = 0.
▶ So ⟨w₂, w₁⟩ − c||w₁||² = 0.
▶ So c = ⟨w₂, w₁⟩ / ||w₁||².
▶ So v₁ = w₁ and v₂ = w₂ − (⟨w₂, w₁⟩ / ||w₁||²) v₁.
Gram-Schmidt Orthogonalization

▶ Theorem 6.4: Let V be an inner product space and let S = {w₁, w₂, ..., wₙ} be a set of linearly independent vectors in V. Let v₁ = w₁ and

vₖ = wₖ − Σ_{i=1}^{k−1} (⟨wₖ, vᵢ⟩ / ||vᵢ||²) vᵢ

for 2 ≤ k ≤ n. Let S′ = {v₁, v₂, ..., vₙ}. Then S′ is an orthogonal set of vectors and span(S) = span(S′).
▶ Proof: Note that vₖ is a generalization of the v₂ we discussed for R². The proof proceeds by induction on k. For k = 1, 2, ..., n define Sₖ = {w₁, w₂, ..., wₖ} and Sₖ′ = {v₁, v₂, ..., vₖ}. For k = 1, S₁ = {w₁} = {v₁} = S₁′, and the claim is trivial.
Gram-Schmidt Orthogonalization

▶ Proof continued: Assume Sₖ₋₁′ has been constructed with the desired properties.
Suppose vₖ = 0. Then by the definition of vₖ, wₖ ∈ span(Sₖ₋₁′) = span(Sₖ₋₁), a contradiction since Sₖ is linearly independent. So vₖ ≠ 0.
Now for j < k,

⟨vₖ, vⱼ⟩ = ⟨wₖ − Σ_{i=1}^{k−1} (⟨wₖ, vᵢ⟩ / ||vᵢ||²) vᵢ, vⱼ⟩ = ⟨wₖ, vⱼ⟩ − (⟨wₖ, vⱼ⟩ / ||vⱼ||²) ||vⱼ||² = 0.

We have just shown that since Sₖ₋₁′ is an orthogonal set, so is Sₖ′.
Since span(Sₖ₋₁) = span(Sₖ₋₁′), we have Sₖ′ ⊆ span(Sₖ). But Sₖ′ is an orthogonal set of non-zero vectors, thus linearly independent, so the dimension of span(Sₖ′) is k.
Gram-Schmidt Orthogonalization

▶ Proof continued: Since Sₖ is a linearly independent set by assumption, the dimension of span(Sₖ) is k. Thus span(Sₖ′) = span(Sₖ).
▶ Remark: The procedure followed in the proof of Theorem 6.4 for obtaining an orthogonal set is called the Gram-Schmidt orthogonalization process.
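The process of Theorem 6.4 translates directly into code. Below is a minimal sketch of mine in NumPy (the function name gram_schmidt and the use of the standard dot product are my choices, not from the slides); it returns the orthogonal vectors v₁, ..., vₙ of the theorem and, optionally, their normalizations:

```python
import numpy as np

def gram_schmidt(vectors, normalize=False):
    """Orthogonalize linearly independent vectors as in Theorem 6.4.

    vectors: list of 1-D NumPy arrays w_1, ..., w_n.
    Returns an orthogonal list v_1, ..., v_n with the same span;
    if normalize is True, each v_k is also divided by its norm.
    """
    vs = []
    for w in vectors:
        # v_k = w_k - sum over i < k of (<w_k, v_i> / ||v_i||^2) v_i
        v = w.astype(float)
        for u in vs:
            v = v - (np.dot(w, u) / np.dot(u, u)) * u
        vs.append(v)
    if normalize:
        vs = [v / np.linalg.norm(v) for v in vs]
    return vs
```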
Example

▶ Example 4: Let V = R⁴ and let w₁ = (1, 0, 1, 0), w₂ = (1, 1, 1, 1) and w₃ = (0, 1, 2, 1). Check that {w₁, w₂, w₃} is a linearly independent set; we obtain an orthonormal set from it.
First we orthogonalize using the Gram-Schmidt process.
v₁ = w₁ = (1, 0, 1, 0)
v₂ = w₂ − (⟨w₂, v₁⟩ / ||v₁||²) v₁ = (1, 1, 1, 1) − ((1 + 0 + 1 + 0)/2)(1, 0, 1, 0) = (0, 1, 0, 1)
v₃ = w₃ − (⟨w₃, v₁⟩ / ||v₁||²) v₁ − (⟨w₃, v₂⟩ / ||v₂||²) v₂ = (0, 1, 2, 1) − (2/2)(1, 0, 1, 0) − (2/2)(0, 1, 0, 1) = (−1, 0, 1, 0)
Now we normalize: ||v₁|| = √2, ||v₂|| = √2, ||v₃|| = √2.
So we get the orthonormal set
{(1/√2)(1, 0, 1, 0), (1/√2)(0, 1, 0, 1), (1/√2)(−1, 0, 1, 0)}.
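Running the gram_schmidt sketch from the previous slide on the data of Example 4 reproduces this computation:

```python
import numpy as np

w1 = np.array([1, 0, 1, 0])
w2 = np.array([1, 1, 1, 1])
w3 = np.array([0, 1, 2, 1])

# Orthogonal stage: expect (1,0,1,0), (0,1,0,1), (-1,0,1,0).
for v in gram_schmidt([w1, w2, w3]):
    print(v)
# With normalize=True each vector is further divided by sqrt(2).
```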
Example

▶ Example 5: Let V = P(R) with ⟨f, g⟩ = ∫_{−1}^{1} f(t)g(t) dt. Let {1, x, x²} be a basis for P₂(R). Construct an orthonormal basis for P₂(R).
We first construct an orthogonal set.
v₁ = 1
v₂ = x − (⟨x, 1⟩ / ||v₁||²) v₁ = x − (∫_{−1}^{1} t dt / ||v₁||²) · 1 = x − 0 = x
Now ⟨x², v₁⟩ = ∫_{−1}^{1} t² dt = 2/3 and ⟨x², v₂⟩ = ⟨x², x⟩ = 0, so
v₃ = x² − ((2/3)/2) · 1 = x² − 1/3.
||v₁||² = ∫_{−1}^{1} 1 dt = 2
||v₂||² = ∫_{−1}^{1} t² dt = 2/3
||v₃||² = ∫_{−1}^{1} (t² − 1/3)² dt = ∫_{−1}^{1} (t⁴ − (2/3)t² + 1/9) dt = 2/5 − 4/9 + 2/9 = 8/45
Normalizing, {1/√2, √(3/2) x, (3√5/√8)(x² − 1/3) = √(5/8)(3x² − 1)} is an orthonormal basis of P₂(R).
If we orthonormalize a basis of P(R) in this way, we obtain (scalar multiples of) the Legendre polynomials.
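The same process runs symbolically with the integral inner product of Example 5. A sketch of mine using SymPy (the slides do this by hand) that recovers the orthogonal set 1, x, x² − 1/3:

```python
import sympy as sp

t = sp.symbols('t')

def ip(f, g):
    # <f, g> = integral of f*g over [-1, 1], as in Example 5.
    return sp.integrate(f * g, (t, -1, 1))

ws = [sp.Integer(1), t, t**2]  # the basis {1, x, x^2} of P_2(R)
vs = []
for w in ws:
    v = w - sum(ip(w, u) / ip(u, u) * u for u in vs)
    vs.append(sp.expand(v))

print(vs)  # [1, t, t**2 - 1/3]
```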
Existence of an orthonormal basis

▶ Theorem 6.5: Let V be a finite dimensional inner product space. Then V has an orthonormal basis {v₁, ..., vₙ}, and for every x ∈ V,
x = Σ_{i=1}^n ⟨x, vᵢ⟩ vᵢ.
▶ Proof: Let {w₁, ..., wₙ} be a basis for V. Orthogonalize this set via the Gram-Schmidt process, then normalize the set obtained to get an orthonormal set {v₁, ..., vₙ}. By Corollary 2 to Theorem 6.3 this is a set of non-zero orthogonal vectors and hence linearly independent; thus it is a basis for V. The expansion of x follows from Corollary 1 to Theorem 6.3.
Example

▶ Example 6: We use Theorem 6.5 to represent 1 + 2x + 3x² in terms of the orthonormal basis β = {1/√2, √(3/2) x, √(5/8)(3x² − 1)} for P₂(R).
⟨1 + 2x + 3x², v₁⟩ = ∫_{−1}^{1} (1 + 2t + 3t²)(1/√2) dt = (1/√2)(2 + 0 + 2) = 4/√2 = 2√2.
⟨1 + 2x + 3x², v₂⟩ = ∫_{−1}^{1} (1 + 2t + 3t²) √(3/2) t dt = √(3/2) ∫_{−1}^{1} (t + 2t² + 3t³) dt = √(3/2) · 4/3 = 2√6/3.
⟨1 + 2x + 3x², v₃⟩ = ∫_{−1}^{1} (1 + 2t + 3t²) √(5/8)(3t² − 1) dt = √(5/8) ∫_{−1}^{1} (9t⁴ + 6t³ − 2t − 1) dt = √(5/8)(18/5 − 2) = √(5/8) · 8/5 = 2√10/5.
Hence
1 + 2x + 3x² = 2√2 · (1/√2) + (2√6/3) · √(3/2) x + (2√10/5) · √(5/8)(3x² − 1).
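The three coefficients of Example 6 can be checked symbolically; a short SymPy sketch of mine:

```python
import sympy as sp

t = sp.symbols('t')
f = 1 + 2*t + 3*t**2
# The orthonormal basis of Example 5.
basis = [1 / sp.sqrt(2),
         sp.sqrt(sp.Rational(3, 2)) * t,
         sp.sqrt(sp.Rational(5, 8)) * (3*t**2 - 1)]

coeffs = [sp.integrate(f * v, (t, -1, 1)) for v in basis]
print([sp.simplify(c) for c in coeffs])
# [2*sqrt(2), 2*sqrt(6)/3, 2*sqrt(10)/5]
print(sp.expand(sum(c * v for c, v in zip(coeffs, basis))))
# 3*t**2 + 2*t + 1, recovering f
```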
[T]_β when β is an orthonormal basis

▶ Corollary: Let T be a linear operator on a finite dimensional inner product space V with orthonormal basis β = {v₁, ..., vₙ}. Then
T(vⱼ) = Σ_{i=1}^n ⟨T vⱼ, vᵢ⟩ vᵢ,
and if A = [T]_β then A_{ij} = ⟨T vⱼ, vᵢ⟩.
▶ Proof: By Corollary 1 to Theorem 6.3 with x = T vⱼ,
T vⱼ = Σ_{i=1}^n ⟨T vⱼ, vᵢ⟩ vᵢ.
The coordinates of T vⱼ relative to β form the j-th column of A; hence A_{ij} = ⟨T vⱼ, vᵢ⟩.
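A small numerical illustration of the corollary (the operator T and the basis choice here are my own, not from the slides): take T on R² given by a matrix M in the standard basis and let β be the orthonormal basis of Example 2; then the entries ⟨T vⱼ, vᵢ⟩ agree with the usual change-of-basis formula.

```python
import numpy as np

# An arbitrary operator T on R^2, written in the standard basis.
M = np.array([[1.0, 2.0],
              [0.0, 3.0]])
s = np.sqrt(5.0)
beta = [np.array([1/s, 2/s]), np.array([2/s, -1/s])]  # Example 2's basis

# A_ij = <T v_j, v_i> builds [T]_beta entry by entry.
n = len(beta)
A = np.array([[np.dot(M @ beta[j], beta[i]) for j in range(n)]
              for i in range(n)])

Q = np.column_stack(beta)  # change-of-coordinates matrix
print(np.allclose(A, np.linalg.inv(Q) @ M @ Q))  # True
```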
Fourier coefficients

▶ Definition: Let x ∈ V, an inner product space, and let β be an orthonormal subset of V (possibly infinite). The Fourier coefficients of x relative to β are the scalars ⟨x, y⟩ for y ∈ β.
▶ Fourier studied the scalars ∫_0^{2π} f(t) cos(nt) dt and ∫_0^{2π} f(t) sin(nt) dt, or more generally cₙ = ∫_0^{2π} f(t) e^{−int} dt, for a function f.
In the context of H and S = {fₙ = e^{int} | n ∈ Z}, where we saw previously that S is an orthonormal set, these are the Fourier coefficients ⟨f, fₙ⟩ of a continuous function f.
Example

▶ Example 7: Let H be the inner product space of continuous complex valued functions on [0, 2π], with ⟨f, g⟩ = (1/2π) ∫_0^{2π} f(t) \overline{g(t)} dt. The set S = {fₙ = e^{int} | n ∈ Z} is an orthonormal set.
For an orthonormal set {v₁, v₂, ..., vₙ} we have the result (Bessel's inequality, proved below) that
||x||² ≥ Σ_{k=1}^n |⟨x, vₖ⟩|².
Now {f₋ₙ, ..., f₋₁, f₀, f₁, ..., fₙ} is an orthonormal set, so by Bessel's inequality
||f||² ≥ Σ_{k=−n}^{−1} |⟨f, fₖ⟩|² + |⟨f, f₀⟩|² + Σ_{k=1}^n |⟨f, fₖ⟩|².
Let f(t) = t. For k ≠ 0, integrating by parts,
⟨f, fₖ⟩ = (1/2π) ∫_0^{2π} t e^{−ikt} dt = (1/2π) [t e^{−ikt}/(−ik)]_0^{2π} + (1/2πik) ∫_0^{2π} e^{−ikt} dt = −1/(ik),
so |⟨f, fₖ⟩|² = 1/k².
⟨f, f₀⟩ = (1/2π) ∫_0^{2π} t dt = (1/2π) · t²/2 |_0^{2π} = π.
So ||f||² ≥ Σ_{k=−n}^{−1} 1/k² + π² + Σ_{k=1}^n 1/k².
Example

▶ Example 7 continued:
||t||² = (1/2π) ∫_0^{2π} t² dt = (1/2π) · t³/3 |_0^{2π} = 4π²/3.
So 4π²/3 ≥ 2 Σ_{k=1}^n 1/k² + π².
So π²/6 ≥ Σ_{k=1}^n 1/k².
Letting n → ∞,
π²/6 ≥ Σ_{k=1}^∞ 1/k².
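Numerically the partial sums behave as the inequality predicts, staying below π²/6 (and in fact converging to it, although the slides only establish the bound); a quick sketch:

```python
import math

for n in (10, 100, 10000):
    partial = sum(1.0 / k**2 for k in range(1, n + 1))
    print(n, partial, math.pi**2 / 6 - partial)  # gap stays positive, shrinks
```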
Orthogonal Complement

▶ Definition: Let S be a non-empty subset of an inner product space V. Then
S⊥ = {x ∈ V | ⟨x, s⟩ = 0 ∀ s ∈ S}.
S⊥ (read "S perp") is the orthogonal complement of S.
▶ S⊥ is a subspace of V: let x, y ∈ S⊥, let α be a scalar, and let s ∈ S; then
⟨αx + y, s⟩ = α⟨x, s⟩ + ⟨y, s⟩ = 0 + 0 = 0.
Thus αx + y ∈ S⊥.
▶ Example 8: {0}⊥ = V and V⊥ = {0}.
▶ Example 9: V = R³, S = {e₃ = (0, 0, 1)}.
S⊥ = {(x, y, z) | ⟨(x, y, z), (0, 0, 1)⟩ = z = 0} = {(x, y, 0) | x, y ∈ R}, which is the xy-plane.
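For a subspace of Rⁿ spanned by finitely many vectors, S⊥ can be computed as the null space of the matrix whose rows are those vectors, since ⟨x, s⟩ = 0 for every spanning vector s exactly when that matrix kills x. A sketch of mine using scipy.linalg.null_space:

```python
import numpy as np
from scipy.linalg import null_space

# S = {e_3} in R^3, as in Example 9.
S = np.array([[0.0, 0.0, 1.0]])

# x is in S-perp iff S @ x = 0, so S-perp is the null space of S.
perp = null_space(S)  # columns give an orthonormal basis of S-perp
print(perp)           # two columns spanning the xy-plane
```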
Distance from a point to a plane

▶ Problem: Consider the problem in R³ of finding the distance from a point P to a plane W through the origin.
If we let y be the vector OP, we may restate the problem as follows: determine the vector u in W "closest" to y. The desired distance is then ||y − u||.
Note that y − u is orthogonal to every vector in W, so y − u ∈ W⊥.
Express a vector in V as u + z where u ∈ W and z ∈ W⊥

▶ Theorem 6.6: Let W be a finite dimensional subspace of an inner product space V and let y ∈ V. Then there exist unique vectors u ∈ W and z ∈ W⊥ such that y = u + z. Furthermore, if β = {v₁, v₂, ..., vₙ} is an orthonormal basis for W, then
u = Σ_{i=1}^n ⟨y, vᵢ⟩ vᵢ.
▶ Proof: Define u = Σ_{i=1}^n ⟨y, vᵢ⟩ vᵢ. Clearly u ∈ W. Define z = y − u; then y = u + z.
We want to show that z ∈ W⊥. It is enough to show that z is orthogonal to each vⱼ:
⟨z, vⱼ⟩ = ⟨y − u, vⱼ⟩ = ⟨y, vⱼ⟩ − Σ_{i=1}^n ⟨y, vᵢ⟩⟨vᵢ, vⱼ⟩ = ⟨y, vⱼ⟩ − ⟨y, vⱼ⟩ = 0,
since β is an orthonormal set.
Express a vector in V as u + z where u ∈ W and z ∈ W⊥

▶ Proof continued: To prove uniqueness, first note that if x ∈ W ∩ W⊥ then ⟨x, w⟩ = 0 for all w ∈ W; in particular ⟨x, x⟩ = 0, so x = 0. Hence W ∩ W⊥ = {0}.
Now if y = u + z = u₁ + z₁ with u, u₁ ∈ W and z, z₁ ∈ W⊥, then u − u₁ = z₁ − z ∈ W ∩ W⊥ = {0}. Hence u = u₁ and z = z₁.
Vector u is closest to y

▶ Corollary: With the notation of Theorem 6.6, the vector u is the vector in W closest to y: if a ∈ W then ||y − a|| ≥ ||y − u||, and this inequality is an equality ⇐⇒ a = u.
▶ Proof: As in Theorem 6.6 we have y = u + z where u ∈ W, z ∈ W⊥. Let a ∈ W. Then u − a ∈ W, so u − a is orthogonal to z. Using the fact that ||x + z||² = ||x||² + ||z||² when x and z are orthogonal,
||y − a||² = ||u + z − a||² = ||(u − a) + z||² = ||u − a||² + ||z||² ≥ ||z||² = ||y − u||².
If ||y − a|| = ||y − u||, then ||u − a|| = 0, so a = u; conversely, equality clearly holds when a = u.
▶ u is called the orthogonal projection of y on W.
Example

▶ Example 10: Let V = P₃(R) with ⟨f, g⟩ = ∫_{−1}^{1} f(t)g(t) dt for f, g ∈ V. Compute the orthogonal projection of f(x) = x³ on W = P₂(R).
An orthonormal basis for W is β = {1/√2, √(3/2) x, √(5/8)(3x² − 1)}.
The orthogonal projection is u(x) = Σ_{i=1}^3 ⟨x³, vᵢ⟩ vᵢ.
⟨x³, v₁⟩ = ∫_{−1}^{1} t³ (1/√2) dt = 0.
⟨x³, v₂⟩ = ∫_{−1}^{1} t³ √(3/2) t dt = √(3/2) · t⁵/5 |_{−1}^{1} = √6/5.
⟨x³, v₃⟩ = ∫_{−1}^{1} t³ √(5/8)(3t² − 1) dt = 0.
Hence u(x) = (√6/5) √(3/2) x = (3/5)x.
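Example 10's projection can be verified symbolically; a SymPy sketch of mine using the same orthonormal basis:

```python
import sympy as sp

t = sp.symbols('t')
basis = [1 / sp.sqrt(2),
         sp.sqrt(sp.Rational(3, 2)) * t,
         sp.sqrt(sp.Rational(5, 8)) * (3*t**2 - 1)]

# u = sum of <t^3, v_i> v_i: the orthogonal projection onto P_2(R).
u = sum(sp.integrate(t**3 * v, (t, -1, 1)) * v for v in basis)
print(sp.simplify(u))  # 3*t/5
```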
Bessel's Inequality

▶ Bessel's Inequality: Let V be an inner product space and let S = {v₁, v₂, ..., vₙ} be an orthonormal subset of V. For every x ∈ V we have
||x||² ≥ Σ_{i=1}^n |⟨x, vᵢ⟩|²,
and the inequality is an equality ⇐⇒ x ∈ span(S).
▶ Proof: Let W = span(S) and let x ∈ V. By Theorem 6.6, x = u + z where u ∈ W, z ∈ W⊥, and u = Σ_{i=1}^n ⟨x, vᵢ⟩ vᵢ. Then
||x||² = ||u + z||² = ||u||² + ||z||² ≥ ||u||²
= ⟨Σ_{i=1}^n ⟨x, vᵢ⟩ vᵢ, Σ_{j=1}^n ⟨x, vⱼ⟩ vⱼ⟩
= Σ_{i=1}^n Σ_{j=1}^n ⟨x, vᵢ⟩ \overline{⟨x, vⱼ⟩} ⟨vᵢ, vⱼ⟩
= Σ_{i=1}^n |⟨x, vᵢ⟩|²,
since ⟨vᵢ, vⱼ⟩ = 0 for i ≠ j and ⟨vᵢ, vᵢ⟩ = 1.
Bessel's Inequality

▶ Proof continued: The inequality is an equality ⇐⇒ ||z||² = 0, i.e. z = 0, ⇐⇒ x = u, i.e. x ∈ W = span(S).
Orthogonal complements in a finite dimensional inner product space

▶ Theorem 6.7: Suppose that S = {v₁, v₂, ..., vₖ} is an orthonormal subset of an n-dimensional inner product space V.
a) Then S can be extended to an orthonormal basis {v₁, v₂, ..., vₖ, vₖ₊₁, ..., vₙ} for V.
b) If W = span(S), then S₁ = {vₖ₊₁, ..., vₙ} is an orthonormal basis for W⊥.
c) If W is any subspace of V, then dim(V) = dim(W) + dim(W⊥).
▶ Proof: a) S can be extended to a basis S′ of V. Orthogonalize S′ by the Gram-Schmidt process (which leaves v₁, ..., vₖ unchanged, as they are already orthogonal) to get a linearly independent spanning set for V, and normalize it to get an orthonormal basis {v₁, ..., vₙ} of V.
Orthogonal complements in a finite dimensional inner product space

▶ Proof continued: b) S₁ is a subset of a basis, so it is linearly independent; we need only show that S₁ spans W⊥. First, S₁ ⊆ W⊥.
For x ∈ V, x = Σ_{i=1}^n ⟨x, vᵢ⟩ vᵢ. If x ∈ W⊥ then ⟨x, vᵢ⟩ = 0 for i = 1, ..., k, so x = Σ_{i=k+1}^n ⟨x, vᵢ⟩ vᵢ.
Thus W⊥ ⊆ span(S₁).
▶ c) It is then clear that dim(V) = n = k + (n − k) = dim(W) + dim(W⊥).
Example

▶ Example 11: Let W = span({e₁, e₂}) in R³.
x = (a, b, c) ∈ W⊥ ⇐⇒ ⟨x, e₁⟩ = a = 0 and ⟨x, e₂⟩ = b = 0.
So x = (0, 0, c) and W⊥ = span({e₃}).
dim(W⊥) = 1, dim(W) = 2, so dim(R³) = 2 + 1 = 3.
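The dimension count of Theorem 6.7(c) is easy to check for Example 11; a sketch of mine reusing scipy.linalg.null_space:

```python
import numpy as np
from scipy.linalg import null_space

W_rows = np.array([[1.0, 0.0, 0.0],   # e_1
                   [0.0, 1.0, 0.0]])  # e_2

perp = null_space(W_rows)                  # basis of W-perp as columns
dim_W = np.linalg.matrix_rank(W_rows)
dim_perp = perp.shape[1]
print(dim_W, dim_perp, dim_W + dim_perp)   # 2 1 3 = dim(R^3)
```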
