2 - Vector Spaces
Vectors and Vector Spaces

For the most part, the vector spaces we consider conform
to the undergraduate idea of vectors in 2-D and 3-D:
Cartesian n-space with the conventional vector and
scalar operations.

[Figure: geometric interpretation of two vectors v and w]

A vector is viewed as an n-tuple of numbers


Inner (dot) product:   ⟨v, w⟩ = vᵀw = wᵀv = ∑ᵢ₌₁ⁿ vᵢwᵢ = ‖v‖ ‖w‖ cos θ

v & w are orthogonal iff v, w = 0


"magnitude" or "norm"
1
v = v, v 2

Outer product:   v ∘ w = vwᵀ

Cross (vector) product:   ‖v × w‖ = ‖v‖ ‖w‖ sin θ

Only defined in 3-D. (Why??)
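These operations map directly onto NumPy routines. A minimal sketch
(the vectors v and w here are arbitrary illustrative choices, not
taken from the notes):

# Inner product, norm, outer product, and cross product in NumPy.
import numpy as np

v = np.array([1.0, 2.0, 2.0])
w = np.array([2.0, 0.0, 1.0])

inner  = v @ w                      # <v, w> = sum of v_i * w_i
norm_v = np.linalg.norm(v)          # ||v|| = <v, v>**0.5
outer  = np.outer(v, w)             # v w^T, a 3 x 3 matrix
cross  = np.cross(v, w)             # defined only for 3-D vectors

# The angle between v and w, recovered from the inner product:
cos_theta = inner / (norm_v * np.linalg.norm(w))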


However, we need rigorously defined concepts of
vectors and spaces in order to generalize into different
dimensions (even infinite dimensions):

First, a FIELD:

A field F is a set of two or more elements for which the
operations of addition, multiplication, and division are defined,
and for which the following hold:

1. If a ∈ F and b ∈ F, then (a + b) and (b + a) ∈ F.
2. (ab) = (ba) ∈ F.
3. There exists a unique element 0 ∈ F such that a + 0 = a
   and 0(a) = 0.
4. If b ≠ 0, then (a / b) ∈ F.
5. There exists a unique identity element 1 ∈ F such that
   1(a) = (a)1 = (a / 1) = a.
6. For every a ∈ F there is a unique negative element −a ∈ F
   such that a + (−a) = 0.
7. The associative, commutative, and distributive laws hold.

Examples: Are these fields?

1. The set {0, 1}?  No: 1 + 1 = 2, and 2 ∉ F.
2. The set of all matrices of the form
       [ x  −y ]
       [ y   x ]    (x, y real)?
   Yes: these special matrices add, multiply, and divide exactly
   like the complex numbers x + iy, so they do form a field.
3. The set of all polynomials in s?  No (division fails).
4. The set of real numbers?  Yes.
5. The set of integers?  No (division fails).
Now for the definition of a Linear Vector Space (x, y, and z are
vectors in the space):

A linear vector space X is a set of elements called vectors,
defined over a scalar field F, which satisfies the following rules:

1. If x + y = v, then v ∈ X.
2. x + y = y + x.
3. (x + y) + z = x + (y + z).
4. There exists a vector 0 ∈ X such that x + 0 = 0 + x = x.
5. For every x ∈ X, there is a unique vector y ∈ X such that
   x + y = 0 (y = −x).
6. For every x ∈ X and scalar a ∈ F, the product ax gives
   another vector y ∈ X. (a may be the unit scalar, so that
   ax = 1·x = x·1 = x.)
7. For any scalars a, b ∈ F and any vector x ∈ X,
   a(bx) = (ab)x.
8. Scalar multiplication is distributive; i.e.,
   (a + b)x = ax + bx
   a(x + y) = ax + ay
EXAMPLES: Are these vector spaces?

1. The set of all n-tuples of scalars from F, over F?  Yes
   (for example, n-dimensional vectors of reals over R, or
   n-dimensional complex vectors over C).
2. The set of all complex numbers, over the reals?  Yes.
3. The set of all reals, over the complex numbers?  No:
   a complex scalar times a real number is generally complex,
   so the set is not closed under scalar multiplication.
4. The set of all m × n matrices, over R? Over C?  Yes, Yes.
5. The set of all piecewise continuous functions of time,
   over R?  Yes.
6. The set of all polynomials in s of order less than n, with
   real coefficients, over R?  Yes. (Not a field, but it is a
   vector space; there is no division requirement.)
Linear Dependence and Independence:

Consider a set of vectors from linear vector space X: {x₁, x₂, …, xₙ}.
If there exists a set of n scalars aᵢ, not all of which are zero,
such that the linear combination a₁x₁ + a₂x₂ + … + aₙxₙ = 0, then
the set {xᵢ} is linearly dependent.

A set of vectors which is not linearly dependent is linearly
independent. If the linear combination a₁x₁ + a₂x₂ + … + aₙxₙ = 0
implies that each aᵢ = 0, then the set {xᵢ} is a linearly independent
set.

In other notation, the set {xᵢ} is linearly independent if the equation
Xa = 0 implies a = 0, where

    X = [x₁ x₂ … xₙ]   and   a = [a₁ a₂ … aₙ]ᵀ
EXAMPLE: Consider the three vectors from R³:

    x₁ = [ 2]     x₂ = [1]     x₃ = [0]
         [−1]          [3]          [7]
         [ 0]          [4]          [8]

This is a linearly dependent set, because we can choose the
set of a's as a₁ = −1, a₂ = 2, a₃ = −1 (not all zero) such that

    ∑ᵢ₌₁³ aᵢxᵢ = −x₁ + 2x₂ − x₃ = 0

When we have a set of n n-tuples, a convenient test is
to form a determinant from them. If it is zero, the set
is linearly dependent:

    | 2  1  0|
    |−1  3  7| = 0
    | 0  4  8|
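For a numerical check, matrix rank is more robust in floating point
than testing the determinant against exactly zero. A minimal sketch
of both for the example above, assuming NumPy:

# Columns of X are the vectors x1, x2, x3 from the example.
import numpy as np

X = np.array([[ 2, 1, 0],
              [-1, 3, 7],
              [ 0, 4, 8]], dtype=float)

print(np.linalg.det(X))           # 0.0 (up to round-off): dependent set
print(np.linalg.matrix_rank(X))   # 2: only two independent columns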
EXAMPLE: The set R(s) of all rational polynomial functions in
s is a vector space over the field of reals, and over the field
of rational transfer functions. Consider two vectors of the
space of ordered pairs of such functions:

    x₁ = [ 1/(s+1) ]        x₂ = [ (s+2)/((s+1)(s+3)) ]
         [ 1/(s+2) ]             [ 1/(s+3)            ]

If our field is the set of all real numbers, is this a linearly
independent set? YES. One can verify that if a₁ and a₂
are real numbers, then setting

    [0]  =  a₁x₁ + a₂x₂  =  a₁ [ 1/(s+1) ]  +  a₂ [ (s+2)/((s+1)(s+3)) ]
    [0]                       [ 1/(s+2) ]        [ 1/(s+3)            ]

will imply that a₁ = a₂ = 0.

If however the field is taken as the set of all rational polynomial
functions in s, then we have a different result. With some
tinkering, we can choose

    a₁ = 1   and   a₂ = −(s+3)/(s+2)

giving

    ∑ aᵢxᵢ = [ 1/(s+1) ]  +  [ −1/(s+1) ]  =  [0]
             [ 1/(s+2) ]     [ −1/(s+2) ]     [0]

so now the vectors are linearly DEPENDENT!

In
Inthe
thefirst
firstcase,
case,there
theremight
mightin insome
somecircumstances
circumstancesexist
existaa
value
value for
for sssuch
suchthat
thatthe
thelinear
linearcombination
combination equals
equalszero.
zero.
This
This does
doesnotnot result
result in
in linear
linear dependence
dependence because
because thethe
combination
combinationisisnot not"identically"
"identically"equal equalto
tozero.
zero. This
Thisisistrue
trueifif
the
thevector's
vector'selements
elementsare arefunctions
functionsofoftime,
time,too.
too. The
The
combination
combinationmightmightbebezero
zerofor foraaparticular
particularinstant
instantin
intime,
time,
even
evenififthe
thevectors
vectorsare
arelinearly
linearlyindependent.
independent.
From our intuitive sense of vectors:

• Two linearly independent vectors in 2-D are non-collinear.
• Three linearly independent vectors in 3-D are non-coplanar.

Two LEMMAS:

If we have a set of linearly dependent vectors, and we add
another vector to that set, the enlarged set is still
linearly dependent.

If we have a set of vectors that is linearly dependent, then
one of the vectors can be written as a linear combination
of the others in the set.

This is NOT the definition of linear dependence!

These dependencies manifest themselves in the
degeneracy, or rank deficiency, of the matrix of vectors.
We can find relationships between the degeneracies of
matrices with the following formulae:

If matrices A, B, and C have ranks r(A), r(B), and r(C),
then the relationship AB = C and Sylvester's inequality
imply (n = no. of columns of A):

    r(A) + r(B) − n ≤ r(C) ≤ min(r(A), r(B))      (Sylvester's inequality)

or, if degeneracies q(A), q(B), and q(C) are used:

    q(C) ≤ q(A) + q(B)
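A quick numeric sanity check of Sylvester's inequality on a random
product, assuming NumPy (the matrix sizes are arbitrary choices):

# Verify r(A) + r(B) - n <= r(C) <= min(r(A), r(B)) for C = AB.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))    # n = 3 columns of A
B = rng.standard_normal((3, 5))
C = A @ B

rA = np.linalg.matrix_rank(A)
rB = np.linalg.matrix_rank(B)
rC = np.linalg.matrix_rank(C)
n  = A.shape[1]

assert rA + rB - n <= rC <= min(rA, rB)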
Bases and Dimensionality

DEFINITION: The DIMENSION of a linear vector space is the
largest possible number of linearly independent vectors that
can be taken from that space.

DEFINITION: A set of linearly independent vectors in vector
space X is a basis of X iff every vector in X can be written
as a unique linear combination of these vectors.

In an n-dimensional vector space, there have to be exactly
n vectors in any basis set, but there are an infinite number
of such sets that qualify as a basis.
[Heuristic view of a basis: BASIS ↔ COORDINATE SYSTEM]

THEOREM: In an n-dimensional linear vector space, any
set of n linearly independent vectors qualifies as a basis
set.

PROOF: This statement implies that any vector x can be
described by any n linearly independent vectors, say
{e₁, e₂, …, eₙ}. That is, for any vector x:

    x = linear combination of the eᵢ's = ∑ᵢ₌₁ⁿ αᵢeᵢ
Because the space is n-dimensional, the set of n + 1 vectors
{x, e₁, e₂, …, eₙ} must be linearly dependent, so that

    α₀x + α₁e₁ + … + αₙeₙ = 0

for a set of α's not all of which are zero.

If α₀ = 0, then the independence of the eᵢ's would force
α₁ = … = αₙ = 0 (all α's zero), so we can safely assume α₀ ≠ 0.
We are therefore allowed to write:

    x = −(α₁/α₀)e₁ − … − (αₙ/α₀)eₙ
      = β₁e₁ + … + βₙeₙ
      = ∑ᵢ₌₁ⁿ βᵢeᵢ

So we have written vector x as a linear combination of the eᵢ's.
Now we must show that this expression is unique.
Suppose there were another set of constants {β̄ᵢ} such that

    x = ∑ᵢ₌₁ⁿ β̄ᵢeᵢ

We already have x = ∑ᵢ₌₁ⁿ βᵢeᵢ, so

    x − x = ∑ᵢ₌₁ⁿ β̄ᵢeᵢ − ∑ᵢ₌₁ⁿ βᵢeᵢ = ∑ᵢ₌₁ⁿ (β̄ᵢ − βᵢ)eᵢ = 0

But the set {eᵢ} is known to be a basis; therefore, for this
equation to hold, we must have

    β̄ᵢ − βᵢ = 0

for all i (because bases are linearly independent sets).
Therefore β̄ᵢ = βᵢ, and this proves uniqueness of the
representation

    x = ∑ᵢ₌₁ⁿ βᵢeᵢ                                        ∎
The uniqueness result implies that a vector from any space
can be identified by its n coefficients, once the basis
vectors are chosen. In fancy terms, an n-dimensional
space is isomorphic to the Euclidean space of n-tuples
(isomorphic: related by a one-to-one and onto mapping).

Once the basis {eᵢ} is chosen, the set of numbers
β = [β₁ β₂ … βₙ]ᵀ is called the representation of x in {eᵢ}.
DEFINITION: A set of vectors X is spanned by a set of
vectors {xᵢ} if every x ∈ X can be written as a linear
combination of the xᵢ's. Equivalently, the xᵢ's span X.

Notation: X = sp{xᵢ}

Note: The set {xᵢ} is not necessarily a basis. It may be
linearly dependent. For example, five non-collinear vectors
in 2-D suffice to span the 2-D Euclidean space, but as a
set they are not a basis.
EXAMPLES:

1. Let X be the space of all vectors written x = [x₁ x₂ … xₙ]ᵀ
   such that x₁ = x₂ = … = xₙ.

   One legal basis for this space is the single vector

       v = [1 1 … 1]ᵀ

   This is therefore a one-dimensional space, even though its
   vectors contain n "components".
2. Consider the space of all polynomials in s of degree less
   than 4 with real coefficients. One basis for this space is
   the set

       e₁ = 1,  e₂ = s,  e₃ = s²,  e₄ = s³

   In this basis, the vector

       x = 3s³ + 2s² − 2s + 10

   can be written as

       x = 3e₄ + 2e₃ + (−2)e₂ + 10e₁

   So the vector x has the representation x = [10 −2 2 3]ᵀ
   in the {eᵢ} basis.
We can write another basis for this space too:

    {fᵢ} = {s³ − s²,  s² − s,  s − 1,  1}
             (f₁)      (f₂)    (f₃)   (f₄)

In this different basis, the same vector has a different
representation:

    x = 3(s³ − s²) + 5(s² − s) + 3(s − 1) + 13
      = 3f₁ + 5f₂ + 3f₃ + 13f₄

or

    x = [3 5 3 13]ᵀ
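The new representation can be found mechanically by solving a linear
system over the monomial coefficients. A minimal sketch, assuming
NumPy (the coefficient ordering [1, s, s², s³] is our convention):

# Columns of F hold the monomial coefficients of f1..f4;
# solving F @ beta = x gives the representation of x in {f_i}.
import numpy as np

F = np.array([[ 0,  0, -1, 1],    # f1 = s^3 - s^2
              [ 0, -1,  1, 0],    # f2 = s^2 - s
              [-1,  1,  0, 0],    # f3 = s - 1
              [ 1,  0,  0, 0]],   # f4 = 1
             dtype=float)

x = np.array([10, -2, 2, 3], dtype=float)   # x in the monomial basis

beta = np.linalg.solve(F, x)
print(beta)   # [ 3.  5.  3. 13.] -- the representation in {f_i}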
Changing the Basis of a Vector:

First we take a vector x and write its expansion in two
different bases, {vᵢ} and {v̂ᵢ} (the xⱼ and x̂ᵢ are scalars):

    x = ∑ⱼ₌₁ⁿ xⱼvⱼ = ∑ᵢ₌₁ⁿ x̂ᵢv̂ᵢ

But since the basis vectors vⱼ are in the same space, they
themselves can be expanded in the {v̂ᵢ} basis:

    vⱼ = ∑ᵢ₌₁ⁿ bᵢⱼv̂ᵢ

Substituting this in and simplifying:

    ∑ⱼ₌₁ⁿ xⱼ (∑ᵢ₌₁ⁿ bᵢⱼv̂ᵢ) = ∑ᵢ₌₁ⁿ x̂ᵢv̂ᵢ

    ∑ᵢ₌₁ⁿ (∑ⱼ₌₁ⁿ bᵢⱼxⱼ − x̂ᵢ) v̂ᵢ = 0

Because {v̂ᵢ} is a basis, for this to be true we must have

    x̂ᵢ = ∑ⱼ₌₁ⁿ bᵢⱼxⱼ

(otherwise, one of the v̂ᵢ could be written in terms of the others).

This is how we get the components of a vector in a new basis
from the components in the old basis. The b's above come
from our knowledge of how the two bases are related.
Notice that this relationship can also be written with a matrix;
that is,

    [x]_v̂ = [B][x]_v

In words, the representation of x in the new basis equals
the representation of x in the old basis, multiplied by a matrix
whose elements come from the expansion of the old basis in
terms of the new basis.
EXAMPLE: Consider the space R² and the two bases:

    {e₁, e₂} = { [1] , [0] }     and     {ē₁, ē₂} = { [1] , [−1] }
               { [0]   [1] }                        { [2]   [ 0] }

Now let a vector x be represented by x = [2 2]ᵀ in the {eᵢ} basis.
Find the representation for x in the {ēᵢ} basis.

First we write down the relationship between the two bases:

    e₁ = 0·ē₁ − 1·ē₂ = [ē₁ ē₂] [ 0]          (column [b₁₁ b₂₁]ᵀ)
                               [−1]

    e₂ = ½·ē₁ + ½·ē₂ = [ē₁ ē₂] [½]           (column [b₁₂ b₂₂]ᵀ)
                               [½]

So

    B = [ 0  ½]
        [−1  ½]

giving

    x̄ (= x in the {ēᵢ} basis) = Bx = [ 0  ½][2] = [ 1]
                                     [−1  ½][2]   [−1]

Notice that to do the reverse (go from {ēᵢ} back to {eᵢ}), we would
use B⁻¹, which always exists. (Why? Because the columns of B
expand one basis in another, and basis vectors are linearly
independent, B is always nonsingular.) That is,

    x = B⁻¹x̄ = [1 −1][ 1] = [2]
               [2  0][−1]   [2]

[Figure: x drawn in the plane together with both basis pairs
{e₁, e₂} and {ē₁, ē₂}]                                        ∎
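The same computation in NumPy, as a minimal sketch (columns of E and
Ebar hold the basis vectors in standard coordinates; B is obtained by
expanding the old basis in the new one):

# Change of basis for the example above.
import numpy as np

E    = np.array([[1.0, 0.0],
                 [0.0, 1.0]])        # old basis {e1, e2}
Ebar = np.array([[1.0, -1.0],
                 [2.0,  0.0]])       # new basis {e1bar, e2bar}

# Columns of B expand each old basis vector in the new basis:
B = np.linalg.solve(Ebar, E)         # here B = [[0, 0.5], [-1, 0.5]]

x = np.array([2.0, 2.0])             # representation in {e_i}
xbar = B @ x                         # [1, -1]: representation in {ebar_i}

x_back = np.linalg.solve(B, xbar)    # B^{-1} xbar recovers [2, 2]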
Some more definitions:

An inner product defined on a linear vector space is an
operation on two vectors, denoted ⟨x, y⟩, such that:

1. ⟨x, y⟩ = ⟨y, x⟩*   (* denotes the complex conjugate)
2. ⟨x, αy₁ + βy₂⟩ = α⟨x, y₁⟩ + β⟨x, y₂⟩
3. ⟨x, x⟩ ≥ 0 for all x, and ⟨x, x⟩ = 0 iff x = 0.

Linear vector spaces with inner products defined are
sometimes called Hilbert spaces.
A norm is used to describe a concept of "length" for a
vector, and it must follow the rules:

1. ‖x‖ = 0 iff x = 0.
2. ‖αx‖ = |α| ‖x‖ for any scalar α.
3. ‖x + y‖ ≤ ‖x‖ + ‖y‖   (triangle inequality)

and also

    |⟨x, y⟩| ≤ ‖x‖ · ‖y‖   (Cauchy–Schwarz inequality)
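A quick numeric sanity check of both inequalities on random vectors,
assuming NumPy (an illustration, not a proof):

# Triangle and Cauchy-Schwarz inequalities for the Euclidean norm.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(5)
y = rng.standard_normal(5)

assert np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y)
assert abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y)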

Linear vector spaces with norms defined are sometimes called
Banach spaces.
The most common norm, the "Euclidean" norm, is
induced from the inner product:

    ‖x‖ = ⟨x, x⟩^(1/2) = (∑ᵢ₌₁ⁿ xᵢ²)^(1/2)

A unit vector is a vector whose norm is one. Any vector can
be made into a unit vector by dividing it by its norm.

A metric, or concept of distance between two vectors, is
also induced by a norm:

    ρ(x, y) = ‖x − y‖        (making X a metric space)
The angle θ between two vectors can be computed in a space
of any dimension by the formula:

    ⟨x, y⟩ = ‖x‖ ‖y‖ cos θ

A pair of vectors is orthogonal if

    ⟨x, y⟩ = 0

A set {xᵢ} of vectors is orthogonal if

    ⟨xᵢ, xⱼ⟩ = 0 if i ≠ j   and   ⟨xᵢ, xⱼ⟩ ≠ 0 if i = j

A set {xᵢ} of vectors is orthonormal if

    ⟨xᵢ, xⱼ⟩ = 0 if i ≠ j   and   ⟨xᵢ, xⱼ⟩ = 1 if i = j
Gram–Schmidt Orthogonalization:

It is computationally convenient if we always use basis sets that
are orthonormal. If we have a basis that is not orthonormal,
we can use this process to orthogonalize the set, then
normalize it by making all vectors unit vectors.

Let the sets {yᵢ} and {vᵢ} denote, respectively, the original
(non-orthogonal) and the new (orthogonal) basis sets.

Step 1: Let v₁ = y₁.

Step 2: Choose v₂ as y₂ with its component along v₁
subtracted out:

    v₂ = y₂ − a·v₁

We don't know a yet, so we compute it. Because the vᵢ's
are orthogonal,

    ⟨v₁, v₂⟩ = 0 = ⟨v₁, y₂⟩ − a⟨v₁, v₁⟩

so

    a = ⟨v₁, y₂⟩ / ⟨v₁, v₁⟩
Step 3: Go on to choose v₃ as y₃ with the components along
both previous vᵢ basis vectors subtracted out:

    v₃ = y₃ − a₁v₁ − a₂v₂

Again, using ⟨v₁, v₂⟩ = ⟨v₂, v₁⟩ = 0:

    ⟨v₁, v₃⟩ = 0 = ⟨v₁, y₃⟩ − a₁⟨v₁, v₁⟩
    ⟨v₂, v₃⟩ = 0 = ⟨v₂, y₃⟩ − a₂⟨v₂, v₂⟩

so

    a₁ = ⟨v₁, y₃⟩ / ⟨v₁, v₁⟩   and   a₂ = ⟨v₂, y₃⟩ / ⟨v₂, v₂⟩,   etc.

Continuing on in the same way,

    vᵢ = yᵢ − ∑ₖ₌₁^(i−1) (⟨vₖ, yᵢ⟩ / ⟨vₖ, vₖ⟩) vₖ

Then normalize the set by

    ṽᵢ = vᵢ / ‖vᵢ‖
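A minimal sketch of the procedure above in NumPy (classical
Gram–Schmidt; the rows of Y are the original yᵢ and are an arbitrary
example, not from the notes):

# Orthonormalize a set of vectors by subtracting, from each y_i,
# its components along the previously built v_k's, then normalizing.
import numpy as np

def gram_schmidt(Y):
    """Return an orthonormal basis for the span of the rows of Y."""
    V = []
    for y in Y:
        v = y.astype(float)
        for vk in V:
            v = v - (vk @ y) / (vk @ vk) * vk   # subtract component along v_k
        V.append(v)
    return np.array([v / np.linalg.norm(v) for v in V])

Y = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
E = gram_schmidt(Y)
print(np.round(E @ E.T, 10))   # identity matrix: the rows are orthonormal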

If we always use orthonormal bases, we can always find the
component of vector x along the jth basis vector by taking
the inner product of x with eⱼ:

    x = ∑ᵢ₌₁ⁿ aᵢeᵢ

    ⟨eⱼ, x⟩ = ∑ᵢ₌₁ⁿ aᵢ⟨eⱼ, eᵢ⟩ = aⱼ      (the jth component)
i =1
Manifolds, Subspaces, and Projections:

In finite dimensions, manifolds and subspaces are equivalent.

A subspace is a subset of a vector space that itself
qualifies as a vector space.

DEFINITION: A subspace M of vector space X is a subset
of X such that if x, y ∈ M and z = αx + βy, then z ∈ M.

If dim(M) = dim(X), then M = X.
If dim(M) < dim(X), then M is a proper subspace.

! All subspaces must contain the zero vector!

A little reflection shows that:

1. All proper subspaces of R² are straight lines that pass
   through the origin.
2. All proper subspaces of R³ are planes or lines that pass
   through the origin.
3. All proper subspaces of Rⁿ are "surfaces" of dimension
   n − 1 or less that pass through the origin.
Projections:

Suppose U is a proper subspace of X (dim(U) < dim(X)).
Then for every x ∈ X there is a u ∈ U such that ⟨x − u, y⟩ = 0
for every y ∈ U. The vector u is the orthogonal
projection of x into U.

[Figure: X = R³ with U = R² a plane through the origin; x is
decomposed into its projection u ∈ U and the error x − u,
which is orthogonal to every y ∈ U]
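A minimal sketch of computing such a projection with NumPy (the
subspace U here is spanned by two arbitrary columns; least squares
finds the u ∈ U minimizing ‖x − u‖, which is exactly the orthogonal
projection):

# Orthogonal projection of x in R^3 onto the column space of A.
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # columns span U
x = np.array([1.0, 2.0, 0.0])

coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
u = A @ coeffs                        # the orthogonal projection of x into U

# x - u is orthogonal to every column of A, hence to all of U:
print(np.round(A.T @ (x - u), 10))    # ~[0, 0]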
