
Linear Dependence

A set of vectors V = {v₁, v₂, ..., vₙ} is linearly dependent if and only if there are constants c₁, c₂, ..., cₙ, not all zero, such that c₁v₁ + c₂v₂ + ... + cₙvₙ = 0. (Note that any set containing the zero vector is automatically dependent: give that vector a nonzero coefficient and all the others zero.)

$$
\begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix} = 0
\;\Longleftrightarrow\; Ac = 0
$$
Another way of describing it: the nullspace of A contains more than just the zero vector, i.e. dim N(A) ≥ 1.

Example: if V ⊂ ℝⁿ consists of exactly n vectors, we can verify whether it is linearly dependent by evaluating the determinant of the n × n matrix formed by the vectors of V: the set is dependent exactly when the determinant is zero (see the sketch below).
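A quick numerical check of this (a sketch using NumPy; the matrix below is made-up illustration data): stack the vectors as columns of a square matrix and test whether the determinant vanishes, or equivalently whether the rank falls short of the number of vectors.

```python
import numpy as np

# Columns are the vectors of V: three vectors in R^3 (made-up data).
# The third column is the sum of the first two, so the set is dependent.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 5.0]])

print(np.linalg.det(A))          # ~0.0 -> linearly dependent
print(np.linalg.matrix_rank(A))  # 2 < 3 columns -> dependent
```

In floating point, matrix_rank is the more robust test: the determinant of a nearly singular matrix can come out tiny but nonzero due to round-off.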

Vector Space
A subset V ⊂ ℝⁿ is a vector space (a subspace) if it satisfies the following conditions:
1. 0 ∈ V
2. If v ∈ V and c ∈ ℝ, then cv ∈ V
3. If v 1 , v 2 ∈ V, then v 1 + v 2 ∈ V

Nullspace (kernel)
The kernel (or nullspace) of a matrix A is the vector space of all vectors x that satisfy the equation Ax = 0.
Example: determine the nullspace of the matrix

$$
A = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \\ 4 & 3 & 2 & 1 \end{bmatrix}
$$

We solve

$$
\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \\ 4 & 3 & 2 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
$$

Row reducing the augmented matrix:

$$
\left[\begin{array}{cccc|c} 1 & 1 & 1 & 1 & 0 \\ 1 & 2 & 3 & 4 & 0 \\ 4 & 3 & 2 & 1 & 0 \end{array}\right]
\xrightarrow[\text{III}\,-\,4\text{I}]{\text{II}\,-\,\text{I}}
\left[\begin{array}{cccc|c} 1 & 1 & 1 & 1 & 0 \\ 0 & 1 & 2 & 3 & 0 \\ 0 & -1 & -2 & -3 & 0 \end{array}\right]
\xrightarrow[\text{III}\,+\,\text{II}]{\text{I}\,-\,\text{II}}
\left[\begin{array}{cccc|c} 1 & 0 & -1 & -2 & 0 \\ 0 & 1 & 2 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]
$$

So x = z + 2w and y = −2z − 3w, with z and w free:

$$
\begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}
= z \begin{bmatrix} 1 \\ -2 \\ 1 \\ 0 \end{bmatrix}
+ w \begin{bmatrix} 2 \\ -3 \\ 0 \\ 1 \end{bmatrix}
\;\Longrightarrow\;
N(A) = \operatorname{span}\left\{
\begin{bmatrix} 1 \\ -2 \\ 1 \\ 0 \end{bmatrix},
\begin{bmatrix} 2 \\ -3 \\ 0 \\ 1 \end{bmatrix}
\right\}
$$
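The same example can be checked symbolically; a minimal sketch with SymPy, whose Matrix.nullspace() returns exactly the two basis vectors derived above:

```python
from sympy import Matrix

A = Matrix([[1, 1, 1, 1],
            [1, 2, 3, 4],
            [4, 3, 2, 1]])

# Prints Matrix([[1, -2, 1, 0]]) and Matrix([[2, -3, 0, 1]])
for b in A.nullspace():
    print(b.T)
```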

Columnspace
Note that row reduction changes the column space, so in general C(A) ≠ C(rref(A)), where rref(A) is the reduced row echelon form of A; what rref(A) preserves is which columns are independent. A basis of C(A) is therefore formed by the columns of A that sit in the pivot positions of rref(A) (see the sketch below).
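A sketch of this recipe with SymPy, reusing the matrix from the nullspace example: rref() returns the reduced form together with the pivot column indices, and the basis of C(A) is read off from A itself, not from rref(A).

```python
from sympy import Matrix

A = Matrix([[1, 1, 1, 1],
            [1, 2, 3, 4],
            [4, 3, 2, 1]])

R, pivots = A.rref()
print(pivots)                       # (0, 1): the first two columns are pivots
basis = [A.col(j) for j in pivots]  # basis of C(A), taken from A itself
```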
Rowspace
The rowspace is the orthogonal complement of the nullspace

$$
Ax = \begin{bmatrix}
\cdots & r_1^T & \cdots \\
\cdots & r_2^T & \cdots \\
 & \vdots & \\
\cdots & r_n^T & \cdots
\end{bmatrix} x = 0
\quad\Longrightarrow\quad
\begin{cases}
r_1^T \cdot x = 0 \\
r_2^T \cdot x = 0 \\
\quad\;\vdots \\
r_n^T \cdot x = 0
\end{cases}
$$

where x ∈ N(A). Remember that the orthogonal complement of a subspace V is

$$
V^{\perp} = \{\, x \mid x \cdot v = 0 \ \text{for all } v \in V \,\}
$$

The four relations below are equivalent:

$$
\begin{aligned}
N(A) &= C(A^T)^{\perp} \\
N(A^T) &= C(A)^{\perp} \\
N(A)^{\perp} &= C(A^T) \\
N(A^T)^{\perp} &= C(A)
\end{aligned}
$$

because (Aᵀ)ᵀ = A and (V⟂)⟂ = V.
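A quick numerical sanity check of N(A) = C(Aᵀ)⟂, sketched with SciPy on the running example matrix: every vector in the nullspace must be orthogonal to every row of A.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0, 4.0],
              [4.0, 3.0, 2.0, 1.0]])

N = null_space(A)             # columns form an orthonormal basis of N(A)
print(np.allclose(A @ N, 0))  # True: every row of A is orthogonal to N(A)
```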

Projection onto a subspace


Notice that for any vector x ∈ ℝⁿ and a given subspace V ⊂ ℝⁿ, we have

$$
x = \underbrace{\operatorname{proj}_V x}_{v} + \underbrace{\operatorname{proj}_{V^{\perp}} x}_{w}
$$

Let {b₁, b₂, ..., bₖ} be a basis of V, and let B be the matrix whose columns are b₁, b₂, ..., bₖ. Then v = Bp for some coefficient vector p ∈ ℝᵏ.
Because x − v = w ∈ V⟂, and V = C(B), we have V⟂ = C(B)⟂ = N(Bᵀ).

Then Bᵀw = 0, and therefore Bᵀ(x − v) = 0.


Because v ∈ C(B), v = Bp, so

$$
B^T x = B^T B p \quad\Longrightarrow\quad p = (B^T B)^{-1} B^T x
$$

and then

$$
\operatorname{proj}_V(x) = B (B^T B)^{-1} B^T x
$$
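In code the projection formula is only a few lines; a minimal sketch with NumPy, where the basis vectors are made up for illustration and np.linalg.solve replaces the explicit inverse for numerical stability:

```python
import numpy as np

# Basis of V as the columns of B (made-up example: a plane in R^3).
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x = np.array([1.0, 2.0, 0.0])

p = np.linalg.solve(B.T @ B, B.T @ x)  # coefficients p = (B^T B)^{-1} B^T x
v = B @ p                              # proj_V(x)
w = x - v                              # proj_{V^perp}(x)
print(np.allclose(B.T @ w, 0))         # True: the residual is orthogonal to V
```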

Least Squares
Let's say we have a list of points in the plane: (x₁, y₁), (x₂, y₂), ..., (x_N, y_N).
We want to find m, n such that
$$
\begin{aligned}
y_1 &= m x_1 + n \\
y_2 &= m x_2 + n \\
&\;\;\vdots \\
y_N &= m x_N + n
\end{aligned}
\quad\Longleftrightarrow\quad
\begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_N & 1 \end{bmatrix}
\begin{bmatrix} m \\ n \end{bmatrix}
= \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix}
$$
If the points aren't collinear there is no exact solution, but we can find the least-squares approximation. Let A be the N × 2 matrix on the left above.
$$
A^T A = \begin{bmatrix} x_1 & x_2 & \cdots & x_N \\ 1 & 1 & \cdots & 1 \end{bmatrix}
\begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_N & 1 \end{bmatrix}
= \begin{bmatrix} \sum x_i^2 & \sum x_i \\ \sum x_i & N \end{bmatrix}
$$

$$
\det\!\left(A^T A\right) = N \sum x_i^2 - \Big(\sum x_i\Big)^2
$$

$$
(A^T A)^{-1} = \frac{1}{N \sum x_i^2 - \left(\sum x_i\right)^2}
\begin{bmatrix} N & -\sum x_i \\ -\sum x_i & \sum x_i^2 \end{bmatrix}
$$

Writing $\sum x_i = N\bar{x}$:

$$
(A^T A)^{-1} = \frac{1}{N \sum x_i^2 - (N\bar{x})^2}
\begin{bmatrix} N & -N\bar{x} \\ -N\bar{x} & \sum x_i^2 \end{bmatrix}
$$

$$
A (A^T A)^{-1} A^T = \frac{1}{N \sum x_i^2 - (N\bar{x})^2}
\begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_N & 1 \end{bmatrix}
\begin{bmatrix} N & -N\bar{x} \\ -N\bar{x} & \sum x_i^2 \end{bmatrix}
\begin{bmatrix} x_1 & x_2 & \cdots & x_N \\ 1 & 1 & \cdots & 1 \end{bmatrix}
$$

$$
A (A^T A)^{-1} A^T = \frac{1}{N \sum x_i^2 - (N\bar{x})^2}
\begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_N & 1 \end{bmatrix}
\begin{bmatrix} N x_1 - N\bar{x} & N x_2 - N\bar{x} & \cdots & N x_N - N\bar{x} \\ -N x_1 \bar{x} + \sum x_i^2 & -N x_2 \bar{x} + \sum x_i^2 & \cdots & -N x_N \bar{x} + \sum x_i^2 \end{bmatrix}
$$

This matrix projects y onto C(A). The coefficients themselves come from the normal equations AᵀA [m, n]ᵀ = Aᵀy, so [m, n]ᵀ = (AᵀA)⁻¹Aᵀy.
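Numerically there is no need to invert AᵀA by hand; a sketch with NumPy on made-up points, solving the normal equations directly and cross-checking against np.polyfit:

```python
import numpy as np

# Made-up, slightly noisy points near y = 2x + 1.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

A = np.column_stack([xs, np.ones_like(xs)])  # the N x 2 matrix above
m, n = np.linalg.solve(A.T @ A, A.T @ ys)    # normal equations: A^T A [m, n]^T = A^T y
print(m, n)                                  # slope and intercept
print(np.polyfit(xs, ys, 1))                 # same result from NumPy's built-in fit
```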
Eigen decomposition
Let A be an n × n matrix such that the relation

$$
Av = \lambda v
$$

gives the eigenvalues λ₁, λ₂, ..., λₙ and eigenvectors v₁, v₂, ..., vₙ. Let also T be the matrix whose columns are the eigenvectors, and D the diagonal matrix whose diagonal entries are the eigenvalues of A:

$$
D = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix},
\qquad
T = \begin{bmatrix} \vdots & \vdots & & \vdots \\ v_1 & v_2 & \cdots & v_n \\ \vdots & \vdots & & \vdots \end{bmatrix}
$$
You can see that, since any matrix with columns c₁, c₂, ..., cₙ satisfies

$$
A \begin{bmatrix} \vdots & \vdots & & \vdots \\ c_1 & c_2 & \cdots & c_n \\ \vdots & \vdots & & \vdots \end{bmatrix}
= \begin{bmatrix} \vdots & \vdots & & \vdots \\ Ac_1 & Ac_2 & \cdots & Ac_n \\ \vdots & \vdots & & \vdots \end{bmatrix}
$$

and Avₖ = λₖvₖ for each k, then

$$
AT = \begin{bmatrix} \vdots & & \vdots \\ Av_1 & \cdots & Av_n \\ \vdots & & \vdots \end{bmatrix}
= \begin{bmatrix} \vdots & & \vdots \\ \lambda_1 v_1 & \cdots & \lambda_n v_n \\ \vdots & & \vdots \end{bmatrix}
= \begin{bmatrix} \vdots & & \vdots \\ v_1 & \cdots & v_n \\ \vdots & & \vdots \end{bmatrix}
\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}
$$
AT = TD
This is a really useful relation. If T is invertible (that is, if the eigenvectors are linearly independent), another way to write it is

$$
A = TDT^{-1}
$$
which enables us to calculate powers of matrices; for example,

$$
A^3 = (TDT^{-1})(TDT^{-1})(TDT^{-1}) = TD^3T^{-1}
$$

since the inner T⁻¹T factors cancel.
And you can see that Dᵏ is simply

$$
D^k = \begin{bmatrix} \lambda_1^k & 0 & 0 & 0 \\ 0 & \lambda_2^k & 0 & 0 \\ 0 & 0 & \ddots & 0 \\ 0 & 0 & 0 & \lambda_n^k \end{bmatrix}
$$
The eigenvectors themselves are the nontrivial solutions of (A − λI)x = (TDT⁻¹ − λI)x = 0.
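A sketch of the decomposition with NumPy (the 2 × 2 matrix is a made-up example), verifying A = TDT⁻¹ and the power shortcut A³ = TD³T⁻¹:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # made-up symmetric example; eigenvalues 3 and 1

lam, T = np.linalg.eig(A)   # lam holds the eigenvalues, T the eigenvectors as columns
D = np.diag(lam)

print(np.allclose(A, T @ D @ np.linalg.inv(T)))             # A = T D T^-1
print(np.allclose(np.linalg.matrix_power(A, 3),
                  T @ np.diag(lam**3) @ np.linalg.inv(T)))  # A^3 = T D^3 T^-1
```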
