
Basic Laws of Matrices and Numbers

Law | Example | Relation to F and V
- Commutative Addition: A + B = B + A | (F2), (V2)
- Associative Addition: (A + B) + C = A + (B + C) | (F3), (V3)
- Associative Multiplication: (AB)C = A(BC) | (F8), -
- Distributive Law: A(B + C) = AB + AC | (F11), (V9)

Additional law only for numbers:
- Commutative Multiplication: ab = ba | (F7), -

Identities
- Additive identity: ∀ a, if a + 0 = a, then 0 is the additive identity | (F4): 0, (V4): 𝟎
- Multiplicative identity: ∀ a, if 1·a = a, then 1 is the multiplicative identity | (F9): 1, (V5): 1

Inverses
- Additive inverse: ∀ b, if x + b = 0, then x is the additive inverse of b | (F5): (−b), (V8): (−𝒃)
- Multiplicative inverse: ∀ b ≠ 0, if x × b = 1, then x is the multiplicative inverse of b | (F10): (b⁻¹); not defined for vectors

Derived operations:
- Subtraction: a − b = a + (−b)
- Division: a/b = a × b⁻¹

Terminology
- Finite Field: a field 𝔽 is called a finite field if it has only finitely many elements.
- Linear System over 𝔽: all coefficients of the linear equations are in 𝔽.
- Matrix over 𝔽: all entries of the matrix are in 𝔽.
- Gaussian Rationals: ℚ(𝒊) = {a + b𝒊 : a, b ∈ ℚ} ⊂ ℂ, with addition and multiplication defined as those of complex numbers.

Proving Field
∀ a, b, c ∈ 𝔽:
(F1) Closure under addition: a + b ∈ 𝔽
(F2) Commutative addition: a + b = b + a
(F3) Associative addition: (a + b) + c = a + (b + c)
(F4) UNIQUE additive identity: ∃ 0 ∈ 𝔽 such that a + 0 = a
(F5) UNIQUE additive inverse: ∃ b ∈ 𝔽 such that a + b = 0; b = −a
(F6) Closure under multiplication: ab ∈ 𝔽
(F7) Commutative multiplication: ab = ba
(F8) Associative multiplication: (ab)c = a(bc)
(F9) UNIQUE multiplicative identity: ∃ 1 ≠ 0 such that 1a = a
(F10) UNIQUE multiplicative inverse: ∀ a ≠ 0, ∃ c ∈ 𝔽 such that ac = 1; c = a⁻¹
(F11) Distributive law: a(b + c) = ab + ac

Proving Vector Space
∀ 𝒖, 𝒗, 𝒘 ∈ 𝕍 and b, c ∈ 𝔽:
(V1) Closure under addition: 𝒖 + 𝒗 ∈ 𝕍
(V2) Commutative addition: 𝒖 + 𝒗 = 𝒗 + 𝒖
(V3) Associative addition: (𝒖 + 𝒗) + 𝒘 = 𝒖 + (𝒗 + 𝒘)
(V4) UNIQUE additive identity: ∃ 𝟎 ∈ 𝕍 such that 𝒖 + 𝟎 = 𝒖
(V5) Multiplicative identity: 1𝒖 = 𝒖 for the scalar 1 ∈ 𝔽
(V6) Closure under scalar multiplication: c𝒖 ∈ 𝕍
(V7) Associative scalar multiplication: b(c𝒖) = (bc)𝒖
(V8) UNIQUE additive inverse: ∃ 𝒗 ∈ 𝕍 such that 𝒗 + 𝒖 = 𝟎; 𝒗 = −𝒖
(V9) Distributive over vector addition: c(𝒖 + 𝒗) = c𝒖 + c𝒗
(V10) Distributive over scalar addition: (b + c)𝒖 = b𝒖 + c𝒖

Proving Subset is a Subspace (and Vector Space)
Let 𝕍 be a vector space over 𝔽. A subset 𝕎 of 𝕍 is a subspace of 𝕍 iff:
1. Contains the zero vector: 𝟎 ∈ 𝕎
2. Closed under vector addition: ∀ 𝒖, 𝒗 ∈ 𝕎, 𝒖 + 𝒗 ∈ 𝕎
3. Closed under scalar multiplication: ∀ 𝒖 ∈ 𝕎, c ∈ 𝔽, c𝒖 ∈ 𝕎

Subspace Theorems
Let 𝕎₁ and 𝕎₂ be subspaces of 𝕍. Then:
1. 𝕎₁ ∩ 𝕎₂ is a subspace.
2. 𝕎₁ + 𝕎₂ = {w₁ + w₂ : w₁ ∈ 𝕎₁ and w₂ ∈ 𝕎₂} is a subspace.
Let 𝕎 be a subspace of a vector space 𝕍. Then dim(𝕎) ≤ dim(𝕍), and dim(𝕎) = dim(𝕍) iff 𝕎 = 𝕍.

Definition of Linear Span
Let B be a non-empty subset of 𝕍, and let 𝕎 = {all linear combinations of vectors in B}. Then:
1. 𝕎 is the subspace of 𝕍 spanned by B
2. 𝕎 is the linear span of B
3. B spans 𝕎
Denote 𝕎 = span_𝔽(B) (or span(B) for short).
Remark: one still needs to justify that 𝕎 is a subspace.

Basis and Dimension
Let 𝕍 be a finite-dimensional vector space and B a subset of 𝕍. B is a basis if it satisfies any two of the following statements:
1. B is linearly independent
2. span(B) = 𝕍
3. |B| = dim(𝕍)

Dimension
The dimension of a vector space 𝕍, dim_𝔽(𝕍), is defined to be the number of vectors in a basis for 𝕍, a.k.a. the cardinality of the basis. Equivalently, it is the minimum number of vectors required to span the space.

Linear Transformation over 𝔽
Let 𝕍 and 𝕎 be vector spaces over 𝔽. A mapping T: 𝕍 → 𝕎 is a linear transformation iff, ∀ 𝒖, 𝒗 ∈ 𝕍 and ∀ c, d ∈ 𝔽:
1. T(c𝒖 + d𝒗) = cT(𝒖) + dT(𝒗)
2. T(𝟎) = 𝟎
Also: 𝒖 = 𝒗 → T(𝒖) = T(𝒗); the converse, T(𝒖) = T(𝒗) → 𝒖 = 𝒗, holds iff T is injective.

Linear Operator and Linear Functional
1. Linear Operator: a linear transformation on the same vector space (T: 𝕍 → 𝕍).
2. Linear Functional: a linear transformation from a vector space to its field (T: 𝕍 → 𝔽).

Linearly Independent Transformations
A set of linear transformations {T₁, …, Tₙ} is linearly independent iff c₁T₁ + c₂T₂ + ⋯ + cₙTₙ = O_V has only the trivial solution.

Common Transformations
- Matrix transformation: let A be an m×n matrix over 𝔽; L_A(𝒖) = A𝒖, ∀ 𝒖 ∈ 𝔽ⁿ.
- Identity operator from 𝕍 to 𝕍: I_V(𝒖) = 𝒖, ∀ 𝒖 ∈ 𝕍.
- Zero transformation from 𝕍 to 𝕎: O_{V,W}(𝒖) = 𝟎, ∀ 𝒖 ∈ 𝕍.
- Determinant transformation: det: A ∈ M_{n×n}(𝔽) ↦ det(A) ∈ 𝔽.
- Trace transformation: tr: A ∈ M_{n×n}(𝔽) ↦ tr(A) ∈ 𝔽.
- Differential operator: let C^∞[a,b] be the real vector space of infinitely differentiable functions; for f ∈ C^∞[a,b], define D(f)(x) = df(x)/dx, i.e. f′(x).
- Integral operator: for f ∈ C^∞[a,b], define F(f)(x) = ∫ₐˣ f(t) dt for x ∈ [a,b].
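The matrix map L_A above is the prototypical linear transformation. A minimal sketch, checking the defining property T(c𝒖 + d𝒗) = cT(𝒖) + dT(𝒗) numerically; the matrix A and the vectors/scalars are hypothetical values chosen for illustration, and plain Python lists stand in for matrices:

```python
# Check T(c*u + d*v) == c*T(u) + d*T(v) for the matrix map L_A(u) = A u.
# A, u, v, c, d are illustrative values, not taken from the notes.

def mat_vec(A, u):
    """Multiply an m x n matrix (list of rows) by a length-n vector."""
    return [sum(A[i][j] * u[j] for j in range(len(u))) for i in range(len(A))]

def lin_comb(c, u, d, v):
    """Component-wise c*u + d*v."""
    return [c * ui + d * vi for ui, vi in zip(u, v)]

A = [[2, 1],
     [0, 3]]
u, v = [1, 4], [5, -2]
c, d = 3, -1

lhs = mat_vec(A, lin_comb(c, u, d, v))              # T(c*u + d*v)
rhs = lin_comb(c, mat_vec(A, u), d, mat_vec(A, v))  # c*T(u) + d*T(v)
print(lhs == rhs)  # linearity holds for any matrix map
```

The same two-sided check is the standard way to verify, on sample vectors, that a candidate map is linear.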
Coordinate Vectors
If B = {𝒗₁, 𝒗₂, …, 𝒗ₙ} is an ordered basis of 𝕍 and 𝒖 ∈ 𝕍, then
𝒖 = c₁𝒗₁ + c₂𝒗₂ + … + cₙ𝒗ₙ, for some (c₁, c₂, …, cₙ) ∈ 𝔽ⁿ,
and the coordinate vector of 𝒖 relative to the basis B is
[𝒖]_B = (c₁, c₂, …, cₙ) or [𝒖]_B = (c₁, c₂, …, cₙ)ᵀ.

Complex Numbers Properties
- Multiplication: (a + b𝒊)·(c + d𝒊) = (ac − bd) + (ad + bc)𝒊
- Division: 1/(a + b𝒊) = a/(a² + b²) − (b/(a² + b²))𝒊
- Norm: |a + b𝒊| = √(a² + b²)

Transpose of a Matrix
The transpose of an m × n matrix A, denoted by Aᵀ (or Aᵗ), is the matrix obtained by interchanging the rows and columns of A.

Laws of Transposition
- (Aᵀ)ᵀ = A
- (A + B)ᵀ = Aᵀ + Bᵀ
- (AB)ᵀ = BᵀAᵀ
- (Aᵀ)⁻¹ = (A⁻¹)ᵀ

Inverse of a Matrix
The inverse of an n × n matrix A is A⁻¹.
A is invertible if ∃ B such that AB = Iₙ and BA = Iₙ; A is singular if no such B exists.

Laws of Inversion
- If A is invertible, AB = AB′ → B = B′
- (A⁻¹)⁻¹ = A
- (aA)⁻¹ = (1/a)A⁻¹
- If A⁻¹ = B, then B⁻¹ = A and BA = AB = I
- A⁻¹ = (1/det(A)) adj(A)

Range and Kernel of a Matrix
Let A be an m × n matrix.
- The range of A is the set of all linear combinations of its columns (the column space):
  R(A) = {c₁A₁ + ⋯ + cₙAₙ : (c₁, …, cₙ) ∈ 𝔽ⁿ}, where Aᵢ denotes the i-th column of A.
- The kernel (nullspace) of A is the set of vectors sent to 𝟎: ker(A) = {𝒗 ∈ 𝔽ⁿ : A𝒗 = 𝟎}.
- The rank of A: rank(A) = dim(R(A)).
- The nullity of A: nullity(A) = dim(ker(A)); also the number of non-pivot columns of A.

Ranks and Dimensions of a Matrix
Dimension Theorem: rank(A) + nullity(A) = number of columns of A.
The following are all equal:
1) rank(A) = rank(Aᵀ)
2) dim(row space of A)
3) dim(column space of A)
4) number of non-zero rows of rref(A)
5) number of leading entries of rref(A)
6) number of pivot columns of rref(A)
7) number of columns of A − nullity(A)
8) number of columns of Aᵀ − nullity(Aᵀ)

Properties of ranks, for an m × n matrix A:
1) rank(A) ≤ min(m, n), the largest possible rank
2) rank(A) = min(m, n) iff A is of full rank
3) rank(AB) ≤ min(rank(A), rank(B))
*Ax = 𝒃 is consistent iff rank(A) = rank([A | 𝒃]).

Direct Sums
Let 𝕎₁ and 𝕎₂ be subspaces of 𝕍. The subspace 𝕎₁ + 𝕎₂ is a direct sum of 𝕎₁ and 𝕎₂ iff every 𝒖 ∈ 𝕎₁ + 𝕎₂ can be expressed UNIQUELY as 𝒖 = w₁ + w₂ for w₁ ∈ 𝕎₁, w₂ ∈ 𝕎₂. In this case, we denote it by 𝕎₁ ⊕ 𝕎₂.

Proving Direct Sums (Equivalent Definition)
The subspace 𝕎₁ + 𝕎₂ is a direct sum of 𝕎₁ and 𝕎₂ if and only if 𝕎₁ ∩ 𝕎₂ = {𝟎}.

IMPORTANT NOTE ABOUT DIRECT SUM
If 𝕍 = 𝕎₁ ⊕ 𝕎₂, a vector 𝒗 ∈ 𝕍 need not necessarily belong to only 𝕎₁ or 𝕎₂.

Dimension of Direct Sums
Let 𝕎₁ ⊕ 𝕎₂ be a direct sum.
- If B₁ and B₂ are bases for 𝕎₁ and 𝕎₂ respectively, then B₁ ∪ B₂ is a basis for 𝕎₁ ⊕ 𝕎₂.
- If both 𝕎₁ and 𝕎₂ are finite dimensional, then dim(𝕎₁ ⊕ 𝕎₂) = dim(𝕎₁) + dim(𝕎₂).

Sum and Intersection of Vector Spaces
dim(𝕎₁ + 𝕎₂) = dim(𝕎₁) + dim(𝕎₂) − dim(𝕎₁ ∩ 𝕎₂)
- To find a basis of the sum of vector spaces, take the union of the two bases and remove redundant vectors using GJE on the coordinate vectors.
- To find a basis of the intersection of vector spaces, take an arbitrary vector in either of the vector spaces and apply the defining condition of the other vector space.

Matrices for Linear Transformations
Given:
1. T: 𝕍 → 𝕎 is a linear transformation, and 𝕍, 𝕎 are finite dimensional
2. B and C are ordered bases of 𝕍 and 𝕎 respectively, with dim(𝕍) = |B| = n and dim(𝕎) = |C| = m
3. [𝒖]_B and [T(𝒖)]_C are the coordinate vectors of 𝒖 and T(𝒖) relative to B and C
Then [T(𝒖)]_C = [T]_{C,B} [𝒖]_B, ∀ 𝒖 ∈ 𝕍.
Let B = {𝒗₁, 𝒗₂, …, 𝒗ₙ} be a basis for 𝕍; then
A_{m×n} = [T]_{C,B} = ([T(𝒗₁)]_C  [T(𝒗₂)]_C  …  [T(𝒗ₙ)]_C),
where A is the matrix for T relative to the ordered bases B and C.

Range and Kernel of a Transformation
Let T: 𝕍 → 𝕎 be a linear transformation.
- The range of T is the set of images in 𝕎 of all elements in 𝕍: R(T) = {T(𝒖) : 𝒖 ∈ 𝕍}. Ranges are subspaces of 𝕎.
- The kernel (or null space) of T is the set of all elements in 𝕍 whose image in 𝕎 is 𝟎: ker(T) = {𝒖 ∈ 𝕍 : T(𝒖) = 𝟎}. Kernels are subspaces of 𝕍.
- rank(T) = dim(R(T)); nullity(T) = dim(ker(T)).

Dimension Theorem for the Matrix of a Transformation
Let T: 𝕍 → 𝕎 be a linear transformation, B and C the respective bases, dim(𝕍) = |B| = n, and A_{m×n} = [T]_{C,B} the matrix of the transformation. Then
rank(T) + nullity(T) = dim(𝕍), and rank(A) + nullity(A) = n = number of columns of A.

Uniqueness of Transformations
For fixed bases B and C:
1. Every transformation has a unique matrix of transformation; that is, there exists a bijection from the transformations to the matrices.
2. If T₃ = T₁ + T₂, then A₃ = A₁ + A₂; if T₄ = cT₁, then A₄ = cA₁.

Properties of Determinants
- det(AB) = det(A)det(B), and det(AB) = det(BA)
- det(aA) = aⁿ det(A), where n is the order of A
- det(Aᵀ) = det(A)
- det(A⁻¹) = 1/det(A)
- If A is triangular, det(A) = product of its diagonal entries
- det(A) = 0 if A has two identical rows or two identical columns
- det(A) = ∏ (eigenvalues of A)
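The dimension theorem above can be checked mechanically: row-reduce, count pivots, and compare. A small sketch (the matrix is a made-up example; `Fraction` keeps the elimination exact):

```python
from fractions import Fraction

def rank(A):
    """Rank via Gaussian elimination with exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0  # next pivot row
    for c in range(cols):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3],
     [2, 4, 6],   # a multiple of row 1, so rank drops by one
     [1, 0, 1]]
rk = rank(A)
nullity = len(A[0]) - rk   # dimension theorem: rank + nullity = #columns
print(rk, nullity)
```

Here the second row is dependent, so rank(A) = 2 and nullity(A) = 3 − 2 = 1.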
Composition of Transformations
Let S: 𝕌 → 𝕍 and T: 𝕍 → 𝕎 be linear transformations. Then, ∀ 𝒖 ∈ 𝕌,
(T ∘ S)(𝒖) = T(S(𝒖)).
Let 𝔸, 𝔹, ℂ be ordered bases for 𝕌, 𝕍, 𝕎 respectively, with dim(𝕌) = |𝔸| = k, dim(𝕍) = |𝔹| = n, dim(𝕎) = |ℂ| = m. Then
[T ∘ S]_{C,A} = [T]_{C,B} [S]_{B,A}.

Properties of Trace
∀ A, B ∈ M_{n×n}(𝔽), c ∈ 𝔽:
- Transpose: tr(A) = tr(Aᵀ)
- Addition: tr(A + B) = tr(A) + tr(B)
- Scalar multiplication: tr(cA) = c·tr(A)
- Circular trace: given A_{p×q}, B_{q×r}, C_{r×p}, tr(ABC) = tr(BCA) = tr(CAB), but in general ≠ tr(ACB)
- Eigenvalues: tr(A) = Σ (eigenvalues of A)

Row and Column Space
Type | Take
- Row space: the linearly independent rows of the final (row-reduced) matrix
- Column space: the original columns corresponding to the final linearly independent (pivot) columns

Cofactor Expansion
Let A be a square matrix of order n. Find any row/column with the easiest numbers and expand along it. For a 3 × 3 matrix
A = [a₁₁ a₁₂ a₁₃; a₂₁ a₂₂ a₂₃; a₃₁ a₃₂ a₃₃],
expanding along the first column uses the minors
M₁₁ = [a₂₂ a₂₃; a₃₂ a₃₃], M₂₁ = [a₁₂ a₁₃; a₃₂ a₃₃], M₃₁ = [a₁₂ a₁₃; a₂₂ a₂₃],
and gives
det(A) = a₁₁(−1)¹⁺¹det(M₁₁) + a₂₁(−1)²⁺¹det(M₂₁) + a₃₁(−1)³⁺¹det(M₃₁).

Equivalent Statements (Invertibility)
For a square matrix A of order n, the following statements are equivalent:
1) A is invertible
2) A is of full rank
3) The linear system Ax = 𝟎 has only the trivial solution
4) The RREF of A is an identity matrix
5) A can be expressed as a product of elementary matrices
6) det(A) ≠ 0
7) The rows of A form a basis for ℝⁿ
8) The columns of A form a basis for ℝⁿ
9) rank(A) = n
10) 0 is not an eigenvalue of A

How Does an ERO Change the Determinant?
A —ERO→ B | Change in determinant
- cRᵢ: det(B) = c·det(A)
- Rᵢ ↔ Rⱼ: det(B) = −det(A)
- Rᵢ + cRⱼ: det(B) = det(A)

Types of Matrices
- Symmetric: Aᵀ = A
- Anti-symmetric: Aᵀ = −A
- Orthogonal: Aᵀ = A⁻¹
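Cofactor expansion translates directly into a recursive function. A sketch (expanding along the first row rather than the first column, and using a made-up 3 × 3 example); it also illustrates the ERO rule that swapping two rows flips the sign of the determinant:

```python
def det(A):
    """Determinant by cofactor expansion along the first row (recursive)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor M_1j: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += A[0][j] * (-1) ** j * det(minor)
    return total

A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
d = det(A)
swapped = [A[1], A[0], A[2]]     # R1 <-> R2
print(d, det(swapped))           # the swap negates the determinant
```

Cofactor expansion is O(n!), so it is only practical for small matrices; row reduction is the usual method for anything larger.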
Eigenvectors and Eigenvalues
Let A be a square matrix of order n. A nonzero column vector 𝒙 in 𝔽ⁿ is called an eigenvector of A if A𝒙 = λ𝒙, associated with the eigenvalue λ. An n × n matrix has n eigenvalues, not all necessarily distinct.

Determining eigenvalues: the eigenvalues are the roots of the characteristic equation det(λI − A) = 0.
*For a triangular matrix, its eigenvalues are its diagonal entries.

Determining eigenvectors: substitute the eigenvalue into (λI − A)𝒙 = 𝟎; the solution space is the associated eigenspace, and its nonzero vectors are the associated eigenvectors.

Diagonalization and Limit Matrix
A square matrix A is diagonalizable if there exists an invertible matrix P such that P⁻¹AP is a diagonal matrix; P diagonalizes A.
A is diagonalizable:
1) iff the number of linearly independent eigenvectors of A = number of columns; or
2) if the number of distinct eigenvalues of A = number of columns (a sufficient condition); or
3) iff, for each eigenvalue, the power of its linear factor in the characteristic polynomial = the dimension of the associated eigenspace.
A = PDP⁻¹ and D = P⁻¹AP, where D is a diagonal matrix of the eigenvalues of A and the corresponding columns of P are the associated eigenvectors. Hence Aⁿ = PDⁿP⁻¹.

Eigenvalues of Transformations
Let T be a linear operator on 𝕍. A vector 𝒖 in 𝕍 is called an eigenvector of T if T(𝒖) = λ𝒖, associated with the eigenvalue λ. All processes to determine eigenvalues and eigenspaces are the same, except now A = [T]_B, for any basis B of 𝕍.
T is diagonalizable if there exists an ordered basis B of 𝕍 such that [T]_B = [T]_{B,B} is diagonal.

Similar Matrices
Let A, B ∈ M_{n×n}(𝔽). B is similar to A if ∃ P ∈ M_{n×n}(𝔽) such that B = P⁻¹AP.
Properties of Similar Matrices: it is necessary that similar matrices have the same determinant and trace, but matrices with the same determinant and trace are not necessarily similar.

Transition Matrices
Let B and C be ordered bases for 𝕍, with B = {𝒗₁, 𝒗₂, …, 𝒗ₙ}. The transition matrix from B to C is
[I_V]_{C,B} = ([I_V(𝒗₁)]_C [I_V(𝒗₂)]_C … [I_V(𝒗ₙ)]_C) = ([𝒗₁]_C [𝒗₂]_C … [𝒗ₙ]_C).
Let T: 𝕍 → 𝕍 be a linear operator, and B, C two ordered bases for 𝕍. Write [T]_{B,B} =: [T]_B and [T]_{C,C} =: [T]_C for the matrices of T relative to B and C, and P := [I_V]_{C,B} for the transition matrix from B to C, where P is an invertible matrix. Then
[T]_B = P⁻¹[T]_C P.
*Use this when a question asks about similar matrices for transition matrices.

Cayley–Hamilton Theorem
Let T be a linear operator on 𝕍, and A = [T]_B the matrix of the transformation for any basis B. Then
c_A(A) = 0, and c_T(T) = O_V ↔ c_T([T]_B) = 0ₙ.

Generalized Eigenspaces and Jordan Canonical Form
Suppose c_T(x) can be factorized into linear factors over 𝔽:
c_T(x) = (x − λ₁)^r₁ (x − λ₂)^r₂ … (x − λ_k)^r_k, where rᵢ is the multiplicity of λᵢ and Σrᵢ = n.
Then there exists an ordered basis B such that [T]_B = J is in Jordan canonical form. Let J_t(λᵢ) represent a Jordan block of size t associated with the eigenvalue λᵢ.
Let K_{λᵢ}(T) = Ker((T − λᵢI_V)^sᵢ). Then:
- 𝕍 = K_{λ₁}(T) ⊕ K_{λ₂}(T) ⊕ ⋯ ⊕ K_{λ_k}(T)
- If Cᵢ is a basis for K_{λᵢ}(T), then C = ∪ᵢ Cᵢ is a basis for 𝕍
1. E_{λᵢ}(T) ⊆ K_{λᵢ}(T)
2. K_{λᵢ}(T) is T-invariant
3. dim K_{λᵢ}(T) = rᵢ
4. m_{T|K_{λᵢ}(T)}(x) = (x − λᵢ)^sᵢ
5. c_{T|K_{λᵢ}(T)}(x) = (x − λᵢ)^rᵢ

For each λᵢ:
1. dim E_{λᵢ} = number of Jordan blocks J_t(λᵢ)
2. rᵢ = sum of the sizes of all J_t(λᵢ)
3. sᵢ = size of the largest J_t(λᵢ)
4. n = sum of the sizes of all Jordan blocks
- dim Ker(T − λᵢI) = number of blocks J_t(λᵢ)
- dim Ker((T − λᵢI)^m) − dim Ker((T − λᵢI)^(m−1)) = number of blocks J_t(λᵢ) with t ≥ m
*Each Jordan block provides one eigenvector.

Minimal Polynomial
- m_T(x) = (x − λ₁)^s₁ (x − λ₂)^s₂ … (x − λ_k)^s_k
- The minimal polynomial is the monic polynomial of smallest degree such that m_T(T) = O_V.
- If ∃ p(x) such that p(T) = O_V, then degree of p(x) ≥ degree of m_T(x).
- m_T(x) = ∏ᵢ m_{T|K_{λᵢ}(T)}(x) and c_T(x) = ∏ᵢ c_{T|K_{λᵢ}(T)}(x).
T is diagonalizable iff:
1. the number of linearly independent eigenvectors = n, or
2. m_T(x) = (x − λ₁)(x − λ₂)…(x − λ_k).

Finding a Basis such that [T]_B is in JCF
Define Qᵢ^m = (T − λᵢI_V)^m, so that Ker(Qᵢ^m) = Ker((T − λᵢI_V)^m).
1. For each λᵢ, find u ∈ K_{λᵢ}(T) with u ∉ Ker(Q^(sᵢ−1)), by finding the RREF of K_{λᵢ}.
2. Apply Q to each uᵢ found.
3. Find j additional linearly independent vectors v ∈ Ker(Q^(s−1)), v ∉ Ker(Q^(s−2)), where j = dim Ker(Q^k) − dim Ker(Q^(k−1)).
4. Repeat steps 2 and 3 until you reach Q^(sᵢ−1)(uᵢ).
5. The basis is {Q^(sᵢ−1)(u₁), Q^(sᵢ−2)(u₁), …, u₁, Q^(sᵢ−2)(v₁), Q^(sᵢ−3)(v₁), …, v₁, …}.
6. The union of all such bases is a basis for 𝕍.

Inverse Mapping
Let f: 𝔸 → 𝔹 be a mapping. f is bijective if and only if there exists a mapping g: 𝔹 → 𝔸 such that
f ∘ g = id_B and g ∘ f = id_A,
where id_A and id_B are the identity mappings. Here g is the inverse of f, and f that of g: g = f⁻¹, f = g⁻¹.
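The factorization A = PDP⁻¹ and the shortcut Aⁿ = PDⁿP⁻¹ can be verified on a small hand-picked example (the matrix, its eigenvectors, and P⁻¹ below are illustrative values worked out by hand, not from the notes):

```python
from fractions import Fraction

def mat_mul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A has eigenvalues 3 and 1 with eigenvectors (1, 1) and (1, -1).
A = [[2, 1], [1, 2]]
P = [[1, 1], [1, -1]]                      # eigenvectors as columns
P_inv = [[Fraction(1, 2), Fraction(1, 2)],
         [Fraction(1, 2), Fraction(-1, 2)]]
D = [[3, 0], [0, 1]]                       # eigenvalues on the diagonal

# D = P^-1 A P, and A^3 = P D^3 P^-1
assert mat_mul(mat_mul(P_inv, A), P) == D
D3 = [[27, 0], [0, 1]]                     # powers of a diagonal matrix are cheap
A3 = mat_mul(A, mat_mul(A, A))
assert mat_mul(mat_mul(P, D3), P_inv) == A3
print(A3)
```

This is exactly why diagonalization helps with limit matrices: raising D to a power only raises the diagonal entries, so Aⁿ costs two matrix multiplications instead of n.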
3 Types of Functions
Let f: X → Y.
- Injective (one-to-one): ∀ x₁, x₂ ∈ X, x₁ ≠ x₂ → f(x₁) ≠ f(x₂); or equivalently, f(x₁) = f(x₂) → x₁ = x₂.
  *If |X| > |Y|, f cannot be injective.
- Surjective (onto): ∀ y ∈ Y, ∃ x ∈ X such that y = f(x); i.e. range(f) = codomain(f).
  *If |X| < |Y|, f cannot be surjective.
- Bijective: both injective and surjective.
  *If |X| ≠ |Y|, f cannot be bijective.

Product of Vector Spaces
Let 𝕍 be a vector space; then 𝕍ⁿ = 𝕍 × 𝕍 × ⋯ × 𝕍, with dim(𝕍ⁿ) = n·dim(𝕍).
- Vector addition: (u₁, u₂, …, uₙ) + (v₁, v₂, …, vₙ) = (u₁ + v₁, …, uₙ + vₙ), uᵢ, vᵢ ∈ 𝕍
- Scalar multiplication: c(u₁, u₂, …, uₙ) = (cu₁, cu₂, …, cuₙ), c ∈ 𝔽, uᵢ ∈ 𝕍

Multilinear Form
A mapping T: 𝕍ⁿ → 𝔽 is a multilinear form on 𝕍 if, for each i,
T(u₁, …, u_(i−1), av + bw, u_(i+1), …, uₙ) = aT(u₁, …, u_(i−1), v, u_(i+1), …, uₙ) + bT(u₁, …, u_(i−1), w, u_(i+1), …, uₙ).

Alternating Multilinear Form
A multilinear form T: 𝕍ⁿ → 𝔽 is alternating on 𝕍 if T(u₁, u₂, …, uₙ) = 0 whenever uᵢ = uⱼ for some i ≠ j.

Bilinear Form
Let A ∈ M_{n×n}(𝔽) and T: 𝔽ⁿ × 𝔽ⁿ → 𝔽. Given a basis B = {b₁, …, bₙ} of 𝔽ⁿ, every bilinear form can be expressed as T(u, v) = uᵀAv, where
A = [T(b₁,b₁) T(b₁,b₂) ⋯ T(b₁,bₙ); T(b₂,b₁) T(b₂,b₂) ⋯ T(b₂,bₙ); ⋮ ⋮ ⋱ ⋮; T(bₙ,b₁) T(bₙ,b₂) ⋯ T(bₙ,bₙ)].

Complex Conjugate and Conjugate Transpose
Let c = a + bi; the complex conjugate is c̄ = a − bi, and conjugation is an involution: the conjugate of c̄ is c.
Let A = [a_ij] ∈ M_{m×n}(ℂ); the conjugate transpose of A is A* = (Ā)ᵀ = [ā_ji] ∈ M_{n×m}(ℂ).

Properties of Conjugate Transposition
Let A, B ∈ M_{m×n}(ℂ), C ∈ M_{n×p}(ℂ), and c ∈ ℂ:
- (A + B)* = A* + B*
- (AC)* = C*A*
- (cA)* = c̄A*
- (A*)* = A

Adjoint Linear Operator
A linear operator T* on 𝕍 is called the adjoint of T if, ∀ u, v ∈ 𝕍,
⟨T(u), v⟩ = ⟨u, T*(v)⟩, or ⟨u, T(v)⟩ = ⟨T*(u), v⟩.
If 𝕍 is finite dimensional, then:
1. T* always exists and is unique.
2. [T*]_B = ([T]_B)* for any ordered orthonormal basis B of 𝕍.

Special Operators and Corresponding Matrices
Operator | Matrix (relative to an orthonormal basis B)
- Self-adjoint: T = T* | Hermitian: [T]_B = ([T]_B)*
- Normal: TT* = T*T | Normal: [T]_B [T*]_B = [T*]_B [T]_B
- Unitary: TT* = T*T = I_V, 𝔽 = ℂ | Unitary: [T]_B ([T]_B)* = I
- Orthogonal: TT* = T*T = I_V, 𝔽 = ℝ | Orthogonal: [T]_B ([T]_B)ᵀ = I
*ALL HERMITIAN AND UNITARY MATRICES ARE NORMAL.

Equivalent Statements (Unitary/Orthogonal Operators)
1. T is unitary/orthogonal
2. ∀ u, v ∈ 𝕍, ⟨T(u), T(v)⟩ = ⟨u, v⟩
3. ∀ u ∈ 𝕍, ‖T(u)‖ = ‖u‖
4. ∃ an orthonormal basis B = {u₁, u₂, …, uₙ} such that {T(u₁), T(u₂), …, T(uₙ)} is also orthonormal

Additional Theorems/Information
1. T is unitarily diagonalizable iff T is normal.
2. T is orthogonally diagonalizable iff T is symmetric.
3. The eigenvectors of a normal matrix form an orthonormal basis for ℂⁿ.

Row and Column Space of a Transformation
For a linear transformation T: 𝕍 → 𝕎, the row space of T is associated with 𝕍 and the column space of T with 𝕎.
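The matrix-level conditions above are easy to test directly. A sketch with a hypothetical Hermitian matrix, using Python's built-in `complex` type: since A = A*, the matrix commutes with its conjugate transpose, illustrating that every Hermitian matrix is normal:

```python
def conj_transpose(A):
    """Conjugate transpose A* = (conj(A))^T."""
    m, n = len(A), len(A[0])
    return [[A[i][j].conjugate() for i in range(m)] for j in range(n)]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# A made-up Hermitian example: real diagonal, conjugate-symmetric off-diagonal.
A = [[complex(2, 0), complex(1, -1)],
     [complex(1, 1), complex(3, 0)]]

A_star = conj_transpose(A)
is_hermitian = (A == A_star)
is_normal = (mat_mul(A, A_star) == mat_mul(A_star, A))
print(is_hermitian, is_normal)
```

The converse fails: a matrix can be normal without being Hermitian (any unitary matrix that is not Hermitian, for instance).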
Inner Product
Let 𝕍 be a vector space over 𝔽 = ℝ or ℂ. An inner product on 𝕍 is a mapping
⟨·,·⟩: 𝕍 × 𝕍 → 𝔽, (u, v) ↦ ⟨u, v⟩,
satisfying the 4 axioms, ∀ u, v, w ∈ 𝕍 and c ∈ ℂ:
- IP1: ⟨u, v⟩ = the conjugate of ⟨v, u⟩ (plain symmetry when 𝔽 = ℝ)
- IP2: ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ and ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩
- IP3: ⟨cu, v⟩ = c⟨u, v⟩ and ⟨u, cv⟩ = c̄⟨u, v⟩
- IP4: ⟨u, u⟩ > 0 for u ≠ 0, and ⟨u, u⟩ = 0 ↔ u = 0
𝕍 equipped with such a mapping is an inner product space.

Matrix of an Inner Product
Let B = {u₁, u₂, …, uₙ} be an ordered basis for an inner product space 𝕍. Denote A = [⟨uᵢ, uⱼ⟩], i.e. the (i, j)-entry of A is ⟨uᵢ, uⱼ⟩. Then
⟨u, v⟩ = [u]_Bᵀ A [v]_B.

Norm and Distance
Let 𝕍 be an inner product space.
- The norm of a vector u is ‖u‖ = √⟨u, u⟩; ‖u‖ = 1 iff u is a unit vector.
- The distance between u and v is d(u, v) = ‖u − v‖.
Properties of the norm:
1. ‖u‖ > 0 for u ≠ 0, and ‖u‖ = 0 ↔ u = 0
2. ‖cu‖ = |c|·‖u‖
3. |⟨u, v⟩| ≤ ‖u‖·‖v‖ (Cauchy–Schwarz)
4. |⟨u, v⟩| = ‖u‖·‖v‖ iff u = av, for some a ∈ 𝔽
5. ‖u + v‖ ≤ ‖u‖ + ‖v‖ (triangle inequality)

Orthogonality
Let 𝕍 be an inner product space; then u, v are orthogonal iff ⟨u, v⟩ = 0.
A subset B of 𝕍 is an orthogonal set iff all its vectors are pairwise orthogonal, and orthonormal iff, in addition, they are all unit vectors.

Gram–Schmidt Process
Let {u₁, u₂, …, uₙ} be a basis of 𝕍. Set
v₁ = u₁,
v₂ = u₂ − (⟨u₂, v₁⟩/⟨v₁, v₁⟩)v₁,
v₃ = u₃ − (⟨u₃, v₁⟩/⟨v₁, v₁⟩)v₁ − (⟨u₃, v₂⟩/⟨v₂, v₂⟩)v₂, …
Then {v₁, v₂, v₃, …, vₙ} is an orthogonal basis for 𝕍, and normalizing,
w₁ = (1/‖v₁‖)v₁, w₂ = (1/‖v₂‖)v₂, …, wₙ = (1/‖vₙ‖)vₙ,
gives an orthonormal basis {w₁, w₂, w₃, …, wₙ} for 𝕍.

Injective and Surjective Linear Transformations
Let T: 𝕍 → 𝕎 be a linear transformation.
- T is injective iff Ker(T) = {𝟎}, i.e. nullity(T) = 0.
- T is surjective iff R(T) = 𝕎, i.e. rank(T) = dim(𝕎).
** A LINEAR OPERATOR IS INJECTIVE IFF IT IS SURJECTIVE.

Composition of Function Types
If g ∘ f is … | g must be | f must be
- Injective | - | Injective
- Surjective | Surjective | -
- Bijective | Surjective | Injective

Invariant Subspace
Let T be a linear operator on 𝕍. A subspace 𝕎 of 𝕍 is T-invariant if T(𝒖) ∈ 𝕎, ∀ 𝒖 ∈ 𝕎, i.e. T(𝕎) = {T(𝒖) : 𝒖 ∈ 𝕎} ⊆ 𝕎.

Restriction
If 𝕎 is a T-invariant subspace of 𝕍, then the restriction of T on 𝕎, T|_𝕎: 𝕎 → 𝕎, is defined by
T|_𝕎(𝒖) = T(𝒖), ∀ 𝒖 ∈ 𝕎.
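The Gram–Schmidt recursion above is short to implement. A sketch on a made-up basis of ℝ³ with the standard dot product; normalization is left out so the arithmetic stays in exact rationals (no square roots):

```python
from fractions import Fraction

def dot(u, v):
    """Standard inner product on R^n."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Return an orthogonal (not normalized) basis via Gram-Schmidt."""
    ortho = []
    for u in basis:
        v = [Fraction(x) for x in u]
        for w in ortho:
            coeff = dot(v, w) / dot(w, w)   # <u, w> / <w, w>
            v = [vi - coeff * wi for vi, wi in zip(v, w)]
        ortho.append(v)
    return ortho

vs = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
# all pairwise inner products should vanish:
print(dot(vs[0], vs[1]), dot(vs[0], vs[2]), dot(vs[1], vs[2]))
```

Dividing each vector by its norm afterwards would give the orthonormal basis {w₁, …, wₙ} of the notes.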
Orthogonal Complement
Let 𝕎 be a subspace of an inner product space 𝕍. The orthogonal complement of 𝕎 is
𝕎⊥ = {v ∈ 𝕍 : v is orthogonal to 𝕎} = {v ∈ 𝕍 : ⟨u, v⟩ = 0, ∀ u ∈ 𝕎}.
Some theorems:
1. 𝕎⊥ is a subspace of 𝕍
2. 𝕎 ∩ 𝕎⊥ = {𝟎}
3. If 𝕎 is finite dimensional, 𝕍 = 𝕎 ⊕ 𝕎⊥
4. If 𝕍 is finite dimensional, dim(𝕍) = dim(𝕎) + dim(𝕎⊥)

Orthogonal Projection, a.k.a. Best Approximation
Let 𝕎 be a subspace of an inner product space 𝕍, and suppose 𝕍 = 𝕎 ⊕ 𝕎⊥. Then ∀ u ∈ 𝕍, u = w + w′, where w ∈ 𝕎 and w′ ∈ 𝕎⊥. This expression is unique, and w is the orthogonal projection of u onto 𝕎. The mapping Proj_𝕎: 𝕍 → 𝕎 is the orthogonal projection of 𝕍 onto 𝕎.
- u = w iff u ∈ 𝕎
- ‖u − w‖ ≤ ‖u − v‖ for all v ∈ 𝕎 (best approximation)

Computing the Projection
Let {w₁, w₂, …, w_k} be an orthonormal basis for 𝕎. Then
Proj_𝕎(u) = ⟨u, w₁⟩w₁ + ⟨u, w₂⟩w₂ + ⋯ + ⟨u, w_k⟩w_k
Proj_𝕎⊥(u) = u − ⟨u, w₁⟩w₁ − ⟨u, w₂⟩w₂ − ⋯ − ⟨u, w_k⟩w_k

Least Squares Solution
Let Ax = 𝒃 be a linear system. A vector 𝒖 ∈ ℝⁿ is the least squares solution to the system if ‖𝒃 − A𝒖‖ ≤ ‖𝒃 − A𝒗‖ for all 𝒗 ∈ ℝⁿ.
Finding the least squares solution:
- Method 1 (projection): 𝒖 is a least squares solution to Ax = 𝒃 iff A𝒖 = 𝒑, where 𝒑 is the projection of 𝒃 onto the column space of A.
- Method 2 (transpose): 𝒖 is a least squares solution to Ax = 𝒃 iff AᵀA𝒖 = Aᵀ𝒃.

Bijective and Isomorphism
Let T: 𝕍 → 𝕎 be a linear transformation. T is called an isomorphism from 𝕍 onto 𝕎 if T is bijective. And if T is an isomorphism, then T⁻¹ is a linear transformation and also an isomorphism.
Equivalent statements:
- 𝕍 and 𝕎 are isomorphic, 𝕍 ≅ 𝕎, iff dim(𝕍) = dim(𝕎)
- There exists an isomorphism from 𝕍 to 𝕎

Isomorphism vs [T]_{C,B}
Let 𝕍 and 𝕎 be two vector spaces with dim(𝕍) = |B| = n and dim(𝕎) = |C| = n, so that [T(u)]_C = [T]_{C,B}[u]_B with A_{n×n} = [T]_{C,B}.
1. T is an isomorphism iff [T]_{C,B} is an invertible matrix.
2. If T is an isomorphism, [T⁻¹]_{B,C} = ([T]_{C,B})⁻¹.

Block Matrices and Invariant Subspaces
Let 𝕎 be a T-invariant subspace of 𝕍,
C = {u₁, u₂, …, u_m} an ordered basis for 𝕎, and
B = {u₁, u₂, …, u_m, u_(m+1), …, uₙ} an ordered basis of 𝕍.
Then
[T]_B = ([T(u₁)]_B … [T(u_m)]_B [T(u_(m+1))]_B … [T(uₙ)]_B) = [A₁ A₂; 0 A₃],
with A₁ = [T|_𝕎]_C.

Characteristic Polynomial
c_T(x) = c_{[T]_B}(x) = det(xIₙ − [T]_B).
For the block form above,
c_T(x) = det([xI_m − A₁, −A₂; 0, xI_(n−m) − A₃]) = det(xI_m − A₁)·det(xI_(n−m) − A₃) = c_{A₁}(x)·c_{A₃}(x).

Cyclic Subspace
𝕎 = span{u, T(u), T²(u), …} is a T-invariant subspace. The dimension of 𝕎 is equal to the smallest positive integer k such that T^k(u) is a linear combination of {u, T(u), …, T^(k−1)(u)}, which is then a basis for 𝕎. If
T^k(u) = a₀u + a₁T(u) + ⋯ + a_(k−1)T^(k−1)(u),
then
c_{T|_𝕎}(x) = x^k − a_(k−1)x^(k−1) − ⋯ − a₁x − a₀.
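Method 2 (the normal equations AᵀA𝒖 = Aᵀ𝒃) is the quickest to compute by hand or by machine. A sketch on a made-up overdetermined system (fitting a line through three points); the 2 × 2 solve uses Cramer's rule, and `Fraction` keeps the answer exact:

```python
from fractions import Fraction

def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def solve_2x2(M, c):
    """Cramer's rule for a 2x2 system M u = c."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    x = Fraction(c[0] * M[1][1] - M[0][1] * c[1], det)
    y = Fraction(M[0][0] * c[1] - c[0] * M[1][0], det)
    return [x, y]

# Overdetermined system Ax = b: three equations, two unknowns (no exact solution).
A = [[1, 1], [1, 2], [1, 3]]
b = [[1], [2], [2]]

At = transpose(A)
AtA = mat_mul(At, A)                      # A^T A
Atb = [row[0] for row in mat_mul(At, b)]  # A^T b
u = solve_2x2(AtA, Atb)                   # normal equations: A^T A u = A^T b
print(u)
```

The resulting 𝒖 minimizes ‖𝒃 − A𝒗‖ over all 𝒗, which is exactly the best-approximation property of the projection of 𝒃 onto the column space of A.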
