Linear Algebra Concepts Overview

This document provides an overview of key concepts in linear algebra, including:
- Vectors and matrices: definitions, notation, and basic operations such as addition, scalar multiplication, and matrix multiplication.
- Matrices as linear transformations: stretching, rotation, reflection, and projection.
- Vector spaces and subspaces, and how linear systems define subspaces such as the column space and null space of a matrix.
- Linear independence and spanning sets, and the definition of a basis as a maximal linearly independent set that spans a vector space.
- Orthogonality of vectors and the definition of an orthonormal basis consisting of orthogonal unit vectors.


Review of Linear Algebra

Prof. Yeh, T.‐J.
Basic concepts
• A vector in R^n is an ordered set of n real numbers,
  e.g. v = (1,6,3,4) is in R^4.
  – As a column vector:
        [ 1 ]
        [ 6 ]
        [ 3 ]
        [ 4 ]
  – As a row vector:  [ 1 6 3 4 ]
• An m-by-n matrix is an object in R^(m x n) with m rows and
  n columns, each entry filled with a (typically) real number:
        [ 1  2  8 ]
        [ 4 78  6 ]
        [ 9  3  2 ]
Basic concepts
Vector norms: a norm ||x|| is informally a measure of the
"length" of a vector.

– Common norms: L1 (||x||_1 = Σ_i |x_i|) and L2, the
  Euclidean norm (||x||_2 = sqrt(Σ_i x_i^2))
– L∞ (||x||_∞ = max_i |x_i|)
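The common norms just listed can be sketched in plain Python (no libraries; the helper names are my own, not from the slides):

```python
import math

def norm_l1(x):
    # L1 norm: sum of absolute values
    return sum(abs(xi) for xi in x)

def norm_l2(x):
    # L2 (Euclidean) norm: square root of the sum of squares
    return math.sqrt(sum(xi * xi for xi in x))

def norm_linf(x):
    # L-infinity norm: largest absolute component
    return max(abs(xi) for xi in x)

v = [1, 6, 3, 4]
print(norm_l1(v))    # 14
print(norm_l2(v))    # sqrt(62) ~ 7.874
print(norm_linf(v))  # 6
```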
Basic concepts
We will use lower-case letters for vectors. The elements are
referred to by x_i.
• Vector dot (inner) product: u•v = u^T v = Σ_i u_i v_i

  If u•v = 0 with ||u||_2 ≠ 0 and ||v||_2 ≠ 0, then u and v are orthogonal.

  If u•v = 0 with ||u||_2 = 1 and ||v||_2 = 1, then u and v are orthonormal.

• Vector outer product: u v^T, the matrix with (i,j) entry u_i v_j.

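A minimal plain-Python sketch of both products (helper names are mine):

```python
def dot(u, v):
    # inner product: a single scalar
    return sum(ui * vi for ui, vi in zip(u, v))

def outer(u, v):
    # outer product: a len(u)-by-len(v) matrix with entries u[i]*v[j]
    return [[ui * vj for vj in v] for ui in u]

print(dot([1, 0], [0, 1]))    # 0 -> orthogonal (unit length, so also orthonormal)
print(outer([1, 2], [3, 4]))  # [[3, 4], [6, 8]]
```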
Basic concepts
We will use upper-case letters for matrices. The elements are
referred to by A_i,j.
• Matrix product. In general, (AB)_i,j = Σ_k A_i,k B_k,j

  e.g. with
           [ a11 a12 ]        [ b11 b12 ]
       A = [ a21 a22 ],   B = [ b21 b22 ]

            [ a11*b11 + a12*b21   a11*b12 + a12*b22 ]
       AB = [ a21*b11 + a22*b21   a21*b12 + a22*b22 ]
Basic concepts
• Transpose: reflect the vector/matrix across the main diagonal:

      [ a ]^T              [ a b ]^T   [ a c ]
      [ b ]   = [ a b ],   [ c d ]   = [ b d ]

  (Ax)^T = x^T A^T
Matrices as linear transformations

 5 0 1  5 
      (stretching)
 0 5 1  5 

 0  11   1
      (rotation)
 1 0 1  1 
Matrices as linear transformations
 0 1  1   0 
      (reflection)
 1 0  0   1 

 1 0 1  1 
      (projection)
 0 0 1  0 

 1 c  x   x  cy 
      (shearing)
 0 1  y   y 
Matrices as sets of constraints
x y z 1
2x  y  z  2

 x
 1 1 1   1 
  y    
 2  1 1 z   2 
 
Vector spaces
• Formally, a vector space is a set of vectors which is closed 
under addition and multiplication by real numbers.
• A subspace is a subset of a vector space which is a vector 
space itself, e.g. the plane z=0 is a subspace of R3 (It is 
essentially R2.).
• We'll be looking at R^n and subspaces of R^n.
  Our notion of planes in R^3 may be extended to
  hyperplanes in R^n (of dimension n-1).

Note: subspaces must include 
the origin (zero vector).
Linear system & subspaces
• Linear systems define certain subspaces.
• Ax = b is solvable iff b may be written as a linear
  combination of the columns of A.
• The set of possible vectors b forms a subspace called
  the column space of A.

  [ 1 0 ] [ u ]   [ b1 ]           [ 1 ]       [ 0 ]   [ b1 ]
  [ 2 3 ] [ v ] = [ b2 ]   i.e.  u [ 2 ]  +  v [ 3 ] = [ b2 ]
  [ 1 3 ]         [ b3 ]           [ 1 ]       [ 3 ]   [ b3 ]

  (figure: the column space is the plane spanned by
  (1,2,1) and (0,3,3))
Linear system & subspaces
The set of solutions to Ax = 0 forms a subspace called
the null space of A.

  [ 1 0 ] [ u ]   [ 0 ]
  [ 2 3 ] [ v ] = [ 0 ]     Null space: {(0,0)}
  [ 1 3 ]         [ 0 ]

  [ 1 0 1 ] [ x ]   [ 0 ]
  [ 2 3 5 ] [ y ] = [ 0 ]   Null space: {(c,c,-c)}
  [ 1 3 4 ] [ z ]   [ 0 ]
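We can check the second null space directly: every vector of the form (c, c, -c) is mapped to zero. A plain-Python sketch (helper name is mine):

```python
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[1, 0, 1], [2, 3, 5], [1, 3, 4]]
c = 7  # any scalar c gives a null-space vector (c, c, -c)
print(matvec(A, [c, c, -c]))  # [0, 0, 0]
```

This works because the third column of A is the sum of the first two.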
Linear independence and basis
• Vectors v1,…,vk are linearly independent if
  c1 v1 + … + ck vk = 0 implies c1 = … = ck = 0,
  i.e. the null space of the matrix with columns v1,…,vk
  is the origin:

  [ |   |   |  ] [ c1 ]   [ 0 ]
  [ v1  v2  v3 ] [ c2 ] = [ 0 ]
  [ |   |   |  ] [ c3 ]   [ 0 ]

  (figure: vectors (1,0), (0,1), (1,1), (2,2) in the plane)

  [ 1 0 ] [ u ]   [ 0 ]    Recall this null space contained
  [ 2 3 ] [ v ] = [ 0 ]    only (u,v) = (0,0), i.e. the
  [ 1 3 ]         [ 0 ]    columns are linearly independent.
Linear independence and basis
• If all vectors in a vector space may be expressed as
  linear combinations of v1,…,vk, then v1,…,vk span the
  space. e.g. in R^3:

  [ 2 ]     [ 1 ]     [ 0 ]     [ 0 ]        [ .9 ]        [ .3 ]     [ .1 ]
  [ 2 ] = 2 [ 0 ] + 2 [ 1 ] + 2 [ 0 ] = 1.57 [ .2 ] + 1.29 [  1 ] + 2 [ .2 ]
  [ 2 ]     [ 0 ]     [ 0 ]     [ 1 ]        [  0 ]        [  0 ]     [  1 ]

  (figure: basis vectors (1,0,0), (0,1,0), (0,0,1) and
  (.9,.2,0), (.3,1,0), (.1,.2,1))
Linear independence and basis
• A basis is a set of linearly independent vectors which span the space. 
• The dimension of a space is the # of “degrees of freedom” of the 
space; it is the number of vectors in any basis for the space.
• A basis is a maximal set of linearly independent vectors and a minimal 
set of spanning vectors.

 2 1 0 0  2  .9   .3   .1 


               
2
   20  2 1   2 0   2   1.57 .2   1.29 1   2 .2 
 2 0 0 1  2 0 0 1
               
(0,0,1) (.1,.2,1)

(0,1,0)

(.3,1,0)
(1,0,0) (.9,.2,0)
Linear independence and basis
• Two vectors are orthogonal if their dot product is 0.
• An orthogonal basis consists of orthogonal vectors.
• An orthonormal basis consists of orthogonal vectors of 
unit length.

 2 1 0 0  2  .9   .3   .1 


               
2
   20  2 1   2 0   2   1.57 .2   1.29 1   2 .2 
 2 0 0 1  2 0 0 1
               
(0,0,1) (.1,.2,1)

(0,1,0)

(.3,1,0)
(1,0,0) (.9,.2,0)
About subspaces
• The rank of A is the dimension of the column space of A.
• It also equals the dimension of the row space of A (the 
subspace of vectors which may be written as linear 
combinations of the rows of A).

1 0 (1,3) = (2,3) – (1,0)
 
 2 3 Only 2 linearly independent 
 1 3 rows, so rank = 2.
 
About subspaces
Fundamental Theorem of Linear Algebra:
If A is m x n with rank r,
Column space(A) has dimension r
Nullspace(A) has dimension n‐r (= nullity of A)
Row space(A) = Column space(AT) has dimension r
Left nullspace(A) = Nullspace(AT) has dimension m ‐ r
Rank‐Nullity Theorem:  rank + nullity = n

  e.g.  [ 1 0 ]
        [ 0 1 ]    m = 3, n = 2, r = 2
        [ 0 0 ]

  (figure: column space spanned by (1,0,0) and (0,1,0) in R^3)
Non‐square matrices
m = 3, n = 2, r = 2:

      [ 1 0 ]            [ 1 0 ] [ x1 ]   [ 1 ]
  A = [ 0 1 ]    e.g.    [ 0 1 ] [ x2 ] = [ 1 ]
      [ 2 3 ]            [ 2 3 ]          [ 1 ]

  The system Ax = b may not have a solution
  (x has 2 variables but there are 3 constraints).

m = 2, n = 3, r = 2:

                                    [ x1 ]
  A = [ 1 0 2 ]   e.g.  [ 1 0 2 ]  [ x2 ]   [ 1 ]
      [ 0 1 3 ]         [ 0 1 3 ]  [ x3 ] = [ 1 ]

  The system Ax = b is underdetermined
  (x has 3 variables and 2 constraints).
Basis transformations
• Before talking about basis transformations, we 
need to recall matrix inversion and projections.
Matrix inversion
• To solve Ax = b, we can write a closed-form solution if we
  can find a matrix A^-1 s.t. A A^-1 = A^-1 A = I (identity matrix).
• Then Ax = b iff x = A^-1 b:
  x = Ix = A^-1 A x = A^-1 b
• A is non-singular iff A^-1 exists iff Ax = b has a unique
  solution.
• Note: if A^-1 and B^-1 exist, then (AB)^-1 = B^-1 A^-1,
  and (A^T)^-1 = (A^-1)^T.
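For a 2x2 matrix the inverse has a well-known closed form, so the x = A^-1 b recipe can be sketched in plain Python (helper names are mine):

```python
def inv2(A):
    # closed-form inverse of a 2x2 matrix; requires det != 0
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[2, 1], [1, 1]]
b = [3, 2]
x = matvec(inv2(A), b)  # x = A^-1 b
print(x)  # [1.0, 1.0]
```

In practice one solves Ax = b by elimination rather than forming A^-1 explicitly (see "Solving Ax=b" later in these notes).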
Projections
The projection of b onto a is

  c = (a^T b / a^T a) a

e.g. b = (2,2), a = (1,0):  c = (2/1)(1,0) = (2,0)

Projecting (2,2,2) onto the xy-plane:

  [ 1 0 0 ] [ 2 ]   [ 2 ]
  [ 0 1 0 ] [ 2 ] = [ 2 ]
  [ 0 0 0 ] [ 2 ]   [ 0 ]
Basis transformations
We may write v = (2,2,2) in terms of an alternate basis:

  [ 2 ]   [ 1 0 0 ] [ 2 ]        [ 2 ]   [ .9 .3 .1 ] [ 1.57 ]
  [ 2 ] = [ 0 1 0 ] [ 2 ]  and   [ 2 ] = [ .2  1 .2 ] [ 1.29 ]
  [ 2 ]   [ 0 0 1 ] [ 2 ]        [ 2 ]   [  0  0  1 ] [  2   ]

  (figure: basis vectors (1,0,0), (0,1,0), (0,0,1) and
  (.9,.2,0), (.3,1,0), (.1,.2,1))

The components of (1.57, 1.29, 2) are projections of v onto the
new basis vectors, normalized so the new v still has the same length.
Basis transformations
Given a vector v written in the standard basis, rewrite it as
v_Q in terms of a basis Q (whose columns are the new basis vectors):

  If the columns of Q are orthonormal, v_Q = Q^T v.

  Otherwise, v_Q = Q^-1 v.
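For an orthonormal Q the change of basis is just a transpose-multiply. A plain-Python sketch using a 90-degree rotation as the new basis (my own example, not from the slides; helper names are mine):

```python
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Q's columns (0,1) and (-1,0) are orthonormal
Q = [[0, -1], [1, 0]]
v = [2, 3]

vQ = matvec(transpose(Q), v)  # coordinates of v in basis Q
print(vQ)                     # [3, -2]
print(matvec(Q, vQ))          # [2, 3] -> Q vQ reconstructs v
```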
Special matrices
  [ a 0 0 ]                   [ a b c ]
  [ 0 b 0 ]  diagonal         [ 0 d e ]  upper-triangular
  [ 0 0 c ]                   [ 0 0 f ]

  [ a b 0 0 ]                 [ a 0 0 ]
  [ c d e 0 ]  tri-diagonal   [ b c 0 ]  lower-triangular
  [ 0 f g h ]                 [ d e f ]
  [ 0 0 i j ]

  [ 1 0 0 ]
  [ 0 1 0 ]  I (identity matrix)
  [ 0 0 1 ]
Special matrices
• Matrix A is symmetric if A = AT
• A is positive definite if xTAx>0 for all non‐zero x (positive semi‐definite
if inequality is not strict)

              [ 1 0 0 ] [ a ]
  [ a b c ]   [ 0 1 0 ] [ b ]  =  a^2 + b^2 + c^2       (positive definite)
              [ 0 0 1 ] [ c ]

              [ 1  0 0 ] [ a ]
  [ a b c ]   [ 0 -1 0 ] [ b ]  =  a^2 - b^2 + c^2      (indefinite: negative
              [ 0  0 1 ] [ c ]                           for, e.g., x = (0,1,0))
Special matrices
• Useful fact: any matrix of the form A^T A is positive semi-definite.

  To see this: x^T (A^T A) x = (x^T A^T)(Ax) = (Ax)^T (Ax) = ||Ax||_2^2 ≥ 0
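The identity x^T (A^T A) x = ||Ax||^2 is easy to spot-check numerically; a plain-Python sketch with an arbitrary A and x of my choosing (helper name is mine):

```python
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[1, 2], [3, 4], [5, 6]]
x = [-1.3, 0.7]

Ax = matvec(A, x)
quad = sum(v * v for v in Ax)  # x^T (A^T A) x, computed as ||Ax||^2
print(quad >= 0)               # True for any x, as the proof shows
```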
Special Matrices
Matrix in (upper) companion form
Determinants
• If det(A) = 0, then A is singular.
• If det(A) ≠ 0, then A is invertible.
• To compute:
  – Simple example:
        [ a b ]
    det [ c d ] = ad - bc
  – Matlab: det(A)
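Outside Matlab, the 2x2 formula and its 3x3 cofactor expansion can be sketched in plain Python (helper names are mine):

```python
def det2(A):
    # det of a 2x2 matrix: ad - bc
    (a, b), (c, d) = A
    return a * d - b * c

def det3(A):
    # cofactor expansion along the first row
    return (A[0][0] * det2([[A[1][1], A[1][2]], [A[2][1], A[2][2]]])
          - A[0][1] * det2([[A[1][0], A[1][2]], [A[2][0], A[2][2]]])
          + A[0][2] * det2([[A[1][0], A[1][1]], [A[2][0], A[2][1]]]))

print(det2([[1, 2], [3, 4]]))                   # -2
print(det3([[1, 0, 1], [2, 3, 5], [1, 3, 4]]))  # 0 -> singular
```

The second matrix is the one whose null space was {(c,c,-c)} earlier, which is exactly why its determinant vanishes.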
Determinants
• An m-by-n matrix A is rank-deficient if it has rank
  r < min(m, n).
• Thm: rank(A) < r iff det(M) = 0 for every t-by-t
  submatrix M with r ≤ t ≤ min(m, n).
Inverse of Block Triangular Matrices
Properties of the Determinant
Eigenvalues & eigenvectors
• How can we characterize matrices?
• The solutions to Ax = λx in the form of eigenpairs
  (λ, x) = (eigenvalue, eigenvector), where x is non-zero.
• To solve this, rewrite as (A - λI)x = 0.
• λ is an eigenvalue iff det(A - λI) = 0.
Eigenvalues & eigenvectors
(A - λI)x = 0
λ is an eigenvalue iff det(A - λI) = 0
Example:

      [ 1  4   5  ]
  A = [ 0 3/4  6  ]
      [ 0  0  1/2 ]

                    [ 1-λ    4      5   ]
  det(A - λI) = det [  0   3/4-λ    6   ] = (1-λ)(3/4-λ)(1/2-λ)
                    [  0     0   1/2-λ  ]

  λ = 1,  λ = 3/4,  λ = 1/2
Eigenvalues & eigenvectors
      [ 2 0 ]    Eigenvalues λ = 2, 1 with
  A = [ 0 1 ]    eigenvectors (1,0), (0,1)

Eigenvectors of a linear transformation A are not rotated (but
will be scaled by the corresponding eigenvalue) when A is applied.

  (figure: v = (1,0) maps to Av = (2,0); (0,1) is unchanged)
Solving Ax=b
  x + 2y +  z = 0     [ 1  2  1 | 0 ]   Write the system of
      y -  z = 2      [ 0  1 -1 | 2 ]   equations in matrix form.
  x      + 2z = 1     [ 1  0  2 | 1 ]
  -------------
  x + 2y +  z = 0     [ 1  2  1 | 0 ]   Subtract the first row
      y -  z = 2      [ 0  1 -1 | 2 ]   from the last row.
    -2y +  z = 1      [ 0 -2  1 | 1 ]
  -------------
  x + 2y +  z = 0     [ 1  2  1 | 0 ]   Add 2 copies of the second
      y -  z = 2      [ 0  1 -1 | 2 ]   row to the last row.
         -  z = 5     [ 0  0 -1 | 5 ]

Now solve by back-substitution: z = -5, y = 2 + z = -3, x = -2y - z = 11.
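The back-substitution steps, and a check that the result actually satisfies the original system, as a plain-Python sketch (helper name is mine):

```python
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[1, 2, 1], [0, 1, -1], [1, 0, 2]]
b = [0, 2, 1]

# back-substitution from the triangular form above
z = -5          # -z = 5
y = 2 + z       # y - z = 2       ->  y = -3
x = -2 * y - z  # x + 2y + z = 0  ->  x = 11

print([x, y, z])                  # [11, -3, -5]
print(matvec(A, [x, y, z]) == b)  # True
```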
Solving Ax=b & condition numbers
• Matlab: linsolve(A,b)
• How stable is the solution?
• If A or b is changed slightly, how much does it affect x?
• The condition number c of A measures this:
  c = λmax / λmin   (for symmetric A; in general, the ratio of
  largest to smallest singular values)
• Values of c near 1 are good.
