
Linear Algebra

Dr. Manoj BR
Assistant Professor
Department of Electronics & Electrical Engineering
Indian Institute of Technology Guwahati
Mathematical Foundations
 $x \to a$ : $x$ approaches $a$
 $x \leftarrow f(x)$ : in an algorithm, assign to variable $x$ the new value $f(x)$
 $\arg\max_x f(x)$ : the value of $x$ that leads to the maximum value of $f(x)$
 $\arg\min_x f(x)$ : the value of $x$ that leads to the minimum value of $f(x)$
 $m \bmod n$ : $m$ modulo $n$, that is, the remainder when $m$ is divided by $n$ (e.g., $7 \bmod 5 = 2$)
 $\ln(x)$ : logarithm base $e$, or natural logarithm of $x$
 $\log(x)$ : logarithm base 10 of $x$
 $\log_2(x)$ : logarithm base 2 of $x$
 $\mathbf{1}_n$ : vector of length $n$ consisting solely of 1's
 $\mathrm{diag}(a_1, \ldots, a_d)$ : matrix whose diagonal elements are $a_1, \ldots, a_d$ and off-diagonal elements are 0
 $\|\mathbf{x}\|$ : Euclidean norm of vector $\mathbf{x}$
 $\det(A)$ or $|A|$ : determinant of $A$ ($A$ is a square matrix)
 $|\mathcal{S}|$ : cardinality of set $\mathcal{S}$, i.e., the number of (possibly non-distinct) discrete elements in it
 $\mathcal{D}, \mathcal{S}, \ldots$ : calligraphic font generally denotes sets or lists

 Scalars

 $x \in \mathbb{R}$, defining a real-valued scalar

 $n \in \mathbb{N}$, defining a natural number scalar

 Vectors: A $d$-dimensional vector is assumed to be a column vector. A vector is denoted by bold font:

 $\mathbf{x} = (x_1, x_2, \ldots, x_d)^T$, $\mathbf{x} \in \mathbb{R}^d$

 Matrices: a matrix is a two-dimensional array of numbers

 $A \in \mathbb{R}^{m \times n}$, with entries $a_{11}, a_{12}, \ldots, a_{mn}$

 Complex conjugate transpose (Hermitian transpose): $A^H$, with $(A^H)_{ij} = \overline{a_{ji}}$

 If $A = A^H$, then $A$ is called a Hermitian matrix

 Example: if $A = \begin{bmatrix} 1 & 2+i \\ 2-i & 3 \end{bmatrix}$, then $A^H = \begin{bmatrix} 1 & 2+i \\ 2-i & 3 \end{bmatrix} = A$, so $A$ is Hermitian

 Tensors: an array with more than two axes

 An array of numbers arranged on a regular grid with a variable number of axes

 $\mathsf{A}$ is a tensor; $A_{i,j,k}$ is the element of $\mathsf{A}$ at coordinates $(i, j, k)$

 Example: a $(3, 3, 3)$ tensor has three axes and $3 \times 3 \times 3 = 27$ elements
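As a quick illustration (not from the slides), here is a minimal NumPy sketch of these four objects; the array values are arbitrary:

```python
import numpy as np

# Scalar, vector, matrix, and tensor as NumPy objects (a minimal sketch).
x = 3.14                            # scalar
v = np.array([1.0, 2.0, 3.0])       # 3-dimensional (column) vector
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])          # 3x2 matrix
T = np.arange(27).reshape(3, 3, 3)  # a (3,3,3) tensor: three axes

print(v.shape, A.shape, T.shape)    # (3,) (3, 2) (3, 3, 3)
print(T[1, 0, 2])                   # element of T at coordinates (1, 0, 2)
```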


 Vector norms

 $p$-norm: $\|\mathbf{x}\|_p = \left(\sum_{i=1}^{d} |x_i|^p\right)^{1/p}$, $p \ge 1$

 1-norm: $\|\mathbf{x}\|_1 = \sum_{i=1}^{d} |x_i|$

 Euclidean (2-norm): $\|\mathbf{x}\|_2 = \sqrt{\sum_{i=1}^{d} x_i^2}$

 $\infty$-norm: $\|\mathbf{x}\|_\infty = \max_i |x_i|$
 Example: consider the vector $\mathbf{x} = (3, -4)^T$

 1-norm: $\|\mathbf{x}\|_1 = |3| + |-4| = 7$

 2-norm: $\|\mathbf{x}\|_2 = \sqrt{3^2 + (-4)^2} = 5$

 $\infty$-norm: $\|\mathbf{x}\|_\infty = \max(|3|, |-4|) = 4$

Fig: pictorial representation of the unit balls of all the $p$-norms in the $(x_1, x_2)$ plane
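A minimal NumPy sketch of these three norms (the example vector matches the one above):

```python
import numpy as np

# The three common vector norms for an example vector.
x = np.array([3.0, -4.0])

print(np.linalg.norm(x, 1))        # 1-norm:   |3| + |-4| = 7
print(np.linalg.norm(x, 2))        # 2-norm:   sqrt(9 + 16) = 5
print(np.linalg.norm(x, np.inf))   # inf-norm: max(|3|, |-4|) = 4
```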
Inner product

 Inner product of $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$: $\mathbf{x}^T\mathbf{y} = \sum_{i=1}^{d} x_i y_i$

 Example: $\mathbf{x} = (1, 1, 3)^T$ and $\mathbf{y} = (3, 1, 1)^T$

Then the inner product is $\mathbf{x}^T\mathbf{y} = 1 \cdot 3 + 1 \cdot 1 + 3 \cdot 1 = 7$

 We call a vector $\mathbf{x}$ normalized if $\|\mathbf{x}\|_2 = 1$

 The angle $\theta$ between two $d$-dimensional vectors obeys $\cos\theta = \dfrac{\mathbf{x}^T\mathbf{y}}{\|\mathbf{x}\|_2 \, \|\mathbf{y}\|_2}$

$|\mathbf{x}^T\mathbf{y}| \le \|\mathbf{x}\|_2 \, \|\mathbf{y}\|_2$ (Cauchy-Schwarz inequality)

Vectors are orthogonal (perpendicular to each other) if $\mathbf{x}^T\mathbf{y} = 0$

 Example: $(1, 0, 1)^T$ and $(0, 1, 0)^T$ give $1 \cdot 0 + 0 \cdot 1 + 1 \cdot 0 = 0$,

so $(1, 0, 1)$ and $(0, 1, 0)$ are orthogonal to each other

 If orthogonal vectors also have unit norm, then they are orthonormal

Fig: two orthogonal vectors drawn in $(x_1, x_2, x_3)$ coordinates
 Outer Product

 Outer product of $\mathbf{x} \in \mathbb{R}^m$ and $\mathbf{y} \in \mathbb{R}^n$: $\mathbf{x}\mathbf{y}^T$ is the $m \times n$ matrix with entries $(\mathbf{x}\mathbf{y}^T)_{ij} = x_i y_j$

 Example: if $\mathbf{x} = (1, 2)^T$ and $\mathbf{y} = (3, 4)^T$, then $\mathbf{x}\mathbf{y}^T = \begin{bmatrix} 3 & 4 \\ 6 & 8 \end{bmatrix}$
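A minimal NumPy sketch of inner and outer products (the vectors are the examples above):

```python
import numpy as np

# Inner product, angle, orthogonality check, and outer product.
x = np.array([1.0, 1.0, 3.0])
y = np.array([3.0, 1.0, 1.0])

print(np.dot(x, y))    # inner product: 1*3 + 1*1 + 3*1 = 7
cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
print(cos_theta)       # cosine of the angle between x and y

print(np.dot(np.array([1, 0, 1]), np.array([0, 1, 0])))  # 0 -> orthogonal

print(np.outer(x, y))  # outer product: 3x3 matrix with entries x_i * y_j
```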
 Spectral norm (2-norm)

 $\|A\|_2 = \max_{\mathbf{x} \neq 0} \dfrac{\|A\mathbf{x}\|_2}{\|\mathbf{x}\|_2}$

 Largest magnification that can be obtained by applying $A$ to any vector

 Measures how much the mapping induced by $A$ can stretch vectors

 The spectral norm is defined as the square root of the maximum eigenvalue of $A^T A$


Some properties of spectral radius and norm
 $\|A\|_2^2 = \rho(A^T A)$, where $\rho(M)$ denotes the spectral radius of a matrix $M$, i.e., the maximum absolute value of an eigenvalue of $M$ (in this case all eigenvalues of $A^T A$ are real and non-negative)
 The square root of the spectral radius of $A^T A$ is the spectral norm: $\|A\|_2 = \sqrt{\rho(A^T A)}$
 If $A = A^H$, then $\|A\|_2 = \rho(A)$
 Example: find $\|A\|_2$ of $A = \begin{bmatrix} 3 & 4 \\ 0 & 0 \end{bmatrix}$, so $A^T A = \begin{bmatrix} 9 & 12 \\ 12 & 16 \end{bmatrix}$
• Characteristic equation: $\det(A^T A - \lambda I) = \lambda^2 - 25\lambda = 0$, giving eigenvalues $\lambda = 25, 0$
• Spectral radius of $A^T A$ is 25
• Spectral norm is $\|A\|_2 = \sqrt{25} = 5$
 Frobenius norm: $\|A\|_F = \sqrt{\sum_{i}\sum_{j} a_{ij}^2} = \sqrt{\mathrm{tr}(A^T A)}$

 Example: $A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$. Then $\|A\|_F = \sqrt{1 + 4 + 4 + 16} = \sqrt{25} = 5$
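A minimal NumPy sketch comparing the two matrix norms (the matrix is the spectral-norm example above):

```python
import numpy as np

# Spectral norm vs. Frobenius norm.
A = np.array([[3.0, 4.0],
              [0.0, 0.0]])

# Spectral norm = largest singular value = sqrt(max eigenvalue of A^T A).
print(np.linalg.norm(A, 2))                               # 5.0
print(np.sqrt(np.max(np.linalg.eigvals(A.T @ A)).real))   # 5.0, same value

# Frobenius norm = sqrt of the sum of squared entries.
print(np.linalg.norm(A, 'fro'))                           # 5.0 for this A
```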

Linear independence

 A set of vectors is linearly independent if no vector in the set can be written as a linear combination of any of the others

 Equivalently, $c_1\mathbf{x}_1 + c_2\mathbf{x}_2 + \ldots + c_k\mathbf{x}_k = \mathbf{0}$ only when $c_1 = c_2 = \ldots = c_k = 0$

 Example: $\mathbf{x} = (1, 0)^T$ and $\mathbf{y} = (1, 1)^T$ are linearly independent, since they are not multiples of each other and $c_1\mathbf{x} + c_2\mathbf{y}$ cannot be $\mathbf{0}$ other than when $c_1 = c_2 = 0$
 Rank

 The rank of $A$ is the number of linearly independent rows or columns of $A$

 The ranks of $A$, $A^T$, $A^T A$, and $A A^T$ are the same

 If $A$ is square and full rank, there is a unique inverse $A^{-1}$ such that $A A^{-1} = A^{-1} A = I$

 If an $n \times n$ matrix $A$ has rank $n$, then $A$ is invertible

 Example: $A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 1 & 1 \end{bmatrix}$

Echelon form: number of non-zero rows = 2 (the second row is twice the first). Hence rank = 2
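A minimal NumPy sketch of the same computation (the matrix matches the example above):

```python
import numpy as np

# Rank of a matrix whose second row is a multiple of the first.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])

print(np.linalg.matrix_rank(A))         # 2
print(np.linalg.matrix_rank(A.T @ A))   # 2: rank(A^T A) = rank(A)
```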


 Subspaces

 The space spanned by a collection of vectors $\{\mathbf{x}_1, \ldots, \mathbf{x}_k\}$,

 $\mathrm{span}(\mathbf{x}_1, \ldots, \mathbf{x}_k) = \{c_1\mathbf{x}_1 + \ldots + c_k\mathbf{x}_k : c_i \in \mathbb{R}\}$, is called a linear subspace

 If the vectors are linearly independent, they are called a basis for the subspace

 The number of basis vectors is called the dimension of the subspace

 If the basis vectors are orthogonal, then we have an orthogonal basis

 If the basis vectors are orthonormal, then we have an orthonormal basis


 Example: Let $V$ be the (real) vector space of all functions $f$ from $\mathbb{R}$ into $\mathbb{R}$

 Is $W = \{f \in V : f(x^2) = f(x)^2\}$ a subspace of $V$?

 Solution: Not a subspace

Let $f(x) = 1$ and $g(x) = x$

Then both satisfy the condition: $f(x^2) = 1 = f(x)^2$ and $g(x^2) = x^2 = g(x)^2$

But $(f + g)(x^2) = 1 + x^2$,

while $\big((f + g)(x)\big)^2 = (1 + x)^2 = 1 + 2x + x^2$

These are not equal polynomials, so the condition does not hold for $f + g$
 Is $W = \{f \in V : f(0) = 0\}$ a subspace of $V$?
 Solution: Yes, a subspace

Let $f, g \in W$ and $c \in \mathbb{R}$; then $(f + g)(0) = f(0) + g(0) = 0$ and $(cf)(0) = c\,f(0) = 0$

Thus, $f + g \in W$ and $cf \in W$

It satisfies all the required closure properties (scalar multiplication, addition, etc.) for a subspace

 Unitary

 A square matrix $U$ is called unitary if $U^H U = U U^H = I$, and

 its rows and columns are orthonormal

 Example: is $U = \dfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$ a unitary matrix? Yes: $U^H U = \dfrac{1}{2}\begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} = I$
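A minimal NumPy check of unitarity (the matrix matches the example above):

```python
import numpy as np

# Checking that U^H U = U U^H = I.
U = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

print(np.allclose(U.conj().T @ U, np.eye(2)))  # True
print(np.allclose(U @ U.conj().T, np.eye(2)))  # True
```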


Projections

 We want to find the projection point $\mathbf{p}$ of a vector $\mathbf{b}$ onto the line through a vector $\mathbf{a}$

 The point $\mathbf{p}$ must be some multiple $\mathbf{p} = \hat{x}\mathbf{a}$ of the given vector $\mathbf{a}$

 Every point on the line is a multiple of $\mathbf{a}$

 The problem is to compute the coefficient $\hat{x}$

Fig: The projection $\mathbf{p}$ is the point (on the line through $\mathbf{a}$) closest to $\mathbf{b}$; the error $\mathbf{e} = \mathbf{b} - \mathbf{p}$ is perpendicular to the line, and $\theta$ is the angle between $\mathbf{a}$ and $\mathbf{b}$

 The error $\mathbf{e} = \mathbf{b} - \hat{x}\mathbf{a}$ must be perpendicular to $\mathbf{a}$: $\mathbf{a}^T(\mathbf{b} - \hat{x}\mathbf{a}) = 0$, so $\hat{x} = \dfrac{\mathbf{a}^T\mathbf{b}}{\mathbf{a}^T\mathbf{a}}$

 The projection of the vector $\mathbf{b}$ onto the line in the direction of $\mathbf{a}$ is

Projection onto a line: $\mathbf{p} = \hat{x}\mathbf{a} = \dfrac{\mathbf{a}^T\mathbf{b}}{\mathbf{a}^T\mathbf{a}}\,\mathbf{a}$

 The vector is put before the number: $\mathbf{p} = \mathbf{a}\hat{x} = \mathbf{a}\,\dfrac{\mathbf{a}^T\mathbf{b}}{\mathbf{a}^T\mathbf{a}}$

 Projection onto a line is carried out by a projection matrix $P$

So, the projection matrix is: $P = \dfrac{\mathbf{a}\mathbf{a}^T}{\mathbf{a}^T\mathbf{a}}$

 $P$ is the matrix that multiplies $\mathbf{b}$ and produces $\mathbf{p}$: $\mathbf{p} = P\mathbf{b}$

 Properties

1. $P$ is a symmetric matrix: $P^T = P$

2. Its square is itself: $P^2 = P$ (projecting a second time changes nothing)

 Example: Find the projection of the vector $\mathbf{b} = (1, 1, 1)^T$ onto the line in the direction of $\mathbf{a} = (1, 2, 2)^T$

Solution: $\hat{x} = \dfrac{\mathbf{a}^T\mathbf{b}}{\mathbf{a}^T\mathbf{a}} = \dfrac{1 + 2 + 2}{1 + 4 + 4} = \dfrac{5}{9}$

 The projection of the vector $\mathbf{b}$ onto the line in the direction of $\mathbf{a}$ is $\mathbf{p} = \hat{x}\mathbf{a} = \dfrac{5}{9}(1, 2, 2)^T$
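A minimal NumPy sketch of this projection (the vectors match the example above):

```python
import numpy as np

# Projecting b onto the line through a.
a = np.array([1.0, 2.0, 2.0])
b = np.array([1.0, 1.0, 1.0])

x_hat = (a @ b) / (a @ a)           # coefficient: a^T b / a^T a = 5/9
p = x_hat * a                       # projection point
P = np.outer(a, a) / (a @ a)        # projection matrix a a^T / a^T a

print(p)                            # [0.5556, 1.1111, 1.1111] approximately
print(np.allclose(P @ b, p))        # True: P b = p
print(np.allclose(P @ P, P))        # True: P^2 = P
print(np.isclose(a @ (b - p), 0))   # True: error is perpendicular to a
```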


Singular value decomposition

 For any matrix $A$, there is a decomposition: $A = U\Sigma V^H$

 $U$ and $V$ are unitary matrices

 $\Sigma$ is diagonal with positive real entries

 Columns of $U$ are called the left singular vectors

 Columns of $V$ are called the right singular vectors

 Diagonal entries $\sigma_i$ of $\Sigma$ are called the singular values. They are positive, real, and sorted:

$\sigma_1 \ge \sigma_2 \ge \ldots \ge 0$; $\sigma_1$ is the largest singular value

 If $A$ is an $m \times n$ matrix, then

$U: m \times m$, $\quad \Sigma: m \times n$, $\quad V: n \times n$
 Rank: the number of non-zero singular values
 Steps for getting the SVD of a matrix $A$ (a NumPy sketch follows the worked example below)

1. The ordering of the vectors comes from the ordering of the singular values (largest to smallest)

2. The columns of $U$ are the eigenvectors of $A A^T$

3. The columns of $V$ are the eigenvectors of $A^T A$

4. The diagonal elements of $\Sigma$ are the singular values $\sigma_i = \sqrt{\lambda_i}$, where $\lambda_i$ are the eigenvalues of $A^T A$

5. The relationship between $\mathbf{u}_i$ and $\mathbf{v}_i$ (with normalization): $\mathbf{u}_i = \dfrac{A\mathbf{v}_i}{\sigma_i}$

The scaling factor comes from $\|A\mathbf{v}_i\|_2 = \sigma_i$


 Example: Find the SVD of $A$, where $A = \begin{bmatrix} 3 & 2 & 2 \\ 2 & 3 & -2 \end{bmatrix}$

Solution: First, we'll work with $A^T A = \begin{bmatrix} 13 & 12 & 2 \\ 12 & 13 & -2 \\ 2 & -2 & 8 \end{bmatrix}$

 The characteristic polynomial is $\det(A^T A - \lambda I) = -\lambda(\lambda - 25)(\lambda - 9)$, so the eigenvalues of $A^T A$ are $\lambda = 25, 9, 0$ and the singular values are $\sigma_1 = 5$, $\sigma_2 = 3$

 The eigenvalues of $A^T A$ are 25, 9, and 0, and since $A^T A$ is symmetric we know that the eigenvectors will be orthogonal

 For $\lambda = 25$, the reduced matrix $A^T A - 25I$ gives $\mathbf{v}_1 = \dfrac{1}{\sqrt{2}}(1, 1, 0)^T$

 For $\lambda = 9$, the reduced matrix $A^T A - 9I$ gives $\mathbf{v}_2 = \dfrac{1}{\sqrt{18}}(1, -1, 4)^T$

 For the last eigenvector, we could compute a unit vector perpendicular to $\mathbf{v}_1$ and $\mathbf{v}_2$ by the cross product $\mathbf{v}_3 = \mathbf{v}_1 \times \mathbf{v}_2$

so, for $\lambda = 0$ the eigenvector is given by $\mathbf{v}_3 = \dfrac{1}{3}(2, -2, -1)^T$

 We have to compute $U$ by the formula $\mathbf{u}_i = \dfrac{A\mathbf{v}_i}{\sigma_i}$, giving $\mathbf{u}_1 = \dfrac{1}{\sqrt{2}}(1, 1)^T$, $\mathbf{u}_2 = \dfrac{1}{\sqrt{2}}(1, -1)^T$
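A minimal NumPy sketch verifying this worked example numerically:

```python
import numpy as np

# SVD of the example matrix; singular values should be 5 and 3.
A = np.array([[3.0, 2.0,  2.0],
              [2.0, 3.0, -2.0]])

U, s, Vt = np.linalg.svd(A)
print(s)                              # [5. 3.] -> singular values
print(np.linalg.matrix_rank(A))       # 2 = number of non-zero singular values

# Reconstruct A from U, Sigma, V^T.
Sigma = np.zeros_like(A)
Sigma[:2, :2] = np.diag(s)
print(np.allclose(U @ Sigma @ Vt, A))  # True
```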
Eigenvectors and Eigenvalues

 $A\mathbf{v} = \lambda\mathbf{v}$; a scalar $\lambda$ that makes $(A - \lambda I)$ singular is called an eigenvalue

 The solution vector $\mathbf{v}$ and corresponding scalar $\lambda$ are called the eigenvector and associated eigenvalue, respectively

 Characteristic equation: $\det(A - \lambda I) = 0$

Properties
 $\mathrm{Trace}[A] = \sum_i \lambda_i$
 $\mathrm{Det}(A) = \prod_i \lambda_i$
 If a matrix is diagonal, then its eigenvalues are simply the entries on the diagonal, and the eigenvectors are the unit vectors parallel to the coordinate axes
Eigenvalue decomposition

 $A = Q\Lambda Q^{-1}$ exists when $Q$ (the matrix whose columns are the eigenvectors of $A$) is invertible, which is guaranteed when the eigenvalues are distinct

 Example: Find the eigenvalue decomposition of the matrix $A = \begin{bmatrix} 5 & 3 \\ 3 & 5 \end{bmatrix}$

Eigenvalue decomposition of the matrix:

$\det(A - \lambda I) = (5 - \lambda)^2 - 9 = (\lambda - 2)(\lambda - 8) = 0$

so $\lambda_1 = 2$, $\lambda_2 = 8$
 For the eigenvalue 2:
Eigenvector is given by $\mathbf{q}_1 = \dfrac{1}{\sqrt{2}}(1, -1)^T$
 For eigenvalue 8:

Eigenvector is given by $\mathbf{q}_2 = \dfrac{1}{\sqrt{2}}(1, 1)^T$

Hence $A = Q\Lambda Q^{-1}$ with $Q = \dfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}$, $\Lambda = \begin{bmatrix} 2 & 0 \\ 0 & 8 \end{bmatrix}$
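A minimal NumPy sketch verifying the decomposition and the trace/determinant properties:

```python
import numpy as np

# Eigenvalue decomposition of the example matrix.
A = np.array([[5.0, 3.0],
              [3.0, 5.0]])

lam, Q = np.linalg.eig(A)
print(lam)                                       # eigenvalues 2 and 8 (in some order)
print(np.isclose(np.trace(A), lam.sum()))        # True: trace = sum of eigenvalues
print(np.isclose(np.linalg.det(A), lam.prod()))  # True: det = product of eigenvalues

# Reconstruct A = Q Lambda Q^{-1}.
print(np.allclose(Q @ np.diag(lam) @ np.linalg.inv(Q), A))  # True
```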

Eigenvalue decomposition and SVD

 SVD of $A$: $A = U\Sigma V^T$

 The eigenvalues of $A^T A$ are the singular values of $A$ squared: $\lambda_i = \sigma_i^2$

 Eigenvectors of $A A^T$ are the left singular vectors of $A$
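A minimal NumPy check of this relationship, reusing the SVD example matrix:

```python
import numpy as np

# Eigenvalues of A^T A equal the squared singular values of A.
A = np.array([[3.0, 2.0,  2.0],
              [2.0, 3.0, -2.0]])

s = np.linalg.svd(A, compute_uv=False)
lam = np.linalg.eigvalsh(A.T @ A)[::-1]   # eigenvalues, sorted descending
print(s**2)                               # [25. 9.]
print(lam)                                # [25. 9. 0.]
```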


Matrix inversion

 For a square matrix $A$: $A^{-1}A = AA^{-1} = I$; $A^{-1} = \dfrac{\mathrm{adj}(A)}{\det(A)}$

 $\mathrm{adj}(A)$ = adjoint (adjugate) of $A$, the transpose of its cofactor matrix

 If $A$ is not square (or if $A^{-1}$ does not exist because the columns of $A$ are not linearly independent), we use the pseudoinverse $A^{+}$

 If $A^T A$ is non-singular, then $A^{+} = (A^T A)^{-1}A^T$

(or) if $A A^T$ is non-singular, $A^{+} = A^T(A A^T)^{-1}$

 Example: Find the inverse of $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$: $A^{-1} = \dfrac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$

 Example: Find the pseudoinverse of a tall matrix $A$ with independent columns: $A^{+} = (A^T A)^{-1}A^T$, which satisfies $A^{+}A = I$
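A minimal NumPy sketch of both computations (the example matrices are arbitrary):

```python
import numpy as np

# Inverse of a non-singular square matrix.
A = np.array([[5.0, 2.0],
              [3.0, 2.0]])
print(np.linalg.inv(A))

# Pseudoinverse of a tall matrix with independent columns.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
B_pinv = np.linalg.pinv(B)
print(np.allclose(B_pinv, np.linalg.inv(B.T @ B) @ B.T))  # True: (B^T B)^{-1} B^T
print(np.allclose(B_pinv @ B, np.eye(2)))                 # True: left inverse
```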


Derivative of matrices
 Suppose $f$ is a scalar-valued function of $d$ variables, which we represent as the vector $\mathbf{x} = (x_1, \ldots, x_d)^T$

Derivative or gradient of $f$ with respect to $\mathbf{x}$: $\nabla f = \dfrac{\partial f}{\partial \mathbf{x}} = \left(\dfrac{\partial f}{\partial x_1}, \ldots, \dfrac{\partial f}{\partial x_d}\right)^T$

 If we have an $n$-dimensional vector-valued function $\mathbf{f}$ of a $d$-dimensional vector $\mathbf{x}$, its derivative is the $n \times d$ Jacobian matrix $J$ with entries $J_{ij} = \dfrac{\partial f_i}{\partial x_j}$

 If $J$ is square, its determinant is called simply the Jacobian or Jacobian determinant

 Example: the gradient of $f(\mathbf{x}) = \mathbf{x}^T\mathbf{x} = \sum_i x_i^2$ is $\nabla f = 2\mathbf{x}$

 Example: for $\mathbf{f}(x_1, x_2) = (x_1 x_2, \; x_1 + x_2)^T$, the Jacobian determinant is $\det\begin{bmatrix} x_2 & x_1 \\ 1 & 1 \end{bmatrix} = x_2 - x_1$


 If the entries of matrix $M$ depend upon a scalar parameter $\theta$, then we can take the derivative of $M$ component by component, to get another matrix: $\left(\dfrac{\partial M}{\partial \theta}\right)_{ij} = \dfrac{\partial M_{ij}}{\partial \theta}$

We know that: $M M^{-1} = I$

$0 = \dfrac{\partial}{\partial \theta}(M M^{-1}) = \dfrac{\partial M}{\partial \theta}M^{-1} + M\dfrac{\partial M^{-1}}{\partial \theta}$, so $\dfrac{\partial M^{-1}}{\partial \theta} = -M^{-1}\dfrac{\partial M}{\partial \theta}M^{-1}$
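A minimal sketch (not from the slides) checking this derivative-of-the-inverse identity numerically; the parameterized matrix M(theta) is an arbitrary example:

```python
import numpy as np

# Check dM^{-1}/dtheta = -M^{-1} (dM/dtheta) M^{-1} by central differences.
def M(theta):
    return np.array([[2.0 + theta, 1.0],
                     [0.5,         3.0 * theta]])

theta, h = 1.0, 1e-6
dM = (M(theta + h) - M(theta - h)) / (2 * h)
dMinv = (np.linalg.inv(M(theta + h)) - np.linalg.inv(M(theta - h))) / (2 * h)

Minv = np.linalg.inv(M(theta))
print(np.allclose(dMinv, -Minv @ dM @ Minv, atol=1e-5))  # True
```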
Vector derivative identities

 Consider a matrix $M$ and a vector $\mathbf{y}$ that are independent of $\mathbf{x}$

After differentiation w.r.t. $\mathbf{x}$:

$\dfrac{\partial}{\partial \mathbf{x}}(M\mathbf{x}) = M$, $\qquad \dfrac{\partial}{\partial \mathbf{x}}(\mathbf{y}^T\mathbf{x}) = \mathbf{y}$, $\qquad \dfrac{\partial}{\partial \mathbf{x}}(\mathbf{x}^T M\mathbf{x}) = (M + M^T)\mathbf{x}$

 Example: for $f(\mathbf{x}) = \mathbf{x}^T M\mathbf{x}$, find $\dfrac{\partial f}{\partial \mathbf{x}}$

If $\mathbf{f}(\mathbf{x}) = M\mathbf{x}$ is an $n$-dimensional vector-valued function of a $d$-dimensional vector $\mathbf{x}$,

Then its Jacobian is $\dfrac{\partial \mathbf{f}}{\partial \mathbf{x}} = M$

 Steps (for the quadratic form):

$f(\mathbf{x}) = \mathbf{x}^T M\mathbf{x} = \sum_{i=1}^{d}\sum_{j=1}^{d} x_i M_{ij} x_j$

$\dfrac{\partial f}{\partial x_k} = \sum_{j} M_{kj}x_j + \sum_{i} x_i M_{ik}$ (differentiate the terms containing $x_k$)

$\nabla f = M\mathbf{x} + M^T\mathbf{x} = (M + M^T)\mathbf{x}$
If $M$ is symmetric, then

 $\dfrac{\partial}{\partial \mathbf{x}}(\mathbf{x}^T M\mathbf{x}) = 2M\mathbf{x}$
 Example: Let $M = \begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix}$, so $f(\mathbf{x}) = \mathbf{x}^T M\mathbf{x} = x_1^2 + 4x_1x_2 + 3x_2^2$

Solution: $\nabla f = (2x_1 + 4x_2, \; 4x_1 + 6x_2)^T = 2M\mathbf{x}$

From this example we can also verify $\dfrac{\partial}{\partial \mathbf{x}}(\mathbf{x}^T M\mathbf{x}) = (M + M^T)\mathbf{x} = 2M\mathbf{x}$ for symmetric $M$
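A minimal sketch (not from the slides) checking the quadratic-form gradient identity by finite differences; M and x are arbitrary examples, with M deliberately non-symmetric:

```python
import numpy as np

# Check grad(x^T M x) = (M + M^T) x numerically.
M = np.array([[1.0, 2.0],
              [0.0, 3.0]])
x = np.array([0.5, -1.0])

f = lambda x: x @ M @ x
h = 1e-6
grad_fd = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(2)])

print(grad_fd)        # numerical gradient
print((M + M.T) @ x)  # analytic identity, matches
```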


Second derivatives of a scalar function of a scalar

Taylor series about a point $x_0$:

$f(x) = f(x_0) + f'(x_0)(x - x_0) + \dfrac{1}{2}f''(x_0)(x - x_0)^2 + \ldots$

 Example: $e^x = 1 + x + \dfrac{x^2}{2!} + \dfrac{x^3}{3!} + \ldots$

you can represent any infinitely-differentiable function as an infinite polynomial

Analogously, if the scalar-valued $f$ is instead a function of a vector $\mathbf{x}$, we can expand $f$ in a Taylor series around a point $\mathbf{x}_0$:

$f(\mathbf{x}) = f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0)^T(\mathbf{x} - \mathbf{x}_0) + \dfrac{1}{2}(\mathbf{x} - \mathbf{x}_0)^T H (\mathbf{x} - \mathbf{x}_0) + \ldots$

 $H$ is the Hessian matrix, the matrix of second-order derivatives of $f$, here evaluated at $\mathbf{x}_0$
 Example: if $f(\mathbf{x}) = \mathbf{x}^T M\mathbf{x}$ with symmetric $M$, find the Hessian matrix: $H = 2M$

The Hessian of $f$ is given by $H_{ij} = \dfrac{\partial^2 f}{\partial x_i \partial x_j}$
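A minimal sketch (not from the slides) computing the Hessian of the quadratic example by finite differences; for f(x) = x^T M x with symmetric M, the result should be close to 2M:

```python
import numpy as np

# Finite-difference Hessian of a quadratic form.
M = np.array([[1.0, 2.0],
              [2.0, 3.0]])
f = lambda x: x @ M @ x

x0, h = np.array([0.3, 0.7]), 1e-4
E = np.eye(2)
H = np.array([[(f(x0 + h*E[i] + h*E[j]) - f(x0 + h*E[i] - h*E[j])
              - f(x0 - h*E[i] + h*E[j]) + f(x0 - h*E[i] - h*E[j])) / (4*h*h)
              for j in range(2)] for i in range(2)])

print(H)  # approximately [[2. 4.] [4. 6.]] = 2M
```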


Least square solution

 When the system $A\mathbf{x} = \mathbf{b}$ has more equations than unknowns, it usually has no exact solution

 Instead, we choose $\hat{\mathbf{x}}$ to minimize the squared error $E^2 = \|A\mathbf{x} - \mathbf{b}\|^2$

 Orthogonality of the error $\mathbf{e} = \mathbf{b} - A\hat{\mathbf{x}}$ and the column space of $A$ gives the solution
 Least square problems with several variables

 To project $\mathbf{b}$ onto a subspace – rather than just onto a line

 Choose $\hat{\mathbf{x}}$ so as to minimize the error $E = \|\mathbf{b} - A\hat{\mathbf{x}}\|$

 Projection: $\mathbf{p} = A\hat{\mathbf{x}}$

 Error vector: $\mathbf{e} = \mathbf{b} - A\hat{\mathbf{x}}$

Solution
 The error vector must be perpendicular to each column $\mathbf{a}_1, \ldots, \mathbf{a}_n$ of $A$:

$\mathbf{a}_i^T(\mathbf{b} - A\hat{\mathbf{x}}) = 0$ or $A^T(\mathbf{b} - A\hat{\mathbf{x}}) = \mathbf{0}$

(OR) the normal equations: $A^T A\hat{\mathbf{x}} = A^T\mathbf{b}$

 $A^T A$ is invertible exactly when the columns of $A$ are linearly independent

 Best estimate: $\hat{\mathbf{x}} = (A^T A)^{-1}A^T\mathbf{b}$

Projection: $\mathbf{p} = A\hat{\mathbf{x}} = A(A^T A)^{-1}A^T\mathbf{b}$

Projection matrix

 The matrix that gives $\mathbf{p} = P\mathbf{b}$: $P = A(A^T A)^{-1}A^T$
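A minimal NumPy sketch of least squares via the normal equations (A and b are arbitrary examples; A has independent columns):

```python
import numpy as np

# Least squares: normal equations vs. NumPy's lstsq.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

x_hat = np.linalg.solve(A.T @ A, A.T @ b)      # normal equations
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # NumPy's least squares
print(np.allclose(x_hat, x_ls))                # True

p = A @ x_hat                                  # projection of b onto col(A)
print(np.allclose(A.T @ (b - p), 0))           # True: error _|_ each column
```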


 Example: Solve the following system of linear equations using the matrix inversion method:
5x + 2y = 3, 3x + 2y = 5, and find x and y.

The matrix form of the system is $AX = B$, where $A = \begin{bmatrix} 5 & 2 \\ 3 & 2 \end{bmatrix}$, $X = \begin{bmatrix} x \\ y \end{bmatrix}$, $B = \begin{bmatrix} 3 \\ 5 \end{bmatrix}$

If $\det(A) = 5 \cdot 2 - 2 \cdot 3 = 4 \neq 0$,

then $A^{-1} = \dfrac{1}{4}\begin{bmatrix} 2 & -2 \\ -3 & 5 \end{bmatrix}$

Solution: $X = A^{-1}B = \dfrac{1}{4}\begin{bmatrix} 2 & -2 \\ -3 & 5 \end{bmatrix}\begin{bmatrix} 3 \\ 5 \end{bmatrix} = \dfrac{1}{4}\begin{bmatrix} -4 \\ 16 \end{bmatrix} = \begin{bmatrix} -1 \\ 4 \end{bmatrix}$, so $x = -1$, $y = 4$
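A minimal NumPy sketch of the same system:

```python
import numpy as np

# Solving the 2x2 system from the example.
A = np.array([[5.0, 2.0],
              [3.0, 2.0]])
B = np.array([3.0, 5.0])

print(np.linalg.inv(A) @ B)   # [-1.  4.] -> x = -1, y = 4
print(np.linalg.solve(A, B))  # same answer, preferred over forming the inverse
```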
References

 G. Strang, Linear Algebra and Its Applications

 Gilbert Strang lectures on Linear Algebra (MIT)

https://www.youtube.com/watch?v=QVKj3LADCnA&list=PL49CF3715CB9EF31D

 Carl D. Meyer, Matrix Analysis and Applied Linear Algebra

 Seymour Lipschutz and Marc Lipson, Linear Algebra, Schaum’s Outlines
