
Transformations

2.1 Linear Transformation

When we multiply an 𝑚 × 𝑛 matrix by an 𝑛 × 1 column vector, the result is an 𝑚 × 1 column
vector. In this section we will discuss how, through matrix multiplication, an 𝑚 × 𝑛 matrix
transforms an 𝑛 × 1 column vector into an 𝑚 × 1 column vector. Recall that the 𝑛 × 1 vector
given by

$$\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$

is said to belong to ℝⁿ, which is the set of all 𝑛 × 1 vectors. In this section, we will discuss
transformations of vectors in ℝⁿ.
Example.
Consider the matrix
$$A = \begin{bmatrix} 1 & 2 & 0 \\ 2 & 1 & 0 \end{bmatrix}.$$
Show that, by matrix multiplication, 𝐴 transforms vectors in ℝ³ into vectors in ℝ².
Solution:
Vectors in ℝ³ are vectors of size 3 × 1, while vectors in ℝ² are vectors of size 2 × 1. If we
multiply 𝐴, which is a 2 × 3 matrix, by a 3 × 1 vector, the result will be a 2 × 1 vector. This is
what we mean when we say 𝐴 transforms vectors.

For $\begin{bmatrix} x \\ y \\ z \end{bmatrix}$ in ℝ³, multiply on the left by the given matrix to obtain the new vector. This product
looks like
$$\begin{bmatrix} 1 & 2 & 0 \\ 2 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} x + 2y \\ 2x + y \end{bmatrix}.$$
The resulting product is a 2 × 1 vector, determined only by the choice of 𝑥 and 𝑦 (the third
column of 𝐴 is zero, so 𝑧 has no effect).


Here, the vector $\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$ in ℝ³ is transformed by the matrix into the vector $\begin{bmatrix} 5 \\ 4 \end{bmatrix}$ in ℝ².
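As a quick numerical check, the product above can be reproduced in a few lines; this is a minimal sketch using NumPy, which is assumed to be available:

```python
import numpy as np

# The 2x3 matrix A from the example above.
A = np.array([[1, 2, 0],
              [2, 1, 0]])

# A vector in R^3 (size 3x1, stored here as a 1-D array).
v = np.array([1, 2, 3])

# Multiplying A by v transforms the vector into R^2.
w = A @ v
print(w)  # [5 4]
```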

The idea is to define a function which takes vectors in ℝ³ and delivers new vectors
in ℝ². In this case, that function is multiplication by the matrix 𝐴. Let 𝑇 denote such a function.
The notation 𝑇: ℝⁿ → ℝᵐ means that the function 𝑇 transforms vectors in ℝⁿ into vectors
in ℝᵐ. The notation 𝑇(𝑥⃗) means the transformation 𝑇 applied to the vector 𝑥⃗. The above
example demonstrated a transformation achieved by matrix multiplication. In this case, we
often write
$$T_A(\vec{x}) = A\vec{x}.$$
Therefore, $T_A$ is the transformation determined by the matrix 𝐴. In this case we say that 𝑇 is a
matrix transformation.
Recall the property of matrix multiplication that states that for scalars 𝑘 and 𝑝,
$$A(kB + pC) = k(AB) + p(AC).$$
In particular, for 𝐴 an 𝑚 × 𝑛 matrix and 𝐵 and 𝐶 𝑛 × 1 vectors in ℝⁿ, this formula holds. In
other words, this means that matrix multiplication gives an example of a linear transformation,
which we will now define.
Definition:
Let 𝑇: ℝⁿ → ℝᵐ be a function, where for each 𝑥⃗ ∈ ℝⁿ, 𝑇(𝑥⃗) ∈ ℝᵐ. Then 𝑇 is a linear
transformation if, whenever 𝑘, 𝑝 are scalars and 𝑥⃗₁ and 𝑥⃗₂ are vectors in ℝⁿ (𝑛 × 1 vectors),
$$T(k\vec{x}_1 + p\vec{x}_2) = kT(\vec{x}_1) + pT(\vec{x}_2).$$
Proof (that matrix transformations are linear):
We need to show that for all scalars 𝑘, 𝑝 and vectors 𝑥⃗₁, 𝑥⃗₂,
$$T_A(k\vec{x}_1 + p\vec{x}_2) = kT_A(\vec{x}_1) + pT_A(\vec{x}_2).$$
Let $T_A(\vec{x}) = A\vec{x}$. Then
$$T_A(k\vec{x}_1 + p\vec{x}_2) = A(k\vec{x}_1 + p\vec{x}_2) = kA\vec{x}_1 + pA\vec{x}_2 = kT_A(\vec{x}_1) + pT_A(\vec{x}_2).$$
Therefore, 𝑇 is a linear transformation. From the above, we can conclude that matrix
transformations are in fact linear transformations.
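The linearity property just proved can also be verified numerically. In this sketch the particular vectors `x1`, `x2` and scalars `k`, `p` are arbitrary illustrative choices, not taken from the text:

```python
import numpy as np

# The matrix transformation T_A(x) = A @ x from the example above.
A = np.array([[1, 2, 0],
              [2, 1, 0]])

def T(x):
    """Matrix transformation T_A : R^3 -> R^2."""
    return A @ x

# Arbitrary vectors in R^3 and arbitrary scalars.
x1 = np.array([1, 2, 3])
x2 = np.array([-1, 0, 4])
k, p = 2.0, -3.0

# Linearity: T(k*x1 + p*x2) == k*T(x1) + p*T(x2).
lhs = T(k * x1 + p * x2)
rhs = k * T(x1) + p * T(x2)
print(np.allclose(lhs, rhs))  # True
```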

2.2 Eigenvalues and Eigenvectors


Let 𝑨 be an 𝑛 × 𝑛 real square matrix. A scalar 𝜆 (possibly complex) and a nonzero vector 𝒗
satisfying the equation 𝑨𝒗 = 𝜆𝒗 are said to be, respectively, an eigenvalue and an eigenvector
of 𝑨. For 𝜆 to be an eigenvalue it is necessary and sufficient for the matrix 𝜆𝑰 − 𝑨 to be singular;
that is, det[𝜆𝑰 − 𝑨] = 0, where 𝑰 is the 𝑛 × 𝑛 identity matrix.

An eigenvector, or characteristic vector, of a linear transformation is a nonzero vector that
changes by at most a scalar factor when that linear transformation is applied to it; the
corresponding scaling factor is the eigenvalue. In other words, eigenvectors are the (nonzero)
vectors whose direction does not change when the linear transformation is applied.

(Figure: in a shear mapping, a blue arrow changes direction, whereas a pink arrow does not.
The pink arrow is an eigenvector because it does not change direction; since its length is also
unchanged, its eigenvalue is 1.)

How do we find these eigen things?

We start by finding the eigenvalue. We know this equation must be true:


𝑨𝒗 = 𝜆𝒗
Next we put in an identity matrix so we are dealing with matrix-times-matrix:
$$A\vec{v} = \lambda I \vec{v}$$
Bring everything to the left-hand side and factor out 𝒗:
$$(A - \lambda I)\vec{v} = \vec{0}$$
A nonzero 𝒗 exists only if 𝑨 − 𝜆𝑰 is singular, so we can solve for 𝜆 using just the determinant:
$$|A - \lambda I| = 0$$
Example
Find the eigenvalues of the matrix
$$A = \begin{bmatrix} -6 & 3 \\ 4 & 5 \end{bmatrix}.$$
Start with $|A - \lambda I| = 0$, which is
$$\begin{vmatrix} -6-\lambda & 3 \\ 4 & 5-\lambda \end{vmatrix} = 0$$
Calculating that determinant gets:
$$(-6-\lambda)(5-\lambda) - 3 \times 4 = 0$$
Which simplifies to this quadratic equation:
$$\lambda^2 + \lambda - 42 = 0$$
whose roots are 𝜆 = −7 and 𝜆 = 6.
Now we know the eigenvalues, let us find their matching eigenvectors. Find the eigenvector
for the eigenvalue 𝜆 = 6. Start with 𝑨𝒗 = 𝜆𝒗. Put in the values we know:
$$\begin{bmatrix} -6 & 3 \\ 4 & 5 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 6 \begin{bmatrix} x \\ y \end{bmatrix}$$
After multiplying we get these two equations:
$$-6x + 3y = 6x, \qquad 4x + 5y = 6y$$
Bringing everything to the left-hand side:
$$-12x + 3y = 0, \qquad 4x - y = 0$$
Either equation reveals that 𝑦 = 4𝑥, so the eigenvector is any nonzero multiple of $\begin{bmatrix} 1 \\ 4 \end{bmatrix}$.

From the above example, we can verify that 𝑨𝒗 = 𝜆𝒗 holds:
$$\begin{bmatrix} -6 & 3 \\ 4 & 5 \end{bmatrix} \begin{bmatrix} 1 \\ 4 \end{bmatrix} = \begin{bmatrix} 6 \\ 24 \end{bmatrix} = 6 \begin{bmatrix} 1 \\ 4 \end{bmatrix}$$
And also, for the other eigenvalue 𝜆 = −7, an eigenvector is $\begin{bmatrix} 3 \\ -1 \end{bmatrix}$:
$$\begin{bmatrix} -6 & 3 \\ 4 & 5 \end{bmatrix} \begin{bmatrix} 3 \\ -1 \end{bmatrix} = \begin{bmatrix} -21 \\ 7 \end{bmatrix} = -7 \begin{bmatrix} 3 \\ -1 \end{bmatrix}$$
Example of eigenvalues for a 3 × 3 matrix.
Find the eigenvalues of the matrix
$$A = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 4 & 5 \\ 0 & 4 & 3 \end{bmatrix}.$$
First calculate 𝐴 − 𝜆𝐼:
$$\begin{bmatrix} 2 & 0 & 0 \\ 0 & 4 & 5 \\ 0 & 4 & 3 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 2-\lambda & 0 & 0 \\ 0 & 4-\lambda & 5 \\ 0 & 4 & 3-\lambda \end{bmatrix}$$
Setting its determinant to zero (expanding along the first column) gives
$$(2-\lambda)\left[(4-\lambda)(3-\lambda) - 20\right] = 0$$
This ends up being a cubic equation, but just looking at it here we see one of the roots is 2
(because of 2 − 𝜆), and the part inside the square brackets, $\lambda^2 - 7\lambda - 8$, is quadratic, with roots
of −1 and 8. Therefore, the eigenvalues are −1, 2, and 8.
We can find an eigenvector using the eigenvalues. Let us use 𝜆 = −1 to find the
eigenvector. Start with 𝑨𝒗 = 𝜆𝒗, we have
$$\begin{bmatrix} 2 & 0 & 0 \\ 0 & 4 & 5 \\ 0 & 4 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = -1 \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$
After multiplying we get these equations:
$$2x = -x, \qquad 4y + 5z = -y, \qquad 4y + 3z = -z$$
Bringing everything to the left-hand side:
$$3x = 0, \qquad 5y + 5z = 0, \qquad 4y + 4z = 0$$
So 𝑥 = 0 and 𝑦 = −𝑧, and so an eigenvector is
$$\vec{v} = \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix}.$$
Test 𝑨𝒗:
$$\begin{bmatrix} 2 & 0 & 0 \\ 0 & 4 & 5 \\ 0 & 4 & 3 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix}$$
And 𝜆𝒗:
$$-1 \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix}$$
So 𝑨𝒗 = 𝜆𝒗 holds.
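The same 3 × 3 computation can be reproduced numerically; a sketch assuming NumPy is available:

```python
import numpy as np

A = np.array([[2, 0, 0],
              [0, 4, 5],
              [0, 4, 3]])

# eigvals returns only the eigenvalues.
eigvals = np.linalg.eigvals(A)
print(sorted(np.round(eigvals, 6)))  # eigenvalues -1, 2, and 8

# Verify A v = lambda v for lambda = -1 and v = (0, 1, -1).
v = np.array([0, 1, -1])
print(A @ v)  # [ 0 -1  1], which equals -1 * v
```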

2.3 Quadratic Form

Bilinear and quadratic forms are transformations in more than one variable over a vector
space. A bilinear form in the two sets of variables (𝑥₁, 𝑥₂, 𝑥₃, …, 𝑥ₘ) and (𝑦₁, 𝑦₂, 𝑦₃, …, 𝑦ₙ)
becomes a quadratic form if the two sets are equal and 𝑥ᵢ = 𝑦ᵢ for each 𝑖. For example, 𝑓(𝑥, 𝑦) =
𝑥² − 2𝑦² + 5𝑥𝑦 is a real quadratic form in the two variables 𝑥 and 𝑦.
Now, a quadratic form over a vector space 𝑉 is defined as follows: if 𝑓 is a
symmetric bilinear form, that is, 𝑓(𝑥, 𝑦) = 𝑓(𝑦, 𝑥) for every 𝑥, 𝑦 in 𝑉, the quadratic form is
given by 𝑞(𝑥) = 𝑓(𝑥, 𝑥) for every 𝑥 in 𝑉. Here 𝑞 has the property 𝑞(𝑎𝑥) = 𝑎²𝑞(𝑥), where
𝑎 is a scalar and 𝑥 is in 𝑉. For example, if the bilinear form 𝑓 is defined by a symmetric
matrix 𝐴, so that $f(x, y) = x^T A y$, then its quadratic form will be $q(x) = x^T A x$.
A function of 𝑛 variables 𝑓(𝑥₁, 𝑥₂, …, 𝑥ₙ) is called a quadratic form if
$$f(x_1, x_2, \ldots, x_n) = X^T Q X = \sum_{i=1}^{n} \sum_{j=1}^{n} q_{ij} x_i x_j,$$
where $Q_{(n \times n)} = [q_{ij}]$ and $X^T = (x_1, x_2, \ldots, x_n)$. Without any loss of generality, 𝑄 can always
be assumed to be symmetric, since replacing 𝑄 by (𝑄 + 𝑄ᵀ)/2 leaves the form unchanged.
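A short numerical sketch of these two facts, using the quadratic form 𝑥² − 2𝑦² + 5𝑥𝑦 from above (NumPy assumed available):

```python
import numpy as np

# Symmetric matrix for the quadratic form x^2 - 2y^2 + 5xy.
Q = np.array([[1.0, 2.5],
              [2.5, -2.0]])

def q(x):
    """Evaluate the quadratic form q(x) = x^T Q x."""
    return x @ Q @ x

x = np.array([1.0, 1.0])
print(q(x))  # 1 - 2 + 5 = 4.0

# A non-symmetric matrix defining the same form; its symmetric
# part (M + M^T)/2 gives identical values of x^T M x.
M = np.array([[1.0, 5.0],
              [0.0, -2.0]])
Q_sym = (M + M.T) / 2
print(np.allclose(x @ M @ x, x @ Q_sym @ x))  # True
```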

Definitions
1. A matrix 𝑸 is positive definite if and only if the quadratic form 𝑥ᵀ𝑸𝑥 > 0 for all 𝑥 ≠ 0
(every eigenvalue is positive). For example,
$$Q = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$$
is positive definite.
2. A matrix 𝑸 is positive semidefinite if and only if the quadratic form 𝑥ᵀ𝑸𝑥 ≥ 0 for all 𝑥 and
there exists an 𝑥 ≠ 0 such that 𝑥ᵀ𝑸𝑥 = 0 (every eigenvalue ≥ 0, with at least one equal to 0).
For example,
$$Q = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$$
is positive semidefinite.
3. A matrix 𝑸 is negative definite if and only if −𝑸 is positive definite. In other words, 𝑸
is negative definite when 𝑥ᵀ𝑸𝑥 < 0 for all 𝑥 ≠ 0 (every eigenvalue is negative). For
example,
$$Q = \begin{bmatrix} -2 & 0 \\ 0 & -3 \end{bmatrix}$$
is negative definite.
4. A matrix 𝑸 is negative semidefinite if −𝑸 is positive semidefinite (every eigenvalue ≤ 0).
For example,
$$Q = \begin{bmatrix} -1 & 0 \\ 0 & 0 \end{bmatrix}$$
is negative semidefinite.
5. A matrix 𝑸 is indefinite if 𝑥ᵀ𝑸𝑥 is positive for some 𝑥 and negative for some other 𝑥 (it is
neither positive semidefinite nor negative semidefinite). For example,
$$Q = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$
is indefinite.
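The eigenvalue characterizations above suggest a simple classification routine. The helper below is a hypothetical sketch (the function name `classify` and the tolerance are illustrative choices, not from the text), using NumPy's `numpy.linalg.eigvalsh` for symmetric matrices:

```python
import numpy as np

def classify(Q, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(Q)  # eigenvalues of a symmetric matrix
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    if np.all(w <= tol):
        return "negative semidefinite"
    return "indefinite"

print(classify(np.array([[2.0, 0.0], [0.0, 3.0]])))   # positive definite
print(classify(np.array([[1.0, 0.0], [0.0, 0.0]])))   # positive semidefinite
print(classify(np.array([[1.0, 0.0], [0.0, -1.0]])))  # indefinite
```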

2.4 Principal Minor

If 𝑸 is an 𝑛 × 𝑛 matrix, then a principal minor of order 𝑘 is a submatrix of size 𝑘 × 𝑘 obtained
by deleting any 𝑛 − 𝑘 rows and their corresponding columns from the matrix 𝑸.
Example
Consider the matrix
$$Q = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}.$$
The principal minors of order 1 are essentially the diagonal elements 1, 5, and 9. The principal
minors of order 2 are the following 2 × 2 matrices:
$$\begin{bmatrix} 1 & 2 \\ 4 & 5 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 3 \\ 7 & 9 \end{bmatrix}, \qquad \begin{bmatrix} 5 & 6 \\ 8 & 9 \end{bmatrix}$$
The principal minor of order 3 is the matrix 𝑸 itself.


The determinant of a principal minor is called a principal determinant. For an 𝑛 × 𝑛
square matrix, there are 2ⁿ − 1 principal determinants in all.
The leading principal minor of order 𝑘 of an 𝑛 × 𝑛 matrix is obtained by deleting the last
𝑛 − 𝑘 rows and their corresponding columns. In the above example, the leading principal minor
of order 1 is 1 (delete the last two rows and columns). The leading principal minor of order 2
is
$$\begin{bmatrix} 1 & 2 \\ 4 & 5 \end{bmatrix},$$
while that of order 3 is the matrix 𝑸 itself. The number of leading principal
determinants of an 𝑛 × 𝑛 matrix is 𝑛.
There are some easier tests to determine whether a given matrix is positive definite,
negative definite, positive semidefinite, negative semidefinite, or indefinite. All these tests are
valid only when the matrix is symmetric.
Tests for Positive-Definite Matrices
1. All diagonal elements must be positive.
2. All the leading principal determinants must be positive.
Tests for Positive-Semidefinite Matrices
1. All diagonal elements are nonnegative.
2. All the principal determinants are nonnegative.
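The leading-principal-determinant test for positive definiteness (Sylvester's criterion) can be sketched directly. The helper name `is_positive_definite` is an illustrative choice; the routine assumes the input matrix is symmetric, as the tests above require:

```python
import numpy as np

def is_positive_definite(Q):
    """Sylvester's criterion: a symmetric matrix is positive definite
    if and only if all leading principal determinants are positive."""
    n = Q.shape[0]
    # Q[:k, :k] is the leading principal minor of order k.
    return all(np.linalg.det(Q[:k, :k]) > 0 for k in range(1, n + 1))

print(is_positive_definite(np.array([[2.0, -1.0],
                                     [-1.0, 2.0]])))  # True: dets 2, 3
print(is_positive_definite(np.array([[1.0, 2.0],
                                     [2.0, 1.0]])))   # False: dets 1, -3
```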
