Linear Algebra Study Guide 035508
Dr Phil Anderson
Room E/3.10
andersonpi1@cf.ac.uk
Contents

EN2090 – Engineering Mathematics 2: Linear Algebra
Why learn about matrices and linear algebra?
Outline syllabus

Section 1: Review of basic matrix properties
  Aims of this section
  Objectives of this section
  1.1 Basic matrix properties
  1.2 Transpose of a Matrix
  Worked Example 1.1
  1.3 Special matrices
  1.4 Matrix addition and subtraction
  Worked Example 1.2
  1.5 Matrix multiplication
  Worked Example 1.3
  1.6 Scalar product and unit vectors
  Worked example 1.4
  1.7 The determinant of a square matrix
  Worked Example 1.5
  Some properties of determinants
  1.8 The inverse of a square matrix
  Worked Example 1.5: Inverse of a 2 × 2 matrix
  Worked Example 1.6: Inverse of a 3 × 3 matrix
  1.9 Summary of Section 1
  1.10 Problems
  1.11 Solutions

Section 2: Solution of Linear Simultaneous Equations
  Aims of this section
  Objectives of this section
  2.1 Introduction
  Worked example 2.1
  2.2 Solution of homogeneous equations (Ax = 0)
  Worked example 2.2
  2.3 Solution of Ax = b by matrix inversion
  Worked example 2.3
  2.4 Solution of Ax = b using Gaussian elimination
  Worked example 2.4
  2.5 General methodology for Gaussian elimination
  Worked example 2.5
  Worked example 2.6
  2.6 Summary
  2.7 Problems

Section 3: Eigenvalues, Eigenvectors and their Applications
  Aims of this section
  Objectives of this section
  3.1 Introduction
  3.2 Calculating the Eigenvalues
  Worked example 3.1
  Worked example 3.2
  3.3 Useful properties of the eigenvalues
  3.4 Introduction to Eigenvectors
  3.5 Calculating Eigenvectors
  Worked example 3.3
  Worked example 3.4
  Worked example 3.5
  3.6 Application to the solution of coupled differential equations
  3.7 A note on repeated eigenvalues
  3.8 Summary
  3.9 Problems
Why learn about matrices and linear algebra?
Matrices have widespread use in Engineering. The aim of this part of the course is to build
on the basic introduction to matrices from Engineering Mathematics 1. The outline syllabus
is given below.
We will begin with a review of the basic properties before introducing more advanced
methods of solving linear simultaneous equations.
The other main topic is eigenvalues and eigenvectors, which has applications across the
whole of Engineering, some examples being
• Frequency and mode analysis of mechanical or electrical vibrations
• Principal stress and principal axis analysis in rigid body mechanics
• Stability analysis of general dynamical systems
Outline syllabus
In this module we will only deal with square matrices (order N × N), column vectors (order N × 1) and row vectors (order 1 × N). The integer N is the dimensionality of the vector space in which we operate.
Worked Example 1.1

$$B = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 5 & 0 \\ 3 & 0 & 6 \end{pmatrix} \text{ has transpose } B^T = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 5 & 0 \\ 3 & 0 & 6 \end{pmatrix} = B, \text{ i.e. } B \text{ is symmetric.}$$

$$x = \begin{pmatrix} 1 \\ -2 \\ 3 \end{pmatrix} \text{ is a } 3 \times 1 \text{ column vector with transpose } x^T = (1 \;\; {-2} \;\; 3)$$

$$y = (2 \;\; 0 \;\; {-1}) \text{ is a } 1 \times 3 \text{ row vector with transpose } y^T = \begin{pmatrix} 2 \\ 0 \\ -1 \end{pmatrix}$$

Note: the transpose of a column vector is a row vector (and vice versa).
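These transpose facts are easy to check numerically. The notes use MATLAB for such checks; the sketch below uses NumPy instead (an illustrative assumption, not part of the original notes):

```python
import numpy as np

B = np.array([[1, 2, 3],
              [2, 5, 0],
              [3, 0, 6]])
x = np.array([[1], [-2], [3]])   # a 3x1 column vector

# B equals its own transpose, so B is symmetric
print(np.array_equal(B.T, B))    # True

# Transposing a 3x1 column vector gives a 1x3 row vector
print(x.T.shape)                 # (1, 3)
```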
A diagonal matrix has all elements equal to zero apart from those on its leading diagonal, e.g.

$$C = \begin{pmatrix} 2 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & 7 \end{pmatrix}$$

The identity (or unit) matrix I is diagonal, with all diagonal elements equal to unity:

$$I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

A triangular matrix has non-zero elements only on and above its diagonal (upper triangular) or on and below it (lower triangular), e.g. the upper-triangular matrix

$$D = \begin{pmatrix} 1 & 4 & 3 \\ 0 & -3 & 5 \\ 0 & 0 & 1 \end{pmatrix}$$

The null (or zero) matrix 0 has all its elements equal to zero:

$$0 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

(Note: these are shown for 3 × 3 matrices but the definitions apply to any order.)
Two matrices can be added (or subtracted) provided they have the same order, e.g. a 3 × 3 matrix can be added to a 3 × 3 matrix, a 3 × 1 column vector to a 3 × 1 column vector, and a 1 × 3 row vector to a 1 × 3 row vector. We simply add (or subtract) the corresponding elements of the two matrices, i.e. if $C = A \pm B$ then the elements of C are $c_{ij} = a_{ij} \pm b_{ij}$.
Worked Example 1.2

Given $A = \begin{pmatrix} 0 & 1 & -1 \\ 2 & -3 & 1 \\ 1 & 0 & 4 \end{pmatrix}$, $B = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 5 & 0 \\ 3 & 0 & 6 \end{pmatrix}$, $x = \begin{pmatrix} 1 \\ -2 \\ 3 \end{pmatrix}$ and $y = (2 \;\; 0 \;\; {-1})$, find: A + B, A − B, $x^T + y$, $y^T - x$.

$$A + B = \begin{pmatrix} 0+1 & 1+2 & -1+3 \\ 2+2 & -3+5 & 1+0 \\ 1+3 & 0+0 & 4+6 \end{pmatrix} = \begin{pmatrix} 1 & 3 & 2 \\ 4 & 2 & 1 \\ 4 & 0 & 10 \end{pmatrix},$$

$$A - B = \begin{pmatrix} -1 & -1 & -4 \\ 0 & -8 & 1 \\ -2 & 0 & -2 \end{pmatrix},$$

$$x^T + y = (1 \;\; {-2} \;\; 3) + (2 \;\; 0 \;\; {-1}) = (3 \;\; {-2} \;\; 2),$$

$$y^T - x = \begin{pmatrix} 2-1 \\ 0-(-2) \\ -1-3 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ -4 \end{pmatrix}$$
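These elementwise results can be confirmed with a quick NumPy sketch (an illustrative tool, not part of the original notes):

```python
import numpy as np

A = np.array([[0, 1, -1], [2, -3, 1], [1, 0, 4]])
B = np.array([[1, 2, 3], [2, 5, 0], [3, 0, 6]])
x = np.array([1, -2, 3])
y = np.array([2, 0, -1])

# Addition and subtraction are elementwise, defined only for equal orders
print(A + B)     # [[1 3 2], [4 2 1], [4 0 10]]
print(A - B)     # [[-1 -1 -4], [0 -8 1], [-2 0 -2]]
print(x + y)     # [ 3 -2  2]
```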
If $C = AB$ then the elements of the product C are:

$$c_{ij} = \sum_{k=1}^{N} a_{ik} b_{kj}$$

Only the following multiplications are allowed using N-dimensional square matrices and vectors (row or column): $x^T x$, $Ax$, $AB$.
Worked Example 1.3

$$x^T x = (1 \;\; {-2} \;\; 3)\begin{pmatrix} 1 \\ -2 \\ 3 \end{pmatrix} = (1)(1) + (-2)(-2) + (3)(3) = 14$$

$$Ax = \begin{pmatrix} 0 & 1 & -1 \\ 2 & -3 & 1 \\ 1 & 0 & 4 \end{pmatrix}\begin{pmatrix} 1 \\ -2 \\ 3 \end{pmatrix} = \begin{pmatrix} (0)(1) + (1)(-2) + (-1)(3) \\ (2)(1) + (-3)(-2) + (1)(3) \\ (1)(1) + (0)(-2) + (4)(3) \end{pmatrix} = \begin{pmatrix} -5 \\ 11 \\ 13 \end{pmatrix}$$

$$AB = \begin{pmatrix} 0 & 1 & -1 \\ 2 & -3 & 1 \\ 1 & 0 & 4 \end{pmatrix}\begin{pmatrix} 1 & 2 & 3 \\ 2 & 5 & 0 \\ 3 & 0 & 6 \end{pmatrix} = \begin{pmatrix} -1 & 5 & -6 \\ -1 & -11 & 12 \\ 13 & 2 & 27 \end{pmatrix}$$
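The same products can be checked numerically; a NumPy sketch (illustrative, not part of the original notes):

```python
import numpy as np

A = np.array([[0, 1, -1], [2, -3, 1], [1, 0, 4]])
B = np.array([[1, 2, 3], [2, 5, 0], [3, 0, 6]])
x = np.array([1, -2, 3])

# x^T x: scalar product of x with itself
print(x @ x)      # 14

# A x: matrix times column vector
print(A @ x)      # [-5 11 13]

# A B: matrix times matrix (note AB != BA in general)
print(A @ B)      # [[-1 5 -6], [-1 -11 12], [13 2 27]]
```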
The magnitude of a row vector y is

$$|y| = \sqrt{y\,y^T} = \sqrt{y_1^2 + y_2^2 + \cdots + y_N^2}.$$

The same applies to a column vector x, except its magnitude is defined to be $|x| = \sqrt{x^T x}$. Any non-zero vector z can be normalised into a unit vector (of magnitude 1) by dividing by its magnitude:

$$\hat{z} = \frac{z}{|z|}$$

Two vectors are said to be orthogonal if $a\,b^T = 0$ (for row vectors) or if $a^T b = 0$ (for column vectors). The scalar product of a vector with itself enables the magnitude of the vector to be calculated.
Worked example 1.4

$$x = \begin{pmatrix} -2 \\ 3 \\ 1 \end{pmatrix}: \quad |x| = \sqrt{x^T x} = \sqrt{(-2)^2 + (3)^2 + (1)^2} = \sqrt{14} \;\rightarrow\; \hat{x} = \frac{x}{|x|} = \frac{1}{\sqrt{14}}\begin{pmatrix} -2 \\ 3 \\ 1 \end{pmatrix}$$

$$y = (2 \;\; 0 \;\; 4): \quad |y| = \sqrt{y\,y^T} = \sqrt{(2)^2 + (0)^2 + (4)^2} = \sqrt{20} = 2\sqrt{5} \;\rightarrow\; \hat{y} = \frac{y}{|y|} = \frac{1}{\sqrt{5}}(1 \;\; 0 \;\; 2)$$

The two vectors are orthogonal, since

$$x^T y^T = (-2 \;\; 3 \;\; 1)\begin{pmatrix} 2 \\ 0 \\ 4 \end{pmatrix} = (-2)(2) + (3)(0) + (1)(4) = 0$$
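A NumPy sketch of the same calculations (illustrative, not part of the original notes):

```python
import numpy as np

x = np.array([-2, 3, 1])
y = np.array([2, 0, 4])

mag_x = np.sqrt(x @ x)        # sqrt(14)
x_hat = x / mag_x             # unit vector along x

# A unit vector has magnitude 1
print(np.isclose(np.linalg.norm(x_hat), 1.0))   # True

# Orthogonality: the scalar product is zero
print(x @ y)                  # 0
```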
• Any row can be used to find det(A) but we usually use the first.
• If det(A) = 0 then the matrix A is said to be singular.

Worked Example 1.5

$$\begin{vmatrix} -1 & 2 & 0 \\ 3 & 1 & 1 \\ 2 & -1 & 2 \end{vmatrix} = -1\begin{vmatrix} 1 & 1 \\ -1 & 2 \end{vmatrix} - 2\begin{vmatrix} 3 & 1 \\ 2 & 2 \end{vmatrix} + 0\begin{vmatrix} 3 & 1 \\ 2 & -1 \end{vmatrix} = -3 - 8 + 0 = -11$$
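The result can be checked numerically; a NumPy sketch (illustrative, not part of the original notes):

```python
import numpy as np

M = np.array([[-1, 2, 0],
              [3, 1, 1],
              [2, -1, 2]])

# np.linalg.det works via LU factorisation, so the result is a float
print(np.linalg.det(M))   # approximately -11
```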
Some properties of determinants

We will illustrate these properties using the 2 × 2 matrix

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \quad \det(A) = ad - bc.$$

If two rows (or columns) are swapped, det(A) changes sign:

$$B = \begin{pmatrix} c & d \\ a & b \end{pmatrix}, \quad \det(B) = bc - ad = -\det(A).$$
Worked Example 1.5: Inverse of a 2 × 2 matrix

$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \;\rightarrow\; \det(A) = -2, \quad \mathrm{adj}(A) = \begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix} \;\rightarrow\; A^{-1} = \frac{\mathrm{adj}(A)}{\det(A)} = \begin{pmatrix} -2 & 1 \\ 1.5 & -0.5 \end{pmatrix}$$

Check:

$$A^{-1}A = \begin{pmatrix} -2 & 1 \\ 1.5 & -0.5 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \text{ (as required)}$$
Worked Example 1.6: Inverse of a 3 × 3 matrix

$$A = \begin{pmatrix} 1 & -2 & 1 \\ -1 & 1 & 0 \\ -2 & -1 & 1 \end{pmatrix} \;\rightarrow\; \det(A) = 1(1) + 2(-1) + 1(3) = 2$$

The inverse can be checked in MATLAB:

>> A=[1 -2 1; -1 1 0; -2 -1 1]
A =
     1    -2     1
    -1     1     0
    -2    -1     1
>> inv(A)
ans =
    0.5000    0.5000   -0.5000
    0.5000    1.5000   -0.5000
    1.5000    2.5000   -0.5000
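The equivalent check in NumPy (an illustrative sketch, not part of the original notes):

```python
import numpy as np

A = np.array([[1, -2, 1],
              [-1, 1, 0],
              [-2, -1, 1]])

A_inv = np.linalg.inv(A)
print(A_inv)

# Verify A^-1 A = I (to floating-point accuracy)
print(np.allclose(A_inv @ A, np.eye(3)))   # True
```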
1.2 Given $u = \begin{pmatrix} 2 \\ 0 \\ -2 \end{pmatrix}$, $v = (4 \;\; 1 \;\; 2)$, $C = \begin{pmatrix} 1 & 0 & 1 \\ 2 & 1 & 1 \\ 1 & 0 & 2 \end{pmatrix}$, $D = \begin{pmatrix} -3 & 1 & 2 \\ -1 & 1 & 0 \\ 3 & 2 & 1 \end{pmatrix}$, calculate

(a) vu, (b) $u^T u$, (c) Cu, and (d) CD.

1.3 Calculate the magnitudes and unit vectors of x, y, u and v in P1.1 and P1.2.

1.4 Write down two vectors that are orthogonal to x and two vectors orthogonal to u.

1.6 Demonstrate the various determinant properties listed in section 1.7 applied to det(C).

1.7 Calculate the inverses of matrices A, B, C and D. Verify your results by performing the check $AA^{-1} = I$, etc. for B, C and D.
1.11 Solutions

1.1 (a) $yx = -5$, (b) $xy = \begin{pmatrix} 1 & -3 \\ 2 & -6 \end{pmatrix}$, (c) $x^T x = 5$, (d) $Ax = \begin{pmatrix} 5 \\ 1 \end{pmatrix}$, (e) doesn't exist, since no. of columns in B ≠ no. of rows in y, (f) $AB = \begin{pmatrix} 6 & 9 \\ -2 & 5 \end{pmatrix}$, (g) $BA = \begin{pmatrix} 5 & 3 \\ -6 & 6 \end{pmatrix}$

1.2 (a) $vu = 4$, (b) $u^T u = 8$, (c) $Cu = \begin{pmatrix} 0 \\ 2 \\ -2 \end{pmatrix} = 2\begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}$, (d) $CD = \begin{pmatrix} 0 & 3 & 3 \\ -4 & 5 & 5 \\ 3 & 5 & 4 \end{pmatrix}$

1.3 $x^T x = 5 \;\rightarrow\; \hat{x} = \frac{x}{\sqrt{5}} = \frac{1}{\sqrt{5}}\begin{pmatrix} 1 \\ 2 \end{pmatrix}$, $\quad u^T u = 8 \;\rightarrow\; \hat{u} = \frac{u}{2\sqrt{2}} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}$,

$y\,y^T = 10 \;\rightarrow\; \hat{y} = \frac{y}{\sqrt{10}} = \frac{1}{\sqrt{10}}(1 \;\; {-3})$, $\quad v\,v^T = 21 \;\rightarrow\; \hat{v} = \frac{v}{\sqrt{21}} = \frac{1}{\sqrt{21}}(4 \;\; 1 \;\; 2)$

1.4 $\begin{pmatrix} 2 \\ -1 \end{pmatrix}$ and $\begin{pmatrix} -2 \\ 1 \end{pmatrix}$ are orthogonal to x; $\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$ and $\begin{pmatrix} -1 \\ 0 \\ -1 \end{pmatrix}$ are orthogonal to u.

1.6 Swap cols 1 and 2 $\rightarrow \begin{vmatrix} 0 & 1 & 1 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{vmatrix} = -1$; $\quad C^T = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 1 & 0 \\ 1 & 1 & 2 \end{pmatrix} \rightarrow \det(C^T) = 1 = \det(C)$;

Multiply row 3 by 5 $\rightarrow \begin{vmatrix} 1 & 0 & 1 \\ 2 & 1 & 1 \\ 5 & 0 & 10 \end{vmatrix} = 5 = 5\det(C)$;

Replace col 1 with (col 1 + col 3) $\rightarrow \begin{vmatrix} 2 & 0 & 1 \\ 3 & 1 & 1 \\ 3 & 0 & 2 \end{vmatrix} = 1 = \det(C)$

1.7 $A^{-1} = \frac{1}{4}\begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix} = \begin{pmatrix} 0.25 & -0.25 \\ 0.25 & 0.75 \end{pmatrix}$, $\quad B^{-1} = \frac{1}{12}\begin{pmatrix} 6 & -1 \\ 0 & 2 \end{pmatrix}$,

$C^{-1} = \begin{pmatrix} 2 & 0 & -1 \\ -3 & 1 & 1 \\ -1 & 0 & 1 \end{pmatrix}$, $\quad D^{-1} = \frac{1}{12}\begin{pmatrix} -1 & -3 & 2 \\ -1 & 9 & 2 \\ 5 & -9 & 2 \end{pmatrix}$
Section 2: Solution of Linear Simultaneous Equations
Aims of this section
To study the use of matrix methods to solve a system of linear simultaneous equations, with
emphasis on Gaussian elimination.
2.1 Introduction

One of the main practical applications of matrices is in the solution of sets of linear equations. Consider the following system of n linear simultaneous equations ("linear" means no powers of x greater than 1):

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\;\;\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n \end{aligned}$$

or, equivalently, $Ax = b$, where A is an n × n square matrix, and x and b are n × 1 column vectors.
Worked example 2.1

The following two examples show how to cast systems of two and three linear simultaneous equations into matrix form:

$$\begin{matrix} 2x + y = 5 \\ x - 2y = -5 \end{matrix} \;\rightarrow\; \begin{pmatrix} 2 & 1 \\ 1 & -2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 5 \\ -5 \end{pmatrix}$$

$$\begin{matrix} x + y + z = 6 \\ x + 2y + 3z = 14 \\ x + 4y + 9z = 36 \end{matrix} \;\rightarrow\; \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 4 & 9 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 6 \\ 14 \\ 36 \end{pmatrix}$$
To solve $Ax = b$, find the inverse of A (i.e. $A^{-1}$), multiply both sides by $A^{-1}$ and use the fact that $A^{-1}A = AA^{-1} = I$ (where I is the identity), i.e.

$$A^{-1}Ax = Ix = x = A^{-1}b$$

If $\det(A) \neq 0$ then $A^{-1}$ exists, so the solution of the homogeneous system $Ax = 0$ is $x = A^{-1}0$, i.e. the only solution is the trivial solution $x = 0$. Conversely, there are non-trivial solutions (i.e. $x \neq 0$) of $Ax = 0$ only when $\det(A) = 0$.

This important result underpins the theory of eigenvalues and eigenvectors.
Worked example 2.2

Find the value of k for which the homogeneous equations

$$\begin{matrix} x + 5y + 3z = 0 \\ 5x + y - kz = 0 \\ x + 2y + kz = 0 \end{matrix}$$

have non-trivial solutions.
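Assuming (as the surrounding discussion suggests) that the task is to find the k for which the determinant of the coefficient matrix vanishes, a quick NumPy sketch: expanding the determinant along the first row gives $-27k + 27$, which vanishes at $k = 1$.

```python
import numpy as np

def coeff_matrix(k):
    # Coefficient matrix of the homogeneous system above
    return np.array([[1, 5, 3],
                     [5, 1, -k],
                     [1, 2, k]], dtype=float)

# Non-trivial solutions require det = 0; scan the determinant as k varies
for k in [0.0, 1.0, 2.0]:
    print(k, np.linalg.det(coeff_matrix(k)))

# det = -27k + 27, so it vanishes at k = 1
print(np.isclose(np.linalg.det(coeff_matrix(1.0)), 0.0))   # True
```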
Non-homogeneous equations are of the form $Ax = b$ with $b \neq 0$. In this case, if $\det(A) \neq 0$ then $A^{-1}$ exists and (as we've seen) there is one unique solution, given by $x = A^{-1}b$. More subtly, if $\det(A) = 0$ there is either no solution or an infinite number of solutions.
Worked example 2.3

Writing the equations in the form $Ax = b$ gives

$$\begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 4 & 9 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 6 \\ 14 \\ 36 \end{pmatrix}$$

Since $\det(A) = 2 \neq 0$, there is one unique solution given by $x = A^{-1}b$:

$$A^{-1} = \frac{1}{2}\begin{pmatrix} 6 & -5 & 1 \\ -6 & 8 & -2 \\ 2 & -3 & 1 \end{pmatrix} \;\rightarrow\; \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 6 & -5 & 1 \\ -6 & 8 & -2 \\ 2 & -3 & 1 \end{pmatrix}\begin{pmatrix} 6 \\ 14 \\ 36 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$$

i.e. $x = 1$, $y = 2$, $z = 3$.
Matrix inversion is easy for solving two or three simultaneous equations but it becomes unwieldy for four (or more) equations. It is more efficient to use an elimination method, e.g. Gaussian elimination.

To illustrate its use, consider this simple example, where we need to solve:

$$\begin{matrix} x - 2y = -5 & (1) \\ 2x + y = 5 & (2) \end{matrix}$$

Subtract eqn (2) from 2 × eqn (1), giving a new eqn (2):

$$\begin{matrix} x - 2y = -5 & (1) \\ -5y = -15 & (2) \end{matrix}$$

Eqn (2) now only has one unknown (y), so is used to find $y = 3$. On inserting the solution for y into eqn (1), there is now only one remaining unknown (x), and rearranging eqn (1) gives $x = 2y - 5 = 1$.

Here we have actually reduced the original matrix equation $Ax = b$ into a new upper-triangular form $Ux = d$:

$$A = \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix} \;\rightarrow\; U = \begin{pmatrix} 1 & -2 \\ 0 & -5 \end{pmatrix}$$
The general process of Gaussian elimination uses the row operations

• swap two rows,
• multiply a row by a non-zero constant,
• add (or subtract) a multiple of one row to (from) another,

in any order to reduce the first column of the augmented matrix to zero (apart from the first element $a_{11}$), then reduce the second column to zero apart from the first two elements, etc., until the coefficient matrix is converted into an upper-triangular matrix U (with right-hand side d). Row n ($R_n$) then has only one unknown ($x_n$), thus $x_n = d_n / u_{nn}$. $R_{n-1}$ is then used to find $x_{n-1}$ (using $x_n$), and so on for $R_{n-2} \ldots R_1$ until all of $x_1 \ldots x_n$ are found. This is best illustrated by worked example.
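The procedure can also be sketched directly in code. This is an illustrative NumPy implementation (with partial pivoting added for numerical robustness, which the notes do not discuss):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by forward elimination to upper-triangular form,
    then back substitution (assumes det(A) != 0)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)

    # Forward elimination: zero out each column below the diagonal
    for col in range(n - 1):
        # Partial pivoting: bring the row with the largest pivot to the top
        pivot = col + np.argmax(np.abs(A[col:, col]))
        A[[col, pivot]], b[[col, pivot]] = A[[pivot, col]], b[[pivot, col]]
        for row in range(col + 1, n):
            factor = A[row, col] / A[col, col]
            A[row, col:] -= factor * A[col, col:]
            b[row] -= factor * b[col]

    # Back substitution: row n has one unknown, then work upwards
    x = np.zeros(n)
    for row in range(n - 1, -1, -1):
        x[row] = (b[row] - A[row, row + 1:] @ x[row + 1:]) / A[row, row]
    return x

A = np.array([[1, 1, 1], [1, 2, 3], [1, 4, 9]])
b = np.array([6, 14, 36])
print(gaussian_elimination(A, b))   # [1. 2. 3.]
```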
Worked example 2.4

$$\text{(a)} \;\; \begin{matrix} x + y + z = 6 \\ x + 2y + 3z = 14 \\ x + 4y + 9z = 36 \end{matrix} \qquad \text{(b)} \;\; \begin{matrix} 2x - y - z = 11 \\ x + 2y - 3z = 3 \\ 3x + 2y + z = -5 \end{matrix}$$

Writing each set of equations as $Ax = b$, there will be a unique solution if $\det(A) \neq 0$, so remember to check this first.

(a) $\det(A) = 2$ (i.e. ≠ 0) so there will be one unique solution. The augmented matrix (with rows labelled) is

$$\begin{matrix} R_1 \\ R_2 \\ R_3 \end{matrix}\left(\begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 1 & 2 & 3 & 14 \\ 1 & 4 & 9 & 36 \end{array}\right)$$

Replace $R_2$ with $R_2 - R_1$, thus forming

$$\left(\begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 0 & 1 & 2 & 8 \\ 1 & 4 & 9 & 36 \end{array}\right)$$

Replace $R_3$ with $R_3 - R_1$, thus forming

$$\left(\begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 0 & 1 & 2 & 8 \\ 0 & 3 & 8 & 30 \end{array}\right)$$

Replace $R_3$ with $R_3 - 3R_2$, thus forming

$$\left(\begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 0 & 1 & 2 & 8 \\ 0 & 0 & 2 & 6 \end{array}\right)$$

Finally, re-write in the form $Ux = d$, with U an upper-triangular matrix (you can skip this step when you get more familiar with the method):

$$\begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 6 \\ 8 \\ 6 \end{pmatrix}$$

Therefore,

$$2z = 6 \rightarrow z = 3; \quad y + 2z = 8 \rightarrow y = 8 - 2z = 2; \quad x + y + z = 6 \rightarrow x = 6 - y - z = 1$$

Hence the equations are solved. This is often a lot less painful than using matrix inversion, particularly when there are more than 3 equations.
(b) $\det(A) = 30$ (i.e. ≠ 0) so there will be one unique solution. The augmented matrix is

$$\begin{matrix} R_1 \\ R_2 \\ R_3 \end{matrix}\left(\begin{array}{ccc|c} 2 & -1 & -1 & 11 \\ 1 & 2 & -3 & 3 \\ 3 & 2 & 1 & -5 \end{array}\right)$$

$R_2 \to 2R_2 - R_1$, then $R_2 \to R_2/5$:

$$\left(\begin{array}{ccc|c} 2 & -1 & -1 & 11 \\ 0 & 1 & -1 & -1 \\ 3 & 2 & 1 & -5 \end{array}\right)$$

$R_3 \to 2R_3 - 3R_1$:

$$\left(\begin{array}{ccc|c} 2 & -1 & -1 & 11 \\ 0 & 1 & -1 & -1 \\ 0 & 7 & 5 & -43 \end{array}\right)$$

$R_3 \to R_3 - 7R_2$, then $R_3 \to R_3/12$:

$$\left(\begin{array}{ccc|c} 2 & -1 & -1 & 11 \\ 0 & 1 & -1 & -1 \\ 0 & 0 & 1 & -3 \end{array}\right)$$

Back substitution gives $z = -3$; $y - z = -1 \rightarrow y = -4$; $2x - y - z = 11 \rightarrow x = 2$.
Worked example 2.5

$$\text{(a)} \;\; \begin{matrix} x + 2y + 3z = 10 \\ -x + y + z = 0 \\ y - z = 1 \end{matrix} \qquad \text{(b)} \;\; \begin{matrix} 2x + 3y + 4z = 1 \\ x + 2y + 3z = 1 \\ x + 4y + 5z = 2 \end{matrix}$$

(a) $R_2 \to R_1 + R_2$; $R_3 \to R_2 - 3R_3$:

$$\begin{matrix} R_1 \\ R_2 \\ R_3 \end{matrix}\left(\begin{array}{ccc|c} 1 & 2 & 3 & 10 \\ 0 & 3 & 4 & 10 \\ 0 & 0 & 7 & 7 \end{array}\right)$$

$$7z = 7 \rightarrow z = 1; \quad 3y + 4z = 10 \rightarrow y = 2; \quad x + 2y + 3z = 10 \rightarrow x = 3$$

(b) $R_2 \to 2R_2 - R_1$; $R_3 \to 2R_3 - R_1$; $R_3 \to 5R_2 - R_3$:

$$\begin{matrix} R_1 \\ R_2 \\ R_3 \end{matrix}\left(\begin{array}{ccc|c} 2 & 3 & 4 & 1 \\ 0 & 1 & 2 & 1 \\ 0 & 0 & 4 & 2 \end{array}\right)$$

thus leading to the solutions $x = -\frac{1}{2}$, $y = 0$, $z = \frac{1}{2}$.

Worked example 2.6

$$\text{(c)} \;\; \begin{matrix} w + 2x + 3y + z = 5 \\ 2w + x + y + z = 3 \\ w + 2x + y = 4 \\ x + y + 2z = 0 \end{matrix}$$

(c) $\det(A) = 12$ (i.e. ≠ 0), so one unique solution. To solve, first form the augmented matrix and reduce it using the operations $R_2 \to 2R_1 - R_2$; $R_3 \to R_1 - R_3$; $R_4 \to 3R_4 - R_2$; $R_4 \to R_3 + R_4$:

$$\begin{matrix} R_1 \\ R_2 \\ R_3 \\ R_4 \end{matrix}\left(\begin{array}{cccc|c} 1 & 2 & 3 & 1 & 5 \\ 2 & 1 & 1 & 1 & 3 \\ 1 & 2 & 1 & 0 & 4 \\ 0 & 1 & 1 & 2 & 0 \end{array}\right) \;\rightarrow\; \left(\begin{array}{cccc|c} 1 & 2 & 3 & 1 & 5 \\ 0 & 3 & 5 & 1 & 7 \\ 0 & 0 & 2 & 1 & 1 \\ 0 & 0 & 0 & 6 & -6 \end{array}\right)$$

The solution is $w = 1$, $x = 1$, $y = 1$, $z = -1$.
2.6 Summary
Homogeneous equations have the form $Ax = 0$. There are only non-zero solutions for x (an infinite number of them) when $\det(A) = 0$.

Non-homogeneous equations have the form $Ax = b$ (with $b \neq 0$), and have a single unique solution for x when $\det(A) \neq 0$. The solution is obtained by matrix inversion or (often more conveniently) by elimination methods (e.g. Gaussian elimination).

After completing this section you should be able to use Gaussian elimination to find the solution of $Ax = b$ for any number of linear simultaneous equations.
2.7 Problems
2.1 Determine the values of the constants p, q and r for which the following sets of homogeneous equations have non-trivial (i.e. non-zero) solutions.

$$\text{(a)} \;\; \begin{matrix} 3px + 2y = 0 \\ 2x + py = 0 \end{matrix} \qquad \text{(b)} \;\; \begin{matrix} 3x + y + 2z = 0 \\ 4x + 2y - qz = 0 \\ 2x - y + 3qz = 0 \end{matrix} \qquad \text{(c)} \;\; \begin{matrix} x + y - rz = 0 \\ rx - 3y + 11z = 0 \\ 2x + 4y - 8z = 0 \end{matrix}$$

2.2 Use Gaussian elimination to solve these sets of simultaneous equations (remember to check that a unique solution exists before proceeding – see the worked examples in the lecture notes for guidance).

$$\text{(a)} \;\; \begin{matrix} 2x + y = 3 \\ x - y = 1 \end{matrix} \qquad \text{(b)} \;\; \begin{matrix} 3x + 2y = -1 \\ 7x + 3y = 6 \end{matrix} \qquad \text{(c)} \;\; \begin{matrix} 8x + 5y = 29.5 \\ -3x + 9y = 27 \end{matrix}$$

2.3 Use Gaussian elimination to solve this set of simultaneous equations (remember to check that a unique solution exists before proceeding).

$$\begin{matrix} 4w - x - z = -4 \\ -w + 4x - y = 1 \\ -x + 4y - z = 4 \\ -w - y + 4z = 10 \end{matrix}$$
Section 3 Eigenvalues, Eigenvectors and their Applications
3.1 Introduction

An eigenvalue problem has the form $Ax = \lambda x$, where A is an n × n matrix, x is an n × 1 column vector and λ is a scalar. If $x \neq 0$, there are generally n values of the scalar quantity λ, the eigenvalues of A. Each eigenvalue is associated with a non-zero solution x (an n × 1 column vector); these solutions x are the eigenvectors of A (generally there are n of them).

$$Ax = \lambda x = \lambda I x \;\rightarrow\; (A - \lambda I)x = 0$$

i.e.

$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \lambda\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \;\rightarrow\; \begin{pmatrix} a_{11}-\lambda & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22}-\lambda & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn}-\lambda \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$

3.2 Calculating the Eigenvalues

As we found in section 2.2, the new system of homogeneous equations $(A - \lambda I)x = 0$ has non-zero solutions ($x \neq 0$) only if $\det(A - \lambda I) = 0$. Direct evaluation of the determinant gives an nth-order polynomial in λ, called the characteristic equation of A, whose solutions are the n eigenvalues.
Worked example 3.1

Find the eigenvalues of $A = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$ and $B = \begin{pmatrix} 3 & 2 \\ 1 & 1 \end{pmatrix}$.

$$\det(A - \lambda I) = \begin{vmatrix} 2-\lambda & -1 \\ -1 & 2-\lambda \end{vmatrix} = (2-\lambda)(2-\lambda) - 1 = 0$$

$$\lambda^2 - 4\lambda + 3 = 0 \;\rightarrow\; (\lambda - 1)(\lambda - 3) = 0 \;\rightarrow\; \lambda = 1 \text{ and } \lambda = 3$$

B is also a 2 × 2 matrix, whose characteristic equation is again a quadratic, yielding two eigenvalues, found by solving:

$$\det(B - \lambda I) = \begin{vmatrix} 3-\lambda & 2 \\ 1 & 1-\lambda \end{vmatrix} = (3-\lambda)(1-\lambda) - 2 = 0 \;\rightarrow\; \lambda^2 - 4\lambda + 1 = 0$$

This doesn't factorise, so use the quadratic formula:

$$\lambda = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{4 \pm \sqrt{12}}{2} = 2 \pm \sqrt{3}$$
Worked example 3.2

Find the characteristic equation and eigenvalues of

$$A = \begin{pmatrix} 1 & 1 & -2 \\ -1 & 2 & 1 \\ 0 & 1 & -1 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 2 & 0 & 1 \\ -1 & 4 & -1 \\ -1 & 2 & 0 \end{pmatrix}$$

The characteristic equation of a 3 × 3 matrix like A will be a cubic equation yielding three eigenvalues, found by setting $\det(A - \lambda I) = 0$, i.e.

$$\begin{vmatrix} 1-\lambda & 1 & -2 \\ -1 & 2-\lambda & 1 \\ 0 & 1 & -1-\lambda \end{vmatrix} = (1-\lambda)\left(-(2-\lambda)(1+\lambda) - 1\right) - (1+\lambda) + 2 = 0$$
3.3 Useful properties of the eigenvalues

• The sum of the eigenvalues is the trace of A, i.e. $\sum_{i=1}^{n} \lambda_i = \sum_{i=1}^{n} a_{ii} = \mathrm{trace}(A)$.

• The product of the eigenvalues is the determinant of A, i.e. $\prod_{i=1}^{n} \lambda_i = \det(A)$.

• The eigenvalues of $A^T$ are also $\lambda_1, \lambda_2, \lambda_3, \ldots, \lambda_n$.

• The eigenvalues of $A^{-1}$ are $\dfrac{1}{\lambda_1}, \dfrac{1}{\lambda_2}, \dfrac{1}{\lambda_3}, \ldots, \dfrac{1}{\lambda_n}$.
Worked example 3.3

For $A = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$ (with eigenvalues $\lambda_1 = 1$ and $\lambda_2 = 3$ from worked example 3.1), substituting $\lambda_1 = 1$ into $A\mathbf{x}_1 = \lambda_1\mathbf{x}_1$ gives $x_2 = x_1$. Writing $x_1 = \alpha$:

$$\mathbf{x}_1 = \alpha\begin{pmatrix} 1 \\ 1 \end{pmatrix} \;\rightarrow\; \mathbf{x}_1^T\mathbf{x}_1 = \alpha^2(1^2 + 1^2) = 2\alpha^2 \;\rightarrow\; \hat{\mathbf{x}}_1 = \frac{\mathbf{x}_1}{\sqrt{\mathbf{x}_1^T\mathbf{x}_1}} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix} \text{ (final answer)}$$

To find $\mathbf{x}_2$ (and hence $\hat{\mathbf{x}}_2$), next we substitute the eigenvalue $\lambda_2 = 3$:

$$A\mathbf{x}_2 = \lambda_2\mathbf{x}_2 \;\rightarrow\; \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 3\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \;\rightarrow\; \begin{matrix} 2x_1 - x_2 = 3x_1 \\ -x_1 + 2x_2 = 3x_2 \end{matrix}$$

Again writing $x_1 = \alpha$, both equations agree that $x_2 = -x_1 = -\alpha$, so

$$\mathbf{x}_2 = \alpha\begin{pmatrix} 1 \\ -1 \end{pmatrix} \;\rightarrow\; \hat{\mathbf{x}}_2 = \frac{\mathbf{x}_2}{\sqrt{\mathbf{x}_2^T\mathbf{x}_2}} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix}$$
Worked example 3.4

For $B = \begin{pmatrix} 2 & 0 & 1 \\ -1 & 4 & -1 \\ -1 & 2 & 0 \end{pmatrix}$ with eigenvalue $\lambda_1 = 1$, writing $x_1 = \alpha$ gives

$$\mathbf{x}_1 = \alpha\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} \;\rightarrow\; \hat{\mathbf{x}}_1 = \frac{\mathbf{x}_1}{\sqrt{\mathbf{x}_1^T\mathbf{x}_1}} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}$$

We now repeat this process for the other two eigenvalues $\lambda_2 = 2$ and $\lambda_3 = 3$ to find the other two eigenvectors $\mathbf{x}_2$ and $\mathbf{x}_3$.

$$B\mathbf{x}_2 = \lambda_2\mathbf{x}_2 \;\rightarrow\; \begin{pmatrix} 2 & 0 & 1 \\ -1 & 4 & -1 \\ -1 & 2 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 2\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \;\rightarrow\; \begin{matrix} 2x_1 + x_3 = 2x_1 \\ -x_1 + 4x_2 - x_3 = 2x_2 \\ -x_1 + 2x_2 = 2x_3 \end{matrix}$$

Using row 1, if $x_1 = \alpha$ then $x_3 = 0$; using row 3, $x_2 = \alpha/2$.

$$\mathbf{x}_2 = \alpha\begin{pmatrix} 1 \\ 1/2 \\ 0 \end{pmatrix} = \frac{\alpha}{2}\begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix} \;\rightarrow\; \hat{\mathbf{x}}_2 = \frac{\mathbf{x}_2}{\sqrt{\mathbf{x}_2^T\mathbf{x}_2}} = \frac{1}{\sqrt{5}}\begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix}$$

$$B\mathbf{x}_3 = \lambda_3\mathbf{x}_3 \;\rightarrow\; \begin{pmatrix} 2 & 0 & 1 \\ -1 & 4 & -1 \\ -1 & 2 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 3\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \;\rightarrow\; \begin{matrix} 2x_1 + x_3 = 3x_1 \\ -x_1 + 4x_2 - x_3 = 3x_2 \\ -x_1 + 2x_2 = 3x_3 \end{matrix}$$

Using row 1, if $x_1 = \alpha$ then $x_3 = \alpha$ too; using row 3, $x_2 = 2\alpha$.

$$\mathbf{x}_3 = \alpha\begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} \;\rightarrow\; \hat{\mathbf{x}}_3 = \frac{\mathbf{x}_3}{\sqrt{\mathbf{x}_3^T\mathbf{x}_3}} = \frac{1}{\sqrt{6}}\begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}$$
Worked example 3.5

For $A = \begin{pmatrix} 1 & 0 & 4 \\ 0 & 2 & 0 \\ 3 & 1 & -3 \end{pmatrix}$:

$$\begin{vmatrix} 1-\lambda & 0 & 4 \\ 0 & 2-\lambda & 0 \\ 3 & 1 & -3-\lambda \end{vmatrix} = -(1-\lambda)(2-\lambda)(3+\lambda) - 12(2-\lambda) = 0$$

$$\rightarrow\; (2-\lambda)\left(\lambda^2 + 2\lambda - 15\right) = 0 \;\rightarrow\; (\lambda-2)(\lambda-3)(\lambda+5) = 0$$

so the eigenvalues are $\lambda = -5$, 2 and 3. For $\lambda_1 = -5$ the eigenvector is $\mathbf{x}_1 = \alpha\begin{pmatrix} 2 \\ 0 \\ -3 \end{pmatrix}$, giving

$$\hat{\mathbf{x}}_1 = \frac{\mathbf{x}_1}{\sqrt{\mathbf{x}_1^T\mathbf{x}_1}} = \frac{1}{\sqrt{13}}\begin{pmatrix} 2 \\ 0 \\ -3 \end{pmatrix}$$

$$A\mathbf{x}_2 = \lambda_2\mathbf{x}_2 \;\rightarrow\; \begin{pmatrix} 1 & 0 & 4 \\ 0 & 2 & 0 \\ 3 & 1 & -3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 2\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \;\rightarrow\; \begin{matrix} x_1 + 4x_3 = 2x_1 \\ 2x_2 = 2x_2 \\ 3x_1 + x_2 - 3x_3 = 2x_3 \end{matrix}$$

Row 1 gives $x_1 = 4x_3$, and row 3 then gives $x_2 = 5x_3 - 3x_1 = -7x_3$, so

$$\mathbf{x}_2 = \alpha\begin{pmatrix} 4 \\ -7 \\ 1 \end{pmatrix} \;\rightarrow\; \hat{\mathbf{x}}_2 = \frac{\mathbf{x}_2}{\sqrt{\mathbf{x}_2^T\mathbf{x}_2}} = \frac{1}{\sqrt{66}}\begin{pmatrix} 4 \\ -7 \\ 1 \end{pmatrix}$$

$$A\mathbf{x}_3 = \lambda_3\mathbf{x}_3 \;\rightarrow\; \begin{pmatrix} 1 & 0 & 4 \\ 0 & 2 & 0 \\ 3 & 1 & -3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 3\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \;\rightarrow\; \begin{matrix} x_1 + 4x_3 = 3x_1 \\ 2x_2 = 3x_2 \\ 3x_1 + x_2 - 3x_3 = 3x_3 \end{matrix}$$

Row 2 gives $x_2 = 0$, and row 1 gives $x_1 = 2x_3$, so

$$\mathbf{x}_3 = \alpha\begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix} \;\rightarrow\; \hat{\mathbf{x}}_3 = \frac{\mathbf{x}_3}{\sqrt{\mathbf{x}_3^T\mathbf{x}_3}} = \frac{1}{\sqrt{5}}\begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix}$$
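The full eigenvalue/eigenvector calculation can be checked numerically; a NumPy sketch (the returned eigenvectors are normalised, but may carry the opposite sign to the hand-computed ones):

```python
import numpy as np

A = np.array([[1, 0, 4],
              [0, 2, 0],
              [3, 1, -3]], dtype=float)

# np.linalg.eig returns eigenvalues and unit eigenvectors (columns of V)
lam, V = np.linalg.eig(A)
print(np.sort(lam.real))   # approximately [-5. 2. 3.]

# Each column of V satisfies A v = lambda v
for i in range(3):
    print(np.allclose(A @ V[:, i], lam[i] * V[:, i]))   # True
```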
3.6 Application to the solution of coupled differential equations
Consider the specific example shown of two equal masses m tied to identical springs of
force constant k (we neglect the effects of gravity).
We will now use eigenvalue analysis to study the natural oscillations (or normal modes) of
this simple system. These ideas can be extended to much more complex dynamical systems.
The equations of motion are

$$\frac{m}{k}\frac{d^2x_1}{dt^2} = -2x_1 + x_2, \qquad \frac{m}{k}\frac{d^2x_2}{dt^2} = x_1 - 2x_2$$

Seeking oscillatory solutions $x_i = X_i e^{i\omega t}$ turns this into an eigenvalue problem:

$$\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}\begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \lambda\begin{pmatrix} X_1 \\ X_2 \end{pmatrix}, \quad \text{i.e. } A\mathbf{X} = \lambda\mathbf{X}, \text{ with } \lambda = \frac{m\omega^2}{k}$$

The eigenvalues of this matrix are $\lambda_1 = 1$ and $\lambda_2 = 3$ (worked example 3.1), so the natural frequencies are

$$\omega_i = \sqrt{\frac{\lambda_i k}{m}} \;\rightarrow\; \omega_1 = \sqrt{\frac{k}{m}} \;\text{ and }\; \omega_2 = \sqrt{\frac{3k}{m}} = \sqrt{3}\,\omega_1$$

The normal modes of oscillation are the corresponding eigenvectors.
You can see from this simple example why the set of eigenvalues of a matrix A is also often
called its spectrum. Note also that, since the system matrix involved here is symmetric, the
resulting eigenvectors are orthogonal (hence the term “normal” mode).
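The frequencies and modes can be sketched numerically (a NumPy illustration; m and k are placeholder values, not from the notes):

```python
import numpy as np

m, k = 1.0, 1.0   # placeholder values for mass and spring constant

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
lam, modes = np.linalg.eigh(A)   # symmetric matrix: use eigh (sorted eigenvalues)

# Natural frequencies omega_i = sqrt(lambda_i * k / m)
omega = np.sqrt(lam * k / m)
print(omega)                     # [1.0, sqrt(3)] for m = k = 1

# Symmetric system matrix => orthogonal normal modes
print(np.isclose(modes[:, 0] @ modes[:, 1], 0.0))   # True
```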
A matrix can have repeated eigenvalues (an eigenvalue of multiplicity m). It is sometimes possible to construct m independent eigenvectors corresponding to the same eigenvalue, thus completing the full set. However, this is not always possible, and it is important to note that if a matrix has repeated eigenvalues then a full set of independent eigenvectors may not exist for that matrix.
3.8 Summary

After completing this section you should be able to find the eigenvalues and unit eigenvectors of any 2 × 2 or 3 × 3 matrix.
3.9 Problems
3.1 Calculate the eigenvalues and (unit) eigenvectors of the following matrices.

$$\text{(a)} \begin{pmatrix} 1 & 0 \\ 1 & 2 \end{pmatrix} \quad \text{(b)} \begin{pmatrix} 2 & 3 \\ 3 & 2 \end{pmatrix} \quad \text{(c)} \begin{pmatrix} 4 & -2 \\ 1 & 1 \end{pmatrix} \quad \text{(d)} \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$$

3.2 For the matrices in 3.1, verify that

(a) the sum of the eigenvalues equals the trace of each matrix, and
(b) the product of the eigenvalues equals the determinant of each matrix.

3.3 For any symmetric matrices in 3.1, show that the eigenvectors are mutually orthogonal.

3.4 Calculate the eigenvalues and (unit) eigenvectors of the following matrices.

$$\text{(a)} \begin{pmatrix} 2 & 0 & 2 \\ 0 & 4 & 0 \\ 2 & 0 & 5 \end{pmatrix} \quad \text{(b)} \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix} \quad \text{(c)} \begin{pmatrix} 2 & 7 & 0 \\ 1 & 3 & 1 \\ 5 & 0 & 8 \end{pmatrix} \quad \text{(d)} \begin{pmatrix} 10 & -2 & 4 \\ -20 & 4 & -10 \\ -30 & 6 & -13 \end{pmatrix}$$

3.5 For the matrices in 3.4, verify that

(a) the sum of the eigenvalues equals the trace of each matrix, and
(b) the product of the eigenvalues equals the determinant of each matrix.

3.6 For any symmetric matrices in 3.4, show that the eigenvectors are mutually orthogonal.

3.7 Three balls of equal mass m are joined by identical springs of force constant k (similar to the system described in section 3.6). Determine the natural frequencies and normal modes of this system, and sketch the resulting displacements for each mode.