
Week 10

Part 2. Mainly based on the textbook by D.J. Inman
2. Eigenvalues and Natural Frequencies
 We can connect the vibration problem to the algebraic eigenvalue
problem developed in mathematics
 This gives us powerful computational tools and some powerful
theory
 It sets the background needed for analyzing systems with an
arbitrary number of DOF
 These will most likely need the help of computers and
numerical software, e.g. Matlab
 All numerical software packages have eigensolvers, so the
painful calculations of the previous method can be
automated
Some Matrix Results to Help Us
A matrix M is defined to be symmetric if

M = M^T

A symmetric matrix M is positive definite if

x^T M x > 0 for all nonzero vectors x

A symmetric positive definite matrix M can be factored as

M = L L^T

Here L is lower triangular, called a Cholesky factor
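As a quick numerical check of the factorization M = L L^T, here is a minimal NumPy sketch (note that NumPy returns the lower-triangular factor, whereas Matlab's chol(M) returns its transpose):

```python
import numpy as np

# Mass matrix from the 2-DOF example later in these notes (Example 4.1.5)
M = np.array([[9.0, 0.0],
              [0.0, 1.0]])

# numpy.linalg.cholesky returns the lower-triangular L with M = L @ L.T
L = np.linalg.cholesky(M)

print(L)                        # lower triangular
print(np.allclose(L @ L.T, M))  # True: the factorization reproduces M
```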
If the matrix L is diagonal, it defines the matrix square root.
The matrix square root is the matrix M^(1/2) such that

M^(1/2) M^(1/2) = M

If M is diagonal, then the matrix square root is just the square root
of the diagonal elements:

L = M^(1/2) = [√m1 0; 0 √m2]   (4.35)

It will be more complicated if M is not diagonal, which means that
the system is dynamically coupled.
Based on M in Eq (4.6)
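A minimal sketch of Eq. (4.35) for a diagonal M (a non-diagonal M would need a general matrix square root, e.g. Matlab's sqrtm):

```python
import numpy as np

M = np.array([[9.0, 0.0],
              [0.0, 1.0]])   # diagonal mass matrix

# Eq. (4.35): for diagonal M, take the square root of each diagonal entry
M_half = np.diag(np.sqrt(np.diag(M)))

print(M_half)                            # diag(3, 1)
print(np.allclose(M_half @ M_half, M))   # True: M^(1/2) M^(1/2) = M
```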
Transformation of Coordinates
For a symmetric positive-definite matrix M, we can calculate the matrix
square root M^(1/2) and its inverse M^(-1/2).
Now we would like to transform the equation of motion M ẍ + K x = 0
into a symmetric eigenvalue problem.
How does the vibration problem relate to the
real symmetric eigenvalue problem?
Substituting x(t) = M^(-1/2) q(t) into M ẍ + K x = 0 and
premultiplying by M^(-1/2) gives

q̈(t) + K̃ q(t) = 0,  where K̃ = M^(-1/2) K M^(-1/2)   (4.39)
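A NumPy sketch of this transformation, using the M and K entered on the Matlab slide at the end of these notes:

```python
import numpy as np

M = np.array([[9.0, 0.0],
              [0.0, 1.0]])
K = np.array([[27.0, -3.0],
              [-3.0,  3.0]])

# M^(-1/2): for a diagonal M, invert the square roots of the diagonal
M_inv_half = np.diag(1.0 / np.sqrt(np.diag(M)))

# Eq. (4.39): K~ = M^(-1/2) K M^(-1/2) -- symmetric by construction
K_tilde = M_inv_half @ K @ M_inv_half

print(K_tilde)                          # [[3, -1], [-1, 3]]
print(np.allclose(K_tilde, K_tilde.T))  # True: the new matrix is symmetric
```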
Vibration Problem vs Real Symmetric
Eigenvalue Problem
To solve equation (4.39), assume q(t) = v e^(jωt):

(−ω² v + K̃ v) e^(jωt) = 0

Since e^(jωt) ≠ 0, this gives

K̃ v = ω² v   (4.40)   ← vibration problem

So here we let λ = ω²:

K̃ v = λ v   (4.41)   ← real symmetric eigenvalue problem

λ is called the eigenvalue
v is the corresponding eigenvector
Real Symmetric Eigenvalue Problem
 There are n eigenvalues and they are all real valued
 There are n eigenvectors and they are all real valued
 The set of eigenvectors v is orthogonal → this is the orthogonality
property of normal modes
 The set of eigenvectors is linearly independent
 The matrix K̃ is similar to a diagonal matrix
Useful concepts for any number of DOF!
If the system being modelled has n lumped masses, each free to move
with a single displacement labeled x_i(t), the matrices M, K, and hence
K̃ will be n × n and the vectors u, q, and v will be n × 1 in dimension.
Normal and Orthogonal Vectors

For vectors x = [x_1; …; x_n] and y = [y_1; …; y_n], the inner product is

x^T y = sum_{i=1}^{n} x_i y_i

 x is orthogonal to y if x^T y = 0
 x is normal if x^T x = 1
 If a set of vectors is both orthogonal and normal, it is called an
orthonormal set.
 The norm (magnitude) of vector x is

||x|| = (x^T x)^(1/2) = (sum_{i=1}^{n} x_i²)^(1/2)

 We can normalize any vector by calculating x / ||x|| so that the
magnitude is 1.
Example: calculating λ and v
Related to Examples 4.2.2–4.2.4 in the textbook by D.J. Inman.
 Solve the eigenvalue problem for the 2-DOF system of Example 4.1.5:
• m_1 = 9 kg, m_2 = 1 kg, k_1 = 24 N/m and k_2 = 3 N/m
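A NumPy sketch of the whole computation for these values (K = [27 -3; -3 3] follows from k_1 + k_2 = 27 and k_2 = 3, as entered on the Matlab slide at the end):

```python
import numpy as np

M = np.array([[9.0, 0.0],
              [0.0, 1.0]])
K = np.array([[27.0, -3.0],
              [-3.0,  3.0]])

M_inv_half = np.diag(1.0 / np.sqrt(np.diag(M)))
K_tilde = M_inv_half @ K @ M_inv_half   # [[3, -1], [-1, 3]]

# eigh is the symmetric eigensolver; eigenvalues come back in ascending order
lam, P = np.linalg.eigh(K_tilde)

print(lam)            # lambda_i = omega_i^2: 2 and 4
print(np.sqrt(lam))   # natural frequencies: sqrt(2) and 2 rad/s
```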
Rewriting Eq. (4.41) gives (K̃ − λI)v = 0, where v must be non-zero.
Hence the coefficient matrix must be singular:

det(K̃ − λI) = det [3 − λ, −1; −1, 3 − λ] = λ² − 6λ + 8 = 0
⇒ λ_1 = 2 and λ_2 = 4

 These are the eigenvalues of the matrix K̃, and the same as ω_i² in the
previous method of calculation.
 The eigenvectors are now calculated for each eigenvalue.
The eigenvector v_1 associated with λ_1 = 2:
(K̃ − λ_1 I)v_1 = 0 gives

−v_11 + v_21 = 0, i.e. v_11 = v_21  → this defines the
direction of v_1

Normalize the vector so that
the norm (magnitude) = 1:

1 = v_1^T v_1 = v_11² + v_21² = 2 v_11²  ⇒  v_11 = v_21 = 1/√2

v_1 = (1/√2) [1; 1]

 The eigenvector v_2 associated with λ_2 = 4 → do the same, try yourself!
(It works out to v_2 = (1/√2) [1; −1].)
 It can be shown that v_1 and v_2 are orthogonal:

v_1^T v_2 = (1/2)(1·1 + 1·(−1)) = 0

Also, the v_i are orthonormal: v_i^T v_j = δ_ij
While the eigenvalues are the same as the squares of the natural
frequencies, the normalized eigenvectors v are different from the
mode shapes u.
However, they are related through the coordinate transformation:
u_i = M^(-1/2) v_i (up to scaling).
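A sketch of this relation for the example above, recovering the physical mode shape from the eigenvector:

```python
import numpy as np

# M = diag(9, 1) from Example 4.1.5
M_inv_half = np.diag(1.0 / np.sqrt(np.array([9.0, 1.0])))

v1 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # normalized eigenvector from above

# Mode shape in physical coordinates: u1 = M^(-1/2) v1 (up to scaling)
u1 = M_inv_half @ v1
print(u1 / u1[0])    # [1, 3]: the masses move in a 1:3 ratio in mode 1
```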
The orthonormal set of vectors is used to form an Orthogonal Matrix
Orthogonal Matrix
P = [v_1 v_2]

P^T P = [v_1^T v_1, v_1^T v_2; v_2^T v_1, v_2^T v_2] = [1, 0; 0, 1] = I

Using K̃ v_i = λ_i v_i:

P^T K̃ P = [v_1^T K̃ v_1, v_1^T K̃ v_2; v_2^T K̃ v_1, v_2^T K̃ v_2]
        = [λ_1 v_1^T v_1, λ_2 v_1^T v_2; λ_1 v_2^T v_1, λ_2 v_2^T v_2]
        = [λ_1, 0; 0, λ_2] = diag(λ_1, λ_2) = diag(ω_1², ω_2²) = Λ   (4.47)

P is called an orthogonal matrix
P is also called a modal matrix → a matrix of eigenvectors (normalized)
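Eq. (4.47) checked numerically for the first example (K̃ and the v_i found above):

```python
import numpy as np

K_tilde = np.array([[ 3.0, -1.0],
                    [-1.0,  3.0]])

# Modal matrix built from the normalized eigenvectors v1, v2
P = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

print(np.allclose(P.T @ P, np.eye(2)))   # True: P is orthogonal
print(P.T @ K_tilde @ P)                 # diag(2, 4) = Lambda
```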
Example: calculate λ, ω, v, P and Λ
Related to Example 4.2.5 in the textbook by D.J. Inman.
Figure 4.4
The equations of motion:

m_1 ẍ_1 + (k_1 + k_2) x_1 − k_2 x_2 = 0
m_2 ẍ_2 − k_2 x_1 + (k_2 + k_3) x_2 = 0   (4.49)

In matrix form:

[m_1, 0; 0, m_2] ẍ + [k_1 + k_2, −k_2; −k_2, k_2 + k_3] x = 0   (4.50)
m_1 = 1 kg, m_2 = 4 kg, k_1 = k_3 = 10 N/m and k_2 = 2 N/m

⇒ M = [1, 0; 0, 4] and K = [12, −2; −2, 12]
⇒ K̃ = M^(-1/2) K M^(-1/2) = [12, −1; −1, 3]
⇒ det(K̃ − λI) = det [12 − λ, −1; −1, 3 − λ] = λ² − 15λ + 35 = 0
⇒ λ_1 = 2.8902 and λ_2 = 12.1098
⇒ ω_1 = 1.7001 rad/s and ω_2 = 3.4799 rad/s
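The same numbers from a NumPy sketch:

```python
import numpy as np

M = np.array([[1.0, 0.0],
              [0.0, 4.0]])
K = np.array([[12.0, -2.0],
              [-2.0, 12.0]])

M_inv_half = np.diag(1.0 / np.sqrt(np.diag(M)))
K_tilde = M_inv_half @ K @ M_inv_half   # [[12, -1], [-1, 3]]

lam = np.linalg.eigvalsh(K_tilde)       # ascending: 2.8902, 12.1098
print(lam)
print(np.sqrt(lam))                     # 1.7001 and 3.4799 rad/s
```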
Next compute the eigenvectors.
For λ_1 = 2.8902, equation (4.41) becomes:

[12 − 2.8902, −1; −1, 3 − 2.8902] [v_11; v_21] = [0; 0]
⇒ v_21 = 9.1098 v_11

Normalizing yields

1 = ||v_1||² = v_11² + v_21² = v_11² + (9.1098 v_11)²
⇒ v_11 = 0.1091 and v_21 = 0.9940

v_1 = [0.1091; 0.9940], likewise v_2 = [−0.9940; 0.1091]
Next form P and Λ, and check the value of P to see if it behaves as it is
supposed to:

P = [v_1 v_2] = [0.1091, −0.9940; 0.9940, 0.1091]

P^T K̃ P = [0.1091, 0.9940; −0.9940, 0.1091] [12, −1; −1, 3] [0.1091, −0.9940; 0.9940, 0.1091]
        = [2.8902, 0; 0, 12.1098] = Λ

P^T P = [0.1091, 0.9940; −0.9940, 0.1091] [0.1091, −0.9940; 0.9940, 0.1091]
      = [1, 0; 0, 1] = I

OK !!
A Note on Eigenvectors
In the previous section, we could have chosen v_2 to be

v_2 = [0.9940; −0.1091]  instead of  v_2 = [−0.9940; 0.1091]

because one can always multiply an eigenvector by a constant,
and if the constant is −1 the result is still a normalized vector.
Will this make any difference?
No! Try it in the previous example
A Note on Previous Examples
All of the previous examples can and should be solved
by “hand” to learn the methods.
However, they can also be solved on calculators with
matrix functions and with the codes listed in the last
section.
In fact, for more than 2-DOF one must use a code to
solve for the natural frequencies and mode shapes.
 We will have a look at Matlab later (if time permits)
3 Approaches to Compute Mode Shapes
and Frequencies
(i) ω² M u = K u    (ii) ω² u = M⁻¹ K u    (iii) ω² v = M^(-1/2) K M^(-1/2) v

i. Is the Generalized Symmetric Eigenvalue Problem,
easy for hand computations, inefficient for computers
ii. Is the Asymmetric Eigenvalue Problem, very
expensive computationally → see Rao or Kelly
iii. Is the Symmetric Eigenvalue Problem, the cheapest
computationally
This was discussed today
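A quick sketch showing that routes (ii) and (iii) give the same eigenvalues for the first example:

```python
import numpy as np

M = np.array([[9.0, 0.0],
              [0.0, 1.0]])
K = np.array([[27.0, -3.0],
              [-3.0,  3.0]])

# (ii) asymmetric problem: eigenvalues of M^-1 K
lam_ii = np.sort(np.linalg.eigvals(np.linalg.inv(M) @ K).real)

# (iii) symmetric problem: eigenvalues of M^(-1/2) K M^(-1/2)
M_inv_half = np.diag(1.0 / np.sqrt(np.diag(M)))
lam_iii = np.sort(np.linalg.eigvalsh(M_inv_half @ K @ M_inv_half))

print(lam_ii, lam_iii)   # both give omega^2 = 2 and 4
```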
Related Matlab Commands
 To compute the inverse of the square matrix A: inv(A), or use
A\eye(n) where n is the size of the matrix
 [P,D]=eig(A) computes the eigenvalues and normalized
eigenvectors (watch the order) and stores them in the eigenvector
matrix P and the diagonal eigenvalue matrix D (D = Λ)
 To compute the matrix square root: sqrtm(A)
 To compute the Cholesky factor: L = chol(M)
 To compute the norm: norm(x)
 To compute the determinant: det(A)
 To enter a matrix: K=[27 -3;-3 3]; M=[9 0;0 1];
 To multiply: K*inv(chol(M))
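For reference, a rough NumPy sketch of the same commands (an assumption for self-study, not part of the course material; Matlab's chol returns the upper factor, NumPy's the lower):

```python
import numpy as np

K = np.array([[27.0, -3.0], [-3.0, 3.0]])
M = np.array([[9.0, 0.0], [0.0, 1.0]])

Ainv = np.linalg.inv(M)                        # inv(A)
lam, P = np.linalg.eig(K)                      # [P,D]=eig(A); lam is the diagonal of D
L = np.linalg.cholesky(M)                      # chol: NumPy gives the LOWER factor,
                                               # Matlab's chol(M) returns L.T
normx = np.linalg.norm(np.array([3.0, 4.0]))   # norm(x)
detK = np.linalg.det(K)                        # det(A)
prod = K @ np.linalg.inv(L.T)                  # K*inv(chol(M))
```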