Math 54 Cheat Sheet

Vector spaces

Subspace: If u and v are in W, then u + v is in W, and cu is in W.
Nul(A): Solutions of Ax = 0. Row-reduce A.
Row(A): Space spanned by the rows of A: Row-reduce A and choose the rows that contain the pivots.
Col(A): Space spanned by the columns of A: Row-reduce A and choose the columns of A that contain the pivots.
Rank(A) = Dim(Col(A)) = number of pivots.
Rank-Nullity theorem: Rank(A) + dim(Nul(A)) = n, where A is m × n.
Linear transformation: T(u + v) = T(u) + T(v), T(cu) = cT(u), where c is a number.
T is one-to-one if T(u) = 0 ⇒ u = 0.
T is onto if Col(T) = R^m.
Linear independence: a1v1 + a2v2 + ··· + anvn = 0 ⇒ a1 = a2 = ··· = an = 0. To show lin. ind., form the matrix A of the vectors, and show that Nul(A) = {0}.
Linear dependence: a1v1 + a2v2 + ··· + anvn = 0 for a1, a2, ..., an not all zero.
Span: Set of linear combinations of v1, ..., vn.
Basis B for V: A linearly independent set such that Span(B) = V.
To show sthg is a basis, show it is linearly independent and spans.
To find a basis from a collection of vectors, form the matrix A of the vectors, and find Col(A).
To find a basis for a vector space, take any element of that v.s. and express it as a linear combination of 'simpler' vectors. Then show those vectors form a basis.
Dimension: Number of elements in a basis. To find dim, find a basis and count its elements.
Theorem: If V has a basis of n vectors, then every basis of V must have n vectors.
Basis theorem: If V is an n-dim v.s., then any lin. ind. set with n elements is a basis, and any set of n elts. which spans V is a basis.
Matrix of a lin. transf. T with respect to bases B and C: For every vector v in B, evaluate T(v), and express T(v) as a linear combination of vectors in C. Put the coefficients in a column vector, and then form the matrix of the column vectors you found!
Coordinates: To find [x]_B, express x in terms of the vectors in B. x = P_B [x]_B, where P_B is the matrix whose columns are the vectors in B.
Invertible matrix theorem: If A is invertible, then: A is row-equivalent to I, A has n pivots, T(x) = Ax is one-to-one and onto, Ax = b has a unique solution for every b, A^T is invertible, det(A) ≠ 0, the columns of A form a basis for R^n, Nul(A) = {0}, Rank(A) = n.
[a b; c d]^{-1} = (1/(ad − bc)) [d −b; −c a]
[A | I] → [I | A^{-1}]
Change of basis: [x]_C = P_{C←B} [x]_B (think of C as the new, cool basis).
[C | B] → [I | P_{C←B}]; P_{C←B} is the matrix whose columns are [b]_C, where b is in B.
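Quick sanity check (not from the original sheet; assumes sympy is available): a minimal sketch of Nul(A), Col(A), and Rank-Nullity; the matrix A below is made up.

    import sympy as sp

    A = sp.Matrix([[1, 2, 3],
                   [2, 4, 6],
                   [1, 0, 1]])   # made-up example

    nul_basis = A.nullspace()    # basis for Nul(A)
    col_basis = A.columnspace()  # basis for Col(A) (the pivot columns)
    rank = A.rank()              # number of pivots

    assert len(col_basis) == rank
    # Rank-Nullity: Rank(A) + dim(Nul(A)) = n, the number of columns
    assert rank + len(nul_basis) == A.cols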
Diagonalization

Diagonalizability: A is diagonalizable if A = PDP^{-1} for some diagonal D and invertible P.
A and B are similar if A = PBP^{-1} for P invertible.
Theorem: A is diagonalizable ⇔ A has n linearly independent eigenvectors.
Theorem: IF A has n distinct eigenvalues, THEN A is diagonalizable, but the opposite is not always true!!!!
Notes: A can be diagonalizable even if it's not invertible (Ex: A = [0 0; 0 0]). Not all matrices are diagonalizable (Ex: [1 1; 0 1]).
Consequence: A = PDP^{-1} ⇒ A^n = PD^nP^{-1}.
How to diagonalize: To find the eigenvalues, calculate det(A − λI), and find the roots of that. To find the eigenvectors, for each λ find a basis for Nul(A − λI), which you do by row-reducing.
Rational roots theorem: If p(λ) = 0 has a rational root r = a/b, then a divides the constant term of p, and b divides the leading coefficient. Use this to guess zeros of p. Once you have a zero that works, use long division! Then A = PDP^{-1}, where D = diagonal matrix of eigenvalues, P = matrix of eigenvectors.
Complex eigenvalues: If λ = a + bi, and v is an eigenvector, then A = PCP^{-1}, where P = [Re(v) Im(v)], C = [a b; −b a].
C is a scaling by √(det(A)) followed by a rotation by θ, where:
C = √(det(A)) [cos(θ) sin(θ); −sin(θ) cos(θ)]
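Quick numerical check (not from the original sheet; assumes numpy): diagonalize a made-up A and use A = PDP^{-1} to get A^5 = PD^5P^{-1}.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])     # eigenvalues 5 and 2, so diagonalizable

    eigvals, P = np.linalg.eig(A)  # columns of P are eigenvectors
    D = np.diag(eigvals)

    assert np.allclose(A, P @ D @ np.linalg.inv(P))    # A = P D P^{-1}
    A5 = P @ np.diag(eigvals**5) @ np.linalg.inv(P)    # A^5 = P D^5 P^{-1}
    assert np.allclose(A5, np.linalg.matrix_power(A, 5))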
Orthogonality

u, v orthogonal if u · v = 0.
||u|| = √(u · u)
{u1, ..., un} is orthogonal if ui · uj = 0 if i ≠ j, orthonormal if also ui · ui = 1.
W⊥: Set of v which are orthogonal to every w in W.
If {u1, ..., un} is an orthogonal basis, then: y = c1u1 + ··· + cnun ⇒ cj = (y · uj)/(uj · uj).
Orthogonal matrix Q has orthonormal columns!
Consequences: Q^T Q = I, QQ^T = orthogonal projection on Col(Q), ||Qx|| = ||x||, (Qx) · (Qy) = x · y.
Orthogonal projection: If {u1, ..., uk} is an orthogonal basis for W, then the orthogonal projection of y on W is: ŷ = ((y · u1)/(u1 · u1)) u1 + ··· + ((y · uk)/(uk · uk)) uk.
y − ŷ is orthogonal to ŷ; the shortest distance btw y and W is ||y − ŷ||.
Gram-Schmidt: Start with B = {u1, ..., un}. Let:
v1 = u1
v2 = u2 − ((u2 · v1)/(v1 · v1)) v1
v3 = u3 − ((u3 · v1)/(v1 · v1)) v1 − ((u3 · v2)/(v2 · v2)) v2
Then {v1, ..., vn} is an orthogonal basis for Span(B), and if wi = vi/||vi||, then {w1, ..., wn} is an orthonormal basis for Span(B).
QR-factorization: To find Q, apply G-S to the columns of A. Then R = Q^T A.
Least-squares: To solve Ax = b in the least-squares way, solve A^T A x = A^T b. The least-squares solution x̂ makes ||Ax − b|| smallest. Also x̂ = R^{-1} Q^T b, where A = QR.
Inner product spaces: f · g = ∫_a^b f(t)g(t) dt. G-S applies with this inner product as well.
Cauchy-Schwarz: |u · v| ≤ ||u|| ||v||
Triangle inequality: ||u + v|| ≤ ||u|| + ||v||
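Quick numerical check (not from the original sheet; assumes numpy): the normal equations and the QR route give the same least-squares solution; A and b below are made up.

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 2.0]])   # tall matrix: Ax = b has no exact solution
    b = np.array([6.0, 0.0, 0.0])

    x_normal = np.linalg.solve(A.T @ A, A.T @ b)   # A^T A x = A^T b

    Q, R = np.linalg.qr(A)
    x_qr = np.linalg.solve(R, Q.T @ b)             # x = R^{-1} Q^T b

    assert np.allclose(x_normal, x_qr)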
Symmetric matrices (A = A^T)

Has n real eigenvalues, always diagonalizable, orthogonally diagonalizable (A = PDP^T, P is an orthogonal matrix; this is equivalent to symmetry!).
Theorem: If A is symmetric, then any two eigenvectors from different eigenspaces are orthogonal.
How to orthogonally diagonalize: First diagonalize, then apply G-S on each eigenspace and normalize. Then P = matrix of (orthonormal) eigenvectors, D = matrix of eigenvalues.
Quadratic forms: To find the matrix, put the x_i^2-coefficients on the diagonal, and evenly distribute the other terms. For example, if the x1x2-term is 6, then the (1,2)th and (2,1)th entries of A are 3. Then orthogonally diagonalize A = PDP^T. Then let y = P^T x, and the quadratic form becomes λ1 y1^2 + ··· + λn yn^2, where the λi are the eigenvalues.
Spectral decomposition: A = λ1 u1 u1^T + λ2 u2 u2^T + ··· + λn un un^T
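Quick numerical check (not from the original sheet; assumes numpy): orthogonal diagonalization of a symmetric matrix with eigh, and the quadratic form in y = P^T x coordinates. The form 2x1^2 + 2x2^2 + 6x1x2 is made up.

    import numpy as np

    A = np.array([[2.0, 3.0],       # 2x1^2 + 2x2^2 + 6x1x2:
                  [3.0, 2.0]])      # the 6 is split evenly as 3 and 3

    eigvals, P = np.linalg.eigh(A)  # for symmetric A, P is orthogonal
    assert np.allclose(P.T @ P, np.eye(2))
    assert np.allclose(A, P @ np.diag(eigvals) @ P.T)

    x = np.array([1.0, 2.0])        # arbitrary point
    y = P.T @ x
    assert np.isclose(x @ A @ x, np.sum(eigvals * y**2))   # λ1 y1^2 + λ2 y2^2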
Second-order and Higher-order differential equations

Homogeneous solutions: Auxiliary equation: Replace the equation by a polynomial, so y''' becomes r^3 etc. Then find the zeros (use the rational roots theorem and long division, see the 'Diagonalization' section). 'Simple' zeros give you e^{rt}, repeated zeros (multiplicity m) give you Ae^{rt} + Bte^{rt} + ··· + Zt^{m−1}e^{rt}, complex zeros r = a + bi give you Ae^{at}cos(bt) + Be^{at}sin(bt).
Undetermined coefficients: y(t) = y0(t) + yp(t), where y0 solves the hom. eqn. (equation = 0), and yp is a particular solution. To find yp:
If the inhom. term is Ct^m e^{rt}, then yp = t^s(A_m t^m + ··· + A_1 t + A_0)e^{rt}, where if r is a root of the aux. equation with multiplicity m, then s = m, and if r is not a root, then s = 0.
If the inhom. term is Ct^m e^{at} sin(βt), then yp = t^s(A_m t^m + ··· + A_1 t + A_0)e^{at}cos(βt) + t^s(B_m t^m + ··· + B_1 t + B_0)e^{at}sin(βt), where s = m if a + iβ is also a root of the aux. equation with multiplicity m (s = 0 if not). cos always goes with sin and vice-versa; also, you have to look at a + iβ as one entity.
Variation of parameters: First, make sure the leading coefficient (usually the coeff. of y'') is = 1. Then y = y0 + yp as above. Now suppose yp(t) = v1(t)y1(t) + v2(t)y2(t), where y1 and y2 are your hom. solutions. Then:
[y1 y2; y1' y2'] [v1'; v2'] = [0; f(t)]
Invert the matrix and solve for v1' and v2', integrate to get v1 and v2, and finally use yp(t) = v1(t)y1(t) + v2(t)y2(t).
Useful formulas: [a b; c d]^{-1} = (1/(ad − bc)) [d −b; −c a],
∫sec(t) dt = ln|sec(t) + tan(t)|, ∫tan(t) dt = ln|sec(t)|, ∫tan^2(t) dt = tan(t) − t, ∫ln(t) dt = t ln(t) − t
Linear independence: f, g, h are linearly independent if af(t) + bg(t) + ch(t) = 0 ⇒ a = b = c = 0. To show linear dependence, do it directly. To show linear independence, form the Wronskian:
W(t) = [f(t) g(t); f'(t) g'(t)] (for 2 functions),
W(t) = [f(t) g(t) h(t); f'(t) g'(t) h'(t); f''(t) g''(t) h''(t)] (for 3 functions).
Then pick a point t0 where det(W(t0)) is easy to evaluate. If det ≠ 0, then f, g, h are linearly independent! Try to look for simplifications before you differentiate.
Fundamental solution set: If f, g, h are solutions and linearly independent.
Largest interval of existence: First make sure the leading coefficient equals 1. Then look at the domain of each term. For each domain, consider the part of the interval which contains the initial condition. Finally, intersect the intervals and change any brackets to parentheses.
Harmonic oscillator: my'' + by' + ky = 0 (m = inertia, b = damping, k = stiffness)
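Quick check (not from the original sheet; assumes sympy): the recipe on the made-up equation y'' + 2y' + y = 3t. The aux. equation r^2 + 2r + 1 = 0 has r = −1 twice, and the guess yp = A1 t + A0 gives yp = 3t − 6.

    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')

    ode = y(t).diff(t, 2) + 2*y(t).diff(t) + y(t) - 3*t
    print(sp.dsolve(ode, y(t)))
    # should print something equivalent to
    # y(t) = (C1 + C2*t)*exp(-t) + 3*t - 6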
Systems of differential equations

To solve x' = Ax: x(t) = Ae^{λ1 t}v1 + Be^{λ2 t}v2 + Ce^{λ3 t}v3 (the λi are your eigenvalues, the vi are your eigenvectors).
Fundamental matrix: Matrix whose columns are the solutions, without the constants (the columns are solutions and linearly independent).
Complex eigenvalues: If λ = α + iβ, and v = a + ib, then:
x(t) = A(e^{αt}cos(βt)a − e^{αt}sin(βt)b) + B(e^{αt}sin(βt)a + e^{αt}cos(βt)b)
Notes: You only need to consider one complex eigenvalue; for real eigenvalues, use the formula above. Also, 1/(a + bi) = (a − bi)/(a^2 + b^2).
Generalized eigenvectors: If you only find one eigenvector v (even though there are supposed to be 2), then solve the following equation for u: (A − λI)u = v (one solution is enough). Then: x(t) = Ae^{λt}v + B(te^{λt}v + e^{λt}u).
Undetermined coefficients: First find the hom. solution. Then for xp, proceed just like regular undetermined coefficients, except that instead of guessing xp(t) = ae^t + b cos(t) with numbers a and b, you guess it with vectors a = [a1; a2] etc. Then plug into x' = Ax + f and solve for a etc.
Variation of parameters: First find the hom. solution xh(t) = Ax1(t) + Bx2(t). Then suppose xp(t) = v1(t)x1(t) + v2(t)x2(t), and solve W̃(t)[v1'; v2'] = f, where W̃(t) = [x1(t) | x2(t)]. Multiply both sides by (W̃(t))^{-1}, integrate and solve for v1(t), v2(t), and plug back into xp. Finally, x = xh + xp.
Matrix exponential: e^{At} = Σ_{n=0}^∞ A^n t^n/n!. To calculate e^{At}, either diagonalize: A = PDP^{-1} ⇒ e^{At} = P e^{Dt} P^{-1}, where e^{Dt} is a diagonal matrix with diagonal entries e^{λi t}. Or, if A only has one eigenvalue λ with multiplicity m, use e^{At} = e^{λt} Σ_{n=0}^{m−1} (A − λI)^n t^n/n!. The solution of x' = Ax is then x(t) = e^{At}c, where c is a constant vector.
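Quick numerical check (not from the original sheet; assumes numpy and scipy): e^{At} via P e^{Dt} P^{-1} against scipy's expm, for a made-up A with eigenvalues −1 and −2.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    t = 0.5

    eigvals, P = np.linalg.eig(A)
    eDt = np.diag(np.exp(eigvals * t))
    assert np.allclose(P @ eDt @ np.linalg.inv(P), expm(A * t))

    c = np.array([1.0, 0.0])      # x(t) = e^{At} c solves x' = Ax, x(0) = c
    x_t = expm(A * t) @ c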
Coupled mass-spring system

Case N = 2:
Equation: x'' = Ax, A = [−2 1; 1 −2]
Proper frequencies: Eigenvalues of A are λ = −1, −3, then proper frequencies ±i, ±√3 i (± the square roots of the eigenvalues).
Proper modes: v1 = [sin(π/3); sin(2π/3)] = [√3/2; √3/2], v2 = [sin(2π/3); sin(4π/3)] = [√3/2; −√3/2]

Case N = 3:
Equation: x'' = Ax, A = [−2 1 0; 1 −2 1; 0 1 −2]
Proper frequencies: Eigenvalues of A: λ = −2, −2 − √2, −2 + √2, then proper frequencies ±√2 i, ±√(2 + √2) i, ±√(2 − √2) i
Proper modes: v1 = [sin(π/4); sin(2π/4); sin(3π/4)] = [√2/2; 1; √2/2], v2 = [sin(2π/4); sin(4π/4); sin(6π/4)] = [1; 0; −1], v3 = [sin(3π/4); sin(6π/4); sin(9π/4)] = [√2/2; −1; √2/2]

General case (just in case!):
Equation: x'' = Ax, where A is the N × N matrix with −2 on the diagonal and 1 on the sub- and superdiagonals:
A = [−2 1 0 ··· 0; 1 −2 1 ··· 0; ···; 0 ··· 1 −2 1; 0 ··· 0 1 −2]
Proper frequencies: ±2 sin(kπ/(2(N + 1))) i, k = 1, 2, ..., N
Proper modes: vk = [sin(kπ/(N + 1)); sin(2kπ/(N + 1)); ...; sin(Nkπ/(N + 1))]
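Quick numerical check (not from the original sheet; assumes numpy): the general-case formulas at N = 3. The frequency formula ±2 sin(kπ/(2(N+1))) i squared says the eigenvalues should be −4 sin^2(kπ/(2(N+1))).

    import numpy as np

    N = 3
    A = -2*np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)   # tridiagonal A

    eigvals = np.sort(np.linalg.eigvalsh(A))
    k = np.arange(1, N + 1)
    predicted = np.sort(-4*np.sin(k*np.pi/(2*(N + 1)))**2)
    assert np.allclose(eigvals, predicted)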
Partial differential equations

Full Fourier series: f defined on (−T, T): f(x) ~ Σ_{m=0}^∞ a_m cos(πmx/T) + b_m sin(πmx/T), where:
a_0 = (1/(2T)) ∫_{−T}^{T} f(x) dx
a_m = (1/T) ∫_{−T}^{T} f(x) cos(πmx/T) dx
b_0 = 0
b_m = (1/T) ∫_{−T}^{T} f(x) sin(πmx/T) dx
Cosine series: f defined on (0, T): f(x) ~ Σ_{m=0}^∞ a_m cos(πmx/T), where:
a_0 = (2/(2T)) ∫_0^T f(x) dx (not a typo)
a_m = (2/T) ∫_0^T f(x) cos(πmx/T) dx
Sine series: f defined on (0, T): f(x) ~ Σ_{m=0}^∞ b_m sin(πmx/T), where:
b_0 = 0
b_m = (2/T) ∫_0^T f(x) sin(πmx/T) dx
Tabular integration (IBP: ∫f'g = fg − ∫fg'): To integrate ∫f(t)g(t) dt where f is a polynomial, make a table whose first row is f(t) and g(t). Then differentiate f as many times until you get 0, and antidifferentiate g as many times until it aligns with the 0 for f. Then multiply the diagonal terms and do + first term − second term etc.
Orthogonality formulas:
∫_{−T}^{T} cos(πmx/T) sin(πnx/T) dx = 0
∫_{−T}^{T} cos(πmx/T) cos(πnx/T) dx = 0 if m ≠ n
∫_{−T}^{T} sin(πmx/T) sin(πnx/T) dx = 0 if m ≠ n
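Quick check (not from the original sheet; assumes sympy): sine-series coefficients b_m = (2/T) ∫_0^T f(x) sin(πmx/T) dx for the made-up example f(x) = x on (0, 1).

    import sympy as sp

    x = sp.symbols('x')
    m = sp.symbols('m', integer=True, positive=True)

    b_m = 2*sp.integrate(x*sp.sin(sp.pi*m*x), (x, 0, 1))   # T = 1
    print(sp.simplify(b_m))
    # should print an expression equivalent to 2*(-1)**(m + 1)/(pi*m)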
Heat/Wave equations:

Step 1: Suppose u(x, t) = X(x)T(t), plug this into the PDE, and group X-terms and T-terms. Then X''(x)/X(x) = λ, so X'' = λX. Then find a differential equation for T. Note: If you have an α-term, put it with T.
Step 2: Deal with X'' = λX. Use the boundary conditions to find X(0) etc. (if you have ∂u/∂x, you might have X'(0) instead of X(0)).
Step 3: Case 1: λ = ω^2, then X(x) = Ae^{ωx} + Be^{−ωx}, then find ω = 0, contradiction. Case 2: λ = 0, then X(x) = Ax + B, then either find X(x) = 0 (contradiction), or find X(x) = A. Case 3: λ = −ω^2, then X(x) = A cos(ωx) + B sin(ωx). Then solve for ω, usually ω = πm/T. Also, if case 2 works, you should find cos; if case 2 doesn't work, you should find sin. Finally, λ = −ω^2, and X(x) = whatever you found above, w/o the constant.
Step 4: Solve for T(t) with the λ you found. Remember that for the heat equation: T' = λT ⇒ T(t) = Ã_m e^{λt}. And for the wave equation: T'' = λT ⇒ T(t) = Ã_m cos(ωt) + B̃_m sin(ωt).
Step 5: Then u(x, t) = Σ_{m=0}^∞ T(t)X(x) (if case 2 works), u(x, t) = Σ_{m=1}^∞ T(t)X(x) (if case 2 doesn't work!).
Step 6: Use u(x, 0), and plug in t = 0. Then use Fourier cosine or sine series, or just 'compare', i.e. if u(x, 0) = 4 sin(2πx) + 3 sin(3πx), then Ã_2 = 4, Ã_3 = 3, and Ã_m = 0 if m ≠ 2, 3.
Step 7 (only for wave equation): Use ∂u/∂t(x, 0): Differentiate Step 5 with respect to t and set t = 0. Then use Fourier cosine or sine series, or 'compare'.

Nonhomogeneous heat equation:
∂u/∂t = β ∂^2u/∂x^2 + P(x)
u(0, t) = U1, u(L, t) = U2
u(x, 0) = f(x)
Then u(x, t) = v(x) + w(x, t), where:
v(x) = [U2 − U1 + ∫_0^L ∫_0^z (1/β)P(s) ds dz](x/L) + U1 − ∫_0^x ∫_0^z (1/β)P(s) ds dz
and w(x, t) solves the hom. eqn:
∂w/∂t = β ∂^2w/∂x^2
w(0, t) = 0, w(L, t) = 0
w(x, 0) = f(x) − v(x)
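Quick check (not from the original sheet; assumes sympy): one separated mode X(x)T(t) = sin(πmx)e^{−β(πm)^2 t} really solves the heat equation u_t = β u_xx (the interval length T = 1 here is made up).

    import sympy as sp

    x, t, beta = sp.symbols('x t beta', positive=True)
    m = sp.symbols('m', integer=True, positive=True)

    omega = sp.pi*m          # omega = pi*m/T with T = 1
    u = sp.exp(-beta*omega**2*t) * sp.sin(omega*x)

    assert sp.simplify(sp.diff(u, t) - beta*sp.diff(u, x, 2)) == 0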
D'Alembert's formula: ONLY works for the wave equation and −∞ < x < ∞:
u(x, t) = (1/2)(f(x + αt) + f(x − αt)) + (1/(2α)) ∫_{x−αt}^{x+αt} g(s) ds,
where u_tt = α^2 u_xx, u(x, 0) = f(x), ∂u/∂t(x, 0) = g(x). The integral just means 'antidifferentiate and plug in'.

Laplace equation: Same as for Heat/Wave, but T(t) becomes Y(y), and we get Y''(y) = −λY(y). Also, instead of writing Y(y) = Ã_m e^{ωy} + B̃_m e^{−ωy}, write Y(y) = Ã_m cosh(ωy) + B̃_m sinh(ωy). Remember cosh(0) = 1, sinh(0) = 0.
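Quick check (not from the original sheet; assumes sympy): d'Alembert's formula on the made-up data f(x) = sin(x), g(x) = 0, α = 1.

    import sympy as sp

    x, t, s = sp.symbols('x t s')
    alpha = 1
    u = sp.Rational(1, 2)*(sp.sin(x + alpha*t) + sp.sin(x - alpha*t)) \
        + sp.Rational(1, 2*alpha)*sp.integrate(0, (s, x - alpha*t, x + alpha*t))

    assert sp.simplify(sp.diff(u, t, 2) - alpha**2*sp.diff(u, x, 2)) == 0  # wave eqn
    assert sp.simplify(u.subs(t, 0) - sp.sin(x)) == 0                      # u(x,0) = f(x)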