
Linear algebra from notes:

- Echelon form does not require the leading entries to be 1 for this course
- A square matrix in echelon form is upper triangular, hence its determinant is the
product of the diagonal elements.
- The rank of a matrix cannot be more than the smaller of its number of rows and columns,
since row rank = column rank
- Each dimension of the input of A either carries through to the output (the column space) or gets
squashed into the origin (ker(A)); counting them gives the rank-nullity theorem: rank(A) + nullity(A) = n, the number of columns
- For Ax=b, for a solution to exist, b must be in the column space of A (Range , Image)
- The column space (image) of an (m×n) matrix is a subspace of R^m, because the columns have m elements.
- Checking that elements form a vector space means checking the two closure conditions (addition and scalar multiplication) plus containing the zero vector
- If a square matrix is not full rank, its nullity is ≥ 1, so it has at least one
dimension that gets squashed into the origin, and therefore at least one
eigenvalue that is equal to 0
- Projection: b is what we want to approximate; p is the point in the subspace that is closest
to b; x is the scaling of the basis vectors of the subspace that gets you to p; e is the
difference between b and p, and it is orthogonal to all the basis vectors (see the numpy sketch a few bullets below).
- For data fitting, the functional form does not need to be linear, but it needs to be
linear in the coefficients we want to find.
- The zero eigenvalue has an eigenvector that is in the null space of A
- Eigenvalues of a triangular matrix are the diagonal entries
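
A minimal numpy sketch of the projection picture above (b, p, x, e); the matrix A and the vector b here are made up purely for illustration:

```python
import numpy as np

# Made-up example: the columns of A are the basis vectors of the subspace,
# b is the point we want to approximate.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])          # basis vectors as columns (a plane in R^3)
b = np.array([6.0, 0.0, 0.0])

# x solves the normal equations A^T A x = A^T b
x = np.linalg.solve(A.T @ A, A.T @ b)
p = A @ x                            # closest point in the subspace to b
e = b - p                            # error vector

print(x)            # scaling of the basis vectors
print(p)            # projection of b onto the subspace
print(A.T @ e)      # ~ [0, 0]: e is orthogonal to every basis vector
```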

- If you decompose a matrix via the SVD or an eigendecomposition and write it in terms of
a sum, then each term contributes rank 1, since every column of a term is a multiple of the same
(eigen/singular) vector; adding them all up gives rank r.
- To prove that a matrix is positive semi-definite, show that xT A x ≥ 0 for all x (strictly > 0 for positive definite)
- A non-square matrix can have a non-trivial null space, so in total its SVD can have more
than one zero singular value
- The Frobenius norm of A is the square root of the trace of ATA
- The 2-norm of a matrix is the maximum ratio of the Euclidean length after and before the
transformation. If we have a symmetric matrix, then the eigenvectors are mutually
orthogonal, so to get the maximum enlargement you need to be aligned with the
eigenvector of the largest-magnitude eigenvalue. Hence the 2-norm is the same as the spectral
radius of a symmetric matrix. The spectral radius is the absolute value of the biggest
eigenvalue. If the matrix is not symmetric, then the maximum enlargement could occur
at an intermediate direction between the non-orthogonal eigenvectors.
- The infinity norm is the maximum absolute row sum; the 1-norm is the maximum absolute column sum
- Iterative methods are only for square matrices
- For the Jacobi method you only leave D behind (on the left); for the Gauss-Seidel method you only
take U across, keeping D + L on the left (see the sketch a few bullets below).
- For convergence, it is sufficient that the iteration matrix P has some norm strictly less than 1;
any single norm below 1 is enough, even if other norms are > 1.
- Spectral radius < 1 is a necessary convergance rule that uses the 2- norm for real
symmetric matrices. Diagonally dominant is a sufficient condition for convergence
that uses the infinite norm. Diagonally dominant is only when the diagonal is greater
than the sum of its row, nothing to do with its columns tho. This only works for the
Jacobi method
- A key trick is that for any iterative scheme, if you put the true solution vector s in,
you get s out (it is a fixed point). You can then subtract the original equation to get an equation for the error and prove convergence or divergence.
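
A small numpy sketch of the Jacobi and Gauss-Seidel splittings described above, on a made-up, strictly diagonally dominant system (not from any particular course example):

```python
import numpy as np

# Made-up strictly diagonally dominant system, so both methods converge
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])

D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

x_j = np.zeros(3)    # Jacobi iterate
x_gs = np.zeros(3)   # Gauss-Seidel iterate
for _ in range(25):
    # Jacobi: keep only D on the left, take L and U across
    x_j = np.linalg.solve(D, b - (L + U) @ x_j)
    # Gauss-Seidel: keep D + L on the left, take only U across
    x_gs = np.linalg.solve(D + L, b - U @ x_gs)

print(x_j, x_gs, np.linalg.solve(A, b))   # both approach the true solution

# Convergence check: spectral radius of the iteration matrix P must be < 1
P_jacobi = np.linalg.solve(D, -(L + U))
print(max(abs(np.linalg.eigvals(P_jacobi))))   # < 1 here
```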

- The power method makes the vector converge to the dominant eigenvector, and the ratio of
the norms of successive iterates, ||x_(n+1)|| / ||x_n|| (in any norm), converges to the magnitude of the dominant eigenvalue
- The rate of convergence of the power method depends on the ratio of the two biggest
eigenvalues, |λ2/λ1|.
- The shifted power method applies a linear transformation (a shift) -> power method -> linear
transformation back to the original spectrum
- If a matrix transforms x into Ax, then the Rayleigh quotient tells us how much we
need to scale x in order to get as close as possible to Ax. If x is an
eigenvector, then the scaling is exactly lambda (the eigenvalue).
- The Rayleigh quotient method is the power method, but evaluating the Rayleigh
quotient at each iteration as the eigenvalue estimate (see the sketch below).
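
A short numpy sketch of the power method, using both the ratio of norms and the Rayleigh quotient as eigenvalue estimates; the matrix is made up for illustration:

```python
import numpy as np

# Made-up symmetric matrix for illustration
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

x = np.array([1.0, 0.0])                 # arbitrary non-zero starting vector
for _ in range(50):
    y = A @ x
    # ratio of norms ||x_(k+1)|| / ||x_k|| tends to |lambda_1|
    lam_ratio = np.linalg.norm(y) / np.linalg.norm(x)
    x = y / np.linalg.norm(y)            # normalise to avoid overflow

# Rayleigh quotient: best scaling of x to approximate A x,
# equal to the eigenvalue once x is (close to) an eigenvector
rayleigh = x @ A @ x / (x @ x)
print(lam_ratio, rayleigh)               # both approach the dominant eigenvalue
print(np.linalg.eigvalsh(A).max())       # reference value
```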

- The 2-norm is always less than or equal to the Frobenius norm for any matrix (see the SVD sketch at the end of this list).
- The rank of a diagonalisable (e.g. symmetric) square matrix is the number of non-zero eigenvalues. So when you
deflate a matrix, you subtract one from the rank and get one extra zero eigenvalue: when
deflating, you replace the dominant eigenvalue with a zero.
- In proving norm compatibility, the absolute value of a sum is always less than or equal to the sum of
the absolute values, so when you take the absolute value inside you introduce an inequality ≤
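
A numpy sketch tying the SVD and norm bullets together: rebuilding a made-up matrix from rank-one terms and checking the Frobenius-norm and 2-norm relations.

```python
import numpy as np

# Made-up matrix; the specific numbers don't matter
A = np.random.default_rng(0).normal(size=(4, 3))

U, s, Vt = np.linalg.svd(A)

# Rebuild A as a sum of rank-one terms sigma_i * u_i v_i^T:
# each term has rank 1, and the partial sums climb up to rank r
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
print(np.allclose(A, A_rebuilt))          # True

# Frobenius norm = sqrt(trace(A^T A)) = sqrt(sum of squared singular values)
print(np.linalg.norm(A, 'fro'), np.sqrt(np.trace(A.T @ A)), np.sqrt(np.sum(s**2)))

# 2-norm is the largest singular value, and is <= the Frobenius norm
print(np.linalg.norm(A, 2), s[0])
```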

Stats & probability:


- Permutations nPr: tells you how many different ways you can arrange a sample of r
items from a total of n unique items. Which basically is the same as how many
different ways you can sample r items from a space of n unique items when you care
about the order.
- Combination with replacement: choosing r items from n types with replacement is the same as choosing r positions out of n + r − 1, i.e. C(n + r − 1, r) (stars and bars)
- Total probability theorem finds the probability using all possible “routes”
- The mean is about a sample. The expectation is about a distribution. If we take
a sample, we would "expect" the mean of the sample to be close to the expectation. As the
sample size tends to infinity, the mean of the sample tends to the expectation.
- The mean of a sample is in itself a random variable.
- For the Poisson distribution: if you are given a rate per unit time, say per month, and you are
required to find the probability of the event happening k times in a different unit of
time, say a day, then you can do it by changing lambda so that it represents the
expected number of events in the asked unit time (here, the mean number of events per day).
- OR: if you are given lambda for a month and asked to find P(k = 3) over two months, you
can sum over how the 3 events split between the two months: p(0)p(3) + p(1)p(2) + p(2)p(1) + p(3)p(0) (see the scipy sketch below).
- If X and Y are independent, then E[XY] = E[X]*E[Y]
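
A small scipy sketch of the Poisson rescaling idea above, using a made-up rate of 2 events per month; the two-month probability is computed both by rescaling lambda and by summing over how the events split between the months.

```python
from scipy.stats import poisson

lam_month = 2.0                      # made-up rate: 2 events per month on average

# Changing the unit of time just rescales lambda (taking ~30 days per month)
lam_day = lam_month / 30.0
print(poisson.pmf(3, lam_day))       # P(3 events in one day)

# P(3 events in two months), method 1: rescale lambda to the new interval
direct = poisson.pmf(3, 2 * lam_month)

# Method 2: sum over how the 3 events split between the two months
split = sum(poisson.pmf(i, lam_month) * poisson.pmf(3 - i, lam_month) for i in range(4))
print(direct, split)                 # the two methods agree
```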

- Chebyshev's inequality is a really pessimistic bound on the probability that a
sample lies k or more standard deviations away from the mean: at most 1/k², for any distribution.
- The exponential distribution is the continuous analogue of the geometric distribution.
(The probability that the interval between happenings is …..)
- The Poisson has mean = variance = lambda, whereas the exponential has mean =
standard deviation = 1/lambda.
- Remember the mean and variance of the different approximations according to the
central limit theorem.
- For the normal distribution remember that you can interpolate between different
tabulated points.
- Memorise the mean and variance of the Chi-squared and t-distribution.
- The sample variance formula divides by n − 1, whereas the population variance divides by N
- For confidence intervals: you take a set of samples (say 5 measurements) and find
the mean of the set, say m; this mean is the best estimate of the population
mean. Now say you take 5 sets of measurements, each with mean mi: the mean of
the means is the same as the mean of the population, but the variance of the means
is equal to the variance of the population divided by n
- The (scaled) sample variance has a chi-squared distribution:
(n − 1)S²/σ² ~ χ² with n − 1 degrees of freedom
- Look at the confidence interval of the variance of the distribution using the sample’s
variance S2.
- The more samples you take the smaller the confidence interval is (more confident).
- The power of the test, 1 − beta, is the probability that we correctly reject a false null
hypothesis.
- The number of tests required to achieve a given confidence interval width can be found from the
confidence interval formula (see the scipy sketch after these bullets).
- Look at the tutorial sheet for how you can reduce the number of tests required.
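
A scipy sketch of the confidence-interval bullets above, using a made-up sample of 5 measurements; it builds the t-based interval for the mean (variance of the sample mean = σ²/n) and the chi-squared interval for the variance.

```python
import numpy as np
from scipy import stats

# Made-up set of 5 measurements
sample = np.array([9.8, 10.1, 10.3, 9.9, 10.2])
n = len(sample)
m = sample.mean()                    # best estimate of the population mean
S2 = sample.var(ddof=1)              # sample variance: divide by n - 1

# 95% CI for the mean, population variance unknown -> t distribution,
# using the fact that the variance of the sample mean is sigma^2 / n
t = stats.t.ppf(0.975, df=n - 1)
half_width = t * np.sqrt(S2 / n)
print(m - half_width, m + half_width)

# 95% CI for the variance, using (n - 1) S^2 / sigma^2 ~ chi-squared (n - 1 dof)
lo = (n - 1) * S2 / stats.chi2.ppf(0.975, df=n - 1)
hi = (n - 1) * S2 / stats.chi2.ppf(0.025, df=n - 1)
print(lo, hi)
```
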
PDEs from notes:
- For SoV, check that one of the domains is bounded, then check if the bounded one has homogeneous BCs
- If the bounded domain has non-homogeneous BCs, use superposition to make them homogeneous: you construct
the full solution from a homogeneous solution plus a PI that does not change the original PDE
when substituted, usually a linear function. The ICs then change to the original ICs minus
the PI, because the ICs apply to the sum of the PI and the homogeneous solution.
- The integral of sin^2 at any frequency over a period is half that period, e.g. the integral of sin²(nπx/L) from 0 to L is L/2.
- We need the boundary conditions to be homogeneous or periodic so we can transfer
the BCs from the original variable to the separated variables
- D'Alembert's solution is derived by taking a forward and a backward wave and
summing them to match the IC1 Dirichlet condition, then summing their derivatives wrt time and
equating them to the IC2 Neumann condition.
- The full solution of the travelling wave equation requires the solution (initial data) to be extended as
 an odd function,
 2L-periodic,
so the ends become fixed and the BCs are satisfied

- You can't have all BCs and ICs equal to zero, otherwise the problem is trivial

- Story of separation of Variables


 First we have a PDE; we assume a separable solution and substitute
 We make each side depend on only one variable and equate them
 We equate them to a constant that could be +ve, -ve or zero
 Boundary conditions hint us towards the sign of the separation constant (±λ²)
 When we try the different lambdas we choose the one which does not
give us trivial solutions, so there is only one reasonable option
 We transfer the BCs of the full solution to the bounded domain
because they're homogeneous or periodic
 Then we use BC of the Domain that is bounded (periodic) to
determine the eigenvalues of the problem. We still have coefficients.
 We then use the sign of lambda and our eigenvalues to determine the
solution of the other domain.
 We then combine the solutions and combine coefficients,
 Use superposition to find the more general solution, since there are
usually infinitely many eigenvalues
 We then use orthogonality to determine the coefficients, taking the
inner products with the basis functions. Remember the inner product
results for trig, Legendre and Bessel functions (see the Fourier-coefficient sketch after the tute-sheet story below)
- Story of PDE tute sheets
 Q1 practice in scaling. Making a non-homogeneous BC homogeneous by addition: U = V − V₀
 Q2 familiarise with the general solution of Wave eq. Introduction to
finding the coefficients with orthogonality
 Q3 Standard Sov for diffusion
 Q4 Sov diffusion with orthogonality integration
 Q5 standard wave
 Q6 Standard Laplace
 Q7: when you have a time-dependent sinusoidal BC, this acts as
a forcing function. This means that we can expect the time part of SoV
to vary sinusoidally with the same frequency. Evanescent
waves have a sinusoidal forcing at one end, and a zero BC at the other
 Q8 Laplace with a particular integral (non-homo IC); when you take the
inverse Laplace of a shifted function don't forget the shifted Heaviside
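
A small numpy/scipy sketch of the orthogonality step: computing Fourier sine coefficients for a made-up initial condition on [0, L] with homogeneous Dirichlet BCs, using the fact that the integral of sin² over the interval is L/2.

```python
import numpy as np
from scipy.integrate import quad

L = 1.0
f = lambda x: x * (L - x)            # made-up initial condition with u(0) = u(L) = 0

# Orthogonality: the integral over [0, L] of sin(n pi x / L)^2 is L / 2,
# so the sine-series coefficients are b_n = (2/L) * integral of f(x) sin(n pi x / L) dx
def b(n):
    integrand = lambda x: f(x) * np.sin(n * np.pi * x / L)
    return (2.0 / L) * quad(integrand, 0, L)[0]

# The partial sum of the series reproduces f(x) on (0, L)
x = np.linspace(0, L, 5)
series = sum(b(n) * np.sin(n * np.pi * x / L) for n in range(1, 50))
print(f(x))
print(series)                        # close to f(x)
```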

- Tute sheet 2, the more interesting one


 Q1 Laplace basics: the bounded direction carries the homogeneous BCs, the non-bounded
direction must carry the non-zero BC
 Q2 Laplace in cylindrical coordinates; if you have a trig solution, you can use Fourier-series
symmetry arguments to cancel out some of the terms immediately

A1 2014
- Damped wave equation could represent a string in a viscous medium, where
damping coefficient is the viscosity of the medium.
- When you take the inverse Laplace transform, you don't take the s back
- For the propagation constant, use the form given in HLT, so you would write the
propagation constant in terms of its real and negative imaginary parts
- Phase velocity is the rate at which the apex moves; for the equation it is the
frequency divided by the wave number, which is the real part of the propagation
constant
- Group velocity is the velocity of the profile of the wave

A1 2021
- D'Alembert solution conditions: the initial and boundary conditions are not all
zero, and the functions f and g are sufficiently differentiable
- Can’t use SoV with cross-derivatives
- If you have a time-dependent BC, then you are more likely to use Laplace transforms. When you
have a multiplication of a Heaviside with another function, you must shift the other
function by exactly the same amount as the Heaviside.
- Travelling waves don’t have to have the same amplitude, so superimpose the two
functional forms arbitrarily scaled to get the general solution.
- Having trig general solution and a cos BC makes you think about simply matching
terms?

A1 2016
- Superposition of the forward wave and the reflected wave can be seen as the
superposition of forward in medium 1 and backward in medium 2
- Differentiating f(x − ct) wrt time can be done quickly: ∂f(x − ct)/∂t = −c f′(x − ct), where f′ is
the derivative of f wrt its argument (x − ct)
- Continuity of displacement and velocity at any boundary gives the reflection
coefficients which are given in HLT page 171
- Solutions that are sums of sin and cos are very much related to Fourier Series
- For phase and group velocities you have to eliminate omega from the RHS (see the sympy sketch below)
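
A sympy sketch of getting phase and group velocities from a dispersion relation; the relation used here is a made-up example, not the one from the exam question:

```python
import sympy as sp

k, c, a = sp.symbols('k c a', positive=True)

# Made-up dispersion relation for illustration: omega^2 = c^2 k^2 + a
omega = sp.sqrt(c**2 * k**2 + a)

v_phase = sp.simplify(omega / k)          # phase velocity: how fast a crest (apex) moves
v_group = sp.simplify(sp.diff(omega, k))  # group velocity: how fast the wave profile moves

print(v_phase)   # sqrt(a + c**2*k**2)/k
print(v_group)   # c**2*k/sqrt(a + c**2*k**2)
```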
