Data Fundamentals (H): Formulae
DF(H), University of Glasgow, John H. Williamson, 2017
This lists all of the numbered formulae in Data Fundamentals (H). The most important formulae are shown with bold titles. You should know bolded formulae thoroughly. The Unit in which each formula first appeared and is described is indicated in the section headings. The mathematical notation used is most compactly explained in Mathematical Notation: A Guide for Engineers and Scientists (https://www.amazon.co.uk/Mathematical-Notation-Guide-Engineers-Scientists/dp/1466230525).
week_1_numerical_i.ipynb
Notation for a vector
$\mathbf{x}$
Notation for a matrix
$A$
week_2_numerical_ii.ipynb
Relative precision of floating point representation
$\epsilon = \frac{|\text{float}(x) - x|}{|x|}$
IEEE 754 guarantee for relative precision (machine precision $\epsilon$) for a $t$ bit mantissa floating point number
$\epsilon = \frac{1}{2}2^{-t} = 2^{-t-1}$
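As an illustrative check (not from the course notes), the bound above can be verified empirically for float32, which has a $t = 24$ bit significand, so the relative rounding error should never exceed $2^{-24}$:

```python
import numpy as np

# Round a few arbitrary real numbers to float32 and measure the relative
# representation error; IEEE 754 guarantees it is at most 2^-24 for t = 24.
values = [0.1, 1 / 3, 3.14159265, 2 / 7, 123.456]
rel_errs = [abs(float(np.float32(x)) - x) / abs(x) for x in values]
print(max(rel_errs) <= 2.0 ** -24)   # True
```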
week_4_matrices_i.ipynb
Addition of two vectors
$\mathbf{x} + \mathbf{y} = [x_1 + y_1, x_2 + y_2, \dots, x_n + y_n]$
Scalar multiplication of a vector
$c\mathbf{x} = [cx_1, cx_2, \dots, cx_n]$
Linear interpolation of two vectors
$\text{lerp}(\mathbf{x}, \mathbf{y}, \alpha) = \alpha\mathbf{x} + (1 - \alpha)\mathbf{y}$
Cosine of angle between two vectors in terms of normalised dot product
$\cos\theta = \frac{\mathbf{x} \cdot \mathbf{y}}{||\mathbf{x}||\,||\mathbf{y}||}$
Dot product / inner product
$\mathbf{x} \cdot \mathbf{y} = \sum_i x_i y_i$
Mean vector
$\text{mean}(\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_N) = \frac{1}{N}\sum_i \mathbf{x}_i$
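The vector formulae above translate directly to NumPy; this is a small sketch with arbitrary example vectors, not code from the course notebooks:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

add = x + y                        # elementwise vector addition
scaled = 2.5 * x                   # scalar multiplication
alpha = 0.25
lerped = alpha * x + (1 - alpha) * y   # linear interpolation
dot = np.dot(x, y)                 # inner product: sum of x_i * y_i
cos_theta = dot / (np.linalg.norm(x) * np.linalg.norm(y))
mean_vec = np.mean([x, y], axis=0) # mean of a set of vectors
print(dot)                         # 32.0
```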
Definition of linearity (for a linear function $f$ and equivalent matrix $A$)
$f(\mathbf{x} + \mathbf{y}) = f(\mathbf{x}) + f(\mathbf{y}), \quad f(c\mathbf{x}) = cf(\mathbf{x}), \quad f(\mathbf{x}) = A\mathbf{x}$
Matrix addition
$A + B = \begin{bmatrix} a_{1,1} + b_{1,1} & a_{1,2} + b_{1,2} & \dots & a_{1,m} + b_{1,m} \\ \vdots & & & \vdots \\ a_{n,1} + b_{n,1} & a_{n,2} + b_{n,2} & \dots & a_{n,m} + b_{n,m} \end{bmatrix}$
Scalar matrix multiplication
$cA = \begin{bmatrix} ca_{1,1} & ca_{1,2} & \dots & ca_{1,m} \\ \vdots & & & \vdots \\ ca_{n,1} & ca_{n,2} & \dots & ca_{n,m} \end{bmatrix}$
Matrix multiplication
$C_{ij} = \sum_k a_{ik} b_{kj}$
Outer product (matrix version)
$\mathbf{x} \otimes \mathbf{y} = \mathbf{x}\mathbf{y}^T$
Inner product (matrix version)
$\mathbf{x} \cdot \mathbf{y} = \mathbf{x}^T\mathbf{y}$
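A sketch of the three matrix products above in NumPy, with arbitrary example matrices; note that the outer product of column vectors gives a matrix while the inner product gives a scalar:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
C = A @ B                      # C_ij = sum_k a_ik * b_kj

x = np.array([[1.0], [2.0]])   # column vectors
y = np.array([[3.0], [4.0]])
outer = x @ y.T                # x ⊗ y = x y^T, a 2x2 matrix
inner = (x.T @ y).item()       # x · y = x^T y, a scalar
print(C[0, 0], inner)          # 19.0 11.0
```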
week_5_matrices_ii.ipynb
Power iteration for leading eigenvalue
$\mathbf{x}_n = \frac{A\mathbf{x}_{n-1}}{\|A\mathbf{x}_{n-1}\|_\infty}$
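A minimal power-iteration sketch following the formula above; the matrix and iteration count are illustrative, and the leading eigenvalue is then read off with a Rayleigh quotient:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
x = np.array([1.0, 1.0])
for _ in range(100):
    # normalise by the infinity norm, as in the formula
    x = A @ x / np.linalg.norm(A @ x, np.inf)

# x has converged to the leading eigenvector; recover the eigenvalue
lam = (x @ A @ x) / (x @ x)
print(round(lam, 4))   # 3.618, i.e. (5 + sqrt(5)) / 2
```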
Definition of eigenvalue and eigenvector
$A\mathbf{x}_i = \lambda_i \mathbf{x}_i$
Trace of a matrix
$\text{Tr}(A) = \sum_{i=1}^n \lambda_i$
Determinant of a matrix
$\det(A) = \prod_{i=1}^n \lambda_i$
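The trace and determinant identities can be checked numerically; the matrix here is arbitrary:

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
eig = np.linalg.eigvals(A)

# trace = sum of eigenvalues, determinant = product of eigenvalues
print(np.isclose(np.trace(A), eig.sum()))        # True
print(np.isclose(np.linalg.det(A), eig.prod()))  # True
```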
Linear system
$A\mathbf{x} = \mathbf{y}$
SVD definition
$A = U\Sigma V^T$
(pseudo-)inverse by SVD
$A^{-1} = V\Sigma^{-1}U^T \qquad A^\dagger = V\Sigma^\dagger U^T$
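A sketch of the pseudo-inverse computed from the SVD for a full-rank rectangular matrix (chosen arbitrarily), compared against NumPy's built-in `pinv`:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

# A^+ = V Sigma^+ U^T; Sigma^+ inverts the non-zero singular values
A_pinv = Vt.T @ np.diag(1.0 / sigma) @ U.T
print(np.allclose(A_pinv, np.linalg.pinv(A)))   # True
```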
week_6_optimisation_i.ipynb
Definition of optimisation
$\theta^* = \arg\min_{\theta\in\Theta} L(\theta)$
Objective function for approximation
$L(\theta) = \|\mathbf{y}' - \mathbf{y}\| = \|f(\mathbf{x};\theta) - \mathbf{y}\|$
Equality constraints for optimisation
$\theta^* = \arg\min_{\theta\in\Theta} L(\theta) \text{ subject to } c(\theta) = 0$
Inequality constraints for optimisation
$\theta^* = \arg\min_{\theta\in\Theta} L(\theta) \text{ subject to } c(\theta) \le 0$
week_7_optimisation_ii.ipynb
Gradient vector
$\nabla L(\theta) = \left[\frac{\partial L(\theta)}{\partial\theta_1}, \frac{\partial L(\theta)}{\partial\theta_2}, \dots, \frac{\partial L(\theta)}{\partial\theta_n}\right]$
$\nabla$ is called "nabla" or "del" and is used to represent the gradient of a function.
Gradient descent algorithm
$\theta^{(i+1)} = \theta^{(i)} - \delta\nabla L(\theta^{(i)})$
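A minimal gradient-descent sketch on a simple quadratic objective $L(\theta) = \|\theta - \mathbf{c}\|^2$, whose gradient is $2(\theta - \mathbf{c})$; the target $\mathbf{c}$, step size $\delta$ and iteration count are all illustrative:

```python
import numpy as np

c = np.array([3.0, -1.0])     # minimiser of the quadratic
theta = np.zeros(2)           # initial guess
delta = 0.1                   # step size

for _ in range(200):
    grad = 2 * (theta - c)              # gradient of ||theta - c||^2
    theta = theta - delta * grad        # descent update

print(np.round(theta, 4))     # converges to c = [3, -1]
```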
Definition of differentiation
$\frac{df(x)}{dx} = \lim_{h\to 0} \frac{f(x+h) - f(x-h)}{2h}$
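The symmetric (central) difference above, evaluated with a small finite $h$, gives a practical numerical derivative; here it approximates $\frac{d}{dx}\sin(x)$ at $x = 1$, which should be close to $\cos(1)$ (the step size is an assumed small value):

```python
import numpy as np

h = 1e-5
x = 1.0
approx = (np.sin(x + h) - np.sin(x - h)) / (2 * h)

# central differences have O(h^2) error, so this is very accurate
print(abs(approx - np.cos(x)) < 1e-9)   # True
```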
Hessian matrix
$\nabla\nabla L(\theta) = \nabla^2 L(\theta) = \begin{bmatrix} \frac{\partial^2 L(\theta)}{\partial\theta_1^2} & \frac{\partial^2 L(\theta)}{\partial\theta_1\partial\theta_2} & \frac{\partial^2 L(\theta)}{\partial\theta_1\partial\theta_3} & \dots & \frac{\partial^2 L(\theta)}{\partial\theta_1\partial\theta_n} \\ \frac{\partial^2 L(\theta)}{\partial\theta_2\partial\theta_1} & \frac{\partial^2 L(\theta)}{\partial\theta_2^2} & \frac{\partial^2 L(\theta)}{\partial\theta_2\partial\theta_3} & \dots & \frac{\partial^2 L(\theta)}{\partial\theta_2\partial\theta_n} \\ \vdots & & & & \vdots \\ \frac{\partial^2 L(\theta)}{\partial\theta_n\partial\theta_1} & \frac{\partial^2 L(\theta)}{\partial\theta_n\partial\theta_2} & \frac{\partial^2 L(\theta)}{\partial\theta_n\partial\theta_3} & \dots & \frac{\partial^2 L(\theta)}{\partial\theta_n^2} \end{bmatrix}$
Convex sum of sub-objective functions
$L(\theta) = \lambda_1 L_1(\theta) + \lambda_2 L_2(\theta) + \dots + \lambda_n L_n(\theta)$
Week_8_probability_I.ipynb
Boundedness of probabilities
$0 \le P(A) \le 1$
Unitarity of probabilities
$\sum_A P(A) = 1$
Sum rule for probabilities
$P(A \lor B) = P(A) + P(B) - P(A \land B)$
$\land$ is "and" and $\lor$ is "or"
Conditional probability definition
$P(A|B) = \frac{P(A \land B)}{P(B)}$
$|$ is read as "given" or "conditioned on", as in "probability of A given B".
Empirical distribution
$P(X = x) = \frac{n_x}{N}$
Conditional probability in terms of joint and marginal
$P(X|Y) = \frac{P(X, Y)}{P(Y)}$
$f_X(X|Y) = \frac{f_X(X, Y)}{f_X(Y)}$
Entropy of a random variable
$H(X) = -\sum_x P(X = x)\log_2 P(X = x)$
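The entropy formula can be sketched directly; a fair coin has exactly 1 bit of entropy, and any biased coin has less (the probabilities used here are illustrative):

```python
import numpy as np

def entropy(p):
    """Entropy in bits of a discrete distribution given as probabilities."""
    p = np.asarray(p)
    return -np.sum(p * np.log2(p))

print(entropy([0.5, 0.5]))         # 1.0 (fair coin)
print(entropy([0.9, 0.1]) < 1.0)   # True (biased coin is less uncertain)
```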
Bayes' Rule
$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$
Bayes' Rule (in terms of hypotheses $H$ and data $D$)
$P(H|D) = \frac{P(D|H)P(H)}{P(D)}$
Integration over the evidence
$P(B) = \sum_i P(B|A_i)P(A_i)$
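A worked example combining Bayes' rule with the evidence sum, using made-up numbers (a test with a 1% base rate, 99% sensitivity, and a 5% false-positive rate):

```python
p_h = 0.01               # prior P(H): base rate of the hypothesis
p_d_given_h = 0.99       # P(D|H): probability of the data if H holds
p_d_given_not_h = 0.05   # P(D|not H): false positive rate

# evidence P(D) by summing over both hypotheses
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# Bayes' rule: posterior P(H|D)
p_h_given_d = p_d_given_h * p_h / p_d
print(round(p_h_given_d, 3))   # 0.167 - still low despite the positive test
```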
Definition of odds
$\text{odds} = \frac{p}{1 - p}$
Definition of log-odds (logit)
$\text{logit}(p) = \log\left(\frac{p}{1 - p}\right)$
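The logit maps probabilities to the whole real line; its inverse is the logistic sigmoid (the sigmoid is a standard companion function, not one defined in these notes):

```python
import numpy as np

def logit(p):
    """Log-odds of a probability p in (0, 1)."""
    return np.log(p / (1 - p))

def sigmoid(z):
    """Inverse of the logit: maps a real number back to (0, 1)."""
    return 1 / (1 + np.exp(-z))

p = 0.8
print(np.isclose(sigmoid(logit(p)), p))   # True: sigmoid inverts logit
print(logit(0.5))                         # 0.0: even odds have zero logit
```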
Probability of multiple independent outcomes
$P(X_1 = x_1, \dots, X_n = x_n) = \prod_{i=1}^n P(X_i = x_i)$
Log-probability of multiple independent outcomes
$\log P(X_1 = x_1, \dots, X_n = x_n) = \sum_{i=1}^n \log P(X_i = x_i)$
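The log form exists for a practical reason: the product of many small probabilities underflows floating point, while the equivalent sum of logs stays well behaved (the probabilities here are made up):

```python
import numpy as np

# 1000 independent outcomes, each with probability 0.01
probs = np.full(1000, 0.01)

print(np.prod(probs))          # 0.0: the true value 1e-2000 underflows
print(np.sum(np.log(probs)))   # about -4605.17, perfectly representable
```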
Probability of continuous random variable in terms of PDF $f_X(x)$
$P(a \le X \le b) = \int_a^b f_X(x)\,dx$
Definition of cumulative distribution function
$c_X(x) = \int_{-\infty}^x f_X(x)\,dx = P(X \le x)$
Notation for normally distributed variable $X$
$X \sim N(\mu, \sigma^2)$
(note $\sim$ is read "distributed as")
week_9_probability_ii.ipynb
Expectation of a random variable
$E[X] = \int_x f_X(x)\,x\,dx$
Expectation of a function of a random variable
$E[g(X)] = \int_x f_X(x)g(x)\,dx$
Acceptance probability for Metropolis-Hastings jump from $x$ to $x'$
$P(\text{accept}) = \begin{cases} f_X(x')/f_X(x), & f_X(x) > f_X(x') \\ 1, & f_X(x) \le f_X(x') \end{cases}$
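A minimal sampler sketch using the acceptance rule above with a symmetric Gaussian proposal (the Metropolis special case); the target is an unnormalised standard normal density, and the proposal width, seed and sample count are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(-x * x / 2)   # target density, up to a constant

x = 0.0
samples = []
for _ in range(20000):
    x_new = x + rng.normal(0.0, 1.0)            # symmetric jump
    # accept with probability 1 if f increases, else with ratio f(x')/f(x)
    if f(x_new) >= f(x) or rng.uniform() < f(x_new) / f(x):
        x = x_new
    samples.append(x)

samples = np.array(samples)
# the chain should approximate N(0, 1): mean near 0, std near 1
print(samples.mean(), samples.std())
```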
week_10_signals.ipynb
Definition of Nyquist limit
$f_n = \frac{f_s}{2}$
Signal to noise ratio
$\text{SNR} = \frac{S}{N}, \qquad \text{SNR}_{dB} = 10\log_{10}\left(\frac{S}{N}\right)$
Exponential smoothing
$y[t] = \alpha y[t-1] + (1 - \alpha)x[t]$
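The recurrence above can be sketched as a short loop; the input signal and $\alpha$ are arbitrary, and the first output is initialised to the first input (an assumed convention, since the formula leaves $y[0]$ unspecified):

```python
import numpy as np

def exp_smooth(x, alpha):
    """Apply y[t] = alpha * y[t-1] + (1 - alpha) * x[t] along a signal."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    for t in range(1, len(x)):
        y[t] = alpha * y[t - 1] + (1 - alpha) * x[t]
    return y

x = np.array([0.0, 1.0, 1.0, 1.0])
smoothed = exp_smooth(x, 0.5)
print(smoothed)   # y = [0, 0.5, 0.75, 0.875]: a step input rises gradually
```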
Convolution of sampled signals
$(x * y)[n] = \sum_{m=-M}^{M} x[m]\,y[n-m]$
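The convolution sum can be checked against NumPy's built-in implementation; the two short signals are made up, and the direct loop only sums terms where both indices are in range:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.5])

# direct evaluation of (x * y)[n] = sum_m x[m] * y[n - m]
direct = [sum(x[m] * y[n - m]
              for m in range(len(x)) if 0 <= n - m < len(y))
          for n in range(len(x) + len(y) - 1)]

print(np.allclose(direct, np.convolve(x, y)))   # True
```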