The Matrix Cookbook
Petersen & Pedersen, Version: February 16, 2008
[ http://matrixcookbook.com ]
Errors: Very likely there are errors, typos, and mistakes for which we apologize and would be grateful to receive corrections at cookbook@2302.dk.
Contents
1 Basics
1.1 Trace and Determinants
1.2 The Special Case 2x2

2 Derivatives
2.1 Derivatives of a Determinant
2.2 Derivatives of an Inverse
2.3 Derivatives of Eigenvalues
2.4 Derivatives of Matrices, Vectors and Scalar Forms
2.5 Derivatives of Traces
2.6 Derivatives of vector norms
2.7 Derivatives of matrix norms
2.8 Derivatives of Structured Matrices

3 Inverses
3.1 Basic
3.2 Exact Relations
3.3 Implication on Inverses
3.4 Approximations
3.5 Generalized Inverse
3.6 Pseudo Inverse

4 Complex Matrices
4.1 Complex Derivatives

5 Decompositions
5.1 Eigenvalues and Eigenvectors
5.2 Singular Value Decomposition
5.3 Triangular Decomposition

6 Statistics and Probability
6.2 Expectation of Linear Combinations
6.3 Weighted Scalar Variable

7 Multivariate Distributions
7.1 Student's t
7.2 Cauchy
7.3 Gaussian
7.4 Multinomial
7.5 Dirichlet
7.6 Normal-Inverse Gamma
7.7 Wishart
7.8 Inverse Wishart

8 Gaussians
8.1 Basics
8.2 Moments
8.3 Miscellaneous
8.4 Mixture of Gaussians

9 Special Matrices
9.1 Block matrices
9.2 Discrete Fourier Transform Matrix, The
9.3 Hermitian Matrices and skew-Hermitian
9.4 Idempotent Matrices
9.5 Orthogonal matrices
9.6 Positive Definite and Semi-definite Matrices
9.7 Single-entry Matrix, The
9.8 Symmetric, Skew-symmetric/Antisymmetric
9.9 Toeplitz Matrices
9.10 Transition matrices
9.11 Units, Permutation and Shift
9.12 Vandermonde Matrices

10 Functions and Operators
10.2 Kronecker and Vec Operator
10.3 Solutions to Systems of Equations
10.4 Vector Norms
10.5 Matrix Norms
10.6 Rank
10.7 Integral Involving Dirac Delta Functions
10.8 Miscellaneous

A One-dimensional Results
A.1 Gaussian
A.2 One Dimensional Mixture of Gaussians

B Proofs and Details
B.1 Misc Proofs
$\mathbf{A}$  Matrix
$\mathbf{A}_{ij}$  Matrix indexed for some purpose
$\mathbf{A}_i$  Matrix indexed for some purpose
$\mathbf{A}^{ij}$  Matrix indexed for some purpose
$\mathbf{A}^n$  Matrix indexed for some purpose, or the $n$.th power of a square matrix
$\mathbf{A}^{-1}$  The inverse matrix of the matrix $\mathbf{A}$
$\mathbf{A}^+$  The pseudo inverse matrix of the matrix $\mathbf{A}$ (see Sec. 3.6)
$\mathbf{A}^{1/2}$  The square root of a matrix (if unique), not elementwise
$(\mathbf{A})_{ij}$  The $(i,j)$.th entry of the matrix $\mathbf{A}$
$A_{ij}$  The $(i,j)$.th entry of the matrix $\mathbf{A}$
$[\mathbf{A}]_{ij}$  The $ij$-submatrix, i.e. $\mathbf{A}$ with the $i$.th row and $j$.th column deleted
$\mathbf{a}$  Vector
$\mathbf{a}_i$  Vector indexed for some purpose
$a_i$  The $i$.th element of the vector $\mathbf{a}$
$a$  Scalar
$\det(\mathbf{A})$  Determinant of $\mathbf{A}$
$\mathrm{Tr}(\mathbf{A})$  Trace of the matrix $\mathbf{A}$
$\mathrm{diag}(\mathbf{A})$  Diagonal matrix of the matrix $\mathbf{A}$, i.e. $(\mathrm{diag}(\mathbf{A}))_{ij} = \delta_{ij}A_{ij}$
$\mathrm{eig}(\mathbf{A})$  Eigenvalues of the matrix $\mathbf{A}$
$\mathrm{vec}(\mathbf{A})$  The vector-version of the matrix $\mathbf{A}$ (see Sec. 10.2.2)
$\sup$  Supremum of a set
$\|\mathbf{A}\|$  Matrix norm (subscript if any denotes what norm)
$\mathbf{A}^T$  Transposed matrix
$\mathbf{A}^{-T}$  The inverse of the transpose and vice versa, $\mathbf{A}^{-T} = (\mathbf{A}^{-1})^T = (\mathbf{A}^T)^{-1}$
$\mathbf{A}^*$  Complex conjugated matrix
$\mathbf{A}^H$  Transposed and complex conjugated matrix (Hermitian)
1 Basics
1.2 The Special Case 2x2
Consider the matrix $\mathbf{A} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$.
Eigenvalues:
$$\lambda^2 - \lambda\,\mathrm{Tr}(\mathbf{A}) + \det(\mathbf{A}) = 0$$
$$\lambda_1 = \frac{\mathrm{Tr}(\mathbf{A}) + \sqrt{\mathrm{Tr}(\mathbf{A})^2 - 4\det(\mathbf{A})}}{2} \qquad \lambda_2 = \frac{\mathrm{Tr}(\mathbf{A}) - \sqrt{\mathrm{Tr}(\mathbf{A})^2 - 4\det(\mathbf{A})}}{2}$$
$$\lambda_1 + \lambda_2 = \mathrm{Tr}(\mathbf{A}) \qquad \lambda_1\lambda_2 = \det(\mathbf{A})$$
Eigenvectors:
$$\mathbf{v}_1 \propto \begin{bmatrix} A_{12} \\ \lambda_1 - A_{11} \end{bmatrix} \qquad \mathbf{v}_2 \propto \begin{bmatrix} A_{12} \\ \lambda_2 - A_{11} \end{bmatrix}$$
Inverse:
$$\mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})}\begin{bmatrix} A_{22} & -A_{12} \\ -A_{21} & A_{11} \end{bmatrix} \tag{26}$$
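As a quick numerical sanity check of the 2x2 results above, the following NumPy sketch (the example matrix and names are illustrative only) verifies the eigenvalue formulas and equation (26):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

tr, det = np.trace(A), np.linalg.det(A)
disc = np.sqrt(tr**2 - 4*det)            # discriminant of the characteristic polynomial
lam1, lam2 = (tr + disc)/2, (tr - disc)/2

# lam1 + lam2 = Tr(A), lam1*lam2 = det(A)
assert np.isclose(lam1 + lam2, tr) and np.isclose(lam1*lam2, det)

# Eigenvector v1 ~ [A12, lam1 - A11]^T: check A v1 = lam1 v1
v1 = np.array([A[0, 1], lam1 - A[0, 0]])
assert np.allclose(A @ v1, lam1 * v1)

# Inverse via equation (26)
A_inv = np.array([[A[1, 1], -A[0, 1]],
                  [-A[1, 0], A[0, 0]]]) / det
assert np.allclose(A_inv, np.linalg.inv(A))
```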
2 Derivatives
This section covers differentiation of a number of expressions with respect to a matrix X. Note that it is always assumed that X has no special structure, i.e. that its elements are independent (e.g. not symmetric, Toeplitz, positive definite). See Section 2.8 for differentiation of structured matrices. The basic assumption can be written as a formula:
$$\frac{\partial X_{kl}}{\partial X_{ij}} = \delta_{ik}\delta_{lj} \tag{27}$$
The following rules are general and very useful when deriving the differential of
an expression ([18]):
$$\partial\mathbf{A} = \mathbf{0} \quad (\mathbf{A}\ \text{is a constant}) \tag{28}$$
$$\partial(\alpha\mathbf{X}) = \alpha\,\partial\mathbf{X} \tag{29}$$
$$\partial(\mathbf{X}+\mathbf{Y}) = \partial\mathbf{X} + \partial\mathbf{Y} \tag{30}$$
$$\partial(\mathrm{Tr}(\mathbf{X})) = \mathrm{Tr}(\partial\mathbf{X}) \tag{31}$$
$$\partial(\mathbf{X}\mathbf{Y}) = (\partial\mathbf{X})\mathbf{Y} + \mathbf{X}(\partial\mathbf{Y}) \tag{32}$$
$$\partial(\mathbf{X}\circ\mathbf{Y}) = (\partial\mathbf{X})\circ\mathbf{Y} + \mathbf{X}\circ(\partial\mathbf{Y}) \tag{33}$$
$$\partial(\mathbf{X}\otimes\mathbf{Y}) = (\partial\mathbf{X})\otimes\mathbf{Y} + \mathbf{X}\otimes(\partial\mathbf{Y}) \tag{34}$$
$$\partial(\mathbf{X}^{-1}) = -\mathbf{X}^{-1}(\partial\mathbf{X})\mathbf{X}^{-1} \tag{35}$$
$$\partial(\det(\mathbf{X})) = \det(\mathbf{X})\,\mathrm{Tr}(\mathbf{X}^{-1}\partial\mathbf{X}) \tag{36}$$
$$\partial(\ln(\det(\mathbf{X}))) = \mathrm{Tr}(\mathbf{X}^{-1}\partial\mathbf{X}) \tag{37}$$
$$\partial\mathbf{X}^T = (\partial\mathbf{X})^T \tag{38}$$
$$\partial\mathbf{X}^H = (\partial\mathbf{X})^H \tag{39}$$
2.1 Derivatives of a Determinant
$$\frac{\partial\det(\mathbf{X})}{\partial\mathbf{X}} = \det(\mathbf{X})(\mathbf{X}^{-1})^T \tag{41}$$
$$\frac{\partial\det(\mathbf{A}\mathbf{X}\mathbf{B})}{\partial\mathbf{X}} = \det(\mathbf{A}\mathbf{X}\mathbf{B})(\mathbf{X}^{-1})^T = \det(\mathbf{A}\mathbf{X}\mathbf{B})(\mathbf{X}^T)^{-1} \tag{42}$$
$$\frac{\partial\det(\mathbf{X}^T\mathbf{A}\mathbf{X})}{\partial\mathbf{X}} = 2\det(\mathbf{X}^T\mathbf{A}\mathbf{X})\mathbf{X}^{-T} \tag{43}$$
If $\mathbf{X}$ is not square but $\mathbf{A}$ is symmetric, then
$$\frac{\partial\det(\mathbf{X}^T\mathbf{A}\mathbf{X})}{\partial\mathbf{X}} = 2\det(\mathbf{X}^T\mathbf{A}\mathbf{X})\mathbf{A}\mathbf{X}(\mathbf{X}^T\mathbf{A}\mathbf{X})^{-1} \tag{44}$$
If $\mathbf{X}$ is not square and $\mathbf{A}$ is not symmetric, then
$$\frac{\partial\det(\mathbf{X}^T\mathbf{A}\mathbf{X})}{\partial\mathbf{X}} = \det(\mathbf{X}^T\mathbf{A}\mathbf{X})\big(\mathbf{A}\mathbf{X}(\mathbf{X}^T\mathbf{A}\mathbf{X})^{-1} + \mathbf{A}^T\mathbf{X}(\mathbf{X}^T\mathbf{A}^T\mathbf{X})^{-1}\big) \tag{45}$$
$$\frac{\partial\ln|\det(\mathbf{X}^T\mathbf{X})|}{\partial\mathbf{X}} = 2(\mathbf{X}^+)^T \tag{46}$$
$$\frac{\partial\ln|\det(\mathbf{X}^T\mathbf{X})|}{\partial\mathbf{X}^+} = -2\mathbf{X}^T \tag{47}$$
$$\frac{\partial\ln|\det(\mathbf{X})|}{\partial\mathbf{X}} = (\mathbf{X}^{-1})^T = (\mathbf{X}^T)^{-1} \tag{48}$$
$$\frac{\partial\det(\mathbf{X}^k)}{\partial\mathbf{X}} = k\det(\mathbf{X}^k)\mathbf{X}^{-T} \tag{49}$$
2.2 Derivatives of an Inverse
$$\frac{\partial\mathbf{Y}^{-1}}{\partial x} = -\mathbf{Y}^{-1}\frac{\partial\mathbf{Y}}{\partial x}\mathbf{Y}^{-1} \tag{50}$$
from which it follows
$$\frac{\partial(\mathbf{X}^{-1})_{kl}}{\partial X_{ij}} = -(\mathbf{X}^{-1})_{ki}(\mathbf{X}^{-1})_{jl} \tag{51}$$
$$\frac{\partial\mathbf{a}^T\mathbf{X}^{-1}\mathbf{b}}{\partial\mathbf{X}} = -\mathbf{X}^{-T}\mathbf{a}\mathbf{b}^T\mathbf{X}^{-T} \tag{52}$$
$$\frac{\partial\det(\mathbf{X}^{-1})}{\partial\mathbf{X}} = -\det(\mathbf{X}^{-1})(\mathbf{X}^{-1})^T \tag{53}$$
$$\frac{\partial\mathrm{Tr}(\mathbf{A}\mathbf{X}^{-1}\mathbf{B})}{\partial\mathbf{X}} = -(\mathbf{X}^{-1}\mathbf{B}\mathbf{A}\mathbf{X}^{-1})^T \tag{54}$$
2.3 Derivatives of Eigenvalues
$$\frac{\partial}{\partial\mathbf{X}}\sum\mathrm{eig}(\mathbf{X}) = \frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{X}) = \mathbf{I} \tag{55}$$
$$\frac{\partial}{\partial\mathbf{X}}\prod\mathrm{eig}(\mathbf{X}) = \frac{\partial}{\partial\mathbf{X}}\det(\mathbf{X}) = \det(\mathbf{X})\mathbf{X}^{-T} \tag{56}$$
2.4 Derivatives of Matrices, Vectors and Scalar Forms
$$\frac{\partial\mathbf{x}^T\mathbf{a}}{\partial\mathbf{x}} = \frac{\partial\mathbf{a}^T\mathbf{x}}{\partial\mathbf{x}} = \mathbf{a} \tag{57}$$
$$\frac{\partial\mathbf{a}^T\mathbf{X}\mathbf{b}}{\partial\mathbf{X}} = \mathbf{a}\mathbf{b}^T \tag{58}$$
$$\frac{\partial\mathbf{a}^T\mathbf{X}^T\mathbf{b}}{\partial\mathbf{X}} = \mathbf{b}\mathbf{a}^T \tag{59}$$
$$\frac{\partial\mathbf{a}^T\mathbf{X}\mathbf{a}}{\partial\mathbf{X}} = \frac{\partial\mathbf{a}^T\mathbf{X}^T\mathbf{a}}{\partial\mathbf{X}} = \mathbf{a}\mathbf{a}^T \tag{60}$$
$$\frac{\partial\mathbf{X}}{\partial X_{ij}} = \mathbf{J}^{ij} \tag{61}$$
$$\frac{\partial(\mathbf{X}\mathbf{A})_{ij}}{\partial X_{mn}} = \delta_{im}(\mathbf{A})_{nj} = (\mathbf{J}^{mn}\mathbf{A})_{ij} \tag{62}$$
$$\frac{\partial(\mathbf{X}^T\mathbf{A})_{ij}}{\partial X_{mn}} = \delta_{in}(\mathbf{A})_{mj} = (\mathbf{J}^{nm}\mathbf{A})_{ij} \tag{63}$$
$$\frac{\partial}{\partial X_{ij}}\sum_{klmn} X_{kl}X_{mn} = 2\sum_{kl} X_{kl} \tag{64}$$
$$\frac{\partial\mathbf{b}^T\mathbf{X}^T\mathbf{X}\mathbf{c}}{\partial\mathbf{X}} = \mathbf{X}(\mathbf{b}\mathbf{c}^T + \mathbf{c}\mathbf{b}^T) \tag{65}$$
$$\frac{\partial(\mathbf{B}\mathbf{x}+\mathbf{b})^T\mathbf{C}(\mathbf{D}\mathbf{x}+\mathbf{d})}{\partial\mathbf{x}} = \mathbf{B}^T\mathbf{C}(\mathbf{D}\mathbf{x}+\mathbf{d}) + \mathbf{D}^T\mathbf{C}^T(\mathbf{B}\mathbf{x}+\mathbf{b}) \tag{66}$$
$$\frac{\partial(\mathbf{X}^T\mathbf{B}\mathbf{X})_{kl}}{\partial X_{ij}} = \delta_{lj}(\mathbf{X}^T\mathbf{B})_{ki} + \delta_{kj}(\mathbf{B}\mathbf{X})_{il} \tag{67}$$
$$\frac{\partial(\mathbf{X}^T\mathbf{B}\mathbf{X})}{\partial X_{ij}} = \mathbf{X}^T\mathbf{B}\mathbf{J}^{ij} + \mathbf{J}^{ji}\mathbf{B}\mathbf{X} \qquad (\mathbf{J}^{ij})_{kl} = \delta_{ik}\delta_{jl} \tag{68}$$
See Sec. 9.7 for useful properties of the single-entry matrix $\mathbf{J}^{ij}$.
$$\frac{\partial\mathbf{x}^T\mathbf{B}\mathbf{x}}{\partial\mathbf{x}} = (\mathbf{B} + \mathbf{B}^T)\mathbf{x} \tag{69}$$
$$\frac{\partial\mathbf{b}^T\mathbf{X}^T\mathbf{D}\mathbf{X}\mathbf{c}}{\partial\mathbf{X}} = \mathbf{D}^T\mathbf{X}\mathbf{b}\mathbf{c}^T + \mathbf{D}\mathbf{X}\mathbf{c}\mathbf{b}^T \tag{70}$$
$$\frac{\partial}{\partial\mathbf{X}}(\mathbf{X}\mathbf{b}+\mathbf{c})^T\mathbf{D}(\mathbf{X}\mathbf{b}+\mathbf{c}) = (\mathbf{D}+\mathbf{D}^T)(\mathbf{X}\mathbf{b}+\mathbf{c})\mathbf{b}^T \tag{71}$$
Assume W is symmetric, then
$$\frac{\partial}{\partial\mathbf{s}}(\mathbf{x}-\mathbf{A}\mathbf{s})^T\mathbf{W}(\mathbf{x}-\mathbf{A}\mathbf{s}) = -2\mathbf{A}^T\mathbf{W}(\mathbf{x}-\mathbf{A}\mathbf{s}) \tag{72}$$
$$\frac{\partial}{\partial\mathbf{x}}(\mathbf{x}-\mathbf{s})^T\mathbf{W}(\mathbf{x}-\mathbf{s}) = 2\mathbf{W}(\mathbf{x}-\mathbf{s}) \tag{73}$$
$$\frac{\partial}{\partial\mathbf{s}}(\mathbf{x}-\mathbf{s})^T\mathbf{W}(\mathbf{x}-\mathbf{s}) = -2\mathbf{W}(\mathbf{x}-\mathbf{s}) \tag{74}$$
$$\frac{\partial}{\partial\mathbf{x}}(\mathbf{x}-\mathbf{A}\mathbf{s})^T\mathbf{W}(\mathbf{x}-\mathbf{A}\mathbf{s}) = 2\mathbf{W}(\mathbf{x}-\mathbf{A}\mathbf{s}) \tag{75}$$
$$\frac{\partial}{\partial\mathbf{A}}(\mathbf{x}-\mathbf{A}\mathbf{s})^T\mathbf{W}(\mathbf{x}-\mathbf{A}\mathbf{s}) = -2\mathbf{W}(\mathbf{x}-\mathbf{A}\mathbf{s})\mathbf{s}^T \tag{76}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathbf{a}^T(\mathbf{X}^n)^T\mathbf{X}^n\mathbf{b} = \sum_{r=0}^{n-1}\Big[\mathbf{X}^{n-1-r}\mathbf{a}\mathbf{b}^T(\mathbf{X}^n)^T\mathbf{X}^r + (\mathbf{X}^r)^T\mathbf{X}^n\mathbf{a}\mathbf{b}^T(\mathbf{X}^{n-1-r})^T\Big] \tag{79}$$
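Many of the scalar-form gradients above can be checked the same way; a minimal sketch for (69), with a random test matrix and central differences (names are ours), is:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n))
x = rng.standard_normal(n)

f = lambda x: x @ B @ x                      # f(x) = x^T B x

# Analytic gradient from (69): (B + B^T) x
grad = (B + B.T) @ x

# Central finite differences
eps = 1e-6
num = np.array([(f(x + eps*e) - f(x - eps*e)) / (2*eps)
                for e in np.eye(n)])
assert np.allclose(grad, num, atol=1e-5)
```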
For the quadratic form
$$f = \mathbf{x}^T\mathbf{A}\mathbf{x} + \mathbf{b}^T\mathbf{x} \tag{81}$$
the gradient and the Hessian are
$$\nabla_{\mathbf{x}} f = \frac{\partial f}{\partial\mathbf{x}} = (\mathbf{A}+\mathbf{A}^T)\mathbf{x} + \mathbf{b} \tag{82}$$
$$\frac{\partial^2 f}{\partial\mathbf{x}\partial\mathbf{x}^T} = \mathbf{A} + \mathbf{A}^T \tag{83}$$
2.5 Derivatives of Traces
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{X}) = \mathbf{I} \tag{84}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{X}\mathbf{A}) = \mathbf{A}^T \tag{85}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{A}\mathbf{X}\mathbf{B}) = \mathbf{A}^T\mathbf{B}^T \tag{86}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{A}\mathbf{X}^T\mathbf{B}) = \mathbf{B}\mathbf{A} \tag{87}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{X}^T\mathbf{A}) = \mathbf{A} \tag{88}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{A}\mathbf{X}^T) = \mathbf{A} \tag{89}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{A}\otimes\mathbf{X}) = \mathrm{Tr}(\mathbf{A})\mathbf{I} \tag{90}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{X}^2) = 2\mathbf{X}^T \tag{91}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{X}^2\mathbf{B}) = (\mathbf{X}\mathbf{B} + \mathbf{B}\mathbf{X})^T \tag{92}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{X}^T\mathbf{B}\mathbf{X}) = \mathbf{B}\mathbf{X} + \mathbf{B}^T\mathbf{X} \tag{93}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{X}\mathbf{B}\mathbf{X}^T) = \mathbf{X}\mathbf{B}^T + \mathbf{X}\mathbf{B} \tag{94}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{A}\mathbf{X}\mathbf{B}\mathbf{X}) = \mathbf{A}^T\mathbf{X}^T\mathbf{B}^T + \mathbf{B}^T\mathbf{X}^T\mathbf{A}^T \tag{95}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{X}^T\mathbf{X}) = 2\mathbf{X} \tag{96}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{B}\mathbf{X}\mathbf{X}^T) = (\mathbf{B} + \mathbf{B}^T)\mathbf{X} \tag{97}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{B}^T\mathbf{X}^T\mathbf{C}\mathbf{X}\mathbf{B}) = \mathbf{C}^T\mathbf{X}\mathbf{B}\mathbf{B}^T + \mathbf{C}\mathbf{X}\mathbf{B}\mathbf{B}^T \tag{98}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}\big(\mathbf{X}^T\mathbf{B}\mathbf{X}\mathbf{C}\big) = \mathbf{B}\mathbf{X}\mathbf{C} + \mathbf{B}^T\mathbf{X}\mathbf{C}^T \tag{99}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{A}\mathbf{X}\mathbf{B}\mathbf{X}^T\mathbf{C}) = \mathbf{A}^T\mathbf{C}^T\mathbf{X}\mathbf{B}^T + \mathbf{C}\mathbf{A}\mathbf{X}\mathbf{B} \tag{100}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}\big[(\mathbf{A}\mathbf{X}\mathbf{b} + \mathbf{c})(\mathbf{A}\mathbf{X}\mathbf{b} + \mathbf{c})^T\big] = 2\mathbf{A}^T(\mathbf{A}\mathbf{X}\mathbf{b} + \mathbf{c})\mathbf{b}^T \tag{101}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{X}\otimes\mathbf{X}) = \frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{X})\mathrm{Tr}(\mathbf{X}) = 2\,\mathrm{Tr}(\mathbf{X})\mathbf{I} \tag{102}$$
See [7].
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{X}^k) = k(\mathbf{X}^{k-1})^T \tag{103}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{A}\mathbf{X}^k) = \sum_{r=0}^{k-1}(\mathbf{X}^r\mathbf{A}\mathbf{X}^{k-r-1})^T \tag{104}$$
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}\big(\mathbf{B}^T\mathbf{X}^T\mathbf{C}\mathbf{X}\mathbf{X}^T\mathbf{C}\mathbf{X}\mathbf{B}\big) = \mathbf{C}\mathbf{X}\mathbf{X}^T\mathbf{C}\mathbf{X}\mathbf{B}\mathbf{B}^T + \mathbf{C}^T\mathbf{X}\mathbf{B}\mathbf{B}^T\mathbf{X}^T\mathbf{C}^T\mathbf{X} + \mathbf{C}\mathbf{X}\mathbf{B}\mathbf{B}^T\mathbf{X}^T\mathbf{C}\mathbf{X} + \mathbf{C}^T\mathbf{X}\mathbf{X}^T\mathbf{C}^T\mathbf{X}\mathbf{B}\mathbf{B}^T \tag{105}$$
2.5.4 Other
$$\frac{\partial}{\partial\mathbf{X}}\mathrm{Tr}(\mathbf{A}\mathbf{X}^{-1}\mathbf{B}) = -(\mathbf{X}^{-1}\mathbf{B}\mathbf{A}\mathbf{X}^{-1})^T = -\mathbf{X}^{-T}\mathbf{A}^T\mathbf{B}^T\mathbf{X}^{-T} \tag{106}$$
2.8 Derivatives of Structured Matrices
The chain rule for a scalar function $g$ of a structured matrix $\mathbf{U}$ reads
$$\frac{\partial g(\mathbf{U})}{\partial X_{ij}} = \mathrm{Tr}\Big[\Big(\frac{\partial g(\mathbf{U})}{\partial\mathbf{U}}\Big)^T\frac{\partial\mathbf{U}}{\partial X_{ij}}\Big]. \tag{117}$$
2.8.2 Symmetric
If $\mathbf{A}$ is symmetric, then $\mathbf{S}^{ij} = \mathbf{J}^{ij} + \mathbf{J}^{ji} - \mathbf{J}^{ij}\mathbf{J}^{ij}$ and therefore
$$\frac{df}{d\mathbf{A}} = \Big[\frac{\partial f}{\partial\mathbf{A}}\Big] + \Big[\frac{\partial f}{\partial\mathbf{A}}\Big]^T - \mathrm{diag}\Big[\frac{\partial f}{\partial\mathbf{A}}\Big] \tag{118}$$
$$\frac{\partial\mathrm{Tr}(\mathbf{A}\mathbf{X})}{\partial\mathbf{X}} = \mathbf{A} + \mathbf{A}^T - (\mathbf{A}\circ\mathbf{I}), \quad\text{see (122)} \tag{119}$$
$$\frac{\partial\det(\mathbf{X})}{\partial\mathbf{X}} = \det(\mathbf{X})\big(2\mathbf{X}^{-1} - (\mathbf{X}^{-1}\circ\mathbf{I})\big) \tag{120}$$
$$\frac{\partial\ln\det(\mathbf{X})}{\partial\mathbf{X}} = 2\mathbf{X}^{-1} - (\mathbf{X}^{-1}\circ\mathbf{I}) \tag{121}$$
2.8.3 Diagonal
If X is diagonal, then ([18]):
$$\frac{\partial\mathrm{Tr}(\mathbf{A}\mathbf{X})}{\partial\mathbf{X}} = \mathbf{A}\circ\mathbf{I} \tag{122}$$
2.8.4 Toeplitz
Like symmetric and diagonal matrices, Toeplitz matrices have a special structure which should be taken into account when computing the derivative with respect to a matrix with Toeplitz structure.
$$\frac{\partial\mathrm{Tr}(\mathbf{A}\mathbf{T})}{\partial\mathbf{T}} = \frac{\partial\mathrm{Tr}(\mathbf{T}\mathbf{A})}{\partial\mathbf{T}} =
\begin{bmatrix}
\mathrm{Tr}(\mathbf{A}) & \mathrm{Tr}([\mathbf{A}^T]_{n1}) & \cdots & \mathrm{Tr}([[\mathbf{A}^T]_{n1}]_{n-1,2}) & A_{n1}\\
\mathrm{Tr}([\mathbf{A}^T]_{1n}) & \mathrm{Tr}(\mathbf{A}) & \ddots & & \vdots\\
\vdots & \ddots & \ddots & \ddots & \mathrm{Tr}([[\mathbf{A}^T]_{n1}]_{n-1,2})\\
\mathrm{Tr}([[\mathbf{A}^T]_{1n}]_{2,n-1}) & & \ddots & \ddots & \mathrm{Tr}([\mathbf{A}^T]_{n1})\\
A_{1n} & \mathrm{Tr}([[\mathbf{A}^T]_{1n}]_{2,n-1}) & \cdots & \mathrm{Tr}([\mathbf{A}^T]_{1n}) & \mathrm{Tr}(\mathbf{A})
\end{bmatrix}
\equiv \boldsymbol{\alpha}(\mathbf{A}) \tag{123}$$
As can be seen, the derivative $\boldsymbol{\alpha}(\mathbf{A})$ also has a Toeplitz structure: each value on the main diagonal is the sum of the diagonal values of $\mathbf{A}$, and the values on the diagonals next to the main diagonal equal the sum of the corresponding off-diagonal of $\mathbf{A}^T$. This result is only valid for the unconstrained Toeplitz matrix. If the Toeplitz matrix also is symmetric, the same derivative yields
$$\frac{\partial\mathrm{Tr}(\mathbf{A}\mathbf{T})}{\partial\mathbf{T}} = \frac{\partial\mathrm{Tr}(\mathbf{T}\mathbf{A})}{\partial\mathbf{T}} = \boldsymbol{\alpha}(\mathbf{A}) + \boldsymbol{\alpha}(\mathbf{A})^T - \boldsymbol{\alpha}(\mathbf{A})\circ\mathbf{I} \tag{124}$$
3 Inverses
3.1 Basic
3.1.1 Definition
The inverse $\mathbf{A}^{-1}$ of a matrix $\mathbf{A}\in\mathbb{C}^{n\times n}$ is defined such that
$$\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}, \tag{125}$$
where $\mathbf{I}$ is the $n\times n$ identity matrix. If $\mathbf{A}^{-1}$ exists, $\mathbf{A}$ is said to be nonsingular. Otherwise, $\mathbf{A}$ is said to be singular (see e.g. [12]).
3.1.3 Determinant
The determinant of a matrix $\mathbf{A}\in\mathbb{C}^{n\times n}$ is defined as (see [12])
$$\det(\mathbf{A}) = \sum_{j=1}^{n}(-1)^{j+1}A_{1j}\det([\mathbf{A}]_{1j}) \tag{129}$$
$$= \sum_{j=1}^{n}A_{1j}\,\mathrm{cof}(\mathbf{A}, 1, j). \tag{130}$$
3.1.4 Construction
The inverse matrix can be constructed, using the adjoint matrix, by
$$\mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})}\cdot\mathrm{adj}(\mathbf{A}) \tag{131}$$
For the case of 2 × 2 matrices, see section 1.2.
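The cofactor expansion (129)-(130) and the adjugate construction (131) translate directly into code. The following sketch (an O(n^5) illustration for small matrices only; the helper name adjugate is ours) mirrors the formulas:

```python
import numpy as np

def adjugate(A):
    """Adjugate via cofactors; for illustration on small matrices only."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)   # [A]_ij
            C[i, j] = (-1)**(i + j) * np.linalg.det(minor)          # cof(A, i, j)
    return C.T                                                      # adj(A) = C^T

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
# (131): A^-1 = adj(A) / det(A)
assert np.allclose(adjugate(A) / np.linalg.det(A), np.linalg.inv(A))
```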
3.1.5 Condition number
The condition number $c(\mathbf{A})$ of a matrix is the ratio between the largest and the smallest singular value (see Section 5.2):
$$c(\mathbf{A}) = \frac{d_+}{d_-} \tag{132}$$
The condition number can be used to measure how singular a matrix is. If the condition number is large, the matrix is nearly singular. The condition number can also be estimated from the matrix norms:
$$c(\mathbf{A}) = \|\mathbf{A}\|\cdot\|\mathbf{A}^{-1}\|, \tag{133}$$
where $\|\cdot\|$ is a norm such as e.g. the 1-norm, the 2-norm, the $\infty$-norm or the Frobenius norm (see Sec. 10.5 for more on matrix norms).
The 2-norm of $\mathbf{A}$ equals $\sqrt{\max(\mathrm{eig}(\mathbf{A}^H\mathbf{A}))}$ [12, p. 57]. For a symmetric matrix, this reduces to $\|\mathbf{A}\|_2 = \max(|\mathrm{eig}(\mathbf{A})|)$ [12, p. 394]. If the matrix is symmetric and positive definite, $\|\mathbf{A}\|_2 = \max(\mathrm{eig}(\mathbf{A}))$. The condition number based on the 2-norm thus reduces to
$$\|\mathbf{A}\|_2\|\mathbf{A}^{-1}\|_2 = \max(\mathrm{eig}(\mathbf{A}))\max(\mathrm{eig}(\mathbf{A}^{-1})) = \frac{\max(\mathrm{eig}(\mathbf{A}))}{\min(\mathrm{eig}(\mathbf{A}))}. \tag{134}$$
For a rank-1 update of the Moore-Penrose inverse we have
$$(\mathbf{A} + \mathbf{c}\mathbf{d}^T)^+ = \mathbf{A}^+ + \mathbf{G} \tag{147}$$
Using the quantities
$$\beta = 1 + \mathbf{d}^T\mathbf{A}^+\mathbf{c} \tag{148}$$
$$\mathbf{v} = \mathbf{A}^+\mathbf{c} \tag{149}$$
$$\mathbf{n} = (\mathbf{A}^+)^T\mathbf{d} \tag{150}$$
$$\mathbf{w} = (\mathbf{I} - \mathbf{A}\mathbf{A}^+)\mathbf{c} \tag{151}$$
$$\mathbf{m} = (\mathbf{I} - \mathbf{A}^+\mathbf{A})^T\mathbf{d} \tag{152}$$
the solution $\mathbf{G}$ is given as six different cases, depending on the entities $\|\mathbf{w}\|$, $\|\mathbf{m}\|$, and $\beta$. Please note that for any (column) vector $\mathbf{v}$ it holds that $\mathbf{v}^+ = \mathbf{v}^T(\mathbf{v}^T\mathbf{v})^{-1} = \frac{\mathbf{v}^T}{\|\mathbf{v}\|^2}$.
3.3 Implication on Inverses
See [29].
3.4 Approximations
The following is a Taylor expansion, valid if $\mathbf{A}^n \to 0$ for $n\to\infty$:
$$(\mathbf{I} + \mathbf{A})^{-1} = \mathbf{I} - \mathbf{A} + \mathbf{A}^2 - \mathbf{A}^3 + \ldots$$
The following approximation is from [21] and holds when $\mathbf{A}$ is large and symmetric:
$$\mathbf{A} - \mathbf{A}(\mathbf{I}+\mathbf{A})^{-1}\mathbf{A} \cong \mathbf{I} - \mathbf{A}^{-1} \tag{166}$$
If $\sigma^2$ is small compared to $\mathbf{Q}$ and $\mathbf{M}$, then
$$(\mathbf{Q} + \sigma^2\mathbf{M})^{-1} \cong \mathbf{Q}^{-1} - \sigma^2\mathbf{Q}^{-1}\mathbf{M}\mathbf{Q}^{-1} \tag{167}$$
3.6 Pseudo Inverse
3.6.1 Definition
The pseudo inverse (or Moore-Penrose inverse) $\mathbf{A}^+$ of a matrix $\mathbf{A}$ is the matrix that fulfils
I: $\mathbf{A}\mathbf{A}^+\mathbf{A} = \mathbf{A}$
II: $\mathbf{A}^+\mathbf{A}\mathbf{A}^+ = \mathbf{A}^+$
III: $\mathbf{A}\mathbf{A}^+$ symmetric
IV: $\mathbf{A}^+\mathbf{A}$ symmetric
The matrix $\mathbf{A}^+$ is unique and always exists. Note that for complex matrices, the symmetry condition is replaced by a condition of being Hermitian.
3.6.2 Properties
Assume $\mathbf{A}^+$ to be the pseudo-inverse of $\mathbf{A}$; then (see [3]):
$$(\mathbf{A}^+)^+ = \mathbf{A} \tag{169}$$
$$(\mathbf{A}^T)^+ = (\mathbf{A}^+)^T \tag{170}$$
$$(c\mathbf{A})^+ = (1/c)\mathbf{A}^+ \tag{171}$$
$$(\mathbf{A}^T\mathbf{A})^+ = \mathbf{A}^+(\mathbf{A}^T)^+ \tag{172}$$
$$(\mathbf{A}\mathbf{A}^T)^+ = (\mathbf{A}^T)^+\mathbf{A}^+ \tag{173}$$
3.6.3 Construction
Assume that $\mathbf{A}$ ($n\times m$) has full rank; then
$$\mathbf{A}\ \text{square} \quad\Rightarrow\quad \mathbf{A}^+ = \mathbf{A}^{-1}$$
$$\mathbf{A}\ \text{broad}\ (n < m) \quad\Rightarrow\quad \mathbf{A}^+ = \mathbf{A}^T(\mathbf{A}\mathbf{A}^T)^{-1}$$
$$\mathbf{A}\ \text{tall}\ (n > m) \quad\Rightarrow\quad \mathbf{A}^+ = (\mathbf{A}^T\mathbf{A})^{-1}\mathbf{A}^T$$
Assume $\mathbf{A}$ does not have full rank, i.e. $\mathbf{A}$ is $n\times m$ and $\mathrm{rank}(\mathbf{A}) = r < \min(n, m)$. The pseudo inverse $\mathbf{A}^+$ can be constructed from the singular value decomposition $\mathbf{A} = \mathbf{U}\mathbf{D}\mathbf{V}^T$ by
$$\mathbf{A}^+ = \mathbf{V}_r\mathbf{D}_r^{-1}\mathbf{U}_r^T \tag{178}$$
where $\mathbf{U}_r$, $\mathbf{D}_r$, and $\mathbf{V}_r$ are the matrices with the degenerate rows and columns deleted. A different way is this: there always exist two matrices $\mathbf{C}$ ($n\times r$) and $\mathbf{D}$ ($r\times m$) of rank $r$ such that $\mathbf{A} = \mathbf{C}\mathbf{D}$. Using these matrices it holds that
$$\mathbf{A}^+ = \mathbf{D}^T(\mathbf{D}\mathbf{D}^T)^{-1}(\mathbf{C}^T\mathbf{C})^{-1}\mathbf{C}^T \tag{179}$$
See [3].
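A small sketch of the truncated-SVD construction (178), checked against numpy.linalg.pinv and the four Moore-Penrose conditions; the rank-r test matrix is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, r = 5, 4, 2
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))   # rank r < min(n, m)

# (178): A^+ = V_r D_r^-1 U_r^T, keeping only the r nonzero singular values
U, d, Vt = np.linalg.svd(A)
Ur, dr, Vr = U[:, :r], d[:r], Vt[:r, :].T
A_plus = Vr @ np.diag(1.0/dr) @ Ur.T

assert np.allclose(A_plus, np.linalg.pinv(A))
# Moore-Penrose conditions I-IV
assert np.allclose(A @ A_plus @ A, A)
assert np.allclose(A_plus @ A @ A_plus, A_plus)
assert np.allclose(A @ A_plus, (A @ A_plus).T)
assert np.allclose(A_plus @ A, (A_plus @ A).T)
```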
4 Complex Matrices
4.1 Complex Derivatives
In order to differentiate an expression f (z) with respect to a complex z, the
Cauchy-Riemann equations have to be satisfied ([7]):
$$\frac{\partial f(z)}{\partial\Im z} = i\,\frac{\partial f(z)}{\partial\Re z}. \tag{182}$$
A complex function that satisfies the Cauchy-Riemann equations for points in a region R is said to be analytic in this region R. In general, expressions involving a complex conjugate or conjugate transpose do not satisfy the Cauchy-Riemann equations. In order to avoid this problem, a more generalized definition of complex derivative is used ([23], [6]):
• Generalized Complex Derivative:
$$\frac{df(z)}{dz} = \frac{1}{2}\Big(\frac{\partial f(z)}{\partial\Re z} - i\,\frac{\partial f(z)}{\partial\Im z}\Big) \tag{183}$$
• Conjugate Complex Derivative:
$$\frac{df(z)}{dz^*} = \frac{1}{2}\Big(\frac{\partial f(z)}{\partial\Re z} + i\,\frac{\partial f(z)}{\partial\Im z}\Big) \tag{184}$$
• Complex Gradient Vector: if $f$ is a real function of a complex vector $z$, then the complex gradient vector is
$$\nabla f(z) = 2\,\frac{df(z)}{dz^*} = \frac{\partial f(z)}{\partial\Re z} + i\,\frac{\partial f(z)}{\partial\Im z}. \tag{186}$$
Similarly, for a real function $f(\mathbf{Z})$ of a complex matrix $\mathbf{Z}$, the complex gradient matrix is
$$\nabla f(\mathbf{Z}) = 2\,\frac{df(\mathbf{Z})}{d\mathbf{Z}^*} = \frac{\partial f(\mathbf{Z})}{\partial\Re\mathbf{Z}} + i\,\frac{\partial f(\mathbf{Z})}{\partial\Im\mathbf{Z}}. \tag{187}$$
These expressions can be used for gradient descent algorithms.
The chain rule for complex numbers is
$$\frac{\partial g(u)}{\partial x} = \frac{\partial g}{\partial u}\frac{\partial u}{\partial x} + \frac{\partial g}{\partial u^*}\frac{\partial u^*}{\partial x} = \frac{\partial g}{\partial u}\frac{\partial u}{\partial x} + \Big(\frac{\partial g^*}{\partial u}\Big)^*\frac{\partial u^*}{\partial x} \tag{188}$$
Notice that if the function is analytic, the second term vanishes, and the expression reduces to the well-known chain rule.
A useful way to show a complex derivative is to give the derivatives with respect to the real and the imaginary parts separately. An easy example is:
$$\frac{\partial\mathrm{Tr}(\mathbf{X}^*)}{\partial\Re\mathbf{X}} = \frac{\partial\mathrm{Tr}(\mathbf{X}^H)}{\partial\Re\mathbf{X}} = \mathbf{I} \tag{190}$$
$$i\,\frac{\partial\mathrm{Tr}(\mathbf{X}^*)}{\partial\Im\mathbf{X}} = i\,\frac{\partial\mathrm{Tr}(\mathbf{X}^H)}{\partial\Im\mathbf{X}} = \mathbf{I} \tag{191}$$
Since the two results have the same sign, the conjugate complex derivative (184) should be used.
$$\frac{\partial\mathrm{Tr}(\mathbf{X})}{\partial\Re\mathbf{X}} = \frac{\partial\mathrm{Tr}(\mathbf{X}^T)}{\partial\Re\mathbf{X}} = \mathbf{I} \tag{192}$$
$$i\,\frac{\partial\mathrm{Tr}(\mathbf{X})}{\partial\Im\mathbf{X}} = i\,\frac{\partial\mathrm{Tr}(\mathbf{X}^T)}{\partial\Im\mathbf{X}} = -\mathbf{I} \tag{193}$$
Here, the two results have different signs, and the generalized complex derivative (183) should be used. Hereby, it can be seen that (85) holds even if $\mathbf{X}$ is a complex matrix.
$$\frac{\partial\mathrm{Tr}(\mathbf{A}\mathbf{X}^H)}{\partial\Re\mathbf{X}} = \mathbf{A} \tag{194}$$
$$i\,\frac{\partial\mathrm{Tr}(\mathbf{A}\mathbf{X}^H)}{\partial\Im\mathbf{X}} = \mathbf{A} \tag{195}$$
$$\frac{\partial\mathrm{Tr}(\mathbf{A}\mathbf{X}^*)}{\partial\Re\mathbf{X}} = \mathbf{A}^T \tag{196}$$
$$i\,\frac{\partial\mathrm{Tr}(\mathbf{A}\mathbf{X}^*)}{\partial\Im\mathbf{X}} = \mathbf{A}^T \tag{197}$$
$$\frac{\partial\mathrm{Tr}(\mathbf{X}\mathbf{X}^H)}{\partial\Re\mathbf{X}} = \frac{\partial\mathrm{Tr}(\mathbf{X}^H\mathbf{X})}{\partial\Re\mathbf{X}} = 2\Re\mathbf{X} \tag{198}$$
$$i\,\frac{\partial\mathrm{Tr}(\mathbf{X}\mathbf{X}^H)}{\partial\Im\mathbf{X}} = i\,\frac{\partial\mathrm{Tr}(\mathbf{X}^H\mathbf{X})}{\partial\Im\mathbf{X}} = i2\Im\mathbf{X} \tag{199}$$
By inserting (198) and (199) in (183) and (184), it can be seen that
$$\frac{\partial\mathrm{Tr}(\mathbf{X}\mathbf{X}^H)}{\partial\mathbf{X}} = \mathbf{X}^* \tag{200}$$
$$\frac{\partial\mathrm{Tr}(\mathbf{X}\mathbf{X}^H)}{\partial\mathbf{X}^*} = \mathbf{X} \tag{201}$$
Since the function $\mathrm{Tr}(\mathbf{X}\mathbf{X}^H)$ is a real function of the complex matrix $\mathbf{X}$, the complex gradient matrix (187) is given by
$$\nabla\mathrm{Tr}(\mathbf{X}\mathbf{X}^H) = 2\,\frac{\partial\mathrm{Tr}(\mathbf{X}\mathbf{X}^H)}{\partial\mathbf{X}^*} = 2\mathbf{X} \tag{202}$$
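The complex gradient (202) can be verified numerically by perturbing the real and the imaginary parts separately, as in this illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
X = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
f = lambda X: np.trace(X @ X.conj().T).real      # Tr(XX^H), a real function of X

# (187)/(202): gradient = d/dReX + i d/dImX, which should equal 2X
eps = 1e-6
grad = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = eps
        dRe = (f(X + E) - f(X - E)) / (2*eps)
        dIm = (f(X + 1j*E) - f(X - 1j*E)) / (2*eps)
        grad[i, j] = dRe + 1j*dIm
assert np.allclose(grad, 2*X)
```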
5 Decompositions
5.1 Eigenvalues and Eigenvectors
5.1.1 Definition
The eigenvectors $\mathbf{v}_i$ and eigenvalues $\lambda_i$ are the ones satisfying
$$\mathbf{A}\mathbf{v}_i = \lambda_i\mathbf{v}_i \tag{205}$$
5.1.3 Symmetric
Assume $\mathbf{A}$ is symmetric; then its eigenvalues are real and its eigenvectors can be chosen orthogonal.
5.2 Singular Value Decomposition
Any $n\times m$ matrix $\mathbf{A}$ can be written as
$$\mathbf{A} = \mathbf{U}\mathbf{D}\mathbf{V}^T, \tag{217}$$
where
$$\begin{aligned}
\mathbf{U} &= \text{eigenvectors of } \mathbf{A}\mathbf{A}^T & n\times n\\
\mathbf{D} &= \sqrt{\mathrm{diag}(\mathrm{eig}(\mathbf{A}\mathbf{A}^T))} & n\times m\\
\mathbf{V} &= \text{eigenvectors of } \mathbf{A}^T\mathbf{A} & m\times m
\end{aligned} \tag{218}$$
In the square decomposed version $\mathbf{A} = \mathbf{V}\mathbf{D}\mathbf{U}^T$, $\mathbf{D}$ is diagonal with the square roots of the eigenvalues of $\mathbf{A}\mathbf{A}^T$, $\mathbf{V}$ holds the eigenvectors of $\mathbf{A}\mathbf{A}^T$, and $\mathbf{U}^T$ holds the eigenvectors of $\mathbf{A}^T\mathbf{A}$.
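A brief numerical illustration of (217)-(218), assuming NumPy's SVD convention A = U diag(d) V^T (test matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 3))

U, d, Vt = np.linalg.svd(A, full_matrices=False)

# Squared singular values are the (nonzero) eigenvalues of AA^T
w, _ = np.linalg.eigh(A @ A.T)
assert np.allclose(np.sort(d**2), np.sort(w)[-3:], atol=1e-10)

# And A = U diag(d) V^T reconstructs the matrix
assert np.allclose(U @ np.diag(d) @ Vt, A)
```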
5.3 Triangular Decomposition
5.3.1 Cholesky-decomposition
Assume $\mathbf{A}$ is positive definite; then
$$\mathbf{A} = \mathbf{B}^T\mathbf{B}, \tag{225}$$
where $\mathbf{B}$ is a unique upper triangular matrix.
6 Statistics and Probability
6.1.1 Mean
The vector of means, $\mathbf{m}$, is defined by
$$(\mathbf{m})_i = \langle x_i\rangle$$
6.1.2 Covariance
The matrix of covariance $\mathbf{M}$ is defined by
$$(\mathbf{M})_{ij} = \langle (x_i - m_i)(x_j - m_j)\rangle$$
or alternatively as
$$\mathbf{M} = \langle (\mathbf{x}-\mathbf{m})(\mathbf{x}-\mathbf{m})^T\rangle \tag{228}$$
6.1.3 Third moments
The matrix of third centralized moments (co-skewness) is defined using the notation
$$m^{(3)}_{ijk} = \langle (x_i - m_i)(x_j - m_j)(x_k - m_k)\rangle \tag{229}$$
as
$$\mathbf{M}_3 = \big[m^{(3)}_{::1}\ m^{(3)}_{::2}\ \ldots\ m^{(3)}_{::n}\big] \tag{230}$$
where ':' denotes all elements within the given index. $\mathbf{M}_3$ can alternatively be expressed as
$$\mathbf{M}_3 = \langle (\mathbf{x}-\mathbf{m})(\mathbf{x}-\mathbf{m})^T\otimes(\mathbf{x}-\mathbf{m})^T\rangle \tag{231}$$
6.1.4 Fourth moments
Similarly, the matrix of fourth centralized moments (co-kurtosis) is defined using the notation
$$m^{(4)}_{ijkl} = \langle (x_i - m_i)(x_j - m_j)(x_k - m_k)(x_l - m_l)\rangle \tag{232}$$
as
$$\mathbf{M}_4 = \big[m^{(4)}_{::11}\ m^{(4)}_{::21}\ \ldots\ m^{(4)}_{::n1}\ \big|\ m^{(4)}_{::12}\ m^{(4)}_{::22}\ \ldots\ m^{(4)}_{::n2}\ \big|\ \ldots\ \big|\ m^{(4)}_{::1n}\ m^{(4)}_{::2n}\ \ldots\ m^{(4)}_{::nn}\big] \tag{233}$$
or alternatively as
$$\mathbf{M}_4 = \langle (\mathbf{x}-\mathbf{m})(\mathbf{x}-\mathbf{m})^T\otimes(\mathbf{x}-\mathbf{m})^T\otimes(\mathbf{x}-\mathbf{m})^T\rangle \tag{234}$$
6.2 Expectation of Linear Combinations
6.2.1 Linear Forms
Assume $\mathbf{x}$ is a stochastic vector with mean $\mathbf{m}$; then
$$E[\mathbf{A}\mathbf{x}+\mathbf{b}] = \mathbf{A}\mathbf{m}+\mathbf{b} \tag{238}$$
$$E[\mathbf{A}\mathbf{x}] = \mathbf{A}\mathbf{m} \tag{239}$$
$$E[\mathbf{x}+\mathbf{b}] = \mathbf{m}+\mathbf{b} \tag{240}$$
See [7].
6.3 Weighted Scalar Variable
Assume $\mathbf{x}$ is a stochastic vector with mean $\mathbf{m}$, covariance $\mathbf{M}_2$ and central moments $\mathbf{M}_3$, $\mathbf{M}_4$. Let $y = \mathbf{w}^T\mathbf{x}$; then
$$\langle y\rangle = \mathbf{w}^T\mathbf{m} \tag{254}$$
$$\langle (y-\langle y\rangle)^2\rangle = \mathbf{w}^T\mathbf{M}_2\mathbf{w} \tag{255}$$
$$\langle (y-\langle y\rangle)^3\rangle = \mathbf{w}^T\mathbf{M}_3(\mathbf{w}\otimes\mathbf{w}) \tag{256}$$
$$\langle (y-\langle y\rangle)^4\rangle = \mathbf{w}^T\mathbf{M}_4(\mathbf{w}\otimes\mathbf{w}\otimes\mathbf{w}) \tag{257}$$
7 Multivariate Distributions
7.1 Student’s t
The density of a Student-t distributed vector $\mathbf{t}\in\mathbb{R}^{P\times 1}$ is given by
$$p(\mathbf{t}|\boldsymbol{\mu},\boldsymbol{\Sigma},\nu) = (\pi\nu)^{-P/2}\,\frac{\Gamma\big(\frac{\nu+P}{2}\big)}{\Gamma(\nu/2)}\,\frac{\det(\boldsymbol{\Sigma})^{-1/2}}{\big[1 + \nu^{-1}(\mathbf{t}-\boldsymbol{\mu})^T\boldsymbol{\Sigma}^{-1}(\mathbf{t}-\boldsymbol{\mu})\big]^{(\nu+P)/2}} \tag{258}$$
7.1.1 Mean
$$E(\mathbf{t}) = \boldsymbol{\mu}, \quad \nu > 1 \tag{259}$$
7.1.2 Variance
$$\mathrm{cov}(\mathbf{t}) = \frac{\nu}{\nu-2}\boldsymbol{\Sigma}, \quad \nu > 2 \tag{260}$$
7.1.3 Mode
The mode, meaning the position of the most probable value, is
$$\mathrm{mode}(\mathbf{t}) = \boldsymbol{\mu} \tag{261}$$
For the full matrix version with $\mathbf{T}\in\mathbb{R}^{P\times N}$, the density is proportional to
$$\nu\,\det(\boldsymbol{\Omega})^{-\nu/2}\det(\boldsymbol{\Sigma})^{-N/2}\,\det\!\big(\boldsymbol{\Omega}^{-1} + (\mathbf{T}-\mathbf{M})\boldsymbol{\Sigma}^{-1}(\mathbf{T}-\mathbf{M})^T\big)^{-(\nu+P)/2} \tag{262}$$
where $\mathbf{M}$ is the location, $\boldsymbol{\Omega}$ is the rescaling matrix, $\boldsymbol{\Sigma}$ is positive definite, $\nu$ is the degrees of freedom, and $\Gamma$ denotes the gamma function.
7.2 Cauchy
The density function for a Cauchy distributed vector $\mathbf{t}\in\mathbb{R}^{P\times 1}$ is given by
$$p(\mathbf{t}|\boldsymbol{\mu},\boldsymbol{\Sigma}) = \pi^{-P/2}\,\frac{\Gamma\big(\frac{1+P}{2}\big)}{\Gamma(1/2)}\,\frac{\det(\boldsymbol{\Sigma})^{-1/2}}{\big[1 + (\mathbf{t}-\boldsymbol{\mu})^T\boldsymbol{\Sigma}^{-1}(\mathbf{t}-\boldsymbol{\mu})\big]^{(1+P)/2}} \tag{263}$$
where $\boldsymbol{\mu}$ is the location, $\boldsymbol{\Sigma}$ is positive definite, and $\Gamma$ denotes the gamma function. The Cauchy distribution is a special case of the Student-t distribution.
7.3 Gaussian
See sec. 8.
7.4 Multinomial
If the vector $\mathbf{n}$ contains counts, i.e. $(\mathbf{n})_i\in\{0, 1, 2, \ldots\}$, then the discrete multinomial distribution for $\mathbf{n}$ is given by
$$P(\mathbf{n}|\mathbf{a},n) = \frac{n!}{n_1!\cdots n_d!}\prod_i^d a_i^{n_i}, \qquad \sum_i^d n_i = n \tag{264}$$
where $a_i$ are probabilities, i.e. $0\le a_i\le 1$ and $\sum_i a_i = 1$.
7.5 Dirichlet
The Dirichlet distribution is a kind of "inverse" distribution compared to the multinomial distribution on the bounded continuous variate $\mathbf{x} = [x_1,\ldots,x_P]$ [16, p. 44]:
$$p(\mathbf{x}|\boldsymbol{\alpha}) = \frac{\Gamma\big(\sum_p^P\alpha_p\big)}{\prod_p^P\Gamma(\alpha_p)}\prod_p^P x_p^{\alpha_p - 1}$$
7.7 Wishart
7.7.1 Mean
$$E(\mathbf{M}) = m\boldsymbol{\Sigma} \tag{266}$$
7.8 Inverse Wishart
7.8.1 Mean
$$E(\mathbf{M}) = \frac{1}{m - P - 1}\boldsymbol{\Sigma} \tag{268}$$
8 Gaussians
8.1 Basics
8.1.1 Density and normalization
The density of $\mathbf{x}\sim\mathcal{N}(\mathbf{m},\boldsymbol{\Sigma})$ is
$$p(\mathbf{x}) = \frac{1}{\sqrt{\det(2\pi\boldsymbol{\Sigma})}}\exp\Big[-\frac{1}{2}(\mathbf{x}-\mathbf{m})^T\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\mathbf{m})\Big] \tag{269}$$
$$\frac{\partial p(\mathbf{x})}{\partial\mathbf{x}} = -p(\mathbf{x})\,\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\mathbf{m}) \tag{270}$$
$$\frac{\partial^2 p}{\partial\mathbf{x}\partial\mathbf{x}^T} = p(\mathbf{x})\big(\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\mathbf{m})(\mathbf{x}-\mathbf{m})^T\boldsymbol{\Sigma}^{-1} - \boldsymbol{\Sigma}^{-1}\big) \tag{271}$$
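A direct transcription of (269) and a finite-difference check of (270); function and variable names are our own:

```python
import numpy as np

def gauss_pdf(x, m, S):
    """Density (269) of N(m, S), a direct transcription of the formula."""
    d = x - m
    return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / np.sqrt(np.linalg.det(2*np.pi*S))

m = np.array([1.0, -1.0])
S = np.array([[2.0, 0.3],
              [0.3, 1.0]])
x = np.array([0.5, 0.0])

# (270): dp/dx = -p(x) S^-1 (x - m), checked by central differences
eps = 1e-6
num = np.array([(gauss_pdf(x + eps*e, m, S) - gauss_pdf(x - eps*e, m, S)) / (2*eps)
                for e in np.eye(2)])
ana = -gauss_pdf(x, m, S) * np.linalg.solve(S, x - m)
assert np.allclose(num, ana, atol=1e-8)
```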
Assume $\mathbf{x} = \begin{bmatrix}\mathbf{x}_a\\ \mathbf{x}_b\end{bmatrix}$ is Gaussian with mean $\begin{bmatrix}\boldsymbol{\mu}_a\\ \boldsymbol{\mu}_b\end{bmatrix}$ and covariance $\begin{bmatrix}\boldsymbol{\Sigma}_a & \boldsymbol{\Sigma}_c\\ \boldsymbol{\Sigma}_c^T & \boldsymbol{\Sigma}_b\end{bmatrix}$; then
$$p(\mathbf{x}_a|\mathbf{x}_b) = \mathcal{N}_{\mathbf{x}_a}(\hat{\boldsymbol{\mu}}_a, \hat{\boldsymbol{\Sigma}}_a), \quad \hat{\boldsymbol{\mu}}_a = \boldsymbol{\mu}_a + \boldsymbol{\Sigma}_c\boldsymbol{\Sigma}_b^{-1}(\mathbf{x}_b - \boldsymbol{\mu}_b), \quad \hat{\boldsymbol{\Sigma}}_a = \boldsymbol{\Sigma}_a - \boldsymbol{\Sigma}_c\boldsymbol{\Sigma}_b^{-1}\boldsymbol{\Sigma}_c^T \tag{276}$$
$$p(\mathbf{x}_b|\mathbf{x}_a) = \mathcal{N}_{\mathbf{x}_b}(\hat{\boldsymbol{\mu}}_b, \hat{\boldsymbol{\Sigma}}_b), \quad \hat{\boldsymbol{\mu}}_b = \boldsymbol{\mu}_b + \boldsymbol{\Sigma}_c^T\boldsymbol{\Sigma}_a^{-1}(\mathbf{x}_a - \boldsymbol{\mu}_a), \quad \hat{\boldsymbol{\Sigma}}_b = \boldsymbol{\Sigma}_b - \boldsymbol{\Sigma}_c^T\boldsymbol{\Sigma}_a^{-1}\boldsymbol{\Sigma}_c \tag{277}$$
Note that the covariance matrices are the Schur complements of the block matrix; see 9.1.5 for details.
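A minimal sketch of the conditional moments in (276), with made-up 1x1 blocks so the Schur-complement structure is easy to follow:

```python
import numpy as np

# Hypothetical joint covariance blocks for x = [x_a, x_b] (values are illustrative)
Sa = np.array([[2.0]])           # Cov(x_a)
Sb = np.array([[1.5]])           # Cov(x_b)
Sc = np.array([[0.8]])           # Cov(x_a, x_b)
mu_a, mu_b = np.array([0.0]), np.array([1.0])
xb = np.array([2.0])

# (276): conditional mean and covariance of x_a given x_b
mu_hat = mu_a + Sc @ np.linalg.solve(Sb, xb - mu_b)
S_hat = Sa - Sc @ np.linalg.solve(Sb, Sc.T)      # Schur complement of the block matrix

print(mu_hat, S_hat)   # approx [0.533...] and [[1.573...]]
```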
In a vector formulation (assuming $\boldsymbol{\Sigma}_1$, $\boldsymbol{\Sigma}_2$ are symmetric), the sum of two squared forms can be rewritten as a single squared form plus a constant, with
$$\boldsymbol{\Sigma}_c^{-1} = \boldsymbol{\Sigma}_1^{-1} + \boldsymbol{\Sigma}_2^{-1} \tag{283}$$
$$\mathbf{m}_c = (\boldsymbol{\Sigma}_1^{-1} + \boldsymbol{\Sigma}_2^{-1})^{-1}(\boldsymbol{\Sigma}_1^{-1}\mathbf{m}_1 + \boldsymbol{\Sigma}_2^{-1}\mathbf{m}_2) \tag{284}$$
$$C = \frac{1}{2}(\mathbf{m}_1^T\boldsymbol{\Sigma}_1^{-1} + \mathbf{m}_2^T\boldsymbol{\Sigma}_2^{-1})(\boldsymbol{\Sigma}_1^{-1} + \boldsymbol{\Sigma}_2^{-1})^{-1}(\boldsymbol{\Sigma}_1^{-1}\mathbf{m}_1 + \boldsymbol{\Sigma}_2^{-1}\mathbf{m}_2) \tag{285}$$
$$\quad - \frac{1}{2}\big(\mathbf{m}_1^T\boldsymbol{\Sigma}_1^{-1}\mathbf{m}_1 + \mathbf{m}_2^T\boldsymbol{\Sigma}_2^{-1}\mathbf{m}_2\big) \tag{286}$$
In a trace formulation (assuming $\boldsymbol{\Sigma}_1$, $\boldsymbol{\Sigma}_2$ are symmetric),
$$-\frac{1}{2}\mathrm{Tr}\big((\mathbf{X}-\mathbf{M}_1)^T\boldsymbol{\Sigma}_1^{-1}(\mathbf{X}-\mathbf{M}_1)\big) \tag{287}$$
$$-\frac{1}{2}\mathrm{Tr}\big((\mathbf{X}-\mathbf{M}_2)^T\boldsymbol{\Sigma}_2^{-1}(\mathbf{X}-\mathbf{M}_2)\big) \tag{288}$$
$$= -\frac{1}{2}\mathrm{Tr}\big[(\mathbf{X}-\mathbf{M}_c)^T\boldsymbol{\Sigma}_c^{-1}(\mathbf{X}-\mathbf{M}_c)\big] + C \tag{289}$$
$$\boldsymbol{\Sigma}_c^{-1} = \boldsymbol{\Sigma}_1^{-1} + \boldsymbol{\Sigma}_2^{-1} \tag{290}$$
$$\mathbf{M}_c = (\boldsymbol{\Sigma}_1^{-1} + \boldsymbol{\Sigma}_2^{-1})^{-1}(\boldsymbol{\Sigma}_1^{-1}\mathbf{M}_1 + \boldsymbol{\Sigma}_2^{-1}\mathbf{M}_2) \tag{291}$$
$$C = \frac{1}{2}\mathrm{Tr}\Big[(\boldsymbol{\Sigma}_1^{-1}\mathbf{M}_1 + \boldsymbol{\Sigma}_2^{-1}\mathbf{M}_2)^T(\boldsymbol{\Sigma}_1^{-1} + \boldsymbol{\Sigma}_2^{-1})^{-1}(\boldsymbol{\Sigma}_1^{-1}\mathbf{M}_1 + \boldsymbol{\Sigma}_2^{-1}\mathbf{M}_2)\Big] - \frac{1}{2}\mathrm{Tr}\big(\mathbf{M}_1^T\boldsymbol{\Sigma}_1^{-1}\mathbf{M}_1 + \mathbf{M}_2^T\boldsymbol{\Sigma}_2^{-1}\mathbf{M}_2\big) \tag{292}$$
8.2 Moments
8.2.1 Mean and covariance of linear forms
First and second moments: assume $\mathbf{x}\sim\mathcal{N}(\mathbf{m},\boldsymbol{\Sigma})$; then
$$E(\mathbf{x}) = \mathbf{m} \tag{294}$$
8.2.5 Moments
$$E[\mathbf{x}] = \sum_k \rho_k\mathbf{m}_k \tag{306}$$
$$\mathrm{Cov}(\mathbf{x}) = \sum_k\sum_{k'}\rho_k\rho_{k'}\big(\boldsymbol{\Sigma}_k + \mathbf{m}_k\mathbf{m}_k^T - \mathbf{m}_k\mathbf{m}_{k'}^T\big) \tag{307}$$
8.3 Miscellaneous
8.3.1 Whitening
Assume $\mathbf{x}\sim\mathcal{N}(\mathbf{m},\boldsymbol{\Sigma})$; then
$$\mathbf{z} = \boldsymbol{\Sigma}^{-1/2}(\mathbf{x}-\mathbf{m}) \sim \mathcal{N}(\mathbf{0},\mathbf{I}) \tag{308}$$
Conversely, having $\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})$ one can generate data $\mathbf{x}\sim\mathcal{N}(\mathbf{m},\boldsymbol{\Sigma})$ by setting
$$\mathbf{x} = \boldsymbol{\Sigma}^{1/2}\mathbf{z} + \mathbf{m} \sim \mathcal{N}(\mathbf{m},\boldsymbol{\Sigma}) \tag{309}$$
Note that $\boldsymbol{\Sigma}^{1/2}$ means the matrix which fulfils $\boldsymbol{\Sigma}^{1/2}\boldsymbol{\Sigma}^{1/2} = \boldsymbol{\Sigma}$, and that it exists and is unique since $\boldsymbol{\Sigma}$ is positive definite.
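A sketch of (308)-(309), using the eigendecomposition to form the unique positive definite square root (sample size and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
m = np.array([1.0, 2.0])
S = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Matrix square root via the eigendecomposition of the positive definite S
w, V = np.linalg.eigh(S)
S_half = V @ np.diag(np.sqrt(w)) @ V.T
assert np.allclose(S_half @ S_half, S)          # S^1/2 S^1/2 = S

# (308)-(309): whiten samples of N(m, S); the sample covariance is close to I
X = rng.multivariate_normal(m, S, size=100000)
Z = np.linalg.solve(S_half, (X - m).T).T        # z = S^-1/2 (x - m)
print(np.cov(Z.T))                              # approximately the identity
```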
8.3.3 Entropy
Entropy of a D-dimensional Gaussian:
$$H(\mathbf{x}) = -\int \mathcal{N}(\mathbf{m},\boldsymbol{\Sigma})\ln\mathcal{N}(\mathbf{m},\boldsymbol{\Sigma})\,d\mathbf{x} = \ln\sqrt{\det(2\pi\boldsymbol{\Sigma})} + \frac{D}{2} \tag{311}$$
8.4 Mixture of Gaussians
8.4.2 Derivatives
Defining $p(\mathbf{s}) = \sum_k \rho_k\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)$, one gets
$$\frac{\partial\ln p(\mathbf{s})}{\partial\rho_j} = \frac{\rho_j\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j)}{\sum_k\rho_k\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)}\frac{\partial}{\partial\rho_j}\ln\big[\rho_j\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j)\big] \tag{313}$$
$$= \frac{\rho_j\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j)}{\sum_k\rho_k\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)}\,\frac{1}{\rho_j} \tag{314}$$
$$\frac{\partial\ln p(\mathbf{s})}{\partial\boldsymbol{\mu}_j} = \frac{\rho_j\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j)}{\sum_k\rho_k\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)}\frac{\partial}{\partial\boldsymbol{\mu}_j}\ln\big[\rho_j\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j)\big] \tag{315}$$
$$= \frac{\rho_j\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j)}{\sum_k\rho_k\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)}\,\boldsymbol{\Sigma}_j^{-1}(\mathbf{s}-\boldsymbol{\mu}_j) \tag{316}$$
$$\frac{\partial\ln p(\mathbf{s})}{\partial\boldsymbol{\Sigma}_j} = \frac{\rho_j\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j)}{\sum_k\rho_k\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)}\frac{\partial}{\partial\boldsymbol{\Sigma}_j}\ln\big[\rho_j\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j)\big] \tag{317}$$
$$= \frac{\rho_j\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j)}{\sum_k\rho_k\mathcal{N}_{\mathbf{s}}(\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)}\,\frac{1}{2}\big(-\boldsymbol{\Sigma}_j^{-1} + \boldsymbol{\Sigma}_j^{-1}(\mathbf{s}-\boldsymbol{\mu}_j)(\mathbf{s}-\boldsymbol{\mu}_j)^T\boldsymbol{\Sigma}_j^{-1}\big) \tag{318}$$
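For a one-dimensional mixture, (316) reduces to a responsibility-weighted gradient; the following sketch checks it against finite differences (all parameter values are made up):

```python
import numpy as np

def N(s, mu, var):
    return np.exp(-0.5*(s - mu)**2/var) / np.sqrt(2*np.pi*var)

rho = np.array([0.3, 0.7])
mu = np.array([-1.0, 2.0])
var = np.array([1.0, 0.5])
s = 0.4

logp = lambda mu_: np.log(rho @ N(s, mu_, var))

# (316), scalar case: d ln p / d mu_j = responsibility_j * (s - mu_j) / var_j
r = rho * N(s, mu, var) / (rho @ N(s, mu, var))   # mixture responsibilities
ana = r * (s - mu) / var

eps = 1e-6
num = np.array([(logp(mu + eps*e) - logp(mu - eps*e)) / (2*eps) for e in np.eye(2)])
assert np.allclose(ana, num, atol=1e-6)
```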
9 Special Matrices
9.1 Block matrices
Let Aij denote the ijth block of A.
9.1.1 Multiplication
Assuming the dimensions of the blocks match, we have
$$\begin{bmatrix}\mathbf{A}_{11} & \mathbf{A}_{12}\\ \mathbf{A}_{21} & \mathbf{A}_{22}\end{bmatrix}\begin{bmatrix}\mathbf{B}_{11} & \mathbf{B}_{12}\\ \mathbf{B}_{21} & \mathbf{B}_{22}\end{bmatrix} = \begin{bmatrix}\mathbf{A}_{11}\mathbf{B}_{11}+\mathbf{A}_{12}\mathbf{B}_{21} & \mathbf{A}_{11}\mathbf{B}_{12}+\mathbf{A}_{12}\mathbf{B}_{22}\\ \mathbf{A}_{21}\mathbf{B}_{11}+\mathbf{A}_{22}\mathbf{B}_{21} & \mathbf{A}_{21}\mathbf{B}_{12}+\mathbf{A}_{22}\mathbf{B}_{22}\end{bmatrix}$$
The determinant can be expressed by using the Schur complements
$$\mathbf{C}_1 = \mathbf{A}_{11} - \mathbf{A}_{12}\mathbf{A}_{22}^{-1}\mathbf{A}_{21}, \qquad \mathbf{C}_2 = \mathbf{A}_{22} - \mathbf{A}_{21}\mathbf{A}_{11}^{-1}\mathbf{A}_{12}$$
as
$$\det\begin{bmatrix}\mathbf{A}_{11} & \mathbf{A}_{12}\\ \mathbf{A}_{21} & \mathbf{A}_{22}\end{bmatrix} = \det(\mathbf{A}_{22})\cdot\det(\mathbf{C}_1) = \det(\mathbf{A}_{11})\cdot\det(\mathbf{C}_2)$$
The inverse can be expressed as
$$\begin{bmatrix}\mathbf{A}_{11} & \mathbf{A}_{12}\\ \mathbf{A}_{21} & \mathbf{A}_{22}\end{bmatrix}^{-1} = \begin{bmatrix}\mathbf{C}_1^{-1} & -\mathbf{A}_{11}^{-1}\mathbf{A}_{12}\mathbf{C}_2^{-1}\\ -\mathbf{C}_2^{-1}\mathbf{A}_{21}\mathbf{A}_{11}^{-1} & \mathbf{C}_2^{-1}\end{bmatrix}$$
$$= \begin{bmatrix}\mathbf{A}_{11}^{-1} + \mathbf{A}_{11}^{-1}\mathbf{A}_{12}\mathbf{C}_2^{-1}\mathbf{A}_{21}\mathbf{A}_{11}^{-1} & -\mathbf{C}_1^{-1}\mathbf{A}_{12}\mathbf{A}_{22}^{-1}\\ -\mathbf{A}_{22}^{-1}\mathbf{A}_{21}\mathbf{C}_1^{-1} & \mathbf{A}_{22}^{-1} + \mathbf{A}_{22}^{-1}\mathbf{A}_{21}\mathbf{C}_1^{-1}\mathbf{A}_{12}\mathbf{A}_{22}^{-1}\end{bmatrix}$$
9.1.5 Schur complement
The Schur complement of the block matrix $\begin{bmatrix}\mathbf{A}_{11} & \mathbf{A}_{12}\\ \mathbf{A}_{21} & \mathbf{A}_{22}\end{bmatrix}$ is the matrix
$$\mathbf{A}_{11} - \mathbf{A}_{12}\mathbf{A}_{22}^{-1}\mathbf{A}_{21},$$
that is, what is denoted $\mathbf{C}_1$ above. Using the Schur complement, one can rewrite the inverse of a block matrix as
$$\begin{bmatrix}\mathbf{A}_{11} & \mathbf{A}_{12}\\ \mathbf{A}_{21} & \mathbf{A}_{22}\end{bmatrix}^{-1} = \begin{bmatrix}\mathbf{I} & \mathbf{0}\\ -\mathbf{A}_{22}^{-1}\mathbf{A}_{21} & \mathbf{I}\end{bmatrix}\begin{bmatrix}(\mathbf{A}_{11}-\mathbf{A}_{12}\mathbf{A}_{22}^{-1}\mathbf{A}_{21})^{-1} & \mathbf{0}\\ \mathbf{0} & \mathbf{A}_{22}^{-1}\end{bmatrix}\begin{bmatrix}\mathbf{I} & -\mathbf{A}_{12}\mathbf{A}_{22}^{-1}\\ \mathbf{0} & \mathbf{I}\end{bmatrix}$$
The Schur complement is useful when solving linear systems of the form
$$\begin{bmatrix}\mathbf{A}_{11} & \mathbf{A}_{12}\\ \mathbf{A}_{21} & \mathbf{A}_{22}\end{bmatrix}\begin{bmatrix}\mathbf{x}_1\\ \mathbf{x}_2\end{bmatrix} = \begin{bmatrix}\mathbf{b}_1\\ \mathbf{b}_2\end{bmatrix}$$
Eliminating $\mathbf{x}_2$ gives
$$(\mathbf{A}_{11} - \mathbf{A}_{12}\mathbf{A}_{22}^{-1}\mathbf{A}_{21})\mathbf{x}_1 = \mathbf{b}_1 - \mathbf{A}_{12}\mathbf{A}_{22}^{-1}\mathbf{b}_2.$$
When the appropriate inverses exist, this can be solved for $\mathbf{x}_1$, which can then be inserted in the equation for $\mathbf{x}_2$ to solve for $\mathbf{x}_2$.
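A sketch of this elimination strategy, solving for x1 via the Schur complement and back-substituting for x2; the diagonally dominant random blocks merely keep the inverses well behaved:

```python
import numpy as np

rng = np.random.default_rng(7)
n1, n2 = 3, 2
A11 = rng.standard_normal((n1, n1)) + 3*np.eye(n1)
A12 = rng.standard_normal((n1, n2))
A21 = rng.standard_normal((n2, n1))
A22 = rng.standard_normal((n2, n2)) + 3*np.eye(n2)
b1, b2 = rng.standard_normal(n1), rng.standard_normal(n2)

# Eliminate x2: (A11 - A12 A22^-1 A21) x1 = b1 - A12 A22^-1 b2
S = A11 - A12 @ np.linalg.solve(A22, A21)        # Schur complement C1
x1 = np.linalg.solve(S, b1 - A12 @ np.linalg.solve(A22, b2))
x2 = np.linalg.solve(A22, b2 - A21 @ x1)

# Compare with solving the full block system directly
A = np.block([[A11, A12], [A21, A22]])
b = np.concatenate([b1, b2])
assert np.allclose(np.concatenate([x1, x2]), np.linalg.solve(A, b))
```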
9.2 Discrete Fourier Transform Matrix, The
The DFT of the vector $\mathbf{x} = [x(0), x(1), \ldots, x(N-1)]^T$ can be written in matrix form as
$$\mathbf{X} = \mathbf{W}_N\mathbf{x}, \tag{328}$$
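A minimal construction of W_N, assuming the common convention (W_N)_jk = e^{-i 2 pi j k / N}, checked against numpy.fft:

```python
import numpy as np

N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
W = np.exp(-2j*np.pi*j*k/N)                # DFT matrix, (W_N)_jk = e^{-i 2 pi j k / N}

x = np.random.default_rng(8).standard_normal(N)
assert np.allclose(W @ x, np.fft.fft(x))   # X = W_N x matches the FFT
```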
9.3.1 Skew-Hermitian
A matrix $\mathbf{A}$ is called skew-Hermitian if
$$\mathbf{A} = -\mathbf{A}^H$$
For real valued matrices, skew-Hermitian and skew-symmetric matrices are equivalent.
$$\mathbf{A}\ \text{Hermitian} \quad\Leftrightarrow\quad i\mathbf{A}\ \text{skew-Hermitian} \tag{337}$$
$$\mathbf{A}\ \text{skew-Hermitian} \quad\Leftrightarrow\quad \mathbf{x}^H\mathbf{A}\mathbf{y} = -\mathbf{x}^H\mathbf{A}^H\mathbf{y}, \ \forall\mathbf{x},\mathbf{y} \tag{338}$$
$$\mathbf{A}\ \text{skew-Hermitian} \quad\Rightarrow\quad \mathrm{eig}(\mathbf{A}) = i\lambda, \ \lambda\in\mathbb{R} \tag{339}$$
9.4.1 Nilpotent
A matrix A is nilpotent if
A2 = 0
9.4.2 Unipotent
A matrix A is unipotent if
AA = I
9.5 Orthogonal matrices
9.5.1 Ortho-Sym
A matrix $\mathbf{Q}_+$ which simultaneously is orthogonal and symmetric is called an ortho-sym matrix [19]. Hereby
$$\mathbf{Q}_+^T\mathbf{Q}_+ = \mathbf{I} \tag{349}$$
$$\mathbf{Q}_+ = \mathbf{Q}_+^T \tag{350}$$
The powers of an ortho-sym matrix are
$$\mathbf{Q}_+^k = \frac{1+(-1)^k}{2}\mathbf{I} + \frac{1+(-1)^{k+1}}{2}\mathbf{Q}_+ \tag{351}$$
$$= \frac{1+\cos(k\pi)}{2}\mathbf{I} + \frac{1-\cos(k\pi)}{2}\mathbf{Q}_+ \tag{352}$$
9.5.2 Ortho-Skew
A matrix which simultaneously is orthogonal and antisymmetric is called an ortho-skew matrix [19]. Hereby
$$\mathbf{Q}_-^H\mathbf{Q}_- = \mathbf{I} \tag{353}$$
$$\mathbf{Q}_- = -\mathbf{Q}_-^H \tag{354}$$
The powers of an ortho-skew matrix are
$$\mathbf{Q}_-^k = \frac{i^k + (-i)^k}{2}\mathbf{I} - i\,\frac{i^k - (-i)^k}{2}\mathbf{Q}_- \tag{355}$$
$$= \cos\Big(k\frac{\pi}{2}\Big)\mathbf{I} + \sin\Big(k\frac{\pi}{2}\Big)\mathbf{Q}_- \tag{356}$$
9.5.3 Decomposition
A square matrix $\mathbf{A}$ can always be written as a sum of a symmetric $\mathbf{A}_+$ and an antisymmetric matrix $\mathbf{A}_-$:
$$\mathbf{A} = \mathbf{A}_+ + \mathbf{A}_- \tag{357}$$
9.6 Positive Definite and Semi-definite Matrices
9.6.1 Definitions
A matrix $\mathbf{A}$ is positive definite if and only if
$$\mathbf{x}^T\mathbf{A}\mathbf{x} > 0, \quad \forall\mathbf{x}\ne\mathbf{0} \tag{358}$$
and positive semi-definite if and only if
$$\mathbf{x}^T\mathbf{A}\mathbf{x} \ge 0, \quad \forall\mathbf{x} \tag{359}$$
9.6.2 Eigenvalues
The following holds with respect to the eigenvalues:
$$\mathbf{A}\ \text{pos. def.} \quad\Leftrightarrow\quad \mathrm{eig}\Big(\tfrac{\mathbf{A}+\mathbf{A}^H}{2}\Big) > 0$$
$$\mathbf{A}\ \text{pos. semi-def.} \quad\Leftrightarrow\quad \mathrm{eig}\Big(\tfrac{\mathbf{A}+\mathbf{A}^H}{2}\Big) \ge 0 \tag{360}$$
9.6.3 Trace
The following holds with respect to the trace:
$$\mathbf{A}\ \text{pos. def.} \quad\Rightarrow\quad \mathrm{Tr}(\mathbf{A}) > 0$$
9.6.4 Inverse
If A is positive definite, then A is invertible and A−1 is also positive definite.
9.6.5 Diagonal
If A is positive definite, then Aii > 0, ∀i
9.6.6 Decomposition I
The matrix A is positive semi-definite of rank r ⇔ there exists a matrix B of
rank r such that A = BBT
9.6.7 Decomposition II
Assume $\mathbf{A}$ is an $n\times n$ positive semi-definite matrix; then there exists an $n\times r$ matrix $\mathbf{B}$ of rank $r$ such that $\mathbf{B}^T\mathbf{A}\mathbf{B} = \mathbf{I}$.
9.7 Single-entry Matrix, The
The single-entry matrix $\mathbf{J}^{ij}$ is the matrix which is zero everywhere except in entry $(i,j)$, where it is 1, i.e. $(\mathbf{J}^{ij})_{kl} = \delta_{ik}\delta_{jl}$.
Assume $\mathbf{A}$ to be $n\times m$ and $\mathbf{J}^{ij}$ to be $m\times p$; then
$$\mathbf{A}\mathbf{J}^{ij} = \begin{bmatrix}\mathbf{0} & \cdots & \mathbf{0} & \mathbf{A}_i & \mathbf{0} & \cdots & \mathbf{0}\end{bmatrix} \tag{363}$$
i.e. an $n\times p$ matrix of zeros with the $i$.th column of $\mathbf{A}$ in place of the $j$.th column. Assume $\mathbf{A}$ to be $n\times m$ and $\mathbf{J}^{ij}$ to be $p\times n$; then
$$\mathbf{J}^{ij}\mathbf{A} = \begin{bmatrix}\mathbf{0}\\ \vdots\\ \mathbf{0}\\ \mathbf{A}_j\\ \mathbf{0}\\ \vdots\\ \mathbf{0}\end{bmatrix} \tag{364}$$
i.e. a $p\times m$ matrix of zeros with the $j$.th row of $\mathbf{A}$ in place of the $i$.th row.
If $\mathbf{A}$ is symmetric, then
$$\mathbf{S}^{ij} = \mathbf{J}^{ij} + \mathbf{J}^{ji} - \mathbf{J}^{ij}\mathbf{J}^{ij} \tag{377}$$
9.8.1 Symmetric
The matrix $\mathbf{A}$ is said to be symmetric if
$$\mathbf{A} = \mathbf{A}^T \tag{378}$$
Symmetric matrices have many important properties, e.g. that their eigenvalues are real and their eigenvectors orthogonal.
9.8.2 Skew-symmetric/Antisymmetric
The antisymmetric matrix is also known as the skew-symmetric matrix. It has the defining property
$$\mathbf{A} = -\mathbf{A}^T \tag{379}$$
from which it can be seen that antisymmetric matrices always have a zero diagonal.
9.8.3 Decomposition
A square matrix A can always be written as a sum of a symmetric A+ and an
antisymmetric matrix A−
$$\mathbf{A} = \mathbf{A}_+ + \mathbf{A}_- \tag{382}$$
Such a decomposition could e.g. be
$$\mathbf{A} = \frac{\mathbf{A}+\mathbf{A}^T}{2} + \frac{\mathbf{A}-\mathbf{A}^T}{2} = \mathbf{A}_+ + \mathbf{A}_- \tag{383}$$
9.10 Transition matrices
The transition matrix usually describes the probability of moving from state $i$ to $j$ in one step and is closely related to Markov processes. Transition matrices have the following properties:
$$\mathrm{Prob}[i\to j\ \text{in 1 step}] = (\mathbf{P})_{ij} \tag{389}$$
$$\mathrm{Prob}[i\to j\ \text{in 2 steps}] = (\mathbf{P}^2)_{ij} \tag{390}$$
$$\mathrm{Prob}[i\to j\ \text{in}\ k\ \text{steps}] = (\mathbf{P}^k)_{ij} \tag{391}$$
$$\text{If all rows are identical} \quad\Rightarrow\quad \mathbf{P}^n = \mathbf{P} \tag{392}$$
$$\boldsymbol{\alpha}\mathbf{P} = \boldsymbol{\alpha}, \quad \boldsymbol{\alpha}\ \text{is called invariant} \tag{393}$$
where $\boldsymbol{\alpha}$ is a so-called stationary probability vector, i.e., $0\le\alpha_i\le 1$ and $\sum_i\alpha_i = 1$.
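The invariant vector in (393) is a left eigenvector of P for eigenvalue 1; a small sketch with illustrative transition probabilities:

```python
import numpy as np

# A small row-stochastic transition matrix (values are illustrative)
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

# alpha P = alpha: the invariant vector is a left eigenvector for eigenvalue 1
w, V = np.linalg.eig(P.T)
alpha = np.real(V[:, np.argmin(np.abs(w - 1))])
alpha /= alpha.sum()                        # normalize so the entries sum to 1

assert np.allclose(alpha @ P, alpha)
print(alpha)
```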
9.11 Units, Permutation and Shift
9.11.3 Permutations
Let $\mathbf{P}$ be some permutation matrix, e.g.
$$\mathbf{P} = \begin{bmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1\end{bmatrix} = \begin{bmatrix}\mathbf{e}_2 & \mathbf{e}_1 & \mathbf{e}_3\end{bmatrix} = \begin{bmatrix}\mathbf{e}_2^T\\ \mathbf{e}_1^T\\ \mathbf{e}_3^T\end{bmatrix} \tag{396}$$
For permutation matrices it holds that
$$\mathbf{P}\mathbf{P}^T = \mathbf{I} \tag{397}$$
and that
$$\mathbf{A}\mathbf{P} = \begin{bmatrix}\mathbf{A}\mathbf{e}_2 & \mathbf{A}\mathbf{e}_1 & \mathbf{A}\mathbf{e}_3\end{bmatrix} \qquad \mathbf{P}\mathbf{A} = \begin{bmatrix}\mathbf{e}_2^T\mathbf{A}\\ \mathbf{e}_1^T\mathbf{A}\\ \mathbf{e}_3^T\mathbf{A}\end{bmatrix} \tag{398}$$
That is, the first is a matrix with the columns of $\mathbf{A}$ in permuted sequence, and the second is a matrix with the rows of $\mathbf{A}$ in permuted sequence.
10 Functions and Operators
A matrix function $f(\mathbf{A})$ can be defined via the power series $f(\mathbf{A}) = \sum_n c_n\mathbf{A}^n$, assuming the limit exists and is finite. If the coefficients $c_n$ fulfil $\sum_n c_n x^n < \infty$, then one can prove that the above series exists and is finite; see [1]. Thus for any analytic function $f(x)$ there exists a corresponding matrix function $f(\mathbf{A})$ constructed by the Taylor expansion. Using this, one can prove the following results:
1) A matrix $\mathbf{A}$ is a zero of its own characteristic polynomial [1]:
$$p(\lambda) = \det(\mathbf{I}\lambda - \mathbf{A}) = \sum_n c_n\lambda^n \quad\Rightarrow\quad p(\mathbf{A}) = \mathbf{0} \tag{409}$$
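A numerical check of the Cayley-Hamilton statement (409), using numpy.poly for the characteristic polynomial coefficients:

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((4, 4))

# Coefficients of p(lambda) = det(I lambda - A), highest power first
c = np.poly(A)

# (409): p(A) = 0, evaluating the polynomial with matrix powers
pA = sum(ci * np.linalg.matrix_power(A, len(c) - 1 - i) for i, ci in enumerate(c))
assert np.allclose(pA, np.zeros((4, 4)), atol=1e-8)
```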
For the matrix exponential $e^{\mathbf{A}} = \sum_{n=0}^\infty\frac{1}{n!}\mathbf{A}^n$ the following properties hold:
$$e^{\mathbf{A}}e^{\mathbf{B}} = e^{\mathbf{A}+\mathbf{B}} \quad\text{if}\ \mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A} \tag{416}$$
$$(e^{\mathbf{A}})^{-1} = e^{-\mathbf{A}} \tag{417}$$
$$\frac{d}{dt}e^{t\mathbf{A}} = \mathbf{A}e^{t\mathbf{A}} = e^{t\mathbf{A}}\mathbf{A}, \quad t\in\mathbb{R} \tag{418}$$
$$\frac{d}{dt}\mathrm{Tr}(e^{t\mathbf{A}}) = \mathrm{Tr}(\mathbf{A}e^{t\mathbf{A}}) \tag{419}$$
$$\det(e^{\mathbf{A}}) = e^{\mathrm{Tr}(\mathbf{A})} \tag{420}$$
The trigonometric matrix functions are defined by
$$\sin(\mathbf{A}) \equiv \sum_{n=0}^\infty\frac{(-1)^n\mathbf{A}^{2n+1}}{(2n+1)!} = \mathbf{A} - \frac{1}{3!}\mathbf{A}^3 + \frac{1}{5!}\mathbf{A}^5 - \ldots \tag{421}$$
$$\cos(\mathbf{A}) \equiv \sum_{n=0}^\infty\frac{(-1)^n\mathbf{A}^{2n}}{(2n)!} = \mathbf{I} - \frac{1}{2!}\mathbf{A}^2 + \frac{1}{4!}\mathbf{A}^4 - \ldots \tag{422}$$
10.2 Kronecker and Vec Operator
The Kronecker product of an $m\times n$ matrix $\mathbf{A}$ and an $r\times q$ matrix $\mathbf{B}$ is the $mr\times nq$ matrix whose $(i,j)$ block is $A_{ij}\mathbf{B}$. It has the following properties:
$$\mathbf{A}\otimes(\mathbf{B}+\mathbf{C}) = \mathbf{A}\otimes\mathbf{B} + \mathbf{A}\otimes\mathbf{C} \tag{424}$$
$$\mathbf{A}\otimes\mathbf{B} \ne \mathbf{B}\otimes\mathbf{A} \quad\text{in general} \tag{425}$$
$$\mathbf{A}\otimes(\mathbf{B}\otimes\mathbf{C}) = (\mathbf{A}\otimes\mathbf{B})\otimes\mathbf{C} \tag{426}$$
$$(\alpha_A\mathbf{A}\otimes\alpha_B\mathbf{B}) = \alpha_A\alpha_B(\mathbf{A}\otimes\mathbf{B}) \tag{427}$$
$$(\mathbf{A}\otimes\mathbf{B})^T = \mathbf{A}^T\otimes\mathbf{B}^T \tag{428}$$
$$(\mathbf{A}\otimes\mathbf{B})(\mathbf{C}\otimes\mathbf{D}) = \mathbf{A}\mathbf{C}\otimes\mathbf{B}\mathbf{D} \tag{429}$$
$$(\mathbf{A}\otimes\mathbf{B})^{-1} = \mathbf{A}^{-1}\otimes\mathbf{B}^{-1} \tag{430}$$
$$\mathrm{rank}(\mathbf{A}\otimes\mathbf{B}) = \mathrm{rank}(\mathbf{A})\,\mathrm{rank}(\mathbf{B}) \tag{431}$$
$$\mathrm{Tr}(\mathbf{A}\otimes\mathbf{B}) = \mathrm{Tr}(\mathbf{A})\,\mathrm{Tr}(\mathbf{B}) \tag{432}$$
$$\det(\mathbf{A}\otimes\mathbf{B}) = \det(\mathbf{A})^{\mathrm{rank}(\mathbf{B})}\det(\mathbf{B})^{\mathrm{rank}(\mathbf{A})} \tag{433}$$
$$\{\mathrm{eig}(\mathbf{A}\otimes\mathbf{B})\} = \{\mathrm{eig}(\mathbf{B}\otimes\mathbf{A})\} \quad\text{if}\ \mathbf{A},\mathbf{B}\ \text{are square} \tag{434}$$
$$\{\mathrm{eig}(\mathbf{A}\otimes\mathbf{B})\} = \{\mathrm{eig}(\mathbf{A})\,\mathrm{eig}(\mathbf{B})^T\} \quad\text{if}\ \mathbf{A},\mathbf{B}\ \text{are square} \tag{435}$$
Here $\{\lambda_i\}$ denotes the set of values $\lambda_i$, that is, the values in no particular order or structure.
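A few of the Kronecker identities above, (429), (432) and (435), checked numerically; sorting complex eigenvalues lexicographically is a pragmatic way to compare the sets:

```python
import numpy as np

rng = np.random.default_rng(10)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))
C, D = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))

# (429): (A x B)(C x D) = AC x BD
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
# (432): Tr(A x B) = Tr(A) Tr(B)
assert np.isclose(np.trace(np.kron(A, B)), np.trace(A)*np.trace(B))
# (435): eigenvalues of A x B are all products eig(A)_i eig(B)_j
ev = np.sort_complex(np.outer(np.linalg.eigvals(A), np.linalg.eigvals(B)).ravel())
assert np.allclose(ev, np.sort_complex(np.linalg.eigvals(np.kron(A, B))), atol=1e-6)
```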
The coefficients can be written in block form as
$$\begin{bmatrix}\mathbf{a}\\ b\end{bmatrix} = \begin{bmatrix}\mathbf{R}_{xx} & \mathbf{R}_{x1}\\ \mathbf{R}_{x1} & R_{11}\end{bmatrix}^{-1}\begin{bmatrix}\mathbf{R}_{x,y}\\ R_{y1}\end{bmatrix} \tag{440}$$
Assume $\mathbf{A}$ is square and invertible. Then the system
$$\mathbf{A}\mathbf{x} = \mathbf{b} \tag{441}$$
has the unique solution
$$\mathbf{x} = \mathbf{A}^{-1}\mathbf{b} \tag{442}$$
$$\mathbf{A}\mathbf{X} + \mathbf{X}\mathbf{B} = \mathbf{C} \tag{452}$$
$$\mathrm{vec}(\mathbf{X}) = (\mathbf{I}\otimes\mathbf{A} + \mathbf{B}^T\otimes\mathbf{I})^{-1}\mathrm{vec}(\mathbf{C}) \tag{453}$$
See Sec. 10.2.1 and 10.2.2 for details on the Kronecker product and the vec operator.
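A sketch of the vec-trick (453), using a column-stacking vec operator; the diagonal shifts only keep the example well conditioned:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 3
A = rng.standard_normal((n, n)) + 2*np.eye(n)
B = rng.standard_normal((n, n)) + 2*np.eye(n)
C = rng.standard_normal((n, n))

# (453): vec(X) = (I x A + B^T x I)^-1 vec(C), with column-stacking vec
I = np.eye(n)
vecC = C.reshape(-1, order="F")                  # column-wise vec operator
vecX = np.linalg.solve(np.kron(I, A) + np.kron(B.T, I), vecC)
X = vecX.reshape(n, n, order="F")

assert np.allclose(A @ X + X @ B, C, atol=1e-10)
```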
$$\sum_n\mathbf{A}_n\mathbf{X}\mathbf{B}_n = \mathbf{C} \tag{454}$$
$$\mathrm{vec}(\mathbf{X}) = \Big(\sum_n\mathbf{B}_n^T\otimes\mathbf{A}_n\Big)^{-1}\mathrm{vec}(\mathbf{C}) \tag{455}$$
See Sec. 10.2.1 and 10.2.2 for details on the Kronecker product and the vec operator.
10.4 Vector Norms
10.4.1 Examples
$$\|\mathbf{x}\|_1 = \sum_i |x_i| \tag{456}$$
$$\|\mathbf{x}\|_2^2 = \mathbf{x}^H\mathbf{x} \tag{457}$$
$$\|\mathbf{x}\|_p = \Big[\sum_i |x_i|^p\Big]^{1/p} \tag{458}$$
$$\|\mathbf{x}\|_\infty = \max_i |x_i| \tag{459}$$
10.5 Matrix Norms
10.5.1 Definitions
A matrix norm is a mapping which fulfils
$$\|\mathbf{A}\| \ge 0 \tag{460}$$
$$\|\mathbf{A}\| = 0 \quad\Leftrightarrow\quad \mathbf{A} = \mathbf{0} \tag{461}$$
$$\|c\mathbf{A}\| = |c|\,\|\mathbf{A}\|, \quad c\in\mathbb{R} \tag{462}$$
$$\|\mathbf{A}+\mathbf{B}\| \le \|\mathbf{A}\| + \|\mathbf{B}\| \tag{463}$$
10.5.2 Induced Norm or Operator Norm
An induced norm is a matrix norm induced by a vector norm via
$$\|\mathbf{A}\| = \sup\{\|\mathbf{A}\mathbf{x}\| : \|\mathbf{x}\| = 1\},$$
where $\|\cdot\|$ on the left side is the induced matrix norm, while $\|\cdot\|$ on the right side denotes the vector norm. For induced norms it holds that
$$\|\mathbf{I}\| = 1 \tag{465}$$
$$\|\mathbf{A}\mathbf{x}\| \le \|\mathbf{A}\|\cdot\|\mathbf{x}\|, \quad \text{for all}\ \mathbf{A},\mathbf{x} \tag{466}$$
$$\|\mathbf{A}\mathbf{B}\| \le \|\mathbf{A}\|\cdot\|\mathbf{B}\|, \quad \text{for all}\ \mathbf{A},\mathbf{B} \tag{467}$$
10.5.3 Examples
$$\|\mathbf{A}\|_1 = \max_j\sum_i |A_{ij}| \tag{468}$$
$$\|\mathbf{A}\|_2 = \sqrt{\max\,\mathrm{eig}(\mathbf{A}^H\mathbf{A})} \tag{469}$$
$$\|\mathbf{A}\|_p = \max_{\|\mathbf{x}\|_p = 1}\|\mathbf{A}\mathbf{x}\|_p \tag{470}$$
$$\|\mathbf{A}\|_\infty = \max_i\sum_j |A_{ij}| \tag{471}$$
$$\|\mathbf{A}\|_F = \sqrt{\sum_{ij}|A_{ij}|^2} = \sqrt{\mathrm{Tr}(\mathbf{A}\mathbf{A}^H)} \quad\text{(Frobenius)} \tag{472}$$
$$\|\mathbf{A}\|_{\max} = \max_{ij}|A_{ij}| \tag{473}$$
$$\|\mathbf{A}\|_{KF} = \|\mathrm{sing}(\mathbf{A})\|_1 \quad\text{(Ky Fan)} \tag{474}$$
where $\mathrm{sing}(\mathbf{A})$ is the vector of singular values of the matrix $\mathbf{A}$.
10.5.4 Inequalities
E. H. Rasmussen has in yet unpublished material derived and collected the following inequalities. They are collected in the table below, where an entry means that the row norm is bounded by the entry times the column norm, assuming $\mathbf{A}$ is $m\times n$ and $d = \mathrm{rank}(\mathbf{A})$:

             ||A||max  ||A||1   ||A||inf  ||A||2   ||A||F   ||A||KF
||A||max        .        1         1         1        1        1
||A||1          m        .         m        √m       √m       √m
||A||inf        n        n         .        √n       √n       √n
||A||2        √mn       √n        √m         .        1        1
||A||F        √mn       √n        √m        √d        .        1
||A||KF       √mnd      √nd       √md        d       √d        .

which is to be read as, e.g.,
$$\|\mathbf{A}\|_2 \le \sqrt{m}\cdot\|\mathbf{A}\|_\infty \tag{475}$$
10.6 Rank
10.6.1 Sylvester’s Inequality
If A is m × n and B is n × r, then
rank(A) + rank(B) − n ≤ rank(AB) ≤ min{rank(A), rank(B)} (477)
10.7 Integral Involving Dirac Delta Functions
See [9].
10.8 Miscellaneous
For any $\mathbf{A}$ it holds that
$$\mathrm{rank}(\mathbf{A}) = \mathrm{rank}(\mathbf{A}^T) = \mathrm{rank}(\mathbf{A}\mathbf{A}^T) = \mathrm{rank}(\mathbf{A}^T\mathbf{A})$$
A One-dimensional Results
A.1 Gaussian
A.1.1 Density
$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\Big[-\frac{(x-\mu)^2}{2\sigma^2}\Big] \tag{482}$$
A.1.2 Normalization
$$\int e^{-\frac{(s-\mu)^2}{2\sigma^2}}\,ds = \sqrt{2\pi\sigma^2} \tag{483}$$
$$\int e^{-(ax^2+bx+c)}\,dx = \sqrt{\frac{\pi}{a}}\exp\Big[\frac{b^2-4ac}{4a}\Big] \tag{484}$$
$$\int e^{c_2x^2+c_1x+c_0}\,dx = \sqrt{\frac{\pi}{-c_2}}\exp\Big[\frac{c_1^2-4c_2c_0}{-4c_2}\Big] \tag{485}$$
A.1.3 Derivatives
$$\frac{\partial p(x)}{\partial\mu} = p(x)\,\frac{(x-\mu)}{\sigma^2} \tag{486}$$
$$\frac{\partial\ln p(x)}{\partial\mu} = \frac{(x-\mu)}{\sigma^2} \tag{487}$$
$$\frac{\partial p(x)}{\partial\sigma} = p(x)\,\frac{1}{\sigma}\Big[\frac{(x-\mu)^2}{\sigma^2} - 1\Big] \tag{488}$$
$$\frac{\partial\ln p(x)}{\partial\sigma} = \frac{1}{\sigma}\Big[\frac{(x-\mu)^2}{\sigma^2} - 1\Big] \tag{489}$$
A.1.5 Moments
If the density is expressed by
$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\Big[-\frac{(s-\mu)^2}{2\sigma^2}\Big] \quad\text{or}\quad p(x) = C\exp(c_2x^2 + c_1x) \tag{490}$$
then the first few basic moments are
$$\langle x\rangle = \mu = \frac{-c_1}{2c_2}$$
$$\langle x^2\rangle = \sigma^2 + \mu^2 = \frac{-1}{2c_2} + \Big(\frac{-c_1}{2c_2}\Big)^2$$
$$\langle x^3\rangle = 3\sigma^2\mu + \mu^3 = \frac{c_1}{(2c_2)^2}\Big[3 - \frac{c_1^2}{2c_2}\Big]$$
$$\langle x^4\rangle = \mu^4 + 6\mu^2\sigma^2 + 3\sigma^4 = \Big(\frac{c_1}{2c_2}\Big)^4 + 6\Big(\frac{c_1}{2c_2}\Big)^2\frac{-1}{2c_2} + 3\Big(\frac{1}{2c_2}\Big)^2$$
A.2.2 Moments
A useful fact of MoG is that
$$\langle x^n\rangle = \sum_k \rho_k\langle x^n\rangle_k \tag{493}$$
where $\langle\cdot\rangle_k$ denotes average with respect to the $k$.th component. We can calculate the first four moments from the densities
$$p(x) = \sum_k \rho_k\frac{1}{\sqrt{2\pi\sigma_k^2}}\exp\Big[-\frac{1}{2}\frac{(x-\mu_k)^2}{\sigma_k^2}\Big] \tag{494}$$
$$p(x) = \sum_k \rho_k C_k\exp\big[c_{k2}x^2 + c_{k1}x\big] \tag{495}$$
as
$$\langle x\rangle = \sum_k \rho_k\mu_k = \sum_k \rho_k\Big(\frac{-c_{k1}}{2c_{k2}}\Big)$$
$$\langle x^2\rangle = \sum_k \rho_k(\sigma_k^2 + \mu_k^2) = \sum_k \rho_k\Big[\frac{-1}{2c_{k2}} + \Big(\frac{-c_{k1}}{2c_{k2}}\Big)^2\Big]$$
$$\langle x^3\rangle = \sum_k \rho_k(3\sigma_k^2\mu_k + \mu_k^3) = \sum_k \rho_k\Big[\frac{c_{k1}}{(2c_{k2})^2}\Big(3 - \frac{c_{k1}^2}{2c_{k2}}\Big)\Big]$$
$$\langle x^4\rangle = \sum_k \rho_k(\mu_k^4 + 6\mu_k^2\sigma_k^2 + 3\sigma_k^4) = \sum_k \rho_k\Big[\Big(\frac{1}{2c_{k2}}\Big)^2\Big(\Big(\frac{c_{k1}^2}{2c_{k2}}\Big)^2 - 6\frac{c_{k1}^2}{2c_{k2}} + 3\Big)\Big]$$
From the un-centralized moments one can derive other entities like
$$\langle x^2\rangle - \langle x\rangle^2 = \sum_{k,k'}\rho_k\rho_{k'}\big[\mu_k^2 + \sigma_k^2 - \mu_k\mu_{k'}\big]$$
$$\langle x^3\rangle - \langle x^2\rangle\langle x\rangle = \sum_{k,k'}\rho_k\rho_{k'}\big[3\sigma_k^2\mu_k + \mu_k^3 - (\sigma_k^2 + \mu_k^2)\mu_{k'}\big]$$
$$\langle x^4\rangle - \langle x^2\rangle^2 = \sum_{k,k'}\rho_k\rho_{k'}\big[\mu_k^4 + 6\mu_k^2\sigma_k^2 + 3\sigma_k^4 - (\sigma_k^2+\mu_k^2)(\sigma_{k'}^2+\mu_{k'}^2)\big]$$
A.2.3 Derivatives
Defining $p(s) = \sum_k \rho_k\mathcal{N}_s(\mu_k,\sigma_k^2)$, we get for a parameter $\theta_j$ of the $j$.th component
$$\frac{\partial\ln p(s)}{\partial\theta_j} = \frac{\rho_j\mathcal{N}_s(\mu_j,\sigma_j^2)}{\sum_k\rho_k\mathcal{N}_s(\mu_k,\sigma_k^2)}\frac{\partial\ln(\rho_j\mathcal{N}_s(\mu_j,\sigma_j^2))}{\partial\theta_j} \tag{496}$$
B Proofs and Details
B.1 Misc Proofs
For the derivative of $(\mathbf{X}^n)_{kl}$ one has
$$\frac{\partial(\mathbf{X}^n)_{kl}}{\partial X_{ij}} = \frac{\partial}{\partial X_{ij}}\sum_{u_1,\ldots,u_{n-1}} X_{k,u_1}X_{u_1,u_2}\cdots X_{u_{n-1},l}$$
Each factor contributes one term in which it is replaced by the corresponding Kronecker deltas:
$$= \sum_{u_1,\ldots,u_{n-1}}\big[\delta_{k,i}\delta_{u_1,j}X_{u_1,u_2}\cdots X_{u_{n-1},l} + X_{k,u_1}\delta_{u_1,i}\delta_{u_2,j}\cdots X_{u_{n-1},l} + \ldots + X_{k,u_1}X_{u_1,u_2}\cdots\delta_{u_{n-1},i}\delta_{l,j}\big]$$
$$= \sum_{r=0}^{n-1}(\mathbf{X}^r)_{ki}(\mathbf{X}^{n-1-r})_{jl} = \sum_{r=0}^{n-1}(\mathbf{X}^r\mathbf{J}^{ij}\mathbf{X}^{n-1-r})_{kl}$$
Using the properties of the single entry matrix found in Sec. 9.7.4, the result follows easily.
Through the calculations, (85) and (194) were used. In addition, by use of (195), the derivative can be found with respect to the imaginary part of X. Notice that for real X, A, the sum of (203) and (204) reduces to (45).
References
[1] Karl Gustav Andersson and Lars-Christer Boiers. Ordinära differentialekvationer. Studentlitteratur, 1992.
[2] Jörn Anemüller, Terrence J. Sejnowski, and Scott Makeig. Complex independent component analysis of frequency-domain electroencephalographic data. Neural Networks, 16(9):1311–1323, November 2003.
[18] Thomas P. Minka. Old and new matrix algebra useful for statistics, December 2000. Notes.
[19] Daniele Mortari. Ortho-Skew and Ortho-Sym Matrix Trigonometry. John Lee Junkins Astrodynamics Symposium, AAS 03-265, May 2003. Texas A&M University, College Station, TX.