The Matrix Cookbook
[ http://matrixcookbook.com ]
Errors: Very likely there are errors, typos, and mistakes for which we apologize and would be grateful to receive corrections at cookbook@2302.dk.
Contents
1 Basics
1.1 Trace and Determinants
1.2 The Special Case 2x2
2 Derivatives
2.1 Derivatives of a Determinant
2.2 Derivatives of an Inverse
2.3 Derivatives of Eigenvalues
2.4 Derivatives of Matrices, Vectors and Scalar Forms
2.5 Derivatives of Traces
2.6 Derivatives of vector norms
2.7 Derivatives of matrix norms
2.8 Derivatives of Structured Matrices
3 Inverses
3.1 Basic
3.2 Exact Relations
3.3 Implication on Inverses
3.4 Approximations
3.5 Generalized Inverse
3.6 Pseudo Inverse
4 Complex Matrices
4.1 Complex Derivatives
4.2 Higher order and non-linear derivatives
4.3 Inverse of complex sum
7 Multivariate Distributions
7.1 Cauchy
7.2 Dirichlet
7.3 Normal
7.4 Normal-Inverse Gamma
7.5 Gaussian
7.6 Multinomial
7.7 Student's t
7.8 Wishart
7.9 Wishart, Inverse
8 Gaussians
8.1 Basics
8.2 Moments
8.3 Miscellaneous
8.4 Mixture of Gaussians
9 Special Matrices
9.1 Block matrices
9.2 Discrete Fourier Transform Matrix, The
9.3 Hermitian Matrices and skew-Hermitian
9.4 Idempotent Matrices
9.5 Orthogonal matrices
9.6 Positive Definite and Semi-definite Matrices
9.7 Singleentry Matrix, The
9.8 Symmetric, Skew-symmetric/Antisymmetric
9.9 Toeplitz Matrices
9.10 Transition matrices
9.11 Units, Permutation and Shift
9.12 Vandermonde Matrices
A One-dimensional Results
A.1 Gaussian
A.2 One Dimensional Mixture of Gaussians
Notation and Nomenclature
A        Matrix
A_ij     Matrix indexed for some purpose
A_i      Matrix indexed for some purpose
A^ij     Matrix indexed for some purpose
A^n      Matrix indexed for some purpose or the n.th power of a square matrix
A^-1     The inverse matrix of the matrix A
A^+      The pseudo inverse matrix of the matrix A (see Sec. 3.6)
A^1/2    The square root of a matrix (if unique), not elementwise
(A)_ij   The (i, j).th entry of the matrix A
A_ij     The (i, j).th entry of the matrix A
[A]_ij   The ij-submatrix, i.e. A with the i.th row and j.th column deleted
a        Vector
a_i      Vector indexed for some purpose
a_i      The i.th element of the vector a
a        Scalar
det(A)   Determinant of A
Tr(A)    Trace of the matrix A
diag(A)  Diagonal matrix of the matrix A, i.e. (diag(A))_ij = δ_ij A_ij
eig(A)   Eigenvalues of the matrix A
vec(A)   The vector-version of the matrix A (see Sec. 10.2.2)
sup      Supremum of a set
||A||    Matrix norm (subscript if any denotes what norm)
A^T      Transposed matrix
A^-T     The inverse of the transposed and vice versa, A^-T = (A^-1)^T = (A^T)^-1
A^*      Complex conjugated matrix
A^H      Transposed and complex conjugated matrix (Hermitian)
1 Basics
1.2 The Special Case 2x2
Eigenvalues:
λ^2 − λ Tr(A) + det(A) = 0
λ1 = (Tr(A) + √(Tr(A)^2 − 4 det(A))) / 2        λ2 = (Tr(A) − √(Tr(A)^2 − 4 det(A))) / 2
λ1 + λ2 = Tr(A)        λ1 λ2 = det(A)
Eigenvectors:
v1 ∝ [ A12, λ1 − A11 ]^T        v2 ∝ [ A12, λ2 − A11 ]^T
Inverse:
A^-1 = (1/det(A)) [ A22  −A12 ; −A21  A11 ]   (27)
2 Derivatives
This section covers differentiation of a number of expressions with respect to a matrix X. Note that it is always assumed that X has no special structure, i.e. that the elements of X are independent (e.g. not symmetric, Toeplitz, positive definite). See section 2.8 for differentiation of structured matrices. The basic assumptions can be written in a formula as
∂X_kl / ∂X_ij = δ_ik δ_lj   (28)
The following rules are general and very useful when deriving the differential of
an expression ([19]):
∂A = 0 (A is a constant) (29)
∂(αX) = α∂X (30)
∂(X + Y) = ∂X + ∂Y (31)
∂(Tr(X)) = Tr(∂X) (32)
∂(XY) = (∂X)Y + X(∂Y) (33)
∂(X ◦ Y) = (∂X) ◦ Y + X ◦ (∂Y) (34)
∂(X ⊗ Y) = (∂X) ⊗ Y + X ⊗ (∂Y) (35)
∂(X−1 ) = −X−1 (∂X)X−1 (36)
∂(det(X)) = det(X)Tr(X−1 ∂X) (37)
∂(ln(det(X))) = Tr(X−1 ∂X) (38)
∂XT = (∂X)T (39)
∂XH = (∂X)H (40)
2.1 Derivatives of a Determinant
∂ det(X) / ∂X = det(X) (X^-1)^T   (43)
Σ_k X_jk ∂ det(X)/∂X_ik = δ_ij det(X)   (44)
∂ det(AXB) / ∂X = det(AXB) (X^-1)^T = det(AXB) (X^T)^-1   (45)
∂ det(X^T A X) / ∂X = 2 det(X^T A X) X^-T   (46)
If X is not square but A is symmetric, then
∂ det(X^T A X) / ∂X = 2 det(X^T A X) A X (X^T A X)^-1   (47)
If X is not square and A is not symmetric, then
∂ det(X^T A X) / ∂X = det(X^T A X) (A X (X^T A X)^-1 + A^T X (X^T A^T X)^-1)   (48)
∂ ln |det(X^T X)| / ∂X = 2 (X^+)^T   (49)
∂ ln |det(X^T X)| / ∂X^+ = −2 X^T   (50)
∂ ln |det(X)| / ∂X = (X^-1)^T = (X^T)^-1   (51)
∂ det(X^k) / ∂X = k det(X^k) X^-T   (52)
2.2 Derivatives of an Inverse
∂Y^-1 / ∂x = −Y^-1 (∂Y/∂x) Y^-1   (53)
2.3 Derivatives of Eigenvalues
(∂/∂X) Σ eig(X) = (∂/∂X) Tr(X) = I   (59)
(∂/∂X) Π eig(X) = (∂/∂X) det(X) = det(X) X^-T   (60)
2.4 Derivatives of Matrices, Vectors and Scalar Forms
∂x^T a / ∂x = ∂a^T x / ∂x = a   (61)
∂a^T X b / ∂X = a b^T   (62)
∂a^T X^T b / ∂X = b a^T   (63)
∂a^T X a / ∂X = ∂a^T X^T a / ∂X = a a^T   (64)
∂X / ∂X_ij = J^ij   (65)
∂(XA)_ij / ∂X_mn = δ_im (A)_nj = (J^mn A)_ij   (66)
∂(X^T A)_ij / ∂X_mn = δ_in (A)_mj = (J^nm A)_ij   (67)
(∂/∂X_ij) Σ_klmn X_kl X_mn = 2 Σ_kl X_kl   (68)
∂b^T X^T X c / ∂X = X (b c^T + c b^T)   (69)
∂(Bx + b)^T C (Dx + d) / ∂x = B^T C (Dx + d) + D^T C^T (Bx + b)   (70)
∂(X^T B X)_kl / ∂X_ij = δ_lj (X^T B)_ki + δ_kj (B X)_il   (71)
∂(X^T B X) / ∂X_ij = X^T B J^ij + J^ji B X        (J^ij)_kl = δ_ik δ_jl   (72)
See Sec 9.7 for useful properties of the single-entry matrix J^ij.
∂x^T B x / ∂x = (B + B^T) x   (73)
∂b^T X^T D X c / ∂X = D^T X b c^T + D X c b^T   (74)
(∂/∂X) (Xb + c)^T D (Xb + c) = (D + D^T)(Xb + c) b^T   (75)
Assume W is symmetric, then
(∂/∂s) (x − As)^T W (x − As) = −2 A^T W (x − As)   (76)
(∂/∂x) (x − s)^T W (x − s) = 2 W (x − s)   (77)
(∂/∂s) (x − s)^T W (x − s) = −2 W (x − s)   (78)
(∂/∂x) (x − As)^T W (x − As) = 2 W (x − As)   (79)
(∂/∂A) (x − As)^T W (x − As) = −2 W (x − As) s^T   (80)
As a case with complex values the following holds
∂|a − x^H b|^2 / ∂x = −2 b (a − x^H b)^*   (81)
This formula is also known from the LMS algorithm [14].
(∂/∂X) a^T (X^n)^T X^n b = Σ_{r=0}^{n−1} [ X^{n−1−r} a b^T (X^n)^T X^r + (X^r)^T X^n a b^T (X^{n−1−r})^T ]   (84)
(∂/∂x) (Ax)^T (Ax) / ((Bx)^T (Bx)) = (∂/∂x) (x^T A^T A x) / (x^T B^T B x)   (86)
= 2 A^T A x / (x^T B^T B x) − 2 (x^T A^T A x) (B^T B x) / (x^T B^T B x)^2   (87)
f = x^T A x + b^T x   (88)
∇_x f = ∂f/∂x = (A + A^T) x + b   (89)
∂^2 f / (∂x ∂x^T) = A + A^T   (90)
2.5 Derivatives of Traces
∂Tr(X) / ∂X = I   (91)
∂Tr(XA) / ∂X = A^T   (92)
∂Tr(AXB) / ∂X = A^T B^T   (93)
∂Tr(A X^T B) / ∂X = B A   (94)
∂Tr(X^T A) / ∂X = A   (95)
∂Tr(A X^T) / ∂X = A   (96)
∂Tr(A ⊗ X) / ∂X = Tr(A) I   (97)
∂Tr(X^2) / ∂X = 2 X^T   (98)
∂Tr(X^2 B) / ∂X = (XB + BX)^T   (99)
∂Tr(X^T B X) / ∂X = B X + B^T X   (100)
∂Tr(X B X^T) / ∂X = X B^T + X B   (101)
∂Tr(A X B X) / ∂X = A^T X^T B^T + B^T X^T A^T   (102)
∂Tr(X^T X) / ∂X = 2 X   (103)
∂Tr(B X X^T) / ∂X = (B + B^T) X   (104)
∂Tr(B^T X^T C X B) / ∂X = C^T X B B^T + C X B B^T   (105)
∂Tr(X^T B X C) / ∂X = B X C + B^T X C^T   (106)
∂Tr(A X B X^T C) / ∂X = A^T C^T X B^T + C A X B   (107)
(∂/∂X) Tr[(A X B + C)(A X B + C)^T] = 2 A^T (A X B + C) B^T   (108)
(∂/∂X) Tr(X ⊗ X) = (∂/∂X) Tr(X) Tr(X) = 2 Tr(X) I   (109)
See [7].
∂Tr(X^k) / ∂X = k (X^{k−1})^T   (110)
∂Tr(A X^k) / ∂X = Σ_{r=0}^{k−1} (X^r A X^{k−r−1})^T   (111)
(∂/∂X) Tr(B^T X^T C X X^T C X B) = C X X^T C X B B^T + C^T X B B^T X^T C^T X + C X B B^T X^T C X + C^T X X^T C^T X B B^T   (112)
2.5.4 Other
∂Tr(A X^-1 B) / ∂X = −(X^-1 B A X^-1)^T = −X^-T A^T B^T X^-T   (113)
Assume B and C to be symmetric, then
(∂/∂X) Tr[(X^T C X)^-1 A] = −(C X (X^T C X)^-1)(A + A^T)(X^T C X)^-1   (114)
(∂/∂X) Tr[(X^T C X)^-1 (X^T B X)] = −2 C X (X^T C X)^-1 X^T B X (X^T C X)^-1 + 2 B X (X^T C X)^-1   (115)
(∂/∂X) Tr[(A + X^T C X)^-1 (X^T B X)] = −2 C X (A + X^T C X)^-1 X^T B X (A + X^T C X)^-1 + 2 B X (A + X^T C X)^-1   (116)
See [7].
∂Tr(sin(X)) / ∂X = cos(X)^T   (117)
2.8 Derivatives of Structured Matrices
If A has no special structure we have simply S^ij = J^ij, that is, the structure matrix is simply the single-entry matrix. Many structures have a representation in single-entry matrices; see Sec. 9.7.6 for more examples of structure matrices.
∂g(U) / ∂X_ij = Tr[ (∂g(U)/∂U)^T (∂U/∂X_ij) ]   (126)
2.8.2 Symmetric
If A is symmetric, then S^ij = J^ij + J^ji − J^ij J^ij and therefore
df/dA = [∂f/∂A] + [∂f/∂A]^T − diag[∂f/∂A]   (127)
That is, e.g., ([5]):
∂Tr(AX) / ∂X = A + A^T − (A ◦ I),   see (131)   (128)
∂ det(X) / ∂X = det(X) (2 X^-1 − (X^-1 ◦ I))   (129)
∂ ln det(X) / ∂X = 2 X^-1 − (X^-1 ◦ I)   (130)
2.8.3 Diagonal
If X is diagonal, then ([19]):
∂Tr(AX) / ∂X = A ◦ I   (131)
2.8.4 Toeplitz
Like symmetric and diagonal matrices, Toeplitz matrices have a special structure which should be taken into account when taking the derivative with respect to a matrix with Toeplitz structure.
∂Tr(AT) / ∂T = ∂Tr(TA) / ∂T
= ⎡ Tr(A)                      Tr([A^T]_n1)            Tr([[A^T]_1n]_{n−1,2})   · · ·   A_n1 ⎤
  ⎢ Tr([A^T]_1n)               Tr(A)                   ⋱                                ⋮    ⎥
  ⎢ Tr([[A^T]_1n]_{2,n−1})     ⋱                       ⋱                 Tr([[A^T]_1n]_{n−1,2}) ⎥
  ⎢ ⋮                          ⋱                       ⋱                 Tr([A^T]_n1) ⎥
  ⎣ A_1n       · · ·      Tr([[A^T]_1n]_{2,n−1})    Tr([A^T]_1n)    Tr(A) ⎦
≡ α(A)   (132)
As can be seen, the derivative α(A) also has a Toeplitz structure. Each value in the diagonal is the sum of all the diagonal values in A; the values in the diagonals next to the main diagonal equal the sum of the diagonal next to the main diagonal in A^T. This result is only valid for the unconstrained Toeplitz matrix. If the Toeplitz matrix is also symmetric, the same derivative yields
∂Tr(AT) / ∂T = ∂Tr(TA) / ∂T = α(A) + α(A)^T − α(A) ◦ I   (133)
3 Inverses
3.1 Basic
3.1.1 Definition
The inverse A−1 of a matrix A ∈ Cn×n is defined such that
AA−1 = A−1 A = I, (134)
where I is the n × n identity matrix. If A−1 exists, A is said to be nonsingular.
Otherwise, A is said to be singular (see e.g. [12]).
3.1.3 Determinant
The determinant of a matrix A ∈ Cn×n is defined as (see [12])
det(A) = Σ_{j=1}^{n} (−1)^{j+1} A_1j det([A]_1j)   (138)
       = Σ_{j=1}^{n} A_1j cof(A, 1, j).   (139)
3.1.4 Construction
The inverse matrix can be constructed, using the adjoint matrix, by
A^-1 = (1 / det(A)) · adj(A)   (140)
For the case of 2 × 2 matrices, see section 1.2.
3.1.5 Condition number
The condition number of a matrix c(A) is the ratio between the largest and the smallest singular value of the matrix,
c(A) = d+ / d−   (141)
The condition number can be used to measure how singular a matrix is. If the condition number is large, it indicates that the matrix is nearly singular. The condition number can also be estimated from the matrix norms. Here
c(A) = ||A|| · ||A^-1||,   (142)
where || · || is a norm such as e.g. the 1-norm, the 2-norm, the ∞-norm or the Frobenius norm (see Sec 10.4 for more on matrix norms).
The 2-norm of A equals √(max(eig(A^H A))) [12, p. 57]. For a symmetric matrix, this reduces to ||A||_2 = max(|eig(A)|) [12, p. 394]. If the matrix is symmetric and positive definite, ||A||_2 = max(eig(A)). The condition number based on the 2-norm thus reduces to
||A||_2 ||A^-1||_2 = max(eig(A)) max(eig(A^-1)) = max(eig(A)) / min(eig(A)).   (143)
3.2 Exact Relations
(A + cd^T)^+ = A^+ + G   (156)
β = 1 + d^T A^+ c   (157)
v = A^+ c   (158)
n = (A^+)^T d   (159)
w = (I − A A^+) c   (160)
m = (I − A^+ A)^T d   (161)
the solution is given as six different cases, depending on the entities ||w||, ||m||, and β. Please note that for any (column) vector v it holds that v^+ = v^T (v^T v)^-1 = v^T / ||v||^2. The solution is:
3.4 Approximations
The following is a Taylor expansion if A^n → 0 when n → ∞,
(I + A)^-1 = I − A + A^2 − A^3 + ...   (174)
Note the following variant can be useful
(cI + A)^-1 = (1/c)(I − (1/c)A + (1/c^2)A^2 − (1/c^3)A^3 + ...)   (175)
The following approximation is from [22] and holds when A is large and symmetric
A − A(I + A)^-1 A ≅ I − A^-1   (176)
If σ^2 is small compared to Q and M then
(Q + σ^2 M)^-1 ≅ Q^-1 − σ^2 Q^-1 M Q^-1   (177)
3.6 Pseudo Inverse
3.6.1 Definition
The pseudo inverse (or Moore-Penrose inverse) A^+ of a matrix A is the matrix that fulfils
I    A A^+ A = A
II   A^+ A A^+ = A^+
III  A A^+ symmetric
IV   A^+ A symmetric
The matrix A^+ is unique and always exists. Note that in the case of complex matrices, the symmetric condition is substituted by a condition of being Hermitian.
3.6.2 Properties
Assume A+ to be the pseudo-inverse of A, then (See [3] for some of them)
(A+ )+ = A (179)
(AT )+ = (A+ )T (180)
(AH )+ = (A+ )H (181)
(A∗ )+ = (A+ )∗ (182)
(A+ A)AH = AH (183)
(A+ A)AT ≠ AT (184)
(cA)+ = (1/c)A+ (185)
A+ = (AT A)+ AT (186)
A+ = AT (AAT )+ (187)
(AT A)+ = A+ (AT )+ (188)
(AAT )+ = (AT )+ A+ (189)
A+ = (AH A)+ AH (190)
A+ = AH (AAH )+ (191)
(AH A)+ = A+ (AH )+ (192)
(AAH )+ = (AH )+ A+ (193)
(AB)+ = (A+ AB)+ (ABB+ )+ (194)
f (AH A) − f (0)I = A+ [f (AAH ) − f (0)I]A (195)
f (AAH ) − f (0)I = A[f (AH A) − f (0)I]A+ (196)
where A ∈ Cn×m .
3.6.3 Construction
Assume that A has full rank, then
A n × n    Square    rank(A) = n    ⇒    A^+ = A^-1
A n × m    Broad     rank(A) = n    ⇒    A^+ = A^T (A A^T)^-1
A n × m    Tall      rank(A) = m    ⇒    A^+ = (A^T A)^-1 A^T
Assume A does not have full rank, i.e. A is n×m and rank(A) = r < min(n, m).
The pseudo inverse A+ can be constructed from the singular value decomposition A = UDV^T, by
A^+ = V_r D_r^-1 U_r^T   (203)
where Ur , Dr , and Vr are the matrices with the degenerated rows and columns
deleted. A different way is this: there always exist two matrices C (n × r) and D (r × m) of rank r, such that A = CD. Using these matrices it holds that
A^+ = D^T (D D^T)^-1 (C^T C)^-1 C^T   (204)
See [3].
4 Complex Matrices
4.1 Complex Derivatives
In order to differentiate an expression f(z) with respect to a complex z, the Cauchy-Riemann equations have to be satisfied ([7]):
∂f(z)/∂ℑz = i ∂f(z)/∂ℜz.   (207)
A complex function that satisfies the Cauchy-Riemann equations for points in a region R is said to be analytic in this region R. In general, expressions involving complex conjugate or conjugate transpose do not satisfy the Cauchy-Riemann equations. In order to avoid this problem, a more generalized definition of complex derivative is used ([24], [6]):
• Generalized Complex Derivative:
df(z)/dz = (1/2)(∂f(z)/∂ℜz − i ∂f(z)/∂ℑz)   (208)
• Conjugate Complex Derivative:
df(z)/dz^* = (1/2)(∂f(z)/∂ℜz + i ∂f(z)/∂ℑz)   (209)
• Complex Gradient Vector:
∇f(z) = 2 df(z)/dz^* = ∂f(z)/∂ℜz + i ∂f(z)/∂ℑz.   (211)
∇f(Z) = 2 df(Z)/dZ^* = ∂f(Z)/∂ℜZ + i ∂f(Z)/∂ℑZ.   (212)
These expressions can be used for gradient descent algorithms.
∂g(u)/∂x = (∂g/∂u)(∂u/∂x) + (∂g/∂u^*)(∂u^*/∂x)   (213)
         = (∂g/∂u)(∂u/∂x) + (∂g^*/∂u)^* (∂u^*/∂x)
Notice, if the function is analytic, the second term reduces to zero, and the function is reduced to the normal well-known chain rule. For the matrix derivative of a scalar function g(U), the chain rule can be written the following way:
∂Tr(X^*)/∂ℜX = ∂Tr(X^H)/∂ℜX = I   (215)
i ∂Tr(X^*)/∂ℑX = i ∂Tr(X^H)/∂ℑX = I   (216)
Since the two results have the same sign, the conjugate complex derivative (209)
should be used.
∂Tr(X)/∂ℜX = ∂Tr(X^T)/∂ℜX = I   (217)
i ∂Tr(X)/∂ℑX = i ∂Tr(X^T)/∂ℑX = −I   (218)
Here, the two results have different signs, and the generalized complex derivative
(208) should be used. Hereby, it can be seen that (92) holds even if X is a
complex number.
∂Tr(A X^H)/∂ℜX = A   (219)
i ∂Tr(A X^H)/∂ℑX = A   (220)
∂Tr(A X^*)/∂ℜX = A^T   (221)
i ∂Tr(A X^*)/∂ℑX = A^T   (222)
∂Tr(X X^H)/∂ℜX = ∂Tr(X^H X)/∂ℜX = 2ℜX   (223)
i ∂Tr(X X^H)/∂ℑX = i ∂Tr(X^H X)/∂ℑX = i 2ℑX   (224)
By inserting (223) and (224) in (208) and (209), it can be seen that
∂Tr(X X^H)/∂X = X^*   (225)
∂Tr(X X^H)/∂X^* = X   (226)
Since the function Tr(XXH ) is a real function of the complex matrix X, the
complex gradient matrix (212) is given by
∇Tr(X X^H) = 2 ∂Tr(X X^H)/∂X^* = 2X   (227)
4.2 Higher order and non-linear derivatives
(∂/∂x) (Ax)^H (Ax) / ((Bx)^H (Bx)) = (∂/∂x) (x^H A^H A x) / (x^H B^H B x)   (230)
= 2 A^H A x / (x^H B^H B x) − 2 (x^H A^H A x)(B^H B x) / (x^H B^H B x)^2   (231)
4.3 Inverse of complex sum
E = A + tB   (232)
F = B − tA,   (233)
5 Solutions and Decompositions
5.1 Solutions to linear equations
and
[ a ]   =   [ Rxx  Rx1 ]^-1 [ Rx,y ]
[ b ]       [ Rx1  R11 ]    [ Ry1  ]   (239)
Ax = b (240)
Ax = b ⇒ x = A−1 b (241)
The equation has many solutions x, but x_min is the solution which minimizes ||Ax − b||^2 and also the solution with the smallest norm ||x||^2. The same holds for a matrix version: assume A is n × m, X is m × n and B is n × n, then
AX = B   ⇒   X_min = A^+ B   (247)
The equation has many solutions X, but X_min is the solution which minimizes ||AX − B||^2 and also the solution with the smallest norm ||X||^2. See [3].
Similar but different: assume A is square n × n and the matrices B0, B1 are n × N, where N > n. Then, if B0 has maximal rank,
A_min = B1 B0^T (B0 B0^T)^-1   (248)
where A_min denotes the matrix which is optimal in a least square sense. An interpretation is that A is the linear approximation which maps the column vectors of B0 into the column vectors of B1.
xT Ax = 0, ∀x ⇒ A=0 (250)
AX + XB = C   (251)
vec(X) = (I ⊗ A + B^T ⊗ I)^-1 vec(C)   (252)
See Sec 10.2.1 and 10.2.2 for details on the Kronecker product and the vec operator.
Σ_n A_n X B_n = C   (253)
vec(X) = (Σ_n B_n^T ⊗ A_n)^-1 vec(C)   (254)
See Sec 10.2.1 and 10.2.2 for details on the Kronecker product and the vec operator.
5.2 Eigenvalues and Eigenvectors
5.2.1 Definition
The eigenvectors v_i and eigenvalues λ_i are the ones satisfying
A v_i = λ_i v_i   (255)
5.2.3 Symmetric
Assume A is symmetric, then
Note that the coefficients gj for j = 1, ..., n are the n invariants under rotation
of A. Thus, gj is the sum of the determinants of all the sub-matrices of A taken
j rows and columns at a time. That is, g1 is the trace of A, and g2 is the sum
of the determinants of the n(n − 1)/2 sub-matrices that can be formed from A
by deleting all but two rows and columns, and so on – see [17].
5.3 Singular Value Decomposition
Any n × m matrix A can be written as
A = U D V^T,   (269)
where
U = eigenvectors of A A^T          (n × n)
D = √(diag(eig(A A^T)))            (n × m)   (270)
V = eigenvectors of A^T A          (m × m)
A = V D U^T,   (272)
where D is diagonal with the square root of the eigenvalues of A A^T, V contains the eigenvectors of A A^T and U contains the eigenvectors of A^T A.
A = V D U^T,   (274)
with D, V and U as above, but of the corresponding rectangular sizes.
A = LU (277)
5.5.1 Cholesky-decomposition
Assume A is a symmetric positive definite square matrix, then
A = UT U = LLT , (278)
A = LDMT (279)
where L, M are unique unit lower triangular matrices and D is a unique diagonal
matrix.
A = LDLT = LT DL (280)
larger matrix (i.e., it consists of matrix elements in rows and columns from 1 to k), then the principal minor is called a leading principal minor. For an n × n square matrix, there are n leading principal minors. [31]
6 Statistics and Probability
6.1.1 Mean
The vector of means, m, is defined by
(m)_i = ⟨x_i⟩   (281)
6.1.2 Covariance
The matrix of covariance M is defined by
M_ij = ⟨(x_i − ⟨x_i⟩)(x_j − ⟨x_j⟩)⟩   (282)
or alternatively as
M = ⟨(x − m)(x − m)^T⟩   (283)
The matrix of third centralized moments is arranged as
M3 = [ m(3)_::1  m(3)_::2  ...  m(3)_::n ]   (285)
where ':' denotes all elements within the given index. M3 can alternatively be expressed as
M3 = ⟨(x − m)(x − m)^T ⊗ (x − m)^T⟩   (286)
Similarly, the matrix of fourth centralized moments is arranged as
M4 = [ m(4)_::11 m(4)_::21 ... m(4)_::n1 | m(4)_::12 m(4)_::22 ... m(4)_::n2 | ... | m(4)_::1n m(4)_::2n ... m(4)_::nn ]   (288)
or alternatively as
M4 = ⟨(x − m)(x − m)^T ⊗ (x − m)^T ⊗ (x − m)^T⟩   (289)
6.2 Expectation of Linear Combinations
E[Ax + b] = Am + b (293)
E[Ax] = Am (294)
E[x + b] = m + b (295)
See [7].
6.3 Weighted Scalar Variable
Assume x is a random vector with mean m and central moment matrices M2, M3, M4, let w be a vector of constants, and let y = w^T x. Then:
⟨y⟩ = w^T m   (309)
⟨(y − ⟨y⟩)^2⟩ = w^T M2 w   (310)
⟨(y − ⟨y⟩)^3⟩ = w^T M3 (w ⊗ w)   (311)
⟨(y − ⟨y⟩)^4⟩ = w^T M4 (w ⊗ w ⊗ w)   (312)
7 Multivariate Distributions
7.1 Cauchy
The density function for a Cauchy distributed vector t ∈ R^{P×1} is given by
p(t|µ, Σ) = π^{−P/2} (Γ((1 + P)/2) / Γ(1/2)) · det(Σ)^{−1/2} / [1 + (t − µ)^T Σ^-1 (t − µ)]^{(1+P)/2}   (313)
where µ is the location, Σ is positive definite, and Γ denotes the gamma function. The Cauchy distribution is a special case of the Student-t distribution.
7.2 Dirichlet
The Dirichlet distribution is a kind of “inverse” distribution compared to the multinomial distribution on the bounded continuous variate x = [x1, . . . , xP ] [16, p. 44]:
p(x|α) = (Γ(Σ_p^P α_p) / Π_p^P Γ(α_p)) Π_p^P x_p^{α_p − 1}
7.3 Normal
The normal distribution is also known as a Gaussian distribution. See sec. 8.
7.6 Multinomial
If the vector n contains counts, i.e. (n)_i ∈ {0, 1, 2, ...}, then the discrete multinomial distribution for n is given by
P(n|a, n) = (n! / (n1! . . . nd!)) Π_i^d a_i^{n_i},   Σ_i^d n_i = n   (314)
where the a_i are probabilities, i.e. 0 ≤ a_i ≤ 1 and Σ_i a_i = 1.
7.7 Student’s t
The density of a Student-t distributed vector t ∈ R^{P×1} is given by
p(t|µ, Σ, ν) = (πν)^{−P/2} (Γ((ν + P)/2) / Γ(ν/2)) · det(Σ)^{−1/2} / [1 + ν^{−1} (t − µ)^T Σ^-1 (t − µ)]^{(ν+P)/2}   (315)
7.7.1 Mean
E(t) = µ, ν>1 (316)
7.7.2 Variance
cov(t) = (ν / (ν − 2)) Σ,   ν > 2   (317)
7.7.3 Mode
The mode is the position of the most probable value,
mode(t) = µ   (318)
7.7.4 Full matrix version
The full-matrix version of the density is proportional to
det(Ω)^{−ν/2} det(Σ)^{−N/2} × det[Ω^-1 + (T − M) Σ^-1 (T − M)^T]^{−(ν+P)/2}   (319)
7.8 Wishart
The central Wishart distribution for M ∈ R^{P×P}, M positive definite, where m can be regarded as a degree of freedom parameter [16, equation 3.8.1] [8, section 2.5], [11]:
p(M|Σ, m) = (1 / (2^{mP/2} π^{P(P−1)/4} Π_p^P Γ[(m + 1 − p)/2])) × det(Σ)^{−m/2} det(M)^{(m−P−1)/2} × exp[−(1/2) Tr(Σ^-1 M)]   (320)
7.8.1 Mean
E(M) = mΣ (321)
7.9 Wishart, Inverse
7.9.1 Mean
E(M) = (1 / (m − P − 1)) Σ   (323)
8 Gaussians
8.1 Basics
8.1.1 Density and normalization
The density of x ∼ N (m, Σ) is
p(x) = (1 / √(det(2πΣ))) exp[−(1/2)(x − m)^T Σ^-1 (x − m)]   (324)
∂p(x)/∂x = −p(x) Σ^-1 (x − m)   (325)
∂^2 p / (∂x ∂x^T) = p(x) (Σ^-1 (x − m)(x − m)^T Σ^-1 − Σ^-1)   (326)
Assume x = [xa; xb] is Gaussian distributed with mean and covariance partitioned as m = [µa; µb], Σ = [Σa Σc; Σc^T Σb]. Then
p(xa|xb) = N_xa(µ̂a, Σ̂a),   µ̂a = µa + Σc Σb^-1 (xb − µb),   Σ̂a = Σa − Σc Σb^-1 Σc^T   (331)
p(xb|xa) = N_xb(µ̂b, Σ̂b),   µ̂b = µb + Σc^T Σa^-1 (xa − µa),   Σ̂b = Σb − Σc^T Σa^-1 Σc   (332)
Note that the covariance matrices are the Schur complement of the block matrix; see 9.1.5 for details.
Σc^-1 = Σ1^-1 + Σ2^-1   (338)
mc = (Σ1^-1 + Σ2^-1)^-1 (Σ1^-1 m1 + Σ2^-1 m2)   (339)
C = (1/2)(m1^T Σ1^-1 + m2^T Σ2^-1)(Σ1^-1 + Σ2^-1)^-1 (Σ1^-1 m1 + Σ2^-1 m2)   (340)
    − (1/2)(m1^T Σ1^-1 m1 + m2^T Σ2^-1 m2)   (341)
In a trace formulation (assuming Σ1, Σ2 are symmetric)
− (1/2) Tr((X − M1)^T Σ1^-1 (X − M1))   (342)
− (1/2) Tr((X − M2)^T Σ2^-1 (X − M2))   (343)
= − (1/2) Tr[(X − Mc)^T Σc^-1 (X − Mc)] + C   (344)
Σc^-1 = Σ1^-1 + Σ2^-1   (345)
Mc = (Σ1^-1 + Σ2^-1)^-1 (Σ1^-1 M1 + Σ2^-1 M2)   (346)
C = (1/2) Tr[(Σ1^-1 M1 + Σ2^-1 M2)^T (Σ1^-1 + Σ2^-1)^-1 (Σ1^-1 M1 + Σ2^-1 M2)]
    − (1/2) Tr(M1^T Σ1^-1 M1 + M2^T Σ2^-1 M2)   (347)
8.2 Moments
8.2.1 Mean and covariance of linear forms
First and second moments. Assume x ∼ N (m, Σ)
E(x) = m (349)
8.2.5 Moments
E[x] = Σ_k ρ_k m_k   (361)
Cov(x) = Σ_k Σ_k′ ρ_k ρ_k′ (Σ_k + m_k m_k^T − m_k m_k′^T)   (362)
8.3 Miscellaneous
8.3.1 Whitening
Assume x ∼ N (m, Σ) then
z = Σ−1/2 (x − m) ∼ N (0, I) (363)
Conversely having z ∼ N (0, I) one can generate data x ∼ N (m, Σ) by setting
x = Σ1/2 z + m ∼ N (m, Σ) (364)
Note that Σ^{1/2} means the matrix which fulfils Σ^{1/2} Σ^{1/2} = Σ, and that it exists and is unique since Σ is positive definite.
8.3.3 Entropy
Entropy of a D-dimensional Gaussian:
H(x) = − ∫ N(m, Σ) ln N(m, Σ) dx = ln √(det(2πΣ)) + D/2   (366)
8.4 Mixture of Gaussians
8.4.2 Derivatives
Defining p(s) = Σ_k ρ_k N_s(µ_k, Σ_k) one gets
∂ln p(s)/∂ρ_j = (ρ_j N_s(µ_j, Σ_j) / Σ_k ρ_k N_s(µ_k, Σ_k)) (∂/∂ρ_j) ln[ρ_j N_s(µ_j, Σ_j)]   (368)
             = (ρ_j N_s(µ_j, Σ_j) / Σ_k ρ_k N_s(µ_k, Σ_k)) (1/ρ_j)   (369)
∂ln p(s)/∂µ_j = (ρ_j N_s(µ_j, Σ_j) / Σ_k ρ_k N_s(µ_k, Σ_k)) (∂/∂µ_j) ln[ρ_j N_s(µ_j, Σ_j)]   (370)
             = (ρ_j N_s(µ_j, Σ_j) / Σ_k ρ_k N_s(µ_k, Σ_k)) Σ_j^-1 (s − µ_j)   (371)
∂ln p(s)/∂Σ_j = (ρ_j N_s(µ_j, Σ_j) / Σ_k ρ_k N_s(µ_k, Σ_k)) (∂/∂Σ_j) ln[ρ_j N_s(µ_j, Σ_j)]   (372)
             = (ρ_j N_s(µ_j, Σ_j) / Σ_k ρ_k N_s(µ_k, Σ_k)) (1/2) (−Σ_j^-1 + Σ_j^-1 (s − µ_j)(s − µ_j)^T Σ_j^-1)   (373)
9 Special Matrices
9.1 Block matrices
Let Aij denote the ijth block of A.
9.1.1 Multiplication
Assuming the dimensions of the blocks match, we have
[ A11 A12 ] [ B11 B12 ]   =   [ A11 B11 + A12 B21    A11 B12 + A12 B22 ]
[ A21 A22 ] [ B21 B22 ]       [ A21 B11 + A22 B21    A21 B12 + A22 B22 ]
The determinant can be expressed by use of C1 = A11 − A12 A22^-1 A21 and C2 = A22 − A21 A11^-1 A12 as
det [ A11 A12 ]   =   det(A22) · det(C1)   =   det(A11) · det(C2)
    [ A21 A22 ]
The inverse can be expressed as
[ A11 A12 ]^-1   =   [ C1^-1                  −A11^-1 A12 C2^-1 ]
[ A21 A22 ]          [ −C2^-1 A21 A11^-1       C2^-1            ]
                 =   [ A11^-1 + A11^-1 A12 C2^-1 A21 A11^-1     −C1^-1 A12 A22^-1 ]
                     [ −A22^-1 A21 C1^-1       A22^-1 + A22^-1 A21 C1^-1 A12 A22^-1 ]
9.1.5 Schur complement
The Schur complement of the block A22 of the matrix [ A11 A12 ; A21 A22 ] is the matrix
A11 − A12 A22^-1 A21
that is, what is denoted C1 above. Using the Schur complement, one can rewrite the inverse of a block matrix
[ A11 A12 ]^-1   =   [ I              0 ] [ (A11 − A12 A22^-1 A21)^-1   0      ] [ I   −A12 A22^-1 ]
[ A21 A22 ]          [ −A22^-1 A21    I ] [ 0                           A22^-1 ] [ 0    I          ]
The Schur complement is useful when solving linear systems of the form
[ A11 A12 ] [ x1 ]   =   [ b1 ]
[ A21 A22 ] [ x2 ]       [ b2 ]
When the appropriate inverses exist, this can be solved for x1, which can then be inserted in the equation for x2 to solve for x2.
9.2 Discrete Fourier Transform Matrix, The
The DFT of the vector x = [x(0), x(1), · · · , x(N − 1)]^T can be written in matrix form as
X = W_N x,   (383)
9.3.1 Skew-Hermitian
A matrix A is called skew-Hermitian if
A = −A^H
For real valued matrices, skew-Hermitian and skew-symmetric matrices are equivalent.
A Hermitian   ⇔   iA is skew-Hermitian   (392)
A skew-Hermitian ⇔ xH Ay = −xH AH y, ∀x, y (393)
A skew-Hermitian ⇒ eig(A) = iλ, λ ∈ R (394)
9.4.1 Nilpotent
A matrix A is nilpotent if
A2 = 0
A nilpotent matrix has the following property:
9.4.2 Unipotent
A matrix A is unipotent if
AA = I
A unipotent matrix has the following property:
9.5 Orthogonal matrices
A square matrix Q is orthogonal if and only if
Q^T Q = Q Q^T = I
and then Q has the following properties:
Q−1 = QT
Q−T = Q
QQT = I
QT Q = I
det(Q) = ±1
9.5.1 Ortho-Sym
A matrix Q+ which simultaneously is orthogonal and symmetric is called an
ortho-sym matrix [20]. Hereby
Q+^T Q+ = I   (407)
Q+ = Q+^T   (408)
Q+^k = ((1 + (−1)^k)/2) I + ((1 + (−1)^{k+1})/2) Q+   (409)
     = ((1 + cos(kπ))/2) I + ((1 − cos(kπ))/2) Q+   (410)
9.5.2 Ortho-Skew
A matrix which simultaneously is orthogonal and antisymmetric is called an
ortho-skew matrix [20]. Hereby
Q−^H Q− = I   (411)
Q− = −Q−^H   (412)
Q−^k = ((i^k + (−i)^k)/2) I − i ((i^k − (−i)^k)/2) Q−   (413)
     = cos(kπ/2) I + sin(kπ/2) Q−   (414)
9.5.3 Decomposition
A square matrix A can always be written as a sum of a symmetric A+ and an
antisymmetric matrix A−
A = A+ + A−   (415)
A matrix A is positive definite if
x^T A x > 0,   ∀x ≠ 0   (416)
and positive semi-definite if
x^T A x ≥ 0,   ∀x   (417)
9.6.2 Eigenvalues
The following holds with respect to the eigenvalues:
A pos. def.        ⇔   eig((A + A^H)/2) > 0
A pos. semi-def.   ⇔   eig((A + A^H)/2) ≥ 0   (418)
9.6.3 Trace
The following holds with respect to the trace:
9.6.4 Inverse
If A is positive definite, then A is invertible and A−1 is also positive definite.
9.6.5 Diagonal
If A is positive definite, then Aii > 0, ∀i
9.6.6 Decomposition I
The matrix A is positive semi-definite of rank r ⇔ there exists a matrix B of
rank r such that A = BBT
9.6.7 Decomposition II
Assume A is an n × n positive semi-definite matrix, then there exists an n × r matrix B of rank r such that B^T A B = I.
9.7 Singleentry Matrix, The
9.7.1 Definition
The single-entry matrix J^ij ∈ R^{n×n} is defined as the matrix which is zero everywhere except in the entry (i, j), in which it is 1. In a 4 × 4 example one might have
J^23 = [ 0 0 0 0 ]
       [ 0 0 1 0 ]
       [ 0 0 0 0 ]
       [ 0 0 0 0 ]   (420)
The single-entry matrix is very useful when working with derivatives of expres-
sions involving matrices.
Assume A to be n × m and J^ij to be m × p, then
A J^ij = [ 0  0  . . .  A_i  . . .  0 ]   (421)
i.e. an n × p matrix of zeros with the i.th column of A in place of the j.th column. Assume A to be n × m and J^ij to be p × n, then
          [ 0   ]
          [ ⋮   ]
J^ij A =  [ A_j ]   (422)
          [ ⋮   ]
          [ 0   ]
i.e. a p × m matrix of zeros with the j.th row of A in place of the i.th row.
If A is symmetric, then
S^ij = J^ij + J^ji − J^ij J^ij   (435)
9.8 Symmetric, Skew-symmetric/Antisymmetric
9.8.1 Symmetric
The matrix A is said to be symmetric if
A = A^T   (436)
Symmetric matrices have many important properties, e.g. that their eigenvalues
are real and eigenvectors orthogonal.
9.8.2 Skew-symmetric/Antisymmetric
The antisymmetric matrix is also known as the skew symmetric matrix. It has
the following property from which it is defined
A = −AT (437)
Hereby, it can be seen that the antisymmetric matrices always have a zero
diagonal. The n × n antisymmetric matrices also have the following properties.
det(AT ) = det(−A) = (−1)n det(A) (438)
− det(A) = det(−A) = 0, if n is odd (439)
The eigenvalues of an antisymmetric matrix are placed on the imaginary axis
and the eigenvectors are unitary.
9.8.3 Decomposition
A square matrix A can always be written as a sum of a symmetric A+ and an
antisymmetric matrix A−
A = A+ + A− (440)
Such a decomposition could e.g. be
A = (A + A^T)/2 + (A − A^T)/2 = A+ + A−   (441)
9.10 Transition matrices
The transition matrix usually describes the probability of moving from state i to j in one step and is closely related to Markov processes. Transition matrices have the following properties:
9.11.3 Permutations
Let P be some permutation matrix, e.g.
P = [ 0 1 0 ]   =   [ e2 e1 e3 ]   =   [ e2^T ]
    [ 1 0 0 ]                          [ e1^T ]
    [ 0 0 1 ]                          [ e3^T ]   (454)
For permutation matrices it holds that
P P^T = I   (455)
and that
AP = [ A e2  A e1  A e3 ],        PA = [ e2^T A ; e1^T A ; e3^T A ]   (456)
That is, the first is a matrix which has the columns of A but in permuted sequence, and the second is a matrix which has the rows of A but in permuted sequence.
A related but slightly different matrix is the 'recurrent shifted' operator defined on a 4 × 4 example by
L̂ = [ 0 0 0 1 ]
    [ 1 0 0 0 ]
    [ 0 1 0 0 ]
    [ 0 0 1 0 ]   (459)
i.e. a matrix defined by (L̂)_ij = δ_{i,j+1} + δ_{i,1} δ_{j,dim(L)}. On a signal x it has the effect
(L̂^n x)_t = x_t′,   t′ = [(t − n) mod N] + 1   (460)
That is, L̂ is like the shift operator L except that it 'wraps' the signal as if it was periodic and shifted (substituting the zeros with the rear end of the signal). Note that L̂ is invertible and orthogonal, i.e.
L̂^-1 = L̂^T   (461)
9.12 Vandermonde Matrices
A Vandermonde matrix has the form
V = [ 1  v1  v1^2  · · ·  v1^{n−1} ]
    [ 1  v2  v2^2  · · ·  v2^{n−1} ]
    [ ⋮   ⋮    ⋮             ⋮     ]
    [ 1  vn  vn^2  · · ·  vn^{n−1} ]   (462)
10 Functions and Operators
assuming the limit exists and is finite. If the coefficients c_n fulfil Σ_n c_n x^n < ∞, then one can prove that the above series exists and is finite; see [1]. Thus for any analytic function f(x) there exists a corresponding matrix function f(A) constructed by the Taylor expansion. Using this one can prove the following results:
1) A matrix A is a zero of its own characteristic polynomial [1]:
p(λ) = det(Iλ − A) = Σ_n c_n λ^n   ⇒   p(A) = 0   (467)
eA eB = eA+B if AB = BA (474)
(eA )−1 = e−A (475)
(d/dt) e^{tA} = A e^{tA} = e^{tA} A,   t ∈ R   (476)
(d/dt) Tr(e^{tA}) = Tr(A e^{tA})   (477)
det(e^A) = e^{Tr(A)}   (478)
sin(A) ≡ Σ_{n=0}^∞ (−1)^n A^{2n+1} / (2n + 1)! = A − A^3/3! + A^5/5! − ...   (479)
cos(A) ≡ Σ_{n=0}^∞ (−1)^n A^{2n} / (2n)! = I − A^2/2! + A^4/4! − ...   (480)
10.2 Kronecker and Vec Operator
A ⊗ (B + C) = A ⊗ B + A ⊗ C   (482)
A ⊗ B ≠ B ⊗ A   in general   (483)
A ⊗ (B ⊗ C) = (A ⊗ B) ⊗ C   (484)
(α_A A ⊗ α_B B) = α_A α_B (A ⊗ B)   (485)
(A ⊗ B)^T = A^T ⊗ B^T   (486)
(A ⊗ B)(C ⊗ D) = AC ⊗ BD   (487)
(A ⊗ B)^-1 = A^-1 ⊗ B^-1   (488)
(A ⊗ B)^+ = A^+ ⊗ B^+   (489)
rank(A ⊗ B) = rank(A) rank(B)   (490)
Tr(A ⊗ B) = Tr(A) Tr(B) = Tr(Λ_A ⊗ Λ_B)   (491)
det(A ⊗ B) = det(A)^{rank(B)} det(B)^{rank(A)}   (492)
{eig(A ⊗ B)} = {eig(B ⊗ A)}   if A, B are square   (493)
{eig(A ⊗ B)} = {eig(A) eig(B)^T}   if A, B are symmetric and square   (494)
eig(A ⊗ B) = eig(A) ⊗ eig(B)   (495)
Where {λi } denotes the set of values λi , that is, the values in no particular
order or structure, and ΛA denotes the diagonal matrix with the eigenvalues of
A.
10.4 Matrix Norms
10.4.1 Definitions
A matrix norm is a mapping which fulfils
||A|| ≥ 0   (504)
||A|| = 0 ⇔ A = 0   (505)
||cA|| = |c| ||A||,   c ∈ R   (506)
||A + B|| ≤ ||A|| + ||B||   (507)
10.4.2 Induced Norm or Operator Norm
An induced norm is defined by
||A|| = sup { ||Ax|| : ||x|| = 1 }   (508)
where || · || on the left side is the induced matrix norm, while || · || on the right side denotes the vector norm.
||I|| = 1 (509)
||Ax|| ≤ ||A|| · ||x||, for all A, x (510)
||AB|| ≤ ||A|| · ||B||, for all A, B (511)
10.4.3 Examples
||A||_1 = max_j Σ_i |A_ij|   (512)
||A||_2 = √(max eig(A^H A))   (513)
||A||_p = max_{||x||_p = 1} ||Ax||_p   (514)
||A||_∞ = max_i Σ_j |A_ij|   (515)
||A||_F = √(Σ_ij |A_ij|^2) = √(Tr(A A^H))   (Frobenius)   (516)
||A||_max = max_ij |A_ij|   (517)
||A||_KF = ||sing(A)||_1   (Ky Fan)   (518)
10.4.4 Inequalities
E. H. Rasmussen has in yet unpublished material derived and collected the following inequalities. They are collected in a table as below, assuming A is an m × n matrix and d = rank(A):
             ||A||_max   ||A||_1   ||A||_∞   ||A||_2   ||A||_F   ||A||_KF
||A||_max                1         1         1         1         1
||A||_1      m                     m         √m        √m        √m
||A||_∞      n           n                   √n        √n        √n
||A||_2      √mn         √n        √m                  1         1
||A||_F      √mn         √n        √m        √d                  1
||A||_KF     √mnd        √nd       √md       d         √d
which is to be read as, e.g.,
||A||_2 ≤ √m · ||A||_∞   (519)
10.5 Rank
10.5.1 Sylvester's Inequality
If A is m × n and B is n × r, then
rank(A) + rank(B) − n ≤ rank(AB) ≤ min(rank(A), rank(B))
See [9].
10.7 Miscellaneous
For any A it holds that
rank(A) = rank(A^T) = rank(A A^T) = rank(A^T A)
It holds that
A One-dimensional Results
A.1 Gaussian
A.1.1 Density
p(x) = (1 / √(2πσ^2)) exp(−(x − µ)^2 / (2σ^2))   (526)
A.1.2 Normalization
∫ e^{−(s−µ)^2/(2σ^2)} ds = √(2πσ^2)   (527)
∫ e^{−(ax^2+bx+c)} dx = √(π/a) exp[(b^2 − 4ac)/(4a)]   (528)
∫ e^{c2 x^2 + c1 x + c0} dx = √(π/(−c2)) exp[(c1^2 − 4 c2 c0)/(−4 c2)]   (529)
A.1.3 Derivatives
∂p(x)/∂µ = p(x) (x − µ)/σ^2   (530)
∂ln p(x)/∂µ = (x − µ)/σ^2   (531)
∂p(x)/∂σ = p(x) (1/σ) [ (x − µ)^2/σ^2 − 1 ]   (532)
∂ln p(x)/∂σ = (1/σ) [ (x − µ)^2/σ^2 − 1 ]   (533)
A.1.5 Moments
If the density is expressed by
p(x) = (1/√(2πσ^2)) exp(−(s − µ)^2/(2σ^2))   or   p(x) = C exp(c2 x^2 + c1 x)   (534)
then the first few basic moments are
⟨x⟩ = µ = −c1/(2 c2)
⟨x^2⟩ = σ^2 + µ^2 = −1/(2 c2) + (c1/(2 c2))^2
⟨x^3⟩ = 3σ^2 µ + µ^3 = (c1/(2 c2)^2) (3 − c1^2/(2 c2))
⟨x^4⟩ = µ^4 + 6µ^2 σ^2 + 3σ^4 = (c1/(2 c2))^4 + 6 (c1/(2 c2))^2 (−1/(2 c2)) + 3 (1/(2 c2))^2
A.2.2 Moments
A useful fact of MoG is that
⟨x^n⟩ = Σ_k ρ_k ⟨x^n⟩_k   (537)
where ⟨·⟩_k denotes average with respect to the k.th component. We can calculate the first four moments from the densities
p(x) = Σ_k ρ_k (1/√(2πσ_k^2)) exp(−(x − µ_k)^2/(2σ_k^2))   (538)
p(x) = Σ_k ρ_k C_k exp(c_k2 x^2 + c_k1 x)   (539)
as
⟨x⟩ = Σ_k ρ_k µ_k = Σ_k ρ_k (−c_k1/(2 c_k2))
⟨x^2⟩ = Σ_k ρ_k (σ_k^2 + µ_k^2) = Σ_k ρ_k [ −1/(2 c_k2) + (c_k1/(2 c_k2))^2 ]
⟨x^3⟩ = Σ_k ρ_k (3 σ_k^2 µ_k + µ_k^3) = Σ_k ρ_k [ (c_k1/(2 c_k2)^2) (3 − c_k1^2/(2 c_k2)) ]
⟨x^4⟩ = Σ_k ρ_k (µ_k^4 + 6 µ_k^2 σ_k^2 + 3 σ_k^4) = Σ_k ρ_k [ (1/(2 c_k2))^2 ( (c_k1^2/(2 c_k2))^2 − 6 (c_k1^2/(2 c_k2)) + 3 ) ]
From the un-centralized moments one can derive other entities like
⟨x^2⟩ − ⟨x⟩^2 = Σ_{k,k′} ρ_k ρ_k′ [ µ_k^2 + σ_k^2 − µ_k µ_k′ ]
⟨x^3⟩ − ⟨x^2⟩⟨x⟩ = Σ_{k,k′} ρ_k ρ_k′ [ 3 σ_k^2 µ_k + µ_k^3 − (σ_k^2 + µ_k^2) µ_k′ ]
⟨x^4⟩ − ⟨x^2⟩^2 = Σ_{k,k′} ρ_k ρ_k′ [ µ_k^4 + 6 µ_k^2 σ_k^2 + 3 σ_k^4 − (σ_k^2 + µ_k^2)(σ_k′^2 + µ_k′^2) ]
A.2.3 Derivatives
Defining p(s) = Σ_k ρ_k N_s(µ_k, σ_k^2) we get for a parameter θ_j of the j.th component
∂ln p(s)/∂θ_j = (ρ_j N_s(µ_j, σ_j^2) / Σ_k ρ_k N_s(µ_k, σ_k^2)) · ∂ln(ρ_j N_s(µ_j, σ_j^2))/∂θ_j   (540)
that is,
B Proofs and Details
∂(X^n)_kl / ∂X_ij = (∂/∂X_ij) Σ_{u1,...,u_{n−1}} X_{k,u1} X_{u1,u2} · · · X_{u_{n−1},l}
= δ_{k,i} δ_{u1,j} X_{u1,u2} · · · X_{u_{n−1},l}
+ X_{k,u1} δ_{u1,i} δ_{u2,j} · · · X_{u_{n−1},l}
⋮
+ X_{k,u1} X_{u1,u2} · · · δ_{u_{n−1},i} δ_{l,j}
= Σ_{r=0}^{n−1} (X^r)_{ki} (X^{n−1−r})_{jl}
= Σ_{r=0}^{n−1} (X^r J^ij X^{n−1−r})_{kl}
Using the properties of the single entry matrix found in Sec. 9.7.4, the result
follows easily.
B.1 Misc Proofs
Through the calculations, (92) and (219) were used. In addition, by use of (220),
the derivative is found with respect to the imaginary part of X
Notice, for real X, A, the sum of (228) and (229) is reduced to (48).
Similar calculations yield
and
References
[1] Karl Gustav Andersson and Lars-Christer Boiers. Ordinära differentialekvationer. Studenterlitteratur, 1992.
[2] Jörn Anemüller, Terrence J. Sejnowski, and Scott Makeig. Complex independent component analysis of frequency-domain electroencephalographic data. Neural Networks, 16(9):1311–1323, November 2003.
http://www.mathpages.com/home/kmath128.htm
http://en.wikipedia.org/wiki/Minor_(linear_algebra)
[32] Zhaoshui He, Shengli Xie, et al. Convolutive blind source separation in frequency domain based on sparse representation. IEEE Transactions on Audio, Speech and Language Processing, 15(5):1551–1563, July 2007.