Errors: Very likely there are errors, typos, and mistakes for which we apologize and would be grateful to receive corrections at kbp@imm.dtu.dk.
Contents

1 Basics
  1.1 Trace and Determinants
  1.2 The Special Case 2x2
2 Derivatives
  2.1 Derivatives of a Determinant
  2.2 Derivatives of an Inverse
  2.3 Derivatives of Matrices, Vectors and Scalar Forms
  2.4 Derivatives of Traces
  2.5 Derivatives of Structured Matrices
3 Inverses
  3.1 Exact Relations
  3.2 Implication on Inverses
  3.3 Approximations
  3.4 Generalized Inverse
  3.5 Pseudo Inverse
4 Complex Matrices
  4.1 Complex Derivatives
5 Decompositions
  5.1 Eigenvalues and Eigenvectors
  5.2 Singular Value Decomposition
  5.3 Triangular Decomposition
6 General Statistics and Probability
  6.2 Expectations
7 Gaussians
  7.1 One Dimensional
  7.2 Basics
  7.3 Moments
  7.4 Miscellaneous
  7.5 One Dimensional Mixture of Gaussians
  7.6 Mixture of Gaussians
8 Miscellaneous
  8.1 Functions and Series
  8.2 Indices, Entries and Vectors
  8.3 Solutions to Systems of Equations
  8.4 Block matrices
  8.5 Matrix Norms
  8.6 Positive Definite and Semi-definite Matrices
Notation and Nomenclature

det(A)   Determinant of A
||A||    Matrix norm (subscript, if any, denotes which norm)
A^T      Transposed matrix
A^*      Complex conjugated matrix
A^H      Transposed and complex conjugated matrix
1 Basics
1.1 Trace and Determinants

Tr(A) = Tr(A^T)

Tr(AB) = Tr(BA)

Tr(A + B) = Tr(A) + Tr(B)

Tr(ABC) = Tr(BCA) = Tr(CAB)

det(A) = ∏_i λ_i,    where λ_i = eig(A)
2 Derivatives
This section covers differentiation of a number of expressions with respect to a matrix X. Note that it is always assumed that X has no special structure, i.e. that the elements of X are independent (e.g. not symmetric, Toeplitz, positive definite). See section 2.5 for differentiation of structured matrices. The basic assumption can be written as a formula:

∂X_kl / ∂X_ij = δ_ik δ_lj
The following rules are general and very useful when deriving the differential of
an expression ([10]):
∂A = 0    (A is a constant)                  (1)
∂(αX) = α ∂X                                 (2)
∂(X + Y) = ∂X + ∂Y                           (3)
∂(Tr(X)) = Tr(∂X)                            (4)
∂(XY) = (∂X)Y + X(∂Y)                        (5)
∂(X ◦ Y) = (∂X) ◦ Y + X ◦ (∂Y)               (6)
∂(X ⊗ Y) = (∂X) ⊗ Y + X ⊗ (∂Y)               (7)
∂(X^{-1}) = -X^{-1}(∂X)X^{-1}                (8)
∂(det(X)) = det(X) Tr(X^{-1} ∂X)             (9)
∂(ln(det(X))) = Tr(X^{-1} ∂X)                (10)
∂X^T = (∂X)^T                                (11)
∂X^H = (∂X)^H                                (12)
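These rules lend themselves to quick numerical checks. The following is a minimal sketch of such a check for rule (8), assuming numpy is available (the matrix size, seed and step size are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
n, eps = 4, 1e-6
X = rng.standard_normal((n, n)) + n * np.eye(n)  # shifted to be well-conditioned
dX = rng.standard_normal((n, n))                 # an arbitrary perturbation

# finite-difference approximation of the differential of X^{-1}
num = (np.linalg.inv(X + eps * dX) - np.linalg.inv(X)) / eps
# rule (8): d(X^{-1}) = -X^{-1} (dX) X^{-1}
ana = -np.linalg.inv(X) @ dX @ np.linalg.inv(X)
print(np.max(np.abs(num - ana)))  # on the order of eps or smaller

The same pattern verifies any of the rules above by replacing the two expressions being compared.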
2.1 Derivatives of a Determinant

∂ det(X) / ∂X = det(X) (X^{-1})^T

∂ det(AXB) / ∂X = det(AXB) (X^{-1})^T = det(AXB) (X^T)^{-1}
∂ ln |det(X^T X)| / ∂X = 2(X^+)^T

∂ ln |det(X^T X)| / ∂X^+ = -2X^T

∂ ln |det(X)| / ∂X = (X^{-1})^T = (X^T)^{-1}

∂ det(X^k) / ∂X = k det(X^k) X^{-T}
See [7].
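The gradient ∂ ln |det(X)| / ∂X = (X^{-1})^T can be checked entry by entry with finite differences; the sketch below assumes numpy and arbitrary test data:

import numpy as np

rng = np.random.default_rng(1)
n, eps = 4, 1e-6
X = rng.standard_normal((n, n)) + n * np.eye(n)

def f(M):
    return np.log(abs(np.linalg.det(M)))

G = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = eps   # perturb one entry at a time
        G[i, j] = (f(X + E) - f(X)) / eps

print(np.max(np.abs(G - np.linalg.inv(X).T)))  # close to zero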
2.2 Derivatives of an Inverse

∂(a^T X^{-1} b) / ∂X = -X^{-T} a b^T X^{-T}
∂ det(X^{-1}) / ∂X = -det(X^{-1}) (X^{-1})^T

∂ Tr(AX^{-1}B) / ∂X = -(X^{-1}BAX^{-1})^T
2.3 Derivatives of Matrices, Vectors and Scalar Forms
∂(x^T b) / ∂x = ∂(b^T x) / ∂x = b

∂(a^T X b) / ∂X = a b^T

∂(a^T X^T b) / ∂X = b a^T

∂(a^T X a) / ∂X = ∂(a^T X^T a) / ∂X = a a^T

∂X / ∂X_ij = J^{ij}

∂(XA)_ij / ∂X_mn = δ_im (A)_nj = (J^{mn} A)_ij

∂(X^T A)_ij / ∂X_mn = δ_in (A)_mj = (J^{nm} A)_ij

(∂/∂X_ij) Σ_{klmn} X_kl X_mn = 2 Σ_{kl} X_kl

∂(b^T X^T X c) / ∂X = X(b c^T + c b^T)

∂((Bx + b)^T C (Dx + d)) / ∂x = B^T C (Dx + d) + D^T C^T (Bx + b)

∂(X^T B X)_kl / ∂X_ij = δ_lj (X^T B)_ki + δ_kj (BX)_il

∂(X^T B X) / ∂X_ij = X^T B J^{ij} + J^{ji} B X,    where (J^{ij})_kl = δ_ik δ_jl

See Sec. 8.2 for useful properties of the single-entry matrix J^{ij}.

∂(x^T B x) / ∂x = (B + B^T) x

∂(b^T X^T D X c) / ∂X = D^T X b c^T + D X c b^T

(∂/∂X) (Xb + c)^T D (Xb + c) = (D + D^T)(Xb + c) b^T
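As a sanity check of the frequently used rule ∂(x^T B x)/∂x = (B + B^T)x, here is a short numpy sketch (sizes and seed are arbitrary):

import numpy as np

rng = np.random.default_rng(3)
n, eps = 5, 1e-6
B = rng.standard_normal((n, n))
x = rng.standard_normal(n)

f = lambda v: v @ B @ v
g = np.array([(f(x + eps * e) - f(x)) / eps for e in np.eye(n)])
print(np.max(np.abs(g - (B + B.T) @ x)))  # close to zero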
∂(a^T X^n b) / ∂X = Σ_{r=0}^{n-1} (X^r)^T a b^T (X^{n-1-r})^T                    (14)

∂(a^T (X^n)^T X^n b) / ∂X = Σ_{r=0}^{n-1} [ X^{n-1-r} a b^T (X^n)^T X^r
                                          + (X^r)^T X^n a b^T (X^{n-1-r})^T ]    (15)

Assume f = x^T A x + b^T x, then

∇_x f = ∂f/∂x = (A + A^T) x + b

∂²f / ∂x∂x^T = A + A^T
2.4 Derivatives of Traces
∂ Tr(X) / ∂X = I

∂ Tr(XB) / ∂X = B^T                                   (16)

∂ Tr(BXC) / ∂X = B^T C^T

∂ Tr(BX^T C) / ∂X = CB

∂ Tr(X^T C) / ∂X = C

∂ Tr(BX^T) / ∂X = B

∂ Tr(X²) / ∂X = 2X^T

∂ Tr(X²B) / ∂X = (XB + BX)^T

∂ Tr(X^T BX) / ∂X = BX + B^T X

∂ Tr(XBX^T) / ∂X = XB^T + XB

∂ Tr(X^T X) / ∂X = 2X

∂ Tr(BXX^T) / ∂X = (B + B^T)X

∂ Tr(B^T X^T CXB) / ∂X = C^T XBB^T + CXBB^T

∂ Tr[X^T BXC] / ∂X = BXC + B^T XC^T

∂ Tr(AXBX^T C) / ∂X = A^T C^T XB^T + CAXB

∂ Tr[(AXb + c)(AXb + c)^T] / ∂X = 2A^T (AXb + c) b^T
See [7].
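A representative check of one of the trace rules, ∂Tr(AXBX^T C)/∂X = A^T C^T XB^T + CAXB, again by finite differences with numpy (a sketch with arbitrary test matrices):

import numpy as np

rng = np.random.default_rng(4)
n, eps = 4, 1e-6
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
X = rng.standard_normal((n, n))

def f(M):
    return np.trace(A @ M @ B @ M.T @ C)

G = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = eps
        G[i, j] = (f(X + E) - f(X)) / eps

print(np.max(np.abs(G - (A.T @ C.T @ X @ B.T + C @ A @ X @ B))))  # close to zero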
∂ Tr(X^k) / ∂X = k (X^{k-1})^T

∂ Tr(AX^k) / ∂X = Σ_{r=0}^{k-1} (X^r A X^{k-r-1})^T

∂ Tr[B^T X^T CXX^T CXB] / ∂X = CXX^T CXBB^T + C^T XBB^T X^T C^T X
                               + CXBB^T X^T CX + C^T XX^T C^T XBB^T
2.4.4 Other
∂ Tr(AX^{-1}B) / ∂X = -(X^{-1}BAX^{-1})^T = -X^{-T} A^T B^T X^{-T}
Assume B and C to be symmetric, then
∂ Tr[(X^T CX)^{-1} A] / ∂X = -(CX(X^T CX)^{-1})(A + A^T)(X^T CX)^{-1}

∂ Tr[(X^T CX)^{-1}(X^T BX)] / ∂X = -2CX(X^T CX)^{-1} X^T BX(X^T CX)^{-1}
                                   + 2BX(X^T CX)^{-1}
See [7].
2.5 Derivatives of Structured Matrices

If A has no special structure we have simply S^{ij} = J^{ij}, that is, the structure matrix is simply the single-entry matrix. Many structures have a representation in single-entry matrices; see Sec. 8.2.7 for more examples of structure matrices.
2.5.1 Symmetric
If A is symmetric, then S^{ij} = J^{ij} + J^{ji} - J^{ij}J^{ij} and therefore

df/dA = [∂f/∂A] + [∂f/∂A]^T - diag[∂f/∂A]

∂ Tr(AX) / ∂X = A + A^T - (A ◦ I),  see (20)              (17)

∂ det(X) / ∂X = det(X)(2X^{-1} - (X^{-1} ◦ I))            (18)

∂ ln det(X) / ∂X = 2X^{-1} - (X^{-1} ◦ I)                 (19)
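Equation (17) can be checked numerically by perturbing a symmetric X while preserving its symmetry (i.e. perturbing X_ij and X_ji together). A numpy sketch, assuming arbitrary test data:

import numpy as np

rng = np.random.default_rng(5)
n, eps = 4, 1e-6
A = rng.standard_normal((n, n))
X0 = rng.standard_normal((n, n)); X = X0 + X0.T   # symmetric X

f = lambda M: np.trace(A @ M)
G = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        E = np.zeros((n, n))
        E[i, j] = E[j, i] = eps                   # symmetric perturbation
        G[i, j] = G[j, i] = (f(X + E) - f(X)) / eps

print(np.max(np.abs(G - (A + A.T - A * np.eye(n)))))  # close to zero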
2.5.2 Diagonal
If X is diagonal, then ([10]):
∂ Tr(AX) / ∂X = A ◦ I                                     (20)
3 Inverses
3.1 Exact Relations
3.1.1 The Woodbury identity
(A + CBC^T)^{-1} = A^{-1} - A^{-1}C(B^{-1} + C^T A^{-1}C)^{-1}C^T A^{-1}
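The identity is easy to confirm numerically; here is a minimal numpy sketch (the sizes and the diagonal shifts that keep A and B invertible are arbitrary choices):

import numpy as np

rng = np.random.default_rng(6)
n, k = 6, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((k, k)) + k * np.eye(k)
C = rng.standard_normal((n, k))

Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
lhs = np.linalg.inv(A + C @ B @ C.T)
rhs = Ai - Ai @ C @ np.linalg.inv(Bi + C.T @ Ai @ C) @ C.T @ Ai
print(np.max(np.abs(lhs - rhs)))  # ~1e-15

The identity is most useful when n is large and k is small: the n×n inversion on the left is replaced by a k×k inversion on the right.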
3.3 Approximations
(I + A)^{-1} = I - A + A² - A³ + ...    (valid when the spectral radius of A is less than 1)

A - A(I + A)^{-1}A ≈ I - A^{-1}    if A is large and symmetric

If σ² is small, then

(Q + σ²M)^{-1} ≈ Q^{-1} - σ²Q^{-1}MQ^{-1}
3.5 Pseudo Inverse

3.5.2 Properties
Assume A+ to be the pseudo-inverse of A, then (See [3])
(A^+)^+ = A
(A^T)^+ = (A^+)^T
(cA)^+ = (1/c)A^+
(A^T A)^+ = A^+(A^T)^+
(AA^T)^+ = (A^T)^+A^+

Assume A to have full rank, then

(AA^+)(AA^+) = AA^+
(A^+A)(A^+A) = A^+A
Tr(AA^+) = rank(AA^+)      (See [14])
Tr(A^+A) = rank(A^+A)      (See [14])
3.5.3 Construction
Assume that A has full rank, then
A n×n, square:         rank(A) = n  ⇒  A^+ = A^{-1}
A n×m, broad (n < m):  rank(A) = n  ⇒  A^+ = A^T(AA^T)^{-1}
A n×m, tall (n > m):   rank(A) = m  ⇒  A^+ = (A^T A)^{-1}A^T
Assume A does not have full rank, i.e. A is n×m and rank(A) = r < min(n, m). The pseudo-inverse A^+ can then be constructed from the singular value decomposition A = UDV^T by

A^+ = VD^+U^T

A different way is this: there always exist two matrices C (n×r) and D (r×m) of rank r such that A = CD. Using these matrices, it holds that

A^+ = D^T(DD^T)^{-1}(C^T C)^{-1}C^T

See [3].
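The SVD construction agrees with numpy's built-in pseudo-inverse; a sketch with an arbitrary rank-deficient test matrix:

import numpy as np

rng = np.random.default_rng(7)
n, m, r = 6, 4, 2
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # rank r

U, s, Vt = np.linalg.svd(A)
s_plus = [1.0 / x if x > 1e-10 else 0.0 for x in s]  # invert non-zero singular values only
D_plus = np.zeros((m, n)); D_plus[:len(s), :len(s)] = np.diag(s_plus)
A_plus = Vt.T @ D_plus @ U.T                          # A^+ = V D^+ U^T

print(np.max(np.abs(A_plus - np.linalg.pinv(A))))     # ~1e-15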
4 Complex Matrices
4.1 Complex Derivatives
In order to differentiate an expression f (z) with respect to a complex z, the
Cauchy-Riemann equations have to be satisfied ([7]):
∂f(z) / ∂ℑz = i ∂f(z) / ∂ℜz.                              (23)
A complex function that satisfies the Cauchy-Riemann equations for points in a region R is said to be analytic in this region R. In general, expressions involving complex conjugate or conjugate transpose do not satisfy the Cauchy-Riemann equations. In order to avoid this problem, a more generalized definition of complex derivative is used ([12], [6]):
• Complex Gradient Vector: if f is a real function of a complex vector z, then the complex gradient vector is

∇f(z) = 2 df(z)/dz^* = ∂f(z)/∂ℜz + i ∂f(z)/∂ℑz            (27)
• Complex Gradient Matrix: if f is a real function of a complex matrix Z, then the complex gradient matrix is

∇f(Z) = 2 df(Z)/dZ^* = ∂f(Z)/∂ℜZ + i ∂f(Z)/∂ℑZ            (28)
These expressions can be used for gradient descent algorithms.
∂g(u)/∂x = (∂g/∂u)(∂u/∂x) + (∂g/∂u^*)(∂u^*/∂x)            (29)
         = (∂g/∂u)(∂u/∂x) + (∂g^*/∂u)^* (∂u^*/∂x)
Notice that if the function is analytic, the second term vanishes and the expression reduces to the ordinary well-known chain rule. For the matrix derivative of a scalar function g(U), the chain rule can be written the following way:
∂Tr(X^*)/∂ℜX = ∂Tr(X^H)/∂ℜX = I                           (31)

i ∂Tr(X^*)/∂ℑX = i ∂Tr(X^H)/∂ℑX = I                       (32)
Since the two results have the same sign, the conjugate complex derivative (25)
should be used.
∂Tr(X)/∂ℜX = ∂Tr(X^T)/∂ℜX = I                             (33)

i ∂Tr(X)/∂ℑX = i ∂Tr(X^T)/∂ℑX = -I                        (34)
Here, the two results have different signs, so the generalized complex derivative (24) should be used. Hereby, it can be seen that ∂Tr(X)/∂X = I holds even if X is a complex matrix.
∂Tr(AX^H)/∂ℜX = A                                         (35)

i ∂Tr(AX^H)/∂ℑX = A                                       (36)

∂Tr(AX^*)/∂ℜX = A^T                                       (37)

i ∂Tr(AX^*)/∂ℑX = A^T                                     (38)

∂Tr(XX^H)/∂ℜX = ∂Tr(X^H X)/∂ℜX = 2ℜX                      (39)

i ∂Tr(XX^H)/∂ℑX = i ∂Tr(X^H X)/∂ℑX = i2ℑX                 (40)
By inserting (39) and (40) in (24) and (25), it can be seen that
∂Tr(XX^H)/∂X = X^*                                        (41)

∂Tr(XX^H)/∂X^* = X                                        (42)
Since the function Tr(XX^H) is a real function of the complex matrix X, the complex gradient matrix (28) is given by

∇Tr(XX^H) = 2 ∂Tr(XX^H)/∂X^* = 2X                         (43)
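Result (43) can be verified by computing the real and imaginary parts of the gradient separately, as in (28); the numpy sketch below assumes an arbitrary complex test matrix:

import numpy as np

rng = np.random.default_rng(8)
n, eps = 3, 1e-6
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

f = lambda M: np.trace(M @ M.conj().T).real   # Tr(X X^H), a real function
G = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = eps
        d_re = (f(X + E) - f(X)) / eps        # derivative w.r.t. Re X_ij
        d_im = (f(X + 1j * E) - f(X)) / eps   # derivative w.r.t. Im X_ij
        G[i, j] = d_re + 1j * d_im            # complex gradient matrix, cf. (28)

print(np.max(np.abs(G - 2 * X)))              # matches (43)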
5 Decompositions
5.1 Eigenvalues and Eigenvectors
5.1.1 Definition
The eigenvectors v_i and eigenvalues λ_i are the ones satisfying

Av_i = λ_i v_i

AV = VD,    (D)_ij = δ_ij λ_i

where the columns of V are the vectors v_i.
eig(AB) = eig(BA)

A is n×m  ⇒  at most min(n, m) distinct λ_i

rank(A) = r  ⇒  at most r non-zero λ_i
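These definitions map directly onto numpy's eigendecomposition; a small sketch with an arbitrary symmetric example:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eig(A)            # columns of V are the eigenvectors v_i
print(np.allclose(A @ V, V * lam))   # AV = VD with D = diag(lam)
print(np.isclose(np.linalg.det(A), np.prod(lam)))  # det(A) = prod_i lambda_i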
5.1.3 Symmetric
Assume A is symmetric, then
VV^T = I      (i.e. V is orthogonal)
λ_i ∈ ℝ       (i.e. λ_i is real)
Tr(A^p) = Σ_i λ_i^p
eig(I + cA) = 1 + cλ_i
eig(A^{-1}) = λ_i^{-1}
5.2 Singular Value Decomposition

Any n×m matrix A can be written A = VDU^T, where D is diagonal with the square roots of the eigenvalues of AA^T, V contains the eigenvectors of AA^T, and the columns of U are the eigenvectors of A^T A.
6 General Statistics and Probability
E[AXB + C] = AE[X]B + C

Var[Ax] = AVar[x]A^T

Cov[Ax, By] = ACov[x, y]B^T
See [14].
6.2 Expectations
Assume x to be a stochastic vector with mean m, covariance M and central moments v_r = E[(x - m)^r].
E[Ax + b] = Am + b
E[Ax] = Am
E[x + b] = m + b
See [7].
E[(Ax + a)b^T(Cx + c)(Dx + d)^T]
  = (Am + a)b^T(CMD^T + (Cm + c)(Dm + d)^T)
  + (AMC^T + (Am + a)(Cm + c)^T)b(Dm + d)^T
  + b^T(Cm + c)(AMD^T - (Am + a)(Dm + d)^T)
See [7].
7 Gaussians
7.1 One Dimensional
7.1.1 Density and Normalization
The density is

p(s) = (1/√(2πσ²)) exp( -(s - µ)² / (2σ²) )
Normalization integrals:

∫ e^{-(s-µ)²/(2σ²)} ds = √(2πσ²)

∫ e^{-(ax² + bx + c)} dx = √(π/a) exp[ (b² - 4ac) / (4a) ]

∫ e^{c₂x² + c₁x + c₀} dx = √(π/(-c₂)) exp[ (c₁² - 4c₂c₀) / (-4c₂) ]

Completing the square:

c₂x² + c₁x + c₀ = -a(x - b)² + w

-a = c₂,    b = c₁/(2c₂),    w = c₁²/(4c₂) + c₀

or

c₂x² + c₁x + c₀ = -(1/(2σ²))(x - µ)² + d

µ = -c₁/(2c₂),    σ² = -1/(2c₂),    d = c₀ - c₁²/(4c₂)
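The second normalization integral can be confirmed by numerical quadrature; a sketch assuming scipy is available and using arbitrary coefficients:

import numpy as np
from scipy.integrate import quad

a, b, c = 0.5, -1.0, 0.3
num, _ = quad(lambda x: np.exp(-(a * x**2 + b * x + c)), -np.inf, np.inf)
ana = np.sqrt(np.pi / a) * np.exp((b**2 - 4 * a * c) / (4 * a))
print(num, ana)  # the two values agree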
7.1.3 Moments
If the density is expressed by
p(x) = (1/√(2πσ²)) exp[ -(x - µ)² / (2σ²) ]    or    p(x) = C exp(c₂x² + c₁x)
⟨(x - µ)⟩  = 0     = 0
⟨(x - µ)²⟩ = σ²    = -1/(2c₂)
⟨(x - µ)³⟩ = 0     = 0
⟨(x - µ)⁴⟩ = 3σ⁴   = 3[1/(2c₂)]²
7.2 Basics
7.2.1 Density and normalization
The density of x ∼ N (m, Σ) is
p(x) = (1/√(det(2πΣ))) exp[ -½ (x - m)^T Σ^{-1} (x - m) ]
N_Ax[m, Σ] = ( √(det(2π(A^T Σ^{-1}A)^{-1})) / √(det(2πΣ)) ) · N_x[A^{-1}m, (A^T Σ^{-1}A)^{-1}]
The quadratic forms of two Gaussians combine as

-½(x - m₁)^T Σ₁^{-1}(x - m₁) - ½(x - m₂)^T Σ₂^{-1}(x - m₂) = -½(x - m_c)^T Σ_c^{-1}(x - m_c) + C

with

Σ_c^{-1} = Σ₁^{-1} + Σ₂^{-1}

m_c = (Σ₁^{-1} + Σ₂^{-1})^{-1} (Σ₁^{-1}m₁ + Σ₂^{-1}m₂)

C = ½ (m₁^T Σ₁^{-1} + m₂^T Σ₂^{-1})(Σ₁^{-1} + Σ₂^{-1})^{-1}(Σ₁^{-1}m₁ + Σ₂^{-1}m₂)
    - ½ (m₁^T Σ₁^{-1}m₁ + m₂^T Σ₂^{-1}m₂)
In a trace formulation (assuming Σ₁, Σ₂ are symmetric):

-½ Tr[(X - M₁)^T Σ₁^{-1}(X - M₁)] - ½ Tr[(X - M₂)^T Σ₂^{-1}(X - M₂)]
   = -½ Tr[(X - M_c)^T Σ_c^{-1}(X - M_c)] + C

Σ_c^{-1} = Σ₁^{-1} + Σ₂^{-1}

M_c = (Σ₁^{-1} + Σ₂^{-1})^{-1} (Σ₁^{-1}M₁ + Σ₂^{-1}M₂)

C = ½ Tr[ (Σ₁^{-1}M₁ + Σ₂^{-1}M₂)^T (Σ₁^{-1} + Σ₂^{-1})^{-1} (Σ₁^{-1}M₁ + Σ₂^{-1}M₂) ]
    - ½ Tr( M₁^T Σ₁^{-1}M₁ + M₂^T Σ₂^{-1}M₂ )
7.3 Moments
7.3.1 Mean and covariance of linear forms
First and second moments. Assume x ∼ N (m, Σ)
E(x) = m
E[Ax] = AE[x]
Var[Ax] = AVar[x]A^T
Cov[Ax, By] = ACov[x, y]B^T
E(xx^T) = Σ + mm^T
E[x^T Ax] = Tr(AΣ) + m^T Am
Var(x^T Ax) = 2σ⁴ Tr(A²) + 4σ² m^T A²m      (assuming Σ = σ²I and A symmetric)
E[(x - m₀)^T A(x - m₀)] = (m - m₀)^T A(m - m₀) + Tr(AΣ)
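The identity E[x^T Ax] = Tr(AΣ) + m^T Am is easy to confirm by Monte Carlo; a numpy sketch with arbitrary parameters (the sample size trades accuracy for time):

import numpy as np

rng = np.random.default_rng(13)
m = np.array([1.0, 2.0])
S = np.array([[2.0, 0.3],
              [0.3, 1.0]])
A = np.array([[1.0, 0.5],
              [-0.2, 0.8]])

X = rng.multivariate_normal(m, S, size=200_000)
emp = np.mean(np.einsum('ni,ij,nj->n', X, A, X))   # sample average of x^T A x
print(emp, np.trace(A @ S) + m @ A @ m)            # agree up to sampling error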
E[xx^T xx^T] = 2(Σ + mm^T)² + m^T m(Σ - mm^T) + Tr(Σ)(Σ + mm^T)

E[xx^T Axx^T] = (Σ + mm^T)(A + A^T)(Σ + mm^T) + m^T Am(Σ - mm^T) + Tr(AΣ)(Σ + mm^T)

E[x^T xx^T x] = 2Tr(Σ²) + 4m^T Σm + (Tr(Σ) + m^T m)²
7.3.5 Moments
E[x] = Σ_k ρ_k m_k

Cov(x) = Σ_k Σ_{k′} ρ_k ρ_{k′} (Σ_k + m_k m_k^T - m_k m_{k′}^T)
7.4 Miscellaneous
7.4.1 Whitening
Assume x ∼ N(m, Σ), then

z = Σ^{-1/2}(x - m) ∼ N(0, I)

Conversely, having z ∼ N(0, I) one can generate data x ∼ N(m, Σ) by setting

x = Σ^{1/2}z + m ∼ N(m, Σ)

Note that Σ^{1/2} means the matrix which fulfils Σ^{1/2}Σ^{1/2} = Σ; it exists and is unique since Σ is positive definite.
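A whitening sketch in numpy, computing Σ^{-1/2} from the eigendecomposition (the distribution parameters are arbitrary test values):

import numpy as np

rng = np.random.default_rng(9)
m = np.array([1.0, -2.0])
S = np.array([[2.0, 0.5],
              [0.5, 1.0]])                    # positive definite covariance

lam, V = np.linalg.eigh(S)
S_inv_half = V @ np.diag(lam ** -0.5) @ V.T   # symmetric Sigma^{-1/2}

X = rng.multivariate_normal(m, S, size=100_000)
Z = (X - m) @ S_inv_half.T                    # row-wise z = Sigma^{-1/2}(x - m)
print(np.cov(Z, rowvar=False))                # close to the identity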
7.4.3 Entropy
Entropy of a D-dimensional Gaussian:

H(x) = -∫ N(m, Σ) ln N(m, Σ) dx = ln √(det(2πΣ)) + D/2
7.5 One Dimensional Mixture of Gaussians

The density of a one dimensional mixture of K Gaussians is

p(s) = Σ_{k=1}^{K} ( ρ_k / √(2πσ_k²) ) exp[ -(s - µ_k)² / (2σ_k²) ]
7.5.2 Moments
A useful fact of MoG is that

⟨x^n⟩ = Σ_k ρ_k ⟨x^n⟩_k

where ⟨·⟩_k denotes the average with respect to the k'th component. We can calculate the first four moments from the densities
p(x) = Σ_k ρ_k (1/√(2πσ_k²)) exp[ -(x - µ_k)² / (2σ_k²) ]

p(x) = Σ_k ρ_k C_k exp[ c_{k2}x² + c_{k1}x ]
as
⟨x⟩  = Σ_k ρ_k µ_k                        = Σ_k ρ_k [ -c_{k1}/(2c_{k2}) ]

⟨x²⟩ = Σ_k ρ_k (σ_k² + µ_k²)              = Σ_k ρ_k [ -1/(2c_{k2}) + (-c_{k1}/(2c_{k2}))² ]

⟨x³⟩ = Σ_k ρ_k (3σ_k²µ_k + µ_k³)          = Σ_k ρ_k [ (c_{k1}/(2c_{k2})²)(3 - c_{k1}²/(2c_{k2})) ]

⟨x⁴⟩ = Σ_k ρ_k (µ_k⁴ + 6µ_k²σ_k² + 3σ_k⁴) = Σ_k ρ_k [ (1/(2c_{k2}))² ( (c_{k1}²/(2c_{k2}))² - 6(c_{k1}²/(2c_{k2})) + 3 ) ]
From the uncentralized moments one can derive other entities like

⟨x²⟩ - ⟨x⟩²    = Σ_{k,k′} ρ_k ρ_{k′} [ µ_k² + σ_k² - µ_k µ_{k′} ]

⟨x³⟩ - ⟨x²⟩⟨x⟩ = Σ_{k,k′} ρ_k ρ_{k′} [ 3σ_k²µ_k + µ_k³ - (σ_k² + µ_k²)µ_{k′} ]

⟨x⁴⟩ - ⟨x²⟩²   = Σ_{k,k′} ρ_k ρ_{k′} [ µ_k⁴ + 6µ_k²σ_k² + 3σ_k⁴ - (σ_k² + µ_k²)(σ_{k′}² + µ_{k′}²) ]
8 Miscellaneous
8.1 Functions and Series
8.1.1 Finite Series

(X^n - I)(X - I)^{-1} = I + X + X² + ... + X^{n-1},    provided X - I is invertible
e^{-A} = Σ_{n=0}^∞ (1/n!)(-1)^n A^n = I - A + ½A² - ...

e^{tA} = Σ_{n=0}^∞ (1/n!)(tA)^n = I + tA + ½t²A² + ...

ln(I + A) = Σ_{n=1}^∞ ((-1)^{n-1}/n) A^n = A - ½A² + ⅓A³ - ...

e^A e^B = e^{A+B}    if AB = BA

sin(A) = Σ_{n=0}^∞ ((-1)^n A^{2n+1}) / (2n+1)! = A - (1/3!)A³ + (1/5!)A⁵ - ...

cos(A) = Σ_{n=0}^∞ ((-1)^n A^{2n}) / (2n)! = I - (1/2!)A² + (1/4!)A⁴ - ...
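The restriction AB = BA in the exponential identity matters in practice; a sketch using scipy's matrix exponential (the commuting pair is constructed as a polynomial in A):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(12)
A = rng.standard_normal((3, 3))
B = 2 * A + 3 * np.eye(3)        # commutes with A by construction

print(np.allclose(expm(A) @ expm(B), expm(A + B)))  # True, since AB = BA
C = rng.standard_normal((3, 3))                      # generically AC != CA
print(np.allclose(expm(A) @ expm(C), expm(A + C)))  # False in general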
8.2.2 Permutations
Let P be some permutation matrix, e.g.
    [ 0 1 0 ]                     [ e₂^T ]
P = [ 1 0 0 ] = [ e₂ e₁ e₃ ]  =   [ e₁^T ]
    [ 0 0 1 ]                     [ e₃^T ]

then

AP = [ Ae₂ Ae₁ Ae₃ ],        PA = [ e₂^T A ; e₁^T A ; e₃^T A ]

That is, AP has the columns of A in permuted sequence, and PA has the rows of A in permuted sequence.
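The column/row behaviour is immediate to see in numpy (the example P swaps the first two basis vectors):

import numpy as np

P = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])
A = np.arange(9.0).reshape(3, 3)
print(A @ P)   # first two columns of A swapped
print(P @ A)   # first two rows of A swapped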
i.e. an n×p matrix of zeros with the i'th column of A in the place of the j'th column. Assume A to be n×m and J^{ij} to be p×n, then

          [ 0       ]
          [ ⋮       ]
J^{ij}A = [ A_{j,·} ]    (the j'th row of A placed as the i'th row)
          [ ⋮       ]
          [ 0       ]

i.e. a p×m matrix of zeros with the j'th row of A in the place of the i'th row.
Tr(AJ^{ji}B) = (BA)_ij

Tr(AJ^{ij}J^{ij}B) = diag(A^T B^T)_ij

8.2.7 Structure Matrices

If A has no special structure, then

S^{ij} = J^{ij}

If A is symmetric, then

S^{ij} = J^{ij} + J^{ji} - J^{ij}J^{ij}
Consider the linear system Ax = b.

If A is square and invertible, then

Ax = b  ⇒  x = A^{-1}b

If A is tall with full column rank (the over-determined case), the least squares solution is

Ax = b  ⇒  x = (A^T A)^{-1}A^T b = A^+ b

When A is rank deficient the equation can have many solutions x; x_min = A^+ b is the solution which minimizes ||Ax - b||₂ and is also the solution with the smallest norm ||x||₂. The same holds for a matrix version: assume A is n×m, X is m×n and B is n×n, then

AX = B  ⇒  X_min = A^+ B

where X_min is the solution which minimizes ||AX - B||₂ and is also the solution with the smallest norm ||X||₂. See [3].
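The x = A^+ b solution coincides with numpy's least-squares solver; a sketch with an arbitrary over-determined system:

import numpy as np

rng = np.random.default_rng(10)
A = rng.standard_normal((8, 3))    # tall, full column rank with probability 1
b = rng.standard_normal(8)

x_min = np.linalg.pinv(A) @ b                    # x = A^+ b
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)     # minimizes ||Ax - b||_2
print(np.allclose(x_min, x_ls))                  # True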
where A_min denotes the matrix which is optimal in a least squares sense. An interpretation is that A is the linear approximation which maps the column vectors of B₀ into the column vectors of B₁.
Ax = 0, ∀x  ⇒  A = 0

x^T Ax = 0, ∀x  ⇒  A + A^T = 0
8.4.1 Multiplication
Assuming the dimensions of the blocks match, we have

[ A₁₁ A₁₂ ][ B₁₁ B₁₂ ]   [ A₁₁B₁₁ + A₁₂B₂₁   A₁₁B₁₂ + A₁₂B₂₂ ]
[ A₂₁ A₂₂ ][ B₂₁ B₂₂ ] = [ A₂₁B₁₁ + A₂₂B₂₁   A₂₁B₁₂ + A₂₂B₂₂ ]

With the Schur complements C₁ = A₁₁ - A₁₂A₂₂^{-1}A₂₁ and C₂ = A₂₂ - A₂₁A₁₁^{-1}A₁₂, the determinant can be written as

det[ A₁₁ A₁₂ ; A₂₁ A₂₂ ] = det(A₂₂) · det(C₁) = det(A₁₁) · det(C₂)
and the inverse as

[ A₁₁ A₁₂ ]^{-1}   [ C₁^{-1}               -A₁₁^{-1}A₁₂C₂^{-1} ]
[ A₂₁ A₂₂ ]      = [ -C₂^{-1}A₂₁A₁₁^{-1}    C₂^{-1}            ]

                 = [ A₁₁^{-1} + A₁₁^{-1}A₁₂C₂^{-1}A₂₁A₁₁^{-1}    -C₁^{-1}A₁₂A₂₂^{-1}                        ]
                   [ -A₂₂^{-1}A₂₁C₁^{-1}                          A₂₂^{-1} + A₂₂^{-1}A₂₁C₁^{-1}A₁₂A₂₂^{-1}  ]
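Both block formulas can be confirmed against a direct inversion; a numpy sketch, where M is shifted to keep all the required inverses well defined:

import numpy as np

rng = np.random.default_rng(11)
n = 4
M = rng.standard_normal((2 * n, 2 * n)) + 2 * n * np.eye(2 * n)
A11, A12 = M[:n, :n], M[:n, n:]
A21, A22 = M[n:, :n], M[n:, n:]

C1 = A11 - A12 @ np.linalg.inv(A22) @ A21   # Schur complement of A22
C2 = A22 - A21 @ np.linalg.inv(A11) @ A12   # Schur complement of A11

Minv = np.linalg.inv(M)
print(np.max(np.abs(Minv[:n, :n] - np.linalg.inv(C1))))        # ~0
print(np.isclose(np.linalg.det(M),
                 np.linalg.det(A22) * np.linalg.det(C1)))      # True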
||A|| ≥ 0,    and ||A|| = 0 ⇔ A = 0
||cA|| = |c| ||A||,    c ∈ ℝ
||A + B|| ≤ ||A|| + ||B||
8.5.2 Examples
||A||₁ = max_j Σ_i |A_ij|

||A||₂ = √( max eig(A^T A) )

||A||_p = max_{||x||_p = 1} ||Ax||_p

||A||_∞ = max_i Σ_j |A_ij|

||A||_F = √( Σ_ij |A_ij|² )      (Frobenius)
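All of these norms are available directly in numpy:

import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
print(np.linalg.norm(A, 1))       # max column sum of |A_ij|
print(np.linalg.norm(A, 2))       # largest singular value, sqrt(max eig(A^T A))
print(np.linalg.norm(A, np.inf))  # max row sum of |A_ij|
print(np.linalg.norm(A, 'fro'))   # Frobenius norm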
8.5.3 Inequalities
E. H. Rasmussen has in yet unpublished material derived and collected the following inequalities. They are presented in the table below, assuming A is an m×n matrix and d = min{m, n}:
           ||A||max  ||A||₁  ||A||∞  ||A||₂  ||A||F  ||A||KF
||A||max      1        1       1       1       1       1
||A||₁        m                m      √m      √m      √m
||A||∞        n        n              √n      √n      √n
||A||₂       √mn      √n      √m              1       1
||A||F       √mn      √n      √m      √d              1
||A||KF      √mnd     √nd     √md      d      √d

which are to be read as, e.g.

||A||₂ ≤ √m · ||A||_∞
8.6 Positive Definite and Semi-definite Matrices

8.6.1 Definition

A matrix A is positive definite if and only if

x^T Ax > 0,    ∀x ≠ 0

and positive semi-definite if and only if

x^T Ax ≥ 0,    ∀x
8.6.2 Eigenvalues

The following holds with respect to the eigenvalues: A is positive (semi-)definite exactly when all eigenvalues of the symmetric part ½(A + A^T) are positive (non-negative).

8.6.3 Trace

The following holds with respect to the trace: if A is positive (semi-)definite, then Tr(A) > 0 (Tr(A) ≥ 0).
8.6.4 Inverse
If A is positive definite, then A is invertible and A^{-1} is also positive definite.
8.6.5 Diagonal
If A is positive definite, then A_ii > 0, ∀i.
8.6.6 Decomposition I
The matrix A is positive semi-definite of rank r ⇔ there exists a matrix B of rank r such that A = BB^T.
8.6.7 Decomposition II
Assume A is an n×n positive semi-definite matrix of rank r; then there exists an n×r matrix B of rank r such that B^T AB = I.
See [8].
8.8 Miscellaneous
For any A it holds that

rank(A) = rank(A^T) = rank(AA^T) = rank(A^T A)
A Proofs and Details
∂(X^n)_kl / ∂X_ij = (∂/∂X_ij) Σ_{u₁,...,u_{n-1}} X_{k,u₁} X_{u₁,u₂} ... X_{u_{n-1},l}

  = δ_{k,i} δ_{u₁,j} X_{u₁,u₂} ... X_{u_{n-1},l}
  + X_{k,u₁} δ_{u₁,i} δ_{u₂,j} ... X_{u_{n-1},l}
  ⋮
  + X_{k,u₁} X_{u₁,u₂} ... δ_{u_{n-1},i} δ_{l,j}

  = Σ_{r=0}^{n-1} (X^r)_{ki} (X^{n-1-r})_{jl}

  = Σ_{r=0}^{n-1} (X^r J^{ij} X^{n-1-r})_{kl}
Using the properties of the single entry matrix found in Sec. 8.2.5, the result
follows easily.
Through the calculations, (16) and (35) were used. In addition, by use of (36),
the derivative is found with respect to the imaginary part of X
Notice, for real X, A, the sum of (44) and (45) is reduced to (13).
Similar calculations yield
and
References
[1] Karl Gustav Andersson and Lars-Christer Boiers. Ordinära differentialekvationer. Studentlitteratur, 1992.
[2] Jörn Anemüller, Terrence J. Sejnowski, and Scott Makeig. Complex inde-
pendent component analysis of frequency-domain electroencephalographic
data. Neural Networks, 16(9):1311–1323, November 2003.
[3] S. Barnett. Matrices: Methods and Applications. Oxford Applied Mathematics and Computing Science Series. Clarendon Press, 1990.
[4] Christopher Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[5] Robert J. Boik. Lecture notes: Statistics 550. Online, April 22 2002. Notes.
[6] D. H. Brandwood. A complex gradient operator and its application in adaptive array theory. IEE Proceedings, 130(1):11–16, February 1983. Parts F and H.
[7] M. Brookes. Matrix reference manual, 2004. Website May 20, 2004.
[8] Mads Dyrholm. Some matrix results, 2004. Website August 23, 2004.
[9] Simon Haykin. Adaptive Filter Theory. Prentice Hall, Upper Saddle River,
NJ, 4th edition, 2002.
[10] Thomas P. Minka. Old and new matrix algebra useful for statistics, De-
cember 2000. Notes.
[11] L. Parra and C. Spence. Convolutive blind separation of non-stationary
sources. In IEEE Transactions Speech and Audio Processing, pages 320–
327, May 2000.
[12] Laurent Schwartz. Cours d’Analyse, volume II. Hermann, Paris, 1967. As
referenced in [9].
[13] Shayle R. Searle. Matrix Algebra Useful for Statistics. John Wiley and
Sons, 1982.
[14] G. Seber and A. Lee. Linear Regression Analysis. John Wiley and Sons,
2002.
[15] S. M. Selby. Standard Mathematical Tables. CRC Press, 1974.
[16] Inna Stainvas. Matrix algebra in differential calculus. Neural Computing Research Group, Information Engineering, Aston University, UK, August 2002. Notes.
[17] Max Welling. The Kalman filter. Lecture notes.