Meeting II
MATRIX ALGEBRA
Definitions
Matrices vs. Vectors
• Apart from the formal definition, a matrix is simply a rectangular
way of storing data.
• A matrix A is a system of numbers with n rows and p
columns:

        | a11  a12  ...  a1p |
    A = | a21  a22  ...  a2p |
        | ...  ...       ... |
        | an1  an2  ...  anp |
• A vector x can be represented geometrically as a directed line
in n dimensions, with component x1 along the first axis,
x2 along the second axis, ..., and xn along the nth axis.
Vector Length
• A vector has both direction and length.
• The length of a vector x emanating from the origin is given by
the Pythagorean formula:

    Lx = sqrt(x1² + x2² + ... + xn²)
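A minimal NumPy sketch of the length formula (the vector values are illustrative, not from the lecture):

```python
import numpy as np

x = np.array([3.0, 4.0])          # illustrative vector
L_x = np.sqrt(np.sum(x ** 2))     # Pythagorean formula
print(L_x)                        # 5.0

# np.linalg.norm computes the same quantity
assert np.isclose(L_x, np.linalg.norm(x))
```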
Vector Angle
• The inner (or dot) product of two vectors x and y is the sum of
element-by-element multiplication:
x′y = x1y1 + x2y2 + . . . + xkyk
• Then the angle θ between x and y satisfies

    cos θ = x′y / (Lx Ly),

where Lx and Ly are the lengths of x and y.
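A quick numerical check of the inner product and the resulting angle (NumPy; the two vectors are illustrative examples):

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])

dot = x @ y                                             # x'y
cos_theta = dot / (np.linalg.norm(x) * np.linalg.norm(y))
theta_deg = np.degrees(np.arccos(cos_theta))
print(theta_deg)                                        # 45.0
```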
Vector Projections
• The projection (or shadow) of a vector x on a vector y is
is normalized
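A small NumPy sketch of the projection formula (illustrative vectors; note the residual x − proj is orthogonal to y):

```python
import numpy as np

x = np.array([2.0, 2.0])
y = np.array([3.0, 0.0])

proj = (x @ y) / (y @ y) * y        # projection of x on y
u = y / np.linalg.norm(y)           # y normalized to unit length
length = x @ u                      # (signed) length of the projection

print(proj)      # [2. 0.]
print(length)    # 2.0

# the residual is orthogonal to y
assert np.isclose((x - proj) @ y, 0.0)
```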
MATRICES
Matrix Properties
• The following are some algebraic properties of
matrices:
o (A + B) + C = A + (B + C) - Associative
o A + B = B + A - Commutative
o c(A + B) = cA + cB - Distributive
o (c + d)A = cA + dA
o (A + B)′ = A′ + B′
o (cd)A = c(dA)
o (cA)′ = cA′
Matrix Properties
• The following are more algebraic properties of matrices:
o c(AB) = (cA)B
o A(BC) = (AB)C
o A(B + C) = AB + AC
o (B + C)A = BA + CA
o (AB)′ = B′A′
o For vectors xj such that Axj is defined, matrix multiplication is
linear: A(c1x1 + c2x2 + . . . + cnxn) = c1Ax1 + c2Ax2 + . . . + cnAxn
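The algebraic properties above can be verified numerically; a minimal sketch using random matrices of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))
c = 2.0

assert np.allclose((A + B) + C, A + (B + C))      # associative (addition)
assert np.allclose(A + B, B + A)                  # commutative (addition)
assert np.allclose(c * (A + B), c * A + c * B)    # distributive (scalar)
assert np.allclose(A @ (B @ C), (A @ B) @ C)      # associative (product)
assert np.allclose(A @ (B + C), A @ B + A @ C)    # distributive (product)
assert np.allclose((A @ B).T, B.T @ A.T)          # transpose of a product
print("all properties hold")
```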
Matrix Determinant
• A square matrix can be characterized by a scalar value
called a determinant.
detA = |A|
• Much like the matrix inverse, calculation of the
determinant is very complicated and tedious, and is best
left to computers.
• What can be learned from determinants is if a matrix is
singular:
➔ If the square matrix A is singular, its determinant is 0
➔ If A is nonsingular, its determinant is nonzero
• If A and B are square and the same size, the determinant
of the product is the product of the determinants:
|AB| = |A||B|.
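These determinant facts can be illustrated with NumPy (the matrices are my own examples: A is built singular by making its second row twice the first):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 2.0]])   # second row is twice the first -> singular
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # nonsingular

print(np.linalg.det(A))      # ~0.0: A is singular
print(np.linalg.det(B))      # -2.0: B is nonsingular

# |AB| = |A||B|
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```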
Eigenvalues and Eigenvectors
• A square matrix can be decomposed into a set of
eigenvalues and eigenvectors.
Ax = λx
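A minimal NumPy check of the eigenvalue equation Ax = λx (the symmetric matrix below is an illustrative example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # illustrative symmetric matrix
eigvals, eigvecs = np.linalg.eig(A)

# each column v of eigvecs satisfies A v = lambda v
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)

print(np.sort(eigvals))                  # [1. 3.]
```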
Let θik denote the angle formed by the deviation vectors di and dk
(where di collects the deviations of the i-th variable from its mean).
From the length-of-projection formula, we get

    cos θik = di′dk / (Ldi Ldk)

Thus, since di′dk = n sik, Ldi = sqrt(n sii), and Ldk = sqrt(n skk),

    cos θik = sik / sqrt(sii skk) = rik

So the relation implies that the sample correlation coefficient rik
is the cosine of the angle between the two deviation vectors.
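This link between correlation and the angle of the deviation vectors can be checked numerically (NumPy; the two data vectors are illustrative):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])      # illustrative data
x2 = np.array([2.0, 1.0, 4.0, 3.0])

d1 = x1 - x1.mean()                      # deviation vectors
d2 = x2 - x2.mean()

# cosine of the angle between the deviation vectors
cos_theta = (d1 @ d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))

# sample correlation coefficient
r = np.corrcoef(x1, x2)[0, 1]

print(cos_theta, r)                      # both 0.6
assert np.isclose(cos_theta, r)
```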
RANDOM SAMPLES AND THE EXPECTED
VALUES OF THE SAMPLE MEAN AND
COVARIANCE MATRIX
Random Sample
Suppose, the data have not yet been observed, but we intend to
collect n sets of measurements on p variables. Before the
measurements are made, their values cannot, in general, be
predicted exactly. Consequently, we treat them as
random variables.
If the row vectors X′1, X′2, … , X′n above represent independent observations
from a common joint distribution with density function f(x) = f(x1, x2, … , xp),
then X1, X2, … , Xn are said to form a random sample from f(x).
Sampling Distribution of X̄ and Sn
Let X1, X2, ..., Xn be a random sample from a joint distribution that has
mean vector μ and covariance matrix Σ. Then

    E(X̄) = μ,    Cov(X̄) = (1/n)Σ,    E(Sn) = ((n − 1)/n)Σ,

so X̄ is an unbiased estimator of μ, while Sn (with divisor n) is a
slightly biased estimator of Σ; the version with divisor n − 1 is unbiased.
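A small simulation (NumPy; μ, Σ, n, and the number of replications are illustrative choices of mine) shows the sample mean averaging to μ with covariance close to Σ/n:

```python
import numpy as np

rng = np.random.default_rng(42)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
n, reps = 50, 5000

# draw many samples of size n and record each sample mean
samples = rng.multivariate_normal(mu, Sigma, size=(reps, n))  # (reps, n, p)
means = samples.mean(axis=1)                                  # (reps, p)

print(means.mean(axis=0))        # close to mu
print(np.cov(means.T) * n)       # close to Sigma
```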
Generalized Variance
The overall sample covariance matrix gives a picture of the
covariation between each variable in the sample. To summarize this
p × p matrix by a single number, the generalized sample variance is
defined as the determinant of the sample covariance matrix:

    generalized sample variance = |S|
Generalized variance also has an interpretation in the p-space
scatter-plot representation of the data. To begin, imagine a
multidimensional ellipsoid that represents the spread of the sample
data matrix X; the generalized variance is proportional to the
squared volume of this ellipsoid.
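A minimal NumPy sketch of the generalized sample variance (the small data matrix is my own example):

```python
import numpy as np

X = np.array([[4.0, 1.0],
              [2.0, 3.0],
              [3.0, 5.0]])       # n = 3 observations on p = 2 variables

S = np.cov(X.T)                  # sample covariance matrix (divisor n - 1)
gen_var = np.linalg.det(S)       # generalized sample variance |S|

print(S)
print(gen_var)                   # 3.0
```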
Example: the deviation matrix and the sample covariance matrix S.
LINEAR COMBINATIONS OF
VARIABLES
Let a1, a2, . . . , ap be constants and consider the linear combination
of the elements of the vector y,

    z = a′y = a1y1 + a2y2 + . . . + apyp

The sample mean and sample variance of z are

    z̄ = a′ȳ    and    s²z = a′Sa,

where ȳ is the sample mean vector and S is the sample covariance
matrix of y.
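The mean and variance of a linear combination can be verified directly (NumPy; the data matrix and coefficient vector are illustrative):

```python
import numpy as np

Y = np.array([[4.0, 1.0],
              [2.0, 3.0],
              [3.0, 5.0]])          # n = 3 observations on p = 2 variables
a = np.array([1.0, 2.0])            # coefficients of the linear combination

z = Y @ a                           # z_j = a'y_j for each observation
S = np.cov(Y.T)                     # sample covariance matrix

assert np.isclose(z.mean(), a @ Y.mean(axis=0))   # zbar = a' ybar
assert np.isclose(z.var(ddof=1), a @ S @ a)       # s_z^2 = a' S a
print(z.mean(), z.var(ddof=1))                    # 9.0 13.0
```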
Exercises
1. Is the following matrix positive definite?
2. Check that
is an orthogonal matrix.