
# Matrix Algebra

Adnan M. S. Fakir

Matrices
A matrix A is a rectangular array of numbers and is a generalisation of a vector. Matrices are
useful in a variety of contexts: solving systems of linear equations, input-output models,
determining the sign of a second derivative when solving optimisation problems, and others.
A = [ a11  a12 ]
    [ a21  a22 ]
    [ a31  a32 ]

is an example of a 3 × 2 matrix, with 3 rows and 2 columns. aij represents the element of the matrix
in row i and column j.
In general a matrix is of order m × n, with m rows and n columns. A row vector is a matrix with
m = 1; a column vector has n = 1.
Matrices and vectors usually contain numbers, but there is no reason why they cannot hold
variables, formulae, etc. Thus we might need to distinguish between x (variable) and x (vector).
Vectors and matrices are usually distinguished by being printed in bold or (on the board) by being
underlined (e.g. A).

## Rules for operations on vectors and matrices

Addition
If A and B are of the same order then C=A+B is found by adding the corresponding elements in A
and B.
Example

A = [ 3 ]    B = [ -1 ]
    [ 2 ]        [  3 ]

Hence

C = A + B = [ 3 ] + [ -1 ] = [ 2 ]
            [ 2 ]   [  3 ]   [ 5 ]
by simply adding corresponding elements. This can be demonstrated diagrammatically:

y
5

(-1,3)

Example

A = [ 1  3 ]    B = [ 0  -1 ]
    [ 2  4 ]        [ 2   2 ]

Then

C = A + B = [ 1+0  3+(-1) ] = [ 1  2 ]
            [ 2+2  4+2    ]   [ 4  6 ]

Subtraction
If A and B are of the same order then D=A-B is found by subtracting the corresponding elements in
A and B.
Example
Using the same A and B matrices as in the above Example, we get

D = A - B = [ 1-0  3-(-1) ] = [ 1  4 ]
            [ 2-2  4-2    ]   [ 0  2 ]

Scalar multiplication
If k is a scalar, then the product of k and a matrix is obtained by multiplying each element of the
matrix by k.
Example

A = [ 3  2 ]        3A = [  9  6 ]
    [ 4  0 ]             [ 12  0 ]
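These elementwise rules are easy to check numerically. A minimal sketch using numpy (the library choice is ours, not the notes'), reusing the matrices from the examples above:

```python
import numpy as np

# Matrices from the addition/subtraction examples above
A = np.array([[1, 3],
              [2, 4]])
B = np.array([[0, -1],
              [2, 2]])

C = A + B                    # elementwise addition
D = A - B                    # elementwise subtraction
E = 3 * np.array([[3, 2],    # scalar multiplication
                  [4, 0]])

print(C)   # [[1 2]
           #  [4 6]]
print(D)   # [[1 4]
           #  [0 2]]
print(E)   # [[ 9  6]
           #  [12  0]]
```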

Transpose
If the rows and columns of a matrix are interchanged the new matrix is known as the transpose of
the original matrix. We write the transpose of A as A' or sometimes A^T.
Example

A = [ 5  1 ]        A' = [ 5  6 ]
    [ 6  2 ]             [ 1  2 ]

(Note that this also defines the transpose of row and column vectors.)

Properties of the transpose:
(A')' = A
(A + B)' = A' + B'
(AB)' = B'A'
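These properties can be spot-checked numerically; a quick sketch with numpy, using two arbitrary matrices of our own choosing:

```python
import numpy as np

A = np.array([[5, 1],
              [6, 2]])
B = np.array([[1, 3],
              [2, 4]])

assert (A.T.T == A).all()               # (A')' = A
assert ((A + B).T == A.T + B.T).all()   # (A + B)' = A' + B'
assert ((A @ B).T == B.T @ A.T).all()   # (AB)' = B'A' -- note the reversal
print("all three transpose properties hold")
```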
Matrix multiplication
Multiplying two vectors or matrices requires them to be conformable. Two matrices are
conformable if the number of columns of the first equals the number of rows of the second.
E.g. if A is m × n and B is n × k then they are conformable. If A is m × r and B is n × k then they
are not conformable, since r ≠ n. AB does not exist in the latter case.
The dimensions of the resulting matrix are given by the rows of the first matrix and the columns of
the second.
E.g. if C = AB (A and B as above) then C is m × k.
Each element of C, cij, is obtained as the scalar or inner product of row i from A with column j of
B, e.g. to get c23 take the scalar product of row 2 of A with column 3 from B. Formally,

cij = Σ (k = 1 to n) aik bkj

This is best demonstrated by example.

Example

A = [ 2  -1 ]    B = [ -2 ]
    [ 3   1 ]        [ -4 ]

Note that A is of order (2×2), B is (2×1), so C = AB is (2×2)(2×1) = (2×1).

C = AB = [ 2(-2) + (-1)(-4) ] = [   0 ]
         [ 3(-2) + 1(-4)    ]   [ -10 ]

Note that the result is a transformation of B: it is rotated clockwise and lengthened. Other matrices
perform similar types of transformation, for example

[ 0  -1 ]
[ 1   0 ]

performs a 90 degree rotation (anti-clockwise) of a vector.
Example for matrices
A = [ 1  1 ]    B = [ 0  1 ]    then C = AB = [ 1  3 ]
    [ 2  3 ]        [ 1  2 ]                  [ 3  8 ]

Example

A = [ 3 ]    B = [ 1  4  5 ],    AB = [ 3  12  15 ]
    [ 2 ]                             [ 2   8  10 ]
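The products above can be reproduced with numpy's `@` operator (a sketch, not part of the original notes):

```python
import numpy as np

# (2x2)(2x1) -> (2x1)
A1 = np.array([[2, -1],
               [3,  1]])
B1 = np.array([[-2],
               [-4]])
C1 = A1 @ B1
print(C1)        # [[  0]
                 #  [-10]]

# (2x2)(2x2) -> (2x2)
A2 = np.array([[1, 1],
               [2, 3]])
B2 = np.array([[0, 1],
               [1, 2]])
print(A2 @ B2)   # [[1 3]
                 #  [3 8]]

# (2x1)(1x3) -> (2x3)
A3 = np.array([[3],
               [2]])
B3 = np.array([[1, 4, 5]])
print(A3 @ B3)   # [[ 3 12 15]
                 #  [ 2  8 10]]
```

Note that `A3 @ B3` and `B3 @ A3` are not even the same shape, illustrating that matrix multiplication is not commutative.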
Note that:
A(B+C) = AB + AC
(A+B)C = AC + BC
A(BC) = (AB)C,
Note also that
AB ≠ BA in general, even when conformable in both directions. This is different from ordinary
multiplication, which is commutative. Pre-multiplication of A by B is not the same as
post-multiplication of A by B.
However, it is true that A(BC) = (AB)C = ABC. It doesn't matter which matrices we multiply first,
but we must respect their relative positions.
The identity matrix is the matrix I which is the solution to AI = A = IA. It is only defined for
square matrices. It is like the number 1 under multiplication, or zero under addition. It makes no
change to the target vector or matrix. For order n × n,

I = [ 1  0  ...  0 ]
    [ 0  1  ...  0 ]
    [ :  :  ...  : ]
    [ 0  0  ...  1 ]
Example
[ 2  1 ] [ 1  0 ] = [ 2  1 ]
[ 3  1 ] [ 0  1 ]   [ 3  1 ]

Division
This is not generally defined for matrices. See inversion of matrices below.

## Matrix inversion (2x2 matrices)

The inverse of the matrix A is the matrix A^-1 which is the solution to the equation AA^-1 = I. It is
only defined for square matrices. In the case of the inverse, AA^-1 = A^-1 A. For the 2 × 2 matrix

A = [ a  b ]
    [ c  d ]

the inverse is

A^-1 = 1/(ad - bc) [  d  -b ]
                   [ -c   a ]
Example

A = [ 4  1 ]
    [ 6  2 ]

A^-1 = 1/(4×2 - 1×6) [  2  -1 ] = 1/2 [  2  -1 ] = [  1  -0.5 ]
                     [ -6   4 ]       [ -6   4 ]   [ -3   2   ]

As a check (AA^-1 = I):

[ 4  1 ] [  1  -0.5 ] = [ 1  0 ]
[ 6  2 ] [ -3   2   ]   [ 0  1 ]
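The 2 × 2 formula is easy to code directly and to check against a library routine. A sketch in numpy (the function name `inv2` is ours):

```python
import numpy as np

def inv2(M):
    """Invert a 2x2 matrix [[a, b], [c, d]] via the 1/(ad - bc) formula."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular, no inverse exists")
    return (1 / det) * np.array([[d, -b],
                                 [-c, a]])

A = np.array([[4.0, 1.0],
              [6.0, 2.0]])
Ainv = inv2(A)
print(Ainv)                                 # [[ 1.  -0.5]
                                            #  [-3.   2. ]]
assert np.allclose(A @ Ainv, np.eye(2))     # AA^-1 = I
assert np.allclose(Ainv, np.linalg.inv(A))  # agrees with numpy's inverse
```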

The term ad - bc is called the determinant of the matrix A, written det A or sometimes |A|, and this
is an important property of square matrices.
If det A = 0 then the inverse of A cannot be found, since 1/(ad - bc) is undefined. The matrix A is
then said to be singular. Only non-singular matrices can be inverted.
If we have a matrix product AB, then its inverse is (AB)^-1 = B^-1 A^-1. Note the reversal of A and
B, as for the transpose.
As with many theorems about matrices, this is relatively easy to prove:
(AB)(AB)^-1 = I
by definition of the inverse.
Pre-multiplying both sides by A^-1:
A^-1(AB)(AB)^-1 = A^-1
hence
B(AB)^-1 = A^-1
Pre-multiplying both sides by B^-1:
B^-1 B(AB)^-1 = B^-1 A^-1
hence
(AB)^-1 = B^-1 A^-1
QED.
Another simple theorem tells us that the transpose of the inverse is equal to the inverse of the
transpose, i.e.
(A^-1)' = (A')^-1
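Both identities can be spot-checked numerically; a sketch with two arbitrary invertible matrices of our choosing:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [6.0, 2.0]])
B = np.array([[1.0, 2.0],
              [3.0, -1.0]])

inv = np.linalg.inv
# (AB)^-1 = B^-1 A^-1  -- note the reversed order
assert np.allclose(inv(A @ B), inv(B) @ inv(A))
# (A^-1)' = (A')^-1
assert np.allclose(inv(A).T, inv(A.T))
print("both identities hold")
```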
## Writing and solving a system of equations in matrix form
The system of linear equations
x + 2y = 5
3x - y = 1
can be written in matrix form as Ax = b, where

A = [ 1   2 ],   x = [ x ],   b = [ 5 ]
    [ 3  -1 ]        [ y ]        [ 1 ]

Pre-multiplying both sides by A^-1 gives the solution x = A^-1 b.

A^-1 = 1/(1×(-1) - 2×3) [ -1  -2 ] = [ 1/7   2/7 ]
                        [ -3   1 ]   [ 3/7  -1/7 ]

Hence

x = A^-1 b = [ 1/7   2/7 ] [ 5 ] = [ 1 ]
             [ 3/7  -1/7 ] [ 1 ]   [ 2 ]
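In code one rarely forms A^-1 explicitly; a solver is cheaper and more accurate. A sketch of both routes in numpy:

```python
import numpy as np

# x + 2y = 5, 3x - y = 1 in matrix form Ax = b
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
b = np.array([5.0, 1.0])

# Route 1: x = A^-1 b, exactly as in the text
x1 = np.linalg.inv(A) @ b
# Route 2: solve Ax = b directly (preferred in practice)
x2 = np.linalg.solve(A, b)

print(x1)   # [1. 2.]
print(x2)   # [1. 2.]
```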

## Application: supply and demand

A simple demand and supply model:
QD = 100 - 5P
QS = 20 + 3P
Using the equilibrium condition QD = QS = Q, this can be written

[ 1   5 ] [ Q ] = [ 100 ]
[ 1  -3 ] [ P ]   [  20 ]

and then solved by the above methods.
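Carrying out that solution numerically (a sketch; the model and numbers are from the text):

```python
import numpy as np

# Q + 5P = 100 (demand), Q - 3P = 20 (supply), with Q_D = Q_S = Q in equilibrium
A = np.array([[1.0, 5.0],
              [1.0, -3.0]])
b = np.array([100.0, 20.0])

Q, P = np.linalg.solve(A, b)
print(Q, P)   # 50.0 10.0
```

So the equilibrium price is 10 and the equilibrium quantity is 50.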

## Inverting larger matrices

(From here on, this material is taught in Matrix Algebra II.)
Inverting 3 × 3 or larger matrices is more difficult (the best solution? Use a computer.). It can
however be done in the following manner. First we need to find the cofactors of A. Each element
of A has a corresponding cofactor Aij (N.B. elements of A are aij).
The cofactor Aij is the determinant of the submatrix (called a minor) obtained by deleting row i and
column j from A, multiplied by (-1)i+j. Best explained by example.
A = [ 2  4  1 ]
    [ 4  3  7 ]
    [ 2  1  3 ]

A11 = (-1)^(1+1) det [ 3  7 ]
                     [ 1  3 ]

(submatrix obtained by deleting row 1 and column 1)

det [ 3  7 ] = 9 - 7 = 2, and (-1)^(1+1) = 1, so A11 = 2.
    [ 1  3 ]

Similarly, A12 = 2, A13 = -2, etc.

The full set of cofactors is

[   2    2   -2 ]
[ -11    4    6 ]
[  25  -10  -10 ]

We can now find det A by taking any row or column of A, multiplying by the corresponding
cofactors, and summing. Using the first row, we get
det A = 2 × 2 + 4 × 2 + 1 × (-2) = 10.
(You can verify that using any other row (or even column!) gives the same result.)
The inverse of A may now be found by the formula

A^-1 = (1/det A) adj A

where adj A is the adjoint of A. The adjoint of A is the transpose of the cofactor matrix found
above, i.e.

adj A = [ A11  A21  A31 ]
        [ A12  A22  A32 ]
        [ A13  A23  A33 ]

Note that the cofactors have been transposed here.

Evaluating this we have

A^-1 = 1/10 [  2  -11   25 ]   [  0.2  -1.1   2.5 ]
            [  2    4  -10 ] = [  0.2   0.4  -1   ]
            [ -2    6  -10 ]   [ -0.2   0.6  -1   ]
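The cofactor route generalises to any square matrix and is easy to sketch in numpy (the helper name `cofactor_matrix` is ours):

```python
import numpy as np

def cofactor_matrix(M):
    """C[i, j] = (-1)^(i+j) times the determinant of the minor of M."""
    n = M.shape[0]
    C = np.zeros_like(M, dtype=float)
    for i in range(n):
        for j in range(n):
            # Delete row i and column j to form the minor
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[2.0, 4.0, 1.0],
              [4.0, 3.0, 7.0],
              [2.0, 1.0, 3.0]])

C = cofactor_matrix(A)
det = (A[0] * C[0]).sum()        # expand along the first row
Ainv = C.T / det                 # A^-1 = adj A / det A, with adj A = C'

print(round(float(det), 6))      # 10.0
assert np.allclose(A @ Ainv, np.eye(3))
assert np.allclose(Ainv, np.linalg.inv(A))
```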

## Some rules about determinants

- det A' = det A
- swapping any two columns of a matrix changes the sign of the determinant (ditto rows, following the previous rule)
- a matrix with two identical rows (columns) has a zero determinant
- if one row (column) is multiplied by a scalar, so is the determinant
- adding a multiple of one row (column) to another does not alter the value of the determinant

## Cramer's rule for solving systems of linear equations

Above we solved a system of two linear equations. Cramer's rule allows us to generalise this to
more equations and unknowns. Given the system of n equations defined by
Ax = b
then the solution for the variable xi is

xi = det Ai / det A

where Ai is the matrix obtained by replacing the ith column of A by b.

Example
Find the value of x2 in the system of equations defined by
[ 2  4  1 ] [ x1 ]   [ 5 ]
[ 4  3  7 ] [ x2 ] = [ 3 ]
[ 2  1  3 ] [ x3 ]   [ 1 ]

We have already found det A = 10. If we replace the second column of A by (5, 3, 1)' we then find
the determinant of that matrix to be 12. Hence x2 = 12/10 = 1.2.
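Cramer's rule translates directly into code; a sketch (the function name `cramer` and the 0-based column index are our conventions):

```python
import numpy as np

def cramer(A, b, i):
    """Solve for the i-th unknown (0-indexed) of Ax = b by Cramer's rule."""
    Ai = A.astype(float).copy()
    Ai[:, i] = b                 # replace column i of A by b
    return np.linalg.det(Ai) / np.linalg.det(A)

A = np.array([[2.0, 4.0, 1.0],
              [4.0, 3.0, 7.0],
              [2.0, 1.0, 3.0]])
b = np.array([5.0, 3.0, 1.0])

x2 = cramer(A, b, 1)             # second unknown -> column index 1
print(round(x2, 6))              # 1.2
```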
## The rank of a matrix
The rank (or row rank) of a matrix is the maximum number of linearly independent rows.
Similarly, the column rank is the maximum number of linearly independent columns. These two are
always equal, so we can simply call it the rank.
For a matrix A of dimension m × n, the rank r must be less than or equal to the smaller of m and n,
i.e. r ≤ min(m, n).
A square matrix A is of full rank if r = m = n. In this case the matrix is invertible (non-singular).
If r < m then the matrix is singular (has a zero determinant) and cannot be inverted.
An interpretation of this latter case is that in a system of m simultaneous equations involving m
unknowns, some of the equations are linearly dependent, so there is no unique solution.
Examples

[ 3  12  15 ]
[ 2   8  10 ]

is of rank 1 since the second row is two-thirds of the first.

[ 2  1 ]
[ 3  1 ]

is of rank 2 since the rows (and columns) are independent. This matrix is of full rank.

To find the (row) rank of a matrix, one reduces it to echelon form and then counts the number of
non-zero rows. The echelon form of a matrix has as many zeroes on the left of each row as
possible, starting with the bottom row and working up. Best explained by example, using
elementary row operations.
A = [ 1  1  1 ]
    [ 1  2  3 ]
    [ 2  2  2 ]

Row 3 - 2 × row 1:

[ 1  1  1 ]
[ 1  2  3 ]
[ 0  0  0 ]

Row 2 - row 1:

[ 1  1  1 ]
[ 0  1  2 ]
[ 0  0  0 ]

This is the echelon form, with two non-zero rows, so r = 2. Since r < m we know the matrix is
singular and has a zero determinant. We could have immediately found the zero determinant since
one row is a multiple of another.
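Row reduction by hand is error-prone; numpy computes the rank directly. A sketch using the matrices from this section:

```python
import numpy as np

A = np.array([[1, 1, 1],
              [1, 2, 3],
              [2, 2, 2]])
print(np.linalg.matrix_rank(A))   # 2 -- singular, since r < m = 3

B = np.array([[3, 12, 15],
              [2, 8, 10]])
print(np.linalg.matrix_rank(B))   # 1 -- second row is two-thirds of the first
```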
## Application: input-output analysis
The matrix A could be interpreted as containing the input requirements of good i to produce a unit
output of good j. The columns of A represent outputs, the rows inputs. For example, the 3 × 3
matrix

A = [ 0.3  0.1  0.2 ]
    [ 0.0  0.1  0.4 ]
    [ 0.2  0.0  0.1 ]

implies that to produce one unit of output of good (column) 2 requires 0.1 units of good 1, 0.1 units
of good 2 and 0 of good 3. Note that good 2 requires some of itself as an input. This is not unusual
- steel is used in steel mills to produce steel.
To produce a vector of outputs x would require input y of
y = Ax
(Note that y and x are different vectors but relating to the same goods. All goods are both inputs
and outputs.) If we also have to meet final demand d (e.g. for consumption or export) then we need
output of
x = d + y = d + Ax
How much output do we need to produce, therefore, to meet a final demand vector d?
x = Ax + d
x - Ax = d
(I - A)x = d
x = (I - A)^-1 d

Hence if

d = [ 30 ]    then    I - A = [  0.7  -0.1  -0.2 ]
    [ 20 ]                    [  0     0.9  -0.4 ]
    [ 60 ]                    [ -0.2   0     0.9 ]

(I - A)^-1 = [ 1.549  0.172  0.421 ]
             [ 0.153  1.128  0.535 ]
             [ 0.344  0.038  1.205 ]

so x = (I - A)^-1 d = [ 75.1 ]
                      [ 59.3 ]
                      [ 83.4 ]

This is the kind of problem Soviet central planners had to solve. Since they had 400 or more
industries, it was quite a job inverting the matrix. The failure of communism can perhaps be put
down to the failure to invert a 400 × 400 matrix!
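Central planners today would let a computer do it. A sketch of the computation above in numpy, solving (I - A)x = d rather than inverting explicitly:

```python
import numpy as np

# Input-output coefficients and final demand from the text
A = np.array([[0.3, 0.1, 0.2],
              [0.0, 0.1, 0.4],
              [0.2, 0.0, 0.1]])
d = np.array([30.0, 20.0, 60.0])

# Gross output needed to meet final demand d: x = (I - A)^-1 d
x = np.linalg.solve(np.eye(3) - A, d)
print(np.round(x, 1))   # [75.1 59.3 83.4]
```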
## Eigenvectors and eigenvalues, diagonalisation
An eigenvalue of A (also called a characteristic root or latent root) is a number associated with A
which has a number of uses (later in course).
An eigenvalue r is a solution to Ax = rx, where r is a scalar. In this case, the matrix A transforms
the vector x by lengthening it by a factor r. To find the eigenvalue(s) we proceed as follows:
Ax = rx
(A - rI)x = 0 (note we cannot just subtract r, a scalar, from A, a matrix).
This has a non-trivial (i.e. not x = 0) solution if det(A - rI) = 0. In other words, r is an eigenvalue
of A if and only if A - rI is a singular matrix.
Solving this requires solving a polynomial in r of degree k (where A is k × k).
(Trivial) Example
A = [ 3  1  1 ]
    [ 1  3  1 ]
    [ 1  1  3 ]

Subtracting 2 from each diagonal entry transforms A into a singular matrix (every row becomes
(1, 1, 1)), so r = 2 is an eigenvalue.
Example
(a) Find the eigenvalues for matrix A

A = [ 1  3 ]
    [ 2  0 ]

Its characteristic polynomial or equation is:

det(A - rI) = det [ 1-r    3  ] = (1-r)(0-r) - 6 = r^2 - r - 6 = (r-3)(r+2)
                  [  2    0-r ]

So the eigenvalues for matrix A are: r1 = 3 and r2 = -2.

We can now find the eigenvectors (or characteristic or latent vectors), one associated with each
root. These are the vectors x that solve (A - rI)x = 0.

(A - r1 I)x = 0:

(A - 3I)x = [ 1-3    3  ] [ x1 ] = [ 0 ]
            [  2    0-3 ] [ x2 ]   [ 0 ]

Hence

[ -2   3 ] [ x1 ] = [ 0 ]
[  2  -3 ] [ x2 ]   [ 0 ]

and the first eigenvector will be x1 = [ 3 ] (or any multiple thereof).
                                       [ 2 ]

Similarly the second eigenvector (for r2 = -2) can be derived. It will be

x2 = [  1 ]
     [ -1 ]

or any multiple of it.

Check: Ax1 = [ 1  3 ] [ 3 ] = [ 9 ]
             [ 2  0 ] [ 2 ]   [ 6 ]

which is 3 times x1, as required.
Diagonalisation
A matrix such as A can be diagonalised using the characteristic vectors. If we construct a matrix P
out of the characteristic vectors then P-1AP is a diagonal matrix made up of the eigenvalues of A.

P = [ 3   1 ]    hence    P^-1 = [ 0.2   0.2 ]
    [ 2  -1 ]                    [ 0.4  -0.6 ]

and hence

P^-1 A P = [ 3   0 ]
           [ 0  -2 ]
We will see later how this is useful in dealing with systems of differential equations, for example.
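numpy performs all of this in one call; a sketch checking the eigenvalues and the diagonalisation above:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 0.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are the corresponding eigenvectors (normalised to unit length)
eigvals, P = np.linalg.eig(A)
print(np.sort(np.round(eigvals, 6)))   # [-2.  3.]

# P^-1 A P is diagonal with the eigenvalues on the diagonal
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigvals))

# The (unnormalised) eigenvector matrix from the text works equally well,
# since eigenvectors are only defined up to a scalar multiple
P2 = np.array([[3.0, 1.0],
               [2.0, -1.0]])
D2 = np.linalg.inv(P2) @ A @ P2
assert np.allclose(D2, np.diag([3.0, -2.0]))
```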