
Matrix Arithmetic

Harold W. Ellingsen Jr.
SUNY Potsdam
ellinghw@potsdam.edu
This supplements Linear Algebra by Hefferon, which lacks a chapter on matrix arithmetic
for its own sake. Instructors who wish to introduce these manipulations earlier and without the
rigor of linear transformations may find this useful. Please be sure to cover the text’s Chapter One
first.
This material is Free, including the LaTeX source, under the Creative Commons Attribution-ShareAlike 2.5 License. The latest version should be the one on this site. Any feedback on the note
is appreciated.

Matrix Arithmetic
In this note we explore matrix arithmetic for its own sake. The author introduces it in Chapter
Four using linear transformations. While his approach is quite rigorous, matrix arithmetic can be
studied after Chapter One. This note assumes that Chapter One has been completed.

For a shortcut notation, instead of writing a matrix A as

    [ a11  a12  ...  a1n ]
    [ a21  a22  ...  a2n ]
    [  :    :          :  ]
    [ am1  am2  ...  amn ]

we will write A = (aij)m×n, or just A = (aij) if the size of A is understood.
When given a set of objects in mathematics, there are two basic questions one should ask: When
are two objects equal? and How can we combine two objects to produce a third object? For the
first question we have the following definition.
Definition 1
We say that two matrices A = (aij ) and B = (bij ) are equal, written A = B, provided they have
the same size and their corresponding entries are equal, that is, their sizes are both m × n and for
each 1 ≤ i ≤ m and 1 ≤ j ≤ n, aij = bij .
Example 1

1. Let A = [1 -9 7; 0 1 -5] and B = [1 -9; 0 1; 7 -5]. Since the size of A is 2 × 3 and that of B is
3 × 2, A ≠ B. Do note, however, that the entries of A and B are the same.

2. Find all values of x and y so that [x² y-x; 2 y²] = [1 x-y; x+1 1]. We see that the size of
each matrix is 2 × 2. So we set the corresponding entries equal:

    x² = 1        y - x = x - y
    2 = x + 1     y² = 1

We see that x = ±1 and y = ±1. From 2 = x + 1, we get that x must be 1. From y - x = x - y,
we get that 2y = 2x and so x = y. Thus y is also 1.

As for the second question, we have been doing this for quite a while now: Adding, subtracting, multiplying, and dividing (when possible) real numbers. Eventually we will multiply matrices, but for now we consider another multiplication. So how can we add and subtract two matrices? Here are the definitions.

Definition 2
Let A = (aij) and B = (bij) be m × n matrices. We define their sum, denoted by A + B, and their difference, denoted by A - B, to be the respective matrices (aij + bij) and (aij - bij). We define scalar multiplication by: for any r ∈ R, rA is the matrix (raij).

These definitions should appear quite natural: When two matrices have the same size, we just add or subtract their corresponding entries, and for the scalar multiplication, we just multiply each entry by the scalar. Here are some examples.

Example 2
Let A = [2 3; -1 2], B = [-1 2; 6 -2], and C = [1 2 3; -1 -2 -3]. Compute each of the following, if possible.

1. A + B. Since A and B are both 2 × 2 matrices, we can add them. Here we go:
   A + B = [2 3; -1 2] + [-1 2; 6 -2] = [2+(-1) 3+2; -1+6 2+(-2)] = [1 5; 5 0].

2. B - A. These matrices have the same size, so we can subtract them. Here we go:
   B - A = [-1 2; 6 -2] - [2 3; -1 2] = [-1-2 2-3; 6-(-1) -2-2] = [-3 -1; 7 -4].

3. B + C. No can do. B and C have different sizes: B is 2 × 2 and C is 2 × 3.

4. 4C. We just multiply each entry of C by 4:
   4C = [4·1 4·2 4·3; 4·(-1) 4·(-2) 4·(-3)] = [4 8 12; -4 -8 -12].

5. 2A - 3B. Since A and B are both 2 × 2 matrices, we'll do the scalar multiplication first and then the subtraction. Here we go:
   2A - 3B = [4 6; -2 4] - [-3 6; 18 -6] = [7 0; -20 10].
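If you like checking computations with a computer, here is a minimal sketch of Example 2 in Python (using NumPy, which is an assumption of this presentation and not something the note itself relies on):

    import numpy as np

    A = np.array([[2, 3], [-1, 2]])
    B = np.array([[-1, 2], [6, -2]])
    C = np.array([[1, 2, 3], [-1, -2, -3]])

    print(A + B)        # [[ 1  5] [ 5  0]]
    print(B - A)        # [[-3 -1] [ 7 -4]]
    print(4 * C)        # [[ 4  8 12] [-4 -8 -12]]
    print(2*A - 3*B)    # [[  7   0] [-20  10]]
    # B + C would raise a ValueError: a 2 x 2 and a 2 x 3 matrix cannot be added.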

Matrix arithmetic has some of the same properties as real number arithmetic.

Properties of Matrix Arithmetic
Let A, B, and C be m × n matrices and r, s ∈ R. Then
1. A + B = B + A (Matrix addition is commutative.)
2. A + (B + C) = (A + B) + C (Matrix addition is associative.)
3. r(A + B) = rA + rB (Scalar multiplication distributes over matrix addition.)
4. (r + s)A = rA + sA (Real number addition distributes over scalar multiplication.)
5. (rs)A = r(sA) (An associativity for scalar multiplication.)
6. There is a unique m × n matrix Θ such that for any m × n matrix M, M + Θ = M. (This Θ is called the m × n zero matrix.)
7. For every m × n matrix M there is a unique m × n matrix N such that M + N = Θ. (This N is called the negative of M and is denoted -M.)

Let's prove something. How about that real number addition distributes over scalar multiplication, there is a zero matrix, and each matrix has a negative? You should prove the rest at some point in your life.

Proof
Let A = (aij) be an m × n matrix and r, s ∈ R. By definition, (r + s)A and rA + sA have the same size. Now we must show that their corresponding entries are equal. Let 1 ≤ i ≤ m and 1 ≤ j ≤ n. Then the ij-entry of (r + s)A is (r + s)aij. Using the usual properties of real number arithmetic, we have (r + s)aij = raij + saij, which is the sum of the ij-entries of rA and sA, that is, the ij-entry of rA + sA. Hence (r + s)A = rA + sA.

Let M = (mij) be an m × n matrix and let Θ be the m × n matrix all of whose entries are 0. Notice that M, Θ, and M + Θ all have the same size. Notice that the ij-entry of M + Θ is mij + 0. This is exactly the ij-entry of M. Hence M + Θ = M, so a zero matrix exists. For uniqueness, suppose that Ψ is an m × n matrix with the property that for any m × n matrix C, C + Ψ = C. Then Θ = Θ + Ψ by the property of Ψ. But by the property of Θ, Ψ = Ψ + Θ. Since matrix addition is commutative, we see that Θ = Ψ. Hence Θ is unique.

Now let M = (mij) be an m × n matrix and let N = (-mij). Now this makes sense as each mij is a real number and so its negative is also a real number. Notice that M, N, and M + N have the same size. Now the ij-entry of M + N is mij + (-mij) = 0, the ij-entry of Θ. Hence a desired N exists. For uniqueness suppose that P is an m × n matrix with the property that M + P = Θ. Then

    N = N + Θ          as Θ is the zero matrix
      = N + (M + P)    as M + P = Θ
      = (N + M) + P    by associativity of matrix addition
      = Θ + P          as N + M = Θ
      = P              as Θ is the zero matrix.

Hence this N is unique.
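Here is a quick numerical spot-check of property 4, the zero matrix, and negatives, again as a Python/NumPy sketch offered only as an illustration (the particular scalars are this sketch's choice):

    import numpy as np

    A = np.array([[2, 3], [-1, 2]])      # the matrix A from Example 2
    r, s = 2.0, -5.0
    Theta = np.zeros((2, 2))             # the 2 x 2 zero matrix

    print(np.array_equal((r + s) * A, r * A + s * A))   # True: (r+s)A = rA + sA
    print(np.array_equal(A + Theta, A))                 # True: A + Theta = A
    print(np.array_equal(A + (-1) * A, Theta))          # True: (-1)A is the negative of A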

Now we will multiply matrices, but not in the way you're thinking. We will NEVER just simply multiply the corresponding entries! What we do is an extension of the dot product of vectors. (If you're not familiar with this, don't worry about it.) First we will multiply a row by a column and the result will be a real number (or scalar).

Definition 3
We take a row vector [a1 a2 · · · ap] with p entries and a column vector [b1; b2; . . . ; bp] with p entries and define their product, denoted by [a1 a2 · · · ap][b1; b2; . . . ; bp], to be the real number a1b1 + a2b2 + · · · + apbp.

Notice that we're just taking the sum of the products of the corresponding entries and that we may view a real number as a 1 × 1 matrix. To multiply a row by a column, we must be sure that they have the same number of entries. Let's do a couple of examples.

Example 3
Multiply!
1. [2 3 4][3; 4; 5] = 2(3) + 3(4) + 4(5) = 38.
2. [-1 2 -2 3][2; -2; -1; 2] = -1(2) + 2(-2) + (-2)(-1) + 3(2) = 2.

Now we'll multiply a general matrix by a column. To do such a multiplication, the number of entries in each row must be the number of entries in the column, and then we multiply each row of the matrix by the column. After all, we can think of a matrix as several row vectors of the same size put together. Here is the definition.

Definition 4
Let A be an m × p matrix and b a p × 1 column vector. We define their product, denoted by Ab, to be the m × 1 column vector whose i-th entry, 1 ≤ i ≤ m, is the product of the i-th row of A and b.

Here are a couple of examples.

Example 4
Multiply the matrix by the column.
1. [1 2 3; -2 1 2][1; 2; -3] = [1(1) + 2(2) + 3(-3); -2(1) + 1(2) + 2(-3)] = [-4; -6].
2. [2 -2; 0 3; -1 4][5; -1] = [2(5) + (-2)(-1); 0(5) + 3(-1); -1(5) + 4(-1)] = [12; -3; -9].
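For the computationally inclined, here are the row-times-column and matrix-times-column products of Examples 3 and 4 as a small Python/NumPy sketch (an illustration only, not part of the note):

    import numpy as np

    # Example 3, item 1: a row with 3 entries times a column with 3 entries
    row = np.array([2, 3, 4])
    col = np.array([3, 4, 5])
    print(row @ col)          # 38, a scalar

    # Example 4, item 1: a 2 x 3 matrix times a column with 3 entries
    A = np.array([[1, 2, 3], [-2, 1, 2]])
    b = np.array([1, 2, -3])
    print(A @ b)              # [-4 -6]: each entry is a row of A times b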

Now we can extend this multiplication to appropriately sized arbitrary matrices. We can think of a matrix as several column vectors of the same size put together. This means that the number of columns of our first matrix must be the number of rows of the second.

Definition 5
Let A be an m × p matrix and B a p × n matrix. We define their product, denoted by AB, to be the m × n matrix whose ij-entry, 1 ≤ i ≤ m and 1 ≤ j ≤ n, is the product of the i-th row of A and the j-th column of B.

Here are a few examples.

Example 5
Multiply, if possible.

1. Let A = [1 2; 3 4] and B = [4 -3; -2 1]. Since both matrices are 2 × 2, we can find AB and BA. Here we go:

   AB = [1 2; 3 4][4 -3; -2 1]
      = [1st row of A times 1st column of B   1st row of A times 2nd column of B;
         2nd row of A times 1st column of B   2nd row of A times 2nd column of B]
      = [1(4) + 2(-2)  1(-3) + 2(1);  3(4) + 4(-2)  3(-3) + 4(1)] = [0 -1; 4 -5]   and

   BA = [4 -3; -2 1][1 2; 3 4]
      = [1st row of B times 1st column of A   1st row of B times 2nd column of A;
         2nd row of B times 1st column of A   2nd row of B times 2nd column of A]
      = [4(1) + (-3)(3)  4(2) + (-3)(4);  -2(1) + 1(3)  -2(2) + 1(4)] = [-5 -4; 1 0].

   Did you notice what just happened? We have that AB ≠ BA! Yes, it's true: Matrix multiplication is not commutative. Note that the size of both products is 2 × 2.

2. Can we multiply [2 2 9; -1 0 8] and [1 2 3; 5 2 3]? No. The first matrix is 2 × 3 and the second is also 2 × 3. The number of columns of the first is not the same as the number of rows of the second.

3. Can we multiply [2 2 9; -1 0 8] and [1 2; 5 2; -1 3]? Yes, the first is 2 × 3 and the second is 3 × 2, so their product is 2 × 2. Here we go:

   [2 2 9; -1 0 8][1 2; 5 2; -1 3] = [2(1) + 2(5) + 9(-1)  2(2) + 2(2) + 9(3);  -1(1) + 0(5) + 8(-1)  -1(2) + 0(2) + 8(3)] = [3 35; -9 22].
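Here is a short Python/NumPy sketch (illustration only) of Example 5: it shows AB ≠ BA for the matrices in item 1 and reproduces the 2 × 2 product in item 3:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[4, -3], [-2, 1]])
    print(A @ B)    # [[ 0 -1] [ 4 -5]]
    print(B @ A)    # [[-5 -4] [ 1  0]]  -- not the same matrix, so AB != BA

    # Example 5, item 3: a 2 x 3 matrix times a 3 x 2 matrix gives a 2 x 2 matrix
    M = np.array([[2, 2, 9], [-1, 0, 8]])
    N = np.array([[1, 2], [5, 2], [-1, 3]])
    print(M @ N)    # [[ 3 35] [-9 22]]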

Now we state some nice, and natural, properties of matrix multiplication.

Properties of Matrix Multiplication
Let A, B, and C be matrices of the appropriate sizes and r ∈ R. Then
1. A(BC) = (AB)C (Matrix multiplication is associative.)
2. (rA)B = r(AB) = A(rB) (Scalar multiplication commutes with matrix multiplication.)
3. A(B + C) = AB + AC
4. (A + B)C = AC + BC (Matrix multiplication distributes over matrix addition.)
5. There exists a unique n × n matrix I such that for all n × n matrices M, IM = MI = M. (I is called the n × n identity matrix.)

This is what we mean by appropriate sizes: B and C must be the same size in order to add them, and the number of columns in A must be the number of rows in B and C in order to multiply them. A proof that matrix multiplication is associative would be quite messy at this point. We will just take it to be true. There is an elegant proof, but we need to learn some more linear algebra first. Let's prove the first distributive property and the existence of the identity matrix. You should prove the rest at some point in your life.

Proof
Let A be an m × p matrix and B and C p × n matrices. We have that the two matrices on each side of the equals sign have the same size, namely, m × n. Now we show their corresponding entries are equal. Let 1 ≤ i ≤ m and 1 ≤ j ≤ n. Notice that we're using the k to denote the row numbers of B and C. So let's write A = (aik), B = (bkj), and C = (ckj). The ij-entry of A(B + C) is the product of the i-th row of A and the j-th column of B + C. For simplicity, write the i-th row of A as [a1 a2 · · · ap] and the j-th columns of B and C as [b1; b2; . . . ; bp] and [c1; c2; . . . ; cp], respectively. Then the j-th column of B + C is [b1 + c1; b2 + c2; . . . ; bp + cp]. Multiplying and then using the usual properties of real number arithmetic, we have

    [a1 a2 · · · ap][b1 + c1; b2 + c2; . . . ; bp + cp]
       = a1(b1 + c1) + a2(b2 + c2) + · · · + ap(bp + cp)
       = a1b1 + a1c1 + a2b2 + a2c2 + · · · + apbp + apcp
       = (a1b1 + a2b2 + · · · + apbp) + (a1c1 + a2c2 + · · · + apcp).

Notice that the expressions in parentheses are the products of the i-th row of A with each of the j-th columns of B and C and that the sum of these two is the ij-entry of AB + AC. So we're done.
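A quick numerical spot-check of the distributive property and of the identity matrix, as a Python/NumPy sketch; the matrix C below is just one convenient choice for this illustration, not a matrix from the note:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[4, -3], [-2, 1]])
    C = np.array([[2, 0], [1, 5]])    # any 2 x 2 matrix works here

    print(np.array_equal(A @ (B + C), A @ B + A @ C))   # True: A(B + C) = AB + AC

    I = np.eye(2, dtype=int)          # the 2 x 2 identity matrix
    print(np.array_equal(I @ A, A), np.array_equal(A @ I, A))   # True True: IA = AI = A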

. Thus IM = M. More explicitly.. .  Now we return to systems of linear equations.  other entries 00 s. We can express this system as a matrix equation Ax = b. un system: Just check whether or not Au = b. So when we multiply this i-th row of I by the j-th column of M. and b =  . . ..  . Notice that the i-th row of I is the row vector whose i-th entry is 1 and all others 0’s. ... The matrix A we need is the matrix of the coefficients in the system and the x is the column vector  the b is  thecolumn vector  of the constants.. This is exactly how we multiply a row by a column. Thus MI = M. the only entry in the column that gets multiplied by the 1 is mij . . . . . the only entry in the row that gets multiplied the 1 is mij . . . . Let’s do an example. x + 2y − z = −1 1. . 1 0 0 0 ··· 0 0 1 0 0 · · · 0    0 0 1 0 · · · 0   . So x1 b1 a11 a12 · · · a1n  x2   b2   a21 a22 · · · a2n        have A =  . . .  am1 am2 · · · amn xn bm represents the system of linear equations. Write it as a matrix equation. . . . . you ask. u1  u2    It also provides a more convenient way of determining whether or not  . 0 0 0 0 ··· 1 IM and MI are both n × n. So following the above. Since M and I are both n × n. . we  of the variables. How?. . The proof that I is unique is quite similar to that of the zero matrix.  . . thier products ... . I =  . So our matrix equation Ax = b . . Here’s a generic one: a11x1 + a12x2 + · · · + a1n xn = b1 a21x1 + a22x2 + · · · + a2n xn = b2 . Now notice that the j-th column of I is the column vector whose j-th entry is a 1 and all others 0’s. i. .. .. ..  .. x =  . 7 . Example 6 Consider the linear system 2x − y =0 x +z =4 .   . . Just look at each equation: We’re multiplying a’s by x’s and adding them up.. we let A be the matrix of coefficients... So when we multiply the i-th row of M by the j-th column of I.. which is a much more conciseway of writing a system.. . am1x1 + am2x2 + · · · + amn xn = bm . And we’re done.e.  is a solution to the .

Example 6
Consider the linear system

    2x -  y      =  0
     x      +  z =  4
     x + 2y - 2z = -1.

1. Write it as a matrix equation. Following the above, we let A be the matrix of coefficients, x the column vector of the variables, and b the column vector of the constants. We have

    A = [2 -1 0; 1 0 1; 1 2 -2],   x = [x; y; z],   b = [0; 4; -1],

   and the equation is [2 -1 0; 1 0 1; 1 2 -2][x; y; z] = [0; 4; -1].

2. Determine whether or not [1; 2; 3] is a solution to the system. Let u = [1; 2; 3]. Then multiplying, we have

    Au = [2 -1 0; 1 0 1; 1 2 -2][1; 2; 3] = [2(1) - 1(2) + 0(3); 1(1) + 0(2) + 1(3); 1(1) + 2(2) - 2(3)] = [0; 4; -1] = b.

   So [1; 2; 3] is a solution to the system.

3. Determine whether or not [2; 4; 2] is a solution. Let v = [2; 4; 2]. Then multiplying, we have

    Av = [2 -1 0; 1 0 1; 1 2 -2][2; 4; 2] = [2(2) - 1(4) + 0(2); 1(2) + 0(4) + 1(2); 1(2) + 2(4) - 2(2)] = [0; 4; 6] ≠ b.

   So [2; 4; 2] is not a solution to the system.

Invertible Matrices

We know that every nonzero real number x has a multiplicative inverse, namely x⁻¹ = 1/x, as xx⁻¹ = 1. Is there an analogous inverse for matrices? That is, for any nonzero square matrix A, is there a matrix B such that AB = BA = I, where I is the appropriate identity matrix? Consider the specific nonzero matrix A = [1 0; 0 0]. Let's try to find its B. So let B = [a b; c d]. Then AB = [a b; 0 0]. The matrix AB has a row of 0's, so it can never be the identity. So the answer to our question is no. Thanks for asking. Here, then, is a definition:

Definition 6
We will say that an n × n matrix A is invertible provided there is an n × n matrix B for which AB = BA = I. This B is called an inverse of A.

Notice that II = I, so there is at least one invertible matrix for each possible size. Are there more? Why, yes, there are. Here's one now.

Example 7
Let A = [2 1; 1 1] and B = [1 -1; -1 2]. Multiplying, we get that

    AB = [2 1; 1 1][1 -1; -1 2] = [2(1) + 1(-1)  2(-1) + 1(2);  1(1) + 1(-1)  1(-1) + 1(2)] = [1 0; 0 1]   and
    BA = [1 -1; -1 2][2 1; 1 1] = [1(2) - 1(1)  1(1) - 1(1);  -1(2) + 2(1)  -1(1) + 2(1)] = [1 0; 0 1].

So A is invertible. Perhaps you've noticed that B is also invertible.
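Here is a short check of Example 7 in Python/NumPy (illustration only):

    import numpy as np

    A = np.array([[2, 1], [1, 1]])
    B = np.array([[1, -1], [-1, 2]])
    I = np.eye(2, dtype=int)
    print(np.array_equal(A @ B, I), np.array_equal(B @ A, I))   # True True: B is an inverse of A

    # By contrast, [[1, 0], [0, 0]] times any B keeps a row of zeros, so it is never the identity.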

As you may have guessed, there are lots of invertible matrices. But there are also lots of matrices that are not invertible. Before determining a method to determine whether or not a matrix is invertible, let's state and prove a few results about invertible matrices. Next we prove a theorem about the inverses of matrices.

Theorem 1
If B and C are two inverses for A, then B = C. In other words, the inverse of a square matrix, if it exists, is unique.

Proof
Let A, B, and C be n × n matrices. Assume that A is invertible and B and C are its inverses. So we have that AB = BA = AC = CA = I. Now

    B = BI = B(AC) = (BA)C = IC = C.

Notice that we used the associativity of matrix multiplication here.

Now we can refer to the inverse of a square matrix A and so we will write its inverse as A⁻¹ and read it as "A inverse". In this case we have AA⁻¹ = A⁻¹A = I.

Proposition 1
Let A be an n × n invertible matrix. Then A⁻¹ is invertible and (A⁻¹)⁻¹ = A.

Proof
Let A be an n × n invertible matrix. Then AA⁻¹ = A⁻¹A = I. So A satisfies the definition for A⁻¹ being invertible. Thus (A⁻¹)⁻¹ = A.

Proposition 2
Let A and B be n × n invertible matrices. Then AB is invertible and (AB)⁻¹ = B⁻¹A⁻¹.

Proof
Let A and B be n × n invertible matrices. To show that AB is invertible, we will just multiply, taking full advantage that matrix multiplication is associative:

    (AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AIA⁻¹ = AA⁻¹ = I   and
    (B⁻¹A⁻¹)(AB) = B⁻¹(A⁻¹A)B = B⁻¹IB = B⁻¹B = I.

Hence AB is invertible and its inverse is B⁻¹A⁻¹. Note that we reverse the order of the inverses in the product. This is due to the fact that matrix multiplication is not commutative.

The proof of the following corollary is a nice exercise using induction.

Corollary to Proposition 2
If A1, A2, . . . , Am are invertible n × n matrices, then A1A2 · · · Am is invertible and (A1A2 · · · Am)⁻¹ = Am⁻¹ · · · A2⁻¹A1⁻¹.
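A numerical spot-check of Proposition 2, as a Python/NumPy sketch; numpy.linalg.inv and the matrix B below are this illustration's choices, not the note's method (the note computes inverses by row reduction, coming up next):

    import numpy as np

    A = np.array([[2., 1.], [1., 1.]])
    B = np.array([[1., 2.], [3., 7.]])          # another invertible 2 x 2 matrix, chosen for the demo

    lhs = np.linalg.inv(A @ B)
    rhs = np.linalg.inv(B) @ np.linalg.inv(A)   # note the reversed order of the inverses
    print(np.allclose(lhs, rhs))                # True: (AB)^(-1) = B^(-1) A^(-1)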

How do we know if a matrix is invertible or not? The following theorem tells us. All vectors in Rn will be written as columns.

The Invertible Matrix Theorem (IMT), Part I
Let A be an n × n matrix, I the n × n identity matrix, and θ the zero vector in Rn. Then the following are equivalent.
1. A is invertible.
2. The reduced row echelon form of A is I.
3. For any b ∈ Rn the matrix equation Ax = b has exactly one solution.
4. The matrix equation Ax = θ has only x = θ as its solution.

What we mean by "the following are equivalent" is that for any two of the statements, one is true if and only if the other is true. In the proof we take advantage of the fact that the logical connective "implies" is transitive, that is, if P ⇒ Q and Q ⇒ R, then P ⇒ R. We will prove that (2) ⇒ (1), (1) ⇒ (3), (3) ⇒ (4), and (4) ⇒ (2).

Proof
Let A be an n × n matrix, I the n × n identity matrix, and θ the zero vector in Rn.

(2) ⇒ (1): Assume that the reduced echelon form of A is I. We wish to find an n × n matrix B so that AB = BA = I. Recall that we can view the multiplication of two matrices as the multiplication of a matrix by a sequence of columns. In this way finding a matrix B for which AB = I is the same as solving the n systems of linear equations whose matrix equations are given by Axj = ej, where ej is the j-th column of I, for 1 ≤ j ≤ n. To solve each system, we must reduce the augmented matrix (A|ej). Since the reduced echelon form of A is I, each of these systems has a unique solution. For each 1 ≤ j ≤ n the solution to Axj = ej is the j-th column of B. Notice, however, that it is the A part of the augmented matrix that dictates the row operations we must use. In other words, each ej is just along for the ride. This suggests that in practice we can reduce the giant augmented matrix (A|I) until the A part is in its reduced echelon form, which we assumed is I. Hence we can reduce (A|I) to (I|B) for some n × n matrix B. Thus AB = I. Now since matrix multiplication is not commutative, we must still show that BA = I. Since we have reduced that giant augmented matrix (A|I) to (I|B), we have in fact reduced I to B. By Lemma 1.6 in Chapter One Section III, "reduces to" is an equivalence relation. Since we can reduce I to B, we can reduce B to I, that is, the reduced echelon form of B is I. The previous argument then shows that there is an n × n matrix C for which BC = I. Then A = AI = A(BC) = (AB)C = IC = C. Hence BA = I. Thus A is invertible.

(1) ⇒ (3): Now we assume that A is invertible. Let b ∈ Rn and consider A⁻¹b. Since A⁻¹ is n × n and b ∈ Rn, A⁻¹b ∈ Rn. Now A(A⁻¹b) = (AA⁻¹)b = Ib = b. Technically we showed that I is the identity for square matrices, but since we can make a square matrix whose columns are all b, we see that Ib is indeed b. We have just shown that the equation Ax = b has a solution. For uniqueness, suppose that u ∈ Rn is another one. Then we have

    Au = b
    A⁻¹(Au) = A⁻¹b
    (A⁻¹A)u = A⁻¹b
    Iu = A⁻¹b
    u = A⁻¹b.

Hence the only solution is x = A⁻¹b.

(3) ⇒ (4): Since θ is a particular vector in Rn, we automatically have that if for any b ∈ Rn the matrix equation Ax = b has exactly one solution, then the matrix equation Ax = θ has only x = θ as its solution.

(4) ⇒ (2): Now to complete the proof, we must show that if the matrix equation Ax = θ has only x = θ as its solution, then the reduced echelon form of A is I. We do so by contraposition. Assume that the reduced echelon form of A is not I. Since A is square, its reduced echelon form must contain a row of zeroes. In solving the homogeneous system of linear equations corresponding to Ax = θ, the augmented matrix (A|θ) will have an entire row of zeroes when A has been reduced to its reduced echelon form. As the number of equations and unknowns are the same, the system must have a free variable. This means that the system has more than one solution (in fact it has infinitely many, but who's counting?). Hence the matrix equation Ax = θ has more than one solution. Thus by contraposition we have proved that if the matrix equation Ax = θ has only x = θ as its solution, then the reduced echelon form of A is I. Therefore we have proved the theorem.

The first part of the proof provides a method for determining whether or not a matrix is invertible and, if so, finding its inverse: Given an n × n matrix A, we form the giant augmented matrix (A|I) and row reduce it until the A part is in reduced echelon form. If this form is I, then we know that A is invertible and the matrix in the I part is its inverse. If this form is not I, then A is not invertible.

Example 8
1. Determine whether or not each matrix is invertible and, if so, find its inverse.

   (a) Let A = [2 1; 1 -1]. We form the giant augmented matrix (A|I) and row reduce:

       [2 1 | 1 0; 1 -1 | 0 1] ~ [1 -1 | 0 1; 2 1 | 1 0] ~ [1 -1 | 0 1; 0 3 | 1 -2] ~ [1 -1 | 0 1; 0 1 | 1/3 -2/3] ~ [1 0 | 1/3 1/3; 0 1 | 1/3 -2/3].

       So we see that the reduced echelon form of A is the identity. Thus A is invertible and A⁻¹ = [1/3 1/3; 1/3 -2/3]. We can rewrite this inverse a bit more nicely by factoring out the 1/3: A⁻¹ = (1/3)[1 1; 1 -2].

   (b) Let A = [1 0 2; -1 1 -2; 2 2 1]. We form the giant augmented matrix (A|I) and row reduce:

       [1 0 2 | 1 0 0; -1 1 -2 | 0 1 0; 2 2 1 | 0 0 1] ~ [1 0 2 | 1 0 0; 0 1 0 | 1 1 0; 0 2 -3 | -2 0 1] ~ [1 0 2 | 1 0 0; 0 1 0 | 1 1 0; 0 0 -3 | -4 -2 1] ~
       [1 0 2 | 1 0 0; 0 1 0 | 1 1 0; 0 0 1 | 4/3 2/3 -1/3] ~ [1 0 0 | -5/3 -4/3 2/3; 0 1 0 | 1 1 0; 0 0 1 | 4/3 2/3 -1/3].

       So we see that A is invertible and A⁻¹ = [-5/3 -4/3 2/3; 1 1 0; 4/3 2/3 -1/3]. Notice all of those thirds in the inverse? Factoring out a 1/3, we get A⁻¹ = (1/3)[-5 -4 2; 3 3 0; 4 2 -1].
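The row-reduction recipe is easy to mechanize. The following bare-bones Gauss-Jordan sketch in plain Python is offered only as an illustration of the method just described, applied to the matrix of Example 8(b); it is not part of the note:

    def inverse_by_row_reduction(A, tol=1e-12):
        """Return the inverse of A by reducing (A|I), or None if A is not invertible."""
        n = len(A)
        # Build the giant augmented matrix (A | I) with float entries.
        M = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
             for i, row in enumerate(A)]
        for col in range(n):
            # Find a row at or below 'col' with a nonzero pivot and swap it up.
            pivot = next((r for r in range(col, n) if abs(M[r][col]) > tol), None)
            if pivot is None:
                return None                       # the reduced echelon form is not I
            M[col], M[pivot] = M[pivot], M[col]
            p = M[col][col]
            M[col] = [x / p for x in M[col]]      # scale the pivot row
            for r in range(n):                    # clear the rest of the column
                if r != col and abs(M[r][col]) > tol:
                    factor = M[r][col]
                    M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
        return [row[n:] for row in M]             # the I part now holds the inverse

    A = [[1, 0, 2], [-1, 1, -2], [2, 2, 1]]       # Example 8(b)
    print(inverse_by_row_reduction(A))            # rows approx (-5/3, -4/3, 2/3), (1, 1, 0), (4/3, 2/3, -1/3)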

   (c) Let B = [1 -1; -1 1]. We form the giant augmented matrix (B|I) and row reduce:

       [1 -1 | 1 0; -1 1 | 0 1] ~ [1 -1 | 1 0; 0 0 | 1 1].

       Since the reduced echelon form of B is not I, B is not invertible.

2. Solve the following system of linear equations using the inverse of the matrix of coefficients.

        x      + 2z =  3
       -x +  y - 2z = -3
       2x + 2y +  z =  6.

   Notice that the coefficient matrix is, conveniently, the matrix A from part (b) above, whose inverse we've already found. As seen in the proof of the theorem, if A is invertible, then the solution is x = A⁻¹b. Specifically, we express the system as a matrix equation Ax = b, where A is the matrix of the coefficients. The matrix equation for this system is Ax = b where x = [x; y; z] and b = [3; -3; 6]. Then

       x = A⁻¹b = (1/3)[-5 -4 2; 3 3 0; 4 2 -1][3; -3; 6] = [3; 0; 0].

The following theorem tells us that if the product of two square matrices is the identity, then they are in fact inverses of each other.

Theorem 2
Let A and B be n × n matrices and I the n × n identity matrix. If AB = I, then A and B are invertible and A⁻¹ = B.

Proof
Let A and B be n × n matrices, I the n × n identity matrix, and θ the zero vector in Rn. Assume AB = I. We will use the IMT Part I to prove that B is invertible first. Consider the matrix equation Bx = θ and let u ∈ Rn be a solution. So we have

    Bu = θ
    A(Bu) = Aθ
    (AB)u = θ
    Iu = θ
    u = θ.

The only solution to Bx = θ is x = θ. Hence by the IMT Part I, B is invertible. So B⁻¹ exists. Then multiplying both sides of AB = I on the right by B⁻¹ gives us that A = B⁻¹. Since B⁻¹ is invertible, A is too and A⁻¹ = (B⁻¹)⁻¹ = B.
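Here is the same solve as a Python/NumPy sketch (illustration only), using the inverse found in Example 8(b):

    import numpy as np

    A_inv = np.array([[-5, -4,  2],
                      [ 3,  3,  0],
                      [ 4,  2, -1]]) / 3.0     # the inverse from Example 8(b)
    b = np.array([3, -3, 6])
    print(A_inv @ b)                           # approximately [3. 0. 0.], so x = 3, y = 0, z = 0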

Transposes

We finish this section off with what's called the transpose of a matrix. Here's the definition.

Definition 7
Let A = (aij) be an m × n matrix. The transpose of A, denoted by A^T, is the matrix whose i-th column is the i-th row of A, or equivalently, whose j-th row is the j-th column of A. We will write A^T = (aTji) where aTji = aij. Notice that A^T is an n × m matrix.

Here are a couple of examples.

Example 9
1. Let A = [1 2 3; 4 5 6]. The first row of A is [1 2 3] and the second row is [4 5 6]. So these become the columns of A^T:

       A^T = [1 4; 2 5; 3 6].

   Alternatively, we see that the columns of A are [1; 4], [2; 5], and [3; 6]. So these become the rows of A^T.

2. Let B = [-1 0 2; -4 1 9; 2 3 0]. We make the rows of B the columns of B^T. Doing so, we get

       B^T = [-1 -4 2; 0 1 3; 2 9 0].

Note that the main diagonals of a matrix and its transpose are the same.
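And a quick Python/NumPy spot-check of the transpose, including the product rule proved next (illustration only; the matrices are the ones from Example 9):

    import numpy as np

    A = np.array([[1, 2, 3], [4, 5, 6]])
    B = np.array([[-1, 0, 2], [-4, 1, 9], [2, 3, 0]])
    print(np.array_equal(A.T.T, A))               # True: (A^T)^T = A
    print(np.array_equal((A @ B).T, B.T @ A.T))   # True: (AB)^T = B^T A^T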

Proposition 3
Let A and B be appropriately sized matrices and r ∈ R. Then
1. (A^T)^T = A
2. (A + B)^T = A^T + B^T
3. (rA)^T = rA^T
4. (AB)^T = B^T A^T

Proof
The first three properties seem perfectly natural. Some time when you get bored, you should try to prove them. But what about that multiplication one? Does that seem natural? Maybe. Given how the inverse of the product of matrices works, maybe this is fine. Let's prove it.

Let A be an m × p matrix and B a p × n matrix. Then AB is an m × n matrix. This means that (AB)^T is an n × m matrix. Then B^T is an n × p matrix and A^T is a p × m matrix. So multiplying B^T A^T makes sense and its size is also n × m. But what about their corresponding entries? The ji-entry of (AB)^T is the ij-entry of AB, which is the i-th row of A times the j-th column of B. For simplicity, let [a1 a2 · · · ap] be the i-th row of A and [b1; b2; . . . ; bp] the j-th column of B. Then the ij-entry of AB is a1b1 + a2b2 + · · · + apbp, but this is also equal to b1a1 + b2a2 + · · · + bpap, which is the product of [b1 b2 · · · bp] and [a1; a2; . . . ; ap]. This is exactly the product of the j-th row of B^T and the i-th column of A^T, which is the ji-entry of B^T A^T. Thus the ji-entry of (AB)^T is the ji-entry of B^T A^T. Therefore (AB)^T = B^T A^T.

Here are some exercises. Enjoy.

1. Let A, B, C, D, E, F, and G be the following matrices:

       1 -1 0 -1 5 1 0 2 1 1 -1    3 4    2    E = -2 3    B = 5 -1    C =    D =    F =    -3 0 1    and G = 2 -1

   Compute each of the following and simplify, whenever possible. If a computation is not possible, state why.

   (a) 3C - 4D          (e) 3BC - 4BD
   (b) A - (D + 2C)     (f) CB + D
   (c) A - E            (g) GC
   (d) AE               (h) FG

2. We have seen an example of two 2 × 2 matrices A and B for which AB ≠ BA, showing us that matrix multiplication is not commutative. However, it's more than just not commutative, it's soooooo not commutative. Doing the following problems will illustrate what we mean by this.

   (a) Find two matrices A and B for which AB and BA are defined, but have different sizes.
   (b) Find two matrices A and B for which AB is defined, but BA is not.

3. Illustrate the associativity of matrix multiplication by multiplying (AB)C and A(BC), where A, B, and C are matrices above.

4. Let A be an n × n matrix and let m ∈ N. As you would expect, we define A^m to be the product of A with itself m times. Notice that this makes sense as matrix multiplication is associative.

   (a) Compute A^4 for A = [1 -2; 1 -1].
   (b) Provide a counter-example to the statement: For any 2 × 2 matrices A and B, (AB)² = A²B².

5. Prove that for all m × n matrices A, B, and C, if A + C = B + C, then A = B.

6. Let Θ be the m × n zero matrix. Prove that for any m × n matrix A, -A = (-1)A and 0A = Θ.

7. Let Θ be the m × n zero matrix. Prove that for any r ∈ R and m × n matrix A, if rA = Θ, then r = 0 or A = Θ.

8. Let Θ be the 2 × 2 zero matrix and I the 2 × 2 identity matrix. Provide a counter-example to each of the following statements:

   (a) For any 2 × 2 matrices A and B, if AB = Θ, then A = Θ or B = Θ.
   (b) For any 2 × 2 matrices A, B, and C, if AB = AC, then B = C.
   (c) For any 2 × 2 matrix A, if A² = A, then A = Θ or A = I.

9. Solve the systems of linear equations using the inverse of the coefficient matrix.

   (a) 2x + y - z = 2
       2x - y + 2z = -1
       x + y - z = 3

   (b) x1 + 2x2 + x3 = 2
       x2 - x3 + x4 = -2
       2x1 + 4x2 + 3x3 = 0
       x1 + 2x2 - x3 + 2x4 = -4

10. Determine whether or not each of the following matrices is invertible. If so, find its inverse.

   (a) A = [1 2; 4 7]
   (b) B = [2 1 -1; 2 -1 2; 1 1 -1]
   (c) C = [4 2 -2 0; -1 3 3 8; -6 -3 -5 -5; 0 2 3 3]
   (d) D^T D, where D = [-1 0 1; 0 2 1]

11. Consider the system of linear equations:

       x1 + x2 + x3 + x4 = 3
       x1 - x2 + x3 + x4 = 5
            x2 - x3 - x4 = -4
       x1 + x2 - x3 - x4 = -3.

   (a) Express the system as a matrix equation.
   (b) Use matrix multiplication to determine whether or not u = [1; -1; 1; 2] and v = [-1; 2; 0; 3] are solutions to the system.

12. Suppose that we have a homogeneous system of linear equations whose matrix equation is given by Ax = θ, where A is the m × n matrix of coefficients, x is the column matrix of the n variables, and θ is the column matrix of the m zeroes. Use the properties of matrix arithmetic to show that for any solutions u and v to the system and r ∈ R, u + v and ru are also solutions.

13. Let A be an n × n invertible matrix.

   (a) Prove that for all m ∈ N, A^m is invertible, and determine a formula for its inverse in terms of A⁻¹. Hint: Use induction and the fact that A^(m+1) = A^m A.
   (b) Prove that A^T is invertible, and determine its inverse in terms of A⁻¹. Hint: Use Proposition 3.

14. Provide a counter-example to the statement: For any n × n invertible matrices A and B, A + B is invertible.

15. Using the matrices in Problem #1, compute each of the following, if possible.

   (a) 3A - 2E^T         (d) 3A^T - 2E
   (b) A^T B             (e) CC^T + FG
   (c) D^T(F + G^T)      (f) (F^T + G)D

16. Find an example of a 2 × 2 nonidentity matrix whose inverse is its transpose.

17. Determine (B^T)⁻¹ for the matrix B in Problem #10.

18. We say that an n × n matrix A is symmetric provided A^T = A, and we say that A is skew-symmetric provided A^T = -A.

   (a) Give examples of symmetric and skew-symmetric 2 × 2, 3 × 3, and 4 × 4 matrices.
   (b) What can you say about the main diagonal of a skew-symmetric matrix?
   (c) Prove that for any n × n matrix A, A + A^T, AA^T, and A^T A are symmetric and A - A^T is skew-symmetric.
   (d) Prove that any n × n matrix can be written as the sum of a symmetric and a skew-symmetric matrix. Hint: Did you do part (c) yet?