bamunoba@gmail.com
September 1, 2015
Contents
1 Matrices .................................... 4
  1.1 Matrices ................................ 4
      1.1.1 Matrices .......................... 4
  1.2 Determinants ............................ 16
      1.2.1 Permutations ...................... 16
      1.2.2 Determinants ...................... 16
References .................................... 22
Introduction
Engineering mathematics is a branch of applied mathematics that concerns the mathematical methods and techniques typically used in engineering and industry. Along with fields like engineering physics and engineering geology (both of which may belong to the wider category of engineering science), it is an interdisciplinary subject, motivated by engineers' needs for practical, theoretical and other considerations outwith their specialization, and by the need to deal with constraints to be effective in their work. It is an art of applying mathematics, drawing on topics from pure mathematics, mathematical physics, applied mathematics and computational mathematics as well as statistics. With this in mind, engineering mathematics is therefore a creative and exciting discipline, spanning traditional boundaries. Engineering mathematicians can be found in an extraordinarily wide range of careers, from designing next-generation Formula One cars to working at the cutting edge of robotics, from running their own businesses to creating new autonomous vehicles.
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at some American and European universities, and fluid mechanics may still be taught there.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Course Description
According to the College of Engineering, Design, Art and Technology (CEDAT) handbook, the subject is meant to equip students with analytical skills for the study of more advanced subjects. To achieve this, the subject has been divided into 4 sometimes overlapping courses labelled Engineering Mathematics I, II, III and IV. The first of these covers differential and integral calculus, with some elements of matrix theory. Below we spell out its objectives.
Objectives
To provide the mathematical tools needed in other semesters' course units.
To develop the analytical and critical thinking abilities fundamental to problem solving.
Course Content
Definition, properties, range and domain of the elementary real valued functions; the concept of a limit of a real valued function; continuity; indeterminate forms and l'Hôpital's rules. Cartesian and polar representations, absolute values; algebra of complex numbers, i.e., products, powers and quotients; extraction of roots; De Moivre's Theorem. Integration. The definite integral: its interpretation as area under a curve and its applications such as: length of a curve, area bounded between curves, volume of revolution, moments. Improper integrals and their evaluation using limits; integration of a continuous function. Definitions: scalars, vectors, unit vector, and dimensionality. Operations on vectors: addition, subtraction, multiplication, dot and cross products, position and distance vectors.
Learning Outcomes
On completing the course the student should be able to:
Relate mathematics to the physical world, providing a sound basis for later specialization.
Assessment consists of continuous interim assessments (assignments and tests) and a final examination. Interim assessment will carry a total of 40% and the final examination will carry 60% of the final mark.
These notes contain more than 200 carefully selected problems, intended to help the reader better understand the material and develop skills and intuition in engineering mathematics. Some problems are very simple, to encourage the beginner. The development of the material does not depend on the problems, and omission of some or all of them does not destroy the continuity. We strongly remind students following the course that these notes are not a substitute for lectures.
1. Matrices
1.1 Matrices
In working with a system of linear equations, or with physical, economic and biological models that can be transformed into systems of linear equations, only the coefficients and their respective positions are important. Also, when reducing to echelon form, it is essential to keep the equations carefully aligned. Therefore, these coefficients can be efficiently arranged in a rectangular array of numbers called a matrix. Moreover, in Linear Algebra, certain abstract objects are introduced, such as change of basis, linear operators and bilinear forms, which can also be represented by these rectangular arrays. In this chapter, we shall study these rectangular arrays, i.e., matrices, and certain algebraic operations defined on them. The material introduced here is mainly computational. However, as with linear equations, the abstract treatment presented in the subjects of Linear Algebra, Functional Analysis, Abstract Algebra and many others gives more insight into the structure of these matrices.
1.1.1 Matrices
Let K be an arbitrary field (for this course K will be R or C). A rectangular array of the form

    [ a11 a12 · · · a1n ]
    [ a21 a22 · · · a2n ]
    [  ·   ·  · · ·  ·  ]
    [ am1 am2 · · · amn ]

where the aij are scalars, is called a matrix over K, or simply a matrix if K is implicit. It is also denoted by (aij), i = 1, . . . , m and j = 1, . . . , n, or simply (aij). The m horizontal n-tuples are called the rows of the matrix and the n vertical m-tuples are its columns. The element aij is called the ij-entry or ij-component; it appears in the ith row and jth column. A matrix with m rows and n columns is called an m by n matrix or m×n matrix; the pair (m, n) is called its size or shape. Matrices will be denoted by upper case letters A, B, . . . and the elements of the field by lower case letters a, b, . . .. For example, the above matrix is denoted by A = (aij), where aij ∈ K.
Two matrices A and B are equal, written A = B, if they have the same shape and if the corresponding elements are equal, i.e., aij = bij for all i = 1, . . . , m and j = 1, . . . , n. Therefore, equality of two m×n matrices is equivalent to a system of mn equalities, one for each pair of entries.
A matrix with one row is also called a row vector and one with one column is called a column vector.
In particular, an element of the field K can be viewed as a 1×1 matrix. A matrix with all its entries or components equal to 0 is called a zero matrix, and shall be denoted by 0. There are several types of matrices, e.g., square and non-square matrices, but we shall discuss them as we go along. Let A and B be two matrices with the same size, say m × n. The sum of A and B is the matrix A+B obtained by summing corresponding entries, A + B = (aij + bij). The product of the scalar k and the matrix A, written k·A or simply kA, is the matrix obtained by multiplying each entry of A by k, i.e., kA = (k aij).
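Entrywise addition and scalar multiplication are easy to experiment with on a computer. The following sketch uses NumPy (our choice of tool, not part of the notes) on two 2×3 matrices:

```python
import numpy as np

# Two matrices of the same shape (2 x 3).
A = np.array([[1, -1, 2],
              [0,  3, 4]])
B = np.array([[ 4,  0, -3],
              [-1, -2,  3]])

# Matrix addition is entrywise: (A + B)_ij = a_ij + b_ij.
S = A + B

# Scalar multiplication multiplies every entry: (kA)_ij = k * a_ij.
k = 3
P = k * A

print(S.tolist())  # [[5, -1, -1], [-1, 1, 7]]
print(P.tolist())  # [[3, -3, 6], [0, 9, 12]]
```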
Basic properties of matrices under the operations of matrix addition and scalar multiplication follow.
1.1.1 Theorem. Let V be the set of all m × n matrices over a field K. Then for any matrices
A, B, C ∈ V and any scalars k, k1, k2 ∈ K,
1. A + (B + C) = (A + B) + C
2. A + 0 = A
3. A + (−A) = 0
4. A + B = B + A
5. k(A + B) = kA + kB
6. (k1 + k2 )A = k1 A + k2 A
7. (k1 k2 )A = k1 (k2 A)
8. 1 · A = A and 0 · A = 0.
The above results show that V has a vector space structure, (or is a vector space) over K.
The product of two matrices A and B, written AB is somewhat complicated. Formally, suppose
A = (aij ) and B = (bij ) are matrices such that the number of columns of A is equal to the number
of rows in B; say A is an m×p matrix and B is a p×n matrix. Then the product AB is the m×n
matrix whose ij -entry is obtained by multiplying the ith row Ai of A by the j th column Bj of B,
i.e., AB = (Ai · B^j), where Ai · B^j = a_i1 b_1j + a_i2 b_2j + · · · + a_ip b_pj = Σ_{k=1}^{p} a_ik b_kj ; that is, if C = AB, then c_ij = Σ_{k=1}^{p} a_ik b_kj . We emphasize that AB is not defined if A is an m × p matrix and B is a q × n matrix with p ≠ q.
Consider

    A = [ 1 2 ]   and   B = [ −1 2 ].
        [ 3 2 ]             [  0 2 ]

It is easy to see that both AB and BA are defined:

    AB = [ 1 2 ][ −1 2 ] = [ 1(−1)+2(0)  1(2)+2(2) ] = [ −1  6 ],
         [ 3 2 ][  0 2 ]   [ 3(−1)+2(0)  3(2)+2(2) ]   [ −3 10 ]

    BA = [ −1 2 ][ 1 2 ] = [ (−1)(1)+2(3)  (−1)(2)+2(2) ] = [ 5 2 ].
         [  0 2 ][ 3 2 ]   [  0(1)+2(3)    0(2)+2(2)    ]   [ 6 4 ]

It is clear that AB ≠ BA. (We are even lucky here: in general BA may not even be defined.) In general, matrix multiplication is not commutative, i.e., AB ≠ BA. It does however satisfy the following properties.
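The same computation can be replayed with NumPy's `@` operator (a sketch, not part of the notes); it confirms that AB and BA differ:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 2]])
B = np.array([[-1, 2],
              [ 0, 2]])

# The ij-entry of AB is the product of the ith row of A with the jth column of B.
AB = A @ B
BA = B @ A

print(AB.tolist())  # [[-1, 6], [-3, 10]]
print(BA.tolist())  # [[5, 2], [6, 4]]
print(bool(np.array_equal(AB, BA)))  # False: multiplication is not commutative
```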
1.1.2 Theorem. Let A, B and C be conformable matrices over a field K and let k be a scalar. Then
1. A(BC) = (AB)C .
2. A(B + C) = AB + AC .
3. (B + C)A = BA + CA.
We assume that the sums and products in the above theorem are defined. Moreover, 0 · A = 0 and B · 0 = 0.
The transpose of a matrix A, written AT, is the matrix obtained by writing the rows of A, in order, as columns: if A is an m×n matrix, then AT is an n×m matrix. The transpose operation on matrices satisfies the following properties.
1.1.3 Theorem. For any matrices A, B (of appropriate sizes) and any scalar k:
1. (A + B)T = AT + B T .
2. (AT )T = A
3. (kA)T = kAT
4. (AB)T = B T AT .
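Property 4, the reversal of order under transposition, can be checked numerically; a NumPy sketch on the 2×2 matrices used earlier:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 2]])
B = np.array([[-1, 2],
              [ 0, 2]])

# (AB)^T = B^T A^T: transposing a product reverses the order of the factors.
lhs = (A @ B).T
rhs = B.T @ A.T
print(bool(np.array_equal(lhs, rhs)))        # True

# Note that (AB)^T is generally NOT equal to A^T B^T.
print(bool(np.array_equal(lhs, A.T @ B.T)))  # False for this pair
```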
1. Let

    A = [ 1 −1 2 ],  B = [  4  0 −3 ],  C = [  2 −3  0 1 ],  D = [  2 ].
        [ 0  3 4 ]       [ −1 −2  3 ]       [  5 −1 −4 2 ]       [ −1 ]
                                            [ −1  0  0 3 ]       [  3 ]
Find
(a) i. A+B
ii. A+C
iii. 3A − 4B
(b) i. AB
ii. AC
iii. AD
iv. BC
v. BD
vi. CD
(c) i. AT
ii. AT C
iii. D T AT
iv. BT A
v. DT D
vi. DDT
2. Construct suitable examples of matrices and verify the results in Theorems 1.1.1 and 1.1.2.
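Exercise 2 can be carried out mechanically; this sketch (ours, using NumPy with random integer matrices) verifies several identities from Theorems 1.1.1 and 1.1.2:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (2, 3))
B = rng.integers(-5, 5, (2, 3))
C = rng.integers(-5, 5, (3, 4))
D = rng.integers(-5, 5, (4, 2))
k1, k2 = 2, -3

# Theorem 1.1.1: vector-space laws for addition and scalar multiplication.
assert np.array_equal(A + B, B + A)
assert np.array_equal((k1 + k2) * A, k1 * A + k2 * A)
assert np.array_equal((k1 * k2) * A, k1 * (k2 * A))

# Theorem 1.1.2: associativity and distributivity of the matrix product.
assert np.array_equal((A @ C) @ D, A @ (C @ D))
assert np.array_equal(A @ (C + C), A @ C + A @ C)

print("all identities verified")
```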
A linear equation in unknowns x1, x2, . . . , xn is an equation of the form a1 x1 + a2 x2 + · · · + an xn = b, where the scalars ai are called the coefficients of the xi respectively and b is called the constant term or simply the constant of the equation. A choice of values for the unknowns that reduces the equation to a true statement is then said to satisfy the equation. If there is no ambiguity about the position of the unknowns in the equation, then we denote this solution by an n-tuple u = (k1 , k2 , . . . , kn ) ∈ K^n. For example, one 4-tuple may be a solution of a given equation (why?), whereas the 4-tuple (1, 2, 4, 5) is not a solution to the same equation (why?).
A system of m linear equations in n unknowns x1, x2, . . . , xn has the form

    a11 x1 + a12 x2 + · · · + a1n xn = b1
    a21 x1 + a22 x2 + · · · + a2n xn = b2
    . . . . . . . . . . . . . . . . . . .          (1.1)
    am1 x1 + am2 x2 + · · · + amn xn = bm

Such a system is equivalent to the single matrix equation

    [ a11 a12 · · · a1n ] [ x1 ]   [ b1 ]
    [ a21 a22 · · · a2n ] [ x2 ]   [ b2 ]
    [  ·   ·  · · ·  ·  ] [ ·  ] = [ ·  ]
    [ am1 am2 · · · amn ] [ xn ]   [ bm ]
or simply AX = B, where A = (aij), X = (xi) and B = (bi). That is, every solution of the system is a solution of the matrix equation and vice versa. A solution (particular solution) to AX = B is an n-tuple u = (k1 , . . . , kn ) of scalars in the field K that satisfies each of the equations in Equation (1.1); the set of all such solutions is called the solution set or the general solution.
Observe that the associated homogeneous system is then equivalent to AX = 0. This system always has a solution, namely the n-tuple 0 = (0, . . . , 0), called the zero or trivial solution. Any other solution, if it exists, is called a nonzero or nontrivial solution. The above matrix A is called the coefficient matrix of the system, X is the unknown matrix (also the indeterminates or variables matrix), B the constant matrix, and (A|B) the augmented matrix. Observe that the system is completely determined by its augmented matrix. In studying linear equations it is usually simpler to use the language and theory of matrices, as indicated by the following theorem.
1.1.4 Theorem. Suppose u1, u2, . . . , un are solutions of the homogeneous system AX = 0. Then any linear combination k1 u1 + k2 u2 + · · · + kn un, with ki ∈ K, is also a solution of AX = 0.

Proof. A(Σ_{i=1}^{n} ki ui) = Σ_{i=1}^{n} ki Aui , and we are given that Aui = 0 for each i. Hence Σ_{i=1}^{n} ki Aui = 0. Accordingly, the linear combination is itself a solution of AX = 0.
1.1.5 Theorem. Suppose the field K is infinite (e.g., if K is the real field R or the complex field C). Then the system AX = B has either no solution, a unique solution, or an infinite number of solutions.

Proof. It suffices to show that if AX = B has more than one solution, then it has infinitely many. Suppose that u and v are distinct solutions of AX = B, i.e., Au = B and Av = B. For any k ∈ K, A(u + k(u − v)) = Au + k(Au − Av) = B + k(B − B) = B. Thus for each k ∈ K, u + k(u − v) is a solution to AX = B. Since all such solutions are distinct, AX = B has infinitely many solutions.
In the following problem we deduce the conditions that must be placed on a, b and c so that the following system of equations has no solution, a unique solution, or infinitely many solutions:

    x + 2y − 3z = a
    2x + 6y − 11z = b
    x − 2y + 7z = c.

Solution. We postpone the solution of this problem until after row reduction; it is easier there.
A matrix A = (aij) is an echelon matrix, or is in echelon form, if the number of zeros preceding the first nonzero entry of a row increases row by row until only zero rows remain, i.e., if there exist nonzero entries a1j1 , a2j2 , . . . , arjr , where j1 < j2 < · · · < jr , such that aij = 0 for i ≤ r, j < ji , and for i > r. We call a1j1 , a2j2 , . . . , arjr the distinguished elements of the echelon matrix A. In particular, an echelon matrix is a row reduced echelon matrix if the distinguished elements are:
1. the only nonzero entries in their respective columns;
2. each equal to 1.
    [ 2 3 2 0  4 5 −6 ]    [ 1 2 3 ]    [ 0 1 3 0 0  4 0 ]
    [ 0 0 7 1 −3 2  0 ]    [ 0 0 4 ]    [ 0 0 0 1 0 −3 0 ]
    [ 0 0 0 0  0 6  2 ]    [ 0 0 0 ]    [ 0 0 0 0 1  2 0 ]
    [ 0 0 0 0  0 0  0 ]    [ 0 0 0 ]    [ 0 0 0 0 0  0 1 ]
The third matrix above is an example of a row reduced echelon matrix; the other two are not. The zero matrix, for any number of rows or columns, is also a row reduced echelon matrix.
A matrix A is said to be row equivalent to a matrix B if B can be obtained from A by a finite sequence of the following operations, called elementary row operations:
E1. Interchange the ith row and the jth row: Ri ↔ Rj.
E2. Multiply the ith row by a nonzero scalar k: Ri → kRi, k ≠ 0.
E3. Replace the ith row by k times the jth row plus the ith row: Ri → kRj + Ri.
In actual practice, we apply E2 and then E3 in one step, i.e., the step E: Replace the ith row by k1 times the jth row plus k2 (a nonzero scalar) times the ith row: Ri → k1 Rj + k2 Ri, k2 ≠ 0. For example, applying operations of type E1, E2 and E3 to the 3×3 identity matrix yields the elementary matrices

    E1 = [ 0 1 0 ],  E2 = [ 1 0 0 ],  E3 = [  1 1 0? ]
         [ 1 0 0 ]        [ 0 1 0 ]        [ −3 1 0 ]
         [ 0 0 1 ]        [ 0 0 −7 ]       [  0 0 1 ]
The reader no doubt recognises the similarity between the above operations and those used in solving systems of linear equations. The following algorithm, also similar to the one used for linear equations, describes the steps needed to row reduce a matrix to echelon form:
Step 1. Suppose the j1 column is the first column with a nonzero entry. Interchange the rows so that this nonzero entry appears in the first row, that is, so that a1j1 ≠ 0.
Step 2. Use a1j1 as a pivot to obtain zeros below it; that is, for each i > 1 apply the operation Ri → −aij1 R1 + a1j1 Ri.
Step 3. Repeat Steps 1 and 2 with the submatrix formed by all the rows excluding the first.
The term row reduce shall mean to transform by elementary row operations. Suppose A = (aij) is a matrix in echelon form with distinguished elements a1j1 , a2j2 , . . . , arjr . Applying suitable operations of type E, with each distinguished element used as a pivot to clear the entries above it, and then scaling each nonzero row, A is replaced by an echelon matrix whose distinguished elements are the only nonzero entries in their respective columns and are each equal to 1. In other words, the process row reduces an echelon matrix to one in row reduced echelon form.
Any arbitrary matrix is row equivalent to at least one row reduced echelon matrix. In fact, it can be shown that A is row equivalent to exactly one such matrix; we call it the row canonical form of A.
Let

    A = [ 1 −2 3 −1 ].
        [ 2 −1 2  2 ]
        [ 3  1 2  3 ]

We demonstrate how to reduce A to echelon form and then to its row canonical form:

    [ 1 −2 3 −1 ]    [ 1 −2  3 −1 ]    [ 1 −2  3  −1 ]
    [ 2 −1 2  2 ] →  [ 0  3 −4  4 ] →  [ 0  3 −4   4 ].
    [ 3  1 2  3 ]    [ 0  7 −7  6 ]    [ 0  0  7 −10 ]
The last matrix is in echelon form. We further row reduce it to obtain its row canonical form.
    [ 1 −2  3  −1 ]    [ 3  0 1   5 ]    [ 21  0 0  45 ]    [ 1 0 0  15/7 ]
    [ 0  3 −4   4 ] →  [ 0 21 0 −12 ] →  [  0 21 0 −12 ] →  [ 0 1 0  −4/7 ].
    [ 0  0  7 −10 ]    [ 0  0 7 −10 ]    [  0  0 7 −10 ]    [ 0 0 1 −10/7 ]
Suggest another method one would use to obtain this row canonical form (and give its shortcomings).
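The reduction to row canonical form can be automated. The sketch below (ours, not part of the notes) uses exact rational arithmetic and reproduces the matrix obtained above:

```python
from fractions import Fraction

def rref(rows):
    """Row reduce a matrix (a list of lists) to its row canonical form."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n = len(M), len(M[0])
    pivot_row = 0
    for col in range(n):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pivot = next((r for r in range(pivot_row, m) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]
        # Scale so the distinguished element becomes 1.
        p = M[pivot_row][col]
        M[pivot_row] = [x / p for x in M[pivot_row]]
        # Clear every other entry in the pivot column.
        for r in range(m):
            if r != pivot_row and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
    return M

A = [[1, -2, 3, -1], [2, -1, 2, 2], [3, 1, 2, 3]]
print([[str(x) for x in row] for row in rref(A)])
# [['1', '0', '0', '15/7'], ['0', '1', '0', '-4/7'], ['0', '0', '1', '-10/7']]
```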
1. Consider the general system of two linear equations in two unknowns, ax + by = e and cx + dy = f. Show that:
(a) if a/c ≠ b/d, then the system has a unique solution, x = (de − bf)/(ad − bc) and y = (af − ce)/(ad − bc);
(b) if a/c = b/d ≠ e/f, then the system has no solution;
(c) if a/c = b/d = e/f, then the system has more than one solution.
2. Determine which of the following systems of linear equations are inconsistent and which are consistent, and solve the consistent ones:

    2x + y − 2z + 3w = 1
    3x + 2y − z + 3w = 4          (1.2)
    3x + 3y + 3z − 3w = 5

    x + 2y − 3z = 4
    x + 3y + z = 11               (1.3)
    2x + 5y − 4z = 13
    2x + 6y + 2z = 22

    x + 2y − 2z + 3w = 2
    2x + 4y − 3z + 4w = 5         (1.4)
    5x + 10y − 8z + 11w = 12

3. Solve the following systems of linear equations:

    (i) 2x − 3y + 6z − 5w = 3, y − 4z + v = 1, v − w = 2;
    (ii) x + 2y − 3z = −1, 3x − y + 2z = 7, 5x + 3y − 4z = 2;
    (iii) 2x + y − 2z = 10, 3x + 2y + 2z = 1, 5x + 4y + 3z = 1.
4. Determine the values of k such that each of the following systems in unknowns x, y and z has (a) a unique solution, (b) no solution, (c) more than one solution:

    (i) kx + y + z = 1, x + ky + z = 1, x + y + kz = 1;
    (ii) x + 2y + kz = 1, 2x + ky + 8z = 3;
    (iii) x + y + kz = 2, 3x + 4y + 2z = k, 2x + 3y − z = 1.
Let {u1 , u2 , . . . , ur } be a set of nonzero vectors in K^n (or in an arbitrary vector space V over K). These vectors are linearly independent over K if the linear combination α1 u1 + α2 u2 + · · · + αr ur = 0 implies α1 = α2 = · · · = αr = 0. If there exists a non-trivial choice of scalars α1 , α2 , . . . , αr for which the combination equals 0, then the vectors are said to be linearly dependent over K. For example, the vectors {(1, 0), (0, 1)} are linearly independent over R, since 0 = (0, 0) = x(1, 0) + y(0, 1) = (x, y) implies x = y = 0. However, the set {(1, 0), (2, 0)} is linearly dependent over R.
1. Determine whether the following vectors are linearly dependent or independent over R.
(c) u = (1, −2, 3, 1), v = (3, 2, 1, −2) and w = (1, 6, −5, −4).
Let A be an arbitrary m × n matrix over K. The row space of A is the subspace of K^n generated by its rows, and the column space of A is the subspace of K^m generated by its columns. The dimensions of these subspaces are called the row rank and the column rank of A, respectively.

1.1.6 Theorem. The row rank and the column rank of a matrix are equal.

The rank of the matrix A, written Rank(A), is the common value of its row rank and column rank. Therefore, the rank of a matrix gives the maximum number of independent rows and also the maximum number of independent columns. To obtain the row rank of a matrix, reduce it to echelon form. Since row equivalent matrices have the same row space, the nonzero rows of the echelon matrix are independent and form a basis of the row space. For the column rank, do the same using column operations or, equivalently, transpose the matrix and then proceed as above. The dimension of the null space (the nullity) is obtained from the solution space of AX = 0: it equals the number of independent variables generating that solution space. Rank and nullity together satisfy the Rank-Nullity Theorem: Rank(A) + Nullity(A) = n.
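Rank and nullity can be computed numerically; a NumPy sketch on an illustrative 3×4 matrix (so n = 4):

```python
import numpy as np

A = np.array([[1,  2,  0, -1],
              [2,  6, -3, -3],
              [3, 10, -6, -5]])

# Row reducing A leaves two nonzero rows, so the rank is 2;
# Rank-Nullity then forces the nullity to be n - rank = 4 - 2 = 2.
rank = int(np.linalg.matrix_rank(A))
nullity = A.shape[1] - rank
print(rank, nullity)  # 2 2
```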
1. Find the rank of each of the following matrices:
(a)
    P = [ 1 1  5 ],  Q = [ 1 −1 −2 ],  and  R = [ 1 −1 −1 ].
        [ 2 3 13 ]       [ 3 −2 −3 ]           [ 4 −3 −1 ]
                                               [ 3 −1  3 ]
2. Find the rank of each of the matrices

    A = [ 1 2 −3 0 ],  B = [ 2 3 4 5 6 ],  and  C = [ 0 1  3 −2 ].
        [ 2 4 −2 2 ]       [ 0 0 3 2 5 ]            [ 2 1 −4  3 ]
        [ 3 6 −4 3 ]       [ 0 0 0 0 2 ]            [ 2 3  3 −1 ]
3. Given

    A = [ 1  2  0 −1 ],
        [ 2  6 −3 −3 ]
        [ 3 10 −6 −5 ]

Compute
(b) the null space and the rank of the null space of A
A matrix with the same number of rows as columns is called a square matrix. A square matrix with n rows and n columns is said to be of order n and is called an n-square matrix. The diagonal (or main diagonal) of a square matrix A = (aij) consists of the elements a11 , a22 , . . . , ann . An upper triangular matrix, or simply a triangular matrix, is a square matrix whose entries below the main diagonal are all zero. Similarly, a lower triangular matrix is a square matrix whose entries above the main diagonal are all zero. A diagonal matrix is a square matrix whose non-diagonal entries are all zero. In particular, the n-square matrix with 1s on the diagonal and 0s elsewhere, denoted by In or simply I, is called the unit or identity matrix. This matrix is similar to the scalar 1 in that for any n-square matrix A, AI = IA = A. The matrix kI, for a scalar k ∈ K, is called a scalar matrix.
Recall that not every two matrices can be added or multiplied. However, if we consider square matrices of some given order n, then this inconvenience disappears. Specifically, the operations of addition, multiplication, scalar multiplication and transposition can be performed on any n×n matrices, and the result is again an n×n matrix. In particular, for any n-square matrix A we can form the powers A^2 = AA, A^3 = A^2 A, . . . , and A^0 = I, and hence polynomials in A: for a polynomial f(x) = a0 + a1 x + · · · + am x^m we define f(A) = a0 I + a1 A + · · · + am A^m. If f(A) = 0, the zero matrix, then A is called a zero or root of the polynomial f(x). So the set of n-square matrices over K forms a matrix algebra.

1. Let

    A = [ 1  2 ].
        [ 3 −4 ]

Show that A is a zero of f(x) = x^2 + 3x − 10 but not of g(x) = 2x^2 − 3x + 5.
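The exercise can be checked numerically; a NumPy sketch:

```python
import numpy as np

A = np.array([[1,  2],
              [3, -4]])
I = np.eye(2, dtype=int)

# f(A) = A^2 + 3A - 10I should be the zero matrix,
f_A = A @ A + 3 * A - 10 * I
# while g(A) = 2A^2 - 3A + 5I should not be.
g_A = 2 * (A @ A) - 3 * A + 5 * I

print(f_A.tolist())     # [[0, 0], [0, 0]]
print(bool(g_A.any()))  # True: g(A) is not the zero matrix
```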
A square matrix A is said to be invertible if there exists a matrix B with the property that AB = I = BA, where I is the identity matrix. Such a matrix B is unique; for AB1 = I and AB2 = I imply that B1 = B1 I = B1 (AB2 ) = (B1 A)B2 = IB2 = B2 . We call such a matrix B the inverse of A and denote it by A−1. Observe that the above relation is symmetric: if B is the inverse of A, then A is the inverse of B. Moreover, for square matrices AB = I if and only if BA = I; hence it is necessary to test only one product to determine whether two given matrices are inverses.
1. Suppose A is invertible and, say, it is row reducible to the identity matrix I by the sequence of elementary operations e1 , . . . , en . Show that this same sequence of operations applied to I yields A−1.
2. Use elementary row operations on the block matrix (A, I) to find the inverse of

    A = [ 1  0 2 ].
        [ 2 −1 3 ]
        [ 4  1 8 ]

In case the final block of (A, I) is not of the form (I, B), then the given matrix is not row reducible to I, and is therefore not invertible.
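The (A, I) procedure described above can be sketched in code (ours, with exact fractions), applied to the 3×3 matrix above:

```python
from fractions import Fraction

def inverse_via_gauss_jordan(rows):
    """Row reduce the block matrix (A | I); if A reduces to I,
    the right-hand block is A^-1."""
    n = len(rows)
    # Form the augmented block matrix (A | I).
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(rows)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is not invertible")
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[1, 0, 2], [2, -1, 3], [4, 1, 8]]
inv = inverse_via_gauss_jordan(A)
print([[int(x) for x in row] for row in inv])
# [[-11, 2, 2], [-4, 0, 1], [6, -1, -1]]
```

One can verify directly that multiplying A by the matrix obtained gives I.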
1.2 Determinants
To every square matrix A over a eld K, there is a specic scalar assigned called the determinant
of A; it is usually denoted by det(A) or |A|. This determinant function (a functional) was rst
discovered in the investigation of system of linear equations. Today, this is an indispensable tool
in investigating and obtaining properties of a linear operator. We shall begin this section with a
1.2.1 Permutations
A permutation σ of the set {1, 2, . . . , n} is a one-to-one mapping of the set onto itself. Permutations are denoted by lower case Greek letters, e.g.,

    σ = ( 1  2  · · ·  n )
        ( j1 j2 · · · jn )

or σ = j1 j2 . . . jn , where σ(i) = ji . Since σ is a bijection, the sequence j1 j2 . . . jn is simply a re-arrangement of the numbers 1, 2, . . . , n. The number of such permutations is n!, and the set of them is denoted by Sn . We remark that Sn forms a group under composition; it is called the symmetric group on n elements.

A permutation σ is said to be even or odd according to whether there is an even or odd number of pairs (i, k) for which i > k but i precedes k in σ. For example, consider the permutation σ = 35142 ∈ S5 . The set {(3, 1), (3, 2), (5, 1), (5, 4), (5, 2), (4, 2)} has even cardinality and so the permutation σ is even. The identity permutation ι = 12 . . . n is even. In S3 , the permutations 123, 231, 312 are even whereas 132, 213, 321 are odd. The sign or parity of σ, written sgn(σ), is defined to be 1 if σ is even and −1 if σ is odd.

A transposition τ is a permutation that interchanges two numbers i and j and fixes the others: τ(i) = j, τ(j) = i and τ(k) = k for all k ≠ i, j. It is denoted by τ = (ij). For example, in S4 , τ = 2134 is the transposition (12). It is not hard to show that a transposition is an odd permutation.
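The parity test can be coded directly by counting the pairs described above; a Python sketch:

```python
from itertools import permutations

def sgn(seq):
    """Sign of a permutation in one-line notation: +1 if the number of
    pairs (i, k) with i > k but i preceding k is even, -1 otherwise."""
    inversions = sum(1 for a in range(len(seq)) for b in range(a + 1, len(seq))
                     if seq[a] > seq[b])
    return 1 if inversions % 2 == 0 else -1

print(sgn((3, 5, 1, 4, 2)))  # 1: six such pairs, so 35142 is even
print(sgn((2, 1, 3, 4)))     # -1: a transposition is odd
print(sorted(p for p in permutations((1, 2, 3)) if sgn(p) == 1))
# [(1, 2, 3), (2, 3, 1), (3, 1, 2)]: the even permutations in S3
```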
1.2.2 Determinants
Let A = (aij) be an n-square matrix over a field K. The determinant of A, denoted by det(A) or |A|, is the sum

    det(A) = |A| = Σ_{σ} sgn(σ) a1j1 a2j2 · · · anjn

over all permutations σ = j1 j2 . . . jn in Sn , i.e., |A| = Σ_{σ∈Sn} sgn(σ) a1σ(1) a2σ(2) · · · anσ(n) . The determinant of the n-square matrix A is said to be of order n and is frequently denoted by |(aij)|. We emphasize
that a square array of scalars enclosed by straight lines is not a matrix but rather the scalar that
the determinant assigns to the matrix formed by the arrays of scalars. Examples include:
1. Let A = (a11 ), then |A| = a11 . (since the one permutation in S1 is even)
2. In S2 , the permutation 12 is even while 21 is odd. Hence

    | a11 a12 | = a11 a22 − a12 a21 .
    | a21 a22 |
3. Recall in S3 , the permutations 123, 231, 312 are even whereas 132, 213, 321 are odd. So
    | a11 a12 a13 |
    | a21 a22 a23 | = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31 − a12 a21 a33 − a11 a23 a32 .
    | a31 a32 a33 |
We will obtain this formula using other methods like cofactor expansion or Laplace expansions.
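For small n the defining sum can be evaluated directly (it has n! terms, so this is only feasible for small matrices); a sketch:

```python
from itertools import permutations

def sgn(seq):
    """+1 for an even permutation, -1 for an odd one (counted by inversions)."""
    inv = sum(1 for a in range(len(seq)) for b in range(a + 1, len(seq))
              if seq[a] > seq[b])
    return 1 if inv % 2 == 0 else -1

def det(A):
    """det(A) = sum over sigma in S_n of sgn(sigma) * a_{1,sigma(1)} ... a_{n,sigma(n)}."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sgn(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

print(det([[1, 2], [3, 4]]))                     # -2
print(det([[2, 1, 1], [0, 5, -2], [1, -3, 4]]))  # 21
```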
As n increases, the number of terms in the determinant becomes astronomical. Accordingly, we use indirect methods to evaluate determinants rather than the definition. In fact, we prove a number of results that reduce the evaluation of a determinant of order n to determinants of order n−1, as in the case n = 3 above. We now list some basic properties of the determinant.

1.2.3 Theorem. The determinants of a matrix A and of its transpose AT are equal: |AT| = |A|.

By this theorem, any theorem about the determinant of a matrix which concerns the rows of A will have an analogous theorem concerning the columns of A. This is a sort of duality in the results.
The next theorem gives certain cases for which the determinant can be obtained immediately.
3. If A is triangular, i.e., A has zeros above or below the diagonal, then the determinant of A is the product of the diagonal elements. Thus, in particular, |I| = 1, where I is the identity matrix.
The next theorem shows how the determinant of a matrix is affected by the elementary operations. We now state two of the most important and useful theorems on determinants.

1.2.4 Theorem. Let A be an n-square matrix. Then the following are equivalent:
3. |A| ≠ 0.
1. Evaluate the determinants of the matrices

    A = [  5  4  2  1 ]   and   B = [  2 −3 −2  5 ]
        [  2  3  1 −2 ]             [ −2 −3  2 −5 ]
        [ −5 −7 −3  9 ]             [  1  3 −2  2 ]
        [  1 −2 −1  4 ]             [ −1 −6  4  3 ]

by first reducing each to echelon form.
Consider an n-square matrix A = (aij). Let Mij denote the (n − 1)-square submatrix of A obtained by deleting its ith row and jth column. The determinant |Mij| is called the minor of the element aij of A, and we define the cofactor of aij , denoted by Aij , to be the signed minor: Aij = (−1)^(i+j) |Mij|. Note that the signs (−1)^(i+j) accompanying the minors form a chessboard pattern with +'s on the main diagonal. We emphasize that Mij denotes a matrix whereas Aij denotes a scalar.
1.2.6 Theorem. The determinant of the matrix A = (aij) is equal to the sum of the products obtained by multiplying the elements of any row (column) by their respective cofactors:

    |A| = ai1 Ai1 + ai2 Ai2 + · · · + ain Ain = Σ_{j=1}^{n} aij Aij

and

    |A| = a1j A1j + a2j A2j + · · · + anj Anj = Σ_{i=1}^{n} aij Aij .

The above formulas, called the Laplace expansions of the determinant of A by the ith row and the jth column respectively, offer a method of simplifying the computation of |A|: by adding a multiple of a row (column) to another row (column) we can reduce A to a matrix containing a row or column with one entry 1 and the others 0. Expanding by this row or column reduces the computation of |A| to the computation of a determinant of order one less than that of |A|.
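The Laplace expansion along the first row translates into a short recursive procedure; a sketch:

```python
def det(A):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_1j: delete row 1 and column j (0-based j here).
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # The cofactor carries the chessboard sign (-1)^(1+j).
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2, 3], [4, -2, 3], [2, 5, -1]]))  # 79
```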
1. For the matrix

    [ 2  1  1 ]
    [ 0  5 −2 ],
    [ 1 −3  4 ]

find the cofactor of each of its entries.
2. Evaluate the following determinants:

    | 1  2  3 |        |  2  0  1 |
    | 4 −2  3 |  and   |  3  2 −3 |.
    | 2  5 −1 |        | −1 −3  5 |
3. Evaluate the following determinants:

    |  2  5 −3 −2 |     | −2 −5 −4  3 |     | 1  2 −2  3 |
    | −2 −3  2 −5 |     | −5  2  8 −5 |     | 3 −1  5  0 |
    |  1  3 −2  2 |     | −2  4  7 −3 |     | 4  0  2  1 |
    | −1 −6  4  3 |     |  2 −3 −5  8 |     | 1  7  2 −3 |
Consider an n-square matrix A = (aij). The transpose of the matrix of cofactors of the elements aij of A, denoted by adj(A), is called the classical adjoint of A. We say "classical adjoint" instead of simply "adjoint".

1.2.7 Theorem. For any square matrix A, A · adj(A) = adj(A) · A = |A| I. Therefore, if |A| ≠ 0,

    A−1 = (1/|A|) adj(A).
Consider the matrix

    A = [ 1 2 3 ].
        [ 2 3 4 ]
        [ 1 5 7 ]

1. Compute |A|.
2. Find adj(A).
4. Find A−1.
The above theorem gives an important method for obtaining the inverse of a given matrix.

1.2.8 Theorem. A linear system AX = B has a unique solution if and only if |A| ≠ 0.
The above theorem gives us "Cramer's rule" for solving systems of linear equations. We emphasize that the theorem only refers to a system with the same number of equations as unknowns, and that it only gives a solution when ∆ = |A| ≠ 0. In fact, if ∆ = 0, the theorem does not tell us whether or not the system has a solution. However, in the case of a homogeneous system we get Theorem 1.2.9.
Proof. We know that AX = B has a unique solution if and only if A is invertible, and A is invertible if and only if |A| ≠ 0. Multiplying AX = B on the left by A−1, we obtain X = A−1 AX = A−1 B = (1/|A|) adj(A) B. It follows that xi = (1/|A|)(b1 A1i + b2 A2i + · · · + bn Ani ) = ∆i /∆, where ∆ = |A| and ∆i is the determinant of the matrix obtained by replacing the ith column of A by the column vector B.
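Cramer's rule, as used in the proof, can be sketched in code (ours, with exact arithmetic; the determinant helper is a plain cofactor expansion):

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cramer(A, b):
    """Solve AX = b via x_i = det(A_i)/det(A), where A_i is A with its
    ith column replaced by b. Requires det(A) != 0."""
    d = Fraction(det(A))
    if d == 0:
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    return [Fraction(det([row[:i] + [b[r]] + row[i + 1:]
                          for r, row in enumerate(A)])) / d
            for i in range(len(A))]

A = [[1, 2, 3], [2, 3, 4], [1, 5, 7]]
print([str(x) for x in cramer(A, [1, 0, 0])])  # ['1/2', '-5', '7/2']
```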
1.2.9 Theorem. The homogeneous system AX = 0 has a nonzero solution if and only if |A| = 0.

We remark that the preceding theorem is of interest more for theoretical and historical reasons than for practical ones. The previous method of solving systems of linear equations, i.e., by reducing the system to echelon form, is usually much more efficient than using determinants.
1. (a) Consider the matrix

        A = [ 1 2 3 ].
            [ 2 3 4 ]
            [ 1 5 7 ]

    i. Compute |A|.
    ii. Find adj(A).
(b) Solve the systems

    x + 2y + 3z = a        x + 2y + 3z = 0
    2x + 3y + 4z = b  and  2x + 3y + 4z = 0.
    x + 5y + 7z = c        x + 5y + 7z = 1
References
[1] S. Lipschutz. Theory and Problems of Linear Algebra. Schaum's Outline Series, McGraw-Hill.