CHAPTER ONE
VECTORS
Unit outcomes
After the completion of this unit, you will be able to:
define vector in space;
understand the basic ideas of vector algebra;
represent a vector by a directed straight line;
write a vector in terms of component vectors;
write a vector in terms of component unit vectors;
obtain the directional cosines of vector;
calculate the scalar and vector product of two vectors;
determine the angle between two vectors;
calculate the distance between line and plane and
explain the equation of line and plane.
Introduction
Certain physical quantities such as mass, area, density, volume, time and temperature, which possess only magnitude, are called scalars. On the other hand, there are physical quantities such as force, displacement, velocity, acceleration, momentum and weight that have both magnitude and direction. Such quantities are called vectors.
The concept of a vector is essential for the whole course. It provides the foundation and
geometric motivation for everything that follows. Hence, the properties of vectors, both algebraic
and geometric, will be discussed in this unit. We shall study structures with two operations, an
addition and a scalar multiplication, that are subject to some simple conditions. We will reflect
more on the conditions later, but on first reading notice how reasonable they are.
As we see from the above figure, a single number represents a point in 1-space (A), a couple represents a point in 2-space (B) and a triple represents a point in 3-space (C). Although we cannot draw a picture to go further, we can say that a quadruple of numbers represents a point in 4-space, a quintuple a point in 5-space, and then would come a sextuple, septuple, octuple, etc. For example, let (2, 3, -4) be a point in 3-space; then 2 is the first coordinate, 3 is the second coordinate and -4 is the third coordinate of the point in 3-space.
Example 1.1: is a point in 2-spaces and is a point in 3-spaces.
Next we will see the definition of a point in n-space.
Definition 1.2 (A point in n-space): A point in n-space is an n-tuple X = (x₁, x₂, ..., xₙ) of numbers, where n is a positive integer. The numbers x₁, x₂, ..., xₙ are called the coordinates of the point X.
The set of all points in n-space is represented by Rⁿ. Up to now we have seen the definition and representation of points in 1-, 2-, 3-, ... and n-space.
Now let us see how to add, subtract and scale points in n-space. Let A = (a₁, a₂, ..., aₙ) and B = (b₁, b₂, ..., bₙ) be points in n-space, and let t be a real number.
1. Addition: A + B = (a₁ + b₁, a₂ + b₂, ..., aₙ + bₙ)
2. Subtraction: A − B = (a₁ − b₁, a₂ − b₂, ..., aₙ − bₙ)
3. Equality: A = B if and only if aᵢ = bᵢ for all i = 1, 2, ..., n
4. Scalar Multiplication: tA = (ta₁, ta₂, ..., taₙ)
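These componentwise rules are easy to check numerically; the following is a minimal Python sketch (the function names are ours, not from the text):

```python
# Componentwise operations on points (tuples) in n-space.
def vec_add(A, B):
    """A + B = (a1 + b1, ..., an + bn)."""
    return tuple(a + b for a, b in zip(A, B))

def vec_sub(A, B):
    """A - B = (a1 - b1, ..., an - bn)."""
    return tuple(a - b for a, b in zip(A, B))

def scalar_mul(t, A):
    """tA = (t*a1, ..., t*an)."""
    return tuple(t * a for a in A)

A, B = (2, 3, -4), (1, -1, 5)
print(vec_add(A, B))      # (3, 2, 1)
print(vec_sub(A, B))      # (1, 4, -9)
print(scalar_mul(2, A))   # (4, 6, -8)
```

The same three functions work unchanged in any dimension, since they only zip over coordinates.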
A point P = (x, y) in the plane determines a vector OP with initial point at the origin. This is the only vector with initial point at the origin and terminal point P, and it is uniquely determined by P. We therefore identify the point (x, y) with this vector and refer to it as the coordinate representation of OP relative to the chosen coordinate system. In view of this, we shall call (x, y) either a point or a vector, depending on the interpretation which we have in mind. So if PQ is the vector from P to Q, then we can write PQ = Q − P. In view of this, two vectors PQ and RS are equal (or equivalent) if Q − P = S − R.
Fig. a: tA for t > 0 (same direction as A); Fig. b: tA for t < 0 (opposite direction).
Using the above definitions and applying the associative and commutative properties of real
numbers, one can prove the following theorem.
Theorem 1.2.1: Let A, B and C be any members of Rⁿ, and let m and n be any real numbers. Then
a) m(nA) = (mn)A (associativity)
b) (A + B) + C = A + (B + C) (associativity)
c) A + B = B + A (commutativity)
d) (m + n)A = mA + nA (distributivity)
e) m(A + B) = mA + mB (distributivity)
f) 0·A = O
g) A + O = A
Similarly, prove the others.
Example 1.2.2: A boat captain wants to travel due south at 40 knots. If the current is moving northwest at 16 knots, in what direction and with what magnitude should he work the engine?
Fig.1.4
Solution: We have u + c = w, where u corresponds to the engine's vector, c corresponds to the velocity of the current, and w = (0, −40) is the desired velocity due south. Since the current moves northwest at 16 knots,
c = 16(−1/√2, 1/√2) = (−8√2, 8√2).
Hence u = w − c = (0 − (−8√2), −40 − 8√2) = (8√2, −40 − 8√2).
The magnitude is ‖u‖ = √((8√2)² + (40 + 8√2)²) ≈ 52.5 knots, and the direction is
arctan((−40 − 8√2)/(8√2)) ≈ −1.35 radians, measured from the positive x-axis.
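The arithmetic of the boat example can be verified numerically. A sketch, assuming the usual convention that east is the positive x-direction and north the positive y-direction:

```python
import math

# Desired resultant: due south at 40 knots.
w = (0.0, -40.0)
# Current: northwest at 16 knots, i.e. 16*(-1/sqrt(2), 1/sqrt(2)).
c = (-16 / math.sqrt(2), 16 / math.sqrt(2))   # = (-8*sqrt(2), 8*sqrt(2))

# Engine vector u must satisfy u + c = w.
u = (w[0] - c[0], w[1] - c[1])                # (8*sqrt(2), -40 - 8*sqrt(2))

magnitude = math.hypot(*u)
direction = math.atan2(u[1], u[0])            # angle from the positive x-axis

print(round(magnitude, 1))   # about 52.5 knots
print(round(direction, 2))   # about -1.35 radians
```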
Exercise 1.2.1:
1) Given three vectors A = (3, 2, 4), B = (-2, 3, 4) and C = (1,6, 9), find
a) A + B c) 3A + B – 4C
b) 21A – 4B d) A – 2B + 22C
2) Determine whether and can be found to satisfy the vector equations
a)
b)
For a vector A = (x, y, z), the norm (magnitude) of A is ‖A‖ = √(x² + y² + z²). In particular, for points P₁ = (x₁, y₁, z₁) and P₂ = (x₂, y₂, z₂),
√((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²) = ‖P₁P₂‖.
a) A·B = B·A
b) t(A·B) = (tA)·B = A·(tB)
c) (A + B)·C = A·C + B·C
d) If A is the zero vector, then A·A = 0, and otherwise A·A > 0
Any non-zero vector can be represented by giving its magnitude and a unit vector along its direction. Let Â be the unit vector in the direction of A. Then A = ‖A‖Â, so the unit vector in the direction of A is
Â = A/‖A‖.
Activity 1.3.3:
1) Given three vectors ), find the unit vector in
the direction of the following vectors:
a) A + B – C b) A + 3B - 2C c) 4A - 4B + C
2) The vectors are unit vectors in the direction of
positive x, y and z axis, respectively. Find a unit vector in the direction of
Note:
1) Let A = (a₁, ..., aₙ) and B = (b₁, ..., bₙ) be two n-tuples of numbers. We define the distance between A and B to be
‖A − B‖ = √((a₁ − b₁)² + ... + (aₙ − bₙ)²).
2) Let A be any vector and t a real number; then ‖tA‖ = |t| ‖A‖ and ‖−A‖ = ‖A‖.
The following theorem gives us a geometric interpretation for the scalar product.
Theorem 1.3.1: Let A = (a₁, a₂, a₃) and B = (b₁, b₂, b₃) be non-zero vectors and let θ be the angle between them. Then
cos θ = (A·B)/(‖A‖ ‖B‖).
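Theorem 1.3.1 gives a direct recipe for computing angles; a small Python sketch (the sample vectors are ours, chosen for an easy check):

```python
import math

def dot(A, B):
    return sum(a * b for a, b in zip(A, B))

def norm(A):
    return math.sqrt(dot(A, A))

def angle_between(A, B):
    """theta with cos(theta) = A.B / (||A|| ||B||), for non-zero A, B."""
    return math.acos(dot(A, B) / (norm(A) * norm(B)))

A, B = (1, 0, 0), (1, 1, 0)
print(round(math.degrees(angle_between(A, B))))   # 45
```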
Note:
1) Two non-zero vectors are said to be orthogonal (perpendicular) if the angle between them is π/2.
2) Two non-zero vectors A and B are said to be orthogonal (Perpendicular) if A.B = 0.
Activity 1.3.5:
1) Which of the following pairs of vectors are perpendicular to each other?
a.
b. and
c. and
d.
2) Suppose , what can you deduce about A, B and C?
Exercise 1.3.2:
1. Show that ‖A + B‖ = ‖A − B‖ if and only if A·B = 0.
2. Let A₁, A₂, ..., A_r be non-zero vectors such that Aᵢ·Aⱼ = 0 whenever i ≠ j. Let c₁, c₂, ..., c_r be numbers such that c₁A₁ + c₂A₂ + ... + c_rA_r = O. Show that cᵢ = 0 for all i = 1, 2, 3, ..., r.
The following are two of the important inequalities.
Theorem 1.3.2: Let A and B be vectors. Then
a) |A·B| ≤ ‖A‖ ‖B‖ (Cauchy–Schwarz inequality)
b) ‖A + B‖ ≤ ‖A‖ + ‖B‖ (triangle inequality)
Proof: a) If one of A or B is a zero vector, then both sides of the inequality are equal to 0. Suppose both A and B are non-zero. Then from A·B = ‖A‖ ‖B‖ cos θ and |cos θ| ≤ 1, we get |A·B| ≤ ‖A‖ ‖B‖.
b) ‖A + B‖² = (A + B)·(A + B)
= A·A + 2A·B + B·B
= ‖A‖² + 2A·B + ‖B‖²
≤ ‖A‖² + 2|A·B| + ‖B‖² (why?)
≤ ‖A‖² + 2‖A‖ ‖B‖ + ‖B‖² (why?)
= (‖A‖ + ‖B‖)².
By taking the square roots,
‖A + B‖ ≤ ‖A‖ + ‖B‖.
Remark: The inequalities of Theorem 1.3.2 hold true also for any vectors A and B in Rn.
We now define the component of a vector in the direction of another vector.
Let A and B be two non-zero vectors, and let θ be the angle between them.
Fig. 1.7 Projection of vectors
From the terminal point of B, drop a perpendicular to the line containing A (see the figure). The resulting vector along A has magnitude ‖B‖ |cos θ|, and its direction is either the same as that of A or opposite to it. Since A/‖A‖ is the unit vector in the direction of A, we can write
proj_A B = (‖B‖ cos θ)(A/‖A‖) or, equivalently, proj_A B = ((A·B)/‖A‖²) A (why?)
We call proj_A B the projection of B along A, and
comp_A B = (A·B)/‖A‖ = ‖B‖ cos θ
the component of B along A.
Activity 1.3.6: One application of projections of vector arises in the definition of the work done
by a force on a moving body. Find another application.
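The projection formula above translates directly into code; a sketch (the helper names and sample vectors are ours):

```python
def dot(A, B):
    return sum(a * b for a, b in zip(A, B))

def proj(B, A):
    """Projection of B along A: (A.B / ||A||^2) A, for non-zero A."""
    t = dot(A, B) / dot(A, A)
    return tuple(t * a for a in A)

# Project B = (3, 4) onto A = (1, 0): only the x-component survives.
print(proj((3, 4), (1, 0)))   # (3.0, 0.0)
```

A useful sanity check: the residual B − proj(B, A) is always orthogonal to A.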
Suppose u = (u₁, u₂, u₃) is a non-zero vector in 3-space. Then the direction cosines of the vector u are given by
cos α = u₁/‖u‖, cos β = u₂/‖u‖, cos γ = u₃/‖u‖,
Fig. 1.8
where the direction angles α, β and γ are the angles which the vector makes with the positive x-, y- and z-axes, respectively.
Remark: cos²α + cos²β + cos²γ = 1.
Example 1.3.8: Let u = (u₁, u₂, u₃) be a given vector. To find the direction cosines of u, first compute ‖u‖ = √(u₁² + u₂² + u₃²), then divide each component of u by ‖u‖.
For A = a₁i + a₂j + a₃k and B = b₁i + b₂j + b₃k, the cross product is the determinant
A × B = | i   j   k  |
        | a₁  a₂  a₃ |
        | b₁  b₂  b₃ |
= (a₂b₃ − a₃b₂) i − (a₁b₃ − a₃b₁) j + (a₁b₂ − a₂b₁) k
and .Then,
AxB= – –
Activity 1.4.1: Find B x A.
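The cofactor expansion above can be checked mechanically; a sketch:

```python
def cross(A, B):
    """A x B via the cofactor expansion of |i j k; a1 a2 a3; b1 b2 b3|."""
    a1, a2, a3 = A
    b1, b2, b3 = B
    return (a2 * b3 - a3 * b2,
            a3 * b1 - a1 * b3,
            a1 * b2 - a2 * b1)

A, B = (1, 0, 0), (0, 1, 0)
print(cross(A, B))   # (0, 0, 1)  -- i x j = k
print(cross(B, A))   # (0, 0, -1) -- B x A = -(A x B)
```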
The following are some of the basic properties of cross product
Theorem 1.4.1: For vectors A, B and C and scalar t,
1. A × B = −(B × A)
2. A × A = O
3. tA × B = t(A × B) = A × (tB)
4. ‖A × B‖² = ‖A‖² ‖B‖² − (A·B)²
5. C·(A × B) = B·(C × A) = A·(B × C)
6. (A + B) × C = (A × C) + (B × C)
7. C × (A + B) = (C × A) + (C × B)
8. A·(A × B) = 0 and B·(A × B) = 0 (that is, A × B is perpendicular to both A and B)
9. A × (B × C) = (A·C)B − (A·B)C
Proof: The following is the proof for 1, 2 and 8. The rest are left as an exercise for your practice.
1) From the definition of cross product,
A × B = (a₂b₃ − a₃b₂, a₃b₁ − a₁b₃, a₁b₂ − a₂b₁). For B × A, we interchange A and B to obtain
B × A = (b₂a₃ − b₃a₂, b₃a₁ − b₁a₃, b₁a₂ − b₂a₁)
= −(a₂b₃ − a₃b₂, a₃b₁ − a₁b₃, a₁b₂ − a₂b₁)
= −(A × B)
2) A × A = (a₂a₃ − a₃a₂, a₃a₁ − a₁a₃, a₁a₂ − a₂a₁) = (0, 0, 0), since multiplication of real numbers is commutative.
Hence A × A = (0, 0, 0) = O.
8) Setting C = A in 5 yields
A . (A x B) = B . (A x A)
= B.0 (why?)
=0
By setting C = B in 5,
B .(A x B) = A . (B x B)
= A.0=0
This shows that for non-zero vectors A and B, the cross product is orthogonal to both A
and B.
Activity 1.4.2: Are the usual commutative and associative laws valid? i.e. for any vectors A, B and
C in , is A x B = B x A? Justify your answer. Is A x (B x C) = (A x B) x C?
Exercise 1.4.1: Let A = B= and C = . Find
a. b. c. – d. e.
From 4) of Theorem 1.4.1, we derive an important formula for the norm of the cross product.
‖A × B‖² = ‖A‖² ‖B‖² − (A·B)²
= ‖A‖² ‖B‖² − ‖A‖² ‖B‖² cos²θ (θ is the angle between A and B)
= ‖A‖² ‖B‖² (1 − cos²θ), by taking out the common factor
= ‖A‖² ‖B‖² sin²θ, and hence, by taking square roots,
‖A × B‖ = ‖A‖ ‖B‖ sin θ (for 0 ≤ θ ≤ π, sin θ is non-negative).
Activity 1.4.3:
1) For the unit vectors i, j and k, find i × j, j × k and k × i. What is j × i?
2) If A and B are parallel, what is A × B?
3) If A and B are orthogonal, what is ‖A × B‖?
Exercise 1.4.2:
1) Find a unit vector perpendicular to both A = and B =
2) Prove that (A – B)x(A + B) = 2(AxB).
Consider the parallelogram with adjacent sides a and b, and let θ be the angle between them. Taking ‖a‖ as the base, the height is ‖b‖ sin θ. But we know that the area A of a parallelogram is given by
A = base × height = ‖a‖ ‖b‖ sin θ.
But we know that ‖a‖ ‖b‖ sin θ = ‖a × b‖. Therefore the area of the parallelogram formed by the vectors a and b is given by
A = ‖a × b‖.
The direction of a × b is at a right angle to the parallelogram, following the right-hand rule.
To find the volume of the parallelepiped spanned by three vectors u, v and w, we compute the triple product u·(v × w). The triple product has an interesting application in geometry.
Consider the parallelepiped with edges a, b and c. We know that ‖a × b‖ = ‖a‖ ‖b‖ sin θ is the area of the parallelogram defined by a and b. If φ is the angle between c and a × b, the height of the parallelepiped is ‖c‖ |cos φ|. Thus
Volume = (area of base) × (height)
= ‖a × b‖ ‖c‖ |cos φ|
= |c·(a × b)|.
The absolute value of the triple product is the volume of the parallelepiped with a, b and c as edge vectors:
|c·(a × b)| = Volume.
This can be found by computing the determinant of the three vectors:
c·(a × b) = | c₁  c₂  c₃ |
            | a₁  a₂  a₃ |
            | b₁  b₂  b₃ |.
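The triple-product recipe for the volume can be sketched as follows (the sample edge vectors are ours):

```python
def cross(A, B):
    a1, a2, a3 = A
    b1, b2, b3 = B
    return (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)

def dot(A, B):
    return sum(a * b for a, b in zip(A, B))

def volume(u, v, w):
    """|u . (v x w)|: volume of the parallelepiped with edges u, v, w."""
    return abs(dot(u, cross(v, w)))

# Unit cube edges give volume 1; scaling one edge by 3 triples it.
print(volume((1, 0, 0), (0, 1, 0), (0, 0, 1)))   # 1
print(volume((3, 0, 0), (0, 1, 0), (0, 0, 1)))   # 3
```

Note that coplanar edge vectors give volume 0, which is a quick test for linear dependence in 3-space.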
Example 1.5.1:
1) Find the area of the parallelogram which is formed by the two vectors and
.
2) Find the volume of the parallelepiped spanned by the vectors
and
Solution: 1) The area of the parallelogram is given by:
‖ ‖ ‖ ‖ ‖ ‖ √ √ .
2) The volume of the parallelepiped spanned by the three vectors is
| ( )|
=
Exercise1.5.1: Find the area of the triangle having vertices at
, and .
Figure: the line L through the point P(x₀, y₀, z₀) with direction vector v = <a, b, c>; Q(x, y, z) is any other point on L.
As we see from the figure, let Q = (x, y, z) be any other point on the line L. Then the vector PQ is parallel to v, where
PQ = Q − P = (x, y, z) − (x₀, y₀, z₀) = <x − x₀, y − y₀, z − z₀>.
Now, since PQ = <x − x₀, y − y₀, z − z₀> is parallel to v, it can be written as
PQ = tv, where t is a scalar.
Thus PQ = <x − x₀, y − y₀, z − z₀> = t<a, b, c>, i.e.
<x, y, z> = <x₀, y₀, z₀> + t<a, b, c>. (1)
From the above vector equation of a line, we derive the parametric equations of a line. That is,
x = x₀ + ta,
y = y₀ + tb,   (2)
z = z₀ + tc
is called the parametric equations of a line L in 3D space, where (x₀, y₀, z₀) is a point on the line and v = <a, b, c> is a vector that the line is parallel to. The vector v = <a, b, c> is called the direction vector for the line L, and its components a, b and c are called the direction numbers. Suppose that a ≠ 0, b ≠ 0 and c ≠ 0. If we solve the parametric equations with respect to the variable t, we get
t = (x − x₀)/a, t = (y − y₀)/b, t = (z − z₀)/c,
which are called the symmetric equations of a line. The symmetric equations of a line L in 3D space can be rewritten as
(x − x₀)/a = (y − y₀)/b = (z − z₀)/c.   (3)
As we see from the above discussion to write the equation of a line in 3D space, we need to have
a point on the line and a parallel vector to the line.
Example 1.6.1.1: Find the vector, parametric, and symmetric equations for the line through the point (1, 0, -3) and parallel to the vector v = <2, 4, 5>.
Solution: To find the equation of a line in 3D space, we must have at least one point on the line and a parallel vector. Let v = <2, 4, 5> and (x₀, y₀, z₀) = (1, 0, −3), and let <x, y, z> be any point on the given line. Then using vector equation (1) above, we can write the vector equation of the line as
<x − 1, y − 0, z + 3> = t<2, 4, 5>.
And from equation (2) we get the parametric equations of the line:
x = 1 + 2t,
y = 4t,
z = −3 + 5t.
If we solve the parametric equations with respect to t, we get the symmetric equations of the line:
(x − 1)/2 = y/4 = (z + 3)/5.
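The parametric and symmetric forms of this line can be cross-checked numerically by generating points from the parametric equations and substituting them into the symmetric equations; a quick sketch:

```python
# Parametric form of the line in Example 1.6.1.1:
# x = 1 + 2t, y = 4t, z = -3 + 5t.
def point_on_line(t):
    return (1 + 2 * t, 4 * t, -3 + 5 * t)

# Every generated point satisfies the symmetric equations
# (x - 1)/2 = y/4 = (z + 3)/5 = t.
for t in (-1, 0, 2.5):
    x, y, z = point_on_line(t)
    assert (x - 1) / 2 == y / 4 == (z + 3) / 5 == t

print(point_on_line(0))   # (1, 0, -3), the given point
```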
Example 1.6.1.2: Find the parametric and symmetric equations of the line through the points
(1, 2, 0) and (-5, 4, 2)
Solution: To write the vector equation of a line, we need a parallel vector, so let P = (1, 2, 0) and Q = (-5, 4, 2). Then the parallel vector v is given by
v = PQ = <−5 − 1, 4 − 2, 2 − 0> = <−6, 2, 2>.
Let <x, y, z> be any point that the line passes through.
Now the parametric equations of a line are given by
x = x₀ + ta,
y = y₀ + tb,
z = z₀ + tc.
We can use either point (1, 2, 0) or (-5, 4, 2) to be the first point on the line (x₀, y₀, z₀). So we choose the point (x₀, y₀, z₀) = (1, 2, 0). The terms a, b, and c are the components of our parallel vector given by v = <a, b, c> = <-6, 2, 2>. Thus, the parametric equations of our line are given by
x = 1 + t(−6), i.e. x = 1 − 6t,
y = 2 + t(2), i.e. y = 2 + 2t,
z = 0 + t(2), i.e. z = 2t.
If we solve each parametric equation for t, we get
t = (x − 1)/(−6), t = (y − 2)/2, t = z/2.
Setting these equations equal gives the symmetric equations
(x − 1)/(−6) = (y − 2)/2 = z/2.
It is important to note that the equations of lines in 3D space are not unique. In Example 1.6.1.2, for instance, had we used the point Q = (-5, 4, 2) to represent the equation of the line with the parallel vector v = <-6, 2, 2>, the parametric equations would have become
x = −5 − 6t, y = 4 + 2t, z = 2 + 2t.
Fig. 1.12 Illustration of a plane in 3D: the plane contains the point P(x₀, y₀, z₀), Q(x, y, z) is any point on the plane, and n = <a, b, c> is the normal vector.
The plane consists of all points Q = (x, y, z) for which the vector PQ is orthogonal to the normal vector n = <a, b, c>. Since PQ and n are orthogonal, we can write
n · PQ = 0
<a, b, c> · <x − x₀, y − y₀, z − z₀> = 0
a(x − x₀) + b(y − y₀) + c(z − z₀) = 0.
If we put d = −(ax₀ + by₀ + cz₀), we get the general form of the equation of a plane in 3D space, i.e.
ax + by + cz + d = 0.
Therefore the standard equation of a plane in 3D space is given by
a(x − x₀) + b(y − y₀) + c(z − z₀) = 0, where (x₀, y₀, z₀) is a point on the plane and n = <a, b, c> is a normal vector to the plane.
Example 1.6.2.1: Find an equation of the plane containing three given points P, Q and R.
Solution: First, we'll need to find a vector normal to the plane (any one will do). Notice that two vectors lying in the plane are PQ and PR.
A vector orthogonal to both is the cross product
n = PQ × PR.
So n = PQ × PR is a vector orthogonal to the plane: it is the normal vector we are looking for. Writing n = <a, b, c>, the equation of the plane with this normal vector and containing P = (x₀, y₀, z₀) is then
a(x − x₀) + b(y − y₀) + c(z − z₀) = 0.
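The normal-vector construction used in the solution above (cross two in-plane vectors, then read off the plane equation) can be sketched in code; the sample points are ours:

```python
def sub(P, Q):
    return tuple(p - q for p, q in zip(P, Q))

def cross(A, B):
    a1, a2, a3 = A
    b1, b2, b3 = B
    return (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)

def plane_through(P, Q, R):
    """Return (a, b, c, d) with ax + by + cz + d = 0 through P, Q, R."""
    n = cross(sub(Q, P), sub(R, P))           # normal = PQ x PR
    d = -(n[0] * P[0] + n[1] * P[1] + n[2] * P[2])
    return (*n, d)

# Sample points (ours): three points on the plane z = 0.
print(plane_through((0, 0, 0), (1, 0, 0), (0, 1, 0)))   # (0, 0, 1, 0)
```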
Figure: two planes with normal vectors n₁ and n₂.
Two planes are:
1) Perpendicular if n₁ · n₂ = 0; then the angle between the two planes is π/2.
2) Parallel if n₂ = c n₁, where c is a scalar.
Notes:
1) Given the general equation of a plane ax by cz d 0 , then the normal vector is
n = <a, b, c >.
2) The intersection of two planes is a line.
Example 1.6.2.1.1: Determine whether the planes x − 3y + 6z = 4 and 5x + y − z = 4 are orthogonal, parallel, or neither. Find the angle of intersection and a set of parametric equations for the line of intersection of the planes.
Solution: From the equation of the plane x − 3y + 6z = 4, the normal vector is n₁ = <1, −3, 6>, and similarly for the plane 5x + y − z = 4 the normal vector is n₂ = <5, 1, −1>. The two planes are orthogonal if and only if their corresponding normal vectors are orthogonal, that is, n₁ · n₂ = 0. Thus
n₁ · n₂ = <1, −3, 6> · <5, 1, −1> = (1)(5) + (−3)(1) + (6)(−1) = 5 − 3 − 6 = −4 ≠ 0.
Hence, the planes are not orthogonal. If the planes are parallel, then their corresponding normal vectors must be parallel, so there must exist a real number k with n₂ = k n₁. Equating components gives the equations k = 5, −3k = 1 and 6k = −1, which imply k = 5, k = −1/3 and k = −1/6. Since the values of k are not the same for each component, the vector n₂ is not a scalar multiple of the vector n₁, so the planes are not parallel. Thus, the planes must intersect in a straight line at a given angle.
To find this angle, we use the equation
cos θ = (n₁ · n₂) / (‖n₁‖ ‖n₂‖).
For this formula, we have the following:
n₁ · n₂ = <1, −3, 6> · <5, 1, −1> = 5 − 3 − 6 = −4,
‖n₁‖ = √46 and ‖n₂‖ = √27.
Thus cos θ = −4/(√46 √27), and if we solve for θ we have
θ = cos⁻¹(−4/(√46 √27)) ≈ 1.68 radians ≈ 96.5°.
Finally, to find the equation of the line of intersection between the two planes, we need a point on the line and a parallel vector. First we find a point on the line by considering the case where the line touches the xy-plane, that is, where z = 0. Setting z = 0 in the two plane equations gives
x − 3y = 4 ...... (1)
5x + y = 4 ...... (2)
Multiplying equation (1) by 5 and subtracting the result from equation (2) gives 16y = −16, or y = −1. Substituting y = −1 back into equation (1) gives x − 3(−1) = 4, or x + 3 = 4. Solving for x gives x = 4 − 3 = 1. Thus, the point on the line is (1, −1, 0). To find a parallel vector for the line, we use the fact that since the line lies on both planes, it must be orthogonal to both normal vectors n₁ and n₂. Since the cross product n₁ × n₂ gives a vector orthogonal to both n₁ and n₂, n₁ × n₂ will be a parallel vector for the line.
Thus, we say that
v = n₁ × n₂ = | i   j   k  |
              | 1  −3   6  |
              | 5   1  −1  |
= i((−3)(−1) − (6)(1)) − j((1)(−1) − (6)(5)) + k((1)(1) − (−3)(5))
= −3i + 31j + 16k.
Hence, using the point (1, −1, 0) and the parallel vector v = −3i + 31j + 16k, we find that the parametric equations of the line are
x = 1 − 3t, y = −1 + 31t, z = 16t.
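We can confirm that the line found above really lies on both planes by substituting its parametric equations into each plane equation; a quick check:

```python
# Line of intersection found in Example 1.6.2.1.1:
# x = 1 - 3t, y = -1 + 31t, z = 16t.
def line_point(t):
    return (1 - 3 * t, -1 + 31 * t, 16 * t)

# Every point of the line must satisfy both plane equations.
for t in (0, 1, -2, 5):
    x, y, z = line_point(t)
    assert x - 3 * y + 6 * z == 4      # plane 1: x - 3y + 6z = 4
    assert 5 * x + y - z == 4          # plane 2: 5x + y - z = 4

print("line lies on both planes")
```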
By projecting the vector PQ onto the normal vector n (calculating the scalar projection comp_n PQ), we can find the distance D between a point Q and the plane:
D = |comp_n PQ| = |PQ · n| / ‖n‖.
Example 1.6.2.2.1: Find the distance between the point (1, 2, 3) and the plane 2x − y + z = 4.
Solution: Since we are given the point Q = (1, 2, 3), we need to find a point P on the plane 2x − y + z = 4 in order to form the vector PQ. We can simply do this by setting y = 0 and z = 0 in the plane equation and solving for x. Thus we have
2x − 0 + 0 = 4, so 2x = 4 and x = 2.
Thus P = (2, 0, 0) and the vector PQ is
PQ = <1 − 2, 2 − 0, 3 − 0> = <−1, 2, 3>.
Hence, using the fact that the normal vector for the plane is n = <2, −1, 1>, we have
D = |PQ · n| / ‖n‖ = |<−1, 2, 3> · <2, −1, 1>| / √(2² + (−1)² + 1²) = |−2 − 2 + 3| / √6 = 1/√6.
Thus, the distance is 1/√6.
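The distance formula D = |PQ · n| / ‖n‖ can be packaged as a small function. A sketch, taking the plane in the general form ax + by + cz + d = 0 (here 2x − y + z − 4 = 0, consistent with the answer 1/√6 above):

```python
import math

def distance_point_plane(Q, n, d):
    """Distance from point Q to the plane n.(x, y, z) + d = 0, n = (a, b, c)."""
    a, b, c = n
    x, y, z = Q
    return abs(a * x + b * y + c * z + d) / math.sqrt(a*a + b*b + c*c)

# Example 1.6.2.2.1: point (1, 2, 3), plane 2x - y + z - 4 = 0.
D = distance_point_plane((1, 2, 3), (2, -1, 1), -4)
print(D)                    # about 0.408
print(1 / math.sqrt(6))     # the same value, 1/sqrt(6)
```

Using the general form avoids having to find a point P on the plane by hand: the substitution ax + by + cz + d plays the role of PQ · n.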
Activity 1.6:
1) Find the parametric and symmetric equations of the line through in the
direction of
2) Find the parametric and symmetric equations of the line through and
Unit Summary
A vector is a physical quantity that has both magnitude and direction in space, as opposed
to a scalar, which has no direction.
In two dimensions a vector can be represented either by its two components or by its
magnitude and direction. The two ways of describing a vector can be related by
trigonometry.
Vector addition means adding the components of two vectors to form the components of
a new vector. In graphical terms, this corresponds to drawing the vectors as two arrows
laid tip-to-tail and drawing the sum vector from the tail of the first vector to the tip of the
second one. Vector subtraction is performed by negating the vector to be subtracted and
then adding.
Multiplying a vector by a scalar means multiplying each of its components by that scalar to
create a new vector. Division by a scalar is defined similarly.
Basic concepts: dot product and cross product
Magnitude: ‖A‖ = √(a₁² + a₂² + a₃²)
Unit vector in the direction of A: Â = A/‖A‖
Dot product (definition): A·B = a₁b₁ + a₂b₂ + a₃b₃ = ‖A‖ ‖B‖ cos θ
Cross product (definition): A × B = (a₂b₃ − a₃b₂) i − (a₁b₃ − a₃b₁) j + (a₁b₂ − a₂b₁) k, with ‖A × B‖ = ‖A‖ ‖B‖ sin θ
Angle between two vectors: cos θ = (A·B)/(‖A‖ ‖B‖)
Area of parallelogram = ‖A × B‖
Area of triangle = ½ ‖A × B‖
Note:
A·B > 0 if θ is an acute angle; A·B < 0 if θ is an obtuse angle; A·B = 0 if A and B are orthogonal (or perpendicular); A × B = O if A and B are parallel.
Line:
1. Vector form: <x, y, z> = <x₀, y₀, z₀> + t<a, b, c>
2. Parametric form: x = x₀ + ta, y = y₀ + tb, z = z₀ + tc
3. Symmetric form: (x − x₀)/a = (y − y₀)/b = (z − z₀)/c
Plane: a(x − x₀) + b(y − y₀) + c(z − z₀) = 0, or ax + by + cz + d = 0, with normal N = <a, b, c>.
For two lines l₁ and l₂: N = U × V, where
U is a vector parallel to line l₁,
V is a vector parallel to line l₂,
P is a point of line l₁ and Q is a point of line l₂.
Miscellaneous Exercise
1) Let A = (0, 1, 5) ,B = √ ). Find the angle between A and B.
2) Which of the following vectors are parallel or perpendicular to (1, 1, -1)?
a) (2, 2, -2) d) (1, 0, 1)
1 1 1
b) (2, -2, 0) e) , ,
2 2 2
c) (-2, 2, 2) f)
3) a) Find all vectors that are orthogonal to E1 = (1, 0, 0)
b) Find all vectors that are orthogonal to both E1 and E3 = (0, 0, 1)
c) Find all vectors that are orthogonal to E1, E2 and E3 = (0, 0, 1)
4) Find a non-zero vector orthogonal to (1, 2, -1)
5) Find a unit vector in the direction of (3, -1, 2, 4)
6) Let U₁ = (1/√3, 1/√3, 1/√3), U₂ = (1/√6, 1/√6, −2/√6) and U₃ = (1/√2, −1/√2, 0).
a) Show that each u1, u2, u3 is orthogonal to the other two and that each is a unit vector
b) Find the projection of E1 on each of u1, u2, u3
c) Find the projection of A = (a1, a2, a3) on u1.
7) Find a real number such that the vectors A = ( ,-3,1) and B = ( , ,2) are
perpendicular.
8) Find two vectors each of norm 1 that are perpendicular to the vector A = (3,2).
9) If U and V are perpendicular unit vectors, show that ‖U + V‖² = 2.
10) Vectors A and B make an angle θ = π/6. If ‖A‖ = √3 and ‖B‖ = 1, then calculate
the angle between the vectors A + B and B − A.
11) Find the cosine of the angle between the vectors A = (4,1,6) and B= (3,0,2).
12) Find the projection of vector A= (-7,1,3) on to vector B = (5,0,1)
13) Prove that a triangle inscribed in a circle, having a diameter as one of its sides, must be a right-angled triangle.
14) Give three vectors A,B and C such that A+B+C = 0.
15) If A + B + C = O, ‖A‖ = 3, ‖B‖ = 1 and ‖C‖ = 4, evaluate A·B + B·C + A·C.
16) Let A,B and C be three non-zero vectors. If A.B = A.C is it necessarily true that B = C?
Justify!
17) Prove that the points A, B and C are collinear if and only if
OC = a·OA + b·OB, where a + b = 1.
every number t.
21) If A + B + C = O, show that A × B = B × C = C × A.
22) Find a formula for the area of a parallelogram whose vertices, in order, are P, Q, R & S.
23) Show that the distance d from a point Q to the line through a point P parallel to a vector B is given by
d = ‖B × PQ‖ / ‖B‖.
CHAPTER TWO
VECTOR SPACES
V3) There is an element of V, denoted by O (called the zero element), such that
A + O = A for all elements A of V.
V4) For each element A of V, there exists an element −A such that A + (−A) = O.
Activity 2.1.2: What is the name given for each of the above properties?
Other properties of a vector space can be deduced from the above eight properties. For example, the property 0·v = O can be proved as follows:
0·v + v = 0·v + 1·v (by V8)
= (0 + 1)·v (by V7)
= 1·v = v,
and adding −v to both sides shows that 0·v = O.
For and .
Hence . Now, the element of is . Hence is in H.
is a subspace of .
Activity 2.3.1: Take any vector in . Let W be the set of all vectors in where .
Discuss whether W is a subspace of or not.
Definition 2.3.2: Let v₁, ..., vₙ be elements of a vector space V over K. Let x₁, ..., xₙ be elements of K. Then an expression of the form x₁v₁ + ... + xₙvₙ is called a linear combination of v₁, ..., vₙ.
Let A = (0, 1) and B = (2, 1). Given any (x, y) in R², suppose (x, y) = t₁(0, 1) + t₂(2, 1). Then
2t₂ = x and t₁ + t₂ = y, so t₂ = x/2 and t₁ = y − x/2. .................................... (*)
And thus (x, y) = t₁A + t₂B.
Therefore, given any (x, y) we can find t₁ and t₂ given by (*), and (x, y) can be written as a linear combination of (0, 1) and (2, 1) as
(x, y) = (y − x/2)(0, 1) + (x/2)(2, 1).
For example, (4, 3) = ((6 − 4)/2)(0, 1) + (4/2)(2, 1) = (0, 1) + 2(2, 1), or (4, 3) = −(0, −1) + 2(2, 1).
Note that {(0, −1), (2, 1)} is also a basis of R². Hence a vector space can have two or more bases.
Find other bases of R².
2) Show that two given vectors A₁ and A₂ alone do not form a basis of R³.
Solution: Even when A₁ and A₂ are linearly independent, they do not generate R³:
there are no numbers a₁ and a₂ for which every element of R³ equals a₁A₁ + a₂A₂.
Theorem 2.5.2: Let V be a vector space and suppose that one basis B has n elements, and another basis W has m elements. Then m = n.
Proof: As B is a basis, m > n is impossible. Otherwise, by Theorem 3.4.1, W would be a linearly dependent set, which contradicts the fact that W is a basis. Similarly, as W is a basis, n > m is also impossible. Hence m = n.
Definition 2.5.2: Let V be a vector space having a basis consisting of n elements. We shall say
that n is the dimension of V. It is denoted by dim V.
Remarks: 1) If V = {O}, then V doesn't have a basis, and we shall say that dim V is zero.
2) The zero vector space or a vector space which has a basis consisting of
a finite number of elements, is called finite dimensional. Other vector
spaces are called infinite dimensional.
Example 2.5.3:
1) R³ over R has dimension 3. In general, Rⁿ over R has dimension n.
2) R over R has dimension 1. In fact, {1} is a basis of R, because any
number x has a unique expression x = x·1.
Definition 2.5.3: The set of elements { } of a vector space V is said to be a maximal
set of linearly independent elements if are linearly independent and if given any
element w of V, the elements are linearly dependent.
Example 2.5.4: In { } is a maximal set of linearly independent
elements.
We now give criteria which allow us to tell when elements of a vector space constitute a basis.
Theorem 2.5.3: Let V be a vector space and let {v₁, ..., vₙ} be a maximal set of linearly independent elements of V. Then {v₁, ..., vₙ} is a basis of V.
Proof: It suffices to show that v₁, ..., vₙ generate V. (Why?) Let w be any element of V.
Then w, v₁, ..., vₙ are linearly dependent (why?).
Hence there exist numbers a₀, a₁, ..., aₙ, not all 0, such that
a₀w + a₁v₁ + ... + aₙvₙ = O.
In particular, a₀ ≠ 0. (Why?)
Therefore, by solving for w,
w = −(a₁/a₀)v₁ − ... − (aₙ/a₀)vₙ,
so w is a linear combination of v₁, ..., vₙ. Hence {v₁, ..., vₙ} is a basis of V.
Theorem 2.6.2: Let V be a finite dimensional vector space over the field K. Let W be a
subspace. Then there exists a subspace U such that V is the direct sum of W and U.
Proof: Left as an Exercise!
Theorem 2.6.3: If V is a finite dimensional vector space over the field K, and is the direct sum
of subspaces U and W, then
dim V = dim U + dim W.
Proof: Exercise
Remark: We can also define V as a direct sum of more than two subspaces. Let W₁, ..., W_r be subspaces of V. We shall say that V is their direct sum if every element v of V can be expressed in a unique way as a sum
v = w₁ + w₂ + ... + w_r,
with wᵢ in Wᵢ, for i = 1, ..., r.
Suppose now that U, W are arbitrary vector spaces over the field K (i.e. not necessarily subspaces of some vector space). We let U × W be the set of all pairs (u, w) whose first component is an element u of U and whose second component is an element w of W. We define the addition of such pairs componentwise, namely, if (u₁, w₁) and (u₂, w₂) are in U × W, then we define
(u₁, w₁) + (u₂, w₂) = (u₁ + u₂, w₁ + w₂).
If t is in K, we define the product t(u, w) by
t(u, w) = (tu, tw).
It is then immediately verified that U × W is a vector space, called the direct product of U and W.
Note: If n is a positive integer, written as a sum of two positive integers, n = r + s, then we see that Kⁿ is the direct product Kʳ × Kˢ.
Example 2.6.1: Let { },and { }.
Show that V is the direct sum of W and U.
Solution: Since V, U and W are vector spaces, and in addition to that U and W are subspaces of
V. The sum of U and W is:
{ } .
Thus; . The intersection of U and W is: U { }.
3) Let, ,( ) -, and W ,( ) -.
Unit Summary
A vector space V over a field F is any set of objects with two operations, addition and
multiplication by scalars, satisfying the following conditions for all objects u, v and w in V
and for all scalars a and b in the field F.
i) V is closed under addition, that is, u + v is an element of V,
ii) Addition is associative, that is, (u + v) + w = u + (v + w),
iii) Addition is commutative, that is, u + v = v + u,
iv) There exists an element of V denoted by 0 such that 0 + u = u = u + 0; 0 is called the
zero element of V,
v) For each u an element of V, there exists an element −u of V such that u + (−u) = 0; −u is
called the additive inverse of u,
vi) au is an element of V,
vii) a(u + v) = au + av,
viii) (a + b)u = au + bu,
ix) a(bu) = (ab)u,
x) 1u = u, where 1 is the unity in F.
Let V be a vector space and W be a subset of V. Then W is said to be a subspace of V if for
every pair of elements u, v of W and every scalar a in F:
i) u + v is in W,
ii) au is in W,
iii) the zero element O of V is also the zero element of W.
Let v₁, ..., vₙ be vectors in Rⁿ and c₁, ..., cₙ be real numbers. Then
c₁v₁ + ... + cₙvₙ is called a linear combination of the vectors
v₁, ..., vₙ; the numbers c₁, ..., cₙ are called the coefficients of the linear combination.
Let S = {v₁, ..., vₙ} be a set of vectors in Rⁿ. Then the set of all linear combinations of
v₁, ..., vₙ is called the span of S, or the set generated by S.
Let V be a vector space and let v₁, ..., vₙ be elements of V. We say that v₁, ..., vₙ
are linearly dependent if there exist numbers a₁, ..., aₙ, not all equal to zero, such that
a₁v₁ + ... + aₙvₙ = 0. Otherwise, v₁, ..., vₙ are linearly independent.
Let {v₁, ..., vₙ} be a set of vectors in a vector space V. Then {v₁, ..., vₙ} is
said to be a basis of V if the following conditions are satisfied.
i) v₁, ..., vₙ are linearly independent.
ii) V = the span of {v₁, ..., vₙ}, that is, every element of V is a linear combination of
the vectors v₁, ..., vₙ.
The number of elements of the basis for a vector space is called the dimension of the
vector space.
Let V be a vector space of dimension n, and let v₁, ..., vₙ be linearly independent
elements of V. Then v₁, ..., vₙ form a basis for V.
Let V be a vector space over the field F, and let U, W be subspaces of V. If U + W = V,
and if U ∩ W = {O}, then V is the direct sum of U and W.
CHAPTER THREE
MATRICES
Unit outcomes
After the completion of this chapter, you will be able to:
define matrices and give examples;
know the basic operations on matrices and their properties;
distinguish types of matrices;
identify a system of linear equations (or linear system) and describe its solution set;
write down the coefficient matrix and augmented matrix of a linear system;
use elementary row operations to reduce matrices to echelon forms; and
make use of echelon forms in finding the solution sets of linear systems; performing
inverses of nonsingular square matrices.
Introduction
The concept of matrices has its origin in various types of linear problems, the most
important of which concerns the nature of solutions of any given system of linear equations.
Matrices are also useful in organizing and manipulating large amounts of data. Today, the
subject of matrices is one of the most important and powerful tools in mathematics, and it has
found applications in a very large number of disciplines such as Engineering, Business,
Economics, Statistics, etc.
3.1 Definition of a matrix
Definition 3.1.1: A matrix is a rectangular array of numbers arranged in 'm' horizontal rows and 'n' vertical columns, enclosed by a pair of brackets [ ], ( ):
A = [ a₁₁  a₁₂  ...  a₁ₙ
      a₂₁  a₂₂  ...  a₂ₙ
      ...  ...       ...
      a_m1 a_m2 ...  a_mn ]
is an m × n matrix.
Before proceeding, we need to familiarize ourselves with some terms that are associated with matrices.
The numbers in a matrix are called the entries or the elements of the matrix. For the entry
aᵢⱼ, the first subscript i specifies the row and the second subscript j specifies the column in which the
entry appears. That is, aᵢⱼ is an element of matrix A which is located in the i-th row and j-th
column of the matrix A. Whenever we talk about a matrix, we need to know the order of the
matrix.
The order of a matrix is the number of rows and columns it has. When we say a matrix is a 3 by
4 matrix, we are saying that it has 3 rows and 4 columns. The rows are always mentioned first
and the columns second. This means that a 3 × 4 matrix does not have the same order as a 4 × 3
matrix. It must be noted that even though an m × n matrix contains mn elements, the entire
matrix should be considered as a single entity. In keeping with this point of view, matrices are
denoted by single capital letters such as A, B, C and so on.
Remark: By the size of a matrix or the dimension of a matrix we mean the order of the matrix.
Example 3.1.1: Let
A = [ 1 5 2
      0 3 6 ].
a) Identify each entry aᵢⱼ of A.
b) What is the size of matrix A?
Example 3.1.2: Form a 4 by 5 matrix, B, such that bij = i+ j.
Solution: Since the number of rows is specified first, this matrix has four rows and five columns.
B = [ b₁₁ b₁₂ b₁₃ b₁₄ b₁₅       [ 1+1 1+2 1+3 1+4 1+5
      b₂₁ b₂₂ b₂₃ b₂₄ b₂₅    =    2+1 2+2 2+3 2+4 2+5
      b₃₁ b₃₂ b₃₃ b₃₄ b₃₅         3+1 3+2 3+3 3+4 3+5
      b₄₁ b₄₂ b₄₃ b₄₄ b₄₅ ]       4+1 4+2 4+3 4+4 4+5 ]
= [ 2 3 4 5 6
    3 4 5 6 7
    4 5 6 7 8
    5 6 7 8 9 ].
Activity 3.1.1: Form a 4 by 3 matrix, B, such that a) bij i j b) bij (1) i j
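Entry rules like bᵢⱼ = i + j translate directly into a nested loop or comprehension; a sketch (the helper name is ours):

```python
# Build an m x n matrix B with entries given by a rule, as in Example 3.1.2.
def build_matrix(m, n, rule):
    # Subscripts run from 1, matching the textbook's a_ij convention.
    return [[rule(i, j) for j in range(1, n + 1)] for i in range(1, m + 1)]

B = build_matrix(4, 5, lambda i, j: i + j)
for row in B:
    print(row)
# [2, 3, 4, 5, 6]
# [3, 4, 5, 6, 7]
# [4, 5, 6, 7, 8]
# [5, 6, 7, 8, 9]
```

The same helper handles the activity's rules, e.g. `build_matrix(4, 3, lambda i, j: (-1) ** (i + j))`.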
Remark:
1) A vector is a matrix having either one row or one column.
2) A matrix consisting of a single row (a₁, a₂, a₃, ..., aₙ) is called a row vector; hence a row
matrix is a 1 × n matrix. Similarly, a matrix consisting of a single column is called a column vector; a column
matrix is an m × 1 matrix.
Definition 3.1.2(Equality of matrices): Two matrices A and B are said to be equal, written
A = B, if they are of the same order and if all corresponding entries are equal.
Example 3.1.3:
5 1 0 2 3 1 0 9
1) but 9 2 . Why?
2 3 4 2 3 2 2 2
2) Given the matrix equation
[ x + y  6     [ 1 6
  x − y  8 ] =   3 8 ],
find x and y.
Solution: By the definition of equality of matrices,
x + y = 1
x − y = 3.
Solving gives x = 2 and y = -1.
Exercise 3.1.2: Find the values of x, y, z and w which satisfy the matrix equation
x y 2 x z 1 5 x 3 2 y x 0 7
a) 2 x y 3z w 0 13 b.
z 1 4 w 6 3 2 w
3. 2 Operations of Matrix
Remark: Notice that we can add two matrices if and only if they are of the same order. If they
are, we say they are conformable for addition. Also, the order of the sum of two matrices is the
same as that of the two original matrices.
Example 3.2.1.1: Consider the matrices
( ) ( ) ( ), then find
( ) ( ) ( ) ( ),
but we can't find A + C or B + C: these sums are undefined, since the matrices have
different sizes.
Exercise 3.2.1.1: Given the matrices A, B, and C below,
A = [ 1 2 4
      2 3 1
      5 0 3 ],
B = [ 2 -1 3
      2 4 2
      3 6 1 ],
C = [ 4
      2
      3 ],
find, if possible: a) A + B b) B + C
Note: If A is any matrix, the negative of A, denoted by –A, is the matrix obtained by replacing
each entry in A by its negative. For example, if
2 1 2 1
A 5 4 , then A 5 4
6 0 6 0
The difference of two matrices of the same order is defined by A - B = A + (-B). In other
words, to find A - B, we subtract each entry of B from the corresponding entry of A.
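The rule A - B = A + (-B) can be sketched as (illustrative helpers, not from the module):

```python
def mat_neg(A):
    """The negative -A: replace each entry of A by its negative."""
    return [[-a for a in row] for row in A]

def mat_sub(A, B):
    """A - B computed as A + (-B), i.e. entrywise subtraction."""
    return [[a + nb for a, nb in zip(ra, rb)] for ra, rb in zip(A, mat_neg(B))]
```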
Example 3.2.2.1: ( ) ( )
( ) ( ) ( )
( )
( ) ( ) ( )
( ) ( ) ( )
By equality of matrices we have 2x - 3y = 16 and x - 5y = 22. Solving this system gives
x = 2 and y = -4.
Theorem 3.2.4.1: (Properties of scalar multiplication)
1) If A and B are two matrices of the same order and k is a scalar, then
k(A + B) = kA + kB
2) If k1 and k2 are scalars and A is a matrix, then
(k1 + k2)A = k1A + k2A
3) If k1 and k2 are scalars and A is a matrix, then
(k1k2)A = k1(k2A) = k2(k1A)
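The three scalar-multiplication properties are easy to spot-check numerically. The sketch below verifies them on one sample matrix (a check on an example, not a proof):

```python
def smul(k, A):
    """Scalar multiple kA: multiply every entry of A by the scalar k."""
    return [[k * a for a in row] for row in A]

def madd(A, B):
    """Entrywise sum of two same-order matrices."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A, B = [[1, 2], [3, 4]], [[0, 1], [5, -2]]
k1, k2 = 3, -2
assert smul(k1, madd(A, B)) == madd(smul(k1, A), smul(k1, B))  # k(A+B) = kA + kB
assert smul(k1 + k2, A) == madd(smul(k1, A), smul(k2, A))      # (k1+k2)A = k1A + k2A
assert smul(k1 * k2, A) == smul(k1, smul(k2, A))               # (k1k2)A = k1(k2A)
print("all three properties hold on this sample")
```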
To determine whether a product of two matrices is defined, write down the size of the first
factor and, to the right of it, the size of the second factor. If the inside numbers are the
same, then the product is defined, and the outside numbers give the size
of the product.
Example 3.2.5.1:
1) Consider the matrices
2) Let ( ) ( )
[ ] [ ]
But BA is a m m matrix
[ ][ ] [ ]
So AB ≠ BA in general.
Note:
Multiplication of matrices is associative.
Matrix multiplication is distributive with respect to addition.
Matrix multiplication is not commutative.
3) Consider ( ) ( )
( ) ( )
* + [ ]
* + [ ] * +
c_ik = a_i1 b_1k + a_i2 b_2k + ... + a_in b_nk = sum over j = 1 to n of a_ij b_jk,
for i = 1, 2, ..., m and k = 1, 2, ..., p.
Thus, the product AB is the m x p matrix where each entry c_ik of AB is obtained by
multiplying corresponding entries of the ith row of A by those of the kth column of B and then
finding the sum of the results.
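The entry formula c_ik = sum_j a_ij b_jk translates line by line into code. A sketch (our own helper name):

```python
def mat_mul(A, B):
    """Product AB: c_ik = sum over j of a_ij * b_jk (row i of A against column k of B)."""
    m, n, p = len(A), len(B), len(B[0])
    if len(A[0]) != n:
        raise ValueError("inside dimensions must agree")
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(p)]
            for i in range(m)]

# Non-commutativity on a small example: AB and BA differ.
A, B = [[1, 0], [0, 0]], [[0, 1], [0, 0]]
print(mat_mul(A, B), mat_mul(B, A))
```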
Remark: For real numbers, a multiplied by itself n times can be written as a^n. Similarly, a
square matrix A multiplied by itself n times can be written as A^n. Therefore, A^2
means AA, A^3 means AAA, and so on.
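Matrix powers are just repeated products. A sketch building A^n by repeated multiplication (illustrative only):

```python
def mat_mul(A, B):
    """Product of conformable matrices (c_ik = sum_j a_ij b_jk)."""
    n = len(B)
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(len(B[0]))]
            for i in range(len(A))]

def mat_pow(A, n):
    """A^n for a square matrix A and integer n >= 1: A, AA, AAA, ..."""
    result = A
    for _ in range(n - 1):
        result = mat_mul(result, A)
    return result

print(mat_pow([[1, 1], [0, 1]], 3))
```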
Exercise 3.2.6.1
1) If a matrix A is 3x5 and the product AB is 3x7, then what is the order of B?
2) How many rows does X have if XY is a 2x6 matrix?
3) If
        1 2 3         4 5 6         1 2 1
    A = 1 0 2 ,   B = 1 0 1   and C = 1 2 3 ,  find each of the
        1 3 1         2 1 2         1 2 2
following
i) A+B
ii) 2B – 3C
iii) A+B–C
iv) A – 2B + 3C
v) 2A – C
           5 2          2 4          1 3
4) Let A = 1 3 ,   B =  6 1 ,   C =  7 2 .   Find the following:
i) AB ii) BC iii) (AB)C iv) A(BC)
4 1 4
5) If A 4 0 4 , compute A2. Is it equal to I3, where I3 is the identity matrix
3 1 3
of order 3?
If ( ) ( )
Example 3.2.7.1
2 3
1) If ( ) then A t
4 1
6 4
2) If ( ) ( )
3) Zero or Null Matrix: A matrix whose entries are all 0 is called a zero or null matrix. It is
usually denoted by 0_mxn or, more simply, by 0. For example,
    0 0 0 0
0 = 0 0 0 0
is a 2 x 4 zero matrix.
4) Square Matrix: An m x n matrix is said to be a square matrix of order n if m = n. That is,
if it has the same number of columns as rows.
Example 3.3.1:
    3 4 6
A = 2 1 3     and     B = 2 1
    5 2 1                 5 6

are square matrices of order 3 and 2 respectively.
In a square matrix A = (a_ij) of order n, the entries a11, a22, ..., ann which lie on the diagonal
extending from the upper left corner to the lower right corner are called the main diagonal
entries, or more simply the main diagonal. Thus, in the matrix
    3 2 4
C = 1 6 0
    5 1 8
the entries 3, 6 and 8 form the main diagonal. A square matrix whose off-diagonal entries are
all zero is called a diagonal matrix. For example,
    5 0 0
B = 0 0 0     is a diagonal matrix.
    0 0 7
Notation: A diagonal matrix A of order n with diagonal elements a11, a22, ..., ann is denoted
by diag(a11, a22, ..., ann).
7) Identity Matrix or Unit Matrix: A square matrix is said to be identity matrix or unit
matrix, if all its main diagonal entries are 1‘s and all other entries are 0‘s. In other words, a
diagonal matrix whose all main diagonal elements are equal to 1 is called an identity or unit
matrix. An identity matrix of order n is denoted by ‗In‘ or more simply by I.
Example 3.3.3:
    1 0 0
I = 0 1 0     is the identity matrix of order 3.     I2 = 1 0
    0 0 1                                                 0 1
is the identity matrix of order 2.
8) Triangular Matrix: A square matrix is said to be
i. An upper triangular matrix if all entries below the main diagonal are zeros.
ii. Lower triangular matrix if all entries above the main diagonal are zeros.
Example 3.3.4:
2 4 8          5 0 0 0
0 1 2    and   1 3 0 0
0 0 3          6 1 2 0
               2 4 8 6
are upper and lower triangular matrices, respectively.
i.e., a square matrix A = (a_ij) of order n is called an upper triangular matrix if a_ij = 0 whenever i > j.
Note: A triangular matrix is a matrix which is either upper triangular or lower triangular.
9) A square matrix A = (a_ij) is said to be symmetric if A^t = A, or equivalently, if a_ij = a_ji for all i and j.
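A symmetry test in code is the definition verbatim: compare A with its transpose. Sketch (helper names are ours):

```python
def transpose(A):
    """A^t: rows become columns."""
    return [list(col) for col in zip(*A)]

def is_symmetric(A):
    """A square matrix is symmetric iff A^t = A, i.e. a_ij = a_ji for all i, j."""
    return transpose(A) == [list(row) for row in A]

print(is_symmetric([[1, 2], [2, 3]]), is_symmetric([[1, 2], [0, 3]]))
```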
Exercise 3.3.1.
a 3 4 8
c 3 9
1) For A
b
is to be a symmetric matrix, what numbers should the letters a to j
d e f 10
g j
h i
represent?
2) A) Does a symmetric matrix have to be square?
B) Are all square matrices symmetric?
10) A square matrix A is said to be skew symmetric if A^t = -A.
( ) ( ) ( )
( ) ( ) ( )
Definition 3.3.1.1: If A is a square matrix, then the trace of A, denoted by tr(A), is defined as the
sum of the main diagonal elements of A, i.e., tr(A) = a11 + a22 + ... + ann.
[ ] And
( )
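The trace is a one-line computation. Sketch (our helper):

```python
def trace(A):
    """tr(A): sum of the main-diagonal entries of a square matrix."""
    return sum(A[i][i] for i in range(len(A)))

print(trace([[3, 2, 4], [1, 6, 0], [5, 1, 8]]))  # 3 + 6 + 8
```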
Exercise 3.3.1.1
1) a) Form a 4 by 5 matrix, B, such that bij = i*j, where * represents multiplication.
2) Verify that  i) (A + B)^t = A^t + B^t ,  ii) (AB)^t = B^t A^t ,  iii) (2A)^t = 2 A^t .
           1 1 1
3) Let A = 1 2 3 .  Is A^t A symmetric?
3.4 Elementary row and column operations
In this section we will see the definition and properties of elementary row and column operations.
Definition 3.4.1: The following operations are called elementary row operations on a matrix:
1) Interchanging any two rows (or columns). It can be represented by R_i <-> R_j.
2) Multiplying a row (or column) by a non-zero scalar k. It is called scaling. It is represented
by R_i -> k R_i.
3) Replacing one row (the ith row) or column (the jth column) by the sum of itself and a
multiple of (k times) another row or column, represented by R_i -> R_i + k R_j.
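The three elementary row operations can be sketched directly; each helper below returns a new matrix and leaves the input untouched (names are ours):

```python
def swap_rows(A, i, j):
    """R_i <-> R_j."""
    B = [row[:] for row in A]
    B[i], B[j] = B[j], B[i]
    return B

def scale_row(A, i, k):
    """R_i -> k * R_i for a nonzero scalar k (scaling)."""
    assert k != 0, "scaling by zero is not an elementary operation"
    return [[k * x for x in row] if r == i else row[:] for r, row in enumerate(A)]

def add_multiple(A, i, j, k):
    """R_i -> R_i + k * R_j."""
    return [[x + k * y for x, y in zip(row, A[j])] if r == i else row[:]
            for r, row in enumerate(A)]
```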
By: Bule Hora University Mathematics Department Page 58
Linear Algebra I Module for Chemist Students
( ) ( ) ( ) ( )
B is obtained from A by
C is obtained from B by
D is obtained from C by
2) Show that A is row equivalent to B.
. / ( )
Solution: Since B is obtained from A by performing the following Elementary row operation,
then A is row equivalent to B.
Definition3.4.3 A matrix obtained from an identity (unit) matrix by applying a single elementary
operation is called an elementary matrix.
Example 3.4.2:
A nonzero row (or column) in a matrix means a row (or column) that contains at least one
non-zero entry.
A leading entry of a row refers to the left most nonzero entry (in a non-zero row).
2 3 2 1 1 0 0 29
A) 0 1 4 8 , 0 1 0 16
5
0 0 0 0 0 1 1
2
( ) ( ) ( ) ( )
Remark:
1. A matrix in row-echelon form has zeros below each leading 1, whereas a matrix in
reduced row-echelon form has zeros both above and below each leading 1.
2) Each matrix is row equivalent to one and only one row reduced echelon matrix, but a
matrix can be row equivalent to more than one echelon matrix.
3) If matrix A is row equivalent to an echelon matrix U, we call U an echelon form of A. If U
is in reduced echelon form, we call U the reduced echelon form of A.
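The reduction to the reduced echelon form mentioned in the remark can be sketched as follows (an illustrative implementation, not the module's; it uses exact `Fraction` arithmetic so the leading 1s come out exactly):

```python
from fractions import Fraction

def rref(A):
    """Reduced row-echelon form of A and its rank (number of nonzero rows)."""
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((k for k in range(r, len(M)) if M[k][c] != 0), None)
        if piv is None:
            continue                          # no pivot in this column
        M[r], M[piv] = M[piv], M[r]           # interchange rows
        M[r] = [x / M[r][c] for x in M[r]]    # scale so the leading entry is 1
        for k in range(len(M)):               # clear the column above and below
            if k != r and M[k][c] != 0:
                M[k] = [a - M[k][c] * b for a, b in zip(M[k], M[r])]
        r += 1
        if r == len(M):
            break
    return M, r

R, rank = rref([[1, 2, 3], [2, 4, 6], [1, 1, 1]])
```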
2) Determine which of the following matrices are in row reduced echelon form and which
others are in row echelon form (but not in reduced echelon form)
1 0 1 0
1 0 1 1 1 1 0 0
a) 0 1 0 0 0 0 1 1
b) c)
0 0 0 1
0 0 0 0 0 0
0 0 0 0
0 2 3 4 5 1 0 5 0 8 3
0 0 3 4 5 0 1 4 1 0 6
d) e)
0 0 0 0 5 0 0 0 0 1 0
0 0 0 0 0 0 0 0 0 0 0
Exercise 3.5.2.1: Reduce the following Matrices in to row reduced echelon form.
A) ( )
B) ( )
( ) ( ) ( )
( )
Now U = ( ) is a row echelon form of A and since the number of non zero row of
( ) ( )
Solution:
( ) ( ) ( )
Since ( ) is in row-echelon form and the number of nonzero rows is 3, then,
Exercise 3.6.1: Find the row reduced echelon form of each of the following matrices and
determine the rank.
     1 3 5 7          1 3 0 0 3          1 2 1 3 0
a).  2 4 6 8    b).   0 0 1 0 0    c).   2 4 5 5 3
     3 5 7 9          0 0 0 0 0          3 6 6 8 3
                      0 0 0 3 1
Definition 3.7.2: A system of linear equations (or a linear system) is a collection of one or more
linear equations involving the same variables, say x1 , x 2 ,..., x n .
The matrix
    1 3 1                                      1 3 1 2
A = 0 1 2    is the coefficient matrix and     0 1 2 4    is the
    2 3 3                                      2 3 3 5
augmented matrix.
Are the coefficient matrix and the augmented matrix of a homogeneous linear system equal?
Why?
A system of linear equations has either
1) no solution, or
2) exactly one solution, or
3) Infinitely many solutions.
We say that a linear system is Consistent if it has either one solution or infinitely many
solutions; a system is inconsistent if it has no solution.
3.7.1 Solving a linear system
This is the process of finding the solutions of a linear system of equation. We first see the
technique of elimination (Gaussian elimination method) and then we add two more techniques,
matrix inversion method and Cramer‘s rule.
Gauss-Jordan Elimination Method
The Gaussian elimination method is a standard method for solving linear systems. It applies to
any system, no matter whether m < n, m = n or m > n (where m and n are number of equations
and variables respectively). We know that equivalent linear systems have the same solutions.
Thus the basic strategy in this method is to replace a given system with an equivalent system,
which is easier to solve.
The basic operations that are used to produce an equivalent system of linear equations are the
following:
1) Replace one equation by the sum of itself and a multiple of another equation.
2) Interchange two equations
3) Multiply all the terms in an equation by a non-zero constant.
These three basic operations correspond to the three elementary row operations
on the augmented matrix. Thus, to solve a linear system by elimination, we first perform
appropriate row operations on the augmented matrix of the system to obtain the augmented
matrix of an equivalent linear system which is easier to solve and use back substitution on the
resulting new system. This method can also be used to answer questions about existence and
uniqueness of a solution whenever there is no need to solve the system completely.
In Gaussian elimination method we either transform the augmented matrix to an echelon matrix
or a reduced echelon matrix. That is we either find an echelon form or the reduced echelon
form of the augmented matrix of the system.
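The elimination strategy described above — row-reduce the augmented matrix, then read off the solution — can be sketched as (illustrative code, exact arithmetic via `Fraction`; the helper name is ours):

```python
from fractions import Fraction

def gauss_solve(aug):
    """Gauss-Jordan on an augmented matrix [A | b].
    Returns a solution list, the string 'infinitely many', or None (inconsistent)."""
    M = [[Fraction(x) for x in row] for row in aug]
    nvars = len(M[0]) - 1
    r = 0
    for c in range(nvars):
        piv = next((k for k in range(r, len(M)) if M[k][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for k in range(len(M)):
            if k != r and M[k][c] != 0:
                M[k] = [a - M[k][c] * b for a, b in zip(M[k], M[r])]
        r += 1
    # A row 0 = nonzero means the system is inconsistent.
    if any(row[-1] != 0 and all(x == 0 for x in row[:-1]) for row in M):
        return None
    if r < nvars:
        return "infinitely many"
    return [M[i][-1] for i in range(nvars)]

print(gauss_solve([[1, 1, 3], [1, -1, 1]]))  # x + y = 3, x - y = 1
```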
Example 3.7.1.2:
1) Determine if the following system is consistent. If so how many solutions does it have?
x1 x2 x3 3
x1 5 x2 5 x3 2
2 x1 x2 x3 1
1 1 1 3
Solution: The augmented matrix is A 1 5 5 2 Let us perform a finite sequence of
2 1 1 1
1 1 1 3 R R R 1 1 1 3
R3 2 R1 R3
A B 1 5 5 2 0 6 6 1
2 1 2
2 1 1 1 2 1 1 1
1 1 1 3 1
R3 2 R 2 R3 1 1 1 3
0 6 6 1 0 6 6 1
9
0 3 3 5 0 0 0
2
Let us find an echelon form of the augmented matrix first. From this we can determine whether
the system is consistent or not. If it is consistent we go ahead to obtain the reduced echelon form
of [Ab], which enable us to describe explicitly all the solutions.
2 1 1 2
R2 R1 R2
2 1 1 2
R3 3R1 R3
2 1 1
4 0 0 2
6
6 3 2 9 6 3 2 9
2 1 1 2 R 3 5 R 2 R 3 2 1 1 2 1
0 0 2 2
6 0 0 2 6 R 2 2 R 2
0 0 5 15 0 0 0 0
1 1
2 1 1 2 2 1 0 1 1 0
0 0 1 3 R
1
R1 2 R1
2 2
1 R 2 R1
3
0 0 1 3 0 0 1
0 0 0 0 0 0 0 0 0 0 0 0
The associated linear system to the reduced echelon form of [Ab] is
x 1
2
y 21
z 3
0 0
2 +4 −3 =1
Now let us determine the row echelon form of the augmented matrix.
To determine the row – echelon form add −2 times the first row to the second to obtain
[ ] [ ]
[ ] [ ]
[ ]
[ ]
By performing additional elementary row operations we can obtain the reduced row-echelon form.
Thus [ ]0 1 [ ] [ ]0 1 [ ]
The solution is the unique solution of the given system of linear equations.
x1 x2 x4 1
4) Find the solution set of the system: x1 x2 x3 2 .
x 2 x3 x 4 0
Remark: A system of linear equations AX = B is consistent if and only if the rank of the coefficient
matrix equals the rank of the augmented matrix.
Theorem 3.7.1: A homogeneous system of equations with more unknowns than equations has
infinitely many solutions.
Exercise 3.7.1
1) Find the solution set of the following system:
2 x1 x 2 3x3 1
x1 3x 2 5 x3 2 x 4 11
x1 x 2 2 x3 2
a. c. 3x1 2 x 2 7 x3 5 x 4 0
4 x1 3x 2 x3 3
2 x1 x 2 x 4 7
x1 5 x3 3
x1 2 x 2 3x3 x4 0 x1 3x2 2 x3 5 x4 10
b. 3x1 x2 5 x3 x 4 0 d. 3x1 2 x2 5 x3 4 x4 5
2 x1 x 2 x 4 0 2 x1 x2 x3 5 x4 5
2) For what values of λ and μ does the system
   x + y + z = 6
   x + 2y + 3z = 10
   x + 2y + λz = μ
have:  i. no solution   ii. a unique solution   iii. infinitely many solutions
3) Find the augmented matrix for each of the following system of linear equations:
a) { c) {
b) { d) {
a) [ ]
b) [ ]
c) * +
d) * +
5) A) Find a linear equation in the variables x and y that has a general solution
.
b). show that is also general solution of the equation in part (a).
Show that for this system to be consistent, the constants must satisfy .
[ ] [ ] [ ]
[ ][ ] [ ]
Let us now summarize the different possible cases that may arise for the solution of the system
of equation .
1) If and hence has no solution.
2) If , and there exists a unique solution to the
system Ax = b.
3) If , and we have an infinite number of
i.e. .
Suppose then the system has a unique solution and unique solution is give by
Consider the linear system of two equations in two variables:
a1 x + b1 y = c1
a2 x + b2 y = c2
The determinant of second order is
D = | a1 b1 |
    | a2 b2 |  =  a1 b2 - a2 b1 .
For D ≠ 0 the solution is
x = Dx / D ,   y = Dy / D ,   where
Dx = | c1 b1 |            Dy = | a1 c1 |
     | c2 b2 |    and          | a2 c2 | .
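For the 2-by-2 case, Cramer's rule is short enough to write out completely (a sketch; the determinant names D, Dx, Dy are the standard ones):

```python
def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1, a2*x + b2*y = c2 by Cramer's rule (needs D != 0)."""
    D = a1 * b2 - a2 * b1       # coefficient determinant
    if D == 0:
        raise ValueError("Cramer's rule applies only when D != 0")
    Dx = c1 * b2 - c2 * b1      # replace the x-column by the constants
    Dy = a1 * c2 - a2 * c1      # replace the y-column by the constants
    return Dx / D, Dy / D

print(cramer_2x2(1, 1, 3, 1, -1, 1))  # x + y = 3, x - y = 1  ->  x = 2, y = 1
```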
Example 3.7.3.1: Solve the following linear system using Cramer‘s rule
2
| | | |
Solution: .
| | | |
Unit Summary
This chapter introduces matrices as a way of representing data. Matrices will be used to
organize data as well as to solve for variables.
The first section gives the definition of a matrix and its dimensions. It then explains how to
add and subtract matrices. Not all matrices can be added to or subtracted from all other
matrices, as this section explains. Matrices can be added and subtracted only if they have the
same dimensions.
The second section explains two types of multiplication associated with matrices: scalar
multiplication—that is, multiplication by a constant—and multiplication of two matrices.
Matrix multiplication is associative, but not commutative.
Just as there is an additive identity and a multiplicative identity for all real numbers (an
addition and a multiplication that does not change the number), there is an additive identity
and a multiplicative identity for all matrices. The next section deals with these two identities,
and introduces the identity matrix.
The subsequent section introduces operations "within" a single matrix—elementary row
operations. There are three elementary row operations, and they are used to row reduce a
matrix. Row reduction is used in almost all calculations with matrices, so it is important to
understand this topic.
The final section of this chapter explains the concept of the inverse of a matrix. Just as most
real numbers have a multiplicative inverse, most matrices also have a multiplicative inverse,
that is, a matrix that, when multiplied by the original matrix, yields the identity. The inverse
of a matrix can be found using row reduction, and this section explains how.
Matrices are important in Algebra II, as we will see in the next chapter. They are used in
multiple ways to solve systems of equations. In addition, they are important in higher
algebra. A large portion of linear algebra, which you may study in college, deals entirely with
matrices. Matrices are also used by mathematicians, physicists, and biologists to organize
data and study complex phenomena; for example, matrices are used to study population
growth and determine when a population will stabilize.
Miscellaneous Exercise
2 4 4 1 4 3 2 0
1) Given A = B= C= D = 3 1 E=
1 3 2 0 3 1 1 2
Calculate the following and if not possible, put undefined:
a) A + B b) 3B c) AC d) AE e) AD f) B + D g) B – 2A
1 2 3 2 1 1
2 4 0
2) Given A , B , C 1 0 and D 2 Evaluate the
1 0 2 3 1 1
1 1 0
following: a) b). c) d)
1 0 1 1 0
2 A 3 A
1 1
3) Find a 2x2 matrix A such that
1 1 2
                   0  3  1
5) Prove that A = -3  0  5    is a skew-symmetric matrix
                  -1 -5  0
6) Find the row reduced echelon form of each of the following matrices and determine the
rank.
    1 3 5 7         1 3 0 0 3         1 0 1 0 0         1 2 1 3 0
a)  2 4 6 8    b)   0 0 1 0 0    c)   0 1 0 2 1    d)   2 4 5 5 3
    3 5 7 9         0 0 0 0 0         0 0 0 1 1         3 6 6 8 3
                    0 0 0 3 1
7) Use Gaussian Elimination to solve the following system
a) –
2 x 3 y z 10
b) x 3 z 6
5 x 2 y 13
c) x + y + z = 4
-2x - y + 3z = 1
y + 5z = 9
8) Find the inverses (if they exist) of:
i) * + ii) [ ]
a.
x1 2 x 2 3x3 x4 0 x1 3x2 2 x3 5 x4 10
b. 3x1 x2 5 x3 x 4 0 d. 3x1 2 x2 5 x3 4 x4 5
2 x1 x 2 x 4 0 2 x1 x2 x3 5 x4 5
11) For what values of λ and μ does the system
   x + y + z = 6
   x + 2y + 3z = 10
   x + 2y + λz = μ
have:  i. no solution   ii. a unique solution   iii. infinitely many solutions
12) Let M_mxn = the set of all m x n matrices. Is M_mxn a vector space under matrix addition
and scalar multiplication?
CHAPTER FOUR
DETERMINANTS
Unit outcome:
At the end of this chapter, the students should be able to find
determinant of a square matrix;
inverse of a square matrix with the help of determinants;
the solution for system of equations using determinants;
rank of a matrix;
eigenvalues and eigenvectors and
area of a parallelogram and volume of a parallelepiped.
4.1 Definition of Determinants
We interrupt our discussion of matrices to introduce the concept of the determinant function.
Remember that a matrix is simply an ordered arrangement of elements; it is meaningless to
assign a single numerical value to a matrix.
However, if A is a square matrix, then the determinant function associates with A exactly one
numerical value called the determinant of A, that gives us valuable information about the
matrix. Denoting the determinant of A by |A| or det A, we can think of the determinant function
as a correspondence: A -> |A|.
In this case, the straight bars do NOT mean absolute value; they represent the determinant of the
matrix. We will see some of the uses of the determinant in the subsequent sections. For now, let's
find out how to compute the determinant of a matrix so that we can use it later.
Definition 4.1.1: (Determinant of order 1): Let A = [a11] be a square matrix of order 1. Then the
determinant of A is defined as the number a11 itself. That is, det A = |A| = a11.
Example4.1.1: .
To define the determinant of a square matrix A of order n ≥ 2 we need the concepts of the
minor and the cofactor of an element.
Let |A| = |a_ij| be a determinant of order n. The minor of a_ij is the determinant of order n - 1 that
is left by deleting the ith row and the jth column. It is denoted by M_ij. For example, the minor of a11 is

      | a22 a23 |                                 | a21 a23 |
M11 = | a32 a33 | ,  the minor of a12 is   M12 =  | a31 a33 | ,  and so on.
Let A = (a_ij) be a determinant of order n. The cofactor of a_ij, denoted C_ij or A_ij, is defined as
(-1)^(i+j) M_ij, where i + j is the sum of the row number i and column number j in which the entry
lies. Thus
C_ij =   M_ij,   if i + j is even
        -M_ij,   if i + j is odd.
For example, the cofactor of a12 in the 3 x 3 determinant
| a11 a12 a13 |
| a21 a22 a23 |      is   C12 = (-1)^(1+2) | a21 a23 |
| a31 a32 a33 |                            | a31 a33 | .
0 1 2
Example 4.1.3: Evaluate the cofactor of each of the entries of the matrix: 1 2 3
3 1 1
Solution: C11 = -1, C21 = 1 , C31 = -1, C12 = 8, C13 = -5, C22 = -6, C32 = 2, C23 = 3, C33 = -1
Activity 4.1.1: Evaluate the cofactor of each of the entries of the given matrices:
2 3 4 2 0 1
a. 3 2 1 b. 5 1 0
1 1 2 0 1 3
Definition 4.1.3 :( Determinant of order n): If A is a square matrix of order n (n >2), then its
determinant may be calculated by multiplying the entries of any row (or column) by their
cofactors and summing the resulting products. That is, expanding along row i,
det A = a_i1 C_i1 + a_i2 C_i2 + ... + a_in C_in ,
or, expanding along column j,
det A = a_1j C_1j + a_2j C_2j + ... + a_nj C_nj .
Remark: It is a fact that determinant of a matrix is unique and does not depend on the row or
column chosen for its evaluation.
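Definition 4.1.3 is naturally recursive: an order-n determinant reduces to order n-1 minors. A sketch expanding along the first row (illustrative code; exponential in n, fine for small matrices):

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 1, column j+1
        total += (-1) ** j * A[0][j] * det(minor)         # cofactor sign (-1)^(1+(j+1))
    return total

print(det([[1, 2], [3, 4]]), det([[2, 1, 3], [0, 4, 5], [0, 0, 6]]))
```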
1 3 4
Example 4.1.4: Find the value of 0 2 5
2 6 3
Solution: Choose a given row or column. Let us arbitrarily select the first row. Then
1 3 4
2 5 0 5 0 2
0 2 5 (1) ( 3 )( 1 ) 4 = 1(6 30) 3(0 10) 4(0 4)
6 3 2 3 2 6
2 6 3
= 22
If we had expanded along the first column, then
1 3 4
2 5 3 4
0 2 5 (1) 0 (2) , as before
6 3 2 5
2 6 3
1 2 0 1
3 1 4 1
Example 4.1.5: Find the value of A
2 0 3 3
4 3 1 2
1 4 1 3 4 1 3 1 4
=(1) 0 3 3 2 2 3 3 0 (1) 2 0 3 = –
3 1 2 4 1 2 4 3 1
2 1 3
Example 4.1.6: Find the value of A 5 7 0 with the help of Sarrus‘ diagram
4 1 6
Solution: The Sarrus‘ diagram for the given determinant is to the right. Thus the value of the
determinant is
Exercise 4.1.1:
∑ | |
= + +
=
Remark: For simplicity, to find the determinant of a matrix, we expand the determinant along the
row or column that contains the greatest number of zeros.
2 ) Find if ( )
Solution: Notice that the column contains more number of zeros. Thus, we expand the
determinant by the column. Hence, we use the formula,
Now, =∑
+ +
= 5(8 +2) =50
Remark: If a square matrix A is a triangular matrix (upper or lower), then det A is the product of
the diagonal entries of A.
Solution: We can see that A and B are triangular square matrices. Thus their determinants are the
product of the main diagonal entries. Now, |A| = (4)(2)(5)(6) = 240 and |B| = (3)(2)(1) = 6.
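The triangular shortcut of the remark in code (sketch; helper name is ours):

```python
def det_triangular(A):
    """Determinant of an upper or lower triangular matrix:
    the product of its main-diagonal entries."""
    product = 1
    for i in range(len(A)):
        product *= A[i][i]
    return product

# Diagonal entries 4, 2, 5, 6 give 240, matching the worked value for |A|;
# the off-diagonal entries below are placeholders of our own.
U = [[4, 1, 1, 1], [0, 2, 1, 1], [0, 0, 5, 1], [0, 0, 0, 6]]
print(det_triangular(U))
```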
| | | | or .
(column).
Property 3: If any two rows (or columns) of a determinant are identical, the value of the
a1 b1 c1
determinant is zero. That is, a1 b1 c1 0 . R1 and R2 are identical
a2 b2 c2
Property 6: If each element of a row (or column) of a determinant is the sum of two elements,
the determinant can be expressed as the sum of two determinants. That is,
|   a11       a12       a13    |   | a11 a12 a13 |   | a11 a12 a13 |
|   a21       a22       a23    | = | a21 a22 a23 | + | a21 a22 a23 |
| a31 + b1  a32 + b2  a33 + b3 |   | a31 a32 a33 |   |  b1  b2  b3 |
Example 4.2.1: Find the value of the determinant
      | 1 18  72 |
|A| = | 2 40 148 |
      | 3 45 150 |
Solution: Applying elementary row operations,
      | 1 18 72 |
|A| = | 0  4  4 |  =  | 4 4 |  =  24 - 36 = -12
      | 0  9  6 |     | 9 6 |
Example 4.2.2: Show that
| y+z  x  y |
| z+x  z  x |  =  (x + y + z)(x - z)^2
| x+y  y  z |
Solution: Let Δ denote the determinant. Performing R1 -> R1 + R2 and then R1 -> R1 + R3, we get

    | 2(x+y+z)  x+y+z  x+y+z |                | 2    1  1 |
Δ = |   z+x       z      x   |  =  (x+y+z) ·  | z+x  z  x |
    |   x+y       y      z   |                | x+y  y  z |

Subtracting twice the third column from the first and the third column from the second
(C1 -> C1 - 2C3, C2 -> C2 - C3) gives

               | 0        0    1 |
Δ = (x+y+z) ·  | z-x      z-x  x |
               | x+y-2z   y-z  z |

and expanding along the first row,

Δ = (x+y+z)[(z-x)(y-z) - (z-x)(x+y-2z)] = (x+y+z)(z-x)(z-x) = (x + y + z)(x - z)^2 .
Activity 4.2.1: Evaluate the following determinants by using the properties listed above:
1 3 1 2
3 1 43 2 4 6
2 5 1 2
a) 2 7 35 b) 7 9 11 c)
0 4 5 1
1 3 17 8 10 12
3 10 6 8
Example 4.2.3:
           3 0           2 5
1) Let A = 4 1   and B = 1 4 ,  then  |AB| = |A| |B| = (3)(3) = 9.
2) Let A and B be 3x3 matrices with det A = 2 and det B = 3.
Find det(2 A B^t).
Solution: det(2 A B^t) = 2^3 det A det B^t = 8(2)(3) = 48, since det B^t = det B.
Example 4.3.1: Find adj A, if
     1 2 3
A = -1 0 1
     4 3 2
Solution: We first find the cofactor of each entry. That is,
C11 = cofactor of 1  =  (0)(2) - (1)(3)       = -3
C12 = cofactor of 2  =  -[(-1)(2) - (1)(4)]   = 6
C13 = cofactor of 3  =  (-1)(3) - (0)(4)      = -3
C21 = cofactor of -1 =  -[(2)(2) - (3)(3)]    = 5
C22 = cofactor of 0  =  (1)(2) - (3)(4)       = -10
C23 = cofactor of 1  =  -[(1)(3) - (2)(4)]    = 5
C31 = cofactor of 4  =  (2)(1) - (3)(0)       = 2
C32 = cofactor of 3  =  -[(1)(1) - (3)(-1)]   = -4
C33 = cofactor of 2  =  (1)(0) - (2)(-1)      = 2
We have
              -3   5   2
Thus, adj A =  6 -10  -4 .
              -3   5   2
1 5 0
Activity 4.3.1: Find adj A if A 2 4 1
0 2 0
2 1 3
Example 4.3.2: If A 2 0 1 , verify that
4 5 6
Solution: We have – .
Now ,
.
5 9 1
Therefore, adj A 16 24 4
10 14 2
2 1 3 5 9 1 4 0 0 1 0 0
Hence A( adjA ) 2 0 1 16 24 4 = 0 4 0 4 0 1 0 A I
3
4 5 6 10 14 2 0 0 4 0 0 1
invertible matrix. It may easily be seen that if a matrix A is invertible, its inverse is unique.
The inverse of an invertible matrix A is denoted by .
Does every square matrix possess an inverse? To answer this let us consider the matrix
    0 0
A = 0 0 .  If B is any square matrix of order 2, we find that AB = BA = 0.
We thus see that there cannot be any matrix B for which AB and BA both are equal to I2.
Therefore A is not invertible. Hence, we conclude that a square matrix may fail to have an
inverse. However, if A is a square matrix such that |A| ≠ 0, then A is invertible and
A (adj A) = (adj A) A = |A| I.  Thus A is invertible and  A^(-1) = (1/|A|) adj A.
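The formula A^(-1) = (1/|A|) adj A can be sketched end to end: cofactor matrix, transpose, divide by the determinant (illustrative code with exact `Fraction` entries; for matrices of order n ≥ 2):

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def inverse(A):
    """A^(-1) = adj(A) / det(A); adj(A) is the transpose of the cofactor matrix."""
    n = len(A)
    d = det(A)
    if d == 0:
        raise ValueError("a singular matrix (|A| = 0) has no inverse")
    cof = [[(-1) ** (i + j) * det([r[:j] + r[j + 1:] for r in (A[:i] + A[i + 1:])])
            for j in range(n)] for i in range(n)]
    # Transpose the cofactor matrix (giving adj A) and divide by det A.
    return [[Fraction(cof[j][i], d) for j in range(n)] for i in range(n)]

print(inverse([[2, 1], [1, 1]]))  # |A| = 1, adj A = [[1, -1], [-1, 2]]
```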
6 7 1
Example 4.3.3: Find if the matrix A 3 5 has no inverse.
9 11
λ^2 - 2λ - 8 = 0
(λ + 2)(λ - 4) = 0
λ = -2  or  λ = 4
3 1 2
Example 4.3.4: If A 2 3 1 , then A 3( 3 2 ) 1( 2 1 ) 2( 4 3 ) 8
1 2 1
Note: If A is an invertible matrix, then A A^(-1) = I_n and det(A^(-1)) = 1 / det A, where det A ≠ 0.
Properties of the inverse of a matrix
1. A square matrix is invertible if and only if it is non-singular.
2. The inverse of the inverse is the original matrix itself, i.e., (A^(-1))^(-1) = A.
3. The inverse of the transpose of a matrix is the transpose of its inverse, i.e.,
(A^t)^(-1) = (A^(-1))^t.
4. If A and B are two invertible matrices of the same order, then AB is also invertible and
moreover, (AB)^(-1) = B^(-1) A^(-1).
4 2 1
Example 4.3.5: Find the inverse of the matrix A= 7 3 3 .
2 0 1
. To find , let denote the cofactor of the element in the row and
1 4 0
a b
Activity 4.3.3: 1. Find the inverse of A, if i) A ii) A 1 2 2
c d 0 0 2
3 4 2 8
2. Find matrix A such that A .
6 2 9 4
3. AX = B  ⇒  X = A^(-1) B  (True/False)
4.4 Cramer's Rule for solving systems of linear equations (homogeneous and
non-homogeneous)
Suppose we have to solve a system of linear equations in unknowns . Let be
the matrix obtained from A by replacing column by the vector and be the column
vector of matrix .
Now let be columns of the identity matrix and Ii(x) be the matrix obtained
from by replacing column by x.
If then by using matrix multiplication we have
[ ] [ ]
[ ]
By the multiplicative property of determinants,
The second determinant on the left is x_i. (Make a cofactor expansion along the ith row.) Hence
(det A) x_i = det A_i(B).
Therefore, if det A ≠ 0, then   x_i = det A_i(B) / det A .
This method for finding the solutions of linear equations in unknowns is known as Cramer‘s
Rule.
Example 4.4.1: Solve the following system of linear equations by Cramer‘s Rule.
          2 1 1          x1              6
where A = 1 4 2 ,   x =  x2   and  B =   4
          3 0 1          x3              7
det Ai ( B)
By Cramer‘s Rule, xi (i 1, 2, 3)
det A
2 1 1
det A 1 4 2 3
3 0 1
6 1 1 2 6 1
4 4 2 1 4 2
det A1 ( B) 7 0 1 6 det A2 ( B) 3 7 1 3
x1 2 , x2 1 and
det A 2 3 det A 2 3
2 1 6
1 4 4
det A3 ( B) 3 0 7 3
x3 1.
det A 2 3
Example 4.4.2:Solve the following system of linear equations by Cramer‘s Rule.
2 x1 x 2 7
3x1 2 x3 8
x 2 2 x3 3
Solution: Matrix form of the given system is
          2 1 0          x1              7
where A = 3 0 1 ,   x =  x2   and  b =   8
          0 1 2          x3              3
det Ai ( B )
By Cramer‘s Rule, xi (i 1, 2, 3)
det A
2 1 0
det A 3 0 1 4
0 1 2
7 1 0 2 7 0
8 0 1 3 8 1
det A1 (b) 3 1 2 6 3 det A2 (b) 0 3 2 16
x1 , x2 4 and
det A 4 4 2 det A 4 4
2 1 7
3 0 8
det A3 (b) 0 1 3 14 7
x3 .
det A 4 4 2
Activity 4.4.1:
1. Use Cramer‘s rule to solve each of the following
a) b) – –
–
6 7 1
Example 4.4.3: Let A 3 5 . Find the value(s) of if Ax 0 has non-zero solution.
9 11
Solution: Ax = 0 has a non-zero solution if and only if det A = 0:
6 7 1
3 5 0
9 11
λ^2 - 2λ - 8 = 0
(λ + 2)(λ - 4) = 0 ,  so  λ = -2 or λ = 4
Solution: – –
A is non-singular and hence
Example 4.5.2: Obtain the rank of the matrix
    1 2 3
B = 3 4 5
    4 5 6
Solution: – – – – … (*)
A minor of order 2 of B is | 1 2 |
                           | 3 4 |  =  -2 ≠ 0.  So rank(B) ≥ 2 ... (**)
From (*) and (**);
1 2 3
Example 4.5.3: Find the rank of the 2 x 3 matrix  A = | 1 2 3 |
                                                      | 2 4 6 |
Solution: Since A is a 2 x 3 matrix, rank of A ≤ 2. But every determinant of order 2 in
A is zero. So rank of A ≤ 1 ... (*)
But A is a nonzero matrix, so rank of A ≥ 1 ... (**)
From (*) and (**), rank of A = 1.
1 0 4 5
Example 4.5.4: Find the rank of the matrix A = 2 1 3 0 .
8 1 0 7
Solution: The given matrix A is a 3x4 matrix. Therefore rank of A ≤ 3 ... (*)
A non-vanishing determinant of order 3 in A is
1 0 4
2 1 3   =  -43.  So rank of A ≥ 3 ... (**).  From (*) and (**), rank of A = 3.
8 1 0
1 1 1 2
Example 4.5.5: The 34 matrix A = 2 2 2 4 has a row that is a constant multiple of
1 2 3 4
another row (i.e ).This matrix possesses four square sub matrices order
1 1 1 1 1 2 1 1 2 1 1 2
3: 2 2 2 , 2 2 4 , 2 2 4 , 2 2 4 .
1 2 3 1 2 4 1 3 4 3 3 4
The determinant of each of these matrices is zero. Because, in each case the second row is a
constant multiple of the first row. Thus the rank of the matrix A cannot be equal to 3. That is
rank of A ≤ 2. However, it is easy to find a 2 x 2 submatrix of A whose determinant is
different from zero. Take
2 4
3 4 ;  its determinant is equal to 8 - 12 = -4 ≠ 0.
This indicates that rank of A = 2.
We will now describe the relation between determinant, rank and reduce echelon form of a
matrix A, in the case of homogeneous system of equation and nonhomogeneous
system of equation .
Let A be an matrix, then the following are equivalent.
a. The system of equation has no solution
b. Rank(A)
c. A is not row equivalent to .
d.
e. A is singular
Let A be an matrix, then the following are equivalent.
a. The system of equations has only a trivial solution
b. A is non-singular
c. Rank(A) = n
d.
e. A is row equivalent to
equation (A - λI_n)X = 0.
Example 4.7.1: Let
    1 6
A = 5 2 .   Find the eigenvalues and the corresponding eigenvectors of A.

Solution:
            1 6       1 0     1-λ   6
A - λI2  =  5 2  - λ  0 1  =   5   2-λ

                        | 1-λ   6  |
det(A - λI2) = 0   ⇒    |  5   2-λ |  =  λ^2 - 3λ - 28  =  0   ⇒   λ = 7  or  λ = -4
The corresponding eigenvectors can now be found as follow:
For λ = 7:  (A - 7I2)X = 0   ⇒    -6  6   x     0
                                   5 -5   y  =  0    ⇒   y = x

                                 1
Hence, any vector of the type α  1 , where α is any real number, is an eigenvector corresponding
to the eigenvalue 7.
For λ = -4:  (A + 4I2)X = 0   ⇒    5 6   x     0
                                   5 6   y  =  0    ⇒   y = -(5/6)x

                                  1
Hence, any vector of the type α  -5/6 , where α is any real number, is an eigenvector
corresponding to the eigenvalue -4.
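For a 2-by-2 matrix the characteristic equation is the quadratic t^2 - (a+d)t + (ad - bc) = 0, so the eigenvalues come straight from the quadratic formula. Sketch (our helper; real eigenvalues assumed):

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]]: roots of t^2 - (a+d) t + (ad - bc) = 0."""
    tr = a + d
    dt = a * d - b * c
    disc = tr * tr - 4 * dt
    if disc < 0:
        raise ValueError("complex eigenvalues; this sketch handles real ones only")
    root = math.sqrt(disc)
    return sorted([(tr - root) / 2, (tr + root) / 2])

# The matrix of Example 4.7.1 has trace 3 and determinant -28.
print(eig2(1, 6, 5, 2))  # [-4.0, 7.0]
```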
Note: If X is an eigenvector with eigenvalue λ, then kX is also an eigenvector with the same
eigenvalue, where k is a non-zero scalar.
The following theorem summarizes our results so far.
Theorem 4.7.1: If A is an matrix, then the following statements are equivalent:
i) λ is an eigenvalue of A.
ii) There is a non-zero vector X in K^n such that AX = λX.
iii) The system of equations (A - λI)X = 0 has non-trivial solutions.
iv) λ is a solution of the characteristic equation det(A - λI) = 0 in K.
                       3 2 0
Example 4.7.2: Let A = 2 3 0 . Find the eigenvalues and the corresponding
                       0 0 5
eigenvectors of A.
Solution: Characteristic equation of A:
| 3-λ   2    0  |
|  2   3-λ   0  |  =  0
|  0    0   5-λ |
(3 - λ)(3 - λ)(5 - λ) - 4(5 - λ) = 0
[(3 - λ)^2 - 4](5 - λ) = 0
(λ^2 - 6λ + 5)(λ - 5) = 0
(λ - 1)(λ - 5)^2 = 0
So, the eigenvalues of A are: λ = 1 and λ = 5.
To find the corresponding eigenvectors, we substitute the values of λ in the equation
(A - λI)X = 0. That is,
  3-λ   2    0     x     0
   2   3-λ   0     y  =  0        ... (*)
   0    0   5-λ    z     0

                          2 2 0   x     0
For λ = 1, (*) becomes:   2 2 0   y  =  0    ⇒   x = -y,  z = 0
                          0 0 4   z     0

                         -2  2 0   x     0
For λ = 5, (*) becomes:   2 -2 0   y  =  0    ⇒   x = y
                          0  0 0   z     0

For λ = 5 the eigenvectors are
     x     x     0       1       0
X =  x  =  x  +  0  = x  1  + z  0
     z     0     z       0       1
Exercise 4.7.1:Find the eigenvalues and the corresponding eigenvectors of the matrices:
2 3 1
3 2
a) A b) A 1 2 1
3 2 1 3 2
1 1 1
1 1
c) A d) A 0 2 1
2 3 0 0 1
Theorem 4.7.2: Let A be an n x n matrix. Then
1) If A is a triangular matrix, then the diagonal entries are the eigenvalues of A.
2) A and A^t have the same eigenvalues.
3) If k is a nonzero scalar, then the eigenvalues of kA are k times the eigenvalues of A.
4) If λ is an eigenvalue of a nonsingular matrix A, then 1/λ is an eigenvalue of A^(-1).
5) If P is an n x n nonsingular matrix, then A and P^(-1)AP have the same eigenvalues.
6) Eigenvectors corresponding to distinct eigenvalues are linearly independent.
7) The eigenvalues of a real symmetric matrix are all real.
8) The eigenvectors corresponding to two distinct eigenvalues of a real symmetric matrix are
orthogonal.
Proof:
1) For simplicity, We consider the following upper triangular matrices
A =( )
| ( ) ( )| = 0 | |
= or = or =
Observe that the eigenvalues of A are the diagonal entries of A.
Let be the eigenvalue of A. then . Since = = and
it follows that is an eigenvalue of .
a. ( ) b. ( )
( ) are .
Solution: Observe that the matrices in and respectively lower and upper triangular
matrices. Thus, the eigenvalues are 4, and -3,5,1 and -2 respectively. Since a matrix and its
transpose have the same eigenvalue and it follows that the eigenvalues of are
a n bn
2 , respectively, i.e.,
A x1 = λ1 x1   and   A x2 = λ2 x2 .
Thus,
x1^t A x2 = x1^t (λ2 x2) = λ2 x1^t x2
and
x1^t A x2 = x1^t A^t x2 = (A x1)^t x2 = (λ1 x1)^t x2 = λ1 x1^t x2 .
Example 4.8.1: Let
    0 0 2
A = 0 2 0 .   A is a symmetric matrix. The characteristic equation is
    2 0 3

              | λ    0   -2  |
det(λI - A) = | 0   λ-2   0  |  =  (λ - 2)(λ - 4)(λ + 1)  =  0 .
              | -2   0   λ-3 |

The eigenvalues of A are λ = 2, 4, -1. The eigenvectors associated with these eigenvalues are

     0                 1                 -2
x1 = 1  (λ = 2),  x2 = 0  (λ = 4),  x3 =  0  (λ = -1).
     0                 2                  1
Note: If A is an n x n symmetric matrix, then there exists an orthogonal matrix P such that
D = P^(-1) A P = P^t A P,
Where col1 ( P), col2 ( P),, coln ( P) are n linearly independent eigenvectors of A and the diagonal
elements of D are the eigenvalues of A associated with these eigenvectors.
Example 4.8.2: Let
    0 2 2
A = 2 0 2 .
    2 2 0

                     | λ  -2  -2 |
f(λ) = det(λI - A) = | -2  λ  -2 |  =  (λ + 2)^2 (λ - 4)  =  0 .
                     | -2 -2   λ |

Thus, λ = -2, -2, 4.
Solving (-2I - A)x = 0, the eigenvectors associated with λ = -2 are

    -1       -1
t ·  1  + s · 0 ,   t, s in R,  t ≠ 0 or s ≠ 0.
     0        1

     -1            -1
v1 =  1   and v2 =  0   are two eigenvectors of A. However, the two eigenvectors are not
      0             1
orthogonal. We can obtain two orthogonal eigenvectors via the Gram-Schmidt process. The
orthogonal eigenvectors are

            -1
v1* = v1 =   1
             0

              v2 · v1*           -1/2
v2* = v2  -  ---------- v1*  =   -1/2
              v1* · v1*            1
Solving (4I - A)x = 0, the eigenvectors are:
    1
r · 1 ,   r in R,  r ≠ 0.
    1

     1
v3 = 1   is an eigenvector of A. Standardizing the eigenvector results in
     1

       v3       1/√3
w3 = ------  =  1/√3 .
     ||v3||     1/√3
Thus,
1 / 2 1/ 6 1/ 3
P w1 w2 w3 1 / 2 1/ 6 1/ 3 ,
0 2/ 6 1 / 3
2 0 0
D
0 2 0
,and
D P t AP .
0 0 4
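The diagonalization just obtained can be verified numerically. The signs of the eigenvector entries are restored here as in the worked example (they are partly garbled in print), so this is a sketch under that reading.

```python
import numpy as np

# Symmetric matrix from Example 4.8.2.
A = np.array([[0., 2., 2.],
              [2., 0., 2.],
              [2., 2., 0.]])

# Orthonormal eigenvectors built in the example: Gram-Schmidt on the
# lambda = -2 eigenspace, then the normalized lambda = 4 eigenvector.
w1 = np.array([-1., 1., 0.]) / np.sqrt(2)
w2 = np.array([-0.5, -0.5, 1.]) / np.sqrt(1.5)
w3 = np.array([1., 1., 1.]) / np.sqrt(3)
P = np.column_stack([w1, w2, w3])

D = P.T @ A @ P
print(np.round(D, 10))   # diag(-2, -2, 4)
```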
In general, given linearly independent vectors v1, v2, ..., vn, the Gram-Schmidt process produces
orthogonal vectors v1*, v2*, ..., vn* by the recursion
v1* = v1,
vi* = vi - ((vi · v(i-1)*)/(v(i-1)* · v(i-1)*)) v(i-1)* - ((vi · v(i-2)*)/(v(i-2)* · v(i-2)*)) v(i-2)* - ... - ((vi · v1*)/(v1* · v1*)) v1*,
for i = 2, ..., n, ending with
vn* = vn - ((vn · v(n-1)*)/(v(n-1)* · v(n-1)*)) v(n-1)* - ... - ((vn · v1*)/(v1* · v1*)) v1*.
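The recursion above translates directly into code. A minimal sketch (the function name `gram_schmidt` is ours), applied to the two λ = -2 eigenvectors of Example 4.8.2:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors.

    Implements vi* = vi - sum_{j<i} (vi . vj*)/(vj* . vj*) vj*,
    the recursion displayed above.
    """
    ortho = []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        for u in ortho:
            v = v - (v @ u) / (u @ u) * u   # subtract projection onto u
        ortho.append(v)
    return ortho

v1, v2 = gram_schmidt([[-1., 1., 0.], [-1., 0., 1.]])
print(v1, v2)      # [-1.  1.  0.] and [-0.5 -0.5  1. ]
print(v1 @ v2)     # 0.0 (orthogonal)
```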
Unit Summary
Determinant of a square matrix is a scalar.
The determinant of a triangular matrix is the product of its main diagonal elements.
A matrix having zero rows or columns has a zero determinant.
If B is formed from A by multiplying row (or column) k by a scalar α, then |B| = α|A|.
If two rows (or columns) of A are the same, then |A| = 0.
If each entry of row (or column) k of A is written as a sum bi + ci, then |A| = |B| + |C|, where B
is the same as A except that its row (or column) k is (bi) and C is the same as A except that its
row (or column) k is (ci).
If A and B have the same size, then |AB |= |A ||B |.
If, for some scalar α, row (or column) k of A is α times row (or column) i, then |A| = 0.
If B is formed from A by interchanging two rows (or columns), then |B |=- |A |.
If B is formed from A by adding α times row (or column) i to row (or column) k, then |B| = |A|.
|A| = |A^t|.
If B = αA, then |B| = α^n |A|, where n is the order of A (and B).
The absolute value of a determinant is the volume of the parallelepiped determined by
the column vectors of the matrix.
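Several of the determinant facts in this summary can be spot-checked numerically; the following sketch verifies a few of them on a random 3 × 3 matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
alpha = 2.5

# |AB| = |A||B|
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
# |A| = |A^t|
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))
# |alpha A| = alpha^n |A| for an n x n matrix (here n = 3)
assert np.isclose(np.linalg.det(alpha * A), alpha**3 * np.linalg.det(A))
# Interchanging two rows flips the sign of the determinant.
assert np.isclose(np.linalg.det(A[[1, 0, 2]]), -np.linalg.det(A))
# A triangular matrix's determinant is the product of its diagonal entries.
T = np.triu(A)
assert np.isclose(np.linalg.det(T), np.prod(np.diag(T)))
print("all determinant identities verified")
```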
CHAPTER FIVE
LINEAR TRANSFORMATIONS
Unit outcomes
After the completion of this unit, students will be able to:
define linear transformations and their properties;
represent linear transformations and construct examples;
explain the rank and nullity of linear transformations and their representation;
describe the algebra of linear transformations and give examples;
find the matrix representation of linear transformations and their properties;
define eigenvalues and eigenvectors of a linear transformation and their representations;
and
explain the eigenspace of a linear transformation and give examples.
Introduction
In this section, we shall begin the study of functions of the form f(X) = Y, where the independent
variable X is a vector in R^n and the dependent variable Y is a vector in R^m. We shall concentrate on a
special class of such functions called “linear transformations.” Linear transformations are
fundamental in the study of linear algebra and have many important applications in physics,
engineering, social sciences, and various branches of mathematics.
5.1 Definition of linear Transformations and examples
A linear transformation is a special type of function. Hence before the discussion about linear
maps, it is helpful to revise the concept of functions.
Activity 5.1.1: Recall about the meaning and properties of a function. Also try to recall about
related concepts like domain, range, one to one, on to, composition, and inverse.
Recall that a function (mapping) from a set A to a set B consists of the following: the domain A,
the codomain B, and a rule that assigns to each element of A exactly one element of B.
Activity 5.1.2:
1) Let [ and [ . Define a function by:
.
a) show that is one-to-one
b) Show that is on to
c) Find the inverse of f
2) Let f : be given by f
i) Is one-to-one? Verify!
ii) Is onto? Verify!
iii) Does have inverse? If so find its inverse.
As we have said above, a linear transformation is a special type of function. So what conditions
must a function satisfy in order to be a linear transformation? The following definition gives a
complete answer to this question.
Definition 5.1.1: Let V and W be vector spaces over the same field K. A function T: V → W is
called a linear transformation (or a linear mapping) of V into W if it satisfies the following
conditions:
i) T(u + v) = T(u) + T(v) for all u, v ∈ V;
ii) T(αv) = αT(v) for all v ∈ V and all α ∈ K.
Note:
1) Using condition (ii) of the definition, one can show that T(0_V) = 0_W, where 0_V and 0_W are
the zero vectors in V and W respectively.
(Since for any v ∈ V, T(0_V) = T(0·v) = 0·T(v) (by (ii), and since 0·v = 0_V, where 0 is the zero
element of the field K) = 0_W.) This proves that a linear mapping maps the zero vector to the zero vector.
2) The two conditions in the definition are equivalent to the single condition
T(αu + βv) = αT(u) + βT(v) for all u, v ∈ V and all α, β ∈ K.
Proof: (⇒) Suppose the mapping T is a linear transformation. Then for all u, v ∈ V and α, β ∈ K,
T(αu + βv) = T(αu) + T(βv), by (i)
= αT(u) + βT(v), by (ii).
(⇐) Suppose that T(αu + βv) = αT(u) + βT(v) for all u, v ∈ V and α, β ∈ K. We want to show
that T is a linear transformation.
T(u + v) = T(1·u + 1·v), since 1 is the multiplicative identity in K
= 1·T(u) + 1·T(v), by the hypothesis
= T(u) + T(v) ------------------------------------------------------- (1)
T(αv) = T(αv + 0·v), since 0 is the zero element of K
= αT(v) + 0·T(v), by the hypothesis
= αT(v) + 0_W (0_W is the zero vector in W)
= αT(v) --------------------------------------------------------------------- (2)
Therefore, from equations (1) and (2) we see that T is a linear transformation.
In general, using induction, we can write
T(α1 v1 + α2 v2 + ... + αn vn) = α1 T(v1) + α2 T(v2) + ... + αn T(vn), i.e.,
T( Σ_{i=1}^{n} αi vi ) = Σ_{i=1}^{n} αi T(vi), for any n vectors v1, ..., vn in V and any n scalars
α1, ..., αn in K.
Most of the common functions studied in calculus are not linear transformations. In the
following example we see some functions which are not linear transformations.
5) Consider the following:
a) f(x) = sin x is not a linear transformation from R into R because, in general,
sin(x1 + x2) ≠ sin x1 + sin x2. For instance, sin(π/2 + π/2) = 0, whereas sin(π/2) + sin(π/2) = 2.
b) f(x) = x^2 is not a linear transformation from R into R because, in general, (x1 + x2)^2 ≠ x1^2
+ x2^2. For instance, (1 + 2)^2 = 9, whereas 1^2 + 2^2 = 5.
c) f(x) = x + 1 is not a linear transformation from R into R because f(x1 + x2) = x1 + x2 + 1,
whereas
f(x1) + f(x2) = (x1 + 1) + (x2 + 1)
= x1 + x2 + 2.
So f(x1 + x2) ≠ f(x1) + f(x2).
Remark: The function in (c) above points out two uses of the term "linear." In
calculus, f(x) = x + 1 is called a linear function because its graph is a line. It is not a linear
transformation from the vector space R into R, however, because it preserves neither vector addition
nor scalar multiplication.
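The three failures of additivity above can be checked directly; a short sketch:

```python
import math

# f(x) = sin x fails additivity:
assert not math.isclose(math.sin(math.pi/2 + math.pi/2),
                        math.sin(math.pi/2) + math.sin(math.pi/2))

# f(x) = x^2 fails additivity: (1 + 2)^2 = 9 but 1^2 + 2^2 = 5
assert (1 + 2)**2 != 1**2 + 2**2

# f(x) = x + 1 fails additivity: f(x1 + x2) = x1 + x2 + 1,
# while f(x1) + f(x2) = x1 + x2 + 2.
f = lambda x: x + 1
assert f(3 + 4) != f(3) + f(4)
print("none of the three maps is additive")
```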
6) Let a function T: be given by Show that T is a linear
transformation.
Solution: Let Then
i)
=
= , (since „+‟ is commutative)
= , since
=
=
ii)
=
=
= Thus, T is a linear transformation.
7) The norm map N(x, y, z) = √(x^2 + y^2 + z^2) on R^3 is not a linear transformation. For instance,
N((0, 1, 2) + (3, 0, 1)) = N(3, 1, 3) = √(3^2 + 1^2 + 3^2) = √19,
but N(0, 1, 2) + N(3, 0, 1) = √(0^2 + 1^2 + 2^2) + √(3^2 + 0^2 + 1^2) = √5 + √10.
Activity 5.1.3:
1) Show that the mapping defined by – is a linear
transformation.
2) Is the mapping defined by a linear transformation?
Justify your answer!
Let us add one more example. Recall that vectors v1, v2, ..., vn in a vector space V over a
field K are linearly independent iff α1 v1 + α2 v2 + ... + αn vn = 0 (where αi ∈ K)
implies α1 = α2 = ... = αn = 0.
Example 5.1.8: Let T be a linear transformation from a vector space V into W over the same
field K. Prove that the vectors v1, v2, v3, ..., vn ∈ V are linearly independent if
T(v1), T(v2), ..., T(vn) are linearly independent vectors in W.
Exercise 5.1.1
1) Determine whether or not each of the following mappings is a linear transformation.
a. given by
b) given by
c) given by
d) given by
e) given by
2) Let M2 denote the vector space of 2 × 2 matrices over R.
Let T: M2 → M2 be given by
T( [ a  b ; c  d ] ) = [ a+b  2c ; 3a+c  d ].
Is T a linear mapping? Justify your answer!
3) Let V be the vector space of m × n matrices over R. Let P be a fixed m × m matrix and Q a
fixed n × n matrix over R. Show that the mapping L: V → V defined by L(A) = PAQ is a
linear transformation.
4) Show that the mapping F: R^2 → R defined by F(a, b) = |a - b| is not a linear transformation.
5) Let G : V W be a linear transformation where V and W are vector spaces over the same
field K. Prove that if u1, u2, u3, …, un are linearly dependent vectors in V then G(u1), G(u2),
G(u3), …, G(un) are linearly dependent vectors in W.
6) Let U, V, and W be vector spaces over the same field K. If g: U V and f : V W are
linear transformations show that fog is also a linear transformation from U in to W.
7) i) Let A = (a, b, c) be a fixed given vector in R^3. Define T: R^3 → R by T(X) = A·X,
X ∈ R^3 (A·X is the scalar (dot) product of A and X). Show that T is a linear
transformation.
ii) Let A be as in (i) and define T: R^3 → R by T(X) = A·X + 4. Show that T is not
a linear transformation.
8) Let V be the space of n × 1 matrices over R and let W be the space of m × 1 matrices over R.
Let A be a fixed m × n matrix over R. Define T: V → W by T(X) = AX, X ∈ V. Prove that
i) T is a linear transformation.
ii) T is a zero transformation if and only if A is the zero matrix.
9) Let V be the vector space of all n × n matrices over R and let B be a fixed n × n matrix.
If T: V → V is defined by T(A) = AB - BA, then verify that T is a linear transformation.
10) Let V be a vector space over R, and let f: V → R, g: V → R be two linear transformations. Let
F: V → R^2 be the mapping defined by F(v) = (f(v), g(v)). Show that F is a linear
transformation.
11) Let V, W be two vector spaces over the same field K and let F: V W be a linear
transformation. Let U be the subset of V consisting of all elements u such that F(u) = O w.
Prove that U is a subspace of V.
12) Let F: R^3 → R^4 be a linear transformation. Let P be a point of R^3 and A a non-zero
element of R^3. Describe F[S], where S = {X ∈ R^3 | X = P + tA, t ∈ R}. (Distinguish the
cases when F(A) = 0 and F(A) ≠ 0.)
We now state two other basic properties in the following theorem. The proof is left for you as an
exercise.
Theorem 5.1.1: Let T be a linear transformation from a vector space V into W over the
same field K. Then
i) T(0) = 0
ii) T(-v) = -T(v) for all v ∈ V
iii) T(u - v) = T(u) - T(v) for all u, v ∈ V.
Our next theorem asserts that a linear transformation from a given finite dimensional vector
space V in to any vector space W is completely determined by its values on the elements of a
given basis of V.
Theorem 5.1.2: Let V and W be vector spaces over the field K. Let {v1, v2, ..., vn} be a basis
of V. If {w1, w2, ..., wn} is a set of arbitrary vectors in W, then there exists a unique linear
transformation F: V → W such that F(vj) = wj for j = 1, 2, ..., n. To prove the theorem, we
need to:
a) Define a function F from V into W such that F(vj) = wj for all j;
b) Show F is a linear transformation;
c) Show that F is unique.
Proof: Let V and W be vector spaces over the field K. Let {v1, v2, ..., vn} be a basis of V and
{w1, w2, ..., wn} be any set of n vectors in W. Since {v1, v2, ..., vn} is a basis of V, for any
v ∈ V there exist unique scalars a1, a2, ..., an such that
v = a1 v1 + a2 v2 + ... + an vn.
a) Define F: V → W by
F(v) = a1 w1 + a2 w2 + ... + an wn, where v = a1 v1 + a2 v2 + ... + an vn.
Every element v of V is mapped to exactly one element of W, as the scalars ai's are unique. As W
is a vector space, every linear combination of vectors in W is also in W, so
a1 w1 + ... + an wn ∈ W.
In particular, F(vj) = wj for each j = 1, 2, ..., n.
b) We show that F is a linear transformation,
i.e., F(x + y) = F(x) + F(y) and F(αx) = αF(x) for all x, y ∈ V and α ∈ K.
To do this, let x, y ∈ V. Then
x = x1 v1 + ... + xn vn and y = y1 v1 + ... + yn vn for some unique scalars xi, yi in K.
i) F(x + y) = F( (x1 v1 + ... + xn vn) + (y1 v1 + ... + yn vn) )
= F( (x1 + y1) v1 + ... + (xn + yn) vn )
= (x1 + y1) w1 + ... + (xn + yn) wn, by definition of F
= (x1 w1 + ... + xn wn) + (y1 w1 + ... + yn wn)
= F(x1 v1 + ... + xn vn) + F(y1 v1 + ... + yn vn), by definition of F
= F(x) + F(y).
ii) F(αx) = F( α(x1 v1 + ... + xn vn) )
= F( (αx1) v1 + ... + (αxn) vn )
= (αx1) w1 + ... + (αxn) wn, by definition of F
= α(x1 w1 + ... + xn wn)
= αF(x1 v1 + ... + xn vn), by definition of F
= αF(x).
Therefore, from (i) and (ii) we conclude that F is a linear transformation from V into W.
c) In (a) and (b) we have shown the existence of a linear transformation F: V → W such that
F(vj) = wj for all j = 1, 2, ..., n. To prove that F is unique, suppose that G: V → W is
a linear transformation such that G(vj) = wj for j = 1, 2, ..., n. Let x be any vector in V;
then x = x1 v1 + ... + xn vn for some unique scalars xi in K.
Thus G(x) = G(x1 v1 + ... + xn vn)
= x1 G(v1) + ... + xn G(vn), as G is a linear transformation
= x1 w1 + ... + xn wn, as G(vi) = wi for each i = 1, 2, 3, ..., n
= F(x1 v1 + ... + xn vn), by definition of F
= F(x).
Since G(x) = F(x) for any x ∈ V, we conclude that G = F.
This proves that F is unique. With this we complete the proof of the theorem.
Remark:
1) The vectors w1, w2, ..., wn in Theorem 5.1.2 are completely arbitrary; they may be
linearly dependent, independent, or even equal to each other. But the number of
these vectors in W must equal the number of basis vectors of V.
2) In determining the linear transformation from V into W, the assumption that {v1, v2, ..., vn}
is a basis of V is essential.
Example 5.1.3:
a) Is there a linear transformation T from R^2 into R^2 such that T(2, 3) = (4, 5) and
T(1, 0) = (0, 0)?
Solution: Since (x, y) = (y/3)(2, 3) + (x - (2/3)y)(1, 0), we have
T(x, y) = (y/3) T(2, 3) + (x - (2/3)y) T(1, 0)
= (y/3)(4, 5) + (x - (2/3)y)(0, 0)
= (4y/3, 5y/3) + (0, 0).
Therefore, T(x, y) = ((4/3)y, (5/3)y) for all (x, y) ∈ R^2.
Observe that the image of any vector (a, b) ∈ R^2 under the linear transformation of the given
example is b(4/3, 5/3). So the image of R^2 under T is the line through (0, 0) with
direction vector (4/3, 5/3).
4 5
Activity 5.1.4: Let T be as in Example 5.1.3 above, i.e., T(a, b) = ((4/3)b, (5/3)b).
i) Let A = { }. Find the image of A under T, i.e., T[A].
ii) Describe the set containing all elements of R^2 whose image is (0, 0).
Example 5.1.5: Find a linear transformation T: R^2 → R^3 such that T(1, 2) = (3, -1, 5) and
T(0, 1) = (2, 1, -1).
Solution: β = {(1, 2), (0, 1)} is a basis of R^2 (verify). Thus there exists a unique linear
transformation T: R^2 → R^3 such that T(1, 2) = (3, -1, 5) and T(0, 1) = (2, 1, -1). To
describe this unique linear transformation explicitly, suppose (x, y) ∈ R^2. Then
(x, y) = t1 (1, 2) + t2 (0, 1), for some t1, t2 ∈ R, as β is a basis of R^2
= (t1, 2t1 + t2).
Thus we have x = t1 and y = 2t1 + t2,
which in turn implies t1 = x and t2 = y - 2x. So (x, y) = x(1, 2) + (y - 2x)(0, 1) and
T(x, y) = x T(1, 2) + (y - 2x) T(0, 1)
= x(3, -1, 5) + (y - 2x)(2, 1, -1)
= (3x, -x, 5x) + (2y - 4x, y - 2x, 2x - y)
= (2y - x, y - 3x, 7x - y).
Therefore, the required linear transformation is given by
T(x, y) = (2y - x, y - 3x, -y + 7x).
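The formula just derived can be checked against the two prescribed values; a short sketch:

```python
# Linear map of Example 5.1.5, determined by T(1, 2) = (3, -1, 5)
# and T(0, 1) = (2, 1, -1).
def T(x, y):
    return (2*y - x, y - 3*x, 7*x - y)

assert T(1, 2) == (3, -1, 5)
assert T(0, 1) == (2, 1, -1)

# Spot-check additivity: T(u + v) == T(u) + T(v) componentwise.
u, v = (1, 2), (0, 1)
assert T(u[0] + v[0], u[1] + v[1]) == tuple(a + b for a, b in zip(T(*u), T(*v)))
print(T(2, 5))   # (8, -1, 9)
```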
Activity 5.1.5:
1) a) Find a linear transformation T: R^2 → R^2 such that T(1, 0) = (1, 1) and
T(0, 1) = (-1, 2).
b) Prove that T maps the square with vertices (0, 0), (1, 0), (1, 1) and (0, 1) onto a
parallelogram.
Writing (x, y, z) = a(1, -1, 1) + b(1, 1, 1) + c(0, 0, 1) gives the system
x = a + b
y = -a + b
z = a + b + c,
so that
a = (x - y)/2, b = (x + y)/2, c = z - x.
Thus (x, y, z) = ((x - y)/2)(1, -1, 1) + ((x + y)/2)(1, 1, 1) + (z - x)(0, 0, 1), and
L(x, y, z) = ((x - y)/2) L(1, -1, 1) + ((x + y)/2) L(1, 1, 1) + (z - x) L(0, 0, 1)
= ((x - y)/2)(1, 0) + ((x + y)/2)(0, 1) + (z - x)(0, 0)
= ((x - y)/2, (x + y)/2).
Therefore, we have obtained a linear transformation L: R^3 → R^2 given by
L(x, y, z) = ((x - y)/2, (x + y)/2). Moreover,
L(1, -1, 1) = ((1 - (-1))/2, (1 + (-1))/2) = (1, 0) and
L(1, 1, 1) = ((1 - 1)/2, (1 + 1)/2) = (0, 1).
Consequently, we can say that there is a linear transformation L: R^3 → R^2 satisfying the two
given conditions L(1, -1, 1) = (1, 0) and L(1, 1, 1) = (0, 1).
Note: The linear transformation T: V → W whose existence and uniqueness is guaranteed by
Theorem 5.1.2 depends on the given basis vectors of V and the given vectors of W whose number
equals the number of basis vectors in V.
Is L: R^3 → R^2 given by L(x, y, z) = ((x - y)/2, (x + y)/2) the only linear mapping that can satisfy
the requirements of the question in the above example? Replace (0, 0, 1) by (1, 0, 0) in the
solution of the above example and find a linear transformation
L: R^3 → R^2 such that L(1, -1, 1) = (1, 0) and L(1, 1, 1) = (0, 1). Do the same by replacing (0, 0)
by (1, 1) and (0, 0, 1) by (1, 0, -1).
Exercise5.1.2:
1) Find a linear transformation
a) such that and
b) such that and
c) such that and
d) such that
= And
ii) ( )
=
=
=
Therefore, T is a linear transformation.
b) To find Ker T and Im T for the given function T(x, y, z) = (x - y, y - z), we use the definitions:
Ker T = { (p, q, r) ∈ R^3 | T(p, q, r) = (0, 0) }
= { (p, q, r) ∈ R^3 | (p - q, q - r) = (0, 0) }
= { (p, q, r) ∈ R^3 | p - q = 0 and q - r = 0 }
= { (p, q, r) ∈ R^3 | p = q = r }
= { (p, p, p) | p ∈ R } = { p(1, 1, 1) | p ∈ R }.
For the image, for any (x, y, z) ∈ R^3,
T(x, y, z) = (x - y, 0) + (0, y - z) = (x - y)(1, 0) + (y - z)(0, 1).
In the next theorem we state an equivalent condition that helps us determine whether a given
linear transformation T is one-to-one (injective) or not, using the concept of Ker T.
Theorem 5.2.2: Let T: V → W be a linear transformation. Then T is one-to-one if and only if
Ker T = {0}.
Proof: (⇒) Suppose T is one-to-one. We need to show that Ker T contains only the zero vector.
Let v ∈ Ker T; then T(v) = 0. But T(0) = 0, so from this we have T(v) = T(0),
which implies v = 0 as T is one-to-one. Therefore Ker T = {0}.
(⇐) Suppose Ker T = {0}. Then we want to show that T is one-to-one.
Now let u, v ∈ V with T(u) = T(v), or T(u) - T(v) = 0.
Thus, by the definition of a linear transformation, we have
T(u - v) = T(u) - T(v) = 0, which implies u - v ∈ Ker T.
Since Ker T contains only the zero vector by assumption,
u - v = 0, and hence u = v. Consequently T is one-to-one.
Example 5.2.2: Let T: R^2 → R^3 be a linear transformation given by
T(a, b) = (a + b, a - b, b). Then:
i) Find Ker T and Im T.
ii) Is T one-to-one? Why?
Solution: i) Ker T = { (a, b) ∈ R^2 | T(a, b) = (0, 0, 0) }
= { (a, b) ∈ R^2 | (a + b, a - b, b) = (0, 0, 0) }
= { (a, b) ∈ R^2 | a + b = 0, a - b = 0 and b = 0 }
= { (0, 0) }. And using the definition of Im T, it follows
Im T = { (u, v, w) | (u, v, w) = T(a, b) for some (a, b) ∈ R^2 }
= { (u, v, w) | (u, v, w) = (a + b, a - b, b), a, b ∈ R }
= { (u, v, w) | u = a + b, v = a - b, w = b, a, b ∈ R }
= { (u, v, w) | u - v = 2b = 2w, a, b ∈ R }
= { (u, v, w) | u = v + 2w, v, w ∈ R }
= { (v + 2w, v, w) | v, w ∈ R }
= { (v, v, 0) + (2w, 0, w) | v, w ∈ R } = { v(1, 1, 0) + w(2, 0, 1) | v, w ∈ R }.
Therefore Im T is a subspace of R^3 generated by (1, 1, 0) and (2, 0, 1). What is dim(Ker T)? What
is dim(Im T)?
ii) Since Ker T = {(0, 0)} (it contains only the zero vector of R^2), T is one-to-one.
But if Ker T ≠ {0}, we can conclude that T is not one-to-one.
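The kernel and image just found can be verified via the matrix of T with respect to the standard bases, with T read as T(a, b) = (a + b, a - b, b):

```python
import numpy as np

# Matrix of T(a, b) = (a + b, a - b, b) with respect to standard bases.
M = np.array([[1., 1.],
              [1., -1.],
              [0., 1.]])

# rank 2 => dim Im T = 2, and by rank-nullity dim Ker T = 2 - 2 = 0.
assert np.linalg.matrix_rank(M) == 2

# Every column of M lies in span{(1,1,0), (2,0,1)}:
S = np.column_stack([[1., 1., 0.], [2., 0., 1.]])
for col in M.T:
    coeffs, res, *_ = np.linalg.lstsq(S, col, rcond=None)
    assert np.allclose(S @ coeffs, col)
print("Ker T = {0}, Im T = span{(1,1,0), (2,0,1)}")
```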
Activity 5.2.1:
Let T be given by –
a) Show that T is a linear transformation
b) i) Find ker T
ii) Is T one-to-one? Verify
iii) Find ImT.
Does a linear transformation map linearly independent vectors to linearly independent
vectors?
Consider the linear transformations L and T from in to given by
and . The vectors and are linearly independent
in . But and are linearly dependent vectors in
. On the other hand , are linearly independent.
Thus from this particular instance we conclude that a linear transformation may or may not map
linearly independent vectors to linearly independent vectors.
Under what condition does it map linearly independent vectors to linearly independent
vectors?
The following theorem provides a sufficient condition for this.
Hence But { }.
Thus, . Since are
linearly independent vectors in V, it follows that
This completes the proof.
In the theorem above, we have proved that if the kernel of a linear transformation T contains
only the zero vector (i.e., T is one-to-one), then T maps linearly independent vectors to linearly
independent vectors.
In the next theorem we relate the dimensions of the kernel and image of a linear transformation
L: V → W to the dimension of V. Before going to it, let us have the following definition.
Definition 5.2.2: Let L be a linear transformation from a vector space V in to W over the field K.
a) The dimension of the Kernel (the null space) of L is called the nullity of L.
b) The dimension of the Image (the range) of L is called the rank of L.
Example 5.2.3: The linear transformation T: R^3 → R^2 of the earlier example is given by
T(x, y, z) = (x - y, y - z). We have seen that its kernel is the subspace of R^3
generated by (1, 1, 1), so dim(Ker T) = 1; that is, the nullity of T is 1. Its image is a
subspace of R^2 with basis {(1, 0), (0, 1)}, so dim(Im T) = 2. Since dim(R^3) = 3
and 1 + 2 = 3, we have dim(Ker T) + dim(Im T) = dim(R^3).
Example 5.2.4: Consider the linear transformation T given in Example 5.2.2. We have
seen that its kernel contains only the zero vector, so dim(Ker T) = 0.
The set {(1, 1, 0), (2, 0, 1)} is a basis of its image, so dim(Im T) = 2. Again we have
dim(Ker T) + dim(Im T) = 0 + 2 = 2 = dim(R^2).
Theorem 5.2.4: (Rank-nullity theorem) Let V and W be vector spaces over the same field K.
Let T: V W be a linear transformation. If V is finite dimensional vector space then
i.e., dim(Ker T) + dim(Im T) = dim(V).
Proof: Since V is a finite dimensional vector space, it is obvious that Ker T and Im T
are finite dimensional. Moreover dim(Ker T) ≤ dim(V) and dim(Im T) ≤ dim(V).
Let {u1, ..., uk} and {w1, ..., wm} be bases of Ker T and Im T respectively.
Then there exist v1, ..., vm ∈ V such that T(vi) = wi for i = 1, 2, ..., m.
Claim: B = {u1, ..., uk, v1, ..., vm} is a basis of V.
Now we show that
i) B generates V;
ii) B is linearly independent.
i) Let v ∈ V. Then T(v) ∈ Im T, and hence there exist unique scalars b1, ..., bm in
K such that T(v) = b1 w1 + ... + bm wm.
So T(v) = b1 T(v1) + ... + bm T(vm) = T(b1 v1 + ... + bm vm), as T(vi) = wi for each i.
Hence T(v - (b1 v1 + ... + bm vm)) = 0, so v - (b1 v1 + ... + bm vm) ∈ Ker T.
Since {u1, ..., uk} is a basis of Ker T, there are scalars a1, ..., ak with
v = a1 u1 + ... + ak uk + b1 v1 + ... + bm vm. ……… (1)
Thus B generates V.
ii) Suppose a1 u1 + ... + ak uk + b1 v1 + ... + bm vm = 0. Applying T and using T(uj) = 0
for all j = 1, 2, ..., k gives b1 w1 + ... + bm wm = 0, so b1 = ... = bm = 0,
since {w1, ..., wm} is a basis of Im T.
Replacing the bi by 0 in the relation above gives a1 u1 + ... + ak uk = 0,
so a1 = ... = ak = 0, since {u1, ..., uk} is a basis of Ker T.
That is, B is a linearly independent set in V. From (i) and (ii) it follows that
B is a basis of V.
Hence dim(V) = k + m = dim(Ker T) + dim(Im T).
Therefore, dim(Ker T) + dim(Im T) = dim(V).
Example 5.2.5: 1) Let T: R^4 → R^3 be a linear transformation defined by
T(x, y, z, w) = (x + y + z + w, x + 2z + 2w, x - y + 3z + 3w).
A vector (x, y, z, w) is in Ker T iff
x + y + z + w = 0 (1)
x + 2z + 2w = 0 (2)
x - y + 3z + 3w = 0 (3).
Adding (1) and (3), we get 2x + 4z + 4w = 0. By dividing both sides of this equation by 2,
we get equation (2),
i.e., x + 2z + 2w = 0.
Thus the system reduces to (1) and (2). From (2), x = -2z - 2w; substituting into (1), y = z + w. So
Ker T = { (-2z - 2w, z + w, z, w) | z, w ∈ R }
= { z(-2, 1, 1, 0) + w(-2, 1, 0, 1) | z, w ∈ R }.
Hence the vectors v1 = (-2, 1, 1, 0) and v2 = (-2, 1, 0, 1) generate Ker T, as every vector in Ker T
can be written as a linear combination of v1 and v2. Moreover, they are linearly
independent vectors in Ker T. Therefore {v1, v2} is a basis of Ker T and hence
dim(Ker T) = 2. That is, the nullity of T is 2. Now by using the rank-nullity theorem, we can easily
determine the rank of T.
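The nullity just computed can be confirmed via the rank of the coefficient matrix; the sketch below assumes the defining equations are x+y+z+w, x+2z+2w and x-y+3z+3w (some coefficients are garbled in the source).

```python
import numpy as np

# Coefficient matrix of T(x, y, z, w) = (x+y+z+w, x+2z+2w, x-y+3z+3w).
M = np.array([[1., 1., 1., 1.],
              [1., 0., 2., 2.],
              [1., -1., 3., 3.]])

rank = np.linalg.matrix_rank(M)
nullity = M.shape[1] - rank          # rank-nullity: dim Ker T = n - rank
print(rank, nullity)                 # 2 2

# The two kernel basis vectors found above lie in the null space:
for v in ([-2., 1., 1., 0.], [-2., 1., 0., 1.]):
    assert np.allclose(M @ np.array(v), 0)
```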
. If then
=
=
Therefore, is a linear transformation.
Example 5.2.7: Let T: R^3 → R^3 be a linear transformation defined by
T(a, b, c) = (a + b + c, 2a + b - c, 3a + 2b).
i) Find a basis for Ker T.
ii) Find a basis for Im T.
iii) Verify the rank-nullity theorem for T.
Solution: i) From the definition of Ker T, we have (a, b, c) ∈ Ker T iff T(a, b, c) = (0, 0, 0), i.e.,
by definition,
a + b + c = 0
2a + b - c = 0
3a + 2b = 0.
From the third equation, b = -(3/2)a.
Substituting this into the first equation gives c = (1/2)a.
Thus every vector in Ker T has the form
(a, -(3/2)a, (1/2)a) = a(1, -3/2, 1/2),
so {(1, -3/2, 1/2)} is a basis of Ker T.
ii) Im T = { (u, v, w) | (u, v, w) = T(x, y, z) for some (x, y, z) ∈ R^3 }, where
x + y + z = u (1)
2x + y - z = v (2)
3x + 2y = w (3).
By adding equations (1) and (2) we get 3x + 2y = u + v. But 3x + 2y = w,
so w = u + v.
Thus any vector in Im T can be written in the form
(u, v, u + v) = (u, 0, u) + (0, v, v)
= u(1, 0, 1) + v(0, 1, 1).
Hence every vector in Im T is a linear combination of (1, 0, 1) and (0, 1, 1).
Thus {(1, 0, 1), (0, 1, 1)} generates Im T. In addition, (1, 0, 1) and (0, 1, 1) are linearly
independent. Therefore {(1, 0, 1), (0, 1, 1)} is a basis of Im T.
iii) dim(Ker T) = 1, as its basis contains only one non-zero vector, and dim(Im T) = 2, as its basis
contains two vectors, i.e.,
dim(Ker T) + dim(Im T) = 1 + 2 = 3 = dim(R^3), as stated in the theorem.
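Both parts of the computation can be cross-checked with the matrix of T, reading the map as T(a, b, c) = (a+b+c, 2a+b-c, 3a+2b):

```python
import numpy as np

# Matrix of T(a, b, c) = (a+b+c, 2a+b-c, 3a+2b) from Example 5.2.7.
M = np.array([[1., 1., 1.],
              [2., 1., -1.],
              [3., 2., 0.]])

rank = np.linalg.matrix_rank(M)
print(rank, 3 - rank)                                   # rank 2, nullity 1

# The kernel basis vector (1, -3/2, 1/2):
assert np.allclose(M @ np.array([1., -1.5, 0.5]), 0)
# Every image vector (u, v, w) = M x satisfies w = u + v, i.e. row3 = row1 + row2:
assert np.allclose(M[2], M[0] + M[1])
```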
Recall that a linear transformation T: V → W
i) is one-to-one iff Ker T = {0};
ii) is onto iff Im T = W.
Now suppose V and W are respectively n- and m-dimensional vector spaces over the same
field K. If n > m, is there a one-to-one linear transformation from V into W?
Suppose there is; then dim(Ker T) = 0, and hence dim(Im T) = n > m = dim(W), which is
impossible, since Im T is a subspace of W.
A = [ 1  2  0  1
      2  1  2  1
      1  4  4  2 ], and X is a column vector in R^4.
2) Find a linear transformation such that { – } is the
i) Kernel of T ii) image of T
3) Find a linear transformation whose kernel is generated by
.
4) Find a linear transformation whose image is generated by
–
ii) –
–
Theorem 5.3.2: Let V, W and Z be vector spaces over the field K. Let T and S be linear
transformations from V into W and from W into Z respectively. Then the
composite function S∘T defined by (S∘T)(v) = S(T(v)) for all v ∈ V is a
linear transformation.
Proof: Let T and S be as in the hypothesis of the theorem. Let u, v ∈ V and α ∈ K. Then:
(S∘T)(u + v) = S(T(u + v))
= S(T(u) + T(v)), because T is a linear transformation
= S(T(u)) + S(T(v)), because S is a linear transformation
= (S∘T)(u) + (S∘T)(v),
and
(S∘T)(αv) = S(T(αv))
= S(αT(v)), because T is a linear transformation
= αS(T(v)), because S is a linear transformation
= α(S∘T)(v).
Hence S∘T is a linear transformation.
Example 5.3.3: Let be a linear operator on given by
). Show that – , where I is the identity mapping on
and is the zero mapping on .
Solution: –
– – – So,
[ – ] –
– –
– – – –
Thus, –
Activity 5.3.2:
1) Let T and S be linear operators on . Does imply either ?
Explain! Give a counter example if your answer is No.
2) Let V be finite dimensional vector space over the field F and T be a linear operator on V.
Suppose that T. Prove that the range and null space of T have only the
zero vector in common.
Now let us discuss the inverse of a linear transformation. Recall that a function T from V into
W is called invertible if there exists a function S from W into V such that S∘T is the identity
function on V and T∘S is the identity function on W. If T is an invertible function, then S is unique
and is denoted by T⁻¹, called the inverse of T, whenever it exists.
Furthermore, we know that T is invertible iff T is one-to-one and onto.
Theorem 5.3.3: Let V and W be vector spaces over the field K and let T be a linear
transformation from V into W. If T is invertible, then the inverse function T⁻¹: W → V is a
linear transformation.
Proof: Suppose T is invertible. Then there exists a unique function T⁻¹: W → V such
that T⁻¹(T(v)) = v for all v ∈ V and T(T⁻¹(w)) = w for all w ∈ W. Moreover, T is 1-1 and onto.
We need to show that T⁻¹ is linear. Let w1, w2 ∈ W and α ∈ K. Then there exist unique vectors
v1, v2 ∈ V such that T(v1) = w1 and T(v2) = w2, as T is 1-1 and onto. So
v1 = T⁻¹(w1) and v2 = T⁻¹(w2). Now
T⁻¹(w1 + αw2) = T⁻¹(T(v1) + αT(v2))
= T⁻¹(T(v1 + αv2)), because T is linear
= v1 + αv2
= T⁻¹(w1) + αT⁻¹(w2).
Thus T⁻¹(w1 + αw2) = T⁻¹(w1) + αT⁻¹(w2) for any w1, w2 ∈ W and α ∈ K.
Therefore, T⁻¹ is a linear transformation.
Remark: Let T: V → W and S: W → Z be invertible linear transformations. Then
i) T⁻¹ is also invertible and (T⁻¹)⁻¹ = T;
ii) S∘T is also invertible and (S∘T)⁻¹ = T⁻¹∘S⁻¹.
Example 5.3.4: Let T: R^3 → R^3 be a linear transformation given by T(x, y, z) = (x, 3y, z).
Is T invertible? If so, find a rule for T⁻¹ like the one which defines T.
Solution: We know that if Ker T = {(0, 0, 0)}, then T is one-to-one.
Ker T = { (a, b, c) ∈ R^3 | T(a, b, c) = (0, 0, 0) }
= { (a, b, c) ∈ R^3 | (a, 3b, c) = (0, 0, 0) }
= { (a, b, c) ∈ R^3 | a = 0, 3b = 0 and c = 0 }
= { (0, 0, 0) }.
Therefore T is one-to-one. Moreover, from the rank-nullity theorem,
dim(Im T) = dim(R^3) - dim(Ker T) = 3 - 0 = 3.
Since Im T is a subspace of R^3 and dim(Im T) = 3 = dim(R^3), we have
Im T = R^3. Hence T is onto.
Therefore T is invertible, as it is one-to-one and onto.
To find T⁻¹, let (u, v, w) = T(x, y, z) = (x, 3y, z). So x = u,
y = v/3, z = w. Thus
T⁻¹(u, v, w) = (u, v/3, w).
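The rule for T⁻¹ can be checked by composing in both orders; a short sketch:

```python
# T(x, y, z) = (x, 3y, z) from Example 5.3.4 and its inverse.
def T(x, y, z):
    return (x, 3*y, z)

def T_inv(u, v, w):
    return (u, v / 3, w)

# T_inv o T and T o T_inv are both the identity:
for p in [(1., 2., 3.), (-4., 0., 5.)]:
    assert T_inv(*T(*p)) == p
    assert T(*T_inv(*p)) == p
print(T_inv(2., 9., 7.))   # (2.0, 3.0, 7.0)
```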
Whenever V is a finite dimensional vector space, one-to-oneness can be related to linear
independence and rank as stated in the theorem below.
Theorem 5.3.4: Let T: V → W be a linear transformation and V be a finite dimensional vector
space. Then the following statements are equivalent.
i) T is 1 - 1.
ii) If v1, v2, ..., vk are linearly independent vectors in V, then T(v1), T(v2), ..., T(vk)
are linearly independent vectors in Im T.
iii) If {v1, v2, ..., vn} is a basis of V, then {T(v1), T(v2), ..., T(vn)} is a basis of Im T.
Proof: Left as exercise
Activity 5.3.3:
Prove Theorem 5.3.4 by following these steps:
1) Show that (iv) (i)
2) Show that (i) (ii)
3) Show that (ii) (iii)
4) Show that (iii) (iv)
5) Form 1, 2, 3 and 4, can we conclude that (i) (ii) (iii) (iv)? How?
Remark: For finite dimensional vector spaces V and W over the field K with dim(V) = dim(W),
we have the following results about any linear transformation T: V → W.
a) T is 1 - 1 iff T is invertible.
b) T is onto iff T is invertible.
Thus is the pre-image of v under I – L. So for any vector v V there exists a vector
in V such that – . From this it follows that – is on
to. Therefore I – L is invertible as it is 1 – 1 and on to.
Exercise 5.3.1:
1) Let T and S be linear operators on defined by and
.
i) How do you describe T and S geometrically?
ii) Give rules like the one defining T and S for each of the linear transformations
–
2) Let T be the linear operator on defined by
– Is T invertible? If so, find a rule for .
3) For the linear operator T of exercise 2, show that – – (I – identity
mapping and O – zero mapping).
4) Let T be a linear transformation from in to and Let U be a linear transformation from
in to . Prove that the linear transformation UT is not invertible.
5) Let L: R^3 → R^3 be a linear transformation. Show that L is invertible and find L⁻¹ for:
a) L( x, y, z ) ( x y, x z , y 3 z )
b) L( x, y, z ) (3x y z , x y, 4 x y z )
x
x
by T B 2 x y is the linear transformation associated with matrix B.
y 3x
Let A = [ 1  3
          0  1 ], u = (0, 2)^t, v = (2, 0)^t, w = (2, 2)^t, and let T_A be the linear transformation
associated with the matrix A. Find
i) T_A(u), T_A(v) and T_A(w);
ii) the image of the square with vertices (0, 0)^t, (2, 0)^t, (2, 2)^t and (0, 2)^t.
Solution:
i) T_A(u) = Au = (6, 2)^t,
T_A(v) = Av = (2, 0)^t,
T_A(w) = Aw = (8, 2)^t.
ii) T_A deforms the given square as if the top of the square were pushed to the right while
the base is held fixed (a shear; see the figure below).
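The three images above come straight from matrix-vector products; a quick sketch:

```python
import numpy as np

# Shear matrix from the example above.
A = np.array([[1., 3.],
              [0., 1.]])
u = np.array([0., 2.])
v = np.array([2., 0.])
w = np.array([2., 2.])

print(A @ u, A @ v, A @ w)   # [6. 2.] [2. 0.] [8. 2.]
```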
1 0
2) A . Give a geometrical description of the linear transformation
0 1
associated with .
3)
i) Find the linear transformation T_B associated with B.
ii) Find X in R^2 whose image under T_B is b.
iii) Determine whether the system BX = b has no solution, a unique solution or many
solutions.
In this section we will study systems of linear equations with the help
of the linear transformation associated with the coefficient matrix of the system. Consider the
system AX = b. The system has a solution iff b is in the range of T_A. If there is exactly one element
X ∈ R^n whose image is b under T_A, then the system AX = b has exactly one solution. But if b has more
than one pre-image under T_A, then the system has more than one solution. If there is no X such
that T_A(X) = b (i.e., b is not in the range of T_A), then the system has no solution.
Activity 5.4.3:
Prove that if Ker T_A = {0}, then the system AX = b has at most one solution. Further, if b is
the zero column vector in R^m, then the homogeneous system AX = 0 has at least one solution.
What is this solution? The solution set of AX = 0 is the kernel of T_A. So the solution set of
AX = 0 is a subspace of R^n. Suppose dim(Ker T_A) = k and {v1, ..., vk} is a basis of
Ker T_A. Then any solution of AX = 0 can be expressed as X = c1 v1 + ... + ck vk,
where c1, ..., ck are scalars. Now let X0 be one particular solution of the non-
homogeneous system AX = b and let w be any solution of AX = b. Then AX0 = b = Aw, so
A(w - X0) = 0, i.e., w - X0 ∈ Ker T_A.
ii) Is b ∈ Im T_A?
iii) Is there more than one X whose image under T_A is b?
iv) Describe the solution set of AX = b.
Solution: T_A: R^3 → R^2 is given by
T_A( (x1, x2, x3)^t ) = [ 1  0  3
                          2  1  3 ] (x1, x2, x3)^t = ( x1 + 3x3, 2x1 + x2 + 3x3 )^t.
i) Ker T_A = { X ∈ R^3 | T_A(X) = 0 }.
So (a, b, c)^t ∈ Ker T_A iff
a + 3c = 0 and 2a + b + 3c = 0,
which gives a = -3c and b = -2a - 3c = 3c. Thus
Ker T_A = { (-3c, 3c, c)^t | c ∈ R } = { c(-3, 3, 1)^t | c ∈ R }.
ii) Since dim(Ker T_A) = 1 and 3 = dim(Ker T_A) + dim(Im T_A), we get dim(Im T_A) = 2,
so Im T_A = R^2 and b ∈ Im T_A.
iii) Since b has a pre-image under T_A, there exists x0 ∈ R^3 such that T_A(x0) = b. Moreover,
for any λ ∈ R, w = x0 + λ(-3, 3, 1)^t is a pre-image of b, as
T_A(w) = T_A(x0) + λ T_A((-3, 3, 1)^t) = b + 0 = b.
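The kernel computation for T_A can be verified numerically with A = [1 0 3; 2 1 3]:

```python
import numpy as np

# A from the example; T_A(X) = AX.
A = np.array([[1., 0., 3.],
              [2., 1., 3.]])

# The kernel basis vector (-3, 3, 1) is annihilated by A:
assert np.allclose(A @ np.array([-3., 3., 1.]), 0)
# rank 2 => Im T_A = R^2, so T_A is onto and dim Ker T_A = 3 - 2 = 1.
assert np.linalg.matrix_rank(A) == 2
print("dim Ker T_A = 1, dim Im T_A = 2")
```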
Exercise 5.4.1.1:
1) Let A = [ 1  0  1
             3  1  5
             4  2  1 ] and T_A be the linear transformation associated with matrix A. Find
X such that T_A(X) =
2) Let A = [ 1  3  4  3
             0  1  3  2
             3  7  6  5 ], b = (1, 1, 7)^t, and T_A be the linear transformation associated with
matrix A.
i) Find Ker T_A.
ii) Is b in the range of T_A?
iii) Describe the solution set of AX = b.
3) Suppose T: R^5 → R^2 and T(X) = AX for some matrix A and each X in R^5. How many
rows and columns does A have?
ii) Consider a basis B = {b1, b2} of R^2, with b1 = (1, 0)^t and b2 = (1, 2)^t. The coordinate
vector of X = (1, 6)^t relative to B is [X]_B = (-2, 3)^t, since X = -2 b1 + 3 b2.
Activity 5.4.4:
5
1) Find the vector X determined by the coordinate vector X where
3
3 4
, 6
5
2) Find the coordinate vector [X]_B of X = (3, 5, 4)^t relative to the basis
B = { (1, 0, 3)^t, (2, 1, 8)^t, (1, 1, 2)^t }
of R^3.
3) Let be the space of polynomial functions from in to of degree two or less and P be
an element of defined by
a) Find the coordinate vector of P relative to the standard basis
{ }, where, , and .
b) Find the coordinate vector of P relative to the basis { }, where
, and, .
Now let us deal with the matrix of a given linear transformation. Let V be an n-dimensional
vector space over the field K and let W be an m-dimensional vector space over K. Let
β = {v1, ..., vn} and β' = {w1, ..., wm} be ordered bases of V and W respectively. Suppose
T: V → W is a linear transformation. Then for any vj in β, T(vj) ∈ W and can be expressed
as a linear combination of elements of the basis β'. So we have
T(v1) = a11 w1 + a21 w2 + ... + am1 wm
T(v2) = a12 w1 + a22 w2 + ... + am2 wm
...
T(vj) = a1j w1 + a2j w2 + ... + amj wm ……… (1)
...
T(vn) = a1n w1 + a2n w2 + ... + amn wm,
where the aij's are scalars in K.
Writing the coordinate vectors of T(v1), T(v2), ..., T(vn) successively as columns of a matrix, we get
M = ( [T(v1)]_β'  [T(v2)]_β'  ...  [T(vj)]_β'  ...  [T(vn)]_β' ). ……… (2)
The matrix M in (2) is called a matrix representation of T, or the matrix for T relative to the
bases β and β'. If β and β' are the standard bases of V and W respectively, we call the matrix M in
(2) the standard matrix for the linear transformation T.
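The construction in (2) can be sketched in code. The map T(x, y) = (x + y, x - y) and the two bases below are made-up illustrations, not taken from the text; column j of M is the coordinate vector of T(vj) relative to the codomain basis.

```python
import numpy as np

# Illustrative map T(x, y) = (x + y, x - y); beta and beta_prime are
# hypothetical bases chosen only to demonstrate the construction in (2).
def T(x, y):
    return np.array([x + y, x - y])

beta = [np.array([1., 0.]), np.array([1., 1.])]          # basis of V
beta_prime = [np.array([1., 0.]), np.array([0., 2.])]    # basis of W

# Column j of M is [T(v_j)] relative to beta_prime: solve W c = T(v_j).
W = np.column_stack(beta_prime)
M = np.column_stack([np.linalg.solve(W, T(*v)) for v in beta])
print(M)

# Check equation (5): [T(x)]_{beta'} = M [x]_{beta} for x = 2 v1 + 3 v2.
coords = np.array([2., 3.])
x = 2*beta[0] + 3*beta[1]
assert np.allclose(np.linalg.solve(W, T(*x)), M @ coords)
```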
Example 5.4.4: Define by
a) Find the standard matrix for T.
b) Find the matrix of T relative to the ordered bases
{ } { } respectively.
Solution: a) Here we use the standard basis of .
The matrix of T relative to β and β' is
M = [ 0  7  4
      4  6  5 ].
Our next task is to examine how the matrix M in (2) determines the linear transformation T.
If x = x1 v1 + x2 v2 + ... + xn vn is a vector in V, then the coordinate vector of x relative to β is
[x]_β = (x1, x2, ..., xn)^t,
and
T(x) = x1 T(v1) + x2 T(v2) + ... + xn T(vn). ……… (3)
Using the basis β' in W, we can rewrite (3) in terms of coordinate vectors relative to β' as
[T(x)]_β' = x1 [T(v1)]_β' + x2 [T(v2)]_β' + ... + xn [T(vn)]_β'. ……… (4)
Thus, if X = [x]_β is the coordinate vector of x relative to β, then equation (4) shows that
[T(x)]_β' = MX. ……… (5)
Note: In the case when W is the same as V and the basis β' is the same as β, the matrix M in (2)
is called the matrix of T relative to β.
Activity 5.4.5: Using equation (3), verify equations (4) and (5).
Example 5.4.5: Let be the linear transformation defined by
Find the matrix of T relative to the bases
{ of and { } of .
Solution:
Solution: Let x ∈ R^3 be the vector whose coordinate vector relative to β is [x]_β = (3, 4, 0)^t.
Then the coordinate vector of T(x) relative to β' is
[T(x)]_β' = M [x]_β = [ 0   6  1
                        0  -5  1
                        1   2  7 ] (3, 4, 0)^t = (24, -20, 11)^t.
Hence T(x) = 24 b1 - 20 b2 + 11 b3.
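Equation (5) is just a matrix-vector product, and the computation above can be replayed; the signs of M are restored here so that the product matches the stated result 24 b1 - 20 b2 + 11 b3.

```python
import numpy as np

# Matrix and coordinate vector from the example above.
M = np.array([[0., 6., 1.],
              [0., -5., 1.],
              [1., 2., 7.]])
X = np.array([3., 4., 0.])

print(M @ X)   # [ 24. -20.  11.]
```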
Exercise 5.4.2.1:
1) Let F be defined by Find the matrix
associated with F with respect to the standard bases of and .
2) Let T defined by Find the matrix of T relative to the
bases { } { }
1 2 1
3) Let A and TA be a linear mapping from to defined by
3 4 0
where v is a column vector in . Find the matrix of relative to the
1 0 0
1 2
B 1 0 , 1 , 0 B 2 , of and respectively.
0 0 1 3 5
4) Suppose that { } and { } be a basis for real vector spaces V and
W, respectively. Let T : V W be a linear transformation with the property that
Summary
Let V and W be vector spaces over the same field K. A function T: V → W is called a
linear transformation (or a linear mapping) of V into W if it satisfies the following
conditions:
i) T(u + v) = T(u) + T(v) for all u, v ∈ V;
ii) T(αv) = αT(v) for all v ∈ V and α ∈ K.
If T: V → W is a linear transformation from a vector space V into W over the same field K,
then
i) T(0_V) = 0_W;
ii) T(α1 v1 + α2 v2 + ... + αn vn) = α1 T(v1) + α2 T(v2) + ... + αn T(vn),
for αi ∈ K and vi ∈ V, i = 1, 2, ..., n.
Miscellaneous Exercise
1) Determine whether or not each of the following mappings is a linear transformation.
given by
b) given by
c) given by
2) Let M2 denote the vector space of 2 × 2 matrices over R.
Let T: M2 → M2 be given by
T( [ a  b ; c  d ] ) = [ a+b  2c ; 3a+c  d ].
Is T a linear mapping? Justify your answer!
4) Let V be the vector space of m × n matrices over R. Let P be a fixed m × m matrix and Q a
fixed n × n matrix over R. Show that the mapping L: V → V defined by L(A) = PAQ is a
linear transformation.
5) Show that the mapping F: 2 defined by F(a, b) = |a – b| is not a linear
transformation
6) Let V be the space of n × 1 matrices over R and let W be the space of m × 1 matrices over
R. Let A be a fixed m × n matrix over R. Define T: V → W by T(X) = AX, X ∈ V.
Prove that i) T is a linear transformation.
ii) T is a zero transformation if and only if A is the zero matrix.
7) Let L: V → W be a linear transformation. Let w ∈ W and v0 ∈ V such that L(v0) = w. Show that L(v) = w iff v = v0 + u, where u is an element of the kernel of L.
8) Let the linear transformation be defined by
– – . Find the rank and nullity of T.
9) Show that the linear transformation for which and is both one-to-one and onto.
10) Let T be the linear transformation from R^3 into R^3 defined by: – –
11) If (a, b, c) is a vector in R^3, what are the conditions on a, b, and c so that the vector be in the range of T? What is the rank of T?
12) What are the conditions on a, b, and c so that (a, b, c) be in the null space of T? What is the nullity of T? Is T one-to-one? Why? And is T onto? Why?
13) Let L: R^3 → R^3 be a linear transformation. Show that L is invertible and find L^-1 for:
a) L(x, y, z) = (x + y, x + z, y + 3z)
b) L(x, y, z) = (3x + y + z, x + y, 4x + y + z)
CHAPTER SIX
GROUP REPRESENTATION
6.1 Definition and Examples
It takes patience to appreciate the diverse ways in which groups arise, but one of these ways is so
familiar that we can use it to ease our way into the basic definition. To this end, recall the
following three things about the set of integers with respect to the operation addition. First
addition is associative. Second, 0 is an identity. And third, relative to 0, each integer has an
inverse ( its negative ). They show that the integers with addition form a group, in the sense of
the following definition.
Definition 6.1.1: We say that a system (G, *), consisting of a non-empty set G and a binary operation * on G, is a group if the following four axioms are satisfied:
G1: a * b ∈ G for all a, b ∈ G … (Closure)
G2: (a * b) * c = a * (b * c) for all a, b, c ∈ G … (Associativity)
G3: There exists e ∈ G such that a * e = e * a = a for all a ∈ G … (Existence of identity element)
G4: For each a ∈ G, there exists a^-1 ∈ G such that a * a^-1 = a^-1 * a = e … (Existence of inverse elements)
Remarks:
1. Axiom G1 is actually redundant, as the fact that * is a binary operation on G guarantees that G is closed under *. But G1 is traditionally stated in the definition of a group along with the other axioms to emphasize the closure property of the binary operation.
2. A group may have many properties other than those specified by the axioms. One property
enjoyed by many groups is that the binary operation is commutative. These groups are so
important we make the following definition:
Examples 6.1.1
a) (Z, +) is a group, where Z is the set of integers. First, the sum of any two integers is an integer; that is, a + b ∈ Z for all a, b ∈ Z, so closure is satisfied. 0 is an identity element of Z because a + 0 = 0 + a = a for any a ∈ Z. Again, for any a ∈ Z there is -a ∈ Z such that a + (-a) = 0, which means every integer has an inverse. Therefore, (Z, +) is a group. (N, +) is not a group: Axiom G4 fails. (Show!)
Definition 6.1.2: A group (G, *) is said to be abelian (or commutative) if the following axiom is also satisfied:
G5: a * b = b * a for all a, b ∈ G …. (Commutativity)
The groups in examples (a) and (b) are abelian, while the groups in examples (c) and (d) are not, because matrix multiplication is not commutative and, for part (d), composing the two elements chosen there in the two orders gives different results.
σ : a1 → a2, a2 → a3, a3 → a1, a4 → a4
is a permutation on {a1, a2, a3, a4}. It is advisable to develop a shorthand notation for the mapping σ: writing the symbols in the top row and their images beneath them, the notation
    σ = ( 1 2 3 4 / 2 3 1 4 )
will be used to represent the permutation σ. Let the permutation τ be represented by:
    τ = ( 1 2 3 4 / 3 4 1 2 )
Since σ and τ are mappings, we can compose them as usual to obtain σ o τ and τ o σ:
    σ o τ = ( 1 2 3 4 / 2 3 1 4 ) o ( 1 2 3 4 / 3 4 1 2 ) = ( 1 2 3 4 / 1 4 2 3 )
    τ o σ = ( 1 2 3 4 / 3 4 1 2 ) o ( 1 2 3 4 / 2 3 1 4 ) = ( 1 2 3 4 / 4 1 3 2 )
Observe that σ o τ ≠ τ o σ.
b) Returning to Sn, we can easily see that in general a permutation σ on the n symbols a1, a2, …, an can be represented by:
    σ = ( 1 2 … n / i1 i2 … in ), where i1, i2, …, in is a simple rearrangement of 1, 2, …, n.
c) What is the order of Sn? The order of Sn is the number of permutations of 1, 2, …, n, and it is n!.
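The composition of σ and τ worked out above, and the count n! for the order of Sn, can both be illustrated with a short sketch (permutations are stored in one-line notation: the image of i sits at index i-1):

```python
# Sketch: composing the permutations sigma and tau from the text, and
# counting the permutations of n symbols.

import itertools
import math

sigma = [2, 3, 1, 4]   # 1->2, 2->3, 3->1, 4->4
tau   = [3, 4, 1, 2]   # 1->3, 2->4, 3->1, 4->2

def compose(p, q):
    """Return p o q, where (p o q)(i) = p(q(i))."""
    return [p[q[i] - 1] for i in range(len(q))]

print(compose(sigma, tau))   # sigma o tau -> [1, 4, 2, 3]
print(compose(tau, sigma))   # tau o sigma -> [4, 1, 3, 2]; composition
                             # of permutations is not commutative

# the order of S_n is n!
for n in range(1, 6):
    assert sum(1 for _ in itertools.permutations(range(n))) == math.factorial(n)
```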
Exercise 6.1
1) Determine which of the following are groups.
a) ({-1, 1}, ·), where "·" is the usual multiplication.
b) (P(X), ∪), where X is a set, P(X) is the set of all subsets of X, and "∪" denotes union of sets.
c) where –
d) (S, ·), where S = { : is a non-zero real number and } and "·" is the usual multiplication.
e) (R^2\{(0, 0)}, ·), where (a, b)·(c, d) = (ac – bd, ad + bc).
2) Give an addition table for Z5.
3) Let F be the set of all functions f: R → R. Prove that F together with the usual addition of functions is a group.
4) Let G be a group and a1, a2, …, an ∈ G. Show that (a1 a2 … an)^-1 = an^-1 an-1^-1 … a1^-1.
5) Let { } and, for , let be given by . Show that the
set { } together with composition for functions form a group.
6) Let G be a group and for each a ∈ G let τa : G → G be defined by τa(x) = ax. Prove that the set {τa : a ∈ G} together with composition of functions is a group.
7) Compute the following products in the group S3.
a) ( 1 2 3 / 3 2 1 ) ( 1 2 3 / 2 3 1 )
b) ( 1 2 3 / 2 3 1 ) ( 1 2 3 / 1 3 2 )
8) Give a multiplication table for .
9) Let G be a group containing an even number of elements. Show that there exists a ∈ G, a ≠ e, such that a * a = e.
10) Let G be a group and a, b ∈ G. If , prove that .
11) If G is a group and a * a = e for all a ∈ G, prove that G is an abelian group.
12) If G is a group such that (a * b)^2 = a^2 * b^2 for every pair a, b ∈ G, prove that G is abelian.
The addition table of Z4 is:
    +  0 1 2 3
    0  0 1 2 3
    1  1 2 3 0
    2  2 3 0 1
    3  3 0 1 2
Observe that the only nontrivial proper subgroup of Z4 is {0, 2}.
3) What are the subgroups of (Z, +)? We have the trivial subgroup {0}. For every positive integer m, (mZ, +) is a subgroup of (Z, +). In fact these are the only subgroups of Z, as shown in the following theorem. Here mZ = {mk : k ∈ Z}; for instance, 2Z = {…, -4, -2, 0, 2, 4, …}.
Theorem 6.3.6: The subgroups (mZ, +), where m runs through the set of positive integers, are the only non-zero proper subgroups of (Z, +).
Proof: Let H be a subgroup of (Z, +) with H ≠ {0}.
Then H contains a non-zero integer and, being a subgroup, must contain a positive integer (since if x ∈ H, then –x ∈ H also). Let m be the smallest positive integer in H. Then mZ ⊆ H, since a typical element of mZ is of the form mk where k ∈ Z. We need only show that H ⊆ mZ. Let x ∈ H. By the Division Algorithm there exist q, r ∈ Z such that x = qm + r, where 0 ≤ r < m. Since x ∈ H and qm ∈ H, we have r = x – qm ∈ H; by the minimality of m, r = 0. Hence x = qm ∈ mZ, and so H = mZ.
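The finite analogue of this subgroup structure can be inspected directly. A small sketch builds the addition table of Z4 shown earlier and checks that H = {0, 2} satisfies the subgroup conditions:

```python
# Sketch: the addition table of Z_4 and a direct check that H = {0, 2}
# is closed under +, contains the identity, and contains inverses.

n = 4
table = [[(a + b) % n for b in range(n)] for a in range(n)]
for row in table:
    print(row)

H = {0, 2}
assert all((a + b) % n in H for a in H for b in H)   # closure
assert 0 in H                                         # identity
assert all((n - a) % n in H for a in H)               # inverses
```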
Examples 6.4.1:
1) Since n = n·1 = (–n)·(–1) for any integer n, the group of integers (Z, +) is a cyclic group, as both 1 and -1 are generators for the group.
2) (Z4, +) is also a cyclic group and 1 is a generator, since 1 = 1, 2 = 1 + 1, 3 = 1 + 1 + 1 and 0 = 1 + 1 + 1 + 1 in Z4, so we can write <1> = Z4.
3 is also a generator for Z4 (show!).
3) In the group of integers Z, for any n ∈ Z, <n> = nZ.
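The generators of a finite cyclic group can be found by brute force. The sketch below lists, for each a in Z4, the subgroup <a> it generates, and keeps the elements that generate all of Z4:

```python
# Sketch: finding the generators of the cyclic group (Z_n, +) by
# computing the cyclic subgroup <a> = {a, 2a, 3a, ...} mod n.

def generated(a, n):
    """The cyclic subgroup of Z_n generated by a."""
    subgroup, x = set(), 0
    while True:
        x = (x + a) % n
        subgroup.add(x)
        if x == 0:
            return subgroup

n = 4
gens = [a for a in range(n) if generated(a, n) == set(range(n))]
print(gens)  # generators of Z_4 -> [1, 3]
```

This agrees with the example: both 1 and 3 generate Z4, while <2> = {0, 2} and <0> = {0}.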
Theorem 6.4.1: Let G be a finite cyclic group generated by a. If o(G) = n, then:
i) a^m = e if and only if n | m, for any positive integer m.
ii) {e, a, a^2, …, a^(n-1)} is precisely the set of elements belonging to G.
Proof: (i) Suppose n | m. Then m = nk for some integer k, so a^m = (a^n)^k = e^k = e. Conversely, suppose a^m = e. By the Division Algorithm there are unique integers q and r such that m = nq + r, where 0 ≤ r < n. Then e = a^m = (a^n)^q a^r = a^r, and since n is the least positive integer with a^n = e, we must have r = 0; hence n | m.
Exercise 6.4.1:
1) Show that { ( 1 2 3 / 1 2 3 ), ( 1 2 3 / 1 3 2 ) } is a subgroup of S3.
2) Prove that every cyclic group is abelian
3) Let G be a group, and let A and B be subgroups of G. Prove that A ∩ B is a subgroup of G. Show that A ∪ B need not be a subgroup of G.
4) We have seen that M = the set of all 2 × 2 matrices over Z is a group with matrix addition as a binary operation. Is the set of matrices of the form ( a b / 0 0 ), where a, b ∈ Z, a subgroup of M?
5) Find all subgroups of S3.
6) Let G be a group and let C(G) = {x ∈ G: xg = gx for all g ∈ G}. Prove that C(G) is
a subgroup of G.
7) Let G be an abelian group. Show that the set
{ for some } is a subgroup of G.
8) Show that a group G has no proper subgroups if and only if it is a cyclic group of
prime order.
Lemma 6.5.1: Let [a] be the equivalence class of a under "congruence modulo H". Then [a] = Ha. That is, [a] is precisely the right coset of H in G determined by a.
Proof: [a] = {x ∈ G : x ≡ a (mod H)}
          = {x ∈ G : xa^-1 ∈ H}
          = {ha : h ∈ H} = Ha.
Since the set of right cosets of H in G are simply the equivalence classes of an equivalence
relation on G, we immediately obtain the following theorem.
Theorem 6.5.2: Let G be a group and let H be a subgroup of G. Then the set of all right cosets
of H in G form a partition of G. Hence every element of G belongs to one and only one right
coset of H in G, i.e., any two right cosets of H in G either are identical or have no elements in
common.
Lemma 6.5.2: There is a one to one correspondence between any two right cosets of H in G.
Proof: Let Ha and Hb be any two arbitrary right cosets of H in G. Define a mapping φ: Ha → Hb by φ(ha) = hb for all h ∈ H. Then φ is onto, since for each hb ∈ Hb there is ha ∈ Ha such that φ(ha) = hb. Suppose φ(ha) = φ(h′a), i.e., hb = h′b. Then by the cancellation law, h = h′, and so ha = h′a. Therefore, φ is 1-1. Hence, φ is a 1-1 correspondence between Ha and Hb.
Remark: Any two right cosets of H in G have the same number of elements. Moreover, since H
is a right coset, each right coset of H in G contains the same number of elements as H, namely,
o(H).
Theorem 6.5.3 (Lagrange): If G is a finite group and H a subgroup of G, then o(H)|o(G).
Proof: Let n = o(G), m = o(H), and let k be the number of distinct right cosets of H in G. By Theorem 6.5.2 and Lemma 6.5.2, any two distinct right cosets of H in G have no elements in common and each has m elements. Hence, n = mk, and so o(H)|o(G).
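Lagrange's theorem can be illustrated concretely in S3. The sketch below takes the subgroup H = {e, (1 2)} (permutations written as one-line tuples), computes its right cosets, and checks that o(G) = o(H) · (number of cosets):

```python
# Sketch: illustrating Lagrange's theorem in S_3 with the subgroup
# H = {identity, (1 2)}, written in one-line tuple notation.

import itertools

def compose(p, q):
    """Return p o q, where (p o q)(i) = p(q(i))."""
    return tuple(p[q[i] - 1] for i in range(len(q)))

S3 = list(itertools.permutations((1, 2, 3)))
H = [(1, 2, 3), (2, 1, 3)]          # {e, (1 2)}

# right cosets Ha = {ha : h in H}
cosets = {frozenset(compose(h, a) for h in H) for a in S3}
print(len(cosets))                   # number of distinct right cosets -> 3
assert len(S3) == len(H) * len(cosets)   # o(G) = o(H) * i_G(H)
```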
Definition 6.5.3: Let G be a group and H a subgroup of G. The number of distinct right cosets
of H in G is called the index of H in G. The symbol iG(H) denotes the index of H in G.
Note: By Lagrange's theorem, in case G is a finite group,
    iG(H) = o(G)/o(H).
Definition 6.5.4: If G is a group and a ∈ G, the order (or period) of a is the least positive integer m such that a^m = e. If no such integer exists, we say that a is of infinite order. We use the notation o(a) for the order of a. Note that o(a) = o(<a>), where <a> is the cyclic subgroup of G generated by a.
Lagrange‘s theorem has several corollaries. We list a few.
Corollary 6.5.1: If G is a finite group of order n, then a^n = e for all a ∈ G.
Proof: Let a ∈ G and H = <a>. Let m = o(H). By Lagrange's theorem, n = mk for some positive integer k. We know a^m = e. Hence, a^n = a^mk = (a^m)^k = e^k = e.
Corollary 6.5.2: If G is a finite group and a G, o(a)|o(G).
Proof: We know o(a) = o(H) where H = <a>. We also know by Lagrange‘s Theorem o(H)|o(G).
Corollary 6.5.3: If G is a finite group of order p where p is a prime number, then G is cyclic and
every element of G except the identity is a generator of G.
Proof: Let a ∈ G, a ≠ e. By Corollary 6.5.2, o(a)|p. Since p is prime and o(a) ≠ 1, o(a) = p. Hence the cyclic subgroup generated by any element of G other than the identity has order p and therefore must be all of G.
Exercise 6.5
1) Let n be a positive integer. List all right cosets of nZ in Z.
2) Let Z[x] be the set of all polynomials with integer coefficients and H the set of all
polynomials with integer coefficients whose constant terms are zero.
a) Prove that H < Z[x]
b) Find all cosets of H in Z[x]
3) Let G be a group, o(G) = 12. What is the maximum number of proper subgroups G can
have? If G has subgroups of order 2 and 3, what is the minimum number of proper
subgroups G can have?
4) Let G be an abelian group, H < G, K < G, with o(H) = 5 and o(K) = 7. Prove that there
exists an element in G of order 35.
5) Let R[x] be the set of all polynomials over the reals and L the set of all polynomials in R[x] of degree 1 or less.
a) In R[x], is 3x^2 ≡ (1 + 2x^2) (mod L)?
b) In R[x], is 4x^3 + 2x^2 – 5x + 3 ≡ (4x^3 + 2x^2 + 8x – 19) (mod L)?
c) In R[x], is x^2 ≡ (1 + x + x^2) (mod L)?
6) Let H be the set of all polynomials having 3 as a root and R[x] as in problem 5.
a) Prove that H < R[x]
b) Is x^3 ≡ 3x^2 (mod H)?
c) Is 3x^3 – 7x^2 ≡ (2x^3 – 4x^2) (mod H)?
d) Is x^3 – 2x^2 – 3x ≡ 0 (mod H)?
6.6 Normal Subgroups and Quotient Groups
Definition 6.6.1: A subgroup H of a group G is said to be a normal subgroup of G if xhx^-1 ∈ H for all x ∈ G and h ∈ H.
Definition 6.6.2: xHx^-1 = {xhx^-1 : h ∈ H} for x ∈ G.
Corollary 6.6.1: A subgroup H of a group G is normal if and only if xHx^-1 ⊆ H for all x ∈ G.
Notation: To say H is a normal subgroup of G is expressed symbolically as H ⊴ G. If H ⊴ G and H ≠ G, we write H ◁ G.
Example 6.6.1: {e} and G are normal subgroups of every group G, where e is the identity element of G. These are called the trivial normal subgroups of G.
Example 6.6.2: Let M2 be the set of all non-singular 2 × 2 matrices with real entries. Then M2 forms a group under matrix multiplication (please check). Further, let N be the subset of M2 of all matrices of unit determinant. Then N is a subgroup of M2 (please verify). To prove that N ⊴ M2, let A ∈ N, let X ∈ M2, and let |A| stand for the determinant of A. Then:
|X^-1 A X| = |X^-1| |A| |X|
           = |X|^-1 |X| |A|
           = |A|
           = 1,
so X^-1 A X ∈ N.
Hence by definition, N ⊴ M2.
Properties of Normal Subgroups
Theorem 6.6.1: Let H be a subgroup of G. Then H ⊴ G if and only if xHx^-1 = H for all x ∈ G.
Proof: Suppose xHx^-1 = H for all x ∈ G. Then in particular xHx^-1 ⊆ H for all x ∈ G, so H ⊴ G (Corollary 6.6.1).
Conversely, let H ⊴ G. Then xHx^-1 ⊆ H for every x ∈ G. Applying this with x^-1 in place of x gives x^-1Hx ⊆ H, so that H ⊆ xHx^-1. Combining the two inclusions, xHx^-1 = H for all x ∈ G.
Theorem 6.6.4: Let G be a group. Any subgroup of index 2 is normal in G.
Proof: Let H < G of index 2, i.e., iG(H) = 2. In other words, there are 2 right cosets of H in G. Let x ∈ G but x ∉ H. Then H ∩ Hx = ∅ and G = H ∪ Hx (right coset decomposition). It follows that Hx = G\H.
Again, there are two left cosets of H in G and H ∩ xH = ∅ whenever x ∉ H, x ∈ G.
Therefore, G = H ∪ xH and xH = G\H.
Therefore, xH = Hx = G\H for every x ∉ H (and xH = Hx = H for x ∈ H).
Hence H ⊴ G.
Example 6.6.3: Consider the symmetric group S3 on the set {1, 2, 3}. The elements of S3 are:
P0 = ( 1 2 3 / 1 2 3 ) = (1)        u1 = ( 1 2 3 / 1 3 2 ) = (2 3)
P1 = ( 1 2 3 / 2 3 1 ) = (1 2 3)    u2 = ( 1 2 3 / 3 2 1 ) = (1 3)
P2 = ( 1 2 3 / 3 1 2 ) = (1 3 2)    u3 = ( 1 2 3 / 2 1 3 ) = (1 2)
Consider H = {P0, P1, P2}; please verify from the multiplication table that H ⊴ S3.
o(S3) = 6 and o(H) = 3, therefore iS3(H) = 2. Hence H ⊴ S3 (using Theorem 6.6.4).
We can also prove that H ⊴ S3 without using Theorem 6.6.4.
Right cosets: HP0 = HP1 = HP2 = H = {P0, P1, P2}
Hu1 = Hu2 = Hu3 = {u1, u2, u3}
Left cosets: P0H = P1H = P2H = H = {P0, P1, P2}
u1H = u2H = u3H = {u1, u2, u3}. Therefore, Hx = xH for all x ∈ S3. Hence H ⊴ S3.
On the other hand, for the subgroup K = {P0, u1} one finds Ku2 ≠ u2K and Ku3 ≠ u3K. Hence K is not a normal subgroup of S3.
Similarly, one can check that the subgroups {P0, u2} and {P0, u3} of S3 are also not normal subgroups of S3. In fact, it can be easily verified that S3 has no normal subgroup of index 3.
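These normality checks can be automated by comparing left and right cosets element by element (permutations as one-line tuples, with A3 standing for {P0, P1, P2} and K for the two-element subgroup {P0, u1} discussed above):

```python
# Sketch: testing normality in S_3 by comparing left and right cosets.
# compose(p, q) applies q first, then p.

import itertools

def compose(p, q):
    return tuple(p[q[i] - 1] for i in range(len(q)))

S3 = list(itertools.permutations((1, 2, 3)))

def is_normal(H):
    return all({compose(x, h) for h in H} == {compose(h, x) for h in H}
               for x in S3)

A3 = [(1, 2, 3), (2, 3, 1), (3, 1, 2)]   # {P0, P1, P2}
K  = [(1, 2, 3), (1, 3, 2)]              # {P0, u1} = {e, (2 3)}

print(is_normal(A3))   # index-2 subgroup: True
print(is_normal(K))    # order-2 subgroup: False
```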
Definition 6.6.3: Let G be a group and let N ⊴ G (N a normal subgroup of G). Then we define G/N = {Na : a ∈ G}. In other words, G/N is the collection of all right cosets of N in G.
Theorem 6.6.5: Let N be a normal subgroup of G, and let G/N denote the set of all right cosets of N in G. For Na, Nb ∈ G/N, let (Na)(Nb) = Nab. With this operation, G/N is a group, called the quotient group (or factor group) of G by N.
Proof: We must first prove that the operation on G/N is well-defined, that is, if Na = Na′ and Nb = Nb′, then Nab = Na′b′. From Na = Na′ we have a′ = n1a for some n1 ∈ N, and from Nb = Nb′ we have b′ = n2b for some n2 ∈ N. Therefore a′b′ = n1an2b. But an2 = n3a for some n3 ∈ N because N is normal in G. This gives a′b′ = n1n3ab, so that Na′b′ = Nab. This proves that the operation is well-defined.
Associativity: The operation on G/N is associative because if Na, Nb, Nc ∈ G/N, then ((Na)(Nb))(Nc) = N(ab)c = Na(bc) = (Na)((Nb)(Nc)).
Existence of Identity: Let e be the identity of G. Then Ne = N is the identity element of G/N because if Na ∈ G/N, then (Na)(Ne) = Nae = Na and (Ne)(Na) = Nea = Na.
Existence of Inverses: Let Na ∈ G/N. Then (Na)(Na^-1) = Naa^-1 = Ne = N and (Na^-1)(Na) = Na^-1a = Ne = N. Therefore, the inverse of Na is Na^-1. This proves that G/N is a group.
Corollary 6.6.2: If G is a finite group and N ⊴ G, then o(G/N) = o(G)/o(N).
Consider Z8 and its subgroup H = {0, 4}. The cosets of H in Z8 are H0 = H = {0, 4}, H1 = {1, 5}, H2 = {2, 6} and H3 = {3, 7}, with the operation table:
       H0 H1 H2 H3
    H0 H0 H1 H2 H3
    H1 H1 H2 H3 H0
    H2 H2 H3 H0 H1
    H3 H3 H0 H1 H2
For example, H1 + H2 = {1 + 2, 1 + 6, 5 + 2, 5 + 6} (mod 8) = {3, 7} = H3.
It follows from the operation table that G/H forms a group.
In other words, Z8/H, where H = {0, 4}, is the quotient group of Z8.
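The cosets of {0, 4} in Z8 and the well-definedness of coset addition can be checked by brute force:

```python
# Sketch: constructing the quotient group Z_8 / {0, 4} explicitly as a
# set of cosets, and checking that coset addition is well-defined.

n = 8
H = frozenset({0, 4})

def coset(a):
    return frozenset((a + h) % n for h in H)

cosets = sorted({coset(a) for a in range(n)}, key=min)
print([sorted(c) for c in cosets])    # -> [[0, 4], [1, 5], [2, 6], [3, 7]]

# element-wise sum of two cosets equals the coset of the sum
for a in range(n):
    for b in range(n):
        assert coset(a + b) == frozenset((x + y) % n
                                         for x in coset(a) for y in coset(b))
```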
Example 6.6.6: What is the group G/N when G = S3 and N = {P0, P1, P2}, in the usual notation?
G/N = {N0, N1}, where N0 = N and N1 = Nx for any x ∉ N. The operation table is:
       N0 N1
    N0 N0 N1
    N1 N1 N0
6.7 Homomorphism
One way to study a relatively large and complicated group is to study its smaller and less
complicated subgroups. Homomorphisms can help us do just that.
Definition 6.7.1: Let (G, *) and (G′, ∘) be groups with operations * and ∘, respectively. A mapping f: G → G′ is called a homomorphism if f(a * b) = f(a) ∘ f(b) for all a, b ∈ G.
Example 6.7.1:
1) Let G′ be any group with identity element e′. Then f: G → G′, defined by f(x) = e′ for all x ∈ G, is a homomorphism because f(ab) = e′ = e′ ∘ e′ = f(a) ∘ f(b) for all a, b ∈ G.
Definition 6.7.3:
Let f: G → G′ be a homomorphism.
a) If f is onto, f is called an epimorphism.
b) If f is one-to-one, f is called a monomorphism.
c) If f is both an epimorphism and a monomorphism, then f is called an isomorphism. If there is an isomorphism from G to G′, then G and G′ are said to be isomorphic, and we write G ≅ G′.
Theorem 6.7.1: Let f: G → G′ be a homomorphism. Then
1) f is an epimorphism if and only if Im f = G′.
2) f is a monomorphism if and only if Ker f = {e}, where e is the identity element of G.
Proof: 1) is obvious.
2) Suppose f is a monomorphism. Let x ∈ Ker f. Then f(x) = e′, where e′ is the identity element in G′. But f(e) = e′. Hence, f(x) = f(e), and since f is one-to-one, x = e. Therefore Ker f = {e}.
Conversely, suppose Ker f = {e}. Let a, b ∈ G such that f(a) = f(b). Then f(ab^-1) = f(a)f(b)^-1 = e′, so ab^-1 ∈ Ker f = {e}, i.e., ab^-1 = e and a = b. Hence, f is a monomorphism.
Theorem 6.7.6 (Fundamental theorem of homomorphism): Let f: G → G′ be an epimorphism. Then G/Ker f ≅ G′.
Proof: Let K = Ker f. Since K is a normal subgroup of G, we have that G/K is a group (quotient group).
Define a map ψ: G/K → G′ by ψ(Ka) = f(a), where a ∈ G.
Claim: ψ is well-defined, i.e., if Ka = Kb then ψ(Ka) = ψ(Kb). But Ka = Kb means a ≡ b (mod K), so a = kb for some k ∈ K. Therefore, f(a) = f(kb) = f(k)f(b) = e′f(b) = f(b), which means ψ(Ka) = ψ(Kb). Thus ψ is well-defined.
Now ψ((Ka)(Kb)) = ψ(Kab) = f(ab) = f(a)f(b) = ψ(Ka)ψ(Kb). Hence, ψ is a homomorphism.
Suppose ψ(Ka) = ψ(Kb). Then f(a) = f(b), so f(ab^-1) = e′ and ab^-1 ∈ K, i.e., Ka = Kb. Hence ψ is one-to-one. Clearly ψ is onto, since f is onto. Hence, ψ
is an isomorphism, and G/Ker f ≅ G′.
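The theorem can be seen numerically on a concrete pair of groups. As an illustrative assumption (not an example from the text), take f: Z12 → Z4 to be reduction mod 4; its kernel is {0, 4, 8}, and the quotient Z12/Ker f has the same order as the image:

```python
# Sketch of the fundamental theorem on Z_12 -> Z_4, f(x) = x mod 4
# (a hypothetical example chosen for illustration).

G = list(range(12))
f = lambda x: x % 4

kernel = [x for x in G if f(x) == 0]
image = sorted({f(x) for x in G})
print(kernel)                     # -> [0, 4, 8]
print(image)                      # -> [0, 1, 2, 3]: f is an epimorphism

# o(G / Ker f) = o(G) / o(Ker f) matches the order of the image
assert len(G) // len(kernel) == len(image)
```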
Exercise 6.7.1
1) Let f: G → G′ be a homomorphism. Prove that
a) Im f is a subgroup of G′.
b) If f is onto and H ⊴ G, then f(H) ⊴ G′.
c) If G is cyclic, then Im f is cyclic.
d) If G is abelian, then Im f is abelian.
2) Let P be the set of polynomials over R of degree less than or equal to 2. Let φ: P → R be defined by φ(f) = f(5). Prove that φ is a homomorphism. Describe the set φ^-1(0).
3) Show directly that as groups Z4/{0, 2} ≅ Z2 by defining a homomorphism from Z4 to Z2.
4) Let R[x] be the group of polynomials over R and A the set of polynomials with 3 as a root. Prove that R[x]/A ≅ R.
5) Let G be a finite group and o(G) = p, p prime. Prove that Zp ≅ G.
6) Define f: (R, +) → (R⁺, ·) by f(x) = e^x. Show that f is an isomorphism.
7) Prove that any infinite cyclic group is isomorphic to the group of integers under
addition.
6.8 Automorphisms
Definition 6.8.1: Let G be any group. An isomorphism of G onto itself is called an automorphism of G. In other words, a map f: G → G is said to be an automorphism iff:
a) f is a homomorphism;
b) f is one-to-one; and
c) f is onto.
Examples 6.8.1:
1) The identity mapping of G onto itself is clearly an automorphism of G.
2) Let G be any abelian group. Define a mapping f: G → G by f(x) = x^-1. Then f is an automorphism of G.
First we show that f is well defined; that is, for x, y in G, if x = y, then f(x) = f(y). Now x = y implies xy^-1 = e, where e is the identity of G. Then y^-1 = x^-1, hence f(x) = f(y). Obviously f is onto and one-to-one. Moreover, f(xy) = (xy)^-1 = y^-1x^-1 = x^-1y^-1 = f(x)f(y), since G is abelian.
Hence, it follows that f is an automorphism of G.
6.9 Representations
Theorem 6.9.1 (Cayley's Theorem): Every group is isomorphic to a group of permutations.
Proof: Let G be any given group. We proceed as follows:
Step I. We shall produce a suitable set of permutations. Let us think of G just as a set, and let A(G) be the group of all permutations of G. For g ∈ G, we define a map τg: G → G given by τg(x) = gx, x ∈ G. If τg(x1) = τg(x2), then gx1 = gx2, so x1 = x2 by cancellation. Hence, τg is 1 – 1. Also, for any y ∈ G we have τg(g^-1y) = y, so τg is onto.
Therefore, τg is a permutation of G. Hence τg ∈ A(G).
Let G′ = {τg : g ∈ G}.
Step II: We next show that G′ is a subgroup of A(G):
i) Let g, h ∈ G. Then, for every x ∈ G, we have (τg ∘ τh)(x) = τg(hx) = g(hx) = (gh)x = τgh(x).
Therefore, τg ∘ τh = τgh ∈ G′.
ii) τe(x) = ex = x for all x ∈ G, where e is the identity of G. Therefore, τe is the identity of G′.
iii) Let τg ∈ G′, where g ∈ G. Then τg ∘ τg⁻¹ = τe and τg⁻¹ ∘ τg = τe.
Therefore, (τg)^-1 = τg⁻¹ ∈ G′. Hence G′ is a subgroup of A(G).
Step III: To prove that G ≅ G′, define a map ψ: G → G′ by ψ(g) = τg.
i) Every element of G′ is of the form τg = ψ(g); it follows that ψ is onto.
ii) Let ψ(g) = ψ(h). Then τg = τh, so τg(e) = τh(e), i.e., ge = he and g = h.
Therefore, ψ is 1 – 1. Moreover, ψ(gh) = τgh = τg ∘ τh = ψ(g)ψ(h), so ψ is a homomorphism. Hence ψ is an isomorphism and G ≅ G′.
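The construction in the proof can be made concrete for a small group. The sketch below takes G = Z4 and realizes each element g as the translation permutation τg(x) = g + x, then checks that g ↦ τg respects the group operation:

```python
# Sketch of Cayley's theorem for G = Z_4: each g becomes the permutation
# tau_g(x) = g + x (mod 4) of the underlying set {0, 1, 2, 3}.

n = 4

def tau(g):
    """One-line notation (0-indexed) of the permutation x -> g + x of Z_n."""
    return tuple((g + x) % n for x in range(n))

perms = {g: tau(g) for g in range(n)}
print(perms)

def compose(p, q):
    """(p o q)(i) = p(q(i)) for 0-indexed one-line permutations."""
    return tuple(p[q[i]] for i in range(len(q)))

# the map g -> tau_g is a homomorphism: tau_{g+h} = tau_g o tau_h
for g in range(n):
    for h in range(n):
        assert tau((g + h) % n) == compose(tau(g), tau(h))
```

Since distinct g give distinct permutations (they send 0 to different elements), the map g ↦ τg is an isomorphism onto its image, exactly as in Step III.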
References
H. Anton, Elementary Linear Algebra, 5th ed., John Wiley & Sons, Inc., 2005.
H. Anton and C. Rorres, Elementary Linear Algebra, John Wiley & Sons, Inc., 1994.
Demissu Gemeda, An Introduction to Linear Algebra, Department of Mathematics, AAU, 2000.
K. Hoffman and R. Kunze, Linear Algebra, 2nd ed., Prentice Hall, Inc., 1971.
B. Kolman and D. R. Hill, Elementary Linear Algebra, 8th ed., Prentice Hall, 2004.
S. Lang, Linear Algebra.
D. C. Lay, Linear Algebra and Its Applications, Pearson Addison Wesley, 2006.
S. Lipschutz, Theory and Problems of Linear Algebra, 2nd ed., McGraw-Hill, 1991.