Hanoi University of Technology

Faculty of Applied Mathematics and Informatics
Advanced Training Program

Lecture on

Algebra

Dr. Nguyen Thieu Huy

Hanoi 2008


Preface

This Lecture on Algebra is written for students of the Advanced Training Programs of Mechatronics (from California State University, Chico - CSU Chico) and Material Science (from the University of Illinois - UIUC). When preparing the manuscript of this lecture, we had to combine the two syllabuses of the two courses on Algebra of the two programs (Math 031 of CSU Chico and Math 225 of UIUC). There are some differences between the two syllabuses; e.g., there is no module on algebraic structures and complex numbers in Math 225, and no module on orthogonal projections and least square approximations in Math 031. Therefore, for the sake of completeness, this lecture provides all the modules of knowledge which are given in both syllabuses.

Students will be introduced to the theory and applications of matrices and systems of linear equations, vector spaces, linear transformations, eigenvalue problems, Euclidean spaces, and orthogonal projections and least square approximations, as they arise, for instance, from electrical networks, frameworks in mechanics, processes in statistics and linear models, systems of linear differential equations, and so on. The lecture is organized in such a way that the students can comprehend the most useful knowledge of linear algebra and its applications to engineering problems.

We would like to thank Prof. Tran Viet Dung for his careful reading of the manuscript. His comments and remarks led to a better presentation of this lecture. We also thank Dr. Nguyen Huu Tien, Dr. Tran Xuan Tiep and all the lecturers of the Faculty of Applied Mathematics and Informatics for their inspiration and support during the preparation of the lecture.

Hanoi, October 20, 2008

Dr. Nguyen Thieu Huy



Contents
Chapter 1: Sets
I. Concepts and basic operations
II. Set equalities
III. Cartesian products

Chapter 2: Mappings
I. Definition and examples
II. Compositions
III. Images and inverse images
IV. Injective, surjective, bijective, and inverse mappings

Chapter 3: Algebraic Structures and Complex Numbers
I. Groups
II. Rings
III. Fields
IV. The field of complex numbers

Chapter 4: Matrices
I. Basic concepts
II. Matrix addition, scalar multiplication
III. Matrix multiplications
IV. Special matrices
V. Systems of linear equations
VI. Gauss elimination method

Chapter 5: Vector spaces
I. Basic concepts
II. Subspaces
III. Linear combinations, linear spans
IV. Linear dependence and independence
V. Bases and dimension
VI. Rank of matrices
VII. Fundamental theorem of systems of linear equations
VIII. Inverse of a matrix
X. Determinant and inverse of a matrix, Cramer's rule
XI. Coordinates in vector spaces

Chapter 6: Linear Mappings and Transformations
I. Basic definitions
II. Kernels and images
III. Matrices and linear mappings
IV. Eigenvalues and eigenvectors
V. Diagonalizations
VI. Linear operators (transformations)

Chapter 7: Euclidean Spaces
I. Inner product spaces
II. Length (or norm) of vectors
III. Orthogonality
IV. Projection and least square approximations
V. Orthogonal matrices and orthogonal transformation


VI. Quadratic forms
VII. Quadric lines and surfaces

Chapter 1: Sets

I. Concepts and Basic Operations

1.1. Concepts of sets: A set is a collection of objects or things. The objects or things in the set are called elements (or members) of the set.

Examples:
- A set of students in a class.
- A set of countries in the ASEAN group; then Vietnam is in this set, but China is not.
- The set of real numbers, denoted by R.

1.2. Basic notations: Let E be a set. If x is an element of E, then we denote this by x ∈ E (pronounce: x belongs to E). If x is not an element of E, then we write x ∉ E.

We use the following notations:
∃: "there exists"
∃!: "there exists a unique"
∀: "for each" or "for all"
⇒: "implies"
⇔: "is equivalent to" or "if and only if"

1.3. Description of a set: Traditionally, we use upper case letters A, B, C and set braces to denote a set. There are several ways to describe a set.

a) Roster notation (or listing notation): We list all the elements of a set in a couple of braces, e.g., A = {1, 2, 3, 7} or B = {Vietnam, Thailand, Laos, Cambodia, Myanmar, Malaysia, Indonesia, Brunei, Philippines, Singapore}.

b) Set-builder notation: This is a notation which lists the rules that determine whether an object is an element of the set.

Example: The set of real solutions of the inequality x² ≤ 2 is
G = {x | x ∈ R and −√2 ≤ x ≤ √2} = [−√2, √2].
The notation "|" means "such that".
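Both notations have direct computational analogues. The following is a minimal sketch in Python (not part of the original syllabus; Python's built-in set type is assumed), mirroring the roster and set-builder notations above:

    # Roster notation: list the elements explicitly.
    A = {1, 2, 3, 7}

    # Set-builder notation: a comprehension states the membership rule;
    # here we restrict to a finite sample of integers, since a computer
    # set must be finite.
    G = {x for x in range(-3, 4) if x * x <= 2}    # gives {-1, 0, 1}

    # Membership tests correspond to the symbols "∈" and "∉".
    print(1 in A)        # True,  i.e. 1 ∈ A
    print(5 not in A)    # True,  i.e. 5 ∉ A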

1.4. Subsets, empty set and two equal sets:

a) Subsets: The set A is called a subset of a set B if from x ∈ A it follows that x ∈ B. We then write A ⊂ B to indicate that A is a subset of B.
By logical expression: A ⊂ B ⇔ (x ∈ A ⇒ x ∈ B)
By Venn diagram: (figure)

b) Empty set: We accept that there is a set that has no element; such a set is called an empty set (or void set), denoted by ∅.
Note: For every set A, we have ∅ ⊂ A.

c) Two equal sets: Let A, B be two sets. We say that A equals B, denoted by A = B, if A ⊂ B and B ⊂ A. This can be written in logical expression by A = B ⇔ (x ∈ A ⇔ x ∈ B).

1.5. Union: Let A, B be two sets. Then the union of A and B, denoted by A∪B, is given by A∪B = {x | x ∈ A or x ∈ B}. This means that x ∈ A∪B ⇔ (x ∈ A or x ∈ B).
By Venn diagram: (figure)

1.6. Intersection: Let A, B be two sets. Then the intersection of A and B, denoted by A∩B, is given by A∩B = {x | x ∈ A and x ∈ B}. This means that x ∈ A∩B ⇔ (x ∈ A and x ∈ B).

By Venn diagram: (figure)

1.7. Subtraction: Let A, B be two sets. The subtraction of A and B, denoted by A\B (or A−B), is given by A\B = {x | x ∈ A and x ∉ B}. This means that x ∈ A\B ⇔ (x ∈ A and x ∉ B).
By Venn diagram: (figure)

1.8. Complement of a set: Let A and X be two sets such that A ⊂ X. The complement of A in X, denoted by CXA (or A' when X is clearly understood), is given by
CXA = X \ A = {x | x ∈ X and x ∉ A} = {x | x ∉ A} (when X is clearly understood).

Examples: Consider X = R, A = [0, 3] = {x | x ∈ R and 0 ≤ x ≤ 3}, B = [−1, 2] = {x | x ∈ R and −1 ≤ x ≤ 2}. Then:
1. A∩B = {x ∈ R | 0 ≤ x ≤ 3 and −1 ≤ x ≤ 2} = {x ∈ R | 0 ≤ x ≤ 2} = [0, 2]

2. A∪B = {x ∈ R | 0 ≤ x ≤ 3 or −1 ≤ x ≤ 2} = {x ∈ R | −1 ≤ x ≤ 3} = [−1, 3]
3. A\B = {x ∈ R | 0 ≤ x ≤ 3 and x ∉ [−1, 2]} = {x ∈ R | 2 < x ≤ 3} = (2, 3]
4. A' = R\A = {x ∈ R | x < 0 or x > 3}

II. Set equalities

Let A, B, C be sets. The following set equalities are often used in many problems related to set theory.
1. A∪B = B∪A; A∩B = B∩A (commutative law)
2. (A∪B)∪C = A∪(B∪C); (A∩B)∩C = A∩(B∩C) (associative law)
3. A∪(B∩C) = (A∪B)∩(A∪C); A∩(B∪C) = (A∩B)∪(A∩C) (distributive law)
4. A\B = A∩B', where B' = CXB with a set X containing both A and B.

Proof: Since the proofs of these equalities are relatively simple, we prove only one equality (3); the other ones are left as exercises for the readers. To prove (3), we use the logical expression of equal sets:
x ∈ A∪(B∩C) ⇔ (x ∈ A or x ∈ B∩C)
⇔ (x ∈ A or (x ∈ B and x ∈ C))
⇔ ((x ∈ A or x ∈ B) and (x ∈ A or x ∈ C))
⇔ (x ∈ A∪B and x ∈ A∪C)
⇔ x ∈ (A∪B)∩(A∪C).
This equivalence yields that A∪(B∩C) = (A∪B)∩(A∪C).
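These equalities can also be checked experimentally on finite sets. A small sketch in Python (the three sets are chosen arbitrarily for illustration):

    A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

    # Distributive law (3): A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
    assert A | (B & C) == (A | B) & (A | C)

    # Equality (4): A \ B = A ∩ B', with B' taken in a universe X ⊇ A ∪ B
    X = A | B | C
    assert A - B == A & (X - B)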

III. Cartesian products

3.1. Definition:
1. Let A, B be two sets. The Cartesian product of A and B, denoted by A x B, is given by
A x B = {(x, y) | (x ∈ A) and (y ∈ B)}.
2. Let A1, A2, …, An be given sets. The Cartesian product of A1, A2, …, An, denoted by A1 x A2 x … x An, is given by
A1 x A2 x … x An = {(x1, x2, …, xn) | xi ∈ Ai ∀i = 1, 2, …, n}.
In case A1 = A2 = … = An = A, we denote A1 x A2 x … x An = A x A x … x A = Aⁿ.

3.2. Equality of elements in a Cartesian product:
1. Let A x B be the Cartesian product of the given sets A and B. Then, two elements (a, b) and (c, d) of A x B are equal if and only if a = c and b = d. In other words,
(a, b) = (c, d) ⇔ (a = c and b = d).
2. Let A1 x A2 x … x An be the Cartesian product of the given sets A1, A2, …, An. Then, for (x1, x2, …, xn) and (y1, y2, …, yn) in A1 x A2 x … x An, we have that
(x1, x2, …, xn) = (y1, y2, …, yn) ⇔ xi = yi ∀i = 1, 2, …, n.
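For finite sets, the Cartesian product can be enumerated directly. A sketch in Python, using the standard itertools module (lists are used so that the printed order is deterministic):

    from itertools import product

    A = [1, 2]
    B = ['u', 'v']
    print(list(product(A, B)))    # A x B: [(1,'u'), (1,'v'), (2,'u'), (2,'v')]

    # A^3 = A x A x A as a repeated product: 2^3 = 8 triples.
    print(len(list(product(A, repeat=3))))

    # Tuples are equal iff all components agree, as in 3.2:
    print((1, 'u') == (1, 'u'), (1, 'u') == (1, 'v'))    # True False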

Chapter 2: Mappings

I. Definition and examples

1.1. Definition: Let X, Y be nonempty sets. A mapping with domain X and range Y is an ordered triple (X, Y, f), where f assigns to each x ∈ X a well-defined f(x) ∈ Y. The statement that (X, Y, f) is a mapping is written by f: X → Y. Here, "well-defined" means that for each x ∈ X there corresponds one and only one f(x) ∈ Y. A mapping is sometimes called a map or a function.

1.2. Remark: We use the notation
f: X → Y, x ↦ f(x)
to indicate that f(x) is assigned to x.

1.3. Examples:
1. f: R → R, f(x) = sin x ∀x ∈ R, where R is the set of real numbers.
2. f: X → X, f(x) = x ∀x ∈ X. This is called the identity mapping on the set X, denoted by IX.
3. Let X, Y be nonvoid sets, and y0 ∈ Y. Then the assignment f: X → Y, f(x) = y0 ∀x ∈ X, is a mapping. This is called a constant mapping.

1.4. Remark: Two mappings f: X → Y and g: X → Y are equal if f(x) = g(x) ∀x ∈ X. Then we write f = g.

II. Compositions

2.1. Definition: Given two mappings f: X → Y and g: Y → W, we define the mapping h: X → W by h(x) = g(f(x)) ∀x ∈ X. The mapping h is called the composition of g and f, denoted by h = gof; that is, (gof)(x) = g(f(x)) ∀x ∈ X.

2.2. Example: Let f: R → R+, f(x) = x² ∀x ∈ R, and g: R+ → R−, g(x) = −x ∀x ∈ R+; here R+ = [0, ∞) and R− = (−∞, 0]. Then, (gof)(x) = g(f(x)) = −x² ∀x ∈ R.

2.3. Remark: In general, fog ≠ gof.
Example: Let g: R → R, g(x) = 2x + 1 ∀x ∈ R, and f: R → R, f(x) = x² ∀x ∈ R.

Then (fog)(x) = f(g(x)) = (2x + 1)² ∀x ∈ R, while (gof)(x) = g(f(x)) = 2x² + 1 ∀x ∈ R. Clearly, fog ≠ gof.

III. Images and Inverse Images

Suppose that f: X → Y is a mapping.

3.1. Definition: For S ⊂ X, the image of S is a subset of Y, which is defined by
f(S) = {f(s) | s ∈ S} = {y ∈ Y | ∃s ∈ S with f(s) = y}.
Example: f: R → R, f(x) = x² + 2x ∀x ∈ R; S = [0, 2] ⊂ R. Then
f(S) = {f(s) | s ∈ [0, 2]} = {s² + 2s | s ∈ [0, 2]} = [0, 8].

3.2. Definition: Let T ⊂ Y. Then, the inverse image of T is a subset of X, which is defined by f⁻¹(T) = {x ∈ X | f(x) ∈ T}.
Example: f: R\{2} → R, f(x) = (x + 1)/(x − 2) ∀x ∈ R\{2}; S = (−∞, −1] ⊂ R. Then
f⁻¹(S) = {x ∈ R\{2} | f(x) ≤ −1} = {x ∈ R\{2} | (x + 1)/(x − 2) ≤ −1} = [1/2, 2).

3.3. Definition: The image of the domain X, f(X), is called the image of f, denoted by Imf. That is,
Imf = f(X) = {f(x) | x ∈ X} = {y ∈ Y | ∃x ∈ X with f(x) = y}.

3.4. Properties of images and inverse images: Let f: X → Y be a mapping, let A, B be subsets of X and C, D be subsets of Y. Then, the following properties hold:
1) f(A∪B) = f(A)∪f(B)
2) f(A∩B) ⊂ f(A)∩f(B)
3) f⁻¹(C∪D) = f⁻¹(C)∪f⁻¹(D)
4) f⁻¹(C∩D) = f⁻¹(C)∩f⁻¹(D)

Proof: We shall prove (1) and (3); the other properties are left as exercises.
(1): Since A ⊂ A∪B and B ⊂ A∪B, it follows that f(A) ⊂ f(A∪B) and f(B) ⊂ f(A∪B).

These inclusions yield f(A)∪f(B) ⊂ f(A∪B). Conversely, take any y ∈ f(A∪B). Then, by definition of an image, there exists an x ∈ A∪B such that y = f(x). Since x ∈ A or x ∈ B, this implies that y = f(x) ∈ f(A) (if x ∈ A) or y = f(x) ∈ f(B) (if x ∈ B). Therefore, y ∈ f(A)∪f(B). This yields that f(A∪B) ⊂ f(A)∪f(B). Hence, f(A∪B) = f(A)∪f(B).
(3): x ∈ f⁻¹(C∪D) ⇔ f(x) ∈ C∪D ⇔ (f(x) ∈ C or f(x) ∈ D) ⇔ (x ∈ f⁻¹(C) or x ∈ f⁻¹(D)) ⇔ x ∈ f⁻¹(C)∪f⁻¹(D).
Hence, f⁻¹(C∪D) = f⁻¹(C)∪f⁻¹(D).

IV. Injective, Surjective, Bijective, and Inverse Mappings

4.1. Definition: Let f: X → Y be a mapping.
a. The mapping f is called surjective (or onto) if Imf = Y; equivalently, ∀y ∈ Y, ∃x ∈ X such that f(x) = y.
b. The mapping f is called injective (or one-to-one) if the following condition holds: for x1, x2 ∈ X, if f(x1) = f(x2), then x1 = x2. This condition is equivalent to: for x1, x2 ∈ X, if x1 ≠ x2, then f(x1) ≠ f(x2).
c. The mapping f is called bijective if it is surjective and injective.

Examples:
1. f: R → R, f(x) = sin x ∀x ∈ R. This mapping is not injective, since f(0) = f(2π) = 0. It is also not surjective, because f(R) = Imf = [−1, 1] ≠ R.
2. f: R → [−1, 1], f(x) = sin x ∀x ∈ R. This mapping is surjective but not injective.
3. f: [−π/2, π/2] → R, f(x) = sin x ∀x ∈ [−π/2, π/2]. This mapping is injective but not surjective.
4. f: [−π/2, π/2] → [−1, 1], f(x) = sin x ∀x ∈ [−π/2, π/2]. This mapping is bijective.

4.2. Definition: Let f: X → Y be a bijective mapping. Then, the mapping g: Y → X satisfying gof = IX and fog = IY is called the inverse mapping of f, denoted by g = f⁻¹.

For a bijective mapping f: X → Y we now show that there is a unique mapping g: Y → X satisfying gof = IX and fog = IY.
In fact, since f is bijective, we can define an assignment g: Y → X by g(y) = x if f(x) = y. This gives us a mapping. Clearly, g(f(x)) = x ∀x ∈ X and f(g(y)) = y ∀y ∈ Y; that is to say, gof = IX and fog = IY.
The above g is unique in the sense that, if h: Y → X is another mapping satisfying hof = IX and foh = IY, then h(f(x)) = x = g(f(x)) ∀x ∈ X. Then, ∀y ∈ Y, by the bijectiveness of f, ∃! x ∈ X such that f(x) = y ⇒ h(y) = h(f(x)) = g(f(x)) = g(y). This means that h = g.

Examples:
1. f: [−π/2, π/2] → [−1, 1], f(x) = sin x ∀x ∈ [−π/2, π/2]. This mapping is bijective. The inverse mapping f⁻¹: [−1, 1] → [−π/2, π/2] is denoted by f⁻¹ = arcsin; f⁻¹(x) = arcsin x ∀x ∈ [−1, 1]. We also can write arcsin(sin x) = x ∀x ∈ [−π/2, π/2], and sin(arcsin x) = x ∀x ∈ [−1, 1].
2. f: R → (0, ∞), f(x) = eˣ ∀x ∈ R. The inverse mapping is f⁻¹: (0, ∞) → R, f⁻¹(x) = ln x ∀x ∈ (0, ∞). To see this, take (fof⁻¹)(x) = e^(ln x) = x ∀x ∈ (0, ∞), and (f⁻¹of)(x) = ln eˣ = x ∀x ∈ R.
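For mappings between finite sets, the notions of this chapter (image, inverse image, bijectivity, inverse mapping) can be made concrete. A sketch in Python, with a mapping stored as a dictionary:

    # f: {1,2,3,4} -> {'a','b','c'}, represented as a dict.
    f = {1: 'a', 2: 'b', 3: 'a', 4: 'c'}

    def image(f, S):
        # f(S) = {f(s) | s ∈ S}
        return {f[s] for s in S}

    def inverse_image(f, T):
        # f^{-1}(T) = {x | f(x) ∈ T}
        return {x for x in f if f[x] in T}

    print(image(f, {1, 2}))           # {'a', 'b'}
    print(inverse_image(f, {'a'}))    # {1, 3}: f is not injective

    # A bijective g is inverted simply by swapping its pairs:
    g = {1: 'a', 2: 'b', 3: 'c'}
    g_inv = {y: x for x, y in g.items()}
    assert all(g_inv[g[x]] == x for x in g)    # g^{-1} o g = identity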

Chapter 3: Algebraic Structures and Complex Numbers

I. Groups

1.1. Definition:
a. Suppose G is a nonempty set and ϕ: G x G → G is a mapping. Then ϕ is called a binary operation; we will write ϕ(a, b) = a∗b for each (a, b) ∈ G x G.
b. A couple (G, ∗), where G is a nonempty set and ∗ is a binary operation, is called an algebraic structure.

Examples:
1) Consider G = R and "∗" = "+" (the usual addition in R); it is a binary operation defined by
+: R x R → R, (a, b) ↦ a + b.
2) Take G = R and "∗" = "•" (the usual multiplication in R); it is a binary operation defined by
•: R x R → R, (a, b) ↦ a • b.
3) Take G = {f: X → X | f is a mapping} := Hom(X) for X ≠ ∅. The composition operation "o" is a binary operation defined by
o: Hom(X) x Hom(X) → Hom(X), (f, g) ↦ fog.

1.2. Definition: Consider the algebraic structure (G, ∗). We will say that:
(b1) ∗ is associative if (a∗b)∗c = a∗(b∗c) ∀a, b, c in G;
(b2) ∗ is commutative if a∗b = b∗a ∀a, b ∈ G;
(b3) an element e ∈ G is a neutral element of G if e∗a = a∗e = a ∀a ∈ G.

the element a’∈G such that a∗a’ = a’∗a=e. and the identity mapping IX is an neutral element. Consider (R. d. o). then “+” is associative and commutative. 1. 1. ∗). if ∗ is written as + (additive group) 14 .5. •).4. if ∗ is written as • (multiplicative group) a’ = . ∗ is associative 2. Lecture on Algebra Examples: 1. will be called the opposition of a. called inverse of a. ∃a’∈ G such that a∗a’ = a’∗a = e Remark: Consider a group (G. For a ∈ G.+) is called an additive group. There is a neutral element e∈G 3. Remark: If a binary operation is written as +. If a binary operation is written as ⋅.+). ∗). then (G. If ∗ is commutative. Definition: The algebraic structure (G. if there is a neutral element e ∈G. ∗) is called abelian group (or commutative group). If ∗ is written as •. this neutral element is unique. Consider (R. then “o” is associative but not commutative. If ∗ is written as +.Nguyen Thieu Huy.3. 2. b. and 0 is a neutral element. called negative of a. 1. then (G. Proof: Let e’ be another neutral element. Then. 3.a. ∗) is called a group if the following conditions are satisfied: 1. c. then (G. then “•” is associative and commutative and 1 is an neutral element. Then. denoted by a’ = a-1. a. ∀a∈ G. then the neutral element will be denoted by 0G (or 0 if G is clearly understood) and called the null element. Consider (Hom(X). Therefore e = e’. •) is called a multiplicative group. Lemma: Consider an algebraic structure (G. then the neutral element will be denoted by 1G (or 1) and called the identity element. e = e∗e’ because e’ is a neutral element and e’ = e∗e’ because e is a neutral element of G.

Examples:
1. (R, +) is an abelian additive group; (R\{0}, •) is an abelian multiplicative group.
2. Let X be a nonempty set, and End(X) = {f: X → X | f is bijective}. Then (End(X), o), where o is the composition operation, is a noncommutative group whose neutral element is IX.

1.6. Proposition: Let (G, ∗) be a group. Then, the following assertions hold:
1. For a ∈ G, the inverse a⁻¹ of a is unique.
2. For a, b, c ∈ G we have that a∗c = b∗c ⇒ a = b, and c∗a = c∗b ⇒ a = b (cancellation laws in a group).
3. For a, b ∈ G, the equation a∗x = b has a unique solution x = a⁻¹∗b; also, the equation x∗a = b has a unique solution x = b∗a⁻¹.

Proof: 1. Let a' be another inverse of a, that is, a'∗a = e. It follows that (a'∗a)∗a⁻¹ = e∗a⁻¹ = a⁻¹; on the other hand, (a'∗a)∗a⁻¹ = a'∗(a∗a⁻¹) = a'∗e = a'. Therefore a' = a⁻¹.
2. c∗a = c∗b ⇒ c⁻¹∗(c∗a) = c⁻¹∗(c∗b) ⇒ (c⁻¹∗c)∗a = (c⁻¹∗c)∗b ⇒ e∗a = e∗b ⇒ a = b. Similarly, a∗c = b∗c ⇒ a = b. The proof of (3) is left as an exercise.

II. Rings

2.1. Definition: Consider a triple (V, +, •), where V is a nonempty set and + and • are binary operations on V. The triple (V, +, •) is called a ring if the following properties are satisfied:

1. (V, +) is a commutative group;
2. the operation • is associative;
3. ∀a, b, c ∈ V we have that (a + b)•c = a•c + b•c and c•(a + b) = c•a + c•b;
4. V has an identity element 1V corresponding to the operation •, and we call 1V the multiplication identity.
If, in addition, the multiplicative operation is commutative, then the ring (V, +, •) is called a commutative ring.

2.2. Example: (R, +, •), with the usual additive and multiplicative operations, is a commutative ring.

2.3. Definition: We say that a ring is trivial if it contains only one element, V = {OV}.
Remark: If V is a nontrivial ring, then 1V ≠ OV.

2.4. Proposition: Let (V, +, •) be a ring. Then, the following equalities hold:
1. a•OV = OV•a = OV;
2. a•(b − c) = a•b − a•c, where b − c is denoted for b + (−c);
3. (b − c)•a = b•a − c•a.

III. Fields

3.1. Definition: A triple (V, +, •) is called a field if (V, +, •) is a commutative, nontrivial ring such that, if a ∈ V and a ≠ OV, then a has a multiplicative inverse a⁻¹ ∈ V (that is, a⁻¹•a = 1V).
Detailedly, (V, +, •) is a field if and only if the following conditions hold:
1. (V, +) is a commutative group;
2. the multiplicative operation is associative and commutative;
3. there is a multiplicative identity 1V ≠ OV, and if a ∈ V, a ≠ OV, then ∃a⁻¹ ∈ V with a⁻¹•a = 1V;
4. ∀a, b, c ∈ V: (a + b)•c = a•c + b•c.

3.2. Examples: (R, +, •) and (Q, +, •) are fields.

IV. The field of complex numbers

Equations without real solutions, such as x² + 1 = 0 or x² − 10x + 40 = 0, were observed early in history and led to the introduction of complex numbers.

4.1. Construction of the field of complex numbers: On the set R², we consider the additive and multiplicative operations defined by
(a, b) + (c, d) = (a + c, b + d)
(a, b) • (c, d) = (ac − bd, ad + bc)
Then, (R², +, •) is obviously a commutative, nontrivial ring with null element (0, 0) and identity element (1, 0). Moreover, (R², +, •) is a field. Indeed, for (a, b) ≠ (0, 0), we see that the inverse of (a, b) is
(a, b)⁻¹ = (a/(a² + b²), −b/(a² + b²)),
since (a, b) • (a/(a² + b²), −b/(a² + b²)) = (1, 0).

We can present R² in the plane (figure).

We remark that if two elements (a, 0), (c, 0) belong to the horizontal axis, then their sum (a, 0) + (c, 0) = (a + c, 0) and their product (a, 0)•(c, 0) = (ac, 0) still belong to the horizontal axis, and the addition and multiplication are operated as the addition and multiplication in the set of real numbers. This allows us to identify each element on the horizontal axis with a real number, that is, (a, 0) = a ∈ R.

Now, consider i = (0, 1). Then, i² = i•i = (0, 1)•(0, 1) = (−1, 0) = −1. With this notation, we can write, for (a, b) ∈ R²:
(a, b) = (a, 0) + (b, 0)•(0, 1) = a + bi.

We set C = {a + bi | a, b ∈ R and i² = −1} and call C the set of complex numbers. It follows from the above construction that (C, +, •) is a field, which is called the field of complex numbers. The additive and multiplicative operations on C can be reformulated as
(a + bi) + (c + di) = (a + c) + (b + d)i
(a + bi)•(c + di) = ac + bdi² + (ad + bc)i = (ac − bd) + (ad + bc)i (because i² = −1).
The representation of a complex number z ∈ C as z = a + bi, where a, b ∈ R and i² = −1, is called the canonical form (algebraic form) of the complex number z. In this form, the calculations are done in the usual ways as in R, with the attention that i² = −1.

4.2. Imaginary and real parts: Consider the field of complex numbers C. For z ∈ C, in canonical form, z can be written as z = a + bi, where a, b ∈ R and i² = −1. In this form, the real number a is called the real part of z, and the real number b is called the imaginary part of z. We denote a = Rez and b = Imz. Two complex numbers z1 = a1 + b1i and z2 = a2 + b2i are equal if and only if a1 = a2 and b1 = b2, that is, Rez1 = Rez2 and Imz1 = Imz2.

4.3. Subtraction and division in canonical forms:
1) Subtraction: For z1 = a1 + b1i and z2 = a2 + b2i, we have z1 − z2 = a1 − a2 + (b1 − b2)i.
Example: 2 + 4i − (3 + 2i) = −1 + 2i.
2) Division: By definition, z1/z2 = z1(z2)⁻¹ (z2 ≠ 0). For z2 = a2 + b2i ≠ 0, we have
(z2)⁻¹ = a2/(a2² + b2²) − (b2/(a2² + b2²))i.
Therefore,
z1/z2 = (a1 + b1i)·(a2/(a2² + b2²) − (b2/(a2² + b2²))i).
We also have the following practical rule: to compute z1/z2 = (a1 + b1i)/(a2 + b2i), we multiply both denominator and numerator by a2 − b2i, then simplify the result.

Therefore,
(a1 + b1i)/(a2 + b2i) = ((a1 + b1i)(a2 − b2i))/((a2 + b2i)(a2 − b2i)) = (a1a2 + b1b2 + (a2b1 − a1b2)i)/(a2² + b2²).
Example: (2 − 7i)/(8 + 3i) = ((2 − 7i)(8 − 3i))/((8 + 3i)(8 − 3i)) = (−5 − 62i)/73 = −5/73 − (62/73)i.

4.4. Complex plane: Complex numbers admit two natural geometric interpretations.
First, we may identify the complex number z = x + yi with the point (x, y) in the plane (see Fig. 4.2). In this interpretation, each real number, x + 0.i, is identified with the point (x, 0) on the horizontal axis, which is therefore called the real axis. A number 0 + yi, or just yi, is called a pure imaginary number and is associated with the point (0, y) on the vertical axis. This axis is called the imaginary axis. Because of this correspondence between complex numbers and points in the plane, we often refer to the xy-plane as the complex plane.

When complex numbers were first noticed (in solving polynomial equations), mathematicians were suspicious of them, and even the great eighteenth-century Swiss mathematician Leonhard Euler, who used them in calculations with unparalleled proficiency, did not recognize them as "legitimate" numbers. It was the nineteenth-century German mathematician Carl Friedrich Gauss who fully appreciated their geometric significance and used his standing in the scientific community to promote their legitimacy to other mathematicians and natural philosophers.

The second geometric interpretation of complex numbers is in terms of vectors. The complex number z = x + yi may be thought of as the vector x·i + y·j in the plane, which may in turn be represented as an arrow from the origin to the point (x, y).

(Fig. 4.3: Complex numbers as vectors in the plane. Fig. 4.4: Parallelogram law for addition of complex numbers.)

The first component of this vector is Rez, and the second component is Imz. In this interpretation, the definition of addition of complex numbers is equivalent to the parallelogram law for vector addition, since we add two vectors by adding the respective components (see Fig. 4.4).

4.5. Complex conjugate: Let z = x + iy be a complex number; then the complex conjugate z̄ of z is defined by z̄ = x − iy. It follows immediately from the definition that
Rez = x = (z + z̄)/2 and Imz = y = (z − z̄)/(2i).
We list here some properties related to conjugation, which are easy to prove:
1. (z1 + z2)‾ = z̄1 + z̄2 ∀z1, z2 ∈ C
2. (z1·z2)‾ = z̄1·z̄2 ∀z1, z2 ∈ C

3. (z1/z2)‾ = z̄1/z̄2 ∀z1, z2 ∈ C (z2 ≠ 0)
4. If α ∈ R and z ∈ C, then (αz)‾ = α·z̄.
5. For z ∈ C we have that z ∈ R if and only if z = z̄.

4.6. Modulus of complex numbers: For z = x + iy ∈ C we define |z| = √(x² + y²) and call it the modulus of z. Thus, the modulus |z| is precisely the length of the vector which represents z in the complex plane:
z = x + iy = OM, |z| = |OM| = √(x² + y²).

4.7. Polar (trigonometric) form of complex numbers: The canonical forms of complex numbers are easily used to add, subtract, multiply or divide complex numbers. To do more complicated operations on complex numbers, such as taking powers or roots, we need the following form of complex numbers.
Let us start by employing polar coordinates: for 0 ≠ z = x + iy = OM = (x, y), we can put
x = r cosθ, y = r sinθ      (I)
where r = |z| = √(x² + y²), and θ is the angle between OM and the real axis, defined by
cosθ = x/√(x² + y²), sinθ = y/√(x² + y²)      (II)
The equalities (I) and (II) define the unique couple (r, θ) with 0 ≤ θ < 2π such that x = r cosθ and y = r sinθ.

Then,
z = x + iy = √(x² + y²)·(x/√(x² + y²) + i·y/√(x² + y²)) = |z|(cosθ + i sinθ).
Putting |z| = r, we obtain z = r(cosθ + i sinθ), where r = |z| is the modulus of z, and the angle θ is called the argument of z, denoted by θ = arg(z). This form of a complex number z is called the polar (or trigonometric) form of the complex number.
Examples:
1) z = 1 + i = √2·(1/√2 + i·1/√2) = √2(cos(π/4) + i sin(π/4))
2) z = 3 − 3√3·i = 6·(1/2 − i·√3/2) = 6(cos(−π/3) + i sin(−π/3))
Remark: Two complex numbers in polar forms z1 = r1(cosθ1 + i sinθ1) and z2 = r2(cosθ2 + i sinθ2) are equal if and only if r1 = r2 and θ1 = θ2 + 2kπ, k ∈ Z.

4.8. Multiplication and division in polar forms:
1) Multiplication: Let z1 = r1(cosθ1 + i sinθ1), z2 = r2(cosθ2 + i sinθ2). Then,
z1z2 = r1r2[cosθ1cosθ2 − sinθ1sinθ2 + i(cosθ1sinθ2 + cosθ2sinθ1)] = r1r2[cos(θ1 + θ2) + i sin(θ1 + θ2)].
It follows that |z1z2| = |z1|·|z2| and arg(z1z2) = arg(z1) + arg(z2).
2) Division: For z2 ≠ 0, z1/z2 = z ⇔ z1 = z·z2. Then |z1| = |z|·|z2|, hence |z| = |z1|/|z2|.

Moreover, arg(z1) = arg(z) + arg(z2), so arg(z) = arg(z1) − arg(z2). Therefore, for z1 = r1(cosθ1 + i sinθ1) and z2 = r2(cosθ2 + i sinθ2) ≠ 0, we have that
z1/z2 = (r1/r2)[cos(θ1 − θ2) + i sin(θ1 − θ2)].      (4.1)
Example: z1 = −2 + 2i, z2 = 3i. We first write
z1 = 2√2(cos(3π/4) + i sin(3π/4)), z2 = 3(cos(π/2) + i sin(π/2)).
Therefore,
z1z2 = 6√2(cos(5π/4) + i sin(5π/4));  z1/z2 = (2√2/3)(cos(π/4) + i sin(π/4)).
3) Integer powers: For z = r(cosθ + i sinθ), we see that z² = r²(cos2θ + i sin2θ). Then, by induction, we easily obtain that zⁿ = rⁿ(cos nθ + i sin nθ) for all n ∈ N. Now, by equation (4.1), take
z⁻¹ = 1/z = (1/r)(cos(−θ) + i sin(−θ)) = r⁻¹(cos(−θ) + i sin(−θ)).
Therefore z⁻² = (z⁻¹)² = r⁻²(cos(−2θ) + i sin(−2θ)). Again, by induction, we obtain that z⁻ⁿ = r⁻ⁿ(cos(−nθ) + i sin(−nθ)) ∀n ∈ N. This yields that
zⁿ = rⁿ(cos nθ + i sin nθ) for all n ∈ Z.      (4.2)
A special case of this formula is the formula of de Moivre:
(cosθ + i sinθ)ⁿ = cos nθ + i sin nθ ∀n ∈ N,
which is useful for expressing cos nθ and sin nθ in terms of cosθ and sinθ.

4.9. Roots: Given z ∈ C and n ∈ N* = N\{0}, we want to find all w ∈ C such that wⁿ = z. To find such w, we use the polar forms. First, we write z = r(cosθ + i sinθ) and w = ρ(cosϕ + i sinϕ); we now determine ρ and ϕ. Using the relation wⁿ = z, we obtain that

ρⁿ(cos nϕ + i sin nϕ) = r(cosθ + i sinθ)
⇔ ρⁿ = r and nϕ = θ + 2kπ, k ∈ Z
⇔ ρ = ⁿ√r (the real positive n-th root of r) and ϕ = (θ + 2kπ)/n, k ∈ Z.
Note that there are only n distinct values of w, corresponding to k = 0, 1, …, n − 1. Therefore, w is one of the following values:
{ⁿ√r(cos((θ + 2kπ)/n) + i sin((θ + 2kπ)/n)) | k = 0, 1, …, n − 1}.
For z = r(cosθ + i sinθ) we denote by
ⁿ√z = {ⁿ√r(cos((θ + 2kπ)/n) + i sin((θ + 2kπ)/n)) | k = 0, 1, …, n − 1}
and call ⁿ√z the set of all n-th roots of the complex number z. For each w ∈ C such that wⁿ = z, we call w an n-th root of z and write w ∈ ⁿ√z.
Examples:
1. ³√1 = ³√(1·(cos0 + i sin0)) = {1·(cos(2kπ/3) + i sin(2kπ/3)) | k = 0, 1, 2}
= {1; −1/2 + (√3/2)i; −1/2 − (√3/2)i} (complex roots).
2. Square roots: For z = r(cosθ + i sinθ) ∈ C, we compute the complex square roots
√z = √(r(cosθ + i sinθ)) = {√r(cos((θ + 2kπ)/2) + i sin((θ + 2kπ)/2)) | k = 0, 1}
= {√r(cos(θ/2) + i sin(θ/2)); −√r(cos(θ/2) + i sin(θ/2))}.
Therefore, the set √z contains two opposite values {w, −w}; shortly, we write √z = ±w (for w² = z). Also, for z = x + iy, we have the practical formula:

3) 3. Eq (4.xn.x2)…(x . a ≠ 0.b.4) has n solutions in C.4ac∈C.3). z2 = 2 – i. By formula (4. we obtain that ∆ =±w = ±(1 + 3i) Then.i ⎜ ⎟ 2 ⎢ 2 ⎝ ⎠⎥ ⎣ ⎦ if ⎧1 where signy = ⎨ ⎩− 1 if y≥0 y<0 (4. taking z2 – (5+i)z + 8 + i = 0. Lecture on Algebra ⎡ 1 ⎛ ⎞⎤ 1 ( z − x ) ⎟⎥ z = ± ⎢ ( z + x ) + ⎜ ( sign y ). such that the left-hand side of (4. 2a Concretely. (an≠0) (4. Fundamental Theorem of Algebra: Consider the polynomial equation of degree n in C: anxn + an-1xn-1 +…+ a1x + a0 = 0. x2….x1)(x . we obtain the solutions z1.c ∈C. ai ∈C ∀i = 0. 2 We finish this chapter by introduce (without proof) the Fundamental Theorem of Algebra. a. Complex quadratic equations: az2 + bz + c = 0. we have that ∆ = -8+6i.2 = 5 + i ± (1 + 3i) or z1 = 3 + 2i. 25 .2 = −b±w where w2 = ∆ = b2.10.xn). …n.4) can be factorized by anxn + an-1xn-1 + … + a0 = an(x . By the same way as in the real-coefficient case. 4. This means that.4) Then. there are complex numbers x1. the solutions are z1. 1.Nguyen Thieu Huy.

Chapter 4: Matrices

I. Basic concepts

Let K be a field (say, K = R or C).

1.1. Definition: A matrix is a rectangular array of numbers (of the field K) enclosed in brackets. These numbers are called entries or elements of the matrix.

Examples 1:
⎡2 0.4 8⎤   ⎡6⎤   [1 5 4]   ⎡2 3⎤
⎣5 −32 0⎦   ⎣1⎦             ⎣1 8⎦

Note that we sometimes use the brackets ( ) to indicate matrices.

1.2. Notations: Usually, we denote matrices by capital letters A, B, C or by writing the general entry, thus A = [ajk].

The notion of matrices comes from a variety of applications. We list here some of them.

Sales figures: Let a store have products I, II, III. Then, the numbers of sales of each product per day can be represented by a matrix:

        Monday  Tuesday  Wednesday  Thursday  Friday
  I   ⎡   10      20        15         3        4  ⎤
  II  ⎢    0      12         7         3        5  ⎥
  III ⎣    9       6         8         0        9  ⎦

Systems of equations: Consider the system of equations
5x − 10y + z = 2
6x − 3y − 2z = 0
2x + y − 4z = 0
Then the coefficients can be represented by a matrix
⎡5 −10  1⎤
⎢6  −3 −2⎥
⎣2   1 −4⎦
We will return to this type of coefficient matrices later, and so on.

By an mxn matrix we mean a matrix with m rows (called row vectors) and n columns (called column vectors). Thus, an mxn matrix A is of the form:
A = ⎡a11 a12 … a1n⎤
    ⎢a21 a22 … a2n⎥
    ⎢ …   …  …  … ⎥
    ⎣am1 am2 … amn⎦

Example 2: In Example 1 above we have 2x3, 2x1, 1x3 and 2x2 matrices, respectively.

In the double subscript notation for the entries, the first subscript is the row, and the second subscript is the column in which the given entry stands. Thus, a23 is the entry in row 2 and column 3.

If m = n, we call A an n-square matrix. Then, its diagonal containing the entries a11, a22, …, ann is called the main diagonal (or principal diagonal) of A.

1.3. Vectors: A vector is a matrix that has only one row (then we call it a row vector) or only one column (then we call it a column vector). In both cases, we call its entries the components. Thus we write
A = [a1 a2 … an] (a row vector),  B = ⎡b1⎤ (a column vector).
                                      ⎢b2⎥
                                      ⎢… ⎥
                                      ⎣bn⎦

1.4. Transposition: The transposition AT of an mxn matrix A = [ajk] is the nxm matrix that has the first row of A as its first column, the second row of A as its second column, …, and the m-th row of A as its m-th column. Thus, for
A = ⎡a11 a12 … a1n⎤   we have that   AT = ⎡a11 a21 … am1⎤
    ⎢a21 a22 … a2n⎥                       ⎢a12 a22 … am2⎥
    ⎢ …   …  …  … ⎥                       ⎢ …   …  …  … ⎥
    ⎣am1 am2 … amn⎦                       ⎣a1n a2n … amn⎦

Example 3: A = ⎡1 2 3⎤,  AT = ⎡1 4⎤
               ⎣4 0 7⎦        ⎢2 0⎥
                              ⎣3 7⎦
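As a small sketch in plain Python (no external library assumed), the transposition rule "row j of A becomes column j of AT" reads:

    A = [[1, 2, 3],
         [4, 0, 7]]          # a 2x3 matrix stored as a list of rows

    # Entry (k, j) of AT is entry (j, k) of A.
    A_T = [[A[j][k] for j in range(len(A))] for k in range(len(A[0]))]
    print(A_T)               # [[1, 4], [2, 0], [3, 7]], a 3x2 matrix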

II. Matrix addition, scalar multiplication

2.1. Definition: Two matrices are said to have the same size if they are both mxn. For two matrices A = [ajk], B = [bjk], we say A = B if they have the same size and the corresponding entries are equal, that is, a11 = b11, a12 = b12, and so on.

2.2. Definition: Let A, B be two matrices having the same size. Then their sum, written A + B, is obtained by adding the corresponding entries. (Note: matrices of different sizes cannot be added.)

Example: For A = ⎡2 3 −1⎤ and B = ⎡2 0 5⎤,
                 ⎣1 4  2⎦         ⎣7 1 4⎦

A + B = ⎡2+2 3+0 (−1)+5⎤ = ⎡4 3 4⎤
        ⎣1+7 4+1   2+4 ⎦   ⎣8 5 6⎦

2.3. Definition: The product of an mxn matrix A = [ajk] and a number c (or scalar c), written cA, is the mxn matrix cA = [cajk] obtained by multiplying each entry in A by c.

Example: For A = ⎡ 2  5⎤, we have 2A = ⎡ 4  10⎤, −A = ⎡−2 −5⎤, 0A = ⎡0 0⎤
                 ⎢−1  4⎥               ⎢−2   8⎥       ⎢ 1 −4⎥      ⎢0 0⎥
                 ⎣ 3 −7⎦               ⎣ 6 −14⎦       ⎣−3  7⎦      ⎣0 0⎦

Here, (−1)A is simply written −A and is called the negative of A; (−k)A is written −kA; also, A + (−B) is written A − B and is called the difference of A and B.

2.4. Definition: An mxn zero matrix is an mxn matrix with all entries zero; it is denoted by O.

Denote by Mmxn(R) the set of all mxn matrices with real entries. The following properties are easy to prove.

2.5. Properties:
1. (Mmxn(R), +) is a commutative group. Detailedly,
(A + B) + C = A + (B + C)
A + B = B + A
A + O = O + A = A
A + (−A) = O (written A − A = O)
2. For the scalar multiplication we have that (α, β are numbers):
α(A + B) = αA + αB
(α + β)A = αA + βA
(αβ)A = α(βA) (written αβA)
1A = A
3. Transposition: (A + B)T = AT + BT; (αA)T = αAT.

III. Matrix multiplications

3.1. Definition: Let A = [ajk] be an mxn matrix, and B = [bjk] be an nxp matrix. Then, the product C = A·B (in this order) is an mxp matrix defined by C = [cjk], with the entries
cjk = aj1b1k + aj2b2k + … + ajnbnk (the sum of ajl·blk over l = 1, 2, …, n),
where j = 1, 2, …, m and k = 1, 2, …, p.
Briefly, this is "multiplication of rows into columns": multiply each entry in the j-th row of A by the corresponding entry in the k-th column of B and then add these n products. We can illustrate this by the figure below.

(Figure: the entry cjk of C = AB is computed from the j-th row of A, with entries aj1, aj2, …, ajn, and the k-th column of B, with entries b1k, b2k, …, bnk.)

Note: AB is defined only if the number of columns of A is equal to the number of rows of B.

Example:
⎡1 4⎤ ⎡2  5⎤   ⎡1×2+4×1  1×5+4×(−1)⎤   ⎡6  1⎤
⎢2 2⎥ ⎣1 −1⎦ = ⎢2×2+2×1  2×5+2×(−1)⎥ = ⎢6  8⎥
⎣3 0⎦          ⎣3×2+0×1  3×5+0×(−1)⎦   ⎣6 15⎦

Exchanging the order, the product ⎡2  5⎤ ⎡1 4⎤ is not defined.
                                  ⎣1 −1⎦ ⎢2 2⎥
                                         ⎣3 0⎦

Remarks: 1) The matrix multiplication is not commutative, that is, AB ≠ BA in general.

Example: A = ⎡0 1⎤, B = ⎡1 0⎤; then AB = O and BA = ⎡0 1⎤ ≠ O.
             ⎣0 0⎦      ⎣0 0⎦                        ⎣0 0⎦

2) The above example also shows that AB = O does not imply A = O or B = O.
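The "rows into columns" rule is a direct triple loop. A sketch in Python, kept deliberately naive (no external library):

    def matmul(A, B):
        # C = AB with c_jk = sum over l of a_jl * b_lk; A is mxn, B is nxp.
        m, n, p = len(A), len(B), len(B[0])
        assert all(len(row) == n for row in A), "columns of A must equal rows of B"
        return [[sum(A[j][l] * B[l][k] for l in range(n)) for k in range(p)]
                for j in range(m)]

    A = [[1, 4], [2, 2], [3, 0]]
    B = [[2, 5], [1, -1]]
    print(matmul(A, B))    # [[6, 1], [6, 8], [6, 15]], as computed above
    # matmul(B, A) raises an error here: the product in the other order
    # is undefined, since the sizes do not match.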

3.2. Properties: Let A, B, C be matrices and k be a number. Then,
a) (kA)B = k(AB) = A(kB) (written kAB)
b) A(BC) = (AB)C (written ABC)
c) (A + B)C = AC + BC
d) C(A + B) = CA + CB,
provided A, B and C are matrices such that the expressions on the left are defined.

IV. Special matrices

4.1. Triangular matrices: A square matrix whose entries above the main diagonal are all zero is called a lower triangular matrix. Meanwhile, an upper triangular matrix is a square matrix whose entries below the main diagonal are all zero.

Example: A = ⎡1 2 −1⎤ (upper triangular),  B = ⎡1 0 0⎤ (lower triangular)
             ⎢0 0  4⎥                          ⎢3 7 0⎥
             ⎣0 0  3⎦                          ⎣2 0 0⎦

4.2. Diagonal matrices: A square matrix whose entries above and below the main diagonal are all zero, that is, ajk = 0 ∀j ≠ k, is called a diagonal matrix.

Example: ⎡1 0 0⎤
         ⎢0 0 0⎥
         ⎣0 0 3⎦

4.3. Unit matrix: A unit matrix is the diagonal matrix whose entries on the main diagonal are all equal to 1. We denote the unit matrix by In (or I), where the subscript n indicates the size nxn of the unit matrix.

Example: I3 = ⎛1 0 0⎞,  I2 = ⎛1 0⎞
              ⎜0 1 0⎟        ⎝0 1⎠
              ⎝0 0 1⎠

Remarks:
1) Let A ∈ Mmxn(R), the set of all mxn matrices whose entries are real numbers. Then AIn = A = ImA.
2) Denote by Mn(R) = Mnxn(R). Then (Mn(R), +, •) is a noncommutative ring, where + and • are the matrix addition and multiplication, respectively.

4.4. Symmetric and antisymmetric matrices: A square matrix A is called symmetric if AT = A, and it is called antisymmetric (or skew-symmetric) if AT = −A.

4.5. Transposition of matrix multiplication: (AB)T = BTAT provided AB is defined.

4.6. A motivation of matrix multiplication: Consider the transformations (e.g., rotations, translations…).

The first transformation is defined by
x1 = a11w1 + a12w2
x2 = a21w1 + a22w2      (I)
or, in matrix form,
⎡x1⎤ = ⎡a11 a12⎤ ⎡w1⎤ = A ⎡w1⎤
⎣x2⎦   ⎣a21 a22⎦ ⎣w2⎦     ⎣w2⎦

The second transformation is defined by
y1 = b11x1 + b12x2
y2 = b21x1 + b22x2      (II)
or, in matrix form,
⎡y1⎤ = ⎡b11 b12⎤ ⎡x1⎤ = B ⎡x1⎤
⎣y2⎦   ⎣b21 b22⎦ ⎣x2⎦     ⎣x2⎦

To compute the formula for the composition of these two transformations, we substitute (I) into (II) and obtain that
y1 = c11w1 + c12w2
y2 = c21w1 + c22w2
where
c11 = b11a11 + b12a21;  c12 = b11a12 + b12a22;
c21 = b21a11 + b22a21;  c22 = b21a12 + b22a22.
This yields that
C = ⎡c11 c12⎤ = BA  and  ⎡y1⎤ = BA ⎡w1⎤.
    ⎣c21 c22⎦            ⎣y2⎦      ⎣w2⎦
However, if we use the matrix multiplication, we obtain immediately that
⎡y1⎤ = B ⎡x1⎤ = B·A ⎡w1⎤.
⎣y2⎦     ⎣x2⎦       ⎣w2⎦
Therefore, the matrix multiplication allows us to simplify the calculations related to the composition of transformations.

V. Systems of Linear Equations

We now consider one important application of matrix theory, that is, the application to systems of linear equations. Let us start with some basic concepts of systems of linear equations.

5.1. Definition: A system of m linear equations in n unknowns x1, x2, …, xn is a set of equations of the form
a11x1 + a12x2 + … + a1nxn = b1
a21x1 + a22x2 + … + a2nxn = b2
…
am1x1 + am2x2 + … + amnxn = bm      (5.1)
where the ajk, 1 ≤ j ≤ m, 1 ≤ k ≤ n, are given numbers, which are called the coefficients of the system. The bi, 1 ≤ i ≤ m, are also given numbers. The system (5.1) is also called a linear system of equations. If the bi, 1 ≤ i ≤ m, are all zero, then (5.1) is called a homogeneous system. If at least one bk is not zero, then (5.1) is called a nonhomogeneous system.

A solution of (5.1) is a set of numbers x1, x2, …, xn that satisfy all the m equations of (5.1). A solution vector of (5.1) is a column vector X whose components constitute a solution of (5.1). If the system (5.1) is homogeneous, it has at least the trivial solution x1 = 0, x2 = 0, …, xn = 0.

5.2. Coefficient matrix and augmented matrix: We write the system (5.1) in the matrix form AX = B, where A = [ajk] is called the coefficient matrix, and
X = ⎡x1⎤  and  B = ⎡b1⎤
    ⎢x2⎥           ⎢b2⎥
    ⎢… ⎥           ⎢… ⎥
    ⎣xn⎦           ⎣bm⎦
are column vectors. The matrix Ã = [A ⋮ B] is called the augmented matrix of the system (5.1); Ã is obtained by augmenting A by the column B. We note that Ã determines the system (5.1) completely, because it contains all the given numbers appearing in (5.1).

VI. Gauss Elimination Method

We now study a fundamental method to solve system (5.1) using operations on its augmented matrix. This method is called the Gauss elimination method. We first consider the following example from electric circuits.

Example 1: Consider the electric circuit (figure).

Label the currents as shown in the above figure, and choose directions arbitrarily. We use the following Kirchhoff laws to derive equations for the circuit:
+ Kirchhoff's current law (KCL): at any node of a circuit, the sum of the inflowing currents equals the sum of the outflowing currents.
+ Kirchhoff's voltage law (KVL): in any closed loop, the sum of all voltage drops equals the impressed electromotive force.
Applying KCL and KVL to the above circuit, we have that
Node P: i1 − i2 + i3 = 0
Node Q: −i1 + i2 − i3 = 0
Right loop: 10i2 + 25i3 = 90
Left loop: 20i1 + 10i2 = 80
Putting now x1 = i1, x2 = i2, x3 = i3, we obtain the linear system of equations
x1 − x2 + x3 = 0
−x1 + x2 − x3 = 0
10x2 + 25x3 = 90
20x1 + 10x2 = 80      (6.1)
This system is so simple that we could almost solve it by inspection. This is not the point. The point is to perform a systematic method, the Gauss elimination, which will work in general, also for large systems. It is a reduction to "triangular form" (or, precisely, echelon form; see Definition 6.2 below), from which we shall then readily obtain the values of the unknowns by "back substitution".
We write the system and its augmented matrix side by side:

Equations:                          Augmented matrix:
x1 − x2 + x3 = 0    (pivot)         Ã = ⎡ 1 −1  1  0⎤
−x1 + x2 − x3 = 0   (eliminate)         ⎢−1  1 −1  0⎥
10x2 + 25x3 = 90                        ⎢ 0 10 25 90⎥
20x1 + 10x2 = 80                        ⎣20 10  0 80⎦

First step: Elimination of x1.
Call the first equation the pivot equation and its x1-term the pivot in this step, and use this equation to eliminate x1 (get rid of x1) in the other equations. For this, do:
Add the pivot equation to the second equation;
Subtract 20 times the pivot equation from the fourth equation.
This corresponds to row operations on the augmented matrix, which we indicate behind the new matrix in (6.2). The result is
x1 − x2 + x3 = 0        ⎡1 −1   1  0⎤
0 = 0                   ⎢0  0   0  0⎥  Row2 + Row1 → Row2
10x2 + 25x3 = 90        ⎢0 10  25 90⎥
30x2 − 20x3 = 80        ⎣0 30 −20 80⎦  Row4 − 20×Row1 → Row4      (6.2)

Second step: Elimination of x2.
The first equation, which has just served as the pivot equation, remains untouched. We want to take the (new) second equation as the next pivot equation. Since it contains no x2-term (needed as the next pivot; in fact, it is 0 = 0), first we have to change the order of the equations (and the corresponding rows of the new matrix) to get a nonzero pivot. We put the second equation (0 = 0) at the end and move the third and the fourth equations one place up. We get
x1 − x2 + x3 = 0    (pivot)        ⎡1 −1   1  0⎤
10x2 + 25x3 = 90    (eliminate)    ⎢0 10  25 90⎥
30x2 − 20x3 = 80                   ⎢0 30 −20 80⎥
0 = 0                              ⎣0  0   0  0⎦
To eliminate x2, do: Subtract 3 times the pivot equation from the third equation. The result is


x1 − x2 + x3 = 0        ⎡1 −1   1    0⎤
10x2 + 25x3 = 90        ⎢0 10  25   90⎥
−95x3 = −190            ⎢0  0 −95 −190⎥  Row3 − 3×Row2 → Row3
0 = 0                   ⎣0  0   0    0⎦      (6.3)

Back substitution: Determination of x3, x2, x1. Working backward from the last equation to the first equation of this "triangular" system (6.3), we can now readily find x3, then x2, and then x1:
−95x3 = −190       ⇒  i3 = x3 = 2 (amperes)
10x2 + 25x3 = 90   ⇒  i2 = x2 = 4 (amperes)
x1 − x2 + x3 = 0   ⇒  i1 = x1 = 2 (amperes)
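The whole procedure (forward elimination with row exchanges, then back substitution) is short enough to sketch in Python. This is a minimal version for square systems with a unique solution, not a robust solver:

    def gauss_solve(A, b):
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]    # augmented matrix
        for col in range(n):
            # find a nonzero pivot in this column, swapping rows if necessary
            piv = next(r for r in range(col, n) if M[r][col] != 0)
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):                     # eliminate below
                factor = M[r][col] / M[col][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):                      # back substitution
            s = sum(M[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (M[i][n] - s) / M[i][i]
        return x

    # System (6.1) without its redundant second equation:
    print(gauss_solve([[1, -1, 1], [0, 10, 25], [20, 10, 0]], [0, 90, 80]))
    # [2.0, 4.0, 2.0], i.e. i1 = 2, i2 = 4, i3 = 2 amperes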

Note: A system (5.1) is called overdetermined if it has more equations than unknowns, as in system (6.1); determined if m = n; and underdetermined if (5.1) has fewer equations than unknowns.
Example 2: Gauss elimination for an underdetermined system. Consider

3x1 + 2x2 + 2x3 − 5x4 = 8
6x1 + 15x2 + 15x3 − 54x4 = 27
12x1 − 3x2 − 3x3 + 24x4 = 21

The augmented matrix is:
⎡ 3  2  2  −5  8⎤
⎢ 6 15 15 −54 27⎥
⎣12 −3 −3  24 21⎦
1st step: Elimination of x1:
⎡3   2   2  −5   8⎤
⎢0  11  11 −44  11⎥  Row2 − 2×Row1 → Row2
⎣0 −11 −11  44 −11⎦  Row3 − 4×Row1 → Row3

2nd step: Elimination of x2:
⎡3  2  2  −5  8⎤
⎢0 11 11 −44 11⎥
⎣0  0  0   0  0⎦  Row3 + Row2 → Row3



Back substitution: Writing the matrix back as a system, we obtain
3x1 + 2x2 + 2x3 − 5x4 = 8
11x2 + 11x3 − 44x4 = 11
From the second equation, x2 = 1 − x3 + 4x4. From this and the first equation, we obtain that x1 = 2 − x4. Since x3 and x4 remain arbitrary, we have infinitely many solutions; if we choose a value of x3 and a value of x4, then the corresponding values of x1 and x2 are uniquely determined.
Example 3: What will happen if we apply the Gauss elimination to a linear system that has no solution? The answer is that in this case the method will show this fact by producing a contradiction. For instance, consider

3x1 + 2x2 + x3 = 3
2x1 + x2 + x3 = 0
6x1 + 2x2 + 4x3 = 6
The augmented matrix is
⎡3 2 1 3⎤
⎢2 1 1 0⎥
⎣6 2 4 6⎦
→ ⎡3    2    1  3⎤  →  ⎡3    2    1  3⎤
  ⎢0 −1/3  1/3 −2⎥     ⎢0 −1/3  1/3 −2⎥
  ⎣0   −2    2  0⎦     ⎣0    0    0 12⎦

The last row corresponds to the last equation, which is 0 = 12. This is a contradiction, yielding that the system has no solution. The form of the system and of the matrix in the last step of the Gauss elimination is called the echelon form. Precisely, we have the following definition.
6.2. Definition: A matrix is of echelon form if it satisfies the following conditions:

i) All the zero rows, if any, are on the bottom of the matrix.
ii) In each nonzero row, the leading nonzero entry is to the right of the leading nonzero entry in the preceding row.
Correspondingly, a system is called an echelon system if its augmented matrix is an echelon matrix.



Example: A = ⎡1 −2 3 −4 5⎤  is an echelon matrix.
             ⎢0  2 1  5 7⎥
             ⎢0  0 0  7 8⎥
             ⎣0  0 0  0 0⎦

6.3. Note on the Gauss elimination: At the end of the Gauss elimination (before the back substitution) the reduced system will have the echelon form:
a11x1 + a12x2 + … + a1nxn = b1
ã2j2xj2 + … + ã2nxn = b̃2
        …
ãrjrxjr + … + ãrnxn = b̃r
0 = b̃r+1
0 = 0
…
0 = 0
where r ≤ m, 1 < j2 < … < jr, and a11 ≠ 0, ã2j2 ≠ 0, …, ãrjr ≠ 0.

From this, there are three possibilities, in which the system has:
a) no solution, if r < m and the number b̃r+1 is not zero (see Example 3);
b) precisely one solution, if r = n and b̃r+1, if present, is zero (see Example 1);
c) infinitely many solutions, if r < n and b̃r+1, if present, is zero (see Example 2).
In the last case, the solutions are obtained as follows:
+) First, determine the so-called free variables, which are the unknowns that are not leading in any equation (i.e., xk is a free variable ⇔ xk ∉ {x1, xj2, …, xjr}).
+) Then, assign arbitrary values to the free variables and compute the remaining unknowns x1, xj2, …, xjr by back substitution (see Example 2).
6.4. Elementary row operations:

To justify the Gauss elimination as a method of solving linear systems, we first introduce the two related concepts.


Elementary operations for equations:
1. Interchange of two equations.
2. Multiplication of an equation by a nonzero constant.
3. Addition of a constant multiple of one equation to another equation.
To these correspond the following elementary row operations on matrices:
1. Interchange of two rows (denoted by Ri ↔ Rj).
2. Multiplication of a row by a nonzero constant (kRi → Ri).
3. Addition of a constant multiple of one row to another row (Ri + kRj → Ri).
Clearly, the Gauss elimination consists of these operations, for pivoting and getting zeros.

6.5. Definition: A system of linear equations S1 is said to be row equivalent to a system of linear equations S2 if S1 can be obtained from S2 by (finitely many) elementary row operations.

Obviously, the system produced by the Gauss elimination at the end is row equivalent to the original system to be solved. Hence, the desired justification of the Gauss elimination as a solution method now follows from the subsequent theorem.

6.6. Theorem: Row equivalent systems of linear equations have the same sets of solutions.

Proof: The interchange of two equations does not alter the solution set. Neither does the multiplication of an equation by a nonzero constant c, because multiplication of the new equation by 1/c produces the original equation. Similarly for the addition of an equation αEi to an equation Ej: adding −αEi to the equation resulting from the addition, we get back the original equation. This implies that the Gauss elimination yields all solutions of the original system.
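The three elementary row operations translate directly into code. A sketch in Python, operating in place on a matrix stored as a list of rows:

    def swap(M, i, j):             # Ri <-> Rj
        M[i], M[j] = M[j], M[i]

    def scale(M, i, k):            # kRi -> Ri, with k != 0
        assert k != 0
        M[i] = [k * a for a in M[i]]

    def add_multiple(M, i, j, k):  # Ri + kRj -> Ri
        M[i] = [a + k * b for a, b in zip(M[i], M[j])]

Each operation is invertible by an operation of the same kind (swap again, scale by 1/k, add −kRj), which is exactly why row equivalent systems have the same solutions.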

Chapter 5: Vector Spaces

I. Basic concepts

1.1. Definition: Let K be a field (e.g., K = R or C), and let V be a nonempty set. We endow V with two operations as follows:
vector addition +: V x V → V, (u, v) ↦ u + v;
scalar multiplication: K x V → V, (λ, v) ↦ λv.
Then V is called a vector space over K if the following axioms hold:
1) (u + v) + w = u + (v + w) for all u, v, w ∈ V;
2) u + v = v + u for all u, v ∈ V;
3) there exists a null vector, denoted by O ∈ V, such that u + O = u for all u ∈ V;
4) for each u ∈ V, there is a unique vector in V, denoted by −u, such that u + (−u) = O (written: u − u = O);
5) λ(u + v) = λu + λv for all λ ∈ K and all u, v ∈ V;
6) (λ + µ)u = λu + µu for all λ, µ ∈ K and all u ∈ V;
7) λ(µu) = (λµ)u for all λ, µ ∈ K and all u ∈ V;
8) 1·u = u for all u ∈ V, where 1 is the identity element of K.

Remarks:
1. Elements of V are called vectors.
2. The axioms (1)-(4) say that (V, +) is a commutative group.

1.2. Examples:
1. Consider R³ = {(x, y, z) | x, y, z ∈ R} with the vector addition and scalar multiplication defined as usual:
(x1, y1, z1) + (x2, y2, z2) = (x1 + x2, y1 + y2, z1 + z2),
λ(x, y, z) = (λx, λy, λz), λ ∈ R.

Thinking of R³ as the coordinates of vectors in the usual space, the axioms (1)-(8) are obvious, as we already knew in high school. Then R³ is a vector space over R, with the null vector O = (0, 0, 0).
2. Similarly, consider Rⁿ = {(x1, x2, …, xn) | xi ∈ R ∀i = 1, 2, …, n} with the vector addition and scalar multiplication defined as
(x1, x2, …, xn) + (y1, y2, …, yn) = (x1 + y1, x2 + y2, …, xn + yn),
λ(x1, x2, …, xn) = (λx1, λx2, …, λxn), λ ∈ R.
Then Rⁿ is a vector space over R. The null vector is O = (0, 0, …, 0).
3. Let Mmxn(R) be the set of all mxn matrices with real entries, and consider the matrix addition and scalar multiplication defined as in Chapter 4. Then the properties of matrix addition and scalar multiplication proved in Chapter 4 show that Mmxn(R) is a vector space over R. The null vector is the zero matrix O.
4. Let P[x] be the set of all polynomials with real coefficients, that is,
P[x] = {a0 + a1x + … + anxⁿ | a0, a1, …, an ∈ R; n = 0, 1, 2, …}.
Then P[x] is a vector space over R with respect to the usual operations of addition of polynomials and multiplication of a polynomial by a real number. It is easy to check that all the axioms (1)-(8) hold true.

1.3. Properties: Let V be a vector space over K; x, y ∈ V; λ, µ ∈ K. Then, the following assertions hold:
1. λx = O ⇔ (λ = 0 or x = O)
2. λ(x − y) = λx − λy
3. (λ − µ)x = λx − µx
4. (−λ)x = −λx

Proof: 1. "⇐": Let λ = 0; then 0x = (0 + 0)x = 0x + 0x. Using the cancellation law in the group (V, +), we obtain 0x = O. Similarly, let x = O; then λO = λ(O + O) = λO + λO, and, again by the cancellation law, we have that λO = O.
"⇒": Let λx = O and λ ≠ 0.

(2) and (3).µx we obtain that λx . and the null element belong to W. Then W is a subspace of V if and only if the following conditions hold. Corollary: Let W be a subset of a vector space V over K. Definition: Let W be a subset of a vector space V over K. 2. 0x = O ∈W. Then.µx = (λ-µ)x The assertions (3).3. λx = (λ-µ+µ)x = (λ-µ)x + µx Therefore. 2. (4) can be proved by the similar ways. II. v ∈W we have that au+bv∈W.Nguyen Thieu Huy. W is a subspace of V if and only if the following conditions hold: (i) O∈W (ii) for all a. w ≠ ∅. W is closed under the vector addition. b ∈K and all u. Lecture on Algebra Then ∃λ-1. Then. ∀λ ∈K ⇒λu ∈ W Proof: “⇒” let W ⊂ V be a subspace of V.1.+) is a group. Subspaces 2. Since (W. the conditions (2) and (3) are clearly true. to prove that W is a vector space over K what we need is the fact that the vector addition and scalar multiplication are also the operation on W. adding both sides to . Then.2. 2. W is called a subspace of V if and only if W is itself a vector space over K with respect to the operations of vector addition and scalar multiplication on V.v∈W ⇒ u + v ∈W 3. 43 . W is closed under the scalar multiplication. O∈W (where O is the null element of V) 2. that is ∀u ∈W. The following theorem provides a simpler criterion for a subset W of V to be a subspace of V. Therefore. Since W is a vector space over K. They are precisely (1). ∃x ∈ W. Theorem: Let W be a subset of a vector space V over K. that is ∀u. 1. “⇐”: This implication is obvious: Since V is already a vector space over K and W ⊂ V. multiplying λ-1 we obtain that λ-1(λx) = λ-1O = O ⇒ (λ-1λ)x = 0 ⇒ 1x = O ⇒ x = O.

Therefore.0). Consider M = {X ∈Mnx1(R)⎪AX = 0}. The set of all such linear combinations is denoted by Span{v1. 0.α2…αn ∈R.1)} ⊂R3. v2…vn} and is called the linear span of v1.y2. Hence M is a subspace of Mnx1(R). Theorem: Let S be a subset of a vector space V. 0. y. 1. then we call it a spanning set of V. (0. 0.2. 0) + z(0. ii) If W is a subspace of V containing S.vn. 0) + y(0. (0.y2. linear spans 3. v2…vn.1. 3. 2. 1.0.y∈R} is a subspace of R3.0) in M we have that λ(x1.ur} spans V. S = {(1.y10) and (x2. v2….0) ∈M and for (x1. (0. Let V = R3.+ αnvn⎪αi ∈K ∀i = 1. u2.…n} 3. and A ∈Mmxn(R). Lecture on Algebra Examples: 1. That is. z ∈R} = R3 Hence.vn ∈V. λy1 + µ2. Also. y1. 1)⏐x. S = {(1. 0. v2…. then Span S ⊂ W.. any vector in V of the form α1v1 + α2v2 + …+αnvn for α1. Let O be a null element of a vector space V is then {O} is a subspace of V. Let V = Mnx1(R). Then. t)⎪x. 0. III. it follows that A(λX1 + µX2) = λAX1 + µAX2 = O + O = O.. Then.. Span {v1. v2…vn} = {α1v1 + α2v2 + ….. span S = {x(1.. 0)}⊂ R3.0) = (λx1 + µx2..y ∈R}. 0)⎪x. (0. 1. λX1 + µX2 ∈ M. Examples: 1. (0. 1)} is a spanning set of R3 44 . For X1. Linear combinations. (0.z ∈R} = {(x.y ∈R} = {(x. 0)⎪x. if the set {u1. 0). X2 ∈M. y. 0.0).2. 3. y. 2. Then.3..0. the following assertions hold: i) The Span S is a subspace of V which contains S. is called a linear combination of v1.1 Definition: Let V be a vector space over K and let v1. because. M = {x.y..0)⎪x. Definition: Given a vector space V. Span S = {x(1. 1.Nguyen Thieu Huy.0. the set of vectors {u1. 0) . u2.ur} are said to span V if V = Span {u1. u2. 0) + y(0. Then. 0) ∈ M ∀λ∈R.0) + µ(x2.y. S = {(1. 0) . un}.

Nguyen Thieu Huy.vm = 1. 7. v = (0. Otherwise. …. The first two vectors u and v are linearly independent since: λ1u + λ2v = (0. then S is linearly dependent. -λ1 + 3λ2. the set {v1. 0. such that α1v1 + α2v2 + …. 5. Remarks: (1) If the set of vectors S = {v1….λ2) = (0.=αm = 0. 1) + z(0. 0) ⇒ λ1 = λ2 = 0. v2…vm are linearly dependent. -2) = (0.1) also holds when one of α1. vm are said to be linear independent. 0. 1).1. .+ αmvm = O ⇒ α1 = α2 = .. 2) u = (6. not all zero. we have that 1.O + O + …. α1v1 + α2v2 + …. w = (5.1). Linear dependence and independence 4. w = (0. . 0) ⎧6 x = 0 ⎪ ⇒ ⎨2 x + 5y = 0 ⇒x=y=z=0 ⎪4 x + y − 2 z = 0 ⎩ 4.1) always holds if α1 = α2 = .=αm = 0. If (4.. If (4. 5. 2. 0.v1 + 0. Examples: (4. 4) + y(0.αm belonging to K. v2…vm} is said to be linearly dependent.vm} contains O. . . v2…vm are linearly independent. 0. 7. If the vectors v1. 2. v2…vm ∈V are said to be linearly dependent if there exist scalars α1. 0.. -2) are linearly dependent since 3u + 2v – w = (0. 3x – 3y + 7z.3. 4x + y – 2z) = (0. the vectors v1. 45 . 3. 0) ⇒ (λ1+λ2. 3.3.+ O = O (2) Let v ∈V.1) 1) u = (1. Definition: Let V be a vector space over K. 0) ⇒ (6x – 2x + 5y. Then v is linear independent if and only if v ≠ O.2) are linearly independent since x(6. v2…. then the vectors v1. . v=(1. α2. The vectors v1.αm is not zero.+ αmvm = O. 0.1) holds only in this case. We observe that (4. Lecture on Algebra IV. α2 . 3. say v1 = O. that is.-1. then the vectors v1. v2…vm are linearly independent.. v2…vm} is linearly independent. Otherwise. ….0). 0. we say that the set {v1. Indeed. 0).v2 + …+ 0. 4).2. 3.

…. then vk - m k ≠ i =1 ∑α v m i i = 0 . let T = {vk1. and y ∈ V such that {v1. there exist α1. “⇒” since S is linearly dependent. k2. m}\{k1 .. and x is a linear combination of S. Therefore. then S is also linearly dependent. such that. say λk ≠ 0. y} is linearly dependent then y is a linearly combination of S.…vm} is linearly independent. vk2. (6) If S = {v1. T is linearly independent. vm} is linear independent. αk = 1. … αm not all zero (since αk = 1) such that This means that S is linearly dependent.. we have that αk1vk1 + … + αknvkn + i i∈{1. λ2.2. 2. Lecture on Algebra (3) If S = {v1. k ≠ i =1 ∑ α i v i + v k = 0. i =1 ∑ λ i v i = 0 ⇒ λ k v k = ∑ (− λ i )v i k ≠ i =1 m m ⇒ vk = ⎛ λi ⎞ ⎟v i ⎟ k ≠ i =1 ⎝ k ⎠ ∑ ⎜− λ ⎜ m “⇐” If there is vk ∈ S such that vk = k ≠ i =1 ∑ α i vi .Nguyen Thieu Huy. (4) S = {v1.…kn} ⊂ {1. In fact. k 2 . Proof. then αi = αi’ ∀i = i =1 m m 46 .…. (5) If S ={v1. if x = 1.k n } ∑ 0. m k =1 ∑ α i v i = ∑ α' i v i . Then.v =0 This yields that αk1 = αk2 = … = αkn = 0 since S in linearly independent.. 2…m}.. v2…vm. Therefore. Take a linear combination αk1vk1 + … + αknvkn=O.…vm} linearly independent. then every subset T of S is also linear independent. there are scalars λ1. αk+1. Alternatively. if S contains a linearly dependent subset.m. then this combination is unique in the sense that.…vkn} where {k1. … λm. … vm} is linearly dependent if and only if there exists a vector vk ∈S such that vk is a linear combination of the rest vectors of S. not al zero.

u4 are linearly independent because we can not express any vector uk (k≥2) as a linear combination of the preceding vectors. Definition: A set S = {v1. An immediate consequence of this theorem is the following. not all zero.. Theorem: Suppose the set S = {v1…vm}of nonzero vectors (m ≥2). Bases and dimension 5. This is a contradiction because v1 ≠ 0.2. Lecture on Algebra 4. u2. v2…vn} in a vector space V is called a basis of V if the following two conditions hold. Corollary: The nonzero rows of an echelon matrix are linearly independent. “⇐”: This implication follows from Remark 4.. Let k be the largest integer such that m a a ak ≠ 0.. such that i =1 ∑ a i v i = 0 . V. we have that there exist scalars a1. the following assertions are equivalent: i) S in a basis of V 47 .3. vn} be a subset of V. Then. − k −1 v k −1 ak ak If k = 1 ⇒ v1 = 0 since a2 = a3 = … = am = 0. (1) S is linearly independent (2) S is a spanning set of V. vk = − 1 v 1 .2 (4). am.Nguyen Thieu Huy. 4. and S = {v1….4. a2…. Example: ⎛0 ⎜ ⎜0 ⎜0 ⎜ ⎜0 ⎜0 ⎝ 1 0 0 0 0 2 1 0 0 0 3 2 0 0 0 2 − 4 ⎞ u1 ⎟ 5 2 ⎟ u2 1 − 1 ⎟ u3 ⎟ 0 1 ⎟ u4 0 0 ⎟ ⎠ Then u1.1. Proposition: Let V be a vector space over K. u3. Then. The following proposition gives a characterization of a basis of a vector space 5. Then if k > 1. there exists a k > 1 such that vk = α1v1 + α1v2 + …+ αk-1vk-1 Proof: “⇒” since {v1. S is linearly dependent if and only if one of its vector is a linear combination of the preceding vectors. v2…vm} are linearly dependent. That is.

Starting from v1 we have: v1 = λ1u1+λ2u2 + …+λrur. Let now λ1.1) ∈ Span S. (0. Without loosing of generality we can suppose that λ1 ≠0. then the number of vectors in each basis is the same.v2 we obtain that λ1=λ2 = …=λn=0.0). u can be uniquely written as a linear combination of S. 0.…. 5. Then. (x. Pn[x] = {a0 + a1x + …+ anxn⎪a0.. (ii) ⇒ (i): The assertion (ii) implies that span S = V.5. Definition: A vector space V is said to be of finite dimension if either V = {O} (trivial vector space) or V has a basis with n elements for some fixed n≥1. Examples: 1.v1 + …+ 0. u2…ur} and T = {v1.. S = {(1.1)} is a basis of Rn. 3.0). 1. we have that ∀u∈V. it follows that not all λ1….vk} be subsets of vector space V such that T is linearly independent and every vector in T can be written as a linear combination of S. Then.….y) ∈R2.x. − r u r λ1 λ1 λ1 48 . Lecture on Algebra ii) ∀u∈ V.. 0. we can see that S = {(1. Proof: For the purpose of contradiction let k > r ⇒ k ≥ r +1.λr are zero. u can be written as a linear combination of S. Proof: (i) ⇒ (ii). The uniqueness of such an expression follows from the linear independence of S.an ∈R} is the space of polynomials with real coefficients and degrees ≤ n.0) + y(0. Then. V = R2. In the same way as above. Since v1 ≠0.….1)} is a basis of R2 because S is linearly independent and ∀(x.…xn} is a basis of Pn[x]. 5.Nguyen Thieu Huy. 0. Since S is a Spanning set of V.0).…(0. (0. u1 = For v2 we have that λ λ 1 v1 − 2 u 2 − . 2. Lemma: Let S = {u1.. This yields that S is linearly independent. since O ∈V can be uniquely written as a linear combination of S and O = 0. v2 .y) = x(1. and S is called usual basis of Rn.λn ∈K such that λ1v1 + …+λnvn = O. S = {1. Then k ≤r.4.…. The following lemma and consequence show that if V is of finite dimension. ….

we obtain that vr+1 is a linear combination of {v1.6. and every vector of T is a linear combination of S. Also.7. v2. we say that V is of null–dimension and write dim V = 0 2) if V ≠ {0} and S = {v1.ur}.…. the following assertions hold. Therefore. Theorem: Let V be a vector space of n–dimension then the following assertions hold: 1.6. we say that V is of n–dimension and write dimV = n. 5. 2. Then every basic of V has the same number of elements. T is linearly dependent. v2. 3. 5.v2…. v2 is a linear combination of {v1. Definition: Let V be a vector space of finite dimension. vn} of V (with n elements) is a basis of V. Proceeding in this way. By the similar way as above. Interchanging the roll of T to S and vice versa we obtain n ≤m. m = n.v2…vr+1} is linearly dependent. Any linearly independent set of vectors in V with n elements is basis of V. 5. 5. and so on. V ≠{0}. Since T is linearly independent. dim(Rn) = n. and therefore. v2. u2. dim(Pn[x]) = n+1. Proof: Let S = {u1….8. The following theorem is direct consequence of Lemma 5. Examples: dim(R2) = 2. we have that m≤n. Any subset of V containing n+1 or more vectors is linearly dependent.Nguyen Thieu Huy.9.….un} and T = {v1…vm} be bases of V. u3…ur}. we can derive that v3 is a linear combination of {v1. Theorem: Let V be a finite–dimensional vector space. Theorem: Let V be a vector space of n –dimension then.…vr}. Then: 1) if V = {0}. Thus. Lecture on Algebra r λ ⎛ 1 ⎞ r i v2 = ∑ µ i u i = µ1 ⎜ ⎜ λ v1 − ∑ λ u i ⎟ + ∑ µ i u i ⎟ i =2 1 i =1 ⎝ 1 ⎠ i=2 r Therefore.vn} is a basic of V. {v1. we have the following theorem which can be proved by the same method.5 and Theorem 5. 49 . This is a contradiction. Any spanning set T = {v1.

Example: ⎛1 − 1 0 ⎞ ⎟ ⎜ A = ⎜1 3 − 1 ⎟ .v(r) be linearly independent. Rank of matrices 6. rank A = 2 because the first two rows are linearly ⎜5 3 − 2⎟ ⎠ ⎝ independent. Lecture on Algebra (1) If S = {u1. In fact.….un} is a basis of V. Hence.Nguyen Thieu Huy. The rank of a matrix equals the maximum number of linearly independent column vectors of A. We will prove that q ≤ r. and the three rows are linearly dependent. u(1) = c11v(1)+c12v(2) + …+c1rv(r) u(2) = c21v(1)+c22v(2) + …+ c2rv(r) u(s) = cs1v(1)+cs2v(2) + … + csrv(r). let v(1). (2) If T is a spanning set of V. u(2)…u(s) of A are linear combinations of v(1). v(2)…. u2. Definition: The maximum number of linearly independent row vectors of a matrix A = [ajk] is called the rank of A and is denoted by rank A. v(2)….u2. 6. and all the rest row vectors u(1).uk} be a linearly independent subset of V with k<n then one can extend S by vectors uk+1.….2. Theorem. A and AT has the same rank. By “maximum linearly independent subset of T” we mean the linearly independent set of vectors S ⊂ T such that if any vector is added to S from T we will obtain a linear dependent set of vectors.…. then the maximum linearly independent subset of T is a basis of V. and let q be the maximum number of linearly independent column vectors of A. VI.1.v(r).un such that {u1. Writing 50 . Proof: Let r be the maximum number of linearly independent row vectors of A.

2. …n. + v rk ⎜ 2 r ⎟ = v 1k ⎜ ⎟ + v 2 k ⎜ ⎜ M ⎟ M M ⎟ M ⎜ ⎟ ⎟ ⎜ ⎜ ⎟ ⎟ ⎜ ⎜c ⎟ ⎜c ⎟ ⎜c ⎟ ⎜u ⎟ ⎝ sr ⎠ ⎝ s2 ⎠ ⎝ s1 ⎠ ⎝ sk ⎠ For all k = 1.. Therefore.3. 6. v 1n ) .⎜1 ⎟⎬ ⎪⎜ c ⎟ ⎜ c ⎟ ⎜ c ⎟⎪ ⎪⎜ 11 ⎟ ⎜ 12 ⎟ ⎜ 1r ⎟⎪ ⎪⎜M ⎟ ⎜M ⎟ ⎜M ⎟⎪ ⎪⎜ ⎟ ⎜ ⎟ ⎜ ⎟⎪ ⎩⎝ c s1 ⎠ ⎝ c s 2 ⎠ ⎝ c sr ⎠⎭ Hence.. 51 .. there fore. Applying this argument for AT we derive that r ≤ q. ⎛ c12 ⎞ ⎛ c12 ⎞ ⎛ c11 ⎞ ⎛ u 1k ⎞ ⎜ ⎟ ⎟ ⎜ ⎜ ⎟ ⎟ ⎜ ⎜c ⎟ ⎜ c 22 ⎟ ⎜ c 21 ⎟ ⎜ u2k ⎟ + .. We denote the row space of A by rowsp(A) and the column space of A by colsp(A)... q = dim V ≤ r. v rn ) u (1) = (u11 M u (s) = (u s1 ... This yields that ⎧⎛1 ⎞ ⎛ 0 ⎞ ⎛ 0 ⎞⎫ ⎪⎜ ⎟ ⎜ ⎟ ⎜ ⎟⎪ ⎪⎜ 0 ⎟ ⎜1 ⎟ ⎜ 0 ⎟⎪ ⎪⎜M ⎟ ⎜M ⎟ ⎜M ⎟⎪ ⎪⎜ ⎟ ⎜ ⎟ ⎜ ⎟⎪ V = Span {column vectors of A} ⊂Span ⎨⎜ 0 ⎟. 2…n.. Lecture on Algebra v (1) = (v11 M v ( r ) = (v r1 v 12 v r2 u12 u s2 .. Definition: The span of row vectors of A is called row space of A. u sn ) we have that u1k = c11v1k+c12v2k+…+c1rvrk u2k = c21v1k+c22v2k+…+c2rvrk M usk = cs1v1k + cs2v2k +…+ csrvrk for all k = 1. The span of column vectors of A is called column space of A.Nguyen Thieu Huy.... ⎜ 0 ⎟. r = q. u1n ) .

7. Concretely. A can be obtained from B after finitely many row operations. then count the number of non-zero rows of this echelon form. Then.4). 6. This yields that rowsp (A) = rowsp (B). where k ≠ 0 k +) the inverse of kRi + Rj → Rj is Rj – kRi →Rj Therefore. Corollary: The row space and the column space of a matrix A have the same dimension which is equal to rank A. Indeed. Theorem: Row equivalent matrices have the same row spaces and then have the same rank. The above arguments also show the following theorem. 6.5.4. the rank of a echelon matrix equals the number of its non–zero rows (see corollary 4. in practice. rowsp (A) ⊂ rowsp (B). this number is the rank of A. Lecture on Algebra From Theorem 6.…. Remark: The elementary row operations do not alter the rank of a matrix. Corollary: Consider the subset S = {v1. 6. 52 . 6. Note that each row operation can be inverted to obtain the original state. This yields that rowsp(B) ⊂ rowsp(A).2 we obtain the following corollary. ⎛ 1 2 0 − 1 ⎞ R2 − 2 R1 → R2 ⎟ ⎜ Example: A = ⎜ 2 6 − 3 − 3 ⎟ − − − − − − → ⎜ 3 10 − 6 − 5 ⎟ R − 3R → R 1 3 ⎠ 3 ⎝ ⎛1 2 0 −1⎞ ⎟ ⎜ ⎜0 2 − 3 −1⎟ ⎜0 4 − 6 − 2⎟ ⎠ ⎝ ⎛ 1 2 0 − 1⎞ ⎟ R3 − 2 R2 → R3 ⎜ ⎜ 0 2 − 3 − 1⎟ ⇒ rank ( A) = 2 − − − − − −− → ⎜ 0⎟ ⎝0 0 0 ⎠ From the definition of a rank of a matrix. Then. +) the inverse of Ri ↔ Rj is Rj ↔ Ri +) the inverse of kRi → Ri is 1 Ri → Ri. Therefore. v2. to calculate the rank of a matrix. we immediately have the following corollary. vk} ⊂ Rn the the following assertions hold. each row of B is a linear combination of rows of A.6. Note that.Nguyen Thieu Huy. let B be obtained from A after finitely many row operations. we use the row operations to deduce it to an echelon form.

rank S = dim span S = rank A =2. where A is the kxn matrix whose row vectors are v1. v2. ⎜x ⎟ ⎝ n⎠ ⎛ b1 ⎞ ⎜ ⎟ ~ ⎜ b2 ⎟ and B = ⎜ .….1) has a solution if and only if rank A = rank A . respectively. Note that.1) ⎛ x1 ⎞ ⎜ ⎟ where A = (ajk) is an mxn coefficient matrix. The second assertion follows straightforwardly. the system has infinitely many solutions with n – r free variables to which arbitrary values can be assigned.1)}. Fundamental theorem of systems of linear equations 7. 2. Theorem: Consider the system of linear equations AX = B (7. ~ the system has a unique solution.1). 0. all the solutions can be obtained by Gauss eliminations.…. vk. . . (2.Nguyen Thieu Huy.3).3. 6. Moreover. Example: Consider S = {(1. 6. Also. X = ⎜M ⎟ is the column vector of unknowns. VII. S is linearly independent if and only if the kxn matrix with row vectors v1. Then the M ⎟ ⎜ ⎟ ⎜b ⎟ ⎝ m⎠ ~ ~ system (7. if rankA = rank A = n. 10. -6. -1). . Put 0 − 1 ⎞ ⎛ 1 2 0 − 1 ⎞ ⎛ 1 2 0 − 1⎞ ⎛1 2 ⎟ ⎜ ⎜ ⎟ ⎟ ⎜ A = ⎜ 2 6 − 3 − 3 ⎟ → ⎜ 0 2 − 3 − 1 ⎟ → ⎜ 0 2 − 3 − 1⎟ ⎜ 3 10 − 6 − 5 ⎟ ⎜ 0 4 − 6 − 2 ⎟ ⎜ 0 0 0 0 ⎟ ⎠ ⎝ ⎝ ⎠ ⎠ ⎝ Then.1. Then. -3. Dim spanS = rank(A). Definition: Let S be a finite subset of vector space V. v2. . 53 . . 2.1). and if rank A = rank A <n. Lecture on Algebra 1. denoted by rankS. Let A = [A M B] be the augmented matrix of the system (7. a basis of span S can be chosen as {(0. vk has rank k. the number dim(Span S) is called the rank of S. 2. 2. Proof: We prove the first assertion of the theorem.5)}. (3.8. (0. -3.

if X1. It is obvious that colsp (A) ⊂colsp ( A ). we have that A(λX1 + µX2) = λAX1+µAX2 = λO + +µO = 0. The solution vectors of (7. xn = 0. X2 are solutions of the nonhomogeneous system AX = B.2) always has a trivial solution x1 = 0.3. x2…xn we obtain that x1C1 + x2C 2 + … + xnCn = B.2. That is. there fore. Thus. Theorem: (Solutions of homogeneous systems of linear equations) ~ ~ ~ ~ ~ ~ A homogeneous system of linear equations AX = O (7. In the case of homogeneous linear system. nullA = {X ∈ Mnx1(R)⎪AX = O}. then A(X1 – X2) = AX1 – AX2 = B – B = 0. x2…. the vectors space of all solution vectors of the homogeneous system (7. 7. say x1. it is a subspace of Mnx1(R). then colsp( A ) = colsp(A). Lecture on Algebra “⇒” Denote by c1. Definition: Let A be a real matrix of the size mxn (A ∈ Mmxn(R)).1) has a solution. This yields that there exist x1.1. x2…xn such that B = x1C1 + x2C2 + … + xnCn. denoted by nullA. X2 ∈null A. 54 .Nguyen Thieu Huy. Since the system (7. xn. B ∈colsp (A) ⇒ colsp ( A ) ⊂ colsp (A). rankA = rank A =dim(colsp(A)). colsp ( A ) = colsp(A). Then. We now observe that. This follows that B ∈colspA. Note that it is easy to check that nullA is a subspace of Mnx1(R) because. we have the following theorem which is a direct consequence of Theorem 7. Therefore. c2…cn the column vectors of A. “⇐” if rankA = rank A . µ ∈R and X1. λX1 + µX2 ∈ nullA. then the dimension of nullA is called nullity of A.2) form a vector space of dimension n – r. Nontrivial solutions exist if and only if r = rankA < n.1) has a solution x1. 7. Note also that nullity of A equals n – rank A. This means that system (7.2) is called the null space of the matrix A. Hence. x2 = 0…. Therefore. +) O ∈nullA (since AO =O) +) ∀λ. X1 – X2 is a solution of the homogeneous system AX = O.

Theorem: Let X0 be a fixed solution of the nonhomogeneous system (7.. Ax = ABb = b for any b ∈Mnx1 (R). ⎛1 ⎞ ⎜ ⎟ ⎜0⎟ Taking now ⎜ ⎟. 8. B is called the inverse of A.1) has a unique solution X = A-1H for any H∈Mnx1 (R). Conversely. So that we can write x = Bb where B depends only on A. Therefore. therefore. all the solutions of the nonhomogeneous system (7. ⎜M ⎟ 0 ⎜ ⎟ ⎜ ⎟ ⎜ 0⎟ ⎜1 ⎟ ⎝ ⎠ ⎝ ⎠ Similarly. from ABb = b we obtain that AB = I. for any b ∈ Mnx1 (R). Then.4. 55 . Proof: Let A be nonsingular. Definition: Let A be a n-square matrix. B = BI = BAC = IC = C..2). This means that A is invertible and A-1 = B. M ⎜ ⎟ ⎜0⎟ ⎝ ⎠ ⎛ 0⎞ ⎛ 0⎞ ⎜ ⎟ ⎜ ⎟ ⎜1 ⎟ ⎜M ⎟ . Inverse of a matrix 8. A is said to be invertible (or nonsingular) if the exists an nxn matrix B such that BA = AB = In. Lecture on Algebra This observation leads to the following theorem. let rankA = n. the system Ax = b always has a unique solution x. then the inverse of A is unique. Then. . Remark: If A is invertible. AB = BA = I. denoted by B = A-1. Theorem (Existence of the inverse): The inverse A-1 of an nxn matrix exists if and only if rankA = n. 7. Indeed.1). This yields that Rank A = n. ⎜ ⎟ as b. This also yield BA = I. Then. Then. Therefore..2. Then. VIII. Then. We consider the system AX = H (8. x = Bb = BAx for any x ∈ Mnx1(R). A is nonsingular ⇔ rank A = n. and the Gauss elimination shows that each component xj of x is a linear combination of those of b.1) It is equivalent to A-1AX = A-1H ⇔IX = A-1H ⇔ X = A-1H. (8.1) are of the form X = X0 + Xh where Xh is a solution of the corresponding homogeneous system (7.Nguyen Thieu Huy. let B and C be inverses of A.1.

reduces it to [I M K] the augmented matrix of the system IX = A-1. Therefore. 2) The algorithm of Gauss – Jordan method to determine A-1 Step1: Construct the augmented matrix [A M I] Step2: Using row elementary operations on [A M I] to reduce A to the unit matrix. and. Obviously. Examples: 1. by eliminating the entries in U above the main diagonal. Determination of the inverse (Gauss – Jordan method): 1) Gauss – Jordan method (a variation of Gauss elimination): Consider an nxn matrix with rank A = n. Using A we form n systems: AX(1) = e(1). Proof: Let AB = I. where U is upper triangle since Gauss elimination triangularized systems.Nguyen Thieu Huy. x(2). Null (BTAT) ⊃ Null (AT) Therefore. Then. AX(2) = e(2)…AX(n) =e(n) where e(j) has the jth component 1 and other components 0. we have to check only one of the two equations AB = I or BA = I to conclude B = A-1. n – rank (BTAT) ≥ n . Step3: Read off the inverse A-1. e(2)…e(n)] (unit matrix).…x(n)] and I = [e(1). and A is invertible. Hence. we have that B = IB = A-1AB = A-1I = A-1. Now. 8. rank A = n. Lecture on Algebra Remark: Let A. We combine the n systems into a single system AX = I. AB = I if and only if BA = I. we must have K = A-1 and can then read off A-1 at the end. B be n-square matrices.rank (AT) ⇒ rank A = rank AT ≥ rank (BTAT) = rank I = n. the obtained square matrix on the right will be A-1. also A=B-1. nullity of BTAT ≥ nullity of AT.3. A = ⎢ ⎡5 3⎤ ⎥ ⎣2 1⎦ 56 . and the n augmented matrix ~ A = [A M I]. Hence. Introducing the nxn matrix X = [x(1). Now. The Gauss – Jordan elimination now operates on [U M H]. Thus. Then. AX= I implies X = A-1I = A-1 and to solve AX = I for X we can apply the ~ Gauss elimination to A =[A M I] to get [U M H].

Nguyen Thieu Huy. If A. •) is a group. B are nonsingular nxn matrices. Also. [A M I ] = 2 − 1 3 M 0 1 0 ⎥ ⎢ ⎥ ⎢ ⎢4 1 8 M 0 0 1 ⎥ ⎢4 1 8⎥ ⎦ ⎣ ⎦ ⎣ R2 − 2 R1 → R2 ⎡1 0 2 1 0 0⎤ R3 + R2 → R3 ⎢0 − 1 − 1 − 2 1 0 ⎥ ⎯ ⎯→ ⎯ ⎯→ ⎢ ⎥ 0 − 4 0 1⎥ R3 − 4 R1 → R3 ⎢0 1 ⎣ ⎦ 2 M 1 0 0⎤ (−1) R2 → R2 ⎡1 0 2 M 1 0 0⎤ ⎡1 0 ⎢0 − 1 − 1 M − 2 1 0 ⎥ ⎢0 1 1 M 2 − 1 0 ⎥ ⎯ ⎯→ ⎥ ⎢ ⎥ ⎢ ⎢0 0 − 1 M − 6 1 1⎥ (−1) R3 → R3 ⎢0 0 1 M 6 − 1 − 1⎥ ⎦ ⎣ ⎦ ⎣ ⎤ 0⎥ ⎥ 1⎥ ⎦ R2 − R3 → R2 ⎡1 0 0 M − 11 2 02 ⎤ ⎢0 1 0 M − 4 0 1 ⎥ ⎯ ⎯→ ⎥ ⎢ R1 − 2 R3 → R1 ⎢0 0 1 M 6 − 1 − 1⎥ ⎦ ⎣ 8. A ∈GL(Rn) ⇒B = C. Remarks: 1. 2.(A-1)-1 = A. Denote by GL(Rn) the set of all invertible matrices of the size nxn. Then. A = 2 − 1 3 . Lecture on Algebra 3 1 ⎡ 3 1 M ⎡ ⎤ ⎢1 ⎡5 3 M 1 0 ⎤ 1 M 0⎥ 5 5 →⎢ [A M I] = ⎢ 5 ⎥→⎢ 5 1 2 ⎣ 2 1 M 0 1 ⎦ ⎢2 1 M 0 1 ⎥ ⎢ 0 − M − ⎣ ⎦ ⎣ 5 5 3 1 ⎡ ⎤ ⎡− 1 3 ⎤ 1 M 0 ⎥ ⎡1 0 M − 1 3 ⎤ ⇒ A −1 = ⎢ → ⎢ →⎢ 5 5 ⎥ ⎥ ⎢ 0 1 M 2 − 5⎥ ⎣ 0 1 M 2 − 5 ⎦ ⎣ 2 − 5⎦ ⎣ ⎦ ⎡1 0 2 M 1 0 0 ⎤ ⎡1 0 2 ⎤ ⎥ ⎢ ⎥ ⎢ 2.4. we have the cancellation law: AB = AC. (GL(Rn). 3. IX. then AB is a nonsingular and (AB)-1 = B-1A-1. Determinants 57 .

1 1 2 Example: 1 1 3 =1 1 3 1 2 −1 1 3 0 2 +2 1 1 0 1 = -1-2+2 = -1 0 1 2 To compute the determinant of higher order. which is written as D = det A = M a n1 and defined inductively by for n = 1.. Definitions: Let A be an nxn square matrix. 1 ≤ k ≤ n. and the formula is Det A = ∑ (−1) k =1 n j +k a jk M jk where Mjk (a ≤ k ≤ n) is a determinant of order n – 1. 2. 3. the determinant of the submatrix of A. D = (-1)1+1a11M11+(-1)1+2a12M12+. then the value of the determinant is multiplied by (-1). Properties: 1. obtained from A by deleting the jth row and the kth column. A = [ajk]. 9. We can compute a determinant by expanding any row. a determinant of order n is a a11 a 21 number associated with A = [ajk]. namely. a1n . If any two rows of a determinant are interchanged. D = det A = a11 for n = 2.. say row j. we need some more subtle properties of determinants.. 58 . D = det A = a a = a11a22 – a12a21 a 22 ⎦ 21 22 for n ≥ 2. Lecture on Algebra 9. A = [a11].2. a nn = A ⎡ a 11 ⎣a 21 a 12 ⎤ a11 a12 ⎥ .. Then. namely. is a determinant of order n – 1.Nguyen Thieu Huy..1.. A = ⎢ a12 a 22 an2 .. obtained from A by deleting the first row and the kth column.. a 2 n .+(-1)1+na1nM1n where M1k. the determinant of the submatrix of A. A determinant having two identical rows is equal to 0.

. and thus det(A-1) = 1 det A 59 . 9. respectively. If each entry in a row is expressed as a binomial. a 1n a 21 a 22 a n2 . If corresponding entries in two rows of a determinant are proportional.. 5 10 35 1 2 7 Example: 1 2 21 = 5 1 2 1 = 5... The determinant of a triangular matrix is the multiplication of all entries in the main diagonal.Nguyen Thieu Huy...... The value of a determinant is not altered if its rows are written as columns in the same order. det A = det AT.det(B). then the determinant can be written as the sum of the two corresponding determinants. a 2n a nn a 21 a 22 a n2 . a nn + ' a 11 ' a 12 ' . Det (AB) = det(A)... 8. The value of a determinant is left unchanged if the entries in a row are altered by adding them any constant multiple of the corresponding entries in any other row. If all the entries in a row of a determinant are multiplied by the same factor α.. Concretely.. Note: From (8) we have that.. a 2 n . 10. a 2 n . Lecture on Algebra 1 2 1 Example: 3 0 0 = −3 2 1 1 4 +0 1 1 2 4 +0 1 1 2 4 −0 1 2 2 1 = −21 2 1 4 4.. the properties (1) – (7) still hold if we replace rows by columns.(− 6) = −30 1 3 5 1 33 5 5. a 1n + a 1n a 11 = a 21 a 12 a 22 a n2 . 5 10 35 Example: 2 4 3 14 = 0 sine the first two rows are proportional. 9 1 6. the value of determinant is zero.. a nn M a n1 M a n1 M a n1 7. In other words.. then the value of the new determinant is α times the value of the given determinant. .. ' a 11 + a 11 ' a 12 + a 12 ' . a 1n .

Nguyen Thieu Huy. Lecture on Algebra Remark: The properties (1). A n1 ⎤ .. the matrix C = [Aij] is called the cofactor matrix of A. B = A-1. Cramer’s rule 10.1.. det A det A If k ≠l.. Then.. A n 2 ⎥ ⎥ ⎥ ⎥ . (7) and (9) allow to use the elementary row operations to deduce the determinant to the triangular form. we have that det Ak = 0. PRoof: Denote by AB = [gkl] where B = n A 21 A 22 A2n . Since Ak has two identical row. Mjk is called the minor of ajk in D. Definition: Let A = [ajk] be an nxn matrix. gkl = J =1 ∑ (−1) l + s a ks detls = A M det A k det A where Ak is obtained from A by replacing lth row by kth row. Determinant and inverse of a matrix. Therefore. (4).. gkl = 0 if k ≠l.. Then. AB = I. we have that gkl = n s =1 ∑ a ks detls . Hence. Also. A nn ⎦ 1 [Ajk]T. 10. det A Then. if k = l.2 Theorem: The inverse matrix of the nonsingular square matrix A = [ajk] is given by ⎡ A11 ⎢A 1 1 12 -1 T T A = C = [Ajk] = ⎢ ⎢ M det A det A ⎢ ⎣ A 1n where Ajk occupies the same place as akj (not ajk) does in A. Example: 1 2 7 7 1 2 7 R2 − 5R1 → R2 1 2 R3 + R2 → R3 0 1 2 = −16 ⎯ ⎯→ 0 1 2 5 11 37 ⎯ ⎯→ 0 0 − 16 3 5 3 R3 − 3R1 → R3 0 − 1 − 18 X. and let Mjk be a determinant of order n – 1. gkk = n ∑ (−1) s =1 a ks M ks det A = = 1. 60 . A k +s A Clearly. obtained from D = det A by deleting the jth row and kth column of A. then multiply the entries on the main diagonal to obtain the value of the determinant. the number Ajk=(-1)j+k Mjk is called the cofactor of ajk in D. Thus.

x2. xn) is the solution of AX = B.. x n = n D D D (Cramer’s rule) where Dk is the determinant obtained from D by replacing in D the kth column by the column vector B.5... ..4. Cramer’s theorem (solution by determinant): Consider the linear system AX = B where the coefficient matrix A is of the size nxn.. ⎡ x1 ⎤ If D = det A ≠ 0. Cn] we have that Dk=det Ak Since (x1.. Let C1.…. Proof: Since rank A = rank A = n. det ([C1.. C2... Lecture on Algebra 10.. Definition: Let A be an mxn matrix. we have that the system has a unique solution ~ (x1. Putting Ak = [ C1. whereas the determinant of every square submatrix of r + 1 or more rows is zero.3. Cn be columns of A. Theorem: An mxn matrix A = [ajk] has rank r ≥ 1 if and only if A has an rxr submatrix with nonzero determinant. xn)..n.Cn]. 61 . xk = det Ak Dk = for all k=1.. x 2 = 2 .. we have that B= ∑x C i =1 i n i ⇒ det Ak = ∑ xi det[C1 ..2. if i ≠ k.Nguyen Thieu Huy..C k −1C i C k +1 ] i =1 n = xk det A (note that. 10. then the system has precisely one solution X= ⎢M ⎥ given by the ⎢ ⎥ ⎢ xn ⎥ ⎣ ⎦ formula x1 = D D1 D ...... det A D Note: Cramer’s rule is not practical in computations but it is of theoretical interest in differential equations and other theories that have engineering applications.. we have that rank A = n ⇔ det A ≠0 ⇔ ∃A-1(that mean that A is nonsingular). Cn] = 0) Therefore.. so A = [C1 C2.. for an nxn matrix A. In particular. We call a submatrix of A a matrix obtained from A by omitting some rows or some columns or both.Ck-1 Ci Ck+1 .. 10...Ck-1 B Ck+1 .

Coordinates in vector spaces 11.1..1. that is to say: there exists a unique (a1.. Let V = P2[t].1.0). (1.0. t-1.0.1. γ = 2.1.a2. (t-1)2} and v = 2t2 – 5t + 6. ⎛6⎞ ⎜ ⎟ [v]S = ⎜ 4 ⎟ ⇔ v = 6(1. Lecture on Algebra XI.0.0).0 ) + 4(0. Then. (1..1+ β(t-1)+ γ(t-1)2 for all t... (0.1. Definition: Let V be an n–dimensional vector space over K. Examples: 1) v = (6.4. ⎛ a1 ⎞ ⎜ ⎟ ⎜a2 ⎟ Denote by [v]S = ⎜ ⎟ and call it the coordinate vector of v. S = {1. Then. V. as in seen in sections IV.the usual basis of R3.1) ⎜3⎟ ⎝ ⎠ If we take other basis ε = {(1. S = {(1.0. and S = {e1... ⎜1 ⎟ ⎝ ⎠ 2.. This yields [v]S = ⎜ − 1 ⎟ ⎜2 ⎟ ⎝ ⎠ 62 ...an) ∈Kn such that v = i =1 ∑ a i ei . Then.1)..3) ∈R3. n scalars a1.1. it is known that any vector v ∈V can be uniquely expressed as a linear combination of the basis vectors in S. We also denote by M ⎜ ⎟ ⎜a ⎟ ⎝ n⎠ (v)S=(a1 a2 . v = 2t2-5t+6= α .1)}. (0.0) + 1 (1. a2.1)} then ⎛2⎞ ⎜ ⎟ [v]ε = ⎜ 3 ⎟ since v = 2(1.Nguyen Thieu Huy.0) + 3(0. an) and call it the row vector of coordinates of v. β = . en} be its basis.0).0) + 3(1. n Then. ⎜γ ⎟ ⎝ ⎠ ⎛3 ⎞ ⎜ ⎟ Therefore.. To find [v]S we let ⎛α ⎞ ⎜ ⎟ [v]S = ⎜ β ⎟ .. an are called coordinates of v with respect to the basis S. we easily obtain α = 3.0.0.1.0).

a1n ⎟ ⎟ a 22 .. vn}. v2. we obtain that v= ∑ λi vi = ∑ λi ∑ aki u k = ∑ i =1 i =1 k =1 i =1 n n n n ∑ λi aki u k = ∑ ⎜ ∑ λi aki ⎟u k k =1 k =1 n n ⎛ n ⎞ ⎠ ⎝ i =1 63 . Set U = {u1...1. Change of bases: Definition: Let V be vector space and U = { u1. Lecture on Algebra 11. [v]U = P[v]S for all v ∈ V.0..1.1. and U. P = [ajk].+an1un v2 = a12u1 + a22u2+.1) = 1 (1..0). since U is a basis of V.. S be bases of V.1.0).1.0) = 1 (1.+annun ⎛ ⎜a ⎜ 11 Then.0. un}..1) (1. we have the expression (1. Example: Consider R3 and U = {(1. a nn ⎟ ⎠ S = {(1.0.0) + 0 (0. Then.0. the matrix P = ⎜ a 21 ⎜M ⎜ ⎜a ⎝ n1 to S.1) (1.+an2un M vn = a1nu1 + a2nu2+.1..2. a 2 n ⎟ is called the change-of-basis matrix from U ⎟ ⎟ a n 2 ...0..0) + 1 (0.0) = 1 (1.0).. ⎜ 0 0 1⎟ ⎠ ⎝ Theorem: Let V be an n–dimensional vector space over K. (0.v2.(0...(1.. S = {v1... then.the usual basis.0) + 0 (0. we can express: v1 = a11u1 + a21u2+.1.the other basis of R3..0..1)}.0.. and S = {v1. and P be the change-of-basis matrix from U to S.. un}. u2.1)}.1) This mean that the change-of-basis matrix from U to S is ⎛ 1 1 1⎞ ⎟ ⎜ P = ⎜ 0 1 1⎟ .Nguyen Thieu Huy.0.. Then.0) + 0 (0.0. ⎞ a12 .0).0) + 1 (0.v3} be two bases of V.0..0) + 1 (0. (1. u2.1.. Proof. By the definition of P.

and P be the change- of-basis matrix from U to S. Then v = ∑ γ k u k Therefore. 64 . Lecture on Algebra ⎛γ 1 ⎞ ⎜ ⎟ n n ⎜γ 2 ⎟ Let [v]U = ⎜ ⎟. Remark: Let U. S be bases of vector space V of n–dimension. P is nonsingular.. it follows that rank P = rank S = n.n M k =1 i =1 ⎜ ⎟ ⎜γ ⎟ ⎝ n⎠ This yields that [v]U = P[v]S.2. It is straightforward to see that P-1 is the change-of-basis matrix from S to U. since S is linearly independent... Hence. γk = ∑ a ki λ i ∀ k = 1. Then.Nguyen Thieu Huy.

A mapping F: V → W is called a linear mapping (or vector space homomorphism) if it satisfies the following conditions: 1) For any u. λz + µz1) = 65 . F(v) = Ow ∀ v ∈ V is a linear mapping.y.4y – 2z) is a linear mapping because F (λ(x. λy + µy1. F(v)= v ∀ v ∈ V is a linear mapping. Remark: 1) The two conditions (1) and (2) in above definition are equivalent to: “∀ λ. v ∈V. F(x.y. 6) The mapping F: R3 → R2. F: V → W is linear if it preserves the two basic operations of vector spaces. 4) The zero mapping F: V →W.z) = (2x + y +z. 1.z) = (x. ∀ u.0) is a linear mapping. x . 5) The identity mapping F: V → V.1. Lecture on Algebra Chapter 6: Linear Mappings and Transformations I. F (kv) = k F(v) In other words. y +2) is not a linear mapping since f(0.0) = (1.y. that of vector addition and that of scalar multiplication. Basic definitions 1.2) ≠ (0. Then. X2 ∈ Mnx1 (R) b) F (λX) = A(λX) = λAX ∀ λ ∈ R and X∈ Mnx1 (R).2. a) F(X1 + X2) = A(X1 +X2) = AX1 + AX2 = F (X1) + F(X2) ∀ X1. F(u+v) = F(u)+ F(v) 2) For all k ∈ K.y) = (x+1. µ ∈ K.v ∈ V ⇒ F(λu + µv) = λF(u) + µF(v)” 2) Taking k = 0 in condition (2) we have that F(Ov) = Ow for a linear mapping F: V → W. F(X) = AX ∀ X∈ Mnx1 (R) is a linear mapping because. and v ∈V.z1) )=F(λx+µx1. F(x.y1.z)+ µ(x1. Examples: 1) Given A ∈ Mmxn (R).0) ∈ R2. the mapping F: Mnx1 (R) → Mmx1 (R). 2) The projection F: R3 → R3. 3) The translation f: R2 → R2. f(x. Definition: Let V and W be two vector spaces over the same field K (say R or C).y.Nguyen Thieu Huy.

4x-2y-z) + µ(2x1+y1+ z1.u2.Nguyen Thieu Huy. Let G be another linear mapping such that G(vi) = ui ∀i= 1.µ ∈R and (x. We now prove the uniqueness.. n F(v2) = u2. un be any n vectors in W.y1. for finite–dimensional vector spaces. let {v1. Since for each v ∈ V there exists a unique i =1 n n (λ1.3. 1... 4x1 – 2y1 – z1) = λF(x. Then. 66 . n n G(v) = G( ∑ λ i v i ) = i =1 i =1 ∑ λ i G( v i ) = ∑ λ i u i = ∑ λ i F ( v i ) = i =1 i =1 n n n ⎛ n ⎞ F ⎜ ∑ λ i v i ⎟ = F( v ) .…..n..y1..z1) ∈R3 The following theorem say that. Example: Let V be a vector space of n–dimension over K. Lecture on Algebra =(2(λx +µx1)+λy+µy1+λz+µz1. v2..z) + µF(x1. F(v2) = u2... The mapping F is then called an isomorphism between V and W... The linearity of F follows easily from the definition. We put F: V →W by F(v) = i =1 ∑ λ i u i for v= ∑ λi vi ∈ V .y. F(vn) = un. 1. Definition: Two vector spaces V and W over K are said to be isomorphic if there exists a bijective linear mapping F: V → W..4. and S be a basis of V....z) ∈R3. Then. λn) ∈ Kn such that v = i =1 ∑ λ i v i . ⎜ ⎟ ⎝ i =1 ⎠ This yields that G = F. and we denote by V ≅ W. for each v = i =1 ∑ λ i v i ∈ V we have that.z1)∀λ.. 2. (x1. a linear mapping is completely determined by the images of the elements of a basis.. Theorem: Let V and W be vector spaces of finite – dimension. vn } be a basis of V and let u1.y. This follows that F is a mapping satisfying F(v1) = u1. Proof.. 4(λx+µx1) – 2 (λy+µy1) – (λz + µz1)) = λ(2x + y + z. there exists a unique linear mapping F: V → W such that F(v1) = u1. F(vn) = un..

for λ. xn ∈ R} = Span{C1.y. Kernels and Images 2..y ∈ R}.0) ∀ (x..v2. Im F = {x1 C1 + x2 C2 + . V ≅ Kn. where v1.1. Then. the mapping F: V → Kn v a (v1. Ker F = {v∈V⏐F(v) = O} = F-1({O}) 2. Cn} = Column space of A = Colsp (A) 67 .. consider F: Mnx1 (R) → Mmx1 (R) defined by F(X) = AX ∀ X ∈ Mnx1 (R).. Then.vn is coordinators of v with respect to S. This is the Oz – axis 2) Let A ∈ Mmxn (R). ker F is a subspace of V. That is. Lecture on Algebra Then... Example: 1) Consider the projection F: R3 → R3. . λu + µw = λf(v) + µF(v’) = F(λv + µv’). 2.0.y. F(v’) = w.y. II. ImF is a subspace of W.0.. Therefore. . and the kernel Ker F of F is a subspace of V. + xn Cn | x1 .. µ ∈ K and u.. The image of F is the set F(V) = {F (v) |v ∈V }.0) ∈ R3} = {(0. the image ImF of W is a subspace of W... Then. O ∈ Im F. Proof: Since F is a linear mapping.y. Therefore. we obtain that F(O) = O.v2.z)| (x. Cn]. . w ∈Im F we have that there exist v. Ker F = {(x. C2. written by Ker F.0)| x.z)| z ∈ R}.z) ∈ R3.y.z) ∈ R3} = {(x.. This yields that λv + µw ∈ Im F. is an isomorphism between V and Kn.y.z) = (0. Hence.. and we denote by Im F = F(V). . C2.Nguyen Thieu Huy. that is A = [C1 C2 .3.. Im F = {F(x. Similarly..vn). It can also be expressed as F(V) = {u∈W⏐∃ v ∈ V such that F(v) = u}.y. v’ ∈V such that F(v) = u. Moreover. Definition: Let F: V → W be a linear mapping.y. F(x. Therefore. Theorem: Let F: V → W be a linear mapping. Definition: The kernel of a linear mapping F: V → W.. To compute ImF we let C1.z) ∈ R3| F(x. This is the plane xOy. Cn be columns of A. is the set of all elements of V which map into O∈ W (null vector of W).z) = (x. ImF = {AX | X ∈ Mnx1 (R)}.2.

. e2 . Then.e2.. F(v2). We now prove {F(ek+1).. by Theorem 2.vm} = V.. We have that span {v1.. Note that Null A is the solution space of homogeneous linear system AX = 0 2. it follows that {F(ek+1).v2.. F(en)} span ImF.. + λn F(en) = 0 ⇒ F (λk+1 ek+1 + ...4. ek} be a basis of KerF.en such that {e1.… vm} = V. e2 .. ∃ v ∈ V such that u = F(v). {F(e1). … F(vm) span Im F..4.. we obtain that 68 ... (Dimension Formula) Proof. Let {e1.. u ∈ Span {F(v1). In fact.. Clearly Span {F(v1). dimV = dim Ker F + dim Im F. This yields. Extend {e1.. F(en)} is linearly independent.…. ek} Therefore...en} is a basis of V. i=1 J = k +1 k n Since e1.. F(vm)}. en are linearly in dependent.. λ2.5. Im F = Span {F(v1) . there exist λ1.Nguyen Thieu Huy... + λn en) = 0 ⇒ λk+1 ek+1 + ..…vm span a vector space V and the mapping F: V → W is linear.….. . Since span {v1. F(en)} spans Im F but F(e1) = F(e2) = . Indeed. Hence... Proof. 2.. . Theorem: Let V be a vector space of finite – dimension.. q... and F: V → W be a linear mapping... F(v1). Let now u ∈ Im F.e.. Theorem: Suppose that v1... there exist λ1 … λm such that v= ∑ λ i v i . F(en)} is a basis of Im F. ...d..ek} by ek+1. i=1 m Therefore.. Then. Lecture on Algebra Ker F = {X ∈ Mnx1 (R)⏐AX = 0} = null space of A = Null A. Then. + λn en ∈ ker F = span {e1. λk such that ∑ λ iei i =1 k = J = k +1 ∑ λJeJ n ⇒ ∑ (−λ i )e i + ∑ λ J e J = 0...... if λk+1F(ek+1) +.. F(vm)} ⇒ Im F ⊂ Span {F(v1).. We now prove {F(ek+1). = F(ek) = 0. F(vm)}.. F(v2). F(vm)} ⊂ Im F. i=1 m U = F(v) = F( ∑ λ i v i ) = i=i m ∑ λ i F( v i ) ..

Nguyen Thieu Huy, Lecture on Algebra

λ1 = λ2 = ... = λk = λk+1 = ... = λn = 0. Hence, F (ek+1),..., F(en) are also linearly independent. Thus, {F(ek+1)... F(en)} is a basis of Im F. Therefore, dim Im F = n – k = n – dim ker F. This follow that dim ImF + dim Ker F = n = dim V.
2.6. Definition: Let F: V → W be a linear mapping. Then,

1) the rank of F, denoted by rank F, is defined to be the dimension of its image. 2) the nullity of F, denoted by null F, is defined to be the dimension of its kernel. That is, rank F = dim (ImF), and null F = dim (Ker F).
Example: F: R4 →R3

F(x,y,s,t) = (s – y + s + t, x +2s – t, x+y + 3s – 3t) ∀ (x,y,s,t) ∈R4 Let find a basis and the dimension of the image and kernel of F. Solution: Im F = span{F(1,0,0,0), F(0,1,0,0), F(0,0,1,0), F(0,0,0,1)} = span{(1,1,1); (-1,0,1); (1,2,3); (1. -1, -3)}. Consider
1 ⎛1 ⎜ ⎜−1 0 A=⎜ 1 2 ⎜ ⎜ 1 −1 ⎝ 1 1 3 1

⎞ ⎟ ⎟ ⎟→ ⎟ ⎟ ⎠

⎛1 ⎜ ⎜0 ⎜0 ⎜ ⎜0 ⎝

1 1 0 0

1 2 0 0

⎞ ⎟ ⎟ ⎟ (by elementary row operations) ⎟ ⎟ ⎠

Therefore, dim ImF = rank A = 2; and a basis of Im F is S = {(1,1,1); (0,1,2)}. Ker F is the solution space of homogeneous linear system:

⎧ x−y+s+t = 0 ⎪ ⎨ x + 2s − t = 0 ⎪x + y + 3s − 3t = 0 ⎩

(I)

The dimension of ker F is easily found by the dimension formula Ker F = dim (R4) – dim ker F = 4-2 = 2 To find a basis of ker F we have to solve the system (I), say, by Gauss elimination.

69

Nguyen Thieu Huy, Lecture on Algebra

⎛1 −1 1 1 ⎞ ⎛1 −1 1 1 ⎞ ⎛1 −1 1 1 ⎞ ⎜ ⎜ ⎟ ⎟ ⎜ ⎟ ⎜0 1 1 − 2⎟ ⎜0 1 1 − 2⎟ ⎜1 0 2 −1⎟ ⎜1 1 3 − 3⎟ → ⎜ 0 2 2 − 4 ⎟ → ⎜ 0 0 0 0 ⎟ ⎜ ⎜ ⎟ ⎟ ⎜ ⎟ ⎜ ⎜ ⎟ ⎟ ⎜ ⎟ ⎝ ⎝ ⎠ ⎠ ⎝ ⎠ ⎧x − y + s + t = 0 Hence, (I) ⇔ ⎨ ⎩y + s − 2 t = 0 ⇔y = -s +2t; x = -2s – 3t for arbitrary s and t. This yields (x,y,s,t) = (-s +2t, -2s – 3t, s,t) = s(-1,-2, 1, 0)+ t(2,-3,0,1) ∀ s,t ∈ 1 ⇒ Ker F = span {(-1,-2, 1, 0), (2,-3,0,1)}. Since this two vectors are linearly independent, we obtain a basis of KerF as {(-1,-2, 1, 0), (2,-3,0,1)}.
2.7. Applications to systems of linear equations:

Consider the homogeneous linear system AX = 0 where A ∈ Mmxn(R) having rank A = r. Let F: Mnx1 (R) → Mmx1 (R) be the linear mapping defined by FX = AX ∀ X ∈ Mnx1 (R); and let N be the solution space of the system AX = 0. Then, we have that N = ker F. Therefore, dim N = dim KerF = dim(Mnx1(R) – dim ImF = n – dim(colsp (A)) = n – rankA = n – r. We thus have found again the property that the dimension of solution space of homogeneous linear system AX = 0 is equal to n – rankA, where n is the number of the unknowns.
2.8. Theorem: Let F: V → W be a linear mapping. Then the following assertions

hold. 1) F is in injective if and only if Ker F = {0} 2) In case V and W are of finite – dimension and dim V = dim W, we have that F is in injective if and only if F is surjective.
Proof:

1) “⇒” let F be injective. Take any x ∈ Ker F Then, F(x) = O = F(O) ⇒ x = O. This yields, Ker F = {O} “⇐”: Let Ker F = {O}; and x1, x2 ∈ V such that f(x1) = f(x2). Then 70

Nguyen Thieu Huy, Lecture on Algebra

O = F(x1) – F(x2) = F(x1 – x2) ⇒ x1 – x2 ∈ ker F = {O}. ⇒ x1 – x2 = O ⇒ x1 = x2. This means that F is injective. 2) Let dim V = dim W = n. Using the dimension formula we derive that: F is injective ⇔ Ker F = {0} ⇔ dim Ker F = 0 ⇔ dim Im F = n (since dim Ker F + dim Im F = n) ⇔ Im F =W (since dim W = n) ⇔ F is surjective. (q.e.d.) The following theorem gives a special example of vector space.
2.9. Theorem: Let V and W be vector spaces over K. Denote by

Hom (V,W)= {f: V → W | f is linear}. Define the vector addition and scalar multiplication as follows: (f +g)(x) = f(x) + g(x) ∀ f,g ∈ Hom (V,W); ∀x ∈V (λf)(x) = λf(x) ∀ λ ∈ K; ∀ f ∈ Hom (V,W); ∀x ∈V Then, Hom (V,W) is a vector space over K.
III. Matrices and linear mappings 3.1. Definition. Let V and W be two vector spaces of finite dimension over K, with

dim V = n; dim W = m. Let S = {v1, v2, ...vn} and U= {u1, u2 ,..., um} be bases of V and W, respectively. Suppose the mapping F: V → W is linear. Then, we can express F(v1) = a11 u1 +a21 u2 +... + am1um F(v2) = a12 u1 +a22 u2 +... + am2um

M F(vn) = a1n u1 +a2n u2 +... + amnum
⎛ a11 a12 ... a1n ⎞ ⎜ ⎟ ⎜ a 21 a 22 ... a 2 n ⎟ The transposition of the above matrix of coefficients, that is the matrix ⎜ , M M M ⎟ ⎜ ⎟ ⎜ a a ...a ⎟ ⎝ m1 m 2 mn ⎠ denoted by [F ]U , is called the matrix representation of F with respect to bases S and U (we
S

write [F] when the bases are clearly understood)

71

e.0) = (-4.2.d) Example: Return to the above example. S [F ]U [X]S = [F(X)] U.1) ⎡2 − 4 5 ⎤ S Therefore.z) = (2x-4y+5z. F: R3 → R2 F(x.-6) = 5(1. U be bases of V and W respectively. x+3y-6z). Theorem: Let V and W be two vector spaces of finite dimension.y. U = {u1.1) = 2(1.1) F(0. respectively.0. and the two usual bases S and U of R3 and R2.y. [F ]U = ⎢ − 6⎥ ⎣1 3 ⎦ 3.1) = (5.3) = -4(1. . J =1 k =1 k =1 J =1 m n n m For S = {v1.0) + 3(0.. x+3y-6z).1) Proof: S [F ]U ⎛ n ⎞ ⎜ ∑ a1k x k ⎟ ⎜ k =1 ⎟ ⎟ for [X]S = [X]S = [ajk] [X]S = ⎜M ⎜ ⎟ ⎜ n ⎟ ⎜ ∑ a mk x k ⎟ ⎝ k =1 ⎠ n n ⎛ x1 ⎞ ⎜ ⎟ ⎜ x2 ⎟ ⎜M ⎟ ⎜ ⎟ ⎜x ⎟ ⎝ n⎠ Compute now. u2.0) + (-6)(0. ⎟ ⎟ ⎟ ⎟ ⎠ (q. vn}. F(x. v2. F(X) = F (∑ x J v J ) = J =1 ∑ x J F( v J ) = J =1 ∑ x J ∑ a kJ u k J =1 k =1 n m = ∑∑ x J a Jk u k = ∑ (∑ a kJ x J )u k . Lecture on Algebra Example: Consider the linear mapping F: R3 → R2.0) = (2. for any X ∈ V. Then..z) = (2x -4y +5z.Nguyen Thieu Huy. Then.1.. F: V → W be linear. and S.0) + 1(0.0. (3.. 72 . Then. F(1. . um} ⎛ m ⎜ ∑ a1J x J ⎜ J =1 ⎜ m ⎜∑a x Therefore [F(X)] U = ⎜ J =1 2 J J ⎜M ⎜ ⎜ m ⎜ ∑ a mJ x J ⎝ J =1 ⎞ ⎟ ⎟ ⎟ ⎟ S ⎟ = [F ]U [X]S.1) F(0.

respectively. S Note: For a linear mapping F: V → W with finite–dimensional spaces V and W. Suppose that f: V →E and g: E →W are linear mappings then R S R [gof ]U = [g ]U [ f ]S 3. 3. we have that. by Corollary 3. Let V. In other words.4. U be bases of V.2) 73 . Hom(V. S. S and U be bases of V and W. W be vector spaces of finite dimension.3. F: V → W be a linear mapping. Lecture on Algebra ⎛ 2x .3.4y + 5z ⎞ [F(x.2 and Corollary 3. Theorem. (3. then A = [F ]U . S’. Q-1 [F ]ε P[X]S’ = [F(X)]U’ for all X ∈ V S Therefore. respectively. respectively. Moreover. W be vector spaces of finite–dimension. E. E. we can replace the action of F by the multiplication of its matrix representation [F ]U through equality (3. the S mapping ℑ: Hom (V.1).z)]U = ⎜ ⎜ x + 3y . let P be the S S' change–of–basis matrix from S to S’.y)]S ⎜ ⎟ − 6⎦ ⎝ ⎠ ⎣ 3. S S [F ]U'' = Q-1 [F ]U P. and Q be change–of–basis matrix from U to U’.6z ⎟ = ⎟ ⎝ ⎠ ⎡2 − 4 5 ⎤ ⎛ x ⎞ S ⎢1 3 ⎥ ⎜ y ⎟ = [F ]U [(x. respectively. S [F ]U [X]S = [ F(X)]U [X]S = P[X]S’ and [F(X)]U = Q[F(X)]U’ Therefore. and R.y. If we fix two bases S and U of V and W. Corollary: If A is a matrix such that A [X]S = [F(X)]U for all X ∈ V. To do that. [F ]U [X]S = [F ]U P[X]S’ = [F(X)]U= Q[F(X)]U’ S S Hence.W) ≅ Mmxn (K). W. We want to find the relation bet ween [F ]U and [F ]U ' . U’ be other bases of V and W. Change of bases: Let V.3. The following theorem gives the relation between the matrices and the composition of mappings.W) → Mmxn (R) F a [F ]U S is an isomorphism.5. By Theorem 3.Nguyen Thieu Huy.

The set of all eigenvalues of A. which belong to K.Nguyen Thieu Huy. (K = R or C). We find the spectrum of A on R . respectively.1)} of R . Recall that Mn (K) is the set of all n-square matrices with entries belonging to K. (1. X ≠ 0 such that AX = λX}. Then.1). ⎟ ⎝ ⎠ By definition we have that ⎛ x ⎞ ⎛0⎞ λ ∈ SpRA ⇔ ∃X = ⎜ 1 ⎟ ≠ ⎜ ⎟ such that AX = λX ⎜x ⎟ ⎜0⎟ ⎝ 2⎠ ⎝ ⎠ ⇔ The system (A . the change – of – basis matrix from S to S’ is P = ⎜ 1 − 1 1 ⎟ ⎜0 1 1⎟ ⎝ ⎠ 2 ⎛1 − 1⎞ and the change – of – basis matrix from U to U’ is Q = ⎜ ⎜1 1 ⎟ . A number λ ∈ K is called an eigenvalue of A if there exists a non–zero column vector X ∈Mnx1 (K) for which AX = λX. is called the spectrum on K of A.-1. A ∈ Mnxn (K).1). Definition: Let A be an n-square matrix.0). consider the basis S’ = {(1.λI)X = 0 has nontrivial solution X ≠ 0 74 . Lecture on Algebra Example: Consider F: R3 → R2 F(x. Then S [F ]U ⎡2 − 4 1 ⎤ = ⎢ − 5⎥ ⎣1 3 ⎦ Now. SpKA = { λ∈K | ∃ X ∈ Mnx1(K). ⎛− 5 2 ⎞ Example: A = ⎜ ⎜ 2 − 2 ⎟ ∈ M2(R). that is SpR A.y. denoted by SpKA. (-1.z) = (2x-4y+z.1.1)} of R3 and the basis U’ = ⎛1 1 0⎞ ⎜ ⎟ {(1.1. [F ] = Q-1 [F ] S' U' S U ⎡1 1 0 ⎤ −1 − 5 / 2⎤ ⎛1 − 1⎞ ⎡2 − 4 1 ⎤ ⎢ ⎥ ⎡1 0 P= ⎜ ⎥ ⎢1 − 1 1⎥ = ⎢3 − 7 1 / 2 ⎥ ⎜1 1 ⎟ ⎢1 3 ⎟ − 5⎦ ⎝ ⎦ ⎠ ⎣ ⎢0 1 1 ⎥ ⎣ ⎦ ⎣ IV. and every vector satisfying this relation is then called an eigenvector of A corresponding to the eigenvalue λ. U be usual bases of R3 and R2. x + 4y -5z). (0. 4. That is to say. Eigenvalues and Eigenvectors Consider a field K.1. ⎟ ⎝ ⎠ Therefore. Let S.

λI)X = 0 for λ ∈ SpRA. Lecture on Algebra ⇔ det (A . Theorem: Let A ∈Mn(K). therefore.λI) = 0 has a nontrivial solution) ⇔ det (A . ⎜ 1 ⎟ ⎝ ⎠ 4. Definition.2.λI) is called the characteristic polynomial of A and the equation χ (λ) = 0 is called characteristic 75 .λI) = 0 (ii) 4. (i) The scalar λ ∈K is an eigenvalue of A (ii) λ ∈K satisfies det (A . the following assertion are equivalent.λI) = 0 Proof: (i) ⇔ λ ∈SpK(A) ⇔( ∃ X ≠0. Then. Then. SpRA = {-1. Let A ∈ Mn(K).1) ⇔ ⎨ 1 ⇔ ⎜ 1 ⎟ = x2 ⎜ ⎟ ⎜ 1 ⎟ ⎜x ⎟ ⎝ ⎠ ⎝ 2⎠ ⎩2 x1 + 4 x 2 = 0 ⎛ − 2⎞ Therefore.1) ⇔ ⎢ ⎥ ⎢ x ⎥ = ⎢0⎥ ⇔ ⎨ 2 x − x = 0 ⇔ ⎜ x ⎟ = x1 ⎜ 2 ⎟ ⎜ ⎟ ⎜ ⎟ − 2 + 1⎦ ⎣ 2 ⎦ ⎣ ⎦ ⎣ 2 ⎝ ⎠ 1 2 ⎝ 2⎠ ⎩ ⎛1⎞ Therefore. X ∈Mnx1(K) such that AX = λX) ⇔ ( The system (A .Nguyen Thieu Huy. -6} ⎣λ = −6 To find eigenvectors of A. the eigenvectors corresponding to λ2 = -6 are of the form k ⎜ ⎟ for all k ≠0. the polynomial χ (λ) = det (A . we have to solve the systems (A . the eigenvectors corresponding to λ1 = -1 are of the form k ⎜ ⎟ for all k ≠0 ⎜2⎟ ⎝ ⎠ For λ = λ2 = -6: ⎧ x + 2 x2 = 0 ⎛x ⎞ ⎛ − 2⎞ (4.3. For λ = λ1 = -1: we have that (4.λI) = 0 ⇔ −5−λ 2 = 0 ⇔ λ2 +7λ+ 6 = 0 2 −2−λ ⎡ λ = −1 ⇔ ⎢ .1) 2 ⎤ ⎡ x1 ⎤ ⎡0⎤ ⎧− 4 x1 + 2 x 2 = 0 ⎡− 5 + 1 ⎛ x1 ⎞ ⎛1⎞ (4.

⎢1 ⎥ ⎢0 ⎥ ⎣ ⎦ ⎣ ⎦ ⎡ − 2 ⎤ ⎡ 3⎤ Therefore. kX is also an eigenvector of A for all k ≠ 0. Eλ is a subspace of Mnx1 (K). Indeed.Nguyen Thieu Huy. Then.x2) goes over into the point Q(y1. SpK(A) = {5. Note: For λ ∈ SpK(A). ⎢0⎥ } ⎢ ⎥ ⎢ ⎥ ⎢ 0 ⎥ ⎢1 ⎥ ⎣ ⎦ ⎣ ⎦ 4.y2) given by 76 . Therefore.5. and therefore dim Eλ = n – rank (A . solve (A+3I)X=0 ⇔ X=x2 ⎢ ⎥ 3 ⎢ ⎥ 2 3. -3} ⎡1⎤ For λ1 = 5.λ2 +21λ +45 = 0 ⇔ λ1 = 5. Therefore. x For λ2 = λ3 = -3. E(-3)=Span{ ⎢ 1 ⎥ . It is easy to see that. since AX = λX we have that A(kX) = kAX = k(λX) = λ(kX).λI). λ2 = λ3 = -3. E5 = Span ⎨⎜ 2 ⎟⎬ . Lecture on Algebra equation of A. consider (A – 5I)X = 0 ⇔X =x1 ⎢ 2 ⎥ ∀x1 ⎢ ⎥ ⎢− 1⎥ ⎣ ⎦ ⎧⎛ 1 ⎞⎫ ⎪⎜ ⎟⎪ Therefore. the set Eλ = {X ∈ Mnx1(K) | AX = λX} is called the eigenspace of A corresponding to λ. An application: Stretching of an elastic membrane 2 An elastic membrane in the x1x2 plane with boundary x1 + x 2 = 1 is stretched so that 2 a point P(x1. for λ ∈ SpK(A). ⎛ − 2 2 − 3⎞ ⎜ ⎟ 1 − 6 ⎟ we have that Examples: For A = ⎜ 2 ⎜ −1 − 2 0 ⎟ ⎝ ⎠ | A .λI| = 0 ⇔ -λ3 . Remark: If X is an eigenvector of A corresponding to an eigenvalue λ ∈ K. so is kX for all k ≠ 0. ⎪⎜ −1⎟⎪ ⎩⎝ ⎠⎭ ⎡ 3⎤ ⎡ − 2⎤ ⎢ 1 ⎥ +x ⎢0⎥ ∀x .λI) > 0. the eigenspace Eλ coincides with Null(A .

X ≠ 0 ⇔ AX = λX. Then. that is.λI) = 0 ⇔ 5−λ 3 = 0 ⇔ λ1 = 8. 77 . Lecture on Algebra ⎡y ⎤ ⎡5 3⎤ ⎡ x1 ⎤ Y= ⎢ 1 ⎥ = AX = ⎢ ⎥⎢ ⎥ ⎣3 5⎦ ⎣x 2 ⎦ ⎣y 2 ⎦ Find the “principal directions”. E8 = Span ⎨⎜ ⎟⎬ ⎜ ⎟ ⎩⎝1⎠⎭ ⎧⎛1 ⎞⎫ For λ2 = 2. E2 = Span ⎨⎜ ⎟⎬ ⎜ ⎟ ⎩⎝ − 1⎠⎭ ⎛ 1⎞ ⎛1 ⎞ We choose u1 = ⎜ ⎟ .Nguyen Thieu Huy. directions of the position vector X ≠0 of P for which the direction of the position vector Y of Q is the same or exactly opposite (or. u2 = ⎜ ⎟ . Solve det (A . in other words. u1. X and Y are linearly dependent). respectively. This means that we have to find eigenvalues and eigenvectors of A. What shape does the boundary circle take under this deformation? Solution: We are looking for X such that Y = λX. λ2 =2 3 5−λ ⎧⎛1⎞⎫ For λ1 = 8. u2 give the principal directions. The ⎜ 1⎟ ⎜ − 1⎟ ⎝ ⎠ ⎝ ⎠ eigenvalues show that in the principal directions the membrane is stretched by factors 8 and 2.

λ2 . | T | = | A . x2. Proof: Suppose that the conclusion is false... Cr+1 not all zero. such that. 1 ⎞ ⎛ 1 1 ⎞ ⎛ 1 v1 = ⎜ . B = T-1AT (5. Proof: B = T-1AT ⇒ | B . Diagonalizations 5.λI | 5. Let r be the largest integer such that {x1.. the corresponding eigenvectors x1.3) 78 .xr+1} is linearly dependent.2. Lemma: Let λ1. Lecture on Algebra Accordingly... say.1) 5. v2 = ⎜ ⎟. | A. Similarities: An nxn matrix B is called similar to an nxn matrix A if there is an nxn – nonsingular matrix T such that..+Cr+1xr+1 = 0 (3. C1x1 + C2x2 + .. x2.Nguyen Thieu Huy.− ⎟ . xk form a linearly independent set. r < k and the set {x1. 2 32 2 64 4 V. there are scalars C1.v2 – coordinate system.. therefore have the same set of eigenvalues..1. if we choose the principal directions as directions of a new Cartesian v1. . ⎜x ⎟ ⎜y ⎟ ⎝ 2⎠ ⎝ 2⎠ ⎛ 1 ⎜ ⎛ z1 ⎞ ⎜ 2 ⇒ ⎜ ⎟= ⎜ z ⎟ ⎜ −1 ⎝ 2⎠ ⎜ ⎝ 2 ⇒ ⎛ 1 ⎜ ⎛ z1 ⎞ ⎜ 2 ⎜ ⎟= ⎜z ⎟ ⎜ −1 ⎝ 2⎠ ⎜ ⎝ 2 1 ⎞ ⎟ 2 ⎟⎛ y1 ⎞ ⎜ ⎟ 1 ⎟⎜ y 2 ⎟ ⎝ ⎠ ⎟ 2⎠ 8 ⎞ x2 ⎟ 2 ⎟ 2 ⎟ x2 ⎟ 2 ⎠ 1 ⎞ ⎛ 8 x + ⎜ ⎟ 5 3 x ⎛ ⎞⎛ 1 ⎞ ⎜ 2 1 2 ⎟⎜ ⎟⎜ ⎟ = 1 ⎟⎜ 3 5 ⎟⎜ x 2 ⎟ ⎜ − 2 ⎝ ⎠⎝ ⎠ x1 ⎜ ⎟ 2⎠ ⎝ 2 2 z1 z 2 z2 z2 2 + 2 = 2(x1 + x 2 ) = 2 ⇔ 1 + 2 = 1 and we obtain a new shape as an ellipse.λI | . then B and A have the same characteristic polynomial. Then.. 2⎠ ⎝ 2 2⎠ ⎝ 2 ⎛ x1 ⎞ ⎛ y1 ⎞ Then ⎜ ⎟ = A⎜ ⎟ .xr} is linearly independent... Theorem: If B is similar to A.λI | = | T-1AT – T-1λIT | = | T-1(A . C2.λI)T | = | T-1 |. .3. λk be distinct eigenvalues of an nxn matrix A. Then. This means that....

4. ⎢ ⎥ ⎣λ=2 ⎢1 1 0 ⎥ ⎣ ⎦ ⎛ 1⎞ ⎛1⎞ ⎛1⎞ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ and A has 3 linearly independent vectors u1= ⎜1⎟ . Let C1. Lecture on Algebra ⇒λr+1 ∑ C i x i = 0 and A( ∑ C i x i ) = 0 i =1 i =1 r +1 r +1 ⎧ r +1 ⎪∑ λ r +1C i x i = 0 r ⎪ Then ⎨ i =1 1 ⇒ ∑ (λ r +1 − λ i )C i x i = 0 r+ i =1 ⎪ ∑λ C x = 0 i i i ⎪ i =1 ⎩ Since x1. Cr+1xr+1 = 0 Therefore Cr+1 = 0 because xr+1≠ 0.. 5.λI | = (λ+1)2(λ-2) = 0 ⇔ ⎢ .3).r.. x2.. ⎛ λ1 ⎜ ⎜0 -1 Proof: If A is diagonalizable.a diagonal O M ⎟ ⎟ L λn ⎟ ⎠ 79 . Cn]. Cn be columns of T.2. e. that is. | A .xr are linearly independent. T AT = ⎜ M ⎜ ⎜0 ⎝ matrix. Note: There are nxn matrices which do not have n distinct eigenvalues but they still have n linearly independent eigenvectors.... Theorem: If an nxn matrix A has n distinct eigenvalues.. T = [C1C2..λi)Ci = 0 ∀ i = 1. then it has n linearly independent vectors. ⎡0 1 1 ⎤ ⎡λ = −1 A = ⎢1 0 1⎥ . 5. 0 λ2 M 0 L 0⎞ ⎟ L 0⎟ . Then A is diagonalizable if and only if A has n linearly independent eigenvectors. u3= ⎜ 0 ⎟ ⎜ 1⎟ ⎜1⎟ ⎜ − 1⎟ ⎝ ⎠ ⎝ ⎠ ⎝ ⎠ 5. then ∃T... A is similar to a diagonal matrix.. . u2= ⎜ − 1⎟ .5... C2.. Theorem: Let A be an nxn matrix.2.r it follows that Ci = 0 ∀i = 1.. we have that (λr+1 . By (3. that is. Cr+1 are zero. Definition: A square nxn matrix A is said to be diagonalizable if there is a nonsingular nxn matrix T so that T-1AT is diagonal..Nguyen Thieu Huy.6.g. This contradicts with the fact that not all C1..

λn... ⎛ λ1 ⎜ ⎜0 Set T = [ C1 C2 . Conversely.. i = 1.... We have the following algorithm of diagonalization of nxn matrix A..Cn... n.. Cn]..λn Step 2: For each λi solve (A .7. Then.d) 5... Step 3: Let u1. Then.. then come to next step.. By theorem 5...un..... Clearly.un be n linearly independent eigenvectors of A found in Step 2. 80 . If A has n linearly independent eigenvectors. C2.. set T = [u1 u2.. Step 1: Solve the characteristic equation to find all eigenvalues λ1. AT = T ⎜ M ⎜ ⎜0 ⎝ ⎛ λ1 ⎜ ⎜0 -1 Therefore T AT= ⎜ M ⎜ ⎜0 ⎝ 0 0 λ2 M 0 L 0⎞ ⎟ L 0⎟ O M ⎟ ⎟ L λn ⎟ ⎠ λ2 M 0 L 0⎞ ⎟ L 0⎟ is a diagonal matrix. If A has less than n linearly independent eigenvectors. C2.6 ⎛ λ1 ⎜ ⎜0 -1 we have that T AT = ⎜ M ⎜ ⎜0 ⎝ 0 λ2 M 0 L 0⎞ ⎟ L 0⎟ where ui is the eigenvectors corresponding to the O M ⎟ ⎟ L λn ⎟ ⎠ eigenvalue λi. the process of finding of T such that T-1AT is a diagonal matrix.2. A has n linearly independent eigenvectors C1. O M ⎟ ⎟ L λn ⎟ ⎠ (q. λ2.un] (that is columns of T are u1..e.. is called the diagonalization of A. respectively. Cn corresponding to eigenvalues λ1.. C1.. then conclude that A can not be diagonalized. C2. respectively.Nguyen Thieu Huy... u2 . Obviously.. Lecture on Algebra ⎛ λ1 ⎜ ⎜0 Then AT = T ⎜ M ⎜ ⎜0 ⎝ 0 λ2 M 0 L 0⎞ ⎟ L 0⎟ ⇒ O M ⎟ ⎟ L λn ⎟ ⎠ AC1 = λ1C1 AC 2 = λ 2 C 2 M AC n = λ n C n ⇒ A has n eigenvectors C1.u2.λiI)X = 0 to find all the linearly independent eigenvectors of A. respectively). Cn are linearly independent since T is nonsingular. λ2. Definition: Let A be a diagonalizable matrix..

Setting now T = ⎜ − 1 0 1⎟ we obtain that ⎜ 0 − 1 1⎟ ⎠ ⎝ ⎛ − 1 0 0⎞ ⎜ ⎟ T AT = ⎜ 0 − 1 0 ⎟ finishing the diagonalization of A. Definition: A linear mapping F: V→ V(from V to itself) is called a linear operator (or transformation). ⎜x ⎟ ⎝ 3⎠ ⎛1⎞ ⎜ ⎟ There are two linearly independent eigenvectors u1 = ⎜ − 1⎟ . ⎜ 0 0 2⎟ ⎝ ⎠ -1 VI. Linear operators (transformations) 6.λI | = (λ+1)2(λ-2) = 0 ⇔ ⎢ 1 ⎣ λ3 = 2 ⎛ x1 ⎞ ⎜ ⎟ For λ1= λ2 = -1: Solve (A+I)X = 0 for X = ⎜ x 2 ⎟ we have x1+x2+x3 =0. 81 . u2.1. solve (A-2I)X = 0 for X = ⎜ x 2 ⎟ we obtain that ⎜x ⎟ ⎝ 3⎠ ⎧− 2 x 1 + x 2 + x 3 = 0 ⎛ x1 ⎞ ⎛1⎞ ⎜ ⎟ ⎜ ⎟ ⎪ ⎨ x1 − 2x 2 + x 3 = 0 ⇔ ⎜ x 2 ⎟ = x1 ⎜ 1 ⎟ . There is one linearly independent eigenvector ⎜x ⎟ ⎜1⎟ ⎪ x + x − 2x = 0 2 3 ⎝ ⎠ ⎝ 3⎠ ⎩ 1 ⎛1⎞ ⎜ ⎟ ⎜ 0 ⎟ corresponding to ⎜ − 1⎟ ⎝ ⎠ ⎛ 1⎞ ⎜ ⎟ u3 = ⎜1⎟ corresponding to the eigenvector λ3 = 2. ⎢ ⎥ ⎢1 1 0 ⎥ ⎣ ⎦ ⎡λ = λ 2 = −1 Characteristic equation: | A .Nguyen Thieu Huy. ⎛ x1 ⎞ ⎜ ⎟ For λ3 = 2. Therefore. Lecture on Algebra ⎡0 1 1 ⎤ Example: A = ⎢1 0 1⎥ . u3. and u2 = ⎜0⎟ ⎝ ⎠ the eigenvector -1. A has 3 linearly independent ⎜ 1⎟ ⎝ ⎠ 1 1⎞ ⎛ 1 ⎟ ⎜ eigenvectors u1.

2) in Section 3. ii) F is injective. if A is an nxn square matrix satisfying A[x]S = [Fx]S for all x ∈ V.z) = (2x – 3y+z. By Theorem 2. Therefore. and F: V →V be a linear operator. Then.3 we have that [F]S [x]S = [Fx]S ∀ x ∈ V. Theorem: Let V be a vector space of finite dimension and F: V →V be a linear operator. we obtain that two matrices A and B represent the same linear operator F if and only if they are similar. we can construct the matrix [F ]S .Nguyen Thieu Huy. i) F is nonsingular.0). denoted by [F]S = [F ]S . 2x –y – 5z).0). (0. Lecture on Algebra 6. 6. This matrix is called the matrix representation of F corresponding S to S. 82 .2. by definition 3. and Corollary 3. (0. Linear operators and matrices: Let V a vector space of finite dimension. and let S and U be two different bases of V. Change of bases: Let F: V →V be an operator where V is of n–dimension. Let S be the ⎛2 − 3 1 ⎞ ⎜ ⎟ usual basis in R . Then. we have the following definition of eigenvectors and eigenvalues of an operator T: V →V where V is a vector space over K.1.0. we have that B = P-1AP (this follows from the formula (3.x +5y -3z. S = {(1.8 we obtain the following results on linear operators. [F]S = ⎜ 1 5 − 3 ⎟ ⎜ 2 − 1 − 5⎟ ⎝ ⎠ 3 6.3.1)}. B = [F]U and supposing that P is the change–of–basis matrix from S to U. 6. S By Theorem 3.0.4.5. Let S be a basis of V. Eigenvectors and eigenvalues of operators Similarly to square matrices.4). the following assertions are equivalent.y. Then. iv) F is bijective. Definition: A linear operator F: V →V is called nonsingular if Ker F = {0}.1. Example: Let F: R3 → R3 . 6.2. Putting A = [F]S. F(x. then A = [F]S. iii) F is surjective. Conversely.6.

6.5. Definition: A linear operator F: V → V is called nonsingular if Ker F = {0}.

By Definition 3.1 and Corollary 3.8 we obtain the following results on linear operators.

Theorem: Let V be a vector space of finite dimension and F: V → V be a linear operator. Then, the following assertions are equivalent:
i) F is nonsingular;
ii) F is injective;
iii) F is surjective;
iv) F is bijective.

6.6. Eigenvectors and eigenvalues of operators: Similarly to square matrices, we have the following definition of eigenvectors and eigenvalues of an operator T: V → V, where V is a vector space over K.

Definition: A scalar λ ∈ K is called an eigenvalue of T if there exists a nonzero vector v ∈ V for which T(v) = λv. Then, every nonzero vector satisfying this relation is called an eigenvector of T corresponding to λ. The set of all eigenvalues of T in K is denoted by Sp_K T and is called the spectral set of T on K.

Example: Let f: R² → R², f(x, y) = (2x + y, 3y). Then,

λ ∈ Sp_R f ⇔ ∃ (x, y) ≠ (0, 0) such that f(x, y) = λ(x, y)
$$\Leftrightarrow \text{the system } \begin{cases}(2-\lambda)x + y = 0\\ (3-\lambda)y = 0\end{cases} \text{ has a nontrivial solution } (x, y) \neq (0, 0)$$
$$\Leftrightarrow \begin{vmatrix}2-\lambda & 1\\ 0 & 3-\lambda\end{vmatrix} = 0 \Leftrightarrow (2-\lambda)(3-\lambda) = 0 \Leftrightarrow \begin{cases}\lambda = 2\\ \lambda = 3\end{cases} \Leftrightarrow \lambda \in \mathrm{Sp}_\mathbb{R}[f]_S,$$
where
$$[f]_S = \begin{pmatrix}2 & 1\\ 0 & 3\end{pmatrix}$$
is the matrix representation of f with respect to the usual basis S of R².

Moreover, since f(v) = λv ⇔ [f(v)]_U = λ[v]_U ⇔ [f]_U[v]_U = λ[v]_U for any basis U of R², we can easily obtain that λ is an eigenvalue of f if and only if λ is an eigenvalue of [f]_U for any basis U of R². Then, by the same arguments we can deduce the following theorem.

Theorem: Let V be a vector space of finite dimension and F: V → V be a linear operator. Then, the following assertions are equivalent:
i) λ ∈ K is an eigenvalue of F;
ii) (F − λI) is a singular operator, that is, Ker(F − λI) ≠ {0};
iii) λ is an eigenvalue of the matrix [F]_S for any basis S of V.

6.7. Example: Let F: R³ → R³ be defined by F(x, y, z) = (y + z, x + z, x + y), and let S be the usual basis of R³. Then,
$$[F]_S = \begin{pmatrix}0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0\end{pmatrix} \quad\text{and}\quad \mathrm{Sp}_\mathbb{R} F = \mathrm{Sp}_\mathbb{R}[F]_S = \{-1, 2\}.$$
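These two spectral sets are easy to confirm numerically (an illustrative sketch, computing the eigenvalues of the matrix representations):

```python
import numpy as np

# Eigenvalues of an operator coincide with those of its matrix representation.
f_S = np.array([[2., 1.], [0., 3.]])
print(np.linalg.eigvals(f_S))                 # [2. 3.]  ->  Sp_R f = {2, 3}

F_S = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
print(np.round(np.linalg.eigvals(F_S), 10))   # -1 (twice) and 2  ->  Sp_R F = {-1, 2}
```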

Remark: Let T: V → V be an operator on a finite-dimensional space V, and let S be a basis of V. Then,

v ∈ V is an eigenvector of T ⇔ [v]_S is an eigenvector of [T]_S.

This equivalence follows from the fact that [T]_S[v]_S = [Tv]_S. Since, in finite-dimensional spaces, we can replace the action of an operator T by that of its matrix representation, we can reduce the finding of eigenvectors and eigenvalues of an operator T to that of its matrix representation [T]_S for any basis S of V.

6.8. Diagonalization of a linear operator:

Definition: The operator T: V → V (where V is a finite-dimensional vector space) is said to be diagonalizable if there is a basis S of V such that [T]_S is a diagonal matrix. The process of finding S is called the diagonalization of T.

We thus have the following theorem, whose proof is left as an exercise.

Theorem: Let V be a vector space of finite dimension with dim V = n, and let T: V → V be a linear operator. Then, the following assertions are equivalent:
i) T is diagonalizable;
ii) T has n linearly independent eigenvectors;
iii) there is a basis of V which consists of n eigenvectors of T.

Example: Consider the above example T: R³ → R³ defined by T(x, y, z) = (y + z, x + z, x + y). To diagonalize T we choose any basis of R³, say, the usual basis S. Then, we write
$$[T]_S = \begin{pmatrix}0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0\end{pmatrix}.$$
The eigenvalues of T coincide with the eigenvalues of [T]_S, and they are easily computed by solving the characteristic equation
$$\det([T]_S - \lambda I) = \begin{vmatrix}-\lambda & 1 & 1\\ 1 & -\lambda & 1\\ 1 & 1 & -\lambda\end{vmatrix} = 0 \Leftrightarrow (\lambda+1)^2(\lambda-2) = 0.$$

Next, for λ = −1, we solve ([T]_S − λI)X = 0 ⇔ ([T]_S + I)X = 0, that is,
$$\begin{cases}x_1 + x_2 + x_3 = 0\\ x_1 + x_2 + x_3 = 0\\ x_1 + x_2 + x_3 = 0\end{cases}$$
for X = (x_1, x_2, x_3)ᵀ corresponding to v = (x_1, x_2, x_3). Using the fact that v ∈ V is an eigenvector of T if and only if [v]_S is an eigenvector of [T]_S, we have that there are two linearly independent eigenvectors corresponding to λ = −1; these may be chosen as v_1 = (1, −1, 0) and v_2 = (1, 0, −1).

For λ = 2, we solve ([T]_S − 2I)X = 0:
$$\begin{cases}-2x_1 + x_2 + x_3 = 0\\ x_1 - 2x_2 + x_3 = 0\\ x_1 + x_2 - 2x_3 = 0\end{cases} \Leftrightarrow (x_1, x_2, x_3) = x_1(1, 1, 1) \quad \forall x_1.$$
Thus, there is one linearly independent eigenvector corresponding to λ = 2, which may be chosen as v_3 = (1, 1, 1).

Clearly, v_1, v_2, v_3 are linearly independent, and the set U = {v_1, v_2, v_3} is a basis of R³ for which the matrix representation [T]_U is diagonal.

Chapter 7: Euclidean Spaces

I. Inner product spaces

1.1. Definition: Let V be a vector space over R. Suppose to each pair of vectors u, v ∈ V there is assigned a real number, denoted by <u, v>. Then, we obtain a function V × V → R, (u, v) ↦ <u, v>. This function is called a real inner product on V if it satisfies the following axioms:

(I1): <au + bw, v> = a<u, v> + b<w, v> ∀ u, v, w ∈ V and a, b ∈ R (Linearity);
(I2): <u, v> = <v, u> ∀ u, v ∈ V (Symmetry);
(I3): <u, u> ≥ 0 ∀ u ∈ V, and <u, u> = 0 if and only if u = O, the null vector of V (Positive definiteness).

The vector space V with an inner product is called a (real) Inner Product Space (we write IPS to stand for the phrase "Inner Product Space").

Note: a) The axiom (I1) is equivalent to the two conditions
1) <u_1 + u_2, v> = <u_1, v> + <u_2, v> ∀ u_1, u_2, v ∈ V;
2) <ku, v> = k<u, v> ∀ u, v ∈ V and k ∈ R.

b) Using (I1) and (I2) we obtain that <u, αv_1 + βv_2> = <αv_1 + βv_2, u> = α<v_1, u> + β<v_2, u> = α<u, v_1> + β<u, v_2> ∀ α, β ∈ R and u, v_1, v_2 ∈ V.

Examples: 1) Take V = R^n with u = (u_1, u_2, ..., u_n), w = (w_1, w_2, ..., w_n), v = (v_1, v_2, ..., v_n), and let an inner product be defined by
$$\langle (u_1, u_2, \dots, u_n), (v_1, v_2, \dots, v_n)\rangle = \sum_{i=1}^n u_i v_i.$$
Then, we can check:
$$\text{(I1): } \langle au + bw, v\rangle = \sum_{i=1}^n (au_i + bw_i)v_i = a\sum_{i=1}^n u_i v_i + b\sum_{i=1}^n w_i v_i = a\langle u, v\rangle + b\langle w, v\rangle;$$

$$\text{(I2): } \langle u, v\rangle = \sum_{i=1}^n u_i v_i = \sum_{i=1}^n v_i u_i = \langle v, u\rangle;$$
$$\text{(I3): } \langle u, u\rangle = \sum_{i=1}^n u_i^2 \ge 0, \text{ and } \langle u, u\rangle = 0 \Leftrightarrow \sum_{i=1}^n u_i^2 = 0 \Leftrightarrow u_1 = u_2 = \dots = u_n = 0 \Leftrightarrow u = (0, 0, \dots, 0),$$
the null vector of R^n. Therefore, <., .> is an inner product, making R^n an IPS. This inner product is called the usual scalar product (or dot product) on R^n.

2) Let C[a,b] = {f: [a,b] → R | f is continuous} be the vector space of all real, continuous functions defined on [a,b] ⊂ R. Then, one can show that the assignment
$$\langle f, g\rangle = \int_a^b f(x)g(x)\,dx \quad \forall f, g \in C[a,b]$$
is an inner product on C[a,b], making C[a,b] an IPS.

3) Consider M_{m×n}(R), the vector space of all real matrices of size m×n. Then, the assignment <A, B> = Tr(AᵀB), where Tr([c_ij]) = Σ_{i=1}^n c_ii for an n-square matrix [c_ij], is an inner product on M_{m×n}(R). It is called the usual inner product on M_{m×n}(R).

Specially, when n = 1 we have that (M_{m×1}(R), <., .>) is an IPS with
$$\langle X, Y\rangle = X^TY = \sum_{i=1}^m x_i y_i \quad\text{for } X = \begin{pmatrix}x_1\\ \vdots\\ x_m\end{pmatrix},\; Y = \begin{pmatrix}y_1\\ \vdots\\ y_m\end{pmatrix}.$$
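A quick numerical check of example 3) (an illustrative sketch in Python; Tr(AᵀB) coincides with the sum of entrywise products, which makes the axioms easy to verify):

```python
import numpy as np

def inner(A, B):
    """Usual inner product on M_{m x n}(R): <A, B> = Tr(A^T B)."""
    return np.trace(A.T @ B)

A = np.array([[1., 2.], [3., 4.], [5., 6.]])
B = np.array([[0., 1.], [1., 0.], [2., 2.]])
print(inner(A, B))             # 27.0
print(np.sum(A * B))           # 27.0: the same value, as a sum of entrywise products
print(inner(A, A) >= 0)        # True: positive definiteness, <A, A> = sum of squares
```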

1.2. Definition: An Inner Product Space of finite dimension is called a Euclidean space.

1.3. Examples of Euclidean spaces:
1) (R^n, <., .>), where <., .> is the usual scalar product;
2) (M_{m×n}(R), <., .>), where <., .> is the usual inner product.

Remarks: 1) <O, u> = 0 for all u ∈ V, because <O, u> = <0·O, u> = 0<O, u> = 0.
2) If u ≠ O, then <u, u> > 0. Therefore, <u, u> = 0 ⇒ u = O.

II. Length (or norm) of vectors

2.1. Definition: Let (V, <., .>) be an IPS. For each u ∈ V we define
$$\|u\| = \sqrt{\langle u, u\rangle}$$
and call it the length (or norm) of u. We now justify this definition by proving the following properties of the length of a vector.

2.2. Proposition: The length of vectors in V has the following properties:
1) ||u|| ≥ 0 ∀ u ∈ V, and ||u|| = 0 ⇔ u = O;
2) ||λu|| = |λ| ||u|| ∀ λ ∈ R, u ∈ V;
3) ||u + v|| ≤ ||u|| + ||v|| ∀ u, v ∈ V.

Proof: The proofs of (1) and (2) are straightforward. We prove (3). In fact,

(3) ⇔ ||u + v||² ≤ ||u||² + ||v||² + 2||u||·||v||
⇔ <u + v, u + v> ≤ <u, u> + <v, v> + 2·√(<u, u><v, v>)
⇔ <u, u> + 2<u, v> + <v, v> ≤ <u, u> + <v, v> + 2·√(<u, u><v, v>)
⇔ <u, v> ≤ √(<u, u><v, v>).

This last inequality is a consequence of the following Cauchy–Schwarz inequality:

(C-S) <u, v>² ≤ <u, u><v, v> ∀ u, v ∈ V.

We now prove (C-S). Indeed, for all t ∈ R we have that <tu + v, tu + v> ≥ 0. By the linearity of the inner product, this is equivalent to
t²<u, u> + 2t<u, v> + <v, v> ≥ 0 ∀ t ∈ R.
If u = O, the inequality (C-S) is obvious. If u ≠ O, then, regarding the left-hand side as a quadratic polynomial in t which is nonnegative for all t, we must have Δ' ≤ 0, i.e., <u, v>² − <u, u><v, v> ≤ 0 ⇔ <u, v>² ≤ <u, u><v, v>. (q.e.d.)

2.3. Remarks: 1) If ||u|| = 1, then u is called a unit vector.
2) The non-negative real number d(u, v) = ||u − v|| is called the distance between u and v.

3) For nonzero vectors u, v ∈ V, the angle between u and v is defined to be the angle θ, 0 ≤ θ ≤ π, such that
$$\cos\theta = \frac{\langle u, v\rangle}{\|u\|\,\|v\|}.$$

4) In the space R^n with the usual scalar product <., .>, the Cauchy–Schwarz inequality (C-S) becomes
$$(x_1y_1 + x_2y_2 + \dots + x_ny_n)^2 \le (x_1^2 + x_2^2 + \dots + x_n^2)(y_1^2 + y_2^2 + \dots + y_n^2)$$
for x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_n) ∈ R^n, which is known as the Bunyakovskii inequality.

III. Orthogonality

Throughout this section, (V, <., .>) is an IPS.

3.1. Definition: The vectors u, v ∈ V are said to be orthogonal if <u, v> = 0, denoted by u ⊥ v (we also say that u is orthogonal to v). Clearly, if u ⊥ v, then the angle between u and v is π/2.

Example: Consider R³ with the usual scalar product. Then (1, 1, 0) ⊥ (1, −1, 0), because <(1, 1, 0), (1, −1, 0)> = 1·1 + 1·(−1) + 0·0 = 0.

Note: If u is orthogonal to every v ∈ V, then in particular <u, u> = 0 ⇒ u = O.
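The angle formula translates directly into code (a small sketch; the `clip` guards against rounding errors pushing the cosine slightly outside [−1, 1]):

```python
import numpy as np

def angle(u, v):
    """Angle between nonzero u, v via cos(theta) = <u,v> / (||u|| ||v||)."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

u = np.array([1., 1., 0.])
v = np.array([1., -1., 0.])
print(angle(u, v))           # pi/2 ~ 1.5708, since <u, v> = 0 (u and v are orthogonal)
```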

3.2. Definition: Let S be a subset of V. The orthogonal complement of S, denoted by S⊥ (read: S "perp"), consists of those vectors in V which are orthogonal to every vector of S, that is,
S⊥ = {v ∈ V | <v, u> = 0 ∀ u ∈ S}.
In particular, for a given u ∈ V, u⊥ = {u}⊥ = {v ∈ V | v ⊥ u}.

The following properties of S⊥ are easy to prove.

3.3. Proposition: Let V be an IPS. For S ⊂ V, S⊥ is a subspace of V and S⊥ ∩ S ⊂ {0}. Moreover, if S is a subspace of V, then S⊥ ∩ S = {0}.

Examples: 1) In R³ with the usual scalar product we can compute, for instance,
(1, 3, −4)⊥ = {(x, y, z) ∈ R³ | x + 3y − 4z = 0} = Span{(3, −1, 0), (4, 0, 1)} ⊂ R³.

2) Similarly,
$$\{(1, -2, 1), (-2, 1, 1)\}^\perp = \Big\{(x, y, z) \in \mathbb{R}^3 \;\Big|\; \begin{cases}x - 2y + z = 0\\ -2x + y + z = 0\end{cases}\Big\} = \mathrm{Span}\{(1, 1, 1)\}.$$

The concept of orthogonality leads to the following definition of orthogonal and orthonormal bases.

3.4. Definition: The set S ⊂ V is called orthogonal if each pair of vectors in S is orthogonal, and S is called orthonormal if S is orthogonal and each vector in S has unit length. To be more concrete, let S = {u_1, u_2, ..., u_k}. Then:
+) S is orthogonal ⇔ <u_i, u_j> = 0 ∀ i ≠ j, where 1 ≤ i, j ≤ k;
+) S is orthonormal ⇔ <u_i, u_j> = 0 if i ≠ j and <u_i, u_i> = 1, where 1 ≤ i, j ≤ k.

3.5. Theorem: Suppose that S is an orthogonal set of nonzero vectors in V. Then, S is linearly independent.

Proof: Let S = {u_1, u_2, ..., u_k} with u_i ≠ 0 ∀ i and <u_i, u_j> = 0 ∀ i ≠ j. Suppose λ_1, λ_2, ..., λ_k ∈ R are such that Σ_{i=1}^k λ_iu_i = 0. It follows that, for fixed j ∈ {1, 2, ..., k},
$$0 = \Big\langle \sum_{i=1}^k \lambda_i u_i,\, u_j\Big\rangle = \sum_{i=1}^k \lambda_i \langle u_i, u_j\rangle = \lambda_j \langle u_j, u_j\rangle.$$
Since <u_j, u_j> ≠ 0, we must have λ_j = 0, and this is true for any j ∈ {1, 2, ..., k}. Then, λ_1 = λ_2 = ... = λ_k = 0, yielding that S is linearly independent.

3.6. Definition: A basis S of V is called an orthogonal (orthonormal) basis if S is an orthogonal (orthonormal, respectively) set of vectors. We write ON-basis to stand for "orthonormal basis".

Corollary: Let S be an orthogonal set of n nonzero vectors in a Euclidean space V with dim V = n. Then, S is an orthogonal basis of V.

Remark: Let S = {e_1, e_2, ..., e_n} be an ON-basis of V and u, v ∈ V. Then, the coordinate representation of the inner product in V can be written as

$$\langle u, v\rangle = \Big\langle \sum_{i=1}^n u_i e_i,\, \sum_{j=1}^n v_j e_j\Big\rangle = \sum_{i=1}^n\sum_{j=1}^n \langle e_i, e_j\rangle u_i v_j = \sum_{i=1}^n u_i v_i = [u]_S^T [v]_S$$
(since <e_i, e_j> = 0 if i ≠ j and <e_i, e_i> = 1).

3.7. Theorem: Suppose S = {u_1, u_2, ..., u_n} is an orthogonal basis of V and v ∈ V. Then,
$$v = \sum_{i=1}^n \frac{\langle v, u_i\rangle}{\langle u_i, u_i\rangle}\, u_i.$$
In other words, the coordinates of v with respect to S are
$$(v)_S = \Big(\frac{\langle v, u_1\rangle}{\langle u_1, u_1\rangle},\, \frac{\langle v, u_2\rangle}{\langle u_2, u_2\rangle},\, \dots,\, \frac{\langle v, u_n\rangle}{\langle u_n, u_n\rangle}\Big).$$

Proof: Let v = Σ_{k=1}^n λ_ku_k. We now determine λ_k for k = 1, 2, ..., n. Taking the inner product with u_j for fixed j ∈ {1, 2, ..., n}, we have
$$\langle v, u_j\rangle = \Big\langle \sum_{k=1}^n \lambda_k u_k,\, u_j\Big\rangle = \sum_{k=1}^n \lambda_k\langle u_k, u_j\rangle = \lambda_j\langle u_j, u_j\rangle,$$
because <u_k, u_j> = 0 if k ≠ j. Therefore, λ_j = <v, u_j>/<u_j, u_j> for all j ∈ {1, 2, ..., n}. (q.e.d.)

3.8. Pythagorean theorem: Let u ⊥ v. Then, we have that ||u + v||² = ||u||² + ||v||².

Proof: ||u + v||² = <u + v, u + v> = <u, u> + 2<u, v> + <v, v> = ||u||² + ||v||². (q.e.d.)

Example: Let V = C[−π, π] with
$$\langle f, g\rangle = \int_{-\pi}^{\pi} f(t)g(t)\,dt \quad\text{for all } f, g \in V.$$
Consider S = {1, cos t, sin t, cos 2t, sin 2t, ..., cos nt, sin nt, ...}. Due to the fact that
$$\int_{-\pi}^{\pi}\cos nt\cos mt\,dt = 0 \;\forall m \neq n,\quad \int_{-\pi}^{\pi}\sin nt\sin mt\,dt = 0 \;\forall m \neq n,\quad \int_{-\pi}^{\pi}\cos nt\sin mt\,dt = 0 \;\forall m, n,$$
we obtain that S is an orthogonal set.

Example: Let V = R³ with the usual scalar product <., .>, and let S = {(1, 2, 1), (2, 1, −4), (3, −2, 1)}. Then, S is an orthogonal basis of R³. Let v = (3, 5, −4). To compute the coordinates of v with respect to S, we just have to compute

λ_1 = <(1, 2, 1), (3, 5, −4)> / ||(1, 2, 1)||² = 9/6 = 3/2,
λ_2 = <(2, 1, −4), (3, 5, −4)> / ||(2, 1, −4)||² = 27/21 = 9/7,
λ_3 = <(3, −2, 1), (3, 5, −4)> / ||(3, −2, 1)||² = −5/14.

Therefore, (v)_S = (3/2, 9/7, −5/14).

Note: If S = {u_1, u_2, ..., u_n} is an ON-basis of V, then for v ∈ V we have that
$$v = \sum_{i=1}^n \langle v, u_i\rangle u_i.$$
In other words, (v)_S = (<v, u_1>, <v, u_2>, ..., <v, u_n>).
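The coordinate formula of Theorem 3.7 is easy to check numerically (an illustrative sketch on the example just computed):

```python
import numpy as np

S = [np.array([1., 2., 1.]), np.array([2., 1., -4.]), np.array([3., -2., 1.])]
v = np.array([3., 5., -4.])

# Coordinates with respect to an orthogonal basis: lambda_i = <v,u_i>/<u_i,u_i>.
coords = [np.dot(v, u) / np.dot(u, u) for u in S]
print(coords)      # [1.5, 1.2857... = 9/7, -0.3571... = -5/14]
assert np.allclose(sum(c * u for c, u in zip(coords, S)), v)   # v is recovered
```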

3.9. Gram–Schmidt orthonormalization process: As shown above, an ON-basis (or orthogonal basis) has many important properties. Therefore, it is natural to pose the question: does an ON-basis exist for every Euclidean space? Precisely, we have the following problem.

Problem: Let {v_1, v_2, ..., v_n} be a basis of a Euclidean space V. Find an ON-basis {e_1, e_2, ..., e_n} of V such that Span{e_1, e_2, ..., e_k} = Span{v_1, v_2, ..., v_k} for all k = 1, 2, ..., n.

Solution: Firstly, we find an orthogonal basis {u_1, u_2, ..., u_n} of V satisfying

(*) Span{u_1, u_2, ..., u_k} = Span{v_1, v_2, ..., v_k} for all k = 1, 2, ..., n.

To do that, let us start by putting u_1 = v_1. Then, we look for u_2 in the form u_2 = v_2 + α_21u_1, where the real number α_21 is chosen such that <u_2, u_1> = 0. This is equivalent to α_21 = −<v_2, u_1>/<u_1, u_1>, and hence
$$u_2 = v_2 - \frac{\langle v_2, u_1\rangle}{\langle u_1, u_1\rangle}\, u_1.$$
It is straightforward to see that Span{u_1, u_2} = Span{v_1, v_2}. Proceeding in this way, we find u_k in the form
$$u_k = v_k - \frac{\langle v_k, u_1\rangle}{\langle u_1, u_1\rangle}u_1 - \frac{\langle v_k, u_2\rangle}{\langle u_2, u_2\rangle}u_2 - \dots - \frac{\langle v_k, u_{k-1}\rangle}{\langle u_{k-1}, u_{k-1}\rangle}u_{k-1} \quad\text{for all } k = 2, \dots, n,$$
yielding the set of vectors {u_1, u_2, ..., u_n} satisfying (*).

Secondly, putting e_1 = u_1/||u_1||, e_2 = u_2/||u_2||, ..., e_n = u_n/||u_n||, we obtain an ON-basis {e_1, e_2, ..., e_n} of V satisfying Span{e_1, e_2, ..., e_k} = Span{v_1, v_2, ..., v_k} for all k = 1, 2, ..., n. The above process is called the Gram–Schmidt orthonormalization process.

3.10. Corollary: 1) Every Euclidean space has at least one orthonormal basis.
2) Let S = {u_1, u_2, ..., u_k} be an orthonormal set of vectors in a Euclidean space V with dim V = n > k. Then, we can extend S to an ON-basis U = {u_1, ..., u_k, u_{k+1}, ..., u_n} of V satisfying <u_i, u_j> = 0 ∀ i ≠ j.

Example: Let V = R³ with the usual scalar product <., .>, and let S = {v_1 = (1, 1, 1), v_2 = (0, 1, 1), v_3 = (0, 0, 1)} be a basis of V. We implement the Gram–Schmidt orthonormalization process as follows:

u_1 = v_1 = (1, 1, 1);

u_2 = v_2 − (<v_2, u_1>/<u_1, u_1>) u_1 = (0, 1, 1) − (2/3)(1, 1, 1) = (−2/3, 1/3, 1/3);

u_3 = v_3 − (<v_3, u_1>/<u_1, u_1>) u_1 − (<v_3, u_2>/<u_2, u_2>) u_2 = (0, 0, 1) − (1/3)(1, 1, 1) − (1/2)(−2/3, 1/3, 1/3) = (0, −1/2, 1/2).

Now, putting
$$e_1 = \frac{u_1}{\|u_1\|} = \Big(\frac{1}{\sqrt3}, \frac{1}{\sqrt3}, \frac{1}{\sqrt3}\Big),\quad e_2 = \frac{u_2}{\|u_2\|} = \Big(-\frac{2}{\sqrt6}, \frac{1}{\sqrt6}, \frac{1}{\sqrt6}\Big),\quad e_3 = \frac{u_3}{\|u_3\|} = \Big(0, -\frac{1}{\sqrt2}, \frac{1}{\sqrt2}\Big),$$
we obtain an ON-basis {e_1, e_2, e_3} satisfying Span{e_1} = Span{v_1}, Span{e_1, e_2} = Span{v_1, v_2}, and Span{e_1, e_2, e_3} = Span{v_1, v_2, v_3} = R³.
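The process transcribes almost line by line into numpy (a sketch of classical Gram–Schmidt; note that this textbook variant can be numerically unstable for ill-conditioned inputs):

```python
import numpy as np

def gram_schmidt(vectors):
    """Gram-Schmidt orthonormalization of a linearly independent family."""
    ortho = []
    for v in vectors:
        u = v.astype(float).copy()
        for w in ortho:                  # subtract projections onto u_1,...,u_{k-1}
            u -= (np.dot(v, w) / np.dot(w, w)) * w
        ortho.append(u)
    return [u / np.linalg.norm(u) for u in ortho]   # normalize: e_k = u_k / ||u_k||

basis = [np.array([1., 1., 1.]), np.array([0., 1., 1.]), np.array([0., 0., 1.])]
for e in gram_schmidt(basis):
    print(np.round(e, 4))
# (0.5774, 0.5774, 0.5774), (-0.8165, 0.4082, 0.4082), (0, -0.7071, 0.7071)
```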

IV. Projection and least square approximations

4.1. Theorem: Let V be a Euclidean space, and let W be a subspace of V with W ≠ {0}. Then, for each v ∈ V there exists a unique couple of vectors w_1 ∈ W and w_2 ∈ W⊥ such that v = w_1 + w_2.

Proof: Since W is a subspace of V, W has an ON-basis, say S = {u_1, u_2, ..., u_k}. By the Gram–Schmidt ON process, we extend S to an ON-basis {u_1, ..., u_k, u_{k+1}, ..., u_n} of V. Then, for v ∈ V we have the expression
$$v = \sum_{j=1}^n \langle v, u_j\rangle u_j = \sum_{j=1}^k \langle v, u_j\rangle u_j + \sum_{l=k+1}^n \langle v, u_l\rangle u_l.$$
Putting
$$w_1 = \sum_{j=1}^k \langle v, u_j\rangle u_j \quad\text{and}\quad w_2 = \sum_{l=k+1}^n \langle v, u_l\rangle u_l,$$
we have that w_1 ∈ W, because {u_1, u_2, ..., u_k} is a basis of W.

We now prove that w_2 ∈ W⊥. Taking any u ∈ W, we have that u = Σ_{i=1}^k <u, u_i> u_i. Therefore,
$$\langle w_2, u\rangle = \Big\langle \sum_{l=k+1}^n \langle v, u_l\rangle u_l,\; \sum_{i=1}^k \langle u, u_i\rangle u_i\Big\rangle = \sum_{l=k+1}^n \sum_{i=1}^k \langle v, u_l\rangle\langle u, u_i\rangle\langle u_l, u_i\rangle = 0,$$
because <u_l, u_i> = 0 for l = k+1, ..., n and i = 1, ..., k (since l ≠ i). Hence, w_2 ⊥ u for all u ∈ W; this means that w_2 ∈ W⊥.

We next prove that the expression v = w_1 + w_2 with w_1 ∈ W and w_2 ∈ W⊥ is unique. Indeed, let v = w_1' + w_2' be another expression such that w_1' ∈ W and w_2' ∈ W⊥. It follows that v = w_1 + w_2 = w_1' + w_2' ⇒ w_1 − w_1' = w_2' − w_2. Then, w_1 − w_1' ∈ W and also w_1 − w_1' = w_2' − w_2 ∈ W⊥. Hence, w_1 − w_1' ∈ W ∩ W⊥ = {0} ⇒ w_1 = w_1' and w_2' = w_2. Therefore, the expression v = w_1 + w_2 with w_1 ∈ W and w_2 ∈ W⊥ is unique. (q.e.d.)

4.2. Definition: 1) Let W be a nontrivial subspace of V. Since, for all v ∈ V, the expression v = w_1 + w_2 with w_1 ∈ W and w_2 ∈ W⊥ is unique, we write this relation as V = W ⊕ W⊥, and call V the direct sum of W and W⊥.

2) We denote by P the mapping P: V → W defined by P(v) = w_1 if v = w_1 + w_2, where w_1 ∈ W and w_2 ∈ W⊥. Then, P is called the orthogonal projection onto W.

Remarks: 1) Looking at the proof of Theorem 4.1, we obtain that, for an ON-basis S = {u_1, ..., u_k} of W, the orthogonal projection P onto W can be determined by
$$P: V \to W,\qquad P(v) = \sum_{i=1}^k \langle v, u_i\rangle u_i \quad\text{for all } v \in V.$$

2) For an orthogonal projection P: V → W we have that P is a linear operator satisfying the properties P² = P, Ker P = W⊥ and Im P = W.

4.3. Lemma: Let V be a Euclidean space, W be a subspace of V, and P: V → W be the orthogonal projection onto W. Then,
||v − u|| ≥ ||v − P(v)|| for all v ∈ V and u ∈ W,
and the equality happens if and only if u = P(v).

Proof: We start by computing
||v − u||² = ||v − P(v) + P(v) − u||².
Since v − P(v) ∈ W⊥ and P(v) − u ∈ W (because v = P(v) + (v − P(v)), where P(v) ∈ W and v − P(v) ∈ W⊥), we obtain, using the Pythagorean theorem, that
||v − P(v) + P(v) − u||² = ||v − P(v)||² + ||P(v) − u||² ≥ ||v − P(v)||².
Therefore, ||v − u|| ≥ ||v − P(v)||, and the equality happens if and only if u = P(v). (q.e.d.)

Example: Let V = R³ with the usual scalar product, W = Span{(1, −1, 0), (1, 1, 1)}, let P: V → W be the orthogonal projection onto W, and let v = (2, 1, 3). We now find P(v). To do so, we first find an ON-basis of W. This can be easily done by using the Gram–Schmidt process to obtain S = {u_1, u_2} with
$$u_1 = \Big(\frac{1}{\sqrt2}, -\frac{1}{\sqrt2}, 0\Big),\qquad u_2 = \Big(\frac{1}{\sqrt3}, \frac{1}{\sqrt3}, \frac{1}{\sqrt3}\Big).$$
Then, P(v) = <v, u_1> u_1 + <v, u_2> u_2 = (1/2, −1/2, 0) + (2, 2, 2) = (5/2, 3/2, 2).
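The projection formula of Remark 1) can be checked numerically (an illustrative sketch on the example just computed):

```python
import numpy as np

def project(v, onb):
    """Orthogonal projection P(v) = sum_i <v, u_i> u_i onto Span(onb),
    where onb is an orthonormal basis of the subspace W."""
    return sum(np.dot(v, u) * u for u in onb)

u1 = np.array([1., -1., 0.]) / np.sqrt(2)
u2 = np.array([1., 1., 1.]) / np.sqrt(3)
v = np.array([2., 1., 3.])

w1 = project(v, [u1, u2])
print(w1)                                        # [2.5 1.5 2. ]
print(np.dot(v - w1, u1), np.dot(v - w1, u2))    # both ~0: v - P(v) lies in W-perp
```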

4.4. Application: Least square approximation

For A ∈ M_{m×n}(R) and B ∈ M_{m×1}(R), consider the following problem.

Problem: Let AX = B have no solution, that is, B ∉ Colsp(A). Then, we pose the following question: which vector X will minimize the norm ||AX − B||? Such a vector, if it exists, is called a least square solution of the system AX = B.

Solution: We will use Lemma 4.3 to find the least square solution. To do that, we consider V = M_{m×1}(R) with the usual inner product <u, v> = uᵀv for all u and v ∈ V. Putting Colsp(A) = W, and taking into account that AX ∈ W for all X ∈ M_{n×1}(R), we obtain, by Lemma 4.3, that ||AX − B||² is smallest for such an X = X̃ that AX̃ = P(B), where P: V → W is the orthogonal projection onto W (since ||AX − B||² = ||B − AX||² ≥ ||B − P(B)||², and the equality happens if and only if AX = P(B)).

We now find X̃ such that AX̃ = P(B). To do so, we write AX̃ − B = P(B) − B ∈ W⊥. This is equivalent to

AX̃ − B ⊥ U for all U ∈ W = Colsp(A)
⇔ AX̃ − B ⊥ C_i ∀ i = 1, ..., n (where C_i is the i-th column of A)
⇔ <AX̃ − B, C_i> = 0 ∀ i = 1, ..., n
⇔ C_iᵀ(AX̃ − B) = 0 ∀ i = 1, ..., n
⇔ Aᵀ(AX̃ − B) = 0 ⇔ AᵀAX̃ − AᵀB = 0
⇔ AᵀAX̃ = AᵀB. (4.1)

Therefore, X̃ is a solution of the linear system (4.1).

Example: Consider the system
$$\begin{pmatrix}1 & 1 & 2\\ 1 & -1 & 0\\ 0 & 1 & 1\end{pmatrix}\begin{pmatrix}x_1\\ x_2\\ x_3\end{pmatrix} = \begin{pmatrix}1\\ 1\\ 2\end{pmatrix}.$$
Since r(A) = 2 < r(Ã) = 3 (where Ã is the augmented matrix), this system has no solution. We now find the vector X̃ which minimizes the norm ||AX − B||², where
$$A = \begin{pmatrix}1 & 1 & 2\\ 1 & -1 & 0\\ 0 & 1 & 1\end{pmatrix},\qquad B = \begin{pmatrix}1\\ 1\\ 2\end{pmatrix}.$$
To do so, we have to solve the system AᵀAX̃ = AᵀB, that is,
$$\begin{pmatrix}2 & 0 & 2\\ 0 & 3 & 3\\ 2 & 3 & 5\end{pmatrix}\begin{pmatrix}x_1\\ x_2\\ x_3\end{pmatrix} = \begin{pmatrix}2\\ 2\\ 4\end{pmatrix} \Leftrightarrow \begin{cases}x_1 + x_3 = 1\\ x_2 + x_3 = \dfrac{2}{3}\end{cases} \Leftrightarrow \begin{cases}x_1 = 1 - x_3\\ x_2 = \dfrac{2}{3} - x_3\\ x_3 \text{ is arbitrary.}\end{cases}$$
We thus obtain the least square solutions
$$\tilde X = \begin{pmatrix}1 - t\\ \dfrac{2}{3} - t\\ t\end{pmatrix} \quad \forall t \in \mathbb{R}.$$
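For comparison, numpy's least-squares routine solves the same problem (a sketch; `np.linalg.lstsq` returns one particular member of the family X̃ found above, namely the one of minimum norm):

```python
import numpy as np

A = np.array([[1., 1., 2.], [1., -1., 0.], [0., 1., 1.]])
B = np.array([1., 1., 2.])

# Least square solutions satisfy the normal equations A^T A x = A^T B.
x, residual, rank, sv = np.linalg.lstsq(A, B, rcond=None)
print(rank)                                  # 2 = r(A) < 3, so AX = B is unsolvable
print(x)                                     # one particular least square solution
print(np.allclose(A.T @ A @ x, A.T @ B))     # True: x satisfies (4.1)
```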

V. Orthogonal matrices and orthogonal transformations

Let V be a Euclidean space.

5.1. Definition: An n×n matrix A is said to be orthogonal if AᵀA = I = AAᵀ (i.e., A is nonsingular and A⁻¹ = Aᵀ).

Examples:
$$1)\; A = \begin{pmatrix}\cos\alpha & \sin\alpha\\ -\sin\alpha & \cos\alpha\end{pmatrix};\qquad 2)\; A = \begin{pmatrix}\frac{1}{\sqrt2} & \frac{1}{\sqrt2} & 0\\[2pt] -\frac{1}{\sqrt2} & \frac{1}{\sqrt2} & 0\\[2pt] 0 & 0 & 1\end{pmatrix}.$$

5.2. Proposition: Let S = {e_1, e_2, ..., e_n} and U = {u_1, u_2, ..., u_n} be two ON-bases of V, and let A be the change-of-basis matrix from S to U. Then, A is orthogonal. In other words, the change-of-basis matrix from an ON-basis to another ON-basis of V is an orthogonal matrix.

Proof: Say A = (a_ij), and put Aᵀ = (b_ij), where b_ij = a_ji. Then, compute AᵀA = (c_ij), where
$$c_{ij} = \sum_{k=1}^n b_{ik}a_{kj} = \sum_{k=1}^n a_{ki}a_{kj} = (a_{1i}\; a_{2i}\; \cdots\; a_{ni})\begin{pmatrix}a_{1j}\\ a_{2j}\\ \vdots\\ a_{nj}\end{pmatrix}$$

$$= \Big\langle \sum_{k=1}^n a_{ki}e_k,\; \sum_{l=1}^n a_{lj}e_l\Big\rangle = \langle u_i, u_j\rangle = \begin{cases}1 & \text{if } i = j,\\ 0 & \text{if } i \neq j.\end{cases}$$
Therefore, AᵀA = I. This means that A is orthogonal. (q.e.d.)

Example: Let V = R³ with the usual scalar product <., .>, let S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} be the usual basis of R³, and let
$$U = \Big\{\Big(\frac{1}{\sqrt2}, -\frac{1}{\sqrt2}, 0\Big),\; \Big(-\frac{1}{\sqrt6}, -\frac{1}{\sqrt6}, \frac{2}{\sqrt6}\Big),\; \Big(\frac{1}{\sqrt3}, \frac{1}{\sqrt3}, \frac{1}{\sqrt3}\Big)\Big\}$$
be another ON-basis of R³. Then, the change-of-basis matrix from S to U is
$$A = \begin{pmatrix}\frac{1}{\sqrt2} & -\frac{1}{\sqrt6} & \frac{1}{\sqrt3}\\[2pt] -\frac{1}{\sqrt2} & -\frac{1}{\sqrt6} & \frac{1}{\sqrt3}\\[2pt] 0 & \frac{2}{\sqrt6} & \frac{1}{\sqrt3}\end{pmatrix}.$$
Clearly, A is an orthogonal matrix.

5.3. Definition: Let (V, <., .>) be an IPS. Then, the linear transformation f: V → V is said to be orthogonal if <f(x), f(y)> = <x, y> for all x, y ∈ V.

The following theorem provides another criterion for the orthogonality of a linear transformation in Euclidean spaces.

5.4. Theorem: Let V be a Euclidean space, and let f: V → V be a linear transformation. Then, the following assertions are equivalent:
i) f is orthogonal;
ii) for any ON-basis S of V, the matrix representation [f]_S of f corresponding to S is orthogonal;
iii) ||f(x)|| = ||x|| for all x ∈ V.

Proof: Let S = {u_1, u_2, ..., u_n} be an ON-basis of V. Then, taking the coordinates with respect to this basis, we obtain
<f(x), f(y)> = [f(x)]_Sᵀ[f(y)]_S = ([f]_S[x]_S)ᵀ([f]_S[y]_S) = [x]_Sᵀ[f]_Sᵀ[f]_S[y]_S.
Therefore, we have that:

<f(x), f(y)> = <x, y> ∀ x, y ∈ V
⇔ [x]_Sᵀ[f]_Sᵀ[f]_S[y]_S = [x]_Sᵀ[y]_S ∀ x, y ∈ V
⇔ [u_i]_Sᵀ[f]_Sᵀ[f]_S[u_j]_S = [u_i]_Sᵀ[u_j]_S = {1 if i = j; 0 if i ≠ j} ∀ i, j ∈ {1, 2, ..., n}
⇔ [f]_Sᵀ[f]_S = I
⇔ [f]_S is an orthogonal matrix.

We thus obtain the equivalence (i) ⇔ (ii). We now prove the equivalence (i) ⇔ (iii).

(i) ⇒ (iii): Since <f(x), f(y)> = <x, y> holds true for all x, y ∈ V, simply taking x = y we obtain ||f(x)||² = ||x||², hence ||f(x)|| = ||x|| ∀ x ∈ V.

(iii) ⇒ (i): We have that ||f(x + y)||² = ||x + y||² for all x, y ∈ V. Then,
<f(x) + f(y), f(x) + f(y)> = <f(x + y), f(x + y)> = <x + y, x + y>
⇒ ||f(x)||² + 2<f(x), f(y)> + ||f(y)||² = ||x||² + 2<x, y> + ||y||²
⇒ <f(x), f(y)> = <x, y> ∀ x, y ∈ V. (q.e.d.)

5.5. Definition: Let A be a real n×n matrix. Then, A is said to be orthogonally diagonalizable if there is an orthogonal matrix P such that PᵀAP is a diagonal matrix. The process of finding such an orthogonal matrix P is called the orthogonal diagonalization of A.

Remark: If A is orthogonally diagonalizable, then there exists an orthogonal matrix P such that PᵀAP = D is diagonal ⇒ A = PDPᵀ ⇒ Aᵀ = (PDPᵀ)ᵀ = PDPᵀ = A. Therefore, A is symmetric. The converse is also true, as we accept in the following theorem.

5.6. Theorem: Let A be a real n×n matrix. Then, A is orthogonally diagonalizable if and only if A is symmetric.

The following lemma shows an important property of real symmetric matrices.

5.7. Lemma: Let A be a real symmetric matrix, let λ_1, λ_2 be two distinct eigenvalues of A, and let X_1, X_2 be eigenvectors corresponding to λ_1, λ_2, respectively. Then, X_1 ⊥ X_2 with respect to the usual inner product in M_{n×1}(R) (that is, <X_1, X_2> = X_1ᵀX_2 = 0).

Proof: In fact,
λ_1<X_1, X_2> = (λ_1X_1)ᵀX_2 = (AX_1)ᵀX_2 = X_1ᵀAᵀX_2 = X_1ᵀAX_2 = X_1ᵀλ_2X_2 = λ_2<X_1, X_2>.
Since λ_1 ≠ λ_2, this implies that <X_1, X_2> = 0. (q.e.d.)

Using Lemma 5.7, we have the following algorithm of orthogonal diagonalization of a symmetric real matrix A.

5.8. Algorithm of orthogonal diagonalization of a symmetric n×n matrix A:

Step 1: Find the eigenvalues of A by solving the characteristic equation det(A − λI) = 0.

Step 2: Find all the linearly independent eigenvectors of A, say X_1, X_2, ..., X_n.

Step 3: Use the Gram–Schmidt process to obtain an ON-basis {Y_1, Y_2, ..., Y_n} from {X_1, X_2, ..., X_n}. (Note: this Gram–Schmidt process can be conducted within each eigenspace, since two eigenvectors in distinct eigenspaces are already orthogonal due to Lemma 5.7.)

Step 4: Set P = [Y_1 Y_2 ... Y_n]. We obtain immediately that
$$P^TAP = \begin{pmatrix}\lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_n\end{pmatrix},$$
where λ_i is the eigenvalue corresponding to the eigenvector Y_i, i = 1, 2, ..., n.

Example: Let
$$A = \begin{pmatrix}0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0\end{pmatrix}.$$
To diagonalize A orthogonally, we follow Algorithm 5.8. Firstly, we compute the eigenvalues of A by solving
$$|A - \lambda I| = \begin{vmatrix}-\lambda & 1 & 1\\ 1 & -\lambda & 1\\ 1 & 1 & -\lambda\end{vmatrix} = 0 \Leftrightarrow (\lambda+1)^2(\lambda-2) = 0 \Leftrightarrow \lambda_1 = \lambda_2 = -1 \text{ or } \lambda_3 = 2.$$
Next, we compute the eigenvectors of A.

For λ_1 = λ_2 = −1: (A + I)X = 0 ⇔ x_1 + x_2 + x_3 = 0 for X = (x_1, x_2, x_3)ᵀ. Then, there are two linearly independent eigenvectors corresponding to λ = −1, for instance
X_1 = (1, −1, 0)ᵀ and X_2 = (0, −1, 1)ᵀ;
the eigenspace is E(−1) = Span{X_1, X_2}.

By the Gram–Schmidt process, we have u_1 = X_1 = (1, −1, 0)ᵀ. Then, we put
$$u_2 = X_2 - \frac{\langle X_2, u_1\rangle}{\langle u_1, u_1\rangle}u_1 = \begin{pmatrix}0\\ -1\\ 1\end{pmatrix} - \frac{1}{2}\begin{pmatrix}1\\ -1\\ 0\end{pmatrix} = \begin{pmatrix}-\tfrac12\\ -\tfrac12\\ 1\end{pmatrix}.$$
Normalizing, we obtain the two orthonormal eigenvectors corresponding to λ = −1:
$$Y_1 = \frac{u_1}{\|u_1\|} = \begin{pmatrix}\tfrac{1}{\sqrt2}\\ -\tfrac{1}{\sqrt2}\\ 0\end{pmatrix},\qquad Y_2 = \frac{u_2}{\|u_2\|} = \begin{pmatrix}-\tfrac{1}{\sqrt6}\\ -\tfrac{1}{\sqrt6}\\ \tfrac{2}{\sqrt6}\end{pmatrix}.$$

For λ_3 = 2: we solve (A − 2I)X = 0, that is,
$$\begin{cases}-2x_1 + x_2 + x_3 = 0\\ x_1 - 2x_2 + x_3 = 0\\ x_1 + x_2 - 2x_3 = 0\end{cases} \Leftrightarrow \begin{pmatrix}x_1\\ x_2\\ x_3\end{pmatrix} = x_1\begin{pmatrix}1\\ 1\\ 1\end{pmatrix}.$$

Therefore, there is only one linearly independent eigenvector corresponding to λ = 2; the eigenspace is E(2) = Span{(1, 1, 1)ᵀ}. Here, the Gram–Schmidt process amounts to normalizing, so we just have to put
$$Y_3 = \begin{pmatrix}\tfrac{1}{\sqrt3}\\ \tfrac{1}{\sqrt3}\\ \tfrac{1}{\sqrt3}\end{pmatrix}.$$
We now obtain the ON-basis {Y_1, Y_2, Y_3} of M_{3×1}(R), which consists of linearly independent eigenvectors of A. Then, we put
$$P = \begin{pmatrix}\frac{1}{\sqrt2} & -\frac{1}{\sqrt6} & \frac{1}{\sqrt3}\\[2pt] -\frac{1}{\sqrt2} & -\frac{1}{\sqrt6} & \frac{1}{\sqrt3}\\[2pt] 0 & \frac{2}{\sqrt6} & \frac{1}{\sqrt3}\end{pmatrix} \quad\text{to obtain}\quad P^TAP = \begin{pmatrix}-1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 2\end{pmatrix},$$
finishing the orthogonal diagonalization of A.
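In practice, the whole algorithm is available as a single library call (a sketch using numpy's symmetric eigensolver; the columns of the returned P may differ from the hand computation by sign or by ordering within an eigenspace):

```python
import numpy as np

A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])

# For a real symmetric matrix, eigh returns eigenvalues in ascending order
# and an orthogonal matrix of orthonormal eigenvectors (Steps 1-4 at once).
eigenvalues, P = np.linalg.eigh(A)
print(eigenvalues)                        # [-1. -1.  2.]
print(np.allclose(P.T @ P, np.eye(3)))    # True: P is orthogonal
print(np.round(P.T @ A @ P, 10))          # diag(-1, -1, 2)
```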

VI. Quadratic forms

6.1. Definition: Consider the space R^n. A quadratic form q on R^n is a mapping q: R^n → R defined by
$$q(x_1, x_2, \dots, x_n) = \sum_{1\le i\le j\le n} c_{ij}x_i x_j \quad\text{for all } (x_1, x_2, \dots, x_n) \in \mathbb{R}^n, \tag{6.1}$$
where the constants c_ij ∈ R are given.

Example: Let q: R³ → R be defined by
q(x_1, x_2, x_3) = x_1² − 2x_1x_2 − x_1x_3 + x_2² − 2x_2x_3 − x_3² ∀ (x_1, x_2, x_3) ∈ R³.
Then, q is a quadratic form on R³.

6.2. Definition: The quadratic form q on R^n is said to be in canonical form (or in diagonal form) if
q(x_1, x_2, ..., x_n) = c_11x_1² + c_22x_2² + ... + c_nnx_n² for all (x_1, x_2, ..., x_n) ∈ R^n.
(That is, q has no cross product terms x_ix_j with i ≠ j.)

In general, the quadratic form (6.1) can be expressed uniquely in the matrix form
q(x) = q(x_1, x_2, ..., x_n) = [x]ᵀA[x] for x = (x_1, x_2, ..., x_n) ∈ R^n,
where [x] = (x_1, x_2, ..., x_n)ᵀ is the coordinate vector of x with respect to the usual basis of R^n, and A = [a_ij] is a symmetric matrix with a_ij = a_ji = c_ij/2 for i ≠ j, and a_ii = c_ii for i = 1, 2, ..., n. This fact can be easily seen by directly computing the product of matrices:
$$q(x_1, x_2, \dots, x_n) = (x_1\; x_2\; \cdots\; x_n)\begin{pmatrix}a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nn}\end{pmatrix}\begin{pmatrix}x_1\\ x_2\\ \vdots\\ x_n\end{pmatrix} = [x]^TA[x].$$
The above symmetric matrix A is called the matrix representation of the quadratic form q with respect to the usual basis of R^n.
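For the example above, the matrix representation and the defining formula agree at every point, which is easy to spot-check (a small sketch):

```python
import numpy as np

# q(x1,x2,x3) = x1^2 - 2x1x2 - x1x3 + x2^2 - 2x2x3 - x3^2:
# a_ii = c_ii on the diagonal, a_ij = a_ji = c_ij / 2 off it.
A = np.array([[ 1.0, -1.0, -0.5],
              [-1.0,  1.0, -1.0],
              [-0.5, -1.0, -1.0]])

def q(x):
    return x @ A @ x          # q(x) = [x]^T A [x]

x = np.array([1., 2., 3.])
direct = x[0]**2 - 2*x[0]*x[1] - x[0]*x[2] + x[1]**2 - 2*x[1]*x[2] - x[2]**2
print(q(x), direct)           # both -23.0
```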

6.3. Change of coordinates: Let E be the usual (canonical) basis of R^n. For x = (x_1, x_2, ..., x_n) ∈ R^n, we denote by [x] the coordinate vector of x with respect to E, as above. Now, consider a new basis S of R^n, and let P be the change-of-basis matrix from E to S. Then, we have [x] = [x]_E = P[x]_S.

Let q(x) = [x]ᵀA[x] be a quadratic form on R^n with the matrix representation A. Putting Y = [x]_S and B = PᵀAP, we obtain that, in the new coordinate system with basis S, q has the form
q = [x]ᵀA[x] = [x]_SᵀPᵀAP[x]_S = YᵀBY.

Example: Consider the quadratic form q: R³ → R defined by q(x_1, x_2, x_3) = 2x_1x_2 + 2x_2x_3 + 2x_3x_1, or, in matrix representation,
$$q = (x_1\; x_2\; x_3)\begin{pmatrix}0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0\end{pmatrix}\begin{pmatrix}x_1\\ x_2\\ x_3\end{pmatrix} = X^TAX \quad\text{for } X = \begin{pmatrix}x_1\\ x_2\\ x_3\end{pmatrix},$$
the coordinate vector of (x_1, x_2, x_3) with respect to the usual basis of R³.

Let S = {(1, 0, 0), (1, 1, 0), (1, 1, 1)} be another basis of R³. Then, we can compute the change-of-basis matrix P from the usual basis to the basis S as
$$P = \begin{pmatrix}1 & 1 & 1\\ 0 & 1 & 1\\ 0 & 0 & 1\end{pmatrix}.$$
Thus, the relation between the old coordinate vector X = (x_1, x_2, x_3)ᵀ and the new one Y = [(x_1, x_2, x_3)]_S = (y_1, y_2, y_3)ᵀ is
$$\begin{pmatrix}x_1\\ x_2\\ x_3\end{pmatrix} = \begin{pmatrix}1 & 1 & 1\\ 0 & 1 & 1\\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}y_1\\ y_2\\ y_3\end{pmatrix}.$$
Changing to the new coordinates, we obtain
$$q = X^TAX = Y^TP^TAPY = (y_1\; y_2\; y_3)\begin{pmatrix}1 & 0 & 0\\ 1 & 1 & 0\\ 1 & 1 & 1\end{pmatrix}\begin{pmatrix}0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0\end{pmatrix}\begin{pmatrix}1 & 1 & 1\\ 0 & 1 & 1\\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}y_1\\ y_2\\ y_3\end{pmatrix}$$
$$= (y_1\; y_2\; y_3)\begin{pmatrix}0 & 1 & 2\\ 1 & 2 & 4\\ 2 & 4 & 6\end{pmatrix}\begin{pmatrix}y_1\\ y_2\\ y_3\end{pmatrix} = 2y_2^2 + 6y_3^2 + 2y_1y_2 + 4y_1y_3 + 8y_2y_3.$$

6.4. Transformation of quadratic forms to principal axes (or canonical forms): Consider a quadratic form q in matrix representation q = XᵀAX, where X = [x] is the coordinate vector of x with respect to the usual basis of R^n, and A is the matrix representation of q. Since A is symmetric, A is orthogonally diagonalizable. This means that there exists an orthogonal matrix P such that PᵀAP is diagonal, i.e.,
$$P^TAP = \begin{pmatrix}\lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_n\end{pmatrix}.$$
We then change the coordinate system by putting X = PY. This means that we choose a new basis S such that the change-of-basis matrix from the usual basis to the basis S is the matrix P; the vectors in the basis S are called the principal axes for q. Hence, in the new coordinate system, q has the canonical form
$$q = X^TAX = Y^TP^TAPY = \lambda_1y_1^2 + \lambda_2y_2^2 + \dots + \lambda_ny_n^2 \quad\text{for } Y = \begin{pmatrix}y_1\\ \vdots\\ y_n\end{pmatrix}.$$
The above process is called the transformation of q to the principal axes (or the diagonalization of q). More concretely, we have the following algorithm to diagonalize a quadratic form q.

6.5. Algorithm of diagonalization of a quadratic form q = XᵀAX:

Step 1: Orthogonally diagonalize A, that is, find an orthogonal matrix P so that PᵀAP is diagonal. This can always be done, because A is a symmetric matrix.

Step 2: Change the coordinates by putting X = PY (here P acts as the change-of-basis matrix). Then, in the new coordinate system, q has the diagonal form
$$q = Y^TP^TAPY = (y_1\; y_2\; \cdots\; y_n)\begin{pmatrix}\lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_n\end{pmatrix}\begin{pmatrix}y_1\\ y_2\\ \vdots\\ y_n\end{pmatrix} = \lambda_1y_1^2 + \lambda_2y_2^2 + \dots + \lambda_ny_n^2,$$
finishing the process. Note that λ_1, λ_2, ..., λ_n are all eigenvalues of A.

Example: Consider
$$q = 2x_1x_2 + 2x_2x_3 + 2x_3x_1 = (x_1\; x_2\; x_3)\begin{pmatrix}0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0\end{pmatrix}\begin{pmatrix}x_1\\ x_2\\ x_3\end{pmatrix}.$$
To diagonalize q, we first orthogonally diagonalize A. This has already been done in the example after Algorithm 5.8, where we obtained
$$P = \begin{pmatrix}\frac{1}{\sqrt2} & -\frac{1}{\sqrt6} & \frac{1}{\sqrt3}\\[2pt] -\frac{1}{\sqrt2} & -\frac{1}{\sqrt6} & \frac{1}{\sqrt3}\\[2pt] 0 & \frac{2}{\sqrt6} & \frac{1}{\sqrt3}\end{pmatrix},\quad\text{for which}\quad P^TAP = \begin{pmatrix}-1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 2\end{pmatrix}.$$
We next change the coordinates by simply putting
$$\begin{pmatrix}x_1\\ x_2\\ x_3\end{pmatrix} = P\begin{pmatrix}y_1\\ y_2\\ y_3\end{pmatrix}.$$
Then, in the new coordinate system, q has the form
$$q = (y_1\; y_2\; y_3)\,P^TAP\begin{pmatrix}y_1\\ y_2\\ y_3\end{pmatrix} = (y_1\; y_2\; y_3)\begin{pmatrix}-1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 2\end{pmatrix}\begin{pmatrix}y_1\\ y_2\\ y_3\end{pmatrix} = -y_1^2 - y_2^2 + 2y_3^2.$$
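Both steps fit in a few lines of numpy (an illustrative sketch; the canonical coefficients are just the eigenvalues of A):

```python
import numpy as np

# Diagonalize q(x) = 2x1x2 + 2x2x3 + 2x3x1 = X^T A X to principal axes.
A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
lam, P = np.linalg.eigh(A)       # Step 1: P orthogonal, P^T A P diagonal
print(lam)                       # [-1. -1.  2.]  ->  q = -y1^2 - y2^2 + 2y3^2

# Step 2: with X = P Y, q(X) equals the canonical form at any sample point.
Y = np.array([1., 2., 3.])
X = P @ Y
print(X @ A @ X, lam @ Y**2)     # both 13.0 = -1 - 4 + 18
```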

6.6. Law of inertia: Let q be a quadratic form on R^n. Then, there is a basis of R^n (a coordinate system in R^n) in which q is represented by a diagonal matrix, and every other diagonal representation of q has the same number p of positive entries and the same number m of negative entries. The difference s = p − m is called the signature of q.

Example: In the above example q = 2x_1x_2 + 2x_2x_3 + 2x_3x_1 we have that p = 1 and m = 2. Therefore, the signature of q is s = 1 − 2 = −1.

To illustrate the Law of inertia, let us use another way to transform q to canonical form, as follows. Firstly, putting
x_1 = y_1' − y_2', x_2 = y_1' + y_2', x_3 = y_3',
we have that
$$q = 2y_1'^2 - 2y_2'^2 + 4y_1'y_3' = 2\big(y_1'^2 + 2y_1'y_3' + y_3'^2\big) - 2y_2'^2 - 2y_3'^2 = 2\big(y_1' + y_3'\big)^2 - 2y_2'^2 - 2y_3'^2.$$
Putting now y_1'' = y_1' + y_3', y_2'' = y_2', y_3'' = y_3', we obtain q = 2y_1''² − 2y_2''² − 2y_3''². Then, we have the same p = 1, m = 2, and s = 1 − 2 = −1 as above.

6.7. Definition: A quadratic form q on R^n is said to be positive definite if q(v) > 0 for every nonzero vector v ∈ R^n.

By the diagonalization of a quadratic form, we obtain immediately the following theorem.

6.8. Theorem: A quadratic form is positive definite if and only if all the eigenvalues of its matrix representation are positive.

VII. Quadric lines and surfaces

7.1. Quadric lines: Consider the coordinate plane xOy. A quadric line is a curve in the plane xOy which is described by an equation of the form
a_11x² + 2a_12xy + a_22y² + b_1x + b_2y + c = 0,

or, in matrix form,
$$(x\; y)\begin{pmatrix}a_{11} & a_{12}\\ a_{12} & a_{22}\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} + (b_1\; b_2)\begin{pmatrix}x\\ y\end{pmatrix} + c = 0,$$
where the 2×2 symmetric matrix A = (a_ij) ≠ 0. We can see that the equation of a quadric line is the sum of a quadratic form and a linear form. As known from the above section, the quadratic form can always be transformed to the principal axes. Therefore, we will see that we can also transform the quadric lines to the principal axes. Basing on the algorithm of diagonalization of a quadratic form, we then have the following algorithm of transformation of a quadric line to the principal axes (or to the canonical form).

7.2. Transformation of quadric lines to the principal axes: Consider the quadric line described by the equation
$$(x\; y)\,A\begin{pmatrix}x\\ y\end{pmatrix} + B\begin{pmatrix}x\\ y\end{pmatrix} + c = 0, \tag{7.1}$$
where A is a 2×2 symmetric nonzero matrix, B = (b_1 b_2) is a row matrix, and c is a constant.

Step 1: Orthogonally diagonalize A to find an orthogonal matrix P such that
$$P^TAP = \begin{pmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{pmatrix}.$$

Step 2: Change the coordinates by putting
$$\begin{pmatrix}x\\ y\end{pmatrix} = P\begin{pmatrix}x'\\ y'\end{pmatrix}.$$
Then, in the new coordinate system, the quadric line has the form
λ_1x'² + λ_2y'² + b_1'x' + b_2'y' + c = 0, where (b_1' b_2') = (b_1 b_2)P.

Step 3: Eliminate the first order terms if possible.
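The first two steps, run on the example that follows, look like this in numpy (a sketch; `eigh` may return eigenvectors of opposite sign to the hand computation, which only flips the sign of the corresponding coefficient b'):

```python
import numpy as np

# Quadric line x^2 + 2xy + y^2 + 8x + y = 0: A = [[1,1],[1,1]], B = (8, 1), c = 0.
A = np.array([[1., 1.], [1., 1.]])
B = np.array([8., 1.])

lam, P = np.linalg.eigh(A)   # Step 1: eigenvalues lambda_1 = 0, lambda_2 = 2
B_new = B @ P                # Step 2: new linear coefficients (b1', b2')
print(lam)                   # [0. 2.]
print(B_new)                 # ~ (+-7/sqrt(2), 9/sqrt(2)) in the rotated frame
```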

Example: Consider the quadric line described by the equation
x² + 2xy + y² + 8x + y = 0.
To perform the above algorithm, we first write the equation in the matrix form (7.1):
$$(x\; y)\begin{pmatrix}1 & 1\\ 1 & 1\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} + (8\; 1)\begin{pmatrix}x\\ y\end{pmatrix} = 0.$$
Firstly, we diagonalize
$$A = \begin{pmatrix}1 & 1\\ 1 & 1\end{pmatrix}$$
orthogonally, starting by computing the eigenvalues of A from the characteristic equation
$$\begin{vmatrix}1-\lambda & 1\\ 1 & 1-\lambda\end{vmatrix} = 0 \Leftrightarrow \lambda_1 = 0 \text{ or } \lambda_2 = 2.$$
For λ_1 = 0, there is one linearly independent eigenvector u_1 = (1, −1)ᵀ. For λ_2 = 2, there is also only one linearly independent eigenvector u_2 = (1, 1)ᵀ. The Gram–Schmidt process is very simple in this case: we just have to put
$$e_1 = \frac{u_1}{\|u_1\|} = \begin{pmatrix}\tfrac{1}{\sqrt2}\\ -\tfrac{1}{\sqrt2}\end{pmatrix},\qquad e_2 = \frac{u_2}{\|u_2\|} = \begin{pmatrix}\tfrac{1}{\sqrt2}\\ \tfrac{1}{\sqrt2}\end{pmatrix}.$$
Then, setting
$$P = \begin{pmatrix}\tfrac{1}{\sqrt2} & \tfrac{1}{\sqrt2}\\ -\tfrac{1}{\sqrt2} & \tfrac{1}{\sqrt2}\end{pmatrix},\quad\text{we have that}\quad P^TAP = \begin{pmatrix}0 & 0\\ 0 & 2\end{pmatrix}.$$
Next, we change the coordinates by putting
$$\begin{pmatrix}x\\ y\end{pmatrix} = \begin{pmatrix}\tfrac{1}{\sqrt2} & \tfrac{1}{\sqrt2}\\ -\tfrac{1}{\sqrt2} & \tfrac{1}{\sqrt2}\end{pmatrix}\begin{pmatrix}x'\\ y'\end{pmatrix}.$$
Therefore, the equation in the new coordinate system is

$$(x'\; y')\begin{pmatrix}0 & 0\\ 0 & 2\end{pmatrix}\begin{pmatrix}x'\\ y'\end{pmatrix} + (8\; 1)\begin{pmatrix}\tfrac{1}{\sqrt2} & \tfrac{1}{\sqrt2}\\ -\tfrac{1}{\sqrt2} & \tfrac{1}{\sqrt2}\end{pmatrix}\begin{pmatrix}x'\\ y'\end{pmatrix} = 0$$
$$\Leftrightarrow 2y'^2 + \frac{7}{\sqrt2}x' + \frac{9}{\sqrt2}y' = 0 \Leftrightarrow 2\Big(y' + \frac{9}{4\sqrt2}\Big)^2 + \frac{7}{\sqrt2}\Big(x' - \frac{81\sqrt2}{112}\Big) = 0.$$
(We complete the square in y' in this way in order to eliminate its first order term.) Now, we continue to change the coordinates by putting
$$X = x' - \frac{81\sqrt2}{112},\qquad Y = y' + \frac{9}{4\sqrt2}.$$
In fact, this is the translation of the coordinate system to the new origin
$$I\Big(\frac{81\sqrt2}{112},\; -\frac{9}{4\sqrt2}\Big).$$
Then, we obtain the equation in principal axes:
$$2Y^2 + \frac{7}{\sqrt2}X = 0.$$
Therefore, this quadric line is a parabola.

7.3. Quadric surfaces: Consider now the coordinate space Oxyz. A quadric surface is a surface in the space Oxyz which is described by an equation of the form
a_11x² + a_22y² + a_33z² + 2a_12xy + 2a_23yz + 2a_13zx + b_1x + b_2y + b_3z + c = 0,
or, in matrix form,

$$(x\; y\; z)\begin{pmatrix}a_{11} & a_{12} & a_{13}\\ a_{12} & a_{22} & a_{23}\\ a_{13} & a_{23} & a_{33}\end{pmatrix}\begin{pmatrix}x\\ y\\ z\end{pmatrix} + (b_1\; b_2\; b_3)\begin{pmatrix}x\\ y\\ z\end{pmatrix} + c = 0,$$
where A = (a_ij) is a 3×3 symmetric matrix, A ≠ 0. Similarly to the case of quadric lines, we can transform a quadric surface to the principal axes by the following algorithm.

7.4. Algorithm of transformation of a quadric surface to the principal axes (or to the canonical form):

Step 1: Write the equation in the matrix form as above,
$$(x\; y\; z)\,A\begin{pmatrix}x\\ y\\ z\end{pmatrix} + B\begin{pmatrix}x\\ y\\ z\end{pmatrix} + c = 0.$$

Step 2: Orthogonally diagonalize A to obtain an orthogonal matrix P such that
$$P^TAP = \begin{pmatrix}\lambda_1 & 0 & 0\\ 0 & \lambda_2 & 0\\ 0 & 0 & \lambda_3\end{pmatrix},$$
and change the coordinates by putting
$$\begin{pmatrix}x\\ y\\ z\end{pmatrix} = P\begin{pmatrix}x'\\ y'\\ z'\end{pmatrix}.$$
Then, the equation in the new coordinate system is
λ_1x'² + λ_2y'² + λ_3z'² + b_1'x' + b_2'y' + b_3'z' + c = 0, where (b_1' b_2' b_3') = (b_1 b_2 b_3)P.

Step 3: Eliminate the first order terms if possible.

Example: Consider the quadric surface described by the equation
2xy + 2xz + 2yz − 6x − 6y − 4z = 0
$$\Leftrightarrow (x\; y\; z)\begin{pmatrix}0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0\end{pmatrix}\begin{pmatrix}x\\ y\\ z\end{pmatrix} + (-6\; {-6}\; {-4})\begin{pmatrix}x\\ y\\ z\end{pmatrix} = 0.$$

The orthogonal diagonalization of
$$A = \begin{pmatrix}0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0\end{pmatrix}$$
was already done in the example after Algorithm 5.8, by which we obtained
$$P = \begin{pmatrix}\frac{1}{\sqrt2} & -\frac{1}{\sqrt6} & \frac{1}{\sqrt3}\\[2pt] -\frac{1}{\sqrt2} & -\frac{1}{\sqrt6} & \frac{1}{\sqrt3}\\[2pt] 0 & \frac{2}{\sqrt6} & \frac{1}{\sqrt3}\end{pmatrix},\quad\text{for which}\quad P^TAP = \begin{pmatrix}-1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 2\end{pmatrix}.$$
We then put (x, y, z)ᵀ = P(x', y', z')ᵀ to obtain the equation in the new coordinate system:
$$-x'^2 - y'^2 + 2z'^2 + (-6\; {-6}\; {-4})\,P\begin{pmatrix}x'\\ y'\\ z'\end{pmatrix} = 0 \Leftrightarrow -x'^2 - y'^2 + 2z'^2 + \frac{4}{\sqrt6}y' - \frac{16}{\sqrt3}z' = 0$$
$$\Leftrightarrow -x'^2 - \Big(y' - \frac{2}{\sqrt6}\Big)^2 + 2\Big(z' - \frac{4}{\sqrt3}\Big)^2 - 10 = 0.$$
Now, we put (in fact, this is a translation)
$$X = x',\qquad Y = y' - \frac{2}{\sqrt6},\qquad Z = z' - \frac{4}{\sqrt3}$$
(the new origin is I(0, 2/√6, 4/√3)) to obtain the equation of the surface in principal axes:
X² + Y² − 2Z² = −10.

We can conclude that this surface is a two-fold hyperboloid (see item 3) of Subsection 7.5 below).

7.5. Basic quadric surfaces in principal axes:

1) Ellipsoid:
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1$$
(if a = b = c, this is a sphere).

2) One-fold hyperboloid:
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} - \frac{z^2}{c^2} = 1.$$

3) Two-fold hyperboloid:
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} - \frac{z^2}{c^2} = -1.$$

4) Elliptic paraboloid:
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} - z = 0$$
(if a = b, this is a paraboloid of revolution).

5) Cone:
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} - \frac{z^2}{c^2} = 0.$$

6) Hyperbolic paraboloid (or saddle surface):
$$\frac{x^2}{a^2} - \frac{y^2}{b^2} - z = 0.$$

7) Cylinder: A cylinder has one of the following forms of equation: f(x, y) = 0, f(x, z) = 0, or f(y, z) = 0. Since the roles of x, y, z are equal, we consider only the case of the equation f(x, y) = 0. This cylinder consists of generating lines parallel to the z-axis and leaning on a directrix which lies in the xOy-plane and has the equation f(x, y) = 0 (on this plane).

Examples of quadric cylinders:

1) Circular cylinder: x² + y² = 1.

2) Parabolic cylinder: x² − y = 0.

3) Hyperbolic cylinder: x² − y² = 1.

