cbr



CHAPTER I

INTRODUCTION

A. Rationale for the Importance of the Critical Book Review

Criticizing books is a scholarly activity undertaken to provide feedback on and assessment of the contents of a book. The purpose of a critical book review is to provide comprehensive information about, and an understanding of, what a book presents and reveals. It also helps readers judge whether the book deserves a welcome from the public or not. A critical book review is useful for deepening essential knowledge of the book under review. Readers who want to know more about the full contents of the book can then look for it in stores or on sites that sell it.

In this Critical Book Review, the author reviews and analyzes a book entitled "Introduction to Matrices and Linear Transformations", written by Daniel T. Finkbeiner and published in 1978 by W.H. Freeman and Company. The 471-page book consists of eleven chapters, namely: 1) Chapter I, on linear equations; 2) Chapter II, on linear spaces; 3) Chapter III, on linear mappings; 4) Chapter IV, on matrices; 5) Chapter V, on determinants; 6) Chapter VI, on equivalence relations on rectangular matrices; 7) Chapter VII, on a canonical form for similarity; 8) Chapter VIII, on inner product spaces; 9) Chapter IX, on scalar-valued functions; 10) Chapter X, on an application to linear programming; and 11) Chapter XI, on an application to linear differential equations. The material on determinants, drawn from Chapter V of the book, is summarized briefly in Chapter II of this review.

B. Goals of CBR

This Critical Book Review aims:

1. To review the contents of a book.

2. To find and understand the information contained in the book.

3. To train oneself to think critically when searching for the information provided by each chapter of the first and second books.

4. To compare the contents of the first book and the second book.

5. To identify the advantages and disadvantages of the first book's content.

C. Benefits of CBR

1) To fulfill the task of the course of Introduction to Matrices and Linear

Transformations.

2) To increase knowledge of determinants.

D. Book Identity

1. Title : Introduction to Matrices and Linear Transformations.

2. Edition : Third Edition

3. Author : Daniel Talbot Finkbeiner

4. Publisher : W.H. Freeman and Company

5. Place of Publication : United States of America

6. Year of Publication : 1978

7. ISBN : 0-7167-0084-0

CHAPTER II

SUMMARY

A. Determinants

1. Basic Properties of Determinants

Although the case n = 1 is trivial, it is the obvious starting point. The determinant of a

one-by-one matrix is defined by:

det (a) = a

For n = 2 the determinant of a two-by-two matrix is defined by:

det ( a11  a12 ) = a11 a22 − a21 a12
    ( a21  a22 )

For n = 3 the determinant of a three-by-three matrix is defined by:

det ( a11  a12  a13 )
    ( a21  a22  a23 ) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32
    ( a31  a32  a33 )     − a13 a22 a31 − a11 a23 a32 − a12 a21 a33

Obviously the computation required to verify these statements for n = 3 is much longer than that required to verify the corresponding statements for n = 2. It is equally obvious that a computational definition of det A for an n-by-n matrix would be quite awkward, so we shall take a different approach.
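The explicit formulas above are easy to check numerically. The following sketch (matrices chosen arbitrarily, not taken from the book) codes the 2-by-2 and 3-by-3 expansions directly:

```python
# Direct coding of the explicit 2-by-2 and 3-by-3 determinant formulas.
def det2(a):
    # det = a11*a22 - a21*a12
    return a[0][0] * a[1][1] - a[1][0] * a[0][1]

def det3(a):
    # the six-term expansion for n = 3
    return (a[0][0] * a[1][1] * a[2][2]
            + a[0][1] * a[1][2] * a[2][0]
            + a[0][2] * a[1][0] * a[2][1]
            - a[0][2] * a[1][1] * a[2][0]
            - a[0][0] * a[1][2] * a[2][1]
            - a[0][1] * a[1][0] * a[2][2])

assert det2([[1, 2], [3, 4]]) == -2
# for a triangular matrix the determinant is the diagonal product 2 * 3 * 1
assert det3([[2, 5, 7], [0, 3, 9], [0, 0, 1]]) == 6
```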

Definition 1

A determinant is a function, denoted det, that assigns to each n-by-n matrix having column vectors A1, ..., An a scalar value det A that has the following three properties: for each scalar c and each i = 1, ..., n,

1) det (A1, ..., cAi, ..., An) = c det (A1, ..., Ai, ..., An);

2) det (A1, ..., Ai, ..., Aj, ..., An) = det (A1, ..., Ai + cAj, ..., An) for each j ≠ i;

3) det (I) = 1.
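The three defining properties can be verified on a small example. This is a sketch (example matrix and scalar assumed, not from the book) using the permutation-sum formula as the determinant and exact Fraction arithmetic:

```python
from fractions import Fraction
from itertools import permutations

def det(cols):
    # `cols` lists the column vectors; cols[j][i] is the entry a_ij.
    n = len(cols)
    total = Fraction(0)
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = Fraction(-1 if inv % 2 else 1)
        for j in range(n):
            term *= cols[j][p[j]]
        total += term
    return total

A = [[Fraction(x) for x in col] for col in ([2, 1, 5], [0, 3, 1], [4, 2, 2])]
c = Fraction(7)

# property 1: scaling a column scales the determinant by c
scaled = [A[0], [c * x for x in A[1]], A[2]]
assert det(scaled) == c * det(A)

# property 2: adding c times another column leaves the determinant unchanged
sheared = [[A[0][k] + c * A[2][k] for k in range(3)], A[1], A[2]]
assert det(sheared) == det(A)

# property 3: det(I) = 1
I = [[Fraction(int(i == j)) for i in range(3)] for j in range(3)]
assert det(I) == 1
```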

Theorem 1

If det is a function having properties 1 and 2 of Definition 1, then the following statements hold for each i, j = 1, ..., n such that j ≠ i:

a) det (A1, ..., Ai, ..., Aj, ..., An) = − det (A1, ..., Aj, ..., Ai, ..., An). In words, interchanging any two columns of A reverses the sign of the determinant; or, det is an alternating function of the columns of A.

b) If the columns of A are linearly dependent, then det A = 0.

c) If the columns of B are a permutation of the columns of A, then det B = ± det A, where the plus sign applies if that permutation can be performed by an even number of transpositions (interchanges of pairs of columns), and the minus sign applies if the permutation requires an odd number of transpositions.

d) det (A1, ..., Bi + Ci, ..., An) = det (A1, ..., Bi, ..., An) + det (A1, ..., Ci, ..., An). In words, if column i is expressed as the sum of two column vectors, Ai = Bi + Ci, then det A is the sum of the two indicated determinants; or, det A is an additive function of each column.
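The consequences in Theorem 1 can likewise be checked on a concrete example (test data assumed, not from the book):

```python
from itertools import permutations

def det(cols):
    # permutation-sum determinant; cols[j][i] is the entry a_ij
    n = len(cols)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = -1 if inv % 2 else 1
        for j in range(n):
            term *= cols[j][p[j]]
        total += term
    return total

A = [[2, 1, 5], [0, 3, 1], [4, 2, 2]]

# (a) interchanging two columns reverses the sign
assert det([A[1], A[0], A[2]]) == -det(A)

# (b) linearly dependent columns give det = 0 (third column = first + second)
assert det([A[0], A[1], [A[0][k] + A[1][k] for k in range(3)]]) == 0

# (d) det is an additive function of each column
B2, C2 = [1, 1, 1], [-1, 2, 1]
summed = [A[0], [B2[k] + C2[k] for k in range(3)], A[2]]
assert det(summed) == det([A[0], B2, A[2]]) + det([A[0], C2, A[2]])
```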

2. An Explicit Formula For Det A

Let A and B be any n-by-n matrices and let C = BA. We compute the elements of Ck, column k of C:

c1k = b11 a1k + b12 a2k + ⋯ + b1n ank
c2k = b21 a1k + b22 a2k + ⋯ + b2n ank
⋮
cnk = bn1 a1k + bn2 a2k + ⋯ + bnn ank

Letting Bj denote column j of B, we can write

Ck = a1k B1 + a2k B2 + ⋯ + ank Bn = Σ (j = 1, ..., n) ajk Bj

Hence: det C = det (C1, C2, ..., Cn) = det (Σj aj1 Bj, Σj aj2 Bj, ..., Σj ajn Bj)

Theorem 2 Any function det that has the three properties of Definition 1 must have the form

det A = Σσ (sgn σ) aσ(1)1 aσ(2)2 ⋯ aσ(n)n

where the sum runs over all permutations σ of {1, ..., n}, and sgn σ is +1 or −1 according as σ is even or odd.
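Theorem 2's formula is the familiar permutation (Leibniz) expansion. A direct sketch of it (example matrix assumed), checked against recursive cofactor expansion:

```python
from itertools import permutations

def det_leibniz(a):
    # sum over permutations s of sgn(s) * a[s(1)][1] * ... * a[s(n)][n]
    n = len(a)
    total = 0
    for s in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if s[i] > s[j])
        term = -1 if inv % 2 else 1
        for col in range(n):
            term *= a[s[col]][col]      # row s(col), column col
        total += term
    return total

def det_cofactor(a):
    # reference value: recursive expansion along the first row
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det_cofactor([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

A = [[2, 0, 4], [1, 3, 2], [5, 1, 2]]
assert det_leibniz(A) == det_cofactor(A) == -48
```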

Theorem 3 If A and B are n-by-n matrices, then

det (BA) = (det B)(det A)

Theorem 4 An n-by-n matrix A is nonsingular if and only if det A ≠ 0. If A is nonsingular,

det (A⁻¹) = (det A)⁻¹

Theorem 5 If Aᵗ is the transpose of A, then det (Aᵗ) = det A.

Theorem 6 For each value of n = 1, 2, ...., there exists one and only one determinant

function for n-by-n matrices. Its value is expressed by Formula 1 and also by Formula 2.

Theorem 7 For each row index i and each column index j,

det A = Σ (k = 1, ..., n) akj cof akj   [expansion along column j]

det A = Σ (k = 1, ..., n) aik cof aik   [expansion along row i]

B. The Book Elementary Linear Algebra, Tenth Edition, by Howard Anton

2.1 Determinants by Cofactor Expansion

Definition 1

If A is a square matrix, then the minor of entry aij is denoted by Mij and is defined to be the determinant of the submatrix that remains after the ith row and jth column are deleted from A. The number (−1)^(i+j) Mij is denoted by Cij and is called the cofactor of entry aij.

Definition of a General Determinant

Formula 4 is a special case of the following general result, which we will state without

proof.

Theorem 2.1.1

If A is an n×n matrix, then regardless of which row or column of A is chosen, the number

obtained by multiplying theentries in that row or column by the corresponding cofactors

and adding the resulting products is always the same.

Definition 2

If A is an n×n matrix, then the number obtained by multiplying the entries in any row or column of A by the corresponding cofactors and adding the resulting products is called the determinant of A, and the sums themselves are called cofactor expansions of A.

That is,

det(A) = a1j C1j + a2j C2j + ⋯ + anj Cnj   [cofactor expansion along the jth column]

det(A) = ai1 Ci1 + ai2 Ci2 + ⋯ + ain Cin   [cofactor expansion along the ith row]
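Theorem 2.1.1 can be illustrated in code: every row expansion and every column expansion of an example matrix (chosen here for illustration) yields the same determinant:

```python
def minor(a, i, j):
    # delete row i and column j
    return [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]

def det(a):
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor(a, 0, j)) for j in range(len(a)))

def expand_row(a, i):
    return sum((-1) ** (i + j) * a[i][j] * det(minor(a, i, j)) for j in range(len(a)))

def expand_col(a, j):
    return sum((-1) ** (i + j) * a[i][j] * det(minor(a, i, j)) for i in range(len(a)))

A = [[3, 1, 0], [-2, -4, 3], [5, 4, -2]]
values = [expand_row(A, i) for i in range(3)] + [expand_col(A, j) for j in range(3)]
assert len(set(values)) == 1    # every expansion gives the same number
assert values[0] == -1
```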

Theorem 2.1.2

If A is an n×n triangular matrix (upper triangular, lower triangular, or diagonal), then det(A) is the product of the entries on the main diagonal of the matrix; that is, det(A) = a11 a22 ⋯ ann.
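A quick check of Theorem 2.1.2 on an assumed upper triangular example:

```python
def det(a):
    # recursive cofactor expansion along the first row
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

U = [[2, 7, -3, 8],
     [0, -3, 7, 5],
     [0, 0, 6, 7],
     [0, 0, 0, 9]]
assert det(U) == 2 * (-3) * 6 * 9 == -324
```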

Theorem 2.2.1

Let A be a square matrix. If A has a row of zeros or a column of zeros, then det(A) = 0.

Proof Since the determinant of A can be found by a cofactor expansion along any row or column, we can use the row or column of zeros. Thus, if we let C1, C2, ..., Cn denote the cofactors of A along that row or column, then

det(A) = 0·C1 + 0·C2 + ⋯ + 0·Cn = 0

The following useful theorem relates the determinant of a matrix and the determinant of

its transpose.

Theorem 2.2.2

Let A be a square matrix. Then det(A) = det(AT)

Because transposing a matrix changes its columns to rows and its rows to columns,

almost every theorem about the rows of a determinant has a companion version about

columns, and vice versa.

Proof Since transposing a matrix changes its columns to rows and its rows to columns,

the cofactor expansion of A along any row is the same as the cofactor expansion of AT

along the corresponding column. Thus, both have the same determinant.

The next theorem shows how an elementary row operation on a square matrix affects

the value of its determinant. In place of a formal proof we have provided a table to

illustrate the ideas in the 3×3 case.

Theorem 2.2.3

Let A be an n×n matrix.

(a) If B is the matrix that results when a single row or single column of A is multiplied

by a scalar k, then det(B) = k det(A).

(b) If B is the matrix that results when two rows or two columns of A are interchanged,

then det(B) = -det(A).

(c) If B is the matrix that results when a multiple of one row of A is added to another

row or when a multiple of

one column is added to another column, then det(B) = det(A).

Elementary Matrices

It will be useful to consider the special case of Theorem 2.2.3 in which A = In is the n×n

identity matrix and E (rather than B) denotes the elementary matrix that results when the

row operation is performed on In. In this special case Theorem 2.2.3 implies the

following result.

Theorem 2.2.4

Let E be an n×n elementary matrix.

(a) If E results from multiplying a row of In by a nonzero number k, then det(E) =k.

(b) If E results from interchanging two rows of In, then det(E) =-1.

(c) If E results from adding a multiple of one row of In to another, then det(E) =1.
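The three cases of Theorem 2.2.4 can be checked directly (k = 5 is an arbitrary choice):

```python
def det(a):
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

k = 5
E_scale = identity(3)
E_scale[1][1] = k                               # multiply row 2 of I by k
E_swap = identity(3)
E_swap[0], E_swap[2] = E_swap[2], E_swap[0]     # interchange rows 1 and 3
E_add = identity(3)
E_add[2][0] = k                                 # add k times row 1 to row 3

assert det(E_scale) == k
assert det(E_swap) == -1
assert det(E_add) == 1
```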

If a square matrix A has two proportional rows, then a row of zeros can be introduced by

adding a suitable multiple of one of the rows to the other. Similarly for columns. But

adding a multiple of one row or column to another does not change the determinant, so

from Theorem 2.2.1, we must have det(A) = 0. This proves the following theorem.

Theorem 2.2.5

If A is a square matrix with two proportional rows or two proportional columns, then

det(A)=0.

We will now give a method for evaluating determinants that involves substantially less

computation than cofactor expansion. The idea of the method is to reduce the given

matrix to upper triangular form by elementary row operations, then compute the

determinant of the upper triangular matrix (an easy computation), and then relate that

determinant to that of the original matrix.
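The method just described can be sketched as follows. This is a minimal implementation, not the book's code; the example matrix is assumed:

```python
from fractions import Fraction

def det_by_row_reduction(a):
    # reduce to upper triangular form, tracking sign changes from row swaps
    m = [[Fraction(x) for x in row] for row in a]
    n = len(m)
    sign = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # no pivot: determinant is zero
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign                # a row interchange flips the sign
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            # adding a multiple of one row to another leaves det unchanged
            m[r] = [m[r][c] - factor * m[col][c] for c in range(n)]
    product = Fraction(sign)
    for i in range(n):
        product *= m[i][i]              # det of a triangular matrix
    return product

assert det_by_row_reduction([[0, 1, 5], [3, -6, 9], [2, 6, 1]]) == 165
```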

Basic Properties of Determinants

Suppose that A and B are n×n matrices and k is any scalar. We begin by considering

possible relationships between det(A), det(B), and det(kA), det(A+B), det(AB).

Since a common factor of any row of a matrix can be moved through the determinant

sign, and since each of the n rows in kA has a common factor of k, it follows that

det(kA) = kⁿ det(A).

Theorem 2.3.1

Let A, B, and C be n×n matrices that differ only in a single row, say the rth, and assume

that the rth row of C can be obtained by adding corresponding entries in the rth rows of

A and B. Then

det(C)=det(A)+det(B)

The same result holds for columns.
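Both facts, det(kA) = kⁿ det(A) and the row-additivity of Theorem 2.3.1, can be verified on a small example (data chosen arbitrarily):

```python
def det(a):
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]
k, n = 3, 3

# det(kA) = k^n det(A)
kA = [[k * x for x in row] for row in A]
assert det(kA) == k ** n * det(A)

# Theorem 2.3.1: A, B, C agree except in row 1, where C's row is the sum
B = [A[0], [7, -1, 2], A[2]]
C = [A[0], [A[1][j] + B[1][j] for j in range(3)], A[2]]
assert det(C) == det(A) + det(B)
```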

Considering the complexity of the formulas for determinants and matrix multiplication,

it would seem unlikely that a simple relationship should exist between them. This is

what makes the simplicity of our next result so surprising. We will show that if A and B

are square matrices of the same size, then

det(AB)=det (A) det (B)
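A quick numerical check of det(AB) = det(A) det(B) on assumed 2-by-2 matrices:

```python
def det2(a):
    return a[0][0] * a[1][1] - a[1][0] * a[0][1]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[3, 1], [2, 1]]
B = [[-1, 3], [5, 8]]
assert det2(matmul(A, B)) == det2(A) * det2(B) == -23
```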

Determinant Test for Invertibility

Theorem 2.3.3

A square matrix A is invertible if and only if det(A) ≠ 0.

Proof Let R be the reduced row echelon form of A. As a preliminary step, we will show

that det(A) and det(R) are both zero or both nonzero: Let E1, E2, ..., Er be the elementary

matrices that correspond to the elementary row operations that produce R from A. Thus

R = Er ⋯ E2 E1 A

and from 3,

det(R) = det(Er) ⋯ det(E2) det(E1) det(A)

We pointed out in the margin note that accompanies Theorem 2.2.4 that the determinant

of an elementary matrix is nonzero. Thus, it follows from Formula 4 that det(A) and

det(R) are either both zero or both nonzero, which sets the stage for the main part of the

proof. If we assume first that A is invertible, then it follows from Theorem 1.6.4 that

R = I and hence that det(R) = 1 (≠ 0). This, in turn, implies that det(A) ≠ 0, which is what we

wanted to show.

It follows from Theorems 2.3.3 and Theorem 2.2.5 that a square matrix with two

proportional rows or two proportional columns is not invertible.

Conversely, assume that det(A) ≠ 0. It follows from this that det(R) ≠ 0, which tells us that

R cannot have a row of zeros. Thus, it follows from Theorem 1.4.3 that R=I and hence

that A is invertible by Theorem 1.6.4.

Theorem 2.3.4

If A and B are square matrices of the same size, then

det(AB)=det (A) det (B)

Proof We divide the proof into two cases that depend on whether or not A is invertible.

If the matrix A is not

invertible, then by Theorem 1.6.5 neither is the product AB. Thus, from Theorem 2.3.3, we have det(AB) = 0 and det(A) = 0, so it follows that det(AB) = det(A) det(B).

Theorem 2.3.5

If A is invertible, then

det(A⁻¹) = 1 / det(A)

Proof Since A⁻¹A = I, it follows that det(A⁻¹) det(A) = det(I) = 1. Since det(A) ≠ 0, the proof can be completed by dividing through by det(A).

Adjoint of a Matrix

In a cofactor expansion we compute det(A) by multiplying the entries in a row or column

by their cofactors and adding the resulting products. It turns out that if one multiplies the

entries in any row by the corresponding cofactors from a different row, the sum of these

products is always zero. (This result also holds for columns.) Although we omit the

general proof, the next example illustrates the idea of the proof in a special case.

For an invertible triangular matrix A,

det(A⁻¹) = 1 / (a11 a22 ⋯ ann)

Moreover, by using the adjoint formula it is possible to show that

1/a11, 1/a22, ..., 1/ann

are actually the successive diagonal entries of A⁻¹.

Definition 1

If A is any n×n matrix and Cij is the cofactor of aij, then the matrix

( C11  C12  ⋯  C1n )
( C21  C22  ⋯  C2n )
(  ⋮    ⋮        ⋮ )
( Cn1  Cn2  ⋯  Cnn )

is called the matrix of cofactors from A. The transpose of this matrix is called the adjoint of A and is denoted by adj(A).

If A is an invertible matrix, then

A⁻¹ = (1 / det(A)) adj(A)

Proof We show first that

A adj(A) = det(A) I
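A sketch (example matrix assumed) of the adjoint construction and the inverse formula, verified by multiplying A by the computed inverse:

```python
from fractions import Fraction

def det(a):
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

def cofactor(a, i, j):
    sub = [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]
    return (-1) ** (i + j) * det(sub)

def inverse_via_adjoint(a):
    # adj(A) is the transpose of the matrix of cofactors
    n = len(a)
    d = Fraction(det(a))
    return [[Fraction(cofactor(a, j, i)) / d for j in range(n)] for i in range(n)]

A = [[3, 2, -1], [1, 6, 3], [2, -4, 0]]
Ainv = inverse_via_adjoint(A)
product = [[sum(Fraction(A[i][k]) * Ainv[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
assert product == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # A * A^{-1} = I
```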

If Ax = b is a system of n linear equations in n unknowns such that det(A) ≠ 0, then the system has a unique solution. This solution is

x1 = det(A1)/det(A), x2 = det(A2)/det(A), x3 = det(A3)/det(A), ..., xn = det(An)/det(A)

where Aj is the matrix obtained by replacing the entries in the jth column of A by the entries in the matrix b.
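Cramer's rule as described above can be sketched as follows (the 2-by-2 system is chosen for illustration):

```python
from fractions import Fraction

def det(a):
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

def cramer(A, b):
    d = det(A)                          # assumed nonzero
    xs = []
    for j in range(len(A)):
        # A_j: column j of A replaced by b
        Aj = [row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]
        xs.append(Fraction(det(Aj), d))
    return xs

# the system 2x + y = 5, x + 3y = 10
A = [[2, 1], [1, 3]]
b = [5, 10]
x = cramer(A, b)
assert x == [1, 3]
# check that Ax = b
assert [sum(Fraction(A[i][j]) * x[j] for j in range(2)) for i in range(2)] == b
```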

CHAPTER III

ADVANTAGES OF THE BOOKS

Howard

1. The book has a cover that attracts readers.

2. Every important word is set in bold, which makes those words easier to find.

3. The arrangement of chapters and sub-chapters is quite good.

4. Pictures and tables are included for more detailed explanation.

5. Examples are provided along with their solutions.

Martynkiw

1. The book has a cover with attractive colors that invite reading.

2. The contents use many colors, which makes the book more interesting to read.

3. The material is described completely.

4. Formulas are presented together with example problems and discussion.

5. Drawings and tables explaining the material are included.

6. Important notes are placed in the left margin of the book.

CHAPTER IV

WEAKNESSES OF THE BOOKS

Howard

There is little to criticize, because the book is of a very high standard and has been widely used by the community, especially by university students; this shows that the book is very good.

1. In the books reviewed, we did not find any major shortcomings, because the material in each book is applied in everyday life.

2. There are not many example questions and exercises in each chapter, which makes it harder for readers to fully understand the material.

CHAPTER V

IMPLICATIONS

5.1 Theory

Vector Addition Viewed as Translation. If v, w, and v + w are positioned so their initial points coincide, then the terminal point of v + w can be viewed in two ways:

1. The terminal point of v + w is the point that results when the terminal point of v is translated in the direction of w by a distance equal to the length of w.

2. The terminal point of v + w is the point that results when the terminal point of w is translated in the direction of v by a distance equal to the length of v.

Accordingly, we say that v + w is the translation of v by w or, alternatively, the translation of w by v.

If u·v = 0, u and v are orthogonal. Point-normal forms of lines and planes: to find the equation of a line or plane, we take an arbitrary point P0 = (x0, y0, z0) and another point P = (x, y, z). We form the vector P0P = (x − x0, y − y0, z − z0). Then we know that the normal must be orthogonal to this vector (and to the plane/line), so that n · P0P = 0. If this normal n is defined as n = (a, b, c), then the above equation becomes (by the component dot product):

a(x − x0) + b(y − y0) + c(z − z0) = 0

The above equation [Point-Normal Form of a Plane] can be simplified: if we multiply the terms out and simplify, we get the following theorem.

Theorem 3.3.1 Point-Normal Forms of Lines and Planes

ax + by + c = 0 is a line in R2 with normal n = (a, b).

ax + by + cz + d = 0 is a plane in R3 with normal n = (a, b, c).
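The point-normal construction can be sketched in code (point and normal chosen arbitrarily, not from the book):

```python
def plane_from_point_normal(p0, n):
    # a(x - x0) + b(y - y0) + c(z - z0) = 0  rearranged to  ax + by + cz + d = 0
    a, b, c = n
    x0, y0, z0 = p0
    d = -(a * x0 + b * y0 + c * z0)
    return a, b, c, d

a, b, c, d = plane_from_point_normal((3, -1, 7), (4, 2, -5))
assert (a, b, c, d) == (4, 2, -5, 25)
# the original point satisfies the equation
assert a * 3 + b * (-1) + c * 7 + d == 0
```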

1. Utilization of wave frequency difference in color

In the field of medicine, according to Dr. Erwin Tb. Kusuma, Sp.KJ, color therapy is classified as electromagnetic medicine, or treatment with electromagnetic waves. Without realizing it, the body has an innate, automatic response to color and light. This can happen because color is fundamentally an element of light, and light is one form of energy. Giving energy to the body has a positive effect. When applied to the body, each color has its own energy characteristics. The use of color depends on the problems each person experiences.

2. Colors on computer

Colors on computer monitors are commonly based on what is called the RGB color

model. Colors in this system are created by adding together percentages of the primary

colors red (R), green (G), and blue (B). One way to do this is to identify the primary colors

with the vectors

r = (1, 0, 0) (pure red),

g = (0, 1, 0) (pure green),

b = (0, 0, 1) (pure blue)

in R3 and to create all other colors by forming linear combinations of r, g, and b using

coefficients between 0 and 1, inclusive; these coefficients represent the percentage of

each pure color in the mix. The set of all such color vectors is called RGB space or the

RGB color cube. Thus, each color vector c in this cube is expressible as a linear

combination of the form

c = k1r + k2g + k3b

= k1(1, 0, 0) + k2(0, 1, 0) + k3(0, 0, 1)

= (k1, k2, k3)

where 0 ≤ ki ≤ 1. As indicated in the figure, the corners of the cube represent the pure

primary colors together with the colors black, white, magenta, cyan, and yellow. The

vectors along the diagonal

running from black to white correspond to shades of gray.
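The linear-combination view of RGB colors can be sketched directly (coefficients chosen for illustration):

```python
def mix(k1, k2, k3):
    # c = k1*r + k2*g + k3*b with 0 <= ki <= 1
    assert all(0 <= k <= 1 for k in (k1, k2, k3))
    r, g, b = (1, 0, 0), (0, 1, 0), (0, 0, 1)
    return tuple(k1 * r[i] + k2 * g[i] + k3 * b[i] for i in range(3))

assert mix(1, 0, 1) == (1, 0, 1)                 # magenta corner: red + blue
assert mix(0.5, 0.5, 0.5) == (0.5, 0.5, 0.5)     # a gray on the black-white diagonal
```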

5.2 Analysis

In applying this material, students should be able to analyze the benefits of matrices in everyday life and learn how to manage and develop the theory well.

CHAPTER VI

CLOSING

6.1 Conclusion

6.1.1 The advantages of book Elementary Linear Algebra Applications

Version Howard

1. The book has a cover that attracts readers.

2. Every important word is set in bold, which makes those words easier to find.

3. The arrangement of chapters and sub-chapters is quite good.

4. Pictures and tables are included for more detailed explanation.

5. Examples are provided along with their solutions.

6.1.2 The Advantages of book Linear Algebra 1 Stefan Martynkiw

1. The book has a cover with attractive colors that invite reading.

2. The contents use many colors, which makes the book more interesting to read.

3. The material is described completely.

4. Formulas are presented together with example problems and discussion.

5. Drawings and tables explaining the material are included.

6. Important notes are placed in the left margin of the book.

6.2 Suggestion

Thanks to the friends who helped complete this Critical Book Review, so that we could finish it just in time. In writing it, we truly need input from lecturers and from all our friends to perfect this Critical Book Review.

Bibliography

Anton, Howard. Elementary Linear Algebra, Tenth Edition. United States of America : Wiley

Martynkiw, Stefan. 2010. Linear Algebra I. United States of America : Wiley
