Numerical Analysis


P. 134. 1. Prove: (a) if U is a nonsingular upper triangular matrix, then U^{-1} is upper triangular; (b) the inverse of a unit lower triangular matrix is unit lower triangular; (c) the product of two upper (lower) triangular matrices is upper (lower) triangular.

Proof. All three statements follow by induction on the order of the matrix, or by direct computation with the defining equations.

P. 134. 2. Prove that if a nonsingular matrix A has an LU-factorization in which L is a unit lower triangular matrix, then L and U are unique.

Proof. This follows from the results in Ex. 1. If A = L1 U1 = L2 U2 with L1, L2 unit lower triangular, then L2^{-1} L1 = U2 U1^{-1}. By Ex. 1 the left side is unit lower triangular and the right side is upper triangular, so both equal I, giving L1 = L2 and U1 = U2.

P. 134. 3. Prove that algorithms B), C), F), and G) always solve Ax = b if A is nonsingular.

Proof. The proof is simple.

P. 134. 5. Show that if all the leading principal minors of A are nonsingular and l_ii ≠ 0 for each i, then u_kk ≠ 0 for 1 ≤ k ≤ n.

Proof. This is because A_k = L_k U_k, where A_k, L_k and U_k are the kth leading principal submatrices of A, L and U, respectively. Since A_k is nonsingular,

det A_k = det L_k det U_k = (l_11 ··· l_kk)(u_11 ··· u_kk) ≠ 0

so u_kk ≠ 0.

P. 134. 6. Prove that the matrix

A = [ 0 1 ]
    [ 1 1 ]

does not have an LU-factorization. Caution: this is not a simple consequence of the theorem proved in this section.

Solution. This is because of the zero element on the diagonal: if A = LU, then a_11 = l_11 u_11 = 0, so l_11 = 0 or u_11 = 0, making L or U singular, and then A = LU would be singular. But det A = -1 ≠ 0, a contradiction.

P. 134. 7. (a) Write the row version of the Doolittle algorithm that computes the kth row of L and the kth row of U at the kth step. (b) Write the column version of the Doolittle algorithm, which computes the kth column of U and the kth column of L at the kth step.

Solution. Write

A = LU = [ 1               ] [ u11 u12 ... u1n ]
         [ l21 1           ] [     u22 ... u2n ]
         [ ...     ...     ] [          ...    ]
         [ ln1 ln2 ...   1 ] [             unn ]

Equating entries of A with those of LU gives, alternately by columns and rows,

(u11, u12, ..., u1n) = (a11, a12, ..., a1n)
(l21, l31, ..., ln1) = (a21, a31, ..., an1)/u11
(u22, u23, ..., u2n) = (a22 - l21 u12, a23 - l21 u13, ..., a2n - l21 u1n)
(l32, l42, ..., ln2) = (a32 - l31 u12, a42 - l41 u12, ..., an2 - ln1 u12)/u22

and so on; e.g., in the row ordering,

l21 = a21/u11, (u22, ..., u2n) = (a22 - l21 u12, ..., a2n - l21 u1n)
l31 = a31/u11, l32 = (a32 - l31 u12)/u22, (u33, ..., u3n) = (a33 - l31 u13 - l32 u23, ..., a3n - l31 u1n - l32 u2n)

(a) Row version:

1. For i = 1, ..., n Do:
2.   For j = 1, ..., i-1 Do:
3.     l_ij = (a_ij - Σ_{k=1}^{j-1} l_ik u_kj)/u_jj
4.   EndDo
5.   For j = i, ..., n Do:
6.     u_ij = a_ij - Σ_{k=1}^{i-1} l_ik u_kj
7.   EndDo
8. EndDo

(b) Column version:

1. For j = 1, ..., n Do:
2.   For i = 1, ..., j Do:
3.     u_ij = a_ij - Σ_{k=1}^{i-1} l_ik u_kj
4.   EndDo
5.   For i = j+1, ..., n Do:
6.     l_ij = (a_ij - Σ_{k=1}^{j-1} l_ik u_kj)/u_jj
7.   EndDo
8. EndDo
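The row version above can be sketched in Python; this is a minimal illustration with plain lists (the function name and data layout are ours, not the book's), with no pivoting:

```python
def doolittle(A):
    """Doolittle LU factorization (row version, no pivoting): A = L U with
    L unit lower triangular and U upper triangular. A is an n x n list of lists."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):                      # step i fills row i of L, then row i of U
        for j in range(i):
            L[i][j] = (A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))) / U[j][j]
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
    return L, U
```

For a nonsingular matrix with nonzero pivots this reproduces A = LU exactly, e.g. for the matrix of Ex. 29 below.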

P. 134. 8. By use of the equation U U^{-1} = I, obtain an algorithm for finding the inverse of an upper triangular matrix.

Solution. Let X be the inverse of the upper triangular matrix U. Then X must be upper triangular and UX = I, which has the component form (for i ≤ j)

Σ_{k=1}^{n} u_ik x_kj = δ_ij,   i.e.   u_ii x_ij + Σ_{k=i+1}^{j} u_ik x_kj = δ_ij

So

x_ij = (δ_ij - Σ_{k=i+1}^{j} u_ik x_kj)/u_ii

1. For j = 1, ..., n Do:
2.   For i = j, j-1, ..., 1 Do:
3.     x_ij = (δ_ij - Σ_{k=i+1}^{j} u_ik x_kj)/u_ii
4.   EndDo
5. EndDo
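This column-by-column back substitution can be sketched as follows (a minimal Python illustration; the function name is ours):

```python
def upper_tri_inverse(U):
    """Invert an upper triangular matrix U by solving U X = I column by column,
    sweeping each column from the diagonal upward (x_ij = (delta_ij - sum)/u_ii)."""
    n = len(U)
    X = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j, -1, -1):          # i = j, j-1, ..., 0
            delta = 1.0 if i == j else 0.0
            s = sum(U[i][k] * X[k][j] for k in range(i + 1, j + 1))
            X[i][j] = (delta - s) / U[i][i]
    return X
```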

P. 134. Show that the matrix

[ 0 a ]
[ 0 b ]

has an LU-factorization.

Solution. Writing

[ 0 a ]   [ l11  0  ] [ u11 u12 ]
[ 0 b ] = [ l21 l22 ] [  0  u22 ]

gives the equations l11 u11 = 0, l11 u12 = a, l21 u11 = 0 and l21 u12 + l22 u22 = b. Take u11 = 0, l11 = 1, u12 = a, l21 = 0, l22 = 1, u22 = b; all the equations hold. (Indeed the matrix is already upper triangular, so A = IA works.)

The matrix

[ 0 0 ]
[ a b ]

also has an LU-factorization. Does it have one?

Solution. Writing

[ 0 0 ]   [ l11  0  ] [ u11 u12 ]
[ a b ] = [ l21 l22 ] [  0  u22 ]

gives l11 u11 = 0, l11 u12 = 0, l21 u11 = a and l21 u12 + l22 u22 = b. If a ≠ 0, then l21 u11 = a forces u11 ≠ 0, so l11 = 0. Choosing l21 = 1, u11 = a, u12 = b and l22 = u22 = 0 satisfies all the equations, so this matrix also has an LU-factorization (though not one with L unit lower triangular, since l11 = 0).

P. 134. 15. Find all the LU-factorizations of

A = [ 1  5 ]
    [ 3 15 ]

in which L is unit lower triangular.

Solution. With L unit lower triangular, the 1st row of U and the 1st column of L are forced: u11 = 1, u12 = 5, l21 = 3, and then u22 = 15 - 3·5 = 0. Hence the factorization is unique:

[ 1  5 ]   [ 1 0 ] [ 1 5 ]
[ 3 15 ] = [ 3 1 ] [ 0 0 ]

P. 134. 16. If A is invertible and has an LU-decomposition, then all leading principal minors of A are nonsingular.

Solution. If A is invertible and has an LU-decomposition, then L and U are also invertible. From A = LU we find that A_k = L_k U_k, where A_k (L_k, U_k) denotes the submatrix of A (L, U) formed from its first k rows and first k columns. Since the triangular matrices L and U are nonsingular, their diagonal entries are nonzero, so L_k and U_k are nonsingular; thus A_k = L_k U_k is nonsingular.

P. 134. 19. Prove or disprove: If A has an LU-factorization in which L is unit lower triangular, then it has an LU-factorization in which U is unit upper triangular.

Solution. If A is nonsingular, the claim holds: then U is nonsingular, so we may define D = diag(u11, ..., unn) and write A = (LD)(D^{-1}U), where LD is lower triangular and D^{-1}U is unit upper triangular. If A is singular the claim can fail: A = [0 1; 0 0] = IA has L = I unit lower triangular, but a factorization A = L'U' with U' unit upper triangular would require l'_11 = 0 and l'_11 u'_12 = 1, a contradiction.

P. 134. 22. Use the Cholesky Theorem to prove that these two properties of a symmetric matrix A are equivalent: (a) A is positive definite; (b) there exist x^(1), x^(2), ..., x^(n) in R^n such that A_ij = (x^(i))^T x^(j).

Solution. By the Cholesky Theorem, a symmetric positive definite matrix A has the decomposition A = XX^T, where X is lower triangular with positive diagonal entries. Let x^(i) be the ith row of X; then A = XX^T is equivalent to A_ij = (x^(i))^T x^(j) for 1 ≤ i, j ≤ n, which gives (a) ⇒ (b). Note that the rows of X form a linearly independent set of vectors. Conversely, if A_ij = (x^(i))^T x^(j), then for any v,

v^T A v = || Σ_i v_i x^(i) ||_2^2 ≥ 0

with equality only for v = 0 when the x^(i) are linearly independent, so A is positive definite.

P. 134. 24. Prove that if all the leading principal minors of A are nonsingular, then A has a factorization LDU in

which L is unit lower triangular, U is unit upper triangular, and D is diagonal.

Solution. Since all leading principal minors are nonsingular, A has the LU-factorization

A = L U1

where L is unit lower triangular and U1 is upper triangular with nonzero diagonal. Write U1 = DU, where the diagonal matrix D is the diagonal part of U1; then U is unit upper triangular, and we have the desired decomposition A = LDU.

P. 134. (Continuation) If A is a symmetric matrix whose leading principal minors are nonsingular, then A has a factorization LDL^T in which L is unit lower triangular and D is diagonal.

Solution. First, A has a factorization A = LDU in which L is unit lower triangular, U is unit upper triangular, and D is diagonal. Since A = A^T, we have L(DU) = U^T(DL^T), where L and U^T are unit lower triangular and DU, DL^T are upper triangular. By the uniqueness of the LU-decomposition, L = U^T. Thus A = LDL^T.

P. 134. 29. Consider

A = [ 2  6  4 ]
    [ 6 17 17 ]
    [ 4 17 20 ]

Determine directly the factorization A = LDL^T, where D is diagonal and L is unit lower triangular; that is, do not use Gaussian elimination.

Solution. Equating entries of A = LDL^T directly:

d1 = a11 = 2,  l21 = a21/d1 = 3,  l31 = a31/d1 = 2
d2 = a22 - l21^2 d1 = 17 - 18 = -1
l32 = (a32 - l31 l21 d1)/d2 = (17 - 12)/(-1) = -5
d3 = a33 - l31^2 d1 - l32^2 d2 = 20 - 8 + 25 = 37

Thus

[ 2  6  4 ]   [ 1  0 0 ] [ 2  0  0 ] [ 1 3  2 ]
[ 6 17 17 ] = [ 3  1 0 ] [ 0 -1  0 ] [ 0 1 -5 ]
[ 4 17 20 ]   [ 2 -5 1 ] [ 0  0 37 ] [ 0 0  1 ]
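The entry-by-entry recurrence used above can be sketched in Python (a minimal illustration; the function name is ours, and no pivoting is performed, so it assumes nonzero d_j):

```python
def ldlt(A):
    """Direct LDL^T factorization of a symmetric matrix: A = L D L^T with
    L unit lower triangular and D diagonal (returned as a list of d_j)."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n):
        d[j] = A[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * d[k] for k in range(j))) / d[j]
    return L, d
```

Applied to the matrix of Ex. 29, it returns d = (2, -1, 37) and the multipliers 3, 2, -5.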

P. 134. 31. Find the LU-factorization of the matrix

A = [ 3 0 1 ]
    [ 0 1 3 ]
    [ 1 3 0 ]

Solution. Gaussian elimination with multipliers l21 = 0, l31 = 1/3, l32 = 3:

[ 3 0 1 ]    [ 3 0   1  ]    [ 3 0    1   ]
[ 0 1 3 ] -> [ 0 1   3  ] -> [ 0 1    3   ]
[ 1 3 0 ]    [ 0 3 -1/3 ]    [ 0 0 -28/3  ]

so

[ 3 0 1 ]   [  1  0 0 ] [ 3 0    1   ]
[ 0 1 3 ] = [  0  1 0 ] [ 0 1    3   ]
[ 1 3 0 ]   [ 1/3 3 1 ] [ 0 0 -28/3  ]

(Check: det A = -28 = 3 · 1 · (-28/3).)

P. 155. 1. Solve the following linear systems twice. First, use Gaussian elimination and give the factorization A = LU. Second, use Gaussian elimination with scaled row pivoting and determine the factorization of the form PA = LU. (c)

[ 1 1 0 3 ] [ x1 ]   [ 4 ]
[ 1 0 3 1 ] [ x2 ] = [ 0 ]
[ 0 1 1 1 ] [ x3 ]   [ 3 ]
[ 3 0 1 2 ] [ x4 ]   [ 1 ]

Solution. Naive Gaussian elimination, storing each multiplier in the position of the zero it creates:

[ 1 1 0 3 ]    [ 1  1 0  3 ]    [ 1  1  0  3 ]    [ 1  1  0  3 ]
[ 1 0 3 1 ]    [ 1 -1 3 -2 ]    [ 1 -1  3 -2 ]    [ 1 -1  3 -2 ]
[ 0 1 1 1 ] -> [ 0  1 1  1 ] -> [ 0 -1  4 -1 ] -> [ 0 -1  4 -1 ]
[ 3 0 1 2 ]    [ 3 -3 1 -7 ]    [ 3  3 -8 -1 ]    [ 3  3 -2 -3 ]

so A = LU with

L = [ 1  0  0 0 ]    U = [ 1  1 0  3 ]
    [ 1  1  0 0 ]        [ 0 -1 3 -2 ]
    [ 0 -1  1 0 ]        [ 0  0 4 -1 ]
    [ 3  3 -2 1 ]        [ 0  0 0 -3 ]

For scaled row pivoting the scale vector is s = (3, 3, 1, 3). In step 1 the ratios |a_i1|/s_i = (1/3, 1/3, 0, 1) select row 4 as pivot; in step 2 the ratios (1/3, 0, 1) select row 3; in step 3 the ratios (4/9, 8/9) select row 2. Storing multipliers in place gives the final array

B = [ 1/3  1 -1/2 3/2 ]   s1 = 3
    [ 1/3  0  8/3 1/3 ]   s2 = 3
    [  0   1   1   1  ]   s3 = 1
    [  3   0   1   2  ]   s4 = 3

where p1 = 4, p2 = 3, p3 = 2, p4 = 1. With (P)_ij = δ_{p_i, j},

P = [ 0 0 0 1 ]
    [ 0 0 1 0 ]
    [ 0 1 0 0 ]
    [ 1 0 0 0 ]

and, reading off u_ij = B_{p_i, j} (i ≤ j) and l_ij = B_{p_i, j} (i > j),

U = [ 3 0  1    2  ]    L = [  1   0   0   0 ]
    [ 0 1  1    1  ]        [  0   1   0   0 ]
    [ 0 0 8/3  1/3 ]        [ 1/3  0   1   0 ]
    [ 0 0  0   3/2 ]        [ 1/3  1 -1/2  1 ]

We have PA = LU.

P. 155. 3. Let (p1, p2, ..., pn) be a permutation of (1, 2, ..., n) and define the matrix P by P_ij = δ_{p_i, j}. Let A be an arbitrary n × n matrix. Describe PA, AP, P^{-1}, and PAP^{-1}.

Solution. The ith row of PA is the p_i th row of A:

(PA)_ij = Σ_{k=1}^{n} P_ik A_kj = Σ_{k=1}^{n} δ_{p_i, k} A_kj = A_{p_i, j}

The jth column of AP is a column of A, namely the kth one with p_k = j:

(AP)_ij = Σ_{k=1}^{n} A_ik P_kj = Σ_{k=1}^{n} A_ik δ_{p_k, j} = A_{i, k} where p_k = j

so the columns of A are permuted by the inverse permutation. Next,

δ_ij = (P P^{-1})_ij = Σ_{k=1}^{n} P_ik (P^{-1})_kj = Σ_{k=1}^{n} δ_{p_i, k} (P^{-1})_kj = (P^{-1})_{p_i, j}

which says exactly that P^{-1} = P^T. Finally,

(PAP^{-1})_ij = Σ_{k=1}^{n} (PA)_ik (P^{-1})_kj = Σ_{k=1}^{n} A_{p_i, k} (P^{-1})_kj = A_{p_i, p_j}

so PAP^{-1} is A with both its rows and its columns permuted by p.

P. 155. 4. Gaussian elimination with full pivoting treats both rows and columns in an order different from the

natural order. Thus, in the first step, the pivot element aij is chosen, so that |aij | is the largest in the entire matrix.

This determines that row i will be the pivot row and column j will be the pivot column. Zeros are created in column

j by subtracting multiples of row i from the other rows.

Solution. Let p1, p2, ..., pn be the indices of the rows in the order in which they become pivot rows, and let q1, q2, ..., qn be the indices of the columns in the order in which they become pivot columns. The ith pivot element is located at (p_i, q_i) (1 ≤ i ≤ n). Let A^(1) = A, and define A^(2), ..., A^(n) recursively by the formula

a^(k+1)_{p_i q_j} = a^(k)_{p_i q_j}                                              if i ≤ k, or i > k > j
a^(k+1)_{p_i q_j} = a^(k)_{p_i q_j} - (a^(k)_{p_i q_k}/a^(k)_{p_k q_k}) a^(k)_{p_k q_j}   if i > k and j > k
a^(k+1)_{p_i q_j} = a^(k)_{p_i q_k}/a^(k)_{p_k q_k}                              if i > k and j = k

Define a permutation matrix P whose elements are P_ij = δ_{p_i, j} and define Q_ij = δ_{i, q_j}. Define an upper triangular matrix U whose elements are u_ij = a^(n)_{p_i q_j} if j ≥ i. Define a unit lower triangular matrix L whose elements are l_ij = a^(n)_{p_i q_j} if j < i. Then PAQ = LU.

Proof. From the recursive formula,

u_kj = a^(n)_{p_k q_j} = a^(k)_{p_k q_j},   k ≤ j

because the p_k th row does not change during the elimination from A^(k) to A^(n). Likewise,

l_ik = a^(n)_{p_i q_k} = a^(k+1)_{p_i q_k} = a^(k)_{p_i q_k}/a^(k)_{p_k q_k},   k < i

because the q_k th column does not change during the elimination from A^(k+1) to A^(n). Now let i ≤ j (the case i > j is analogous). Then

(LU)_ij = Σ_{k=1}^{i} l_ik u_kj = Σ_{k=1}^{i-1} (a^(k)_{p_i q_k}/a^(k)_{p_k q_k}) a^(k)_{p_k q_j} + a^(i)_{p_i q_j}
        = Σ_{k=1}^{i-1} (a^(k)_{p_i q_j} - a^(k+1)_{p_i q_j}) + a^(i)_{p_i q_j} = a^(1)_{p_i q_j} = a_{p_i, q_j} = (PAQ)_ij

P. 155. 8. Let the n × n matrix A be processed by forward elimination, with the resulting matrix called B and permutation vector p = (p1, p2, ..., pn). Let P be the matrix that results from the identity matrix by writing its rows in the order p1, p2, ..., pn. Prove that the LU-decomposition of PA is obtained as follows: put C = PB, L_ij = C_ij for j < i, and U_ij = C_ij for i ≤ j. (Of course, U_ij = 0 if i > j, L_ij = 0 if j > i, and L_ii = 1.)

Solution. P has elements P_ij = δ_{p_i, j}, so the ith row of C = PB is the p_i th row of B, i.e. C_ij = B_{p_i, j}. By the same argument as in Ex. 4 (with q_j = j, i.e. no column interchanges), the upper triangular matrix U with u_ij = B_{p_i, j} for j ≥ i and the unit lower triangular matrix L with l_ij = B_{p_i, j} for j < i satisfy PA = LU.

P. 155. 10. Show how Gaussian elimination with scaled row pivoting works on this example (forward phase only):

[ 2 -2  4 ]
[ 1  1  1 ]
[ 3  7 -5 ]

Solution. The scale vector is s = (4, 1, 7). The first-step ratios |a_i1|/s_i = (1/2, 1, 3/7) select row 2 as pivot; the second-step ratios (4/4, 4/7) select row 1. Storing multipliers in place:

[ 2 -2  4 ]    [ 2 -4  2 ]    [ 2 -4  2 ]
[ 1  1  1 ] -> [ 1  1  1 ] -> [ 1  1  1 ] = B
[ 3  7 -5 ]    [ 3  4 -8 ]    [ 3 -1 -6 ]

Hence p1 = 2, p2 = 1, p3 = 3, and

P = [ 0 1 0 ]    U = [ 1  1  1 ]    L = [ 1  0 0 ]
    [ 1 0 0 ]        [ 0 -4  2 ]        [ 2  1 0 ]
    [ 0 0 1 ]        [ 0  0 -6 ]        [ 3 -1 1 ]

We have PA = LU.

P. 155. 12. Assume that A is tridiagonal. Define c0 = 0 and a_n = 0. Show that if A is columnwise diagonally dominant,

|d_i| > |a_i| + |c_{i-1}|   (1 ≤ i ≤ n)

then the algorithm for tridiagonal systems will, in theory, be successful, since no zero pivot entries will be encountered.

Solution. Write

A = [ d1 c1                        ]
    [ a1 d2 c2                     ]
    [       ...                    ]
    [       a_{n-2} d_{n-1} c_{n-1}]
    [               a_{n-1}  d_n   ]

The first elimination step replaces d2 by d2' = d2 - a1 c1/d1 (note d1 ≠ 0 since |d1| > |a1| ≥ 0). The remaining submatrix of order n - 1 is again tridiagonal:

[ d2' c2                ]
[ a2  d3 c3             ]
[        ...            ]
[        a_{n-1}  d_n   ]

and, since |a1| < |d1| implies |a1 c1/d1| ≤ |c1|,

|d2'| ≥ |d2| - |a1 c1/d1| > |a2| + |c1| - |a1 c1/d1| ≥ |a2|

So this submatrix is also columnwise diagonally dominant, and we can continue with the second step of Gaussian elimination without pivoting; by induction, no zero pivot is encountered.

P. 155. 17. Show how Gaussian elimination with scaled row pivoting works on this example.

Solution. It is similar to Ex. 10.

P. 155. 30. Use Gaussian elimination with scaled row pivoting to find the determinant of ...

Solution. It is similar to Ex. 10. We first use Gaussian elimination with scaled row pivoting to obtain PA = LU. Then det A = det(P)^{-1} det(L) det(U) = ± u11 u22 ··· unn, where the sign is the signature of the permutation p.

P. 155. 40. Count the number of long operations involved in the LU-factorization of an n × n matrix, assuming that no pivoting is employed.

Solution. The number of multiplications is Σ_{k=1}^{n-1} (n - k)^2 = (n-1)n(2n-1)/6 ≈ n^3/3.

P. 178. 2. Prove that if A is invertible and ||B - A|| < ||A^{-1}||^{-1}, then B is invertible.

Solution. ||I - A^{-1}B|| = ||A^{-1}(A - B)|| ≤ ||A^{-1}|| ||B - A|| < 1, so A^{-1}B is invertible (Neumann series). Therefore B = A(A^{-1}B) is also invertible, since A is invertible.

P. 178. 3. Prove that if ||A|| < 1, then

||(I - A)^{-1}|| ≥ 1/(1 + ||A||)

Solution.

||(I - A)^{-1}|| (1 + ||A||) ≥ ||(I - A)^{-1}|| ||I - A|| ≥ ||(I - A)^{-1}(I - A)|| = ||I|| = 1

P. 178. 4. Prove that if A is invertible and ||A - B|| < ||A^{-1}||^{-1}, then

||A^{-1} - B^{-1}|| ≤ ||A^{-1}|| ||I - A^{-1}B|| / (1 - ||I - A^{-1}B||)

Solution. Since A^{-1} - B^{-1} = (I - (A^{-1}B)^{-1}) A^{-1},

||A^{-1} - B^{-1}|| ≤ ||A^{-1}|| ||I - (A^{-1}B)^{-1}||

Let C = I - A^{-1}B; then ||C|| ≤ ||A^{-1}|| ||A - B|| < 1, and

||I - (I - C)^{-1}|| = ||I - Σ_{k=0}^{∞} C^k|| = ||Σ_{k=1}^{∞} C^k|| ≤ ||C|| Σ_{k=0}^{∞} ||C||^k = ||C||/(1 - ||C||)

Combining the two inequalities gives the result.

P. 178. 6. Prove that if A is invertible, then for any B,

||B - A^{-1}|| ≥ ||I - AB|| / ||A||

Solution.

||I - AB|| = ||A(A^{-1} - B)|| ≤ ||A|| ||B - A^{-1}||

P. 178. 7. Prove or disprove: If 1 = ||A|| > ||B||, then A - B is invertible.

Solution. False. Choose a singular A with ||A|| = 1 (e.g. A = e1 e1^T in the 2-norm) and B = (1/2)A; then ||B|| = 1/2 < 1 = ||A||, but A - B = (1/2)A is singular.

P. 178. 9. Prove or disprove: If ||AB - I|| < 1, then ||BA - I|| < 1.

Solution. False in general (although AB - I and BA - I always have the same spectrum). For example, in the ∞-norm take

A = [  1  0 ]    B = [ 1/2 0 ]
    [ 3/2 1 ]        [  0  1 ]

Then

AB - I = [ -1/2 0 ]    BA - I = [ -1/2 0 ]
         [  3/4 0 ]             [  3/2 0 ]

so ||AB - I||∞ = 3/4 < 1 while ||BA - I||∞ = 3/2 > 1.

P. 178. 11. Prove that if A is invertible and ||B - A|| < ||A^{-1}||^{-1}, then

B^{-1} = A^{-1} Σ_{k=0}^{∞} (I - BA^{-1})^k

Solution. First, ||I - BA^{-1}|| = ||(A - B)A^{-1}|| ≤ ||A^{-1}|| ||A - B|| < 1. Let C = I - BA^{-1}; by the Neumann series,

(I - C)^{-1} = Σ_{k=0}^{∞} C^k

Since I - C = BA^{-1}, this says A B^{-1} = Σ_{k=0}^{∞} (I - BA^{-1})^k, and multiplying by A^{-1} on the left gives the stated formula.
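The series can be checked numerically; the following is a minimal Python sketch (function name and the plain-list matrix representation are ours), truncating the series at a fixed number of terms:

```python
def neumann_inverse(B, A_inv, terms=60):
    """Approximate B^{-1} via B^{-1} = A^{-1} * sum_k (I - B A^{-1})^k,
    valid when ||I - B A^{-1}|| < 1. Plain n x n lists of lists."""
    n = len(B)
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    BA = matmul(B, A_inv)
    C = [[I[i][j] - BA[i][j] for j in range(n)] for i in range(n)]
    S = [row[:] for row in I]       # partial sum of the series
    P = [row[:] for row in I]       # current power C^k
    for _ in range(terms):
        P = matmul(P, C)
        S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return matmul(A_inv, S)
```

With A = I the hypothesis reduces to ||I - B|| < 1, the case of Ex. 3 and Ex. 20.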

P. 178. 14. Prove that if inf_{λ∈R} ||I - λA|| < 1, then A is invertible.

Solution. Since inf_{λ∈R} ||I - λA|| < 1, we can find λ ∈ R, necessarily λ ≠ 0, such that ||I - λA|| < 1. Then λA is invertible, and hence A is also invertible.

P. 178. 18. Prove that if E is an n × n matrix for which ||E|| is sufficiently small, then

||(I - E)^{-1} - (I + E)|| ≤ 3||E||^2

How small must ||E|| be?

Solution.

||(I - E)^{-1} - (I + E)|| = ||Σ_{k=2}^{∞} E^k|| ≤ ||E||^2 Σ_{k=0}^{∞} ||E||^k = ||E||^2 / (1 - ||E||)

and

||E||^2 / (1 - ||E||) ≤ 3||E||^2  ⟺  1/(1 - ||E||) ≤ 3

i.e., ||E|| ≤ 2/3.

P. 178. 20. Consider the vector space V of all continuous functions defined on the interval [0, 1]. Two important norms on V are

||x||∞ = max_{0≤t≤1} |x(t)|,   ||x||_1 = ∫_0^1 |x(t)| dt

Show that the sequence of functions x_n(t) = t^n has the properties ||x_n||∞ = 1 and ||x_n||_1 → 0 as n → ∞. Thus these norms lead to different concepts of convergence.

Solution. ||x_n||∞ = max_{0≤t≤1} t^n = 1 (attained at t = 1), while ||x_n||_1 = ∫_0^1 t^n dt = 1/(n+1) → 0. This is possible because on an infinite-dimensional space the norms need not be equivalent.

P. 178. 21. Prove that if ||AB - I|| < 1, then 2B - BAB is a better approximate inverse for A than B, in the sense that A(2B - BAB) is closer to I.

Solution. We want to prove that ||I - A(2B - BAB)|| ≤ ||I - AB||. Since

I - A(2B - BAB) = I - 2AB + (AB)^2 = (I - AB)^2

we have

||I - A(2B - BAB)|| ≤ ||I - AB||^2 < ||I - AB||

using ||I - AB|| < 1.
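Iterating the map B ← 2B - BAB squares the residual at every step (Newton-Schulz iteration). A minimal Python sketch (function name and list representation are ours):

```python
def refine_inverse(A, B, steps=5):
    """Newton-Schulz refinement B <- 2B - B A B. If ||AB - I|| < 1 the
    residual I - A B squares at every step. Plain n x n lists of lists."""
    n = len(A)
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    for _ in range(steps):
        AB = matmul(A, B)
        B = [[2 * B[i][j] - sum(B[i][k] * AB[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]
    return B
```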

P. 178. 25. Give a series that represents A^{-1} under the assumption that ||I - λA|| < 1 for some known scalar λ.

Solution. Since ||I - λA|| < 1, the Neumann series gives

(λA)^{-1} = Σ_{k=0}^{∞} (I - λA)^k

i.e.,

A^{-1} = λ Σ_{k=0}^{∞} (I - λA)^k

P. 178. 27. Prove that if A is ill conditioned, then there is a singular matrix near A. In fact, there is a singular matrix within distance ||A||/κ(A) of A.

P. 178. 31. Prove that if there is a polynomial p without constant term such that

||I - p(A)|| < 1

then A is invertible.

Solution. Since ||I - p(A)|| < 1, p(A) is invertible. Because p has no constant term, p(A) = A q(A) = q(A) A for some polynomial q; hence A is also invertible.

P. 178. 32. Prove that if p is a polynomial with constant term c0 and if |c0| + ||I - p(A)|| < 1, then A is invertible.

Solution. Let q = p - c0, which has no constant term. Then ||I - q(A)|| = ||I - p(A) + c0 I|| ≤ ||I - p(A)|| + |c0| < 1, and the result of Ex. 31 applies.

P. 201. 1. Prove that if A is diagonally dominant and if Q is chosen as in the Jacobi method, then

ρ(I - Q^{-1}A) < 1

Solution. Here Q = D = diag(a11, ..., ann). Let λ be an eigenvalue of I - Q^{-1}A with corresponding eigenvector x, normalized so that ||x||∞ = 1. We have

(I - Q^{-1}A)x = λx,   or   Qx - Ax = λQx

i.e., componentwise,

-Σ_{j≠i} a_ij x_j = λ a_ii x_i,   1 ≤ i ≤ n

Choosing an index i with |x_i| = 1 and using diagonal dominance,

|λ| |a_ii| = |Σ_{j≠i} a_ij x_j| ≤ Σ_{j≠i} |a_ij| < |a_ii|

so |λ| < 1.
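The convergence guaranteed by this bound can be observed directly; here is a minimal Python sketch of the Jacobi iteration (function name and zero starting vector are our choices):

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration with splitting Q = D: each sweep solves
    a_ii x_i = b_i - sum_{j != i} a_ij x_j using the previous iterate.
    Converges when A is diagonally dominant."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```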

P. 201. 2. Prove that if A has this property (unit row diagonal dominance)

a_ii = 1 > Σ_{j≠i} |a_ij|   (1 ≤ i ≤ n)

then the Richardson iteration converges.

Solution. For the Richardson iteration the splitting matrix is Q = I, so the iteration matrix is I - A. Following the proof of Ex. 1: if (I - A)x = λx with ||x||∞ = |x_i| = 1, then (using a_ii = 1) λx_i = -Σ_{j≠i} a_ij x_j, so |λ| ≤ Σ_{j≠i} |a_ij| < 1.

P. 201. 3. Repeat Problem 2 with this assumption (unit column diagonal dominance)

a_jj = 1 > Σ_{i≠j} |a_ij|   (1 ≤ j ≤ n)

Solution. Let λ be an eigenvalue of I - Q^{-1}A = I - A with a corresponding left eigenvector x, normalized so that ||x||∞ = 1. We have

x^T (I - A) = λx^T

i.e., for each column j (using a_jj = 1),

λ x_j = -Σ_{i≠j} a_ij x_i,   1 ≤ j ≤ n

Choosing an index j with |x_j| = 1,

|λ| ≤ Σ_{i≠j} |a_ij| < 1

P. 201. 5. Let || · || be a norm on R^n, and let S be an n × n nonsingular matrix. Define ||x||' = ||Sx||, and prove that || · ||' is a norm.

Solution. Each axiom follows directly: ||x||' = ||Sx|| ≥ 0, with equality iff Sx = 0, i.e. (since S is nonsingular) iff x = 0; ||cx||' = |c| ||x||'; and ||x + y||' = ||Sx + Sy|| ≤ ||x||' + ||y||'.

P. 201. 6. (Continuation) Let || · || be a subordinate matrix norm, and let S be a nonsingular matrix. Define ||A||' = ||SAS^{-1}||, and show that || · ||' is a subordinate matrix norm.

Solution. Substituting x = Sy,

||A||' = ||SAS^{-1}|| = sup_{x≠0} ||SAS^{-1}x|| / ||x|| = sup_{y≠0} ||SAy|| / ||Sy|| = sup_{y≠0} ||Ay||' / ||y||'

so || · ||' is the matrix norm subordinate to the vector norm ||y||' = ||Sy|| of Ex. 5.

P. 201. 7. Using Q as in the Gauss-Seidel method, prove that if A is diagonally dominant, then ||I - Q^{-1}A||∞ < 1.

Solution. Let y = (I - Q^{-1}A)x, so Qy = (Q - A)x, i.e., componentwise,

a_ii y_i = -Σ_{j<i} a_ij y_j - Σ_{j>i} a_ij x_j

Assume ||x||∞ = 1 and set r = max_i Σ_{j>i} |a_ij| / (|a_ii| - Σ_{j<i} |a_ij|); diagonal dominance gives r < 1. By induction on i, |y_i| ≤ r: indeed

|y_i| ≤ (r Σ_{j<i} |a_ij| + Σ_{j>i} |a_ij|) / |a_ii| ≤ (r Σ_{j<i} |a_ij| + r(|a_ii| - Σ_{j<i} |a_ij|)) / |a_ii| = r

Hence ||I - Q^{-1}A||∞ ≤ r < 1.

P. 201. 8. Prove that ρ(A) < 1 if and only if lim_{k→∞} A^k x = 0 for every x.

Solution. First recall that

ρ(A) = inf_{||·||} ||A||

where the infimum is over all subordinate matrix norms. If ρ(A) < 1, we can find a subordinate matrix norm with ||A|| < 1; then ||A^k x|| ≤ ||A||^k ||x|| → 0 for every x. Conversely, if A^k x → 0 for every x, then (applying this to the columns of the identity) A^k → 0 as a matrix. Write the Jordan form

P^{-1}AP = J = diag(J_1, J_2, ..., J_r),   J_i = λ_i I + E_i  (n_i × n_i, Σ_i n_i = n)

where E_i is the nilpotent shift block, so E_i^{n_i} = 0. Then A^k = P J^k P^{-1} with J^k = diag(J_1^k, ..., J_r^k), and by the binomial theorem

J_i^k = (λ_i I + E_i)^k = Σ_{j=0}^{n_i - 1} C(k, j) λ_i^{k-j} E_i^j,   k ≥ n_i

an upper triangular band matrix whose diagonal entries are λ_i^k. Hence

A^k → 0  ⟺  J_i^k → 0 for each i  ⟺  |λ_i| < 1 for each i

since for |λ_i| ≥ 1 the diagonal entries λ_i^k do not tend to zero, while for |λ_i| < 1 every entry C(k, j) λ_i^{k-j} tends to zero for large k. Therefore A^k x → 0 for every x iff ρ(A) < 1.

P. 201. 10. Which of the norm axioms are satisfied by the spectral radius function ρ and which are not? Give proofs and examples, as appropriate.

Solution. Two axioms hold:

ρ(A) ≥ 0,   ρ(cA) = |c| ρ(A),   c ∈ R

However, ρ(A) = 0 does not imply A = 0: the nilpotent matrix A = [0 1; 0 0] has ρ(A) = 0. The triangle inequality also fails: with this A and B = A^T, ρ(A) = ρ(B) = 0 but ρ(A + B) = 1.

P. 201. 15. Let A be diagonally dominant, and let Q be the lower triangular part of A, as in the Gauss-Seidel method. Prove that ρ(I - Q^{-1}A) is no greater than the largest of the ratios

r_i = Σ_{j=i+1}^{n} |a_ij| / ( |a_ii| - Σ_{j=1}^{i-1} |a_ij| )

Solution. See the convergence proof for the Gauss-Seidel iteration (Ex. 7): the induction there gives ||I - Q^{-1}A||∞ ≤ max_i r_i, and the spectral radius never exceeds a subordinate matrix norm.

P. 201. 19. Is there a matrix A such that ρ(A) < ||A|| for all subordinate matrix norms?

Solution. Yes. First, for any eigenvalue λ with eigenvector x, ||x|| = 1, Ax = λx gives |λ| ≤ ||A||, so ρ(A) ≤ ||A|| in every subordinate norm. For a nonzero nilpotent matrix such as A = [0 1; 0 0], ρ(A) = 0 while ||A|| > 0 in every norm; hence ρ(A) < ||A|| for all subordinate matrix norms.

P. 201. 20. Prove that if ρ(A) < 1, then I - A is invertible and (I - A)^{-1} = Σ_{k=0}^{∞} A^k.

Solution. If ρ(A) < 1, then there exists a subordinate matrix norm || · || such that ||A|| < 1. If I - A were singular, there would exist a nonzero vector x such that (I - A)x = 0. Then

||x|| = ||Ax|| ≤ ||A|| ||x||

which gives ||A|| ≥ 1, a contradiction. For the series, the partial sums satisfy (I - A) Σ_{k=0}^{N} A^k = I - A^{N+1} → I, so the series converges to (I - A)^{-1}.

P. 201. 21. Is the inequality ρ(AB) ≤ ρ(A)ρ(B) true for all pairs of n × n matrices? Is your answer the same when A and B are upper triangular?

Solution. The inequality ρ(AB) ≤ ρ(A)ρ(B) is false in general: A = [0 1; 0 0] and B = [0 0; 1 0] have ρ(A) = ρ(B) = 0, but AB = [1 0; 0 0] has ρ(AB) = 1. If A and B are both upper triangular, it is correct: the eigenvalues of a triangular matrix are its diagonal entries, and (AB)_ii = a_ii b_ii, so ρ(AB) = max_i |a_ii b_ii| ≤ (max_i |a_ii|)(max_i |b_ii|) = ρ(A)ρ(B).

P. 201. 25. Show that for nonsingular matrices A and B, ρ(AB) = ρ(BA).

Solution. BA = A^{-1}(AB)A, so AB and BA are similar and have the same eigenvalues; hence ρ(AB) = ρ(BA).

P. 201. 30. Show that these matrices

R    = I - A
J    = I - D^{-1}A
G    = I - (D - C_L)^{-1}A
L_ω  = I - (ω^{-1}D - C_L)^{-1}A
U_ω  = I - (ω^{-1}D - C_U)^{-1}A
S_ω  = I - ω(2 - ω)(D - ωC_U)^{-1} D (D - ωC_L)^{-1} A

are the iteration matrices for the Richardson, Jacobi, Gauss-Seidel, forward SOR, backward SOR, and SSOR methods, respectively. Then show that the splitting matrices Q and iteration matrices G given in this section are correct.

Solution. In each case the iteration matrix is I - Q^{-1}A for the corresponding splitting matrix Q (Richardson: Q = I; Jacobi: Q = D; Gauss-Seidel: Q = D - C_L; forward SOR: Q = ω^{-1}D - C_L; backward SOR: Q = ω^{-1}D - C_U; SSOR: Q = [ω(2 - ω)]^{-1}(D - ωC_L) D^{-1} (D - ωC_U)), so the verification is a direct computation of Q^{-1}.

P. 201. 31. Find the explicit form for the iteration matrix I - Q^{-1}A in the Gauss-Seidel method when

A = [  2 -1            ]
    [ -1  2 -1         ]
    [     ...  ...     ]
    [       -1  2 -1   ]
    [          -1  2   ]

Solution. Here Q is the lower triangular part of A, and

Q^{-1} = [ 1/2                     ]
         [ 1/4   1/2               ]
         [ 1/8   1/4   1/2         ]
         [ ...         ...         ]
         [ 1/2^n  ...  1/4   1/2   ]

i.e. (Q^{-1})_{ik} = 2^{-(i-k+1)} for k ≤ i. Hence

(I - Q^{-1}A)_{ij} = δ_ij - Σ_{k≤i} A_kj / 2^{i-k+1}
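The closed form can be checked numerically; here is a minimal Python sketch (function name and layout are ours) that assembles I - Q^{-1}A from the formula above:

```python
def gauss_seidel_iteration_matrix(n):
    """Form I - Q^{-1} A for the Gauss-Seidel splitting of the n x n
    tridiagonal matrix A = tridiag(-1, 2, -1), using the explicit inverse
    (Q^{-1})_{ik} = 2^{-(i-k+1)} for k <= i (0-based indices here)."""
    A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
          for j in range(n)] for i in range(n)]
    Qinv = [[2.0 ** -(i - k + 1) if k <= i else 0.0
             for k in range(n)] for i in range(n)]
    return [[(1.0 if i == j else 0.0)
             - sum(Qinv[i][k] * A[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]
```

As expected for Gauss-Seidel, the first column of the iteration matrix is zero.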

P. 201. 33. Give an example of a matrix A that is not diagonally dominant, yet the Gauss-Seidel method applied to Ax = b converges.

Solution. If A is irreducible and weakly diagonally dominant (with strict inequality in at least one row), the Gauss-Seidel method applied to Ax = b converges. For example,

A = [  2 -1            ]
    [ -1  2 -1         ]
    [     ...  ...     ]
    [       -1  2 -1   ]
    [          -1  2   ]

P. 201. 35. Prove that if the number δ = ||I - Q^{-1}A|| is less than 1, then

||x^(k) - x|| ≤ (δ/(1 - δ)) ||x^(k) - x^(k-1)||

Solution. Since the error and the successive differences are both multiplied by the iteration matrix,

||x^(k+1) - x^(k)|| ≤ δ ||x^(k) - x^(k-1)||,   ||x^(k+1) - x|| ≤ δ ||x^(k) - x||

Therefore

||x^(k+1) - x^(k)|| ≥ ||x^(k) - x|| - ||x^(k+1) - x|| ≥ (1 - δ) ||x^(k) - x||

and combining the two displays,

||x^(k) - x|| ≤ ||x^(k+1) - x^(k)||/(1 - δ) ≤ (δ/(1 - δ)) ||x^(k) - x^(k-1)||

P. 234. 1. Let A be an n × n matrix that has a linearly independent set of n eigenvectors, {u^(1), ..., u^(n)}. Let Au^(i) = λ_i u^(i), and let P be the matrix whose columns are the vectors u^(1), ..., u^(n). What is P^{-1}AP?

Solution. P^{-1}AP = diag(λ_1, ..., λ_n).

P. 234. 2. Show that if the normalized and unnormalized versions of the power method are started at the same initial vector, then the values of r in the two algorithms will be the same.

Solution. In the unnormalized method, x^(k) = A^k x^(0). In the normalized method, y^(k-1) = x^(k-1)/||x^(k-1)|| and

x^(k) = Ay^(k-1) = Ax^(k-1)/||x^(k-1)|| = A^2 x^(k-2)/||Ax^(k-2)|| = ... = A^k x^(0)/||A^{k-1} x^(0)||

So in both versions x^(k) is a positive scalar multiple of A^k x^(0), and the ratio r_k = φ(Ax^(k))/φ(x^(k)) is invariant under positive scaling of x^(k); hence the two algorithms produce the same values of r. Writing A^k x^(0) = λ_1^k [a_1 u^(1) + ε^(k)] with ε^(k) → 0, both sequences have the same limit:

r_k = φ(A^{k+1}x^(0))/φ(A^k x^(0)) = λ_1 (a_1 φ(u^(1)) + φ(ε^(k+1)))/(a_1 φ(u^(1)) + φ(ε^(k))) → λ_1
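A minimal Python sketch of the normalized version (the function name is ours; φ is taken to be the first component, which assumes that component of the dominant eigenvector is nonzero):

```python
def power_method(A, x, iters=50):
    """Normalized power method: returns the eigenvalue estimate r and the
    last (unnormalized) iterate. Normalization uses the inf-norm, and the
    linear functional phi is the first component."""
    n = len(A)
    for _ in range(iters):
        y = [xi / max(abs(c) for c in x) for xi in x]      # normalize
        x = [sum(A[i][j] * y[j] for j in range(n)) for i in range(n)]
        r = x[0] / y[0]            # r = phi(A y)/phi(y), phi = first component
    return r, x
```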

P. 234. 3. In the power method, let r_k = φ(x^(k+1))/φ(x^(k)). We know that lim_{k→∞} r_k = λ_1. Show that the relative errors obey

(r_k - λ_1)/λ_1 = c_k (λ_2/λ_1)^k

where the numbers c_k form a convergent (and hence bounded) sequence.

Solution. Write x^(0) = Σ_i a_i u^(i), so that x^(k) = λ_1^k [a_1 u^(1) + ε^(k)], where

ε^(k) = Σ_{i=2}^{n} a_i (λ_i/λ_1)^k u^(i) = (λ_2/λ_1)^k (a_2 u^(2) + o(1)) → 0

Then

r_k / λ_1 = φ(x^(k+1)) / (λ_1 φ(x^(k))) = (a_1 φ(u^(1)) + φ(ε^(k+1))) / (a_1 φ(u^(1)) + φ(ε^(k)))

so

(r_k - λ_1)/λ_1 = (φ(ε^(k+1)) - φ(ε^(k))) / (a_1 φ(u^(1)) + φ(ε^(k))) = c_k (λ_2/λ_1)^k

with

c_k = (λ_1/λ_2)^k (φ(ε^(k+1)) - φ(ε^(k))) / (a_1 φ(u^(1)) + φ(ε^(k)))

Since ε^(k+1) - ε^(k) = (λ_2/λ_1)^k [a_2 (λ_2/λ_1 - 1) u^(2) + o(1)], the sequence c_k converges (to a_2 (λ_2/λ_1 - 1) φ(u^(2)) / (a_1 φ(u^(1)))), hence is bounded.

P. 234. 7. In the normalized power method, show that if λ_1 > 0 then the vectors x_k converge to an eigenvector.

Solution. See the proof of Ex. 2: x^(k) = A^k x^(0)/||A^{k-1}x^(0)|| = (λ_1^k/|λ_1|^{k-1}) (a_1 u^(1) + ε^(k))/||a_1 u^(1) + ε^(k-1)||. For λ_1 > 0 this converges to λ_1 a_1 u^(1)/||a_1 u^(1)||, an eigenvector of A. (For λ_1 < 0 the factor (λ_1/|λ_1|)^k = (-1)^k makes the sign oscillate.)

P. 234. 8. Devise a simple modification of the power method to handle the following case: λ_1 = -λ_2 > |λ_3| ≥ |λ_4| ≥ ... ≥ |λ_n|.

Solution. We still form the power iterates

x^(k) = Ax^(k-1),   k = 1, 2, ...

so that

x^(k) = A^k x^(0) = Σ_{i=1}^{n} a_i λ_i^k u^(i) = λ_1^k [a_1 u^(1) + (-1)^k a_2 u^(2) + ε^(k)]

where

ε^(k) = Σ_{i=3}^{n} a_i (λ_i/λ_1)^k u^(i) → 0

Consequently

φ(x^(k)) = λ_1^k [a_1 φ(u^(1)) + (-1)^k a_2 φ(u^(2)) + φ(ε^(k))]

The one-step ratios oscillate, but the two-step ratios converge:

r_k = φ(x^(k+2))/φ(x^(k)) = λ_1^2 (a_1 φ(u^(1)) + (-1)^k a_2 φ(u^(2)) + φ(ε^(k+2))) / (a_1 φ(u^(1)) + (-1)^k a_2 φ(u^(2)) + φ(ε^(k))) → λ_1^2

and λ_1 = (lim r_k)^{1/2}.

P. 234. 10. Let the eigenvalues of A satisfy λ_1 > λ_2 > ... > λ_n (all real, but not necessarily positive). What value of the parameter μ should be used in order for the power method to converge most rapidly to λ_1 - μ when applied to A - μI?

Solution. The shifted matrix A - μI has eigenvalues λ_i - μ; any μ < (λ_1 + λ_n)/2 ensures

|λ_1 - μ| > |λ_i - μ|   (i = 2, ..., n)

The convergence factor is max(|λ_2 - μ|, |λ_n - μ|)/|λ_1 - μ|, which is smallest when |λ_2 - μ| = |λ_n - μ|, i.e. for μ = (λ_2 + λ_n)/2.

P. 234. 11. Prove that I - AB has the same eigenvalues as I - BA, if either A or B is nonsingular.

Solution. If A is nonsingular, I - BA = A^{-1}(I - AB)A, so the two matrices are similar; the case of nonsingular B is symmetric.

P. 234. 12. If the power method is applied to a real matrix with a real starting vector, what will happen if a dominant eigenvalue is complex? Does the theory outlined in the text apply?

Solution. The theory does not apply. For a real matrix, complex eigenvalues occur in conjugate pairs, so a complex dominant eigenvalue λ_1 is accompanied by λ̄_1 with |λ̄_1| = |λ_1|, and there is no single dominant eigenvalue. The iterates x^(k) remain real, with components behaving like |λ_1|^k cos(kθ + φ_0) where λ_1 = |λ_1| e^{iθ}, so the real ratios r_k oscillate and cannot converge to the complex number λ_1.

P. 234. 16. Let A = LU, where L is unit lower triangular and U is upper triangular. Put B = UL and show that B and A have the same eigenvalues.

Solution. A = LU = L(UL)L^{-1} = LBL^{-1}, so A and B are similar and have the same eigenvalues.

P. 242. 1. Find the Schur factorizations of

A = [ 3 8 ]    B = [ 4  7 ]
    [ 2 3 ]        [ 1 12 ]

Solution. For each matrix, find an eigenvalue λ (for A: λ^2 - 6λ - 7 = 0, so λ = 7 or λ = -1) and a normalized eigenvector q_1. Extend q_1 to a unitary matrix Q = (q_1, q_2); then Q*AQ is upper triangular with λ in the (1,1) position, which is the Schur factorization (for a 2 × 2 matrix a single deflation step suffices). The same procedure applies to B.

P. 242. 2. Prove that the eigenvalues of A lie in the intersection of the two sets D and E defined by

D = ∪_{i=1}^{n} { z ∈ C : |z - a_ii| ≤ Σ_{j≠i} |a_ij| }
E = ∪_{i=1}^{n} { z ∈ C : |z - a_ii| ≤ Σ_{j≠i} |a_ji| }

Solution. The proof that the eigenvalues of A lie in E is similar to the proof that the eigenvalues of A lie in D (Gershgorin's theorem). The only modification is that the eigenvectors are taken to be left eigenvectors; equivalently, apply the theorem to A^T, which has the same eigenvalues.

P. 242. 3. Prove that if λ is an eigenvalue of A, then there is a nonzero vector x such that x^T A = λx^T. (Here x^T denotes a row vector.)

Solution. Since λ is an eigenvalue of A, λI - A is singular, and thus (λI - A)^T is also singular, so there exists a nonzero vector x such that (λI - A)^T x = 0. Therefore x^T A = λx^T.

P. 242. 4. Prove that if A is Hermitian, then the deflation technique in the text will produce a Hermitian matrix.

Solution. According to the proof of the Schur Theorem, if A is Hermitian then U*AU is also Hermitian, and thus the matrix obtained by deleting the first row and column of U*AU is Hermitian as well.

P. 242. 11. Prove or disprove: if {x_1, x_2, ..., x_k} and {y_1, y_2, ..., y_k} are orthonormal sets in C^n, then there is a unitary matrix U such that Ux_i = y_i for 1 ≤ i ≤ k.

Solution. Yes. For k = n: the conditions Ux_i = y_i (1 ≤ i ≤ n) mean UX = Y, where the ith column of X (Y) is x_i (y_i). Since X and Y have orthonormal columns, they are unitary, and U = YX^{-1} = YX* is unitary. For k < n, first extend each set to an orthonormal basis of C^n and apply the case k = n.

P. 242. 12. Prove that if (I - vv*)x = y for some triple of vectors v, x, y, then (x, y) is real.

Solution. (y, x) = y*x = ((I - vv*)x)*x = x*(I - vv*)x = ||x||_2^2 - |v*x|^2, which is real.

P. 242. 13. Find the precise conditions on a pair of vectors u and v in order that I - uv* be unitary.

Solution.

I = (I - uv*)*(I - uv*) = (I - vu*)(I - uv*) = I - uv* - vu* + v(u*u)v*

i.e.,

(u*u) vv* = uv* + vu*

If u is normalized so that u*u = 1, the condition becomes

vv* = uv* + vu*

P. 242. 16. Prove that for any square matrix A, ||A||_2^2 ≤ ||A*A||_2.

Solution. This is because

||A||_2^2 = ρ(A*A) ≤ ||A*A||_2

P. 242. 17. Let A_j denote the jth column of A. Prove that ||A_j||_2 ≤ ||A||_2. Is this true for all subordinate matrix norms?

Solution.

||A||_2 = max_{||x||_2 = 1} ||Ax||_2 ≥ ||Ae_j||_2 = ||A_j||_2

For a general subordinate norm, ||A_j|| = ||Ae_j|| ≤ ||A|| ||e_j||, so the conclusion holds whenever ||e_j|| = 1 (as for all the p-norms). It can fail otherwise: with the vector norm ||x|| = 2||x||_2, the subordinate norm gives ||I|| = 1, while the jth column of I has ||e_j|| = 2.

P. 242. 25. Let A be n × n, let B be m × m, and let C be n × m. Prove that if C has rank m, and if AC = CB, then

sp(B) ⊆ sp(A)

Solution. If Bx = λx with x ≠ 0, then

A(Cx) = CBx = λ(Cx)

and Cx ≠ 0 since C has full column rank; hence λ ∈ sp(A).

P. 242. 26. If x*x = 2, what is (I - xx*)^{-1}?

Solution.

(I - xx*)(I - xx*) = I - 2xx* + x(x*x)x* = I - 2xx* + 2xx* = I

so I - xx* is its own inverse.

P. 242. 27. Let x*x = 1 and determine whether I - xx* is invertible.

Solution. I - xx* is not invertible: (I - xx*)x = x - x(x*x) = 0 with x ≠ 0. For example, x = e_1 gives I - e_1 e_1* = diag(0, 1, ..., 1).

P. 242. 28. Prove or disprove: If A is a square matrix, then there is a unitary Hermitian matrix U such that U AU

is triangular.

Solution.

P. 242. 29. Without computing them, prove that the eigenvalues of the matrix

A = [ 6 2 1 ]
    [ 1 5 0 ]
    [ 2 1 4 ]

satisfy the inequality 1 ≤ |λ| ≤ 9.

Solution. By Gershgorin's Theorem, the eigenvalues lie in the union of the disks centered at 6, 5, 4 with radii 3, 1, 3, respectively; every point z of these disks satisfies 1 ≤ |z| ≤ 9.

P. 242. 32. Prove that I - xx* is singular if and only if x*x = 1, and find the inverse in all nonsingular cases.

Solution. If I - xx* is singular, there exists a nonzero vector y such that (I - xx*)y = 0, from which y = (x*y)x. Note that x*y ≠ 0 (otherwise y = 0). Therefore x*y = (x*y)(x*x), which forces x*x = 1. Conversely, if x*x = 1, then I - xx* has eigenvalue 0 with eigenvector x. When x*x ≠ 1, direct multiplication verifies

(I - xx*)^{-1} = I + xx*/(1 - x*x)

P.255. 1. Prove that if x ≠ y, ||x||_2 = ||y||_2, and (x, y) is real, then a unitary matrix U satisfying Ux = y is given by U = I - vu*, with v = x - y and u = 2v/||v||_2^2. Explain why this is a better method for constructing the Householder transformations.

Solution. The construction of U can be found in my slides.

P.255. 9. For fixed u and x, what value of t makes the expression ||u - tx||_2 a minimum?

Solution. The minimum is attained where

d/dt ||u - tx||_2^2 = 0

i.e., t = (u*x + x*u)/(2x*x) = Re(x*u)/(x*x).

P.255. 10. Prove that the matrix having elements (x_i, y_j) is unitary if {x_1, x_2, ..., x_n} and {y_1, y_2, ..., y_n} are orthonormal bases in C^n.

Solution. Define

X = (x_1, ..., x_n),   Y = (y_1, ..., y_n),   Q = ((x_i, y_j)) = X*Y

Then X and Y are unitary, and

Q*Q = (X*Y)*(X*Y) = Y*XX*Y = Y*Y = I

P.255. 16. Use Householder's algorithm to find the QR factorization of

A = [ 0 4 ]
    [ 0 0 ]
    [ 5 2 ]

Solution. Here permutation (reflection) matrices suffice:

H1 = [ 0 0 1 ]            [ 5 2 ]
     [ 0 1 0 ],   H1 A =  [ 0 0 ]
     [ 1 0 0 ]            [ 0 4 ]

H2 = [ 1 0 0 ]               [ 5 2 ]
     [ 0 0 1 ],   H2 H1 A =  [ 0 4 ] = R
     [ 0 1 0 ]               [ 0 0 ]

H2 H1 = [ 0 0 1 ]                                [ 0 1 0 ]
        [ 1 0 0 ],   Q = (H2 H1)^{-1} = (H2 H1)^T = [ 0 0 1 ]
        [ 0 1 0 ]                                [ 1 0 0 ]
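For matrices where simple permutations do not suffice, the general Householder QR can be sketched as follows (a minimal Python illustration with our own function name; it assumes m ≥ n and uses the standard sign choice for the reflector):

```python
import math

def householder_qr(A):
    """Householder QR: returns (Q, R) with Q orthogonal (m x m) and R upper
    triangular (m x n). Each step applies H = I - 2 v v^T / (v^T v) with
    v = x - alpha e1, alpha = -sign(x_1) ||x||_2."""
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]
    Q = [[float(i == j) for j in range(m)] for i in range(m)]
    for k in range(min(n, m - 1)):
        x = [R[i][k] for i in range(k, m)]
        alpha = -math.copysign(math.sqrt(sum(c * c for c in x)), x[0] or 1.0)
        v = x[:]
        v[0] -= alpha
        norm2 = sum(c * c for c in v)
        if norm2 == 0.0:
            continue
        for j in range(n):                 # R <- H R (acting on rows k..m-1)
            dot = sum(v[i] * R[k + i][j] for i in range(m - k))
            for i in range(m - k):
                R[k + i][j] -= 2.0 * dot / norm2 * v[i]
        for j in range(m):                 # Q <- Q H (accumulate the product)
            dot = sum(Q[j][k + i] * v[i] for i in range(m - k))
            for i in range(m - k):
                Q[j][k + i] -= 2.0 * dot / norm2 * v[i]
    return Q, R
```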

P.255. 19. Let A be an m × n matrix, b an m-vector, and λ > 0. Using the Euclidean norm, define

F(x) = ||Ax - b||_2^2 + λ||x||_2^2

Prove that F(x) is a minimum when x is a solution of the equation

(A^T A + λI)x = A^T b

Prove that when x is so defined,

F(x + h) = F(x) + (Ah)^T Ah + λh^T h

Solution. Expanding directly,

F(x + h) = F(x) + 2h^T ((A^T A + λI)x - A^T b) + (Ah)^T Ah + λh^T h

If (A^T A + λI)x = A^T b, the middle term vanishes, giving the stated identity, and F(x + h) ≥ F(x) for every h, so x minimizes F. Conversely, at a minimum d/dt F(x + th)|_{t=0} = 0 for every h, which forces

(A^T A + λI)x = A^T b

P.255. 33. Find the least-squares solution of the system

(x, y) [ 3 2 1 ] = (3, 0, 1)
       [ 2 3 2 ]

Solution. The system is equivalent to

[ 3 2 ]         [ 3 ]
[ 2 3 ] [ x ] = [ 0 ]
[ 1 2 ] [ y ]   [ 1 ]

Its normal equations are

[ 14 14 ] [ x ]   [ 3 2 1 ] [ 3 ]   [ 10 ]
[ 14 17 ] [ y ] = [ 2 3 2 ] [ 0 ] = [  8 ]
                            [ 1 ]

with solution x = 29/21, y = -2/3.
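The normal-equation computation above can be sketched for any two-column matrix (a minimal Python illustration; the function name is ours, and Cramer's rule is used only because the system is 2 × 2):

```python
def lstsq_2col(A, b):
    """Least-squares solution of A z = b for an m x 2 matrix A via the
    normal equations (A^T A) z = A^T b, solved by Cramer's rule."""
    g11 = sum(r[0] * r[0] for r in A)                 # entries of A^T A
    g12 = sum(r[0] * r[1] for r in A)
    g22 = sum(r[1] * r[1] for r in A)
    c1 = sum(A[i][0] * b[i] for i in range(len(b)))   # entries of A^T b
    c2 = sum(A[i][1] * b[i] for i in range(len(b)))
    det = g11 * g22 - g12 * g12
    return [(c1 * g22 - c2 * g12) / det, (g11 * c2 - g12 * c1) / det]
```

For the data of Ex. 33 this returns (29/21, -2/3), matching the hand computation.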

P.255. 35. Let A be an (n + 1) × n matrix of rank n, and let z be a nonzero vector orthogonal to the columns of A. Show that the equation Ax + λz = b has a solution in x and λ. Show that the x-vector obtained in this way is the least-squares solution of the equation Ax = b.

Solution. The (n + 1) × (n + 1) matrix (A, z) is nonsingular: its columns are n linearly independent columns of A together with a nonzero vector orthogonal to all of them. Thus Ax + λz = b has a solution in x and λ. Applying A^T and using A^T z = 0, the solution obtained in this way satisfies

A^T b = A^T Ax + λA^T z = A^T Ax

so x satisfies the normal equations; x is the least-squares solution of Ax = b.

Find the QR-factorization of the matrix

A = [ 3 2 3 ]
    [ 4 5 6 ]

Solution. Apply a rotation (orthogonal) matrix

G = [  c s ]
    [ -s c ],   c^2 + s^2 = 1

chosen to annihilate the (2,1) entry:

[  c s ] [ 3 ]   [ 3c + 4s  ]
[ -s c ] [ 4 ] = [ -3s + 4c ]

-3s + 4c = 0, c^2 + s^2 = 1  ⟹  c = 3/5, s = 4/5,  3c + 4s = 5

Then

G A = [  3/5 4/5 ] [ 3 2 3 ]   [ 5 26/5 33/5 ]
      [ -4/5 3/5 ] [ 4 5 6 ] = [ 0  7/5  6/5 ] = R

Q = G^T = [ 3/5 -4/5 ]
          [ 4/5  3/5 ]

and A = QR.
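The rotation construction can be sketched in Python (a minimal illustration for a 2 × n matrix; the function name is ours):

```python
import math

def givens_qr_2xn(A):
    """QR of a 2 x n matrix by a single Givens rotation zeroing A[1][0].
    Returns (Q, R) with A = Q R and Q = G^T orthogonal."""
    a, b = A[0][0], A[1][0]
    r = math.hypot(a, b)
    c, s = a / r, b / r                    # cos and sin of the rotation
    R = [[c * A[0][j] + s * A[1][j] for j in range(len(A[0]))],
         [-s * A[0][j] + c * A[1][j] for j in range(len(A[0]))]]
    Q = [[c, -s], [s, c]]
    return Q, R
```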

P.276. 1. Let A be an n × n upper Hessenberg matrix having a 0 in position A_{k,k-1}. Show that the spectrum of A is the union of the spectra of the two submatrices (A_ij)_{1≤i,j<k} and (A_ij)_{k≤i,j≤n}.

Solution. The matrix can be written in block upper triangular form

A = [ A11 A12 ]
    [  0  A22 ]

where A11 = (A_ij)_{1≤i,j≤k-1} and A22 = (A_ij)_{k≤i,j≤n} are upper Hessenberg. Since det(A - λI) = det(A11 - λI) det(A22 - λI), the spectrum of A is the union of the spectra of A11 and A22.

P.276. 2. Show that in the QR algorithm we have A_{k+1} = Q_k* A_k Q_k. From this, prove that the Q-factoring of A^k is

(Q1 Q2 ··· Qk)(Rk ··· R2 R1) = A^k

Solution. Since A_k = Q_k R_k and A_{k+1} = R_k Q_k, we have A_{k+1} = Q_k*(Q_k R_k)Q_k = Q_k* A_k Q_k, and hence

A_{k+1} = (Q1 Q2 ··· Qk)* A1 (Q1 Q2 ··· Qk) = Q̂_k* A Q̂_k,   where Q̂_k = Q1 ··· Qk, R̂_k = Rk ··· R1

Inductive proof of Q̂_k R̂_k = A^k: the result is valid for k = 1. Assume that A^{k-1} = Q̂_{k-1} R̂_{k-1}. Then

Q̂_k R̂_k = Q1 ··· Q_{k-1} (Q_k R_k) R_{k-1} ··· R1 = Q̂_{k-1} A_k R̂_{k-1} = A Q̂_{k-1} R̂_{k-1} = A A^{k-1} = A^k

using Q̂_{k-1} A_k = A Q̂_{k-1}, which is the similarity A_k = Q̂_{k-1}* A Q̂_{k-1}.
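The basic (unshifted) QR iteration A_{k+1} = R_k Q_k can be sketched for a 2 × 2 matrix, using a single Givens rotation per step (a minimal Python illustration; the function name and iteration count are ours):

```python
import math

def qr_algorithm_2x2(A, iters=60):
    """Unshifted QR algorithm for a 2 x 2 matrix: factor A_k = Q_k R_k with
    a Givens rotation, then form A_{k+1} = R_k Q_k. The diagonal converges
    to the eigenvalues when |lambda1| != |lambda2|."""
    a = [row[:] for row in A]
    for _ in range(iters):
        r = math.hypot(a[0][0], a[1][0])
        if r == 0.0:
            break
        c, s = a[0][0] / r, a[1][0] / r
        # R = G A with G = [[c, s], [-s, c]]
        R = [[c * a[0][j] + s * a[1][j] for j in range(2)],
             [-s * a[0][j] + c * a[1][j] for j in range(2)]]
        # A <- R Q with Q = G^T = [[c, -s], [s, c]]
        a = [[R[i][0] * c + R[i][1] * s, -R[i][0] * s + R[i][1] * c]
             for i in range(2)]
    return a
```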

P.276. 6. Find the eigenvalues of the matrix

A = [ 1 4 1 ]
    [ 1 2 5 ]
    [ 5 4 3 ]

P.276. 7. Prove that in the shifted QR-algorithm, A_{k+1} is unitarily similar to A_k.

Solution. By definition,

A_k - s_k I = Q_k R_k,   A_{k+1} = R_k Q_k + s_k I

hence A_{k+1} = Q_k*(A_k - s_k I)Q_k + s_k I = Q_k* A_k Q_k.

P.276. 11. Let A be a real matrix having the upper triangular block structure

A = [ A11 A12 A13 ... A1n ]
    [  0  A22 A23 ... A2n ]
    [  0   0  A33 ... A3n ]
    [          ...        ]
    [  0   0   0  ... Ann ]

in which each A_kk is a 2 × 2 matrix. Give a simple procedure for computing the eigenvalues of A, including proofs.

Solution. As in Ex. 1, det(A - λI) = ∏_{k=1}^{n} det(A_kk - λI), so the spectrum of A is the union of the spectra of the diagonal blocks A_kk, 1 ≤ k ≤ n. The two eigenvalues of each 2 × 2 block A_kk are the roots of λ^2 - tr(A_kk)λ + det(A_kk) = 0 and are easily calculated from the quadratic formula.

P.276. 12. Prove or disprove: If U is unitary, R is upper triangular, and UR is upper Hessenberg, then U is upper Hessenberg.

Solution. Let A = UR be upper Hessenberg; then U = AR^{-1}. Since R^{-1} is upper triangular and the product of an upper Hessenberg matrix and an upper triangular matrix is upper Hessenberg, U is upper Hessenberg. (This assumes R is nonsingular, so that R^{-1} exists.)

The following questions come from Heinrich Dinkel. Thanks!

Page 85, PB 3.4 Exercise 1: I dont have any Idea which uppercound

for C is meant, since F is not a given function.

Use

Exercise 10: I can apply Newtons formula on the term F(x), but it

results in an iteration, which would just cancel out the x_n terms,

which I guess is totally wrong.

Find the zero of f(x) = F(x) - x.
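As a concrete illustration of this hint, here is Newton's method applied to f(x) = F(x) - x for the example F(x) = cos(x); the helper name and the example function are mine, not from the book:

```python
import math

def newton_fixed_point(F, dF, x, tol=1e-12, maxit=50):
    """Newton's method applied to f(x) = F(x) - x (illustrative helper)."""
    for _ in range(maxit):
        fx = F(x) - x
        dfx = dF(x) - 1.0          # f'(x) = F'(x) - 1
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# example: fixed point of F(x) = cos(x), i.e. the solution of cos(x) = x
root = newton_fixed_point(math.cos, lambda x: -math.sin(x), 1.0)
print(root)   # ~0.739085
```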

Exercise 23: How do I compute the power q? I can guess values like 2, 3, 4 for q, which all fit the constraint, but I haven't seen any computation in the book; they only explain it by using an arbitrary F(s) and just say that if f(s) = 0 but f'(s) is not 0, then q = 2?

The order q is the value for which the ratio |x_{n+1} - x| / |x_n - x|^q tends to a constant as n goes to infinity. Here the constant is not zero or infinity, and x is the limit of x_n. So you first find the limit x of this sequence.
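This definition of the order q can be checked numerically, e.g. on the Newton iteration for sqrt(2), which should give q close to 2:

```python
import math

# Newton iteration for sqrt(2): x_{n+1} = (x_n + 2/x_n) / 2
xs = [2.0]
for _ in range(5):
    xs.append((xs[-1] + 2.0 / xs[-1]) / 2.0)

x_star = math.sqrt(2.0)                       # the limit x of the sequence
errs = [abs(x - x_star) for x in xs]

# if e_{n+1} ~ C e_n^q, then q ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1})
q = math.log(errs[3] / errs[2]) / math.log(errs[2] / errs[1])
print(q)   # close to 2 for Newton's method
```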

How do I compute that formula?

You just use the Newton-Cotes formula based on these four nodes. See Example 1 in P. 446.
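The closed Newton-Cotes formula on four equally spaced nodes is Simpson's 3/8 rule; a minimal sketch (the test integral is illustrative):

```python
def simpson_38(f, a, b):
    """Closed Newton-Cotes rule on four equally spaced nodes (3/8 rule)."""
    h = (b - a) / 3.0
    x0, x1, x2, x3 = a, a + h, a + 2 * h, b
    return 3.0 * h / 8.0 * (f(x0) + 3 * f(x1) + 3 * f(x2) + f(x3))

# the rule is exact for cubics: matches the true value of 4 for x^3 on [0, 2]
print(simpson_38(lambda x: x**3, 0.0, 2.0))
```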

Exercise 11: No clue how to begin.

This formula is exact for f1(x) = 1 and f2(x) = cos(x). Then you can find A1 and A2.

Exercise 15: How do I know which degree is the maximum?

Find A, B and C such that this formula is exact for f(x) = 1, x and x^2. If it is not exact for f(x) = x^3, then the maximum degree of this formula is 2.
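The method of undetermined coefficients suggested here can be sketched on an illustrative rule with nodes 0, 1/3, 1 (these nodes are my choice, not the ones from the exercise):

```python
import numpy as np

# Determine A, B, C so that  int_0^1 f(x) dx ~ A f(0) + B f(1/3) + C f(1)
# is exact for f = 1, x, x^2.
nodes = np.array([0.0, 1.0 / 3.0, 1.0])
V = np.vander(nodes, 3, increasing=True).T       # rows: 1, x, x^2 at nodes
moments = np.array([1.0, 1.0 / 2.0, 1.0 / 3.0])  # int_0^1 x^k dx, k = 0,1,2
A, B, C = np.linalg.solve(V, moments)

# exact for x^2 by construction, but not for x^3, so the degree is 2
rule_x3 = A * 0.0 + B * (1.0 / 3.0) ** 3 + C * 1.0
print(A, B, C, rule_x3, 1.0 / 4.0)
```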

Page 441, Exercise 5: No clue how to begin.

See the Richardson extrapolation in my slides.
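A minimal sketch of one Richardson-extrapolation step for numerical differentiation (the test function sin and the step size are illustrative):

```python
import math

def central_diff(f, x, h):
    """O(h^2) central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h):
    """One Richardson step: combine D(h) and D(h/2) to get O(h^4)."""
    D1 = central_diff(f, x, h)
    D2 = central_diff(f, x, h / 2.0)
    return (4.0 * D2 - D1) / 3.0

x, h = 1.0, 0.1
exact = math.cos(1.0)
print(abs(central_diff(math.sin, x, h) - exact))   # ~9e-4
print(abs(richardson(math.sin, x, h) - exact))     # ~1e-7
```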

Exercise 9: No clue how to begin.

Expand the right-hand side at x to order 4.

Exercise 11: How do I know which is more accurate? Inserting some random values?

The same as Exercise 5.

Exercise 21: Should I use the Taylor expansion for each of f(x), f(x+h), etc., respectively?

Yes. Then you combine these expansions, and the terms with f and the first derivative cancel if A, B, C and D are chosen suitably.

Page 462, Exercise 1: I would like to have an example of how to use the formula given. Do I always fix p(x) and q(x) to be, e.g., ax^2 + bx + c? And do I always need to have q_0(x) = 1, q_1(x) = x, etc.?

Since n = 1, we need to calculate x_0, x_1 and c such that this formula is exact for f in Pi_3. So you should choose f(x) = 1, x, x^2, x^3 to determine these values.
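Carrying this out on [-1, 1] gives the classical two-point Gauss rule with c = 1 and x1 = -x0 = 1/sqrt(3); a quick check of exactness on Pi_3 (the interval choice is illustrative):

```python
import math

# two-point Gauss rule on [-1, 1]:  int f ~ c (f(x0) + f(x1));
# exactness for f = 1, x, x^2, x^3 forces c = 1, x1 = -x0 = 1/sqrt(3)
c = 1.0
x0, x1 = -1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)

def gauss2(f):
    return c * (f(x0) + f(x1))

# exact for every f in Pi_3
for k, exact in [(0, 2.0), (1, 0.0), (2, 2.0 / 3.0), (3, 0.0)]:
    assert abs(gauss2(lambda x: x**k) - exact) < 1e-12
print("exact on 1, x, x^2, x^3")
```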

Exercise 10: Essentially the same as in 1?

Yes.

Page 470: "... following equation". Just copy the equation?

See the Romberg algorithm. This equation is just the first step of the Richardson extrapolation in the Romberg algorithm.
