
Numerical Analysis (9th Edition)

R L Burden & J D Faires

Beamer Presentation Slides

prepared by

John Carroll

Dublin City University

Relaxation Techniques

Outline

- Residual Vectors
- SOR Method
- Optimal ω
- SOR Algorithm


Motivation

We have seen that the rate of convergence of an iterative

technique depends on the spectral radius of the matrix associated

with the method.

Relaxation Techniques

4 / 36

Residual Vectors

SOR Method

Optimal

SOR Algorithm

Motivation

We have seen that the rate of convergence of an iterative

technique depends on the spectral radius of the matrix associated

with the method.

One way to select a procedure to accelerate convergence is to

choose a method whose associated matrix has minimal spectral

radius.

Relaxation Techniques

4 / 36

Residual Vectors

SOR Method

Optimal

SOR Algorithm

Motivation

We have seen that the rate of convergence of an iterative

technique depends on the spectral radius of the matrix associated

with the method.

One way to select a procedure to accelerate convergence is to

choose a method whose associated matrix has minimal spectral

radius.

We start by introducing a new means of measuring the amount by

which an approximation to the solution to a linear system differs

from the true solution to the system.

Relaxation Techniques

4 / 36

Residual Vectors

SOR Method

Optimal

SOR Algorithm

Motivation

We have seen that the rate of convergence of an iterative

technique depends on the spectral radius of the matrix associated

with the method.

One way to select a procedure to accelerate convergence is to

choose a method whose associated matrix has minimal spectral

radius.

We start by introducing a new means of measuring the amount by

which an approximation to the solution to a linear system differs

from the true solution to the system.

The method makes use of the vector described in the following

definition.

Numerical Analysis (Chapter 7)

Relaxation Techniques

4 / 36

Definition

Suppose $\tilde{x} \in \mathbb{R}^n$ is an approximation to the solution of the linear system defined by
$$Ax = b$$
The residual vector for $\tilde{x}$ with respect to this system is
$$r = b - A\tilde{x}$$

Comments

A residual vector is associated with each calculation of an approximate component to the solution vector.

The true objective is to generate a sequence of approximations that will cause the residual vectors to converge rapidly to zero.

Looking at the Gauss-Seidel Method

Suppose we let
$$r_i^{(k)} = \left(r_{1i}^{(k)}, r_{2i}^{(k)}, \ldots, r_{ni}^{(k)}\right)^t$$
denote the residual vector for the Gauss-Seidel method corresponding to the approximate solution vector $x_i^{(k)}$ defined by
$$x_i^{(k)} = \left(x_1^{(k)}, x_2^{(k)}, \ldots, x_{i-1}^{(k)}, x_i^{(k-1)}, \ldots, x_n^{(k-1)}\right)^t$$
The $m$th component of $r_i^{(k)}$ is
$$r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i}^{n} a_{mj} x_j^{(k-1)}$$
or, equivalently,
$$r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i+1}^{n} a_{mj} x_j^{(k-1)} - a_{mi} x_i^{(k-1)}$$
for each $m = 1, 2, \ldots, n$.

In particular, the $i$th component of $r_i^{(k)}$ is
$$r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k-1)}$$
so
$$a_{ii} x_i^{(k-1)} + r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \qquad \text{(E)}$$

Recall, however, that in the Gauss-Seidel method $x_i^{(k)}$ is chosen to be
$$x_i^{(k)} = \frac{1}{a_{ii}}\left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]$$
so Equation (E) can be rewritten as
$$a_{ii} x_i^{(k-1)} + r_{ii}^{(k)} = a_{ii} x_i^{(k)}$$

Consequently, the Gauss-Seidel method can be characterized as choosing $x_i^{(k)}$ to satisfy
$$x_i^{(k)} = x_i^{(k-1)} + \frac{r_{ii}^{(k)}}{a_{ii}}$$
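This characterization is easy to check numerically: a Gauss-Seidel sweep computed the usual way coincides, component by component, with the residual-correction update $x_i \leftarrow x_i + r_{ii}/a_{ii}$. Below is a minimal sketch in pure Python; the function names and the small test system are our own illustrative choices, not from the slides.

```python
def gs_sweep_usual(A, b, x):
    """One Gauss-Seidel sweep: x_i = (b_i - sum_{j != i} a_ij * x_j) / a_ii,
    using already-updated entries for j < i."""
    x = x[:]
    n = len(A)
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

def gs_sweep_residual(A, b, x):
    """Same sweep in residual form: x_i = x_i + r_ii / a_ii, where r_ii is the
    i-th residual component of the partially updated vector."""
    x = x[:]
    n = len(A)
    for i in range(n):
        r_ii = b[i] - sum(A[i][j] * x[j] for j in range(n))
        x[i] = x[i] + r_ii / A[i][i]
    return x

# Hypothetical diagonally dominant test system (for illustration only).
A = [[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]]
b = [1.0, 2.0, 3.0]
x0 = [0.0, 0.0, 0.0]
assert all(abs(u - v) < 1e-12
           for u, v in zip(gs_sweep_usual(A, b, x0), gs_sweep_residual(A, b, x0)))
```

The two forms agree because $x_i + (b_i - \sum_j a_{ij}x_j)/a_{ii} = (b_i - \sum_{j \neq i} a_{ij}x_j)/a_{ii}$, which is exactly the algebra carried out on the previous slides.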

We can derive another connection between the residual vectors and the Gauss-Seidel technique. Consider the residual vector $r_{i+1}^{(k)}$ associated with the vector
$$x_{i+1}^{(k)} = \left(x_1^{(k)}, \ldots, x_i^{(k)}, x_{i+1}^{(k-1)}, \ldots, x_n^{(k-1)}\right)^t$$
Recall that the $m$th component of $r_i^{(k)}$ is
$$r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i}^{n} a_{mj} x_j^{(k-1)}$$

Replacing $i$ by $i + 1$, the $i$th component of $r_{i+1}^{(k)}$ is
$$r_{i,i+1}^{(k)} = b_i - \sum_{j=1}^{i} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)}$$
or, separating out the $j = i$ term,
$$r_{i,i+1}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k)}$$

By the Gauss-Seidel definition of $x_i^{(k)}$,
$$x_i^{(k)} = \frac{1}{a_{ii}}\left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]$$
the term $a_{ii} x_i^{(k)}$ cancels the remaining terms, so $r_{i,i+1}^{(k)} = 0$. In a sense, then, the Gauss-Seidel technique is characterized by choosing each $x_{i+1}^{(k)}$ in such a way that the $i$th component of $r_{i+1}^{(k)}$ is zero.


Reducing the Norm of the Residual Vector

Choosing $x_{i+1}^{(k)}$ so that one coordinate of the residual vector is zero, however, is not necessarily the most efficient way to reduce the norm of the vector $r_{i+1}^{(k)}$.

If we modify the Gauss-Seidel procedure, as given by
$$x_i^{(k)} = x_i^{(k-1)} + \frac{r_{ii}^{(k)}}{a_{ii}}$$
to
$$x_i^{(k)} = x_i^{(k-1)} + \omega \frac{r_{ii}^{(k)}}{a_{ii}}$$
then, for certain choices of positive ω, we can reduce the norm of the residual vector and obtain significantly faster convergence.

Introducing the SOR Method

Methods involving
$$x_i^{(k)} = x_i^{(k-1)} + \omega \frac{r_{ii}^{(k)}}{a_{ii}}$$
are called relaxation methods. For choices of ω with 0 < ω < 1, the procedures are called under-relaxation methods.

We will be interested in choices of ω with 1 < ω, and these are called over-relaxation methods.

They are used to accelerate the convergence for systems that are convergent by the Gauss-Seidel technique.

The methods are abbreviated SOR, for Successive Over-Relaxation, and are particularly useful for solving the linear systems that occur in the numerical solution of certain partial-differential equations.

A More Computationally-Efficient Formulation

To use the relaxation formula for computational purposes, we first rewrite
$$r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k-1)}$$
and substitute it into
$$x_i^{(k)} = x_i^{(k-1)} + \omega \frac{r_{ii}^{(k)}}{a_{ii}}$$
to obtain
$$x_i^{(k)} = (1 - \omega) x_i^{(k-1)} + \frac{\omega}{a_{ii}}\left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]$$
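The componentwise formula above translates almost directly into code. A minimal sketch in Python follows; the function name `sor_sweep` and the small test system are our own illustrative choices, and the two partial sums mirror the two summations in the formula.

```python
def sor_sweep(A, b, x, omega):
    """One SOR sweep:
    x_i <- (1 - omega) * x_i
           + (omega / a_ii) * (b_i - sum_{j<i} a_ij * x_j_new
                                    - sum_{j>i} a_ij * x_j_old)."""
    x = x[:]
    n = len(A)
    for i in range(n):
        s1 = sum(A[i][j] * x[j] for j in range(i))         # updated entries, j < i
        s2 = sum(A[i][j] * x[j] for j in range(i + 1, n))  # old entries, j > i
        x[i] = (1 - omega) * x[i] + (omega / A[i][i]) * (b[i] - s1 - s2)
    return x

# Hypothetical symmetric, diagonally dominant test system (illustration only).
A = [[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]]
b = [1.0, 2.0, 3.0]
x = [0.0, 0.0, 0.0]
for _ in range(60):
    x = sor_sweep(A, b, x, 1.1)
```

Taking ω = 1 recovers the ordinary Gauss-Seidel sweep, matching the characterization derived earlier.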

To determine the matrix form of the SOR method, we rewrite
$$x_i^{(k)} = (1 - \omega) x_i^{(k-1)} + \frac{\omega}{a_{ii}}\left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]$$
as
$$a_{ii} x_i^{(k)} + \omega \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} = (1 - \omega) a_{ii} x_i^{(k-1)} - \omega \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} + \omega b_i$$
In vector form, we therefore have
$$(D - \omega L) x^{(k)} = [(1 - \omega) D + \omega U] x^{(k-1)} + \omega b$$
from which we obtain:
$$x^{(k)} = (D - \omega L)^{-1} [(1 - \omega) D + \omega U] x^{(k-1)} + \omega (D - \omega L)^{-1} b$$

Letting
$$T_\omega = (D - \omega L)^{-1} [(1 - \omega) D + \omega U]$$
and
$$c_\omega = \omega (D - \omega L)^{-1} b$$
the SOR technique has the form
$$x^{(k)} = T_\omega x^{(k-1)} + c_\omega$$

Example

The linear system $Ax = b$ given by
$$\begin{aligned}
4x_1 + 3x_2 &= 24 \\
3x_1 + 4x_2 - x_3 &= 30 \\
-x_2 + 4x_3 &= -24
\end{aligned}$$
has the solution $(3, 4, -5)^t$.

Compare the iterations from the Gauss-Seidel method and the SOR method with ω = 1.25, using $x^{(0)} = (1, 1, 1)^t$ for both methods.

Solution (1/3)

For each k = 1, 2, ..., the equations for the Gauss-Seidel method are
$$\begin{aligned}
x_1^{(k)} &= -0.75 x_2^{(k-1)} + 6 \\
x_2^{(k)} &= -0.75 x_1^{(k)} + 0.25 x_3^{(k-1)} + 7.5 \\
x_3^{(k)} &= 0.25 x_2^{(k)} - 6
\end{aligned}$$
and the equations for the SOR method with ω = 1.25 are
$$\begin{aligned}
x_1^{(k)} &= -0.25 x_1^{(k-1)} - 0.9375 x_2^{(k-1)} + 7.5 \\
x_2^{(k)} &= -0.9375 x_1^{(k)} - 0.25 x_2^{(k-1)} + 0.3125 x_3^{(k-1)} + 9.375 \\
x_3^{(k)} &= 0.3125 x_2^{(k)} - 0.25 x_3^{(k-1)} - 7.5
\end{aligned}$$

Solution (2/3)

Gauss-Seidel Iterations

k    x1^(k)       x2^(k)       x3^(k)
0    1            1            1
1    5.250000     3.812500     -5.046875
2    3.1406250    3.8828125    -5.0292969
3    3.0878906    3.9267578    -5.0183105
...
7    3.0134110    3.9888241    -5.0027940

SOR Iterations (ω = 1.25)

k    x1^(k)       x2^(k)       x3^(k)
0    1            1            1
1    6.312500     3.5195313    -6.6501465
2    2.6223145    3.9585266    -4.6004238
3    3.1333027    4.0102646    -5.0966863
...
7    3.0000498    4.0002586    -5.0003486

Solution (3/3)

For the iterates to be accurate to 7 decimal places, the Gauss-Seidel method requires 34 iterations, as opposed to 14 iterations for the SOR method with ω = 1.25.
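The comparison can be reproduced with a short script. The sketch below is our own code; it uses an infinity-norm test on the change between sweeps as a stand-in for the slides' 7-decimal-place criterion, so the exact counts 34 and 14 need not be reproduced, only the ordering.

```python
def solve(A, b, x, omega, tol=1e-7, max_iter=200):
    """Iterate SOR (omega = 1 gives Gauss-Seidel) until the infinity norm of
    the change between sweeps drops below tol; return (x, iteration count)."""
    n = len(A)
    for k in range(1, max_iter + 1):
        old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
        if max(abs(x[i] - old[i]) for i in range(n)) < tol:
            return x, k
    return x, max_iter

# The example system from the slides, with solution (3, 4, -5)^t.
A = [[4.0, 3.0, 0.0], [3.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [24.0, 30.0, -24.0]
x_gs, k_gs = solve(A, b, [1.0, 1.0, 1.0], omega=1.0)
x_sor, k_sor = solve(A, b, [1.0, 1.0, 1.0], omega=1.25)
# SOR with omega = 1.25 needs noticeably fewer sweeps than Gauss-Seidel.
assert k_sor < k_gs
```

Both runs converge to $(3, 4, -5)^t$; the gap in sweep counts reflects the smaller spectral radius of the SOR iteration matrix.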


Optimal ω

How can the appropriate value of ω be chosen when the SOR method is used?

Although no complete answer to this question is known for the general n × n linear system, the following results can be used in certain important situations.

Theorem (Kahan)

If $a_{ii} \neq 0$ for each $i = 1, 2, \ldots, n$, then $\rho(T_\omega) \geq |\omega - 1|$. This implies that the SOR method can converge only if $0 < \omega < 2$.

The proof of this theorem is considered in Exercise 9, Chapter 7 of Burden R. L. & Faires J. D., Numerical Analysis, 9th Ed., Cengage, 2011.

Theorem (Ostrowski-Reich)

If A is a positive definite matrix and 0 < ω < 2, then the SOR method converges for any choice of initial approximate vector $x^{(0)}$.

The proof of this theorem can be found in Ortega, J. M., Numerical Analysis; A Second Course, Academic Press, New York, 1972, 201 pp.

Theorem

If A is positive definite and tridiagonal, then $\rho(T_g) = [\rho(T_j)]^2 < 1$, and the optimal choice of ω for the SOR method is
$$\omega = \frac{2}{1 + \sqrt{1 - [\rho(T_j)]^2}}$$

The proof of this theorem can be found in Ortega, J. M., Numerical Analysis; A Second Course, Academic Press, New York, 1972, 201 pp.

Example

Find the optimal choice of ω for the SOR method for the matrix
$$A = \begin{bmatrix} 4 & 3 & 0 \\ 3 & 4 & -1 \\ 0 & -1 & 4 \end{bmatrix}$$

Solution (1/3)

This matrix is clearly tridiagonal, so we can apply the result in the SOR theorem if we can also show that it is positive definite.

Because the matrix is symmetric, the theory tells us that it is positive definite if and only if all of its leading principal submatrices have positive determinants.

This is easily seen to be the case because
$$\det([4]) = 4, \qquad \det\begin{bmatrix} 4 & 3 \\ 3 & 4 \end{bmatrix} = 7, \qquad \det(A) = 24$$

Solution (2/3)

We compute
$$T_j = D^{-1}(L + U)
= \begin{bmatrix} \tfrac14 & 0 & 0 \\ 0 & \tfrac14 & 0 \\ 0 & 0 & \tfrac14 \end{bmatrix}
\begin{bmatrix} 0 & -3 & 0 \\ -3 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}
= \begin{bmatrix} 0 & -0.75 & 0 \\ -0.75 & 0 & 0.25 \\ 0 & 0.25 & 0 \end{bmatrix}$$
so that
$$T_j - \lambda I = \begin{bmatrix} -\lambda & -0.75 & 0 \\ -0.75 & -\lambda & 0.25 \\ 0 & 0.25 & -\lambda \end{bmatrix}$$

Solution (3/3)

Therefore
$$\det(T_j - \lambda I) = -\lambda(\lambda^2 - 0.625)$$
Thus
$$\rho(T_j) = \sqrt{0.625}$$
and
$$\omega = \frac{2}{1 + \sqrt{1 - [\rho(T_j)]^2}} = \frac{2}{1 + \sqrt{1 - 0.625}} \approx 1.24$$
This explains the rapid convergence obtained in the last example when using ω = 1.25.
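The final computation is easy to check in a few lines of pure Python (the variable names are ours):

```python
import math

rho_tj = math.sqrt(0.625)  # spectral radius of T_j found above
omega = 2.0 / (1.0 + math.sqrt(1.0 - rho_tj ** 2))
print(round(omega, 4))     # -> 1.2404, close to the omega = 1.25 used earlier
```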

SOR Algorithm

To solve
$$Ax = b$$
given the parameter ω and an initial approximation $x^{(0)}$:

INPUT  the entries a_ij, 1 ≤ i, j ≤ n, of the matrix A;
       the entries b_i, 1 ≤ i ≤ n, of b;
       the entries XO_i, 1 ≤ i ≤ n, of XO = x^(0);
       the parameter ω; tolerance TOL;
       maximum number of iterations N.

Step 1  Set k = 1.

Step 2  While (k ≤ N) do Steps 3-6:

Step 3      For i = 1, ..., n set
$$x_i = (1 - \omega)\,XO_i + \frac{\omega}{a_{ii}}\left[ -\sum_{j=1}^{i-1} a_{ij} x_j - \sum_{j=i+1}^{n} a_{ij}\,XO_j + b_i \right]$$

Step 4      If ||x − XO|| < TOL then OUTPUT (x_1, ..., x_n);
                STOP (The procedure was successful.)

Step 5      Set k = k + 1.

Step 6      For i = 1, ..., n set XO_i = x_i.

Step 7  OUTPUT ('Maximum number of iterations exceeded');
        STOP (The procedure was unsuccessful.)
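The pseudocode above maps almost line by line onto Python. The sketch below is our own transcription, assuming A is stored as a list of rows; the names XO, TOL, and N follow the algorithm's notation.

```python
def sor(A, b, XO, omega, TOL, N):
    """SOR algorithm following the steps above: sweep, test the change
    against TOL, and give up after N iterations."""
    n = len(A)
    k = 1                                               # Step 1
    while k <= N:                                       # Step 2
        x = [0.0] * n
        for i in range(n):                              # Step 3
            s1 = sum(A[i][j] * x[j] for j in range(i))
            s2 = sum(A[i][j] * XO[j] for j in range(i + 1, n))
            x[i] = (1 - omega) * XO[i] + (omega / A[i][i]) * (b[i] - s1 - s2)
        if max(abs(x[i] - XO[i]) for i in range(n)) < TOL:  # Step 4
            return x                                    # success
        k += 1                                          # Step 5
        XO = x[:]                                       # Step 6
    raise RuntimeError("Maximum number of iterations exceeded")  # Step 7

# The example system from earlier in the slides, with solution (3, 4, -5)^t.
A = [[4.0, 3.0, 0.0], [3.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [24.0, 30.0, -24.0]
x = sor(A, b, [1.0, 1.0, 1.0], omega=1.25, TOL=1e-7, N=100)
```

Here the stopping test in Step 4 is taken in the infinity norm; any convenient vector norm could be substituted.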

Questions?
