A Multigrid Tutorial

By William L. Briggs

Presented by Van Emden Henson Center for Applied Scientific Computing Lawrence Livermore National Laboratory

This work was performed, in part, under the auspices of the United States Department of Energy by University of California Lawrence Livermore National Laboratory under contract number W-7405-Eng-48.

Outline

• Model Problems
• Basic Iterative Schemes
  – Convergence: experiments
  – Convergence: analysis
• Development of Multigrid
  – Coarse-grid correction
  – Nested iteration
  – Restriction and interpolation
  – Standard cycles: MV, FMG
• Some Theory
  – The spectral picture
  – The subspace picture
  – The whole picture!
• Performance
  – Implementation: storage and computation costs
  – Convergence
  – Higher dimensions
  – Experiments
• Complications
  – Anisotropic operators and grids
  – Discontinuous or anisotropic coefficients
  – Nonlinear problems: FAS

2 of 119

Suggested Reading

• Brandt, “Multi-level Adaptive Solutions to Boundary Value Problems,” Math. Comp., 31, 1977, pp. 333-390.
• Brandt, “1984 Guide to Multigrid Development, with Applications to Computational Fluid Dynamics.”
• Briggs, “A Multigrid Tutorial,” SIAM, 1987.
• Briggs, Henson, and McCormick, “A Multigrid Tutorial, 2nd Edition,” SIAM, 2000.
• Hackbusch, “Multi-Grid Methods and Applications,” 1985.
• Hackbusch and Trottenberg (eds.), “Multigrid Methods,” Springer-Verlag, 1982.
• Stüben and Trottenberg, “Multigrid Methods,” 1987.
• Wesseling, “An Introduction to Multigrid Methods,” Wiley, 1992.

3 of 119

Multilevel methods have been developed for...

• Elliptic PDEs • Purely algebraic problems, with no physical grid; for example, network and geodetic survey problems. • Image reconstruction and tomography • Optimization (e.g., the travelling salesman and long transportation problems) • Statistical mechanics, Ising spin models. • Quantum chromodynamics. • Quadrature and generalized FFTs. • Integral equations.

4 of 119

Model Problems

• One-dimensional boundary value problem:

−u″(x) + σ u(x) = f(x),   0 < x < 1,   σ > 0
u(0) = u(1) = 0

• Grid: h = 1/N, x_i = i h, i = 0, 1, …, N

• Let v_i ≈ u(x_i) and f_i ≈ f(x_i) for i = 0, 1, …, N

5 of 119

We use Taylor series to derive an approximation to u″(x)

• We approximate the second derivative using Taylor series:

u(x_{i+1}) = u(x_i) + h u′(x_i) + (h²/2!) u″(x_i) + (h³/3!) u‴(x_i) + O(h⁴)

u(x_{i−1}) = u(x_i) − h u′(x_i) + (h²/2!) u″(x_i) − (h³/3!) u‴(x_i) + O(h⁴)

• Summing and solving,

u″(x_i) = [ u(x_{i+1}) − 2 u(x_i) + u(x_{i−1}) ] / h²  +  O(h²)

6 of 119

We approximate the equation with a finite difference scheme

• We approximate the BVP

−u″(x) + σ u(x) = f(x),   0 < x < 1,   σ > 0
u(0) = u(1) = 0

with the finite difference scheme:

( −v_{i−1} + 2 v_i − v_{i+1} ) / h² + σ v_i = f_i,   i = 1, 2, …, N−1

v_0 = v_N = 0

7 of 119

The discrete model problem

• Letting v = (v_1, v_2, …, v_{N−1})^T and f = (f_1, f_2, …, f_{N−1})^T, we obtain the matrix equation A v = f, where A is (N−1) × (N−1), symmetric, positive definite, and tridiagonal:

A = (1/h²) ·
    [ 2+σh²   −1                           ]
    [  −1    2+σh²   −1                    ]
    [          ⋱       ⋱       ⋱          ]
    [                 −1    2+σh²   −1    ]
    [                         −1    2+σh² ]

8 of 119
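The matrix above is easy to assemble and its stated properties easy to check numerically. A minimal Python/NumPy sketch (the function name `model_matrix_1d` is ours, not from the tutorial):

```python
import numpy as np

def model_matrix_1d(N, sigma=0.0):
    """Assemble the (N-1)x(N-1) matrix A = (1/h^2) tridiag(-1, 2+sigma*h^2, -1)."""
    h = 1.0 / N
    n = N - 1
    A = np.zeros((n, n))
    np.fill_diagonal(A, 2.0 + sigma * h**2)
    np.fill_diagonal(A[1:], -1.0)     # subdiagonal
    np.fill_diagonal(A[:, 1:], -1.0)  # superdiagonal
    return A / h**2

A = model_matrix_1d(8, sigma=1.0)
assert np.allclose(A, A.T)                 # symmetric
assert np.all(np.linalg.eigvalsh(A) > 0)   # positive definite
```

For σ = 0 the eigenvalues reduce to the Fourier values (4/h²) sin²(kπ/2N) discussed later in the tutorial.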

Solution Methods

• Direct
  – Gaussian elimination
  – Factorization
• Iterative
  – Jacobi
  – Gauss-Seidel
  – Conjugate gradient, etc.
• Note: this simple model problem can be solved very efficiently in several ways. Pretend it can’t, and that it is very hard, because it shares many characteristics with some very hard problems.

9 of 119

A two-dimensional boundary value problem

• Consider the problem:

−u_xx − u_yy + σ u = f(x, y),   0 < x < 1,  0 < y < 1,   σ > 0
u = 0 on the boundary (x = 0, x = 1, y = 0, y = 1)

• where the grid is given by:

h_x = 1/M,   h_y = 1/N,   (x_i, y_j) = (i h_x, j h_y),   0 ≤ i ≤ M,   0 ≤ j ≤ N

10 of 119

Discretizing the 2D problem

• Let v_ij ≈ u(x_i, y_j) and f_ij ≈ f(x_i, y_j). Again using 2nd-order finite differences to approximate u_xx and u_yy, we obtain the approximate equation for the unknown at (x_i, y_j), for i = 1, 2, …, M−1 and j = 1, 2, …, N−1:

( −v_{i−1,j} + 2 v_ij − v_{i+1,j} ) / h_x²  +  ( −v_{i,j−1} + 2 v_ij − v_{i,j+1} ) / h_y²  +  σ v_ij = f_ij

v_ij = 0  for  i = 0, i = M, j = 0, j = N

• Ordering the unknowns (and also the vector f) lexicographically by y-lines:

v = ( v_{1,1}, v_{1,2}, …, v_{1,N−1}, v_{2,1}, v_{2,2}, …, v_{2,N−1}, …, v_{M−1,1}, v_{M−1,2}, …, v_{M−1,N−1} )^T

11 of 119

Yields the linear system

• We obtain a block-tridiagonal system A v = f:

    [ A_1   −I_y                          ] [ v_1     ]   [ f_1     ]
    [ −I_y   A_2   −I_y                   ] [ v_2     ]   [ f_2     ]
    [        ⋱      ⋱      ⋱             ] [  ⋮      ] = [  ⋮      ]
    [              −I_y  A_{M−2}  −I_y    ] [ v_{M−2} ]   [ f_{M−2} ]
    [                     −I_y   A_{M−1}  ] [ v_{M−1} ]   [ f_{M−1} ]

• where I_y is the diagonal matrix with 1/h_y² on the diagonal, and each diagonal block A_i is the tridiagonal matrix

    A_i = tridiag( −1/h_x² ,  2/h_x² + 2/h_y² + σ ,  −1/h_x² )
12 of 119

Iterative Methods for Linear Systems

• Consider A u = f, where A is N × N, and let v be an approximation to u.
• Two important measures:
  – The error: e = u − v, with norms

    ‖e‖_∞ = max_i |e_i|   and   ‖e‖_2 = ( Σ_{i=1}^{N} e_i² )^{1/2}

  – The residual: r = f − A v, with the corresponding norms ‖r‖_2 and ‖r‖_∞.
13 of 119

Residual correction

• Since e = u − v and r = f − A v, we can write A u = f as

A (v + e) = f

which means that A e = f − A v = r. This is the Residual Equation:

A e = r

• Residual correction: u = v + e

14 of 119

Relaxation Schemes

• Consider the 1D model problem

−u_{i−1} + 2 u_i − u_{i+1} = h² f_i,   1 ≤ i ≤ N−1,   u_0 = u_N = 0

• Jacobi method (simultaneous displacement): solve the ith equation for v_i, holding the other variables fixed:

v_i^(new) = ½ ( v_{i−1}^(old) + v_{i+1}^(old) + h² f_i ),   1 ≤ i ≤ N−1
15 of 119
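The Jacobi sweep above vectorizes naturally. A sketch in Python/NumPy for the σ = 0 model problem (the function name is ours); note that all interior points are updated from the previous iterate simultaneously, which is exactly the "simultaneous displacement" property:

```python
import numpy as np

def jacobi_sweep(v, f, h):
    """One Jacobi sweep for -u'' = f: every interior point is updated from
    the previous iterate; boundary values (v[0], v[-1]) stay fixed at zero."""
    v_new = v.copy()
    v_new[1:-1] = 0.5 * (v[:-2] + v[2:] + h**2 * f[1:-1])
    return v_new
```

On a Fourier mode v_i = sin(ikπ/N) with f = 0, one sweep multiplies the mode by cos(kπ/N), which is the eigenvalue λ_k(R_J) derived later.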

In matrix form, the relaxation is

• Let A = D − L − U, where D is diagonal and L and U are the strictly lower and upper triangular parts.
• Then A u = f becomes

(D − L − U) u = f
D u = (L + U) u + f
u = D⁻¹(L + U) u + D⁻¹ f

• Let R_J = D⁻¹(L + U); then the iteration is:

v^(new) = R_J v^(old) + D⁻¹ f
16 of 119

The iteration matrix and the error

• From the derivation, u = D⁻¹(L + U) u + D⁻¹ f, i.e., u = R_J u + D⁻¹ f.
• The iteration is v^(new) = R_J v^(old) + D⁻¹ f.
• Subtracting,

u − v^(new) = R_J u + D⁻¹ f − ( R_J v^(old) + D⁻¹ f )

• or u − v^(new) = R_J ( u − v^(old) )
• hence e^(new) = R_J e^(old)

17 of 119

Weighted Jacobi Relaxation

• Consider the iteration:

v_i^(new) ← (1 − ω) v_i^(old) + (ω/2) ( v_{i−1}^(old) + v_{i+1}^(old) + h² f_i )

• Letting A = D − L − U, the matrix form is:

v^(new) = [ (1 − ω) I + ω D⁻¹(L + U) ] v^(old) + ω h² D⁻¹ f = R_ω v^(old) + ω h² D⁻¹ f

• Note that R_ω = (1 − ω) I + ω R_J.
• It is easy to see that if e ≡ u^(exact) − u^(approx), then e^(new) = R_ω e^(old).
18 of 119

Gauss-Seidel Relaxation (1D)

• Solve equation i for v_i and update immediately.
• Equivalently: set each component of r to zero in turn.
• Component form: for i = 1, 2, …, N−1, set

v_i ← ½ ( v_{i−1} + v_{i+1} + h² f_i )

• Matrix form: with A = D − L − U,

(D − L) u = U u + f
u = (D − L)⁻¹ U u + (D − L)⁻¹ f

• Let R_G = (D − L)⁻¹ U.
• Then iterate v^(new) ← R_G v^(old) + (D − L)⁻¹ f.
• Error propagation: e^(new) ← R_G e^(old).

19 of 119
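Unlike Jacobi, Gauss-Seidel cannot be expressed as a single vectorized update because each point uses already-updated neighbors. A sketch (our function name), again for the σ = 0 model problem, together with a check that the sweeps converge to the solution of the discrete system:

```python
import numpy as np

def gauss_seidel_sweep(v, f, h):
    """One Gauss-Seidel sweep for -u'' = f: points are updated in place,
    so each update already sees the new value at its left neighbor."""
    for i in range(1, len(v) - 1):
        v[i] = 0.5 * (v[i - 1] + v[i + 1] + h**2 * f[i])
    return v

N = 16
h = 1.0 / N
f = np.ones(N + 1)
v = np.zeros(N + 1)
for _ in range(2000):
    v = gauss_seidel_sweep(v, f, h)

# compare against a direct solve of the (N-1)x(N-1) system
A = (2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / h**2
u = np.linalg.solve(A, f[1:N])
assert np.max(np.abs(v[1:N] - u)) < 1e-8
```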

Red-Black Gauss-Seidel

• Update the even (red) points:

v_{2i} ← ½ ( v_{2i−1} + v_{2i+1} + h² f_{2i} )

• Then update the odd (black) points:

v_{2i+1} ← ½ ( v_{2i} + v_{2i+2} + h² f_{2i+1} )

[Figure: alternating red and black points on the grid from x_0 to x_N]
20 of 119

Numerical Experiments

• Solve A u = 0, i.e., −u_{i−1} + 2 u_i − u_{i+1} = 0 (exact solution u = 0).
• Use Fourier modes as initial iterates, with N = 64. Component i of mode k:

( v_k )_i = sin( i k π / N ),   1 ≤ i ≤ N−1,   1 ≤ k ≤ N−1

[Plot: the modes k = 1, 3, and 6 on the interval [0, 1]]
21 of 119

Error reduction stalls

• Weighted Jacobi (ω = 2/3) on the 1D problem.
• Initial guess:

v_j = (1/3) [ sin( jπ/N ) + sin( 6jπ/N ) + sin( 32jπ/N ) ]

• Error ‖e‖_∞ plotted against iteration number:

[Plot: the error drops rapidly over the first few sweeps, then decays very slowly through 100 iterations]
22 of 119
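The stalling experiment on this slide is easy to reproduce. A Python/NumPy sketch (function and variable names are ours): the k = 6 and k = 32 components of the initial guess are damped within a few sweeps, after which the error is dominated by the smooth k = 1 mode and barely shrinks:

```python
import numpy as np

def weighted_jacobi(v, f, h, omega):
    """One weighted Jacobi sweep for the 1D model problem (boundaries fixed)."""
    vn = v.copy()
    vn[1:-1] = (1 - omega) * v[1:-1] + 0.5 * omega * (v[:-2] + v[2:] + h**2 * f[1:-1])
    return vn

N = 64
j = np.arange(N + 1)
v = (np.sin(j * np.pi / N) + np.sin(6 * j * np.pi / N) + np.sin(32 * j * np.pi / N)) / 3.0
f = np.zeros(N + 1)

errs = []
for _ in range(100):
    v = weighted_jacobi(v, f, 1.0 / N, 2.0 / 3.0)
    errs.append(np.max(np.abs(v)))   # exact solution is u = 0, so v IS the error
```

Late in the run, the per-iteration ratio errs[n+1]/errs[n] approaches λ_1(R_ω) = 1 − O(h²) ≈ 0.999, which is the stall visible in the plot.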

Convergence rates differ for different error components

• Error ‖e‖_∞ in weighted Jacobi on A u = 0 for 100 iterations, using the initial guesses v_1, v_3, and v_6.

[Plot: error histories for k = 1, 3, 6; the higher-frequency modes decay faster]
23 of 119

Analysis of stationary iterations

• Let v^(new) = R v^(old) + g. The exact solution is unchanged by the iteration, i.e., u = R u + g.
• Subtracting, we see that e^(new) = R e^(old).
• Letting e^(0) be the initial error and e^(i) the error after the ith iteration, after n iterations we have

e^(n) = Rⁿ e^(0)
24 of 119

A quick review of eigenvectors and eigenvalues

• The number λ is an eigenvalue of a matrix B, and w its associated eigenvector, if B w = λ w.
• The eigenvalues and eigenvectors are characteristics of a given matrix.
• Eigenvectors associated with distinct eigenvalues are linearly independent; if an N × N matrix has a complete set of N eigenvectors, they form a basis, i.e., for any v there exist unique c_k such that:

v = Σ_{k=1}^{N} c_k w_k

25 of 119

“Fundamental theorem of iteration”

• R is convergent (that is, Rⁿ → 0 as n → ∞) if and only if ρ(R) < 1, where

ρ(R) = max { |λ_1|, |λ_2|, …, |λ_N| }

Therefore, for any initial vector v^(0), we see that e^(n) → 0 as n → ∞ if and only if ρ(R) < 1.
• ρ(R) < 1 assures the convergence of the iteration given by R, and ρ(R) is called the convergence factor for the iteration.

26 of 119

Convergence Rate

• How many iterations are needed to reduce the initial error by 10^(−d)?

‖e^(M)‖ / ‖e^(0)‖ ≤ ‖R^M‖ ∼ ( ρ(R) )^M ∼ 10^(−d)

• So we have

M = d / log₁₀( 1/ρ(R) )

• The convergence rate is given by

rate = log₁₀( 1/ρ(R) ) = −log₁₀( ρ(R) )   [digits per iteration]
27 of 119
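The iteration-count formula above is just arithmetic, but it is worth evaluating once to see why ρ(R) = 1 − O(h²) is so painful. A small sketch (function name ours):

```python
import math

def iterations_needed(rho, digits):
    """Number of sweeps M with rho^M ~ 10^{-digits}: M = digits / log10(1/rho)."""
    return math.ceil(digits / math.log10(1.0 / rho))

# For the model problem, Jacobi-type relaxation has rho ~ 0.999 when N = 64,
# so gaining even ONE digit of accuracy takes on the order of thousands of sweeps:
print(iterations_needed(0.999, 1))
```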

Convergence analysis for weighted Jacobi on the 1D model

R_ω = (1 − ω) I + ω D⁻¹(L + U) = I − ω D⁻¹ A

R_ω = I − (ω/2) ·
    [  2  −1             ]
    [ −1   2  −1         ]
    [      ⋱   ⋱   ⋱    ]
    [          −1   2    ]

For the 1D model problem, the eigenvectors of the weighted Jacobi iteration matrix and the eigenvectors of the matrix A are the same! The eigenvalues are related as well:

λ( R_ω ) = 1 − (ω/2) λ( A )

28 of 119

Good exercise: find the eigenvalues and eigenvectors of A

• Show that the eigenvectors of A are Fourier modes!

w_{k,j} = sin( j k π / N ),   λ_k( A ) = 4 sin²( k π / 2N )

[Plots: the first few Fourier modes w_k sampled on the grid]
29 of 119
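The exercise can be checked numerically in a few lines of Python/NumPy. One caveat on scaling: the slide's λ_k(A) = 4 sin²(kπ/2N) is for the unscaled stencil h²A = tridiag(−1, 2, −1); with the 1/h² factor of the model matrix, the eigenvalues pick up a 1/h² as below:

```python
import numpy as np

N = 16
h = 1.0 / N
# 1D model matrix (sigma = 0): (1/h^2) tridiag(-1, 2, -1)
A = (2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / h**2

k = 3
j = np.arange(1, N)
w = np.sin(j * k * np.pi / N)                      # claimed eigenvector
lam = 4.0 * np.sin(k * np.pi / (2 * N))**2 / h**2  # claimed eigenvalue
assert np.allclose(A @ w, lam * w)
```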

Eigenvectors of R_ω and A are the same, the eigenvalues related

λ_k( R_ω ) = 1 − 2 ω sin²( k π / 2N )

• Expand the initial error in terms of the eigenvectors:

e^(0) = Σ_{k=1}^{N−1} c_k w_k

• After M iterations,

R^M e^(0) = Σ_{k=1}^{N−1} c_k R^M w_k = Σ_{k=1}^{N−1} c_k λ_k^M w_k

• The kth mode of the error is reduced by the factor λ_k at each iteration.
30 of 119

Relaxation suppresses eigenmodes unevenly

• Look carefully at

λ_k( R_ω ) = 1 − 2 ω sin²( k π / 2N )

[Plot: λ_k(R_ω) versus wavenumber k for ω = 1/3, 1/2, 2/3, 1, with the midpoint k = N/2 marked]

• Note that if 0 < ω ≤ 1, then |λ_k( R_ω )| < 1 for k = 1, 2, …, N−1.
• For 0 < ω ≤ 1,

λ_1 = 1 − 2 ω sin²( π / 2N ) = 1 − 2 ω sin²( πh / 2 ) = 1 − O( h² ) ≈ 1
31 of 119

Low frequencies are undamped

• Notice that no value of ω will damp out the long (i.e., low-frequency) waves.

[Plot: λ_k(R_ω) versus k for ω = 1/3, 1/2, 2/3, 1; near k = 0 every curve approaches 1]

• What value of ω gives the best damping of the short waves, N/2 ≤ k ≤ N?

• Choose ω such that

λ_{N/2}( R_ω ) = − λ_N( R_ω )   ⟹   ω = 2/3
32 of 119

The smoothing factor

• The smoothing factor is the largest absolute value among the eigenvalues in the upper half of the spectrum of the iteration matrix:

smoothing factor = max |λ_k( R )|   for N/2 ≤ k ≤ N

• For R_ω with ω = 2/3, the smoothing factor is 1/3, since

λ_{N/2} = 1/3,   λ_N = −1/3,   and   |λ_k| < 1/3 for N/2 < k < N

• But for long waves ( k ≪ N/2 ),

λ_k ≈ 1 − k² π² h² / 3
33 of 119
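The smoothing factor of weighted Jacobi can be computed directly from the eigenvalue formula. A sketch (function name ours) confirming that ω = 2/3 gives the factor 1/3, and that unweighted Jacobi (ω = 1) is a far worse smoother because λ_k ≈ −1 at the top of the spectrum:

```python
import numpy as np

def smoothing_factor(omega, N):
    """max |lambda_k(R_omega)| over the oscillatory half N/2 <= k <= N-1,
    where lambda_k = 1 - 2*omega*sin^2(k*pi/(2N))."""
    k = np.arange(N // 2, N)
    lam = 1.0 - 2.0 * omega * np.sin(k * np.pi / (2 * N))**2
    return np.max(np.abs(lam))

print(smoothing_factor(2.0 / 3.0, 64))   # ~ 1/3
print(smoothing_factor(1.0, 64))         # close to 1: poor smoothing
```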

Convergence of Jacobi on Au=0

[Plots: iterations required versus wavenumber k, for unweighted Jacobi and for weighted Jacobi]

• Jacobi method on A u = 0 with N = 64. Shown is the number of iterations required to reduce ‖e‖_∞ below 0.01.
• Initial guess: v_{k,j} = sin( j k π / N )

34 of 119

Weighted Jacobi Relaxation Smooths the Error

• Initial error:

v_j = sin( 2jπ/N ) + (1/2) sin( 16jπ/N ) + (1/2) sin( 32jπ/N )

[Plot: the oscillatory initial error]

• Error after 35 iteration sweeps:

[Plot: the remaining error, now smooth]

Many relaxation schemes have the smoothing property: oscillatory modes of the error are eliminated effectively, but smooth modes are damped very slowly.
35 of 119

Other relaxation schemes may be analyzed similarly

• Gauss-Seidel relaxation applied to the 3-point difference matrix A (1D model problem): R_G = (D − L)⁻¹ U.
• Good exercise: show that

λ_k( R_G ) = cos²( k π / N ),   w_{k,j} = ( λ_k )^{j/2} sin( j k π / N )

[Plots: the eigenvalues λ_k(R_G) versus k, and a sample eigenvector w_k]
36 of 119

Convergence of Gauss-Seidel on Au=0

• The eigenvectors of R_G are not the same as those of A: Gauss-Seidel mixes the modes of A.

[Plot: iterations required versus wavenumber k]

Gauss-Seidel on A u = 0 with N = 64. Number of iterations required to reduce ‖e‖_∞ below 0.01. Initial guesses: the modes of A, v_{k,j} = sin( j k π / N ).
37 of 119

First observation toward multigrid

• Many relaxation schemes have the smoothing property: oscillatory modes of the error are eliminated effectively, but smooth modes are damped very slowly.
• This might seem like a limitation, but by using coarse grids we can use the smoothing property to good advantage.

[Figure: a fine grid Ω^h from x_0 to x_N, and a coarse grid Ω^{2h} containing every other point]

• Why use coarse grids??
38 of 119

Reason #1 for using coarse grids: Nested Iteration

• Coarse grids can be used to compute an improved initial guess for the fine-grid relaxation. This is advantageous because:
  – Relaxation on the coarse grid is much cheaper (1/2 as many points in 1D, 1/4 in 2D, 1/8 in 3D).
  – Relaxation on the coarse grid has a marginally better convergence rate, for example 1 − O(4h²) instead of 1 − O(h²).
39 of 119

Idea! Nested Iteration

• …
• Relax on Au=f on Ω^{4h} to obtain an initial guess for Ω^{2h}.
• Relax on Au=f on Ω^{2h} to obtain an initial guess v^h.
• Relax on Au=f on Ω^h to obtain … the final solution???

• But what is "Au=f" on Ω^{2h}, Ω^{4h}, … ?
• What if the error still has smooth components when we get to the fine grid Ω^h?
40 of 119

Reason #2 for using a coarse grid: smooth error is (relatively) more oscillatory there!

• A smooth function:

[Plot: a smooth sine-like curve sampled on the fine grid]

• can be represented by linear interpolation from a coarser grid:

[Plot: the same curve sampled on the coarse grid and linearly interpolated]

On the coarse grid, the smooth error appears to be relatively higher in frequency: in the example it is the 4-mode out of a possible 16 on the fine grid, 1/4 of the way up the spectrum. On the coarse grid, it is the 4-mode out of a possible 8, hence 1/2 of the way up the spectrum. Relaxation will be more effective on this mode if done on the coarser grid!!

41 of 119

For k = 1, 2, …, N/2, the kth mode is preserved on the coarse grid

w^h_{k,2j} = sin( 2jkπ / N ) = sin( jkπ / (N/2) ) = w^{2h}_{k,j}

• Also, note that w^h_{N/2} → 0 on the coarse grid.

[Plots: the k = 4 mode on an N = 12 grid, and the same mode seen on the N = 6 coarse grid]
42 of 119

For k > N/2, w^h_k is invisible on the coarse grid: aliasing!!

• For k > N/2, the kth mode on the fine grid is aliased and appears as the (N−k)th mode on the coarse grid:

( w^h_k )_{2j} = sin( 2jkπ / N ) = −sin( 2jπ(N−k) / N ) = −sin( jπ(N−k) / (N/2) ) = −( w^{2h}_{N−k} )_j

[Plots: the k = 9 mode on an N = 12 grid, and its aliased appearance as the k = 3 mode on the N = 6 coarse grid]
43 of 119

Second observation toward multigrid:

• Recall the residual correction idea: let v be an approximation to the solution of Au=f, with residual r = f − Av. Then the error e = u − v satisfies Ae = r.
• After relaxing on Au=f on the fine grid, the error will be smooth. On the coarse grid, however, this error appears more oscillatory, and relaxation will be more effective.
• Therefore we go to a coarse grid and relax on the residual equation Ae = r, with an initial guess of e = 0.

44 of 119

Idea! Coarse-grid correction

• Relax on Au=f on Ω^h to obtain an approximation v^h.
• Compute r = f − A v^h.
• Relax on Ae=r on Ω^{2h} to obtain an approximation to the error, e^{2h}.
• Correct the approximation: v^h ← v^h + e^{2h}.
• Clearly, we need methods for the mappings Ω^h → Ω^{2h} and Ω^{2h} → Ω^h.

1D Interpolation (Prolongation)

• Mapping from the coarse grid to the fine grid: I^h_{2h} : Ω^{2h} → Ω^h.
• Let v^h, v^{2h} be defined on Ω^h, Ω^{2h}. Then I^h_{2h} v^{2h} = v^h, where

v^h_{2i} = v^{2h}_i
v^h_{2i+1} = ½ ( v^{2h}_i + v^{2h}_{i+1} ),   0 ≤ i ≤ N/2 − 1
46 of 119
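The two interpolation rules above translate directly into strided array operations. A Python/NumPy sketch (function name ours), assuming the vectors include the two boundary points:

```python
import numpy as np

def interpolate(v2h):
    """Linear interpolation I_{2h}^h: coarse grid (n+1 points, boundaries
    included) to fine grid (2n+1 points)."""
    n = len(v2h) - 1                       # number of coarse intervals
    vh = np.zeros(2 * n + 1)
    vh[0::2] = v2h                         # coincident points copy straight over
    vh[1::2] = 0.5 * (v2h[:-1] + v2h[1:])  # in-between points: average of neighbors
    return vh
```

Linear interpolation reproduces any linear function exactly, which makes a convenient sanity check.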

1D Interpolation (Prolongation)

• Values at points on the coarse grid map unchanged to the fine grid.
• Values at fine-grid points NOT on the coarse grid are the averages of their coarse-grid neighbors.

[Figure: coarse-grid values on Ω^{2h} interpolated up to the fine grid Ω^h]
47 of 119

The prolongation operator (1D)

• We may regard I^h_{2h} as a linear operator from ℝ^{N/2−1} to ℝ^{N−1}.
• e.g., for N = 8, I^h_{2h} is the 7 × 3 matrix

    [ 1/2   0    0  ]
    [  1    0    0  ]
    [ 1/2  1/2   0  ]
    [  0    1    0  ]
    [  0   1/2  1/2 ]
    [  0    0    1  ]
    [  0    0   1/2 ]

which maps ( v^{2h}_1, v^{2h}_2, v^{2h}_3 )^T to ( v^h_1, …, v^h_7 )^T.

• I^h_{2h} has full rank, and thus a trivial null space.
48 of 119

How well does v2h approximate u?

• Imagine that a coarse-grid approximation v^{2h} has been found. How well does it approximate the exact solution u? (Think of u as the error!)

• If u is smooth, a coarse-grid interpolant of v^{2h} may do very well.

49 of 119

How well does v2h approximate u?

• Imagine that a coarse-grid approximation v^{2h} has been found. How well does it approximate the exact solution u? (Think of u as the error!)

• If u is oscillatory, a coarse-grid interpolant of v^{2h} may not work well.

50 of 119

Moral of this story:

• If u is smooth, a coarse-grid interpolant of v^{2h} may do very well.
• If u is oscillatory, a coarse-grid interpolant of v^{2h} may not work well.
• Therefore, nested iteration is most effective when the error is smooth!

51 of 119

1D Restriction by injection

• Mapping from the fine grid to the coarse grid: I^{2h}_h : Ω^h → Ω^{2h}.
• Let v^h, v^{2h} be defined on Ω^h, Ω^{2h}. Then I^{2h}_h v^h = v^{2h}, where

v^{2h}_i = v^h_{2i}
52 of 119

1D Restriction by full-weighting

• Let v^h, v^{2h} be defined on Ω^h, Ω^{2h}. Then I^{2h}_h v^h = v^{2h}, where

v^{2h}_i = ¼ ( v^h_{2i−1} + 2 v^h_{2i} + v^h_{2i+1} )
53 of 119
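Full weighting is the mirror image of the interpolation sketch earlier. A Python/NumPy version (function name ours), again with boundary points included; the boundary values of the coarse vector are left at zero, consistent with the homogeneous boundary conditions of the model problem:

```python
import numpy as np

def restrict_fw(vh):
    """Full weighting I_h^{2h}: fine grid (2n+1 points) to coarse grid (n+1
    points).  Interior coarse value i = 1/4*(v[2i-1] + 2*v[2i] + v[2i+1])."""
    v2h = np.zeros((len(vh) - 1) // 2 + 1)
    v2h[1:-1] = 0.25 * (vh[1:-2:2] + 2.0 * vh[2:-1:2] + vh[3::2])
    return v2h
```

Like linear interpolation, the full-weighting average is exact on linear data at interior points, which gives a quick check.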

The restriction operator (1D)

• We may regard I^{2h}_h as a linear operator from ℝ^{N−1} to ℝ^{N/2−1}.
• e.g., for N = 8, I^{2h}_h is the 3 × 7 matrix

    [ 1/4  1/2  1/4   0    0    0    0  ]
    [  0    0   1/4  1/2  1/4   0    0  ]
    [  0    0    0    0   1/4  1/2  1/4 ]

which maps ( v^h_1, …, v^h_7 )^T to ( v^{2h}_1, v^{2h}_2, v^{2h}_3 )^T.

• I^{2h}_h has rank ∼ N/2, and thus dim( NS( I^{2h}_h ) ) ∼ N/2.
54 of 119

Prolongation and restriction are often nicely related

• For the 1D examples, linear interpolation and full weighting are related: the columns of I^h_{2h} carry the stencil ½ [1 2 1]^T while the rows of I^{2h}_h carry the stencil ¼ [1 2 1], so that

I^h_{2h} = 2 ( I^{2h}_h )^T

• A commonly used, and highly useful, requirement is that

I^h_{2h} = c ( I^{2h}_h )^T   for some c ∈ ℝ
55 of 119

2D Prolongation

v^h_{2i,2j} = v^{2h}_{ij}
v^h_{2i+1,2j} = ½ ( v^{2h}_{ij} + v^{2h}_{i+1,j} )
v^h_{2i,2j+1} = ½ ( v^{2h}_{ij} + v^{2h}_{i,j+1} )
v^h_{2i+1,2j+1} = ¼ ( v^{2h}_{ij} + v^{2h}_{i+1,j} + v^{2h}_{i,j+1} + v^{2h}_{i+1,j+1} )

    ] 1/4  1/2  1/4 [
    ] 1/2   1   1/2 [
    ] 1/4  1/2  1/4 [

We denote the operator by a “give to” stencil, ] [. Centered over a c-point, it shows what fraction of the c-point’s value is contributed to the neighboring f-points.

2D Restriction (full-weighting)

    [ 1/16  1/8  1/16 ]
    [ 1/8   1/4  1/8  ]
    [ 1/16  1/8  1/16 ]

We denote the operator by a stencil, [ ]. Centered over a c-point, it shows what fractions of the neighboring f-points’ values are combined into the value at the c-point.

Now, let’s put all these ideas together

• Nested iteration (effective on smooth error modes)
• Relaxation (effective on oscillatory error modes)
• The residual equation (i.e., residual correction)
• Prolongation and restriction
58 of 119

Coarse Grid Correction Scheme: v^h ← CG( v^h, f^h, α1, α2 )

• 1) Relax α1 times on A^h u^h = f^h on Ω^h with arbitrary initial guess v^h.
• 2) Compute r^h = f^h − A^h v^h.
• 3) Compute r^{2h} = I^{2h}_h r^h.
• 4) Solve A^{2h} e^{2h} = r^{2h} on Ω^{2h}.
• 5) Correct the fine-grid solution: v^h ← v^h + I^h_{2h} e^{2h}.
• 6) Relax α2 times on A^h u^h = f^h on Ω^h with initial guess v^h.
59 of 119

Coarse-grid Correction

Relax on A^h u^h = f^h
Compute r^h = f^h − A^h u^h
    ↓ restrict:  r^{2h} = I^{2h}_h r^h
Solve A^{2h} e^{2h} = r^{2h},  i.e.,  e^{2h} = ( A^{2h} )⁻¹ r^{2h}
    ↑ interpolate:  e^h ≈ I^h_{2h} e^{2h}
Correct:  u^h ← u^h + e^h
60 of 119

What is A^{2h}?

• For this scheme to work, we must have A^{2h}, a coarse-grid operator. For the moment, we will simply assume that A^{2h} is “the coarse-grid version” of the fine-grid operator A^h.
• We will return to the question of constructing A^{2h} later.
61 of 119

How do we “solve” the coarse-grid residual equation? Recursion!

Going down (relax, then restrict the residual and recurse); here G^ν denotes ν sweeps of the relaxation scheme:

  Ω^h:   u^h ← G^ν( A^h, f^h );       f^{2h} ← I^{2h}_h ( f^h − A^h u^h )
  Ω^{2h}: u^{2h} ← G^ν( A^{2h}, f^{2h} );  f^{4h} ← I^{4h}_{2h} ( f^{2h} − A^{2h} u^{2h} )
  Ω^{4h}: u^{4h} ← G^ν( A^{4h}, f^{4h} );  f^{8h} ← I^{8h}_{4h} ( f^{4h} − A^{4h} u^{4h} )
  Ω^{8h}: u^{8h} ← G^ν( A^{8h}, f^{8h} )
  ⋮
  Ω^H (coarsest): solve exactly,  e^H = ( A^H )⁻¹ f^H

Coming back up (interpolate and correct):

  u^{8h} ← u^{8h} + e^{8h}
  e^{4h} ← I^{4h}_{8h} u^{8h};   u^{4h} ← u^{4h} + e^{4h}
  e^{2h} ← I^{2h}_{4h} u^{4h};   u^{2h} ← u^{2h} + e^{2h}
  e^h ← I^h_{2h} u^{2h};         u^h ← u^h + e^h
62 of 119

V-cycle (recursive form): v^h ← MV^h( v^h, f^h )

1) Relax α1 times on A^h u^h = f^h, initial guess v^h arbitrary.

2) If Ω^h is the coarsest grid, go to 4). Else:

   f^{2h} ← I^{2h}_h ( f^h − A^h v^h )
   v^{2h} ← 0
   v^{2h} ← MV^{2h}( v^{2h}, f^{2h} )

3) Correct: v^h ← v^h + I^h_{2h} v^{2h}.

4) Relax α2 times on A^h u^h = f^h, initial guess v^h.
63 of 119
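The recursive definition above maps almost line for line into code. A compact Python/NumPy sketch for the 1D model problem (all names ours; weighted Jacobi with ω = 2/3 as the smoother, full weighting and linear interpolation as on the earlier slides; N is assumed to be a power of two):

```python
import numpy as np

def relax(v, f, h, sweeps, omega=2.0/3.0):
    """Weighted Jacobi sweeps for -u'' = f with zero boundary values."""
    for _ in range(sweeps):
        vn = v.copy()
        vn[1:-1] = (1 - omega) * v[1:-1] + 0.5 * omega * (v[:-2] + v[2:] + h**2 * f[1:-1])
        v = vn
    return v

def restrict_fw(vh):
    v2h = np.zeros((len(vh) - 1) // 2 + 1)
    v2h[1:-1] = 0.25 * (vh[1:-2:2] + 2 * vh[2:-1:2] + vh[3::2])
    return v2h

def interpolate(v2h):
    vh = np.zeros(2 * (len(v2h) - 1) + 1)
    vh[0::2] = v2h
    vh[1::2] = 0.5 * (v2h[:-1] + v2h[1:])
    return vh

def v_cycle(v, f, h, a1=2, a2=2):
    """One MV-cycle: relax a1 times, correct from the coarse grid, relax a2 times."""
    v = relax(v, f, h, a1)
    if len(v) <= 3:                 # coarsest grid: one interior point
        v[1] = 0.5 * h**2 * f[1]    # exact solve of (2/h^2) v_1 = f_1
        return v
    r = np.zeros_like(v)
    r[1:-1] = f[1:-1] - (-v[:-2] + 2 * v[1:-1] - v[2:]) / h**2
    e2h = v_cycle(np.zeros((len(v) - 1) // 2 + 1), restrict_fw(r), 2 * h, a1, a2)
    v = v + interpolate(e2h)
    return relax(v, f, h, a2)
```

Each cycle reduces the algebraic error by a factor that does not degrade as N grows, which is the whole point of the method.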

Storage costs: v^h and f^h must be stored on each level

• In 1D, each coarse grid has about half the number of points of the next finer grid.
• In 2D, each coarse grid has about one-fourth the number of points of the next finer grid.
• In d dimensions, each coarse grid has about 2^{−d} times the number of points of the next finer grid.
• Total storage cost:

2 N^d ( 1 + 2^{−d} + 2^{−2d} + 2^{−3d} + … + 2^{−Md} ) < 2 N^d / ( 1 − 2^{−d} )

i.e., less than 2, 4/3, and 8/7 times the cost of storage on the fine grid for 1-, 2-, and 3-d problems, respectively.
64 of 119

Computation Costs

• Let 1 Work Unit (WU) be the cost of one relaxation sweep on the fine grid.
• Ignore the cost of restriction and interpolation (typically about 20% of the total cost).
• Consider a V-cycle with 1 pre-coarse-grid-correction relaxation sweep and 1 post-coarse-grid-correction relaxation sweep.
• Cost of such a V-cycle (in WU):

2 ( 1 + 2^{−d} + 2^{−2d} + 2^{−3d} + … + 2^{−Md} ) < 2 / ( 1 − 2^{−d} )

• The cost is about 4, 8/3, and 16/7 WU per V-cycle in 1, 2, and 3 dimensions.
65 of 119
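The geometric-series bound above is simple enough to evaluate directly, and doing so reproduces the per-dimension figures quoted on the slide (function name ours):

```python
def v_cycle_work_units(d):
    """Upper bound on the cost, in fine-grid work units, of a (1,1) V-cycle
    in d dimensions: 2*(1 + 2^-d + 2^-2d + ...) < 2 / (1 - 2^-d)."""
    return 2.0 / (1.0 - 2.0**(-d))

for d in (1, 2, 3):
    print(d, v_cycle_work_units(d))   # 4, 8/3 ~ 2.67, 16/7 ~ 2.29
```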

Convergence Analysis

• First, a heuristic argument:
  – The convergence factor for the oscillatory modes of the error (i.e., the smoothing factor, max |λ_k(R)| for N/2 ≤ k ≤ N) is small and independent of the grid spacing h.
  – The multigrid cycling schemes focus the relaxation on the oscillatory components on each level.

[Diagram: the spectrum k = 1 … N split at k = N/2 into smooth and oscillatory halves; relaxation on the fine grid handles the oscillatory half, relaxation on the first coarse grid the next band down, and so on]

The overall convergence factor for multigrid methods is small and independent of h!
66 of 119

Convergence analysis, a little more precisely...

• Continuous problem: A u = f, with u_i = u(x_i).
• Discrete problem: A^h u^h = f^h, with v^h ≈ u^h.
• Global error: E_i = u(x_i) − u^h_i, with ‖E‖ ≤ K h^p (p = 2 for the model problem).
• Algebraic error: e_i = u^h_i − v^h_i.
• Suppose a tolerance ε is specified such that v^h must satisfy ‖u − v^h‖ ≤ ε.
• Then this is guaranteed if

‖u − u^h‖ + ‖u^h − v^h‖ = ‖E‖ + ‖e‖ ≤ ε
67 of 119

We can satisfy the requirement by imposing two conditions

1) ‖E‖ ≤ ε/2. We use this requirement to determine a grid spacing h* from

h* ≤ ( ε / 2K )^{1/p}

2) ‖e‖ ≤ ε/2, which determines the number of iterations required.

• If we iterate until ‖e‖ ≤ ε/2 = K (h*)^p on Ω^{h*}, then we have converged to the level of truncation.
68 of 119

Converging to the level of truncation

• Use an MV scheme with convergence rate γ < 1 independent of h (fixed α1 and α2).
• Assume a d-dimensional problem on an N × N × … × N grid with h = N⁻¹.
• The V-cycles must reduce the error from ‖e‖ ∼ O(1) to ‖e‖ ∼ O( h^p ) ∼ O( N^{−p} ).
• We can determine θ, the number of V-cycles required to accomplish this.
69 of 119

Work needed to converge to the level of truncation

• Since θ V-cycles at convergence rate γ are required, we see that

γ^θ ∼ O( N^{−p} )

implying that θ ∼ O( log N ).
• Since one V-cycle costs O(1) WU and one WU is O(N^d), the cost of converging to the level of truncation using the MV method is O( N^d log N ), which is comparable to fast direct methods (FFT-based).
70 of 119

A numerical example

• Consider the two-dimensional model problem (with σ = 0), given by

−u_xx − u_yy = 2 [ (1 − 6x²) y² (1 − y²) + (1 − 6y²) x² (1 − x²) ]

inside the unit square, with u = 0 on the boundary.
• The solution to this problem is

u(x, y) = ( x² − x⁴ )( y⁴ − y² )

• We examine the effectiveness of MV cycling to solve this problem on N × N grids [(N−1) × (N−1) interior] for N = 16, 32, 64, 128.
71 of 119

Numerical Results, MV cycling

Shown are the results of 16 V-cycles. We display, at the end of each cycle, the residual norm, the error norm, and the ratio of these norms to their values at the end of the previous cycle, for N = 16, 32, 64, 128.
72 of 119

Look again at nested iteration

• Idea: it’s cheaper to solve a problem (i.e., it takes fewer iterations) if the initial guess is good.
• How to get a good initial guess:
  – “Solve” the problem on the coarse grid first.
  – Interpolate the coarse solution to the fine grid.
  – Use the interpolated coarse solution as the initial guess on the fine grid.
• Now, let’s use the V-cycle as the solver on each grid level! This defines the Full Multigrid (FMG) cycle.

73 of 119

The Full Multigrid (FMG) cycle: v^h ← FMG( f^h )

• Initialize f^h, f^{2h}, f^{4h}, …, f^H.
• Solve on the coarsest grid: v^H = ( A^H )⁻¹ f^H.
• Interpolate the initial guess: v^{2h} ← I^{2h}_{4h} v^{4h}.
• Perform a V-cycle: v^{2h} ← MV^{2h}( v^{2h}, f^{2h} ).
• Interpolate the initial guess: v^h ← I^h_{2h} v^{2h}.
• Perform a V-cycle: v^h ← MV^h( v^h, f^h ).
74 of 119

Full Multigrid (FMG)

[Diagram of the FMG cycle; the legend distinguishes:]
• Restriction
• Interpolation
• High-order interpolation
75 of 119

FMG-cycle (recursive form): v^h ← FMG( f^h )

1) Initialize f^h, f^{2h}, f^{4h}, …, f^H.

2) If Ω^h is the coarsest grid, solve exactly. Else:

   v^{2h} ← FMG( f^{2h} )

3) Set the initial guess: v^h ← I^h_{2h} v^{2h}.

4) v^h ← MV( v^h, f^h ),   η times.
76 of 119

Cost of an FMG cycle

• One V-cycle is performed per level, at a cost of less than 2 / (1 − 2^{−d}) WU per grid (where the WU is for the size grid involved).
• The size of the WU for coarse grid j is 2^{−jd} times the size of the WU on the fine grid (grid 0).
• Hence, the cost of the FMG(1,1) cycle is less than

[ 2 / (1 − 2^{−d}) ] ( 1 + 2^{−d} + 2^{−2d} + … ) = 2 / ( 1 − 2^{−d} )²

• d = 1: 8 WU;   d = 2: 7/2 WU;   d = 3: 5/2 WU.
77 of 119

How to tell if truncation error is reached with Full Multigrid (FMG)

• If truncation error is reached, ‖e‖ ∼ O(h²) for each grid level h. The norms of the errors at the “solution” points in the cycle should form a Cauchy sequence, with

‖e^h‖ ≈ (1/4) ‖e^{2h}‖
78 of 119

Cost to achieve convergence to truncation by the FMV method

• Consider using the FMV method, which solves the problem on Ω^{2h} to the level of truncation before going to Ω^h, i.e.,

‖e^{2h}‖ = ‖u^{2h} − v^{2h}‖ ∼ K (2h)^p

• We ask that ‖e^h‖ ∼ K h^p = 2^{−p} ‖e^{2h}‖, which implies that the algebraic error must be reduced by a factor of 2^{−p} on Ω^h. Hence, θ1 V-cycles are needed, where

γ^{θ1} ∼ 2^{−p}

thus θ1 ∼ O(1), and the computational cost of the FMV method is O( N^d ).

79 of 119

A numerical example

• Consider again the two-dimensional model problem (with σ = 0), given by

−u_xx − u_yy = 2 [ (1 − 6x²) y² (1 − y²) + (1 − 6y²) x² (1 − x²) ]

inside the unit square, with u = 0 on the boundary.
• We examine the effectiveness of FMG cycling to solve this problem on N × N grids [(N−1) × (N−1) interior] for N = 16, 32, 64, 128.

80 of 119

FMG results

• Shown are results for three FMG cycles, and a comparison to MV-cycle results.
81 of 119

What is A^{2h}?

• Recall the coarse-grid correction scheme:
  – 1) Relax on A^h u^h = f^h on Ω^h to get v^h.
  – 2) Compute f^{2h} = I^{2h}_h ( f^h − A^h v^h ).
  – 4) Solve A^{2h} e^{2h} = r^{2h} on Ω^{2h}.
  – 5) Correct the fine-grid solution: v^h ← v^h + I^h_{2h} e^{2h}.

• Assume that e^h ∈ Range( I^h_{2h} ). Then the residual equation can be written

r^h = A^h e^h = A^h I^h_{2h} u^{2h}   for some u^{2h} ∈ Ω^{2h}

• How does A^h act upon Range( I^h_{2h} )?
82 of 119

How does A^h act on Range(I_2h^h)?

[Figure: a coarse-grid vector u^2h, its interpolant I_2h^h u^2h, and the result A^h I_2h^h u^2h.]

• The odd rows of A^h I_2h^h are zero (in 1D only), so r_(2i+1) = 0. We therefore keep the even rows of A^h I_2h^h for the residual equation on Ω^2h. These rows can be picked out by applying restriction:

  I_h^2h A^h I_2h^h u^2h = I_h^2h r^h

83 of 119

Building A^2h

• The residual equation on the coarse grid is

  I_h^2h A^h I_2h^h u^2h = I_h^2h r^h

• Therefore, we identify the coarse-grid operator as

  A^2h = I_h^2h A^h I_2h^h

**• Next, we determine how to compute the entries of the coarse-grid matrix.**

84 of 119
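Since this definition is just a triple matrix product, it can be checked directly (a small sketch with plain-Python matrices; N = 8 and the helper names are illustrative choices):

```python
# Form A2h = I_h^2h A^h I_2h^h explicitly for the 1-D model operator on
# N = 8 intervals and confirm it reproduces the 2h-grid stencil.
N, h = 8, 1.0/8

def matmul(A, B):
    return [[sum(A[i][t]*B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# fine-grid operator (1/h**2) tridiag(-1, 2, -1) on the N-1 interior points
Ah = [[(2 if i == j else -1 if abs(i - j) == 1 else 0)/h**2
       for j in range(N-1)] for i in range(N-1)]

# linear interpolation (7 x 3); full weighting is R = (1/2) P^T
P = [[0.0]*(N//2 - 1) for _ in range(N-1)]
for j in range(N//2 - 1):
    P[2*j][j], P[2*j+1][j], P[2*j+2][j] = 0.5, 1.0, 0.5
R = [[0.5*P[i][j] for i in range(N-1)] for j in range(N//2 - 1)]

A2h = matmul(R, matmul(Ah, P))
print(A2h[1])    # -> [-16.0, 32.0, -16.0] = (1/(2h)**2) * [-1, 2, -1]
```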

Computing the ith row of A^2h

• Compute A^2h e_i^2h, where e_i^2h = (0, 0, . . . , 0, 1, 0, . . . , 0)^T is the ith coarse-grid unit vector. Tracking it through A^2h = I_h^2h A^h I_2h^h (entries shown around coarse point i, i.e., fine points 2i−2 . . . 2i+2):

  e_i^2h (coarse points i−1, i, i+1):       0      1      0
  I_2h^h e_i^2h (fine points):              0  1/2  1  1/2  0
  A^h I_2h^h e_i^2h (fine points):   −1/(2h^2)  0  1/h^2  0  −1/(2h^2)
  I_h^2h A^h I_2h^h e_i^2h:          −1/(4h^2)  1/(2h^2)  −1/(4h^2)

85 of 119

**The ith row of A^2h looks a lot like a row of A^h!**

• The ith row of A^2h is

  (1/(2h)^2) [ −1  2  −1 ],

which is the Ω^2h version of A^h.

• Therefore, IF relaxation on Ω^h leaves only error in the range of interpolation, then solving the coarse-grid equation determines the error exactly!
• In general this will not be the case, but this argument certainly leads to a plausible representation for A^2h.

86 of 119


**The variational properties**

• The definition of A^2h that resulted from the foregoing line of reasoning is useful for both theoretical and practical reasons. Together with the commonly used relationship between restriction and prolongation, we have the following "variational properties":

  A^2h = I_h^2h A^h I_2h^h    (Galerkin condition)
  I_2h^h = c (I_h^2h)^T       for some c in ℝ

87 of 119

**Properties of the Grid Transfer Operators: Restriction**

• Full weighting: I_h^2h : Ω^h → Ω^2h, or I_h^2h : ℝ^(N−1) → ℝ^(N/2−1).
• For N = 8 (a 3 × 7 matrix):

  I_h^2h = 1/4 × [ 1 2 1 . . . . ]
                 [ . . 1 2 1 . . ]
                 [ . . . . 1 2 1 ]

**• I_h^2h has rank N/2 − 1 and a null space N(I_h^2h) with dimension N/2.**

88 of 119
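The same operator written out as code (a sketch; as on the slide, only the interior points are indexed):

```python
# The N = 8 full-weighting operator as an explicit 3 x 7 matrix.
N = 8
R = [[0.0]*(N-1) for _ in range(N//2 - 1)]
for j in range(N//2 - 1):
    R[j][2*j], R[j][2*j+1], R[j][2*j+2] = 0.25, 0.5, 0.25

v = [1.0]*(N-1)                        # a constant grid function
coarse = [sum(R[j][i]*v[i] for i in range(N-1)) for j in range(N//2 - 1)]
print(coarse)                          # -> [1.0, 1.0, 1.0]: full weighting preserves constants
```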

**Spectral properties of I_h^2h**

• How does I_h^2h act on the eigenvectors of A^h?
• Consider w_k,j^h = sin(jkπ/N), 1 ≤ k ≤ N−1, 0 ≤ j ≤ N.
**• Good exercise: show that**

  (I_h^2h w_k^h)_j = cos^2(kπ/2N) w_k,j^2h ≡ c_k w_k,j^2h

for 1 ≤ k ≤ N/2.

89 of 119

**Spectral properties of I_h^2h (cont.)**

• i.e., I_h^2h [kth mode on Ω^h] = c_k [kth mode on Ω^2h].

[Figure: the k = 2 mode on Ω^h (N = 8) restricts to the k = 2 mode on Ω^2h (N = 4).]

90 of 119

**Spectral properties of I_h^2h (cont.)**

• Let k′ = N − k for 1 ≤ k < N/2, so that N/2 < k′ < N.
**• Then (another good exercise!)**

  (I_h^2h w_k′^h)_j = −sin^2(kπ/2N) w_k,j^2h ≡ −s_k w_k,j^2h

91 of 119

**Spectral properties of I_h^2h (cont.)**

• i.e., I_h^2h [(N−k)th mode on Ω^h] = −s_k [kth mode on Ω^2h]: oscillatory fine-grid modes alias to smooth coarse-grid modes.

[Figure: the k = 6 mode on Ω^h (N = 8) restricts to a multiple of the k = 2 mode on Ω^2h (N = 4).]

92 of 119

**Spectral properties of I_h^2h (cont.)**

• Summarizing:

  I_h^2h w_k^h = c_k w_k^2h        1 ≤ k < N/2, k′ = N − k
  I_h^2h w_k′^h = −s_k w_k^2h
  I_h^2h w_(N/2)^h = 0

• Complementary modes: W_k = span{ w_k^h, w_k′^h }, and I_h^2h : W_k → span{ w_k^2h }.

93 of 119
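These three facts are easy to confirm numerically (N = 8 and k = 2 below are illustrative choices, not from the deck):

```python
import math

# Check: full weighting sends w_k^h -> c_k w_k^2h, w_k'^h -> -s_k w_k^2h,
# and annihilates w_{N/2}^h.
N, k = 8, 2
kp = N - k
c_k = math.cos(k*math.pi/(2*N))**2
s_k = math.sin(k*math.pi/(2*N))**2

def mode(k, n):                 # values of w_k at points 0..n of an n-interval grid
    return [math.sin(j*k*math.pi/n) for j in range(n+1)]

def fullweight(v):              # fine values 0..N -> coarse interior 1..N/2-1
    return [0.25*v[2*j-1] + 0.5*v[2*j] + 0.25*v[2*j+1] for j in range(1, N//2)]

w2 = mode(k, N//2)[1:-1]        # coarse-grid target mode, interior points only
err1 = max(abs(a - c_k*b) for a, b in zip(fullweight(mode(k, N)), w2))
err2 = max(abs(a + s_k*b) for a, b in zip(fullweight(mode(kp, N)), w2))
err3 = max(abs(a) for a in fullweight(mode(N//2, N)))
print(err1, err2, err3)         # all three are zero up to rounding
```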

Null space of I_h^2h

• Observe that N(I_h^2h) = span{ A^h e_i^h : i odd }, where e_i^h is the ith unit vector.
• Let η_i ≡ A^h e_i^h; each η_i looks like (0, . . . , 0, −1, 2, −1, 0, . . . , 0)^T / h^2, centered at an odd point.
**• While the η_i look oscillatory, they contain all of the Fourier modes of A^h, i.e.,**

  η_i = Σ_{k=1}^{N−1} a_k w_k,   a_k ≠ 0

• All the Fourier modes of A^h are needed to represent the null space of restriction!

94 of 119
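This claim can be checked directly in 1-D (N = 8 is an illustrative size): full weighting of η_i = A^h e_i is identically zero for every odd interior point i.

```python
# Applying full weighting to eta_i = A^h e_i for odd i gives the zero vector.
N, h = 8, 1.0/8

def fullweight(v):
    return [0.25*v[2*j-1] + 0.5*v[2*j] + 0.25*v[2*j+1] for j in range(1, N//2)]

for i in (1, 3, 5, 7):                    # odd interior fine points
    e = [1.0 if j == i else 0.0 for j in range(N+1)]
    eta = [0.0]*(N+1)
    for j in range(1, N):
        eta[j] = (-e[j-1] + 2*e[j] - e[j+1])/h**2
    print(i, fullweight(eta))             # -> i [0.0, 0.0, 0.0] for every odd i
```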

**Properties of the Grid Transfer Operators: Interpolation**

• Interpolation: I_2h^h : Ω^2h → Ω^h, or I_2h^h : ℝ^(N/2−1) → ℝ^(N−1).
• For N = 8 (a 7 × 3 matrix):

  I_2h^h = 1/2 × [ 1 . . ]
                 [ 2 . . ]
                 [ 1 1 . ]
                 [ . 2 . ]
                 [ . 1 1 ]
                 [ . . 2 ]
                 [ . . 1 ]

**• I_2h^h has full rank and trivial null space.**

95 of 119

Spectral properties of I_2h^h

• How does I_2h^h act on the eigenvectors of A^2h?
• Consider (w_k^2h)_j = sin(jkπ/(N/2)), 1 ≤ k ≤ N/2−1, 0 ≤ j ≤ N/2.
• Good exercise: show that the modes of A^2h are NOT preserved by I_2h^h, but that the space W_k is preserved:

  I_2h^h w_k^2h = cos^2(kπ/2N) w_k^h − sin^2(kπ/2N) w_k′^h = c_k w_k^h − s_k w_k′^h

96 of 119

**Spectral properties of I_2h^h (cont.)**

  I_2h^h w_k^2h = c_k w_k^h − s_k w_k′^h

**• Interpolation of smooth modes on Ω^2h excites oscillatory modes on Ω^h.**
• Note that if k ≪ N/2,

  I_2h^h w_k^2h = (1 − O(k^2/N^2)) w_k^h + O(k^2/N^2) w_k′^h ≈ w_k^h

**• I_2h^h is second-order interpolation.**

97 of 119
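A quick numerical confirmation of this identity (N = 8 and k = 2 are illustrative choices):

```python
import math

# Linear interpolation maps w_k^2h to c_k w_k^h - s_k w_k'^h.
N, k = 8, 2
kp = N - k
c_k = math.cos(k*math.pi/(2*N))**2
s_k = math.sin(k*math.pi/(2*N))**2

coarse = [math.sin(j*k*math.pi/(N//2)) for j in range(N//2 + 1)]
fine = [0.0]*(N+1)
for j in range(N//2 + 1):
    fine[2*j] = coarse[j]
for j in range(N//2):
    fine[2*j+1] = 0.5*(coarse[j] + coarse[j+1])

err = max(abs(fine[m] - (c_k*math.sin(m*k*math.pi/N)
                         - s_k*math.sin(m*kp*math.pi/N))) for m in range(N+1))
print(err)    # zero up to rounding
```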

The range of I_2h^h

• The range of I_2h^h is the span of the columns of I_2h^h.
• Let ξ_i be the ith column of I_2h^h. Then

  ξ_i^h = Σ_{k=1}^{N−1} b_k w_k^h,   b_k ≠ 0

• All the Fourier modes of A^h are needed to represent Range(I_2h^h).

98 of 119

**Use all the facts to analyze the coarse-grid correction scheme**

1) Relax α times on Ω^h: v^h ← R^α v^h.
2) Compute and restrict the residual: f^2h ← I_h^2h (f^h − A^h v^h).
3) Solve the residual equation: v^2h = (A^2h)^−1 f^2h.
4) Correct the fine-grid solution: v^h ← v^h + I_2h^h v^2h.

**• The entire process appears as**

  v^h ← R^α v^h + I_2h^h (A^2h)^−1 I_h^2h (f^h − A^h R^α v^h)

• The exact solution satisfies

  u^h ← R^α u^h + I_2h^h (A^2h)^−1 I_h^2h (f^h − A^h R^α u^h)

99 of 119

**Error propagation of the coarse-grid correction scheme**

• Subtracting the previous two expressions, we get

  e^h ← [ I − I_2h^h (A^2h)^−1 I_h^2h A^h ] R^α e^h,

i.e., e^h ← CG e^h.

• How does CG act on the modes of A^h? Assume e^h consists of the modes w_k^h and w_k′^h for 1 ≤ k ≤ N/2 and k′ = N − k.
• We know how R^α, A^h, I_h^2h, (A^2h)^−1, and I_2h^h act on w_k^h and w_k′^h.

100 of 119

Error propagation of CG

• For now, assume no relaxation (α = 0). Then W_k = span{ w_k^h, w_k′^h } is invariant under CG:

  CG w_k^h = s_k w_k^h + s_k w_k′^h
  CG w_k′^h = c_k w_k^h + c_k w_k′^h

where c_k = cos^2(kπ/2N) and s_k = sin^2(kπ/2N).

101 of 119
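The action of CG on a smooth mode can be verified by assembling the operators explicitly (plain-Python helpers; N = 8 and k = 2 are illustrative choices, and the naive Gaussian-elimination solver merely stands in for (A^2h)^−1):

```python
import math

# Numeric check (no relaxation): CG w_k = s_k w_k + s_k w_k'.
N, h, k = 8, 1.0/8, 2
kp = N - k
s_k = math.sin(k*math.pi/(2*N))**2

Ah = [[(2 if i == j else -1 if abs(i - j) == 1 else 0)/h**2
       for j in range(N-1)] for i in range(N-1)]
P = [[0.0]*(N//2 - 1) for _ in range(N-1)]            # linear interpolation
for j in range(N//2 - 1):
    P[2*j][j], P[2*j+1][j], P[2*j+2][j] = 0.5, 1.0, 0.5
R = [[0.5*P[i][j] for i in range(N-1)]                # full weighting = (1/2) P^T
     for j in range(N//2 - 1)]

def matvec(A, v):
    return [sum(a*b for a, b in zip(row, v)) for row in A]

def matmul(A, B):
    return [[sum(A[i][t]*B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def solve(A, b):                                      # naive Gaussian elimination
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c+1, n):
            m = M[r][c]/M[c][c]
            for j in range(c, n+1):
                M[r][j] -= m*M[c][j]
    x = [0.0]*n
    for r in range(n-1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j]*x[j] for j in range(r+1, n)))/M[r][r]
    return x

A2h = matmul(R, matmul(Ah, P))                        # Galerkin coarse operator
wk  = [math.sin(j*k*math.pi/N)  for j in range(1, N)]
wkp = [math.sin(j*kp*math.pi/N) for j in range(1, N)]
cg = [a - b for a, b in zip(wk, matvec(P, solve(A2h, matvec(R, matvec(Ah, wk)))))]
err = max(abs(a - s_k*(b + c)) for a, b, c in zip(cg, wk, wkp))
print(err)                                            # zero up to rounding
```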

**CG error propagation for k ≪ N**

• Consider the case k ≪ N (extremely smooth and extremely oscillatory modes):

  w_k → O(k^2/N^2) w_k + O(k^2/N^2) w_k′
  w_k′ → (1 − O(k^2/N^2)) w_k + (1 − O(k^2/N^2)) w_k′

• Hence, CG eliminates the smooth modes but does not damp the oscillatory modes of the error!

102 of 119

**Now consider CG with relaxation**

• Next, include α relaxation sweeps. Assume that the relaxation R preserves the modes of A^h (although this is often unnecessary). Let λ_k denote the eigenvalue of R associated with w_k. For k ≪ N/2,

  w_k → λ_k^α s_k w_k + λ_k^α s_k w_k′        (both terms small: s_k is small)
  w_k′ → λ_k′^α c_k w_k + λ_k′^α c_k w_k′     (both terms small: λ_k′^α is small)

103 of 119

**The crucial observation:
**

• Between relaxation and the coarse-grid correction, both smooth and oscillatory components of the error are effectively damped.
• This is essentially the "spectral" picture of how multigrid works. We now examine another viewpoint, the "algebraic" picture of multigrid.

104 of 119

**Recall the variational properties**

• All the analysis that follows assumes that the variational properties hold:

  A^2h = I_h^2h A^h I_2h^h    (Galerkin condition)
  I_2h^h = c (I_h^2h)^T       for some c in ℝ

105 of 119

**Algebraic interpretation of coarse-grid correction**

• Consider the subspaces that make up Ω^h and Ω^2h.

[Diagram: Ω^h contains the subspaces R(I_2h^h) and N(I_h^2h); interpolation I_2h^h maps Ω^2h into Ω^h, and restriction I_h^2h maps Ω^h onto R(I_h^2h) = Ω^2h.]

• For the rest of this talk, 'R( )' refers to the range of a linear operator. From the fundamental theorem of linear algebra:

  N(I_h^2h) ⊥ R((I_h^2h)^T),   or   N(I_h^2h) ⊥ R(I_2h^h)

106 of 119

Subspace decomposition of Ω^h

**• Since A^h has full rank, we can say, equivalently,**

  R(I_2h^h) ⊥_(A^h) N(I_h^2h A^h)

(where x ⊥_(A^h) y means that ⟨A^h x, y⟩ = 0).

• Therefore, any e^h can be written as e^h = s^h + t^h, where s^h ∈ R(I_2h^h) and t^h ∈ N(I_h^2h A^h).
• Hence,

  Ω^h = R(I_2h^h) ⊕ N(I_h^2h A^h)

107 of 119

**Characteristics of the subspaces**

• Since s^h = I_2h^h q^2h for some q^2h ∈ Ω^2h, we associate s^h with the smooth components of e^h. But s^h generally has all Fourier modes in it (recall the columns of I_2h^h).
• Similarly, we associate t^h with the oscillatory components of e^h, although t^h generally has all Fourier modes in it as well. Recall that N(I_h^2h) is spanned by the η_i = A^h e_i^h; therefore N(I_h^2h A^h) is spanned by the unit vectors e_i = (0, 0, . . . , 0, 1, 0, . . . , 0)^T for odd i, which "look" oscillatory.

108 of 119

**Algebraic analysis of coarse-grid correction**

• Recall that (without relaxation)

  CG = I − I_2h^h (A^2h)^−1 I_h^2h A^h

• First note that if s^h ∈ R(I_2h^h), then CG s^h = 0. This follows since s^h = I_2h^h q^2h for some q^2h ∈ Ω^2h, and therefore

  CG s^h = [ I − I_2h^h (A^2h)^−1 I_h^2h A^h ] I_2h^h q^2h = I_2h^h q^2h − I_2h^h (A^2h)^−1 A^2h q^2h = 0,

using I_h^2h A^h I_2h^h = A^2h (the Galerkin property).

**• It follows that N(CG) = R(I_2h^h): the null space of coarse-grid correction is the range of interpolation.**
109 of 119


**More algebraic analysis of coarse-grid correction**

• Next, note that if t^h ∈ N(I_h^2h A^h), then I_h^2h A^h t^h = 0, so

  CG t^h = [ I − I_2h^h (A^2h)^−1 I_h^2h A^h ] t^h = t^h

• Therefore CG t^h = t^h: CG is the identity on N(I_h^2h A^h).

110 of 119

**How does the algebraic picture fit with the spectral view?**

• We may view Ω^h in two ways:

  Ω^h = [ low-frequency modes, 1 ≤ k ≤ N/2 ] ⊕ [ high-frequency modes, N/2 < k < N ],

that is, Ω^h = L ⊕ H; or as

  Ω^h = R(I_2h^h) ⊕ N(I_h^2h A^h)

111 of 119

**Actually, each view is just part of the picture**

• The operations we have examined work on different spaces!
• While N(I_h^2h A^h) is mostly oscillatory, it isn't H; and while R(I_2h^h) is mostly smooth, it isn't L.
• Relaxation eliminates error from H.
• Coarse-grid correction eliminates error from R(I_2h^h).

112 of 119

**How it actually works (cartoon)**

[Cartoon: axes L and H; the error e^h decomposes as s^h ∈ R(I_2h^h) (mostly smooth, near L) plus t^h ∈ N(I_h^2h A^h) (mostly oscillatory, near H).]

113 of 119

**Relaxation eliminates H, but increases the error in R(I_2h^h)**

[Cartoon: relaxation shrinks the component of e^h in H while the component in R(I_2h^h) grows slightly.]

114 of 119

**CG eliminates R(I_2h^h), but increases the error in H**

[Cartoon: coarse-grid correction removes the component of e^h in R(I_2h^h) while the component in H grows slightly.]

115 of 119

**Difficulties: anisotropic operators and grids**

• Consider the operator

  −α ∂²u/∂x² − β ∂²u/∂y² = f(x, y)

[Plot: GS smoothing factors in the x- and y-directions.]

• If α ≪ β, then the GS smoothing factors in the x- and y-directions differ sharply (shown at right): GS relaxation does not damp oscillatory components in the x-direction.
• The same phenomenon occurs for grids with much larger spacing in one direction than the other.

116 of 119

**Difficulties: discontinuous or anisotropic coefficients**

• Consider the operator −∇ · (D(x, y) ∇u), where

  D(x, y) = [ d11(x, y)  d12(x, y) ]
            [ d21(x, y)  d22(x, y) ]

• Again, the GS smoothing factors in the x- and y-directions can be highly variable, and very often GS relaxation does not damp oscillatory components in one or both directions.
• Solutions: line relaxation (where whole grid lines of values are found simultaneously) and/or semicoarsening (coarsening only in the strongly coupled direction).

117 of 119

**For nonlinear problems, use FAS (Full Approximation Scheme)**

• Suppose we wish to solve A(u) = f, where the operator is nonlinear. Then the linear residual equation A e = r does not apply.
• Instead, we write the nonlinear residual equation:

  A(u + e) − A(u) = r

**• This is transferred to the coarse grid (with u^2h = I_h^2h u^h) as:**

  A^2h(u^2h + e^2h) − A^2h(u^2h) = I_h^2h (f^h − A^h(u^h))

**• We solve for w^2h ≡ u^2h + e^2h and transfer the error (only!) to the fine grid:**

  u^h ← u^h + I_2h^h (w^2h − I_h^2h u^h)

118 of 119
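The scheme above can be sketched as a two-grid cycle for a concrete 1-D nonlinear problem, −u″ + u² = f with homogeneous Dirichlet conditions (the example problem, the pointwise Newton-Gauss-Seidel smoother, and the injection/full-weighting transfer choices are all illustrative, not from the deck):

```python
def A_apply(u, h):
    # nonlinear operator A(u) = -u'' + u**2, discretized; boundary rows are zero
    n = len(u) - 1
    return [0.0] + [(-u[j-1] + 2*u[j] - u[j+1])/h**2 + u[j]**2
                    for j in range(1, n)] + [0.0]

def smooth(u, f, h, sweeps):
    # pointwise Newton-Gauss-Seidel: one Newton step per nonlinear equation
    n = len(u) - 1
    for _ in range(sweeps):
        for j in range(1, n):
            F  = (-u[j-1] + 2*u[j] - u[j+1])/h**2 + u[j]**2 - f[j]
            dF = 2.0/h**2 + 2.0*u[j]
            u[j] -= F/dF
    return u

def restrict(v):                 # full weighting
    n2 = (len(v) - 1)//2
    return [0.0] + [0.25*v[2*j-1] + 0.5*v[2*j] + 0.25*v[2*j+1]
                    for j in range(1, n2)] + [0.0]

def inject(v):                   # u^2h = I_h^2h u^h by injection, for the FAS shift
    return [v[2*j] for j in range((len(v) - 1)//2 + 1)]

def interpolate(v2):             # linear interpolation
    n2 = len(v2) - 1
    v = [0.0]*(2*n2 + 1)
    for j in range(n2 + 1):
        v[2*j] = v2[j]
    for j in range(n2):
        v[2*j+1] = 0.5*(v2[j] + v2[j+1])
    return v

def fas_twogrid(u, f, h):
    u = smooth(u, f, h, 3)
    u2 = inject(u)                                       # coarse version of u^h
    r2 = restrict([a - b for a, b in zip(f, A_apply(u, h))])
    f2 = [a + r for a, r in zip(A_apply(u2, 2*h), r2)]   # A^2h(u^2h) + I_h^2h r^h
    w2 = smooth(u2[:], f2, 2*h, 200)                     # "solve" the coarse problem
    e  = interpolate([w - q for w, q in zip(w2, u2)])    # transfer the error only
    u  = [a + b for a, b in zip(u, e)]
    return smooth(u, f, h, 3)
```

Each cycle solves the coarse FAS equation for the full approximation w^2h = u^2h + e^2h, then interpolates only the difference w^2h − u^2h back to the fine grid, exactly as in the correction formula above.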

**Multigrid: increasingly, the right tool!
**

• Multigrid has been proven on a wide variety of problems, especially elliptic PDEs, but it has also found application to parabolic and hyperbolic PDEs, integral equations, evolution problems, geodesic problems, etc.
• With the right setup, multigrid is frequently an optimal (i.e., O(N)) solver.
• Multigrid is of great interest because it is one of the very few scalable algorithms, and it can be parallelized readily and efficiently!

119 of 119
