The Conjugate Gradient Method
Dragica Vasileska
Inner product of two vectors:
\[ x^T y = \sum_i x_i y_i \]
A quadratic form is a scalar function of a vector $x \in \mathbb{R}^n$:
\[ f(x) = \tfrac{1}{2} x^T A x - b^T x + c \]
with gradient
\[ f'(x) = \tfrac{1}{2} A^T x + \tfrac{1}{2} A x - b = A x - b \quad \text{(for symmetric } A\text{)}. \]
If $A$ is symmetric positive definite, $f(x)$ is minimized by the solution of $Ax = b$.
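The quadratic form and its gradient can be checked numerically. A minimal sketch in plain Python, using as an example the 2x2 system whose solution $[2,-2]^T$ appears later in these slides (the matrix entries are illustrative values consistent with that solution):

```python
# Quadratic form f(x) = 1/2 x^T A x - b^T x + c and its gradient
# f'(x) = Ax - b (valid when A is symmetric).

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def f(A, b, c, x):
    return 0.5 * dot(x, matvec(A, x)) - dot(b, x) + c

def grad(A, b, x):
    # f'(x) = Ax - b
    return [ai - bi for ai, bi in zip(matvec(A, x), b)]

A = [[3.0, 2.0], [2.0, 6.0]]   # example SPD matrix
b = [2.0, -8.0]
x_star = [2.0, -2.0]           # solution of Ax = b

fmin = f(A, b, 0.0, x_star)    # minimum value of f
g = grad(A, b, x_star)         # gradient at the minimizer
print(fmin, g)                 # -10.0 [0.0, 0.0]
```

At the minimizer the gradient $Ax - b$ vanishes, confirming that minimizing $f$ and solving $Ax = b$ are the same problem.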
[Figure: shapes of the quadratic form for a positive-definite matrix, a singular positive-indefinite matrix, a negative-definite matrix, and an indefinite matrix.]
Example:
\[ A = \begin{bmatrix} 3 & 2 \\ 2 & 6 \end{bmatrix}, \qquad b = \begin{bmatrix} 2 \\ -8 \end{bmatrix}, \qquad c = 0 \]
[Figures: contours of the quadratic form; gradient field of the quadratic form; convergence of the Jacobi method, which starts at $[-2,-2]^T$ and converges to $[2,-2]^T$.]
Steepest descent chooses the step length $\alpha$ that minimizes $f$ along the residual direction:
\[ \frac{d}{d\alpha} f(x_{(1)}) = f'(x_{(1)})^T \frac{d}{d\alpha} x_{(1)} = 0 \]
\[ r_{(1)}^T r_{(0)} = 0 \]
\[ (b - A x_{(1)})^T r_{(0)} = 0 \]
\[ \left(b - A (x_{(0)} + \alpha r_{(0)})\right)^T r_{(0)} = 0 \]
\[ r_{(0)}^T r_{(0)} - \alpha \left(A r_{(0)}\right)^T r_{(0)} = 0 \]
\[ \alpha = \frac{r_{(0)}^T r_{(0)}}{r_{(0)}^T A r_{(0)}} \]
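A single steepest-descent step can verify the orthogonality condition $r_{(1)}^T r_{(0)} = 0$ that fixes $\alpha$. A small pure-Python sketch (example matrix and starting point as in the slides' figures):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

A = [[3.0, 2.0], [2.0, 6.0]]
b = [2.0, -8.0]
x0 = [-2.0, -2.0]

r0 = [bi - ai for bi, ai in zip(b, matvec(A, x0))]   # r0 = b - A x0
Ar0 = matvec(A, r0)
alpha = dot(r0, r0) / dot(r0, Ar0)                   # alpha = r0^T r0 / (r0^T A r0)
x1 = [xi + alpha * ri for xi, ri in zip(x0, r0)]     # one steepest-descent step
r1 = [bi - ai for bi, ai in zip(b, matvec(A, x1))]   # new residual

ortho = dot(r1, r0)   # should vanish: successive residuals are orthogonal
print(ortho)
```

The new residual is orthogonal to the previous one, exactly as the line-search condition requires.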
For a general iteration:
\[ r_{(i)} = b - A x_{(i)}, \qquad \alpha_{(i)} = \frac{r_{(i)}^T r_{(i)}}{r_{(i)}^T A r_{(i)}} \]
Two matrix-vector multiplications are required per step.
The error vector $e_{(i)}$ is a linear combination of the eigenvectors of $A$. If all eigenvalues are the same, $\lambda_j = \lambda$:
\[ e_{(i)} = \sum_{j=1}^{n} \xi_j v_j, \qquad
   r_{(i)} = -A e_{(i)} = -\sum_{j=1}^{n} \xi_j \lambda_j v_j = -\lambda \sum_{j=1}^{n} \xi_j v_j \]
\[ \alpha_{(i)} = \frac{r_{(i)}^T r_{(i)}}{r_{(i)}^T A r_{(i)}} = \frac{1}{\lambda}, \qquad
   e_{(i+1)} = e_{(i)} + \alpha_{(i)} r_{(i)} = 0 \]
so steepest descent converges in a single step.
The complete method of steepest descent:
\[ r_{(i)} = b - A x_{(i)}, \qquad
   \alpha_{(i)} = \frac{r_{(i)}^T r_{(i)}}{r_{(i)}^T A r_{(i)}}, \qquad
   x_{(i+1)} = x_{(i)} + \alpha_{(i)} r_{(i)} \]
To avoid one matrix-vector multiplication, one uses instead the recurrence
\[ r_{(i+1)} = r_{(i)} - \alpha_{(i)} A r_{(i)} \]
In pseudocode:

    i <= 0
    r <= b - Ax
    delta <= r^T r
    delta0 <= delta
    While i < i_max and delta > eps^2 delta0
        q <= A r
        alpha <= delta / (r^T q)
        x <= x + alpha r
        If i is divisible by 50
            r <= b - Ax
        else
            r <= r - alpha q
        endif
        delta <= r^T r
        i <= i + 1
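The pseudocode above translates almost line for line into plain Python; a minimal sketch (the exact residual is recomputed every 50 steps, as in the slides, to limit drift from the recurrence):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def steepest_descent(A, b, x, i_max=10000, eps=1e-10):
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
    delta = dot(r, r)
    delta0 = delta
    i = 0
    while i < i_max and delta > eps * eps * delta0:
        q = matvec(A, r)
        alpha = delta / dot(r, q)
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
        if i % 50 == 49:
            # periodic exact residual, as in the pseudocode
            r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
        else:
            r = [ri - alpha * qi for ri, qi in zip(r, q)]
        delta = dot(r, r)
        i += 1
    return x, i

x, iters = steepest_descent([[3.0, 2.0], [2.0, 6.0]], [2.0, -8.0], [-2.0, -2.0])
print(x, iters)   # converges toward the solution [2, -2]
```

Only one matrix-vector product per iteration is needed, versus two for the naive version that recomputes r = b - Ax every step.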
Convergence in the energy norm $\|e\|_A = \left(e^T A e\right)^{1/2}$:
\[ \|e_{(i+1)}\|_A^2 = \|e_{(i)}\|_A^2 \, \omega^2, \qquad
   \omega^2 = 1 - \frac{\left(\sum_j \xi_j^2 \lambda_j^2\right)^2}
                     {\left(\sum_j \xi_j^2 \lambda_j\right)\left(\sum_j \xi_j^2 \lambda_j^3\right)} \]
Condition number: $\kappa = \lambda_{\max}/\lambda_{\min}$; with $\mu = \xi_2/\xi_1$, the worst case is $\mu = \pm\kappa$, which gives
\[ \omega = \frac{\kappa - 1}{\kappa + 1}. \]
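The worst-case bound can be checked numerically by scanning over $\mu$. A small sketch with two eigenvalues $\lambda_1 = \kappa$, $\lambda_2 = 1$ and $\kappa = 10$ (values chosen only for illustration):

```python
kappa = 10.0
lam = [kappa, 1.0]   # two eigenvalues, condition number kappa

def omega2(mu):
    # omega^2 = 1 - (sum xi^2 lam^2)^2 / ((sum xi^2 lam)(sum xi^2 lam^3))
    xi = [1.0, mu]
    s1 = sum(x * x * l for x, l in zip(xi, lam))
    s2 = sum(x * x * l * l for x, l in zip(xi, lam))
    s3 = sum(x * x * l ** 3 for x, l in zip(xi, lam))
    return 1.0 - s2 * s2 / (s1 * s3)

# scan mu in [0, 20]; the maximum occurs at mu = kappa
worst = max(omega2(i / 100.0) for i in range(0, 2001))
bound = ((kappa - 1.0) / (kappa + 1.0)) ** 2
print(worst, bound)
```

The scanned maximum of $\omega^2$ matches $((\kappa-1)/(\kappa+1))^2$, confirming that the worst-case rate depends only on the condition number.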
Line search along a general direction $d_{(i)}$. Requiring the new error to be orthogonal to the search direction,
\[ d_{(i)}^T e_{(i+1)} = 0, \qquad
   d_{(i)}^T \left( e_{(i)} + \alpha_{(i)} d_{(i)} \right) = 0
   \;\Rightarrow\;
   \alpha_{(i)} = -\frac{d_{(i)}^T e_{(i)}}{d_{(i)}^T d_{(i)}} \]
is unusable, because $e_{(i)}$ is unknown. Requiring $A$-orthogonality instead,
\[ d_{(i)}^T A e_{(i+1)} = 0, \qquad
   d_{(i)}^T A \left( e_{(i)} + \alpha_{(i)} d_{(i)} \right) = 0
   \;\Rightarrow\;
   \alpha_{(i)} = \frac{d_{(i)}^T r_{(i)}}{d_{(i)}^T A d_{(i)}} \]
is computable, since $r_{(i)} = -A e_{(i)}$.
Expand the initial error in the conjugate directions:
\[ e_{(0)} = \sum_{j=0}^{n-1} \delta_{(j)} d_{(j)}, \qquad
   \delta_{(j)} = \frac{d_{(j)}^T A e_{(0)}}{d_{(j)}^T A d_{(j)}} \]
The directions are built by conjugate Gram-Schmidt from linearly independent vectors $u_{(i)}$:
\[ d_{(i)} = u_{(i)} + \sum_{j=0}^{i-1} \beta_{ij} d_{(j)}, \qquad
   \beta_{ij} = -\frac{u_{(i)}^T A d_{(j)}}{d_{(j)}^T A d_{(j)}} \quad (i > j) \]
Choosing $u_{(i)} = r_{(i)}$ makes all but one coefficient vanish, leaving
\[ \beta_{(i+1)} = \frac{r_{(i+1)}^T r_{(i+1)}}{r_{(i)}^T r_{(i)}} \]
The conjugate gradient algorithm:

    i <= 0
    r <= b - Ax
    d <= r
    delta_new <= r^T r
    delta0 <= delta_new
    While i < i_max and delta_new > eps^2 delta0
        q <= A d
        alpha <= delta_new / (d^T q)
        x <= x + alpha d
        If i is divisible by 50
            r <= b - Ax
        else
            r <= r - alpha q
        endif
        delta_old <= delta_new
        delta_new <= r^T r
        beta <= delta_new / delta_old
        d <= r + beta d
        i <= i + 1
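A direct transcription of this pseudocode in plain Python (the divisible-by-50 residual recomputation kept as in the slides):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def conjugate_gradient(A, b, x, i_max=1000, eps=1e-10):
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
    d = r[:]                       # first search direction is the residual
    delta_new = dot(r, r)
    delta0 = delta_new
    i = 0
    while i < i_max and delta_new > eps * eps * delta0:
        q = matvec(A, d)
        alpha = delta_new / dot(d, q)
        x = [xi + alpha * di for xi, di in zip(x, d)]
        if i % 50 == 49:
            r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
        else:
            r = [ri - alpha * qi for ri, qi in zip(r, q)]
        delta_old = delta_new
        delta_new = dot(r, r)
        beta = delta_new / delta_old
        d = [ri + beta * di for ri, di in zip(r, d)]
        i += 1
    return x, i

x, iters = conjugate_gradient([[3.0, 2.0], [2.0, 6.0]], [2.0, -8.0], [0.0, 0.0])
print(x, iters)   # exact in at most n = 2 iterations, up to round-off
```

In exact arithmetic CG terminates in at most $n$ iterations; for this 2x2 example it reaches the solution in two steps.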
[Diagram: solver taxonomy for Ax = b. Direct methods: LU (with pivoting), Cholesky; iterative methods (of the form y = Ay): GMRES, BiCGSTAB, conjugate gradient. Direct methods are more robust and more general; iterative methods require less storage (if the matrix is sparse).]
CG quantities: step length $\alpha$, approximate solution $x$, residual $r$, improvement $\beta$, search direction $d$. Convergence in the energy norm:
\[ \| x - x_k \|_A \le 2 \, \| x - x_0 \|_A
   \left( \frac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1} \right)^{k} \]
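A numeric check of this bound after one CG step on a 2x2 example system (eigenvalues 7 and 2, so $\kappa(A) = 3.5$; matrix values illustrative):

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def energy_norm(A, e):
    # ||e||_A = sqrt(e^T A e)
    return math.sqrt(dot(e, matvec(A, e)))

A = [[3.0, 2.0], [2.0, 6.0]]
b = [2.0, -8.0]
x_star = [2.0, -2.0]
kappa = 7.0 / 2.0               # eigenvalues of A are 7 and 2

# one CG step from x0 = 0 (the first CG step equals a steepest-descent step)
x0 = [0.0, 0.0]
r = b[:]                        # r = b - A*0
alpha = dot(r, r) / dot(r, matvec(A, r))
x1 = [alpha * ri for ri in r]

e0 = [xs - xi for xs, xi in zip(x_star, x0)]
e1 = [xs - xi for xs, xi in zip(x_star, x1)]
lhs = energy_norm(A, e1)
rhs = 2.0 * energy_norm(A, e0) * (math.sqrt(kappa) - 1) / (math.sqrt(kappa) + 1)
print(lhs, rhs)   # lhs <= rhs: the bound holds at k = 1
```

The actual energy-norm error after one step sits below the k = 1 value of the bound, as expected.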
9. Preconditioning Techniques
References:
J.A. Meijerink and H.A. van der Vorst, "An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix," Mathematics of Computation, Vol. 31, No. 137, pp. 148-162 (1977).
J.A. Meijerink and H.A. van der Vorst, "Guidelines for the usage of incomplete decompositions in solving sets of linear equations as they occur in practical problems," Journal of Computational Physics, Vol. 44, pp. 134-155 (1981).
D.S. Kershaw, "The incomplete Cholesky-conjugate gradient method for the iterative solution of systems of linear equations," Journal of Computational Physics, Vol. 26, pp. 43-65 (1978).
H.A. van der Vorst, "High performance preconditioning," SIAM Journal on Scientific and Statistical Computing, Vol. 10, No. 6, pp. 1174-1185 (1989).
Preconditioned CG applied to the transformed system $\left(E^{-1} A E^{-T}\right)\tilde{x} = E^{-1} b$, where $M = E E^T$ and $\tilde{x} = E^T x$:

    i <= 0
    r~ <= E^-1 b - (E^-1 A E^-T) x~
    d~ <= r~
    delta_new <= r~^T r~
    delta0 <= delta_new
    While i < i_max and delta_new > eps^2 delta0
        q <= (E^-1 A E^-T) d~
        alpha <= delta_new / (d~^T q)
        x~ <= x~ + alpha d~
        If i is divisible by 50
            r~ <= E^-1 b - (E^-1 A E^-T) x~
        else
            r~ <= r~ - alpha q
        endif
        delta_old <= delta_new
        delta_new <= r~^T r~
        beta <= delta_new / delta_old
        d~ <= r~ + beta d~
        i <= i + 1
The untransformed preconditioned CG, which needs only $M^{-1}$:

    i <= 0
    r <= b - Ax
    d <= M^-1 r
    delta_new <= r^T d
    delta0 <= delta_new
    While i < i_max and delta_new > eps^2 delta0
        q <= A d
        alpha <= delta_new / (d^T q)
        x <= x + alpha d
        r <= b - Ax  or  r <= r - alpha q
        s <= M^-1 r
        delta_old <= delta_new
        delta_new <= r^T s
        beta <= delta_new / delta_old
        d <= s + beta d
        i <= i + 1
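A sketch of this untransformed algorithm with the simplest preconditioner, $M = \mathrm{diag}(A)$ (Jacobi preconditioning); pure Python, example matrix as before:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def pcg_jacobi(A, b, x, i_max=1000, eps=1e-10):
    minv = [1.0 / A[i][i] for i in range(len(A))]   # M^-1 for M = diag(A)
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
    d = [mi * ri for mi, ri in zip(minv, r)]        # d = M^-1 r
    delta_new = dot(r, d)
    delta0 = delta_new
    i = 0
    while i < i_max and delta_new > eps * eps * delta0:
        q = matvec(A, d)
        alpha = delta_new / dot(d, q)
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        s = [mi * ri for mi, ri in zip(minv, r)]    # s = M^-1 r
        delta_old = delta_new
        delta_new = dot(r, s)
        beta = delta_new / delta_old
        d = [si + beta * di for si, di in zip(s, d)]
        i += 1
    return x, i

x, iters = pcg_jacobi([[3.0, 2.0], [2.0, 6.0]], [2.0, -8.0], [0.0, 0.0])
print(x, iters)
```

Only the applications of $M^{-1}$ distinguish this loop from plain CG, which is why a cheap $M^{-1}$ (here a diagonal scaling) adds almost no cost per iteration.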
Tridiagonal Preconditioning
Block Diagonal Preconditioning
Cholesky factorization $A = L L^T$:
\[ L_{ii} = \left( A_{ii} - \sum_{k=1}^{i-1} L_{ik}^2 \right)^{1/2} \]
\[ L_{ji} = \frac{A_{ji} - \sum_{k=1}^{i-1} L_{jk} L_{ik}}{L_{ii}}, \qquad j = i+1, i+2, \ldots, n \]
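These formulas implement directly. A sketch that factors a small SPD matrix (example values) and verifies $L L^T = A$:

```python
import math

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # L_ii = sqrt(A_ii - sum_{k<i} L_ik^2)
        L[i][i] = math.sqrt(A[i][i] - sum(L[i][k] ** 2 for k in range(i)))
        for j in range(i + 1, n):
            # L_ji = (A_ji - sum_{k<i} L_jk L_ik) / L_ii
            L[j][i] = (A[j][i] - sum(L[j][k] * L[i][k] for k in range(i))) / L[i][i]
    return L

A = [[4.0, 2.0, 0.0],
     [2.0, 3.0, 1.0],
     [0.0, 1.0, 5.0]]
L = cholesky(A)

# reconstruct A from the factor and measure the worst entry-wise error
n = len(A)
R = [[sum(L[i][k] * L[j][k] for k in range(n)) for j in range(n)] for i in range(n)]
err = max(abs(R[i][j] - A[i][j]) for i in range(n) for j in range(n))
print(err)   # ~0
```

Incomplete Cholesky preconditioners use the same recurrences but simply skip (or lump) entries outside a prescribed sparsity pattern.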
Modified Cholesky factorization: $A = L D L^T$. For a matrix with main diagonal $a_i$ and off-diagonals $b_i$ (offset 1) and $c_i$ (offset $m$), the incomplete factors keep the off-diagonals unchanged,
\[ \tilde{b}_i = b_i, \qquad \tilde{c}_i = c_i, \qquad i = 1, 2, \ldots, n, \]
and store only the modified diagonal, $\tilde{a}_i = d_i^{-1}$.

Modified ICCG(0): $M = (L + D)\, D^{-1} (D + L^T)$, where $L$ is the strict lower triangle of $A$ and $D = \mathrm{diag}(d_i)$ is chosen so that $\mathrm{diag}(A) = \mathrm{diag}(M)$:
\[ d_i = a_i - \frac{b_{i-1}^2}{d_{i-1}} - \frac{c_{i-m}^2}{d_{i-m}} \]
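For a purely tridiagonal matrix the recurrence reduces to $d_i = a_i - b_{i-1}^2 / d_{i-1}$. A sketch (example diagonals) that builds $M = (L+D)D^{-1}(D+L^T)$ as a dense matrix and checks $\mathrm{diag}(M) = \mathrm{diag}(A)$:

```python
n = 5
a = [4.0] * n          # main diagonal of A (example values)
b = [1.0] * (n - 1)    # sub/superdiagonal of A

# d_i = a_i - b_{i-1}^2 / d_{i-1}  (tridiagonal case of the ICCG(0) recurrence)
d = [0.0] * n
d[0] = a[0]
for i in range(1, n):
    d[i] = a[i] - b[i - 1] ** 2 / d[i - 1]

# build (L + D), D^-1, and (D + L^T) as dense matrices
LD = [[0.0] * n for _ in range(n)]
for i in range(n):
    LD[i][i] = d[i]
    if i > 0:
        LD[i][i - 1] = b[i - 1]          # strict lower triangle of A
Dinv = [[0.0] * n for _ in range(n)]
for i in range(n):
    Dinv[i][i] = 1.0 / d[i]
DU = [[LD[j][i] for j in range(n)] for i in range(n)]   # (D + L^T) = (L + D)^T

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M = matmul(matmul(LD, Dinv), DU)
diag_err = max(abs(M[i][i] - a[i]) for i in range(n))
print(diag_err)   # diag(M) matches diag(A)
```

Row $i$ of the product contributes $d_i + b_{i-1}^2/d_{i-1}$ to the diagonal, which the recurrence makes exactly $a_i$.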
Polynomial (Neumann series) preconditioning. Writing $A = D(I + J)$ with $J = D^{-1}(L + U)$:
\[ (I + J)^{-1} = \sum_{k=0}^{\infty} (-J)^k = I - J + J^2 - J^3 + \cdots \]
Truncating the series after $m + 1$ terms gives the preconditioner
\[ M_m^{-1} = \sum_{k=0}^{m} (-1)^k \left[ D^{-1} (L + U) \right]^k D^{-1} \]
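A sketch checking that the truncated series improves with $m$: since $M_m^{-1} A = I + (-1)^m J^{m+1}$, the deviation $\lVert I - M_m^{-1} A \rVert$ shrinks whenever $\lVert J \rVert < 1$, e.g. for a diagonally dominant example matrix:

```python
n = 3
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]   # diagonally dominant example

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def frob(X):
    return sum(x * x for row in X for x in row) ** 0.5

I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
Dinv = [[(1.0 / A[i][i]) if i == j else 0.0 for j in range(n)] for i in range(n)]
# J = D^-1 (L + U): off-diagonal part of A scaled by the diagonal
J = [[(A[i][j] / A[i][i]) if i != j else 0.0 for j in range(n)] for i in range(n)]

norms = []
for m in range(4):
    # M_m^-1 = (sum_{k=0}^{m} (-J)^k) D^-1
    S = [row[:] for row in I]
    Jk = [row[:] for row in I]
    for k in range(1, m + 1):
        Jk = matmul(Jk, J)
        sign = -1.0 if k % 2 else 1.0
        S = [[S[i][j] + sign * Jk[i][j] for j in range(n)] for i in range(n)]
    Minv = matmul(S, Dinv)
    E = matmul(Minv, A)
    norms.append(frob([[E[i][j] - I[i][j] for j in range(n)] for i in range(n)]))
print(norms)   # strictly decreasing with m
```

Each extra term multiplies the deviation by another factor of $J$, so the preconditioned matrix approaches the identity geometrically.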
For a block (saddle-point-type) system $Q z = f$:
\[
\begin{bmatrix} A_1 & 0 & B_1 \\ 0 & A_2 & B_2 \\ B_1^T & B_2^T & S \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix}
\]
a block-diagonal preconditioner is used:
\[
M = \begin{bmatrix} M_1 & 0 & 0 \\ 0 & M_2 & 0 \\ 0 & 0 & S^* \end{bmatrix} = L L^T, \qquad
L = \begin{bmatrix} L_1 & 0 & 0 \\ 0 & L_2 & 0 \\ 0 & 0 & L_3 \end{bmatrix}
\]
with $M_1 \approx A_1$, $M_2 \approx A_2$, and the approximate Schur complement
\[ S^* = S - B_1^T M_1^{-1} B_1 - B_2^T M_2^{-1} B_2 . \]
The biconjugate gradient (BCG) algorithm was proposed by Lanczos in 1954. It solves not only the original system $Ax = b$ but also the dual system $A^T x^* = b^*$. The search direction $p_j$ does not minimize the residual, so convergence can be irregular.

    i <= 0
    r <= b - Ax, r* arbitrary
    p <= r, p* <= r*
    delta_new <= r*^T r
    delta0 <= delta_new
    While i < i_max and delta_new > eps^2 delta0
        q <= A p
        q* <= A^T p*
        alpha <= delta_new / (p*^T q)
        x <= x + alpha p
        r <= r - alpha q
        r* <= r* - alpha q*
        delta_old <= delta_new
        delta_new <= r*^T r
        beta <= delta_new / delta_old
        p <= r + beta p
        p* <= r* + beta p*
        i <= i + 1
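A sketch of BiCG on a small nonsymmetric test system (matrix values illustrative). The shadow residual $r^*$ is arbitrary; here it is chosen as $[1, 0]$, and a guard against the $\rho \to 0$ breakdown is included:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def bicg(A, b, x, r_star, i_max=50, tol=1e-12):
    At = transpose(A)
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
    rs = r_star[:]
    p, ps = r[:], rs[:]
    rho = dot(rs, r)
    i = 0
    while i < i_max and dot(r, r) > tol * tol:
        if abs(rho) < 1e-300:
            break                      # serious breakdown: rho vanished
        q = matvec(A, p)
        qs = matvec(At, ps)            # dual system uses A^T
        alpha = rho / dot(ps, q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        rs = [ri - alpha * qi for ri, qi in zip(rs, qs)]
        rho_new = dot(rs, r)
        beta = rho_new / rho
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        ps = [ri + beta * pi for ri, pi in zip(rs, ps)]
        rho = rho_new
        i += 1
    return x, i

A = [[4.0, 1.0], [2.0, 5.0]]           # nonsymmetric example
b = [1.0, 1.0]
x, iters = bicg(A, b, [0.0, 0.0], r_star=[1.0, 0.0])
res = [bi - ai for bi, ai in zip(b, matvec(A, x))]
print(x, iters)
```

Unlike CG, each iteration needs products with both $A$ and $A^T$, and a poor choice of $r^*$ can trigger breakdown, which is what motivates the transpose-free variants below.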
Conjugate gradient squared (CGS):

    i <= 0, r0 <= b - A x0, r0* is arbitrary
    p <= r0, u <= r0
    delta_new <= r0*^T r, delta0 <= delta_new
    While i < i_max and delta_new > eps^2 delta0
        alpha <= delta_new / (r0*^T A p)
        q <= u - alpha A p
        x <= x + alpha (u + q)
        r <= r - alpha A (u + q)
        delta_old <= delta_new
        delta_new <= r0*^T r
        beta <= delta_new / delta_old
        u <= r + beta q
        p <= u + beta (q + beta p)
        i <= i + 1
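A sketch of CGS on the same kind of small nonsymmetric test system (values illustrative; the arbitrary $r_0^*$ is again taken as $[1, 0]$, with a breakdown guard):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def cgs(A, b, x, r_star, i_max=50, tol=1e-12):
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
    p, u = r[:], r[:]
    rho = dot(r_star, r)
    i = 0
    while i < i_max and dot(r, r) > tol * tol:
        if abs(rho) < 1e-300:
            break                                    # breakdown guard
        v = matvec(A, p)
        alpha = rho / dot(r_star, v)
        q = [ui - alpha * vi for ui, vi in zip(u, v)]
        w = [ui + qi for ui, qi in zip(u, q)]        # w = u + q
        x = [xi + alpha * wi for xi, wi in zip(x, w)]
        Aw = matvec(A, w)
        r = [ri - alpha * awi for ri, awi in zip(r, Aw)]
        rho_new = dot(r_star, r)
        beta = rho_new / rho
        u = [ri + beta * qi for ri, qi in zip(r, q)]
        p = [ui + beta * (qi + beta * pi) for ui, qi, pi in zip(u, q, p)]
        rho = rho_new
        i += 1
    return x, i

A = [[4.0, 1.0], [2.0, 5.0]]
b = [1.0, 1.0]
x, iters = cgs(A, b, [0.0, 0.0], r_star=[1.0, 0.0])
res = [bi - ai for bi, ai in zip(b, matvec(A, x))]
print(x, iters)
```

CGS avoids $A^T$ entirely by squaring the BiCG residual polynomial, at the cost of more erratic intermediate residuals.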
BiCGSTAB:

    i <= 0, r0 <= b - A x0, r0* is arbitrary
    p <= r0
    delta_new <= r0*^T r, delta0 <= delta_new
    While i < i_max and delta_new > eps^2 delta0
        alpha <= delta_new / (r0*^T A p)
        s <= r - alpha A p
        omega <= (s^T A s) / ((A s)^T A s)
        x <= x + alpha p + omega s
        r <= s - omega A s
        delta_old <= delta_new
        delta_new <= r0*^T r
        beta <= (delta_new / delta_old) (alpha / omega)
        p <= r + beta (p - omega A p)
        i <= i + 1
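A sketch of BiCGSTAB on the same small nonsymmetric test system (values illustrative; the shadow residual is set to $r_0$, a common default, and guards for the two breakdown cases are included):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def bicgstab(A, b, x, i_max=50, tol=1e-12):
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
    r_star = r[:]                      # shadow residual: arbitrary, here r0
    p = r[:]
    rho = dot(r_star, r)
    i = 0
    while i < i_max and dot(r, r) > tol * tol:
        if abs(rho) < 1e-300:
            break                      # breakdown guard
        v = matvec(A, p)
        alpha = rho / dot(r_star, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        t = matvec(A, s)
        tt = dot(t, t)
        if tt == 0.0:                  # s == 0: x + alpha p is already exact
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            r = s
            break
        omega = dot(t, s) / tt         # stabilizing one-dimensional minimization
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        rho_new = dot(r_star, r)
        beta = (rho_new / rho) * (alpha / omega)
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        rho = rho_new
        i += 1
    return x, i

A = [[4.0, 1.0], [2.0, 5.0]]
b = [1.0, 1.0]
x, iters = bicgstab(A, b, [0.0, 0.0])
res = [bi - ai for bi, ai in zip(b, matvec(A, x))]
print(x, iters)
```

The extra $\omega$ step performs a local residual minimization each iteration, which smooths the erratic convergence of BiCG and CGS.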
Complexity for the model problem on a 2D grid ($n^{1/2} \times n^{1/2}$) versus a 3D grid ($n^{1/3} \times n^{1/3} \times n^{1/3}$):

                              2D           3D
    Sparse Cholesky:          O(n^1.5)     O(n^2)
    CG, exact arithmetic:     O(n^2)       O(n^2)
    CG, no preconditioning:   O(n^1.5)     O(n^1.33)
    CG, modified IC:          O(n^1.25)    O(n^1.17)
    Multigrid:                O(n)         O(n)