
SIAM J. NUMER. ANAL.
Vol. 22, No. 5, October 1985

© 1985 Society for Industrial and Applied Mathematics
006

Downloaded 07/30/12 to 190.43.2.193. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php

RESIDUAL INVERSE ITERATION FOR THE NONLINEAR EIGENVALUE PROBLEM*

A. NEUMAIER

Abstract. For the nonlinear eigenvalue problem A(λ)x = 0, residual inverse iteration with shift σ is defined by

x^(l+1) := const. · (x^(l) − A(σ)^{-1} A(λ_{l+1}) x^(l)),

where A(·) is a matrix-valued operator and λ_{l+1} is an appropriate approximation of the eigenvalue λ̂. In the linear case, A(λ) = A − λI, this is theoretically equivalent to ordinary inverse iteration, but the residual formulation results in a considerably higher limit accuracy when the residual A(λ_{l+1})x^(l) = Ax^(l) − λ_{l+1}x^(l) is accumulated in double precision. In the nonlinear case, if σ is sufficiently close to λ̂, convergence is at least linear with convergence factor proportional to |σ − λ̂|. As with ordinary inverse iteration, the convergence can be accelerated by using variable shifts.


1. Introduction. Inverse iteration is generally considered as one of the standard methods for the computation of selected eigenpairs of a linear eigenvalue problem. Although it is best analyzed in terms of eigenvector expansions (see e.g. Wilkinson [11]), local convergence can also be proved by deriving inverse iteration from Newton's method applied to a suitable equivalent system of nonlinear equations (Unger [10]). To treat the more general nonlinear eigenvalue problem, this latter approach can be generalized and leads to a nonlinear version of inverse iteration for the eigenvalue problem A(λ)x = 0, x ≠ 0, namely

(1)  y^(l) = A(λ_l)^{-1} A′(λ_l) x^(l),  x^(l+1) = y^(l)/(e^* y^(l)),  λ_{l+1} = λ_l − 1/(e^* y^(l)).

The numerical behaviour of this and other related methods is discussed in the survey by Ruhe [6]. An essential disadvantage of these methods is the fact that in each step the coefficient matrix A(λ_l) of the linear system for y^(l) changes; and, in contrast to the linear case, working with a fixed "shift" σ instead of λ_l results in convergence to the wrong problem, namely to a solution of the linearized problem

A(λ*)x = (λ* − σ) A′(λ*) x.

In the following, we circumvent this difficulty by considering a variant of inverse iteration based on the use of the residual. To motivate the new approach we rearrange (1) such that

x^(l+1) = x^(l) − dx^(l)

is computed from x^(l) by subtracting the correction term

dx^(l) = x^(l) − x^(l+1) = x^(l) + (λ_{l+1} − λ_l) y^(l)
      = x^(l) + (λ_{l+1} − λ_l) A(λ_l)^{-1} A′(λ_l) x^(l)
      = A(λ_l)^{-1} (A(λ_l) + (λ_{l+1} − λ_l) A′(λ_l)) x^(l)
      = A(λ_l)^{-1} A(λ_{l+1}) x^(l) + O((λ_{l+1} − λ_l)²),

if A(λ) is twice continuously differentiable. By neglecting the error term we obtain for x^(l+1) the expression

(2)  x^(l+1) = x^(l) − A(λ_l)^{-1} A(λ_{l+1}) x^(l),

* Received by the editors February 21, 1984, and in revised form September 7, 1984.
† Institut für Angewandte Mathematik, Universität Freiburg, D-7800 Freiburg, West Germany.
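The expansion above can be checked numerically. The sketch below (an illustration with an assumed random quadratic matrix polynomial, not an example from the paper; all names are mine) verifies that the identity behind the third equality is exact, and that the term neglected in (2) is indeed O((λ_{l+1} − λ_l)²): halving the step quarters the error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# Assumed quadratic matrix polynomial A(lam) = A0 + lam*A1 + lam^2*A2.
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

def A(lam):
    return A0 + lam * A1 + lam**2 * A2

def dA(lam):                     # the derivative A'(lam)
    return A1 + 2 * lam * A2

lam_l = 0.3
x = rng.standard_normal(n)
y = np.linalg.solve(A(lam_l), dA(lam_l) @ x)   # y = A(lam_l)^{-1} A'(lam_l) x

def chain_error(delta):
    """|| A(lam_l)^{-1} A(lam_l + delta) x - (x + delta*y) ||, i.e. the size
    of the O(delta^2) term neglected in (2)."""
    lhs = np.linalg.solve(A(lam_l), A(lam_l + delta) @ x)
    return np.linalg.norm(lhs - (x + delta * y))

# Exact identity: x + delta*y == A(lam_l)^{-1}(A(lam_l) + delta*A'(lam_l)) x.
delta = 1e-3
exact = np.linalg.solve(A(lam_l), (A(lam_l) + delta * dA(lam_l)) @ x)
print(np.allclose(exact, x + delta * y))       # True

# The neglected term is O(delta^2): halving delta quarters the error.
ratio = chain_error(delta) / chain_error(delta / 2)
print(ratio)                                   # ≈ 4
```

For a quadratic polynomial the neglected term is exactly δ²·A(λ_l)^{-1}A₂x, so the ratio is 4 up to rounding; for a general twice continuously differentiable A(λ) it tends to 4 as δ → 0.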


where the new approximation λ_{l+1} for the eigenvalue now has to be determined beforehand (we shall use a generalized Rayleigh quotient). It turns out that in this formulation λ_l may be replaced by a constant "shift" σ without destroying convergence to the wanted eigenpair. The new algorithm, called residual inverse iteration, thus computes the new approximation x^(l+1) by applying to x^(l) a correction term computed from the residual A(λ)x^(l) for a suitable λ, and can be used either with a fixed shift σ or with variable shifts.

In case that A(λ) = A − λI, residual inverse iteration is equivalent to ordinary inverse iteration with shift σ (in the absence of rounding errors). Hence in the presence of rounding errors, residual inverse iteration with double precision accumulation of the residuals gives about the same limit accuracy as one would get with ordinary inverse iteration only when the complete iteration is performed in double precision. In particular, for linear problems, residual inverse iteration can be profitably used to refine eigenvalue approximations obtained from the QR or QZ algorithm (see e.g. Stewart [8]), since the single precision partial factorization available from the QR or QZ algorithm can be reused to save factorization time. Thus residual inverse iteration provides a simple alternative to some refinement procedures proposed in the literature ([1], [2], [9], [12]), and has the advantage of preserving the structure of A and not requiring an initial eigenvector approximation.

But in the nonlinear case, residual inverse iteration is no longer strictly equivalent to (1). For a fixed shift, local convergence is at least linear with a convergence factor proportional to the distance of σ to the nearest eigenvalue λ̂ (provided that λ̂ is simple and isolated). Double precision computation of the residuals again leads (in well-conditioned cases) to results which are correct to almost double precision.

The paper is organized as follows. In §2, residual inverse iteration is defined for fixed shift. Section 3 gives the local convergence proof, with some remarks on the convergence behaviour in case of variable shifts. In §4, we comment on the practical realization and demonstrate the behaviour of the algorithm with three examples: the Frank matrix of order 11 and two definite quadratic eigenvalue problems of Scott and Ward [7].

We use the notation C^{n×n} for the set of complex square n × n matrices, denote conjugate transposition by an asterisk *, and use ‖·‖ for an arbitrary vector norm.

2. Residual inverse iteration. We consider the finite-dimensional nonlinear eigenvalue problem

(3)  A(λ̂)x̂ = 0,  x̂ ∈ C^n \ {0},  λ̂ ∈ D ⊆ C,

where A: D → C^{n×n} is a continuous matrix-valued map. We suppose that an approximation σ ∈ D to λ̂ is known, that A(σ) is nonsingular, and that e is a normalization vector such that

(4)  e^* x̂ = 1;

usually e will be the unit vector with a 1 in the position of the largest entry of x̂. We suggest the following iteration for the approximation of a solution of (3).

Step 1. Put l := 0, and compute an initial approximation x^(0) to x̂ as the normalized solution of the equation

(5)  A(σ) x̃^(0) = b,  x^(0) := x̃^(0)/(e^* x̃^(0));

the vector b ≠ 0 has to be chosen suitably (see §4).

Step 2. Compute an improved approximation λ_{l+1} to λ̂ by solving one of the equations

(6a)  x^(l)* A(λ_{l+1}) x^(l) = 0,
(6b)  e^* A(σ)^{-1} A(λ_{l+1}) x^(l) = 0.

The root closest to λ_l is accepted as λ_{l+1}. Formula (6a) is appropriate only when A(λ) is Hermitian and λ̂ is real; otherwise, (6b) has to be used.

Step 3. Compute the residual

(7)  r^(l) := A(λ_{l+1}) x^(l).

Step 4. Compute an improved approximation x^(l+1) to x̂ by solving the equation

(8)  A(σ) dx^(l) = r^(l)

and normalizing the vector

(9)  x̃^(l+1) := x^(l) − dx^(l),  x^(l+1) := x̃^(l+1)/(e^* x̃^(l+1)).

Step 5. Increase l by one and return to Step 2.

In the special case A(λ) = A − λI, residual inverse iteration is again theoretically equivalent to ordinary inverse iteration with shift σ; indeed we then have

(A − σI) x̃^(l+1) = (A − σI) x^(l) − (A − σI) dx^(l) = (A − σI) x^(l) − (A − λ_{l+1} I) x^(l) = (λ_{l+1} − σ) x^(l),

so that x̃^(l+1), and hence x^(l+1), is parallel to (A − σI)^{-1} x^(l). Thus we can hope that in the more general situation discussed above, some of the excellent convergence properties of inverse iteration (as discussed e.g. in Wilkinson [11], Parlett [5]) are still valid.

3. Convergence analysis. In this section we shall assume that the matrix function A(λ) is twice continuously differentiable in some neighbourhood U of λ̂. Then the divided difference A[λ₁, λ₂], defined in U by

A(λ₂) = A(λ₁) + (λ₂ − λ₁) A[λ₁, λ₂],  A[λ₁, λ₁] := A′(λ₁),

is continuously differentiable and satisfies the relation A[λ₁, λ₂] = A′(λ₁) + O(λ₂ − λ₁). If A(λ̂) is singular we call each vector x̂ ≠ 0 satisfying A(λ̂)x̂ = 0 a right eigenvector associated with λ̂, and each vector ŷ ≠ 0 satisfying ŷ^* A(λ̂) = 0 a left eigenvector associated with λ̂.

PROPOSITION 1. The following conditions are equivalent:
(i) d(λ) := det A(λ) has a simple zero at λ̂;
(ii) A(λ̂) has corank 1, and for any pair x̂, ŷ of right and left eigenvectors,

(10)  ŷ^* A′(λ̂) x̂ ≠ 0.

Proof. Suppose first that A(λ̂) has corank 1. Since the adjoint matrix C := Adj A(λ̂) satisfies CA(λ̂) = A(λ̂)C = det(A(λ̂)) I = 0, the columns of C are multiples of x̂ and the rows of C are multiples of ŷ^*; therefore C = γ x̂ ŷ^* for a suitable constant γ. Since some (n − 1) × (n − 1) minor of A(λ̂) is nonzero, C ≠ 0, whence γ ≠ 0. Now by the expansion of the determinant (Gröbner [4, eq. (4.76)]),

d(λ) = det A(λ) = det(A(λ̂) + (λ − λ̂) A[λ̂, λ])
     = det A(λ̂) + (λ − λ̂) tr(Adj A(λ̂) · A[λ̂, λ]) + O(λ − λ̂)²
     = (λ − λ̂) tr(C A[λ̂, λ]) + O(λ − λ̂)²
     = (λ − λ̂) γ ŷ^* A′(λ̂) x̂ + O(λ − λ̂)².

Hence in this case, (i) and (ii) are equivalent. Suppose now that A(λ̂) has corank s ≠ 1. If s = 0 then d(λ̂) ≠ 0 and neither (i) nor (ii) holds. And if s ≥ 2 then all (n − 1) × (n − 1) minors of A(λ̂) are zero, whence C = 0 and, by the above expansion, d(λ) = O(λ − λ̂)². Again neither (i) nor (ii) holds. This proves the proposition. □

We shall call λ̂ a simple isolated eigenvalue of the matrix function A(λ) if A(λ) is twice continuously differentiable in some neighbourhood of λ̂ and the conditions (i) and (ii) of Proposition 1 are satisfied.

PROPOSITION 2. Let λ̂ be a simple isolated eigenvalue of A(λ), and let x̂ be a corresponding right eigenvector normalized such that e^* x̂ = 1. Then the matrix

(11)  B := A(λ̂) + A′(λ̂) x̂ e^*

is nonsingular.

Proof. Assume that Bx = 0, i.e. A(λ̂)x + A′(λ̂)x̂ e^*x = 0. Multiplication with a left eigenvector ŷ^* gives ŷ^*A(λ̂)x + ŷ^*A′(λ̂)x̂ · e^*x = ŷ^*A′(λ̂)x̂ · e^*x = 0, and by (10) then e^*x = 0. Hence A(λ̂)x = Bx − A′(λ̂)x̂ e^*x = 0, and x = t x̂ for a suitable t, since A(λ̂) has corank 1. Now t = t e^*x̂ = e^*x = 0 implies x = 0. Since x was arbitrary, B is nonsingular. □
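The fixed-shift iteration of Steps 1–5 can be sketched in a few lines of NumPy. The test problem below is an assumption of mine (a symmetric quadratic matrix polynomial built in an orthonormal basis so that its eigenvalues are known exactly), not an example from the paper; since A(μ) is quadratic in μ, equation (6b) reduces to a scalar quadratic whose root closest to the current λ_l is taken.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Assumed test problem: in the orthonormal basis Q, A(lam) is diagonal with
# entries (lam - d1[i])*(lam - d2[i]); hence A(lam) = M0 + lam*M1 + lam^2*I
# has exactly the eigenvalues d1[i], d2[i], with eigenvectors Q[:, i].
d1 = np.arange(1.0, 7.0)           # 1, 2, ..., 6
d2 = -d1                           # -1, -2, ..., -6
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
M0 = Q @ np.diag(d1 * d2) @ Q.T
M1 = Q @ np.diag(-(d1 + d2)) @ Q.T
M2 = np.eye(n)

def A(lam):
    return M0 + lam * M1 + lam**2 * M2

def residual_inverse_iteration(sigma, maxit=50, tol=1e-13):
    """Steps 1-5 with fixed shift sigma; eigenvalue update by (6b)."""
    # Step 1: initial vector from A(sigma) xt = b with b = (1,...,1)^*;
    # e is the unit vector at the largest entry of xt, so that e^* x = 1.
    xt = np.linalg.solve(A(sigma), np.ones(n))
    k = np.argmax(np.abs(xt))
    x, lam = xt / xt[k], sigma
    for _ in range(maxit):
        # Step 2, eq. (6b): e^* A(sigma)^{-1} A(mu) x = 0 is a scalar
        # quadratic in mu; accept the root closest to the current lam.
        c = [np.linalg.solve(A(sigma), M @ x)[k] for M in (M2, M1, M0)]
        roots = np.roots(c)
        lam_new = roots[np.argmin(np.abs(roots - lam))].real
        r = A(lam_new) @ x                     # Step 3, eq. (7)
        dx = np.linalg.solve(A(sigma), r)      # Step 4, eq. (8)
        x_new = (x - dx)
        x_new = x_new / x_new[k]               # eq. (9)
        done = abs(lam_new - lam) < tol and np.linalg.norm(x_new - x) < tol
        x, lam = x_new, lam_new
        if done:
            break
    return lam, x

lam, x = residual_inverse_iteration(sigma=0.9)
print(lam)    # converges to the eigenvalue nearest the shift (here 1.0)
```

In a serious implementation A(σ) would be factored once (cf. (21a) below in §4) and the factors reused in every step; here `np.linalg.solve` stands in for that.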

PROPOSITION 3. With the assumptions of Proposition 2, suppose that for sufficiently small ε̄ ≥ ε > 0 we have 0 < |σ − λ̂| ≤ ε̄ and

(12)  x = x̂ + O(ε).

Then A(σ) is nonsingular, and the vector x̃ := x − A(σ)^{-1} A(λ̂) x satisfies

(13)  0 ≠ e^* x̃ = 1 + O(ε)

and

(14)  x̃/(e^* x̃) = x̂ + (σ − λ̂) O(ε).

Proof. If σ is in a sufficiently small neighbourhood of the simple eigenvalue λ̂ then det A(σ) ≠ 0, whence A(σ) is nonsingular. Define

(15)  S := A(σ) + A[σ, λ̂] x̂ e^*.

Then S = B + O(σ − λ̂), with B defined by (11); since B is nonsingular by Proposition 2, S is nonsingular for sufficiently small ε̄, and

(16)  S^{-1} = B^{-1} + O(σ − λ̂) = O(1).

Since A(λ̂) = A(σ) + (λ̂ − σ) A[σ, λ̂] and A(λ̂)x̂ = 0, we have A(σ)x̂ = (σ − λ̂) A[σ, λ̂] x̂, whence

(17)  S x̂ = (1 + σ − λ̂) A[σ, λ̂] x̂.

Now x̃ = x − A(σ)^{-1} A(λ̂) x = (σ − λ̂) A(σ)^{-1} A[σ, λ̂] x. Writing z := A(σ)^{-1} A[σ, λ̂] x and using A(σ) = S − A[σ, λ̂] x̂ e^*, we get

(18)  z = S^{-1} A[σ, λ̂] x + (e^* z)(1 + σ − λ̂)^{-1} x̂.

By (12) and (16), S^{-1} A[σ, λ̂] x = (1 + σ − λ̂)^{-1} x̂ + O(ε); multiplication of (18) with e^* therefore gives e^* z · (1 − (1 + σ − λ̂)^{-1}) = (1 + σ − λ̂)^{-1} + O(ε), whence

e^* x̃ = (σ − λ̂) e^* z = 1 + O(ε) ≠ 0,

which is (13). Moreover, x̃ − (e^* x̃) x̂ = (σ − λ̂)(v − (e^* v) x̂) with v := S^{-1} A[σ, λ̂] x, and since v = (1 + σ − λ̂)^{-1} x̂ + O(ε) we find x̃ − (e^* x̃) x̂ = (σ − λ̂) O(ε). Division by e^* x̃ = 1 + O(ε) gives (14). □

PROPOSITION 4. Under the assumptions of Proposition 2, suppose that for sufficiently small ε̄ ≥ ε > 0 we have 0 < |σ − λ̂| ≤ ε̄ and x^(l) = x̂ + O(ε). Then the zero λ_{l+1} of (6b) closest to λ̂ satisfies

(19)  λ_{l+1} = λ̂ + O(ε),

and in case that A(λ) is Hermitian and λ̂ is real, the zero λ_{l+1} of (6a) closest to λ̂ satisfies

(20)  λ_{l+1} = λ̂ + O(ε²).

Proof. Write ỹ^* := e^* A(σ)^{-1}, y := ỹ/‖ỹ‖, and consider the function f(α) := y^* A(α) x^(l). For σ → λ̂, y approaches a suitably normalized left eigenvector corresponding to λ̂, whence f approaches a function which, by (10), has a simple zero at λ̂. Therefore, if ε̄ is sufficiently small, f has a simple zero λ_{l+1} close to λ̂, and

0 = f(λ_{l+1}) = f(λ̂) + (λ_{l+1} − λ̂) f′(ζ)

for some ζ between λ̂ and λ_{l+1}. Now f(λ̂) = y^* A(λ̂) x^(l) = y^* A(λ̂)(x^(l) − x̂) = O(ε), and λ_{l+1} = λ̂ + O(ε) follows since f′(ζ) is bounded away from zero (near λ̂); this proves (19). (20) is proved in the same way with f(α) := x^(l)* A(α) x^(l), observing that in the Hermitian case ŷ := x̂ is a left eigenvector, so that

f(λ̂) = x^(l)* A(λ̂) x^(l) = (x^(l) − x̂)^* A(λ̂)(x^(l) − x̂) = O(ε²). □
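The two orders of accuracy in Proposition 4 can be observed numerically. The sketch below (an assumed real symmetric quadratic test problem with known spectrum; the construction and all names are mine, not data from the paper) computes the root of (6a) closest to a simple eigenvalue for a perturbed eigenvector x = x̂ + εu; per (20), halving ε should quarter the error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Assumed symmetric quadratic problem: A(lam) = M0 + lam*M1 + lam^2*I with
# eigenvalues d1[i], d2[i] and eigenvectors Q[:, i] (same construction as in
# the earlier sketch).
d1 = np.arange(1.0, 7.0)
d2 = -d1
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
M0 = Q @ np.diag(d1 * d2) @ Q.T
M1 = Q @ np.diag(-(d1 + d2)) @ Q.T

lam_hat = 1.0                      # a simple eigenvalue
x_hat = Q[:, 0]                    # its (unit) eigenvector
u = rng.standard_normal(n)
u = u / np.linalg.norm(u)          # generic perturbation direction

def root_6a(eps):
    """Root of (6a), x^* A(mu) x = 0, closest to lam_hat, for x = x_hat + eps*u.
    Since A is quadratic in mu, this is a scalar quadratic equation."""
    x = x_hat + eps * u
    c = [x @ x, x @ (M1 @ x), x @ (M0 @ x)]   # mu^2, mu^1, mu^0 coefficients
    roots = np.roots(c)
    return roots[np.argmin(np.abs(roots - lam_hat))].real

# Hermitian case of Proposition 4: the error in (6a) is O(eps^2),
# so halving eps divides the error by about 4.
e1 = abs(root_6a(1e-4) - lam_hat)
e2 = abs(root_6a(5e-5) - lam_hat)
print(e1 / e2)     # ≈ 4
```

The O(ε) behaviour of (6b) would show a ratio of about 2 in the same experiment; the quadratic gain of (6a) is exactly what makes variable shifts cubically convergent in the Hermitian case (see the remarks after the theorem below).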

With the observation that for σ → λ̂ the solution x^(0) of (5) converges to x̂, Propositions 3 and 4 lead to the following local convergence theorem.

THEOREM. Let λ̂ be a simple isolated eigenvalue of A(λ), and suppose that there is a corresponding eigenvector x̂ normalized such that e^* x̂ = 1. Then the residual inverse iteration converges for all σ sufficiently close to λ̂, and we have

‖x^(l+1) − x̂‖ / ‖x^(l) − x̂‖ = O(|σ − λ̂|),  λ_{l+1} = λ̂ + O(‖x^(l) − x̂‖^t),

where t = 1 if (6b) is used, and t = 2 if A(λ) is Hermitian, λ̂ is real, and (6a) is used.

The theorem implies local linear convergence with a convergence factor proportional to |σ − λ̂|. In particular, this suggests that the convergence is accelerated by updating the shift σ in each iteration step (or in some iteration steps only, if the extra work to refactor A(σ) is considered as being too much). It follows easily from the theorem that we have quadratic convergence (and in the Hermitian case with real λ̂ even cubic convergence) if in each iteration step, σ is replaced by the most recent value of λ_l.

4. Numerical examples. For the actual computation on a computer, several remarks are in place.

Equation (5) is usually solved by using a factorization

(21a)  A(σ) = SR,

where R is upper triangular and S is a permuted lower unit triangular or orthogonal matrix, so that we actually solve

(5a)  R x̃^(0) = (1, · · · , 1)^*,  x^(0) = x̃^(0)/(e^* x̃^(0))

in place of (5). An appropriate choice of b is then the vector b = Sĵ, ĵ = (1, · · · , 1)^*. This choice is motivated in Wilkinson [11] for ordinary inverse iteration and works well in the present algorithm.

In the special case that A(λ) is linear in λ and the QR (or QZ) algorithm has been used to compute the eigenvalues (cf. Wilkinson [11], Parlett [5], Stewart [8]), a factorization

(21b)  A(σ) = Q₁ B(σ) Q₂

with orthogonal Q₁, Q₂ and Hessenberg (or tridiagonal) B(σ) is already available, and it may be more economical to factor B(σ) instead:

(21c)  B(σ) = S′R′,

and solve

(5b)  R′ x̃^(0) = (1, · · · , 1)^*,  x^(0) = Q₂^* x̃^(0)/(e^* Q₂^* x̃^(0))

in place of (5). It is a useful fact that the factorization can be reused to find the vector e^* A(σ)^{-1} required in (6b) as

(22a)  e^* A(σ)^{-1} = e^* R^{-1} S^{-1}  using (21a), or
(22b)  e^* A(σ)^{-1} = e^* Q₂^* R′^{-1} S′^{-1} Q₁^*  using (21b, c),
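Without access to the triangular factors, the row vector e^* A(σ)^{-1} of (22a) can be mimicked by a single linear solve with the transposed matrix; in an actual implementation the LU factors computed for (5) and (8) would simply be reused, so no second factorization is needed. A small assumed illustration (the matrix here merely stands in for A(σ)):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 7
As = rng.standard_normal((n, n)) + 3.0 * np.eye(n)   # stands in for A(sigma)
e = np.zeros(n)
e[0] = 1.0                                           # normalization vector

# Row vector u^* = e^* A(sigma)^{-1}: one solve with the transpose,
# A(sigma)^T u = e  (for complex A, use the conjugate transpose).
u = np.linalg.solve(As.T, e)

print(np.allclose(u @ As, e))    # True: u^* A(sigma) = e^*
```

With LAPACK-style factorizations the same LU decomposition serves both the direct solves of (8) and this transposed solve, which is the point of (22a).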

and to find the correction dx^(l) in (8) as

(8a)  dx^(l) = R^{-1} S^{-1} r^(l)  using (21a), or
(8b)  dx^(l) = Q₂^* R′^{-1} S′^{-1} Q₁^* r^(l)  using (21b, c).

The equations (6a) resp. (6b) need not be solved to full accuracy, and it is sufficient to take for λ_{l+1} one Newton step (linear interpolation) or Euler step (quadratic interpolation) from λ_l (starting with λ₀ := σ) towards the solution.

Equations (8) and (9) suggest that the limit accuracy with which x̂ can be approximated by x^(l) is mainly determined by the accuracy with which the residual A(λ_{l+1})x^(l) is computed. Therefore it is sensible to store x^(l) and λ_l in double precision. Then λ_{l+1} and r^(l) should be computed in double precision, but r^(l) can be rounded to single precision before it is stored. The factorizations (21a–c) and the solution of the equations (5a, b), (8a, b) can be performed in single precision, as well as the computation of e^* A(σ)^{-1} by (22a, b). Finally, the correction (9) should be done in double precision again. The resulting limit accuracy can then be expected to be about the same as with the use of double precision throughout, and this is confirmed by the numerical examples shown below.

To demonstrate the behaviour of residual inverse iteration we report here some of the numerical experiments which we have done on the UNIVAC 1100/82 of the University of Freiburg (mantissa length: 27 bits for single precision, 60 bits for double precision). The linear equations were solved using single precision Gauss elimination with column pivoting, and the vector e was chosen as the unit vector with a 1 in the position of the absolutely largest entry of the most recent x^(l). This position was found to be independent of l except sometimes for l = 1 or 2.

For a fixed shift σ we generally observed global, monotonic and linear convergence of λ_l to one of the eigenvalues nearest to σ. The classical analysis of inverse iteration guarantees such a behaviour in the linear, nondefective case. In almost all examples tried, the observed convergence factor for the eigenvector was

q ≈ C |σ − λ̂| / inf |σ − λ|,

where the infimum extends over all eigenvalues λ of A(λ) distinct from λ̂, and C varied between 0.5 and 3.

Although our convergence analysis applies only to simple, isolated eigenvalues, it was found that multiple, nondefective eigenvalues were found with the same speed and accuracy as simple eigenvalues. We did not try residual inverse iteration on defective problems.

We now consider specific examples. Our first example is a standard eigenvalue problem, corresponding to A(λ) = A₀ − λI. The matrix A₀ is the Frank matrix of order 11 (see [3]), with entries (A₀)_{ik} = 12 − max(i, k) for k ≥ i − 1 and (A₀)_{ik} = 0 otherwise; its first row is (11, 10, 9, · · · , 1). All eigenvalues are simple.
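The Frank matrix is easy to generate from its defining pattern. The sketch below (the function name is mine) builds the order-11 matrix and checks two classical properties that make it a popular test matrix: det A₀ = 1, and the eigenvalues occur in reciprocal pairs λ, 1/λ (so the middle eigenvalue of an odd-order Frank matrix is 1); the small eigenvalues are notoriously ill conditioned.

```python
import numpy as np

def frank(n):
    """Frank matrix: a[i, j] = n - max(i, j) for j >= i - 1, else 0
    (0-based indices; first row is n, n-1, ..., 1)."""
    F = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j >= i - 1:
                F[i, j] = n - max(i, j)
    return F

F = frank(11)
print(F[0, :3], F[1, :3], F[2, :3])   # rows begin 11 10 9 / 10 10 9 / 0 9 9

# Classical checks: unit determinant and reciprocally paired eigenvalues.
ev = np.sort(np.linalg.eigvals(F).real)
print(abs(np.linalg.det(F) - 1.0) < 1e-6)
print(np.allclose(ev * ev[::-1], 1.0, atol=1e-3))
```

The loose tolerance in the last check reflects the ill conditioning of the small eigenvalues mentioned above: their computed values carry an absolute error far larger than machine precision.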

With strategy (6b) and constant shifts accurate to 10%, all eigenvalues were found very accurately. Selected results are given in Table 1; listed are σ — the (constant) shift, λ — the computed eigenvalue, l — the number of iterations (max. 20), Δx — the max. norm of the final eigenvector correction, q — the average quotient of consecutive corrections, and q* — the relative distance of the shift from the eigenvalues.

[TABLE 1. Residual inverse iteration for the Frank matrix: shift σ, computed eigenvalue λ, iteration count l, final correction Δx, and the quotients 1/q and 1/q*; an entry 20* denotes no convergence within 20 iterations.]

It is seen that, as described above, the convergence rate q strongly correlates with the relative distance q* of the shift from the eigenvalues; in particular, this explains the slow convergence when the shift is near the average of two consecutive eigenvalues. This confirms the claim that a single precision factorization coupled with double precision residuals suffices to produce results comparable with the use of double precision throughout. We remark that if (6a) is used in place of (6b) to compute λ_{l+1}, then the iteration fails to converge to the small eigenvalues, since the corresponding left and right eigenvectors are almost orthogonal.

Our second example is a symmetric, definite quadratic eigenvalue problem taken from Scott and Ward [7]:

[A(λ): a symmetric matrix with quadratic polynomial entries, among them −10λ² + λ + 10, 2λ² + 2λ + 2, −11λ² + λ + 9, −λ² + λ − 1, 2λ² + 2λ + 3, −12λ² + 10, λ² + 2λ + 2, 2λ² + λ − 1, −λ² − 2λ + 2, −10λ² + 2λ + 12, λ² + 3λ − 2, 3λ² + λ − 2, and −11λ² + 3λ + 10.]

Its eigenvalues (to three decimals) and the associated normalized eigenvectors were computed for several constant shifts. We give details for one eigenvalue: with a constant shift accurate to 0.0001, the sixth iterate x^(6) was accurate to 15 decimals.
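The shift-updating acceleration discussed after the theorem can be illustrated on an assumed quadratic test problem with known eigenvalues (a hedged sketch; the problem construction and all names are mine, not the example from the paper). The variable-shift run should need no more iterations than the fixed-shift run.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Assumed symmetric quadratic test problem: in the orthonormal basis Q the
# matrix A(lam) is diagonal with entries (lam - d1[i])*(lam - d2[i]), so the
# eigenvalues are exactly the numbers d1[i], d2[i].
d1 = np.arange(1.0, 7.0)
d2 = -d1
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
M0 = Q @ np.diag(d1 * d2) @ Q.T
M1 = Q @ np.diag(-(d1 + d2)) @ Q.T

def A(lam):
    return M0 + lam * M1 + lam**2 * np.eye(n)

def iterate(sigma0, update_shift, maxit=60, tol=1e-10):
    """Residual inverse iteration; with update_shift=True the shift is
    replaced by the most recent lam in every step (variable shifts)."""
    sigma = sigma0
    xt = np.linalg.solve(A(sigma), np.ones(n))
    k = np.argmax(np.abs(xt))
    x, lam = xt / xt[k], sigma
    for it in range(1, maxit + 1):
        # (6b) as a scalar quadratic for the new eigenvalue approximation.
        c = [np.linalg.solve(A(sigma), M @ x)[k] for M in (np.eye(n), M1, M0)]
        roots = np.roots(c)
        lam_new = roots[np.argmin(np.abs(roots - lam))].real
        dx = np.linalg.solve(A(sigma), A(lam_new) @ x)   # (7), (8)
        x = x - dx
        x = x / x[k]                                     # (9)
        if abs(lam_new - lam) < tol:
            return lam_new, it
        lam = lam_new
        if update_shift:
            sigma = lam        # in practice: refactor A(sigma) here
    return lam, maxit

lam_f, its_f = iterate(0.8, update_shift=False)
lam_v, its_v = iterate(0.8, update_shift=True)
print(its_v, "<=", its_f)      # variable shifts: fewer (or equal) steps
```

As the text warns, with variable shifts the λ_l need no longer be monotonic, and the limit eigenvalue need no longer be the one nearest the initial shift; the sketch therefore only checks that some eigenvalue was found.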

Our third example is another symmetric definite problem of Scott and Ward [7], this time with multiple eigenvalues:

[A(λ): a symmetric matrix with quadratic polynomial entries, among them −λ² − 3λ + 1, λ² − 1, −2λ² − 6λ + 2, −2λ² − 3λ + 5, λ − 1, −2λ² − 5λ + 2, 2λ² − 2, and −4λ.]

This matrix function has the double eigenvalues 1 and −2; the remaining eigenvalues (among them −8.243 and −8.359, to three decimals) are simple. Sample results are given in Table 2; the multiple eigenvalues are found as efficiently as the others.

[TABLE 2. Shift σ, computed eigenvalue λ, iteration count l, final correction Δx, and quotient 1/q; an entry 20* denotes no convergence within 20 iterations.]

Finally, we demonstrate the effect of convergence acceleration with the matrix function of Example 2. Starting with σ = 2, the shift was updated in each iteration, replacing σ by the single precision truncation of the most recent λ_l. After 8 steps the approximate eigenpair agrees to 16 decimal places with that computed by the algorithm with constant shift σ = 1, but the λ_l are no longer monotonic, and the limit eigenvalue is no longer nearest to the initial shift. The convergence behaviour can be seen from Table 3, which lists the maximal elements of the residuals and of the eigenvector corrections.

[TABLE 3. Steps 2–8: maximal element of the residual and of the eigenvector correction in each step.]

REFERENCES

[1] J. J. DONGARRA, C. B. MOLER AND J. H. WILKINSON, Improving the accuracy of computed eigenvalues and eigenvectors, this Journal, 20 (1983), pp. 23–45.
[2] D. K. FADDEEV AND V. N. FADDEEVA, Computational Methods of Linear Algebra, W. H. Freeman, San Francisco, 1963.
[3] R. T. GREGORY AND D. L. KARNEY, A Collection of Matrices for Testing Computational Algorithms, Wiley-Interscience, New York–London, 1969.
[4] W. GRÖBNER, Matrizenrechnung, Bibliographisches Institut, Mannheim–Wien–Zürich, 1966.
[5] B. N. PARLETT, The Symmetric Eigenvalue Problem, Prentice-Hall, Englewood Cliffs, NJ, 1980.
[6] A. RUHE, Algorithms for the nonlinear eigenvalue problem, this Journal, 10 (1973), pp. 674–689.
[7] D. S. SCOTT AND R. C. WARD, Solving symmetric-definite quadratic λ-matrix problems without factorization, SIAM J. Sci. Statist. Comput., 3 (1982), pp. 58–67.
[8] G. W. STEWART, Introduction to Matrix Computations, Academic Press, New York–San Francisco–London, 1973.
[9] H. J. SYMM AND J. H. WILKINSON, Realistic error bounds for a simple eigenvalue and its associated eigenvector, Numer. Math., 35 (1980), pp. 113–126.
[10] H. UNGER, Nichtlineare Behandlung von Eigenwertaufgaben, Z. Angew. Math. Mech., 30 (1950), pp. 281–282.
[11] J. H. WILKINSON, The Algebraic Eigenvalue Problem, Oxford Univ. Press, London, 1965.
[12] T. YAMAMOTO, Error bounds for computed eigenvalues and eigenvectors, Numer. Math., 34 (1980), pp. 189–199.
