Lecture No. 15

Method of Weighted Residuals

The basic concept of the method of weighted residuals is to drive a residual error to zero through a set of orthogonality conditions. Consider the problem

$L(u) = p(x) \quad x \in V$

$S(u) = g(x) \quad x \in \Gamma$

We let:

$u_{app} = u_B + \sum_{j=1}^{N} \alpha_j \phi_j(x)$

where

$u_B$ = a function which satisfies all the b.c.'s
$\alpha_j$ = unknown coefficients which we must solve for
$\phi_j$ = known functions from a complete sequence

To satisfy admissibility we must satisfy functional continuity requirements, i.e. $u_{app}$ must be sufficiently differentiable in addition to satisfying the b.c.'s:

$S(u_B) = g$

$S(\phi_j) = 0 \quad j = 1, \ldots, N$

Therefore the b.c.'s must be satisfied independently of the parameters $\alpha_j$. The difficulty with the method of weighted residuals is that it may be hard to find functions which satisfy the above b.c. requirements.

We now define an interior domain "residual" or error:

$\epsilon_I = L(u_{app}) - p$

and enforce the orthogonality conditions

$\langle \epsilon_I, w_i \rangle = 0 \quad i = 1, \ldots, N$

We note that the $w_i$ must be linearly independent functions. If $u_{app}$ satisfies the admissibility requirements, the $\phi_j$ come from a complete sequence, and the $w_i$ are linearly independent, then the method will work.

Solution procedure (p.d.e. → algebraic equations):

1. Construction of an approximate solution (follow the rules).
2. Reduction to a set of algebraic equations (orthogonality conditions on the error).
3. Solution of the set of simultaneous algebraic equations.

Selection of Weighting Functions

1. Collocation Method (Point Collocation)

• Constrain the error only at a set of selected points.
• We define the approximating function as (assuming $u_B = 0$):

$u_{app} = \sum_{k=1}^{N} \alpha_k \phi_k$

$\epsilon_I = L(u_{app}) - p = \sum_{k=1}^{N} \alpha_k L(\phi_k) - p$

• The parameters $\alpha_k$ are determined by enforcing the condition $\epsilon_I = 0$ at $N$ points within the domain. Thus we select as the weighting function the Dirac delta function. Let:

$w_i = \delta(x - x_i)$

Thus

$\langle \epsilon_I, w_i \rangle = \int_x \epsilon_I(x)\, w_i(x)\, dx = \int_x \epsilon_I(x)\, \delta(x - x_i)\, dx = \epsilon_I(x_i) = 0$

Therefore we set the residual error equal to zero at a set of collocation points.

Example of Point Collocation

d.e. $L(u) - p = \dfrac{d^2 u}{dx^2} + u + x = 0$

b.c.'s $u = 0$ at $x = 0$; $u = 0$ at $x = 1$

Let's approximate the function as:

$u_{app} = x(1-x)(\alpha_1 + \alpha_2 x + \alpha_3 x^2 + \ldots)$

This function is infinitely differentiable and satisfies the b.c.'s for arbitrary $\alpha_k$, and is therefore admissible. Furthermore the $\phi_k$ come from a complete sequence. Note that:

$\phi_1 = x(1-x)$
$\phi_2 = x(1-x)x$
$\phi_3 = x(1-x)x^2$

Let's only use 2 terms in the approximation:

$u_{app} = x(1-x)(\alpha_1 + \alpha_2 x)$

The error is (substituting $u_{app}$ into the d.e.):

$\epsilon_I = L(u_{app}) - p = x + (-2 + x - x^2)\,\alpha_1 + (2 - 6x + x^2 - x^3)\,\alpha_2$

Now select 2 collocation points (since there are only 2 unknowns):

$x_1 = \tfrac{1}{4} \qquad x_2 = \tfrac{1}{2}$

Letting $\epsilon_I(x_1) = 0$ and $\epsilon_I(x_2) = 0$ leads to:

$\begin{bmatrix} \frac{29}{16} & -\frac{35}{64} \\ \frac{7}{4} & \frac{7}{8} \end{bmatrix} \begin{Bmatrix} \alpha_1 \\ \alpha_2 \end{Bmatrix} = \begin{Bmatrix} \frac{1}{4} \\ \frac{1}{2} \end{Bmatrix}$

Thus $\alpha_1 = \frac{42}{217}$, $\alpha_2 = \frac{40}{217}$, and

$u_{app} = \dfrac{x(1-x)}{217}(42 + 40x)$

Notes on Point Collocation

• The computational effort required in the collocation procedure is minimal.
• The procedure produces coefficient matrices that are neither symmetrical nor positive definite; both are desirable properties. We also note that this lack of symmetry has nothing to do with the $\phi_k$'s selected!
• Setting the residual error to zero at discrete points does not mean that you have zero solution error at those points. The residual is the difference between the differential operator acting on the truncated approximating function and the function $p(x)$, not the difference $u - u_{app}$. The error will only go to zero as you take more and more functions in the approximating sequence.
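The numbers above are easy to check by machine. Here is a minimal numerical sketch (my illustration, not part of the original notes) that assembles the $2 \times 2$ collocation system and compares the result against the exact solution $u = \sin x / \sin 1 - x$ of this boundary value problem:

```python
import numpy as np

def residual_parts(xp):
    """Coefficients of (alpha_1, alpha_2) and the constant term of the residual
    e(x) = x + (-2 + x - x**2)*a1 + (2 - 6*x + x**2 - x**3)*a2 at x = xp."""
    return np.array([-2 + xp - xp**2, 2 - 6*xp + xp**2 - xp**3]), xp

pts = [0.25, 0.5]                       # the two collocation points
A = np.zeros((2, 2))
rhs = np.zeros(2)
for i, xp in enumerate(pts):
    coeffs, const = residual_parts(xp)
    A[i, :] = coeffs                    # enforce e(x_i) = 0
    rhs[i] = -const

a1, a2 = np.linalg.solve(A, rhs)
print(a1, a2)                           # 42/217 ~ 0.19355, 40/217 ~ 0.18433

# Compare with the exact solution u = sin(x)/sin(1) - x at the midpoint.
xc = 0.5
print(xc*(1 - xc)*(a1 + a2*xc))         # approximation ~ 0.07143
print(np.sin(xc)/np.sin(1) - xc)        # exact value   ~ 0.06975
```

Note that the approximation is close to, but not equal to, the exact solution at the collocation point $x = 1/2$, even though the residual vanishes there.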
2. Least Squares Method

The least squares method is implemented by taking the inner product of the error with itself and requiring this quantity to be a minimum. Thus we define $u_{app}$ as:

$u_{app} = u_B + \sum_{i=1}^{N} \alpha_i \phi_i$

and define the residual error as:

$\epsilon_I = L(u_{app}) - p$

Now we let:

$F = \langle \epsilon_I, \epsilon_I \rangle$

We note that any integrated measure of a function must be a quadratic (otherwise +'s and −'s cancel). Therefore a measure of the error is the square of the error. Our objective is to minimize $F$; driving $F$ to zero will drive $\epsilon_I$ to zero. Since the $\alpha_k$'s are the unknowns and $F$ is a function of the $\alpha_k$'s, we minimize $F$ with respect to the $\alpha_k$'s:

$\dfrac{\partial F}{\partial \alpha_k} = 2\left\langle \epsilon_I, \dfrac{\partial \epsilon_I}{\partial \alpha_k} \right\rangle = 0$

However we recall that:

$\epsilon_I = L(u_B) - p + \sum_{i=1}^{N} \alpha_i L(\phi_i)$

This assumes a linear operator, although the method also works for nonlinear operators $L$. Thus:

$\dfrac{\partial \epsilon_I}{\partial \alpha_k} = L(\phi_k)$

This results in:

$\langle \epsilon_I, L(\phi_k) \rangle = 0 \quad k = 1, \ldots, N$

Thus the weighting (test) functions are now the trial functions pushed through the differential operator. Therefore:

$w_k = L(\phi_k)$

Substituting for $\epsilon_I$:

$\left\langle L(u_B) - p + \sum_{i=1}^{N} \alpha_i L(\phi_i),\; L(\phi_k) \right\rangle = 0 \quad k = 1, \ldots, N$

This leads to a set of simultaneous equations:

$\sum_{i=1}^{N} \alpha_i b_{ik} = -c_k \quad k = 1, \ldots, N$

where $b_{ik} = \langle L(\phi_i), L(\phi_k) \rangle$ defines the coefficient matrix and $c_k = \langle L(u_B) - p, L(\phi_k) \rangle$. Thus:

$\mathbf{b}\,\boldsymbol{\alpha} = -\mathbf{c}$

Thus again we have developed a system of simultaneous algebraic equations from a differential equation.

Example of the Least Squares Method

d.e. $L(u) - p = \dfrac{d^2 u}{dx^2} + u + x = 0$

b.c.'s $u = 0$ at $x = 0$ and at $x = 1$

Recall from the previous example that we set up an approximating function which was admissible (satisfied the b.c.'s and had a sufficient degree of functional continuity) and came from a complete sequence (in this case we selected a sequence of polynomials):

$u_{app} = \alpha_1 [x(1-x)] + \alpha_2 [x^2(1-x)]$

where $u_B = 0$, $\phi_1 = x(1-x) = x - x^2$, $\phi_2 = x^2(1-x) = x^2 - x^3$.

Computing the components of the matrix:

$L(\phi_1) = \dfrac{d^2\phi_1}{dx^2} + \phi_1 = -2 + x - x^2$

$L(\phi_2) = \dfrac{d^2\phi_2}{dx^2} + \phi_2 = 2 - 6x + x^2 - x^3$

$L(\phi_1)\,L(\phi_1) = 4 - 4x + 5x^2 - 2x^3 + x^4$

$b_{11} = \langle L(\phi_1), L(\phi_1) \rangle = \int_0^1 (4 - 4x + 5x^2 - 2x^3 + x^4)\, dx = 3.36667$

We proceed to find $b_{22}$:

$L(\phi_2)\,L(\phi_2) = 4 - 24x + 40x^2 - 16x^3 + 13x^4 - 2x^5 + x^6$

$b_{22} = \langle L(\phi_2), L(\phi_2) \rangle = \int_0^1 (4 - 24x + 40x^2 - 16x^3 + 13x^4 - 2x^5 + x^6)\, dx = 3.74286$

Similarly:

$b_{12} = b_{21} = \langle L(\phi_1), L(\phi_2) \rangle = 1.68333$

Computing the components of the vector $\mathbf{c}$ (here $c_k = \langle x, L(\phi_k) \rangle$, since $u_B = 0$ and $p = -x$) leads to the following system of equations:

$\begin{bmatrix} 3.36667 & 1.68333 \\ 1.68333 & 3.74286 \end{bmatrix} \begin{Bmatrix} \alpha_1 \\ \alpha_2 \end{Bmatrix} = \begin{Bmatrix} 0.91667 \\ 0.95000 \end{Bmatrix}$

The final step involves solving this system of equations.

Notes on the Least Squares Method

• The matrix produced is always symmetrical regardless of the operator $L$ or the functions $\phi_i$:

$b_{ik} = \langle L(\phi_i), L(\phi_k) \rangle = \langle L(\phi_k), L(\phi_i) \rangle = b_{ki}$

• Diagonal entries of the system matrix are always positive, since $b_{kk} = \langle L(\phi_k), L(\phi_k) \rangle \geq 0$. This leads to a positive definite matrix.
• The least squares method involves considerable computational effort. It is difficult to manipulate the $w_k$'s, which equal the $\phi_k$'s pushed through the operator $L$. This is both messy and tedious.

3. Galerkin Method

The Galerkin method consists of taking the inner product of the error and the trial functions themselves. Thus:

$w_i = \phi_i$

Hence

$\left\langle (L(u_{app}) - p), \phi_k \right\rangle = \left\langle L(u_B) - p + \sum_{i=1}^{N} \alpha_i L(\phi_i),\; \phi_k \right\rangle = 0 \quad k = 1, \ldots, N$

which again leads to $\sum_{i=1}^{N} \alpha_i b_{ik} = -c_k$, where now

$b_{ik} = \langle L(\phi_i), \phi_k \rangle \qquad c_k = \langle L(u_B) - p, \phi_k \rangle$

• The Galerkin method is computationally much simpler than the least squares method. We no longer push $\phi_k$ through the operator $L$.
• We note that the matrix produced is still symmetrical if the operator $L$ is self-adjoint. The operator $L$ is self-adjoint if, in the transformation $\langle L(u), v \rangle = \langle u, L^*(v) \rangle + \ldots$, we have $L^* = L$. Self-adjointness of an operator is analogous to symmetry of a matrix.
• We note that the matrix is still positive definite if the operator $L$ is positive definite (and self-adjoint). For a positive definite operator we have $\langle L(w), w \rangle > 0$ for any admissible $w \neq 0$. We will discuss the above matters in more detail later.

Example of the Galerkin Method

d.e. $L(u) - p = \dfrac{d^2 u}{dx^2} + u + x = 0$

b.c.'s $u = 0$ at $x = 0$ and at $x = 1$

The approximating function:

$u_{app} = \alpha_1 [x(1-x)] + \alpha_2 [x^2(1-x)]$

Hence $u_B = 0$, $\phi_1 = x - x^2$, $\phi_2 = x^2 - x^3$.

This allows us to compute the components of the system matrix $b_{ik}$:

$b_{11} = \langle L(\phi_1), \phi_1 \rangle = \int_0^1 (-2 + x - x^2)(x - x^2)\, dx$

$b_{12} = \langle L(\phi_2), \phi_1 \rangle = \int_0^1 (2 - 6x + x^2 - x^3)(x - x^2)\, dx$

$b_{21} = \langle L(\phi_1), \phi_2 \rangle = \int_0^1 (-2 + x - x^2)(x^2 - x^3)\, dx$

$b_{22} = \langle L(\phi_2), \phi_2 \rangle = \int_0^1 (2 - 6x + x^2 - x^3)(x^2 - x^3)\, dx$

Also computing the components of the vector $\mathbf{c}$ and multiplying the system through by $-1$ leads to:

$\begin{bmatrix} \frac{3}{10} & \frac{3}{20} \\ \frac{3}{20} & \frac{13}{105} \end{bmatrix} \begin{Bmatrix} \alpha_1 \\ \alpha_2 \end{Bmatrix} = \begin{Bmatrix} \frac{1}{12} \\ \frac{1}{20} \end{Bmatrix}$

We now solve this system of simultaneous equations. Thus for Galerkin's method the test (weighting) functions are the same as the trial functions!
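Both the least squares and Galerkin systems above can be verified symbolically. The sketch below is my illustration (the helper name `solve_mwr` is chosen here, not from the notes); it assembles $\sum_i \alpha_i \langle L(\phi_i), w_k \rangle = \langle p, w_k \rangle$ for either choice of weighting functions:

```python
import sympy as sp

x = sp.symbols('x')
phi = [x*(1 - x), x**2*(1 - x)]      # trial functions (satisfy the b.c.'s)
L = lambda u: sp.diff(u, x, 2) + u   # operator L(u) = u'' + u
p = -x                               # L(u) = p  <=>  u'' + u + x = 0

def solve_mwr(w):
    """Assemble and solve sum_i alpha_i <L(phi_i), w_k> = <p, w_k>."""
    B = sp.Matrix(2, 2, lambda k, i: sp.integrate(L(phi[i]) * w[k], (x, 0, 1)))
    c = sp.Matrix(2, 1, lambda k, _: sp.integrate(p * w[k], (x, 0, 1)))
    return B.solve(c)

alpha_ls = solve_mwr([L(f) for f in phi])   # least squares: w_k = L(phi_k)
alpha_gal = solve_mwr(phi)                  # Galerkin:      w_k = phi_k
print(alpha_ls.evalf(5))   # ~ [0.18754, 0.16947]
print(alpha_gal)           # [71/369, 7/41] ~ [0.1924, 0.1707]
```

Only the list of weighting functions changes between the two solves, which is precisely the point of this lecture: the choice of $w_k$ defines the method.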
Alternative notation for the Galerkin method

Hence:

$\langle (L(u_{app}) - p), \phi_j \rangle = 0 \quad j = 1, 2, \ldots, N$

$\Rightarrow \int_V (L(u_{app}) - p)\,\phi_j\, dV = 0 \quad j = 1, 2, \ldots, N$

Now let's define:

$\delta u = \delta\alpha_1 \phi_1 + \delta\alpha_2 \phi_2 + \ldots + \delta\alpha_N \phi_N$

This is a notational change (often used with the Galerkin method). $\delta\alpha_1, \delta\alpha_2, \ldots, \delta\alpha_N$ are arbitrary coefficients. Since the $\phi_j$ are linearly independent functions, the error statement may now be written as:

$\int_V (L(u_{app}) - p)\,\delta u\, dV = 0$ for arbitrary $\delta\alpha_j$

The above expression produces $N$ equations since the $\phi_j$ are linearly independent functions. Since the coefficients $\delta\alpha_j$ are arbitrary, we can select them such that:

$\delta\alpha_1 = 1, \quad \delta\alpha_2 = 0, \quad \delta\alpha_3 = 0, \ldots$

$\delta\alpha_1 = 0, \quad \delta\alpha_2 = 1, \quad \delta\alpha_3 = 0, \ldots$

This leads us directly back to our first statement:

$\langle (L(u_{app}) - p), \phi_j \rangle = 0 \quad j = 1, \ldots, N$

4. Subdomain Method

Divide the domain $V$ into $N$ smaller domains $V_i$ and select the weighting functions as:

$w_i = \begin{cases} 1 & x \text{ in } V_i \\ 0 & x \text{ not in } V_i \end{cases}$

Thus this method integrates the residual error over each subdomain and sets the result equal to zero.

5. Method of Moments

Select:

$w_i = x^{i-1} \quad i = 1, \ldots, N$

Thus the method applies the series $1, x, x^2, x^3, \ldots, x^{N-1}$ as weighting functions. Thus we compute higher order moments of the residual and force them to zero.

6. Least Squares Collocation

• For conventional collocation, the number of collocation points equals the number of unknown $\alpha_k$'s.
• We extend the method to treat more collocation points than unknowns. Therefore the error is evaluated at $M > N$ points:

$\epsilon_I = L(u_{app}) - p(x)$ at $M > N$ points

• Now sum $\epsilon_I^2$ at these $M$ different points:

$F = \sum_{m=1}^{M} \left\langle \{L(u_{app}) - p\}^2,\; \delta(x - x_m) \right\rangle = \sum_{m=1}^{M} \epsilon_I^2(x_m)$

and minimize $F$ with respect to the unknown coefficients $\alpha_k$, as in the least squares method; a brief sketch follows.
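As a sketch of this idea (my illustration; the choice of $M = 9$ evenly spaced points is arbitrary), note that for a linear operator the minimization of $F$ reduces to an ordinary linear least-squares problem over the residuals at the chosen points:

```python
import numpy as np

M = 9                                   # M = 9 residual points for N = 2 unknowns
xm = np.linspace(0.1, 0.9, M)

# Residual e(x) = x + (-2 + x - x**2)*a1 + (2 - 6*x + x**2 - x**3)*a2,
# evaluated at all M points at once.
A = np.column_stack([-2 + xm - xm**2, 2 - 6*xm + xm**2 - xm**3])
rhs = -xm

# Minimizes F = sum_m e(x_m)**2 over (a1, a2).
alpha, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print(alpha)                            # close to the Galerkin result (~0.19, ~0.17)
```

With $M = N$ this reduces to conventional point collocation; taking $M > N$ spreads the residual control over the whole domain at little extra cost.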