
Lecture 5-6

Lecture annotation
Direct variational and weighted residual methods for the solution of boundary value problems of mechanics. Interior and boundary methods. Collocation method, min-max method, least squares method, orthogonality methods. Trefftz's boundary method.
Introduction
In Lecture 3, we learned about the Ritz method, the Galerkin method and the method of orthogonal series expansion for the approximate solution of boundary value problems such as

$Au = f$, or $\mathbf{A}\mathbf{u} = \mathbf{f}$, in the domain $\Omega$,
$Bu = g$, or $\mathbf{B}\mathbf{u} = \mathbf{g}$, on the boundary $\Gamma$, (1)
where $A, B$ ($\mathbf{A}, \mathbf{B}$) are linear operators acting in a separable Hilbert space (most often $L_2(\Omega)$); in the examples they are usually linear differential operators, and $u$ ($\mathbf{u}$) is the solution sought. We have shown that in the case when the operator $A$ is positive definite, an approximate variational method for the solution of Eq. (1) consists in the construction of a certain sequence of functions (elements) $u_n \in H_A$, which is proved to converge in the energy space $H_A$ to the generalized solution $u_0$ sought (usually these functions are taken from the linear set of all admissible functions $D_A$, dense in $H$), for which the energy functional of the given problem attains its minimum. If the operator is not positive (or not even linear), the solution of the problem can be based upon the fundamental lemma of variational calculus. If for a certain function $u_0 \in D_A$ it holds that

$(Au_0 - f, \varphi_k) = 0$ for all $k = 1, 2, \ldots$, where the $\varphi_k$ form a basis in $H$, (2)

then

$Au_0 - f = 0$ in $H$, (3)

hence $u_0 \in D_A$ is the solution of equation (3) in $H$. This simple idea is the basis of the Galerkin method: it is assumed that the basis $\{\varphi_k\}$ and the domain of definition $D_A$ of the operator $A$ are such that every linear combination of the basis functions belongs to $D_A$, and we seek an approximate solution $u_n$ of (3) in the form

$u_n = \sum_{k=1}^{n} c_k\,\varphi_k$, (4)

where $n$ is an arbitrary but fixed natural number and the $c_k$ are unknown constants to be determined from the condition

$(Au_n - f, \varphi_k) = 0, \quad k = 1, 2, \ldots, n$. (5)

It should be noted that Eq. (2) can also be approached from the viewpoint of a variational principle, as has been explained in Lecture 2, see Eq. (25). (back to Lecture 7)
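As a small numerical illustration of condition (5) (a sketch of ours, not part of the lecture text): take the beam problem $EI\,w'''' = p_0(1-\xi)$ on $(0,1)$ with $EI = L = p_0 = 1$, and use the single basis function $\varphi_1 = 3\xi^2 - 5\xi^3 + 2\xi^4$ (the trial function (1f) of Example 1 below). The Galerkin condition $(Au_1 - f, \varphi_1) = 0$ then determines the single coefficient exactly:

```python
from fractions import Fraction as F

# Galerkin condition (5) with one basis function, for EI w'''' = p0(1 - xi)
# on (0,1) (EI = L = p0 = 1), trial u1 = c1*phi1, phi1 = 3x^2 - 5x^3 + 2x^4:
#   (A u1 - f, phi1) = int_0^1 phi1 * (48*c1 - (1 - xi)) dxi = 0.

def integral(poly):
    """Exact integral over (0, 1) of a polynomial given as [c0, c1, ...]."""
    return sum(F(c, k + 1) for k, c in enumerate(poly))

phi1 = [0, 0, 3, -5, 2]                 # coefficients of phi1 in xi
phi1_times_load = [0, 0, 3, -8, 7, -2]  # phi1 * (1 - xi), expanded

# 48*c1 * int(phi1) = int(phi1 * load)
c1 = integral(phi1_times_load) / (48 * integral(phi1))
print(c1)   # 1/108
```

Note that this Galerkin coefficient ($1/108$) differs from the one-point collocation result of Example 1; the different weighting functions of the residual methods below generally give different approximations.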
We have seen in Lecture 4, in the part devoted to the hybrid and modified variational principles, that it is possible to weaken the requirements imposed upon functions from $D_A$ also in such a way that they need not satisfy the boundary conditions, or satisfy them only approximately. In this Lecture, we deal with methods which can be obtained by various modifications of the condition (5), where the condition (5) is applied also to the boundary conditions (1)$_2$.
First, we make several remarks concerning the notation used. The approximate (trial) solution (4) will be written in the form that is more prevalent in the technical literature (see also the last part of Lecture 4):

$u \approx \tilde{u}(\mathbf{x}) = \sum_{i=1}^{n} N_{ui}(\mathbf{x})\,\bar{u}_i = \mathbf{N}_u(\mathbf{x})\,\bar{\mathbf{u}}$ (6)$_1$

for a scalar function $u$, and

$\tilde{\mathbf{u}}(\mathbf{x}) = \mathbf{N}_u(\mathbf{x})\,\bar{\mathbf{u}}$ (6)$_2$

for a vectorial function $\mathbf{u}$. The $N_{ui}$ are linearly independent functions called basis or shape functions. As indicated in Lecture 4, $\mathbf{N}_u$ is a matrix (or a row vector, if appropriate) formed of the $N_{ui}$, and $\bar{\mathbf{u}} = [\bar{u}_1, \bar{u}_2, \ldots, \bar{u}_n]^T$ are unknown or free parameters (scalar values, and in some cases functions) that are to be determined in some good-fit sense. If $\mathbf{u}$ and $\tilde{\mathbf{u}}$ are $p \times 1$ vectors and $\bar{\mathbf{u}}$ is a $pn \times 1$ vector, then $\mathbf{N}_u(\mathbf{x})$ is a $p \times pn$ matrix. For each $r$, $1 \le r \le p$, the $r$-th row of $\mathbf{N}_u(\mathbf{x})$ has $n$ contiguous non-zero elements, the first of which is the $(n(r-1)+1)$-th entry in the row.
Using (6), we can approximate the solution inside the domain $\Omega$ or on its boundary $\Gamma$. Hence, trial function methods are often classified into interior and boundary procedures. In the case of the interior method, the basis functions $N_{ui}$ are chosen to satisfy the boundary conditions (1)$_2$, so that $\tilde{u}$ satisfies the boundary conditions for all $\bar{u}_i$. Here, the parameters $\bar{u}_i$ are to be determined such that the differential equation(s) (1)$_1$ are satisfied inside $\Omega$ in some approximate sense. In the boundary methods, $\tilde{u}$ is chosen to satisfy the governing differential equations (1)$_1$, but not the boundary conditions. The problem is then reduced to that of selecting the parameters $\bar{u}_i$ such that the boundary conditions are approximated. (Compare also with the second method applied in Problem 1 of Lecture 3.) Another possibility is the combination of these methods, for which the trial solution $\tilde{u}$ satisfies neither the boundary conditions (1)$_2$ nor the equations (1)$_1$.
According to the procedure for the determination of the parameters $\bar{u}_i$, the methods can be categorized as belonging to either residual (or weighted residual) or variational methods.
Weighted residual methods
In the residual methods, the constants (functions) $\bar{u}_i$ are chosen such that an error term, or residual, is zero at selected points, is zero in an average sense (convergence in the mean), or is minimized in some fashion. For Eqs. (1) the residuals for the interior and the boundary, respectively, can be expressed as

$\mathbf{R}_V = \mathbf{A}\tilde{\mathbf{u}} - \mathbf{f}, \qquad \mathbf{R}_S = \mathbf{B}\tilde{\mathbf{u}} - \mathbf{g}$, (7)$_1$

where the $p \times 1$ vector $\mathbf{u}$ has been replaced by the approximation $\tilde{\mathbf{u}}$, and $\mathbf{R}_V$ and $\mathbf{R}_S$ are $p \times 1$ vectors. As shown, the subscripts are often dropped. For solids and beams, Eq. (7) would read (see Eqs. (54)-(56) of Lecture 4 and Eq. (2e) of Appendix 1)

$\mathbf{R}_V = \mathbf{C}^T\boldsymbol{\sigma}\{\tilde{\mathbf{u}}\} + \bar{\mathbf{f}}$  or  $R_V = EI\,\tilde{w}'''' - p_z$,
$\mathbf{R}_S = \tilde{\mathbf{u}} - \mathbf{g}$ on $S_u$ and/or $\mathbf{R}_S = \mathbf{n}\,\boldsymbol{\sigma}\{\tilde{\mathbf{u}}\} - \bar{\mathbf{p}} = \tilde{\mathbf{t}} - \bar{\mathbf{p}}$ on $S_p$,  or  $R_S \in \{\tilde{w} - \bar{w},\ \tilde{w}' - \bar{\theta}\}$ on $S_u$ and/or $R_S \in \{\tilde{T} - \bar{T},\ \tilde{M} - \bar{M}\}$ on $S_p$, (7)$_2$

where $\bar{w}$, $\bar{\theta}$, $\bar{T}$, $\bar{M}$ denote prescribed values of the deflection, slope of the beam axis, shear force and bending moment, respectively, at the ends of a beam.
As indicated above, the basis functions are usually chosen to satisfy either the boundary conditions or the governing equations. In either case, one of the residuals is zero. For example, with the interior method, if $\mathbf{N}_u$ is chosen to satisfy all of the boundary conditions, then $\mathbf{R}_S = \mathbf{0}$.
There are many techniques for selecting the $\bar{u}_i$ to minimize the residual or make it zero. Several procedures are described in this Lecture. Frequently, the selection of the $\bar{u}_i$ is based on a scalar function of $R$ expressed in the form

(Interior method) $\displaystyle\int_\Omega W_i\, h_1(R_V)\, dV = 0, \quad i = 1, 2, \ldots, n$, (8)$_1$

and

(Boundary method) $\displaystyle\int_\Gamma W_i\, h_2(R_S)\, dS = 0, \quad i = 1, 2, \ldots, n$, (8)$_2$

where $h_1$, $h_2$ are prescribed functions of $R_V$, $R_S$, the $W_i$ are weights or independent weighting functions, and $n$ is equal to the number of unknown coefficients $\bar{u}_i$ in (6). Equations (8)$_1$ or (8)$_2$, respectively, represent a set of $n$ simultaneous equations to be solved for the coefficients $\bar{u}_i$. (back to Lecture 7)
For a general $p$-dimensional problem, the residual $\mathbf{R}$ is a $p \times 1$ vector. If the mappings $h_1$, $h_2$ are chosen such that $h_1(\mathbf{R}_V)$ or $h_2(\mathbf{R}_S)$, respectively, are also $p \times 1$ vectors, then each element of $h_i(\mathbf{R})$ satisfies $n$ conditions similar to Eq. (8). The weighted residual method in this case may be expressed as

$\displaystyle\int_\Omega \mathbf{W}^{(k)} h_1(\mathbf{R}_V)\, dV = \mathbf{0}$, or $\displaystyle\int_\Gamma \mathbf{W}^{(k)} h_2(\mathbf{R}_S)\, dS = \mathbf{0}, \quad k = 1, 2, \ldots, n$, (9)

where $\mathbf{W}^{(k)}$ is a diagonal matrix with $p$ non-zero entries $W_1^{(k)}, W_2^{(k)}, \ldots, W_p^{(k)}$. Eq. (9) represents $np$ equations to be solved for the $np$ unknown coefficients $\bar{u}_i$, $1 \le i \le np$.
The weighted-residual method and its variations are sometimes called error distribution, projection, or assumed-mode methods.
Remark 1. Here we briefly expose the mathematical foundations of projection methods. Let a linear operator $A$ map Hilbert space $H_1$ into Hilbert space $H_2$. Let its domain of definition $D_A$ be dense in $H_1$ and its range be dense in $H_2$. We choose a basis $\{\varphi_n\}$ in $H_1$ such that every linear combination of the elements of this basis belongs to $D_A$. We choose another system of functions $\{\psi_n\}$ which has the following properties:
1. $\psi_n \in H_2$, $n = 1, 2, \ldots$;
2. for arbitrary $k$ the functions $\psi_1, \psi_2, \ldots, \psi_k$ are linearly independent;
3. the system $\{\psi_k\}$ is complete in $H_2$.
The subspace of $H_2$ generated by the functions $\psi_1, \psi_2, \ldots, \psi_n$ will be denoted by $M_n$. Let $Q_n$ be the projection operator that maps $H_2$ onto $M_n$. (If $L$ is a closed subspace of a Hilbert space $H$, then every element (function) $u \in H$ can be uniquely resolved into the sum $u = v + z$, where $v \in L$, $z \perp L$, i.e. $z$ is orthogonal to every element of the subspace $L$.)
The projection method for the solution of the equation

$Au = f, \quad u \in H_1, \ f \in H_2$, (10)

consists in the following: the approximate solution is sought in the form

$u_n = \sum_{k=1}^{n} a_k\,\varphi_k$ (11)

and the coefficients $a_k$ are determined so that they satisfy the equation

$Q_n A u_n = Q_n f$. (12)

Generally, projection operators can be defined using the relation

$Q_n u = \sum_{k=1}^{n} l_k(u)\,\psi_k$, (13)

where the $l_k$ are functionals that form a bi-orthogonal basis to the basis $\{\psi_k\}$. Then, from the equation

$Q_n(Au_n - f) = 0$ (14)

and from (13), it follows that

$l_k(Au_n - f) = 0, \quad k = 1, 2, \ldots, n$. (15)

As can be seen, the conditions (8) represent special cases of Eq. (15).
The relation (13) simplifies if the functionals $l_k(u)$ are linear. Then, as follows from the Riesz theorem, these functionals have the form of a scalar product $(\psi_k, u)$. Eq. (13) then provides

$Q_n u = \sum_{k=1}^{n} (u, \psi_k)\,\psi_k$, (16)

which clearly demonstrates the projection of an element $u$ onto the subspace $M_n$. The conditions (15) then take the form

$(Au_n - f, \psi_k) = 0, \quad k = 1, 2, \ldots, n$ (17)

(compare also with Eq. (30) of Lecture 3).
Collocation method
One method for selecting the $\bar{u}_i$, termed collocation, is to select as many points in $\Omega$ (in the domain) as there are unknown parameters $\bar{u}_i$ and then determine the parameters such that the residual is zero at these points. Thus, for this point-fitting technique, a scalar residual $R_V$ is set equal to zero at $n$ points (the so-called collocation points). This leads to $n$ simultaneous algebraic equations with the $\bar{u}_i$ as unknowns. The collocation points are selected to be spread reasonably evenly over $\Omega$. In terms of Eq. (8)$_1$, the collocation approach is equivalent to setting

$h_1(R_V) = R_V, \qquad W_j = \delta(\mathbf{x} - \mathbf{x}_j), \quad j = 1, 2, \ldots, n$, (18)

where $\delta(\mathbf{x} - \mathbf{x}_j)$ is the Dirac delta function, which by definition satisfies (back to Lecture 7) (back to Lecture 8) (back to Lecture 9-10)

$\displaystyle\int_\Omega R(\mathbf{x})\,\delta(\mathbf{x} - \mathbf{x}_j)\, dV = R(\mathbf{x}_j)$ for the 3D case, $\displaystyle\int R(x)\,\delta(x - x_j)\, dx = R(x_j)$ for the 1D case. (*)

Eq. (9) then provides

$R_V(\mathbf{x}_j) = 0, \quad j = 1, 2, \ldots, n$. (19)
For a linear elastic body in, say, $p = 3$ dimensional space, we have from Eq. (7)$_2$

$\mathbf{R}_V(\tilde{\mathbf{u}}) = G\left[\nabla^2\tilde{\mathbf{u}} + \dfrac{1}{1-2\nu}\,\operatorname{grad}\operatorname{div}\tilde{\mathbf{u}}\right] + \bar{\mathbf{f}}$,

or, using the notation from Lecture 4, see Eqs. (54) and (55),

$\mathbf{R}_V = \mathbf{C}^T\boldsymbol{\sigma}\{\tilde{\mathbf{u}}\} + \bar{\mathbf{f}}$. (20)

Use the approximate solution in the form (6)$_2$. Here $\tilde{\mathbf{u}}$ is a $p \times 1$ ($3 \times 1$) vector, $\mathbf{N}_u$ is a $p \times pn$ ($3 \times 3n$) matrix and $\bar{\mathbf{u}}$ is a $pn \times 1$ ($3n \times 1$) vector. Suppose the collocation points in $\Omega$ are $\mathbf{x}_j$, $j = 1, 2, \ldots, n$. Then the collocation method leads to the system of equations

$\mathbf{C}^T\boldsymbol{\sigma}\{\mathbf{N}_u(\mathbf{x}_j)\,\bar{\mathbf{u}}\} + \bar{\mathbf{f}}(\mathbf{x}_j) = \mathbf{R}_V(\mathbf{x}_j) = \mathbf{0}$, or, shortly, $\mathbf{k}_u\bar{\mathbf{u}} = \mathbf{f}_u$. (21)

As expressed here, $\mathbf{k}_u$ appears to correspond to a stiffness matrix from the FEM. This is only a formal comparison, however, since the elements of $\bar{\mathbf{u}}$ are not ordinarily equal to nodal displacements, as they would be for a stiffness matrix of the FEM! (back to Lecture 8)
Example 1. Consider the fixed-hinged beam subjected to a linearly varying load $p_z = p_0(1 - x/L)$. For a beam with constant $EI$, the governing differential equation is

$EI\,\dfrac{d^4 w(x)}{dx^4} = p_z$. (1a)

Use $\tilde{w} = \mathbf{N}_u\bar{\mathbf{w}}$. The collocation method leads to the formulation

$R(x_j) = EI\,\dfrac{d^4\tilde{w}}{dx^4}\bigg|_{x_j} - p_z(x_j) = EI\,\dfrac{d^4\mathbf{N}_u}{dx^4}(x_j)\,\bar{\mathbf{w}} - p_z(x_j) = 0, \quad j = 1, 2, \ldots, n$. (1b)

Use a single-parameter polynomial approximation of the deflection $w$, say

$\tilde{w} = \bar{w}_1\left(a_1 + a_2\xi + a_3\xi^2 + a_4\xi^3 + a_5\xi^4\right), \quad \xi = x/L$, (1c)

where $\bar{w}_1$ is the free parameter and $a_i$, $i = 1, \ldots, 5$, are parameters that will be identified such that (1c) satisfies the boundary conditions.
At the left-hand (fixed) end it must hold that

$\tilde{w}(0) = 0, \quad \tilde{w}'(0) = 0 \quad\Rightarrow\quad a_1 = a_2 = 0$. (1d)

Similarly, at the right-hand (hinged) end it must hold that $\tilde{w}(\xi=1) = 0$ and $\tilde{w}''(\xi=1) = 0$, i.e.

$a_3 + a_4 + a_5 = 0, \qquad 2a_3 + 6a_4 + 12a_5 = 0$. (1e)

An infinite number of combinations of $a_3$, $a_4$ and $a_5$ will satisfy (1e). We choose, for example, $a_3 = 3$, $a_4 = -5$, $a_5 = 2$. Then

$\tilde{w} = \bar{w}_1\left(3\xi^2 - 5\xi^3 + 2\xi^4\right)$. (1f)

We could have chosen an approximation that does not satisfy all of the boundary conditions; however, it is better to select a trial function for the collocation method that satisfies as many of the boundary conditions as possible.
The applied loading can be expressed as

$p_z = p_0(1 - \xi)$. (1g)

Choose a single collocation point at $\xi = 1/2$. Substituting (1f) and (1g) into (1a), we obtain

$\dfrac{48\,EI}{L^4}\,\bar{w}_1 - \dfrac{p_0}{2} = 0$ (1h)

and

$\bar{w}_1 = \dfrac{p_0 L^4}{96\,EI}$. (1i)

From (1f), the approximate deflection is

$\tilde{w} = \dfrac{p_0 L^4}{96\,EI}\left(3\xi^2 - 5\xi^3 + 2\xi^4\right)$, (1j)

which can be compared to the exact solution

$w = \dfrac{p_0 L^4}{120\,EI}\left(4\xi^2 - 8\xi^3 + 5\xi^4 - \xi^5\right)$. (1k)
A comparison of the deflections (1j) and (1k) gives Table 1.

Tab. 1

  xi = x/L | exact w * EI/(p0 L^4) | approximate w~ * EI/(p0 L^4) | Error (%)
  1/4      | 1.196e-3              | 1.22e-3                      |  2.0
  1/2      | 2.34375e-3            | 2.604e-3                     | 11.1
  3/4      | 1.83105e-3            | 2.19e-3                      | 19.6
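The computation of Example 1 is short enough to check numerically. The sketch below (plain Python of ours, with $EI = L = p_0 = 1$ so that deflections are measured in units of $p_0L^4/EI$) solves the one-point collocation equation and recomputes the comparison of Table 1; the last error comes out as 20.0 % rather than 19.6 % only because the table rounds the deflections before forming the ratio.

```python
# One-point collocation for EI w'''' = p0 (1 - x/L), fixed-hinged beam.
# Units chosen so that EI = L = p0 = 1; w is then in units of p0 L^4 / EI.

def trial(xi, w1):
    """Trial deflection (1f): satisfies all four boundary conditions."""
    return w1 * (3*xi**2 - 5*xi**3 + 2*xi**4)

def exact(xi):
    """Exact solution (1k) of the fixed-hinged beam under triangular load."""
    return (4*xi**2 - 8*xi**3 + 5*xi**4 - xi**5) / 120.0

# Residual of (1a): d4(trial)/dxi4 = 48*w1, load = 1 - xi.
# Collocation at xi = 1/2:  48*w1 - (1 - 1/2) = 0.
w1 = (1.0 - 0.5) / 48.0          # = 1/96, Eq. (1i)

for xi in (0.25, 0.5, 0.75):
    err = 100.0 * (trial(xi, w1) - exact(xi)) / exact(xi)
    print(f"xi = {xi}: exact = {exact(xi):.5e}, approx = {trial(xi, w1):.5e}, "
          f"error = {err:.1f} %")
```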
Orthogonal collocation method
The collocation method, which only requires that the residual be evaluated at the collocation points, would appear to be the simplest of the weighted residual methods. An improvement of this procedure is to select the collocation points judiciously. A proper choice makes the computations more convenient and the results more accurate. One such method, which is discussed in several textbooks, is the orthogonal collocation technique proposed by Lanczos. With this method, the collocation points are chosen to be the roots of orthogonal polynomials such as the Legendre polynomials. Further simplicity is achieved if the constants $\bar{\mathbf{u}}$ in the approximate solutions are replaced by the values of the approximate solutions at the collocation points; these values then become the coefficients to be determined.
Remark 2. Legendre polynomials $P_n(x)$ and Tchebyshev polynomials are orthogonal in the space $L_2(-1,1)$. The Legendre polynomials have the form

$P_n(x) = \dfrac{1\cdot 3\cdot 5\cdots(2n-1)}{n!}\left[x^n - \dfrac{n(n-1)}{2\,(2n-1)}\,x^{n-2} + \dfrac{n(n-1)(n-2)(n-3)}{2\cdot 4\,(2n-1)(2n-3)}\,x^{n-4} - \ldots\right]$

and it holds that

$\displaystyle\int_{-1}^{1} P_k(x)\,P_i(x)\, dx = 0 \quad \text{for } i \ne k$.

Tchebyshev polynomials of the first kind $T_n(x)$ have the form (in the scaled normalization used here)

$T_n(x) = \dfrac{1}{2^{n-1}}\cos\left(n\arccos x\right), \quad |x| \le 1$,

and they are orthogonal with the weight $1/\sqrt{1-x^2}$, i.e.

$\displaystyle\int_{-1}^{1} \dfrac{T_k(x)\,T_i(x)}{\sqrt{1-x^2}}\, dx = 0 \quad \text{for } i \ne k$.

The first five Tchebyshev polynomials $T_n(x)$ are shown in Fig. 1. The roots $x_j$ of the Tchebyshev polynomial $T_{n+1}$ are

$x_j = \cos\dfrac{(2j-1)\,\pi}{2(n+1)}, \quad j = 1, \ldots, n+1$.

Fig. 1
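The root formula above is easy to verify; a minimal check of ours (using the standard unnormalized polynomials $\cos(n\arccos x)$ via their three-term recurrence, which have the same roots as the scaled form):

```python
import math

def chebyshev_T(n, x):
    """Chebyshev polynomial of the first kind via the recurrence
    T_0 = 1, T_1 = x, T_{k+1} = 2x T_k - T_{k-1} (unnormalized form)."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

n = 4
# Roots of T_{n+1}, used as orthogonal collocation points:
roots = [math.cos((2*j - 1) * math.pi / (2*(n + 1))) for j in range(1, n + 2)]

for x in roots:
    assert abs(chebyshev_T(n + 1, x)) < 1e-12
print("all", len(roots), "roots check out")
```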

Least squares collocation method


A useful application of the least squares technique is to couple it with collocation. This method minimizes the sum of the squares of the residuals at the collocation points. In this case, the number of collocation points is not necessarily equal to the number of free parameters. Suppose $m$ collocation points are selected at $x_j$, $j = 1, 2, \ldots, m$; then the least squares method requires that

$\displaystyle\sum_{j=1}^{m} R^2(x_j)$ be a minimum. (22)
Consider an approximate solution in the form of Eq. (6)$_1$. Then $R(x_j) = A\tilde{u}(x_j) - f(x_j)$. If $A$ is a linear operator, then $R(x_j)$ is a linear function of the $\bar{u}_i$, $i = 1, 2, \ldots, n$. Let this linear function be $R(x_j) = \mathbf{p}_j^T\bar{\mathbf{u}} - b_j$; then

$\displaystyle\sum_{j=1}^{m} R^2(x_j) = \bar{\mathbf{u}}^T\mathbf{P}^T\mathbf{P}\,\bar{\mathbf{u}} - 2\,\bar{\mathbf{u}}^T\mathbf{P}^T\mathbf{b} + \mathbf{b}^T\mathbf{b}$, (23)

where

$\mathbf{P} = [\mathbf{p}_1, \mathbf{p}_2, \ldots, \mathbf{p}_m]^T, \qquad \mathbf{b} = [b_1, b_2, \ldots, b_m]^T$.

The necessary condition to achieve a minimum is

$\dfrac{\partial}{\partial\bar{\mathbf{u}}}\left(\bar{\mathbf{u}}^T\mathbf{P}^T\mathbf{P}\,\bar{\mathbf{u}} - 2\,\bar{\mathbf{u}}^T\mathbf{P}^T\mathbf{b} + \mathbf{b}^T\mathbf{b}\right) = 2\left(\mathbf{P}^T\mathbf{P}\,\bar{\mathbf{u}} - \mathbf{P}^T\mathbf{b}\right) = \mathbf{0}$. (24)

This constitutes a set of simultaneous linear equations from which the $\bar{u}_i$ can be determined. For a vector residual $\mathbf{R}$, the minimization process is repeated $p$ times.
Example 2. Return to the beam of Example 1 and use the same form of approximate solution, i.e. Eq. (1f). Substitute this into the governing beam equation, and use $x_1 = L/4$, $x_2 = L/2$, $x_3 = 3L/4$ as the collocation points. The residuals at these points are

$R(x_1) = \dfrac{48EI}{L^4}\,\bar{w}_1 - \dfrac{3p_0}{4}, \quad R(x_2) = \dfrac{48EI}{L^4}\,\bar{w}_1 - \dfrac{p_0}{2}, \quad R(x_3) = \dfrac{48EI}{L^4}\,\bar{w}_1 - \dfrac{p_0}{4}$. (2a)

Then from (23) it follows that

$\mathbf{P} = \dfrac{48EI}{L^4}\,[1, 1, 1]^T, \qquad \mathbf{b} = p_0\left[\dfrac{3}{4}, \dfrac{1}{2}, \dfrac{1}{4}\right]^T$. (2b)

And from (24), the linear equation for $\bar{w}_1$ is

$3\left(\dfrac{48EI}{L^4}\right)^2\bar{w}_1 = \dfrac{48EI}{L^4}\cdot\dfrac{3}{2}\,p_0$, whence $\bar{w}_1 = \dfrac{p_0 L^4}{96\,EI}$. (2c)

This is the same result as that obtained in Example 1.
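For a single free parameter the normal equation (24) is scalar. The following sketch (ours; $EI = L = p_0 = 1$, illustrative names) assembles $\mathbf{P}$ and $\mathbf{b}$ of (2b) for the given collocation points and solves $\mathbf{P}^T\mathbf{P}\,\bar{w}_1 = \mathbf{P}^T\mathbf{b}$:

```python
# Least squares collocation, Eq. (24), for the beam of Example 2.
# One free parameter: R(x_j) = 48*w1 - (1 - xi_j), so p_j = 48, b_j = 1 - xi_j.

def least_squares_collocation(xis):
    P = [48.0 for _ in xis]          # entries p_j of Eq. (2b)
    b = [1.0 - xi for xi in xis]     # load values at the collocation points
    # Normal equation (24): (P^T P) w1 = P^T b  -- a 1x1 system here.
    return sum(p * bj for p, bj in zip(P, b)) / sum(p * p for p in P)

w1 = least_squares_collocation([0.25, 0.5, 0.75])
print(w1 * 96)   # close to 1: w1 = p0 L^4 / (96 EI), as in Example 1
```

Note that the number of points (three) exceeds the number of parameters (one); the normal equations remain a well-posed $n \times n$ system.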
Mini-max method
One possibility for minimizing the residual is to find the free parameters $\bar{\mathbf{u}}$ such that the maximum residual at the selected points is minimized. This approach is often referred to as the min-max, minimax, or minimum absolute error method.
Suppose the residuals $R$ are sampled at $m$ locations $x_j$, $j = 1, 2, \ldots, m$ (in $\Omega$). Often it makes sense to determine these points in the same fashion as with orthogonal collocation. With the minimax method, the coefficients $\bar{\mathbf{u}}$ are then selected such that the maximum of the absolute values of these residuals, $\max_j |R(x_j)|$, is a minimum, i.e.

$\min_{\bar{\mathbf{u}}}\ \max_{j=1,\ldots,m}\left|R(x_j)\right| = \min_{\bar{\mathbf{u}}}\ \max_{j=1,\ldots,m}\left|R_j\right|$. (25)

As with the least squares method, the number of collocation points $m$ is not necessarily equal to $n$, the number of unknown coefficients.
This is a useful method since it can be reduced to a problem in linear programming. To convert the problem defined by (25) to a linear programming form, set an unknown number $\mu$ equal to the (unknown) maximum absolute value of the $R_j$, $j = 1, 2, \ldots, m$, hence

$\mu = \max_{j=1,\ldots,m}\left|R_j\right|$. (26)

This is equivalent to requiring that the $R_j$ satisfy

$\left|R_j\right| \le \mu$, or $R_j - \mu \le 0$ and $R_j + \mu \ge 0$. (27)

Now the min-max approximation problem is one of finding the unknown coefficients $\bar{u}_i$ such that $\mu$ is minimized subject to the conditions (constraints) (27). This problem statement is readily cast into a standard linear programming format.
The min-max method, as well as the least squares method, can be extended to apply to more complicated problems, taking advantage of the option to introduce constraints.
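With a single free parameter the linear program can be solved by hand: every residual $R_j = 48\,\bar{w}_1 - c_j$ has the same slope in $\bar{w}_1$, so $\max_j|R_j|$ is minimized where the two extreme residuals balance, i.e. $48\,\bar{w}_1 = (\min_j c_j + \max_j c_j)/2$. A sketch of ours under these assumptions ($EI = L = p_0 = 1$, illustrative names):

```python
# Min-max fit of the one-parameter beam residual R_j = 48*w1 - c_j,
# c_j = 1 - xi_j. Since all residuals have the same slope in w1, the
# optimum balances the extreme residuals: 48*w1 = (min(c) + max(c)) / 2.

def minimax_one_parameter(xis):
    c = [1.0 - xi for xi in xis]
    return (min(c) + max(c)) / 2.0 / 48.0

def max_abs_residual(w1, xis):
    return max(abs(48.0 * w1 - (1.0 - xi)) for xi in xis)

pts = [0.25, 0.5, 0.75]
w1 = minimax_one_parameter(pts)      # = 1/96 for these points

# Sanity check: perturbing w1 either way worsens the worst residual.
best = max_abs_residual(w1, pts)
assert best <= max_abs_residual(w1 + 1e-4, pts)
assert best <= max_abs_residual(w1 - 1e-4, pts)
```

For these symmetric points the minimax answer happens to coincide with the collocation and least squares results; with several parameters one would hand the constraints (27) to an LP solver instead.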
Subdomain method
Suppose the domain $\Omega$ is subdivided into as many subdomains as there are unknown parameters $\bar{u}_i$. Then choose the parameters so that the average value of the residual over each subdomain is zero. Thus, if there are $np$ subdomains $\Omega_j$, the integral of the residual over each subdomain is set equal to zero, i.e.

$\displaystyle\int_{\Omega_j} \mathbf{R}\, dV = \mathbf{0}, \quad j = 1, 2, \ldots, np$. (28)

With $\mathbf{R} = \mathbf{R}_V$ and the approximate solution of the form (6) we have

$\displaystyle\int_{\Omega_j}\left(\mathbf{A}\mathbf{N}_u\bar{\mathbf{u}} - \mathbf{f}\right) dV = \mathbf{0}, \quad j = 1, 2, \ldots, np$. (29)

Eq. (29) leads to $np$ simultaneous equations for the $np$ unknowns $\bar{u}_i$, $i = 1, 2, \ldots, np$.
In terms of (9), this subdomain method is equivalent to setting

$h(\mathbf{R}) = \mathbf{R}, \qquad W_j = \begin{cases} 1 & \text{if } \mathbf{x} \in \Omega_j, \\ 0 & \text{if } \mathbf{x} \notin \Omega_j. \end{cases}$ (30)

This approach is sometimes referred to as the method of integral relations. The subdomains can be chosen to be continuously adjacent, overlapping, separated, of equal size, or of different sizes.
Example 3. Consider again the beam from Example 1. The linear simultaneous equations corresponding to the subdomain method are readily obtained. For a simple beam, the residual is

$R = \dfrac{d^2}{dx^2}\left(EI\,\dfrac{d^2\tilde{w}}{dx^2}\right) - p_z$. (3a)

Use $\tilde{w} = \mathbf{N}_u\bar{\mathbf{w}}$, so that (28) leads to the linear simultaneous equations

$\underbrace{\int_{l_j}\dfrac{d^2}{dx^2}\left(EI\,\dfrac{d^2\mathbf{N}_u}{dx^2}\right) dx}_{\mathbf{k}_u}\ \bar{\mathbf{w}} - \underbrace{\int_{l_j} p_z\, dx}_{\mathbf{p}_u} = 0, \quad j = 1, 2, \ldots, n$, (3b)

or $\mathbf{k}_u\bar{\mathbf{w}} = \mathbf{p}_u$, where the beam is considered to be formed of segments $l_j$. For the beam of Example 1, suppose there is a single subdomain, and Eq. (1f) is used to approximate the deflection. Then Eq. (3b) becomes

$\displaystyle\int_0^1\left[\dfrac{48\,EI}{L^4}\,\bar{w}_1 - p_0(1 - \xi)\right] d\xi = 0$, (3c)

i.e.

$\dfrac{48\,EI}{L^4}\,\bar{w}_1 - \dfrac{p_0}{2} = 0$. (3d)

This is the same result as obtained in Example 1.
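Eq. (3c) can be checked with any quadrature. A minimal sketch of ours using Simpson's rule over the single subdomain $(0, 1)$ (with $EI = L = p_0 = 1$, so the residual is $R(\xi) = 48\,\bar{w}_1 - (1-\xi)$):

```python
# Subdomain method for the beam: require  int_0^1 [48*w1 - (1 - xi)] dxi = 0.

def simpson(f, a, b, n=100):
    """Composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += sum((4 if k % 2 else 2) * f(a + k*h) for k in range(1, n))
    return s * h / 3.0

# The residual is linear in w1:  int R dxi = 48*w1 - int (1 - xi) dxi = 0.
load_integral = simpson(lambda xi: 1.0 - xi, 0.0, 1.0)   # = 1/2
w1 = load_integral / 48.0                                # = 1/96, Eq. (3d)
print(w1)
```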
Orthogonality methods
A variety of techniques can be classified as orthogonality methods and, in principle, they are based upon the fundamental lemma of variational calculus mentioned already in Lecture 1 and also at the beginning of this Lecture, see Eqs. (2) and (3), and also Remark 1.
Define a set of linearly independent functions $\psi_j(\mathbf{x})$, $j = 1, 2, \ldots, m$, in the domain $\Omega$. The integrals

$\displaystyle\int_\Omega \psi_j\, R\, dV = 0, \quad j = 1, 2, \ldots, m$, (31)

which form a system of $m$ equations, are referred to as the orthogonality conditions. For a $p$-dimensional problem, let $\boldsymbol{\Psi}^{(k)}$ be a $p \times p$ diagonal matrix of functions for $k = 1, 2, \ldots, m$. The orthogonality method requires that

$\displaystyle\int_\Omega \boldsymbol{\Psi}^{(k)}\mathbf{R}\, dV = \mathbf{0}, \quad k = 1, 2, \ldots, m$, (32)

with the total number of equations being $mp$. Equations (32) are obtained from Eq. (9) by choosing $h$ to be the identity mapping. Observe that if the functions $\psi_j(\mathbf{x})$ are chosen to be the basis functions $N_{ui}$ from Eq. (6), we arrive at the Galerkin method described in Lecture 3. There are several other useful methods, e.g. least squares and the method of moments, which employ the orthogonality conditions (31) or (32), respectively.
Methods of moments
In what is called the method of moments, Eq. (31) or (32), respectively, is used with the $\psi_j$, $j = 1, 2, \ldots, m$, selected as the first $m$ members of a complete set of functions; sets such as the ordinary polynomials, the trigonometric functions, or the Tchebyshev polynomials are complete. If the ordinary polynomials $1, x, x^2, \ldots, x^{m-1}$ are used, then $\psi_j = x^{j-1}$. Successively higher moments of the residual are required to be zero. (Apparently, the term moment refers to the mechanical interpretation of the integrals (31).) Note that for the first approximation, i.e. $\psi_1 = 1$, the method of moments is the same as the subdomain method with the subdomain equal to the whole domain $\Omega$. The method of moments is sometimes referred to as the integral method of von Karman and Pohlhausen.
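As an illustration of the method of moments (this two-parameter example is ours, not part of the lecture's example set): for the beam of Example 1 take the trial $\tilde{w} = a\,N_1 + b\,N_2$ with $N_1 = 3\xi^2 - 5\xi^3 + 2\xi^4$ from Eq. (1f) and the quintic $N_2 = 2\xi^2 - 2\xi^3 - \xi^4 + \xi^5$, which also satisfies all four boundary conditions. Because $A N_1 = 48$ and $A N_2 = 120\xi - 24$ span the same space as the load $1 - \xi$, the two moment conditions with $\psi_1 = 1$, $\psi_2 = \xi$ reproduce the exact solution (1k):

```python
from fractions import Fraction as F

# Method of moments, psi_1 = 1, psi_2 = xi, for w'''' = 1 - xi (EI = L = p0 = 1).
# Trial: w = a*N1 + b*N2, N1 = 3x^2-5x^3+2x^4, N2 = 2x^2-2x^3-x^4+x^5.
# Fourth derivatives: A N1 = 48, A N2 = 120*xi - 24.

def moment(j, poly):
    """Exact int_0^1 xi^j * poly(xi) dxi for poly = [c0, c1, ...]."""
    return sum(F(c, j + k + 1) for k, c in enumerate(poly))

AN1, AN2, load = [48], [-24, 120], [1, -1]   # coefficient lists in xi

# Conditions int psi_j (a*AN1 + b*AN2 - load) dxi = 0, j = 0, 1 -> 2x2 system.
rows = [(moment(j, AN1), moment(j, AN2), moment(j, load)) for j in (0, 1)]
(k11, k12, p1), (k21, k22, p2) = rows
det = k11*k22 - k12*k21
a = (p1*k22 - p2*k12) / det
b = (k11*p2 - k21*p1) / det
print(a, b)   # Fractions 1/60 and -1/120; a*N1 + b*N2 equals the exact (1k)
```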
Least squares method
The least squares method is suitable for the solution of some linear problems in a Hilbert space. In the section devoted to the least squares collocation method, the least squares method was used to minimize the sum of the squared residuals at selected points. In this section, the integral form of the least squares method is considered. Here, the integral (over the domain $\Omega$) of the weighted square of the residual is required to be a minimum. That is,

$\displaystyle\int_\Omega R^2\, dV$ or $\displaystyle\int_\Omega W R^2\, dV$ is minimized. (33)

Choose $W(\mathbf{x})$ to be positive, so that the integrand is positive. Substitution of Eq. (6) into (33) gives an integral containing $\bar{\mathbf{u}}$ as unknowns.

Remark 3. Note that in the second case in Eq. (33) we are working in the Hilbert space $L_2(\Omega; W)$, where $W$ is a certain non-negative function, the same for all elements of the space. The scalar product in $L_2(\Omega; W)$ is defined as

$(u, v)_W = \displaystyle\int_\Omega W\,u\,v\, dV$ (34)

and the norm is

$\|u\|_W^2 = \displaystyle\int_\Omega W\,u^2\, dV$. (35)

For a $p$-dimensional residual vector $\mathbf{R}$, write (33) in matrix form as

$\displaystyle\int_\Omega \mathbf{R}^T\mathbf{R}\, dV$ or $\displaystyle\int_\Omega \mathbf{R}^T\mathbf{W}\mathbf{R}\, dV$, respectively, (36)

where $\mathbf{W}$ is a diagonal weighting matrix with positive elements $W_1, W_2, \ldots, W_p$.
Using the relations (6) and (7), we can write Eq. (36) as follows:

$\displaystyle\int_\Omega\left(\mathbf{A}\mathbf{N}_u\bar{\mathbf{u}} - \mathbf{f}\right)^T\mathbf{W}\left(\mathbf{A}\mathbf{N}_u\bar{\mathbf{u}} - \mathbf{f}\right) dV = \int_\Omega\left[\bar{\mathbf{u}}^T(\mathbf{A}\mathbf{N}_u)^T\mathbf{W}\mathbf{A}\mathbf{N}_u\bar{\mathbf{u}} - 2\,\bar{\mathbf{u}}^T(\mathbf{A}\mathbf{N}_u)^T\mathbf{W}\mathbf{f} + \mathbf{f}^T\mathbf{W}\mathbf{f}\right] dV$. (37)

The necessary condition that the minimum in Eq. (37) be achieved is found by setting the derivative of the integral with respect to $\bar{\mathbf{u}}$ equal to zero. Thus

$\dfrac{\partial}{\partial\bar{\mathbf{u}}}\displaystyle\int_\Omega\left[\bar{\mathbf{u}}^T(\mathbf{A}\mathbf{N}_u)^T\mathbf{W}\mathbf{A}\mathbf{N}_u\bar{\mathbf{u}} - 2\,\bar{\mathbf{u}}^T(\mathbf{A}\mathbf{N}_u)^T\mathbf{W}\mathbf{f} + \mathbf{f}^T\mathbf{W}\mathbf{f}\right] dV = \mathbf{0}$, (38)

wherefrom we get

$\underbrace{2\displaystyle\int_\Omega(\mathbf{A}\mathbf{N}_u)^T\mathbf{W}\mathbf{A}\mathbf{N}_u\, dV}_{2\mathbf{k}_u}\ \bar{\mathbf{u}} - \underbrace{2\displaystyle\int_\Omega(\mathbf{A}\mathbf{N}_u)^T\mathbf{W}\mathbf{f}\, dV}_{2\mathbf{p}_u} = \mathbf{0}$. (39)

These are $np$ simultaneous equations which can be used to find the $np$ unknowns $\bar{u}_i$. Frequently, the weighting functions $W_j$ are set equal to unity.
Equation (39) can be considered a special case of the orthogonality conditions of Eq. (32), which can be put in the form

$\displaystyle\int_\Omega \boldsymbol{\Psi}\,\mathbf{R}\, dV = \mathbf{0}$, (32)$_2$

where $\boldsymbol{\Psi}$ is an $mp \times p$ matrix constructed from the matrices $\boldsymbol{\Psi}^{(k)}$, $k = 1, 2, \ldots, m$, such that the matrix $\boldsymbol{\Psi}^{(k)}$ occupies the rows $p(k-1)+1, \ldots, pk$. If $\boldsymbol{\Psi}$ is set equal to $(\mathbf{A}\mathbf{N}_u)^T\mathbf{W}$, we obtain Eq. (39).
Also, the least squares approach (39) can be obtained from Eq. (9) by using

$h(\mathbf{R}) = \mathbf{R}$, with $\mathbf{W}^{(k)}$ replaced by $\left(\dfrac{\partial\mathbf{R}}{\partial\bar{u}_k}\right)^T\mathbf{W}$. (40)
Example 4. Solve the problem of Example 1 using the least squares method. Based on Eqs. (1b) and (39), with $W$ selected to be 1, the least squares relations are

$\displaystyle\int_0^L \dfrac{d^4\mathbf{N}_u^T}{dx^4}\left(EI\,\dfrac{d^4\mathbf{N}_u}{dx^4}\,\bar{\mathbf{w}} - p_z\right) dx = \mathbf{0}$. (4a)

The approximate solution is sought in the form of Eq. (1f), i.e. $\tilde{w} = \mathbf{N}_u\bar{\mathbf{w}} = \bar{w}_1\left(3\xi^2 - 5\xi^3 + 2\xi^4\right)$. Then

$\dfrac{d^4 N_u}{dx^4} = \dfrac{48}{L^4}$

and (4a) becomes

$\dfrac{48}{L^4}\,EI\,\dfrac{48}{L^4}\,L\,\bar{w}_1 - \dfrac{48}{L^4}\,p_0\,\dfrac{L}{2} = 0$, (4b)

wherefrom $\bar{w}_1 = \dfrac{p_0 L^4}{96\,EI}$, and the approximate solution is the same as Eq. (1j) in Example 1.
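A sketch of ours of the normal equations (39) with $W = 1$ and a second basis function added to (1f), namely the quintic $N_2 = 2\xi^2 - 2\xi^3 - \xi^4 + \xi^5$ (an illustrative choice satisfying all four boundary conditions, not taken from the lecture). Since $A N_1 = 48$ and $A N_2 = 120\xi - 24$ span the load $1 - \xi$, this two-term least squares solution coincides with the exact deflection (1k):

```python
# Integral least squares, Eq. (39), W = 1, EI = L = p0 = 1.
# Basis fourth derivatives: A N1 = 48, A N2 = 120*xi - 24; load f = 1 - xi.

def gauss2(f):
    """Two-point Gauss-Legendre quadrature on (0, 1): exact for cubics."""
    x1 = 0.5 - 0.5 / 3**0.5
    x2 = 0.5 + 0.5 / 3**0.5
    return 0.5 * (f(x1) + f(x2))

AN = [lambda x: 48.0, lambda x: 120.0*x - 24.0]
load = lambda x: 1.0 - x

# k_ij = int AN_i AN_j dxi, p_i = int AN_i f dxi (integrands at most quadratic)
k = [[gauss2(lambda x, i=i, j=j: AN[i](x) * AN[j](x)) for j in range(2)]
     for i in range(2)]
p = [gauss2(lambda x, i=i: AN[i](x) * load(x)) for i in range(2)]

det = k[0][0]*k[1][1] - k[0][1]*k[1][0]
a = (p[0]*k[1][1] - p[1]*k[0][1]) / det
b = (k[0][0]*p[1] - k[1][0]*p[0]) / det
# a ~ 1/60, b ~ -1/120: with these two terms the least squares minimum is
# zero residual, i.e. the exact solution (1k).
```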
Remark 4. Here we present several remarks on the mathematical foundations of the least squares method. Let $A$ be a linear operator defined on a linear set $D_A$, dense in a given Hilbert space $H$. Suppose that the following equation is to be solved:

$Au = f$, (41)

where $f$ is a given function from $H$. For the solution of (41) we apply the least squares method, which consists of the following.
We choose a sequence of linearly independent basis functions $\{\varphi_n\}$, $\varphi_n \in D_A$. An approximate solution is sought in the form

$u_n = \sum_{k=1}^{n} a_k\,\varphi_k$, (42)

where the $a_k$ are constants, which are determined so that the expression

$\|Au_n - f\|^2$ (43)

attains a minimum value. The sequence $\{\varphi_n\}$ is called $A$-complete if the following condition is satisfied: for an arbitrary $u \in D_A$ and an arbitrary number $\varepsilon > 0$ we can find a natural number $n$ and constants $\alpha_1, \alpha_2, \ldots, \alpha_n$ such that

$\|Au - A\bar{u}_n\| < \varepsilon$, where $\bar{u}_n = \sum_{k=1}^{n}\alpha_k\,\varphi_k$. (44)

The condition (43) leads to a system of linear equations with $a_1, a_2, \ldots, a_n$ as unknowns. It holds that

$\|Au_n - f\|^2 = \left\|\sum_{k=1}^{n} a_k\,A\varphi_k - f\right\|^2 = \sum_{k,m=1}^{n} a_k a_m\left(A\varphi_k, A\varphi_m\right) - 2\sum_{k=1}^{n} a_k\left(A\varphi_k, f\right) + (f, f)$. (45)

The values of $a_k$ which render the expression $\|Au_n - f\|^2$ minimal satisfy the equations

$\dfrac{\partial}{\partial a_m}\|Au_n - f\|^2 = 0, \quad m = 1, 2, \ldots, n$. (46)

After differentiation, we obtain the system we are looking for:

$\sum_{k=1}^{n} a_k\left(A\varphi_k, A\varphi_m\right) = \left(f, A\varphi_m\right), \quad m = 1, 2, \ldots, n$. (47)

Note that the system (47) is symmetric. The determinant of the matrix of the system (47) is the Gram determinant of the elements $A\varphi_1, A\varphi_2, \ldots, A\varphi_n$.
Assume that Eq. (41) has only a single solution; in other words, the homogeneous equation $Au = 0$ has only the trivial solution. Then $A\varphi_1, A\varphi_2, \ldots, A\varphi_n$ are linearly independent and the Gram determinant differs from 0. Hence, the system (47) has exactly one solution. The result obtained leads to the lemma:
Lemma 1: If the homogeneous equation $Au = 0$ has only the trivial solution, then the approximate solutions obtained using the least squares method can be constructed for arbitrary $n$ and are determined uniquely.
Sufficient conditions for the convergence of the approximate solutions are given in the following lemma.
Lemma 2: The least squares method provides a sequence of approximate solutions converging to the exact solution $u_0$ if the following conditions are satisfied:
A. The sequence of basis functions is $A$-complete.
B. The equation (41) is solvable.
C. There exists a constant $K$ such that for an arbitrary $u \in D_A$ it holds that

$\|u\| \le K\,\|Au\|$. (48)

The condition C ensures the existence (and boundedness) of the inverse operator $A^{-1}$. According to Lemma 1, the sequence of approximate solutions $\{u_n\}$ can be constructed. Since the condition B ensures the existence of a solution and, according to the condition A, the sequence $\{\varphi_n\}$ is $A$-complete, for a given $\varepsilon > 0$ we can find a natural number $n_0$ and constants $\alpha_1, \alpha_2, \ldots, \alpha_{n_0}$ such that

$\left\|Au_0 - \sum_{k=1}^{n_0}\alpha_k\,A\varphi_k\right\| < \dfrac{\varepsilon}{K}$. (49)

This inequality holds also in the case when for the $\alpha_k$ we substitute the $a_k$ computed from the system (47), because the left-hand side of (49) then attains its minimum. With increasing $n_0$ this minimum does not increase; hence for $n \ge n_0$ it holds that $\|Au_0 - Au_n\| < \varepsilon/K$. It now follows from the condition C that for $n \ge n_0$ we have $\|u_0 - u_n\| < \varepsilon$, i.e. $u_n$ converges to $u_0$.
Suppose that the assumptions of Lemma 2 are satisfied and let $u_n$ be an approximate solution of (41). The inequality (48) gives a simple estimate of the error $u_n - u_0$, namely

$\|u_n - u_0\| \le K\,\|A(u_n - u_0)\| = K\,\|Au_n - Au_0\| = K\,\|Au_n - f\|$. (50)

If we find $u_n$ using the least squares method, then $Au_n$ converges to $f$ as $n \to \infty$. Thus, the inequality (50) enables us to judge something of the error of an approximate solution.
Assume furthermore that the operator $A$ is positive definite. We already know that the solution of equation (41) is then equivalent to the problem of finding the minimum of the functional $F(u) = (Au, u) - 2(f, u)$. It can easily be shown that the solution $u_n$ constructed by the least squares method converges also in the metric of the energy space $H_A$. As we know, with the scalar product of $H_A$,

$\|u_n - u_0\|_A^2 = \left(A(u_n - u_0),\, u_n - u_0\right) = \left(Au_n - f,\, u_n - u_0\right)$. (51)

According to the Schwarz inequality,

$\|u_n - u_0\|_A^2 \le \|Au_n - f\|\,\|u_n - u_0\|$. (52)

Further, according to the inequality (48) we have

$\|u_n - u_0\| \le K\,\|Au_n - Au_0\| = K\,\|Au_n - f\|$, (53)

hence

$\|u_n - u_0\|_A^2 \le K\,\|Au_n - f\|^2$, (54)

wherefrom it follows that $\|u_n - u_0\|_A^2$ converges to zero.
It is essential to note that in the metric of the space $H_A$ the least squares method converges more slowly than the Ritz method. Indeed, if we use the same basis functions $\{\varphi_n\}$ and set

$u_n = \sum_{k=1}^{n} a_k\,\varphi_k, \qquad v_n = \sum_{k=1}^{n} b_k\,\varphi_k$, (55)

where the coefficients $a_k$ are determined by the least squares method and the coefficients $b_k$ by the Ritz method, then it follows from the definition of the Ritz method that $F(u_n) \ge F(v_n)$, or

$\|u_n - u_0\|_A \ge \|v_n - u_0\|_A$,

respectively. The last inequality shows the slower convergence of the sequence $\{u_n\}$ in comparison with the sequence $\{v_n\}$. The advantage of the least squares method is that the sequence $\{u_n\}$ can converge (sometimes even together with certain of its derivatives) even in cases when the same cannot be claimed about the sequence $\{v_n\}$.
Symmetry of the weighted residual methods
Most of the approximate methods discussed above lead to a system of linear equations in the unknown coefficients $\bar{\mathbf{u}}$, e.g.

$\mathbf{k}_u\bar{\mathbf{u}} = \mathbf{p}_u$. (56)

Once this equation is solved for $\bar{\mathbf{u}}$, the approximate solution $\tilde{\mathbf{u}}(\mathbf{x}) = \mathbf{N}_u(\mathbf{x})\,\bar{\mathbf{u}}$ is formed. We must, of course, address such important computational considerations as the ease of solving Eq. (56) and the convergence characteristics of the resulting sequence of approximate solutions, see also Remark 4.
Matrix equations of the form of Eq. (56) can be solved with fewer computations if the matrix $\mathbf{k}_u$ is symmetric. The integral expressions for the various residual methods can be readily identified in terms of the elements $k_{ij}$ of the matrix $\mathbf{k}_u$. For example, for collocation,

$k_{ij} = A N_{uj}(\mathbf{x}_i), \qquad p_i = f(\mathbf{x}_i)$, (57)

for the subdomain method

$k_{ij} = \displaystyle\int_{\Omega_i} A N_{uj}\, dV, \qquad p_i = \displaystyle\int_{\Omega_i} f\, dV$, (58)

for Galerkin's method

$k_{ij} = \displaystyle\int_\Omega N_{ui}\, A N_{uj}\, dV, \qquad p_i = \displaystyle\int_\Omega N_{ui}\, f\, dV$, (59)

and for the least squares method

$k_{ij} = \displaystyle\int_\Omega A N_{ui}\, A N_{uj}\, dV, \qquad p_i = \displaystyle\int_\Omega A N_{ui}\, f\, dV$. (60)

As can be seen from these relations, for the collocation (and min-max) and subdomain procedures the matrix $\mathbf{k}_u$ is not in general symmetric, i.e. $k_{ij} \ne k_{ji}$. This is also the case for Galerkin's method (unless the operator $A$ is self-adjoint). The matrix $\mathbf{k}_u$ is always symmetric for quadratic formulations such as the least squares approach. (back to Lecture 8)
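The (non-)symmetry is easy to see on the beam problem with two basis functions whose fourth derivatives are $A N_1 = 48$ and $A N_2 = 120\xi - 24$ (the second an illustrative quintic basis function of ours, not from the lecture): Eq. (57) gives a non-symmetric $\mathbf{k}_u$, Eq. (60) a symmetric one.

```python
# k_u for collocation (57) versus least squares (60), beam basis with
# A N1 = 48, A N2 = 120*xi - 24 on (0, 1), EI = L = 1.

AN = [lambda x: 48.0, lambda x: 120.0*x - 24.0]

# Collocation at xi_1 = 1/4, xi_2 = 3/4: k_ij = A N_j(x_i)
pts = [0.25, 0.75]
k_col = [[AN[j](x) for j in range(2)] for x in pts]
print(k_col)   # [[48.0, 6.0], [48.0, 66.0]] -- not symmetric

# Least squares: k_ij = int A N_i A N_j dxi, via composite Simpson rule
def simpson(f, n=200):
    h = 1.0 / n
    s = f(0.0) + f(1.0) + sum((4 if k % 2 else 2) * f(k*h) for k in range(1, n))
    return s * h / 3.0

k_ls = [[simpson(lambda x, i=i, j=j: AN[i](x) * AN[j](x)) for j in range(2)]
        for i in range(2)]
print(k_ls[0][1], k_ls[1][0])   # equal: the least squares matrix is symmetric
```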
Weak formulation and boundary conditions
Fundamentals of the weak (generalized) solution were addressed in Lectures 2 and 3. Here we show its connection with the weighted residual methods within the framework of the matrix formalism used.
In the previous parts, weighted-residual formulations such as

$\displaystyle\int_\Omega \mathbf{W}\left(\mathbf{A}\tilde{\mathbf{u}} - \mathbf{f}\right) dV = \mathbf{0}$ (61)

were utilized to compute the coefficients $\bar{\mathbf{u}}$ of the approximate solution. Alternative forms of Eq. (61) can be obtained using the Gauss integral theorem to form integrals such as

$\displaystyle\int_\Omega\left(\mathbf{W}\mathbf{A}\tilde{\mathbf{u}} - \mathbf{W}\mathbf{f}\right) dV = \int_\Omega\left(\mathbf{A}_1\mathbf{W}\right)\left(\mathbf{A}_2\tilde{\mathbf{u}}\right) dV + \int_\Gamma\left(\mathbf{A}_3\mathbf{W}\right)\left(\mathbf{A}_4\tilde{\mathbf{u}}\right) dS - \int_\Omega \mathbf{W}\mathbf{f}\, dV$, (62)

in which $\mathbf{A}_1$, $\mathbf{A}_2$, $\mathbf{A}_3$ and $\mathbf{A}_4$ are differential operators and $\Gamma$ is the boundary. In this alternative formulation, the differentiation of $\tilde{\mathbf{u}}$ is usually of lower order than that appearing in $\mathbf{A}$, so that the requirements on the order of continuity of $\tilde{\mathbf{u}}$ can be weakened. Hence this is called, as we have already seen in the preceding lectures, a weak formulation or weak form. We have also seen that such a formulation can be advantageous because it can make the choice of the approximate solution easier. The operators $\mathbf{A}_1$ and $\mathbf{A}_3$ acting on the weighting functions $\mathbf{W}$ involve differentiations; consequently, the continuity requirements on $\mathbf{W}$ are more severe than before. Thus, $\mathbf{W}$ must have $C^{r-1}$ continuity, where $r$ is the highest derivative in the operators $\mathbf{A}_1$ and $\mathbf{A}_3$. (Compare also with Remark 4 of Lecture 2.) This requirement can be met by choosing appropriate $\mathbf{W}$, e.g. choosing $\mathbf{W}$ to be equal to the basis functions $\mathbf{N}_u$ of Eq. (6).
The fundamentals of the weak formulation were illustrated in Remark 5 of Lecture 3,
in connection with the Ritz method and Galerkin's method. Here, we will briefly comment on the
weak formulation in connection with weighted residual methods. Note at the outset that it is often
desirable to make the weighting functions W and the approximate solution satisfy certain
boundary conditions so as to eliminate the boundary terms. E.g., in the case of the problem in
Example 1, successive integration by parts of the governing equation leads to
$$\int_0^L W_j\left(EI\,\frac{\mathrm{d}^4\tilde w}{\mathrm{d}x^4} - p_z\right)\mathrm{d}x = \int_0^L\left(EI\,\frac{\mathrm{d}^2 W_j}{\mathrm{d}x^2}\,\frac{\mathrm{d}^2\tilde w}{\mathrm{d}x^2} - W_j\,p_z\right)\mathrm{d}x + \left[W_j\,EI\,\frac{\mathrm{d}^3\tilde w}{\mathrm{d}x^3}\right]_0^L - \left[\frac{\mathrm{d}W_j}{\mathrm{d}x}\,EI\,\frac{\mathrm{d}^2\tilde w}{\mathrm{d}x^2}\right]_0^L = 0; \qquad (63)$$
hence, if W_j satisfies the essential boundary conditions W_j(0) = W_j(L) = dW_j/dx|_{x=0} = 0, then,
apart from the integral term, only the expression

$$-\left.\frac{\mathrm{d}W_j}{\mathrm{d}x}\,EI\,\frac{\mathrm{d}^2\tilde w}{\mathrm{d}x^2}\right|_{x=L} = 0 \qquad (64)$$

remains in Eq. (63). Thus the natural boundary condition at the right-hand end of the beam
emerges, $EI\,\mathrm{d}^2\tilde w/\mathrm{d}x^2|_{x=L} = 0$; hence the bending moment at the hinged end of the beam is equal to
zero.
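The emergence of the natural (moment) boundary condition can be checked symbolically. The sketch below, written with SymPy, takes the exact deflection of the clamped-hinged beam of Example 1 (quoted later as Eq. (5o), in units of p0·L^4/(120·EI) and with ξ = x/L) and confirms that it satisfies the three essential conditions and the zero-moment condition at the hinged end:

```python
import sympy as sp

xi = sp.symbols('xi')  # xi = x/L
# exact deflection of the clamped-hinged beam of Example 1 (cf. Eq. (5o)),
# measured in units of p0*L^4/(120*EI)
w = 4*xi**2 - 8*xi**3 + 5*xi**4 - xi**5

# essential conditions: clamped end (w = w' = 0 at xi = 0), support at xi = 1
print(w.subs(xi, 0), sp.diff(w, xi).subs(xi, 0), w.subs(xi, 1))  # 0 0 0
# natural condition: zero bending moment (w'' = 0) at the hinged end xi = 1
print(sp.diff(w, xi, 2).subs(xi, 1))  # 0
```

The last printed value being zero is exactly the statement EI d²w/dx²|_{x=L} = 0 of Eq. (64).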
Variational methods
In the introduction, it was mentioned that the integral (global) form of differential
equation (2) can also be approached from the viewpoint of a variational principle. Recall that
in Lecture 4 the integral form of differential equations represented the three fundamental
relationships of solid mechanics: kinematics (strain-displacement relations), material law,
and equilibrium. It is not surprising that the integrals of the variational principles can be used
directly in the same fashion as weighted-residual integrals. In fact, use of these integrals as the
basis of the approximation is referred to as a direct variational method. The most popular
direct variational method, the Ritz method, employs the integral relations of the principle of
minimum potential energy, or the principle of virtual displacements (work), as we have seen in
Lectures 3 and 4.
As with the previous methods, the discretization of the variational integrals leads to a
system of algebraic equations. The unknowns in the approximate solution are to be obtained
such that a variational functional is made stationary. This procedure leads to a best
approximation of certain characteristics of the problem, e.g. equilibrium or boundary conditions.
The Ritz method was already discussed in Lectures 3 and 4 and, therefore, we are not
going to discuss it again. However, at the end of this Lecture, the so-called extended Ritz
method will be shown.
Kantorovich's method
Kantorovich suggested a method for the solution of minimal problems which differs
substantially from the Ritz method in that it reduces the partial differential equations of
boundary value problems to lower-dimensional (even one-dimensional) boundary value
problems. In the one-dimensional case, the method involves the solution of ordinary
differential equations.
The method still begins with the form of approximate solution $\tilde u = \sum_{i=1}^n \bar u_i N_{ui}$, but now the
$\bar u_i$ are functions of one of the independent variables. In the case of a two-dimensional problem, a
possible approximate solution is

$$\tilde u(x,y) = \sum_{i=1}^{n} \bar u_i(x)\,N_{ui}(y). \qquad (65)$$
The unknowns $\bar u_i(x)$ of Eq. (65) are determined so that a variational principle is satisfied.
Instead of the system of linear algebraic equations of the Ritz method, we obtain a boundary
value problem for a system of ordinary differential equations for the unknown functions $\bar u_i(x)$.
In setting up the problem, it is not necessary to choose N_{ui} to satisfy the x boundary
conditions, since they may be prescribed as boundary conditions for the functions $\bar u_i(x)$. The
method will be illustrated on the problem of the torsion of a prismatic bar using the Prandtl
stress function Φ. With the Prandtl stress function the non-zero stress components are defined
as

$$\tau_{yz} = -G\theta\,\frac{\partial\Phi}{\partial x}, \qquad \tau_{xz} = G\theta\,\frac{\partial\Phi}{\partial y}, \qquad (66)$$

where θ is the angle of twist per unit length of the bar. Using the relations (66), the equilibrium
equation

$$\frac{\partial\tau_{xz}}{\partial x} + \frac{\partial\tau_{yz}}{\partial y} = 0 \qquad (67)$$

is automatically satisfied.
Remark 5. In Lecture 1, we used the warping to formulate the problem of torsion of a prismatic bar. The
warping function φ and the Prandtl stress function Φ are related by a simple relation, which will be shown here.
The stress components are expressed using the warping function as

$$\tau_{xz} = G\theta\left(\frac{\partial\varphi}{\partial x} - y\right), \qquad \tau_{yz} = G\theta\left(\frac{\partial\varphi}{\partial y} + x\right), \qquad (68)$$

wherefrom, by comparison with (66), it follows that

$$\frac{\partial\Phi}{\partial y} = \frac{\partial\varphi}{\partial x} - y, \qquad -\frac{\partial\Phi}{\partial x} = \frac{\partial\varphi}{\partial y} + x. \qquad (69)$$

By differentiating Eqs. (66) and (68) we get

$$\frac{\partial\tau_{xz}}{\partial y} = G\theta\,\frac{\partial^2\Phi}{\partial y^2} = G\theta\left(\frac{\partial^2\varphi}{\partial y\,\partial x} - 1\right), \qquad \frac{\partial\tau_{yz}}{\partial x} = -\,G\theta\,\frac{\partial^2\Phi}{\partial x^2} = G\theta\left(\frac{\partial^2\varphi}{\partial x\,\partial y} + 1\right). \qquad (70)$$

Because the function φ is single valued and continuous, it holds that $\partial^2\varphi/\partial y\,\partial x = \partial^2\varphi/\partial x\,\partial y$. Subtracting the second equation (70)
from the first and using this equality, we get

$$\frac{\partial^2\Phi}{\partial x^2} + \frac{\partial^2\Phi}{\partial y^2} = -2. \qquad (71)$$
The boundary conditions for the stress function will be determined from the condition that the circumferential
surface of the bar, Γ, is stress-free, i.e.

$$\tau_{xz}\,n_x + \tau_{yz}\,n_y = 0 \quad \text{for } (x,y)\in\Gamma. \qquad (72)$$

Note that the unit tangent vector of the cross-section contour has the components t_x = -n_y and t_y = n_x. Thus, upon
introduction of Eq. (66) into the boundary condition (72) we finally obtain

$$\frac{\partial\Phi}{\partial y}\,n_x - \frac{\partial\Phi}{\partial x}\,n_y = \frac{\partial\Phi}{\partial x}\,\frac{\mathrm{d}x}{\mathrm{d}s} + \frac{\partial\Phi}{\partial y}\,\frac{\mathrm{d}y}{\mathrm{d}s} = \frac{\mathrm{d}\Phi}{\mathrm{d}s} = 0, \qquad (73)$$

where s denotes the arc length of the boundary of the cross-section. The last equation indicates that the stress
function Φ is constant along the boundary of the cross-section. Since the stresses are derived in terms of
derivatives of Φ rather than Φ itself, the magnitude of the constant is arbitrary. Therefore, without loss of
generality (in the case of simply connected domains), it is common to assume that Φ = 0 along the boundary.
Kantorovich's method will now be applied to find an approximate solution of Eq. (71)
with the boundary condition Φ = 0 on Γ, for a rectangular cross-section |x| ≤ 1 and |y| ≤ 2.
Choose

$$\tilde\Phi = \Phi_1(y)\,(1 - x^2), \qquad (74)$$

which satisfies the boundary condition for x = ±1. The functional of potential energy
expressed in terms of the function Φ can easily be set up using Eq. (58) of Lecture 1,

$$\Pi = G\theta^2\iint\left\{\frac12\left[\left(\frac{\partial\varphi}{\partial x}-y\right)^{2} + \left(\frac{\partial\varphi}{\partial y}+x\right)^{2}\right] - \left(x\,\frac{\partial\varphi}{\partial y} - y\,\frac{\partial\varphi}{\partial x} + x^2 + y^2\right)\right\}\mathrm{d}x\,\mathrm{d}y, \qquad (75)$$

into which we substitute for φ from the relations (69) to get

$$\Pi = \frac{G\theta^2}{2}\iint\left[\left(\frac{\partial\Phi}{\partial x}\right)^{2} + \left(\frac{\partial\Phi}{\partial y}\right)^{2} + 2x\,\frac{\partial\Phi}{\partial x} + 2y\,\frac{\partial\Phi}{\partial y}\right]\mathrm{d}x\,\mathrm{d}y. \qquad (76)$$

The third and fourth terms of the integrand can be rearranged using the Gauss integral theorem and
the boundary condition Φ = 0. We finally obtain

$$\Pi = \frac{G\theta^2}{2}\iint\left[\left(\frac{\partial\Phi}{\partial x}\right)^{2} + \left(\frac{\partial\Phi}{\partial y}\right)^{2} - 4\Phi\right]\mathrm{d}x\,\mathrm{d}y. \qquad (77)$$
Substitution of (74) into (77) leads to

$$\Pi = \frac{G\theta^2}{2}\int_{-2}^{2}\int_{-1}^{1}\left[4x^2\Phi_1^2 + (1-x^2)^2\left(\frac{\mathrm{d}\Phi_1}{\mathrm{d}y}\right)^{2} - 4(1-x^2)\,\Phi_1\right]\mathrm{d}x\,\mathrm{d}y. \qquad (78)$$

Integrating with respect to x, we obtain

$$\Pi = \frac{G\theta^2}{2}\int_{-2}^{2}\left[\frac{16}{15}\,\Phi_1'^2 + \frac{8}{3}\,\Phi_1^2 - \frac{16}{3}\,\Phi_1\right]\mathrm{d}y, \qquad (79)$$
where the prime indicates a derivative with respect to y. Observe that the integrand in (79) is
of the type

$$F(\Phi_1, \Phi_1', y) = \frac{16}{15}\,\Phi_1'^2 + \frac{8}{3}\,\Phi_1^2 - \frac{16}{3}\,\Phi_1.$$

Euler's equation, which is a necessary condition for δΠ = 0, has the general form

$$\frac{\mathrm{d}}{\mathrm{d}y}\,\frac{\partial F}{\partial \Phi_1'} - \frac{\partial F}{\partial \Phi_1} = 0, \qquad (80)$$

wherefrom we get

$$\Phi_1'' - \frac{5}{2}\,\Phi_1 + \frac{5}{2} = 0 \quad \text{for } |y| \le 2, \qquad \Phi_1(\pm 2) = 0. \qquad (81)$$

The solution of (81) is

$$\Phi_1(y) = 1 - \frac{\cosh\!\left(\sqrt{5/2}\;y\right)}{\cosh\sqrt{10}} \qquad (82)$$

and the final approximate solution is given by (74) with Φ₁ of (82).
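A quick symbolic check of this Kantorovich reduction, assuming the reconstructed boundary value problem (81), confirms that (82) satisfies both the Euler equation and the conditions at y = ±2:

```python
import sympy as sp

y = sp.symbols('y')
# candidate solution (82) of the reduced one-dimensional problem
Phi1 = 1 - sp.cosh(sp.sqrt(sp.Rational(5, 2))*y) / sp.cosh(sp.sqrt(10))

# Euler equation (81): Phi1'' - (5/2)*Phi1 + 5/2 = 0
residual = sp.diff(Phi1, y, 2) - sp.Rational(5, 2)*Phi1 + sp.Rational(5, 2)
print(sp.simplify(residual))                  # 0
# boundary conditions Phi1(+/-2) = 0 (note cosh(2*sqrt(5/2)) = cosh(sqrt(10)))
print(Phi1.subs(y, 2), Phi1.subs(y, -2))      # 0 0
```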
Trefftz's method: a boundary method
The procedures for the construction of approximate solutions based upon classical
weighted residuals and/or variational methods, and finite element method, which can be
treated as an extension of the classical methods wherein approximate solutions apply to
elements into which the system has been divided, sometimes encounter difficulties when the
domain (volume) is extremely large or when singularities occur in some of the variables.
Frequently, exact solutions exist for the differential equations in the volume and, sometimes,
there are solutions that take the singularity into account. In such cases, it may be useful to use
a boundary rather than an interior method.
Often for the boundary method, the selection of the form of approximate solution is
not straightforward. In general, singular functions such as Greens function can be employed.
These lead to an approximation in the form of a set of integral equations, which are the basis
15
of an important computational technique, the boundary element method (BEM) or the
boundary finite element method, which will be treated in Lectures 7 and 8.
In this section, a somewhat simpler and less generally applicable boundary method
will be presented. This is sometimes referred to as the Trefftz method.
This method utilizes a form of approximate solution that satisfies the governing
differential equations and, through a work expression, leads to an approximation of the
boundary conditions. Recall that in interior methods such as the Ritz method, a variational
principle, using basis functions which satisfy the displacement boundary conditions and the
kinematical conditions, provides an approximation to the conditions of equilibrium. However,
with the Trefftz method, by employing basis functions that satisfy the differential equation of
equilibrium as well as the kinematical relations, all of the boundary conditions are
approximated with the aid of a variational expression. The global form of the boundary
conditions will constitute the variational expression.
For the Trefftz method, the approximate solution must satisfy, in the case of the 3D
elasticity problem, the equations (see Lecture 4, Eqs. (54) and (55)):

$$[\partial]^T\{\sigma\} + \{f\} = \{0\}, \qquad \{\varepsilon\} = [\partial]\{u\}, \qquad \{\sigma\} = [D]\{\varepsilon\}, \qquad \text{for } \{x\}\in\Omega. \qquad (83)$$

That is, all of the governing differential equations in Ω are to be satisfied. The variational
expression in terms of boundary integrals of the boundary conditions u = g on Γ_u and
{t} = {p̄} on Γ_σ can be derived from the Hu-Washizu principle; because all of the
governing differential equations in Ω are satisfied, only boundary integrals remain in the
Hu-Washizu principle. We thus obtain

$$J = \int_{\Gamma_\sigma}\delta\{u\}^T\left(\{t\} - \{\bar p\}\right)\mathrm{d}S + \int_{\Gamma_u}\delta\{t\}^T\left(\{u\} - \{g\}\right)\mathrm{d}S = 0. \qquad (84)$$
For the special case of beam deflection:

$$J = \underbrace{\left[\delta w\,(T-\bar T) + \delta\psi\,(M-\bar M)\right]_0^L}_{\text{on }\Gamma_\sigma} + \underbrace{\left[\delta T\,(w-\bar w) + \delta M\,(\psi-\bar\psi)\right]_0^L}_{\text{on }\Gamma_u} = 0, \qquad (85)$$

where T, M, w and ψ denote the shear force, the bending moment, the deflection and the slope
of the beam axis, respectively, and T̄, M̄, w̄ and ψ̄ denote their prescribed values at the ends of the
beam; see also Eq. (7)₂. (back to Lecture 8)
Since Eqs. (84) and (85) are simply global forms of the boundary conditions of the kind of Eq.
(8)₂, as well as constituting a form of variational principle, the Trefftz method, like Galerkin's
method, can be applied to problems for which a variational principle does not necessarily
exist.
For Trefftz's method, use the form of approximate solution

$$\tilde{\mathbf u} = \mathbf N_p + \mathbf N_u\,\bar{\mathbf u}, \qquad (86)$$

where N_p is a vector of particular solutions of the differential equations for the problem. The
term N_u ū is a set of linearly independent functions satisfying the homogeneous differential
equations. The parameters ū are to be chosen to approximate the boundary conditions.
For a beam, choose an approximate solution of the form

$$\tilde w(x) = w_0(x) + \mathbf N_u(x)\,\bar{\mathbf w}. \qquad (87)$$

Substitute Eq. (87) into Eq. (85). Note that $\delta\tilde w(x) = \mathbf N_u(x)\,\delta\bar{\mathbf w}$, i.e. $\delta\tilde w^T(x) = \delta\bar{\mathbf w}^T\mathbf N_u^T(x)$, and use
ψ = w′, M = -EIw″, T = -EIw‴; hence Eq. (85) assumes the form
$$J = -\,\delta\bar{\mathbf w}^T\Bigg\{\underbrace{\Big[\mathbf N_u^T(x)\,EI\big(\tilde w'''(x) - \bar w'''(x)\big) + \mathbf N_u'^T(x)\,EI\big(\tilde w''(x) - \bar w''(x)\big)\Big]_0^L\bigg|_{\Gamma_\sigma}}_{\text{on }\Gamma_\sigma} + \underbrace{\Big[EI\,\mathbf N_u'''^T(x)\big(\tilde w(x) - \bar w(x)\big) + EI\,\mathbf N_u''^T(x)\big(\tilde w'(x) - \bar w'(x)\big)\Big]_0^L\bigg|_{\Gamma_u}}_{\text{on }\Gamma_u}\Bigg\} = 0, \qquad (88)$$

where the prescribed end forces have been expressed through a prescribed deflection, T̄ = -EI w̄‴ and M̄ = -EI w̄″. Since δw̄ is arbitrary, this yields

$$\underbrace{\Big[\mathbf N_u^T\big(\tilde w''' - \bar w'''\big) + \mathbf N_u'^T\big(\tilde w'' - \bar w''\big)\Big]_0^L\bigg|_{\Gamma_\sigma}}_{\text{on }\Gamma_\sigma} + \underbrace{\Big[\mathbf N_u'''^T\big(\tilde w - \bar w\big) + \mathbf N_u''^T\big(\tilde w' - \bar w'\big)\Big]_0^L\bigg|_{\Gamma_u}}_{\text{on }\Gamma_u} = \mathbf 0. \qquad (89)$$

These relations can be used to determine the unknown parameters w̄.
Example 5. Apply Trefftz's method to the beam investigated in Example 1. By differentiation, we can easily
find that the particular solution can be chosen as

$$w_0(\xi) = \frac{p_0 L^4}{120\,EI}\left(5\xi^4 - \xi^5\right), \qquad \xi = x/L. \qquad \text{(5a)}$$

For the N_u w̄ term of the approximate solution, select the simple polynomial

$$\mathbf N_u\,\bar{\mathbf w} = [\xi,\ \xi^2]\begin{bmatrix}\bar w_1\\ \bar w_2\end{bmatrix} = \bar w_1\xi + \bar w_2\xi^2, \qquad \text{(5b)}$$

which satisfies the homogeneous differential equation (1a), so that

$$\tilde w = \frac{p_0 L^4}{120\,EI}\left(5\xi^4 - \xi^5\right) + \bar w_1\xi + \bar w_2\xi^2. \qquad \text{(5c)}$$

A complete solution for a simple beam with no loading (homogeneous equation) would be a third-order
polynomial with a term including ξ³. Hence, it should not be surprising that the approximate solution proposed
here using (5b) does not lead to the exact solution.
In Eq. (85), retain only those terms corresponding to the actual boundary conditions for the problem. Also, since
there are no applied forces or non-zero displacements on the boundaries, w̄ = ψ̄ = T̄ = M̄ = 0. On Γ_u there is
prescribed w(0) = ψ(0) = w(L) = 0, and on Γ_σ, M(L) = 0. Equation (85) then reduces to

$$\delta\psi(L)\,M(L) + \delta T(L)\,w(L) + \delta T(0)\,w(0) + \delta M(0)\,\psi(0) = 0, \qquad \text{(5d)}$$

or, in the notation of Eq. (89),

$$\mathbf N_u'^T(L)\,\tilde w''(L) + \mathbf N_u'''^T(L)\,\tilde w(L) + \mathbf N_u'''^T(0)\,\tilde w(0) + \mathbf N_u''^T(0)\,\tilde w'(0) = \mathbf 0. \qquad \text{(5e)}$$
To use (5e), we will need

$$\begin{aligned}
\tilde w &= [\xi,\ \xi^2]\,\bar{\mathbf w} + \frac{p_0L^4}{120\,EI}\left(5\xi^4-\xi^5\right),\\
\tilde w' &= \frac{1}{L}\,[1,\ 2\xi]\,\bar{\mathbf w} + \frac{p_0L^3}{24\,EI}\,\xi^3(4-\xi),\\
\tilde w'' &= \frac{1}{L^2}\,[0,\ 2]\,\bar{\mathbf w} + \frac{p_0L^2}{6\,EI}\,\xi^2(3-\xi),\\
\tilde w''' &= \frac{1}{L^3}\,[0,\ 0]\,\bar{\mathbf w} + \frac{p_0L}{2\,EI}\,\xi(2-\xi).
\end{aligned} \qquad \text{(5f)}$$

Substitute these expressions into (5e), which assumes the non-dimensional form

$$\mathbf N_u'^T(1)\,\tilde w''(1) + \mathbf N_u'''^T(1)\,\tilde w(1) + \mathbf N_u'''^T(0)\,\tilde w(0) + \mathbf N_u''^T(0)\,\tilde w'(0) = \mathbf 0, \qquad \text{(5e)}_2$$

and get (after multiplying by L³)

$$\begin{bmatrix}1\\2\end{bmatrix}\left(2\bar w_2 + \frac{p_0L^4}{3\,EI}\right) + \begin{bmatrix}0\\2\end{bmatrix}\bar w_1 = \begin{bmatrix}0\\0\end{bmatrix}, \qquad \text{(5g)}$$

or

$$\begin{bmatrix}0 & 2\\ 2 & 4\end{bmatrix}\begin{bmatrix}\bar w_1\\ \bar w_2\end{bmatrix} = -\frac{p_0L^4}{3\,EI}\begin{bmatrix}1\\2\end{bmatrix}, \qquad \text{(5h)}$$

which has the solution

$$\begin{bmatrix}\bar w_1\\ \bar w_2\end{bmatrix} = -\frac{p_0L^4}{6\,EI}\begin{bmatrix}0\\1\end{bmatrix}. \qquad \text{(5i)}$$

The approximate deflection then becomes

$$\tilde w = \frac{p_0L^4}{120\,EI}\left(-20\xi^2 + 5\xi^4 - \xi^5\right). \qquad \text{(5j)}$$
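The small Trefftz system (5h) can be solved numerically as a sanity check (a sketch; the parameters are measured in units of p0·L^4/EI):

```python
import numpy as np

# Trefftz system (5h) for the two-parameter beam approximation;
# wbar is measured in units of p0*L^4/EI
A = np.array([[0.0, 2.0],
              [2.0, 4.0]])
b = -np.array([1.0, 2.0]) / 3.0
wbar = np.linalg.solve(A, b)
print(wbar)   # [ 0.  -0.16666667], i.e. (0, -1/6), cf. Eq. (5i)
```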
It is clear that this solution does not satisfy all boundary conditions; for example, w̃(L) ≠ 0. Therefore, we will try
to improve the approximate solution. For a second approximate solution, use

$$\tilde w = \frac{p_0L^4}{120\,EI}\left(5\xi^4 - \xi^5\right) + \bar w_1\xi + \bar w_2\xi^2 + \bar w_3\xi^3. \qquad \text{(5k)}$$
We will need the derivatives

$$\begin{aligned}
\tilde w' &= \frac{1}{L}\,[1,\ 2\xi,\ 3\xi^2]\,\bar{\mathbf w} + \frac{p_0L^3}{24\,EI}\,\xi^3(4-\xi),\\
\tilde w'' &= \frac{1}{L^2}\,[0,\ 2,\ 6\xi]\,\bar{\mathbf w} + \frac{p_0L^2}{6\,EI}\,\xi^2(3-\xi),\\
\tilde w''' &= \frac{1}{L^3}\,[0,\ 0,\ 6]\,\bar{\mathbf w} + \frac{p_0L}{2\,EI}\,\xi(2-\xi).
\end{aligned} \qquad \text{(5l)}$$

Introduction of these expressions into (5e)₂ leads to

$$\begin{bmatrix}0 & 2 & 6\\ 2 & 4 & 12\\ 6 & 12 & 24\end{bmatrix}\begin{bmatrix}\bar w_1\\ \bar w_2\\ \bar w_3\end{bmatrix} = -\frac{p_0L^4}{EI}\begin{bmatrix}1/3\\ 2/3\\ 6/5\end{bmatrix}, \qquad \text{(5m)}$$

which gives the unknown parameters

$$\begin{bmatrix}\bar w_1\\ \bar w_2\\ \bar w_3\end{bmatrix} = \frac{p_0L^4}{EI}\begin{bmatrix}0\\ 1/30\\ -1/15\end{bmatrix}. \qquad \text{(5n)}$$

This solution corresponds to the polynomial expressing the beam deflection

$$\tilde w = \frac{p_0L^4}{120\,EI}\left(4\xi^2 - 8\xi^3 + 5\xi^4 - \xi^5\right). \qquad \text{(5o)}$$
Eq. (5o) is also the exact solution for the displacement.
To compare the results, we label the solutions as follows:
A, C: exact solution and the second approximate solution (5o) (the expressions are, of course, the same);
B: first approximate solution (5j).

Tab. 2

           Deflection wEI/p0L^4    Bending moment M/p0L^2    Shear force T/p0L
           xi = 0     xi = 1       xi = 0      xi = 1        xi = 0     xi = 1
  B        0          -0.13        1/3         0             0          -0.5
  A, C     0          0            -1/15       0             0.4        -0.1
In conclusion, let us note that Trefftz's method is not universally applicable, since
solutions of the governing differential equations are not always available. Also, the method
leads to algebraic equations that may be less banded than those of some other methods.
Extended Ritz method
Since Ritz's method can be based on the principle of minimum potential energy or on
the principle of virtual displacements, it is required that the basis functions satisfy the
displacement boundary conditions, i.e. u = g on Γ_u. It is possible to extend this principle and,
hence, the method, to relax the conditions to be satisfied by the basis functions. In fact, such
an extension has already been shown in Lecture 4, see Eq. (12), where the boundary conditions
u = g on the part of the boundary Γ_u (and also the strain-displacement relations) were
considered as subsidiary conditions. This extended Ritz method then provides a solution that
fulfils the displacement boundary conditions approximately, as well as the conditions of
equilibrium and the force boundary conditions.
If only the boundary conditions are considered as subsidiary conditions in Eq. (12)
of Lecture 4, while the strain-displacement relations are assumed to hold a priori, the
terms containing the Lagrange multipliers λ_ij will not be involved, the multiplier λ_i is set
equal to the surface traction vector t, and Eq. (12) becomes
$$\bar H(u_i,\varepsilon_{ij},t_i) = \int_\Omega\left(\frac12\,C_{ijkl}\,\varepsilon_{ij}\,\varepsilon_{kl} - f_i u_i\right)\mathrm{d}V - \int_{\Gamma_\sigma}\bar p_i\,u_i\,\mathrm{d}S - \int_{\Gamma_u} t_i\,(u_i - g_i)\,\mathrm{d}S. \qquad (90)$$

Carrying out the variation of (90) and using Hooke's law, we obtain in matrix notation

$$\delta\bar H = \int_\Omega \delta\{\varepsilon\}^T\{\sigma\}\,\mathrm{d}V - \int_\Omega \delta\{u\}^T\{f\}\,\mathrm{d}V - \int_{\Gamma_\sigma}\delta\{u\}^T\{\bar p\}\,\mathrm{d}S - \int_{\Gamma_u}\Big(\delta\{t\}^T\big(\{u\}-\{g\}\big) + \{t\}^T\delta\{u\}\Big)\,\mathrm{d}S = 0. \qquad (91)$$

(We denote the functional as H̄ to distinguish it from the more general functional H of Lecture 4.)
For a beam, Eq. (91) becomes

$$\int_0^L EI\,\tilde w''\,\delta w''\,\mathrm{d}x - \int_0^L p_z\,\delta w\,\mathrm{d}x - \underbrace{\left[\bar T\,\delta w - \bar M\,\delta\psi\right]_0^L}_{\text{on }\Gamma_\sigma} - \underbrace{\left[\delta T\,(w-\bar w) - \delta M\,(\psi - \bar\psi) + T\,\delta w - M\,\delta\psi\right]_0^L}_{\text{on }\Gamma_u} = 0. \qquad (92)$$

We will convert the beam expression into a form expressed in terms of displacements only.
For a uniform beam, set ψ = w′, M = -EIw″ and T = -EIw‴. Also, set all applied
displacements (w̄, ψ̄) and forces (T̄, M̄) on the boundaries equal to zero. This reduces the
variational principle to the form

$$\int_0^L EI\,\tilde w''(x)\,\delta w''(x)\,\mathrm{d}x - \int_0^L p_z(x)\,\delta w(x)\,\mathrm{d}x + \Big[\delta w\,EI\,\tilde w''' + \delta w'''\,EI\,\tilde w - \delta w'\,EI\,\tilde w'' - \delta w''\,EI\,\tilde w'\Big]_0^L = 0. \qquad (93)$$

The terms in brackets are the virtual work expressions

$$-\left[T\,\delta w + \delta T\,w - M\,\delta\psi - \delta M\,\psi\right]_0^L$$

on that portion of the boundary where the displacement boundary conditions occur, i.e. on Γ_u.
Consider the approximate solution in the form $\tilde w(x) = \mathbf N_u(x)\,\bar{\mathbf w}$ and note that
$\delta\tilde w(x) = \mathbf N_u(x)\,\delta\bar{\mathbf w}$, i.e. $\delta\tilde w^T(x) = \delta\bar{\mathbf w}^T\mathbf N_u^T(x)$.
Then, Eq. (93) becomes

$$\delta\bar{\mathbf w}^T\left\{\left(\int_0^L EI\,\mathbf N_u''^T\mathbf N_u''\,\mathrm{d}x + \Big[EI\big(\mathbf N_u^T\mathbf N_u''' + \mathbf N_u'''^T\mathbf N_u - \mathbf N_u'^T\mathbf N_u'' - \mathbf N_u''^T\mathbf N_u'\big)\Big]_0^L\bigg|_{\Gamma_u}\right)\bar{\mathbf w} - \int_0^L \mathbf N_u^T p_z\,\mathrm{d}x\right\} = 0. \qquad (94)$$

Let

$$\mathbf R = \Big[EI\big(\mathbf N_u^T\mathbf N_u''' - \mathbf N_u'^T\mathbf N_u''\big)\Big]_0^L, \qquad (95)$$

which corresponds to the term $-\left[T\,\delta w - M\,\delta\psi\right]_0^L$ on Γ_u. Then

$$\delta\bar{\mathbf w}^T\left\{\Big(\underbrace{\int_0^L EI\,\mathbf N_u''^T\mathbf N_u''\,\mathrm{d}x}_{\mathbf k} + \mathbf R + \mathbf R^T\Big)\bar{\mathbf w} - \underbrace{\int_0^L \mathbf N_u^T p_z\,\mathrm{d}x}_{\mathbf p_u}\right\} = 0. \qquad (96)$$

Thus, the unknown parameters w̄ can be found from

$$\left(\mathbf k + \mathbf R + \mathbf R^T\right)\bar{\mathbf w} = \mathbf p_u. \qquad (97)$$
Example 6. Consider again the beam of Example 1. This time we will employ a form of approximate
solution that does not necessarily satisfy the displacement boundary conditions.
We begin with a polynomial form of approximate solution,

$$\tilde w = \mathbf N_u\,\bar{\mathbf w} = [\xi,\ \xi^2,\ \xi^3,\ \xi^4]\begin{bmatrix}\bar w_1\\ \bar w_2\\ \bar w_3\\ \bar w_4\end{bmatrix} = \bar w_1\xi + \bar w_2\xi^2 + \bar w_3\xi^3 + \bar w_4\xi^4. \qquad \text{(6a)}$$

Although it is not necessary that any boundary conditions be satisfied, note that the condition w(0) = 0 is fulfilled
by (6a). With dξ = dx/L, the needed derivatives become
$$\mathbf N_u' = \frac{1}{L}\,[1,\ 2\xi,\ 3\xi^2,\ 4\xi^3], \qquad \mathbf N_u'' = \frac{1}{L^2}\,[0,\ 2,\ 6\xi,\ 12\xi^2], \qquad \mathbf N_u''' = \frac{1}{L^3}\,[0,\ 0,\ 6,\ 24\xi]. \qquad \text{(6b)}$$

Substitute these relationships into (96):

$$\mathbf k = \frac{EI}{L^3}\int_0^1\begin{bmatrix}0\\2\\6\xi\\12\xi^2\end{bmatrix}[0,\ 2,\ 6\xi,\ 12\xi^2]\,\mathrm{d}\xi = \frac{EI}{L^3}\begin{bmatrix}0&0&0&0\\0&4&6&8\\0&6&12&18\\0&8&18&144/5\end{bmatrix}, \qquad \text{(6c)}$$

$$\mathbf p_u = p_0L\int_0^1\begin{bmatrix}\xi\\\xi^2\\\xi^3\\\xi^4\end{bmatrix}(1-\xi)\,\mathrm{d}\xi = p_0L\begin{bmatrix}1/6\\1/12\\1/20\\1/30\end{bmatrix}, \qquad \text{(6d)}$$

for the load p_z = p₀(1 - ξ) of Example 1.
Turn now to the boundary matrix R of Eq. (95). Recall that the terms in R are taken from
$-\left[T\,\delta w - M\,\delta\psi\right]_0^L$ on Γ_u. The Γ_u boundary conditions are w(0) = 0, ψ(0) = 0 and w(L) = 0, and hence only those terms corresponding
to w(0), ψ(0) and w(L) are retained in R. (In other words, ψ(L) is not prescribed, hence it cannot occur among
the Γ_u boundary conditions.) Thus R becomes

$$\mathbf R = \frac{EI}{L^3}\left\{\begin{bmatrix}1\\1\\1\\1\end{bmatrix}[0,\ 0,\ 6,\ 24] + \begin{bmatrix}1\\0\\0\\0\end{bmatrix}[0,\ 2,\ 0,\ 0]\right\} = \frac{EI}{L^3}\begin{bmatrix}0&2&6&24\\0&0&6&24\\0&0&6&24\\0&0&6&24\end{bmatrix}, \qquad \text{(6e)}$$

where the first product is $\mathbf N_u^T(1)\mathbf N_u'''(1)$ and the second is $\mathbf N_u'^T(0)\mathbf N_u''(0)$; the term $\mathbf N_u^T(0)\mathbf N_u'''(0)$ vanishes since N_u(0) = 0.
The system of equations for w̄, i.e. Eq. (97), becomes

$$\frac{EI}{L^3}\begin{bmatrix}0&2&6&24\\2&4&12&32\\6&12&24&48\\24&32&48&76.8\end{bmatrix}\begin{bmatrix}\bar w_1\\\bar w_2\\\bar w_3\\\bar w_4\end{bmatrix} = p_0L\begin{bmatrix}1/6\\1/12\\1/20\\1/30\end{bmatrix}, \qquad \text{(6f)}$$

which has the solution

$$\bar{\mathbf w} = \frac{p_0L^4}{120\,EI}\begin{bmatrix}-0.4167\\ 4.000\\ -5.2916\\ 1.8229\end{bmatrix}. \qquad \text{(6g)}$$
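The system (6f) is easily checked numerically (a sketch; the parameters are in units of p0·L^4/EI, and the exact fractional solution is written out for comparison):

```python
import numpy as np

# extended-Ritz system (6f); wbar is measured in units of p0*L^4/EI
K = np.array([[ 0.0,  2.0,  6.0, 24.0],
              [ 2.0,  4.0, 12.0, 32.0],
              [ 6.0, 12.0, 24.0, 48.0],
              [24.0, 32.0, 48.0, 76.8]])
p = np.array([1/6, 1/12, 1/20, 1/30])
wbar = np.linalg.solve(K, p)
print(120*wbar)   # approx [-0.4167  4.     -5.2917  1.8229], cf. Eq. (6g)
```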
As a second case, use a trial function with five unknown parameters,

$$\tilde w = [1,\ \xi,\ \xi^2,\ \xi^3,\ \xi^4]\begin{bmatrix}\bar w_1\\ \bar w_2\\ \bar w_3\\ \bar w_4\\ \bar w_5\end{bmatrix}. \qquad \text{(6h)}$$

In contrast to the approximate solution of (6a), here none of the displacement conditions is satisfied. We find

$$\mathbf N_u' = \frac{1}{L}\,[0,\ 1,\ 2\xi,\ 3\xi^2,\ 4\xi^3], \qquad \mathbf N_u'' = \frac{1}{L^2}\,[0,\ 0,\ 2,\ 6\xi,\ 12\xi^2], \qquad \mathbf N_u''' = \frac{1}{L^3}\,[0,\ 0,\ 0,\ 6,\ 24\xi], \qquad \text{(6i)}$$
$$\mathbf k = \frac{EI}{L^3}\begin{bmatrix}0&0&0&0&0\\0&0&0&0&0\\0&0&4&6&8\\0&0&6&12&18\\0&0&8&18&144/5\end{bmatrix}. \qquad \text{(6j)}$$

Completion of the R and p_u matrices leads to the equations

$$\frac{EI}{L^3}\begin{bmatrix}0&0&0&0&24\\0&0&2&6&24\\0&2&4&12&32\\0&6&12&24&48\\24&24&32&48&76.8\end{bmatrix}\begin{bmatrix}\bar w_1\\\bar w_2\\\bar w_3\\\bar w_4\\\bar w_5\end{bmatrix} = p_0L\begin{bmatrix}1/2\\1/6\\1/12\\1/20\\1/30\end{bmatrix}, \qquad \text{(6k)}$$

with the solution

$$\bar{\mathbf w}^T = \frac{p_0L^4}{120\,EI}\,[-2.17,\ 5.00,\ 4.00,\ -8.00,\ 2.50]. \qquad \text{(6l)}$$
As a third case, use of an approximate solution with six unknown parameters,

$$\tilde w = [1,\ \xi,\ \xi^2,\ \xi^3,\ \xi^4,\ \xi^5]\,\bar{\mathbf w}, \qquad \text{(6m)}$$

leads to the parameters

$$\bar{\mathbf w}^T = \frac{p_0L^4}{120\,EI}\,[0,\ 0,\ 4,\ -8,\ 5,\ -1], \qquad \text{(6n)}$$

which give the exact solution, see Eq. (1k) of Example 1.
In summary, the results are:
A. Exact solution, equal to D.
B. First approximate deflection:

$$\tilde w = \frac{p_0L^4}{120\,EI}\left(-0.4167\,\xi + 4\xi^2 - 5.2916\,\xi^3 + 1.8229\,\xi^4\right)$$

C. Second approximate deflection:

$$\tilde w = \frac{p_0L^4}{120\,EI}\left(-2.17 + 5\xi + 4\xi^2 - 8\xi^3 + 2.5\,\xi^4\right)$$

D. Third approximate deflection:

$$\tilde w = \frac{p_0L^4}{120\,EI}\left(4\xi^2 - 8\xi^3 + 5\xi^4 - \xi^5\right)$$

For each case, the values of the displacements at the boundaries are:
A. The same as D.
B. $\tilde w(0) = 0$, $\tilde w'(0) = -0.0035\,p_0L^3/EI \ne 0$, $\tilde w(1) = 0.000955\,p_0L^4/EI \ne 0$.
C. $\tilde w(0) = -0.01805\,p_0L^4/EI \ne 0$, $\tilde w'(0) = 0.0416\,p_0L^3/EI \ne 0$, $\tilde w(1) = 0.011\,p_0L^4/EI \ne 0$.
D. $\tilde w(0) = 0$, $\tilde w'(0) = 0$, $\tilde w(1) = 0$.
Note that case B, which employed a form of approximate solution that satisfied one of the displacement
boundary conditions, led to better results than case C, whose trial function satisfied no displacement boundary
conditions. The higher-order polynomial of case D was the best of all, since it gave the exact solution.
Extended Galerkin's method
Recall from Lecture 3 that Galerkin's method requires that the basis functions satisfy all
boundary conditions, i.e. both essential (displacement) and natural (force) boundary
conditions. These constraints can be relaxed by including these conditions in the global
representation of the fundamental equations. We begin with the global form of the equations
of equilibrium, along with the displacement and force boundary conditions. For a 3D
continuum,

$$\int_\Omega\left(\frac{\partial\sigma_{ij}}{\partial x_j} + f_i\right)\delta u_i\,\mathrm{d}V + \int_{\Gamma_\sigma}\left(\sigma_{ij}n_j - \bar p_i\right)\delta u_i\,\mathrm{d}S - \int_{\Gamma_u}\left(u_i - g_i\right)\delta\!\left(\sigma_{ij}n_j\right)\mathrm{d}S = 0, \qquad (98)_1$$

or, in matrix notation,

$$\int_\Omega \delta\{u\}^T\!\left([\partial]^T\{\sigma\} + \{f\}\right)\mathrm{d}V + \int_{\Gamma_\sigma}\delta\{u\}^T\!\left(\{t\} - \{\bar p\}\right)\mathrm{d}S - \int_{\Gamma_u}\delta\{t\}^T\!\left(\{u\} - \{g\}\right)\mathrm{d}S = 0 \qquad (98)_2$$

(for the first term, see Eq. (54) of Lecture 4). For a beam, (98) reduces to (back to Lecture 7)
$$\int_0^L \delta w(x)\left(EI\,\tilde w^{\,\mathrm{iv}}(x) - p_z(x)\right)\mathrm{d}x + \underbrace{\left[(T-\bar T)\,\delta w - (M-\bar M)\,\delta\psi\right]_0^L}_{\text{on }\Gamma_\sigma} - \underbrace{\left[\delta T\,(w-\bar w) - \delta M\,(\psi-\bar\psi)\right]_0^L}_{\text{on }\Gamma_u} = 0. \qquad (99)$$

Assume that all applied boundary forces and displacements are zero, i.e. w̄ = ψ̄ = T̄ = M̄ = 0, and use
ψ = w′, M = -EIw″, T = -EIw‴. Eq. (99) can then be written as

$$\int_0^L \delta w\,EI\,\tilde w^{\,\mathrm{iv}}\,\mathrm{d}x - \int_0^L \delta w\,p_z\,\mathrm{d}x + \underbrace{\left[\delta w'\,EI\,\tilde w'' - \delta w\,EI\,\tilde w'''\right]_0^L}_{\text{on }\Gamma_\sigma} + \underbrace{\left[\delta w'''\,EI\,\tilde w - \delta w''\,EI\,\tilde w'\right]_0^L}_{\text{on }\Gamma_u} = 0. \qquad (100)$$
Introduce the approximate solution of the form $\tilde w(x) = \mathbf N_u(x)\,\bar{\mathbf w}$:

$$\delta\bar{\mathbf w}^T\Bigg\{\Bigg(\underbrace{\int_0^L EI\,\mathbf N_u^T\mathbf N_u^{\,\mathrm{iv}}\,\mathrm{d}x}_{\bar{\mathbf k}_u} + \underbrace{\Big[EI\big(\mathbf N_u'^T\mathbf N_u'' - \mathbf N_u^T\mathbf N_u'''\big)\Big]_0^L\bigg|_{\Gamma_\sigma}}_{\bar{\mathbf R}_\sigma} + \underbrace{\Big[EI\big(\mathbf N_u'''^T\mathbf N_u - \mathbf N_u''^T\mathbf N_u'\big)\Big]_0^L\bigg|_{\Gamma_u}}_{\bar{\mathbf R}_u}\Bigg)\bar{\mathbf w} - \underbrace{\int_0^L \mathbf N_u^T p_z\,\mathrm{d}x}_{\bar{\mathbf p}_u}\Bigg\} = 0. \qquad (101)$$

For a particular case, retain only those terms corresponding to the actual boundary conditions
for the problem. We see that the unknown parameters w̄ can be found from

$$\left(\bar{\mathbf k}_u + \bar{\mathbf R}_\sigma + \bar{\mathbf R}_u\right)\bar{\mathbf w} = \bar{\mathbf p}_u. \qquad (102)$$
Example 7. Consider again the beam of Example 1. We will employ a form of approximate solution
that does not satisfy all displacement boundary conditions.
As the first choice of an approximate solution, use

$$\tilde w = [\xi,\ \xi^2,\ \xi^3,\ \xi^4]\,\bar{\mathbf w}. \qquad \text{(7a)}$$

This approximate displacement satisfies only the boundary condition w(0) = 0, but not the conditions
ψ(0) = w(L) = M(L) = 0. Substitute Eq. (6b) of Example 6 along with
$\mathbf N_u^{\,\mathrm{iv}} = \frac{1}{L^4}\,[0,\ 0,\ 0,\ 24]$ into Eq. (101):
$$\bar{\mathbf k}_u = \frac{EI}{L^3}\int_0^1\begin{bmatrix}\xi\\\xi^2\\\xi^3\\\xi^4\end{bmatrix}[0,\ 0,\ 0,\ 24]\,\mathrm{d}\xi = \frac{EI}{L^3}\begin{bmatrix}0&0&0&12\\0&0&0&8\\0&0&0&6\\0&0&0&24/5\end{bmatrix}, \qquad \text{(7b)}$$

$$\bar{\mathbf p}_u = p_0L\begin{bmatrix}1/6\\1/12\\1/20\\1/30\end{bmatrix}. \qquad \text{(7c)}$$
We still need to compute the boundary terms R̄_σ and R̄_u. Retaining only the terms corresponding to the prescribed boundary conditions, we find

$$\bar{\mathbf R}_u = EI\Big[\underbrace{\mathbf N_u'''^T(1)\,\mathbf N_u(1)}_{\text{at }\xi=1} + \underbrace{\mathbf N_u''^T(0)\,\mathbf N_u'(0) - \mathbf N_u'''^T(0)\,\mathbf N_u(0)}_{\text{at }\xi=0}\Big], \qquad \text{(7d)}$$

$$\bar{\mathbf R}_\sigma = EI\,\underbrace{\mathbf N_u'^T(1)\,\mathbf N_u''(1)}_{\text{at }\xi=1}. \qquad \text{(7e)}$$

Numerical expressions for R̄_u and R̄_σ are computed as

$$\bar{\mathbf R}_u = \frac{EI}{L^3}\left\{\begin{bmatrix}0\\0\\6\\24\end{bmatrix}[1,\ 1,\ 1,\ 1] + \begin{bmatrix}0\\2\\0\\0\end{bmatrix}[1,\ 0,\ 0,\ 0]\right\} = \frac{EI}{L^3}\begin{bmatrix}0&0&0&0\\2&0&0&0\\6&6&6&6\\24&24&24&24\end{bmatrix}, \qquad \text{(7f)}$$

$$\bar{\mathbf R}_\sigma = \frac{EI}{L^3}\begin{bmatrix}1\\2\\3\\4\end{bmatrix}[0,\ 2,\ 6,\ 12] = \frac{EI}{L^3}\begin{bmatrix}0&2&6&12\\0&4&12&24\\0&6&18&36\\0&8&24&48\end{bmatrix}. \qquad \text{(7g)}$$
Thus, the matrix on the left-hand side of Eq. (102) is

$$\bar{\mathbf k}_u + \bar{\mathbf R}_\sigma + \bar{\mathbf R}_u = \frac{EI}{L^3}\begin{bmatrix}0&2&6&24\\2&4&12&32\\6&12&24&48\\24&32&48&76.8\end{bmatrix}. \qquad \text{(7h)}$$

We find that the parameters w̄ are the same as obtained in Example 6 for the same form of approximate solution, i.e. Eq.
(6g).
As a second case, we choose the form of approximate solution of case C of Example 6. Here

$$\tilde w = [1,\ \xi,\ \xi^2,\ \xi^3,\ \xi^4]\,\bar{\mathbf w}, \qquad \text{(7i)}$$

which satisfies none of the boundary conditions. We find

$$\mathbf N_u' = \frac{1}{L}\,[0,\ 1,\ 2\xi,\ 3\xi^2,\ 4\xi^3], \quad \mathbf N_u'' = \frac{1}{L^2}\,[0,\ 0,\ 2,\ 6\xi,\ 12\xi^2], \quad \mathbf N_u''' = \frac{1}{L^3}\,[0,\ 0,\ 0,\ 6,\ 24\xi], \quad \mathbf N_u^{\,\mathrm{iv}} = \frac{1}{L^4}\,[0,\ 0,\ 0,\ 0,\ 24]. \qquad \text{(7j)}$$
Then compute

$$\bar{\mathbf k}_u = \frac{EI}{L^3}\begin{bmatrix}0&0&0&0&24\\0&0&0&0&12\\0&0&0&0&8\\0&0&0&0&6\\0&0&0&0&24/5\end{bmatrix}, \qquad \text{(7k)}$$

$$\bar{\mathbf R}_\sigma + \bar{\mathbf R}_u = \frac{EI}{L^3}\left\{\begin{bmatrix}0\\1\\2\\3\\4\end{bmatrix}[0,\ 0,\ 2,\ 6,\ 12] + \begin{bmatrix}0\\0\\0\\6\\24\end{bmatrix}[1,\ 1,\ 1,\ 1,\ 1] + \begin{bmatrix}0\\0\\2\\0\\0\end{bmatrix}[0,\ 1,\ 0,\ 0,\ 0] - \begin{bmatrix}0\\0\\0\\6\\0\end{bmatrix}[1,\ 0,\ 0,\ 0,\ 0]\right\} = \frac{EI}{L^3}\begin{bmatrix}0&0&0&0&0\\0&0&2&6&12\\0&2&4&12&24\\0&6&12&24&42\\24&24&32&48&72\end{bmatrix}. \qquad \text{(7l)}$$

This leads to the same system of equations as obtained for case C of Example 6.
For the third and final form of approximate solution, consider that of case D of Example 6. As might be expected,
this leads to the same results as found for the extended Ritz method in Example 6.
It is of interest that the extended Ritz and Galerkin methods gave the same results for the
same forms of approximate solution for the beam in Examples 6 and 7. This is not surprising,
since both of these methods are set up such that the same restrictions, i.e., no boundary
conditions need to be satisfied, are imposed on their trial solutions, and both involve the global
form of the equilibrium equations. Also, for a beam, it is readily shown that

$$\Big[\mathbf k + \mathbf R + \mathbf R^T\Big]_{\text{Ritz}} = \Big[\bar{\mathbf k}_u + \bar{\mathbf R}_\sigma + \bar{\mathbf R}_u\Big]_{\text{Galerkin}}. \qquad (103)$$
To see this, integrate k_{u,Ritz} twice by parts. Thus

$$\mathbf k_{u,\text{Ritz}} = \int_0^L EI\,\mathbf N_u''^T\mathbf N_u''\,\mathrm{d}x = \int_0^L EI\,\mathbf N_u^{\,\mathrm{iv}\,T}\mathbf N_u\,\mathrm{d}x + \Big[EI\big(\mathbf N_u''^T\mathbf N_u' - \mathbf N_u'''^T\mathbf N_u\big)\Big]_0^L. \qquad (104)$$

Adding this to R + Rᵀ, with R from Eq. (95), gives k̄_u + R̄_σ + R̄_u, where k̄_u, R̄_σ and R̄_u are given
by Eq. (101).
In general, however, the non-extended Ritz and Galerkin methods can be expected to give different
results.
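The integration-by-parts identity (104) holds for any sufficiently smooth basis; a quick symbolic check with the monomial basis of Example 6 (a sketch, assuming L = EI = 1) compares both sides entry by entry:

```python
import sympy as sp

x = sp.symbols('x')
# monomial basis of Example 6 on a unit-length beam (L = 1, EI = 1)
N = sp.Matrix([[x, x**2, x**3, x**4]])   # row vector N_u

d = lambda M, k: M.diff(x, k)
lhs = sp.integrate(d(N, 2).T * d(N, 2), (x, 0, 1))        # k_Ritz
bnd = d(N, 2).T*d(N, 1) - d(N, 3).T*N                     # boundary bracket
rhs = sp.integrate(d(N, 4).T * N, (x, 0, 1)) \
      + bnd.subs(x, 1) - bnd.subs(x, 0)
print(sp.simplify(lhs - rhs))   # zero 4x4 matrix
```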
Recommended literature for further reading
Finlayson, B. A.: The Method of Weighted Residuals and Variational Principles, Academic
Press, New York, 1972.
Problems
1. Solve the differential equation u″ + u + x = 0 for 0 ≤ x ≤ 1, with u(0) = u(1) = 0. Use
collocation and compare your answer with the exact solution, which is u(x) = sin x / sin 1 - x.
Hint: Choose a simple approximate solution such as ũ = u₁x(1 - x), which satisfies the
boundary conditions, as would x²(1 - x), x³(1 - x), etc. The residual is

$$R = \tilde u'' + \tilde u + x = u_1\left(x - x^2 - 2\right) + x, \qquad \text{where } A = \frac{\mathrm{d}^2}{\mathrm{d}x^2} + 1.$$

For a single collocation point x = 1/2,

$$R\big|_{x=1/2} = \tfrac14 u_1 - 2u_1 + \tfrac12 = 0.$$
2. Solve the Dirichlet problem for Poisson's equation in the domain Ω (-1 < x < 1, -1 < y < 1)
with homogeneous boundary conditions on the boundary Γ, i.e.

$$\Delta u = -1 \ \text{in } \Omega, \qquad u = 0 \ \text{on } \Gamma, \qquad \text{(P1.1)}$$

using a boundary residual method with collocation.
Hint: A particular solution is -(x² + y²)/4. Two linearly independent functions satisfying the
homogeneous differential equation are 1 and x⁴ - 6x²y² + y⁴. An approximate solution which
satisfies the differential equation would be

$$\tilde u = -\tfrac14\left(x^2 + y^2\right) + u_1 + u_2\left(x^4 - 6x^2y^2 + y^4\right).$$

Choose u₁, u₂ to satisfy ũ = 0 on the square x = ±1, y = ±1. For collocation points (x₁, y₁) = (1, 0)
and (x₂, y₂) = (1, 1), u₁ = 3/10 and u₂ = -1/20. This gives ũ = 0.3 at x = y = 0, as compared to the
exact value u = 0.2947. The exact solution obtained by the method of orthogonal series
expansion was derived in Problem 1 of Lecture 3. We borrow the results obtained using
MAPLE8 in problem3-1.mws and compare the approximate solution with the exact one in
problem56-2.mws.
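The two collocation conditions form a small linear system; a numerical sketch (the particular solution is moved to the right-hand side):

```python
import numpy as np

# boundary collocation: u_tilde = -(x^2+y^2)/4 + u1 + u2*(x^4 - 6*x^2*y^2 + y^4);
# enforce u_tilde = 0 at (1, 0) and (1, 1)
A = np.array([[1.0,  1.0],    # coefficients of (u1, u2) at (1, 0)
              [1.0, -4.0]])   # at (1, 1): 1 - 6 + 1 = -4
b = np.array([0.25, 0.5])     # -(x^2 + y^2)/4 moved to the RHS
u1, u2 = np.linalg.solve(A, b)
print(u1, u2)                 # 0.3 -0.05, i.e. u1 = 3/10, u2 = -1/20
print(u1)                     # u_tilde at the origin: 0.3 (exact: 0.2947)
```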
4. Use the subdomain interior method to solve Problem 1.
Hint: For the form of approximate solution, use that of Problem 1. For a single subdomain
0 ≤ x ≤ 1,

$$\int_0^1 R\,\mathrm{d}x = \int_0^1\left[u_1\left(x - x^2 - 2\right) + x\right]\mathrm{d}x = -\frac{11}{6}u_1 + \frac12 = 0,$$

which gives u₁ = 3/11, so that ũ = (3/11)x(1 - x). Results:

  x      Exact solution   Subdomain
  1/4    0.044            0.051
  1/2    0.070            0.068
  3/4    0.060            0.051
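The subdomain integral above can be reproduced symbolically (a sketch):

```python
import sympy as sp

x, u1 = sp.symbols('x u1')
R = u1*(x - x**2 - 2) + x                  # residual from Problem 1
eq = sp.integrate(R, (x, 0, 1))            # single subdomain 0 <= x <= 1
sol = sp.solve(eq, u1)[0]
print(eq, sol)    # -11*u1/6 + 1/2   3/11
```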
5. Use Galerkin's method to solve the differential equation of Problem 1.
Hint: For the approximate solution ũ = u₁x(1 - x),

$$\int_0^1 R(x)\,x(1-x)\,\mathrm{d}x = \int_0^1\left[u_1\left(x - x^2 - 2\right) + x\right]x(1-x)\,\mathrm{d}x = -\frac{3}{10}u_1 + \frac{1}{12} = 0$$

gives u₁ = 5/18 and ũ = (5/18)x(1 - x). Results:

  x      Exact solution   Galerkin
  1/4    0.044            0.052
  1/2    0.070            0.069
  3/4    0.060            0.052
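The Galerkin weighting differs from the subdomain method only by the weight function; a symbolic sketch:

```python
import sympy as sp

x, u1 = sp.symbols('x u1')
R = u1*(x - x**2 - 2) + x                        # residual from Problem 1
eq = sp.integrate(R * x*(1 - x), (x, 0, 1))      # weighted by the basis function
sol = sp.solve(eq, u1)[0]
print(eq, sol)    # -3*u1/10 + 1/12   5/18
```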
6. Solve the Dirichlet problem for Laplace's equation in the semi-infinite strip Ω (0 < x < 1, 0 < y < ∞)
with boundary conditions u(0, y) = u(1, y) = 0 for y > 0, u(x, 0) = x(1 - x), and u(x, y) → 0 as y → ∞ for
0 ≤ x ≤ 1.
a) Use the approximate solution ũ = u₁(y)x(1 - x). Find the boundary conditions on u₁(y)
and the residual function.
b) Apply collocation along the line x = 1/3. Answer: u₁(y) = exp(-3y).
c) Compare with the exact solution obtained in a similar fashion as in Problem 1 of Lecture
3, see Eq. (P1.11). In the given case, the solution has the form

$$X_n(x) = A_n\sin k_n x + B_n\cos k_n x, \qquad \text{(P6.1)}$$

where k_n² = (nπ)², n = 1, 2, 3, …, and B_n = 0. Y_n(y) has the form

$$Y_n(y) = C_n\exp(-n\pi y). \qquad \text{(P6.2)}$$

The boundary condition u(x, 0) = x(1 - x) is expanded into a Fourier series

$$x(1-x) = \sum_{n=1,2,\ldots} v_n\sin n\pi x, \qquad \text{(P6.3)}$$

where

$$v_n = 2\int_0^1 x(1-x)\sin n\pi x\,\mathrm{d}x = \frac{4}{n^3\pi^3}\left[1 - (-1)^n\right], \qquad n = 1, 2, \ldots, \qquad \text{(P6.4)}$$

see problem56-6.mws.
The complete solution is then

$$u = \sum_{n=1}^{\infty}\frac{4}{n^3\pi^3}\left[1 - (-1)^n\right]\sin(n\pi x)\exp(-n\pi y), \qquad \text{(P6.5)}$$

see again problem56-6.mws.
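The Fourier coefficient formula (P6.4) can be verified symbolically for the first two modes (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
# Fourier coefficients (P6.4): v_n = 2 * Integral(x*(1-x)*sin(n*pi*x), 0..1)
v1 = 2*sp.integrate(x*(1 - x)*sp.sin(sp.pi*x), (x, 0, 1))
v2 = 2*sp.integrate(x*(1 - x)*sp.sin(2*sp.pi*x), (x, 0, 1))
print(v1, v2)   # 8/pi**3 and 0, as given by 4*(1 - (-1)**n)/(n*pi)**3
```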
7. Solve the preceding problem using Galerkin's method with the approximate solution of the
form ũ = u₁(y)x(1 - x) + u₂(y)x²(1 - x)², with two unknown functions u₁ and u₂.
Hint: The residual is

$$R = u_1''(y)\,x(1-x) - 2u_1(y) + u_2''(y)\,x^2(1-x)^2 + 2u_2(y)\left(1 - 6x + 6x^2\right).$$

According to Galerkin's method, it is required that

$$\int_0^1 R(x,y)\,x(1-x)\,\mathrm{d}x = 0 \quad\text{and}\quad \int_0^1 R(x,y)\,x^2(1-x)^2\,\mathrm{d}x = 0. \qquad \text{(P7.1)}$$

We obtain two ordinary differential equations:

$$\frac{1}{30}u_1'' - \frac{1}{3}u_1 + \frac{1}{140}u_2'' - \frac{1}{15}u_2 = 0,$$
$$\frac{1}{140}u_1'' - \frac{1}{15}u_1 + \frac{1}{630}u_2'' - \frac{2}{105}u_2 = 0. \qquad \text{(P7.2)}$$

Seek the solution in the form u₁(y) = u₁₀ exp(λy), u₂(y) = u₂₀ exp(λy), where u₁₀ and u₂₀
are constants. We obtain

$$\left(\frac{\lambda^2}{30} - \frac13\right)u_{10} + \left(\frac{\lambda^2}{140} - \frac{1}{15}\right)u_{20} = 0,$$
$$\left(\frac{\lambda^2}{140} - \frac{1}{15}\right)u_{10} + \left(\frac{\lambda^2}{630} - \frac{2}{105}\right)u_{20} = 0. \qquad \text{(P7.3)}$$

Set the determinant of the system (P7.3) equal to zero and find the roots; choose only the
negative roots λ. Then solve the system for u₁₀ and u₂₀ as unknowns for λ₁ = -10.1059 and λ₂ =
-3.1416. Set u₂₀⁽¹⁾ = 1 (the superscript 1 refers to λ₁) and find u₁₀⁽¹⁾ = -0.21583. Similarly, set
u₂₀⁽²⁾ = 1 and find u₁₀⁽²⁾ = 0.8825, see problem56-6.mws. The general solution is

$$u_1(y) = C_1 u_{10}^{(1)}\exp(\lambda_1 y) + C_2 u_{10}^{(2)}\exp(\lambda_2 y),$$
$$u_2(y) = C_1 u_{20}^{(1)}\exp(\lambda_1 y) + C_2 u_{20}^{(2)}\exp(\lambda_2 y). \qquad \text{(P7.4)}$$

The constants C₁ and C₂ are found from the boundary conditions u₁(0) = 1, u₂(0) = 0.
Finally, we obtain the solution

$$u_1(y) = 0.8035\exp(-3.1416\,y) + 0.1965\exp(-10.1059\,y),$$
$$u_2(y) = 0.9105\left[\exp(-3.1416\,y) - \exp(-10.1059\,y)\right]. \qquad \text{(P7.5)}$$

For comparison with the exact solution, see problem56-6.mws.
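The characteristic roots quoted above follow from the determinant of (P7.3), which is a quadratic in s = λ²; a numerical sketch using exact rational coefficients:

```python
import numpy as np
from fractions import Fraction as F

# determinant of (P7.3) as a quadratic in s = lambda**2:
# (s/30 - 1/3)*(s/630 - 2/105) - (s/140 - 1/15)**2 = 0
c2 = F(1, 30)*F(1, 630) - F(1, 140)**2
c1 = -(F(1, 30)*F(2, 105) + F(1, 3)*F(1, 630)) + 2*F(1, 140)*F(1, 15)
c0 = F(1, 3)*F(2, 105) - F(1, 15)**2
s = np.roots([float(c2), float(c1), float(c0)])   # both roots are positive
lam = -np.sqrt(s)                                  # take the decaying branch
print(np.sort(lam))   # approx [-10.1060  -3.1416]
```

Note that the quadratic reduces to s² - 112s + 1008 = 0, whose smaller root is very close to π², which is why λ₂ nearly coincides with the exact leading decay rate -π of Problem 6.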