Partial Differential Equations

by

Gordon C. Everstine

21 January 2010

Copyright © 2001–2010 by Gordon C. Everstine.
All rights reserved.

This book was typeset with LaTeX2e (MiKTeX).
Preface

These lecture notes are intended to supplement a one-semester graduate-level engineering course at The George Washington University in numerical methods for the solution of partial differential equations. Both finite difference and finite element methods are included. The main prerequisite is a standard undergraduate calculus sequence including ordinary differential equations. In general, the mix of topics and level of presentation are aimed at upper-level undergraduates and first-year graduate students in mechanical, aerospace, and civil engineering.

Gordon Everstine
Gaithersburg, Maryland
January 2010
Contents

1 Numerical Solution of Ordinary Differential Equations
  1.1 Euler's Method
  1.2 Truncation Error for Euler's Method
  1.3 Runge-Kutta Methods
  1.4 Systems of Equations
  1.5 Finite Differences
  1.6 Boundary Value Problems
    1.6.1 Example
    1.6.2 Solving Tridiagonal Systems
  1.7 Shooting Methods

2 Partial Differential Equations
  2.1 Classical Equations of Mathematical Physics
  2.2 Classification of Partial Differential Equations
  2.3 Transformation to Nondimensional Form

3 Finite Difference Solution of Partial Differential Equations
  3.1 Parabolic Equations
    3.1.1 Explicit Finite Difference Method
    3.1.2 Crank-Nicolson Implicit Method
    3.1.3 Derivative Boundary Conditions
  3.2 Hyperbolic Equations
    3.2.1 The d'Alembert Solution of the Wave Equation
    3.2.2 Finite Differences
    3.2.3 Starting Procedure for Explicit Algorithm
    3.2.4 Nonreflecting Boundaries
  3.3 Elliptic Equations
    3.3.1 Derivative Boundary Conditions

4 Direct Finite Element Analysis
  4.1 Linear Mass-Spring Systems
  4.2 Matrix Assembly
  4.3 Constraints
  4.4 Example and Summary
  4.5 Pin-Jointed Rod Element
  4.6 Pin-Jointed Frame Example
  4.7 Boundary Conditions by Matrix Partitioning
  4.8 Alternative Approach to Constraints
  4.9 Beams in Flexure
  4.10 Direct Approach to Continuum Problems

5 Change of Basis
  5.1 Tensors
  5.2 Examples of Tensors
  5.3 Isotropic Tensors

6 Calculus of Variations
  6.1 Example 1: The Shortest Distance Between Two Points
  6.2 Example 2: The Brachistochrone
  6.3 Constraint Conditions
  6.4 Example 3: A Constrained Minimization Problem
  6.5 Functions of Several Independent Variables
  6.6 Example 4: Poisson's Equation
  6.7 Functions of Several Dependent Variables

7 Variational Approach to the Finite Element Method
  7.1 Index Notation and Summation Convention
  7.2 Deriving Variational Principles
  7.3 Shape Functions
  7.4 Variational Approach
  7.5 Matrices for Linear Triangle
  7.6 Interpretation of Functional
  7.7 Stiffness in Elasticity in Terms of Shape Functions
  7.8 Element Compatibility
  7.9 Method of Weighted Residuals (Galerkin's Method)

8 Potential Fluid Flow With Finite Elements
  8.1 Finite Element Model
  8.2 Application of Symmetry
  8.3 Free Surface Flows
  8.4 Use of Complex Numbers and Phasors in Wave Problems
  8.5 2-D Wave Maker
  8.6 Linear Triangle Matrices for 2-D Wave Maker Problem
  8.7 Mechanical Analogy for the Free Surface Problem

Bibliography
Index
List of Figures

1 1-DOF Mass-Spring System.
2 Simply-Supported Beam With Distributed Load.
3 Finite Difference Approximations to Derivatives.
4 The Shooting Method.
5 Mesh for 1-D Heat Equation.
6 Heat Equation Stencil for Explicit Finite Difference Algorithm.
7 Heat Equation Stencil for r = 1/10.
8 Heat Equation Stencils for r = 1/2 and r = 1.
9 Explicit Finite Difference Solution With r = 0.48.
10 Explicit Finite Difference Solution With r = 0.52.
11 Mesh for Crank-Nicolson.
12 Stencil for Crank-Nicolson Algorithm.
13 Treatment of Derivative Boundary Conditions.
14 Propagation of Initial Displacement.
15 Initial Velocity Function.
16 Propagation of Initial Velocity.
17 Domains of Influence and Dependence.
18 Mesh for Explicit Solution of Wave Equation.
19 Stencil for Explicit Solution of Wave Equation.
20 Domains of Dependence for r > 1.
21 Finite Difference Mesh at Nonreflecting Boundary.
22 Finite Length Simulation of an Infinite Bar.
23 Laplace's Equation on Rectangular Domain.
24 Finite Difference Grid on Rectangular Domain.
25 The Neighborhood of Point (i, j).
26 20-Point Finite Difference Mesh.
27 Laplace's Equation With Dirichlet and Neumann B.C.
28 Treatment of Neumann Boundary Conditions.
29 2-DOF Mass-Spring System.
30 A Single Spring Element.
31 3-DOF Spring System.
32 Spring System With Constraint.
33 4-DOF Spring System.
34 Pin-Jointed Rod Element.
35 Truss Structure Modeled With Pin-Jointed Rods.
36 The Degrees of Freedom for a Pin-Jointed Rod Element in 2-D.
37 Computing 2-D Stiffness of Pin-Jointed Rod.
38 Pin-Jointed Frame Example.
39 Example With Reactions and Loads at Same DOF.
40 Large Spring Approach to Constraints.
41 DOF for Beam in Flexure (2-D).
42 The Beam Problem Associated With Column 1.
43 The Beam Problem Associated With Column 2.
44 DOF for 2-D Beam Element.
45 Plate With Constraints and Loads.
46 DOF for the Linear Triangular Membrane Element.
47 Element Coordinate Systems in the Finite Element Method.
48 Basis Vectors in Polar Coordinate System.
49 Change of Basis.
50 Element Coordinate System for Pin-Jointed Rod.
51 Minimum, Maximum, and Neutral Stationary Values.
52 Curve of Minimum Length Between Two Points.
53 The Brachistochrone Problem.
54 Several Brachistochrone Solutions.
55 A Constrained Minimization Problem.
56 Two-Dimensional Finite Element Mesh.
57 Triangular Finite Element.
58 Axial Member (Pin-Jointed Truss Element).
59 Neumann Boundary Condition at Internal Boundary.
60 Two Adjacent Finite Elements.
61 Triangular Mesh at Boundary.
62 Discontinuous Function.
63 Compatibility at an Element Boundary.
64 A Vector Analogy for Galerkin's Method.
65 Potential Flow Around Solid Body.
66 Streamlines Around Circular Cylinder.
67 Symmetry With Respect to y = 0.
68 Antisymmetry With Respect to x = 0.
69 Boundary Value Problem for Flow Around Circular Cylinder.
70 The Free Surface Problem.
71 The Complex Amplitude.
72 Phasor Addition.
73 2-D Wave Maker: Time Domain.
74 Graphical Solution of ω²/α = g tanh(αd).
75 2-D Wave Maker: Frequency Domain.
76 Single DOF Mass-Spring-Dashpot System.
1 Numerical Solution of Ordinary Differential Equations

An ordinary differential equation (ODE) is an equation that involves an unknown function (the dependent variable) and some of its derivatives with respect to a single independent variable. An nth-order equation has the highest order derivative of order n:

    f(x, y, y', y'', ..., y^(n)) = 0 for a ≤ x ≤ b,    (1.1)

where y = y(x), and y^(n) denotes the nth derivative with respect to x. An nth-order ODE requires the specification of n conditions to assure uniqueness of the solution. If all conditions are imposed at x = a, the conditions are called initial conditions (I.C.), and the problem is an initial value problem (IVP). If the conditions are imposed at both x = a and x = b, the conditions are called boundary conditions (B.C.), and the problem is a boundary value problem (BVP).
For example, consider the initial value problem

    m ü + k u = f(t),  u(0) = 5,  u̇(0) = 0,    (1.2)

where u = u(t), and dots denote differentiation with respect to the time t. This equation describes a one-degree-of-freedom mass-spring system which is released from rest and subjected to a time-dependent force, as illustrated in Fig. 1. Initial value problems generally arise in time-dependent situations.

An example of a boundary value problem is shown in Fig. 2, for which the differential equation is

    EI u''(x) = M(x) = (FL/2) x − F x (x/2),  u(0) = u(L) = 0,    (1.3)

where the independent variable x is the distance from the left end, u is the transverse displacement, and M(x) is the internal bending moment at x. Boundary value problems generally arise in static (time-independent) situations. As we will see, IVPs and BVPs must be treated differently numerically.
A system of n first-order ODEs has the form

    y_1'(x) = f_1(x, y_1, y_2, ..., y_n)
    y_2'(x) = f_2(x, y_1, y_2, ..., y_n)
      ⋮
    y_n'(x) = f_n(x, y_1, y_2, ..., y_n)    (1.4)
[Figure 1: 1-DOF Mass-Spring System.]
[Figure 2: Simply-Supported Beam With Distributed Load F (N/m) over Length L.]
for a ≤ x ≤ b. A single nth-order ODE is equivalent to a system of n first-order ODEs. This equivalence can be seen by defining a new set of unknowns y_1, y_2, ..., y_n such that y_1 = y, y_2 = y', y_3 = y'', ..., y_n = y^(n−1). For example, consider the third-order IVP

    y''' = x y' + e^x y + x^2 + 1,  x ≥ 0,
    y(0) = 1,  y'(0) = 0,  y''(0) = −1.    (1.5)
To obtain an equivalent first-order system, define y_1 = y, y_2 = y', y_3 = y'' to obtain

    y_1' = y_2
    y_2' = y_3
    y_3' = x y_2 + e^x y_1 + x^2 + 1    (1.6)

with initial conditions y_1(0) = 1, y_2(0) = 0, y_3(0) = −1.
1.1 Euler's Method

This method is the simplest of the numerical methods for solving initial value problems. Consider the IVP

    y'(x) = f(x, y),  x ≥ a,  y(a) = η.    (1.7)

To effect a numerical solution, we discretize the x-axis:

    a = x_0 < x_1 < x_2 < ···,

where, for uniform spacing,

    x_i − x_{i−1} = h,    (1.8)

and h is considered small. With this discretization, we can approximate the derivative y'(x) with the forward finite difference

    y'(x) ≈ [y(x + h) − y(x)]/h.    (1.9)

If we let y_k represent the numerical approximation to y(x_k), then

    y'(x_k) ≈ (y_{k+1} − y_k)/h.    (1.10)

Thus, a numerical (difference) approximation to the ODE, Eq. 1.7, is

    (y_{k+1} − y_k)/h = f(x_k, y_k),  k = 0, 1, 2, ...    (1.11)

or

    y_{k+1} = y_k + h f(x_k, y_k),  k = 0, 1, 2, ...,  y_0 = η.    (1.12)

This recursive algorithm is called Euler's method.
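As a concrete illustration (not part of the original notes), the recursion of Eq. 1.12 might be coded as follows; the function name and test problem are illustrative choices:

```python
import math

def euler(f, a, b, eta, n):
    """Integrate y'(x) = f(x, y), y(a) = eta, over [a, b] in n uniform steps."""
    h = (b - a) / n
    x, y = a, eta
    for _ in range(n):
        y = y + h * f(x, y)   # Eq. 1.12
        x = x + h
    return y

# Test problem: y' = -y, y(0) = 1, whose exact solution is y(x) = exp(-x).
approx = euler(lambda x, y: -y, 0.0, 1.0, 1.0, 1000)
```

For this problem the error at x = 1 decreases roughly in proportion to h, consistent with the first-order truncation error discussed in the next section.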
1.2 Truncation Error for Euler's Method

There are two types of error that arise in numerical methods: truncation error (which arises primarily from a discretization process) and rounding error (which arises from the finiteness of number representations in the computer). Refining a mesh to reduce the truncation error often causes the rounding error to increase.

To estimate the truncation error for Euler's method, we first recall Taylor's theorem with remainder, which states that a function f(x) can be expanded in a series about the point x = c:

    f(x) = f(c) + f'(c)(x − c) + [f''(c)/2!](x − c)^2 + ··· + [f^(n)(c)/n!](x − c)^n + [f^(n+1)(ξ)/(n + 1)!](x − c)^{n+1},    (1.13)

where ξ is between x and c. The last term in Eq. 1.13 is referred to as the remainder term. Note also that Eq. 1.13 is an equality, not an approximation.

In Eq. 1.13, let x = x_{k+1} and c = x_k, in which case

    y(x_{k+1}) = y(x_k) + h y'(x_k) + (1/2) h^2 y''(ξ_k),    (1.14)

where x_k ≤ ξ_k ≤ x_{k+1}. Since y satisfies the ODE, Eq. 1.7,

    y'(x_k) = f(x_k, y(x_k)),    (1.15)

where y(x_k) is the actual solution at x_k. Hence,

    y(x_{k+1}) = y(x_k) + h f(x_k, y(x_k)) + (1/2) h^2 y''(ξ_k).    (1.16)

Like Eq. 1.13, this equation is an equality, not an approximation.

By comparing this last equation to Euler's approximation, Eq. 1.12, it is clear that Euler's method is obtained by omitting the remainder term (1/2) h^2 y''(ξ_k) in the Taylor expansion of y(x_{k+1}) at the point x_k. The omitted term accounts for the truncation error in Euler's method at each step. This error is a local error, since the error occurs at each step regardless of the error in the previous step. The accumulation of local errors is referred to as the global error, which is the more important error but much more difficult to compute.

Most algorithms for solving ODEs are derived by expanding the solution function in a Taylor series and then omitting certain terms.
1.3 Runge-Kutta Methods

Euler's method is a first-order method, since it was obtained by omitting terms in the Taylor series expansion containing powers of h greater than one. To derive a second-order method, we again use Taylor's theorem with remainder to obtain

    y(x_{k+1}) = y(x_k) + h y'(x_k) + (1/2) h^2 y''(x_k) + (1/6) h^3 y'''(ξ_k)    (1.17)

for some ξ_k such that x_k ≤ ξ_k ≤ x_{k+1}. Since, from the ODE (Eq. 1.7),

    y'(x_k) = f(x_k, y(x_k)),    (1.18)

we can approximate

    y''(x) = df(x, y(x))/dx = [f(x + h, y(x + h)) − f(x, y(x))]/h + O(h),    (1.19)

where we use the "big O" notation O(h) to represent terms of order h as h → 0. [For example, 2h^3 = O(h^3), 3h^2 + 5h^4 = O(h^2), h^2 O(h) = O(h^3), and −287 h^4 e^{−h} = O(h^4).] From these last two equations, Eq. 1.17 can then be written as

    y(x_{k+1}) = y(x_k) + h f(x_k, y(x_k)) + (h/2)[f(x_{k+1}, y(x_{k+1})) − f(x_k, y(x_k))] + O(h^3),    (1.20)

which leads (after combining terms) to the difference equation

    y_{k+1} = y_k + (1/2) h [f(x_k, y_k) + f(x_{k+1}, y_{k+1})].    (1.21)

This formula is a second-order approximation to the original differential equation y'(x) = f(x, y) (Eq. 1.7), but it is an inconvenient approximation, since y_{k+1} appears on both sides of the formula. (Such a formula is called an implicit method, since y_{k+1} is defined implicitly. An explicit method would have y_{k+1} appear only on the left-hand side.)

To obtain instead an explicit formula, we use the approximation

    y_{k+1} = y_k + h f(x_k, y_k)    (1.22)

to obtain

    y_{k+1} = y_k + (1/2) h [f(x_k, y_k) + f(x_{k+1}, y_k + h f(x_k, y_k))].    (1.23)

This formula is the Runge-Kutta formula of second order. Other higher-order formulas can be derived similarly. For example, a fourth-order formula turns out to be popular in applications.

We illustrate the implementation of the second-order Runge-Kutta formula, Eq. 1.23, with the following algorithm. We first make the following three definitions:

    a_k = f(x_k, y_k),    (1.24)
    b_k = y_k + h a_k,    (1.25)
    c_k = f(x_{k+1}, b_k),    (1.26)

in which case

    y_{k+1} = y_k + (1/2) h (a_k + c_k).    (1.27)

The calculations can then be performed conveniently with the following spreadsheet:

    k | x_k | y_k | a_k | b_k | c_k | y_{k+1}
    0 |     |     |     |     |     |
    1 |     |     |     |     |     |
    2 |     |     |     |     |     |
    ⋮
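The spreadsheet procedure of Eqs. 1.24–1.27 can equally be sketched in code (an illustrative implementation, not code from the notes):

```python
import math

def rk2(f, a, b, eta, n):
    """Second-order Runge-Kutta (Eq. 1.23) for y'(x) = f(x, y), y(a) = eta."""
    h = (b - a) / n
    x, y = a, eta
    for _ in range(n):
        ak = f(x, y)                  # Eq. 1.24
        bk = y + h * ak               # Eq. 1.25 (Euler predictor)
        ck = f(x + h, bk)             # Eq. 1.26
        y = y + 0.5 * h * (ak + ck)   # Eq. 1.27
        x = x + h
    return y

# Same test problem as for Euler's method: y' = -y, y(0) = 1, exact y(1) = exp(-1).
approx = rk2(lambda x, y: -y, 0.0, 1.0, 1.0, 100)
```

With only 100 steps this second-order formula is already far more accurate than Euler's method with 1000 steps.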
1.4 Systems of Equations

The methods just derived can be extended directly to systems of equations. Consider the initial value problem involving two equations:

    y'(x) = f(x, y(x), z(x))
    z'(x) = g(x, y(x), z(x))
    y(a) = η,  z(a) = ξ.    (1.28)

We recall from Eq. 1.12 that, for one equation, Euler's method uses the recursive formula

    y_{k+1} = y_k + h f(x_k, y_k).    (1.29)

This formula is directly extendible to two equations as

    y_{k+1} = y_k + h f(x_k, y_k, z_k)
    z_{k+1} = z_k + h g(x_k, y_k, z_k)
    y_0 = η,  z_0 = ξ.    (1.30)

We recall from Eq. 1.23 that, for one equation, the second-order Runge-Kutta method uses the recursive formula

    y_{k+1} = y_k + (1/2) h [f(x_k, y_k) + f(x_{k+1}, y_k + h f(x_k, y_k))].    (1.31)

For two equations, this formula becomes

    y_{k+1} = y_k + (1/2) h [f(x_k, y_k, z_k) + f(x_{k+1}, y_k + h f(x_k, y_k, z_k), z_k + h g(x_k, y_k, z_k))]
    z_{k+1} = z_k + (1/2) h [g(x_k, y_k, z_k) + g(x_{k+1}, y_k + h f(x_k, y_k, z_k), z_k + h g(x_k, y_k, z_k))]
    y_0 = η,  z_0 = ξ.    (1.32)
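The recursion of Eq. 1.32 can be sketched directly (an illustrative implementation; as a hypothetical test case we convert ü + u = 0, u(0) = 1, u̇(0) = 0 to the first-order pair y' = z, z' = −y):

```python
import math

def rk2_system(f, g, a, b, eta, xi, n):
    """Second-order Runge-Kutta for the pair of Eq. 1.32:
    y' = f(x, y, z), z' = g(x, y, z), y(a) = eta, z(a) = xi."""
    h = (b - a) / n
    x, y, z = a, eta, xi
    for _ in range(n):
        fk, gk = f(x, y, z), g(x, y, z)
        yp, zp = y + h * fk, z + h * gk   # Euler predictors (inner arguments in Eq. 1.32)
        y = y + 0.5 * h * (fk + f(x + h, yp, zp))
        z = z + 0.5 * h * (gk + g(x + h, yp, zp))
        x = x + h
    return y, z

# u'' + u = 0, u(0) = 1, u'(0) = 0  ->  y' = z, z' = -y; exact u(x) = cos x.
u, v = rk2_system(lambda x, y, z: z, lambda x, y, z: -y,
                  0.0, math.pi, 1.0, 0.0, 200)
```

At x = π the computed pair (u, v) approximates (cos π, −sin π) = (−1, 0).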
[Figure 3: Finite Difference Approximations to Derivatives, showing the backward, forward, and central difference slopes at x.]
1.5 Finite Differences

Before addressing boundary value problems, we want to develop further the notion of finite difference approximation of derivatives.

Consider a function y(x) for which we want to compute the derivative y'(x) at some point x. If we discretize the x-axis with uniform spacing h, we could approximate the derivative using the forward difference formula

    y'(x) ≈ [y(x + h) − y(x)]/h,    (1.33)

which is the slope of the line to the right of x (Fig. 3). We could also approximate the derivative using the backward difference formula

    y'(x) ≈ [y(x) − y(x − h)]/h,    (1.34)

which is the slope of the line to the left of x. Since, in general, there is no basis for choosing one of these approximations over the other, an intuitively more appealing approximation results from the average of these formulas:

    y'(x) ≈ (1/2){[y(x + h) − y(x)]/h + [y(x) − y(x − h)]/h}    (1.35)

or

    y'(x) ≈ [y(x + h) − y(x − h)]/(2h).    (1.36)

This formula, which is more accurate than either the forward or backward difference formulas, is the central finite difference approximation to the derivative.

Similar approximations can be derived for second derivatives. Using forward differences,

    y''(x) ≈ [y'(x + h) − y'(x)]/h ≈ [y(x + 2h) − y(x + h)]/h^2 − [y(x + h) − y(x)]/h^2
           = [y(x + 2h) − 2y(x + h) + y(x)]/h^2.    (1.37)

This formula, which involves three points forward of x, is the forward difference approximation to the second derivative. Similarly, using backward differences,

    y''(x) ≈ [y'(x) − y'(x − h)]/h ≈ [y(x) − y(x − h)]/h^2 − [y(x − h) − y(x − 2h)]/h^2
           = [y(x) − 2y(x − h) + y(x − 2h)]/h^2.    (1.38)

This formula, which involves three points backward of x, is the backward difference approximation to the second derivative. The central finite difference approximation to the second derivative uses instead the three points which bracket x:

    y''(x) ≈ [y(x + h) − 2y(x) + y(x − h)]/h^2.    (1.39)

This last result can also be obtained by using forward differences for the second derivative followed by backward differences for the first derivatives, or vice versa.

The central difference formula for second derivatives can alternatively be derived using Taylor series expansions:

    y(x + h) = y(x) + h y'(x) + (h^2/2) y''(x) + (h^3/6) y'''(x) + O(h^4).    (1.40)

Similarly, by replacing h by −h,

    y(x − h) = y(x) − h y'(x) + (h^2/2) y''(x) − (h^3/6) y'''(x) + O(h^4).    (1.41)

The addition of these two equations yields

    y(x + h) + y(x − h) = 2y(x) + h^2 y''(x) + O(h^4)    (1.42)

or

    y''(x) = [y(x + h) − 2y(x) + y(x − h)]/h^2 + O(h^2),    (1.43)

which, because of the error term, shows that the formula is second-order accurate.
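The error behavior above can be checked numerically; the following sketch (an illustration with arbitrary function names, not from the notes) differentiates sin x and confirms that halving h roughly halves the forward difference error but quarters the central difference error:

```python
import math

def dforward(y, x, h):   # forward difference, Eq. 1.33
    return (y(x + h) - y(x)) / h

def dcentral(y, x, h):   # central difference, Eq. 1.36
    return (y(x + h) - y(x - h)) / (2.0 * h)

# Differentiate sin x at x = 1 (exact derivative cos 1) for two step sizes.
x0, exact = 1.0, math.cos(1.0)
errs_f = [abs(dforward(math.sin, x0, h) - exact) for h in (0.1, 0.05)]
errs_c = [abs(dcentral(math.sin, x0, h) - exact) for h in (0.1, 0.05)]
```

The error ratio errs_f[0]/errs_f[1] is close to 2 (first order), while errs_c[0]/errs_c[1] is close to 4 (second order, cf. Eq. 1.43).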
1.6 Boundary Value Problems

The techniques for initial value problems (IVPs) are, in general, not directly applicable to boundary value problems (BVPs). Consider the BVP

    y''(x) = f(x, y, y'),  a ≤ x ≤ b,  y(a) = η_1,  y(b) = η_2.    (1.44)

This equation could be nonlinear, depending on f. The methods used for IVPs started at one end (x = a) and computed the solution step by step for increasing x. For a BVP, not enough information is given at either endpoint to allow a step-by-step solution.

Consider first a special case of Eq. 1.44 for which the right-hand side depends only on x and y:

    y''(x) = f(x, y),  a ≤ x ≤ b,  y(a) = η_1,  y(b) = η_2.    (1.45)

Subdivide the interval (a, b) into n equal subintervals:

    h = (b − a)/n,    (1.46)

in which case

    x_k = a + kh,  k = 0, 1, 2, ..., n.    (1.47)

Let y_k denote the numerical approximation to the exact solution at x_k. That is,

    y_k ≈ y(x_k).    (1.48)

Then, if we use a central difference approximation to the second derivative in Eq. 1.45, the ODE can be approximated by

    [y(x_{k−1}) − 2y(x_k) + y(x_{k+1})]/h^2 ≈ f(x_k, y_k),    (1.49)

which suggests the difference equation

    y_{k−1} − 2y_k + y_{k+1} = h^2 f(x_k, y_k),  k = 1, 2, 3, ..., n − 1.    (1.50)

Since this system of equations has n − 1 equations in n + 1 unknowns, the two boundary conditions are needed to obtain a nonsingular system:

    y_0 = η_1,  y_n = η_2.    (1.51)

The resulting system is thus

    −2y_1 +   y_2                               = −η_1 + h^2 f(x_1, y_1)
      y_1 − 2y_2 +   y_3                        = h^2 f(x_2, y_2)
             y_2 − 2y_3 +   y_4                 = h^2 f(x_3, y_3)
                    y_3 − 2y_4 + y_5            = h^2 f(x_4, y_4)
      ⋮
                    y_{n−2} − 2y_{n−1}          = −η_2 + h^2 f(x_{n−1}, y_{n−1}),    (1.52)

which is a tridiagonal system of n − 1 equations in n − 1 unknowns. This system is linear or nonlinear, depending on f.
1.6.1 Example

Consider

    y'' = −y(x),  0 ≤ x ≤ π/2,  y(0) = 1,  y(π/2) = 0.    (1.53)

In Eq. 1.45, f(x, y) = −y, η_1 = 1, η_2 = 0. Thus, the right-hand side of the ith equation in Eq. 1.52 has −h^2 y_i, which can be moved to the left-hand side to yield the system

    −(2 − h^2)y_1 + y_2                         = −1
     y_1 − (2 − h^2)y_2 + y_3                   = 0
           y_2 − (2 − h^2)y_3 + y_4             = 0
      ⋮
          y_{n−2} − (2 − h^2)y_{n−1}            = 0.    (1.54)

We first solve this tridiagonal system of simultaneous equations with n = 8 (i.e., h = π/16), and compare with the exact solution y(x) = cos x:

    k   x_k        y_k        Exact y(x_k)  Absolute Error  % Error
    0   0          1          1             0               0
    1   0.1963495  0.9812186  0.9807853     0.0004334       0.0441845
    2   0.3926991  0.9246082  0.9238795     0.0007287       0.0788715
    3   0.5890486  0.8323512  0.8314696     0.0008816       0.1060315
    4   0.7853982  0.7080045  0.7071068     0.0008977       0.1269565
    5   0.9817477  0.5563620  0.5555702     0.0007917       0.1425085
    6   1.1780972  0.3832699  0.3826834     0.0005865       0.1532604
    7   1.3744468  0.1954016  0.1950903     0.0003113       0.1595767
    8   1.5707963  0          0             0               0

We then solve this system with n = 40 (h = π/80):

    k    x_k        y_k        Exact y(x_k)  Absolute Error  % Error
    0    0          1          1             0               0
    5    0.1963495  0.9808025  0.9807853     0.0000172       0.0017571
    10   0.3926991  0.9239085  0.9238795     0.0000290       0.0031363
    15   0.5890486  0.8315047  0.8314696     0.0000351       0.0042161
    20   0.7853982  0.7071425  0.7071068     0.0000357       0.0050479
    25   0.9817477  0.5556017  0.5555702     0.0000315       0.0056660
    30   1.1780972  0.3827068  0.3826834     0.0000233       0.0060933
    35   1.3744468  0.1951027  0.1950903     0.0000124       0.0063443
    40   1.5707963  0          0             0               0

Notice that a mesh refinement by a factor of 5 has reduced the error by a factor of about 25. This behavior is typical of a numerical method which is second-order accurate.
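The first table can be reproduced with a short program; the sketch below (an illustrative implementation, not code from the notes) assembles Eq. 1.54 for this example and solves it by Gaussian elimination specialized to tridiagonal systems:

```python
import math

def solve_bvp_cos(n):
    """Solve y'' = -y, y(0) = 1, y(pi/2) = 0 via the tridiagonal system, Eq. 1.54.
    Returns the interior values y_1, ..., y_{n-1}."""
    h = (math.pi / 2) / n
    d = [-(2.0 - h * h)] * (n - 1)   # diagonal entries; all off-diagonals equal 1
    b = [0.0] * (n - 1)
    b[0] = -1.0                      # right-hand side from the condition y_0 = 1
    for k in range(1, n - 1):        # forward elimination
        m = -1.0 / d[k - 1]          # multiplier (lower entry is 1)
        d[k] += m                    # m times the upper entry (1)
        b[k] += m * b[k - 1]
    y = [0.0] * (n - 1)              # back substitution
    y[-1] = b[-1] / d[-1]
    for k in range(n - 3, -1, -1):
        y[k] = (b[k] - y[k + 1]) / d[k]
    return y

y = solve_bvp_cos(8)   # y[3] is the value at x = pi/4; the table gives 0.7080045
```

The computed y[3] matches the tabulated 0.7080045 and differs from cos(π/4) = 0.7071068 by the tabulated error of about 0.0009.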
1.6.2 Solving Tridiagonal Systems

Tridiagonal systems are particularly easy (and fast) to solve using Gaussian elimination. It is convenient to solve such systems using the following notation:

    d_1 x_1 + u_1 x_2                           = b_1
    l_2 x_1 + d_2 x_2 + u_2 x_3                 = b_2
              l_3 x_2 + d_3 x_3 + u_3 x_4       = b_3
      ⋮
                        l_n x_{n−1} + d_n x_n   = b_n,    (1.55)

where d_i, u_i, and l_i are, respectively, the diagonal, upper, and lower matrix entries in Row i. All coefficients can now be stored in three one-dimensional arrays, D(·), U(·), and L(·), instead of a full two-dimensional array A(I,J). The solution algorithm (reduction to upper triangular form by Gaussian elimination followed by backsolving) can now be summarized as follows:

1. For k = 1, 2, ..., n − 1: [k = pivot row]
   (a) m = −l_{k+1}/d_k  [m = multiplier needed to annihilate term below]
   (b) d_{k+1} = d_{k+1} + m u_k  [new diagonal entry in next row]
   (c) b_{k+1} = b_{k+1} + m b_k  [new rhs in next row]
2. x_n = b_n/d_n  [start of backsolve]
3. For k = n − 1, n − 2, ..., 1: [backsolve loop]
   (a) x_k = (b_k − u_k x_{k+1})/d_k

Tridiagonal systems arise in a variety of applications, including the Crank-Nicolson finite difference method for solving parabolic partial differential equations.
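The three-step algorithm above can be transcribed almost line for line; this sketch (an illustrative implementation using 0-based indexing) solves a system in the notation of Eq. 1.55:

```python
def solve_tridiagonal(l, d, u, b):
    """Solve the tridiagonal system of Eq. 1.55.
    l[i], d[i], u[i] are the lower, diagonal, and upper entries of row i
    (0-based; l[0] and u[-1] are unused)."""
    n = len(d)
    d, b = list(d), list(b)          # work on copies
    for k in range(n - 1):           # step 1: forward elimination
        m = -l[k + 1] / d[k]         # multiplier that annihilates l[k+1]
        d[k + 1] += m * u[k]
        b[k + 1] += m * b[k]
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]            # step 2: start of backsolve
    for k in range(n - 2, -1, -1):   # step 3: backsolve loop
        x[k] = (b[k] - u[k] * x[k + 1]) / d[k]
    return x

# 3x3 check: rows (2,1), (1,2,1), (1,2) with b = [4, 8, 8] have solution [1, 2, 3].
x = solve_tridiagonal([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [4.0, 8.0, 8.0])
```

Both loops do O(n) work, so the whole solve is linear in n, in contrast to the O(n^3) cost of full Gaussian elimination.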
1.7 Shooting Methods

Shooting methods provide a way to convert a boundary value problem to a trial-and-error initial value problem. It is useful to have additional ways to solve BVPs, particularly if the equations are nonlinear.

Consider the following two-point BVP:

    y'' = f(x, y, y'),  a ≤ x ≤ b,  y(a) = A,  y(b) = B.    (1.56)

To solve this problem using the shooting method, we compute solutions of the IVP

    y'' = f(x, y, y'),  x ≥ a,  y(a) = A,  y'(a) = M    (1.57)

[Figure 4: The Shooting Method.]

for various values of M (the slope at the left end of the domain) until two solutions, one with y(b) < B and the other with y(b) > B, have been found (Fig. 4). The initial slope M can then be interpolated until a solution is found (i.e., y(b) = B).
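A minimal sketch of the shooting procedure (illustrative code, not from the notes), using the second-order Runge-Kutta integrator of Section 1.3 for the IVP of Eq. 1.57 and secant interpolation for the slope M:

```python
import math

def shoot(f, a, b, A, M, n=200):
    """Integrate the IVP of Eq. 1.57, y'' = f(x, y, y'), y(a) = A, y'(a) = M,
    with second-order Runge-Kutta; return y(b)."""
    h = (b - a) / n
    x, y, z = a, A, M                       # z = y'
    for _ in range(n):
        fk = f(x, y, z)
        yp, zp = y + h * z, z + h * fk      # Euler predictors
        y = y + 0.5 * h * (z + zp)
        z = z + 0.5 * h * (fk + f(x + h, yp, zp))
        x = x + h
    return y

def shooting(f, a, b, A, B, M1, M2, tol=1e-8):
    """Adjust the initial slope M by secant interpolation until y(b) = B."""
    y1, y2 = shoot(f, a, b, A, M1), shoot(f, a, b, A, M2)
    while abs(y2 - B) > tol:
        M1, M2 = M2, M2 + (B - y2) * (M2 - M1) / (y2 - y1)
        y1, y2 = y2, shoot(f, a, b, A, M2)
    return M2

# y'' = -y, y(0) = 1, y(pi/2) = 0: the exact initial slope is y'(0) = 0.
M = shooting(lambda x, y, yp: -y, 0.0, math.pi / 2, 1.0, 0.0, -2.0, -1.0)
```

For a linear ODE, y(b) depends linearly on M, so the secant interpolation converges in essentially one step; for nonlinear problems several iterations are needed.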
2 Partial Differential Equations

A partial differential equation (PDE) is an equation that involves an unknown function (the dependent variable) and some of its partial derivatives with respect to two or more independent variables. An nth-order equation has the highest order derivative of order n.
2.1 Classical Equations of Mathematical Physics

1. Laplace's equation (the potential equation)

    ∇²φ = 0    (2.1)

In Cartesian coordinates, the vector operator del is defined as

    ∇ = e_x ∂/∂x + e_y ∂/∂y + e_z ∂/∂z.    (2.2)

∇² is referred to as the Laplacian operator and given by

    ∇² = ∇ · ∇ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z².    (2.3)

Thus, Laplace's equation in Cartesian coordinates is

    ∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z² = 0.    (2.4)

In cylindrical coordinates, the Laplacian is

    ∇²φ = (1/r) ∂/∂r (r ∂φ/∂r) + (1/r²) ∂²φ/∂θ² + ∂²φ/∂z².    (2.5)

Laplace's equation arises in incompressible fluid flow (in which case φ is the velocity potential), gravitational potential problems, electrostatics, magnetostatics, steady-state heat conduction with no sources (in which case φ is the temperature), and torsion of bars in elasticity (in which case φ(x, y) is the warping function). Functions which satisfy Laplace's equation are referred to as harmonic functions.
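As a quick numerical aside (not part of the original notes), harmonicity can be checked with the central difference formula, Eq. 1.39, applied in each coordinate direction; the function names below are arbitrary:

```python
def laplacian2d(phi, x, y, h):
    """Central-difference approximation (Eq. 1.39 applied in x and in y)
    to the two-dimensional Laplacian of Eq. 2.4."""
    return ((phi(x + h, y) - 2.0 * phi(x, y) + phi(x - h, y))
            + (phi(x, y + h) - 2.0 * phi(x, y) + phi(x, y - h))) / h**2

# phi = x^2 - y^2 is harmonic (its Laplacian is 2 - 2 = 0); the central
# difference is exact for quadratics, so the result is zero up to roundoff.
val = laplacian2d(lambda x, y: x * x - y * y, 0.3, 0.7, 0.01)
```

This same five-point combination is the basis of the finite difference treatment of elliptic equations in Section 3.3.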
2. Poisson's equation

    ∇²φ + g = 0    (2.6)

This equation arises in steady-state heat conduction with distributed sources (φ = temperature) and torsion of bars in elasticity (in which case φ(x, y) is the stress function).
3. wave equation

    ∇²φ = (1/c²) φ̈    (2.7)

In this equation, dots denote time derivatives, e.g.,

    φ̈ = ∂²φ/∂t²,    (2.8)

and c is the speed of propagation. The wave equation arises in several physical situations:

(a) transverse vibrations of a string
    For this one-dimensional problem, φ = φ(x, t) is the transverse displacement of the string, and

        c = √(T/(ρA)),    (2.9)

    where T is the string tension, ρ is the density of the string material, and A is the cross-sectional area of the string. The denominator ρA is mass per unit length.

(b) longitudinal vibrations of a bar
    For this one-dimensional problem, φ = φ(x, t) represents the longitudinal displacement, and

        c = √(E/ρ),    (2.10)

    where E and ρ are, respectively, the modulus of elasticity and density of the bar material.

(c) transverse vibrations of a membrane
    For this two-dimensional problem, φ = φ(x, y, t) is the transverse displacement of the membrane (e.g., drum head), and

        c = √(T/m),    (2.11)

    where T is the tension per unit length, m is the mass per unit area (i.e., m = ρt), and t is the membrane thickness.

(d) acoustics
    For this three-dimensional problem, φ = φ(x, y, z, t) is the fluid pressure or velocity potential, and c is the speed of sound, where

        c = √(B/ρ),    (2.12)

    where B = ρc² is the fluid bulk modulus, and ρ is the density.
4. Helmholtz equation (reduced wave equation)
∇²φ + k²φ = 0  (2.13)
The Helmholtz equation is the time-harmonic form of the wave equation, in which interest is restricted to functions which vary sinusoidally in time. To obtain the Helmholtz equation, we substitute
φ(x, y, z, t) = φ₀(x, y, z) cos ωt  (2.14)
into the wave equation, Eq. 2.7, to obtain
∇²φ₀ cos ωt = −(ω²/c²) φ₀ cos ωt.  (2.15)
If we define the wave number k = ω/c, this equation becomes
∇²φ₀ + k²φ₀ = 0.  (2.16)
With the understanding that the unknown depends only on the spatial variables, the subscript is unnecessary, and we obtain the Helmholtz equation, Eq. 2.13. This equation arises in steady-state (time-harmonic) situations involving the wave equation, e.g., steady-state acoustics.
5. heat equation
∇·(k∇φ) + Q = ρc φ̇  (2.17)
In this equation, φ represents the temperature T, k is the thermal conductivity, Q is the internal heat generation per unit volume per unit time, ρ is the material density, and c is the material specific heat (the heat required per unit mass to raise the temperature by one degree). The thermal conductivity k is defined by Fourier’s law of heat conduction:
q̂_x = −kA dT/dx,  (2.18)
where q̂_x is the rate of heat conduction (energy per unit time) with typical units J/s or BTU/hr, and A is the area through which the heat flows. Alternatively, Fourier’s law is written
q_x = −k dT/dx,  (2.19)
where q_x is energy per unit time per unit area (with typical units J/(s·m²)). There are several special cases of the heat equation of interest:
(a) homogeneous material (k = constant):
k∇²φ + Q = ρc φ̇  (2.20)
(b) homogeneous material, steady-state (time-independent):
∇²φ = −Q/k  (Poisson’s equation)  (2.21)
(c) homogeneous material, steady-state, no sources (Q = 0):
∇²φ = 0  (Laplace’s equation)  (2.22)
2.2 Classiﬁcation of Partial Diﬀerential Equations
Of the classical PDEs summarized in the preceding section, some involve time, and some
don’t, so presumably their solutions would exhibit fundamental diﬀerences. Of those that
involve time (wave and heat equations), the order of the time derivative is diﬀerent, so the
fundamental character of their solutions may also diﬀer. Both these speculations turn out
to be true.
Consider the general, second-order, linear partial differential equation in two variables
Au_xx + Bu_xy + Cu_yy + Du_x + Eu_y + Fu = G,  (2.23)
where the coefficients are functions of the independent variables x and y (i.e., A = A(x, y), B = B(x, y), etc.), and we have used subscripts to denote partial derivatives, e.g.,
u_xx = ∂²u/∂x².  (2.24)
The quantity B² − 4AC is referred to as the discriminant of the equation. The behavior of the solution of Eq. 2.23 depends on the sign of the discriminant according to the following table:

B² − 4AC   Equation Type   Typical Physics Described
  < 0      Elliptic        Steady-state phenomena
  = 0      Parabolic       Heat flow and diffusion processes
  > 0      Hyperbolic      Vibrating systems and wave motion
The names elliptic, parabolic, and hyperbolic arise from the analogy with the conic sections
in analytic geometry.
Given these deﬁnitions, we can classify the common equations of mathematical physics
already encountered as follows:
Name        Eq. Number   Eq. in Two Variables     A, B, C                   Type
Laplace     Eq. 2.1      u_xx + u_yy = 0          A = C = 1, B = 0          Elliptic
Poisson     Eq. 2.6      u_xx + u_yy = −g         A = C = 1, B = 0          Elliptic
Wave        Eq. 2.7      u_xx − u_yy/c² = 0       A = 1, C = −1/c², B = 0   Hyperbolic
Helmholtz   Eq. 2.13     u_xx + u_yy + k²u = 0    A = C = 1, B = 0          Elliptic
Heat        Eq. 2.17     ku_xx − ρc u_y = −Q      A = k, B = C = 0          Parabolic
In the wave and heat equations in the above table, y represents the time variable. The behavior of the solutions of equations of different types differs. Elliptic equations characterize static (time-independent) situations, and the other two types of equations characterize time-dependent situations.
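The classification rule above is mechanical enough to express in a few lines of code. The following sketch (the function name `classify` is mine, not from the text) returns the equation type from the coefficients A, B, C of Eq. 2.23:

```python
def classify(A, B, C):
    """Type of A*u_xx + B*u_xy + C*u_yy + D*u_x + E*u_y + F*u = G (Eq. 2.23),
    determined by the sign of the discriminant B^2 - 4AC."""
    d = B * B - 4 * A * C
    if d < 0:
        return "elliptic"
    if d == 0:
        return "parabolic"
    return "hyperbolic"

# rows of the classification table (Laplace, wave with c = 2, heat with k = 0.5)
print(classify(1, 0, 1), classify(1, 0, -0.25), classify(0.5, 0, 0))
# → elliptic hyperbolic parabolic
```

Note that the classification is pointwise: when the coefficients vary with x and y, an equation can change type from one region of the domain to another.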
2.3 Transformation to Nondimensional Form
It is often convenient, when solving equations, to transform the equation to a nondimensional
form. Consider, for example, the onedimensional wave equation
∂²u/∂x² = (1/c²) ∂²u/∂t²,  (2.25)
where u is displacement (of dimension length), t is time, and c is the speed of propagation (of dimension length/time). Let L represent some characteristic length associated with the problem. We define the nondimensional variables
x̄ = x/L,  ū = u/L,  t̄ = ct/L,  (2.26)
in which case the derivatives in Eq. 2.25 become
∂u/∂x = ∂(Lū)/∂(Lx̄) = ∂ū/∂x̄,  (2.27)
∂²u/∂x² = ∂/∂x(∂u/∂x) = ∂/∂x(∂ū/∂x̄) = ∂/∂x̄(∂ū/∂x̄) dx̄/dx = (1/L) ∂²ū/∂x̄²,  (2.28)
∂u/∂t = ∂(Lū)/∂(Lt̄/c) = c ∂ū/∂t̄,  (2.29)
∂²u/∂t² = ∂/∂t(∂u/∂t) = ∂/∂t(c ∂ū/∂t̄) = c ∂/∂t̄(∂ū/∂t̄) dt̄/dt = c (∂²ū/∂t̄²)(c/L) = (c²/L) ∂²ū/∂t̄².  (2.30)
Thus, from Eq. 2.25,
(1/L) ∂²ū/∂x̄² = (1/c²) · (c²/L) ∂²ū/∂t̄²  (2.31)
or
∂²ū/∂x̄² = ∂²ū/∂t̄².  (2.32)
This is the nondimensional wave equation.
This last equation can also be obtained more easily by direct substitution of Eq. 2.26
into Eq. 2.25 and factoring out the constants:
∂²(Lū)/∂(Lx̄)² = (1/c²) ∂²(Lū)/∂(Lt̄/c)²  (2.33)
or
(L/L²) ∂²ū/∂x̄² = (1/c²)(c²L/L²) ∂²ū/∂t̄².  (2.34)
3 Finite Difference Solution of Partial Differential Equations
3.1 Parabolic Equations
Consider the boundary-initial value problem (BIVP)
u_xx = (1/c) u_t,  u = u(x, t),  0 < x < 1,  t > 0,
u(0, t) = u(1, t) = 0  (boundary conditions),
u(x, 0) = f(x)  (initial condition),
(3.1)
where c is a constant. This problem represents transient heat conduction in a rod with the
ends held at zero temperature and an initial temperature proﬁle f(x).
To solve this problem numerically, we discretize x and t such that
x_i = ih,  i = 0, 1, 2, . . .
t_j = jk,  j = 0, 1, 2, . . . .
(3.2)
3.1.1 Explicit Finite Difference Method
Let u_{i,j} be the numerical approximation to u(x_i, t_j). We approximate u_t with the forward finite difference
u_t ≈ (u_{i,j+1} − u_{i,j})/k  (3.3)
and u_xx with the central finite difference
u_xx ≈ (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h².  (3.4)
The finite difference approximation to the PDE is then
(u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h² = (u_{i,j+1} − u_{i,j})/(ck).  (3.5)
Define the parameter r as
r = ck/h² = c∆t/(∆x)²,  (3.6)
in which case Eq. 3.5 becomes
u_{i,j+1} = ru_{i−1,j} + (1 − 2r)u_{i,j} + ru_{i+1,j}.  (3.7)
The domain of the problem and the mesh are illustrated in Fig. 5. Eq. 3.7 is a recursive
relationship giving u in a given row (time) in terms of three consecutive values of u in the row
below (one time step earlier). This equation is referred to as an explicit formula since one
unknown value can be found directly in terms of several other known values. The recursive
relationship can also be sketched with the stencil shown in Fig. 6. For example, for r = 1/10,
we have the stencil shown in Fig. 7. That is, for r = 1/10, the solution (temperature) at
Figure 5: Mesh for 1D Heat Equation.
Figure 6: Heat Equation Stencil for Explicit Finite Difference Algorithm.
Figure 7: Heat Equation Stencil for r = 1/10.
Figure 8: Heat Equation Stencils for r = 1/2 and r = 1.
Figure 9: Explicit Finite Difference Solution With r = 0.48.
the new point depends on the three points at the previous time step with a 1-8-1 weighting. Notice that, if r = 1/2, the solution at the new point is independent of the closest point, as illustrated in Fig. 8. For r > 1/2 (e.g., r = 1), the new point depends negatively on the closest point (Fig. 8), which is counterintuitive. It can be shown that, for a stable solution, 0 < r ≤ 1/2. An unstable solution is one for which small errors grow rather than decay as the solution evolves.
The instability which occurs for r > 1/2 can be illustrated with the following example. Consider the boundary-initial value problem (in nondimensional form)
u_xx = u_t,  0 < x < 1,  u = u(x, t),
u(0, t) = u(1, t) = 0,
u(x, 0) = f(x) = { 2x, 0 ≤ x ≤ 1/2;  2(1 − x), 1/2 ≤ x ≤ 1 }.
(3.8)
The physical problem is to compute the temperature history u(x, t) for a bar with a prescribed initial temperature distribution f(x), no internal heat sources, and zero temperature prescribed at both ends. We solve this problem using the explicit finite difference algorithm with h = ∆x = 0.1 and k = ∆t = rh² = r(∆x)² for two different values of r: r = 0.48 and r = 0.52. The two numerical solutions (Figs. 9 and 10) are compared with the analytic solution
u(x, t) = Σ_{n=1}^∞ [8/(nπ)²] sin(nπ/2) (sin nπx) e^{−(nπ)²t},  (3.9)
which can be obtained by the technique of separation of variables. The instability for r > 1/2 can be clearly seen in Fig. 10. Thus, a disadvantage of this explicit method is that a small time step ∆t must be used to maintain stability. This disadvantage will be removed with the Crank-Nicolson algorithm.
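The stability experiment described above is easy to reproduce. The sketch below (the names `explicit_heat_step` and `solve_heat` are illustrative, not from the text) applies Eq. 3.7 to the initial profile of Eq. 3.8 for 100 steps and shows bounded decay for r = 0.48 but growth for r = 0.52:

```python
def explicit_heat_step(u, r):
    """One application of Eq. 3.7, with u = 0 held at both ends."""
    return ([0.0]
            + [r * u[i-1] + (1 - 2*r) * u[i] + r * u[i+1]
               for i in range(1, len(u) - 1)]
            + [0.0])

def solve_heat(r, steps, n=10):
    """March the triangular initial profile of Eq. 3.8 forward in time."""
    u = [2 * i / n if i <= n // 2 else 2 * (1 - i / n) for i in range(n + 1)]
    for _ in range(steps):
        u = explicit_heat_step(u, r)
    return u

stable = solve_heat(0.48, 100)    # r < 1/2: decays smoothly toward zero
unstable = solve_heat(0.52, 100)  # r > 1/2: a sawtooth error mode grows
print(max(abs(v) for v in stable), max(abs(v) for v in unstable))
```

The growing component for r = 0.52 is the highest spatial frequency representable on the mesh, which is why the instability appears as the point-to-point oscillation visible in Fig. 10.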
3.1.2 Crank-Nicolson Implicit Method
The Crank-Nicolson method is a stable algorithm which allows a larger time step than could be used in the explicit method. In fact, the stability of Crank-Nicolson does not depend on the parameter r.
Figure 10: Explicit Finite Difference Solution With r = 0.52.
The basis for the Crank-Nicolson algorithm is writing the finite difference equation at a mid-level in time: (i, j + 1/2). The finite difference x derivative at j + 1/2 is computed as the average of the two central difference x derivatives at j and j + 1. Consider again the PDE of Eq. 3.1:
u_xx = (1/c) u_t,  u = u(x, t),
u(0, t) = u(1, t) = 0  (boundary conditions),
u(x, 0) = f(x)  (initial condition).
(3.10)
The PDE is approximated numerically by
(1/2)[(u_{i+1,j+1} − 2u_{i,j+1} + u_{i−1,j+1})/h² + (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h²] = (u_{i,j+1} − u_{i,j})/(ck),  (3.11)
where the right-hand side is a central difference approximation to the time derivative at the middle point j + 1/2. We again define the parameter r as
r = ck/h² = c∆t/(∆x)²,  (3.12)
and rearrange Eq. 3.11 with all j + 1 terms on the left-hand side:
−ru_{i−1,j+1} + 2(1 + r)u_{i,j+1} − ru_{i+1,j+1} = ru_{i−1,j} + 2(1 − r)u_{i,j} + ru_{i+1,j}.  (3.13)
This formula is called the Crank-Nicolson algorithm.
Fig. 11 shows the points involved in the CrankNicolson scheme. If we start at the bottom
row (j = 0) and move up, the right-hand side values of Eq. 3.13 are known, and the left-hand side values of that equation are unknown. To get the process started, let j = 0, and write the
CN equation for each i = 1, 2, . . . , N to obtain N simultaneous equations in N unknowns,
where N is the number of interior mesh points on the row. (The boundary points, with
known values, are excluded.) This system of equations is a tridiagonal system, since each
equation has three consecutive nonzeros centered around the diagonal. To advance in time,
Figure 11: Mesh for Crank-Nicolson.
Figure 12: Stencil for Crank-Nicolson Algorithm.
we then increment j to j = 1, and solve a new system of equations. An approach which
requires the solution of simultaneous equations is called an implicit algorithm. A sketch of
the CN stencil is shown in Fig. 12.
Note that the coeﬃcient matrix of the CN system of equations does not change from
step to step. Thus, one could compute and save the LU factors of the coeﬃcient matrix, and
merely do the forwardbackward substitution (FBS) at each new time step, thus speeding
up the calculation. This speedup would be particularly signiﬁcant in higher dimensional
problems, where the coeﬃcient matrix is no longer tridiagonal.
It can be shown that the CN algorithm is stable for any r, although better accuracy results from a smaller r. A smaller r corresponds to a smaller time step size (for a fixed spatial mesh). CN also gives better accuracy than the explicit approach for the same r.
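One Crank-Nicolson step amounts to assembling and solving the tridiagonal system of Eq. 3.13. A minimal sketch follows; the helper names are mine, and the tridiagonal solver is the standard Thomas algorithm (forward elimination plus back substitution), not something introduced in the text:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
    b = diagonal, c = super-diagonal (c[-1] unused), d = right-hand side."""
    n = len(b)
    b, d = b[:], d[:]
    for i in range(1, n):                  # forward elimination
        m = a[i] / b[i-1]
        b[i] -= m * c[i-1]
        d[i] -= m * d[i-1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = (d[i] - c[i] * x[i+1]) / b[i]
    return x

def crank_nicolson_step(u, r):
    """Advance Eq. 3.13 by one time step; the ends of u are held at zero."""
    n = len(u) - 2                         # number of interior unknowns
    rhs = [r * u[i-1] + 2 * (1 - r) * u[i] + r * u[i+1]
           for i in range(1, n + 1)]
    return [0.0] + thomas([-r] * n, [2 * (1 + r)] * n, [-r] * n, rhs) + [0.0]

u = [2 * i / 10 if i <= 5 else 2 * (1 - i / 10) for i in range(11)]
for _ in range(10):
    u = crank_nicolson_step(u, 2.0)        # r = 2: far beyond the explicit limit
print(max(u))                              # the profile simply decays
```

Even with r = 2, four times the explicit stability limit, the computed profile decays smoothly, illustrating the unconditional stability claimed above. Since the coefficient matrix is the same at every step, the LU factors computed in the first call could be saved and reused, as noted in the text.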
3.1.3 Derivative Boundary Conditions
Consider the boundary-initial value problem
u_xx = (1/c) u_t,  u = u(x, t),
u(0, t) = 0,  u_x(1, t) = g(t)  (boundary conditions),
u(x, 0) = f(x)  (initial condition).
(3.14)
The only difference between this problem and the one considered earlier in Eq. 3.1 is the right-hand side boundary condition, which now involves a derivative (a Neumann boundary condition).
Assume a mesh labeling as shown in Fig. 13. We introduce extra “phantom” points to the right of the boundary (outside the domain). Consider boundary Point 25, for example,
Figure 13: Treatment of Derivative Boundary Conditions.
and assume we use an explicit finite difference algorithm. Since u_25 is not known, we must write the finite difference equation for u_25:
u_25 = ru_14 + (1 − 2r)u_24 + ru_34.  (3.15)
On the other hand, a central finite difference approximation to the x derivative at Point 24 is
(u_34 − u_14)/(2h) = g_24.  (3.16)
The phantom variable u_34 can then be eliminated from the last two equations to yield a new equation for the boundary point u_25:
u_25 = 2ru_14 + (1 − 2r)u_24 + 2rhg_24.  (3.17)
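The boundary formula of Eq. 3.17 can be combined with the interior formula of Eq. 3.7 in a single sweep. A sketch follows (the function name is mine); as a sanity check, the steady solution u = gx (zero at the left end, constant slope g at the right end) should be a fixed point of the scheme:

```python
def heat_step_neumann(u, r, h, g):
    """Explicit step with u = 0 at the left end and u_x = g at the right end:
    interior points use Eq. 3.7, the right boundary point uses Eq. 3.17."""
    n = len(u) - 1
    new = [0.0] * (n + 1)                          # left end stays at zero
    for i in range(1, n):
        new[i] = r * u[i-1] + (1 - 2*r) * u[i] + r * u[i+1]
    new[n] = 2*r * u[n-1] + (1 - 2*r) * u[n] + 2*r*h*g
    return new

# check: the linear steady profile is reproduced (to rounding error)
h, r, g = 0.1, 0.4, 3.0
u = [g * i * h for i in range(11)]
v = heat_step_neumann(u, r, h, g)
print(max(abs(v[i] - u[i]) for i in range(11)))    # essentially zero
```

This check works because both Eq. 3.7 and Eq. 3.17 are exact for any temperature profile that is linear in x, which is precisely the steady state of this problem.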
3.2 Hyperbolic Equations
3.2.1 The d’Alembert Solution of the Wave Equation
Before addressing the ﬁnite diﬀerence solution of hyperbolic equations, we review some
background material on such equations.
The time-dependent transverse response of an infinitely long string satisfies the one-dimensional wave equation with nonzero initial displacement and velocity specified:
∂²u/∂x² = (1/c²) ∂²u/∂t²,  −∞ < x < ∞,  t > 0,
u(x, 0) = f(x),  ∂u(x, 0)/∂t = g(x),
lim_{x→±∞} u(x, t) = 0,
(3.18)
where x is distance along the string, t is time, u(x, t) is the transverse displacement, f(x) is the initial displacement, g(x) is the initial velocity, and the constant c is given by
c = √(T/(ρA)),  (3.19)
where T is the tension (force) in the string, ρ is the density of the string material, and A is the cross-sectional area of the string. Note that c has the dimension of velocity. This
equation assumes that all motion is vertical and that the displacement u and its slope ∂u/∂x
are both small.
It can be shown by direct substitution into Eq. 3.18 that the solution of this system is
u(x, t) = (1/2)[f(x − ct) + f(x + ct)] + (1/(2c)) ∫_{x−ct}^{x+ct} g(τ) dτ.  (3.20)
The differentiation of the integral in Eq. 3.20 is effected with the aid of Leibnitz’s rule:
(d/dx) ∫_{A(x)}^{B(x)} h(x, t) dt = ∫_{A}^{B} [∂h(x, t)/∂x] dt + h(x, B) dB/dx − h(x, A) dA/dx.  (3.21)
Eq. 3.20 is known as the d’Alembert solution of the onedimensional wave equation.
For the special case g(x) = 0 (zero initial velocity), the d’Alembert solution simplifies to
u(x, t) = (1/2)[f(x − ct) + f(x + ct)],  (3.22)
which may be interpreted as two waves, each equal to f(x)/2, which travel at speed c to
the right and left, respectively. For example, the argument x − ct remains constant if, as t
increases, x also increases at speed c. Thus, the wave f(x−ct) moves to the right (increasing
x) with speed c without change of shape. Similarly, the wave f(x + ct) moves to the left
(decreasing x) with speed c without change of shape. The two waves [each equal to half the
initial shape f(x)] travel in opposite directions from each other at speed c. If f(x) is nonzero
only for a small domain, then, after both waves have passed the region of initial disturbance,
the string returns to its rest position.
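The travelling-wave interpretation of Eq. 3.22 can be checked numerically. In this sketch, f is a triangular pulse of width 2b and height b, and the particular values of b and c are mine, chosen only for illustration:

```python
def dalembert(f, x, t, c):
    """Eq. 3.22: two half-amplitude copies of f travelling at speed c."""
    return 0.5 * (f(x - c*t) + f(x + c*t))

# triangular initial pulse of width 2b and height b; wave speed c
b, c = 1.0, 2.0
f = lambda x: max(b - abs(x), 0.0)

print(dalembert(f, 0.0, 0.0, c))        # 1.0: the initial peak
print(dalembert(f, b, b / c, c))        # 0.5: peak of the right-moving half
print(dalembert(f, 0.0, 2 * b / c, c))  # 0.0: the origin is back at rest
```

At t = b/c the two half-pulses have just separated, and for later times the neighborhood of the origin stays at rest, exactly as described above.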
For example, let f(x), the initial displacement, be given by
f(x) = { b − |x|, |x| ≤ b;  0, |x| ≥ b },  (3.23)
which is a triangular pulse of width 2b and height b (Fig. 14). For t > 0, half this pulse travels in opposite directions from the origin. For t > b/c, where c is the wave speed, the two half-pulses have completely separated, and the neighborhood of the origin has returned to rest.
For the special case f(x) = 0 (zero initial displacement), the d’Alembert solution simplifies to
u(x, t) = (1/(2c)) ∫_{x−ct}^{x+ct} g(τ) dτ = (1/2)[G(x + ct) − G(x − ct)],  (3.24)
where
G(x) = (1/c) ∫_{−∞}^{x} g(τ) dτ.  (3.25)
Thus, similar to the initial displacement special case, this solution, Eq. 3.24, may be interpreted as the combination (difference, in this case) of two identical functions G(x)/2, one moving left and one moving right, each with speed c.
Figure 14: Propagation of Initial Displacement (shown at t = 0, t = b/(2c), t = b/c, and t = 2b/c).
For example, let the initial velocity g(x) be given by
g(x) = { Mc, |x| < b;  0, |x| > b },  (3.26)
which is a rectangular pulse of width 2b and height Mc, where the constant M is the dimensionless Mach number, and c is the wave speed (Fig. 15). The travelling wave G(x) is given by Eq. 3.25 as
G(x) = { 0, x ≤ −b;  M(x + b), −b ≤ x ≤ b;  2Mb, x ≥ b }.  (3.27)
That is, half this wave travels in opposite directions at speed c. Even though g(x) is nonzero
only near the origin, the travelling wave G(x) is constant and nonzero for x > b. Thus, as
time advances, the center section of the string reaches a state of rest, but not in its original
position (Fig. 16).
From the preceding discussion, it is clear that disturbances travel with speed c. For an
observer at some ﬁxed location, initial displacements occurring elsewhere pass by after a
ﬁnite time has elapsed, and then the string returns to rest in its original position. Nonzero
initial velocity disturbances also travel at speed c, but, once having reached some location,
will continue to inﬂuence the solution from then on.
Thus, the domain of influence of the data at x = x₀, say, on the solution consists of all points closer than ct (in either direction) to x₀, the location of the disturbance (Fig. 17).
Figure 15: Initial Velocity Function.
Figure 16: Propagation of Initial Velocity (shown at t = 0, t = b/(2c), t = b/c, and t = 2b/c).
Figure 17: Domains of Influence and Dependence. (a) Domain of Influence of Data on Solution; (b) Domain of Dependence of Solution on Data.
Conversely, the domain of dependence of the solution on the initial data consists of all points
within a distance ct of the solution point. That is, the solution at (x, t) depends on the
initial data for all locations in the range (x −ct, x + ct), which are the limits of integration
in the d’Alembert solution, Eq. 3.20.
3.2.2 Finite Diﬀerences
From the preceding discussion of the d’Alembert solution, we see that hyperbolic equations
involve wave motion. If the initial data are discontinuous (as, for example, in shocks), the
most accurate and the most convenient approach for solving the equations is probably the
method of characteristics. On the other hand, problems without discontinuities can probably
be solved most conveniently using ﬁnite diﬀerence and ﬁnite element techniques. Here we
consider ﬁnite diﬀerences.
Consider the boundary-initial value problem (BIVP)
u_xx = (1/c²) u_tt,  u = u(x, t),  0 < x < a,  t > 0,
u(0, t) = u(a, t) = 0  (boundary conditions),
u(x, 0) = f(x),  u_t(x, 0) = g(x)  (initial conditions).
(3.28)
This problem represents the transient (timedependent) vibrations of a string ﬁxed at the
two ends with both initial displacement f(x) and initial velocity g(x) speciﬁed.
A central finite difference approximation to the PDE, Eq. 3.28, yields
(u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h² = (u_{i,j+1} − 2u_{i,j} + u_{i,j−1})/(c²k²).  (3.29)
We define the parameter
r = ck/h = c∆t/∆x,  (3.30)
and solve for u_{i,j+1}:
u_{i,j+1} = r²u_{i−1,j} + 2(1 − r²)u_{i,j} + r²u_{i+1,j} − u_{i,j−1}.  (3.31)
Fig. 18 shows the mesh points involved in this recursive scheme. If we know the solution for
Figure 18: Mesh for Explicit Solution of Wave Equation.
Figure 19: Stencil for Explicit Solution of Wave Equation.
all time values up to the jth time step, we can compute the solution u_{i,j+1} at Step j + 1 in terms of known quantities. Thus, this algorithm is an explicit algorithm. The corresponding stencil is shown in Fig. 19.
It can be shown that this finite difference algorithm is stable if r ≤ 1 and unstable if r > 1. This stability condition is known as the Courant, Friedrichs, and Lewy (CFL) condition or simply the Courant condition. It can be further shown that a theoretically correct solution is obtained when r = 1, and that the accuracy of the solution decreases as the value of r decreases farther and farther below 1. Thus, the time step size ∆t should be chosen, if possible, so that r = 1. If that ∆t is inconvenient for a particular calculation, ∆t should be selected as large as possible without exceeding the stability limit of r = 1.
An intuitive rationale for the stability requirement r ≤ 1 can also be made using Fig. 20. If r > 1, the numerical domain of dependence (NDD) would be smaller than the actual domain of dependence (ADD) for the PDE, since NDD spreads by one mesh point at each level for earlier time. However, if NDD < ADD, the numerical solution would be independent of data outside NDD but inside ADD. That is, the numerical solution would ignore necessary information. Thus, to ensure a stable solution, r ≤ 1.
3.2.3 Starting Procedure for Explicit Algorithm
We note that the explicit ﬁnite diﬀerence scheme just described for the wave equation requires
the numerical solution at two consecutive time steps to step forward to the next time. Thus,
at t = 0, a special procedure is needed to advance from j = 0 to j = 1.
Figure 20: Domains of Dependence for r > 1.
The explicit finite difference algorithm is given in Eq. 3.31:
u_{i,j+1} = r²u_{i−1,j} + 2(1 − r²)u_{i,j} + r²u_{i+1,j} − u_{i,j−1}.  (3.32)
To compute the solution at the end of the first time step, let j = 0:
u_{i,1} = r²u_{i−1,0} + 2(1 − r²)u_{i,0} + r²u_{i+1,0} − u_{i,−1},  (3.33)
where the right-hand side is known (from the initial condition) except for u_{i,−1}. However, we can write a central difference approximation to the first time derivative at t = 0:
(u_{i,1} − u_{i,−1})/(2k) = g_i  (3.34)
or
u_{i,−1} = u_{i,1} − 2kg_i,  (3.35)
where g_i is the initial velocity g(x) evaluated at the ith point, i.e., g_i = g(x_i). If we substitute this last result into Eq. 3.33 (to eliminate u_{i,−1}), we obtain
u_{i,1} = r²u_{i−1,0} + 2(1 − r²)u_{i,0} + r²u_{i+1,0} − u_{i,1} + 2kg_i  (3.36)
or
u_{i,1} = (1/2)r²u_{i−1,0} + (1 − r²)u_{i,0} + (1/2)r²u_{i+1,0} + kg_i.  (3.37)
This is the diﬀerence equation used for the ﬁrst row. Thus, to implement the explicit ﬁnite
diﬀerence algorithm, we use Eq. 3.37 for the ﬁrst time step and Eq. 3.31 for all subsequent
time steps.
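The two-formula scheme (Eq. 3.37 for the first step, Eq. 3.31 thereafter) can be sketched as follows; the function name and the simplifying choice c = 1 are mine. With r = 1 the computed values agree with the exact standing-wave solution sin(πx) cos(πt) at the mesh points, consistent with the claim above that r = 1 gives a theoretically correct solution:

```python
import math

def wave_explicit(f, g, a, n, steps, r=1.0):
    """Explicit wave-equation scheme with fixed ends on 0 < x < a, taking
    c = 1: Eq. 3.37 for the first step, Eq. 3.31 for all later steps."""
    h = a / n
    k = r * h                                     # since r = ck/h and c = 1
    x = [i * h for i in range(n + 1)]
    u0 = [f(xi) for xi in x]
    # first time step, Eq. 3.37
    u1 = [0.0] + [0.5*r*r*u0[i-1] + (1 - r*r)*u0[i] + 0.5*r*r*u0[i+1]
                  + k * g(x[i]) for i in range(1, n)] + [0.0]
    # subsequent time steps, Eq. 3.31
    for _ in range(steps - 1):
        u2 = [0.0] + [r*r*u1[i-1] + 2*(1 - r*r)*u1[i] + r*r*u1[i+1] - u0[i]
                      for i in range(1, n)] + [0.0]
        u0, u1 = u1, u2
    return u1

# 10 steps of k = 0.1 reach t = 1, where the exact solution is -sin(pi x)
u = wave_explicit(lambda x: math.sin(math.pi * x), lambda x: 0.0,
                  a=1.0, n=10, steps=10)
print(max(abs(u[i] + math.sin(math.pi * i / 10)) for i in range(11)))
```

The printed maximum error is at the level of floating-point rounding, since for r = 1 the exact solution satisfies the difference equation identically.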
3.2.4 Nonreﬂecting Boundaries
In some applications, it is of interest to model domains that are large enough to be considered
inﬁnite in extent. In a ﬁnite diﬀerence representation of the domain, an inﬁnite boundary has
Figure 21: Finite Difference Mesh at Nonreflecting Boundary.
to be truncated at some suﬃciently large distance. At such a boundary, a suitable boundary
condition must be imposed to ensure that outgoing waves are not reﬂected.
Consider a vibrating string which extends to infinity for large x. We truncate the computational domain at some finite x. Let the initial velocity be zero. The d’Alembert solution, Eq. 3.20, of the one-dimensional wave equation c²u_xx = u_tt can be written in the form
u(x, t) = F₁(x − ct) + F₂(x + ct),  (3.38)
where F₁ represents a wave advancing at speed c toward the boundary, and F₂ represents the returning wave, which should not exist if the boundary is nonreflecting. With F₂ = 0, we differentiate u with respect to x and t to obtain
∂u/∂x = F₁′,  ∂u/∂t = −cF₁′,  (3.39)
where the prime denotes the derivative with respect to the argument. Thus,
∂u/∂x = −(1/c) ∂u/∂t.  (3.40)
This is the one-dimensional nonreflecting boundary condition. Note that the x direction is normal to the boundary. The boundary condition, Eq. 3.40, must be imposed to inhibit reflections from the truncated boundary. This condition is exact in 1D (i.e., plane waves) and approximate in higher dimensions, where the nonreflecting condition is written
c ∂u/∂n + ∂u/∂t = 0,  (3.41)
where n is the outward unit normal to the boundary.
The nonreflecting boundary condition, Eq. 3.40, can be approximated in the finite difference method with central differences expressed in terms of the phantom point outside the boundary. For example, at the typical point (i, j) on the nonreflecting boundary in Fig. 21, the general recursive formula is given by Eq. 3.31:
u_{i,j+1} = r²u_{i−1,j} + 2(1 − r²)u_{i,j} + r²u_{i+1,j} − u_{i,j−1}.  (3.42)
The central difference approximation to the nonreflecting boundary condition, Eq. 3.40, is
c (u_{i+1,j} − u_{i−1,j})/(2h) = −(u_{i,j+1} − u_{i,j−1})/(2k)  (3.43)
or
r(u_{i+1,j} − u_{i−1,j}) = −(u_{i,j+1} − u_{i,j−1}).  (3.44)
The substitution of Eq. 3.44 into Eq. 3.42 (to eliminate the phantom point) yields
(1 + r)u_{i,j+1} = 2r²u_{i−1,j} + 2(1 − r²)u_{i,j} − (1 − r)u_{i,j−1}.  (3.45)
For the first time step (j = 0), the last term in this relation is evaluated using the central difference approximation to the initial velocity, Eq. 3.35. Note also that, for r = 1, Eq. 3.45 takes a particularly simple (and perhaps unexpected) form:
u_{i,j+1} = u_{i−1,j}.  (3.46)
To illustrate the perfect wave absorption that occurs in a one-dimensional finite difference model, consider an infinitely long vibrating string with a nonzero initial displacement and zero initial velocity. The initial displacement is a triangular-shaped pulse in the middle of the string, similar to Fig. 14. According to the d’Alembert solution, half the pulse should propagate at speed c to the left and right and be absorbed into the boundaries. We solve the problem with the explicit central finite difference approach with r = ck/h = 1. With r = 1, the finite difference formulas 3.31 and 3.37 simplify to
u_{i,j+1} = u_{i−1,j} + u_{i+1,j} − u_{i,j−1}  (3.47)
and
u_{i,1} = (u_{i−1,0} + u_{i+1,0})/2.  (3.48)
On the right and left nonreflecting boundaries, Eq. 3.46 implies
u_{n,j+1} = u_{n−1,j},  u_{0,j+1} = u_{1,j},  (3.49)
where the mesh points in the x direction are labeled 0 to n. The finite difference calculation for this problem results in the following spreadsheet:
t x=0 x=1 x=2 x=3 x=4 x=5 x=6 x=7 x=8 x=9 x=10
0 0.000 0.000 0.000 1.000 2.000 3.000 2.000 1.000 0.000 0.000 0.000
1 0.000 0.000 0.500 1.000 2.000 2.000 2.000 1.000 0.500 0.000 0.000
2 0.000 0.500 1.000 1.500 1.000 1.000 1.000 1.500 1.000 0.500 0.000
3 0.500 1.000 1.500 1.000 0.500 0.000 0.500 1.000 1.500 1.000 0.500
4 1.000 1.500 1.000 0.500 0.000 0.000 0.000 0.500 1.000 1.500 1.000
5 1.500 1.000 0.500 0.000 0.000 0.000 0.000 0.000 0.500 1.000 1.500
6 1.000 0.500 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.500 1.000
7 0.500 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.500
8 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
9 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
10 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
Figure 22: Finite Length Simulation of an Infinite Bar (the end dashpot has constant ρcA).
Notice that the triangular wave is absorbed without any reﬂection from the two boundaries.
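The spreadsheet above can be reproduced with a few lines. The sketch below (the function name is mine) uses Eq. 3.48 for the first step, Eq. 3.47 at interior points, and Eq. 3.49 at the two nonreflecting ends:

```python
def wave_nonreflecting(u0, steps):
    """r = 1 marching with nonreflecting ends: Eq. 3.48 for the first step,
    Eq. 3.47 at interior points, Eq. 3.49 at the two boundaries."""
    n = len(u0) - 1
    rows = [list(u0)]
    # first time step (zero initial velocity)
    u1 = [u0[1]] + [(u0[i-1] + u0[i+1]) / 2 for i in range(1, n)] + [u0[n-1]]
    rows.append(u1)
    prev, cur = u0, u1
    for _ in range(steps - 1):
        nxt = ([cur[1]]
               + [cur[i-1] + cur[i+1] - prev[i] for i in range(1, n)]
               + [cur[n-1]])
        rows.append(nxt)
        prev, cur = cur, nxt
    return rows

rows = wave_nonreflecting([0, 0, 0, 1, 2, 3, 2, 1, 0, 0, 0], 8)
for row in rows:
    print(row)
# by t = 8 every entry is zero: the pulse leaves without reflecting
```

The printed rows match the spreadsheet exactly, including the return of the whole string to rest at t = 8.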
For steady-state wave motion, the solution u(x, t) is time-harmonic, i.e.,
u = u₀ e^{iωt},  (3.50)
and the nonreflecting boundary condition, Eq. 3.40, becomes
∂u₀/∂x = −(iω/c) u₀  (3.51)
or, in general,
∂u₀/∂n = −(iω/c) u₀ = −iku₀,  (3.52)
where n is the outward unit normal at the nonreflecting boundary, and k = ω/c is the wave number. This condition is exact in 1D (i.e., plane waves) and approximate in higher dimensions.
The nonreflecting boundary condition can be interpreted physically as a damper (dashpot). Consider, for example, a bar undergoing longitudinal vibration and terminated on the right end with the nonreflecting boundary condition, Eq. 3.40 (Fig. 22). The internal longitudinal force F in the bar is given by
F = Aσ = AEε = AE ∂u/∂x,  (3.53)
where A is the cross-sectional area of the bar, σ is the stress, E is the Young’s modulus of the bar material, and u is displacement. Thus, from Eq. 3.40, the nonreflecting boundary condition is equivalent to applying an end force given by
F = −(AE/c) v,  (3.54)
where v = ∂u/∂t is the velocity. Since, from Eq. 2.10,
E = ρc²,  (3.55)
Eq. 3.54 becomes
F = −(ρcA)v,  (3.56)
which is a force proportional to velocity. A mechanical device which applies a force proportional to velocity is a dashpot. The minus sign in this equation means that the force opposes the direction of motion, as required to be physically realizable. The dashpot constant is ρcA. Thus, the application of this dashpot to the end of a finite-length bar simulates exactly a bar of infinite length (Fig. 22). Since, in acoustics, the ratio of pressure (force/area) to velocity is referred to as impedance, we see that the characteristic impedance of an acoustic medium is ρc.
Figure 23: Laplace’s Equation on Rectangular Domain.
Figure 24: Finite Difference Grid on Rectangular Domain.
3.3 Elliptic Equations
Consider Laplace’s equation on the two-dimensional rectangular domain shown in Fig. 23:
∇²T(x, y) = 0  (0 < x < a, 0 < y < b),
T(0, y) = g₁(y),  T(a, y) = g₂(y),  T(x, 0) = f₁(x),  T(x, b) = f₂(x).
(3.57)
This problem corresponds physically to twodimensional steadystate heat conduction over
a rectangular plate for which temperature is speciﬁed on the boundary.
We attempt an approximate solution by introducing a uniform rectangular grid over the
domain, and let the point (i, j) denote the point having the ith value of x and the jth value
of y (Fig. 24). Then, using central ﬁnite diﬀerence approximations to the second derivatives
(Fig. 25),
∂²T/∂x² ≈ (T_{i−1,j} − 2T_{i,j} + T_{i+1,j})/h²,  (3.58)
∂²T/∂y² ≈ (T_{i,j−1} − 2T_{i,j} + T_{i,j+1})/h².  (3.59)
The finite difference approximation to Laplace’s equation thus becomes
(T_{i−1,j} − 2T_{i,j} + T_{i+1,j})/h² + (T_{i,j−1} − 2T_{i,j} + T_{i,j+1})/h² = 0  (3.60)
or
4T_{i,j} − (T_{i−1,j} + T_{i+1,j} + T_{i,j−1} + T_{i,j+1}) = 0.  (3.61)
Figure 25: The Neighborhood of Point (i, j).
Figure 26: 20-Point Finite Difference Mesh.
That is, for Laplace’s equation with the same uniform mesh in each direction, the solution
at a typical point (i, j) is the average of the four neighboring points.
For example, consider the mesh shown in Fig. 26. Although there are 20 mesh points,
14 are on the boundary, where the temperature is known. Thus, the resulting numerical
problem has only six degrees of freedom (unknown variables). The application of Eq. 3.61
to each of the six interior points yields
4T_6 − T_7 − T_10 = T_2 + T_5
−T_6 + 4T_7 − T_11 = T_3 + T_8
−T_6 + 4T_10 − T_11 − T_14 = T_9
−T_7 − T_10 + 4T_11 − T_15 = T_12
−T_10 + 4T_14 − T_15 = T_13 + T_18
−T_11 − T_14 + 4T_15 = T_16 + T_19,
(3.62)
where all known quantities have been placed on the righthand side. This linear system of
six equations in six unknowns can be solved with standard equation solvers. Because the
central difference operator is a 5-point operator, systems of equations of this type would have
at most ﬁve nonzero terms in each equation, regardless of how large the mesh is. Thus, for
large meshes, the system of equations is sparsely populated, so that sparse matrix solution
techniques would be applicable.
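The assembly and direct solution of a system like Eq. 3.62 can be sketched as follows. The mesh layout (4 columns, 5 rows) matches Fig. 26, but the boundary temperatures are purely hypothetical (a uniform value of 1.0) chosen so the exact answer is known; the tiny Gaussian elimination routine stands in for the "standard equation solvers" mentioned above.

```python
# Sketch: assemble Eq. 3.61 at each interior point of the Fig. 26 mesh and
# solve the resulting 6 x 6 system. Boundary data (all 1.0) is hypothetical.

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

nx, ny = 4, 5                     # 4 columns, 5 rows: 20 points as in Fig. 26
T_boundary = 1.0                  # assumed uniform Dirichlet data
interior = [(i, j) for j in range(1, ny - 1) for i in range(1, nx - 1)]
un = {p: k for k, p in enumerate(interior)}   # unknown ordering

A = [[0.0] * len(interior) for _ in interior]
b = [0.0] * len(interior)
for (i, j), r in un.items():
    A[r][r] = 4.0                 # 4 T_ij ...
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = (i + di, j + dj)
        if nb in un:
            A[r][un[nb]] -= 1.0   # unknown neighbor stays on the left
        else:
            b[r] += T_boundary    # known boundary value moves to the right

T = solve(A, b)
print([round(t, 6) for t in T])
```

With uniform boundary data the interior solution is uniform as well, which is a quick sanity check on the assembly.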
32
¸
`
x
y
a
b
T = f
1
(x)
T = f
2
(x)
T = g
1
(y)
∂T
∂x
= g(y) ∇
2
T = 0
Figure 27: Laplace’s Equation With Dirichlet and Neumann B.C.
Since the numbers assigned to the mesh points in Fig. 26 are merely labels for identification, the pattern of nonzero coefficients appearing on the left-hand side of Eq. 3.62 depends on the choice of mesh ordering. Some equation solvers based on Gaussian elimination operate more efficiently on systems of equations for which the nonzeros in the coefficient matrix are clustered near the main diagonal. Such a matrix system is called banded.
Systems of this type can also be solved using an iterative procedure known as relaxation,
which uses the following general algorithm:
1. Initialize the boundary points to their prescribed values, and initialize the interior
points to zero or some other convenient value (e.g., the average of the boundary values).
2. Loop systematically through the interior mesh points, setting each interior point to
the average of its four neighbors.
3. Continue this process until the solution converges to the desired accuracy.
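The three steps above can be sketched directly in code. The boundary data here is hypothetical (T = 100 on the top edge, 0 elsewhere), chosen only to give the iteration something to converge to on the Fig. 26 mesh.

```python
# A minimal sketch of the relaxation algorithm above on the Fig. 26 mesh,
# with assumed boundary data: T = 100 on the top edge, 0 on the other edges.

nx, ny = 4, 5
T = [[0.0] * nx for _ in range(ny)]       # step 1: interior initialized to zero
for i in range(nx):
    T[0][i] = 100.0                        # step 1: prescribed boundary values

for sweep in range(500):                   # steps 2-3: sweep until converged
    change = 0.0
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            new = 0.25 * (T[j][i-1] + T[j][i+1] + T[j-1][i] + T[j+1][i])
            change = max(change, abs(new - T[j][i]))
            T[j][i] = new
    if change < 1e-10:
        break

print(sweep, [round(T[j][i], 4) for j in (1, 2, 3) for i in (1, 2)])
```

Because each point is overwritten as soon as its new average is computed, this particular sweep ordering is the Gauss-Seidel variant of relaxation.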
3.3.1 Derivative Boundary Conditions
The approach of the preceding section must be modiﬁed for Neumann boundary conditions,
in which the normal derivative is speciﬁed. For example, consider again the problem of the
last section but with a Neumann, rather than Dirichlet, boundary condition on the right
side (Fig. 27):
    ∇²T(x, y) = 0   (0 < x < a, 0 < y < b),
    T(0, y) = g_1(y),   ∂T(a, y)/∂x = g(y),   T(x, 0) = f_1(x),   T(x, b) = f_2(x).     (3.63)
We extend the mesh to include additional points to the right of the boundary at x = a
(Fig. 28). At a typical point on the boundary, a central diﬀerence approximation yields
    g_18 = ∂T_18/∂x ≈ (T_22 − T_14) / (2h)     (3.64)
or
    T_22 = T_14 + 2h g_18.     (3.65)
On the other hand, the equilibrium equation for Point 18 is
    4T_18 − (T_14 + T_22 + T_17 + T_19) = 0,     (3.66)
Figure 28: Treatment of Neumann Boundary Conditions.
Figure 29: 2-DOF Mass-Spring System.
which, when combined with Eq. 3.65, yields
    4T_18 − 2T_14 − T_17 − T_19 = 2h g_18,     (3.67)
which imposes the Neumann boundary condition on Point 18.
4 Direct Finite Element Analysis
The ﬁnite element method is a numerical procedure for solving partial diﬀerential equations.
The procedure is used in a variety of applications, including structural mechanics and dynamics, acoustics, heat transfer, fluid flow, electric and magnetic fields, and electromagnetics.
Although the main theoretical bases for the ﬁnite element method are variational principles
and the weighted residual method, it is useful to consider discrete systems ﬁrst to gain some
physical insight into some of the procedures.
4.1 Linear Mass-Spring Systems
Consider the two-degree-of-freedom (DOF) system shown in Fig. 29. We let u_2 and u_3 denote the displacements from equilibrium of the two masses m_2 and m_3. The stiffnesses of the two springs are k_1 and k_2. The dynamic equilibrium equations can be obtained from Newton's second law (F = ma):
    m_2 ü_2 + k_1 u_2 − k_2 (u_3 − u_2) = 0
    m_3 ü_3 + k_2 (u_3 − u_2) = f_3(t)     (4.1)
Figure 30: A Single Spring Element.
or
    m_2 ü_2 + (k_1 + k_2) u_2 − k_2 u_3 = 0
    m_3 ü_3 − k_2 u_2 + k_2 u_3 = f_3(t).     (4.2)
This system can be rewritten in matrix notation as
    M ü + K u = F(t),     (4.3)
where
    u = { u_2 }
        { u_3 }     (4.4)
is the displacement vector (the vector of unknown displacements),
    K = [ k_1 + k_2   −k_2 ]
        [   −k_2       k_2 ]     (4.5)
is the system stiffness matrix,
    M = [ m_2   0  ]
        [  0   m_3 ]     (4.6)
is the system mass matrix, and
    F = {  0  }
        { f_3 }     (4.7)
is the force vector. This approach would be very tedious and error-prone for more complex systems involving many springs and masses.
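As a numerical check on Eqs. 4.4-4.7, the sketch below builds K, M, and F for assumed parameter values and verifies the static case (ü = 0) of Eq. 4.2: with the springs in series, each must carry the full applied load. All numbers are hypothetical.

```python
# Sketch: the matrices of Eqs. 4.4-4.7 for assumed parameters, plus a static
# equilibrium check of Eq. 4.2 (all values hypothetical).

k1, k2 = 3.0, 2.0        # spring stiffnesses (assumed)
m2, m3 = 1.0, 4.0        # masses (assumed)
f3 = 10.0                # static load on mass 3 (assumed)

K = [[k1 + k2, -k2],
     [-k2,      k2]]     # Eq. 4.5
M = [[m2, 0.0],
     [0.0, m3]]          # Eq. 4.6
F = [0.0, f3]            # Eq. 4.7

# Static solution of K u = F by Cramer's rule (2 x 2 system)
det = K[0][0]*K[1][1] - K[0][1]*K[1][0]
u2 = (F[0]*K[1][1] - F[1]*K[0][1]) / det
u3 = (K[0][0]*F[1] - K[1][0]*F[0]) / det

# Both springs in series carry the full load f3:
assert abs(k1 * u2 - f3) < 1e-12           # force in spring 1
assert abs(k2 * (u3 - u2) - f3) < 1e-12    # force in spring 2
print(u2, u3)
```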
To develop instead a matrix approach, we first isolate one element, as shown in Fig. 30. The stiffness matrix K_el for this element satisfies
    K_el u = F,     (4.8)
or
    [ k_11  k_12 ] { u_1 }   { f_1 }
    [ k_21  k_22 ] { u_2 } = { f_2 }.     (4.9)
By expanding this equation, we obtain
    k_11 u_1 + k_12 u_2 = f_1
    k_21 u_1 + k_22 u_2 = f_2.     (4.10)
From this equation, we observe that k_11 can be defined as the force on DOF 1 corresponding to enforcing a unit displacement on DOF 1 and zero displacement on DOF 2:
    k_11 = f_1 |_{u_1=1, u_2=0} = k.     (4.11)
Figure 31: 3-DOF Spring System.
Similarly,
    k_12 = f_1 |_{u_1=0, u_2=1} = −k,   k_21 = f_2 |_{u_1=1, u_2=0} = −k,   k_22 = f_2 |_{u_1=0, u_2=1} = k.     (4.12)
Thus,
    K_el = [  k  −k ] = k [  1  −1 ]
           [ −k   k ]     [ −1   1 ].     (4.13)
In general, for a larger system with many more DOF,
    K_11 u_1 + K_12 u_2 + K_13 u_3 + · · · + K_1n u_n = F_1
    K_21 u_1 + K_22 u_2 + K_23 u_3 + · · · + K_2n u_n = F_2
    K_31 u_1 + K_32 u_2 + K_33 u_3 + · · · + K_3n u_n = F_3
        ⋮
    K_n1 u_1 + K_n2 u_2 + K_n3 u_3 + · · · + K_nn u_n = F_n,     (4.14)
in which case we can interpret an individual element K_ij in the stiffness matrix as the force at DOF i if u_j = 1 and all other displacement components u_i = 0:
    K_ij = F_i |_{u_j = 1, others = 0}.     (4.15)
4.2 Matrix Assembly
We now return to the stiffness part of the original problem shown in Fig. 29. In the absence of the masses and constraints, this system is shown in Fig. 31. Since there are three points in this system, each with one DOF, the system is a 3-DOF system. The system stiffness matrix can be assembled for this system by adding the 3 × 3 stiffness matrices for each element:
    K = [  k_1  −k_1   0 ]   [ 0    0     0  ]   [  k_1      −k_1       0  ]
        [ −k_1   k_1   0 ] + [ 0   k_2  −k_2 ] = [ −k_1   k_1 + k_2   −k_2 ]
        [   0     0    0 ]   [ 0  −k_2   k_2 ]   [   0      −k_2       k_2 ].     (4.16)
The justification for this assembly procedure is that forces are additive. For example, k_22 is the force at DOF 2 when u_2 = 1 and u_1 = u_3 = 0; both elements which connect to DOF 2 contribute to k_22. This matrix corresponds to the unconstrained system.
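The assembly rule can be sketched generically: each element's 2 × 2 matrix of Eq. 4.13 is added into the rows and columns of the two DOF it connects. The stiffness values below are hypothetical.

```python
# Sketch of the assembly procedure of Eq. 4.16 (hypothetical k values).

def assemble(n_dof, elements):
    """elements: list of (i, j, k) with 0-based DOF numbers and stiffness k."""
    K = [[0.0] * n_dof for _ in range(n_dof)]
    for i, j, k in elements:
        K[i][i] += k          # Eq. 4.13 scattered into the system matrix
        K[j][j] += k
        K[i][j] -= k
        K[j][i] -= k
    return K

k1, k2 = 3.0, 2.0
K = assemble(3, [(0, 1, k1), (1, 2, k2)])   # the two springs of Fig. 31
print(K)

# Row sums vanish (equilibrium of the unconstrained system):
assert all(abs(sum(row)) < 1e-12 for row in K)
```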
4.3 Constraints
The system in Fig. 29 has a constraint on DOF 1, as shown in Fig. 32. If we expand the
Figure 32: Spring System With Constraint.
Figure 33: 4-DOF Spring System.
(unconstrained) matrix system Ku = F into
    K_11 u_1 + K_12 u_2 + K_13 u_3 = F_1
    K_21 u_1 + K_22 u_2 + K_23 u_3 = F_2
    K_31 u_1 + K_32 u_2 + K_33 u_3 = F_3,     (4.17)
we see that Row i corresponds to the equilibrium equation for DOF i. Thus, if u_i = 0, we do not need Row i of the matrix (although that equation can be saved to recover the constraint force later). Also, Column i of the matrix multiplies u_i, so that, if u_i = 0, we do not need Column i of the matrix. That is, if u_i = 0 for some system, we can enforce that constraint by deleting Row i and Column i from the unconstrained matrix. Hence, with u_1 = 0 in Fig. 32, we delete Row 1 and Column 1 in Eq. 4.16 to obtain the reduced matrix
    K = [ k_1 + k_2   −k_2 ]
        [   −k_2       k_2 ],     (4.18)
which is the same matrix obtained previously in Eq. 4.5.
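The row/column deletion just described can be sketched in a few lines; the stiffness values are hypothetical, and the determinant checks anticipate the singularity discussion below.

```python
# Sketch: enforcing u_i = 0 by deleting Row i and Column i (assumed k values).

def constrain(K, F, fixed):
    """Delete the rows and columns of the fixed (zero-displacement) DOF."""
    keep = [i for i in range(len(K)) if i not in fixed]
    Kc = [[K[i][j] for j in keep] for i in keep]
    Fc = [F[i] for i in keep]
    return Kc, Fc

k1, k2 = 3.0, 2.0
K3 = [[k1, -k1, 0.0],
      [-k1, k1 + k2, -k2],
      [0.0, -k2, k2]]                       # Eq. 4.16
Kc, Fc = constrain(K3, [0.0, 0.0, 10.0], {0})

# The reduced matrix is exactly Eq. 4.18 (= Eq. 4.5):
assert Kc == [[k1 + k2, -k2], [-k2, k2]]

# Unconstrained K is singular; constrained K is not:
det3 = (K3[0][0]*(K3[1][1]*K3[2][2] - K3[1][2]*K3[2][1])
        - K3[0][1]*(K3[1][0]*K3[2][2] - K3[1][2]*K3[2][0])
        + K3[0][2]*(K3[1][0]*K3[2][1] - K3[1][1]*K3[2][0]))
det2 = Kc[0][0]*Kc[1][1] - Kc[0][1]*Kc[1][0]
print(det3, det2)
```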
Notice that, from Eqs. 4.16 and 4.18,
    det K_{3×3} = k_1 [(k_1 + k_2) k_2 − k_2²] + k_1 (−k_1 k_2) = 0,     (4.19)
whereas
    det K_{2×2} = (k_1 + k_2) k_2 − k_2² = k_1 k_2 ≠ 0.     (4.20)
That is, the unconstrained matrix K_{3×3} is singular, but the constrained matrix K_{2×2} is nonsingular. Without constraints, K is singular (and the solution of the mechanical problem is not unique) because of the presence of rigid body modes.
4.4 Example and Summary
Consider the 4-DOF spring system shown in Fig. 33. The unconstrained stiffness matrix for this system is
    K = [ k_1 + k_4     −k_1          −k_4           0   ]
        [  −k_1       k_1 + k_2       −k_2           0   ]
        [  −k_4         −k_2     k_2 + k_3 + k_4   −k_3  ]
        [    0            0           −k_3          k_3  ].     (4.21)
We summarize several properties of stiffness matrices:
1. K is symmetric. This property is a special case of the Betti reciprocal theorem in mechanics.
2. An off-diagonal term is zero unless the two points are common to the same element. Thus, K is sparse in general and usually banded.
3. K is singular without enough constraints to eliminate rigid body motion.
For spring systems, which have only one DOF at each point, the sum of any matrix row or column is zero. This property is a consequence of equilibrium, since the matrix entries in Column i consist of all the grid point forces when u_i = 1 and all other DOF are fixed. The forces must sum to zero, since the object is in static equilibrium.
We summarize the solution procedure for spring systems:
1. Generate the element stiﬀness matrices.
2. Assemble the system K and F.
3. Apply constraints.
4. Solve Ku = F for u.
5. Compute reactions, spring forces, and stresses.
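The assembly step of this procedure, applied to the Fig. 33 system, can be sketched as follows; the connectivity is read off Eq. 4.21 (springs 1-2, 2-3, 3-4, and 1-3), and the stiffness values are hypothetical. The asserts check the matrix properties listed above.

```python
# Sketch: assembling Eq. 4.21 for the 4-DOF system of Fig. 33 and checking
# the stiffness-matrix properties listed above (hypothetical k values).

k = {1: 3.0, 2: 2.0, 3: 5.0, 4: 1.0}
conn = [(0, 1, k[1]), (1, 2, k[2]), (2, 3, k[3]), (0, 2, k[4])]

n = 4
K = [[0.0] * n for _ in range(n)]
for i, j, kv in conn:
    K[i][i] += kv; K[j][j] += kv
    K[i][j] -= kv; K[j][i] -= kv

# Property 1: symmetry
assert all(K[i][j] == K[j][i] for i in range(n) for j in range(n))
# Property 2: points 1 and 4 share no element, so that entry is zero
assert K[0][3] == 0.0
# Row sums vanish (equilibrium), so the unconstrained K is singular
assert all(abs(sum(row)) < 1e-12 for row in K)
print(K)
```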
4.5 Pin-Jointed Rod Element
Consider the pin-jointed rod element (an axial member) shown in Fig. 34. From mechanics of materials, we recall that the change in displacement u for a rod of length L subjected to an axial force F is
    u = FL / (AE),     (4.22)
where A is the rod cross-sectional area, and E is Young's modulus of the rod material. Thus, the axial stiffness is
    k = F / u = AE / L.     (4.23)
The rod is therefore equivalent to a scalar spring with k = AE/L, and
    K_el = (AE/L) [  1  −1 ]
                  [ −1   1 ].     (4.24)
Figure 34: Pin-Jointed Rod Element.
Figure 35: Truss Structure Modeled With Pin-Jointed Rods.
Figure 36: The Degrees of Freedom for a Pin-Jointed Rod Element in 2-D.
Figure 37: Computing 2-D Stiffness of Pin-Jointed Rod.
However, matrix assembly for a truss structure (a structure made of pin-jointed rods) differs from that for a collection of springs, since the rod elements are not all collinear (e.g., Fig. 35). Thus the elements must be transformed to a common coordinate system before the element matrices can be assembled into the system stiffness matrix.
A typical rod element in 2-D is shown in Fig. 36. To use this element in 2-D trusses requires expanding the 2 × 2 matrix K_el into a 4 × 4 matrix. The four DOF for the element are also shown in Fig. 36. To compute the entries in the first column of the 4 × 4 K_el, we set u_1 = 1, u_2 = u_3 = u_4 = 0, and compute the four grid point forces F_1, F_2, F_3, F_4, as shown in Fig. 37. For example, at Point 1,
    F_x = F cos θ = (k · 1 cos θ) cos θ = k cos² θ = k_11 = −k_31,     (4.25)
    F_y = F sin θ = (k · 1 cos θ) sin θ = k cos θ sin θ = k_21 = −k_41,     (4.26)
where k = AE/L, and the forces at the opposite end of the rod (x_2, y_2) are equal in magnitude and opposite in sign. These four values complete the first column of the matrix. Similarly we can find the rest of the matrix to obtain
    K_el = (AE/L) [  c²    cs   −c²   −cs ]
                  [  cs    s²   −cs   −s² ]
                  [ −c²   −cs    c²    cs ]
                  [ −cs   −s²    cs    s² ],     (4.27)
Figure 38: Pin-Jointed Frame Example.
where
    c = cos θ = (x_2 − x_1)/L,   s = sin θ = (y_2 − y_1)/L.     (4.28)
Note that the angle θ is not of interest; only c and s are needed. Later, in the discussion of
matrix transformations, we will derive a more convenient way to obtain this matrix.
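Eqs. 4.27 and 4.28 can be sketched as a small function that builds the 4 × 4 rod stiffness directly from the end coordinates; the geometry and AE value below are hypothetical.

```python
import math

# Sketch of Eqs. 4.27-4.28: the 4 x 4 stiffness of a pin-jointed rod from its
# end coordinates (hypothetical geometry and properties).

def rod_k(x1, y1, x2, y2, EA):
    L = math.hypot(x2 - x1, y2 - y1)
    c, s = (x2 - x1) / L, (y2 - y1) / L      # Eq. 4.28: only c and s needed
    k = EA / L
    return [[ k*c*c,  k*c*s, -k*c*c, -k*c*s],
            [ k*c*s,  k*s*s, -k*c*s, -k*s*s],
            [-k*c*c, -k*c*s,  k*c*c,  k*c*s],
            [-k*c*s, -k*s*s,  k*c*s,  k*s*s]]

K = rod_k(0.0, 0.0, 3.0, 4.0, EA=100.0)      # L = 5, c = 0.6, s = 0.8
print([[round(v, 6) for v in row] for row in K])

# A rigid x-translation produces no force:
rigid = [sum(K[i][j] * [1.0, 0.0, 1.0, 0.0][j] for j in range(4))
         for i in range(4)]
assert all(abs(v) < 1e-9 for v in rigid)
```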
4.6 Pin-Jointed Frame Example
We illustrate the matrix assembly and solution procedures for truss structures with the simple frame shown in Fig. 38. For this frame, we assume E = 30 × 10⁶ psi, L = 10 in, and A = 1 in². Before the application of constraints, the stiffness matrix is
    K = k_0 [  1  −1    −1       1      0   0 ]
            [ −1   1     1      −1      0   0 ]
            [ −1   1   1 + 1  −1 + 1   −1  −1 ]
            [  1  −1  −1 + 1   1 + 1   −1  −1 ]
            [  0   0    −1      −1      1   1 ]
            [  0   0    −1      −1      1   1 ],     (4.29)
where
    k_0 = (AE/L)(a/L)² = 1.5 × 10⁶ lb/in,   a/L = 1/√2.     (4.30)
After constraints, the system of equations is
    k_0 [ 2  0 ] { u_3 }   {   5000    }
        [ 0  2 ] { u_4 } = { −5000 √3  },     (4.31)
whose solution is
    u_3 = 1.67 × 10⁻³ in,   u_4 = −2.89 × 10⁻³ in.     (4.32)
Figure 39: Example With Reactions and Loads at Same DOF.
The reactions can be recovered as
    R_1 = k_0 (−u_3 + u_4) = −6830 lb,   R_2 = k_0 (u_3 − u_4) = 6830 lb,
    R_5 = k_0 (−u_3 − u_4) = 1830 lb,   R_6 = k_0 (−u_3 − u_4) = 1830 lb.     (4.33)
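The arithmetic of Eqs. 4.30-4.33 can be checked numerically; the sketch below recomputes the displacements and reactions and verifies overall equilibrium against the applied load components (5000 lb in x, −5000√3 lb in y).

```python
import math

# Checking the numbers in Eqs. 4.30-4.33 for the frame of Fig. 38
# (E = 30e6 psi, A = 1 in^2, L = 10 in, a/L = 1/sqrt(2)).

A, E, L = 1.0, 30.0e6, 10.0
k0 = (A * E / L) * 0.5                       # Eq. 4.30: (a/L)^2 = 1/2

# Eq. 4.31: 2 k0 u3 = 5000, 2 k0 u4 = -5000 sqrt(3)
u3 = 5000.0 / (2.0 * k0)
u4 = -5000.0 * math.sqrt(3.0) / (2.0 * k0)

# Eq. 4.33 reactions
R1 = k0 * (-u3 + u4)
R2 = k0 * (u3 - u4)
R5 = k0 * (-u3 - u4)                          # R6 = R5

# Overall equilibrium: reactions balance the applied load components.
assert abs((R1 + R5) + 5000.0) < 1e-9                        # x direction
assert abs((R2 + R5) - 5000.0 * math.sqrt(3.0)) < 1e-9       # y direction
print(round(u3, 6), round(u4, 6), round(R1, 1), round(R5, 1))
```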
4.7 Boundary Conditions by Matrix Partitioning
We recall that, to enforce u_i = 0, we can delete Row i and Column i from the stiffness matrix. By using matrix partitioning instead, we can treat nonzero constraints and recover the forces of constraint.
Consider the finite element matrix system Ku = F, where some DOF are specified, and some are free. We partition the unknown displacement vector into
    u = { u_f }
        { u_s },     (4.34)
where u_f and u_s denote, respectively, the free and constrained DOF. A partitioning of the matrix system then yields
    [ K_ff  K_fs ] { u_f }   { F_f }
    [ K_sf  K_ss ] { u_s } = { F_s },     (4.35)
where u_s is prescribed. From the first partition,
    K_ff u_f + K_fs u_s = F_f     (4.36)
or
    K_ff u_f = F_f − K_fs u_s,     (4.37)
which can be solved for the unknown u_f. The second partition then yields the forces at the constrained DOF:
    F_s = K_sf u_f + K_ss u_s,     (4.38)
where u_f is now known, and u_s is prescribed.
The grid point forces F_s at the constrained DOF consist of both the reactions of constraint and the applied loads, if any, at those DOF. For example, consider the beam shown in Fig. 39. When the applied load is distributed to the grid points, the loads at the two end grid points would include both reactions and a portion of the applied load. Thus, F_s, which includes all loads at the constrained DOF, can be written as
    F_s = P_s + R,     (4.39)
Figure 40: Large Spring Approach to Constraints.
where P_s is the applied load, and R is the vector of reactions. The reactions can then be recovered as
    R = K_sf u_f + K_ss u_s − P_s.     (4.40)
The disadvantage of the partitioning approach to constraints is that it complicates the writing of computer programs, since one needs the general capability to partition a matrix into smaller matrices. However, for general purpose software, such a capability is essential.
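The partitioned solve of Eqs. 4.35-4.38 can be sketched on a small spring chain with an assumed nonzero prescribed displacement; the 2 × 2 free partition is solved by Cramer's rule to keep the sketch self-contained.

```python
# Sketch of the partitioned solve, Eqs. 4.35-4.38, for a 3-DOF spring chain
# with an assumed nonzero prescribed displacement u_1 = 0.5.

k1, k2 = 3.0, 2.0
K = [[k1, -k1, 0.0],
     [-k1, k1 + k2, -k2],
     [0.0, -k2, k2]]
free, supp = [1, 2], [0]
us = [0.5]                 # prescribed displacement u_s (hypothetical)
Ff = [0.0, 10.0]           # applied loads at the free DOF (hypothetical)

# K_ff u_f = F_f - K_fs u_s   (Eq. 4.37), solved by Cramer's rule
Kff = [[K[i][j] for j in free] for i in free]
rhs = [Ff[a] - sum(K[i][j] * us[b] for b, j in enumerate(supp))
       for a, i in enumerate(free)]
det = Kff[0][0]*Kff[1][1] - Kff[0][1]*Kff[1][0]
uf = [(rhs[0]*Kff[1][1] - rhs[1]*Kff[0][1]) / det,
      (Kff[0][0]*rhs[1] - Kff[1][0]*rhs[0]) / det]

# F_s = K_sf u_f + K_ss u_s   (Eq. 4.38): the constraint force
u = [us[0]] + uf
Fs = sum(K[0][j] * u[j] for j in range(3))
print(uf, Fs)

# The reaction balances the 10.0 applied at the far end:
assert abs(Fs + 10.0) < 1e-12
```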
4.8 Alternative Approach to Constraints
Consider a structure (Fig. 40) for which we want to prescribe a value for the ith DOF: u_i = U. An alternative approach to enforce this constraint is to attach a large spring k_0 between DOF i and ground (a fixed point) and to apply a force F_i = k_0 U to DOF i. This spring should be many orders of magnitude larger than other typical values in the stiffness matrix. A large number placed on the matrix diagonal will have no adverse effects on the conditioning of the matrix.
For example, if we prescribe DOF 3 in a 4-DOF system, the modified system becomes
    [ k_11  k_12     k_13      k_14 ] { u_1 }   { F_1         }
    [ k_21  k_22     k_23      k_24 ] { u_2 }   { F_2         }
    [ k_31  k_32  k_33 + k_0   k_34 ] { u_3 } = { F_3 + k_0 U }
    [ k_41  k_42     k_43      k_44 ] { u_4 }   { F_4         }.     (4.41)
For large k_0, the third equation is
    k_0 u_3 ≈ k_0 U   or   u_3 ≈ U,     (4.42)
as required.
The large spring approach can be used for any system of equations in which one wants to enforce a constraint on a particular unknown. The main advantage of this approach is that computer coding is easier, since matrix sizes do not have to change.
We can summarize the large-spring algorithm for applying constraints to the linear system Ku = F as follows:
1. Choose the large spring k_0 to be, say, 10,000 times the largest diagonal entry in the unconstrained K matrix.
Figure 41: DOF for Beam in Flexure (2-D).
Figure 42: The Beam Problem Associated With Column 1.
2. For each DOF i which is to be constrained (zero or not), add k_0 to the diagonal entry K_ii, and add k_0 U to the corresponding right-hand side term F_i, where U is the desired constraint value for DOF i.
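The two steps above can be sketched on the same hypothetical 3-DOF spring chain used earlier, with u_1 = 0.5 prescribed; the constraint is satisfied only approximately, with an error on the order of the constraint force divided by k_0.

```python
# Sketch of the large-spring algorithm above (hypothetical data: a 3-DOF
# spring chain with u_1 = 0.5 prescribed and a load at DOF 3).

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

k1, k2 = 3.0, 2.0
K = [[k1, -k1, 0.0], [-k1, k1 + k2, -k2], [0.0, -k2, k2]]
F = [0.0, 0.0, 10.0]
U, i = 0.5, 0                                   # prescribe u_1 = U

k0 = 10000.0 * max(K[d][d] for d in range(3))   # step 1
K[i][i] += k0                                   # step 2: modify diagonal
F[i] += k0 * U                                  # step 2: modify right side

u = solve(K, F)
print([round(v, 4) for v in u])
assert abs(u[0] - U) < 1e-3     # constraint enforced approximately
```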
4.9 Beams in Flexure
Like springs and pin-jointed rods, beam elements also have stiffness matrices which can be computed exactly. In two dimensions, we designate the four DOF associated with flexure as shown in Fig. 41. The stiffness matrix would therefore be of the form
    { F_1 }   [ k_11  k_12  k_13  k_14 ] { w_1 }
    { M_1 }   [ k_21  k_22  k_23  k_24 ] { θ_1 }
    { F_2 } = [ k_31  k_32  k_33  k_34 ] { w_2 }
    { M_2 }   [ k_41  k_42  k_43  k_44 ] { θ_2 },     (4.43)
where w_i and θ_i are, respectively, the transverse displacement and rotation at the ith point. Rotations are expressed in radians. To compute the first column of K, we set
    w_1 = 1,   θ_1 = w_2 = θ_2 = 0,     (4.44)
and compute the four forces. Thus, for an Euler beam, we solve the beam differential equation
    EI y″ = M(x),     (4.45)
subject to the boundary conditions, Eq. 4.44, as shown in Fig. 42. For Column 2, we solve the beam equation subject to the boundary conditions
    θ_1 = 1,   w_1 = w_2 = θ_2 = 0,     (4.46)
as shown in Fig. 43. If we then combine the resulting flexural stiffnesses with the axial
Figure 43: The Beam Problem Associated With Column 2.
Figure 44: DOF for 2-D Beam Element.
stiffnesses previously derived for the axial member, we obtain the two-dimensional stiffness matrix for the Euler beam:
    { F_1 }   [  AE/L      0          0       −AE/L      0          0      ] { u_1 }
    { F_2 }   [   0     12EI/L³    6EI/L²       0    −12EI/L³    6EI/L²   ] { u_2 }
    { F_3 } = [   0      6EI/L²    4EI/L        0     −6EI/L²    2EI/L    ] { u_3 }
    { F_4 }   [ −AE/L      0          0        AE/L      0          0      ] { u_4 }
    { F_5 }   [   0    −12EI/L³   −6EI/L²       0     12EI/L³   −6EI/L²   ] { u_5 }
    { F_6 }   [   0      6EI/L²    2EI/L        0     −6EI/L²    4EI/L    ] { u_6 },     (4.47)
where the six DOF are shown in Fig. 44.
For this element, note that there is no coupling between axial and transverse behavior. Transverse shear, which was ignored, can be added. The three-dimensional counterpart to this matrix would have six DOF at each grid point: three translations and three rotations (u_x, u_y, u_z, R_x, R_y, R_z). Thus, in 3-D, K is a 12 × 12 matrix. In addition, for bending in two different planes, there would have to be two moments of inertia I_1 and I_2, in addition to a torsional constant J and (possibly) a product of inertia I_12. For transverse shear, "area factors" would also be needed to reflect the fact that two different beams with the same cross-sectional area, but different cross-sectional shapes, would have different shear stiffnesses.
4.10 Direct Approach to Continuum Problems
Stiffness matrices for springs, rods, and beams can be derived exactly. For most problems of interest in engineering, exact stiffness matrices cannot be derived. Consider the 2-D problem of computing the displacements and stresses in a thin plate with applied forces and
Figure 45: Plate With Constraints and Loads.
Figure 46: DOF for the Linear Triangular Membrane Element.
constraints (Fig. 45). This figure also shows the domain modeled with several triangular elements. A typical element is shown in Fig. 46. With three grid points and two DOF at each point, this is a 6-DOF element. The displacement and force vectors for the element are
    u = { u_1, v_1, u_2, v_2, u_3, v_3 }ᵀ,   F = { F_1x, F_1y, F_2x, F_2y, F_3x, F_3y }ᵀ.     (4.48)
Since we cannot determine exactly a stiffness matrix that relates these two vectors, we approximate the displacement field over the element as
    u(x, y) = a_1 + a_2 x + a_3 y
    v(x, y) = a_4 + a_5 x + a_6 y,     (4.49)
where u and v are the x and y components of displacement, respectively. Note that the number of undetermined coefficients equals the number of DOF (6) in the element. We
choose polynomials for convenience in the subsequent mathematics. The linear approximation in Eq. 4.49 is analogous to the piecewise linear approximation used in trapezoidal rule integration.
At the vertices, the displacements in Eq. 4.49 must match the grid point values, e.g.,
    u_1 = a_1 + a_2 x_1 + a_3 y_1.     (4.50)
We can write five more equations of this type to obtain
    { u_1 }   [ 1  x_1  y_1  0   0    0  ] { a_1 }
    { v_1 }   [ 0   0    0   1  x_1  y_1 ] { a_2 }
    { u_2 }   [ 1  x_2  y_2  0   0    0  ] { a_3 }
    { v_2 } = [ 0   0    0   1  x_2  y_2 ] { a_4 }
    { u_3 }   [ 1  x_3  y_3  0   0    0  ] { a_5 }
    { v_3 }   [ 0   0    0   1  x_3  y_3 ] { a_6 }     (4.51)
or
    u = γ a.     (4.52)
Since the x and y directions uncouple in Eq. 4.51, we can write this last equation in uncoupled form:
    { u_1 }   [ 1  x_1  y_1 ] { a_1 }
    { u_2 } = [ 1  x_2  y_2 ] { a_2 }
    { u_3 }   [ 1  x_3  y_3 ] { a_3 },     (4.53)

    { v_1 }   [ 1  x_1  y_1 ] { a_4 }
    { v_2 } = [ 1  x_2  y_2 ] { a_5 }
    { v_3 }   [ 1  x_3  y_3 ] { a_6 }.     (4.54)
We now observe that
        [ 1  x_1  y_1 ]
    det [ 1  x_2  y_2 ] = 2A ≠ 0,     (4.55)
        [ 1  x_3  y_3 ]
since A is (from geometry) the area of the triangle. A is positive for counterclockwise ordering and negative for clockwise ordering. Thus, unless the triangle is degenerate, we conclude that γ is invertible, and
    { a_1 }          [ x_2 y_3 − x_3 y_2   x_3 y_1 − x_1 y_3   x_1 y_2 − x_2 y_1 ] { u_1 }
    { a_2 } = (1/2A) [     y_2 − y_3           y_3 − y_1           y_1 − y_2     ] { u_2 }
    { a_3 }          [     x_3 − x_2           x_1 − x_3           x_2 − x_1     ] { u_3 }.     (4.56)
Similarly,
    { a_4 }          [ x_2 y_3 − x_3 y_2   x_3 y_1 − x_1 y_3   x_1 y_2 − x_2 y_1 ] { v_1 }
    { a_5 } = (1/2A) [     y_2 − y_3           y_3 − y_1           y_1 − y_2     ] { v_2 }
    { a_6 }          [     x_3 − x_2           x_1 − x_3           x_2 − x_1     ] { v_3 }.     (4.57)
Thus, we can write
    a = γ⁻¹ u,     (4.58)
and treat the element's unknowns as being either the physical displacements u or the non-physical coefficients a.
The strain components of interest in 2-D are
    ε = { ε_xx, ε_yy, γ_xy }ᵀ = { u_,x, v_,y, u_,y + v_,x }ᵀ.     (4.59)
From Eq. 4.49, we evaluate the strains for this element as
    ε = {    a_2    }   [ 0 1 0 0 0 0 ]
        {    a_6    } = [ 0 0 0 0 0 1 ] a = B a = B γ⁻¹ u = C u,     (4.60)
        { a_3 + a_5 }   [ 0 0 1 0 1 0 ]
where
    C = B γ⁻¹.     (4.61)
Eq. 4.60 is a formula to compute element strains given the grid point displacements. Note that, for this element, the strains are independent of position in the element. Thus, this element is referred to as the Constant Strain Triangle (CST).
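The strain recovery of Eqs. 4.56-4.60 can be sketched numerically. The rows of γ⁻¹ needed for the strains are written out explicitly, the triangle geometry is hypothetical, and the imposed displacement field is a uniform strain state so the expected answer is known exactly.

```python
# Sketch: recovering the constant strains of Eq. 4.60 from grid-point
# displacements, using the derivative rows of gamma^{-1} (Eq. 4.56).
# Triangle geometry and the imposed strain values are hypothetical.

x = [0.0, 2.0, 0.0]          # counterclockwise triangle (assumed)
y = [0.0, 0.0, 1.0]
exx, eyy = 1e-3, -3e-4       # imposed uniform strain field
u = [exx * xi for xi in x]   # u = exx * x  ->  u_,x = exx
v = [eyy * yi for yi in y]   # v = eyy * y  ->  v_,y = eyy

A2 = (x[1]*y[2] - x[2]*y[1]) - (x[0]*y[2] - x[2]*y[0]) \
     + (x[0]*y[1] - x[1]*y[0])                                 # = 2A, Eq. 4.55

# Rows 2 and 3 of Eq. 4.56 give the derivatives a_2 = u_,x and a_3 = u_,y
a2 = ((y[1]-y[2])*u[0] + (y[2]-y[0])*u[1] + (y[0]-y[1])*u[2]) / A2
a3 = ((x[2]-x[1])*u[0] + (x[0]-x[2])*u[1] + (x[1]-x[0])*u[2]) / A2
a5 = ((y[1]-y[2])*v[0] + (y[2]-y[0])*v[1] + (y[0]-y[1])*v[2]) / A2
a6 = ((x[2]-x[1])*v[0] + (x[0]-x[2])*v[1] + (x[1]-x[0])*v[2]) / A2

strain = [a2, a6, a3 + a5]   # [e_xx, e_yy, gamma_xy], Eqs. 4.59-4.60
print(strain)
```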
From generalized Hooke's law, each stress component is a linear combination of all the strain components:
    σ = D ε = D B γ⁻¹ u = D C u,     (4.62)
where, for example, for an isotropic material in plane stress,
    D = E/(1 − ν²) [ 1  ν      0      ]
                   [ ν  1      0      ]
                   [ 0  0  (1 − ν)/2  ],     (4.63)
We now derive the element stiﬀness matrix using the Principle of Virtual Work. Consider
an element in static equilibrium with a set of applied loads F and displacements u. According
to the Principle of Virtual Work, the work done by the applied loads acting through the
displacements is equal to the increment in stored strain energy during an arbitrary virtual
(imaginary) displacement δu:
δu
T
F =
_
V
δε
T
σdV. (4.64)
48
We then substitute Eqs. 4.60 and 4.62 into this equation to obtain
    δuᵀ F = ∫_V (C δu)ᵀ (D C u) dV = δuᵀ ( ∫_V Cᵀ D C dV ) u,     (4.65)
where δu and u can be removed from the integral, since they are grid point quantities independent of position. Since δu is arbitrary, it follows that
    F = ( ∫_V Cᵀ D C dV ) u.     (4.66)
Therefore, the stiffness matrix is given by
    K = ∫_V Cᵀ D C dV = ∫_V (B γ⁻¹)ᵀ D (B γ⁻¹) dV,     (4.67)
where the integral is over the volume of the element. Note that, since the material property matrix D is symmetric, K is symmetric. The substitution of the constant B, γ, and D matrices into this equation yields the stiffness matrix for the Constant Strain Triangle:
    K = Et / [4A(1 − ν²)] ×
        [ y²_23 + λx²_32   µx_32 y_23        y_23 y_31 + λx_13 x_32   νx_13 y_23 + λx_32 y_31   y_12 y_23 + λx_21 x_32   νx_21 y_23 + λx_32 y_12 ]
        [                  x²_32 + λy²_23    νx_32 y_31 + λx_13 y_23  x_32 x_13 + λy_31 y_23    νx_32 y_12 + λx_21 y_23  x_21 x_32 + λy_12 y_23  ]
        [                                    y²_31 + λx²_13           µx_13 y_31                y_12 y_31 + λx_21 x_13   νx_21 y_31 + λx_13 y_12 ]
        [                                                             x²_13 + λy²_31            νx_13 y_12 + λx_21 y_31  x_13 x_21 + λy_12 y_31  ]
        [      Sym                                                                              y²_12 + λx²_21           µx_21 y_12              ]
        [                                                                                                                x²_21 + λy²_12          ],     (4.68)
where
    x_ij = x_i − x_j,   y_ij = y_i − y_j,   λ = (1 − ν)/2,   µ = (1 + ν)/2,     (4.69)
and t is the element thickness.
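Because B, γ, and D are all constant over the element, the integral of Eq. 4.67 reduces to K = t·A·CᵀDC, which can be sketched directly; the geometry and material values below are hypothetical, and the asserts check symmetry and the rigid-body null modes discussed earlier.

```python
# Sketch of Eq. 4.67 for the CST: K = t * A * C^T D C, since the integrand
# is constant. Geometry and material properties are hypothetical.

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

x = [0.0, 2.0, 0.0]; y = [0.0, 0.0, 1.0]        # assumed triangle (ccw)
E, nu, t = 10.0e6, 0.3, 0.1                     # assumed material/thickness
A2 = x[0]*(y[1]-y[2]) + x[1]*(y[2]-y[0]) + x[2]*(y[0]-y[1])   # 2A

# C maps [u1, v1, u2, v2, u3, v3] to [e_xx, e_yy, g_xy] (Eqs. 4.56, 4.60)
b = [y[1]-y[2], y[2]-y[0], y[0]-y[1]]
c = [x[2]-x[1], x[0]-x[2], x[1]-x[0]]
C = [[b[0], 0,    b[1], 0,    b[2], 0   ],
     [0,    c[0], 0,    c[1], 0,    c[2]],
     [c[0], b[0], c[1], b[1], c[2], b[2]]]
C = [[v / A2 for v in row] for row in C]

f = E / (1 - nu**2)
D = [[f, f*nu, 0.0], [f*nu, f, 0.0], [0.0, 0.0, f*(1-nu)/2]]  # Eq. 4.63

Ct = [list(col) for col in zip(*C)]
K = matmul(Ct, matmul(D, C))
K = [[t * (A2 / 2) * v for v in row] for row in K]

# K is symmetric and annihilates rigid-body translations:
assert all(abs(K[i][j] - K[j][i]) < 1e-6 for i in range(6) for j in range(6))
for mode in ([1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1]):
    force = [sum(K[i][j] * mode[j] for j in range(6)) for i in range(6)]
    assert all(abs(v) < 1e-6 for v in force)
print(K[0][0])
```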
5 Change of Basis
On many occasions in engineering applications, the need arises to transform vectors and matrices from one coordinate system to another. For example, in the finite element method, it is frequently more convenient to derive element matrices in a local element coordinate system and then transform those matrices to a global system (Fig. 47). Transformations are also needed to convert between other orthogonal coordinate systems (e.g., cylindrical or spherical) and Cartesian coordinates (Fig. 48).
Figure 47: Element Coordinate Systems in the Finite Element Method.
Figure 48: Basis Vectors in Polar Coordinate System.
Let the vector v be given by
    v = v_1 e_1 + v_2 e_2 + v_3 e_3 = Σ_{i=1}^{3} v_i e_i,     (5.1)
where e_i are the basis vectors, and v_i are the components of v. With the summation convention, we can omit the summation sign and write
    v = v_i e_i,     (5.2)
where, if a subscript appears exactly twice, a summation is implied over the range.
An orthonormal basis is defined as a basis whose basis vectors are mutually orthogonal unit vectors (i.e., vectors of unit length). If e_i is an orthonormal basis,
    e_i · e_j = δ_ij = { 1,  i = j,
                       { 0,  i ≠ j,     (5.3)
where δ_ij is the Kronecker delta.
Since bases are not unique, we can express v in two different orthonormal bases:
    v = Σ_{i=1}^{3} v_i e_i = Σ_{i=1}^{3} v̄_i ē_i,     (5.4)
where v_i are the components of v in the unbarred coordinate system, and v̄_i are the components in the barred system (Fig. 49). If we take the dot product of both sides of Eq. 5.4 with e_j, we obtain
    Σ_{i=1}^{3} v_i e_i · e_j = Σ_{i=1}^{3} v̄_i ē_i · e_j,     (5.5)
Figure 49: Change of Basis.
where e_i · e_j = δ_ij, and we define the 3 × 3 matrix R by
    R_ij = ē_i · e_j.     (5.6)
Thus, from Eq. 5.5,
    v_j = Σ_{i=1}^{3} R_ij v̄_i = Σ_{i=1}^{3} (Rᵀ)_ji v̄_i.     (5.7)
Since the matrix product
    C = AB     (5.8)
can be written using subscript notation as
    C_ij = Σ_{k=1}^{3} A_ik B_kj,     (5.9)
Eq. 5.7 is equivalent to the matrix product
    v = Rᵀ v̄.     (5.10)
Similarly, if we take the dot product of Eq. 5.4 with ē_j, we obtain
    Σ_{i=1}^{3} v_i e_i · ē_j = Σ_{i=1}^{3} v̄_i ē_i · ē_j,     (5.11)
where ē_i · ē_j = δ_ij, and e_i · ē_j = R_ji. Thus,
    v̄_j = Σ_{i=1}^{3} R_ji v_i   or   v̄ = R v   or   v = R⁻¹ v̄.     (5.12)
A comparison of Eqs. 5.10 and 5.12 yields
    R⁻¹ = Rᵀ   or   R Rᵀ = I   or   Σ_{k=1}^{3} R_ik R_jk = δ_ij,     (5.13)
where I is the identity matrix (I_ij = δ_ij):
    I = [ 1 0 0 ]
        [ 0 1 0 ]
        [ 0 0 1 ].     (5.14)
This type of transformation is called an orthogonal coordinate transformation (OCT). A
matrix R satisfying Eq. 5.13 is said to be an orthogonal matrix. That is, an orthogonal
matrix is one whose inverse is equal to the transpose. R is sometimes called a rotation
matrix.
For example, for the coordinate rotation shown in Fig. 49, in 3-D,
    R = [  cos θ   sin θ   0 ]
        [ −sin θ   cos θ   0 ]
        [    0       0     1 ].     (5.15)
In 2-D,
    R = [  cos θ   sin θ ]
        [ −sin θ   cos θ ]     (5.16)
and
    v_x = v̄_x cos θ − v̄_y sin θ
    v_y = v̄_x sin θ + v̄_y cos θ.     (5.17)
We recall that the determinant of a matrix product is equal to the product of the determinants. Also, the determinant of the transpose of a matrix is equal to the determinant of the matrix itself. Thus, from Eq. 5.13,
    det(R Rᵀ) = (det R)(det Rᵀ) = (det R)² = det I = 1,     (5.18)
and we conclude that, for an orthogonal matrix R,
    det R = ±1.     (5.19)
The plus sign occurs for rotations, and the minus sign occurs for combinations of rotations and reflections. For example, the orthogonal matrix
    R = [ 1  0   0 ]
        [ 0  1   0 ]
        [ 0  0  −1 ]     (5.20)
We note that the length of a vector is unchanged under an orthogonal coordinate trans
formation, since the square of the length is given by
¯ v
i
¯ v
i
= R
ij
v
j
R
ik
v
k
= δ
jk
v
j
v
k
= v
j
v
j
, (5.21)
where the summation convention was used. That is, the square of the length of a vector is
the same in both coordinate systems.
52
To summarize, under an orthogonal coordinate transformation, vectors transform according to the rule
    v̄ = R v   or   v̄_i = Σ_{j=1}^{3} R_ij v_j,     (5.22)
where
    R_ij = ē_i · e_j,     (5.23)
and
    R Rᵀ = Rᵀ R = I.     (5.24)
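These properties are easy to verify numerically. The sketch below builds the 2-D rotation matrix of Eq. 5.16 for an arbitrary angle, checks the orthogonality condition of Eq. 5.24, and confirms the length invariance of Eq. 5.21.

```python
import math

# Sketch: the rotation matrix of Eq. 5.16 satisfies R R^T = I (Eq. 5.24),
# and vector length is invariant under v-bar = R v (Eq. 5.21).

th = 0.3                       # arbitrary rotation angle (radians)
R = [[math.cos(th), math.sin(th)],
     [-math.sin(th), math.cos(th)]]

# R R^T = I (Eq. 5.13 / 5.24)
RRt = [[sum(R[i][k] * R[j][k] for k in range(2)) for j in range(2)]
       for i in range(2)]
assert all(abs(RRt[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))

# Length is preserved under the transformation (Eq. 5.21)
v = [3.0, 4.0]
vb = [sum(R[i][j] * v[j] for j in range(2)) for i in range(2)]
assert abs(math.hypot(*vb) - math.hypot(*v)) < 1e-12
print(vb)
```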
5.1 Tensors
A vector which transforms under an orthogonal coordinate transformation according to the rule v̄ = Rv is defined as a tensor of rank 1. A tensor of rank 0 is a scalar (a quantity which is unchanged by an orthogonal coordinate transformation). For example, temperature and pressure are scalars, since T̄ = T and p̄ = p.
We now introduce tensors of rank 2. Consider a matrix M = (M_ij) which relates two vectors u and v by
    v = M u   or   v_i = Σ_{j=1}^{3} M_ij u_j     (5.25)
(i.e., the result of multiplying a matrix and a vector is a vector). Also, in a rotated coordinate system,
    v̄ = M̄ ū.     (5.26)
Since both u and v are vectors (tensors of rank 1), Eq. 5.25 implies
    Rᵀ v̄ = M Rᵀ ū   or   v̄ = R M Rᵀ ū.     (5.27)
By comparing Eqs. 5.26 and 5.27, we obtain
    M̄ = R M Rᵀ     (5.28)
or, in index notation,
    M̄_ij = Σ_{k=1}^{3} Σ_{l=1}^{3} R_ik R_jl M_kl,     (5.29)
which is the transformation rule for a tensor of rank 2. In general, a tensor of rank n, which has n indices, transforms under an orthogonal coordinate transformation according to the rule
    Ā_{ij···k} = Σ_{p=1}^{3} Σ_{q=1}^{3} · · · Σ_{r=1}^{3} R_ip R_jq · · · R_kr A_{pq···r}.     (5.30)
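The rank-2 rule of Eq. 5.28 can be checked with arbitrary test data: if v = M u in one frame, then v̄ = M̄ ū must hold in the rotated frame.

```python
import math

# Sketch of Eq. 5.28: if v = M u, then in the rotated frame v-bar = M-bar u-bar
# with M-bar = R M R^T. All numerical values are arbitrary test data.

def mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

th = 0.7
R = [[math.cos(th), math.sin(th)], [-math.sin(th), math.cos(th)]]
Rt = [[R[j][i] for j in range(2)] for i in range(2)]

M = [[2.0, 1.0], [0.5, 3.0]]
u = [[1.0], [2.0]]
v = mul(M, u)                     # v = M u in the unbarred frame

Mbar = mul(R, mul(M, Rt))         # Eq. 5.28
ubar, vbar = mul(R, u), mul(R, v)
check = mul(Mbar, ubar)
assert all(abs(check[i][0] - vbar[i][0]) < 1e-12 for i in range(2))
print(Mbar)
```

The trace of M is unchanged by the transformation, a familiar invariant of rank-2 tensors.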
5.2 Examples of Tensors
1. Stress and strain in elasticity
The stress tensor σ is
    σ = [ σ_11  σ_12  σ_13 ]
        [ σ_21  σ_22  σ_23 ]
        [ σ_31  σ_32  σ_33 ],     (5.31)
where σ_11, σ_22, σ_33 are the direct (normal) stresses, and σ_12, σ_13, and σ_23 are the shear stresses. The corresponding strain tensor is
    ε = [ ε_11  ε_12  ε_13 ]
        [ ε_21  ε_22  ε_23 ]
        [ ε_31  ε_32  ε_33 ],     (5.32)
where, in terms of displacements,
    ε_ij = (1/2)(u_i,j + u_j,i) = (1/2)(∂u_i/∂x_j + ∂u_j/∂x_i).     (5.33)
The shear strains in this tensor are equal to half the corresponding engineering shear strains. Both σ and ε transform like second-rank tensors.
2. Generalized Hooke's law
According to Hooke's law in elasticity, the extension in a bar subjected to an axial force is proportional to the force, or stress is proportional to strain. In 1-D,
    σ = E ε,     (5.34)
where E is Young's modulus, an experimentally determined material property.
In general three-dimensional elasticity, there are nine components of stress σ_ij and nine components of strain ε_ij. According to generalized Hooke's law, each stress component can be written as a linear combination of the nine strain components:
    σ_ij = c_ijkl ε_kl,     (5.35)
where the 81 material constants c_ijkl are experimentally determined, and the summation convention is being used.
We now prove that c_ijkl is a tensor of rank 4. We can write Eq. 5.35 in terms of stress and strain in a second orthogonal coordinate system as
    R_ki R_lj σ̄_kl = c_ijkl R_mk R_nl ε̄_mn.     (5.36)
If we multiply both sides of this equation by R_pj R_oi, and sum on repeated indices, we obtain
    R_pj R_oi R_ki R_lj σ̄_kl = R_oi R_pj R_mk R_nl c_ijkl ε̄_mn     (5.37)
or, because R is an orthogonal matrix,
    δ_ok δ_pl σ̄_kl = σ̄_op = R_oi R_pj R_mk R_nl c_ijkl ε̄_mn.     (5.38)
Since, in the second coordinate system,
    σ̄_op = c̄_opmn ε̄_mn,     (5.39)
we conclude that
    c̄_opmn = R_oi R_pj R_mk R_nl c_ijkl,     (5.40)
which proves that c_ijkl is a tensor of rank 4.
3. Stiﬀness matrix in ﬁnite element analysis
In the ﬁnite element method for linear analysis of structures, the forces F acting on an
object in static equilibrium are a linear combination of the displacements u (or vice versa):
Ku = F, (5.41)
where K is referred to as the stiﬀness matrix (with dimensions of force/displacement). In
this equation, u and F contain several subvectors, since u and F are the displacement and
force vectors for all grid points, i.e.,
    u = { u_a, u_b, u_c, . . . },   F = { F_a, F_b, F_c, . . . }   (5.42)

for grid points a, b, c, . . ., where

    ū_a = R_a u_a,  ū_b = R_b u_b,  · · · .   (5.43)
Thus,

    ū = { ū_a, ū_b, ū_c, . . . } = [ R_a  0   0  · · ·
                                      0   R_b  0  · · ·
                                      0    0  R_c · · ·
                                      ⋮    ⋮   ⋮  ⋱   ] { u_a, u_b, u_c, . . . } = Γu,   (5.44)

where Γ is an orthogonal block-diagonal matrix consisting of rotation matrices R:

    Γ = [ R_a  0   0  · · ·
          0   R_b  0  · · ·
          0    0  R_c · · ·
          ⋮    ⋮   ⋮  ⋱   ],   (5.45)

and

    Γ^T Γ = I.   (5.46)
Figure 50: Element Coordinate System for Pin-Jointed Rod.
Similarly,

    F̄ = ΓF.   (5.47)

Thus, if

    K̄ū = F̄,   (5.48)

then

    K̄Γu = ΓF   (5.49)

or

    ( Γ^T K̄ Γ ) u = F.   (5.50)

That is, the stiffness matrix transforms like other tensors of rank 2:

    K = Γ^T K̄ Γ.   (5.51)
We illustrate the transformation of a ﬁnite element stiﬀness matrix by transforming the
stiﬀness matrix for the pinjointed rod element from a local element coordinate system to a
global Cartesian system. Consider the rod shown in Fig. 50. For this element, the 4 ×4 2D
stiﬀness matrix in the element coordinate system is
    K̄ = (AE/L) [  1  0 −1  0
                   0  0  0  0
                  −1  0  1  0
                   0  0  0  0 ],   (5.52)
where A is the crosssectional area of the rod, E is Young’s modulus of the rod material,
and L is the rod length. In the global coordinate system,
    K = Γ^T K̄ Γ,   (5.53)

where the 4 × 4 transformation matrix is

    Γ = [ R  0
          0  R ],   (5.54)

and the rotation matrix is

    R = [  cos θ  sin θ
          −sin θ  cos θ ].   (5.55)
Thus,

    K = (AE/L) [ c −s  0  0 ] [  1  0 −1  0 ] [  c  s  0  0 ]
               [ s  c  0  0 ] [  0  0  0  0 ] [ −s  c  0  0 ]
               [ 0  0  c −s ] [ −1  0  1  0 ] [  0  0  c  s ]
               [ 0  0  s  c ] [  0  0  0  0 ] [  0  0 −s  c ]

      = (AE/L) [  c²   cs  −c²  −cs
                  cs   s²  −cs  −s²
                 −c²  −cs   c²   cs
                 −cs  −s²   cs   s² ],   (5.56)
where c = cos θ and s = sin θ. This result agrees with that found earlier in Eq. 4.27.
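This transformation is easy to verify numerically. The sketch below (the values of A, E, L, and θ are arbitrary) forms K = Γ^T K̄ Γ and compares it with the closed form of Eq. 5.56:

```python
import numpy as np

# Transform the rod stiffness matrix from the element frame to the global
# frame, K = Gamma^T Kbar Gamma, and compare with Eq. 5.56.
A, E, L, theta = 1.0, 10.0, 2.0, 0.4
c, s = np.cos(theta), np.sin(theta)

Kbar = (A * E / L) * np.array([[1, 0, -1, 0],
                               [0, 0, 0, 0],
                               [-1, 0, 1, 0],
                               [0, 0, 0, 0]], dtype=float)
R = np.array([[c, s], [-s, c]])                 # Eq. 5.55
Gamma = np.block([[R, np.zeros((2, 2))],
                  [np.zeros((2, 2)), R]])       # block-diagonal, Eq. 5.54

K = Gamma.T @ Kbar @ Gamma
K_closed = (A * E / L) * np.array([[c*c, c*s, -c*c, -c*s],
                                   [c*s, s*s, -c*s, -s*s],
                                   [-c*c, -c*s, c*c, c*s],
                                   [-c*s, -s*s, c*s, s*s]])
assert np.allclose(K, K_closed)
assert np.allclose(Gamma.T @ Gamma, np.eye(4))  # Eq. 5.46
```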
5.3 Isotropic Tensors
An isotropic tensor is a tensor which is independent of coordinate system (i.e., invariant
under an orthogonal coordinate transformation). The Kronecker delta δ_ij is a second rank
tensor and isotropic, since δ̄_ij = δ_ij, and

    Ī = R I R^T = R R^T = I.   (5.57)
That is, the identity matrix I is invariant under an orthogonal coordinate transformation.
It can be shown in tensor analysis that δ_ij is the only isotropic tensor of rank 2 and,
moreover, δ_ij is the characteristic tensor for all isotropic tensors:

    Rank   Isotropic Tensors
    1      none
    2      c δ_ij
    3      none
    4      a δ_ij δ_kl + b δ_ik δ_jl + c δ_il δ_jk
    odd    none
That is, all isotropic tensors of rank 4 must be of the form shown above, which has three
constants. For example, in generalized Hooke's law (Eq. 5.35), the material property tensor
c_ijkl has 3⁴ = 81 constants (assuming no symmetry). For an isotropic material, c_ijkl must be
an isotropic tensor of rank 4, thus implying at most three material constants (on the basis of
tensor analysis alone). The actual number of independent material constants for an isotropic
material turns out to be two rather than three, a result which depends on the existence of a
strain energy function, which implies the additional symmetry c_ijkl = c_klij.
6 Calculus of Variations
Recall from calculus that a function of one variable y = f(x) attains a stationary value
(minimum, maximum, or neutral) at the point x = x_0 if the derivative f′(x) = 0 at x = x_0.
Figure 51: Minimum, Maximum, and Neutral Stationary Values.
Alternatively, the first variation of f (which is similar to a differential) must vanish for
arbitrary changes δx in x:

    δf = (df/dx) δx = 0.   (6.1)

The second derivative determines what kind of stationary value one has:

    minimum:  d²f/dx² > 0 at x = x_0   (6.2)
    maximum:  d²f/dx² < 0 at x = x_0   (6.3)
    neutral:  d²f/dx² = 0 at x = x_0,  (6.4)

as shown in Fig. 51.
For a function of two variables z = f(x, y), z is stationary at (x_0, y_0) if

    ∂f/∂x = 0 and ∂f/∂y = 0 at (x_0, y_0).

Alternatively, for a stationary value,

    δf = (∂f/∂x) δx + (∂f/∂y) δy = 0 at (x_0, y_0).

A function with n independent variables f(x_1, x_2, . . . , x_n) is stationary at a point P if

    δf = Σ_{i=1}^n (∂f/∂x_i) δx_i = 0 at P

for arbitrary variations δx_i. Hence,

    ∂f/∂x_i = 0 at P (i = 1, 2, . . . , n).
Variational calculus is concerned with ﬁnding stationary values, not of functions, but of
functionals. In general, a functional is a function of a function. More precisely, a functional
is an integral which has a speciﬁc numerical value for each function that is substituted into
it.
Consider the functional

    I(φ) = ∫_a^b F(x, φ, φ_x) dx,   (6.5)

where x is the independent variable, φ(x) is the dependent variable, and

    φ_x = dφ/dx.   (6.6)
The variation in I due to an arbitrary small change in φ is

    δI = ∫_a^b δF dx = ∫_a^b [ (∂F/∂φ) δφ + (∂F/∂φ_x) δφ_x ] dx,   (6.7)

where, with an interchange of the order of d and δ,

    δφ_x = δ(dφ/dx) = d/dx (δφ).   (6.8)
With this equation, the second term in Eq. 6.7 can be integrated by parts to obtain

    ∫_a^b (∂F/∂φ_x) δφ_x dx = ∫_a^b (∂F/∂φ_x) d/dx(δφ) dx
                            = (∂F/∂φ_x) δφ |_a^b − ∫_a^b δφ d/dx(∂F/∂φ_x) dx.   (6.9)
Thus,

    δI = ∫_a^b [ ∂F/∂φ − d/dx(∂F/∂φ_x) ] δφ dx + [ (∂F/∂φ_x) δφ ]_a^b.   (6.10)
Since δφ is arbitrary (within certain limits of admissibility), δI = 0 implies that both terms
in Eq. 6.10 must vanish. Moreover, for arbitrary δφ, a zero integral in Eq. 6.10 implies a
zero integrand:
    d/dx(∂F/∂φ_x) − ∂F/∂φ = 0,   (6.11)

which is referred to as the Euler-Lagrange equation.
The second term in Eq. 6.10 must also vanish for arbitrary δφ:

    [ (∂F/∂φ_x) δφ ]_a^b = 0.   (6.12)

If φ is specified at the boundaries x = a and x = b,

    δφ(a) = δφ(b) = 0,   (6.13)

and Eq. 6.12 is automatically satisfied. This type of boundary condition is called a geometric
or essential boundary condition. If φ is not specified at the boundaries x = a and x = b,
Eq. 6.12 implies

    ∂F(a)/∂φ_x = ∂F(b)/∂φ_x = 0.   (6.14)
This type of boundary condition is called a natural boundary condition.
Figure 52: Curve of Minimum Length Between Two Points.
6.1 Example 1: The Shortest Distance Between Two Points
We wish to find the equation of the curve y(x) along which the distance from the origin (0, 0)
to (1, 1) in the xy-plane is least. Consider the curve in Fig. 52. The differential arc length
along the curve is given by

    ds = √(dx² + dy²) = √(1 + (y′)²) dx.   (6.15)
We seek the curve y(x) which minimizes the total arc length

    I(y) = ∫_0^1 √(1 + (y′)²) dx,   (6.16)

where y(0) = 0 and y(1) = 1. For this problem,

    F(x, y, y′) = [1 + (y′)²]^{1/2},   (6.17)
which depends only on y′ explicitly. Thus, the Euler-Lagrange equation, Eq. 6.11, is

    d/dx(∂F/∂y′) = 0,   (6.18)

where

    ∂F/∂y′ = (1/2)[1 + (y′)²]^{−1/2} · 2y′ = y′ / [1 + (y′)²]^{1/2}.   (6.19)
Hence,

    y′ / [1 + (y′)²]^{1/2} = c,   (6.20)

where c is a constant of integration. If we solve this equation for y′, we obtain

    y′ = √( c² / (1 − c²) ) = a,   (6.21)

where a is another constant. Thus,

    y(x) = ax + b,   (6.22)

and, with the boundary conditions,

    y(x) = x,   (6.23)

which is the straight line joining (0, 0) and (1, 1), as expected.
Figure 53: The Brachistochrone Problem.
6.2 Example 2: The Brachistochrone
We wish to determine the path y(x) from the origin O to the point A along which a bead will
slide under the force of gravity and without friction in the shortest time (Fig. 53). Assume
that the bead starts from rest. The velocity v along the path y(x) is

    v = ds/dt = √(dx² + dy²) / dt = √(1 + (y′)²) dx/dt,   (6.24)
where s denotes distance along the path. Thus,
    dt = √(1 + (y′)²) dx / v.   (6.25)

As the bead slides down the path, potential energy is converted to kinetic energy. At
any location x, the kinetic energy equals the reduction in potential energy:

    (1/2) m v² = m g x   (6.26)

or

    v = √(2gx).   (6.27)

Thus, from Eq. 6.25,

    dt = √[ (1 + (y′)²) / (2gx) ] dx.   (6.28)
The total time for the bead to fall from O to A is then

    t = ∫_0^{x_A} √[ (1 + (y′)²) / (2gx) ] dx,   (6.29)

which is the integral to be minimized. From Eq. 6.11, the Euler-Lagrange equation is

    d/dx(∂F/∂y′) − ∂F/∂y = 0,   (6.30)

where

    F(x, y, y′) = √[ (1 + (y′)²) / (2gx) ].   (6.31)
Thus, the Euler-Lagrange equation becomes

    d/dx [ (1/2) ( (1 + (y′)²) / (2gx) )^{−1/2} · 2y′/(2gx) ] = 0   (6.32)

or

    d/dx [ y′ / {x [1 + (y′)²]}^{1/2} ] = 0.   (6.33)
To solve this equation, we integrate Eq. 6.33 to obtain

    y′ / {x [1 + (y′)²]}^{1/2} = c,   (6.34)

where c is a constant of integration. We then square both sides of this equation, and solve
for y′:

    (y′)² = c² x [1 + (y′)²]   (6.35)

or

    y′ = dy/dx = √[ c²x / (1 − c²x) ].   (6.36)
This equation can be integrated using the change of variable

    x = (1/c²) sin²(θ/2),  dx = (1/c²) sin(θ/2) cos(θ/2) dθ.   (6.37)
The integral implied by Eq. 6.36 then transforms to

    y = ∫ [ sin(θ/2)/cos(θ/2) ] · (1/c²) sin(θ/2) cos(θ/2) dθ = (1/c²) ∫ sin²(θ/2) dθ   (6.38)
      = (1/(2c²)) ∫ (1 − cos θ) dθ = (1/(2c²))(θ − sin θ) + y_0,   (6.39)

where y_0 is a constant of integration. Since the curve must pass through the origin, we must
have y_0 = 0. Also, from Eq. 6.37,

    x = (1/(2c²))(1 − cos θ).   (6.40)
If we then replace the constant a = 1/(2c²), we obtain the final result in parametric form

    x = a(1 − cos θ),  y = a(θ − sin θ),   (6.41)

which is a cycloid generated by the motion of a fixed point on the circumference of a circle
of radius a which rolls on the positive side of the line x = 0.
To solve Eq. 6.41 for a speciﬁc cycloid (deﬁned by the two end points), one can ﬁrst
eliminate the circle radius a from Eq. 6.41 to solve (iteratively) for θ_A (the value of θ at
Point A), after which either of the two equations in Eq. 6.41 can be used to determine the
constant a. The resulting cycloids for several end points are shown in Fig. 54.
This brachistochrone solution is valid for any end point which the bead can reach. Thus,
the end point must not be higher than the starting point. The end point may be at the same
elevation since, without friction, there is no loss of energy during the motion.
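The iterative step just described can be sketched as follows (the end point A is our own arbitrary choice). Eliminating a from Eq. 6.41 gives x_A/y_A = (1 − cos θ)/(θ − sin θ), which is solved for θ_A by bisection; either parametric equation then yields a:

```python
import math

x_A, y_A = 1.0, 2.0          # hypothetical end point A
target = x_A / y_A

def ratio(t):
    # (1 - cos t)/(t - sin t), the ratio obtained by eliminating a from Eq. 6.41
    return (1.0 - math.cos(t)) / (t - math.sin(t))

# ratio(t) decreases from infinity (t -> 0) to 0 (t -> 2*pi), so bisect:
lo, hi = 1e-6, 2.0 * math.pi - 1e-6
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if ratio(mid) > target:
        lo = mid
    else:
        hi = mid
theta_A = 0.5 * (lo + hi)
a = x_A / (1.0 - math.cos(theta_A))   # first equation of Eq. 6.41

# Both parametric equations must reproduce the end point:
assert abs(a * (1.0 - math.cos(theta_A)) - x_A) < 1e-8
assert abs(a * (theta_A - math.sin(theta_A)) - y_A) < 1e-6
```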
Figure 54: Several Brachistochrone Solutions.
6.3 Constraint Conditions
Suppose we want to extremize the functional

    I(φ) = ∫_a^b F(x, φ, φ_x) dx   (6.42)

subject to the additional constraint condition

    ∫_a^b G(x, φ, φ_x) dx = J,   (6.43)

where J is a specified constant. We recall that the Euler-Lagrange equation was found by
requiring that δI = 0. However, since J is a constant, δJ = 0. Thus,

    δ(I + λJ) = 0,   (6.44)

where λ is an unknown constant referred to as a Lagrange multiplier. Thus, to enforce the
constraint in Eq. 6.43, we can replace F in the Euler-Lagrange equation with

    H = F + λG.   (6.45)
6.4 Example 3: A Constrained Minimization Problem
We wish to find the function y(x) which minimizes the integral

    I(y) = ∫_0^1 (y′)² dx   (6.46)

subject to the end conditions y(0) = y(1) = 0 and the constraint

    ∫_0^1 y dx = 1.   (6.47)

That is, the area under the curve y(x) is unity (Fig. 55). For this problem,
Figure 55: A Constrained Minimization Problem.
    H(x, y, y′) = (y′)² + λy.   (6.48)

The Euler-Lagrange equation,

    d/dx(∂H/∂y′) − ∂H/∂y = 0,   (6.49)

yields

    d/dx(2y′) − λ = 0   (6.50)

or

    y″ = λ/2.   (6.51)

After integrating this equation twice, we obtain

    y = (λ/4)x² + Ax + B.   (6.52)

With the boundary conditions y(0) = y(1) = 0, we obtain B = 0 and A = −λ/4. Thus,

    y = −λx(1 − x)/4.   (6.53)
The area constraint, Eq. 6.47, is used to evaluate the Lagrange multiplier λ:

    1 = ∫_0^1 y dx = −∫_0^1 (λ/4)(x − x²) dx = −(λ/4)[ x²/2 − x³/3 ]_0^1 = −λ/24.   (6.54)

Thus, with λ = −24, we obtain the parabola

    y(x) = 6x(1 − x).   (6.55)
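This answer can also be recovered numerically. The sketch below (the discretization is our own construction, not from the notes) imposes the discrete stationarity condition 2y″ = λ from Eq. 6.51 together with a trapezoidal version of the area constraint, and compares the result with y = 6x(1 − x):

```python
import numpy as np

n = 200                      # interior points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Central-difference second derivative (zero boundary values built in):
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2
w = np.full(n, h)            # trapezoidal weights for the area constraint

# Unknowns: y_1..y_n and the Lagrange multiplier lam.
M = np.zeros((n + 1, n + 1))
M[:n, :n] = 2.0 * D2
M[:n, n] = -1.0              # rows enforce 2y'' - lam = 0 (Eq. 6.51)
M[n, :n] = w                 # last row enforces the area = 1 constraint
rhs = np.zeros(n + 1)
rhs[n] = 1.0

sol = np.linalg.solve(M, rhs)
y, lam = sol[:n], sol[n]
assert np.allclose(y, 6.0 * x * (1.0 - x), atol=1e-3)
assert abs(lam - (-24.0)) < 1e-2
```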
6.5 Functions of Several Independent Variables
Consider

    I(φ) = ∫_V F(x, y, z, φ, φ_x, φ_y, φ_z) dV,   (6.56)

a functional with three independent variables (x, y, z). Note that, in 3D, the integration is
a volume integral.
The variation in I due to an arbitrary small change in φ is

    δI = ∫_V [ (∂F/∂φ) δφ + (∂F/∂φ_x) δφ_x + (∂F/∂φ_y) δφ_y + (∂F/∂φ_z) δφ_z ] dV,   (6.57)
where, with an interchange of the order of ∂ and δ,

    δI = ∫_V [ (∂F/∂φ) δφ + (∂F/∂φ_x) ∂(δφ)/∂x + (∂F/∂φ_y) ∂(δφ)/∂y + (∂F/∂φ_z) ∂(δφ)/∂z ] dV.   (6.58)
To perform a three-dimensional integration by parts, we note that the second term in Eq. 6.58
can be expanded to yield

    (∂F/∂φ_x) ∂(δφ)/∂x = ∂/∂x [ (∂F/∂φ_x) δφ ] − ∂/∂x (∂F/∂φ_x) δφ.   (6.59)
If we then expand the third and fourth terms similarly, Eq. 6.58 becomes, after regrouping,

    δI = ∫_V { (∂F/∂φ) δφ − ∂/∂x(∂F/∂φ_x) δφ − ∂/∂y(∂F/∂φ_y) δφ − ∂/∂z(∂F/∂φ_z) δφ
             + ∂/∂x[ (∂F/∂φ_x) δφ ] + ∂/∂y[ (∂F/∂φ_y) δφ ] + ∂/∂z[ (∂F/∂φ_z) δφ ] } dV.   (6.60)
The last three terms in this integral can then be converted to a surface integral using the
divergence theorem, which states that, for a vector field f,

    ∫_V ∇·f dV = ∫_S f·n dS,   (6.61)
where S is the closed surface which encloses the volume V. Eq. 6.60 then becomes

    δI = ∫_V [ ∂F/∂φ − ∂/∂x(∂F/∂φ_x) − ∂/∂y(∂F/∂φ_y) − ∂/∂z(∂F/∂φ_z) ] δφ dV
       + ∫_S [ (∂F/∂φ_x) n_x + (∂F/∂φ_y) n_y + (∂F/∂φ_z) n_z ] δφ dS,   (6.62)

where n = (n_x, n_y, n_z) is the unit outward normal on the surface.
Since δI = 0, both integrals in this equation must vanish for arbitrary δφ. It therefore
follows that the integrand in the volume integral must also vanish to yield the Euler-Lagrange
equation for three independent variables:

    ∂/∂x(∂F/∂φ_x) + ∂/∂y(∂F/∂φ_y) + ∂/∂z(∂F/∂φ_z) − ∂F/∂φ = 0.   (6.63)
If φ is speciﬁed on the boundary S, δφ = 0 on S, and the boundary integral in Eq. 6.62
automatically vanishes. If φ is not speciﬁed on S, the integrand in the boundary integral
must vanish:
    (∂F/∂φ_x) n_x + (∂F/∂φ_y) n_y + (∂F/∂φ_z) n_z = 0 on S.   (6.64)
This is the natural boundary condition when φ is not speciﬁed on the boundary.
6.6 Example 4: Poisson’s Equation
Consider the functional

    I(φ) = ∫_V [ (1/2)(φ_x² + φ_y² + φ_z²) − Qφ ] dV,   (6.65)

in which case

    F(x, y, z, φ, φ_x, φ_y, φ_z) = (1/2)(φ_x² + φ_y² + φ_z²) − Qφ.   (6.66)
The Euler-Lagrange equation, Eq. 6.63, implies

    ∂/∂x(φ_x) + ∂/∂y(φ_y) + ∂/∂z(φ_z) − (−Q) = 0.   (6.67)

That is,

    φ_xx + φ_yy + φ_zz = −Q   (6.68)

or

    ∇²φ = −Q,   (6.69)

which is Poisson's equation. Thus, we have shown that solving Poisson's equation is equivalent
to minimizing the functional I in Eq. 6.65. In general, a key conclusion of this discussion
of variational calculus is that solving a differential equation is equivalent to minimizing some
functional involving an integral.
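A one-dimensional analogue makes this equivalence concrete. The sketch below (our own construction, with constant Q and homogeneous Dirichlet conditions) minimizes the discrete functional I(φ) = ∫ [ (1/2)(φ′)² − Qφ ] dx; setting ∂I/∂φ_i = 0 yields the standard tridiagonal system for φ″ = −Q, whose exact solution is φ = Qx(1 − x)/2:

```python
import numpy as np

Q = 3.0
n = 100                                  # interior points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Discrete functional: I ≈ Σ_edges (1/2)((φ_{i+1} − φ_i)/h)² h − Σ_i Q φ_i h.
# ∂I/∂φ_i = 0 gives A φ = b with A = (1/h) tridiag(−1, 2, −1) and b_i = Q h.
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
b = Q * h * np.ones(n)

phi = np.linalg.solve(A, b)
# Minimizing the functional reproduces the solution of φ'' = −Q:
assert np.allclose(phi, Q * x * (1.0 - x) / 2.0, atol=1e-10)
```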
6.7 Functions of Several Dependent Variables
Consider the functional

    I(φ_1, φ_2, φ_3) = ∫_a^b F(x, φ_1, φ_2, φ_3, φ_1′, φ_2′, φ_3′) dx,   (6.70)

where x is the independent variable, and φ_1(x), φ_2(x), and φ_3(x) are the dependent variables.
It can be shown that the generalization of the Euler-Lagrange equation, Eq. 6.11, for this
case is

    d/dx(∂F/∂φ_1′) − ∂F/∂φ_1 = 0   (6.71)
    d/dx(∂F/∂φ_2′) − ∂F/∂φ_2 = 0   (6.72)
    d/dx(∂F/∂φ_3′) − ∂F/∂φ_3 = 0.   (6.73)
7 Variational Approach to the Finite Element Method
In modern engineering analysis, one of the most important applications of the energy
theorems, such as the minimum potential energy theorem in elasticity, is the finite element
method, a numerical procedure for solving partial differential equations. For linear
equations, finite element solutions are often based on variational methods.
7.1 Index Notation and Summation Convention
Let a be the vector

    a = a_1 e_1 + a_2 e_2 + a_3 e_3,   (7.1)

where e_i is the unit vector in the ith Cartesian coordinate direction, and a_i is the ith
component of a (the projection of a on e_i). Consider a second vector

    b = b_1 e_1 + b_2 e_2 + b_3 e_3.   (7.2)

Then, the dot product (scalar product) is

    a · b = a_1 b_1 + a_2 b_2 + a_3 b_3 = Σ_{i=1}^3 a_i b_i.   (7.3)

Using the summation convention, we can omit the summation symbol and write

    a · b = a_i b_i,   (7.4)

where, if a subscript appears exactly twice, a summation is implied over the range. The
range is 1, 2, 3 in 3D and 1, 2 in 2D.
Let f(x_1, x_2, x_3) be a scalar function of the Cartesian coordinates x_1, x_2, x_3. We define
the shorthand comma notation for derivatives:

    ∂f/∂x_i = f_{,i}.   (7.5)

That is, the subscript ", i" denotes the partial derivative with respect to the ith Cartesian
coordinate direction.
Using the comma notation and the summation convention, a variety of familiar quantities
can be written in compact form. For example, the gradient can be written

    ∇f = (∂f/∂x_1) e_1 + (∂f/∂x_2) e_2 + (∂f/∂x_3) e_3 = f_{,i} e_i.   (7.6)
For a vector-valued function F(x_1, x_2, x_3), the divergence is written

    ∇·F = ∂F_1/∂x_1 + ∂F_2/∂x_2 + ∂F_3/∂x_3 = F_{i,i},   (7.7)

and the Laplacian of the scalar function f(x_1, x_2, x_3) is

    ∇²f = ∇·∇f = ∂²f/∂x_1² + ∂²f/∂x_2² + ∂²f/∂x_3² = f_{,ii}.   (7.8)
The divergence theorem states that, for any vector field F(x_1, x_2, x_3) defined in a volume
V bounded by a closed surface S,

    ∫_V ∇·F dV = ∫_S F·n dS   (7.9)

or, in index notation,

    ∫_V F_{i,i} dV = ∫_S F_i n_i dS.   (7.10)

The normal derivative can be written in index notation as

    ∂φ/∂n = ∇φ · n = φ_{,i} n_i.   (7.11)
The dot product of two gradients can be written

    ∇φ · ∇φ = (φ_{,i} e_i) · (φ_{,j} e_j) = φ_{,i} φ_{,j} e_i · e_j = φ_{,i} φ_{,j} δ_ij = φ_{,i} φ_{,i},   (7.12)

where δ_ij is the Kronecker delta defined as

    δ_ij = { 1, i = j,
             0, i ≠ j.    (7.13)
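The summation convention maps directly onto `numpy.einsum`, whose subscript strings mirror the index notation. A small sketch (the vectors are our own examples):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# a · b = a_i b_i (Eq. 7.4): the repeated index i is summed.
assert np.isclose(np.einsum('i,i->', a, b), a @ b)

# φ_{,i} φ_{,j} δ_ij = φ_{,i} φ_{,i} (Eq. 7.12): contracting with the
# Kronecker delta just renames an index.
grad = np.array([0.5, -1.0, 2.0])        # stand-in for ∇φ at a point
delta = np.eye(3)
lhs = np.einsum('i,j,ij->', grad, grad, delta)
rhs = np.einsum('i,i->', grad, grad)
assert np.isclose(lhs, rhs)
```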
7.2 Deriving Variational Principles
For each partial diﬀerential equation of interest, one needs a functional which a solution
makes stationary. Given the functional, it is generally easy to see what partial diﬀerential
equation corresponds to it. The harder problem is to start with the equation and derive the
variational principle (i.e., derive the functional which is to be minimized). To simplify this
discussion, we will consider only scalar, rather than vector, problems. A scalar problem has
one dependent variable and one partial diﬀerential equation, whereas a vector problem has
several dependent variables coupled to each other through a system of partial diﬀerential
equations.
Consider Poisson's equation subject to both Dirichlet and Neumann boundary conditions,

    ∇²φ + f = 0 in V,
    φ = φ_0 on S_1,
    ∂φ/∂n + g = 0 on S_2.    (7.14)

This problem arises, for example, in (1) steady-state heat conduction, where the temperature
and heat flux are specified on different parts of the boundary, (2) potential fluid flow, where
the velocity potential and velocity are specified on different parts of the boundary, and (3)
torsion in elasticity. Recall that, in index notation, ∇²φ = φ_{,ii}.
We wish to find a functional, I(φ) say, whose first variation δI vanishes for φ satisfying
the above boundary value problem. From Eq. 7.14a,

    0 = ∫_V (φ_{,ii} + f) δφ dV   (7.15)
      = ∫_V [ (φ_{,i} δφ)_{,i} − φ_{,i} δφ_{,i} ] dV + ∫_V δ(fφ) dV   (7.16)
      = ∫_S φ_{,i} n_i δφ dS − ∫_V (1/2) δ(φ_{,i} φ_{,i}) dV + δ ∫_V fφ dV   (7.17)
      = ∫_S (∇φ · n) δφ dS − δ ∫_V (1/2) φ_{,i} φ_{,i} dV + δ ∫_V fφ dV,   (7.18)
where ∇φ · n = ∂φ/∂n = −g on S_2, and, since φ is specified on S_1, δφ = 0 on S_1. Then,

    0 = −∫_{S_2} g δφ dS − δ ∫_V (1/2) φ_{,i} φ_{,i} dV + δ ∫_V fφ dV   (7.19)
      = −δ ∫_{S_2} gφ dS − δ ∫_V (1/2) φ_{,i} φ_{,i} dV + δ ∫_V fφ dV   (7.20)
      = −δ [ ∫_V ( (1/2) φ_{,i} φ_{,i} − fφ ) dV + ∫_{S_2} gφ dS ] = −δI(φ).   (7.21)
Thus, the functional for this boundary value problem is

    I(φ) = ∫_V [ (1/2) φ_{,i} φ_{,i} − fφ ] dV + ∫_{S_2} gφ dS   (7.22)

or, in vector notation,

    I(φ) = ∫_V [ (1/2) ∇φ · ∇φ − fφ ] dV + ∫_{S_2} gφ dS   (7.23)

or, in expanded form,

    I(φ) = ∫_V { (1/2)[ (∂φ/∂x)² + (∂φ/∂y)² + (∂φ/∂z)² ] − fφ } dV + ∫_{S_2} gφ dS.   (7.24)
If we were given the functional, Eq. 7.22, we could determine which partial differential
equation corresponds to it by computing and setting to zero the first variation of the
functional. From Eq. 7.22,

    δI = ∫_V [ (1/2) δφ_{,i} φ_{,i} + (1/2) φ_{,i} δφ_{,i} − f δφ ] dV + ∫_{S_2} g δφ dS   (7.25)
       = ∫_V ( φ_{,i} δφ_{,i} − f δφ ) dV + ∫_{S_2} g δφ dS   (7.26)
       = ∫_V [ (φ_{,i} δφ)_{,i} − φ_{,ii} δφ ] dV − ∫_V f δφ dV + ∫_{S_2} g δφ dS   (7.27)
       = ∫_S φ_{,i} n_i δφ dS − ∫_V (φ_{,ii} + f) δφ dV + ∫_{S_2} g δφ dS,   (7.28)
where δφ = 0 on S_1, and φ_{,i} n_i = ∇φ · n = ∂φ/∂n. Hence,

    δI = ∫_{S_2} ( ∂φ/∂n + g ) δφ dS − ∫_V (φ_{,ii} + f) δφ dV.   (7.29)

Stationary I (δI = 0) for arbitrary admissible δφ thus yields the original partial differential
equation and Neumann boundary condition, Eq. 7.14.
Figure 56: Two-Dimensional Finite Element Mesh.

Figure 57: Triangular Finite Element.
7.3 Shape Functions
Consider a two-dimensional field problem for which we seek the function φ(x, y) satisfying
some (unspecified) partial differential equation in a domain. Let us subdivide the domain
into triangular finite elements, as shown in Fig. 56. A typical element, shown in Fig. 57,
has three degrees of freedom (DOF): φ_1, φ_2, and φ_3. The three DOF are the values of the
fundamental unknown φ at the three grid points. (The number of DOF for an element
represents the number of pieces of data needed to determine uniquely the solution for the
element.)

For numerical solution, we approximate φ(x, y) using a polynomial in two variables having
the same number of terms as the number of DOF. Thus, we assume, for each element,

    φ(x, y) = a_1 + a_2 x + a_3 y,   (7.30)

where a_1, a_2, and a_3 are unknown coefficients. Eq. 7.30 is a complete linear polynomial in
two variables. We can evaluate this equation at the three grid points to obtain
    { φ_1 }   [ 1  x_1  y_1 ] { a_1 }
    { φ_2 } = [ 1  x_2  y_2 ] { a_2 } .   (7.31)
    { φ_3 }   [ 1  x_3  y_3 ] { a_3 }
This matrix equation can be inverted to yield

    { a_1 }    1  [ x_2 y_3 − x_3 y_2   x_3 y_1 − x_1 y_3   x_1 y_2 − x_2 y_1 ] { φ_1 }
    { a_2 } = ─── [    y_2 − y_3           y_3 − y_1           y_1 − y_2      ] { φ_2 } ,   (7.32)
    { a_3 }    2A [    x_3 − x_2           x_1 − x_3           x_2 − x_1      ] { φ_3 }
where

           | 1  x_1  y_1 |
    2A  =  | 1  x_2  y_2 |  =  x_2 y_3 − x_3 y_2 − x_1 y_3 + x_3 y_1 + x_1 y_2 − x_2 y_1.   (7.33)
           | 1  x_3  y_3 |
Note that the area of the triangle is A. The determinant 2A is positive or negative
depending on whether the grid point ordering in the triangle is counterclockwise or clockwise,
respectively. Since Eq. 7.31 is invertible, we can interpret the element DOF as either the grid
point values φ_i, which have physical meaning, or the nonphysical polynomial coefficients a_i.
From Eq. 7.30, the scalar unknown φ can then be written in the matrix form

    φ(x, y) = [ 1  x  y ] { a_1 }
                          { a_2 }
                          { a_3 }    (7.34)

               1              [ x_2 y_3 − x_3 y_2   x_3 y_1 − x_1 y_3   x_1 y_2 − x_2 y_1 ] { φ_1 }
            = ─── [ 1  x  y ] [    y_2 − y_3           y_3 − y_1           y_1 − y_2      ] { φ_2 }    (7.35)
               2A             [    x_3 − x_2           x_1 − x_3           x_2 − x_1      ] { φ_3 }

            = [ N_1(x, y)  N_2(x, y)  N_3(x, y) ] { φ_1 }
                                                  { φ_2 }
                                                  { φ_3 }    (7.36)

or

    φ(x, y) = N_1(x, y)φ_1 + N_2(x, y)φ_2 + N_3(x, y)φ_3 = N_i φ_i,   (7.37)
where

    N_1(x, y) = [(x_2 y_3 − x_3 y_2) + (y_2 − y_3)x + (x_3 − x_2)y]/(2A)
    N_2(x, y) = [(x_3 y_1 − x_1 y_3) + (y_3 − y_1)x + (x_1 − x_3)y]/(2A)
    N_3(x, y) = [(x_1 y_2 − x_2 y_1) + (y_1 − y_2)x + (x_2 − x_1)y]/(2A).    (7.38)
Notice that all the subscripts in this equation form a cyclic permutation of the numbers
1, 2, 3. That is, knowing N_1, we can write down N_2 and N_3 by a cyclic permutation of the
subscripts. Alternatively, if we let i, j, and k denote the three grid points of the triangle,
the above equation can be written in the compact form

    N_i(x, y) = (1/(2A)) [(x_j y_k − x_k y_j) + (y_j − y_k)x + (x_k − x_j)y],   (7.39)

where (i, j, k) can be selected to be any cyclic permutation of (1, 2, 3) such as (1, 2, 3), (2, 3, 1),
or (3, 1, 2).
In general, the functions N_i are called shape functions or interpolation functions and are
used extensively in finite element theory. Eq. 7.37 implies that N_1 can be thought of as the
shape function associated with Point 1, since N_1 is the form (or "shape") that φ would take
if φ_1 = 1 and φ_2 = φ_3 = 0. In general, N_i is the shape function for Point i.
From Eq. 7.37, if φ_1 ≠ 0 and φ_2 = φ_3 = 0,

    φ(x, y) = N_1(x, y)φ_1.   (7.40)

In particular, at Point 1,

    φ_1 = φ(x_1, y_1) = N_1(x_1, y_1)φ_1   (7.41)

or

    N_1(x_1, y_1) = 1.   (7.42)
Figure 58: Axial Member (Pin-Jointed Truss Element).
Also, at Point 2,

    0 = φ_2 = φ(x_2, y_2) = N_1(x_2, y_2)φ_1   (7.43)

or

    N_1(x_2, y_2) = 0.   (7.44)

Similarly,

    N_1(x_3, y_3) = 0.   (7.45)

That is, a shape function takes the value unity at its own grid point and zero at the other
grid points in an element:

    N_i(x_j, y_j) = δ_ij.   (7.46)
Eq. 7.38 gives the shape functions for the linear triangle. An interesting observation is
that, if the shape functions are evaluated at the triangle centroid, N_1 = N_2 = N_3 = 1/3,
since each grid point has equal influence on the centroid. To prove this result, substitute the
coordinates of the centroid,

    x̄ = (x_1 + x_2 + x_3)/3,  ȳ = (y_1 + y_2 + y_3)/3,   (7.47)

into N_1 in Eq. 7.38, and use Eq. 7.33:

    N_1(x̄, ȳ) = [(x_2 y_3 − x_3 y_2) + (y_2 − y_3)x̄ + (x_3 − x_2)ȳ]/(2A) = 1/3.   (7.48)
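Both properties can be checked directly. The sketch below (the triangle coordinates are arbitrary) evaluates the shape functions of Eq. 7.38 and verifies the Kronecker-delta property of Eq. 7.46 and the centroid value of Eq. 7.48:

```python
import numpy as np

xy = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 3.0]])   # grid points 1, 2, 3
(x1, y1), (x2, y2), (x3, y3) = xy
twoA = x2*y3 - x3*y2 - x1*y3 + x3*y1 + x1*y2 - x2*y1  # Eq. 7.33

def N(x, y):
    # Shape functions of Eq. 7.38:
    return np.array([
        ((x2*y3 - x3*y2) + (y2 - y3)*x + (x3 - x2)*y) / twoA,
        ((x3*y1 - x1*y3) + (y3 - y1)*x + (x1 - x3)*y) / twoA,
        ((x1*y2 - x2*y1) + (y1 - y2)*x + (x2 - x1)*y) / twoA,
    ])

# Kronecker-delta property at the three grid points (Eq. 7.46):
for j in range(3):
    assert np.allclose(N(*xy[j]), np.eye(3)[j])

# At the centroid, all three shape functions equal 1/3 (Eq. 7.48):
assert np.allclose(N(*xy.mean(axis=0)), 1.0 / 3.0)
```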
The preceding discussion assumed an element having three grid points. In general, for
an element having r points,

    φ(x, y) = N_1(x, y)φ_1 + N_2(x, y)φ_2 + · · · + N_r(x, y)φ_r = N_i φ_i.   (7.49)

This is a convenient way to represent the finite element approximation within an element,
since, once the grid point values are known, φ can be evaluated anywhere in the element.
We note that, since the shape functions for the 3-point triangle are linear in x and y, the
gradients in the x or y directions are constant. Thus, from Eq. 7.37,

    ∂φ/∂x = (∂N_1/∂x)φ_1 + (∂N_2/∂x)φ_2 + (∂N_3/∂x)φ_3
          = [(y_2 − y_3)φ_1 + (y_3 − y_1)φ_2 + (y_1 − y_2)φ_3]/(2A)   (7.50)

    ∂φ/∂y = (∂N_1/∂y)φ_1 + (∂N_2/∂y)φ_2 + (∂N_3/∂y)φ_3
          = [(x_3 − x_2)φ_1 + (x_1 − x_3)φ_2 + (x_2 − x_1)φ_3]/(2A).   (7.51)
A constant gradient within any element implies that many small elements would have to be
used wherever φ is changing rapidly.

Example. The displacement u(x) of an axial structural member (a pin-jointed truss
member) (Fig. 58) is given by

    u(x) = N_1(x)u_1 + N_2(x)u_2,   (7.52)
where u_1 and u_2 are the axial displacements at the two end points. For a linear variation of
displacement along the length, it follows that N_i must be a linear function which is unity at
Point i and zero at the other end. Thus,

    N_1(x) = 1 − x/L,  N_2(x) = x/L   (7.53)

or

    u(x) = (1 − x/L) u_1 + (x/L) u_2.   (7.54)
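The 1D interpolation of Eq. 7.54 can be sketched in a few lines (the end displacements and length are arbitrary values of our own):

```python
L = 2.0
u1, u2 = 0.1, 0.5   # hypothetical end displacements

def u(x):
    N1 = 1.0 - x / L   # unity at x = 0, zero at x = L (Eq. 7.53)
    N2 = x / L         # zero at x = 0, unity at x = L
    return N1 * u1 + N2 * u2

# The interpolant matches the nodal values and is linear in between:
assert abs(u(0.0) - u1) < 1e-12
assert abs(u(L) - u2) < 1e-12
assert abs(u(L / 2) - 0.5 * (u1 + u2)) < 1e-12
```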
7.4 Variational Approach
Consider Poisson's equation in a two-dimensional domain subject to both Dirichlet and
Neumann boundary conditions. We consider a slight generalization of the problem of Eq. 7.14:

    ∂²φ/∂x² + ∂²φ/∂y² + f = 0 in A,
    φ = φ_0 on S_1,
    ∂φ/∂n + g + hφ = 0 on S_2,    (7.55)

where S_1 and S_2 are curves in 2D, and f, g, and h may depend on position. The difference
between this problem and that of Eq. 7.14 is that the h term has been added to the boundary
condition on S_2, where the gradient ∂φ/∂n is specified. The h term could arise in a variety
of physical situations, including heat transfer due to convection (where the heat flux is
proportional to temperature) and free surface flow problems (where the free surface and
radiation boundary conditions are both of this type). Assume that the domain has been
subdivided into a mesh of triangular finite elements similar to that shown in Fig. 56.
The functional which must be minimized for this boundary value problem is similar to
that of Eq. 7.24 except that an additional term must be added to account for the h term in
the boundary condition on S_2. It can be shown that the functional which must be minimized
for this problem is

    I(φ) = ∫_A { (1/2)[ (∂φ/∂x)² + (∂φ/∂y)² ] − fφ } dA + ∫_{S_2} [ gφ + (1/2)hφ² ] dS,   (7.56)

where A is the domain.
With a finite element discretization, we do not define a single smooth function φ over the
entire domain, but instead define φ over individual elements. Thus, since I is an integral, it
can be evaluated by summing over the elements:

    I = I_1 + I_2 + I_3 + · · · = Σ_{e=1}^E I_e,   (7.57)

where E is the number of elements. The variation of I is also computed by summing over
the elements:

    δI = Σ_{e=1}^E δI_e,   (7.58)
Figure 59: Neumann Boundary Condition at Internal Boundary.
which must vanish for I to be a minimum.

We thus can consider a typical element. For one element, in index notation,

    I_e = ∫_A [ (1/2) φ_{,k} φ_{,k} − fφ ] dA + ∫_{S_2} [ gφ + (1/2)hφ² ] dS.   (7.59)

The last two terms in this equation are, from Eq. 7.55c, integrals of the form

    ∫_{S_2} (∂φ/∂n) φ dS.
As can be seen from Fig. 59, for two elements which share a common edge, the unknown φ
is continuous along that edge, and the normal derivative ∂φ/∂n is of equal magnitude and
opposite sign, so that the individual contributions cancel each other out. Thus, the last two
terms in the functional make a nonzero contribution to the functional only if an element
abuts S_2 (i.e., if one edge of an element coincides with S_2).
The degrees of freedom which define the function φ over the entire domain are the nodal
(grid point) values φ_i, since, given all the φ_i, φ is known everywhere using the element shape
functions. Therefore, to minimize I, we differentiate with respect to each φ_i, and set the
result to zero:
    ∂I_e/∂φ_i = ∫_A [ (1/2) ∂(φ_{,k})/∂φ_i · φ_{,k} + (1/2) φ_{,k} · ∂(φ_{,k})/∂φ_i − f ∂φ/∂φ_i ] dA
              + ∫_{S_2} [ g ∂φ/∂φ_i + hφ ∂φ/∂φ_i ] dS   (7.60)
            = ∫_A [ φ_{,k} ∂(φ_{,k})/∂φ_i − f ∂φ/∂φ_i ] dA + ∫_{S_2} [ g ∂φ/∂φ_i + hφ ∂φ/∂φ_i ] dS,   (7.61)
where φ = N_j φ_j implies

    φ_{,k} = (N_j φ_j)_{,k} = N_{j,k} φ_j,   (7.62)

    ∂(φ_{,k})/∂φ_i = ∂(N_{j,k} φ_j)/∂φ_i = N_{j,k} ∂φ_j/∂φ_i = N_{j,k} δ_ij = N_{i,k},   (7.63)

and

    ∂φ/∂φ_i = ∂(N_j φ_j)/∂φ_i = N_j ∂φ_j/∂φ_i = N_j δ_ij = N_i.   (7.64)
We now substitute these last three equations into Eq. 7.61 to obtain

    ∂I_e/∂φ_i = ∫_A ( N_{j,k} φ_j N_{i,k} − f N_i ) dA + ∫_{S_2} ( g N_i + h N_j φ_j N_i ) dS   (7.65)
              = [ ∫_A N_{i,k} N_{j,k} dA ] φ_j − [ ∫_A f N_i dA − ∫_{S_2} g N_i dS ]
              + [ ∫_{S_2} h N_i N_j dS ] φ_j   (7.66)
              = K_ij φ_j − F_i + H_ij φ_j,   (7.67)
Figure 60: Two Adjacent Finite Elements.
where, for each element,

    K^e_ij = ∫_A N_{i,k} N_{j,k} dA   (7.68)

    F^e_i = ∫_A f N_i dA − ∫_{S_2} g N_i dS   (7.69)

    H^e_ij = ∫_{S_2} h N_i N_j dS.   (7.70)

The second term of the "load" F is present only if Point i is on S_2, and the matrix entries
in H apply only if Points i, j are both on the boundary. The two terms in F correspond to
the body force and Neumann boundary condition, respectively.
The superscript e in the preceding equations indicates that we have computed the
contribution for one element. We must combine the contributions for all elements. Consider
two adjacent elements, as illustrated in Fig. 60. In that figure, the circled numbers are the
element labels. From Eq. 7.67, for Element 8, for the special case h = 0,

    ∂I_8/∂φ_15 = K^8_{15,15} φ_15 + K^8_{15,17} φ_17 + K^8_{15,29} φ_29 − F^8_15,   (7.71)

    ∂I_8/∂φ_17 = K^8_{17,15} φ_15 + K^8_{17,17} φ_17 + K^8_{17,29} φ_29 − F^8_17,   (7.72)

    ∂I_8/∂φ_29 = K^8_{29,15} φ_15 + K^8_{29,17} φ_17 + K^8_{29,29} φ_29 − F^8_29.   (7.73)
Similarly, for Element 9,

    ∂I_9/∂φ_17 = K^9_{17,17} φ_17 + K^9_{17,29} φ_29 + K^9_{17,35} φ_35 − F^9_17,   (7.74)

    ∂I_9/∂φ_29 = K^9_{29,17} φ_17 + K^9_{29,29} φ_29 + K^9_{29,35} φ_35 − F^9_29,   (7.75)

    ∂I_9/∂φ_35 = K^9_{35,17} φ_17 + K^9_{35,29} φ_29 + K^9_{35,35} φ_35 − F^9_35.   (7.76)
To evaluate

    Σ_e ∂I_e/∂φ_i,

we sum on e for fixed i. For example, for i = 17,

    ∂I_8/∂φ_17 + ∂I_9/∂φ_17 = K^8_{17,15} φ_15 + (K^8_{17,17} + K^9_{17,17}) φ_17
                            + (K^8_{17,29} + K^9_{17,29}) φ_29 + K^9_{17,35} φ_35 − F^8_17 − F^9_17.   (7.77)
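This summation over elements is the assembly step, which can be sketched as follows (the element stiffness values below are made up for illustration; only the connectivity of Elements 8 and 9 comes from Fig. 60):

```python
import numpy as np

elements = {
    8: [15, 17, 29],   # grid points of Element 8
    9: [17, 29, 35],   # grid points of Element 9
}
# Hypothetical symmetric 3x3 element stiffness matrices K^e:
Ke = {
    8: np.array([[2.0, -1.0, -1.0], [-1.0, 3.0, -2.0], [-1.0, -2.0, 3.0]]),
    9: np.array([[4.0, -2.0, -2.0], [-2.0, 2.0, 0.0], [-2.0, 0.0, 2.0]]),
}

nodes = sorted({g for conn in elements.values() for g in conn})   # [15, 17, 29, 35]
index = {g: i for i, g in enumerate(nodes)}

# Assemble: overlapping entries add, exactly as in Eq. 7.77.
K = np.zeros((len(nodes), len(nodes)))
for e, conn in elements.items():
    for a, ga in enumerate(conn):
        for b, gb in enumerate(conn):
            K[index[ga], index[gb]] += Ke[e][a, b]

# The (17, 17) entry is K^8_{17,17} + K^9_{17,17}:
assert K[index[17], index[17]] == Ke[8][1, 1] + Ke[9][0, 0]
# Point 15 belongs only to Element 8, Point 35 only to Element 9:
assert K[index[15], index[35]] == 0.0
```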
This process is then repeated for all grid points and all elements. To minimize the functional,
the individual sums are set to zero, resulting in a set of linear algebraic equations of the form
Kφ = F (7.78)
or, for the more general case where h ≠ 0,
(K+H)φ = F, (7.79)
where φ is the vector of unknown nodal values of φ, and F is the vector of forcing functions
at the nodes. Because of the historical developments in structural mechanics, K is sometimes
referred to as the stiﬀness matrix. For each element, the contributions to K, F, and H are
K_{ij} = ∫_A N_{i,k} N_{j,k} dA,  (7.80)
F_i = ∫_A f N_i dA − ∫_{S_2} g N_i dS,  (7.81)
H_{ij} = ∫_{S_2} h N_i N_j dS.  (7.82)
The sum on k in the ﬁrst integral can be expanded to yield
K_{ij} = ∫_A ( ∂N_i/∂x ∂N_j/∂x + ∂N_i/∂y ∂N_j/∂y ) dA.  (7.83)
Note that, in three dimensions, Eq. 7.80 still applies, except that the sum extends over the
range 1–3, and the integration is over the element volume.
Note also in Eq. 7.81 that, if g = 0, there is no contribution to the right-hand side vector F. Thus, to implement the Neumann boundary condition ∂φ/∂n = 0 at a boundary, the boundary is left free. The zero gradient boundary condition is the natural boundary condition for this formulation, since a zero gradient results by default if the boundary is left free. The Dirichlet boundary condition φ = φ_0 must be explicitly imposed and is referred to as an essential boundary condition.
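In a program, the element-by-element combination described above is a scatter-add of each element matrix into the global matrix. A minimal sketch (Python with NumPy assumed; the identity element matrices here are placeholders for illustration only, not actual finite element matrices):

```python
import numpy as np

def assemble(n_nodes, elements, element_matrices):
    """Scatter-add 3x3 element matrices into the global matrix (the sum over e)."""
    K = np.zeros((n_nodes, n_nodes))
    for nodes, Ke in zip(elements, element_matrices):
        for a, i in enumerate(nodes):
            for b, j in enumerate(nodes):
                K[i, j] += Ke[a, b]   # shared nodes accumulate contributions
    return K

# Two adjacent triangles sharing an edge (like Fig. 60, but 0-based node numbers)
elements = [(0, 1, 2), (1, 2, 3)]
Ke = np.eye(3)                        # placeholder element matrix
K = assemble(4, elements, [Ke, Ke])
# Entries for the shared nodes 1 and 2 receive contributions from both elements.
```

The doubly nested loop is exactly the summation over elements of Eq. 7.77, applied to every grid point at once.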
7.5 Matrices for Linear Triangle
Consider the linear three-node triangle, for which, from Eq. 7.39, the shape functions are
given by
N_i(x, y) = (1/(2A)) [(x_j y_k − x_k y_j) + (y_j − y_k) x + (x_k − x_j) y],  (7.84)
where ijk can be any cyclic permutation of 123. To compute K from Eq. 7.83, we ﬁrst
compute the derivatives
∂N_i/∂x = (y_j − y_k)/(2A) = b_i/(2A),  (7.85)
∂N_i/∂y = (x_k − x_j)/(2A) = c_i/(2A),  (7.86)
where b_i and c_i are defined as
b_i = y_j − y_k,  c_i = x_k − x_j.  (7.87)
By a cyclic permutation of the indices, we obtain
∂N_j/∂x = (y_k − y_i)/(2A) = b_j/(2A),  (7.88)
∂N_j/∂y = (x_i − x_k)/(2A) = c_j/(2A).  (7.89)
For the linear triangle, these derivatives are all constant and hence can be removed from the
integral to yield
K_{ij} = (1/(4A^2)) (b_i b_j + c_i c_j) A,  (7.90)
where A is the area of the triangular element. Thus, the i, j contribution for one element
is
K_{ij} = (1/(4A)) (b_i b_j + c_i c_j),  (7.91)
where i and j each have the range 1–3, since there are three grid points in the element. However, b_i and c_i are computed from Eq. 7.87 using the shorthand notation that ijk is a cyclic permutation of 123; that is, ijk = 123, ijk = 231, or ijk = 312. Thus,
K_{11} = (b_1^2 + c_1^2)/(4A) = [(y_2 − y_3)^2 + (x_3 − x_2)^2]/(4A),  (7.92)
K_{22} = (b_2^2 + c_2^2)/(4A) = [(y_3 − y_1)^2 + (x_1 − x_3)^2]/(4A),  (7.93)
K_{33} = (b_3^2 + c_3^2)/(4A) = [(y_1 − y_2)^2 + (x_2 − x_1)^2]/(4A),  (7.94)
K_{12} = (b_1 b_2 + c_1 c_2)/(4A) = [(y_2 − y_3)(y_3 − y_1) + (x_3 − x_2)(x_1 − x_3)]/(4A),  (7.95)
K_{13} = (b_1 b_3 + c_1 c_3)/(4A) = [(y_2 − y_3)(y_1 − y_2) + (x_3 − x_2)(x_2 − x_1)]/(4A),  (7.96)
K_{23} = (b_2 b_3 + c_2 c_3)/(4A) = [(y_3 − y_1)(y_1 − y_2) + (x_1 − x_3)(x_2 − x_1)]/(4A).  (7.97)
Note that K depends only on diﬀerences in grid point coordinates. Thus, two elements
that are geometrically congruent and diﬀer only by a translation will have the same element
matrix.
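Eqs. 7.92–7.97 map directly into code. A sketch (Python with NumPy assumed) that builds the element matrix of Eq. 7.91 from nodal coordinates; since a constant φ produces zero gradient, each row of K must sum to zero, and a translated copy of the triangle must give the same matrix:

```python
import numpy as np

def triangle_stiffness(xy):
    """Element K for a linear triangle (Eqs. 7.87, 7.91); nodes counterclockwise."""
    x, y = xy[:, 0], xy[:, 1]
    # b_i = y_j - y_k, c_i = x_k - x_j, with ijk a cyclic permutation of 123
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])
    A = 0.5 * (b[0] * c[1] - b[1] * c[0])   # triangle area
    return (np.outer(b, b) + np.outer(c, c)) / (4.0 * A)

# Unit right triangle: (0,0), (1,0), (0,1)
K = triangle_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
# Same triangle translated by (5, 7): identical element matrix
K2 = triangle_stiffness(np.array([[5.0, 7.0], [6.0, 7.0], [5.0, 8.0]]))
```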
To compute the right-hand side contributions as given in Eq. 7.81, consider first the contribution of the source term f. From Eq. 7.81, we calculate F_1, which is typical, as
F_1 = ∫_A f N_1 dA = ∫_A (f/(2A)) [(x_2 y_3 − x_3 y_2) + (y_2 − y_3) x + (x_3 − x_2) y] dA.  (7.98)
Figure 61: Triangular Mesh at Boundary.
We now consider the special case f = f̂ (a constant), and note that
∫_A dA = A,  ∫_A x dA = x̄A,  ∫_A y dA = ȳA,  (7.99)
where (x̄, ȳ) is the centroid of the triangle given by Eq. 7.47. Thus,
F_1 = (f̂/(2A)) [(x_2 y_3 − x_3 y_2) + (y_2 − y_3) x̄ + (x_3 − x_2) ȳ] A.  (7.100)
From Eq. 7.48, the expression in brackets is given by 2A/3. Hence,
F_1 = (1/3) f̂A,  (7.101)
and similarly
F_1 = F_2 = F_3 = (1/3) f̂A.  (7.102)
That is, for a uniform f, 1/3 of the total element “load” is applied to each grid point. For
nonuniform f, the integral could be computed using natural (or area) coordinates for the
triangle [7].
It is also of interest to calculate, for the linear triangle, the right-hand side contributions for the second term of Eq. 7.81 for the uniform special case g = ĝ, where ĝ is a constant. For the second term of Eq. 7.81 to contribute, Point i must be on an element edge which lies on the boundary S_2. Since the integration involving g is on the boundary, the only shape functions needed are those which describe the interpolation of φ on the boundary. Thus,
since the triangular shape functions are linear, the boundary shape functions are
N_1 = 1 − x/L,  N_2 = x/L,  (7.103)
where the subscripts 1 and 2 refer to the two element edge grid points on the boundary S_2 (Fig. 61), x is a local coordinate along the element edge, and L is the length of the element edge. Thus, for Point i on the boundary,
F_i = −∫_{S_2} g N_i dS.  (7.104)
Since the boundary shape functions are given by Eq. 7.103,
F_1 = −ĝ ∫_0^L (1 − x/L) dx = −ĝL/2,  (7.105)
F_2 = −ĝ ∫_0^L (x/L) dx = −ĝL/2.  (7.106)
Thus, for the two points defining a triangle edge on the boundary S_2,
F_1 = F_2 = −ĝL/2.  (7.107)
That is, for a uniform g, 1/2 of the total element “load” is applied to each grid point.
To calculate H using Eq. 7.82, we first note that Points i, j must both be on the boundary for this matrix to contribute. Consider some triangular elements adjacent to a boundary, as shown in Fig. 61. Since the integration in Eq. 7.82 is on the boundary, the only shape functions needed are those which describe the interpolation of φ on the boundary. Thus, using the boundary shape functions of Eq. 7.103 in Eq. 7.82, for constant h = ĥ,
H_{11} = ĥ ∫_0^L (1 − x/L)^2 dx = ĥL/3,  (7.108)
H_{22} = ĥ ∫_0^L (x/L)^2 dx = ĥL/3,  (7.109)
H_{12} = H_{21} = ĥ ∫_0^L (1 − x/L)(x/L) dx = ĥL/6.  (7.110)
Hence, for an edge of a linear 3-node triangle,
H = (ĥL/6) [ 2 1 ; 1 2 ].  (7.111)
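The edge integrals in Eqs. 7.105–7.110 are easy to verify by numerical quadrature. A sketch (Python with NumPy assumed; L, ĝ, and ĥ take arbitrary test values):

```python
import numpy as np

L = 2.0                    # edge length (test value)
g_hat, h_hat = 0.7, 1.5    # constant g and h on the edge (test values)

x = np.linspace(0.0, L, 20001)
N1, N2 = 1.0 - x / L, x / L          # boundary shape functions, Eq. 7.103

def trapezoid(f, x):
    """Composite trapezoidal rule (exact for the piecewise-linear N_i)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

F1 = -g_hat * trapezoid(N1, x)       # Eq. 7.105: -g*L/2
H11 = h_hat * trapezoid(N1 * N1, x)  # Eq. 7.108: h*L/3
H12 = h_hat * trapezoid(N1 * N2, x)  # Eq. 7.110: h*L/6
```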
7.6 Interpretation of Functional
Now that the variational problem has been solved (i.e., the functional I has been minimized), we can evaluate I. We recall from Eq. 7.59 (with h = 0) that, for the two-dimensional Poisson’s equation,
I(φ) = ∫_A [ (1/2) φ_{,k} φ_{,k} − fφ ] dA + ∫_{S_2} gφ dS,  (7.112)
where
φ = N_i φ_i.  (7.113)
Since φ_i is independent of position, it follows that φ_{,k} = N_{i,k} φ_i and
I(φ) = ∫_A [ (1/2) N_{i,k} φ_i N_{j,k} φ_j − f N_i φ_i ] dA + ∫_{S_2} g N_i φ_i dS  (7.114)
= (1/2) φ_i [∫_A N_{i,k} N_{j,k} dA] φ_j − [∫_A f N_i dA − ∫_{S_2} g N_i dS] φ_i  (7.115)
= (1/2) φ_i K_{ij} φ_j − F_i φ_i  (index notation)  (7.116)
= (1/2) φ^T Kφ − φ^T F  (matrix notation)  (7.117)
= (1/2) φ · Kφ − φ · F  (vector notation),  (7.118)
where the last result has been written in index, matrix, and vector notations. The first term for I is a quadratic form.
In solid mechanics, I corresponds to the total potential energy. The first term is the strain energy, and the second term is the potential of the applied loads. Since strain energy is zero only for zero strain (corresponding to rigid body motion), it follows that the stiffness matrix K is positive semidefinite. For well-posed problems (which allow no rigid body motion), K is positive definite. (By definition, a matrix K is positive definite if φ^T Kφ > 0 for all φ ≠ 0.) Three consequences of positive definiteness are
1. All eigenvalues of K are positive.
2. The matrix is nonsingular.
3. Gaussian elimination can be performed without pivoting.
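These properties can be observed numerically for the single-element K of Eq. 7.91 (a sketch, Python with NumPy assumed): one eigenvalue is zero, corresponding to the constant-φ “rigid body” mode, and fixing one nodal value (an essential boundary condition) leaves a positive definite reduced matrix:

```python
import numpy as np

# Element K for the unit right triangle (b = (-1, 1, 0), c = (-1, 0, 1), A = 1/2)
b = np.array([-1.0, 1.0, 0.0])
c = np.array([-1.0, 0.0, 1.0])
A = 0.5
K = (np.outer(b, b) + np.outer(c, c)) / (4.0 * A)

eig = np.linalg.eigvalsh(K)        # ascending order; one zero, rest positive
null_residual = K @ np.ones(3)     # the constant mode phi = (1, 1, 1) is the null vector
eig_reduced = np.linalg.eigvalsh(K[1:, 1:])  # fix phi at node 0: delete row and column
```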
7.7 Stiﬀness in Elasticity in Terms of Shape Functions
In §4.10 (p. 45), the Principle of Virtual Work was used to obtain the element stiﬀness matrix
for an elastic ﬁnite element as (Eq. 4.67)
K = ∫_V C^T D C dV,  (7.119)
where D is the symmetric matrix of material constants relating stress and strain, and C is
the matrix converting grid point displacements to strain:
ε = Cu. (7.120)
For the constant strain triangle (CST) in 2D, for example, the fundamental unknowns u and v (the x and y components of displacement) can both be expressed in terms of the three shape functions defined in Eq. 7.38:
u = N_i u_i,  v = N_i v_i,  (7.121)
where the summation convention is used. In general, in 2D, the strains can be expressed as
ε = {ε_{xx}; ε_{yy}; γ_{xy}} = {u_{,x}; v_{,y}; u_{,y} + v_{,x}} = {N_{i,x} u_i; N_{i,y} v_i; N_{i,y} u_i + N_{i,x} v_i}
  = [ N_{1,x}   0       N_{2,x}   0       N_{3,x}   0
       0       N_{1,y}   0       N_{2,y}   0       N_{3,y}
      N_{1,y}  N_{1,x}  N_{2,y}  N_{2,x}  N_{3,y}  N_{3,x} ] {u_1; v_1; u_2; v_2; u_3; v_3}.  (7.122)
Figure 62: Discontinuous Function.
Thus, from Eq. 7.120, C in Eq. 7.119 is a matrix of shape function derivatives:
C = [ N_{1,x}   0       N_{2,x}   0       N_{3,x}   0
       0       N_{1,y}   0       N_{2,y}   0       N_{3,y}
      N_{1,y}  N_{1,x}  N_{2,y}  N_{2,x}  N_{3,y}  N_{3,x} ].  (7.123)
In general, C would have as many rows as there are independent components of strain (3 in
2D and 6 in 3D) and as many columns as there are DOF in the element.
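As a sketch of Eq. 7.119 for the CST (Python with NumPy assumed; the plane stress D matrix and unit thickness are assumptions for illustration, not from the text), C and D are constant over the element, so the volume integral collapses to K = t A CᵀDC:

```python
import numpy as np

def cst_stiffness(xy, E=1.0, nu=0.3, t=1.0):
    """CST stiffness K = t*A*C^T D C (Eq. 7.119); plane stress assumed."""
    x, y = xy[:, 0], xy[:, 1]
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])
    A = 0.5 * (b[0] * c[1] - b[1] * c[0])
    Nx, Ny = b / (2.0 * A), c / (2.0 * A)   # N_{i,x}, N_{i,y}
    C = np.zeros((3, 6))                    # Eq. 7.123
    C[0, 0::2] = Nx
    C[1, 1::2] = Ny
    C[2, 0::2] = Ny
    C[2, 1::2] = Nx
    D = E / (1.0 - nu**2) * np.array([[1.0, nu, 0.0],
                                      [nu, 1.0, 0.0],
                                      [0.0, 0.0, (1.0 - nu) / 2.0]])
    return t * A * C.T @ D @ C

K = cst_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
# 6x6, symmetric, positive semidefinite with three zero-energy rigid body modes
```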
7.8 Element Compatibility
In the variational approach to the ﬁnite element method, an integral was minimized. It was
also assumed that the integral evaluated over some domain was equal to the sum of the
integrals over the elements. We wish to investigate brieﬂy the conditions which must hold
for this assumption to be valid. That is, what condition is necessary for the integral over a
domain to be equal to the sum of the integrals over the elements?
Consider the onedimensional integral
I = ∫_a^b φ(x) dx.  (7.124)
For the integral I to be well-defined, simple jump discontinuities in φ are allowed, as illustrated in Fig. 62. Singularities in φ, on the other hand, will not be allowed, since some
singularities cannot be integrated. Thus, we conclude that, for any functional of interest in
ﬁnite element analysis, the integrand may be discontinuous, but we do not allow singularities
in the integrand.
Now consider the onedimensional integral
I = ∫_a^b (dφ(x)/dx) dx.  (7.125)
Since the integrand dφ(x)/dx may be discontinuous, φ(x) must be continuous, but with kinks
(slope discontinuities). In the integral
I = ∫_a^b (d^2φ(x)/dx^2) dx,  (7.126)
the integrand φ″ may have simple discontinuities, in which case φ′ is continuous with kinks, and φ is smooth (i.e., the slope is continuous).
Figure 63: Compatibility at an Element Boundary.
Thus, we conclude that the smoothness required of φ depends on the highest order derivative of φ appearing in the integrand. If φ′ is in the integrand, φ must be continuous. If φ″ is in the integrand, φ′ must be continuous. Therefore, in general, we conclude that, at element interfaces, the field variable φ and any of its partial derivatives up to one order less than the highest order derivative appearing in I(φ) must be continuous. This requirement is referred to as the compatibility requirement or the conforming requirement.
For example, consider the Poisson equation in 2D,
∂^2φ/∂x^2 + ∂^2φ/∂y^2 + f = 0,  (7.127)
for which the functional to be minimized is
I(φ) = ∫_A { (1/2) [ (∂φ/∂x)^2 + (∂φ/∂y)^2 ] − fφ } dA.  (7.128)
Since the highest derivative in I is ﬁrst order, we conclude that, for I to be welldeﬁned, φ
must be continuous in the ﬁnite element approximation.
For the 3-node triangular element already formulated, the shape function is linear, which implies that, in Fig. 63, given φ_1 and φ_2, φ varies linearly between φ_1 and φ_2 along the line 1–2. That is, φ is the same at the midpoint P for both elements; otherwise, I(φ) might be infinite, since there could be a gap or overlap in the model along the line 1–2.
Note also that the Poisson equation is a second order equation, but φ need only be continuous in the finite element approximation. That is, the first derivatives φ_{,x} and φ_{,y} may have simple discontinuities, and the second derivatives φ_{,xx} and φ_{,yy} that appear in the partial differential equation may not even exist at the element interfaces. Thus, one of the strengths of the variational approach is that I(φ) involves derivatives of lower order than in the original PDE.
In elasticity, the functional I(φ) has the physical interpretation of total potential energy, including strain energy. A nonconforming element would result in a discontinuous displacement at the element boundaries (i.e., a gap or overlap), which would correspond to infinite strain energy. However, note that having displacement continuous implies that the displacement gradients (which are proportional to the stresses) are discontinuous at the element boundaries. This property is one of the fundamental characteristics of an approximate numerical solution. If all quantities of interest (e.g., displacements and stresses) were continuous, the solution would be an exact solution rather than an approximate solution. Thus, the approximation inherent in a displacement-based finite element method is that the displacements are continuous, and the stresses (displacement gradients) are discontinuous at element boundaries. In fluid mechanics, a discontinuous φ (which is not allowed) corresponds to a singularity in the velocity field.
Figure 64: A Vector Analogy for Galerkin’s Method.
7.9 Method of Weighted Residuals (Galerkin’s Method)
Here we discuss an alternative to the use of a variational principle when the functional is
either unknown or does not exist (e.g., nonlinear equations).
We consider the Poisson equation
∇^2 φ + f = 0.  (7.129)
For an approximate solution φ̃,
∇^2 φ̃ + f = R ≠ 0,  (7.130)
where R is referred to as the residual or error. The best approximate solution will be one
which in some sense minimizes R at all points of the domain. We note that, if R = 0 in the
domain,
∫_V R W dV = 0,  (7.131)
where W is any function of the spatial coordinates. W is referred to as a weighting function.
With n DOF in the domain, n functions W can be chosen:
∫_V R W_i dV = 0,  i = 1, 2, . . . , n.  (7.132)
This approach is called the method of weighted residuals. Various choices of W_i are possible. When W_i = N_i (the shape functions), the process is called Galerkin’s method.
The motivation for using shape functions N_i as the weighting functions is that we want the
residual (the error) orthogonal to the shape functions. In the ﬁnite element approximation,
we are trying to approximate an inﬁnite DOF problem (the PDE) with a ﬁnite number of
DOF (the ﬁnite element model). Consider an analogous problem in vector analysis, where
we want to approximate a vector v in 3D with another vector in 2D. That is, we are
attempting to approximate v with a lesser number of DOF, as shown in Fig. 64. The “best”
2D approximation to the 3D vector v is the projection u in the plane. The error in this
approximation is
R = v −u, (7.133)
which is orthogonal to the xy-plane. That is, the error R is orthogonal to the basis vectors e_x and e_y (the vectors used to approximate v):
R · e_x = 0  and  R · e_y = 0.  (7.134)
In the finite element problem, the approximating functions are the shape functions N_i. The counterpart to the dot product is the integral
∫_V R N_i dV = 0.  (7.135)
That is, the residual R is orthogonal to its approximating functions, the shape functions.
The integral in Eq. 7.135 must hold over the entire domain V or any portion of the
domain, e.g., an element. Thus, for Poisson’s equation, for one element,
0 = ∫_V (∇^2 φ + f) N_i dV  (7.136)
= ∫_V (φ_{,kk} + f) N_i dV  (7.137)
= ∫_V [ (φ_{,k} N_i)_{,k} − φ_{,k} N_{i,k} ] dV + ∫_V f N_i dV.  (7.138)
The ﬁrst term is converted to a surface integral using the divergence theorem, Eq. 7.10, to
obtain
0 = ∫_S φ_{,k} N_i n_k dS − ∫_V φ_{,k} N_{i,k} dV + ∫_V f N_i dV,  (7.139)
where
φ_{,k} n_k = ∇φ · n = ∂φ/∂n  (7.140)
and
φ_{,k} = (N_j φ_j)_{,k} = N_{j,k} φ_j.  (7.141)
Hence, for each i,
0 = ∫_S (∂φ/∂n) N_i dS − ∫_V N_{j,k} φ_j N_{i,k} dV + ∫_V f N_i dV  (7.142)
= − [∫_V N_{i,k} N_{j,k} dV] φ_j + [∫_V f N_i dV + ∫_S (∂φ/∂n) N_i dS]  (7.143)
= −K_{ij} φ_j + F_i.  (7.144)
Thus, in matrix notation,
Kφ = F, (7.145)
where
K_{ij} = ∫_V N_{i,k} N_{j,k} dV  (7.146)
F_i = ∫_V f N_i dV + ∫_S (∂φ/∂n) N_i dS.  (7.147)
From Eq. 7.55, ∂φ/∂n is specified on S_2 and unknown a priori on S_1, where φ = φ_0 is specified. On S_1, ∂φ/∂n is the “reaction” to the specified φ. At points where φ is specified, the Dirichlet boundary conditions are handled like displacement boundary conditions in structural problems.
Galerkin’s method thus results in algebraic equations identical to those derived from
a variational principle. However, Galerkin’s method is more general, since sometimes a
variational principle may not exist for a given problem. When a principle does exist, the
two approaches yield the same results. When the variational principle does not exist or is
unknown, Galerkin’s method can still be used to derive a ﬁnite element model.
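As a concrete illustration (a 1D sketch, not part of the text's derivation; Python with NumPy assumed), apply Galerkin's method with linear hat functions to −φ″ = 1 on (0, 1) with φ(0) = φ(1) = 0, whose exact solution is φ = x(1 − x)/2. For this particular problem, the Galerkin solution is exact at the nodes:

```python
import numpy as np

n = 8                         # number of linear elements on (0, 1)
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Assemble K_ij = integral of N_i' N_j' and F_i = integral of f N_i (f = 1)
K = np.zeros((n + 1, n + 1))
F = np.zeros(n + 1)
for e in range(n):
    K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    F[e:e + 2] += 0.5 * h     # uniform load split equally between the two nodes

# Impose the essential (Dirichlet) conditions phi(0) = phi(1) = 0
phi = np.zeros(n + 1)
phi[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])

exact = x * (1.0 - x) / 2.0   # exact solution of -phi'' = 1
```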
8 Potential Fluid Flow With Finite Elements
In potential ﬂow, the ﬂuid is assumed to be inviscid and incompressible. Since there are no
shearing stresses in the ﬂuid, the ﬂuid slips tangentially along boundaries. This mathematical
model of ﬂuid behavior is useful for some situations.
Deﬁne a velocity potential φ such that velocity v = ∇φ. That is, in 3D,
v_x = ∂φ/∂x,  v_y = ∂φ/∂y,  v_z = ∂φ/∂z.  (8.1)
It can be shown that, within the domain occupied by the ﬂuid,
∇^2 φ = 0.  (8.2)
Various boundary conditions are of interest. At a ﬁxed boundary, where the normal
velocity vanishes,
v_n = v · n = ∇φ · n = ∂φ/∂n = 0,  (8.3)
where n is the unit outward normal on the boundary. On a boundary where the velocity is
speciﬁed,
∂φ/∂n = v̂_n.  (8.4)
We will see later in the discussion of symmetry that, at a plane of symmetry for the potential
φ,
∂φ/∂n = 0,  (8.5)
where n is the unit normal to the plane. At a plane of antisymmetry, φ = 0.
The boundary value problem for ﬂow around a solid body is illustrated in Fig. 65. In
this example, far away from the body, where the velocity is known,
v_x = ∂φ/∂x = v_∞,  (8.6)
which is speciﬁed.
As is, the problem posed in Fig. 65 is not a wellposed problem, because only conditions
on the derivative ∂φ/∂n are speciﬁed. Thus, for any solution φ, φ + c is also a solution for
any constant c. Therefore, for uniqueness, we must specify φ somewhere in the domain.
Thus, the potential ﬂow boundary value problem is that the velocity potential φ satisﬁes
∇^2 φ = 0  in V,
φ = φ̂  on S_1,
∂φ/∂n = v̂_n  on S_2,  (8.7)
where v = ∇φ in V .
85
n
¸
n
¡
n
·n
`
n
∇
2
φ = 0
∂φ
∂n
= 0
´
∂φ
∂n
=
∂φ
∂x
= v
∞
∂φ
∂n
= −
∂φ
∂x
= −v
∞
∂φ
∂n
=
∂φ
∂y
= 0
∂φ
∂n
= −
∂φ
∂y
= 0
Figure 65: Potential Flow Around Solid Body.
8.1 Finite Element Model
A ﬁnite element model of the potential ﬂow problem results in the equation
Kφ = F, (8.8)
where the contributions to K and F for each element are
K_{ij} = ∫_A N_{i,k} N_{j,k} dA = ∫_A ( ∂N_i/∂x ∂N_j/∂x + ∂N_i/∂y ∂N_j/∂y ) dA,  (8.9)
F_i = ∫_{S_2} v̂_n N_i dS,  (8.10)
where v̂_n is the specified velocity on S_2 in the outward normal direction, and N_i is the shape function for the ith grid point in the element.
Once the velocity potential φ is known, the pressure can be found using the steady-state Bernoulli equation
(1/2) v^2 + gy + p/ρ = c = constant,  (8.11)
where v is the velocity magnitude given by
v^2 = (∂φ/∂x)^2 + (∂φ/∂y)^2,  (8.12)
gy is the (frequently ignored) body force potential, g is the acceleration due to gravity, y is
the height above some reference plane, p is pressure, and ρ is the ﬂuid density. The constant
c is evaluated using a location where v is known (e.g., v
∞
). For example, at inﬁnity, if we
ignore gy and pick p = 0 (ambient),
c =
1
2
v
2
∞
, (8.13)
Figure 66: Streamlines Around Circular Cylinder.
Figure 67: Symmetry With Respect to y = 0.
and
p/ρ + (1/2) v^2 = (1/2) v_∞^2.  (8.14)
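Eq. 8.14 is the post-processing step once the velocity magnitude is known from φ. A sketch (Python; the density value is illustrative):

```python
def pressure(v, v_inf, rho=1000.0):
    """Pressure from Eq. 8.14: p/rho = (v_inf^2 - v^2)/2 (gy ignored, p = 0 at infinity)."""
    return 0.5 * rho * (v_inf**2 - v**2)

p_stagnation = pressure(0.0, v_inf=2.0)   # v = 0: full dynamic head
p_far = pressure(2.0, v_inf=2.0)          # v = v_inf: back to ambient
p_fast = pressure(3.0, v_inf=2.0)         # v > v_inf: below ambient
```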
8.2 Application of Symmetry
Consider the 2D potential flow around a circular cylinder, as shown in Fig. 66. The velocity field is symmetric with respect to y = 0. For example, in Fig. 67, the velocity vectors at P and its image P′ are mirror images of each other. As P and P′ get close to the axis y = 0, the velocities at P and P′ must converge to each other, since P and P′ are the same point in the plane y = 0. Thus,
v_y = ∂φ/∂y = 0  for y = 0.  (8.15)
The ydirection in this case is the normal to the symmetry plane y = 0. Thus, in general,
we conclude that, for points in a symmetry plane with normal n,
∂φ/∂n = 0.  (8.16)
Note from Fig. 65 that the specified normal velocities at the two x extremes are of opposite signs. Thus, from Eq. 8.10, the right-hand side “loads” in Eq. 8.8 are equal in magnitude and opposite in sign for the left and right boundaries, and the velocity field is antisymmetric with respect to the plane x = 0, as shown in Fig. 68. That is, the velocity vectors at P and P′ can be transformed into each other by a reflection and a negation of sign. Thus,
(v_x)_P = (v_x)_{P′}.  (8.17)
If we change the direction of flow (i.e., make it right to left), then
(φ_P)_{flow right} = (φ_{P′})_{flow left}.  (8.18)
Figure 68: Antisymmetry With Respect to x = 0.
[Figure annotations: ∇^2 φ = 0 in the domain; cylinder of radius a; antisymmetry plane: φ = 0; symmetry plane and cylinder surface: ∂φ/∂n = 0; far boundary: ∂φ/∂n = v_∞.]
Figure 69: Boundary Value Problem for Flow Around Circular Cylinder.
However, changing the direction of ﬂow also means that F in Eq. 8.8 becomes −F, since the
only nonzero contributions to F occur at the left and right boundaries. However, if the sign
of F changes, the sign of the solution also changes, i.e.,
(φ_P)_{flow right} = −(φ_P)_{flow left}.  (8.19)
Combining the last two equations yields
(φ_P)_{flow right} = −(φ_{P′})_{flow right}.  (8.20)
If we now let P and P′ converge to the plane x = 0 and become the same point, we obtain
(φ)_{x=0} = −(φ)_{x=0}  (8.21)
or (φ)_{x=0} = 0. Thus, in general, we conclude that, for points in a symmetry plane with normal n for which the solution is antisymmetric, φ = 0. The key to recognizing antisymmetry is to have a symmetric geometry with an antisymmetric “loading” (RHS).
For example, for 2D potential flow around a circular cylinder, Fig. 66, the boundary value problem is shown in Fig. 69. This problem has two planes of geometric symmetry, y = 0 and x = 0, and can be solved using a one-quarter model. Since φ is specified on x = 0, this problem is well-posed. The boundary condition ∂φ/∂n = 0 is the natural boundary condition (the condition that results if the boundary is left free).
Figure 70: The Free Surface Problem.
8.3 Free Surface Flows
Consider an inviscid, incompressible ﬂuid in an irrotational ﬂow ﬁeld with a free surface, as
shown in Fig. 70. The equations of motion and continuity reduce to
∇^2 φ = 0,  (8.22)
where φ is the velocity potential, and the velocity is
v = ∇φ. (8.23)
The pressure p can be determined from the time-dependent Bernoulli equation
−p/ρ = ∂φ/∂t + (1/2) v^2 + gy,  (8.24)
where ρ is the fluid density, g is the acceleration due to gravity, and y is the vertical coordinate.
If we let η denote the deﬂection of the free surface, the vertical velocity on the free surface
is
∂φ/∂y = ∂η/∂t  on y = 0.  (8.25)
If we assume small wave height (i.e., η is small compared to the depth d), the velocity v on
the free surface is also small, and we can ignore the velocity term in Eq. 8.24. If we also take
the pressure p = 0 on the free surface, Bernoulli’s equation implies
∂φ/∂t + gη = 0  on y = 0.  (8.26)
This equation can be viewed as an equation for the surface elevation η given φ. We can then
eliminate η from the last two equations by diﬀerentiating Eq. 8.26:
0 = ∂^2φ/∂t^2 + g ∂η/∂t = ∂^2φ/∂t^2 + g ∂φ/∂y.  (8.27)
Hence, on the free surface y = 0,
∂φ/∂y = −(1/g) ∂^2φ/∂t^2.  (8.28)
This equation is referred to as the linearized free surface boundary condition.
Figure 71: The Complex Amplitude.
8.4 Use of Complex Numbers and Phasors in Wave Problems
The wave maker problem considered in the next section will involve a forcing function which is sinusoidal in time (i.e., time-harmonic). It is common in engineering analysis to represent time-harmonic signals using complex numbers, since amplitude and phase information can be included in a single complex number. Such an approach is used with A.C. circuits, steady-state acoustics, and mechanical vibrations.
Consider a sine wave φ(t) of amplitude Â, circular frequency ω, and phase θ:
φ(t) = Â cos(ωt + θ),  (8.29)
where all quantities in this equation are real, and Â can be taken as positive. Using complex notation,
φ(t) = Re[Â e^{i(ωt+θ)}] = Re[(Â e^{iθ}) e^{iωt}],  (8.30)
where i = √−1. If we define the complex amplitude
A = Â e^{iθ},  (8.31)
then
φ(t) = Re[A e^{iωt}],  (8.32)
where the magnitude of the complex amplitude is given by
|A| = |Â e^{iθ}| = |Â| |e^{iθ}| = Â,  (8.33)
which is the actual amplitude, and
arg(A) = θ, (8.34)
which is the actual phase angle. The complex amplitude A is thus a complex number which
embodies both the amplitude and the phase of the sinusoidal signal, as shown in Fig. 71.
The directed vector in the complex plane is called a phasor by electrical engineers.
It is common practice, when dealing with these sinusoidal functions, to drop the “Re”, and agree that it is only the real part which is of interest. Thus, we write
φ(t) = A e^{iωt}  (8.35)
with the understanding that it is the real part of this signal which is of interest. In this equation, A is the complex amplitude.
Two sinusoids of the same frequency add just like vectors in geometry. For example,
consider the sum
Â_1 cos(ωt + θ_1) + Â_2 cos(ωt + θ_2).
Figure 72: Phasor Addition.
Figure 73: 2D Wave Maker: Time Domain.
In terms of complex arithmetic,
A_1 e^{iωt} + A_2 e^{iωt} = (A_1 + A_2) e^{iωt},  (8.36)
where A_1 and A_2 are complex amplitudes given by
A_1 = Â_1 e^{iθ_1},  A_2 = Â_2 e^{iθ_2}.  (8.37)
This addition, referred to as phasor addition, is illustrated in Fig. 72.
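Phasor addition is just complex addition, which can be checked numerically (a Python sketch with illustrative amplitudes and phases):

```python
import cmath
import math

A1_hat, th1 = 1.0, 0.0
A2_hat, th2 = 1.0, math.pi / 2
A1 = A1_hat * cmath.exp(1j * th1)    # complex amplitudes, Eq. 8.37
A2 = A2_hat * cmath.exp(1j * th2)
A = A1 + A2                           # phasor addition, Eq. 8.36

omega = 3.0
t = 0.4
# The sum of the two real sinusoids equals Re[A e^{i*omega*t}]:
direct = A1_hat * math.cos(omega * t + th1) + A2_hat * math.cos(omega * t + th2)
phasor = (A * cmath.exp(1j * omega * t)).real

amplitude, phase = abs(A), cmath.phase(A)   # here sqrt(2) and pi/4
```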
8.5 2D Wave Maker
Consider a semi-infinite body of water with a wall oscillating in simple harmonic motion, as shown in Fig. 73. In the time domain, the problem is
∇^2 φ = 0,
∂φ/∂n = v_0 cos ωt  on x = 0,
∂φ/∂n = 0  on y = −d,
∂φ/∂n = −(1/g) φ̈  on y = 0,  (8.38)
where dots denote diﬀerentiation with respect to time, and an additional boundary condition
is needed for large x at the location where the model is terminated. For this problem, the
excitation frequency ω is speciﬁed. The solution of this problem is a function φ(x, y, t).
The forcing function is the oscillating wall. We ﬁrst write Eq. 8.38b in the form
∂φ/∂n = v_0 cos ωt = Re[v_0 e^{iωt}],  (8.39)
where i = √−1, and v_0 is real. We therefore look for solutions in the form
φ(x, y, t) = φ_0(x, y) e^{iωt},  (8.40)
where φ_0(x, y) is the complex amplitude. Eq. 8.38 then becomes
∇^2 φ_0 = 0,
∂φ_0/∂n = v_0  on x = 0,
∂φ_0/∂n = 0  on y = −d,
∂φ_0/∂n = (ω^2/g) φ_0  on y = 0.  (8.41)
It can be shown that, for large x,
∂φ_0/∂x = −iαφ_0,  (8.42)
where α is the positive solution of
ω^2/g = α tanh(αd),  (8.43)
and ω is the fixed excitation frequency. The graphical solution of this equation is shown in Fig. 74.
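In a program, the graphical construction of Fig. 74 is replaced by a one-dimensional root find. A bisection sketch (Python; the values of ω and d are illustrative):

```python
import math

def wavenumber(omega, d, g=9.81):
    """Positive root alpha of omega^2/g = alpha*tanh(alpha*d) (Eq. 8.43)."""
    target = omega**2 / g
    f = lambda a: a * math.tanh(a * d) - target   # increasing for a > 0
    lo, hi = 0.0, 1.0
    while f(hi) < 0.0:        # grow the bracket until it contains the root
        hi *= 2.0
    for _ in range(100):      # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

omega, d = 2.0, 10.0
alpha = wavenumber(omega, d)
```

Bisection is safe here because the left-hand side of Eq. 8.43 increases monotonically with α.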
Thus, for a ﬁnite element solution to the 2D wave maker problem, we truncate the
domain “suﬃciently far” from the wall, and impose a boundary condition, Eq. 8.42, referred
to as a radiation boundary condition which accounts approximately for the fact that the
Figure 74: Graphical Solution of ω^2/α = g tanh(αd).
Figure 75: 2D Wave Maker: Frequency Domain.
ﬂuid extends to inﬁnity. If the radiation boundary is located at x = W, the boundary value
problem in the frequency domain becomes
∇^2 φ_0 = 0,
∂φ_0/∂n = v_0  on x = 0 (oscillating wall),
∂φ_0/∂n = 0  on y = −d (rigid bottom),
∂φ_0/∂n = (ω^2/g) φ_0  on y = 0 (linearized free surface condition),
∂φ_0/∂x = −iαφ_0  on x = W (radiation condition),  (8.44)
as summarized in Fig. 75. Note the similarity between this radiation condition and the
nonreﬂecting boundary condition for the Helmholtz equation, Eq. 3.52.
8.6 Linear Triangle Matrices for 2D Wave Maker Problem
The boundary value problem deﬁned in Eq. 8.44 has two boundaries where ∂φ/∂n is speciﬁed
and two boundaries on which ∂φ/∂n is proportional to the unknown φ. Thus, this boundary
value problem is a special case of Eq. 7.55 (p. 73), so that the ﬁnite element formulas
derived in §7.5 are all applicable. Note that the function g appearing in Eq. 7.55 is not the
acceleration due to gravity appearing in the formulation of the free surface ﬂow problem.
The matrix system for the wave maker problem is therefore, from Eq. 7.79,
(K+H)φ = F, (8.45)
where, for each element, K, H, and F are given by Eqs. 7.80–7.82. Thus, from Eq. 7.91,
K_{ij} = (1/(4A)) (b_i b_j + c_i c_j),  (8.46)
where i and j each have the range 1–3, and
b_i = y_j − y_k,  c_i = x_k − x_j.  (8.47)
For b_i and c_i, the symbols ijk refer to the three nodes 123 in a cyclic permutation. For example, if j = 1, then k = 2 and i = 3. Thus,
K_{11} = (b_1^2 + c_1^2)/(4A),  (8.48)
K_{22} = (b_2^2 + c_2^2)/(4A),  (8.49)
K_{33} = (b_3^2 + c_3^2)/(4A),  (8.50)
K_{12} = K_{21} = (b_1 b_2 + c_1 c_2)/(4A),  (8.51)
K_{13} = K_{31} = (b_1 b_3 + c_1 c_3)/(4A),  (8.52)
K_{23} = K_{32} = (b_2 b_3 + c_2 c_3)/(4A).  (8.53)
H is calculated using Eq. 7.111, where ĥ = −ω^2/g on the free surface, and ĥ = iα on the radiation boundary. Thus, for triangular elements adjacent to the free surface,
H_{FS} = −(ω^2 L/(6g)) [ 2 1 ; 1 2 ]  (8.54)
for each free surface edge. For elements adjacent to the radiation boundary,
H_{RB} = (iαL/6) [ 2 1 ; 1 2 ]  (8.55)
for each radiation boundary edge. Note that, since H is purely imaginary on the radiation
boundary, the coeﬃcient matrix K+H in Eq. 8.45 is complex, and the solution φ is complex.
The solution of free surface problems thus requires either the use of complex arithmetic or
separating the matrix system into real and imaginary parts.
The right-hand side F is calculated using Eq. 7.107, where ĝ = −v_0 on the oscillating wall. Thus, for two points on an element edge on the oscillating wall,
F_1 = F_2 = v_0 L/2.  (8.56)
Figure 76: Single DOF Mass-Spring-Dashpot System.
The solution vector φ obtained from Eq. 8.45 is the complex amplitude of the velocity
potential. The time-dependent velocity potential is given by
Re[φ e^{iωt}] = Re[(φ_R + iφ_I)(cos ωt + i sin ωt)] = φ_R cos ωt − φ_I sin ωt,  (8.57)
where φ_R and φ_I are the real and imaginary parts of the complex amplitude. It is this
function which is displayed in computer animations of the timedependent response of the
velocity potential.
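Eq. 8.57 in code (a Python sketch; the complex amplitude below is a hypothetical nodal value, not from the text):

```python
import cmath
import math

def phi_of_t(phi0, omega, t):
    """Time-dependent potential Re[phi0 * e^{i*omega*t}], Eq. 8.57."""
    return (phi0 * cmath.exp(1j * omega * t)).real

phi0 = 3.0 - 4.0j    # hypothetical complex nodal amplitude
omega = 2.0
t = 0.7
# Equivalent real form: phi_R cos(wt) - phi_I sin(wt)
direct = phi0.real * math.cos(omega * t) - phi0.imag * math.sin(omega * t)
```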
8.7 Mechanical Analogy for the Free Surface Problem
Consider the single DOF spring-mass-dashpot system shown in Fig. 76. The application of Newton’s second law of motion (F = ma) to this system yields the differential equation of motion
m ü + c u̇ + ku = f(t),  (8.58)
where m is mass, c is the viscous dashpot constant, k is the spring stiffness, u is the displacement from the equilibrium, f is the applied force, and dots denote differentiation with respect to the time t. For a sinusoidal force,
f(t) = f_0 e^{iωt},  (8.59)
where ω is the excitation frequency, and f_0 is the complex amplitude of the force. The
displacement solution is also sinusoidal:
u(t) = u_0 e^{iωt},  (8.60)
where u_0 is the complex amplitude of the displacement response. If we substitute the last
two equations into the differential equation, we obtain
−ω^2 m u_0 e^{iωt} + iωc u_0 e^{iωt} + k u_0 e^{iωt} = f_0 e^{iωt}  (8.61)
or
(−ω^2 m + iωc + k) u_0 = f_0.  (8.62)
We make two observations from this last equation:
1. The inertia force is proportional to ω^2 and 180° out of phase with respect to the elastic force.
2. The viscous damping force is proportional to ω and leads the elastic force by 90°.
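Eq. 8.62 and the two observations can be verified directly (a Python sketch with illustrative values of m, c, k, and ω):

```python
import cmath
import math

m, c, k = 2.0, 0.5, 8.0     # illustrative mass, damping, stiffness
omega, f0 = 3.0, 1.0
# Eq. 8.62: (-w^2 m + i w c + k) u0 = f0
u0 = f0 / (-omega**2 * m + 1j * omega * c + k)

elastic = k * u0
inertia = -omega**2 * m * u0      # 180 deg out of phase with the elastic force
damping = 1j * omega * c * u0     # leads the elastic force by 90 deg
# The three force phasors sum back to the applied force f0.
```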
Thus, in the free surface problem, we could interpret the free surface matrix H_{FS} as an inertial effect (in a mechanical analogy) with a “surface mass matrix” M given by
M = (1/(−ω^2)) H_{FS} = (L/(6g)) [ 2 1 ; 1 2 ],  (8.63)
where the diagonal “masses” are positive. Similarly, we could interpret the radiation boundary matrix H_{RB} as a “damping” effect with the “boundary damping matrix” B given by
B = (1/(iω)) H_{RB} = (αL/(6ω)) [ 2 1 ; 1 2 ],  (8.64)
where the diagonal dampers are positive. This "damping" matrix is frequency-dependent.
The free surface problem is a degenerate equivalent to the mechanical problem, since the mass M occurs only on the free surface rather than at every point in the domain. In fact, the ideal fluid, for which ∇^2 φ = 0, behaves like a degenerate mechanical system, because the ideal fluid possesses the counterpart to the elastic forces but not the inertial forces. This degeneracy is a consequence of the incompressibility of the ideal fluid. A compressible fluid (such as occurs in acoustics) has the analogous mass effects everywhere.
1 Numerical Solution of Ordinary Differential Equations

An ordinary differential equation (ODE) is an equation that involves an unknown function (the dependent variable) and some of its derivatives with respect to a single independent variable. An nth-order equation has the highest order derivative of order n:

f(x, y, y', y'', ..., y^(n)) = 0 for a ≤ x ≤ b,   (1.1)

where y = y(x), and y^(n) denotes the nth derivative with respect to x. An nth-order ODE requires the specification of n conditions to assure uniqueness of the solution. If all conditions are imposed at x = a, the conditions are called initial conditions (I.C.), and the problem is an initial value problem (IVP). If the conditions are imposed at both x = a and x = b, the conditions are called boundary conditions (B.C.), and the problem is a boundary value problem (BVP). As we will see, IVPs and BVPs must be treated differently numerically.

Initial value problems generally arise in time-dependent situations. For example, consider the initial value problem

mü + ku = f(t), u(0) = 5, u̇(0) = 0,   (1.2)

where u = u(t), and dots denote differentiation with respect to the time t. This equation describes a one-degree-of-freedom mass-spring system which is released from rest and subjected to a time-dependent force, as illustrated in Fig. 1.

Figure 1: 1-DOF Mass-Spring System (mass m, spring k, displacement u(t), applied force f(t)).

Boundary value problems generally arise in static (time-independent) situations. An example of a boundary value problem is shown in Fig. 2, for which the differential equation is

EIu''(x) = M(x) = (FL/2)x − (F/2)x^2, u(0) = u(L) = 0,   (1.3)

where the independent variable x is the distance from the left end, u is the transverse displacement, and M(x) is the internal bending moment at x.

A system of n first-order ODEs has the form

y_1'(x) = f_1(x, y_1, y_2, ..., y_n)
y_2'(x) = f_2(x, y_1, y_2, ..., y_n)
...
y_n'(x) = f_n(x, y_1, y_2, ..., y_n)   (1.4)
Figure 2: Simply-Supported Beam With Distributed Load (uniform load F, in N/m, over a span of length L).
Figure 2: SimplySupported Beam With Distributed Load. for a ≤ x ≤ b. A single nthorder ODE is equivalent to a system of n ﬁrstorder ODEs. This equivalence can be seen by deﬁning a new set of unknowns y1 , y2 , . . . , yn such that y1 = y, y2 = y , y3 = y , . . . , yn = y (n−1) . For example, consider the thirdorder IVP y = xy + ex y + x2 + 1, x ≥ 0 y(0) = 1, y (0) = 0, y (0) = −1. To obtain an equivalent ﬁrstorder system, deﬁne y1 = y, y2 = y , y3 = y to obtain y1 = y2 y = y3 2 y3 = xy2 + ex y1 + x2 + 1 with initial conditions y1 (0) = 1, y2 (0) = 0, y3 (0) = −1. (1.5)
(1.6)
1.1 Euler's Method
This method is the simplest of the numerical methods for solving initial value problems. Consider the IVP

y'(x) = f(x, y), x ≥ a
y(a) = η.   (1.7)

To effect a numerical solution, we discretize the x-axis: a = x_0 < x_1 < x_2 < ···, where, for uniform spacing,

x_i − x_{i−1} = h,   (1.8)

and h is considered small. With this discretization, we can approximate the derivative y'(x) with the forward finite difference

y'(x) ≈ [y(x + h) − y(x)]/h.   (1.9)
If we let y_k represent the numerical approximation to y(x_k), then

y'(x_k) ≈ (y_{k+1} − y_k)/h.   (1.10)
Thus, a numerical (difference) approximation to the ODE, Eq. 1.7, is

(y_{k+1} − y_k)/h = f(x_k, y_k), k = 0, 1, 2, ...   (1.11)

or

y_{k+1} = y_k + h f(x_k, y_k), k = 0, 1, 2, ..., y_0 = η.   (1.12)

This recursive algorithm is called Euler's method.
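The recursion of Eq. 1.12 can be sketched in a few lines of Python; the test problem y' = y, y(0) = 1, whose exact solution is e^x, is illustrative and not from the text:

```python
import math

def euler(f, a, b, eta, n):
    """Integrate y' = f(x, y), y(a) = eta, from a to b with n uniform
    steps using Euler's method (Eq. 1.12); returns the value at x = b."""
    h = (b - a) / n
    x, y = a, eta
    for _ in range(n):
        y = y + h * f(x, y)   # y_{k+1} = y_k + h f(x_k, y_k)
        x = x + h
    return y

# Illustrative check: y' = y, y(0) = 1 gives y(1) = e.
approx = euler(lambda x, y: y, 0.0, 1.0, 1.0, 1000)
assert abs(approx - math.e) < 0.002
```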
1.2 Truncation Error for Euler's Method
There are two types of error that arise in numerical methods: truncation error (which arises primarily from a discretization process) and rounding error (which arises from the finiteness of number representations in the computer). Refining a mesh to reduce the truncation error often causes the rounding error to increase.

To estimate the truncation error for Euler's method, we first recall Taylor's theorem with remainder, which states that a function f(x) can be expanded in a series about the point x = c:

f(x) = f(c) + f'(c)(x − c) + [f''(c)/2!](x − c)^2 + ··· + [f^(n)(c)/n!](x − c)^n + [f^(n+1)(ξ)/(n+1)!](x − c)^{n+1},   (1.13)

where ξ is between x and c. The last term in Eq. 1.13 is referred to as the remainder term. Note also that Eq. 1.13 is an equality, not an approximation. In Eq. 1.13, let x = x_{k+1} and c = x_k, in which case

y(x_{k+1}) = y(x_k) + h y'(x_k) + (1/2) h^2 y''(ξ_k),   (1.14)

where x_k ≤ ξ_k ≤ x_{k+1}. Since y satisfies the ODE, Eq. 1.7,

y'(x_k) = f(x_k, y(x_k)),   (1.15)

where y(x_k) is the actual solution at x_k. Hence,

y(x_{k+1}) = y(x_k) + h f(x_k, y(x_k)) + (1/2) h^2 y''(ξ_k).   (1.16)

Like Eq. 1.13, this equation is an equality, not an approximation.

By comparing this last equation to Euler's approximation, Eq. 1.12, it is clear that Euler's method is obtained by omitting the remainder term (1/2) h^2 y''(ξ_k) in the Taylor expansion of y(x_{k+1}) at the point x_k. The omitted term accounts for the truncation error in Euler's method at each step. This error is a local error, since the error occurs at each step regardless of the error in the previous step. The accumulation of local errors is referred to as the global error, which is the more important error but much more difficult to compute. Most algorithms for solving ODEs are derived by expanding the solution function in a Taylor series and then omitting certain terms.
1.3 Runge-Kutta Methods
Euler's method is a first-order method, since it was obtained by omitting terms in the Taylor series expansion containing powers of h greater than one. To derive a second-order method, we again use Taylor's theorem with remainder to obtain

y(x_{k+1}) = y(x_k) + h y'(x_k) + (1/2) h^2 y''(x_k) + (1/6) h^3 y'''(ξ_k)   (1.17)

for some ξ_k such that x_k ≤ ξ_k ≤ x_{k+1}. Since, from the ODE (Eq. 1.7),

y'(x_k) = f(x_k, y(x_k)),   (1.18)

we can approximate

y''(x) = df(x, y(x))/dx = [f(x + h, y(x + h)) − f(x, y(x))]/h + O(h),   (1.19)

where we use the "big O" notation O(h) to represent terms of order h as h → 0. [For example, 2h^3 = O(h^3), 3h^2 + 5h^4 = O(h^2), h^2 O(h) = O(h^3), and −287 h^4 e^{−h} = O(h^4).] From these last two equations, Eq. 1.17 can then be written as

y(x_{k+1}) = y(x_k) + h f(x_k, y(x_k)) + (h/2)[f(x_{k+1}, y(x_{k+1})) − f(x_k, y(x_k))] + O(h^3),   (1.20)

which leads (after combining terms) to the difference equation

y_{k+1} = y_k + (1/2) h [f(x_k, y_k) + f(x_{k+1}, y_{k+1})].   (1.21)

This formula is a second-order approximation to the original differential equation y'(x) = f(x, y) (Eq. 1.7), but it is an inconvenient approximation, since y_{k+1} appears on both sides of the formula. (Such a formula is called an implicit method, since y_{k+1} is defined implicitly. An explicit method would have y_{k+1} appear only on the left-hand side.) To obtain instead an explicit formula, we use the approximation

y_{k+1} = y_k + h f(x_k, y_k)   (1.22)

on the right-hand side to obtain

y_{k+1} = y_k + (1/2) h [f(x_k, y_k) + f(x_{k+1}, y_k + h f(x_k, y_k))].   (1.23)

This formula is the Runge-Kutta formula of second order. Other higher-order formulas can be derived similarly. For example, a fourth-order formula turns out to be popular in applications.

We illustrate the implementation of the second-order Runge-Kutta formula, Eq. 1.23, with the following algorithm. We first make the following three definitions:

a_k = f(x_k, y_k),   (1.24)

b_k = y_k + h a_k,   (1.25)
c_k = f(x_{k+1}, b_k),   (1.26)

in which case

y_{k+1} = y_k + (1/2) h (a_k + c_k).   (1.27)

The calculations can then be performed conveniently with the following spreadsheet:

 k   x_k   y_k   a_k   b_k   c_k   y_{k+1}
 0
 1
 2
 ⋮

1.4 Systems of Equations

The methods just derived can be extended directly to systems of equations. Consider the initial value problem involving two equations:

y'(x) = f(x, y(x), z(x))
z'(x) = g(x, y(x), z(x))
y(a) = η, z(a) = ξ.   (1.28)

We recall from Eq. 1.12 that, for one equation, Euler's method uses the recursive formula

y_{k+1} = y_k + h f(x_k, y_k).   (1.29)

This formula is directly extendible to two equations as

y_{k+1} = y_k + h f(x_k, y_k, z_k)
z_{k+1} = z_k + h g(x_k, y_k, z_k)
y_0 = η, z_0 = ξ.   (1.30)

We recall from Eq. 1.23 that, for one equation, the second-order Runge-Kutta method uses the recursive formula

y_{k+1} = y_k + (1/2) h [f(x_k, y_k) + f(x_{k+1}, y_k + h f(x_k, y_k))].   (1.31)

For two equations, this formula becomes

y_{k+1} = y_k + (1/2) h [f(x_k, y_k, z_k) + f(x_{k+1}, y_k + h f(x_k, y_k, z_k), z_k + h g(x_k, y_k, z_k))]
z_{k+1} = z_k + (1/2) h [g(x_k, y_k, z_k) + g(x_{k+1}, y_k + h f(x_k, y_k, z_k), z_k + h g(x_k, y_k, z_k))]
y_0 = η, z_0 = ξ.   (1.32)
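The two-equation Runge-Kutta recursion can be sketched directly in Python. As an illustrative application, the mass-spring IVP of Eq. 1.2 is rewritten as a first-order system (y = u, z = u̇) with the assumed values m = k = 1 and f = 0, so the exact solution is u(t) = 5 cos t; these parameter values are choices for the example, not from the text:

```python
import math

def rk2_system(f, g, a, b, eta, xi, n):
    """Second-order Runge-Kutta (Eq. 1.32) for the pair
    y' = f(x, y, z), z' = g(x, y, z), y(a) = eta, z(a) = xi."""
    h = (b - a) / n
    x, y, z = a, eta, xi
    for _ in range(n):
        ay, az = f(x, y, z), g(x, y, z)            # slopes a_k at x_k
        by, bz = y + h * ay, z + h * az            # Euler predictor b_k
        y = y + 0.5 * h * (ay + f(x + h, by, bz))  # averaged slopes
        z = z + 0.5 * h * (az + g(x + h, by, bz))
        x = x + h
    return y, z

# u'' + u = 0, u(0) = 5, u'(0) = 0: set y = u, z = u', so y' = z, z' = -y.
u, v = rk2_system(lambda t, y, z: z, lambda t, y, z: -y,
                  0.0, 1.0, 5.0, 0.0, 1000)
assert abs(u - 5 * math.cos(1.0)) < 1e-4
```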
Figure 3: Finite Difference Approximations to Derivatives (the forward, backward, and central difference slopes at x).

1.5 Finite Differences

Before addressing boundary value problems, we want to develop further the notion of finite difference approximation of derivatives. Consider a function y(x) for which we want to compute the derivative y'(x) at some point x. If we discretize the x-axis with uniform spacing h, we could approximate the derivative using the forward difference formula

y'(x) ≈ [y(x + h) − y(x)]/h,   (1.33)

which is the slope of the line to the right of x (Fig. 3). We could also approximate the derivative using the backward difference formula

y'(x) ≈ [y(x) − y(x − h)]/h,   (1.34)

which is the slope of the line to the left of x. Since, in general, there is no basis for choosing one of these approximations over the other, an intuitively more appealing approximation results from the average of these formulas:

y'(x) ≈ (1/2) {[y(x + h) − y(x)]/h + [y(x) − y(x − h)]/h}   (1.35)

or

y'(x) ≈ [y(x + h) − y(x − h)]/(2h).   (1.36)

This formula, which is more accurate than either the forward or backward difference formulas, is the central finite difference approximation to the derivative.
Similar approximations can be derived for second derivatives. Using forward differences,

y''(x) ≈ [y'(x + h) − y'(x)]/h ≈ [y(x + 2h) − y(x + h)]/h^2 − [y(x + h) − y(x)]/h^2 = [y(x + 2h) − 2y(x + h) + y(x)]/h^2.   (1.37)

This formula, which involves three points forward of x, is the forward difference approximation to the second derivative. Similarly, using backward differences,

y''(x) ≈ [y'(x) − y'(x − h)]/h ≈ [y(x) − y(x − h)]/h^2 − [y(x − h) − y(x − 2h)]/h^2 = [y(x) − 2y(x − h) + y(x − 2h)]/h^2.   (1.38)

This formula, which involves three points backward of x, is the backward difference approximation to the second derivative. The central finite difference approximation to the second derivative uses instead the three points which bracket x:

y''(x) ≈ [y(x + h) − 2y(x) + y(x − h)]/h^2.   (1.39)

This last result can also be obtained by using forward differences for the second derivative followed by backward differences for the first derivatives, or vice versa.

The central difference formula for second derivatives can alternatively be derived using Taylor series expansions:

y(x + h) = y(x) + h y'(x) + (h^2/2) y''(x) + (h^3/6) y'''(x) + O(h^4).   (1.40)

Similarly, by replacing h by −h,

y(x − h) = y(x) − h y'(x) + (h^2/2) y''(x) − (h^3/6) y'''(x) + O(h^4).   (1.41)

The addition of these two equations yields

y(x + h) + y(x − h) = 2y(x) + h^2 y''(x) + O(h^4)   (1.42)

or

y''(x) = [y(x + h) − 2y(x) + y(x − h)]/h^2 + O(h^2),   (1.43)

which, because of the error term, shows that the formula is second-order accurate.
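The accuracies of these formulas can be verified numerically. The sketch below compares the forward, central, and central second-difference formulas on the illustrative test function sin x, whose derivatives are known exactly:

```python
import math

def forward_diff(y, x, h):
    """Forward difference (Eq. 1.33), first-order accurate."""
    return (y(x + h) - y(x)) / h

def central_diff(y, x, h):
    """Central difference (Eq. 1.36), second-order accurate."""
    return (y(x + h) - y(x - h)) / (2 * h)

def central_second(y, x, h):
    """Central second-difference (Eq. 1.39), second-order accurate."""
    return (y(x + h) - 2 * y(x) + y(x - h)) / h**2

# For y = sin x at x = 1: y' = cos(1), y'' = -sin(1).
x, h = 1.0, 1e-3
assert abs(forward_diff(math.sin, x, h) - math.cos(x)) < 1e-3   # O(h) error
assert abs(central_diff(math.sin, x, h) - math.cos(x)) < 1e-6   # O(h^2) error
assert abs(central_second(math.sin, x, h) + math.sin(x)) < 1e-6
```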
1.6 Boundary Value Problems

The techniques for initial value problems (IVPs) are, in general, not directly applicable to boundary value problems (BVPs). The methods used for IVPs started at one end (x = a) and computed the solution step by step for increasing x. For a BVP, not enough information is given at either endpoint to allow a step-by-step solution.

Consider the BVP

y''(x) = f(x, y, y'), a ≤ x ≤ b
y(a) = η_1, y(b) = η_2.   (1.44)

This equation could be nonlinear, depending on f. Consider first a special case of Eq. 1.44 for which the right-hand side depends only on x and y:

y''(x) = f(x, y), a ≤ x ≤ b
y(a) = η_1, y(b) = η_2.   (1.45)

Subdivide the interval (a, b) into n equal subintervals:

h = (b − a)/n,   (1.46)

in which case

x_k = a + kh, k = 0, 1, 2, ..., n.   (1.47)

Let y_k denote the numerical approximation to the exact solution at x_k. That is,

y_k ≈ y(x_k).   (1.48)

Then, if we use a central difference approximation to the second derivative in Eq. 1.45, the ODE can be approximated by

[y(x_{k−1}) − 2y(x_k) + y(x_{k+1})]/h^2 ≈ f(x_k, y(x_k)),   (1.49)

which suggests the difference equation

y_{k−1} − 2y_k + y_{k+1} = h^2 f(x_k, y_k), k = 1, 2, 3, ..., n − 1.   (1.50)

Since this system of equations has n − 1 equations in n + 1 unknowns, the two boundary conditions are needed to obtain a nonsingular system:

y_0 = η_1, y_n = η_2.   (1.51)

The resulting system is thus

−2y_1 + y_2 = −η_1 + h^2 f(x_1, y_1)
y_1 − 2y_2 + y_3 = h^2 f(x_2, y_2)
y_2 − 2y_3 + y_4 = h^2 f(x_3, y_3)
y_3 − 2y_4 + y_5 = h^2 f(x_4, y_4)
...
y_{n−2} − 2y_{n−1} = −η_2 + h^2 f(x_{n−1}, y_{n−1}),   (1.52)

which is a tridiagonal system of n − 1 equations in n − 1 unknowns. This system is linear or nonlinear, depending on f.
1.6.1 Example

Consider

    y''(x) = −y(x),  0 ≤ x ≤ π/2,  y(0) = 1,  y(π/2) = 0.   (1.53)

In Eq. 1.45, f(x, y) = −y, η₁ = 1, η₂ = 0. Thus, the right-hand side of the ith equation in Eq. 1.52 has −h²yᵢ, which can be moved to the left-hand side to yield the system

    −(2 − h²)y₁ + y₂           = −1
    y₁ − (2 − h²)y₂ + y₃       = 0
    y₂ − (2 − h²)y₃ + y₄       = 0
    ...
    y_{n−2} − (2 − h²)y_{n−1}  = 0.   (1.54)
We first solve this tridiagonal system of simultaneous equations with n = 8 (i.e., h = π/16), and compare with the exact solution y(x) = cos x:

    k   x_k         y_k         Exact y(x_k)   Absolute Error   % Error
    0   0           1           1              0                0
    1   0.1963495   0.9812186   0.9807853      0.0004334        0.0441845
    2   0.3926991   0.9246082   0.9238795      0.0007287        0.0788715
    3   0.5890486   0.8323512   0.8314696      0.0008816        0.1060315
    4   0.7853982   0.7080045   0.7071068      0.0008977        0.1269565
    5   0.9817477   0.5563620   0.5555702      0.0007917        0.1425085
    6   1.1780972   0.3832699   0.3826834      0.0005865        0.1532604
    7   1.3744468   0.1954016   0.1950903      0.0003113        0.1595767
    8   1.5707963   0           0              0                0
We then solve this system with n = 40 (h = π/80):

    k    x_k         y_k         Exact y(x_k)   Absolute Error   % Error
    0    0           1           1              0                0
    5    0.1963495   0.9808025   0.9807853      0.0000172        0.0017571
    10   0.3926991   0.9239085   0.9238795      0.0000290        0.0031363
    15   0.5890486   0.8315047   0.8314696      0.0000351        0.0042161
    20   0.7853982   0.7071425   0.7071068      0.0000357        0.0050479
    25   0.9817477   0.5556017   0.5555702      0.0000315        0.0056660
    30   1.1780972   0.3827068   0.3826834      0.0000233        0.0060933
    35   1.3744468   0.1951027   0.1950903      0.0000124        0.0063443
    40   1.5707963   0           0              0                0
Notice that a mesh refinement by a factor of 5 has reduced the error by a factor of about 25. This behavior is typical of a numerical method which is second-order accurate.
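The n = 8 calculation above can be reproduced with a few lines of code. The sketch below (plain Python, added for illustration and not part of the original notes) assembles the tridiagonal system of Eq. 1.54 and solves it by Gaussian elimination and back-substitution:

```python
import math

def solve_bvp(n):
    """Solve y'' = -y, y(0) = 1, y(pi/2) = 0 using the system of Eq. 1.54."""
    h = (math.pi / 2) / n
    diag = [-(2.0 - h * h)] * (n - 1)   # main diagonal entries
    off = 1.0                           # sub- and super-diagonal entries
    rhs = [0.0] * (n - 1)
    rhs[0] = -1.0                       # boundary value y(0) = 1 moved to the rhs
    # Forward elimination (the system is tridiagonal)
    for k in range(1, n - 1):
        m = off / diag[k - 1]
        diag[k] -= m * off
        rhs[k] -= m * rhs[k - 1]
    # Back-substitution
    y = [0.0] * (n - 1)
    y[-1] = rhs[-1] / diag[-1]
    for k in range(n - 3, -1, -1):
        y[k] = (rhs[k] - off * y[k + 1]) / diag[k]
    return h, y

h, y = solve_bvp(8)
errors = [abs(y[k] - math.cos((k + 1) * h)) for k in range(len(y))]
print(max(errors))   # about 9e-4, in agreement with the n = 8 table
```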
1.6.2
Solving Tridiagonal Systems
Tridiagonal systems are particularly easy (and fast) to solve using Gaussian elimination. It is convenient to solve such systems using the following notation:

    d₁x₁ + u₁x₂               = b₁
    l₂x₁ + d₂x₂ + u₂x₃        = b₂
    l₃x₂ + d₃x₃ + u₃x₄        = b₃
    ...
    l_n x_{n−1} + d_n x_n     = b_n,   (1.55)

where dᵢ, uᵢ, and lᵢ are, respectively, the diagonal, upper, and lower matrix entries in Row i. All coefficients can now be stored in three one-dimensional arrays, D(·), U(·), and L(·), instead of a full two-dimensional array A(I,J). The solution algorithm (reduction to upper triangular form by Gaussian elimination followed by backsolving) can now be summarized as follows:

1. For k = 1, 2, ..., n − 1:  [k = pivot row]
   (a) m = −l_{k+1}/d_k  [m = multiplier needed to annihilate term below]
   (b) d_{k+1} = d_{k+1} + m u_k  [new diagonal entry in next row]
   (c) b_{k+1} = b_{k+1} + m b_k  [new rhs in next row]
2. x_n = b_n/d_n  [start of backsolve]
3. For k = n − 1, n − 2, ..., 1:  [backsolve loop]
   (a) x_k = (b_k − u_k x_{k+1})/d_k

Tridiagonal systems arise in a variety of applications, including the Crank-Nicolson finite difference method for solving parabolic partial differential equations.
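The algorithm translates nearly line for line into code. The following Python sketch (added for illustration; the function name is mine, not the notes') stores the three diagonals in the one-dimensional arrays suggested above:

```python
def solve_tridiagonal(l, d, u, b):
    """Solve the tridiagonal system of Eq. 1.55 by Gaussian elimination.

    l[i], d[i], u[i] are the lower, diagonal, and upper entries of row i
    (l[0] and u[-1] are unused); b is the right-hand side.
    """
    n = len(d)
    d, b = d[:], b[:]                  # work on copies
    for k in range(n - 1):             # Step 1: reduce to upper triangular form
        m = -l[k + 1] / d[k]           # multiplier that annihilates l[k+1]
        d[k + 1] += m * u[k]
        b[k + 1] += m * b[k]
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]              # Step 2: start of backsolve
    for k in range(n - 2, -1, -1):     # Step 3: backsolve loop
        x[k] = (b[k] - u[k] * x[k + 1]) / d[k]
    return x

# Example: a 4x4 system constructed so the exact solution is [1, 2, 3, 4]
l = [0.0, 1.0, 1.0, 1.0]
d = [4.0, 4.0, 4.0, 4.0]
u = [1.0, 1.0, 1.0, 0.0]
b = [6.0, 12.0, 18.0, 19.0]
print(solve_tridiagonal(l, d, u, b))   # [1.0, 2.0, 3.0, 4.0], up to rounding
```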
1.7
Shooting Methods
Shooting methods provide a way to convert a boundary value problem to a trial-and-error initial value problem. It is useful to have additional ways to solve BVPs, particularly if the equations are nonlinear. Consider the following two-point BVP:

    y'' = f(x, y, y'),  a ≤ x ≤ b,  y(a) = A,  y(b) = B.   (1.56)

To solve this problem using the shooting method, we compute solutions of the IVP

    y'' = f(x, y, y'),  x ≥ a,  y(a) = A,  y'(a) = M   (1.57)
Figure 4: The Shooting Method.
for various values of M (the slope at the left end of the domain) until two solutions, one with y(b) < B and the other with y(b) > B, have been found (Fig. 4). The initial slope M can then be interpolated until a solution is found (i.e., y(b) = B).
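A minimal sketch of the shooting idea is given below (illustrative Python, not part of the notes). It marches the IVP with the classical Runge-Kutta method of Section 1.3 and, for simplicity, adjusts the trial slope M by bisection rather than interpolation; the test problem is the BVP of Section 1.6.1, y'' = −y, y(0) = 1, y(π/2) = 0, for which the exact initial slope is y'(0) = 0:

```python
import math

def shoot(f, a, b, A, B, M_lo, M_hi, n=200, tol=1e-10):
    """Find the initial slope M such that y(b) = B for y'' = f(x, y, y')."""
    def y_end(M):
        h = (b - a) / n
        x, y, yp = a, A, M
        for _ in range(n):                 # classical RK4 on the system (y, y')
            k1y, k1p = yp, f(x, y, yp)
            k2y, k2p = yp + h/2*k1p, f(x + h/2, y + h/2*k1y, yp + h/2*k1p)
            k3y, k3p = yp + h/2*k2p, f(x + h/2, y + h/2*k2y, yp + h/2*k2p)
            k4y, k4p = yp + h*k3p, f(x + h, y + h*k3y, yp + h*k3p)
            y += h/6*(k1y + 2*k2y + 2*k3y + k4y)
            yp += h/6*(k1p + 2*k2p + 2*k3p + k4p)
            x += h
        return y
    # M_lo and M_hi are assumed to give y(b) on opposite sides of B
    while abs(M_hi - M_lo) > tol:
        M = 0.5 * (M_lo + M_hi)
        if (y_end(M) - B) * (y_end(M_lo) - B) > 0:
            M_lo = M
        else:
            M_hi = M
    return 0.5 * (M_lo + M_hi)

M = shoot(lambda x, y, yp: -y, 0.0, math.pi/2, 1.0, 0.0, -1.0, 1.0)
print(M)   # close to the exact initial slope y'(0) = 0
```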
2
Partial Diﬀerential Equations
A partial differential equation (PDE) is an equation that involves an unknown function (the dependent variable) and some of its partial derivatives with respect to two or more independent variables. In an nth-order equation, the highest-order derivative that appears is of order n.
2.1
Classical Equations of Mathematical Physics
1. Laplace's equation (the potential equation)

    ∇²φ = 0   (2.1)

In Cartesian coordinates, the vector operator del is defined as

    ∇ = e_x ∂/∂x + e_y ∂/∂y + e_z ∂/∂z.   (2.2)

∇² is referred to as the Laplacian operator and given by

    ∇² = ∇·∇ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z².   (2.3)

Thus, Laplace's equation in Cartesian coordinates is

    ∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z² = 0.   (2.4)

In cylindrical coordinates, the Laplacian is

    ∇²φ = (1/r) ∂/∂r (r ∂φ/∂r) + (1/r²) ∂²φ/∂θ² + ∂²φ/∂z².   (2.5)
Laplace’s equation arises in incompressible ﬂuid ﬂow (in which case φ is the velocity potential), gravitational potential problems, electrostatics, magnetostatics, steadystate 11
Poisson’s equation 2 φ+g =0 (2. and torsion of bars in elasticity (in which case φ(x. t) is the transverse displacement of the string. respectively.8) and c is the speed of propagation. 12 .6) This equation arises in steadystate heat conduction with distributed sources (φ = temperature) and torsion of bars in elasticity (in which case φ(x. e. m = ρt). (c) transverse vibrations of a membrane For this twodimensional problem. and T c= . 2 φ= (2.g. 2.heat conduction with no sources (in which case φ is the temperature). t) is the transverse displacement of the membrane (e.g. the modulus of elasticity and density of the bar material. t) represents the longitudinal displacement.10) ρ where E and ρ are. and A is the crosssectional area of the string. drum head). wave equation 1¨ φ c2 In this equation.11) where T is the tension per unit length. φ = φ(x. and c= T . ρ is the density of the string material. (b) longitudinal vibrations of a bar For this onedimensional problem..9) ρA where T is the string tension. y) is the stress function).e. Functions which satisfy Laplace’s equation are referred to as harmonic functions. 3. φ = φ(x. φ = φ(x. ∂t (2.. The denominator ρA is mass per unit length. and t is the membrane thickness. (2. y) is the warping function). m is the mass per unit area (i. dots denote time derivatives. The wave equation arises in several physical situations: (a) transverse vibrations of a string For this onedimensional problem.7) 2 ¨ ∂ φ φ= 2. (2. m (2.. y. and E c= .
(d) acoustics. For this three-dimensional problem, φ = φ(x, y, z, t) is the fluid pressure or velocity potential, and

    c = √(B/ρ),   (2.12)

where B = ρc² is the fluid bulk modulus, ρ is the density, and c is the speed of sound.

4. Helmholtz equation (reduced wave equation)

    ∇²φ + k²φ = 0   (2.13)

The Helmholtz equation is the time-harmonic form of the wave equation. This equation arises in steady-state (time-harmonic) situations involving the wave equation, e.g., steady-state acoustics, in which interest is restricted to functions which vary sinusoidally in time. To obtain the Helmholtz equation, we substitute

    φ(x, y, z, t) = φ₀(x, y, z) cos ωt   (2.14)

into the wave equation, Eq. 2.7, to obtain

    ∇²φ₀ cos ωt = −(ω²/c²) φ₀ cos ωt.   (2.15)

If we define the wave number k = ω/c, this equation becomes

    ∇²φ₀ + k²φ₀ = 0.   (2.16)

With the understanding that the unknown depends only on the spatial variables, the subscript is unnecessary, and we obtain the Helmholtz equation, Eq. 2.13.

5. heat equation

    ∇·(k∇φ) + Q = ρc φ̇   (2.17)

In this equation, φ represents the temperature T, k is the thermal conductivity, Q is the internal heat generation per unit volume per unit time, ρ is the material density, and c is the material specific heat (the heat required per unit mass to raise the temperature by one degree). The thermal conductivity k is defined by Fourier's law of heat conduction:

    q̂_x = −kA dT/dx,   (2.18)

where q̂_x is the rate of heat conduction (energy per unit time), with typical units J/s or BTU/hr, and A is the area through which the heat flows. Alternatively, Fourier's law is written

    q_x = −k dT/dx,   (2.19)

where q_x is energy per unit time per unit area, with typical units J/(s·m²).

There are several special cases of the heat equation of interest:
(a) homogeneous material (k = constant):

    k∇²φ + Q = ρc φ̇   (2.20)

(b) homogeneous material, steady-state (time-independent):

    ∇²φ = −Q/k  (Poisson's equation)   (2.21)

(c) homogeneous material, steady-state, no sources (Q = 0):

    ∇²φ = 0  (Laplace's equation)   (2.22)

2.2 Classification of Partial Differential Equations

Of the classical PDEs summarized in the preceding section, some involve time, and some don't, so presumably their solutions would exhibit fundamental differences. Of those that involve time (wave and heat equations), the order of the time derivative is different, so the fundamental character of their solutions may also differ. Both these speculations turn out to be true.

Consider the general, second-order, linear partial differential equation in two variables

    A u_xx + B u_xy + C u_yy + D u_x + E u_y + F u = G,   (2.23)

where the coefficients are functions of the independent variables x and y (i.e., A = A(x, y), B = B(x, y), etc.), and we have used subscripts to denote partial derivatives, e.g.,

    u_xx = ∂²u/∂x².   (2.24)

The quantity B² − 4AC is referred to as the discriminant of the equation. The behavior of the solution of Eq. 2.23 depends on the sign of the discriminant according to the following table:

    B² − 4AC   Equation Type   Typical Physics Described
    < 0        Elliptic        Steady-state phenomena
    = 0        Parabolic       Heat flow and diffusion processes
    > 0        Hyperbolic      Vibrating systems and wave motion

The names elliptic, parabolic, and hyperbolic arise from the analogy with the conic sections in analytic geometry. Given these definitions, we can classify the common equations of mathematical physics already encountered as follows:

    Name       Eq. Number   Eq. in Two Variables      A, B, C                   Type
    Laplace    2.1          u_xx + u_yy = 0           A = C = 1, B = 0          Elliptic
    Poisson    2.6          u_xx + u_yy = −g          A = C = 1, B = 0          Elliptic
    Helmholtz  2.13         u_xx + u_yy + k²u = 0     A = C = 1, B = 0          Elliptic
    Wave       2.7          u_xx − u_yy/c² = 0        A = 1, C = −1/c², B = 0   Hyperbolic
    Heat       2.17         k u_xx − ρc u_y = −Q      A = k, B = C = 0          Parabolic
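The discriminant test is simple enough to automate. The sketch below (illustrative Python, not part of the notes) reproduces the classification in the table above:

```python
def classify(A, B, C):
    """Classify Eq. 2.23 by the sign of its discriminant B^2 - 4AC."""
    disc = B * B - 4.0 * A * C
    if disc < 0:
        return "elliptic"
    if disc == 0:
        return "parabolic"
    return "hyperbolic"

print(classify(1.0, 0.0, 1.0))     # Laplace equation: elliptic
print(classify(1.0, 0.0, -0.25))   # wave equation with c = 2: hyperbolic
print(classify(2.0, 0.0, 0.0))     # heat equation (A = k, C = 0): parabolic
```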
In the wave and heat equations in the above table, y represents the time variable. Elliptic equations characterize static (time-independent) situations, and the other two types of equations characterize time-dependent situations. The behavior of the solutions of equations of different types differs.

2.3 Transformation to Nondimensional Form

It is often convenient, when solving equations, to transform the equation to a nondimensional form. Consider, for example, the one-dimensional wave equation

    ∂²u/∂x² = (1/c²) ∂²u/∂t²,   (2.25)

where u is displacement (of dimension length), t is time, and c is the speed of propagation (of dimension length/time). Let L represent some characteristic length associated with the problem. We define the nondimensional variables

    x̄ = x/L,  ū = u/L,  t̄ = ct/L,   (2.26)

in which case the derivatives in Eq. 2.25 become

    ∂u/∂x = ∂(Lū)/∂(Lx̄) = ∂ū/∂x̄   (2.27)

and

    ∂²u/∂x² = ∂/∂x (∂u/∂x) = ∂/∂x̄ (∂ū/∂x̄)(dx̄/dx) = (1/L) ∂²ū/∂x̄².   (2.28)

Similarly,

    ∂u/∂t = ∂(Lū)/∂(Lt̄/c) = c ∂ū/∂t̄   (2.29)

and

    ∂²u/∂t² = ∂/∂t (∂u/∂t) = c ∂/∂t̄ (∂ū/∂t̄)(dt̄/dt) = (c²/L) ∂²ū/∂t̄².   (2.30)

Thus, Eq. 2.25 becomes

    (1/L) ∂²ū/∂x̄² = (1/c²)(c²/L) ∂²ū/∂t̄²   (2.31)

or

    ∂²ū/∂x̄² = ∂²ū/∂t̄².   (2.32)

This is the nondimensional wave equation. This last equation can also be obtained more easily by direct substitution of Eq. 2.26 into Eq. 2.25 and factoring out the constants:

    ∂²(Lū)/∂(Lx̄)² = (1/c²) ∂²(Lū)/∂(Lt̄/c)²   (2.33)

or

    (L/L²) ∂²ū/∂x̄² = (1/c²)(c²L/L²) ∂²ū/∂t̄²,   (2.34)

which again yields Eq. 2.32.
3 Finite Difference Solution of Partial Differential Equations

3.1 Parabolic Equations

Consider the boundary-initial value problem (BIVP)

    u_xx = (1/c) u_t,  u = u(x,t),  0 < x < 1,  t > 0,
    u(0,t) = u(1,t) = 0 (boundary conditions),
    u(x,0) = f(x) (initial condition),   (3.1)

where c is a constant. This problem represents transient heat conduction in a rod with the ends held at zero temperature and an initial temperature profile f(x).

3.1.1 Explicit Finite Difference Method

To solve this problem numerically, we discretize x and t such that

    x_i = ih,  i = 0, 1, 2, ...,
    t_j = jk,  j = 0, 1, 2, ....   (3.2)

Let u_{i,j} be the numerical approximation to u(x_i, t_j). We approximate u_t with the forward finite difference

    u_t ≈ (u_{i,j+1} − u_{i,j})/k   (3.3)

and u_xx with the central finite difference

    u_xx ≈ (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h².   (3.4)

The finite difference approximation to the PDE is then

    (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h² = (u_{i,j+1} − u_{i,j})/(ck).   (3.5)

Define the parameter r as

    r = ck/h² = c∆t/(∆x)²,   (3.6)

in which case Eq. 3.5 becomes

    u_{i,j+1} = r u_{i−1,j} + (1 − 2r) u_{i,j} + r u_{i+1,j}.   (3.7)

The domain of the problem and the mesh are illustrated in Fig. 5. Eq. 3.7 is a recursive relationship giving u in a given row (time) in terms of three consecutive values of u in the row below (one time step earlier). This equation is referred to as an explicit formula since one unknown value can be found directly in terms of several other known values. The recursive relationship can also be sketched with the stencil shown in Fig. 6. For example, for r = 1/10, we have the stencil shown in Fig. 7. That is, for r = 1/10, the solution (temperature) at
Figure 5: Mesh for 1D Heat Equation.
Figure 6: Heat Equation Stencil for Explicit Finite Difference Algorithm.
Figure 7: Heat Equation Stencil for r = 1/10.
Figure 8: Heat Equation Stencils for r = 1/2 and r = 1.
the new point depends on the three points at the previous time step with a 1-8-1 weighting. Notice that, if r = 1/2, the solution at the new point is independent of the closest point, which is counterintuitive. For r > 1/2 (e.g., r = 1), the new point depends negatively on the closest point (Fig. 8). It can be shown that, for a stable solution, 0 < r ≤ 1/2. An unstable solution is one for which small errors grow rather than decay as the solution evolves. Thus, a disadvantage of this explicit method is that a small time step ∆t must be used to maintain stability. This disadvantage will be removed with the Crank-Nicolson algorithm.

The instability which occurs for r > 1/2 can be illustrated with the following example. Consider the boundary-initial value problem (in nondimensional form)

    u_xx = u_t,  u = u(x,t),  0 < x < 1,
    u(0,t) = u(1,t) = 0,
    u(x,0) = f(x) = { 2x,        0 ≤ x ≤ 1/2
                      2(1 − x),  1/2 ≤ x ≤ 1.   (3.8)

The physical problem is to compute the temperature history u(x,t) for a bar with a prescribed initial temperature distribution f(x), no internal heat sources, and zero temperature prescribed at both ends. We solve this problem using the explicit finite difference algorithm with h = ∆x = 0.1 and k = ∆t = rh² = r(∆x)² for two different values of r: r = 0.48 and r = 0.52. The two numerical solutions (Figs. 9 and 10) are compared with the analytic solution

    u(x,t) = Σ_{n=1}^{∞} [8/(nπ)²] sin(nπ/2) sin(nπx) e^{−(nπ)²t},   (3.9)

which can be obtained by the technique of separation of variables. The instability for r > 1/2 can be clearly seen in Fig. 10.

Figure 9: Explicit Finite Difference Solution With r = 0.48.

3.1.2 Crank-Nicolson Implicit Method

The Crank-Nicolson (CN) method is a stable algorithm which allows a larger time step than could be used in the explicit method. In fact, Crank-Nicolson's stability does not depend on the parameter r.
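The stability limit r ≤ 1/2 of the explicit formula, Eq. 3.7, can be observed directly in code. The sketch below (illustrative Python, not part of the notes) advances the triangular initial data of Eq. 3.8 with r = 0.48 and r = 0.52 and compares the resulting amplitudes:

```python
def explicit_heat(r, steps, h=0.1):
    """Explicit scheme, Eq. 3.7, for u_xx = u_t with zero boundary values."""
    n = round(1.0 / h)
    # Triangular initial temperature profile of Eq. 3.8
    u = [2*i*h if i*h <= 0.5 else 2*(1 - i*h) for i in range(n + 1)]
    for _ in range(steps):
        u = ([0.0]
             + [r*u[i-1] + (1 - 2*r)*u[i] + r*u[i+1] for i in range(1, n)]
             + [0.0])
    return u

stable = explicit_heat(0.48, 300)
unstable = explicit_heat(0.52, 300)
print(max(abs(v) for v in stable))     # tiny: the solution decays smoothly
print(max(abs(v) for v in unstable))   # large: small errors have grown
```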
Figure 10: Explicit Finite Difference Solution With r = 0.52.

Consider again the PDE of Eq. 3.1:

    u_xx = (1/c) u_t,  u = u(x,t).   (3.10)

The basis for the Crank-Nicolson algorithm is writing the finite difference equation at a mid-level in time, (i, j + ½). The finite difference x derivative at j + ½ is computed as the average of the two central difference x derivatives at j and j + 1. The PDE is approximated numerically by

    (1/2)[(u_{i+1,j+1} − 2u_{i,j+1} + u_{i−1,j+1})/h² + (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h²]
        = (u_{i,j+1} − u_{i,j})/(ck),   (3.11)

where the right-hand side is a central difference approximation to the time derivative at the middle point j + ½. We again define the parameter r as

    r = ck/h² = c∆t/(∆x)²   (3.12)

and rearrange Eq. 3.11 with all j + 1 terms on the left-hand side:

    −r u_{i−1,j+1} + 2(1 + r) u_{i,j+1} − r u_{i+1,j+1}
        = r u_{i−1,j} + 2(1 − r) u_{i,j} + r u_{i+1,j}.   (3.13)

This formula is called the Crank-Nicolson algorithm. Fig. 11 shows the points involved in the Crank-Nicolson scheme. If we start at the bottom row (j = 0) and move up, the right-hand side values of Eq. 3.13 are known, and the left-hand side values of that equation are unknown. To get the process started, let j = 0, and write the CN equation for each i = 1, 2, ..., N to obtain N simultaneous equations in N unknowns, where N is the number of interior mesh points on the row. (The boundary points, with known values, are excluded.) This system of equations is a tridiagonal system, since each equation has three consecutive nonzeros centered around the diagonal.
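One Crank-Nicolson step is a tridiagonal solve. The sketch below (illustrative Python, not part of the notes) forms the right-hand side of Eq. 3.13 for the interior points and solves the system with the Thomas-style elimination of Section 1.6.2:

```python
def crank_nicolson(u, r, steps):
    """Advance u_xx = u_t with fixed zero ends using Eq. 3.13."""
    n = len(u) - 1
    u = u[:]
    for _ in range(steps):
        # Known right-hand side of Eq. 3.13 at interior points i = 1..n-1
        b = [r*u[i-1] + 2*(1 - r)*u[i] + r*u[i+1] for i in range(1, n)]
        d = [2*(1 + r)] * (n - 1)        # diagonal; off-diagonals are -r
        for k in range(1, n - 1):        # forward elimination
            m = r / d[k - 1]             # multiplier for subdiagonal entry -r
            d[k] -= m * r
            b[k] += m * b[k - 1]
        x = [0.0] * (n - 1)              # back-substitution
        x[-1] = b[-1] / d[-1]
        for k in range(n - 3, -1, -1):
            x[k] = (b[k] + r * x[k + 1]) / d[k]
        u = [0.0] + x + [0.0]
    return u

# Triangular initial data on an 11-point mesh; r = 1 would be unstable for
# the explicit method but is fine here (dt = r*h^2 = 0.01)
h = 0.1
u0 = [2*i*h if i*h <= 0.5 else 2*(1 - i*h) for i in range(11)]
u = crank_nicolson(u0, r=1.0, steps=100)
print(max(u))   # small: the profile has decayed smoothly by t = 1
```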
To advance in time, we then increment j to j = 1 and solve a new system of equations, and so on for each subsequent step. Note that the coefficient matrix of the CN system of equations does not change from step to step. Thus, one could compute and save the LU factors of the coefficient matrix, and merely do the forward-backward substitution (FBS) at each new time step, thus speeding up the calculation. This speedup would be particularly significant in higher dimensional problems, where the coefficient matrix is no longer tridiagonal. An approach which requires the solution of simultaneous equations is called an implicit algorithm. It can be shown that the CN algorithm is stable for any r, although better accuracy results from a smaller r. A smaller r corresponds to a smaller time step size (for a fixed spatial mesh). CN also gives better accuracy than the explicit approach for the same r. A sketch of the CN stencil is shown in Fig. 12.

Figure 11: Mesh for Crank-Nicolson.
Figure 12: Stencil for Crank-Nicolson Algorithm.

3.1.3 Derivative Boundary Conditions

Consider the boundary-initial value problem

    u_xx = (1/c) u_t,  u = u(x,t),
    u(0,t) = 0,  u_x(1,t) = g(t) (boundary conditions),
    u(x,0) = f(x) (initial condition).   (3.14)

The only difference between this problem and the one considered earlier in Eq. 3.1 is the right-hand side boundary condition, which now involves a derivative (a Neumann boundary condition). Assume a mesh labeling as shown in Fig. 13, and consider boundary Point 25. We introduce extra "phantom" points to the right of the boundary (outside the domain).
u(x. t is time. x→±∞ where x is distance along the string.2 3.15 14 13 25 24 23 p p p 35 p p p 34 p p p 33 Figure 13: Treatment of Derivative Boundary Conditions. and the constant c is given by c= T . Since u25 is not known.16) 2h The phantom variable u34 can then be eliminated from the last two equations to yield a new equation for the boundary point u25 : u25 = 2ru14 + (1 − 2r)u24 + 2rhg24 . and assume we use an explicit ﬁnite diﬀerence algorithm. f (x) is the initial displacement.1 Hyperbolic Equations The d’Alembert Solution of the Wave Equation Before addressing the ﬁnite diﬀerence solution of hyperbolic equations.2.17) 3. t > 0.15) On the other hand.18) = g(x) u(x.19) where T is the tension (force) in the string. 0) (3. −∞ < x < ∞. (3. ρ is the density of the string material. Note that c has the dimension of velocity. and A is the crosssectional area of the string. we must write the ﬁnite diﬀerence equation for u25 : u25 = ru14 + (1 − 2r)u24 + ru34 . t) = 0. we review some background material on such equations. a central ﬁnite diﬀerence approximation to the x derivative at Point 24 is u34 − u14 = g24 . (3. 0) = f (x). ∂x2 c ∂t ∂u(x. g(x) is the initial velocity. ρA (3. (3. This 21 . ∂t lim u(x. t) is the transverse displacement. The timedependent transverse response of an inﬁnitely long string satisﬁes the onedimensional wave equation with nonzero initial displacement and velocity speciﬁed: 2 1 ∂ 2u ∂ u = 2 2 .
equation assumes that all motion is vertical and that the displacement u and its slope ∂u/∂x are both small.

It can be shown by direct substitution into Eq. 3.18 that the solution of this system is

    u(x,t) = (1/2)[f(x − ct) + f(x + ct)] + (1/(2c)) ∫_{x−ct}^{x+ct} g(τ) dτ.   (3.20)

Eq. 3.20 is known as the d'Alembert solution of the one-dimensional wave equation. The differentiation of the integral in Eq. 3.20 is effected with the aid of Leibnitz's rule:

    d/dx ∫_{A(x)}^{B(x)} h(x,t) dt = ∫_{A}^{B} ∂h(x,t)/∂x dt + h(x,B) dB/dx − h(x,A) dA/dx.   (3.21)

In Eq. 3.20, the argument x − ct remains constant if, as t increases, x also increases at speed c, where c is the wave speed. Thus, the wave f(x − ct) moves to the right (increasing x) with speed c without change of shape. Similarly, the wave f(x + ct) moves to the left (decreasing x) with speed c without change of shape.

For the special case g(x) = 0 (zero initial velocity), the d'Alembert solution simplifies to

    u(x,t) = (1/2)[f(x − ct) + f(x + ct)],   (3.22)

which may be interpreted as two waves, each equal to f(x)/2, which travel at speed c to the right and left, respectively. If f(x) is nonzero only for a small domain, then, after both waves have passed the region of initial disturbance, the string returns to its rest position. For example, let f(x), the initial displacement, be given by

    f(x) = { x + b,   −b ≤ x ≤ 0
             −x + b,   0 ≤ x ≤ b
             0,        |x| ≥ b,   (3.23)

which is a triangular pulse of width 2b and height b (Fig. 14). For t > 0, half this pulse travels in opposite directions from the origin. For t > b/c, the two half-pulses have completely separated, and the neighborhood of the origin has returned to rest.

Figure 14: Propagation of Initial Displacement.

For the special case f(x) = 0 (zero initial displacement), the d'Alembert solution simplifies to

    u(x,t) = (1/(2c)) ∫_{x−ct}^{x+ct} g(τ) dτ = (1/2)[G(x + ct) − G(x − ct)],   (3.24)

where

    G(x) = (1/c) ∫_{−∞}^{x} g(τ) dτ.   (3.25)

Thus, similar to the initial displacement special case, this solution may be interpreted as the combination (difference, in this case) of two identical functions G(x)/2, one moving left and one moving right, each with speed c. For example, let the initial velocity g(x) be given by

    g(x) = { Mc,  −b ≤ x ≤ b
             0,   |x| > b,   (3.26)

which is a rectangular pulse of width 2b and height Mc, where the constant M is the dimensionless Mach number, and c is the wave speed (Fig. 15). The travelling wave G(x) is given by Eq. 3.25 as

    G(x) = { 0,         x ≤ −b
             M(x + b),  −b ≤ x ≤ b
             2Mb,       x ≥ b.   (3.27)

For t > 0, half this wave travels in opposite directions from each other at speed c (Fig. 16). Even though g(x) is nonzero only near the origin, the travelling wave G(x) is constant and nonzero for x > b. Thus, as time advances, the center section of the string reaches a state of rest, but not in its original position (Fig. 16).

From the preceding discussion, it is clear that disturbances travel with speed c. For an observer at some fixed location, initial displacements occurring elsewhere pass by after a finite time has elapsed, and then the string returns to rest in its original position. Nonzero initial velocity disturbances, however, once having reached some location, will continue to influence the solution from then on. Thus, the domain of influence of the data at x = x₀, say, on the solution consists of all points closer than ct (in either direction) to x₀, the location of the disturbance (Fig. 17).
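The d'Alembert formula can be evaluated directly. The short Python sketch below (an illustration, not part of the notes) uses the triangular initial displacement of Eq. 3.23 with b = 1, c = 1, and zero initial velocity:

```python
def dalembert(f, G, x, t, c=1.0):
    """Eq. 3.20 written with the velocity integral G of Eq. 3.25."""
    return 0.5 * (f(x - c*t) + f(x + c*t)) + 0.5 * (G(x + c*t) - G(x - c*t))

b = 1.0
def f(x):                  # triangular pulse of Eq. 3.23
    return b - abs(x) if abs(x) <= b else 0.0

def G(x):                  # zero initial velocity
    return 0.0

# For t > b/c the neighborhood of the origin has returned to rest:
print(dalembert(f, G, 0.0, 2.0))   # 0.0
# The right-moving half pulse is centered at x = ct with half the height:
print(dalembert(f, G, 2.0, 2.0))   # 0.5
```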
Figure 15: Initial Velocity Function.
Figure 16: Propagation of Initial Velocity.
Figure 17: Domains of Influence and Dependence: (a) Domain of Influence of Data on Solution; (b) Domain of Dependence of Solution on Data.

Conversely, the domain of dependence of the solution on the initial data consists of all points within a distance ct of the solution point. That is, the solution at (x, t) depends on the initial data for all locations in the range (x − ct, x + ct), which are the limits of integration in the d'Alembert solution, Eq. 3.20.

3.2.2 Finite Differences

From the preceding discussion of the d'Alembert solution, we see that hyperbolic equations involve wave motion. If the initial data are discontinuous (as, for example, in shocks), the most accurate and most convenient approach for solving the equations is probably the method of characteristics. On the other hand, problems without discontinuities can probably be solved most conveniently using finite difference and finite element techniques. Here we consider finite differences.

Consider the boundary-initial value problem (BIVP)

    u_xx = (1/c²) u_tt,  u = u(x,t),  0 < x < a,  t > 0,
    u(0,t) = u(a,t) = 0 (boundary conditions),
    u(x,0) = f(x),  u_t(x,0) = g(x) (initial conditions).   (3.28)

This problem represents the transient (time-dependent) vibrations of a string fixed at the two ends with both initial displacement f(x) and initial velocity g(x) specified. A central finite difference approximation to the PDE, Eq. 3.28, yields

    (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h² = (u_{i,j+1} − 2u_{i,j} + u_{i,j−1})/(c²k²).   (3.29)

We define the parameter

    r = ck/h = c∆t/∆x   (3.30)

and solve for u_{i,j+1}:

    u_{i,j+1} = r²u_{i−1,j} + 2(1 − r²)u_{i,j} + r²u_{i+1,j} − u_{i,j−1}.   (3.31)

Fig. 18 shows the mesh points involved in this recursive scheme. If we know the solution for
Figure 18: Mesh for Explicit Solution of Wave Equation.
Figure 19: Stencil for Explicit Solution of Wave Equation.

all time values up to and including the jth time step, we can compute the solution u_{i,j+1} at Step j + 1 in terms of known quantities. Thus, this algorithm is an explicit algorithm. The corresponding stencil is shown in Fig. 19.

It can be shown that this finite difference algorithm is stable if r ≤ 1 and unstable if r > 1. This stability condition is known as the Courant, Friedrichs, and Lewy (CFL) condition, or simply the Courant condition. It can be further shown that a theoretically correct solution is obtained when r = 1, and that the accuracy of the solution decreases as the value of r decreases farther and farther below the value of 1. Thus, to ensure a stable solution, the time step size ∆t should be chosen, if possible, so that r = 1. If that ∆t is inconvenient for a particular calculation, ∆t should be selected as large as possible without exceeding the stability limit of r = 1.

An intuitive rationale behind the stability requirement r ≤ 1 can also be made using Fig. 20. If r > 1, the numerical domain of dependence (NDD) would be smaller than the actual domain of dependence (ADD) for the PDE, since NDD spreads by one mesh point at each level for earlier time. Thus, if NDD < ADD, the numerical solution would be independent of data outside NDD but inside ADD. That is, the numerical solution would ignore necessary information.

3.2.3 Starting Procedure for Explicit Algorithm

We note that the explicit finite difference scheme just described for the wave equation requires the numerical solution at two consecutive time steps to step forward to the next time. Thus, a special procedure is needed to advance from j = 0 to j = 1.
To compute the solution at the end of the first time step, let j = 0 in the explicit finite difference algorithm, Eq. 3.31:

    u_{i,1} = r²u_{i−1,0} + 2(1 − r²)u_{i,0} + r²u_{i+1,0} − u_{i,−1},   (3.32)

where the right-hand side is known (from the initial condition) except for u_{i,−1}. However, we can write a central difference approximation to the first time derivative at t = 0:

    (u_{i,1} − u_{i,−1})/(2k) = g_i,   (3.33)

where g_i is the initial velocity g(x) evaluated at the ith point, i.e.,

    g_i = g(x_i).   (3.34)

Thus,

    u_{i,−1} = u_{i,1} − 2kg_i.   (3.35)

If we substitute this last result into Eq. 3.32 (to eliminate u_{i,−1}), we obtain

    u_{i,1} = r²u_{i−1,0} + 2(1 − r²)u_{i,0} + r²u_{i+1,0} − u_{i,1} + 2kg_i   (3.36)

or

    u_{i,1} = (r²/2)u_{i−1,0} + (1 − r²)u_{i,0} + (r²/2)u_{i+1,0} + kg_i.   (3.37)

This is the difference equation used for the first row. Thus, to implement the explicit finite difference algorithm, we use Eq. 3.37 for the first time step and Eq. 3.31 for all subsequent time steps.

3.2.4 Nonreflecting Boundaries

In some applications, it is of interest to model domains that are large enough to be considered infinite in extent. In a finite difference representation of the domain, an infinite boundary has
Figure 20: Domains of Dependence for r > 1.

to be truncated at some sufficiently large distance. At such a boundary, a suitable boundary condition must be imposed to ensure that outgoing waves are not reflected. Consider a vibrating string which extends to infinity for large x. The d'Alembert solution, Eq. 3.20, of the one-dimensional wave equation c²u_xx = u_tt can be written in the form

    u(x,t) = F₁(x − ct) + F₂(x + ct),   (3.38)

where F₁ represents a wave advancing at speed c toward the boundary, and F₂ represents the returning wave, which should not exist if the boundary is nonreflecting. With F₂ = 0, we differentiate u with respect to x and t to obtain

    ∂u/∂x = F₁′,  ∂u/∂t = −cF₁′,   (3.39)

where the prime denotes the derivative with respect to the argument. Thus,

    ∂u/∂x = −(1/c) ∂u/∂t.   (3.40)

This is the one-dimensional nonreflecting boundary condition. Note that the x direction is normal to the boundary. This condition is exact in 1D (i.e., plane waves) and approximate in higher dimensions, where the nonreflecting condition is written

    ∂u/∂n + (1/c) ∂u/∂t = 0,   (3.41)

where n is the outward unit normal to the boundary.

We truncate the computational domain at some finite x and, for this example, let the initial velocity be zero. The nonreflecting boundary condition, Eq. 3.40, can be approximated in the finite difference method with central differences expressed in terms of the phantom point outside the boundary. At the typical point (i, j) on the nonreflecting boundary in Fig. 21, the general recursive formula is given by Eq. 3.31:

    u_{i,j+1} = r²u_{i−1,j} + 2(1 − r²)u_{i,j} + r²u_{i+1,j} − u_{i,j−1}.   (3.42)

The central difference approximation to the nonreflecting boundary condition, Eq. 3.40, is

    (u_{i+1,j} − u_{i−1,j})/(2h) = −(1/c)(u_{i,j+1} − u_{i,j−1})/(2k)   (3.43)

Figure 21: Finite Difference Mesh at Nonreflecting Boundary.
or

r (u_{i+1,j} − u_{i−1,j}) = −(u_{i,j+1} − u_{i,j−1}), (3.43)

where r = ck/h. The substitution of Eq. 3.43 into Eq. 3.31 (to eliminate the phantom point) yields

(1 + r) u_{i,j+1} = 2r²u_{i−1,j} + 2(1 − r²)u_{i,j} − (1 − r)u_{i,j−1}. (3.44)

For the first time step (j = 0), the last term in this relation is evaluated using the central difference approximation to the initial velocity, Eq. 3.35; for zero initial velocity (u_{i,−1} = u_{i,1}), this gives

u_{i,1} = r²u_{i−1,0} + (1 − r²)u_{i,0}. (3.45)

Note also that, for r = 1, Eq. 3.44 takes a particularly simple (and perhaps unexpected) form:

u_{i,j+1} = u_{i−1,j}. (3.46)

On the right and left nonreflecting boundaries, Eq. 3.46 implies

u_{n,j+1} = u_{n−1,j}, u_{0,j+1} = u_{1,j}, (3.47)

where the mesh points in the x direction are labeled 0 to n. With r = 1, the interior finite difference formulas 3.31 and 3.37 simplify to

u_{i,j+1} = u_{i−1,j} + u_{i+1,j} − u_{i,j−1} (3.48)

and, for the first time step,

u_{i,1} = (u_{i−1,0} + u_{i+1,0}) / 2. (3.49)

To illustrate the perfect wave absorption that occurs in a one-dimensional finite difference model, consider an infinitely-long vibrating string with a nonzero initial displacement and zero initial velocity, similar to Fig. 14. The initial displacement is a triangular-shaped pulse in the middle of the string. According to the d'Alembert solution, half the pulse should propagate at speed c to the left, half to the right, and both halves should be absorbed into the boundaries. We solve the problem with the explicit central finite difference approach with r = ck/h = 1. The finite difference calculation for this problem results in the following spreadsheet:

[Spreadsheet: u(x, t) for t = 0, 1, ..., 10 and x = 0, 1, ..., 10. The triangular pulse splits into two half-amplitude pulses which propagate toward the two ends of the mesh and disappear into the boundaries.]
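The r = 1 marching scheme, with the boundary updates of Eq. 3.47, can be sketched directly. The triangular pulse used here is illustrative (not the exact data of the spreadsheet), but it shows the same behavior: the pulse splits into two half-amplitude pulses, and both vanish into the ends without reflection.

```python
# Sketch of the explicit r = ck/h = 1 scheme (Eqs. 3.46-3.49) with
# nonreflecting boundaries at both ends of the mesh.

def wave_with_absorbing_ends(u0, nsteps):
    """March nsteps time steps; returns the list of solution rows u(x, t)."""
    n = len(u0) - 1
    prev = list(u0)
    # First time step (zero initial velocity), Eq. 3.49 in the interior,
    # Eq. 3.47 at the two boundaries:
    curr = ([prev[1]]
            + [(prev[i - 1] + prev[i + 1]) / 2 for i in range(1, n)]
            + [prev[n - 1]])
    history = [prev, curr]
    for _ in range(nsteps - 1):
        # Interior update, Eq. 3.48; boundary update, Eq. 3.47:
        nxt = ([curr[1]]
               + [curr[i - 1] + curr[i + 1] - prev[i] for i in range(1, n)]
               + [curr[n - 1]])
        prev, curr = curr, nxt
        history.append(curr)
    return history

# Illustrative triangular pulse in the middle of an 11-point string:
hist = wave_with_absorbing_ends([0, 0, 0, 0, 0.5, 1.0, 0.5, 0, 0, 0, 0], 10)
```

By t = 2 the pulse has split into two half-amplitude pulses, and after enough steps the string is completely quiescent: both halves were absorbed.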
Notice that the triangular wave is absorbed without any reflection from the two boundaries.

For steady-state wave motion, the solution u(x, t) is time-harmonic,

u = u0 e^{iωt}, (3.51)

and the nonreflecting boundary condition, Eq. 3.41, becomes

∂u0/∂n = −(iω/c) u0 (3.52)

or

∂u0/∂n = −ik u0, (3.53)

where n is the outward unit normal at the nonreflecting boundary, and k = ω/c is the wave number. This condition is exact in 1D (i.e., plane waves) and approximate in higher dimensions.

The nonreflecting boundary condition can be interpreted physically as a damper (dashpot). Consider, for example, a bar undergoing longitudinal vibration and terminated on the right end with the nonreflecting boundary condition, Eq. 3.40 (Fig. 22). The internal longitudinal force F in the bar is given by

F = Aσ = AEε = AE ∂u/∂x, (3.50)

where A is the cross-sectional area of the bar, σ is the stress, ε is the strain, E is the Young's modulus of the bar material, and u is displacement. Thus, from Eq. 3.40, the nonreflecting boundary condition is equivalent to applying an end force given by

F = −(AE/c) v, (3.54)

where v = ∂u/∂t is the velocity. Since, for longitudinal waves in a bar, c² = E/ρ,

E = ρc², (3.55)

and Eq. 3.54 becomes

F = −(ρcA) v, (3.56)

which is a force proportional to velocity. The minus sign in this equation means that the force opposes the direction of motion, as required to be physically realizable. A mechanical device which applies a force proportional to velocity is a dashpot; the dashpot constant is ρcA. Thus, the application of this dashpot to the end of a finite length bar simulates exactly a bar of infinite length (Fig. 22). Since, in acoustics, the ratio of pressure (force/area) to velocity is referred to as impedance, we see that the characteristic impedance of an acoustic medium is ρc.

Figure 22: Finite Length Simulation of an Infinite Bar.
3.3 Elliptic Equations

Consider Laplace's equation on the two-dimensional rectangular domain shown in Fig. 23:

∇²T(x, y) = 0 (0 < x < a, 0 < y < b),
T(x, 0) = f1(x), T(x, b) = f2(x),
T(0, y) = g1(y), T(a, y) = g2(y). (3.57)

This problem corresponds physically to two-dimensional steady-state heat conduction over a rectangular plate for which the temperature is specified on the boundary.

Figure 23: Laplace's Equation on Rectangular Domain.

We attempt an approximate solution by introducing a uniform rectangular grid over the domain, and let the point (i, j) denote the point having the ith value of x and the jth value of y (Fig. 24).

Figure 24: Finite Difference Grid on Rectangular Domain.

Then, using central finite difference approximations to the second derivatives (Fig. 25),

∂²T/∂x² ≈ (T_{i−1,j} − 2T_{i,j} + T_{i+1,j}) / h², (3.58)

∂²T/∂y² ≈ (T_{i,j−1} − 2T_{i,j} + T_{i,j+1}) / h². (3.59)

The finite difference approximation to Laplace's equation thus becomes

(T_{i−1,j} − 2T_{i,j} + T_{i+1,j}) / h² + (T_{i,j−1} − 2T_{i,j} + T_{i,j+1}) / h² = 0 (3.60)

or

4T_{i,j} − (T_{i−1,j} + T_{i+1,j} + T_{i,j−1} + T_{i,j+1}) = 0. (3.61)
Figure 25: The Neighborhood of Point (i, j).

That is, the solution at a typical point (i, j) is the average of the four neighboring points. For example, consider the mesh shown in Fig. 26.

Figure 26: 20-Point Finite Difference Mesh.

Although there are 20 mesh points, 14 are on the boundary, where the temperature is known. Thus, the resulting numerical problem has only six degrees of freedom (unknown variables). The application of Eq. 3.61 to each of the six interior points yields

4T6 − T7 − T10 = T2 + T5
−T6 + 4T7 − T11 = T3 + T8
−T6 + 4T10 − T11 − T14 = T9
−T7 − T10 + 4T11 − T15 = T12
−T10 + 4T14 − T15 = T13 + T18
−T11 − T14 + 4T15 = T16 + T19, (3.62)

where all known quantities have been placed on the right-hand side. This linear system of six equations in six unknowns can be solved with standard equation solvers. Because the central difference operator is a 5-point operator, systems of equations of this type would have at most five nonzero terms in each equation, regardless of how large the mesh is. Thus, for large meshes, the system of equations is sparsely populated, so that sparse matrix solution techniques would be applicable.
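The averaging property of Eq. 3.61 also suggests a simple iterative solution: repeatedly sweep the interior points, replacing each value by the average of its four neighbors, until the values stop changing. A minimal sketch (the grid size and boundary temperatures are illustrative):

```python
def relax_laplace(T, interior, tol=1e-10, max_sweeps=10000):
    """Iterative sweeps of Eq. 3.61; boundary entries of dict T stay fixed."""
    for sweep in range(max_sweeps):
        worst = 0.0
        for (i, j) in interior:
            new = 0.25 * (T[(i - 1, j)] + T[(i + 1, j)]
                          + T[(i, j - 1)] + T[(i, j + 1)])
            worst = max(worst, abs(new - T[(i, j)]))
            T[(i, j)] = new
        if worst < tol:
            return T, sweep + 1   # converged to the desired accuracy
    return T, max_sweeps

# A 5 x 4 grid of points like Fig. 26 (six interior unknowns);
# T = 100 on the bottom edge, T = 0 on the rest of the boundary:
nx, ny = 4, 3
interior = [(i, j) for i in range(1, nx) for j in range(1, ny)]
T = {(i, j): 0.0 for i in range(nx + 1) for j in range(ny + 1)}
for i in range(nx + 1):
    T[(i, 0)] = 100.0
T, sweeps = relax_laplace(T, interior)
```

Each converged interior value equals the average of its four neighbors, which is exactly the discrete statement of Eq. 3.61.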
Since the numbers assigned to the mesh points in Fig. 26 are merely labels for identification, the pattern of nonzero coefficients appearing on the left-hand side of Eq. 3.62 depends on the choice of mesh ordering. Some equation solvers based on Gaussian elimination operate more efficiently on systems of equations for which the nonzeros in the coefficient matrix are clustered near the main diagonal. Such a matrix system is called banded.

Systems of this type can also be solved using an iterative procedure known as relaxation, which uses the following general algorithm:
1. Initialize the boundary points to their prescribed values, and initialize the interior points to zero or some other convenient value (e.g., the average of the boundary values).
2. Loop systematically through the interior mesh points, setting each interior point to the average of its four neighbors.
3. Continue this process until the solution converges to the desired accuracy.

3.3.1 Derivative Boundary Conditions

The approach of the preceding section must be modified for Neumann boundary conditions, in which the normal derivative, rather than the function itself, is specified. For example, consider again the problem of the last section, but with a Neumann, rather than Dirichlet, boundary condition on the right side (Fig. 27):

∇²T(x, y) = 0 (0 < x < a, 0 < y < b),
T(x, 0) = f1(x), T(x, b) = f2(x), T(0, y) = g1(y), (3.63)
∂T(a, y)/∂x = g(y). (3.64)

Figure 27: Laplace's Equation With Dirichlet and Neumann B.C.

We extend the mesh to include additional points to the right of the boundary at x = a (Fig. 28). At a typical point on the boundary, a central difference approximation yields

g18 = ∂T18/∂x ≈ (T22 − T14) / (2h) (3.65)

or

T22 = T14 + 2h g18. (3.66)

The equilibrium equation for Point 18 is

4T18 − (T14 + T22 + T17 + T19) = 0,
which, when combined with Eq. 3.66 (to eliminate the point T22 outside the domain), yields

4T18 − 2T14 − T17 − T19 = 2h g18, (3.67)

which imposes the Neumann boundary condition on Point 18.

Figure 28: Treatment of Neumann Boundary Conditions.

4 Direct Finite Element Analysis

The finite element method is a numerical procedure for solving partial differential equations. The procedure is used in a variety of applications, including structural mechanics and dynamics, acoustics, heat transfer, fluid flow, and electric and magnetic fields (electromagnetics). Although the main theoretical bases for the finite element method are variational principles and the weighted residual method, it is useful to consider discrete systems first to gain some physical insight into some of the procedures.

4.1 Linear Mass-Spring Systems

Consider the two-degree-of-freedom (DOF) system shown in Fig. 29. The stiffnesses of the two springs are k1 and k2. We let u2 and u3 denote the displacements from the equilibrium of the two masses m2 and m3.

Figure 29: 2-DOF Mass-Spring System.

The dynamic equilibrium equations could be obtained from Newton's second law (F = ma):

m2 ü2 + k1 u2 − k2 (u3 − u2) = 0
m3 ü3 + k2 (u3 − u2) = f3(t) (4.1)
or

m2 ü2 + (k1 + k2) u2 − k2 u3 = 0
m3 ü3 − k2 u2 + k2 u3 = f3(t). (4.2)

This system could be rewritten in matrix notation as

Mü + Ku = F(t), (4.3)

where

u = [u2; u3] (4.4)

is the displacement vector (the vector of unknown displacements),

F = [0; f3] (4.5)

is the force vector,

M = [ m2  0
      0   m3 ] (4.6)

is the system mass matrix, and

K = [ k1 + k2  −k2
      −k2       k2 ] (4.7)

is the system stiffness matrix. This approach would be very tedious and error-prone for more complex systems involving many springs and masses. To develop instead a matrix approach, we first isolate one element, as shown in Fig. 30.

Figure 30: A Single Spring Element.

The stiffness matrix Kel for this element satisfies

Kel u = F (4.8)

or

[ k11  k12
  k21  k22 ] [u1; u2] = [f1; f2]. (4.9)

By expanding this equation, we obtain

k11 u1 + k12 u2 = f1
k21 u1 + k22 u2 = f2. (4.10)

From this equation, we observe that k11 can be defined as the force on DOF 1 corresponding to enforcing a unit displacement on DOF 1 and zero displacement on DOF 2:

k11 = f1 |_{u1=1, u2=0} = k. (4.11)
Similarly,

k12 = f1 |_{u1=0, u2=1} = −k, k21 = f2 |_{u1=1, u2=0} = −k, k22 = f2 |_{u1=0, u2=1} = k. (4.12)

Thus,

Kel = [ k  −k
       −k   k ] = k [ 1  −1
                     −1   1 ]. (4.13)

In general, for a larger system with many more DOF,

K11 u1 + K12 u2 + K13 u3 + · · · + K1n un = F1
K21 u1 + K22 u2 + K23 u3 + · · · + K2n un = F2
K31 u1 + K32 u2 + K33 u3 + · · · + K3n un = F3
. . .
Kn1 u1 + Kn2 u2 + Kn3 u3 + · · · + Knn un = Fn, (4.14)

in which case we can interpret an individual element Kij in the stiffness matrix as the force at DOF i if uj = 1 and all other displacement components are zero:

Kij = Fi |_{uj=1, others=0}. (4.15)

4.2 Matrix Assembly

We now return to the stiffness part of the original problem shown in Fig. 29. Since there are three points in this system, each with one DOF, the system is a 3-DOF system. In the absence of the masses and constraints, this system is shown in Fig. 31.

Figure 31: 3-DOF Spring System.

The system stiffness matrix can be assembled for this system by adding the 3 × 3 stiffness matrices for each element:

K = [ k1  −k1  0      [ 0   0    0       [ k1   −k1       0
     −k1   k1  0   +    0   k2  −k2   =   −k1   k1 + k2  −k2
      0    0   0 ]      0  −k2   k2 ]      0   −k2        k2 ]. (4.16)

The justification for this assembly procedure is that forces are additive; for example, both elements which connect to DOF 2 contribute to k22. This matrix corresponds to the unconstrained system.

4.3 Constraints

The system in Fig. 29 has a constraint on DOF 1; this constrained system is shown in Fig. 32. If we expand the
(unconstrained) matrix system Ku = F into

K11 u1 + K12 u2 + K13 u3 = F1
K21 u1 + K22 u2 + K23 u3 = F2
K31 u1 + K32 u2 + K33 u3 = F3, (4.17)

we see that Row i corresponds to the equilibrium equation for DOF i, and Column i of the matrix multiplies ui. That is, if ui = 0 for some system, we do not need Row i of the matrix (although that equation can be saved to recover later the constraint force), and, since Column i multiplies zero, we do not need Column i either. Hence, if ui = 0, we can enforce that constraint by deleting Row i and Column i from the unconstrained matrix. Thus, with u1 = 0 in Fig. 32, we delete Row 1 and Column 1 in Eq. 4.16 to obtain the reduced matrix

K = [ k1 + k2  −k2
      −k2       k2 ], (4.18)

which is the same matrix obtained previously in Eq. 4.7.

Figure 32: Spring System With Constraint.

Notice that, from Eqs. 4.16 and 4.18,

det K3×3 = k1 [(k1 + k2) k2 − k2²] + k1 (−k1 k2) = 0, (4.19)

whereas

det K2×2 = (k1 + k2) k2 − k2² = k1 k2 ≠ 0. (4.20)

Thus, the unconstrained matrix K3×3 is singular, but the constrained matrix K2×2 is nonsingular. Without constraints, K is singular (and the solution of the mechanical problem is not unique) because of the presence of rigid body modes.

4.4 Example and Summary

Consider the 4-DOF spring system shown in Fig. 33.

Figure 33: 4-DOF Spring System.

The unconstrained stiffness matrix for
this system is

K = [ k1 + k4   −k1        −k4             0
      −k1        k1 + k2   −k2             0
      −k4       −k2         k2 + k3 + k4  −k3
       0         0         −k3             k3 ]. (4.21)

We summarize several properties of stiffness matrices:
1. K is symmetric, since the matrix entries in Column i consist of all the grid point forces when ui = 1 and the other DOF are fixed. This property is a special case of the Betti reciprocal theorem in mechanics.
2. The sum of any matrix column or row is zero. This property is a consequence of equilibrium: the forces must sum to zero, since the object is in static equilibrium.
3. K is sparse in general and usually banded. An off-diagonal term is zero unless the two points are common to the same element.
4. K is singular without enough constraints to eliminate rigid body motion.

We summarize the solution procedure for spring systems (and other systems that have only one DOF at each point):
1. Generate the element stiffness matrices.
2. Assemble the system K and F.
3. Apply constraints.
4. Solve Ku = F for u.
5. Compute reactions, spring forces, and stresses.

4.5 Pin-Jointed Rod Element

Consider the pin-jointed rod element (an axial member) shown in Fig. 34. From mechanics of materials, we recall that the change in displacement u for a rod of length L subjected to an axial force F is

u = FL / (AE), (4.22)

where A is the rod cross-sectional area, and E is the Young's modulus for the rod material. Thus, the axial stiffness is

k = F/u = AE/L. (4.23)

The rod is therefore equivalent to a scalar spring with k = AE/L, and

Kel = (AE/L) [ 1  −1
              −1   1 ]. (4.24)
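Since a rod behaves as a scalar spring of stiffness AE/L (Eq. 4.23), the five-step procedure above can be sketched end-to-end for two rods in series, fixed at the left end, with a force f applied at the free end. All numerical values are illustrative.

```python
# Two axial rods in series: DOF 0 (fixed) -- k1 -- DOF 1 -- k2 -- DOF 2 (load f)
k1, k2, f = 100.0, 50.0, 10.0     # k = AE/L for each rod (illustrative)

# Steps 1-2: element matrices k*[[1,-1],[-1,1]] assembled into the 3x3 system
K = [[k1, -k1, 0.0],
     [-k1, k1 + k2, -k2],
     [0.0, -k2, k2]]
F = [0.0, 0.0, f]

# The unconstrained K is singular (rigid body mode), as in the properties list:
detK = (K[0][0] * (K[1][1] * K[2][2] - K[1][2] * K[2][1])
        - K[0][1] * (K[1][0] * K[2][2] - K[1][2] * K[2][0])
        + K[0][2] * (K[1][0] * K[2][1] - K[1][1] * K[2][0]))

# Step 3: apply u0 = 0 by deleting Row 0 and Column 0
Kc = [[K[1][1], K[1][2]], [K[2][1], K[2][2]]]
Fc = [F[1], F[2]]

# Step 4: solve the 2x2 system by Cramer's rule
det = Kc[0][0] * Kc[1][1] - Kc[0][1] * Kc[1][0]
u1 = (Fc[0] * Kc[1][1] - Kc[0][1] * Fc[1]) / det
u2 = (Kc[0][0] * Fc[1] - Fc[0] * Kc[1][0]) / det

# Step 5: recover the wall reaction from the deleted equilibrium equation
R0 = K[0][1] * u1 + K[0][2] * u2
```

For springs in series, the result is the familiar u1 = f/k1 and u2 = f/k1 + f/k2, and the reaction R0 = −f balances the applied load.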
Figure 34: Pin-Jointed Rod Element.

Figure 35: Truss Structure Modeled With Pin-Jointed Rods.
However, matrix assembly for a truss structure (a structure made of pin-jointed rods) differs from that for a collection of springs, since the rod elements are not all colinear (e.g., Fig. 35). Thus, the elements must be transformed to a common coordinate system before the element matrices can be assembled into the system stiffness matrix. To use this element in 2D trusses requires expanding the 2 × 2 matrix Kel into a 4 × 4 matrix. A typical rod element in 2D, with its four DOF, is shown in Fig. 36.

Figure 36: The Degrees of Freedom for a Pin-Jointed Rod Element in 2D.

To compute the entries in the first column of the 4 × 4 Kel, we set u1 = 1, u2 = u3 = u4 = 0, and compute the four grid point forces F1, F2, F3, F4, as shown in Fig. 37. For example, at Point 1,

Fx = F cos θ = (k · 1 cos θ) cos θ = k cos²θ = k11 = −k31
Fy = F sin θ = (k · 1 cos θ) sin θ = k cos θ sin θ = k21 = −k41, (4.25, 4.26)

where k = AE/L. The forces at the opposite end of the rod (x2, y2) are equal in magnitude and opposite in sign. These four values complete the first column of the matrix.

Figure 37: Computing 2D Stiffness of Pin-Jointed Rod.

Similarly, we can find the rest of the matrix to obtain

Kel = (AE/L) [  c²    cs   −c²   −cs
                cs    s²   −cs   −s²
               −c²   −cs    c²    cs
               −cs   −s²    cs    s² ], (4.27)
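Eq. 4.27 can be generated directly from the rod's end coordinates, since only c and s are needed. A small sketch (the coordinates and AE are illustrative):

```python
import math

def rod_stiffness_2d(x1, y1, x2, y2, AE):
    """4x4 stiffness of a 2D pin-jointed rod (Eq. 4.27), DOF order (u1, v1, u2, v2)."""
    L = math.hypot(x2 - x1, y2 - y1)
    c = (x2 - x1) / L          # cos(theta)
    s = (y2 - y1) / L          # sin(theta)
    k = AE / L
    pattern = [[ c*c,  c*s, -c*c, -c*s],
               [ c*s,  s*s, -c*s, -s*s],
               [-c*c, -c*s,  c*c,  c*s],
               [-c*s, -s*s,  c*s,  s*s]]
    return [[k * v for v in row] for row in pattern]

Kel = rod_stiffness_2d(0.0, 0.0, 3.0, 4.0, AE=100.0)   # L = 5, c = 0.6, s = 0.8
```

For a horizontal rod (s = 0), the matrix collapses to the axial element of Eq. 4.24, and each row sums to zero, as a stiffness matrix must.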
Here, c and s are the direction cosines,

c = cos θ = (x2 − x1)/L, s = sin θ = (y2 − y1)/L. (4.28)

Note that the angle θ itself is not of interest; only c and s are needed. Later, in the discussion of matrix transformations, we will derive a more convenient way to obtain this matrix.

4.6 Pin-Jointed Frame Example

We illustrate the matrix assembly and solution procedures for truss structures with the simple frame shown in Fig. 38, for which we assume E = 30 × 10⁶ psi, L = 10 in, and A = 1 in².

Figure 38: Pin-Jointed Frame Example. (The figure shows the 10,000 lb applied load and the element geometry, with a/L = 1/√2, s = c for one element, and s = −c for the other.)

Before the application of constraints, the stiffness matrix is

K = k0 [  1    1     −1       −1      0    0
          1    1     −1       −1      0    0
         −1   −1    1 + 1   −1 + 1   −1    1
         −1   −1   −1 + 1    1 + 1    1   −1
          0    0     −1        1      1   −1
          0    0      1       −1     −1    1 ], (4.29)

where

k0 = (AE/L)(a/L)² = 1.5 × 10⁶ lb/in. (4.30)

After constraints, the system of equations is

k0 [ 2  0
     0  2 ] [u3; u4] = [5000; −5000√3], (4.31)

whose solution is

u3 = 1.67 × 10⁻³ in, u4 = −2.89 × 10⁻³ in. (4.32)
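The arithmetic of Eqs. 4.30–4.32 is easy to confirm; the check below simply re-derives k0, u3, and u4 from the stated data:

```python
import math

A, E, L = 1.0, 30.0e6, 10.0                       # in^2, psi, in
k0 = (A * E / L) * (1.0 / math.sqrt(2.0))**2      # (a/L)^2 = 1/2 -> 1.5e6 lb/in
u3 = 5000.0 / (2.0 * k0)
u4 = -5000.0 * math.sqrt(3.0) / (2.0 * k0)
```

This reproduces k0 = 1.5 × 10⁶ lb/in, u3 ≈ 1.67 × 10⁻³ in, and u4 ≈ −2.89 × 10⁻³ in, as quoted.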
The reactions can be recovered from the deleted rows as

R1 = k0 (−u3 + u4) = −6830 lb, R2 = k0 (u3 − u4) = 6830 lb,
R5 = k0 (−u3 − u4) = 1830 lb, R6 = k0 (−u3 − u4) = 1830 lb, (4.33)

which are in equilibrium with the applied load.

4.7 Boundary Conditions by Matrix Partitioning

We recall that, to enforce ui = 0, we could delete Row i and Column i from the stiffness matrix. By using matrix partitioning, we can instead treat nonzero constraints and recover the forces of constraint. Consider the finite element matrix system

Ku = F, (4.34)

where some DOF are specified, and some are free. We partition the unknown displacement vector into

u = [uf; us], (4.35)

where uf and us denote, respectively, the free and constrained DOF, and us is prescribed. A partitioning of the matrix system then yields

[ Kff  Kfs
  Ksf  Kss ] [uf; us] = [Ff; Fs]. (4.36)

From the first partition,

Kff uf + Kfs us = Ff or Kff uf = Ff − Kfs us, (4.37)

which can be solved for the unknown uf. The second partition then yields the forces at the constrained DOF:

Fs = Ksf uf + Kss us, (4.38)

where uf is now known. The grid point forces Fs at the constrained DOF consist of both the reactions of constraint and the applied loads, if any, at those DOF. Thus, Fs, which includes all loads at the constrained DOF, can be written as

Fs = Ps + R. (4.39)

When the applied load is distributed to the grid points, the loads at the end grid points would include both reactions and a portion of the applied load. For example, consider the beam shown in Fig. 39.
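The partitioned solution of Eqs. 4.37–4.39 can be sketched for the smallest nontrivial case, a 2-DOF system with the second DOF prescribed to a nonzero value (all numbers illustrative):

```python
# 2x2 stiffness of the two-spring system of Eq. 4.7 with k1 = 10, k2 = 20;
# the "f" (free) partition is the first DOF, the "s" (supported) the second.
Kff, Kfs = 30.0, -20.0
Ksf, Kss = -20.0, 20.0
Ff = 5.0        # load applied at the free DOF
us = 0.1        # prescribed nonzero support displacement

uf = (Ff - Kfs * us) / Kff          # Eq. 4.37
Fs = Ksf * uf + Kss * us            # Eq. 4.38: force at the constrained DOF
```

With no applied load at the support (Ps = 0), Fs is the reaction R itself (Eq. 4.39).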
Figure 39: Example With Reactions and Loads at Same DOF.

In Eq. 4.39, Ps is the applied load at the constrained DOF, and R is the vector of reactions. The reactions can then be recovered as

R = Ksf uf + Kss us − Ps. (4.40)

The disadvantage of using the partitioning approach to constraints is that the writing of computer programs is made more complicated, since one needs the general capability to partition a matrix into smaller matrices. However, for general purpose software, such a capability is essential.

4.8 Alternative Approach to Constraints

Consider a structure (Fig. 40) for which we want to prescribe a value for the ith DOF: ui = U. An alternative approach to enforce this constraint is to attach a large spring k0 between DOF i and ground (a fixed point) and to apply a force Fi to DOF i equal to k0 U. This spring should be many orders of magnitude larger than other typical values in the stiffness matrix.

Figure 40: Large Spring Approach to Constraints.

For example, if we prescribe DOF 3 in a 4-DOF system, the modified system becomes

[ k11  k12  k13        k14      [u1      [F1
  k21  k22  k23        k24       u2       F2
  k31  k32  k33 + k0   k34       u3   =   F3 + k0 U
  k41  k42  k43        k44 ]     u4]      F4        ]. (4.41)

For large k0, the third equation is

k0 u3 ≈ k0 U or u3 ≈ U, (4.42)

as required. A large number placed on the matrix diagonal will have no adverse effects on the conditioning of the matrix. The main advantage of this approach is that computer coding is easier, since matrix sizes do not have to change. The large spring approach can be used for any system of equations for which one wants to enforce a constraint on a particular unknown.

We can summarize the algorithm for applying the large spring approach for constraints to the linear system Ku = F as follows:
1. Choose the large spring k0 to be, say, 10,000 times the largest diagonal entry in the unconstrained K matrix.
2. For each DOF i which is to be constrained (zero or not), add k0 to the diagonal entry Kii, and add k0 U to the corresponding right-hand side term Fi, where U is the desired constraint value for DOF i.

4.9 Beams in Flexure

Like springs and pin-jointed rods, beam elements also have stiffness matrices which can be computed exactly. In two dimensions, we designate the four DOF associated with flexure as shown in Fig. 41.

Figure 41: DOF for Beam in Flexure (2D).

The stiffness matrix would therefore be of the form

[F1; M1; F2; M2] = [ k11  k12  k13  k14
                     k21  k22  k23  k24
                     k31  k32  k33  k34
                     k41  k42  k43  k44 ] [w1; θ1; w2; θ2], (4.43)

where wi and θi are, respectively, the transverse displacement and rotation at the ith point. Rotations are expressed in radians. To compute the first column of K, we solve the beam differential equation

EI y″ = M(x) (4.44)

subject to the boundary conditions

w1 = 1, θ1 = w2 = θ2 = 0, (4.45)

and compute the four forces, as shown in Fig. 42.

Figure 42: The Beam Problem Associated With Column 1.

For Column 2, we solve the beam equation subject to the boundary conditions

θ1 = 1, w1 = w2 = θ2 = 0, (4.46)

as shown in Fig. 43. If we then combine the resulting flexural stiffnesses with the axial
stiffness previously derived for the axial member, we obtain the two-dimensional stiffness matrix for the Euler beam:

[F1      [ AE/L      0           0         −AE/L     0           0         [u1
 F2        0         12EI/L³     6EI/L²     0       −12EI/L³     6EI/L²     u2
 F3        0         6EI/L²      4EI/L      0       −6EI/L²      2EI/L      u3
 F4   =   −AE/L      0           0          AE/L     0           0          u4
 F5        0        −12EI/L³    −6EI/L²     0        12EI/L³    −6EI/L²     u5
 F6]       0         6EI/L²      2EI/L      0       −6EI/L²      4EI/L  ]   u6], (4.47)

where the six DOF are shown in Fig. 44.

Figure 43: The Beam Problem Associated With Column 2.

Figure 44: DOF for 2D Beam Element.

For this element, note that there is no coupling between axial and transverse behavior. The three-dimensional counterpart to this matrix would have six DOF at each grid point: three translations and three rotations (ux, uy, uz, Rx, Ry, Rz). Thus, in 3D, K is a 12 × 12 matrix. In addition, for bending in two different planes, there would have to be two moments of inertia I1 and I2, in addition to a torsional constant J and (possibly) a product of inertia I12. Transverse shear, which was ignored in the Euler beam, can be added; for transverse shear, "area factors" would also be needed to reflect the fact that two different beams with the same cross-sectional area, but different cross-sectional shapes, would have different shear stiffnesses.

4.10 Direct Approach to Continuum Problems

Stiffness matrices for springs, rods, and beams can be derived exactly. For most problems of interest in engineering, however, exact stiffness matrices cannot be derived. Consider the 2D problem of computing the displacements and stresses in a thin plate with applied forces and
constraints (Fig. 45). This figure also shows the domain modeled with several triangular elements.

Figure 45: Plate With Constraints and Loads.

A typical element is shown in Fig. 46. With three grid points and two DOF at each point, this is a 6-DOF element.

Figure 46: DOF for the Linear Triangular Membrane Element.

The displacement and force vectors for the element are

u = [u1; v1; u2; v2; u3; v3], F = [F1x; F1y; F2x; F2y; F3x; F3y]. (4.48)

Since we cannot determine exactly a stiffness matrix that relates these two vectors, we approximate the displacement field over the element as

u(x, y) = a1 + a2 x + a3 y
v(x, y) = a4 + a5 x + a6 y, (4.49)

where u and v are the x and y components of displacement, respectively. Note that the number of undetermined coefficients equals the number of DOF (6) in the element. We
choose polynomials for convenience in the subsequent mathematics. The linear approximation in Eq. 4.49 is analogous to the piecewise linear approximation used in trapezoidal rule integration.

At the vertices, the displacements in Eq. 4.49 must match the grid point values, e.g.,

u1 = a1 + a2 x1 + a3 y1. (4.50)

We can write five more equations of this type to obtain

[u1      [ 1  x1  y1  0  0   0       [a1
 v1        0  0   0   1  x1  y1       a2
 u2        1  x2  y2  0  0   0        a3
 v2   =    0  0   0   1  x2  y2       a4
 u3        1  x3  y3  0  0   0        a5
 v3]       0  0   0   1  x3  y3 ]     a6] (4.51)

or

u = γa. (4.52)

Since the x and y directions uncouple in Eq. 4.51, we can write this last equation in uncoupled form:

[ 1  x1  y1
  1  x2  y2
  1  x3  y3 ] [a1; a2; a3] = [u1; u2; u3] (4.53)

and

[ 1  x1  y1
  1  x2  y2
  1  x3  y3 ] [a4; a5; a6] = [v1; v2; v3]. (4.54)

We now observe that

det [ 1  x1  y1
      1  x2  y2
      1  x3  y3 ] = 2A ≠ 0, (4.55)

unless the triangle is degenerate, since A is (from geometry) the area of the triangle; A is positive for counterclockwise ordering and negative for clockwise ordering. Thus, we conclude that γ is invertible, and

[a1; a2; a3] = (1/(2A)) [ x2y3 − x3y2   x3y1 − x1y3   x1y2 − x2y1
                          y2 − y3       y3 − y1       y1 − y2
                          x3 − x2       x1 − x3       x2 − x1 ] [u1; u2; u3]. (4.56)

Similarly,

[a4; a5; a6] = (1/(2A)) [ x2y3 − x3y2   x3y1 − x1y3   x1y2 − x2y1
                          y2 − y3       y3 − y1       y1 − y2
                          x3 − x2       x1 − x3       x2 − x1 ] [v1; v2; v3]. (4.57)
Thus, we can write

a = γ⁻¹u (4.58)

and treat the element's unknowns as being either the physical displacements u or the nonphysical coefficients a.

The strain components of interest in 2D are

ε = [εxx; εyy; γxy] = [u,x; v,y; u,y + v,x]. (4.59)

From Eq. 4.49, we evaluate the strains for this element as

ε = [a2; a6; a3 + a5] = [ 0  1  0  0  0  0
                          0  0  0  0  0  1
                          0  0  1  0  1  0 ] a = Ba = Bγ⁻¹u = Cu, (4.60)

where

C = Bγ⁻¹. (4.61)

Eq. 4.60 is a formula to compute element strains given the grid point displacements. Note that, for this element, the strains are independent of position in the element; thus, this element is referred to as the Constant Strain Triangle (CST).

From generalized Hooke's law, each stress component is a linear combination of all the strain components:

σ = Dε = DBγ⁻¹u = DCu, (4.62)

where, for example, for an isotropic material in plane stress,

D = (E/(1 − ν²)) [ 1  ν  0
                   ν  1  0
                   0  0  (1 − ν)/2 ], (4.63)

where E is Young's modulus, and ν is Poisson's ratio. Eq. 4.62 is a formula to compute element stresses given the grid point displacements. Note that, for this element, the stresses are also constant (independent of position) in the element.

We now derive the element stiffness matrix using the Principle of Virtual Work. Consider an element in static equilibrium with a set of applied loads F and displacements u. According to the Principle of Virtual Work, the work done by the applied loads acting through the displacements is equal to the increment in stored strain energy during an arbitrary virtual (imaginary) displacement δu:

δu^T F = ∫_V δε^T σ dV. (4.64)
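Eqs. 4.60–4.63 can be exercised numerically: build C = Bγ⁻¹ for a sample triangle (using the closed-form rows of Eqs. 4.56–4.57), then check that a rigid body motion of the grid points produces zero strain, while a uniform stretch produces the expected constant strain and stress. The geometry and material data below are illustrative.

```python
def cst_C(xy):
    """Strain-displacement matrix C (Eq. 4.60), DOF order (u1, v1, u2, v2, u3, v3)."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    A2 = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)   # 2A (signed)
    yb = [y2 - y3, y3 - y1, y1 - y2]     # rows of gamma^-1 (Eqs. 4.56-4.57)
    xb = [x3 - x2, x1 - x3, x2 - x1]
    C = [[yb[0], 0.0, yb[1], 0.0, yb[2], 0.0],        # eps_xx = a2
         [0.0, xb[0], 0.0, xb[1], 0.0, xb[2]],        # eps_yy = a6
         [xb[0], yb[0], xb[1], yb[1], xb[2], yb[2]]]  # gamma_xy = a3 + a5
    return [[v / A2 for v in row] for row in C]

def strains(C, u):
    return [sum(C[r][j] * u[j] for j in range(6)) for r in range(3)]

C = cst_C([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
eps_rigid = strains(C, [0, 0, 0, 1, -1, 0])        # small rigid rotation
eps_stretch = strains(C, [0, 0, 0.01, 0, 0, 0])    # u = 0.01 x -> eps_xx = 0.01

# Plane-stress D (Eq. 4.63) and constant element stresses (Eq. 4.62):
E, nu = 1.0e7, 0.3
f = E / (1 - nu**2)
D = [[f, nu * f, 0.0], [nu * f, f, 0.0], [0.0, 0.0, f * (1 - nu) / 2]]
sigma = [sum(D[i][k] * eps_stretch[k] for k in range(3)) for i in range(3)]
```

For the uniaxial strain state, the stress ratio σyy/σxx equals ν, as Eq. 4.63 requires.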
We then substitute Eqs. 4.60 and 4.62 into this equation to obtain

δu^T F = ∫_V (Cδu)^T (DCu) dV = δu^T ∫_V C^T DC dV u, (4.65)

where δu and u can be removed from the integral, since they are grid point quantities independent of position. Since δu is arbitrary, it follows that

F = ∫_V C^T DC dV u. (4.66)

Therefore, the stiffness matrix is given by

K = ∫_V C^T DC dV = ∫_V (Bγ⁻¹)^T D (Bγ⁻¹) dV, (4.67)

where the integral is over the volume of the element. Note that, since the material property matrix D is symmetric, K is symmetric. The substitution of the constant B, γ, and D matrices into this equation yields the stiffness matrix for the Constant Strain Triangle:

K = (Et / (4A(1 − ν²))) ×

[ y23²+λx32²  μx32y23     y23y31+λx13x32  νx13y23+λx32y31  y12y23+λx21x32  νx21y23+λx32y12
              x32²+λy23²  νx32y31+λx13y23 x32x13+λy31y23   νx32y12+λx21y23 x21x32+λy12y23
                          y31²+λx13²      μx13y31          y12y31+λx21x13  νx21y31+λx13y12
   (Sym)                                  x13²+λy31²       νx13y12+λx21y31 x13x21+λy12y31
                                                           y12²+λx21²      μx21y12
                                                                           x21²+λy12² ], (4.68)

where xij = xi − xj, yij = yi − yj, t is the element thickness, and

λ = (1 − ν)/2, μ = (1 + ν)/2. (4.69)

5 Change of Basis

On many occasions in engineering applications, the need arises to transform vectors and matrices from one coordinate system to another. For example, in the finite element method, it is frequently more convenient to derive element matrices in a local element coordinate system and then transform those matrices to a global system (Fig. 47). Transformations are also needed to transform from other orthogonal coordinate systems (e.g., cylindrical or spherical) to Cartesian coordinates (Fig. 48).
Figure 47: Element Coordinate Systems in the Finite Element Method.

Figure 48: Basis Vectors in Polar Coordinate System.

Let the vector v be given by

v = v1 e1 + v2 e2 + v3 e3 = Σ_{i=1}^{3} vi ei, (5.1)

where ei are the basis vectors, and vi are the components of v. With the summation convention, in which a summation is implied over a subscript that appears exactly twice, we can omit the summation sign and write

v = vi ei. (5.2)

An orthonormal basis is defined as a basis whose basis vectors are mutually orthogonal unit vectors (i.e., vectors of unit length). If ei is an orthonormal basis,

ei · ej = δij = { 1, i = j; 0, i ≠ j }, (5.3)

where δij is the Kronecker delta. Since bases are not unique, we can express v in two different orthonormal bases:

v = Σ_{i=1}^{3} vi ei = Σ_{i=1}^{3} v̄i ēi, (5.4)

where vi are the components of v in the unbarred coordinate system, and v̄i are the components in the barred system (Fig. 49). If we take the dot product of both sides of Eq. 5.4 with ej, we obtain

Σ_{i=1}^{3} vi ei · ej = Σ_{i=1}^{3} v̄i ēi · ej, (5.5)
where ei · ej = δij. We define the 3 × 3 matrix R as

Rij = ēi · ej, (5.6)

in which case Eq. 5.5 becomes

vj = Σ_{i=1}^{3} Rij v̄i = Σ_{i=1}^{3} (R^T)_{ji} v̄i. (5.7)

Since the matrix product C = AB can be written using subscript notation as

Cij = Σ_{k=1}^{3} Aik Bkj, (5.8)

Eq. 5.7 is equivalent to the matrix product

v = R^T v̄. (5.9)

Similarly, if we take the dot product of both sides of Eq. 5.4 with ēj, we obtain

Σ_{i=1}^{3} vi ei · ēj = Σ_{i=1}^{3} v̄i ēi · ēj, (5.10)

where ēi · ēj = δij, and ei · ēj = Rji. Thus,

v̄j = Σ_{i=1}^{3} Rji vi or v̄ = Rv or v = R⁻¹v̄. (5.11, 5.12)

A comparison of Eqs. 5.9 and 5.12 yields

R⁻¹ = R^T or RR^T = I or Σ_{k=1}^{3} Rik Rjk = δij, (5.13)

Figure 49: Change of Basis.
where I is the identity matrix (Iij = δij):

I = [ 1  0  0
      0  1  0
      0  0  1 ]. (5.14)

This type of transformation is called an orthogonal coordinate transformation (OCT). A matrix R satisfying Eq. 5.13 is said to be an orthogonal matrix; that is, an orthogonal matrix is one whose inverse is equal to its transpose. R is sometimes called a rotation matrix. For example, for the coordinate rotation shown in Fig. 49,

R = [  cos θ   sin θ   0
      −sin θ   cos θ   0
       0       0       1 ]. (5.15)

In 2D,

R = [  cos θ   sin θ
      −sin θ   cos θ ] (5.16)

and

vx = v̄x cos θ − v̄y sin θ
vy = v̄x sin θ + v̄y cos θ. (5.17)

We recall that the determinant of a matrix product is equal to the product of the determinants, and that the determinant of the transpose of a matrix is equal to the determinant of the matrix itself. Thus, from Eq. 5.13,

det(RR^T) = (det R)(det R^T) = (det R)² = det I = 1, (5.18)

and we conclude that, for an orthogonal matrix R,

det R = ±1. (5.19)

The plus sign occurs for rotations, and the minus sign occurs for combinations of rotations and reflections. For example, the orthogonal matrix

R = [ 1  0  0
      0  1  0
      0  0  −1 ] (5.20)

indicates a reflection in the z direction (i.e., the sign of the z component is changed).

We note that the length of a vector is unchanged under an orthogonal coordinate transformation, since the square of the length is given by

v̄i v̄i = Rij vj Rik vk = δjk vj vk = vj vj, (5.21)

where the summation convention was used. That is, the square of the length of a vector is the same in both coordinate systems.
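These properties are easy to confirm numerically for the rotation of Eq. 5.15 and the reflection of Eq. 5.20 (the angle below is arbitrary):

```python
import math

def mat_mul_T(M):   # M times its own transpose
    return [[sum(M[i][k] * M[j][k] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

th = 0.7
R_rot = [[math.cos(th), math.sin(th), 0.0],
         [-math.sin(th), math.cos(th), 0.0],
         [0.0, 0.0, 1.0]]                                      # Eq. 5.15
R_ref = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -1.0]]   # Eq. 5.20

v = [1.0, -2.0, 3.0]
v_bar = [sum(R_rot[i][j] * v[j] for j in range(3)) for i in range(3)]
```

Both matrices satisfy RR^T = I; the determinants are +1 (rotation) and −1 (reflection); and the squared length of v is unchanged, as in Eq. 5.21.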
To summarize, under an orthogonal coordinate transformation, vectors transform according to the rule

v̄ = Rv or v̄i = Σ_{j=1}^{3} Rij vj, (5.22)

where

Rij = ēi · ej (5.23)

and

RR^T = R^T R = I. (5.24)

5.1 Tensors

A vector which transforms under an orthogonal coordinate transformation according to the rule v̄ = Rv is defined as a tensor of rank 1. A tensor of rank 0 is a scalar (a quantity which is unchanged by an orthogonal coordinate transformation). For example, temperature and pressure are scalars, since T̄ = T and p̄ = p.

We now introduce tensors of rank 2. Consider a matrix M = (Mij) which relates two vectors u and v by

v = Mu or vi = Σ_{j=1}^{3} Mij uj (5.25)

(i.e., the result of multiplying a matrix and a vector is a vector). In a rotated coordinate system, the same relation is

v̄ = M̄ū. (5.26)

Since both u and v are vectors (tensors of rank 1), Eq. 5.25 implies

R^T v̄ = MR^T ū or v̄ = RMR^T ū. (5.27)

By comparing Eqs. 5.26 and 5.27, we obtain

M̄ = RMR^T (5.28)

or, in index notation,

M̄ij = Σ_{k=1}^{3} Σ_{l=1}^{3} Rik Rjl Mkl, (5.29)

which is the transformation rule for a tensor of rank 2. In general, a tensor of rank n, which has n indices, transforms under an orthogonal coordinate transformation according to the rule

Āij···k = Σ_{p=1}^{3} Σ_{q=1}^{3} · · · Σ_{r=1}^{3} Rip Rjq · · · Rkr Apq···r. (5.30)
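The rank-2 rule can be checked numerically: if v = Mu in one frame, then M̄ = RMR^T must relate the rotated vectors, v̄ = M̄ū (Eqs. 5.25–5.28). The matrix and vector below are illustrative.

```python
import math

def mv(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(M):
    return [list(r) for r in zip(*M)]

th = 0.4
R = [[math.cos(th), math.sin(th), 0.0],
     [-math.sin(th), math.cos(th), 0.0],
     [0.0, 0.0, 1.0]]
M = [[2.0, 1.0, 0.0], [1.0, 3.0, -1.0], [0.0, -1.0, 4.0]]
u = [1.0, 2.0, -1.0]

v = mv(M, u)
M_bar = mm(mm(R, M), transpose(R))      # Eq. 5.28
u_bar, v_bar = mv(R, u), mv(R, v)
```

Note also that symmetry survives the transformation: a symmetric M yields a symmetric M̄, consistent with RMR^T.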
5.2 Examples of Tensors

1. Stress and strain in elasticity. The stress tensor σ is

    σ = | σ11 σ12 σ13 |
        | σ21 σ22 σ23 |,                                            (5.31)
        | σ31 σ32 σ33 |

where σ11, σ22, σ33 are the direct (normal) stresses, and σ12, σ13, and σ23 are the shear stresses. The corresponding strain tensor is

    ε = | ε11 ε12 ε13 |
        | ε21 ε22 ε23 |,                                            (5.32)
        | ε31 ε32 ε33 |

where, in terms of displacements,

    εij = ½(ui,j + uj,i) = ½( ∂ui/∂xj + ∂uj/∂xi ).                  (5.33)

The shear strains in this tensor are equal to half the corresponding engineering shear strains. Both σ and ε transform like second rank tensors.

2. Generalized Hooke's law. According to Hooke's law in elasticity, the extension in a bar subjected to an axial force is proportional to the force, or stress is proportional to strain. In 1D,

    σ = Eε,                                                         (5.34)

where E is Young's modulus, an experimentally determined material property. In general three-dimensional elasticity, there are nine components of stress σij and nine components of strain εij. According to generalized Hooke's law, each stress component can be written as a linear combination of the nine strain components:

    σij = cijkl εkl,                                                (5.35)

where the 81 material constants cijkl are experimentally determined, and the summation convention is being used.

We now prove that cijkl is a tensor of rank 4. We can write Eq. 5.35 in terms of stress and strain in a second orthogonal coordinate system as

    Rki Rlj σ̄kl = cijkl Rmk Rnl ε̄mn.                                (5.36)

If we multiply both sides of this equation by Rpj Roi, and sum repeated indices, we obtain

    Roi Rpj Rki Rlj σ̄kl = Roi Rpj Rmk Rnl cijkl ε̄mn.                (5.37)
or, because R is an orthogonal matrix,

    δok δpl σ̄kl = σ̄op = Roi Rpj Rmk Rnl cijkl ε̄mn.                  (5.38)

Since, in the second coordinate system,

    σ̄op = c̄opmn ε̄mn,                                                (5.39)

we conclude that

    c̄opmn = Roi Rpj Rmk Rnl cijkl,                                  (5.40)

which proves that cijkl is a tensor of rank 4.

3. Stiffness matrix in finite element analysis. In the finite element method for linear analysis of structures, the forces F acting on an object in static equilibrium are a linear combination of the displacements u (or vice versa):

    Ku = F,                                                         (5.41)

where K is referred to as the stiffness matrix (with dimensions of force/displacement). In this equation, u and F contain several subvectors, since u and F are the displacement and force vectors for all grid points. Thus, for grid points a, b, c, ...,

    F = | Fa |        u = | ua |
        | Fb |,           | ub |                                    (5.42)
        | Fc |            | uc |
        | ⋮  |            | ⋮  |

In the rotated (barred) coordinate system, the grid point displacements are ūa = Ra ua, ūb = Rb ub, ..., so that

    ū = | ūa |
        | ūb | = Γu,                                                (5.43)
        | ūc |
        | ⋮  |

where Γ is an orthogonal block-diagonal matrix consisting of rotation matrices R:

    Γ = | Ra  0   0  ··· |
        | 0   Rb  0  ··· |                                          (5.44)
        | 0   0   Rc ··· |
        | ⋮   ⋮   ⋮   ⋱  |

and

    Γ^T Γ = I.                                                      (5.45)
(5. Similarly.55) R 0 0 R .47) We illustrate the transformation of a ﬁnite element stiﬀness matrix by transforming the stiﬀness matrix for the pinjointed rod element from a local element coordinate system to a global Cartesian system. 50. (5. 1 0 L −1 0 0 0 0 0 where A is the crosssectional area of the rod. Consider the rod shown in Fig.q y T θ y ¯ s d ¯ x dq E 4 dq DOF θ 2 s d 1 d q s d 3 x Figure 50: Element Coordinate System for PinJointed Rod. Thus. the 4 × 4 2D stiﬀness matrix in the element coordinate system is 1 0 −1 0 0 0 ¯ = AE 0 0 (5. (5. For this element. That is. ¯ KΓu = ΓF or ¯ ΓT KΓ u = F. if ¯u ¯ K¯ = F.53) . and L is the rod length. E is Young’s modulus of the rod material.49) (5.48) (5. ¯ K = ΓT KΓ. where the 4 × 4 transformation matrix is Γ= and the rotation matrix is R= cos θ sin θ − sin θ cos θ 56 .54) (5.52) K . the stiﬀness matrix transforms like other tensors of rank 2: ¯ K = ΓT KΓ.51) (5.50) (5. ¯ F = ΓF. In the global coordinate system.
Thus,

    K = (AE/L) |  c −s  0  0 |   |  1  0 −1  0 |   |  c  s  0  0 |
               |  s  c  0  0 | · |  0  0  0  0 | · | −s  c  0  0 |
               |  0  0  c −s |   | −1  0  1  0 |   |  0  0  c  s |
               |  0  0  s  c |   |  0  0  0  0 |   |  0  0 −s  c |

      = (AE/L) |  c²   cs  −c²  −cs |
               |  cs   s²  −cs  −s² |,                              (5.56)
               | −c²  −cs   c²   cs |
               | −cs  −s²   cs   s² |

where c = cos θ and s = sin θ.

5.3 Isotropic Tensors

An isotropic tensor is a tensor which is independent of coordinate system (i.e., invariant under an orthogonal coordinate transformation). The Kronecker delta δij is a second rank tensor and isotropic, since δ̄ij = δij. That is, the identity matrix I is invariant under an orthogonal coordinate transformation:

    Ī = R I R^T = R R^T = I.                                        (5.57)

This result agrees with that found earlier in Eq. 5.27.

It can be shown in tensor analysis that δij is the only isotropic tensor of rank 2 and, moreover, δij is the characteristic tensor for all isotropic tensors:

    Rank    Isotropic Tensors
    1       none
    2       cδij
    3       none
    4       aδij δkl + bδik δjl + cδil δjk
    odd     none

That is, all isotropic tensors of rank 4 must be of the form shown above, which has three constants. For example, in generalized Hooke's law (Eq. 5.35), the material property tensor cijkl has 3⁴ = 81 constants (assuming no symmetry). For an isotropic material, cijkl must be an isotropic tensor of rank 4, thus implying at most three material constants (on the basis of tensor analysis alone). The actual number of independent material constants for an isotropic material turns out to be two rather than three, a result which depends on the existence of a strain energy function, which implies the additional symmetry cijkl = cklij.

6 Calculus of Variations

Recall from calculus that a function of one variable y = f(x) attains a stationary value (minimum, maximum, or neutral) at the point x = x0 if the derivative f′(x) = 0 at x = x0.
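The rod-stiffness transformation of Eq. 5.56 can be verified numerically. This sketch (plain Python; AE/L is taken as 1 for convenience, and the helper names are ours) forms K = Γ^T K̄ Γ directly and compares it with the closed-form expression in c and s:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

c, s = math.cos(0.5), math.sin(0.5)   # arbitrary element orientation

# element-coordinate rod stiffness (Eq. 5.53) with AE/L = 1
Kbar = [[1.0, 0.0, -1.0, 0.0],
        [0.0, 0.0,  0.0, 0.0],
        [-1.0, 0.0, 1.0, 0.0],
        [0.0, 0.0,  0.0, 0.0]]

# block-diagonal transformation matrix (Eq. 5.54)
Gamma = [[c, s, 0.0, 0.0],
         [-s, c, 0.0, 0.0],
         [0.0, 0.0, c, s],
         [0.0, 0.0, -s, c]]

# K = Gamma^T Kbar Gamma  (Eq. 5.50)
K = matmul(transpose(Gamma), matmul(Kbar, Gamma))

# closed form of Eq. 5.56
K_closed = [[ c*c,  c*s, -c*c, -c*s],
            [ c*s,  s*s, -c*s, -s*s],
            [-c*c, -c*s,  c*c,  c*s],
            [-c*s, -s*s,  c*s,  s*s]]
assert all(abs(K[i][j] - K_closed[i][j]) < 1e-12
           for i in range(4) for j in range(4))
```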
That is, for a stationary value, the first variation of f (which is similar to a differential) must vanish for arbitrary changes δx in x:

    δf = (df/dx) δx = 0.                                            (6.1)

The second derivative determines what kind of stationary value one has:

    minimum:  d²f/dx² > 0 at x = x0,
    maximum:  d²f/dx² < 0 at x = x0,                                (6.2)
    neutral:  d²f/dx² = 0 at x = x0,

as shown in Fig. 51.

Figure 51: Minimum, Maximum, and Neutral Stationary Values.

For a function of two variables z = f(x, y), z is stationary at (x0, y0) if

    δf = (∂f/∂x) δx + (∂f/∂y) δy = 0 at (x0, y0)                    (6.3)

for arbitrary δx and δy. Alternatively, ∂f/∂x = 0 and ∂f/∂y = 0 at (x0, y0). A function with n independent variables f(x1, x2, ..., xn) is stationary at a point P if

    δf = Σ(i=1..n) (∂f/∂xi) δxi = 0 at P                            (6.4)

for arbitrary variations δxi. Alternatively, ∂f/∂xi = 0 at P (i = 1, 2, ..., n).

Variational calculus is concerned with finding stationary values, not of functions, but of functionals. In general, a functional is a function of a function. More precisely, a functional is an integral which has a specific numerical value for each function that is substituted into it.
Consider the functional

    I(φ) = ∫ab F(x, φ, φx) dx,                                      (6.5)

where x is the independent variable, φ(x) is the dependent variable, and φx = dφ/dx. The variation in I due to an arbitrary small change in φ is

    δI = ∫ab δF dx = ∫ab ( ∂F/∂φ δφ + ∂F/∂φx δφx ) dx,              (6.6)

where, with an interchange of the order of d and δ,

    δφx = δ(dφ/dx) = d(δφ)/dx.                                      (6.7)

With this equation, the second term in Eq. 6.6 can be integrated by parts to obtain

    ∫ab ∂F/∂φx δφx dx = ∫ab ∂F/∂φx d(δφ)/dx dx
        = [ ∂F/∂φx δφ ]ab − ∫ab d/dx( ∂F/∂φx ) δφ dx.               (6.8, 6.9)

Thus,

    δI = ∫ab [ ∂F/∂φ − d/dx( ∂F/∂φx ) ] δφ dx + [ ∂F/∂φx δφ ]ab.    (6.10)

δI = 0 implies that both terms in Eq. 6.10 must vanish for arbitrary δφ. Since δφ is arbitrary (within certain limits of admissibility), a zero integral in Eq. 6.10 implies a zero integrand:

    d/dx( ∂F/∂φx ) − ∂F/∂φ = 0,                                     (6.11)

which is referred to as the Euler-Lagrange equation. The second term in Eq. 6.10 must also vanish for arbitrary δφ:

    [ ∂F/∂φx δφ ]ab = 0.                                            (6.12)

If φ is specified at the boundaries x = a and x = b,

    δφ(a) = δφ(b) = 0,                                              (6.13)

and Eq. 6.12 is automatically satisfied. This type of boundary condition is called a geometric or essential boundary condition. If φ is not specified at the boundaries x = a and x = b, Eq. 6.12 implies

    ∂F/∂φx (a) = ∂F/∂φx (b) = 0.                                    (6.14)

This type of boundary condition is called a natural boundary condition.
6.1 Example 1: The Shortest Distance Between Two Points

We wish to find the equation of the curve y(x) along which the distance from the origin (0, 0) to (1, 1) in the xy-plane is least. Consider the curve in Fig. 52.

Figure 52: Curve of Minimum Length Between Two Points.

The differential arc length along the curve is given by

    ds = √(dx² + dy²) = √(1 + (y′)²) dx.                            (6.15)

We seek the curve y(x) which minimizes the total arc length

    I(y) = ∫0^1 √(1 + (y′)²) dx,                                    (6.16)

where y(0) = 0 and y(1) = 1. For this problem,

    F(x, y, y′) = √(1 + (y′)²),                                     (6.17)

which depends only on y′ explicitly. Thus, the Euler-Lagrange equation, Eq. 6.11, is

    d/dx( ∂F/∂y′ ) = 0,                                             (6.18)

where

    ∂F/∂y′ = y′ / [1 + (y′)²]^(1/2).                                (6.19)

Hence,

    y′ / [1 + (y′)²]^(1/2) = c,                                     (6.20)

where c is a constant of integration. If we solve this equation for y′, we obtain

    y′ = c/√(1 − c²) = a,                                           (6.21)

where a is another constant. Thus,

    y(x) = ax + b,                                                  (6.22)

and, with the boundary conditions,

    y(x) = x,                                                       (6.23)

which is the straight line joining (0, 0) and (1, 1), as expected.
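A quick numerical illustration of this result (a sketch; the "detour" curve is just one arbitrary admissible competitor with the same end points): the straight line y = x has arc length √2, and a curve that bulges away from it is longer.

```python
import math

def arc_length(yprime, n=10000):
    """Midpoint-rule approximation of the arc-length integral of Eq. 6.16,
    given the derivative y'(x) of an admissible curve on [0, 1]."""
    h = 1.0 / n
    return sum(math.sqrt(1.0 + yprime((i + 0.5) * h) ** 2)
               for i in range(n)) * h

# straight line y = x  (y' = 1): the Euler-Lagrange minimizer
straight = arc_length(lambda x: 1.0)

# detour y = x + 0.3 sin(pi x)  (same end points y(0)=0, y(1)=1)
detour = arc_length(lambda x: 1.0 + 0.3 * math.pi * math.cos(math.pi * x))

assert abs(straight - math.sqrt(2.0)) < 1e-6
assert detour > straight
```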
6.2 Example 2: The Brachistochrone

We wish to determine the path y(x) from the origin O to the point A along which a bead will slide under the force of gravity and without friction in the shortest time (Fig. 53). Assume that the bead starts from rest.

Figure 53: The Brachistochrone Problem.

The velocity v along the path y(x) is

    v = ds/dt = √(dx² + dy²)/dt = √(1 + (y′)²) dx/dt,               (6.24)

where s denotes distance along the path. Thus,

    dt = √(1 + (y′)²) dx / v.                                       (6.25)

As the bead slides down the path, potential energy is converted to kinetic energy. At any location x, the kinetic energy equals the reduction in potential energy:

    ½mv² = mgx                                                      (6.26)

or

    v = √(2gx).                                                     (6.27)

Thus, from Eq. 6.25,

    dt = √[ (1 + (y′)²)/(2gx) ] dx,                                 (6.28)

and the total time for the bead to fall from O to A is

    t = ∫0^xA √[ (1 + (y′)²)/(2gx) ] dx,                            (6.29)

which is the integral to be minimized. From Eq. 6.11, the Euler-Lagrange equation is

    d/dx( ∂F/∂y′ ) − ∂F/∂y = 0,                                     (6.30)

where

    F(x, y, y′) = √[ (1 + (y′)²)/(2gx) ].                           (6.31)
Also, ∂F/∂y = 0. Thus, the Euler-Lagrange equation becomes

    d/dx [ y′ / {x [1 + (y′)²]}^(1/2) ] = 0,                        (6.32)

where the constant factor 1/√(2g) has been discarded. To solve this equation, we integrate Eq. 6.32 to obtain

    y′ / {x [1 + (y′)²]}^(1/2) = c,                                 (6.33)

where c is a constant of integration. We then square both sides of this equation, and solve for y′:

    (y′)² = c²x [1 + (y′)²]                                         (6.34)

or

    dy/dx = √[ c²x / (1 − c²x) ].                                   (6.35)

This equation can be integrated using the change of variable

    x = (1/c²) sin²(θ/2),  dx = (1/c²) sin(θ/2) cos(θ/2) dθ.        (6.36, 6.37)

Eq. 6.35 then transforms to

    y = ∫ [sin(θ/2)/cos(θ/2)] (1/c²) sin(θ/2) cos(θ/2) dθ
      = (1/c²) ∫ sin²(θ/2) dθ
      = (1/(2c²)) ∫ (1 − cos θ) dθ = (1/(2c²)) (θ − sin θ) + y0,    (6.38, 6.39)

where y0 is a constant of integration. Since the curve must pass through the origin, we must have y0 = 0. Also,

    x = (1/(2c²)) (1 − cos θ).                                      (6.40)

If we then replace the constant a = 1/(2c²), we obtain the final result in parametric form

    x = a(1 − cos θ),  y = a(θ − sin θ),                            (6.41)

which is a cycloid generated by the motion of a fixed point on the circumference of a circle of radius a which rolls on the positive side of the line x = 0. To solve Eq. 6.41 for a specific cycloid (defined by the two end points), one can first eliminate the circle radius a from Eq. 6.41 to solve (iteratively) for θA (the value of θ at Point A), after which either of the two equations in Eq. 6.41 can be used to determine the constant a.

This brachistochrone solution is valid for any end point which the bead can reach, i.e., the end point must not be higher than the starting point. The end point may be at the same elevation since, without friction, there is no loss of energy during the motion. The resulting cycloids for several end points are shown in Fig. 54.
Figure 54: Several Brachistochrone Solutions.

6.3 Constraint Conditions

Suppose we want to extremize the functional

    I(φ) = ∫ab F(x, φ, φx) dx                                       (6.42)

subject to the additional constraint condition

    ∫ab G(x, φ, φx) dx = J,                                         (6.43)

where J is a specified constant. We recall that the Euler-Lagrange equation was found by requiring that δI = 0. However, since J is a constant, δJ = 0, and we may equally require

    δ(I + λJ) = 0,                                                  (6.44)

where λ is an unknown constant referred to as a Lagrange multiplier. Thus, to enforce the constraint in Eq. 6.43, we can replace F in the Euler-Lagrange equation with

    H = F + λG.                                                     (6.45)

6.4 Example 3: A Constrained Minimization Problem

We wish to find the function y(x) which minimizes the integral

    I(y) = ∫0^1 (y′)² dx                                            (6.46)

subject to the end conditions y(0) = y(1) = 0 and the constraint

    ∫0^1 y dx = 1.                                                  (6.47)

That is, the area under the curve y(x) is unity (Fig. 55).
z.49) (6. the integration is a volume integral. ∂φ ∂φx ∂φy ∂φz 64 (6. we obtain the parabola y(x) = 6x(1 − x). z).51) (6. 4 (6. we obtain y= λ 2 x + Ax + B.52) (6.48) With the boundary conditions y(0) = y(1) = 0. φy . ∂y (6. Thus.47.55) 6. Note that. y. 0) x Figure 55: A Constrained Minimization Problem. 24 (6.56) a functional with three independent variables (x. After integrating this equation.5 Functions of Several Independent Variables I(φ) = V Consider F (x. H(x. in 3D.50) ∂H ∂y − ∂H = 0. φ. The area constraint. 6. we obtain B = 0 and A = −λ/4. φx . Eq.57) . φz ) dV. d dx yields d (2y ) − λ = 0 dx or y = λ/2. (6.y T Area=1 r r E (0. y.54) Thus. y ) = (y )2 + λy. y. 0) (1. is used to evaluate the Lagrange multiplier λ: 1 1 (6. The EulerLagrange equation. The variation in I due to an arbitrary small change in φ is δI = V ∂F ∂F ∂F ∂F δφ + δφx + δφy + δφz dV. (6. with λ = −24.53) 1= 0 y dx = − 0 λ λ x − x2 dx = − 4 4 x 2 x3 − 2 3 1 =− 0 λ . y = −λx(1 − x)/4.
where, with an interchange of the order of ∂ and δ (so that δφx = ∂(δφ)/∂x, etc.),

    δI = ∫V [ ∂F/∂φ δφ + ∂F/∂φx ∂(δφ)/∂x + ∂F/∂φy ∂(δφ)/∂y + ∂F/∂φz ∂(δφ)/∂z ] dV.   (6.58)

To perform a three-dimensional integration by parts, we note that the second term in Eq. 6.58 can be expanded to yield

    ∂F/∂φx ∂(δφ)/∂x = ∂/∂x( ∂F/∂φx δφ ) − ∂/∂x( ∂F/∂φx ) δφ.        (6.59)

If we then expand the third and fourth terms similarly, Eq. 6.58 becomes

    δI = ∫V [ ∂F/∂φ − ∂/∂x(∂F/∂φx) − ∂/∂y(∂F/∂φy) − ∂/∂z(∂F/∂φz) ] δφ dV
       + ∫V [ ∂/∂x(∂F/∂φx δφ) + ∂/∂y(∂F/∂φy δφ) + ∂/∂z(∂F/∂φz δφ) ] dV.   (6.60)

The last three terms in this integral can then be converted to a surface integral using the divergence theorem, which states that, for a vector field f,

    ∫V ∇·f dV = ∫S f·n dS,                                          (6.61)

where S is the closed surface which encloses the volume V. Eq. 6.60 then becomes, after regrouping,

    δI = ∫V [ ∂F/∂φ − ∂/∂x(∂F/∂φx) − ∂/∂y(∂F/∂φy) − ∂/∂z(∂F/∂φz) ] δφ dV
       + ∫S [ ∂F/∂φx nx + ∂F/∂φy ny + ∂F/∂φz nz ] δφ dS,            (6.62)

where n = (nx, ny, nz) is the unit outward normal on the surface. Since δI = 0, both integrals in this equation must vanish for arbitrary δφ. It therefore follows that the integrand in the volume integral must vanish, yielding the Euler-Lagrange equation for three independent variables:

    ∂/∂x(∂F/∂φx) + ∂/∂y(∂F/∂φy) + ∂/∂z(∂F/∂φz) − ∂F/∂φ = 0.         (6.63)

If φ is specified on the boundary S, δφ = 0 on S, and the boundary integral in Eq. 6.62 automatically vanishes. If φ is not specified on S, the integrand in the boundary integral must vanish:

    ∂F/∂φx nx + ∂F/∂φy ny + ∂F/∂φz nz = 0 on S.                     (6.64)

This is the natural boundary condition when φ is not specified on the boundary.
6.6 Example 4: Poisson's Equation

Consider the functional

    I(φ) = ∫V [ ½(φx² + φy² + φz²) − Qφ ] dV,                       (6.65)

in which case

    F(x, y, z, φ, φx, φy, φz) = ½(φx² + φy² + φz²) − Qφ.            (6.66)

The Euler-Lagrange equation, Eq. 6.63, implies

    ∂(φx)/∂x + ∂(φy)/∂y + ∂(φz)/∂z − (−Q) = 0,                      (6.67)

that is,

    φxx + φyy + φzz = −Q                                            (6.68)

or

    ∇²φ = −Q,                                                       (6.69)

which is Poisson's equation. Thus, we have shown that solving Poisson's equation is equivalent to minimizing the functional I in Eq. 6.65. In general, a key conclusion of this discussion of variational calculus is that solving a differential equation is equivalent to minimizing some functional involving an integral.

6.7 Functions of Several Dependent Variables

Consider the functional

    I(φ1, φ2, φ3) = ∫ab F(x, φ1, φ2, φ3, φ1′, φ2′, φ3′) dx,         (6.70)

where x is the independent variable, and φ1(x), φ2(x), and φ3(x) are the dependent variables. It can be shown that the generalization of the Euler-Lagrange equation, Eq. 6.11, for this case is

    d/dx( ∂F/∂φ1′ ) − ∂F/∂φ1 = 0,                                   (6.71)
    d/dx( ∂F/∂φ2′ ) − ∂F/∂φ2 = 0,                                   (6.72)
    d/dx( ∂F/∂φ3′ ) − ∂F/∂φ3 = 0.                                   (6.73)

7 Variational Approach to the Finite Element Method

In modern engineering analysis, one of the most important applications of the energy theorems, such as the minimum potential energy theorem in elasticity, is the finite element method, a numerical procedure for solving partial differential equations. For linear equations, finite element solutions are often based on variational methods.
7.1 Index Notation and Summation Convention

Let a be the vector

    a = a1 e1 + a2 e2 + a3 e3,                                      (7.1)

where ei is the unit vector in the ith Cartesian coordinate direction, and ai is the ith component of a (the projection of a on ei). Consider a second vector

    b = b1 e1 + b2 e2 + b3 e3.                                      (7.2)

Then the dot product (scalar product) is

    a · b = a1 b1 + a2 b2 + a3 b3 = Σ(i=1..3) ai bi.                (7.3)

Using the summation convention, if a subscript appears exactly twice, a summation is implied over the range; the range is 1, 2, 3 in 3D and 1, 2 in 2D. Thus, we can omit the summation symbol and write

    a · b = ai bi.                                                  (7.4)

Let f(x1, x2, x3) be a scalar function of the Cartesian coordinates x1, x2, x3. We define the shorthand comma notation for derivatives:

    ∂f/∂xi = f,i,                                                   (7.5)

where the subscript ",i" denotes the partial derivative with respect to the ith Cartesian coordinate direction. Using the comma notation and the summation convention, a variety of familiar quantities can be written in compact form. For example, the gradient can be written

    ∇f = (∂f/∂x1) e1 + (∂f/∂x2) e2 + (∂f/∂x3) e3 = f,i ei.          (7.6)

For a vector-valued function F(x1, x2, x3), the divergence is written

    ∇ · F = ∂F1/∂x1 + ∂F2/∂x2 + ∂F3/∂x3 = Fi,i,                     (7.7)

and the Laplacian of the scalar function f(x1, x2, x3) is

    ∇²f = ∇ · ∇f = ∂²f/∂x1² + ∂²f/∂x2² + ∂²f/∂x3² = f,ii.           (7.8)

The divergence theorem states that, for any vector field F(x1, x2, x3) defined in a volume V bounded by a closed surface S,

    ∫V ∇ · F dV = ∫S F · n dS                                       (7.9)
or, in index notation,

    ∫V Fi,i dV = ∫S Fi ni dS.                                       (7.10)

The normal derivative can be written in index notation as

    ∂φ/∂n = ∇φ · n = φ,i ni.                                        (7.11)

The dot product of two gradients can be written

    ∇φ · ∇φ = (φ,i ei) · (φ,j ej) = φ,i φ,j ei · ej = φ,i φ,j δij = φ,i φ,i,   (7.12)

where δij is the Kronecker delta defined as

    δij = 1, i = j;  δij = 0, i ≠ j.                                (7.13)

Also, in index notation, ∇²φ = φ,ii.

7.2 Deriving Variational Principles

For each partial differential equation of interest, one needs a functional which a solution makes stationary. Given the functional, it is generally easy to see what partial differential equation corresponds to it. The harder problem is to start with the equation and derive the variational principle (i.e., derive the functional which is to be minimized). To simplify this discussion, we will consider only scalar, rather than vector, problems. A scalar problem has one dependent variable and one partial differential equation, whereas a vector problem has several dependent variables coupled to each other through a system of partial differential equations.

Consider Poisson's equation subject to both Dirichlet and Neumann boundary conditions:

    ∇²φ + f = 0 in V,
    φ = φ0 on S1,                                                   (7.14)
    ∂φ/∂n + g = 0 on S2.

This problem arises, for example, in (1) steady-state heat conduction, where the temperature and heat flux are specified on different parts of the boundary, (2) potential fluid flow, where the velocity potential and velocity are specified on different parts of the boundary, and (3) torsion in elasticity.

We wish to find a functional, I(φ) say, whose first variation δI vanishes for φ satisfying the above boundary value problem. From Eq. 7.14a,

    0 = ∫V (φ,ii + f) δφ dV                                         (7.15)
      = ∫V [ (φ,i δφ),i − φ,i δφ,i ] dV + ∫V f δφ dV                (7.16)
      = ∫S φ,i ni δφ dS − ∫V φ,i δφ,i dV + ∫V f δφ dV               (7.17)
      = ∫S (∇φ · n) δφ dS − δ ∫V ½ φ,i φ,i dV + δ ∫V f φ dV,        (7.18)

since φ,i δφ,i = δ(½ φ,i φ,i) and f δφ = δ(fφ).
Then, since φ is specified on S1 (so that δφ = 0 on S1), and since ∇φ · n = ∂φ/∂n = −g on S2,

    0 = −∫S2 g δφ dS − δ ∫V ( ½ φ,i φ,i − f φ ) dV                  (7.19, 7.20)
      = −δ [ ∫V ( ½ φ,i φ,i − f φ ) dV + ∫S2 g φ dS ] = −δI(φ).     (7.21)

Hence, the functional for this boundary value problem is

    I(φ) = ∫V ( ½ φ,i φ,i − f φ ) dV + ∫S2 g φ dS                   (7.22)

or, in vector notation,

    I(φ) = ∫V ( ½ ∇φ · ∇φ − f φ ) dV + ∫S2 g φ dS                   (7.23)

or, in expanded form,

    I(φ) = ∫V { ½ [ (∂φ/∂x)² + (∂φ/∂y)² + (∂φ/∂z)² ] − f φ } dV + ∫S2 g φ dS.   (7.24)

If we were given the functional, we could determine which partial differential equation corresponds to it by computing and setting to zero the first variation of the functional. From Eq. 7.22,

    δI = ∫V ( φ,i δφ,i − f δφ ) dV + ∫S2 g δφ dS                    (7.25)
       = ∫V [ (φ,i δφ),i − φ,ii δφ − f δφ ] dV + ∫S2 g δφ dS        (7.26)
       = ∫S φ,i ni δφ dS − ∫V ( φ,ii + f ) δφ dV + ∫S2 g δφ dS      (7.27, 7.28)
       = −∫V ( φ,ii + f ) δφ dV + ∫S2 ( ∂φ/∂n + g ) δφ dS,          (7.29)

where δφ = 0 on S1, and φ,i ni = ∇φ · n = ∂φ/∂n. Stationary I (δI = 0) for arbitrary admissible δφ thus yields the original partial differential equation and Neumann boundary condition.
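The equivalence between the boundary value problem and the functional can be demonstrated with a tiny Ritz-style calculation (a sketch of our own, not from the text): for the 1D analog I(φ) = ∫0^1 (½(φ′)² − fφ) dx with φ(0) = φ(1) = 0 and f = 1, minimizing I over the one-parameter trial family φ = c·x(1 − x) recovers the exact solution φ = x(1 − x)/2 of φ″ + 1 = 0:

```python
def I(c, n=1000):
    """Midpoint-rule value of the 1-D functional I = int_0^1 (0.5*phi'^2 - phi) dx
    for the trial function phi = c*x*(1-x), with source f = 1."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        phi = c * x * (1.0 - x)
        phip = c * (1.0 - 2.0 * x)   # phi'
        total += (0.5 * phip * phip - phi) * h
    return total

# sample c on a grid and pick the minimizer of I(c) = c^2/6 - c/6
cs = [k / 200.0 for k in range(201)]
c_best = min(cs, key=I)

# the minimum is at c = 1/2, i.e., phi = x(1-x)/2, which solves phi'' + 1 = 0
assert abs(c_best - 0.5) < 1e-2
```

Minimizing the functional over a richer (element-by-element) trial space is exactly what the finite element method of the next section does.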
7.3 Shape Functions

Consider a two-dimensional field problem for which we seek the function φ(x, y) satisfying some (unspecified) partial differential equation in a domain. Let us subdivide the domain into triangular finite elements, as shown in Fig. 56.

Figure 56: Two-Dimensional Finite Element Mesh.

A typical element, shown in Fig. 57, has three degrees of freedom (DOF): φ1, φ2, and φ3. The three DOF are the values of the fundamental unknown φ at the three grid points. (The number of DOF for an element represents the number of pieces of data needed to determine uniquely the solution for the element.)

Figure 57: Triangular Finite Element.

For numerical solution, we approximate φ(x, y) using a polynomial in two variables having the same number of terms as the number of DOF. Thus, for each element, we assume

    φ(x, y) = a1 + a2 x + a3 y,                                     (7.30)

where a1, a2, and a3 are unknown coefficients. Eq. 7.30 is a complete linear polynomial in two variables. We can evaluate this equation at the three grid points to obtain

    | φ1 |   | 1 x1 y1 | | a1 |
    | φ2 | = | 1 x2 y2 | | a2 |.                                    (7.31)
    | φ3 |   | 1 x3 y3 | | a3 |

This matrix equation can be inverted to yield

    | a1 |          | x2y3 − x3y2   x3y1 − x1y3   x1y2 − x2y1 | | φ1 |
    | a2 | = (1/2A) |   y2 − y3       y3 − y1       y1 − y2   | | φ2 |,   (7.32)
    | a3 |          |   x3 − x2       x1 − x3       x2 − x1   | | φ3 |

where

    2A = | 1 x1 y1 |
         | 1 x2 y2 | = x2y3 − x3y2 − x1y3 + x3y1 + x1y2 − x2y1.     (7.33)
         | 1 x3 y3 |
Note that the area of the triangle is A; the determinant 2A is positive or negative depending on whether the grid point ordering in the triangle is counterclockwise or clockwise. Since Eq. 7.31 is invertible, we can interpret the element DOF as either the grid point values φi, which have physical meaning, or the nonphysical polynomial coefficients ai.

From Eq. 7.30, the scalar unknown φ can then be written in the matrix form

    φ(x, y) = [ 1  x  y ] | a1 |
                          | a2 |                                    (7.34)
                          | a3 |

            = [ 1  x  y ] (1/2A) | x2y3 − x3y2   x3y1 − x1y3   x1y2 − x2y1 | | φ1 |
                                 |   y2 − y3       y3 − y1       y1 − y2   | | φ2 |   (7.35)
                                 |   x3 − x2       x1 − x3       x2 − x1   | | φ3 |

            = [ N1(x, y)  N2(x, y)  N3(x, y) ] | φ1 |
                                               | φ2 |                (7.36)
                                               | φ3 |

or

    φ(x, y) = N1(x, y)φ1 + N2(x, y)φ2 + N3(x, y)φ3 = Ni φi,         (7.37)

where

    N1(x, y) = [ (x2y3 − x3y2) + (y2 − y3)x + (x3 − x2)y ] / (2A),
    N2(x, y) = [ (x3y1 − x1y3) + (y3 − y1)x + (x1 − x3)y ] / (2A),  (7.38)
    N3(x, y) = [ (x1y2 − x2y1) + (y1 − y2)x + (x2 − x1)y ] / (2A).

Notice that all the subscripts in these equations form a cyclic permutation of the numbers 1, 2, 3. Hence, knowing N1, we can write down N2 and N3 by a cyclic permutation of the subscripts. That is, if we let i, j, and k denote the three grid points of the triangle, the above equations can be written in the compact form

    Ni(x, y) = (1/2A) [ (xj yk − xk yj) + (yj − yk)x + (xk − xj)y ],   (7.39)

where (i, j, k) can be selected to be any cyclic permutation of (1, 2, 3), such as (1, 2, 3), (2, 3, 1), or (3, 1, 2).

Eq. 7.37 implies that N1 can be thought of as the shape function associated with Point 1, since N1 is the form (or "shape") that φ would take if φ1 = 1 and φ2 = φ3 = 0. In particular, at Point 1, φ1 = φ(x1, y1) = N1(x1, y1)φ1, or N1(x1, y1) = 1. In general, the functions Ni are called shape functions or interpolation functions and are used extensively in finite element theory.
7. 72 (7. φ(x. (7. x = (x1 + x2 + x3 )/3.1 § q ¦ L 2¤ q ¥ E x Figure 58: Axial Member (PinJointed Truss Element). and use Eq. since each grid point has equal inﬂuence on the centroid. y2 )φ1 or N1 (x2 . y3 ) = 0. y = (y1 + y2 + y3 )/3. In general. The displacement u(x) of an axial structural member (a pinjointed truss member) (Fig. since the shape functions for the 3point triangle are linear in x and y. Thus. y) = N1 (x. 0 = φ2 = φ(x2 . y)φr = Ni φi . (7. y ) = [(x2 y3 − x3 y2 ) + (y2 − y3 )¯ + (x3 − x2 )¯]/(2A) = . Example.43) The preceding discussion assumed an element having three grid points. if the shape functions are evaluated at the triangle centroid. 7.49) This is a convenient way to represent the ﬁnite element approximation within an element.38 gives the shape functions for the linear triangle. yj ) = δij . To prove this result.38.33: 1 N1 (¯.46) Eq. We note that.47) (7. φ can be evaluated anywhere in the element. y)φ2 + · · · + Nr (x. (7. y2 ) = 0. x ¯ x y 3 (7.37. the gradients in the x or y directions are constant. ∂N1 ∂φ = φ1 + ∂x ∂x ∂φ ∂N1 = φ1 + ∂y ∂y ∂N2 φ2 + ∂x ∂N2 φ2 + ∂y ∂N3 φ3 = [(y2 − y3 )φ1 + (y3 − y1 )φ2 + (y1 − y2 )φ3 ]/(2A) (7.52) . N1 = N2 = N3 = 1/3. 7. Similarly.45) That is. (7. at Point 2. ¯ ¯ into N1 in Eq.50) ∂x ∂N3 φ3 = [(x3 − x2 )φ1 + (x1 − x3 )φ2 + (x2 − x1 )φ3 ]/(2A). since. 58) is given by u(x) = N1 (x)u1 + N2 (x)u2 . 7. a shape function takes the value unity at its own grid point and zero at the other grid points in an element: Ni (xj . y)φ1 + N2 (x.51) ∂y A constant gradient within any element implies that many small elements would have to be used wherever φ is changing rapidly. substitute the coordinates of the centroid. N1 (x3 . An interesting observation is that. for an element having r points.44) (7.48) (7. once the grid point values are known. Also. y2 ) = N1 (x2 . from Eq.
where u1 and u2 are the axial displacements at the two end points. For a linear variation of displacement along the length, it follows that Ni must be a linear function which is unity at Point i and zero at the other end. Thus,

    N1(x) = 1 − x/L,  N2(x) = x/L,                                  (7.53)

or

    u(x) = (1 − x/L)u1 + (x/L)u2.                                   (7.54)

7.4 Variational Approach

Consider Poisson's equation in a two-dimensional domain subject to both Dirichlet and Neumann boundary conditions. We consider a slight generalization of the problem of Eq. 7.14:

    ∂²φ/∂x² + ∂²φ/∂y² + f = 0 in A,
    φ = φ0 on S1,                                                   (7.55)
    ∂φ/∂n + g + hφ = 0 on S2,

where S1 and S2 are curves in 2D, and f, g, and h may depend on position. The difference between this problem and that of Eq. 7.14 is that the h term has been added to the boundary condition on S2. The h term could arise in a variety of physical situations, including heat transfer due to convection (where the heat flux is proportional to temperature) and free surface flow problems (where the free surface and radiation boundary conditions are both of this type).

The functional which must be minimized for this boundary value problem is similar to that of Eq. 7.24, except that an additional term must be added to account for the h term in the boundary condition on S2. It can be shown that the functional which must be minimized for this problem is

    I(φ) = ∫A { ½ [ (∂φ/∂x)² + (∂φ/∂y)² ] − f φ } dA + ∫S2 ( g φ + ½ h φ² ) dS,   (7.56)

where A is the domain.

With a finite element discretization, we do not define a single smooth function φ over the entire domain, but instead define φ over individual elements. Assume that the domain has been subdivided into a mesh of triangular finite elements similar to that shown in Fig. 56. Then, since I is an integral, it can be evaluated by summing over the elements:

    I = I1 + I2 + I3 + ··· = Σ(e=1..E) Ie,                          (7.57)

where E is the number of elements. The variation of I is also computed by summing over the elements:

    δI = Σ(e=1..E) δIe.                                             (7.58)
Note that the last two terms in the functional make a nonzero contribution only if an element abuts S2 (i.e., if one edge of an element coincides with S2). On an internal inter-element boundary, boundary integrals of the form

    ∫ φ (∂φ/∂n) dS

cancel: as can be seen from Fig. 59, for two elements which share a common edge, the unknown φ is continuous along that edge, and the normal derivative ∂φ/∂n is of equal magnitude and opposite sign, so that the individual contributions cancel each other out.

Figure 59: Neumann Boundary Condition at Internal Boundary.

For one element,

    Ie = ∫A ( ½ φ,k φ,k − f φ ) dA + ∫S2 ( g φ + ½ h φ² ) dS,       (7.59)

where the first integrand is written in index notation. The degrees of freedom which define the function φ over the entire domain are the nodal (grid point) values φi, since, given all the φi, φ is known everywhere using the element shape functions. Therefore, to minimize I, we differentiate with respect to each φi, and set the result to zero:

    ∂Ie/∂φi = ∫A [ ½ ∂(φ,k φ,k)/∂φi − f ∂φ/∂φi ] dA
            + ∫S2 ( g ∂φ/∂φi + h φ ∂φ/∂φi ) dS,                     (7.60, 7.61)

where φ = Nj φj implies φ,k = (Nj φj),k = Nj,k φj, and

    ∂φ/∂φi = ∂(Nj φj)/∂φi = Nj ∂φj/∂φi = Nj δij = Ni,               (7.62)
    ∂(φ,k)/∂φi = Nj,k δij = Ni,k,                                   (7.63)
    ∂(φ,k φ,k)/∂φi = 2 φ,k ∂(φ,k)/∂φi = 2 Nj,k φj Ni,k.             (7.64)

We now substitute these last three equations into Eq. 7.61 to obtain

    ∂Ie/∂φi = ∫A ( Nj,k φj Ni,k − f Ni ) dA + ∫S2 ( g Ni + h Nj φj Ni ) dS   (7.65)
            = [ ∫A Ni,k Nj,k dA ] φj − [ ∫A f Ni dA − ∫S2 g Ni dS ]
              + [ ∫S2 h Ni Nj dS ] φj                               (7.66)
            = Kij φj − Fi + Hij φj,                                 (7.67)

which must vanish for I to be a minimum.
where, for each element,

    Kij⁽ᵉ⁾ = ∫A Ni,k Nj,k dA,                                       (7.68)
    Fi⁽ᵉ⁾ = ∫A f Ni dA − ∫S2 g Ni dS,                               (7.69)
    Hij⁽ᵉ⁾ = ∫S2 h Ni Nj dS.                                        (7.70)

The superscript e in the preceding equations indicates that we have computed the contribution for one element. The two terms in F correspond to the body force and the Neumann boundary condition, respectively. The second term of the "load" F is present only if Point i is on S2, and the matrix entries in H apply only if Points i and j are both on the boundary.

We must combine the contributions for all elements. Consider two adjacent elements, as illustrated in Fig. 60, where the circled numbers are the element labels.

Figure 60: Two Adjacent Finite Elements.

For Element 8, with grid points 15, 17, and 29,

    ∂I8/∂φ15 = K15,15⁸ φ15 + K15,17⁸ φ17 + K15,29⁸ φ29 − F15⁸,
    ∂I8/∂φ17 = K17,15⁸ φ15 + K17,17⁸ φ17 + K17,29⁸ φ29 − F17⁸,      (7.71-7.73)
    ∂I8/∂φ29 = K29,15⁸ φ15 + K29,17⁸ φ17 + K29,29⁸ φ29 − F29⁸.

Similarly, for Element 9, with grid points 17, 29, and 35,

    ∂I9/∂φ17 = K17,17⁹ φ17 + K17,29⁹ φ29 + K17,35⁹ φ35 − F17⁹,
    ∂I9/∂φ29 = K29,17⁹ φ17 + K29,29⁹ φ29 + K29,35⁹ φ35 − F29⁹,      (7.74-7.76)
    ∂I9/∂φ35 = K35,17⁹ φ17 + K35,29⁹ φ29 + K35,35⁹ φ35 − F35⁹.

To evaluate

    ∂I/∂φi = Σe ∂Ie/∂φi,
we sum on e for fixed i. For example, for i = 17,

    ∂I8/∂φ17 + ∂I9/∂φ17 = K17,15⁸ φ15 + (K17,17⁸ + K17,17⁹)φ17
                        + (K17,29⁸ + K17,29⁹)φ29 + K17,35⁹ φ35 − F17⁸ − F17⁹.   (7.77)

This process is then repeated for all grid points and all elements. To minimize the functional, the individual sums are set to zero, resulting in a set of linear algebraic equations of the form

    Kφ = F                                                          (7.78)

or, for the more general case where h ≠ 0,

    (K + H)φ = F,                                                   (7.79)

where φ is the vector of unknown nodal values of φ, and F is the vector of forcing functions at the nodes. Because of the historical developments in structural mechanics, K is sometimes referred to as the stiffness matrix.

7.5 Matrices for Linear Triangle

Consider the linear three-node triangle, for which the contributions to K, F, and H are

    Kij = ∫A Ni,k Nj,k dA,                                          (7.80)
    Fi = ∫A f Ni dA − ∫S2 g Ni dS,                                  (7.81)
    Hij = ∫S2 h Ni Nj dS.                                           (7.82)

Note that, in three dimensions, Eq. 7.80 still applies, except that the sum on k extends over the range 1-3, and the integration is over the element volume. Note also in Eq. 7.81 that, if g = 0, there is no contribution to the right-hand side vector F. Thus, to implement the Neumann boundary condition ∂φ/∂n = 0 at a boundary, the boundary is left free; the zero gradient boundary condition is the natural boundary condition for this formulation, since a zero gradient results by default if the boundary is left free. The Dirichlet boundary condition φ = φ0 must be explicitly imposed and is referred to as an essential boundary condition.

The sum on k in the first integral can be expanded to yield

    Kij = ∫A ( ∂Ni/∂x ∂Nj/∂x + ∂Ni/∂y ∂Nj/∂y ) dA.                  (7.83)

For each element, the shape functions are given by Eq. 7.39:

    Ni(x, y) = (1/2A) [ (xj yk − xk yj) + (yj − yk)x + (xk − xj)y ],   (7.84)
where ijk can be any cyclic permutation of 123, that is, ijk = 123, ijk = 231, or ijk = 312, and A is the area of the triangular element. To compute K from Eq. 7.83, we first compute the derivatives

    ∂N_i/∂x = (y_j − y_k)/(2A) = b_i/(2A),  (7.85)
    ∂N_i/∂y = (x_k − x_j)/(2A) = c_i/(2A),  (7.86)

where b_i and c_i are defined as

    b_i = y_j − y_k,  c_i = x_k − x_j.  (7.87)

By a cyclic permutation of the indices, we obtain

    ∂N_j/∂x = (y_k − y_i)/(2A) = b_j/(2A),  (7.88)
    ∂N_j/∂y = (x_i − x_k)/(2A) = c_j/(2A).  (7.89)

For the linear triangle, these derivatives are all constant and hence can be removed from the integral to yield

    K_ij = (1/4A²)(b_i b_j + c_i c_j) ∫_A dA.  (7.90)

Thus, since ∫_A dA = A, the i, j contribution for one element is

    K_ij = (1/4A)(b_i b_j + c_i c_j),  (7.91)

where i and j each have the range 123, since there are three grid points in the element, and b_i and c_i are computed from Eq. 7.87 using the shorthand notation that ijk is a cyclic permutation of 123. Thus,

    K11 = (b1² + c1²)/(4A) = [(y2 − y3)² + (x3 − x2)²]/(4A),  (7.92)
    K22 = (b2² + c2²)/(4A) = [(y3 − y1)² + (x1 − x3)²]/(4A),  (7.93)
    K33 = (b3² + c3²)/(4A) = [(y1 − y2)² + (x2 − x1)²]/(4A),  (7.94)
    K12 = (b1 b2 + c1 c2)/(4A) = [(y2 − y3)(y3 − y1) + (x3 − x2)(x1 − x3)]/(4A),  (7.95)
    K13 = (b1 b3 + c1 c3)/(4A) = [(y2 − y3)(y1 − y2) + (x3 − x2)(x2 − x1)]/(4A),  (7.96)
    K23 = (b2 b3 + c2 c3)/(4A) = [(y3 − y1)(y1 − y2) + (x1 − x3)(x2 − x1)]/(4A).  (7.97)

Note that K depends only on differences in grid point coordinates. Thus, two elements that are geometrically congruent and differ only by a translation will have the same element matrix.

To compute the right-hand side contributions as given in Eq. 7.81, consider first the contribution of the source term f. We calculate F1, which is typical. From Eq. 7.84,

    F1 = ∫_A f N1 dA = (1/2A) ∫_A f [(x2 y3 − x3 y2) + (y2 − y3)x + (x3 − x2)y] dA.  (7.98)
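The element stiffness formulas of Eqs. 7.85–7.97 are easy to check numerically. The following sketch (function and variable names are illustrative, not from the text) builds K for a linear triangle from its node coordinates, assuming counterclockwise node ordering:

```python
import numpy as np

def triangle_stiffness(xy):
    """Element stiffness for a linear 3-node triangle, Eq. 7.91:
    K_ij = (b_i*b_j + c_i*c_j)/(4A), with b_i = y_j - y_k and
    c_i = x_k - x_j for ijk a cyclic permutation of 123."""
    x, y = xy[:, 0], xy[:, 1]
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])
    # signed area, positive for counterclockwise node ordering
    A = 0.5 * ((x[1] - x[0]) * (y[2] - y[0]) - (x[2] - x[0]) * (y[1] - y[0]))
    return (np.outer(b, b) + np.outer(c, c)) / (4.0 * A)
```

Each row (and column) of K sums to zero, since a constant φ produces no gradient, and translating the element leaves K unchanged, consistent with the remark above that K depends only on coordinate differences.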
It is also of interest to calculate the right-hand side contributions of Eq. 7.81 explicitly. We now consider the special case f = f̂ (a constant). Thus,

    F1 = (f̂/2A) [(x2 y3 − x3 y2) + (y2 − y3) x̄ + (x3 − x2) ȳ] A,  (7.99)

where (x̄, ȳ) is the centroid of the triangle given by Eq. 7.47, and we note that

    ∫_A dA = A,  ∫_A x dA = x̄ A,  ∫_A y dA = ȳ A.  (7.100)

From Eq. 7.48, the expression in brackets is given by 2A/3. Hence,

    F1 = (1/3) f̂ A,  (7.101)

and similarly

    F1 = F2 = F3 = (1/3) f̂ A.  (7.102)

That is, 1/3 of the total element "load" is applied to each grid point. For nonuniform f, the integral could be computed using natural (or area) coordinates for the triangle [7].

For the second term of Eq. 7.81 to contribute, Point i must be on an element edge which lies on the boundary S2, as shown in Fig. 61.

[Figure 61: Triangular Mesh at Boundary. An element edge of length L joins grid points 1 and 2 on S2.]

Since the integration involving g is on the boundary, the only shape functions needed are those which describe the interpolation of φ on the boundary. Since the triangular shape functions are linear, the boundary shape functions are

    N1 = 1 − x/L,  N2 = x/L,  (7.103)

where the subscripts 1 and 2 refer to the two element edge grid points on the boundary S2 (Fig. 61), x is a local coordinate along the element edge, and L is the length of the element edge. Thus, for the uniform special case g = ĝ, where ĝ is a constant, the right-hand side contributions for the second term of Eq. 7.81,

    F_i = −∫_S2 g N_i dS,  (7.104)

are

    F1 = −ĝ ∫_0^L (1 − x/L) dx = −ĝL/2,  (7.105)
    F2 = −ĝ ∫_0^L (x/L) dx = −ĝL/2.  (7.106)

Thus, for an edge of a linear 3-node triangle,

    F1 = F2 = −ĝL/2.  (7.107)

That is, for a uniform g, 1/2 of the total element "load" is applied to each grid point.

Consider some triangular elements adjacent to a boundary, as shown in Fig. 61. To calculate H using Eq. 7.82, we first note that Points i, j must both be on the boundary for this matrix to contribute. Since the integration in Eq. 7.82 is on the boundary, the only shape functions needed are those which describe the interpolation of φ on the boundary. Thus, for constant h = ĥ, using the boundary shape functions of Eq. 7.103,

    H11 = ĥ ∫_0^L (1 − x/L)² dx = ĥL/3,  (7.108)
    H22 = ĥ ∫_0^L (x/L)² dx = ĥL/3,  (7.109)
    H12 = H21 = ĥ ∫_0^L (1 − x/L)(x/L) dx = ĥL/6.  (7.110)

Hence, for the two points defining a triangle edge on the boundary S2,

    H = (ĥL/6) [ 2 1 ; 1 2 ].  (7.111)

7.6 Interpretation of Functional

Now that the variational problem has been solved (i.e., the functional I has been minimized), we can evaluate I. We recall from Eq. 7.59 (with h = 0) that, for the two-dimensional Poisson's equation,

    I(φ) = ∫_A ( ½ φ,k φ,k − f φ ) dA + ∫_S2 g φ dS,  (7.112)

where φ = N_i φ_i. Since φ,k = N_i,k φ_i,

    I(φ) = ∫_A ( ½ N_i,k φ_i N_j,k φ_j − f N_i φ_i ) dA + ∫_S2 g N_i φ_i dS.  (7.113)

Since φ_i is independent of position,

    I(φ) = ½ φ_i ( ∫_A N_i,k N_j,k dA ) φ_j − ( ∫_A f N_i dA − ∫_S2 g N_i dS ) φ_i  (7.114)
         = ½ φ_i K_ij φ_j − F_i φ_i  (index notation)  (7.115)
         = ½ φᵀKφ − φᵀF  (matrix notation)  (7.116)
         = ½ φ·Kφ − φ·F  (vector notation),  (7.117)
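The closed-form edge results above (Eqs. 7.105–7.111) can be verified by numerical quadrature of the linear edge shape functions. The function below is an illustrative sketch (names are not from the text), assuming constant ĝ and ĥ on a straight edge of length L:

```python
import numpy as np

def _trapezoid(y, x):
    # composite trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def edge_load_and_H(L, g_hat, h_hat, n=2001):
    """Boundary-edge integrals for the linear shape functions of Eq. 7.103:
    F_i = -int(g*N_i) dS (Eq. 7.104) and H_ij = int(h*N_i*N_j) dS (Eq. 7.82)."""
    x = np.linspace(0.0, L, n)
    N = np.vstack([1.0 - x / L, x / L])   # N1, N2 along the edge
    F = np.array([-_trapezoid(g_hat * N[i], x) for i in range(2)])
    H = np.array([[_trapezoid(h_hat * N[i] * N[j], x) for j in range(2)]
                  for i in range(2)])
    return F, H
```

The quadrature reproduces F1 = F2 = −ĝL/2 and H = (ĥL/6)[2 1; 1 2] to within the quadrature error.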
where the last result has been written in index, matrix, and vector notations. In solid mechanics, I corresponds to the total potential energy. The first term is the strain energy, and the second term is the potential of the applied loads.

The first term for I is a quadratic form. Since strain energy is zero only for zero strain (corresponding to rigid body motion), it follows that the stiffness matrix K is positive semidefinite. For well-posed problems (which allow no rigid body motion), K is positive definite. (By definition, a matrix K is positive definite if φᵀKφ > 0 for all φ ≠ 0.) Three consequences of positive-definiteness are

1. All eigenvalues of K are positive.
2. The matrix is nonsingular.
3. Gaussian elimination can be performed without pivoting.

7.7 Stiffness in Elasticity in Terms of Shape Functions

In §4.10 (p. 45), the Principle of Virtual Work was used to obtain the element stiffness matrix for an elastic finite element as (Eq. 4.67)

    K = ∫_V Cᵀ D C dV,  (7.119)

where D is the symmetric matrix of material constants relating stress and strain, and C is the matrix converting grid point displacements to strain:

    ε = Cu.  (7.120)

For the constant strain triangle (CST) in 2D, the fundamental unknowns u and v (the x and y components of displacement) can both be expressed in terms of the three shape functions defined in Eq. 7.38:

    u = N_i u_i,  v = N_i v_i,  (7.121)

where the summation convention is used. Thus, in 2D, the strains can be expressed as

    ε = { ε_xx ; ε_yy ; γ_xy } = { u,x ; v,y ; u,y + v,x } = { N_i,x u_i ; N_i,y v_i ; N_i,y u_i + N_i,x v_i }

      = [ N1,x   0    N2,x   0    N3,x   0
            0   N1,y    0   N2,y    0   N3,y
          N1,y  N1,x  N2,y  N2,x  N3,y  N3,x ] { u1, v1, u2, v2, u3, v3 }ᵀ.  (7.122)
In general, C would have as many rows as there are independent components of strain (3 in 2D and 6 in 3D) and as many columns as there are DOF in the element. Thus, from Eq. 7.120, C in Eq. 7.119 is a matrix of shape function derivatives:

    C = [ N1,x   0    N2,x   0    N3,x   0
            0   N1,y    0   N2,y    0   N3,y
          N1,y  N1,x  N2,y  N2,x  N3,y  N3,x ].  (7.123)

7.8 Element Compatibility

In the variational approach to the finite element method, an integral was minimized. It was also assumed that the integral evaluated over some domain was equal to the sum of the integrals over the elements. We wish to investigate briefly the conditions which must hold for this assumption to be valid. That is, what condition is necessary for the integral over a domain to be equal to the sum of the integrals over the elements?

Consider the one-dimensional integral

    I = ∫_a^b φ(x) dx.  (7.124)

For the integral I to be well-defined, the integrand φ may have simple discontinuities, as illustrated in Fig. 62, but we do not allow singularities in the integrand, since some singularities cannot be integrated. Thus, simple jump discontinuities in φ are allowed.

[Figure 62: Discontinuous Function.]

Now consider the one-dimensional integral

    I = ∫_a^b (dφ(x)/dx) dx.  (7.125)

Since the integrand dφ(x)/dx may be discontinuous, φ(x) must be continuous, but kinks (slope discontinuities) are allowed. Singularities in φ, on the other hand, will not be allowed. In the integral

    I = ∫_a^b (d²φ(x)/dx²) dx,  (7.126)

the integrand may again be discontinuous, in which case dφ/dx is continuous with kinks, and φ is smooth (i.e., the slope is continuous).
Thus, we conclude that the smoothness required of φ depends on the highest order derivative of φ appearing in the integrand. That is, for I to be well-defined, for any functional of interest in finite element analysis, the field variable φ and any of its partial derivatives up to one order less than the highest order derivative appearing in I(φ) must be continuous. If φ is in the integrand, φ must be continuous; otherwise, I(φ) might be infinite.

For example, consider the Poisson equation in 2D,

    ∂²φ/∂x² + ∂²φ/∂y² + f = 0,  (7.127)

for which the functional to be minimized is

    I(φ) = ∫_A [ ½ (∂φ/∂x)² + ½ (∂φ/∂y)² − f φ ] dA.  (7.128)

Since the highest derivative in I is first order, φ must be continuous in the finite element approximation, which implies that, at element interfaces, the first derivatives φ,x and φ,y may have simple discontinuities, and the second derivatives φ,xx and φ,yy that appear in the partial differential equation may not even exist at the element interfaces. Note also that the Poisson equation is a second order equation, but φ need only be continuous in the finite element approximation. Thus, one of the strengths of the variational approach is that I(φ) involves derivatives of lower order than in the original PDE.

The requirement that φ be continuous at element interfaces is referred to as the compatibility requirement or the conforming requirement. For the 3-node triangular element already formulated, the shape function is linear, which implies that, given φ1 and φ2, φ varies linearly between φ1 and φ2 along the line 1–2, as illustrated in Fig. 63. Thus, in Fig. 63, φ is the same at the midpoint P for both elements, and the element is conforming.

[Figure 63: Compatibility at an Element Boundary. Two triangles share the edge 1–2, whose midpoint is P.]

In elasticity, the functional I(φ) has the physical interpretation of total potential energy, including strain energy. A nonconforming element would result in a discontinuous displacement at the element boundaries, since there could be a gap or overlap in the model along the line 1–2, which would correspond to infinite strain energy. In fluid mechanics, a discontinuous φ (which is not allowed) corresponds to a singularity in the velocity field.

Thus, the approximation inherent in a displacement-based finite element method is that the displacements are continuous, and the displacement gradients (which are proportional to the stresses) are, in general, discontinuous at the element boundaries. If all quantities of interest (e.g., displacements and stresses) were continuous, the solution would be an exact solution rather than an approximate solution. This property is one of the fundamental characteristics of an approximate numerical solution.
. We consider the Poisson equation 2 φ + f = 0. .9 Method of Weighted Residuals (Galerkin’s Method) Here we discuss an alternative to the use of a variational principle when the functional is either unknown or does not exist (e.130) ˜ For an approximate solution φ. V (7. We note that.133) which is orthogonal to the xyplane. 7. The best approximate solution will be one which in some sense minimizes R at all points of the domain. the error R is orthogonal to the basis vectors ex and ey (the vectors used to approximate v): R · ex = 0 and R · ey = 0.z T v B ¨¨T R ¨ ¨ ¨ ¨ rr rr r j r = v − u (error) E y x © u Figure 64: A Vector Analogy for Galerkin’s Method. where we want to approximate a vector v in 3D with another vector in 2D.131) where W is any function of the spatial coordinates.132) This approach is called the method of weighted residuals. The motivation for using shape functions Ni as the weighting functions is that we want the residual (the error) orthogonal to the shape functions. nonlinear equations). 64. . . n functions W can be chosen: RWi dV = 0. n. With n DOF in the domain. W is referred to as a weighting function. the process is called Galerkin’s method. 83 (7. Various choices of Wi are possible. where R is referred to as the residual or error. we are trying to approximate an inﬁnite DOF problem (the PDE) with a ﬁnite number of DOF (the ﬁnite element model). That is. The error in this approximation is R = v − u.134) . V i = 1.. When Wi = Ni (the shape functions). we are attempting to approximate v with a lesser number of DOF. In the ﬁnite element approximation.g. as shown in Fig. The “best” 2D approximation to the 3D vector v is the projection u in the plane. Consider an analogous problem in vector analysis. (7. (7. if R = 0 in the domain. 2˜ φ + f = R = 0. 2. That is.129) (7. RW dV = 0. (7.
55.137) f Ni dV.k − φ.k Nj.147) Ni. V (7.146) (7. for one element. the Dirichlet boundary conditions are handled like displacement boundary conditions in structural problems.k ] dV + V = (7.k = (Nj φj ). ∂φ/∂n is speciﬁed on S2 and unknown a priori on S1 .141) φ. 0= S ∂φ Ni dS − ∂n V Nj. 7. in matrix notation.kk + f )Ni dV [(φ. Thus. an element. 84 .In the ﬁnite element problem. where Kij = V (7. Kφ = F. The integral in Eq.g.k Nj. the approximating functions are the shape functions Ni .143) (7.k dV ∂φ Ni dS. (7.136) (7.140) (7.k Ni. for Poisson’s equation. Eq. 7. the residual R is orthogonal to its approximating functions.k = Nj. ∂φ/∂n is the “reaction” to the speciﬁed φ. where φ = φ0 is speciﬁed.k Ni nk dS − 0= S V V where φ. to obtain f Ni dV. ∂n Fi = V f Ni dV + S From Eq. for each i. The counterpart to the dot product is the integral RNi dV = 0.k dV + φ. On S1 . e. 7.145) (7.142) (7.138) The ﬁrst term is converted to a surface integral using the divergence theorem. At points where φ is speciﬁed.k φj Ni.k Ni.135 must hold over the entire domain V or any portion of the domain.135) That is. Thus.k dV φj + V f Ni dV + S = −Kij φj + Fi .10.139) φ.k dV + V V f Ni dV ∂φ Ni dS ∂n (7..k Ni ). V = V (φ.144) =− Ni.k nk = and φ·n= ∂φ ∂n (7.k φj . the shape functions. 0= V ( 2 φ + f )Ni dV (7. Hence.
Galerkin's method thus results in algebraic equations identical to those derived from a variational principle. When a principle does exist, the two approaches yield the same results. However, Galerkin's method is more general, since sometimes a variational principle may not exist for a given problem. When the variational principle does not exist or is unknown, Galerkin's method can still be used to derive a finite element model.

8 Potential Fluid Flow With Finite Elements

In potential flow, the fluid is assumed to be inviscid and incompressible. This mathematical model of fluid behavior is useful for some situations. Define a velocity potential φ such that velocity v = ∇φ. That is, in 3D,

    v_x = ∂φ/∂x,  v_y = ∂φ/∂y,  v_z = ∂φ/∂z.  (8.1)

It can be shown that, within the domain occupied by the fluid,

    ∇²φ = 0.  (8.2)

Various boundary conditions are of interest. On a boundary where the velocity is specified,

    ∂φ/∂n = v_n,  (8.3)

where n is the unit outward normal on the boundary. At a fixed boundary, where the normal velocity vanishes,

    v_n = v · n = ∇φ · n = ∂φ/∂n = 0.  (8.4)

Since there are no shearing stresses in the fluid, the fluid slips tangentially along boundaries. We will see later in the discussion of symmetry that, at a plane of symmetry for the potential φ,

    ∂φ/∂n = 0,  (8.5)

where n is the unit normal to the plane. At a plane of antisymmetry, φ = 0.

The boundary value problem for flow around a solid body is illustrated in Fig. 65. In this example, far away from the body,

    v_x = ∂φ/∂x = v_∞,  (8.6)

which is specified. As is, the problem posed in Fig. 65 is not a well-posed problem, because only conditions on the derivative ∂φ/∂n are specified. That is, for any solution φ, φ + c is also a solution for any constant c. Therefore, for uniqueness, we must specify φ somewhere in the domain.

Thus, the potential flow boundary value problem is that the velocity potential φ satisfies

    ∇²φ = 0 in V,
    φ = φ̂ on S1,  (8.7)
    ∂φ/∂n = v̂_n on S2,

where v = ∇φ in V.
[Figure 65: Potential Flow Around Solid Body. ∇²φ = 0 in the domain; on the left boundary, ∂φ/∂n = −∂φ/∂x = −v_∞; on the right boundary, ∂φ/∂n = ∂φ/∂x = v_∞; on the top and bottom boundaries, ∂φ/∂n = ±∂φ/∂y = 0; on the body, ∂φ/∂n = 0.]

8.1 Finite Element Model

A finite element model of the potential flow problem results in the equation

    Kφ = F,  (8.8)

where the contributions to K and F for each element are

    K_ij = ∫_A N_i,k N_j,k dA = ∫_A (∂N_i/∂x ∂N_j/∂x + ∂N_i/∂y ∂N_j/∂y) dA,  (8.9)
    F_i = ∫_S2 v̂_n N_i dS,  (8.10)

where v̂_n is the specified velocity on S2 in the outward normal direction, and N_i is the shape function for the ith grid point in the element.

Once the velocity potential φ is known, the pressure can be found using the steady-state Bernoulli equation

    ½ v² + g y + p/ρ = c = constant,  (8.11)

where v is the velocity magnitude given by

    v² = (∂φ/∂x)² + (∂φ/∂y)²,  (8.12)

p is pressure, ρ is the fluid density, g is the acceleration due to gravity, y is the height above some reference plane, and gy is the (frequently ignored) body force potential. The constant c is evaluated using a location where v is known (e.g., at infinity, v_∞). For example, if we ignore gy and pick p = 0 (ambient),

    c = ½ v_∞².  (8.13)
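Pressure recovery from a computed potential is then a pointwise post-processing step. A minimal sketch (function name and default values are illustrative, not from the text), assuming the gy term is ignored and p = 0 at infinity as in Eq. 8.13:

```python
def pressure_from_speed(v, rho=1000.0, v_inf=1.0):
    """Steady Bernoulli equation (Eqs. 8.11-8.13) with gy ignored:
    p = rho*(v_inf**2 - v**2)/2, so p = 0 where v = v_inf."""
    return rho * 0.5 * (v_inf**2 - v**2)
```

At a stagnation point (v = 0) this gives the stagnation pressure ρv_∞²/2; where the flow speeds up past v_∞, the pressure is negative relative to ambient.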
Thus,

    p/ρ + ½ v² = ½ v_∞².  (8.14)

8.2 Application of Symmetry

Consider the 2D potential flow around a circular cylinder, as shown in Fig. 66.

[Figure 66: Streamlines Around Circular Cylinder.]

The velocity field is symmetric with respect to the plane y = 0, and the velocity field is antisymmetric with respect to the plane x = 0. Consider first the plane y = 0, as shown in Fig. 67. The velocity vectors at P and its image P′ are mirror images of each other; that is, the velocity vectors at P and P′ can be transformed into each other by a reflection. As P and P′ get close to the axis y = 0, the velocities at P and P′ must converge to each other, since P and P′ become the same point in the plane y = 0. Thus, (v_x)_P = (v_x)_P′, and

    v_y = ∂φ/∂y = 0 for y = 0.  (8.15)

The y-direction in this case is the normal to the symmetry plane y = 0. Thus, we conclude that, in general, for points in a symmetry plane with normal n,

    ∂φ/∂n = 0.  (8.16)

[Figure 67: Symmetry With Respect to y = 0. The point P at (x, y) and its image P′ at (x, −y) carry mirror-image velocity vectors.]

Now consider the plane x = 0, as shown in Fig. 68. Note from Fig. 65 that the specified normal velocities at the two x extremes are of opposite signs. Thus, the right-hand side "loads" in Eq. 8.8 are equal in magnitude and opposite in sign for the left and right boundaries. If we change the direction of flow (i.e., make it right to left), then, for a point P and its image P′ in the plane x = 0,

    (φ_P)_flow right = (φ_P′)_flow left.  (8.17)
[Figure 68: Antisymmetry With Respect to x = 0. The point P at (x, y) and its image P′ at (−x, y).]

However, changing the direction of flow also means that F in Eq. 8.8 becomes −F, since the only nonzero contributions to F occur at the left and right boundaries; and, if the sign of F changes, the sign of the solution also changes. Thus,

    (φ_P′)_flow left = −(φ_P′)_flow right.  (8.18)

Combining the last two equations yields

    (φ_P)_flow right = −(φ_P′)_flow right.  (8.19)

If we now let P and P′ converge to the plane x = 0 and become the same point, we obtain

    (φ)_x=0 = −(φ)_x=0  (8.20)

or

    (φ)_x=0 = 0.  (8.21)

Thus, we conclude that, in general, for points in a symmetry plane with normal n for which the solution is antisymmetric, φ = 0. The key to recognizing antisymmetry is to have a symmetric geometry with an antisymmetric "loading" (RHS).

Thus, the problem of 2D potential flow around a circular cylinder, Fig. 66, has two planes of geometric symmetry, y = 0 and x = 0, and can be solved using a one-quarter model, for which the boundary value problem is shown in Fig. 69. The boundary condition ∂φ/∂n = 0 is the natural boundary condition (the condition that results if the boundary is left free). Since φ is specified on x = 0, this problem is well-posed.

[Figure 69: Boundary Value Problem for Flow Around Circular Cylinder (one-quarter model, cylinder radius a): ∇²φ = 0 in the domain; ∂φ/∂n = v_∞ on the inflow boundary; ∂φ/∂n = 0 on the symmetry plane (Sym) and on the cylinder; φ = 0 on the antisymmetry plane (Antisym).]
8.3 Free Surface Flows

Consider an inviscid, incompressible fluid in an irrotational flow field with a free surface, as shown in Fig. 70.

[Figure 70: The Free Surface Problem. η is the deflection of the disturbed free surface from the undisturbed free surface, d is the depth, and g is the acceleration due to gravity.]

The equations of motion and continuity reduce to

    ∇²φ = 0,  (8.22)

where φ is the velocity potential, and the velocity is

    v = ∇φ.  (8.23)

The pressure p can be determined from the time-dependent Bernoulli equation

    −p/ρ = ∂φ/∂t + ½ v² + g y,  (8.24)

where ρ is the fluid density, g is the acceleration due to gravity, and y is the vertical coordinate. If we let η denote the deflection of the free surface, the vertical velocity on the free surface is

    ∂φ/∂y = ∂η/∂t on y = 0.  (8.25)

If we assume small wave height (i.e., η is small compared to the depth d), the velocity v on the free surface is also small, and we can ignore the velocity term in Eq. 8.24. If we also take the pressure p = 0 on the free surface, Bernoulli's equation implies

    ∂φ/∂t + g η = 0 on y = 0.  (8.26)

This equation can be viewed as an equation for the surface elevation η given φ. We can then eliminate η from the last two equations by differentiating Eq. 8.26:

    0 = ∂²φ/∂t² + g ∂η/∂t = ∂²φ/∂t² + g ∂φ/∂y.  (8.27)

Hence,

    ∂φ/∂y = −(1/g) ∂²φ/∂t²  (8.28)

on the free surface y = 0. This equation is referred to as the linearized free surface boundary condition.
8.4 Use of Complex Numbers and Phasors in Wave Problems

The wave maker problem considered in the next section will involve a forcing function which is sinusoidal in time (i.e., time-harmonic). It is common in engineering analysis to represent time-harmonic signals using complex numbers, since amplitude and phase information can be included in a single complex number. Such an approach is used with A.C. circuits, steady-state acoustics, and mechanical vibrations.

Consider a sine wave φ(t) of amplitude Â, circular frequency ω, and phase θ:

    φ(t) = Â cos(ωt + θ),  (8.29)

where all quantities in this equation are real, and Â can be taken as positive. Thus,

    φ(t) = Re[Â e^{i(ωt+θ)}] = Re[Â e^{iθ} e^{iωt}],  (8.30)

where i = √−1. If we define the complex amplitude

    A = Â e^{iθ},  (8.31)

then

    φ(t) = Re[A e^{iωt}].  (8.32)

It is common practice, when dealing with these sinusoidal functions, to drop the "Re", write

    φ(t) = A e^{iωt},  (8.33)

and agree that it is only the real part which is of interest. The complex amplitude A is thus a complex number which embodies both the amplitude and the phase of the sinusoidal signal, as shown in Fig. 71, where the magnitude of the complex amplitude is given by

    |A| = |Â e^{iθ}| = Â |e^{iθ}| = Â,  (8.34)

which is the actual amplitude, and arg(A) = θ, which is the actual phase angle. The directed vector in the complex plane is called a phasor by electrical engineers.

[Figure 71: The Complex Amplitude. A is drawn as a vector of length Â at angle θ in the complex plane.]

Two sinusoids of the same frequency add just like vectors in geometry. For example, consider the sum

    Â1 cos(ωt + θ1) + Â2 cos(ωt + θ2).  (8.35)
In terms of complex arithmetic,

    A1 e^{iωt} + A2 e^{iωt} = (A1 + A2) e^{iωt},  (8.36)

where A1 and A2 are complex amplitudes given by

    A1 = Â1 e^{iθ1},  A2 = Â2 e^{iθ2}.  (8.37)

This addition, referred to as phasor addition, is illustrated in Fig. 72.

[Figure 72: Phasor Addition. A1 and A2 add head-to-tail in the complex plane to give A1 + A2.]

8.5 2D Wave Maker

Consider a semi-infinite body of water with a wall oscillating in simple harmonic motion, as shown in Fig. 73.

[Figure 73: 2D Wave Maker: Time Domain. ∇²φ = 0 in the domain; ∂φ/∂n = −(1/g)φ̈ on the free surface y = 0 (linearized free surface condition); ∂φ/∂n = v_n = v0 cos ωt on the oscillating wall x = 0 (specified); ∂φ/∂n = 0 on the rigid bottom y = −d.]

In the time domain, the problem is
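The phasor identities above are easy to confirm numerically; the following sketch (all numeric values are arbitrary illustrations) checks Eq. 8.32 against Eq. 8.29, and the phasor addition of Eq. 8.36, on sampled signals:

```python
import numpy as np

A1_hat, theta1 = 2.0, 0.3
A2_hat, theta2 = 1.5, -1.1
omega = 4.0
A1 = A1_hat * np.exp(1j * theta1)   # complex amplitudes, Eq. 8.31
A2 = A2_hat * np.exp(1j * theta2)

t = np.linspace(0.0, 3.0, 400)
# Eq. 8.29 form of the sum of two sinusoids of the same frequency
signal_sum = (A1_hat * np.cos(omega * t + theta1)
              + A2_hat * np.cos(omega * t + theta2))
# Eq. 8.36: the single phasor A1 + A2 carries the same signal
phasor_sum = ((A1 + A2) * np.exp(1j * omega * t)).real
assert np.allclose(signal_sum, phasor_sum)

# |A1 + A2| and arg(A1 + A2) are the amplitude and phase of the sum
amplitude, phase = abs(A1 + A2), np.angle(A1 + A2)
assert np.allclose(signal_sum, amplitude * np.cos(omega * t + phase))
```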
    ∇²φ = 0,
    ∂φ/∂n = v0 cos ωt on x = 0,
    ∂φ/∂n = 0 on y = −d,
    ∂φ/∂n = −(1/g) φ̈ on y = 0,  (8.38)

where dots denote differentiation with respect to time. For this problem, the excitation frequency ω is specified, and the forcing function is the oscillating wall. The solution of this problem is a function φ(x, y, t). We first write Eq. 8.38b in the form

    ∂φ/∂n = v0 cos ωt = Re[v0 e^{iωt}],  (8.39)

where i = √−1, and v0 is real. We therefore look for solutions in the form

    φ(x, y, t) = φ0(x, y) e^{iωt},  (8.40)

where φ0(x, y) is the complex amplitude. Eq. 8.38 then becomes

    ∇²φ0 = 0,
    ∂φ0/∂n = v0 on x = 0,
    ∂φ0/∂n = 0 on y = −d,
    ∂φ0/∂n = (ω²/g) φ0 on y = 0.  (8.41)

For a finite element solution to the 2D wave maker problem, we truncate the domain "sufficiently far" from the wall, so that an additional boundary condition is needed for large x at the location where the model is terminated. It can be shown that, for large x,

    ∂φ0/∂x = −iα φ0,  (8.42)

where α is the positive solution of

    ω² = g α tanh(αd),  (8.43)

and ω is the fixed excitation frequency. The graphical solution of this equation is shown in Fig. 74. Eq. 8.42 is referred to as a radiation boundary condition, which accounts approximately for the fact that the fluid extends to infinity.
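The positive root α of the dispersion relation ω² = gα tanh(αd) has no closed form, but it is easily found numerically. A minimal sketch (function name and iteration count are illustrative), using bisection for robustness:

```python
import math

def wavenumber(omega, d, g=9.81):
    """Positive root alpha of omega**2 = g*alpha*tanh(alpha*d)."""
    f = lambda a: g * a * math.tanh(a * d) - omega**2
    lo = 0.0                       # f(0) = -omega**2 < 0
    hi = omega**2 / g + 1.0        # grow until the root is bracketed
    while f(hi) < 0.0:
        hi *= 2.0
    for _ in range(200):           # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In deep water (αd large), tanh(αd) → 1 and α → ω²/g; in shallow water, α → ω/√(gd).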
[Figure 74: Graphical Solution of ω²/α = g tanh(αd).]

If the radiation boundary is located at x = W, the boundary value problem in the frequency domain becomes

    ∇²φ0 = 0,
    ∂φ0/∂n = v0 on x = 0 (oscillating wall),
    ∂φ0/∂n = 0 on y = −d (rigid bottom),
    ∂φ0/∂n = (ω²/g) φ0 on y = 0 (linearized free surface condition),
    ∂φ0/∂x = −iα φ0 on x = W (radiation condition),  (8.44)

as summarized in Fig. 75. Note the similarity between this radiation condition and the nonreflecting boundary condition for the Helmholtz equation.

[Figure 75: 2D Wave Maker: Frequency Domain. The domain has width W and depth d, with the boundary conditions of Eq. 8.44 marked on each side.]
8.6 Linear Triangle Matrices for 2D Wave Maker Problem

The boundary value problem defined in Eq. 8.44 has two boundaries on which ∂φ/∂n is specified and two boundaries on which ∂φ/∂n is proportional to the unknown φ. Thus, this boundary value problem is a special case of Eq. 7.55 (p. 73), where g = −v0 on the oscillating wall, h = −ω²/g on the free surface, and h = iα on the radiation boundary. Note that the function g appearing in Eq. 7.55 is not the acceleration due to gravity appearing in the formulation of the free surface flow problem.

The matrix system for the wave maker problem is therefore

    (K + H)φ = F,  (8.45)

where, for each element, K, H, and F are given by Eqs. 7.80–7.82, so that the finite element formulas derived in §7.5 are all applicable. Thus, for each element, from Eq. 7.91,

    K_ij = (1/4A)(b_i b_j + c_i c_j),  (8.46)

where i and j each have the range 123, and, from Eq. 7.87,

    b_i = y_j − y_k,  c_i = x_k − x_j.  (8.47)

For b_i and c_i, the symbols ijk refer to the three nodes 123 in a cyclic permutation. For example, if j = 1, then k = 2 and i = 3. Thus,

    K11 = (b1² + c1²)/(4A),  (8.48)
    K22 = (b2² + c2²)/(4A),  (8.49)
    K33 = (b3² + c3²)/(4A),  (8.50)
    K12 = K21 = (b1 b2 + c1 c2)/(4A),  (8.51)
    K13 = K31 = (b1 b3 + c1 c3)/(4A),  (8.52)
    K23 = K32 = (b2 b3 + c2 c3)/(4A).  (8.53)

H is calculated using Eq. 7.111. Thus, for triangular elements adjacent to the free surface, with ĥ = −ω²/g,

    H_FS = −(ω²L/6g) [ 2 1 ; 1 2 ]  (8.54)

for each free surface edge, and, for elements adjacent to the radiation boundary, with ĥ = iα,

    H_RB = (iαL/6) [ 2 1 ; 1 2 ]  (8.55)

for each radiation boundary edge. Note that, since H is purely imaginary on the radiation boundary, the coefficient matrix K + H in Eq. 8.45 is complex, and the solution φ is complex. The solution of free surface problems thus requires either the use of complex arithmetic or separating the matrix system into real and imaginary parts.

The right-hand side F is calculated using Eq. 7.107. Thus, for two points on an element edge on the oscillating wall,

    F1 = F2 = v0 L / 2.  (8.56)
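The boundary contributions of Eqs. 8.54–8.56 are small enough to write out directly. This sketch (names are illustrative, not from the text) builds the edge matrices and the wall load for one boundary edge of length L:

```python
import numpy as np

EDGE = np.array([[2.0, 1.0], [1.0, 2.0]])

def free_surface_edge(L, omega, g=9.81):
    """H_FS of Eq. 8.54 (real, from h = -omega**2/g)."""
    return -(omega**2 * L) / (6.0 * g) * EDGE

def radiation_edge(L, alpha):
    """H_RB of Eq. 8.55 (purely imaginary, from h = i*alpha)."""
    return 1j * alpha * L / 6.0 * EDGE

def wall_edge_load(L, v0):
    """F1 = F2 = v0*L/2 on the oscillating wall, Eq. 8.56."""
    return np.full(2, 0.5 * v0 * L)
```

The radiation contribution is what makes the assembled K + H complex, which is why complex arithmetic (or a split into real and imaginary parts) is needed.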
The solution vector φ obtained from Eq. 8.45 is the complex amplitude of the velocity potential. The time-dependent velocity potential is given by

    Re[φ e^{iωt}] = Re[(φ_R + iφ_I)(cos ωt + i sin ωt)] = φ_R cos ωt − φ_I sin ωt,  (8.57)

where φ_R and φ_I are the real and imaginary parts of the complex amplitude. It is this function which is displayed in computer animations of the time-dependent response of the velocity potential.

8.7 Mechanical Analogy for the Free Surface Problem

Consider the single DOF mass-spring-dashpot system shown in Fig. 76.

[Figure 76: Single DOF Mass-Spring-Dashpot System. A mass m with displacement u(t) is driven by an applied force f(t) and restrained by a spring k and a dashpot c.]

The application of Newton's second law of motion (F = ma) to this system yields the differential equation of motion

    m ü + c u̇ + k u = f(t),  (8.58)

where m is mass, c is the viscous dashpot constant, k is the spring stiffness, u is the displacement from the equilibrium, f is the applied force, and dots denote differentiation with respect to the time t. For a sinusoidal force,

    f(t) = f0 e^{iωt},  (8.59)

where ω is the excitation frequency, and f0 is the complex amplitude of the force. The displacement solution is also sinusoidal:

    u(t) = u0 e^{iωt},  (8.60)

where u0 is the complex amplitude of the displacement response. If we substitute the last two equations into the differential equation, we obtain

    −ω² m u0 e^{iωt} + iωc u0 e^{iωt} + k u0 e^{iωt} = f0 e^{iωt}  (8.61)

or

    (−ω² m + iωc + k) u0 = f0.  (8.62)

We make two observations from this last equation:

1. The inertia force is proportional to ω² and 180° out of phase with respect to the elastic force.
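Eq. 8.62 gives the frequency response directly once it is solved for u0; amplitude and phase then follow from the complex amplitude exactly as in §8.4. A short sketch (the function names are illustrations, not from the text):

```python
import cmath

def sdof_amplitude(m, c, k, f0, omega):
    """Complex displacement amplitude u0 from Eq. 8.62:
    u0 = f0 / (-omega**2 * m + i*omega*c + k)."""
    return f0 / (-omega**2 * m + 1j * omega * c + k)

def amp_phase(u0):
    # actual amplitude and phase of the steady-state response
    return abs(u0), cmath.phase(u0)
```

At ω = √(k/m), the stiffness and inertia terms cancel, only the damping term iωc survives, and the response lags the force by 90°, consistent with the observations above.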
2. The viscous damping force is proportional to ω and leads the elastic force by 90°.

Similarly, in the free surface problem, we could interpret the free surface matrix H_FS as an inertial effect (in a mechanical analogy) with a "surface mass matrix" M given by

    M = (1/−ω²) H_FS = (L/6g) [ 2 1 ; 1 2 ],  (8.63)

where the diagonal "masses" are positive. Similarly, we could interpret the radiation boundary matrix H_RB as a "damping" effect with the "boundary damping matrix" B given by

    B = (1/iω) H_RB = (αL/6ω) [ 2 1 ; 1 2 ],  (8.64)

where the diagonal dampers are positive. This "damping" matrix is frequency-dependent.

The free surface problem is a degenerate equivalent to the mechanical problem, since the mass M occurs only on the free surface rather than at every point in the domain. Thus, the ideal fluid, for which

    ∇²φ = 0,

behaves like a degenerate mechanical system, because the ideal fluid possesses the counterpart to the elastic forces but not the inertial forces. This degeneracy is a consequence of the incompressibility of the ideal fluid. A compressible fluid (such as occurs in acoustics) has the analogous mass effects everywhere.
Bibliography

[1] K.-J. Bathe, Finite Element Procedures, Prentice-Hall, Inc., Englewood Cliffs, NJ, 1996.

[2] J.W. Brown and R.V. Churchill, Fourier Series and Boundary Value Problems, McGraw-Hill, Inc., New York, seventh edition, 2006.

[3] G.R. Buchanan, Schaum's Outline of Theory and Problems of Finite Element Analysis, Schaum's Outline Series, McGraw-Hill, Inc., New York, 1995.

[4] D.S. Burnett, Finite Element Analysis: From Concepts to Applications, Addison-Wesley Publishing Company, Inc., Reading, Mass., 1987.

[5] R.W. Clough and J. Penzien, Dynamics of Structures, McGraw-Hill, Inc., New York, second edition, 1993.

[6] R.D. Cook, D.S. Malkus, M.E. Plesha, and R.J. Witt, Concepts and Applications of Finite Element Analysis, John Wiley and Sons, Inc., New York, fourth edition, 2001.

[7] K.H. Huebner, D.L. Dewhirst, D.E. Smith, and T.G. Byrom, The Finite Element Method for Engineers, John Wiley and Sons, Inc., New York, fourth edition, 2001.

[8] J.H. Mathews, Numerical Methods for Mathematics, Science, and Engineering, Prentice-Hall, Inc., Englewood Cliffs, NJ, second edition, 1992.

[9] K.W. Morton and D.F. Mayers, Numerical Solution of Partial Differential Equations, Cambridge University Press, Cambridge, England, 1994.

[10] J.S. Przemieniecki, Theory of Matrix Structural Analysis, McGraw-Hill, Inc., New York, 1968 (also, Dover, New York, 1985).

[11] J.N. Reddy, An Introduction to the Finite Element Method, McGraw-Hill, Inc., New York, third edition, 2006.

[12] F. Scheid, Schaum's Outline of Theory and Problems of Numerical Analysis, Schaum's Outline Series, McGraw-Hill, Inc., New York, second edition, 1989.

[13] G.D. Smith, Numerical Solution of Partial Differential Equations: Finite Difference Methods, Oxford University Press, Oxford, England, third edition, 1985.

[14] I.M. Smith and D.V. Griffiths, Programming the Finite Element Method, John Wiley and Sons, Inc., New York, fourth edition, 2004.

[15] M.R. Spiegel, Schaum's Outline of Theory and Problems of Advanced Mathematics for Engineers and Scientists, Schaum's Outline Series, McGraw-Hill, Inc., New York, 1971.

[16] J.S. Vandergraft, Introduction to Numerical Calculations, Academic Press, Inc., New York, second edition, 1983.

[17] O.C. Zienkiewicz, R.L. Taylor, and J.Z. Zhu, The Finite Element Method: Its Basis and Fundamentals, Elsevier Butterworth-Heinemann, Oxford, England, sixth edition, 2005.

[18] O.C. Zienkiewicz and R.L. Taylor, The Finite Element Method for Solid and Structural Mechanics, Elsevier Butterworth-Heinemann, Oxford, England, sixth edition, 2005.

[19] O.C. Zienkiewicz, R.L. Taylor, and P. Nithiarasu, The Finite Element Method for Fluid Dynamics, Elsevier Butterworth-Heinemann, Oxford, England, sixth edition, 2005.