Partial Diﬀerential Equations
by
Gordon C. Everstine
21 January 2010
Copyright c 2001–2010 by Gordon C. Everstine.
All rights reserved.
This book was typeset with LaTeX2e (MiKTeX).
Preface
These lecture notes are intended to supplement a one-semester graduate-level engineering
course at The George Washington University in numerical methods for the solution of partial
differential equations. Both finite difference and finite element methods are included.
The main prerequisite is a standard undergraduate calculus sequence including ordinary
differential equations. In general, the mix of topics and level of presentation are aimed at
upper-level undergraduates and first-year graduate students in mechanical, aerospace, and
civil engineering.
Gordon Everstine
Gaithersburg, Maryland
January 2010
Contents
1 Numerical Solution of Ordinary Diﬀerential Equations 1
1.1 Euler’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Truncation Error for Euler’s Method . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Runge-Kutta Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Systems of Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 Finite Diﬀerences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6 Boundary Value Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.6.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.6.2 Solving Tridiagonal Systems . . . . . . . . . . . . . . . . . . . . . . . 10
1.7 Shooting Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 Partial Diﬀerential Equations 11
2.1 Classical Equations of Mathematical Physics . . . . . . . . . . . . . . . . . . 11
2.2 Classiﬁcation of Partial Diﬀerential Equations . . . . . . . . . . . . . . . . . 14
2.3 Transformation to Nondimensional Form . . . . . . . . . . . . . . . . . . . . 15
3 Finite Diﬀerence Solution of Partial Diﬀerential Equations 16
3.1 Parabolic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1.1 Explicit Finite Diﬀerence Method . . . . . . . . . . . . . . . . . . . . 16
3.1.2 Crank-Nicolson Implicit Method . . . . . . . . . . . . . . . . . . . . . 18
3.1.3 Derivative Boundary Conditions . . . . . . . . . . . . . . . . . . . . . 20
3.2 Hyperbolic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2.1 The d’Alembert Solution of the Wave Equation . . . . . . . . . . . . 21
3.2.2 Finite Diﬀerences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2.3 Starting Procedure for Explicit Algorithm . . . . . . . . . . . . . . . 26
3.2.4 Nonreﬂecting Boundaries . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3 Elliptic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3.1 Derivative Boundary Conditions . . . . . . . . . . . . . . . . . . . . . 33
4 Direct Finite Element Analysis 34
4.1 Linear Mass-Spring Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.2 Matrix Assembly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.3 Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.4 Example and Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.5 Pin-Jointed Rod Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.6 Pin-Jointed Frame Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.7 Boundary Conditions by Matrix Partitioning . . . . . . . . . . . . . . . . . . 42
4.8 Alternative Approach to Constraints . . . . . . . . . . . . . . . . . . . . . . 43
4.9 Beams in Flexure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.10 Direct Approach to Continuum Problems . . . . . . . . . . . . . . . . . . . . 45
5 Change of Basis 49
5.1 Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.2 Examples of Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.3 Isotropic Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6 Calculus of Variations 57
6.1 Example 1: The Shortest Distance Between Two Points . . . . . . . . . . . . 60
6.2 Example 2: The Brachistochrone . . . . . . . . . . . . . . . . . . . . . . . . 61
6.3 Constraint Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
6.4 Example 3: A Constrained Minimization Problem . . . . . . . . . . . . . . . 63
6.5 Functions of Several Independent Variables . . . . . . . . . . . . . . . . . . . 64
6.6 Example 4: Poisson’s Equation . . . . . . . . . . . . . . . . . . . . . . . . . 66
6.7 Functions of Several Dependent Variables . . . . . . . . . . . . . . . . . . . . 66
7 Variational Approach to the Finite Element Method 66
7.1 Index Notation and Summation Convention . . . . . . . . . . . . . . . . . . 67
7.2 Deriving Variational Principles . . . . . . . . . . . . . . . . . . . . . . . . . . 68
7.3 Shape Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
7.4 Variational Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
7.5 Matrices for Linear Triangle . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
7.6 Interpretation of Functional . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
7.7 Stiﬀness in Elasticity in Terms of Shape Functions . . . . . . . . . . . . . . . 80
7.8 Element Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
7.9 Method of Weighted Residuals (Galerkin’s Method) . . . . . . . . . . . . . . 83
8 Potential Fluid Flow With Finite Elements 85
8.1 Finite Element Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
8.2 Application of Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
8.3 Free Surface Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
8.4 Use of Complex Numbers and Phasors in Wave Problems . . . . . . . . . . . 90
8.5 2-D Wave Maker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
8.6 Linear Triangle Matrices for 2-D Wave Maker Problem . . . . . . . . . . . . 94
8.7 Mechanical Analogy for the Free Surface Problem . . . . . . . . . . . . . . . 95
Bibliography 97
Index 99
List of Figures
1 1-DOF Mass-Spring System. . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Simply-Supported Beam With Distributed Load. . . . . . . . . . . . . . . . . 2
3 Finite Diﬀerence Approximations to Derivatives. . . . . . . . . . . . . . . . . 6
4 The Shooting Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
5 Mesh for 1-D Heat Equation. . . . . . . . . . . . . . . . . . . . . . . . . . . 17
6 Heat Equation Stencil for Explicit Finite Diﬀerence Algorithm. . . . . . . . . 17
7 Heat Equation Stencil for r = 1/10. . . . . . . . . . . . . . . . . . . . . . . . 17
8 Heat Equation Stencils for r = 1/2 and r = 1. . . . . . . . . . . . . . . . . . 17
9 Explicit Finite Diﬀerence Solution With r = 0.48. . . . . . . . . . . . . . . . 18
10 Explicit Finite Diﬀerence Solution With r = 0.52. . . . . . . . . . . . . . . . 19
11 Mesh for Crank-Nicolson. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
12 Stencil for Crank-Nicolson Algorithm. . . . . . . . . . . . . . . . . . . . . . . 20
13 Treatment of Derivative Boundary Conditions. . . . . . . . . . . . . . . . . . 21
14 Propagation of Initial Displacement. . . . . . . . . . . . . . . . . . . . . . . 23
15 Initial Velocity Function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
16 Propagation of Initial Velocity. . . . . . . . . . . . . . . . . . . . . . . . . . . 24
17 Domains of Inﬂuence and Dependence. . . . . . . . . . . . . . . . . . . . . . 25
18 Mesh for Explicit Solution of Wave Equation. . . . . . . . . . . . . . . . . . 26
19 Stencil for Explicit Solution of Wave Equation. . . . . . . . . . . . . . . . . . 26
20 Domains of Dependence for r > 1. . . . . . . . . . . . . . . . . . . . . . . . . 27
21 Finite Diﬀerence Mesh at Nonreﬂecting Boundary. . . . . . . . . . . . . . . . 28
22 Finite Length Simulation of an Inﬁnite Bar. . . . . . . . . . . . . . . . . . . 30
23 Laplace’s Equation on Rectangular Domain. . . . . . . . . . . . . . . . . . . 31
24 Finite Diﬀerence Grid on Rectangular Domain. . . . . . . . . . . . . . . . . 31
25 The Neighborhood of Point (i, j). . . . . . . . . . . . . . . . . . . . . . . . . 32
26 20-Point Finite Difference Mesh. . . . . . . . . . . . . . . . . . . . . . . . . . 32
27 Laplace’s Equation With Dirichlet and Neumann B.C. . . . . . . . . . . . . 33
28 Treatment of Neumann Boundary Conditions. . . . . . . . . . . . . . . . . . 34
29 2-DOF Mass-Spring System. . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
30 A Single Spring Element. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
31 3-DOF Spring System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
32 Spring System With Constraint. . . . . . . . . . . . . . . . . . . . . . . . . . 37
33 4-DOF Spring System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
34 Pin-Jointed Rod Element. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
35 Truss Structure Modeled With Pin-Jointed Rods. . . . . . . . . . . . . . . . 39
36 The Degrees of Freedom for a Pin-Jointed Rod Element in 2-D. . . . . . . . 40
37 Computing 2-D Stiffness of Pin-Jointed Rod. . . . . . . . . . . . . . . . . . . 40
38 Pin-Jointed Frame Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
39 Example With Reactions and Loads at Same DOF. . . . . . . . . . . . . . . 42
40 Large Spring Approach to Constraints. . . . . . . . . . . . . . . . . . . . . . 43
41 DOF for Beam in Flexure (2-D). . . . . . . . . . . . . . . . . . . . . . . . . . 44
42 The Beam Problem Associated With Column 1. . . . . . . . . . . . . . . . . 44
43 The Beam Problem Associated With Column 2. . . . . . . . . . . . . . . . . 45
44 DOF for 2-D Beam Element. . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
45 Plate With Constraints and Loads. . . . . . . . . . . . . . . . . . . . . . . . 46
46 DOF for the Linear Triangular Membrane Element. . . . . . . . . . . . . . . 46
47 Element Coordinate Systems in the Finite Element Method. . . . . . . . . . 50
48 Basis Vectors in Polar Coordinate System. . . . . . . . . . . . . . . . . . . . 50
49 Change of Basis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
50 Element Coordinate System for Pin-Jointed Rod. . . . . . . . . . . . . . . . 56
51 Minimum, Maximum, and Neutral Stationary Values. . . . . . . . . . . . . . 58
52 Curve of Minimum Length Between Two Points. . . . . . . . . . . . . . . . . 60
53 The Brachistochrone Problem. . . . . . . . . . . . . . . . . . . . . . . . . . . 61
54 Several Brachistochrone Solutions. . . . . . . . . . . . . . . . . . . . . . . . . 63
55 A Constrained Minimization Problem. . . . . . . . . . . . . . . . . . . . . . 64
56 Two-Dimensional Finite Element Mesh. . . . . . . . . . . . . . . . . . . . . . 70
57 Triangular Finite Element. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
58 Axial Member (Pin-Jointed Truss Element). . . . . . . . . . . . . . . . . . . 72
59 Neumann Boundary Condition at Internal Boundary. . . . . . . . . . . . . . 74
60 Two Adjacent Finite Elements. . . . . . . . . . . . . . . . . . . . . . . . . . 75
61 Triangular Mesh at Boundary. . . . . . . . . . . . . . . . . . . . . . . . . . . 78
62 Discontinuous Function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
63 Compatibility at an Element Boundary. . . . . . . . . . . . . . . . . . . . . . 82
64 A Vector Analogy for Galerkin’s Method. . . . . . . . . . . . . . . . . . . . . 83
65 Potential Flow Around Solid Body. . . . . . . . . . . . . . . . . . . . . . . . 86
66 Streamlines Around Circular Cylinder. . . . . . . . . . . . . . . . . . . . . . 87
67 Symmetry With Respect to y = 0. . . . . . . . . . . . . . . . . . . . . . . . . 87
68 Antisymmetry With Respect to x = 0. . . . . . . . . . . . . . . . . . . . . . 88
69 Boundary Value Problem for Flow Around Circular Cylinder. . . . . . . . . 88
70 The Free Surface Problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
71 The Complex Amplitude. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
72 Phasor Addition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
73 2-D Wave Maker: Time Domain. . . . . . . . . . . . . . . . . . . . . . . . . 91
74 Graphical Solution of ω^2/α = g tanh(αd). . . . . . . . . . . . . . . . . . . . 93
75 2-D Wave Maker: Frequency Domain. . . . . . . . . . . . . . . . . . . . . . . 93
76 Single DOF Mass-Spring-Dashpot System. . . . . . . . . . . . . . . . . . . . 95
1 Numerical Solution of Ordinary Differential Equations
An ordinary diﬀerential equation (ODE) is an equation that involves an unknown function
(the dependent variable) and some of its derivatives with respect to a single independent
variable. An nth-order equation has the highest order derivative of order n:

    f(x, y, y', y'', . . . , y^(n)) = 0   for a ≤ x ≤ b,    (1.1)

where y = y(x), and y^(n) denotes the nth derivative with respect to x. An nth-order ODE
requires the speciﬁcation of n conditions to assure uniqueness of the solution. If all conditions
are imposed at x = a, the conditions are called initial conditions (I.C.), and the problem
is an initial value problem (IVP). If the conditions are imposed at both x = a and x = b,
the conditions are called boundary conditions (B.C.), and the problem is a boundary value
problem (BVP).
For example, consider the initial value problem
    m ü + ku = f(t),   u(0) = 5,   u̇(0) = 0,    (1.2)

where u = u(t), and dots denote differentiation with respect to the time t. This equation
describes a one-degree-of-freedom mass-spring system which is released from rest and
subjected to a time-dependent force, as illustrated in Fig. 1. Initial value problems generally
arise in time-dependent situations.
An example of a boundary value problem is shown in Fig. 2, for which the diﬀerential
equation is
    EI u''(x) = M(x) = (FL/2) x − F x (x/2),   u(0) = u(L) = 0,    (1.3)
where the independent variable x is the distance from the left end, u is the transverse
displacement, and M(x) is the internal bending moment at x. Boundary value problems
generally arise in static (time-independent) situations. As we will see, IVPs and BVPs must
be treated diﬀerently numerically.
A system of n ﬁrstorder ODEs has the form
    y_1'(x) = f_1(x, y_1, y_2, . . . , y_n)
    y_2'(x) = f_2(x, y_1, y_2, . . . , y_n)
      . . .
    y_n'(x) = f_n(x, y_1, y_2, . . . , y_n)    (1.4)
Figure 1: 1-DOF Mass-Spring System.
Figure 2: Simply-Supported Beam With Distributed Load.
for a ≤ x ≤ b. A single nth-order ODE is equivalent to a system of n first-order ODEs. This
equivalence can be seen by defining a new set of unknowns y_1, y_2, . . . , y_n such that y_1 = y,
y_2 = y', y_3 = y'', . . . , y_n = y^(n−1). For example, consider the third-order IVP
    y''' = x y' + e^x y + x^2 + 1,   x ≥ 0
    y(0) = 1,   y'(0) = 0,   y''(0) = −1.    (1.5)
To obtain an equivalent first-order system, define y_1 = y, y_2 = y', y_3 = y'' to obtain

    y_1' = y_2
    y_2' = y_3
    y_3' = x y_2 + e^x y_1 + x^2 + 1    (1.6)

with initial conditions y_1(0) = 1, y_2(0) = 0, y_3(0) = −1.
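The reduction above is mechanical enough to automate. A minimal Python sketch (the notes themselves give no code; the function name and the list representation of the state vector are my own choices) encodes Eq. 1.6 as a vector-valued right-hand side:

```python
import math

def f_system(x, y):
    """Right-hand side of the first-order system, Eq. 1.6.

    The state vector is y = [y_1, y_2, y_3] = [y, y', y''] for the
    third-order IVP of Eq. 1.5.
    """
    return [y[1],                                          # y_1' = y_2
            y[2],                                          # y_2' = y_3
            x * y[1] + math.exp(x) * y[0] + x**2 + 1.0]    # y_3' from Eq. 1.6

y0 = [1.0, 0.0, -1.0]   # y(0) = 1, y'(0) = 0, y''(0) = -1
```

Any integrator for first-order systems (such as those derived in the following sections) can then be applied to `f_system` starting from `y0`.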
1.1 Euler’s Method
This method is the simplest of the numerical methods for solving initial value problems.
Consider the IVP
    y'(x) = f(x, y),   x ≥ a
    y(a) = η.    (1.7)
To effect a numerical solution, we discretize the x-axis:

    a = x_0 < x_1 < x_2 < · · · ,

where, for uniform spacing,

    x_i − x_{i−1} = h,    (1.8)

and h is considered small. With this discretization, we can approximate the derivative y'(x)
with the forward finite difference

    y'(x) ≈ [y(x + h) − y(x)] / h.    (1.9)
If we let y_k represent the numerical approximation to y(x_k), then

    y'(x_k) ≈ (y_{k+1} − y_k) / h.    (1.10)
Thus, a numerical (difference) approximation to the ODE, Eq. 1.7, is

    (y_{k+1} − y_k) / h = f(x_k, y_k),   k = 0, 1, 2, . . .    (1.11)

or

    y_{k+1} = y_k + h f(x_k, y_k),   k = 0, 1, 2, . . . ,   y_0 = η.    (1.12)
This recursive algorithm is called Euler’s method.
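As a concrete illustration (the code and the test problem y' = y, y(0) = 1 are my additions, not part of the notes), Eq. 1.12 translates into a few lines of Python:

```python
def euler(f, a, eta, h, n):
    """Euler's method, Eq. 1.12: y_{k+1} = y_k + h f(x_k, y_k)."""
    x, y = a, eta
    values = [y]
    for _ in range(n):
        y = y + h * f(x, y)   # one Euler step
        x = x + h
        values.append(y)
    return values

# y' = y, y(0) = 1 has exact solution e^x; integrate to x = 1
y = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
approx_e = y[-1]   # close to e = 2.71828..., with O(h) global error
```

The recursion needs only the previous value, so storing the whole history is optional; it is kept here to mirror the step-by-step character of the method.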
1.2 Truncation Error for Euler’s Method
There are two types of error that arise in numerical methods: truncation error (which arises
primarily from a discretization process) and rounding error (which arises from the ﬁniteness
of number representations in the computer). Reﬁning a mesh to reduce the truncation error
often causes the rounding error to increase.
To estimate the truncation error for Euler's method, we first recall Taylor's theorem with
remainder, which states that a function f(x) can be expanded in a series about the point
x = c:

    f(x) = f(c) + f'(c)(x − c) + [f''(c)/2!](x − c)^2 + · · · + [f^(n)(c)/n!](x − c)^n
           + [f^(n+1)(ξ)/(n + 1)!](x − c)^{n+1},    (1.13)
where ξ is between x and c. The last term in Eq. 1.13 is referred to as the remainder term.
Note also that Eq. 1.13 is an equality, not an approximation.
In Eq. 1.13, let x = x_{k+1} and c = x_k, in which case

    y(x_{k+1}) = y(x_k) + h y'(x_k) + (1/2) h^2 y''(ξ_k),    (1.14)

where x_k ≤ ξ_k ≤ x_{k+1}.
Since y satisfies the ODE, Eq. 1.7,

    y'(x_k) = f(x_k, y(x_k)),    (1.15)

where y(x_k) is the actual solution at x_k. Hence,

    y(x_{k+1}) = y(x_k) + h f(x_k, y(x_k)) + (1/2) h^2 y''(ξ_k).    (1.16)
Like Eq. 1.13, this equation is an equality, not an approximation.
By comparing this last equation to Euler's approximation, Eq. 1.12, it is clear that Euler's
method is obtained by omitting the remainder term (1/2) h^2 y''(ξ_k) in the Taylor expansion of
y(x_{k+1}) at the point x_k. The omitted term accounts for the truncation error in Euler's
method at each step. This error is a local error, since the error occurs at each step regardless
of the error in the previous step. The accumulation of local errors is referred to as the global
error, which is the more important error but much more diﬃcult to compute.
Most algorithms for solving ODEs are derived by expanding the solution function in a
Taylor series and then omitting certain terms.
1.3 Runge-Kutta Methods
Euler’s method is a first-order method, since it was obtained by omitting terms in the Taylor
series expansion containing powers of h greater than one. To derive a second-order method,
we again use Taylor’s theorem with remainder to obtain
    y(x_{k+1}) = y(x_k) + h y'(x_k) + (1/2) h^2 y''(x_k) + (1/6) h^3 y'''(ξ_k)    (1.17)

for some ξ_k such that x_k ≤ ξ_k ≤ x_{k+1}. Since, from the ODE (Eq. 1.7),

    y'(x_k) = f(x_k, y(x_k)),    (1.18)
we can approximate

    y''(x) = df(x, y(x))/dx = [f(x + h, y(x + h)) − f(x, y(x))] / h + O(h),    (1.19)

where we use the “big O” notation O(h) to represent terms of order h as h → 0. [For
example, 2h^3 = O(h^3), 3h^2 + 5h^4 = O(h^2), h^2 O(h) = O(h^3), and −287h^4 e^{−h} = O(h^4).] From
these last two equations, Eq. 1.17 can then be written as

    y(x_{k+1}) = y(x_k) + h f(x_k, y(x_k)) + (h/2)[f(x_{k+1}, y(x_{k+1})) − f(x_k, y(x_k))] + O(h^3),    (1.20)

which leads (after combining terms) to the difference equation

    y_{k+1} = y_k + (1/2) h [f(x_k, y_k) + f(x_{k+1}, y_{k+1})].    (1.21)
This formula is a second-order approximation to the original differential equation y'(x) =
f(x, y) (Eq. 1.7), but it is an inconvenient approximation, since y_{k+1} appears on both sides
of the formula. (Such a formula is called an implicit method, since y_{k+1} is defined implicitly.
An explicit method would have y_{k+1} appear only on the left-hand side.)
To obtain instead an explicit formula, we use the approximation

    y_{k+1} = y_k + h f(x_k, y_k)    (1.22)

to obtain

    y_{k+1} = y_k + (1/2) h [f(x_k, y_k) + f(x_{k+1}, y_k + h f(x_k, y_k))].    (1.23)
This formula is the Runge-Kutta formula of second order. Other higher-order formulas
can be derived similarly. For example, a fourth-order formula turns out to be popular in
applications.
We illustrate the implementation of the second-order Runge-Kutta formula, Eq. 1.23,
with the following algorithm. We first make the following three definitions:

    a_k = f(x_k, y_k),    (1.24)
    b_k = y_k + h a_k,    (1.25)
    c_k = f(x_{k+1}, b_k),    (1.26)

in which case

    y_{k+1} = y_k + (1/2) h (a_k + c_k).    (1.27)

The calculations can then be performed conveniently with the following spreadsheet:

    k     x_k     y_k     a_k     b_k     c_k     y_{k+1}
    0
    1
    2
    . . .
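In code, the tabulated algorithm is a direct transcription (Python and the test problem are my additions; the variable names follow the a_k, b_k, c_k of Eqs. 1.24-1.27):

```python
def rk2(f, a, eta, h, n):
    """Second-order Runge-Kutta method, Eqs. 1.24-1.27."""
    x, y = a, eta
    for k in range(n):
        ak = f(x, y)                  # a_k = f(x_k, y_k)       (Eq. 1.24)
        bk = y + h * ak               # b_k = y_k + h a_k       (Eq. 1.25)
        ck = f(x + h, bk)             # c_k = f(x_{k+1}, b_k)   (Eq. 1.26)
        y = y + 0.5 * h * (ak + ck)   # y_{k+1}                 (Eq. 1.27)
        x = x + h
    return y

# y' = y, y(0) = 1: even the coarse step h = 0.1 gives y(1)
# within about 0.2% of the exact value e
y1 = rk2(lambda x, y: y, 0.0, 1.0, 0.1, 10)
```

Each row of the spreadsheet corresponds to one pass through the loop body, with `bk` playing the role of the Euler predictor.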
1.4 Systems of Equations
The methods just derived can be extended directly to systems of equations. Consider the
initial value problem involving two equations:

    y'(x) = f(x, y(x), z(x))
    z'(x) = g(x, y(x), z(x))
    y(a) = η,   z(a) = ξ.    (1.28)
We recall from Eq. 1.12 that, for one equation, Euler's method uses the recursive formula

    y_{k+1} = y_k + h f(x_k, y_k).    (1.29)
This formula is directly extendible to two equations as

    y_{k+1} = y_k + h f(x_k, y_k, z_k)
    z_{k+1} = z_k + h g(x_k, y_k, z_k)
    y_0 = η,   z_0 = ξ.    (1.30)
We recall from Eq. 1.23 that, for one equation, the second-order Runge-Kutta method
uses the recursive formula

    y_{k+1} = y_k + (1/2) h [f(x_k, y_k) + f(x_{k+1}, y_k + h f(x_k, y_k))].    (1.31)

For two equations, this formula becomes

    y_{k+1} = y_k + (1/2) h [f(x_k, y_k, z_k) + f(x_{k+1}, y_k + h f(x_k, y_k, z_k), z_k + h g(x_k, y_k, z_k))]
    z_{k+1} = z_k + (1/2) h [g(x_k, y_k, z_k) + g(x_{k+1}, y_k + h f(x_k, y_k, z_k), z_k + h g(x_k, y_k, z_k))]
    y_0 = η,   z_0 = ξ.    (1.32)
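A sketch of Eq. 1.30 in Python (my own code; as an assumed test case I take the free vibration m ü + ku = 0 with m = k = 1, u(0) = 1, u̇(0) = 0, rewritten as the system y' = z, z' = −y, whose exact solution is u = cos t):

```python
def euler_system(f, g, a, eta, xi, h, n):
    """Euler's method for two coupled first-order equations, Eq. 1.30."""
    x, y, z = a, eta, xi
    for _ in range(n):
        # both updates use the *old* (y, z), as Eq. 1.30 requires
        y, z = y + h * f(x, y, z), z + h * g(x, y, z)
        x = x + h
    return y, z

# y' = z, z' = -y with y(0) = 1, z(0) = 0; integrate to x = 1,
# where the exact values are cos 1 and -sin 1
u, v = euler_system(lambda x, y, z: z,
                    lambda x, y, z: -y,
                    0.0, 1.0, 0.0, 0.0005, 2000)
```

The simultaneous tuple assignment matters: updating y first and then using the new y in the z update would be a different (and slightly inconsistent) scheme.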
Figure 3: Finite Difference Approximations to Derivatives (backward, forward, and central difference slopes at x).
1.5 Finite Diﬀerences
Before addressing boundary value problems, we want to develop further the notion of ﬁnite
diﬀerence approximation of derivatives.
Consider a function y(x) for which we want to compute the derivative y'(x) at some point
x. If we discretize the x-axis with uniform spacing h, we could approximate the derivative
using the forward difference formula

    y'(x) ≈ [y(x + h) − y(x)] / h,    (1.33)

which is the slope of the line to the right of x (Fig. 3). We could also approximate the
derivative using the backward difference formula

    y'(x) ≈ [y(x) − y(x − h)] / h,    (1.34)

which is the slope of the line to the left of x. Since, in general, there is no basis for choosing
one of these approximations over the other, an intuitively more appealing approximation
results from the average of these formulas:

    y'(x) ≈ (1/2) { [y(x + h) − y(x)] / h + [y(x) − y(x − h)] / h }    (1.35)

or

    y'(x) ≈ [y(x + h) − y(x − h)] / (2h).    (1.36)
This formula, which is more accurate than either the forward or backward diﬀerence formulas,
is the central ﬁnite diﬀerence approximation to the derivative.
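The accuracy claim is easy to check numerically. In the Python sketch below (function names are mine), the forward formula of Eq. 1.33 and the central formula of Eq. 1.36 are applied to sin x at x = 1, where the exact derivative is cos 1:

```python
import math

def forward_diff(y, x, h):
    return (y(x + h) - y(x)) / h            # Eq. 1.33: error O(h)

def central_diff(y, x, h):
    return (y(x + h) - y(x - h)) / (2 * h)  # Eq. 1.36: error O(h^2)

h = 1.0e-3
err_fwd = abs(forward_diff(math.sin, 1.0, h) - math.cos(1.0))
err_cen = abs(central_diff(math.sin, 1.0, h) - math.cos(1.0))
# with h = 10^-3, err_fwd is on the order of 10^-4,
# while err_cen is on the order of 10^-7
```

The central error is smaller by several orders of magnitude at the same cost of two function evaluations.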
Similar approximations can be derived for second derivatives. Using forward differences,

    y''(x) ≈ [y'(x + h) − y'(x)] / h
           ≈ [y(x + 2h) − y(x + h)] / h^2 − [y(x + h) − y(x)] / h^2
           = [y(x + 2h) − 2y(x + h) + y(x)] / h^2.    (1.37)

This formula, which involves three points forward of x, is the forward difference approximation
to the second derivative. Similarly, using backward differences,

    y''(x) ≈ [y'(x) − y'(x − h)] / h
           ≈ [y(x) − y(x − h)] / h^2 − [y(x − h) − y(x − 2h)] / h^2
           = [y(x) − 2y(x − h) + y(x − 2h)] / h^2.    (1.38)

This formula, which involves three points backward of x, is the backward difference approximation
to the second derivative. The central finite difference approximation to the second
derivative uses instead the three points which bracket x:

    y''(x) ≈ [y(x + h) − 2y(x) + y(x − h)] / h^2.    (1.39)
This last result can also be obtained by using forward diﬀerences for the second derivative
followed by backward diﬀerences for the ﬁrst derivatives, or vice versa.
The central difference formula for second derivatives can alternatively be derived using
Taylor series expansions:

    y(x + h) = y(x) + h y'(x) + (h^2/2) y''(x) + (h^3/6) y'''(x) + O(h^4).    (1.40)

Similarly, by replacing h by −h,

    y(x − h) = y(x) − h y'(x) + (h^2/2) y''(x) − (h^3/6) y'''(x) + O(h^4).    (1.41)

The addition of these two equations yields

    y(x + h) + y(x − h) = 2y(x) + h^2 y''(x) + O(h^4)    (1.42)

or

    y''(x) = [y(x + h) − 2y(x) + y(x − h)] / h^2 + O(h^2),    (1.43)

which, because of the error term, shows that the formula is second-order accurate.
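The O(h^2) behavior of Eq. 1.43 can be confirmed numerically: halving h should reduce the error by a factor of about four. A small Python check (my own, using y = sin x, for which y'' = −sin x):

```python
import math

def second_central(y, x, h):
    """Central difference approximation to y''(x), Eq. 1.39."""
    return (y(x + h) - 2.0 * y(x) + y(x - h)) / h**2

exact = -math.sin(1.0)
e1 = abs(second_central(math.sin, 1.0, 1.0e-2) - exact)
e2 = abs(second_central(math.sin, 1.0, 5.0e-3) - exact)
ratio = e1 / e2   # near 4 for an O(h^2) formula
```

Note that h cannot be made arbitrarily small here: for very small h the subtraction in the numerator loses significance and rounding error dominates, the tradeoff mentioned in Section 1.2.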
1.6 Boundary Value Problems
The techniques for initial value problems (IVPs) are, in general, not directly applicable to
boundary value problems (BVPs). Consider the BVP

    y''(x) = f(x, y, y'),   a ≤ x ≤ b
    y(a) = η_1,   y(b) = η_2.    (1.44)

This equation could be nonlinear, depending on f. The methods used for IVPs started at
one end (x = a) and computed the solution step by step for increasing x. For a BVP, not
enough information is given at either endpoint to allow a step-by-step solution.
Consider first a special case of Eq. 1.44 for which the right-hand side depends only on x
and y:

    y''(x) = f(x, y),   a ≤ x ≤ b
    y(a) = η_1,   y(b) = η_2.    (1.45)

Subdivide the interval (a, b) into n equal subintervals:

    h = (b − a) / n,    (1.46)

in which case

    x_k = a + kh,   k = 0, 1, 2, . . . , n.    (1.47)

Let y_k denote the numerical approximation to the exact solution at x_k. That is,

    y_k ≈ y(x_k).    (1.48)
Then, if we use a central difference approximation to the second derivative in Eq. 1.45, the
ODE can be approximated by

    [y(x_{k−1}) − 2y(x_k) + y(x_{k+1})] / h^2 ≈ f(x_k, y_k),    (1.49)

which suggests the difference equation

    y_{k−1} − 2y_k + y_{k+1} = h^2 f(x_k, y_k),   k = 1, 2, 3, . . . , n − 1.    (1.50)

Since this system of equations has n − 1 equations in n + 1 unknowns, the two boundary
conditions are needed to obtain a nonsingular system:

    y_0 = η_1,   y_n = η_2.    (1.51)
The resulting system is thus

    −2y_1 + y_2                   = −η_1 + h^2 f(x_1, y_1)
    y_1 − 2y_2 + y_3              =        h^2 f(x_2, y_2)
          y_2 − 2y_3 + y_4        =        h^2 f(x_3, y_3)
                y_3 − 2y_4 + y_5  =        h^2 f(x_4, y_4)
      . . .
    y_{n−2} − 2y_{n−1}            = −η_2 + h^2 f(x_{n−1}, y_{n−1}),    (1.52)

which is a tridiagonal system of n − 1 equations in n − 1 unknowns. This system is linear or
nonlinear, depending on f.
1.6.1 Example
Consider

    y'' = −y(x),   0 ≤ x ≤ π/2
    y(0) = 1,   y(π/2) = 0.    (1.53)

In Eq. 1.45, f(x, y) = −y, η_1 = 1, η_2 = 0. Thus, the right-hand side of the ith equation in
Eq. 1.52 has −h^2 y_i, which can be moved to the left-hand side to yield the system
    −(2 − h^2) y_1 + y_2              = −1
    y_1 − (2 − h^2) y_2 + y_3         =  0
          y_2 − (2 − h^2) y_3 + y_4   =  0
      . . .
    y_{n−2} − (2 − h^2) y_{n−1}       =  0.    (1.54)

We first solve this tridiagonal system of simultaneous equations with n = 8 (i.e., h = π/16),
and compare with the exact solution y(x) = cos x:
    k    x_k          y_k          Exact y(x_k)   Absolute Error   % Error
0 0 1 1 0 0
1 0.1963495 0.9812186 0.9807853 0.0004334 0.0441845
2 0.3926991 0.9246082 0.9238795 0.0007287 0.0788715
3 0.5890486 0.8323512 0.8314696 0.0008816 0.1060315
4 0.7853982 0.7080045 0.7071068 0.0008977 0.1269565
5 0.9817477 0.5563620 0.5555702 0.0007917 0.1425085
6 1.1780972 0.3832699 0.3826834 0.0005865 0.1532604
7 1.3744468 0.1954016 0.1950903 0.0003113 0.1595767
8 1.5707963 0 0 0 0
We then solve this system with n = 40 (h = π/80):

    k    x_k          y_k          Exact y(x_k)   Absolute Error   % Error
0 0 1 1 0 0
5 0.1963495 0.9808025 0.9807853 0.0000172 0.0017571
10 0.3926991 0.9239085 0.9238795 0.0000290 0.0031363
15 0.5890486 0.8315047 0.8314696 0.0000351 0.0042161
20 0.7853982 0.7071425 0.7071068 0.0000357 0.0050479
25 0.9817477 0.5556017 0.5555702 0.0000315 0.0056660
30 1.1780972 0.3827068 0.3826834 0.0000233 0.0060933
35 1.3744468 0.1951027 0.1950903 0.0000124 0.0063443
40 1.5707963 0 0 0 0
Notice that a mesh reﬁnement by a factor of 5 has reduced the error by a factor of about
25. This behavior is typical of a numerical method which is second-order accurate.
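The n = 8 case above can be reproduced in a few lines. The sketch below (my own code, using NumPy's dense solver for brevity rather than the tridiagonal algorithm of the next section) assembles the system of Eq. 1.54 and compares with cos x:

```python
import numpy as np

n = 8
h = (np.pi / 2) / n
x = np.linspace(0.0, np.pi / 2, n + 1)

# tridiagonal coefficient matrix of Eq. 1.54:
# -(2 - h^2) on the diagonal, 1 on the sub- and superdiagonals
A = np.zeros((n - 1, n - 1))
np.fill_diagonal(A, -(2.0 - h**2))
np.fill_diagonal(A[1:, :], 1.0)    # subdiagonal
np.fill_diagonal(A[:, 1:], 1.0)    # superdiagonal

b = np.zeros(n - 1)
b[0] = -1.0   # the boundary value y_0 = 1 moves to the right-hand side

y = np.concatenate(([1.0], np.linalg.solve(A, b), [0.0]))
err = np.max(np.abs(y - np.cos(x)))   # about 9e-4, matching the table
```

Changing `n` to 40 reproduces the second table and the factor-of-25 error reduction.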
1.6.2 Solving Tridiagonal Systems
Tridiagonal systems are particularly easy (and fast) to solve using Gaussian elimination. It
is convenient to solve such systems using the following notation:

    d_1 x_1 + u_1 x_2                     = b_1
    l_2 x_1 + d_2 x_2 + u_2 x_3           = b_2
              l_3 x_2 + d_3 x_3 + u_3 x_4 = b_3
      . . .
              l_n x_{n−1} + d_n x_n       = b_n,    (1.55)

where d_i, u_i, and l_i are, respectively, the diagonal, upper, and lower matrix entries in Row
i. All coefficients can now be stored in three one-dimensional arrays, D(·), U(·), and L(·),
instead of a full two-dimensional array A(I,J). The solution algorithm (reduction to upper
triangular form by Gaussian elimination followed by backsolving) can now be summarized
as follows:
1. For k = 1, 2, . . . , n − 1:  [k = pivot row]
   (a) m = −l_{k+1} / d_k  [m = multiplier needed to annihilate term below]
   (b) d_{k+1} = d_{k+1} + m u_k  [new diagonal entry in next row]
   (c) b_{k+1} = b_{k+1} + m b_k  [new rhs in next row]
2. x_n = b_n / d_n  [start of backsolve]
3. For k = n − 1, n − 2, . . . , 1:  [backsolve loop]
   (a) x_k = (b_k − u_k x_{k+1}) / d_k
Tridiagonal systems arise in a variety of applications, including the Crank-Nicolson finite
difference method for solving parabolic partial differential equations.
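A direct Python transcription of the algorithm above (my own code; note that the indexing here is zero-based, so `l[k]` is the subdiagonal entry in row k and `l[0]` is unused):

```python
def solve_tridiagonal(l, d, u, b):
    """Gaussian elimination for a tridiagonal system (the Thomas
    algorithm), following steps 1-3 above. d is the diagonal, u the
    superdiagonal (u[-1] unused), l the subdiagonal (l[0] unused),
    and b the right-hand side."""
    n = len(d)
    d, b = list(d), list(b)          # work on copies
    for k in range(n - 1):           # step 1: forward elimination
        m = -l[k + 1] / d[k]         # (a) multiplier annihilating l[k+1]
        d[k + 1] += m * u[k]         # (b) new diagonal entry in next row
        b[k + 1] += m * b[k]         # (c) new rhs in next row
    x = [0.0] * n
    x[n - 1] = b[n - 1] / d[n - 1]   # step 2: start of backsolve
    for k in range(n - 2, -1, -1):   # step 3: backsolve loop
        x[k] = (b[k] - u[k] * x[k + 1]) / d[k]
    return x

# small 3x3 check; the solution of this system is [1, 1, 1]
x = solve_tridiagonal([0.0, 1.0, 1.0], [2.0, 2.0, 2.0],
                      [1.0, 1.0, 0.0], [3.0, 4.0, 3.0])
```

Both loops do O(1) work per row, so the whole solve is O(n), versus O(n^3) for full Gaussian elimination.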
1.7 Shooting Methods
Shooting methods provide a way to convert a boundary value problem to a trial-and-error
initial value problem. It is useful to have additional ways to solve BVPs, particularly if the
equations are nonlinear.
Consider the following two-point BVP:

    y'' = f(x, y, y'),   a ≤ x ≤ b
    y(a) = A
    y(b) = B.    (1.56)

To solve this problem using the shooting method, we compute solutions of the IVP

    y'' = f(x, y, y'),   x ≥ a
    y(a) = A
    y'(a) = M    (1.57)
Figure 4: The Shooting Method (trial slopes M_1 and M_2 bracketing the target value B at x = b).
for various values of M (the slope at the left end of the domain) until two solutions, one
with y(b) < B and the other with y(b) > B, have been found (Fig. 4). The initial slope M
can then be interpolated until a solution is found (i.e., y(b) = B).
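A sketch of the procedure in Python (entirely my own construction: the BVP is Eq. 1.53, the inner integrator is the second-order Runge-Kutta method of Section 1.3 applied to the equivalent first-order system, and bisection stands in for the interpolation described above). The exact solution is y = cos x, so the iteration should drive M toward the exact initial slope y'(0) = 0:

```python
import math

def y_at_b(M, n=1000):
    """Integrate y'' = -y as the system y' = z, z' = -y from x = 0
    to b = pi/2 with y(0) = 1 and trial slope y'(0) = M; return y(b)."""
    h = (math.pi / 2) / n
    y, z = 1.0, M
    for _ in range(n):
        ky1, kz1 = z, -y                     # slopes at the start
        yp, zp = y + h * ky1, z + h * kz1    # Euler predictor
        ky2, kz2 = zp, -yp                   # slopes at the predictor
        y = y + 0.5 * h * (ky1 + ky2)        # RK2 update (Eq. 1.32 form)
        z = z + 0.5 * h * (kz1 + kz2)
    return y

# bracket the target y(b) = B = 0 and bisect on the initial slope M
lo, hi = -1.0, 1.0        # y_at_b(-1) < 0 < y_at_b(1)
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if y_at_b(mid) > 0.0:
        hi = mid
    else:
        lo = mid
M = 0.5 * (lo + hi)       # converges toward the exact slope 0
```

Because this particular equation is linear, y(b) varies linearly with M and a single interpolation between two trials would already give the answer; bisection is shown because it also works when f is nonlinear.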
2 Partial Diﬀerential Equations
A partial diﬀerential equation (PDE) is an equation that involves an unknown function
(the dependent variable) and some of its partial derivatives with respect to two or more
independent variables. An nth-order equation has the highest order derivative of order n.
2.1 Classical Equations of Mathematical Physics
1. Laplace’s equation (the potential equation)

       ∇^2 φ = 0    (2.1)

   In Cartesian coordinates, the vector operator del is defined as

       ∇ = e_x ∂/∂x + e_y ∂/∂y + e_z ∂/∂z.    (2.2)
   ∇^2 is referred to as the Laplacian operator and is given by

       ∇^2 = ∇ · ∇ = ∂^2/∂x^2 + ∂^2/∂y^2 + ∂^2/∂z^2.    (2.3)
   Thus, Laplace’s equation in Cartesian coordinates is

       ∂^2 φ/∂x^2 + ∂^2 φ/∂y^2 + ∂^2 φ/∂z^2 = 0.    (2.4)
   In cylindrical coordinates, the Laplacian is

       ∇^2 φ = (1/r) ∂/∂r (r ∂φ/∂r) + (1/r^2) ∂^2 φ/∂θ^2 + ∂^2 φ/∂z^2.    (2.5)
   Laplace’s equation arises in incompressible fluid flow (in which case φ is the velocity
   potential), gravitational potential problems, electrostatics, magnetostatics, steady-state
   heat conduction with no sources (in which case φ is the temperature), and torsion
   of bars in elasticity (in which case φ(x, y) is the warping function). Functions which
   satisfy Laplace’s equation are referred to as harmonic functions.
2. Poisson’s equation

       ∇^2 φ + g = 0    (2.6)

   This equation arises in steady-state heat conduction with distributed sources (φ =
   temperature) and torsion of bars in elasticity (in which case φ(x, y) is the stress
   function).
3. wave equation

       ∇^2 φ = (1/c^2) φ̈    (2.7)

   In this equation, dots denote time derivatives, e.g.,

       φ̈ = ∂^2 φ/∂t^2,    (2.8)

   and c is the speed of propagation. The wave equation arises in several physical situations:
   (a) transverse vibrations of a string
       For this one-dimensional problem, φ = φ(x, t) is the transverse displacement of
       the string, and

           c = √(T/(ρA)),    (2.9)

       where T is the string tension, ρ is the density of the string material, and A is the
       cross-sectional area of the string. The denominator ρA is mass per unit length.
   (b) longitudinal vibrations of a bar
       For this one-dimensional problem, φ = φ(x, t) represents the longitudinal
       displacement, and

           c = √(E/ρ),    (2.10)

       where E and ρ are, respectively, the modulus of elasticity and density of the bar
       material.
   (c) transverse vibrations of a membrane
       For this two-dimensional problem, φ = φ(x, y, t) is the transverse displacement of
       the membrane (e.g., drum head), and

           c = √(T/m),    (2.11)

       where T is the tension per unit length, m is the mass per unit area (i.e., m = ρt),
       and t is the membrane thickness.
   (d) acoustics
       For this three-dimensional problem, φ = φ(x, y, z, t) is the fluid pressure or
       velocity potential, and c is the speed of sound, where

           c = √(B/ρ),    (2.12)

       where B = ρc^2 is the fluid bulk modulus, and ρ is the density.
4. Helmholtz equation (reduced wave equation)
∇²φ + k²φ = 0 (2.13)
The Helmholtz equation is the time-harmonic form of the wave equation, in which interest is restricted to functions which vary sinusoidally in time. To obtain the Helmholtz equation, we substitute
φ(x, y, z, t) = φ₀(x, y, z) cos ωt (2.14)
into the wave equation, Eq. 2.7, to obtain
∇²φ₀ cos ωt = −(ω²/c²) φ₀ cos ωt. (2.15)
If we deﬁne the wave number k = ω/c, this equation becomes
∇²φ₀ + k²φ₀ = 0. (2.16)
With the understanding that the unknown depends only on the spatial variables, the subscript is unnecessary, and we obtain the Helmholtz equation, Eq. 2.13. This equation arises in steady-state (time-harmonic) situations involving the wave equation, e.g., steady-state acoustics.
5. heat equation
∇·(k∇φ) + Q = ρc φ̇ (2.17)
In this equation, φ represents the temperature T, k is the thermal conductivity, Q is the
internal heat generation per unit volume per unit time, ρ is the material density, and c
is the material speciﬁc heat (the heat required per unit mass to raise the temperature by
one degree). The thermal conductivity k is deﬁned by Fourier’s law of heat conduction:
q̂ₓ = −kA dT/dx, (2.18)
where q̂ₓ is the rate of heat conduction (energy per unit time) with typical units J/s or BTU/hr, and A is the area through which the heat flows. Alternatively, Fourier's law is written
qₓ = −k dT/dx, (2.19)
where qₓ is energy per unit time per unit area (with typical units J/(s·m²)). There are several special cases of the heat equation of interest:
(a) homogeneous material (k = constant):
k∇²φ + Q = ρc φ̇ (2.20)
(b) homogeneous material, steadystate (timeindependent):
∇²φ = −Q/k (Poisson's equation) (2.21)
(c) homogeneous material, steadystate, no sources (Q = 0):
∇²φ = 0 (Laplace's equation) (2.22)
2.2 Classiﬁcation of Partial Diﬀerential Equations
Of the classical PDEs summarized in the preceding section, some involve time, and some
don’t, so presumably their solutions would exhibit fundamental diﬀerences. Of those that
involve time (wave and heat equations), the order of the time derivative is diﬀerent, so the
fundamental character of their solutions may also diﬀer. Both these speculations turn out
to be true.
Consider the general, secondorder, linear partial diﬀerential equation in two variables
Au_xx + Bu_xy + Cu_yy + Du_x + Eu_y + Fu = G, (2.23)
where the coeﬃcients are functions of the independent variables x and y (i.e., A = A(x, y),
B = B(x, y), etc.), and we have used subscripts to denote partial derivatives, e.g.,
u_xx = ∂²u/∂x². (2.24)
The quantity B² − 4AC is referred to as the discriminant of the equation. The behavior of
the solution of Eq. 2.23 depends on the sign of the discriminant according to the following
table:
B² − 4AC   Equation Type   Typical Physics Described
< 0        Elliptic        Steady-state phenomena
= 0        Parabolic       Heat flow and diffusion processes
> 0        Hyperbolic      Vibrating systems and wave motion
The names elliptic, parabolic, and hyperbolic arise from the analogy with the conic sections
in analytic geometry.
Given these deﬁnitions, we can classify the common equations of mathematical physics
already encountered as follows:
Name       Eq. Number   Eq. in Two Variables     A, B, C                    Type
Laplace    Eq. 2.1      u_xx + u_yy = 0          A = C = 1, B = 0           Elliptic
Poisson    Eq. 2.6      u_xx + u_yy = −g         A = C = 1, B = 0           Elliptic
Wave       Eq. 2.7      u_xx − u_yy/c² = 0       A = 1, C = −1/c², B = 0    Hyperbolic
Helmholtz  Eq. 2.13     u_xx + u_yy + k²u = 0    A = C = 1, B = 0           Elliptic
Heat       Eq. 2.17     ku_xx − ρcu_y = −Q       A = k, B = C = 0           Parabolic
In the wave and heat equations in the above table, y represents the time variable. The behavior of the solutions of equations of different types differs: elliptic equations characterize static (time-independent) situations, and the other two types characterize time-dependent situations.
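The classification rule is mechanical enough to code directly. A minimal sketch (the function name is ours, not the text's), applied to entries of the table above:

```python
def classify(A, B, C):
    """Classify Eq. 2.23 by the sign of the discriminant B^2 - 4AC."""
    d = B * B - 4 * A * C
    if d < 0:
        return "elliptic"
    if d == 0:
        return "parabolic"
    return "hyperbolic"

laplace = classify(1, 0, 1)       # elliptic
wave    = classify(1, 0, -0.25)   # hyperbolic (wave equation with c = 2)
heat    = classify(1, 0, 0)       # parabolic
```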
2.3 Transformation to Nondimensional Form
It is often convenient, when solving equations, to transform the equation to a nondimensional form. Consider, for example, the one-dimensional wave equation
∂²u/∂x² = (1/c²) ∂²u/∂t², (2.25)
where u is displacement (of dimension length), t is time, and c is the speed of propagation
(of dimension length/time). Let L represent some characteristic length associated with the
problem. We deﬁne the nondimensional variables
x̄ = x/L,  ū = u/L,  t̄ = ct/L, (2.26)
in which case the derivatives in Eq. 2.25 become
∂u/∂x = ∂(Lū)/∂(Lx̄) = ∂ū/∂x̄, (2.27)
∂²u/∂x² = ∂/∂x (∂u/∂x) = ∂/∂x (∂ū/∂x̄) = ∂/∂x̄ (∂ū/∂x̄) (dx̄/dx) = (1/L) ∂²ū/∂x̄², (2.28)
∂u/∂t = ∂(Lū)/∂(Lt̄/c) = c ∂ū/∂t̄, (2.29)
∂²u/∂t² = ∂/∂t (∂u/∂t) = ∂/∂t (c ∂ū/∂t̄) = c ∂/∂t̄ (∂ū/∂t̄) (dt̄/dt) = c (∂²ū/∂t̄²)(c/L) = (c²/L) ∂²ū/∂t̄². (2.30)
Thus, from Eq. 2.25,
(1/L) ∂²ū/∂x̄² = (1/c²) · (c²/L) ∂²ū/∂t̄² (2.31)
or
∂²ū/∂x̄² = ∂²ū/∂t̄². (2.32)
This is the nondimensional wave equation.
This last equation can also be obtained more easily by direct substitution of Eq. 2.26
into Eq. 2.25 and factoring out the constants:
∂²(Lū)/∂(Lx̄)² = (1/c²) ∂²(Lū)/∂(Lt̄/c)² (2.33)
or
(L/L²) ∂²ū/∂x̄² = (1/c²)(c²L/L²) ∂²ū/∂t̄². (2.34)
3 Finite Difference Solution of Partial Differential Equations
3.1 Parabolic Equations
Consider the boundary-initial value problem (BIVP)
u_xx = (1/c) u_t,  u = u(x, t),  0 < x < 1,  t > 0
u(0, t) = u(1, t) = 0 (boundary conditions)
u(x, 0) = f(x) (initial condition),
(3.1)
where c is a constant. This problem represents transient heat conduction in a rod with the
ends held at zero temperature and an initial temperature proﬁle f(x).
To solve this problem numerically, we discretize x and t such that
x_i = ih,  i = 0, 1, 2, . . .
t_j = jk,  j = 0, 1, 2, . . . .
(3.2)
3.1.1 Explicit Finite Diﬀerence Method
Let u_i,j be the numerical approximation to u(x_i, t_j). We approximate u_t with the forward finite difference
u_t ≈ (u_i,j+1 − u_i,j)/k (3.3)
and u_xx with the central finite difference
u_xx ≈ (u_i+1,j − 2u_i,j + u_i−1,j)/h². (3.4)
The finite difference approximation to the PDE is then
(u_i+1,j − 2u_i,j + u_i−1,j)/h² = (u_i,j+1 − u_i,j)/(ck). (3.5)
Define the parameter r as
r = ck/h² = c Δt/(Δx)², (3.6)
in which case Eq. 3.5 becomes
u_i,j+1 = r u_i−1,j + (1 − 2r) u_i,j + r u_i+1,j. (3.7)
The domain of the problem and the mesh are illustrated in Fig. 5. Eq. 3.7 is a recursive
relationship giving u in a given row (time) in terms of three consecutive values of u in the row
below (one time step earlier). This equation is referred to as an explicit formula since one
unknown value can be found directly in terms of several other known values. The recursive
relationship can also be sketched with the stencil shown in Fig. 6. For example, for r = 1/10,
we have the stencil shown in Fig. 7. That is, for r = 1/10, the solution (temperature) at
Figure 5: Mesh for 1D Heat Equation.
Figure 6: Heat Equation Stencil for Explicit Finite Difference Algorithm.
Figure 7: Heat Equation Stencil for r = 1/10.
Figure 8: Heat Equation Stencils for r = 1/2 and r = 1.
17
¸
`
x
u
0.0 0.2 0.4 0.6 0.8 1.0
0.1
0.2
0.3
0.4
0.5
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
t = 0.048
t = 0.096
t = 0.192
r = 0.48
Exact
¸ ¸
F.D.
Figure 9: Explicit Finite Diﬀerence Solution With r = 0.48.
the new point depends on the three points at the previous time step with a 1-8-1 weighting.
Notice that, if r = 1/2, the solution at the new point is independent of the closest point,
as illustrated in Fig. 8. For r > 1/2 (e.g., r = 1), the new point depends negatively on the
closest point (Fig. 8), which is counterintuitive. It can be shown that, for a stable solution,
0 < r ≤ 1/2. An unstable solution is one for which small errors grow rather than decay as
the solution evolves.
The instability which occurs for r > 1/2 can be illustrated with the following example.
Consider the boundary-initial value problem (in nondimensional form)
u_xx = u_t,  0 < x < 1,  u = u(x, t)
u(0, t) = u(1, t) = 0
u(x, 0) = f(x) = 2x for 0 ≤ x ≤ 1/2,  2(1 − x) for 1/2 ≤ x ≤ 1.
(3.8)
The physical problem is to compute the temperature history u(x, t) for a bar with a prescribed initial temperature distribution f(x), no internal heat sources, and zero temperature prescribed at both ends. We solve this problem using the explicit finite difference algorithm with h = Δx = 0.1 and k = Δt = rh² = r(Δx)² for two different values of r: r = 0.48 and r = 0.52. The two numerical solutions (Figs. 9 and 10) are compared with the analytic solution
u(x, t) = Σ_{n=1}^{∞} [8/(nπ)²] sin(nπ/2) (sin nπx) e^{−(nπ)²t}, (3.9)
which can be obtained by the technique of separation of variables. The instability for r > 1/2 can be clearly seen in Fig. 10. Thus, a disadvantage of this explicit method is that a small time step Δt must be used to maintain stability. This disadvantage will be removed with the Crank-Nicolson algorithm.
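The instability threshold can be demonstrated with a few lines of code. The sketch below (function names are ours, not the text's; nondimensional form, c = 1) marches Eq. 3.7 for the triangular initial temperature of Eq. 3.8; with r = 0.48 the solution decays smoothly, while with r = 0.52 spurious oscillations grow:

```python
def explicit_heat_step(u, r):
    """One time step of the explicit scheme, Eq. 3.7:
    u[i] <- r*u[i-1] + (1 - 2r)*u[i] + r*u[i+1] at interior points,
    with u = 0 held at both ends."""
    n = len(u) - 1
    new = [0.0] * (n + 1)        # boundary values stay zero
    for i in range(1, n):
        new[i] = r * u[i-1] + (1 - 2*r) * u[i] + r * u[i+1]
    return new

def march(f, r, h, steps):
    """March from the initial profile f(x) on 0 <= x <= 1."""
    n = round(1.0 / h)
    u = [f(i * h) for i in range(n + 1)]
    u[0] = u[n] = 0.0
    for _ in range(steps):
        u = explicit_heat_step(u, r)
    return u

f = lambda x: 2*x if x <= 0.5 else 2*(1 - x)   # Eq. 3.8

stable   = march(f, r=0.48, h=0.1, steps=100)  # decays smoothly
unstable = march(f, r=0.52, h=0.1, steps=100)  # oscillations grow
```

After 100 steps the stable run has decayed toward zero while the unstable run has developed sign-alternating oscillations larger than the remaining smooth solution.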
3.1.2 Crank-Nicolson Implicit Method
The Crank-Nicolson method is a stable algorithm which allows a larger time step than could be used in the explicit method. In fact, the stability of Crank-Nicolson does not depend on the parameter r.
Figure 10: Explicit Finite Difference Solution With r = 0.52.
The basis for the Crank-Nicolson algorithm is writing the finite difference equation at a mid-level in time, (i, j + 1/2). The finite difference x derivative at j + 1/2 is computed as the average of the two central difference x derivatives at j and j + 1. Consider again the PDE of Eq. 3.1:
u_xx = (1/c) u_t,  u = u(x, t)
u(0, t) = u(1, t) = 0 (boundary conditions)
u(x, 0) = f(x) (initial condition).
(3.10)
The PDE is approximated numerically by
(1/2)[(u_i+1,j+1 − 2u_i,j+1 + u_i−1,j+1)/h² + (u_i+1,j − 2u_i,j + u_i−1,j)/h²] = (u_i,j+1 − u_i,j)/(ck), (3.11)
where the right-hand side is a central difference approximation to the time derivative at the middle point j + 1/2. We again define the parameter r as
r = ck/h² = c Δt/(Δx)², (3.12)
and rearrange Eq. 3.11 with all j + 1 terms on the left-hand side:
−r u_i−1,j+1 + 2(1 + r) u_i,j+1 − r u_i+1,j+1 = r u_i−1,j + 2(1 − r) u_i,j + r u_i+1,j. (3.13)
This formula is called the Crank-Nicolson algorithm.
Fig. 11 shows the points involved in the Crank-Nicolson scheme. If we start at the bottom row (j = 0) and move up, the right-hand side values of Eq. 3.13 are known, and the left-hand side values of that equation are unknown. To get the process started, let j = 0, and write the CN equation for each i = 1, 2, . . . , N to obtain N simultaneous equations in N unknowns, where N is the number of interior mesh points on the row. (The boundary points, with known values, are excluded.) This system of equations is a tridiagonal system, since each equation has three consecutive nonzeros centered around the diagonal. To advance in time,
Figure 11: Mesh for Crank-Nicolson.
Figure 12: Stencil for Crank-Nicolson Algorithm.
we then increment j to j = 1, and solve a new system of equations. An approach which
requires the solution of simultaneous equations is called an implicit algorithm. A sketch of
the CN stencil is shown in Fig. 12.
Note that the coefficient matrix of the CN system of equations does not change from step to step. Thus, one could compute and save the LU factors of the coefficient matrix, and merely do the forward-backward substitution (FBS) at each new time step, thus speeding up the calculation. This speedup would be particularly significant in higher-dimensional problems, where the coefficient matrix is no longer tridiagonal.
It can be shown that the CN algorithm is stable for any r, although better accuracy results from a smaller r. A smaller r corresponds to a smaller time step size (for a fixed spatial mesh). CN also gives better accuracy than the explicit approach for the same r.
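Because the left-hand side of Eq. 3.13 couples only three consecutive unknowns, each time step reduces to one tridiagonal solve. A minimal sketch (names are ours; the solver is the Thomas algorithm of Section 1.6.2):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main-, c = super-diagonal,
    d = right-hand side. Works on copies of b and d."""
    n = len(b)
    b, d = b[:], d[:]
    for i in range(1, n):                 # forward elimination
        m = a[i-1] / b[i-1]
        b[i] -= m * c[i-1]
        d[i] -= m * d[i-1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (d[i] - c[i] * x[i+1]) / b[i]
    return x

def crank_nicolson_step(u, r):
    """Advance the interior values one step via Eq. 3.13:
    -r*u[i-1,j+1] + 2(1+r)*u[i,j+1] - r*u[i+1,j+1]
        = r*u[i-1,j] + 2(1-r)*u[i,j] + r*u[i+1,j], with u = 0 at both ends."""
    n = len(u) - 1
    a = [-r] * (n - 2)
    b = [2 * (1 + r)] * (n - 1)
    c = [-r] * (n - 2)
    d = [r*u[i-1] + 2*(1 - r)*u[i] + r*u[i+1] for i in range(1, n)]
    return [0.0] + thomas(a, b, c, d) + [0.0]
```

With r = 1, twice the explicit stability limit, the solution still decays smoothly, illustrating the unconditional stability claimed above.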
3.1.3 Derivative Boundary Conditions
Consider the boundary-initial value problem
u_xx = (1/c) u_t,  u = u(x, t)
u(0, t) = 0,  u_x(1, t) = g(t) (boundary conditions)
u(x, 0) = f(x) (initial condition).
(3.14)
The only difference between this problem and the one considered earlier in Eq. 3.1 is the right-hand side boundary condition, which now involves a derivative (a Neumann boundary condition).
Assume a mesh labeling as shown in Fig. 13. We introduce extra “phantom” points to
the right of the boundary (outside the domain). Consider boundary Point 25, for example,
Figure 13: Treatment of Derivative Boundary Conditions.
and assume we use an explicit finite difference algorithm. Since u₂₅ is not known, we must write the finite difference equation for u₂₅:
u₂₅ = r u₁₄ + (1 − 2r) u₂₄ + r u₃₄. (3.15)
On the other hand, a central finite difference approximation to the x derivative at Point 24 is
(u₃₄ − u₁₄)/(2h) = g₂₄. (3.16)
The phantom variable u₃₄ can then be eliminated from the last two equations to yield a new equation for the boundary point u₂₅:
u₂₅ = 2r u₁₄ + (1 − 2r) u₂₄ + 2rh g₂₄. (3.17)
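In an explicit marching code, Eq. 3.17 simply replaces Eq. 3.7 at the boundary point. A sketch (names are ours; nondimensional c = 1) for a rod with u = 0 at the left end and u_x = g prescribed at the right end:

```python
def explicit_step_neumann(u, r, h, g):
    """One explicit step with a Dirichlet condition (u = 0) at the left
    end and a Neumann condition (u_x = g) at the right end, using the
    phantom-point formula Eq. 3.17 at the boundary."""
    n = len(u) - 1
    new = [0.0] * (n + 1)                              # u = 0 at the left
    for i in range(1, n):                              # interior: Eq. 3.7
        new[i] = r*u[i-1] + (1 - 2*r)*u[i] + r*u[i+1]
    new[n] = 2*r*u[n-1] + (1 - 2*r)*u[n] + 2*r*h*g     # boundary: Eq. 3.17
    return new
```

A quick check on the design: for constant g the steady state is the linear profile u = gx, and that profile is a fixed point of the step above.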
3.2 Hyperbolic Equations
3.2.1 The d’Alembert Solution of the Wave Equation
Before addressing the ﬁnite diﬀerence solution of hyperbolic equations, we review some
background material on such equations.
The time-dependent transverse response of an infinitely long string satisfies the one-dimensional wave equation with nonzero initial displacement and velocity specified:
∂²u/∂x² = (1/c²) ∂²u/∂t²,  −∞ < x < ∞,  t > 0,
u(x, 0) = f(x),  ∂u(x, 0)/∂t = g(x),
lim_{x→±∞} u(x, t) = 0,
(3.18)
where x is distance along the string, t is time, u(x, t) is the transverse displacement, f(x) is
the initial displacement, g(x) is the initial velocity, and the constant c is given by
c = √(T/(ρA)), (3.19)
where T is the tension (force) in the string, ρ is the density of the string material, and
A is the cross-sectional area of the string. Note that c has the dimension of velocity. This equation assumes that all motion is vertical and that the displacement u and its slope ∂u/∂x are both small.
It can be shown by direct substitution into Eq. 3.18 that the solution of this system is
u(x, t) = (1/2)[f(x − ct) + f(x + ct)] + (1/(2c)) ∫_{x−ct}^{x+ct} g(τ) dτ. (3.20)
The differentiation of the integral in Eq. 3.20 is effected with the aid of Leibnitz's rule:
(d/dx) ∫_{A(x)}^{B(x)} h(x, t) dt = ∫_{A}^{B} [∂h(x, t)/∂x] dt + h(x, B) dB/dx − h(x, A) dA/dx. (3.21)
Eq. 3.20 is known as the d'Alembert solution of the one-dimensional wave equation.
For the special case g(x) = 0 (zero initial velocity), the d'Alembert solution simplifies to
u(x, t) = (1/2)[f(x − ct) + f(x + ct)], (3.22)
which may be interpreted as two waves, each equal to f(x)/2, which travel at speed c to
the right and left, respectively. For example, the argument x − ct remains constant if, as t
increases, x also increases at speed c. Thus, the wave f(x−ct) moves to the right (increasing
x) with speed c without change of shape. Similarly, the wave f(x + ct) moves to the left
(decreasing x) with speed c without change of shape. The two waves [each equal to half the
initial shape f(x)] travel in opposite directions from each other at speed c. If f(x) is nonzero
only for a small domain, then, after both waves have passed the region of initial disturbance,
the string returns to its rest position.
For example, let f(x), the initial displacement, be given by
f(x) = b − |x| for |x| ≤ b,  0 for |x| ≥ b, (3.23)
which is a triangular pulse of width 2b and height b (Fig. 14). For t > 0, half this pulse travels in opposite directions from the origin. For t > b/c, where c is the wave speed, the two half-pulses have completely separated, and the neighborhood of the origin has returned to rest.
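This travelling-pulse behavior is easy to check numerically from Eq. 3.22. A small sketch (function names are ours; b = 1 and c = 1 assumed):

```python
def triangular_pulse(x, b=1.0):
    """Initial displacement of Eq. 3.23: height b, width 2b."""
    return b - abs(x) if abs(x) <= b else 0.0

def dalembert(f, x, t, c=1.0):
    """Eq. 3.22: two half-amplitude copies of f travelling at speeds +c and -c."""
    return 0.5 * (f(x - c*t) + f(x + c*t))
```

At t = 2b/c the origin has returned to rest, while half-height pulses are centered at x = ±ct.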
For the special case f(x) = 0 (zero initial displacement), the d'Alembert solution simplifies to
u(x, t) = (1/(2c)) ∫_{x−ct}^{x+ct} g(τ) dτ = (1/2)[G(x + ct) − G(x − ct)], (3.24)
where
G(x) = (1/c) ∫_{−∞}^{x} g(τ) dτ. (3.25)
Thus, similar to the initial displacement special case, this solution, Eq. 3.24, may be interpreted as the combination (difference, in this case) of two identical functions G(x)/2, one moving left and one moving right, each with speed c.
Figure 14: Propagation of Initial Displacement.
For example, let the initial velocity g(x) be given by
g(x) = Mc for |x| < b,  0 for |x| > b, (3.26)
which is a rectangular pulse of width 2b and height Mc, where the constant M is the
dimensionless Mach number, and c is the wave speed (Fig. 15). The travelling wave G(x)
is given by Eq. 3.25 as
G(x) = 0 for x ≤ −b,  M(x + b) for −b ≤ x ≤ b,  2Mb for x ≥ b. (3.27)
That is, half this wave travels in opposite directions at speed c. Even though g(x) is nonzero
only near the origin, the travelling wave G(x) is constant and nonzero for x > b. Thus, as
time advances, the center section of the string reaches a state of rest, but not in its original
position (Fig. 16).
From the preceding discussion, it is clear that disturbances travel with speed c. For an
observer at some ﬁxed location, initial displacements occurring elsewhere pass by after a
ﬁnite time has elapsed, and then the string returns to rest in its original position. Nonzero
initial velocity disturbances also travel at speed c, but, once having reached some location,
will continue to inﬂuence the solution from then on.
Thus, the domain of influence of the data at x = x₀, say, on the solution consists of all points closer than ct (in either direction) to x₀, the location of the disturbance (Fig. 17).
Figure 15: Initial Velocity Function.
Figure 16: Propagation of Initial Velocity.
Figure 17: Domains of Influence and Dependence.
Conversely, the domain of dependence of the solution on the initial data consists of all points
within a distance ct of the solution point. That is, the solution at (x, t) depends on the
initial data for all locations in the range (x −ct, x + ct), which are the limits of integration
in the d’Alembert solution, Eq. 3.20.
3.2.2 Finite Diﬀerences
From the preceding discussion of the d’Alembert solution, we see that hyperbolic equations
involve wave motion. If the initial data are discontinuous (as, for example, in shocks), the
most accurate and the most convenient approach for solving the equations is probably the
method of characteristics. On the other hand, problems without discontinuities can probably
be solved most conveniently using ﬁnite diﬀerence and ﬁnite element techniques. Here we
consider ﬁnite diﬀerences.
Consider the boundary-initial value problem (BIVP)
u_xx = (1/c²) u_tt,  u = u(x, t),  0 < x < a,  t > 0
u(0, t) = u(a, t) = 0 (boundary conditions)
u(x, 0) = f(x),  u_t(x, 0) = g(x) (initial conditions).
(3.28)
This problem represents the transient (time-dependent) vibrations of a string fixed at the two ends with both initial displacement f(x) and initial velocity g(x) specified.
A central finite difference approximation to the PDE, Eq. 3.28, yields
(u_i+1,j − 2u_i,j + u_i−1,j)/h² = (u_i,j+1 − 2u_i,j + u_i,j−1)/(c²k²). (3.29)
We define the parameter
r = ck/h = c Δt/Δx, (3.30)
and solve for u_i,j+1:
u_i,j+1 = r² u_i−1,j + 2(1 − r²) u_i,j + r² u_i+1,j − u_i,j−1. (3.31)
Fig. 18 shows the mesh points involved in this recursive scheme. If we know the solution for
Figure 18: Mesh for Explicit Solution of Wave Equation.
Figure 19: Stencil for Explicit Solution of Wave Equation.
all time values up to the jth time step, we can compute the solution u_i,j+1 at Step j + 1 in terms of known quantities. Thus, this algorithm is an explicit algorithm. The corresponding stencil is shown in Fig. 19.
It can be shown that this finite difference algorithm is stable if r ≤ 1 and unstable if r > 1. This stability condition is known as the Courant, Friedrichs, and Lewy (CFL) condition or simply the Courant condition. It can be further shown that a theoretically correct solution is obtained when r = 1, and that the accuracy of the solution decreases as the value of r decreases farther below 1. Thus, the time step size Δt should be chosen, if possible, so that r = 1. If that Δt is inconvenient for a particular calculation, Δt should be selected as large as possible without exceeding the stability limit of r = 1.
An intuitive rationale behind the stability requirement r ≤ 1 can also be given using Fig. 20. If r > 1, the numerical domain of dependence (NDD) would be smaller than the actual domain of dependence (ADD) for the PDE, since NDD spreads by one mesh point at each level for earlier time. However, if NDD < ADD, the numerical solution would be independent of data outside NDD but inside ADD. That is, the numerical solution would ignore necessary information. Thus, to ensure a stable solution, r ≤ 1.
3.2.3 Starting Procedure for Explicit Algorithm
We note that the explicit ﬁnite diﬀerence scheme just described for the wave equation requires
the numerical solution at two consecutive time steps to step forward to the next time. Thus,
at t = 0, a special procedure is needed to advance from j = 0 to j = 1.
Figure 20: Domains of Dependence for r > 1.
The explicit finite difference algorithm is given in Eq. 3.31:
u_i,j+1 = r² u_i−1,j + 2(1 − r²) u_i,j + r² u_i+1,j − u_i,j−1. (3.32)
To compute the solution at the end of the first time step, let j = 0:
u_i,1 = r² u_i−1,0 + 2(1 − r²) u_i,0 + r² u_i+1,0 − u_i,−1, (3.33)
where the right-hand side is known (from the initial condition) except for u_i,−1. However, we can write a central difference approximation to the first time derivative at t = 0:
(u_i,1 − u_i,−1)/(2k) = g_i (3.34)
or
u_i,−1 = u_i,1 − 2k g_i, (3.35)
where g_i is the initial velocity g(x) evaluated at the ith point, i.e., g_i = g(x_i). If we substitute this last result into Eq. 3.33 (to eliminate u_i,−1), we obtain
u_i,1 = r² u_i−1,0 + 2(1 − r²) u_i,0 + r² u_i+1,0 − u_i,1 + 2k g_i (3.36)
or
u_i,1 = (1/2) r² u_i−1,0 + (1 − r²) u_i,0 + (1/2) r² u_i+1,0 + k g_i. (3.37)
This is the diﬀerence equation used for the ﬁrst row. Thus, to implement the explicit ﬁnite
diﬀerence algorithm, we use Eq. 3.37 for the ﬁrst time step and Eq. 3.31 for all subsequent
time steps.
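Putting the two formulas together gives the complete explicit solver. The sketch below (names are ours) uses Eq. 3.37 for the first row and Eq. 3.31 thereafter; for f(x) = sin πx, g = 0, and r = 1, the computed values reproduce the exact standing wave sin(πx) cos(πct) at the mesh points:

```python
import math

def wave_explicit(f, g, a=1.0, c=1.0, h=0.1, r=1.0, steps=10):
    """Explicit scheme for u_xx = u_tt/c^2 on 0 < x < a with fixed ends:
    first row by Eq. 3.37, subsequent rows by Eq. 3.31, with k = r*h/c."""
    n = round(a / h)
    k = r * h / c
    u_prev = [f(i * h) for i in range(n + 1)]      # row j = 0
    u_prev[0] = u_prev[n] = 0.0
    u = [0.0] * (n + 1)                            # row j = 1, Eq. 3.37
    for i in range(1, n):
        u[i] = (0.5*r**2*u_prev[i-1] + (1 - r**2)*u_prev[i]
                + 0.5*r**2*u_prev[i+1] + k*g(i * h))
    rows = [u_prev, u]
    for _ in range(steps - 1):                     # rows j >= 2, Eq. 3.31
        new = [0.0] * (n + 1)
        for i in range(1, n):
            new[i] = (r**2*u[i-1] + 2*(1 - r**2)*u[i]
                      + r**2*u[i+1] - u_prev[i])
        u_prev, u = u, new
        rows.append(u)
    return rows

rows = wave_explicit(lambda x: math.sin(math.pi * x), lambda x: 0.0)
# after 10 steps (t = 1), u = sin(pi*x)*cos(pi) = -sin(pi*x) at each node
```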
3.2.4 Nonreﬂecting Boundaries
In some applications, it is of interest to model domains that are large enough to be considered
inﬁnite in extent. In a ﬁnite diﬀerence representation of the domain, an inﬁnite boundary has
Figure 21: Finite Difference Mesh at Nonreflecting Boundary.
to be truncated at some suﬃciently large distance. At such a boundary, a suitable boundary
condition must be imposed to ensure that outgoing waves are not reﬂected.
Consider a vibrating string which extends to infinity for large x. We truncate the computational domain at some finite x. Let the initial velocity be zero. The d'Alembert solution, Eq. 3.20, of the one-dimensional wave equation c²u_xx = u_tt can be written in the form
u(x, t) = F₁(x − ct) + F₂(x + ct), (3.38)
where F₁ represents a wave advancing at speed c toward the boundary, and F₂ represents the returning wave, which should not exist if the boundary is nonreflecting. With F₂ = 0, we differentiate u with respect to x and t to obtain
∂u/∂x = F₁′,  ∂u/∂t = −cF₁′, (3.39)
where the prime denotes the derivative with respect to the argument. Thus,
∂u/∂x = −(1/c) ∂u/∂t. (3.40)
This is the one-dimensional nonreflecting boundary condition. Note that the x direction is normal to the boundary. The boundary condition, Eq. 3.40, must be imposed to inhibit reflections from the truncated boundary. This condition is exact in 1D (i.e., plane waves) and approximate in higher dimensions, where the nonreflecting condition is written
c ∂u/∂n + ∂u/∂t = 0, (3.41)
where n is the outward unit normal to the boundary.
The nonreflecting boundary condition, Eq. 3.40, can be approximated in the finite difference method with central differences expressed in terms of the phantom point outside the boundary. For example, at the typical point (i, j) on the nonreflecting boundary in Fig. 21, the general recursive formula is given by Eq. 3.31:
u_i,j+1 = r² u_i−1,j + 2(1 − r²) u_i,j + r² u_i+1,j − u_i,j−1. (3.42)
The central difference approximation to the nonreflecting boundary condition, Eq. 3.40, is
c (u_i+1,j − u_i−1,j)/(2h) = −(u_i,j+1 − u_i,j−1)/(2k) (3.43)
or
r(u_i+1,j − u_i−1,j) = −(u_i,j+1 − u_i,j−1). (3.44)
The substitution of Eq. 3.44 into Eq. 3.42 (to eliminate the phantom point) yields
(1 + r) u_i,j+1 = 2r² u_i−1,j + 2(1 − r²) u_i,j − (1 − r) u_i,j−1. (3.45)
For the first time step (j = 0), the last term in this relation is evaluated using the central difference approximation to the initial velocity, Eq. 3.35. Note also that, for r = 1, Eq. 3.45 takes a particularly simple (and perhaps unexpected) form:
u_i,j+1 = u_i−1,j. (3.46)
To illustrate the perfect wave absorption that occurs in a one-dimensional finite difference model, consider an infinitely long vibrating string with a nonzero initial displacement and zero initial velocity. The initial displacement is a triangular-shaped pulse in the middle of the string, similar to Fig. 14. According to the d'Alembert solution, half the pulse should propagate at speed c to the left and right and be absorbed into the boundaries. We solve the problem with the explicit central finite difference approach with r = ck/h = 1. With r = 1, the finite difference formulas 3.31 and 3.37 simplify to
u_i,j+1 = u_i−1,j + u_i+1,j − u_i,j−1 (3.47)
and
u_i,1 = (u_i−1,0 + u_i+1,0)/2. (3.48)
On the right and left nonreflecting boundaries, Eq. 3.46 implies
u_n,j+1 = u_n−1,j,  u_0,j+1 = u_1,j, (3.49)
where the mesh points in the x direction are labeled 0 to n. The ﬁnite diﬀerence calculation
for this problem results in the following spreadsheet:
t x=0 x=1 x=2 x=3 x=4 x=5 x=6 x=7 x=8 x=9 x=10

0 0.000 0.000 0.000 1.000 2.000 3.000 2.000 1.000 0.000 0.000 0.000
1 0.000 0.000 0.500 1.000 2.000 2.000 2.000 1.000 0.500 0.000 0.000
2 0.000 0.500 1.000 1.500 1.000 1.000 1.000 1.500 1.000 0.500 0.000
3 0.500 1.000 1.500 1.000 0.500 0.000 0.500 1.000 1.500 1.000 0.500
4 1.000 1.500 1.000 0.500 0.000 0.000 0.000 0.500 1.000 1.500 1.000
5 1.500 1.000 0.500 0.000 0.000 0.000 0.000 0.000 0.500 1.000 1.500
6 1.000 0.500 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.500 1.000
7 0.500 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.500
8 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
9 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
10 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
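The spreadsheet above can be reproduced with a few lines of code implementing Eqs. 3.47, 3.48, and 3.49 (a sketch; function names are ours):

```python
def absorbing_string(u0, steps):
    """March the r = 1 explicit scheme with nonreflecting ends:
    first row by Eq. 3.48 (zero initial velocity), later rows by
    Eq. 3.47, and both boundaries by Eq. 3.49."""
    n = len(u0) - 1
    u = [(u0[i-1] + u0[i+1]) / 2 for i in range(1, n)]            # Eq. 3.48
    u = [u0[1]] + u + [u0[n-1]]                                   # Eq. 3.49
    rows = [u0, u]
    u_prev = u0
    for _ in range(steps - 1):
        new = [u[i-1] + u[i+1] - u_prev[i] for i in range(1, n)]  # Eq. 3.47
        new = [u[1]] + new + [u[n-1]]                             # Eq. 3.49
        u_prev, u = u, new
        rows.append(u)
    return rows

rows = absorbing_string([0, 0, 0, 1, 2, 3, 2, 1, 0, 0, 0], steps=8)
# rows[8] is identically zero: the pulse has been absorbed
```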
Figure 22: Finite Length Simulation of an Infinite Bar.
Notice that the triangular wave is absorbed without any reﬂection from the two boundaries.
For steady-state wave motion, the solution u(x, t) is time-harmonic, i.e.,
u = u₀ e^{iωt}, (3.50)
and the nonreflecting boundary condition, Eq. 3.40, becomes
∂u₀/∂x = −(iω/c) u₀ (3.51)
or, in general,
∂u₀/∂n = −(iω/c) u₀ = −ik u₀, (3.52)
where n is the outward unit normal at the nonreﬂecting boundary, and k = ω/c is the
wave number. This condition is exact in 1D (i.e., plane waves) and approximate in higher
dimensions.
The nonreflecting boundary condition can be interpreted physically as a damper (dashpot). Consider, for example, a bar undergoing longitudinal vibration and terminated on the right end with the nonreflecting boundary condition, Eq. 3.40 (Fig. 22). The internal longitudinal force F in the bar is given by
F = Aσ = AEε = AE ∂u/∂x, (3.53)
where A is the cross-sectional area of the bar, σ is the stress, E is the Young's modulus of the bar material, and u is the displacement. Thus, from Eq. 3.40, the nonreflecting boundary condition is equivalent to applying an end force given by
F = −(AE/c) v, (3.54)
where v = ∂u/∂t is the velocity. Since, from Eq. 2.10,
E = ρc², (3.55)
Eq. 3.54 becomes
F = −(ρcA)v, (3.56)
which is a force proportional to velocity. A mechanical device which applies a force proportional to velocity is a dashpot. The minus sign in this equation means that the force opposes the direction of motion, as required for the device to be physically realizable. The dashpot constant is ρcA. Thus, the application of this dashpot to the end of a finite-length bar simulates exactly a bar of infinite length (Fig. 22). Since, in acoustics, the ratio of pressure (force/area) to velocity is referred to as impedance, we see that the characteristic impedance of an acoustic medium is ρc.
Figure 23: Laplace's Equation on Rectangular Domain.
Figure 24: Finite Difference Grid on Rectangular Domain.
3.3 Elliptic Equations
Consider Laplace's equation on the two-dimensional rectangular domain shown in Fig. 23:
∇²T(x, y) = 0 (0 < x < a, 0 < y < b),
T(0, y) = g₁(y),  T(a, y) = g₂(y),  T(x, 0) = f₁(x),  T(x, b) = f₂(x).
(3.57)
This problem corresponds physically to two-dimensional steady-state heat conduction over a rectangular plate for which the temperature is specified on the boundary.
We attempt an approximate solution by introducing a uniform rectangular grid over the domain, and let the point (i, j) denote the point having the ith value of x and the jth value of y (Fig. 24). Then, using central finite difference approximations to the second derivatives (Fig. 25),
∂²T/∂x² ≈ (T_i−1,j − 2T_i,j + T_i+1,j)/h², (3.58)
∂²T/∂y² ≈ (T_i,j−1 − 2T_i,j + T_i,j+1)/h². (3.59)
The finite difference approximation to Laplace's equation thus becomes
(T_i−1,j − 2T_i,j + T_i+1,j)/h² + (T_i,j−1 − 2T_i,j + T_i,j+1)/h² = 0 (3.60)
or
4T_i,j − (T_i−1,j + T_i+1,j + T_i,j−1 + T_i,j+1) = 0. (3.61)
Figure 25: The Neighborhood of Point (i, j).
Figure 26: 20-Point Finite Difference Mesh.
That is, for Laplace’s equation with the same uniform mesh in each direction, the solution
at a typical point (i, j) is the average of the four neighboring points.
For example, consider the mesh shown in Fig. 26. Although there are 20 mesh points,
14 are on the boundary, where the temperature is known. Thus, the resulting numerical
problem has only six degrees of freedom (unknown variables). The application of Eq. 3.61
to each of the six interior points yields
4T₆ − T₇ − T₁₀ = T₂ + T₅
−T₆ + 4T₇ − T₁₁ = T₃ + T₈
−T₆ + 4T₁₀ − T₁₁ − T₁₄ = T₉
−T₇ − T₁₀ + 4T₁₁ − T₁₅ = T₁₂
−T₁₀ + 4T₁₄ − T₁₅ = T₁₃ + T₁₈
−T₁₁ − T₁₄ + 4T₁₅ = T₁₆ + T₁₉,
(3.62)
where all known quantities have been placed on the righthand side. This linear system of
six equations in six unknowns can be solved with standard equation solvers. Because the
central difference operator is a 5-point operator, systems of equations of this type would have
at most ﬁve nonzero terms in each equation, regardless of how large the mesh is. Thus, for
large meshes, the system of equations is sparsely populated, so that sparse matrix solution
techniques would be applicable.
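As a concrete check, the six equations of Eq. 3.62 can be assembled and solved directly. The sketch below (not from the text; the boundary temperature is a hypothetical sample value, chosen so the exact solution is a constant field) uses NumPy:

```python
import numpy as np

# Unknowns ordered [T6, T7, T10, T11, T14, T15], coefficients from Eq. 3.62.
A = np.array([
    [ 4, -1, -1,  0,  0,  0],   # 4*T6 - T7 - T10           = T2 + T5
    [-1,  4,  0, -1,  0,  0],   # -T6 + 4*T7 - T11          = T3 + T8
    [-1,  0,  4, -1, -1,  0],   # -T6 + 4*T10 - T11 - T14   = T9
    [ 0, -1, -1,  4,  0, -1],   # -T7 - T10 + 4*T11 - T15   = T12
    [ 0,  0, -1,  0,  4, -1],   # -T10 + 4*T14 - T15        = T13 + T18
    [ 0,  0,  0, -1, -1,  4],   # -T11 - T14 + 4*T15        = T16 + T19
], dtype=float)

Tb = 100.0                       # hypothetical: every boundary point at 100
rhs = np.array([2*Tb, 2*Tb, Tb, Tb, 2*Tb, 2*Tb])
T = np.linalg.solve(A, rhs)      # a constant field satisfies Laplace's equation
print(T)                         # all six interior temperatures equal 100
```

Note that each row has at most five nonzero coefficients, as the 5-point operator guarantees.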
Figure 27: Laplace's Equation With Dirichlet and Neumann B.C. (T = g_1(y) on the left edge, ∂T/∂x = g(y) on the right edge, T = f_1(x) on the bottom, and T = f_2(x) on the top, with ∇²T = 0 inside.)
Since the numbers assigned to the mesh points in Fig. 26 are merely labels for identification, the pattern of nonzero coefficients appearing on the left-hand side of Eq. 3.62 depends on the choice of mesh ordering. Some equation solvers based on Gaussian elimination operate more efficiently on systems of equations for which the nonzeros in the coefficient matrix are clustered near the main diagonal. Such a matrix system is called banded.
Systems of this type can also be solved using an iterative procedure known as relaxation,
which uses the following general algorithm:
1. Initialize the boundary points to their prescribed values, and initialize the interior
points to zero or some other convenient value (e.g., the average of the boundary values).
2. Loop systematically through the interior mesh points, setting each interior point to
the average of its four neighbors.
3. Continue this process until the solution converges to the desired accuracy.
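The three-step relaxation algorithm above can be sketched as follows (a sweep-in-place, Gauss-Seidel-style loop; the grid size and boundary value are hypothetical):

```python
import numpy as np

nx, ny = 6, 5
T = np.zeros((ny, nx))
T[0, :] = 100.0; T[-1, :] = 100.0    # step 1: prescribed boundary values
T[:, 0] = 100.0; T[:, -1] = 100.0    # (interior initialized to zero)

for sweep in range(200):             # steps 2-3: sweep until converged
    change = 0.0
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            new = 0.25 * (T[j, i-1] + T[j, i+1] + T[j-1, i] + T[j+1, i])
            change = max(change, abs(new - T[j, i]))
            T[j, i] = new
    if change < 1e-10:
        break

print(T[2, 2])   # converges to the constant boundary value
```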
3.3.1 Derivative Boundary Conditions
The approach of the preceding section must be modiﬁed for Neumann boundary conditions,
in which the normal derivative is speciﬁed. For example, consider again the problem of the
last section but with a Neumann, rather than Dirichlet, boundary condition on the right
side (Fig. 27):
∇²T(x, y) = 0  (0 < x < a, 0 < y < b),
T(0, y) = g_1(y),  ∂T(a, y)/∂x = g(y),  T(x, 0) = f_1(x),  T(x, b) = f_2(x).   (3.63)
We extend the mesh to include additional points to the right of the boundary at x = a
(Fig. 28). At a typical point on the boundary, a central diﬀerence approximation yields
g_18 = ∂T_18/∂x ≈ (T_22 − T_14) / (2h)   (3.64)
or
T_22 = T_14 + 2h g_18.   (3.65)
On the other hand, the equilibrium equation for Point 18 is
4T_18 − (T_14 + T_22 + T_17 + T_19) = 0,   (3.66)
Figure 28: Treatment of Neumann Boundary Conditions.
Figure 29: 2-DOF Mass-Spring System.
which, when combined with Eq. 3.65, yields
4T_18 − 2T_14 − T_17 − T_19 = 2h g_18,   (3.67)
which imposes the Neumann boundary condition on Point 18.
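The ghost-point idea is easiest to verify in one dimension. The sketch below (a 1D analogue, not from the text) solves u″ = 0 on [0, 1] with u(0) = 0 and u′(1) = g, whose exact solution is u = g·x; the ghost value u_{N+1} = u_{N−1} + 2hg is eliminated exactly as in Eq. 3.65:

```python
import numpy as np

N, g = 10, 3.0
h = 1.0 / N
A = np.zeros((N, N)); b = np.zeros(N)   # unknowns u_1..u_N (u_0 = 0 is known)
for i in range(N - 1):                  # interior rows: u_{i-1} - 2u_i + u_{i+1} = 0
    if i > 0:
        A[i, i-1] = 1.0
    A[i, i] = -2.0
    A[i, i+1] = 1.0
# boundary row at x = 1 after ghost elimination: 2u_{N-1} - 2u_N = -2*h*g
A[N-1, N-2] = 2.0
A[N-1, N-1] = -2.0
b[N-1] = -2.0 * h * g
u = np.linalg.solve(A, b)
x = np.linspace(h, 1.0, N)
print(np.max(np.abs(u - g * x)))   # central differences are exact for linear u
```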
4 Direct Finite Element Analysis
The ﬁnite element method is a numerical procedure for solving partial diﬀerential equations.
The procedure is used in a variety of applications, including structural mechanics and dynamics, acoustics, heat transfer, fluid flow, electric and magnetic fields, and electromagnetics.
Although the main theoretical bases for the ﬁnite element method are variational principles
and the weighted residual method, it is useful to consider discrete systems ﬁrst to gain some
physical insight into some of the procedures.
4.1 Linear MassSpring Systems
Consider the two-degree-of-freedom (DOF) system shown in Fig. 29. We let u_2 and u_3 denote the displacements from equilibrium of the two masses m_2 and m_3. The stiffnesses of the two springs are k_1 and k_2. The dynamic equilibrium equations could be obtained from Newton's second law (F = ma):

m_2 ü_2 + k_1 u_2 − k_2 (u_3 − u_2) = 0
m_3 ü_3 + k_2 (u_3 − u_2) = f_3(t)   (4.1)
Figure 30: A Single Spring Element.
or

m_2 ü_2 + (k_1 + k_2) u_2 − k_2 u_3 = 0
m_3 ü_3 − k_2 u_2 + k_2 u_3 = f_3(t).   (4.2)

This system could be rewritten in matrix notation as
M ü + K u = F(t),   (4.3)
where
u = { u_2, u_3 }ᵀ   (4.4)
is the displacement vector (the vector of unknown displacements),
K = [ k_1+k_2   −k_2
      −k_2       k_2 ]   (4.5)
is the system stiffness matrix,
M = [ m_2  0
      0    m_3 ]   (4.6)
is the system mass matrix, and
F = { 0, f_3 }ᵀ   (4.7)
is the force vector. This approach would be very tedious and error-prone for more complex
systems involving many springs and masses.
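Before introducing the matrix machinery, the 2-DOF matrices of Eqs. 4.4-4.7 can be checked directly. The sketch below (numerical values are hypothetical) solves the static case, where the masses drop out and the springs act in series:

```python
import numpy as np

k1, k2 = 100.0, 50.0
m2, m3 = 2.0, 1.0
f3 = 10.0

K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])      # Eq. 4.5
M = np.diag([m2, m3])               # Eq. 4.6 (unused in the static case)
F = np.array([0.0, f3])             # Eq. 4.7 with a constant force

u = np.linalg.solve(K, F)           # static equilibrium: K u = F
# springs in series: u2 = f3/k1, u3 = f3/k1 + f3/k2
print(u)
```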
To develop instead a matrix approach, we first isolate one element, as shown in Fig. 30. The stiffness matrix K_el for this element satisfies
K_el u = F,   (4.8)
or
[ k_11  k_12
  k_21  k_22 ] { u_1, u_2 }ᵀ = { f_1, f_2 }ᵀ.   (4.9)
By expanding this equation, we obtain
k_11 u_1 + k_12 u_2 = f_1
k_21 u_1 + k_22 u_2 = f_2.   (4.10)
From this equation, we observe that k_11 can be defined as the force on DOF 1 corresponding to enforcing a unit displacement on DOF 1 and zero displacement on DOF 2:
k_11 = f_1 |_{u_1=1, u_2=0} = k.   (4.11)
Figure 31: 3-DOF Spring System.
Similarly,
k_12 = f_1 |_{u_1=0, u_2=1} = −k,   k_21 = f_2 |_{u_1=1, u_2=0} = −k,   k_22 = f_2 |_{u_1=0, u_2=1} = k.   (4.12)
Thus,
K_el = [ k   −k
         −k   k ] = k [ 1  −1
                        −1  1 ].   (4.13)
In general, for a larger system with many more DOF,
K_11 u_1 + K_12 u_2 + K_13 u_3 + ··· + K_1n u_n = F_1
K_21 u_1 + K_22 u_2 + K_23 u_3 + ··· + K_2n u_n = F_2
K_31 u_1 + K_32 u_2 + K_33 u_3 + ··· + K_3n u_n = F_3
   ⋮
K_n1 u_1 + K_n2 u_2 + K_n3 u_3 + ··· + K_nn u_n = F_n,   (4.14)
in which case we can interpret an individual element K_ij in the stiffness matrix as the force at DOF i if u_j = 1 and all other displacement components u_i = 0:
K_ij = F_i |_{u_j=1, others=0}.   (4.15)
4.2 Matrix Assembly
We now return to the stiﬀness part of the original problem shown in Fig. 29. In the absence
of the masses and constraints, this system is shown in Fig. 31. Since there are three points in
this system, each with one DOF, the system is a 3DOF system. The system stiﬀness matrix
can be assembled for this system by adding the 3 ×3 stiﬀness matrices for each element:
K = [ k_1  −k_1  0 ; −k_1  k_1  0 ; 0  0  0 ] + [ 0  0  0 ; 0  k_2  −k_2 ; 0  −k_2  k_2 ]
  = [ k_1   −k_1      0
      −k_1  k_1+k_2   −k_2
      0     −k_2      k_2 ].   (4.16)
The justification for this assembly procedure is that forces are additive. For example, k_22 is the force at DOF 2 when u_2 = 1 and u_1 = u_3 = 0; both elements which connect to DOF 2 contribute to k_22. This matrix corresponds to the unconstrained system.
4.3 Constraints
The system in Fig. 29 has a constraint on DOF 1, as shown in Fig. 32. If we expand the
Figure 32: Spring System With Constraint.
Figure 33: 4-DOF Spring System.
(unconstrained) matrix system Ku = F into
K_11 u_1 + K_12 u_2 + K_13 u_3 = F_1
K_21 u_1 + K_22 u_2 + K_23 u_3 = F_2
K_31 u_1 + K_32 u_2 + K_33 u_3 = F_3,   (4.17)
we see that Row i corresponds to the equilibrium equation for DOF i. Thus, if u_i = 0, we do not need Row i of the matrix (although that equation can be saved to recover the constraint force later). Also, Column i of the matrix multiplies u_i, so that, if u_i = 0, we do not need Column i of the matrix. That is, if u_i = 0 for some system, we can enforce that constraint by deleting Row i and Column i from the unconstrained matrix. Hence, with u_1 = 0 in Fig. 32, we delete Row 1 and Column 1 in Eq. 4.16 to obtain the reduced matrix
K = [ k_1+k_2   −k_2
      −k_2       k_2 ],   (4.18)
which is the same matrix obtained previously in Eq. 4.5.
Notice that, from Eqs. 4.16 and 4.18,
det K_{3×3} = k_1[(k_1 + k_2)k_2 − k_2²] + k_1(−k_1 k_2) = 0,   (4.19)
whereas
det K_{2×2} = (k_1 + k_2)k_2 − k_2² = k_1 k_2 ≠ 0.   (4.20)
That is, the unconstrained matrix K_{3×3} is singular, but the constrained matrix K_{2×2} is nonsingular. Without constraints, K is singular (and the solution of the mechanical problem is not unique) because of the presence of rigid body modes.
4.4 Example and Summary
Consider the 4DOF spring system shown in Fig. 33. The unconstrained stiﬀness matrix for
this system is
K = [ k_1+k_4   −k_1       −k_4          0
      −k_1      k_1+k_2    −k_2          0
      −k_4      −k_2       k_2+k_3+k_4   −k_3
      0         0          −k_3          k_3 ].   (4.21)
We summarize several properties of stiﬀness matrices:
1. K is symmetric. This property is a special case of the Betti reciprocal theorem in
mechanics.
2. An oﬀdiagonal term is zero unless the two points are common to the same element.
Thus, K is sparse in general and usually banded.
3. K is singular without enough constraints to eliminate rigid body motion.
For spring systems, which have only one DOF at each point, the sum of any matrix column or row is zero. This property is a consequence of equilibrium, since the matrix entries in Column i consist of all the grid point forces when u_i = 1 and the other DOF are fixed. The forces must sum to zero, since the object is in static equilibrium.
We summarize the solution procedure for spring systems:
1. Generate the element stiﬀness matrices.
2. Assemble the system K and F.
3. Apply constraints.
4. Solve Ku = F for u.
5. Compute reactions, spring forces, and stresses.
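The five-step procedure above can be sketched end-to-end for the system of Fig. 29 (spring values and load are hypothetical):

```python
import numpy as np

k1, k2, f3 = 4.0, 2.0, 8.0

# 1-2. generate element matrices and assemble the system K and F
K = np.zeros((3, 3)); F = np.zeros(3)
for (i, j, k) in [(0, 1, k1), (1, 2, k2)]:
    K[np.ix_([i, j], [i, j])] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
F[2] = f3

# 3. apply the constraint u1 = 0 by deleting row/column 0
Kc = K[1:, 1:]; Fc = F[1:]

# 4. solve K u = F for the unknown displacements
u = np.linalg.solve(Kc, Fc)

# 5. recover the reaction (saved row 0 of the unconstrained K) and spring forces
reaction = K[0, 1:] @ u
spring_forces = [k1 * u[0], k2 * (u[1] - u[0])]
print(u, reaction, spring_forces)
```

The reaction is equal and opposite to the applied load, as static equilibrium requires.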
4.5 PinJointed Rod Element
Consider the pinjointed rod element (an axial member) shown in Fig. 34. From mechanics
of materials, we recall that the change in displacement u for a rod of length L subjected to
an axial force F is
u = FL / (AE),   (4.22)
where A is the rod cross-sectional area, and E is Young's modulus for the rod material. Thus, the axial stiffness is
k = F/u = AE/L.   (4.23)
The rod is therefore equivalent to a scalar spring with k = AE/L, and
K_el = (AE/L) [ 1  −1
                −1  1 ].   (4.24)
Figure 34: Pin-Jointed Rod Element.
Figure 35: Truss Structure Modeled With Pin-Jointed Rods.
Figure 36: The Degrees of Freedom for a Pin-Jointed Rod Element in 2D.
Figure 37: Computing 2D Stiffness of Pin-Jointed Rod.
However, matrix assembly for a truss structure (a structure made of pinjointed rods)
diﬀers from that for a collection of springs, since the rod elements are not all colinear (e.g.,
Fig. 35). Thus the elements must be transformed to a common coordinate system before the
element matrices can be assembled into the system stiﬀness matrix.
A typical rod element in 2D is shown in Fig. 36. To use this element in 2D trusses requires expanding the 2×2 matrix K_el into a 4×4 matrix. The four DOF for the element are also shown in Fig. 36. To compute the entries in the first column of the 4×4 K_el, we set u_1 = 1, u_2 = u_3 = u_4 = 0, and compute the four grid point forces F_1, F_2, F_3, F_4, as shown in Fig. 37. For example, at Point 1,
F_x = F cos θ = (k · 1 cos θ) cos θ = k cos²θ = k_11 = −k_31   (4.25)
F_y = F sin θ = (k · 1 cos θ) sin θ = k cos θ sin θ = k_21 = −k_41,   (4.26)
where k = AE/L, and the forces at the opposite end of the rod (x_2, y_2) are equal in magnitude and opposite in sign. These four values complete the first column of the matrix. Similarly we can find the rest of the matrix to obtain
K_el = (AE/L) [ c²    cs    −c²   −cs
                cs    s²    −cs   −s²
                −c²   −cs   c²    cs
                −cs   −s²   cs    s² ],   (4.27)
Figure 38: Pin-Jointed Frame Example. (The figure shows a 10,000 lb load applied at the loaded node, member angles of 60° and 45°, a span of 2a, and the element direction cosines s = −c = a/L and s = c = −a/L.)
where
c = cos θ = (x_2 − x_1)/L,   s = sin θ = (y_2 − y_1)/L.   (4.28)
Note that the angle θ is not of interest; only c and s are needed. Later, in the discussion of
matrix transformations, we will derive a more convenient way to obtain this matrix.
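The 4×4 matrix of Eq. 4.27 can be built directly from the end coordinates, with c and s computed from Eq. 4.28 (a sketch; the geometry and material values below are hypothetical):

```python
import numpy as np

def rod_stiffness(x1, y1, x2, y2, A, E):
    """2D pin-jointed rod stiffness, Eq. 4.27; the angle itself is never needed."""
    L = np.hypot(x2 - x1, y2 - y1)
    c = (x2 - x1) / L                  # Eq. 4.28
    s = (y2 - y1) / L
    cc, ss, cs = c * c, s * s, c * s
    return (A * E / L) * np.array([
        [ cc,  cs, -cc, -cs],
        [ cs,  ss, -cs, -ss],
        [-cc, -cs,  cc,  cs],
        [-cs, -ss,  cs,  ss],
    ])

Kel = rod_stiffness(0.0, 0.0, 3.0, 4.0, A=1.0, E=10.0)
print(Kel @ np.array([1.0, 2.0, 1.0, 2.0]))   # rigid translation gives zero force
```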
4.6 PinJointed Frame Example
We illustrate the matrix assembly and solution procedures for truss structures with the
simple frame shown in Fig. 38. For this frame, we assume E = 30×10⁶ psi, L = 10 in, and A = 1 in². Before the application of constraints, the stiffness matrix is
K = k_0 [ 1    −1    −1     1    0    0
          −1    1     1    −1    0    0
          −1    1    1+1  −1+1  −1   −1
           1   −1   −1+1   1+1  −1   −1
           0    0    −1    −1    1    1
           0    0    −1    −1    1    1 ],   (4.29)
where
k_0 = (AE/L)(a/L)² = 1.5×10⁶ lb/in,   a/L = 1/√2.   (4.30)
After constraints, the system of equations is
k_0 [ 2  0
      0  2 ] { u_3, u_4 }ᵀ = { 5000, −5000√3 }ᵀ,   (4.31)
whose solution is
u_3 = 1.67×10⁻³ in,   u_4 = −2.89×10⁻³ in.   (4.32)
Figure 39: Example With Reactions and Loads at Same DOF.
The reactions can be recovered as
R_1 = k_0(−u_3 + u_4) = −6840 lb,   R_2 = k_0(u_3 − u_4) = 6840 lb,
R_5 = k_0(−u_3 − u_4) = 1830 lb,   R_6 = k_0(−u_3 − u_4) = 1830 lb.   (4.33)
4.7 Boundary Conditions by Matrix Partitioning
We recall that, to enforce u
i
= 0, we could delete Row i and Column i from the stiﬀness
matrix. By using matrix partitioning, we can treat nonzero constraints and recover the forces
of constraint.
Consider the finite element matrix system Ku = F, where some DOF are specified, and some are free. We partition the unknown displacement vector into
u = { u_f, u_s }ᵀ,   (4.34)
where u_f and u_s denote, respectively, the free and constrained DOF. A partitioning of the matrix system then yields
[ K_ff  K_fs
  K_sf  K_ss ] { u_f, u_s }ᵀ = { F_f, F_s }ᵀ,   (4.35)
where u_s is prescribed. From the first partition,
K_ff u_f + K_fs u_s = F_f   (4.36)
or
K_ff u_f = F_f − K_fs u_s,   (4.37)
which can be solved for the unknown u_f. The second partition then yields the forces at the constrained DOF:
F_s = K_sf u_f + K_ss u_s,   (4.38)
where u_f is now known, and u_s is prescribed.
The grid point forces F_s at the constrained DOF consist of both the reactions of constraint and the applied loads, if any, at those DOF. For example, consider the beam shown in Fig. 39. When the applied load is distributed to the grid points, the loads at the two end grid points would include both reactions and a portion of the applied load. Thus, F_s, which includes all loads at the constrained DOF, can be written as
F_s = P_s + R,   (4.39)
Figure 40: Large Spring Approach to Constraints.
where P_s is the applied load, and R is the vector of reactions. The reactions can then be recovered as
R = K_sf u_f + K_ss u_s − P_s.   (4.40)
The disadvantage of using the partitioning approach to constraints is that the writing
of computer programs is made more complicated, since one needs the general capability
to partition a matrix into smaller matrices. However, for general purpose software, such a
capability is essential.
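The partitioned solve of Eqs. 4.37-4.38 can be sketched for the 3-DOF spring system of Eq. 4.16, here with a hypothetical nonzero prescribed displacement:

```python
import numpy as np

k1, k2 = 10.0, 20.0
K = np.array([[ k1,  -k1,      0.0],
              [-k1,   k1 + k2, -k2],
              [ 0.0, -k2,       k2]])

free, supp = [1, 2], [0]          # DOF 1 (index 0) is constrained
us = np.array([0.5])              # prescribed u1 = 0.5 (nonzero constraint)
Ff = np.array([0.0, 4.0])         # applied loads on the free DOF

Kff = K[np.ix_(free, free)]
Kfs = K[np.ix_(free, supp)]
Ksf = K[np.ix_(supp, free)]
Kss = K[np.ix_(supp, supp)]

uf = np.linalg.solve(Kff, Ff - Kfs @ us)      # Eq. 4.37
Fs = Ksf @ uf + Kss @ us                      # Eq. 4.38 (no applied load here, so Fs = R)
print(uf, Fs)
```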
4.8 Alternative Approach to Constraints
Consider a structure (Fig. 40) for which we want to prescribe a value for the ith DOF:
u
i
= U. An alternative approach to enforce this constraint is to attach a large spring k
0
between DOF i and ground (a ﬁxed point) and to apply a force F
i
to DOF i equal to k
0
U.
This spring should be many orders of magnitude larger than other typical values in the
stiﬀness matrix. A large number placed on the matrix diagonal will have no adverse eﬀects
on the conditioning of the matrix.
For example, if we prescribe DOF 3 in a 4-DOF system, the modified system becomes
[ k_11  k_12  k_13       k_14
  k_21  k_22  k_23       k_24
  k_31  k_32  k_33+k_0   k_34
  k_41  k_42  k_43       k_44 ] { u_1, u_2, u_3, u_4 }ᵀ = { F_1, F_2, F_3 + k_0 U, F_4 }ᵀ.   (4.41)
For large k_0, the third equation is
k_0 u_3 ≈ k_0 U   or   u_3 ≈ U,   (4.42)
as required.
The large spring approach can be used for any system of equations for which one wants to enforce a constraint on a particular unknown. The main advantage of this approach is that computer coding is easier, since matrix sizes do not have to change.
We can summarize the algorithm for applying the large-spring approach for applying constraints to the linear system Ku = F as follows:
1. Choose the large spring k_0 to be, say, 10,000 times the largest diagonal entry in the unconstrained K matrix.
Figure 41: DOF for Beam in Flexure (2D).
Figure 42: The Beam Problem Associated With Column 1.
2. For each DOF i which is to be constrained (zero or not), add k_0 to the diagonal entry K_ii, and add k_0 U to the corresponding right-hand side term F_i, where U is the desired constraint value for DOF i.
4.9 Beams in Flexure
Like springs and pin-jointed rods, beam elements also have stiffness matrices which can be computed exactly. In two dimensions, we designate the four DOF associated with flexure as shown in Fig. 41. The stiffness matrix would therefore be of the form
{ F_1, M_1, F_2, M_2 }ᵀ = [ k_11  k_12  k_13  k_14
                            k_21  k_22  k_23  k_24
                            k_31  k_32  k_33  k_34
                            k_41  k_42  k_43  k_44 ] { w_1, θ_1, w_2, θ_2 }ᵀ,   (4.43)
where w_i and θ_i are, respectively, the transverse displacement and rotation at the ith point. Rotations are expressed in radians. To compute the first column of K, set
w_1 = 1,   θ_1 = w_2 = θ_2 = 0,   (4.44)
and compute the four forces. Thus, for an Euler beam, we solve the beam differential equation
EI y″ = M(x),   (4.45)
subject to the boundary conditions, Eq. 4.44, as shown in Fig. 42. For Column 2, we solve the beam equation subject to the boundary conditions
θ_1 = 1,   w_1 = w_2 = θ_2 = 0,   (4.46)
as shown in Fig. 43. If we then combine the resulting flexural stiffnesses with the axial
Figure 43: The Beam Problem Associated With Column 2.
Figure 44: DOF for 2D Beam Element.
stiffnesses previously derived for the axial member, we obtain the two-dimensional stiffness matrix for the Euler beam:
{ F_1, F_2, F_3, F_4, F_5, F_6 }ᵀ =
[ AE/L     0          0         −AE/L    0           0
  0        12EI/L³    6EI/L²    0        −12EI/L³    6EI/L²
  0        6EI/L²     4EI/L     0        −6EI/L²     2EI/L
  −AE/L    0          0         AE/L     0           0
  0        −12EI/L³   −6EI/L²   0        12EI/L³     −6EI/L²
  0        6EI/L²     2EI/L     0        −6EI/L²     4EI/L ] { u_1, u_2, u_3, u_4, u_5, u_6 }ᵀ,   (4.47)
where the six DOF are shown in Fig. 44.
For this element, note that there is no coupling between axial and transverse behavior. Transverse shear, which was ignored, can be added. The three-dimensional counterpart to this matrix would have six DOF at each grid point: three translations and three rotations (u_x, u_y, u_z, R_x, R_y, R_z). Thus, in 3D, K is a 12×12 matrix. In addition, for bending in two different planes, there would have to be two moments of inertia I_1 and I_2, in addition to a torsional constant J and (possibly) a product of inertia I_12. For transverse shear, "area factors" would also be needed to reflect the fact that two different beams with the same cross-sectional area, but different cross-sectional shapes, would have different shear stiffnesses.
4.10 Direct Approach to Continuum Problems
Stiﬀness matrices for springs, rods, and beams can be derived exactly. For most problems
of interest in engineering, exact stiﬀness matrices cannot be derived. Consider the 2D
problem of computing the displacements and stresses in a thin plate with applied forces and
Figure 45: Plate With Constraints and Loads.
Figure 46: DOF for the Linear Triangular Membrane Element.
constraints (Fig. 45). This ﬁgure also shows the domain modeled with several triangular
elements. A typical element is shown in Fig. 46. With three grid points and two DOF at
each point, this is a 6DOF element. The displacement and force vectors for the element are
u = { u_1, v_1, u_2, v_2, u_3, v_3 }ᵀ,   F = { F_1x, F_1y, F_2x, F_2y, F_3x, F_3y }ᵀ.   (4.48)
Since we cannot determine exactly a stiﬀness matrix that relates these two vectors, we
approximate the displacement ﬁeld over the element as
u(x, y) = a_1 + a_2 x + a_3 y
v(x, y) = a_4 + a_5 x + a_6 y,   (4.49)
where u and v are the x and y components of displacement, respectively. Note that the
number of undetermined coeﬃcients equals the number of DOF (6) in the element. We
choose polynomials for convenience in the subsequent mathematics. The linear approximation in Eq. 4.49 is analogous to the piecewise linear approximation used in trapezoidal rule
integration.
At the vertices, the displacements in Eq. 4.49 must match the grid point values, e.g.,
u_1 = a_1 + a_2 x_1 + a_3 y_1.   (4.50)
We can write five more equations of this type to obtain
{ u_1, v_1, u_2, v_2, u_3, v_3 }ᵀ =
[ 1  x_1  y_1  0  0    0
  0  0    0    1  x_1  y_1
  1  x_2  y_2  0  0    0
  0  0    0    1  x_2  y_2
  1  x_3  y_3  0  0    0
  0  0    0    1  x_3  y_3 ] { a_1, a_2, a_3, a_4, a_5, a_6 }ᵀ   (4.51)
or
u = γ a.   (4.52)
Since the x and y directions uncouple in Eq. 4.51, we can write this last equation in uncoupled form:
{ u_1, u_2, u_3 }ᵀ = [ 1  x_1  y_1 ; 1  x_2  y_2 ; 1  x_3  y_3 ] { a_1, a_2, a_3 }ᵀ   (4.53)
{ v_1, v_2, v_3 }ᵀ = [ 1  x_1  y_1 ; 1  x_2  y_2 ; 1  x_3  y_3 ] { a_4, a_5, a_6 }ᵀ.   (4.54)
We now observe that
det [ 1  x_1  y_1 ; 1  x_2  y_2 ; 1  x_3  y_3 ] = 2A ≠ 0,   (4.55)
since A is (from geometry) the area of the triangle. A is positive for counterclockwise ordering and negative for clockwise ordering. Thus, unless the triangle is degenerate, we conclude that γ is invertible, and
{ a_1, a_2, a_3 }ᵀ = (1/2A) [ x_2 y_3 − x_3 y_2   x_3 y_1 − x_1 y_3   x_1 y_2 − x_2 y_1
                              y_2 − y_3           y_3 − y_1           y_1 − y_2
                              x_3 − x_2           x_1 − x_3           x_2 − x_1 ] { u_1, u_2, u_3 }ᵀ.   (4.56)
Similarly,
{ a_4, a_5, a_6 }ᵀ = (1/2A) [ x_2 y_3 − x_3 y_2   x_3 y_1 − x_1 y_3   x_1 y_2 − x_2 y_1
                              y_2 − y_3           y_3 − y_1           y_1 − y_2
                              x_3 − x_2           x_1 − x_3           x_2 − x_1 ] { v_1, v_2, v_3 }ᵀ.   (4.57)
Thus, we can write
a = γ⁻¹ u,   (4.58)
and treat the element's unknowns as being either the physical displacements u or the nonphysical coefficients a.
The strain components of interest in 2D are
ε = { ε_xx, ε_yy, γ_xy }ᵀ = { u_,x , v_,y , u_,y + v_,x }ᵀ.   (4.59)
From Eq. 4.49, we evaluate the strains for this element as
ε = { a_2, a_6, a_3 + a_5 }ᵀ = [ 0 1 0 0 0 0 ; 0 0 0 0 0 1 ; 0 0 1 0 1 0 ] { a_1, a_2, a_3, a_4, a_5, a_6 }ᵀ = B a = B γ⁻¹ u = C u,   (4.60)
where
C = B γ⁻¹.   (4.61)
Eq. 4.60 is a formula to compute element strains given the grid point displacements. Note
that, for this element, the strains are independent of position in the element. Thus, this
element is referred to as the Constant Strain Triangle (CST).
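The strain recovery of Eqs. 4.51-4.61 can be sketched on a hypothetical unit right triangle, checked against the linear field u = x (for which ε_xx = 1 exactly):

```python
import numpy as np

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # vertex coords (x_i, y_i)

gamma = np.zeros((6, 6))                 # Eq. 4.51, DOF order u1,v1,u2,v2,u3,v3
for n, (x, y) in enumerate(xy):
    gamma[2*n,     0:3] = [1.0, x, y]
    gamma[2*n + 1, 3:6] = [1.0, x, y]

B = np.array([[0, 1, 0, 0, 0, 0],        # Eq. 4.60
              [0, 0, 0, 0, 0, 1],
              [0, 0, 1, 0, 1, 0]], dtype=float)
C = B @ np.linalg.inv(gamma)             # Eq. 4.61

u = np.array([xy[0, 0], 0, xy[1, 0], 0, xy[2, 0], 0])   # u = x, v = 0
eps = C @ u
print(eps)   # [eps_xx, eps_yy, gamma_xy] = [1, 0, 0]
```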
From generalized Hooke's law, each stress component is a linear combination of all the strain components:
σ = D ε = D B γ⁻¹ u = D C u,   (4.62)
where, for example, for an isotropic material in plane stress,
D = E/(1 − ν²) [ 1  ν  0
                 ν  1  0
                 0  0  (1−ν)/2 ],   (4.63)
where E is Young's modulus, and ν is Poisson's ratio. Eq. 4.62 is a formula to compute element stresses given the grid point displacements. For this element, the stresses are constant (independent of position) in the element.
We now derive the element stiﬀness matrix using the Principle of Virtual Work. Consider
an element in static equilibrium with a set of applied loads F and displacements u. According
to the Principle of Virtual Work, the work done by the applied loads acting through the
displacements is equal to the increment in stored strain energy during an arbitrary virtual
(imaginary) displacement δu:
δuᵀ F = ∫_V δεᵀ σ dV.   (4.64)
We then substitute Eqs. 4.60 and 4.62 into this equation to obtain
δuᵀ F = ∫_V (C δu)ᵀ (D C u) dV = δuᵀ [ ∫_V Cᵀ D C dV ] u,   (4.65)
where δu and u can be removed from the integral, since they are grid point quantities independent of position. Since δu is arbitrary, it follows that
F = [ ∫_V Cᵀ D C dV ] u.   (4.66)
Therefore, the stiffness matrix is given by
K = ∫_V Cᵀ D C dV = ∫_V (B γ⁻¹)ᵀ D (B γ⁻¹) dV,   (4.67)
where the integral is over the volume of the element. Note that, since the material property matrix D is symmetric, K is symmetric. The substitution of the constant B, γ, and D matrices into this equation yields the stiffness matrix for the Constant Strain Triangle:
K = Et / (4A(1 − ν²)) ×
[ y_23² + λx_32²   µx_32 y_23       y_23 y_31 + λx_13 x_32   νx_13 y_23 + λx_32 y_31   y_12 y_23 + λx_21 x_32   νx_21 y_23 + λx_32 y_12
                   x_32² + λy_23²   νx_32 y_31 + λx_13 y_23   x_32 x_13 + λy_31 y_23   νx_32 y_12 + λx_21 y_23   x_21 x_32 + λy_12 y_23
                                    y_31² + λx_13²            µx_13 y_31               y_12 y_31 + λx_21 x_13   νx_21 y_31 + λx_13 y_12
                                                              x_13² + λy_31²           νx_13 y_12 + λx_21 y_31   x_13 x_21 + λy_12 y_31
  Sym                                                                                  y_12² + λx_21²            µx_21 y_12
                                                                                                                 x_21² + λy_12² ],
(4.68)
where
x_ij = x_i − x_j,   y_ij = y_i − y_j,   λ = (1 − ν)/2,   µ = (1 + ν)/2,   (4.69)
and t is the element thickness.
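Since the integrand of Eq. 4.67 is constant over the CST, the integral reduces to K = tA·CᵀDC. The sketch below (hypothetical geometry and material values) builds K this way and checks two properties it must have: symmetry, and zero force under a rigid-body translation:

```python
import numpy as np

E, nu, t = 10.0e6, 0.3, 0.1
xy = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])

gamma = np.zeros((6, 6))                 # Eq. 4.51
for n, (x, y) in enumerate(xy):
    gamma[2*n,     0:3] = [1.0, x, y]
    gamma[2*n + 1, 3:6] = [1.0, x, y]
B = np.array([[0, 1, 0, 0, 0, 0],        # Eq. 4.60
              [0, 0, 0, 0, 0, 1],
              [0, 0, 1, 0, 1, 0]], dtype=float)
C = B @ np.linalg.inv(gamma)             # Eq. 4.61

D = (E / (1 - nu**2)) * np.array([[1, nu, 0],
                                  [nu, 1, 0],
                                  [0, 0, (1 - nu) / 2]])   # Eq. 4.63
A = 0.5 * abs(np.linalg.det(gamma[0::2, 0:3]))             # Eq. 4.55: det = 2A
K = t * A * C.T @ D @ C                                    # Eq. 4.67, constant integrand

print(np.allclose(K, K.T))               # symmetric, since D is symmetric
print(np.abs(K @ np.array([1.0, 0, 1, 0, 1, 0])).max())    # rigid translation: ~0
```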
5 Change of Basis
On many occasions in engineering applications, the need arises to transform vectors and
matrices from one coordinate system to another. For example, in the ﬁnite element method,
it is frequently more convenient to derive element matrices in a local element coordinate
system and then transform those matrices to a global system (Fig. 47). Transformations
are also needed to transform from other orthogonal coordinate systems (e.g., cylindrical or
spherical) to Cartesian coordinates (Fig. 48).
Figure 47: Element Coordinate Systems in the Finite Element Method.
Figure 48: Basis Vectors in Polar Coordinate System.
Let the vector v be given by
v = v_1 e_1 + v_2 e_2 + v_3 e_3 = Σ_{i=1}^{3} v_i e_i,   (5.1)
where e_i are the basis vectors, and v_i are the components of v. With the summation convention, we can omit the summation sign and write
v = v_i e_i,   (5.2)
where, if a subscript appears exactly twice, a summation is implied over the range.
An orthonormal basis is defined as a basis whose basis vectors are mutually orthogonal unit vectors (i.e., vectors of unit length). If e_i is an orthonormal basis,
e_i · e_j = δ_ij = { 1, i = j
                     0, i ≠ j,   (5.3)
where δ_ij is the Kronecker delta.
Since bases are not unique, we can express v in two different orthonormal bases:
v = Σ_{i=1}^{3} v_i e_i = Σ_{i=1}^{3} v̄_i ē_i,   (5.4)
where v_i are the components of v in the unbarred coordinate system, and v̄_i are the components in the barred system (Fig. 49). If we take the dot product of both sides of Eq. 5.4 with e_j, we obtain
Σ_{i=1}^{3} v_i e_i · e_j = Σ_{i=1}^{3} v̄_i ē_i · e_j,   (5.5)
Figure 49: Change of Basis.
where e_i · e_j = δ_ij, and we define the 3×3 matrix R as
R_ij = ē_i · e_j.   (5.6)
Thus, from Eq. 5.5,
v_j = Σ_{i=1}^{3} R_ij v̄_i = Σ_{i=1}^{3} (Rᵀ)_ji v̄_i.   (5.7)
Since the matrix product
C = AB   (5.8)
can be written using subscript notation as
C_ij = Σ_{k=1}^{3} A_ik B_kj,   (5.9)
Eq. 5.7 is equivalent to the matrix product
v = Rᵀ v̄.   (5.10)
Similarly, if we take the dot product of Eq. 5.4 with ē_j, we obtain
Σ_{i=1}^{3} v_i e_i · ē_j = Σ_{i=1}^{3} v̄_i ē_i · ē_j,   (5.11)
where ē_i · ē_j = δ_ij, and e_i · ē_j = R_ji. Thus,
v̄_j = Σ_{i=1}^{3} R_ji v_i   or   v̄ = R v   or   v = R⁻¹ v̄.   (5.12)
A comparison of Eqs. 5.10 and 5.12 yields
R⁻¹ = Rᵀ   or   R Rᵀ = I   or   Σ_{k=1}^{3} R_ik R_jk = δ_ij,   (5.13)
where I is the identity matrix (I_ij = δ_ij):
I = [ 1  0  0
      0  1  0
      0  0  1 ].   (5.14)
This type of transformation is called an orthogonal coordinate transformation (OCT). A
matrix R satisfying Eq. 5.13 is said to be an orthogonal matrix. That is, an orthogonal
matrix is one whose inverse is equal to the transpose. R is sometimes called a rotation
matrix.
For example, for the coordinate rotation shown in Fig. 49, in 3D,
R = [ cos θ    sin θ   0
      −sin θ   cos θ   0
      0        0       1 ].   (5.15)
In 2D,
R = [ cos θ    sin θ
      −sin θ   cos θ ]   (5.16)
and
v_x = v̄_x cos θ − v̄_y sin θ
v_y = v̄_x sin θ + v̄_y cos θ.   (5.17)
We recall that the determinant of a matrix product is equal to the product of the determinants. Also, the determinant of the transpose of a matrix is equal to the determinant of the matrix itself. Thus, from Eq. 5.13,
det(R Rᵀ) = (det R)(det Rᵀ) = (det R)² = det I = 1,   (5.18)
and we conclude that, for an orthogonal matrix R,
det R = ±1.   (5.19)
The plus sign occurs for rotations, and the minus sign occurs for combinations of rotations and reflections. For example, the orthogonal matrix
R = [ 1  0  0
      0  1  0
      0  0  −1 ]   (5.20)
indicates a reflection in the z direction (i.e., the sign of the z component is changed).
We note that the length of a vector is unchanged under an orthogonal coordinate transformation, since the square of the length is given by
v̄_i v̄_i = R_ij v_j R_ik v_k = δ_jk v_j v_k = v_j v_j,   (5.21)
where the summation convention was used. That is, the square of the length of a vector is the same in both coordinate systems.
To summarize, under an orthogonal coordinate transformation, vectors transform according to the rule
v̄ = R v   or   v̄_i = Σ_{j=1}^{3} R_ij v_j,   (5.22)
where
R_ij = ē_i · e_j,   (5.23)
and
R Rᵀ = Rᵀ R = I.   (5.24)
5.1 Tensors
A vector which transforms under an orthogonal coordinate transformation according to the rule v̄ = R v is defined as a tensor of rank 1. A tensor of rank 0 is a scalar (a quantity which is unchanged by an orthogonal coordinate transformation). For example, temperature and pressure are scalars, since T̄ = T and p̄ = p.
We now introduce tensors of rank 2. Consider a matrix M = (M_ij) which relates two vectors u and v by
v = M u   or   v_i = Σ_{j=1}^{3} M_ij u_j   (5.25)
(i.e., the result of multiplying a matrix and a vector is a vector). Also, in a rotated coordinate system,
v̄ = M̄ ū.   (5.26)
Since both u and v are vectors (tensors of rank 1), Eq. 5.25 implies
Rᵀ v̄ = M Rᵀ ū   or   v̄ = R M Rᵀ ū.   (5.27)
By comparing Eqs. 5.26 and 5.27, we obtain
M̄ = R M Rᵀ   (5.28)
or, in index notation,
M̄_ij = Σ_{k=1}^{3} Σ_{l=1}^{3} R_ik R_jl M_kl,   (5.29)
which is the transformation rule for a tensor of rank 2. In general, a tensor of rank n, which has n indices, transforms under an orthogonal coordinate transformation according to the rule
Ā_{ij···k} = Σ_{p=1}^{3} Σ_{q=1}^{3} ··· Σ_{r=1}^{3} R_ip R_jq ··· R_kr A_{pq···r}.   (5.30)
5.2 Examples of Tensors
1. Stress and strain in elasticity
The stress tensor σ is
σ = [ σ_11  σ_12  σ_13
      σ_21  σ_22  σ_23
      σ_31  σ_32  σ_33 ],   (5.31)
where σ_11, σ_22, σ_33 are the direct (normal) stresses, and σ_12, σ_13, and σ_23 are the shear stresses. The corresponding strain tensor is
ε = [ ε_11  ε_12  ε_13
      ε_21  ε_22  ε_23
      ε_31  ε_32  ε_33 ],   (5.32)
where, in terms of displacements,
ε_ij = (1/2)(u_{i,j} + u_{j,i}) = (1/2)(∂u_i/∂x_j + ∂u_j/∂x_i).   (5.33)
The shear strains in this tensor are equal to half the corresponding engineering shear strains.
Both σ and ε transform like second rank tensors.
2. Generalized Hooke’s law
According to Hooke’s law in elasticity, the extension in a bar subjected to an axial force
is proportional to the force, or stress is proportional to strain. In 1D,
σ = Eε, (5.34)
where E is Young’s modulus, an experimentally determined material property.
In general three-dimensional elasticity, there are nine components of stress σ_ij and nine components of strain ε_ij. According to generalized Hooke's law, each stress component can be written as a linear combination of the nine strain components:
σ_ij = c_ijkl ε_kl,   (5.35)
where the 81 material constants c_ijkl are experimentally determined, and the summation convention is being used.
We now prove that c_ijkl is a tensor of rank 4. We can write Eq. 5.35 in terms of stress and strain in a second orthogonal coordinate system as
R_ki R_lj σ̄_kl = c_ijkl R_mk R_nl ε̄_mn.   (5.36)
If we multiply both sides of this equation by R_pj R_oi, and sum repeated indices, we obtain
R_pj R_oi R_ki R_lj σ̄_kl = R_oi R_pj R_mk R_nl c_ijkl ε̄_mn,   (5.37)
or, because R is an orthogonal matrix,
δ_ok δ_pl σ̄_kl = σ̄_op = R_oi R_pj R_mk R_nl c_ijkl ε̄_mn.   (5.38)
Since, in the second coordinate system,
σ̄_op = c̄_opmn ε̄_mn,   (5.39)
we conclude that
c̄_opmn = R_oi R_pj R_mk R_nl c_ijkl,   (5.40)
which proves that c_ijkl is a tensor of rank 4.
3. Stiﬀness matrix in ﬁnite element analysis
In the ﬁnite element method for linear analysis of structures, the forces F acting on an
object in static equilibrium are a linear combination of the displacements u (or vice versa):
Ku = F, (5.41)
where K is referred to as the stiﬀness matrix (with dimensions of force/displacement). In
this equation, u and F contain several subvectors, since u and F are the displacement and
force vectors for all grid points, i.e.,
u = \begin{Bmatrix} u_a \\ u_b \\ u_c \\ \vdots \end{Bmatrix}, \quad F = \begin{Bmatrix} F_a \\ F_b \\ F_c \\ \vdots \end{Bmatrix}   (5.42)
for grid points a, b, c, \ldots, where
\bar{u}_a = R_a u_a, \quad \bar{u}_b = R_b u_b, \quad \cdots.   (5.43)
Thus,
\bar{u} = \begin{Bmatrix} \bar{u}_a \\ \bar{u}_b \\ \bar{u}_c \\ \vdots \end{Bmatrix} = \begin{bmatrix} R_a & 0 & 0 & \cdots \\ 0 & R_b & 0 & \cdots \\ 0 & 0 & R_c & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{Bmatrix} u_a \\ u_b \\ u_c \\ \vdots \end{Bmatrix} = \Gamma u,   (5.44)
where Γ is an orthogonal blockdiagonal matrix consisting of rotation matrices R:
\Gamma = \begin{bmatrix} R_a & 0 & 0 & \cdots \\ 0 & R_b & 0 & \cdots \\ 0 & 0 & R_c & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix},   (5.45)
and
\Gamma^T \Gamma = I.   (5.46)
Figure 50: Element Coordinate System for Pin-Jointed Rod.
Similarly,
\bar{F} = \Gamma F.   (5.47)
Thus, if
\bar{K}\bar{u} = \bar{F},   (5.48)
then
\bar{K}\Gamma u = \Gamma F   (5.49)
or
\left( \Gamma^T \bar{K} \Gamma \right) u = F.   (5.50)
That is, the stiffness matrix transforms like other tensors of rank 2:
K = \Gamma^T \bar{K} \Gamma.   (5.51)
We illustrate the transformation of a finite element stiffness matrix by transforming the stiffness matrix for the pin-jointed rod element from a local element coordinate system to a global Cartesian system. Consider the rod shown in Fig. 50. For this element, the 4×4 2D stiffness matrix in the element coordinate system is
\bar{K} = \frac{AE}{L} \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},   (5.52)
where A is the cross-sectional area of the rod, E is Young's modulus of the rod material, and L is the rod length. In the global coordinate system,
K = \Gamma^T \bar{K} \Gamma,   (5.53)
where the 4×4 transformation matrix is
\Gamma = \begin{bmatrix} R & 0 \\ 0 & R \end{bmatrix},   (5.54)
and the rotation matrix is
R = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}.   (5.55)
Thus,
K = \frac{AE}{L} \begin{bmatrix} c & -s & 0 & 0 \\ s & c & 0 & 0 \\ 0 & 0 & c & -s \\ 0 & 0 & s & c \end{bmatrix} \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} c & s & 0 & 0 \\ -s & c & 0 & 0 \\ 0 & 0 & c & s \\ 0 & 0 & -s & c \end{bmatrix} = \frac{AE}{L} \begin{bmatrix} c^2 & cs & -c^2 & -cs \\ cs & s^2 & -cs & -s^2 \\ -c^2 & -cs & c^2 & cs \\ -cs & -s^2 & cs & s^2 \end{bmatrix},   (5.56)
where c = cos θ and s = sin θ. This result agrees with that found earlier in Eq. 4.27.
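The transformation in Eq. 5.56 can be checked numerically. The following is a small illustration (an added sketch, not part of the original notes), using plain nested lists and taking AE/L = 1 and θ = 30° as arbitrary test values:

```python
import math

def matmul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

theta = math.radians(30.0)
c, s = math.cos(theta), math.sin(theta)

# Element-frame rod stiffness (Eq. 5.52 with AE/L = 1)
K_bar = [[1, 0, -1, 0],
         [0, 0,  0, 0],
         [-1, 0, 1, 0],
         [0, 0,  0, 0]]

# Block-diagonal transformation matrix (Eqs. 5.54-5.55)
Gamma = [[c,  s, 0, 0],
         [-s, c, 0, 0],
         [0,  0, c, s],
         [0,  0, -s, c]]

# K = Gamma^T K_bar Gamma (Eq. 5.53)
K = matmul(transpose(Gamma), matmul(K_bar, Gamma))

# Expected pattern from Eq. 5.56
K_expected = [[c*c,  c*s, -c*c, -c*s],
              [c*s,  s*s, -c*s, -s*s],
              [-c*c, -c*s, c*c,  c*s],
              [-c*s, -s*s, c*s,  s*s]]
```

The same check works for any θ, since Eq. 5.56 holds identically in c and s.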
5.3 Isotropic Tensors
An isotropic tensor is a tensor which is independent of coordinate system (i.e., invariant under an orthogonal coordinate transformation). The Kronecker delta \delta_{ij} is a second rank tensor and isotropic, since \bar{\delta}_{ij} = \delta_{ij}, and
\bar{I} = R I R^T = R R^T = I.   (5.57)
That is, the identity matrix I is invariant under an orthogonal coordinate transformation.
It can be shown in tensor analysis that \delta_{ij} is the only isotropic tensor of rank 2 and, moreover, \delta_{ij} is the characteristic tensor for all isotropic tensors:

Rank   Isotropic Tensors
1      none
2      c\,\delta_{ij}
3      none
4      a\,\delta_{ij}\delta_{kl} + b\,\delta_{ik}\delta_{jl} + c\,\delta_{il}\delta_{jk}
odd    none

That is, all isotropic tensors of rank 4 must be of the form shown above, which has three constants. For example, in generalized Hooke's law (Eq. 5.35), the material property tensor c_{ijkl} has 3^4 = 81 constants (assuming no symmetry). For an isotropic material, c_{ijkl} must be an isotropic tensor of rank 4, thus implying at most three material constants (on the basis of tensor analysis alone). The actual number of independent material constants for an isotropic material turns out to be two rather than three, a result which depends on the existence of a strain energy function, which implies the additional symmetry c_{ijkl} = c_{klij}.
6 Calculus of Variations
Recall from calculus that a function of one variable y = f(x) attains a stationary value (minimum, maximum, or neutral) at the point x = x_0 if the derivative f'(x) = 0 at x = x_0.
Figure 51: Minimum, Maximum, and Neutral Stationary Values.
Alternatively, the ﬁrst variation of f (which is similar to a diﬀerential) must vanish for
arbitrary changes δx in x:
\delta f = \left( \frac{df}{dx} \right) \delta x = 0.   (6.1)
The second derivative determines what kind of stationary value one has:
minimum:  \frac{d^2 f}{dx^2} > 0 at x = x_0   (6.2)
maximum:  \frac{d^2 f}{dx^2} < 0 at x = x_0   (6.3)
neutral:  \frac{d^2 f}{dx^2} = 0 at x = x_0,   (6.4)
as shown in Fig. 51.
For a function of two variables z = f(x, y), z is stationary at (x_0, y_0) if
\frac{\partial f}{\partial x} = 0 \quad \text{and} \quad \frac{\partial f}{\partial y} = 0 \quad \text{at } (x_0, y_0).
Alternatively, for a stationary value,
\delta f = \left( \frac{\partial f}{\partial x} \right) \delta x + \left( \frac{\partial f}{\partial y} \right) \delta y = 0 \quad \text{at } (x_0, y_0).
A function with n independent variables f(x_1, x_2, \ldots, x_n) is stationary at a point P if
\delta f = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\,\delta x_i = 0 \quad \text{at } P
for arbitrary variations \delta x_i. Hence,
\frac{\partial f}{\partial x_i} = 0 \quad \text{at } P \quad (i = 1, 2, \ldots, n).
Variational calculus is concerned with ﬁnding stationary values, not of functions, but of
functionals. In general, a functional is a function of a function. More precisely, a functional
is an integral which has a speciﬁc numerical value for each function that is substituted into
it.
Consider the functional
I(\phi) = \int_a^b F(x, \phi, \phi_x)\,dx,   (6.5)
where x is the independent variable, \phi(x) is the dependent variable, and
\phi_x = \frac{d\phi}{dx}.   (6.6)
The variation in I due to an arbitrary small change in \phi is
\delta I = \int_a^b \delta F\,dx = \int_a^b \left( \frac{\partial F}{\partial \phi}\,\delta\phi + \frac{\partial F}{\partial \phi_x}\,\delta\phi_x \right) dx,   (6.7)
where, with an interchange of the order of d and \delta,
\delta\phi_x = \delta\left( \frac{d\phi}{dx} \right) = \frac{d}{dx}(\delta\phi).   (6.8)
With this equation, the second term in Eq. 6.7 can be integrated by parts to obtain
\int_a^b \frac{\partial F}{\partial \phi_x}\,\delta\phi_x\,dx = \int_a^b \frac{\partial F}{\partial \phi_x}\,\frac{d}{dx}(\delta\phi)\,dx = \left. \frac{\partial F}{\partial \phi_x}\,\delta\phi \right|_a^b - \int_a^b \delta\phi\,\frac{d}{dx}\left( \frac{\partial F}{\partial \phi_x} \right) dx.   (6.9)
Thus,
\delta I = \int_a^b \left[ \frac{\partial F}{\partial \phi} - \frac{d}{dx}\left( \frac{\partial F}{\partial \phi_x} \right) \right] \delta\phi\,dx + \left. \left[ \frac{\partial F}{\partial \phi_x}\,\delta\phi \right] \right|_a^b.   (6.10)
Since \delta\phi is arbitrary (within certain limits of admissibility), \delta I = 0 implies that both terms in Eq. 6.10 must vanish. Moreover, for arbitrary \delta\phi, a zero integral in Eq. 6.10 implies a zero integrand:
\frac{d}{dx}\left( \frac{\partial F}{\partial \phi_x} \right) - \frac{\partial F}{\partial \phi} = 0,   (6.11)
which is referred to as the Euler-Lagrange equation.
The second term in Eq. 6.10 must also vanish for arbitrary \delta\phi:
\left. \left[ \frac{\partial F}{\partial \phi_x}\,\delta\phi \right] \right|_a^b = 0.   (6.12)
If \phi is specified at the boundaries x = a and x = b,
\delta\phi(a) = \delta\phi(b) = 0,   (6.13)
and Eq. 6.12 is automatically satisfied. This type of boundary condition is called a geometric or essential boundary condition. If \phi is not specified at the boundaries x = a and x = b, Eq. 6.12 implies
\left. \frac{\partial F}{\partial \phi_x} \right|_{x=a} = \left. \frac{\partial F}{\partial \phi_x} \right|_{x=b} = 0.   (6.14)
This type of boundary condition is called a natural boundary condition.
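As an illustration of the natural boundary condition (an added sketch, not from the text): minimize the discrete form of I(y) = ∫₀¹ ½(y')² dx with the essential condition y(0) = 1 and the right end left free. The Euler-Lagrange equation gives y'' = 0, and the natural condition y'(1) = 0 then forces y ≡ 1, which simple gradient descent on the discretized functional reproduces. The mesh size, step size, and iteration count below are arbitrary choices:

```python
n = 20                    # number of intervals on [0, 1]
h = 1.0 / n
y = [1.0] + [0.0] * n     # y[0] = 1 is imposed; the rest start at 0

def grad(y):
    """Gradient of the discrete energy sum((y[i+1]-y[i])^2 / (2h))."""
    g = [0.0] * (n + 1)
    for i in range(1, n + 1):
        g[i] += (y[i] - y[i - 1]) / h
        if i < n:
            g[i] -= (y[i + 1] - y[i]) / h
    g[0] = 0.0            # essential BC: y[0] is never updated
    return g

step = 0.02
for _ in range(40000):
    g = grad(y)
    y = [yi - step * gi for yi, gi in zip(y, g)]

slope_at_free_end = (y[n] - y[n - 1]) / h   # approximates y'(1)
```

The minimizer flattens to y ≡ 1 without the condition y'(1) = 0 ever being imposed explicitly; it emerges "naturally" from the minimization.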
Figure 52: Curve of Minimum Length Between Two Points.
6.1 Example 1: The Shortest Distance Between Two Points
We wish to ﬁnd the equation of the curve y(x) along which the distance from the origin (0, 0)
to (1, 1) in the xyplane is least. Consider the curve in Fig. 52. The diﬀerential arc length
along the curve is given by
ds = \sqrt{dx^2 + dy^2} = \sqrt{1 + (y')^2}\,dx.   (6.15)
We seek the curve y(x) which minimizes the total arc length
I(y) = \int_0^1 \sqrt{1 + (y')^2}\,dx,   (6.16)
where y(0) = 0 and y(1) = 1. For this problem,
F(x, y, y') = \left[ 1 + (y')^2 \right]^{1/2},   (6.17)
which depends only on y' explicitly. Thus, the Euler-Lagrange equation, Eq. 6.11, is
\frac{d}{dx}\left( \frac{\partial F}{\partial y'} \right) = 0,   (6.18)
where
\frac{\partial F}{\partial y'} = \frac{1}{2}\left[ 1 + (y')^2 \right]^{-1/2} 2y' = \frac{y'}{\left[ 1 + (y')^2 \right]^{1/2}}.   (6.19)
Hence,
\frac{y'}{\left[ 1 + (y')^2 \right]^{1/2}} = c,   (6.20)
where c is a constant of integration. If we solve this equation for y', we obtain
y' = \sqrt{\frac{c^2}{1 - c^2}} = a,   (6.21)
where a is another constant. Thus,
y(x) = ax + b,   (6.22)
and, with the boundary conditions,
y(x) = x,   (6.23)
which is the straight line joining (0, 0) and (1, 1), as expected.
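This result can be spot-checked numerically; the sketch below (an illustration, not from the text) compares the arc length of y = x with that of an admissible perturbed curve y = x + ε sin(πx), which still joins (0, 0) to (1, 1):

```python
import math

def arc_length(yprime, n=2000):
    """Midpoint-rule approximation of Eq. 6.16, given y'(x)."""
    h = 1.0 / n
    return sum(math.sqrt(1.0 + yprime((i + 0.5) * h) ** 2) * h
               for i in range(n))

# y = x has y' = 1 everywhere; the exact arc length is sqrt(2)
straight = arc_length(lambda x: 1.0)

# perturbed admissible curve: y = x + eps*sin(pi*x), so y' = 1 + eps*pi*cos(pi*x)
eps = 0.1
perturbed = arc_length(lambda x: 1.0 + eps * math.pi * math.cos(math.pi * x))
```

Any admissible perturbation should lengthen the curve, consistent with the straight line being the minimizer.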
Figure 53: The Brachistochrone Problem.
6.2 Example 2: The Brachistochrone
We wish to determine the path y(x) from the origin O to the point A along which a bead will
slide under the force of gravity and without friction in the shortest time (Fig. 53). Assume
that the bead starts from rest. The velocity v along the path y(x) is
v = \frac{ds}{dt} = \frac{\sqrt{dx^2 + dy^2}}{dt} = \sqrt{1 + (y')^2}\,\frac{dx}{dt},   (6.24)
where s denotes distance along the path. Thus,
dt = \frac{\sqrt{1 + (y')^2}\,dx}{v}.   (6.25)
As the bead slides down the path, potential energy is converted to kinetic energy. At any location x, the kinetic energy equals the reduction in potential energy:
\frac{1}{2} m v^2 = mgx   (6.26)
or
v = \sqrt{2gx}.   (6.27)
Thus, from Eq. 6.25,
dt = \sqrt{\frac{1 + (y')^2}{2gx}}\,dx.   (6.28)
The total time for the bead to fall from O to A is then
t = \int_0^{x_A} \sqrt{\frac{1 + (y')^2}{2gx}}\,dx,   (6.29)
which is the integral to be minimized. From Eq. 6.11, the Euler-Lagrange equation is
\frac{d}{dx}\left( \frac{\partial F}{\partial y'} \right) - \frac{\partial F}{\partial y} = 0,   (6.30)
where
F(x, y, y') = \sqrt{\frac{1 + (y')^2}{2gx}}.   (6.31)
Thus, the Euler-Lagrange equation becomes
\frac{d}{dx}\left[ \frac{1}{2}\left( \frac{1 + (y')^2}{2gx} \right)^{-1/2} \frac{2y'}{2gx} \right] = 0   (6.32)
or
\frac{d}{dx}\left[ \frac{y'}{\{ x [1 + (y')^2] \}^{1/2}} \right] = 0.   (6.33)
To solve this equation, we integrate Eq. 6.33 to obtain
\frac{y'}{\{ x [1 + (y')^2] \}^{1/2}} = c,   (6.34)
where c is a constant of integration. We then square both sides of this equation, and solve for y':
(y')^2 = c^2 x \left[ 1 + (y')^2 \right]   (6.35)
or
y' = \frac{dy}{dx} = \sqrt{\frac{c^2 x}{1 - c^2 x}}.   (6.36)
This equation can be integrated using the change of variable
x = \frac{1}{c^2}\sin^2(\theta/2), \quad dx = \frac{1}{c^2}\sin(\theta/2)\cos(\theta/2)\,d\theta.   (6.37)
The integral implied by Eq. 6.36 then transforms to
y = \int \frac{\sin(\theta/2)}{\cos(\theta/2)} \cdot \frac{1}{c^2}\sin(\theta/2)\cos(\theta/2)\,d\theta = \frac{1}{c^2}\int \sin^2(\theta/2)\,d\theta   (6.38)
  = \frac{1}{2c^2}\int (1 - \cos\theta)\,d\theta = \frac{1}{2c^2}(\theta - \sin\theta) + y_0,   (6.39)
where y_0 is a constant of integration. Since the curve must pass through the origin, we must have y_0 = 0. Also, from Eq. 6.37,
x = \frac{1}{2c^2}(1 - \cos\theta).   (6.40)
If we then let a = 1/(2c^2), we obtain the final result in parametric form
x = a(1 - \cos\theta), \quad y = a(\theta - \sin\theta),   (6.41)
which is a cycloid generated by the motion of a fixed point on the circumference of a circle of radius a which rolls on the positive side of the line x = 0.
To solve Eq. 6.41 for a specific cycloid (defined by the two end points), one can first eliminate the circle radius a from Eq. 6.41 to solve (iteratively) for \theta_A (the value of \theta at Point A), after which either of the two equations in Eq. 6.41 can be used to determine the constant a. The resulting cycloids for several end points are shown in Fig. 54.
This brachistochrone solution is valid for any end point which the bead can reach. Thus, the end point must not be higher than the starting point. The end point may be at the same elevation since, without friction, there is no loss of energy during the motion.
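The iterative end-point procedure just described can be sketched as follows (an illustration, not from the text; the bisection bracket (0, 2π] assumes the end point is reached within one arch of the cycloid):

```python
import math

def cycloid_through(x_A, y_A):
    """Given end point (x_A, y_A), with x measured downward as in Fig. 53,
    solve (1 - cos t)/(t - sin t) = x_A/y_A for t = theta_A by bisection,
    then recover the circle radius a from Eq. 6.41."""
    f = lambda t: (1.0 - math.cos(t)) / (t - math.sin(t)) - x_A / y_A
    lo, hi = 1e-6, 2.0 * math.pi      # f(lo) > 0 (ratio blows up), f(hi) < 0
    for _ in range(200):              # bisection to machine precision
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    theta_A = 0.5 * (lo + hi)
    a = x_A / (1.0 - math.cos(theta_A))
    return a, theta_A

# cycloid through the origin and the point (1, 1)
a, theta_A = cycloid_through(1.0, 1.0)
```

The computed pair (a, θ_A) should reproduce the end point through both equations of Eq. 6.41.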
Figure 54: Several Brachistochrone Solutions.
6.3 Constraint Conditions
Suppose we want to extremize the functional
I(\phi) = \int_a^b F(x, \phi, \phi_x)\,dx   (6.42)
subject to the additional constraint condition
\int_a^b G(x, \phi, \phi_x)\,dx = J,   (6.43)
where J is a specified constant. We recall that the Euler-Lagrange equation was found by requiring that \delta I = 0. However, since J is a constant, \delta J = 0. Thus,
\delta(I + \lambda J) = 0,   (6.44)
where \lambda is an unknown constant referred to as a Lagrange multiplier. Thus, to enforce the constraint in Eq. 6.43, we can replace F in the Euler-Lagrange equation with
H = F + \lambda G.   (6.45)
6.4 Example 3: A Constrained Minimization Problem
We wish to find the function y(x) which minimizes the integral
I(y) = \int_0^1 (y')^2\,dx   (6.46)
subject to the end conditions y(0) = y(1) = 0 and the constraint
\int_0^1 y\,dx = 1.   (6.47)
That is, the area under the curve y(x) is unity (Fig. 55). For this problem,
Figure 55: A Constrained Minimization Problem.
H(x, y, y') = (y')^2 + \lambda y.   (6.48)
The Euler-Lagrange equation,
\frac{d}{dx}\left( \frac{\partial H}{\partial y'} \right) - \frac{\partial H}{\partial y} = 0,   (6.49)
yields
\frac{d}{dx}(2y') - \lambda = 0   (6.50)
or
y'' = \lambda/2.   (6.51)
After integrating this equation twice, we obtain
y = \frac{\lambda}{4}x^2 + Ax + B.   (6.52)
With the boundary conditions y(0) = y(1) = 0, we obtain B = 0 and A = -\lambda/4. Thus,
y = -\lambda x(1 - x)/4.   (6.53)
The area constraint, Eq. 6.47, is used to evaluate the Lagrange multiplier \lambda:
1 = \int_0^1 y\,dx = -\int_0^1 \frac{\lambda}{4}\left( x - x^2 \right) dx = -\frac{\lambda}{4}\left. \left( \frac{x^2}{2} - \frac{x^3}{3} \right) \right|_0^1 = -\frac{\lambda}{24}.   (6.54)
Thus, with \lambda = -24, we obtain the parabola
y(x) = 6x(1 - x).   (6.55)
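A quick numerical check of this result (an added sketch, not from the text): verify the unit area, evaluate I(y) for the parabola (the exact value is 12), and compare against an area-preserving admissible perturbation y + ε sin(2πx):

```python
import math

def integrate(f, n=4000):
    """Midpoint-rule quadrature on [0, 1]."""
    h = 1.0 / n
    return sum(f((i + 0.5) * h) * h for i in range(n))

# area under the parabola y = 6x(1 - x) should be 1 (Eq. 6.47)
area = integrate(lambda x: 6.0 * x * (1.0 - x))

# I(y) = integral of (y')^2 with y' = 6 - 12x; exact value is 12
I_parabola = integrate(lambda x: (6.0 - 12.0 * x) ** 2)

# perturbation eps*sin(2*pi*x) vanishes at both ends and has zero net area,
# so the perturbed curve is still admissible
eps = 0.2
I_perturbed = integrate(
    lambda x: (6.0 - 12.0 * x + eps * 2.0 * math.pi * math.cos(2.0 * math.pi * x)) ** 2)
```

The perturbed functional value exceeds 12, consistent with the parabola being the constrained minimizer.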
6.5 Functions of Several Independent Variables
Consider
I(\phi) = \int_V F(x, y, z, \phi, \phi_x, \phi_y, \phi_z)\,dV,   (6.56)
a functional with three independent variables (x, y, z). Note that, in 3D, the integration is a volume integral.
The variation in I due to an arbitrary small change in \phi is
\delta I = \int_V \left( \frac{\partial F}{\partial \phi}\,\delta\phi + \frac{\partial F}{\partial \phi_x}\,\delta\phi_x + \frac{\partial F}{\partial \phi_y}\,\delta\phi_y + \frac{\partial F}{\partial \phi_z}\,\delta\phi_z \right) dV,   (6.57)
where, with an interchange of the order of \partial and \delta,
\delta I = \int_V \left[ \frac{\partial F}{\partial \phi}\,\delta\phi + \frac{\partial F}{\partial \phi_x}\frac{\partial}{\partial x}(\delta\phi) + \frac{\partial F}{\partial \phi_y}\frac{\partial}{\partial y}(\delta\phi) + \frac{\partial F}{\partial \phi_z}\frac{\partial}{\partial z}(\delta\phi) \right] dV.   (6.58)
To perform a three-dimensional integration by parts, we note that the second term in Eq. 6.58 can be expanded to yield
\frac{\partial F}{\partial \phi_x}\frac{\partial}{\partial x}(\delta\phi) = \frac{\partial}{\partial x}\left( \frac{\partial F}{\partial \phi_x}\,\delta\phi \right) - \frac{\partial}{\partial x}\left( \frac{\partial F}{\partial \phi_x} \right)\delta\phi.   (6.59)
If we then expand the third and fourth terms similarly, Eq. 6.58 becomes, after regrouping,
\delta I = \int_V \left[ \frac{\partial F}{\partial \phi}\,\delta\phi - \frac{\partial}{\partial x}\left( \frac{\partial F}{\partial \phi_x} \right)\delta\phi - \frac{\partial}{\partial y}\left( \frac{\partial F}{\partial \phi_y} \right)\delta\phi - \frac{\partial}{\partial z}\left( \frac{\partial F}{\partial \phi_z} \right)\delta\phi \right.
\left. {} + \frac{\partial}{\partial x}\left( \frac{\partial F}{\partial \phi_x}\,\delta\phi \right) + \frac{\partial}{\partial y}\left( \frac{\partial F}{\partial \phi_y}\,\delta\phi \right) + \frac{\partial}{\partial z}\left( \frac{\partial F}{\partial \phi_z}\,\delta\phi \right) \right] dV.   (6.60)
The last three terms in this integral can then be converted to a surface integral using the divergence theorem, which states that, for a vector field f,
\int_V \nabla \cdot f\,dV = \int_S f \cdot n\,dS,   (6.61)
where S is the closed surface which encloses the volume V. Eq. 6.60 then becomes
\delta I = \int_V \left[ \frac{\partial F}{\partial \phi} - \frac{\partial}{\partial x}\left( \frac{\partial F}{\partial \phi_x} \right) - \frac{\partial}{\partial y}\left( \frac{\partial F}{\partial \phi_y} \right) - \frac{\partial}{\partial z}\left( \frac{\partial F}{\partial \phi_z} \right) \right] \delta\phi\,dV
{} + \int_S \left( \frac{\partial F}{\partial \phi_x} n_x + \frac{\partial F}{\partial \phi_y} n_y + \frac{\partial F}{\partial \phi_z} n_z \right) \delta\phi\,dS,   (6.62)
where n = (n_x, n_y, n_z) is the unit outward normal on the surface.
Since \delta I = 0, both integrals in this equation must vanish for arbitrary \delta\phi. It therefore follows that the integrand in the volume integral must also vanish to yield the Euler-Lagrange equation for three independent variables:
\frac{\partial}{\partial x}\left( \frac{\partial F}{\partial \phi_x} \right) + \frac{\partial}{\partial y}\left( \frac{\partial F}{\partial \phi_y} \right) + \frac{\partial}{\partial z}\left( \frac{\partial F}{\partial \phi_z} \right) - \frac{\partial F}{\partial \phi} = 0.   (6.63)
If \phi is specified on the boundary S, \delta\phi = 0 on S, and the boundary integral in Eq. 6.62 automatically vanishes. If \phi is not specified on S, the integrand in the boundary integral must vanish:
\frac{\partial F}{\partial \phi_x} n_x + \frac{\partial F}{\partial \phi_y} n_y + \frac{\partial F}{\partial \phi_z} n_z = 0 \quad \text{on } S.   (6.64)
This is the natural boundary condition when \phi is not specified on the boundary.
6.6 Example 4: Poisson’s Equation
Consider the functional
I(\phi) = \int_V \left[ \frac{1}{2}\left( \phi_x^2 + \phi_y^2 + \phi_z^2 \right) - Q\phi \right] dV,   (6.65)
in which case
F(x, y, z, \phi, \phi_x, \phi_y, \phi_z) = \frac{1}{2}\left( \phi_x^2 + \phi_y^2 + \phi_z^2 \right) - Q\phi.   (6.66)
The Euler-Lagrange equation, Eq. 6.63, implies
\frac{\partial}{\partial x}(\phi_x) + \frac{\partial}{\partial y}(\phi_y) + \frac{\partial}{\partial z}(\phi_z) - (-Q) = 0.   (6.67)
That is,
\phi_{xx} + \phi_{yy} + \phi_{zz} = -Q   (6.68)
or
\nabla^2 \phi = -Q,   (6.69)
which is Poisson's equation. Thus, we have shown that solving Poisson's equation is equivalent to minimizing the functional I in Eq. 6.65. In general, a key conclusion of this discussion of variational calculus is that solving a differential equation is equivalent to minimizing some functional involving an integral.
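A one-dimensional analogue (an added sketch, not from the text) makes this equivalence concrete: minimizing I(φ) = ∫[½(φ')² − Qφ] dx with φ(0) = φ(1) = 0 leads, after finite-difference discretization, to a tridiagonal system that is a discrete form of φ'' = −Q. For constant Q the exact solution is φ = Qx(1 − x)/2, and the discrete minimizer matches it at the nodes:

```python
Q = 1.0
n = 10                       # intervals; unknowns are phi[1..n-1]
h = 1.0 / n

# Stationarity of the discrete energy gives, for each interior node i,
# (-phi[i-1] + 2*phi[i] - phi[i+1])/h = Q*h.
m = n - 1
a, b, c = -1.0 / h, 2.0 / h, -1.0 / h
d = [Q * h] * m

# Thomas algorithm: forward sweep
cp = [0.0] * m
dp = [0.0] * m
cp[0] = c / b
dp[0] = d[0] / b
for i in range(1, m):
    denom = b - a * cp[i - 1]
    cp[i] = c / denom
    dp[i] = (d[i] - a * dp[i - 1]) / denom

# back substitution (phi[0] = phi[n] = 0 are the essential BCs)
phi = [0.0] * (n + 1)
phi[m] = dp[m - 1]
for i in range(m - 1, 0, -1):
    phi[i] = dp[i - 1] - cp[i - 1] * phi[i + 1]

exact = [Q * x * (1.0 - x) / 2.0 for x in (i * h for i in range(n + 1))]
err = max(abs(p - e) for p, e in zip(phi, exact))
```

Because the exact solution is quadratic, the centered second difference is exact and the discrete solution agrees with φ = Qx(1 − x)/2 to roundoff.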
6.7 Functions of Several Dependent Variables
Consider the functional
I(\phi_1, \phi_2, \phi_3) = \int_a^b F(x, \phi_1, \phi_2, \phi_3, \phi_1', \phi_2', \phi_3')\,dx,   (6.70)
where x is the independent variable, and \phi_1(x), \phi_2(x), and \phi_3(x) are the dependent variables. It can be shown that the generalization of the Euler-Lagrange equation, Eq. 6.11, for this case is
\frac{d}{dx}\left( \frac{\partial F}{\partial \phi_1'} \right) - \frac{\partial F}{\partial \phi_1} = 0   (6.71)
\frac{d}{dx}\left( \frac{\partial F}{\partial \phi_2'} \right) - \frac{\partial F}{\partial \phi_2} = 0   (6.72)
\frac{d}{dx}\left( \frac{\partial F}{\partial \phi_3'} \right) - \frac{\partial F}{\partial \phi_3} = 0.   (6.73)
7 Variational Approach to the Finite Element Method
In modern engineering analysis, one of the most important applications of the energy theorems, such as the minimum potential energy theorem in elasticity, is the finite element method, a numerical procedure for solving partial differential equations. For linear equations, finite element solutions are often based on variational methods.
7.1 Index Notation and Summation Convention
Let a be the vector
a = a_1 e_1 + a_2 e_2 + a_3 e_3,   (7.1)
where e_i is the unit vector in the ith Cartesian coordinate direction, and a_i is the ith component of a (the projection of a on e_i). Consider a second vector
b = b_1 e_1 + b_2 e_2 + b_3 e_3.   (7.2)
Then, the dot product (scalar product) is
a \cdot b = a_1 b_1 + a_2 b_2 + a_3 b_3 = \sum_{i=1}^{3} a_i b_i.   (7.3)
Using the summation convention, we can omit the summation symbol and write
a \cdot b = a_i b_i,   (7.4)
where, if a subscript appears exactly twice, a summation is implied over the range. The range is 1, 2, 3 in 3D and 1, 2 in 2D.
Let f(x_1, x_2, x_3) be a scalar function of the Cartesian coordinates x_1, x_2, x_3. We define the shorthand comma notation for derivatives:
\frac{\partial f}{\partial x_i} = f_{,i}.   (7.5)
That is, the subscript ", i" denotes the partial derivative with respect to the ith Cartesian coordinate direction.
Using the comma notation and the summation convention, a variety of familiar quantities can be written in compact form. For example, the gradient can be written
\nabla f = \frac{\partial f}{\partial x_1} e_1 + \frac{\partial f}{\partial x_2} e_2 + \frac{\partial f}{\partial x_3} e_3 = f_{,i}\,e_i.   (7.6)
For a vector-valued function F(x_1, x_2, x_3), the divergence is written
\nabla \cdot F = \frac{\partial F_1}{\partial x_1} + \frac{\partial F_2}{\partial x_2} + \frac{\partial F_3}{\partial x_3} = F_{i,i},   (7.7)
and the Laplacian of the scalar function f(x_1, x_2, x_3) is
\nabla^2 f = \nabla \cdot \nabla f = \frac{\partial^2 f}{\partial x_1^2} + \frac{\partial^2 f}{\partial x_2^2} + \frac{\partial^2 f}{\partial x_3^2} = f_{,ii}.   (7.8)
The divergence theorem states that, for any vector field F(x_1, x_2, x_3) defined in a volume V bounded by a closed surface S,
\int_V \nabla \cdot F\,dV = \int_S F \cdot n\,dS   (7.9)
or, in index notation,
\int_V F_{i,i}\,dV = \int_S F_i n_i\,dS.   (7.10)
The normal derivative can be written in index notation as
\frac{\partial \phi}{\partial n} = \nabla\phi \cdot n = \phi_{,i} n_i.   (7.11)
The dot product of two gradients can be written
\nabla\phi \cdot \nabla\phi = (\phi_{,i} e_i) \cdot (\phi_{,j} e_j) = \phi_{,i}\phi_{,j}\,e_i \cdot e_j = \phi_{,i}\phi_{,j}\,\delta_{ij} = \phi_{,i}\phi_{,i},   (7.12)
where \delta_{ij} is the Kronecker delta defined as
\delta_{ij} = \begin{cases} 1, & i = j, \\ 0, & i \neq j. \end{cases}   (7.13)
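The summation convention is easy to mimic mechanically; a minimal sketch (not from the text), with 3D vectors stored as plain lists and each repeated subscript becoming an explicit sum:

```python
a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]

def delta(i, j):
    """Kronecker delta, Eq. 7.13."""
    return 1.0 if i == j else 0.0

# a_i b_i (Eq. 7.4): one repeated index -> one sum over the range 1, 2, 3
dot = sum(a[i] * b[i] for i in range(3))

# a_i b_j delta_ij: two repeated indices -> a double sum, which the
# Kronecker delta contracts back to the dot product (cf. Eq. 7.12)
contracted = sum(a[i] * b[j] * delta(i, j) for i in range(3) for j in range(3))
```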
7.2 Deriving Variational Principles
For each partial diﬀerential equation of interest, one needs a functional which a solution
makes stationary. Given the functional, it is generally easy to see what partial diﬀerential
equation corresponds to it. The harder problem is to start with the equation and derive the
variational principle (i.e., derive the functional which is to be minimized). To simplify this
discussion, we will consider only scalar, rather than vector, problems. A scalar problem has
one dependent variable and one partial diﬀerential equation, whereas a vector problem has
several dependent variables coupled to each other through a system of partial diﬀerential
equations.
Consider Poisson's equation subject to both Dirichlet and Neumann boundary conditions,
\nabla^2 \phi + f = 0 \text{ in } V, \quad \phi = \phi_0 \text{ on } S_1, \quad \frac{\partial\phi}{\partial n} + g = 0 \text{ on } S_2.   (7.14)
This problem arises, for example, in (1) steady-state heat conduction, where the temperature and heat flux are specified on different parts of the boundary, (2) potential fluid flow, where the velocity potential and velocity are specified on different parts of the boundary, and (3) torsion in elasticity. Recall that, in index notation, \nabla^2\phi = \phi_{,ii}.
We wish to find a functional, I(\phi) say, whose first variation \delta I vanishes for \phi satisfying the above boundary value problem. From Eq. 7.14a,
0 = \int_V (\phi_{,ii} + f)\,\delta\phi\,dV   (7.15)
  = \int_V \left[ (\phi_{,i}\,\delta\phi)_{,i} - \phi_{,i}\,\delta\phi_{,i} \right] dV + \int_V \delta(f\phi)\,dV   (7.16)
  = \int_S \phi_{,i} n_i\,\delta\phi\,dS - \int_V \frac{1}{2}\,\delta(\phi_{,i}\phi_{,i})\,dV + \delta\int_V f\phi\,dV   (7.17)
  = \int_S (\nabla\phi \cdot n)\,\delta\phi\,dS - \delta\int_V \frac{1}{2}\phi_{,i}\phi_{,i}\,dV + \delta\int_V f\phi\,dV,   (7.18)
where \nabla\phi \cdot n = \partial\phi/\partial n = -g on S_2, and, since \phi is specified on S_1, \delta\phi = 0 on S_1. Then,
0 = -\int_{S_2} g\,\delta\phi\,dS - \delta\int_V \frac{1}{2}\phi_{,i}\phi_{,i}\,dV + \delta\int_V f\phi\,dV   (7.19)
  = -\delta\int_{S_2} g\phi\,dS - \delta\int_V \frac{1}{2}\phi_{,i}\phi_{,i}\,dV + \delta\int_V f\phi\,dV   (7.20)
  = -\delta\left[ \int_V \left( \frac{1}{2}\phi_{,i}\phi_{,i} - f\phi \right) dV + \int_{S_2} g\phi\,dS \right] = -\delta I(\phi).   (7.21)
Thus, the functional for this boundary value problem is
I(\phi) = \int_V \left( \frac{1}{2}\phi_{,i}\phi_{,i} - f\phi \right) dV + \int_{S_2} g\phi\,dS   (7.22)
or, in vector notation,
I(\phi) = \int_V \left( \frac{1}{2}\nabla\phi \cdot \nabla\phi - f\phi \right) dV + \int_{S_2} g\phi\,dS   (7.23)
or, in expanded form,
I(\phi) = \int_V \left\{ \frac{1}{2}\left[ \left( \frac{\partial\phi}{\partial x} \right)^2 + \left( \frac{\partial\phi}{\partial y} \right)^2 + \left( \frac{\partial\phi}{\partial z} \right)^2 \right] - f\phi \right\} dV + \int_{S_2} g\phi\,dS.   (7.24)
If we were given the functional, Eq. 7.22, we could determine which partial differential equation corresponds to it by computing and setting to zero the first variation of the functional. From Eq. 7.22,
\delta I = \int_V \left( \frac{1}{2}\delta\phi_{,i}\,\phi_{,i} + \frac{1}{2}\phi_{,i}\,\delta\phi_{,i} - f\,\delta\phi \right) dV + \int_{S_2} g\,\delta\phi\,dS   (7.25)
  = \int_V (\phi_{,i}\,\delta\phi_{,i} - f\,\delta\phi)\,dV + \int_{S_2} g\,\delta\phi\,dS   (7.26)
  = \int_V \left[ (\phi_{,i}\,\delta\phi)_{,i} - \phi_{,ii}\,\delta\phi \right] dV - \int_V f\,\delta\phi\,dV + \int_{S_2} g\,\delta\phi\,dS   (7.27)
  = \int_S \phi_{,i} n_i\,\delta\phi\,dS - \int_V (\phi_{,ii} + f)\,\delta\phi\,dV + \int_{S_2} g\,\delta\phi\,dS,   (7.28)
where \delta\phi = 0 on S_1, and \phi_{,i} n_i = \nabla\phi \cdot n = \partial\phi/\partial n. Hence,
\delta I = \int_{S_2} \left( \frac{\partial\phi}{\partial n} + g \right) \delta\phi\,dS - \int_V (\phi_{,ii} + f)\,\delta\phi\,dV.   (7.29)
Stationary I (\delta I = 0) for arbitrary admissible \delta\phi thus yields the original partial differential equation and Neumann boundary condition, Eq. 7.14.
Figure 56: Two-Dimensional Finite Element Mesh.
Figure 57: Triangular Finite Element.
7.3 Shape Functions
Consider a two-dimensional field problem for which we seek the function \phi(x, y) satisfying some (unspecified) partial differential equation in a domain. Let us subdivide the domain into triangular finite elements, as shown in Fig. 56. A typical element, shown in Fig. 57, has three degrees of freedom (DOF): \phi_1, \phi_2, and \phi_3. The three DOF are the values of the fundamental unknown \phi at the three grid points. (The number of DOF for an element represents the number of pieces of data needed to determine uniquely the solution for the element.)
For numerical solution, we approximate \phi(x, y) using a polynomial in two variables having the same number of terms as the number of DOF. Thus, we assume, for each element,
\phi(x, y) = a_1 + a_2 x + a_3 y,   (7.30)
where a_1, a_2, and a_3 are unknown coefficients. Eq. 7.30 is a complete linear polynomial in two variables. We can evaluate this equation at the three grid points to obtain
\begin{Bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \end{Bmatrix} = \begin{bmatrix} 1 & x_1 & y_1 \\ 1 & x_2 & y_2 \\ 1 & x_3 & y_3 \end{bmatrix} \begin{Bmatrix} a_1 \\ a_2 \\ a_3 \end{Bmatrix}.   (7.31)
This matrix equation can be inverted to yield
\begin{Bmatrix} a_1 \\ a_2 \\ a_3 \end{Bmatrix} = \frac{1}{2A} \begin{bmatrix} x_2 y_3 - x_3 y_2 & x_3 y_1 - x_1 y_3 & x_1 y_2 - x_2 y_1 \\ y_2 - y_3 & y_3 - y_1 & y_1 - y_2 \\ x_3 - x_2 & x_1 - x_3 & x_2 - x_1 \end{bmatrix} \begin{Bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \end{Bmatrix},   (7.32)
where
2A = \begin{vmatrix} 1 & x_1 & y_1 \\ 1 & x_2 & y_2 \\ 1 & x_3 & y_3 \end{vmatrix} = x_2 y_3 - x_3 y_2 - x_1 y_3 + x_3 y_1 + x_1 y_2 - x_2 y_1.   (7.33)
Note that the area of the triangle is A. The determinant 2A is positive or negative depending on whether the grid point ordering in the triangle is counterclockwise or clockwise, respectively. Since Eq. 7.31 is invertible, we can interpret the element DOF as either the grid point values \phi_i, which have physical meaning, or the nonphysical polynomial coefficients a_i.
From Eq. 7.30, the scalar unknown \phi can then be written in the matrix form
\phi(x, y) = \begin{bmatrix} 1 & x & y \end{bmatrix} \begin{Bmatrix} a_1 \\ a_2 \\ a_3 \end{Bmatrix}   (7.34)
  = \frac{1}{2A} \begin{bmatrix} 1 & x & y \end{bmatrix} \begin{bmatrix} x_2 y_3 - x_3 y_2 & x_3 y_1 - x_1 y_3 & x_1 y_2 - x_2 y_1 \\ y_2 - y_3 & y_3 - y_1 & y_1 - y_2 \\ x_3 - x_2 & x_1 - x_3 & x_2 - x_1 \end{bmatrix} \begin{Bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \end{Bmatrix}   (7.35)
  = \begin{bmatrix} N_1(x, y) & N_2(x, y) & N_3(x, y) \end{bmatrix} \begin{Bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \end{Bmatrix}   (7.36)
or
\phi(x, y) = N_1(x, y)\phi_1 + N_2(x, y)\phi_2 + N_3(x, y)\phi_3 = N_i \phi_i,   (7.37)
where
N_1(x, y) = [(x_2 y_3 - x_3 y_2) + (y_2 - y_3)x + (x_3 - x_2)y]/(2A)
N_2(x, y) = [(x_3 y_1 - x_1 y_3) + (y_3 - y_1)x + (x_1 - x_3)y]/(2A)
N_3(x, y) = [(x_1 y_2 - x_2 y_1) + (y_1 - y_2)x + (x_2 - x_1)y]/(2A).   (7.38)
Notice that all the subscripts in this equation form a cyclic permutation of the numbers 1, 2, 3. That is, knowing N_1, we can write down N_2 and N_3 by a cyclic permutation of the subscripts. Alternatively, if we let i, j, and k denote the three grid points of the triangle, the above equation can be written in the compact form
N_i(x, y) = \frac{1}{2A}\left[ (x_j y_k - x_k y_j) + (y_j - y_k)x + (x_k - x_j)y \right],   (7.39)
where (i, j, k) can be selected to be any cyclic permutation of (1, 2, 3) such as (1, 2, 3), (2, 3, 1), or (3, 1, 2).
In general, the functions N_i are called shape functions or interpolation functions and are used extensively in finite element theory. Eq. 7.37 implies that N_1 can be thought of as the shape function associated with Point 1, since N_1 is the form (or "shape") that \phi would take if \phi_1 = 1 and \phi_2 = \phi_3 = 0. In general, N_i is the shape function for Point i.
From Eq. 7.37, if \phi_1 \neq 0, and \phi_2 = \phi_3 = 0,
\phi(x, y) = N_1(x, y)\phi_1.   (7.40)
In particular, at Point 1,
\phi_1 = \phi(x_1, y_1) = N_1(x_1, y_1)\phi_1   (7.41)
or
N_1(x_1, y_1) = 1.   (7.42)
Figure 58: Axial Member (Pin-Jointed Truss Element).
Also, at Point 2,
0 = \phi_2 = \phi(x_2, y_2) = N_1(x_2, y_2)\phi_1   (7.43)
or
N_1(x_2, y_2) = 0.   (7.44)
Similarly,
N_1(x_3, y_3) = 0.   (7.45)
That is, a shape function takes the value unity at its own grid point and zero at the other grid points in an element:
N_i(x_j, y_j) = \delta_{ij}.   (7.46)
Eq. 7.38 gives the shape functions for the linear triangle. An interesting observation is that, if the shape functions are evaluated at the triangle centroid, N_1 = N_2 = N_3 = 1/3, since each grid point has equal influence on the centroid. To prove this result, substitute the coordinates of the centroid,
\bar{x} = (x_1 + x_2 + x_3)/3, \quad \bar{y} = (y_1 + y_2 + y_3)/3,   (7.47)
into N_1 in Eq. 7.38, and use Eq. 7.33:
N_1(\bar{x}, \bar{y}) = [(x_2 y_3 - x_3 y_2) + (y_2 - y_3)\bar{x} + (x_3 - x_2)\bar{y}]/(2A) = \frac{1}{3}.   (7.48)
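These shape-function properties are easy to verify numerically for an arbitrary triangle; a sketch (not from the text; the vertex coordinates are arbitrary test values):

```python
xs = [0.2, 1.5, 0.4]          # arbitrary triangle vertex coordinates
ys = [0.1, 0.3, 1.2]

# twice the signed area, Eq. 7.33
two_A = (xs[1] * ys[2] - xs[2] * ys[1] - xs[0] * ys[2]
         + xs[2] * ys[0] + xs[0] * ys[1] - xs[1] * ys[0])

def N(i, x, y):
    """Shape function N_i of Eq. 7.39, with (j, k) the cyclic successors of i."""
    j, k = (i + 1) % 3, (i + 2) % 3
    return ((xs[j] * ys[k] - xs[k] * ys[j])
            + (ys[j] - ys[k]) * x + (xs[k] - xs[j]) * y) / two_A

# centroid coordinates, Eq. 7.47
xc = sum(xs) / 3.0
yc = sum(ys) / 3.0
```

The checks below confirm the Kronecker-delta property (Eq. 7.46), the centroid value 1/3 (Eq. 7.48), and the partition of unity N_1 + N_2 + N_3 = 1 at an arbitrary point.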
The preceding discussion assumed an element having three grid points. In general, for an element having r points,
\phi(x, y) = N_1(x, y)\phi_1 + N_2(x, y)\phi_2 + \cdots + N_r(x, y)\phi_r = N_i \phi_i.   (7.49)
This is a convenient way to represent the ﬁnite element approximation within an element,
since, once the grid point values are known, φ can be evaluated anywhere in the element.
We note that, since the shape functions for the 3-point triangle are linear in x and y, the gradients in the x or y directions are constant. Thus, from Eq. 7.37,
\frac{\partial\phi}{\partial x} = \frac{\partial N_1}{\partial x}\phi_1 + \frac{\partial N_2}{\partial x}\phi_2 + \frac{\partial N_3}{\partial x}\phi_3 = [(y_2 - y_3)\phi_1 + (y_3 - y_1)\phi_2 + (y_1 - y_2)\phi_3]/(2A)   (7.50)
\frac{\partial\phi}{\partial y} = \frac{\partial N_1}{\partial y}\phi_1 + \frac{\partial N_2}{\partial y}\phi_2 + \frac{\partial N_3}{\partial y}\phi_3 = [(x_3 - x_2)\phi_1 + (x_1 - x_3)\phi_2 + (x_2 - x_1)\phi_3]/(2A).   (7.51)
A constant gradient within any element implies that many small elements would have to be
used wherever φ is changing rapidly.
Example. The displacement u(x) of an axial structural member (a pin-jointed truss member) (Fig. 58) is given by
u(x) = N_1(x)u_1 + N_2(x)u_2,   (7.52)
where u_1 and u_2 are the axial displacements at the two end points. For a linear variation of displacement along the length, it follows that N_i must be a linear function which is unity at Point i and zero at the other end. Thus,
N_1(x) = 1 - \frac{x}{L}, \quad N_2(x) = \frac{x}{L}   (7.53)
or
u(x) = \left( 1 - \frac{x}{L} \right) u_1 + \left( \frac{x}{L} \right) u_2.   (7.54)
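A minimal sketch of Eq. 7.54 (not from the text; L and the end displacements are arbitrary test values):

```python
L = 2.0
u1, u2 = 0.5, 1.5           # end displacements (arbitrary test values)

def u(x):
    """Linear interpolation along the rod, Eq. 7.54."""
    return (1.0 - x / L) * u1 + (x / L) * u2
```

The interpolant matches the nodal values exactly at the ends and varies linearly between them.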
7.4 Variational Approach
Consider Poisson's equation in a two-dimensional domain subject to both Dirichlet and Neumann boundary conditions. We consider a slight generalization of the problem of Eq. 7.14:
\frac{\partial^2\phi}{\partial x^2} + \frac{\partial^2\phi}{\partial y^2} + f = 0 \text{ in } A, \quad \phi = \phi_0 \text{ on } S_1, \quad \frac{\partial\phi}{\partial n} + g + h\phi = 0 \text{ on } S_2,   (7.55)
where S_1 and S_2 are curves in 2D, and f, g, and h may depend on position. The difference between this problem and that of Eq. 7.14 is that the h term has been added to the boundary condition on S_2, where the gradient \partial\phi/\partial n is specified. The h term could arise in a variety of physical situations, including heat transfer due to convection (where the heat flux is proportional to temperature) and free surface flow problems (where the free surface and radiation boundary conditions are both of this type). Assume that the domain has been subdivided into a mesh of triangular finite elements similar to that shown in Fig. 56.
The functional which must be minimized for this boundary value problem is similar to that of Eq. 7.24 except that an additional term must be added to account for the h term in the boundary condition on S_2. It can be shown that the functional which must be minimized for this problem is
I(\phi) = \int_A \left\{ \frac{1}{2}\left[ \left( \frac{\partial\phi}{\partial x} \right)^2 + \left( \frac{\partial\phi}{\partial y} \right)^2 \right] - f\phi \right\} dA + \int_{S_2} \left( g\phi + \frac{1}{2}h\phi^2 \right) dS,   (7.56)
where A is the domain.
With a finite element discretization, we do not define a single smooth function \phi over the entire domain, but instead define \phi over individual elements. Thus, since I is an integral, it can be evaluated by summing over the elements:
I = I_1 + I_2 + I_3 + \cdots = \sum_{e=1}^{E} I_e,   (7.57)
where E is the number of elements. The variation of I is also computed by summing over the elements:
\delta I = \sum_{e=1}^{E} \delta I_e,   (7.58)
Figure 59: Neumann Boundary Condition at Internal Boundary.
which must vanish for I to be a minimum.
We thus can consider a typical element. For one element, in index notation,
I_e = \int_A \left( \frac{1}{2}\phi_{,k}\phi_{,k} - f\phi \right) dA + \int_{S_2} \left( g\phi + \frac{1}{2}h\phi^2 \right) dS.   (7.59)
The last two terms in this equation are, from Eq. 7.55c, integrals of the form
\int_{S_2} \frac{\partial\phi}{\partial n}\,\phi\,dS.
As can be seen from Fig. 59, for two elements which share a common edge, the unknown \phi is continuous along that edge, and the normal derivative \partial\phi/\partial n is of equal magnitude and opposite sign, so that the individual contributions cancel each other out. Thus, the last two terms in the functional make a nonzero contribution to the functional only if an element abuts S_2 (i.e., if one edge of an element coincides with S_2).
The degrees of freedom which define the function \phi over the entire domain are the nodal (grid point) values \phi_i, since, given all the \phi_i, \phi is known everywhere using the element shape functions. Therefore, to minimize I, we differentiate with respect to each \phi_i, and set the result to zero:
\frac{\partial I_e}{\partial \phi_i} = \int_A \left[ \frac{1}{2}\frac{\partial}{\partial \phi_i}(\phi_{,k})\,\phi_{,k} + \frac{1}{2}\phi_{,k}\,\frac{\partial}{\partial \phi_i}(\phi_{,k}) - f\,\frac{\partial\phi}{\partial \phi_i} \right] dA + \int_{S_2} \left( g\,\frac{\partial\phi}{\partial \phi_i} + h\phi\,\frac{\partial\phi}{\partial \phi_i} \right) dS   (7.60)
  = \int_A \left[ \phi_{,k}\,\frac{\partial}{\partial \phi_i}(\phi_{,k}) - f\,\frac{\partial\phi}{\partial \phi_i} \right] dA + \int_{S_2} \left( g\,\frac{\partial\phi}{\partial \phi_i} + h\phi\,\frac{\partial\phi}{\partial \phi_i} \right) dS,   (7.61)
where \phi = N_j \phi_j implies
\phi_{,k} = (N_j \phi_j)_{,k} = N_{j,k}\,\phi_j,   (7.62)
\frac{\partial}{\partial \phi_i}(\phi_{,k}) = \frac{\partial}{\partial \phi_i}(N_{j,k}\,\phi_j) = N_{j,k}\,\frac{\partial \phi_j}{\partial \phi_i} = N_{j,k}\,\delta_{ij} = N_{i,k},   (7.63)
and
\frac{\partial\phi}{\partial \phi_i} = \frac{\partial}{\partial \phi_i}(N_j \phi_j) = N_j\,\frac{\partial \phi_j}{\partial \phi_i} = N_j\,\delta_{ij} = N_i.   (7.64)
We now substitute these last three equations into Eq. 7.61 to obtain
\frac{\partial I_e}{\partial \phi_i} = \int_A (N_{j,k}\,\phi_j\,N_{i,k} - f N_i)\,dA + \int_{S_2} (g N_i + h N_j \phi_j N_i)\,dS   (7.65)
  = \left( \int_A N_{i,k} N_{j,k}\,dA \right)\phi_j - \left( \int_A f N_i\,dA - \int_{S_2} g N_i\,dS \right) + \left( \int_{S_2} h N_i N_j\,dS \right)\phi_j   (7.66)
  = K_{ij}\phi_j - F_i + H_{ij}\phi_j,   (7.67)
Figure 60: Two Adjacent Finite Elements.
where, for each element,
K^e_{ij} = \int_A N_{i,k} N_{j,k}\,dA   (7.68)
F^e_i = \int_A f N_i\,dA - \int_{S_2} g N_i\,dS   (7.69)
H^e_{ij} = \int_{S_2} h N_i N_j\,dS.   (7.70)
The second term of the "load" F is present only if Point i is on S_2, and the matrix entries in H apply only if Points i and j are both on the boundary. The two terms in F correspond to the body force and Neumann boundary condition, respectively.
The superscript e in the preceding equations indicates that we have computed the contribution for one element. We must combine the contributions for all elements. Consider two adjacent elements, as illustrated in Fig. 60. In that figure, the circled numbers are the element labels. From Eq. 7.67, for Element 8, for the special case h = 0,
\frac{\partial I_8}{\partial \phi_{15}} = K^8_{15,15}\phi_{15} + K^8_{15,17}\phi_{17} + K^8_{15,29}\phi_{29} - F^8_{15},   (7.71)
\frac{\partial I_8}{\partial \phi_{17}} = K^8_{17,15}\phi_{15} + K^8_{17,17}\phi_{17} + K^8_{17,29}\phi_{29} - F^8_{17},   (7.72)
\frac{\partial I_8}{\partial \phi_{29}} = K^8_{29,15}\phi_{15} + K^8_{29,17}\phi_{17} + K^8_{29,29}\phi_{29} - F^8_{29}.   (7.73)
Similarly, for Element 9,
\frac{\partial I_9}{\partial \phi_{17}} = K^9_{17,17}\phi_{17} + K^9_{17,29}\phi_{29} + K^9_{17,35}\phi_{35} - F^9_{17},   (7.74)
\frac{\partial I_9}{\partial \phi_{29}} = K^9_{29,17}\phi_{17} + K^9_{29,29}\phi_{29} + K^9_{29,35}\phi_{35} - F^9_{29},   (7.75)
\frac{\partial I_9}{\partial \phi_{35}} = K^9_{35,17}\phi_{17} + K^9_{35,29}\phi_{29} + K^9_{35,35}\phi_{35} - F^9_{35}.   (7.76)
To evaluate
\sum_e \frac{\partial I_e}{\partial \phi_i},
we sum on e for fixed i. For example, for i = 17,
\frac{\partial I_8}{\partial \phi_{17}} + \frac{\partial I_9}{\partial \phi_{17}} = K^8_{17,15}\phi_{15} + (K^8_{17,17} + K^9_{17,17})\phi_{17} + (K^8_{17,29} + K^9_{17,29})\phi_{29} + K^9_{17,35}\phi_{35} - F^8_{17} - F^9_{17}.   (7.77)
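The assembly pattern of Eq. 7.77 can be sketched in a few lines (an illustration, not from the text; the element matrices below are placeholder values, not computed from Eq. 7.68):

```python
# Each element knows its own global grid points (Fig. 60), and entries
# with the same (global i, global j) pair simply add.
elements = {
    8: [15, 17, 29],          # grid points of Element 8
    9: [17, 29, 35],          # grid points of Element 9
}

# Placeholder symmetric 3x3 element matrices K^e (hypothetical numbers;
# real values would come from Eq. 7.68).
Ke = {
    8: [[2.0, -1.0, -1.0], [-1.0, 2.0, -1.0], [-1.0, -1.0, 2.0]],
    9: [[4.0, -2.0, -2.0], [-2.0, 4.0, -2.0], [-2.0, -2.0, 4.0]],
}

K = {}                        # sparse global matrix keyed by (i, j)
for e, nodes in elements.items():
    for a, i in enumerate(nodes):
        for b, j in enumerate(nodes):
            K[(i, j)] = K.get((i, j), 0.0) + Ke[e][a][b]
```

The shared grid points 17 and 29 receive contributions from both elements, exactly as in Eq. 7.77; points belonging to only one element receive a single contribution.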
This process is then repeated for all grid points and all elements. To minimize the functional,
the individual sums are set to zero, resulting in a set of linear algebraic equations of the form
Kφ = F (7.78)
or, for the more general case where h ≠ 0,
(K+H)φ = F, (7.79)
where φ is the vector of unknown nodal values of φ, and F is the vector of forcing functions
at the nodes. Because of the historical developments in structural mechanics, K is sometimes
referred to as the stiﬀness matrix. For each element, the contributions to K, F, and H are
K_{ij} = ∫_A N_{i,k} N_{j,k} dA,  (7.80)
F_i = ∫_A f N_i dA − ∫_{S_2} g N_i dS,  (7.81)
H_{ij} = ∫_{S_2} h N_i N_j dS.  (7.82)
The sum on k in the ﬁrst integral can be expanded to yield
K_{ij} = ∫_A (∂N_i/∂x ∂N_j/∂x + ∂N_i/∂y ∂N_j/∂y) dA.  (7.83)
Note that, in three dimensions, Eq. 7.80 still applies, except that the sum extends over the
range 1–3, and the integration is over the element volume.
Note also in Eq. 7.81 that, if g = 0, there is no contribution to the right-hand side vector F. Thus, to implement the Neumann boundary condition ∂φ/∂n = 0 at a boundary, the boundary is left free. The zero gradient boundary condition is the natural boundary condition for this formulation, since a zero gradient results by default if the boundary is left free. The Dirichlet boundary condition φ = φ_0 must be explicitly imposed and is referred to as an essential boundary condition.
7.5 Matrices for Linear Triangle
Consider the linear threenode triangle, for which, from Eq. 7.39, the shape functions are
given by
N_i(x, y) = (1/(2A))[(x_j y_k − x_k y_j) + (y_j − y_k)x + (x_k − x_j)y],  (7.84)
where ijk can be any cyclic permutation of 123. To compute K from Eq. 7.83, we ﬁrst
compute the derivatives
∂N_i/∂x = (y_j − y_k)/(2A) = b_i/(2A),  (7.85)
∂N_i/∂y = (x_k − x_j)/(2A) = c_i/(2A),  (7.86)
where b_i and c_i are defined as
b_i = y_j − y_k,  c_i = x_k − x_j.  (7.87)
By a cyclic permutation of the indices, we obtain
∂N_j/∂x = (y_k − y_i)/(2A) = b_j/(2A),  (7.88)
∂N_j/∂y = (x_i − x_k)/(2A) = c_j/(2A).  (7.89)
For the linear triangle, these derivatives are all constant and hence can be removed from the
integral to yield
K_{ij} = (1/(4A²))(b_i b_j + c_i c_j) A,  (7.90)
where A is the area of the triangular element. Thus, the i, j contribution for one element
is
K_{ij} = (1/(4A))(b_i b_j + c_i c_j),  (7.91)
where i and j each have the range 1–3, since there are three grid points in the element. However, b_i and c_i are computed from Eq. 7.87 using the shorthand notation that ijk is a cyclic permutation of 123; that is, ijk = 123, ijk = 231, or ijk = 312. Thus,
K_{11} = (b_1² + c_1²)/(4A) = [(y_2 − y_3)² + (x_3 − x_2)²]/(4A),  (7.92)
K_{22} = (b_2² + c_2²)/(4A) = [(y_3 − y_1)² + (x_1 − x_3)²]/(4A),  (7.93)
K_{33} = (b_3² + c_3²)/(4A) = [(y_1 − y_2)² + (x_2 − x_1)²]/(4A),  (7.94)
K_{12} = (b_1 b_2 + c_1 c_2)/(4A) = [(y_2 − y_3)(y_3 − y_1) + (x_3 − x_2)(x_1 − x_3)]/(4A),  (7.95)
K_{13} = (b_1 b_3 + c_1 c_3)/(4A) = [(y_2 − y_3)(y_1 − y_2) + (x_3 − x_2)(x_2 − x_1)]/(4A),  (7.96)
K_{23} = (b_2 b_3 + c_2 c_3)/(4A) = [(y_3 − y_1)(y_1 − y_2) + (x_1 − x_3)(x_2 − x_1)]/(4A).  (7.97)
Note that K depends only on diﬀerences in grid point coordinates. Thus, two elements
that are geometrically congruent and diﬀer only by a translation will have the same element
matrix.
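Eq. 7.91 and the cyclic-permutation rule of Eq. 7.87 are straightforward to implement and check numerically. The following sketch (in Python with numpy; the function name and test coordinates are illustrative, not from the text) builds the 3×3 element matrix and exhibits the translation invariance just noted:

```python
import numpy as np

def triangle_stiffness(xy):
    """Element stiffness K for a linear 3-node triangle (Eqs. 7.87, 7.91).

    xy is a 3x2 array of grid point coordinates;
    K[i][j] = (b_i b_j + c_i c_j) / (4A).
    """
    x, y = xy[:, 0], xy[:, 1]
    # b_i = y_j - y_k, c_i = x_k - x_j for ijk a cyclic permutation of 123
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])
    A = 0.5 * abs(b[0] * c[1] - b[1] * c[0])  # triangle area
    return (np.outer(b, b) + np.outer(c, c)) / (4.0 * A)

K = triangle_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
# K is symmetric, and each row sums to zero (a constant phi has zero
# gradient), so the unconstrained element matrix is singular.
```

Because K depends only on coordinate differences, translating the triangle leaves K unchanged, which is a convenient check on any implementation.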
To compute the right-hand side contributions as given in Eq. 7.81, consider first the contribution of the source term f. From Eq. 7.81, we calculate F_1, which is typical, as
F_1 = ∫_A f N_1 dA = ∫_A (f/(2A))[(x_2 y_3 − x_3 y_2) + (y_2 − y_3)x + (x_3 − x_2)y] dA.  (7.98)
Figure 61: Triangular Mesh at Boundary. (An element edge of length L, with grid points 1 and 2, lies on the boundary.)
We now consider the special case f = f̂ (a constant), and note that
∫_A dA = A,  ∫_A x dA = x̄A,  ∫_A y dA = ȳA,  (7.99)
where (x̄, ȳ) is the centroid of the triangle given by Eq. 7.47. Thus,
F_1 = (f̂/(2A))[(x_2 y_3 − x_3 y_2) + (y_2 − y_3)x̄ + (x_3 − x_2)ȳ] A.  (7.100)
From Eq. 7.48, the expression in brackets is given by 2A/3. Hence,
F_1 = (1/3) f̂A,  (7.101)
and similarly
F_1 = F_2 = F_3 = (1/3) f̂A.  (7.102)
That is, for a uniform f, 1/3 of the total element “load” is applied to each grid point. For
nonuniform f, the integral could be computed using natural (or area) coordinates for the
triangle [7].
It is also of interest to calculate, for the linear triangle, the right-hand side contributions for the second term of Eq. 7.81 for the uniform special case g = ĝ, where ĝ is a constant. For the second term of Eq. 7.81 to contribute, Point i must be on an element edge which lies on the boundary S_2. Since the integration involving g is on the boundary, the only shape functions needed are those which describe the interpolation of φ on the boundary. Thus, since the triangular shape functions are linear, the boundary shape functions are
N_1 = 1 − x/L,  N_2 = x/L,  (7.103)
where the subscripts 1 and 2 refer to the two element edge grid points on the boundary S_2 (Fig. 61), x is a local coordinate along the element edge, and L is the length of the element edge. Thus, for Point i on the boundary,
F_i = −∫_{S_2} g N_i dS.  (7.104)
Since the boundary shape functions are given by Eq. 7.103,
F_1 = −ĝ ∫_0^L (1 − x/L) dx = −ĝL/2,  (7.105)
F_2 = −ĝ ∫_0^L (x/L) dx = −ĝL/2.  (7.106)
Thus, for the two points defining a triangle edge on the boundary S_2,
F_1 = F_2 = −ĝL/2.  (7.107)
That is, for a uniform g, 1/2 of the total element “load” is applied to each grid point.
To calculate H using Eq. 7.82, we ﬁrst note that Points i, j must both be on the boundary
for this matrix to contribute. Consider some triangular elements adjacent to a boundary,
as shown in Fig. 61. Since the integration in Eq. 7.82 is on the boundary, the only shape
functions needed are those which describe the interpolation of φ on the boundary. Thus,
using the boundary shape functions of Eq. 7.103 in Eq. 7.82, for constant h = ĥ,
H_{11} = ĥ ∫_0^L (1 − x/L)² dx = ĥL/3,  (7.108)
H_{22} = ĥ ∫_0^L (x/L)² dx = ĥL/3,  (7.109)
H_{12} = H_{21} = ĥ ∫_0^L (1 − x/L)(x/L) dx = ĥL/6.  (7.110)
Hence, for an edge of a linear 3-node triangle,
H = (ĥL/6) [ 2  1 ]
            [ 1  2 ].  (7.111)
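The edge integrals in Eqs. 7.108–7.110 can be verified numerically: a two-point Gauss rule is exact for the quadratic integrands N_i N_j. A sketch, with ĥ and L chosen arbitrarily for illustration:

```python
import numpy as np

h_hat, L = 2.5, 0.8  # arbitrary constant h and edge length (illustrative)

# Boundary shape functions on the edge, Eq. 7.103
N = lambda x: np.array([1.0 - x / L, x / L])

# Two-point Gauss rule on [0, L]: points (L/2)(1 +/- 1/sqrt(3)), weights L/2
pts = 0.5 * L * (1.0 + np.array([-1.0, 1.0]) / np.sqrt(3.0))
w = 0.5 * L

# H_ij = h_hat * integral of N_i N_j over the edge
H = h_hat * w * sum(np.outer(N(x), N(x)) for x in pts)

H_exact = h_hat * L / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # Eq. 7.111
```

Since the integrands are quadratic and the two-point rule is exact through cubics, H matches Eq. 7.111 to machine precision.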
7.6 Interpretation of Functional
Now that the variational problem has been solved (i.e., the functional I has been minimized), we can evaluate I. We recall from Eq. 7.59 (with h = 0) that, for the two-dimensional Poisson's equation,
I(φ) = ∫_A [½ φ_{,k} φ_{,k} − fφ] dA + ∫_{S_2} gφ dS,  (7.112)
where
φ = N_i φ_i.  (7.113)
Since φ_i is independent of position, it follows that φ_{,k} = N_{i,k} φ_i and
I(φ) = ∫_A [½ N_{i,k} φ_i N_{j,k} φ_j − f N_i φ_i] dA + ∫_{S_2} g N_i φ_i dS  (7.114)
= ½ φ_i [∫_A N_{i,k} N_{j,k} dA] φ_j − [∫_A f N_i dA − ∫_{S_2} g N_i dS] φ_i  (7.115)
= ½ φ_i K_{ij} φ_j − F_i φ_i  (index notation)  (7.116)
= ½ φᵀKφ − φᵀF  (matrix notation)  (7.117)
= ½ φ · Kφ − φ · F  (vector notation),  (7.118)
where the last result has been written in index, matrix, and vector notations. The ﬁrst term
for I is a quadratic form.
In solid mechanics, I corresponds to the total potential energy. The ﬁrst term is the strain
energy, and the second term is the potential of the applied loads. Since strain energy is zero
only for zero strain (corresponding to rigid body motion), it follows that the stiﬀness matrix
K is positive definite. (By definition, a matrix K is positive definite if φᵀKφ > 0 for all φ ≠ 0.) Three consequences of positive definiteness are
1. All eigenvalues of K are positive.
2. The matrix is nonsingular.
3. Gaussian elimination can be performed without pivoting.
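These consequences can be illustrated numerically. The unconstrained element matrix of the preceding section is only positive semidefinite (a constant φ gives a zero gradient), but fixing one nodal value, as an essential boundary condition would, leaves a positive definite submatrix on which Cholesky factorization succeeds. A sketch using numpy (the choice of triangle is illustrative):

```python
import numpy as np

# Element matrix for the triangle with grid points (0,0), (1,0), (0,1),
# evaluated from Eqs. 7.92-7.97.
K = np.array([[ 1.0, -0.5, -0.5],
              [-0.5,  0.5,  0.0],
              [-0.5,  0.0,  0.5]])

# Unconstrained: the smallest eigenvalue is zero (constant phi), so K is
# positive semidefinite and singular.
assert abs(np.linalg.eigvalsh(K)[0]) < 1e-12

# Impose phi at node 1 by deleting its row and column; the remaining
# matrix has all eigenvalues positive, is nonsingular, and its Cholesky
# factorization succeeds (no pivoting needed).
Kc = K[1:, 1:]
assert np.all(np.linalg.eigvalsh(Kc) > 0)
chol = np.linalg.cholesky(Kc)  # raises LinAlgError only if not pos. def.
```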
7.7 Stiﬀness in Elasticity in Terms of Shape Functions
In §4.10 (p. 45), the Principle of Virtual Work was used to obtain the element stiﬀness matrix
for an elastic ﬁnite element as (Eq. 4.67)
K = ∫_V CᵀDC dV,  (7.119)
where D is the symmetric matrix of material constants relating stress and strain, and C is the matrix converting grid point displacements to strain:
ε = Cu.  (7.120)
For the constant strain triangle (CST) in 2D, for example, the fundamental unknowns u
and v (the x and y components of displacement) can both be expressed in terms of the three
shape functions deﬁned in Eq. 7.38:
u = N_i u_i,  v = N_i v_i,  (7.121)
where the summation convention is used. In general, in 2D, the strains can be expressed as
ε = {ε_xx, ε_yy, γ_xy}ᵀ = {u_{,x}, v_{,y}, u_{,y} + v_{,x}}ᵀ = {N_{i,x} u_i, N_{i,y} v_i, N_{i,y} u_i + N_{i,x} v_i}ᵀ

    [ N_{1,x}   0        N_{2,x}   0        N_{3,x}   0      ]
  = [ 0        N_{1,y}   0        N_{2,y}   0        N_{3,y} ] {u_1, v_1, u_2, v_2, u_3, v_3}ᵀ.  (7.122)
    [ N_{1,y}  N_{1,x}   N_{2,y}  N_{2,x}   N_{3,y}  N_{3,x} ]
Figure 62: Discontinuous Function. (A function φ(x) with a jump discontinuity on the interval a ≤ x ≤ b.)
Thus, from Eq. 7.120, C in Eq. 7.119 is a matrix of shape function derivatives:
    [ N_{1,x}   0        N_{2,x}   0        N_{3,x}   0      ]
C = [ 0        N_{1,y}   0        N_{2,y}   0        N_{3,y} ].  (7.123)
    [ N_{1,y}  N_{1,x}   N_{2,y}  N_{2,x}   N_{3,y}  N_{3,x} ]
In general, C would have as many rows as there are independent components of strain (3 in
2D and 6 in 3D) and as many columns as there are DOF in the element.
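As a sketch of how Eqs. 7.119, 7.120, and 7.123 combine for the CST: since C is constant over the element, the volume integral reduces to tA CᵀDC, where t is the thickness. The plane-stress material matrix D below is an assumption made only for this illustration, and the function name and parameter values are hypothetical:

```python
import numpy as np

def cst_stiffness(xy, E=1.0, nu=0.3, t=1.0):
    """6x6 CST stiffness K = t*A * C^T D C (Eq. 7.119 with constant C)."""
    x, y = xy[:, 0], xy[:, 1]
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])  # b_i = y_j - y_k
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])  # c_i = x_k - x_j
    A = 0.5 * (b[0] * c[1] - b[1] * c[0])  # area (positive for CCW nodes)
    # Strain-displacement matrix C, Eq. 7.123, using N_i,x = b_i/(2A),
    # N_i,y = c_i/(2A) from Eqs. 7.85-7.86.
    C = np.zeros((3, 6))
    C[0, 0::2] = b / (2 * A)
    C[1, 1::2] = c / (2 * A)
    C[2, 0::2] = c / (2 * A)
    C[2, 1::2] = b / (2 * A)
    # Plane-stress material matrix (an assumption for this sketch)
    D = E / (1 - nu**2) * np.array([[1, nu, 0],
                                    [nu, 1, 0],
                                    [0, 0, (1 - nu) / 2]])
    return t * A * C.T @ D @ C

K = cst_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
# K is 6x6, symmetric, and singular with three zero eigenvalues
# (two rigid translations and one rigid rotation).
```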
7.8 Element Compatibility
In the variational approach to the ﬁnite element method, an integral was minimized. It was
also assumed that the integral evaluated over some domain was equal to the sum of the
integrals over the elements. We wish to investigate brieﬂy the conditions which must hold
for this assumption to be valid. That is, what condition is necessary for the integral over a
domain to be equal to the sum of the integrals over the elements?
Consider the onedimensional integral
I = ∫_a^b φ(x) dx.  (7.124)
For the integral I to be well-defined, simple jump discontinuities in φ are allowed, as illustrated in Fig. 62. Singularities in φ, on the other hand, will not be allowed, since some
singularities cannot be integrated. Thus, we conclude that, for any functional of interest in
ﬁnite element analysis, the integrand may be discontinuous, but we do not allow singularities
in the integrand.
Now consider the onedimensional integral
I = ∫_a^b (dφ(x)/dx) dx.  (7.125)
Since the integrand dφ(x)/dx may be discontinuous, φ(x) must be continuous, but with kinks
(slope discontinuities). In the integral
I = ∫_a^b (d²φ(x)/dx²) dx,  (7.126)
the integrand φ′′ may have simple discontinuities, in which case φ′ is continuous with kinks, and φ is smooth (i.e., the slope is continuous).
Figure 63: Compatibility at an Element Boundary. (Two adjacent elements share the edge 1–2; P is a point on that edge.)
Thus, we conclude that the smoothness required of φ depends on the highest order derivative of φ appearing in the integrand. If φ′ is in the integrand, φ must be continuous. If φ′′ is in the integrand, φ′ must be continuous. Therefore, in general, we conclude that, at element interfaces, the field variable φ and any of its partial derivatives up to one order less than the highest order derivative appearing in I(φ) must be continuous. This requirement is referred to as the compatibility requirement or the conforming requirement.
For example, consider the Poisson equation in 2D,
∂²φ/∂x² + ∂²φ/∂y² + f = 0,  (7.127)
for which the functional to be minimized is
I(φ) = ∫_A { ½[(∂φ/∂x)² + (∂φ/∂y)²] − fφ } dA.  (7.128)
Since the highest derivative in I is first order, we conclude that, for I to be well-defined, φ must be continuous in the finite element approximation.
For the 3-node triangular element already formulated, the shape function is linear, which implies that, in Fig. 63, given φ_1 and φ_2, φ varies linearly between φ_1 and φ_2 along the line 1–2. That is, φ is the same at the midpoint P for both elements; otherwise, I(φ) might be infinite, since there could be a gap or overlap in the model along the line 1–2.
Note also that the Poisson equation is a second order equation, but φ need only be continuous in the finite element approximation. That is, the first derivatives φ_{,x} and φ_{,y} may have simple discontinuities, and the second derivatives φ_{,xx} and φ_{,yy} that appear in the partial differential equation may not even exist at the element interfaces. Thus, one of the strengths of the variational approach is that I(φ) involves derivatives of lower order than in the original PDE.
In elasticity, the functional I(φ) has the physical interpretation of total potential energy, including strain energy. A nonconforming element would result in a discontinuous displacement at the element boundaries (i.e., a gap or overlap), which would correspond to infinite strain energy. However, note that having displacement continuous implies that the displacement gradients (which are proportional to the stresses) are discontinuous at the element boundaries. This property is one of the fundamental characteristics of an approximate numerical solution. If all quantities of interest (e.g., displacements and stresses) were continuous, the solution would be an exact solution rather than an approximate solution. Thus, the approximation inherent in a displacement-based finite element method is that the displacements are continuous, and the stresses (displacement gradients) are discontinuous at element boundaries. In fluid mechanics, a discontinuous φ (which is not allowed) corresponds to a singularity in the velocity field.
Figure 64: A Vector Analogy for Galerkin's Method. (A 3D vector v is approximated by its projection u in the xy-plane; the error R = v − u is normal to the plane.)
7.9 Method of Weighted Residuals (Galerkin’s Method)
Here we discuss an alternative to the use of a variational principle when the functional is
either unknown or does not exist (e.g., nonlinear equations).
We consider the Poisson equation
∇²φ + f = 0.  (7.129)
For an approximate solution φ̃,
∇²φ̃ + f = R ≠ 0,  (7.130)
where R is referred to as the residual or error. The best approximate solution will be one
which in some sense minimizes R at all points of the domain. We note that, if R = 0 in the
domain,
∫_V RW dV = 0,  (7.131)
where W is any function of the spatial coordinates. W is referred to as a weighting function.
With n DOF in the domain, n functions W can be chosen:
∫_V R W_i dV = 0,  i = 1, 2, . . . , n.  (7.132)
This approach is called the method of weighted residuals. Various choices of W_i are possible. When W_i = N_i (the shape functions), the process is called Galerkin's method.
The motivation for using shape functions N_i as the weighting functions is that we want the residual (the error) orthogonal to the shape functions. In the finite element approximation,
residual (the error) orthogonal to the shape functions. In the ﬁnite element approximation,
we are trying to approximate an inﬁnite DOF problem (the PDE) with a ﬁnite number of
DOF (the ﬁnite element model). Consider an analogous problem in vector analysis, where
we want to approximate a vector v in 3D with another vector in 2D. That is, we are
attempting to approximate v with a lesser number of DOF, as shown in Fig. 64. The “best”
2D approximation to the 3D vector v is the projection u in the plane. The error in this
approximation is
R = v −u, (7.133)
which is orthogonal to the xy-plane. That is, the error R is orthogonal to the basis vectors e_x and e_y (the vectors used to approximate v):
R · e_x = 0  and  R · e_y = 0.  (7.134)
In the finite element problem, the approximating functions are the shape functions N_i. The counterpart to the dot product is the integral
∫_V R N_i dV = 0.  (7.135)
That is, the residual R is orthogonal to its approximating functions, the shape functions.
The integral in Eq. 7.135 must hold over the entire domain V or any portion of the
domain, e.g., an element. Thus, for Poisson’s equation, for one element,
0 = ∫_V (∇²φ + f) N_i dV  (7.136)
= ∫_V (φ_{,kk} + f) N_i dV  (7.137)
= ∫_V [(φ_{,k} N_i)_{,k} − φ_{,k} N_{i,k}] dV + ∫_V f N_i dV.  (7.138)
The ﬁrst term is converted to a surface integral using the divergence theorem, Eq. 7.10, to
obtain
0 = ∫_S φ_{,k} N_i n_k dS − ∫_V φ_{,k} N_{i,k} dV + ∫_V f N_i dV,  (7.139)
where
φ_{,k} n_k = ∇φ · n = ∂φ/∂n  (7.140)
and
φ_{,k} = (N_j φ_j)_{,k} = N_{j,k} φ_j.  (7.141)
Hence, for each i,
0 = ∫_S (∂φ/∂n) N_i dS − ∫_V N_{j,k} φ_j N_{i,k} dV + ∫_V f N_i dV  (7.142)
= −[∫_V N_{i,k} N_{j,k} dV] φ_j + [∫_V f N_i dV + ∫_S (∂φ/∂n) N_i dS]  (7.143)
= −K_{ij} φ_j + F_i.  (7.144)
Thus, in matrix notation,
Kφ = F, (7.145)
where
K_{ij} = ∫_V N_{i,k} N_{j,k} dV,  (7.146)
F_i = ∫_V f N_i dV + ∫_S (∂φ/∂n) N_i dS.  (7.147)
From Eq. 7.55, ∂φ/∂n is specified on S_2 and unknown a priori on S_1, where φ = φ_0 is specified. On S_1, ∂φ/∂n is the “reaction” to the specified φ. At points where φ is specified,
the Dirichlet boundary conditions are handled like displacement boundary conditions in
structural problems.
Galerkin’s method thus results in algebraic equations identical to those derived from
a variational principle. However, Galerkin’s method is more general, since sometimes a
variational principle may not exist for a given problem. When a principle does exist, the
two approaches yield the same results. When the variational principle does not exist or is
unknown, Galerkin’s method can still be used to derive a ﬁnite element model.
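A minimal sketch may help fix the ideas: the 1D analogue φ″ + f = 0 on (0, 1) with φ(0) = φ(1) = 0 and f = 1, discretized with piecewise-linear “hat” shape functions, yields exactly a system Kφ = F of the form in Eqs. 7.145–7.147 (this 1D problem and its mesh are illustrative, not part of the 2D development above):

```python
import numpy as np

n = 9                 # interior nodes on a uniform mesh of (0, 1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# K_ij = integral of N_i' N_j' dx for hat functions: tridiagonal (2, -1)/h
K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

# F_i = integral of f N_i dx with f = 1: each hat function has area h
F = np.full(n, h)

phi = np.linalg.solve(K, F)

exact = 0.5 * x * (1.0 - x)   # solves phi'' + 1 = 0, phi(0) = phi(1) = 0
# For this 1D problem with exactly integrated load, the Galerkin nodal
# values coincide with the exact solution at the nodes.
```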
8 Potential Fluid Flow With Finite Elements
In potential ﬂow, the ﬂuid is assumed to be inviscid and incompressible. Since there are no
shearing stresses in the ﬂuid, the ﬂuid slips tangentially along boundaries. This mathematical
model of ﬂuid behavior is useful for some situations.
Deﬁne a velocity potential φ such that velocity v = ∇φ. That is, in 3D,
v_x = ∂φ/∂x,  v_y = ∂φ/∂y,  v_z = ∂φ/∂z.  (8.1)
It can be shown that, within the domain occupied by the ﬂuid,
∇²φ = 0.  (8.2)
Various boundary conditions are of interest. At a ﬁxed boundary, where the normal
velocity vanishes,
v_n = v · n = ∇φ · n = ∂φ/∂n = 0,  (8.3)
where n is the unit outward normal on the boundary. On a boundary where the velocity is
speciﬁed,
∂φ/∂n = v̂_n.  (8.4)
We will see later in the discussion of symmetry that, at a plane of symmetry for the potential
φ,
∂φ/∂n = 0,  (8.5)
where n is the unit normal to the plane. At a plane of antisymmetry, φ = 0.
The boundary value problem for ﬂow around a solid body is illustrated in Fig. 65. In
this example, far away from the body, where the velocity is known,
v_x = ∂φ/∂x = v_∞,  (8.6)
which is speciﬁed.
As is, the problem posed in Fig. 65 is not a wellposed problem, because only conditions
on the derivative ∂φ/∂n are speciﬁed. Thus, for any solution φ, φ + c is also a solution for
any constant c. Therefore, for uniqueness, we must specify φ somewhere in the domain.
Thus, the potential ﬂow boundary value problem is that the velocity potential φ satisﬁes
∇²φ = 0 in V
φ = φ̂ on S_1
∂φ/∂n = v̂_n on S_2,  (8.7)
where v = ∇φ in V.
Figure 65: Potential Flow Around Solid Body. (∇²φ = 0 in the domain; ∂φ/∂n = 0 on the body and on the top and bottom boundaries; ∂φ/∂n = ∂φ/∂x = v_∞ on the right boundary; ∂φ/∂n = −∂φ/∂x = −v_∞ on the left boundary.)
8.1 Finite Element Model
A ﬁnite element model of the potential ﬂow problem results in the equation
Kφ = F, (8.8)
where the contributions to K and F for each element are
K_{ij} = ∫_A N_{i,k} N_{j,k} dA = ∫_A (∂N_i/∂x ∂N_j/∂x + ∂N_i/∂y ∂N_j/∂y) dA,  (8.9)
F_i = ∫_{S_2} v̂_n N_i dS,  (8.10)
where v̂_n is the specified velocity on S_2 in the outward normal direction, and N_i is the shape function for the ith grid point in the element.
Once the velocity potential φ is known, the pressure can be found using the steady-state Bernoulli equation
½v² + gy + p/ρ = c = constant,  (8.11)
where v is the velocity magnitude given by
v² = (∂φ/∂x)² + (∂φ/∂y)²,  (8.12)
gy is the (frequently ignored) body force potential, g is the acceleration due to gravity, y is
the height above some reference plane, p is pressure, and ρ is the ﬂuid density. The constant
c is evaluated using a location where v is known (e.g., v_∞). For example, at infinity, if we ignore gy and pick p = 0 (ambient),
c = ½v_∞²,  (8.13)
Figure 66: Streamlines Around Circular Cylinder.
Figure 67: Symmetry With Respect to y = 0.
and
p/ρ + ½v² = ½v_∞².  (8.14)
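Given the velocity components at a point, Eqs. 8.12 and 8.14 recover the pressure directly; a small sketch (the density and free-stream speed below are arbitrary illustrative values):

```python
# Pressure recovery from the steady-state Bernoulli equation, Eq. 8.14
# (gravity term gy ignored, ambient pressure taken as zero at infinity).
def pressure(vx, vy, rho=1000.0, v_inf=2.0):
    """p = (rho/2) * (v_inf^2 - v^2), with v^2 = vx^2 + vy^2 (Eq. 8.12)."""
    v2 = vx**2 + vy**2
    return 0.5 * rho * (v_inf**2 - v2)

# At a stagnation point (v = 0), p = rho * v_inf^2 / 2:
p_stag = pressure(0.0, 0.0)   # 0.5 * 1000 * 4 = 2000.0
```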
8.2 Application of Symmetry
Consider the 2D potential ﬂow around a circular cylinder, as shown in Fig. 66. The velocity
field is symmetric with respect to y = 0. For example, in Fig. 67, the velocity vectors at P and its image P′ are mirror images of each other. As P and P′ get close to the axis y = 0, the velocities at P and P′ must converge to each other, since P and P′ are the same point in the plane y = 0. Thus,
v_y = ∂φ/∂y = 0 for y = 0.  (8.15)
The y-direction in this case is the normal to the symmetry plane y = 0. Thus, in general, we conclude that, for points in a symmetry plane with normal n,
∂φ/∂n = 0.  (8.16)
Note from Fig. 65 that the specified normal velocities at the two x extremes are of opposite signs. Thus, from Eq. 8.10, the right-hand side “loads” in Eq. 8.8 are equal in magnitude and opposite in sign for the left and right boundaries, and the velocity field is antisymmetric with respect to the plane x = 0, as shown in Fig. 68. That is, the velocity vectors at P and P′ can be transformed into each other by a reflection and a negation of sign. Thus,
(v_x)_P = (v_x)_{P′}.  (8.17)
If we change the direction of flow (i.e., make it right to left), then
(φ_P)_{flow right} = (φ_{P′})_{flow left}.  (8.18)
Figure 68: Antisymmetry With Respect to x = 0.
Figure 69: Boundary Value Problem for Flow Around Circular Cylinder. (∇²φ = 0; φ = 0 on the antisymmetry plane x = 0; ∂φ/∂n = 0 on the symmetry plane y = 0 and on the cylinder of radius a; ∂φ/∂n = v_∞ on the far boundary.)
However, changing the direction of flow also means that F in Eq. 8.8 becomes −F, since the only nonzero contributions to F occur at the left and right boundaries. However, if the sign of F changes, the sign of the solution also changes, i.e.,
(φ_P)_{flow right} = −(φ_P)_{flow left}.  (8.19)
Combining the last two equations yields
(φ_P)_{flow right} = −(φ_{P′})_{flow right}.  (8.20)
If we now let P and P′ converge to the plane x = 0 and become the same point, we obtain
(φ)_{x=0} = −(φ)_{x=0}  (8.21)
or (φ)_{x=0} = 0. Thus, in general, we conclude that, for points in a symmetry plane with normal n for which the solution is antisymmetric, φ = 0. The key to recognizing antisymmetry is to have a symmetric geometry with an antisymmetric “loading” (RHS).
For example, for 2D potential flow around a circular cylinder, Fig. 66, the boundary value problem is shown in Fig. 69. This problem has two planes of geometric symmetry, y = 0 and x = 0, and can be solved using a one-quarter model. Since φ is specified on x = 0, this problem is well-posed. The boundary condition ∂φ/∂n = 0 is the natural boundary condition (the condition that results if the boundary is left free).
Figure 70: The Free Surface Problem. (η = deflection of the free surface from the undisturbed free surface; d = depth; g = acceleration due to gravity.)
8.3 Free Surface Flows
Consider an inviscid, incompressible ﬂuid in an irrotational ﬂow ﬁeld with a free surface, as
shown in Fig. 70. The equations of motion and continuity reduce to
∇²φ = 0,  (8.22)
where φ is the velocity potential, and the velocity is
v = ∇φ.  (8.23)
The pressure p can be determined from the timedependent Bernoulli equation
−p/ρ = ∂φ/∂t + ½v² + gy,  (8.24)
where ρ is the fluid density, g is the acceleration due to gravity, and y is the vertical coordinate.
If we let η denote the deﬂection of the free surface, the vertical velocity on the free surface
is
∂φ/∂y = ∂η/∂t on y = 0.  (8.25)
If we assume small wave height (i.e., η is small compared to the depth d), the velocity v on
the free surface is also small, and we can ignore the velocity term in Eq. 8.24. If we also take
the pressure p = 0 on the free surface, Bernoulli’s equation implies
∂φ/∂t + gη = 0 on y = 0.  (8.26)
This equation can be viewed as an equation for the surface elevation η given φ. We can then
eliminate η from the last two equations by diﬀerentiating Eq. 8.26:
0 = ∂²φ/∂t² + g ∂η/∂t = ∂²φ/∂t² + g ∂φ/∂y.  (8.27)
Hence, on the free surface y = 0,
∂φ/∂y = −(1/g) ∂²φ/∂t².  (8.28)
This equation is referred to as the linearized free surface boundary condition.
Figure 71: The Complex Amplitude. (The complex amplitude A, with magnitude Â and phase angle θ, plotted in the complex plane.)
8.4 Use of Complex Numbers and Phasors in Wave Problems
The wave maker problem considered in the next section will involve a forcing function which is sinusoidal in time (i.e., time-harmonic). It is common in engineering analysis to represent time-harmonic signals using complex numbers, since amplitude and phase information can be included in a single complex number. Such an approach is used with A.C. circuits, steady-state acoustics, and mechanical vibrations.
Consider a sine wave φ(t) of amplitude Â, circular frequency ω, and phase θ:
φ(t) = Â cos(ωt + θ),  (8.29)
where all quantities in this equation are real, and Â can be taken as positive. Using complex notation,
φ(t) = Re[Â e^{i(ωt+θ)}] = Re[(Â e^{iθ}) e^{iωt}],  (8.30)
where i = √−1. If we define the complex amplitude
A = Â e^{iθ},  (8.31)
then
φ(t) = Re[A e^{iωt}],  (8.32)
where the magnitude of the complex amplitude is given by
|A| = |Â e^{iθ}| = |Â| |e^{iθ}| = Â,  (8.33)
which is the actual amplitude, and
arg(A) = θ,  (8.34)
which is the actual phase angle. The complex amplitude A is thus a complex number which embodies both the amplitude and the phase of the sinusoidal signal, as shown in Fig. 71.
The directed vector in the complex plane is called a phasor by electrical engineers.
It is common practice, when dealing with these sinusoidal functions, to drop the “Re” and agree that it is only the real part which is of interest. Thus, we write
φ(t) = A e^{iωt}  (8.35)
with the understanding that it is the real part of this signal which is of interest. In this equation, A is the complex amplitude.
Two sinusoids of the same frequency add just like vectors in geometry. For example,
consider the sum
Â_1 cos(ωt + θ_1) + Â_2 cos(ωt + θ_2).
Figure 72: Phasor Addition. (The phasors A_1 and A_2 and their sum A_1 + A_2 in the complex plane.)
Figure 73: 2D Wave Maker: Time Domain. (A fluid domain of depth d with ∇²φ = 0; a rigid bottom, ∂φ/∂n = 0; the linearized free surface condition, ∂φ/∂n = −(1/g)φ̈; and an oscillating wall with specified ∂φ/∂n = v_n = v_0 cos ωt.)
In terms of complex arithmetic,
A_1 e^{iωt} + A_2 e^{iωt} = (A_1 + A_2) e^{iωt},  (8.36)
where A_1 and A_2 are complex amplitudes given by
A_1 = Â_1 e^{iθ_1},  A_2 = Â_2 e^{iθ_2}.  (8.37)
This addition, referred to as phasor addition, is illustrated in Fig. 72.
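The phasor arithmetic of Eq. 8.36 is easy to verify with Python's built-in complex type (the amplitudes, phases, and frequency below are arbitrary illustrative values):

```python
import cmath
import math

# Two sinusoids of the same frequency, written as complex amplitudes.
A1 = 3.0 * cmath.exp(1j * 0.4)   # 3 cos(wt + 0.4)   -> A_1 = 3 e^{i 0.4}
A2 = 1.5 * cmath.exp(1j * 2.0)   # 1.5 cos(wt + 2.0) -> A_2 = 1.5 e^{i 2.0}
A = A1 + A2                      # phasor sum, Eq. 8.36

amp, phase = abs(A), cmath.phase(A)   # amplitude and phase of the sum

# The phasor sum reproduces the time-domain sum at every instant:
w = 5.0
for t in (0.0, 0.3, 1.1):
    direct = 3.0 * math.cos(w * t + 0.4) + 1.5 * math.cos(w * t + 2.0)
    assert abs(direct - (A * cmath.exp(1j * w * t)).real) < 1e-12
```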
8.5 2D Wave Maker
Consider a semi-infinite body of water with a wall oscillating in simple harmonic motion, as shown in Fig. 73. In the time domain, the problem is
∇²φ = 0
∂φ/∂n = v_0 cos ωt on x = 0
∂φ/∂n = 0 on y = −d
∂φ/∂n = −(1/g)φ̈ on y = 0,  (8.38)
where dots denote differentiation with respect to time, and an additional boundary condition is needed for large x at the location where the model is terminated. For this problem, the excitation frequency ω is specified. The solution of this problem is a function φ(x, y, t).
The forcing function is the oscillating wall. We first write Eq. 8.38b in the form
∂φ/∂n = v_0 cos ωt = Re[v_0 e^{iωt}],  (8.39)
where i = √−1, and v_0 is real. We therefore look for solutions in the form
φ(x, y, t) = φ_0(x, y) e^{iωt},  (8.40)
where φ_0(x, y) is the complex amplitude. Eq. 8.38 then becomes
∇²φ_0 = 0
∂φ_0/∂n = v_0 on x = 0
∂φ_0/∂n = 0 on y = −d
∂φ_0/∂n = (ω²/g)φ_0 on y = 0.  (8.41)
It can be shown that, for large x,
∂φ_0/∂x = −iα φ_0,  (8.42)
where α is the positive solution of
ω²/g = α tanh(αd),  (8.43)
and ω is the fixed excitation frequency. The graphical solution of this equation is shown in Fig. 74.
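In a computation, the positive root α of Eq. 8.43 must be found numerically; since α tanh(αd) is increasing in α, simple bisection suffices. A sketch (the function name and the values of ω and d are illustrative):

```python
import math

def dispersion_alpha(omega, d, g=9.81):
    """Positive root of omega^2/g = alpha * tanh(alpha * d), Eq. 8.43."""
    f = lambda a: a * math.tanh(a * d) - omega**2 / g
    lo, hi = 1e-12, 1.0
    while f(hi) < 0.0:          # expand until the root is bracketed
        hi *= 2.0
    for _ in range(100):        # bisection: f(lo) < 0 <= f(hi)
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = dispersion_alpha(omega=2.0, d=5.0)
# In the deep-water limit (large d), tanh -> 1 and alpha -> omega^2/g.
```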
Thus, for a finite element solution to the 2D wave maker problem, we truncate the domain “sufficiently far” from the wall and impose a boundary condition, Eq. 8.42, referred to as a radiation boundary condition, which accounts approximately for the fact that the fluid extends to infinity.
Figure 74: Graphical Solution of ω²/α = g tanh(αd).
Figure 75: 2D Wave Maker: Frequency Domain.
If the radiation boundary is located at x = W, the boundary value problem in the frequency domain becomes
∇²φ_0 = 0
∂φ_0/∂n = v_0 on x = 0 (oscillating wall)
∂φ_0/∂n = 0 on y = −d (rigid bottom)
∂φ_0/∂n = (ω²/g)φ_0 on y = 0 (linearized free surface condition)
∂φ_0/∂x = −iα φ_0 on x = W (radiation condition),  (8.44)
as summarized in Fig. 75. Note the similarity between this radiation condition and the
nonreﬂecting boundary condition for the Helmholtz equation, Eq. 3.52.
8.6 Linear Triangle Matrices for 2D Wave Maker Problem
The boundary value problem deﬁned in Eq. 8.44 has two boundaries where ∂φ/∂n is speciﬁed
and two boundaries on which ∂φ/∂n is proportional to the unknown φ. Thus, this boundary
value problem is a special case of Eq. 7.55 (p. 73), so that the ﬁnite element formulas
derived in §7.5 are all applicable. Note that the function g appearing in Eq. 7.55 is not the
acceleration due to gravity appearing in the formulation of the free surface ﬂow problem.
The matrix system for the wave maker problem is therefore, from Eq. 7.79,
(K+H)φ = F, (8.45)
where, for each element, K, H, and F are given by Eqs. 7.80–7.82. Thus, from Eq. 7.91,
K_{ij} = (1/(4A))(b_i b_j + c_i c_j),  (8.46)
where i and j each have the range 1–3, and
b_i = y_j − y_k,  c_i = x_k − x_j.  (8.47)
For b_i and c_i, the symbols ijk refer to the three nodes 1, 2, 3 in a cyclic permutation. For example, if j = 1, then k = 2 and i = 3. Thus,
K_{11} = (b_1² + c_1²)/(4A),  (8.48)
K_{22} = (b_2² + c_2²)/(4A),  (8.49)
K_{33} = (b_3² + c_3²)/(4A),  (8.50)
K_{12} = K_{21} = (b_1 b_2 + c_1 c_2)/(4A),  (8.51)
K_{13} = K_{31} = (b_1 b_3 + c_1 c_3)/(4A),  (8.52)
K_{23} = K_{32} = (b_2 b_3 + c_2 c_3)/(4A).  (8.53)
H is calculated using Eq. 7.111, where ĥ = −ω²/g on the free surface, and ĥ = iα on the radiation boundary. Thus, for triangular elements adjacent to the free surface,
H_FS = −(ω²L/(6g)) [ 2  1 ]
                    [ 1  2 ]  (8.54)
for each free surface edge. For elements adjacent to the radiation boundary,
H_RB = (iαL/6) [ 2  1 ]
               [ 1  2 ]  (8.55)
for each radiation boundary edge. Note that, since H is purely imaginary on the radiation boundary, the coefficient matrix K + H in Eq. 8.45 is complex, and the solution φ is complex. The solution of free surface problems thus requires either the use of complex arithmetic or separating the matrix system into real and imaginary parts.
The right-hand side F is calculated using Eq. 7.107, where ĝ = −v_0 on the oscillating wall. Thus, for two points on an element edge on the oscillating wall,
F_1 = F_2 = v_0 L/2.  (8.56)
94
`
`
`
`
k
c
m
¸
u(t)
¸ ¸
¸
f(t)
Figure 76: Single DOF MassSpringDashpot System.
The solution vector φ obtained from Eq. 8.45 is the complex amplitude of the velocity potential. The time-dependent velocity potential is given by
Re[φ e^{iωt}] = Re[(φ_R + iφ_I)(cos ωt + i sin ωt)] = φ_R cos ωt − φ_I sin ωt,  (8.57)
where φ_R and φ_I are the real and imaginary parts of the complex amplitude. It is this function which is displayed in computer animations of the time-dependent response of the velocity potential.
8.7 Mechanical Analogy for the Free Surface Problem
Consider the single DOF spring-mass-dashpot system shown in Fig. 76. The application of Newton's second law of motion (F = ma) to this system yields the differential equation of motion
mü + cu̇ + ku = f(t),  (8.58)
where m is mass, c is the viscous dashpot constant, k is the spring stiffness, u is the displacement from the equilibrium, f is the applied force, and dots denote differentiation with respect to the time t. For a sinusoidal force,
f(t) = f_0 e^{iωt},  (8.59)
where ω is the excitation frequency, and f_0 is the complex amplitude of the force. The displacement solution is also sinusoidal:
u(t) = u_0 e^{iωt},  (8.60)
where u_0 is the complex amplitude of the displacement response. If we substitute the last two equations into the differential equation, we obtain
−ω²m u_0 e^{iωt} + iωc u_0 e^{iωt} + k u_0 e^{iωt} = f_0 e^{iωt}  (8.61)
or
(−ω²m + iωc + k) u_0 = f_0.  (8.62)
We make two observations from this last equation:
1. The inertia force is proportional to ω² and 180° out of phase with respect to the elastic force.
2. The viscous damping force is proportional to ω and leads the elastic force by 90°.
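Eq. 8.62 can be solved directly for the complex displacement amplitude u_0, whose magnitude and argument give the response amplitude and phase lag (the function name and parameter values below are arbitrary illustrative choices):

```python
import cmath

def harmonic_response(m, c, k, f0, omega):
    """Complex amplitude u0 from (-w^2 m + i w c + k) u0 = f0, Eq. 8.62."""
    return f0 / (-omega**2 * m + 1j * omega * c + k)

m, c, k, f0 = 1.0, 0.2, 4.0, 1.0
u0 = harmonic_response(m, c, k, f0, omega=1.0)
amp = abs(u0)                 # response amplitude
lag = -cmath.phase(u0)        # phase lag of u behind f

# At resonance (omega = sqrt(k/m) = 2), the stiffness and inertia terms
# cancel, and the response is limited only by damping:
# u0 = f0 / (i * omega * c).
u_res = harmonic_response(m, c, k, f0, omega=2.0)
```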
Thus, in the free surface problem, we could interpret the free surface matrix H_FS as an inertial effect (in a mechanical analogy) with a “surface mass matrix” M given by
M = −(1/ω²) H_FS = (L/(6g)) [ 2  1 ]
                            [ 1  2 ],  (8.63)
where the diagonal “masses” are positive. Similarly, we could interpret the radiation bound
ary matrix H
RB
as a “damping” eﬀect with the “boundary damping matrix” B given by
B = (1/(iω)) H_RB = (αL/(6ω)) [2 1; 1 2],  (8.64)
where the diagonal dampers are positive. This “damping” matrix is frequency-dependent.

The free surface problem is a degenerate equivalent to the mechanical problem, since the
mass M occurs only on the free surface rather than at every point in the domain. In fact,
the ideal fluid, for which ∇²φ = 0, behaves like a degenerate mechanical system, because
the ideal fluid possesses the counterpart to the elastic forces but not the inertial forces. This
degeneracy is a consequence of the incompressibility of the ideal fluid. A compressible fluid
(such as occurs in acoustics) has the analogous mass effects everywhere.
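The analogy can be made concrete for the two-node linear element matrices quoted in Eqs. 8.63 and 8.64. The sketch below builds the H_FS and H_RB matrices those equations imply, forms M and B, and verifies that both come out real and symmetric with positive diagonals; the values of L, g, α, and ω are illustrative assumptions, not values from the text.

```python
import numpy as np

# Illustrative values (assumptions, not from the text)
L, g = 0.5, 9.81          # element length and gravitational acceleration
alpha, omega = 1.3, 4.0   # radiation constant and excitation frequency

base = np.array([[2.0, 1.0], [1.0, 2.0]])  # consistent 2-node element matrix

# Free surface and radiation boundary matrices implied by Eqs. 8.63-8.64
H_FS = -omega**2 * (L / (6.0 * g)) * base
H_RB = 1j * omega * (alpha * L / (6.0 * omega)) * base

# Mechanical analogy: "surface mass" and "boundary damping" matrices
M = H_FS / (-omega**2)    # Eq. 8.63
B = H_RB / (1j * omega)   # Eq. 8.64

# Both matrices are real and symmetric with positive diagonal entries,
# matching the text's remarks about the masses and dampers.
assert np.allclose(M, L / (6.0 * g) * base)
assert np.allclose(B.imag, 0.0)
assert np.all(M.diagonal() > 0) and np.all(B.real.diagonal() > 0)
assert np.allclose(M, M.T)
```

Note that B inherits the 1/ω factor from Eq. 8.64, so the effective dampers change with the excitation frequency, as the text observes.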
Index
acoustics, 13
backsolving, 10
banded matrix, 33
beams in ﬂexure, 44
Bernoulli equation, 86, 89
big O notation, 4
boundary conditions, 1
derivative, 20, 33
essential, 76
natural, 65, 76, 88
Neumann, 20
nonreﬂecting, 27
radiation, 92
boundary value problem(s), 1, 8
brachistochrone, 61
calculus of variations, 57
CFL condition, 26
change of basis, 49
compatibility, 81
complex amplitude, 90
complex numbers, 90
conic sections, 14
constant strain triangle, 48
constrained minimization, 63
constraints, 36
continuum problems
direct approach, 45
Courant condition, 26
Crank-Nicolson method, 10, 18
stencil, 20
CST, 48
cyclic permutation, 71
cycloid, 62
d’Alembert solution, 21, 22
del operator, 11
determinant of matrix, 52
discriminant, 14
displacement vector, 35
divergence theorem, 67
domain of dependence, 25
domain of inﬂuence, 23
electrostatics, 12
equation(s)
acoustics, 13
Bernoulli, 86, 89
classiﬁcation, 14
elliptic, 14, 31
Euler-Lagrange, 59, 63
heat, 13
Helmholtz, 13
hyperbolic, 14, 21
Laplace, 11
mathematical physics, 11
nondimensional, 15
ordinary diﬀerential, 1
parabolic, 14, 16
partial diﬀerential, 11
Poisson, 12, 66, 73
potential, 11
sparse systems, 32
systems, 5
wave, 12, 22
error
global, 3
local, 3
rounding, 3
truncation, 3
essential boundary condition, 76
Euler beam, 45
Euler’s method, 2
truncation error, 3
Euler-Lagrange equation, 59, 63
explicit method, 4, 16
ﬁnite diﬀerence(s), 6
backward, 6
central, 6
forward, 2, 6
Neumann boundary conditions, 33
relaxation, 33
Fourier’s law, 13
free surface ﬂows, 89
functional, 58
Galerkin’s method, 83
Gaussian elimination, 10, 80
gravitational potential, 11
heat conduction, 12
Helmholtz equation, 13
Hooke’s law, 48
implicit method, 4
incompressible ﬂuid ﬂow, 11
index notation, 67
initial conditions, 1
initial value problem(s), 1
interpretation of functional, 79
Kronecker delta, 50, 57, 68
Laplacian operator, 11
large spring method, 43
Leibnitz’s rule, 22
magnetostatics, 12
mass matrix, 35
mass-spring system, 1, 34
matrix assembly, 36
matrix partitioning, 42
mechanical analogy, 95
method of weighted residuals, 83
natural boundary condition, 65, 76, 88
Neumann boundary condition
ﬁnite diﬀerences, 20, 33
Newton’s second law, 34
nonconforming element, 82
nondimensional form, 15
orthogonal coordinate transformation, 52
orthogonal matrix, 52
orthonormal basis, 50
phantom points, 20
phasors, 90
addition, 91
pin-jointed frame, 41
pivoting, 80
Poisson equation, 12
positive deﬁnite matrix, 80
potential energy, 80
potential ﬂuid ﬂow, 85
radiation boundary condition, 92
relaxation, 33
rod element, 38
rotation matrix, 52
rounding error, 3
Runge-Kutta methods, 4
separation of variables, 18
shape functions, 70
shooting methods, 10
solution procedure, 38
sparse system of equations, 32
speed of propagation, 12
stable solution, 18
stencil, 16, 20, 26
stiﬀness matrix, 35, 76
properties, 38
strain energy, 80
summation convention, 50, 67
symmetry, 87
Taylor’s theorem, 3, 4
tensors, 53
examples, 54
isotropic, 57
torsion, 12
transverse shear, 45
triangular element, 46
tridiagonal system, 8, 10
truncation error, 3
truss structure, 40
unit vectors, 50
unstable solution, 18
velocity potential, 11, 85
vibrations
bar, 12
membrane, 12
string, 12
virtual work, 48
warping function, 12
wave equation, 12, 22
wave maker problem, 91
matrices, 94
wave speed, 23
Copyright c 2001–2010 by Gordon C. Everstine. All rights reserved.
A This book was typeset with L TEX 2ε (MiKTeX).
Preface
These lecture notes are intended to supplement a onesemester graduatelevel engineering course at The George Washington University in numerical methods for the solution of partial diﬀerential equations. Both ﬁnite diﬀerence and ﬁnite element methods are included. The main prerequisite is a standard undergraduate calculus sequence including ordinary diﬀerential equations. In general, the mix of topics and level of presentation are aimed at upperlevel undergraduates and ﬁrstyear graduate students in mechanical, aerospace, and civil engineering. Gordon Everstine Gaithersburg, Maryland January 2010
iii
.
. . . . . . . . . . . . . . . . . .1 Linear MassSpring Systems . . . . . . .2 Finite Diﬀerences . . . . . .3 Elliptic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9 Beams in Flexure . . .2 Classiﬁcation of Partial Diﬀerential Equations .2. . . . . . .10 Direct Approach to Continuum Problems . . . . . . . . . . . .1.2. . . . . . . .1 The d’Alembert Solution of the Wave Equation . . .2 CrankNicolson Implicit Method . . . .6 Boundary Value Problems . . . . . . . . . . . . . . v . . . . . . . . . . .2. . . . . . .1 Derivative Boundary Conditions . . . . . . . .1 Explicit Finite Diﬀerence Method . . . . . . . . . . .3 Starting Procedure for Explicit Algorithm . . . . . . .2 Solving Tridiagonal Systems . . . . . . . . . . 4. . .4 Example and Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7 Boundary Conditions by Matrix Partitioning 4. . . . . . . . . . . . . . . . . . 4. . . . . . .3 RungeKutta Methods . . . . .4 Systems of Equations . . . . . . . . . . .2 Hyperbolic Equations . . . . . . . . . . . . . . . . . . . . . . 3. . . . . . . . . . . 2. . . . . . . . . . . . . . . 4. .3 Transformation to Nondimensional Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3. . . . . . . 1. .3 Derivative Boundary Conditions . 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 Direct Finite Element Analysis 4. . . 3. . . . . .6. . . . . . . . . . . . . . . . . . . . . . . . . . . . 1. .1 Parabolic Equations . . . . . . . . . . . . . . . . . . . . 1.1 Classical Equations of Mathematical Physics . . . . .1 Example . . . . . 2. . . . . . . . . . . . . . . . . . . 3. . . . . . . . . . . . . .6 PinJointed Frame Example . . . . . . . . . . . . . . . . . . . . . . .3 Constraints . . . . . . . . . . . . . . . . . . .2 Truncation Error for Euler’s Method . . .6. . . . . 4. . . . . . . . . . . . . 
1 2 3 4 5 6 8 9 10 10 11 11 14 15 16 16 16 18 20 21 21 25 26 27 31 33 34 34 36 36 37 38 41 42 43 44 45 2 Partial Diﬀerential Equations 2.2. . 1. . . . 4. . 1. . . . .1. . . . . . . . . . . . . . . . . . . . . . . . . .1 Euler’s Method . . . . 4. . . . . . . . . 4. . . . . .5 PinJointed Rod Element . . . . . . . . . . . . 3. . . . . . . . . . . . . . .4 Nonreﬂecting Boundaries . . . . . . . . . . . . . . . . .5 Finite Diﬀerences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3. . . . . . . . . . . . . . . . . . . . . .8 Alternative Approach to Constraints . .2 Matrix Assembly . . . . . . . . . . . . . . . . . . . . .Contents 1 Numerical Solution of Ordinary Diﬀerential 1. . . .7 Shooting Methods . . . . . . . . . . . . . . . . . . 3 Finite Diﬀerence Solution of Partial Diﬀerential Equations 3. . . . . . . . . . 3. . 4. . . . . . . . . . . 3. . . 1. . . . . . . . . . 1. . . . . . . . . . . . . . . .1. . . . . . . . . . . 3. . . . . . . . . . . . . . . 1. . . . .
. 8. . . . . . . .1 Finite Element Model . . . . . . . . .5 Functions of Several Independent Variables . .6 Linear Triangle Matrices for 2D Wave Maker Problem . . 6. . . . . . . . . . . . . . . . . . . 7. . . . . . . . . . . . vi . . . . . 7. . .5 2D Wave Maker . .9 Method of Weighted Residuals (Galerkin’s Method) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Index Notation and Summation Convention . . . . . . . . . . . . 1 2 6 11 17 . . . . . . 6. . . . . . The Shooting Method. . . . . . .2 Application of Symmetry . . 8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 Potential Fluid Flow With Finite Elements 8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SimplySupported Beam With Distributed Load. . . . . . . . . . . . . . . . . .7 Functions of Several Dependent Variables . . . . . . . 8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7. . . . . . . . . . . .2 Examples of Tensors . . . . . . . . . . . 7 Variational Approach to the Finite Element Method 7. . . . . . . . . . . . . 6 Calculus of Variations 6. . . . . . . .8 Element Compatibility . . . . . . . . 5. . . . .6 Interpretation of Functional . . . . . . . .3 Free Surface Flows . . . . 7. . . . . . . 7. . . . . . . 8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6. . . . . . . . . . . . . . . . .5 Matrices for Linear Triangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Deriving Variational Principles . . . . . . . .5 Change of Basis 5. . . . . . . . . . . . . . 6. . . . . . . . . . . . . . . . . . . . . 7. .4 Variational Approach . . . . . . . . . . . . . . . . 6. . . . Finite Diﬀerence Approximations to Derivatives. . . . . . . . . . . . . 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Constraint Conditions . . Bibliography Index . . . . . . . . . . . . . . . . .7 Stiﬀness in Elasticity in Terms of Shape Functions . . . . . . . . . . . . . . . . . . . . . 
. . . .3 Isotropic Tensors . .7 Mechanical Analogy for the Free Surface Problem . . . . . . . . . . . . . . . . . . List of Figures 1 2 3 4 5 1DOF MassSpring System. . . . 7. . . . . . . . . . . . . . . . .4 Use of Complex Numbers and Phasors in Wave Problems 8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 Example 3: A Constrained Minimization Problem . . . . . . . . . . . 5. . . . . . . . . . . . . . . . . . Mesh for 1D Heat Equation. . .2 Example 2: The Brachistochrone . . .3 Shape Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 53 54 57 57 60 61 63 63 64 66 66 66 67 68 70 73 76 79 80 81 83 85 86 87 89 90 91 94 95 97 99 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Tensors . . . . .6 Example 4: Poisson’s Equation . . 8. .1 Example 1: The Shortest Distance Between Two Points 6. . . . .
. Domains of Dependence for r > 1. . . . . . . . . . . . Computing 2D Stiﬀness of PinJointed Rod. . . . . . . . . . . . . . . . . Stencil for Explicit Solution of Wave Equation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .52. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Propagation of Initial Displacement. . . . . . . . . . . . The Neighborhood of Point (i. . . . . . . . . . . . . . . . . Finite Diﬀerence Grid on Rectangular Domain. . . . . Example With Reactions and Loads at Same DOF. . . . . . Plate With Constraints and Loads. . . . . . . . . . . . . . . . . . . 17 17 17 18 19 20 20 21 23 24 24 25 26 26 27 28 30 31 31 32 32 33 34 34 35 36 37 37 39 39 40 40 41 42 43 44 44 45 45 46 46 50 50 51 56 . . . . . . . . . . . . . . . . . . . . . . PinJointed Rod Element. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 Heat Equation Stencil for Explicit Finite Diﬀerence Algorithm. . . . . . . . Heat Equation Stencil for r = 1/10. . . . . . . . . . . . vii . . . . . . . . . . . . . . . . . . . . Explicit Finite Diﬀerence Solution With r = 0. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Large Spring Approach to Constraints. Truss Structure Modeled With PinJointed Rods. . . . . . . . . . . . . . . . . . Basis Vectors in Polar Coordinate System. Laplace’s Equation on Rectangular Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The Beam Problem Associated With Column 1. . . . . . . 4DOF Spring System. . . . . . .48. . . . . . . . . . . . . . . . . . . . . . . . . . . Change of Basis. 
Stencil for CrankNicolson Algorithm. Treatment of Derivative Boundary Conditions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Finite Diﬀerence Mesh at Nonreﬂecting Boundary. . . . . . . . . . . . PinJointed Frame Example. . . . . . . . . . . . . . . . . . . . . . . . . The Beam Problem Associated With Column 2. . . . .C. Explicit Finite Diﬀerence Solution With r = 0. . 3DOF Spring System. . . . . . . . . . . . . . . . . 20Point Finite Diﬀerence Mesh. . . . . . . Domains of Inﬂuence and Dependence. . . . Heat Equation Stencils for r = 1/2 and r = 1. . . . . . . . . . . . Mesh for CrankNicolson. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2DOF MassSpring System. . . . . . . . j). . . DOF for 2D Beam Element. . Laplace’s Equation With Dirichlet and Neumann B. . . . . . . . . . . . . . . . . . . . . . . . . . . . Initial Velocity Function. . . . . . . . . . . . . . . . Element Coordinate System for PinJointed Rod. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . DOF for the Linear Triangular Membrane Element. . . . . . Treatment of Neumann Boundary Conditions. . DOF for Beam in Flexure (2D). . . . . . . . . Propagation of Initial Velocity. . . . . . . . . . . . . . . . . . . . . . Element Coordinate Systems in the Finite Element Method. . . Finite Length Simulation of an Inﬁnite Bar. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Spring System With Constraint. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The Degrees of Freedom for a PinJointed Rod Element in 2D. . . . . . . . . . . . . . . . . . A Single Spring Element. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Mesh for Explicit Solution of Wave Equation. . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The Brachistochrone Problem. . . . . . . . . . . . The Complex Amplitude. . . . . . . . . . Triangular Finite Element. . . . . . . . . . . . . . . . . . . . . Single DOF MassSpringDashpot System. Compatibility at an Element Boundary. Symmetry With Respect to y = 0. . . . . . . . . . . . . . . Phasor Addition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 60 61 63 64 70 70 72 74 75 78 81 82 83 86 87 87 88 88 89 90 91 91 93 93 95 viii . . . . . . . . . . . . . A Constrained Minimization Problem. . . and Neutral Stationary Values. Antisymmetry With Respect to x = 0. . . . . . . . . . . . 2D Wave Maker: Frequency Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2D Wave Maker: Time Domain. . . . . Discontinuous Function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . TwoDimensional Finite Element Mesh. Graphical Solution of ω 2 /α = g tanh(αd). . . . . . . . . . . . . . . . . . . . . . . . . . . The Free Surface Problem. . . . . . . . . . . . . . . Triangular Mesh at Boundary. . . . . . . . . . Neumannn Boundary Condition at Internal Boundary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Curve of Minimum Length Between Two Points. . . . . Potential Flow Around Solid Body. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 Minimum. . . . . . . Two Adjacent Finite Elements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Boundary Value Problem for Flow Around Circular Cylinder. . . . . . . . . . . . . . . . . . . . . . . . Axial Member (PinJointed Truss Element). . . . . . . . . . . . . . Maximum. . . . . . . . . . . . . . . . . . . . . . Several Brachistochrone Solutions. . . 
. . A Vector Analogy for Galerkin’s Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Streamlines Around Circular Cylinder. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
u(0) = 0. for which the diﬀerential equation is x FL EIu (x) = M (x) = x − Fx (1. . y (n) = 0 for a ≤ x ≤ b. consider the initial value problem m¨ + ku = f (t) u u(0) = 5. . If the conditions are imposed at both x = a and x = b. 2.). . . y .1) where y = y(x). u is the transverse displacement. y1 . the conditions are called initial conditions (I. . . A system of n ﬁrstorder ODEs has the form y1 (x) = f1 (x. and M (x) is the internal bending moment at x. An nthorder equation has the highest order derivative of order n: f x. . y. yn (x) = fn (x. yn ) 2 (1. If all conditions are imposed at x = a.1 Numerical Solution of Ordinary Diﬀerential Equations An ordinary diﬀerential equation (ODE) is an equation that involves an unknown function (the dependent variable) and some of its derivatives with respect to a single independent variable. Initial value problems generally arise in timedependent situations. and the problem is an initial value problem (IVP). . An nthorder ODE requires the speciﬁcation of n conditions to assure uniqueness of the solution. 1 . . For example.4) . y2 . as illustrated in Fig. . (1. and y (n) denotes the nth derivative with respect to x.3) 2 2 u(0) = u(L) = 0. . y1 .). 1. the conditions are called boundary conditions (B. . .C. and the problem is a boundary value problem (BVP). . and dots denote diﬀerentiation with respect to the time t. ˙ (1.C. . An example of a boundary value problem is shown in Fig. y . Boundary value problems generally arise in static (timeindependent) situations. yn ) E u(t) E k ¡e ¡e ¡ e¡ e¡ m e e f (t) Figure 1: 1DOF MassSpring System. yn ) y (x) = f2 (x. . y1 . This equation describes a onedegreeoffreedom massspring system which is released from rest and subjected to a timedependent force. . As we will see. where the independent variable x is the distance from the left end. . y2 . IVPs and BVPs must be treated diﬀerently numerically. y2 .2) where u = u(t).
F (N/m)
c c c d '
c c c
c c c c
c
L
d E
Figure 2: SimplySupported Beam With Distributed Load. for a ≤ x ≤ b. A single nthorder ODE is equivalent to a system of n ﬁrstorder ODEs. This equivalence can be seen by deﬁning a new set of unknowns y1 , y2 , . . . , yn such that y1 = y, y2 = y , y3 = y , . . . , yn = y (n−1) . For example, consider the thirdorder IVP y = xy + ex y + x2 + 1, x ≥ 0 y(0) = 1, y (0) = 0, y (0) = −1. To obtain an equivalent ﬁrstorder system, deﬁne y1 = y, y2 = y , y3 = y to obtain y1 = y2 y = y3 2 y3 = xy2 + ex y1 + x2 + 1 with initial conditions y1 (0) = 1, y2 (0) = 0, y3 (0) = −1. (1.5)
(1.6)
1.1
Euler’s Method
This method is the simplest of the numerical methods for solving initial value problems. Consider the IVP y (x) = f (x, y), x ≥ a (1.7) y(a) = η. To eﬀect a numerical solution, we discretize the xaxis: a = x0 < x1 < x2 < · · · , where, for uniform spacing, xi − xi−1 = h, (1.8) and h is considered small. With this discretization, we can approximate the derivative y (x) with the forward ﬁnite diﬀerence y (x) ≈ y(x + h) − y(x) . h (1.9)
If we let yk represent the numerical approximation to y(xk ), then y (xk ) ≈ yk+1 − yk . h 2 (1.10)
Thus, a numerical (diﬀerence) approximation to the ODE, Eq. 1.7, is yk+1 − yk = f (xk , yk ), k = 0, 1, 2, . . . h or yk+1 = yk + hf (xk , yk ), k = 0, 1, 2, . . . y0 = η. This recursive algorithm is called Euler’s method. (1.12) (1.11)
1.2
Truncation Error for Euler’s Method
There are two types of error that arise in numerical methods: truncation error (which arises primarily from a discretization process) and rounding error (which arises from the ﬁniteness of number representations in the computer). Reﬁning a mesh to reduce the truncation error often causes the rounding error to increase. To estimate the truncation error for Euler’s method, we ﬁrst recall Taylor’s theorem with remainder, which states that a function f (x) can be expanded in a series about the point x = c: f (x) = f (c)+f (c)(x−c)+ f (n) (c) f (n+1) (ξ) f (c) (x−c)2 +· · ·+ (x−c)n + (x−c)n+1 , (1.13) 2! n! (n + 1)!
where ξ is between x and c. The last term in Eq. 1.13 is referred to as the remainder term. Note also that Eq. 1.13 is an equality, not an approximation. In Eq. 1.13, let x = xk+1 and c = xk , in which case 1 y(xk+1 ) = y(xk ) + hy (xk ) + h2 y (ξk ), 2 where xk ≤ ξk ≤ xk+1 . Since y satisﬁes the ODE, Eq. 1.7, y (xk ) = f (xk , y(xk )), where y(xk ) is the actual solution at xk . Hence, 1 y(xk+1 ) = y(xk ) + hf (xk , y(xk )) + h2 y (ξk ). 2 (1.16) (1.15) (1.14)
Like Eq. 1.13, this equation is an equality, not an approximation. By comparing this last equation to Euler’s approximation, Eq. 1.12, it is clear that Euler’s 1 method is obtained by omitting the remainder term 2 h2 y (ξk ) in the Taylor expansion of y(xk+1 ) at the point xk . The omitted term accounts for the truncation error in Euler’s method at each step. This error is a local error, since the error occurs at each step regardless of the error in the previous step. The accumulation of local errors is referred to as the global error, which is the more important error but much more diﬃcult to compute. Most algorithms for solving ODEs are derived by expanding the solution function in a Taylor series and then omitting certain terms. 3
1.3
RungeKutta Methods
Euler’s method is a ﬁrstorder method, since it was obtained by omitting terms in the Taylor series expansion containing powers of h greater than one. To derive a secondorder method, we again use Taylor’s theorem with remainder to obtain 1 1 y(xk+1 ) = y(xk ) + hy (xk ) + h2 y (xk ) + h3 y (ξk ) 2 6 for some ξk such that xk ≤ ξk ≤ xk+1 . Since, from the ODE (Eq. 1.7), y (xk ) = f (xk , y(xk )), we can approximate y (x) = df (x, y(x)) f (x + h, y(x + h)) − f (x, y(x)) = + O(h) dx h (1.19) (1.18) (1.17)
where we use the “big O” notation O(h) to represent terms of order h as h → 0. [For example, 2h3 = O(h3 ), 3h2 + 5h4 = O(h2 ), h2 O(h) = O(h3 ), and −287h4 e−h = O(h4 ).] From these last two equations, Eq. 1.17 can then be written as h y(xk+1 ) = y(xk ) + hf (xk , y(xk )) + [f (xk+1 , y(xk+1 )) − f (xk , y(xk ))] + O(h3 ), 2 which leads (after combining terms) to the diﬀerence equation 1 yk+1 = yk + h[f (xk , yk ) + f (xk+1 , yk+1 )]. 2 (1.21) (1.20)
This formula is a secondorder approximation to the original diﬀerential equation y (x) = f (x, y) (Eq. 1.7), but it is an inconvenient approximation, since yk+1 appears on both sides of the formula. (Such a formula is called an implicit method, since yk+1 is deﬁned implicitly. An explicit method would have yk+1 appear only on the lefthand side.) To obtain instead an explicit formula, we use the approximation yk+1 = yk + hf (xk , yk ) to obtain (1.22)
1 yk+1 = yk + h[f (xk , yk ) + f (xk+1 , yk + hf (xk , yk ))]. (1.23) 2 This formula is the RungeKutta formula of second order. Other higherorder formulas can be derived similarly. For example, a fourthorder formula turns out to be popular in applications. We illustrate the implementation of the secondorder RungeKutta formula, Eq. 1.23, with the following algorithm. We ﬁrst make the following three deﬁnitions: ak = f (xk , yk ), bk = yk + hak , 4 (1.24) (1.25)
z0 = ξ. Euler’s method uses the recursive formula yk+1 = yk + hf (xk . bk ). yk . yk . y(x).27) yk+1 = yk + h(ak + ck ). zk ) + g(xk+1 . 1. zk + hg(xk . the secondorder RungeKutta method uses the recursive formula 1 yk+1 = yk + h[f (xk .12 that. yk . z(x)) (1. yk ) + f (xk+1 . yk + hf (xk . yk ))].30) We recall from Eq. zk ) z = zk + hg(xk . z0 = ξ. yk + hf (xk . 1. (1.26) 1 (1. Consider the initial value problem involving two equations: y (x) = f (x. yk .31) (1. zk ) + f (xk+1 . yk . yk + hf (xk . yk . zk ).28) z (x) = g(x. 2 The calculations can then be performed conveniently with the following spreadsheet: k 0 1 2 . for one equation. this formula becomes 1 yk+1 = yk + 2 h[f (xk . yk . zk ) k+1 y0 = η.4 Systems of Equations The methods just derived can be extended directly to systems of equations. zk + hg(xk .ck = f (xk+1 . yk ). zk ).29) (1.32) 5 . yk . z(x)) y(a) = η. We recall from Eq. This formula is directly extendible to two equations as yk+1 = yk + hf (xk .23 that. . y(x). zk ))] 1 z = zk + 2 h[g(xk . z(a) = ξ. xk yk ak bk ck yk+1 1. (1. 2 For two equations. . in which case (1. zk ))] k+1 y0 = η. for one equation.
h (1. is the central ﬁnite diﬀerence approximation to the derivative. there is no basis for choosing one of these approximations over the other. in general.T y(x) s central diﬀerence rr j r r s s rr backward forward diﬀerence diﬀerence E x−h x x+h x Figure 3: Finite Diﬀerence Approximations to Derivatives. we could approximate the derivative using the forward diﬀerence formula y (x) ≈ y(x + h) − y(x) . 6 . 3). we want to develop further the notion of ﬁnite diﬀerence approximation of derivatives. an intuitively more appealing approximation results from the average of these formulas: y (x) ≈ or 1 y(x + h) − y(x) y(x) − y(x − h) + 2 h h y (x) ≈ (1. (1.5 Finite Diﬀerences Before addressing boundary value problems. We could also approximate the derivative using the backward diﬀerence formula y (x) ≈ y(x) − y(x − h) . 1. Consider a function y(x) for which we want to compute the derivative y (x) at some point x.33) which is the slope of the line to the right of x (Fig. which is more accurate than either the forward or backward diﬀerence formulas. If we discretize the xaxis with uniform spacing h.36) 2h This formula. Since.34) which is the slope of the line to the left of x.35) y(x + h) − y(x − h) . h (1.
is the backward diﬀerence approximation to the second derivative. y (x) ≈ y (x + h) − y (x) h y(x + 2h) − y(x + h) y(x + h) − y(x) ≈ − h2 h2 y(x + 2h) − 2y(x + h) + y(x) = . which involves three points backward of x.Similar approximations can be derived for second derivatives.42) (1.41) h3 h2 y (x) + y (x) + O(h4 ). by replacing h by −h. h2 (1. Using forward diﬀerences. h2 (1. 2 6 (1.39) This last result can also be obtained by using forward diﬀerences for the second derivative followed by backward diﬀerences for the ﬁrst derivatives. The central ﬁnite diﬀerence approximation to the second derivative uses instead the three points which bracket x: y (x) ≈ y(x + h) − 2y(x) + y(x − h) . 2 h which. y (x) = or (1.43) 7 . h2 (1. Similarly. 2 6 The addition of these two equations yields y(x + h) + y(x − h) = 2y(x) + h2 y (x) + O(h4 ) y(x + h) − 2y(x) + y(x − h) + O(h2 ). y (x) ≈ y (x) − y (x − h) h y(x) − y(x − h) y(x − h) − y(x − 2h) − ≈ h2 h2 y(x) − 2y(x − h) + y(x − 2h) = . shows that the formula is secondorder accurate. or vice versa.38) This formula. h2 h3 y(x − h) = y(x) − hy (x) + y (x) − y (x) + O(h4 ).40) (1. is the forward diﬀerence approximation to the second derivative. which involves three points forward of x. The central diﬀerence formula for second derivatives can alternatively be derived using Taylor series expansions: y(x + h) = y(x) + hy (x) + Similarly. because of the error term.37) This formula. using backward diﬀerences.
n. . n − 1.46) Then. y(b) = η2 . 1. . n (1. not enough information is given at either endpoint to allow a stepbystep solution. y1 ) h2 f (x2 .45. . yk ≈ y(xk ). the two boundary conditions are needed to obtain a nonsingular system: y0 = η1 . Let yk denote the numerical approximation to the exact solution at xk . the ODE can be approximated by y(xk−1 ) − 2y(xk ) + y(xk+1 ) ≈ f (xk . y ). k = 1. y4 ) (1. . The methods used for IVPs started at one end (x = a) and computed the solution step by step for increasing x.44 for which the righthand side depends only on x and y: y (x) = f (x. yn−1 ). 8 . 1. This system is linear or nonlinear. h2 which suggests the diﬀerence equation yk−1 − 2yk + yk+1 = h2 f (xk . y3 ) h2 f (x4 . a ≤ x ≤ b (1. 2. . yk ).49) (1.50) Since this system of equations has n − 1 equations in n + 1 unknowns. .51) y4 2y4 + y5 (1. Consider the BVP y (x) = f (x.6 Boundary Value Problems The techniques for initial value problems (IVPs) are. 3.45) y(a) = η1 . That is. −η1 + h2 f (x1 . if we use a central diﬀerence approximation to the second derivative in Eq. y(b) = η2 .44) This equation could be nonlinear. y). k = 0. .52) yn−2 − 2yn−1 = −η2 + h2 f (xn−1 . . The resulting system is thus −2y1 + y2 y1 − 2y2 + y3 y2 − 2y3 + y3 − = = = = . which is a tridiagonal system of n − 1 equations in n − 1 unknowns. Subdivide the interval (a. yk ). depending on f . a ≤ x ≤ b y(a) = η1 . (1. Consider ﬁrst a special case of Eq.1. depending on f . yn = η2 . For a BVP. 2. y2 ) h2 f (x3 . 1. b) into n equal subintervals: h= in which case xk = a + kh.47) b−a . . .48) (1. (1. y. (1. in general. not directly applicable to boundary value problems (BVPs).
1.6.1 Example

Consider

y'' = −y(x), 0 ≤ x ≤ π/2, y(0) = 1, y(π/2) = 0.  (1.53)
In Eq. 1.45, f(x, y) = −y, η₁ = 1, η₂ = 0. Thus, the right-hand side of the ith equation in Eq. 1.52 has −h²yᵢ, which can be moved to the left-hand side to yield the system

−(2 − h²)y₁ + y₂ = −1
y₁ − (2 − h²)y₂ + y₃ = 0
y₂ − (2 − h²)y₃ + y₄ = 0  (1.54)
...
y_{n−2} − (2 − h²)y_{n−1} = 0.
We first solve this tridiagonal system of simultaneous equations with n = 8 (i.e., h = π/16), and compare with the exact solution y(x) = cos x:

k   x_k        y_k        Exact y(x_k)  Absolute Error  % Error
0   0          1          1             0               0
1   0.1963495  0.9812186  0.9807853     0.0004334       0.0441845
2   0.3926991  0.9246082  0.9238795     0.0007287       0.0788715
3   0.5890486  0.8323512  0.8314696     0.0008816       0.1060315
4   0.7853982  0.7080045  0.7071068     0.0008977       0.1269565
5   0.9817477  0.5563620  0.5555702     0.0007917       0.1425085
6   1.1780972  0.3832699  0.3826834     0.0005865       0.1532604
7   1.3744468  0.1954016  0.1950903     0.0003113       0.1595767
8   1.5707963  0          0             0               0
We then solve this system with n = 40 (h = π/80):

k    x_k        y_k        Exact y(x_k)  Absolute Error  % Error
0    0          1          1             0               0
5    0.1963495  0.9808025  0.9807853     0.0000172       0.0017571
10   0.3926991  0.9239085  0.9238795     0.0000290       0.0031363
15   0.5890486  0.8315047  0.8314696     0.0000351       0.0042161
20   0.7853982  0.7071425  0.7071068     0.0000357       0.0050479
25   0.9817477  0.5556017  0.5555702     0.0000315       0.0056660
30   1.1780972  0.3827068  0.3826834     0.0000233       0.0060933
35   1.3744468  0.1951027  0.1950903     0.0000124       0.0063443
40   1.5707963  0          0             0               0
Notice that a mesh refinement by a factor of 5 has reduced the error by a factor of about 25. This behavior is typical of a numerical method which is second-order accurate.
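The tables above are easy to reproduce. A minimal sketch (mine, not from the notes) that builds the system of Eq. 1.54 and compares against cos x; NumPy's dense solver is used for brevity, although a tridiagonal solver would be used in practice:

```python
import numpy as np

def solve_bvp_cos(n):
    """Solve y'' = -y, y(0) = 1, y(pi/2) = 0 with the central-difference
    system of Eq. 1.54 on n subintervals; returns the max absolute error."""
    h = (np.pi / 2) / n
    m = n - 1                       # number of interior unknowns
    A = np.zeros((m, m))
    for i in range(m):
        A[i, i] = -(2.0 - h**2)     # diagonal entries -(2 - h^2)
        if i > 0:
            A[i, i - 1] = 1.0
        if i < m - 1:
            A[i, i + 1] = 1.0
    b = np.zeros(m)
    b[0] = -1.0                     # from the boundary condition y(0) = 1
    y = np.linalg.solve(A, b)
    x = np.linspace(0.0, np.pi / 2, n + 1)[1:-1]
    return np.max(np.abs(y - np.cos(x)))

e8, e40 = solve_bvp_cos(8), solve_bvp_cos(40)
print(e8, e40, e8 / e40)   # the error ratio should be about 25
```

The maximum errors agree with the tables above, and the ratio of errors confirms second-order accuracy.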
1.6.2 Solving Tridiagonal Systems
Tridiagonal systems are particularly easy (and fast) to solve using Gaussian elimination. It is convenient to solve such systems using the following notation:

d₁x₁ + u₁x₂ = b₁
l₂x₁ + d₂x₂ + u₂x₃ = b₂
l₃x₂ + d₃x₃ + u₃x₄ = b₃  (1.55)
...
l_n x_{n−1} + d_n x_n = b_n,

where dᵢ, uᵢ, and lᵢ are, respectively, the diagonal, upper, and lower matrix entries in Row i. All coefficients can now be stored in three one-dimensional arrays, D(·), U(·), and L(·), instead of a full two-dimensional array A(I,J). The solution algorithm (reduction to upper triangular form by Gaussian elimination followed by back-solving) can now be summarized as follows:

1. For k = 1, 2, ..., n − 1:  [k = pivot row]
   (a) m = −l_{k+1}/d_k  [m = multiplier needed to annihilate term below]
   (b) d_{k+1} = d_{k+1} + m u_k  [new diagonal entry in next row]
   (c) b_{k+1} = b_{k+1} + m b_k  [new rhs in next row]
2. x_n = b_n/d_n  [start of back-solve]
3. For k = n − 1, n − 2, ..., 1:  [back-solve loop]
   (a) x_k = (b_k − u_k x_{k+1})/d_k
Tridiagonal systems arise in a variety of applications, including the Crank-Nicolson finite difference method for solving parabolic partial differential equations.
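The algorithm above translates almost line for line into code. A sketch in Python (the function name and list-based storage are my own choices, mirroring the three arrays D, U, and L):

```python
def solve_tridiagonal(l, d, u, b):
    """Thomas algorithm for the tridiagonal system of Eq. 1.55.
    l[i], d[i], u[i] are the lower, diagonal, and upper entries of row i
    (l[0] and u[n-1] are unused); b is the right-hand side.
    Inputs are copied so the caller's arrays are not overwritten."""
    n = len(d)
    d, b = list(d), list(b)
    # Step 1: forward elimination
    for k in range(n - 1):
        m = -l[k + 1] / d[k]      # multiplier annihilating the term below
        d[k + 1] += m * u[k]      # new diagonal entry in next row
        b[k + 1] += m * b[k]      # new right-hand side in next row
    # Steps 2 and 3: back-solve
    x = [0.0] * n
    x[n - 1] = b[n - 1] / d[n - 1]
    for k in range(n - 2, -1, -1):
        x[k] = (b[k] - u[k] * x[k + 1]) / d[k]
    return x

# Example: 2x1 + x2 = 3; x1 + 2x2 + x3 = 4; x2 + 2x3 = 3
x = solve_tridiagonal([0, 1, 1], [2, 2, 2], [1, 1, 0], [3, 4, 3])
print(x)   # → [1.0, 1.0, 1.0]
```

Only three one-dimensional arrays are touched, so the work is O(n) rather than the O(n³) of full Gaussian elimination.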
1.7 Shooting Methods
Shooting methods provide a way to convert a boundary value problem to a trial-and-error initial value problem. It is useful to have additional ways to solve BVPs, particularly if the equations are nonlinear. Consider the following two-point BVP:

y'' = f(x, y, y'), a ≤ x ≤ b, y(a) = A, y(b) = B.  (1.56)

To solve this problem using the shooting method, we compute solutions of the IVP

y'' = f(x, y, y'), x ≥ a, y(a) = A, y'(a) = M  (1.57)
Figure 4: The Shooting Method.
for various values of M (the slope at the left end of the domain) until two solutions, one with y(b) < B and the other with y(b) > B, have been found (Fig. 4). The initial slope M can then be interpolated between those two trial values until a solution with y(b) = B is found.
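The trial-and-error process can be sketched in code. The version below uses classical RK4 for the inner IVP and bisection (rather than the interpolation described above) to adjust the slope M; the test problem y'' = −y and all names are illustrative choices, not from the notes:

```python
import math

def integrate(f, a, b, A, M, steps=200):
    """RK4 march of the IVP y'' = f(x, y, y'), y(a) = A, y'(a) = M,
    written as the first-order system (y, v) with v = y'; returns y(b)."""
    h = (b - a) / steps
    x, y, v = a, A, M
    for _ in range(steps):
        k1y, k1v = v, f(x, y, v)
        k2y, k2v = v + h/2*k1v, f(x + h/2, y + h/2*k1y, v + h/2*k1v)
        k3y, k3v = v + h/2*k2v, f(x + h/2, y + h/2*k2y, v + h/2*k2v)
        k4y, k4v = v + h*k3v,   f(x + h,   y + h*k3y,   v + h*k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
    return y

def shoot(f, a, b, A, B, M1, M2, tol=1e-10):
    """Bisect on the initial slope M until y(b) = B; assumes the endpoint
    values produced by M1 and M2 bracket the target B."""
    r1 = integrate(f, a, b, A, M1) - B
    M = 0.5 * (M1 + M2)
    for _ in range(100):
        M = 0.5 * (M1 + M2)
        r = integrate(f, a, b, A, M) - B
        if abs(r) < tol:
            break
        if r * r1 > 0:
            M1, r1 = M, r
        else:
            M2 = M
    return M

# BVP: y'' = -y, y(0) = 0, y(pi/2) = 1; exact solution y = sin x, y'(0) = 1
M = shoot(lambda x, y, v: -y, 0.0, math.pi / 2, 0.0, 1.0, 0.0, 2.0)
print(M)   # close to 1
```

For a linear equation, two trials and one linear interpolation would suffice; bisection is shown because it also works unchanged for nonlinear f.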
2 Partial Differential Equations
A partial differential equation (PDE) is an equation that involves an unknown function (the dependent variable) and some of its partial derivatives with respect to two or more independent variables. In an nth-order equation, the highest-order derivative is of order n.
2.1 Classical Equations of Mathematical Physics
1. Laplace's equation (the potential equation)

∇²φ = 0  (2.1)

In Cartesian coordinates, the vector operator del is defined as

∇ = e_x ∂/∂x + e_y ∂/∂y + e_z ∂/∂z.  (2.2)

∇² is referred to as the Laplacian operator and given by

∇² = ∇ · ∇ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z².  (2.3)

Thus, Laplace's equation in Cartesian coordinates is

∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z² = 0.  (2.4)

In cylindrical coordinates, the Laplacian is

∇²φ = (1/r) ∂/∂r (r ∂φ/∂r) + (1/r²) ∂²φ/∂θ² + ∂²φ/∂z².  (2.5)

Laplace's equation arises in incompressible fluid flow (in which case φ is the velocity potential), gravitational potential problems, electrostatics, magnetostatics, steady-state heat conduction with no sources (in which case φ is the temperature), and torsion of bars in elasticity (in which case φ(x, y) is the warping function). Functions which satisfy Laplace's equation are referred to as harmonic functions.

2. Poisson's equation

∇²φ + g = 0  (2.6)

This equation arises in steady-state heat conduction with distributed sources (φ = temperature) and torsion of bars in elasticity (in which case φ(x, y) is the stress function).

3. Wave equation

∇²φ = (1/c²) φ̈,  (2.7)

where dots denote time derivatives,

φ̈ = ∂²φ/∂t²,  (2.8)

and c is the speed of propagation. The wave equation arises in several physical situations:

(a) transverse vibrations of a string. For this one-dimensional problem, φ = φ(x, t) is the transverse displacement of the string, and

c = √(T/(ρA)),  (2.9)

where T is the string tension, ρ is the density of the string material, and A is the cross-sectional area of the string. The denominator ρA is mass per unit length.

(b) longitudinal vibrations of a bar. For this one-dimensional problem, φ = φ(x, t) represents the longitudinal displacement, and

c = √(E/ρ),  (2.10)

where E and ρ are, respectively, the modulus of elasticity and density of the bar material.

(c) transverse vibrations of a membrane. For this two-dimensional problem, φ = φ(x, y, t) is the transverse displacement of the membrane (e.g., a drum head), and

c = √(T/m),  (2.11)

where T is the tension per unit length, m is the mass per unit area (i.e., m = ρt), and t is the membrane thickness.
(d) acoustics. For this three-dimensional problem, φ = φ(x, y, z, t) is the fluid pressure or velocity potential, and

c = √(B/ρ),  (2.12)

where B = ρc² is the fluid bulk modulus, ρ is the density, and c is the speed of sound.

4. Helmholtz equation (reduced wave equation)

∇²φ + k²φ = 0  (2.13)

The Helmholtz equation is the time-harmonic form of the wave equation, in which interest is restricted to functions which vary sinusoidally in time. It arises in steady-state (time-harmonic) situations involving the wave equation, e.g., steady-state acoustics. To obtain the Helmholtz equation, we substitute

φ(x, y, z, t) = φ₀(x, y, z) cos ωt  (2.14)

into the wave equation, Eq. 2.7, to obtain

∇²φ₀ cos ωt = −(ω²/c²) φ₀ cos ωt.  (2.15)

If we define the wave number k = ω/c, this equation becomes

∇²φ₀ + k²φ₀ = 0,  (2.16)

and we obtain the Helmholtz equation, Eq. 2.13. With the understanding that the unknown depends only on the spatial variables, the subscript is unnecessary.

5. Heat equation

∇ · (k∇φ) + Q = ρc φ̇  (2.17)

In this equation, φ represents the temperature T, k is the thermal conductivity, Q is the internal heat generation per unit volume per unit time, ρ is the material density, and c is the material specific heat (the heat required per unit mass to raise the temperature by one degree). The thermal conductivity k is defined by Fourier's law of heat conduction:

q̂_x = −kA dT/dx,  (2.18)

where q̂_x is the rate of heat conduction (energy per unit time), with typical units J/s or BTU/hr, and A is the area through which the heat flows. Alternatively, Fourier's law is written

q_x = −k dT/dx,  (2.19)

where q_x is energy per unit time per unit area (with typical units J/(s·m²)).

There are several special cases of the heat equation of interest:

(a) homogeneous material (k = constant):

k∇²φ + Q = ρc φ̇  (2.20)

(b) homogeneous material, steady-state (time-independent):

∇²φ = −Q/k  (Poisson's equation)  (2.21)

(c) homogeneous material, steady-state, no sources (Q = 0):

∇²φ = 0  (Laplace's equation)  (2.22)

2.2 Classification of Partial Differential Equations

Of the classical PDEs summarized in the preceding section, some involve time, and some don't, so presumably their solutions would exhibit fundamental differences. Of those that involve time (the wave and heat equations), the order of the time derivative is different, so the fundamental character of their solutions may also differ. Both these speculations turn out to be true.

Consider the general, second-order, linear partial differential equation in two variables

A u_xx + B u_xy + C u_yy + D u_x + E u_y + F u = G,  (2.23)

where the coefficients are functions of the independent variables x and y (i.e., A = A(x, y), B = B(x, y), etc.), and we have used subscripts to denote partial derivatives, e.g.,

u_xx = ∂²u/∂x².  (2.24)

The quantity B² − 4AC is referred to as the discriminant of the equation. The behavior of the solution of Eq. 2.23 depends on the sign of the discriminant according to the following table:

B² − 4AC   Equation Type   Typical Physics Described
< 0        Elliptic        Steady-state phenomena
= 0        Parabolic       Heat flow and diffusion processes
> 0        Hyperbolic      Vibrating systems and wave motion

The names elliptic, parabolic, and hyperbolic arise from the analogy with the conic sections in analytic geometry. Given these definitions, we can classify the common equations of mathematical physics already encountered as follows:

Name       Eq. Number   Eq. in Two Variables     A, B, C                  Type
Laplace    2.1          u_xx + u_yy = 0          A = C = 1, B = 0         Elliptic
Poisson    2.6          u_xx + u_yy = −g         A = C = 1, B = 0         Elliptic
Helmholtz  2.13         u_xx + u_yy + k²u = 0    A = C = 1, B = 0         Elliptic
Wave       2.7          u_xx − u_yy/c² = 0       A = 1, C = −1/c², B = 0  Hyperbolic
Heat       2.17         k u_xx − ρc u_y = −Q     A = k, B = C = 0         Parabolic
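The classification rule is simple enough to encode directly. A small sketch (the helper function is hypothetical, not from the notes), reproducing three rows of the table above:

```python
def classify(A, B, C):
    """Classify Eq. 2.23 by the sign of the discriminant B^2 - 4AC."""
    disc = B * B - 4.0 * A * C
    if disc < 0:
        return "elliptic"
    if disc == 0:
        return "parabolic"
    return "hyperbolic"

c = 2.0   # wave speed (arbitrary illustrative value)
k = 1.0   # thermal conductivity (arbitrary illustrative value)
print(classify(1, 0, 1))           # Laplace:  elliptic
print(classify(1, 0, -1 / c**2))   # wave:     hyperbolic
print(classify(k, 0, 0))           # heat:     parabolic
```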
In the wave and heat equations in the above table, y represents the time variable. Elliptic equations characterize static (time-independent) situations, and the other two types of equations characterize time-dependent situations. The behavior of the solutions of equations of different types differs.

2.3 Transformation to Nondimensional Form

It is often convenient, when solving equations, to transform the equation to a nondimensional form. Consider, for example, the one-dimensional wave equation

∂²u/∂x² = (1/c²) ∂²u/∂t²,  (2.25)

where u is displacement (of dimension length), t is time, and c is the speed of propagation (of dimension length/time). Let L represent some characteristic length associated with the problem. We define the nondimensional variables

x̄ = x/L, ū = u/L, t̄ = ct/L,  (2.26)

in which case the derivatives in Eq. 2.25 become

∂u/∂x = ∂(Lū)/∂x = ∂ū/∂x̄,  (2.27)

∂²u/∂x² = ∂/∂x (∂ū/∂x̄) = [∂/∂x̄ (∂ū/∂x̄)] (dx̄/dx) = (1/L) ∂²ū/∂x̄².  (2.28)

Similarly,

∂u/∂t = ∂(Lū)/∂t = L (∂ū/∂t̄)(dt̄/dt) = c ∂ū/∂t̄,  (2.29)

∂²u/∂t² = ∂/∂t (c ∂ū/∂t̄) = c (∂²ū/∂t̄²)(dt̄/dt) = (c²/L) ∂²ū/∂t̄².  (2.30)

Thus, from Eq. 2.25,

(1/L) ∂²ū/∂x̄² = (1/c²)(c²/L) ∂²ū/∂t̄²  (2.31)

or

∂²ū/∂x̄² = ∂²ū/∂t̄².  (2.32)

This is the nondimensional wave equation. This last equation can also be obtained more easily by direct substitution of Eq. 2.26 into Eq. 2.25 and factoring out the constants:

∂²(Lū)/∂(Lx̄)² = (1/c²) ∂²(Lū)/∂(Lt̄/c)²  (2.33)

or

(L/L²) ∂²ū/∂x̄² = (1/c²)(c²L/L²) ∂²ū/∂t̄²,  (2.34)

which again yields Eq. 2.32.
3 Finite Difference Solution of Partial Differential Equations

3.1 Parabolic Equations

Consider the boundary-initial value problem (BIVP)

u_xx = (1/c) u_t, 0 < x < 1, t > 0, u = u(x, t)
u(0, t) = u(1, t) = 0  (boundary conditions)
u(x, 0) = f(x)  (initial condition),  (3.1)

where c is a constant. This problem represents transient heat conduction in a rod with the ends held at zero temperature and an initial temperature profile f(x). To solve this problem numerically, we discretize x and t such that

x_i = ih, i = 0, 1, 2, ...; t_j = jk, j = 0, 1, 2, ....  (3.2)

3.1.1 Explicit Finite Difference Method

Let u_{i,j} be the numerical approximation to u(x_i, t_j). We approximate u_t with the forward finite difference

u_t ≈ (u_{i,j+1} − u_{i,j})/k  (3.3)

and u_xx with the central finite difference

u_xx ≈ (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h².  (3.4)

The finite difference approximation to the PDE is then

(u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h² = (u_{i,j+1} − u_{i,j})/(ck).  (3.5)

The domain of the problem and the mesh are illustrated in Fig. 5. Define the parameter

r = ck/h² = c∆t/(∆x)²,  (3.6)

in which case Eq. 3.5 becomes

u_{i,j+1} = r u_{i−1,j} + (1 − 2r) u_{i,j} + r u_{i+1,j}.  (3.7)

Eq. 3.7 is a recursive relationship giving u in a given row (time) in terms of three consecutive values of u in the row below (one time step earlier). This equation is referred to as an explicit formula since one unknown value can be found directly in terms of several other known values. The recursive relationship can also be sketched with the stencil shown in Fig. 6. For example, for r = 1/10, we have the stencil shown in Fig. 7. That is, for r = 1/10, the solution (temperature) at the new point depends on the three points at the previous time step with a 1-8-1 weighting.
Figure 5: Mesh for 1D Heat Equation.

Figure 6: Heat Equation Stencil for Explicit Finite Difference Algorithm.

Figure 7: Heat Equation Stencil for r = 1/10.

Figure 8: Heat Equation Stencils for r = 1/2 and r = 1.
It can be shown that, for a stable solution, 0 < r ≤ 1/2. An unstable solution is one for which small errors grow rather than decay as the solution evolves. Notice that, if r = 1/2, the solution at the new point is independent of the closest point, which is counterintuitive. For r > 1/2 (e.g., r = 1), the new point depends negatively on the closest point, as illustrated in Fig. 8, which is also counterintuitive.

The instability which occurs for r > 1/2 can be illustrated with the following example. Consider the boundary-initial value problem (in nondimensional form)

u_xx = u_t, 0 < x < 1, u = u(x, t), u(0, t) = u(1, t) = 0,  (3.8)

u(x, 0) = f(x) = { 2x, 0 ≤ x ≤ 1/2; 2(1 − x), 1/2 ≤ x ≤ 1 }.  (3.9)

The physical problem is to compute the temperature history u(x, t) for a bar with a prescribed initial temperature distribution f(x), no internal heat sources, and zero temperature prescribed at both ends. We solve this problem using the explicit finite difference algorithm with h = ∆x = 0.1 and k = ∆t = rh² = r(∆x)² for two different values of r: r = 0.48 and r = 0.52. The two numerical solutions (Figs. 9 and 10) are compared with the analytic solution

u(x, t) = Σ_{n=1}^∞ [8/(nπ)²] sin(nπ/2) sin(nπx) e^{−(nπ)²t},

which can be obtained by the technique of separation of variables. The instability for r > 1/2 can be clearly seen in Fig. 10. Thus, a disadvantage of this explicit method is that a small time step ∆t must be used to maintain stability. This disadvantage will be removed with the Crank-Nicolson algorithm.

Figure 9: Explicit Finite Difference Solution With r = 0.48.
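The stability contrast above is easy to reproduce. A sketch of the explicit march of Eq. 3.7 with the same triangular initial profile and mesh (h = 0.1); run long enough, the r = 0.52 solution begins to grow while the r = 0.48 solution decays:

```python
import numpy as np

def explicit_heat(r, nx=10, nsteps=200):
    """March the explicit scheme of Eq. 3.7 (with c = 1) for the
    triangular initial temperature profile used in the example above."""
    x = np.linspace(0.0, 1.0, nx + 1)
    u = np.where(x <= 0.5, 2.0 * x, 2.0 * (1.0 - x))   # initial condition
    for _ in range(nsteps):
        # New time level from the three neighbors at the level below
        u[1:-1] = r * u[:-2] + (1.0 - 2.0 * r) * u[1:-1] + r * u[2:]
        u[0] = u[-1] = 0.0                             # end temperatures
    return u

stable = explicit_heat(0.48)
unstable = explicit_heat(0.52)
print(np.abs(stable).max())    # decays toward zero
print(np.abs(unstable).max())  # grows: the r > 1/2 instability
```

Note that the time step k = r h² differs between the two runs; the comparison is of boundedness, not of solutions at equal times.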
Figure 10: Explicit Finite Difference Solution With r = 0.52.

3.1.2 Crank-Nicolson Implicit Method

The Crank-Nicolson (CN) method is a stable algorithm which allows a larger time step than could be used in the explicit method; its stability does not depend on the parameter r. Consider again the PDE of Eq. 3.1:

u_xx = (1/c) u_t, 0 < x < 1, t > 0, u = u(x, t)
u(0, t) = u(1, t) = 0  (boundary conditions)
u(x, 0) = f(x)  (initial condition).  (3.10)

The basis for the Crank-Nicolson algorithm is writing the finite difference equation at a mid-level in time, (i, j + 1/2). The finite difference approximation to the x derivative at level j + 1/2 is computed as the average of the two central difference approximations at levels j and j + 1. The PDE is thus approximated numerically by

(1/2)[(u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h² + (u_{i+1,j+1} − 2u_{i,j+1} + u_{i−1,j+1})/h²] = (u_{i,j+1} − u_{i,j})/(ck),  (3.11)

where the right-hand side is a central difference approximation to the time derivative at the middle point j + 1/2. We again define the parameter

r = ck/h² = c∆t/(∆x)²  (3.12)

and rearrange Eq. 3.11 with all j + 1 terms on the left-hand side:

−r u_{i−1,j+1} + 2(1 + r) u_{i,j+1} − r u_{i+1,j+1} = r u_{i−1,j} + 2(1 − r) u_{i,j} + r u_{i+1,j}.  (3.13)

This formula is called the Crank-Nicolson algorithm. Fig. 11 shows the points involved in the Crank-Nicolson scheme.

To get the process started, let j = 0, and write the CN equation for each i = 1, 2, ..., N to obtain N simultaneous equations in N unknowns, where N is the number of interior mesh points on the row. (The boundary points, with known values, are excluded.) If we start at the bottom row (j = 0) and move up, the right-hand side values of Eq. 3.13 are known, and the left-hand side values of that equation are unknown. This system of equations is a tridiagonal system, since each equation has three consecutive nonzeros centered around the diagonal. To advance in time, we then increment j to j = 1, and solve a new system of equations.
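A time step of the scheme of Eq. 3.13 requires one tridiagonal solve. A minimal sketch (mine, not from the notes), using NumPy's dense solver for brevity where a tridiagonal solver like that of Section 1.6.2 would be used in practice; the test decays a single sine mode and compares against the exact solution of u_t = u_xx:

```python
import numpy as np

def crank_nicolson(f0, r, nsteps):
    """Advance u_xx = u_t (c = 1) with the Crank-Nicolson scheme, Eq. 3.13.
    f0 holds initial values at all mesh points; u = 0 at both ends."""
    u = np.array(f0, dtype=float)
    n = len(u) - 2                      # number of interior unknowns
    A = np.zeros((n, n))                # left-hand side coefficients
    for i in range(n):
        A[i, i] = 2.0 * (1.0 + r)
        if i > 0:
            A[i, i - 1] = -r
        if i < n - 1:
            A[i, i + 1] = -r
    for _ in range(nsteps):
        rhs = r * u[:-2] + 2.0 * (1.0 - r) * u[1:-1] + r * u[2:]
        u[1:-1] = np.linalg.solve(A, rhs)   # same matrix every step
    return u

# Decay of one mode: u(x, t) = sin(pi x) exp(-pi^2 t)
nx, r = 20, 1.0                        # note r > 1/2 is fine here
h = 1.0 / nx
k = r * h * h                          # time step
x = np.linspace(0.0, 1.0, nx + 1)
u = crank_nicolson(np.sin(np.pi * x), r, 100)
exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * 100 * k)
print(np.abs(u - exact).max())         # small, even though r = 1
```

Because the coefficient matrix never changes, factoring it once and reusing the factors (as discussed below) would remove the per-step solve cost.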
Figure 11: Mesh for Crank-Nicolson.

Figure 12: Stencil for Crank-Nicolson Algorithm.

A sketch of the CN stencil is shown in Fig. 12. An approach which requires the solution of simultaneous equations is called an implicit algorithm. It can be shown that the CN algorithm is stable for any r, although better accuracy results from a smaller r. A smaller r corresponds to a smaller time step size (for a fixed spatial mesh). CN also gives better accuracy than the explicit approach for the same r. Note that the coefficient matrix of the CN system of equations does not change from step to step. Thus, one could compute and save the LU factors of the coefficient matrix, and merely do the forward-backward substitution (FBS) at each new time step, thus speeding up the calculation. This speedup would be particularly significant in higher dimensional problems, where the coefficient matrix is no longer tridiagonal.

3.1.3 Derivative Boundary Conditions

Consider the boundary-initial value problem

u_xx = (1/c) u_t, u = u(x, t)
u(0, t) = 0, u_x(1, t) = g(t)  (boundary conditions)
u(x, 0) = f(x)  (initial condition).  (3.14)

The only difference between this problem and the one considered earlier in Eq. 3.1 is the right-hand side boundary condition, which now involves a derivative (a Neumann boundary condition). We introduce extra "phantom" points to the right of the boundary (outside the domain). Assume a mesh labeling as shown in Fig. 13, and assume we use an explicit finite difference algorithm. Consider boundary Point 25.
Figure 13: Treatment of Derivative Boundary Conditions.

Since u25 is not known, we must write the finite difference equation for u25:

u25 = r u14 + (1 − 2r) u24 + r u34.  (3.15)

On the other hand, a central finite difference approximation to the x derivative at Point 24 is

(u34 − u14)/(2h) = g24.  (3.16)

The phantom variable u34 can then be eliminated from the last two equations to yield a new equation for the boundary point u25:

u25 = 2r u14 + (1 − 2r) u24 + 2rh g24.  (3.17)

3.2 Hyperbolic Equations

3.2.1 The d'Alembert Solution of the Wave Equation

Before addressing the finite difference solution of hyperbolic equations, we review some background material on such equations. The time-dependent transverse response of an infinitely long string satisfies the one-dimensional wave equation with nonzero initial displacement and velocity specified:

∂²u/∂x² = (1/c²) ∂²u/∂t², −∞ < x < ∞, t > 0
u(x, 0) = f(x), ∂u(x, 0)/∂t = g(x)
lim_{x→±∞} u(x, t) = 0,  (3.18)

where x is distance along the string, t is time, u(x, t) is the transverse displacement, f(x) is the initial displacement, g(x) is the initial velocity, and the constant c is given by

c = √(T/(ρA)),  (3.19)

where T is the tension (force) in the string, ρ is the density of the string material, and A is the cross-sectional area of the string. Note that c has the dimension of velocity. This equation assumes that all motion is vertical and that the displacement u and its slope ∂u/∂x are both small.
It can be shown by direct substitution into Eq. 3.18 that the solution of this system is

u(x, t) = (1/2)[f(x − ct) + f(x + ct)] + (1/2c) ∫_{x−ct}^{x+ct} g(τ) dτ.  (3.20)

Eq. 3.20 is known as the d'Alembert solution of the one-dimensional wave equation. The differentiation of the integral in Eq. 3.20 is effected with the aid of Leibnitz's rule:

d/dx ∫_{A(x)}^{B(x)} h(x, t) dt = h(x, B) dB/dx − h(x, A) dA/dx + ∫_{A(x)}^{B(x)} ∂h(x, t)/∂x dt.  (3.21)

For the special case g(x) = 0 (zero initial velocity), the d'Alembert solution simplifies to

u(x, t) = (1/2)[f(x − ct) + f(x + ct)],  (3.22)

which may be interpreted as two waves, each equal to f(x)/2, which travel at speed c to the right and left, respectively. The argument x − ct remains constant if, as t increases, x also increases at speed c. Thus, the wave f(x − ct) moves to the right (increasing x) with speed c without change of shape. Similarly, the wave f(x + ct) moves to the left (decreasing x) with speed c without change of shape.

For example, let f(x), the initial displacement, be given by

f(x) = { b − |x|, |x| ≤ b; 0, |x| ≥ b },  (3.23)

which is a triangular pulse of width 2b and height b (Fig. 14). For t > 0, half this pulse travels in opposite directions from the origin. For t > b/c, the two half-pulses have completely separated, and the neighborhood of the origin has returned to rest. If f(x) is nonzero only for a small domain, then, after both waves have passed the region of initial disturbance, the string returns to its rest position.

For the special case f(x) = 0 (zero initial displacement), the d'Alembert solution simplifies to

u(x, t) = (1/2c) ∫_{x−ct}^{x+ct} g(τ) dτ = (1/2)[G(x + ct) − G(x − ct)],  (3.24)

where

G(x) = (1/c) ∫_{−∞}^{x} g(τ) dτ.  (3.25)

This solution may be interpreted as the combination (difference, in this case) of two identical functions G(x)/2, each with speed c, one moving left and one moving right, similar to the initial displacement special case.
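The d'Alembert formula can be evaluated directly. A sketch (mine, not from the notes), using the triangular pulse of Eq. 3.23 with b = 1 and approximating the velocity integral with the trapezoidal rule:

```python
def dalembert(f, g, c, x, t, quad_steps=400):
    """Evaluate the d'Alembert solution, Eq. 3.20:
    u = [f(x-ct) + f(x+ct)]/2 + (1/2c) * integral of g over [x-ct, x+ct]."""
    a, b = x - c * t, x + c * t
    h = (b - a) / quad_steps
    # Trapezoidal rule for the integral of the initial velocity g
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, quad_steps))
    return 0.5 * (f(a) + f(b)) + s * h / (2.0 * c)

def f(x):
    # Triangular initial displacement of Eq. 3.23 with b = 1
    return max(0.0, 1.0 - abs(x))

g = lambda x: 0.0   # zero initial velocity

u_origin = dalembert(f, g, c=1.0, x=0.0, t=2.0)
u_right = dalembert(f, g, c=1.0, x=2.0, t=2.0)
print(u_origin)  # → 0.0  (both half-pulses have left the origin, t > b/c)
print(u_right)   # → 0.5  (peak of the right-moving half-pulse)
```

The two printed values illustrate the behavior described above: the origin has returned to rest, while each half-pulse retains half the original height.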
Nonzero initial velocity disturbances also travel at speed c. For example, let the initial velocity g(x) be given by

g(x) = { Mc, |x| ≤ b; 0, |x| > b },  (3.26)

where the constant M is the dimensionless Mach number, and c is the wave speed. This is a rectangular pulse of width 2b and height Mc (Fig. 15). The travelling wave G(x) is given by Eq. 3.25 as

G(x) = { 0, x ≤ −b; M(x + b), −b ≤ x ≤ b; 2Mb, x ≥ b }.  (3.27)

Half this wave travels in opposite directions at speed c (Fig. 16). Even though g(x) is nonzero only near the origin, the travelling wave G(x) is constant and nonzero for x > b. Thus, as time advances, the center section of the string reaches a state of rest, but not in its original position (Fig. 16).

From the preceding discussion, it is clear that disturbances travel with speed c. For an observer at some fixed location, initial displacements occurring elsewhere pass by after a finite time has elapsed, and then the string returns to rest in its original position. Nonzero initial velocity disturbances, however, once having reached some location, will continue to influence the solution from then on. Thus, the domain of influence of the data at, say, x = x₀ on the solution consists of all points closer than ct (in either direction) to x₀, the location of the disturbance (Fig. 17).

Figure 14: Propagation of Initial Displacement.

Figure 15: Initial Velocity Function.

Figure 16: Propagation of Initial Velocity.
Figure 17: Domains of Influence and Dependence.

Conversely, the domain of dependence of the solution on the initial data consists of all points within a distance ct of the solution point. That is, the solution at (x, t) depends on the initial data for all locations in the range (x − ct, x + ct), which are the limits of integration in the d'Alembert solution, Eq. 3.20.

3.2.2 Finite Differences

From the preceding discussion of the d'Alembert solution, we see that hyperbolic equations involve wave motion. If the initial data are discontinuous (as, for example, in shocks), the most accurate and the most convenient approach for solving the equations is probably the method of characteristics. On the other hand, problems without discontinuities can probably be solved most conveniently using finite difference and finite element techniques. Here we consider finite differences.

Consider the boundary-initial value problem (BIVP)

u_xx = (1/c²) u_tt, 0 < x < a, t > 0, u = u(x, t)
u(0, t) = u(a, t) = 0  (boundary conditions)
u(x, 0) = f(x), u_t(x, 0) = g(x)  (initial conditions).  (3.28)

This problem represents the transient (time-dependent) vibrations of a string fixed at the two ends with both initial displacement f(x) and initial velocity g(x) specified. A central finite difference approximation to the PDE, Eq. 3.28, yields

(u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h² = (u_{i,j+1} − 2u_{i,j} + u_{i,j−1})/(c²k²).  (3.29)

We define the parameter

r = ck/h = c∆t/∆x  (3.30)

and solve for u_{i,j+1}:

u_{i,j+1} = r² u_{i−1,j} + 2(1 − r²) u_{i,j} + r² u_{i+1,j} − u_{i,j−1}.  (3.31)

Fig. 18 shows the mesh points involved in this recursive scheme. If we know the solution for all time values up to the jth time step, we can compute the solution u_{i,j+1} at Step j + 1 in terms of known quantities. Thus, this algorithm is an explicit algorithm.
Figure 18: Mesh for Explicit Solution of Wave Equation.

Figure 19: Stencil for Explicit Solution of Wave Equation.

The corresponding stencil is shown in Fig. 19. It can be shown that this finite difference algorithm is stable if r ≤ 1 and unstable if r > 1. This stability condition is known as the Courant, Friedrichs, and Lewy (CFL) condition or simply the Courant condition. It can be further shown that a theoretically correct solution is obtained when r = 1, and that the accuracy of the solution decreases as the value of r decreases farther and farther below the value of 1. Thus, the time step size ∆t should be chosen, if possible, so that r = 1. If that ∆t is inconvenient for a particular calculation, ∆t should be selected as large as possible without exceeding the stability limit, r ≤ 1.

An intuitive rationale behind the stability requirement r ≤ 1 can also be made using Fig. 20. Since the numerical domain of dependence (NDD) spreads by one mesh point at each level for earlier time, if r > 1, the NDD would be smaller than the actual domain of dependence (ADD) for the PDE. Thus, if NDD < ADD, the numerical solution would be independent of data outside NDD but inside ADD. That is, the numerical solution would ignore necessary information, which cannot yield a stable, convergent result.

3.2.3 Starting Procedure for Explicit Algorithm

We note that the explicit finite difference scheme just described for the wave equation requires the numerical solution at two consecutive time steps to step forward to the next time. Thus, at t = 0, a special procedure is needed to advance from j = 0 to j = 1.
Figure 20: Domains of Dependence for r > 1.

The explicit finite difference algorithm is given in Eq. 3.31. To compute the solution at the end of the first time step, let j = 0:

u_{i,1} = r² u_{i−1,0} + 2(1 − r²) u_{i,0} + r² u_{i+1,0} − u_{i,−1},  (3.32)

where the right-hand side is known (from the initial condition) except for u_{i,−1}. However, we can write a central difference approximation to the first time derivative at t = 0:

(u_{i,1} − u_{i,−1})/(2k) = g_i  (3.33)

or

u_{i,−1} = u_{i,1} − 2k g_i,  (3.34)

where g_i is the initial velocity g(x) evaluated at the ith point, i.e., g_i = g(x_i). If we substitute this last result into Eq. 3.32 (to eliminate u_{i,−1}), we obtain

u_{i,1} = r² u_{i−1,0} + 2(1 − r²) u_{i,0} + r² u_{i+1,0} − u_{i,1} + 2k g_i  (3.35)

or

2u_{i,1} = r² u_{i−1,0} + 2(1 − r²) u_{i,0} + r² u_{i+1,0} + 2k g_i  (3.36)

or

u_{i,1} = (r²/2) u_{i−1,0} + (1 − r²) u_{i,0} + (r²/2) u_{i+1,0} + k g_i.  (3.37)

This is the difference equation used for the first row. Thus, to implement the explicit finite difference algorithm, we use Eq. 3.37 for the first time step and Eq. 3.31 for all subsequent time steps.

3.2.4 Nonreflecting Boundaries

In some applications, it is of interest to model domains that are large enough to be considered infinite in extent. In a finite difference representation of the domain, an infinite boundary has
to be truncated at some sufficiently large distance. At such a boundary, a suitable boundary condition must be imposed to ensure that outgoing waves are not reflected. Consider a vibrating string which extends to infinity for large x, and let the initial velocity be zero. The d'Alembert solution, Eq. 3.20, of the one-dimensional wave equation c²u_xx = u_tt can be written in the form

u(x, t) = F₁(x − ct) + F₂(x + ct),  (3.38)

where F₁ represents a wave advancing at speed c toward the boundary, and F₂ represents the returning wave, which should not exist if the boundary is nonreflecting. We truncate the computational domain at some finite x. With F₂ = 0, we differentiate u with respect to x and t to obtain

∂u/∂x = F₁', ∂u/∂t = −c F₁',  (3.39)

where the prime denotes the derivative with respect to the argument. Thus,

∂u/∂x = −(1/c) ∂u/∂t.  (3.40)

This is the one-dimensional nonreflecting boundary condition. This condition is exact in 1D (i.e., for plane waves) and approximate in higher dimensions, where the nonreflecting condition is written

∂u/∂n + (1/c) ∂u/∂t = 0,  (3.41)

where n is the outward unit normal to the boundary. Note that, for the mesh of Fig. 21, the x direction is normal to the boundary.

The nonreflecting boundary condition, Eq. 3.40, can be approximated in the finite difference method with central differences expressed in terms of the phantom point outside the boundary. On the nonreflecting boundary, the general recursive formula is given by Eq. 3.31:

u_{i,j+1} = r² u_{i−1,j} + 2(1 − r²) u_{i,j} + r² u_{i+1,j} − u_{i,j−1}.  (3.42)

The central difference approximation to the nonreflecting boundary condition, Eq. 3.40, at the typical point (i, j) on the nonreflecting boundary is

(u_{i+1,j} − u_{i−1,j})/(2h) = −(1/c)(u_{i,j+1} − u_{i,j−1})/(2k)  (3.43)

Figure 21: Finite Difference Mesh at Nonreflecting Boundary.
Multiplying Eq. 3.42 through by 2h and using r = ck/h gives

r(u_{i+1,j} − u_{i−1,j}) = −(u_{i,j+1} − u_{i,j−1}) (3.43)

or, solving for the phantom point,

u_{i+1,j} = u_{i−1,j} − (1/r)(u_{i,j+1} − u_{i,j−1}). (3.44)

The substitution of Eq. 3.44 into Eq. 3.31 (to eliminate the phantom point) yields

(1 + r)u_{i,j+1} = 2r²u_{i−1,j} + 2(1 − r²)u_{i,j} − (1 − r)u_{i,j−1}. (3.45)

Note that, for r = 1, Eq. 3.45 takes a particularly simple (and perhaps unexpected) form:

u_{i,j+1} = u_{i−1,j}. (3.46)

On the right and left nonreflecting boundaries, Eq. 3.46 implies

u_{n,j+1} = u_{n−1,j}, u_{0,j+1} = u_{1,j}, (3.47)

where the mesh points in the x direction are labeled 0 to n. Note also that, for r = 1, the finite difference formulas 3.31 and 3.37 simplify to

u_{i,j+1} = u_{i−1,j} + u_{i+1,j} − u_{i,j−1} (3.48)

and, for the first time step (j = 0), where the last term is evaluated using the central difference approximation to the (zero) initial velocity,

u_{i,1} = (u_{i−1,0} + u_{i+1,0})/2. (3.49)

To illustrate the perfect wave absorption that occurs in a one-dimensional finite difference model, consider an infinitely-long vibrating string with a nonzero initial displacement and zero initial velocity. The initial displacement is a triangular-shaped pulse in the middle of the string. According to the d'Alembert solution, half the pulse should propagate at speed c to the left and right and be absorbed into the boundaries. We solve the problem with the explicit central finite difference approach with r = ck/h = 1, similar to Fig. 14. The finite difference calculation for this problem results in the following spreadsheet:

 t    x=0   x=1   x=2   x=3   x=4   x=5   x=6   x=7   x=8   x=9   x=10
 0   0.000 0.000 0.000 0.000 1.000 2.000 1.000 0.000 0.000 0.000 0.000
 1   0.000 0.000 0.000 0.500 1.000 1.000 1.000 0.500 0.000 0.000 0.000
 2   0.000 0.000 0.500 1.000 0.500 0.000 0.500 1.000 0.500 0.000 0.000
 3   0.000 0.500 1.000 0.500 0.000 0.000 0.000 0.500 1.000 0.500 0.000
 4   0.500 1.000 0.500 0.000 0.000 0.000 0.000 0.000 0.500 1.000 0.500
 5   1.000 0.500 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.500 1.000
 6   0.500 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.500
 7   0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
 8   0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
 9   0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
10   0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
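The r = 1 recursions above (Eqs. 3.46-3.49) are easy to verify numerically. The sketch below reproduces the spreadsheet calculation in plain Python; the triangular pulse is the one used in the table.

```python
# Explicit central-difference solution of the wave equation with r = ck/h = 1
# and nonreflecting ends: u[0] <- u[1] and u[n] <- u[n-1] each step (Eq. 3.47).
def wave_nonreflecting(u0, nsteps):
    n = len(u0) - 1
    prev = [float(v) for v in u0]
    # first time step, zero initial velocity (Eq. 3.49)
    curr = [0.0] * (n + 1)
    for i in range(1, n):
        curr[i] = 0.5 * (prev[i - 1] + prev[i + 1])
    curr[0], curr[n] = prev[1], prev[n - 1]
    history = [prev, curr]
    for _ in range(nsteps - 1):
        nxt = [0.0] * (n + 1)
        for i in range(1, n):
            nxt[i] = curr[i - 1] + curr[i + 1] - prev[i]  # Eq. 3.48
        nxt[0], nxt[n] = curr[1], curr[n - 1]             # Eq. 3.47
        prev, curr = curr, nxt
        history.append(curr)
    return history

pulse = [0, 0, 0, 0, 1, 2, 1, 0, 0, 0, 0]  # triangular initial displacement
hist = wave_nonreflecting(pulse, 10)
```

By t = 7 every entry is zero: the two half-pulses have left the mesh without reflection.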
Notice that the triangular wave is absorbed without any reflection from the two boundaries.

The nonreflecting boundary condition can be interpreted physically. In general, the condition is

∂u/∂n = −(1/c) ∂u/∂t, (3.52)

where n is the outward unit normal at the nonreflecting boundary; this condition is exact in 1D (i.e., plane waves) and approximate in higher dimensions. Consider, for example, a bar undergoing longitudinal vibration and terminated on the right end with the nonreflecting boundary condition, Eq. 3.40. The internal longitudinal force F in the bar is given by

F = Aσ = AEε = AE ∂u/∂x, (3.50)

where A is the cross-sectional area of the bar, σ is the stress, E is the Young's modulus of the bar material, and u is displacement. For steady-state wave motion, the solution u(x, t) is time-harmonic, i.e.,

u = u0 e^{iωt}, (3.51)

and the nonreflecting boundary condition, Eq. 3.40, becomes

∂u0/∂x = −(iω/c) u0 = −ik u0, (3.53)

where k = ω/c is the wave number. Thus, from Eqs. 3.40 and 3.50, the nonreflecting boundary condition is equivalent to applying an end force given by

F = −(AE/c) v, (3.54)

where v = ∂u/∂t is the velocity. Since, from Eq. 2.10, E = ρc², Eq. 3.54 becomes

F = −(ρcA) v, (3.55)

which is a force proportional to velocity. The minus sign in this equation means that the force opposes the direction of motion, as required to be physically realizable. A mechanical device which applies a force proportional to velocity is a dashpot. Thus, the application of a dashpot with constant ρcA to the end of a finite length bar simulates exactly a bar of infinite length (Fig. 22). Since, in acoustics, the ratio of pressure (force/area) to velocity is referred to as impedance, we see that the characteristic impedance of an acoustic medium is ρc.

Figure 22: Finite Length Simulation of an Infinite Bar. The dashpot constant is ρcA.
3.3 Elliptic Equations

Consider Laplace's equation on the two-dimensional rectangular domain shown in Fig. 23:

∇²T(x, y) = 0 (0 < x < a, 0 < y < b),
T(x, 0) = f1(x), T(x, b) = f2(x), T(0, y) = g1(y), T(a, y) = g2(y). (3.57)

This problem corresponds physically to two-dimensional steady-state heat conduction over a rectangular plate for which the temperature is specified on the boundary.

Figure 23: Laplace's Equation on Rectangular Domain.

We attempt an approximate solution by introducing a uniform rectangular grid over the domain, and we let the point (i, j) denote the point having the ith value of x and the jth value of y (Fig. 24). Then, using central finite difference approximations to the second derivatives (Fig. 25),

∂²T/∂x² ≈ (T_{i−1,j} − 2T_{i,j} + T_{i+1,j})/h², (3.58)

∂²T/∂y² ≈ (T_{i,j−1} − 2T_{i,j} + T_{i,j+1})/h². (3.59)

The finite difference approximation to Laplace's equation thus becomes

(T_{i−1,j} − 2T_{i,j} + T_{i+1,j})/h² + (T_{i,j−1} − 2T_{i,j} + T_{i,j+1})/h² = 0 (3.60)

or

4T_{i,j} − (T_{i−1,j} + T_{i+1,j} + T_{i,j−1} + T_{i,j+1}) = 0. (3.61)

Figure 24: Finite Difference Grid on Rectangular Domain.
Figure 25: The Neighborhood of Point (i, j).

That is, for Laplace's equation with the same uniform mesh in each direction, the solution at a typical point (i, j) is the average of the four neighboring points.

For example, consider the mesh shown in Fig. 26. Although there are 20 mesh points, 14 are on the boundary, where the temperature is known. Thus, the resulting numerical problem has only six degrees of freedom (unknown variables). The application of Eq. 3.61 to each of the six interior points yields

4T6 − T7 − T10 = T2 + T5
−T6 + 4T7 − T11 = T3 + T8
−T6 + 4T10 − T11 − T14 = T9
−T7 − T10 + 4T11 − T15 = T12
−T10 + 4T14 − T15 = T13 + T18
−T11 − T14 + 4T15 = T16 + T19, (3.62)

where all known quantities have been placed on the right-hand side.

Figure 26: 20-Point Finite Difference Mesh.

This linear system of six equations in six unknowns can be solved with standard equation solvers. Because the central difference operator is a 5-point operator, systems of equations of this type have at most five nonzero terms in each equation, regardless of how large the mesh is. Thus, for large meshes, the system of equations is sparsely populated, so that sparse matrix solution techniques are applicable.
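As an illustration, the six-equation system (3.62) can be assembled and solved directly; the boundary temperatures below are made-up values, chosen only to exercise the equations.

```python
# Sketch: assemble and solve the system (3.62) for the mesh of Fig. 26.
import numpy as np

# interior unknowns ordered (T6, T7, T10, T11, T14, T15)
A = np.array([
    [ 4, -1, -1,  0,  0,  0],
    [-1,  4,  0, -1,  0,  0],
    [-1,  0,  4, -1, -1,  0],
    [ 0, -1, -1,  4,  0, -1],
    [ 0,  0, -1,  0,  4, -1],
    [ 0,  0,  0, -1, -1,  4],
], dtype=float)

# boundary temperatures (assumed values, for illustration only)
T = {2: 100.0, 3: 100.0, 5: 0.0, 8: 0.0, 9: 0.0,
     12: 0.0, 13: 0.0, 16: 0.0, 18: 0.0, 19: 0.0}
b = np.array([T[2] + T[5], T[3] + T[8], T[9],
              T[12], T[13] + T[18], T[16] + T[19]])
x = np.linalg.solve(A, b)
```

Consistent with the averaging property, every interior temperature lies strictly between the smallest and largest boundary temperatures.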
Since the numbers assigned to the mesh points in Fig. 26 are merely labels for identification, the pattern of nonzero coefficients appearing on the left-hand side of Eq. 3.62 depends on the choice of mesh ordering. Some equation solvers based on Gaussian elimination operate more efficiently on systems of equations for which the nonzeros in the coefficient matrix are clustered near the main diagonal; such a matrix system is called banded.

Systems of this type can also be solved using an iterative procedure known as relaxation, which uses the following general algorithm:
1. Initialize the boundary points to their prescribed values, and initialize the interior points to zero or some other convenient value (e.g., the average of the boundary values).
2. Loop systematically through the interior mesh points, setting each interior point to the average of its four neighbors.
3. Continue this process until the solution converges to the desired accuracy.

3.3.1 Derivative Boundary Conditions

The approach of the preceding section must be modified for Neumann boundary conditions, in which the normal derivative is specified. For example, consider again the problem of the last section but with a Neumann, rather than Dirichlet, boundary condition on the right side (Fig. 27):

∇²T(x, y) = 0 (0 < x < a, 0 < y < b),
T(x, 0) = f1(x), T(x, b) = f2(x), T(0, y) = g1(y), ∂T(a, y)/∂x = g(y). (3.63)

Figure 27: Laplace's Equation With Dirichlet and Neumann B.C.

We extend the mesh to include additional points to the right of the boundary at x = a (Fig. 28). At a typical point on the boundary (e.g., Point 18), a central difference approximation yields

g18 = ∂T18/∂x ≈ (T22 − T14)/(2h) (3.64)

or

T22 = T14 + 2h g18. (3.65)
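The relaxation algorithm above and the phantom-point treatment of a Neumann side can be sketched together. The example below is illustrative: an insulated right side (g = 0), for which Eq. 3.65 makes the phantom point equal to the interior neighbor, and boundary data chosen so that the exact answer is a linear field.

```python
# Relaxation for Laplace's equation with Dirichlet bottom/top/left edges and
# an insulated (g = 0) right edge handled via the phantom point of Eq. 3.65.
def relax_with_neumann(ny, nx, sweeps=5000, tol=1e-12):
    # T[i][j]: i = row (y index, 0 at bottom), j = column (x index)
    T = [[0.0] * (nx + 1) for _ in range(ny + 1)]
    for i in range(ny + 1):
        T[i][0] = 100.0 * i / ny          # left edge: linear profile
    for j in range(nx + 1):
        T[0][j] = 0.0                     # bottom edge
        T[ny][j] = 100.0                  # top edge
    for _ in range(sweeps):
        change = 0.0
        for i in range(1, ny):
            for j in range(1, nx + 1):
                if j < nx:                # interior: average of 4 neighbors
                    new = 0.25 * (T[i-1][j] + T[i+1][j] + T[i][j-1] + T[i][j+1])
                else:                     # insulated side: phantom = west value
                    new = 0.25 * (T[i-1][j] + T[i+1][j] + 2.0 * T[i][j-1])
                change = max(change, abs(new - T[i][j]))
                T[i][j] = new
        if change < tol:
            break
    return T

T = relax_with_neumann(4, 4)
# converges to the linear field T = 100*y/b, independent of x
```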
Figure 28: Treatment of Neumann Boundary Conditions.

Since the boundary temperature T18 is now unknown, we write the equilibrium equation for Point 18:

4T18 − (T14 + T22 + T17 + T19) = 0, (3.66)

which, when combined with Eq. 3.65, yields

4T18 − 2T14 − T17 − T19 = 2h g18, (3.67)

which imposes the Neumann boundary condition on Point 18.

4 Direct Finite Element Analysis

The finite element method is a numerical procedure for solving partial differential equations. The procedure is used in a variety of applications, including structural mechanics and dynamics, heat transfer, acoustics, fluid flow, electric and magnetic fields, and electromagnetics. Although the main theoretical bases for the finite element method are variational principles and the weighted residual method, it is useful to consider discrete systems first to gain some physical insight into some of the procedures.

4.1 Linear Mass-Spring Systems

Consider the two-degree-of-freedom (DOF) system shown in Fig. 29. The stiffnesses of the two springs are k1 and k2. We let u2 and u3 denote the displacements from the equilibrium of the two masses m2 and m3. The dynamic equilibrium equations could be obtained from Newton's second law (F = ma):

m2 ü2 + k1 u2 − k2(u3 − u2) = 0,
m3 ü3 + k2(u3 − u2) = f3(t), (4.1)

Figure 29: 2-DOF Mass-Spring System.
or

m2 ü2 + (k1 + k2)u2 − k2 u3 = 0,
m3 ü3 − k2 u2 + k2 u3 = f3(t). (4.2)

This system can be rewritten in matrix notation as

M ü + K u = F(t), (4.3)

where

u = { u2, u3 } (4.4)

is the displacement vector (the vector of unknown displacements),

F = { 0, f3 } (4.5)

is the force vector,

M = [ m2  0 ;  0  m3 ] (4.6)

is the system mass matrix, and

K = [ k1+k2  −k2 ;  −k2  k2 ] (4.7)

is the system stiffness matrix.

This approach would be very tedious and error-prone for more complex systems involving many springs and masses. To develop instead a matrix approach, we first isolate one element, as shown in Fig. 30.

Figure 30: A Single Spring Element.

The stiffness matrix Kel for this element satisfies Kel u = F, or

[ k11  k12 ;  k21  k22 ] { u1, u2 } = { f1, f2 }. (4.8)

By expanding this equation, we obtain

k11 u1 + k12 u2 = f1,
k21 u1 + k22 u2 = f2. (4.9)

From this equation, we observe that k11 can be defined as the force on DOF 1 corresponding to enforcing a unit displacement on DOF 1 and zero displacement on DOF 2:

k11 = f1 |_{u1=1, u2=0} = k. (4.10)
Similarly,

k21 = f2 |_{u1=1, u2=0} = −k,
k12 = f1 |_{u1=0, u2=1} = −k, (4.12)
k22 = f2 |_{u1=0, u2=1} = k. (4.13)

Thus,

Kel = [ k  −k ;  −k  k ] = k [ 1  −1 ;  −1  1 ]. (4.14)

In general, for a larger system with many more DOF,

K11 u1 + K12 u2 + K13 u3 + · · · + K1n un = F1
K21 u1 + K22 u2 + K23 u3 + · · · + K2n un = F2
K31 u1 + K32 u2 + K33 u3 + · · · + K3n un = F3
      . . .
Kn1 u1 + Kn2 u2 + Kn3 u3 + · · · + Knn un = Fn,

in which case we can interpret an individual element Kij in the stiffness matrix as the force at DOF i if uj = 1 and all other displacement components are zero:

Kij = Fi |_{uj=1, others=0}. (4.15)

4.2 Matrix Assembly

We now return to the stiffness part of the original problem shown in Fig. 29. In the absence of the masses and constraints, this system is shown in Fig. 31. Since there are three points in this system, each with one DOF, the system is a 3-DOF system.

Figure 31: 3-DOF Spring System.

The system stiffness matrix can be assembled for this system by adding the 3 × 3 stiffness matrices for each element:

K = [ k1  −k1  0 ;  −k1  k1  0 ;  0  0  0 ] + [ 0  0  0 ;  0  k2  −k2 ;  0  −k2  k2 ]
  = [ k1  −k1  0 ;  −k1  k1+k2  −k2 ;  0  −k2  k2 ]. (4.16)

This matrix corresponds to the unconstrained system. The justification for this assembly procedure is that forces are additive. For example, both elements which connect to DOF 2 contribute to k22, since k22 is the force at DOF 2 when u2 = 1 and u1 = u3 = 0.

4.3 Constraints

The system in Fig. 29 has a constraint on DOF 1. If we expand the
(unconstrained) matrix system Ku = F into

K11 u1 + K12 u2 + K13 u3 = F1,
K21 u1 + K22 u2 + K23 u3 = F2,
K31 u1 + K32 u2 + K33 u3 = F3, (4.17)

we see that Row i corresponds to the equilibrium equation for DOF i. If ui = 0, we do not need Row i of the matrix (although that equation can be saved to recover the constraint force later). Also, Column i of the matrix multiplies ui, so that, if ui = 0, we do not need Column i of the matrix. That is, if ui = 0 for some system, we can enforce that constraint by deleting Row i and Column i from the unconstrained matrix.

Figure 32: Spring System With Constraint.

Thus, with u1 = 0 in Fig. 32, we delete Row 1 and Column 1 in Eq. 4.16 to obtain the reduced matrix

K = [ k1+k2  −k2 ;  −k2  k2 ], (4.18)

which is the same matrix obtained previously in Eq. 4.7. Notice that the unconstrained matrix K3×3 is singular, but the constrained matrix K2×2 is nonsingular:

det K3×3 = k1[(k1 + k2)k2 − k2²] + k1(−k1 k2) = 0, (4.19)

whereas

det K2×2 = (k1 + k2)k2 − k2² = k1 k2 ≠ 0. (4.20)

Without constraints, K is singular (and the solution of the mechanical problem is not unique) because of the presence of rigid body modes.

4.4 Example and Summary

Consider the 4-DOF spring system shown in Fig. 33.

Figure 33: 4-DOF Spring System.

The unconstrained stiffness matrix for
this system is

K = [ k1+k4   −k1      −k4          0
      −k1     k1+k2    −k2          0
      −k4     −k2      k2+k3+k4    −k3
      0       0        −k3          k3 ]. (4.21)

We summarize several properties of stiffness matrices:
1. K is symmetric. This property is a special case of the Betti reciprocal theorem in mechanics.
2. For systems, like spring systems, that have only one DOF at each point, the sum of any matrix column or row is zero. This property is a consequence of equilibrium: the matrix entries in Column i consist of all the grid point forces when ui = 1 and the other DOF are fixed, and, since the object is in static equilibrium, the forces must sum to zero.
3. K is sparse in general and usually banded. An off-diagonal term is zero unless the two points are common to the same element.
4. K is singular without enough constraints to eliminate rigid body motion.

We summarize the solution procedure for spring systems:
1. Generate the element stiffness matrices.
2. Assemble the system K and F.
3. Apply constraints.
4. Solve Ku = F for u.
5. Compute reactions, spring forces, and stresses.

4.5 Pin-Jointed Rod Element

Consider the pin-jointed rod element (an axial member) shown in Fig. 34. From mechanics of materials, we recall that the change in displacement u for a rod of length L subjected to an axial force F is

u = FL/(AE), (4.22)

where A is the rod cross-sectional area, and E is the Young's modulus for the rod material. Thus, the axial stiffness is

k = F/u = AE/L. (4.23)

The rod is therefore equivalent to a scalar spring with k = AE/L, and

Kel = (AE/L) [ 1  −1 ;  −1  1 ]. (4.24)
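The assembly rule of Section 4.2, where each spring (or rod, via k = AE/L) adds k[1 −1; −1 1] into the rows and columns of its two DOF, can be sketched as follows; the stiffness values are arbitrary.

```python
# Sketch of the assembly procedure: each two-DOF element contributes
# k * [[1, -1], [-1, 1]] to the rows/columns of its end DOF.
def assemble(n_dof, springs):
    """springs: list of (dof_a, dof_b, k); returns the unconstrained K."""
    K = [[0.0] * n_dof for _ in range(n_dof)]
    for a, b, k in springs:
        K[a][a] += k
        K[b][b] += k
        K[a][b] -= k
        K[b][a] -= k
    return K

# the 3-DOF system of Fig. 31: k1 between points 1-2, k2 between points 2-3
k1, k2 = 10.0, 20.0
K = assemble(3, [(0, 1, k1), (1, 2, k2)])
# K == [[k1, -k1, 0], [-k1, k1 + k2, -k2], [0, -k2, k2]], as in Eq. 4.16
```

Note that every row and column of the assembled matrix sums to zero, as expected from equilibrium.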
Figure 34: Pin-Jointed Rod Element.

Figure 35: Truss Structure Modeled With Pin-Jointed Rods.
The rod element can be used to model truss structures such as that shown in Fig. 35. However, matrix assembly for a truss structure (a structure made of pin-jointed rods) differs from that for a collection of springs, since the rod elements are not all colinear (e.g., Fig. 35). Thus the elements must be transformed to a common coordinate system before the element matrices can be assembled into the system stiffness matrix.

A typical rod element in 2D is shown in Fig. 36. To use this element in 2D trusses requires expanding the 2 × 2 matrix Kel into a 4 × 4 matrix. The four DOF for the element are also shown in Fig. 36.

Figure 36: The Degrees of Freedom for a Pin-Jointed Rod Element in 2D.

To compute the entries in the first column of the 4 × 4 Kel, we set u1 = 1, u2 = u3 = u4 = 0, and compute the four grid point forces F1, F2, F3, F4, as shown in Fig. 37. For example, at Point 1,

Fx = F cos θ = (k · 1 · cos θ) cos θ = k cos²θ = k11 = −k31, (4.25)
Fy = F sin θ = (k · 1 · cos θ) sin θ = k cos θ sin θ = k21 = −k41, (4.26)

where k = AE/L, and the forces at the opposite end of the rod (x2, y2) are equal in magnitude and opposite in sign. These four values complete the first column of the matrix. Similarly, we can find the rest of the matrix to obtain

Kel = (AE/L) [ c²    cs    −c²   −cs
               cs    s²    −cs   −s²
              −c²   −cs     c²    cs
              −cs   −s²     cs    s² ], (4.27)

Figure 37: Computing 2D Stiffness of Pin-Jointed Rod.
where

c = cos θ = (x2 − x1)/L, s = sin θ = (y2 − y1)/L. (4.28)

Note that the angle θ itself is not of interest, since only c and s are needed. Later, in the discussion of matrix transformations, we will derive a more convenient way to obtain this matrix.

4.6 Pin-Jointed Frame Example

We illustrate the matrix assembly and solution procedures for truss structures with the simple frame shown in Fig. 38. For this frame, we assume E = 30 × 10⁶ psi, L = 10 in, and A = 1 in². The two members are inclined at ±45°, so that, for one member, s = c = a/L and, for the other, s = −c = a/L, where

a/L = 1/√2.

Figure 38: Pin-Jointed Frame Example.

Before the application of constraints, the stiffness matrix is

K = k0 [ 1   −1   −1    1    0    0
        −1    1    1   −1    0    0
        −1    1   1+1  −1+1 −1   −1
         1   −1  −1+1   1+1 −1   −1
         0    0   −1   −1    1    1
         0    0   −1   −1    1    1 ], (4.29)

where

k0 = (AE/L)(a/L)² = 1.5 × 10⁶ lb/in. (4.30)

The applied load of 10,000 lb acting at 60° has the components (5000, −5000√3) lb at the loaded point. After constraints (all DOF except DOF 3 and 4 are fixed), the system of equations is

k0 [ 2  0 ;  0  2 ] { u3, u4 } = { 5000, −5000√3 }, (4.31)

whose solution is

u3 = 1.67 × 10⁻³ in, u4 = −2.89 × 10⁻³ in. (4.32)
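A quick numerical check of the frame example, using the values assumed in the text (k0 = 1.5 × 10⁶ lb/in and load components 5000 and −5000√3 lb):

```python
# Check of the frame example of Section 4.6.
import math

AE_over_L = 1.0 * 30e6 / 10.0          # A*E/L = 3e6 lb/in
k0 = AE_over_L * 0.5                   # (AE/L)(a/L)^2 with a/L = 1/sqrt(2)
u3 = 5000.0 / (2.0 * k0)               # from k0 * 2 * u3 = 5000
u4 = -5000.0 * math.sqrt(3) / (2.0 * k0)
# u3 ~ 1.67e-3 in, u4 ~ -2.89e-3 in, matching Eq. 4.32
```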
The reactions at the constrained DOF can be recovered from the deleted rows of the unconstrained stiffness matrix:

R1 = k0(−u3 + u4) = −6840 lb,
R2 = k0(u3 − u4) = 6840 lb,
R5 = k0(−u3 − u4) = 1830 lb,
R6 = k0(−u3 − u4) = 1830 lb. (4.33)

4.7 Boundary Conditions by Matrix Partitioning

We recall that, to enforce ui = 0, we could delete Row i and Column i from the stiffness matrix. By using matrix partitioning, we can also treat nonzero constraints and recover the forces of constraint.

Consider the finite element matrix system Ku = F. We partition the unknown displacement vector into

u = { uf, us }, (4.34)

where uf and us denote, respectively, the free and constrained DOF; some DOF are free, and some are specified (us is prescribed). A partitioning of the matrix system then yields

[ Kff  Kfs ;  Ksf  Kss ] { uf, us } = { Ff, Fs }. (4.35)

From the first partition,

Kff uf + Kfs us = Ff (4.36)

or

Kff uf = Ff − Kfs us, (4.37)

which can be solved for the unknown uf. The second partition then yields the forces at the constrained DOF:

Fs = Ksf uf + Kss us, (4.38)

where uf is now known.

The grid point forces Fs at the constrained DOF consist of both the reactions of constraint and the applied loads, if any, at those DOF. For example, consider the beam shown in Fig. 39. When the applied load is distributed to the grid points, the loads at the two end grid points include both reactions and a portion of the applied load.

Figure 39: Example With Reactions and Loads at Same DOF.
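The partitioned solution (Eqs. 4.34-4.38) is a short exercise in index bookkeeping; the sketch below uses NumPy index arrays on the 3-DOF spring system of Fig. 32, with an illustrative load.

```python
# Sketch of the partitioned solution of Section 4.7.
import numpy as np

def solve_partitioned(K, F, fixed, us):
    """fixed: constrained DOF indices; us: their prescribed values."""
    n = K.shape[0]
    free = np.setdiff1d(np.arange(n), fixed)
    Kff = K[np.ix_(free, free)]
    Kfs = K[np.ix_(free, fixed)]
    uf = np.linalg.solve(Kff, F[free] - Kfs @ us)   # Eq. 4.37
    u = np.zeros(n)
    u[free], u[fixed] = uf, us
    R = K[fixed] @ u - F[fixed]                     # reactions (F holds Ps)
    return u, R

# 3-DOF spring system with u1 = 0 (DOF index 0 constrained), 5 lb at DOF 3
k1, k2 = 10.0, 20.0
K = np.array([[k1, -k1, 0.0], [-k1, k1 + k2, -k2], [0.0, -k2, k2]])
F = np.array([0.0, 0.0, 5.0])
u, R = solve_partitioned(K, F, np.array([0]), np.array([0.0]))
```

The recovered reaction balances the applied load, as statics requires.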
Thus, Fs, which includes all loads at the constrained DOF, can be written as

Fs = Ps + R, (4.39)

where Ps is the applied load, and R is the vector of reactions. The reactions can then be recovered as

R = Ksf uf + Kss us − Ps. (4.40)

The disadvantage of using the partitioning approach to constraints is that the writing of computer programs is made more complicated, since one needs the general capability to partition a matrix into smaller matrices. However, for general purpose software, such a capability is essential.

4.8 Alternative Approach to Constraints

Consider a structure (Fig. 40) for which we want to prescribe a value for the ith DOF: ui = U. An alternative approach to enforce this constraint is to attach a large spring k0 between DOF i and ground (a fixed point) and to apply a force Fi to DOF i equal to k0 U. This spring should be many orders of magnitude larger than other typical values in the stiffness matrix.

Figure 40: Large Spring Approach to Constraints.

For example, if we prescribe DOF 3 in a 4-DOF system, the modified system becomes

[ k11  k12  k13      k14
  k21  k22  k23      k24
  k31  k32  k33+k0   k34
  k41  k42  k43      k44 ] { u1, u2, u3, u4 } = { F1, F2, F3 + k0 U, F4 }. (4.41)

For large k0, the third equation is

k0 u3 ≈ k0 U or u3 ≈ U, (4.42)

as required.

The main advantage of this approach is that computer coding is easier, since matrix sizes do not have to change. A large number placed on the matrix diagonal will have no adverse effects on the conditioning of the matrix. The large spring approach can be used for any system of equations for which one wants to enforce a constraint on a particular unknown.
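A minimal sketch of the large-spring approach; the system and the penalty factor are illustrative.

```python
# Sketch of the large-spring (penalty) approach of Section 4.8.
import numpy as np

def apply_constraint(K, F, dof, value, factor=1e4):
    """Enforce u[dof] ~= value by adding a stiff spring to ground."""
    K, F = K.copy(), F.copy()
    k0 = factor * K.diagonal().max()
    K[dof, dof] += k0
    F[dof] += k0 * value
    return K, F

k1, k2 = 10.0, 20.0
K = np.array([[k1, -k1, 0.0], [-k1, k1 + k2, -k2], [0.0, -k2, k2]])
F = np.array([0.0, 0.0, 5.0])
Kc, Fc = apply_constraint(K, F, 0, 0.0)
u = np.linalg.solve(Kc, Fc)
```

The constrained DOF comes out very nearly (not exactly) equal to its prescribed value, which is the price of the penalty formulation.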
We can summarize the algorithm for applying the large-spring approach for constraints to the linear system Ku = F as follows:
1. Choose the large spring k0 to be, say, 10,000 times the largest diagonal entry in the unconstrained K matrix.
2. For each DOF i which is to be constrained (zero or not), add k0 to the diagonal entry Kii, and add k0 U to the corresponding right-hand side term Fi, where U is the desired constraint value for DOF i.

4.9 Beams in Flexure

Like springs and pin-jointed rods, beam elements also have stiffness matrices which can be computed exactly. In two dimensions, we designate the four DOF associated with flexure as shown in Fig. 41.

Figure 41: DOF for Beam in Flexure (2D).

The stiffness matrix would therefore be of the form

[ k11  k12  k13  k14
  k21  k22  k23  k24
  k31  k32  k33  k34
  k41  k42  k43  k44 ] { w1, θ1, w2, θ2 } = { F1, M1, F2, M2 }, (4.43)

where wi and θi are, respectively, the transverse displacement and rotation at the ith point. Rotations are expressed in radians.

To compute the first column of K, we set

w1 = 1, θ1 = w2 = θ2 = 0, (4.44)

and compute the four forces. That is, for an Euler beam, we solve the beam differential equation

EI y″ = M(x) (4.45)

subject to these boundary conditions.

Figure 42: The Beam Problem Associated With Column 1.

For Column 2, we solve the beam equation subject to the boundary conditions

θ1 = 1, w1 = w2 = θ2 = 0, (4.46)

as shown in Fig. 43.
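Carrying this column-by-column process through all six DOF (two axial, four flexural) leads to the standard 2D Euler beam stiffness matrix. The sketch below assembles it numerically and checks two defining properties: symmetry, and that rigid-body motions (uniform translations and a rigid rotation) produce no forces.

```python
# Standard 2D Euler beam element stiffness matrix and its null vectors.
import numpy as np

def beam_k(E, A, I, L):
    a, b, c, d = E * A / L, 12 * E * I / L**3, 6 * E * I / L**2, E * I / L
    return np.array([
        [ a,  0,    0,   -a,  0,    0  ],
        [ 0,  b,    c,    0, -b,    c  ],
        [ 0,  c,  4*d,    0, -c,  2*d  ],
        [-a,  0,    0,    a,  0,    0  ],
        [ 0, -b,   -c,    0,  b,   -c  ],
        [ 0,  c,  2*d,    0, -c,  4*d  ],
    ])

E, A, I, L = 30e6, 1.0, 2.0, 10.0
K = beam_k(E, A, I, L)
x_translation = np.array([1.0, 0, 0, 1, 0, 0])
y_translation = np.array([0, 1.0, 0, 0, 1, 0])
rotation = np.array([0, 0, 1.0, 0, L, 1.0])   # rigid rotation about the left end
```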
If we then combine the resulting flexural stiffnesses with the axial stiffnesses previously derived for the axial member, we obtain the two-dimensional stiffness matrix for the Euler beam:

{ F1, F2, F3, F4, F5, F6 } = K { u1, u2, u3, u4, u5, u6 },

where

K = [ AE/L     0          0         −AE/L     0          0
      0        12EI/L³    6EI/L²     0       −12EI/L³    6EI/L²
      0        6EI/L²     4EI/L      0       −6EI/L²     2EI/L
     −AE/L     0          0          AE/L     0          0
      0       −12EI/L³   −6EI/L²     0        12EI/L³   −6EI/L²
      0        6EI/L²     2EI/L      0       −6EI/L²     4EI/L ], (4.47)

and the six DOF are shown in Fig. 44. For this element, note that there is no coupling between axial and transverse behavior.

Figure 43: The Beam Problem Associated With Column 2.

Figure 44: DOF for 2D Beam Element.

The three-dimensional counterpart to this matrix would have six DOF at each grid point: three translations (ux, uy, uz) and three rotations (Rx, Ry, Rz). Thus, in 3D, K is a 12 × 12 matrix. In addition, for bending in two different planes, there would have to be two moments of inertia, I1 and I2, in addition to a torsional constant J and (possibly) a product of inertia I12. Transverse shear, which was ignored above, can also be added. For transverse shear, "area factors" would also be needed to reflect the fact that two different beams with the same cross-sectional area, but different cross-sectional shapes, would have different shear stiffnesses.

4.10 Direct Approach to Continuum Problems

Stiffness matrices for springs, rods, and beams can be derived exactly. For most problems of interest in engineering, however, exact stiffness matrices cannot be derived. Consider the 2D problem of computing the displacements and stresses in a thin plate with applied forces and
constraints (Fig. 45). This figure also shows the domain modeled with several triangular elements.

Figure 45: Plate With Constraints and Loads.

A typical element is shown in Fig. 46. With three grid points and two DOF at each point, this is a 6-DOF element. The displacement and force vectors for the element are

u = { u1, v1, u2, v2, u3, v3 }, F = { F1x, F1y, F2x, F2y, F3x, F3y }. (4.48)

Figure 46: DOF for the Linear Triangular Membrane Element.

Since we cannot determine exactly a stiffness matrix that relates these two vectors, we approximate the displacement field over the element as

u(x, y) = a1 + a2 x + a3 y,
v(x, y) = a4 + a5 x + a6 y, (4.49)

where u and v are the x and y components of displacement. Note that the number of undetermined coefficients equals the number of DOF (6) in the element. The linear approximation in Eq. 4.49 is analogous to the piecewise linear approximation used in trapezoidal rule integration. We
choose polynomials for convenience in the subsequent mathematics.

At the vertices, the displacements in Eq. 4.49 must match the grid point values, e.g.,

u1 = a1 + a2 x1 + a3 y1. (4.50)

We can write five more equations of this type to obtain

[ 1  x1  y1  0  0   0
  0  0   0   1  x1  y1
  1  x2  y2  0  0   0
  0  0   0   1  x2  y2
  1  x3  y3  0  0   0
  0  0   0   1  x3  y3 ] { a1, a2, a3, a4, a5, a6 } = { u1, v1, u2, v2, u3, v3 }

or

γ a = u. (4.51)

Since the x and y directions uncouple in Eq. 4.51, we can write this last equation in uncoupled form:

[ 1  x1  y1 ;  1  x2  y2 ;  1  x3  y3 ] { a1, a2, a3 } = { u1, u2, u3 }, (4.52)
[ 1  x1  y1 ;  1  x2  y2 ;  1  x3  y3 ] { a4, a5, a6 } = { v1, v2, v3 }. (4.53)

We now observe that

det [ 1  x1  y1 ;  1  x2  y2 ;  1  x3  y3 ] = 2A ≠ 0, (4.55)

since A is (from geometry) the area of the triangle, and, unless the triangle is degenerate, A ≠ 0; we conclude that γ is invertible. (A is positive for counterclockwise ordering of the grid points and negative for clockwise ordering.) Thus,

{ a1, a2, a3 } = (1/(2A)) [ x2y3−x3y2   x3y1−x1y3   x1y2−x2y1
                            y2−y3       y3−y1       y1−y2
                            x3−x2       x1−x3       x2−x1 ] { u1, u2, u3 }, (4.56)

and, similarly,

{ a4, a5, a6 } = (1/(2A)) [ x2y3−x3y2   x3y1−x1y3   x1y2−x2y1
                            y2−y3       y3−y1       y1−y2
                            x3−x2       x1−x3       x2−x1 ] { v1, v2, v3 }. (4.57)
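The inversion formulas can be checked by sampling a known linear field at the vertices of a sample triangle and recovering its coefficients:

```python
# Recover the coefficients (a1, a2, a3) of a linear field from its vertex
# values, using the inversion formulas for the linear triangle.
def recover_coeffs(xy, u):
    (x1, y1), (x2, y2), (x3, y3) = xy
    two_A = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    a1 = ((x2*y3 - x3*y2) * u[0] + (x3*y1 - x1*y3) * u[1]
          + (x1*y2 - x2*y1) * u[2]) / two_A
    a2 = ((y2 - y3) * u[0] + (y3 - y1) * u[1] + (y1 - y2) * u[2]) / two_A
    a3 = ((x3 - x2) * u[0] + (x1 - x3) * u[1] + (x2 - x1) * u[2]) / two_A
    return a1, a2, a3

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
u = [1 + 2*x + 3*y for x, y in verts]   # sample u = 1 + 2x + 3y
a = recover_coeffs(verts, u)            # -> (1.0, 2.0, 3.0)
```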
Thus, we can write

a = γ⁻¹ u (4.58)

and treat the element's unknowns as being either the physical displacements u or the nonphysical coefficients a.

The strain components of interest in 2D are

ε = { εxx, εyy, γxy } = { ∂u/∂x, ∂v/∂y, ∂u/∂y + ∂v/∂x }. (4.59)

From Eq. 4.49, we evaluate the strains for this element as

ε = { a2, a6, a3 + a5 } = [ 0  1  0  0  0  0
                            0  0  0  0  0  1
                            0  0  1  0  1  0 ] { a1, a2, a3, a4, a5, a6 }
  = B a = B γ⁻¹ u = C u, (4.60)

where C = Bγ⁻¹. Eq. 4.60 is a formula to compute element strains given the grid point displacements. Note that, for this element, the strains are independent of position in the element. Thus, this element is referred to as the Constant Strain Triangle (CST).

From generalized Hooke's law, each stress component is a linear combination of all the strain components:

σ = D ε = D B γ⁻¹ u = D C u, (4.62)

where, for example, for an isotropic material in plane stress,

D = E/(1 − ν²) [ 1  ν  0 ;  ν  1  0 ;  0  0  (1−ν)/2 ], (4.63)

E is Young's modulus, and ν is Poisson's ratio. Eq. 4.62 is a formula to compute element stresses given the grid point displacements. Since the strains are constant in the element, the stresses also are constant (independent of position) in the element.

We now derive the element stiffness matrix using the Principle of Virtual Work. Consider an element in static equilibrium with a set of applied loads F and displacements u. According to the Principle of Virtual Work, the work done by the applied loads acting through an arbitrary virtual (imaginary) displacement δu is equal to the increment in stored strain energy:

δuᵀ F = ∫V δεᵀ σ dV. (4.64)

We then substitute Eqs. 4.60 and 4.62 into this equation to obtain

δuᵀ F = ∫V (C δu)ᵀ (D C u) dV = δuᵀ ( ∫V Cᵀ D C dV ) u, (4.65)

where δu and u can be removed from the integral, since they are grid point quantities independent of position. Since δu is arbitrary, it follows that

F = ( ∫V Cᵀ D C dV ) u. (4.66)

Therefore, the stiffness matrix is given by

K = ∫V Cᵀ D C dV = ∫V (Bγ⁻¹)ᵀ D (Bγ⁻¹) dV, (4.67)

where the integral is over the volume of the element. Note that K is symmetric, since the material property matrix D is symmetric. The substitution of the constant B, γ, and D matrices into this equation yields the stiffness matrix for the Constant Strain Triangle:

K = Et/(4A(1 − ν²)) ×

[ y23²+λx32²  µx32y23     y23y31+λx13x32  νx13y23+λx32y31  y12y23+λx21x32  νx21y23+λx32y12
              x32²+λy23²  νx32y31+λx13y23 x32x13+λy31y23   νx32y12+λx21y23 x21x32+λy12y23
                          y31²+λx13²      µx13y31          y12y31+λx21x13  νx21y31+λx13y12
   (Sym)                                  x13²+λy31²       νx13y12+λx21y31 x13x21+λy12y31
                                                           y12²+λx21²      µx21y12
                                                                           x21²+λy12² ], (4.68)

where xij = xi − xj, yij = yi − yj,

λ = (1 − ν)/2, µ = (1 + ν)/2, (4.69)

and t is the element thickness.

5 Change of Basis

On many occasions in engineering applications, the need arises to transform vectors and matrices from one coordinate system to another. For example, in the finite element method, it is frequently more convenient to derive element matrices in a local element coordinate system and then transform those matrices to a global system (Fig. 47). Transformations are also needed to transform from other orthogonal coordinate systems (e.g., cylindrical or spherical) to Cartesian coordinates (Fig. 48).

Figure 47: Element Coordinate Systems in the Finite Element Method.
Figure 48: Basis Vectors in Polar Coordinate System.

Let the vector v be given by

v = v1 e1 + v2 e2 + v3 e3 = Σᵢ vi ei, (5.1)

where ei are the basis vectors, and vi are the components of v. With the summation convention, we can omit the summation sign and write

v = vi ei, (5.2)

where, if a subscript appears exactly twice, a summation is implied over the range.

An orthonormal basis is defined as a basis whose basis vectors are mutually orthogonal unit vectors (i.e., vectors of unit length). If ei is an orthonormal basis,

ei · ej = δij = { 1, i = j; 0, i ≠ j }, (5.3)

where δij is the Kronecker delta.

Since bases are not unique, we can express v in two different orthonormal bases:

v = Σᵢ vi ei = Σᵢ v̄i ēi, (5.4)

where vi are the components of v in the unbarred coordinate system, and v̄i are the components in the barred system (Fig. 49).

Figure 49: Change of Basis.

If we take the dot product of both sides of Eq. 5.4 with ej, we obtain

Σᵢ vi ei · ej = Σᵢ v̄i ēi · ej, (5.5)

where ei · ej = δij, and we define the 3 × 3 matrix R as

Rij = ēi · ej. (5.6)

Thus, from Eq. 5.5,

vj = Σᵢ Rij v̄i = Σᵢ Rᵀⱼᵢ v̄i. (5.7)

Since the matrix product C = AB can be written using subscript notation as

Cij = Σₖ Aik Bkj, (5.8)

Eq. 5.7 is equivalent to the matrix product

v = Rᵀ v̄. (5.9)

Similarly, if we take the dot product of both sides of Eq. 5.4 with ēj, we obtain

Σᵢ vi ei · ēj = Σᵢ v̄i ēi · ēj, (5.10)

where ēi · ēj = δij and ei · ēj = Rji. Thus,

v̄j = Σᵢ Rji vi (5.11)

or

v̄ = Rv or v = R⁻¹ v̄. (5.12)

A comparison of Eqs. 5.9 and 5.12 yields

R⁻¹ = Rᵀ or RRᵀ = I or Σₖ Rik Rjk = δij, (5.13)

where I is the identity matrix (Iij = δij):

I = [ 1  0  0 ;  0  1  0 ;  0  0  1 ]. (5.14)
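The orthogonality condition R⁻¹ = Rᵀ and the component transformation v̄ = Rv can be checked numerically; the rotation angle below is arbitrary.

```python
# Numerical check of Eq. 5.13 for a rotation about the z axis.
import numpy as np

theta = 0.3
R = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])
v = np.array([1.0, 2.0, 3.0])
vbar = R @ v        # components in the barred (rotated) system
```

The same vector is recovered by the inverse transformation v = Rᵀv̄, and its length is the same in both systems.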
A matrix R satisfying Eq. 5.13 is said to be an orthogonal matrix; that is, an orthogonal matrix is one whose inverse is equal to its transpose. R is sometimes called a rotation matrix, and this type of transformation is called an orthogonal coordinate transformation (OCT). For example, for the coordinate rotation shown in Fig. 49,

R = [ cos θ   sin θ  0
     −sin θ   cos θ  0
      0       0      1 ]. (5.15)

In 2D,

R = [ cos θ  sin θ ;  −sin θ  cos θ ] (5.16)

and

vx = v̄x cos θ − v̄y sin θ,
vy = v̄x sin θ + v̄y cos θ. (5.17)

We note that the length of a vector is unchanged under an orthogonal coordinate transformation, since the square of the length is given by

v̄i v̄i = Rij vj Rik vk = δjk vj vk = vj vj. (5.18)

That is, the square of the length of a vector is the same in both coordinate systems.

We recall that the determinant of a matrix product is equal to the product of the determinants, and the determinant of the transpose of a matrix is equal to the determinant of the matrix itself. Thus, from Eq. 5.13,

det(RRᵀ) = (det R)(det Rᵀ) = (det R)² = det I = 1, (5.19)

and we conclude that, for an orthogonal matrix R,

det R = ±1. (5.20)

The plus sign occurs for rotations, and the minus sign occurs for combinations of rotations and reflections. For example, the orthogonal matrix

R = [ 1  0  0 ;  0  1  0 ;  0  0  −1 ] (5.21)

indicates a reflection in the z direction (i.e., the sign of the z component is changed).

To summarize, under an orthogonal coordinate transformation, vectors transform according to the rule

v̄ = Rv or v̄i = Σⱼ Rij vj, (5.22)

where

Rij = ēi · ej (5.23)

and

RRᵀ = RᵀR = I. (5.24)

5.1 Tensors

A vector which transforms under an orthogonal coordinate transformation according to the rule v̄ = Rv is defined as a tensor of rank 1. A tensor of rank 0 is a scalar (a quantity which is unchanged by an orthogonal coordinate transformation). For example, temperature and pressure are scalars, since T̄ = T and p̄ = p.

We now introduce tensors of rank 2. Consider a matrix M = (Mij) which relates two vectors u and v by

v = Mu or vi = Σⱼ Mij uj (5.25)

(i.e., the result of multiplying a matrix and a vector is a vector). In a rotated coordinate system, the same relationship must hold between the transformed vectors:

v̄ = M̄ū. (5.26)

Since both u and v are vectors (tensors of rank 1), Eq. 5.25 implies

Rᵀv̄ = M Rᵀū or v̄ = R M Rᵀ ū. (5.27)

By comparing Eqs. 5.26 and 5.27, we obtain

M̄ = R M Rᵀ (5.28)

or, in index notation,

M̄ij = Σₖ Σₗ Rik Rjl Mkl, (5.29)

which is the transformation rule for a tensor of rank 2. In general, a tensor of rank n, which has n indices, transforms under an orthogonal coordinate transformation according to the rule

Āij···k = Σₚ Σ𝑞 ··· Σᵣ Rip Rjq ··· Rkr Apq···r. (5.30)
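The rank-2 rule M̄ = RMRᵀ can be verified by checking that the relation v = Mu survives the change of basis; the matrix, vector, and angle below are arbitrary.

```python
# Numerical check of the rank-2 transformation rule (Eq. 5.28).
import numpy as np

theta = 0.5
R = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 1.0]])
u = np.array([1.0, -2.0, 0.5])
v = M @ u                      # relation in the unbarred system
Mbar = R @ M @ R.T             # Eq. 5.28
ubar, vbar = R @ u, R @ v      # the same vectors in the barred system
```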
5.2 Examples of Tensors

1. Stress and strain in elasticity

The stress tensor \sigma is

    \sigma = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix},    (5.31)

where \sigma_{11}, \sigma_{22}, \sigma_{33} are the direct (normal) stresses, and \sigma_{12}, \sigma_{13}, and \sigma_{23} are the shear stresses. The corresponding strain tensor is

    \varepsilon = \begin{bmatrix} \varepsilon_{11} & \varepsilon_{12} & \varepsilon_{13} \\ \varepsilon_{21} & \varepsilon_{22} & \varepsilon_{23} \\ \varepsilon_{31} & \varepsilon_{32} & \varepsilon_{33} \end{bmatrix},    (5.32)

where, in terms of displacements,

    \varepsilon_{ij} = \frac{1}{2}(u_{i,j} + u_{j,i}) = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right).    (5.33)

The shear strains in this tensor are equal to half the corresponding engineering shear strains. Both \sigma and \varepsilon transform like second rank tensors.

2. Generalized Hooke's law

According to Hooke's law in elasticity, the extension in a bar subjected to an axial force is proportional to the force, or stress is proportional to strain. In 1D,

    \sigma = E\varepsilon,    (5.34)

where E is Young's modulus, an experimentally determined material property. In general three-dimensional elasticity, there are nine components of stress \sigma_{ij} and nine components of strain \varepsilon_{ij}. According to generalized Hooke's law, each stress component can be written as a linear combination of the nine strain components:

    \sigma_{ij} = c_{ijkl}\varepsilon_{kl},    (5.35)

where the 81 material constants c_{ijkl} are experimentally determined, and the summation convention is being used.

We now prove that c_{ijkl} is a tensor of rank 4. We can write Eq. 5.35 in terms of stress and strain in a second orthogonal coordinate system as

    R_{ki}R_{lj}\bar{\sigma}_{kl} = c_{ijkl}R_{mk}R_{nl}\bar{\varepsilon}_{mn}.    (5.36)

If we multiply both sides of this equation by R_{oi}R_{pj} and sum repeated indices, we obtain

    R_{oi}R_{pj}R_{ki}R_{lj}\bar{\sigma}_{kl} = R_{oi}R_{pj}R_{mk}R_{nl}c_{ijkl}\bar{\varepsilon}_{mn}.    (5.37)
Then, since R_{oi}R_{ki} = \delta_{ok} and R_{pj}R_{lj} = \delta_{pl},

    \delta_{ok}\delta_{pl}\bar{\sigma}_{kl} = \bar{\sigma}_{op} = R_{oi}R_{pj}R_{mk}R_{nl}c_{ijkl}\bar{\varepsilon}_{mn}.    (5.38)

Since, in the second coordinate system,

    \bar{\sigma}_{op} = \bar{c}_{opmn}\bar{\varepsilon}_{mn},    (5.39)

we conclude that

    \bar{c}_{opmn} = R_{oi}R_{pj}R_{mk}R_{nl}c_{ijkl},    (5.40)

which proves that c_{ijkl} is a tensor of rank 4.

3. Stiffness matrix in finite element analysis

In the finite element method for linear analysis of structures, the forces F acting on an object in static equilibrium are a linear combination of the displacements u (or vice versa):

    Ku = F,    (5.41)

where K is referred to as the stiffness matrix (with dimensions of force/displacement). In this equation, u and F contain several subvectors, since u and F are the displacement and force vectors for all grid points. That is, for grid points a, b, c, ...,

    u = \begin{bmatrix} u_a \\ u_b \\ u_c \\ \vdots \end{bmatrix},  F = \begin{bmatrix} F_a \\ F_b \\ F_c \\ \vdots \end{bmatrix},    (5.42)

and, in a rotated coordinate system,

    \bar{u} = \begin{bmatrix} \bar{u}_a \\ \bar{u}_b \\ \bar{u}_c \\ \vdots \end{bmatrix} = \Gamma u,    (5.43)

where

    \bar{u}_a = R_a u_a,  \bar{u}_b = R_b u_b,  \cdots,    (5.44)

and \Gamma is a block-diagonal matrix consisting of rotation matrices R:

    \Gamma = \begin{bmatrix} R_a & 0 & 0 & \cdots \\ 0 & R_b & 0 & \cdots \\ 0 & 0 & R_c & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}.    (5.45)

\Gamma is an orthogonal matrix, because each R is an orthogonal matrix:

    \Gamma^T\Gamma = I.    (5.46)
Similarly, the force vector transforms according to

    \bar{F} = \Gamma F.    (5.47)

We illustrate the transformation of a finite element stiffness matrix by transforming the stiffness matrix for the pin-jointed rod element from a local element coordinate system to a global Cartesian system. That is, if, in the element coordinate system,

    \bar{K}\bar{u} = \bar{F},    (5.48)

then

    \bar{K}\Gamma u = \Gamma F    (5.49)

or

    (\Gamma^T\bar{K}\Gamma)u = F.    (5.50)

Thus, the stiffness matrix transforms like other tensors of rank 2:

    K = \Gamma^T\bar{K}\Gamma.    (5.51)

Consider the rod shown in Fig. 50.

[Figure 50: Element Coordinate System for Pin-Jointed Rod (four DOF: two at each end point, with the element \bar{x} axis directed along the rod at angle \theta to the global x axis).]

For this element, the 4 x 4 2D stiffness matrix in the element coordinate system is

    \bar{K} = \frac{AE}{L}\begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},    (5.52)

where A is the cross-sectional area of the rod, E is Young's modulus of the rod material, and L is the rod length. In the global coordinate system,

    K = \Gamma^T\bar{K}\Gamma,    (5.53)

where the 4 x 4 transformation matrix is

    \Gamma = \begin{bmatrix} R & 0 \\ 0 & R \end{bmatrix}    (5.54)

and the rotation matrix is

    R = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}.    (5.55)
Thus,

    K = \Gamma^T\bar{K}\Gamma = \frac{AE}{L}\begin{bmatrix} c & -s & 0 & 0 \\ s & c & 0 & 0 \\ 0 & 0 & c & -s \\ 0 & 0 & s & c \end{bmatrix}\begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} c & s & 0 & 0 \\ -s & c & 0 & 0 \\ 0 & 0 & c & s \\ 0 & 0 & -s & c \end{bmatrix} = \frac{AE}{L}\begin{bmatrix} c^2 & cs & -c^2 & -cs \\ cs & s^2 & -cs & -s^2 \\ -c^2 & -cs & c^2 & cs \\ -cs & -s^2 & cs & s^2 \end{bmatrix},    (5.56)

where c = \cos\theta and s = \sin\theta.

5.3 Isotropic Tensors

An isotropic tensor is a tensor which is independent of coordinate system (i.e., invariant under an orthogonal coordinate transformation). The Kronecker delta \delta_{ij} is a second rank tensor and isotropic, since \bar{\delta}_{ij} = \delta_{ij}, and

    \bar{I} = RIR^T = RR^T = I.    (5.57)

That is, the identity matrix I is invariant under an orthogonal coordinate transformation. This result agrees with the rank-2 transformation rule found earlier in Eq. 5.28. It can be shown in tensor analysis that \delta_{ij} is the only isotropic tensor of rank 2 and, moreover, that \delta_{ij} is the characteristic tensor for all isotropic tensors:

    Rank    Isotropic Tensors
    1       none
    2       c \delta_{ij}
    3       none
    4       a \delta_{ij}\delta_{kl} + b \delta_{ik}\delta_{jl} + c \delta_{il}\delta_{jk}
    odd     none

That is, all isotropic tensors of rank 4 must be of the form shown above, which has three constants. For example, in generalized Hooke's law (Eq. 5.35), the material property tensor c_{ijkl} has 3^4 = 81 constants (assuming no symmetry). For an isotropic material, c_{ijkl} must be an isotropic tensor of rank 4, thus implying at most three material constants (on the basis of tensor analysis alone). The actual number of independent material constants for an isotropic material turns out to be two rather than three, a result which depends on the existence of a strain energy function, which implies the additional symmetry c_{ijkl} = c_{klij}.

6 Calculus of Variations

Recall from calculus that a function of one variable y = f(x) attains a stationary value (minimum, maximum, or neutral) at the point x = x_0 if the derivative f'(x) = 0 at x = x_0.
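Before leaving the rod element, its transformation lends itself to a direct numerical cross-check. The sketch below (AE/L set to 1 and an arbitrary angle \theta) multiplies out \Gamma^T\bar{K}\Gamma and compares the result with the closed form of Eq. 5.56:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

theta = 0.6                     # arbitrary element orientation
c, s = math.cos(theta), math.sin(theta)

# element stiffness in the local system (Eq. 5.52), with AE/L taken as 1
K_bar = [[1, 0, -1, 0], [0, 0, 0, 0], [-1, 0, 1, 0], [0, 0, 0, 0]]

# Gamma = blockdiag(R, R) (Eq. 5.54), with R from Eq. 5.55
Gamma = [[c, s, 0, 0], [-s, c, 0, 0], [0, 0, c, s], [0, 0, -s, c]]

K = matmul(matmul(transpose(Gamma), K_bar), Gamma)   # K = Gamma^T K_bar Gamma

# closed form (Eq. 5.56)
K_exact = [[ c*c,  c*s, -c*c, -c*s],
           [ c*s,  s*s, -c*s, -s*s],
           [-c*c, -c*s,  c*c,  c*s],
           [-c*s, -s*s,  c*s,  s*s]]
assert all(abs(K[i][j] - K_exact[i][j]) < 1e-12
           for i in range(4) for j in range(4))
```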
[Figure 51: Minimum, Maximum, and Neutral Stationary Values: (a) minimum, (b) maximum, (c) neutral.]

In general, for a stationary value, the first variation of f (which is similar to a differential) must vanish for arbitrary changes \delta x in x:

    \delta f = \frac{df}{dx}\,\delta x = 0.    (6.1)

The second derivative determines what kind of stationary value one has:

    minimum:  d^2f/dx^2 > 0 at x = x_0,
    maximum:  d^2f/dx^2 < 0 at x = x_0,
    neutral:  d^2f/dx^2 = 0 at x = x_0,

as shown in Fig. 51. For a function of two variables z = f(x, y), z is stationary at (x_0, y_0) if

    \frac{\partial f}{\partial x} = 0  and  \frac{\partial f}{\partial y} = 0  at (x_0, y_0).    (6.2)

Alternatively,

    \delta f = \frac{\partial f}{\partial x}\delta x + \frac{\partial f}{\partial y}\delta y = 0  at (x_0, y_0).    (6.3)

A function with n independent variables f(x_1, x_2, ..., x_n) is stationary at a point P if

    \frac{\partial f}{\partial x_i} = 0  at P  (i = 1, 2, ..., n)    (6.4)

or, alternatively, if

    \delta f = \sum_{i=1}^n \frac{\partial f}{\partial x_i}\delta x_i = 0  at P

for arbitrary variations \delta x_i.

Variational calculus is concerned with finding stationary values, not of functions, but of functionals. In general, a functional is a function of a function. More precisely, a functional is an integral which has a specific numerical value for each function that is substituted into it.
Consider the functional

    I(\phi) = \int_a^b F(x, \phi, \phi_x)\,dx,    (6.5)

where x is the independent variable, \phi(x) is the dependent variable, and

    \phi_x = \frac{d\phi}{dx}.    (6.6)

The variation in I due to an arbitrary small change in \phi is

    \delta I = \int_a^b \delta F\,dx = \int_a^b \left(\frac{\partial F}{\partial\phi}\delta\phi + \frac{\partial F}{\partial\phi_x}\delta\phi_x\right) dx,    (6.7)

where, with an interchange of the order of d and \delta,

    \delta\phi_x = \delta\left(\frac{d\phi}{dx}\right) = \frac{d}{dx}(\delta\phi).    (6.8)

With this equation, the second term in Eq. 6.7 can be integrated by parts to obtain

    \int_a^b \frac{\partial F}{\partial\phi_x}\delta\phi_x\,dx = \int_a^b \frac{\partial F}{\partial\phi_x}\frac{d}{dx}(\delta\phi)\,dx = \left[\frac{\partial F}{\partial\phi_x}\delta\phi\right]_a^b - \int_a^b \frac{d}{dx}\left(\frac{\partial F}{\partial\phi_x}\right)\delta\phi\,dx.    (6.9)

Thus,

    \delta I = \int_a^b \left[\frac{\partial F}{\partial\phi} - \frac{d}{dx}\left(\frac{\partial F}{\partial\phi_x}\right)\right]\delta\phi\,dx + \left[\frac{\partial F}{\partial\phi_x}\delta\phi\right]_a^b.    (6.10)

Since \delta\phi is arbitrary (within certain limits of admissibility), \delta I = 0 implies that both terms in Eq. 6.10 must vanish. For arbitrary \delta\phi, a zero integral in Eq. 6.10 implies a zero integrand:

    \frac{d}{dx}\left(\frac{\partial F}{\partial\phi_x}\right) - \frac{\partial F}{\partial\phi} = 0,    (6.11)

which is referred to as the Euler-Lagrange equation. The second term in Eq. 6.10 must also vanish for arbitrary \delta\phi:

    \left[\frac{\partial F}{\partial\phi_x}\delta\phi\right]_a^b = 0.    (6.12)

If \phi is specified at the boundaries x = a and x = b, then

    \delta\phi(a) = \delta\phi(b) = 0,    (6.13)

and Eq. 6.12 is automatically satisfied. This type of boundary condition is called a geometric or essential boundary condition. If \phi is not specified at the boundaries x = a and x = b, Eq. 6.12 implies

    \frac{\partial F}{\partial\phi_x}(a) = \frac{\partial F}{\partial\phi_x}(b) = 0.    (6.14)

This type of boundary condition is called a natural boundary condition.
6.1 Example 1: The Shortest Distance Between Two Points

We wish to find the equation of the curve y(x) along which the distance from the origin (0, 0) to (1, 1) in the xy-plane is least. Consider the curve in Fig. 52.

[Figure 52: Curve of Minimum Length Between Two Points.]

The differential arc length along the curve is given by

    ds = \sqrt{dx^2 + dy^2} = \sqrt{1 + (y')^2}\,dx.    (6.15)

We seek the curve y(x) which minimizes the total arc length

    I(y) = \int_0^1 \sqrt{1 + (y')^2}\,dx,    (6.16)

where y(0) = 0 and y(1) = 1. For this problem,

    F(x, y, y') = [1 + (y')^2]^{1/2},    (6.17)

which depends only on y' explicitly. Thus, since \partial F/\partial y = 0, the Euler-Lagrange equation, Eq. 6.11, is

    \frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) = 0,    (6.18)

where

    \frac{\partial F}{\partial y'} = \frac{1}{2}[1 + (y')^2]^{-1/2}\,2y' = \frac{y'}{[1 + (y')^2]^{1/2}}.    (6.19)

Hence,

    \frac{y'}{[1 + (y')^2]^{1/2}} = c,    (6.20)

where c is a constant of integration. If we solve this equation for y', we obtain

    y' = \sqrt{\frac{c^2}{1 - c^2}} = a,    (6.21)

where a is another constant. Thus,

    y(x) = ax + b,    (6.22)

and, with the boundary conditions,

    y(x) = x,    (6.23)

which is the straight line joining (0, 0) and (1, 1), as expected.
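A quick numerical experiment illustrates the result: among curves satisfying the end conditions, the straight line has the smallest arc length. The quadrature rule and the particular perturbation below are illustrative choices, not part of the original derivation:

```python
import math

def arc_length(yprime, n=20000):
    """Midpoint-rule quadrature of I(y) = integral_0^1 sqrt(1 + y'(x)^2) dx."""
    h = 1.0 / n
    return sum(math.sqrt(1.0 + yprime((i + 0.5) * h) ** 2) for i in range(n)) * h

# the straight line y = x has y' = 1 and length sqrt(2)
straight = arc_length(lambda x: 1.0)
assert abs(straight - math.sqrt(2.0)) < 1e-6

# perturbation eps*sin(pi x) preserves the end conditions y(0)=0, y(1)=1,
# so y = x + eps*sin(pi x) is admissible; its length must be larger
eps = 0.2
perturbed = arc_length(lambda x: 1.0 + eps * math.pi * math.cos(math.pi * x))
assert perturbed > straight
```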
6.2 Example 2: The Brachistochrone

We wish to determine the path y(x) from the origin O to the point A along which a bead will slide under the force of gravity and without friction in the shortest time (Fig. 53).

[Figure 53: The Brachistochrone Problem (x measured downward from O, with end point A).]

The velocity v along the path y(x) is

    v = \frac{ds}{dt} = \frac{\sqrt{dx^2 + dy^2}}{dt} = \sqrt{1 + (y')^2}\,\frac{dx}{dt},    (6.24)

where s denotes distance along the path. Thus,

    dt = \frac{\sqrt{1 + (y')^2}\,dx}{v}.    (6.25)

As the bead slides down the path, potential energy is converted to kinetic energy. Assume that the bead starts from rest. At any location x, the kinetic energy equals the reduction in potential energy:

    \frac{1}{2}mv^2 = mgx    (6.26)

or

    v = \sqrt{2gx}.    (6.27)

Thus, from Eq. 6.25,

    dt = \sqrt{\frac{1 + (y')^2}{2gx}}\,dx.    (6.28)

The total time for the bead to fall from O to A is then

    t = \int_0^{x_A} \sqrt{\frac{1 + (y')^2}{2gx}}\,dx,    (6.29)

which is the integral to be minimized. From Eq. 6.11, the Euler-Lagrange equation is

    \frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) - \frac{\partial F}{\partial y} = 0,    (6.30)

where

    F(x, y, y') = \sqrt{\frac{1 + (y')^2}{2gx}}.    (6.31)
Thus, since \partial F/\partial y = 0, the Euler-Lagrange equation becomes

    \frac{d}{dx}\left[\frac{1}{2}\left(\frac{1 + (y')^2}{2gx}\right)^{-1/2}\frac{2y'}{2gx}\right] = 0    (6.32)

or

    \frac{d}{dx}\left\{\frac{y'}{[x(1 + (y')^2)]^{1/2}}\right\} = 0.    (6.33)

To solve this equation, we integrate Eq. 6.33 to obtain

    \frac{y'}{[x(1 + (y')^2)]^{1/2}} = c,    (6.34)

where c is a constant of integration. We then square both sides of this equation and solve for y':

    (y')^2 = c^2 x[1 + (y')^2]    (6.35)

or

    \frac{dy}{dx} = \sqrt{\frac{c^2 x}{1 - c^2 x}}.    (6.36)

This equation can be integrated using the change of variable

    x = \frac{1}{c^2}\sin^2(\theta/2) = \frac{1}{2c^2}(1 - \cos\theta),    (6.37)

for which

    dx = \frac{1}{c^2}\sin(\theta/2)\cos(\theta/2)\,d\theta.    (6.38)

The integral implied by Eq. 6.36 then transforms to

    y = \int \frac{\sin(\theta/2)}{\cos(\theta/2)}\cdot\frac{1}{c^2}\sin(\theta/2)\cos(\theta/2)\,d\theta = \frac{1}{c^2}\int \sin^2(\theta/2)\,d\theta = \frac{1}{2c^2}(\theta - \sin\theta) + y_0,    (6.39)

where y_0 is a constant of integration. Since the curve must pass through the origin, we must have y_0 = 0. If we then replace the constant 1/(2c^2) by a, we obtain the final result in parametric form

    x = a(1 - \cos\theta),    (6.40)
    y = a(\theta - \sin\theta),    (6.41)

which is a cycloid generated by the motion of a fixed point on the circumference of a circle of radius a which rolls on the positive side of the line x = 0.

To solve Eq. 6.41 for a specific cycloid (defined by the two end points), one can first eliminate the circle radius a from Eq. 6.41 to solve (iteratively) for \theta_A (the value of \theta at Point A), after which either of the two equations in Eq. 6.41 can be used to determine the constant a. The resulting cycloids for several end points are shown in Fig. 54. This brachistochrone solution is valid for any end point which the bead can reach. Also, from Eq. 6.29, the end point must not be higher than the starting point. The end point may be at the same elevation since, without friction, there is no loss of energy during the motion.
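The cycloid's optimality can be illustrated numerically by comparing travel times. The sketch below uses an assumed radius a, places the end point at the cycloid's lowest point (\theta_A = \pi), evaluates Eq. 6.29 for the straight-line path by quadrature, and uses the standard closed-form cycloid time t = \theta_A\sqrt{a/g} (which follows from dt = \sqrt{a/g}\,d\theta along the cycloid):

```python
import math

g = 9.81
a = 1.0                              # assumed cycloid radius
theta_A = math.pi                    # end point at the cycloid's lowest point
xA = a * (1 - math.cos(theta_A))     # = 2a      (Eq. 6.40)
yA = a * (theta_A - math.sin(theta_A))  # = pi*a (Eq. 6.41)

def travel_time(yprime, xA, n=50000):
    """Midpoint rule for t = integral_0^xA sqrt((1 + y'^2)/(2 g x)) dx (Eq. 6.29)."""
    h = xA / n
    t = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        t += math.sqrt((1.0 + yprime(x) ** 2) / (2.0 * g * x)) * h
    return t

# straight line from O to A (constant slope)
m = yA / xA
t_line = travel_time(lambda x: m, xA)

# cycloid: exact travel time
t_cycloid = theta_A * math.sqrt(a / g)

assert t_cycloid < t_line   # the cycloid beats the straight line
```

The midpoint rule is used because it avoids evaluating the integrand at the singular point x = 0.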
[Figure 54: Several Brachistochrone Solutions.]

6.3 Constraint Conditions

Suppose we want to extremize the functional

    I(\phi) = \int_a^b F(x, \phi, \phi_x)\,dx    (6.42)

subject to the additional constraint condition

    \int_a^b G(x, \phi, \phi_x)\,dx = J,    (6.43)

where J is a specified constant. We recall that the Euler-Lagrange equation was found by requiring that \delta I = 0. However, since J is a constant, \delta J = 0. Thus, to enforce the constraint in Eq. 6.43, we can require instead that

    \delta(I + \lambda J) = 0,    (6.44)

where \lambda is an unknown constant referred to as a Lagrange multiplier. That is, we can replace F in the Euler-Lagrange equation with

    H = F + \lambda G.    (6.45)

6.4 Example 3: A Constrained Minimization Problem

We wish to find the function y(x) which minimizes the integral

    I(y) = \int_0^1 (y')^2\,dx    (6.46)

subject to the end conditions y(0) = y(1) = 0 and the constraint

    \int_0^1 y\,dx = 1.    (6.47)

That is, the area under the curve y(x) is unity (Fig. 55).
[Figure 55: A Constrained Minimization Problem (area under y(x) between x = 0 and x = 1 equal to 1).]

For this problem,

    H(x, y, y') = (y')^2 + \lambda y.    (6.48)

The Euler-Lagrange equation,

    \frac{d}{dx}\left(\frac{\partial H}{\partial y'}\right) - \frac{\partial H}{\partial y} = 0,    (6.49)

yields

    \frac{d}{dx}(2y') - \lambda = 0  or  y'' = \lambda/2.    (6.50)

After integrating this equation twice, we obtain

    y = \frac{\lambda}{4}x^2 + Ax + B.    (6.51)

With the boundary conditions y(0) = y(1) = 0, we obtain B = 0 and A = -\lambda/4. Thus,

    y = -\lambda x(1 - x)/4.    (6.52)

The area constraint, Eq. 6.47, is used to evaluate the Lagrange multiplier \lambda:

    1 = \int_0^1 y\,dx = -\frac{\lambda}{4}\int_0^1 (x - x^2)\,dx = -\frac{\lambda}{4}\left[\frac{x^2}{2} - \frac{x^3}{3}\right]_0^1 = -\frac{\lambda}{24}.    (6.53)

Thus, with \lambda = -24, we obtain the parabola

    y(x) = 6x(1 - x).    (6.54)

6.5 Functions of Several Independent Variables

Consider

    I(\phi) = \int_V F\,dV,    (6.55)

where

    F = F(x, y, z, \phi, \phi_x, \phi_y, \phi_z),    (6.56)

a functional with three independent variables (x, y, z). Note that, in 3D, the integration is a volume integral. The variation in I due to an arbitrary small change in \phi is

    \delta I = \int_V \left(\frac{\partial F}{\partial\phi}\delta\phi + \frac{\partial F}{\partial\phi_x}\delta\phi_x + \frac{\partial F}{\partial\phi_y}\delta\phi_y + \frac{\partial F}{\partial\phi_z}\delta\phi_z\right) dV.    (6.57)
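The solution of Example 3 above can be verified numerically: the parabola satisfies the area constraint, gives I = 12, and any admissible area-preserving perturbation increases I. The particular perturbation sin(2\pi x), which has zero area and vanishes at both ends, is an illustrative choice:

```python
import math

def integrate(f, n=10000):
    """Midpoint-rule quadrature on [0, 1]."""
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

y  = lambda x: 6 * x * (1 - x)       # the solution, Eq. 6.54
yp = lambda x: 6 - 12 * x            # its derivative

# area constraint (Eq. 6.47) and the minimum value of I (Eq. 6.46)
area = integrate(y)
I_min = integrate(lambda x: yp(x) ** 2)     # analytically, I = 12
assert abs(area - 1.0) < 1e-6
assert abs(I_min - 12.0) < 1e-3

# eta = sin(2*pi*x) has zero area and eta(0) = eta(1) = 0,
# so y + eps*eta is admissible; I must increase for eps of either sign
eta_p = lambda x: 2 * math.pi * math.cos(2 * math.pi * x)
for eps in (0.1, -0.1):
    I_pert = integrate(lambda x: (yp(x) + eps * eta_p(x)) ** 2)
    assert I_pert > I_min
```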
With an interchange of the order of \partial and \delta, Eq. 6.57 becomes

    \delta I = \int_V \left[\frac{\partial F}{\partial\phi}\delta\phi + \frac{\partial F}{\partial\phi_x}\frac{\partial}{\partial x}(\delta\phi) + \frac{\partial F}{\partial\phi_y}\frac{\partial}{\partial y}(\delta\phi) + \frac{\partial F}{\partial\phi_z}\frac{\partial}{\partial z}(\delta\phi)\right] dV.    (6.58)

To perform a three-dimensional integration by parts, the second term in Eq. 6.58 can be expanded to yield

    \frac{\partial F}{\partial\phi_x}\frac{\partial}{\partial x}(\delta\phi) = \frac{\partial}{\partial x}\left(\frac{\partial F}{\partial\phi_x}\delta\phi\right) - \frac{\partial}{\partial x}\left(\frac{\partial F}{\partial\phi_x}\right)\delta\phi.    (6.59)

If we then expand the third and fourth terms similarly, Eq. 6.58 becomes, after regrouping,

    \delta I = \int_V \left[\frac{\partial F}{\partial\phi} - \frac{\partial}{\partial x}\left(\frac{\partial F}{\partial\phi_x}\right) - \frac{\partial}{\partial y}\left(\frac{\partial F}{\partial\phi_y}\right) - \frac{\partial}{\partial z}\left(\frac{\partial F}{\partial\phi_z}\right)\right]\delta\phi\,dV + \int_V \left[\frac{\partial}{\partial x}\left(\frac{\partial F}{\partial\phi_x}\delta\phi\right) + \frac{\partial}{\partial y}\left(\frac{\partial F}{\partial\phi_y}\delta\phi\right) + \frac{\partial}{\partial z}\left(\frac{\partial F}{\partial\phi_z}\delta\phi\right)\right] dV.    (6.60)

The last three terms in this integral can then be converted to a surface integral using the divergence theorem, which states that, for a vector field f,

    \int_V \nabla\cdot f\,dV = \int_S f\cdot n\,dS,    (6.61)

where S is the closed surface which encloses the volume V. Eq. 6.60 then becomes

    \delta I = \int_V \left[\frac{\partial F}{\partial\phi} - \frac{\partial}{\partial x}\left(\frac{\partial F}{\partial\phi_x}\right) - \frac{\partial}{\partial y}\left(\frac{\partial F}{\partial\phi_y}\right) - \frac{\partial}{\partial z}\left(\frac{\partial F}{\partial\phi_z}\right)\right]\delta\phi\,dV + \int_S \left(\frac{\partial F}{\partial\phi_x}n_x + \frac{\partial F}{\partial\phi_y}n_y + \frac{\partial F}{\partial\phi_z}n_z\right)\delta\phi\,dS,    (6.62)

where n = (n_x, n_y, n_z) is the unit outward normal on the surface. Since \delta I = 0, both integrals in this equation must vanish for arbitrary \delta\phi. It therefore follows that the integrand in the volume integral must vanish to yield the Euler-Lagrange equation for three independent variables:

    \frac{\partial}{\partial x}\left(\frac{\partial F}{\partial\phi_x}\right) + \frac{\partial}{\partial y}\left(\frac{\partial F}{\partial\phi_y}\right) + \frac{\partial}{\partial z}\left(\frac{\partial F}{\partial\phi_z}\right) - \frac{\partial F}{\partial\phi} = 0.    (6.63)

If \phi is specified on the boundary S, \delta\phi = 0 on S, and the boundary integral in Eq. 6.62 automatically vanishes. If \phi is not specified on S, the integrand in the boundary integral must vanish:

    \frac{\partial F}{\partial\phi_x}n_x + \frac{\partial F}{\partial\phi_y}n_y + \frac{\partial F}{\partial\phi_z}n_z = 0  on S.    (6.64)

This is the natural boundary condition when \phi is not specified on the boundary.
6.6 Example 4: Poisson's Equation

Consider the functional

    I(\phi) = \int_V \left[\frac{1}{2}(\phi_x^2 + \phi_y^2 + \phi_z^2) - Q\phi\right] dV,    (6.65)

in which case

    F(x, y, z, \phi, \phi_x, \phi_y, \phi_z) = \frac{1}{2}(\phi_x^2 + \phi_y^2 + \phi_z^2) - Q\phi.    (6.66)

The Euler-Lagrange equation, Eq. 6.63, implies

    \frac{\partial}{\partial x}(\phi_x) + \frac{\partial}{\partial y}(\phi_y) + \frac{\partial}{\partial z}(\phi_z) - (-Q) = 0    (6.67)

or

    \phi_{xx} + \phi_{yy} + \phi_{zz} = -Q    (6.68)

or

    \nabla^2\phi = -Q,    (6.69)

which is Poisson's equation. Thus, we have shown that solving Poisson's equation is equivalent to minimizing the functional I in Eq. 6.65. In general, a key conclusion of this discussion of variational calculus is that solving a differential equation is equivalent to minimizing some functional involving an integral.

6.7 Functions of Several Dependent Variables

Consider the functional

    I(\phi_1, \phi_2, \phi_3) = \int_a^b F(x, \phi_1, \phi_2, \phi_3, \phi'_1, \phi'_2, \phi'_3)\,dx,    (6.70)

where x is the independent variable, and \phi_1(x), \phi_2(x), and \phi_3(x) are the dependent variables. It can be shown that the generalization of the Euler-Lagrange equation, Eq. 6.11, for this case is the system

    \frac{d}{dx}\left(\frac{\partial F}{\partial\phi'_1}\right) - \frac{\partial F}{\partial\phi_1} = 0,    (6.71)
    \frac{d}{dx}\left(\frac{\partial F}{\partial\phi'_2}\right) - \frac{\partial F}{\partial\phi_2} = 0,    (6.72)
    \frac{d}{dx}\left(\frac{\partial F}{\partial\phi'_3}\right) - \frac{\partial F}{\partial\phi_3} = 0.    (6.73)

7 Variational Approach to the Finite Element Method

In modern engineering analysis, one of the most important applications of the energy theorems, such as the minimum potential energy theorem in elasticity, is the finite element method, a numerical procedure for solving partial differential equations. For linear equations, finite element solutions are often based on variational methods.
7.1 Index Notation and Summation Convention

Let a be the vector

    a = a_1 e_1 + a_2 e_2 + a_3 e_3,    (7.1)

where e_i is the unit vector in the ith Cartesian coordinate direction, and a_i is the ith component of a (the projection of a on e_i). Consider a second vector

    b = b_1 e_1 + b_2 e_2 + b_3 e_3.    (7.2)

Then the dot product (scalar product) is

    a \cdot b = a_1 b_1 + a_2 b_2 + a_3 b_3 = \sum_{i=1}^3 a_i b_i.    (7.3)

Using the summation convention, we can omit the summation symbol and write

    a \cdot b = a_i b_i,    (7.4)

where, if a subscript appears exactly twice, a summation is implied over the range. The range is 1, 2, 3 in 3D and 1, 2 in 2D.

We define the shorthand comma notation for derivatives:

    \frac{\partial f}{\partial x_i} = f_{,i},    (7.5)

where the subscript ",i" denotes the partial derivative with respect to the ith Cartesian coordinate direction. Using the comma notation and the summation convention, a variety of familiar quantities can be written in compact form. Let f(x_1, x_2, x_3) be a scalar function of the Cartesian coordinates x_1, x_2, x_3. Then the gradient can be written

    \nabla f = \frac{\partial f}{\partial x_1}e_1 + \frac{\partial f}{\partial x_2}e_2 + \frac{\partial f}{\partial x_3}e_3 = f_{,i}\,e_i.    (7.6)

For a vector-valued function F(x_1, x_2, x_3), the divergence is written

    \nabla\cdot F = \frac{\partial F_1}{\partial x_1} + \frac{\partial F_2}{\partial x_2} + \frac{\partial F_3}{\partial x_3} = F_{i,i},    (7.7)

and the Laplacian of the scalar function f(x_1, x_2, x_3) is

    \nabla^2 f = \nabla\cdot\nabla f = \frac{\partial^2 f}{\partial x_1^2} + \frac{\partial^2 f}{\partial x_2^2} + \frac{\partial^2 f}{\partial x_3^2} = f_{,ii}.    (7.8)

The divergence theorem states that, for any vector field F(x_1, x_2, x_3) defined in a volume V bounded by a closed surface S,

    \int_V \nabla\cdot F\,dV = \int_S F\cdot n\,dS    (7.9)
or, in index notation,

    \int_V F_{i,i}\,dV = \int_S F_i n_i\,dS.    (7.10)

The normal derivative can be written in index notation as

    \frac{\partial\phi}{\partial n} = \nabla\phi\cdot n = \phi_{,i} n_i.    (7.11)

The dot product of two gradients can be written

    \nabla\phi\cdot\nabla\phi = (\phi_{,i}e_i)\cdot(\phi_{,j}e_j) = \phi_{,i}\phi_{,j}\,e_i\cdot e_j = \phi_{,i}\phi_{,j}\delta_{ij} = \phi_{,i}\phi_{,i},    (7.12)

where \delta_{ij} is the Kronecker delta defined as

    \delta_{ij} = \begin{cases} 1, & i = j, \\ 0, & i \ne j. \end{cases}    (7.13)

7.2 Deriving Variational Principles

For each partial differential equation of interest, one needs a functional which a solution makes stationary. Given the functional, it is generally easy to see what partial differential equation corresponds to it. The harder problem is to start with the equation and derive the variational principle (i.e., derive the functional which is to be minimized). To simplify this discussion, we will consider only scalar, rather than vector, problems. A scalar problem has one dependent variable and one partial differential equation, whereas a vector problem has several dependent variables coupled to each other through a system of partial differential equations.

Consider Poisson's equation subject to both Dirichlet and Neumann boundary conditions:

    \nabla^2\phi + f = 0  in V,    (7.14a)
    \phi = \phi_0  on S_1,    (7.14b)
    \frac{\partial\phi}{\partial n} + g = 0  on S_2.    (7.14c)

This problem arises, for example, in (1) steady-state heat conduction, where the temperature and heat flux are specified on different parts of the boundary, (2) potential fluid flow, where the velocity potential and velocity are specified on different parts of the boundary, and (3) torsion in elasticity.

We wish to find a functional, I(\phi) say, whose first variation \delta I vanishes for \phi satisfying the above boundary value problem. From Eq. 7.14a,

    0 = \int_V (\nabla^2\phi + f)\,\delta\phi\,dV = \int_V (\phi_{,ii} + f)\,\delta\phi\,dV    (7.15)
      = \int_V [(\phi_{,i}\delta\phi)_{,i} - \phi_{,i}\delta\phi_{,i}]\,dV + \int_V \delta(f\phi)\,dV    (7.16)
      = \int_S \phi_{,i}n_i\,\delta\phi\,dS - \int_V \frac{1}{2}\delta(\phi_{,i}\phi_{,i})\,dV + \delta\int_V f\phi\,dV    (7.17)
      = \int_S \phi_{,i}n_i\,\delta\phi\,dS - \delta\int_V \frac{1}{2}\phi_{,i}\phi_{,i}\,dV + \delta\int_V f\phi\,dV,    (7.18)
i φ.28) = S φ.23) or. (7. Hence.i − f δφ dV + 2 2 (φ. 7.i dV + δ f φ dV = −δ S2 V 2 V 1 = −δ φ. I(φ) = V 1 φ· 2 φ − f φ dV + S2 gφ dS (7.ii + f ) δφ dV + S2 where δφ = 0 on S1 .22.25) (7. ∂φ + g δφ dS − ∂n (φ.ii + f ) δφ dV.i φ.21) Thus.i δφ). δI = V 1 1 δφ. (7. 2 V S2 g δφ dS − δ (7. Eq. Eq.20) (7.29) Stationary I (δI = 0) for arbitrary admissible δφ thus yields the original partial diﬀerential equation and Neumann boundary condition.i dV + δ f φ dV S2 V 2 V 1 gφ dS − δ φ.i + φ. and φ.27) (7.i δφ.24) If we were given the functional.22) or. V (7.i φ.26) = V = V [(φ.i − f δφ) dV + g δφ dS S2 g δφ dS S2 (7. 69 .i δφ. 7.i φ. From Eq.19) (7.i − φ.14. the functional for this boundary value problem is I(φ) = V 1 φ. and. 7.i − f φ dV + gφ dS = −δI(φ).i ni δφ dS − V (φ. δφ = 0 on S1 . in expanded form.where φ · n = ∂φ/∂n = −g on S2 . Then. we could determine which partial diﬀerential equation corresponds to it by computing and setting to zero the ﬁrst variation of the functional.22. I(φ) = V 1 2 ∂φ ∂x 2 + ∂φ ∂y 2 + ∂φ ∂z 2 − f φ dV + S2 gφ dS.i − f φ dV + 2 gφ dS S2 (7.ii δφ] dV − V f δφ dV + S2 g δφ dS g δφ dS.i ni = δI = S2 φ · n = ∂φ/∂n. 0=− 1 φ.i φ. since φ is speciﬁed on S1 . in vector notation.
7.3 Shape Functions

Consider a two-dimensional field problem for which we seek the function \phi(x, y) satisfying some (unspecified) partial differential equation in a domain. For numerical solution, let us subdivide the domain into triangular finite elements, as shown in Fig. 56.

[Figure 56: Two-Dimensional Finite Element Mesh.]

A typical element, shown in Fig. 57, has three degrees of freedom (DOF): \phi_1, \phi_2, and \phi_3. The three DOF are the values of the fundamental unknown \phi at the three grid points. (The number of DOF for an element represents the number of pieces of data needed to determine uniquely the solution for the element.)

[Figure 57: Triangular Finite Element.]

For each element, we approximate \phi(x, y) using a polynomial in two variables having the same number of terms as the number of DOF. Thus, we assume, for each element,

    \phi(x, y) = a_1 + a_2 x + a_3 y,    (7.30)

where a_1, a_2, and a_3 are unknown coefficients. Eq. 7.30 is a complete linear polynomial in two variables. We can evaluate this equation at the three grid points to obtain

    \begin{bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \end{bmatrix} = \begin{bmatrix} 1 & x_1 & y_1 \\ 1 & x_2 & y_2 \\ 1 & x_3 & y_3 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}.    (7.31)

This matrix equation can be inverted to yield

    \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \frac{1}{2A}\begin{bmatrix} x_2y_3 - x_3y_2 & x_3y_1 - x_1y_3 & x_1y_2 - x_2y_1 \\ y_2 - y_3 & y_3 - y_1 & y_1 - y_2 \\ x_3 - x_2 & x_1 - x_3 & x_2 - x_1 \end{bmatrix}\begin{bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \end{bmatrix},    (7.32)

where

    2A = \begin{vmatrix} 1 & x_1 & y_1 \\ 1 & x_2 & y_2 \\ 1 & x_3 & y_3 \end{vmatrix} = x_2y_3 - x_3y_2 - x_1y_3 + x_3y_1 + x_1y_2 - x_2y_1.    (7.33)
Note that the area of the triangle is A. The determinant 2A is positive or negative depending on whether the grid point ordering in the triangle is counterclockwise or clockwise.

Since Eq. 7.31 is invertible, we can interpret the element DOF as either the grid point values \phi_i, which have physical meaning, or the nonphysical polynomial coefficients a_i. From Eq. 7.30,

    \phi(x, y) = \begin{bmatrix} 1 & x & y \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}    (7.34)

or, with Eq. 7.32,

    \phi(x, y) = \frac{1}{2A}\begin{bmatrix} 1 & x & y \end{bmatrix}\begin{bmatrix} x_2y_3 - x_3y_2 & x_3y_1 - x_1y_3 & x_1y_2 - x_2y_1 \\ y_2 - y_3 & y_3 - y_1 & y_1 - y_2 \\ x_3 - x_2 & x_1 - x_3 & x_2 - x_1 \end{bmatrix}\begin{bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \end{bmatrix}    (7.35)

or

    \phi(x, y) = N_1(x, y)\phi_1 + N_2(x, y)\phi_2 + N_3(x, y)\phi_3 = N_i\phi_i,    (7.36)

where

    N_1(x, y) = [(x_2y_3 - x_3y_2) + (y_2 - y_3)x + (x_3 - x_2)y]/(2A),
    N_2(x, y) = [(x_3y_1 - x_1y_3) + (y_3 - y_1)x + (x_1 - x_3)y]/(2A),    (7.37)
    N_3(x, y) = [(x_1y_2 - x_2y_1) + (y_1 - y_2)x + (x_2 - x_1)y]/(2A).

The functions N_i are called shape functions or interpolation functions and are used extensively in finite element theory. Eq. 7.37 implies that N_1 can be thought of as the shape function associated with Point 1, since N_1 is the form (or "shape") that \phi would take if \phi_1 = 1 and \phi_2 = \phi_3 = 0. That is, if \phi_2 = \phi_3 = 0,

    \phi(x, y) = N_1(x, y)\phi_1.    (7.38)

Notice that all the subscripts in Eq. 7.37 form a cyclic permutation of the numbers 1, 2, 3. Thus, knowing N_1, we can write down N_2 and N_3 by a cyclic permutation of the subscripts, and the above equations can be written in the compact form

    N_i(x, y) = \frac{1}{2A}[(x_jy_k - x_ky_j) + (y_j - y_k)x + (x_k - x_j)y],    (7.39)

where (i, j, k) can be selected to be any cyclic permutation of (1, 2, 3), such as (1, 2, 3), (2, 3, 1), or (3, 1, 2). In general, N_i is the shape function for Point i. In particular, at Point 1,

    \phi_1 = \phi(x_1, y_1) = N_1(x_1, y_1)\phi_1  or  N_1(x_1, y_1) = 1.    (7.40)
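The interpolation property just described can be checked numerically for an arbitrary triangle. The vertex coordinates below are an assumed example; the sketch confirms that each N_i is unity at its own grid point and zero at the others, that the three shape functions sum to one everywhere, and that each equals 1/3 at the centroid:

```python
# triangle with arbitrary vertex coordinates (assumed example values)
pts = [(0.0, 0.0), (2.0, 0.3), (0.4, 1.5)]

def two_A(p):
    """2A from the 3x3 determinant of grid point coordinates (Eq. 7.33)."""
    (x1, y1), (x2, y2), (x3, y3) = p
    return x2*y3 - x3*y2 - x1*y3 + x3*y1 + x1*y2 - x2*y1

def N(i, x, y, p=pts):
    """Shape function N_i from the compact formula (Eq. 7.39), 0-based i."""
    j, k = (i + 1) % 3, (i + 2) % 3   # (i, j, k) is a cyclic permutation
    (xj, yj), (xk, yk) = p[j], p[k]
    return ((xj*yk - xk*yj) + (yj - yk)*x + (xk - xj)*y) / two_A(p)

# N_i is one at its own grid point and zero at the others
for i in range(3):
    for j, (xj, yj) in enumerate(pts):
        assert abs(N(i, xj, yj) - (1.0 if i == j else 0.0)) < 1e-12

# the shape functions sum to one, and each equals 1/3 at the centroid
xc = sum(x for x, _ in pts) / 3
yc = sum(y for _, y in pts) / 3
assert abs(sum(N(i, xc, yc) for i in range(3)) - 1.0) < 1e-12
assert all(abs(N(i, xc, yc) - 1/3) < 1e-12 for i in range(3))
```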
Similarly, at Point 2, with \phi_2 = \phi_3 = 0,

    0 = \phi_2 = \phi(x_2, y_2) = N_1(x_2, y_2)\phi_1,    (7.41)

so that

    N_1(x_2, y_2) = N_1(x_3, y_3) = 0.    (7.42)

In general, a shape function takes the value unity at its own grid point and zero at the other grid points in an element:

    N_i(x_j, y_j) = \delta_{ij}.    (7.43)

The preceding discussion assumed an element having three grid points. In general, for an element having r points,

    \phi(x, y) = N_1(x, y)\phi_1 + N_2(x, y)\phi_2 + \cdots + N_r(x, y)\phi_r = N_i\phi_i.    (7.44)

This is a convenient way to represent the finite element approximation within an element, since, once the grid point values are known, \phi can be evaluated anywhere in the element.

An interesting observation is that, if the shape functions are evaluated at the triangle centroid, N_1 = N_2 = N_3 = 1/3, since each grid point has equal influence on the centroid. To prove this result, substitute the coordinates of the centroid,

    \bar{x} = (x_1 + x_2 + x_3)/3,    (7.45)
    \bar{y} = (y_1 + y_2 + y_3)/3,    (7.46)

into N_1 in Eq. 7.37, and use Eq. 7.33:

    N_1(\bar{x}, \bar{y}) = [(x_2y_3 - x_3y_2) + (y_2 - y_3)\bar{x} + (x_3 - x_2)\bar{y}]/(2A) = \frac{1}{3}.    (7.47)

Also, since the shape functions for the 3-point triangle are linear in x and y, their derivatives are constant; for example,

    \frac{\partial N_1}{\partial x} = (y_2 - y_3)/(2A),    (7.48)
    \frac{\partial N_1}{\partial y} = (x_3 - x_2)/(2A).    (7.49)

Thus, the gradients of \phi in the x and y directions are constant within an element:

    \frac{\partial\phi}{\partial x} = \frac{\partial N_1}{\partial x}\phi_1 + \frac{\partial N_2}{\partial x}\phi_2 + \frac{\partial N_3}{\partial x}\phi_3 = [(y_2 - y_3)\phi_1 + (y_3 - y_1)\phi_2 + (y_1 - y_2)\phi_3]/(2A),    (7.50)
    \frac{\partial\phi}{\partial y} = \frac{\partial N_1}{\partial y}\phi_1 + \frac{\partial N_2}{\partial y}\phi_2 + \frac{\partial N_3}{\partial y}\phi_3 = [(x_3 - x_2)\phi_1 + (x_1 - x_3)\phi_2 + (x_2 - x_1)\phi_3]/(2A).    (7.51)

A constant gradient within any element implies that many small elements would have to be used wherever \phi is changing rapidly.

Example. The displacement u(x) of an axial structural member (a pin-jointed truss member) (Fig. 58) is given by

[Figure 58: Axial Member (Pin-Jointed Truss Element), with end points 1 and 2 and length L.]

    u(x) = N_1(x)u_1 + N_2(x)u_2,    (7.52)
where u_1 and u_2 are the axial displacements at the two end points. For a linear variation of displacement along the length, it follows that N_i must be a linear function which is unity at Point i and zero at the other end. Thus,

    N_1(x) = 1 - \frac{x}{L},  N_2(x) = \frac{x}{L}    (7.53)

or

    u(x) = \left(1 - \frac{x}{L}\right)u_1 + \frac{x}{L}u_2.    (7.54)

7.4 Variational Approach

Consider Poisson's equation in a two-dimensional domain subject to both Dirichlet and Neumann boundary conditions. We consider a slight generalization of the problem of Eq. 7.14:

    \frac{\partial^2\phi}{\partial x^2} + \frac{\partial^2\phi}{\partial y^2} + f = 0  in A,    (7.55a)
    \phi = \phi_0  on S_1,    (7.55b)
    \frac{\partial\phi}{\partial n} + g + h\phi = 0  on S_2,    (7.55c)

where A is the domain, S_1 and S_2 are curves in 2D, and f, g, and h may depend on position. The difference between this problem and that of Eq. 7.14 is that the h term has been added to the boundary condition on S_2. The h term could arise in a variety of physical situations, including heat transfer due to convection (where the heat flux is proportional to temperature) and free surface flow problems (where the free surface and radiation boundary conditions are both of this type).

The functional which must be minimized for this boundary value problem is similar to that of Eq. 7.24 except that an additional term must be added to account for the h term in the boundary condition on S_2. It can be shown that the functional which must be minimized for this problem is

    I(\phi) = \int_A \left\{\frac{1}{2}\left[\left(\frac{\partial\phi}{\partial x}\right)^2 + \left(\frac{\partial\phi}{\partial y}\right)^2\right] - f\phi\right\} dA + \int_{S_2} \left(g\phi + \frac{1}{2}h\phi^2\right) dS.    (7.56)

With a finite element discretization, we do not define a single smooth function \phi over the entire domain, but instead define \phi over individual elements. Assume that the domain has been subdivided into a mesh of triangular finite elements similar to that shown in Fig. 56. Since I is an integral, it can be evaluated by summing over the elements:

    I = I_1 + I_2 + I_3 + \cdots = \sum_{e=1}^E I_e,    (7.57)

where E is the number of elements. The variation of I is also computed by summing over the elements:

    \delta I = \sum_{e=1}^E \delta I_e.    (7.58)
For one element,

    I_e = \int_A \left(\frac{1}{2}\phi_{,k}\phi_{,k} - f\phi\right) dA + \int_{S_2} \left(g\phi + \frac{1}{2}h\phi^2\right) dS.    (7.59)

The last two terms in this functional make a nonzero contribution only if an element abuts S_2 (i.e., if one edge of the element coincides with S_2). As can be seen from Fig. 59, for two elements which share a common internal edge, the unknown \phi is continuous along that edge, and the normal derivative \partial\phi/\partial n is of equal magnitude and opposite sign on the two sides, so that the individual contributions of boundary integrals of the form

    \int \frac{\partial\phi}{\partial n}\phi\,dS,

which arise from Eq. 7.29 and the boundary condition of Eq. 7.55c, cancel each other out on internal boundaries.

[Figure 59: Neumann Boundary Condition at Internal Boundary (elements 8 and 9, grid points 15, 17, 29, 35, with equal and opposite \partial\phi/\partial n along the shared edge).]

The degrees of freedom which define the function \phi over the entire domain are the nodal (grid point) values \phi_i, since, given all the \phi_i, \phi is known everywhere using the element shape functions. Thus, to minimize I, we differentiate with respect to each \phi_i and set the result to zero. For a typical element,

    \frac{\partial I_e}{\partial\phi_i} = \int_A \left[\frac{1}{2}\frac{\partial}{\partial\phi_i}(\phi_{,k}\phi_{,k}) - f\frac{\partial\phi}{\partial\phi_i}\right] dA + \int_{S_2} \left(g\frac{\partial\phi}{\partial\phi_i} + h\phi\frac{\partial\phi}{\partial\phi_i}\right) dS,    (7.60)

where \phi = N_j\phi_j implies

    \phi_{,k} = (N_j\phi_j)_{,k} = N_{j,k}\phi_j,    (7.61)
    \frac{\partial\phi}{\partial\phi_i} = \frac{\partial}{\partial\phi_i}(N_j\phi_j) = N_j\delta_{ij} = N_i,    (7.62)
    \frac{\partial\phi_{,k}}{\partial\phi_i} = N_{j,k}\delta_{ij} = N_{i,k},    (7.63)
    \frac{\partial}{\partial\phi_i}(\phi_{,k}\phi_{,k}) = 2\phi_{,k}\frac{\partial\phi_{,k}}{\partial\phi_i} = 2\phi_{,k}N_{i,k} = 2N_{j,k}\phi_jN_{i,k}.    (7.64)

We now substitute these last three equations into Eq. 7.60 to obtain

    \frac{\partial I_e}{\partial\phi_i} = \int_A (N_{j,k}\phi_jN_{i,k} - fN_i)\,dA + \int_{S_2} (gN_i + hN_j\phi_jN_i)\,dS    (7.65)
      = \left(\int_A N_{i,k}N_{j,k}\,dA\right)\phi_j - \int_A fN_i\,dA + \int_{S_2} gN_i\,dS + \left(\int_{S_2} hN_iN_j\,dS\right)\phi_j    (7.66)
      = K_{ij}\phi_j - F_i + H_{ij}\phi_j,    (7.67)

which must vanish for I to be a minimum.
Thus, for each element,

    K^e_{ij} = \int_A N_{i,k}N_{j,k}\,dA,    (7.68)
    F^e_i = \int_A fN_i\,dA - \int_{S_2} gN_i\,dS,    (7.69)
    H^e_{ij} = \int_{S_2} hN_iN_j\,dS.    (7.70)

The superscript e in the preceding equations indicates that we have computed the contribution for one element. The two terms in F correspond to the body force and Neumann boundary condition, respectively. The second term of the "load" F is present only if Point i is on S_2, and the matrix entries in H apply only if Points i, j are both on the boundary.

We must combine the contributions for all elements. Consider two adjacent elements, as illustrated in Fig. 60, where the circled numbers are the element labels.

[Figure 60: Two Adjacent Finite Elements (Element 8 with grid points 15, 17, 29; Element 9 with grid points 17, 29, 35).]

For Element 8, for the special case h = 0,

    \partial I_8/\partial\phi_{15} = K^8_{15,15}\phi_{15} + K^8_{15,17}\phi_{17} + K^8_{15,29}\phi_{29} - F^8_{15},    (7.71)
    \partial I_8/\partial\phi_{17} = K^8_{17,15}\phi_{15} + K^8_{17,17}\phi_{17} + K^8_{17,29}\phi_{29} - F^8_{17},    (7.72)
    \partial I_8/\partial\phi_{29} = K^8_{29,15}\phi_{15} + K^8_{29,17}\phi_{17} + K^8_{29,29}\phi_{29} - F^8_{29}.    (7.73)

Similarly, for Element 9,

    \partial I_9/\partial\phi_{17} = K^9_{17,17}\phi_{17} + K^9_{17,29}\phi_{29} + K^9_{17,35}\phi_{35} - F^9_{17},    (7.74)
    \partial I_9/\partial\phi_{29} = K^9_{29,17}\phi_{17} + K^9_{29,29}\phi_{29} + K^9_{29,35}\phi_{35} - F^9_{29},    (7.75)
    \partial I_9/\partial\phi_{35} = K^9_{35,17}\phi_{17} + K^9_{35,29}\phi_{29} + K^9_{35,35}\phi_{35} - F^9_{35}.    (7.76)

To evaluate

    \frac{\partial I}{\partial\phi_i} = \sum_e \frac{\partial I_e}{\partial\phi_i},

we sum on e for fixed i.
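The element-by-element summation just described amounts to a scatter-add of element matrices into a global matrix. The sketch below mimics the two elements of Fig. 60; the element matrices are placeholders standing in for the integrals of Eq. 7.68, chosen only to make the accumulation visible:

```python
# Scatter-add assembly for the two elements of Fig. 60 (h = 0 case).
elements = {
    8: [15, 17, 29],   # Element 8 connects grid points 15, 17, 29
    9: [17, 29, 35],   # Element 9 connects grid points 17, 29, 35
}

def element_K(nodes):
    # placeholder symmetric 3x3 matrix standing in for Eq. 7.68's integrals
    return [[2.0 if a == b else -1.0 for b in range(3)] for a in range(3)]

K = {}   # global stiffness, keyed by (grid point, grid point)
for e, nodes in elements.items():
    Ke = element_K(nodes)
    for a, i in enumerate(nodes):
        for b, j in enumerate(nodes):
            K[(i, j)] = K.get((i, j), 0.0) + Ke[a][b]

# shared points 17 and 29 accumulate contributions from both elements
assert K[(17, 17)] == 4.0    # 2.0 from Element 8 + 2.0 from Element 9
assert K[(15, 15)] == 2.0    # Element 8 only
assert K[(17, 35)] == -1.0   # Element 9 only
```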
For example, for i = 17,

    \frac{\partial I_8}{\partial\phi_{17}} + \frac{\partial I_9}{\partial\phi_{17}} = K^8_{17,15}\phi_{15} + (K^8_{17,17} + K^9_{17,17})\phi_{17} + (K^8_{17,29} + K^9_{17,29})\phi_{29} + K^9_{17,35}\phi_{35} - F^8_{17} - F^9_{17}.    (7.77)

This process is then repeated for all grid points and all elements. To minimize the functional, the individual sums are set to zero, resulting in a set of linear algebraic equations of the form

    K\phi = F    (7.78)

or, for the more general case where h \ne 0,

    (K + H)\phi = F,    (7.79)

where \phi is the vector of unknown nodal values of \phi, and F is the vector of forcing functions at the nodes. Because of the historical developments in structural mechanics, K is sometimes referred to as the stiffness matrix.

For each element, the contributions to K, F, and H are

    K_{ij} = \int_A N_{i,k}N_{j,k}\,dA,    (7.80)
    F_i = \int_A fN_i\,dA - \int_{S_2} gN_i\,dS,    (7.81)
    H_{ij} = \int_{S_2} hN_iN_j\,dS.    (7.82)

Note that, in three dimensions, Eq. 7.80 still applies, except that the sum on k extends over the range 1-3, and the integration is over the element volume. The sum on k in the first integral can be expanded to yield

    K_{ij} = \int_A \left(\frac{\partial N_i}{\partial x}\frac{\partial N_j}{\partial x} + \frac{\partial N_i}{\partial y}\frac{\partial N_j}{\partial y}\right) dA.    (7.83)

The Dirichlet boundary condition \phi = \phi_0 must be explicitly imposed and is referred to as an essential boundary condition. Note also in Eq. 7.81 that, if g = 0, there is no contribution to the right-hand side vector F. Thus, to implement the Neumann boundary condition \partial\phi/\partial n = 0 at a boundary, the boundary is left free. The zero gradient boundary condition is the natural boundary condition for this formulation, since a zero gradient results by default if the boundary is left free.

7.5 Matrices for Linear Triangle

Consider the linear three-node triangle, for which, from Eq. 7.39, the shape functions are given by

    N_i(x, y) = \frac{1}{2A}[(x_jy_k - x_ky_j) + (y_j - y_k)x + (x_k - x_j)y],    (7.84)

where ijk is any cyclic permutation of 123.
where ijk can be any cyclic permutation of 123 (ijk = 123, ijk = 231, or ijk = 312), and A is the area of the triangular element. To compute K from Eq. 7.83, we first compute the derivatives

  ∂N_i/∂x = (y_j − y_k)/(2A) = b_i/(2A),   (7.85)
  ∂N_i/∂y = (x_k − x_j)/(2A) = c_i/(2A),   (7.86)

where b_i and c_i are defined as

  b_i = y_j − y_k,  c_i = x_k − x_j,   (7.87)

using the shorthand notation that ijk is a cyclic permutation of 123. By a cyclic permutation of the indices, we obtain

  ∂N_j/∂x = (y_k − y_i)/(2A) = b_j/(2A),   (7.88)
  ∂N_j/∂y = (x_i − x_k)/(2A) = c_j/(2A).   (7.89)

For the linear triangle, these derivatives are all constant and hence can be removed from the integral to yield

  K_ij = (b_i b_j + c_i c_j) A / (4A²),   (7.90)

that is, the i, j contribution for one element is

  K_ij = (b_i b_j + c_i c_j) / (4A),   (7.91)

where i and j each have the range 123. Thus,

  K_11 = (b_1² + c_1²)/(4A) = [(y_2 − y_3)² + (x_3 − x_2)²]/(4A),   (7.92)
  K_22 = (b_2² + c_2²)/(4A) = [(y_3 − y_1)² + (x_1 − x_3)²]/(4A),   (7.93)
  K_33 = (b_3² + c_3²)/(4A) = [(y_1 − y_2)² + (x_2 − x_1)²]/(4A),   (7.94)
  K_12 = K_21 = (b_1 b_2 + c_1 c_2)/(4A) = [(y_2 − y_3)(y_3 − y_1) + (x_3 − x_2)(x_1 − x_3)]/(4A),   (7.95)
  K_13 = K_31 = (b_1 b_3 + c_1 c_3)/(4A) = [(y_2 − y_3)(y_1 − y_2) + (x_3 − x_2)(x_2 − x_1)]/(4A),   (7.96)
  K_23 = K_32 = (b_2 b_3 + c_2 c_3)/(4A) = [(y_3 − y_1)(y_1 − y_2) + (x_1 − x_3)(x_2 − x_1)]/(4A).   (7.97)

Note that K depends only on differences in grid point coordinates; that is, two elements that are geometrically congruent and differ only by a translation will have the same element matrix.

To compute the right-hand side contributions as given in Eq. 7.81, consider first the contribution of the source term f. Since there are three grid points in the element, we calculate F_1, which is typical:

  F_1 = ∫_A f N_1 dA = (1/2A) ∫_A f [(x_2 y_3 − x_3 y_2) + (y_2 − y_3)x + (x_3 − x_2)y] dA.   (7.98)
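The stiffness formula (7.91) is short enough to check directly. A minimal sketch (the node coordinates are an arbitrary example, not from the text):

```python
# Stiffness matrix of a linear triangle from Eq. 7.91:
#   K_ij = (b_i*b_j + c_i*c_j) / (4A),  b_i = y_j - y_k,  c_i = x_k - x_j,
# with ijk a cyclic permutation of 123.

def triangle_stiffness(xy):
    (x1, y1), (x2, y2), (x3, y3) = xy
    b = [y2 - y3, y3 - y1, y1 - y2]
    c = [x3 - x2, x1 - x3, x2 - x1]
    A = 0.5 * ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))  # signed area
    return [[(b[i] * b[j] + c[i] * c[j]) / (4.0 * A) for j in range(3)]
            for i in range(3)]

K = triangle_stiffness([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
# K is symmetric and each row sums to zero (a constant phi produces no
# "flux"); a translated copy of the element gives the identical matrix,
# since K depends only on coordinate differences.
print([sum(row) for row in K])  # [0.0, 0.0, 0.0]
```

For the unit right triangle shown, the result is K_11 = 1, K_22 = K_33 = 1/2, K_12 = K_13 = −1/2, K_23 = 0.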
We now consider the special case of constant f = f̂. Since

  ∫_A dA = A,  ∫_A x dA = x̄ A,  ∫_A y dA = ȳ A,   (7.99)

where (x̄, ȳ) is the centroid of the triangle given by Eq. 7.47, it follows that

  F_1 = (f̂/2A) [(x_2 y_3 − x_3 y_2) + (y_2 − y_3)x̄ + (x_3 − x_2)ȳ] A.   (7.100)

From Eq. 7.48, the expression in brackets is given by 2A/3. Thus,

  F_1 = f̂ A/3,   (7.101)

and similarly

  F_1 = F_2 = F_3 = f̂ A/3.   (7.102)

That is, for a uniform f, 1/3 of the total element "load" is applied to each grid point. For nonuniform f, the integral could be computed using natural (or area) coordinates for the triangle [7].

For the second term of Eq. 7.81, F_i = −∫_{S2} g N_i dS, Point i must be on an element edge which lies on the boundary S2. Since the integration involving g is on the boundary, the only shape functions needed are those which describe the interpolation of φ on the boundary. Since the triangular shape functions are linear, the boundary shape functions are

  N_1 = 1 − x/L,  N_2 = x/L,   (7.103)

where the subscripts 1 and 2 refer to the two element edge grid points on the boundary S2 (Fig. 61), x is a local coordinate along the element edge, and L is the length of the element edge. Hence, for the uniform special case g = ĝ, where ĝ is a constant,

  F_1 = −ĝ ∫_0^L (1 − x/L) dx = −ĝL/2.   (7.104)
Similarly,

  F_2 = −ĝ ∫_0^L (x/L) dx = −ĝL/2,   (7.105)

so that

  F_1 = F_2 = −ĝL/2.   (7.106, 7.107)

That is, for a uniform g, 1/2 of the total element "load" is applied to each grid point.

Consider some triangular elements adjacent to a boundary, as shown in Fig. 61.

Figure 61: Triangular Mesh at Boundary.

To calculate H using Eq. 7.82, we first note that Points i, j must both be on the boundary for this matrix to contribute. Since the integration in Eq. 7.82 is on the boundary, the only shape functions needed are those which describe the interpolation of φ on the boundary. Thus, using the boundary shape functions of Eq. 7.103, for constant h = ĥ,

  H_11 = ĥ ∫_0^L (1 − x/L)² dx = ĥL/3,   (7.108)
  H_22 = ĥ ∫_0^L (x/L)² dx = ĥL/3,   (7.109)
  H_12 = H_21 = ĥ ∫_0^L (x/L)(1 − x/L) dx = ĥL/6.   (7.110)

Hence, for the two points defining a triangle edge on the boundary S2,

  H = (ĥL/6) [2 1; 1 2].   (7.111)

7.6 Interpretation of Functional

Now that the variational problem has been solved (i.e., the functional I has been minimized), we can evaluate I. We recall from Eq. 7.59 (with h = 0) that, for the two-dimensional Poisson equation,

  I(φ) = ∫_A (½ φ,k φ,k − f φ) dA + ∫_{S2} g φ dS,   (7.112)

where

  φ = N_i φ_i.   (7.113)

Since φ_i is independent of position, it follows that

  φ,k = N_{i,k} φ_i.   (7.114)

Thus,

  I(φ) = ∫_A (½ N_{i,k} φ_i N_{j,k} φ_j − f N_i φ_i) dA + ∫_{S2} g N_i φ_i dS   (7.115)
       = ½ φ_i (∫_A N_{i,k} N_{j,k} dA) φ_j − (∫_A f N_i dA − ∫_{S2} g N_i dS) φ_i   (7.116)
       = ½ φ_i K_ij φ_j − F_i φ_i  (index notation)   (7.117)
       = ½ φᵀKφ − φᵀF  (matrix notation)
       = ½ φ·Kφ − φ·F  (vector notation),   (7.118)
where the last result has been written in index, matrix, and vector notations. In solid mechanics, I corresponds to the total potential energy: the first term is the strain energy, and the second term is the potential of the applied loads.

The first term for I is a quadratic form. Since strain energy is zero only for zero strain (corresponding to rigid body motion), it follows that the stiffness matrix K is positive semidefinite. For well-posed problems (which allow no rigid body motion), K is positive definite. (By definition, a matrix K is positive definite if φᵀKφ > 0 for all φ ≠ 0.) Three consequences of positive-definiteness are

1. The matrix is nonsingular.
2. Gaussian elimination can be performed without pivoting.
3. All eigenvalues of K are positive.

7.7 Stiffness in Elasticity in Terms of Shape Functions

In §4.10 (p. 45), the Principle of Virtual Work was used to obtain the element stiffness matrix for an elastic finite element as (Eq. 4.67)

  K = ∫_V CᵀDC dV,   (7.119)

where D is the symmetric matrix of material constants relating stress and strain, and C is the matrix converting grid point displacements to strain: ε = Cu.

For the constant strain triangle (CST) in 2D, the fundamental unknowns u and v (the x and y components of displacement) can both be expressed in terms of the three shape functions defined previously:

  u = N_i u_i,  v = N_i v_i,   (7.120)

where the summation convention is used. Thus, in 2D, the strains can be expressed as

  ε = [ε_xx; ε_yy; γ_xy] = [u,x; v,y; u,y + v,x] = [N_{i,x} u_i; N_{i,y} v_i; N_{i,y} u_i + N_{i,x} v_i] = C u,   (7.121, 7.122)

where the element displacement vector is u = [u_1 v_1 u_2 v_2 u_3 v_3]ᵀ.
x (7.8 Element Compatibility In the variational approach to the ﬁnite element method. 7. Thus. 81 .x 0 C = 0 N1.y N3. the slope is continuous).x N3. φ(x) must be continuous.y . since some singularities cannot be integrated. It was also assumed that the integral evaluated over some domain was equal to the sum of the integrals over the elements. but we do not allow singularities in the integrand.y 0 N2.y N1. on the other hand. N1. dx2 (7. will not be allowed.y N2.. We wish to investigate brieﬂy the conditions which must hold for this assumption to be valid. but with kinks (slope discontinuities).x 0 N3. Thus.124) For the integral I to be welldeﬁned. as illustrated in Fig.119 is a matrix of shape function derivatives: N1. and φ is smooth (i. from Eq. C in Eq.126) the integrand φ may have simple discontinuities. simple jump discontinuities in φ are allowed.e. in which case φ is continuous with kinks. Now consider the onedimensional integral b I= a dφ(x) dx. what condition is necessary for the integral over a domain to be equal to the sum of the integrals over the elements? Consider the onedimensional integral b I= a φ(x) dx. 7. That is. an integral was minimized. the integrand may be discontinuous. C would have as many rows as there are independent components of strain (3 in 2D and 6 in 3D) and as many columns as there are DOF in the element. In the integral b I= a d2 φ(x) dx.x N2. 62.125) Since the integrand dφ(x)/dx may be discontinuous. for any functional of interest in ﬁnite element analysis.x 0 N2. Singularities in φ.y 0 N3.φ T ppp E x a b Figure 62: Discontinuous Function.123) In general. we conclude that. (7. dx (7. 7.120.
That is, we conclude that the smoothness required of φ depends on the highest order derivative of φ appearing in the integrand: the field variable φ and any of its partial derivatives up to one order less than the highest order derivative appearing in I(φ) must be continuous; otherwise, I(φ) might be infinite.

For example, consider the Poisson equation in 2D,

  ∂²φ/∂x² + ∂²φ/∂y² + f = 0,   (7.127)

for which the functional to be minimized is

  I(φ) = ∫_A { ½ [ (∂φ/∂x)² + (∂φ/∂y)² ] − f φ } dA.   (7.128)

Since the highest derivative in I is first order, φ must be continuous in the finite element approximation, but the first derivatives φ,x and φ,y may have simple discontinuities at the element interfaces, and the second derivatives φ,xx and φ,yy that appear in the partial differential equation may not even exist at the element interfaces. Note also that the Poisson equation is a second order equation, but φ need only be continuous in the finite element approximation. In general, one of the strengths of the variational approach is that I(φ) involves derivatives of lower order than in the original PDE.

The requirement that φ be continuous at element interfaces is referred to as the compatibility requirement or the conforming requirement. For the 3-node triangular element already formulated, the shape function is linear. Thus, given φ_1 and φ_2, φ varies linearly between φ_1 and φ_2 along the line 1–2, which implies that, in Fig. 63, φ is the same at the midpoint P for both elements. Thus, at element interfaces, φ is continuous.

Figure 63: Compatibility at an Element Boundary.

In elasticity, the functional I(φ) has the physical interpretation of total potential energy, including strain energy. A nonconforming element would result in a discontinuous displacement at the element boundaries (i.e., a gap or overlap in the model along the line 1–2), which would correspond to infinite strain energy. In fluid mechanics, a discontinuous φ (which is not allowed) corresponds to a singularity in the velocity field.

Thus, the approximation inherent in a displacement-based finite element method is that the displacements are continuous. However, note that having displacement continuous implies that the displacement gradients (which are proportional to the stresses) are, in general, discontinuous at the element boundaries. This property is one of the fundamental characteristics of an approximate numerical solution: if all quantities of interest (e.g., displacements and stresses) were continuous, the solution would be an exact solution rather than an approximate solution.
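The conforming property of the linear triangle can be checked numerically: along a shared edge, the interpolated φ depends only on the two edge values, so both elements agree at every edge point. A minimal sketch (coordinates and nodal values are arbitrary):

```python
# Two linear triangles sharing edge 1-2: along that edge the interpolated
# phi varies linearly between phi_1 and phi_2, so both elements give the
# same value at every edge point -- the elements "conform".

def interp(tri, phi, x, y):
    """Linear-triangle interpolation using the shape functions of Eq. 7.84."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    A2 = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)  # 2A
    val = 0.0
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        xj, yj = tri[j]
        xk, yk = tri[k]
        Ni = ((xj * yk - xk * yj) + (yj - yk) * x + (xk - xj) * y) / A2
        val += Ni * phi[i]
    return val

p1, p2 = (0.0, 0.0), (2.0, 1.0)
triA = [p1, p2, (0.5, 2.0)]    # element on one side of edge 1-2
triB = [p2, p1, (1.5, -1.0)]   # element on the other side (reversed order)
phiA = [3.0, 7.0, -2.0]        # phi_1, phi_2, and each element's third node
phiB = [7.0, 3.0, 10.0]
mid = (1.0, 0.5)               # midpoint P of the shared edge
print(interp(triA, phiA, *mid), interp(triB, phiB, *mid))  # 5.0 5.0
```

The third-node values (−2 and 10) differ wildly, yet both elements return the same φ at the midpoint P, exactly as argued above.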
7.9 Method of Weighted Residuals (Galerkin's Method)

Here we discuss an alternative to the use of a variational principle when the functional is either unknown or does not exist (e.g., for nonlinear equations). We consider the Poisson equation

  ∇²φ + f = 0.   (7.129)

In the finite element approximation, we are trying to approximate an infinite DOF problem (the PDE) with a finite number of DOF (the finite element model). For an approximate solution φ̃,

  ∇²φ̃ + f = R ≠ 0,   (7.130)

where R is referred to as the residual or error. The best approximate solution will be one which in some sense minimizes R at all points of the domain. Note that, if R = 0 in the domain, then

  ∫_V R W dV = 0,   (7.131)

where W is any function of the spatial coordinates; W is referred to as a weighting function. For an approximate solution, we require instead that this weighted residual vanish for a finite set of weighting functions. With n DOF in the domain, n functions W_i can be chosen:

  ∫_V R W_i dV = 0,  i = 1, 2, ..., n.   (7.132)

This approach is called the method of weighted residuals. Various choices of W_i are possible. When W_i = N_i (the shape functions), the process is called Galerkin's method. The motivation for using the shape functions N_i as the weighting functions is that we want the residual (the error) orthogonal to the shape functions.

Consider an analogous problem in vector analysis, as shown in Fig. 64, where we want to approximate a vector v in 3D with another vector in 2D. That is, we are attempting to approximate v with a lesser number of DOF. The "best" 2D approximation to the 3D vector v is the projection u in the plane. The error in this approximation is

  R = v − u,   (7.133)

which is orthogonal to the xy-plane. That is, the error R is orthogonal to the basis vectors e_x and e_y (the vectors used to approximate v): R·e_x = 0 and R·e_y = 0.   (7.134)

Figure 64: A Vector Analogy for Galerkin's Method.
In the finite element problem, the approximating functions are the shape functions N_i. The counterpart to the dot product is the integral

  ∫_V R N_i dV = 0.   (7.135)

That is, for each i, the residual R is orthogonal to its approximating functions, the shape functions. The integral in Eq. 7.135 must hold over the entire domain V or any portion of the domain, e.g., an element. Thus, for Poisson's equation, for one element,

  0 = ∫_V (∇²φ + f) N_i dV   (7.136)
    = ∫_V [(φ,k N_i),k − φ,k N_{i,k}] dV + ∫_V f N_i dV.   (7.137)

The first term is converted to a surface integral using the divergence theorem to obtain

  0 = ∫_S φ,k n_k N_i dS − ∫_V φ,k N_{i,k} dV + ∫_V f N_i dV,   (7.138)

where

  φ,k n_k = ∇φ·n = ∂φ/∂n.   (7.139)

Thus,

  0 = ∫_S (∂φ/∂n) N_i dS − ∫_V φ,k N_{i,k} dV + ∫_V f N_i dV.   (7.140)

In the finite element approximation, φ = N_j φ_j, so that

  φ,k = (N_j φ_j),k = N_{j,k} φ_j   (7.141)

and

  0 = − (∫_V N_{i,k} N_{j,k} dV) φ_j + ∫_V f N_i dV + ∫_S (∂φ/∂n) N_i dS   (7.142)
    = −K_ij φ_j + F_i,   (7.143)

where

  K_ij = ∫_V N_{i,k} N_{j,k} dV,   (7.144)
  F_i = ∫_V f N_i dV + ∫_S (∂φ/∂n) N_i dS.   (7.145)

Hence, in matrix notation,

  Kφ = F.   (7.146)

From Eq. 7.55, ∂φ/∂n is specified on S2 and unknown a priori on S1, where φ = φ0 is specified. At points where φ is specified, the Dirichlet boundary conditions are handled like displacement boundary conditions in structural problems; ∂φ/∂n is the "reaction" to the specified φ.
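The Galerkin system Kφ = F can be exercised on the simplest model problem, −u″ = f on (0,1) with u(0) = u(1) = 0 and linear "hat" shape functions; the weighted-residual equations then give the familiar tridiagonal system. A minimal sketch (the choice f = 1 and the mesh size are arbitrary):

```python
# Galerkin's method for -u'' = f on (0,1), u(0) = u(1) = 0, with linear
# "hat" shape functions N_i on a uniform mesh.  The equations
# \int (u' N_i' - f N_i) dx = 0 give a tridiagonal system, solved here by
# Gaussian elimination without pivoting (the matrix is positive definite).

def galerkin_1d(n, f=1.0):
    h = 1.0 / n
    m = n - 1                       # interior nodes
    K = [[0.0] * m for _ in range(m)]
    F = [f * h] * m                 # consistent load: \int f N_i dx = f*h
    for i in range(m):
        K[i][i] = 2.0 / h
        if i + 1 < m:
            K[i][i + 1] = K[i + 1][i] = -1.0 / h
    for i in range(1, m):           # forward elimination
        r = K[i][i - 1] / K[i - 1][i - 1]
        K[i][i] -= r * K[i - 1][i]
        F[i] -= r * F[i - 1]
    u = [0.0] * m                   # back substitution
    u[m - 1] = F[m - 1] / K[m - 1][m - 1]
    for i in range(m - 2, -1, -1):
        u[i] = (F[i] - K[i][i + 1] * u[i + 1]) / K[i][i]
    return u

u = galerkin_1d(8)
# For f = 1 the exact solution is u = x(1-x)/2; in this 1D problem the
# nodal values of the linear-element Galerkin solution happen to be exact.
print(abs(u[3] - 0.125) < 1e-12)  # True: u(1/2) = 1/8
```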
Galerkin's method thus results in algebraic equations identical to those derived from a variational principle. When a variational principle does exist, the two approaches yield the same results. However, Galerkin's method is more general, since sometimes a variational principle may not exist for a given problem; when the variational principle does not exist or is unknown, Galerkin's method can still be used to derive a finite element model.

8 Potential Fluid Flow With Finite Elements

In potential flow, the fluid is assumed to be inviscid and incompressible. This mathematical model of fluid behavior is useful for some situations. Since there are no shearing stresses in the fluid, the fluid slips tangentially along boundaries.

Define a velocity potential φ such that the velocity v = ∇φ. That is, in 3D,

  v_x = ∂φ/∂x,  v_y = ∂φ/∂y,  v_z = ∂φ/∂z.   (8.1)

It can be shown that, within the domain occupied by the fluid,

  ∇²φ = 0.   (8.2)

Various boundary conditions are of interest. At a fixed boundary, where the normal velocity vanishes,

  v_n = v·n = ∇φ·n = ∂φ/∂n = 0,   (8.3)

where n is the unit outward normal on the boundary. On a boundary where the velocity is specified,

  ∂φ/∂n = v̂_n.   (8.4)

We will see later in the discussion of symmetry that, at a plane of symmetry for the potential φ,

  ∂φ/∂n = 0,   (8.5)

where n is the unit normal to the plane. At a plane of antisymmetry, φ = 0.

The boundary value problem for flow around a solid body is illustrated in Fig. 65. In this example, far away from the body, where the velocity is known,

  v_x = ∂φ/∂x = v_∞,   (8.6)

which is specified. As is, the problem posed in Fig. 65 is not a well-posed problem, because only conditions on the derivative ∂φ/∂n are specified: for any solution φ, φ + c is also a solution for any constant c. Therefore, for uniqueness, we must specify φ somewhere in the domain. Thus, the potential flow boundary value problem is that the velocity potential φ satisfies

  ∇²φ = 0 in V,
  φ = φ̂ on S1,
  ∂φ/∂n = v̂_n on S2,   (8.7)

where v = ∇φ in V.
Figure 65: Potential Flow Around Solid Body. (On the outer boundary, ∂φ/∂n = −∂φ/∂x = −v_∞ at the left, ∂φ/∂n = ∂φ/∂x = v_∞ at the right, and ∂φ/∂n = ±∂φ/∂y = 0 at the top and bottom; on the body, ∂φ/∂n = 0.)

8.1 Finite Element Model

A finite element model of the potential flow problem results in the equation

  Kφ = F,   (8.8)

where the contributions to K and F for each element are

  K_ij = ∫_A N_{i,k} N_{j,k} dA = ∫_A (∂N_i/∂x ∂N_j/∂x + ∂N_i/∂y ∂N_j/∂y) dA,   (8.9)
  F_i = ∫_{S2} v̂_n N_i dS,   (8.10)

where v̂_n is the specified velocity on S2 in the outward normal direction, and N_i is the shape function for the ith grid point in the element.

Once the velocity potential φ is known, the pressure can be found using the steady-state Bernoulli equation

  ½ v² + g y + p/ρ = c = constant,   (8.11)

where v is the velocity magnitude given (in 2D) by

  v² = (∂φ/∂x)² + (∂φ/∂y)²,   (8.12)

g y is the (frequently ignored) body force potential, g is the acceleration due to gravity, y is the height above some reference plane, p is pressure, and ρ is the fluid density. The constant c is evaluated using a location where v is known (e.g., at infinity, v_∞). For example, if we ignore g y and pick p = 0 (ambient) at infinity,

  c = ½ v_∞².   (8.13)
Thus, in general,

  p/ρ + ½ v² = ½ v_∞².   (8.14)

8.2 Application of Symmetry

Consider the 2D potential flow around a circular cylinder, as shown in Fig. 66.

Figure 66: Streamlines Around Circular Cylinder.

The velocity field is symmetric with respect to y = 0, and the velocity field is antisymmetric with respect to the plane x = 0. Consider first the symmetry plane y = 0, as shown in Fig. 67, where the velocity vectors at a point P and its image P′ are mirror images of each other; in particular, (v_x)_P = (v_x)_{P′}.

Figure 67: Symmetry With Respect to y = 0.

As P and P′ get close to the axis y = 0, the velocities at P and P′ must converge to each other, since P and P′ become the same point in the plane y = 0. Thus,

  v_y = ∂φ/∂y = 0 for y = 0.   (8.15)

The y-direction in this case is the normal to the symmetry plane y = 0. Thus, we conclude that, for points in a symmetry plane with normal n,

  ∂φ/∂n = 0.   (8.16)

Now consider the plane x = 0, with respect to which the velocity field is antisymmetric: the velocity vectors at P and P′ can be transformed into each other by a reflection and a negation of sign, as shown in Fig. 68. Note from Fig. 65 that the specified normal velocities at the two x extremes are of opposite signs; thus, from Eq. 8.10, the right-hand side "loads" in Eq. 8.8 are equal in magnitude and opposite in sign for the left and right boundaries. If we change the direction of flow (i.e., make it right to left), the flow field is reflected about x = 0, so that

  (φ_P)_flow right = (φ_{P′})_flow left.   (8.18)
Figure 68: Antisymmetry With Respect to x = 0.

However, changing the direction of flow also means that F in Eq. 8.8 becomes −F, and, if the sign of F changes, the sign of the solution also changes:

  (φ_P)_flow right = −(φ_{P′})_flow left.   (8.19)

Combining the last two equations yields

  (φ_P)_flow right = −(φ_{P′})_flow right.

If we now let P and P′ converge to the plane x = 0 and become the same point, we obtain

  (φ)_{x=0} = −(φ)_{x=0}   (8.20)

or

  (φ)_{x=0} = 0.   (8.21)

Thus, we conclude that, in general, for points in a symmetry plane with normal n for which the solution is antisymmetric, φ = 0. The key to recognizing antisymmetry is to have a symmetric geometry with an antisymmetric "loading" (RHS).

Thus, the problem of 2D potential flow around a circular cylinder has two planes of geometric symmetry, y = 0 and x = 0, and can be solved using a one-quarter model, for which the boundary value problem is shown in Fig. 69.

Figure 69: Boundary Value Problem for Flow Around Circular Cylinder. (∇²φ = 0 in the quarter domain; ∂φ/∂n = v_∞ at the inlet; ∂φ/∂n = 0 on the cylinder r = a and on the symmetry plane; φ = 0 on the antisymmetry plane.)

The boundary condition ∂φ/∂n = 0 is the natural boundary condition (the condition that results if the boundary is left free). Since φ is specified on x = 0, this problem is well-posed.
8.3 Free Surface Flows

Consider an inviscid, incompressible fluid in an irrotational flow field with a free surface, as shown in Fig. 70.

Figure 70: The Free Surface Problem. (η = deflection of the disturbed free surface from the undisturbed free surface; d = depth; g = acceleration due to gravity.)

The equations of motion and continuity reduce to

  ∇²φ = 0,   (8.22)

where φ is the velocity potential, and the velocity is

  v = ∇φ.   (8.23)

The pressure p can be determined from the time-dependent Bernoulli equation

  −∂φ/∂t = p/ρ + ½ v² + g y,   (8.24)

where ρ is the fluid density, g is the acceleration due to gravity, and y is the vertical coordinate. If we let η denote the deflection of the free surface, the vertical velocity on the free surface is

  ∂φ/∂y = ∂η/∂t on y = 0.   (8.25)

If we assume small wave height (i.e., η is small compared to the depth d), the velocity v on the free surface is also small, and we can ignore the velocity term in Eq. 8.24. If we also take the pressure p = 0 on the free surface, Bernoulli's equation implies

  ∂φ/∂t + g η = 0 on y = 0.   (8.26)

This equation can be viewed as an equation for the surface elevation η given φ. We can then eliminate η from the last two equations by differentiating Eq. 8.26:

  0 = ∂²φ/∂t² + g ∂η/∂t = ∂²φ/∂t² + g ∂φ/∂y.   (8.27)

Hence, on the free surface y = 0,

  ∂φ/∂y = −(1/g) ∂²φ/∂t².   (8.28)

This equation is referred to as the linearized free surface boundary condition.
8.4 Use of Complex Numbers and Phasors in Wave Problems

The wave maker problem considered in the next section will involve a forcing function which is sinusoidal in time (i.e., time-harmonic). Consider a sine wave φ(t) of amplitude Â, circular frequency ω, and phase θ:

  φ(t) = Â cos(ωt + θ),   (8.29)

where all quantities in this equation are real, and Â can be taken as positive. It is common in engineering analysis to represent time-harmonic signals using complex numbers, since amplitude and phase information can be included in a single complex number. Using complex notation,

  φ(t) = Re[Â e^{i(ωt+θ)}] = Re[Â e^{iθ} e^{iωt}],   (8.30)

where i = √(−1). If we define the complex amplitude

  A = Â e^{iθ},   (8.31)

then

  φ(t) = Re[A e^{iωt}].   (8.32)

It is common practice, when dealing with these sinusoidal functions, to drop the "Re" and write

  φ(t) = A e^{iωt},   (8.33)

with the agreement that it is only the real part which is of interest. The complex amplitude A is thus a complex number which embodies both the amplitude and the phase of the sinusoidal signal: the magnitude of the complex amplitude is given by

  |A| = |Â e^{iθ}| = Â |e^{iθ}| = Â,   (8.34)

which is the actual amplitude, and

  arg(A) = θ,   (8.35)

which is the actual phase angle, as shown in Fig. 71.

Figure 71: The Complex Amplitude.

The directed vector in the complex plane is called a phasor by electrical engineers. Such an approach is used with A.C. circuits, steady-state acoustics, and mechanical vibrations. Two sinusoids of the same frequency add just like vectors in geometry. For example, consider the sum of the two sinusoids Â_1 cos(ωt + θ_1) and Â_2 cos(ωt + θ_2).
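This complex-amplitude bookkeeping is easy to verify numerically. A minimal sketch (the amplitudes, phases, and frequency are arbitrary test values):

```python
# Numerical check of the phasor idea: two sinusoids of the same frequency
# add like vectors in the complex plane.  A = A_hat*exp(i*theta) is the
# complex amplitude, and the physical signal is Re(A * exp(i*omega*t)).

import cmath
import math

omega = 3.0
A1 = 2.0 * cmath.exp(1j * 0.4)    # complex amplitude of signal 1
A2 = 1.5 * cmath.exp(1j * -1.1)   # complex amplitude of signal 2
A_sum = A1 + A2                   # phasor addition

ok = True
for k in range(50):
    t = 0.13 * k
    direct = 2.0 * math.cos(omega * t + 0.4) + 1.5 * math.cos(omega * t - 1.1)
    phasor = (A_sum * cmath.exp(1j * omega * t)).real
    ok = ok and abs(direct - phasor) < 1e-12
print(ok)                              # True
print(abs(A_sum), cmath.phase(A_sum))  # amplitude and phase of the sum
```

The single complex number `A_sum` carries both the amplitude and the phase of the summed signal, which is exactly why the notation is convenient.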
In complex notation, the sum of the two sinusoids is the sum of two phasors A_1 e^{iωt} + A_2 e^{iωt}, where A_1 and A_2 are the complex amplitudes

  A_1 = Â_1 e^{iθ_1},  A_2 = Â_2 e^{iθ_2}.   (8.36)

In terms of complex arithmetic,

  A_1 e^{iωt} + A_2 e^{iωt} = (A_1 + A_2) e^{iωt}.   (8.37)

This addition, referred to as phasor addition, is shown in Fig. 72.

Figure 72: Phasor Addition.

8.5 2D Wave Maker

Consider a semi-infinite body of water with a wall oscillating in simple harmonic motion, as shown in Fig. 73.

Figure 73: 2D Wave Maker: Time Domain. (Linearized free surface condition ∂φ/∂n = −(1/g)φ̈ on y = 0; ∇²φ = 0 in the fluid; specified wall velocity ∂φ/∂n = v_n = v_0 cos ωt on x = 0; rigid bottom ∂φ/∂n = 0 on y = −d; depth d.)

In the time domain, the problem is as follows:
  ∇²φ = 0,
  ∂φ/∂n = v_0 cos ωt on x = 0,
  ∂φ/∂n = 0 on y = −d,
  ∂φ/∂n = −(1/g) φ̈ on y = 0,   (8.38)

where dots denote differentiation with respect to time, the excitation frequency ω is specified, and v_0 is real. The forcing function is the oscillating wall. The solution of this problem is a function φ(x, y, t).

We first write Eq. 8.38b in the form

  ∂φ/∂n = v_0 cos ωt = Re[v_0 e^{iωt}].   (8.39)

We therefore look for solutions in the form

  φ(x, y, t) = φ_0(x, y) e^{iωt},   (8.40)

where φ_0(x, y) is the complex amplitude. Eq. 8.38 then becomes

  ∇²φ_0 = 0,
  ∂φ_0/∂n = v_0 on x = 0,
  ∂φ_0/∂n = 0 on y = −d,
  ∂φ_0/∂n = (ω²/g) φ_0 on y = 0,   (8.41)

and ω is the fixed excitation frequency.

For a finite element solution to the 2D wave maker problem, we truncate the domain "sufficiently far" from the wall, and an additional boundary condition is needed for large x at the location where the model is terminated. It can be shown that, for large x,

  ∂φ_0/∂x = −iα φ_0,   (8.42)

where i = √(−1), and α is the positive solution of

  ω² = gα tanh(αd).   (8.43)

The graphical solution of this equation is shown in Fig. 74. Eq. 8.42 is imposed at the truncation as a radiation boundary condition, which accounts approximately for the fact that the fluid extends to infinity.
Figure 74: Graphical Solution of ω²/α = g tanh(αd).

Note the similarity between this radiation condition and the nonreflecting boundary condition for the Helmholtz equation, Eq. 3.52. If the radiation boundary is located at x = W, the boundary value problem in the frequency domain becomes

  ∇²φ_0 = 0,
  ∂φ_0/∂n = v_0 on x = 0 (oscillating wall),
  ∂φ_0/∂n = 0 on y = −d (rigid bottom),
  ∂φ_0/∂n = (ω²/g) φ_0 on y = 0 (linearized free surface condition),
  ∂φ_0/∂n = −iα φ_0 on x = W (radiation condition),   (8.44)

as summarized in Fig. 75.

Figure 75: 2D Wave Maker: Frequency Domain.
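The graphical solution of Fig. 74 can equally well be obtained numerically, since the dispersion relation has a single positive root. A minimal bisection sketch (the values of ω and d are arbitrary):

```python
# The radiation-condition wavenumber alpha is the positive root of the
# dispersion relation  omega^2 = g * alpha * tanh(alpha * d)  (Eq. 8.43),
# found here by simple bisection.

import math

def wavenumber(omega, d, g=9.81):
    f = lambda a: g * a * math.tanh(a * d) - omega * omega
    lo, hi = 1e-12, 1.0
    while f(hi) < 0.0:          # expand until the root is bracketed
        hi *= 2.0
    for _ in range(200):        # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = wavenumber(omega=2.0, d=5.0)
# In deep water (alpha*d large) the relation tends to omega^2 = g*alpha.
print(abs(9.81 * alpha * math.tanh(alpha * 5.0) - 4.0) < 1e-9)  # True
```

Bisection is used only for robustness; any scalar root finder works, since g·α·tanh(αd) is monotonically increasing in α.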
8.6 Linear Triangle Matrices for 2D Wave Maker Problem

The boundary value problem defined in Eq. 8.44 has two boundaries where ∂φ/∂n is specified and two boundaries on which ∂φ/∂n is proportional to the unknown φ. Thus, this boundary value problem is a special case of Eq. 7.55 (p. 73), where ĝ = −v_0 on the oscillating wall, h = −ω²/g on the free surface, and h = iα on the radiation boundary. Note that the function g appearing in Eq. 7.55 is not the acceleration due to gravity appearing in the formulation of the free surface flow problem. The finite element formulas derived in §7.5 are therefore all applicable. The matrix system for the wave maker problem is

  (K + H)φ = F,   (8.45)

where, for each element, the contributions to K, H, and F are given by Eqs. 7.80–7.82. From Eq. 7.91,

  K_ij = (b_i b_j + c_i c_j)/(4A),   (8.46)

where i and j each have the range 123,

  b_i = y_j − y_k,  c_i = x_k − x_j,   (8.47)

and the symbols ijk refer to the three nodes 123 in a cyclic permutation (for example, if j = 1, then k = 2 and i = 3). Thus,

  K_11 = (b_1² + c_1²)/(4A),   (8.48)
  K_22 = (b_2² + c_2²)/(4A),   (8.49)
  K_33 = (b_3² + c_3²)/(4A),   (8.50)
  K_12 = K_21 = (b_1 b_2 + c_1 c_2)/(4A),   (8.51)
  K_13 = K_31 = (b_1 b_3 + c_1 c_3)/(4A),   (8.52)
  K_23 = K_32 = (b_2 b_3 + c_2 c_3)/(4A).   (8.53)

H is calculated using Eq. 7.111. Thus, for triangular elements adjacent to the free surface,

  H_FS = −(ω²L/6g) [2 1; 1 2]   (8.54)

for each free surface edge, and, for elements adjacent to the radiation boundary,

  H_RB = (iαL/6) [2 1; 1 2]   (8.55)

for each radiation boundary edge. Note that, since H is purely imaginary on the radiation boundary, the coefficient matrix K + H in Eq. 8.45 is complex, and the solution φ is complex. The solution of free surface problems thus requires either the use of complex arithmetic or separating the matrix system into real and imaginary parts.

The right-hand side F is calculated using Eq. 7.107. Thus,

  F_1 = F_2 = v_0 L / 2   (8.56)

for two points on an element edge on the oscillating wall.
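These edge contributions are simple enough to tabulate in a few lines. A minimal sketch (the values of ω, α, v_0, and L are arbitrary):

```python
# The 2-node edge contributions of this section, for an edge of length L:
#   free surface: H_FS = -(omega^2/g) * (L/6) * [[2,1],[1,2]]   (Eq. 8.54)
#   radiation:    H_RB =  (i*alpha)   * (L/6) * [[2,1],[1,2]]   (Eq. 8.55)
#   oscillating wall load: F1 = F2 = v0*L/2                     (Eq. 8.56)

def edge_matrix(coef, L):
    s = coef * L / 6.0
    return [[2.0 * s, s], [s, 2.0 * s]]

omega, g, alpha, v0, L = 2.0, 9.81, 0.5, 1.0, 0.25
H_FS = edge_matrix(-omega**2 / g, L)   # real and negative: "mass-like"
H_RB = edge_matrix(1j * alpha, L)      # purely imaginary: "damping-like"
F_wall = [v0 * L / 2, v0 * L / 2]
print(H_FS[0][0] < 0, H_RB[0][0].real == 0.0, F_wall)
# True True [0.125, 0.125]
```

Since H_RB is purely imaginary, adding it to K makes the assembled coefficient matrix complex, which is why the solve must use complex arithmetic (or a split into real and imaginary parts).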
The solution vector φ obtained from Eq. 8.45 is the complex amplitude of the velocity potential. The time-dependent velocity potential is given by

  Re[φ e^{iωt}] = Re[(φ_R + iφ_I)(cos ωt + i sin ωt)] = φ_R cos ωt − φ_I sin ωt,   (8.57)

where φ_R and φ_I are the real and imaginary parts of the complex amplitude. It is this function which is displayed in computer animations of the time-dependent response of the velocity potential.

8.7 Mechanical Analogy for the Free Surface Problem

Consider the single DOF mass-spring-dashpot system shown in Fig. 76.

Figure 76: Single DOF Mass-Spring-Dashpot System.

The application of Newton's second law of motion (F = ma) to this system yields the differential equation of motion

  m ü + c u̇ + k u = f(t),   (8.58)

where m is mass, c is the viscous dashpot constant, k is the spring stiffness, u is the displacement from the equilibrium, f is the applied force, and dots denote differentiation with respect to the time t. For a sinusoidal force,

  f(t) = f_0 e^{iωt},   (8.59)

where ω is the excitation frequency, and f_0 is the complex amplitude of the force. The displacement solution is also sinusoidal:

  u(t) = u_0 e^{iωt},   (8.60)

where u_0 is the complex amplitude of the displacement response. If we substitute the last two equations into the differential equation, we obtain

  −ω² m u_0 e^{iωt} + iωc u_0 e^{iωt} + k u_0 e^{iωt} = f_0 e^{iωt}   (8.61)

or

  (−ω² m + iωc + k) u_0 = f_0.   (8.62)

We make two observations from this last equation:

1. The inertia force is proportional to ω² and 180° out of phase with respect to the elastic force.
2. The viscous damping force is proportional to ω and leads the elastic force by 90°.

Thus, in the free surface problem, we could interpret the free surface matrix H_FS as an inertial effect (in a mechanical analogy) with a "surface mass matrix" M given by

  M = H_FS / (−ω²) = (L/6g) [2 1; 1 2],   (8.63)

where the diagonal "masses" are positive. Similarly, we could interpret the radiation boundary matrix H_RB as a "damping" effect with the "boundary damping matrix" B given by

  B = H_RB / (iω) = (αL/6ω) [2 1; 1 2],   (8.64)

where the diagonal dampers are positive. This "damping" matrix is frequency-dependent.

The free surface problem is a degenerate equivalent to the mechanical problem, since the mass M occurs only on the free surface rather than at every point in the domain. This degeneracy is a consequence of the incompressibility of the ideal fluid; a compressible fluid (such as occurs in acoustics) has the analogous mass effects everywhere. Thus, the ideal fluid behaves like a degenerate mechanical system, because the ideal fluid possesses the counterpart to the elastic forces but not the inertial forces.
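The single-DOF analogy of Eq. 8.62 is directly computable in complex arithmetic. A minimal sketch (the parameter values are arbitrary):

```python
# Frequency response of the single-DOF system of Eq. 8.62:
#   (-omega^2*m + i*omega*c + k) * u0 = f0,
# solved directly for the complex displacement amplitude u0.

import cmath
import math

def response(m, c, k, f0, omega):
    return f0 / (-omega**2 * m + 1j * omega * c + k)

m, c, k = 1.0, 0.4, 25.0               # natural frequency sqrt(k/m) = 5 rad/s
u_static = response(m, c, k, 1.0, 0.0)
u_res = response(m, c, k, 1.0, math.sqrt(k / m))
# At resonance the inertia and elastic terms cancel and only the damping
# term survives: u0 = f0/(i*omega*c), which lags the force by 90 degrees.
print(abs(u_static))                               # 0.04  (= f0/k)
print(round(math.degrees(cmath.phase(u_res)), 1))  # -90.0
```

This makes both observations above concrete: the inertia term enters with the opposite sign of the elastic term (they cancel at ω = √(k/m)), and the damping term carries the factor i, i.e., a 90° phase shift.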
Bibliography

[1] K.J. Bathe, Finite Element Procedures, Prentice-Hall, Inc., Englewood Cliffs, NJ, 1996.
[2] J.W. Brown and R.V. Churchill, Fourier Series and Boundary Value Problems, McGraw-Hill, Inc., New York, seventh edition, 2006.
[3] G.R. Buchanan, Schaum's Outline of Theory and Problems of Finite Element Analysis, Schaum's Outline Series, McGraw-Hill, Inc., New York, 1995.
[4] D.S. Burnett, Finite Element Analysis: From Concepts to Applications, Addison-Wesley Publishing Company, Inc., Reading, Mass., 1987.
[5] R.W. Clough and J. Penzien, Dynamics of Structures, McGraw-Hill, Inc., New York, second edition, 1993.
[6] R.D. Cook, D.S. Malkus, M.E. Plesha, and R.J. Witt, Concepts and Applications of Finite Element Analysis, John Wiley and Sons, Inc., New York, fourth edition, 2001.
[7] K.H. Huebner, D.L. Dewhirst, D.E. Smith, and T.G. Byrom, The Finite Element Method for Engineers, John Wiley and Sons, Inc., New York, fourth edition, 2001.
[8] J.H. Mathews, Numerical Methods for Mathematics, Science, and Engineering, Prentice-Hall, Inc., Englewood Cliffs, NJ, second edition, 1992.
[9] K.W. Morton and D.F. Mayers, Numerical Solution of Partial Differential Equations, Cambridge University Press, Cambridge, 1994.
[10] J.S. Przemieniecki, Theory of Matrix Structural Analysis, McGraw-Hill, Inc., New York, 1968 (also Dover, New York, 1985).
[11] J.N. Reddy, An Introduction to the Finite Element Method, McGraw-Hill, Inc., New York, third edition, 2006.
[12] F. Scheid, Schaum's Outline of Theory and Problems of Numerical Analysis, Schaum's Outline Series, McGraw-Hill, Inc., New York, second edition, 1989.
[13] G.D. Smith, Numerical Solution of Partial Differential Equations: Finite Difference Methods, Oxford University Press, Oxford, third edition, 1985.
[14] I.M. Smith and D.V. Griffiths, Programming the Finite Element Method, John Wiley and Sons, Inc., New York, fourth edition, 2004.
[15] M.R. Spiegel, Schaum's Outline of Theory and Problems of Advanced Mathematics for Engineers and Scientists, Schaum's Outline Series, McGraw-Hill, Inc., New York, 1971.
[16] J.S. Vandergraft, Introduction to Numerical Calculations, Academic Press, Inc., New York, second edition, 1983.
[17] O.C. Zienkiewicz, R.L. Taylor, and J.Z. Zhu, The Finite Element Method: Its Basis and Fundamentals, Elsevier Butterworth-Heinemann, Oxford, England, sixth edition, 2005.
[18] O.C. Zienkiewicz and R.L. Taylor, The Finite Element Method for Solid and Structural Mechanics, Elsevier Butterworth-Heinemann, Oxford, England, sixth edition, 2005.
[19] O.C. Zienkiewicz, R.L. Taylor, and P. Nithiarasu, The Finite Element Method for Fluid Dynamics, Elsevier Butterworth-Heinemann, Oxford, England, sixth edition, 2005.
Index

acoustics
backsolving
banded matrix
beams in flexure
Bernoulli equation
big O notation
boundary conditions
    essential
    natural
    Neumann
    nonreflecting
    radiation
boundary value problem(s)
brachistochrone
calculus of variations
CFL condition
change of basis
classification
compatibility
complex amplitude
complex numbers
conic sections
constant strain triangle (CST)
constrained minimization
constraints
continuum problems, direct approach
Courant condition
Crank-Nicolson method
cyclic permutation
cycloid
d'Alembert solution
del operator
derivative
determinant of matrix
discriminant
displacement vector
divergence theorem
domain of dependence
domain of influence
electrostatics
equation(s)
    acoustics
    Bernoulli
    elliptic
    Euler-Lagrange
    heat
    Helmholtz
    hyperbolic
    Laplace
    mathematical physics
    nondimensional
    ordinary differential
    parabolic
    partial differential
    Poisson
    wave
error
    global
    local
    rounding
    truncation
essential boundary condition
Euler beam
Euler-Lagrange equation
Euler's method
explicit method
finite difference(s)
    backward
    central
    forward
    stencil
Fourier's law
free surface flows
functional
    interpretation
Galerkin's method
Gaussian elimination
gravitational potential
heat conduction
Helmholtz equation
Hooke's law
implicit method
incompressible fluid flow
index notation
initial conditions
initial value problem(s)
isotropic
Kronecker delta
Laplacian operator
large spring method
Leibnitz's rule
magnetostatics
mass matrix
mass-spring system
matrix
    assembly
    partitioning
mechanical analogy
membrane
method of weighted residuals
natural boundary condition
Neumann boundary condition
    finite differences
Newton's second law
nonconforming element
nondimensional form
orthogonal coordinate transformation
orthogonal matrix
orthonormal basis
phantom points
phasors
pin-jointed frame
pivoting
Poisson equation
positive definite matrix
potential energy
potential fluid flow
radiation boundary condition
relaxation
rod element
rotation matrix
rounding error
Runge-Kutta methods
separation of variables
shape functions
shooting methods
sparse system of equations
speed of propagation
stable solution
stencil
stiffness matrix
strain energy
string
summation convention
symmetry
Taylor's theorem
tensors
torsion
transverse shear
triangular element
tridiagonal system
truncation error
truss structure
unit vectors
unstable solution
velocity potential
vibrations
    bar
virtual work
warping function
wave equation
wave maker problem
wave speed